# Decoding Imagined Auditory Pitch Phenomena with an Autoencoder Based Temporal Convolutional Architecture

Sean Paulsen, Lloyd May, Michael Casey

Published 2023-05-15. http://arxiv.org/abs/2305.08987v1
###### Abstract
Stimulus decoding of functional Magnetic Resonance Imaging (fMRI) data with machine learning models has provided new insights about neural representational spaces and task-related dynamics. However, the scarcity of labelled (task-related) fMRI data is a persistent obstacle, resulting in model-underfitting and poor generalization. In this work, we mitigated data poverty by extending a recent pattern-encoding strategy from the visual memory domain to our own domain of auditory pitch tasks, which to our knowledge had not been done. Specifically, extracting preliminary information about participants' neural activation dynamics from the _unlabelled_ fMRI data resulted in improved downstream classifier performance when decoding heard and imagined pitch. Our results demonstrate the benefits of leveraging unlabelled fMRI data against data poverty for decoding pitch-based tasks, and yield novel significant evidence for both separate and overlapping pathways of heard and imagined pitch processing, deepening our understanding of auditory cognitive neuroscience.
neuroimaging; neuroscience; auditory cognition; deep learning.
## I Introduction
### _Motivation_
Brain decoding is the problem of classifying the stimulus that evoked given brain activity. Music's well-defined structure and the wealth of previous results about the neural representation of that structure are thus an appealing foundation upon which to approach this problem. Our primary goal was to train a machine learning classification model to predict the pitch-class of a note (the relative position of the note within the key) given an input of brain activity evoked by that note. We hypothesized that such a classifier would achieve significant results for three tasks: trained and tested on neural activity when the note is actually heard (hereafter referred to as the "heard task"), the same when the note is only _imagined_ ("imagined task"), and most importantly, trained on neural activity when the notes are heard but evaluated on data when the notes are imagined ("cross-decoding task") to test for overlap between heard and imagined pathways. To our knowledge, the cross-decoding task had not been done before. Toward these ends, we obtained functional Magnetic Resonance Imaging (fMRI) data from musically trained participants while they both heard and imagined particular pitches. We further detail our scanning protocol in the Methods and Materials section.
Training machine learning models on such voxel data is challenging, though, primarily due to the scarcity of relevant and labelled data to be used for training, and our experiments were no exception. However, Firat et al. [5]'s work on visual memory brain decoding addressed this challenge of fMRI data poverty in a novel and effective way. More specifically, Firat et al. hypothesized that unlabelled fMRI data, which are normally deemed irrelevant and discarded, contain information about overall patterns of brain activity and can therefore be exploited in brain decoding classification tasks. Their architecture began with a sparse autoencoder [10] to perform unsupervised learning of neural activation patterns latent in unlabelled fMRI data. These patterns then served as filters in a temporal Convolutional Neural Network [12] to encode the labelled fMRI data into a non-linear, more expressive feature space. We refer to the inputs of this pipeline as "unencoded datasets" and the outputs as "encoded datasets" throughout this work. Thus, the encoded dataset is the result of filtering the task-dependent fMRI data by the patterns latent in task-independent data. Firat et al. then demonstrated improved performance of Multi-Voxel Pattern Analysis (MVPA) classifiers trained and tested on encoded datasets compared to unencoded datasets.
### _Our Approach_
In Section 2 of this paper, we expand on the architecture of Firat et al. by adapting their autoencoder-CNN pipeline from the visual domain to our novel auditory domain task of decoding imagined pitch. Section 3 presents our results, in which our encoded datasets are _essential_ for successful decoding of the imagined task, as well as first-of-their-kind significant results on the cross-decoding task. Section 4 discusses these results in the greater context of our goals and motivations; in particular, this work demonstrates for the first time, to the best of our knowledge, that temporal filtering of fMRI data for an auditory task not only improves the performance of MVPA classifiers, but can also reveal fundamental, learnable attributes of auditory imagery that would go undetected by machine learning models trained on unencoded datasets. Section 5 details our methods and materials: participant selection, fMRI scanning protocol, hardware for training models, and statistical methods for evaluating our final classifiers. Section 6 concludes this paper and explores future work.
## II Architecture Design
### _Neural Activation Pattern Training Data_
Each fMRI scan yielded a timeseries of 3-dimensional voxel data, where the value of each voxel represented the intensity of neural activity at that geographic location in the brain. We used the Python Multi-Variate Pattern Analysis (PyMVPA) [8] library to store and transform fMRI data throughout the experiment. When we imported a participant's fMRI data, PyMVPA flattened the 3D voxel data into a single spatial dimension by concatenating along two axes (during which all voxels are preserved), restricted to one of twenty selected Regions Of Interest (ROIs) at a time, and provided a mapping back to 3D space for that ROI. Thus, we began with a matrix \(VT\) of \(V\) voxels by \(T\) timesteps, where \(V\) depended on the ROI and \(T=1864\) was the same for all participants and ROIs.
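For concreteness, the following NumPy sketch shows this flatten-with-mapping step under our reading of the pipeline; the PyMVPA internals are abstracted away, and the mask and variable names are our own illustration.

```python
import numpy as np

def flatten_roi(bold_4d, roi_mask):
    """Flatten an (X, Y, Z, T) BOLD array into a (V, T) matrix for one ROI,
    keeping the voxel coordinates needed to map back to 3D space later."""
    coords = np.argwhere(roi_mask)   # (V, 3) coordinates of ROI voxels
    vt = bold_4d[roi_mask]           # (V, T): one row per preserved voxel
    return vt, coords

# Toy example: a 10x10x10 volume over T=1864 timesteps with a 50-voxel ROI.
rng = np.random.default_rng(0)
bold = rng.standard_normal((10, 10, 10, 1864))
mask = np.zeros((10, 10, 10), dtype=bool)
mask.ravel()[:50] = True
VT, coords = flatten_roi(bold, mask)  # VT.shape == (50, 1864)
```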
The Hemodynamic Response Function (HRF) in Figure 1 depicts the rise and fall of the intensity value of a voxel in response to a stimulus across 12 seconds. The time between images in our fMRI scans (TR) was 2 seconds, therefore the HRF would be observed across 6 timesteps in a given voxel. We thus expected any other latent activation patterns to occur across 6 timesteps as well. We therefore compiled our training data by sampling 1x6 windows of data from the matrix \(VT\). Collecting every possible such window would provide the largest set of training data, but we believed the extreme overlap in that case could cause unpredictable bias during training. Spacing the samples out by exactly 6 timesteps would remove overlap, but could induce a different bias with every sample beginning and ending where another sample begins and ends, possibly limiting the kinds of patterns we expose to the model during training. Sampling with a stride greater than 6, however, might unnecessarily reduce the total size of our training set. Therefore our method considered each possible 6-TR window, then added it to the training data with probability 1/6. This allowed us to sample windows of training data that can begin at any timestep across the entire scan, while balancing our desire to both reduce overlap and minimize reduction of the training set. We further discarded any sample overlapping with labelled timesteps to avoid any possibility of downstream circularity. In summary, we collected 6-TR windows of unlabelled fMRI data, for each participant, for each ROI, to learn neural activation patterns latent in that participant in that ROI.
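A minimal sketch of this stochastic window sampling, assuming a boolean mask `labelled` that marks the labelled timesteps (the mask and names are our own illustration):

```python
import numpy as np

def sample_windows(vt, labelled, win=6, p=1/6, seed=0):
    """Sample 1x6 training windows from VT (voxels x timesteps): keep each
    candidate window with probability p, and discard any window that
    overlaps a labelled TR to avoid downstream circularity."""
    rng = np.random.default_rng(seed)
    V, T = vt.shape
    windows = []
    for t in range(T - win + 1):
        if labelled[t:t + win].any():   # skip labelled timesteps entirely
            continue
        keep = rng.random(V) < p        # independent 1/6 coin flip per voxel
        windows.append(vt[keep, t:t + win])
    return np.concatenate(windows, axis=0)  # (n_samples, 6)
```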
### _Learning the Patterns_
We implemented a sparse autoencoder model to perform unsupervised learning of the latent temporal neural activation patterns among each region's voxels without the need for hand-crafted features. The sparse autoencoder was implemented with the Keras [3] library in Python. The model input was encoded by a dense layer with sparsity enforced by an "activity regularizer" parameter \(\rho=.001\), hereafter referred to as the "sparsity constraint," and then rectified linear unit (ReLU) activation functions were applied to obtain the encoded version of the input. We refer to the preceding steps as the "encoding layer" throughout this paper. Each encoding layer had fourteen neurons in its dense layer, obtained via grid search on \(\{8,10,12,14,16,18\}\). Each neuron's set of trained weights would then serve as a filter obtaining the encoded dataset. The decoding layer was also dense, with six neurons (recall that this layer attempts to reproduce the six-dimensional input) and ReLU activations. The model was optimized via backpropagation to minimize the mean squared error between the output of the decoding layer and the input using the "adamax" optimizer [11].
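A minimal Keras sketch of the autoencoder as described; the layer sizes, ReLU activations, MSE loss, and adamax optimizer follow the text, while the use of an L1 activity regularizer for the sparsity constraint is our assumption.

```python
from tensorflow import keras
from tensorflow.keras import layers, regularizers

def build_sparse_autoencoder(input_dim=6, code_dim=14, rho=0.001):
    """Sparse autoencoder: a 14-neuron encoding layer with an activity
    regularizer (the sparsity constraint) and a 6-neuron decoding layer."""
    inputs = keras.Input(shape=(input_dim,))
    encoded = layers.Dense(
        code_dim,
        activation="relu",
        activity_regularizer=regularizers.l1(rho),  # sparsity constraint
    )(inputs)
    decoded = layers.Dense(input_dim, activation="relu")(encoded)
    model = keras.Model(inputs, decoded)
    model.compile(optimizer="adamax", loss="mse")   # MSE reconstruction loss
    return model

# After training, the encoding layer's weight matrix supplies the fourteen
# 6-TR temporal filters used downstream.
ae = build_sparse_autoencoder()
filters = ae.layers[1].get_weights()[0].T           # shape (14, 6)
```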
### _Filtering with Temporal Convolution_
For each combination of participant and ROI, we extracted the set of learned neural activation patterns from the corresponding trained encoding layer and used them as filters in a tCNN to obtain the corresponding encoded dataset. Our tCNN pipeline is depicted in Figure 2. More specifically, we performed a 1D full convolution on the \(VT\) matrix along its time axis with each of the fourteen trained neurons as the temporal filter. This resulted in fourteen response matrices for each combination of participant and ROI. Note that a full convolution means each response matrix had the same dimensions as \(VT\).
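A sketch of this filtering step in NumPy; we use `mode='same'` so each response matrix keeps the dimensions of \(VT\), as the text states, while the exact boundary handling is our assumption.

```python
import numpy as np

def temporal_filter(vt, filters):
    """Convolve every voxel timeseries with every learned 6-TR filter.

    vt: (V, T) matrix; filters: (F, 6) trained encoder weights.
    Returns an (F, V, T) stack of response matrices."""
    F = filters.shape[0]
    responses = np.empty((F,) + vt.shape)
    for f in range(F):
        for v in range(vt.shape[0]):
            # 'same' keeps length T, matching the stated response size
            responses[f, v] = np.convolve(vt[v], filters[f], mode="same")
    return responses
```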
We expected the voxels to exhibit locally correlated activations [13], so we employed max pooling to extract spatial information from the filtered data in our response matrices. Recall, though, that \(VT\) is the result of flattening the 3D voxel space to 1D, and therefore voxels next to each other in \(VT\) are not necessarily next to each other geographically in the brain. Firat et al. [5] did not detail their solution to this problem of 3D max-pooling with 1D data, so we devised our own method. Recall that PyMVPA provided a mapping back to the 3D voxel space of unencoded voxel values for each ROI, so we directly backfilled the original 3D space with the values of each response matrix.
Fig. 1: Hemodynamic Response Function (HRF) plotted as a 6-TR timeseries. [14].
For 3-dimensional spatial max-pooling, we proposed a pooling cube of tunable dimensions [\(c_{1}\),\(c_{2}\),\(c_{3}\)] moving exhaustively throughout each 3D space with no overlap, storing the maximum value within the cube at each step in a list. The jagged 3D voxel structure of each ROI was padded on all sides with zeroes due to the way PyMVPA maps back from 1D to 3D, so these zeroes needed to be accounted for. We certainly did not want to record a zero as a max-pooled value when the pooling cube is full of these padding zeroes, and more subtly we did not want to record voxel values on the jagged fringes as max-pooled values when they were being compared almost entirely to padding zeroes. Our solution was a tunable parameter \(z_{0}\) which we called "zero threshold". The maximum value within the cube was only recorded as a max-pooled value when the proportion of non-zero values within the pooling cube exceeded \(z_{0}\). Our [\(2\), \(2\), \(2\)] pooling cube and zero threshold of 0.6 were obtained via grid search.
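A minimal NumPy sketch of this zero-thresholded pooling, under our reading of the description; `vol` is assumed to be a single backfilled 3D response volume at one timestep.

```python
import numpy as np

def maxpool3d_zero_threshold(vol, cube=(2, 2, 2), z0=0.6):
    """Non-overlapping 3D max pooling that skips cubes dominated by the
    zero padding: a cube's max is recorded only when the proportion of
    non-zero values inside it exceeds the zero threshold z0."""
    cx, cy, cz = cube
    pooled = []
    for x in range(0, vol.shape[0] - cx + 1, cx):
        for y in range(0, vol.shape[1] - cy + 1, cy):
            for z in range(0, vol.shape[2] - cz + 1, cz):
                block = vol[x:x + cx, y:y + cy, z:z + cz]
                if np.count_nonzero(block) / block.size > z0:
                    pooled.append(block.max())
    return np.array(pooled)
```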
We performed our method of 3D max-pooling on each timestep for each of the response matrices, applied hyperbolic tangent to each list of max-pooled values, and finally concatenated the lists for each timestep. The result of the concatenation was the encoded dataset for that participant and ROI. A repository of our code is available upon request.
### _Pitch Decoding Classifiers_
For each participant and ROI, we partitioned the labelled fMRI data by whether the corresponding pitch was heard or imagined. The heard samples were split further in half, with each half serving in turn as training data and testing data for an MVPA classifier. We stored the trained classifiers' predictions on the respective test sets with their corresponding pitch-class labels. Our analysis of classifier performance on the heard task was performed on the union of the two halves of test set predictions for each participant and ROI. The imagined task was evaluated similarly. For the cross-decoding task, we trained the classifier on all heard data, then predicted the labels of all imagined data. We calculated group level significance for each task and ROI using a t-test between per-participant prediction mean accuracies and null decoding model mean accuracies, detailed further in the Methods and Materials section.
## III Results
### _Temporal Filter Results_
Figure 3 shows twenty learned temporal filters (i.e., trained neurons) sampled uniformly at random across the encoding layers of all participants and ROIs. Six weights connect each such neuron to the input layer, one for each timestep in the input, so we plotted the raw values of each sampled neuron's weights as a timeseries. This allows us to visually evaluate the learned filters as patterns of neural activity. Observe that several of these patterns are good approximations of the HRF, which we expected most of the autoencoders to learn. Note further that none of the patterns are dominated by a single weight, which is to say that the models were not biased toward any particular timestep in the input data. This was the intent of our careful creation of each autoencoder's training data.
### _Brain Decoding Results_
Table I contains the results of our pitch decoding experiments. We evaluated the group-level statistical significance of the multivariate classifiers' ability to outperform chance in each of our regions of interest. The region of interest is given in the first column. The second column indicates the task, as explained above. The next two columns give the accuracy and False Discovery Rate (FDR)-corrected p-values when the classifiers were trained and evaluated with their respective encoded dataset, and the last two columns give the same information on the unencoded dataset. Observe one of our critical results, that thirteen of the fifteen successful regions _required_ the encoded dataset to obtain statistical significance. Eleven of the fifteen significant results were for the imagined task, and indeed _all_ of these regions required the encoded dataset for significance.
## IV Discussion
### _Architecture Discussion_
Our first goal was to learn auditory neural activation patterns latent in 6-TR windows of unlabelled fMRI data with sparse autoencoders. We took care to avoid subtle biases when we collected our training data for the autoencoders by minimizing the overlap of the samples while allowing for the possibility of a sample to begin at any timestep in the scan. We plotted the weights of twenty uniformly randomly sampled encoder-layer neurons as timeseries to visualize the neural activation patterns that those neurons represented. These visualizations reassured our efforts in two ways. First, several of them are good approximations of the HRF, which we expected to be learned by one of the neurons in most of the autoencoders. Second, none of the patterns are dominated by a single timestep, and the peaks of activity are fairly well distributed across the timesteps, which was the intent of our training data collection method. These considerations, along with the success of our brain decoding classifiers, provide evidence that each neuron learned a latent auditory neural activation pattern, accomplishing our first goal.

Fig. 2: Our tCNN pipeline from voxel space to the encoded dataset. The filters are the neurons extracted from each trained autoencoder and represent neural activation patterns.
Our second goal was to generate a collection of encoded datasets by transforming the unencoded voxel data \(VT\) in terms of the neural activation patterns learned by each autoencoder's encoding layer. Thus, the final step of our architecture was a modified tCNN. We used each of the learned activation patterns as temporal filters by convolving them with their respective \(VT\) along the time axis and applying our own method of 3D max pooling. Concatenating the pooled matrices for each participant and ROI yielded our encoded datasets, thus achieving our second goal. Note that the final dimension after concatenating was dependent on the size of the 3D pooling cube and the number of filters. In our experiments the encoded datasets' dimensions ended up being roughly equal to the dimension of their respective unencoded dataset. However, one could increase the size of the pooling cube or learn fewer temporal filters if the dimensionality were a burden on computing. That would, of course, be a tradeoff with performance, but it is nevertheless valuable to have a mechanism for dimension reduction available in this pipeline.

Fig. 3: Learned temporal filters, sampled uniformly at random across all sparse autoencoders. Each consists of six weight values, one for each timestep. The HRF appears to have been learned by several of the selected neurons.
### _Brain Decoding Discussion_
Our third goal was to train a machine learning classifier to predict the pitch-class labels of heard and imagined pitches, trained and tested on fMRI data of twenty selected regions of interest. We hypothesized that such classifiers would outperform chance with statistical significance, and that the classifiers would achieve higher accuracy when trained on encoded datasets versus the unencoded datasets. We used the PyMVPA library to train multi-class Support Vector Machines (SVMs) with linear kernels on each of the encoded datasets and each of the unencoded datasets. Each classifier's accuracy was calculated on a held out test set, and the accuracies were averaged across participants for each ROI. Finally we calculated the group-level significance of the accuracies and controlled the FDR by correcting our p-values for multiple comparisons. Further details are in the Methods and Materials section.
As shown in Table I, the statistical significance of outperforming chance relied almost entirely on the encoded datasets. For the imagined task the classifiers did not obtain significant results in any ROIs using _unencoded_ data. Indeed, training on the encoded datasets did not merely nudge almost-significant p-values past the threshold, but quite the opposite. Our encoded datasets enabled the classifiers to reduce their p-values by more than an order of magnitude in most regions in Table I, and _two_ orders in some, indicating that the encoded dataset reveals fundamental, learnable attributes of auditory imagery that would otherwise remain undetected by machine learning models trained on unencoded data. Thus, we achieved our third goal and obtained statistically significant evidence of our hypothesis in the case of the imagined task. Moreover, the significant results on the cross-decoding task provide a critical novel result: statistically significant evidence of geographical overlap between heard and imagined sound.
Eleven of the fifteen significant results were achieved on the imagined pitch decoding task. This is explained by the greater cognitive involvement in imagining versus hearing sound. That is, imagining sound is a more involved activity than listening, evoking stronger, wider signals that are easier for the autoencoder to detect and learn.
The heard and cross-decoding tasks both achieved two significant results, one each on the encoded and unencoded datasets. In both cases of significant unencoded datasets, the p-value for the respective encoded dataset was at least an order of magnitude worse. For the heard task, the two regions are near each other: Heschl's Gyrus and the Superior Temporal Sulcus are both auditory cortex areas in the superior temporal lobe. Therefore, while the inconsistency of the encoded dataset on the heard task requires further study, the results on the heard task are geographically consistent. On the other hand, the significant regions on the cross-decoding task are in separate lobes and non-adjacent. The Right Rostral-Middle Frontal Gyrus is interesting because significant results were achieved on the cross-decoding task with the unencoded dataset with a p-value at least an order of magnitude better than any other region for that task and dataset. Further, for the heard and imagined tasks, the encoded dataset improved the p-values in this region. Thus, the significant result in the Right Rostral-Middle Frontal Gyrus is curious, piquing further study.
## V Methods and Materials
### _Participant Selection_
Participants possessed at least 8 years of formal music training or professional performance experience in Western tonal music, and they completed the Bucknell Auditory Imagery Scale (BAIS) [7] and the Bregman Musical Ability Rating Survey [9]. Twenty-three such participants passed the screening process and provided their written informed consent in accordance with the Institutional Review Board at Dartmouth College. Each subject was compensated $20 US upon completion of the scan.
All scanning used a 3.0 T Siemens MAGNETOM Prisma MRI scanner with a 32-channel head coil and Lumina button box with four colored push buttons. Each scan performed a T2* weighted single shot echoplanar (EPI) scanning sequence with a repetition time (TR) of 2 sec and 240mm field of view with 3mm voxels, yielding 80 voxel by 80 voxel images with 35 axial slices for a total of 224,000 voxels per volume. We used the fMRIPrep software [4] to perform motion correction, field unwarping, normalization, and bias field correction preprocessing, as well as brain extraction and ROI parcellation, on the raw T2* BOLD data.
### _fMRI Protocol_
Each participant's fMRI scan consisted of 8 runs of 21 musical trials. Each scan was randomly assigned either the key of E Major or F Major, which was not known by the participant. We designed each run to collect data for either the heard task or the imagined task, alternating from run to run. Each trial began with an arpeggio in the assigned key for the participant to internally establish a tonal context, followed by a cue-sequence of ascending notes in their assigned major scale. After a randomized time interval, the participant either heard the next ascending note in the scale, or was instructed to imagine the next ascending note, depending on the run. The following four seconds (2 TRs) of scanning collected from all trials constituted the labelled data for the heard and imagined tasks. Next, a probe tone was played, and the participant rated the probe tone's goodness of fit in the tonal context from 1 to 4. We excluded the data of any participant with at least 20% of their ratings missing, or whose ratings did not reflect internalization of the tonal hierarchy. Thus, we excluded the data of six of the twenty-three participants.
Previous literature on imagined and heard tonal pitch-classes directed us to twenty regions of interest in the frontal, temporal, and parietal lobes according to the Desikan-Killiany (D-K) atlas in Freesurfer [6]. The D-K ROIs are large cortical regions, reducing the burden of correcting for multiple comparisons
compared to a larger quantity of smaller regions. Further, the D-K ROIs are consistent with the scales of relevant previous literature. The full table of the ROI atlas indices, cortical labels, and corresponding Brodmann areas is available on request.
### _Autoencoder Training_
The autoencoders were trained for 30 epochs on Intel Xeon E5 processors (2.3, 2.6, or 3.2 GHz) on Dartmouth's Discovery High Performance Cluster, with an average training time of approximately 3 hours. 10% of the training data were held out as a validation set during training to prevent overfitting via early stopping. For each combination of participant and ROI, we trained ten autoencoders and kept the model with the lowest validation loss after 30 epochs. This was to avoid the rare but observed case where an autoencoder failed to find any minima during training.
### _MVPA Classifiers_
For each ROI, we partitioned the labelled fMRI data of each participant into two halves according to whether the pitches were heard or imagined. We then split the heard data in half, with each half serving in turn as training data and testing data for a multi-class SVM with linear kernels. We implemented the SVMs with the libSVM support vector machine library [2]. We then pooled the classifier's predictions on each of the two rounds of test data into a single set, along with their corresponding pitch-class labels. Our analysis of the heard task was performed on this collection of predictions and labels for each participant and region of interest. The imagined task was evaluated similarly. For the cross-decoding task, the classifier used all heard data for training, then predicted the labels of all imagined data. We calculated group level significance for each task using a t-test between per-participant prediction mean accuracies and null decoding model mean accuracies. We used Monte Carlo simulation to calculate the null models, repeating each classifier's training and testing 10,000 times with randomly permuted target labels and storing the mean overall accuracy. We corrected the group-level p-values for multiple comparisons using the method in Benjamini and Hochberg [1], which strictly controls the FDR of a family of hypothesis tests.
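A compact sketch of this evaluation pipeline; `train_and_score` is an assumed callable returning held-out accuracy, the paired form of the t-test is our assumption, and we show fewer permutations than the paper's 10,000 for brevity.

```python
import numpy as np
from scipy import stats

def null_mean_accuracy(train_and_score, X, y, n_perm=1000, seed=0):
    """Monte Carlo null model: retrain and test with permuted target
    labels, returning the mean accuracy over all permutations."""
    rng = np.random.default_rng(seed)
    return float(np.mean([train_and_score(X, rng.permutation(y))
                          for _ in range(n_perm)]))

def bh_adjust(pvals):
    """Benjamini-Hochberg FDR-adjusted p-values [1]."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    ranked = p[order] * m / np.arange(1, m + 1)
    adj = np.minimum.accumulate(ranked[::-1])[::-1]  # enforce monotonicity
    out = np.empty(m)
    out[order] = np.clip(adj, 0.0, 1.0)
    return out

# Group-level test for one ROI and task, given per-participant accuracies:
# t, p = stats.ttest_rel(real_accs, null_accs)
```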
## VI Conclusion and Future Work
In this work, we adapted the architecture and pipeline of Firat et al. [5] from the visual domain to the auditory domain. Latent neural activation patterns were learned from unlabelled fMRI data, which are normally discarded, in order to generate our encoded datasets, which improved the performance of downstream MVPA classifiers. On the task of decoding the pitch class of imagined sound from fMRI data, the encoded datasets enabled the classifiers to outperform chance with group-level statistical significance in eleven ROIs. This demonstrated for the first time, to the best of our knowledge, that exploiting unlabelled fMRI data to perform temporal filtering for an auditory task not only improves the performance of MVPA classifiers, but can also reveal fundamental, learnable attributes of auditory imagery that would go undetected by machine learning models trained on unencoded datasets. Further, the group-level classifier performance on the cross-decoding task in two ROIs provided our novel statistically significant evidence of geographical overlap between heard and imagined sound.
There are several immediate directions for future work. First is toward an end-to-end architecture for this task, rather than a disconnected training session to obtain the encoded datasets. Second is toward decoding/cross-decoding the other information in our fMRI protocol, such as the timbre (clarinet or trumpet) of the heard or imagined sound. Third is toward the generalization of our pipeline to other fMRI datasets with auditory tasks. Fourth is a deeper dive on the ROIs with significant cross-decoding results, as these results did not quite match our expectations.
# Neural Artistic Style Transfer with Conditional Adversarial Networks

P. N. Deelaka

Published 2023-02-08. http://arxiv.org/abs/2302.03875v1
###### Abstract
A neural artistic style transformation (NST) model can modify the appearance of a simple image by adding the style of a famous image. Even though the transformed images do not look precisely like artworks by the same artist of the respective style images, the generated images are appealing. Generally, a trained NST model specialises in a style, and a single image represents that style. However, generating an image under a new style is a tedious process, which includes full model training. In this paper, we present two methods that step toward the style image independent neural style transfer model. In other words, the trained model can generate a semantically accurate image for any content and style image input pair. Our novel contribution is a unidirectional-GAN model that ensures cyclic consistency through the model architecture itself. Furthermore, this leads to a much smaller model size and an efficient training and validation phase.
## 1 Introduction
Imposing a style on an image is one of the most laborious tasks in graphic design. Most of the time, this process is handled by a skilled graphic designer and can take hours to finish one image with good quality. Using a neural style transfer (NST) model like [1] is not popular among the computer graphics community for several reasons. Since each model is specialized for the style it was trained on, a simple application supporting several style transfers would incur significant storage costs, considering the size of a single NST model. Furthermore, NST models impose alias artifacts on the input image, making them less reliable. Our first goal in this paper is to develop an NST model that supports more than one transfer style. The second goal is to introduce a reliable NST model that imposes only general features related to a style.
Motivated by the Generative Adversarial Network (GAN) [2; 3; 4] based image-to-image translation and style transfer literature [5], we upgrade the convolutional neural network (CNN) based generator architecture into a conditional generative adversarial network (cGAN) based image synthesis process. Both proposed models have two independent discriminators to generate a content-based adversarial loss and a style-based adversarial loss. The concept introduced in [5] of understanding and controlling latent space to impose desirable features on generated images lays the foundation for the second model we introduce in this paper. Using a mapping network introduced in [5] for metric learning [6] makes the encoder learn style features, much as a Natural Language Processing semantic classification model generates feature embeddings for tokenized words.
CNN-based NST models introduced so far [1; 7; 5] train on thousands of content images against only a single image, which defines the transfer style of the trained model. The objective function of this supervised training was the weighted sum of content loss and style loss, where the style loss computes a norm over the correlation matrices of the style and input images. Images generated using the above conventional NST embed alias artifacts from the style image and change the color palette of the generated image significantly from the content image, as shown in Fig. 1. Since neural artistic style translation is, by definition, an unpaired image-to-image translation, NST is unfit for the Pix-to-Pix GAN [3] model. Even though the StyleGAN [5] model is designed for unpaired image-to-image translation between two pixel spaces, it uses the same set of features to generate the image from a separate pixel space.
In this paper, we explore GANs in NST where, just as in the StyleGAN model, an image is generated from a mixture of both input images' features. The proposed model extracts style features from the style image and object features from the content image, and generates a new image with objects from the content image painted in the style of the style image. For such a case, we propose a GAN model that can be used for NST and that supports unpaired image-to-image translation on independent input pixel spaces. Furthermore, the proposed model extracts and generates a different latent space for both the style and content images, and the pixel space of the generated image differs from both input images' pixel spaces.
We introduce a new GAN architecture with two discriminative headers in the first approach. In the second approach, we introduce an advancement of the first architecture that removes the need to train separate parameter spaces for the discriminators. In the resulting network architecture, the encoder sub-models in the generator act as the discriminative headers in the training phase, so generator training and discriminator training optimize the same parameter space.
Conditional adversarial network architecture has been extensively upgraded over the past few years to support various image synthesis tasks in the computer vision field. Even though there were improvements in CNN-based NST models such as [7; 5], they were based on CNN supervised learning as in [1]. The major contribution of this paper is to introduce a new cGAN architecture where the discriminator and the encoder sub-model of the generator share the same parameter space at training time. The generator model is thereby made more reliable and consistent in style transfer, as introduced in the CycleGAN paper [8].
## 2 Related Studies
A neural style transfer paper [1] by Leon et al. first introduced the concept of neural style transformation for image synthesis in the deep learning based computer vision field in 2015. Since then, it has been prevalent in research because of its mainstream appeal. In [5], the authors introduced an extension of [1] that makes the image synthesis process more efficient and faster in NST. Also, in [7], a color histogram matching algorithm is used on top of the model architecture of [1] to preserve the original lighting conditions of the input image.
In [9; 10; 12], semi-supervised learning (GAN) has been used to achieve more controllability in the style transfer process, produce clear and detailed images, and disentangle features [9] from both the content and style feature spaces. However, as shown in Table 1, most research studies in NST have been conducted under the concept of one model per style.
The use of GANs [2; 13] in image synthesis, image super-resolution, image editing, and representation learning is very popular. The Google pix-to-pix translation paper [3] introduced a U-net based image-to-image translation model trained with a GAN. It showed impressive performance in semantic segmentation, semantic labeling, map translation [aerial photo into graphics], edge and boundary detection, and image-to-image translation tasks such as thermal-to-color translation. However, the pix-to-pix model can only take a single input, which restricts its use as a style transfer model. Additionally, the model must be fed original and translated images as a pair; unlike in other image synthesis areas, in style transfer it is not practical to generate transferred images beforehand. For example, in semantic segmentation we can manually annotate and pair images. Jun-Yan Zhu et al. [8] introduced a consistent and well-defined method for bijective validation of image translation between two pixel spaces; in [8], they introduce image translation between two domains with unpaired initial distributions.
By design, the CycleGAN model takes a single image input, making CycleGAN unsuitable for direct use in style transfer. In 2019, the StyleGAN paper [5] introduced an effective model to combine features from two different input images and generate a new image with features of both. Since that paper targeted synthesizing human faces, both images fed into the model were human images; therefore, only a single encoder model (Mapping Network) [5] was proposed for feature extraction. Furthermore, features in human faces are explicitly localized, and an acceptable network architecture for global features such as texture differs from the Mapping Network architecture.
The Wavelet Convolutional neural network model [14] has been proposed to classify images based on texture features in a wavelet representation. The input image is first operated on by a multi-resolution image decomposition method called spectral analysis. However, since the wavelet transformation maps pixels into a spectral space, it cannot be used for the generator's style/texture encoding model. In [15; 16], it has been empirically shown that convolutional models with residual connections can be used for texture classification tasks. In [15], the authors claim that CNN models pre-trained on the ImageNet [6] dataset have the potential to recognize texture features as well as shapes, such as clothes in complex images.

Figure 1: The generated image (c), produced for the content image (b) by a CNN based NST model trained on the style image (a), differs noticeably from the image (d), a Japanese artwork with the same semantics as the content image (b).
As a part of the style transfer process, we have to extract style/texture from the style image. The style of an artwork is shaped by factors such as the medium used to paint, the surface, and the artist's personality. The paper by Yang Lia et al. [17] introduced a GAN-based method to control the transferred style in terms of the continuous flow of color gradients, incorporating the brush stroke patterns of artworks.
## 3 System Model and Implementation
In the first approach, we propose a GAN model where the discriminator has two parallel headers to independently generate adversarial results on content and style. In the same approach, we synthesize the training image dataset by using the NST [5] model to generate paired style transfer images. In the second approach, we advance the cGAN model architecture and the training process so that the model understands the class of the style image and extracts core features from the content image. The model then imposes general features of the identified style class on the generated image. Specifically, using the second method, we can generate images that are free of alias artifacts such as those in Fig. 1.
Since the style encoding submodule is optimized under a metric learning objective for image similarity search, and given that the model is trained on a high variation of style classes [18], the trained model will support any art style irrespective of whether that style was present at training time. For the training of approach #1, we synthesize a dataset in the form of an image matrix: the first column contains content images from the Multi-Salient-Object (MSO) dataset [19], one per row, and every other column corresponds to a style and contains the style transfer image of the row's content image in that column's style.
### Approach #1 : GAN with dual headed discriminator
One of the foundational works for this approach is the pix-to-pix paper [3], a cross-domain image transformation model with great qualitative and quantitative accuracy in the generated images. However, the nature of that architecture is not directly compatible with NST for two reasons. The first reason is that the GAN architecture must support two images as inputs, and the second reason is that NST is an unpaired image-to-image transformation, in contrast to the pix-to-pix model.
#### 3.1.1 Objective of approach #1
The cGAN [6] generator learns a mapping from the input image domain \(x\) and noise vector \(z\) to the paired image domain \(y\), \(G:\{x,z\}\to y\), to produce "real" images, while the discriminator \(D\) learns to distinguish "real" images from "fake" images generated by the generator \(G\).
The objective of cGAN can be expressed as.
\[\mathcal{L}_{cGAN}(\mathcal{G},\mathcal{D})=\\ \mathbb{E}_{x,y}[\log(\mathcal{D}(x,y))]\ +\ \mathbb{E}_{x,z}[\log(1-\mathcal{D}(x,\mathcal{G}(x,z)))] \tag{1}\]
Here, the generator \(G\) tries to minimize this objective, while the adversarial discriminator \(D\) tries to maximize it,
i.e. \(\mathcal{G}^{*}=\arg\min_{\mathcal{G}}\max_{\mathcal{D}}\mathcal{L}_{cGAN}(\mathcal{G},\mathcal{D})\)
The proposed model has two distinct discriminators against one generator; from here onward we use rGAN as the short form for this proposed model. Each discriminator is trained separately. The objective of rGAN is a weighted sum of adversarial losses, expressed as:
\[\mathcal{L}_{GAN}(\mathcal{G},\mathcal{D}_{s},\mathcal{D}_{c})=\\ \alpha.\mathcal{L}_{cGAN}(\mathcal{G},\mathcal{D}_{s})\ +\ (1-\alpha). \mathcal{L}_{cGAN}(\mathcal{G},\mathcal{D}_{c}) \tag{2}\]
As stated in [3], we added an \(L1\) loss for perceptual clarity of the generated image. Thus, the final objective is
\[\mathcal{G}^{*}=\arg\min_{\mathcal{G}}\max_{\mathcal{D}_{s}}\max_{\mathcal{D}_{c}}\mathcal{L}_{GAN}(\mathcal{G},\mathcal{D}_{s},\mathcal{D}_{c})\ +\ \lambda.\mathcal{L}_{l1}(\mathcal{G}) \tag{3}\]
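As a concrete reading of equations 2 and 3, the following TensorFlow sketch combines the two adversarial terms with the \(L1\) term. For brevity we write both headers with binary cross-entropy, although the style header in this approach uses a pairwise marginal ranking loss; the values of \(\alpha\) and \(\lambda\) are assumptions, as the text does not report them.

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def generator_loss(d_style_fake, d_content_fake, generated, target,
                   alpha=0.5, lam=100.0):
    # Weighted sum of the style and content adversarial terms (eq. 2) ...
    adv_style = bce(tf.ones_like(d_style_fake), d_style_fake)
    adv_content = bce(tf.ones_like(d_content_fake), d_content_fake)
    adv = alpha * adv_style + (1.0 - alpha) * adv_content
    # ... plus the L1 reconstruction term (eq. 3).
    l1 = tf.reduce_mean(tf.abs(target - generated))
    return adv + lam * l1

def discriminator_loss(d_real, d_fake):
    # Each discriminator header learns to tell real from generated samples.
    return bce(tf.ones_like(d_real), d_real) + bce(tf.zeros_like(d_fake), d_fake)
```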
A pairwise marginal ranking loss is used in the objective function to assess the synthesized image quality [20] of generated images.
| | Svoboda [9] | Gao [10] | Chen [11] | Gatys [1] | Zhang [12] | Appr. 1 | Appr. 2 |
|---|---|---|---|---|---|---|---|
| Semi-supervised learning | ✓ | ✓ | | | ✓ | ✓ | ✓ |
| Does not require paired samples to train | | ✓ | | ✓ | | | ✓ |
| Trained model supports more than one style | | | ✓ | | ✓ | ✓ | ✓ |
| Preserves original image color | | | ✓ | ✓ | | | ✓ |
| Does not introduce alias artifacts in generated image | ✓ | ✓ | | | ✓ | ✓ | ✓ |

Table 1: Table of comparison
#### 3.1.2 Network Architecture of approach #1
The skeleton of the GAN model is adopted from the paper [3], with an added style discriminator header and additional channels in the encoder to support the style image. The generator and content discriminator use standard convolutional modules [Conv \(\rightarrow\) Batchnorm \(\rightarrow\) ReLU]. However, the style discriminator is designed as a wavelet convolutional neural network. The complete training procedure is diagrammed in Fig. 2.
Generator with skip connection - The generator model aims to extract object features from the content image and local-global fused features from the style image. Subsequently, those features are encoded from the input images' pixel space into the bottleneck layer's latent vector space. The decoder model reconstructs the style-transferred image up to the same size as the input. The bottleneck vector size is set to 64 as defined in [21].
Apart from the two inputs taken by the encoder model, the generator architecture is a U-Net architecture [22]. Since the U-Net allows information sharing from low-level features of the encoder at the image reconstruction step, it gives a chance to fuse low-level features related to the texture of the style image. Furthermore, as stated in [3], the U-net residual architecture helps reduce blurriness in the generated image.
Markovian discriminator [content discriminator] - The exact discriminator designed in [3] is used for several reasons. Since generated images are mixed with style image features, it is crucial to ensure that local image patches are real. PatchGAN [3] penalizes structure at the scale of patches, which ensures the content image is not embedded with alias artifacts and further allows the generation of a more semantically accurate image. The discriminator considers the image a Markov random field with a plate per patch. Therefore, it indirectly supports having original colors in the generated image because each patch is considered independent of surrounding patches.
wavelet CNN discriminator [style discriminator] - Theoretically, the style of an image cannot be successfully captured by a sequential CNN model, because a CNN targets capturing local features and transforming those features until they reach a complexity that the fully connected layer will support. As stated in [15; 23], even a CNN model with feature fusion, such as a residual CNN [24] or the inception model [25], can be used for texture extraction tasks. The wavelet convolutional neural network begins with a wavelet transformation layer, which decomposes pixel space into spectral space at several resolutions using frequency statistics. Then, in the convolutional neural network portion, we down-sample the different resolution results from the wavelet layer with channel-wise addition. In essence, this model acts as a global-local feature fusion to extract style from the images.
The paper [14] introduced a state-of-the-art texture extraction architecture based on the wavelet transformation, and the style discriminator header in this approach is implemented using a wavelet CNN. Using this model in the generator as a style encoder header adds unexpected hues to the generated image and partitions the generated image with light black stripes. We speculate that these behaviors are caused by the CNN operating on spectral-space features produced by the wavelet transformation layer.
#### 3.1.3 Optimization and inference
As suggested in the original GAN [2], we alternate training between the generator model and the discriminator model to get the generator to converge to a point where its loss is minimized, while the discriminators damp around an equilibrium level. Instead of feed-forwarding real and fake sample sets separately over the discriminator's heads, with the motive of achieving a more robust gradient descent, we concatenate the real and fake sample sets and feed them into the discriminator after shuffling. This yields a more robust gradient descent because, in the former method, the parameters of the Batch Normalization layers change radically due to the alternating real and fake feeds, whereas in our approach the batch normalization layers remain more stable.
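A sketch of this joint shuffled feeding in TensorFlow; the function and variable names are our own, and the discriminator is assumed to output one logit per sample.

```python
import tensorflow as tf

def discriminator_step(disc, optimizer, real_batch, fake_batch):
    """One discriminator update on a shuffled mix of real and fake samples,
    so the Batch Normalization layers see both populations in every
    forward pass rather than alternating between them."""
    x = tf.concat([real_batch, fake_batch], axis=0)
    n_real = tf.shape(real_batch)[0]
    n_fake = tf.shape(fake_batch)[0]
    y = tf.concat([tf.ones([n_real, 1]), tf.zeros([n_fake, 1])], axis=0)
    idx = tf.random.shuffle(tf.range(n_real + n_fake))  # joint shuffle
    x, y = tf.gather(x, idx), tf.gather(y, idx)
    with tf.GradientTape() as tape:
        logits = disc(x, training=True)                 # (N, 1) logits
        loss = tf.reduce_mean(
            tf.keras.losses.binary_crossentropy(y, logits, from_logits=True))
    grads = tape.gradient(loss, disc.trainable_variables)
    optimizer.apply_gradients(zip(grads, disc.trainable_variables))
    return loss
```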
We used a non-reference loss function, the L1 norm, as the image reconstruction loss in the generative model, mainly because the L1 loss preserves color and luminance and weights the loss equally regardless of local structure. Furthermore, the L1 loss does not penalize pixel value differences thoroughly, which gives a chance to add style image color variations and features on top of the content image. We also experimented with a reference loss function, the multi-scale structural similarity index (SSIM) [26], which is sensitive to local structure, and with the MixLoss function [27], a weighted combination of the SSIM and L1 losses, as the reconstruction loss. However, the generated images in both experiments exhibited unexpected hues and noise.
In the content discriminator header, we have used binary cross-entropy loss as suggested in [3]. We used an instance normalization layer instead of batch normalization for a more robust and smooth training curve in the style discriminator header. For the style loss function, we have implemented a slightly modified version of the pairwise marginal loss function [28] to support both positive and negative pairs at the same time.
Since the objective is that of a GAN based on paired image transformation, we trained the model on the synthesized image matrix described above. From that image matrix, we synthesized the training dataset in the form of triplets: [content image, style image, style transfer image]. We fed the batched dataset into the model training pipeline.
### Approach #2 : GAN with discriminatory encoder sub models
Approach #1 targeted designing a model that can transfer images into more than one style. Therefore, to support a new style, we had to train a new CNN-based NST model, such as the fast style transfer [5] model, on a chosen set of styles. The style transfer images from the trained model on content images are then composed into an image matrix as in approach #1, and the GAN of approach #1 is trained on the composed image matrix with randomized image augmentations in the pre-processing pipeline. Therefore, even though the model generates semantically accurate style transfer images for several styles, the complicated process of training the model for a new style is a downside of approach #1.

Figure 2: The GAN architecture of approach #1.
Also, as in the paper [1], the color palette of the generated image remains constrained because the model proposed in approach #1 still learns style from a single image. As we discussed, representing a style by a single image is not accurate in reality and can introduce many unnecessary features into the generated image. For example, in Fig. 1 the generated image (third image) embeds many alias artifacts, such as the presence of waves instead of the blurred background of the reference image (second image) and curly sprinkle-like features in the bird's feathers. Furthermore, the fourth image in Fig. 1 shows a scene of a bird with leaves, and that image differs significantly from the generated image (third image): the generated image has embedded a blueish color palette like that of the style image (first image of Fig. 1). Both [1] and the proposed GAN model generate images that show style image features in the generated image.
Besides the ample parameter space of the generative model, a discriminator model with a large parameter space degrades the gradient flow in the training phase. As a result of this ample parameter space [order of \(10^{5}\)], training the model to get even moderate results requires a relatively large dataset [order of \(10^{4}\)]. Also, it is hard to expect the generative model to converge to an optimal point while the discriminator oscillates around the proper equilibrium level. Because the adversarial loss is employed to assess the generated samples against "real" samples, it is hard to guarantee that the generator learns the optimal transformation operation without collapsing. Defining evaluation metrics on image reconstruction for style transfer is even more challenging, and evaluating model training by a metric function does not always guarantee model learning. All the facts above indicate that using conventional methods to build a model that understands style transfer on unpaired data is not an accurate and reliable approach.
As the solution, we introduce a new model training process. The starting point is to drop the concept of training separate discriminator models: the discriminator is not fed with fake samples as in GAN training. Instead, we define two separate encoder models for content feature extraction and style feature extraction. As shown in Fig. 3, we feed the extracted features into the decoder to generate style transfer images, and then use the two encoder models in the discriminator to generate an adversarial loss.
#### 3.2.1 Objective of approach #2
Even though we have adopted the cGAN in this approach, the training phase is not the same as in cGAN training. In this approach both the generator and the discriminator train to minimize the objective, essentially because the discriminator is a part of the generator. The objective function of \(rGAN\) can be expressed as:
\[\mathcal{L}_{rGAN}(\mathcal{G},\mathcal{D}_{s},\mathcal{D}_{c})=\\ \alpha.\mathcal{L}_{cGAN}(\mathcal{G},\mathcal{D}_{s})\ +\ (1-\alpha). \mathcal{L}_{cGAN}(\mathcal{G},\mathcal{D}_{c}) \tag{4}\]
This is the same as in approach #1 (equation 2), but the final objective (5) differs from (3).
\[\mathcal{G}^{*}=\arg\min_{\mathcal{G}}\min_{\mathcal{D}_{s}}\min_{\mathcal{D}_{c}}\mathcal{L}_{rGAN}(\mathcal{G},\mathcal{D}_{s},\mathcal{D}_{c})\ +\ \lambda.\mathcal{L}_{l1}(\mathcal{G}) \tag{5}\]
As in [8], the objective is defined to validate that the transformation is learned by the rGAN. Since we use the same encoder models in the discriminator, the whole training process is validated: in the first step, the encoder models are trained as discriminators to minimize the feature extraction loss; in the second step, those encoder models are used in the generator to adversarially train the decoder together with the encoders, minimizing the adversarial loss. These two steps repeat over the dataset exactly as in cGAN training.
Notably, in the discriminator training process we do not feed "fake" samples as in cGAN training, because the encoders are supposed to be trained so that they generate optimal latent vectors rather than perform classification. In this architecture both discriminator models output the same encoding vectors that are used in the generator. We used the pairwise marginal loss function as the objective for content encoder training in the discriminator, while the style encoder is trained under a metric learning objective for image similarity search.
\[H_{i,k}\ =\ \frac{1}{\gamma}.\sum_{j}xa_{i,j}.xs_{j,k} \tag{6}\]
\[\mathcal{L}(xa,xs,ys)\ =\ -\sum_{i}ys_{i}.\log(H_{i}) \tag{7}\]
In equations 6 and 7 above, \(xa\), \(xs\), and \(ys\) denote the anchor sample batch, the style sample batch, and the style label vector, respectively. \(\gamma\) is a regularizing constant in the Gram matrix. The Gram matrix in (6) is used for the sparse categorical cross-entropy in (7).
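A sketch of equations 6 and 7 in TensorFlow, under our reading that the scaled similarity matrix \(H\) serves as the logits of a sparse categorical cross-entropy over style classes; the batch layout and names are our own assumptions.

```python
import tensorflow as tf

def gram_metric_loss(xa, xs, ys, gamma=0.1):
    """Metric-learning loss of eqs. (6)-(7), under our reading.

    xa: (B, D) anchor embeddings; xs: (C, D) style embeddings, one per
    style class; ys: (B,) integer style labels. The scaled similarity
    matrix H acts as logits, pulling same-style embeddings together."""
    h = tf.matmul(xa, xs, transpose_b=True) / gamma  # eq. (6), shape (B, C)
    return tf.reduce_mean(
        tf.keras.losses.sparse_categorical_crossentropy(ys, h, from_logits=True))
```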
In essence, the encoder models are trained to generate optimal features, while the generator model is trained to generate images that minimize the weighted sum of the content, style, and L1 losses, as shown in equation 5, using the trained encoders.
Figure 3: The GAN architecture of approach #2.
#### 3.2.2 Network architecture of approach #2
The entire network consists of three main parameter spaces, called the content encoder, style encoder, and decoder, as shown in Fig. 3. The content encoder and style encoder output latent vectors of the content image and style image, respectively. The decoder uses those encoded features in the generative model to generate style transfer images. The discriminative model uses the content encoder and style encoder to generate the adversarial losses for training.
Content encoder - The content encoder takes an image and outputs an encoded latent vector [of length 32]. The encoder is built from a sequence of modules of the form [Conv \(\rightarrow\) Batchnorm \(\rightarrow\) ReLU] and is trained on a pairwise marginal loss in the training process. Adopting the StyleGAN [5] synthesis network design, we do not feed encoded features to the decoder model directly over the bottleneck; instead, we feed features through skip connections from the content encoder's intermediate CNN modules to the intermediate up-sampling CNN modules in the decoding model.
In the training process, the content encoder is trained in the same way a face-recognition model is trained [21]. Instead of using the triplet loss [28], we use the sum of the pairwise marginal loss on a positive sample and a negative sample in each training step. The content discriminator is fed random sample pairs during training, where positive and negative pairs are generated randomly such that each occurs with probability 0.5. Moreover, using a hard ranking loss in the discriminator, namely the pairwise marginal loss, instead of a semi-hard ranking loss such as the triplet loss makes the gradient descent more robust. As stated above, we can guarantee that the content encoder is trained to perform the expected task, because the same encoder model is trained on content-encoding-vector classification using a ranking loss in the discriminator training step. Furthermore, we assessed the accuracy of the content discriminator network and objective function on a separate data set.
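Since the exact functional form is not spelled out above, the sketch below uses the standard contrastive-margin formulation of a pairwise (hard) ranking loss; the margin value is an assumption.

```python
import numpy as np

def pairwise_marginal_loss(z1, z2, same, margin=1.0):
    # z1, z2: (batch, dim) content encodings of the two images in each pair.
    # same:   (batch,) 1 for positive pairs, 0 for negative (each drawn with p = 0.5).
    d = np.linalg.norm(z1 - z2, axis=1)
    positive = same * d**2                                   # pull positives together
    negative = (1 - same) * np.maximum(0.0, margin - d)**2   # push negatives apart
    return np.mean(positive + negative)
```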
Style encoder -The major concern here is that the encoder vectors must show locational proximity indicating semantic similarity of the original styles when transformed into UMAP- or PCA-based encoded vector spaces. For example, a pencil drawing's encoder vector must lie closer in the point cloud to a pencil sketch image than to an oil painting. In essence, the style encoder is trained to yield approximately similar encoding vectors for images of similar style. The task resembles training an embedding vector, but defining an objective function for it and using that in an adversarial loss causes many issues. Likewise, using a classification objective for training is inaccurate both in theory and empirically: as stated in the CycleGAN paper [8], a mapping "\(G:x\to y\) does not guarantee that an individual input \(x\) and an output \(y\) are paired up in a meaningful way - there are infinitely many mappings \(G\)". On the other hand, designing the objective as dictionary learning [29] is not practical due to the complexity and scale of the parameter space, because manipulating dictionary matrices in the training and inference stages is computationally costly [30].
The idea, therefore, is to build an embedding-vector training process without explicitly using an embedding vector. In addition, the model architecture must support a proper discriminative loss function so that it can be trained in the adversarial phase.
The concept of metric learning in deep learning [6] for similarity search is the optimal way to train here because, in theory, such a model uses sparse categorical cross-entropy as its objective function. As stated in the objective section, this objective shapes the encoding-vector distribution estimated for each class so that its negative log-likelihood is as small as possible. In essence, this makes encoding vectors of the same class fall under the same distribution, achieving distributional clustering that preserves proximity.
As stated in Deep-TEN [16], CNN models with residual connections from low-level to high-level feature layers are suitable for tasks like material and texture extraction; in [16] this point was demonstrated empirically by using a ResNet model to classify garments based on texture. However, the DenseNet [31] model is an even better candidate, because it concatenates all features from previous modules in the current module, and the transition layers are weighted between different layers, generating more complex features than those produced by ResNet [24] at a given resolution. This gives the model a much stronger gradient flow. Also, since in [31] the layer width is proportional to the growth rate, the feature extractor of the style encoder can have a much smaller parameter space.
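To illustrate this design choice, a DenseNet-121 backbone can be wired up as the style encoder roughly as follows; the output dimension, the pooling, and the use of torchvision's stock DenseNet are our assumptions for the sketch.

```python
import torch
import torchvision

class StyleEncoder(torch.nn.Module):
    """DenseNet-backed style encoder producing a fixed-length style vector."""

    def __init__(self, out_dim=32):
        super().__init__()
        backbone = torchvision.models.densenet121(weights=None)
        self.features = backbone.features           # dense blocks + transitions
        self.head = torch.nn.Linear(1024, out_dim)  # densenet121 ends at 1024 channels

    def forward(self, x):
        f = torch.nn.functional.relu(self.features(x))
        f = torch.nn.functional.adaptive_avg_pool2d(f, 1).flatten(1)
        return self.head(f)                         # style encoding vector
```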
Generator -The generator model is mainly a combination of the content encoder, the style encoder, and the decoder. Here the decoder model is a sequential set of up-sampling CNN modules, like a U-Net with skip connections to the content encoder sub-model. The network architecture is inspired by the StyleGAN [5] synthesis network: instead of using a single mapping network [5] to generate features from the inputs, here we implement two separate encoder models, as shown in Fig. 3.
Figure 4: Approach #1 GAN model evaluation image matrix.

The decoder sub-model in the generator starts from the latent vector space generated by the style encoder alone. The generator model is fed low-level features from the content encoder through skip connections. The target here is to make the decoder sensitive to the style-encoded high-level features while using the low-level features to generate the content in the image. Therefore, the final image will not be affected by the exact shapes and colors of the content image, nor by what objects are present in the style image.
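A compact sketch of such a decoder is shown below; the channel widths, depth, and skip-feature shapes are illustrative assumptions rather than the exact configuration.

```python
import torch
from torch import nn

class Decoder(nn.Module):
    """Up-sampling decoder seeded by the style vector, with content-encoder skips."""

    def __init__(self, style_dim=32):
        super().__init__()
        self.fc = nn.Linear(style_dim, 256 * 4 * 4)              # start from style code
        self.up1 = nn.ConvTranspose2d(256 + 128, 128, 4, 2, 1)   # concat skip #1
        self.up2 = nn.ConvTranspose2d(128 + 64, 64, 4, 2, 1)     # concat skip #2
        self.out = nn.Conv2d(64, 3, kernel_size=3, padding=1)

    def forward(self, style_vec, skips):
        # skips[0]: (B, 128, 4, 4) and skips[1]: (B, 64, 8, 8) from the content encoder.
        x = self.fc(style_vec).view(-1, 256, 4, 4)
        x = torch.relu(self.up1(torch.cat([x, skips[0]], dim=1)))
        x = torch.relu(self.up2(torch.cat([x, skips[1]], dim=1)))
        return torch.tanh(self.out(x))
```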
## 4 Model Evaluation
### Results of approach #1
Despite having two discriminative models, the discriminator, both in total and individually, tended to converge to an equilibrium point during training in our experiments. Furthermore, the generator shows less variance in loss, especially the L2 loss, than the CNN-based NST model in our experimental results.
The resulting generated images tend to have a much higher resemblance to the respective art style compared to the CNN-based NST model, as illustrated especially in the third and fourth rows of Fig. 4. In particular, dropping of the content color palette is insignificant in the generated images. Furthermore, the introduction of new artifacts into the generated images is not prominent with the new model.
### Results of approach #2
For the GAN training process we used two separate datasets for style images and content images, as stated in the method section. The complete adversarial training process on the style discriminator, content discriminator, and generator converges to an optimal point by minimizing the loss function of each, instead of the discriminative loss oscillating around an equilibrium level. Also, the generative loss and the content encoder loss have smooth loss curves relative to the style encoder. Here, the presence of classes within a batch is not uniformly distributed.
Even though we adopted metric learning to calculate the model loss, the sample set does not always contain samples from each class in the same order. Under this method, model overfitting and the tendency toward model collapse are reduced significantly compared to an ordered, static sampling method.
The generated images do not change radically as in inference with the [9] model, but they are imposed with texture and some color effects from the style image, as illustrated in Fig. 5. As proposed, the model is supposed to generate an image that appears to have been done by the same artist, rather than one seeming to have all the features of a given style image. Considering this point, the model we introduce does quite a good job of understanding the style image's texture and imposing it on the content image without introducing extrinsic features from the style image as artifacts. The slight amount of noise and blur in the generated images is due to the downscaling of the input images.
## 5 Discussion
### Lessons Learned
In the first approach, we mainly targeted building a GAN for NST that supports more than one style. The generated images show significantly fewer content color-palette changes than the CNN-based NST models because of the adversarial training. Texture discontinuity and blurriness of the generated images were significantly affected by the severe downsampling of the images' pixel space, because the downsampling operation can drop inter-pixel correlations present in the pixel space.
From the results of the second approach, we can state that style extraction using metric learning emphasizes features that are general to a style class. Consequently, the generated images carry texture and similar semantic appearance features from the style image. On the other hand, the generated image will not have striking features such as rapid contrast changes and sharp edges as in the reference style image, because of style normalization in a broader sense.
In this paper we introduce a comprehensive GAN architecture with two parallel discriminator heads, whose generator takes 6-channel tensor inputs. Because this model has a larger parameter space than a general GAN model, the training process is slower than average, but the inference time does not change significantly. On the other hand, the model proposed in the second approach has an even smaller parameter space than a general GAN model. However, sharing the encoder parameter set between the generator and the discriminator during training increased the expected training time. Still, inference is much faster than in approach #1 due to the smaller model size and fewer FLOPs.
### Open Challenges and Future Research Directions
The model proposed in approach #2 is not capable of clearly introducing striking features, such as outline markings and rapid color-gradient patches, into the generated image even when they appear in the style image. However, the proposed model was only fine-tuned on very low-resolution images (128, 128, 3), where the severe downscaling of the input images causes image edges to blur and become discontinuous. Therefore, the very low resolution of the input images could be the cause of the missing striking features.
The target sensor replacement method introduced in [32] is trained in the eigenspace of the related pixel space. It ensures that the decoder is fed essential details about style in the bottleneck layer. It also gives us more control over the latent space, implying that we can build a model that controls
the style transfer [12; 33] under the rGAN architecture. Building a multimodal architecture that serves like the model introduced in this paper, but with aleatoric uncertainty in the generated image, would be analogous to a variational autoencoder [34] among autoencoders [22].

Figure 5: Approach #2 GAN model evaluation image matrix.
## 6 Conclusion
In this paper, we introduced two novel semi-supervised learning approaches to neural artistic style transfer. The first approach used a GAN-based style-transfer model capable of transferring images into more than one style. In the second approach, we introduced a neural style transfer that learns how an actual artist would paint a given image in their style. The adversarial network and adversarial training process introduced in the second approach are applicable to many ambiguous tasks and can be trained with a smaller parameter space than the conventional GAN model.
## Acknowledgment
We thank L.T.N. Wickremasinghe, T.T. Jayasekera and P.G. Amanda for the support on this project.
|
2301.05954 | Searching for Molecular Jets from High-Mass Protostars | We report Very Large Array (VLA) observations in the Q-band toward 10 ionized
jet candidates to search for SiO emission, a well-known shocked gas tracer. We
detected 7 mm continuum counterparts toward 90% of the jet candidates. In most
cases, the jet candidate is located toward the center of the 7 mm core, and the
high masses ($\approx 100\,M_\odot$) and densities ($\approx 10^7\,
\text{cm}^{-3}$) of the cores suggest that the central objects are very young
high-mass protostars. We detected SiO $J=1-0$ emission associated with 6 target
sources. In all cases, the morphology and spectrum of the emission is
consistent with what is expected for molecular jets along an outflow axis, thus
confirming the jet nature of 60% of our sample. Our data suggest a positive
correlation between the SiO luminosity $L_{SiO}$, and both the bolometric
luminosity $L_{Bol}$ and the radio luminosity $S_\nu d^2$ of the driving
sources. | Tatiana M. Rodriguez, Peter Hofner, Isaac Edelman, Esteban D. Araya, Viviana Rosero | 2023-01-14T17:01:46Z | http://arxiv.org/abs/2301.05954v1 | # Searching for Molecular Jets from High-Mass Protostars
###### Abstract
We report Very Large Array (VLA) observations in the Q band toward 10 ionized jet candidates to search for SiO emission, a well-known shocked gas tracer. We detected 7 mm continuum counterparts toward 90% of the jet candidates. In most cases, the jet candidate is located toward the center of the 7 mm core, and the high masses (\(\approx 100\,M_{\odot}\)) and densities (\(\approx 10^{7}\,\mathrm{cm}^{-3}\)) of the cores suggest that the central objects are very young high-mass protostars. We detected SiO \(J=1-0\) emission associated with 6 target sources. In all cases, the morphology and spectrum of the emission is consistent with what is expected for molecular jets along an outflow axis, thus confirming the jet nature of 60% of our sample. Our data suggest a positive correlation between the SiO luminosity \(L_{SiO}\), and both the bolometric luminosity \(L_{Bol}\) and the radio luminosity \(S_{\nu}d^{2}\) of the driving sources.
Tatiana M. Rodriguez, Peter Hofner, Isaac Edelman, Esteban D. Araya, and Viviana Rosero
## 1 Introduction
Many questions remain unanswered regarding the origin of high-mass (\(M_{*}\gtrsim 8M_{\odot}\)) stars. Large scale molecular outflows are observed ubiquitously in high-mass star-forming (HMSF) regions (e.g., Zhang et al., 2001; Wu et al., 2005), which argues in favor of a formation scenario similar to that of lower mass stars, i.e., via disk accretion (e.g., Cesaroni et al., 2017; Oliva and Kuiper, 2020; Williams et al., 2022). In this scenario, the emerging protostar accretes material from its surroundings while ejecting material along its poles. These bipolar jets play a key role in the formation process, getting rid of excess angular momentum and allowing accretion to proceed. Furthermore, because of the injection of turbulence in the surrounding medium, outflows play an important role in the future star formation in the region (e.g. Tanaka et al., 2017; Grudic et al., 2022). Massive young stellar objects (MYSOs) reach the conditions necessary for nuclear burning while still deeply embedded and actively accreting, making the detection of au-scale ionized jets an observational challenge. This has hindered efforts to address fundamental questions regarding the nature of the mass flows, as the number of known jets driven by MYSOs has only recently increased from a handful, thanks to high resolution and sensitivity surveys (e.g., Purser et al., 2021; Kavak et al., 2021). Increasing this still small number is an important task.
A survey that significantly contributed to the study of jets from MYSOs was published by Rosero et al. (2016, 2019). The authors conducted deep (\(3\sim 10\,\mu\)Jy beam\({}^{-1}\) rms), sub-arcsecond resolution (\(0\farcs 4\)) VLA observations at 1.3 and 6 cm toward HMSF regions. Their sample was carefully chosen to target MYSOs in the earliest stages of star formation, i.e., prior to the formation of hyper-compact (HC) H II regions. They observed 18 cold molecular clumps (CMC), 15 cold molecular clumps with mid-infrared association (CMC-IR), and 25 hot molecular cores (HMC), and detected a total of 70 sources of radio emission, with a detection rate of 6%, 53%, and 100% for CMC, CMC-IR, and HMC, respectively. While several of these sources were shown to be ionized jets, about 30 of them were classified as jet candidates based on two key characteristics: they are unresolved at \(0\farcs 4\) and have a rising spectral index \(\alpha\) (i.e., \(0.1<\alpha<1.5\), with \(S_{\nu}\propto\nu^{\alpha}\)). A rising spectral index at cm wavelengths is expected from thermal emission from a partially optically thick ionized jet, as described by Reynolds (1986). However, as Rosero et al. (2019) show in their Figure 2, a spherical ionized region of constant density can also account for their measured \(\alpha\) values. Confirmation of the jet nature of the candidates in the Rosero et al. (2016, 2019) study is therefore needed.
One way to distinguish between the models of the origin of the radio continuum put forward by Rosero et al. (2019), is to determine whether the continuum sources display a morphology consistent with ionized jets at higher angular resolution, i.e., elongated in the direction of the large scale molecular flows, which are present toward most of the sources. Alternatively, one could use molecular jet observations to differentiate between
the models. In the earliest stages of formation, when the protostar is still deeply embedded, we expect that a molecular flow would be associated with an ionized jet. Thus, the detection of a molecular jet should allow us to differentiate whether the radio continuum emission arises from a mass ejection phenomenon or from an extremely compact, constant density, ionized region. In this work, we conduct SiO(1\(-\)0) observations toward a sub-sample of jet candidates from the Rosero et al. (2016, 2019) study to search for molecular jets.
The abundance of the SiO molecule is highly enriched in shocked gas regions, making it an ideal probe for our science goal. SiO in star forming regions can be attributed to a variety of processes. Schilke et al. (1997) showed that the production of SiO in the gas phase can be linked to the sputtering of Si-bearing dust grains in C-type shocks in the jet working surface, and Anderl et al. (2013) explored the role of grain-grain collisions in C-type shocks in the SiO abundance enhancement. Recently, it has been shown that SiO in the gas phase could originate at the base of jets (i.e., within the dust sublimation radius) due to evaporation (e.g., Lee et al., 2017; Lee, 2020; Podio et al., 2021; Dutta et al., 2022).
We selected 9 regions from the Rosero et al. (2016, 2019) survey that host a jet candidate and where molecular outflow tracers have been detected. These regions are: IRAS sources 18345\(-\)0641, 18440\(-\)0148, 18517\(+\)0437, 18553\(+\)0414, 19012\(+\)0536, 20293\(+\)3952, 20343\(+\)4129, G53.11\(+\)00.05 mm2, and G53.25\(+\)00.04 mm2. Additionally, we included IRAS 19266\(+\)1745 in our sample. This region contains 3 sources with a negative spectral index (\(\alpha<-0.5\)). The nature of the continuum emission in this region is addressed in Section A.6.
This paper is organized as follows: in Section 2 we describe details of the observations, as well as the data calibration and imaging process. In Section 3, we present our observational results, which we discuss in Section 4. Lastly, Section 5 contains a summary of this work and our conclusions.
Regarding the format of the paper, to improve the readability, we include in the main body of the text the data of only one source (IRAS 18517\(+\)0437), serving as an example. Analogous images and comments for all regions can be found in Appendix A.
## 2 Observations
We obtained Q-band (\(\lambda=7\) mm) observations of our target sources with NRAO's1 Karl G. Jansky Very Large Array (VLA) in the D configuration on March 28, 2021. The phase center for each target region is given in columns 2 and 3 of Table 1. We adopted distances and luminosities from Rosero et al. (2019), unless indicated otherwise; these are presented in columns 4 and 5. Additionally, in column 6, we list the classification adopted by these authors of the evolutionary stage of the source, i.e., CMC, CMC-IR, or HMC (see Rosero et al. 2016 Section 2.1 for more details).
Footnote 1: The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
The main target of our observations was the SiO \(J=1-0\) line (\(\nu_{o}=43.42376\) GHz). We used two 8-bit samplers of the WIDAR correlator with dual polarization. To fully exploit the capabilities of the receiver, 1 GHz subbands were centered at 43.25, and also at 48.57 GHz, thus also including observations of the CS \(J=1-0\) emission line (\(\nu_{o}=48.99095\) GHz). To cover these two molecular lines, we set up 64 MHz wide spectral windows (SPWs), with \(1024\times 50\) kHz channels. We also configured 10 broad band SPWs to measure the Q-band continuum emission, each with \(128\times 1\) MHz channels.
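As a quick consistency check (our arithmetic, using only the numbers quoted above), the native velocity width of one 50 kHz channel at the SiO \(J=1-0\) rest frequency is about 0.35 km s\({}^{-1}\); the 1.5 km s\({}^{-1}\) channels of the cubes quoted below therefore correspond to binned data.

```python
from astropy import units as u, constants as const

# Velocity width of a single 50 kHz channel at the SiO(1-0) rest frequency.
dv = (const.c * 50 * u.kHz / (43.42376 * u.GHz)).to(u.km / u.s)
print(dv)  # ~0.345 km/s
```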
Flux density and bandpass calibration was based on observations of the quasars 3C286 and J1733\(-\)1304, respectively. The complex gain calibrators used are presented in Table 2. Typical time on source was approximately 13.5 minutes.
The data were processed through NRAO's Common Astronomy Software Applications (CASA, McMullin et al., 2007) Calibration Pipeline, using version 6.1.2. For each source we combined all 128 MHz SPWs to create a continuum map corresponding to a central frequency of 45.84 GHz. The typical synthesized beam size and rms for naturally weighted continuum maps are 2\(\farcs\)0 \(\times\) 1\(\farcs\)6 and 91 \(\mu\)Jy beam\({}^{-1}\), respectively. Regarding the molecular emission, the typical beam size, rms, and channel width of the SiO(1\(-\)0) velocity cubes are 2\(\farcs\)0\(\times\)1\(\farcs\)6, 4.5 mJy beam\({}^{-1}\), and 1.5 km s\({}^{-1}\), respectively. A slight taper was applied to the CS(1\(-\)0) data to account for the extended nature of the emission. The typical beam size, rms, and channel width of the CS(1\(-\)0) velocity cubes are 2\(\farcs\)4\(\times\)2\(\farcs\)0, 11 mJy beam\({}^{-1}\), and 1.5 km s\({}^{-1}\), respectively. Columns 7, 8, and 9 of Table 1 show whether
continuum, SiO, and CS emission was detected (y) or not (n) in each region.
## 3 Results
### 7 mm Continuum
We detected 7 mm continuum sources (hereafter referred to as cores) in all the observed regions (referred to with the first 5 digits of their name), except for G53.25. Some regions contain only one core, while others host several. We were able to identify a total of 23 discrete 7 mm cores. The cores in each region are denominated C\({}_{i}\), with \(i=1,2,...\), increasing with RA. We were able to associate 16 cores with radio continuum sources. We note that not all the sources in the sample were detected at 6 cm in the Rosero et al. (2016, 2019) survey, hence we will refer to the 1.3 cm emission only when making comparisons with radio continuum. In Table 3 we list the regions observed, the 7 mm cores detected, and their centimeter counterpart identified by Rosero et al. (2016) using the designation given by these authors.
In Figure 1, we show the 7 mm core 18517 C\({}_{1}\) in contours overlaid on the 1.3 cm emission. The peak emission at 1.3 cm, as exemplified in Fig. 1, is essentially coincident with that of the 7 mm continuum in 10 cores (18345 C\({}_{1}\), 18517 C\({}_{1}\), 18553 C\({}_{1}\), 19012 C\({}_{1}\), 19012 C\({}_{2}\), 19266 C\({}_{1}\), 19266 C\({}_{3}\), 19266 C\({}_{5}\), G53.11 C\({}_{1}\), and 20343 C\({}_{3}\)), while for the 6 other cores, the 1.3 cm source is located near the edge of the 7 mm core (20293 C\({}_{1}\), C\({}_{3}\), and C\({}_{4}\), 20343 C\({}_{1}\), 18440 C\({}_{1}\), and 19266 C\({}_{2}\)). We discuss this position offset in Section 4.1.
We carried out a 2-D Gaussian fit to all cores using the CASA task imfit. Based on the results of this fit, we indicate in column 4 of Table 3 whether the core is resolved (R), or unresolved/resolved in at least one direction (U). The deconvolved core size and integrated flux density \(S_{\nu}\) are given in columns 5 and 6, respectively. We note that the size and PA reported for U sources are those of the synthesized beam. Additionally, we classified as extended (E) two sources that present a structure several times the beam size, 19012 C\({}_{1}\) (Fig. A5.1) and 20343 C\({}_{3}\) (Fig. A10.1). Both present extended emission also at 1.3 and 6 cm. Based on their morphology and spectral energy distribution (SED), these are most likely H II regions. Since the goal of our work is the study of
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline \multicolumn{1}{c}{ Region} & R.A. & Dec & Distance & L & Type & \multicolumn{3}{c}{Detection} \\ & [h m s] & [\({}^{\circ}\) \({}^{\prime}\) \({}^{\prime\prime}\)] & [kpc] & [log\({}_{10}\) L\({}_{\odot}\)] & & 7 mm & SiO & CS \\ \hline
18345\(-\)0641 & 18 37 16.8 & \(-\) 06 38 30 & 9.5\({}^{a}\) & 5.3\({}^{b}\) & HMC & y & y & y \\
18440\(-\)0148 & 18 46 36.6 & \(-\) 01 45 21 & 5.2\({}^{b}\) & 3.5\({}^{c}\) & HMC & y & n & y \\
18517\(+\)0437 & 18 54 14.2 & \(+\) 04 41 40 & 1.9 & 3.8\({}^{c,*}\) & HMC & y & y & y \\
18553\(+\)0414 & 18 57 53.6 & \(+\) 04 18 16 & 12.3 & 4.8 & HMC & y & y & y \\
19012\(+\)0536 & 19 03 45.3 & \(+\) 05 40 42 & 4.2 & 4.0 & HMC & y & y & y \\
19266\(+\)1745 & 19 28 55.6 & \(+\) 17 52 00 & 9.5 & 4.4 & HMC & y & n & y \\ G53.11\(+\)00.05 mm2 & 19 29 20.6 & \(+\) 17 57 18 & 1.9 & 1.9 & CMC & y & y & y \\ G53.25\(+\)00.04 mm2 & 19 29 33.5 & \(+\) 18 00 54 & 2.0 & 2.1 & CMC-IR & n & n & n \\
20293\(+\)3952 & 20 31 12.9 & \(+\) 40 03 22 & 1.3/2.0\({}^{d}\) & 3.1/3.5\({}^{d}\) & HMC & y & y & y \\
20343\(+\)4129 & 20 36 07.5 & \(+\) 41 40 09 & 1.4 & 3.0 & HMC & y & n & y \\ \hline \end{tabular}
The right ascension and declination values are given in the J2000 epoch.
\({}^{a}\)From Szymczak et al. (2007). \({}^{b}\)From Urquhart et al. (2018). \({}^{c}\)From Lu et al. (2014). \({}^{d}\)Near/far. \({}^{*}\)Corrected for the adopted distance.
\end{table}
Table 1: Observed Sources and Detection Summary
\begin{table}
\begin{tabular}{l c l} \hline \hline \multicolumn{1}{c}{ Calibrator} & A.P.\({}^{a}\) & Sources Calibrated \\ \hline J2007\(+\)4029 & B & 20293\(+\)3952, 20343\(+\)4129 \\ J1925\(+\)2106 & A & 19266\(+\)1745, G53.11\(+\)00.05, G53.25\(+\)00.04 \\ J1832\(-\)1035 & C & 18345\(-\)0641 \\ J1851\(+\)0035 & C & 18440\(-\)0148, 18517\(+\)0437, 19012\(+\)0536, 18553\(+\)0414 \\ \hline \end{tabular} \({}^{a}\)Astrometry precision (A.P.) classifications A, B, and C correspond to positional accuracies of \(<\)2 mas, 2\(-\)10 mas, and 0.01\(-\)0.15\({}^{\prime\prime}\), respectively. From the VLA calibrator list: [https://science.nrao.edu/facilities/vla/observing/callist](https://science.nrao.edu/facilities/vla/observing/callist).
\end{table}
Table 2: Calibrators List
jet candidates, we will exclude these two cores from the following analysis and discussion.
We plotted the SED (including the 7 mm emission) of the cores with non-extended radio continuum counterpart. In Figure 2, we show the resulting SED of 18517 C\({}_{1}\). The blue solid and the gray dashed lines show the ionized gas sphere model and power-law fit from Rosero et al. (2019), respectively. As in the case for 18517, for most sources the flux density rises significantly at 7 mm, which indicates that thermal emission from dust begins to dominate at this wavelength. In the case of 18440 C\({}_{1}\) (Fig. A2.1, right), 19012 C\({}_{2}\) (Fig. A5.1, first right panel), 20293 C\({}_{3}\) (Fig. A9.1, second right panel) and C\({}_{4}\) (Fig. A9.1, third right panel), the 7 mm flux is consistent within errors with one of the ionized gas models. This suggests that no dust emission was detected toward these cores.
Assuming that the 7 mm emission arises from optically thin dust, we calculated the mass of each core using the equation
\[M_{d}=\frac{d^{2}S_{\nu}R_{g}}{B_{\nu}(T_{d})\kappa_{\nu}}. \tag{1}\]
In this expression, \(d\) is the distance to the source, \(R_{g}\) is the gas-to-dust ratio, \(B_{\nu}(T_{d})\) is the Planck function for a dust temperature \(T_{d}\), and \(\kappa_{\nu}\) is the dust opacity (see Hildebrand, 1983). Most of these quantities are unknown, hence we made the following assumptions. First, based on Lu et al. (2014) and Rathborne et al. (2010), we took 30 K as the dust temperature for all cores. Second, to obtain the dust opacity, we interpolated the value of \(\kappa_{1.3mm}\) for cores with thin ice mantles and a density of \(10^{6}\) cm\({}^{-3}\) from Ossenkopf & Henning (1994), using the power law relation \(\kappa_{\nu}=\kappa_{\nu_{0}}(\frac{\nu}{\nu_{0}})^{\beta}\). We took an intermediate value for the dust emissivity index \(\beta\) of 1.5, which results in \(\kappa_{7mm}=0.08\) cm\({}^{2}\) g\({}^{-1}\). Third, \(R_{g}\) was estimated using equation 2 in Giannetti et al. (2017), which depends on the galactocentric distance of the sources. The \(R_{g}\) values obtained range between \(\sim\) 65 and 135. Finally, we extrapolated the ionized gas power-law fit to 7 mm and subtracted the resulting value from the total integrated flux for those cores with centimeter counterpart and rising spectral index, i.e., 18345 C\({}_{1}\), 18517 C\({}_{1}\), 18553 C\({}_{1}\), G53.11 C\({}_{1}\), 20293 C\({}_{1}\), and 20343 C\({}_{1}\). This was done to account for the contribution from ionized
gas and hence the estimated mass of these cores is a lower limit.

Figure 1: The white contours show the 7 mm continuum emission toward 18517 C\({}_{1}\), and the 1.3 cm emission from Rosero et al. (2016) is shown in color. The filled and empty white ellipses in the bottom left represent the 1.3 cm and 7 mm synthesized beam sizes, respectively. We note that the centimeter and millimeter emission peaks are coincident. The red diamond and \(\times\) show the position of the methanol 44 and 25 GHz emission from Rodríguez-Garza et al. (2017) and Sanchez-Tovar, E. et al. (submitted), respectively.

Figure 2: SED of 18517 C\({}_{1}\). The black dots represent the 1.3 and 6 cm emission from Rosero et al. (2016), while the green square shows the 7 mm flux density. The error bar of the 7 mm flux density represents the Gaussian fit error plus 10% to account for calibration uncertainties. The solid, blue line and dotted, gray line represent the spherical ionized gas model and power-law fit from Rosero et al. (2019), respectively.
From the derived masses, we obtained the density \(n(H_{2})\) and column density \(N(H_{2})\) of the emission. For the calculation of these quantities, we assumed the cores to have a spherical geometry with a diameter equal to their geometric mean FWHM. The geometric mean FWHM of each core in units of arcsec and au, as well as the obtained masses, densities, and column densities are presented in Table 4. We note that for unresolved sources the densities obtained are lower limits.
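As a cross-check of equation (1), the core mass can be computed with astropy as sketched below; \(T_{d}\), \(\kappa_{7mm}\), and the representative \(R_{g}=100\) follow the assumptions stated above, and the example numbers are taken from Tables 1 and 3.

```python
import numpy as np
from astropy import units as u, constants as const

def planck_nu(nu, T):
    """Planck function B_nu(T); the steradian factor cancels in Eq. (1)."""
    x = (const.h * nu / (const.k_B * T)).decompose().value
    return 2 * const.h * nu**3 / const.c**2 / np.expm1(x)

def dust_core_mass(S_nu, d, T_d=30 * u.K, kappa=0.08 * u.cm**2 / u.g,
                   R_g=100.0, nu=45.84 * u.GHz):
    """Eq. (1): optically thin dust mass of a 7 mm core."""
    return (d**2 * S_nu * R_g / (planck_nu(nu, T_d) * kappa)).to(u.M_sun)

# 18517 C1: S_nu = 1.59 mJy at d = 1.9 kpc gives ~18 M_sun for R_g = 100;
# Table 4 lists 14 +/- 3 M_sun after subtracting the ionized-gas contribution.
print(dust_core_mass(1.59 * u.mJy, 1.9 * u.kpc))
```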
### SiO \(J=1-0\)
We detected SiO emission in 6 of the 10 regions observed: 18345, 18517, 18553, 19012, G53.11, and 20293. Figure 3 shows the SiO (1\(-\)0) integrated intensity (moment-0) map in 18517. We find a number of shared characteristics in the morphology of the SiO emission. First, in all cases the SiO emission appears to originate in close proximity to the radio continuum source. Second, the emission appears in elongated structures, and is highly asymmetric or monopolar. In 18517 (Fig. A3.2), 19012 (Fig. A5.3), and 20293 (Fig. A9.2), the SiO emission appears quite collimated and elongated on scales \(\gtrsim 0.1\) pc. On the other hand, in 18345 (Fig. A1.2), 18553 (Fig. A4.2), and G53.11 (Fig. A7.2), we observe a few clumps of strong emission at distances \(\gtrsim 0.1\) pc from the core. In all cases the SiO emission seems to be clumpy.
In Figure 4, we show in blue the spectrum of the SiO emission in 18517, multiplied by 5, obtained by integrating over all the emission that appears to be associated with C\({}_{1}\) within the 3\(\sigma\) contour level, as shown in Figure 3. The vertical dotted line in Figure 4 marks the systemic velocity, taken from Bronfman et al. (1996), derived from single-dish CS(2\(-\)1) observations. We also took the systemic velocities of 18345, 19012, and 20293 from Bronfman et al. (1996), while the measurements of NH\({}_{3}\left(J,K\right)=\left(1,1\right)\) from Wienen et al. (2015) and C\({}^{18}\)O(1\(-\)0) from Zhang et al. (2017) were used to obtain the systemic velocities of 18553 and G53.11, respectively. The peak flux density \(S_{\nu}^{peak}\) and rms of the SiO
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Region & Core & 1.3 cm Source\({}^{a}\) & Type\({}^{b}\) & Size\({}^{c}\) & \(S_{\nu}\) \\ & & & & [\({}^{\prime\prime}\times{}^{\prime\prime}\), \({}^{\circ}\)] & [mJy] \\ \hline
18345\(-\)0641 & C\({}_{1}\) & A & U & 2.1 \(\times\) 1.6, \(-\)9 & 0.63 \(\pm\)0.10 \\
18440\(-\)0148 & C\({}_{1}\) & A & U & 2.0 \(\times\) 1.6, \(-\)3 & 0.43 \(\pm\)0.07 \\ & C\({}_{2}\) & - & R & 2.1 \(\times\) 1.4, 0 & 0.77 \(\pm\)0.05 \\
18517+0437 & C\({}_{1}\) & A & R & 1.0 \(\times\) 0.4, 85 & 1.59 \(\pm\)0.19 \\
18553+0414 & C\({}_{1}\) & A & R & 1.5 \(\times\) 1.1, 125 & 1.39 \(\pm\)0.15 \\
19012+0536 & C\({}_{1}\) & G39.389–0.143 & E & 3.4 \(\times\) 3.2, 9 & 1.86 \(\pm\)0.15 \\ & C\({}_{2}\) & A & R & 1.7 \(\times\) 0.7, 68 & 1.94 \(\pm\)0.18 \\
19266+1745 & C\({}_{1}\) & A & R & 1.9 \(\times\) 0.5, 93 & 1.10 \(\pm\)0.06 \\ & C\({}_{2}\) & C & U & 1.9 \(\times\) 1.6, \(-\)40 & 0.36 \(\pm\)0.04 \\ & C\({}_{3}\) & B & R & 1.5 \(\times\) 0.5, 110 & 0.48 \(\pm\)0.02 \\ & C\({}_{4}\) & - & U & 1.9 \(\times\) 1.6, \(-\)40 & 0.35 \(\pm\)0.01 \\ & C\({}_{5}\) & G53.037\(\pm\)0.115 & U & 1.9 \(\times\) 1.6, \(-\)40 & 0.39 \(\pm\)0.01 \\ & C\({}_{6}\) & - & U & 1.9 \(\times\) 1.6, \(-\)40 & 0.36 \(\pm\)0.01 \\ G53.11+00.05mm2 & C\({}_{1}\) & A & U & 1.9 \(\times\) 1.6, \(-\)37 & 0.59 \(\pm\)0.08 \\
20293+3952 & C\({}_{1}\) & E & R & 1.6 \(\times\) 1.5, 24 & 0.72 \(\pm\)0.06 \\ & C\({}_{2}\) & - & U & 2.2 \(\times\) 1.6, \(-\)79 & 0.59 \(\pm\)0.03 \\ & C\({}_{3}\) & C\({}^{d}\) & R & 1.5 \(\times\) 1.3, 23 & 1.78 \(\pm\)0.11 \\ & C\({}_{4}\) & G78.976+0.358 & U & 2.2 \(\times\) 1.6, \(-\)79 & 1.75 \(\pm\)0.11 \\
20343+4129 & C\({}_{1}\) & B & U & 2.2 \(\times\) 1.6, \(-\)80 & 0.39 \(\pm\)0.01 \\ & C\({}_{2}\) & - & R & 1.2 \(\times\) 1.0, 86 & 0.49 \(\pm\)0.02 \\ & C\({}_{3}\) & A & E & 2.7 \(\times\) 1.4, \(-\)55 & 1.23 \(\pm\)0.20 \\ & C\({}_{4}\) & - & R & 1.2 \(\times\) 0.8, 49 & 0.54 \(\pm\)0.04 \\ & C\({}_{5}\) & - & R & 1.4 \(\times\) 1.0, 46 & 0.66 \(\pm\)0.05 \\ \hline \end{tabular}
\({}^{a}\)Designation from Rosero et al. (2016). \({}^{b}\)R = resolved, U = unresolved in at least one direction, E = extended. \({}^{c}\)Deconvolved size and P.A.; for U sources the synthesized beam is given.
\end{table}
Table 3: Measured Parameters of the 7 mm Cores
line emission measured from the integrated spectrum in each region are listed in columns 2 and 3 of Table 5. Due to the asymmetry observed in most cases, we characterize the line width \(\Delta v\) with the full width at zero power (FWZP), which ranges between \(\sim 7.5\,\mathrm{km\ s^{-1}}\) (18517, Fig. 4) and 24 km s\({}^{-1}\) (20293, Fig. A9.2). We also found that usually the SiO line peaks blue-shifted from the adopted systemic velocity. The observed velocity offsets \(v_{off}\) range between 1.6 km s\({}^{-1}\) (18345, Fig. A1.2) and almost 4 km s\({}^{-1}\) (20293, Fig. A9.2). In Table 5 we list \(v_{off}\) and \(\Delta v\) of the line measured in each region in columns 4 and 5, respectively.
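For reference, the FWZP quoted here can be measured from a spectrum with a simple thresholding routine like the sketch below; adopting \(3\sigma\) as the effective "zero power" level in noisy data is our assumption.

```python
import numpy as np

def fwzp(velocity, flux, rms, n_sigma=3.0):
    """Full width at zero power: velocity span where the line exceeds n_sigma*rms."""
    above = np.flatnonzero(flux > n_sigma * rms)
    if above.size == 0:
        return 0.0
    return velocity[above[-1]] - velocity[above[0]]
```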
To estimate the energetics of the emission, we calculated for each region the SiO(1\(-\)0) luminosity using the expression
\[L_{\mathrm{SiO}}=4\pi\,d^{2}\,\int S_{\nu}\,dv, \tag{2}\]
where \(d\) and \(\int S_{\nu}\,dv\) are the distance and integrated flux of the emission, respectively. In columns 6 and 7 of Table 5, we list the measured \(\int S_{\nu}\,dv\) values and implied luminosities. The obtained SiO(1\(-\)0) luminosities range between \(10^{-8}\sim 10^{-6}\,L_{\odot}\).
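Equation (2) translates directly into code; the one step we have added is converting the velocity-integrated flux into frequency units using the SiO(1\(-\)0) rest frequency.

```python
import numpy as np
from astropy import units as u, constants as const

def sio_luminosity(int_flux, d, nu0=43.42376 * u.GHz):
    """Eq. (2): L_SiO = 4 pi d^2 * integral(S_nu dv), with dv -> dnu = nu0 dv / c."""
    S_dnu = (int_flux * nu0 / const.c).to(u.Jy * u.Hz)
    return (4 * np.pi * d**2 * S_dnu).to(u.L_sun)

# Example: 0.1 Jy km/s at d = 1.9 kpc gives ~1.6e-8 L_sun, within the quoted range.
print(sio_luminosity(0.1 * u.Jy * u.km / u.s, 1.9 * u.kpc))
```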
### CS \(J=1-0\)
We detected CS(1\(-\)0) emission in 90% of the regions observed, which is all except G53.25. In Figure 5, we show the CS moment-0 map toward the 18517 region. While in some cases the CS emission peak is essentially coincident with the 7 mm core (e.g., 18553, Fig. A4.2), in other cases it is found at a considerable distance from the jet candidate (e.g., G53.11, Fig. A7.2), and in other regions we are not able to associate it with a singular core (e.g., 18440, Fig. A2.2). Additionally, and un
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline \multicolumn{1}{c}{ Source} & \multicolumn{2}{c}{Geom. Mean FWHM} & M\({}_{d}\) & n(H\({}_{2}\)) & N(H\({}_{2}\)) \\ & [\({}^{\prime\prime}\)] & [\(10^{3}\) au] & [M\({}_{\odot}\)] & [\(10^{7}\) cm\({}^{-3}\)] & [\(10^{24}\) cm\({}^{-2}\)] \\ \hline
18345 C\({}_{1}\) & 1.9 & 17.8 & 83 \(\pm\)22 & 0.3 \(\pm\)0.1 & 0.6 \(\pm\)0.2 \\
18440 C\({}_{1}\) & 1.8 & 9.3 & \(-\)a & –a & –a \\
18440 C\({}_{2}\) & 1.7 & 8.8 & 113 \(\pm\)17 & 0.9 \(\pm\)0.1 & 1.4 \(\pm\)0.2 \\
18517 C\({}_{1}\) & 0.7 & 1.3 & 14 \(\pm\)3 & 171 \(\pm\)41 & 21.5 \(\pm\)5.2 \\
18553 C\({}_{1}\) & 1.3 & 15.9 & 627 \(\pm\)155 & 3.7 \(\pm\)0.9 & 6.0 \(\pm\)1.5 \\
19012 C\({}_{2}\) & 1.1 & 4.7 & \(-\)a & \(-\)a & \(-\)a \\
19266 C\({}_{1}\) & 1.7 & 16.0 & 429 \(\pm\)80 & 2.5 \(\pm\)0.5 & 4.0 \(\pm\)0.7 \\
19266 C\({}_{2}\) & 1.7 & 16.6 & 92 \(\pm\)24 & 0.5 \(\pm\)0.1 & 0.8 \(\pm\)0.2 \\
19266 C\({}_{3}\) & 0.8 & 7.8 & 189 \(\pm\)34 & 9.6 \(\pm\)1.7 & 7.5 \(\pm\)1.3 \\
19266 C\({}_{4}\) & 1.7 & 16.6 & 141 \(\pm\)25 & 0.7 \(\pm\)0.1 & 1.2 \(\pm\)0.2 \\
19266 C\({}_{5}\) & 1.7 & 16.6 & 152 \(\pm\)27 & 0.8 \(\pm\)0.1 & 1.3 \(\pm\)0.2 \\
19266 C\({}_{6}\) & 1.7 & 16.6 & 142 \(\pm\)25 & 0.7 \(\pm\)0.1 & 1.2 \(\pm\)0.2 \\ G53.11 C\({}_{1}\) & 1.7 & 3.3 & 9 \(\pm\)1 & 5.9 \(\pm\)1.1 & 2.0 \(\pm\)0.4 \\
20293 C\({}_{1}\) & 1.5 & 2.0/3.0b & 3/8 \(\pm\)1/2 & 11.0/7.2\(\pm\)2.6/1.7 & 2.2 \(\pm\)0.5 \\
20293 C\({}_{2}\) & 1.9 & 2.4/3.7b & 4/10 \(\pm\)1/2 & 7.1/4.6b & 1.3/0.8 & 1.7 \(\pm\)0.3 \\
20293 C\({}_{3}\) & 1.4 & 1.8/2.8b & –a & –a & –a & –a & –a & –a & –a & –a & –a & –a & –a & –a & –a & –a \\
19266 C\({}_{2}\) & 1.7 & 16.6 & 141 \(\pm\)25 & 0.7 \(\pm\)0.1 & 1.2 \(\pm\)0.2 \\
19266 C\({}_{5}\) & 1.7 & 16.6 & 152 \(\pm\)27 & 0.8 \(\pm\)0.1 & 1.3 \(\pm\)0.2 \\
19266 C\({}_{6}\) & 1.7 & 16.6 & 142 \(\pm\)25 & 0.7 \(\pm\)0.1 & 1.2 \(\pm\)0.2 \\
19266 C\({}_{6}\) & 1.7 & 16.6 & 142 \(\pm\)25 & 0.7 \(\pm\)0.1 & 1.2 \(\pm\)0.2 \\
19266 C\({}_{6}\) & 1.7 & 16.6 & 152 \(\pm\)27 & 0.8 \(\pm\)0.1 & 1.3 \(\pm\)0.2 \\
19266 C\({}_{6}\) & 1.7 & 16.6 & 142 \(\pm\)25 & 0.7 \(\pm\)0.1 & 1.2 \(\pm\)0.2 \\
19266 C\({}_{6}\) & 1.7 & 16.6 & 142 \(\pm\)25 & 0.7 \(\pm\)0.1 & 1.2 \(\pm\)0.2 \\ G53.11 C\({}_{1}\) & 1.7 & 3.3 & 9 \(\pm\)1 & 5.9 \(\pm\)1.1 & 2.
like the SiO flows, we find significant morphology variations in the different regions. The CS emission is usually clumpy, quite extended in some cases (e.g., 20343, Fig. A10.2) and elongated or filamentary in others (e.g., 20293, Fig. A9.2).
In Figure 4, we show in red the spectrum of the CS(1\(-\)0) emission in 18517, obtained by integrating over all the emission that appears to be connected to C\({}_{1}\) and lies within the 35% contour level shown in Figure 5. We find that most CS spectra show single-peaked and symmetric lines, with the exception of 20293 (Fig. A9.2). We note, however, that due to our lack of sufficiently short spacings, not all CS emission was recovered by our observations; therefore, additional observations with more compact arrays are needed to reliably map the CS distribution in our sample.
## 4 Discussion
### Continuum emission
The association of the radio continuum sources with dust cores traced by 1.2 mm emission was introduced by Rosero et al. (2016), who based their analysis on available single-dish data. The higher resolution of our 7 mm data now allow us to clearly connect the dust cores and ionized gas and thus investigate where star formation occurs. The cores traced by our observations have typical sizes of \(\approx 10,000\) au, and most of the radio continuum sources in our sample are located toward the center of the 7 mm cores (see Section 3.1 and Figures A1.1, A3.1, A4.1, A5.1, A6.1, A7.1, and A10.1). This confirms that the cores are associated with YSOs, and indicates these YSOs are deeply embedded, which, in turn, suggests they are in an early evolutionary stage. Additionally, the large masses (\(\approx 100\,M_{\odot}\)) and high densities (\(\approx 10^{7}\) cm\({}^{-3}\)) estimated for most cores confirm that these are high-mass objects. Therefore, our observations reveal a scenario similar to that predicted by the core accretion model (e.g., McKee & Tan, 2003), with an accreting, emerging protostar in the center of a dust core, which propels an ionized jet.
However, not in all cases do the 1.3 cm and 7 mm peaks exactly coincide (e.g., A2.1, left). We observe slight offsets between the radio continuum sources and the 7 mm core peak across the sample that range between \(500\sim 4,300\) au, with a mean separation of about 2,000 au. We note that all the cores where a position offset is observed are weak detections (i.e., peak \(\lesssim 5\sigma\)) and in all cases the offset is within the 7 mm beam size. It is then possible that this position discrepancy is an
effect of low signal-to-noise ratio in our observations, and we caution against overinterpretation of this feature. Nonetheless, position discrepancies were also noted by Rosero et al. (2016) in several cases when associating the radio continuum sources with the single-dish 1.2 mm cores. The offsets reported by these authors range between 4,000 and 10,000 au (median values for CMC-IR and HMC). Additionally, Liu et al. (2021) found 6 cm and 1.3 mm continuum peak offsets in a number of their sources as well, with typical separation distances of approximately 2,500 au. If such offsets are indeed real, they could be interpreted as the result of a jet-cloud collision. This would indicate that the observed radio continuum emission is a jet knot located at a certain distance from the driving source. Since only one of the offset sources has been associated with an SiO outflow (20293 C\({}_{1}\), see Fig. A9.2), we are not able to extend this interpretation to all our case studies.

Figure 3: SiO integrated intensity in 18517 is shown in color and black contours. The latter represent \([-3,3,3.5,4,4.5]\)\(\times\)\(\sigma_{SiO}\), with \(\sigma_{SiO}=18.1\) mJy beam\({}^{-1}\) km s\({}^{-1}\). White contours show the 7 mm emission, and are the same as in Figure 1. The white cross marks the position of the radio continuum source. The filled, white ellipse in the bottom left represents the synthesized beam size. The SiO(1\(-\)0) emission is highly collimated, and is clearly associated with the jet candidate embedded in the 7 mm core. The emission is monopolar, and extends to about 0.08 pc from the 7 mm core. The green line was drawn to guide the reader on the possible direction of the emission.

Figure 4: SiO \(\times 5\) and CS \(J=1-0\) emission in 18517 are shown in blue and red, respectively. The SiO and CS spectra were obtained by integrating over all the emission that appears to be spatially connected to C\({}_{1}\) and enclosed in the 3\(\sigma\) and 35% contour levels from Figs. 3 and 5, respectively. The SiO emission is highly asymmetric, and shows a clear line wing toward blue-shifted velocities. The CS line is single peaked and symmetric. The vertical dotted line marks the systemic velocity.
### SiO emission
Shocks are thought to be the main source of gaseous SiO. Both shocks associated with cloud-cloud collisions and with ionized jets can provide the conditions to highly enrich the SiO abundance. In the former, the SiO emission is observed as large cloud scale flows with relatively small (\(<2\sim 5\,\mathrm{km\ s}^{-1}\)) line widths (e.g., Jimenez-Serra et al., 2010; Cosentino et al., 2018, 2020). Our observations, on the other hand, present projected sizes and line widths of \(<1\,\mathrm{pc}\) and \(\gtrsim 10\,\mathrm{km\ s}^{-1}\) (see Table 5), respectively. Hence, we discard this scenario as the origin of the SiO emission detected.
As mentioned before, we observe clumpy and collimated SiO structures, which are a common trait of jet-driven molecular flows. In favor of this interpretation, molecular outflows have been previously detected in 18345, 18517, 18553, 19012, and 20293 using probes such as CO and HCO\({}^{+}\) (see Appendix A for details). In all cases where an image of the outflow is available, we find that the direction of the SiO emission is consistent with the outflow axis. Additionally, morphological and line asymmetries are usual in these kind of objects, and monopolar SiO flows are rather common in both low- and high-mass protostars as well (e.g., Zapata et al., 2006; Nony et al., 2020; Jhan et al., 2022). It is worth noting that the nature of this monopolarity is not well understood; while some authors argue that this feature is intrinsic (e.g., Codella et al., 2014), others propose that it can be explained by asymmetries in the ambient gas (e.g., Fernandez-Lopez et al., 2013). In conclusion, it is our interpretation that the observed SiO emission is driven by ionized jets.
The next logical question is whether these SiO flows are associated with jet candidates. We are able to morphologically connect all the observed SiO flows with a 7 mm core associated with a jet candidate, thus we propose that these are the driving sources. In fact, according to Liu et al. (2021), sources with luminosities \(\gtrsim 10^{2}\,L_{\odot}\) can drive strong enough jets to sputter the surrounding dust grains, allowing the detection of SiO. Based on the estimations made by Rosero et al. (2019) using Hi-GAL data, all the sources in our sample associated with an SiO outflow have luminosities \(>10^{2}\,L_{\odot}\), which supports our interpretation.
#### 4.2.1 Comments on Detection Rate
The SiO detection rate is generally greater than 50% in the early stages of high-mass star formation, and it decreases for more evolved objects (see Section 5.1 of Liu et al., 2021, for a more detailed discussion). In our sample, which consists mostly of sources in the HMC evolutionary phase, we detected SiO jets in 6 of the 10 regions observed, which translates to a detection rate of 60%. Regarding the other 40% of our sample, either the SiO emission is lacking in those regions or the sensitivity of our observations is insufficient to detect it. If the non
detection is due to the latter, then the SiO line intensity in those regions is weaker than \(\sim 15\) mJy beam\({}^{-1}\), which is our \(3\sigma\) limit. The regions where we did detect SiO emission have distances ranging between \(\sim 2\) and 12.3 kpc, thus a correlation with distance is not immediately apparent. Note that the outflows associated with more distant sources are orders of magnitude more luminous than those driven by sources located within 2 kpc of the Sun. Of the non-detected regions, 18440 and 19266 have distances larger than 5 kpc, while G53.25 and 20343 are within 2 kpc.

Figure 5: CS(1\(-\)0) integrated intensity map toward 18517 in color and black contours. The contour levels represent \(-\)35%, 35%, 55%, 75%, and 95% of the peak CS(1\(-\)0) emission, with the peak emission and rms of the map being 600 and 147 mJy beam\({}^{-1}\) km s\({}^{-1}\), respectively. The white contours show the 7 mm emission and are the same as in Fig. 1. The outlined and filled white ellipses in the bottom left represent the CS and 7 mm continuum synthesized beam sizes, respectively. The green line was drawn to guide the reader on the possible direction of the SiO emission and is the same as in Figure 3. The CS emission presents an elongated morphology in the North-South direction, although slightly asymmetric, more intense to the South. Its peak coincides with the 7 mm continuum peak.
Most previous surveys have used higher \(J\) transitions of the SiO molecule (e.g., Gibb et al., 2007; Csengeri et al., 2016; Li et al., 2019; Liu et al., 2021), which are expected to be brighter, and hence easier to detect. Nonetheless, our detection rate is similar to what most other studies have found, so it appears that VLA SiO \(J=1-0\) observations are useful to trace jets in high-mass star forming regions.
#### 4.2.2 Energetic and spatial correlations
Liu et al. (2021) observed the SiO(\(5-4\)) emission toward a set of infrared dark clouds associated with MYSOs with a resolution comparable to ours (\(\sim 1^{\prime\prime}\)). The SiO luminosities we measured are similar to those they report after considering the expected intensity ratio between the \(J=1-0\) and \(J=5-4\) lines, assuming LTE conditions with \(T=30\) K. Additionally, we found a positive correlation between the SiO luminosity \(L_{SiO}\) and the bolometric luminosity \(L_{Bol}\) of the sources, as shown in the top panel of Figure 6. This has also been seen in previous studies (e.g., Liu et al., 2021; Codella et al., 1999; Liu et al., 2022) and, although the scatter in our observations is significant, supports the idea that more luminous objects drive stronger SiO jets.
A positive correlation was also observed between \(L_{SiO}\) and the radio luminosity \(S_{1.3cm}d^{2}\), which we present in the bottom panel of Figure 6. This points to a connection between the outflowing ionized and molecular gas, and suggests that more radio-luminous objects drive more luminous flows, similar to what Figure 9 of Rosero et al. (2019) shows.
We found no clear correlation between \(L_{SiO}\) and \(L_{Bol}/M_{d}\), often used as an indicator of the evolutionary stage of a YSO. It is possible, though, that this lack of correlation is impacted by the fact that the sources targeted in this work are all extremely young and expected to be of similar age.
Regarding other outflow tracers, we find that the SiO flows seem to be oriented in the same direction as the molecular outflows traced by other molecules. Unfortunately, the available data are highly inhomogeneous (see Appendix A for details). This hampered our attempt to search for correlations between the CO (or HCO\({}^{+}\)) and SiO flows, from both a morphological and an energetic point of view.
We also searched the literature and archival data for spatial correlations between the SiO and the shocked gas tracer 2.122 \(\mu\)m H\({}_{2}\), as well as the 4.5 \(\mu\)m (also known as green band) excess from the Spitzer Space Telescope2 as provided by IRSA (2022). We found H\({}_{2}\) features in 18345 (Varricatt et al., 2010, 2013) and 20293 (Beuther
et al. 2004), where only the former appears to be coincident with an SiO knot. We only found significant extended Spitzer 4.5 \(\mu\)m excess emission in 18517 (Fig. 7) and 19012 (Fig. A5.2). In both cases, the green-band emission seems to be associated with the jet candidate, but elongated in a direction quasi-perpendicular to the SiO flow.

Figure 6: SiO luminosity \(L_{SiO}\) versus bolometric luminosity \(L_{Bol}\) (top) and 1.3 cm radio luminosity \(S_{\nu}d^{2}\) (bottom) plots. The outlined square and circle show the luminosity values for 20293 C\({}_{1}\) if the near or far distance is adopted, respectively. Although the scatter in our data is significant, our observations suggest a positive correlation between \(L_{SiO}\) and both \(L_{Bol}\) and \(S_{\nu}d^{2}\).
## 5 Summary and Conclusions
The results of this work can be summarized as follows:
1. We detected 7 mm continuum emission toward 90% of the jet candidates, and identified a total of 23 individual cores.
2. We found that, in most cases, the radio continuum source is essentially coincident with the 7 mm peak, suggesting these are deeply embedded objects. Additionally, the large masses and densities of the cores support the idea that these are MYSOs in the earliest stages of formation.
3. We detected SiO(1\(-\)0) flows in 6 of the targeted regions. In all cases, the flows appear to be associated with a jet candidate, thus confirming the jet nature of 60% of our sample.
4. Although based on a small sample with substantial scatter, our data suggest a positive correlation between the SiO luminosity (\(L_{SiO}\)) and both the bolometric (\(L_{Bol}\)) and radio luminosity (\(S_{\nu}d^{2}\)) of the driving sources.
This is, to the best of our knowledge, the first search for molecular jets carried out with the VLA using the SiO \(J=1-0\) line. Our results demonstrate both the suitability of this transition to trace molecular flows from MYSOs, and the capability of the VLA to conduct this kind of work at relatively high frequency and with only a few minutes of time on-source. Our study adds to the growing list of studies with high detection rates of SiO in the earliest stages of star formation (e.g., Lopez-Sepulcre et al. 2011; Csengeri et al. 2016b; Liu et al. 2021, 2022).
###### Acknowledgements.
We wish to thank the anonymous referee for comments and suggestions that helped improve this work. P. H. and E. D. A. acknowledge support from NSF grants AST-1814011, and AST-1814063, respectively.
|
2307.10769 | Fluid dynamics from the Boltzmann equation using a maximum entropy
distribution | Using the recently developed ``Maximum Entropy'' (or ``least biased'')
distribution function to truncate the moment hierarchy arising from kinetic
theory, we formulate a far-from-equilibrium macroscopic theory that provides
the possibility of describing both free-streaming and hydrodynamic regimes of
heavy-ion collisions within a single framework. Unlike traditional hydrodynamic
theories that include viscous corrections to finite order, the present
formulation incorporates contributions to all orders in shear and bulk inverse
Reynolds numbers, allowing it to handle large dissipative fluxes. By
considering flow profiles relevant for heavy-ion collisions (Bjorken and Gubser
flows), we demonstrate that the present approach provides excellent agreement
with underlying kinetic theory throughout the fluid's evolution and,
especially, in far-off-equilibrium regimes where traditional hydrodynamics
breaks down. | Chandrodoy Chattopadhyay, Ulrich Heinz, Thomas Schaefer | 2023-07-20T10:57:37Z | http://arxiv.org/abs/2307.10769v2 | # Fluid dynamics from the Boltzmann equation using a maximum entropy distribution
###### Abstract
Using the recently developed 'Maximum Entropy' (or 'least biased') distribution function to truncate the moment hierarchy arising from kinetic theory, we formulate a far-from-equilibrium macroscopic theory that provides the possibility of describing both free-streaming and hydrodynamic regimes of heavy-ion collisions within a single framework. Unlike traditional hydrodynamic theories that include viscous corrections to finite order, the present formulation incorporates contributions to all orders in shear and bulk inverse Reynolds numbers, allowing it to handle large dissipative fluxes. By considering flow profiles relevant for heavy-ion collisions (Bjorken and Gubser flows), we demonstrate that the present approach provides excellent agreement with underlying kinetic theory throughout the fluid's evolution and, especially, in far-off-equilibrium regimes where traditional hydrodynamics breaks down.
## I Introduction
Obtaining equations of hydrodynamics, a macroscopic theory that governs the space-time evolution of conserved densities of a system, from a microscopic description of the same system requires 'coarse-graining'. For a system composed of weakly-interacting particles described kinetically in terms of a single-particle phase-space distribution \(f(x,p)\), the coarse-graining can be achieved by integrating out the momentum information encoded in the distribution and focusing the attention on its lowest momentum moments. In kinetic theory, the time-evolution of the particle distribution is described by the Boltzmann equation which can be re-cast as an infinite hierarchy of equations for the coarse-grained'moments' of the distribution. The low-order moments correspond to conserved densities, their fluxes, as well as non-equilibrium components of the conserved currents, such as shear and bulk viscous stresses and charge diffusion currents. In the full microscopic description, the evolution of these non-equilibrium fluxes couples to higher-order 'non-hydrodynamic' moments of the distribution. To obtain a hydrodynamic theory one therefore requires a truncation scheme, i.e. a procedure to close the system of equations by expressing the non-hydrodynamic moments in terms of hydrodynamic ones. This step amounts to reconstructing an approximate distribution function using only information contained in a handful of its low-order moments. While this procedure is crucial in determining the range of applicability of the ensuing hydrodynamic equations, it is inherently ambiguous and mathematically ill-defined. The main objective of this paper is to use a truncation distribution that, on the one hand, originates from a well-motivated information-theoretical principle, and on the other allows for the formulation of a hydrodynamic theory that can also work when the system is not close to local thermal equilibrium.
Previous work on obtaining relativistic hydrodynamics from kinetic theory explored several different classes of truncation distributions. Well-known examples are the Grad 14-moment approximation [1; 2; 3; 4], the Chapman-Enskog expansion [5], and the more recently introduced Romatschke-Strickland anisotropic distribution [6; 7]. All of these truncation distributions invoke extra, and sometimes ad-hoc, approximations that are based on specific assumptions about the microscopic kinetic dynamics and thereby introduce additional information on top of that contained in the hydrodynamic degrees of freedom. For example, in Grad's approach, the distribution function is expanded around a local thermal form (the Jüttner distribution) in powers of particle momenta. This was extended to relativistic kinetic theory by Israel and Stewart [3; 4] and, more recently, by Denicol, Niemi, Molnar, and Rischke [8] into a general framework for second-order transient relativistic fluid dynamics. A slightly different approximation for the truncating distribution function, in which the expansion is in powers of the fluid's velocity gradients (the Chapman-Enskog series [5]), was implemented by Jaiswal [9; 10]. Both of these approaches assume proximity of the fluid to local equilibrium and lead to inconsistencies when the system is far from equilibrium. A typical manifestation of the latter is that the distribution function turns negative (unphysical) at large particle momenta. Some of these problems can be bypassed by using the Romatschke-Strickland ansatz as a truncation distribution. Doing so gives rise to 'anisotropic hydrodynamics' and its extensions [11; 12; 13; 14; 15; 16; 17; 18; 19; 20]. Although the resulting framework does not require the fluid to be close to local equilibrium, this approach is tailored to a rather particular flow pattern, namely the one characterizing the early-stage dynamics of quark-gluon plasma formed in relativistic heavy-ion collisions.1 In such collisions the fluid initially undergoes strongly anisotropic expansion mostly along the beam direction, leading to substantial momentum-space anisotropy which the Romatschke-Strickland ansatz [6] captures efficiently. However, this raises the question of how to generalize this ansatz in a principled approach [7] to far-off-equilibrium fluids expanding with more general, fully three-dimensional flow profiles.
Recently, Everett _et al._ [22] proposed the Maximum Entropy principle [23; 24] as the guiding idea for reconstructing a single-particle distribution function uniquely using only the macroscopic information encoded in the hydrodynamic variables of a relativistic system. The resulting distribution, known as the 'maximum-entropy' or 'least-biased' distribution, maximizes the Shannon (information) entropy of the system subject to all of, but nothing more than, the information contained in the hydrodynamic degrees of freedom. The work in [22] was motivated by the problem of 'particlizing' the fluid in a heavy-ion collision at the end of its evolution. This procedure provides the distribution of particles emitted from the collision fireball, including their species identity, space-time positions, and four-momenta.2 In the present work we generalize this approach by turning it into a dynamical framework for general far-off-equilibrium hydrodynamic evolution in 3+1 dimensions, which we call _ME hydrodynamics_ (or _ME hydro_ in short).
Footnote 2: It was recently generalized by Pradeep and Stephanov [25] to uniquely reconstruct multiparticle distribution functions that correctly encode also the thermal and critical fluctuations inherited from the hydrodynamic evolution of the fluid.
In essence, ME hydro implements the following algorithm: The task is to evolve the energy-momentum tensor \(T^{\mu\nu}\) and the conserved baryon, strangeness and electric currents \(j^{\mu}_{B,S,Q}\) in space and time. We use hydrodynamic evolution equations obtained from momentum moments of the relativistic Boltzmann equation. There are two types of equations: the conservation laws for energy, momentum and the conserved charges, and a set of relaxation-type equations for the dissipative flows (i.e. the bulk and shear viscous stresses and the diffusion currents). The latter couple to higher, non-hydrodynamic moments of the distribution function and therefore require truncation. The form of these equations is universal [26]; detailed information about the microscopic physics in the medium is encoded in its equation of state and transport coefficients.
When advancing the system by one time step, most terms in the evolution equations can be obtained from the components of \(T^{\mu\nu}\) and \(j^{\mu}_{B,S,Q}\) from the previous time step. The terms which cannot, i.e. the couplings to higher, non-hydrodynamic modes (but only these!), are evaluated with the following simple model: Assume that the fluid is made up of a (single- or multi-component, as desired) gas of massive particles with Boltzmann, Bose-Einstein and/or Fermi-Dirac statistics (again as desired), with mass(es) corresponding to typical expectations for the presumed microscopic constituents. Match an equilibrium distribution \(f_{\rm eq}\) and a Maximum Entropy distribution \(f_{\rm ME}\) for the gas particles to \(T^{\mu\nu}\) and \(j^{\mu}_{B,S,Q}\) from the previous time step, and use \(\delta f=f_{\rm ME}-f_{\rm eq}\) to evaluate the non-hydrodynamic moments. Advance by one step, and repeat.
It should be noted that the Maximum Entropy distribution \(f_{\rm ME}\) introduced in this algorithm for the purpose of truncating the moment hierarchy _does not_ solve the Boltzmann equation, nor is its deviation from equilibrium \(\delta f=f_{\rm ME}-f_{\rm eq}\) assumed to be small. It is just the 'most likely' \(\delta f\) (in an information theoretical sense) compatible with the hydrodynamic state at each time step and at every point of the spatial grid. The described procedure is unambiguous and well-motivated even for large dissipative flows (i.e. large inverse Reynolds numbers \({\rm Re}^{-1}\)).
In addition to the fact that the Maximum Entropy distribution \(f_{\rm ME}\) stems from a deep connection between information theory and statistical mechanics, it has several other appealing features. For example, it is positive definite over the entire range of particle momenta,3 it can generate a wide range of non-equilibrium stresses allowed by kinetic theory, and it does not require the system to be close to local thermal equilibrium. Accordingly, we propose to truncate the infinite hierarchy of moment equations for the Boltzmann equation with \(f_{\rm ME}\), in order to obtain hydrodynamic equations that can be used even far away from local equilibrium.
Footnote 3: This allows it to be interpreted as a probability density of particles in phase-space.
For non-relativistic fluids this method was introduced in the 1990s by Levermore [27], who showed that using a Maximum Entropy distribution for moment closure leads to a system of equations that is hyperbolic, i.e. its initial value problem is well-posed, and that always satisfies the second law of thermodynamics. Murchikova _et al._ [28] studied this closure for the relativistic Boltzmann equation describing neutrino transport in astrophysics and compared it to other truncation schemes. For conformally invariant relativistic fluids a related approach called 'dissipative type theory' (DTT) was advocated by Cantarutti and Calzetta [29; 30; 31]. However, in contrast to ME hydrodynamics, DTT is not based on the Maximum Entropy principle, but on the principle of maximizing _the rate of entropy production_. Since that rate is controlled by the collision term in the Boltzmann equation, maximizing it requires additional information about the microscopic kinetic processes which ME hydrodynamics does not require.4
Footnote 4: As will be discussed in Section II.4, in deriving DTT the authors of [31] make several additional approximations, one of which is the assumption of small deviations from local equilibrium. As a result of these approximations their distribution \(f_{DTT}\) agrees exactly with \(f_{\rm ME}\) but, according to the approximations made, their form for \(f_{DTT}\) should not be used in far-from-equilibrium situations.
This paper is organized as follows. In Sec. II we review the relativistic Boltzmann equation with a relaxation-type collisional kernel and its re-formulation in terms of moment equations. We then briefly describe in Secs. II.1-II.4 some of the standard truncation schemes, culminating in the Maximum Entropy truncation procedure.
In Secs. III and IV we test the performance of ME hydrodynamics in two highly symmetric situations, Bjorken and Gubser flow, for which the underlying kinetic theory can be solved exactly, allowing for a quantitatively precise comparison between the microscopic and macroscopic descriptions. While the symmetries underlying Gubser flow include conformal symmetry, this symmetry can be broken in Bjorken flow, for which we study both massless and massive constituents. Both of these flow profiles include regimes where the system is far from equilibrium; they thus provide stringent test beds for ME hydrodynamics far from local equilibrium. For both flow profiles ME hydrodynamics passes the test with flying colors. Conclusions are offered in Sec. V. Several appendices add technical details and provide further clarifications.
## II The Boltzmann equation in relaxation time approximation
We consider a weakly coupled system of particles whose dynamics is described statistically by relativistic kinetic theory. The statistical description relies on a single-particle distribution function \(f(x,p)\) which gives the mean density of particles in phase-space. The evolution of the distribution function is governed by the relativistic Boltzmann equation
\[p^{\mu}\partial_{\mu}f(x,p)=\mathcal{C}[f], \tag{1}\]
where \(p^{\mu}\) is the particle four-momentum with \(p^{\mu}p_{\mu}=m^{2}\), \(m\) being the particle mass. The collisional kernel \(\mathcal{C}[f]\) models the interactions between particles and typically includes the effects of \(2\leftrightarrow 2\) scatterings. In this work, we choose a simple collisional kernel given by the relaxation-time approximation (RTA) [32] in which the complicated effects of interactions are assumed to drive the system toward _local_ equilibrium. To specify local equilibrium, this description introduces macroscopic variables like the flow velocity \(u^{\mu}(x)\), temperature \(T(x)\), and chemical potential \(\mu(x)\), such that the collisional kernel is approximated by [32]
\[\mathcal{C}[f]\approx-\frac{u\cdot p}{\tau_{R}}\,\left(f-f_{\rm eq}\right). \tag{2}\]
In the above, \(\tau_{R}(x)\) is the relaxation time whose functional dependence on temperature and chemical potential has to be parametrized. The form of the equilibrium phase-space distribution is given by the Jüttner distribution
\[f_{\rm eq}(x,p)=\Big{\{}{\rm exp}\big{[}(u(x)\cdot p)/T(x)-\alpha(x)\big{]}- \epsilon\Big{\}}^{-1}, \tag{3}\]
where \(u^{\mu}(x)\), with \(u^{\mu}u_{\mu}=1\), is the four-velocity of the local fluid rest frame, \(\alpha(x)\equiv\mu(x)/T(x)\) is the reduced chemical potential, and \(\epsilon=-1,0,1\) distinguishes between Fermi-Dirac, Maxwell-Boltzmann, or Bose-Einstein statistics. In the following, we assume that the particles satisfy Boltzmann statistics (\(\epsilon=0\)) and that their number is not conserved (\(\alpha(x)=0\)). This reduces the number of macroscopic variables to 4, namely \(u^{\mu}(x)\) and \(T(x)\). These variables are determined by demanding that the RTA collisional kernel (2) satisfies energy-momentum conservation. The energy-momentum tensor is the second moment of \(f(x,p)\),
\[T^{\mu\nu}\equiv\langle p^{\mu}\,p^{\nu}\rangle, \tag{4}\]
where we use the notation \(\langle\cdots\rangle\equiv\int dP\,(\cdots)\,f(x,p)\), with \(dP\equiv d^{3}p/[(2\pi)^{3}E_{p}]\) being the Lorentz invariant phase-space measure and \(E_{p}=\sqrt{p^{2}+m^{2}}\) the particle energy. We also define \(\langle\cdots\rangle_{\rm eq}\equiv\int dP\,(\cdots)\,f_{\rm eq}\) and \(\langle\cdots\rangle_{\delta}\equiv\int dP\,(\cdots)\,\delta f\) where \(\delta f\equiv f-f_{\rm eq}\) is the deviation from local equilibrium. Demanding \(\partial_{\mu}T^{\mu\nu}=0\) and using the Boltzmann equation with \(\mathcal{C}[f]\) given by Eq. (2) for a momentum-independent \(\tau_{R}\) yields5
Footnote 5: If the relaxation-time depends on particle momentum, the RTA collisional kernel has to be generalized to be compatible with conservation laws [33; 34; 35; 36].
\[T^{\mu}_{\nu}\,u^{\nu}=e_{\rm eq}\,u^{\mu} \tag{5}\]
where for our system the equilibrium energy density \(e_{\rm eq}=\left\langle\left(u\cdot p\right)^{2}\right\rangle_{\rm eq}\) is given by the equation of state (EoS)
\[e_{\rm eq}(T,m)=\frac{3T^{4}z^{2}}{2\pi^{2}}\,\left(K_{2}(z)+\frac{z}{3}K_{1}( z)\right), \tag{6}\]
with \(z\equiv m/T\) and \(K_{n}\) being the modified Bessel functions of the second kind of order \(n\). Eq. (5) is the Landau matching condition, stating that \(u^{\nu}\) and \(e_{\rm eq}\) are the time-like eigenvector and associated eigenvalue of \(T^{\mu}_{\nu}\). The EoS (6) defines the local temperature \(T(x)\) in terms of the energy density \(e(x)\) in the fluid's rest frame, i.e., \(e(x)=e_{\rm eq}(T(x),m)\). The matching condition (5) implies that the energy-momentum tensor of the system can be decomposed as
\[T^{\mu\nu}=e_{\rm eq}\,u^{\mu}\,u^{\nu}-(P+\Pi)\,\Delta^{\mu\nu}+\pi^{\mu\nu}, \tag{7}\]
where \(\Delta^{\mu\nu}\equiv g^{\mu\nu}-u^{\mu}u^{\nu}\) projects any tensor orthogonal to \(u^{\mu}\). The coefficient of \(\Delta^{\mu\nu}\) is the total isotropic pressure which is split into an equilibrium part, \(P(T,m)\), and a bulk viscous part \(\Pi\). The symmetric, traceless, and orthogonal (to \(u^{\mu}\)) part of \(T^{\mu\nu}\) is the shear stress tensor \(\pi^{\mu\nu}\). In local equilibrium both \(\Pi\) and \(\pi^{\mu\nu}\) vanish, i.e. they arise solely from deviations from local equilibrium:
\[\Pi \equiv-\frac{1}{3}\,\langle\Delta_{\mu\nu}\,p^{\mu}\,p^{\nu} \rangle_{\delta},\] \[\pi^{\mu\nu} \equiv\langle p^{\langle\mu}\,p^{\nu\rangle}\,\rangle_{\delta}. \tag{8}\]
In the last definition we used the notation \(A^{\langle\mu\nu\rangle}\equiv\Delta^{\mu\nu}_{\alpha\beta}\,A^{\alpha\beta}\), with the double-symmetric, traceless and orthogonal projector
\[\Delta^{\mu\nu}_{\alpha\beta}\equiv\left(\Delta^{\mu}_{\alpha}\,\Delta^{\nu }_{\beta}+\Delta^{\mu}_{\beta}\,\Delta^{\nu}_{\alpha}\right)/2-\frac{1}{3}\, \Delta^{\mu\nu}\,\Delta_{\alpha\beta}. \tag{9}\]
Using Eq. (4) it is straightforward to show that the Landau matching condition (5) implies
\[\left\langle\left(u\cdot p\right)^{2}\right\rangle_{\delta}=0,\quad\left\langle p ^{\left\langle\mu\right\rangle}\,\left(u\cdot p\right)\right\rangle_{\delta}=0. \tag{10}\]
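For later numerical use it may help to see how the EoS (6) and the Landau-matching inversion \(e=e_{\rm eq}(T,m)\) translate into code. The following is a minimal sketch (Python with numpy/scipy assumed, natural units; the temperature bracket passed to the root-finder is an arbitrary illustrative choice):

```python
import numpy as np
from scipy.special import kv          # modified Bessel function K_n of the second kind
from scipy.optimize import brentq

def e_eq(T, m):
    """Equilibrium energy density of Eq. (6) for a Boltzmann gas."""
    z = m / T
    return 3.0*T**4*z**2/(2.0*np.pi**2) * (kv(2, z) + (z/3.0)*kv(1, z))

def temperature(e, m, T_lo=1e-4, T_hi=10.0):
    """Landau matching: invert e = e_eq(T, m) for the local temperature."""
    return brentq(lambda T: e_eq(T, m) - e, T_lo, T_hi)
```

In the conformal limit \(z\to 0\) the function `e_eq` reproduces \(e=3T^{4}/\pi^{2}\), which provides a quick sanity check of the implementation.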
One can either solve the transport equation (1) to obtain \(f(x,p)\) and then use it to calculate the shear and bulk viscous pressures from Eq. (8), or re-cast the Boltzmann equation into an infinite system of coupled evolution equations for moments of \(f(x,p)\) whose solution yields the bulk and shear stresses. Exact evolution equations for the dissipative fluxes \(\Pi\) and \(\pi^{\mu\nu}\) were obtained by Denicol, Niemi, Molnar, and Rischke [8]. For a single-component Boltzmann gas of particles with mass \(m\), without conserved currents (i.e. with zero chemical potential), the bulk and shear evolution equations are [8; 37]:
\[\dot{\Pi}+\frac{\Pi}{\tau_{R}}=-\beta_{\Pi}\,\theta-\left(1-c_{s}^{2}\right)\,\Pi\,\theta+\frac{m^{4}}{9}\,\rho_{-2}\,\theta+\left(\frac{1}{3}-c_{s}^{2}\right)\,\pi^{\mu\nu}\,\sigma_{\mu\nu}+\frac{m^{2}}{3}\,\rho_{-2}^{\mu\nu}\sigma_{\mu\nu}, \tag{11}\]

\[\dot{\pi}^{\langle\mu\nu\rangle}+\frac{\pi^{\mu\nu}}{\tau_{R}}=2\beta_{\pi}\sigma^{\mu\nu}-\frac{10}{7}\,\pi^{\lambda\langle\mu}\,\sigma_{\lambda}^{\nu\rangle}+2\,\pi^{\lambda\langle\mu}\,\omega_{\lambda}^{\nu\rangle}-\frac{4}{3}\,\pi^{\mu\nu}\,\theta+\frac{4m^{2}}{7}\,\rho_{-2}^{\lambda\langle\mu}\,\sigma_{\lambda}^{\nu\rangle}-\rho_{-2}^{\mu\nu\lambda\rho}\,\sigma_{\lambda\rho}-\Delta_{\alpha\beta}^{\mu\nu}\,\nabla_{\lambda}\,\rho_{-1}^{\alpha\beta\lambda}+\frac{2}{5}\,\nabla^{(\mu}\rho_{1}^{\nu)}-m^{2}\,\rho_{-1}^{\langle\mu}\,\dot{u}^{\nu\rangle}+\cdots\,. \tag{12}\]

Here \(\theta\equiv\partial_{\mu}u^{\mu}\) is the scalar expansion rate, \(\sigma^{\mu\nu}\equiv\nabla^{\langle\mu}u^{\nu\rangle}\) the velocity shear tensor, \(\omega^{\mu\nu}\equiv\left(\nabla^{\mu}u^{\nu}-\nabla^{\nu}u^{\mu}\right)/2\) the vorticity tensor, and \(c_{s}\) the speed of sound. The quantities \(\rho_{r}^{\mu_{1}\cdots\mu_{\ell}}\equiv\langle\left(u\cdot p\right)^{r}p^{\langle\mu_{1}}\cdots p^{\mu_{\ell}\rangle}\rangle_{\delta}\) are non-hydrodynamic moments of \(\delta f\); the ellipsis in Eq. (12) stands for the remaining couplings to such moments. Evaluating these couplings requires an ansatz for \(\delta f\), and the truncation schemes reviewed below differ precisely in this choice.

### The Grad 14-moment approximation

In this approach the deviation from local equilibrium is expanded in powers of the particle four-momentum, keeping the 14 lowest expansion coefficients \((a,b_{\mu},c_{\mu\nu})\)7:
Footnote 7: Without loss of generality, \(c_{\mu\nu}\) can be taken to be symmetric and traceless with its trace absorbed in the definition of ‘a’; this sets its number of independent coefficients to 9.
\[\frac{\delta f(x,p)}{f_{\rm eq}} \approx a+b_{\mu}\,p^{\mu}+c_{\mu\nu}\,p^{\mu}\,p^{\nu}, \tag{19}\] \[\equiv\mathcal{A}+\mathcal{B}\left(u\cdot p\right)+p^{\langle\mu \rangle}\left(b_{\langle\mu\rangle}+2\left(u\cdot p\right)c_{\langle\mu\rangle}\right)\] \[+\mathcal{C}\left(u\cdot p\right)^{2}+c_{\langle\mu\nu\rangle}\,p ^{\langle\mu\,}p^{\nu\rangle},\]
where, in the second line, we defined \(\mathcal{B}\equiv u_{\mu}\,b^{\mu}\), \(c_{\langle\mu\rangle}\equiv\Delta_{\mu}^{\alpha}\,c_{\alpha\beta}\,u^{\beta}\), \(c\equiv c_{\mu\nu}\,u^{\mu}\,u^{\nu}\), \(\mathcal{A}\equiv a-m^{2}\,c/3\), and \(\mathcal{C}\equiv 4c/3\). In the absence of a conserved charge current, the coefficients \(b_{\mu}\) are set to zero and the 14-moment approximation essentially reduces to a 10-moment approximation:
\[\frac{\delta f(x,p)}{f_{\rm eq}}\approx \mathcal{A}+2\,c_{\langle\mu\rangle}\,\left(u\cdot p\right)\,p^{ \langle\mu\rangle}+\mathcal{C}\left(u\cdot p\right)^{2}\] \[+c_{\langle\mu\nu\rangle}\,p^{\langle\mu\,}p^{\nu\rangle}. \tag{20}\]
The ten coefficients are determined by (i) imposing the Landau matching conditions (Eq. (10)) and (ii) requiring that \(\delta f\) given by Eq. (20) yields the bulk and shear stresses as per their definitions in Eqs. (8). Note that the Landau matching condition \(\langle p^{\langle\mu\rangle}\left(u\cdot p\right)\rangle_{\delta}=0\) implies that \(c_{\langle\mu\rangle}\) in Eq. (20) must vanish.
### The Chapman-Enskog approximation
In this framework, the RTA Boltzmann equation is solved approximately, assuming that the Knudsen number (the ratio of the microscopic mean free path to the macroscopic gradient length scale) is small [5]. One re-writes Eqs. (1,2) as
\[f=f_{\rm eq}-\frac{\varepsilon\tau_{R}}{u\cdot p}\,p^{\mu}\partial_{\mu}f, \tag{21}\]
where \(\varepsilon\) is a power-counting parameter that signifies the smallness of the second term. Assuming a perturbative solution of the form \(f=\sum_{i=0}^{\infty}\,f_{i}\,\varepsilon^{i}\) and plugging it into both sides of Eq. (21) one obtains
\[f_{0} =f_{\rm eq},\ f_{1}=-\frac{\tau_{R}}{u\cdot p}\,p^{\mu}\partial_{ \mu}f_{\rm eq},\] \[f_{2} =\frac{\tau_{R}}{u\cdot p}\,p^{\mu}\partial_{\mu}\,\left(\frac{ \tau_{R}}{u\cdot p}p^{\nu}\partial_{\nu}f_{\rm eq}\right), \tag{22}\]
and so on. Writing \(f=f_{\rm eq}+\delta f\) and keeping terms up to first-order in velocity gradients (\(\delta f\approx f_{1}\)) the out-of-equilibrium part of the distribution function is [47],
\[\delta f =\frac{\beta\,\tau_{R}}{u\cdot p}\left[\frac{1}{3}\left\{m^{2}- \left(1-3c_{s}^{2}\right)\left(u\cdot p\right)^{2}\right\}\theta\right.\] \[\qquad\qquad\left.+\,p^{\mu}\,p^{\nu}\,\sigma_{\mu\nu}\right]f_{ \rm eq}. \tag{23}\]
This yields the first-order expressions of the bulk viscous pressures and shear stress tensor,
\[\Pi=-\tau_{R}\,\beta_{\Pi}\,\theta,\ \pi^{\mu\nu}=2\tau_{R}\,\beta_{\pi}\,\sigma^{\mu\nu}. \tag{24}\]
It is customary to express \(\delta f\) in terms of the shear and bulk viscous pressures instead of the scalar expansion rate and the velocity shear tensor appearing in Eq. (23). For this, one uses the first-order expressions (24) to get [47]
\[\frac{\delta f}{f_{\rm eq}} =-\frac{\beta}{3(u\cdot p)\beta_{\Pi}}\,\left[m^{2}-\left(1-3c_{ s}^{2}\right)\,\left(u\cdot p\right)^{2}\right]\,\Pi\] \[+\frac{\beta}{2(u\cdot p)\beta_{\pi}}\,p^{\mu}\,p^{\nu}\,\pi_{\mu \nu}. \tag{25}\]
The above form of the off-equilibrium correction will be referred to as the Chapman-Enskog \(\delta f\), and the hydrodynamics derived from it as second-order CE hydro.
### The anisotropic-hydrodynamic approximation
Both Grad's approximation and the Chapman-Enskog \(\delta f\) have shortcomings. Grad's 14-moment assumption is ad hoc, as there is no _a priori_ reason for expanding the distribution in powers of the particle momenta, whereas the Chapman-Enskog method is expected to work only near equilibrium. In fact, for both the Grad and CE approaches, \(\delta f\) becomes negative at large momenta and is unable to describe substantial deviations from local isotropy. These distributions are thus unsuitable for describing the very early stages of heavy-ion collisions, where the medium expands rapidly along the longitudinal (or beam) direction of the colliding nuclei. A form of the distribution function that does not become negative at any momenta and can handle such large deviations from local momentum isotropy is given by the Romatschke-Strickland ansatz [6; 7]:
\[f_{RS}=\exp\left(-\beta_{RS}\,\sqrt{p_{\mu}\,p_{\nu}\Xi^{\mu\nu}}\right). \tag{26}\]
In the above, \(\Xi^{\mu\nu}=u^{\mu}u^{\nu}+\xi^{\mu\nu}-\Delta^{\mu\nu}\,\psi\), where \(\beta_{RS}\) plays the role of an effective inverse temperature, \(\xi^{\mu\nu}\) characterizes the deformation from momentum isotropy, and \(\psi\) models the isotropic deviation from equilibrium. Anisotropic hydrodynamics (aHydro) [11; 12; 13; 15; 16; 17; 18; 20; 48] assumes that the leading-order form of a distribution function suited to the early-time dynamics of fluids formed in heavy-ion collisions is given by \(f\approx f_{RS}\). This assumption for \(f(x,p)\), supplemented by matching conditions relating the parameters \((\beta_{RS},\xi^{\mu\nu},\psi)\) to the energy density and the shear and bulk stresses, is then used to truncate the system of Eqs. (11-12). The aHydro approach can be further improved by including corrections to the Romatschke-Strickland distribution, \(f\approx f_{RS}+\delta f\), where \(\delta f\) is usually obtained using a 14-moment approximation. This procedure is used to formulate 'viscous anisotropic hydrodynamics' [14; 19].
### The maximum entropy approximation
A novel way of constructing a distribution function from knowledge of the hydrodynamic variables was proposed in [22]. It is based on the idea that the 'least biased' distribution function using all of, and only, the information contained in the hydrodynamic moments is the one that maximizes the non-equilibrium entropy density8,
Footnote 8: Eq. (27) holds for particles obeying classical statistics. To accommodate quantum statistics, the term in the integrand \(f(\ln f-1)\) has to be generalized to \(f\ln(f)-(1+\theta f)\ln(1+\theta f)/\theta\), where \(\theta=1,-1\) correspond to Bose-Einstein and Fermi-Dirac particles, respectively.
\[s[f]=-\int dP\,\left(u\cdot p\right)\,f\,\left(\ln f-1\right), \tag{27}\]
subject to the constraints that \(f\) reproduces the instantaneous values of hydrodynamic quantities,
\[e =\int dP\,\left(u\cdot p\right)^{2}\,f,\] \[P+\Pi =-\frac{1}{3}\,\int dP\,\left(\Delta_{\mu\nu}\,p^{\mu}\,p^{\nu} \right)\,f,\] \[\pi^{\mu\nu} =\int dP\,p^{\langle\mu}\,p^{\nu\rangle}\,f. \tag{28}\]
Taking the functional derivative of \(s[f]\) and employing the method of Lagrange multipliers, the maximum-entropy or 'least-biased' distribution for Boltzmann particles is obtained to be
\[f_{\rm ME}=\exp\left(-\Lambda\left(u\cdot p\right)-\frac{\lambda _{\Pi}}{u\cdot p}p_{\langle\mu\rangle}p^{\langle\mu\rangle}-\frac{\gamma_{ \langle\mu\nu\rangle}}{u\cdot p}\,p^{\langle\mu}\,p^{\nu\rangle}\right). \tag{29}\]
In the above, \(\Lambda\), \(\lambda_{\Pi}\), and \(\gamma_{\langle\mu\nu\rangle}\) are Lagrange parameters corresponding to energy density, isotropic pressure, and shear stress tensor, respectively. For particles obeying quantum statistics, the maximum entropy distribution changes to \(f_{ME}=[\exp(\psi)\pm 1]^{-1}\), where \(+(-)\) denotes Fermi-Dirac (Bose-Einstein) statistics, and \(\psi\) is (up to a minus sign) identical to the argument of the exponential appearing in Eq. (29). Note that in the absence of dissipation (\(\Pi=\pi^{\mu\nu}=0\)) Eq. (29) reduces to the usual Boltzmann distribution. Also, for small deviations from local equilibrium, \(f_{\rm ME}\approx f_{\rm eq}+\delta f\), with \(\delta f\) approaching the Chapman-Enskog form given by Eq. (25) [22]. Accordingly, for small deviations from local equilibrium, the maximum-entropy truncation scheme yields second-order Chapman-Enskog hydrodynamics.
It should be also noted that, similar to the aHydro ansatz (26), the maximum-entropy distribution is positive definite for all momenta. However, unlike the RS-distribution which is constructed precisely to match the specific symmetries associated with the initially dominating longitudinal expansion of the fluid formed in heavy-ion collisions, the maximum-entropy distribution cares only about information contained in the macroscopic currents, irrespective of the symmetries of the expansion geometry. As a result, it can be expected to describe a considerably larger class of fluid evolutions than the Romatschke-Strickland ansatz (26). In this work we will compare the effectiveness of ME and anisotropic hydrodynamics in describing the macroscopic collective evolution of systems whose microscopic dynamics is controlled by the RTA Boltzmann equation. We shall consider two highly symmetric flow patterns that can be regarded as idealized rough approximations for different heavy-ion evolution stages and for which the RTA Boltzmann equation can be solved exactly - Bjorken [49] and Gubser [50; 51] flow.
We close this Section with a brief comparative discussion of our ME approach [22] with the DTT (Dissipative Type Theory) approach proposed in [31]. For a conformal system, the authors of [31] try to obtain a distribution (referred to as \(f_{\rm DTT}\)) that locally maximizes the _entropy production rate_ while matching the ideal and shear viscous components of the energy-momentum tensor. As the rate of entropy production is determined by the collisional kernel, this distribution depends on the relaxation time scale \(\tau_{R}\). This is not the case for \(f_{\rm ME}\), which is constructed entirely from macroscopic hydrodynamic input and is agnostic about the underlying microscopic kinetics. Surprisingly, after a suitable re-definition of the Lagrange parameters, the authors of [31] arrive at a form for \(f_{\rm DTT}\) identical to the conformal version of Eq. (29). Using this result to close the exact shear evolution equation (12) reduces their DTT hydrodynamic framework exactly to ME hydrodynamics. Unfortunately, the last step in their derivation involves an expansion in deviations from equilibrium; keeping only the leading term (as done in [31]) yields an approximation for \(f_{\rm DTT}\) that maximizes the entropy production rate if and only if the system is already in thermal equilibrium, i.e. iff \(f_{\rm DTT}=f_{\rm eq}\). Hence, with their approximation \(f_{\rm DTT}\approx f_{\rm ME}\), the results presented in [31] reproduce ME hydrodynamics. Our work here extends theirs in multiple directions, most importantly to non-conformal systems.
## III Bjorken flow
Bjorken flow [49] describes the early stages of matter evolution in ultra-relativistic heavy-ion collisions. In this description, the system is assumed to expand boost-invariantly along the \(z-\) (beam or longitudinal) direction with a \(z\to-z\) symmetry, while being homogeneous and rotationally invariant in the \((x-y)\)-plane (transverse to the beam direction). Mathematically, these assumptions imply invariance of the system under the combined \(SO(1,1)\otimes ISO(2)\otimes Z_{2}\) symmetry group9[50; 51]
which, although obscure in Cartesian coordinates, becomes manifest in the Milne coordinates:
\[\tau =\sqrt{t^{2}-z^{2}},\,r=\sqrt{x^{2}+y^{2}},\] \[\phi =\tan^{-1}(y/x),\,\,\eta=\tanh^{-1}(z/t). \tag{30}\]
Here \(\tau\) is the longitudinal proper time and \(\eta\) is the spacetime rapidity. In Milne coordinates, the metric tensor takes the form \(g_{\mu\nu}=\text{diag}(1,-1,-r^{2},-\tau^{2})\) such that the line element given by
\[ds^{2}=d\tau^{2}-dr^{2}-r^{2}\,d\phi^{2}-\tau^{2}\,d\eta^{2}, \tag{31}\]
is manifestly invariant under the Bjorken symmetries mentioned above. More importantly, the fluid appears to be at rest in these coordinates, (\(u^{\tau}=1,u^{x}=u^{y}=u^{\eta}=0\)), which is the unique flow velocity profile consistent with the combined \(SO(1,1)\otimes ISO(2)\otimes Z_{2}\) symmetry group. These symmetries further imply that all macroscopic quantities (like temperature, shear stress tensor etc.) are functions solely of the proper time \(\tau\) such that partial differential equations of hydrodynamics get replaced by ordinary ones.
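For concreteness, here is a small sketch of the coordinate map (30) (Python/numpy assumed; an illustration, not part of any released code):

```python
import numpy as np

def milne(t, x, y, z):
    """Cartesian -> Milne coordinates, Eq. (30); requires |z| < t."""
    tau = np.sqrt(t*t - z*z)           # longitudinal proper time
    r   = np.hypot(x, y)               # transverse radius
    phi = np.arctan2(y, x)             # azimuthal angle
    eta = np.arctanh(z/t)              # space-time rapidity
    return tau, r, phi, eta

# Bjorken flow, u^mu = (t, 0, 0, z)/tau in Cartesian coordinates, becomes
# u^tau = 1, u^r = u^phi = u^eta = 0: the fluid is at rest in Milne coordinates.
```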
### Exact solution of the RTA Boltzmann equation in Bjorken flow
We consider a fluid undergoing Bjorken expansion and assume that the medium is composed of particles whose dynamics is described by kinetic theory. Bjorken symmetries at the microscopic level imply that the single-particle distribution function \(f(x,p)\) depends only on \(\tau\), the transverse momentum \(p_{T}=\sqrt{p_{x}^{2}+p_{y}^{2}}\), and the boost-invariant longitudinal momentum \(p_{\eta}\) [52; 53]. The Boltzmann equation with a collisional kernel in the relaxation-time approximation [32] is
\[\frac{\partial f}{\partial\tau}=-\frac{1}{\tau_{R}}\left(f-f_{\text{eq}} \right), \tag{32}\]
with the equilibrium distribution \(f_{\rm eq}=\exp(-p^{\tau}/T)\), where \(p^{\tau}=\sqrt{p_{T}^{2}+p_{\eta}^{2}/\tau^{2}+m^{2}}\) is the particle energy and \(T\) is the temperature. The relaxation time is parameterized as \(\tau_{R}\equiv 5C/T\) with constant \(C\). The shear stress tensor of the fluid is diagonal, \(\pi^{\mu\nu}={\rm diag}(0,\pi/2,\pi/2,-\pi)\) in the local rest frame, with a single independent degree of freedom, \(\pi\equiv\pi_{\eta}^{\eta}\). Correspondingly, the energy-momentum tensor, \(T^{\mu\nu}={\rm diag}(e,P_{T},P_{T},P_{L})\), has only 3 independent components, with \(P_{T}\equiv P+\Pi+\pi/2\) and \(P_{L}\equiv P+\Pi-\pi\) being the effective transverse and longitudinal pressures.
The RTA Boltzmann equation (32) is formally solved exactly by [52; 53; 54]
\[f(\tau;p_{T},p_{\eta}) =D(\tau,\tau_{0})\,f_{0}(p_{T},p_{\eta}) \tag{33}\] \[+\int_{\tau_{0}}^{\tau}\,\frac{d\tau^{\prime}}{\tau_{R}(\tau^{ \prime})}\,D(\tau,\tau^{\prime})\,\exp\bigl{(}-p^{\tau}(\tau^{\prime})/T(\tau ^{\prime})\bigr{)},\]
where \(f_{0}(p_{T},p_{\eta})\) is the initial distribution function, and the 'damping function' \(D(\tau_{2},\tau_{1})\) is defined as
\[D(\tau_{2},\tau_{1})\equiv\exp\left(-\int_{\tau_{1}}^{\tau_{2}}\,\frac{d\tau^{ \prime}}{\tau_{R}(\tau^{\prime})}\right). \tag{34}\]
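Eq. (34) translates directly into code; a minimal sketch (Python/scipy assumed; `tau_R` is any callable parametrization, here an ideal-Bjorken-like trial temperature profile \(T\propto\tau^{-1/3}\), chosen purely for illustration):

```python
import numpy as np
from scipy.integrate import quad

def damping(tau2, tau1, tau_R):
    """Damping function D(tau2, tau1) of Eq. (34); tau_R is a callable tau_R(tau)."""
    val = quad(lambda tp: 1.0/tau_R(tp), tau1, tau2)[0]
    return np.exp(-val)

# Example: tau_R = 5C/T with a trial profile T(tau) = 0.5*(0.1/tau)^{1/3} GeV
C = 10.0/(4.0*np.pi)
tau_R = lambda tau: 5.0*C/(0.5*(0.1/tau)**(1.0/3.0))
print(damping(1.0, 0.1, tau_R))
```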
In this work we consider an initial distribution of generalized Romatschke-Strickland form [55; 56]:
\[f_{0}(p_{T},p_{\eta})=\exp\left(\alpha_{0}-\frac{\sqrt{p_{T}^{2} +(1{+}\xi_{0})p_{\eta}^{2}/\tau_{0}^{2}+m^{2}}}{T_{0}^{RS}}\right). \tag{35}\]
Here \(T_{0}^{RS}\) sets the momentum scale, \(\xi_{0}\) parametrizes the momentum-space anisotropy, and the fugacity-like parameter \(\alpha_{0}\) models a distribution that differs from \(f_{\text{eq}}\) by a multiplicative factor even if the anisotropy \(\xi_{0}\) is chosen to vanish. These three parameters can be tuned to generate all possible values for the three independent components of the energy momentum tensor [55; 56].
In order to explicitly determine the solution \(f(\tau;p_{T},p_{\eta})\) in Eq. (33) one needs the time evolution of the temperature. This is obtained by imposing the energy conservation condition through Landau matching: \(e(\tau)=e_{\text{eq}}(T(\tau))\). The solution for temperature is then determined from an integral equation [53; 54; 57]:
\[e_{\text{eq}}(T(\tau))=D(\tau,\tau_{0})\,\frac{(T_{0}^{RS})^{4} \,e^{\alpha_{0}}}{4\pi^{2}}\,\tilde{H}_{e}\Bigl{(}\frac{\tau_{0}}{\sqrt{1+\xi_ {0}}\tau},\frac{m}{T_{0}^{RS}}\Bigr{)}\] \[+\frac{1}{4\pi^{2}}\,\int_{\tau_{0}}^{\tau}\,\frac{d\tau^{\prime }}{\tau_{R}(\tau^{\prime})}\,D(\tau,\tau^{\prime})\,T^{4}(\tau^{\prime})\, \tilde{H}_{e}\Bigl{(}\frac{\tau^{\prime}}{\tau},\frac{m}{T(\tau^{\prime})} \Bigr{)}. \tag{36}\]
In the above the temperature dependence of \(e_{\text{eq}}(T)\) is specified by Eq. (6), and the function \(\tilde{H}_{e}[y,z]\) is defined as,
\[\tilde{H}_{e}[y,z]=\int_{0}^{\infty}\,du\,u^{3}\,\exp\left(-\sqrt{u^{2}+z^{2}}\right)\,H_{e}\left(y,\frac{z}{u}\right), \tag{37}\]
with [54]
\[H_{e}(y,z)=y\left(\sqrt{y^{2}+z^{2}}+\frac{1+z^{2}}{\sqrt{y^{2}-1}}\,\tanh^{-1 }\sqrt{\frac{y^{2}-1}{y^{2}+z^{2}}}\right). \tag{38}\]
Eq. (36) is solved for \(T(\tau)\) by numerical iteration. The solution for \(T(\tau)\) can then be used in Eq. (33) to obtain
the distribution function at any time. One may also directly use the following formulae to calculate the effective transverse and longitudinal pressures,
\[P_{T}(\tau)= D(\tau,\tau_{0})\,\frac{(T_{0}^{RS})^{4}\,e^{\alpha_{0}}}{8\pi^{2}}\,\tilde{H}_{T}\left(\frac{\tau_{0}}{\tau\sqrt{1+\xi_{0}}},\frac{m}{T_{0}^{RS}}\right) \tag{39}\] \[+\frac{1}{8\pi^{2}}\int_{\tau_{0}}^{\tau}\frac{d\tau^{\prime}}{\tau_{R}(\tau^{\prime})}D(\tau,\tau^{\prime})T^{4}(\tau^{\prime})\tilde{H}_{T}\left(\frac{\tau^{\prime}}{\tau},\frac{m}{T(\tau^{\prime})}\right),\] \[P_{L}(\tau)= D(\tau,\tau_{0})\,\frac{(T_{0}^{RS})^{4}\,e^{\alpha_{0}}}{4\pi^{2}}\,\tilde{H}_{L}\left(\frac{\tau_{0}}{\tau\sqrt{1+\xi_{0}}},\frac{m}{T_{0}^{RS}}\right) \tag{40}\] \[+\frac{1}{4\pi^{2}}\int_{\tau_{0}}^{\tau}\frac{d\tau^{\prime}}{\tau_{R}(\tau^{\prime})}D(\tau,\tau^{\prime})T^{4}(\tau^{\prime})\tilde{H}_{L}\left(\frac{\tau^{\prime}}{\tau},\frac{m}{T(\tau^{\prime})}\right).\]
Here the functions \(\tilde{H}_{T,L}\) are defined by [54]
\[\tilde{H}_{T,L}(y,z)\equiv\int_{0}^{\infty}du\,u^{3}\exp\!\left(-\sqrt{u^{2}+ z^{2}}\right)H_{T,L}\left(y,\frac{z}{u}\right),\]
with
\[H_{T}(y,z)= \frac{y}{(y^{2}-1)^{3/2}}\bigg{[}-\sqrt{(y^{2}-1)(y^{2}+z^{2})}\] \[+\left(z^{2}+2y^{2}-1\right)\tanh^{-1}\sqrt{\frac{y^{2}-1}{y^{2}+ z^{2}}}\bigg{]}, \tag{41}\]
\[H_{L}(y,z)= \frac{y^{3}}{(y^{2}-1)^{3/2}}\bigg{[}\sqrt{(y^{2}-1)(y^{2}+z^{2})}\] \[-\left(z^{2}+1\right)\tanh^{-1}\sqrt{\frac{y^{2}-1}{y^{2}+z^{2}}} \bigg{]}. \tag{42}\]
From Eqs. (39,40) it is straightforward to obtain the bulk and shear viscous stresses as \(\Pi=\frac{1}{3}(P_{L}+2P_{T}-3P)\) and \(\pi=\frac{2}{3}(P_{T}-P_{L})\).
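Putting the pieces together, the numerical iteration of Eq. (36) can be sketched as follows (a slow but direct illustration in Python/scipy, re-stating `e_eq` and `temperature` from the EoS sketch of Sec. II to keep the block self-contained; the \(\tau^{\prime}\) integral is approximated by the trapezoidal rule on the \(\tau\) grid, and the fixed iteration count is an arbitrary illustrative choice):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv
from scipy.optimize import brentq

def e_eq(T, m):                        # Eq. (6), as in the earlier sketch
    z = m/T
    return 3.0*T**4*z*z/(2.0*np.pi**2)*(kv(2, z) + (z/3.0)*kv(1, z))

def temperature(e, m):
    return brentq(lambda T: e_eq(T, m) - e, 1e-4, 10.0)

def H_e(y, z):
    """Eq. (38); complex arithmetic keeps it valid for y < 1 as well."""
    if abs(y - 1.0) < 1e-10:
        s = np.sqrt(1.0 + z*z)
        return s + (1.0 + z*z)/s
    s = np.sqrt(y*y + z*z)
    w = np.emath.sqrt(y*y - 1.0)
    return float(np.real(y*(s + (1.0 + z*z)/w*np.arctanh(w/s))))

def H_e_tilde(y, z):                   # Eq. (37)
    f = lambda u: u**3*np.exp(-np.sqrt(u*u + z*z))*H_e(y, z/u)
    return quad(f, 0.0, np.inf, limit=200)[0]

def solve_T(tau, T0RS, xi0, alpha0, m, C, n_iter=20):
    """Fixed-point iteration for T(tau) from Eq. (36) on the grid `tau`."""
    T = np.full_like(tau, T0RS)        # initial guess
    for _ in range(n_iter):
        tauR = 5.0*C/T
        # w[i] = int_{tau_0}^{tau_i} dtau'/tau_R, so D(tau_i, tau_j) = exp(w[j]-w[i])
        w = np.concatenate(([0.0],
            np.cumsum(np.diff(tau)*0.5*(1.0/tauR[1:] + 1.0/tauR[:-1]))))
        e_new = np.empty_like(tau)
        for i in range(len(tau)):
            free = np.exp(-w[i])*T0RS**4*np.exp(alpha0)/(4.0*np.pi**2) \
                   * H_e_tilde(tau[0]/(np.sqrt(1.0 + xi0)*tau[i]), m/T0RS)
            coll = np.array([np.exp(w[j] - w[i])/tauR[j]*T[j]**4
                             * H_e_tilde(tau[j]/tau[i], m/T[j]) for j in range(i + 1)])
            e_new[i] = free + np.trapz(coll, tau[:i + 1])/(4.0*np.pi**2)
        T = np.array([temperature(e, m) for e in e_new])
    return T
```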
### Maximum entropy truncation for Bjorken flow
We now describe the maximum entropy truncation scheme [22] for the case of Bjorken flow. Using the RTA Boltzmann equation (Eq. (32)), one may derive exact evolution equations for the three independent components of energy-momentum tensor, viz., \(e,P_{L},P_{T}\):
\[\frac{de}{d\tau} =-\frac{1}{\tau}\,\left(e+P_{L}\right), \tag{43}\] \[\frac{dP_{L}}{d\tau} =-\frac{P_{L}-P}{\tau_{R}}+\frac{\zeta_{z}^{L}}{\tau},\] (44) \[\frac{dP_{T}}{d\tau} =-\frac{P_{T}-P}{\tau_{R}}+\frac{\zeta_{z}^{\perp}}{\tau}, \tag{45}\]
where
\[\zeta_{z}^{L}=-3P_{L}+I_{240}^{\rm exact},\ \ \zeta_{z}^{\perp}=-P_{T}+I_{221}^{ \rm exact}, \tag{46}\]
using the definition
\[I_{nrq}^{\rm exact}=\frac{1}{(2q)!!}\int dP\,(p^{\tau})^{n-r-2q}\,\left(\frac {p_{\eta}}{\tau}\right)^{r}\,p_{T}^{2q}\,f \tag{47}\]
with \(dP\equiv d^{2}p_{T}dp_{\eta}/[(2\pi)^{3}\tau p^{\tau}]\).
Equations (43-45) are exact but not closed, owing to the couplings \(\zeta_{z}^{L}\) and \(\zeta_{z}^{\perp}\) which require knowledge of the solution of the Boltzmann equation (32). In the maximum entropy truncation scheme one evaluates the moments \(I_{240}\) and \(I_{221}\) in Eq. (46) approximately by replacing \(f\) in Eq. (47) by the maximum entropy distribution \(f_{\rm ME}\). In Bjorken coordinates Eq. (29) takes the form
\[f_{\rm ME}=\exp\!\left(-\Lambda p^{\tau}-\frac{\lambda_{\Pi}}{p^{\tau}}\left(p_ {T}^{2}+p_{\eta}^{2}/\tau^{2}\right)-\frac{\gamma_{ij}p^{i}p^{j}}{p^{\tau}} \right)\!, \tag{48}\]
where the indices \(\{i,j\}\) run over \(\{x,y,\eta\}\). The assumption \(f\approx f_{\rm ME}\) closes the system of equations (43-45). Due to the symmetries of Bjorken flow the traceless tensor \(\gamma_{ij}\) in Eq. (48) becomes diagonal, with a single independent component: \(\gamma_{ij}={\rm diag}(0,\gamma/2,\gamma/2,-\tau^{2}\gamma)\). Accordingly, the scalar \(\gamma_{ij}p^{i}p^{j}\) simplifies to \(\gamma\,(p_{T}^{2}/2-p_{\eta}^{2}/\tau^{2})\). The three Lagrange parameters \((\Lambda,\lambda_{\Pi},\gamma)\) have to be chosen such that \(f_{\rm ME}\) reproduces the three independent components of \(T^{\mu\nu}\) at each instant of time (matching conditions):
\[e=\tilde{I}_{200},\qquad P_{L}=\tilde{I}_{220},\qquad P_{T}=\tilde{I}_{201}. \tag{49}\]
Here \(\tilde{I}_{nrq}\) denotes moments of the maximum entropy distribution:
\[\tilde{I}_{nrq}=\frac{1}{(2q)!!}\int dP\,(p^{\tau})^{n-r-2q}\,\left(\frac{p_{ \eta}}{\tau}\right)^{r}\,p_{T}^{2q}\,f_{\rm ME}. \tag{50}\]
Thus, instead of Eqs. (43-45), we shall solve
\[\frac{de}{d\tau} =-\frac{e+P_{L}}{\tau}, \tag{51}\] \[\frac{dP_{L}}{d\tau} =-\frac{P_{L}-P}{\tau_{R}}+\frac{\tilde{\zeta}_{z}^{L}}{\tau},\] (52) \[\frac{dP_{T}}{d\tau} =-\frac{P_{T}-P}{\tau_{R}}+\frac{\tilde{\zeta}_{z}^{\perp}}{\tau}, \tag{53}\]
with
\[\tilde{\zeta}_{z}^{L}=-3P_{L}+\tilde{I}_{240},\ \ \tilde{\zeta}_{z}^{\perp}=-P_{T}+ \tilde{I}_{221}. \tag{54}\]
For brevity we will refer to Eqs. (51)-(53) as _Maximum Entropy (ME) hydrodynamics_ for Bjorken flow.
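Before proceeding, it may help to see how moments of \(f_{\rm ME}\) can be evaluated in practice. The following is a minimal quadrature sketch of Eq. (50) in local-rest-frame spherical coordinates (Python/scipy assumed; the example parameter values are arbitrary, and convergence requires the large-momentum bound derived below in Eq. (67); the analytic angular reduction worked out later in this subsection is faster):

```python
import numpy as np
from scipy.integrate import quad

def I_tilde(n, r, q, Lam, lamPi, gam, m):
    """Moments (50) of f_ME by nested quadrature in LRF spherical coordinates.

    Converges only if Lam + lamPi > |min(gam/2, -gam)|, cf. Eq. (67).
    """
    dfq = float(np.prod(np.arange(2*q, 0, -2))) if q > 0 else 1.0
    def inner(p):
        E = np.sqrt(p*p + m*m)
        ang = lambda th: (np.cos(th)**r * np.sin(th)**(2*q + 1)
                          * np.exp((gam*p*p/E)*(np.cos(th)**2 - 0.5*np.sin(th)**2)))
        a = quad(ang, 0.0, np.pi)[0]
        return E**(n - r - 2*q - 1) * p**(r + 2*q + 2) * np.exp(-Lam*E - lamPi*p*p/E) * a
    return quad(inner, 0.0, np.inf, limit=200)[0] / (4.0*np.pi**2*dfq)

# Example: the matching moments of Eq. (49) for an illustrative parameter point
e, PL, PT = (I_tilde(n, r, q, Lam=2.0, lamPi=0.1, gam=0.3, m=0.5)
             for (n, r, q) in [(2, 0, 0), (2, 2, 0), (2, 0, 1)])
```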
Instead of directly solving these equations, which at every time step involves a 3-dimensional inversion to obtain from (\(e\), \(P_{L}\), \(P_{T}\)) the Lagrange multipliers \((\Lambda,\lambda_{\Pi},\gamma)\) needed for evaluating the couplings \((\tilde{\zeta}_{z}^{L},\tilde{\zeta}_{z}^{\perp})\), we shall recast Eqs. (51-53) as evolution equations for the Lagrange parameters themselves: Defining \(X^{a}\equiv\{e,P_{L},P_{T}\}\) and \(x^{a}\equiv\{\Lambda,\lambda_{\Pi},\gamma\}\), we write
\[dX^{a}=M^{a}_{\ b}\,dx^{b}, \tag{55}\]
with \(M^{a}{}_{b}\equiv\partial X^{a}/\partial x^{b}\), and invert this to obtain
\[\frac{dx^{a}}{d\tau}=\left(M^{-1}\right)^{a}{}_{b}\,\frac{dX^{b}}{d\tau} \tag{56}\]
as evolution equations for the Lagrange multipliers. Using local rest frame coordinates for simplicity, \(p_{z,\text{LRF}}\!=\!p_{\eta}/\tau\) such that \(E_{\text{LRF}}=\sqrt{p_{\text{LRF}}^{2}\!+\!m^{2}}\) with \(p_{\text{LRF}}^{2}=p_{T}^{2}+p_{z,\text{LRF}}^{2}\), and dropping the LRF subscript to ease clutter, the maximum entropy distribution reads10
Footnote 10: We note in passing that for \(\gamma\!=\!0\) the maximum entropy distribution (57) is isotropic in the LRF, and hence \(\gamma\!=\!0\) is equivalent to zero shear stress, \(\pi\!=\!0\).
\[f_{\text{ME}}=\exp\Bigl{(}-\Lambda E_{p}-\frac{\lambda_{\Pi}}{E_{p}}\,p^{2}- \frac{\gamma}{E_{p}}\,\bigl{(}p_{T}^{2}/2-p_{z}^{2}\bigr{)}\Bigr{)}. \tag{57}\]
Starting from the matching conditions (49),
\[e =\int dP\,E_{p}^{2}\,f_{\text{ME}}, \tag{58}\] \[P_{L} =\int dP\,p_{z}^{2}\,f_{\text{ME}},\quad P_{T}=\frac{1}{2}\int dP \,p_{T}^{2}\,f_{\text{ME}},\]
with \(dP=d^{3}p/[(2\pi)^{3}E_{p}]\), the first row of the matrix \(M\) is found to have the elements
\[M^{e}_{\Lambda} \equiv\frac{\partial e}{\partial\Lambda}=-\int dP\,E_{p}^{3}\,f_{\rm ME}=-\tilde{I}_{300},\] \[M^{e}_{\lambda_{\Pi}} \equiv\frac{\partial e}{\partial\lambda_{\Pi}}=-\int dP\,E_{p}\,p^{2}\,f_{\rm ME}=-2\tilde{I}_{301}-\tilde{I}_{320}, \tag{59}\] \[M^{e}_{\gamma} \equiv\frac{\partial e}{\partial\gamma}=-\int dP\,E_{p}\,\left(p_{T}^{2}/2-p_{z}^{2}\right)f_{\rm ME}=-\tilde{I}_{301}+\tilde{I}_{320}.\]
Similarly, the second row has the elements
\[M^{P_{L}}_{\Lambda} \equiv\frac{\partial P_{L}}{\partial\Lambda}=-\int dP\,E_{p}\,p_{z}^{2}\,f_{\rm ME}=-\tilde{I}_{320},\] \[M^{P_{L}}_{\lambda_{\Pi}} \equiv\frac{\partial P_{L}}{\partial\lambda_{\Pi}}=-\int dP\,E_{p}^{-1}\,p_{z}^{2}\,p^{2}\,f_{\rm ME}=-2\tilde{I}_{321}-\tilde{I}_{340}, \tag{60}\] \[M^{P_{L}}_{\gamma} \equiv\frac{\partial P_{L}}{\partial\gamma}=-\int dP\,E_{p}^{-1}\,p_{z}^{2}\,\left(p_{T}^{2}/2-p_{z}^{2}\right)f_{\rm ME}=-\tilde{I}_{321}+\tilde{I}_{340}.\]
The components of the third row are
\[M^{P_{T}}_{\Lambda} \equiv\frac{\partial P_{T}}{\partial\Lambda}=-\frac{1}{2}\int dP\,E_{p}\,p_{T}^{2}\,f_{\rm ME}=-\tilde{I}_{301},\] \[M^{P_{T}}_{\lambda_{\Pi}} \equiv\frac{\partial P_{T}}{\partial\lambda_{\Pi}}=-\frac{1}{2}\int dP\,E_{p}^{-1}\,p_{T}^{2}\,p^{2}\,f_{\rm ME}=-4\tilde{I}_{302}-\tilde{I}_{321}, \tag{61}\] \[M^{P_{T}}_{\gamma} \equiv\frac{\partial P_{T}}{\partial\gamma}=-\frac{1}{2}\int dP\,E_{p}^{-1}\,p_{T}^{2}\,\left(p_{T}^{2}/2-p_{z}^{2}\right)f_{\rm ME}=-2\tilde{I}_{302}+\tilde{I}_{321}.\]
In spherical polar coordinates \((p,\theta_{p},\phi_{p})\) the moments \(\tilde{I}_{nrq}\) read
\[\tilde{I}_{nrq} =\frac{1}{(2q)!!}\int dP\,E_{p}^{n-r-2q}\,p^{r+2q}\,\cos^{r} \theta_{p}\,\sin^{2q}\theta_{p}\] \[\times\exp\Bigl{(}-\Lambda E_{p}-\frac{\lambda_{\Pi}}{E_{p}}p^{2} -\frac{\gamma\,p^{2}}{E_{p}}\bigl{(}\tfrac{1}{2}\sin^{2}\theta_{p}-\cos^{2} \theta_{p}\bigr{)}\Bigr{)},\] \[=\frac{1}{(2q)!!}\int_{0}^{\infty}\frac{dp}{4\pi^{2}}\,E_{p}^{n-r- 2q-1}\,p^{r+2q+2}\] \[\times\exp\left(-\Lambda E_{p}-\frac{\lambda_{\Pi}p^{2}}{E_{p}}- \frac{\gamma\,p^{2}}{2E_{p}}\right)\,R_{rq}\left(\frac{3\gamma p^{2}}{2E_{p}}\right) \tag{62}\]
where
\[R_{rq}(\alpha)\equiv\int_{0}^{\pi}\!\!d\theta_{p}\,\cos^{r}\theta_{p}\,\sin^{2q+ 1}\theta_{p}\,\exp\left(\alpha\,\cos^{2}\theta_{p}\right). \tag{63}\]
The integrals \(R_{rq}(\alpha)\) can be expressed analytically in terms of error functions. Note that \(\alpha=3\gamma p^{2}/2E_{p}\) has the same sign as \(\gamma\) which can be positive or negative. For \(\alpha<0\) we can define \(t(\alpha)=\sqrt{\pi}\,\text{Erf}(\sqrt{-\alpha})/\sqrt{-\alpha}\) which is well-behaved as \(\alpha\to-\infty\). Listing only the ones required in this analysis, the \(R_{rq}\) functions can then be expressed as
\[R_{00} =t,\quad R_{01}=-\frac{e^{\alpha}}{\alpha}+\frac{t(1+2\alpha)}{2 \alpha},\] \[R_{02} =-\frac{e^{\alpha}(3+2\alpha)}{2\alpha^{2}}+\frac{t(3+4\alpha+4 \alpha^{2})}{4\alpha^{2}},\] \[R_{20} =\frac{e^{\alpha}}{\alpha}-\frac{t}{2\alpha},\quad R_{21}=\frac{3e ^{\alpha}}{2\alpha^{2}}-\frac{t(3+2\alpha)}{4\alpha^{2}},\] \[R_{40} =\frac{e^{\alpha}(2\alpha-3)}{2\alpha^{2}}+\frac{3t}{4\alpha^{2}}. \tag{64}\]
For \(\alpha>0\), however, the right hand sides of Eqs. (64) are inconvenient because they are sums of terms that individually diverge exponentially in the limit \(\alpha\to\infty\). This is made explicit by writing
\[t(\alpha)=\frac{\sqrt{\pi}\,\text{Erf}(\sqrt{-\alpha})}{\sqrt{-\alpha}}=\frac{2 \,e^{\alpha}\,\mathcal{D}(\sqrt{\alpha})}{\sqrt{\alpha}},\ \ \text{for}\,\alpha>0, \tag{65}\]
where the DawsonF function (available at [58] for numerical implementation in C++) is well-behaved as \(\alpha\to\infty\). While these exponential divergences all cancel between the various terms on the right hand sides of Eqs. (64), they should be removed analytically before numerical implementation. This can be achieved by extracting a factor \(e^{\alpha}\) from the \(R_{rq}\) functions and combining it with the exponential prefactor in Eq. (62), defining \(\tilde{R}_{rq}(\alpha)=e^{-\alpha}R_{rq}(\alpha)\) which is manifestly free from exponential divergences in the limit \(\alpha\to+\infty\). For the numerical implementation of the moments \(\tilde{I}_{nrq}\) we therefore use the
following expressions:
\[\tilde{I}_{nrq} =\frac{1}{(2q)!!}\int_{0}^{\infty}\frac{dp}{4\pi^{2}}\,E_{p}^{n-r-2q- 1}\,p^{r+2q+2}\,R_{rq}\left(\frac{3\gamma p^{2}}{2E_{p}}\right)\] \[\times\exp\left(-\Lambda E_{p}-\frac{\lambda_{\Pi}p^{2}}{E_{p}} \,-\frac{\gamma\,p^{2}}{2E_{p}}\right),\;\;\;\gamma<0\] \[\tilde{I}_{nrq} =\frac{1}{(2q)!!}\int_{0}^{\infty}\frac{dp}{4\pi^{2}}\,E_{p}^{n- r-2q-1}\,p^{r+2q+2}\,\tilde{R}_{rq}\left(\frac{3\gamma p^{2}}{2E_{p}}\right)\] \[\times\exp\left(-\Lambda E_{p}-\frac{\lambda_{\Pi}p^{2}}{E_{p}} \,+\frac{\gamma\,p^{2}}{E_{p}}\right),\;\;\;\gamma>0. \tag{66}\]
Note that, since the functions \(R\) and \(\tilde{R}\) are well-behaved in the regions where they are used in Eqs. (66), convergence of these integrals demands that the exponential factors in Eqs. (66) fall off at large momenta. This requires
\[\Lambda+\lambda_{\Pi}>\biggl{|}\min\Bigl{(}\frac{\gamma}{2},-\gamma\Bigr{)} \biggr{|}. \tag{67}\]
This criterion was already deduced in [22] where it was found that \(\Lambda+\lambda_{\Pi}>\bigl{|}\min(\gamma_{1},\gamma_{2},\gamma_{3})\bigr{|}\) where \(\gamma_{i}\) are the eigenvalues of the shear stress tensor in the fluid rest frame. For Bjorken flow, Milne coordinates are the local rest frame coordinates, and thus \(\gamma_{1}=\gamma_{2}=\gamma/2\) and \(\gamma_{3}=-\gamma\). We note that although the Lagrange multiplier \(\Lambda\) appears similar to an inverse temperature \(\beta\) it does not need to be positive for \(f_{\rm ME}\) to be well-behaved; all that is required is that the sum \(\Lambda+\lambda_{\Pi}\) satisfies the bound (67). This feature will manifest itself in Sec. III.5 where we generate negative bulk viscous pressures using \(f_{\rm ME}\).
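To illustrate the stable evaluation just described, here is a minimal numerical sketch (Python with numpy/scipy assumed; the function names are ours, chosen for illustration):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import dawsn, erf

def R_direct(r, q, alpha):
    """Eq. (63) by brute-force quadrature; useful to validate Eqs. (64)."""
    f = lambda th: np.cos(th)**r * np.sin(th)**(2*q + 1) * np.exp(alpha*np.cos(th)**2)
    return quad(f, 0.0, np.pi)[0]

def t_func(alpha):
    """Eq. (65): erf form for alpha < 0, Dawson form for alpha > 0."""
    s = np.sqrt(abs(alpha))
    if alpha < 0.0:
        return np.sqrt(np.pi)*erf(s)/s
    return 2.0*np.exp(alpha)*dawsn(s)/s

def R00_tilde(alpha):
    """R~_00 = e^{-alpha} R_00: the divergent prefactor is removed analytically."""
    s = np.sqrt(alpha)
    return 2.0*dawsn(s)/s

# Cross-check at a moderately large positive alpha:
alpha = 20.0
assert np.isclose(R_direct(0, 0, alpha), np.exp(alpha)*R00_tilde(alpha), rtol=1e-6)
```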
### Note on initial conditions
In this subsection we explore the range of bulk and shear stresses that can be accessed with the maximum-entropy ansatz (57) for the distribution function, as well as its quantum statistical generalization. For Bjorken flow, positivity of the distribution function \(f(x,p)\) implies that the effective longitudinal and transverse pressures, \(P_{L}\) and \(P_{T}\), are both positive. Also, for non-zero particle mass \(m\) the trace of the energy-momentum tensor is non-negative. These constraints imply that the bulk and shear stresses (in units of the thermal pressure) satisfy the following inequalities [55]:
\[\tilde{\pi}-\tilde{\Pi}<1,\;\frac{\tilde{\pi}}{2}+\tilde{\Pi}>-1,\;\tilde{\Pi }\leq\frac{\tilde{e}}{3}-1, \tag{68}\]
where \(\tilde{A}\equiv A/P\). Equations (68) restrict the dissipative fluxes to lie within a triangular region in the scaled shear and bulk pressure plane, as depicted in Fig. 1. Note that the upper bound on \(\tilde{\Pi}\) depends on the mass of the constituent particles, and for Fig. 1 we chose \(m/T=1\).11
Footnote 11: For conformal systems the allowed region in Fig. 1 shrinks to the line \(\Pi=0\), and the scaled shear stress can vary from \(-2\) to \(1\).
Let us consider the lower part of this triangular region, where the shear stress is small and the scaled bulk viscous pressure is negative and large in magnitude. The limit \(\tilde{\Pi}\!=\!-1\), \(\tilde{\pi}\!=\!0\) characterises a state where the effective pressures vanish: \(P_{L}\!=\!P_{T}\!=\!0\)[55; 56]. More specifically, as the temperature is held fixed at a value comparable to the particle mass, the state requires \(f(x,p)\sim\mathcal{A}\,\delta(p)/p^{2}\) such that the energy density stems entirely from the rest mass of the particles. The modified Romatschke-Strickland distribution (35) can accommodate such extreme states, due to the fugacity factor \(\alpha_{0}\) which provides control over the normalization factor \(\mathcal{A}\). States with \(P_{L}=P_{T}=0\) (i.e. \(\tilde{\Pi}\!=-\) 1, \(\tilde{\pi}\!=\!0\)) can also be generated with the maximum-entropy distribution, by making the Lagrange parameter \(\Lambda\) sufficiently negative, as discussed in Appendix A of [56].12
Footnote 12: For \(\Lambda<0\), the enhancement at low momenta stems from \(f_{\rm ME}(p\!=\!0)=\exp(-\Lambda m)\).
However, the need for overpopulating low-momentum modes in order to generate states with large negative bulk pressures implies that classical statistics must break down in the lower part of the triangle in Fig. 1. A physical quantity that signals this breakdown is the Boltzmann entropy (27) which goes negative for distribution functions that generate \((\tilde{\pi},\tilde{\Pi})\) pairs in the lower part of the triangle. Using \(f_{\rm ME}\) as given in Eq. (57), the non-equilibrium entropy density can be expressed in terms of macroscopic quantities as
\[s=\Lambda\,e+\lambda_{\Pi}\left(P_{L}\!+\!2P_{T}\right)+\gamma\left(P_{T}\!-\!P_ {L}\right)+n, \tag{69}\]
where \(n\) is the number density of particles,13

Footnote 13: Note that away from thermal equilibrium \(n\) does _not_ satisfy the equilibrium relation \(n_{\rm eq}=P/T\).

\[n=\int dP\,E_{p}\,f_{\rm ME}\,. \tag{70}\]

Figure 1: Shear and bulk stresses, scaled by the thermal pressure \(P\), generated with \(f_{\rm ME}\) for different values of the Lagrange multipliers at fixed \(m/T=1\). The red points indicate unphysical regions of negative Boltzmann entropy.
We scan through the allowed region of phase space for \((\tilde{\pi},\tilde{\Pi})\) with fixed \(\tilde{e}\) and obtain the corresponding Lagrange parameters \((\Lambda,\lambda_{\Pi},\gamma)\). The Boltzmann entropy density for these points is then computed, and depending on whether the entropy is positive or negative, we mark their positions \((\tilde{\pi},\tilde{\Pi})\) in the triangle by green or red dots. Fig. 1 shows that for systems with \(m/T\sim 1\) initial conditions for dissipative fluxes generated by \(f_{\rm ME}\) that lie in the lower parts of the kinetically allowed triangle are not appropriately described using Boltzmann statistics.
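The entropy-sign test just described is straightforward to code up for classical statistics. A minimal sketch (reusing `I_tilde` from the quadrature sketch in Sec. III.2; an illustration, not the production procedure used for Fig. 1):

```python
def entropy_density(Lam, lamPi, gam, m):
    """Non-equilibrium Boltzmann entropy density, Eq. (69)."""
    e  = I_tilde(2, 0, 0, Lam, lamPi, gam, m)   # energy density
    PL = I_tilde(2, 2, 0, Lam, lamPi, gam, m)   # longitudinal pressure
    PT = I_tilde(2, 0, 1, Lam, lamPi, gam, m)   # transverse pressure
    nd = I_tilde(1, 0, 0, Lam, lamPi, gam, m)   # number density, Eq. (70)
    return Lam*e + lamPi*(PL + 2.0*PT) + gam*(PT - PL) + nd

# A point is marked green (physical) or red (unphysical) according to this sign:
print(entropy_density(Lam=2.0, lamPi=0.1, gam=0.3, m=0.5) > 0.0)
```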
It is, in fact, reasonable to doubt the applicability of Boltzmann statistics even near the edges of the green region in Fig. 1 where the entropy density becomes small and classical statistics likely begins to break down. In Fig. 2 we therefore explore the ranges of \((\tilde{\pi},\tilde{\Pi})\) that can be accessed using maximum-entropy distributions for quantum statistics. For Bjorken flow, \(f_{\rm ME}\) generalizes for particles with arbitrary statistics to [22]
\[f_{\rm ME}=\left[\exp\Bigl{(}\Lambda E_{p}+\frac{\lambda_{\Pi}}{E_{p}}\,p^{2} +\frac{\gamma}{E_{p}}\left(p_{T}^{2}/2-p_{z}^{2}\right)\Bigr{)}+\theta\right] ^{-1}, \tag{71}\]
where \(\theta\!=\!1\) (\(-1\)) for Fermi-Dirac (Bose-Einstein) statistics, respectively.14 To generate Fig. 2 we scanned a wide range of values for two of the three Lagrange parameters, namely \(\Lambda\) and \(\gamma\), and root-solved for \(\lambda_{\Pi}\) such that \(m/T\) stays fixed at unity (for comparison with the Maxwell-Boltzmann case studied in Fig. 1). With these Lagrange parameters the scaled fluxes are calculated for FD (brown) and BE (black) statistics, shown as scatter plots in Fig. 2 where they are overlaid on the (positive entropy density) green points for classical MB statistics from Fig. 1.15 One sees that, once effects of quantum statistics are consistently incorporated, the fraction of \((\tilde{\pi},\tilde{\Pi})\)-space that can be accessed with the maximum-entropy parametrization of the distribution function is further reduced.16
Footnote 14: This does not allow for the possibility of Bose condensation which will be explored elsewhere.
Footnote 15: One shows easily that the extension of Eq. (27) to quantum statistics always yields positive values for the entropy density.
Footnote 16: We note that the accessible region for quantum statistics, as well as the region with positive entropy density for Maxwell-Boltzmann statistics, shrink when \(m/T\) is reduced, and grows to cover almost the entire triangle when \(m/T\) is large, \(m/T>10\).
For computational economy we will continue to use the Maxwell-Boltzmann form (48,57) of the Maximum Entropy distribution in the rest of the paper. However, in later sections of this paper dealing with non-conformal dynamics we shall restrict ourselves to initial conditions that do not lie outside the region allowed by Fermi-Dirac statistics. This guarantees positive Boltzmann entropy density in the initial state and, due to the H-theorem stating that entropy can never decrease, also at all later times.
### Conformal dynamics
Before comparing results of the maximum entropy truncation scheme with exact solutions of the RTA Boltzmann equation and results from other hydrodynamic approximations for the general case of massive particles, we first study the somewhat simpler massless case. For such a conformal system the bulk viscous pressure vanishes and the energy-momentum tensor \(T^{\mu\nu}\!=\!{\rm diag}\left(e,P_{T},P_{T},P_{L}\right)\) becomes traceless (\(T_{\mu}^{\mu}=0\)). As a result, \(T^{\mu\nu}\) has only two independent components for which we take the energy density \(e\) and effective longitudinal pressure \(P_{L}\).
To obtain the evolution of \(e\) and \(P_{L}\) in the microscopic theory we solve the RTA Boltzmann equation using a standard RS-ansatz as initial condition, obtained from Eq. (35) by setting the parameters \(\alpha_{0}\) and \(m\) to zero:
\[f_{0}=\exp\left(-\frac{\sqrt{p_{T}^{2}+(1{+}\xi_{0})p_{\eta}^{2}/\tau_{0}^{2} }}{T_{0}^{RS}}\right). \tag{72}\]
Figure 2: Shear and bulk stresses generated by \(f_{\rm ME}\) at fixed \(m/T=1\) using Maxwell-Boltzmann (MB, green), Fermi-Dirac (FD, brown), and Bose-Einstein (BE, black) statistics. For classical (MB) statistics only points with positive entropy density are shown.

The conformal (\(m=0\)) limit of Eq. (36) yields for the exact evolution \(e(\tau)\) of the equilibrium energy density
\[e_{\rm eq}(T(\tau))=D(\tau,\tau_{0})\frac{3(T_{0}^{RS})^{4}}{\pi^{2 }}\,{\cal H}_{e}\left(\frac{\tau_{0}}{\tau\sqrt{1+\xi_{0}}}\right)\] \[+\int_{\tau_{0}}^{\tau}\frac{d\tau^{\prime}}{\tau_{R}(\tau^{ \prime})}\,D(\tau,\tau^{\prime})\,{\cal H}_{e}\left(\frac{\tau^{\prime}}{\tau} \right)\,e_{\rm eq}(T(\tau^{\prime})) \tag{73}\]
where
\[{\cal H}_{e}(x)\equiv\frac{1}{2}H_{e}(x,0)=\frac{1}{2}\,\left(x^{2}+\frac{ \tanh^{-1}\sqrt{1-\frac{1}{x^{2}}}}{\sqrt{1-\frac{1}{x^{2}}}}\right). \tag{74}\]
From this the exact temperature evolution \(T(\tau)\) is obtained through the equation of state \(e_{\rm eq}(T)=3P=3T^{4}/\pi^{2}\). For the relaxation time we use the conformal ansatz
\[\tau_{R}(\tau)=5\,\frac{C}{T(\tau)}, \tag{75}\]
fixing \(C=10/(4\pi)\) in this subsection.17 By tuning the parameters \((T_{0}^{RS},\xi_{0})\) we generated a variety of initial values for the normalised shear stress \(\bar{\pi}\equiv\pi/(4P)\) (note that \(\pi=P-P_{L}\)) while keeping the initial temperature fixed at \(T_{0}=500\) MeV. Table 1 tabulates the different initial values of \((T_{0}^{RS},\xi_{0})\) used in our analysis, along with the corresponding initial values for \(\bar{\pi}\) and the color coding used for the evolution trajectories plotted in Fig. 3.

Footnote 17: For a conformal system the parameter \(C\) (which controls the interaction strength among the microscopic constituents) is equal to the specific shear viscosity, \(C=\eta/s\), where \(\eta\) is the shear viscosity and \(s=(e+P)/T\) the entropy density.
In the conformal limit the ME hydrodynamic equations (51)-(53) reduce to
\[\frac{de}{d\tau} =-\frac{e+P_{L}}{\tau}, \tag{76}\] \[\frac{dP_{L}}{d\tau} =-\frac{P_{L}-P}{\tau_{R}}+\frac{\tilde{\zeta}_{\tau}^{L}}{\tau}, \tag{77}\]
where all moments are to be calculated as described in Sec. III.2, albeit with the particle mass \(m\) set to zero. Also, we shall drop the Lagrange parameter \(\lambda_{\Pi}\) in the maximum entropy distribution (57); it was introduced to match the bulk viscous pressure \(\Pi\), which vanishes in conformal systems. To re-write Eqs. (76-77) for \(X^{a}\equiv(e,P_{L})\) in terms of the Lagrange parameters \(x^{a}\equiv(\Lambda,\gamma)\) we use \(dX^{a}=M^{a}_{\ b}\,dx^{b}\), which takes the explicit form
\[\begin{pmatrix}de\\ dP_{L}\end{pmatrix}=\begin{pmatrix}-\tilde{I}_{300}&-\tilde{I}_{301}+\tilde{I} _{320}\\ -\tilde{I}_{320}&-\tilde{I}_{321}+\tilde{I}_{340}\end{pmatrix}\begin{pmatrix}d \Lambda\\ d\gamma\end{pmatrix}. \tag{78}\]
Calculating \(\tilde{I}_{nrq}\) from Eq. (66) with \(m\!=\!0\), the integral over \(p\) can be done analytically18 such that (using \(\bar{\gamma}\!\equiv\!\gamma/\Lambda\))
Footnote 18: For any \((n,r,q)\), the integration over the variable \(t\) in Eq. (79) can also be performed analytically. However, the resulting expressions are cumbersome and therefore not listed here. For analytical expressions of \(\tilde{I}_{nrq}\) in terms of a generating functional please refer to Appendix B of [31].
\[\tilde{I}_{nrq}=\frac{(n+1)!}{(2q)!!\Lambda^{n+2}}\int_{0}^{1}\,\frac{dt}{2\pi ^{2}}\,\frac{\left(1-t^{2}\right)^{q}\,t^{r}}{\left[1+\frac{\bar{\gamma}}{2} \left(1-3t^{2}\right)\right]^{n+2}}. \tag{79}\]
The evolution equations for \((\Lambda,\gamma)\) are obtained from \(dx^{a}/d\tau=(M^{-1})^{a}_{\ b}\,dX^{b}/d\tau\). To match the initial ME distribution to the assumed initial temperature \(T_{0}=500\,\)MeV and the selected initial shear stress values listed in Tables 1 and 2, the initial values \((\Lambda_{0},\gamma_{0})\) for the Lagrange parameters must be chosen as listed in Table 2.
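As a concrete instance of the update cycle sketched in the introduction, the conformal ME hydro equations can be integrated numerically in a few lines. The following is a minimal sketch of Eqs. (76)-(79) (Python with numpy/scipy assumed, natural units; not the production code used for the figures). The initial condition shown corresponds to the magenta entry of Table 2 (\(\Lambda_{0}T_{0}=1\), \(\gamma_{0}=0\)):

```python
import numpy as np
from math import factorial
from scipy.integrate import quad, solve_ivp

def I_conf(n, r, q, Lam, gam):
    """Conformal moments I~_nrq of Eq. (79); requires gam/Lam in (-2, 1)."""
    gb = gam / Lam
    dfq = float(np.prod(np.arange(2*q, 0, -2))) if q > 0 else 1.0
    val = quad(lambda t: (1 - t*t)**q * t**r
               / (1 + 0.5*gb*(1 - 3*t*t))**(n + 2), 0.0, 1.0)[0]
    return factorial(n + 1)/(dfq*Lam**(n + 2)) * val/(2.0*np.pi**2)

def rhs(tau, x, C=10/(4*np.pi)):
    """d(Lambda, gamma)/dtau from dX = M dx, Eqs. (76)-(78)."""
    Lam, gam = x
    e  = I_conf(2, 0, 0, Lam, gam)              # energy density
    PL = I_conf(2, 2, 0, Lam, gam)              # longitudinal pressure
    P  = e/3.0                                  # conformal EoS
    T  = (np.pi**2 * e/3.0)**0.25               # from e_eq = 3 T^4 / pi^2
    tauR = 5.0*C/T                              # Eq. (75)
    zetaL = -3.0*PL + I_conf(2, 4, 0, Lam, gam) # conformal limit of Eq. (54)
    dX = np.array([-(e + PL)/tau, -(PL - P)/tauR + zetaL/tau])
    M = np.array([
        [-I_conf(3, 0, 0, Lam, gam), -I_conf(3, 0, 1, Lam, gam) + I_conf(3, 2, 0, Lam, gam)],
        [-I_conf(3, 2, 0, Lam, gam), -I_conf(3, 2, 1, Lam, gam) + I_conf(3, 4, 0, Lam, gam)],
    ])
    return np.linalg.solve(M, dX)

T0 = 0.5                                        # GeV; Lambda_0 = 1/T_0, gamma_0 = 0
sol = solve_ivp(rhs, (0.1, 10.0), [1.0/T0, 0.0], rtol=1e-8, atol=1e-12)
pbar = [(I_conf(2, 0, 0, L, g)/3 - I_conf(2, 2, 0, L, g)) / (4*I_conf(2, 0, 0, L, g)/3)
        for L, g in sol.y.T]                    # pi/(4P) with pi = P - P_L
```

Starting instead from the anisotropic \((\Lambda_{0},\gamma_{0})\) pairs of Table 2 should trace out trajectories like the colored curves in Fig. 3.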
Figure 3: Evolution of the shear inverse Reynolds number \(\pi/(4P)\) from the exact solution of the RTA Boltzmann equation for a conformal gas of massless particles (dotted lines), compared with that from ME hydrodynamics (“Max-Ent”, thin dashed lines). Hydrodynamic evolution according to the first-order Navier-Stokes limit is shown as a thick red dashed line for comparison. Please refer to Tables 1 or 2 for the color coding.
Figure 3 shows the evolution of the shear inverse Reynolds number \(\bar{\pi}=\pi/(e+P)\) as a function of the inverse Knudsen number \(\tau/\tau_{R}\), computed from the RTA Boltzmann equation (dotted curves) and ME hydrodynamics (dashed curves), for identical initial conditions. We note excellent agreement between the microscopic kinetic and macroscopic hydrodynamic descriptions, except for the blue curves which correspond to the largest initial momentum-space anisotropy, \(\bar{\pi}_{0}=-0.45\), where some small differences between the exact kinetic evolution and its ME hydrodynamic approximation are visible. These differences are ironed out as the inverse Knudsen number reaches values of \(\mathcal{O}(1)\). At late times the system is close to local equilibrium; by \(\tau/\tau_{R}\approx 3\), all curves are seen to merge with the first-order (Navier-Stokes) hydrodynamic result \(\bar{\pi}_{\rm NS}=\frac{4}{15}(\tau_{R}/\tau)\) that controls the late-time asymptotics for conformal Bjorken flow.
In Figs. 4a,b we show the ME hydrodynamic time evolution of the Lagrange parameter \(\Lambda\) in units of the instantaneous inverse temperature (a), and the scaled anisotropy parameter \(\bar{\gamma}=\gamma/\Lambda\) (b). Although different curves for \(\Lambda T\) in panel (a) evolve rather differently from each other at times \(\tau\ll\tau_{R}\) where the system is far from equilibrium, they converge to a universal curve around \(\tau=\tau_{R}\), which then approaches unity as the system locally thermalizes (\(\tau\gg\tau_{R}\)) with \(\Lambda\) assuming the role of an inverse temperature. To understand the time evolution of \(\bar{\gamma}\) in panel (b) we first re-write the maximum-entropy distribution (57) with \(\lambda_{\Pi}\!=\!0\) in the form
\[f_{\rm ME}(p,\theta_{p})=\exp\Bigl{[}-\Lambda\,p\,\Bigl{(}1\!+\! \frac{\bar{\gamma}}{2}\Bigr{)}\Bigl{(}1-\frac{3\bar{\gamma}}{2\!+\!\bar{ \gamma}}\cos^{2}\theta_{p}\Bigr{)}\Bigr{]}. \tag{80}\]
Here \(p=\sqrt{p_{T}^{2}+p_{z}^{2}}\) is the magnitude of the 3-momentum in the LRF and \(\theta_{p}=\cos^{-1}(p_{z}/p)\) its polar angle. First, Eq. (67) implies \(\bar{\gamma}\in(-2,1)\). For \(\bar{\gamma}\to-2\), \(f_{\rm ME}\to\exp(-3\,\Lambda\,p_{z}^{2}/p)\), whereas for \(\bar{\gamma}\to 1\), \(f_{\rm ME}\to\exp\bigl(-3\,\Lambda\,p_{T}^{2}/(2p)\bigr)\). Thus, as \(\bar{\gamma}\) varies between these limits, the momentum space distribution changes from one falling off steeply in the longitudinal direction (\(P_{L}/P_{T}\ll 1\)) to one that rapidly decreases in the transverse direction (\(P_{L}/P_{T}\gg 1\)). In Fig. 4b we see that for all but the orange curve \(\bar{\gamma}\) initially decreases rapidly towards negative values.19 This is because of the initially very large longitudinal expansion rate in Bjorken flow, which rapidly red-shifts the longitudinal particle momenta to small values. As a result, the effective longitudinal pressure in the fluid quickly becomes much smaller than the transverse one. The orange curve corresponds to an initial distribution in which only a few particles have appreciable longitudinal momenta; for it, the red-shifting of \(p_{z}\) by the longitudinal expansion is negligible. Instead, microscopic collisions begin to locally isotropize the longitudinal and transverse momenta, bringing the longitudinal and transverse pressures closer to each other. The same phenomenon is observed for the other curves at somewhat later times. To describe this process of local isotropization \(\bar{\gamma}\) increases as time proceeds: right away for the orange curve, a bit later for the others. At \(\tau/\tau_{R}\to\infty\) the fluid reaches local thermal equilibrium (i.e. local momentum isotropy), and \(\bar{\gamma}\to 0\).
Footnote 19: In fact, for a free-streaming gas undergoing Bjorken expansion (\(\tau_{R}\to\infty\)), \(\bar{\gamma}\) for these curves would approach its limit of \(-2\) at late times.
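The two limiting forms of \(f_{\rm ME}\) quoted above are easily verified numerically; a minimal sketch (illustrative only, with arbitrarily chosen test momenta):

```python
import numpy as np

def f_me_conformal(p, cos_th, Lam, gbar):
    """Eq. (80): the conformal ME distribution with lambda_Pi = 0."""
    return np.exp(-Lam*p*(1.0 + 0.5*gbar)*(1.0 - 3.0*gbar*cos_th**2/(2.0 + gbar)))

p, cth, Lam = 1.7, 0.3, 1.0
pz2, pT2 = (p*cth)**2, p*p*(1.0 - cth*cth)
# gbar -> -2: f_ME -> exp(-3 Lam pz^2/p); gbar -> 1: f_ME -> exp(-3 Lam pT^2/(2p))
assert np.isclose(f_me_conformal(p, cth, Lam, -2.0 + 1e-9), np.exp(-3.0*Lam*pz2/p))
assert np.isclose(f_me_conformal(p, cth, Lam, 1.0), np.exp(-1.5*Lam*pT2/p))
```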
Figure 4: Conformal ME hydrodynamic evolution of the Lagrange parameters \(\Lambda\) (panel (a)) and \(\gamma/\Lambda\) (panel (b)), for the maximum-entropy distribution with initial conditions listed in Table 2.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline & Blue & Green & Magenta & Maroon & Orange \\ \hline \(\bar{\pi}_{0}\) & \(-0.45\) & \(-0.25\) & \(0\) & \(0.15\) & \(0.245\) \\ \hline \hline \(\Lambda_{0}T_{0}\) & \(2.456\) & \(1.176\) & \(1.0\) & \(1.133\) & \(6.017\) \\ \hline \(\gamma_{0}/\Lambda_{0}\) & \(0.845\) & \(0.475\) & \(0\) & \(-0.56\) & \(-1.818\) \\ \hline \end{tabular}
\end{table}
Table 2: Association of initial conditions of \(\bar{\pi}\equiv\pi/(4P)\) with the Maximum-Entropy Lagrange parameters for conformal dynamics.
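For concreteness, here is a minimal sketch of the conformal ME hydrodynamic evolution described above (Python with SciPy; `I_tilde` is the moment routine from the previous sketch, repeated for self-containment). The source term is written as \(\tilde{\zeta}=\tilde{I}_{240}-3P_{L}\), the combination that follows from taking the \(p_{z}^{2}\) moment of the RTA Boltzmann equation in Bjorken coordinates; since Sec. III.2 is not reproduced here, this identification, like the chosen \(\tau_{0}\), should be read as an assumption of the sketch:

```python
import numpy as np
from math import factorial
from scipy.integrate import quad, solve_ivp

def dfact(k): return 1 if k <= 0 else k * dfact(k - 2)

def I_tilde(n, r, q, Lam, gbar):   # Eq. (79), as in the previous sketch
    g = lambda t: (1 - t*t)**q * t**r / (
        2.0*np.pi**2 * (1.0 + 0.5*gbar*(1.0 - 3.0*t*t))**(n + 2))
    return factorial(n + 1) / (dfact(2*q) * Lam**(n + 2)) * quad(g, 0.0, 1.0)[0]

def me_bjorken_rhs(tau, y, C=10.0/(4.0*np.pi)):
    Lam, gam = y
    gbar = gam / Lam
    e  = I_tilde(2, 0, 0, Lam, gbar)               # e = I~_{200}
    PL = I_tilde(2, 2, 0, Lam, gbar)               # P_L = I~_{220}
    T  = (np.pi**2 * e / 3.0)**0.25                # Landau matching, e = 3T^4/pi^2
    tauR = C / T                                   # Eq. (75)
    zeta = I_tilde(2, 4, 0, Lam, gbar) - 3.0*PL    # assumed form of the source term
    dX = np.array([-(e + PL)/tau,                  # Eq. (76)
                   -(PL - e/3.0)/tauR + zeta/tau]) # Eq. (77)
    M = np.array([[-I_tilde(3,0,0,Lam,gbar), -I_tilde(3,0,1,Lam,gbar)+I_tilde(3,2,0,Lam,gbar)],
                  [-I_tilde(3,2,0,Lam,gbar), -I_tilde(3,2,1,Lam,gbar)+I_tilde(3,4,0,Lam,gbar)]])
    return np.linalg.solve(M, dX)                  # Eq. (78): dx/dtau = M^{-1} dX/dtau

# Magenta curve of Table 2 (Lambda_0 T_0 = 1, gamma_0 = 0), with
# T_0 = 500 MeV ~ 2.53 fm^-1 and an assumed tau_0 = 0.1 fm/c:
T0 = 2.53
sol = solve_ivp(me_bjorken_rhs, [0.1, 10.0], [1.0/T0, 0.0], rtol=1e-8, atol=1e-12)
```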
### Non-conformal dynamics
We now break conformal symmetry by introducing a non-zero, fixed mass \(m=500\,\)MeV for the particle constituents. For ease of comparison we keep the same conformal ansatz (75) for the relaxation time, with \(C=10/4\pi\), as before. We then solve the RTA Boltzmann equation (32) with initial conditions parametrized by the generalized Romatschke-Strickland (RS) ansatz (35). \(T^{\mu\nu}\) has now the three macroscopic degrees of freedom \((e,P_{L},P_{T})\) or, equivalently, \((T,\Pi,\pi)\). To explore the range of evolution trajectories we select a variety of RS parameter sets \((\alpha_{0},T_{0}^{RS},\xi_{0})\), listed in Table 4, subject to the constraint of fixed initial temperature \(T_{0}=500\,\)MeV and the requirement that the corresponding initial bulk and shear viscous stresses (listed in Table 3) remain inside the domain that can be accessed with \(f_{\rm ME}\) for Fermi-Dirac statistics, as discussed in Sec. III.3.
We start from the exact solution of the RTA Boltzmann equation, obtaining the exact temperature evolution \(T(\tau)\) from the energy density given in Eq. (36) by Landau matching, \(e(\tau)=e_{\rm eq}\big{(}T(\tau)\big{)}\), where \(e_{\rm eq}(T,m)\) is given by Eq. (6). Plugging \(T(\tau)\) into Eqs. (39, 40) we obtain \(P_{T}\) and \(P_{L}\) as functions of \(\tau\). From these we then calculate the exact evolution of the bulk and shear viscous stresses, \(\Pi=(2\,P_{T}+P_{L}-3P)/3\) and \(\pi=2(P_{T}-P_{L})/3\).
The resulting evolution trajectories are shown in Fig. 5 in the \((\pi,\Pi)\)-plane and by solid lines in Fig. 6 as function of time. In Fig. 5 the brown dots show the range of initial conditions accessible with the ME distribution function for particles with mass \(m/T_{0}=1\) (where here \(T_{0}=500\,\)MeV) obeying Fermi-Dirac statistics. By construction, all trajectories start inside the brown-dotted region but, since the temperature decreases with time and therefore \(m/T\) increases, the ME-accessible region grows with time, and the expansion trajectories are seen to make use of this enhanced freedom. However, they never move outside the kinetically allowed20 triangle delineated by the solid blue lines for zero longitudinal and transverse pressures and the condition \(T_{\mu}^{\mu}\geq 0\).21 At late times all trajectories converge on the thermal equilibrium point \(\bar{\pi}=\bar{\Pi}=0\).
Footnote 20: Assuming positive definite distribution functions which applies for both the generalized RS and the ME parametrizations.
Footnote 21: Positivity of the trace implies \(\Pi/P\leq e/(3P)-1\) which defines a horizontal line that moves upward as the system cools [55] and is therefore not shown in the figure. We have checked that none of the trajectories ever moves above this time-evolving bound.
In contrast, second-order Chapman-Enskog type hydrodynamics is known [56] to violate these triangular bounds for certain far-from-equilibrium initial conditions. For large values of the bulk and/or shear stresses, the off-equilibrium correction \(\delta f\) (see Eq. (25)) can become so large and negative that the total distribution function \(f=f_{\rm eq}+\delta f\) becomes negative (i.e. unphysical) over a large range of momenta. This is at least a contributing factor to the observed violations.
We will now show that the same does not happen in ME hydrodynamics. In Fig. 6 we compare the exact solution of the RTA Boltzmann equation (solid lines) with its ME hydrodynamic approximation (dashed lines), for identical initial conditions. The corresponding initial values of the Lagrange parameters \((\Lambda,\lambda_{\Pi},\gamma)\) are listed in Table 5.
Figure 5: Evolution of dissipative fluxes in the scaled shear-bulk plane using RTA Boltzmann equation (solid lines). The brown zone denotes the region that can be populated initially by \(f_{\rm ME}\) for Fermi-Dirac statistics.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & Blue & Green & Magenta & Maroon & Orange & Black & Cyan \\ \hline \(T_{0}^{RS}/T_{0}\) & \(0.631\) & \(0.921\) & \(0.364\) & \(1.343\) & \(0.198\) & \(3.598\) & \(0.468\) \\ \hline \(\alpha_{0}\) & \(0.762\) & \(-0.845\) & \(3.310\) & \(1.726\) & \(5.313\) & \(-5.206\) & \(3.357\) \\ \hline \(\xi_{0}\) & \(-0.82\) & \(-0.807\) & \(-0.849\) & \(236.25\) & \(-0.982\) & \(0\) & \(0\) \\ \hline \end{tabular}
\end{table}
Table 4: Values of the RS parameters \((T_{0}^{RS},\alpha_{0},\xi_{0})\) in Eq. (35) which generate the initial conditions listed in Table 3, all having the same fixed initial temperature \(T_{0}=500\,\)MeV and particle mass \(m/T_{0}=1\).
Figure 6 shows the evolution of the bulk (panel (a)) and shear (panel (b)) inverse Reynolds numbers, \(\bar{\Pi}=\Pi/(e+P)\) and \(\bar{\pi}=\pi/(e+P)\), as functions of the proper time in units of \(\tau_{R}\) (i.e. of the inverse Knudsen number). (Please use Table 3 to identify the initial conditions corresponding to each color.) Solid lines mark the exact RTA BE evolution, dashed lines the ME hydrodynamic approximation. The somewhat thicker red dashed curves show the evolution according to first-order Navier-Stokes hydrodynamics: \(\bar{\Pi}_{\rm NS}=-(\zeta/s)/(\tau T)\), \(\bar{\pi}_{\rm NS}=(4/3)(\eta/s)/(\tau T)\). Since all curves start from the same initial temperature \(T_{0}=500\,\)MeV, the asymptotic Navier-Stokes result in panels (a) and (b) is unique and the same for all trajectories. In almost all cases, ranging from curves with large initial momentum _anisotropies_ to those with large initial _isotropic_ off-equilibrium deformations, the ME hydrodynamic results are seen to be in very good agreement with the exact RTA Boltzmann solution, throughout their evolution.22 This is in sharp contrast with the much poorer performance of second-order Chapman-Enskog hydrodynamics, which was studied in [56] (see Figs. 15-17 in [56]) and already mentioned above.
Footnote 22: The one exception is the set of orange trajectories, corresponding to the largest negative initial shear stress \(\bar{\pi}_{0}=-1.8\), for which significant deviations from the exact evolution are observed for the bulk stress. This indicates a weakness of ME hydrodynamics in handling large shear-bulk coupling effects.
As explained in Sec. III.4 during our analysis of conformal dynamics, the rapid longitudinal expansion of Bjorken flow at early times strongly red-shifts the longitudinal momenta of the particles. In the absence of isotropizing collisions among the microscopic constituents this eventually results in a distribution function that is sharply peaked in \(p_{z}\), \(f(\tau;p_{T},p_{z})\propto\delta(p_{z})\), and the effective longitudinal pressure \(P_{L}\) correspondingly approaches zero. Thus, generic initial conditions at early times (while the Knudsen number is large in Bjorken flow) rapidly evolve towards \(P_{L}\approx 0\), giving rise to early-time universality in scaled quantities such as \(P_{L}/P\). Being a characteristic of the Bjorken expansion geometry, this feature generalizes to non-conformal systems as well [55; 56]. We show in Fig. 7 the evolution of the scaled effective longitudinal pressure as a function of scaled time, comparing the exact solution from the RTA Boltzmann equation with ME hydrodynamics. The agreement between both approaches is excellent. Fig. 7 should be compared with panel (b) of Fig. 4 in Sec. III.4 for its similarities; the explanation offered there carries over directly to the \(P_{L}/P\) ratio plotted here and therefore needs no repetition.
The discussion of the ME hydrodynamic time evolution of the energy-momentum tensor is completed in Appendix C with an analysis of the evolution of the ME Lagrange parameters that accompanies the evolution trajectories shown in Figs. 6 and 7. We close this section by comparing the performance of ME hydrodynamics with
Figure 6: Comparison of the time evolution of the (a) bulk and (b) shear inverse Reynolds numbers obtained from the exact solution of the RTA BE (solid lines) and from ME hydrodynamics (dashed lines).
Figure 7: Comparison of the evolution of the scaled effective longitudinal pressure obtained from kinetic theory (solid lines) and ME hydrodynamics (dashed lines).
that of the (modified) viscous anisotropic hydrodynamic (mVAH) approach discussed in Ref. [56]. Like ME hydrodynamics, mVAH was found to agree very well with the exact RTA BE solutions shown in Figs. 6 and 7 (see Figs. 18-20 in [56]). In fact, in Appendix A we show that for generalized RS initial conditions, as used in Figs. 6 and 7, the agreement of mVAH with the exact RTA BE solution is even better than that of ME hydrodynamics. On the other hand, if we initialize the identical macroscopic initial conditions listed in Table 3 not with a generalized RS ansatz for the distribution function (which is used to close the mVAH equations), with parameters listed in Table 4, but instead with a maximum entropy distribution \(f_{\rm ME}\) with parameters listed in Table 5 (which is used to close the ME hydrodynamic equations), the picture turns upside-down: for \(f_{\rm ME}\) initial conditions, the RTA BE evolution is slightly different from that for mVAH initial conditions, and ME hydrodynamics describes the exact evolution more accurately than mVAH.
In spite of the excellent ME hydrodynamic description of the exact evolution of the energy-momentum tensor obtained from kinetic theory, it is shown in App. D that significant discrepancies between the micro- and macroscopic approaches are seen in the evolution of the entropy density. This parallels a similar finding for anisotropic hydrodynamics, first made in [59] and here confirmed in App. D. In fact, for the entropy evolution in Bjorken flow, ME hydrodynamics and mVAH agree much better with each other than either does with the exact kinetic theory.
In summary, for Bjorken expansion mVAH and ME hydrodynamics are both highly competitive macroscopic approximations to the underlying kinetic evolution. However, since the modified RS ansatz (35) for the distribution function (on which mVAH rests) was custom-built for Bjorken geometry, whereas this is not the case for the \(f_{\rm ME}\) ansatz (29), ME hydrodynamics is expected to exhibit superior performance in general expansion scenarios, without the restricting symmetries of Bjorken flow.
## IV Gubser flow
To put this expectation to the test, in this section we take a first step beyond Bjorken symmetry by studying Gubser flow [50; 51]. While Bjorken flow, without any transverse expansion, is widely assumed to be a good approximation for the dynamical state of the matter formed in ultra-relativistic heavy-ion collisions just after its creation, the finite transverse size of the colliding nuclei implies large transverse density and pressure gradients of the created matter which, after a period of a few relaxation times, drive collective transverse expansion, starting at the edges of the transverse energy density distribution. The subsequent stage of fully three-dimensional expansion without any remaining symmetries and very different longitudinal and transverse expansion rates can no longer be treated analytically. Gubser flow is an idealization located somewhere between Bjorken flow and generic three-dimensional flow: it incorporates transverse flow on top of longitudinal boost-invariant expansion, albeit with a very specific transverse flow profile23 that retains just enough symmetry that the RTA Boltzmann equation continues to be exactly solvable by analytic means [60; 61].
Footnote 23: The transverse expansion encoded in Gubser flow is so violent that, at late times, the _transverse_ momenta of the constituent particles are red-shifted all the way towards zero effective transverse pressure \(P_{T}\).
Gubser derived his flow profile by starting from Bjorken flow, keeping longitudinal boost-invariance and reflection symmetry as well as azimuthal rotational symmetry around the beam axis but relaxing the assumption of transverse homogeneity. Mathematically speaking, the Gubser flow profile replaces the \(ISO(2)\otimes SO(1,1)\otimes Z_{2}\) symmetry of Bjorken flow by invariance under \(SO(3)_{q}\otimes SO(1,1)\otimes Z_{2}\), where \(SO(3)_{q}\) denotes the special conformal group of transformations [50; 51]. The symmetries of this flow are manifest in de Sitter coordinates in a curved space-time constructed as the direct product of 3-dimensional de Sitter space with a line, \(dS_{3}\otimes R\). One first Weyl-rescales the Milne metric,
\[d\hat{s}^{2}\equiv\frac{ds^{2}}{\tau^{2}}=\frac{d\tau^{2}-dr^{2}-r^{2}d\phi^{ 2}}{\tau^{2}}-d\eta^{2}, \tag{81}\]
and then transforms the Milne coordinates \(x^{\mu}=(\tau,r,\phi,\eta)\) to "Gubser coordinates" \(\hat{x}^{\mu}=(\rho,\theta,\phi,\eta)\), with
\[\rho =-\sinh^{-1}\left(\frac{1-q^{2}\tau^{2}+q^{2}r^{2}}{2q\tau}\right), \tag{82}\] \[\theta =\tan^{-1}\left(\frac{2qr}{1+q^{2}\tau^{2}-q^{2}r^{2}}\right), \tag{83}\]
where \(q\) is an energy scale that sets the transverse size of the system. In these coordinates the metric takes the form
\[\hat{g}_{\mu\nu}=\text{diag}\left(1,\,-\cosh^{2}\rho,\,-\cosh^{2}\rho\,\sin^{ 2}\theta,\,-1\right) \tag{84}\]
with the line element
\[d\hat{s}^{2}=d\rho^{2}-\cosh^{2}\rho\left(d\theta^{2}+\sin^{2}\theta\,d\phi^{ 2}\right)-d\eta^{2}, \tag{85}\]
which is manifestly invariant under the \(SO(3)_{q}\) group of rotations in \((\theta,\phi)\) space. In these coordinates, the flow appears static, \(\hat{u}^{\mu}=(1,0,0,0)\), and all quantities depend only on the Gubser time \(\rho\). Moreover, to make conformal symmetry manifest, all quantities expressed in Gubser coordinates (denoted by a hat) are rendered dimensionless by appropriate rescaling with powers of the Weyl rescaling parameter \(\tau\). For example, the Gubser temperature and energy-momentum tensor are
\[\hat{T}(\rho)=\tau\,T(\tau,r),\quad\hat{T}^{\mu\nu}(\rho)=\tau^{2}\,\frac{ \partial\hat{x}^{\mu}}{\partial x^{\alpha}}\frac{\partial\hat{x}^{\nu}}{ \partial x^{\beta}}\,T^{\alpha\beta}(\tau,r). \tag{86}\]
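The coordinate map (82)-(83) and the Weyl rescaling (86) are straightforward to implement; the following sketch (with \(q\) set to the transverse-size scale quoted in footnote 30 below, an illustrative assumption) reproduces the numbers given there:

```python
import numpy as np
from scipy.optimize import brentq

def milne_to_gubser(tau, r, q=1.0/4.3):
    """Eqs. (82)-(83): Milne (tau, r) [fm] -> Gubser (rho, theta).
    q = 1/(4.3 fm) is the transverse-size scale of footnote 30;
    arctan2 picks the branch with theta in [0, pi)."""
    rho = -np.arcsinh((1.0 - q*q*tau*tau + q*q*r*r) / (2.0*q*tau))
    theta = np.arctan2(2.0*q*r, 1.0 + q*q*tau*tau - q*q*r*r)
    return rho, theta

# Consistency check against footnote 30: at r = 0, find the Milne time where
# rho = -10, then undo the Weyl rescaling of Eq. (86), T = That/tau:
tau0 = brentq(lambda t: milne_to_gubser(t, 0.0)[0] + 10.0, 1e-6, 1.0)
T0_GeV = (0.002 / tau0) * 0.19733     # hbar*c = 0.19733 GeV fm
print(tau0, T0_GeV)                   # ~1.9e-4 fm/c and ~2 GeV
```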
The Gubser symmetries imply a diagonal form of the energy-momentum tensor, \(\hat{T}^{\mu\nu}\!=\!\mathrm{diag}(\hat{e},\hat{P}_{T},\hat{P}_{T},\hat{P}_{L})\), with \(\hat{e}\!=\!3\hat{P}\propto\hat{T}^{4}\) and zero trace such that there is no bulk viscous pressure and \(\hat{e}\!=\!2\hat{P}_{T}\!+\!\hat{P}_{L}\). The shear stress tensor has only one independent component which we take (as in Bjorken flow) to be \(\hat{\pi}\!=\!\frac{2}{3}\,(\hat{P}_{T}\!-\!\hat{P}_{L})\). Using as before bars to denote normalization by \(4P\), in kinetic theory the normalized shear stress spans the range between \(\bar{\hat{\pi}}\!=\!-\frac{1}{2}\) (where \(\hat{P}_{T}\!=\!0\) and \(\hat{P}_{L}\!=\!3\hat{P}\)) and \(\bar{\hat{\pi}}\!=\!+\frac{1}{4}\) (where \(\hat{P}_{L}\!=\!0\) and \(\hat{P}_{T}\!=\!\frac{3}{2}\hat{P}\)).
The phase-space distribution function satisfying the symmetries of this flow can only depend on Gubser invariant variables, \(\rho\), \(\hat{p}_{\Omega}\equiv\sqrt{\hat{p}_{\theta}^{2}+\hat{p}_{\phi}^{2}/\sin^{2}\theta}\), and \(\hat{p}_{\eta}\), i.e., \(f(x,p)=f(\rho;\hat{p}_{\Omega},\hat{p}_{\eta})\)[61]. Using these choices of phase-space variables, the RTA BE simplifies to
\[\frac{\partial f}{\partial\rho}=-\frac{1}{\hat{\tau}_{R}}\,\left(f-f_{\rm eq}\right) \tag{87}\]
where the relaxation time \(\hat{\tau}_{R}=5(\eta/s)/\hat{T}\). The equilibrium distribution is
\[f_{\rm eq}=\exp\left(-\frac{\hat{p}^{\rho}}{\hat{T}}\right)\quad\text{with} \quad\hat{p}^{\rho}\equiv\sqrt{\frac{\hat{p}_{\Omega}^{2}}{\cosh^{2}\rho}+ \hat{p}_{\eta}^{2}}. \tag{88}\]
The formal solution of Eq. (87) is [60]
\[f(\rho;\hat{p}_{\Omega}^{2},\hat{p}_{\eta})=D(\rho,\rho_{0})\,f_ {0}(\hat{p}_{\Omega}^{2},\hat{p}_{\eta})\] \[\qquad\qquad+\int_{\rho_{0}}^{\rho}\frac{d\rho^{\prime}}{\hat{ \tau}_{R}(\rho^{\prime})}\,D(\rho,\rho^{\prime})\,f_{\rm eq}\left(\hat{p}^{ \rho}(\rho^{\prime}),\hat{T}(\rho^{\prime})\right), \tag{89}\]
with \(D(\rho_{2},\rho_{1})=\exp\!\left(-\int_{\rho_{1}}^{\rho_{2}}\,d\rho^{\prime}/ \hat{\tau}_{R}(\rho^{\prime})\right)\) for the damping function. Similar to the Bjorken case we parametrize the initial distribution with a Romatschke-Strickland ansatz
\[f_{0}(\hat{p}_{\Omega}^{2},\hat{p}_{\eta})=\exp\left(-\frac{\sqrt{\hat{p}_{ \Omega}^{2}/\cosh^{2}\rho_{0}+\left(1\!+\!\xi_{0}\right)\hat{p}_{\eta}^{2}}}{ \hat{T}_{0}^{RS}}\right). \tag{90}\]
The temperature evolution is then obtained by solving the integral equation24
Footnote 24: Equation (91) extends the original work in [60; 61] from equilibrium initial conditions to the general case of non-zero initial momentum anisotropy \(\xi_{0}\)[62; 63].
\[\hat{T}^{4}(\rho) =D(\rho,\rho_{0})\,(\hat{T}_{0}^{RS})^{4}\,\mathcal{E}_{G}(\rho, \rho_{0};\xi_{0})\] \[+\int_{\rho_{0}}^{\rho}\frac{d\rho^{\prime}}{\hat{\tau}_{R}(\rho^ {\prime})}\,D(\rho,\rho^{\prime})\,\hat{T}^{4}(\rho^{\prime})\,\mathcal{E}_{G} (\rho,\rho^{\prime};0) \tag{91}\]
where
\[\mathcal{E}_{G}(\rho,\rho_{1};\xi)=\left(\frac{\cosh\rho_{1}}{\cosh\rho}\right) ^{4}\mathcal{H}_{e}\left(\frac{\cosh\rho}{\cosh\rho_{1}\,\sqrt{1\!+\!\xi}} \right), \tag{92}\]
with \(\mathcal{H}_{e}(x)\) defined in Eq. (74). With \(\hat{T}(\rho)\) from (91) the exact distribution function can be computed from Eq. (89).
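Numerically, Eq. (91) can be solved by fixed-point iteration; a minimal sketch follows. The function \(\mathcal{H}_{e}\) of Eq. (74) is not reproduced here and therefore enters as a user-supplied callable; the grid resolution and iteration count are rough choices, and convergence of the iteration is assumed:

```python
import numpy as np
from scipy.integrate import trapezoid

def solve_gubser_T(rho, That0_RS, xi0, eta_s, H_e, n_iter=100):
    """Fixed-point iteration for the integral equation (91).
    rho  : increasing float grid with rho[0] = rho_0
    H_e  : vectorized callable for the function defined in Eq. (74)."""
    def E_G(rho_a, rho_b, xi):                       # Eq. (92)
        x = np.cosh(rho_a) / (np.cosh(rho_b) * np.sqrt(1.0 + xi))
        return (np.cosh(rho_b) / np.cosh(rho_a))**4 * H_e(x)

    That = np.full_like(rho, That0_RS)               # initial guess
    for _ in range(n_iter):
        inv_tauR = That / (5.0 * eta_s)              # 1/tauR^hat = That/(5 eta/s)
        # Phi(rho) = int_{rho_0}^{rho} drho'/tauR^hat, so D(a,b) = exp(Phi(b)-Phi(a))
        Phi = np.concatenate(([0.0],
              np.cumsum(0.5*(inv_tauR[1:] + inv_tauR[:-1]) * np.diff(rho))))
        T4 = np.empty_like(rho)
        for i in range(len(rho)):
            D = np.exp(Phi[:i+1] - Phi[i])           # damping D(rho_i, rho')
            free = D[0] * That0_RS**4 * E_G(rho[i], rho[0], xi0)
            src = D * inv_tauR[:i+1] * That[:i+1]**4 * E_G(rho[i], rho[:i+1], 0.0)
            T4[i] = free + trapezoid(src, rho[:i+1])
        That = T4**0.25
    return That
```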
### ME hydrodynamics for Gubser flow
To derive the Maximum Entropy hydrodynamic equations for Gubser flow we first write down the exact evolution equations for the two independent components of \(T^{\mu\nu}\), \(e\) and \(P_{T}\):25
Footnote 25: This choice differs from what we did for Bjorken flow where, in the conformal limit, we selected \(e\) and \(P_{L}\). The reason is that in Bjorken flow the thermal energy decreases with time exclusively by work done by the _longitudinal_ pressure \(P_{L}\) whereas in Gubser flow we want to focus on the effects of the _transverse_ pressure \(P_{T}\) on the cooling and flow patterns of the system.
\[\frac{d\hat{e}}{d\rho} =-2\tanh\rho\,\left(\hat{e}+\hat{P}_{T}\right), \tag{93}\] \[\frac{d\hat{P}_{T}}{d\rho} =-\frac{1}{\hat{\tau}_{R}}\left(\hat{P}_{T}-\hat{P}\right)-\left(2\tanh \rho\right)\,\hat{\zeta}^{\perp}. \tag{94}\]
Note that \(2\,\tanh\rho\) is the scalar expansion rate in Gubser flow, \(\hat{\theta}\equiv\hat{\nabla}\!\cdot\!\hat{u}=2\,\tanh\rho\), which here takes the place held by \(1/\tau\) in Bjorken flow.26 The coupling \(\hat{\zeta}^{\perp}\) is defined by
Footnote 26: We refer to Appendix E for a discussion of the nontrivial relationship between the scalar expansion rates \(\theta=\nabla\cdot u\) in Minkowski space and \(\hat{\theta}\equiv\hat{\nabla}\cdot\hat{u}\) in Gubser space. We note in particular that \(\hat{\theta}=2\,\tanh\rho\) can take either sign whereas the scalar expansion rate in Minkowski space is always positive for Gubser flow, \(\theta\geq 0\).
\[\hat{\zeta}^{\perp}=2\hat{P}_{T}-2\hat{I}_{202}^{\rm exact} \tag{95}\]
with the thermodynamic integral
\[\hat{I}_{nrq}^{\rm exact}\equiv\frac{1}{(2q)!!}\int d\hat{P}\,\left(\hat{p}^{ \rho}\right)^{n-r-2q}\,\left(\hat{p}_{\eta}\right)^{r}\,\left(\frac{\hat{p}_{ \Omega}}{\cosh\rho}\right)^{2q}f, \tag{96}\]
where
\[d\hat{P}\equiv\frac{d\hat{p}_{\theta}\,d\hat{p}_{\phi}\,d\hat{p}_{\eta}}{(2\pi)^{3 }\,\hat{p}^{\rho}\,\sqrt{-\hat{g}}}, \tag{97}\]
with \(\sqrt{-\hat{g}}=\cosh^{2}\rho\,\sin\theta\). Note that Eq. (96) maps onto Eq. (47) with the substitutions
\[\frac{\hat{p}_{\theta}}{\cosh\rho}\mapsto p_{x},\,\,\frac{\hat{p}_{\phi}}{\cosh \rho\,\sin\theta}\mapsto p_{y},\,\,\hat{p}_{\eta}\mapsto p_{z}, \tag{98}\]
where the use of LRF coordinates is implied.27
Footnote 27: Note that this mapping also implies \(\hat{p}_{\Omega}/\cosh\rho\mapsto p_{T}\) and \(\hat{p}^{\rho}\mapsto E_{p}=p\).
As in Sec. III.2 we close the set of exact equations (93-94) by replacing \(\hat{I}_{202}^{\rm exact}\to\tilde{\hat{I}}_{202}\) where the tilde indicates substitution of the Maximum Entropy distribution \(f_{\rm ME}\) as an approximation for the exact solution \(f\) of the RTA Boltzmann equation:
\[\tilde{\hat{I}}_{nrq}\equiv\frac{1}{(2q)!!}\int d\hat{P}\,\left(\hat{p}^{ \rho}\right)^{n-r-2q}\,\left(\hat{p}_{\eta}\right)^{r}\,\left(\frac{\hat{p}_{ \Omega}}{\cosh\rho}\right)^{2q}\,f_{\rm ME}. \tag{99}\]
In Gubser coordinates the ME distribution reads
\[f_{\text{ME}}(\rho,\hat{p}_{\Omega},\hat{p}_{\eta})=\exp\left[-\hat{\Lambda}\, \hat{p}^{\rho}-\frac{\hat{\gamma}}{\hat{p}^{\rho}}\left(\frac{\hat{p}_{\Omega}^ {2}}{2\cosh^{2}\rho}-\hat{p}_{\eta}^{2}\right)\right]. \tag{100}\]
We will again solve directly for the Lagrange parameters \((\hat{\Lambda},\hat{\gamma})\) instead of \((\hat{e},\hat{P}_{T})\), by inverting
\[\begin{pmatrix}d\hat{e}\\ d\hat{P}_{T}\end{pmatrix}=\begin{pmatrix}-\tilde{\hat{I}}_{300}&-\tilde{\hat{I}}_{ 301}+\tilde{\hat{I}}_{320}\\ -\tilde{\hat{I}}_{301}&-2\tilde{\hat{I}}_{302}+\tilde{\hat{I}}_{321}\end{pmatrix} \begin{pmatrix}d\hat{\Lambda}\\ d\hat{\gamma}\end{pmatrix}. \tag{101}\]
Making use of the mapping (98) we compute the moments \(\tilde{\hat{I}}_{nrq}\) from Eq. (79) by replacing the Lagrange parameters \((\Lambda,\bar{\gamma})\) in the latter by their hatted counterparts.
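A minimal sketch of the resulting evolution system (Python; `I_tilde` is the conformal moment routine of Eq. (79), reused via the mapping (98) as described above; the conformal equation of state \(\hat{e}=3\hat{T}^{4}/\pi^{2}\) for a massless Boltzmann gas with unit degeneracy is an assumption of the sketch):

```python
import numpy as np
from math import factorial
from scipy.integrate import quad

def dfact(k): return 1 if k <= 0 else k * dfact(k - 2)

def I_tilde(n, r, q, Lam, gbar):   # Eq. (79)
    g = lambda t: (1 - t*t)**q * t**r / (
        2.0*np.pi**2 * (1.0 + 0.5*gbar*(1.0 - 3.0*t*t))**(n + 2))
    return factorial(n + 1) / (dfact(2*q) * Lam**(n + 2)) * quad(g, 0.0, 1.0)[0]

def I_hat(n, r, q, Lam_hat, gam_hat):   # hatted moments via the mapping (98)
    return I_tilde(n, r, q, Lam_hat, gam_hat / Lam_hat)

def me_gubser_rhs(rho, y, eta_s):
    """Eqs. (93)-(95) closed with f_ME, evolved in (Lambda_hat, gamma_hat)."""
    Lam, gam = y
    e  = I_hat(2, 0, 0, Lam, gam)                     # ehat
    PT = I_hat(2, 0, 1, Lam, gam)                     # PT_hat
    That = (np.pi**2 * e / 3.0)**0.25                 # assumed EoS: ehat = 3 That^4/pi^2
    tauR = 5.0 * eta_s / That                         # tauR^hat = 5(eta/s)/That
    zeta_perp = 2.0*PT - 2.0*I_hat(2, 0, 2, Lam, gam) # Eq. (95)
    th = np.tanh(rho)
    dX = np.array([-2.0*th*(e + PT),                        # Eq. (93)
                   -(PT - e/3.0)/tauR - 2.0*th*zeta_perp])  # Eq. (94)
    M = np.array([[-I_hat(3,0,0,Lam,gam), -I_hat(3,0,1,Lam,gam)+I_hat(3,2,0,Lam,gam)],
                  [-I_hat(3,0,1,Lam,gam), -2.0*I_hat(3,0,2,Lam,gam)+I_hat(3,2,1,Lam,gam)]])
    return np.linalg.solve(M, dX)                     # Eq. (101)
```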
### Chapman-Enskog hydrodynamics for Gubser flow
In order to demonstrate the degree of improvement attained by using the ME truncation scheme in comparison with traditional hydrodynamic approaches, we briefly recap the evolution equations for the latter. For illustration we use the Gubser flow version of the Chapman-Enskog-like framework briefly discussed in Sec. II.2. Recall that Gubser flow is conformally symmetric and thus the bulk viscous pressure vanishes, \(\hat{\Pi}=0\). Up to third order in the Chapman-Enskog (CE) approximation, the Gubser flow evolution equations for the energy density and shear stress28 are [59]
Footnote 28: Note that the definitions of \(\hat{\pi}\) here and in [59] differ by a sign.
\[\frac{d\hat{e}}{d\rho} =-2\tanh\rho\left(\frac{4}{3}\,\hat{e}+\frac{\hat{\pi}}{2}\right), \tag{102}\] \[\frac{d\hat{\pi}}{d\rho} =-\frac{\hat{\pi}}{\hat{\tau}_{R}}-\tanh\rho\,\left(\frac{4}{3} \hat{\beta}_{\pi}+\hat{\lambda}\,\hat{\pi}-\hat{\chi}\,\frac{\hat{\pi}^{2}}{ \hat{\beta}_{\pi}}\right), \tag{103}\]
with first- and second-order transport coefficients \(\hat{\beta}_{\pi}=4\hat{P}/5\) and \(\hat{\lambda}=46/21\). The last term in Eq. (103) enters only at third order in the CE expansion, with third-order transport coefficient \(\hat{\chi}=72/245\).29
Footnote 29: Note that the second-order Chapman-Enskog equation for shear evolution, i.e. Eq. (103) with \(\hat{\chi}=0\), is identical to the corresponding evolution in the DNMR framework [60].
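These CE equations form a closed ODE system once \(\hat{e}\) is related to \(\hat{T}\); a minimal sketch (again assuming \(\hat{e}=3\hat{T}^{4}/\pi^{2}\) for a massless Boltzmann gas with unit degeneracy):

```python
import numpy as np
from scipy.integrate import solve_ivp

def ce_gubser_rhs(rho, y, eta_s, third_order=True):
    """Eqs. (102)-(103); y = (e_hat, pi_hat)."""
    e, pi = y
    That = (np.pi**2 * e / 3.0)**0.25        # assumed e_hat <-> T_hat relation
    tauR = 5.0 * eta_s / That
    beta_pi = 4.0 * (e / 3.0) / 5.0          # beta_pi = 4 P_hat / 5
    lam = 46.0 / 21.0
    chi = 72.0 / 245.0 if third_order else 0.0
    th = np.tanh(rho)
    de  = -2.0 * th * (4.0*e/3.0 + 0.5*pi)
    dpi = -pi/tauR - th * (4.0*beta_pi/3.0 + lam*pi - chi*pi*pi/beta_pi)
    return [de, dpi]

# Example: 4*pi*eta/s = 10 with the isotropic initial condition pi_hat_0 = 0
# and T_hat_0 = 0.002 at rho_0 = -10 (cf. Table 6):
e0 = 3.0 * 0.002**4 / np.pi**2
sol = solve_ivp(ce_gubser_rhs, [-10.0, 10.0], [e0, 0.0],
                args=(10.0/(4.0*np.pi),), rtol=1e-10, atol=1e-20, dense_output=True)
```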
### Gubser evolution in kinetic theory and hydrodynamics
We solve the RTA Boltzmann equation exactly for three different shear viscosity values, \(4\pi\eta/s\in\{1,3,10\}\). We start the evolution at Gubser time \(\rho_{0}=-10\) and tune the Romatschke-Strickland parameters \((\hat{T}_{0}^{RS},\xi_{0})\) such that the initial Gubser temperature in all cases is fixed at \(\hat{T}_{0}=0.002\).30 For a better visual separation of the evolution trajectories, the initial values for the normalised shear stress \(\bar{\hat{\pi}}_{0}\) for \(4\pi\eta/s=1,\ 3\), and \(10\) are set to \(-0.45\), \(-0.25\), and \(0\), respectively. The initial ME Lagrange parameters \((\hat{\Lambda}_{0},\hat{\gamma}_{0})\) are tuned to reproduce these initial values of \(\hat{T}_{0}\) and \(\bar{\hat{\pi}}_{0}\). All these initial conditions are summarised in Table 6.31
Footnote 30: In the center of a fireball with typical transverse size \(1/q\approx 4.3\,\text{fm}\), this corresponds to an actual temperature \(T_{0}\approx 2\,\text{GeV}\) at Milne time \(\tau_{0}\approx 1.9\times 10^{-4}\,\text{fm}/c\).
Footnote 31: Note that the values of \(\xi_{0}\) in Table 6 are identical to those in Table 1 for conformal Bjorken flow. In both cases we generate identical initial normalised shear stresses; in conformal dynamics these depend only on the anisotropy parameter \(\xi_{0}\), irrespective of the temperature scale (\(T_{0}^{RS}\) or \(\hat{T}_{0}^{RS}\)) in the RS-ansatz.
To set expectations we first compare in Fig. 8 the exact RTA BE results from microscopic kinetic theory (green lines) with the macroscopic approximations of second-order (red lines, panel (a)) and third-order (magenta lines, panel (b); see also Fig. 4 in [59]) Chapman-Enskog hydrodynamics. Three different line styles distinguish evolutions with different shear viscosities as shown in the legend.
As already discussed in [59], in kinetic theory the evolution of the normalized shear stress in Gubser flow is controlled by an "early-time"32 attractor at large negative Gubser time \(\rho\), corresponding to \(\bar{\hat{\pi}}=\frac{1}{4}\) (i.e. the longitudinal free-streaming limit \(\hat{P}_{L}=0\)), and a "late-time" attractor at large positive \(\rho\), corresponding to \(\bar{\hat{\pi}}=-\frac{1}{2}\) (i.e. the transverse free-streaming limit \(\hat{P}_{T}=0\)). For all three initial conditions and specific shear viscosities the "early-time" dynamics is characterized by a rapid approach towards the _longitudinal_ free-streaming limit at \(\hat{P}_{L}=0\), very similar to what is observed in Milne time in Fig. 7 for Bjorken flow. Around \(\rho=0\) the Gubser dynamical evolution of the shear stress is non-universal and depends quite sensitively on the value of the specific shear viscosity \(\eta/s\). The "late-time" behavior, however, is again universal and characterized by an approach to the _transverse_ free-streaming limit at zero _transverse_ pressure, \(\hat{P}_{T}=0\). This has no analog in Bjorken flow and must therefore be caused by the transverse expansion in Gubser flow.
That the fluid dynamics for Gubser flow approaches free-streaming limits, characterized by very large Knudsen numbers \(\text{Kn}\!=\!\hat{\tau}_{R}|\hat{\theta}|\!=\!2\hat{\tau}_{R}|\tanh\rho|\), at both large negative and large positive \(\rho\) values is implicit in Fig. 4 of Ref. [61] which shows the Knudsen number growing exponentially in both limits. What was not realized in that first analysis is that at large negative \(\rho\) the scalar expansion rate \(|\hat{\nabla}\cdot\hat{u}|\) is dominated by _longitudinal_ expansion whereas the growth of the Knudsen number at large positive \(\rho\) has to be associated with a large (relative to the microscopic scattering rate) _transverse_ expansion rate.33 This explains the approach to different attractors (\(\hat{P}_{L}=0\) at negative \(\rho\), \(\hat{P}_{T}=0\) at positive \(\rho\)) at early and late Gubser times.
Footnote 33: We refer to App. E for technical details.
The red and magenta lines in Figs. 8a and b show that CE hydrodynamics does not correctly reproduce either one of these two attractors. The discrepancy between the exact kinetic theory and its macroscopic hydrodynamic approximation gets smaller at third order of the CE expansion (panel (b)) than at second order (panel (a)),34 but clearly remains sizeable [59].35 The transition around \(\rho=0\) between the "early" and "late" free-streaming limits, where the deviations from thermal equilibrium are small, is described well by CE hydrodynamics, at both second and third order precision. The duration of agreement increases with the strength of the microscopic interactions (smaller \(\eta/s\)).
Figure 8: Gubser evolution of the normalised shear stress, comparing the exact solution of the RTA Boltzmann equation (green lines) with CE hydrodynamics at second (red lines in panel a) and third (magenta lines in panel b) order of the Chapman-Enskog expansion. Different line styles correspond to different shear viscosities as detailed in the legend.
Figure 9: Evolution of (a) the shear inverse Reynolds number and (b) the pressure anisotropy in Gubser flow, comparing the exact solution of the RTA Boltzmann equation (green curves) with ME hydrodynamics (blue curves). Where the green curves become invisible they are hidden behind the blue curves.
Figure 9 shows that the shortcomings of CE hydrodynamics are not shared by Maximum Entropy hydrodynamics. For \(\rho\!<\!0\) the evolution of the normalised shear stress shown in panel (a) agrees almost perfectly between the microscopic and macroscopic approaches. At positive \(\rho\) small differences can be seen (the exact shear stress lies slightly above the ME approximation) but at \(\rho\to\infty\) the normalized shear stress from ME hydrodynamics converges perfectly to the correct asymptotic value for transverse free-streaming. The kinetic constraints \(\hat{P}_{L,T}\!\geq\!0\) are never violated. Studying the pressure anisotropy \(\hat{P}_{L}/\hat{P}_{T}\) in panel (b) reveals that, as \(\hat{e}\!=\!3\hat{P}\to 0\) at large \(\rho\), the longitudinal-to-transverse pressure ratio \(\hat{P}_{L}/\hat{P}_{T}\) grows exponentially but differs by a constant factor between kinetic theory and ME hydrodynamics; this constant approaches 1 (i.e. the difference between the green and blue curves vanishes) as the specific shear viscosity decreases, i.e. as the system becomes more strongly coupled.36
Footnote 36: As already remarked at the end of Sec. II.4, the curves in Fig. 9 agree with those first shown in Ref. [31], claimed (incorrectly) to result from a hydrodynamic framework based on the principle of maximizing the _rate of entropy production_. The correct interpretation of the numerical results in [31] is that they are predictions of ME hydrodynamics, i.e. they maximize the _entropy_ itself. In fact, almost everywhere in Fig. 9 the deviations from local thermal equilibrium are so large that the approximations made in [31] to arrive at their final form for \(f_{\rm DTT}\) fail catastrophically.
In Fig. 4 of Ref. [59] a version of Fig. 9 was shown where all blue ME hydrodynamic curves were replaced by trajectories obtained from anisotropic hydrodynamics (aHydro). We have repeated that exercise for mVAH (not shown) and found excellent agreement between the mVAH and ME-hydrodynamic predictions. As we had noticed in Sec. III.5 and discuss in detail in App. A for Bjorken flow, for the RS-type initial conditions used in Fig. 9 the mVAH predictions for Gubser flow again agree slightly better with the exact kinetic evolution than the ones from ME hydrodynamics. (This may flip again for ME initial conditions but we did not solve the RTA Boltzmann equation for that case.) So ME hydrodynamics and mVAH are again very competitive hydrodynamic approximations of the underlying kinetic theory when considering Gubser flow.
So why did the anisotropic hydrodynamic approach (whose construction made heavy use of the specific symmetries of Bjorken flow) not fail --as one might have expected-- when moving from Bjorken flow (without any transverse expansion) to Gubser flow (with very strong transverse expansion that completely dominates the fluid dynamics at late Gubser times)? The answer is that mVAH for Gubser flow is not the same as mVAH for Bjorken flow. Although both are obtained by using an (almost identical looking) RS-type ansatz for the microscopic distribution function in order to close the hydrodynamic equations, for Gubser flow the RS distribution function (90) is expressed through Gubser coordinates instead of the Milne coordinates used in (35), which span a different type of space-time: one is intrinsically curved, the other flat. This makes them physically very different distributions. In the end the "Gubser RS ansatz" (90) is as well adapted to Gubser flow as the standard RS ansatz (35) is to Bjorken flow, sharing this feature with the ME ansatz (Eqs. (48) and (100), respectively).
## V Conclusions and outlook
Using the Maximum Entropy distribution constrained by the full energy-momentum tensor to truncate the moment hierarchy of the relativistic Boltzmann equation in Relaxation Time Approximation, we here developed _ME hydrodynamics_, a new relativistic framework for dissipative fluid dynamics that accounts non-perturbatively for the full set of dissipative energy-momentum flows. The framework can be straightforwardly extended to systems with conserved charges, by including the corresponding diffusion currents as additional constraints when maximizing the Shannon entropy, but in this work we focused on fluids without such conserved charges. ME hydrodynamics is conceived to provide an extension of standard second-order ("transient") relativistic dissipative fluid dynamics, which is based on the assumption of weak dissipative flows, into the domain of far-from-equilibrium dynamics. It is a generically macroscopic approach which uses only macroscopic hydrodynamic information, in the sense that the distribution function used for moment truncation is only constrained by macroscopic hydrodynamic quantities. It makes use of microscopic Boltzmann kinetic theory only to the extent that it is assumed that _some_ such kinetic description exists for the fluid, without additional specifics. Furthermore, the kinetic description is only used to determine the coupling to non-hydrodynamic moments; the form of the evolution equations for the conserved charges remains completely general.
While the framework accommodates arbitrary three-dimensional flow patterns, we here studied it, for testing purposes, only for Bjorken and Gubser flow. Assuming that the microscopic physics of the fluid is controlled by the RTA Boltzmann equation, these flows provide highly symmetric environments in which this underlying microscopic physics can be solved semi-analytically with arbitrary numerical precision. These microscopic solutions then provide the exact space-time evolution of the full energy-momentum tensor of the system against which the predictions obtained numerically from the macroscopic ME hydrodynamic framework can be compared with quantitative precision. In this work we performed such comparisons for both massless and massive Boltzmann gases undergoing Bjorken expansion and for a massless gas undergoing Gubser flow. The agreement of the macroscopic ME hydrodynamic predictions with the
exact underlying kinetic results was found to be excellent in all cases, except for initial conditions encoding the most extreme deviations from local thermal equilibrium where differences of a few percent were visible between the micro- and macroscopic descriptions.
As shown in this work and in earlier publications, the same is not true for most other macroscopic hydrodynamic theories. Generically, other approaches fail to reproduce the universal early-time (free-streaming) attractor for the normalized longitudinal pressure \(P_{L}/P\) in Bjorken flow, and the universal longitudinal and transverse free-streaming attractors for \(\hat{P}_{L}\) and \(\hat{P}_{T}\) at large negative and positive de Sitter times, respectively, in Gubser flow. Whenever the bulk and/or shear viscous stresses become large, the standard dissipative hydrodynamic approaches break down. The only other approach that can compete with ME hydrodynamics in the cases of Bjorken and Gubser flows is _anisotropic hydrodynamics_, but only because it uses for truncation of the Boltzmann moment hierarchy a custom-made ansatz of (modified) Romatschke-Strickland form for the distribution function that includes the momentum anisotropies associated with the shear and bulk viscous stresses in these flows non-perturbatively (albeit not by maximizing the Shannon entropy). Contrary to ME hydrodynamics, the success of anisotropic hydrodynamics for Bjorken and Gubser flows is therefore not expected to carry over to generic three-dimensional flow patterns.
We are confident that the excellent performance of ME hydrodynamics carries over to generic three-dimensional hydrodynamic evolution. To subject this confidence to rigorous numerical tests will require development of high-precision numerical solutions for (3+1)-dimensional kinetic theory for comparison. If successful, (3+1)-dimensional ME hydro will become the preferred macroscopic dynamical framework for relativistic heavy-ion collisions -- unless it turns out that the early stage of the latter is not sufficiently weakly coupled to admit some sort of kinetic description. Since ME hydro is based on a truncation of the Boltzmann moment hierarchy we do not know how to generalize it to fluids which are so strongly coupled that a kinetic theory approach becomes fundamentally inapplicable.
## Acknowledgements
We thank Jean-Paul Blaizot for stimulating discussions suggesting the exploration of entropy production in the ME hydro approach (see Appendix D). CC thanks Sourendon Gupta for several illuminating discussions and insightful comments, and the organizers of the ETHCVM 2023 meeting ("Emergent Topics in Relativistic Hydrodynamics, Chirality, Vorticity, and Magnetic Fields") in Puri, India, and of the Hot Quarks 2022 conference in Estes Park, Colorado, for providing an opportunity to present and discuss with participants parts of this work. Fruitful discussions with Derek Everett, Kevin Ingles, Lipei Du, Dananjaya Liyanage, Sunil Jaiswal, Amaresh Jaiswal, and Subrata Pal are also gratefully acknowledged. This work was supported by the U.S. Department of Energy, Office of Science, Office for Nuclear Physics under Awards No. DE-FG02-03ER41260 (C.C. and T.S.) and DE-SC0004286 (U.H.). Furthermore, the authors acknowledge partial support by the National Science Foundation under Grant No. NSF PHY-1748958 (KITP).
## Appendix A Sensitivity of kinetic theory solutions to the form of the initial distribution
In this appendix we test the sensitivity of non-conformal kinetic theory results to the choice of the initial distribution function used in the RTA Boltzmann equation. The solutions for a certain set of initial shear and bulk inverse Reynolds numbers (and temperature \(T_{0}=500\) MeV) generated by \(f_{0}=f_{\rm mRS}\) have already been presented in Sec. III.5. We now generate identical initial conditions for \((T_{0},\bar{\pi}_{0},\bar{\Pi}_{0})\) by taking the initial distribution to be of maximum-entropy form, i.e., \(f_{0}=f_{\rm ME}\). The solutions are shown by dashed lines in Fig. 10 where the solid lines calculated with \(f_{0}=f_{\rm mRS}\) are the same as those in Fig. 6. The orange and magenta curves in Fig. 10a show that non-hydrodynamic moments of the distribution function can lead to observable differences in the early-time evolution of hydrodynamic quantities. These differences are more prominent in the bulk channel than in the shear sector, and also for initial conditions where the shear inverse Reynolds numbers are large and negative (\(\bar{\pi}_{0}=-0.41\) for the orange curve and \(-0.22\) for the magenta one). For \(\bar{\pi}_{0}\geq 0\), the differences are found to be negligible.
Figure 10: Comparison of scaled bulk and shear stress evolution obtained from kinetic theory using an initial distribution of modified Romatschke-Strickland form (solid lines) and maximum-entropy type (dashed lines).
Nevertheless, the early-time non-universality of the kinetic theory results corresponding to the two choices of initial distribution suggests that for \(f_{0}=f_{\rm mRS}\) modified anisotropic hydro might perform better than ME-hydro, and vice-versa for \(f_{0}=f_{\rm ME}\). We verify this intuition in Figs. 11 and 12. In Fig. 11 the comparison of kinetic theory solutions for \(f_{0}=f_{\rm mRS}\) (shown by solid lines) with mVAH (black dashed lines) shows that the two are in excellent agreement. A similar comparison in Fig. 12 between kinetic theory solutions with \(f_{0}\!=\!f_{\rm ME}\) (shown by _solid_ lines37) and ME-hydro (black dashed lines) shows that the latter describes the former well. Thus, for some initial conditions the band of uncertainty in the microscopic description of hydrodynamic quantities, caused by incomplete knowledge of the initial distribution, may be too large to establish the superiority of one macroscopic description over the other.
Footnote 37: Note that the solid lines of Fig. 12 are identical to the dashed lines of the corresponding color in Fig. 10.
## Appendix B Classical vs quantum statistics
In Figs. 13 and 14 we explore the extent to which the Maxwell-Boltzmann approximation describes the macroscopic dynamics of particles governed by quantum statistics when subject to Bjorken flow.38
Figure 11: Comparison of dissipative flux evolution obtained from kinetic theory using an initial distribution \(f_{0}=f_{\rm mRS}\) (solid lines) and modified viscous anisotropic hydro (mVAH) (black dashed lines).
Figure 12: Comparison of scaled bulk and shear evolution obtained from kinetic theory using \(f_{0}=f_{\rm ME}\) (solid lines) and ME-hydro (black dashed lines).
Figure 13: Comparison of scaled bulk and shear evolution obtained from kinetic theory for particles obeying Bose-Einstein statistics (dashed) and classical statistics (solid).
For both of these figures we use initial distributions of maximum-entropy type, i.e., \(f_{0}\) given by Eq. (57) for classical statistics and by Eq. (71) for quantum statistics with appropriate choice of \(\theta\). The results for classical and quantum statistics are denoted, respectively, by solid and dashed lines. The blue, maroon, magenta, and orange curves in Fig. 13a show that the Boltzmann approximation does not appropriately capture the early-time bulk viscous pressure evolution for particles obeying Bose-Einstein statistics. In contrast, for Fermi-Dirac particles the agreement between classical and quantum statistics is much better, as demonstrated in Fig. 14.
In Figs. 15 and 16 we show ME-hydro solutions for Bose-Einstein and Fermi-Dirac statistics, respectively, and compare with the corresponding exact kinetic theory solutions. For quantum ME hydro one must solve Eqs. (51-53) using \(f=f_{\rm ME}\) given by Eq. (71) to obtain the moments \(\tilde{I}_{nrq}\) (see Eq. (50)). As for the Boltzmann case, we solved directly for the Lagrange parameters \((\Lambda,\lambda_{\Pi},\gamma)\). The Jacobian of transformation relating \(M^{a}_{\,b}\) to the moments \(\tilde{I}_{nrq}\) (Eqs. 59-61) is the same as for classical statistics, up to changing \(\tilde{I}_{nrq}\to\tilde{I}^{q}_{nrq}\) where
\[\tilde{I}^{q}_{nrq}=\frac{1}{(2q)!!}\int dP\,\,(p^{\tau})^{n-r-2 q}\,\left(\frac{p_{\eta}}{\tau}\right)^{r}\,p_{T}^{2q}\,f_{\rm ME}\,\tilde{f}_{ \rm ME}, \tag{78}\]
with \(\tilde{f}_{\rm ME}\equiv 1-\theta\,f_{\rm ME}\). Figs. 15 and 16 demonstrate that ME-hydro gives an excellent description of non-conformal kinetic theory for quantum statistics as well.
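For reference, a minimal sketch of the quantum Jacobian moments \(\tilde{I}^{q}_{nrq}\) in LRF coordinates follows. The quantum form of \(f_{\rm ME}\), Eq. (71), is not reproduced here, so it enters as a user-supplied callable; the mass and momentum cutoff are illustrative assumptions:

```python
import numpy as np
from scipy.integrate import dblquad

def dfact(k):
    return 1 if k <= 0 else k * dfact(k - 2)

def I_q(n, r, q, f_ME, theta, m=2.53, pmax=50.0):
    """Quantum moment I~^q_{nrq} of the definition just displayed, written in
    LRF coordinates (p_eta/tau -> p_z). Assumes even r (odd-r moments vanish
    by symmetry). theta = +1: Fermi-Dirac, -1: Bose-Einstein.
    m = 2.53 fm^-1 ~ 500 MeV; pmax (in fm^-1) is an assumed cutoff."""
    def integrand(pz, pT):
        E = np.sqrt(pT*pT + pz*pz + m*m)
        f = f_ME(pT, pz)
        return (pT/(2.0*np.pi**2) * E**(n - r - 2*q - 1)
                * pz**r * pT**(2*q) * f * (1.0 - theta*f))
    val, _ = dblquad(integrand, 0.0, pmax, 0.0, pmax)
    return val / dfact(2*q)
```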
## Appendix C Evolution of the Lagrange parameters
We discuss the evolution of the ME Lagrange parameters in ME hydrodynamics for non-conformal fluids undergoing Bjorken flow. Similar to Fig. 4 showing the corresponding conformal evolution, we explore in Figs. 17 and 18 the time dependence of the Lagrange parameters \((\Lambda,\lambda_{\Pi},\gamma)\) (suitably normalized as mentioned later) controlling the ME distribution function in a non-conformal gas.
Figure 16: Same as Fig. 15 but for Fermi-Dirac statistics.
Figure 14: Same as Fig. 13, but for dashed lines now standing for Fermi-Dirac statistics.
Figure 15: Comparison between kinetic theory and ME-hydro for Bose-Einstein statistics.
In Fig. 17a we notice a curious feature: the parameter \(\Lambda\), which one may be inclined to associate with an inverse temperature, is negative! This is not a problem, though, as the maximum-entropy distribution does not require \(\Lambda\) to be positive in order to be well-behaved. Instead, it is the sum of \(\Lambda\) and \(\lambda_{\Pi}\) that must be positive (see Eq. (67)). In fact, negative values of \(\Lambda\) arise because our initial conditions correspond to negative bulk viscous pressures for a gas at moderate temperatures (\(T_{0}/m=1\)).
One observes from Fig. 17a that the magenta, maroon, orange, and cyan curves, which correspond to initial \(\Pi/P\leq 0\) and substantial initial shear stress, are characterised by negative \(\Lambda_{0}\). In contrast, the blue, green, and black curves with initial \(\Pi/P\geq 0\) have positive \(\Lambda_{0}\). This is in line with the arguments given above. The various solutions in panel (a) evolve distinctly from each other during the early stages. At times \(\tau\approx 2\,\tau_{R}\) they approach unity, suggesting the onset of near-equilibrium dynamics when \(\Lambda\) can be thought of as an inverse temperature. The solutions for \(\lambda_{\Pi}T\), too, evolve quite differently from each other at early times before approaching zero as the bulk viscous pressure of the system vanishes. Note that for all curves the sum \(\Lambda+\lambda_{\Pi}\) is greater than zero; see also Table 5.
In Fig. 18a we plot the evolution of the Lagrange multiplier \(\gamma\) (made dimensionless by dividing by \(\Lambda+\lambda_{\Pi}\)) which controls the momentum-space anisotropy of the distribution. As the black and cyan curves start with vanishing shear stress, they correspond to \(\gamma_{0}\!=\!0\). The maroon curve has a large initial positive shear (or \(P_{L}/P\ll 1\)) and starts with a large negative initial \(\gamma\). Similar to the \(\Lambda T\) and \(\lambda_{\Pi}T\) evolution in Fig. 17, the far-off-equilibrium dynamics of \(\gamma/(\Lambda+\lambda_{\Pi})\) is strongly dependent on the initial conditions. However, at late times all solutions approach zero as the system isotropizes. We finally plot in Fig. 18b the time evolution of \(\sigma\equiv\Lambda+\lambda_{\Pi}-\left|\min(\gamma/2,-\gamma)\right|\) (again made dimensionless, by multiplication with the temperature) for all the ME hydro solutions discussed so far. According to Eq. (67) \(\sigma\) must be positive throughout the system's evolution for the Maximum Entropy distribution to be well-behaved. We see that this is indeed the case for all curves. The kink in the magenta, green, orange and blue curves occurs when \(\gamma\) crosses zero. At late times, \(\lambda_{\Pi}\) and \(\gamma\) approach zero, and \(\sigma\) assumes the role of an inverse temperature such that \(\sigma T\approx 1\).
Figure 17: Evolution of Lagrange parameters \(\Lambda\) and \(\lambda_{\Pi}\) (both in units of the instantaneous inverse temperature) of the maximum-entropy distribution function corresponding to different initial conditions; see text for details.
## Appendix D Entropy evolution
It has been shown earlier that both mVAH and ME-hydro describe the kinetic evolution of _hydrodynamic_ moments of the distribution function rather well. In this appendix we compare their performance when it comes to modeling a non-hydrodynamic moment of \(f\). In Fig. 19 we plot the evolution of the non-equilibrium entropy per unit rapidity and transverse area, \(s\tau\), obtained using kinetic theory and the hydrodynamic approximations.39
Footnote 39: For non-dissipative Bjorken expansion, the entropy of a fluid element does not change with time which manifests in \(s\tau\) being a constant of motion.
In panel (b) we solve kinetic theory with an initial distribution \(f_{0}=f_{\rm ME}\) that generates \((\bar{\Pi}_{0},\bar{\pi}_{0})\) as listed in Table 3, and then compute the entropy density from Eq. (27). The ME-hydro results for these initial conditions were already obtained in Appendix A, Fig. 12. We use the corresponding solutions for \((\Lambda,\lambda_{\Pi},\gamma)\) to calculate the ME-hydro entropy density (69). The corresponding results for \(s\tau\) are shown by dashed lines in Fig. 19b. For panel (a) we repeat the same procedure but use \(f_{0}=f_{\rm mRS}\) for the kinetic theory solutions, and for mVAH we calculate the entropy density (27) by plugging in the modified Romatschke-Strickland distribution (35) with the corresponding time-dependent RS parameters [56].
Both panels show substantial differences in evolution between the exact non-equilibrium entropy and the hydrodynamic results. From the analysis of Calzetta and Cantarutti [31] it can be deduced that in the conformal case the divergence of the entropy four-current for a maximum-entropy type distribution is given by
\[\partial_{\mu}S^{\mu}=-\frac{1}{\tau_{R}}\,\gamma_{(\mu\nu)}\,\pi^{\mu\nu}. \tag{75}\]
In the non-conformal case \(\partial_{\mu}S^{\mu}\) gets generalized to,
\[\partial_{\mu}S^{\mu}=-\frac{1}{\tau_{R}}\,\left(\gamma_{(\mu\nu)}\,\pi^{\mu \nu}+3\,\lambda_{\Pi}\,\Pi\right). \tag{76}\]
For a system undergoing Bjorken expansion this implies that the entropy per unit rapidity increases as
\[\frac{d\left(s\tau\right)}{d\tau}=-3\bar{\tau}\,\left(\gamma\,\frac{\pi}{2}+ \lambda_{\Pi}\,\Pi\right), \tag{77}\]
where \(\bar{\tau}\equiv\tau/\tau_{R}\). This can also be verified using the entropy density definition (69):
\[s=\Lambda e+\lambda_{\Pi}\,\left(P_{L}+2P_{T}\right)+\gamma\,\left(P_{T}-P_{L }\right)+n. \tag{78}\]
Taking the differential of \(s\) and noting that
\[dn=-e\,d\Lambda-\left(P_{L}+2P_{T}\right)\,d\lambda_{\Pi}-\left(P_{T}-P_{L} \right)\,d\gamma \tag{79}\]
one finds40
Footnote 40: Note that for a system in equilibrium \(ds=\beta\,de\) as expected.
\[ds=\Lambda\,de+\lambda_{\Pi}\,\left(dP_{L}+2\,dP_{T}\right)+\gamma\,\left(dP_ {T}-dP_{L}\right). \tag{80}\]
Now, using the ME-hydro equations of motion (51-53) we obtain
\[\frac{d\left(s\tau\right)}{d\tau} =-3\bar{\tau}\left(\lambda_{\Pi}\,\Pi+\gamma\,\frac{\pi}{2} \right)+n-\Lambda\,P_{L}\] \[-\gamma\,\left(\tilde{I}_{240}-2P_{L}-\tilde{I}_{221}\right)\] \[-\lambda_{\Pi}\left(2P_{L}-\tilde{I}_{240}-2\tilde{I}_{221}\right). \tag{81}\]
In order to show that the terms on the r.h.s. of the above equation that are not proportional to \(\bar{\tau}\) cancel each other, we start with the number density (in local rest frame coordinates):
\[n =\int_{0}^{\infty}\frac{dp_{T}\,p_{T}}{2\pi^{2}}\,\int_{0}^{ \infty}dp_{z}\,f_{\rm ME}\] \[=-\int_{0}^{\infty}\frac{dp_{T}\,p_{T}}{2\pi^{2}}\,\int_{0}^{ \infty}dp_{z}\,p_{z}\,\frac{\partial f_{\rm ME}}{\partial p_{z}}. \tag{82}\]
Figure 19: Evolution of entropy per unit rapidity and transverse area in non-conformal Bjorken flow. Solid lines are exact solutions of kinetic theory; dashed lines correspond to hydrodynamic approximations. For the color coding, refer to Table 3.
We then use,
\[p_{z}\,\frac{\partial f_{\rm ME}}{\partial p_{z}}=-f_{\rm ME}\,\frac{p_{z}^{2}}{E_{p}}\left[\Lambda+\lambda_{\Pi}\,\left(2-\frac{p_{T}^{2}+p_{z}^{2}}{E_{p}^{2}}\right)+\gamma\,\left(-2-\frac{p_{T}^{2}}{2\,E_{p}^{2}}+\frac{p_{z}^{2}}{E_{p}^{2}}\right)\right], \tag{108}\]
to obtain,
\[n =\Lambda\,P_{L}+\lambda_{\Pi}\left(2P_{L}-\tilde{I}_{240}-2\tilde {I}_{221}\right)\] \[+\gamma\,\left(\tilde{I}_{240}-2P_{L}-\tilde{I}_{221}\right). \tag{109}\]
Accordingly, the entropy production rate (106) stems solely from terms proportional to \(\bar{\tau}\) (or the collisional kernel):
\[\frac{d\left(s\tau\right)}{d\tau}=-3\bar{\tau}\left(\lambda_{\Pi}\,\Pi+\gamma \,\frac{\pi}{2}\right), \tag{110}\]
consistent with Eq. (104). Eq. (104) also implies that the initial slopes of the solid and dashed curves in Fig. 19b must be equal. This is because the kinetic theory curves are initialised with a maximum-entropy type distribution such that
\[\frac{d\left(s\tau\right)}{d\tau}\Bigg{|}_{\tau_{0}} =\left(\frac{\tau}{\tau_{R}}\right)_{0}\int dP\,p^{\tau}\,\log(f _{\rm ME})\,\left(f_{\rm ME}-f_{\rm eq,0}\right),\] \[=-3\bar{\tau}_{0}\left(\lambda_{\Pi,0}\,\Pi_{0}+\gamma_{0}\,\frac{\pi_ {0}}{2}\right). \tag{111}\]
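As a small numerical aid, the initial ME-hydro entropy-production rate implied by the formula just derived can be evaluated directly (the example values below are placeholders, not the Table 5 parameters):

```python
def dstau_dtau0(tau0, tauR0, lamPi0, Pi0, gam0, pi0):
    """Initial entropy-production rate d(s*tau)/dtau at tau_0 from the
    formula just derived: -3 (tau0/tauR0) (lamPi0*Pi0 + gam0*pi0/2)."""
    return -3.0 * (tau0 / tauR0) * (lamPi0 * Pi0 + gam0 * pi0 / 2.0)

# Placeholder values (NOT the Table 5 parameters), chosen to illustrate a
# positive production rate for a state with gamma_0 < 0 and pi_0 > 0:
print(dstau_dtau0(tau0=0.1, tauR0=0.3, lamPi0=-0.5, Pi0=0.2, gam0=-0.4, pi0=0.6))
```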
Although in Fig. 19b the solid and dashed curves in orange, blue, and cyan colors graze along each other at early times, the initial slopes of the solid and dashed maroon curves differ strongly. In fact, the initial slope of the maroon ME-hydro curve agrees with the expectation (104) whereas the kinetic theory solution does not; the rate of entropy production is considerably smaller for the latter than for the former. To understand this puzzling behavior we compute the second time derivative of \(s\tau\) in kinetic theory. Starting from (27) multiplied by \(\tau\) and evaluating the first time derivative using the RTA Boltzmann equation gives
\[\frac{d(s\tau)}{d\tau}=\frac{\tau}{\tau_{R}}\,\int dP\,p^{\tau}\,\log(f)\, \delta f \tag{112}\]
where \(dP=d^{2}p_{T}dp_{\eta}/[(2\pi)^{3}\tau p^{\tau}]\). One then obtains for the second derivative
\[\frac{d^{2}(s\tau)}{d\tau^{2}} =-\frac{d(s\tau)}{d\tau}\,\left(\frac{1}{\tau_{R}}+\frac{d\log \tau_{R}}{d\tau}\right)\] \[-\frac{\tau}{\tau_{R}}\,\int dP\,p^{\tau}\,\log(f)\,\frac{ \partial f_{\rm eq}}{\partial\tau}\] \[-\frac{\tau}{\tau_{R}^{2}}\,\int dP\,p^{\tau}\,\frac{(\delta f)^ {2}}{f}. \tag{113}\]
While the first momentum integral appearing on the r.h.s. involving the derivative of \(f_{\rm eq}\) is well behaved, it is not obvious that the last one converges, especially for large deviations from equilibrium. On expanding the integrand, \((\delta f)^{2}/f=f-2f_{\rm eq}+f_{\rm eq}^{2}/f\), it becomes clear that for the initial slope of the entropy production rate to be finite, the quantity \(f_{\rm eq,0}^{2}/f_{\rm ME}\) must decay to zero at large momenta. This in turn requires
\[\frac{2}{T_{0}}-\Lambda_{0}-\lambda_{\Pi,0}>\Big{|}\min\left(-\frac{\gamma_{0} }{2},\gamma_{0}\right)\Big{|}\,. \tag{114}\]
This criterion is not met by the maroon curve. In consequence the initial slope of the kinetic entropy production rate has a singularity: the slope of \(d(s\tau)/d\tau\) approaches negative infinity. This singularity is not captured by ME hydro. To pinpoint the origin of this difference let us calculate the quantity \(d^{2}(s\tau)/d\tau^{2}\) in ME hydro. Re-writing Eq. (110) as
\[\frac{d(s\tau)}{d\tau}=\frac{\tau}{\tau_{R}}\,\int dP\,p^{\tau}\,\log(f_{\rm ME })\,\left(f_{\rm ME}-f_{\rm eq}\right) \tag{115}\]
and taking another time derivative we have
\[\frac{d^{2}(s\tau)}{d\tau^{2}} =-\frac{d\log\tau_{R}}{d\tau}\,\frac{d(s\tau)}{d\tau}\] \[+\frac{\tau}{\tau_{R}}\,\int dP\,p^{\tau}\,\log(f_{\rm ME})\, \frac{\partial f_{\rm ME}}{\partial\tau}\] \[-\frac{\tau}{\tau_{R}}\,\int dP\,p^{\tau}\,\log(f_{\rm ME})\, \frac{\partial f_{\rm eq}}{\partial\tau}\] \[+\frac{\tau}{\tau_{R}}\,\int dP\,p^{\tau}\,\frac{(f_{\rm ME}-f_{ \rm eq})}{f_{\rm ME}}\,\frac{\partial f_{\rm ME}}{\partial\tau}. \tag{116}\]
We can simplify the second term on the r.h.s. of this equation as follows: although \(f_{\rm ME}\) is not a solution of the Boltzmann equation, the ME hydro equations of motion (51-53) permit replacement of the partial derivative \(\partial f_{\rm ME}/\partial\tau\) by \(-(f_{\rm ME}-f_{\rm eq})/\tau_{R}\). To see this we write
\[\int dP\,p^{\tau}\log(f_{\rm ME})\,\frac{\partial f_{\rm ME}}{ \partial\tau}=-\int dP\,\frac{\partial f_{\rm ME}}{\partial\tau}\] \[\times\Big{[}\Lambda\,(p^{\tau})^{2}+\lambda_{\Pi}\left(p_{T}^{2} +\frac{p_{\eta}^{2}}{\tau^{2}}\right)+\gamma\left(\frac{p_{T}^{2}}{2}-\frac{p_ {\eta}^{2}}{\tau^{2}}\right)\Big{]} \tag{117}\]
and use the ME hydro equations of motion for \((e,P_{L},P_{T})\) in the form [31]
\[\int dP\,(p^{\tau})^{2}\,\Big{[}\frac{\partial f_{\rm ME}}{ \partial\tau}+\frac{1}{\tau_{R}}\,\left(f_{\rm ME}-f_{\rm eq}\right)\Big{]} =0,\] \[\int dP\,\frac{p_{\eta}^{2}}{\tau^{2}}\,\Big{[}\frac{\partial f_{ \rm ME}}{\partial\tau}+\frac{1}{\tau_{R}}\,\left(f_{\rm ME}-f_{\rm eq}\right) \Big{]} =0,\] \[\int dP\,\frac{p_{T}^{2}}{2}\,\Big{[}\frac{\partial f_{\rm ME}}{ \partial\tau}+\frac{1}{\tau_{R}}\,\left(f_{\rm ME}-f_{\rm eq}\right)\Big{]} =0. \tag{118}\]
The second term on the r.h.s. of (116) can therefore be written as
Footnote 41: Note that the term involving the Lagrange multiplier \(\Lambda\) in Eq. (117) vanishes due to Landau matching.
\[-\frac{\tau}{\tau_{R}^{2}}\,\int dP\,p^{\tau}\log(f_{\rm ME})\,\left(f_{\rm ME} -f_{\rm eq}\right)=-\frac{1}{\tau_{R}}\,\frac{d(s\tau)}{d\tau}, \tag{42}\]
and the second-derivative of \(s\tau\) in ME hydro becomes
\[\frac{d^{2}(s\tau)}{d\tau^{2}} =-\frac{d(s\tau)}{d\tau}\,\left(\frac{1}{\tau_{R}}+\frac{d\log \tau_{R}}{d\tau}\right)\] \[-\frac{\tau}{\tau_{R}}\,\int dP\,p^{\tau}\,\log(f_{\rm ME})\, \frac{\partial f_{\rm eq}}{\partial\tau}\] \[+\frac{\tau}{\tau_{R}}\,\int dP\,p^{\tau}\,\frac{(f_{\rm ME}-f_{ \rm eq})}{f_{\rm ME}}\,\frac{\partial f_{\rm ME}}{\partial\tau}. \tag{43}\]
At \(\tau=\tau_{0}\) and for kinetic theory initialised with a maximum-entropy distribution, the r.h.s. of the above expression agrees with the kinetic result (113) for the first two terms. However, as \(f_{\rm ME}\) does not solve the Boltzmann equation, the last term differs in the two approaches:
Footnote 42: Note that the initial slope of the temperature in ME hydro is identical to that in kinetic theory initialised with \(f_{\rm ME}\). This also implies that \(\partial f_{\rm eq}/\partial\tau\) at \(\tau=\tau_{0}\) is identical in both approaches.
\[\frac{(f_{\rm ME}-f_{\rm eq})}{f_{\rm ME}}\,\frac{\partial f_{\rm ME}}{ \partial\tau}\neq-\frac{1}{\tau_{R}}\,\frac{(f_{\rm ME}-f_{\rm eq})^{2}}{f_{ \rm ME}}. \tag{44}\]
Note that the source of divergence for \(d^{2}(s\tau)/d\tau^{2}\) at \(\tau\!=\!\tau_{0}\) is precisely the term \((f_{\rm ME}-f_{eq,0})^{2}/f_{\rm ME}\). In ME hydro \(\partial f_{\rm ME}/\partial\tau\propto f_{\rm ME}\) such that
\[\frac{(f_{\rm ME}-f_{\rm eq})}{f_{\rm ME}}\,\frac{\partial f_{\rm ME}}{ \partial\tau}\propto(f_{\rm ME}-f_{\rm eq}) \tag{45}\]
whose momentum integral in (43) is finite. This is why ME hydro does not capture the singularity in the second time derivative of \(s\tau\) for initial conditions deviating strongly from thermal equilibrium.
Footnote 43: We found that the discrepancies between ME hydro and kinetic theory typically become serious once \(P_{L}/P\) or \(P_{T}/P\) drop below about ten percent.
## Appendix E Gubser flow in Milne coordinates
Although it is mathematically convenient to work with Gubser flow in curved space-time parametrized by de-Sitter coordinates, one may gain a more physical picture of the expanding fluid by expressing it in familiar Milne coordinates. For instance, the quantity \(\hat{\theta}\equiv 2\tanh\rho\), which is usually dubbed the scalar expansion rate in Gubser flow, goes negative for \(\rho<0\), although the fluid is always expanding in flat space. It should be noted that, unlike scalar quantities such as temperature, where the transformation from Milne to Gubser coordinates is implemented by a simple scaling, \(T\to\hat{T}=\tau T\), the relation between the Milne expansion rate \(\theta\) and the Gubser expansion rate \(\hat{\theta}\) is more complicated (see Eq. (7) of [65]):
\[\tau\theta=\hat{\theta}+\frac{3}{\tau}\,\frac{\partial\tau}{\partial\rho}=\frac{1+2\tilde{r}^{2}+5\tilde{\tau}^{2}}{s_{+}\,s_{-}}. \tag{46}\]
Here \(s_{\pm}\equiv(1+r_{\pm}^{2})^{1/2}\), with \(r_{\pm}\equiv(\tilde{r}\pm\tilde{\tau})\), and the scaled coordinates are \(\tilde{r}\equiv q\,r\) and \(\tilde{\tau}=q\,\tau\) (here, \(q\) characterizes the inverse size of the system). To get to the last equality in (46) we used the relations (see Eq. (148) of [50])
\[\tilde{\tau}=\frac{\text{sech}\rho}{\cos\theta-\tanh\rho},\qquad\tilde{r}= \frac{\sin\theta}{\cos\theta-\tanh\rho}, \tag{47}\]
and wrote everything in terms of Milne coordinates. Note that Eq. (46) could have also been obtained directly from the definition of the expansion rate in flat space-time:
\[\theta\equiv d_{\mu}u^{\mu}=\frac{\partial u^{\tau}}{\partial\tau}+\frac{\partial u^{r}}{\partial r}+\frac{u^{\tau}}{\tau}+\frac{u^{r}}{r}, \tag{48}\]
where, in Milne coordinates, the components of \(u^{\mu}\) are given by [50]\(u^{\tau}\equiv\cosh\kappa\), \(u^{r}\equiv\sinh\kappa\), with
\[\tanh\kappa(\tau,r)=\frac{2\tilde{\tau}\tilde{r}}{1+\tilde{\tau}^{2}+\tilde{r} ^{2}}. \tag{49}\]
In order to extract the early- and late-time behavior of the flow, we compute the longitudinal and transverse expansion rates [66], \(\theta_{L}\equiv z_{\mu}d_{z}u^{\mu}\), \(\theta_{\perp}\equiv\theta-\theta_{L}\), where \(d_{z}=-z^{\mu}d_{\mu}\), with \(z^{\mu}\) being the unit vector in the longitudinal direction,
\[z^{\mu}=\frac{1}{\sqrt{1+\left(u^{r}\right)^{2}}}\,\left(\tau u^{\eta},0,0,\frac{u^{\tau}}{\tau}\right). \tag{50}\]
For Gubser flow \(u^{\eta}=0\) such that the only non-vanishing component of \(z^{\mu}\) is \(z^{\eta}=1/\tau\). Accordingly the longitudinal expansion rate is
\[\theta_{L}=-z_{\eta}z^{\eta}\,\left(\frac{\partial u^{\eta}}{\partial\eta}+ \Gamma_{\eta\tau}^{\eta}\,u^{\tau}\right)=\frac{\cosh\kappa}{\tau}. \tag{51}\]
Written in terms of Milne variables, the longitudinal and transverse expansion rates are
\[\tau\theta_{L}=\frac{1+\tilde{r}^{2}+\tilde{\tau}^{2}}{s_{+}s_{-}},\qquad\tau \theta_{\perp}=\frac{\tilde{r}^{2}+4\tilde{\tau}^{2}}{s_{+}s_{-}}. \tag{52}\]
The symmetries of Gubser flow imply that the state of a fluid element \(A\) located at \(r_{A}\) at proper time \(\tau_{A}\) is identical to that of the central fluid cell \(C(r\!\!=\!\!0)\) at a proper time \(\tau_{C}=(1/q)\times\text{sech}\,\rho_{A}/(1\!-\!\tanh\rho_{A})\), where \(\rho_{A}\equiv\rho(\tau_{A},r_{A})\). This allows us to focus on the central cell. Eq. (52) implies that the longitudinal expansion rate of Gubser flow for the central cell is \(\theta_{L}=1/\tau\) at all times, which is identical to the Bjorken expansion rate, as expected. In contrast, the transverse expansion rate \(\theta_{\perp}=(1/\tau)\times 4\tilde{\tau}^{2}/(1+\tilde{\tau}^{2})\) vanishes at early times \(\tau\ll 1/q\) but approaches \(\theta_{\perp}\approx 4/\tau\) at late times. Clearly, \(\theta_{\perp}\) dominates over \(\theta_{L}\) at late times, with a transition taking place around \(\tilde{\tau}=\sqrt{1/3}\). Note that the corresponding Gubser transition time is \(\rho\approx-0.55\).
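These central-cell statements can be verified with a short numerical check (our addition; plain Python, with units such that \(q=1\) so \(\tilde{\tau}=\tau\)). It confirms that the two rates in (52) sum to (46), that they cross at \(\tilde{\tau}=1/\sqrt{3}\), and that the corresponding de Sitter time--using \(\tilde{\tau}=e^{\rho}\) at \(\tilde{r}=0\), which follows from (47)--is \(\rho\approx-0.55\):

```python
import math

def rates(r, t):
    # Dimensionless Milne expansion rates of Gubser flow, Eq. (52);
    # r and t stand for q*r and q*tau.
    sp = math.sqrt(1 + (r + t) ** 2)
    sm = math.sqrt(1 + (r - t) ** 2)
    return (1 + r**2 + t**2) / (sp * sm), (r**2 + 4 * t**2) / (sp * sm)

# tau*theta_L + tau*theta_perp reproduces Eq. (46) at a generic point.
r, t = 0.7, 1.3
tL, tT = rates(r, t)
total = (1 + 2 * r**2 + 5 * t**2) / (
    math.sqrt(1 + (r + t) ** 2) * math.sqrt(1 + (r - t) ** 2))
assert abs((tL + tT) - total) < 1e-12

# Central cell (r = 0): theta_L = theta_perp when 4t^2/(1+t^2) = 1, i.e. t = 1/sqrt(3).
t_star = 1 / math.sqrt(3)
tL0, tT0 = rates(0.0, t_star)
assert abs(tL0 - 1.0) < 1e-12 and abs(tT0 - 1.0) < 1e-12
print("de Sitter time at the transition:", math.log(t_star))  # ~ -0.549
```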
Although the transverse expansion rate exceeds the longitudinal one at late times, it decreases with \(\tau\). Why then does the system not thermalize at late times as in Bjorken flow? The reason is the following: once \(P_{T}/e\ll 1\), the temperature decreases as \(d\hat{T}/d\rho=-(\tanh\rho)\,\hat{T}/2\). For the central cell, \(\partial_{\rho}=\tau\,\partial_{\tau}\), such that at late times (or large \(\rho\)),
\[\frac{\partial T}{\partial\tau}\approx-\frac{3}{2}\,\frac{T}{\tau}\implies T \sim\tau^{-3/2}. \tag{108}\]
Accordingly, the expansion rate \(\theta\propto 1/\tau\) exceeds the microscopic scattering rate \(1/\tau_{R}\propto T\sim\tau^{-3/2}\) and does not permit the system to thermalize. Note that assuming (incorrectly) an ideal cooling law for the central cell leads to even faster cooling at late times, \(T\sim\tau^{-5/3}\) (hence, an even slower microscopic scattering rate), leading to the same conclusion.
|
2307.08126 | Ergodicity of a surgered flow on unit tangent bundle of hyperbolic
surface | Starting with a trivial periodic flow on $\mathbb{S}M$, the unit tangent
bundle of a genus two surface, we perform a Dehn-type surgery on the manifold
around a tubular neighborhood of a curve on $\mathbb{S}M$ that projects to a
self-intersecting closed geodesic on $M$, to get a surgered flow which
restricted to the surgery region is ergodic with respect to the volume measure.
The surgered flow projects to a map on the surgery track that can be taken to
be a linked twist map with oppositely oriented shears which generates the
ergodic behavior for sufficiently strong shears in the surgery. | Aritro Pathak | 2023-07-16T19:00:04Z | http://arxiv.org/abs/2307.08126v1 | # Ergodicity of a surgered flow on unit tangent bundle of hyperbolic surface
###### Abstract
Starting with a trivial periodic flow on \(\mathbb{S}M\), the unit tangent bundle of a genus two surface, we perform a Dehn-type surgery on the manifold around a tubular neighborhood of a curve on \(\mathbb{S}M\) that projects to a self-intersecting closed geodesic on \(M\), to get a surgered flow which restricted to the surgery region is ergodic with respect to the volume measure. The surgered flow projects to a map on the surgery track that can be taken to be a linked twist map with oppositely oriented shears which generates the ergodic behavior for sufficiently strong shears in the surgery.
## 1 Introduction
Consider the self-intersecting closed geodesic \(\beta:[0,1]\to M\) on the genus two surface \(M\) as shown in Fig. 1. This gives an immersed submanifold of \(M\) and two points \(t_{0},t_{1}\in[0,1]\) with \(\beta(t_{0})=\beta(t_{1})\), which is the point of intersection.
Consider \(S^{1}=[0,1]/\sim\), identifying \(0\) and \(1\), and any embedding \(S^{1}\to\mathbb{S}M\) with the following property: each \(\theta(t)\) is an element of the \(\mathbb{S}^{1}\) fiber over \(\beta(t)\) in \(\mathbb{S}M\) when \(t\notin\{t_{0},t_{1}\}\), while \(\theta(t_{0})\neq\theta(t_{1})\) and both \(\theta(t_{0}),\theta(t_{1})\) belong to the \(\mathbb{S}^{1}\) fiber over the point \(\beta(t_{0})=\beta(t_{1})\).
On \(\mathbb{S}M\), this creates a closed curve \(\theta\) which avoids intersecting itself and projects under the canonical projection \(\pi:\mathbb{S}M\to M\) to the curve \(\beta\). Consider an annulus \(\mathbb{T}\) around the curve \(S\) in \(\mathbb{S}M\) which creates a two-layered track as depicted in Fig. 2. We will refer to \(\mathbb{T}\) later as the surgery track. For each point \(x\in S\), and a small enough local chart \(U_{x}\) of \(\mathbb{S}M\) around the point \(x\), \(\mathbb{T}\cap U_{x}\) is diffeomorphic to a rectangular strip. As a result of the construction of the set \(\mathbb{T}\subset\mathbb{S}M\), we have two square regions stacked on top of each other, \(S_{1}\) on top and \(S_{2}\) on bottom, along with two different lobes as shown in Fig. 2, and the center
Figure 1: Self-intersecting closed geodesic \(\beta:[0,1]\to M\) on the hyperbolic surface \(M\) of genus \(2\). The surgery region in the unit tangent bundle of \(M\) is described above, as shown in Fig. 2. The surgery procedure leads to hyperbolicity and ergodicity for the part of the altered flow that intersects the surgery neighborhood. However, the altered flow is not mixing.
of these two squares projects under the canonical projection to the point \(\beta(t_{0})=\beta(t_{1})\), and further we orient the surgery track \(\mathbb{T}\) in such a way that at every point on \(\mathbb{T}\) there is an \(\mathbb{S}^{1}\) fiber transverse to \(\mathbb{T}\) and passing through the point. For every point in one of the two layers of the double-layer region of \(\mathbb{T}\), the \(\mathbb{S}^{1}\) fiber that passes transversely through the point also passes transversely through \(\mathbb{T}\) at exactly one point on the other layer. The shearing squares \(S_{1},S_{2}\) are taken to be such that the same \(\mathbb{S}^{1}\) fiber over the double layer only intersects \(S_{1}\) once and \(S_{2}\) once, as shown in Figure 5. Also, any \(\mathbb{S}^{1}\) fiber that intersects one of the lobes of \(\mathbb{T}\) intersects it exactly once.
**Definition 1**.: Consider the set of \(\mathbb{S}^{1}\) fibers that intersect the shearing track \(\mathbb{T}\). The union \(\widetilde{S}\) of all these fibers is the surgery region.
Further, the vertical separation between the two square regions is \(d_{1}\). Also, we will consider a parametrization of each of the fibers of the surgery region so that the origin on each such fiber is within a distance \(d_{1}/2\) from the point(s) at which it intersects the surgery track (either once or twice).
We will show that the restriction of the altered flow after the surgery, to this surgery region, becomes ergodic, whereas in the complement \(\mathbb{S}M\setminus\widetilde{S}\) we still get the trivial periodic flow.
We consider an initial trivial periodic flow \(\mathbf{T}_{t}\) on \(\mathbb{S}M\), which in local coordinates, on a given \(\mathbb{S}^{1}\) fiber passing through the point \(x\in\mathbb{T}\), is simply a unit-speed periodic flow \(\mathbf{T}_{t}(x,\theta)=\theta+t\ (\mathrm{mod}\ 1)\) on the fiber over \(x\). Upon performing a modified Dehn surgery on \(\mathbb{S}M\) by means of a shear map \(\tilde{f}\) on \(\mathbb{T}\), which we describe later, we will alter this simple periodic fiber flow; every time the flow encounters the surgery track \(\mathbb{T}\) at a point \(x\in\mathbb{T}\), the flow is taken to the periodic flow \(\mathbf{T}_{t}\) on the fiber that intersects \(\mathbb{T}\) at the point \(\tilde{f}(x)\in\mathbb{T}\).
We define \(\tilde{f}\) first through the map \(f\) on a subset \(T\) of the torus below. We consider a shear profile which is linear across \(T\), with slope \(\alpha\).
**Definition 2**.: \(f(x,y)=(x+\alpha(y-y_{0}),y)\) for \(y_{0}\leq y\leq y_{1}\) with \(\alpha(y_{1}-y_{0})=k\) for some positive integer \(k\), defined on: \(T=\{(x,y):0\leq x\leq 1,y_{0}\leq y\leq y_{1}\}\) considered as a subspace of the torus \(\mathbb{R}^{2}/\mathbb{Z}^{2}\). We term \(k\) the winding number.
We now twist \(T\) into the "two-layered" track \(\mathbb{T}\) of Fig. 2. The map is now linked over the double layer, by the periodic fiber flow over this surgered region, as explained above. See Figure 5. The width of the track \(w=y_{1}-y_{0}\) is taken to be so that \(w\ll 1\).
We show that even though the initial flow is trivial and completely periodic on the unit tangent bundle, upon performing the modified Dehn surgery, the surgered flow is ergodic.
Upon the modified Dehn surgery, the surgered flow can be described as follows: the flow \(\mathbf{T}_{t}\) is taken to be counterclockwise on each fiber of the surgery region and this counterclockwise description is consistent for all the fibers of the surgery region. Whenever the flow encounters the surgery track 'from below' at a point \(x\in\mathbb{T}\) in a sense that is again consistent across the surgery track, the flow experiences a shear by the map \(\tilde{f}\) to reach \(\tilde{f}(x)\) and then leaves the track 'above' from the point \(\tilde{f}(x)\) in a way that makes sense across the track \(\mathbb{T}\). This is true for the double layer region as well. A typical part of the orbit that encounters both the double layers is shown in Figure 5.
As in the picture shown in Fig. 3, one can equivalently describe the map \(\tilde{f}\) on the double layer as well as the lobes as a succession of two shears on the domain shown in Figure 3, one horizontally which we term \(F\), and the other vertical shear which we term as \(G\), so that the map becomes, in the domain \(\{(x,y):x\in[0,1],y\in[y_{0},y_{1}]\}\) for \(F\) and the domain
\(\{(x,y):x\in[x_{0},x_{1}],y\in[0,1]\}\) for \(G\):
\[F\cdot\begin{pmatrix}x\\ y-y_{0}\end{pmatrix}=\begin{pmatrix}1&\alpha\\ 0&1\end{pmatrix}\cdot\begin{pmatrix}x\\ y-y_{0}\end{pmatrix}, \tag{1}\]
\[G\cdot\begin{pmatrix}x-x_{0}\\ y\end{pmatrix}=\begin{pmatrix}1&0\\ -\alpha&1\end{pmatrix}\cdot\begin{pmatrix}x-x_{0}\\ y\end{pmatrix}, \tag{2}\]
Further, the map \(F\) is the identity in the region \([0,y_{0}]\cup[y_{1},1]\) and the map \(G\) is the identity in the region \([0,x_{0}]\cup[x_{1},1]\). The maps \(F\) and \(G\) link together in the central square region with the vertices \(ABCD\).
The surgered flow restricted to the surgery region is called \(\Psi\). With the linked twist map description, both the lower and upper squares coincide and we have the domain of Figure 3, and the orbit of any point \(x\) in this domain under the linked twist map \(H=G\cdot F\) is actually a subset of the orbit of \(x\) under successive horizontal and vertical shears \(F,G\). But ergodicity under the map \(H\) also obviously gives ergodicity under the map which is a succession of the horizontal and vertical shears. The map \(H\) shows hyperbolic behavior.
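As a quick illustration of this hyperbolicity (a sketch added here; it inspects only the constant Jacobian of \(H=G\cdot F\) on the overlap square, ignoring the mod-1 identifications of the track), the product of the two shear matrices in (1)-(2) has determinant one and eigenvalues off the unit circle for \(\alpha>2\), and the cone slope \(L\) used below satisfies \(L(L+\alpha)=-1\):

```python
import math

alpha = 6.23  # a shear strength at the critical value found in Section 2.2

# Jacobian of H = G . F on the overlap square: [[1,0],[-a,1]] @ [[1,a],[0,1]].
a = alpha
tr, det = 2 - a * a, 1.0          # trace and determinant of [[1,a],[-a,1-a^2]]
disc = math.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
assert abs(lam1 * lam2 - det) < 1e-9 and max(abs(lam1), abs(lam2)) > 1
print(lam1, lam2)                 # ~ -0.027 and ~ -36.8: hyperbolic

# Invariant-cone slope L = -a/2 + sqrt((a/2)^2 - 1) solves L(L + a) = -1.
L = -a / 2 + math.sqrt((a / 2) ** 2 - 1)
assert abs(L * (L + a) + 1) < 1e-9
```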
While in a proper Dehn surgery, one alters the manifold to recover another smooth manifold, as described in [10],[11], our surgered manifold is not smooth; we only enforce a \(C^{0}\) joining of the shear to the boundary of \(\mathbb{T}\). We only apply the machinery of uniform hyperbolicity to achieve an ergodic flow in such a surgered manifold.
We refer the reader to the recent manuscripts [10],[11] for more background for this work. The existence of a Smale horseshoe for the surgered flow is shown in [12]. We also refer the reader to earlier works of [11, 12] which establish ergodicity in the linked twist mapping when the twists reinforce each other, and also [10] which establishes the presence of a horseshoe in linked twist mappings.
For the situation of a Dehn surgery with \(C^{k}\) boundary shear profile on \(\mathbb{T}\), the problem of studying even some basic properties of the corresponding projected map on the shearing track becomes difficult, which would be the object of future analysis.
When we unravel the surgery track, we get the schematic picture in Fig. 7 with the left edge of the track in Fig. 7 identified with the right edge of the track. The dynamics is under the map \(\tilde{f}\), which is linked between the squares \(S_{1}\) and \(S_{2}\), but otherwise is actually just equivalent to the original twist map of Definition 2.
When the shear is made smooth and thus weak enough at the boundary of the shearing region such as in Figure 4(a), with the boundary identification we have, the problem of determining the orbit structure appears to become difficult, unlike in the case considered in [11] where in the linked twist map the shears in the central region reinforce each other and thus we escape into the bulk of the square \(S\) where we again experience reinforcing strong shears. In our opposing identification, orbits can spend a long time in the corners with successively weak shears, and the study of the orbit structure within the central region \(S_{1}\cup S_{2}\) becomes difficult.
We prove the following theorem in Section 2.
**Theorem 1**.: The map \(\tilde{f}\) with winding number \(k\geq 2\) on \(\mathbb{T}\) is ergodic with respect to the Lebesgue measure when the shear parameter \(\alpha>6.23\). In fact, this map is Bernoulli.1
Footnote 1: which is more than what we need.
As a result, we establish the main result for the surgered flow.
**Corollary 2**.: The surgered flow \(\Psi\) on \(\mathbb{S}M\), restricted to the surgery region, is ergodic with respect to the volume measure.
Proof.: Consider any subset \(A\) of the surgery region, that is invariant under the surgered flow \(\Psi\). The projection of \(A\) to the annulus \(\mathbb{T}\) is then invariant under the shear map \(\tilde{f}\) and thus has Lebesgue measure zero on \(\mathbb{T}\) since the map \(\tilde{f}\) is ergodic by Theorem 1, and thus the set \(A\) itself also has zero volume measure in \(\mathbb{S}M\).
Further, even though the map \(\tilde{f}\) on \(\mathbb{T}\) has the Bernoulli, hence mixing, property, we are also able to establish that the surgered flow is not weakly mixing.
**Theorem 3**.: The surgered flow \(\Psi_{t}\) is not weakly mixing.
Proof.: Consider the width \(d_{1}\) between the two squares in the surgery region, as shown in the figure. Consider an arbitrary subset \(A\) of the surgery region of \(\mathbb{S}M\) with the following property: the set \(A\) projects to a rectangle \(R\) (and thus of positive Lebesgue measure on \(\mathbb{T}\)) lying on any one of the lobes of the track \(\mathbb{T}\), and in local charts on \(\mathbb{S}M\) where the coordinates are given by \((t,\theta)\), \(t\in[a,b]\times[c,e]\), \(\theta\in[0,1]\), with \(R\subset[a,b]\times[c,e]\), we have \(A=R\times[-\epsilon,\epsilon]\) with \(2\epsilon<d_{1}\). In other words, the set is a cube with uniform width \(2\epsilon\) in the direction of the fibers \(\mathbb{S}^{1}\) in a local chart. Consider any other set \(B\) that has the same property, also with a width \(2\epsilon\). Now consider the flow of the set \(A\) under \(\Psi\). Since \(2\epsilon<d_{1}\), the set \(\Psi_{t}(A)\) can only be decomposed as a disjoint union \(\sqcup_{i=1}^{N(t)}A_{i}(t)\) with each \(A_{i}\subset\mathbb{S}M\), \(N(t)\to\infty\) as \(t\to\infty\), but where each of the sets \(A_{i}(t)\) continues to have width \(2\epsilon<d_{1}\) in the direction of the fibers.
The set \(\Psi_{t}(A)\), as \(t\to\infty\), spends a positive fraction of each time interval of length \(2\pi\) uniformly a distance \(O(\epsilon)\) away from the track \(\mathbb{T}\); thus for a positive fraction of the time the set \(\Psi_{t}(A)\) also has null intersection with \(B\), and the flow is not weakly mixing.
From now on it is enough to work in the domain \(\tilde{T}:=W\cup V\) of Figure 3 and the map \(H\).
Figure 4: A uniform shear profile \(f\) shown in part(b), whose derivative is discontinuous at the boundary, with \(k=5\). A nonuniform shear profile \(f\) that is \(C^{m}\) (or could be made \(C^{\infty}\)) at the two boundaries, where also \(k=5\), is shown in part (a).
**Theorem 4**.: For Lebesgue almost every \(x,y\in S_{1}\), denote the local unstable and stable manifolds at \(x\) and \(y\) respectively by \(\gamma^{u}(x)\) and \(\gamma^{s}(y)\); if \(H^{m}(\gamma^{u}(x))\) intersects \(H^{-n}(\gamma^{s}(y))\) for all positive integers \(m,n\) large enough, then \(H\) and all its powers are ergodic.
## 2 Proof of Theorem 1.
### The dynamics of the linked shear map.
Given that we only consider linear shears, as stated earlier, for any point \(x\) the stable and unstable manifolds \(\gamma^{s}(x),\gamma^{u}(x)\) are linear segments whose slopes are respectively \(L\) and \(1/L\) from Eq. (6). The unstable manifold is a linear segment lying on the left boundary of the cone \(C\) given by \(\{(v,w):L\leq v/w\leq 0\}\), because of Eq. (6).
Consider points in \(S\). Under successive shears in the lobes, eventually the image under \(H\) of this segment either enters the square \(S\) vertically in Fig. 3, again within a cone \(C\) upon entry, or enters \(S\) from the left in Fig. 2 horizontally within the cone \(C^{\prime}\), which is a rotation by \(\pi/2\) of \(C\).
Further, we assume that the dimensions of the central square regions are small compared to the total length of the unfolded track (which is taken to be of unit length). In particular, in the ensuing analysis when we talk about points just to the left or right of the edges of the square region, they will be understood without any difficulty, since the lobes are considered
Figure 5: The point \(x\) moves to \(f(x)\) on the top square, and then the ensuing dynamics is shown on the two layered square. The perpendicular distance between the two square layers is given by \(d_{1}\).
large enough so that no ambiguity arises about these notions of 'left' and 'right' near the boundary of the square regions.
Under exactly one shear, the boundary of \(C\) given by the line \(v/w=L\) maps exactly to the corresponding boundary of \(C^{\prime}\), whereas after a sufficiently large number of shears in the lobes, the images of the unstable manifolds are within the interior of the cone \(C\) or \(C^{\prime}\). This follows from Eq. (6), which is equivalent to \(L=-1/(\alpha+L)\); this shows that under one iteration of a horizontal shear, the boundary of \(C\) is mapped exactly to the boundary of \(C^{\prime}\), that under further horizontal (vertical) shears a segment in \(C^{\prime}\) (\(C\)) gets mapped within \(C^{\prime}\) (\(C\)), and that upon further iterations a segment in the interior of \(C\) (\(C^{\prime}\)) under a horizontal (vertical) shear gets mapped into the interior of the cone \(C^{\prime}\) (\(C\)).
We ensure the shear parameter is large enough that eventually, for all large enough iterations \(n\) of the map, we have a v-segment through \(S\) contained in \(H^{n}(\gamma^{u}(x))\). Following the notation of [10], by an h-segment we mean a straight line segment that intersects both the right and left sides of \(S\). A v-segment is a straight line segment that intersects both the top and bottom sides of \(S\). For simplicity of notation, consider the first-return map \(H_{s}\) to \(S\).
Start with any segment \(\gamma\subset H_{s}^{p}(\gamma^{u}(x))\) in \(S\), within the cone \(C\), with vertical length \(l_{v}(\gamma)\), for any positive integer \(p\). Whenever a segment enters \(S\) vertically under the map \(H_{s}\), it enters within the cone \(C\), and when a segment enters \(S\) horizontally it does so within the cone \(C^{\prime}\). We will ensure that eventually, for some integers \(\tilde{m}_{0},n_{0}>0\), a segment \(L_{1}\subset H^{\tilde{m}_{0}}(\gamma)\subset H^{n_{0}}(\gamma^{u}(x))\) inserted into \(S\) has vertical length \(l_{v}(L_{1})>\delta l_{v}(\gamma)\), i.e. greater than the length of the original segment by a multiplicative factor of \(\delta>1\) (here \(\delta\) is a uniform constant, independent of the segments \(\gamma\)), or eventually, for some integer \(m_{0}\), a segment \(L_{2}\subset F\cdot H^{m_{0}}(\gamma^{u}(x))\) inserted into \(S\) has horizontal length \(l_{h}(L_{2})>\delta l_{v}(\gamma)\), i.e. greater than the length of the original segment by a multiplicative factor of \(\delta>1\).
Because of the exponential growth of the length of the segments above, eventually we will get a large enough segment through \(S\) that is either a vertical v-segment or a horizontal h-segment. In the next iteration of the map, as noted earlier, because \(k\geq 2\), we achieve both vertical and horizontal segments in \(S\).
Note that when we get both v-segments and h-segments in \(S\) under the map \(H\) in the domain \(\tilde{T}\) of Figure 3, we correspondingly get both v-segments and h-segments in both the top and bottom layers \(S_{1},S_{2}\) of \(\mathbb{T}\) under the map \(\tilde{f}\).
Consider without loss of generality that the segment \(\gamma\) lies within the cone \(C\) within \(S\). It will be enough to separately consider the two subcases in Sections 2.1.1 and 2.1.2.
Figure 6: Passing from the vertical cone \(C\) to the cone \(C^{\prime}\), through the map \(H\) in the square region \(S\). The left edge of the cone \(C\) is mapped to the top edge of the cone \(C^{\prime}\) as shown in the above figure. Similarly, when one passes from the cone \(C^{\prime}\) to the cone \(C\), the top edge of \(C^{\prime}\) gets mapped to the left edge of \(C\).
Figure 8: An expanded picture of the case of first return to the square \(S_{2}\) as depicted in Fig. 7, and identifications of the segments \(I_{1},I_{2}^{\prime},I_{2}^{\prime\prime},I_{3},I_{4}\) used in the analysis. We have that the disjoint union of the segments \(I_{2}^{\prime}\cup I_{2}^{\prime\prime}=I_{2}\).
Figure 7: Unfolding the linked twist map. The two square regions are labelled as \(S_{1}\) and \(S_{2}\). The distance between the right edge \(RE_{1}\) of \(S_{1}\) and the left edge \(LE_{2}\) of \(S_{2}\) can without loss of generality be taken to be equal to the distance between the right edge \(RE_{2}\) of \(S_{2}\) and the left edge \(LE_{1}\) of \(S_{1}\) under the identification, i.e. we consider the lobes to be symmetric. The initial segment \(\gamma\) is in the square \(S_{1}\) and \(H_{s}(\gamma)\) intersects the left edge of the square \(S_{2}\).
#### 2.1.1 First return of \(\gamma\) intersects only one square
First, we are interested in the dynamics where the first return of \(\gamma\) under iterations of the map \(H\) intersects just one of the two squares \(S_{1},S_{2}\). There are two possibilities here: either it intersects one of the left edges \(LE_{1}\) or \(LE_{2}\), or it intersects one of the right edges \(RE_{1}\) or \(RE_{2}\). It will be enough to consider the case shown in Figures 7 and 8, where it intersects the left edge of one of the two squares \(S_{1},S_{2}\); the analysis in the other case is identical.
For the sake of argument, from now on we consider the 'unfolded' linked twist map shown in Fig. 7, where the segment under first return intersects the left edge of the square \(S_{2}\). The segment \(\gamma\) itself lies in the square \(S_{1}\). This segment now belongs to some \(H^{m_{1}}(\gamma)\).
Consider the situation in Fig. 7. We divide the segment \(H^{m_{1}}(\gamma)\) into four distinct parts \(I_{1},I_{2},I_{3},I_{4}\) as before, and we show that it is enough to require the conditions stated below.
Following [10], as in [11], we define the distance \(d\) as follows: we are looking for the unique integer \(q\) such that \(1/q<\alpha l_{v}(I_{2})\) and \(1/(q-1)\geq\alpha l_{v}(I_{2})\), which is \(q=\lfloor\frac{1}{\alpha l_{v}(I_{2})}+1\rfloor\). Given the segment \(I_{2}\), let the vertical endpoints of \(I_{2}\) be \(y_{1},y_{2}=y_{1}+l_{v}(I_{2})\). In this case, under a forward iterate of the map, one endpoint moves forward by a distance of \(\alpha y_{1}\) and the other endpoint moves forward by a distance \(\alpha(y_{1}+l_{v}(I_{2}))\). We seek a point on \(I_{2}\) such that this point moves under this iteration of the map by a rational amount \(p/q\) with \(\alpha y_{1}\leq p/q<\alpha(y_{1}+l_{v}(I_{2}))\), with \(q=\lfloor\frac{1}{\alpha l_{v}(I_{2})}+1\rfloor\) as above. Clearly such a point exists. The period of such an orbit, depending on whether we find \(p,q\) coprime, is some divisor of \(q\), and the distance between successive points of the orbit is some multiple of \(d=\frac{1}{\lfloor\frac{1}{\alpha l_{v}(I_{2})}+1\rfloor}\) and thus at least this value. We could have chosen a larger value of \(q\) to make the distance \(d\) smaller, but this would need a stronger shear than that needed with the smallest possible value of \(q\).
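A small helper function (our illustration; the function name is ours) makes this bookkeeping concrete: given \(\alpha\) and the vertical extent of \(I_{2}\), it returns \(q=\lfloor\frac{1}{\alpha l_{v}(I_{2})}+1\rfloor\), the spacing \(d=1/q\), and a height \(y\) inside \(I_{2}\) whose horizontal displacement per iterate is exactly \(p/q\):

```python
import math

def rational_point(alpha, y1, lv):
    """Return q, the spacing d = 1/q, and a height y in [y1, y1+lv)
    whose horizontal displacement alpha*y per iterate equals p/q exactly."""
    q = math.floor(1.0 / (alpha * lv) + 1.0)
    assert 1.0 / q < alpha * lv <= 1.0 / (q - 1)    # defining property of q
    p = math.ceil(q * alpha * y1)                   # smallest p with p/q >= alpha*y1
    assert alpha * y1 <= p / q < alpha * (y1 + lv)  # the window is longer than 1/q
    return q, 1.0 / q, (p / q) / alpha

print(rational_point(alpha=6.5, y1=0.013, lv=0.02))  # (8, 0.125, ~0.0192)
```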
We claim that in this case it is enough to require that there exists a constant \(\delta>1\) such that all four of the following hold.
\[d\geq 2\delta l_{v}(\gamma), \tag{7}\]
\[\alpha l_{v}(I_{2}^{\prime\prime}\cup I_{3})\geq\delta l_{v}(\gamma), \tag{8}\]
\[\alpha l_{v}(I_{1}\cup I_{2}^{\prime})+l_{h}(I_{1}\cup I_{2}^{\prime})\geq 3\delta l_{v}(\gamma), \tag{9}\]
\[l_{h}(I_{2}^{\prime\prime}\cup I_{3})\geq\delta l_{v}(\gamma). \tag{10}\]
We outline the arguments below. As mentioned earlier, we assume that the segment \(\gamma\) under consideration is inside the square \(S_{1}\). We consider the specific point \(p_{1}\) on the segment \(I_{2}\) that has a rational orbit with \(d\) being the distance between the nearest points of the orbit.
The case where this first return is to the square \(S_{1}\) is entirely analogous to the analysis here.
The cases of the second return happening to \(S_{1}\) and the second return happening to \(S_{2}\) have to be treated slightly differently, and that will be apparent in the argument below.
The distance from \(p_{1}\) to \(LE_{2}\) is denoted \(D\) as shown in Fig. 7.
Consider the points of the rational orbit in the set \(T=\{x:x\notin S_{1}\cup S_{2},\,d(x,LE_{2})\in[\max(0,D-d/2),D+d/2)\text{ or }d(x,LE_{1})\in[\max(0,D-d/2),D+d/2)\}\). By construction there is at most one point of the rational orbit in the distance range \([\max(0,D-d/2),D+d/2)\) from \(LE_{1}\), and exactly the point \(p_{1}\) of the rational orbit in the distance range \([\max(0,D-d/2),D+d/2)\) from \(LE_{2}\).
We are interested in the first time the orbit of the point \(p_{1}\) under the map \(H\) returns to within a distance at most \(D\) to the left of \(LE_{1}\) or to the left of \(LE_{2}\), or within any of the two squares itself. We call this point the point of second return. Consider the symmetrically placed point \(p_{2}\) at a distance exactly \(D\) from the left edge \(LE_{1}\) of \(S_{1}\), which in general is not part of the rational orbit.
Case(i): Suppose that this point of second return is to a point \(Q\) such that \(d(Q,LE_{1})<(D-d/2)\) to the left of \(LE_{1}\), in case \(D>d/2\), and if such a point exists. In this case, there is always at least a horizontal length \(D=l_{h}(I_{2}^{{}^{\prime\prime}}\cup I_{3})\) (in fact a bigger length if at least one further iteration has taken place in between) that has not been cut off prior to returning to the distance at most \(D\) from either of \(LE_{1}\) or \(LE_{2}\). \(Q\) is at least a distance \(d/2\) away from the point \(p_{2}\), and \(d/2<l_{h}(I_{2}^{{}^{\prime\prime}}\cup I_{3})=D\) by construction in this case, thus a horizontal length at least \(d/2\) has been pushed inside the square \(S_{2}\).
Case(ii): Now suppose instead that at the second return, the point of the orbit \(H^{k}(p_{1})\) for some \(k\geq 1\) lies in the distance range \((\max(0,D-d/2),D)\) from the left edge \(LE_{1}\) of \(S_{1}\), or equivalently, at a distance less than or equal to \(d/2\) to the right of \(p_{2}\), and call this point \(Q^{\prime}\). In this case, the successive horizontal lengths outside \(S_{1}\cup S_{2}\), as the orbit moves from \(p_{1}\) to \(Q^{\prime}\), with at least one point in between \(p_{1}\) and \(Q^{\prime}\), are always at least \(\min\bigl{(}d/2+l_{h}(I_{2}^{\prime\prime}\cup I_{3}),l_{h}(I_{2}^{\prime\prime}\cup I_{3})+\alpha l_{v}(I_{2}^{\prime\prime}\cup I_{3})\bigr{)}.\) This is because the nearest point of the orbit to the left of \(Q^{\prime}\), exactly a distance \(d\) from \(Q^{\prime}\), is at least a distance \(d/2\) away from \(p_{2}\). In the case that we move directly from \(p_{1}\) to \(H(p_{1})=Q^{\prime}\), we will have at least a length \(\alpha l_{v}(I_{2}^{\prime\prime}\cup I_{3})\) that is inserted inside the central square region \(S_{1}\), or an h-segment, in which case we are done; otherwise, if there are further iterations in between, which means \(H^{k}(p_{1})=Q^{\prime}\) for some \(k\geq 2\), then from the above argument we would still have a length of \(d/2\) inserted inside the square \(S_{1}\).
Case(iii): In case the second return is to the region between \(p_{1}\) and \(LE_{2}\), the point of return is at least at a distance \(d\) to the right of \(p_{1}\), and since till that point we always have a horizontal length \(D=l_{h}(I_{2}^{\prime\prime}\cup I_{3})\) outside of \(S_{1}\cup S_{2}\), with \(D>d\) by construction in this case, a horizontal length at least \(d\) would be inserted inside the square \(S_{2}\).
Case(iv): Consider the case when the second return of \(p_{1}\) is to a point inside \(S_{1}\cup S_{2}\), except for the points \(R_{1}\) or \(R_{2}\). The amount of horizontal length within the square is at least \(\min(d,l_{h}(I_{2}^{\prime\prime}\cup I_{3}))+\alpha l_{v}(I_{2}^{\prime \prime}\cup I_{3})\). The term \(d\) appears since \(H^{k}(p_{1})\) might be the point just to the left of \(R_{1}\) or \(R_{2}\) and either of \(R_{1},R_{2}\) may be arbitrarily close to the right edges \(RE_{1}\) or \(RE_{2}\). In either case, we would be done.
Case(v): If the second return is to \(R_{1}\) (or equivalently \(R_{2}\)), then again we would either have at least an amount \(l_{h}(I_{2}^{\prime\prime}\cup I_{3})\) to the right of the point \(R_{1}\), in which case we are done because of Equation 8, or the segment is cut off by the right edge \(RE_{1}\). This is because, starting from \(p_{1}\), the orbit \(H^{k}(p_{1})\) for \(k\geq 1\) may pass through a point a distance \(D+\epsilon\) to the left of \(LE_{1}\) for arbitrarily small \(\epsilon\), and then reach the point \(R_{1}\), and thus only possibly a horizontal length \(l_{h}(I_{2}^{\prime\prime}\cup I_{3})\) of the segment to the right of \(R_{1}\) is inserted into the square \(S_{1}\).
Consider in particular the case where we have segments that are intersecting either of \(RE_{1}\) or \(RE_{2}\). Because of equation (7) above, it must happen that when the point is first at \(T_{1}\) and the part of the segment to the left of \(T_{1}\) gets cut off by the right edge of \(S_{1}\), 4 either a horizontal length greater than \(\delta l_{v}(\gamma)\) gets cut off inside the square \(S_{1}\), in which case we are done, or otherwise a part greater than or equal to \(2\delta l_{v}(\gamma)\) gets cut off outside \(S_{1}\cup S_{2}\). If this segment gets cut off by the right edge of the square \(S_{2}\) again when the point is at \(T_{2}\), then if a horizontal length \(\delta l_{v}(\gamma)\) gets cut off inside the square \(S_{2}\) we are again done; otherwise a portion at least \(\delta l_{v}(\gamma)\) gets cut off outside \(S_{1}\cup S_{2}\). Now the point cannot again return to \(T_{1}\) nor to \(T_{2}\), and in this case, by hypothesis, the point returns to \(R_{1}\) prior to reaching any other point within \(S_{1}\cup S_{2}\) or within a distance \(D\) to the left of either \(S_{1}\) or \(S_{2}\). 5 Then at least a segment of length \(\delta l_{v}(\gamma)\) remains within one of the central square regions or touches the left edge of the particular central square.
Footnote 4: the case where it gets cut off by \(S_{2}\) first is entirely analogous
Footnote 5: which may both be arbitrarily close to the right edge
If the segment touches both the right and left edges, we would be done with a complete h-segment within the square, otherwise from the argument in the previous two paragraphs, we would still have a segment of length at least \(\delta l_{v}(\gamma)\) inserted within the square \(S_{1}\) and we would also be done.
#### 2.1.2 First return of \(\gamma\) intersects both \(S_{1}\), \(S_{2}\)
In the case where the first return of \(\gamma\) has intersection with both squares \(S_{1}\) and \(S_{2}\), as shown in Fig. 9, because of the symmetry, the distance between the right edge of \(S_{1}\) and the left edge of \(S_{2}\) is the same as the distance between the right edge of \(S_{2}\) and the left edge of \(S_{1}\). Suppose that the return is as shown in Figure 9, where the return has a segment \(I_{1}\) in the square \(S_{1}\), a segment \(I_{2}\) in between the right edge of \(S_{1}\) and the left edge of \(S_{2}\), and finally a segment \(I_{3}\) in the square \(S_{2}\). The case where the return is between the right edge of \(S_{2}\) and the left edge of \(S_{1}\) is analogous.
In this case, it is enough that one of the following three holds,
\[l_{h}(I_{1})\geq\delta l_{v}(\gamma), \tag{11}\]
\[l_{h}(I_{3})\geq\delta l_{v}(\gamma), \tag{12}\]
\[l_{h}(H(I_{2}))-l_{h}(I_{2})\geq 2\delta l_{v}(\gamma), \tag{13}\]
since this means that either one of the segments within the two squares is long enough, or that the horizontal length of the image of the segment \(I_{2}\) under the map \(H\) increases by at least \(2\delta l_{v}(\gamma)\), so that at least one part of it has a segment that intersects either one of the squares with length at least \(\delta l_{v}(\gamma)\).
### Showing the existence of a critical shear parameter:
We work with equations (7)-(13) and show the existence of a critical shear parameter \(\alpha_{0}\) such that for all \(\alpha\geq\alpha_{0}\) we have ergodicity for the linked shear map.
It would clearly be enough to show that all the following hold:
\[d\geq 2\delta l_{v}(\gamma), \tag{14}\]
\[\alpha l_{v}(I_{3})\geq\delta l_{v}(\gamma), \tag{15}\]
\[\alpha l_{v}(I_{1})+l_{h}(I_{1})\geq 3\delta l_{v}(\gamma), \tag{16}\]
\[l_{h}(I_{3})\geq\delta l_{v}(\gamma), \tag{17}\]
\[l_{h}(I_{4})\geq\delta l_{v}(\gamma). \tag{18}\]
For equation (14) to hold, it is enough to show that
\[\frac{\alpha l_{v}(I_{2})}{1+\alpha l_{v}(I_{2})}\geq 2\delta l_{v}(\gamma) \tag{19}\]
\[\implies\quad l_{v}(I_{2})\geq\frac{2\delta l_{v}(\gamma)}{\alpha(1-2\delta l_{v}(\gamma))}. \tag{20}\]
We can always choose a \(\delta>1\) such that \((1-2\delta l_{v}(\gamma))\) is positive, since the width of the strands of \(\mathbb{T}\) is arbitrarily small and thus the vertical length \(l_{v}(\gamma)\) is also arbitrarily small compared to \(1/2\).
Further, from elementary geometry, we note that since the segment \(I_{3}\) is within the cone \(C\) or \(C^{\prime}\), depending on which square we are in, we must have that \(l_{h}(I_{3})\geq l_{v}(I_{3})(\alpha+L)\). Thus for equations 15 and 17 to hold, it is enough to have that
\[l_{v}(I_{3})\geq\frac{\delta l_{v}(\gamma)}{\alpha+L}\,\left(\,\geq\frac{ \delta}{\alpha}l_{v}(\gamma)\right) \tag{21}\]
(Note that clearly we have \(\alpha>L\).)
Further, by an identical argument as above for the segment \(I_{1}\), to satisfy equation 16 it is enough to have that
\[l_{v}(I_{1})\geq\frac{3\delta l_{v}(\gamma)}{L+2\alpha} \tag{22}\]
Figure 9: The case where the return is to both the squares \(S_{1}\) as well as \(S_{2}\).
Further, equation 18 is satisfied if we have
\[l_{v}(I_{4})\geq\frac{\delta l_{v}(\gamma)}{\alpha+L}. \tag{23}\]
Thus we can find a \(\delta>1\) satisfying the above relations, such that either the segments \(I_{1},I_{2},I_{3}\) satisfy equations (20)-(22), or \(I_{4}\) satisfies equation (23), provided we ensure that
\[l_{v}(\gamma)>l_{v}(\gamma)\Big{(}\frac{2}{\alpha+L}+\frac{3}{2\alpha+L}+\frac {2}{\alpha(1-2l_{v}(\gamma))}\Big{)} \tag{24}\]
Recall that \(L=-\frac{\alpha}{2}+\sqrt{(\frac{\alpha}{2})^{2}-1}\).
We are precluding the possibility of having an \(h\)-segment after the segment \(\gamma\) suffers just one shear. Since our lobes are symmetric, the length of \(\mathbb{T}\) is unity, and the width \(w\ll 1\), we can find some small enough \(\eta\) such that \(l_{v}(\gamma)(L+\alpha)<1/2+\eta\). Taking a crude estimate of \(\eta=1/4\), we have an estimate of \(l_{v}(\gamma)<\frac{3}{4(\alpha+L)}\). In that case, it will be enough to ensure that:
\[1>\Big{(}\frac{2}{\alpha+L}+\frac{3}{2\alpha+L}+\frac{2}{\alpha(1-\frac{3}{2( \alpha+L)})}\Big{)}, \tag{25}\]
in which case we could satisfy the estimate in equation 24. The equation above has a solution set of all shear parameters \(\alpha>\alpha_{1}=6.23\).
In case we have to satisfy the set of equations 11 to 13, by arguments similar to ones used above, we aim to ensure:
\[l_{v}(I_{1})\geq\frac{\delta l_{v}(\gamma)}{\alpha+L}, \tag{26}\]
\[l_{v}(I_{3})\geq\frac{\delta l_{v}(\gamma)}{\alpha+L}, \tag{27}\]
\[l_{v}(I_{2})\geq\frac{2\delta l_{v}(\gamma)}{\alpha}. \tag{28}\]
It is enough to ensure the following:
\[l_{v}(\gamma)=\sum_{i=1}^{3}l_{v}(I_{i})\geq l_{v}(\gamma)\Big{(}\frac{2}{ \alpha+L}+\frac{2}{\alpha}\Big{)}, \tag{29}\]
i.e.
\[1>\frac{2}{\alpha+L}+\frac{2}{\alpha}. \tag{30}\]
The above is also ensured for all \(\alpha>\alpha_{2}=4.13\).
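Both thresholds can be reproduced numerically (a check added here; plain bisection, no external libraries), solving equations (25) and (30) as equalities with \(L(\alpha)=-\frac{\alpha}{2}+\sqrt{(\frac{\alpha}{2})^{2}-1}\):

```python
import math

def L(a):
    return -a / 2 + math.sqrt((a / 2) ** 2 - 1)

def g1(a):  # 1 minus the r.h.s. of Eq. (25)
    Lv = L(a)
    return 1 - (2 / (a + Lv) + 3 / (2 * a + Lv)
                + 2 / (a * (1 - 3 / (2 * (a + Lv)))))

def g2(a):  # 1 minus the r.h.s. of Eq. (30)
    return 1 - (2 / (a + L(a)) + 2 / a)

def bisect(g, lo, hi, tol=1e-10):
    while hi - lo > tol:             # g is increasing; g(lo) < 0 < g(hi)
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

print(bisect(g1, 3.0, 20.0))  # ~ 6.23  (alpha_1)
print(bisect(g2, 2.5, 20.0))  # ~ 4.13  (alpha_2)
```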
Thus combining the two cases above, we get an optimal constant of \(6.23\), and ergodicity and the Bernoulli property are established for all \(\alpha>\alpha_{0}=6.23\).
Acknowledgements:
The author is thankful to Boris Hasselblatt and Curtis Heberle for discussions on this problem, and also to Feliks Przytycki for useful feedback on this question. The author was supported as a PhD student at the University of Missouri at the time of writing of this manuscript.
|
2308.06956 | Modular System Synthesis | This paper describes a way to improve the scalability of program synthesis by
exploiting modularity: larger programs are synthesized from smaller programs.
The key issue is to make each "larger-created-from-smaller" synthesis
sub-problem be of a similar nature, so that the kind of synthesis sub-problem
that needs to be solved--and the size of each search space--has roughly the
same character at each level. This work holds promise for creating
program-synthesis tools that have far greater capabilities than currently
available tools, and opens new avenues for synthesis research: how synthesis
tools should support modular system design, and how synthesis applications can
best exploit such capabilities. | Kanghee Park, Keith J. C. Johnson, Loris D'Antoni, Thomas Reps | 2023-08-14T06:20:32Z | http://arxiv.org/abs/2308.06956v1 | # Modular System Synthesis
###### Abstract
This paper describes a way to improve the scalability of program synthesis by exploiting _modularity_: larger programs are synthesized from smaller programs. The key issue is to make each "larger-created-from-smaller" synthesis sub-problem be of a similar nature, so that the kind of synthesis sub-problem that needs to be solved--and the size of each search space--has roughly the same character at each level. This work holds promise for creating program-synthesis tools that have far greater capabilities than currently available tools, and opens new avenues for synthesis research: how synthesis tools should support modular system design, and how synthesis applications can best exploit such capabilities.
## I Introduction
In program synthesis, the goal is to automatically (or semi-automatically) create programs that match high-level intents provided by a user--e.g., logical specifications or input-output examples. To date, however, synthesis tools cannot contend with large programs because they require synthesizing (or at least reasoning about) a program in its entirety.
The obvious direction is to try to exploit _compositionality_ and synthesize larger programs by having them invoke other (already synthesized) programs. Consider for example the problem of writing a program for a ticket-vendor application that can, among other things, issue and reserve tickets. Building such a system requires creating modules for various data structures--perhaps a stack and queue--and using these modules in a top-level module that processes ticket requests. It is natural to ask whether such modules can be synthesized separately--i.e., in a compositional fashion.
The fundamental question is
Can one address the scalability problem of program synthesis by exploiting compositionality, so that (i) larger programs are synthesized from smaller programs, and (ii) each "larger-created-from-smaller" synthesis sub-problem is of a similar nature, so that the essence of each sub-problem (and the size of each search space) has roughly the same character?
A solution to this question is surprisingly tricky to envisage. Most existing synthesis approaches require having a concrete semantics or implementation in hand when reasoning about modules, components, APIs, etc. [5, 18, 20], and such synthesis tools end up reasoning about the entire program all the way down to its lowest-level components. Not only is this approach in fundamental opposition to the "similar-nature/similar-size" principle articulated above, it makes synthesis increasingly hard as more modules are considered.
Instead, when code is synthesized for some module \(M\), all reasoning about lower-level modules \(\{M_{i}\}\) on which \(M\)_directly_ depends should be carried out in a way that is _agnostic_ about the implementations of \(\{M_{i}\}\). This observation leads us to pose two related challenges: (_i_) How can one carry out program synthesis without having in hand details about the implementations of lower-level modules? (_ii_) How can one ensure that each synthesis problem results in code that is independent of the implementations of lower-level modules?
In this paper, we present the case for the following thesis:
Program synthesis can scale using modular system design.
Modular system design is one of the most important concepts in designing software. A system should be organized in a layered fashion, where information hiding is used to hide implementation choices [16]. The _information-hiding_ principle intuitively states that each module exports an interface that does not reveal specific implementation choices used inside the module, and changing the module's implementation should not force any changes to be made to other modules.
Programmers practice modular system design, or at least aspire to it. In essence, our goal is to provide a level of automation for what good programmers do manually. Of course, we are not trying to automate everything. What is left in the hands of the programmer are architectural decisions and specifications of the intended behavior of individual modules. The programmer is responsible for the overall organization of the system's design, and must decide such issues as: What are the layers in the system? What are the implementation choices in a given layer (such as choices about data structures and data representations)? What operations are exposed in each layer, and what is the intended behavior of each operation?
We identify _two_ opportunities for providing automation for each module and, as a key contribution of this paper, we formally define these synthesis problems.
**Module-Implementation Synthesis.** Synthesis can be helpful in creating the implementations of the various functions in each module from some specifications. The key difference from traditional synthesis problems is that implementation details of "lower" modules are not available. Instead, one only has access to _implementation-agnostic specifications_ of the semantics of such modules.
**Module-Specification Synthesis.** Because modules can only expose their semantics to other modules in a way that does not reveal their implementation details, it can be challenging
to come up with such semantic definitions. We propose to automate the creation of such implementation-agnostic semantic definitions using synthesis, namely, _synthesis of formulas_.
Note the role of the second kind of synthesis problem: its results provide part of the specification when one moves on to the task of synthesizing the implementation of functions in the next module. By analogy with the Paul Simon lyric "one man's ceiling is another man's floor" [19], we have "one module's semantics is another module's primitives."
We call this approach _modular system synthesis_ (MoSS). The visibility restrictions of information hiding provide the key for MoSS to achieve the objective of making synthesis scalable via "similar-nature/similar-size" sub-problems: both of our synthesis problems concern a single module of the system, and a single module's implementation only. By concealing the implementation of lower-level modules, MoSS ensures that the formula representing the semantics of these layers remains independent of the size of the "accumulated" system as we move to higher-level layers. Moreover, MoSS retains the usual benefit of modular system design, namely, it results in software that (usually) can be readily adapted--in this context, re-synthesized--as requirements change.
This paper contributes a framework that solidifies the concept of contract-based design--which abstracts components or sub-systems based on their interfaces--in the context of program synthesis. Notably, the study of interface compatibility and composition has not been extensively explored in the context of program synthesis, opening up many opportunities for future developments. Specifically, using the aforementioned ticket-vending application as an example (§II), this paper (i) defines modular system synthesis (§III); (ii) defines the two kinds of synthesis problems that arise in MoSS (§IV); and (iii) describes a proof-of-concept system, called MoSSKit, that achieves these goals (§V).
MoSSKit is based on two existing program-synthesis techniques: JLibSketch [14], a program-sketching tool that supports algebraic specifications, and spyro [15], a tool for synthesizing precise specifications from a given codebase. We used MoSSKit to carry out case studies based on two-layer modular synthesis problems from Mariano et al. [14], which demonstrated that concealing lower-level components can be advantageous in reducing the complexity of the synthesis problem. Expanding upon their work, our case study in §V-B further explored scenarios involving multiple layers. MoSS exhibits even better scalability compared to scenarios where executable semantics for all lower layers are exposed. A further case study based on Mariano et al. in §V-D also highlights the challenges of writing correct specifications. Our framework and the act of performing synthesis for both the implementations and specifications of the modules unveiled bugs in the modules synthesized by Mariano et al. and in the modules' specifications, which they manually wrote.
§VI discusses related work. §VII concludes.
## II Illustrative Example
We present an experiment that illustrates the various aspects of MoSS. The problem to be solved is as follows: Synthesize a simple ticket-vendor application that supports the operations prepSales, resTicket, issueTicket, soldOut, numTicketsRem, and numWaiting. (To simplify matters, we assume it is not necessary to cancel a reservation.)
### _A Modular TicketVendor Implementation_
We decompose the system into three modules (Fig. 1):
**Module 3:** The TicketVendor module uses a Queue of reservations to implement the aforementioned operations.
**Module 2:** The Queue module implements the operations emptyQ, enq, front, deq, sizeQ, and isEmptyQ. In our setting, a Queue is implemented using two stacks [12].1
Footnote 1: The invariant is that the second Stack holds a prefix of the Queue’s front elements, with the top element of the second Stack being the Queue’s front-most element. The first Stack holds the Queue’s back elements—with the top element of the first Stack being the Queue’s back-most element.
**Module 1:** The Stack module implements the operations emptyS, push, top, pop, sizeS, and isEmptyS. In our setting, a Stack is implemented using linked-list primitives of the programming language.
Moreover, the implementation of each module is to abide by the principle of information hiding: (_i_) The TicketVendor module can use operations exposed by Queue, but their actual implementations are hidden in Module 2. (_ii_) The Queue module can use operations exposed by Stack, but their actual implementations are hidden in Module 1.
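For concreteness, the following is a minimal Python sketch of the kind of Queue implementation that Module 2 is meant to produce (our illustration, not MoSSKit's synthesized output; method names mirror the operations listed above). Note that the Queue body manipulates the two stacks only through the Stack interface, in keeping with information hiding:

```python
class Stack:
    """Module 1: a stack over the language's list primitives."""
    def __init__(self): self._xs = []
    def push(self, v): self._xs.append(v)
    def pop(self): self._xs.pop()
    def top(self): return self._xs[-1]
    def size(self): return len(self._xs)
    def is_empty(self): return not self._xs

class Queue:
    """Module 2: a queue built from two Stacks (back elements / front elements)."""
    def __init__(self):
        self._in, self._out = Stack(), Stack()

    def enq(self, v):
        self._in.push(v)

    def _shift(self):
        # Re-establish the invariant of footnote 1: when _out is empty,
        # move _in over, reversing the order so _out's top is the front.
        if self._out.is_empty():
            while not self._in.is_empty():
                self._out.push(self._in.top())
                self._in.pop()

    def front(self):
        self._shift()
        return self._out.top()

    def deq(self):
        self._shift()
        self._out.pop()

    def size(self):
        return self._in.size() + self._out.size()

    def is_empty(self):
        return self._in.is_empty() and self._out.is_empty()
```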
### _The Input of Modular TicketVendor Synthesis_
A MoSSKit user supplies the following information:
_Architectural-design choices_:
* The decomposition of the problem into TicketVendor, Queue, and Stack modules (gray boxes in Fig. 1).
* Which operations are to be exposed by each module, denoted by \(\mathcal{P}[module]\)--e.g., in Fig. 1, the Queue module exposes \(\mathcal{P}[\texttt{Queue}]\), which contains enq and deq operations, but not push and pop operations on the underlying stacks.
_Data-structure/data-representation choices_:
**Module 3:** TicketVendor uses a Queue.
**Module 2:** A Queue is implemented using two Stacks.
**Module 1:** A Stack is implemented using a linked list.
These choices are shown by the green boxes underneath each module in Fig. 1. For example, the Queue module is built on top of the Stack module. However, only the Stack interface--i.e., the function symbols in \(\mathcal{P}[\texttt{Stack}]\) and its (potentially synthesized) implementation-agnostic specification \(\varphi^{\texttt{Stack}}_{\texttt{sem}}\)--is accessible by the Queue module.
_Specifications of the module-specific synthesis problems_:
**Module 3:** Specifications of the behaviors of prepSales, resTicket, issueTicket, soldOut, numTicketsRem, and numWaiting in terms of the exposed Queue operations (and possibly other TicketVendor operations). For example, the implementation-specific specifications for the
TicketVendor module, denoted by the yellow box labeled \(\varphi^{\texttt{TicketVendor}}_{\textit{imp}}\) in Fig. 1, might constrain issueTicket to dequeue a buyer from the underlying Queue module, but only if soldOut (a TicketVendor operation) is false.
**Module 2:** Specifications of the behaviors of the Queue operations in terms of the exposed Stack operations (and possibly other Queue operations). For example, the implementation-specific specification for the Queue module (\(\varphi^{\texttt{Queue}}_{\textit{imp}}\)), shown in Fig. 1, contains, among others, constraints that state that (_i_) if the second stack \(st_{out}\) is empty, so is the first stack \(st_{in}\), and (_ii_) enqueuing 1 on an empty queue and then retrieving the front of the queue yields 1.
**Module 1:** Specifications of the behaviors of the Stack operations in terms of the programming language's linked-list operations (and possibly other Stack operations). For example, the implementation-specific specification of the Stack module (\(\varphi^{\texttt{Stack}}_{\textit{imp}}\)) might specify that push adds an element on the front of the stack's underlying linked list.
A user must also specify a search space of possible implementations. In MoSSKit, this is done using a sketch file.
### _The Output of Modular TicketVendor Synthesis_
Using the MoSS framework, we synthesize three module implementations: the TicketVendor module implementation, which satisfies \(\varphi^{\texttt{TicketVendor}}_{\textit{imp}}\) (and uses Queue); the Queue module implementation, which satisfies \(\varphi^{\texttt{Queue}}_{\textit{imp}}\) (and uses Stack); and the Stack module implementation, which satisfies \(\varphi^{\texttt{Stack}}_{\textit{imp}}\) (and uses lists). However, to synthesize the TicketVendor module implementation, we need an _implementation-agnostic specification_ of Queue, denoted by \(\varphi^{\texttt{Queue}}_{\textit{sem}}\). The same can be said for the Queue module implementation, for which we need an implementation-agnostic specification of Stack, denoted by \(\varphi^{\texttt{Stack}}_{\textit{sem}}\).2
Footnote 2: Technically, List is part of the programming language; however, so that all sub-problems have the same form, we assume—as shown in Fig. 1—that we also have available an implementation-agnostic specification of List, denoted by \(\varphi^{\texttt{List}}_{\textit{sem}}\). In our evaluation, we synthesize \(\varphi^{\texttt{List}}_{\textit{sem}}\) automatically.
The user could write \(\varphi^{\texttt{Queue}}_{\textit{sem}}\) and \(\varphi^{\texttt{Stack}}_{\textit{sem}}\) manually, but it is more convenient to synthesize these specifications from the Queue and Stack module implementations, respectively. The MoSS methodology is to start with the bottom-most module and work upward, alternately applying two synthesis procedures: first synthesizing the implementation of a module \(M\) and then synthesizing \(M\)'s implementation-agnostic specification \(\varphi^{M}_{\textit{sem}}\), which gets exposed to the next higher module.
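Schematically, the methodology is a single bottom-up loop that alternates the two synthesis primitives defined in §IV. The Java sketch below is a stand-in for MoSSKit's actual machinery: all type names and the two synthesize* functions are hypothetical placeholders, included only to make the control flow explicit.

```java
import java.util.List;

// Schematic bottom-up MoSS driver. All names are hypothetical stand-ins:
// synthesizeImpl corresponds to implementation synthesis (Def. 4) and
// synthesizeSpec to specification synthesis (Def. 5).
final class MossDriver {
    record Spec(String text) {}
    record Impl(String code) {}
    record Module(String name, String searchSpace, Spec phiImp, String propertyGrammar) {}

    static Impl synthesizeImpl(String space, Spec specBelow, Spec phiImp) {
        return new Impl("program in " + space + " satisfying " + phiImp.text());
    }
    static Spec synthesizeSpec(Impl impl, String grammar) {
        return new Spec("best properties of " + impl.code() + ", drawn from " + grammar);
    }

    // modules are listed bottom-up; langSpec plays the role of phi_sem^List.
    static Spec synthesizeSystem(List<Module> modules, Spec langSpec) {
        Spec specBelow = langSpec;
        for (Module m : modules) {
            Impl impl = synthesizeImpl(m.searchSpace(), specBelow, m.phiImp());
            specBelow = synthesizeSpec(impl, m.propertyGrammar());  // exposed upward
        }
        return specBelow;  // spec of the topmost module, if a client needs it
    }
}
```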
For the modular TicketVendor-synthesis problem, we start with Stack, the bottommost module, and synthesize a Stack module implementation--a set of \(\mathcal{P}[\texttt{List}]\) programs--that satisfies the implementation-specific specification \(\varphi^{\texttt{Stack}}_{\texttt{imp}}\). (In MoSSKit, this step is done using program sketching and the tool JLibSketch[14].) This step is depicted in Fig. 1 as the Implementation Synthesis problem in the Stack module. We then switch to the Specification Synthesis problem for Stack, and synthesize \(\varphi^{\texttt{Stack}}_{\texttt{sem}}\), an implementation-agnostic specification of Stack. (In MoSSKit, this step is done by providing a grammar of possible properties and by using the tool spyro[15].) For the Stack module, the resultant \(\varphi^{\texttt{Stack}}_{\texttt{sem}}\) is the conjunction of the equalities shown in the Stack module in Fig. 1.
Using \(\varphi^{\texttt{Stack}}_{\texttt{sem}}\), together with the implementation-specific specification \(\varphi^{\texttt{Queue}}_{\textit{imp}}\), we now synthesize the Queue module implementation--a set of \(\mathcal{P}[\texttt{Stack}]\) programs--and the implementation-agnostic specification \(\varphi^{\texttt{Queue}}_{\texttt{sem}}\) via the same two-step process.
Finally, using \(\varphi^{\texttt{Queue}}_{\textit{sem}}\) and the implementation-specific specification \(\varphi^{\texttt{TicketVendor}}_{\textit{imp}}\), we synthesize the TicketVendor module implementation. (If needed by a further client, we would then synthesize the implementation-agnostic specification \(\varphi^{\texttt{TicketVendor}}_{\textit{sem}}\).) Thus, the last output of the synthesis procedure, shown in Fig. 1, consists of implementations of Stack, Queue, and TicketVendor, and the implementation-agnostic specifications \(\varphi^{\texttt{Stack}}_{\textit{sem}}\) and \(\varphi^{\texttt{Queue}}_{\textit{sem}}\).
### _Benefits of Modular System Synthesis_
At some point, we might decide to modify the implementation of the Queue module to use the linked-list primitives provided by the language directly (shown in Fig. 2). Information hiding allows us to do so in a compartmentalized way--i.e., by changing only the Queue module. Importantly, the module's interface, composed of the function
Fig. 1: Organization of the modular TicketVendor synthesis problem: user-supplied inputs are shown in solid boxes; synthesized outputs are shown in dashed boxes. On the right, the Queue module’s specifications and implementation are expanded; the other modules would have similar details.
symbols in \(\mathcal{P}[\texttt{Queue}]\) and its implementation-agnostic specification \(\varphi^{\texttt{Queue}}_{\texttt{sem}}\), does not change when the implementation of the Queue module changes. Because this interface is what the TicketVendor module was synthesized with respect to, changes to the Queue implementation are not visible to TicketVendor.
## III Modular System Design
In this section, we formally define modular system design and the corresponding specification mechanisms. A system is organized in modules, and each module exports a module interface _MI_ and a specification \(\varphi^{\textit{MI}}_{\texttt{sem}}\) of the semantics of the module interface. Both _MI_ and \(\varphi^{\textit{MI}}_{\texttt{sem}}\) hide the module's implementation. A module's implementation can also have a set of private functions _PF_, which can only be used within the module. A program is constructed by stacking layers of such modules.3 For instance, the example in Fig. 1 has three modules: Stack, Queue, and TicketVendor. (None of those modules have private functions.)
Footnote 3: In general, the structure of the dependencies among layers can form a directed acyclic graph. However, to reduce notational clutter, throughout the paper we assume that the layers have a strict linear order.
In the following, we assume a programming language \(\mathcal{P}\) (e.g., C with its core libraries), and use \(\mathcal{P}[\textit{MI}]\) to denote \(\mathcal{P}\) extended with the functions exposed by module _MI_.
**Definition 1** (Modular System Design): _A system is **implemented modularly** if it is partitioned into disjoint sets of functions \(\textit{PF}_{1},\textit{M}_{1},\textit{PF}_{2},\textit{M}_{2},\ldots,\textit{PF}_{n},\textit{M}_{n}\), such that for each \(f\in\textit{PF}_{i}\cup\textit{M}_{i}\), \(f\) is implemented using \(\mathcal{P}[\textit{M}_{i-1}\cup\textit{PF}_{i}\cup\textit{M}_{i}]\)--i.e., \(f\) only uses operations in \(\mathcal{P}\), calls to functions in the interface exported from layer \(i{-}1\), calls to private functions of layer \(i\), and calls to functions in the interface exported from layer \(i\)._
To reduce notational clutter, we will ignore private functions, and only discuss the functions in module interfaces.
As we saw in §II, we need to abide by the principle of _information hiding_--i.e., changing the implementations of any function in \(\textit{M}_{i-1}\) should not require changing the implementations of functions in \(\textit{M}_{i}\). With this principle in mind, we now describe the different natures of the specification for the module implementation at a given layer \(i\) (§III-A) and the specification exposed to layer \(i+1\) (§III-B).
### _Implementation-specific Specifications_
When synthesizing specific implementations of the functions \(\textit{M}_{i}\) at layer \(i\), the specifications are allowed to use symbols in \(\mathcal{P}[\textit{M}_{i-1}\cup\textit{M}_{i}]\)--i.e., the specification can refer to the functions we are specifying and to the ones in the interface exported from the previous layer--as well as implementation-specific details from layer \(i\) (e.g., data-structure declarations).
**Definition 2**: _An **implementation-specific specification** for a set of functions \(\textit{M}_{i}\) at layer \(i\) is a predicate \(\varphi^{\textit{MI}_{i}}_{\texttt{imp}}\) that only uses symbols in \(\mathcal{P}[\textit{M}_{i-1}\cup\textit{M}_{i}]\)._
**Example 1**: _In the implementation-specific specification of Queue from Fig. 1, where Queue is implemented using two Stacks, one of the properties is as follows:_
\[\texttt{isEmptyQ}(q)\iff\texttt{isEmptyS}(q.st_{in})\wedge\texttt{isEmptyS}(q. st_{out}).\]
For the version from Fig. 2, where Queue is implemented using a List, the analogous property is
\[\texttt{isEmptyQ}(q)\iff\texttt{isEmptyL}(q.l).\]
A specification might also contain a set of examples, e.g., \(\texttt{front}(\texttt{enq}(\texttt{emptyQ},1))=1\) and \(\texttt{front}(\texttt{enq}(\texttt{enq}(\texttt{emptyQ},1),2))=1\).
### _Implementation-agnostic Specifications_
While implementation-specific details are needed to converge on an implementation with which the programmer is happy, exposing the specification of \(\textit{M}_{i}\) at layer \(i+1\) must abide by the principle of information hiding: the exposed specification cannot involve function symbols in \(\mathcal{P}[\textit{M}_{i-1}\cup\textit{M}_{i}]\), but only ones in \(\mathcal{P}[\textit{M}_{i}]\).
**Definition 3**: _An **implementation-agnostic specification** for a set of functions \(\textit{M}_{i}\) at layer \(i\) is a predicate \(\varphi^{\textit{M}_{i}}_{\texttt{sem}}\) that only uses symbols in \(\mathcal{P}[\textit{M}_{i}]\)._
**Example 2**: _Because of the vocabulary restrictions imposed by Def. 3, it is natural for implementation-agnostic specifications to take the form of algebraic specifications [7, 9, 10, 13, 23]. For instance, for the Queue module, the conjunction of the following equalities is an implementation-agnostic specification \(\varphi^{\texttt{Queue}}_{\texttt{sem}}\) for Queue:_
\[\begin{array}{ll}
\texttt{isEmptyQ}(\texttt{emptyQ})=\top & \texttt{isEmptyQ}(\texttt{enq}(q,x))=\bot\\
\texttt{sizeQ}(\texttt{emptyQ})=0 & \texttt{sizeQ}(\texttt{enq}(q,x))=\texttt{sizeQ}(q)+1\\
\multicolumn{2}{l}{\texttt{front}(\texttt{enq}(q,x))=\texttt{ite}(\texttt{isEmptyQ}(q),x,\texttt{front}(q))}\\
\multicolumn{2}{l}{\texttt{deq}(\texttt{enq}(q,x))=\texttt{ite}(\texttt{isEmptyQ}(q),q,\texttt{enq}(\texttt{deq}(q),x))}
\end{array}\tag{1}\]
Note that Eq. (1) serves as \(\varphi^{\texttt{Queue}}_{\texttt{sem}}\) both for the version of Queue from Fig. 1, where Queue is implemented using two Stacks, and for the version of Queue from Fig. 2, where Queue is implemented using a List.
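Because Eq. (1) mentions only Queue operations, it can be tested against any concrete Queue. The following self-contained Java check (our own illustration, not MoSSKit output) exercises the equalities on a naive list-backed queue for a handful of inputs; run it with assertions enabled (java -ea).

```java
// Sanity-check (ours, not MoSSKit output): Eq. (1) is implementation-agnostic,
// so any correct Queue must pass these checks.
public final class AlgebraicQueueLaws {
    record Q(java.util.List<Integer> xs) {           // naive list-backed queue
        static Q emptyQ() { return new Q(java.util.List.of()); }
        Q enq(int x) {
            var ys = new java.util.ArrayList<Integer>(xs);
            ys.add(x);
            return new Q(java.util.List.copyOf(ys));
        }
        boolean isEmptyQ() { return xs.isEmpty(); }
        int sizeQ()        { return xs.size(); }
        int front()        { return xs.get(0); }
        Q deq()            { return new Q(xs.subList(1, xs.size())); }
    }

    public static void main(String[] args) {
        Q e = Q.emptyQ();
        assert e.isEmptyQ();                          // isEmptyQ(emptyQ) = true
        assert e.sizeQ() == 0;                        // sizeQ(emptyQ) = 0
        for (int x = 0; x < 5; x++) {
            for (Q q : new Q[]{ e, e.enq(7), e.enq(7).enq(8) }) {
                assert !q.enq(x).isEmptyQ();          // isEmptyQ(enq(q,x)) = false
                assert q.enq(x).sizeQ() == q.sizeQ() + 1;
                // front(enq(q,x)) = ite(isEmptyQ(q), x, front(q))
                assert q.enq(x).front() == (q.isEmptyQ() ? x : q.front());
                // deq(enq(q,x)) = ite(isEmptyQ(q), q, enq(deq(q),x))
                assert q.enq(x).deq().equals(q.isEmptyQ() ? q : q.deq().enq(x));
            }
        }
        System.out.println("Eq. (1) holds on all sampled inputs");
    }
}
```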
## IV Synthesis in Modular System Synthesis
In this section, we define the _implementation-synthesis_ (§IV-A) and _specification-synthesis_ (§IV-B) problems that enable our scheme for modular system synthesis.
Fig. 2: Alternative implementation of the Queue module using list primitives instead of two stacks. \(\mathcal{P}[\texttt{Queue}]\) and \(\varphi^{\texttt{Queue}}_{\texttt{sem}}\) are the same as in Fig. 1.
### _Synthesis of Implementations_
The obvious place in which synthesis can be helpful is in synthesizing the implementations of the various functions at each layer from their implementation-specific specifications. For example, in Fig. 1, an implementation of Queue (the function \(\mathtt{enq}\) is shown in the second box on the right) is synthesized from the implementation-agnostic specification \(\varphi_{\mathit{sem}}^{\mathtt{Stack}}\) of \(\mathtt{Stack}\), _and_ an implementation-specific specification \(\varphi_{\mathit{imp}}^{\mathtt{Queue}}\) that is allowed to talk about how the two Stacks used to implement a Queue are manipulated (e.g., \(\mathtt{isEmptyS}(st_{out})\rightarrow\mathtt{isEmptyS}(st_{in})\)).
**Definition 4** (Implementation synthesis): _For module interface \(\mathit{M}_{i}\), the **implementation-synthesis** problem is a triple \((S_{i},\varphi_{\mathit{sem}}^{\mathit{M}_{i-1}},\varphi_{\mathit{imp}}^{\mathit{M}_{i}})\), where_
* \(S_{i}\) _is the set of possible implementations we can use for_ \(\mathit{M}_{i}\) _(every program in_ \(S_{i}\) _uses only symbols in_ \(\mathcal{P}[\mathit{M}_{i-1}\cup\mathit{M}_{i}]\)_)._
* \(\varphi_{\mathit{sem}}^{\mathit{M}_{i-1}}\) _is an implementation-agnostic specification of the module-interface functions in_ \(\mathit{M}_{i-1}\)_._
* \(\varphi_{\mathit{imp}}^{\mathit{M}_{i}}\) _is an implementation-specific specification that uses only symbols in_ \(\mathcal{P}[\mathit{M}_{i-1}\cup\mathit{M}_{i}]\)_._
A solution to the implementation-synthesis problem is an implementation of \(\mathit{M}_{i}\) in \(S_{i}\) that satisfies \(\varphi_{\mathit{imp}}^{\mathit{M}_{i}}\).
This particular form of synthesis where one draws a program from a search space to match a specification is fairly standard in the literature. However, we observe that a particular aspect of modular system design makes most synthesis approaches inadequate--i.e., the specification \(\varphi_{\mathit{sem}}^{\mathit{M}_{i-1}}\) can talk about functions in \(\mathit{M}_{i-1}\) only in an implementation-agnostic way. For example, when synthesizing functions in Queue, we do not have direct access to a stack implementation--i.e., we cannot actually execute the implementation. Instead, we have access to the semantics of Stack through implementation-agnostic properties such as \(\mathtt{isEmptyS}(\mathtt{push}(st,x))=\bot\).
We are aware of only one tool, JLibSketch, that can perform synthesis with algebraic specifications [14], and we use it in our evaluation. In JLibSketch, one provides \(S_{i}\) as a program sketch (i.e., a program with integer holes that need to be synthesized), \(\varphi_{\mathit{sem}}^{\mathit{M}_{i-1}}\) as a set of rewrite rules over the functions in \(\mathit{M}_{i-1}\), and \(\varphi_{\mathit{imp}}^{\mathit{M}_{i}}\) as a set of assertions.
### _Synthesis of Implementation-agnostic Specifications_
Because the implementation of layer \(i\)-\(1\) is hidden when performing synthesis at layer \(i\), the user has to somehow come up with implementation-agnostic specifications like the ones shown in Fig. 1. Our next observation is that such specifications can also be synthesized! With this observation, modular system design becomes a fairly automatic business where the programmer mostly has to decide how to structure modules and provide implementation-specific specifications and search spaces (typically as regular-tree grammars [3]).
In Fig. 1, the implementation-agnostic specification \(\varphi_{\mathit{sem}}^{\mathtt{Queue}}\) of Queue is synthesized from the Queue implementation. (The same \(\varphi_{\mathit{sem}}^{\mathtt{Queue}}\), or one equivalent to it, is synthesized from the alternative Queue implementation of Fig. 2.)
**Definition 5** (Specification synthesis): _For module interface \(\mathit{M}_{i}\), a **specification-synthesis problem** is a pair \((F_{i},\Phi_{i})\) where
* \(F_{i}\) is a set of programs, written in \(\mathcal{P}[\mathit{M}_{i-1}\cup\mathit{M}_{i}]\), that is a concrete implementation of \(\mathit{M}_{i}\).
* \(\Phi_{i}\) is the set of possible properties we can use for \(\varphi_{\mathit{sem}}^{\mathit{M}_{i}}\) (every property in \(\Phi_{i}\) uses only symbols in \(\mathcal{P}[\mathit{M}_{i}]\)). (Typically, \(\Phi_{i}\) is given as a regular-tree grammar for a fragment of logic in which terms can only use symbols in \(\mathcal{P}[\mathit{M}_{i}]\).) A solution to the specification-synthesis problem is a set of properties \(\varphi_{\mathit{sem}}^{\mathit{M}_{i}}\subseteq\Phi_{i}\) such that every \(\alpha\in\varphi_{\mathit{sem}}^{\mathit{M}_{i}}\) satisfies the following two conditions:
**Soundness:** The implementation \(F_{i}\) satisfies \(\alpha\).
**Precision:** There is no property \(\alpha^{\prime}\in\Phi_{i}\) that implies \(\alpha\) and such that the implementation \(F_{i}\) satisfies \(\alpha^{\prime}\).
In general, there might not be just one answer to this synthesis problem because there could be multiple ways to build the set of properties \(\varphi_{\mathit{sem}}^{\mathit{M}_{i}}\). Furthermore, it can be the case that there are infinitely many properties in \(\Phi_{i}\) that are sound, precise, and mutually incomparable. While in this paper we do not worry about these details, the tool we use in our evaluation, spyro, is always guaranteed to find a maximal set of properties in \(\Phi_{i}\) whenever such a set is finite (spyro uses a regular-tree grammar to describe the set of possible properties \(\Phi_{i}\), but requires such a set to be finite). In practice, even when the set is infinite, one can build tools that find a "good" set of properties and stop without trying to find an exhaustive set.
_Discussion._ When the goal is to build a system structured in a modular fashion, modular system synthesis enables defining "small" synthesis problems of similar nature that concern only a single module's implementation.
While implementation-agnostic specifications can be synthesized via the synthesis problem defined in Def. 5, one should be aware that there is additional flexibility to be gained if one is willing to write implementation-agnostic specifications manually. In particular, if all of the implementation-agnostic specifications are synthesized, then it is necessary to create the system _bottom-up_, synthesizing the module implementations in the order \(\mathit{M}_{1}\), \(\mathit{M}_{2}\), \(\ldots\), \(\mathit{M}_{n}\) (interleaved with the synthesis of \(\varphi_{\mathit{sem}}^{\mathit{M}_{1}}\), \(\varphi_{\mathit{sem}}^{\mathit{M}_{2}}\), \(\ldots\), \(\varphi_{\mathit{sem}}^{\mathit{M}_{n}}\)). In contrast, when the user is willing to write the implementation-agnostic specifications manually (in addition to the implementation-specific specifications \(\{\varphi_{\mathit{imp}}^{\mathit{M}_{i}}\}\)), then the module implementations for \(\mathit{M}_{1}\), \(\mathit{M}_{2}\), \(\ldots\), \(\mathit{M}_{n}\) can be synthesized in any order.
## V Implementation and Case-Study Evaluation
We carried out case studies of MoSS for the simple three-layer system that has been used as a running example and for some of the modular-synthesis problems presented in the paper that introduced JLibSketch[14].
### _Implementation_
Our implementation, called MoSSKit, uses JLibSketch [14] to synthesize the implementation code for each layer \(k\) (from the implementation-specific specification for layer \(k\)) and spyro [15] to synthesize the implementation-agnostic specification for use at layer \(k+1\).
JLibSketch is a program-synthesis tool for Java that allows libraries to be described with collections of algebraic specifications. Similar to its popular C counterpart sketch[22], JLibSketch allows one to write programs with holes and assertions, and then tries to find integer values for the holes that cause all assertions to hold. Each specification is a rewrite rule of the form _pattern \(\Rightarrow\) result_. For instance, one of the rewrite rules in the specification of a stack could be \(\texttt{pop}(\texttt{push}(st,k))\Rightarrow st\). To prevent infinite rewrite loops, a set of rewrite rules provided to JLibSketch must not form a cycle. For instance, the rule \(a+b\Rightarrow b+a\) is not allowed. The synthesis problem that JLibSketch addresses is to find a program that is correct for any program input, for any library implementation that satisfies the algebraic specifications.
spyro addresses the problem of synthesizing specifications automatically, given an implementation. spyro takes as input (_i_) a set of function definitions \(\Sigma\), and (_ii_) a domain-specific language \(\mathcal{L}\)--in the form of a grammar--in which the extracted properties are to be expressed. Properties that are expressible in \(\mathcal{L}\) are called _\(\mathcal{L}\)-properties_. spyro outputs a set of \(\mathcal{L}\)-properties \(\{\varphi_{i}\}\) that describe the behavior of \(\Sigma\). Moreover, each of the \(\varphi_{i}\) is a _best_\(\mathcal{L}\)-property for \(\Sigma\): there is no other \(\mathcal{L}\)-property for \(\Sigma\) that is strictly more precise than \(\varphi_{i}\). Furthermore, the set \(\{\varphi_{i}\}\) is _exhaustive_: no more \(\mathcal{L}\)-properties can be added to it to make the conjunction \(\bigwedge_{i}\varphi_{i}\) more precise. spyro uses sketch as the underlying program synthesizer--i.e., it generates a number of synthesis problems in the form of sketch files and uses sketch to solve such problems.
Although spyro is built on top of sketch (instead of JLibSketch), in our case study we manually implemented the term-rewriting approach used by the JLibSketch solver in the sketch files used by spyro to synthesize implementation-agnostic specifications that only depend on algebraic specifications of lower layers. That is, we replace every function call \(f\) appearing in a sketch file with a function \(normalize(f)\), where \(normalize\) is a procedure that applies the rewrite rules from the algebraic specification.
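To make this normalize step concrete, the following toy Java rewriter (our own sketch; MoSSKit performs the analogous rewriting inside generated sketch files) applies two of the Stack rewrite rules, \(\texttt{pop}(\texttt{push}(st,x))\Rightarrow st\) and \(\texttt{top}(\texttt{push}(st,x))\Rightarrow x\), bottom-up over a term tree.

```java
import java.util.ArrayList;
import java.util.List;

// Toy illustration of normalizing a term with algebraic rewrite rules,
// in the spirit of the normalize(f) transformation described above.
final class Normalize {
    record Term(String op, List<Term> args) {
        static Term of(String op, Term... args) { return new Term(op, List.of(args)); }
        @Override public String toString() { return args.isEmpty() ? op : op + args; }
    }

    // Apply pop(push(st,x)) => st and top(push(st,x)) => x, bottom-up.
    static Term normalize(Term t) {
        List<Term> as = new ArrayList<>();
        for (Term a : t.args()) as.add(normalize(a));      // normalize children first
        Term u = new Term(t.op(), List.copyOf(as));
        if (!u.args().isEmpty() && u.args().get(0).op().equals("push")) {
            Term push = u.args().get(0);
            if (u.op().equals("pop")) return push.args().get(0);  // the stack st
            if (u.op().equals("top")) return push.args().get(1);  // the element x
        }
        return u;
    }

    public static void main(String[] args) {
        // top(pop(push(push(emptyS, x), y)))  normalizes to  x
        Term t = Term.of("top", Term.of("pop",
                 Term.of("push", Term.of("push", Term.of("emptyS"), Term.of("x")), Term.of("y"))));
        System.out.println(t + "  ~~>  " + normalize(t));
    }
}
```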
MoSSKit inherits the limitations of JLibSketch and spyro--i.e., the synthesized implementations and specifications are sound up to a bound. Despite this limitation, the authors of JLibSketch and spyro have shown that these tools typically do not return unsound results in practice. §V-E provides a detailed discussion of the limitations of MoSS and MoSSKit.
### _Ticket-vendor Case Study_
Our first benchmark is the ticket-vending application described throughout the paper. Our goal is to synthesize the module implementations in Fig. 1 (all except the bottom-most List, which is provided by the programming language), as well as the specification that each module needs to expose to the next higher-level module.
When synthesizing specifications, due to the scalability limitations of spyro, we called spyro multiple times with different smaller grammars instead of providing one big grammar of all possible properties of each module. In each call to spyro, we provided a grammar in which we fixed a left-hand-side expression of an equality predicate, and asked spyro to search for a right-hand-side expression for the equality. We allowed the right-hand-side expression to contain a conditional where the guard can be selected from the outputs of Boolean operators in the module, their negation, or constants. For instance, Figures 3 and 4 illustrate two inputs provided to spyro to solve the specification-synthesis problem for List: (_i_) a program describing the implementation of List (Fig. 3), and (_ii_) a grammar describing the set of possible properties (Fig. 4).
Because we wanted to use the synthesized equalities as input to JLibSketch when synthesizing the implementation
Fig. 3: Implementation of snoc supplied to spyro. Returning a value from a function is done by storing the value into a reference parameter of the function.
Fig. 4: Grammar for the domain-specific language in which spyro is to express an extracted List property. The relation definition in lines 8-11 specifies that the variables snoc_out, l, v1, and v2 are related by snoc_out = snoc(cons(v1,l),v2). From the grammar (“generator”) in lines 12-20, spyro synthesizes best implementation-agnostic properties of the form GUARD \(\rightarrow\) snoc_out = \(L\) (implicitly conjoined with snoc_out = snoc(cons(v1,l),v2)). In this case, the only expression for GUARD that succeeds is \(\top\), and the property synthesized is snoc_out = cons(v1,snoc(l,v2)) (with the additional implicit conjunct snoc_out = snoc(cons(v1,l),v2)).
of the next higher-level module, we provided grammars of equalities that avoided generating cyclic rewrite rules. We addressed this issue by limiting the search space for the right-hand-side expression. The function symbols permitted in the right-hand-side expression are one of the functions in the left-hand-side expression, functions used in the implementation of a function in the left-hand-side expression, or constants. Also, the outermost function symbol of the left-hand side can only be applied to a strictly smaller term.
To illustrate some of the properties synthesized by spyro (including ones that are not shown in Fig. 1), the complete set of equalities in the implementation-agnostic specification \(\varphi_{\textit{sem}}^{\texttt{List}}\) synthesized by spyro is the following:
\[\begin{array}{ll}
\texttt{head}(\texttt{cons}(hd,tl))=hd & \texttt{isEmptyL}(\texttt{nil})=\top\\
\texttt{tail}(\texttt{cons}(hd,tl))=tl & \texttt{isEmptyL}(\texttt{cons}(hd,tl))=\bot\\
\texttt{sizeL}(\texttt{nil})=0 & \texttt{snoc}(\texttt{nil},x)=\texttt{cons}(x,\texttt{nil})\\
\multicolumn{2}{l}{\texttt{sizeL}(\texttt{cons}(hd,tl))=\texttt{sizeL}(tl)+1}\\
\multicolumn{2}{l}{\texttt{snoc}(\texttt{cons}(hd,tl),x)=\texttt{cons}(hd,\texttt{snoc}(tl,x))}
\end{array}\]
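As a sanity check, these equalities translate directly into an executable cons-list. The Java rendering below is ours (Fig. 3 shows the sketch-language implementation of snoc that was actually supplied to spyro); its main method spot-checks several of the equalities.

```java
// Executable cons-list mirroring the synthesized List equalities
// (our own Java rendering; run with `java -ea ConsList.java`).
final class ConsList {
    record L(int hd, L tl) {}                    // null plays the role of nil

    static boolean isEmptyL(L l) { return l == null; }
    static int sizeL(L l)        { return l == null ? 0 : 1 + sizeL(l.tl()); }
    static int head(L l)         { return l.hd(); }
    static L tail(L l)           { return l.tl(); }
    static L cons(int x, L l)    { return new L(x, l); }
    static L snoc(L l, int x) {                  // append at the back, recursively
        return l == null ? cons(x, null) : cons(l.hd(), snoc(l.tl(), x));
    }

    public static void main(String[] args) {
        L l = cons(1, cons(2, null));            // the list [1, 2]
        assert sizeL(snoc(l, 3)) == sizeL(l) + 1;
        assert head(snoc(l, 3)) == 1;             // snoc preserves the head
        assert head(tail(tail(snoc(l, 3)))) == 3; // ... and appends at the back
        assert isEmptyL(null) && !isEmptyL(l);
    }
}
```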
When considering the cumulative time taken to synthesize the algebraic specification of each module, \(\texttt{spyro}\) took 41 seconds for \(\varphi_{\textit{sem}}^{\texttt{List}}\) (longest-taking property 7 seconds), 34 seconds for \(\varphi_{\textit{sem}}^{\texttt{Stack}}\) (longest-taking property 7 seconds), and 44 seconds for \(\varphi_{\textit{sem}}^{\texttt{Queue}}\) (longest-taking property 13 seconds).
We used JLibSketch to synthesize implementations of the modules. In addition to the implementation-agnostic specification of the module below the one we were trying to synthesize, we provided an implementation-specific specification of the module to be synthesized. For example, the \(\varphi_{\textit{imp}}^{\texttt{Stack}}\) specification involved JLibSketch code with 17 assertions, and the following examples are an excerpt from the \(\varphi_{\textit{imp}}^{\texttt{Stack}}\) specification (\(x\), \(y\), and \(z\) are universally quantified integers that are allowed to be in the range 0 to 10):
\[\begin{array}{ll}
\texttt{top}(\texttt{push}(\texttt{emptyS},x))=x & \texttt{sizeS}(\texttt{emptyS})=0\\
\texttt{top}(\texttt{push}(\texttt{push}(\texttt{emptyS},x),y))=y & \texttt{sizeS}(\texttt{push}(\texttt{emptyS},x))=1
\end{array}\]
Besides the assertions, we provided JLibSketch with a fairly complete sketch of the structure of the implementation: we provided loops and branching structures, and only asked JLibSketch to synthesize basic statements and expressions. For example, the sketch provided for the operation \(\texttt{enq}\) of module \(\texttt{Queue}=(st_{in}:\texttt{Stack},st_{out}:\texttt{Stack})\) is shown in Fig. 5. This sketch of \(\texttt{enq}\) of module Queue uses two Stacks: \(st_{in}\), which stores elements in the rear part of the queue, and \(st_{out}\), which stores elements in the front part of the queue. Stack \(st_{in}\) holds the rearmost element on top, and Stack \(st_{out}\) stores the frontmost element on top. To make the front operation more efficient, we decided to make sure that the frontmost element is always at the top of \(st_{out}\). This implementation decision is expressed as assertions in lines 5 and 15, constituting an implementation-specific specification \(\varphi_{\textit{imp}}^{\texttt{Queue}}\), shown in Fig. 1.
Afterward, based on the implementation synthesized by \(\texttt{JLibSketch}\), \(\texttt{spyro}\) was able to solve each Queue specification-synthesis problem within 40 seconds, yielding the following implementation-agnostic specification \(\varphi_{\textit{sem}}^{\texttt{Queue}}\).
\[\begin{array}{l}
\texttt{isEmptyQ}(\texttt{emptyQ})=\top\qquad\texttt{isEmptyQ}(\texttt{enq}(q,i))=\bot\\
\texttt{sizeQ}(\texttt{emptyQ})=0\qquad\texttt{sizeQ}(\texttt{enq}(q,i))=\texttt{sizeQ}(q)+1\\
\texttt{isEmptyQ}(q)\rightarrow\texttt{front}(\texttt{enq}(q,i))=i\\
\neg\texttt{isEmptyQ}(q)\rightarrow\texttt{front}(\texttt{enq}(q,i))=\texttt{front}(q)\\
\texttt{isEmptyQ}(q)\rightarrow\texttt{deq}(\texttt{enq}(q,i))=q\\
\neg\texttt{isEmptyQ}(q)\rightarrow\texttt{deq}(\texttt{enq}(q,i))=\texttt{enq}(\texttt{deq}(q),i)
\end{array}\]
A \(\texttt{TicketVendor}\) is implemented using a Queue, which stores the id numbers of clients who have reserved tickets. Each issued ticket contains the id of the buyer. The implementation-specific specification \(\varphi_{\textit{imp}}^{\texttt{TicketVendor}}\) consisted of \(\texttt{JLibSketch}\) code with 24 assertions, and contains multiple examples, such as the following (again, \(x\) and \(y\) are universally quantified integers that are allowed to be in the range 0 to 10):
\[\begin{array}{l}\texttt{numTicketsRem}(\texttt{prepSales}(2))=2\\ \texttt{numWaiting}(\texttt{prepSales}(2))=0\\ \texttt{numWaiting}(\texttt{resTicket}(\texttt{prepSales}(2),x))=1\\ \texttt{issueTicket}(\texttt{resTicket}(\texttt{prepSales}(2),x)).owner=x \end{array}\]
Again, we provided \(\texttt{JLibSketch}\) with a fairly complete sketch of the program structure, and \(\texttt{JLibSketch}\) was able to synthesize the implementations of all the TicketVendor functions within 10 seconds. For example, the function \(\texttt{prepSales}\) for \(\texttt{TicketVendor}=(num_{ticket}:\texttt{int},q_{waiting}:\texttt{Queue})\) was synthesized as \(\texttt{prepSales}(n:\texttt{int}):=(n,\texttt{emptyQ})\).
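For reference, the synthesized TicketVendor behaves like the following hand-written Java class. This is our own transliteration, consistent with the examples above; in particular, the exact meaning of soldOut (no tickets remaining) is our assumption.

```java
import java.util.ArrayDeque;

// A hand-written Java TicketVendor consistent with the examples above;
// issueTicket dequeues the frontmost buyer only if the sale is not sold out.
final class TicketVendorDemo {
    record Ticket(int owner) {}

    static final class TicketVendor {
        private int numTickets;                               // tickets still available
        private final ArrayDeque<Integer> waiting = new ArrayDeque<>();

        static TicketVendor prepSales(int n) {                // prepSales(n) = (n, emptyQ)
            TicketVendor tv = new TicketVendor();
            tv.numTickets = n;
            return tv;
        }
        void resTicket(int buyer) { waiting.addLast(buyer); } // reserve: enqueue buyer
        boolean soldOut()         { return numTickets == 0; } // our assumption
        Ticket issueTicket() {                                // dequeue only if !soldOut
            if (soldOut() || waiting.isEmpty()) throw new IllegalStateException();
            numTickets--;
            return new Ticket(waiting.removeFirst());
        }
        int numTicketsRem() { return numTickets; }
        int numWaiting()    { return waiting.size(); }
    }

    public static void main(String[] args) {                  // run with java -ea
        TicketVendor tv = TicketVendor.prepSales(2);
        assert tv.numTicketsRem() == 2 && tv.numWaiting() == 0;
        tv.resTicket(42);
        assert tv.numWaiting() == 1;
        assert tv.issueTicket().owner() == 42;                // ticket goes to buyer 42
    }
}
```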
We compared the time needed to synthesize each module from the algebraic specification of the previous module to the time needed to synthesize using the implementation of all previous modules. Synthesizing Stack from the specification \(\varphi_{\textit{sem}}^{\texttt{List}}\) took 3 seconds instead of the 2 seconds needed when the implementation of List was provided. Synthesizing Queue from the specification \(\varphi_{\textit{sem}}^{\texttt{Stack}}\) took 188
Fig. 5: JLibSketch sketch of \(\texttt{enq}\). Lines 5 and 15 assert the implementation-specific property \(\texttt{isEmptyS}(st_{out})\to\texttt{isEmptyS}(st_{in})\). JLibSketch generates an expression to fill in each occurrence of the generators, \(\texttt{genStack2}\) and \(\texttt{genGuard}\)—the reader can think of each of these generators as being grammars from which \(\texttt{JLibSketch}\) can pick an expression. For these generators, expressions can be variables or single function calls to functions of the appropriate type—e.g., \(\texttt{genStack2}\) can generate expressions such as \(\texttt{st}_{in}\), \(\texttt{st}_{out}\), \(\texttt{st}_{in}\).pop(), \(\texttt{st}_{out}\).pop(), etc.
seconds instead of the 799 seconds needed when the concrete implementations of Stack and List were provided. Synthesizing TicketVendor from the specification \(\varphi^{\texttt{Queue}}_{\texttt{sam}}\) took 7 seconds, but JLibSketch crashed when the concrete implementations of Queue, Stack and List were provided.
**Key finding:** This experiment shows that modular synthesis takes 1-5 minutes per module, whereas the time taken to synthesize a module from the underlying module implementations grows with the number of modules--to the point where synthesis is unsuccessful with existing tools.
As discussed in §II-D, we also synthesized an implementation of Queue that uses a List instead of two Stacks. The List holds the oldest element of the Queue at its head. The implementation-specific specification \(\varphi^{\texttt{Queue}}_{\textit{imp}}\) for this Queue-as-List implementation consisted of JLibSketch code with 19 assertions, including examples similar to those shown in Fig. 2. We used JLibSketch to verify whether the specification \(\varphi^{\texttt{Queue}}_{\texttt{sem}}\) still held true for the new implementation. Because it did (confirmation took \(<\)1 second), TicketVendor does not need to be changed to use the Queue-as-List implementation.
### _Case Studies from Mariano et al. [14]_
Our second set of benchmarks is collected from the paper that introduced synthesis from algebraic specifications via JLibSketch[14]. In that work, Mariano et al. used a number of benchmarks that involve two modules--e.g., synthesizing a backend cryptographic component for a tool that brings NuCypher to Apache Kafka, using ArrayList and HashMap as underlying modules. The goal of their paper was to show that in JLibSketch it was easier/faster to synthesize the module at layer 1 when the module at layer 0 was exposed through an algebraic specification (rather than a concrete implementation). The current implementation of MoSSKit does not support strings, so we used only the benchmarks for which the algebraic specifications for the layer-0 module (i) did not use string operations, and (ii) did not use auxiliary functions that were not in the signature of the module. In total, we considered four layer-0 modules: ArrayList, TreeSet, HashSet, and HashMap. Each JLibSketch benchmark consisted of (i) an algebraic specification of the layer-0 module (written by hand), (ii) a sketch-like specification of the layer-1 module, and (iii) a mock implementation of the layer-0 module--i.e., a simplified implementation that mimics the module's intended behavior (e.g., HashSet is implemented using an array). The mock is not needed by JLibSketch, but allowed Mariano et al. to compare synthesis-from-algebraic-specifications against synthesis-from-mocks [14, §5].
We used these items in a different manner from the JLibSketch experiments. From just the mock implementation of layer 0, we asked MoSSKit to synthesize a most-precise algebraic specification, which we compared with the algebraic specification manually written by Mariano et al. From that algebraic specification and the sketch-like specification of the layer-1 module, we asked MoSSKit to synthesize the implementation of layer 1. (The second step essentially replicated the algebraic-synthesis part of the JLibSketch experiments.)
For the layer-0 synthesis step of each benchmark, we synthesized algebraic specifications using grammars similar to the ones used in §V-B.
When considering the time taken to synthesize the entire algebraic specification of each module, spyro took 626 seconds for \(\varphi^{\texttt{ArrayList}}_{\texttt{sem}}\), 54 seconds for \(\varphi^{\texttt{HashSet}}_{\texttt{sem}}\), and 1,732 seconds for \(\varphi^{\texttt{HashMap}}_{\texttt{sem}}\). Because mock implementations are simplified versions of actual implementations, the mock implementation of TreeSet is identical to the mock implementation of HashSet--i.e., they both represent sets as arrays. Furthermore, the two implementations have the same algebraic specifications--i.e., \(\varphi^{\texttt{HashSet}}_{\texttt{sem}}=\varphi^{\texttt{TreeSet}}_{\texttt{sem}}\)--which can thus be synthesized in the same amount of time.
**Key finding:** For all but two benchmarks, the \(\mathcal{L}\)-conjunctions synthesized by MoSSKit were equivalent to the algebraic properties manually written by Mariano et al. For the mock implementations of HashMap and ArrayList provided with JLibSketch, for specific grammars, MoSSKit synthesized empty \(\mathcal{L}\)-conjunctions (i.e., the predicate _true_) instead of the algebraic specifications provided by Mariano et al.--i.e., \(k_{1}=k_{2}\Rightarrow\texttt{get}(\texttt{put}(m,k_{1},v),k_{2})=v\) and \(i=j\Rightarrow\texttt{get}(\texttt{set}(l,i,v),j)=v\), for HashMap and ArrayList, respectively. Upon further inspection, we discovered that JLibSketch's mock implementation of HashMap was incorrect, and did not satisfy the specification that Mariano et al. gave, due to incorrect handling of hash collisions! After fixing the bug in the mock implementation of HashMap, we were able to synthesize the expected algebraic specification. However, when inspecting the implementation of ArrayList, we found that for this benchmark the implementation was correct but the algebraic specification provided by Mariano et al. was incorrect! After modifying the grammar, we could synthesize the correct algebraic specification \((i=j)\land(0\leq i)\land(i\leq\texttt{sizeL}(l))\Rightarrow\texttt{get}(\texttt{set}(l,i,v),j)=v\). However, this modification revealed a bug in one of the implementations of HashMap that Mariano et al. had synthesized from the earlier erroneous specification! We discuss this finding further in the next section.
This finding illustrates how modular system synthesis can help to _identify_ and _avoid_ bugs in module implementations.
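The problem with the unguarded ArrayList law is easy to reproduce against Java's own java.util.ArrayList: when the index is out of range, there is no state in which get could return the written value, so the law needs the bounds guard.

```java
import java.util.ArrayList;
import java.util.List;

// Why the unguarded law  i=j => get(set(l,i,v),j)=v  is wrong:
// java.util.ArrayList.set rejects indices outside the list's bounds.
final class GuardDemo {
    public static void main(String[] args) {   // run with java -ea
        List<Integer> l = new ArrayList<>(List.of(10, 20, 30));
        l.set(1, 99);
        assert l.get(1) == 99;                 // in range: the law holds
        try {
            l.set(5, 99);                      // out of range: no state in which
            assert false;                      // get(l, 5) could return 99
        } catch (IndexOutOfBoundsException expected) {
            System.out.println("set(5, 99) rejected; the bounds guard is required");
        }
    }
}
```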
### _Additional Case Studies Based on Mariano et al. [14]_
We noticed that the JLibSketch benchmarks provided an opportunity to build a more complicated benchmark that involved 3 modules (instead of 2). In particular, two of the benchmarks involved synthesizing the implementation of a (layer-1) HashMap module from a (layer-0) algebraic specification of ArrayList. (The two benchmarks synthesized different implementations that handled collisions differently, and we refer to the corresponding modules as HashMap1 and HashMap2.) The third benchmark involved synthesizing the implementation of a (layer-2) Kafka module from a (layer-1) algebraic specification of HashMap. Thus, we built two 3-layer benchmarks in which the goal was to synthesize Kafka using an implementation of HashMap that used an implementation of ArrayList. For us, each 3-layer benchmark involved four
synthesis problems: (1) the algebraic specification \(\varphi^{\text{ArrayList}}_{sem}\) of ArrayList (from the mock); (2) the implementation of either HashMap1 or HashMap2; (3) the algebraic specification of HashMap; and (4) the implementation of Kafka (this part was already synthesized in [14]).
As discussed in the previous section, we identified a bug in the specification \(\varphi^{\text{ArrayList}}_{sem}\) manually provided by Mariano et al., and were able to use MoSSKit to synthesize a correct algebraic specification--i.e., step (1). For step (2), the implementation synthesized by Mariano et al. for HashMap2 was still correct, and we could also use MoSSKit to synthesize it from the corrected specification \(\varphi^{\text{ArrayList}}_{sem}\). However, the implementation of HashMap1 synthesized by JLibSketch was incorrect because it depended on the original, erroneous specification \(\varphi^{\text{ArrayList}}_{sem}\) for ArrayList--(1) put could store values at negative indices; and (2) get could search for a key at an incorrect index after rehashing. We manually changed the implementation of the rehashing function in the sketch of HashMap1 to fix the bug, but the change was large enough that we did not attempt to rewrite the program sketch needed to synthesize this specification (i.e., we manually wrote the implementation of HashMap1 instead of synthesizing it). Synthesis problem (3) is at the heart of handling a multi-module system in a modular fashion: we used MoSSKit to synthesize algebraic specifications of HashMap1 and HashMap2--in each case, giving MoSSKit access to the (correct) implementations of HashMap1 and HashMap2 and the (correct) algebraic specification of ArrayList (but not an implementation of ArrayList).
**Key finding:** MoSSKit failed to synthesize the same algebraic specification we had obtained for HashMap in §V-C when attempting to synthesize a specification for HashMap1 and HashMap2. When inspecting the synthesized properties, we realized that the algebraic specification \(\varphi^{\text{ArrayList}}_{sem}\) exposed by ArrayList still had a problem! In particular, \(\varphi^{\text{ArrayList}}_{sem}\) was too weak to prove the algebraic specifications needed by HashMap1 and HashMap2--i.e., \(\varphi^{\text{ArrayList}}_{sem}\) did not characterize properties that were needed by HashMap1 and HashMap2 to satisfy the algebraic specification \(\varphi^{\text{HashMap}}_{sem}\). We used sketch itself to produce a violation of the algebraic specification \(\varphi^{\text{HashMap}}_{sem}\) for HashMap1 under the weaker assumption that ArrayList only satisfied the specification \(\varphi^{\text{ArrayList}}_{sem}\), and used the violations generated by sketch to identify what properties we needed to add to strengthen \(\varphi^{\text{ArrayList}}_{sem}\). In particular, \(\texttt{sizeL}(\texttt{ensureCapacity}(l,n))=\texttt{sizeL}(l)\) and \(\texttt{get}(\texttt{ensureCapacity}(l,n),i)=\texttt{get}(l,i)\) were added to describe the behavior of ensureCapacity. We were then able to modify the grammar used to synthesize algebraic specifications for \(\varphi^{\text{ArrayList}}_{sem}\) and synthesize the missing properties. After obtaining the strengthened \(\varphi^{\text{ArrayList}}_{sem}\), we successfully synthesized the full algebraic specification for HashMap2 (i.e., \(\varphi^{\text{HashMap}}_{sem}\)) and most of the algebraic specification for HashMap1. Because the corrected implementation of HashMap1 was particularly complicated--e.g., each call to put requires rehashing when the load factor is greater than a predefined value--MoSSKit timed out while synthesizing every property, with the exception of the property \(\texttt{get}(\texttt{emptyMap},k)=err\).
This finding illustrates how modular system synthesis can help identify when module specifications are not strong enough to characterize the behavior of other modules.
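The two strengthening properties mirror familiar facts about java.util.ArrayList.ensureCapacity, which only grows the backing store and never changes the observable contents; the snippet below (ours) checks them directly.

```java
import java.util.ArrayList;
import java.util.List;

// ensureCapacity affects only the backing store, never the observable state:
//   sizeL(ensureCapacity(l,n)) = sizeL(l)
//   get(ensureCapacity(l,n), i) = get(l, i)
final class EnsureCapacityDemo {
    public static void main(String[] args) {   // run with java -ea
        ArrayList<Integer> l = new ArrayList<>(List.of(1, 2, 3));
        int sizeBefore = l.size();
        int elemBefore = l.get(2);
        l.ensureCapacity(100);                 // grow the backing array
        assert l.size() == sizeBefore;         // size unchanged
        assert l.get(2) == elemBefore;         // contents unchanged
    }
}
```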
### _Limitations of_ MoSSKit
JLibSketch and spyro represent the algebraic specifications of modules as rewrite rules over algebraic datatypes (ADTs). Reasoning about ADTs is a challenging problem, and to the best of our knowledge, sketch and JLibSketch are the only frameworks capable of handling problems involving ADTs effectively. Therefore, MoSSKit uses them as the underlying solvers and inherits the limitations of sketch.
The primary limitation of MoSSKit is its bounded soundness guarantee. sketch ensures soundness only for a bounded number of loop/recursion unrollings and bounded input sizes. Verifying the unbounded correctness of the synthesized programs poses a significant challenge, as the semantics of lower-level modules are represented as rewrite rules on ADTs. As a future direction, we plan to integrate MoSSKit with verifiers such as Dafny to perform full verification, as was done in [15] for the properties synthesized by spyro. However, it is worth noting that MoSSKit has already been useful in finding bugs in existing implementations: specification synthesis helped find implementation errors in the case studies of Mariano et al. [14], as demonstrated in §V-C and §V-D.
Although the case studies in §V-B and in [14] show satisfactory performance of sketch for most problems, scalability issues persist. In particular, unrolling nested loops significantly increases the number of holes in the sketch problem, which increases the problem's difficulty.
Besides the limitations inherited from sketch, MoSS has a specific requirement for the system's modular structure, which should be a directed acyclic graph (DAG)--i.e., the implementation-agnostic specifications of all dependent modules must be provided to synthesize a particular module. MoSS addresses the challenges in writing accurate specifications by using the synthesis of implementation-agnostic specifications. However, in this approach one needs to synthesize all dependent modules and their specifications before attempting to synthesize a new module. Alternatively, to synthesize higher-level modules without the lower-level implementations, the user can manually supply the implementation-agnostic specifications of the lower-level modules.
## VI Related Work
A problem related to ours is that of component-based synthesis (CBS), where the goal is _assembling_ pre-existing components/APIs to generate more complex programs. Many existing approaches for solving CBS problems scale reasonably well [5, 18, 20], but require the individual components to be executable. In our setting, this approach is not possible because the details of lower-level components (e.g., how a Stack is implemented) need not be observable.
A few tools have abstracted components and modules using specifications. JLibSketch[14] uses algebraic properties to
represent the semantics of modules and is a key component of our implementation. (CL)S [2] and APIphany [8] use types to represent the behavior of components and can be used in tandem with specialized type-directed synthesizers. The key difference between our work and these tools is that MoSS provides two well-defined synthesis primitives that support composing multiple modules, rather than synthesizing just one implementation for one module. Furthermore, the aforementioned types are limited in how they can represent relations between multiple components in an implementation-agnostic way, which made us opt for algebraic specifications.
Many synthesis tools perform some kind of "compositional" synthesis by breaking an input specification into sub-specifications that are used to separately synthesize sub-components of a target program [1, 17]. This notion of "compositionality" is orthogonal to ours, and is more of a divide-and-conquer approach to solving _individual_ synthesis problems. MoSS can make use of such a divide-and-conquer approach when synthesizing a module's implementation.
For the task of synthesizing an algebraic specification, MoSSKit uses spyro. Besides spyro, there are a number of works about discovering specifications from code, based on both static techniques [6, 21] and dynamic techniques [4, 11]. The static approaches mostly target predicates involving individual functions (instead of algebraic properties and equalities involving multiple functions). The dynamic techniques are flexible and can identify algebraic specifications (e.g., for Java container classes [11]), but require some "bootstrapping" inputs, and only guarantee soundness with respect to behaviors that are covered by the tests that the inputs exercise.
## VII Conclusion
_Conceptual contributions._ At the conceptual level, this paper contributes both a framework and a new way to think about program synthesis that opens many research directions. Specifically, the paper introduces MoSS, a framework for using synthesis to perform modular system synthesis. The main contribution of this paper is not an immediate solution to the modular-synthesis problem, but rather the identification of two key synthesis primitives that are required to realize MoSS in practice: 1) synthesis from an implementation-agnostic specification, and 2) synthesis of an implementation-agnostic specification. While our tool implements both of these primitives using tools based on sketch (thus inheriting its limitations), an interesting research direction is whether other synthesis approaches (enumeration, CEGIS, etc.) can be extended to handle our synthesis problems, perhaps by leveraging the popular egg framework [24], which allows one to reason about equivalence of terms with respect to a term-rewriting system--i.e., our algebraic specifications.
_Experimental Contributions._ We created MoSSKit, a proof-of-concept implementation of MoSS based on two existing program-synthesis tools: JLibSketch[14], a program-sketching tool that supports algebraic specifications, and spyro[15], a tool for synthesizing precise specifications from code. The case studies carried out with MoSSKit show that (_i_) modular synthesis is faster than monolithic synthesis, and (_ii_) performing synthesis for both implementations and specifications of the modules can prevent subtle bugs.
## Acknowledgement
Supported, in part, by a Microsoft Faculty Fellowship, a gift from Rajiv and Ritu Batra; by ONR under grant N00014-17-1-2889; and by NSF under grants CCF-{1750965,1763871,1918211,2023222,2211968,2212558}.
Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors, and do not necessarily reflect the views of the sponsoring entities.
## References
* [1] 27th International Conference, CAV 2015, San Francisco, CA, USA, July 18-24, 2015, Proceedings, Part II_, volume 9207 of _Lecture Notes in Computer Science_, pages 163-179. Springer, 2015.
* [2] 6th International Symposium, ISoLA 2014, Imperial, Corfu, Greece, October 8-11, 2014, Proceedings, Part I_, volume 8802 of _Lecture Notes in Computer Science_, pages 26-40. Springer, 2014.
* [3] H. Comon, M. Dauchet, R. Gilleron, F. Jacquemard, D. Lugiez, C. Loding, S. Tison, and M. Tommasi. _Tree Automata Techniques and Applications_. 2008.
* [4] M. D. Ernst, J. H. Perkins, P. J. Guo, S. McCamant, C. Pacheco, M. S. Tschantz, and C. Xiao. The Daikon system for dynamic detection of likely invariants. _Sci. Comput. Program._, 69(1-3):35-45, 2007.
* [5] Y. Feng, R. Martins, Y. Wang, I. Dillig, and T. W. Reps. Component-based synthesis for complex APIs. In _Proceedings of the 44th ACM SIGPLAN Symposium on Principles of Programming Languages, POPL 2017, Paris, France, January 18-20, 2017_, pages 599-612, 2017.
* [6] C. Flanagan and K. R. M. Leino. Houdini, an annotation assistant for ESC/Java. In J. N. Oliveira and P. Zave, editors, _FME 2001: Formal Methods for Increasing Software Productivity, International Symposium of Formal Methods Europe, Berlin, Germany, March 12-16, 2001, Proceedings_, volume 2021 of _Lecture Notes in Computer Science_, pages 500-517. Springer, 2001.
* [7] J. Goguen, J. Thatcher, E. Wagner, and J. Wright. Abstract databases as initial algebras and correctness of data representations. In _Proceedings Conference on Computer Graphics, Pattern Recognition and Data Structure_, May 1975.
* [8] 17, 2022_, pages 122-136. ACM, 2022.
* [9] J. V. Guttag. _The Specification and Application to Programming of Abstract Data Types_. PhD thesis, Computer Systems Research Group, Univ. of Toronto, Toronto, Canada, Sept. 1975.
* [10] J. V. Guttag and J. J. Horning. The algebraic specification of abstract data types. _Acta Informatica_, 10:27-52, 1978.
* [11] J. Henkel, C. Reichenbach, and A. Diwan. Discovering documentation for Java container classes. _IEEE Trans. Software Eng._, 33(8):526-543, 2007.
* [12] R. Hood and R. Melville. Real-time queue operation in pure LISP. _Inf. Process. Lett._, 13(2):50-54, 1981.
* [13] B. H. Liskov and S. N. Zilles. Specification techniques for data abstractions. _IEEE Trans. Software Eng._, 1(1):7-19, 1975.
* [14] B. Mariano, J. Reese, S. Xu, T. Nguyen, X. Qiu, J. S. Foster, and A. Solar-Lezama. Program synthesis with algebraic library specifications. _Proc. ACM Program. Lang._, 3(OOPSLA):132:1-132:25, 2019.
* [15] K. Park, L. D'Antoni, and T. Reps. Synthesizing specifications. _CoRR_, abs/2301.11117, 2023.
* [16] D. L. Parnas. On the criteria to be used in decomposing systems into modules. _Comm. ACM_, 15(12):1053-1058, 1972.
* [17] M. Raza, S. Gulwani, and N. Milic-Frayling. Compositional program synthesis from natural language and examples. In _Proceedings of the 24th International Conference on Artificial Intelligence_, IJCAI'15, pages 792-800. AAAI Press, 2015.
* [18] K. Shi, J. Steinhardt, and P. Liang. FrAngel: Component-based synthesis with control structures. _Proc. ACM Program. Lang._, 3(POPL):73:1-73:29, 2019.
* [19] P. Simon. One man's ceiling is another man's floor, May 1973. T-700.050.850-1 BMI, ISWC, JASRAC.
* [20] 15th International Conference, VMCAI 2014, San Diego, CA, USA, January 19-21, 2014, Proceedings_, volume 8318 of _Lecture Notes in Computer Science_, pages 395-414. Springer, 2014.
* [21] J. L. Singleton, G. T. Leavens, H. Rajan, and D. R. Cok. Inferring concise specifications of APIs. _CoRR_, abs/1905.06847, 2019.
* [22] A. Solar-Lezama. Program sketching. _Int. J. Softw. Tools Technol. Transf._, 15(5-6):475-495, 2013.
* [23] J. M. Spitzen and B. Wegbreit. The verification and synthesis of data structures. _Acta Informatica_, 4:127-144, 1974.
* [24] M. Willsey, C. Nandi, Y. R. Wang, O. Flatt, Z. Tatlock, and P. Panchekha. egg: Fast and extensible equality saturation. _Proc. ACM Program. Lang._, 5(POPL):1-29, 2021.
### _Ticket-vendor Detailed Case Study_
In MoSSKit, to synthesize the implementation-agnostic specification of the operations \(\textit{MI}_{k}\) in layer \(k\), we supplied spyro with the code corresponding to the implementations of the functions \(\textit{MI}_{k}\), and a domain-specific language \(\mathcal{L}\) of equalities over the functions \(\textit{MI}_{k}\). Although spyro is built on top of sketch (instead of JLibSketch), we manually implemented the term rewriting approach of JLibSketch in the sketch files used by spyro in our case study to synthesize implementation-agnostic specifications that only depend on algebraic specifications of lower layers.
List _Specification Synthesis_. As shown in Fig. 1, we assumed that spyro, used with a specific implementation of List, synthesized an implementation-agnostic specification for operations in \(\mathcal{P}[\texttt{List}]\)--i.e., nil, cons, head, tail, snoc, sizeL, and isEmptyL. Due to the current scalability limitations of spyro, we called spyro multiple times with different smaller grammars instead of providing one big grammar of all possible properties. In each call to spyro, we provided a grammar in which we fixed a left-hand-side expression of an equality predicate, and asked spyro to search for a right-hand-side expression for the equality. We allowed the right-hand-side expression to contain a conditional where the guard can be selected from the outputs of Boolean operators in the module, their negation, or constants.
Because we wanted to use the synthesized equalities as input to JLibSketch when synthesizing implementations for the Stack module, we provided grammars of equalities that avoided generating cyclic rewrite rules. We addressed this issue by limiting the search space for the right-hand-side expression. The function symbols permitted in the right-hand-side expression are one of the functions in the left-hand-side expression, functions used in the implementation of a function in the left-hand-side expression, or constants. Also, the outermost function symbol of the left-hand side can only be applied to a strictly smaller term. For instance, in one of the calls to spyro, the goal is to find values of _guard_ and _exp_ that satisfy the following equation:
\[\textit{guard}\rightarrow\texttt{snoc}(\texttt{cons}(hd,tl),x)=\textit{exp} \tag{2}\]
where _guard_ is one of isEmptyL(\(l\)), \(\neg\texttt{isEmptyL}(l)\) or \(\top\), and _exp_ is expressed by the grammar \(L:=tl\mid\texttt{nil}\mid\texttt{snoc}(tl,I)\mid\texttt{cons}(I,L);I:=hd\mid x\).
spyro was able to solve each List specification-synthesis problem within 10 seconds. For the problem in Eq. (2), spyro synthesized \(\textit{guard}=\top\) and \(\textit{exp}=\texttt{cons}(hd,\texttt{snoc}(tl,x))\). The complete set of equalities in the implementation-agnostic specification \(\varphi^{\texttt{List}}_{\textit{sem}}\) synthesized by spyro is the following:
\[\begin{array}{ll}
\texttt{isEmptyL}(\texttt{nil})=\top & \texttt{isEmptyL}(\texttt{cons}(hd,tl))=\bot\\
\texttt{sizeL}(\texttt{nil})=0 & \texttt{sizeL}(\texttt{cons}(hd,tl))=\texttt{sizeL}(tl)+1\\
\texttt{head}(\texttt{cons}(hd,tl))=hd & \texttt{tail}(\texttt{cons}(hd,tl))=tl\\
\multicolumn{2}{l}{\texttt{snoc}(\texttt{nil},x)=\texttt{cons}(x,\texttt{nil})}\\
\multicolumn{2}{l}{\texttt{snoc}(\texttt{cons}(hd,tl),x)=\texttt{cons}(hd,\texttt{snoc}(tl,x))}
\end{array}\]
Stack _Implementation Synthesis_. We then used JLibSketch to synthesize an implementation of the Stack operations emptyS, push, top, pop, sizeS, and isEmptyS. In this implementation, a Stack uses a List. When building the JLibSketch files for this step, we manually translated the implementation-agnostic specification \(\varphi^{\texttt{List}}_{\textit{sem}}\) synthesized by spyro in the previous step into JLibSketch rewrite rules.
On top of the implementation-agnostic specification of the List module, we also provided an implementation-specific specification \(\varphi^{\texttt{Stack}}_{\textit{imp}}\) for the kind of Stack we were trying to synthesize. The \(\varphi^{\texttt{Stack}}_{\textit{imp}}\) specification involved JLibSketch code with 17 assertions. The following examples are an excerpt from the \(\varphi^{\texttt{Stack}}_{\textit{imp}}\) specification (\(x\), \(y\), and \(z\) are universally quantified integers that are allowed to be in the range 0 to 10):
\[\begin{array}{l}\texttt{top}(\texttt{push}(\texttt{emptyS},x))=x\\ \texttt{sizeS}(\texttt{emptyS})=0\\ \texttt{top}(\texttt{push}(\texttt{push}(\texttt{emptyS},x),y))=y\\ \texttt{sizeS}(\texttt{push}(\texttt{emptyS},x))=1\end{array}\]
Besides the assertions, we provided JLibSketch with a fairly complete sketch of the structure of the implementation: we provided loops and branching structures and only asked JLibSketch to synthesize basic statements and expressions. JLibSketch was able to synthesize the implementations of all the Stack functions within 10 seconds. For example, the function pop for Stack\(=(l:\texttt{List})\) was synthesized as pop\((st:\texttt{Stack}):=\texttt{tail}(st.l)\).
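For concreteness, the synthesized Stack-over-List implementation can be read as the following self-contained Java-style sketch. Only the body of pop corresponds to output reported above; the class names (ListI, StackI), the functional style, and the remaining bodies are our illustrative reconstruction under the algebraic List specification, not verbatim tool output.

```
// Illustrative reconstruction (not tool output): a Stack backed by a List,
// consistent with the synthesized pop(st) := tail(st.l).
final class ListI {
    final int hd; final ListI tl;
    ListI(int hd, ListI tl) { this.hd = hd; this.tl = tl; }
    static ListI nil() { return null; }
    static ListI cons(int x, ListI l) { return new ListI(x, l); }
    static int head(ListI l) { return l.hd; }
    static ListI tail(ListI l) { return l.tl; }
    static boolean isEmptyL(ListI l) { return l == null; }
    static int sizeL(ListI l) { return l == null ? 0 : 1 + sizeL(l.tl); }
}
final class StackI {
    final ListI l;                      // top of the stack = head of the list
    StackI(ListI l) { this.l = l; }
    static StackI emptyS() { return new StackI(ListI.nil()); }
    StackI push(int x) { return new StackI(ListI.cons(x, l)); }
    int top() { return ListI.head(l); }
    StackI pop() { return new StackI(ListI.tail(l)); }  // the synthesized body
    int sizeS() { return ListI.sizeL(l); }
    boolean isEmptyS() { return ListI.isEmptyL(l); }
}
```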
Stack _Specification Synthesis_. Our implementation-specific specification \(\varphi^{\texttt{Stack}}_{\textit{imp}}\) does not contain any function symbols from \(\mathcal{P}[\texttt{List}]\)--i.e., it was actually implementation-agnostic. However, since \(\varphi^{\texttt{Stack}}_{\textit{imp}}\) only describes the behavior for specific examples, we used spyro to synthesize a new implementation-agnostic specification of Stack that generalized to arbitrary inputs. To use spyro, we manually translated the Stack implementation computed by JLibSketch into code that could be used by spyro.
By providing grammars similar to the ones used for the List specification-synthesis problem, spyro was able to solve each Stack specification-synthesis problem within 30 seconds, and computed the implementation-agnostic specification \(\varphi^{\texttt{Stack}}_{\textit{sem}}\) presented in Fig. 1 in §II.
Queue _Implementation Synthesis_. We then used JLibSketch to synthesize an implementation of the Queue operations emptyQ, enq, front, deq, sizeQ, and isEmptyQ. A Queue is implemented using two Stacks: \(st_{in}\), which stores elements in the rear part of the queue, and \(st_{out}\), which stores elements
in the front part of the queue. Stack \(st_{in}\) holds the rearmost element on top, and Stack \(st_{out}\) stores the frontmost element on top. To make the front operation more efficient, we decided to make sure that the frontmost element is always at the top of \(st_{out}\).
The implementation-specific specification \(\varphi_{\textit{imp}}^{\texttt{Queue}}\) for the Queue operations consisted of JLibSketch code with 20 assertions. The assertions included invariants relating the two stacks, such as isEmptyS\((st_{out})\rightarrow\texttt{isEmptyS}(st_{in})\), as well as such examples as
\[\begin{array}{l}\texttt{front}(\texttt{enq}(\texttt{emptyQ},x))=x\\ \texttt{sizeQ}(\texttt{emptyQ})=0\\ \texttt{front}(\texttt{enq}(\texttt{enq}(\texttt{emptyQ},x),y))=x\\ \texttt{sizeQ}(\texttt{enq}(\texttt{emptyQ},x))=1\end{array}\]
Again, \(x\), \(y\), and \(z\) are universally quantified integers that are allowed to be in the range 0 to 10. Again, we provided JLibSketch with a fairly complete sketch of the program structure, and JLibSketch was able to synthesize all the Queue implementations within 10 seconds. For example, the function enq for Queue \(=(st_{in}:\texttt{Stack},st_{out}:\texttt{Stack})\) was synthesized as enq\((q:\texttt{Queue},i:\texttt{int}):=\texttt{if isEmptyS}(q.st_{out})\) then \((q.st_{in},\texttt{push}(q.st_{out},i))\) else \((\texttt{push}(q.st_{in},i),q.st_{out})\). This implementation is correct due to the invariant isEmptyS\((st_{out})\rightarrow\texttt{isEmptyS}(st_{in})\): this property ensures that \(st_{out}\) is empty only when the entire queue is empty, so the newly enqueued element is pushed onto \(st_{out}\) exactly when it should become the front of the queue.
Queue _Specification Synthesis_. With an experimental setup similar to the one for Stack specification synthesis, spyro was able to solve each Queue specification-synthesis problem within 40 seconds, yielding the following implementation-agnostic specification \(\varphi_{\textit{sem}}^{\texttt{Queue}}\):

\[\begin{array}{l}\texttt{isEmptyQ}(\texttt{emptyQ})=\top\qquad\texttt{isEmptyQ}(\texttt{enq}(q,i))=\bot\\ \texttt{sizeQ}(\texttt{emptyQ})=0\qquad\texttt{sizeQ}(\texttt{enq}(q,i))=\texttt{sizeQ}(q)+1\\ \texttt{isEmptyQ}(q)\rightarrow\texttt{front}(\texttt{enq}(q,i))=i\\ \neg\texttt{isEmptyQ}(q)\rightarrow\texttt{front}(\texttt{enq}(q,i))=\texttt{front}(q)\\ \texttt{isEmptyQ}(q)\rightarrow\texttt{deq}(\texttt{enq}(q,i))=q\\ \neg\texttt{isEmptyQ}(q)\rightarrow\texttt{deq}(\texttt{enq}(q,i))=\texttt{enq}(\texttt{deq}(q),i)\end{array}\]
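To see that these equalities pin down first-in-first-out behavior, consider (our own worked derivation) the queue obtained by enqueuing 1 and then 2. The front and deq rules give

\[\texttt{front}(\texttt{enq}(\texttt{enq}(\texttt{emptyQ},1),2))=\texttt{front}(\texttt{enq}(\texttt{emptyQ},1))=1\]

\[\texttt{deq}(\texttt{enq}(\texttt{enq}(\texttt{emptyQ},1),2))=\texttt{enq}(\texttt{deq}(\texttt{enq}(\texttt{emptyQ},1)),2)=\texttt{enq}(\texttt{emptyQ},2),\]

so the element enqueued first is the one served first.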
TicketVendor _Implementation Synthesis_. We used JLibSketch to synthesize an implementation of the TicketVendor operations prepSales, resTicket, issueTicket, soldOut, numTicketsRem, and numWaiting. A TicketVendor is implemented using a Queue, which stores the id numbers of clients who have reserved tickets. Each issued ticket contains the id of the buyer.
The implementation-specific specification \(\varphi_{\textit{imp}}^{\texttt{TicketVendor}}\) consisted of JLibSketch code with 24 assertions, and contains multiple examples, such as the following (again, \(x\) and \(y\) are universally quantified integers that are allowed to be in the range 0 to 10):
\[\begin{array}{l}\texttt{numTicketsRem}(\texttt{prepSales}(2))=2\\ \texttt{numWaiting}(\texttt{prepSales}(2))=0\\ \texttt{numWaiting}(\texttt{resTicket}(\texttt{prepSales}(2),x))=1\\ \texttt{issueTicket}(\texttt{resTicket}(\texttt{prepSales}(2),x)).\textit{owner}=x\end{array}\]
Again, we provided JLibSketch with a fairly complete sketch of the program structure, and JLibSketch was able to synthesize the implementations of all the TicketVendor functions within 10 seconds. For example, the function prepSales for TicketVendor \(=(num_{ticket}:\texttt{int},q_{waiting}:\texttt{Queue})\) was synthesized as prepSales\((n:\texttt{int}):=(n,\texttt{emptyQ})\).
_Changing the Queue Implementation._ As illustrated in §II-D, we also synthesized a different implementation of Queue that uses a List instead of two Stacks. The List holds the oldest element of the Queue at its head. The implementation-specific specification \(\varphi_{\textit{imp}}^{\texttt{Queue}}\) (as List) consisted of JLibSketch code with 19 assertions, including such examples as
\[\begin{array}{l}\texttt{front}(\texttt{enq}(\texttt{emptyQ},x))=x\\ \texttt{sizeQ}(\texttt{emptyQ})=0\\ \texttt{front}(\texttt{enq}(\texttt{enq}(\texttt{emptyQ},x),y))=x\\ \texttt{sizeQ}(\texttt{enq}(\texttt{emptyQ},x))=1\end{array}\]
where \(x\), \(y\), and \(z\) are again universally quantified integers in the range 0 to 10.
Because the implementation-agnostic specification \(\varphi_{\textit{sem}}^{\texttt{Queue}}\) was synthesized from the previous implementation, as a sanity check we used JLibSketch to verify whether \(\varphi_{\textit{sem}}^{\texttt{Queue}}\) still holds for the new implementation. Because this was the case (the check took less than a second), TicketVendor does not need to be changed to use the Queue-as-List implementation.
### _Implementation Synthesis with JLibSketch_
We present the three inputs provided to JLibSketch to solve the implementation-synthesis problem for Queue: (_i_) a program sketch describing the search space of possible programs (Fig. 6), (_ii_) an implementation-agnostic specification \(\varphi_{\textit{sem}}^{\texttt{Stack}}\) of the Stack module in the form of rewrite rules (Fig. 7), and (_iii_) an implementation-specific specification \(\varphi_{\textit{imp}}^{\texttt{Queue}}\) of the Queue module in the form of assertions (Fig. 8).
```
// Program sketch for the Queue operations (Fig. 6). The calls genGuard,
// genStack, genStack1, and genStack2 are generator holes to be filled in
// by JLibSketch; assume/assert state the invariant relating the two stacks.
public void enqueue(int x) {
    Stack st_in = this.st_in;
    Stack st_out = this.st_out;
    assume !st_out.isEmpty() || st_in.isEmpty();  // invariant holds on entry
    if (genGuard(st_in, st_out)) {
        st_in = genStack(st_in, st_out, x);
        st_out = genStack2(st_in, st_out, x);
    } else {
        st_in = genStack2(st_in, st_out, x);
        st_out = genStack2(st_in, st_out, x);
    }
    assert !st_out.isEmpty() || st_in.isEmpty();  // invariant must be restored
    this.st_in = st_in;
    this.st_out = st_out;
}

private static void rev(Stack in, Stack out) {
    // Moves all elements from `in` onto `out`, reversing their order.
    while (!in.isEmpty()) {
        out.push(in.top());
        in.pop();
    }
}

public void dequeue() {
    Stack st_in = this.st_in;
    Stack st_out = this.st_out;
    assume !st_out.isEmpty() || st_in.isEmpty();
    st_in = genStack1(st_in, st_out);
    st_out = genStack1(st_in, st_out);
    if (genGuard(st_in, st_out)) {
        rev(st_in, st_out);
    }
    this.st_in = st_in;
    this.st_out = st_out;
    assert !st_out.isEmpty() || st_in.isEmpty();
}
```
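For intuition, one consistent resolution of the generator holes in enqueue, matching the enq implementation synthesized earlier (our reading, not verbatim tool output), is the following:

```
// Hypothetical completion of the enqueue sketch, consistent with the
// synthesized enq: the guard tests whether st_out is empty.
public void enqueue(int x) {
    if (this.st_out.isEmpty()) {
        this.st_out.push(x);   // queue was empty: x becomes the front
    } else {
        this.st_in.push(x);    // otherwise x joins the rear stack
    }
}
```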
# Stochastic embeddings of graphs into trees

Th. Schlumprecht, Garrett Tresch
###### Abstract.
It is known that every graph with \(n\) vertices embeds stochastically into trees with distortion \(O(\log n)\). In this paper, we show that this upper bound is sharp for a large class of graphs. As this class of graphs contains diamond graphs, this result extends known examples that obtain this largest possible stochastic distortion.
Key words and phrases: Stochastic embeddings, Distortion, Slash powers, Trees.

2010 Mathematics Subject Classification: 05C05, 68R10.

The research was partially supported by the National Science Foundation under Grant Number DMS-2054443. This paper is part of the Ph.D. thesis of the second named author.
## 1. Introduction
Let \(G\) be a graph with \(n\) vertices. By a result of Fakcharoenphol, Rao, and Talwar (Theorem 2.6 below), \(G\) embeds stochastically into geodesic trees with distortion \(O(\log n)\). Our main result shows that this upper bound is sharp for a large class of graphs, namely slash powers of geodesic \(s\)-\(t\) graphs that contain a cycle; as this class contains the diamond graphs, it extends the known examples attaining the largest possible stochastic distortion.

## 2. Preliminaries
We shall often denote a (directed or undirected) path by \(P=(x_{i})_{i=0}^{k}\) or \((x_{0},\ldots,x_{k})\) and call it a _path from \(x_{0}\) to \(x_{k}\)_, or _between \(x_{0}\) and \(x_{k}\)_. For two paths \(A:=(a_{i})_{i=0}^{p}\) and \(B:=(b_{i})_{i=0}^{q}\) with \(a_{p}=b_{0}\), we use \(A\smile B\) to denote the concatenation of the two paths at the vertex \(a_{p}\). More specifically, \(V(A\smile B)=\{a_{0},a_{1},\ldots,a_{p}=b_{0},b_{1},\ldots,b_{q}\}\) and \(E(A\smile B)=\{\{a_{i-1},a_{i}\}:i\in\{1,\ldots,p\}\}\cup\{\{b_{i-1},b_{i}\}:i \in\{1,\ldots,q\}\}\). Note that the concatenation of the two paths above is a path iff \(V(A)\cap V(B)=\{a_{p}=b_{0}\}\).
A subgraph \(C=(V(C),E(C))\subset(V(G),E(G))\) is called _a cycle in_ \(G\) if the distinct vertices of \(C\) can be ordered into \(V(C)=\{x_{0},x_{1},\ldots,x_{k-1}\}\), with \(k\geq 3\), such that, setting \(x_{k}=x_{0}\), we have \(E(C)=\big\{\{x_{i-1},x_{i}\}:i\in\{1,\ldots,k\}\big\}\).
A graph \(G\) is called _connected_ if for any two vertices there is a path between them. A _tree_ is a connected graph that does not contain a cycle; equivalently, any two vertices are joined by a unique path.
### Geodesic Metrics
If \(G\) is a connected graph and \(d_{G}\) is a metric on \(V(G)\), we call \(d_{G}\) a _geodesic metric on \(G\)_ if
\[d_{G}(u,v)=\min\{\text{length}_{d_{G}}(P):\text{$P$ is a path from $u$ to $v$}\},\text{ for $u,v\in V$},\]
where for a path \(P=(x_{j})_{j=0}^{n}\) in \(G\), we define the length of \(P\) by
\[\text{length}_{d_{G}}(P)=\sum_{j=1}^{n}d_{G}(x_{j-1},x_{j}),\]

and we sometimes refer to this value as the _metric length of_ \(P\). In this case, we call the pair \((G,d_{G})\) a _geodesic graph_. For \(e=\{u,v\}\in E(G)\) we denote \(d_{G}(e)=d_{G}(u,v)\).
Assume that \(w:E(G)\to\mathbb{R}^{+}\) is a function. Define for \(u,v\in V(G)\)
\[d_{G}(u,v):=\min\Big{\{}\sum_{j=1}^{n}w(\{x_{j-1},x_{j}\}):(x_{j})_{j=0}^{n} \text{ is a path from $u$ to $v$}\Big{\}}.\]
Then \(d_{G}\) is a geodesic metric on \(G\), and we call it the _metric generated by the weight function_\(w\). Note that if \(w:E(G)\to\mathbb{R}^{+}\) is an arbitrary function and \(d_{G}\) the geodesic metric generated by \(w\), it does not necessarily follow that for an edge \(e\) we have \(d_{G}(e)=w(e)\), since it might be possible that there is a path of shorter metric length between the two endpoints of \(e\).
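A minimal example of this phenomenon (our own illustration): on the triangle with vertices \(u,a,v\) and weights \(w(\{u,a\})=w(\{a,v\})=1\) and \(w(\{u,v\})=3\), the generated metric satisfies \(d_{G}(u,v)=2<3=w(\{u,v\})\), since the two-edge path through \(a\) is shorter than the edge \(\{u,v\}\) itself.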
For the special weight function defined by \(w(e)=1\) for all \(e\in E(G)\), and for a path \(P\) in \(G\), we call \(\sum_{e\in E(P)}w(e)=|E(P)|\) the _graph length of \(P\)_.
Conversely, any geodesic metric on \(G\) is generated by the weight function
\[w:E\to\mathbb{R}^{+},\qquad\{u,v\}\mapsto d_{G}(u,v).\]
If \((G,d_{G})\) is a geodesic graph and \(H=(V(H),E(H))\) is a connected subgraph we call \((H,d_{H})\), where \(d_{H}\) is the geodesic distance on \(V(H)\) generated by the weight \(w:E(H)\to\mathbb{R}^{+}\), \(e\mapsto d_{G}(e)\), an _induced geodesic subgraph_ and \(d_{H}\) the _induced geodesic metric of \(d_{G}\) on \(V(H)\)_. Note that \(d_{H}\) is not necessarily the restriction of \(d_{G}\) on \(V(G)\) (for example, if \(G\) is a cycle and \(H\) is obtained by taking away one edge). In
the scenario when \(d_{H}(x,y)=d_{G}(x,y)\) for all \(x,y\in V(H)\) we say that \(H\) is an _isometric geodesic subgraph_ of \(G\) or, when the context is clear, simply an _isometric subgraph_.
### \(s\)-\(t\) Graphs
We call a connected graph \(G=(V(G),E(G))\) an \(s\)-\(t\) _graph_ if it has two distinguished vertices denoted by \(s=s(G)\) and \(t=t(G)\in V(G)\) so that \(G\) can be turned into a directed graph \(G_{d}=(V(G),E_{d}(G))\) where every edge \(e\in E_{d}(G)\) is an element of a directed path from \(s(G)\) to \(t(G)\). When given such an orientation, we call \(G\) a _directed_ \(s\)-\(t\) graph and note that under this definition every vertex \(v\in V(G)\) lies on a directed path from \(s(G)\) to \(t(G)\).
Let \(G=(V(G),E(G))\) be an \(s\)-\(t\) graph and let \(d_{G}\) be a geodesic metric on \(G\). We say that \((G,d_{G})\) is a _geodesic \(s\)-\(t\) graph_, if all paths from \(s(G)\) to \(t(G)\) have the same length, and we say that \((G,d_{G})\) is a _normalized geodesic \(s\)-\(t\) graph_, if that length is \(1\). This property has the following important, and in this paper often used, consequence.
**Proposition 2.1**.: _If \((G,d_{G})\) is a geodesic \(s\)-\(t\) graph, every path \(P=(x_{j})_{j=0}^{l}\) from \(x_{0}=s(G)\) to \(x_{l}=t(G)\) is an isometric subgraph of \(G\). In particular, this means that_
\[d_{G}(x_{i},x_{j})=\sum_{s=i+1}^{j}d_{G}(x_{s-1},x_{s})\text{ for }0\leq i<j \leq l.\]
If \(d_{G}(e)=1\) for all \(e\in E(G)\) for a geodesic \(s\)-\(t\) graph \((G,d_{G})\), then \((G,d_{G})\) is sometimes referred to as a _bundle graph_ [8].
**Examples 2.2**.: We list three elementary examples of geodesic \(s\)-\(t\) graphs:
* (_Paths_): Let \(P=(x_{i})_{i=0}^{k}\) be a path. If we define \(s(P)=x_{0}\) and \(t(P)=x_{k}\) then it is clear that \(P\) is an \(s\)-\(t\) graph. Let \(w:E(P)\to\mathbb{R}^{+}\) be a weight function, then \(w\) generates a geodesic metric \(d_{P}\) on \(P\) making \((P,d_{P})\) a geodesic \(s\)-\(t\) graph.
* (_Cycles_): Let \(C=(V(C),E(C))\) be a cycle, with \(V(C)=\left\{x_{0},x_{1},\ldots,x_{l-1}\right\}\), with \(l\geq 3\), such that \(E(C)=\left\{\left\{x_{j-1},x_{j}\right\}:j=1,2,\ldots,l\right\}\), where \(x_{l}=x_{0}\). Let \(d_{C}\) be a geodesic metric on \(V(C)\), generated by a weight function \(w:E(C)\to\mathbb{R}^{+}\), and assume that there is an \(1\leq m\leq l-1\) so that \[d_{C}(x_{0},x_{m})=\sum_{j=1}^{m}w(\left\{x_{j-1},x_{j}\right\})=\sum_{j=m+1}^{l}w(\left\{x_{j-1},x_{j}\right\}).\] We can then orient \(E(C)\) by \[E_{d}(C)=\left\{(x_{i-1},x_{i}):i\in\left\{1,\ldots,m\right\}\right\}\cup\left\{(x_{j},x_{j-1}):j\in\left\{m+1,\ldots,l\right\}\right\}.\] Then \((C,d_{C})\) is a geodesic \(s\)-\(t\) graph, with \(s(C)=x_{0}\) and \(t(C)=x_{m}\). We call in that case \((C,d_{C})\) a _geodesic \(s\)-\(t\) cycle_.
* _Generalized Laakso graphs_. For \(k,l_{1},l_{2},m\in\mathbb{N}_{0}\) such that \(l_{1}\geq l_{2}\geq 1\), and \(l_{1}+l_{2}\geq 3\). Let \(L=(V(L),E(L))\) be defined by \[V(L)=\left\{x_{i}\right\}_{i=0}^{k}\cup\left\{y_{i}^{(1)}\right\}_{i=0}^{l_{1} }\cup\left\{y_{i}^{(2)}\right\}_{i=0}^{l_{2}}\cup\left\{z_{i}\right\}_{i=0}^{m},\]
where we make the following identifications: \[x_{k}\equiv y_{0}^{(1)}\equiv y_{0}^{(2)},\text{ and }y_{l_{1}}^{(1)}\equiv y_{l_{2}}^{(2)}\equiv z_{0},\] and \[E(L)=\big{\{}\{x_{i-1},x_{i}\}:i=1,2,\ldots,k\big{\}}\cup\big{\{}\{z_{i-1},z_{i}\} :i=1,2,\ldots,m\big{\}}\] \[\qquad\cup\big{\{}\{y_{i-1}^{(1)},y_{i}^{(1)}\}:i=1,2,\ldots,l_{1} \big{\}}\cup\big{\{}\{y_{i-1}^{(2)},y_{i}^{(2)}\}:i=1,2,\ldots,l_{2}\big{\}}.\] We call \(L\) a \((k,l_{1},l_{2},m)\)-_Laakso graph_. If \(L\) is a \((k,l_{1},l_{2},m)\)-Laakso graph for some \(k,l_{1},l_{2},m\), then we say that \(L\) is _a generalized Laakso graph_. Note that in particular, a \((1,2,2,1)\)-Laakso graph is the base graph for the family of standard Laakso graphs, while a \((0,2,2,0)\)-Laakso graph is the base graph for the family of standard diamond graphs. We denote \(s(L)=x_{0}\) and \(t(L)=z_{m}\) and note that the orientation \[E_{d}(L)=\big{\{}(x_{i-1},x_{i}):i=1,2,\ldots,k\big{\}}\cup\big{\{}(z_{i-1},z_{ i}):i=1,2,\ldots,m\big{\}}\\ \cup\big{\{}(y_{i-1}^{(1)},y_{i}^{(1)}):i=1,2,\ldots,l_{1}\big{\}} \cup\big{\{}(y_{i-1}^{(2)},y_{i}^{(2)}):i=1,2,\ldots,l_{2}\big{\}}.\] implies that these generalized Laakso graphs are \(s\)-\(t\) graphs.
Let \(w:E(L)\to(0,1]\) be such that,
\[\sum_{i=1}^{k}w(x_{i-1},x_{i})+\sum_{i=1}^{l_{1}}w(y_{i-1}^{(1)},y_{i}^{(1)})+\sum_{i=1}^{m}w(z_{i-1},z_{i})=\sum_{i=1}^{k}w(x_{i-1},x_{i})+\sum_{i=1}^{l_{2}}w(y_{i-1}^{(2)},y_{i}^{(2)})+\sum_{i=1}^{m}w(z_{i-1},z_{i}),\] then \(w\) generates a geodesic metric \(d_{L}\) on \(L\) which turns \((L,d_{L})\) into a geodesic \(s\)-\(t\) graph.
We call a \((k,l_{1},l_{2},m)\)-Laakso graph \(L=(V(L),E(L))\)_balanced_ if \(l_{1}=l_{2}\).
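For example (our own worked instance), the \((0,2,2,0)\)-Laakso graph consists of the four vertices \(s=x_{0}\equiv y_{0}^{(1)}\equiv y_{0}^{(2)}\), \(y_{1}^{(1)}\), \(y_{1}^{(2)}\), and \(t=y_{2}^{(1)}\equiv y_{2}^{(2)}\equiv z_{0}\), joined by the four branch edges; assigning each edge the weight \(1/2\) makes both \(s\)-\(t\) paths have metric length \(1\), so this balanced graph is the diamond \(D_{1}\) viewed as a normalized geodesic \(s\)-\(t\) graph.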
An important attribute of geodesic \(s\)-\(t\) graphs is that all cycles and geodesic \(s\)-\(t\) subgraphs over the same two distinguished points \(s,t\) are isometric subgraphs. Such a property will allow us to restrict our attention to _generalized Laakso graphs_ as introduced in Example 2.2 (c).
Figure 1. A \((k,l_{1},l_{2},m)\)-Laakso Graph
For the next result, we introduce the following notation: If \(G\) is an \(s\)-\(t\) graph, we call a subgraph \(H\) of \(G\) which contains \(s(G)\) and \(t(G)\) and which has the property that it is an \(s\)-\(t\) graph with the same distinguished points \(s(G)\) and \(t(G)\), an \(s\)-\(t\)_subgraph of \(G\)_.
**Lemma 2.3**.: _Suppose \((G,d_{G})\) is a geodesic \(s\)-\(t\) graph with distinguished points \(s(G)\) and \(t(G)\)._
1. _Every cycle_ \(C\) _in_ \(G\) _is a subgraph of a generalized Laakso graph_ \(L\) _in_ \(G\)_, for which_ \(s(L)=s(G)\) _and_ \(t(L)=t(G)\) _and both,_ \((C,d_{C})\) _as well as_ \((L,d_{L})\) _are isometric subgraphs of_ \(G\)_, where_ \(d_{C}\) _and_ \(d_{L}\) _are the geodesic metrics on_ \(C\) _and_ \(L\)_, respectively, induced by_ \(d_{G}\)_._
2. _If_ \(H\) _is an_ \(s\)_-_\(t\) _subgraph of_ \(G\) _then_ \((H,d_{H})\) _is an isometric subgraph of_ \((G,d_{G})\)_, where_ \(d_{H}\) _is the geodesic metric on_ \(H\) _induced by_ \(d_{G}\)_._
_Moreover, if \(G\) is not a path, it contains cycles and, thus, also a generalized Laakso graph._
Proof.: Note that as every vertex of \(G\) is on an \(s\)-\(t\) path, if \(G\) is a tree, then it must simply be an \(s\)-\(t\) path. This proves the "moreover" part of our claim.
Proof of (1). Let \(C\subset G\) be an arbitrary cycle. Label the vertices of \(C\) by \(C=(x_{0},x_{1},\ldots,x_{n}=x_{0})\) and suppose that \(d_{C}\) is the induced geodesic metric of \(d_{G}\) on \(C\). We choose \(y\) and \(z\) in \(V(C)\) such that
\[d_{G}(y,s(G))=\min_{x\in V(C)}d_{G}(x,s(G)),\text{ and }d_{G}(z,t(G))=\min_{x \in V(C)}d_{G}(x,t(G)),\]
and observe that \(y\neq z\). Indeed, assume that \(y=z\). Let \(y^{\prime}\in V(C)\) so that \(\{y,y^{\prime}\}\in E(C)\subset E(G)\), and let \(P=(x_{j})_{j=0}^{n}\) be a path between \(s(G)\) and \(t(G)\) containing \(\{y,y^{\prime}\}\). It follows that \(\operatorname{length}_{d_{G}}(P)=d_{G}(s(G),t(G))\). We assume without loss of generality that \(x_{i_{1}}=y\) and \(x_{i_{2}}=y^{\prime}\), with \(0<i_{1}<i_{2}<n\) (if \(i_{2}<i_{1}\) we swap \(s(G)\) with \(t(G)\)). Then
\[d_{G}(s(G),y)+d_{G}(y,t(G)) \geq d_{G}(s(G),t(G))=\operatorname{length}_{d_{G}}(P)\] \[=d_{G}(s(G),y)+d_{G}(y,y^{\prime})+d_{G}(y^{\prime},t(G))\] \[>d_{G}(s(G),y)+d_{G}(y^{\prime},t(G))\geq d_{G}(s(G),y)+d_{G}(y,t (G)),\]
where the last inequality follows from the minimality conditions on \(y=z\). This is a contradiction; thus, we conclude that \(y\neq z\).
After relabeling the vertices of \(C\) we can assume that \(y=x_{0}\) and \(z=x_{l_{1}}\), for some \(0<l_{1}<n\). We let \(A=(a_{j})_{j=0}^{k}\) be a path between \(a_{0}=s(G)\) and \(a_{k}=x_{0}=y\), of shortest metric length, \(B=(b_{j})_{j=0}^{m}\) be a path between \(b_{0}=x_{l_{1}}=z\) and \(b_{m}=t(G)\), of shortest metric length, and let \(Q^{(1)}=(x_{j})_{j=0}^{l_{1}}\) and \(Q^{(2)}=(x_{n-j})_{j=0}^{n-l_{1}}\) (both being paths from \(y\) to \(z\), and together forming the cycle \(C\)). Since \(V(A)\cap V(Q^{(1)})=V(A)\cap V(Q^{(2)})=\{x_{0}\}\) and \(V(B)\cap V(Q^{(1)})=V(B)\cap V(Q^{(2)})=\{x_{l_{1}}\}\) it follows that the graph \(L\) with \(V(L)=\{a_{j}\}_{j=0}^{k}\cup\{x_{j}\}_{j=0}^{l_{1}}\cup\{x_{n-j}\}_{j=0}^{n-l_ {1}}\cup\{b_{j}\}_{j=0}^{m}\) and \(E(L)=E(A)\cup E(Q^{(1)})\cup E(Q^{(2)})\cup E(B)\) is a \((k,l_{1},n-l_{1},m)\)- Laakso graph. Let \(d_{L}\) and
\(d_{C}\) be the induced geodesic metrics on \(V(L)\) and \(V(C)\), respectively. We need to show that they coincide with the restriction of \(d_{G}\) to \(V(L)\) and \(V(C)\), respectively.
We define \(P^{(1)}=A\smile Q^{(1)}\smile B\) and \(P^{(2)}=A\smile Q^{(2)}\smile B\), and recall that by Proposition 2.1 they are both isometric subpaths of \((G,d_{G})\) as well as \((L,d_{L})\). Since they have the same length it follows that \(\operatorname{length}_{d_{G}}(Q^{(1)})=\operatorname{length}_{d_{G}}(Q^{(2)})\).
Let \(u,v\in V(L)\). We need to show that \(d_{L}(u,v)=d_{G}(u,v)\), and in case that \(u,v\in V(C)\), also that \(d_{C}(u,v)=d_{G}(u,v)\). We first consider the case that \(u,v\) are both in \(P^{(i)}\) for some \(i=1,2\). Then it follows that \(d_{G}(u,v)\) and \(d_{L}(u,v)\) are both equal to the induced metric on \(P^{(i)}\), and, thus, equal to each other. If moreover \(u,v\in V(C)\), and thus \(u,v\in V(Q^{(i)})\), the path of shortest length in \(C\) between \(u\) and \(v\) (with respect to \(d_{C}\)!) will be inside \(Q^{(i)}\). Thus, again by Proposition 2.1, it follows that \(d_{C}(u,v)=d_{L}(u,v)=d_{G}(u,v)\).
If \(u,v\) are not both in \(P^{(i)}\), we can assume that \(u=x_{i_{1}}\) for some \(0<i_{1}<l_{1}\) and \(v=x_{i_{2}}\) for some \(l_{1}<i_{2}<n\). Let us assume that \(d_{C}(u,v)\neq d_{G}(u,v)\), and thus, since always \(d_{C}(u,v)\geq d_{G}(u,v)\), that \(d_{C}(u,v)>d_{G}(u,v)\). This means that there is a path \(P=(w_{j})_{j=0}^{n}\) in \(G\) from \(w_{0}=x_{i_{1}}\) to \(w_{n}=x_{i_{2}}\) whose metric length is smaller than \(d_{C}(u,v)\). We can also assume that \(u\in V(Q^{(1)})\) and \(v\in V(Q^{(2)})\) are chosen so that \(n\) is minimal, which implies that \(w_{j}\in V(G)\setminus V(C)\) for \(j=1,2,\ldots,n-1\). Let \(P^{\prime}=(w_{n-j})_{j=0}^{n}\), the path from \(x_{i_{2}}\) to \(x_{i_{1}}\), which reverses \(P\).
It follows that
\[R^{(1)} =A\smile(x_{j})_{j=0}^{i_{1}}\smile P\smile(x_{i_{2}-j})_{j=0}^{i_ {2}-l_{1}}\smile B\text{ and }\] \[R^{(2)} =A\smile(x_{n-j})_{j=0}^{n-i_{2}}\smile P^{\prime}\smile(x_{j})_{j =i_{1}}^{l_{1}}\smile B\]
are both paths from \(s(G)\) to \(t(G)\) and therefore of metric length \(d_{G}(s(G),t(G))\). But on the other hand we have, by Proposition 2.1
\[\operatorname{length}_{d_{G}}(R^{(1)})+\operatorname{length}_{d _{G}}(R^{(2)})= 2\operatorname{length}_{d_{G}}(A)+2\operatorname{length}_{d_{G}}(B)+2 \operatorname{length}_{d_{G}}(P)\] \[+\operatorname{length}_{d_{G}}\bigl{(}(x_{j})_{j=0}^{i_{1}} \bigr{)}+\operatorname{length}_{d_{G}}\bigl{(}(x_{j})_{j=i_{1}}^{l_{1}}\bigr{)}\] \[+\operatorname{length}_{d_{G}}\bigl{(}(x_{n-j})_{j=0}^{n-i_{2}} \bigr{)}+\operatorname{length}_{d_{G}}\bigl{(}(x_{i_{2}-j})_{j=0}^{i_{2}-l_{ 1}}\bigr{)}\] \[=2d_{G}(s(G),t(G))+2\operatorname{length}_{d_{G}}(P),\]
which would mean that \(\operatorname{length}_{d_{G}}(P)=0\), and is thus a contradiction. We deduce that \(d_{C}(u,v)=d_{G}(u,v)\) and, thus, since \(d_{G}(u,v)\leq d_{L}(u,v)\leq d_{C}(u,v)\), also \(d_{L}(u,v)=d_{C}(u,v)\).
Proof of (2). Assume that \(H=(V(H),E(H))\) is a subgraph of \(G\), which is an \(s\)-\(t\) graph with \(s(H)=s(G)\) and \(t(H)=t(G)\). Let \(d_{H}\) be the geodesic metric on \(V(H)\) induced by \(d_{G}\). Since every path in \(H\) is a path in \(G\), it follows that \(H\) together with \(d_{H}\) is a geodesic \(s\)-\(t\) graph. Thus, if \(u,v\in V(H)\) lie on the same path \(Q\) in \(H\) from \(s(G)\) to \(t(G)\), then \(d_{G}(u,v)=d_{Q}(u,v)=d_{H}(u,v)\). If for \(u,v\in V(H)\) there is no such path in \(H\) which contains both \(u\) and \(v\), we can find two paths from \(s(G)\) to \(t(G)\) in \(H\), \(Q^{(1)}=(x_{i})_{i=0}^{l_{1}}\) and \(Q^{(2)}=(y_{j})_{j=0}^{l_{2}}\), so that \(Q^{(1)}\) contains \(u\) but not \(v\) and \(Q^{(2)}\) contains \(v\) but not \(u\). Let \(i_{1}\in\{1,2,\ldots,l_{1}-1\}\) and \(j_{1}\in\{1,2,\ldots,l_{2}-1\}\) be such that \(x_{i_{1}}=u\)
and \(y_{j_{1}}=v\). We define
\[i_{0}=\max\big{\{}0\leq i<i_{1}:x_{i}\in\{y_{j}\}_{j=0}^{j_{1}-1}\big{\}}\text{ and }j_{0}=\max\big{\{}0\leq j<j_{1}:y_{j}\in\{x_{i}\}_{i=0}^{i_{1}-1}\big{\}}\]
and
\[i_{2}=\min\big\{i_{1}<i\leq l_{1}:x_{i}\in\{y_{j}\}_{j=j_{1}+1}^{l_{2}}\big\}\text{ and }j_{2}=\min\big\{j_{1}<j\leq l_{2}:y_{j}\in\{x_{i}\}_{i=i_{1}+1}^{l_{1}}\big\}.\]
Then it follows that \(x_{i_{0}}=y_{j_{0}}\) and \(x_{i_{2}}=y_{j_{2}}\), and thus
\[C=(x_{i_{0}},x_{i_{0}+1},\ldots,x_{i_{2}}=y_{j_{2}},y_{j_{2}-1},y_{j_{2}-2}, \ldots y_{j_{0}}=x_{i_{0}})\]
is a cycle in \(H\). Denoting by \(d_{C}\) the geodesic metric on \(C\) induced by \(d_{G}\), which is the same metric induced by \(d_{H}\) (because they are both generated by the same weights), we deduce from our previous work that \(d_{H}(u,v)=d_{C}(u,v)=d_{G}(u,v)\).
### Measured geodesic \(s\)-\(t\) graphs
A triple \((G,d_{G},\nu_{G})\), with \((G,d_{G})\) being a geodesic \(s\)-\(t\) graph, and \(\nu_{G}\) being a probability measure on \(E(G)\), is called a _measured geodesic \(s\)-\(t\) graph_.
**Examples 2.4**.: For the three elementary geodesic \(s\)-\(t\) graphs, introduced in Examples 2.2, we choose the following probabilities on their edges.
1. Let \(P=(x_{j})_{j=0}^{k}\) be a path with a geodesic metric \(d_{P}\) generated by a weight function \(w:E(P)\to\mathbb{R}^{+}\). Then put \[\nu_{P}(e)=\frac{w(e)}{\sum_{j=1}^{k}w(\{x_{j-1},x_{j}\})}=\frac{d_{P}(e)}{d_ {P}(s(P),t(P))}.\]
2. Let \(C=(x_{0},x_{1},\ldots,x_{n}=x_{0})\) together with a geodesic metric \(d_{C}\) a geodesic \(s\)-\(t\) cycle, and let \(1\leq m\leq n-1\) be such that \[d_{C}(x_{0},x_{m})=\sum_{j=1}^{m}d_{C}(x_{j-1},x_{j})=\sum_{j=m+1}^{n}d_{C}(x_{ j-1},x_{j})=\frac{1}{2}\sum_{j=1}^{n}d_{C}(x_{j-1},x_{j}).\] and let \(s(C)=x_{0}\), \(t(C)=x_{m}\). Then we put \[\nu_{C}(e)=\frac{1}{2}\frac{d_{C}(e)}{d_{C}(x_{0},x_{m})}.\]
3. Let \(L=(V(L),E(L))\) be a \((k,l_{1},l_{2},m)\)-Laakso graph, with \[V(L)=\{x_{i}\}_{i=0}^{k}\cup\{y_{i}^{(1)}\}_{i=0}^{l_{1}}\cup\{y_{i}^{(2)}\}_{ i=0}^{l_{2}}\cup\{z_{i}\}_{i=0}^{m},\] where \[x_{k}\equiv y_{0}^{(1)}\equiv y_{0}^{(2)},\text{ and }y_{l_{1}}^{(1)}\equiv y_{l_{2}}^{(2)}\equiv z_{0}.\] \[E(L)=\big{\{}\{x_{i-1},x_{i}\}:i=1,2,\ldots,k\big{\}}\cup\big{\{} \{z_{i-1},z_{i}\}:i=1,2,\ldots,m\big{\}}\] \[\cup\big{\{}\{y_{i-1}^{(1)},y_{i}^{(1)}\}:i=1,2,\ldots,l_{1}\big{\}} \cup\big{\{}\{y_{i-1}^{(2)},y_{i}^{(2)}\}:i=1,2,\ldots,l_{2}\big{\}}.\]
For a weight function \(w:E(L)\to\mathbb{R}^{+}\) satisfying
\[\sum_{i=1}^{k}w(x_{i-1},x_{i})+\sum_{i=1}^{l_{1}}w(y_{i-1}^{(1)},y_{i}^{(1)})+\sum_{i=1}^{m}w(z_{i-1},z_{i})=\sum_{i=1}^{k}w(x_{i-1},x_{i})+\sum_{i=1}^{l_{2}}w(y_{i-1}^{(2)},y_{i}^{(2)})+\sum_{i=1}^{m}w(z_{i-1},z_{i}),\] which generates a geodesic metric \(d_{L}\) turning \(L\) into a geodesic \(s\)-\(t\) graph with \(s(L)=x_{0}\) and \(t(L)=z_{m}\), we define \(\nu_{L}\) by \[\nu_{L}(\{x_{i-1},x_{i}\})=\frac{d_{L}(x_{i-1},x_{i})}{d_{L}(s(L),t(L))}\text{ for }i=1,2,\ldots,k,\] \[\nu_{L}(\{y_{i-1}^{(1)},y_{i}^{(1)}\})=\frac{1}{2}\,\frac{d_{L}(y_{i-1}^{(1)},y_{i}^{(1)})}{d_{L}(s(L),t(L))}\text{ for }i=1,2,\ldots,l_{1},\] \[\nu_{L}(\{y_{i-1}^{(2)},y_{i}^{(2)}\})=\frac{1}{2}\,\frac{d_{L}(y_{i-1}^{(2)},y_{i}^{(2)})}{d_{L}(s(L),t(L))}\text{ for }i=1,2,\ldots,l_{2},\] \[\nu_{L}(\{z_{i-1},z_{i}\})=\frac{d_{L}(z_{i-1},z_{i})}{d_{L}(s(L),t(L))}\text{ for }i=1,2,\ldots,m.\]
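As a sanity check (our own computation), \(\nu_{L}\) is indeed a probability measure: denoting by \(b\) the common metric length of the two branches, any \(s\)-\(t\) path consists of the \(x\)-path, one branch, and the \(z\)-path, so

\[\sum_{e\in E(L)}\nu_{L}(e)=\frac{\sum_{i=1}^{k}d_{L}(x_{i-1},x_{i})+\frac{b}{2}+\frac{b}{2}+\sum_{i=1}^{m}d_{L}(z_{i-1},z_{i})}{d_{L}(s(L),t(L))}=\frac{d_{L}(s(L),t(L))}{d_{L}(s(L),t(L))}=1.\]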
### Expected Distortion
Assume that \(G\) and \(H\) are simple, connected graphs and \(d_{G}\),\(d_{H}\) are geodesic metrics on \(G\) and \(H\), respectively. If \(\nu_{G}\) is a probability on \(E(G)\) and \(f:V(G)\to V(H)\) then the _expected distortion of \(f\) with respect to \((G,d_{G},\nu_{G})\)_ is defined by
\[\mathbb{D}_{\nu}(f)=\mathbb{E}_{\nu}\Big{(}\frac{d_{H}\circ f}{d_{G}}\Big{)}= \sum_{e\in E(G)}\frac{d_{H}(f(e))}{d_{G}(e)}\nu(e).\]
### Stochastic Embeddings
Let \(\mathcal{M}\) be a class of metric spaces, and let \((X,d_{X})\) be a metric space. A family \((f_{i})_{i=1}^{n}\) of maps \(f_{i}:X\to M_{i}\), with \((M_{i},d_{i})\in\mathcal{M}\), together with numbers \(\mathbb{P}=(p_{i})_{i=1}^{n}\subset[0,1]\), such that \(\sum_{i=1}^{n}p_{i}=1\), is called a _\(D\)-stochastic embedding of \(X\) into elements of the class \(\mathcal{M}\)_ if for all \(x,y\in X\) and \(i=1,\ldots,n\)

\[d_{X}(x,y)\leq d_{i}(f_{i}(x),f_{i}(y)),\qquad\mathbb{E}_{\mathbb{P}}\big(d_{i}(f_{i}(x),f_{i}(y))\big)=\sum_{i=1}^{n}p_{i}d_{i}\big(f_{i}(x),f_{i}(y)\big)\leq Dd_{X}(x,y).\]
In that case we say that \((X,d_{X})\) is _\(D\)-stochastically embeddable into \(\mathcal{M}\)_. We denote
\[\mathcal{S}_{\mathcal{M}}(X,d_{X}):=\inf\{D\geq 1:\text{ }(X,d_{X})\text{ is }D-\text{stochastically embeddable into }\mathcal{M}\}\]
and call the value \(\mathcal{S}_{\mathcal{M}}(X,d_{X})\)_the stochastic distortion of_\((X,d_{X})\) into \(\mathcal{M}\).
By following the same ideas as in [1] one could deduce from the Min-Max Theorem that
\[\mathcal{S}_{\mathcal{M}}(X,d_{X})=\sup_{\nu}\inf_{(f,M)}\sum_{x,y\in X,x\neq y }\nu(\{x,y\})\frac{d_{M}(f(x),f(y))}{d_{X}(x,y)},\]
where the sup is taken over all probabilities on the doubletons of \(X\) with finite support, and the \(\inf\) is taken over all pairs \((f,M)\) with \((M,d_{M})\) being in the class \(\mathcal{M}\) and \(f:(X,d_{X})\to(M,d_{M})\) being expansive. We will only need that the left side is not smaller than the right side, and this only for geodesic graphs, a result which is much more elementary and will be observed in the next Proposition.
**Proposition 2.5**.: _Let \((G,d_{G})\), be a geodesic graph, and let \(\nu_{G}\) be a probability on \(E(G)\). Suppose that \(\mathcal{M}\) is a family of geodesic graphs._
_If \((H_{i},d_{H_{i}})\in\mathcal{M}\), for \(i=1,2,\ldots,n\), and if \((f_{i})_{i=1}^{n}\), \(f_{i}:V(G)\to V(H_{i})\), together with \(\mathbb{P}=(p_{i})_{i=1}^{n}\subset(0,1]\) forms a \(D\)-stochastic embedding of \(G\) into \((H_{i})_{i=1}^{n}\), then \(D\geq c_{\nu}(G,d_{G})\), where_
\[c_{\nu}(G,d_{G}):=\inf\left\{\mathbb{D}_{\nu}(f)\Big{|}(H,d_{H})\in\mathcal{M},f:(V(G),d_{G})\to(V(H),d_{H})\text{ expansive}\right\},\]
_and, thus,_
\[D\geq c(G,d_{G}):=\max\big{\{}c_{\nu}(G,d_{G}):\nu\text{ is a probability on }E(G)\big{\}}.\]
Proof.: We observe for a probability \(\nu\) on \(E(G)\) that
\[D \geq\max_{x,y\in V(G),x\neq y}\frac{\sum_{i=1}^{n}p_{i}d_{H_{i}}( f_{i}(x),f_{i}(y))}{d_{G}(x,y)}\] \[\geq\max_{e=\{x,y\}\in E(G)}\frac{\sum_{i=1}^{n}p_{i}d_{H_{i}}( f_{i}(x),f_{i}(y))}{d_{G}(x,y)}\] \[\geq\sum_{e=\{x,y\}\in E(G)}\frac{\nu(e)}{d_{G}(x,y)}\sum_{i=1}^{ n}p_{i}d_{H_{i}}(f_{i}(x),f_{i}(y))\] \[=\sum_{i=1}^{n}p_{i}\sum_{e=\{x,y\}\in E(G)}\frac{\nu(e)}{d_{G}( x,y)}d_{H_{i}}(f_{i}(x),f_{i}(y))\] \[=\sum_{i=1}^{n}p_{i}\mathbb{D}_{\nu}(f_{i})\geq c_{\nu}(G,d_{G}).\]
If \((X,d_{X})\) is a finite metric space with \(|X|=n\), then by a result of Fakcharoenphol, Rao, and Talwar a stochastic embedding with \(O(\log(n))\) stochastic distortion into the family of geodesic trees always exists.
**Theorem 2.6**.: _[_6_]_ _If \((X,d_{X})\) is a finite metric space with \(|X|=n\) and \(\mathcal{T}\) denotes the family of weighted trees with the corresponding induced geodesic metrics, then_
\[S_{\mathcal{T}}(X,d_{X})=O(\log(n))\]
Thus, in light of Theorem 2.6, in proving the Main Theorem it suffices to give a logarithmic lower bound for slash powers of geodesic \(s\)-\(t\) graphs that contain a cycle. We accomplish this by calculating the expected distortion with respect to a probability defined on the edges.
We shall utilize Proposition 2.5 in the case that \(\mathcal{M}\) is the family of geodesic trees.
### Slash Products of \(s\)-\(t\) Graphs
(cf. [10]). Assume \(G\) and \(H\) are directed \(s\)-\(t\) graphs (_i.e.,_ directed graphs whose edges are oriented in a way so that every edge is on a directed path from \(s(G)\) to \(t(G)\) and \(s(H)\) to \(t(H)\) respectively).
The vertices of \(H\oslash G\) are defined by
\[V(H\oslash G)=\{(e,v):e\in E_{d}(H),v\in V(G)\},\]
where we identify the elements \((e,t(G))\) and \((\tilde{e},s(G))\), for \(e,\tilde{e}\in E_{d}(H)\), for which \(e^{+}=\tilde{e}^{-}\). We also consider \(V(H)\) to be a subset of \(V(H\oslash G)\), by making for \(u\in V(H)\) the following identification:
\[u\equiv(e,s(G)),\text{ whenever }e\in E_{d}(H),\text{ for which }e^{-}=u,\text{ and}\] \[u\equiv(\tilde{e},t(G)),\text{ whenever }\tilde{e}\in E_{d}(H),\text{ for which }\tilde{e}^{+}=u.\]
In particular
\[s(H)\equiv(e,s(G)),\text{ whenever }e\in E_{d}(H),\text{ with }e^{-}=s(H),\text{ and}\] \[t(H)\equiv(\tilde{e},t(G)),\text{ whenever }\tilde{e}\in E_{d}(H),\text{ with }\tilde{e}^{+}=t(H).\]
The directed edges of \(H\oslash G\) are defined by:
\[E_{d}(H\oslash G)=\big{\{}\big{(}(e,f^{-}),(e,f^{+})\big{)}:e\in E_{d}(H),f\in E _{d}(G)\big{\}}\]
the undirected edges by
\[E(H\oslash G) =\big{\{}\big{\{}(e,f^{-}),(e,f^{+})\big{\}}:e\in E(H),f\in E_{d}( G)\big{\}}\] \[=\big{\{}\big{\{}(e,u),(e,v)\big{\}}:e\in E(H),\{u,v\}\in E(G) \big{\}}.\]
We abbreviate for \(e\in E_{d}(H)\), \(f\in E_{d}(G)\), the edge \(\big{(}(e,f^{-}),(e,f^{+})\big{)}\) (of \(H\oslash G\)), by \(e\oslash f\), and for \(v\in V(G)\) the vertex \((e,v)\) by \(e\oslash v\).
**Remark**.: Note that for \(e\in E(H)\),
\[e\oslash G=(e\oslash V(G),e\oslash E(G))=\big{(}\{(e\oslash v):v\in V(G)\},\{(e \oslash f):f\in E(G)\}\big{)}\]
is a subgraph of \(H\oslash G\), which is graph isomorphic to \(G\), via the embedding \(\psi_{e}:V(G)\to V(H\oslash G),\quad v\mapsto e\oslash v\).
Therefore, one may think of \(H\oslash G\) as being obtained by replacing each edge of \(H\) by a copy of \(G\). More formally, if \(H\) is a directed graph, \(e\in E_{d}(H)\), and \(G\) is an \(s\)-\(t\) graph, then we mean by the graph _obtained by replacing the edge \(e\) of \(H\) by a copy of \(G\)_ the graph \(H^{\underline{e}}G\), defined by
\[V(H^{\underline{e}}G) =V(H)\cup V(G),\] \[\text{ where we identify }e^{-}\equiv s(G)\text{ and }e^{+}\equiv t(G),\] \[E(H^{\underline{e}}G) =\big{(}E(H)\setminus\{e\}\big{)}\cup E(G).\]
Using the identification of vertices of a graph \(H\) with elements of a slash product \(H\oslash G\) we observe the following proposition.
**Proposition 2.7**.: _If \(H\) and \(G\) are \(s\)-\(t\) graphs then \(H\oslash G\) is also an \(s\)-\(t\) graph, with \(s(H\oslash G)=s(H)\) and \(t(H\oslash G)=t(H)\)._
Proof.: Note that every path from \(s(H)\) to \(t(H)\) in \(H\oslash G\) is a path which is obtained from a path \((y_{i})_{i=0}^{l}\) in \(H\) from \(s(H)\) to \(t(H)\), by replacing each edge \(e_{i}=\{y_{i-1},y_{i}\}\) by a path in \(H\oslash G\) of the form \(\big(e_{i},x_{j}^{(i)}\big)_{j=0}^{l_{i}}\), where \((x_{j}^{(i)})_{j=0}^{l_{i}}\) is a path from \(s(G)\) to \(t(G)\) in \(G\), for \(i=1,2,\ldots,l\).
Assume that \((H,d_{H})\) and \((G,d_{G})\) are normalized geodesic \(s\)-\(t\) graphs. On \(V(H\oslash G)\) we let \(d_{H\oslash G}\) be the geodesic metric on \(V(H\oslash G)\) generated by the weight function defined on all \(e\oslash f\in E(H\oslash G)\) by
\[w_{H\oslash G}(e\oslash f)=d_{H}(e)\cdot d_{G}(f).\]
It follows then that \((H\oslash G,d_{H\oslash G})\) is a normalized geodesic \(s\)-\(t\) graph. Assume, moreover that \(\nu_{G}\) and \(\nu_{H}\) are probabilities on \(E(G)\) and \(E(H)\), respectively. We consider the _product probability \(\nu_{H\oslash G}\) on \(E_{d}(H\oslash G)\)_, which is given by
\[\nu_{H\oslash G}(e\oslash f)=\nu_{H}(e)\cdot\nu_{G}(f),\text{ for }e\in E(H)\text{ and }f\in E(G).\]
It follows then that \((H\oslash G,d_{H\oslash G},\nu_{H\oslash G})\) is a measured normalized geodesic \(s\)-\(t\) graph.
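As a concrete instance (a standard example, our own computation of the constants): if \(H=G=D_{1}\) is the diamond, i.e. the \((0,2,2,0)\)-Laakso graph with all edge weights \(\frac{1}{2}\) and \(\nu(e)=\frac{1}{4}\) for each edge, then \(H\oslash G\) is the second diamond graph \(D_{2}\): each of the four edges of \(H\) is replaced by a copy of \(D_{1}\), giving \(16\) edges, each of weight \(w_{H\oslash G}(e\oslash f)=\frac{1}{2}\cdot\frac{1}{2}=\frac{1}{4}\) and product measure \(\nu_{H\oslash G}(e\oslash f)=\frac{1}{4}\cdot\frac{1}{4}=\frac{1}{16}\).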
We state some easy to prove and well known facts about slash products:
**Proposition 2.8**.: _[_10_, Lemma 2.3]_ _Taking slash products is an associative operation, i.e. if \(G\), \(H\) and \(K\) are \(s\)-\(t\) graphs then_
\[\Psi: V\big{(}K\oslash(H\oslash G)\big{)}\to V\big{(}(K\oslash H)\oslash G \big{)},\] \[e\oslash(f\oslash v)\mapsto(e\oslash f)\oslash v,\text{ for }e\in E(K) \text{, }f\in E(H)\text{, and }v\in V(G).\]
_is a graph isomorphism. Identifying graphs \(K\oslash(H\oslash G)\) and \((K\oslash H)\oslash G\) we denote them simply by \(K\oslash H\oslash G\) and \(e\oslash f\oslash g\) instead of \((e\oslash f)\oslash g\) and \(e\oslash(f\oslash g)\)._
_Moreover, if \((G,d_{G})\), \((H,d_{H})\) and \((K,d_{K})\) are geodesic \(s\)-\(t\) graphs then_
\[\Psi:\big(V\big(K\oslash(H\oslash G)\big),d_{K\oslash(H\oslash G)}\big)\to\big(V\big((K\oslash H)\oslash G\big),d_{(K\oslash H)\oslash G}\big)\]
_is also a metric isometry, and_
\[\nu_{K\oslash(H\oslash G)}(e\oslash(f\oslash g)) =\nu_{(K\oslash H)\oslash G}((e\oslash f)\oslash g)=\nu_{K}(e)\nu_{ H}(f)\nu_{G}(g),\] \[\text{ for }e\in E(K)\text{, }f\in E(H)\text{, and }g\in E(G).\]
For an \(s\)-\(t\) graph \(G\) we define the \(n\)_-th slash power of \(G\)_, by induction: \(G^{\oslash 1}=G\), and assuming \(G^{\oslash n}\) is defined we put \(G^{\oslash(n+1)}=G\oslash G^{\oslash n}\). It follows from the above remark that \(G^{\oslash n}\) is also an \(s\)-\(t\) graph with \(s(G^{\oslash n})=s(G)\) and \(t(G^{\oslash n})=t(G)\) (using the identification of \(V(G)\) with a subset of \(V(G^{\oslash n})\) introduced in the previous subsection). Using the graph-isomorphisms defined in Proposition 2.8, we can write the elements \(e\in E(G^{\oslash n})\) as \(e=e_{n}\oslash e^{\prime}\), as \(e=\tilde{e}\oslash e_{1}\), or as \(e=e_{1}\oslash e_{2}\oslash\ldots e_{n}\), with \(e_{j}\in E(G)\), for \(j=1,2\ldots,n\), and \(e^{\prime},\tilde{e}\in E(G^{\oslash(n-1)})\). We can write the elements \(v\in V(G^{\oslash n})\) as \(v=e_{n}\oslash w\), or as \(e^{\prime}\oslash z\), with \(e_{n}\in E(G)\), \(w\in V(G^{\oslash(n-1)})\), \(e^{\prime}\in E(G^{\oslash(n-1)})\) and \(z\in V(G)\). If \((G,d_{G},\nu_{G})\) is a measured normalized metric \(s\)-\(t\) graph, we define metric \(d_{G^{\oslash n}}\) on \(V(G^{\oslash n})\) and the probability \(\nu_{G^{\oslash n}}\) also inductively by \(d_{G^{\oslash 1}}=d_{G}\), \(\nu_{G^{\oslash 1}}=\nu_{G}\), and \(d_{G^{\oslash n}}(e)=d_{G}(e_{n})d_{G^{\oslash(n-1)}}(e^{\prime})\) and \(\nu_{G^{\oslash n}}(e)=\nu_{G}(e_{n})\nu_{G^{\oslash(n-1)}}(e^{\prime})\), for \(e=e_{n}\oslash e^{\prime}\)
with \(e^{\prime}\in E(G^{\oslash(n-1)})\) and \(e_{n}\in E(G)\).
**Proposition 2.9**.: _Let \((G,d_{G})\) and \((H,d_{H})\) be normalized geodesic \(s\)-\(t\) graphs, and let \(G^{\prime}\) and \(H^{\prime}\) be \(s\)-\(t\) subgraphs of \(G\) and \(H\), respectively. Then \(H^{\prime}\oslash G^{\prime}\) is an \(s\)-\(t\) subgraph of \(H\oslash G\), and thus, by Lemma 2.3, it is an isometric subgraph of \(H\oslash G\)._
_In particular, for \(n\in\mathbb{N}\), \(G^{\prime\oslash n}\) is an \(s\)-\(t\), and thus isometric, subgraph of \(G^{\oslash n}\)._
Proof.: The claim follows from the fact that every \(s\)-\(t\) path in \(H^{\prime}\oslash G^{\prime}\) is a concatenation of paths \(P_{i}=\big((x_{i-1},x_{i})\oslash y_{j}^{(i)}\big)_{j=0}^{l_{i}}\), \(i=1,2,\ldots,l\), where \((y_{j}^{(i)})_{j=0}^{l_{i}}\) is a path in \(G^{\prime}\) from \(y_{0}^{(i)}=s(G)\) to \(y_{l_{i}}^{(i)}=t(G)\), for every \(i=1,2,\ldots,l\), and \((x_{i})_{i=0}^{l}\) is a path in \(H^{\prime}\) from \(s(H)\) to \(t(H)\).
Since the metric on \(H^{\prime}\oslash G^{\prime}\) induced by \(d_{H\oslash G}\) is the geodesic metric generated by the weight function \(w^{\prime}:E(H^{\prime}\oslash G^{\prime})\to\mathbb{R}^{+}\), \(e\oslash f\mapsto d_{H}(e)\cdot d_{G}(f)\), Lemma 2.3 implies that \(H^{\prime}\oslash G^{\prime}\) is an isometric subgraph of \(H\oslash G\).
The following shows that a high enough slash power of a generalized Laakso graph will eventually contain a balanced Laakso subgraph. This will allow us to only consider the balanced case in our analysis in Section 4.
**Lemma 2.10**.: _Let \(L=(V(L),E(L))\) be a \((k,l_{1},l_{2},m)\)-Laakso graph, with \(l_{1}<l_{2}\). Put_
\[n_{0}=2+\Bigg{\lfloor}\frac{\ln(l_{2})-\ln(l_{1})}{\ln(k+m+l_{2})-\ln(k+m+l_{ 1})}\Bigg{\rfloor}.\]
_Then \((L^{\oslash n_{0}},d_{L^{\oslash n_{0}}})\) contains a subgraph which is a balanced generalized Laakso graph \(L^{\prime}\) with \(s(L^{\prime})=s(L^{\oslash n_{0}})\) and \(t(L^{\prime})=t(L^{\oslash n_{0}})\). Moreover, if \((L,d_{L})\) is also a normalized geodesic \(s\)-\(t\) graph, \(c_{0}\) being the metric length of its cycle, then \(L^{\prime}\) with the induced geodesic metric \(d_{L^{\prime}}\) on edges of \(L^{\prime}\) will be an isometric geodesic subgraph of \((L^{\oslash n_{0}},d_{L^{\oslash n_{0}}})\), whose cycle has metric length \(c_{0}\)._
Proof.: We write as in Example 2.4 (c) the graph \(L\) as \(L=(V(L),E(L))\), with
\[V(L)=\{x_{i}\}_{i=0}^{k}\cup\{y_{i}^{(1)}\}_{i=0}^{l_{1}}\cup\{y_{i}^{(2)}\}_ {i=0}^{l_{2}}\cup\{z_{i}\}_{i=0}^{m},\]
where we make the following identifications:
\[x_{k}\equiv y_{0}^{(1)}\equiv y_{0}^{(2)},\text{ and }y_{l_{1}}^{(1)}\equiv y_{ l_{2}}^{(2)}\equiv z_{0}.\]
\[E(L)=\big\{\{x_{i-1},x_{i}\}:i=1,2,\ldots,k\big\}\cup\big\{\{z_{i-1},z_{i}\}:i=1,2,\ldots,m\big\}\cup\big\{\{y_{i-1}^{(1)},y_{i}^{(1)}\}:i=1,2,\ldots,l_{1}\big\}\cup\big\{\{y_{i-1}^{(2)},y_{i}^{(2)}\}:i=1,2,\ldots,l_{2}\big\}.\]
We define the following four paths in \(L\)
\[P_{1}^{(1)}=(x_{i})_{i=0}^{k}\smile(y_{i}^{(1)})_{i=0}^{l_{1}}\smile(z_{i})_{i=0}^{m}\text{ and }P_{1}^{(2)}=(x_{i})_{i=0}^{k}\smile(y_{i}^{(2)})_{i=0}^{l_{2}}\smile(z_{i})_{i=0}^{m},\]
\[Q_{1}^{(1)}=(y_{i}^{(1)})_{i=0}^{l_{1}}\text{ and }Q_{1}^{(2)}=(y_{i}^{(2)})_{i=0}^{l_{2}}.\]
For \(n\in\mathbb{N}\), \(n\geq 2\), we define inductively the following paths \(Q_{n}^{(1)}\), \(Q_{n}^{(2)}\), which are subgraphs of \(L^{\oslash n}\): \(Q_{n}^{(1)}\) is obtained by replacing each edge of \(Q_{n-1}^{(1)}\), by \(P_{1}^{(2)}\) and
\(Q_{n}^{(2)}\) is obtained by replacing each edge of \(Q_{n-1}^{(2)}\) by the path \(P_{1}^{(1)}\). Note that \(Q_{n}^{(1)}\) and \(Q_{n}^{(2)}\) generate a cycle in \(L^{\oslash n}\) of metric length \(c_{0}\).
Denote the graph lengths of \(Q_{n}^{(1)}\) and \(Q_{n}^{(2)}\) by \(M_{n}\) and \(N_{n}\). From a simple induction argument it follows that
\[M_{n}=(k+m+l_{2})^{n-1}l_{1}\text{ and }N_{n}=(k+m+l_{1})^{n-1}l_{2}.\]
and it is easy to see that for the above defined number \(n_{0}\)
\[M_{n_{0}-1}\leq N_{n_{0}-1}<N_{n_{0}}<M_{n_{0}}.\]
For \(i=0,1,2,\ldots,M_{n_{0}-1}\), let \(Q_{n_{0},i}^{(1)}\) be the path which is obtained by replacing \(i\) edges of \(Q_{n_{0}-1}^{(1)}\) by \(P_{1}^{(2)}\) and \(M_{n_{0}-1}-i\) edges by \(P_{1}^{(1)}\). Note that \(Q_{n_{0},i}^{(1)}\) is a subgraph of \(L^{\oslash n_{0}}\) which together with \(Q_{n_{0}}^{(2)}\) generates a cycle in \(L^{\oslash n_{0}}\). Let \(M_{n_{0},i}\) be the graph length of \(Q_{n_{0},i}^{(1)}\) and note that
\[M_{n_{0}-1}<M_{n_{0},i}=(k+m)M_{n_{0}-1}+il_{2}+(M_{n_{0}-1}-i)l_{1}\leq M_{n_{ 0}}.\]
We will show that for some choice of \(0\leq i\leq M_{n_{0}-1}\) it follows that \(M_{n_{0},i}=N_{n_{0}}\). Since
\[M_{n_{0},i}-N_{n_{0}} =(k+m+l_{1})(M_{n_{0}-1}-N_{n_{0}-1})+i(l_{2}-l_{1})\] \[=\begin{cases}(k+m+l_{2})M_{n_{0}-1}-(k+m+l_{1})N_{n_{0}-1}&\text{ if }i=M_{n_{0}-1},\\ (k+m+l_{1})(M_{n_{0}-1}-N_{n_{0}-1})&\text{ if }i\!=\!0,\end{cases}\] \[=\begin{cases}M_{n_{0}}-N_{n_{0}}>0&\text{ if }i=M_{n_{0}-1},\\ (k+m+l_{1})(M_{n_{0}-1}-N_{n_{0}-1})\leq 0&\text{ if }i=0,\end{cases}\]
it is enough to show that \(M_{n_{0}-1}-N_{n_{0}-1}\) is divisible by \(l_{2}-l_{1}\), in order to deduce that \(M_{n_{0},i}=N_{n_{0}}\) for an appropriate choice of \(i\). To verify this we write
\[M_{n_{0}-1}-N_{n_{0}-1} =(k+m+l_{2})^{n_{0}-2}l_{1}-(k+m+l_{1})^{n_{0}-2}l_{2}\] \[=\sum_{j=0}^{n_{0}-2}(l_{2}^{j}\cdot l_{1}-l_{1}^{j}\cdot l_{2}) \binom{n_{0}-2}{j}(k+m)^{n_{0}-2-j}.\]
For \(j=0\), \(l_{2}^{j}\cdot l_{1}-l_{1}^{j}\cdot l_{2}=l_{1}-l_{2}\), and for \(j\geq 1\) it follows that \(l_{2}^{j}\cdot l_{1}-l_{1}^{j}\cdot l_{2}=l_{1}\cdot l_{2}(l_{2}^{j-1}-l_{1}^{j-1})\) is also an integer multiple of \(l_{2}-l_{1}\). We conclude therefore that for some appropriate choice of \(0\leq i\leq M_{n_{0}-1}\) the paths \(Q_{n_{0},i}^{(1)}\) and \(Q_{n_{0}}^{(2)}\) have equal graph length and generate a cycle \(C\) which is a subgraph of \(L^{\oslash n_{0}}\). Finally, let \(R_{1}\) be any path in \(L^{\oslash n_{0}}\) from \(s(L^{\oslash n_{0}})\) to the nearest point of \(C\), and \(R_{2}\) any path from the point of \(C\) which is nearest to \(t(L^{\oslash n_{0}})\), to \(t(L^{\oslash n_{0}})\); then \(R_{1}\), \(R_{2}\) and \(C\) generate a balanced generalized Laakso graph which is a subgraph of \(L^{\oslash n_{0}}\). The "moreover" portion of the statement is immediate from Lemma 2.3.
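To make the construction concrete, consider (our own worked instance) the \((0,2,3,0)\)-Laakso graph, so \(k=m=0\), \(l_{1}=2\), \(l_{2}=3\). Then \(n_{0}=2+\big\lfloor\frac{\ln 3-\ln 2}{\ln 3-\ln 2}\big\rfloor=3\), \(M_{n}=2\cdot 3^{n-1}\) and \(N_{n}=3\cdot 2^{n-1}\), so \(M_{2}=N_{2}=6\) and \(N_{3}=12<18=M_{3}\). Here \(M_{3,i}=2\cdot 6+i\,(3-2)=12+i\), and the choice \(i=0\) already gives \(M_{3,0}=12=N_{3}\): replacing every edge of \(Q_{2}^{(1)}\) by \(P_{1}^{(1)}\) produces a path of the same graph length as \(Q_{3}^{(2)}\), and together they generate the desired balanced cycle in \(L^{\oslash 3}\).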
**Remark**.: Lemma 2.10 immediately implies that for all \(n\geq n_{0}\), \((L^{\oslash n},d_{L^{\oslash n}})\) contains a balanced generalized Laakso \(s\)-\(t\) subgraph. Indeed, if we have such a graph, say \(L_{1}\subset L^{\oslash n_{1}}\), for some \(n_{1}\in\mathbb{N}\), then by the definition of slash powers the graph obtained from \(L_{1}\) by replacing each edge with the same \(s\)-\(t\) path from \(L\) will be a balanced generalized Laakso subgraph \(L_{2}\subset L^{\oslash(n_{1}+1)}\).
## 3. Embedding Cycles into Trees
The following is a generalization and sharpening of a result by Gupta [7, Lemma 7.1]. The proof, which we include for completeness, is similar.
**Lemma 3.1**.: _Let \((V(C),E(C),d_{C})\) be a geodesic cycle, and let_
\[c_{0}=\operatorname{length}_{d_{C}}(C)=\sum_{e\in E(C)}d_{C}(e)\]
_be its length._
_Let \((T,d_{T})\) be a geodesic tree. Assume that there is an expansive map_
\[\Psi:(V(C),d_{C})\to(V(T),d_{T})\]
_(meaning that \(d_{C}(u,v)\leq d_{T}(\Psi(u),\Psi(v))\), for \(u,v\in V(C)\))._
_Then there exists an edge \(e=\{u,v\}\in E(C)\), for which_
\[d_{T}(\Psi(u),\Psi(v))\geq\frac{c_{0}-d_{C}(u,v)}{8}.\]
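Before the proof, a quick sanity check of the bound (our own example): let \(C\) be the \(4\)-cycle on \(\{0,1,2,3\}\) with unit weights, so \(c_{0}=4\), and let \(T\) be the path \(0,1,2,3\) with unit weights, with \(\Psi\) the identity (which is expansive). Every edge of \(C\) other than \(\{3,0\}\) is preserved, while \(d_{T}(3,0)=3=c_{0}-d_{C}(3,0)\); this is the edge whose stretch the lemma guarantees, here even without the factor \(8\), which is only needed for general trees.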
For the proof of Lemma 3.1 we recall the following important result by Gupta.
**Theorem 3.2**.: _[_7_, Theorem 1.1]_ _Let \(T=(V(T),E(T),d_{T})\) be a geodesic tree and let \(V^{\prime}\subset V(T)\). Then there is a tree \(T^{\prime}=(V(T^{\prime}),E(T^{\prime}),d_{T^{\prime}})\) with \(V(T^{\prime})=V^{\prime}\), and a geodesic metric \(d_{T^{\prime}}\) on \(T^{\prime}\), so that_
\[1\leq\frac{d_{T^{\prime}}(x,y)}{d_{T}(x,y)}\leq 8,\text{ for }x,y\in V^{\prime}. \tag{1}\]
Proof of Lemma 3.1.: In view of Theorem 3.2 it is enough to show the following claim (indeed, given an expansive \(\Psi\) as in the lemma, apply Theorem 3.2 to \(V^{\prime}=\Psi(V(C))\subset V(T)\) and pull the resulting tree back to \(V(C)\) via the injective map \(\Psi\); the pulled-back tree metric dominates \(d_{C}\) and exceeds \(d_{T}(\Psi(\cdot),\Psi(\cdot))\) by a factor of at most \(8\), so an edge as in the claim below yields the edge required in the lemma):
Let \((T,d_{T})\) be a geodesic tree on \(V(C)\), with the property that
\[d_{T}(u,v)\geq d_{C}(u,v),\text{ for all }f=\{u,v\}\in E(T).\]
Then there exists an edge \(e=\{x,y\}\in E(C)\) for which
\[d_{T}(x,y)\geq c_{0}-d_{C}(x,y).\]
For each \(f=\{u,v\}\in E(T)\) we can assume that
\[d_{T}(u,v)=d_{C}(u,v)=\sum_{j=1}^{m(f)}d_{C}(x_{j-1}(f),x_{j}(f))\leq c_{0}/2, \tag{2}\]

where \((x_{i}(f))_{i=0}^{m(f)}\) is a shortest path from \(u\) to \(v\) in \(C\) (which is unique if \(d_{C}(u,v)<c_{0}/2\), and if \(d_{C}(u,v)=c_{0}/2\), there are two such paths). Indeed, otherwise we could replace \(d_{T}\) by the geodesic metric on \(V(T)\) which is generated by the weight function \(W:E(T)\to(0,c_{0}]\), \(f=\{u,v\}\mapsto d_{C}(u,v)\).
For each \(e=\{a,b\}\in E(C)\) we can write \(d_{T}(a,b)\) as
\[d_{T}(a,b)=\operatorname{length}_{d_{T}}([a,b]_{T})=\sum_{j=1}^{n(e)}d_{T}(y_{j-1 }(e),y_{j}(e)) \tag{3}\]
where \([a,b]_{T}=(y_{i}(e))_{i=0}^{n(e)}\) denotes the (unique) path from \(a\) to \(b\) in \(T\).
After possibly passing to a tree \(T^{\prime}\) on \(V(C)\) and a geodesic distance \(d_{T^{\prime}}\), for which \(d_{T^{\prime}}(e)\leq d_{T}(e)\), for all \(e\in E(C)\), we also can assume that \(d_{T}(e)<c_{0}/2\) for each \(e\in E(T)\). This can be seen as follows: Assume \(d_{C}(a,b)=d_{T}(a,b)=c_{0}/2\) for some \(e=\{a,b\}\in E(T)\). Since \(a\) and \(b\) cannot be both leaves of \(T\), we can assume that the degree of \(b\) is at least \(2\), and thus that there exists a \(c\in V(C)\), \(a\neq c\), with \(\{b,c\}\in E(T)\). It follows that \(d_{T}(b,c)=d_{C}(b,c)<c_{0}/2\) (note that if \(d_{C}(b,c)=d_{C}(a,b)=c_{0}/2\) it would follow that \(a=c\)). We now consider the tree \(T^{\prime}\) on \(V(C)\) whose edges are
\[E(T^{\prime})=(E(T)\setminus\left\{\left\{a,b\right\}\right\})\cup\left\{\left\{ a,c\right\}\right\}\]
(note that \(\{a,c\}\not\in E(T)\), because otherwise \((a,b,c,a)\) would be a cycle in \(T\)) and the geodesic distance \(d_{T^{\prime}}\), generated by the weight function \(W^{\prime}(f)=d_{T}(f)\) if \(f\in E(T)\setminus\left\{\left\{a,b\right\}\right\}\), and \(W^{\prime}(\{a,c\})=d_{C}(a,c)=d_{C}(a,b)-d_{C}(b,c)>0\). It follows that for any \(e=\{g,h\}\in E(C)\), \(d_{T^{\prime}}(e)=d_{T}(e)\), as long as \(\{a,b\}\) is not an edge of the path \([g,h]_{T}\). If \(\{a,b\}\) is an edge of the path \([g,h]_{T}\), we can replace in \([g,h]_{T}\) the edge \(\{a,b\}\) by \(\{a,c\}\smile\{c,b\}\) in order to get a walk in \(T^{\prime}\) from \(g\) to \(h\), whose length is at least as large as the length of path \([g,h]_{T^{\prime}}\) in \(T^{\prime}\).
We suppose now that \(V(C)=\{0,1,2,\ldots,n-1\}\) and \(E(C)=\left\{\left\{i,i+1\right\}:i=0,1,2,\ldots,n-1\right\}\), with \(n\geq 3\), and where we assume that addition in \(V(C)\) is modulo \(n\). For \(i\in V(C)\) we consider the clockwise path in \(C\) defined by \(S(i)=(i+j:j=1,2,\ldots,k_{i})\) where
\[k_{i}=\max\left\{k\in\mathbb{N}:\sum_{j=i+1}^{i+k}d_{C}(j-1,j)\leq c_{0}/2\right\}\]
and \(S^{\prime}(i)\) is the counterclockwise path consisting of the complement of \(S(i)\), _i.e.,_ \(S^{\prime}(i)=(i-j:j=1,2,\ldots,n-k_{i}-1)\). We observe that for any \(j\in S(i)\) or \(j\in S^{\prime}(i)\), there is a shortest path from \(i\) to \(j\) which is in \(S(i)\), respectively \(S^{\prime}(i)\) (in the limit case that \(d_{C}(i,j)=c_{0}/2\), it follows from our convention that this path is in \(S(i)\)), and, thus, it follows for \(j\geq 0\) with \(i+j\in S(i)\), that
\[d_{C}(i,i+j)=\sum_{s=1}^{j}d_{C}(i+s-1,i+s). \tag{4}\]
We also observe that for \(u,v\in V(C)\), with \(d_{C}(u,v)<c_{0}/2\), it follows that \(u\in S(v)\), if and only if \(v\in S^{\prime}(u)\).
The following claim is crucial.
**Claim.** After possibly passing to another tree \(T^{\prime}\) on \(V(C)\) and a geodesic distance \(d_{T^{\prime}}\), with \(d_{T^{\prime}}(e)\leq d_{T}(e)\), for \(e\in E(C)\), we may assume that if \(\{v_{0},u\},\{v_{0},w\}\in E(T)\)
with \(u\neq w\), then \(u\) and \(w\) cannot be either both in \(S(v_{0})\) or both in \(S^{\prime}(v_{0})\).
Indeed, assume, for example, that \(u=v_{0}+s\), \(w=v_{0}+t\in S(v_{0})\), \(t,s\geq 1\), and that \(d_{C}(v_{0},u)<d_{C}(v_{0},w)\), which by (4) means that \(0<s<t\). Then we consider the tree \(T^{\prime}\) on \(V(C)\) defined by \(E(T^{\prime})=(E(T)\setminus\left\{\left\{v_{0},w\right\}\right\})\cup\left\{\left\{u,w\right\}\right\}\), and let \(d_{T^{\prime}}\) be the geodesic metric generated by the weight function \(W^{\prime}(e)=d_{T}(e)\) if \(e\in E(T)\setminus\left\{\left\{v_{0},w\right\}\right\}\) and
\[W^{\prime}(\left\{u,w\right\})=d_{T}(v_{0},w)-d_{T}(v_{0},u)=d_{C}(v_{0},w)-d _{C}(v_{0},u)<d_{C}(v_{0},w)=d_{T}(v_{0},w).\]
Then it follows for any \(e=\left\{a,b\right\}\in E(C)\) that either \([a,b]_{T}\) does not contain the edge \(\left\{v_{0},w\right\}\), in which case \(d_{T}(a,b)=d_{T^{\prime}}(a,b)\), or \([a,b]_{T}\) does contain the edge \(\left\{v_{0},w\right\}\). In the latter case, if we replace in \([a,b]_{T}\) the edge \(\left\{v_{0},w\right\}\) by \(\left\{v_{0},u\right\}\) and \(\left\{u,w\right\}\) we obtain a walk in \(T^{\prime}\) from \(a\) to \(b\) whose length with respect to \(d_{T^{\prime}}\) equals \(d_{T}(a,b)\); thus \(d_{T^{\prime}}(a,b)\leq d_{T}(a,b)\), which proves the claim.
The claim implies, in particular, that we can assume that the degree of each \(v\in V(C)\) with respect to \(T\) is at most \(2\), and thus that \(T\) is a path, and we can order \(V(C)\) into \(x_{0},x_{1},\ldots x_{n-1}\) with \(E(T)=\left\{\left\{x_{i-1},x_{i}\right\}:i=1,2,\ldots,n-1\right\}\). After relabeling the elements of \(V(C)\), we can assume that \(x_{0}=0\), and that \(x_{1}\in S(x_{0})\). But this implies that \(x_{2}\in S(x_{1})\), because otherwise \(x_{2}\in S^{\prime}(x_{1})\) and \(x_{0}\in S^{\prime}(x_{1})\) (here we use the assumption that \(d_{C}(x_{1},x_{2})=d_{T}(x_{1},x_{2})<c_{0}/2\), and \(d_{C}(x_{0},x_{1})=d_{T}(x_{0},x_{1})<c_{0}/2\)), which is a contradiction to the above proven claim. Iterating this argument we conclude that \(x_{i}\in S(x_{i-1})\), for \(i=1,2,\ldots,n-1\). Let \(k\leq n-1\) so that \(x_{k}=n-1\). Then
\[d_{T}(0,n-1)=\sum_{j=1}^{k}d_{T}(x_{j-1},x_{j})=\sum_{j=1}^{k}d_{C}(x_{j-1},x_ {j})=c_{0}-d_{C}(0,n-1)\]
which finishes the proof of our Lemma.
Figure 2. We may assume \(\left\{v_{0},w\right\}\notin E(T)\); otherwise we obtain an improved tree by replacing the edge \(\left\{v_{0},w\right\}\) with \(\left\{u,w\right\}\).
## 4. Embedding of Slash Powers of Balanced Laakso Graphs into Trees
Throughout this section, \(L\) is a fixed balanced Laakso graph, and thus for some \(k,l,m\in\mathbb{N}_{0}\), \(l\geq 2\) we have
\[V(L)=\{x_{i}\}_{i=0}^{k}\cup\{y_{i}^{(1)}\}_{i=0}^{l}\cup\{y_{i}^{(2)}\}_{i=0}^{l}\cup\{z_{i}\}_{i=0}^{m},\text{ with }x_{k}\equiv y_{0}^{(1)}\equiv y_{0}^{(2)},\ y_{l}^{(1)}\equiv y_{l}^{(2)}\equiv z_{0}\]
and
\[E(L)=\big{\{} \{x_{i-1},x_{i}\}:i=1,2,\ldots,k\big{\}}\cup\big{\{}\{z_{i-1},z_{i} \}:i=1,2,\ldots,m\big{\}}\] \[\cup\big{\{}\{y_{i-1}^{(1)},y_{i}^{(1)}\}:i=1,2,\ldots,l\big{\}} \cup\big{\{}\{y_{i-1}^{(2)},y_{i}^{(2)}\}:i=1,2,\ldots,l\big{\}}.\]
Let \(d_{L}\) be a normalized geodesic metric on \(V(L)\) and let \(C_{0}\) be the cycle in \(L\) generated by the paths \((y_{i}^{(1)})_{i=0}^{l}\) and \((y_{i}^{(2)})_{i=0}^{l}\). Note that
\[c_{0}:=\operatorname{length}_{d_{L}}(C_{0})=2\sum_{j=1}^{l}d_{L}(y_{j-1}^{(1)},y_{j}^{(1)})=2\sum_{j=1}^{l}d_{L}(y_{j-1}^{(2)},y_{j}^{(2)}).\]
We require that
\[d_{L}(y_{j-1}^{(\sigma)},y_{j}^{(\sigma)})\leq\frac{c_{0}}{4}\leq\frac{1}{2}, \text{ for }j=1,2,\ldots l,\,\sigma=1,2. \tag{5}\]
Let \(\nu_{L}\) be the probability on \(E(L)\) defined in Example 2.4 (c).
For \(n\in\mathbb{N}\) we abbreviate the \(n\)-th slash power of the measured normalized geodesic \(s\)-\(t\) graph \((L,d_{L},\nu_{L})\) by \((L_{n},d_{n},\nu_{n})\).
We write each \(e\in E(L_{n})\) as \(e=e_{1}\oslash e_{2}\oslash\ldots e_{n}\), with \(e_{1},e_{2},\ldots,e_{n}\in E(L)\) and each \(v\in V(L_{n})\) as \(v=e_{1}\oslash e_{2}\oslash\ldots e_{n-1}\oslash u\) with \(e_{1},e_{2},\ldots,e_{n-1}\in E(L)\) and \(u\in V(L)\). For \(e=e_{1}\oslash e_{2}\oslash\ldots\oslash e_{n}\in E(L_{n})\) we put \(e|_{[2,n]}=e_{2}\oslash e_{3}\ldots\oslash e_{n}\in E(L_{n-1})\) and \(e|_{[1,n-1]}=e_{1}\oslash e_{2}\ldots\oslash e_{n-1}\).
For \(n\in\mathbb{N}\) we put
\[\mathcal{C}^{(n)}=\big{\{}C=(V(C),E(C)):C\text{ is a cycle in $L_{n}$ of metric length $c_{0}$}\big{\}},\]
which are the cycles of largest metric length in \(L_{n}\), and for \(e\in E(L_{n})\) let
\[\mathcal{C}^{(n)}_{e}=\big{\{}C\in\mathcal{C}^{(n)}:e\in E(C)\big{\}}.\]
We deduce from Lemma 2.3, Lemma 3.1 and (5) the following Corollary.
**Corollary 4.1**.: _Let \((T,d_{T})\) be a geodesic tree, \(n\in\mathbb{N}\) and \(\Psi:V(L_{n})\to V(T)\) be an expansive map. Then for every \(C\in\mathcal{C}^{(n)}\) there is an edge \(e_{C}\in E(C)\) for which_
\[d_{T}(\Psi(e_{C}))\geq\frac{c_{0}-d_{n}(e_{C})}{8}\geq\frac{3c_{0}}{32}.\]
**Proposition 4.2**.: _Let \(n\in\mathbb{N}\)._
* (a) _For every_ \(C\in\mathcal{C}^{(n)}\) _it follows that_ \(|E(C)|=2l(k+l+m)^{n-1}\)_._
* (b) \(\big{|}\mathcal{C}^{(n)}\big{|}=2^{2l\frac{(k+l+m)^{n-1}-1}{k+l+m-1}}\)_. In particular, using the formula for geometric series, we deduce for_ \(n\geq 2\) _that_ \(\big{|}\mathcal{C}^{(n)}\big{|}=\big{|}\mathcal{C}^{(n-1)}\big{|}2^{2l(k+l+m)^{n-2}}\)_._
* (c) _Let_ \(n\geq 2\) _and_ \(e=e_{1}\oslash e_{2}\oslash\ldots\oslash e_{n-1}\oslash e_{n}\in E(L_{n})\)_. If_ \(e_{1}\in E(C_{0})\)_, then_ \[\big{|}\mathcal{C}_{e}^{(n)}\big{|}=\big{|}\mathcal{C}_{e|_{[1,n-1]}}^{(n-1)}\big{|}\cdot\begin{cases}2^{2l(k+l+m)^{n-2}-1}&\text{ if }e_{n}\in E(C_{0}),\\ 2^{2l(k+l+m)^{n-2}}&\text{ if }e_{n}\in E(L)\setminus E(C_{0}).\end{cases}\] _If_ \(e_{1}\in E(L)\setminus E(C_{0})\)_, then_ \(\mathcal{C}_{e}^{(n)}=\emptyset\)_._
Proof.: (a) For \(n=1\) the claim is clear. Assume the claim is true for some \(n\geq 1\). Then the claim for \(n+1\) follows from the fact that one obtains an element of \(\mathcal{C}^{(n+1)}\) by taking an element \(C\) from \(\mathcal{C}^{(n)}\) and replacing each edge of \(C\) by a path from \(s(L)\) to \(t(L)\) in \(L\), whose graph length is \(k+l+m\).
(b) Again for \(n=1\), the claim is clear. Assume the claim is true for \(n\). An element of \(\mathcal{C}^{(n+1)}\) is obtained by starting with a cycle \(C\) in \(\mathcal{C}^{(n)}\) and replacing each edge of \(C\) either by the path \((x_{i})_{i=0}^{k}\smile(y_{i}^{(1)})_{i=0}^{l}\smile(z_{i})_{i=0}^{m}\) or by the path \((x_{i})_{i=0}^{k}\smile(y_{i}^{(2)})_{i=0}^{l}\smile(z_{i})_{i=0}^{m}\). This means that for each \(C\) in \(\mathcal{C}^{(n)}\) there are \(2^{|E(C)|}\) possibilities to extend \(C\) to an element of \(\mathcal{C}^{(n+1)}\). Thus by (a) and the induction hypothesis
\[\big{|}\mathcal{C}^{(n+1)}\big{|}=\big{|}\mathcal{C}^{(n)}\big{|}2^{2l(k+l+m)^ {n-1}}=2^{2l\frac{(k+l+m)^{n-1}-1}{k+l+m-1}+2l(k+l+m)^{n-1}}=2^{2l\frac{(k+l+m )^{n}-1}{k+l+m-1}}\text{.}\]
(c) If \(n=2\) (see Figure 3 above), and \((e_{1}\oslash e_{2})\in E(L_{2})\) with \(e_{1}\in E(C_{0})\), then a cycle \(C\in\mathcal{C}^{(2)}\) contains \(e_{1}\oslash e_{2}\) if it is obtained from \(C_{0}\) (which is the only element of \(\mathcal{C}^{(1)}\)) by replacing each edge by the path \((x_{i})_{i=0}^{k}\smile(y_{i}^{(1)})_{i=0}^{l}\smile(z_{i})_{i=0}^{m}\) or the path \((x_{i})_{i=0}^{k}\smile(y_{i}^{(2)})_{i=0}^{l}\smile(z_{i})_{i=0}^{m}\). If \(e_{2}\) is an edge of the path \((x_{i})_{i=0}^{k}\) or \((z_{i})_{i=0}^{m}\) this can be done in \(2^{|E(C_{0})|}=2^{2l}\) ways. But in the case that \(e_{2}\) is an edge of either the path \((y_{i}^{(1)})_{i=0}^{l}\) or \((y_{i}^{(2)})_{i=0}^{l}\), then there are only \(2^{|E(C_{0})|-1}=2^{2l-1}\) ways to do so. If \(e_{1}\in E(L)\setminus E(C_{0})\), then \(e_{1}\oslash e_{2}\) cannot be an edge of any \(C\in\mathcal{C}^{(2)}\).
Assuming now that our claim is true for some \(n\geq 2\) and \(e=e_{1}\oslash e_{2}\oslash\ldots\oslash e_{n+1}\in E(L_{n+1})\), we can proceed similarly to verify the claim for \(n+1\). Indeed, we obtain the elements \(C\) of \(\mathcal{C}_{e}^{(n+1)}\) by replacing each edge of an element \(C^{\prime}\in\mathcal{C}_{e|_{[1,n]}}^{(n)}\) by the path \((x_{i})_{i=0}^{k}\smile(y_{i}^{(1)})_{i=0}^{l}\smile(z_{i})_{i=0}^{m}\) or the path \((x_{i})_{i=0}^{k}\smile(y_{i}^{(2)})_{i=0}^{l}\smile(z_{i})_{i=0}^{m}\). Again, if \(e_{n+1}\) is an edge of the path \((x_{i})_{i=0}^{k}\) or \((z_{i})_{i=0}^{m}\) this can be done in \(2^{|E(C^{\prime})|}=2^{2l(k+l+m)^{n-1}}\) ways. But in the case that \(e_{n+1}\) is an edge of either the path \((y_{i}^{(1)})_{i=0}^{l}\) or \((y_{i}^{(2)})_{i=0}^{l}\), then there are only \(2^{|E(C^{\prime})|-1}=2^{2l(k+l+m)^{n-1}-1}\) ways to do so. As in the case \(n=2\), it follows that if \(e_{1}\in E(L)\setminus E(C_{0})\), then \(\mathcal{C}_{e}^{(n+1)}=\emptyset\) since \(\mathcal{C}_{e|_{[1,n]}}^{(n)}=\emptyset\), by our induction hypothesis.
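As a quick sanity check of (a) and (b) (an illustration added for concreteness, with parameter values chosen arbitrarily), take the classical Laakso graph with \(k=m=1\) and \(l=2\), so that \(k+l+m=4\) and \(|E(C_{0})|=2l=4\). Then (a) gives \(|E(C)|=4\cdot 4^{n-1}\) for \(C\in\mathcal{C}^{(n)}\), while (b) gives
\[\big{|}\mathcal{C}^{(1)}\big{|}=2^{0}=1\quad\text{and}\quad\big{|}\mathcal{C}^{(2)}\big{|}=2^{4\cdot\frac{4-1}{3}}=2^{4}=16,\]
in agreement with the direct count: \(\mathcal{C}^{(1)}=\{C_{0}\}\), and an element of \(\mathcal{C}^{(2)}\) is obtained by replacing each of the four edges of \(C_{0}\) independently by one of the two \(s\)-\(t\) paths of \(L\).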
**Corollary 4.3**.: _Let \(n\in\mathbb{N}\) and assume that \(\phi\) is a map from \(\mathcal{C}^{(n)}\) to \(E(L_{n})\) with the property that \(\phi_{1}(C)\in E(C_{0})\), where we write \(\phi(C)\) as \(\phi(C)=\phi_{1}(C)\oslash\phi_{2}(C)\oslash\ldots\oslash\phi_{n}(C)\), with \(\phi_{j}(C)\in E(L)\), \(j=1,2,\ldots,n\)._
_Then_
\[\sum_{C\in\mathcal{C}^{(n)}}\frac{1}{|\mathcal{C}_{\phi(C)}^{(n)}|}\frac{\nu_ {n}(\phi(C))}{d_{n}(\phi(C))}=\frac{1}{2}. \tag{6}\]
Proof.: From the inductive definition of \(\nu_{n}(e)\) and \(d_{n}(e)\) we deduce that
\[\sum_{C\in\mathcal{C}^{(n)}}\frac{1}{|\mathcal{C}_{\phi(C)}^{(n)}|}\frac{\nu_ {n}(\phi(C))}{d_{n}(\phi(C))}=\sum_{C\in\mathcal{C}^{(n)}}\frac{1}{|\mathcal{ C}_{\phi(C)}^{(n)}|}\frac{\nu_{n-1}(\phi(C)|_{[1,n-1]})}{d_{n-1}(\phi(C)|_{[1,n-1]})} \frac{\nu_{1}(\phi_{n}(C))}{d_{1}(\phi_{n}(C))}\]
which, by Proposition 4.2, and the fact that \(\frac{\nu_{1}(\phi_{n}(C))}{d_{1}(\phi_{n}(C))}=\frac{1}{2}\iff\phi_{n}(C)\in E(C_{0})\) and \(\frac{\nu_{1}(\phi_{n}(C))}{d_{1}(\phi_{n}(C))}=1\iff\phi_{n}(C)\in E(L)\setminus E(C_{0})\), in the case of \(n\geq 2\) equals
\[=\sum_{C\in\mathcal{C}^{(n)}}\frac{1}{|\mathcal{C}_{\phi(C)|_{[1,n-1]}}^{(n-1)}|}\frac{\nu_{n-1}(\phi(C)|_{[1,n-1]})}{d_{n-1}(\phi(C)|_{[1,n-1]})}2^{-2l(k+l+m)^{n-2}}.\]
Repeating the same argument, this in turn equals
\[\sum_{C\in\mathcal{C}^{(n)}}\frac{1}{|\mathcal{C}_{\phi(C)|_{[1,n-2]}}^{(n-2)}|}\frac{\nu_{n-2}(\phi(C)|_{[1,n-2]})}{d_{n-2}(\phi(C)|_{[1,n-2]})}2^{-2l(k+l+m)^{n-3}-2l(k+l+m)^{n-2}}.\]
Iterating this argument, we finally obtain
\[\sum_{C\in\mathcal{C}^{(n)}}\frac{1}{|\mathcal{C}_{\phi_{1}(C)}^{(1)}|}\frac{\nu_{1}(\phi_{1}(C))}{d_{1}(\phi_{1}(C))}2^{-2l\sum_{j=0}^{n-2}(k+l+m)^{j}}=\frac{1}{2}\sum_{C\in\mathcal{C}^{(n)}}2^{-2l\frac{(k+l+m)^{n-1}-1}{k+l+m-1}}=\frac{1}{2}\text{ (by Proposition 4.2 (b))}\]
which verifies our statement.
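For instance (a check added for illustration), when \(n=1\) the identity (6) can be verified directly: \(\mathcal{C}^{(1)}=\{C_{0}\}\), \(\big{|}\mathcal{C}^{(1)}_{\phi(C_{0})}\big{|}=1\), and \(\phi(C_{0})=\phi_{1}(C_{0})\in E(C_{0})\) gives \(\nu_{1}(\phi(C_{0}))/d_{1}(\phi(C_{0}))=\frac{1}{2}\), so the sum equals \(\frac{1}{2}\).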
The following observation follows directly from the definition of the metric \(d_{H\oslash G}\) for two geodesic \(s\)-\(t\) graphs \(H\) and \(G\).
**Proposition 4.4**.: _For each \(e_{1}\in E(L)\), and \(n\geq 2\) the map_
\[\Phi_{e_{1}}:V(L_{n-1})\to V(L_{n}),\quad e_{2}\oslash e_{3}\oslash\ldots\oslash e_{ n-1}\oslash v\mapsto e_{1}\oslash e_{2}\oslash e_{3}\oslash\ldots\oslash e_{n-1}\oslash v\]
_is a graph isomorphism onto its image and_
\[d_{n}\big{(}\Phi_{e_{1}}(u),\Phi_{e_{1}}(v)\big{)}=d_{1}(e_{1})d_{n-1}(u,v)\text { for }u,v\in V(L_{n-1}).\]
We are now ready to state the main result of this section.
**Theorem 4.5**.: _Let \(n\in\mathbb{N}\), \((T,d_{T})\) a geodesic tree and \(\Psi:V(L_{n})\to V(T)\) be an expansive map._
_Then_
\[\mathbb{D}_{\nu_{n}}(\Psi)=\mathbb{E}_{\nu_{n}}\Big{(}\frac{d_{T}(\Psi(e))}{d _{n}(e)}\Big{)}=\sum_{e=\{x,y\}\in E(L_{n})}\frac{d_{T}(\Psi(e))}{d_{n}(e)}\nu_ {n}(e)\geq\frac{3}{128}c_{0}n.\]
Proof.: We consider the map
\[F_{n}:E(L_{n})\to\mathbb{R},\qquad e\mapsto\min\Big{(}\frac{d_{T}(\Psi(e))}{d _{n}(e)},\frac{3}{32}\frac{c_{0}}{d_{n}(e)}\Big{)}.\]
We will show by induction for all \(n\in\mathbb{N}\) that
\[\mathbb{E}_{\nu_{n}}(F_{n})\geq\frac{3}{128}c_{0}n. \tag{7}\]
For \(n=1\), (7) is true: since \(\Psi\) is expansive we have \(d_{T}(\Psi(e))/d_{1}(e)\geq 1\), while it follows from (5) that \(\frac{3}{32}\frac{c_{0}}{d_{1}(e)}\geq\frac{6}{32}c_{0}\); since \(c_{0}\leq 2\), both terms in the minimum defining \(F_{1}(e)\) are at least \(\frac{3}{128}c_{0}\), and \(\nu_{1}\) is a probability measure. Assume that our claim is true for \(n-1\), where \(n\geq 2\).
Considering for each \(e\in E(L_{n})\) the cases \(d_{T}(\Psi(e))\geq\frac{3}{32}c_{0}\) and \(d_{T}(\Psi(e))<\frac{3}{32}c_{0}\), we obtain
\[\begin{split}\mathbb{E}_{\nu_{n}}(F_{n})&\geq\frac{3}{32}c_{0}\sum_{\begin{subarray}{c}e\in E(L_{n}),\\ d_{T}(\Psi(e))\geq\frac{3}{32}c_{0}\end{subarray}}\frac{\nu_{n}(e)}{d_{n}(e)}+\sum_{\begin{subarray}{c}e\in E(L_{n}),\\ d_{T}(\Psi(e))<\frac{3}{32}c_{0}\end{subarray}}\min\Big{(}d_{T}(\Psi(e)),\frac{3}{32}c_{0}\Big{)}\frac{\nu_{n}(e)}{d_{n}(e)}\\ &\geq\underbrace{c_{0}\,\frac{1}{2}\,\frac{3}{32}\sum_{\begin{subarray}{c}e\in E(L_{n}),\\ d_{T}(\Psi(e))\geq\frac{3}{32}c_{0}\end{subarray}}\frac{\nu_{n}(e)}{d_{n}(e)}}_{=A}+\underbrace{\sum_{e\in E(L_{n})}\min\Big{(}d_{T}(\Psi(e)),\frac{3}{64}c_{0}\Big{)}\frac{\nu_{n}(e)}{d_{n}(e)}}_{=B}.\end{split}\]
For each \(C\in\mathcal{C}^{(n)}\) choose, according to Corollary 4.1, an edge \(e_{C}=e_{1}^{C}\oslash e_{2}^{C}\oslash\ldots\oslash e_{n}^{C}\in E(C)\) for which \(d_{T}(\Psi(e_{C}))\geq\frac{3}{32}c_{0}\). The value of \(A\) can be estimated as follows: since each \(e_{C}\) is an edge of at most \(|\mathcal{C}_{e_{C}}^{(n)}|\) cycles of \(\mathcal{C}^{(n)}\), it follows from Proposition 4.2 (c) and Corollary 4.3 that
\[A\geq c_{0}\frac{3}{64}\sum_{e\in\{e_{C}:C\in\mathcal{C}^{(n)}\}}\frac{\nu_{n}(e)}{d_{n}(e)}\geq c_{0}\frac{3}{64}\sum_{C\in\mathcal{C}^{(n)}}\frac{\nu_{n}(e_{C})}{d_{n}(e_{C})}\frac{1}{|\mathcal{C}_{e_{C}}^{(n)}|}=\frac{3}{128}c_{0}. \tag{8}\]
In order to estimate \(B\) we define \(P_{1}=(x_{i})_{i=0}^{k}\smile(y_{i}^{(1)})_{i=0}^{l}\smile(z_{i})_{i=0}^{m}\) and \(P_{2}=(x_{i})_{i=0}^{k}\smile(y_{i}^{(2)})_{i=0}^{l}\smile(z_{i})_{i=0}^{m}\) (_i.e.,_ the two paths from \(s(L)\) to \(t(L)\) in \(L\)) and compute
\[B =\sum_{e_{1}\in E(L)}\sum_{e^{\prime}\in E(L_{n-1})}\min\Big{(}d_ {T}(\Psi(e_{1}\oslash e^{\prime})),\frac{3}{64}c_{0}\Big{)}\frac{\nu_{n-1}(e^{ \prime})}{d_{n-1}(e^{\prime})}\frac{\nu_{1}(e_{1})}{d_{1}(e_{1})}\] \[=\frac{1}{2}\sum_{e_{1}\in E(P_{1})}\sum_{e^{\prime}\in E(L_{n-1 })}\min\Big{(}d_{T}(\Psi(e_{1}\oslash e^{\prime})),\frac{3}{64}c_{0}\Big{)} \frac{\nu_{n-1}(e^{\prime})}{d_{n-1}(e^{\prime})}\] \[\qquad+\frac{1}{2}\sum_{e_{1}\in E(P_{2})}\sum_{e^{\prime}\in E( L_{n-1})}\min\Big{(}d_{T}(\Psi(e_{1}\oslash e^{\prime})),\frac{3}{64}c_{0} \Big{)}\frac{\nu_{n-1}(e^{\prime})}{d_{n-1}(e^{\prime})}.\]
The above equality is true since \(\frac{\nu_{1}(e_{1})}{d_{1}(e_{1})}=1\iff e_{1}\in E((x_{i})_{i=0}^{k})\) or \(e_{1}\in E((z_{i})_{i=0}^{m})\)\(\iff e_{1}\in E(P_{1})\cap E(P_{2})\) and \(\frac{\nu_{1}(e_{1})}{d_{1}(e_{1})}=\frac{1}{2}\iff e_{1}\in E((y_{i}^{(1)})_{ i=0}^{l})\) or \(e_{1}\in E((y_{i}^{(2)})_{i=0}^{l})\). For \(\sigma=1,2\) we compute
\[\begin{split}&\sum_{e_{1}\in E(P_{\sigma})}\sum_{e^{\prime}\in E(L_{n-1})}\min\Big{(}d_{T}(\Psi(e_{1}\oslash e^{\prime})),\frac{3}{64}c_{0}\Big{)}\frac{\nu_{n-1}(e^{\prime})}{d_{n-1}(e^{\prime})}\\ &\qquad=\sum_{e_{1}\in E(P_{\sigma})}d_{1}(e_{1})\sum_{e^{\prime}\in E(L_{n-1})}\min\Big{(}\frac{d_{T}(\Psi(e_{1}\oslash e^{\prime}))}{d_{1}(e_{1})},\frac{1}{d_{1}(e_{1})}\frac{3}{64}c_{0}\Big{)}\frac{\nu_{n-1}(e^{\prime})}{d_{n-1}(e^{\prime})}\\ &\qquad\geq\sum_{e_{1}\in E(P_{\sigma})}d_{1}(e_{1})\sum_{e^{\prime}\in E(L_{n-1})}\min\Big{(}\frac{d_{T}(\Psi(e_{1}\oslash e^{\prime}))}{d_{1}(e_{1})},\frac{3}{32}c_{0}\Big{)}\frac{\nu_{n-1}(e^{\prime})}{d_{n-1}(e^{\prime})}\quad\text{(by (5))}\\ &\qquad=\sum_{e_{1}\in E(P_{\sigma})}d_{1}(e_{1})\sum_{e^{\prime}\in E(L_{n-1})}\min\Big{(}d^{\prime}_{T}(\Psi_{e_{1}}(e^{\prime})),\frac{3}{32}c_{0}\Big{)}\frac{\nu_{n-1}(e^{\prime})}{d_{n-1}(e^{\prime})},\end{split}\]
where for \(e_{1}\in E(P_{\sigma})\) we define \(\Psi_{e_{1}}:V(L_{n-1})\to V(T)\), \(v\mapsto\Psi(e_{1}\oslash v)\), which by Proposition 4.4 is an expansive map for the metric \(d^{\prime}_{T}(\cdot,\cdot)=\frac{d_{T}(\cdot,\cdot)}{d_{1}(e_{1})}\) on \(V(T)\).
It follows, therefore, from the induction hypothesis (applied to the expansive map \(\Psi_{e_{1}}\)) and the fact that \(\sum_{e_{1}\in E(P_{\sigma})}d_{1}(e_{1})=1\) that for each \(\sigma=1,2\)
\[\sum_{e_{1}\in E(P_{\sigma})}\sum_{e^{\prime}\in E(L_{n-1})}\min\Big{(}d_{T}(\Psi(e_{1}\oslash e^{\prime})),c_{0}\frac{3}{64}\Big{)}\frac{\nu_{n-1}(e^{\prime})}{d_{n-1}(e^{\prime})}\geq\frac{3}{128}c_{0}(n-1),\]
and, thus, that \(B\geq\frac{3}{128}c_{0}(n-1)\) which yields
\[\mathbb{E}_{\nu_{n}}(F_{n})\geq A+B\geq\frac{3}{128}c_{0}n,\]
and finishes the induction step, and thus the proof of (7). Since \(F_{n}(e)\leq d_{T}(\Psi(e))/d_{n}(e)\) for every \(e\in E(L_{n})\), inequality (7) implies the claimed lower bound for \(\mathbb{D}_{\nu_{n}}(\Psi)\).
## 5. Proof of the Main Theorem
The following result is an immediate consequence of Lemma 2.3, Proposition 2.9, and Theorem 4.5.
**Corollary 5.1**.: _Let \((G,d_{G})\) be a normalized, geodesic \(s\)-\(t\) graph such that \(G\) contains a cycle, and let_
\[c_{0}=\max\big{\{}\mathrm{length}_{d_{G}}(C)\ :\ C\text{ is a cycle in }G\big{\}}.\]
_Suppose \(N\in\mathbb{N}\) is such that \(G^{\oslash N}\) contains a balanced generalized Laakso \(s\)-\(t\) subgraph, say \(L\), whose cycle has length \(c_{0}\), and such that \(d_{G^{\oslash N}}(e)\leq\frac{c_{0}}{4}\) for all \(e\in E(L)\) and \(\nu\) is defined as in Example 2.4 (c) on \(E(L)\) and vanishes on \(E(G^{\oslash N})\setminus E(L)\). Let \(n\in\mathbb{N}\), \((T,d_{T})\) be a geodesic tree, and \(\Psi:V(G^{\oslash Nn})\to V(T)\) be an expansive map._
_Then_
\[\mathbb{D}_{\nu_{n}}(\Psi)=\mathbb{E}_{\nu_{n}}\Big{(}\frac{d_{T}(\Psi(e))}{d_{Nn}(e)}\Big{)}=\sum_{e=\{x,y\}\in E(L^{\oslash n})}\frac{d_{T}(\Psi(e))}{d_{Nn}(e)}\nu_{n}(e)\geq\frac{3}{128}c_{0}n.\]
Proof of Main Theorem.: The result is clear in the case that \(G\) is a tree. Indeed, if \(G\) is a tree and an \(s\)-\(t\) graph it must be a path, and thus all slash powers are paths and the result follows trivially. Hence, we may suppose that \(G\) contains a cycle. Let \(C_{0}\) be a cycle in \(G\) with metric length \(c_{0}\). By Corollary 5.1, and the observation that for any \(k\in\mathbb{N}\) the cardinality \(|V(G^{\oslash k})|\) is of the order \(|V(G)|^{k}\), it is sufficient to show that for a large enough \(N\in\mathbb{N}\), \(G^{\oslash N}\) contains a balanced generalized Laakso \(s\)-\(t\) subgraph, \(L\), where \(d_{G^{\oslash N}}(e)\leq\frac{c_{0}}{4}\) for all \(e\in E(L)\). However, this follows from the following observations, which are consequences of the previous results of this paper:
1. We can find a generalized Laakso \(s\)-\(t\) subgraph, say \(L_{1}\), of \(G^{\oslash 2}\) such that for all \(e\in E(L_{1})\), \(d_{G^{\oslash 2}}(e)<1\). Indeed, we note that since \(G\) has a cycle it must contain an \(s\)-\(t\) path with a graph length of at least \(2\), say \(P\). Then in \(G^{\oslash 2}\) there must be a cycle \(C_{1}\subset G^{\oslash 2}\) formed by replacing each edge of \(C_{0}\) with the path \(P\). By Lemma 2.3 (1) we can find a generalized Laakso \(s\)-\(t\) subgraph, which by the proof of Lemma 2.3 (1) must satisfy \(d_{G^{\oslash 2}}(e)<1\). Note too that \(\operatorname{length}_{d_{G^{\oslash 2}}}(C_{1})=c_{0}\).
2. There is a large enough \(N_{1}\in\mathbb{N}\) so that for all \(n\geq N_{1}\), \(d_{L_{1}^{\oslash n}}(e)<\frac{c_{0}}{4}<\frac{1}{2}\) for any \(e\in E(L_{1}^{\oslash n})\). Indeed, if \(\delta:=\max\{d_{G^{\oslash 2}}(e)\ :\ e\in E(L_{1})\}\), then by the definition of the metric given on slash powers the first \(n\) with \(\delta^{n}<\frac{c_{0}}{4}\) will do.
3. By the Remark following Lemma 2.10, there is an \(N_{2}\in\mathbb{N}\) large enough so that for all \(n\geq N_{2}\), \((L_{1}^{\oslash n},d_{L_{1}^{\oslash n}})\) contains a balanced generalized Laakso subgraph. It follows by the proof of Lemma 2.10 and the same remark that the cycle in this balanced Laakso graph has metric length \(c_{0}\).
Hence, there exists an \(N_{3}\in\mathbb{N}\) so that \((L_{1}^{\oslash N_{3}},d_{G^{\oslash 2N_{3}}})\) contains the desired balanced generalized Laakso \(s\)-\(t\) subgraph.
|
2304.08476 | Cohomology operations for moment-angle complexes and resolutions of
Stanley-Reisner rings | A fundamental result in toric topology identifies the cohomology ring of the
moment-angle complex $\mathcal{Z}_K$ associated to a simplicial complex $K$
with the Koszul homology of the Stanley--Reisner ring of $K$. By studying
cohomology operations induced by the standard torus action on the moment-angle
complex, we extend this to a topological interpretation of the minimal free
resolution of the Stanley-Reisner ring. The exterior algebra module structure
in cohomology induced by the torus action recovers the linear part of the
minimal free resolution, and we show that higher cohomology operations induced
by the action (in the sense of Goresky-Kottwitz-MacPherson) can be assembled
into an explicit differential on the resolution. Describing these operations in
terms of Hochster's formula, we recover and extend a result due to Katth\"an.
We then apply all of this to study the equivariant formality of torus actions
on moment-angle complexes. For these spaces, we obtain complete algebraic and
combinatorial characterisations of which subtori of the naturally acting torus
act equivariantly formally. | Steven Amelotte, Benjamin Briggs | 2023-04-17T17:56:33Z | http://arxiv.org/abs/2304.08476v2 | # Cohomology operations for moment-angle complexes and resolutions of Stanley-Reisner rings
###### Abstract.
A fundamental result in toric topology identifies the cohomology ring of the moment-angle complex \(\mathcal{Z}_{K}\) associated to a simplicial complex \(K\) with the Koszul homology of the Stanley-Reisner ring of \(K\). By studying cohomology operations induced by the standard torus action on the moment-angle complex, we extend this to a topological interpretation of the minimal free resolution of the Stanley-Reisner ring. The exterior algebra module structure in cohomology induced by the torus action recovers the linear part of the minimal free resolution, and we show that higher cohomology operations induced by the action (in the sense of Goresky-Kottwitz-MacPherson) can be assembled into an explicit differential on the resolution. Describing these operations in terms of Hochster's formula, we recover and extend a result due to Katthän. We then apply all of this to study the equivariant formality of torus actions on moment-angle complexes. For these spaces, we obtain complete algebraic and combinatorial characterisations of which subtori of the naturally acting torus act equivariantly formally.
Key words and phrases: Moment-angle complex, equivariant formality, Stanley-Reisner ring, minimal free resolution.

2020 Mathematics Subject Classification: 13F55, 57S12, 55U10.

For part of this work, Amelotte was hosted by the Institute for Computational and Experimental Research in Mathematics in Providence, RI, supported by the National Science Foundation under Grant No. 1929284. Briggs was funded by the European Union under the Grant Agreement no. 101064551 (Hochschild).
## 1. Introduction
Equivariantly formal torus actions are among the richest and best understood examples of group actions studied in geometry and topology. The action of a torus \(T\) on a space \(X\) is called _equivariantly formal_ if the equivariant cohomology \(H^{*}_{T}(X)\) is free as a module over the polynomial ring \(H^{*}(BT)\). This condition makes the relationship between ordinary and equivariant cohomology as simple as possible, and often allows \(H^{*}_{T}(X)\) to be described in terms of fixed point data (as in the Borel localization theorem and its variants) or low-dimensional orbits of the action (as in the Chang-Skjelbred lemma [9] and its generalisations to the Atiyah-Bredon sequence [16]). Examples of equivariantly formal torus actions include Hamiltonian actions on compact symplectic manifolds, GKM manifolds and all \(T\)-spaces with cohomology concentrated in even degrees.
An important source of torus actions is the _moment-angle complex_, a central construction in toric topology that functorially assigns a space \(\mathcal{Z}_{K}\) with a \(T^{m}=(S^{1})^{m}\)-action to each simplicial complex \(K\) on \(m\) vertices. These \(T^{m}\)-spaces play a universal role in toric topology. For example, every quasitoric manifold (including every smooth projective toric variety) is diffeomorphic to the quotient \(\mathcal{Z}_{K}/T^{m-n}\) of a moment-angle complex by some restriction of its standard \(T^{m}\)-action to a maximal freely acting subtorus \(T^{m-n}\subseteq T^{m}\) (cf. [7, 7.3]). On the other hand, the homotopy quotient of \(\mathcal{Z}_{K}\) by the entire \(T^{m}\)-action yields the _Davis-Januszkiewicz space_\(DJ_{K}\), whose cohomology ring \(H^{*}(DJ_{K};k)=H^{*}_{T^{m}}(\mathcal{Z}_{K};k)\) is isomorphic to the Stanley-Reisner ring
\[k[K]=S/(v_{i_{1}}\cdots v_{i_{q}}\,:\,\{i_{1},\ldots,i_{q}\}\notin K),\]
where \(S=H^{*}(BT^{m};k)=k[v_{1},\ldots,v_{m}]\) is a polynomial ring with generators in degree \(2\).
The (homotopy) quotients of moment-angle complexes by other subgroups \(H\subseteq T^{m}\) have recently been investigated by many authors [14, 17, 25, 26, 28, 30]. One motivation for this paper is to answer the following question:
**Question 1**.: Let \(K\) be a simplicial complex on the vertex set \([m]=\{1,\ldots,m\}\). For which subtori \(H\subseteq T^{m}\) is the \(H\)-action on the moment-angle complex \(\mathcal{Z}_{K}\) equivariantly formal?
Rather than directly computing the equivariant cohomology \(H^{*}_{H}(\mathcal{Z}_{K})\) as a module over a polynomial ring for every subtorus \(H\subseteq T^{m}\), we approach Question 1 from a Koszul dual perspective by studying the ordinary cohomology \(H^{*}(\mathcal{Z}_{K})\) as a module over an exterior algebra.
Indeed, the \(T^{m}\)-action on \(\mathcal{Z}_{K}\) equips \(H^{*}(\mathcal{Z}_{K})\) with a natural (in \(K\)) structure of a graded module over \(\Lambda=\Lambda(\iota_{1},\ldots,\iota_{m})\), with generators \(\iota_{j}\) acting by derivations of degree \(-1\). In Section 3 we begin a detailed study of this module structure. As a graded algebra, the ordinary cohomology of \(\mathcal{Z}_{K}\) is well understood in terms of the homological algebra of the Stanley-Reisner ring (see [6]):
\[H^{*}(\mathcal{Z}_{K};k)\cong\operatorname{Tor}^{S}(k[K],k).\]
We describe the \(\Lambda\)-module structure on both sides of this isomorphism, lifting it to a cochain-level isomorphism of differential graded \(\Lambda\)-modules.
Hochster's formula for the Betti numbers of a Stanley-Reisner ring yields another description of the cohomology of a moment-angle complex, namely a decomposition \(H^{*}(\mathcal{Z}_{K})\cong\bigoplus_{J\subseteq[m]}\widetilde{H}^{*}(K_{J})\) in terms of the reduced simplicial cohomology of full subcomplexes \(K_{J}\subseteq K\). We show that this can also be upgraded to an isomorphism of \(\Lambda\)-modules, where the action of each \(\iota_{j}\) on the components of Hochster's decomposition coincides, up to sign, with the maps
\[\widetilde{H}^{*}(K_{J})\longrightarrow\widetilde{H}^{*}(K_{J\smallsetminus j}) \tag{1}\]
induced by the inclusions of full subcomplexes \(K_{J\smallsetminus j}\hookrightarrow K_{J}\) (see Lemma 3.5). Together with Theorem A below, this recovers a combinatorial description due to Katthän [24] of the linear part of the minimal free resolution of \(k[K]\).
The derivations \(\iota_{j}\colon H^{*}(\mathcal{Z}_{K})\to H^{*-1}(\mathcal{Z}_{K})\) extend to a family of higher operations
\[\delta_{s}\colon H^{*}(\mathcal{Z}_{K})\longrightarrow H^{*-2\deg(s)+1}( \mathcal{Z}_{K}),\]
indexed by the monomials of \(S\). In the context of equivariant cohomology, the higher cohomology operations induced by a torus action were introduced by Goresky, Kottwitz and MacPherson [18] as
obstructions to equivariant formality. As with other higher cohomology operations, \(\delta_{s}\) is generally only defined on the kernels of lower degree operations, taking values with indeterminacy depending on these previous operations. For torus actions on smooth manifolds, this indeterminacy can be avoided by identifying de Rham cohomology with the space of harmonic forms, as in [1] and [8]. In our case, since a moment-angle complex is not always a manifold and we wish to work with coefficients in an arbitrary field \(k\), we make use of the homological perturbation lemma in Section 4 to obtain an explicit family of higher operations which are well-defined endomorphisms of \(H^{*}(\mathcal{Z}_{K})\). The relevance to equivariant formality is made clear by the main results of Section 4, which show that these operations assemble into a differential yielding the minimal Hirsch-Brown model for the action of any subtorus \(H\subseteq T^{m}\) on \(\mathcal{Z}_{K}\). In particular, for \(H=T^{m}\) we obtain the following.
**Theorem A**.: _If \(K\) is a simplicial complex on the vertex set \([m]\), then_
\[\big{(}S\otimes H^{*}(\mathcal{Z}_{K};k),\delta\big{)},\qquad\delta=\sum_{J \subseteq[m]}v_{J}\otimes\delta_{J}\]
_is the minimal free resolution of the Stanley-Reisner ring \(k[K]\), or equivalently the minimal Hirsch-Brown model for the action of \(T^{m}\) on \(\mathcal{Z}_{K}\). Here \(v_{J}=v_{i_{1}}\cdots v_{i_{q}}\) and \(\delta_{J}\) is the cohomology operation indexed by \(v_{J}\), for each \(J=\{i_{1},\ldots,i_{q}\}\subseteq[m]\)._
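To illustrate Theorem A in the smallest nontrivial case (a standard example, with the cochain-level computation carried out via formula (14) below; signs depend on conventions): let \(K\) consist of two disjoint points, so that \(\mathcal{Z}_{K}\cong S^{3}\) and \(k[K]=k[v_{1},v_{2}]/(v_{1}v_{2})\). Writing \(H^{*}(\mathcal{Z}_{K})=k\{1\}\oplus k\{\alpha\}\) with \(|\alpha|=3\), the primary operations vanish for degree reasons, while representing \(\alpha\) by the Koszul cocycle \(v_{1}u_{2}\) and taking \(\beta_{1}=0\), \(\beta_{2}=u_{1}\) gives \(\delta_{\{1,2\}}[\alpha]=[\iota_{1}u_{1}]=[1]\). The differential of Theorem A is therefore
\[\delta(1\otimes\alpha)=v_{1}v_{2}\otimes 1,\]
recovering the minimal free resolution \(0\to S\xrightarrow{\,v_{1}v_{2}\,}S\to k[K]\to 0\).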
Describing the linear part of the resolution of \(k[K]\) amounts to understanding the primary operations \(\iota_{j}=\delta_{\{j\}}\), and these are given in combinatorial terms by Katthän's formula (1). In [24], Katthän poses the question of describing the quadratic part of the resolution in similar terms; in other words, how can the secondary operations \(\delta_{\{ij\}}\) be understood in terms of Hochster's decomposition \(H^{*}(\mathcal{Z}_{K})\cong\bigoplus_{J\subseteq[m]}\widetilde{H}^{*}(K_{J})\)? More generally, one can ask
**Question 2**.: How can the higher operations \(\delta_{s}\) on \(H^{*}(\mathcal{Z}_{K};k)\)--or equivalently, the differentials in the minimal free resolution of \(k[K]\)--be described in terms of the combinatorics of \(K\)?
In Section 6 we give combinatorial models for the higher operations by identifying them with differentials in a Mayer-Vietoris spectral sequence defined entirely in terms of simplicial cochains on full subcomplexes of \(K\). From the perspective of commutative algebra, this yields a description of the minimal \(S\)-free resolution of the Stanley-Reisner ring purely in terms of Hochster's decomposition and the combinatorics of \(K\), up to the indeterminacy in the definition of the higher operations, which remains an obstacle to a complete description. This recovers and extends Katthän's theorem.
Special attention is paid to the secondary operations (yielding the quadratic part of the resolution of \(k[K]\)) in Section 6.1. We find that, just as the primary operations \(\delta_{\{i\}}\) are determined by the maps \(K_{J\smallsetminus i}\hookrightarrow K_{J}\) for all \(J\subseteq[m]\), the secondary operations \(\delta_{\{ij\}}\) are essentially determined by the natural inclusions
\[K_{J\smallsetminus i}\cup K_{J\smallsetminus j}\hookrightarrow K_{J}\quad\text{ and } \quad K_{J\smallsetminus i}\cup K_{J\smallsetminus j}\hookrightarrow\Sigma K_{J \smallsetminus ij}.\]
More generally, the behaviour of the higher operations is governed by inclusions of _face deletions_\(K_{J}\smallsetminus F=\bigcup_{i\in F}K_{J\smallsetminus i}\) for \(F\in K_{J}\); see Section 6.3.
Returning to the question of equivariant formality, we find that this condition can be read off from the minimal free resolution of the Stanley-Reisner ring. We show in Section 5.1 that for any subtorus \(H\subseteq T^{m}\), the \(H\)-action on \(\mathcal{Z}_{K}\) is equivariantly formal if and only if \(k[K]\) is \(\mathcal{J}\)-closed in the sense of Diethorn [11], where \(\mathcal{J}\subseteq S\) is a certain ideal generated by linear polynomials associated to the torus \(H\).
To obtain combinatorial characterisations of equivariant formality, we first reduce Question 1 to the case of coordinate subtorus actions on \(\mathcal{Z}_{K}\), that is, actions of subtori of the form
\[T^{I}=\big{\{}(t_{1},\ldots,t_{m})\in T^{m}\,:\,t_{i}=1\text{ for }i\notin I \big{\}}\]
for some \(I\subseteq[m]\); see Section 5.2. The simplest case to consider is that of a coordinate circle \(S^{1}_{j}=T^{\{j\}}\subseteq T^{m}\). In this case, equivariant formality is determined by the \(\Lambda\)-module structure on \(H^{*}(\mathcal{Z}_{K})\) alone: we show in Theorem 5.9 that the \(S^{1}_{j}\)-action on \(\mathcal{Z}_{K}\) is equivariantly formal if and only if the derivation \(\iota_{j}\colon H^{*}(\mathcal{Z}_{K})\to H^{*-1}(\mathcal{Z}_{K})\) is trivial. By the combinatorial description (1)
of the primary operations, it is equivalent that the inclusion of full subcomplexes \(K_{J\smallsetminus j}\hookrightarrow K_{J}\) induces the trivial map in reduced simplicial cohomology for all \(J\subseteq[m]\) with \(j\in J\).
For the \(T^{I}\)-actions on \(\mathcal{Z}_{K}\) with \(|I|>1\), equivariant formality is not determined by the \(\Lambda\)-module \(H^{*}(\mathcal{Z}_{K})\), as the vanishing of higher operations is also necessary. Although the higher operations are not simply induced by inclusions among the full subcomplexes appearing in Hochster's formula, it nonetheless turns out that the vanishing of all \(\delta_{J}\) with \(J\subseteq I\) is detected by inclusions of the (not necessarily full) face deletion subcomplexes.
**Theorem B**.: _Let \(K\) be a simplicial complex on the vertex set \([m]\) and let \(I\subseteq[m]\). Then the following conditions are equivalent:_
* (a) _the coordinate_ \(T^{I}\)_-action on_ \(\mathcal{Z}_{K}\) _is equivariantly formal over_ \(k\)_;_
* (b) _the cohomology operations_ \(\delta_{J}\) _vanish on_ \(H^{*}(\mathcal{Z}_{K};k)\) _for all_ \(J\subseteq I\)_;_
* (c) _the Stanley-Reisner ring_ \(k[K]\) _is_ \(\mathcal{J}_{I}\)_-closed, where_ \(\mathcal{J}_{I}=(v_{i}\,:\,i\notin I)\)_;_
* (d) \(K_{J}\smallsetminus(I\cap J)\hookrightarrow K_{J}\) _induces the trivial map on_ \(\widetilde{H}^{*}(\,;k)\) _for all_ \(J\subseteq[m]\)_._
In case \(K\) is flag (or equivalently, the Stanley-Reisner ring \(k[K]\) is quadratic), condition (d) above can be simplified considerably. In this situation, we obtain the following characterisation of equivariant formality purely in terms of the combinatorics of \(K\).
**Theorem C**.: _Let \(K\) be a flag complex on \([m]\) and let \(I\subseteq[m]\). The coordinate \(T^{I}\)-action on \(\mathcal{Z}_{K}\) is equivariantly formal if and only if \(I\in K\) and \(K_{\{i,j\}}*K_{I\smallsetminus\{i,j\}}\subseteq K\) for every missing edge \(\{i,j\}\notin K\)._
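For example (a worked instance of the criterion, added for illustration): let \(K\) be the boundary of a square, the flag complex on \([4]\) with missing edges \(\{1,3\}\) and \(\{2,4\}\), so that \(\mathcal{Z}_{K}\cong S^{3}\times S^{3}\). For \(I=\{1,2\}\in K\), the join \(K_{\{1,3\}}*K_{\{2\}}\) has facets \(\{1,2\}\) and \(\{2,3\}\), and \(K_{\{2,4\}}*K_{\{1\}}\) has facets \(\{1,2\}\) and \(\{1,4\}\); all of these lie in \(K\), so the coordinate \(T^{\{1,2\}}\)-action on \(\mathcal{Z}_{K}\) is equivariantly formal. For \(I=\{1,3\}\) the criterion fails at the first step, since \(I\notin K\).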
An interesting consequence is that the equivariant formality of these actions is independent of the field \(k\) in the flag case. We give examples in Section 5.5 illustrating that this is not true in general.
**Acknowledgements.** The authors are very grateful to Rachel Diethorn and Matthias Franz for many helpful comments on a draft of this work. The first author would also like to thank Graham Denham for a helpful conversation regarding Macaulay2.
## 2. Preliminaries and notation
Throughout this paper we fix a natural number \(m\) and a simplicial complex \(K\) on the vertex set \([m]=\{1,\ldots,m\}\). The _moment-angle complex_ over \(K\) is the subspace of \((D^{2})^{m}\) defined by the polyhedral product
\[\mathcal{Z}_{K}=\bigcup_{\sigma\in K}(D^{2},S^{1})^{\sigma},\qquad(D^{2},S^{ 1})^{\sigma}=\left\{(z_{1},\ldots,z_{m})\in(D^{2})^{m}\,:\,z_{i}\in S^{1}\ \text{if}\ i\notin\sigma\right\}. \tag{2}\]
Note that the coordinatewise action of the torus \(T^{m}=(S^{1})^{m}\) on \((D^{2})^{m}\) restricts to an action of \(T^{m}\) on \(\mathcal{Z}_{K}\).
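Some standard examples, recalled for orientation: if \(K=\Delta^{m-1}\) is the full simplex on \([m]\), then (2) imposes no restriction and \(\mathcal{Z}_{K}=(D^{2})^{m}\); at the other extreme, if \(K=\{\varnothing\}\), then \(\mathcal{Z}_{K}=T^{m}\). If \(K\) consists of two disjoint points, then
\[\mathcal{Z}_{K}=(D^{2}\times S^{1})\cup(S^{1}\times D^{2})=\partial(D^{2}\times D^{2})\cong S^{3},\]
with \(T^{2}\) acting by restricting the coordinatewise action on \((D^{2})^{2}\).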
We also fix a field \(k\), and we write
\[S=k[v_{1},\ldots,v_{m}]\quad\text{and}\quad\Lambda=\Lambda(\iota_{1},\ldots, \iota_{m})\]
for the polynomial algebra and exterior algebra over \(k\), generated by variables in bijection with \([m]\). We think of the exterior variables \(\iota_{i}\) as dual to the polynomial variables \(v_{i}\). Both \(S\) and \(\Lambda\) are multigraded by \(\mathbb{N}^{m}\). In particular, when \(J=\{j_{1},\ldots,j_{r}\}\subseteq[m]\), we write \(v_{J}=v_{j_{1}}\cdots v_{j_{r}}\) for the monomial with (squarefree) multidegree \(J\).
The _Stanley-Reisner ring_ of \(K\) is the multigraded algebra
\[k[K]=S/(v_{i_{1}}\cdots v_{i_{q}}\,:\,\{i_{1},\ldots,i_{q}\}\notin K).\]
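For instance (a standard example, used again below): if \(K\) is the boundary of a square, with vertices \(1,2,3,4\) and edges \(\{1,2\},\{2,3\},\{3,4\},\{1,4\}\), then the minimal non-faces of \(K\) are the two diagonals \(\{1,3\}\) and \(\{2,4\}\), and
\[k[K]=k[v_{1},v_{2},v_{3},v_{4}]/(v_{1}v_{3},\,v_{2}v_{4}).\]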
The Koszul complex of \(k[K]\) is the multigraded dg algebra
\[\left(k[K]\otimes\Lambda(u_{1},\ldots,u_{m}),d\right)\]
with each \(u_{i}\) given homological degree \(1\) and multidegree \((0,\ldots,1,\ldots,0)\) (a \(1\) in the \(i\)th position), and with differential determined by \(d(u_{i})=v_{i}\) and the graded Leibniz rule. As before, for a subset \(J=\{j_{1},\ldots,j_{r}\}\subseteq[m]\) with \(j_{1}<\cdots<j_{r}\), we will use the notation \(u_{J}=u_{j_{1}}\wedge\cdots\wedge u_{j_{r}}\).
For each \(i\) there is a unique derivation \(\iota_{i}\) on the Koszul complex with (homological) degree \(-1\), determined by
\[\iota_{i}(u_{i})=1,\ \iota_{i}(u_{j})=0\ \text{if}\ i\neq j,\ \text{and}\ \iota_{i}(v_{j})=0\ \text{for all}\ j. \tag{3}\]
In other words \(\iota_{i}=\frac{\partial}{\partial u_{i}}\). A computation shows that \(\iota_{i}^{2}=0\) and \(\iota_{i}\iota_{j}+\iota_{j}\iota_{i}=0\), and it follows that these derivations give the Koszul complex the structure of a dg module over \(\Lambda=\Lambda(\iota_{1},\ldots,\iota_{m})\).
According to [7, Corollary 4.3.3], the quotient \(S\to k[K]\) provides an algebraic model for the map
\[H^{*}(BT^{m};k)\longrightarrow H^{*}_{T^{m}}(\mathcal{Z}_{K};k) \tag{4}\]
induced by the Borel fibration \(ET^{m}\times_{T^{m}}\mathcal{Z}_{K}\to BT^{m}\). Moreover, by [7, Theorem 4.5.4], the homology of the Koszul complex of \(k[K]\) computes the cohomology ring \(H^{*}(\mathcal{Z}_{K};k)\) of the corresponding moment-angle complex; see Lemma 3.1 below for a more precise statement. In the next section we will use this to describe concretely a (cochain-level) model for the \(\Lambda\)-module structure on \(H^{*}(\mathcal{Z}_{K};k)\), which is in some sense Koszul dual to (4).
Given two subsets \(I,J\subseteq[m]\) we frequently use the notation
\[\varepsilon(I,J)=|\{(i,j)\in I\times J:i>j\}|,\]
and when \(I=\{i_{0},\ldots,i_{n}\}\), we also use the short-hand \(\varepsilon(i_{0}\ldots i_{n},J)=\varepsilon(I,J)\). As we will see below, the signs \((-1)^{\varepsilon(I,J)}\) occur often in the combinatorics of simplicial complexes.
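To fix the meaning of this notation with a small example: \(\varepsilon(\{2,4\},\{1,3\})=|\{(2,1),(4,1),(4,3)\}|=3\), so \((-1)^{\varepsilon(\{2,4\},\{1,3\})}=-1\).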
## 3. \(\Lambda\)-module models for \(\mathcal{Z}_{K}\)
In this section we study exterior algebra module structures on the cochain complex and cohomology of a moment-angle complex \(\mathcal{Z}_{K}\) which are induced by the standard torus action on \(\mathcal{Z}_{K}\). We review two well-known tools for computing the cohomology of a moment-angle complex: the Koszul complex of the Stanley-Reisner ring \(k[K]\), and Hochster's formula. Regarding these as algebraic and combinatorial models for \(\mathcal{Z}_{K}\), respectively, we show that both models can be equipped with a compatible structure of a differential graded module over the exterior algebra \(\Lambda\).
Throughout, \(k\) continues to denote a field. All cochain and cohomology groups are taken with coefficients in \(k\), and all undecorated tensor products are taken over \(k\).
### Cellular decomposition of \(\mathcal{Z}_{K}\)
We begin by recalling a convenient cellular decomposition of \(\mathcal{Z}_{K}\) from [7, SS4.4]. Consider the subdivision of the unit disk \(D^{2}=\{z\in\mathbb{C}:|z|\leqslant 1\}\) into three cells where the \(0\)-skeleton is given by the point \(z=1\), the \(1\)-skeleton is given by the boundary circle \(S^{1}\subset D^{2}\), and the \(2\)-skeleton is given by the disk itself. Denoting the \(0\)-, \(1\)- and \(2\)-cell by \(1\), \(\mathbb{S}\) and \(\mathbb{D}\), respectively, and taking products yields a cellular decomposition of the polydisk \((D^{2})^{m}\) with cells parametrised by words \(\varkappa\in\{1,\mathbb{S},\mathbb{D}\}^{m}\) or, equivalently, pairs of subsets \(I,J\subseteq[m]\) with \(I\cap J=\varnothing\). To each such pair is associated the cell
\[\varkappa(I,J)=\big{\{}(z_{1},\ldots,z_{m})\in(D^{2})^{m}\,:\,z_{i}\in\mathbb{D}\ \text{for}\ i\in I,\ z_{j}\in\mathbb{S}\ \text{for}\ j\in J,\ z_{k}=1\ \text{for}\ k\notin I\cup J\big{\}}.\]
Then for any simplicial complex \(K\) on the vertex set \([m]\), \(\mathcal{Z}_{K}\) is the cellular subcomplex of \((D^{2})^{m}\) consisting of those cells \(\varkappa(I,J)\) with \(I\in K\).
The cellular cochain complex \(\mathcal{C}^{*}_{\text{cw}}(\mathcal{Z}_{K})\) has a basis consisting of cochains \(\varkappa(I,J)^{\vee}\) dual to the cells above with \(I\in K\), \(J\subseteq[m]\) and \(I\cap J=\varnothing\). Although the cellular cochain complex in general does not come with a functorial associative multiplication, Baskakov, Buchstaber and Panov [3, 6] have constructed a cellular approximation \(\widetilde{\Delta}_{K}\) to the diagonal map \(\Delta_{K}\colon\mathcal{Z}_{K}\to\mathcal{Z}_{K}\times\mathcal{Z}_{K}\) which is functorial in \(K\) and defines a cup product
\[\cup\colon\mathcal{C}^{*}_{\text{cw}}(\mathcal{Z}_{K})\otimes\mathcal{C}^{*}_{ \text{cw}}(\mathcal{Z}_{K})\stackrel{{\times}}{{\longrightarrow}} \mathcal{C}^{*}_{\text{cw}}(\mathcal{Z}_{K}\times\mathcal{Z}_{K})\stackrel{{ \widetilde{\Delta}_{K}}}{{\longrightarrow}}\mathcal{C}^{*}_{\text{cw}}( \mathcal{Z}_{K})\]
giving \(\mathcal{C}^{*}_{\rm cw}(\mathcal{Z}_{K})\) the structure of a commutative differential graded algebra.
### The reduced Koszul complex
The _reduced Koszul complex_ of \(k[K]\) is the quotient of the Koszul complex by the acyclic multigraded ideal of elements of non-squarefree multidegree:
\[R(K)=\big{(}\Lambda(u_{1},\ldots,u_{m})\otimes k[K]\big{)}/(v_{i}^{2},v_{i}u_{ i}),\quad d(u_{i})=v_{i}. \tag{5}\]
The differential of the Koszul complex induces a well-defined differential on \(R(K)\), and moreover the quotient map
\[\big{(}\Lambda(u_{1},\ldots,u_{m})\otimes k[K],d\big{)}\stackrel{{ \simeq}}{{\longrightarrow}}R(K) \tag{6}\]
is a quasi-isomorphism of dg algebras, see [7, Lemma 3.2.6]. In particular,
\[H_{*}\big{(}\Lambda(u_{1},\ldots,u_{m})\otimes k[K],d\big{)}\cong H_{*}(R(K)) \cong\operatorname{Tor}^{S}_{*}(k[K],k). \tag{7}\]
The quotient map (6) is an isomorphism when restricted to any squarefree multidegree. Making this identification, for any \(U\subseteq[m]\) we write
\[R(K)_{i,U}\coloneqq\big{(}k[K]\otimes\Lambda^{i}(u_{1},\ldots,u_{m})\big{)}_{ U}.\]
In particular, we may consider \(R(K)\) as a subcomplex of the Koszul complex
\[R(K)\stackrel{{\simeq}}{{\longrightarrow}}\big{(}\Lambda(u_{1}, \ldots,u_{m})\otimes k[K],d\big{)}, \tag{8}\]
by including all elements with squarefree multidegree. Under this inclusion, \(R(K)\) is a dg \(\Lambda\)-submodule of the Koszul complex, using the action \(\iota_{i}=\frac{\partial}{\partial u_{i}}\) defined in (3). We note that this inclusion is not a dg algebra homomorphism, and conversely the quotient (6) is not a dg \(\Lambda\)-module homomorphism.
In the next lemma we use the total (cohomological) grading \(R^{n}(K)=\bigoplus_{n=2|U|-i}R(K)_{i,U}\).
**Lemma 3.1** ([7, Lemma 4.5.3]).: _There is an isomorphism of differential graded algebras_
\[g\colon R(K)\longrightarrow\mathcal{C}^{*}_{\mathrm{cw}}(\mathcal{Z}_{K}),\qquad v_{I}u_{J}\longmapsto\varkappa(I,J)^{\vee}\]
_which is functorial in \(K\)._
The action of \(T^{m}\) on \(\mathcal{Z}_{K}\) equips the cohomology ring \(H^{*}(\mathcal{Z}_{K})\) with the structure of a graded module over an exterior algebra \(\Lambda=\Lambda(\iota_{1},\ldots,\iota_{m})\) on \(m\) generators of degree \(-1\). This can be lifted to an action of \(\Lambda\) on the cellular cochains of \(\mathcal{Z}_{K}\) as follows.
Regarding \(S^{1}\) as a CW-complex with the same \(0\)-cell and \(1\)-cell as \(D^{2}\), we have identifications \(\mathcal{C}^{*}_{\rm cw}(S^{1})=H^{*}(S^{1})=\Lambda(u)\), \(|u|=1\). Taking products yields a cellular decomposition of the torus which makes \(T^{m}=(S^{1})^{m}\) a cellular subcomplex of \((D^{2})^{m}\). With respect to these cell structures, it is easy to see that the standard action \(\mu\colon T^{m}\times\mathcal{Z}_{K}\to\mathcal{Z}_{K}\) is a cellular map and hence induces a map on cellular cochains.
For each \(j\in[m]\), let \(\mu_{j}\colon S^{1}_{j}\times\mathcal{Z}_{K}\to\mathcal{Z}_{K}\) be the coordinate circle action obtained by restricting the \(T^{m}\)-action to the \(j^{\rm th}\) factor. Identifying \(\mathcal{C}^{*}_{\rm cw}(S^{1}_{j}\times\mathcal{Z}_{K})\) with \(\Lambda(u_{j})\otimes\mathcal{C}^{*}_{\rm cw}(\mathcal{Z}_{K})\), each coordinate circle action \(\mu_{j}\) induces a map
\[\mu^{*}_{j}\colon\mathcal{C}^{*}_{\mathrm{cw}}(\mathcal{Z}_{K})\longrightarrow\Lambda(u_{j})\otimes\mathcal{C}^{*}_{\mathrm{cw}}(\mathcal{Z}_{K}),\qquad\alpha\longmapsto 1\otimes\alpha+u_{j}\otimes\iota_{j}\alpha, \tag{9}\]
which defines a graded derivation
\[\iota_{j}\colon\mathcal{C}^{*}_{\rm cw}(\mathcal{Z}_{K})\longrightarrow \mathcal{C}^{*-1}_{\rm cw}(\mathcal{Z}_{K})\]
for each \(j\in[m]\). Since \([\iota_{j},d]=\iota_{j}d+d\iota_{j}=0\), these maps induce graded derivations on the cohomology ring \(H^{*}(\mathcal{Z}_{K})\) having degree \(-1\), which we call _primary cohomology operations_ and also denote by \(\iota_{1},\ldots,\iota_{m}\). Moreover, since \([\iota_{i},\iota_{j}]=\iota_{i}\iota_{j}+\iota_{j}\iota_{i}=0\) for all \(i,j\in[m]\), the actions of these derivations extend to graded \(\Lambda\)-module structures on both \(\mathcal{C}^{*}_{\rm cw}(\mathcal{Z}_{K})\) and \(H^{*}(\mathcal{Z}_{K})\).
**Remark 3.2**.: Since \(\Lambda\) acts on the cohomology of \(\mathcal{Z}_{K}\) for every simplicial complex \(K\) on the vertex set \([m]\), and this action is functorial in \(K\), we think of \(\Lambda\) as an algebra of cohomology operations for moment-angle complexes. In Section 4 we will embed \(\Lambda\) into a larger algebra of _higher cohomology operations_.
**Lemma 3.3**.: _The map \(g\colon R(K)\to\mathcal{C}^{*}_{\mathrm{cw}}(\mathcal{Z}_{K})\) of Lemma 3.1 is an isomorphism of differential graded \(\Lambda(\iota_{1},\dots,\iota_{m})\)-modules._
Proof.: Since the generators of \(\Lambda\) act by derivations, it suffices by Lemma 3.1 to show that \(g\circ\iota_{i}\) and \(\iota_{i}\circ g\) agree on the generators of \(R(K)\) for each \(i=1,\dots,m\). By definition of the \(\Lambda\)-module structure on \(R(K)\), \(g\circ\iota_{i}\) vanishes on all generators except \(u_{i}\), in which case \((g\circ\iota_{i})(u_{i})=g(1)=\varkappa(\varnothing,\varnothing)^{\vee}\), the dual of the \(0\)-cell \((1,\dots,1)\in\mathcal{C}^{\mathrm{cw}}_{0}(\mathcal{Z}_{K};k)\). On the other hand, we have
\[(\iota_{i}\circ g)(v_{j})=\iota_{i}\varkappa(\{j\},\varnothing)^{\vee}\quad \text{and}\quad(\iota_{i}\circ g)(u_{j})=\iota_{i}\varkappa(\varnothing,\{j\} )^{\vee}.\]
Since \(v_{j}\in R^{2}(K)\) is closed for all \(j\), so is \(\iota_{i}\varkappa(\{j\},\varnothing)^{\vee}\). It follows that \(\iota_{i}\varkappa(\{j\},\varnothing)^{\vee}=0\) since the only cocycle in \(\mathcal{C}^{1}_{\mathrm{cw}}(\mathcal{Z}_{K})\) is trivial by Lemma 3.1. To see that \(\iota_{i}\varkappa(\varnothing,\{j\})^{\vee}\) equals \(\varkappa(\varnothing,\varnothing)^{\vee}\) for \(i=j\) and is trivial for \(i\neq j\), observe that the orbit of the \(0\)-cell \((1,\dots,1)\) under the action \(\mu_{i}\colon S^{1}_{i}\times\mathcal{Z}_{K}\to\mathcal{Z}_{K}\) is exactly the \(1\)-cell \(\varkappa(\varnothing,\{i\})\). Dualizing, it follows that
\[\mu_{i}^{*}\big{(}\varkappa(\varnothing,\{j\})^{\vee}\big{)}=\begin{cases}1 \otimes\varkappa(\varnothing,\{i\})^{\vee}+u_{i}\otimes\varkappa(\varnothing, \varnothing)^{\vee}&\text{if }i=j\\ 1\otimes\varkappa(\varnothing,\{j\})^{\vee}&\text{if }i\neq j.\end{cases}\]
Therefore by (9) we have
\[\iota_{i}\varkappa(\varnothing,\{j\})^{\vee}=\begin{cases}\varkappa( \varnothing,\varnothing)^{\vee}&\text{if }i=j\\ 0&\text{if }i\neq j,\end{cases}\]
which completes the proof.
### Combinatorial model for \(\mathcal{C}^{*}_{\mathrm{cw}}(\mathcal{Z}_{K})\)
The following celebrated result of Hochster interprets the Koszul homology of a Stanley-Reisner ring \(k[K]\) in terms of the simplicial cohomology groups of full subcomplexes of \(K\).
**Theorem 3.4** (Hochster's formula [23]).: _For each squarefree multidegree \(U\subseteq[m]\), there is an isomorphism of cochain complexes_
\[h\colon\big{(}k[K]\otimes\Lambda^{*}(u_{1},\dots,u_{m})\big{)}_{U}\longrightarrow\widetilde{C}^{|U|-*-1}(K_{U}),\qquad v_{I}u_{J}\longmapsto(-1)^{\varepsilon(I,U)}I^{\vee}\ \ (\text{where }I\cup J=U). \tag{10}\]
Note that, summing over all \(U\subseteq[m]\), the theorem above gives an isomorphism of complexes \(R(K)\cong\bigoplus_{U\subseteq[m]}\widetilde{C}^{*}(K_{U})\). Taking cohomology then yields the usual Hochster decomposition
\[\operatorname{Tor}^{S}_{*}(k[K],k)\cong\bigoplus_{U\subseteq[m]}\widetilde{ H}^{|U|-*-1}(K_{U}).\]
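To illustrate the decomposition (a computation added for illustration, using the standard identification \(\mathcal{Z}_{K}\cong S^{3}\times S^{3}\) for the boundary of a square): apart from \(\widetilde{H}^{-1}(K_{\varnothing})=k\), which accounts for \(\operatorname{Tor}_{0}\), the only full subcomplexes with nonvanishing reduced cohomology are \(K_{\{1,3\}}\) and \(K_{\{2,4\}}\), each a pair of points with \(\widetilde{H}^{0}=k\), and \(K_{[4]}=K\simeq S^{1}\) with \(\widetilde{H}^{1}=k\); every other full subcomplex is a simplex or a path, hence contractible. In the total grading \(n=2|U|-i\) this gives
\[H^{3}(\mathcal{Z}_{K})\cong\widetilde{H}^{0}(K_{\{1,3\}})\oplus\widetilde{H}^{0}(K_{\{2,4\}})\cong k^{2},\qquad H^{6}(\mathcal{Z}_{K})\cong\widetilde{H}^{1}(K_{[4]})\cong k,\]
in agreement with \(H^{*}(S^{3}\times S^{3})\).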
The next result describes the \(\Lambda\)-module structures on \(R(K)\cong\mathcal{C}^{*}_{\mathrm{cw}}(\mathcal{Z}_{K})\) and \(H^{*}(\mathcal{Z}_{K})\) in terms of the decompositions above.
**Lemma 3.5**.: _For each \(j\in[m]\) and each squarefree multidegree \(U\subseteq[m]\) containing \(j\), there are commutative diagrams_
\[\begin{array}{ccc}R(K)_{i,U}&\xrightarrow{\ h\ }&\widetilde{C}^{|U|-i-1}(K_{U})\\ {\scriptstyle\iota_{j}}\big\downarrow&&\big\downarrow\\ R(K)_{i-1,\,U\smallsetminus j}&\xrightarrow{\ h\ }&\widetilde{C}^{|U|-i-1}(K_{U\smallsetminus j})\end{array}\]
_where the horizontal maps are the isomorphisms of (10), and the right vertical map is \((-1)^{\varepsilon(j,U)+|U|-i}\) times the restriction map induced by the inclusion \(K_{U\smallsetminus j}\hookrightarrow K_{U}\). In particular, there are commutative diagrams_
\[\begin{array}{ccc}H^{n}(\mathcal{Z}_{K})&\xrightarrow{\ \cong\ }&\bigoplus_{U\subseteq[m]}\widetilde{H}^{*}(K_{U})\\ {\scriptstyle\iota_{j}}\big\downarrow&&\big\downarrow\\ H^{n-1}(\mathcal{Z}_{K})&\xrightarrow{\ \cong\ }&\bigoplus_{U\subseteq[m]}\widetilde{H}^{*}(K_{U})\end{array}\]
_where the right vertical map is a sum of maps induced by inclusions \(K_{U\smallsetminus j}\hookrightarrow K_{U}\) for \(j\in U\) (and is trivial on summands indexed by \(U\subseteq[m]\) with \(j\notin U\))._
Proof.: Let \(v_{I}u_{J}\in R(K)_{i,U}\) (so \(I\in K\), \(J\subseteq[m]\), \(I\cap J=\varnothing\) and \(I\cup J=U\)) and assume \(j\in U\). If \(j\in I\), then \(\iota_{j}(v_{I}u_{J})=0\), and \(h(v_{I}u_{J})=(-1)^{\varepsilon(I,U)}I^{\vee}\) is in the kernel of the restriction map \(\widetilde{C}^{*}(K_{U})\to\widetilde{C}^{*}(K_{U\smallsetminus j})\) since, dually, \(I\in\widetilde{C}_{*}(K_{U})\) is clearly not in the image of \(\widetilde{C}_{*}(K_{U\smallsetminus j})\to\widetilde{C}_{*}(K_{U})\) when \(j\in I\). On the other hand, if \(j\in J\), then following anticlockwise around the first diagram, we have
\[h\iota_{j}(v_{I}u_{J}) =h\big{(}(-1)^{\varepsilon(j,J)}v_{I}u_{J\smallsetminus j}\big{)}\] \[=(-1)^{\varepsilon(j,J)}(-1)^{\varepsilon(I,U\smallsetminus j)}I^{ \vee}. \tag{11}\]
Following clockwise around the diagram, the right vertical map sends \(h(v_{I}u_{J})=(-1)^{\varepsilon(I,U)}I^{\vee}\) to
\[(-1)^{\varepsilon(I,U)}(-1)^{\varepsilon(j,U)+|U|-|J|}I^{\vee}=(-1)^{ \varepsilon(I,U)}(-1)^{\varepsilon(j,U)+|I|}I^{\vee}. \tag{12}\]
To see that (11) and (12) are equal, observe that
\[\underbrace{\varepsilon(j,U)-\varepsilon(j,J)}_{\varepsilon(j,I)}+\underbrace {\varepsilon(I,U)-\varepsilon(I,U\smallsetminus j)}_{\varepsilon(I,j)}+|I|=|I|+| I|\equiv 0\ \text{mod}\ 2,\]
so the signs agree.
The second commutative diagram in the statement of the lemma is obtained from the first by passing to cohomology, summing over all \(i\in\mathbb{Z}\) and \(U\subseteq[m]\) with \(2|U|-i=n\), and absorbing signs into the horizontal isomorphisms.
**Remark 3.6**.: By Lemma 3.5, \(\bigoplus_{U\subseteq[m]}\widetilde{C}^{*}(K_{U})\) and hence \(\bigoplus_{U\subseteq[m]}\widetilde{H}^{*}(K_{U})\) can clearly be given \(\Lambda(\iota_{1},\dots,\iota_{m})\)-module structures by letting \(\iota_{j}\) act on simplicial cochains by the right vertical map in the diagram. With respect to these module structures, the Hochster decompositions
\[R(K)\cong\bigoplus_{U\subseteq[m]}\widetilde{C}^{*}(K_{U})\quad\text{ and }\quad\operatorname{Tor}^{S}(k[K],k)\cong\bigoplus_{U\subseteq[m]} \widetilde{H}^{*}(K_{U})\]
become isomorphisms of differential graded \(\Lambda(\iota_{1},\dots,\iota_{m})\)-modules.
Combining Lemma 3.3, Lemma 3.5 and Remark 3.6, we obtain the following.
**Corollary 3.7**.: _There is a zig-zag of isomorphisms of differential graded \(\Lambda(\iota_{1},\dots,\iota_{m})\)-modules_
\[\mathcal{C}^{*}_{\operatorname{cw}}(\mathcal{Z}_{K})\stackrel{{ g}}{{\longleftarrow}}R(K)\stackrel{{ h}}{{\longrightarrow}}\bigoplus_{U\subseteq[m]}\widetilde{C}^{*}(K_{U}),\]
_inducing graded \(\Lambda(\iota_{1},\dots,\iota_{m})\)-module isomorphisms_
\[H^{*}(\mathcal{Z}_{K})\cong\operatorname{Tor}^{S}(k[K],k)\cong\bigoplus_{U \subseteq[m]}\widetilde{H}^{*}(K_{U}).\]
### Higher cohomology operations
To any differential graded module \(N\) over \(\Lambda=\Lambda(\iota_{1},\dots,\iota_{m})\), one can associate a family of _higher cohomology operations_ acting on \(H^{*}(N)\), indexed by monomials:
\[\delta_{s},\quad s=v_{1}^{n_{1}}\cdots v_{m}^{n_{m}}\in S=k[v_{1},\dots,v_{m}].\]
In the context of Lie group actions and equivariant cohomology, these operations were introduced by Goresky, Kottwitz and MacPherson as obstructions to equivariant formality in [18]; see also Remarks 4.5 and 4.6 for other contexts in which analogous operations arise. For any monomial \(s=v_{1}^{n_{1}}\cdots v_{m}^{n_{m}}\), the operation \(\delta_{s}\) is of degree \(1-2\sum_{j=1}^{m}n_{j}\) and is well-defined as a map of the form
\[\bigcap_{t|s,t\neq s}\ker\delta_{t}\longrightarrow\frac{H^{*}(N)}{\sum_{t|s,t \neq s}\operatorname{im}\delta_{t}}, \tag{13}\]
see [18, Proposition 13.8]. In particular, the _primary operations_\(\delta_{v_{j}}\) are the degree \(-1\) maps
\[\delta_{v_{j}}\colon H^{*}(N)\longrightarrow H^{*-1}(N),\qquad[\alpha]\longmapsto[\iota_{j}\alpha]\]
induced by the \(\Lambda\)-module structure on \(N\).
If \([\alpha]\in H^{n}(N)\) and \(\delta_{v_{i}}[\alpha]=\delta_{v_{j}}[\alpha]=0\), then \(\iota_{i}\alpha=d\beta_{i}\) and \(\iota_{j}\alpha=d\beta_{j}\) for some \(\beta_{i},\beta_{j}\in N^{n-2}\). Note that
\[d(\iota_{i}\beta_{j}+\iota_{j}\beta_{i})=-\iota_{i}d\beta_{j}-\iota_{j}d\beta_{i}=-\iota_{i}\iota_{j}\alpha-\iota_{j}\iota_{i}\alpha=0\]
since \(\iota_{i}\iota_{j}+\iota_{j}\iota_{i}=0\). In this case, the _secondary operation_\(\delta_{v_{i}v_{j}}\), of degree \(-3\), is defined by
\[\delta_{v_{i}v_{j}}[\alpha]=[\iota_{i}\beta_{j}+\iota_{j}\beta_{i}]. \tag{14}\]
Assuming the primary operations \(\delta_{v_{i}}\), \(\delta_{v_{j}}\), \(\delta_{v_{k}}\) and secondary operations \(\delta_{v_{i}v_{j}}\), \(\delta_{v_{i}v_{k}}\), \(\delta_{v_{j}v_{k}}\) act trivially on \(H^{*}(N)\), the _tertiary operation_\(\delta_{v_{i}v_{j}v_{k}}\colon H^{*}(N)\to H^{*-5}(N)\) is defined by
\[\delta_{v_{i}v_{j}v_{k}}[\alpha]=[\iota_{i}\beta_{jk}+\iota_{j}\beta_{ik}+\iota _{k}\beta_{ij}],\]
where
\[d\beta_{ij}=\iota_{i}\beta_{j}+\iota_{j}\beta_{i},\qquad d\beta_{ik}=\iota_{i}\beta_{k}+\iota_{k}\beta_{i},\qquad d\beta_{jk}=\iota_{j}\beta_{k}+\iota_{k}\beta_{j}\] \[\text{and}\qquad d\beta_{i}=\iota_{i}\alpha,\qquad d\beta_{j}=\iota_{j}\alpha,\qquad d\beta_{k}=\iota_{k}\alpha\]
for some \(\beta_{i},\beta_{j},\beta_{k}\in N^{n-2}\) and \(\beta_{ij},\beta_{ik},\beta_{jk}\in N^{n-4}\).
Similar cochain-level formulas can be given for all higher operations: assuming representatives \(\delta_{t}(\alpha)\in N\) of \(\delta_{t}[\alpha]\) have been defined for all monomials \(t\) dividing \(s=v_{i_{1}}\cdots v_{i_{q}}\) (\(i_{1},\dots,i_{q}\) not necessarily distinct) with \(|t|<|s|\), and assuming \(\delta_{t}\) is trivial on \(H^{*}(N)\) for all such \(t\), then \(\delta_{s}[\alpha]\) is represented by
\[\sum_{\ell=1}^{q}\iota_{i_{\ell}}d^{-1}\delta_{s/v_{i_{\ell}}}(\alpha)\in N^{n -2q+1}, \tag{15}\]
where \(d^{-1}\delta_{s/v_{i_{\ell}}}(\alpha)\) denotes a choice of preimage of \(\delta_{s/v_{i_{\ell}}}(\alpha)\) satisfying a certain system of equations analogous to those above.
We will frequently make use of the fact that \(\delta_{s}\) is a well-defined map \(H^{*}(N)\to H^{*-|s|+1}(N)\) with no indeterminacy when the previous operations \(\delta_{t}\) with \(t|s\), \(t\neq s\), all vanish. This follows from [18, Proposition 13.8], cited above, which in turn follows from an identification of the higher operations with differentials in a spectral sequence associated to the Koszul dual dg \(S\)-module \(S\otimes N\) with differential \(1_{S}\otimes d+\sum_{j=1}^{m}v_{j}\otimes\iota_{j}\). (For cohomology operations on \(H^{*}(\mathcal{Z}_{K})\), this fact will also follow from the proof of Lemma 6.6, which identifies the higher operations with differentials in a different spectral sequence.) We include here a more direct proof for the special case of a secondary operation, which we will need in Section 6.
**Lemma 3.8**.: _Let \((N,d)\) be a differential graded \(\Lambda(\iota_{1},\dots,\iota_{m})\)-module. Suppose the primary operations \(\delta_{v_{i}}\) and \(\delta_{v_{j}}\) vanish on \(H^{\star}(N)\) for some \(i,j\in[m]\). Then the secondary operation (14) is a well-defined map_
\[\delta_{v_{i}v_{j}}\colon H^{\star}(N)\longrightarrow H^{\star-3}(N).\]
Proof.: Let \([\alpha]\in H^{n}(N)\). Since \(\delta_{v_{i}}=\delta_{v_{j}}=0\), the secondary operation \(\delta_{v_{i}v_{j}}[\alpha]\) is represented by \(\iota_{i}\beta_{j}+\iota_{j}\beta_{i}\) where \(\iota_{i}\alpha=d\beta_{i}\) and \(\iota_{j}\alpha=d\beta_{j}\). Suppose \(\beta^{\prime}_{i},\beta^{\prime}_{j}\in N^{n-2}\) also satisfy the equations \(\iota_{i}\alpha=d\beta^{\prime}_{i}\) and \(\iota_{j}\alpha=d\beta^{\prime}_{j}\). Then because \(\beta_{i}-\beta^{\prime}_{i}\) and \(\beta_{j}-\beta^{\prime}_{j}\) are closed, there must exist \(\omega,\omega^{\prime}\in N^{n-4}\) with \(\iota_{i}(\beta_{j}-\beta^{\prime}_{j})=d\omega\) and \(\iota_{j}(\beta_{i}-\beta^{\prime}_{i})=d\omega^{\prime}\) since \(\delta_{v_{i}}[\beta_{j}-\beta^{\prime}_{j}]=\delta_{v_{j}}[\beta_{i}-\beta^{ \prime}_{i}]=0\). Therefore
\[(\iota_{i}\beta_{j}+\iota_{j}\beta_{i})-(\iota_{i}\beta^{\prime}_{j}+\iota_{j }\beta^{\prime}_{i})=\iota_{i}(\beta_{j}-\beta^{\prime}_{j})+\iota_{j}(\beta_ {i}-\beta^{\prime}_{i})=d(\omega+\omega^{\prime}),\]
from which it follows that \([\iota_{i}\beta_{j}+\iota_{j}\beta_{i}]=[\iota_{i}\beta^{\prime}_{j}+\iota_{j }\beta^{\prime}_{i}]\in H^{n-3}(N)\) and \(\delta_{v_{i}v_{j}}\) is well-defined.
**Remark 3.9**.: Since the generators of \(\Lambda\) act by derivations on the cochains of a moment-angle complex \(\mathcal{Z}_{K}\), it can be shown that \(\delta_{v_{i}v_{j}}\) is in fact a well-defined derivation of the cohomology ring \(H^{\star}(\mathcal{Z}_{K})\) when the primary operations \(\delta_{v_{i}}\) and \(\delta_{v_{j}}\) are trivial (although we will not need this). Analogous computations establishing the well-definedness of tertiary and higher operations when lower degree operations vanish are similarly straightforward but tedious.
Taking \(N\) to be \(\mathcal{C}^{\star}_{\mathrm{cw}}(\mathcal{Z}_{K})\), we next describe how the higher operations interact with the multigrading on \(H^{\star}(\mathcal{Z}_{K})\). Below we identify \(H^{\star}(\mathcal{Z}_{K})\) with \(\bigoplus_{J\subseteq[m]}\widetilde{H}^{\star}(K_{J})\) via the equivalences of Corollary 3.7 and denote the multidegree \(J\) part of \(H^{\star}(\mathcal{Z}_{K})\) by \(\widetilde{H}^{\star}(K_{J})\).
**Lemma 3.10**.: _Let \(s=v_{i_{1}}\cdots v_{i_{q}}\in S\) be a monomial. Suppose the operation \(\delta_{t}\) acts trivially on \(H^{\star}(\mathcal{Z}_{K})\cong\bigoplus_{J\subseteq[m]}\widetilde{H}^{\star}(K _{J})\) for all \(t|s\) with \(|t|<|s|\), and consider the operation_
\[\delta_{s}\colon H^{\star}(\mathcal{Z}_{K})\longrightarrow H^{\star-2q+1}( \mathcal{Z}_{K}).\]
_If \(s\) is squarefree, then_
\[\delta_{s}\big{(}\widetilde{H}^{p}(K_{J})\big{)}\subseteq\widetilde{H}^{p-q+1} (K_{J\smallsetminus i_{1}\cdots i_{q}})\]
_for all \(J\subseteq[m]\) containing \(i_{1},\dots,i_{q}\). Otherwise, \(\delta_{s}=0\)._
Proof.: The claim holds for primary operations \(\delta_{v_{j}}=\iota_{j}\) by Lemma 3.5. Let \(s=v_{i_{1}}\cdots v_{i_{q}}\) be a squarefree monomial and assume inductively that for any \([\alpha]\in\widetilde{H}^{p}(K_{J})\) and \(1\leqslant\ell\leqslant q\), a representative \(\delta_{s/v_{i_{\ell}}}(\alpha)\) of the cohomology class \(\delta_{s/v_{i_{\ell}}}[\alpha]\) (as in (15)) can be chosen to lie in \(\widetilde{C}^{p-q+2}(K_{J\smallsetminus i_{1}\cdots\widehat{i_{\ell}}\cdots i _{q}})\). Then for any \(\alpha\in\widetilde{H}^{p}(K_{J})\), \(\delta_{s}[\alpha]\) is represented by a cochain of the form
\[\sum_{\ell=1}^{q}\iota_{i_{\ell}}d^{-1}\delta_{s/v_{i_{\ell}}}(\alpha)\in \bigoplus_{J\subseteq[m]}\widetilde{C}^{\star}(K_{J}),\]
where each preimage \(d^{-1}\delta_{s/v_{i_{\ell}}}(\alpha)\) can be chosen to lie in \(\widetilde{C}^{p-q+1}(K_{J\smallsetminus i_{1}\cdots\widehat{i_{\ell}}\cdots i_{q}})\). Therefore each term \(\iota_{i_{\ell}}d^{-1}\delta_{s/v_{i_{\ell}}}(\alpha)\) is of multidegree \(J\smallsetminus i_{1}\cdots i_{q}\), and \(\delta_{s}[\alpha]\in\widetilde{H}^{p-q+1}(K_{J\smallsetminus i_{1}\cdots i_{q}})\), as desired. Finally, if \(s\) is not squarefree, the triviality of \(\delta_{s}\) follows from the fact that \(\iota_{i_{\ell}}\) acts trivially on \(\widetilde{C}^{\star}(K_{U})\) when \(i_{\ell}\notin U\).
By Lemma 3.10, among the higher cohomology operations induced by the standard \(T^{m}\)-action on \(\mathcal{Z}_{K}\), we may restrict attention to those indexed by squarefree monomials in \(S=k[v_{1},\dots,v_{m}]\). (This is in contrast to the case of a general \(T^{m}\)-space; see Examples 5.11 and 5.12.) Identifying squarefree monomials \(s=v_{I}=v_{i_{1}}\cdots v_{i_{q}}\) with subsets \(I=\{i_{1},\dots,i_{q}\}\subseteq[m]\), we use the notation \(\delta_{I}=\delta_{i_{1}\cdots i_{q}}\) to denote \(\delta_{s}\).
In the next section we take a different approach to the higher cohomology operations for moment-angle complexes by rigidifying them into well-defined endomorphisms of \(H^{\star}(\mathcal{Z}_{K})\). These operations can then be used to describe explicit Hirsch-Brown models for these toric spaces, as has been done in a different context in [8].
## 4. Hirsch-Brown models from perturbation theory
In this section we explain how the minimal free resolution of the Stanley-Reisner ring can be constructed from the reduced Koszul complex using the homological perturbation lemma. As a consequence we obtain direct formulae for the cohomology operations \(\delta_{I}\) from Section 3.4, and we obtain an explicit Hirsch-Brown model for the action of any subtorus \(H\subseteq T^{m}\) on \(\mathcal{Z}_{K}\).
We fix, as usual, a simplicial complex \(K\) on a vertex set \([m]\), and we consider the reduced Koszul complex \(R(K)\) with its dg \(\Lambda\)-module structure as in Section 3.2. We need to introduce some notation to be used in the next lemma. The element
\[\rho=\sum_{i\in[m]}v_{i}\otimes\iota_{i}\in S\otimes\Lambda\]
defines by left multiplication a homological degree \(-1\), multigraded endomorphism
\[\rho\colon S\otimes R(K)\longrightarrow S\otimes R(K). \tag{16}\]
A direct computation shows that \((S\otimes d+\rho)^{2}=0\), where \(d\) is the differential on \(R(K)\). In other words, \(\rho\) defines a perturbation of the complex \(S\otimes R(K)\).
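To spell out this computation (a sketch; it uses only that each \(\iota_{i}\) anticommutes with \(d\), both being derivations whose anticommutator vanishes on the generators of \(R(K)\), and that \(\iota_{i}\iota_{j}=-\iota_{j}\iota_{i}\)):

\[(S\otimes d+\rho)^{2}=(S\otimes d)^{2}+\sum_{i\in[m]}v_{i}\otimes(d\iota_{i}+\iota_{i}d)+\sum_{i,j\in[m]}v_{i}v_{j}\otimes\iota_{i}\iota_{j}=0,\]

where the first two sums vanish termwise and the last cancels in pairs, since \(v_{i}v_{j}\) is symmetric in \(i\) and \(j\) while \(\iota_{i}\iota_{j}\) is antisymmetric.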
We recall that \(R(K)\) admits a basis of monomials \(v_{V}u_{U}\) indexed by disjoint subsets \(U,V\subseteq[m]\) with \(V\in K\), and we use this to define a linear map
\[\theta\colon R(K)\to k[K],\quad v_{V}\mapsto v_{V}\ \ \text{and}\ \ v_{V}u_{U}\mapsto 0\,\ \text{if}\ U\neq\varnothing. \tag{17}\]
In the next lemma \(\theta_{S}\colon S\otimes R(K)\to k[K]\) is the unique \(S\)-linear extension of \(\theta\).
**Lemma 4.1**.: _If \(S\otimes R(K)\) is equipped with the perturbed differential \(S\otimes d+\rho\), then the map \(\theta_{S}\colon S\otimes R(K)\to k[K]\) is a quasi-isomorphism of dg \(S\)-modules._
Proof.: We use the quasi-isomorphism
\[\theta^{\prime}\colon R(K)\stackrel{{\simeq}}{{\longrightarrow }}\big{(}\Lambda(u_{1},\ldots,u_{m})\otimes k[K],d\big{)}\]
of dg \(\Lambda\)-modules from (8) above. From this we obtain a chain map
\[\theta^{\prime}_{S}\colon\big{(}S\otimes R(K),S\otimes d+\rho\big{)}\to\big{(} S\otimes\Lambda(u_{1},\ldots,u_{m})\otimes k[K],S\otimes d+\rho\big{)},\]
giving both sides the differential perturbed by \(\rho\). Since \(\theta^{\prime}\) is a quasi-isomorphism and \(\theta^{\prime}=k\otimes_{S}\theta^{\prime}_{S}\), it follows that \(\theta^{\prime}_{S}\) is a quasi-isomorphism by the (derived) Nakayama Lemma.
We also make use of the surjective chain map
\[\theta^{\prime\prime}_{S}\colon\big{(}S\otimes\Lambda(u_{1},\ldots,u_{m}) \otimes k[K],S\otimes d+\rho\big{)}\to k[K],\quad f\otimes 1\otimes g\mapsto fg,\,f \otimes u_{U}\otimes g\mapsto 0\ \text{if}\ U\neq\varnothing.\]
It is well known that
\[\theta^{\prime\prime}_{S}\otimes_{k[K]}k\colon\big{(}S\otimes\Lambda(u_{1}, \ldots,u_{m}),\rho\big{)}\to k\]
is a quasi-isomorphism, as the left-hand side is the standard Koszul complex on the maximal homogeneous ideal of \(S\). It follows that \(\theta^{\prime\prime}_{S}\) is a quasi-isomorphism, again by the (derived) Nakayama Lemma. Finally, \(\theta_{S}\) factors as
\[S\otimes R(K)\stackrel{{\theta^{\prime}_{S}}}{{\longrightarrow }}S\otimes\Lambda(u_{1},\ldots,u_{m})\otimes k[K]\stackrel{{\theta^ {\prime\prime}_{S}}}{{\longrightarrow}}k[K].\]
From this we see that \(\theta_{S}\) is a chain map and that it is a quasi-isomorphism.
**Remark 4.2**.: The perturbed differentials appearing above can be interpreted within the framework of _twisted tensor products_, using the _twisting cochain_\(\tau\colon\Lambda(u_{1},\ldots,u_{m})\to S\), with \(u_{i}\mapsto v_{i}\) and \(u_{U}\mapsto 0\) if \(|U|\neq 1\); consult [27] for more on this theory. Even more classically, the perturbed (or twisted) tensor products \(S\otimes-\) and \(\Lambda(u_{1},\ldots,u_{m})\otimes-\) that we have used are the well-known BGG functors between dg \(\Lambda\)-modules and dg \(S\)-modules, realising the famous Bernstein-Gel'fand-Gel'fand correspondence [4].
Recall from (7) that the reduced Koszul complex satisfies \(H(R(K))\cong\operatorname{Tor}^{S}(k[K],k)\). In order to construct cohomology operations explicitly as well-defined endomorphisms of \(\operatorname{Tor}^{S}(k[K],k)\), we fix data that realises this isomorphism at the chain level.
**Definition 4.3**.: A _multigraded deformation retraction_ for \(R(K)\) is a diagram of maps, homogeneous with respect to multidegree,
\[\pi\colon R(K)\longrightarrow\operatorname{Tor}^{S}(k[K],k),\qquad\sigma\colon\operatorname{Tor}^{S}(k[K],k)\longrightarrow R(K),\qquad h\colon R(K)\longrightarrow R(K), \tag{18}\]
such that \(\pi\) and \(\sigma\) are degree zero chain maps satisfying \(\pi\sigma=1\), and \(h\) is a (homological) degree \(1\) homotopy with \(dh+hd=1-\sigma\pi\).
One can readily construct multigraded deformation retraction data: Within \(R(K)\) denote \(Z=\ker(d)\) and \(B=\operatorname{im}(d)\), and make the identification \(\operatorname{Tor}^{S}(k[K],k)=Z/B\). Next choose a splitting \(s_{1}\colon R(K)/Z\to R(K)\) of the surjection \(p_{1}\colon R(K)\to R(K)/Z\), and a splitting \(s_{2}\colon Z/B\to Z\) of the surjection \(p_{2}\colon Z\to Z/B\), both as \(\mathbb{Z}\times\mathbb{Z}^{m}\) graded vector spaces. Then we may take \(\sigma=s_{2}\) and \(\pi=p_{2}(1-s_{1}p_{1})\), and since \(ds_{1}\colon R(K)/Z\to B\) is an isomorphism we may take \(h=s_{1}(ds_{1})^{-1}\). These maps all together satisfy the conditions of Definition 4.3.
**Theorem 4.4**.: _Let \(K\) be a simplicial complex on the vertex set \([m]\) and choose a multigraded deformation retraction for the reduced Koszul complex \(R(K)\), as in (18). For each \(I\subseteq[m]\) we define an operation on \(\operatorname{Tor}^{S}(k[K],k)\) having homological degree \(-1\) and multidegree \(-I\) by the formula_
\[\partial_{I}=\sum_{I=\{i_{1},\ldots,i_{n}\}}\pi\iota_{i_{1}}h\iota_{i_{2}}h \cdots h\iota_{i_{n}}\sigma,\]
_where the sum is taken over all possible orderings of \(I\) (therefore having \(|I|!\) summands). These operations are the coefficients of the differential in the minimal (multigraded) free resolution of the Stanley-Reisner ring. In other words, there is a quasi-isomorphism of dg \(S\)-modules_
\[\big{(}S\otimes\operatorname{Tor}^{S}(k[K],k),d\big{)}\stackrel{\simeq}{\longrightarrow}k[K]\quad\text{ with }d=\sum_{I\subseteq[m]}v_{I}\otimes\partial_{I}.\]
We have used the notation \(\partial_{I}\) to distinguish these operations from the operations \(\delta_{I}\) defined in Section 3.4; however, we will show in Proposition 4.9 that they are essentially the same.
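For instance, in the two smallest cases the formula of Theorem 4.4 reads

\[\partial_{\{i\}}=\pi\iota_{i}\sigma\qquad\text{and}\qquad\partial_{\{i,j\}}=\pi\iota_{i}h\iota_{j}\sigma+\pi\iota_{j}h\iota_{i}\sigma,\]

so the primary operations involve no homotopies, while each further index contributes one factor of \(h\).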
Proof.: To begin with, we extend the chosen deformation retraction \(S\)-linearly, setting \(\sigma_{S}=S\otimes\sigma\), \(\pi_{S}=S\otimes\pi\) and \(h_{S}=S\otimes h\), where the complex \(S\otimes R(K)\) carries the differential \(S\otimes d\). This is again a deformation retraction, now of \(S\otimes R(K)\) onto \(S\otimes\operatorname{Tor}^{S}(k[K],k)\).
We make use of the homological perturbation lemma, whose history goes back at least to [21]; a convenient reference is [10]. Using the perturbation (16) and [10, 2.4] we obtain a perturbed deformation retraction of \(\big{(}S\otimes R(K),S\otimes d+\rho\big{)}\) onto \(\big{(}S\otimes\operatorname{Tor}^{S}(k[K],k),d^{\prime}\big{)}\), where

\[d^{\prime}=\sum_{n\geqslant 0}\pi_{S}\rho(h_{S}\rho)^{n}\sigma_{S},\qquad\pi^{\prime}_{S}=\pi_{S}+\sum_{n\geqslant 0}\pi_{S}\rho(h_{S}\rho)^{n}h_{S},\]
\[\sigma^{\prime}_{S}=\sigma_{S}+\sum_{n\geqslant 0}h_{S}\rho(h_{S}\rho)^{n}\sigma_{S},\qquad h^{\prime}_{S}=h_{S}+\sum_{n\geqslant 0}h_{S}\rho(h_{S}\rho)^{n}h_{S}.\]
In the given formula for \(d^{\prime}\), each of \(\pi_{S},\sigma_{S}\) and \(h_{S}\) are \(S\)-linear extensions of maps of vector spaces, and therefore all non-constant \(S\)-coefficients in \(d^{\prime}\) arise from \(\rho=\sum_{i\in[m]}\upsilon_{i}\otimes\iota_{i}\):
\[d^{\prime}=\sum_{n\geqslant 0}\sum_{i_{1},\ldots,i_{n}\in[m]}(v_{i_{1}}\cdots v_{i_{n}})\,\pi_{S}\iota_{i_{1}}h_{S}\iota_{i_{2}}h_{S}\cdots h_{S}\iota_{i_{n}}\sigma_{S}.\]
Since \(R(K)\) is nonzero only in squarefree multidegree, and since \(h_{S}\) has multidegree zero while each \(\iota_{i}\) has multidegree \((0,\ldots,-1,\ldots,0)\), the summands in the above series can only be nonzero when \(i_{1},\ldots,i_{n}\) are distinct. In other words, the summands are indexed by subsets \(I=\{i_{1},\ldots,i_{n}\}\), with each ordering contributing a different summand. This yields the expression \(d^{\prime}=\sum_{I\subseteq[m]}v_{I}\otimes\partial_{I}\).
Finally, using \(\theta_{S}\) from (17), the natural projection
\[\theta_{S}\sigma^{\prime}_{S}=\theta_{S}\sigma_{S}\colon\big{(}S\otimes\mathrm{ Tor}^{S}(k[K],k),d^{\prime}\big{)}\longrightarrow k[K]\]
is a quasi-isomorphism by Lemma 4.1.
**Remark 4.5**.: The system of cohomology operations on \(\mathrm{Tor}^{S}(k[K],k)\) constructed in Theorem 4.4 contains exactly the data needed to recover the derived tensor product \(k[K]\otimes_{S}^{\mathrm{L}}k\) up to a quasi-isomorphism of dg \(\Lambda\)-modules. In this sense, the cohomology operations play a similar role to \(\mathrm{A}_{\infty}\)-module structures. In fact, the operations \(\{\partial_{I}\}\) can be obtained by symmetrising the higher multiplications \(\{m_{n}\}\) in a choice of \(\mathrm{A}_{\infty}\)-\(\Lambda\)-module structure on \(\mathrm{Tor}^{S}(k[K],k)\). Continuing along these lines, it is possible to consider the data \(\{\partial_{I}\}\) as a kind of \(\infty\)-\(\Lambda\)-module structure with respect to the twisting cochain \(\tau\colon S^{\vee}\to\Lambda\); cf. [27].
**Remark 4.6**.: The same operations are also closely related to the systems of higher homotopies defined by Eisenbud, on resolutions of modules over local complete intersection rings [12]. More precisely, if \(A\) is a local ring with residue field \(k\), and \(B=A/(f_{1},\ldots,f_{m})\) is the quotient by a regular sequence, then for any \(B\)-module \(M\), there is a natural \(\Lambda(x_{1},\ldots,x_{m})\)-module structure on \(\mathrm{Tor}^{A}(M,k)\), with \(x_{i}\) having homological degree \(1\). A system of higher homotopies on the minimal \(A\)-free resolution of \(M\) (as in [12]) induces a system of cohomology operators on \(\mathrm{Tor}^{A}(M,k)\) extending its \(\Lambda(x_{1},\ldots,x_{m})\)-module structure, analogous to those in Theorem 4.4 but having different degrees.
These cohomology operations can be interpreted in terms of the equivariant topology of the moment-angle complex \(\mathcal{Z}_{K}\). This time we choose a multigraded deformation retraction for the cellular cochain algebra
\[\pi\colon\mathcal{C}^{*}_{\mathrm{cw}}(\mathcal{Z}_{K};k)\longrightarrow H^{*}(\mathcal{Z}_{K};k),\qquad\sigma\colon H^{*}(\mathcal{Z}_{K};k)\longrightarrow\mathcal{C}^{*}_{\mathrm{cw}}(\mathcal{Z}_{K};k),\qquad h\colon\mathcal{C}^{*}_{\mathrm{cw}}(\mathcal{Z}_{K};k)\longrightarrow\mathcal{C}^{*}_{\mathrm{cw}}(\mathcal{Z}_{K};k), \tag{19}\]
satisfying the same conditions as in (18). Using the isomorphism \(R(K)\cong\mathcal{C}_{\mathrm{cw}}^{*}(\mathcal{Z}_{K};k)\) of dg \(\Lambda\)-modules from Lemma 3.3, the statement of Theorem 4.4 translates to the corollary below. We first recall a definition from rational homotopy theory and the theory of transformation groups (generalised slightly to allow for field coefficients other than \(k=\mathbb{Q}\)).
For a \(T\)-space \(X\), although the \(S=H^{*}(BT;k)\)-module structure on \(H^{*}_{T}(X;k)=H^{*}(ET\times_{T}X;k)\) cannot in general be lifted to an action of \(S\) on \(C^{*}(ET\times_{T}X;k)\), the existence of a dg algebra quasi-isomorphism \(f\colon C^{*}(BT;k)\to H^{*}(BT;k)\) identifies the derived categories of dg \(C^{*}(BT;k)\)-modules and dg \(H^{*}(BT;k)\)-modules. Explicitly, the functor
\[-\otimes_{C^{*}(BT)}^{\mathrm{L}}H^{*}(BT)\colon D(C^{*}(BT))\longrightarrow D(H^{*}(BT)) \tag{20}\]
is an equivalence of categories with quasi-inverse given by the restriction of scalars along \(f\). Below, we identify \(C^{*}(ET\times_{T}X;k)\) with its image under the equivalence above.
**Definition 4.7**.: The _minimal Hirsch-Brown model_ of the action of a torus \(T\) on a space \(X\) is the dg \(S\)-module minimal model of \(C^{*}(ET\times_{T}X;k)\), where \(S=H^{*}(BT;k)\).
The minimal Hirsch-Brown model is therefore a semi-free dg \(S\)-module \((S\otimes V,d)\) that satisfies the minimality condition \(\mathrm{im}(d)\subseteq S^{>0}\otimes V\) and is quasi-isomorphic to (the image under (20) of) \(C^{*}(ET\times_{T}X;k)\) in the derived category of dg \(S\)-modules. When \(k=\mathbb{Q}\), the minimal Hirsch-Brown model can be defined as the dg \(S\)-module minimal model of a Sullivan model for the Borel construction \(ET\times_{T}X\). A convenient reference for the existence, uniqueness and properties of these models is [2, Appendix A].
**Corollary 4.8**.: _Let \(K\) be a simplicial complex on the vertex set \([m]\) and choose a multigraded deformation retraction for \(\mathcal{C}_{\mathrm{cw}}^{*}(\mathcal{Z}_{K};k)\), as in (19). The cohomology operations on \(H^{*}(\mathcal{Z}_{K};k)\) defined by_
\[\partial_{I}=\sum_{I=\{i_{1},\ldots,i_{n}\}}\pi\iota_{i_{1}}h\iota_{i_{2}}h\cdots h\iota_{i_{n}}\sigma\quad\text{ for }I\subseteq[m]\]
_assemble to yield the minimal Hirsch-Brown model of the action of \(T^{m}\) on \(\mathcal{Z}_{K}\):_
\[\big{(}S\otimes H^{*}(\mathcal{Z}_{K};k),d\big{)}\stackrel{{ \simeq}}{{\longrightarrow}}H^{*}_{T^{m}}(\mathcal{Z}_{K};k)\quad\text{ with }d=\sum_{I\subseteq[m]}v_{I}\otimes\partial_{I}.\]
Proof.: By Theorem 4.4, \((S\otimes H^{*}(\mathcal{Z}_{K};k),d)\) is quasi-isomorphic to \(H^{*}_{T^{m}}(\mathcal{Z}_{K};k)\cong k[K]\) as a dg \(S\)-module. It remains to show that \((S\otimes H^{*}(\mathcal{Z}_{K};k),d)\) is quasi-isomorphic to the image of \(C^{*}(ET^{m}\times_{T^{m}}\mathcal{Z}_{K};k)\) under (20), and for this we use a well-known formality result for \(ET^{m}\times_{T^{m}}\mathcal{Z}_{K}\). By [7, Theorem 4.3.2], the Borel fibration \(ET^{m}\times_{T^{m}}\mathcal{Z}_{K}\to BT^{m}\) can be identified up to homotopy with the natural inclusion \(DJ_{K}\to BT^{m}\) of the Davis-Januszkiewicz space\({}^{1}\) associated to \(K\) into \(BT^{m}=(\mathbb{C}P^{\infty})^{m}\). By [13] or [29], there is a dg algebra quasi-isomorphism \(C^{*}(DJ_{K};k)\to H^{*}(DJ_{K};k)\) which is functorial in \(K\). Since \(DJ_{\Delta^{m-1}}=BT^{m}\), the inclusion \(K\hookrightarrow\Delta^{m-1}\) gives rise to a commutative diagram

\[\begin{array}{ccc}C^{*}(BT^{m};k)&\longrightarrow&C^{*}(DJ_{K};k)\\ \big\downarrow&&\big\downarrow\\ H^{*}(BT^{m};k)&\longrightarrow&H^{*}(DJ_{K};k)\end{array}\]

in which the vertical maps are the functorial quasi-isomorphisms and the horizontal maps are induced by the inclusion \(DJ_{K}\hookrightarrow BT^{m}\).
Footnote 1: Here \(DJ_{K}\) is defined analogously to (2) as a polyhedral product corresponding to the pair of spaces \((\mathbb{C}P^{\infty},*)\) instead of \((D^{2},S^{1})\).
Since the quasi-isomorphism \(f\) defining (20) can be taken to be the left vertical map in the diagram above, this shows that the \(S\)-module \(H^{*}_{T^{m}}(\mathcal{Z}_{K};k)\cong H^{*}(DJ_{K};k)\) is mapped under the quasi-inverse of (20) (restricting scalars along \(f\)) to a \(C^{*}(BT^{m};k)\)-module that is quasi-isomorphic to \(C^{*}(ET^{m}\times_{T^{m}}\mathcal{Z}_{K};k)\cong C^{*}(DJ_{K};k)\). We now know that \((S\otimes H^{*}(\mathcal{Z}_{K};k),d)\) is quasi-isomorphic as a dg \(S\)-module to \(H^{*}_{T^{m}}(\mathcal{Z}_{K};k)\), which in turn is quasi-isomorphic to the image under (20) of \(C^{*}(ET^{m}\times_{T^{m}}\mathcal{Z}_{K};k)\), and this completes the proof.
The higher operations \(\delta_{I}\) were defined in Section 3.4 by Massey-type formulae, up to the indeterminacy explained in (13). The next result shows that these are essentially equal to the operations \(\partial_{I}\) defined in this section. In particular, the indeterminacy of the \(\delta_{I}\) can be removed by fixing a deformation retraction for \(\mathcal{C}_{\mathrm{cw}}^{*}(\mathcal{Z}_{K};k)\) as in (19), after which they assemble to yield the minimal Hirsch-Brown model for the \(T^{m}\)-action on \(\mathcal{Z}_{K}\).
**Proposition 4.9**.: _The operations \(\partial_{I}\) provide representatives for the higher operations \(\delta_{I}\) where they are defined. More precisely, for an index \(I\subseteq[m]\) and an element \(\alpha\in\mathrm{Tor}^{S}(k[K],k)\), if \(\delta_{J}(\alpha)=0\) for all \(J\subsetneq I\), then \(\delta_{I}(\alpha)=\partial_{I}(\alpha)\) modulo \(\sum_{J\subsetneq I}\mathrm{im}(\delta_{J})\)._
Proof.: If \(I=\{i\}\) then \(\partial_{i}=\delta_{i}=\iota_{i}\) by definition, and we proceed inductively. Suppose that \(\delta_{J}(\alpha)\) is defined and zero for all \(J\subsetneq I\). For each \(i\in I\) we can assume by induction that
\[\alpha_{I\smallsetminus i}=\sum_{I\smallsetminus i=\{i_{1},\ldots,i_{n}\}}\iota_{i _{1}}h\iota_{i_{2}}h\cdots h\iota_{i_{n}}\sigma(\alpha)\]
is a cycle representing \(\delta_{I\smallsetminus i}(\alpha)\). Since we are assuming \(\delta_{I\smallsetminus i}(\alpha)=0\), each \(\alpha_{I\smallsetminus i}\) is a boundary. Then we choose preimages \(d^{-1}\alpha_{I\smallsetminus i}\) under the differential, and by definition of the higher operations set \(\delta_{I}(\alpha)=[\sum_{i\in I}\iota_{i}d^{-1}\alpha_{I\smallsetminus i}]\). We are free to use \(h\alpha_{I\smallsetminus i}=d^{-1}\alpha_{I\smallsetminus i}\) as our preimages, in which case
\[\delta_{I}(\alpha)=\Big[\sum_{i\in I}\iota_{i}h\alpha_{I\smallsetminus i}\Big]=\pi\Big(\sum_{i\in I}\iota_{i}h\alpha_{I\smallsetminus i}\Big)=\sum_{I=\{i_{1},\ldots,i_{n}\}}\pi\iota_{i_{1}}h\iota_{i_{2}}h\cdots h\iota_{i_{n}}\sigma(\alpha)=\partial_{I}(\alpha),\]

using that \(\pi\) sends a cycle to its cohomology class under the identification \(H(R(K))\cong\operatorname{Tor}^{S}(k[K],k)\), and expanding each \(\alpha_{I\smallsetminus i}\) by its inductive formula.
We finish this section by describing Hirsch-Brown models for the actions of subtori \(H\subseteq T^{m}\) (that is, compact connected abelian subgroups) on moment-angle complexes.
Every one-parameter subgroup \(S^{1}\to T^{m}\) of the \(m\)-torus is of the form \(t\mapsto(t^{a_{1}},\ldots,t^{a_{m}})\) for some \(a_{1},\ldots,a_{m}\in\mathbb{Z}\). Using this we assign to each subtorus \(H\subseteq T^{m}\) an ideal of \(S\) that is generated by linear forms:
\[H\ \longmapsto\ \mathcal{J}_{H}=\bigg{(}\underset{i\in[m]}{\sum}b_{i}v_{i}\ :\ \underset{i\in[m]}{\sum}a_{i}b_{i}=0\text{ for all }S^{1}\to H,\ t\mapsto(t^{a_{1}},\ldots,t^{a_{m}})\bigg{)}. \tag{21}\]
Recalling that the cohomology ring of the classifying space \(BT^{m}=(\mathbb{C}P^{\infty})^{m}\) is given by the polynomial ring \(H^{*}(BT^{m})=S=k[v_{1},\ldots,v_{m}]\) with generators in degree \(2\), it is straightforward to see that the map \(H^{*}(BT^{m})\to H^{*}(BH)\) induced by the inclusion \(H\hookrightarrow T^{m}\) can be identified with the quotient map \(S\to S/\mathcal{J}_{H}\).
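As a quick illustration of (21): for the diagonal circle \(H=\{(t,t)\}\subseteq T^{2}\), every one-parameter subgroup of \(H\) has the form \(t\mapsto(t^{a},t^{a})\), so the condition \(ab_{1}+ab_{2}=0\) for all \(a\in\mathbb{Z}\) forces \(b_{1}=-b_{2}\), and hence

\[\mathcal{J}_{H}=(v_{1}-v_{2})\subseteq k[v_{1},v_{2}].\]

Similarly, the circle \(t\mapsto(t,t^{p})\) appearing in Example 5.19 below has \(\mathcal{J}_{H}=(pv_{1}-v_{2})\), which reduces to the coordinate ideal \(\mathcal{J}_{\{1\}}=(v_{2})\) when \(\operatorname{char}(k)=p\).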
**Theorem 4.10**.: _Let \(K\) be a simplicial complex on the vertex set \([m]\) and let_
\[\big{(}S\otimes H^{*}(\mathcal{Z}_{K};k),d\big{)}\overset{\simeq}{ \longrightarrow}H^{*}_{T^{m}}(\mathcal{Z}_{K};k)\]
_be the minimal Hirsch-Brown model for the \(T^{m}\)-action on \(\mathcal{Z}_{K}\). Then for any subtorus \(H\subseteq T^{m}\), taking the quotient by \(\mathcal{J}_{H}\) yields the minimal Hirsch-Brown model for the \(H\)-action on \(\mathcal{Z}_{K}\):_
\[\big{(}(S/\mathcal{J}_{H})\otimes H^{*}(\mathcal{Z}_{K};k),d\big{)}\overset{ \simeq}{\longrightarrow}C^{*}(EH\times_{H}\mathcal{Z}_{K};k).\]
Proof.: All cochain and cohomology groups are taken with coefficients in the field \(k\), which we suppress from the notation below. In the morphism of Borel fibrations
\[\begin{array}{ccccc}\mathcal{Z}_{K}&\longrightarrow&EH\times_{H}\mathcal{Z}_{K}&\longrightarrow&BH\\ \big\|&&\big\downarrow&&\big\downarrow\\ \mathcal{Z}_{K}&\longrightarrow&ET^{m}\times_{T^{m}}\mathcal{Z}_{K}&\longrightarrow&BT^{m}\end{array}\]

the right-hand square is a homotopy pullback. Taking cochains, the Eilenberg-Moore model for the pullback yields a quasi-isomorphism
\[C^{*}(EH\times_{H}\mathcal{Z}_{K})\simeq C^{*}(BH)\otimes_{C^{*}(BT^{m})}^{ \mathrm{L}}C^{*}(ET^{m}\times_{T^{m}}\mathcal{Z}_{K}) \tag{22}\]
in the homotopy category of dg \(C^{*}(BH)\)-modules. Since the given Hirsch-Brown model is a semi-free dg \(S\)-module resolution of \(H^{*}_{T^{m}}(\mathcal{Z}_{K})\), taking the quotient by \(\mathcal{J}_{H}\) computes the derived tensor product, giving a quasi-isomorphism of \(H^{*}(BH)=S/\mathcal{J}_{H}\)-modules
\[\big{(}(S/\mathcal{J}_{H})\otimes H^{*}(\mathcal{Z}_{K}),d\big{)}\simeq S/ \mathcal{J}_{H}\otimes_{S}^{\mathrm{L}}H^{*}_{T^{m}}(\mathcal{Z}_{K}). \tag{23}\]
We next construct a quasi-isomorphism between (22) and (23), implying the collapse of the Eilenberg-Moore spectral sequence of the homotopy pullback above. We identify \(ET^{m}\times_{T^{m}}\mathcal{Z}_{K}\) with \(DJ_{K}\) as was done in the proof of Corollary 4.8. As reviewed in [15, Section 4.1], the construction of the quasi-isomorphism \(f\colon C^{*}(BT^{m})\to H^{*}(BT^{m})\) used in the proof of Corollary 4.8 depends on a choice of chain representatives \(c_{1},\ldots,c_{m}\) for a basis \(x_{1},\ldots,x_{m}\) of \(H_{1}(T^{m})\) where each \(c_{i}\) lies in the \(i\)th coordinate circle factor of \(T^{m}\). Given another basis \(x^{\prime}_{1},\ldots,x^{\prime}_{m}\in H_{1}(T^{m})\), write \(x^{\prime}_{i}=\sum_{j}\alpha_{ij}x_{j}\) for each \(1\leqslant i\leqslant m\), and set \(c^{\prime}_{i}=\sum_{j}\alpha_{ij}c_{j}\). Then it follows from the construction in [15, Section 4.1] that the quasi-isomorphism \(f^{\prime}\colon C^{*}(BT^{m})\to H^{*}(BT^{m})\) corresponding to the chains \(c^{\prime}_{1},\ldots,c^{\prime}_{m}\) is equal to \(f\), while any other set of representatives \(\tilde{c}_{1},\ldots,\tilde{c}_{m}\), with \(\tilde{c}_{i}\) homologous to \(c^{\prime}_{i}\), leads to a quasi-isomorphism \(\tilde{f}\) which is homotopic to \(f^{\prime}\) as a map of dg algebras by [15, Proposition 4.4]. By fixing a decomposition into circles for each of the factors of \(T^{m}\cong H\times(T^{m}/H)\) and choosing a basis of \(H_{1}(T^{m})\) represented by chains \(\tilde{c}_{i}\) that lie in the \(i\)th circle factor for each \(1\leqslant i\leqslant m\), we obtain a quasi-isomorphism \(\tilde{f}\colon C^{*}(BT^{m})\to H^{*}(BT^{m})\) that is natural with respect to coordinatewise inclusions of subtori by [13, Theorem 5.3] and is homotopic to \(f\). We therefore have a diagram
\[\begin{array}{ccccc}C^{*}(BH)&\longleftarrow&C^{*}(BT^{m})&\longrightarrow&C^{*}(DJ_{K})\\ \big\downarrow&&\big\downarrow{\scriptstyle f}&&\big\downarrow\\ H^{*}(BH)&\longleftarrow&H^{*}(BT^{m})&\longrightarrow&H^{*}(DJ_{K})\end{array}\]

where the right square strictly commutes, the left square commutes up to a dg algebra homotopy, all vertical maps are quasi-isomorphisms of dg algebras and the horizontal maps are induced by the evident inclusions. This diagram induces the desired quasi-isomorphism between (22) and (23). Moreover, by restricting scalars along the left vertical map, the derived tensor product (23) becomes a \(C^{*}(BH)\)-module quasi-isomorphic to (22), so it follows that \(\big{(}(S/\mathcal{J}_{H})\otimes H^{*}(\mathcal{Z}_{K}),d\big{)}\) is the minimal Hirsch-Brown model for the \(H\)-action on \(\mathcal{Z}_{K}\).
**Remark 4.11**.: A space \(X\) with an \(H\)-action is said to be _MOD-formal_ (over \(k\)) if \(C^{*}(EH\times_{H}X;k)\) and \(H^{*}_{H}(X;k)\) are isomorphic in the derived category of dg \(H^{*}(BH;k)\)-modules; this condition was investigated by Amann and Zoller in connection with the Toral Rank Conjecture [2]. The proof of Corollary 4.8 shows that the \(T^{m}\)-action on \(\mathcal{Z}_{K}\) is MOD-formal (since \(DJ_{K}\simeq ET^{m}\times_{T^{m}}\mathcal{Z}_{K}\) is a formal space). However this property is not inherited by subtori; see [2, Example 7.1].
Given a subset \(I\subseteq[m]\), we will write
\[T^{I}=\big{\{}(t_{1},\ldots,t_{m})\,:\,t_{i}=1\text{ for }i\notin I\big{\}} \quad\text{and}\quad\mathcal{J}_{I}=\big{(}v_{i}\,:\,i\notin I\big{)} \tag{24}\]
for the _coordinate subtorus_ of \(T^{m}\) specified by \(I\), and the corresponding ideal of \(S\). The next result follows by combining Corollary 4.8 with Theorem 4.10.
**Corollary 4.12**.: _Let \(K\) be a simplicial complex on the vertex set \([m]\) and choose a multigraded deformation retraction for \(\mathcal{C}^{*}_{\mathrm{cw}}(\mathcal{Z}_{K};k)\), resulting in cohomology operations \(\partial_{J}\) as in Corollary 4.8. For any subset \(I\subseteq[m]\), the minimal Hirsch-Brown model for the \(T^{I}\)-action on \(\mathcal{Z}_{K}\) is given by_
\[\big{(}(S/\mathcal{J}_{I})\otimes H^{*}(\mathcal{Z}_{K};k),d\big{)}\stackrel{{ \simeq}}{{\longrightarrow}}C^{*}(ET^{I}\times_{T^{I}}\mathcal{Z}_{K};k) \quad\text{ with }d=\sum_{J\subseteq I}v_{J}\otimes\partial_{J}.\]
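For instance, when \(I=\{j\}\) is a single vertex, the model reduces to \(\big{(}k[v_{j}]\otimes H^{*}(\mathcal{Z}_{K};k),\,v_{j}\otimes\partial_{\{j\}}\big{)}\) with \(\partial_{\{j\}}=\pi\iota_{j}\sigma\) the map induced by \(\iota_{j}\). In particular, \(H^{*}_{S^{1}_{j}}(\mathcal{Z}_{K};k)\) is free over \(k[v_{j}]\) precisely when \(\iota_{j}\) acts trivially on \(H^{*}(\mathcal{Z}_{K};k)\), recovering the equivalence of conditions (a) and (b) in Theorem 5.9 below.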
Small models for \(H^{*}_{T^{I}}(\mathcal{Z}_{K})\) are described using Koszul complexes in [30] (see also [28]). For our purposes, an advantage of the Hirsch-Brown models above is that they make explicit the relationship between the equivariant and ordinary cohomology of \(\mathcal{Z}_{K}\). In particular, the Hirsch-Brown model in Corollary 4.12 (together with Proposition 4.9) shows that the freeness of \(H^{*}_{T^{I}}(\mathcal{Z}_{K})\) over \(H^{*}(BT^{I})\) is equivalent to the vanishing of certain cohomology operations on \(H^{*}(\mathcal{Z}_{K})\) coming from the \(T^{I}\)-action. This will be used in the next two sections to characterise equivariant formality for any (not necessarily coordinate) subtorus action on \(\mathcal{Z}_{K}\).
## 5. Equivariant formality and higher operations
In this section we explain how equivariant formality of subtorus actions on a moment-angle complex can be understood from the minimal free resolution of the corresponding Stanley-Reisner ring, and equivalently from the cohomology operations \(\delta_{I}\). The language of \(\mathcal{J}\)-closed ideals turns out to be a convenient lens through which to view this relationship.
### \(\mathcal{J}\)-closed modules
The class of \(\mathcal{J}\)-closed modules was introduced in the work of Diethorn on free resolutions in local algebra, as a natural generalisation of the notion of a weak complete intersection ideal [11]. As before, we write \(S=k[v_{1},\ldots,v_{m}]\), multigraded by \(\mathbb{N}^{m}\).
**Definition 5.1**.: Let \(\mathcal{J}\subseteq S\) be an ideal and let \(M\) be a multigraded \(S\)-module. Then \(M\) is _\(\mathcal{J}\)-closed_ if, in the minimal free resolution
\[\cdots\to F_{2}\stackrel{{ d_{2}}}{{\longrightarrow}}F_{1} \stackrel{{ d_{1}}}{{\longrightarrow}}F_{0}\to M\to 0,\]
the differentials satisfy \(d_{i}(F_{i})\subseteq\mathcal{J}F_{i-1}\) for all \(i\). A monomial ideal \(\mathcal{I}\subseteq S\) is called _\(\mathcal{J}\)-closed_ if \(S/\mathcal{I}\) is a \(\mathcal{J}\)-closed module. (In [11] it is assumed that \(\mathcal{J}\) is generated by a regular sequence--this will be the case in our main situation of interest, but we do not make that assumption here.)
Let \(\mathcal{I}(d_{i})\) denote the ideal generated by the entries of \(d_{i}\), when written as a matrix over \(S\) (this is sometimes called the first Fitting ideal of \(d_{i}\), and it is independent of the matrix representing \(d_{i}\)). If we also write \(\mathcal{I}(M)=\sum_{i}\mathcal{I}(d_{i})\), then by definition \(M\) is \(\mathcal{J}\)-closed if and only if \(\mathcal{I}(M)\subseteq\mathcal{J}\).
**Lemma 5.2**.: _Let \(\mathcal{J}\subseteq S\) be an ideal (not necessarily multigraded) and let_
\[\mathcal{J}^{\prime}=\bigoplus_{a\in\mathbb{N}^{m}}\mathcal{J}\cap S_{a}\subseteq S\]
_be the largest multigraded ideal contained in \(\mathcal{J}\). Then a multigraded \(S\)-module \(M\) is \(\mathcal{J}\)-closed if and only if it is \(\mathcal{J}^{\prime}\)-closed._
Proof.: Since \(M\) is multigraded, it has a multigraded minimal resolution, and therefore \(\mathcal{I}(M)\) is a multigraded ideal. From this it follows that \(\mathcal{I}(M)\subseteq\mathcal{J}\) if and only if \(\mathcal{I}(M)\subseteq\mathcal{J}^{\prime}\).
Equivariant formality is related to the \(\mathcal{J}\)-closed condition under the correspondence (21).
**Proposition 5.3**.: _Let \(K\) be a simplicial complex on vertex set \([m]\) and let \(H\subseteq T^{m}\) be a subtorus. Then the action of \(H\) on \(\mathcal{Z}_{K}\) is equivariantly formal over \(k\) if and only if the Stanley-Reisner ring \(k[K]\) is \(\mathcal{J}_{H}\)-closed as an \(S\)-module._
Proof.: Let \(F\) be the minimal free resolution of \(k[K]\) over \(S\). By Theorem 4.10 there is an isomorphism \(H^{*}(F/\mathcal{J}_{H}F)\cong H^{*}_{H}(\mathcal{Z}_{K};k)\). Since \(F/\mathcal{J}_{H}F\) is a minimal complex of free \(S/\mathcal{J}_{H}\)-modules, the only way it can have free homology is if its differentials vanish entirely. This happens exactly when \(d(F)\subseteq\mathcal{J}_{H}F\), that is, when \(k[K]\) is \(\mathcal{J}_{H}\)-closed.
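To illustrate Proposition 5.3, take \(K=\partial\Delta^{1}\), so that \(k[K]=S/(v_{1}v_{2})\) has minimal free resolution \(0\to S\xrightarrow{\,v_{1}v_{2}\,}S\to k[K]\to 0\). For the diagonal circle \(H\subseteq T^{2}\) we have \(\mathcal{J}_{H}=(v_{1}-v_{2})\) (see the computation following (21)), and \(v_{1}v_{2}\notin(v_{1}-v_{2})\) since setting \(v_{2}=v_{1}\) gives \(v_{1}^{2}\neq 0\) in \(S/(v_{1}-v_{2})\cong k[v_{1}]\). Hence \(k[K]\) is not \(\mathcal{J}_{H}\)-closed, and the Hopf action on \(\mathcal{Z}_{K}\cong S^{3}\) is not equivariantly formal, consistent with Example 5.11 below.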
### Reduction to coordinate subtori
In this section we show that, for the purpose of answering Question 1, it suffices to study the coordinate subtorus actions on moment-angle complexes. For these torus actions we give some characterisations of equivariant formality that will be used and generalised in subsequent sections.
**Definition 5.4**.: Let \(H\subseteq T^{m}\) be a subtorus. The _coordinate hull_ of \(H\) is the smallest coordinate subtorus of \(T^{m}\) containing \(H\),
\[\operatorname{hull}(H)=\Big{\{}(t_{1},\dots,t_{m})\;:\;t_{i}=1\text{ if }H \subseteq\prod_{j<i}S^{1}\times\{1\}\times\prod_{j>i}S^{1}\subseteq T^{m}\Big{\}}.\]
In positive characteristic we will need to use the following slightly different (possibly smaller) construction.
**Definition 5.5**.: Let \(H\subseteq T^{m}\) be a subtorus. For any prime number \(p\), the \(p\)_-coordinate hull_ of \(H\) is the coordinate subtorus of \(T^{m}\) given by
\[p\operatorname{-hull}(H)=\big{\{}(t_{1},\dots,t_{m})\;:\;t_{i}=1\text{ if }p|a_{i}\text{ for any map }S^{1}\to H,\ t\mapsto(t^{a_{1}},\dots,t^{a_{m}})\big{\}}.\]
**Proposition 5.6**.: _Let \(K\) be a simplicial complex on vertex set \([m]\) and let \(H\subseteq T^{m}\) be a subtorus._
1. _If_ \(k\) _is a field of characteristic zero, then the action of_ \(H\) _on_ \(\mathcal{Z}_{K}\) _is equivariantly formal over_ \(k\) _if and only if the action of_ \(\operatorname{hull}(H)\) _is equivariantly formal over_ \(k\)_._
2. _If_ \(k\) _is a field of characteristic_ \(p>0\)_, then the action of_ \(H\) _on_ \(\mathcal{Z}_{K}\) _is equivariantly formal over_ \(k\) _if and only if the action of_ \(p\operatorname{-hull}(H)\) _is equivariantly formal over_ \(k\)_._
Proof.: Taking one-parameter-subgroups, the inclusion \(H\subseteq T^{m}\) induces an inclusion of lattices
\[L_{H}=\operatorname{Hom}(S^{1},H)\subseteq\operatorname{Hom}(S^{1},T^{m})= \mathbb{Z}^{m}.\]
The smallest coordinate subspace of \(\mathbb{Q}^{m}\) containing \(L_{H}\otimes_{\mathbb{Z}}\mathbb{Q}\) is exactly \(L_{\operatorname{hull}(H)}\otimes_{\mathbb{Z}}\mathbb{Q}\), and likewise the smallest coordinate subspace of \(\mathbb{F}_{p}^{m}\) containing \(L_{H}\otimes_{\mathbb{Z}}\mathbb{F}_{p}\) is exactly \(L_{p\operatorname{-hull}(H)}\otimes_{\mathbb{Z}}\mathbb{F}_{p}\).
The inclusion \(L_{H}\subseteq\mathbb{Z}^{m}\) is dual to a surjection \(k^{m}\twoheadrightarrow\operatorname{Hom}(L_{H},k)\). This copy of \(k^{m}\) can be naturally identified with \(\operatorname{span}_{k}\{v_{1},\dots,v_{m}\}\), and we define the subspace
\[V_{H}=\ker\big{(}\operatorname{span}_{k}\{v_{1},\dots,v_{m}\}\twoheadrightarrow \operatorname{Hom}(L_{H},k)\big{)}.\]
Unraveling all these definitions, the ideal \(\mathcal{J}_{H}\subseteq S\) defined in (21) is the ideal generated by this subspace \(V_{H}\). By duality, when \(k\) has characteristic zero the largest coordinate subspace contained
in \(V_{H}\) is exactly \(V_{\mathrm{hull}(H)}\), and likewise when \(k\) has characteristic \(p\) the largest coordinate subspace contained in \(V_{H}\) is exactly \(V_{p\cdot\mathrm{hull}(H)}\). Combining these ingredients, the statement now follows from Lemma 5.2 and Proposition 5.3.
According to Proposition 5.6, the problem of determining which subtori \(H\subseteq T^{m}\) act equivariantly formally on \(\mathcal{Z}_{K}\) can be reduced to the case of the coordinate subtori \(H=T^{I}\), where \(I\subseteq[m]\), as in (24). We therefore focus on these actions from now on.
**Proposition 5.7**.: _Let \(K\) be a simplicial complex on \([m]\) and let \(I\subseteq[m]\). Then the following conditions are equivalent:_
* (a) _the_ \(T^{I}\)_-action on_ \(\mathcal{Z}_{K}\) _is equivariantly formal over_ \(k\)_;_
* (b) _the Stanley-Reisner ring_ \(k[K]\) _is_ \(\mathcal{J}_{I}\)_-closed;_
* (c) _the cohomology operations_ \(\delta_{U}\) _vanish on_ \(H^{*}(\mathcal{Z}_{K};k)\) _for all_ \(U\subseteq I\)_._
Proof.: The equivalence of (a) and (b) is contained in Proposition 5.3. By Theorem 4.4 and Proposition 4.9, the minimal free resolution of \(k[K]\) over \(S\) is of the form \(F=S\otimes H^{*}(\mathcal{Z}_{K})\) with differential \(d=\sum_{U\subseteq[m]}v_{U}\otimes\delta_{U}\), where \(v_{U}=\prod_{i\in U}v_{i}\). This means \(d(F)\subseteq\mathcal{J}_{I}F\) if and only if \(\delta_{U}=0\) whenever \(v_{U}\notin\mathcal{J}_{I}\). Note that \(v_{U}\notin\mathcal{J}_{I}\) if and only if \(U\subseteq I\), and altogether this shows that (b) is equivalent to (c). (The equivalence of (a) and (c) also follows immediately from Corollary 4.12 and Proposition 4.9.)
**Remark 5.8**.: The equivalence of condition (a) above with the vanishing of an infinite family of cohomology operations \(\delta_{s}\), \(s\in k[v_{i}:i\in I]\), is due to Goresky, Kottwitz and MacPherson [18]. We remark that the proof of Proposition 5.7 is quite different from the arguments in [18], and that in our more restrictive setting the finite family of operations appearing in (c) (indexed by squarefree monomials) form a complete set of obstructions for equivariant formality.
Combining Proposition 5.7 with Lemma 3.5 (or Katthän's combinatorial description of the linear part of the minimal free resolution of \(k[K]\) [24]) also yields the following characterisation of equivariant formality for coordinate circle actions in terms of the homology of full subcomplexes of \(K\). It will be generalised to arbitrary coordinate subtori in Theorem 6.8.
**Theorem 5.9**.: _Let \(K\) be a simplicial complex on vertex set \([m]\) and let \(j\in[m]\). Then the following conditions are equivalent:_
* (a) _the coordinate_ \(S^{1}_{j}\)_-action on_ \(\mathcal{Z}_{K}\) _is equivariantly formal over_ \(k\)_;_
* (b) _the derivation_ \(\iota_{j}\) _is trivial on_ \(H^{*}(\mathcal{Z}_{K};k)\)_;_
* (c) \(K_{J\smallsetminus j}\hookrightarrow K_{J}\) _induces the trivial map on_ \(\widetilde{H}^{*}(\;;k)\) _for all_ \(J\subseteq[m]\) _with_ \(j\in J\)_._
**Remark 5.10**.: It follows that \(H^{*}(\mathcal{Z}_{K})\) is a trivial \(\Lambda(\iota_{1},\dots,\iota_{m})\)-module if and only if the coordinate \(S^{1}_{j}\)-action on \(\mathcal{Z}_{K}\) is equivariantly formal for all \(j\in[m]\). This latter condition (the simultaneous equivariant formality of every coordinate circle action) is considered in [30], where some combinatorial characterisations are given in the case that \(K\) is flag or \(1\)-dimensional. (In the flag case, [30, Theorem 4.9] can readily be recovered from the results of Section 5.3 below.)
We emphasise that the equivariant formality of a circle action on a space is typically not equivalent to the vanishing of the primary cohomology operation induced by the action as in Theorem 5.9; in general, the vanishing of an infinite family of higher operations is also necessary (see [18, Section 13]). In the case considered above, Lemma 3.10 automatically implies that \(\delta_{v_{j}^{n}}=0\) for all \(n>1\), leaving the primary operation \(\delta_{v_{j}}=\iota_{j}\) as the only possible obstruction.
Below we give two examples of circle actions on manifolds inducing trivial primary operations where equivariant formality is obstructed by associated higher operations. The first example demonstrates that the vanishing of a primary operation alone is not sufficient for the equivariant formality of (non-coordinate) circle actions on moment-angle complexes.
**Example 5.11**.: Let \(K=\partial\Delta^{1}\) and observe that \(\mathcal{Z}_{K}=D^{2}\times S^{1}\cup S^{1}\times D^{2}\cong S^{3}\), so the \(\Lambda(\iota_{1},\iota_{2})\)-module structure on \(H^{*}(\mathcal{Z}_{K})\) is trivial for degree reasons. In this case, restricting the standard \(T^{2}\)-action to the diagonal circle \(S^{1}_{\text{diag}}=\{(t,t)\}\subseteq T^{2}\) yields the Hopf action on \(S^{3}\), and the primary operation induced by this circle action is \(\iota_{1}+\iota_{2}\). The fundamental class in \(H^{3}(\mathcal{Z}_{K})\) is represented by the cocycle \(v_{1}u_{2}\in R(K)=\big{(}k[v_{1},v_{2}]/(v_{1}v_{2})\otimes\Lambda(u_{1},u_{2 })\big{)}/(v_{i}^{2},v_{i}u_{i})\). Since there is no indeterminacy, the zig-zag
\[v_{1}u_{2}\xmapsto{\iota_{1}+\iota_{2}}v_{1}\xleftarrow{d}u_{1}\xmapsto{\iota_{1}+\iota_{2}}1\]
shows that the secondary operation \(H^{3}(\mathcal{Z}_{K})\to H^{0}(\mathcal{Z}_{K})\) induced by the \(S^{1}_{\text{diag}}\)-action maps \([v_{1}u_{2}]\) to [1], obstructing equivariant formality.
More generally, for any \(m\geqslant 2\), if \(K=\partial\Delta^{m-1}\) on the vertex set \([m]\), then the \(m\)-ary operation induced by the diagonal circle action on \(\mathcal{Z}_{K}\cong S^{2m-1}\) is given by the higher operation \(\delta_{[m]}\) and defines an isomorphism \(H^{2m-1}(\mathcal{Z}_{K})\to H^{0}(\mathcal{Z}_{K})\).
**Example 5.12**.: Consider the real solvable Lie algebra
\[L=\langle W,X,Y,Z\ :\ [W,X]=X,\,[W,Y]=-Y,\,[X,Y]=Z\rangle.\]
The corresponding simply connected solvable Lie group \(G\) admits a lattice \(\Gamma\), and the de Rham cohomology of the solvmanifold \(G/\Gamma\) is computed by the Lie algebra cohomology of \(L\) since the inclusion of left-invariant differential forms \(\Lambda L^{\vee}=\Omega_{G}^{*}(G)\hookrightarrow\Omega_{\Gamma}^{*}(G)= \Omega^{*}(G/\Gamma)\) is a quasi-isomorphism by a theorem of Hattori [22]. Let \((\Lambda L^{\vee},d)=(\Lambda(w,x,y,z),d)\) be the Chevalley-Eilenberg complex of \(L\) with \(|w|=|x|=|y|=|z|=1\) and differential determined by
\[d(x)=wx,\quad d(y)=-wy,\quad d(z)=xy.\]
It can be shown that \(S^{1}\) acts smoothly on \(G/\Gamma\) with fundamental vector field \(Z\) (see for example [5, Theorem 3.6]). The corresponding cohomology operation is induced by the degree \(-1\) derivation of the de Rham complex given by interior multiplication by \(Z\) (cf. [18, Section 10.5]). Let \(\iota_{Z}\colon\Lambda^{*}L^{\vee}\to\Lambda^{*-1}L^{\vee}\) denote the derivation defined by interior multiplication by \(Z\). Then a straightforward computation with the dg \(\Lambda(\iota_{Z})\)-module \((\Lambda(w,x,y,z),d)\) shows that \(H^{*}(G/\Gamma;\mathbb{R})\) together with all higher operations induced by the \(S^{1}\)-action is given by:
\[H^{0}=\big\langle[1]\big\rangle,\qquad H^{1}=\big\langle[w]\big\rangle,\qquad H^{2}=0,\qquad H^{3}=\big\langle[xyz]\big\rangle,\qquad H^{4}=\big\langle[wxyz]\big\rangle.\]

The primary operation induced by \(\iota_{Z}\) vanishes on all of \(H^{*}(G/\Gamma;\mathbb{R})\), while the secondary operations send \([wxyz]\mapsto\pm[w]\) and \([xyz]\mapsto\pm[1]\); these nontrivial higher operations obstruct the equivariant formality of the \(S^{1}\)-action.
### Flag complexes

**Lemma 5.13**.: _Let \(K\) be a flag complex on vertex set \([m]\) and let \(v\in[m]\). If \(\{i,v\}\in K\) for every vertex \(i\in[m]\), then \(K=K_{[m]\smallsetminus v}*\{v\}\)._
Proof.: Since \(K\) is flag, so is every full subcomplex of \(K\). It follows that the join of full subcomplexes \(K_{[m]\smallsetminus v}*\{v\}\) is flag. By definition, a flag complex is completely determined by its \(1\)-skeleton, so the result follows from the fact that \(K\) and \(K_{[m]\smallsetminus v}*\{v\}\) have the same \(1\)-skeleton by the assumption that \(v\) is connected by an edge to every vertex of \(K\).
**Lemma 5.14**.: _Let \(K\) be a flag complex on vertex set \([m]\) and let \(v\in[m]\). Then the following conditions are equivalent:_
* (a) \(K_{J\smallsetminus v}\hookrightarrow K_{J}\) _induces the trivial map on_ \(\widetilde{H}^{*}(\ ;k)\) _for all_ \(J\subseteq[m]\) _with_ \(v\in J\)_;_
* (b) \(K_{\{i,j\}}*\{v\}\subseteq K\) _for every missing edge_ \(\{i,j\}\notin K\) _with_ \(v\notin\{i,j\}\)_._
Proof.: That condition (a) implies condition (b) is clear since for any missing edge \(\{i,j\}\notin K\), the inclusion \(K_{\{i,j\}}\hookrightarrow K_{\{i,j,v\}}\) is nontrivial in reduced cohomology unless the vertex \(v\) cones over \(K_{\{i,j\}}=\partial\Delta^{1}\).
Conversely, suppose \(K_{J\smallsetminus v}\hookrightarrow K_{J}\) is nontrivial on \(\widetilde{H}^{*}(\ )\) for some \(J\subseteq[m]\), \(v\in J\). Then there must exist a vertex \(i\in J\) with \(\{i,v\}\notin K\) since otherwise \(K_{J}\) is a cone by Lemma 5.13 and hence contractible, contradicting the assumption. Similarly, there must exist some \(j\in J\smallsetminus v\) with \(\{i,j\}\notin K\) since otherwise \(K_{J\smallsetminus v}\) is the cone \(K_{J\smallsetminus\{v,i\}}*\{i\}\) by Lemma 5.13. Now since \(\{i,j\}\notin K\) and \(\{i,v\}\notin K\), we have \(K_{\{i,j\}}*\{v\}\not\subseteq K\).
While condition (a) above apparently involves the coefficient ring \(k\), condition (b) does not. As we will see next, this will mean that in the flag case the equivariant formality of a coordinate torus action is independent of \(k\) as well. This is not true in general; see Section 5.5 for examples.
**Theorem 5.15**.: _Let \(K\) be a flag complex on vertex set \([m]\) and let \(I\subseteq[m]\). The coordinate \(T^{I}\)-action on \(\mathcal{Z}_{K}\) is equivariantly formal (over \(k\)) if and only if \(I\in K\) and \(K_{\{i,j\}}*K_{I\smallsetminus\{i,j\}}\subseteq K\) for every missing edge \(\{i,j\}\notin K\)._
Proof.: Assume that the \(T^{I}\)-action on \(\mathcal{Z}_{K}\) is equivariantly formal. Then \(\delta_{U}=0\) for all \(U\subseteq I\) by Proposition 5.7. In particular, the secondary operation \(\delta_{ij}\) is trivial on \(H^{3}(\mathcal{Z}_{K})\) for all \(i,j\in I\). Since \(H^{3}(\mathcal{Z}_{K})\cong H^{3}(R(K))\) is spanned by \(\{[v_{i}u_{j}]\,:\,\{i,j\}\notin K\}\) and \(\delta_{ij}[v_{i}u_{j}]=1\) for each missing edge \(\{i,j\}\notin K\), it follows that \(K_{I}\) has no missing edges. Since \(K_{I}\) is flag, this implies \(K_{I}\) is a simplex, or equivalently \(I\in K\). Now to show that \(K_{\{i,j\}}*K_{I\smallsetminus\{i,j\}}\subseteq K\) for every \(\{i,j\}\notin K\), it suffices by flagness to show \(K_{\{i,j\}}*\{v\}\subseteq K\) for all \(v\in I\smallsetminus\{i,j\}\). Since the primary operation \(\delta_{v}=\iota_{v}\) is trivial for all \(v\in I\) by the assumption, this follows from Theorem 5.9 and Lemma 5.14.
Conversely, suppose that \(I\in K\) and \(K_{\{i,j\}}*K_{I\smallsetminus\{i,j\}}\subseteq K\) for every \(\{i,j\}\notin K\). Assume toward contradiction that the \(T^{I}\)-action on \(\mathcal{Z}_{K}\) is not equivariantly formal, and hence that \(\delta_{U}\neq 0\) for some \(U\subseteq I\). Assuming without loss of generality that \(U\) is a minimal such subset of \(I\), it follows from Lemma 3.10 that \(\delta_{U}\) acts nontrivially on some multidegree \(J\) part of \(H^{*}(\mathcal{Z}_{K})\) with \(U\subseteq J\), defining a nontrivial map
\[\delta_{U}\colon\widetilde{H}^{p}(K_{J})\longrightarrow\widetilde{H}^{p-|U|+1 }(K_{J\smallsetminus U}).\]
In particular, neither \(K_{J}\) nor \(K_{J\smallsetminus U}\) are contractible. This implies that every \(j\in J\smallsetminus U\) is contained in a missing edge in \(K_{J\smallsetminus U}\), since otherwise \(K_{J\smallsetminus U}\) is a cone by Lemma 5.13 and is thus contractible. But every \(i\in U\) cones over every missing edge in \(J\smallsetminus U\) by the assumption. Therefore every \(i\in U\) is connected by an edge to every vertex \(j\in J\smallsetminus U\). Since every \(i\in U\) is also connected by an edge to every other vertex in \(U\) (as \(U\subseteq I\in K\)), it follows that \(K_{J}\) is a cone by Lemma 5.13, a contradiction.
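As a quick illustration, let \(K\) be the boundary of a pentagon on the vertex set \([5]\), a flag complex whose missing edges are \(\{1,3\}\), \(\{1,4\}\), \(\{2,4\}\), \(\{2,5\}\) and \(\{3,5\}\). For \(I=\{1\}\) the criterion fails at the missing edge \(\{2,4\}\): since \(\{1,4\}\notin K\) we have \(K_{\{2,4\}}*K_{\{1\}}\not\subseteq K\). By the cyclic symmetry of \(K\), no coordinate circle, and hence no nontrivial coordinate subtorus, acts equivariantly formally on the corresponding \(\mathcal{Z}_{K}\).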
**Remark 5.16**.: For any simplicial complex \(K\), the equivariant formality of the coordinate \(T^{I}\)-action on \(\mathcal{Z}_{K}\) implies that \(I\in K\). As in the proof above, this can be seen by observing that a missing face in \(K_{I}\) would imply that \(1\in H^{0}(\mathcal{Z}_{K})\) is in the image of some \(\delta_{U}\) with \(U\subseteq I\). This
also follows immediately from the well-known fact that equivariantly formal actions have fixed points. (Note that the \(T^{I}\)-action has fixed points precisely when \(I\in K\) by definition (2) of \(\mathcal{Z}_{K}\)).
An interesting consequence of Theorem 5.15 is that in the flag case the equivariant formality of a subtorus action on \(\mathcal{Z}_{K}\) is completely determined by the action of primary and secondary cohomology operations on the groups \(H^{4}(\mathcal{Z}_{K})\) and \(H^{3}(\mathcal{Z}_{K})\), respectively.
**Corollary 5.17**.: _Let \(K\) be a flag complex on vertex set \([m]\) and let \(I\subseteq[m]\). If_
\[\delta_{v}\colon H^{4}(\mathcal{Z}_{K})\to H^{3}(\mathcal{Z}_{K})\quad\text{ and }\quad\delta_{ij}\colon H^{3}(\mathcal{Z}_{K})\to H^{0}(\mathcal{Z}_{K})\]
_are trivial for all \(i,j,v\in I\), then \(\delta_{U}\) vanishes everywhere for all \(U\subseteq I\). In particular, a coordinate \(S^{1}_{v}\)-action on \(\mathcal{Z}_{K}\) is equivariantly formal if and only if \(\delta_{v}\) is trivial on \(H^{4}(\mathcal{Z}_{K})\)._
Proof.: It is straightforward to check that a secondary operation \(\delta_{ij}\) is nontrivial on \(H^{3}(\mathcal{Z}_{K})\) if and only if \(\{i,j\}\notin K\). Therefore if \(\delta_{ij}\) is trivial on \(H^{3}(\mathcal{Z}_{K})\) for all \(i,j\in I\), then \(K_{I}\) is a simplex by flagness, so \(I\in K\). If additionally \(\delta_{v}\) is trivial on \(H^{4}(\mathcal{Z}_{K})\) for all \(v\in I\), then \(K_{\{i,j\}}\hookrightarrow K_{\{i,j,v\}}\) is trivial in reduced cohomology for every \(\{i,j\}\notin K\) with \(v\in I\smallsetminus\{i,j\}\) by Lemma 3.5. This implies \(K_{\{i,j\}}*\{v\}\subseteq K\) for every \(\{i,j\}\notin K\) with \(v\in I\smallsetminus\{i,j\}\). It follows that both combinatorial conditions of Theorem 5.15 are satisfied, so the \(T^{I}\)-action on \(\mathcal{Z}_{K}\) is equivariantly formal.
The equivariant formality of the action of any subtorus \(H\subseteq T^{m}\) on a moment-angle complex \(\mathcal{Z}_{K}\) can be read off from the minimal free resolution of the Stanley-Reisner ring \(k[K]\) by Proposition 5.3. If \(K\) is flag, it follows from the above that the equivariant formality of the \(H\)-action can be read off from the first two differentials in the resolution alone.
### \(\mathcal{J}\)-closed edge ideals
We obtain as another consequence of Theorem 5.15 a simple classification of \(\mathcal{J}\)-closed edge ideals for any ideal \(\mathcal{J}\subseteq S=k[v_{1},\ldots,v_{m}]\) generated by linear forms.
Let \(G\) be a simple graph with vertex set \([m]\) and edge set \(E(G)\). The _edge ideal_ of \(G\) is the quadratic monomial ideal
\[\mathcal{I}(G)=\big{(}v_{i}v_{j}\,:\,\{i,j\}\in E(G)\big{)}\subseteq S.\]
The edge ideal of \(G\) is the Stanley-Reisner ideal of a simplicial complex associated to \(G\) called the _independence complex_\(\operatorname{Ind}(G)\), defined by
\[\operatorname{Ind}(G)=\{\sigma\subseteq[m]\,:\,\sigma\text{ is an independent set of }G\}.\]
Equivalently, \(\operatorname{Ind}(G)\) is the unique flag complex with \(1\)-skeleton the graph complement \(G^{c}\).
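For example, if \(G\) is the path with edges \(\{1,2\}\) and \(\{2,3\}\), then \(\mathcal{I}(G)=(v_{1}v_{2},v_{2}v_{3})\) and \(\operatorname{Ind}(G)\) has faces \(\varnothing\), \(\{1\}\), \(\{2\}\), \(\{3\}\) and \(\{1,3\}\); that is, \(\operatorname{Ind}(G)\) is the edge \(\{1,3\}\) together with the isolated vertex \(2\), the flag complex on the graph complement \(G^{c}\).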
Next, let \(\mathcal{J}\subseteq S\) be an ideal generated by linear forms. To classify all graphs \(G\) whose edge ideals are \(\mathcal{J}\)-closed, it suffices by Proposition 5.3 and Proposition 5.6 to assume that \(\mathcal{J}\) has the form \(\mathcal{J}_{I}=(v_{i}:i\notin I)\) for some subset \(I\subseteq[m]\).
**Corollary 5.18**.: _Let \(G\) be a simple graph on vertex set \([m]\) and let \(I\subseteq[m]\). Then the following conditions are equivalent:_
* (a) _the edge ideal_ \(\mathcal{I}(G)\) _is_ \(\mathcal{J}_{I}\)_-closed;_
* (b) \(d_{2}(F_{2})\subseteq\mathcal{J}_{I}F_{1}\) _and_ \(d_{1}(F_{1})\subseteq\mathcal{J}_{I}F_{0}\) _in the minimal free resolution_ \((F,d)\) _of_ \(S/\mathcal{I}(G)\)_;_
* (c) \(I\) _is an independent set of_ \(G\) _and_ \(\{i,v\},\{j,v\}\notin E(G)\) _for every edge_ \(\{i,j\}\in E(G)\) _and_ \(v\in I\smallsetminus\{i,j\}\)_._
Proof.: Since \(S/\mathcal{I}(G)\) is the Stanley-Reisner ring \(k[K]\) of the independence complex \(K=\operatorname{Ind}(G)\) and since \(S\otimes H^{4}(\mathcal{Z}_{K})\) and \(S\otimes H^{3}(\mathcal{Z}_{K})\) lie in homological degree \(2\) and \(1\), respectively, in the minimal free resolution \(F=S\otimes H^{*}(\mathcal{Z}_{K})\) (see Theorem 4.4 and Proposition 4.9), it follows from condition (b) that \(\delta_{v}\colon H^{4}(\mathcal{Z}_{K})\to H^{3}(\mathcal{Z}_{K})\) and \(\delta_{ij}\colon H^{3}(\mathcal{Z}_{K})\to H^{0}(\mathcal{Z}_{K})\) are trivial for \(i,j,v\in I\). By Corollary 5.17, this implies \(\delta_{U}=0\) for all \(U\subseteq I\), so \(\mathcal{I}(G)\) is \(\mathcal{J}_{I}\)-closed by Proposition 5.7. This shows (b) implies (a). Since (a) implies (b) by definition, (a) and (b) are equivalent.
To see that (a) and (c) are equivalent, first observe that (a) is equivalent to the equivariant formality of the \(T^{I}\)-action on \(\mathcal{Z}_{K}\) by Proposition 5.7, where \(K=\operatorname{Ind}(G)\). Since \(K\) is flag, this in
turn is equivalent to the condition that \(I\in K\) and \(K_{\{i,j\}}\ast\{v\}\subseteq K\) for every \(\{i,j\}\notin K\) with \(v\in I\smallsetminus\{i,j\}\) by Theorem 5.15. These combinatorial conditions translate precisely to condition (c) by definition of the independence complex of \(G\).
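To illustrate condition (c), let \(G\) again be the path with edges \(\{1,2\}\) and \(\{2,3\}\). The set \(I=\{2\}\) satisfies (c) vacuously, since every edge of \(G\) contains the vertex \(2\); correspondingly, the minimal free resolution

\[0\to S\xrightarrow{\binom{v_{3}}{-v_{1}}}S^{2}\xrightarrow{(v_{1}v_{2}\;\;v_{2}v_{3})}S\to S/\mathcal{I}(G)\to 0\]

has all of its entries in \(\mathcal{J}_{\{2\}}=(v_{1},v_{3})\). On the other hand, \(I=\{1,3\}\) fails (c): for the edge \(\{1,2\}\) and \(v=3\) we have \(\{2,3\}\in E(G)\).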
### Dependence of equivariant formality on the characteristic
In contrast with the flag case, we give here some examples of torus actions on moment-angle complexes where the equivariant formality of the action depends on the characteristic of \(k\). These examples show that a characterisation of equivariant formality for \(\mathcal{Z}_{K}\) purely in terms of the combinatorics of \(K\) (as in Theorem 5.15) cannot be expected in general.
**Example 5.19**.: Let \(K=\partial\Delta^{1}\) so that \(\mathcal{Z}_{K}\cong S^{3}\) (as in Example 5.11), and consider the subgroup \(S^{1}\to T^{2}\), \(t\mapsto(t,t^{p})\), where \(p\) is prime. The primary operation \(\iota_{1}+p\iota_{2}\) induced by this circle action is trivial in cohomology, but the cochain-level zig-zag
\[v_{1}u_{2}\xrightarrow{\iota_{1}+p\iota_{2}}pv_{1}\xleftarrow{d}pu_{1} \xrightarrow{\iota_{1}+p\iota_{2}}p\]
in the reduced Koszul complex \(R(K)\) shows that the associated secondary operation maps the fundamental class \([v_{1}u_{2}]\in H^{3}(\mathcal{Z}_{K};k)\) to \([p]\in H^{0}(\mathcal{Z}_{K};k)\). Since all tertiary and higher operations are trivial for degree reasons, it follows that this circle action is equivariantly formal over \(k=\mathbb{F}_{p}\) but not over \(k=\mathbb{Q}\).
**Remark 5.20**.: The example above illustrates the necessity of the \(p\)-coordinate hull construction used in Proposition 5.6. If \(H=S^{1}_{(1,p)}\subseteq T^{2}\) denotes the subtorus described above, then the coordinate hull of \(H\) is \(\operatorname{hull}(H)=T^{2}\), which does not act equivariantly formally (over any \(k\)) since the Stanley-Reisner ring \(H^{*}_{T^{2}}(\mathcal{Z}_{K};k)\cong k[v_{1},v_{2}]/(v_{1}v_{2})\) of \(K=\partial\Delta^{1}\) is not a free module over \(H^{*}(BT^{2};k)=k[v_{1},v_{2}]\). Thus, when working over \(k=\mathbb{F}_{p}\), the equivariant formality of the \(H\)-action is not detected by the smallest coordinate subtorus \(\operatorname{hull}(H)\) containing \(H\). In this example the \(p\)-coordinate hull is \(p\operatorname{-hull}(H)=S^{1}_{1}\), the first coordinate circle in \(T^{2}\), which does act equivariantly formally on \(\mathcal{Z}_{K}\cong S^{3}\).
The minimal \(6\)-vertex triangulation of \(\mathbb{R}P^{2}\) is the smallest simplicial complex \(K\) for which the associated moment-angle complex \(\mathcal{Z}_{K}\) has \(2\)-torsion in integral cohomology. (The homotopy type of this moment-angle complex is worked out in [19].) Using Lemma 3.5, it is straightforward to check that \(\iota_{j}\colon H^{6}(\mathcal{Z}_{K};k)\to H^{5}(\mathcal{Z}_{K};k)\) is nonzero for all \(j=1,\ldots,6\) and all fields \(k\) in this case. It follows that no coordinate \(T^{I}\)-action on \(\mathcal{Z}_{K}\) is equivariantly formal over any field \(k\).
The complex \(\hat{K}\) in the following example is obtained from the \(6\)-vertex triangulation \(K\) of \(\mathbb{R}P^{2}\) by introducing a seventh vertex which cones over all \(10\) minimal non-faces of \(K\). This introduces \(10\) new minimal non-faces containing the seventh vertex and has the effect that the primary cohomology operation \(\iota_{7}\) acts trivially on all cohomology groups except for \(H^{10}(\mathcal{Z}_{\hat{K}};k)\) when \(\operatorname{char}(k)=2\). According to the commutative diagrams of Lemma 3.5, \(\iota_{7}\colon H^{10}(\mathcal{Z}_{\hat{K}};k)\to H^{9}(\mathcal{Z}_{\hat{K} };k)\) is determined by the map
\[\widetilde{H}^{2}(\hat{K};k)\longrightarrow\widetilde{H}^{2}(\hat{K}_{\{1, \ldots,6\}};k)=\widetilde{H}^{2}(\mathbb{R}P^{2};k)\]
induced by the inclusion of full subcomplexes \(\hat{K}_{\{1,\ldots,6\}}\hookrightarrow\hat{K}_{\{1,\ldots,7\}}=\hat{K}\).
**Example 5.21**.: Consider the simplicial complex \(\hat{K}\) on \(7\) vertices with Stanley-Reisner ideal
\[\big{(}v_{124},v_{126}, v_{134},v_{135},v_{156},v_{235},v_{236},v_{245},v_{346},v_{456},\] \[v_{1237},v_{1257},v_{1367},v_{1457},v_{1467},v_{2347},v_{2467},v_ {2567},v_{3457},v_{3567}\big{)},\]
in \(S=k[v_{1},\ldots,v_{7}]\), where \(v_{i_{1}\cdots i_{q}}=v_{i_{1}}\cdots v_{i_{q}}\). One can verify with Macaulay2 [20] that \(v_{7}\) appears as an entry in the matrix representing the final differential in the minimal free resolution of the Stanley-Reisner ring \(\mathbb{F}_{2}[\hat{K}]\), whereas \(v_{7}\) does not appear as an entry in the differentials of the minimal free resolution of \(\mathbb{Q}[\hat{K}]\). In other words, for \(\mathcal{J}=(v_{1},\ldots,v_{6})\), the Stanley-Reisner ring \(k[\hat{K}]\) is \(\mathcal{J}\)-closed when \(k=\mathbb{Q}\) but not when \(k=\mathbb{F}_{2}\). Consequently, by Proposition 5.7, the coordinate \(S^{1}_{7}\)-action on \(\mathcal{Z}_{\hat{K}}\) is equivariantly formal over \(\mathbb{Q}\) but not over \(\mathbb{F}_{2}\).
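The computation can be set up as follows in Macaulay2 (a sketch; we simply list the generators above and compare the resolutions over \(\mathbb{Q}\) and over \(\mathbb{F}_{2}\)):

```
-- minimal free resolution of the Stanley-Reisner ring of K-hat;
-- run once over QQ and once over ZZ/2 and compare the differentials
S = QQ[v_1..v_7];   -- replace QQ by ZZ/2 for the characteristic 2 computation
I = ideal(
    v_1*v_2*v_4, v_1*v_2*v_6, v_1*v_3*v_4, v_1*v_3*v_5, v_1*v_5*v_6,
    v_2*v_3*v_5, v_2*v_3*v_6, v_2*v_4*v_5, v_3*v_4*v_6, v_4*v_5*v_6,
    v_1*v_2*v_3*v_7, v_1*v_2*v_5*v_7, v_1*v_3*v_6*v_7, v_1*v_4*v_5*v_7,
    v_1*v_4*v_6*v_7, v_2*v_3*v_4*v_7, v_2*v_4*v_6*v_7, v_2*v_5*v_6*v_7,
    v_3*v_4*v_5*v_7, v_3*v_5*v_6*v_7);
F = res(S^1/I);  -- minimal free resolution of S/I
F.dd             -- inspect the differentials for entries involving v_7
```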
## 6. Combinatorial models for the higher operations
In this section we describe the action of the higher cohomology operations for moment-angle complexes in terms of the Hochster decomposition \(H^{*}(\mathcal{Z}_{K})\cong\bigoplus_{U\subseteq[m]}\widetilde{H}^{*}(K_{U})\). This has already been done for the primary operations \(\delta_{i}=\iota_{i}\) (see Lemma 3.5). In general, an analogous description of the higher operations \(\delta_{U}\), for \(U\subseteq[m]\), purely in terms of the combinatorics of full subcomplexes of \(K\) (and avoiding the issue of indeterminacy) is only possible when the lower degree operations \(\delta_{V}\) all vanish, for \(V\subsetneq U\). This will lead in Section 6.3 to a characterisation of equivariantly formal torus actions in terms of the combinatorics of subcomplexes of \(K\), generalising Theorem 5.9.
### Secondary operations and the Mayer-Vietoris sequence
We begin by describing the action of the secondary cohomology operations \(\delta_{ij}\), since these admit a particularly simple description in terms of the Mayer-Vietoris long exact sequence. However, this will be generalised to all cohomology operations in Section 6.2, and that section does not rely on the results of this one.
Fix a multidegree \(J\subseteq[m]\) and assume that \(i,j\in J\) with \(i<j\). Consider the subcomplex \(K_{J\smallsetminus i}\cup K_{J\smallsetminus j}\) of \(K_{J}\). Since \(K_{J\smallsetminus i}\cap K_{J\smallsetminus j}=K_{J\smallsetminus ij}\), the cover \(\{K_{J\smallsetminus i},K_{J\smallsetminus j}\}\) of \(K_{J\smallsetminus i}\cup K_{J\smallsetminus j}\) gives rise to a Mayer-Vietoris sequence
\[\cdots\longrightarrow\widetilde{H}^{*-1}(K_{J\smallsetminus ij})\longrightarrow\widetilde{H}^{*}(K_{J\smallsetminus i}\cup K_{J\smallsetminus j})\longrightarrow\widetilde{H}^{*}(K_{J\smallsetminus i})\oplus\widetilde{H}^{*}(K_{J\smallsetminus j})\longrightarrow\widetilde{H}^{*}(K_{J\smallsetminus ij})\longrightarrow\cdots\]
by definition of the \(\Lambda\)-module structure on \(\bigoplus_{J\subseteq[m]}\widetilde{C}^{*}(K_{J})\) (see Remark 3.6), it follows that
\[\begin{split}\delta_{ij}[\alpha]&=\Big[(-1)^{\varepsilon(j,J)+|\alpha|+1}\iota_{i}\big(d^{-1}(\alpha|_{j})\big)+(-1)^{\varepsilon(i,J)+|\alpha|+1}\iota_{j}\big(d^{-1}(\alpha|_{i})\big)\Big]\\ &=\Big[(-1)^{\varepsilon(j,J)+|\alpha|+1}(-1)^{\varepsilon(i,J\smallsetminus j)+|\alpha|}\big(d^{-1}(\alpha|_{j})\big)\big|_{i}+(-1)^{\varepsilon(i,J)+|\alpha|+1}(-1)^{\varepsilon(j,J\smallsetminus i)+|\alpha|}\big(d^{-1}(\alpha|_{i})\big)\big|_{j}\Big]\\ &=\Big[(-1)^{\varepsilon(j,J)+\varepsilon(i,J\smallsetminus j)+1}\big(d^{-1}(\alpha|_{j})\big)\big|_{i}+(-1)^{\varepsilon(i,J)+\varepsilon(j,J\smallsetminus i)+1}\big(d^{-1}(\alpha|_{i})\big)\big|_{j}\Big].\end{split} \tag{27}\]
Observe that since \(i<j\), \(\varepsilon(i,J\smallsetminus j)=\varepsilon(i,J)\) while \(\varepsilon(j,J\smallsetminus i)=\varepsilon(j,J)-1\). Therefore the two terms in (27) have opposite signs. Comparing with (26), we conclude that \(\ell([\alpha])\) and \(\delta_{ij}[\alpha]\) are equal up to the indicated sign.
**Remark 6.2**.: The combinatorial interpretation of the secondary operation \(\delta_{ij}[\alpha]\) given above holds just as well under the weaker assumption that \(\iota_{i}\) and \(\iota_{j}\) vanish only on the summand \(\widetilde{H}^{|\alpha|}(K_{J})\) containing \([\alpha]\) and on \(\widetilde{H}^{|\alpha|-1}(K_{J\smallsetminus i})\) and \(\widetilde{H}^{|\alpha|-1}(K_{J\smallsetminus j})\). More generally, for any cohomology class \([\alpha]\in\widetilde{H}^{*}(K_{J})\) in the kernel of the primary operations \(\iota_{i}\) and \(\iota_{j}\), the secondary operation \(\delta_{ij}[\alpha]\) is defined up to indeterminacy analogous to the indeterminacy of a triple Massey product. In this case, a statement analogous to Proposition 6.1 still holds, with indeterminacy corresponding to the nonuniqueness of a choice of lift \(\ell\) in diagram (25).
It is well known that the connecting homomorphism in the Mayer-Vietoris sequence for an excisive triad \((X;U,V)\) is induced by a map of spaces, namely, the quotient map collapsing the ends of the double mapping cylinder \(U\cup((U\cap V)\times[0,1])\cup V\simeq X\) to form \(\Sigma(U\cap V)\). For the Mayer-Vietoris sequence above, we can see the connecting homomorphism \(\widetilde{H}^{*-1}(K_{J\smallsetminus ij})\to\widetilde{H}^{*}(K_{J\smallsetminus i }\cup K_{J\smallsetminus j})\) even more concretely, being induced by the inclusion \(K_{J\smallsetminus i}\cup K_{J\smallsetminus j}\hookrightarrow\Sigma K_{J \smallsetminus ij}\), where \(\Sigma K_{J\smallsetminus ij}\) is viewed as the union of the cones \(K_{J\smallsetminus ij}*\{i\}\) and \(K_{J\smallsetminus ij}*\{j\}\) (cf. Figure 1).
Thus, just as the primary operations \(\delta_{i}\) on \(H^{*}(\mathcal{Z}_{K})\) are determined by the maps \(K_{J\smallsetminus i}\hookrightarrow K_{J}\) for all \(J\subseteq[m]\), the secondary operations \(\delta_{ij}\) are essentially determined by the maps
\[K_{J\smallsetminus i}\cup K_{J\smallsetminus j}\hookrightarrow K_{J}\quad\text{ and } \quad K_{J\smallsetminus i}\cup K_{J\smallsetminus j}\hookrightarrow\Sigma K_{J \smallsetminus ij}. \tag{28}\]
In particular, for each \(J\subseteq[m]\) containing \(i,j\), there is a cofibration sequence
inducing the Mayer-Vietoris sequence, and when the composite of the two leftmost arrows is null homotopic, there exists an extension \(\ell\) inducing the lift in (25).
**Example 6.3**.: Let \(K\) be the simplicial complex on the vertex set [5] with minimal non-faces \(13\), \(14\), \(24\), \(25\) and \(345\). (\(K\) is obtained from the boundary of a pentagon by adding the edge \(35\).) Take \(J=[5]\) and consider the Mayer-Vietoris sequence associated to the cover \(\{K_{J\smallsetminus 1},K_{J\smallsetminus 3}\}\). In this case, \(K_{J\smallsetminus 1}\cup K_{J\smallsetminus 3}=K_{J}\), so the vertical map in (25) is an isomorphism. The map \(K_{J\smallsetminus 1}\cup K_{J\smallsetminus 3}\hookrightarrow\Sigma K_{J\smallsetminus 13}\) inducing the connecting homomorphism (pictured in Figure 1) is given up to homotopy by a map \(S^{1}\vee S^{1}\longrightarrow S^{1}\vee\{pt\}\) collapsing the second wedge summand to a point. If \(\alpha\in\widetilde{H}^{1}(S^{1}\vee S^{1})\cong\widetilde{H}^{1}(K_{J})\subset H^{*}(\mathcal{Z}_{K})\) is a generator for the first circle summand, then \(\alpha\) is in the kernel of the restriction maps \(\widetilde{H}^{1}(K_{J})\to\widetilde{H}^{1}(K_{J\smallsetminus 1})\) and \(\widetilde{H}^{1}(K_{J})\to\widetilde{H}^{1}(K_{J\smallsetminus 3})\), and hence \(\iota_{1}\alpha=\iota_{3}\alpha=0\). Moreover, since \(\alpha\) is clearly in the image of the connecting homomorphism, it follows that the multidegree \(J\) part of \(H^{*}(\mathcal{Z}_{K})\) supports a nontrivial secondary operation \(\delta_{13}\alpha\neq 0\).
**Remark 6.4**.: In terms of the minimal free resolution \(F\) of \(k[K]\), the maps induced by the inclusions of subcomplexes (28) yield a partial combinatorial interpretation for the quadratic component of the differential. Together with the description of the linear component of the differential given by [24, Theorem 1.1] (or Lemma 3.5), this amounts to a combinatorial interpretation of
the complex \(F/\mathfrak{m}^{3}F\) that partially answers Katthän's Question 4.2 of [24]. One can remove the indeterminacy by fixing chain level data as in Section 4, but to fully answer Katthän's question one would need to understand how to make these choices at the cohomology level, in terms of the inclusions (28) and Hochster's decomposition.
We also obtain the following characterisation of equivariant formality, extending Theorem 5.9 to the case of coordinate \(2\)-torus actions. Since this result will be further generalised in Theorem 6.8, we omit the proof. (See Section 6.3 for the definition of the face deletion \(K\smallsetminus F\).)
**Theorem 6.5**.: _Let \(K\) be a simplicial complex on vertex set \([m]\) and let \(I=\{i,j\}\subseteq[m]\) with \(i\neq j\). Then the following conditions are equivalent:_
1. _the coordinate_ \(T^{I}\)_-action on_ \(\mathcal{Z}_{K}\) _is equivariantly formal (over_ \(k\)_);_
2. \(\delta_{i}\)_,_ \(\delta_{j}\) _and_ \(\delta_{ij}\) _are trivial on_ \(H^{*}(\mathcal{Z}_{K};k)\)_;_
3. \(K_{J}\smallsetminus(I\cap J)\hookrightarrow K_{J}\) _induces the trivial map on_ \(\widetilde{H}^{*}(\;;k)\) _for all_ \(J\subseteq[m]\)_._
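To illustrate the third condition, take \(K\) and \(I=\{1,3\}\) as in Example 6.3 and \(J=[5]\). There we saw that
\[K_{J}\smallsetminus(I\cap J)=K_{J\smallsetminus 1}\cup K_{J\smallsetminus 3}=K_{J}\simeq S^{1}\vee S^{1},\]
so the inclusion \(K_{J}\smallsetminus(I\cap J)\hookrightarrow K_{J}\) is the identity map and is nontrivial on \(\widetilde{H}^{1}(\;;k)\). The coordinate \(T^{I}\)-action on \(\mathcal{Z}_{K}\) is therefore not equivariantly formal, consistent with the nontrivial secondary operation \(\delta_{13}\) exhibited in Example 6.3.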
### Higher operations and the Mayer-Vietoris spectral sequence
We generalise the results of the previous section to the case of an arbitrary coordinate subtorus, that is, the \(T^{I}\)-action on \(\mathcal{Z}_{K}\) for any \(I\subseteq[m]\). In place of the Mayer-Vietoris long exact sequence used to describe secondary cohomology operations in the \(|I|=2\) case, we will here identify all higher operations with differentials in a Mayer-Vietoris spectral sequence.
Fix \(I\subseteq[m]\) and consider the \(T^{I}\)-action on \(\mathcal{Z}_{K}\). To describe the higher cohomology operations induced by this torus action in terms of the Hochster decomposition \(H^{*}(\mathcal{Z}_{K})\cong\bigoplus_{J\subseteq[m]}\widetilde{H}^{*}(K_{J})\), we fix a multidegree \(J\subseteq[m]\) and, as before, consider the cover
\[\mathcal{U}_{I,J}=\{K_{J\smallsetminus i}\ :\ i\in I\cap J\}.\]
The _(ordered) Čech complex_ \(\check{C}^{*}(\mathcal{U}_{I,J},\widetilde{C}^{p})\) of the cover \(\mathcal{U}_{I,J}\) with coefficients in the presheaf \(\widetilde{C}^{p}\) is given by
\[\check{C}^{q}(\mathcal{U}_{I,J},\widetilde{C}^{p})=\bigoplus_{\begin{subarray}{c}i_{0}<\cdots<i_{q}\\ i_{0},\ldots,i_{q}\in I\cap J\end{subarray}}\widetilde{C}^{p}(K_{J\smallsetminus i_{0}\cdots i_{q}}).\]
For an element \(\omega\) of \(\check{C}^{q}(\mathcal{U}_{I,J},\widetilde{C}^{p})\), we write \(\omega_{i_{0}\cdots i_{q}}\) for its component in \(\widetilde{C}^{p}(K_{J\smallsetminus i_{0}\cdots i_{q}})\). The Čech differential is then defined by
\[\check{d}\colon\check{C}^{q-1}(\mathcal{U}_{I,J},\widetilde{C}^{p})\to\check{C}^{q}(\mathcal{U}_{I,J},\widetilde{C}^{p}),\qquad(\check{d}\omega)_{i_{0}\cdots i_{q}}=\sum_{\ell=0}^{q}(-1)^{\ell}\omega_{i_{0}\cdots\widehat{i_{\ell}}\cdots i_{q}}\big{|}_{K_{J\smallsetminus i_{0}\cdots i_{q}}}.\]
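For orientation, consider the smallest nontrivial case \(I\cap J=\{i,j\}\) with \(i<j\). The Čech complex then has only two columns,
\[\check{C}^{0}(\mathcal{U}_{I,J},\widetilde{C}^{p})=\widetilde{C}^{p}(K_{J\smallsetminus i})\oplus\widetilde{C}^{p}(K_{J\smallsetminus j}),\qquad\check{C}^{1}(\mathcal{U}_{I,J},\widetilde{C}^{p})=\widetilde{C}^{p}(K_{J\smallsetminus ij}),\]
with \((\check{d}\omega)_{ij}=\omega_{j}|_{K_{J\smallsetminus ij}}-\omega_{i}|_{K_{J\smallsetminus ij}}\), and the construction below recovers the Mayer-Vietoris setting of Section 6.1.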
We form the Čech double complex \(\check{C}^{q}(\mathcal{U}_{I,J},\widetilde{C}^{p})\), whose vertical differential is \((-1)^{p}\check{d}\), and whose horizontal differential \(d\colon\check{C}^{q}(\mathcal{U}_{I,J},\widetilde{C}^{p})\to\check{C}^{q}(\mathcal{U}_{I,J},\widetilde{C}^{p+1})\) is induced by the simplicial cochain differential. The inclusions \(K_{J\smallsetminus i}\hookrightarrow K_{J}\) induce a morphism from \(\widetilde{C}^{*}(K_{J})\) to the Čech double complex,
and with this we form the augmented Čech double complex \(a\check{C}^{*}(\mathcal{U}_{I,J},\widetilde{C}^{*})\):
(29)
Taking cohomology with respect to the horizontal differential yields the first page of the _augmented Mayer-Vietoris spectral sequence_\((E_{r}^{p,q},d_{r})\) associated to \(\mathcal{U}_{I,J}\), with
\[E_{1}^{p,-1}=\widetilde{H}^{p}(K_{J}),\qquad E_{1}^{p,q}=\bigoplus_{\begin{subarray} {c}i_{0}<\cdots<i_{q}\\ i_{0},\ldots,i_{q}\in I\cap J\end{subarray}}\widetilde{H}^{p}(K_{J\smallsetminus i _{0}\cdots i_{q}})\ \text{ for }q\geqslant 0. \tag{30}\]
(Equivalently, this is the spectral sequence associated to the filtration of the total complex of (29) by column degree.) The differential \(d_{1}\), being induced by the vertical differential in (29), therefore has components given up to sign by the primary cohomology operations \(\iota_{i}=\delta_{i}\) for \(i\in I\cap J\). The next result identifies the higher differentials \(d_{s}\) in this spectral sequence with the higher cohomology operations \(\delta_{U}\) indexed by subsets \(U\subseteq I\cap J\) with \(|U|=s\).
**Lemma 6.6**.: _Let \(K\) be a simplicial complex on the vertex set \([m]\), and fix \(I,J\subseteq[m]\). Suppose that \(d_{r}=0\) for \(1\leqslant r<s\) in the augmented Mayer-Vietoris spectral sequence associated to \(\mathcal{U}_{I,J}\). Then the differential \(d_{s}\colon E_{s}^{p,-1}\to E_{s}^{p-s+1,s-1}\) defines a map_
\[\widetilde{H}^{p}(K_{J})\longrightarrow\bigoplus_{i_{0}<\cdots<i_{s-1}} \widetilde{H}^{p-s+1}(K_{J\smallsetminus i_{0}\cdots i_{s-1}})\]
_with each component given by \((-1)^{\varepsilon(i_{0}\ldots i_{s-1},J)+p+s}\delta_{i_{0}\cdots i_{s-1}}\)._
Proof.: Let \([\alpha]\in\widetilde{H}^{p}(K_{J})\). Observe that \(d_{1}[\alpha]=\big{(}[\alpha|_{i}]\big{)}_{i\in I\cap J}\in\bigoplus_{i} \widetilde{H}^{p}(K_{J\smallsetminus i})\), where for each \(i\in I\cap J\) we have
\[[\alpha|_{i}]=(-1)^{\varepsilon(i,J)+p+1}[\iota_{i}\alpha]=(-1)^{\varepsilon (i,J)+p+1}\delta_{i}[\alpha]\]
by Remark 3.6. So \(d_{1}[\alpha]=0\) implies that there exists an element \((\beta_{i})_{i\in I\cap J}\in\bigoplus_{i}\widetilde{C}^{p-1}(K_{J\smallsetminus i})\) with \(d(\beta_{i})=\alpha|_{i}\) for each \(i\in I\cap J\), and \(d_{2}[\alpha]\) is then represented by \((-1)^{p-1}d\big{(}(\beta_{i})_{i\in I\cap J}\big{)}\), as indicated in the cochain-level zig-zag below.
Since the cohomology class of \((\beta_{j}|_{i}-\beta_{i}|_{j})_{i<j}\in\bigoplus_{i<j}\widetilde{C}^{p-1}(K_{J\smallsetminus ij})\) does not depend on the choice of \(d\)-preimages \(\beta_{i}\) of \(\alpha|_{i}\), we may assume for each \(i\in I\cap J\) that \(\beta_{i}=(-1)^{\varepsilon(i,J)+p+1}d^{-1}\iota_{i}\alpha\) for
some preimage \(d^{-1}\iota_{i}\alpha\) of \(\iota_{i}\alpha\). Therefore, for each component of \((\beta_{j}|_{i}-\beta_{i}|_{j})_{i<j}\), we have
\[\beta_{j}|_{i}-\beta_{i}|_{j} =(-1)^{\varepsilon(i,J\smallsetminus j)+p}\iota_{i}\beta_{j}-(-1)^{ \varepsilon(j,J\smallsetminus i)+p}\iota_{j}\beta_{i}\] \[=(-1)^{\varepsilon(i,J\smallsetminus j)+\varepsilon(j,J)+1}\iota_ {i}d^{-1}\iota_{j}\alpha-(-1)^{\varepsilon(j,J\smallsetminus i)+\varepsilon(i,J )+1}\iota_{j}d^{-1}\iota_{i}\alpha.\]
Since \(i<j\), it follows that \(\varepsilon(i,J\smallsetminus j)=\varepsilon(i,J)\) and \(\varepsilon(j,J\smallsetminus i)=\varepsilon(j,J)-1\), and hence
\[[\beta_{j}|_{i}-\beta_{i}|_{j}] =(-1)^{\varepsilon(i,J)+\varepsilon(j,J)+1}\left[\iota_{i}d^{-1} \iota_{j}\alpha+\iota_{j}d^{-1}\iota_{i}\alpha\right]\] \[=(-1)^{\varepsilon(i,J)+1}\delta_{ij}[\alpha].\]
Finally, it follows that each component of \(d_{2}[\alpha]\) is of the form
\[(-1)^{p-1}\left[\beta_{j}|_{i}-\beta_{i}|_{j}\right]=(-1)^{\varepsilon(ij,J)+ p}\delta_{ij}[\alpha],\]
as claimed.
Proceeding inductively, we assume that \(d_{r}=0\) for \(1\leqslant r<s\) and that the components of
\[d_{s}[\alpha]\in E_{s}^{p-s+1,s-1}\cong\bigoplus_{i_{0}<\cdots<i_{s-1}}\widetilde {H}^{p-s+1}(K_{J\smallsetminus i_{0}\cdots i_{s-1}})\]
are of the form \((-1)^{\varepsilon(i_{0}\ldots i_{s-1},J)+p+s}\delta_{i_{0}\cdots i_{s-1}}[\alpha]\). Now suppose \(d_{s}\) is also trivial. To ease notation, we will write \(\delta_{i_{0}\cdots i_{s-1}}(\alpha)\) for a cochain representative of \(\delta_{i_{0}\cdots i_{s-1}}[\alpha]\) defined recursively by
\[\delta_{i_{0}\cdots i_{s-1}}(\alpha)=\sum_{\ell=0}^{s-1}\iota_{i_{\ell}}d^{-1}\delta_{i_{0}\cdots\widehat{i_{\ell}}\cdots i_{s-1}}(\alpha)\in\widetilde{C}^{p-s+1}(K_{J\smallsetminus i_{0}\cdots i_{s-1}}),\]
where \(d^{-1}\delta_{i_{0}\cdots\widehat{i_{\ell}}\cdots i_{s-1}}(\alpha)\) denotes a choice of preimage of \(\delta_{i_{0}\cdots\widehat{i_{\ell}}\cdots i_{s-1}}(\alpha)\). Then \(d_{s}[\alpha]=0\) implies that there exists a zig-zag in the double complex (29) of the form
where \(d(\beta_{i_{0}\cdots i_{s-1}})=(-1)^{\varepsilon}\delta_{i_{0}\cdots i_{s-1}} (\alpha)\) with \(\varepsilon=\varepsilon(i_{0}\ldots i_{s-1},J)+p+s\) for each strictly increasing sequence \(i_{0},\ldots,i_{s-1}\in I\cap J\), and the top-left cochain is a representative of \(d_{s+1}[\alpha]\). Now for each strictly increasing sequence \(i_{0},\ldots,i_{s}\in I\cap J\) and for each \(0\leqslant\ell\leqslant s\), we have
\[\beta_{i_{0}\cdots\widehat{i_{\ell}}\cdots i_{s}}\big{|}_{i_{\ell}} =(-1)^{\varepsilon(i_{\ell},J\smallsetminus i_{0}\cdots\widehat{i_{\ell}}\cdots i_{s})+p-s+1}\iota_{i_{\ell}}\beta_{i_{0}\cdots\widehat{i_{\ell}}\cdots i_{s}}\] \[=(-1)^{\varepsilon(i_{\ell},J)+\ell+p-s+1}\iota_{i_{\ell}}\beta_{i_{0}\cdots\widehat{i_{\ell}}\cdots i_{s}}.\]
It follows that each component of \(d_{s+1}[\alpha]\) is represented by a cochain of the form
\[(-1)^{p-s}\sum_{\ell=0}^{s}(-1)^{\ell}\beta_{i_{0}\cdots\widehat{i_{\ell}}\cdots i_{s}}\big{|}_{i_{\ell}} =\sum_{\ell=0}^{s}(-1)^{\varepsilon(i_{\ell},J)+1}\iota_{i_{\ell}}\beta_{i_{0}\cdots\widehat{i_{\ell}}\cdots i_{s}}\] \[=\sum_{\ell=0}^{s}(-1)^{\varepsilon(i_{\ell},J)+\varepsilon(i_{0}\cdots\widehat{i_{\ell}}\cdots i_{s},J)+p+s+1}\iota_{i_{\ell}}d^{-1}\delta_{i_{0}\cdots\widehat{i_{\ell}}\cdots i_{s}}(\alpha)\] \[=(-1)^{\varepsilon(i_{0}\ldots i_{s},J)+p+s+1}\delta_{i_{0}\cdots i_{s}}(\alpha),\]
which closes the induction.
Taking \(J=[m]\) yields the augmented Mayer-Vietoris spectral sequence associated to the cover \(\mathcal{U}_{I}=\{K_{[m]\smallsetminus i}\ :\ i\in I\}\), and we will see next that this spectral sequence contains all of the higher operations as components in its differentials.
**Theorem 6.7**.: _Suppose that \(d_{r}=0\) for \(1\leqslant r<s\) in the augmented Mayer-Vietoris spectral sequence associated to \(\mathcal{U}_{I}\). Then the differential \(d_{s}\colon E_{s}^{p,q}\to E_{s}^{p-s+1,s+q}\) defines a map_
\[\bigoplus_{U\subseteq I,\ |U|=q+1}\widetilde{H}^{p}(K_{[m]\smallsetminus U})\longrightarrow\bigoplus_{V\subseteq I,\ |V|=q+s+1}\widetilde{H}^{p-s+1}(K_{[m]\smallsetminus V}),\]
_and the components are given by \((-1)^{\varepsilon(V\smallsetminus U,[m])+p+s}\delta_{V\smallsetminus U}\) when \(U\subseteq V\), and zero otherwise._
Proof.: Writing \(J=[m]\smallsetminus U\), there is a comparison map of augmented Čech double complexes
\[a\check{C}^{q}(\mathcal{U}_{I,J},\widetilde{C}^{p})\longrightarrow a\check{C}^{q+u}(\mathcal{U}_{I},\widetilde{C}^{p}),\]
increasing vertical degree by \(u=|U|\). On the component indexed by \(i_{0}\ldots i_{q}\) this is given by
\[\widetilde{C}^{p}(K_{J\smallsetminus i_{0}\ldots i_{q}})\xrightarrow{(-1)^{\varepsilon(i_{0}\ldots i_{q},U)}}\widetilde{C}^{p}(K_{[m]\smallsetminus i_{0}\ldots i_{q}U}),\]
using the equality \(K_{J\smallsetminus i_{0}\ldots i_{q}}=K_{[m]\smallsetminus i_{0}\ldots i_{q}U}\) to identify the two sides. On the first page of the associated spectral sequences this map restricts to the isomorphism
\[\widetilde{H}^{p}(K_{J\smallsetminus i_{0}\ldots i_{q}})\xrightarrow{(-1)^{\varepsilon(i_{0}\ldots i_{q},U)}}\widetilde{H}^{p}(K_{[m]\smallsetminus i_{0}\ldots i_{q}U}).\]
Under the assumption that \(d_{r}=0\) for \(1\leqslant r<s\) in the spectral sequence associated to \(\mathcal{U}_{I}\), we may also assume by induction that \(d_{r}=0\) for \(1\leqslant r<s\) in the spectral sequence associated to \(\mathcal{U}_{I,J}\). Therefore on the \(s\)th page the comparison map induces commutative squares
In particular, beginning on the \(-1\)st row and taking \(V=\{i_{0},\ldots,i_{s-1}\}\cup U\), the claimed formula follows from Lemma 6.6, with the sign \((-1)^{\varepsilon(V\smallsetminus U,U)+\varepsilon(V\smallsetminus U,J)+p+s}=(-1) ^{\varepsilon(V\smallsetminus U,[m])+p+s}\).
### Equivariant formality from combinatorics
We are ready to prove our main result characterising the equivariant formality of subtorus actions on moment-angle complexes \(\mathcal{Z}_{K}\), purely in terms of the cohomology of subcomplexes of \(K\). By Proposition 5.6, it suffices for us to treat the case of coordinate subtori.
The _face deletion_ of a simplicial complex \(K\) at \(F\in K\) is the largest subcomplex \(K\smallsetminus F\) of \(K\) that does not contain \(F\):
\[K\smallsetminus F=\{\sigma\in K\ :\ F\not\subseteq\sigma\}.\]
It follows from the definition that the face deletion can be written as the union of full subcomplexes
\[K\smallsetminus F=\bigcup_{i\in F}K_{[m]\smallsetminus i}.\]
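For example, if \(K=\partial\Delta^{2}\) is the boundary of the triangle on the vertex set \([3]\) and \(F=\{1,2\}\), then
\[K\smallsetminus F=\{\varnothing,1,2,3,13,23\}=K_{[3]\smallsetminus 1}\cup K_{[3]\smallsetminus 2},\]
the path obtained from \(K\) by removing only the edge \(12\).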
By convention, \(K\smallsetminus\varnothing\) is the empty simplicial complex. We also remind the reader of the notation
\[\mathcal{U}_{I,J}=\{K_{J\smallsetminus i}\ :\ i\in I\cap J\}\]
and that this collection of subcomplexes of \(K_{J}\) induces a Mayer-Vietoris spectral sequence as in Section 6.2. The collection \(\mathcal{U}_{I}=\mathcal{U}_{I,[m]}\) is of particular importance.
**Theorem 6.8**.: _Let \(K\) be a simplicial complex on vertex set \([m]\) and let \(I\subseteq[m]\). Then the following conditions are equivalent:_
1. _the coordinate_ \(T^{I}\)_-action on_ \(\mathcal{Z}_{K}\) _is equivariantly formal over_ \(k\)
2. _the cohomology operations_ \(\delta_{J}\) _vanish on_ \(H^{*}(\mathcal{Z}_{K};k)\) _for all_ \(J\subseteq I\)_;_
3. _the augmented Mayer-Vietoris spectral sequence associated to_ \(\mathcal{U}_{I}\) _degenerates at its first page (or equivalently,_ \(\mathcal{U}_{I,J}\) _for all_ \(J\)_);_
4. \(K_{J}\smallsetminus(I\cap J)\hookrightarrow K_{J}\) _induces the trivial map on_ \(\widetilde{H}^{*}(\,;k)\) _for all_ \(J\subseteq[m]\)_._
Proof.: The equivalence of (a) and (b) is in Proposition 5.7. The extra equivalence smuggled into (c) follows from the argument given for Theorem 6.7, since, for any \(J\), the differentials appearing in the augmented Mayer-Vietoris spectral sequence associated to \(\mathcal{U}_{I,J}\) appear in that of \(\mathcal{U}_{I}\). After this, (b) is equivalent to (c) by Theorem 6.7. So it is sufficient to show that (b) is equivalent to (d).
For brevity, we fix \(J\) and write \(\check{C}_{J}\) for the Čech double complex associated to \(\mathcal{U}_{I,J}\), and \(a\check{C}_{J}\) for the corresponding augmented Čech double complex (29). There is then a short exact sequence of complexes
\[0\longrightarrow\operatorname{Tot}(\check{C}_{J})\longrightarrow\operatorname {Tot}(a\check{C}_{J})\longrightarrow\widetilde{C}^{*}(K_{J})[-1]\longrightarrow 0. \tag{31}\]
It is well known that the Čech complex associated to \(\mathcal{U}_{I,J}\) is acyclic, that is, for each \(p\) there is a quasi-isomorphism \(\widetilde{C}^{p}(\bigcup_{i\in I\cap J}K_{J\smallsetminus i})\xrightarrow{\simeq}\check{C}^{*}(\mathcal{U}_{I,J},\widetilde{C}^{p})\). It follows that there is a quasi-isomorphism \(\widetilde{C}^{*}(\bigcup_{i\in I\cap J}K_{J\smallsetminus i})\xrightarrow{\simeq}\operatorname{Tot}(\check{C}_{J})\). We also note that \(\bigcup_{i\in I\cap J}K_{J\smallsetminus i}=K_{J}\smallsetminus(I\cap J)\) is the face deletion. Therefore, taking cohomology of (31) yields a long exact sequence
\[\cdots\longrightarrow\widetilde{H}^{*}(K_{J}\smallsetminus(I\cap J)) \longrightarrow H^{*}(\operatorname{Tot}(a\check{C}_{J}))\xrightarrow{e} \widetilde{H}^{*-1}(K_{J})\xrightarrow{\delta}\widetilde{H}^{*-1}(K_{J} \smallsetminus(I\cap J))\longrightarrow\cdots\]
By exactness, \(e\) is surjective if and only if \(\delta\) is zero; moreover the connecting homomorphism \(\delta\) is induced by the inclusion \(K_{J}\smallsetminus(I\cap J)\hookrightarrow K_{J}\), so this is equivalent to (d). The edge map \(e\) comes from the projection of \(a\check{C}_{J}\) onto its \(-1\)st row. Therefore \(e\) is surjective if and only if, in the corresponding spectral sequence (30), every differential \(d_{s}\) leaving the \(-1\)st row is zero, for \(s\geqslant 1\). By Lemma 6.6 this is equivalent to (b).
|
2302.12978 | Impact of Thermal Variability on SOC Estimation Algorithms | While the efficiency of renewable energy components like inverters and PV
panels is at an all-time high, there are still research gaps for batteries.
Lithium-ion batteries have a lot of potential, but there are still some
problems that need fixing, such as thermal management. Because of this, the
battery management system accomplishes its goal. In order for a battery
management system (BMS) to function properly, it must make accurate estimates
of all relevant parameters, including state of health, state of charge, and
temperature; however, for the purposes of this article, we will only discuss
SOC. The goal of this article is to estimate the SOC of a lithium-ion battery
at different temperatures. Comparing the Extended Kalman filter algorithm to
coulomb counting at various temperatures concludes this exhaustive
investigation. The graphene battery has the highest SOC when operated at the
optimal temperature, as determined by extensive analysis, and the correlation
between SOC and temperature is not linear | Wasiue Ahmed, Mokhi Maan Siddiqui, Faheemullah Shaikh | 2023-02-25T04:13:28Z | http://arxiv.org/abs/2302.12978v1 | # Impact of Thermal Variability on SOC Estimation Algorithms
###### Abstract
While the efficiency of renewable energy components like inverters and PV panels is at an all-time high, there are still research gaps for batteries. Lithium-ion batteries have a lot of potential, but there are still some problems that need fixing, such as thermal management. Because of this, the battery management system accomplishes its goal. In order for a battery management system (BMS) to function properly, it must make accurate estimates of all relevant parameters, including state of health, state of charge, and temperature; however, for the purposes of this article, we will only discuss SOC. The goal of this article is to estimate the SOC of a lithium-ion battery at different temperatures. Comparing the Extended Kalman filter algorithm to coulomb counting at various temperatures concludes this exhaustive investigation. The graphene battery has the highest SOC when operated at the optimal temperature, as determined by extensive analysis, and the correlation between SOC and temperature is not linear.
BMS, SOC Estimation, Coulomb Counting, EKF method, Temperature analysis.
## 1 Introduction
Energy storage and use are crucial during the switch to green energy. When we talk about energy storage, we immediately think of batteries. The battery is the main component that stores energy and needs to be reliable, effective, and healthy. These properties can be guaranteed by the battery management system. Batteries need to be guarded against high voltage and dangerous operating conditions. The system not only monitors and takes care of the battery modules, but it also sends out early warnings. So, the BMS makes sure the batteries work the way they are supposed to. An important part of battery management systems is determining how full a battery is, i.e., the actual amount of energy stored in the battery, known as the State of Charge (Plett, 2004). Assessing the battery's SOC is just as important as knowing how long it will last. There are several methods for estimating the battery's state of charge (SOC). According to (M. Mastali et al., 2013), Equation (1) states that the battery's state of charge is measured as the fraction of its nominal capacity \(Q_n\) that is available at any given time, denoted by the capacity \(Q(t)\).
\[SOC(t)=Q(t)/Q_{n} \tag{1}\]
As stated by (J. Meng et al., 2018), who proposed a comprehensive study comparing various SOC estimation algorithms and their online implementation, the Kalman filter is the most promising method when weighing calculation power against efficiency. Comparative research using the EKF and the Sigma Point Kalman Filter was also conducted by (Jiahao Li, 2013); by establishing the initial SOC, the study discussed tracking accuracy performance. In addition, (Joaquín Klee Barillas et al., 2015) compared and analysed the cost of various algorithms, recognizing that as computational power and efficiency increase, so does the cost. Therefore, we must settle for some middle ground. Numerous other researchers have
developed methods for accurately estimating SOC. For example, (Shi Li et al., 2019) proposed a comparative study between four algorithms and estimated the SOC by comparing different load profiles, demonstrating that algorithm performance is dependent on the load patterns and correlates with their frequency spectrum. In the literature, the SOC of lithium-ion batteries has been estimated under a variety of conditions, but the scope of this article is to analyse SOC under various thermal conditions.
## 2 Methodology
The SOC estimation algorithm is utilised in conjunction with other battery management techniques to prevent the battery from being over-discharged or over-charged and to prolong its life. Researchers have focused on the challenge of SOC estimation, leading to the development of numerous approaches. Classifying the methods is difficult because most solutions involve using multiple techniques at once and incorporating either heuristic or deterministic mathematical tools. By discussing both, it will be shown that the coulomb counting (CC) and open circuit voltage (OCV) methods are utilised frequently. Because individual methods can have their own shortcomings, combining them can result in a wide range of enhancements to both the initial and online SOC estimation. For instance, a robust extended Kalman filter (EKF) algorithm could be combined with the OCV method, with the CC method as a secondary function. It becomes more complicated to categorise individual methods when they are combined in this way. In the following sections, we will discuss both of the methods used in the combined algorithms to estimate SOC at different thermal conditions.
### Coulomb Counting Method
The use of CC for SOC estimation has become the standard, as it is the most accurate method for short-term calculations. The CC (ampere-hour) method is defined in Equation (2).
\[SOC(t)=SOC(t_{o})+\frac{1}{C_{n}}\int_{t_{o}}^{t_{o}+t}I_{bat}(\tau)\,d\tau\times 100\% \tag{2}\]
In Equation (2), \(SOC(t_{o})\) represents the SOC at time \(t_{o}\), \(C_{n}\) represents the specified capacity, and \(I_{bat}\) represents the current. Although CC can be implemented easily, there are errors and challenges to consider due to the initial SOC. When measuring battery current, calibration errors sometimes occur and a proper current curve is not obtained. Accumulated errors are caused by noise, the wide range in sensor resolution, or rounding. Accumulated errors make the equation less accurate over time, so supporting algorithms are needed. In actual practice, the initial SOC is unknown, and the initial SOC of a battery can only be found when the battery is in thermodynamic equilibrium. The Coulomb method estimates SOC by evaluating the time integrals of the charge and discharge currents, and the initial SOC value is needed. Most of the time it is unknown and is set to an incorrect value. This method is dependent on the initial SOC value and cannot eliminate cumulative errors; if evaluated with the wrong initial SOC, we will get improper results. Despite its widespread use in recent years, the CC method is not typically used as a stand-alone technique for estimating SOC but rather in conjunction with other techniques. This paper employs CC along with the EKF to estimate SOC.
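To make the bookkeeping of Equation (2) concrete, the following is a minimal Python sketch of coulomb counting on a logged current trace; the function and parameter names are illustrative, and a charging-positive sign convention with a known initial SOC is assumed.

```python
import numpy as np

def coulomb_counting_soc(current_a, dt_s, capacity_ah, soc0):
    """SOC tracking by integrating measured current, as in Equation (2).

    current_a   : sampled battery current in amperes (positive = charging)
    dt_s        : sampling interval in seconds
    capacity_ah : nominal capacity C_n in ampere-hours
    soc0        : initial SOC as a fraction in [0, 1]
    """
    capacity_as = capacity_ah * 3600.0        # convert Ah to ampere-seconds
    charge_as = np.cumsum(current_a) * dt_s   # running integral of I dt
    soc = soc0 + charge_as / capacity_as      # fraction of nominal capacity
    return np.clip(soc, 0.0, 1.0) * 100.0     # report as a percentage

# A 5 Ah cell discharged at 1C (5 A) for 30 minutes, starting from 80% SOC
soc = coulomb_counting_soc(np.full(1800, -5.0), dt_s=1.0, capacity_ah=5.0, soc0=0.8)
print(soc[-1])  # approximately 30%
```

Note how an incorrect `soc0` shifts the entire trace and how noise in `current_a` accumulates through the cumulative sum; these are exactly the two error sources described above.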
### 2RC Electrical Circuit Model Based Estimation (2RC-ECM)
Most researchers have used ECM models for SOC estimation; in many contexts, the second-order resistor-capacitor (2RC) ECM is used because of its ease of use and precision, and we will therefore analyse the 2RC-ECM. Figure 1 depicts the 2RC-ECM. It consists of a voltage source (the OCV), an ohmic resistance (Ro), and two resistor-capacitor (RC) branches. Both the activation polarization resistance and capacitance are represented by R1 and C1 in the first resistor-capacitor branch. Similarly, R2 and C2 stand in for the resistance and capacitance of the concentration polarization, respectively. The current through the load is denoted by I. The 2RC-ECM and Kirchhoff's law allow us to rewrite the battery model as a state space model as in (Rivera Barrera JP et al., 2017). The two voltages across the resistor-capacitor branches are denoted by U1 and U2, defined in equations (3) & (4) respectively. The capacity of the battery is denoted by \(q\). Equation (5) defines the SOC in terms of battery capacity, and Equation (6) depicts the relationship between open circuit voltage and state of charge.
\[\dot{U}_{1}=-\frac{1}{R_{1}C_{1}}U_{1}+\frac{I}{C_{1}} \tag{3}\]
\[\dot{U}_{2}=-\frac{1}{R_{2}C_{2}}U_{2}+\frac{I}{C_{2}} \tag{4}\]
\[\dot{SOC}=-\frac{I}{q} \tag{5}\]
\[U=OCV(SOC)+U_{1}+U_{2}+IR_{o} \tag{6}\]
Figure 1 shows the battery ECM under consideration. It features a double-pole RC circuit which, compared to single and triple RC structures, offers the optimal balance between inaccuracy and complexity; it is also one of the most popular ECMs.
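As a concrete illustration, the following is a minimal forward-Euler discretization of Equations (3)-(5) in Python, using a discharge-positive current convention so that the polarization and ohmic terms are subtracted from the OCV at the terminal; the OCV curve and parameter values are placeholders rather than fitted values for the cell studied here.

```python
import numpy as np

def simulate_2rc(i_load, dt, q_as, r0, r1, c1, r2, c2, ocv_of_soc, soc0):
    """Forward-Euler integration of the 2RC ECM of Figure 1.

    i_load     : load current in A (positive = discharge, matching Eq. 5)
    q_as       : battery capacity q in ampere-seconds
    ocv_of_soc : callable mapping SOC fraction to open-circuit voltage
    """
    soc, u1, u2 = soc0, 0.0, 0.0
    v_terminal = np.empty_like(i_load)
    for k, i in enumerate(i_load):
        u1 += dt * (-u1 / (r1 * c1) + i / c1)             # Equation (3)
        u2 += dt * (-u2 / (r2 * c2) + i / c2)             # Equation (4)
        soc += dt * (-i / q_as)                           # Equation (5)
        v_terminal[k] = ocv_of_soc(soc) - u1 - u2 - i * r0
    return v_terminal

# Placeholder linear OCV curve and illustrative parameter values
ocv = lambda s: 3.0 + 1.2 * s
v = simulate_2rc(np.full(600, 5.0), dt=1.0, q_as=5.0 * 3600.0, r0=0.01,
                 r1=0.015, c1=2000.0, r2=0.03, c2=20000.0,
                 ocv_of_soc=ocv, soc0=0.9)
```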
### HPPC Test
The Hybrid Pulse Power Characterization (HPPC) test was conducted by Dr. Phillip Kollmeyer on a Turnigy Graphene 5000 mAh 65C lithium-ion battery. The test was carried out at six different temperatures, but in this article only four temperatures have been selected for simplicity (Kollmeyer et al., 2020). The data are real measurements and are utilized in this article for SOC estimation. Battery capacity is calculated using the static capacity test described above, and then 10% SOC steps are used for the HPPC test. The steps are taken from full SOC to no SOC, and then the test profile can resume at full SOC.
Included in each stage are:
* Wait an hour before continuing.
* Discharge, pulse duration: 1C for 10 seconds
* (Relaxation period) a 10-minute break.
* Regeneration Pulse: 1C or 0.75C for 10 seconds
* (Relaxation period) Take a 10-minute break.
* Discharge/charge the next step per the manufacturer's data sheet.
Each type of cell can be tested at each SOC, as well as at different temperatures and charge and discharge rates (C-rates).
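For reference, the per-step current profile described above can be assembled as in the following sketch (discharge-positive convention); the 1C rate assumed for stepping down to the next 10% SOC level is an illustrative placeholder, since the actual rate follows the manufacturer's data sheet.

```python
import numpy as np

def hppc_step_profile(capacity_ah, regen_c_rate=0.75, dt=1.0):
    """Current profile (in A) for one 10% SOC step of the HPPC test."""
    i_1c = capacity_ah                                    # 1C current in amperes
    rest_1h = np.zeros(int(3600 / dt))                    # 1 hour rest
    discharge = np.full(int(10 / dt), i_1c)               # 10 s discharge pulse at 1C
    rest_10m = np.zeros(int(600 / dt))                    # 10 min relaxation
    regen = np.full(int(10 / dt), -regen_c_rate * i_1c)   # 10 s regeneration pulse
    step_down = np.full(int(360 / dt), i_1c)              # remove 10% SOC (1C assumed)
    return np.concatenate([rest_1h, discharge, rest_10m, regen, rest_10m, step_down])

# Ten steps take the cell from full SOC to no SOC
profile = np.concatenate([hppc_step_profile(5.0) for _ in range(10)])
```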
Figure 1: Second Order RC Model of Li-ion Battery
### The Extended Kalman Filter
The extended Kalman filtering (EKF) algorithm linearizes the nonlinear system. It makes a prediction for the value of the following time step based on the current time step. For the best estimation, the state variables are continuously updated with data from the system's inputs and outputs. The battery estimation process calls for white noise with a Gaussian distribution, both for the process noise and the observation noise; this assumption is shared by every version of the Kalman filter. Here, it is simple to regulate the correlation between the process and observation noises. This method uses a two-step prediction-correction algorithm, as depicted in equations (7) & (8), where \(k\) denotes a discrete point in time, \(K\) is the Kalman gain, \(P\) is the covariance, \(Q\) is the covariance of the process, and \(R\) is the covariance of the output, as proposed by (F. Khanum et al., 2021).
\[\hat{x}_{k+1|k} =A\hat{x}_{k|k}+Bu_{k} \tag{7}\] \[P_{k+1|k} =AP_{k|k}A^{T}+Q_{k} \tag{8}\]
Equation (9) calculates the Kalman gain, equation (10) updates the estimate with the new measurement value, and finally equation (11) depicts the error covariance update.
\[K_{k+1}=P_{k+1|k}C^{T}\big{(}CP_{k+1|k}C^{T}+R_{k+1}\big{)}^{-1} \tag{9}\] \[\hat{x}_{k+1|k+1}=\hat{x}_{k+1|k}+K_{k+1}(z_{k+1}-C\hat{x}_{k+1|k}) \tag{10}\] \[P_{k+1|k+1}=(I-K_{k+1}C)P_{k+1|k} \tag{11}\]
The algorithm has been modified to estimate SOC at different temperatures and associated errors in MATLAB.
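A generic sketch of one predict-correct cycle implementing Equations (7)-(11) is given below, in Python rather than MATLAB; the discretized model matrices, the OCV-based measurement function, and its Jacobian are assumed inputs obtained from the 2RC model above.

```python
import numpy as np

def ekf_step(x, P, u, z, A, B, C_jac, h, Q, R):
    """One EKF predict-correct cycle, Equations (7)-(11).

    x, P     : prior state estimate (e.g., [SOC, U1, U2]) and its covariance
    u, z     : input current and measured terminal voltage
    A, B     : discretized state-transition and input matrices (B is 1-D)
    C_jac, h : measurement Jacobian and measurement function (from the OCV curve)
    Q, R     : process and measurement noise covariances
    """
    x_pred = A @ x + B * u                        # Equation (7): prediction
    P_pred = A @ P @ A.T + Q                      # Equation (8): covariance prediction
    C = np.atleast_2d(C_jac(x_pred))              # linearize the output map
    K = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + R)   # Equation (9)
    x_new = x_pred + K @ np.atleast_1d(z - h(x_pred))        # Equation (10)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred     # Equation (11)
    return x_new, P_new
```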
## 3 Simulation & Results
This article showed two different ways to estimate SOC accurately. Yet, the BMS is an area that needs further investigation. The main goal of this research is to make a better BMS that can accurately predict how long a lithium-ion battery will last. Simulation is a good way to learn how dynamic systems, like Li-ion batteries, behave in different situations; in this article we analysed the SOC of graphene batteries at four different temperatures using two different methods. The effect of temperature on SOC is examined in depth, and the foundation for this section is laid with the help of the simulation and computational techniques discussed in the previous section. Table 1 shows the comparative analysis of SOC at different temperatures using the two methods.
For a more in-depth analysis, a graph combining the SOC results of both methods is plotted, as shown in Figure 2.
Table 1: Average SOC using CC & EKF methods at different temperatures

| Temperature | SOC by CC Method | SOC by EKF Method |
| --- | --- | --- |
| 0 °C | 55.9785% | 52.3885% |
| 10 °C | 54.0007% | 52.8973% |
| 25 °C | 52.9031% | 55.0667% |
| 40 °C | 53.1736% | 54.9699% |
The results clearly show that SOC rises as temperature rises, but once the temperature passes a certain point, the SOC percentage begins to drop. In addition, a total thirty-hour time frame is used to observe the behavior of the battery. Finally, to assess the accuracy of the methods used, the average percentage error is plotted in Figure 3; it is evident that near ambient temperature the error is relatively small.
Figure 3: % Error in Estimated SOC at Different Temperatures
Figure 2: % SOC at Different Temperatures of Turnigy Graphene 5000mAh 65C Li-ion Battery
## 4 Conclusion
This research addresses a central need in battery management. Following the execution of two distinct algorithms and the collection of their outputs, we found that the SOC is estimated with an extremely small margin of error. Additionally, the operating behavior of lithium-ion batteries can be deduced from the results at various temperatures. The percentage of SOC begins to rise when the temperature does; when the temperature is raised even further, the SOC level begins to drop. This indicates that there is no simple linear correlation between state of charge and temperature, and it is best practice to keep batteries operating at a temperature close to 25 degrees Celsius. To improve the accuracy of the results, this algorithm can be paired with other machine learning algorithms, which can further increase the efficiency as well as the accuracy of the implementation. The implemented algorithm can be used while designing the BMS for a lithium-ion battery. Because the algorithm is straightforward but effective, it can easily be implemented on microcontrollers using any programming language.
|
2306.09111 | Enhanced Sampling with Machine Learning: A Review | Molecular dynamics (MD) enables the study of physical systems with excellent
spatiotemporal resolution but suffers from severe time-scale limitations. To
address this, enhanced sampling methods have been developed to improve
exploration of configurational space. However, implementing these is
challenging and requires domain expertise. In recent years, integration of
machine learning (ML) techniques in different domains has shown promise,
prompting their adoption in enhanced sampling as well. Although ML is often
employed in various fields primarily due to its data-driven nature, its
integration with enhanced sampling is more natural with many common underlying
synergies. This review explores the merging of ML and enhanced MD by presenting
different shared viewpoints. It offers a comprehensive overview of this rapidly
evolving field, which can be difficult to stay updated on. We highlight
successful strategies like dimensionality reduction, reinforcement learning,
and flow-based methods. Finally, we discuss open problems at the exciting
ML-enhanced MD interface. | Shams Mehdi, Zachary Smith, Lukas Herron, Ziyue Zou, Pratyush Tiwary | 2023-06-15T13:13:56Z | http://arxiv.org/abs/2306.09111v2 | # Enhanced Sampling with Machine Learning: A Review
###### Abstract
Molecular dynamics (MD) enables the study of physical systems with excellent spatiotemporal resolution but suffers from severe time-scale limitations. To address this, enhanced sampling methods have been developed to improve exploration of configurational space. However, implementing these is challenging and requires domain expertise. In recent years, integration of machine learning (ML) techniques in different domains has shown promise, prompting their adoption in enhanced sampling as well. Although ML is often employed in various fields primarily due to its data-driven nature, its integration with enhanced sampling is more natural with many common underlying synergies. This review explores the merging of ML and enhanced MD by presenting different shared viewpoints. It offers a comprehensive overview of this rapidly evolving field, which can be difficult to stay updated on. We highlight successful strategies like dimensionality reduction, reinforcement learning, and flow-based methods. Finally, we discuss open problems at the exciting ML-enhanced MD interface.
###### Contents
* 1 INTRODUCTION
* 2 ENHANCED SAMPLING
* 3 DIMENSIONALITY REDUCTION FOR SAMPLING
* 3.1 AUTOMATED CV SELECTION
* 3.2 TRANSFER OPERATOR APPROXIMATION
* 3.3 INFORMATION BOTTLENECK-BASED APPROACHES
* 3.4 EIGENDECOMPOSITION TECHNIQUES
* 3.5 OTHER DIMENSIONALITY REDUCTION APPROACHES
* 4 LEARNING NEW STRATEGIES TO ENHANCE SAMPLING
* 4.1 REINFORCEMENT LEARNING STRATEGIES FOR ADAPTIVE SAMPLING
* 4.2 LEARNING NEW STRATEGIES TO BIAS SIMULATIONS
* 5 ESTIMATING FREE ENERGIES WITH FLOW-BASED MODELS
* 5.1 NORMALIZING FLOWS
* 5.2 BOLTZMANN GENERATORS
* 5.3 FREE ENERGY ESTIMATION THROUGH INVERTIBLE MAPPINGS
* 5.4 SCORE-BASED MODELS
* 5.5 INTEGRATION WITH ENHANCED SAMPLING FRAMEWORKS
* 6 DISCUSSION
* 6.1 BENCHMARK APPLICATIONS
* 6.2 MODEL INTERPRETABILITY & EXPLAINABILITY
* 6.3 LEARNING MEANINGFUL RCs
* 6.4 EXPLOITING SYMMETRY THROUGH MACHINE LEARNING
* 6.5 ROBUST FREE ENERGY ESTIMATION
## 1 Introduction
Molecular dynamics (MD) simulations play a crucial role in the field of physical chemistry and allied sciences, offering a powerful tool to investigate the intricate motions and behaviors of atoms and molecules. These simulations act as a virtual microscope, allowing scientists to explore the dynamic aspects underlying complicated processes. MD is implemented by discretizing time into small steps and Newton's equations of motion serve as the guiding principle for iterative generation of the time evolution of a system from initial atomic coordinates (1). This approach enables the study of the microscopic state of a system described by the position and momentum of each atom in phase space. In addition to the deterministic forces described by Newton's laws, MD simulations incorporate thermostats (2) to sample the canonical, constant number, volume, temperature (NVT) ensemble or both thermostats and barostats (3) to sample the isothermal-isobaric, constant number, pressure, temperature (NPT) ensemble. These techniques enhance the simulation's ability to reproduce realistic conditions and achieve accurate results.
Under this framework, the interactions between different entities present in a physical system are defined using numerical constants known as force fields that are obtained empirically or from first principle calculations. Depending on the task at hand, force fields with different levels of detail e.g., quantum-mechanical, classical, coarse-grained, etc. can be employed. In particular, classical force fields are obtained by carefully parametrizing atomic interactions to reproduce equilibrium properties observed in experimental studies
(4). While these simulations excel in capturing equilibrium properties over long periods, sampling rare events becomes a challenging task when the integration step is on the order of femtoseconds (\(fs\)). A small \(fs\) time step is necessary because in classical force fields, the fastest motion i.e., vibrations of hydrogen atoms take place at a similar time scale (4).
However, practical processes of interest such as large conformational shifts in proteins (5) or ligand binding/unbinding events (6, 7, 8) can occur on timescales ranging from milliseconds to hours. Even slower but critically important nucleation processes (9, 10, 11) may span seconds to days. Capturing these events using standard MD simulations can be computationally demanding, requiring an enormous amount of time. In fact, sampling a single event within these timescales may necessitate millions of years of computational effort. Although hardware advancements (12, 13) have facilitated faster MD simulations, achieving the exponential speedup necessary to access these rare events remains a significant challenge due to the inherently sequential nature of time. Evidently even on the best available hardware such as the Anton supercomputer, sampling of rare events remains a challenge. Thus, researchers continue to explore alternative creative methods and algorithms to overcome these limitations and efficiently explore the dynamics of complex processes.
Enhanced sampling methods aim to address this issue by increasing the efficiency of exploring the configuration space and accelerating MD simulations. These techniques help to overcome high energy barriers and explore system states that are typically inaccessible in conventional simulations, potentially improving the accuracy of calculated thermodynamic and kinetic properties. However, the implementation of enhanced sampling algorithms is not trivial and may require significant human expertise.
In recent years, machine learning (ML) models have been employed in various domains, for example in the prediction of binding affinities of protein targeting small molecules for drug discovery (14), in genomics and proteomics data analysis (15), synthesizing novel materials with tailored properties (16) etc. These successful applications of ML in diverse scientific fields have inspired the adoption of similar techniques in accelerating MD simulations. In this review, we examine the latest advancements in the field of ML-augmented enhanced sampling methods. Specifically, we will concentrate on enhancing the sampling capabilities of classical MD simulations. For readers interested in the application of ML in accelerating coarse-grained simulations, we refer to recent literature sources (17, 18, 19). It is important to note that our focus will be solely on the utilization of ML to expedite simulations and not on the analysis of MD data (20, 21, 22).
While often ML is applied to different fields solely due to the possibility of a data-driven approach, its confluence with enhanced sampling (Section 2) can be more organic. Under different names, to some extent, both disciplines have tackled similar problems. This could be the problem of dimensionality reduction (Section 3), new strategies for improved bias deposition (Section 4), or the problem of moving back-and-forth between tractable and intractable probability distributions (Section 5). This review looks at ML and enhanced MD through these and other shared lenses, summarizing the state-of-the-art in a burgeoning field that is hard to keep up with.
## 2 Enhanced Sampling
Many enhanced sampling methods have been developed to tackle the timescale problem with molecular dynamics. We classify these methods into three distinct classes with different mechanisms and opportunities for synergies with ML. However, note that alternative
classification schemes for enhanced sampling methods exist in the literature [23, 24, 25]. Our three classes are biasing methods, adaptive sampling methods, and generalized ensemble methods.
Biasing methods perform importance sampling by modifying the simulation with a bias potential that can be reweighted to recover unbiased statistics [26, 27]. This potential can be static [28] or updated over the course of the simulation [23, 29] and is defined in terms of a small number of collective variables (CVs) and not the full configuration of the system. These CVs can be simple basis functions such as distances or dihedral angles or they can be more complex linear or nonlinear combinations of basis functions. Determining which CVs to use in a data-driven manner can be considered a manifold learning problem where the goal is to find a low-dimensional manifold that effectively describes the system's relevant slow dynamics and/or dominant metastable states.
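To make the recovery of unbiased statistics concrete, the following is a minimal Python sketch for the static-bias case, assuming the bias potential evaluated at each stored frame is available; time-dependent biases such as metadynamics require method-specific weights.

```python
import numpy as np

def unbiased_average(observable, bias_potential, kT):
    """Unbiased ensemble average of an observable from a biased trajectory.

    Frames sampled with potential U(x) + V(s(x)) are reweighted by
    w ~ exp(+V/kT) to recover statistics of the unbiased ensemble.
    """
    v = np.asarray(bias_potential)
    w = np.exp((v - v.max()) / kT)   # shift by max(V) for numerical stability
    return np.sum(w * np.asarray(observable)) / np.sum(w)
```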
Figure 1: Overview of enhanced sampling methods and their interactions with ML methods. Enhanced sampling methods are shown as rectangles with colors corresponding to the associated learning tasks, which are shown as ovals. Individual ML methods are shown as text with arrows corresponding to the information flow between methods. Note that ML for generalized ensemble methods is purely post-processing, while learning for biasing and adaptive sampling methods informs new simulations.
Adaptive sampling methods, also known as path sampling methods [32, 33], perform importance sampling by strategically initializing rounds of short parallel simulations in states that are either under-sampled or likely to sample an unexplored state. They are often analyzed by constructing a Markov state model (MSM) [34, 35] to combine the statistics and kinetics of these simulations. The separation of states can be done using geometric criteria, kinetic criteria, or even by discretizing CVs [36, 37]. Initial quantitative comparisons between strategies have shown that different techniques are beneficial for exploring state space or sampling rare events and that _a priori_ knowledge can be used to improve sampling further [38, 39]. Adaptive sampling provides many opportunities for ML because states can be defined by either learning a continuous manifold or a more direct mapping from configurations to discrete states.
Generalized ensemble methods accelerate sampling by allowing the simulation to transition to a different ensemble with a different temperature, pressure, or Hamiltonian. These ensembles can have lower kinetic barriers between configurations and new free energy minima can be sampled in the original ensemble after crossing lower barriers in another ensemble. For example, transitioning to a higher temperature ensemble would accelerate barrier crossing, then transitioning to the original ensemble would allow sampling of a new energy minimum at the temperature of interest. This is often done with replicas occupying a ladder of ensembles and periodically exchanging ensembles in the case of replica exchange but can also be done with a single simulation in expanded ensemble methods [40, 41, 42]. This class of enhanced sampling methods provides different opportunities for ML as there is no requirement for a learned representation of the system's states. Instead, ML is used to analyze these simulations and to infer free energy surfaces, potentially for regions only sampled in some ensembles, with sampling from the other ensembles.
A number of methods such as replica exchange umbrella sampling [43], parallel tempering metadynamics [44], and bias exchange metadynamics [45] combine multiple of these classes at the same time. A longer list of hybrid methods with an elegant taxonomy can be found in Ref. [24].
## 3 Dimensionality Reduction for Sampling
In recent years, a dominant area of research in accelerating MD simulations through ML has been the development of dimensionality reduction techniques for identifying slow modes from simulated trajectories. Constraints to the atomic degrees of freedom in molecular systems generate this low-dimensional manifold, the study of which is driven by the widespread use of enhanced sampling methods, e.g., umbrella sampling, metadynamics, weighted ensemble, milestoning, variationally enhanced sampling (VES) and others [46, 47, 48, 49, 28], which still, however, require _a priori_ identification of approximate Reaction Coordinates (RCs) describing the system's slow degrees of freedom. By employing these methods and enhancing sampling along an approximate RC, rare events of interest can be observed. However, determining RCs for practical systems is typically challenging as they are often unknown _a priori_ and are difficult to identify without simulating the rare event of interest itself. ML based methods attempt to solve this problem by projecting the high-dimensional MD data from an arbitrarily long simulation onto a low-dimensional manifold designed to approximate the system's RC, often as a combination of a much bigger dictionary of CVs. Since the initial short simulation will typically not include the rare event of interest, the quality of the RC can be improved by iterating between performing enhanced sampling and learning better RCs until convergence, as illustrated in **Figure 2**. In this context, it is important to note that there is a zoo of dimensionality reduction algorithms already available in other scientific domains. However, these methods are not always directly suitable
for MD simulations because they are not designed to preserve kinetic information and fail to capture essential physics governing system behavior. This is illustrated in **Figure 2**, where MD simulation data describing the permeation of a small molecule through a lipid bilayer (50) has been analyzed using a general-purpose method (t-SNE (51)) and methods developed for identifying approximate RCs for MD (TICA (52), RAVE (53)), respectively. It can be clearly observed that TICA and RAVE were able to correctly preserve kinetic information in addition to distinguishing the metastable states, which t-SNE failed to do.
In this section, we will consider data-driven dimensionality reduction methods for MD, which typically involve the construction of an artificial neural network (ANN) and minimizing the loss of a well-defined objective function to generate a regularized low-dimensional manifold, also known as the latent space. In general, these recently developed methods draw inspiration from diverse approaches, which often overlap with each other, and a meaningful classification of these approaches becomes a difficult task. In the following subsections, we attempt to classify them by primarily looking at the theoretical foundations of the adopted objective functions, and the specific purpose behind dimensionality reduction, e.g., expressivity and interpretability. Additionally, ML methods such as the aforementioned vanilla t-SNE that could be used for clustering and analyzing MD trajectories, but that are not necessarily suitable for enhanced sampling, are outside the scope of this review. Interested readers can refer to previous literature reviews that cover this specific topic (20).
### Automated CV selection
Instead of constructing an abstract low-dimensional latent space or RC directly as a function of the entire input feature space, ML methods can be employed to identify the subset of CVs most complete for describing the system's behavior. For example, in a seminal work by Dinner _et al._ (54) a genetic neural network algorithm was implemented to acquire the initial set of coordinates that can clearly and effectively determine the transition state for a simple biomolecular transition. The authors showed how they could reproduce the committor from a set of CVs. The set of CVs that best represented the correct committor was then chosen to obtain the umbrella potential term, \(V=k(p_{B}^{GNN}-p_{B}^{target})^{2}\) for enhanced sampling.
In a later, also pioneering work by Peters _et al._ (55), sets of potential CVs were examined for the construction of an appropriate RC using likelihood maximization under the transition path sampling (TPS) scheme. For an efficient screening of the CVs, a modified version of TPS called the aimless shooting algorithm was adopted to remove momentum correlations across each TPS trial trajectory. Finally, a good RC is identified by analyzing shooting history across trial trajectories and adopting a Bayesian information criterion for discarding complex models. Mathematically, if \(p(TP|\mathbf{x})\) represents the probability that the system will adopt a particular transition path given shooting point \(\mathbf{x}\), and \(r(\mathbf{x})\) represents an appropriate RC, then this method calculates \(p(TP|r(\mathbf{x}))\) corresponding to \(p(TP|\mathbf{x})\).
Both of these influential works considered the question of not just constructing a set of complete CVs, but also building a RC from them. In a recent study (56), Ravindra _et al._ introduced a method for only the first part of the problem, i.e. CV identification, called Automatic Mutual Information Noise Omission (AMINO) for the automated selection of CVs from MD data. Although AMINO does not directly create RCs for enhanced sampling, it generates a subset of the most relevant CVs describing a system by discarding correlated, or noisy CVs. This subset of CVs can be used to calculate improved RCs through other RC construction approaches discussed in this review. AMINO operates by initially employing
a mutual information-based distance metric (\(D\)) to determine the similarity between pairs of CVs (\(X\) and \(Y\)).
\[D(X;Y)=1-\frac{I(X;Y)}{H(X,Y)} \tag{1}\]
Here, the term \(I\) represents the mutual information between \(X\) and \(Y\), while \(H\) represents the joint entropy of \(X\) and \(Y\). This measure of similarity (\(D\)) is then utilized to group all the CVs into distinct clusters using K-medoids clustering. Subsequently, a single CV from each cluster is chosen for representing the CVs within the corresponding cluster as best as possible. Finally, the optimal number of CVs describing a dataset is determined by employing the jump method from rate-distortion theory. This involves constructing a distortion function and selecting the number of CVs that yields the greatest reduction in the distortion function.
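A minimal sketch of the distance in Equation (1), estimated from histogram-discretized CV time series, is shown below; the binning choice is illustrative and this is not the reference AMINO implementation.

```python
import numpy as np

def mi_distance(x, y, bins=30):
    """D(X;Y) = 1 - I(X;Y)/H(X,Y) from Equation (1), via a 2D histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                             # joint probability p(x, y)
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)    # marginals p(x), p(y)
    nz = pxy > 0                                 # avoid log(0) terms
    h_xy = -np.sum(pxy[nz] * np.log(pxy[nz]))    # joint entropy H(X,Y)
    i_xy = np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))  # I(X;Y)
    return 1.0 - i_xy / h_xy

# Pairwise distances over a (n_frames, n_cvs) array `traj` feed K-medoids:
# dist = np.array([[mi_distance(traj[:, a], traj[:, b]) for b in range(n_cvs)]
#                  for a in range(n_cvs)])
```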
Figure 2: Small molecule permeation through a lipid bilayer. The general-purpose dimensionality reduction method (t-SNE) fails to preserve kinetic information, but methods designed for MD data (TICA, RAVE) were successful. Permeation involves the sequential movement of the small molecule from one side (I), across the membrane (II, III), to the other side (IV). A general protocol for iterative learning of improved RC is shown on the right panel.
In a subsequent study, Stock _et al._ (57) extended the idea of using mutual information as a similarity measure for selecting relevant CVs. Here, different clustering algorithms were explored and the authors found that the Leiden algorithm from graph theory for detecting communities of similar coordinates worked really well. The key idea is to examine the underlying graph structure constructed from the system CVs as nodes, with mutual information-based similarity as edges. The Leiden algorithm uses the definition of modularity in community detection for maximizing the following objective function (\(\Phi\)):
\[\Phi=\frac{1}{2m}\sum_{c}(e_{c}-\frac{k_{c}^{2}}{2m}) \tag{2}\]
In this equation, the subscript \(c\) represents different clusters, while \(m\), \(e_{c}\), \(k_{c}\) represent the number of edges, sum of edge weights, and sum of CV degrees within each cluster respectively. By applying this approach, the study aimed to identify communities or groups of CVs that exhibit higher similarity within each group compared to other groups.
Very recently, Ensing _et al._ (58) expanded on the genetic algorithm-based approach introduced by Ma & Dinner (54) discussed earlier in this section. Here, the key idea is to employ a feed-forward ANN that takes subsets of CVs selected using a genetic algorithm as inputs and predicts atomic coordinates as the output. After training a ML model, fitness scores are assigned to each CV determined from the mean absolute error between ANN output and the ground truth atomic coordinates. A second genetic algorithm is incorporated for tuning the construction of the ANN architecture. In contrast to the approach taken by Dinner _et al._, this method bypasses the costly committor calculations by implementing a TPS framework for generating training data. Predicting full atomic coordinates as the model output enables the generation of configurations for unexplored regions which can help with initiating additional simulations. Lastly, a lag time between input and predicted atomic coordinates in the output can be introduced to identify CVs appropriate for determining the slow modes.
### TRANSFER OPERATOR APPROXIMATION
One popular approach for reducing the dimensionality of MD trajectories through ML has been the identification of the slowest eigenfunctions \(\psi_{i}(\mathbf{x})\) of a system's transfer operator (\(\mathcal{T}\)). Assuming the system follows detailed balance and Markovianity, it can be shown that these eigenfunctions form a complete orthonormal basis i.e, \(\langle\psi_{i}|\psi_{j}\rangle_{\pi}=\delta_{ij}\) with a bounded eigenvalue spectrum \(1=\lambda_{0}>\lambda_{1}\geq\lambda_{2}\geq\cdots\), where \(\pi\) denotes the system's equilibrium probability. Thus the system's state \(\chi_{t}(\mathbf{x})\) at time \(t\) can be represented as \(\chi_{t}(\mathbf{x})=\sum_{i}\langle\psi_{i}|\chi_{t}\rangle_{\pi}\psi_{i}( \mathbf{x})\) and its time evolution after time \(k\tau\), where \(k\) is an integer and \(\tau\) represents the lag time, is given by,
\[\chi_{t+k\tau}(\mathbf{x})=\mathcal{T}^{k}\circ\chi_{t}(\mathbf{x})=\sum_{i}\langle\psi_{i}|\chi_{t}\rangle_{\pi}\psi_{i}(\mathbf{x})\exp\left(\frac{-k\tau}{t_{i}}\right) \tag{3}\]
Here \(t_{i}=-\tau/\text{log}\lambda_{i}\) denotes the implied time scale of the eigenfunction \(\psi_{i}\), and eigenvalues \(\lambda_{i}\) represent the autocorrelation times. Thus, the system's behavior at long time scales can be deduced from slow eigenfunctions i.e., \(\psi_{i}\)'s corresponding to large \(\lambda_{i}\)'s of the transfer operator \(\mathcal{T}\).
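As a quick numerical illustration of Equation (3), a small row-stochastic matrix can stand in for the transfer operator at lag \(\tau\); this is a toy sketch rather than anything from the cited works. Repeated application relaxes an initial density toward equilibrium at rates set by the implied timescales.

```python
import numpy as np

# Toy 3-state transition matrix at lag time tau (rows sum to 1)
T = np.array([[0.90, 0.08, 0.02],
              [0.10, 0.85, 0.05],
              [0.02, 0.08, 0.90]])
tau = 1.0
evals = np.sort(np.linalg.eigvals(T).real)[::-1]  # 1 = lambda_0 > lambda_1 >= lambda_2
timescales = -tau / np.log(evals[1:])             # implied timescales t_i
p = np.array([1.0, 0.0, 0.0])                     # density concentrated in state 0
for _ in range(50):
    p = p @ T                                     # repeated action of the propagator
print(timescales, p)  # fast modes decay; p approaches the equilibrium distribution
```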
A well-established technique for estimating the leading eigenfunctions of the transfer operator is the variational approach to conformation dynamics (VAC) (59, 60, 36). The key idea underlying VAC is to leverage the bounded and sorted nature of the eigenvalue spectrum of \(\mathcal{T}\), and successively maximize \(\bar{\lambda}_{i}\) to approximate leading \(\bar{\psi}_{i}\) such that the
condition, \(\langle\bar{\psi}_{i}|\bar{\psi}_{j}\rangle_{\pi}=0\) is satisfied. In this way, VAC is able to learn slow modes with the highest autocorrelation time.
Time-structure based independent component analysis [(61, 52)] (TICA) is a particular implementation of VAC in which RCs are constructed as linear combinations of the input features or CVs. For example, Pande _et al._ [(52)] used TICA to learn a lower-dimensional manifold, which served as the biasing variable for metadynamics-based enhanced sampling. However, the restriction to linear transformations limits the expressivity of the RCs learned by TICA. In a later work, Pande _et al._ introduced kernel TICA [(62)] (kTICA) for learning RCs that are non-linear functions of the input features. Here, the main idea was to introduce non-linearity by defining a kernel function that constructs an abstract feature space from the input features using a pairwise similarity measure. However, later works [(63)] identified several limitations of the kTICA implementation, namely its high memory (\(O(N^{2})\)) and time (\(O(N^{3})\)) complexity.
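To make this construction concrete, the following sketch (ours, assuming mean-free input features and a well-conditioned covariance) estimates the instantaneous and time-lagged covariance matrices from a CV time series and solves the resulting generalized eigenvalue problem with SciPy.

```python
import numpy as np
from scipy.linalg import eigh

def tica(X, lag, eps=1e-10):
    """Linear VAC/TICA: slow linear combinations of input CVs.

    X   : (T, d) array of CV time series
    lag : lag time tau in frames
    """
    X = X - X.mean(axis=0)                        # mean-free features
    X0, Xt = X[:-lag], X[lag:]
    C0 = X0.T @ X0 / len(X0) + eps * np.eye(X.shape[1])
    Ct = 0.5 * (X0.T @ Xt + Xt.T @ X0) / len(X0)  # symmetrized lagged covariance
    evals, evecs = eigh(Ct, C0)                   # solve Ct v = lambda C0 v
    order = np.argsort(evals)[::-1]               # slowest modes first
    return evals[order], evecs[:, order]
```

The columns of the returned eigenvector matrix are the TICA directions; projecting `X` onto the leading ones yields candidate RCs for biasing.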
One of the earliest applications of artificial neural networks (ANNs) in implementing the VAC principle for dimensionality reduction was VAMPnets [(64)]. The key idea in VAMPnets is the automated construction of MSMs from structural features, where the ANN's loss function, called the VAMP score, is derived using a Koopman operator framework. The authors noted that one particular choice of VAMP score, VAMP-2, is well suited for time-series data and intuitively corresponds to the sum of the squared eigenvalues of the transfer operator [(65)]. However, the output layer of VAMPnets, i.e., the reduced dimensions, is constructed by applying a softmax function for detecting discrete metastable states of a system, and is thus not directly suitable for performing enhanced sampling, which often requires smoothly differentiable variables. To utilize the VAMPnets framework for constructing continuous and descriptive RCs, Ferguson _et al._ [(63)] proposed the state-free reversible VAMPnets (SRV) for approximating the eigenfunctions of the transfer operator as nonlinear mappings of the input space without using the kernel trick. SRV is implemented through an ANN that maps input coordinates to an \(n\)-dimensional continuous output, where \(n\) is a hyperparameter representing the number of slow modes learned by the model. In practice, the outputs \(f_{i}(\mathbf{x})\) of the ANN approximate the eigenfunctions as linear combinations \(\bar{\psi}_{i}(\mathbf{x})=\sum_{j}s_{ij}f_{j}(\mathbf{x})\) by minimizing \(\mathcal{L}=\sum_{i}g(\tilde{\lambda}_{i})\), where \(g(\tilde{\lambda}_{i})\) is any monotonically decreasing function of the machine-learned eigenvalues \(\tilde{\lambda}_{i}\) of the transfer operator. The authors noted that specific choices, e.g., \(g(\tilde{\lambda})=-\tilde{\lambda}^{2}\) or \(g(\tilde{\lambda})=1/\log(\tilde{\lambda})\), for which the loss corresponds to maximizing the cumulative kinetic variance (VAMP-2 score) or the sum of the implied time scales respectively, produce good results.
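The VAMP-2 score that drives VAMPnets training can be written compactly as the squared Frobenius norm of the whitened time-lagged covariance. The sketch below is an illustrative NumPy version under the assumption of mean-free network outputs; it is not the reference implementation.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power as mpow

def vamp2_score(Y, lag, eps=1e-10):
    """VAMP-2 score of output features Y: sum of squared singular values
    of the whitened time-lagged covariance matrix."""
    Y = Y - Y.mean(axis=0)
    Y0, Yt = Y[:-lag], Y[lag:]
    n, d = len(Y0), Y.shape[1]
    C00 = Y0.T @ Y0 / n + eps * np.eye(d)   # instantaneous covariances
    Ctt = Yt.T @ Yt / n + eps * np.eye(d)
    C0t = Y0.T @ Yt / n                     # time-lagged cross-covariance
    K = mpow(C00, -0.5) @ C0t @ mpow(Ctt, -0.5)
    return np.sum(np.linalg.svd(K, compute_uv=False) ** 2)
```

In VAMPnets this quantity is maximized with respect to the network parameters that produce `Y`.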
In a very recent publication, Ferguson _et al._ [(66)] extended SRV by proposing the Girsanov Reweighting Enhanced Sampling Technique (GREST), which utilizes both dynamical Girsanov reweighting and thermodynamic corrections to learn the slow modes. As discussed in previous sections, a general scheme for RC construction involves iterating between performing MD simulations and analyzing the biased simulation data. Naturally, this analysis will produce more accurate results when the biased nature of the accelerated MD trajectories is considered and dynamical corrections are implemented. Previously proposed approaches have addressed this same problem in the context of the RAVE method [(67)] by using the square-root formalism originally attributed to Bicout and Szabo [(68)]. GREST solves the problem by modifying the SRV loss function according to the Girsanov theorem. The implemented ANN learns from a set of discontinuous, biased MD trajectories under the simulation potential \(V_{sim}(\mathbf{x})=V_{target}(\mathbf{x})-U_{bias}(\mathbf{x})\). The key idea behind Girsanov reweighting is to correctly
assign path probabilities of different trajectories evolving under \(V_{sim}(\mathbf{x})\) to follow unbiased \(V_{target}(\mathbf{x})\). Finally, dynamical observables obtained from the path ensemble are used to estimate unbiased averages.
In a typical dimensionality-reduction-through-enhanced-sampling protocol, a crucial step in learning improved RCs through iterations is the initial first round of MD. The RC learned from the initial round can be improved by performing separate unbiased simulations where the system is initialized at different metastable states that are known _a priori_. However, when studying complicated practical systems, it is possible that this will still not result in good initial RCs, and many iterations will be required to reach a converged result due to the absence of transition dynamics. Here, one strategy could be to perform several biased simulations, e.g., using metadynamics with different trial RCs, and attempt to combine information from the resultant trajectories. Even if the trial RCs are suboptimal, the improved RC learned from the combined trajectories will likely be superior due to better sampling. However, combining biased trajectories with different RCs is not trivial, and Parrinello _et al._ [(69)] introduced Deep-TICA, which is essentially a complete protocol for implementing this idea. The main idea behind Deep-TICA is to learn RCs from the first round of biased simulation by implementing a nonlinear VAC principle and biasing the leading eigenfunctions. It should be noted that different biased simulations will have different eigenvalue spectra of the transfer operator, as the modes that are accelerated will differ across simulations. In Deep-TICA this is taken into account by using the accelerated time scale [(47)] for implementing the VAC principle and learning improved RCs. Additionally, Deep-TICA employs the recently developed OPES multithermal method for the first round of biased simulations, using the potential energy of the system as an RC. The subsequent rounds of simulations are also implemented through OPES due to its rapid convergence to a quasistatic regime.
### Information Bottleneck-Based Approaches
Information bottleneck [(70)] (IB) based approaches perform dimensionality reduction of MD data by optimizing the trade-off between the complexity and the prediction accuracy of an ML model. The resulting low-dimensional manifold, called the latent space, can be used as the biasing RC of an appropriate enhanced sampling scheme such as umbrella sampling or metadynamics. Typical implementations of IBs involve autoencoder [(71)] type ML models that consist of two sequential feed-forward ANNs known as the encoder and decoder, respectively. During training, the encoder parameters are optimized to generate a latent space embedding containing the most essential information in the training data, while the decoder parameters are optimized to reconstruct the input from the latent space as accurately as possible. Thus, the encoder-decoder pair is trained simultaneously to find an optimal trade-off in a self-supervised manner. A key advantage of IBs, in contrast to the transfer operator approximation methods discussed earlier (see Section 3.2), is that autoencoders contain decoders that can effectively map data points sampled from the latent space back to the original input space, enabling them to function as generative models.
One of the earliest adoptions of autoencoders for MD was the Molecular Enhanced Sampling with Autoencoders (MESA) approach by Ferguson _et al._ [(72)]. MESA is used to learn a nonlinear mapping between atomic coordinates (\(\mathbf{x}_{t}\)) generated from MD and a low-dimensional latent representation from which \(\hat{\mathbf{x}}_{t}\) can be reconstructed with minimal error, \(\mathcal{L}=\sum_{q=1}^{Q}\|\mathbf{x}_{q,t}-\hat{\mathbf{x}}_{q,t}\|^{2}\).
The optimal dimensionality of the autoencoder is determined by the fraction of variance explained by the latent space compared to the input data. Since MESA works with atomic coordinates, translational and rotational motion are taken into account through mean centering and through augmenting the MD data with rotationally invariant frames, respectively. Finally, the learned RCs are used to perform umbrella sampling (US), and an estimate of the unbiased free energy surface along the MESA RCs is computed using the weighted histogram analysis method (73) (WHAM).
In contemporary work, Wehmeyer and Noe (74) proposed the time-lagged autoencoder (deep TAE), where input coordinates (\(\mathbf{x}_{t}\)) at time \(t\) are given to the encoder and the decoder learns to predict the atomic coordinates (\(\mathbf{x}_{t+\tau}\)) at a later time (\(t+\tau\)). The authors showed that linear TAEs correspond to time-lagged canonical correlation analysis (TCCA) and are equivalent to TICA if the MD data is time-reversible. By performing non-linear transformations and choosing an appropriate lag time \(\tau\), highly expressive RCs for enhanced sampling can be obtained.
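A minimal PyTorch rendering of the autoencoder setup shared by MESA and the deep TAE is sketched below (our illustration; the layer sizes are arbitrary). With `lag = 0` the model reconstructs its input as in MESA, while `lag > 0` makes it predict the time-lagged frame as in a TAE.

```python
import torch
import torch.nn as nn

class TimeLaggedAE(nn.Module):
    """Encoder-decoder pair mapping coordinates to a low-dimensional latent."""
    def __init__(self, n_in, n_latent=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, 64), nn.Tanh(),
                                     nn.Linear(64, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 64), nn.Tanh(),
                                     nn.Linear(64, n_in))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_step(model, opt, X, lag=0):
    """One gradient step; X is a (T, n_in) tensor of mean-centered coordinates."""
    inp = X[:len(X) - lag] if lag > 0 else X
    targ = X[lag:] if lag > 0 else X
    loss = ((model(inp) - targ) ** 2).sum(dim=1).mean()  # reconstruction error
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

After training, the encoder output serves as the learned RC.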
Pande _et al._ extended this approach of considering a time lag in the reconstruction error computation and proposed the variational dynamics encoder (75) (VDE), which substitutes vanilla autoencoders with variational autoencoders (76) (VAEs). In VAEs, the latent space is assumed to follow a prior distribution, typically a multivariate Gaussian, and the encoder learns to approximate the posterior distribution through training. During training, VAEs aim to maximize the evidence lower bound (77) (ELBO), consisting of two terms: the reconstruction loss (\(\mathcal{L}_{R}\)) measuring how well the VAE can reconstruct the input data, and the Kullback-Leibler (KL) divergence (\(\mathcal{L}_{KL}\)) between the approximate posterior and the prior distribution. The KL divergence encourages the approximate posterior to match the prior distribution, promoting regularization and controlling the complexity of the latent space. Under the VDE framework, a third term, the autocorrelation loss (\(\mathcal{L}_{AC}\)), is also considered, which maximizes the largest dynamical eigenvalue by following the VAC principle and encourages the discovery of the slowest process in the input data. Thus, the final VDE loss function takes the form \(\mathcal{L}=\mathcal{L}_{R}+\mathcal{L}_{KL}+\mathcal{L}_{AC}\).
As discussed in previous sections, the key strategy in computing improved RCs for enhanced sampling is to iterate between simulations and data analysis. However, after conducting the first round of biased simulation, the effect of the deposited bias on the MD trajectory should be taken into account for the accurate identification of RCs in subsequent rounds. Tiwary _et al._ addressed this issue by proposing the Reweighted Autoencoder for Variational Bayes (58) (RAVE). RAVE redefines the reconstruction loss by taking the deposited metadynamics bias (\(V_{bias}\)) into account, \(\mathcal{L}_{R}=\sum_{i}w_{i}^{2}(\mathbf{x}_{i,t}-\hat{\mathbf{x}}_{i,t})^{2}\), where \(w=e^{V_{bias}/k_{b}T}\). Unlike the IB-based approaches discussed so far, the encoder of RAVE adopts a linear activation function to keep the latent space interpretable. It should be noted that, unlike deep-TAE or VDE, the original version of the RAVE decoder does not implement any time lag and aims to reconstruct the input data. However, in a later work (78), the authors introduced a time lag by modifying the RAVE objective function to \(\mathcal{L}=I(\chi,\mathbf{X}_{\Delta t})-\beta I(\mathbf{X},\chi)\). Here, the first term is the mutual information between the latent representation \(\chi\) at time \(t\) and the input (\(\mathbf{X}_{\Delta t}\)) at a later time \(t+\Delta t\), while \(I(\mathbf{X},\chi)\) represents the mutual information between the input and the latent representation at time \(t\). The hyperparameter \(\beta\) can be used to tune the trade-off between model complexity and prediction accuracy. While implementing this objective function, the authors reported improved performance when setting \(\beta=0\), as it reduces the number of model parameters and adds a stochastic term to the decoder reconstruction to avoid data memorization. Wang _et al._ further improved the RAVE protocol [67] for iterative biasing by correcting the RAVE objective function to take into account the effect of bias at small time lags on the dynamical propagator of the system.
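The bias-reweighted reconstruction loss used by RAVE is straightforward to express. The snippet below is a schematic PyTorch rendering of the stated formula, with a hypothetical `v_bias` tensor holding the per-frame metadynamics bias.

```python
import torch

def rave_recon_loss(x, x_hat, v_bias, kBT):
    """Reweighted reconstruction loss: frames from biased MD are weighted
    by w = exp(V_bias / kBT) so the loss reflects unbiased statistics."""
    w = torch.exp(v_bias / kBT)             # (T,) per-frame weights
    sq_err = ((x - x_hat) ** 2).sum(dim=1)  # (T,) squared error per frame
    return (w ** 2 * sq_err).sum()
```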
In a further modified and, as of now, preferred version of RAVE, Wang _et al._ introduced the state predictive information bottleneck [79, 50] (SPIB), which aims to learn the metastable states of a system in addition to learning a low-dimensional representation of MD data given a time delay \(\Delta t\). SPIB acts as a fast-mode filter that ignores fluctuations occurring on time scales smaller than the time delay when generating the latent space suitable for enhanced sampling. From _a priori_ system information, a user provides an initial guess of the metastable state assignments, and the stochastic, non-linear decoder iteratively reconstructs and refines the metastable state definitions until convergence. Additionally, SPIB implements a mixture of Gaussians known as VampPrior [80] as the prior distribution instead of a single Gaussian for improved regularization of the latent space.
### Eigendecomposition Techniques
In this section, we look at data-driven methods for RC discovery that aim to find a low-dimensional representation of MD trajectories by computing the eigendecomposition of relevant operators other than the transfer operator discussed earlier (Section 3.2).
An interesting approach for enhanced sampling was proposed by Parrinello _et al._[81] that involves linear discriminant analysis (LDA). LDA is a supervised ML algorithm that constructs a \((d-1)\)-dimensional linear projection (\(\mathbf{W}\)) of input data with \(d\) labeled classes. This is achieved by simultaneously maximizing the inter-class distance (\(\mathbf{S}_{b}\)) and minimizing the within-class variance (\(\mathbf{S}_{w}\)), which is equivalent to maximizing Fisher's ratio: \(\underset{\mathbf{W}}{\operatorname{argmax}}\ \frac{\mathbf{W}^{T}\mathbf{S}_{b}\mathbf{W}}{\mathbf{W}^{T}\mathbf{S}_{w}\mathbf{W}}\). Using this expression it is straightforward to show that the eigenvector corresponding to the largest eigenvalue (\(\lambda\)) of the generalized eigenvalue problem \(\mathbf{S}_{b}\mathbf{W}=\lambda\mathbf{S}_{w}\mathbf{W}\) serves as the LDA projection operator. For data with two labeled classes (A & B) with given expectations (\(\boldsymbol{\mu}\)) and covariances (\(\boldsymbol{\Sigma}\)), \(\mathbf{W}\) can be computed as \((\boldsymbol{\Sigma}_{A}+\boldsymbol{\Sigma}_{B})^{-1}(\boldsymbol{\mu}_{A}-\boldsymbol{\mu}_{B})\) (82). In the context of MD, the expectations and covariances of the system CVs can be obtained from independent, short unbiased simulations initialized from states A & B that are separated by a high energy barrier. The authors noted that this definition of the projection operator assigns higher importance to CVs with high variance, causing poor sampling along the more stable CVs with small fluctuations. However, the space spanned by the latter type of path CVs needs to be adequately explored to overcome the high energy barrier. To address this issue, the authors proposed taking the harmonic average of the covariances instead of the arithmetic average when constructing the projection operator. The modified scheme is termed harmonic linear discriminant analysis (HLDA) and, using Eq. 4, HLDA can construct path CVs without any _a priori_ path information. Here the subscripts denote the class for which the expectation or covariance is calculated.
\[\mathbf{W}=\left(\frac{1}{\boldsymbol{\Sigma}_{A}}+\frac{1}{\boldsymbol{\Sigma}_{B}}\right)(\boldsymbol{\mu}_{A}-\boldsymbol{\mu}_{B}) \tag{4}\]
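Given the per-state means and covariances estimated from the two short unbiased runs, the (H)LDA projection reduces to a few linear-algebra calls. The sketch below is our own (following Eq. 4, with matrix inverses in place of the scalar fractions); it is not the authors' implementation.

```python
import numpy as np

def lda_direction(mu_a, mu_b, cov_a, cov_b, harmonic=True):
    """One-dimensional (H)LDA projection from two-state statistics.

    harmonic=True : harmonic average of the covariances (HLDA, Eq. 4)
    harmonic=False: arithmetic average (standard LDA)
    """
    if harmonic:
        Sw_inv = np.linalg.inv(cov_a) + np.linalg.inv(cov_b)
        W = Sw_inv @ (mu_a - mu_b)
    else:
        W = np.linalg.solve(cov_a + cov_b, mu_a - mu_b)
    return W / np.linalg.norm(W)
```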
In a later work, Bonati _et al._[83] extended this approach by proposing Deep-LDA. Here, input CVs recorded from short unbiased simulations initialized from two separate states are passed through a non-linear feed-forward ANN. The output of the last hidden layer acts as a high-level feature identifier and is used as the input to an ANN implementation of LDA. Here the ML model is trained to maximize the largest eigenvalue of the generalized
eigenvalue problem mentioned previously to ensure maximal class separation. It should be noted that Deep-LDA employs the traditional definition of the projection operator involving the arithmetic average, instead of the harmonic definition introduced in HLDA.
Very recently, Hocky _et al._ (84) proposed a novel LDA implementation for identifying optimal path CVs by taking the position coordinates of relevant atoms as inputs instead of CVs. This framework is particularly useful for studying systems where the number of input CVs scales poorly with system size (e.g., the number of pairwise distances scales quadratically). To work directly with atomic coordinates, which scale linearly, the authors reformulated the generalized eigenvalue problem as a generalized singular value decomposition (SVD). This enables the consideration of singular \(S_{w}\) matrices in the computation of the projection operator, which are often encountered for position coordinates. After removing the effects of translational and rotational motion represented in the position coordinates by aligning the molecular structures to a global average, optimal path CVs are computed and biased using OPES.
Compared to the LDA-based methods discussed above, Tiwary and Berne took an alternative approach by proposing the spectral gap optimization of order parameters (SGOOP) method (85). The key idea in SGOOP is that the transition probability matrix (\(\mathbf{\Omega}\)) corresponding to the optimal low-dimensional RC will have the largest time-scale separation between its slow and fast modes. If \(\lambda_{0}=1>\lambda_{1}\geq\lambda_{2}...\) represent the ordered eigenvalues of \(\mathbf{\Omega}\), this is achieved by finding parameters of the learned RC, expressed as a linear or non-linear combination of the input CVs, that maximize the spectral gap, i.e., \(\lambda_{s}-\lambda_{s+1}\), where \(s\) is the number of apparent barriers. The principle of Maximum Caliber with constraints (\(\rho_{i}\)) is used to estimate the transition probabilities between discretized states \(a\) and \(b\) of the system in the learned low-dimensional space from an ensemble of short unbiased/biased simulations, where the bias would otherwise prevent recovery of the kinetics. In a later work (86), Smith _et al._ extended this approach by proposing iterative construction of the low-dimensional RC starting from one dimension, employing conditional probability factorization to focus on transitions that are not captured in earlier dimensions; additional components of the RC are constructed only if the prior components failed to capture a slow mode. In a recent work (87), the authors introduced a stopping criterion for the iterative addition of orthogonal components to the RC by computing commute times on the space of a kinetically accurate distance measure termed SGOOP-d.
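The spectral-gap objective at the heart of SGOOP is simple to evaluate once a transition matrix along a trial RC has been estimated. A minimal sketch (ours, assuming a row-stochastic \(\mathbf{\Omega}\) and a known number of apparent barriers \(s\)) follows.

```python
import numpy as np

def spectral_gap(omega, s):
    """Spectral gap lambda_s - lambda_{s+1} of a transition matrix omega,
    where s is the number of apparent barriers along the trial RC."""
    evals = np.sort(np.abs(np.linalg.eigvals(omega)))[::-1]  # 1 = l0 > l1 >= ...
    return evals[s] - evals[s + 1]
```

SGOOP then optimizes the trial RC's mixing coefficients to maximize this gap.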
### Other Dimensionality Reduction Approaches
In this section, we report notable and recently developed machine-learned dimensionality reduction techniques that do not fall under the classes of methods presented in the previous sections.
Subsequent to Rydzewski & Nowak introducing the t-Distributed Stochastic Neighbor Embedding (t-SNE) (51) for representing MD data, Zhang & Chen proposed a t-SNE based enhanced sampling scheme (88) that generates a stochastic kinetic embedding of the input CVs into a low-dimensional RC. By assuming the input CVs undergo an implicit diffusion process, the KL divergence between the high-dimensional transition probability matrix (\(\mathbf{M}_{input}\)) and the low-dimensional \(\mathbf{M}_{t-SNE}\) is minimized through a multilayer perceptron. Finally, exploration of the currently least informative region in the CV space is performed through well-tempered metadynamics in an iterative manner. Rydzewski & Valsson extended this approach through multiscale reweighted stochastic embedding (MRSE) (89) by incorporating a multiscale representation of the input CVs, thus removing the
choice of perplexity, a hyperparameter of t-SNE, from the protocol. t-SNE involves computing the probabilities \(p_{ij},q_{ij}\) of choosing a sample \(\mathbf{x}_{i}\) as a neighbor of the sample \(\mathbf{x}_{j}\) in the feature (**M**) and low-dimensional (**Q**) spaces respectively. In a traditional t-SNE implementation, perplexity controls the trade-off between the captured local and global properties, and it can be difficult to select without _a priori_ system information. Under this framework, the pairwise probability distribution \(\mathbf{M}_{mix}\) is constructed by considering a mixture of **M**'s stemming from individual perplexity values. An ML algorithm is implemented to find model parameters that minimize the KL divergence between **M** and **Q** for a batch size \(N_{b}\). The final loss function is shown in Eq. 5. For an accurate construction of the RC, a landmark selection scheme is adopted to prevent the under-representation of transition-state data points during training, and a reweighting scheme is used to learn from biased data.
\[\mathcal{L}=\frac{1}{N_{b}}\sum_{i=1}^{N_{b}}\sum_{j=1,\,j\neq i}^{N_{b}}p_{ij}\log\frac{p_{ij}}{q_{ij}} \tag{5}\]
An alternative approach (90) to dimensionality reduction was taken by Kozinsky _et al._, which employs a multitask learning scheme. This method modifies a VAE architecture by adding a second, parallel decoder that minimizes the potential energy error, so as to learn a more informed transition state representation in the latent space. Under this framework, the objective function of the ML model is a weighted sum of three loss functions, \(\mathcal{L}=c_{c}\mathcal{L}_{c}+c_{p}\mathcal{L}_{p}+c_{r}\mathcal{L}_{r}\), where the subscripts \(c,p,r\) denote the metastable state classification, potential energy error minimization, and latent space regularization tasks respectively. It should be noted that the metastable state labels for \(\mathcal{L}_{c}\) are obtained from a TPS scheme and through committor analysis, which can become expensive in the presence of many metastable states.
## 4 Learning New Strategies to Enhance Sampling
Although the majority of work incorporating ML into enhanced sampling has focused on dimensionality reduction, a few creative methods have used ML to inform how new simulations are initialized or how bias is deposited. In the adaptive sampling community, this has primarily been achieved through reinforcement learning-inspired methods that define a goal or reward function that is then used to initialize new simulations. These methods balance initialization in regions that optimize a desired property and initialization in undersampled regions that may contain pathways to optimal regions. In the biasing community, ML has been used to develop new strategies for bias deposition, in contrast to the kernel density estimation used by metadynamics and OPES. These methods have primarily achieved this goal by introducing new neural network-based free energy estimators but have also treated bias deposition as a reinforcement learning problem.
### Reinforcement Learning Strategies for Adaptive Sampling
Three methods, FAST (91), REAP (92), and AdaptiveBandit (93), take inspiration from reinforcement learning to treat adaptive sampling initialization as a policy selection problem (94, 95, 96, 97) where decisions are made to maximize a reward function.
FAST (91) takes inspiration from the multi-armed bandit problem (94) to initialize simulations that are likely to optimize a given property such as root-mean-squared deviation
(RMSD) to a target state. The multi-armed bandit problem weighs the trade-off between exploration and exploitation when facing uncertainty in rewards. In the original problem, a gambler faces the trade-off between exploiting a slot machine (also called a one-armed bandit) with known rewards or exploring for a potentially better machine. FAST, however, balances the trade-off between sampling states with optimal values of the metric of interest (exploitation) and poorly sampled states (exploration) by seeding simulations in proportion with a reward function that balances these two. An illustrative example of the general trade-off for reinforcement learning adaptive sampling is shown in **Figure 3**. The reward \(r(c_{i})\) for state/cluster \(c_{i}\) shown in Eq. 6 combines a directed component \(\bar{\phi}(c_{i})\) that rewards states with a higher value of the metric of interest and an undirected component \(\bar{\psi}(c_{i})\) that rewards poorly sampled states with \(\alpha\) controlling the weight of exploration and exploitation.
\[r(c_{i})=\bar{\phi}(c_{i})+\alpha\bar{\psi}(c_{i}) \tag{6}\]
Figure 3: An illustrative example of the trade-off between exploration and exploitation in reinforcement learning. Here we show a collection of cartoon trajectories for conformational change in a protein. The optimized property is the distance to a reference conformation where distant conformations are shown in blue and close conformations are shown in purple. Reinforcement learning methods balance sampling regions such as the bottom left conformer which is in a poorly explored region of configuration space (exploration) and sampling regions such as the top conformers which have already optimized the property of interest (exploitation).
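In practice, the FAST reward of Eq. 6 amounts to ranking discovered states by a weighted sum of a directed and an undirected term. The sketch below is schematic (ours); in particular, the min-max normalization is one common choice rather than something prescribed by the method.

```python
import numpy as np

def fast_rewards(metric, counts, alpha=1.0):
    """FAST reward (Eq. 6) for each discovered state/cluster.

    metric : value of the property of interest per cluster (larger = better)
    counts : number of times each cluster has been sampled
    alpha  : exploration/exploitation trade-off
    """
    norm = lambda v: (v - v.min()) / (v.max() - v.min() + 1e-12)
    phi = norm(metric)                 # directed term: favor optimal states
    psi = norm(1.0 / (1.0 + counts))   # undirected term: favor rarely seen states
    return phi + alpha * psi

# Seed the next round from the highest-reward clusters, e.g.:
# starts = np.argsort(fast_rewards(metric, counts))[::-1][:n_seeds]
```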
REAP (92) uses a reinforcement learning-inspired reward function to select starting configurations in the setting where a set of CVs is known but their importance is not. REAP calculates a reward function for each cluster \(c_{i}\) shown in Eq. 7 in order to seed simulations in clusters with high rewards. The reward function standardizes each CV \(\Theta_{j}\) and then takes a weighted sum of their absolute values. The standardization is done with respect to the mean and standard deviation of all clusters \(C\) and the weights \(w_{j}\) ranging from 0 to 1 are interpreted as the importance of \(\Theta_{j}\). The weights are updated iteratively after each round of sampling to maximize the reward over the set of least sampled clusters. This iterative scheme allows sampling to proceed in different directions as new states are discovered.
\[r(c_{i})=\sum_{j=1}^{k}w_{j}\frac{|\Theta_{j}(c_{i})-\langle\Theta_{j}(C)\rangle|}{\sigma_{j}(C)} \tag{7}\]
AdaptiveBandit (93) is another method based on the multi-armed bandit problem but in this setting the goal is to find minimum free energy configurations. The reward is defined as the negative mean of the free energy of configurations sampled after a starting point and simulations are initialized using the UCB1 algorithm (98) for the multi-armed bandit problem. UCB1, shown in Eq. 8, defines a trade-off between the expected reward \(Q_{t}\) for a given action \(a\), or initialization in this case, and the uncertainty based on how many times the action has been selected. The uncertainty is defined as the square root of the ratio of the total past actions taken \(t\) and the number of times action \(a\) has been taken \(N_{t}(a)\). AdaptiveBandit uses an MSM to discretize the system's states and estimate their free energy, providing a discrete set of choices for the starting state and estimated rewards. For each iteration, a random configuration is chosen from the optimal starting state according to UCB1, a short trajectory is sampled, then the MSM is updated which updates the possible choices, the associated rewards, and their uncertainties.
\[a_{t}=\operatorname*{argmax}_{a}\left[Q_{t}(a)+c\sqrt{\frac{\ln(t)}{N_{t}(a)}}\right] \tag{8}\]
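The UCB1 rule of Eq. 8 is equally compact; the sketch below (illustrative only) selects the next starting state from running reward estimates and visit counts.

```python
import numpy as np

def ucb1_action(Q, N, c=1.0):
    """Pick the next starting state via UCB1 (Eq. 8).

    Q : current estimate of the expected reward of each state
    N : number of times each state has been chosen so far
    c : exploration constant
    """
    t = N.sum()
    bonus = c * np.sqrt(np.log(max(t, 1)) / np.maximum(N, 1))
    bonus[N == 0] = np.inf   # always try unvisited states first
    return int(np.argmax(Q + bonus))
```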
### Learning New Strategies to Bias Simulations
In this section, we review free energy-based biasing methods in which the process of modifying the Hamiltonian through the deposition of bias or through other approaches is improved by ML algorithms (see **Figure 4** for an illustration). Both non-parametric methods and artificial neural networks (ANNs) have been directly integrated with enhanced sampling methods to facilitate sampling efficiency and accuracy. While there are several examples of such integration, here we highlight three.
Csanyi and collaborators introduced Gaussian Process Regression (GPR), a non-parametric ML technique for reconstructing functions in multiple dimensions, first to reconstruct free energy surfaces from the umbrella sampling method (100), and later to enhance the sampling of a combined adaptive biasing force and metadynamics scheme (101). Specifically, metadynamics is used to deposit bias potentials, and instantaneous collective forces (ICF) are estimated using the adaptive biasing force method for the reconstruction of the final multidimensional free energy surface. It has been shown that with this metadynamics/ICF/GPR scheme, the sampling/computational efficiency is significantly improved.
Besides combining the two enhanced sampling methods metadynamics and adaptive biasing force with the kernel-based ML method reviewed above, ANNs, which require training on prebuilt datasets, have also been applied individually to these two methods. As the second example we mention work by Galvelis and Sugita (102). Here an ANN is trained to estimate the free energy on the fly for an input higher-dimensional CV space. The approximated free energy is then used to construct a bias potential before additional biasing with traditional CV-based metadynamics. This method in principle allows not only fast approximations of the instantaneous free energy but also a higher-dimensional biasing scheme. However, as pointed out by the authors (102), limitations show up when the method is applied to more complicated systems with high-dimensional biases being deposited. In such systems, the construction of the bias potential becomes less efficient and eventually makes it difficult for transitions to occur with any computational advantage. Later on, a very similar strategy was applied to another free energy based biasing method, the variationally enhanced sampling (VES) method (103). In particular, the high-order basis set expansion needed in VES is replaced by an ANN that takes pre-screened CVs as input and predicts free energies of the CV space. With this approach, the computational cost becomes manageable and the number of input CVs is no longer limited (the number of variational parameters scales exponentially with the number of chosen CVs in conventional VES) [104].

Figure 4: Schematic of machine-learned biasing methods reviewed in Section 4.2, where typical estimators (Gaussian kernel density estimation (top) (99) and discrete grids (bottom)) are replaced with trained ANN/DNN models to yield high accuracy and efficient computation on the biases.
As the third example, we describe how ANNs were combined with the adaptive biasing force method. In the traditional approach, discrete grids are used to estimate the mean forces and consequently the free energies. However, with such a scheme, a tradeoff between precision and efficiency needs to be made: fast convergence can be reached with low-resolution grids, but numerical issues arise in the mean-force estimates as the grid bins become broader. In the "Force-biasing Using Neural Networks" (FUNN) approach [105], a self-regularizing ANN [106] is trained to provide an on-the-fly estimate of the free energy and, later on, to learn the generalized mean force. Specifically, the model is optimized with rigid regularization to make the network robust to hyperparameters and overfitting. Results show that both ANN-based sampling techniques speed up sampling compared to their non-NN counterparts, and additionally, FUNN leads to faster convergence than ANN sampling.
The methods we highlighted above mix machine-learning techniques with traditional adaptive biasing methods -- metadynamics and the adaptive biasing force method. Naturally, this recipe can be used to create more flavors of data-driven biased simulations. As an example, the idea of reinforcement learning has been introduced to enhance the sampling of unexplored regions along selected CV spaces in [107]. In a similar spirit to metadynamics, in the reinforced dynamics method Gaussian biases are deposited in regions that are already sufficiently sampled. Unlike metadynamics, the bias deposition in [107] relies on an uncertainty indicator, defined as the standard deviation of the predictions output by the trained deep neural network models. A significant advantage of such a formalism is that it allows one to enhance the sampling in high-dimensional CV spaces, given the ability of deep neural networks to represent high-dimensional functions. However, as also acknowledged by the authors, the quality of the selected CVs affects the resulting approximation of the free energy surface; approaches to construct ML RCs are reviewed in Section 3.1. Recently, deep learning methods were also applied to Gaussian accelerated Molecular Dynamics (GaMD). Unlike the typical GaMD method [108], Do _et al._ [109] introduce machine-learned boost potentials to smoothen the system's potential energy surface, termed Deep Boosted Molecular Dynamics (DBMD). These potentials are optimized by iterative training on the randomly generated boost potentials in initial GaMD runs, and a chosen anharmonicity threshold is used as a sign of convergence. We believe we will see many more such methods in the coming years, mixing different ML architectures with different enhanced sampling protocols for data-driven biased simulations that in principle do not require dimensionality reduction, at least on the part of the user.
## 5 Estimating Free Energies With Flow-Based Models
In contrast to dimensionality reduction methods, flow-based models do not attempt to map the dynamics of the system to a simpler, lower-dimensional manifold. The key idea is to instead transform the complex probability distribution of the data into a more tractable distribution while preserving the dimensionality of the data. As such, flow-based models are especially suited for the study of systems with highly complex structures and dynamics; systems with many metastable states and no clear separation of timescales may not be accurately represented on low-dimensional manifolds. Flow-based models learn the distribution of states in the full configuration space, avoiding dimensionality reduction and assumptions about the data made therein.
More precisely, flow-based models formulate generative modeling as learning deterministic or stochastic mappings from a complicated empirical distribution to a simple prior distribution. Once the map bridging the empirical and prior distributions is learned, samples from the prior can be transformed into samples that resemble the empirical data distribution, typically at a low computational cost. Since flow-based models learn mappings between probability distributions, they have been primarily used to estimate free energies as opposed to learning dynamics, which remains an open, exciting area.
### Normalizing Flows
Normalizing flows are models that establish an invertible mapping between empirical and target distributions while ensuring that the Jacobian of the mapping remains computationally tractable (110). These models have been employed in the enhanced sampling community to improve the accuracy of free energy estimation and enable the efficient generation of a large number of realistic samples.
Normalizing flows parameterize an invertible mapping \(f_{\theta}:\mathbf{x}\rightarrow\mathbf{z}\) between data \(\mathbf{x}\) sampled from an empirical distribution and \(\mathbf{z}\) that follows a target or prior distribution that can be easily sampled (111, 112, 113). Such a mapping may be parameterized as an ordinary differential equation taking the form
\[\frac{\mathrm{d}\mathbf{x}}{\mathrm{d}t}=f_{\theta}(\mathbf{x},t) \tag{9}\]
with learnable drift \(f_{\theta}\) and solution
\[\mathbf{z}=\mathbf{x}+\int_{0}^{T}f_{\theta}(\mathbf{x}(t),t)\,\mathrm{d}t \tag{10}\]

Because the mapping is invertible with a tractable Jacobian, the probability density carried by a sample transforms according to the change-of-variables formula,

\[p_{\theta}(\mathbf{x})=p_{Z}\left(f_{\theta}(\mathbf{x})\right)\left|\det\frac{\partial f_{\theta}(\mathbf{x})}{\partial\mathbf{x}}\right| \tag{11}\]

where \(p_{Z}\) denotes the prior density. In practice the mapping is composed of layers whose Jacobians are restricted to be cheap to evaluate (e.g., triangular), which makes both sampling and likelihood evaluation efficient.
Figure 5: **A** Depiction of the learning process in normalizing flows, where samples from a complex data distribution are mapped to a simple prior distribution. The mappings in both directions are learned simultaneously. **B** Illustrative depiction of how a score-based model maps samples from a complex data distribution to a simple prior distribution. Notably, only the mapping from the prior distribution to the data distribution is learned. **C** Comparative illustration emphasizing the difference between score-based models and normalizing flows; score-based models learn only the reverse mapping, whereas normalizing flows parameterize the forward and reverse mappings simultaneously. And while normalizing flow mappings are bijective, mappings learned by score-based models will produce different results for each realization of the generative processes.
However, this efficiency comes at a cost. While these restrictions make normalizing flows computationally tractable, they limit their expressivity and make it challenging to learn complex mappings between distributions. Normalizing flows have been observed to struggle with mapping multi-modal empirical distributions to unimodal priors [114]. Recent advancements in normalizing flows aim to address these issues. For example, one approach involves alternating between deterministic flow layers and Monte-Carlo sampling layers [115, 116, 117]. This approach is similar to the class of generative models known as score-based generative models (discussed in Section 5.4).
### Boltzmann Generators
The pioneering use of normalizing flows to enhance sampling is the Boltzmann generator [118]. The Boltzmann generator establishes an invertible mapping between the distribution of system conformations \(\mathbf{x}\) and a latent distribution \(\mathbf{z}\) using a normalizing flow. Once this mapping is learned, it becomes possible to transform samples from the prior distribution into realistic conformations through the inverse mapping.
However, it should be noted that the generated conformations are not guaranteed to follow a Boltzmann distribution, so thermodynamic free energies cannot be computed directly. To account for this, the Boltzmann generator employs a reweighting scheme based on the potential energies of the conformations, \(u(\mathbf{x})\), in which the probabilities are rescaled by the Boltzmann factor \(e^{-\beta u(\mathbf{x})}\). The reweighting scheme utilized by the Boltzmann generator is limited to implicitly solvated systems because it requires generating _all_ degrees of freedom of the system. For explicitly solvated systems, it would be necessary to generate the degrees of freedom associated with the solvent, which becomes computationally infeasible even for relatively small systems.
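One concrete way to carry out this reweighting (our sketch, assuming the flow exposes both samples and their generation log-probabilities) is to assign each generated conformation an importance weight proportional to \(e^{-\beta u(\mathbf{x})}/p_{\theta}(\mathbf{x})\) before computing thermodynamic averages.

```python
import numpy as np

def boltzmann_weights(u, log_p_gen, beta):
    """Importance weights for flow-generated conformations.

    u         : potential energies u(x) of the generated samples
    log_p_gen : log-probability of each sample under the flow
    beta      : inverse temperature 1 / kBT
    """
    log_w = -beta * u - log_p_gen
    log_w -= log_w.max()          # subtract max for numerical stability
    w = np.exp(log_w)
    return w / w.sum()

# Reweighted average of an observable O evaluated on the samples:
# avg_O = np.sum(boltzmann_weights(u, log_p, beta) * O_samples)
```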
Other variants of the Boltzmann generator attempt to integrate with multi-ensemble simulation protocols, incorporate transferability across thermodynamic parameters, and improve the robustness of the learned normalizing flow [119, 120, 115].
### Free Energy Estimation Through Invertible Mappings
Following the Boltzmann generator, there has been a proliferation of approaches that combine normalizing flows with analytically informed approaches to free energy estimation. The accuracy of free energy estimates depends on the degree of configuration space overlap between the states being compared. Unfortunately, unbiased simulations often exhibit limited overlap between states, leading to slow convergence. Methods like BAR (Bennett Acceptance Ratio) and MBAR (Multistate Bennett Acceptance Ratio) aim to improve estimates by explicitly maximizing the likelihood of the free energy [121, 122]. Notably, MBAR has been shown to be a statistically optimal free energy estimator for uncorrelated samples.
Another approach to enhancing the convergence of free energy estimates is finding an invertible mapping \(\mathcal{M}\) which increases the overlap between states in configuration space and has a tractable Jacobian [123]. Normalizing flows are particularly suitable for this task. Consequently, there has been a line of research focused on learning normalizing flow mappings \(f_{\theta}\) which increase overlaps in configuration space and maximize the likelihood of the free energy [124, 125, 126]. The free energies under the transformation \(f_{\theta}\) can then be reweighted using Eq. 11 to obtain an estimate of the conformational free energy.
### Score-based Models
Just as normalizability is fundamental to normalizing flows, stochasticity plays a central role in score-based models. Score-based models extend the framework of normalizing flows to encompass stochastic differential equations (SDEs). These models encompass a variety of frameworks that approach generative modeling as learning stochastic processes [127, 128, 129, 130]. Drawing inspiration from non-equilibrium thermodynamics, score-based models capture complex relationships within data [131].
Score-based models can be expressed as pairs of forward and backward SDEs:
\[\mathrm{d}\mathbf{x}=-f(\mathbf{x},t)\mathrm{d}t+g(t)\mathrm{d}\mathbf{w}\quad \text{and} \tag{12}\]
\[\mathrm{d}\mathbf{x}=-\left[f(\mathbf{x},t)+g(t)^{2}\nabla_{\mathbf{x}}\log p _{t}(\mathbf{x})\right]\mathrm{d}t+g(t)\mathrm{d}\mathbf{w}. \tag{13}\]
Here, \(f(\mathbf{x},t)\) represents the drift guiding the diffusion process and \(g(t)\) parameterizes the time-dependent noise. The forward and reverse diffusions are described by Eq. 12 and Eq. 13 respectively, and are evaluated from \(t=0\) to \(t=T\) for the forward process and from \(t=T\) to \(t=0\) for the reverse.
The forward diffusion, determined by choice of \(f(\mathbf{x})\) and \(g(t)\), maps an empirical distribution \(p(\mathbf{x})\) to the stationary distribution of \(f(\mathbf{x})\). When \(f\) is linear in \(\mathbf{x}\), the stationary distribution of Eq. 12 is a Gaussian, allowing for efficient computation of the diffusion process at any given time \(t\). Unlike normalizing flows, the forward process is not learned (see **Figure 5B** and 5C).
Evaluating the reverse process (Eq. 13) is less straightforward due to the dependence of the term \(\nabla_{\mathbf{x}}\log p_{t}(\mathbf{x})\) on the initial conditions of the forward diffusion. This term, referred to as the score \(\mathbf{s}(\mathbf{x},t)\), is parameterized by a neural network. Intuitively, the score opposes the gradient of the probability flow in Eq. 12. Similar to normalizing flows, score-based models are trained by maximizing a variational lower bound on the data likelihood, which takes the form of a score-matching objective.
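Because the forward process with linear drift is Gaussian, training reduces to denoising score matching on analytically noised samples. The sketch below is a simplified, variance-preserving example of our own; the score network `s_theta` and the schedules `alpha`, `sigma` are assumptions for illustration, not part of any particular reference implementation.

```python
import torch

def dsm_step(s_theta, opt, x0, t, alpha, sigma):
    """One denoising score-matching step.

    x0           : (B, d) batch of data samples
    t            : (B,) diffusion times
    alpha, sigma : (B,) mean scaling and noise level of the Gaussian forward
                   process at times t, i.e. x_t = alpha * x0 + sigma * eps
    """
    eps = torch.randn_like(x0)
    xt = alpha[:, None] * x0 + sigma[:, None] * eps
    target = -eps / sigma[:, None]   # score of the Gaussian perturbation kernel
    loss = ((s_theta(xt, t) - target) ** 2).sum(dim=1).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```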
Score-based models also admit unbiased estimation of the Jacobian of \(\mathbf{s}(\mathbf{x},t)\), which allows for reweighting of probability densities as in Eq. 11. Similar to normalizing flows, the network used to parameterize the score can reinforce inductive biases of the data [132, 133]. By drawing upon theories developed for analyzing stochastic processes, score-based models are an appealing inductive prior for applications of deep learning to molecular dynamics [134].
### Integration with Enhanced Sampling Frameworks
Flow models - deterministic or stochastic - do not independently solve the sampling problem. Both classes of models have been coupled with existing statistical mechanics frameworks of sampling to enhance the sampling of thermodynamic observables. The most prominent examples of this approach couple normalizing flows with free energy perturbation and replica exchange methods [119, 135, 126].
In this review, we will specifically focus on the coupling approach employed by our group, which involves post-processing replica exchange simulations to enhance free energy estimation [135]. In replica exchange simulations, the rate of exchange scales as \(1/\sqrt{N}\) for an \(N\)-particle system, which represents a crucial limitation of the method, especially for applications to large systems. Even with low replica exchange rates, diffusion models recover Boltzmann weights and provide accurate free energy estimates within the distribution of simulated temperatures. When extrapolating outside the simulated range, diffusion
models exhibit greater robustness in estimating free energies compared to other data-driven methods such as MBAR.
As of the time of writing this review, the integration of score-based models with enhanced sampling frameworks remains an active area of research. In the machine learning community, score-based models have emerged as a more robust alternative to normalizing flows which share many attractive mathematical properties. While training normalizing flows on molecular systems can be an arduous task, requiring expert knowledge of the system to construct a suitable prior [(119, 126)], score-based models have little trouble mapping highly complex dynamics to trivially simple priors. We anticipate that the robustness of score-based models, coupled with enhanced sampling frameworks, will facilitate the study of complex systems of great cross-disciplinary interest.
## 6 Discussion
Recent advances in ML augmented enhanced sampling techniques such as the ones we discussed in this Review have had a profound impact on the capabilities of MD simulations. These advances have expanded the scope of studying rare events and complex dynamical processes, leading to a deeper understanding of molecular behavior [(136, 137, 138, 139)]. Despite these advances, there are still potential research areas that warrant further exploration.
### Benchmark Applications
After the development of an enhanced sampling technique, the next crucial step is to assess its effectiveness by testing it on various systems. The methods discussed in this review so far typically examine metastable state transitions in analytical systems like the double/four-well and Müller-Brown potentials, or small molecular systems such as alanine dipeptide, chignolin, Trp-cage, the villin headpiece and others [(5)]. However, the lack of standardized benchmark systems presents challenges in accurately evaluating the performance of these methods. To address this, it would be beneficial to the enhanced sampling community to establish a curated set of test systems specifically designed to assess an ML model's robustness (e.g., to hyperparameter tuning), efficiency (training data requirements for enhanced sampling), and, finally, its ability to overcome different kinds of thermodynamic and kinetic bottlenecks. Additionally, new test systems should be regularly proposed to prevent methods from being developed solely to surpass existing benchmarks; otherwise, prior familiarity with the system can, knowingly or unknowingly, bias the assessment of the methods being developed. By adopting such standardized evaluation criteria, appropriate comparisons between new and existing methods can be made to assess their strengths and weaknesses in a transparent manner.
### Model Interpretability & Explainability
One of the key reasons behind the increased adoption of ML techniques in MD can be attributed to their ability to learn from complicated data distributions in an automated and efficient manner. However, highly expressive ML models typically come at the cost of poor interpretability, i.e., it becomes difficult to understand why an ML model makes its predictions. This loss of interpretability makes the identification of inaccurate ML models difficult, especially when the model starts memorizing training data by overfitting model
parameters. Thus, it is worthwhile to examine and assign a degree of trust to a trained ML model prior to adopting it for enhanced sampling [140].
This challenge can be addressed by designing either (i) inherently interpretable ML models for MD data or (ii) post hoc interpretation schemes for explaining the behavior of complicated ML models. Methods such as RAVE [53] take the former approach by adopting a linear encoder for enhanced sampling, while deep-LDA [83], discussed in Section 3, proposes using the modulus of the weights between the input and first layer of the ANN for interpretation. However, most of the methods discussed in this review implement non-linear transformations to achieve high expressivity, and as a result the latter approach must be adopted to make the models explainable. To this end, Mehdi _et al._ proposed Thermodynamically Explainable Representations of AI and other blackbox Paradigms (TERP), which constructs linear, interpretable surrogate models to approximate the local behavior of a complicated ML model [141]. TERP is inspired by methods that are well-established in the general domain of ML and has been designed to be suitable for MD data. The authors of this review hope to see more active investigation in the domain of interpreting ML models used for MD.
### Learning Meaningful RCs
The primary aim of dimensionality reduction techniques for enhanced sampling is the construction of informative RCs (Section 3). If the RCs are able to capture system behavior in sufficient detail, the learned ML model can even be applied to similar but different systems through transfer learning, eliminating the need for retraining [142]. Additionally, depending on the task at hand, the constructed RCs can be further improved by imposing constraints. For example, one promising approach could be to isolate different types of thermodynamic and kinetic bottlenecks along orthogonal components of the constructed RC and employ the enhanced sampling techniques that are most suitable for overcoming the respective bottlenecks. In a recent work, Beyerle _et al._ [143] employed SPIB [79] to successfully learn disentangled energy-entropy coordinates in the machine-learned RC for certain model systems. In another work [144], Wang _et al._ introduced the Dynamics Constrained Auto-encoder (Dynamics-AE), which constructs a latent space that follows a prior probability distribution based on overdamped Langevin dynamics instead of a typical Gaussian distribution, for more faithful and disentangled representations of physical systems.
### Exploiting Symmetry through Machine Learning
As discussed in Section 3, implementing dimensionality reduction techniques to learn system RCs for enhanced sampling involves the analysis of CVs such as torsion angles, pairwise distances, etc. These traditional CVs have rotational and translational invariance because they use internal coordinates, but using them forces models to rely on a hand-picked basis set. This limitation becomes evident when attempting to sample self-assembly or nucleation processes, where repeated molecules form the system of interest and traditional CVs are inadequate for an accurate description of the system. RCs describing the state of the system by averaging neighboring interactions have been developed [145, 146, 147, 9, 10, 11], but learning new basis functions has remained difficult. Ideally, these RCs could be learned from all-atom coordinates using a neural network that explicitly preserves these symmetries, such as a graph neural network (GNN). However, implementing such an approach is nontrivial, and research in this area is still in its early stages. In a very recent study [148], Huang _et al._
proposed GraphVAMPnets employing GNNs, capable of capturing local atomic information to learn an RC that preserves translational and rotational invariance. In the future, we expect to see further research in this direction, particularly focusing on enhancing the transferability of the ML models, since the treatment of symmetries and the construction of underlying graphs may vary.
### Robust Free Energy Estimation
The conformational free energy is a fundamental quantity in the molecular sciences that is often computationally intractable to estimate, due to its relationship to the partition function. In recent years, the emergence of normalizing flows has made conformational free energy estimation tractable for complex systems, although the robustness of flow-based estimates of the free energy remains a critical barrier to widespread use.
Recently, score-based models have shown greater expressiveness and robustness compared to normalizing flows, particularly when learning mappings from simple to highly complex distributions [128]. The construction of a suitable prior for normalizing flows can be challenging and typically relies on expert knowledge of the system [126, 119]. In contrast, diffusion models can reliably learn mappings from simple priors to highly complex data distributions and do not suffer from mode-seeking behavior to the same extent as normalizing flows [135, 114]. In fact, there has been a recent focus on enhancing the stability and improving the accuracy of both normalizing flows and score-based models through the integration of Monte-Carlo or importance sampling into the generative process [115, 149, 116, 117, 128]. Due to these advantages, we anticipate that score-based modeling will emerge as a more robust alternative to normalizing flows for enhanced free energy estimation, while still offering the desirable properties of normalizability and invertibility.
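For orientation, the identity these estimators target can be stated compactly: for any tractable density \(q\) with full support (e.g., the pushforward of a flow's base distribution), with \(\beta=1/k_{B}T\) and potential energy \(U\), the partition function is an importance-sampled expectation. This is the standard relation rather than a result of the works cited above:

\[F=-\beta^{-1}\ln Z,\qquad Z=\int e^{-\beta U(x)}\,dx=\mathbb{E}_{x\sim q}\left[\frac{e^{-\beta U(x)}}{q(x)}\right]\]

The variance of the resulting estimate is governed by how well \(q\) covers the regions where \(e^{-\beta U}\) is large, which is precisely where the Monte-Carlo and importance-sampling corrections discussed above help.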
## Disclosure Statement
The authors are not aware of any affiliations, memberships, funding, or financial holdings that might be perceived as affecting the objectivity of this review.
## Acknowledgments
S.M. thanks the NCT-UMD Partnership for Integrative Cancer Research for financial support. Z.S., Z.Z. and L.H. were supported by the National Institute of General Medical Sciences of the National Institutes of Health under Award Number R35GM142719. The content is solely the responsibility of the authors and does not represent the official views of the National Institutes of Health. P.T. was an Alfred P. Sloan Foundation fellow during preparation of this manuscript. We thank Deepthought2, MARCC, and XSEDE (projects CHE180007P and CHE180027P) for computational resources.
|
2303.13797 | Personalizing Task-oriented Dialog Systems via Zero-shot Generalizable
Reward Function | Task-oriented dialog systems enable users to accomplish tasks using natural
language. State-of-the-art systems respond to users in the same way regardless
of their personalities, although personalizing dialogues can lead to higher
levels of adoption and better user experiences. Building personalized dialog
systems is an important, yet challenging endeavor and only a handful of works
took on the challenge. Most existing works rely on supervised learning
approaches and require laborious and expensive labeled training data for each
user profile. Additionally, collecting and labeling data for each user profile
is virtually impossible. In this work, we propose a novel framework, P-ToD, to
personalize task-oriented dialog systems capable of adapting to a wide range of
user profiles in an unsupervised fashion using a zero-shot generalizable reward
function. P-ToD uses a pre-trained GPT-2 as a backbone model and works in three
phases. Phase one performs task-specific training. Phase two kicks off
unsupervised personalization by leveraging the proximal policy optimization
algorithm that performs policy gradients guided by the zero-shot generalizable
reward function. Our novel reward function can quantify the quality of the
generated responses even for unseen profiles. The optional final phase
fine-tunes the personalized model using a few labeled training examples. We
conduct extensive experimental analysis using the personalized bAbI dialogue
benchmark for five tasks and up to 180 diverse user profiles. The experimental
results demonstrate that P-ToD, even when it had access to zero labeled
examples, outperforms state-of-the-art supervised personalization models and
achieves competitive performance on BLEU and ROUGE metrics when compared to a
strong fully-supervised GPT-2 baseline | A. B. Siddique, M. H. Maqbool, Kshitija Taywade, Hassan Foroosh | 2023-03-24T04:33:40Z | http://arxiv.org/abs/2303.13797v1 | # Personalizing Task-oriented Dialog Systems via Zero-shot Generalizable Reward Function
###### Abstract.
Task-oriented dialog systems enable users to accomplish tasks using natural language. State-of-the-art systems respond to users in the same way regardless of their personalities, although personalizing dialogues can lead to higher levels of adoption and better user experiences. Building personalized dialog systems is an important, yet challenging endeavor and only a handful of works took on the challenge. Most existing works rely on supervised learning approaches and require laborious and expensive labeled training data for each user profile. Additionally, collecting and labeling data for each user profile is virtually impossible. In this work, we propose a novel framework, P-ToD, to personalize task-oriented dialog systems capable of adapting to a wide range of user profiles in an unsupervised fashion using a zero-shot generalizable reward function. P-ToD uses a pre-trained GPT-2 as a backbone model and works in three phases. Phase one performs task-specific training. Phase two kicks off unsupervised personalization by leveraging the proximal policy optimization algorithm that performs policy gradients guided by the zero-shot generalizable reward function. Our novel reward function can quantify the quality of the generated responses even for _unseen_ profiles. The optional final phase fine-tunes the personalized model using a few labeled training examples. We conduct extensive experimental analysis using the personalized bAbI dialogue benchmark for five tasks and up to 180 diverse user profiles. The experimental results demonstrate that P-ToD, even when it had access to _zero_ labeled examples, outperforms state-of-the-art supervised personalization models and achieves competitive performance on BLEU and ROUGE metrics when compared to a strong fully-supervised GPT-2 baseline.
Dialog Systems, Personalization, Reinforcement Learning, Zero-shot Learning

Footnote †: This work is partially supported by the National Science Foundation.
For each user profile, these works require enormous amounts of labeled training data, which are time-consuming, expensive, and nearly impossible to acquire. Recently, pre-trained language models have shown zero-shot capabilities in natural language understanding and natural language generation tasks [6, 10], which suggests the possibility of developing personalized task-oriented dialog systems without requiring labeled training data for each target user profile. However, successfully exploiting the users' profiles and synthesizing personalized responses with no (or few) labeled training examples is a demanding task.
We introduce a novel framework for building **P**ersonalized **T**ask-**o**riented **D**ialog Systems, P-ToD, that leverages pre-trained language models (LMs), zero-shot (as well as few-shot) learning, and deep reinforcement learning. Guided by the proximal policy optimization (PPO) algorithm [9, 46] and a zero-shot generalizable reward function, the proposed framework can personalize task-oriented dialog systems for diverse user profiles in an unsupervised fashion. Figure 1 presents an overview of the framework, which works in three phases and uses a pre-trained GPT-2 [39] as a backbone model. Task-specific training (e.g., for reserving a table) is performed in the first phase. Task-specific training datasets are generally available for a wide range of tasks in many domains [25, 62], whereas personalized counterparts are practically impossible to obtain. To overcome this challenge, we employ the unsupervised personalization phase. This deep reinforcement learning-based phase initializes a personalized GPT model from the task-specific GPT model (i.e., trained in phase one). Then, it trains the personalized GPT model based on (_i_) the appropriateness of the generated response for the given user profile, quantified by the zero-shot generalizable reward function; and (_ii_) the fidelity of the response to the task, measured by the KL divergence between the responses generated by the task-specific and personalized models. Using the above signals, the PPO algorithm is employed to perform policy gradients.
We also propose a new reward function that quantifies the quality of the generated personalized responses not only for previously seen user profiles, but also for newly emerging unseen profiles. The zero-shot generalizable reward function uses pre-trained sentence transformers and contrastive representation learning to score the suitability of the response for the active user profile. To the best of our knowledge, this is the _first work_ that can adapt the responses of task-oriented dialog systems to diverse user profiles in an unsupervised fashion. To further improve the performance of the personalized task-oriented dialog systems, an _optional_ few-shot fine-tuning phase is introduced. This phase uses a few labeled training examples to adjust the responses for the given user profile, and it can be employed or skipped depending on the availability of labeled training data. Moreover, the number of shots can be adjusted depending on the quantity of the available training examples.
We perform thorough experimental evaluations on the only publicly available benchmark, the personalized bAbI dialogue benchmark, for five tasks and up to 180 distinct user profiles in the restaurant domain. The experimental results show that our proposed framework outperforms state-of-the-art supervised personalization models, even when given access to zero labeled training instances (i.e., when the few-shot fine-tuning phase is skipped). We also demonstrate that the proposed personalization approach achieves competitive performance when compared to a strong supervised GPT-2 baseline model on the BLEU-4 and ROUGE-2 measures. Furthermore, a human study confirms the competitiveness of our unsupervised personalization framework against the other supervised approaches.
This work's contributions are summarized below:
* We propose an end-to-end framework for personalizing task-oriented dialog systems in an unsupervised way. To the best of our knowledge, this is the first work with unsupervised personalization capabilities.
* We introduce a zero-shot generalizable reward function that can guide the policy of the personalized task-oriented dialog systems to generate rich and personalized responses even for the unseen user profiles.
* We perform extensive experimental analysis using the personalized bAbI dialogue dataset and show that our framework consistently outperforms state-of-the-art supervised personalization models for up to 180 unique user profiles on five tasks.
Figure 1. Overview of P-ToD. The unsupervised personalization phase is at the core of the proposed framework.
## 2. Preliminaries
### Problem Formulation
In a multi-turn task-oriented dialogue, \(\mathcal{U}_{t}\) is an input from the user and \(\mathcal{S}_{t}\) is the system's response at turn \(t\). To generate a response \(\mathcal{S}_{t}\), all previous turns are concatenated to form the dialog context \(\mathcal{C}_{t}=\{\mathcal{U}_{0},\mathcal{S}_{0},\cdots,\mathcal{U}_{t-1},\mathcal{S}_{t-1}\}\) and passed to the system as input along with the user's current input \(\mathcal{U}_{t}\). In a personalized task-oriented dialog system, at turn \(t\), the goal is to synthesize a response \(\mathcal{S}_{t}^{i}\) adapted for a user profile \(\mathcal{P}^{i}\in\mathcal{P}=\{\mathcal{P}^{0},\mathcal{P}^{1},\cdots\}\). The system's response \(\mathcal{S}_{t}^{i}\) is generated by conditioning on the dialog context \(\mathcal{C}_{t}\), the user's current utterance \(\mathcal{U}_{t}\), and the profile information \(\mathcal{P}^{i}\) for user \(i\), concatenated as a single sequence.
\[\mathcal{S}_{t}^{i}=\text{P-ToD}([\mathcal{P}^{i};\mathcal{C}_{t};\mathcal{U}_{t}])\]
In traditional (i.e., supervised) personalized task-oriented dialog systems, at turn \(t\), we are given \(m\) variants of the system response adapted for each user, \(\{(\mathcal{U}_{t},\mathcal{S}_{t}^{i})\}_{i=1}^{m}\) for all \(m\) user profiles, to train the models. The major disadvantage of such an approach is the unscalable requirement of a large number of labeled training examples for each user profile; such data acquisition is expensive and time-consuming. To overcome this challenge, we assume that, at turn \(t\), the profile-specific response \(\mathcal{S}_{t}^{i}\,\forall i\) is not available for the model's supervision (i.e., unsupervised personalization). To handle an unbounded number of user profiles, we assume that the user profile is described via natural language text, in contrast to previous works that encode the features of the user profile via one-hot encoding and thereby limit the model's expansion to new profile features. Naturally, describing user profiles using natural language also covers the case where only partial information about a user profile is available. Moreover, since some tasks require interaction with a knowledge base, we define the knowledge base tuples as \(K=[k_{1},k_{2},\cdots,k_{\ell}]\), where each tuple \(k_{b}\) is expressed in natural language and passed as additional input to the model where needed.
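A minimal sketch of how such a conditioning sequence might be assembled for the generator, assuming a Hugging Face tokenizer; the separator strings and field labels are illustrative conventions, not the paper's exact serialization.

```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

def build_input(profile, context_turns, user_utterance, kb_tuples=()):
    """Serialize [profile; context; KB tuples; current utterance] into one
    token sequence, i.e., the input [P^i; C_t; U_t] (plus K where needed)."""
    parts = [f"profile: {profile}"]
    parts += [f"{speaker}: {text}" for speaker, text in context_turns]
    parts += [f"kb: {t}" for t in kb_tuples]
    parts.append(f"user: {user_utterance} system:")
    return tokenizer(" <sep> ".join(parts), return_tensors="pt").input_ids

ids = build_input(
    profile="female, elderly, non-vegetarian",
    context_turns=[("user", "hi"), ("system", "hello, what can i help you with?")],
    user_utterance="book a table for four in madrid",
)
```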
### Pre-trained Language Models
Language models (e.g., GPT-2 (Srivastava et al., 2017), BERT (Devlin et al., 2017)) are trained on massive amounts of text data in an unsupervised way. Since these models have millions of parameters, they can effectively capture both general semantic and syntactic information. In this work, we utilize the pre-trained GPT-2 and MPNet (Zhu et al., 2019) models. We use GPT-2 as a base model, perform task-specific training, and then further train the model to synthesize personalized responses in an unsupervised way, guided by the novel reward function. The GPT-2 model has achieved state-of-the-art performance on many natural language generation benchmarks, including conversational question answering (Zhu et al., 2019), text summarization (Zhu et al., 2019), and machine translation (Zhu et al., 2019), among others.
We train a zero-shot generalizable reward function to score the acceptability of the generated responses for the given user profile using a contrastive loss function. The novel reward function uses pre-trained MPNet (Zhu et al., 2019) as a basic building block to acquire semantically accurate embeddings. The MPNet model has produced cutting-edge results on several natural language processing tasks including GLUE (Zhu et al., 2019), SQuAD (Quad et al., 2019; Quad et al., 2019), RACE (Zhu et al., 2019), and sentiment prediction (Zhu et al., 2019) benchmarks. In the following, we provide a brief overview of the GPT-2 and MPNet models.
**GPT-2.** The GPT-2 model is pre-trained for autoregressive generation (i.e., predicting the next word) on the WebText dataset (i.e., 40 GB of text) and adopts a transformer-based neural architecture (Zhu et al., 2019). Suppose we have a natural language sequence \((s_{1},\cdots,s_{n})\), where each symbol \(s_{i}\) is drawn from a fixed set of symbols. The sequential ordering of language leads to factorizing the joint probability over symbols as a product of conditional probabilities (Bahdan et al., 2016), as given below.
\[p(s)=\prod_{i=1}^{n}p(s_{i}|s_{1},\cdots,s_{i-1})\]
Using this approach, it is possible to estimate \(p(s)\) and any conditionals of the form \(p(s_{i-k},\cdots,s_{i}|s_{1},\cdots,s_{i-k-1})\), and perform tractable sampling.
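As a brief illustration of this factorization, the snippet below samples a continuation token by token from an off-the-shelf GPT-2 through the Hugging Face transformers API; the prompt and decoding settings are illustrative only.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Autoregressive sampling: each new token is drawn from
# p(s_i | s_1, ..., s_{i-1}) until the requested length is reached.
ids = tokenizer("i would like to book a table for", return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=10, do_sample=True, top_k=50)
print(tokenizer.decode(out[0]))
```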
**MPNet.** BERT does not account for interdependence among predicted tokens, whereas XLNet (Zhu et al., 2019) considers dependencies among predicted tokens but does not use full position information. The MPNet model exploits the benefits of masked language modeling (MLM) (i.e., employed by BERT) and permuted language modeling (PLM) (i.e., used by XLNet) and eliminates their shortcomings. It brings out the best of both worlds: through PLM, it exploits dependencies among predicted tokens, and at the same time it uses the full position information of a sentence from MLM to enable a full view of the sentence. It has been pre-trained on BooksCorpus (Zhu et al., 2019), OpenWebText, CC-News, Stories (Zhu et al., 2019), and Wikipedia (i.e., over 160 GB of data). For a given sequence \((s_{1},\cdots,s_{n})\), let \(\mathcal{Z}_{n}\) denote the set of permutations of \(\{1,\cdots,n\}\), \(z_{t}\) the \(t\)-th element of a permutation \(z\), \(z_{<t}\) its first \(t-1\) elements, \(c\) the number of non-predicted tokens, and \(M_{z_{>c}}\) the mask tokens [M] in positions \(z_{>c}\). MPNet is trained with the following objective:
\[\mathbb{E}_{z\in\mathcal{Z}_{n}}\sum_{t=c+1}^{n}\log p(s_{z_{t}}|s_{z_{<t}},M_{z_{>c}};\theta)\]
### Reinforcement Learning Paradigm
The reinforcement learning paradigm has been extensively studied for unsupervised learning. Methods that use policy gradients compute an estimator of the gradient and then plug it into a stochastic gradient ascent algorithm. It is common to optimize the policy \(\pi\) by maximizing the expected reward \(r\in\mathbb{R}\) for the generated sequence \(\mathcal{J}=(y_{1},\cdots,y_{n})\) with length \(n\), given the input sequence \(\mathcal{X}=(x_{1},\cdots,x_{m})\) with length \(m\), that is sampled from data distribution \(\mathcal{D}\). We can optimize the expected reward as follows:
\[\mathbb{E}_{\pi}[r]=\mathbb{E}_{x\sim\mathcal{D},y\sim\pi(\cdot|x)}\left[r(x,y)\right]\]
The PPO algorithm introduced a clipped surrogate objective in addition to a penalty on the KL divergence. The objective function is modified with a KL divergence penalty, instead of imposing it as a hard constraint as in trust region policy optimization algorithms (Zhu et al., 2019). PPO updates its policy at step \(k\) via:
\[\theta_{k+1}=\arg\max_{\theta}\,\mathbb{E}_{s,a\sim\pi_{\theta_{k}}}\left[\mathcal{L}(s,a,\theta_{k},\theta)\right]\]
where \(s\) and \(a\) represent the state and action, respectively. In this work, we employ the PPO algorithm (Dai et al., 2016) to perform policy gradients; it has been shown to be scalable (e.g., to large language models), data-efficient, and robust (i.e., not requiring excessive hyperparameter tuning) (Bahdan et al., 2016).
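For reference, a minimal sketch of the clipped surrogate loss \(\mathcal{L}\) at the core of PPO; the clip range of 0.2 is the commonly used default, not a setting reported in this paper.

```python
import torch

def ppo_clipped_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    """Clipped surrogate objective: take the pessimistic minimum of the
    unclipped and clipped policy-ratio terms, averaged over samples."""
    ratio = torch.exp(logp_new - logp_old)        # pi_theta / pi_theta_k
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()  # negated: we minimize
```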
## 3. Personalization Framework: P-ToD
This work presents a new framework for developing personalized dialog systems that works in three phases. A pre-trained GPT-2 model serves as the backbone model for the framework. In the first phase, the base GPT-2 model is optimized via task-specific training. Phase two, referred to as the unsupervised personalization phase, employs deep reinforcement learning to adapt the system responses to a wide range of user profiles, guided by the zero-shot generalizable reward function (i.e., presented in Figure 2) and the trained task-specific GPT model. The _optional_ phase three fine-tunes the personalized GPT model using a few supervised training examples to further improve performance. Figure 1 summarizes the proposed unsupervised personalization framework.
### Phase One: Task-specific Training
We leverage the power of pre-trained language models by initializing phase one of our framework with a pre-trained GPT-2 model. The details of the pre-trained model are as follows. The model (Zhou et al., 2017) was pre-trained on the WebText dataset and has 774 million parameters. Using byte pair encoding, the vocabulary size is 50,257 tokens; capitalization and punctuation were preserved (Zhou et al., 2017). The model is built on the transformer's decoder stack (Zhu et al., 2017), with 36 layers, 20 heads, and an embedding size of 1280. The task-specific training of the model is performed using causal language modeling (see Section 2.2 for details). Figure 3 presents the task-specific training of the model. Given a dialog context \(\mathcal{C}_{t}\), the user's current utterance \(\mathcal{U}_{t}\), and (optional) knowledge base search result tuples \(K\) at turn \(t\), the probability of the system's response \(\mathcal{S}_{t}\) with length \(n\) can be defined as:
\[p(\mathcal{S}_{t}|\mathcal{C}_{t},\mathcal{U}_{t},K)=\prod_{i=1}^{n}p(s_{i}|s_{<i},\mathcal{C}_{t},\mathcal{U}_{t},K)\]
We train the model with a cross-entropy loss, maximizing the log-likelihood of the system response conditioned on the dialog context, the user's input, and the knowledge base tuples. If the task does not require interaction with the knowledge base, neither is the search query performed nor is the generation conditioned on the resultant tuples. The output of phase one is the trained task-specific GPT model.
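A minimal sketch of the phase-one loss, assuming inputs serialized as in the earlier sketch and assuming the loss is masked to response tokens only (a common convention; the paper does not state its masking choice).

```python
import torch
import torch.nn.functional as F
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")

def task_specific_loss(input_ids, response_start):
    """Cross-entropy over response tokens, conditioned on all prior tokens.

    input_ids: (batch, seq_len) serialized [C_t; U_t; K; S_t] sequence.
    response_start: index where the system response S_t begins."""
    logits = model(input_ids).logits[:, :-1, :]  # position t predicts token t+1
    targets = input_ids[:, 1:].clone()
    targets[:, : response_start - 1] = -100      # ignore context positions
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
        ignore_index=-100,
    )
```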
### Phase Two: Unsupervised Personalization
This phase initializes the personalized GPT model with the trained task-specific GPT model (i.e., the output of phase one). The personalized GPT model is then trained for personalization in an unsupervised way. The two critical training signals are provided by (_i_) the zero-shot generalizable reward function, which quantifies whether the output of the personalized model is appropriate for the given user profile; and (_ii_) the KL divergence between the personalized and task-specific models' distributions, which ensures that the output of the personalized model does not deviate too much from the task-specific model (i.e., it still accomplishes the task with high accuracy).
In the following, we describe the details of the novel reward function and KL divergence. Then, we detail the training process for the unsupervised personalization phase.
Figure 2. Overview of the training and inference process for the zero-shot generalizable reward function.

Figure 3. The task-specific training of the GPT-2 model.

**Zero-shot Generalizable Reward Function.** The zero-shot generalization is enabled by the unsupervised representations provided by the powerful pre-trained language model MPNet and the contrastive loss function (Zhu et al., 2017). The training and inference process of the reward function is shown in Figure 2. At a dialog turn \(t\), we concatenate the dialog context \(\mathcal{C}_{t}\), the user's current input \(\mathcal{U}_{t}\), the (optional) knowledge base search result tuples \(K\), and the system's response \(\mathcal{S}_{t}^{i}\) for user \(i\), and acquire their representation \(\mathcal{H}_{t}^{i}\). Similarly, we encode the user profile information \(\mathcal{P}^{j}\) for user \(j\) to get a corresponding representation \(\mathcal{U}^{j}\). If a pair of encodings has a positive label (i.e., the system response is appropriate for the given user profile), the contrastive loss function reduces their distance; given a negative label, it increases their distance. We generate positive training examples by setting \(i=j\), and negative examples by setting \(i\neq j\). The training loss can be defined as:
\[\mathcal{L}_{i,j}=-\log\frac{\exp\left(\mathcal{H}_{t}^{i}\cdot\mathcal{U}^{j}/\tau\right)}{\sum\limits_{q\in Q}\exp\left(\mathcal{H}_{t}^{i}\cdot\mathcal{U}^{q}/\tau\right)}\]
where \(\cdot\) represents the scoring function, \(\tau\in\mathbb{R}^{+}\) is a scalar temperature parameter, and \(Q\) is the set of negative pairs, i.e., \(i\neq j\). To train a classifier that works in the zero-shot setting, we select a subset of user profiles (i.e., _seen_ profiles) and use them to train the classifier. The pre-trained MPNet has the capability to generate rich, accurate, and high-quality embeddings even for _unseen_ user profiles or unseen knowledge base entries, since both the user profile and the knowledge base tuples are described using natural language. For example, the model can produce precise embeddings for an unseen user profile that prefers "kosher" food, because it has already learned the contextual usage of a large number of words (e.g., MPNet has a vocabulary size of 30,527) in the pre-training process. The scoring function learns to score matching pairs (i.e., where the system response is appropriate for the given profile) close to one, and non-matching pairs close to zero.
Our zero-shot generalizable reward function follows Sentence-BERT [43], which employs siamese and triplet network structures [44] and leverages a contrastive loss; the dot product is used as the scoring function. To generate input encodings, we use the pre-trained all-mpnet-base-v2 model, which has been trained on over one billion training pairs and produces 768-dimensional normalized embeddings for the input by mean pooling. For every positive training pair, two negative training examples are generated. At inference time, the trained zero-shot generalizable reward function provides a scalar reward \(r\in[0,1]\) that quantifies the suitability of the system's responses for both previously seen and newly emerging unseen user profiles.
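A minimal sketch of the reward function's training objective, assuming MPNet mean-pooled embeddings via the Hugging Face checkpoint and in-batch negatives; the batch construction and temperature value are illustrative assumptions rather than the paper's exact recipe (which pairs each positive with two negatives).

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("sentence-transformers/all-mpnet-base-v2")
enc = AutoModel.from_pretrained("sentence-transformers/all-mpnet-base-v2")

def embed(texts):
    """Mean-pool MPNet token states into normalized sentence embeddings."""
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = enc(**batch).last_hidden_state          # (B, T, 768)
    mask = batch["attention_mask"].unsqueeze(-1)
    pooled = (hidden * mask).sum(1) / mask.sum(1)
    return F.normalize(pooled, dim=-1)

def reward_contrastive_loss(dialog_texts, profile_texts, tau=0.05):
    """InfoNCE-style loss: dialog i should match profile i (positive, i = j)
    and mismatch the other profiles in the batch (negatives, i != j)."""
    scores = embed(dialog_texts) @ embed(profile_texts).T / tau
    labels = torch.arange(len(dialog_texts))
    return F.cross_entropy(scores, labels)

def reward(dialog_text, profile_text):
    """Inference-time scalar reward in [0, 1] for a response/profile pair."""
    score = (embed([dialog_text]) @ embed([profile_text]).T).item()
    return (score + 1) / 2  # map cosine similarity to [0, 1]
```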
**KL Divergence.** To ensure that the personalized policy does not diverge too much from the trained task-specific model, we use an additional reward signal: the KL divergence between the personalized policy and the task-specific policy (i.e., the model trained in phase one). That is, staying close to the task-specific model is rewarded, whereas large KL divergences are penalized. We denote the distributions of the task-specific and personalized models by \(p_{1}\) and \(p_{2}\), respectively. At dialog turn \(t\), the KL divergence can be calculated as:
\[KL=\mathbb{E}_{\mathcal{S}_{t}^{i}\sim p_{2}}\left[\log p_{2}(\mathcal{S}_{t}^{i}|\mathcal{P}^{i},\mathcal{C}_{t},\mathcal{U}_{t},K)-\log p_{1}(\mathcal{S}_{t}^{i}|\mathcal{C}_{t},\mathcal{U}_{t},K)\right]\]
where \(\mathcal{S}_{t}^{i}\) is the system's response adapted for user \(i\), sampled from the personalized model and evaluated under both models. The final \(reward\) combines both signals as given below:
\[reward=r+\beta\times KL\]
where \(\beta\in[-1,0]\) is the penalty coefficient that decides the weight of the KL divergence. We use an adaptive KL penalty coefficient and initialize \(\beta=-0.2\) in our experiments.
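A minimal sketch of the combined reward and an adaptive penalty coefficient; the target-KL update rule follows the adaptive-KL scheme of the original PPO paper and is an assumption about the exact schedule used here.

```python
def combined_reward(r, kl, beta):
    """Final scalar reward: profile-fit score plus weighted KL penalty.
    beta is negative, so divergence from the task model is penalized."""
    return r + beta * kl

def adapt_beta(beta, observed_kl, target_kl=0.1):
    """Adaptive coefficient (PPO-style): strengthen the penalty when the
    policy drifts too far, relax it when the policy stays close."""
    if observed_kl > 1.5 * target_kl:
        beta *= 2.0           # more negative => stronger penalty
    elif observed_kl < target_kl / 1.5:
        beta /= 2.0
    return max(beta, -1.0)    # keep beta within [-1, 0]
```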
**Training Details.** To start the unsupervised personalization phase, we initialize our personalized model as \(p_{2}=p_{1}\) and then adapt \(p_{2}\) to synthesize personalized responses for a wide range of user profiles using deep reinforcement learning. The personalized model is fine-tuned via the PPO algorithm from [9] with the final \(reward\) (i.e., a combination of the KL divergence and the score from the zero-shot generalizable reward function). The expected reward for a response \(\mathcal{S}_{t}^{i}\) for user \(i\) at dialog turn \(t\) can be written as:
\[\mathbb{E}_{p_{2}}[reward]=\mathbb{E}_{\mathcal{U}_{t}\sim\omega,\,\mathcal{S}_{t}^{i}\sim p_{2}(\cdot|\mathcal{P}^{i},\mathcal{C}_{t},\mathcal{U}_{t},K)}[reward(\mathcal{P}^{i},\mathcal{S}_{t}^{i})]\]
where \(\omega\) represents the given task that the model \(p_{2}\) is being trained for. The personalized model is trained for up to 600,000 episodes using the Adam optimizer [19] with a learning rate of \(1.41\times 10^{-5}\).
The output of this phase is a personalized model that can generate responses that are not only specific to the task, but are also adapted for the given user profile. It is important to recall that the unsupervised personalization phase does not use any personalized variants of the responses for training the model. It is exclusively trained in the unsupervised setting, guided by the zero-shot generalizable reward function and KL divergence between the distributions of the task-specific and personalized models.
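Putting the pieces together, a high-level sketch of one phase-two update; `build_input`, `tokenizer`, `reward`, `combined_reward`, and `ppo_clipped_loss` refer to the earlier sketches, the sequence-level KL is estimated from token log-probabilities of a single sample, and the full PPO machinery (value function, advantage estimation, minibatch epochs) is omitted for brevity.

```python
import torch

def token_logprobs(model, ids, start):
    """Log-probabilities a causal LM assigns to the tokens ids[:, start:]."""
    logits = model(ids).logits[:, :-1, :].log_softmax(-1)
    picked = logits.gather(-1, ids[:, 1:, None]).squeeze(-1)
    return picked[:, start - 1:]  # positions that predict the response

def phase_two_step(task_model, pers_model, optimizer, example):
    """One simplified unsupervised-personalization update for one example."""
    prompt = build_input(example["profile"], example["context"],
                         example["utterance"])
    gen = pers_model.generate(prompt, do_sample=True, max_new_tokens=40)

    logp_pers = token_logprobs(pers_model, gen, prompt.size(1))      # log p2
    with torch.no_grad():
        logp_task = token_logprobs(task_model, gen, prompt.size(1))  # log p1

    kl = (logp_pers - logp_task).sum().item()      # sequence-level KL estimate
    r = reward(tokenizer.decode(gen[0]), example["profile"])  # profile fit
    advantage = torch.tensor(combined_reward(r, kl, beta=-0.2))

    loss = ppo_clipped_loss(logp_pers.sum(),
                            logp_pers.sum().detach(), advantage)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
```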
Figure 4. Phase two of the framework: Unsupervised Personalization.
### Phase Three: Few-shot Fine-tuning
The optional phase three uses a few labeled training examples to calibrate the personalized model (i.e., trained in phase two in the unsupervised setting) for the given user profile in a supervised setting. The probability of the system's response \(\mathcal{S}_{t}^{j}\) with length \(n\), for a given user \(j\), at dialog turn \(t\) can be defined as:
\[p(\mathcal{S}_{t}^{j}|\mathcal{P}^{j},\mathcal{C}_{t},\mathcal{U}_{t},K)=\prod_{i=1}^{n}p(s_{i}|s_{<i},\mathcal{P}^{j},\mathcal{C}_{t},\mathcal{U}_{t},K)\]
We call this phase _optional_, since it can be employed or skipped based on the availability of the labeled variants for the given user profile. Moreover, the number of shots can also be adjusted depending on the quantity of the available training examples. In our experiments, we present results with the following number of shots: 0 (i.e., we skip this phase), 1, 5, 10, and 20.
## 4. Experimental Setup
In this section, we describe the task-specific and personalization datasets, methodology of evaluation, competing methods, and the implementation details of our framework P-ToD.
### Datasets
We use the task-specific bAbI dialogue dataset (Beng et al., 2017) to train our model in phase one. Its personalized counterpart, the personalized bAbI dialogue dataset (Kang et al., 2017), is used to train all the supervised competing models; our proposed framework adapts to diverse user profiles in the unsupervised setting. To the best of our knowledge, personalized bAbI dialogue is the _only_ publicly available personalization benchmark for task-oriented dialog systems. Table 1 presents important statistics for both datasets. Both datasets are in the restaurant domain and consist of five tasks.
**Task 1: Issue API calls.** This task involves extracting the values of all required slots (i.e., values for query parameters, e.g., cuisine = spanish) from natural language utterances and successfully making an API call. Here, personalization involves understanding and adapting to the linguistic variations associated with a given user profile (e.g., male vs. female).
**Task 2: Update API calls.** This task involves updating the values of certain slots if the user wishes to do so. For example, a user's natural language request, "Instead could it be in a cheap price range in Madrid", should update the current API call, api_call(cuisine=french, city=paris, party_size=four, price_range=expensive), to api_call(cuisine=french, city=madrid, party_size=four, price_range=cheap). As in task one, personalization in task two mainly deals with style adaptation.
**Task 3: Display Options.** This task requires displaying relevant options from the knowledge base using the search results of the API call. The personalization task involves adapting the linguistic style as well as understanding the user's tastes and the restaurants' specialities, among other things, and making appropriate suggestions based on the active user's profile. Unsupervised personalization for this task is the most challenging part of this work.
**Task 4: Provide extra information.** The user's acceptance of an option entails asking the system for extra information (e.g., phone_number). Personalization for task four calls for resolving ambiguities efficiently, along with style adaptation. For example, asking for contact information could refer to phone_number or social_media depending on the active user (e.g., elderly vs. young).
**Task 5: Conduct Full dialogs.** This task is about conducting a full dialogue that covers tasks 1-4 successfully. The corresponding personalization task includes, but is not limited to: (_i_) adjusting the conversation flow to the active user's personality, (_ii_) adapting the linguistic style, and (_iii_) dealing with nuances effectively.
The personalized bAbI dialogue dataset contains two test sets: a standard test set and an out-of-vocabulary (OOV) test set. We conduct extensive experiments on both test sets for all five tasks and for up to 180 diverse user profiles.
### Evaluation Methodology
To demonstrate the effectiveness of P-ToD, we evaluate our framework and all the competing methods for (_i_) task completion and (_ii_) personalization of the dialog for the given user profile.
**Task Completion.** To quantify performance on task completion, we compute F1 scores and present evaluation results for all models on all five tasks.
**Personalization.** The main task for the proposed framework is to personalize task-oriented dialog systems in an unsupervised way. To evaluate the efficacy of the framework and how it compares to the other, supervised approaches, we use BLEU-4 and ROUGE-2 scores. The BLEU (Peters et al., 2017) and ROUGE (Kang et al., 2017) metrics have been used extensively for natural language generation tasks, and BLEU scores correlate strongly with human judgment. The BLEU-n (n \(\in\{1,2,3,4\}\)) score \(\in[0,100]\) measures the proportion of n-grams in the generation that also occur in the reference. ROUGE, on the other hand, is a recall-based measure that quantifies n-gram overlap between the generation and the reference. Moreover, we also conduct a user study on 300 randomly selected responses generated by the top-performing supervised models and our proposed unsupervised personalization framework.
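For concreteness, a small sketch of how BLEU-4 and ROUGE-2 can be computed for a generated response against a reference, assuming the nltk and rouge-score packages; the paper does not specify its exact scoring implementation, and the example strings are illustrative.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer

reference = "here is the phone number of the restaurant sir"
generated = "here is the restaurant phone number sir"

# BLEU-4: geometric mean of 1- to 4-gram precisions with a brevity
# penalty; smoothing avoids zero scores on short dialog responses.
bleu4 = sentence_bleu([reference.split()], generated.split(),
                      weights=(0.25, 0.25, 0.25, 0.25),
                      smoothing_function=SmoothingFunction().method1)

# ROUGE-2: recall-oriented bigram overlap (F-measure reported here).
scorer = rouge_scorer.RougeScorer(["rouge2"])
rouge2 = scorer.score(reference, generated)["rouge2"].fmeasure

print(f"BLEU-4: {100 * bleu4:.2f}  ROUGE-2: {100 * rouge2:.2f}")
```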
Table 1. Dataset statistics.

| Dataset | | Task 1 | Task 2 | Task 3 | Task 4 | Task 5 |
| --- | --- | --- | --- | --- | --- | --- |
| bAbI dialogue | Number of dialogs | 4000 | 4000 | 4000 | 4000 | 4000 |
| | Avg. dialog turns | 6.0 | 9.5 | 9.9 | 3.5 | 18.4 |
| Personalized bAbI dialogue | Number of dialogs | 24000 | 24000 | 48000 | 24000 | 48000 |
| | Avg. dialog turns | 6.0 | 9.5 | 11.8 | 3.5 | 20.3 |
| | Number of user profiles | 6 | 6 | 180 | 6 | 180 |
| | Avg. dialogs per profile | 4000 | 4000 | 267 | 4000 | 267 |
### Competing Methods
We compare against the following state-of-the-art (SOTA) personalization models and GPT-2-based strong baselines:
**MemNN (Nang et al., 2018):** This response selection-based approach uses a memory network to encode dialog content and user profile information, either by concatenating the profile information with the dialog memory (i.e., MemNN-org) or by using a split memory for the profile information and concatenating hidden states (i.e., MemNN-split).

**PMemN2N (Wang et al., 2018):** This memory network-based method facilitates the model's personalization by combining the style information of the user attributes in the encoder.

**Mem2Seq (Wang et al., 2018):** An end-to-end approach that uses a memory network in the encoder and employs an RNN-based decoder for query generation together with a memory network for personalized response generation. This work proposes three variants of the model, called Mem2Seq-org, Mem2Seq-split, and Mem2Seq-att.

**GLMP (Wang et al., 2018):** Based on Mem2Seq, this model includes local and global encoders to share external knowledge efficiently.

**CoMemNN (Wang et al., 2018):** This work proposes a cooperative memory network and assumes that only partial user profile information is available. The approach does not generate responses but instead relies on response selection. In our experiments, we provided the model with 100% of the user profile information for a fair comparison.

**Supervised GPT:** Since none of the SOTA personalization models follows a transformer architecture, we also trained a supervised GPT-2 model. This model was trained in the same fashion as our phase three, except that it was trained on all the training examples in the dataset; it thus serves as a strong supervised baseline.

**Few-shot GPT:** Since no unsupervised approach is available for comparison and designing a reward function is non-trivial, we also trained a few-shot GPT-2 model. This model follows the same training process, except that phase two (i.e., unsupervised personalization) is skipped, to demonstrate the effectiveness of phase two of the proposed framework.
### Implementation Details
We use the pre-trained GPT-2 model as the backbone model, which is trained in all three phases of the framework. Phase one trains the task-specific model for 3 epochs using a cross-entropy loss and the Adam optimizer, with a batch size of 8 and a learning rate of \(5\times 10^{-5}\). Other parameters are as follows: warmup_steps=100, weight_decay=0.01, max_length=1024. The zero-shot generalizable reward function uses a pre-trained MPNet for input encoding. It is trained for 3 epochs using a contrastive loss on 50% of the user profiles for every task, and the remaining 50% of the profiles are considered unseen. Phase two uses the same parameters as phase one, except that a batch size of 4 was used because of GPU memory limitations (and a learning rate of \(1.41\times 10^{-5}\)). Similarly, phase three uses the same parameters, except that a smaller learning rate of \(5\times 10^{-7}\) was used and up to 20 training examples were made available for training. We present two variants of our model: (_i_) PToD-0, which skips phase three (i.e., the personalized model is trained only in the unsupervised setting), and (_ii_) P-ToD, which uses 20 training examples in phase three.
## 5. Results
In this section, we present quantitative as well as qualitative analysis. We first present results on the task completion and then demonstrate that our proposed framework consistently outperforms SOTA supervised personalization models for the personalization task.
### Quantitative Analysis
**Task Completion.** Although the core task in this work is personalization, the personalized models should not compromise the accuracy of task completion while adapting their behavior to the users' profiles. With this in mind, we report the results for task completion in Table 2, which presents F1 scores for all five tasks for all the competing models. In terms of task completion, all the models show competitive performance except MemNN-split. The main reason all the models perform well on task completion is that the user never drops out of the conversation, even if the system keeps providing unwanted recommendations or never adapts its linguistic style to the user. Since the system eventually completes the task (i.e., the user is more patient than would be the case in the real world), the F1 score is high for all the competing models. Though the margin is not big, the best models are supervised-GPT and P-ToD (i.e., this work). For example, on tasks one and three, the proposed P-ToD performs the best, and on the remaining three tasks, supervised-GPT shows the best performance.

Table 2. F1 scores for task completion.

| Approach | Models | Task 1 | Task 2 | Task 3 | Task 4 | Task 5 |
| --- | --- | --- | --- | --- | --- | --- |
| Supervised | MemNN-org | 99.63 | 99.81 | 98.87 | 98.87 | 85.10 |
| | MemNN-split | 85.66 | 85.83 | 84.89 | 84.89 | 87.28 |
| | PMemN2N | 99.70 | 99.93 | 98.91 | 98.97 | 95.33 |
| | Mem2Seq-org | 99.68 | 99.68 | 98.28 | 99.68 | 80.41 |
| | Mem2Seq-split | 99.62 | 99.62 | 98.52 | 99.62 | 82.19 |
| | Mem2Seq-att | 99.65 | 99.66 | 98.46 | 99.66 | 82.38 |
| | GLMP | 99.45 | 99.45 | 98.48 | 99.45 | 86.20 |
| | CoMemNN | 99.65 | 99.65 | 98.61 | 99.65 | 98.13 |
| | Supervised-GPT | 99.72 | **99.96** | 99.02 | **99.96** | **98.21** |
| Unsupervised Personalization | PToD-0 (This work) | 99.69 | 99.86 | 98.92 | 99.88 | 98.14 |
| Few-shot Personalization | Few-shot GPT | 98.12 | 99.08 | 97.71 | 97.32 | 91.23 |
| | P-ToD (This work) | **99.74** | 99.94 | **99.03** | 99.94 | 98.17 |
It is critical to emphasize that the proposed P-ToD was trained using only 20 labeled training examples in phase three, whereas supervised-GPT was trained on the complete training set. Moreover, we observe that the PToD-0 variant (i.e., not trained in phase three at all) has performance comparable to the SOTA personalization models. Last but not least, the few-shot GPT baseline (which skipped phase-two training and used only 20 training examples in phase three) performs poorly on task five compared to the other models.
**Personalization.** Table 3 presents BLEU-4 and ROUGE-2 scores for all the competing models on all five tasks. On every task, the proposed P-ToD either achieves the best performance or differs insignificantly from the supervised-GPT baseline. Excluding the supervised-GPT model, the proposed P-ToD outperforms all other SOTA response generation methods by at least 19.95% on the BLEU-4 metric and 9.74% on the ROUGE-2 metric. Similarly, the other variant, PToD-0, which was not trained on any labeled training examples, still outperforms all the competing models, including CoMemNN (a response selection model), on the BLEU score. Since CoMemNN does not generate responses, it has an inherent advantage in attaining better BLEU and ROUGE scores than the response generation approaches. Moreover, the few-shot GPT baseline shows the worst performance, since it was trained with only 20 labeled examples in phase three and phase two (i.e., unsupervised personalization) was skipped. The poor performance of the few-shot GPT baseline highlights the critical role of phase two.
Figure 5 presents the performance of the proposed personalization framework when provided with different numbers of training examples in phase three. Generally, as the number of training examples increases, the performance improves, which highlights the value of supervision. However, performance plateaus beyond 20 examples, which is roughly the point at which P-ToD matches the supervised-GPT model trained on the full training set.
The unsupervised personalization phase is at the core of the proposed framework, so we examine it in more detail in Figure 6. Since the five tasks vary in difficulty, we plot the mean reward of the models for each task as training progresses in phase two. The general trend is that the mean reward starts at 0 (e.g., at episode 0), which is expected because responses at the beginning of this phase are not yet tailored to the given user profile. Then, depending on the difficulty of the task, the respective models approach 1.0 (e.g., after 100,000 episodes). Task five (i.e., conducting a full personalized dialog) is the most challenging, and its mean-reward curve reflects this throughout training. Similarly, for tasks that involve only adapting linguistic style (e.g., task two), the models reach a high mean reward more quickly than for tasks that require meaningful recommendations or nuance resolution (e.g., task three).
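For readers who want to reproduce a curve like Figure 6, the logging itself is a simple running mean over episode rewards. The sketch below is a minimal, self-contained stand-in: `step_fn` and the toy reward are placeholders, and the paper's actual zero-shot generalizable reward function is not reproduced here.

```python
import numpy as np

def mean_reward_curve(step_fn, n_episodes=100_000, log_every=1_000):
    """Log the running mean reward, as plotted in Figure 6.

    step_fn(episode) runs one personalization episode and returns a
    scalar reward in [0, 1]; it is a hypothetical stand-in for the
    paper's reward function and policy update.
    """
    rewards, curve = [], []
    for ep in range(n_episodes):
        rewards.append(step_fn(ep))
        if (ep + 1) % log_every == 0:
            curve.append(float(np.mean(rewards[-log_every:])))
    return curve

# Demo with a toy reward that improves over training (placeholder only).
rng = np.random.default_rng(0)
toy = lambda ep: min(1.0, ep / 5_000) * rng.random()
curve = mean_reward_curve(toy, n_episodes=10_000)
```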
| Approach | Models | Task 1 BLEU | Task 1 ROUGE | Task 2 BLEU | Task 2 ROUGE | Task 3 BLEU | Task 3 ROUGE | Task 4 BLEU | Task 4 ROUGE | Task 5 BLEU | Task 5 ROUGE |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Supervised | Mem2Seq-org | 60.12 | 64.82 | 65.54 | 69.83 | 57.74 | 62.73 | 59.07 | 63.32 | 64.23 | 59.39 |
| | Mem2Seq-split | 60.30 | 63.82 | 64.92 | 68.60 | 58.07 | 62.43 | 59.20 | 63.03 | 64.11 | 58.73 |
| | Mem2Seq-att | 62.26 | 71.17 | 67.15 | 75.84 | 59.84 | 69.59 | 61.29 | 69.74 | 66.02 | 66.17 |
| | GLMP | 61.25 | 70.81 | 66.40 | 75.46 | 59.07 | 68.93 | 59.66 | 70.13 | 64.91 | 65.74 |
| | CoMemNN | 68.67 | 77.71 | 73.83 | 82.67 | 65.77 | 75.72 | 67.58 | 76.85 | 72.23 | 72.53 |
| | Supervised-GPT | **75.71** | 78.42 | **80.61** | **83.38** | **73.21** | **76.46** | **74.64** | 77.11 | **80.01** | 73.61 |
| Unsupervised | PToD-0 (This work) | 70.84 | 75.02 | 75.75 | 79.85 | 68.44 | 72.93 | 69.72 | 73.69 | 75.12 | 70.21 |
| Few-shot | Few-shot GPT | 40.21 | 46.71 | 33.17 | 39.32 | 27.17 | 22.78 | 39.20 | 33.25 | 24.12 | 29.31 |
| | P-ToD (This work) | 75.64 | **78.46** | 80.55 | 83.29 | **73.24** | 76.37 | 74.52 | **77.13** | 79.92 | **73.65** |

Table 3. BLEU and ROUGE scores for personalization for all five tasks.
Figure 5. Performance of P-ToD for different numbers of shots on all five tasks.
Figure 6. Mean reward across unsupervised personalization phase for all five tasks.
### Qualitative Analysis
In this experiment, we randomly selected 300 responses generated by supervised-GPT (the best model among the supervised competitors), PToD-0 (zero labeled training examples), and P-ToD (20 labeled training examples), along with the reference responses, and asked human annotators to rate them for fluency and appropriateness for the given user profile (1 to 5, 5 being the best). We also asked the annotators to rank the responses by how well they are personalized to the given user profile. Each response was rated by three annotators. Table 4 presents the average scores for fluency and appropriateness, as well as the average rank among the responses. All the models (including the reference) achieve high scores on fluency and appropriateness for the given user profile, with no significant differences among the average scores. Similarly, almost all responses were ranked similarly to the reference responses; for example, responses generated by every model appear at every rank, i.e., 1st to 4th place. In summary, the human study shows that the responses of all the models are as good as the reference responses. It is important to remember that supervised-GPT was trained on the full training set, whereas our proposed PToD-0 and P-ToD were trained using zero and 20 labeled training examples, respectively.
We also observe that the PToD-0 model obtained slightly lower BLEU and ROUGE scores than P-ToD and supervised-GPT, whereas in the human study it performed equally well. Upon further investigation, we noticed that the responses generated by PToD-0 are comparable in quality to those of supervised-GPT and P-ToD but are often phrased differently: the PToD-0 model did not reuse the "words" (or n-grams) of the reference responses. For example, a perfectly acceptable response generated by PToD-0, "What should the price be, madam?", did not receive good BLEU or ROUGE scores, because the reference response happened to be "Madam! which price range are you looking for?".
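This n-gram penalty is easy to reproduce with NLTK's sentence-level BLEU: the acceptable paraphrase above shares almost no 4-grams with the reference, so its score collapses even though the response is perfectly valid.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

ref = "Madam! which price range are you looking for?".lower().split()
hyp = "What should the price be, madam?".lower().split()

# Smoothing avoids a hard zero when no higher-order n-grams overlap.
smooth = SmoothingFunction().method1
score = sentence_bleu([ref], hyp, weights=(0.25, 0.25, 0.25, 0.25),
                      smoothing_function=smooth)
print(f"BLEU-4: {score:.3f}")  # near zero despite being an acceptable reply
```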
## 6. Related Work
The two broad categories of dialog systems are open-ended and task-oriented dialog systems. In the following, we summarize the personalization aspect of related work for both categories.
**Personalized Open-ended Dialogue Systems.** Among the earlier attempts to personalize open-ended dialog systems, (Zhou et al., 2019) proposes learning interlocutor persona embeddings and adapting the conversation style accordingly. Researchers have since proposed a variety of methods, including persona information fusion (Zhou et al., 2019), multi-task learning (Xu et al., 2019), transfer learning (Xu et al., 2019; Zhou et al., 2019), meta learning (Xu et al., 2019), persona incorporation into the sequence-to-sequence framework (Zhou et al., 2019), persona-conditioned RNN-based models (Zhou et al., 2019), persona memory-conditioned variational autoencoders (Xu et al., 2019), response selection using memory networks (Zhou et al., 2019), topical information usage (Xu et al., 2019), persona pre-training (Xu et al., 2019; Zhou et al., 2019), and extra training procedures for personalization (Xu et al., 2019; Zhou et al., 2019). While many of these works have proven useful for assigning personalities or language styles to open-ended dialog systems, they are ineffective for task-oriented dialog systems. Rather than assigning personalities to agents (i.e., dialog systems), we propose making them more adaptive to the different kinds of interlocutors they face in task-oriented dialog settings.
**Personalized Task-oriented Dialogue Systems.** Compared to open-domain dialog systems, personalized task-oriented dialog systems are under-explored. In fact, to the best of our knowledge, the personalized bAbI dialog benchmark (Xu et al., 2019) is the only publicly available benchmark for evaluating personalized task-oriented dialog systems. Most existing work (Xu et al., 2019; Zhou et al., 2019) uses memory networks, e.g., by concatenating profile information and dialog memory (Xu et al., 2019), combining style information (Xu et al., 2019), generating queries via an RNN-based decoder (Xu et al., 2019), or using local and global encoders (Xu et al., 2019). Similarly, a cooperative memory network has been proposed (Xu et al., 2019) to handle the case where only partial profile information is available. All of these works follow supervised learning approaches and require a large amount of labeled training data for each user profile. In contrast to previous work, we employ deep reinforcement learning to personalize task-oriented dialog systems in an unsupervised setting, without requiring any labeled training data. This work leverages pre-trained language models and zero-shot learning for natural language understanding and generation, and adapts its responses to a wide range of user profiles in an unsupervised way. Nonetheless, it is worth noting that several key ideas leveraged in this work have previously been used for task-oriented dialog systems, such as deep reinforcement learning for dialog policy generation (Xu et al., 2019; Zhou et al., 2019) and paraphrasing (Xu et al., 2019), zero-shot learning for intent detection (Xu et al., 2019) and slot filling (Xu et al., 2019), and language models for anaphora resolution (Xu et al., 2019) and response generation (Xu et al., 2019). However, none of these works personalizes dialog systems in an unsupervised setting.
## 7. Conclusion
We have presented P-ToD, a novel personalization framework for task-oriented dialog systems that can seamlessly adapt to newly emerging, unseen user profiles in an unsupervised fashion. P-ToD stands out as the first unsupervised framework for personalized task-oriented dialog systems that can effectively adapt its conversation flow and linguistic style, disambiguate nuances, and make meaningful recommendations according to the profile of the active user. The key idea behind the proposed framework is a novel zero-shot generalizable reward function that guides the policy of the personalized model to adapt its responses for the given user without compromising task-completion accuracy. Our experimental evaluation uses up to 180 diverse user profiles across five tasks, including conducting full personalized dialogs. Our proposed framework outperforms all existing personalization models in both quantitative and qualitative analyses. Furthermore, we trained a fully supervised GPT model for comparison, and P-ToD, trained using only 20 labeled training examples, achieves better or competitive performance.
| Method | Fluent | Appropriate | Rank |
|---|---|---|---|
| Reference Response | 4.92 | 4.87 | 2.41 |
| Supervised GPT | 4.93 | 4.85 | 2.52 |
| PToD-0 (This work) | 4.91 | 4.86 | 2.62 |
| P-ToD (This work) | 4.92 | 4.85 | 2.45 |

Table 4. Average scores of the user study.
2306.02044 | Why We Should Report the Details in Subjective Evaluation of TTS More
Rigorously | This paper emphasizes the importance of reporting experiment details in
subjective evaluations and demonstrates how such details can significantly
impact evaluation results in the field of speech synthesis. Through an analysis
of 80 papers presented at INTERSPEECH 2022, we find a lack of thorough
reporting on critical details such as evaluator recruitment and filtering,
instructions and payments, and the geographic and linguistic backgrounds of
evaluators. To illustrate the effect of these details on evaluation outcomes,
we conducted mean opinion score (MOS) tests on three well-known TTS systems
under different evaluation settings and we obtain at least three distinct
rankings of TTS models. We urge the community to report experiment details in
subjective evaluations to improve the reliability and interpretability of
experimental results. | Cheng-Han Chiang, Wei-Ping Huang, Hung-yi Lee | 2023-06-03T07:52:11Z | http://arxiv.org/abs/2306.02044v1 | # Why We Should Report the Details in Subjective Evaluation
###### Abstract
This paper emphasizes the importance of reporting experiment details in subjective evaluations and demonstrates how such details can significantly impact evaluation results in the field of speech synthesis. Through an analysis of 80 papers presented at INTERSPEECH 2022, we find a lack of thorough reporting on critical details such as evaluator recruitment and filtering, instructions and payments, and the geographic and linguistic backgrounds of evaluators. To illustrate the effect of these details on evaluation outcomes, we conducted mean opinion score (MOS) tests on three well-known TTS systems under different evaluation settings and we obtain at least three distinct rankings of TTS models. We urge the community to report experiment details in subjective evaluations to improve the reliability and interpretability of experimental results.
Cheng-Han Chiang, Wei-Ping Huang, Hung-yi Lee National Taiwan University, Taiwan [email protected], [email protected], [email protected]
**Index Terms**: mean opinion score, naturalness, listening test, crowdsourcing, Amazon Mechanical Turk
## 1 Introduction
Speech synthesis is a fundamental building block of several speech processing tasks, such as text-to-speech (TTS), voice conversion [1], and speech-to-speech translation [2]. Due to the absence of ground truth and automatic evaluation metrics, subjective evaluation [3] is the predominant method used to assess the quality of synthesized speech. In a subjective evaluation, researchers recruit listeners, present them with speech signals, and ask them to rate each signal according to the task instructions provided. The use of online crowdsourcing platforms for this purpose has become increasingly common [4].
Despite subjective evaluation being a critical evaluation metric for speech synthesis systems, we discover that prior works often omit details pertaining to subjective evaluation. Through an analysis of over 80 papers presented at INTERSPEECH 2022 on speech synthesis, we find that none of the papers provide comprehensive details to enable the replication of subjective evaluation under the same experimental setting. These missing details include the recruitment and selection of evaluators, their instructions and compensation, their qualifications, location, and linguistic background.
To show that these missing details in subjective evaluation can significantly influence the experiment result, we conduct mean opinion score (MOS) tests to assess the quality of three different TTS models: Tacotron2 [5], FastSpeech2 [6], and VITS [7]. We perform over ten sets of MOS tests on the quality of audio samples generated by the TTS models and ground truth human recordings, with the same audio samples used across all MOS tests. The MOS tests differ in some experiment details that are omitted in prior works. Since all MOS tests we conduct share the same audio samples, we expect only one "ground truth ranking" on the quality of audio samples generated by different TTS models, but our MOS tests yield at least three rankings on the three TTS models. Our results highlight the criticality of details in subjective evaluations for reliable experiment results.
## 2 Survey of Prior Works
We begin by conducting a survey of previous works to comprehend the current state of how the details in subjective evaluation experiments are reported. Specifically, we survey **all** the papers in INTERSPEECH 2022 that belong to the speech synthesis track or have the term "speech synthesis" in the paper's title and conduct subjective evaluation. We exclude 8 papers that do not use MOS evaluation, resulting in a total of 80 papers. For each of these papers, we evaluate whether they report the following **factors** or not:
**Recruitment platform:** Out of the 80 papers examined, 62 do not report what platform is used to recruit the evaluators. Among the remaining 18 papers, 11 use Amazon Mturk, 2 use Prolific, and 1 uses Microsoft UHRS, while 4 papers mention crowdsourcing platforms without specifying which one is used.
**Language background and geographic location of the evaluators:** We find that 61.3% of the papers we survey do not report whether the evaluators are native speakers of the language used in the speech synthesis model to be evaluated. Furthermore, we observe that only 9 papers report the current location of their evaluators. This presents a problem since the rating of native speakers and non-native speakers may differ, and the same language spoken by people from different parts of the world can also vary.
**Qualification of the evaluators:** There is a possibility that even if the evaluator is a native speaker and resides in the region of interest, they may not be able to provide reliable feedback due to factors such as low-quality audio devices. It is also possible that the evaluator just wants to make money by answering the survey randomly. Therefore, it is crucial to establish certain qualifications to filter out invalid evaluators and ensure the quality of the subjective evaluation. However, we note that a concerning number of papers (68 papers) do not address how they establish qualifications to select workers or handle invalid responses during post-processing.
**Instructions given to the evaluators:** Task instructions serve to inform evaluators about the tasks at hand and provide guidance on how to complete the task. In the MOS test, the instructions include the description used to describe a particular score, e.g., _"5: Excellent"_. In our survey, two-thirds of the papers (51) fail to include any instructions used during their subjective evaluations. Many papers simply state that they "conduct a MOS test," without providing further details. Although
the recommended practice for MOS tests exists [8, 3], it is unclear whether the papers adhere to the evaluation procedures outlined in the recommendations. In fact, we have observed the task instructions stated in some papers to be different from the recommendations. We even find some papers (9) use a 0.5-point increment in the MOS tests, contradicting the 1-point increment in the recommended practice MOS tests.
**Number of raters and rated items:** About one-third of the papers we survey do not report how many unique individuals participate in the subjective evaluation, and 27.5% of papers do not say how many audio samples are evaluated. More than half of the papers (51) do not state how many raters evaluate each audio sample, and 72 papers do not say the total number of audio samples rated by a unique individual.
## 3 Experiment Setup
We demonstrate the crucial role of unspecified details in subjective evaluation by conducting various MOS tests to evaluate the quality of three TTS models: Tacotron2 [5], FastSpeech2 [6], and VITS [7]. By manipulating certain factors in each MOS test, we investigate whether the experiment results vary. TTS is chosen as the target task since the majority of our surveyed papers focus on it, and we choose the three TTS models since they are well-studied and their performance is well-recognized.
**Since all the MOS tests share the same audio samples, there should only exist one ranking on the quality of the three TTS models, which is the ground truth ranking**. Here, we do not assume what this ground truth ranking is, while there might be some agreement about this ranking in the TTS community.
### TTS Models and Datasets
We use LJSpeech [9] as our dataset, which is commonly used in TTS research. For the TTS models, we use the pre-trained checkpoints from ESPnet-TTS [10] and directly apply its demo code to synthesize all the samples. For FastSpeech2 and Tacotron2, we use the HiFi-GAN [11] vocoder checkpoint from ESPnet-TTS to convert the output spectrogram back to a waveform. All audio used in the experiment, including the ground-truth audio, is normalized to mitigate amplitude differences between speech generated by different systems.
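Since ESPnet-TTS exposes pre-trained checkpoints through a simple inference API, this synthesis step can be reproduced with a few lines. The sketch below is illustrative: the exact model tag (`kan-bayashi/ljspeech_vits`) is an assumption and should be checked against the ESPnet model zoo.

```python
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

# Model tag is an assumption; browse the ESPnet model zoo for exact names.
tts = Text2Speech.from_pretrained("kan-bayashi/ljspeech_vits")

out = tts("The examination and testimony of the experts.")
sf.write("sample.wav", out["wav"].numpy(), tts.fs)  # tts.fs = model sample rate
```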
### Subjective Evaluation Setup
We randomly select 50 sentences from the LJSpeech test set and use the three TTS models to synthesize the corresponding audio samples. The audio samples are longer than 3 seconds and shorter than 10 seconds. Each of the 50 sentences thus has three audio samples generated by the three TTS models and one human recording, for a total of 200 audio samples. We split the 200 audio samples into 10 equal-sized, non-overlapping groups to form 10 questionnaires; each questionnaire contains 5 audio samples from each of the three TTS models and the human recordings, and no two audio samples in a questionnaire share the same transcript. Each audio sample is evaluated by 9 distinct evaluators.
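One balanced assignment consistent with these constraints is a cyclic, Latin-square-style design: split the 50 sentences into 10 groups of 5 and rotate the four systems across questionnaires. This is a sketch of one such scheme, not necessarily the exact assignment used in the study.

```python
import itertools

SYSTEMS = ["fastspeech2", "tacotron2", "vits", "ground_truth"]
N_SENT, N_Q, GROUP = 50, 10, 5  # sentences, questionnaires, group size

questionnaires = [[] for _ in range(N_Q)]
for s, k in itertools.product(range(N_SENT), range(len(SYSTEMS))):
    g = s // GROUP               # 10 disjoint sentence groups
    q = (g + k) % N_Q            # rotate systems across questionnaires
    questionnaires[q].append((SYSTEMS[k], s))

# Each questionnaire: 20 samples, 5 per system, all transcripts distinct.
assert all(len(q) == 20 for q in questionnaires)
assert all(len({s for _, s in q}) == 20 for q in questionnaires)
```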
Unless otherwise specified, we use the following instructions and rating scale in our MOS tests, following [12]. We ask the evaluators _"How natural (i.e. human-sounding) is this recording from a scale of 1 to 5?"_. The scale options are: _"1: Bad - Very unnatural speech", "2: Poor - Somewhat unnatural speech", "3: Fair - Neither natural nor unnatural speech", "4: Good - Somewhat natural speech", "5: Excellent - Completely natural speech"_. We also ask the raters to wear headphones, and we only recruit workers who do not have hearing impairments.
We mainly use two crowdsourcing platforms for our experiments: Amazon Mturk and Prolific. When using Amazon Mturk for evaluation, we cannot control the number of participants or how many audio samples an individual assesses. We estimate that completing a single questionnaire should take less than 5 minutes, and we pay the Mturk evaluators US$0.9 per questionnaire. For the experiments conducted on Prolific, we recruit 9 distinct individuals and ask each of them to rate all 200 audio samples (10 questionnaires). The interface seen by evaluators recruited from Prolific is the same as that seen by the workers recruited via Mturk. Each individual is paid US$10 for rating 200 audio samples, which is slightly higher than the payment to workers on Mturk; this is because workers on Prolific need to register an Mturk account to conduct the task, and we pay them slightly more for doing so. In all our subjective evaluations, we ensure that the payments are reasonable for raters anywhere in the world. Other details about the experiments are specified in the following sections.
In all the tables of our paper, we use subscripts to denote the width of the 95% confidence interval of the MOS, and we use boldface to mark the best of the three TTS models in each column (ground truth excluded).
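For reference, the MOS and its 95% confidence interval can be computed with a standard t-based interval over the ratings. Whether the paper pools all per-sample ratings for a system or averages per-sample means first is not stated, so the aggregation below is an assumption.

```python
import numpy as np
from scipy import stats

def mos_with_ci(ratings, confidence=0.95):
    """Mean opinion score and half-width of its t-based confidence interval.

    Whether the paper's subscript is the half-width or the full width of
    the interval is not stated.
    """
    ratings = np.asarray(ratings, dtype=float)
    mos = ratings.mean()
    half_width = stats.sem(ratings) * stats.t.ppf((1 + confidence) / 2,
                                                  df=len(ratings) - 1)
    return mos, half_width

mos, hw = mos_with_ci([4, 5, 3, 4, 4, 5, 3, 4, 4])  # 9 ratings of one sample
print(f"{mos:.2f} \u00b1 {hw:.2f}")
```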
## 4 Do Different Factors in MOS Evaluation Affect the Result?
In this section, we vary the factors in the MOS test and show that all these factors can change the experiment results.
### Qualification of Evaluators
First, we study how the MOS test results vary depending on how we screen workers on Mturk. We conduct this study on Mturk, as it is the most widely adopted crowdsourcing platform among the papers we survey and is itself a well-studied crowdsourcing platform [13, 14]. Mturk offers two parameters for assessing workforce quality: **HIT Approval Rate** and **Number of HITs Approved**. The former is the percentage of tasks a worker has completed successfully, while the latter is the total number of completed tasks. A higher HIT Approval Rate and Number of HITs Approved may indicate that a worker provides higher-quality results.
We conduct two sets of MOS evaluations: the first allows all workers on Mturk to participate, while the second recruits only workers with HIT Approval Rate \(\geq 95\%\) and Number of HITs Approved \(\geq 1000\); these thresholds follow prior works that conduct human evaluations [15]. For the MOS evaluation in this section, we impose no additional requirements on the evaluators, including geographic location and language background.
The results are presented in Table 1. Without any worker qualifications (denoted as _None_ in Table 1), FastSpeech2 is favored over Tacotron2 in the MOS test. However, the highly overlapping 95% confidence intervals of the two models' MOS indicate that FastSpeech2's superiority over Tacotron2 is not statistically significant. With a reasonably high worker threshold (i.e., HIT Approval Rate \(\geq 95\%\) and Number of HITs Approved \(\geq 1000\)), the evaluators once again find Tacotron2 to be worse than FastSpeech2. Additionally, qualified listeners seem unable to distinguish VITS from the ground truth. Based on these results, one would conclude that (1) although Tacotron2 is an autoregressive TTS model, the audio
it synthesizes is still inferior to the audio samples produced by the non-autoregressive FastSpeech2, and (2) VITS is already on par with human recordings.
Next, we ask whether a screening test can be used to filter valid evaluators, recruiting only workers who pass it for the MOS test. Using a test to select valid participants is recommended by P.808 [4, Section 6.3.1.1], but it is unclear whether this recommendation is widely adopted in crowdsourced subjective evaluations. We design the test as follows: we randomly sample 10 sentences from the LJSpeech test set and synthesize 4, 3, and 3 audio samples using FastSpeech2, Tacotron2, and VITS, respectively. These sentences differ from the ones used in the MOS tests. We then pair each synthesized audio sample with the corresponding ground-truth recording to form 10 audio pairs. Finally, we create a survey containing the 10 audio pairs, in which participants are asked to choose the more natural sample in each pair. We publish the survey on Mturk, recruit 90 workers with HIT Approval Rate \(\geq 95\%\) and Number of HITs Approved \(\geq 1000\) to conduct the task, and pay them US$0.9 for completing the survey.
We show the accuracy of the test in Figure 1, where accuracy is the proportion of the 10 audio pairs in which a rater judged the ground truth to be more natural. Surprisingly, more than half of the workers do not display a consistent preference for the human recordings. This finding suggests that setting qualifications on Mturk alone may not be sufficient if researchers expect evaluators to discern differences between model-generated samples and human recordings. We then conduct another MOS test while allowing only workers with accuracy higher than 0.7 to participate, amounting to 29 workers. The MOS test result, denoted _Pass test_ in Table 1, reveals that VITS is the best, while Tacotron2 performs better than FastSpeech2. The MOS differences between the three TTS models are all statistically significant. This result contradicts our previous results. Overall, the qualifications employed in a subjective evaluation may introduce a selection bias into the experiment results; it is therefore crucial to report the qualifications used.
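The screening step reduces to a small filter over per-worker accuracies. In the sketch below, the data layout (`answers[worker_id]` as a list of booleans, True when the worker picked the human recording) is hypothetical.

```python
def screen_workers(answers, n_pairs=10, threshold=0.7):
    """Keep workers who prefer the ground truth in > threshold of pairs."""
    accuracy = {w: sum(a) / n_pairs for w, a in answers.items()}
    passed = {w for w, acc in accuracy.items() if acc > threshold}
    return passed, accuracy

answers = {"w1": [True] * 8 + [False] * 2,   # toy data
           "w2": [True] * 5 + [False] * 5}
passed, acc = screen_workers(answers)
print(passed)  # {'w1'}  (0.8 > 0.7; w2 at 0.5 is filtered out)
```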
### Location of Workers
Next, we study how the location of workers changes the MOS results on Mturk. We recruit only English speakers, as they are more familiar with English and hence may be better equipped to detect subtle unnatural prosody or accent in the samples. However, Mturk assumes that workers on its platform are fluent in English; therefore, no qualification on the raters' English ability can be set. We publish three MOS tests on Mturk, recruiting only workers from the USA, the UK, and India, respectively. We again recruit only workers with HIT Approval Rate \(\geq 95\%\) and Number of HITs Approved \(\geq 1000\).
The experiment results are shown in Table 2. We find that for workers in the USA, FastSpeech2 generates audio samples as natural as those generated by Tacotron2. Workers in India also agree that the quality of FastSpeech2 and Tacotron2 is very similar. However, raters in the UK consider Tacotron2 superior to FastSpeech2 by a significant margin. Furthermore, UK-based evaluators consider VITS much more unnatural compared to the ground truth, while workers in the other two regions do not find the ground truth significantly better. We include the result when we do not restrict the location of the raters in Table 2, denoted as _All_. In this case, we observe a completely different ranking among the three TTS models. This highlights the variability of the results due to the location of the evaluators.
The phenomenon observed in this section could be attributed to several potential factors. From a linguistic perspective, English spoken by speakers from different regions could vary, potentially affecting how raters score the same audio sample. Another possible reason could be that people from the USA are more tolerant of unnatural samples, resulting in them rating samples as more natural. Additionally, the headphones used by evaluators from different countries may be systematically different, leading to different perceptions of the unnatural elements in the audio samples. There could be more intricate reasons that are not listed here, and all of them contribute to the uncertainty of subjective evaluation results. Thus, it is important to report the locations of evaluators who participated in the study to better understand to whom the experiment results may apply.
### Crowdsourcing Platforms
In this section, we turn our attention to the crowdsourcing platform used to recruit evaluators. We choose two popular platforms, Mturk and Prolific, and recruit workers located in the USA on both. We also run another MOS test by recruiting students enrolled in a machine learning course at our university. The demographic constitution of the raters recruited at our university differs significantly from the workers on Mturk and Prolific: the students participating in our study are Asian, their first language is Chinese, and they speak English fluently; their age distribution falls in the range of 18 to 28. We include the study using students from our university because it is common for graduate student researchers to conduct subjective evaluations through their personal networks, and we aim to simulate this scenario by recruiting students on campus.
The results are presented in Table 3.
| Qualification | None | \(\geq\)95% and \(\geq\)1000 | Pass test |
|---|---|---|---|
| FastSpeech2 | \(3.70_{0.10}\) | \(3.70_{0.08}\) | \(3.17_{0.11}\) |
| Tacotron2 | \(3.62_{0.09}\) | \(3.61_{0.08}\) | \(3.51_{0.10}\) |
| VITS | \(\mathbf{3.78_{0.08}}\) | \(\mathbf{3.74_{0.08}}\) | \(\mathbf{3.96_{0.10}}\) |
| Ground truth | \(3.86_{0.08}\) | \(3.74_{0.08}\) | \(4.16_{0.08}\) |

Table 1: MOS results when using different qualifications.
| Location | All | USA | UK | India |
|---|---|---|---|---|
| FastSpeech2 | \(3.70_{0.08}\) | \(3.73_{0.09}\) | \(2.64_{0.08}\) | \(3.58_{0.09}\) |
| Tacotron2 | \(3.61_{0.08}\) | \(3.73_{0.09}\) | \(2.87_{0.09}\) | \(3.62_{0.08}\) |
| VITS | \(\mathbf{3.74_{0.08}}\) | \(\mathbf{3.79_{0.09}}\) | \(\mathbf{3.17_{0.09}}\) | \(\mathbf{4.10_{0.07}}\) |
| Ground truth | \(3.74_{0.08}\) | \(3.87_{0.08}\) | \(3.71_{0.08}\) | \(4.15_{0.07}\) |

Table 2: MOS results when recruiting evaluators residing in different locations.
Figure 1: The distribution of test accuracy.
Even though the demographic composition of the workers recruited from Prolific is markedly different from that of our university, the two groups produce the same ranking of the TTS models. However, evaluators on Prolific are more adept at distinguishing the quality disparity between samples generated by FastSpeech2 and Tacotron2. In contrast, workers from Mturk do not find significant differences in the quality of the samples produced by the three TTS models.
Possible reasons for these differences are as follows. First, different recruiting platforms have different onboarding processes for workers. For instance, Prolific requires workers to verify their phone numbers and government ID, while Amazon Mturk may not require a government ID. These differences may affect workforce quality by serving as a prescreening mechanism. Secondly, the number of unique raters involved in the studies differs across platforms, which may also affect the results: in this section, the studies conducted on Mturk, Prolific, and our university involved 90, 9, and 90 unique participants, respectively. The impact of the number of unique raters on the experiment results will be investigated more systematically in future work. Although we only controlled the crowdsourcing platform in this section, numerous factors change when the platform changes. Since the platform can significantly influence the experiment results, it is crucial to state explicitly which platform was used, to help readers better understand the potential underlying distribution of evaluators in the study.
### Instructions to the Workers
Last, we investigate how the MOS results change with the instructions given to the workers. The experiments in this section are conducted on Prolific and recruit only workers living in the USA whose first language is English. We use four sets of instructions to create four different MOS experiments, with non-overlapping workers across the four experiments. The instructions are: (i) **None**: _"How natural (i.e. human-sounding) is this recording on a scale of 1 to 5? 1: Bad, 2: Poor, 3: Fair, 4: Good, 5: Excellent."_ This follows P.800 [3, B.4.5]. (ii) **Natural**: the default instruction stated in Section 3.2. (iii) **Distort**: _"What is the quality of the speech based on the level of distortion of the speech on a scale of 1 to 5? 1: Bad - Very annoying and objectionable, 2: Poor - Annoying, but not objectionable, 3: Fair - Perceptible and slightly annoying, 4: Good - Just perceptible, but not annoying, 5: Excellent - Imperceptible."_ This follows the MOS (ACR) referred to in [14]. (iv) **All**: we use the default instruction in Section 3.2, but explicitly instruct the raters to consider the _"fluency, prosody, intonation, distortion, and noise in the sample."_ This instruction is motivated by 2 papers in our survey that explicitly tell the evaluators what to focus on during the evaluation.
The results in Table 4 show three different rankings of the three TTS models. With the **None** instruction, which provides the least guidance, raters find VITS to be the best TTS model, and take the shortest time to complete the task among the four settings. With the default instruction (**Natural**), Tacotron2 becomes the best one. When raters are asked to focus on the distortion in the samples (the **Distort** instruction), they again agree that VITS has the least distortion. VITS becomes the worst TTS model when raters are asked to consider all possible factors of natural speech using the **All** instruction. We also observe that the longer the instructions, the longer the time taken to complete the task; when evaluators are explicitly asked to focus on certain factors in the samples (as in **Distort** and **All**), they spend more time on the task. After finishing the task, we interview the participants in the **None** group and ask them what factors they considered during the rating. Interestingly, they state that fluency, pronunciation, robotic sounds (distortion), and noise were the main factors, which largely coincide with the factors we listed in the **All** setting. This shows that even when raters consider similar factors during the task, the results can still be substantially different depending on whether they are explicitly required to do so.
## 5 Conclusion
In this paper, we reveal that most papers on speech synthesis do not fully report the details of their subjective evaluations. To highlight the gravity of the problem, we conduct more than ten sets of MOS experiments to rate the quality of three TTS models and obtain at least three rankings of the quality of those models. Since all the MOS evaluations share the same audio samples and differ only in the evaluation factors, we show that those factors are highly influential on the experiment results. The surveyed paper list and an example of our MOS tests can be found at github.com/d223302/SubjectiveEvaluation. Since we do not assume a ground truth ranking of the TTS models used in our paper, we are not able to provide guidelines on how to conduct _"better"_ subjective evaluations that yield results closer to the ground truth. The one and only guideline we provide for future researchers conducting _good_ subjective evaluations is to comprehensively report every detail of those evaluations. While there are guidelines for conducting crowdsourced MOS evaluation [13, 14], it is unclear whether those guidelines are still followed and whether they remain suitable today.
While the details of human evaluation have been included in the checklists of major machine learning and natural language processing conferences (e.g., NeurIPS and *ACL), the speech community has yet to take similar action. To increase the reproducibility of experiment results and allow for more reliable interpretations of subjective evaluation results, we encourage future researchers to comprehensively report the details of subjective evaluations, either in the paper or in online supplementary materials. We hope that the concerning results presented in our paper draw attention to the importance of reporting subjective evaluation details and provoke further discussions on this topic.
| Instruction | None | Natural | Distort | All |
|---|---|---|---|---|
| FastSpeech2 | \(3.11_{0.1}\) | \(3.06_{0.1}\) | \(3.0_{0.09}\) | \(2.96_{0.1}\) |
| Tacotron2 | \(3.16_{0.1}\) | \(3.23_{0.1}\) | \(3.20_{0.1}\) | \(\mathbf{3.10_{0.1}}\) |
| VITS | \(\mathbf{3.40_{0.12}}\) | \(\mathbf{3.14_{0.11}}\) | \(\mathbf{3.98_{0.1}}\) | \(2.95_{0.11}\) |
| Ground truth | \(4.28_{0.08}\) | \(3.96_{0.09}\) | \(4.57_{0.07}\) | \(3.89_{0.08}\) |
| Time (mins) | 32 | 43 | 52 | 52 |

Table 4: MOS results when using different task instructions. We also report the average time an evaluator takes to complete the rating of 200 samples.
| Platform | Mturk | Prolific | University |
|---|---|---|---|
| FastSpeech2 | \(3.73_{0.09}\) | \(2.81_{0.11}\) | \(3.08_{0.12}\) |
| Tacotron2 | \(3.73_{0.09}\) | \(3.02_{0.11}\) | \(3.18_{0.12}\) |
| VITS | \(\mathbf{3.79_{0.09}}\) | \(\mathbf{3.12_{0.11}}\) | \(\mathbf{3.46_{0.11}}\) |
| Ground truth | \(3.87_{0.08}\) | \(4.12_{0.08}\) | \(3.76_{0.11}\) |

Table 3: MOS results when recruiting evaluators using different platforms.
Table 3: MOS results when recruiting evaluators using different platforms. |
2307.15611 | A Time-Frequency Generative Adversarial based method for Audio Packet
Loss Concealment | Packet loss is a major cause of voice quality degradation in VoIP
transmissions with serious impact on intelligibility and user experience. This
paper describes a system based on a generative adversarial approach, which aims
to repair the lost fragments during the transmission of audio streams. Inspired
by the powerful image-to-image translation capability of Generative Adversarial
Networks (GANs), we propose bin2bin, an improved pix2pix framework to achieve
the translation task from magnitude spectrograms of audio frames with lost
packets, to noncorrupted speech spectrograms. In order to better maintain the
structural information after spectrogram translation, this paper introduces the
combination of two STFT-based loss functions, mixed with the traditional GAN
objective. Furthermore, we employ a modified PatchGAN structure as
discriminator and we lower the concealment time by a proper initialization of
the phase reconstruction algorithm. Experimental results show that the proposed
method has obvious advantages when compared with the current state-of-the-art
methods, as it can better handle both high packet loss rates and large gaps. | Carlo Aironi, Samuele Cornell, Luca Serafini, Stefano Squartini | 2023-07-28T15:13:59Z | http://arxiv.org/abs/2307.15611v1 | # A Time-Frequency Generative Adversarial based method for Audio Packet Loss Concealment
###### Abstract
Packet loss is a major cause of voice quality degradation in VoIP transmissions with serious impact on intelligibility and user experience. This paper describes a system based on a generative adversarial approach, which aims to repair the lost fragments during the transmission of audio streams. Inspired by the powerful image-to-image translation capability of Generative Adversarial Networks (GANs), we propose _bin2bin_, an improved pix2pix framework to achieve the translation task from magnitude spectrograms of audio frames with lost packets, to non-corrupted speech spectrograms. In order to better maintain the structural information after spectrogram translation, this paper introduces the combination of two STFT-based loss functions, mixed with the traditional GAN objective. Furthermore, we employ a modified PatchGAN structure as discriminator and we lower the concealment time by a proper initialization of the phase reconstruction algorithm. Experimental results show that the proposed method has obvious advantages when compared with the current state-of-the-art methods, as it can better handle both high packet loss rates and large gaps. We make our code publicly available at: github.com/aircarlo/bin2bin-GAN-PLC.
Packet Loss Concealment, Spectrogram Inpainting, Conditional Generative Adversarial Networks, _bin2bin_.
## I Introduction
Speech signals are often subject to localized distortions or even total loss of information when data is transmitted through unreliable channels. This happens, for example, in applications such as mobile digital communications, video-conferencing systems, and Voice over Internet Protocol (VoIP) calls. In such scenarios, audio frames are often encapsulated into packets, which are then routed individually through the network, sometimes along different paths, resulting in out-of-order delivery. At the destination, the original sequence may be reassembled in the correct order based on the packet sequence numbers. Nevertheless, a variety of issues can occur, such as packet loss, excessive delay, or jitter.
The process of restoring missing packets is known as Packet Loss Concealment (PLC) [1]. This term refers to any technique that attempts to overcome the packet-loss problem by replacing the lost fragments with an estimated reconstruction, which should be meaningful and consistent with the informative content of the speech message. The system should also prevent audible artifacts and reduce listening fatigue, so that the listener remains unaware of any problems that have occurred.
### _Related works_
Some techniques address a similar task under the names Audio Inpainting [2, 3], Waveform Interpolation [4], or Extrapolation [5]. These techniques approach the reconstruction problem from a sparsity point of view, approximating the waveform with a combination of frequency atoms extracted from a given dictionary. However, they are generally unsuitable for real-time applications, as their computational cost can lead to excessive latency.
Most of the current approaches to PLC are based on codecs that implement algorithmic solutions: sender-based techniques like Interleaving and Forward-Error Correction (FEC) [6], or receiver-based concealment techniques, like Silence/Noise Substitution, Waveform Substitution, or Linear Predictive Coding (LPC) [7].
With the rise of Deep Neural Networks (DNN), a significant improvement of quality has been obtained on speech processing tasks, hence also DNN architectures for neural PLC have been successfully investigated: MLP [8], LSTM/RNN [1, 9], Autoencoders [10, 11], GANs [12, 13].
In this study, we apply a Generative method, based on the pix2pix [14] framework which exploits a Fully Convolutional Network (FCN) architecture, to address the spectrogram inpainting task. We show that this solution, while preserving global temporal and spectral information along with local information, can outperform competing approaches, based either on classical digital signal processing solutions or learning methods.
## II Generative Adversarial Networks
Generative Adversarial Networks (GANs) [15] have emerged in the past years as a powerful generative modeling technique. A typical GAN consists of two networks, a generator (\(G\)) and a discriminator (\(D\)). Given an input of random values sampled from a normal distribution, \(z\) (latent variable), the generator performs an upsampling in order to obtain a sample of suitable dimensions. On the other hand, the discriminator acts as a binary classifier, trying to distinguish "real" samples \(x\) (belonging to the dataset distribution) from "fake" samples, generated by \(G\).
Both \(G\) and \(D\) are trained simultaneously in a min-max competition with respect to binary cross-entropy loss. The final objective for \(G\) is to output samples that follow as close as possible the "real" data distribution, while \(D\) learns to spot the fake samples from real ones, by penalizing \(G\) for producing implausible results.
Given the success achieved in the field of image processing, GANs have also been effective in speech processing tasks. In
this regard, WaveGAN [16] represents the pioneering attempt to adapt a deep convolutional GAN (DCGAN) structure for speech, by compressing the two-dimensional image input into one-dimensional. It laid the foundations for GAN-based practical audio synthesis and for converting different image generation GANs to operate on waveforms.
Several extensions have been derived from WaveGAN; to name a few, cWaveGAN [17], which allows conditioning both \(G\) and \(D\) with additional information to drive the generation process, and Parallel WaveGAN [18], which uses a multi-resolution STFT loss along with the adversarial loss.
As outlined in [16], working with compressed time-frequency representations in a generative setting can be problematic, as the generated spectrograms are non-invertible and therefore cannot be listened to without lossy estimation. Nevertheless, the practice of bootstrapping image recognition algorithms for audio tasks has become commonplace; examples include SpecGAN [16], MelGAN [19], VocGAN [20], and StyleGAN [21].
### _Pix2pix_
Pix2pix is a conditional GAN (cGAN) originally developed in 2017 by Phillip Isola et al. [14] for synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images. Unlike a vanilla GAN, which uses only a random noise seed to trigger generation, a cGAN introduces a form of supervision by feeding the generator conditioning information \(c\), such as categorical labels or contextual samples. The discriminator is also conditioned on \(c\), to help it more accurately assess the matching and alignment of the two images:

\[\min_{G}\,\max_{D}\ \mathcal{L}_{cGAN}\left(D,G\right)=\mathbb{E}_{x,c}\left[\log D(x|c)\right]+\mathbb{E}_{z,c}\left[\log\left(1-D(G(z|c)\,|\,c)\right)\right] \tag{1}\]
Unlike other cGAN-based works (e.g., [22, 23]), Isola et al. demonstrate that the input noise vector \(z\) does not have a significant impact if the conditioning information is strong enough, so they removed it, obtaining the same stochastic behavior by adding dropout layers to the generator.
## III Neural Concealment Architecture
An overview of our bin2bin architecture is presented in Fig. 1. The main contribution of this paper is the adaptation of the pix2pix architecture to the audio packet loss concealment task, through an in-depth evaluation of both the generative and discriminative processes, optimized to inpaint spectrogram gaps. We adopt the term bin2bin as a direct translation of pix2pix, inspired by the fundamental unit (bin) of the discretized time and frequency axes of the spectrogram.
### _Generator_
In the proposed bin2bin scheme, the generator architecture makes use of the U-Net [24] structural design with the insertion of skip-connections between affine layers. The U-Net is composed of a convolutional encoder that down-samples the input image in the first half of the architecture, and a decoder that upsamples the latent representation applying 2D transposed-convolutions.
The clean signal \(s\) and its lossy counterpart \(\tilde{s}\) are first transformed into time-frequency spectrograms. In the provided implementation, all STFTs are computed with a 512-point Hann window, corresponding to 32 milliseconds at a sample rate of 16000 Hz, and a hop size of 64. The STFT parameters were chosen to ensure a balanced resolution between the regions to be reconstructed and the reliable parts acting as conditioning context.
Our generator \(G\) accepts \(1\times 256\times 256\) inputs, where each dimension represents, respectively, the number of _Channels_, _Frequency_ and _Time_ bins, hence, a portion of such size is extracted at a random time, from the aforementioned spectrograms \(S\) and \(\tilde{S}\), regardless of the amount of lost fragments present inside.
Only the log-magnitude spectrogram is fed into the generator; during training, the phase information is discarded, while at test time it is used to initialize the Griffin-Lim [25] phase reconstruction algorithm.
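A minimal sketch of this front end in PyTorch is shown below. Two details are assumptions, as the paper does not state them: the 512-point STFT yields 257 frequency bins, so we drop the top bin to reach 256, and a small \(\epsilon\) stabilizes the logarithm.

```python
import torch

def log_mag_frame(wav, n_fft=512, hop=64, n_frames=256):
    """Log-magnitude STFT patch of shape (1, 256, 256), as fed to G.

    Assumes the clip yields more than `n_frames` STFT frames.
    """
    window = torch.hann_window(n_fft)
    spec = torch.stft(wav, n_fft=n_fft, hop_length=hop, window=window,
                      return_complex=True)               # (257, T)
    mag, phase = spec.abs(), spec.angle()                # phase kept for test-time GL init
    t0 = torch.randint(0, spec.shape[1] - n_frames, (1,)).item()
    patch = torch.log(mag[:256, t0:t0 + n_frames] + 1e-7)
    return patch.unsqueeze(0), phase                     # (1, 256, 256), full phase

patch, phase = log_mag_frame(torch.randn(3 * 16000))    # 3 s dummy clip
print(patch.shape)                                      # torch.Size([1, 256, 256])
```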
### _Discriminator_
The discriminator is built on a custom architecture, specifically designed for the pix2pix framework, called PatchGAN [14]. It is basically a fully convolutional network that maps the input image into an \(N\times N\) feature map of outputs \(Y\), in which each patch \(y_{ij}\) indicates whether the corresponding portion of input is real or fake. The patches originate from overlapped receptive fields, which can be retrieved through simple backtracking operations.
In the original paper [14], an ablation study was conducted to determine the configuration of \(D\) (number of conv layers, kernel size) that maximized the evaluated metrics. In this work we focused on a related aspect: we varied the size of the discriminator's convolutional kernels to achieve a rectangular receptive field, instead of the square one (\(70\times 70\) pixels) used in pix2pix. We motivated this decision by observing that the spectrogram regions to be concealed extend over the entire frequency dimension but only a relatively small part of the time dimension. Trading off the complexity of \(D\) against the desired shape, we obtained an optimal receptive field of \(162\times 24\), with rectangular \(8\times 2\) kernels in all conv layers.
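The reported \(162\times 24\) field is consistent with five stacked convolutions with \(8\times 2\) kernels and the standard PatchGAN stride pattern (2, 2, 2, 1, 1); the stride pattern is an assumption, since the paper does not state it. This can be checked with the usual receptive-field recursion:

```python
def receptive_field(layers):
    """Receptive field of stacked convs; layers = [(kernel, stride), ...]."""
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump   # growth depends on the cumulative stride
        jump *= s
    return rf

# Assumed 5-layer PatchGAN stride pattern (2, 2, 2, 1, 1):
freq_axis = receptive_field([(8, 2), (8, 2), (8, 2), (8, 1), (8, 1)])
time_axis = receptive_field([(2, 2), (2, 2), (2, 2), (2, 1), (2, 1)])
print(freq_axis, time_axis)  # -> 162 24, matching the reported field
```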
### _Post-processing_
The generator output represents the magnitudes of the T-F coefficients of both the reliable and the lost regions. Synthesis via the inverse STFT introduces an inherent cross-fading, which significantly reduces artifacts. For phase reconstruction we used a modified version of the Griffin-Lim [25] algorithm, providing the phase of the lossy frame as the initial estimate. This considerably speeds up the synthesis of the reconstructed waveform: maximum quality is obtained with fewer than 10 iterations of the algorithm.
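Since librosa's built-in `griffinlim` supports only random or zero-phase initialization, a warm start from the lossy phase can be written out explicitly. The following is a minimal sketch of such a loop, not the authors' exact implementation:

```python
import numpy as np
import librosa

def griffin_lim_warm_start(mag, lossy_phase, n_iter=10, n_fft=512, hop=64):
    """Griffin-Lim seeded with the phase of the lossy frame.

    mag, lossy_phase: arrays of shape (1 + n_fft // 2, T).
    """
    S = mag * np.exp(1j * lossy_phase)
    for _ in range(n_iter):
        x = librosa.istft(S, hop_length=hop, window="hann")
        proj = librosa.stft(x, n_fft=n_fft, hop_length=hop, window="hann")
        S = mag * np.exp(1j * np.angle(proj))  # keep the target magnitude
    return librosa.istft(S, hop_length=hop, window="hann")
```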
### _Loss functions_
The generator model is trained by mixing the GAN objective with a traditional pixel-wise loss, between the generated reconstruction of the source spectrogram and the expected target spectrogram.
Differently from the original paper, we have found it more beneficial to use loss functions related to the perceptual quality of the audio signal: log-STFT magnitude loss \((\mathcal{L}_{mag})\) and Spectral Convergence loss \((\mathcal{L}_{sc})\), defined as follows:
\[\mathcal{L}_{mag}\left(S,\tilde{S}\right)=\frac{\sum_{t,f}\lvert\mathrm{log} \lvert S_{t,f}\rvert-\mathrm{log}\lvert\tilde{S}_{t,f}\rvert\rvert}{T\cdot N} \tag{2}\]
\[\mathcal{L}_{sc}\left(S,\tilde{S}\right)=\frac{\sqrt{\sum_{t,f}\left(\lvert S_ {t,f}\rvert-\lvert\tilde{S}_{t,f}\rvert\right)^{2}}}{\sqrt{\sum_{t,f}\lvert S _{t,f}\rvert^{2}}} \tag{3}\]
where \(\lvert S_{t,f}\rvert\) and \(\lvert\tilde{S}_{t,f}\rvert\) represent the STFT magnitude vector of \(s\) and \(\tilde{s}\) respectively, at time \(t\), while \(T\) and \(N\) denote the number of time bins and frequency bins of a frame.
As outlined in [26], \(\mathcal{L}_{sc}\) highly emphasizes large spectral components, which helps especially in early phases of training, while \(\mathcal{L}_{mag}\) accurately fits small amplitude variations, which tends to be more important towards the later phases of training.
The goal of the adversarial loss is to drive the generator model to output T-F representations that are plausible in the target domain, whereas the spectral losses regularize the generator model to output spectrograms that are a plausible translation of the source context. The combination of the adversarial loss and the spectral losses is controlled by the hyperparameters \(\lambda_{1}\) and \(\lambda_{2}\), both set to 250, since it has been observed that the spectral loss is more important for reconstruction than the adversarial one.
\[\mathcal{L}=\mathcal{L}_{cGAN}+\lambda_{1}\mathcal{L}_{mag}+\lambda_{2} \mathcal{L}_{sc} \tag{4}\]
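For concreteness, here is a minimal PyTorch sketch of Eqs. (2)-(4) operating on a single pair of magnitude spectrograms; the log-stabilizing \(\epsilon\) and the scalar adversarial term are assumptions rather than the authors' exact implementation.

```python
import torch

def spectral_convergence(S, S_hat):
    # Eq. (3): Frobenius-norm distance, normalized by the reference energy
    return torch.norm(S - S_hat, p="fro") / torch.norm(S, p="fro")

def log_stft_magnitude(S, S_hat, eps=1e-7):
    # Eq. (2): mean absolute error between log magnitudes
    return torch.mean(torch.abs(torch.log(S + eps) - torch.log(S_hat + eps)))

def generator_loss(S, S_hat, adv_loss, lam1=250.0, lam2=250.0):
    # Eq. (4): adversarial term plus weighted spectral terms
    return adv_loss + lam1 * log_stft_magnitude(S, S_hat) \
                    + lam2 * spectral_convergence(S, S_hat)
```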
The discriminator is trained in a standalone manner, as in a traditional GAN, by minimizing the negative log-likelihood of correctly identifying real and fake images, conditioned on the lossy input spectrogram \(\tilde{S}\), which is concatenated with either \(G(\tilde{S})\) (fake) or the clean spectrogram \(S\) (real) to form the input of \(D\).
We followed a common practice in training generative networks [27], which consists of balancing the training dynamics by performing \(n_{G}\) generator weight updates for every update of \(D\). We used \(n_{G}=10\).
The models were trained for 50 epochs, following an early-stopping policy based on the spectral losses observed on the validation set. We used the Adam [28] optimizer with a learning rate of 0.0002 for both the generator and the discriminator, and a batch size of 8.
## IV Datasets
We used the VCTK (Centre for Speech Technology Voice Cloning Toolkit) corpus [29] to simulate loss traces for training and evaluating the speech PLC model.
VCTK contains about 44 hours of clean speech from 109 English speakers (47 male, 62 female) with different accents. To comply with the protocol of the competing methods, we downsampled the audio to 16 kHz, trimmed leading and trailing silence, and split the data into three subsets: train, validation, and test, the latter containing 5 speakers held out from the train and validation sets. We assumed that lost packets have durations that are multiples of 20 ms and simulated them by zeroing samples of the clean waveform. Finally, we limited the maximum gap length to 120 ms, equivalent to 6 consecutive packets. Fig. 3 shows the distribution of lost gaps obtained by injecting packet losses at rates in the range 10%-40%.
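A simple way to generate such loss traces is to drop whole 20 ms packets (320 samples at 16 kHz) and cap runs at 6 consecutive packets. The i.i.d. Bernoulli drop model below is an assumption, as the paper only specifies the rates and the 120 ms cap.

```python
import numpy as np

def simulate_packet_loss(wav, sr=16000, pkt_ms=20, loss_rate=0.2,
                         max_gap=6, rng=None):
    """Zero out whole 20 ms packets, capping gaps at `max_gap` packets."""
    rng = rng or np.random.default_rng()
    pkt = sr * pkt_ms // 1000           # 320 samples per packet at 16 kHz
    n_pkts = len(wav) // pkt
    lossy, run = wav.copy(), 0
    for i in range(n_pkts):
        if rng.random() < loss_rate and run < max_gap:
            lossy[i * pkt:(i + 1) * pkt] = 0.0
            run += 1                    # extend the current gap
        else:
            run = 0                     # packet kept; gap ends
    return lossy
```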
## V Results and comparisons
The proposed PLC method has been compared with three algorithmic solutions, represented by the general-purpose codecs Opus [30], WebRTC [31], and Enhanced Voice Services (EVS) [32], and against four state-of-the-art deep PLC methods: the wave-to-wave generative adversarial network (PLCNet) [33], the mel-to-wave non-autoregressive adversarial auto-encoder (PLAAE) [34], the wave-to-wave adaptive recurrent neural network (RNN) [9], and the time-frequency hybrid generative adversarial network (TFGAN) [12].
Fig. 1: The proposed framework is composed of the U-Net for spectrogram inpainting. Deep feature loss for training the U-Net is obtained by ensembling the discriminator loss (binary cross-entropy between patches), along with the spectral distances (\(\mathcal{L}_{mag}\) and \(\mathcal{L}_{sc}\)), between the representations of the recovered and the actual STFT log-magnitudes.
In addition, the evaluation metrics obtained by simply zero-filling the lost gaps are reported as a baseline.
We evaluated the performance of the proposed generative inpainting method in terms of Wide-Band Perceptual Evaluation of Speech Quality (PESQ) [35] and Short-Time Objective Intelligibility (STOI) [36]. The implementations used in this paper are from [37] for PESQ and from [38] for STOI.
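Both metrics are available as open-source Python packages (`pesq` and `pystoi`); whether these correspond exactly to the implementations in [37, 38] is an assumption. The inputs are 1-D float arrays at the same sample rate.

```python
from pesq import pesq      # pip install pesq
from pystoi import stoi    # pip install pystoi

def evaluate(clean, concealed, sr=16000):
    """Wide-band PESQ and STOI for a reference/degraded waveform pair."""
    return {
        "pesq_wb": pesq(sr, clean, concealed, "wb"),
        "stoi": stoi(clean, concealed, sr, extended=False),
    }
```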
Table I shows the experimental results for PESQ and STOI under different packet loss rates, compared with the PLCNet method. It can be seen that the proposed model achieves a significant performance improvement that grows as the loss rate increases, showing that it also copes better with large gaps of adjacent lost packets. The improvement is most notable in the PESQ scores, ranging from +6.0% (10% loss rate) to +27.5% (40% loss rate). STOI shows smaller gains, visible only at higher loss rates: +2.3% (30% loss rate) and +7.8% (40% loss rate).
Table II summarizes the results of the proposed method against all the competing approaches. Values represent the average PESQ and STOI scores over all investigated packet loss rates. Compared with the best-performing network among previous state-of-the-art systems (PLCNet), bin2bin improves PESQ by 15.3% and STOI by 2.4%, while, in comparison with the best codec-based concealment (EVS), the improvement rises to 43.9% for PESQ and 12.8% for STOI.
Figure 2 shows the qualitative results of a concealed 120 ms wide gap within a test sample. This represents the worst-case scenario, in terms of the extent of lost fragments, that the network is trained to face.
In addition, we timed the forward execution of the bin2bin inpainting process in both a CPU environment (Intel Core i7-6850K) and a GPU environment (Nvidia Titan Xp), obtaining real-time (RT) factor values of 0.17 and 0.11, respectively.
## VI Conclusions
In this paper, we proposed an end-to-end pipeline for spectrogram inpainting and audio concealment using a cGAN-based architecture, inspired by the popular pix2pix framework. We combined the classical discriminative loss with a linear combination of two loss functions that are correlated with the perceptual quality of speech. In addition, we adapted the receptive field of the PatchGAN discriminator and used a custom initialization of the Griffin-Lim algorithm to speed up post-processing. We demonstrated experimentally that the proposed method is capable of simultaneously identifying and recovering missing parts, outperforming the state-of-the-art DNN method by +15.3% on PESQ and +2.4% on STOI. Finally, the inference time evaluation suggests that this approach can be integrated into a real-time application, even with a mid-range hardware setting.
As future developments, we plan to extend the generator to directly process complex-valued spectrograms, in order to incorporate phase reconstruction directly into the generative model.
|
2306.15538 | DataCI: A Platform for Data-Centric AI on Streaming Data | We introduce DataCI, a comprehensive open-source platform designed
specifically for data-centric AI in dynamic streaming data settings. DataCI
provides 1) an infrastructure with rich APIs for seamless streaming dataset
management, data-centric pipeline development and evaluation on streaming
scenarios, 2) a carefully designed versioning control function to track the
pipeline lineage, and 3) an intuitive graphical interface for a better
interactive user experience. Preliminary studies and demonstrations attest to
the ease of use and effectiveness of DataCI, highlighting its potential to
revolutionize the practice of data-centric AI in streaming data contexts. | Huaizheng Zhang, Yizheng Huang, Yuanming Li | 2023-06-27T15:07:20Z | http://arxiv.org/abs/2306.15538v2 | # DataCI: A Platform for Data-Centric AI on Streaming Data
###### Abstract
We introduce DataCI, a comprehensive open-source platform designed specifically for data-centric AI in dynamic streaming data settings. DataCI provides 1) an infrastructure with rich APIs for seamless streaming dataset management, data-centric pipeline development and evaluation on streaming scenarios, 2) a carefully designed versioning control function to track the pipeline lineage, and 3) an intuitive graphical interface for a better interactive user experience. Preliminary studies and demonstrations attest to the ease of use and effectiveness of DataCI, highlighting its potential to revolutionize the practice of data-centric AI in streaming data contexts.
environment management, speeding up the development of a DataCI pipeline. Furthermore, a **versioning control** function is introduced to track the pipeline lineage.
**Data-centric Function Zoo.** Unlike the model hubs from HuggingFace (Wolf et al., 2020) and PyTorch (Paszke et al., 2017), this module stores data processing methods such as data selection, data augmentation, and tricks applied in specific scenarios (e.g., prompting). Users can share and reuse these functions for pipeline building.
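As an illustration, such a zoo could expose versioned, reusable processing functions along the following lines; `register` and `FUNCTION_ZOO` are hypothetical names, not DataCI's actual API.

```python
from typing import Callable, Dict

FUNCTION_ZOO: Dict[str, Callable] = {}

def register(name: str, version: str) -> Callable:
    """Store a data-centric function under a versioned key for sharing/reuse."""
    def wrap(fn: Callable) -> Callable:
        FUNCTION_ZOO[f"{name}@{version}"] = fn
        return fn
    return wrap

@register("lowercase_augment", "v1")
def lowercase_augment(example: dict) -> dict:
    # a toy data-augmentation trick shared through the zoo
    return {**example, "text": example["text"].lower()}

print(list(FUNCTION_ZOO))  # ['lowercase_augment@v1']
```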
**Pipeline Orchestration.** To run a data-centric pipeline, we borrow ideas from pipeline orchestration systems like Airflow. The data flow shown in Figure 3 passes through every pre-defined stage in the pipeline and then triggers the evaluation, which tests the whole pipeline and quantifies the benefit of the new data-centric function. If the pipeline further passes the A/B test, the new function or the new pipeline can be deployed to production.
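A minimal sketch of such a staged pipeline is shown below; the `Pipeline` class and the stage functions are illustrative stand-ins, not DataCI's actual interface.

```python
from typing import Callable, List

class Pipeline:
    """Hypothetical staged pipeline in the Airflow-like style described above."""
    def __init__(self, name: str, version: str):
        self.name, self.version = name, version
        self.stages: List[Callable] = []

    def stage(self, fn: Callable) -> "Pipeline":
        self.stages.append(fn)
        return self

    def run(self, batch: List[dict]) -> List[dict]:
        for fn in self.stages:          # data flows through every pre-defined stage
            batch = [fn(ex) for ex in batch]
        return batch                    # downstream: evaluation, then the A/B test

pipe = (Pipeline("yelp_review", "v6")
        .stage(lambda ex: {**ex, "text": ex["text"].strip()})
        .stage(lambda ex: {**ex, "n_words": len(ex["text"].split())}))
print(pipe.run([{"text": "  great food, slow service  "}]))
```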
**Leaderboard.** Once the evaluation is finished, the results are sent to the leaderboard. We use the {run #No.} as the index and store the pipeline name with its version, the evaluation dataset, the model name with its training hyperparameters, and the metric, for easy reproduction and comparison.
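A leaderboard record could be sketched as follows; the fields mirror the description above, but the `LeaderboardEntry` class itself is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class LeaderboardEntry:
    """Hypothetical leaderboard record keyed by the run number."""
    run_no: int        # index: {run #No.}
    pipeline: str      # pipeline name with version, e.g. "yelp_review@v6"
    eval_dataset: str  # evaluation dataset identifier
    model: str         # model name with training hyperparameters
    metric: float      # headline evaluation metric

entry = LeaderboardEntry(42, "yelp_review@v6", "yelp-2023-06", "bert-base(lr=2e-5)", 0.91)
print(entry)
```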
## 3 Demonstration
We demonstrate our system from two views, namely a user experience investigation and a quantitative analysis.
**User experience.** Users' experience is our top priority. To support it, we equip the DataCI infrastructure with a **playground**, as shown in Figure 2. This playground enables users to try our system in an interactive manner. The playground can be grouped into three sections. First, users select the data from the Streaming Data Sink and a pre-defined pipeline with a specific version from the Pipeline Registry. Second, users can manually launch the pipeline, and our playground will show a DAG (directed acyclic graph) for better visualization. Users can replace one function in the DAG and generate a new pipeline version for a quick controlled experiment. Third, the details of the experiment run are presented in the playground for reference.
**Quantitative analysis.** We simulate a real-world case by using the Yelp dataset and sending it to our system in streaming mode, as shown in Figure 3. Assume that the online pipeline has been upgraded to v5, which is our starting point. We build a pipeline v6, which passed the A/B test and was deployed to online production. We keep using the latest data from the Streaming Data Sink to develop new data-centric pipelines and test them. Only v8 fails, as it cannot outperform v7, as shown in Figure 4. Also, from Figure 4, we find that if we keep using v6 without timely pipeline updates, the online performance drops dramatically.
This is a very preliminary study showing that a system for quickly building and evaluating data-centric pipelines on streaming data is necessary, as data distributions change very frequently in the real world. However, this study also poses many questions that need further exploration. For example, how can we decide the upgrade frequency? Is there a better metric to measure a pipeline's performance in this streaming scenario?
## 4 Conclusion
Data-centric AI exploration has highlighted the shortcomings of existing tools in streaming data environments. To combat this, we introduced DataCI, an open-source platform that bridges these gaps. With its modular features and intuitive interface, DataCI streamlines streaming data management and method deployment. Preliminary studies affirm DataCI's potential to revolutionize data-centric AI in dynamic contexts.
Figure 4: Performance comparison between multiple data-centric pipeline versions.
Figure 3: We use Yelp data to simulate a streaming data scenario for the DataCI evaluation.
Figure 2: An interactive web interface to build, launch, compare, and visualize DataCI pipelines. |
2301.06285 | Entanglement Island and Page Curve in Wedge Holography | Entanglement islands play an essential role in the recent breakthrough in
resolving the black hole information paradox. However, whether entanglement
islands can exist in massless gravity theories is controversial. It is found
that entanglement islands disappear in the initial model of wedge holography
with massless gravity on the brane. As a result, the entanglement entropy of
Hawking radiation becomes a time-independent constant, and there is no Page
curve. In this paper, we recover massless entanglement islands in wedge
holography with suitable DGP gravity or higher derivative gravity on the
branes. We study two typical cases. In the first case, we consider a black hole
on the strong-gravity brane and a bath on the weak-gravity brane. It is similar
to the usual double holography with non-gravitational baths. In the second
case, we discuss two black holes on the two branes with the same gravitational
strength. We recover massless entanglement islands and non-trivial Page curves
in both cases. We also argue that the entanglement island is consistent with
massless gravity. Our results strongly support that entanglement islands can
exist in long-range theories of gravity. | Rong-Xin Miao | 2023-01-16T07:00:01Z | http://arxiv.org/abs/2301.06285v2 | # Entanglement Island and Page Curve in Wedge Holography
###### Abstract
Entanglement islands play an essential role in the recent breakthrough in resolving the black hole information paradox. However, whether entanglement islands can exist in massless gravity theories is controversial. It is found that entanglement islands disappear in the initial model of wedge holography with massless gravity on the brane. As a result, the entanglement entropy of Hawking radiation becomes a time-independent constant, and there is no Page curve. In this paper, we recover massless entanglement islands in wedge holography with suitable DGP gravity or higher derivative gravity on the branes. We study two typical cases. In the first case, we consider a black hole on the strong-gravity brane and a bath on the weak-gravity brane. It is similar to the usual double holography with non-gravitational baths. In the second case, we discuss two black holes on the two branes with the same gravitational strength. We recover massless entanglement islands and non-trivial Page curves in both cases. We also argue that the entanglement island is consistent with massless gravity. Our results strongly support that entanglement islands can exist in long-range theories of gravity.
###### Contents
* 1 Introduction
* 2 Wedge holography with DGP terms
* 2.1 Effective action
* 2.2 Mass spectrum
* 2.3 Brane bending mode
* 2.4 Holographic entanglement entropy
* 3 Page curve of case I: one black hole approximately
* 3.1 Island phase
* 3.2 No-Island phase
* 4 Page curve of case II: two black holes
* 5 Discussions on massless-island puzzle
* 6 Higher derivative gravity on branes
* 7 Conclusions and Discussions
* A Numerical calculation for the no-island phase
## 1 Introduction
Recently, there has been a significant breakthrough toward resolving the black hole information paradox [1], where the entanglement islands play a critical role [2, 3, 4]. See [5] for a good review. For simplicity, one considers the Hawking radiation emitted into a non-gravitational bath. This can be naturally realized in doubly holographic models such as Karch-Randall
(KR) braneworld [6] and AdS/BCFT [7]. Let us take the doubly holographic black-string model [8] as an example
\[ds^{2}=dr^{2}+\cosh^{2}(r)\frac{\frac{dz^{2}}{f(z)}-f(z)dt^{2}+\sum_{i=1}^{d-2}dy _{i}^{2}}{z^{2}}, \tag{1}\]
where \(f(z)=1-z^{d-1}\) with the horizon at \(z=1\), \(r\) denotes the distance to the brane, the Karch-Randall (KR) brane \(Q\) is located at \(r=\rho\), and the AdS boundary \(M\) is at \(r=-\infty\). See Fig.1 for the geometry, where a gravitational black hole lives on the KR brane \(Q\), and a non-gravitational black hole (bath) lives on the AdS boundary \(M\)1. One imposes the transparent boundary condition on the defect \(\Sigma\) so that Hawking radiation on \(Q\) can flow into the bath on \(M\). It is proposed that one should use the following island rule to calculate the entanglement entropy of the Hawking radiation R
Footnote 1: Note that there is a black hole on the AdS boundary for the black string, which is different from the usual double holography with AdS-Schwarzschild-like black holes in bulk.
\[S_{\rm EE}({\rm R})=\min\Bigl{\{}{\rm ext}\Bigl{(}S_{\rm QFT}({\rm R}\cup{ \rm I})+\frac{A(\partial{\rm I})}{4\hat{G}_{N}}\Bigr{)}\Bigr{\}}, \tag{2}\]
where one adjusts the island region I to minimize the above generalized entropy [9, 10]. It is believed that one can extract information on the island from the radiation region R, although
Figure 1: The black-string geometry and its interpretation in black hole information paradox. \(Q\) is the KR brane with a gravitational black hole, and \(M\) is the AdS boundary with a non-gravitational black hole (bath). I and \(\bar{\rm I}\) denotes the island region (purple line) and its complement (black line) on the brane \(Q\), R and \(\bar{\rm R}\) denotes the radiation region (red line) and its complement (black line) on the AdS boundary \(M\), \(\Sigma\) is the defect (blue point) on the corner. Note that the island region I and radiation region R envelop the black-hole horizon on \(Q\) and \(M\), respectively. For simplicity, we only show the regions outside the horizon. In bulk, the dotted line, blue, and orange lines indicate the horizon, RT surfaces in the island phase, and Hartman-Maldacena (HM) surface in the no-island phase at \(t=0\), respectively.
they are disconnected in the lower-dimensional system \(Q\cup M\). Interestingly, the entanglement entropy of QFT can decrease by adding a disconnected region, i.e., \(S_{\rm QFT}({\rm R}\cup{\rm I})<S_{\rm QFT}({\rm R})\). This quantum property is important in reducing the entropy and recovering the Page curve. So far, exact derivations of the Page curve are limited to Jackiw-Teitelboim gravity in two dimensions, where there are no gravitons. In higher dimensions, all reliable discussions focus on doubly holographic models. See [11, 12, 13, 14] for examples. See also [15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 8, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41] for some recent works on entanglement islands and Page curve.
Unfortunately, the gravity on the brane is massive in the usual double holography such as the Karch-Randall (KR) braneworld [6] and AdS/BCFT [7]2. Physically, that is because a gravitational system on the brane \(Q\) is coupled with a non-gravitational system on the AdS boundary \(M\). As a result, general covariance breaks down, generating a mass for the graviton. Technically, that is because one imposes a Neumann boundary condition (NBC) on the brane \(Q\) while imposing a Dirichlet boundary condition (DBC) on the AdS boundary \(M\). However, according to [46], massless gravity appears only if one imposes NBCs on both boundaries \(Q\) and \(M\). Besides, the massless mode is non-normalizable since the AdS boundary \(M\) is located at infinity.
Footnote 2: See also [42, 43, 44, 45] for other proposals of AdS/BCFT with various boundary conditions.
Naturally, we get normalizable massless gravity if we set \(M\) at a finite place as \(Q\) and impose both NBCs on the two boundaries. This deformed double holography is called wedge holography [47, 48]3. See Fig. 2 for the geometry and [49] for its generalization to codim-n defects. Wedge holography proposes that the classical gravity in the \((d+1)\)-dimensional bulk \(W\) is dual to "quantum gravity" on the \(d\)-dimensional branes \(Q=Q_{1}\cup Q_{2}\) and is dual to the conformal field theory (CFT) on the \((d-1)\)-dimensional corner \(\Sigma\). Thus, it is also called codim-2 holography. In wedge holography, the effective theory on the branes is a CFT plus a ghost-free higher derivative gravity, or an equivalent multi-metric gravity, which behaves like Einstein gravity in many aspects [46]. For example, they yield the same holographic Weyl anomaly and the first law of entanglement entropy. Besides, all of the solutions to Einstein gravity are also solutions to the effective higher derivative gravity on the branes.
Footnote 3: The original motivation of wedge holography is not to obtain massless gravity. The existence of massless gravity in wedge holography is found in [46]. See also [50]
Unfortunately, although we have massless gravity in wedge holography, the entanglement island disappears [50, 51]. Let us explain how this happens in Fig. 2. According to [50], since both branes are gravitating in wedge holography, one should adjust both the radiation region R and the island region I to minimize the entanglement entropy of Hawking radiation. Moreover, from the viewpoint of the bulk, since the RT surface is minimal, it is natural to adjust its intersections \(\partial{\rm R}\) and \(\partial{\rm I}\) on the two branes to minimize its area. Following this approach, the RT surface (blue line) in the island phase coincides with the horizon (dotted line) [50]. As a result, the island and radiation regions I and R of Fig. 1 disappear, and the entanglement entropy of radiation emitted into the bath becomes a time-independent constant [50]. Note that the island region (purple line) envelops the black-hole horizon on the brane \(Q_{2}\), and only
the region outside the horizon disappears. See also the Penrose diagram in Fig.(3) (right), which shows that the island shrinks into a point in wedge holography. In this sense, we say that the entanglement island disappears in wedge holography 4. Inspired by the above observation, it is conjectured that the entanglement island can exist only in massive gravity theories [51, 52]. They argue that the entanglement island is inconsistent with massless gravity obeying Gauss's law. There are controversies on this conjecture [24, 25]. According to [5], it is natural that the island mechanism works for massless gravity. Interestingly, [53] finds that the absence-of-island issue can be ameliorated in the large D limit.
Footnote 4: The “island” is a broad concept in the literature. Here the island disappears in the sense of the Penrose diagram as shown in Fig.(3) (right). On the other hand, if one defines the island as RT surfaces ending on the branes [13], of course, there is an island in that sense. The critical point here is that the entanglement entropy of Hawking radiation is a time-independent constant, and there is no Page curve in wedge holography when there is massless gravity on the branes [50, 52].
It is a significant question whether there are entanglement islands in massless gravity. From a practical point of view, the gravity in our universe is massless. Strict experimental limits on the graviton mass have also been set based on gravitational waves, the Yukawa potential, dispersion relations, and modified gravity theories [54, 55]. Thus, addressing the information paradox in the real world is more crucial and necessary than in a toy model of massive gravity. From a theoretical point of view, massive gravity suffers from causality problems. Although a ghost-free theory can be constructed [56], massive gravity admits superluminal shock wave solutions and thus generally violates causality [57]. It is not satisfactory if the island rule applies only to an acausal theory. This paper gives a positive answer to the above question. We find that massless entanglement islands can exist in wedge holography with Dvali-Gabadadze-Porrati (DGP) gravity [58] or higher derivative gravity on the branes. This helps to clarify the theoretical controversy and strongly implies that the entanglement island is consistent with massless gravity theories.

Figure 2: The geometry of wedge holography without DGP terms and its interpretation in the black hole information paradox. \(W\) is the bulk wedge space, \(Q_{1}\) is the weak-gravity "bath" brane and \(Q_{2}\) is the strong-gravity brane, \(\Sigma\) is the defect on the corner of the wedge, and \(H\) denotes the horizon in the bulk. According to [50], since both branes are gravitating, one should adjust both the radiation region R (red line) and the island region I (purple line) to minimize the entanglement entropy in the island phase. Remarkably, the corresponding RT surface \(\Gamma\) (blue line) coincides with the horizon \(H\) (black dotted line). As a result, the potential island and radiation regions I and R of Fig. 1 disappear. Note that the island region envelops the black-hole horizon on the brane \(Q_{2}\), and only the region outside the horizon disappears.
This paper investigates many aspects of wedge holography with DGP terms. We find that there is normalizable massless gravity on the branes. By analyzing effective Newton's constants, brane bending modes, and holographic entanglement entropy, we obtain several lower bounds for the DGP parameters. Interestingly, the DGP parameters can be negative. We discuss the Page curve for eternal two-side black holes in this paper. For simplicity, we show only one side of the systems in most figures (Fig.1, Fig.2, Fig.7, Fig.13). One can double these figures for the two-side geometry, as in Figure 1 of [8]. We discuss two different situations. In case I, shown in Fig.7, we approximately take the black hole on the weak-gravity brane \(Q_{1}\) as the "bath" and focus on the Hawking radiation of the black hole on the strong-gravity brane \(Q_{2}\). Following [50], the primary purpose of this approximation is to mimic the usual case with a non-gravitating bath. We call it "case I: one black hole approximately" in sect. 3. One may ask what happens if we take the two black holes on \(Q_{1}\cup Q_{2}\) seriously. This motivates us to further consider case II, shown in Fig.13. The two branes have equal gravitational strength in case II. Thus, there is no natural way to choose which black hole is the "bath" black hole, and we name it "case II: two black holes" in sect. 4. We recover massless entanglement islands and Page curves in both cases. We argue that entanglement islands can consistently exist in brane-world models of massless gravity. Finally, we generalize the results to higher derivative gravity on the branes.

Figure 3: Left: Penrose diagram on brane \(Q\) in the usual double holography (Fig.1). Right: Penrose diagram on the strong-gravity brane \(Q_{2}\) in wedge holography (Fig.2). The black-dotted line, green-dotted line, and the purple line or point denote the horizon, singularity and island, respectively. It shows that the island shrinks into a point in the Penrose diagram of wedge holography. In this sense, one says that the entanglement island disappears in wedge holography.
The paper is organized as follows. In section 2, we formulate wedge holography with DGP gravity on the brane. Then, we show massless gravity on the branes and get several lower bounds for the DGP parameter. Section 3 discusses the entanglement island and the Page curve in case I: one strong-gravity black hole coupled with a weak-gravity bath black hole. Section 4 generalizes the discussions to case II: two black holes associated with two strong-gravity baths. Section 5 discusses the possible resolutions to the puzzle of the massless island raised by [51]. Section 6 generalizes the discussions to higher derivative gravity on the branes. Finally, we conclude with some open problems in section 7.
Note that parts of the results have been shown in the letter [59]. We give more details and new developments in this paper. The new results include the mass spectrum, brane bending mode, holographic entanglement entropy, details for calculations of Page curves, an inspiring analog of the island puzzle and its possible resolutions in AdS/CFT, and generalizations to higher derivative gravity on the branes.
## 2 Wedge holography with DGP terms
This section investigates the wedge holography with DGP gravity on the brane. First, we work out the effective action for one novel class of solutions and verify that there is normalizable massless gravity on the brane. We get a lower bound of the DGP parameter to have a positive effective Newton's constant. Second, we find the mass spectrum on the brane obeys the Breitenlohner-Freedman bound \(m^{2}\geq-(d-1)^{2}/4\), so the system is tachyon-free. Third, we derive the effective action of brane bending modes, which yields another lower bound of the DGP parameter. Finally, we discuss the holographic entanglement entropy and get an additional lower bound of the DGP parameter.
Let us recall the geometry of wedge holography shown in Fig.2, where \(W\) is the bulk wedge space, \(Q=Q_{1}\cup Q_{2}\) denote two end-of-the-world branes, \(\Sigma\) labels the corner of the wedge, where the defect lives. Let us take a typical metric to illustrate the geometry
\[ds^{2}=dr^{2}+\cosh^{2}(r)\frac{dz^{2}-dt^{2}+\sum_{i=1}^{d-2}dy_{i}^{2}}{z^{2} },\ -\rho_{1}\leq r\leq\rho_{2}, \tag{3}\]
where the left brane \(Q_{1}\), the right brane \(Q_{2}\), and the defect \(\Sigma\) locate at \(r=-\rho_{1}\), \(r=\rho_{2}\) and \(z=0\), respectively. Wedge holography has three equivalent descriptions:
1. a classical gravity coupled with two branes in the \((d+1)\)-dimensional bulk \(W\),
2. a "quantum gravity" coupled with CFTs on the \(d\)-dimensional branes \(Q=Q_{1}\cup Q_{2}\),
3. a CFT on the \((d-1)\)-dimensional defect \(\Sigma\).
Now we quickly formulate wedge holography with DGP gravity on the branes. The action is given by
\[I=\int_{W}dx^{d+1}\sqrt{-g}\Big{(}R_{W}+d(d-1)\Big{)}+2\int_{Q}dx^{d}\sqrt{-h_{ Q}}(K-T_{a}+\lambda_{a}R_{Q}), \tag{4}\]
where \(R_{W}\) is the Ricci scalar in bulk \(W\), \(K\) is the extrinsic curvature, \(h_{Q\ ij}\) and \(R_{Q}\) are the induced metric and the intrinsic Ricci scalar (DGP term) on the branes \(Q=Q_{1}\cup Q_{2}\), and \(T_{a}\) and \(\lambda_{a}\) with \(a=1,2\) are free parameters. For simplicity, we have set Newton's constant \(16\pi G_{N}=1\) together with the AdS radius \(L=1\). Following [7], we choose Neumann boundary condition (NBC) on \(Q\)
\[K^{ij}-(K-T_{a}+\lambda_{a}R_{Q})h_{Q}^{ij}+2\lambda_{a}R_{Q}^{ij}=0, \tag{5}\]
which yields a massless gravitational mode on the brane [46]. On the other hand, the gravity becomes massive if one imposes a Dirichlet boundary condition (DBC) [44] or a conformal boundary condition (CBC) [45] on one or both of the branes. In general, it is not easy to find exact solutions that satisfy both the Einstein equations in the bulk and the NBC (5) on the boundary.
### Effective action
Fortunately, there is one novel class of exact solutions [48]
\[ds^{2}=dr^{2}+\cosh^{2}(r)h_{ij}(y)dy^{i}dy^{j},\ -\rho_{1}\leq r\leq\rho_{2}, \tag{6}\]
if \(h_{ij}\) obeys Einstein equations on the brane
\[R_{h\ ij}-\frac{R_{h}+(d-1)(d-2)}{2}h_{ij}=0, \tag{7}\]
and the brane tensions are given by
\[T_{a}=(d-1)\tanh(\rho_{a})-\lambda_{a}\frac{(d-1)(d-2)}{\cosh^{2}(\rho_{a})}. \tag{8}\]
Note that \(R_{h}\) of (7) denotes Ricci scalar defined by \(h_{ij}\), which is different from \(R_{Q}\) defined by \(h_{Q\ ij}=\cosh(\rho_{a})h_{ij}\). Substituting the metric (6) into the action (4) and integrating \(r\), we get an effective action on each brane
\[I_{a}=\frac{1}{16\pi G_{\text{eff N}}^{a}}\int_{Q_{a}}\sqrt{-h}\Big{(}R_{h}+( d-1)(d-2)\Big{)}, \tag{9}\]
where \(R_{h}\) is the Ricci scalar defined by \(h_{ij}\) and \(G^{a}_{\text{eff N}}\) denotes the effective Newton's constant
\[\frac{1}{16\pi G^{a}_{\text{eff N}}}=2\lambda_{a}\cosh^{d-2}(\rho_{a})+\int_{0}^ {\rho_{a}}\cosh^{d-2}(r)dr. \tag{10}\]
In the above derivations, we have used (8), \(K=d\tanh\rho_{a}\), \(h_{Q\ ij}=\cosh^{2}(\rho_{a})h_{ij}\), \(R_{Q}=\text{sech}^{2}(\rho_{a})R_{h}\) and
\[R_{W}=R_{h}\text{sech}^{2}(r)-d\left(2+(d-1)\tanh^{2}r\right). \tag{11}\]
From the EOM (7) and effective action (9), it is clear that there is massless gravity on the branes. We require that the effective Newton's constant (10) is positive, which yields a lower bound on the DGP parameter
\[\lambda_{a}\geq\lambda_{\text{cri}}(\rho_{a})=-\frac{1}{2}\int_{0}^{\rho_{a}} \frac{\cosh^{d-2}(r)}{\cosh^{d-2}(\rho_{a})}dr. \tag{12}\]
We draw \(\lambda_{\text{cri}}(\rho)\) in Fig.4, which shows that \(\lambda_{\text{cri}}\) has a lower bound too
\[\lambda_{\text{cri}}(\rho_{a})\gtrsim\begin{cases}-\frac{1}{2},&\text{for }d=3,\\ -0.300,&\text{for }d=4,\\ -0.236,&\text{for }d=5.\end{cases} \tag{13}\]
The larger the spacetime dimension is, the larger the lower bound is. In the large \(d\) limit, \(\lambda_{\text{cri}}\) approaches zero, i.e., \(\lim_{d\rightarrow\infty}\lambda_{\text{cri}}\to 0\).
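As a numerical check, the bound (12) and the asymptotic values (13) can be reproduced directly from the integral, as in the following sketch.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def lambda_cri(rho: float, d: int) -> float:
    """Critical DGP parameter from eq. (12)."""
    integral, _ = quad(lambda r: np.cosh(r) ** (d - 2), 0.0, rho)
    return -0.5 * integral / np.cosh(rho) ** (d - 2)

for d in (3, 4, 5):
    res = minimize_scalar(lambda rho: lambda_cri(rho, d),
                          bounds=(1e-4, 25.0), method="bounded")
    print(f"d={d}: min lambda_cri ~ {res.fun:.3f}")
# expected: d=3 -> -0.5 (as rho -> infinity), d=4 -> -0.300, d=5 -> -0.236
```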
### Mass spectrum
In this subsection, we study the mass spectrum of gravitons on the branes in wedge holography with DGP terms. We find that the mass spectrum obeys the Breitenlohner-Freedman bound \(m^{2}\geq-(d-1)^{2}/4\). In particular, it includes a massless mode, which agrees with the results of the last subsection. We focus on fixed brane locations in this subsection, which yield \(m^{2}\geq 0\). We leave the discussion of brane bending modes with \(m^{2}=-(d-2)\) to the following subsection.

Figure 4: The lower bound \(\lambda_{\text{cri}}(\rho)\) in various dimensions. The larger the spacetime dimension is, the larger the lower bound is.
We take the following ansatz of the perturbation metric and the embedding function of \(Q\)
\[ds^{2}=dr^{2}+\cosh^{2}(r)\left(h^{(0)}_{ij}(y)+\epsilon H(r)h^ {(1)}_{ij}(y)\right)dy^{i}dy^{j}+O(\epsilon^{2}), \tag{14}\] \[Q_{1}:\ r=-\rho_{1}+O(\epsilon^{2}),\ \ \ \ \ Q_{2}:r=\rho_{2}+O( \epsilon^{2}), \tag{15}\]
where \(h^{(0)}_{ij}(y)\) is the AdS metric with a unit radius and \(h^{(1)}_{ij}(y)\) denotes the perturbation, \(\epsilon\) denotes the order of perturbations. In terms of bulk metric perturbations, we have
\[\delta g_{r\mu}=0,\ \delta g_{ij}=\cosh^{2}(r)H(r)\bar{h}^{(1)}_{ij}(y). \tag{16}\]
Imposing the transverse traceless gauge
\[\nabla^{\mu}\delta g_{\mu\nu}=0,\ \ \ g^{\mu\nu}\delta g_{\mu\nu}=0, \tag{17}\]
we get
\[D^{i}h^{(1)}_{ij}=0,\ \ \ h^{(0)ij}h^{(1)}_{ij}=0, \tag{18}\]
where \(\nabla_{\mu}\) and \(D_{i}\) are the covariant derivatives with respect to \(g_{\mu\nu}\) and \(h^{(0)}_{ij}\), respectively. Substituting (14) and (18) into Einstein equations and separating variables, we obtain
\[\left(\Box+2-m^{2}\right)h^{(1)}_{ij}(y)=0, \tag{19}\] \[\cosh^{2}(r)H^{\prime\prime}(r)+d\sinh(r)\cosh(r)H^{\prime}(r)+m^ {2}H(r)=0, \tag{20}\]
where \(m\) denotes the mass of gravitons and \(\Box=D_{k}D^{k}\) is the d'Alembert operator defined by \(h^{(0)}_{ij}\). Solving (20), we derive
\[H(r)=\mbox{sech}^{\frac{d}{2}}(r)\left(c_{1}P^{\frac{d}{2}}_{ \lambda_{g}}(\tanh r)+c_{2}Q^{\frac{d}{2}}_{\lambda_{g}}(\tanh r)\right), \tag{21}\]
where \(P^{\frac{d}{2}}_{\lambda_{g}}\) and \(Q^{\frac{d}{2}}_{\lambda_{g}}\) are the Legendre polynomials, \(c_{1}\) and \(c_{2}\) are integral constants and \(\lambda_{g}\) is given by
\[\lambda_{g}=\frac{1}{2}\left(\sqrt{(d-1)^{2}+4m^{2}}-1\right), \tag{22}\]
which yields the correct Breitenlohner-Freedman bound of massive gravity in AdS\({}_{d}\)
\[m^{2}\geq-(\frac{d-1}{2})^{2}. \tag{23}\]
By using EOM (19), we can simplify the NBC (5) as
\[\cosh^{2}\left(\rho_{1}\right)H^{\prime}\left(-\rho_{1}\right)+2 \lambda_{1}m^{2}H\left(-\rho_{1}\right)=0, \tag{24}\] \[\cosh^{2}\left(\rho_{2}\right)H^{\prime}\left(\rho_{2}\right)-2 \lambda_{2}m^{2}H\left(\rho_{2}\right)=0. \tag{25}\]
Substituting the solution (21) into (24,25), we derive a constraint for the mass
\[m^{2}\big{(}M_{00}+M_{10}\lambda_{1}+M_{01}\lambda_{2}+M_{11}\lambda_{1} \lambda_{2}\big{)}=0, \tag{26}\]
with
\[M_{00} =\sqrt{1-x_{1}^{2}}\sqrt{1-x_{2}^{2}}\left(P_{\lambda_{g}}^{ \frac{d}{2}-1}\left(x_{2}\right)Q_{\lambda_{g}}^{\frac{d}{2}-1}\left(-x_{1} \right)-P_{\lambda_{g}}^{\frac{d}{2}-1}\left(-x_{1}\right)Q_{\lambda_{g}}^{ \frac{d}{2}-1}\left(x_{2}\right)\right), \tag{27}\] \[M_{10} =2\left(x_{1}^{2}-1\right)\sqrt{1-x_{2}^{2}}\left(P_{\lambda_{g} }^{\frac{d}{2}}\left(-x_{1}\right)Q_{\lambda_{g}}^{\frac{d}{2}-1}\left(x_{2} \right)-P_{\lambda_{g}}^{\frac{d}{2}-1}\left(x_{2}\right)Q_{\lambda_{g}}^{ \frac{d}{2}}\left(-x_{1}\right)\right),\] (28) \[M_{01} =2\sqrt{1-x_{1}^{2}}\left(x_{2}^{2}-1\right)\left(P_{\lambda_{g} }^{\frac{d}{2}}\left(x_{2}\right)Q_{\lambda_{g}}^{\frac{d}{2}-1}\left(-x_{1} \right)-P_{\lambda_{g}}^{\frac{d}{2}-1}\left(-x_{1}\right)Q_{\lambda_{g}}^{ \frac{d}{2}}\left(x_{2}\right)\right),\] (29) \[M_{11} =-4\left(x_{1}^{2}-1\right)\left(x_{2}^{2}-1\right)\left(P_{ \lambda_{g}}^{\frac{d}{2}}\left(x_{2}\right)Q_{\lambda_{g}}^{\frac{d}{2}} \left(-x_{1}\right)-P_{\lambda_{g}}^{\frac{d}{2}}\left(-x_{1}\right)Q_{\lambda _{g}}^{\frac{d}{2}}\left(x_{2}\right)\right), \tag{30}\]
where \(x_{1}=\tanh\rho_{1},x_{2}=\tanh\rho_{2}\) and \(\lambda_{g}\) is given by (22). From (26), we notice a massless mode with \(m^{2}=0\), which agrees with the results of the last subsection. There is an easier way to see that there is a massless mode. Clearly, \(H(r)=1\) and \(m^{2}=0\) are solutions to EOM (20) and BCs (24,25). Furthermore, this massless mode is normalizable
\[\int_{-\rho_{1}}^{\rho_{2}}dr\cosh^{d-2}(r)H(r)^{2}\text{ is finite}. \tag{31}\]
Thus, there is indeed a physical massless gravity on the brane in wedge holography with DGP terms. On the other hand, the massless mode is non-normalizable due to the infinite volume in the usual double holography
\[\int_{-\infty}^{\rho_{2}}dr\cosh^{d-2}(r)H(r)^{2}\rightarrow\infty. \tag{32}\]
Naively, one can check that \(m^{2}=-(d-2)\) is also a solution to (26). However, this is not the case. According to [45], (21) is no longer the general solution for \(m^{2}=-(d-2)\). Instead, one should re-solve the EOM (20) with \(m^{2}=-(d-2)\) to get the general solution. One can check that this solution does not satisfy the NBCs (24,25) at fixed brane positions. Instead, it corresponds to the brane bending modes, which allow the brane positions to change. We will discuss the brane bending modes in the next subsection. To end this subsection, we list the mass spectrum in Table 1 and Table 2 below. Without loss of generality, we take \(\rho_{a}=0.5,\lambda_{a}=0.1\) and \(\rho_{a}=0.5,\lambda_{a}=-0.1\) as examples. Table 1 and Table 2 show that the mass \(m\) and the mass gap \(\Delta m\) become larger for negative \(\lambda_{a}\). Thus, Einstein gravity is a better low-energy approximation to the brane effective theory for negative \(\lambda_{a}\), because the massive modes are harder to excite due to the larger mass gap.
### Brane bending mode
Let us study the brane bending modes [60, 61]. In the last subsection, we focused on the fixed brane locations (15). In general, there are fluctuations of the brane positions
\[Q_{1}:\ r=-\rho_{1}-\epsilon\ \phi_{1}(y),\ \ \ \ \ Q_{2}:r=\rho_{2}-\epsilon\ \phi_{2}(y). \tag{33}\]
We assume that the metric perturbation is still given by (14) with the gauge (18). By using (14, 18, 19,33), we can simplify the NBC (5) as
\[\Big{(}-\frac{1}{2}\cosh^{2}(\rho_{1})H^{\prime}(-\rho_{1})- \lambda_{1}H(-\rho_{1})m^{2}\Big{)}h_{ij}^{(1)}\] \[-\Big{(}D_{i}D_{j}\phi_{1}-(\Box+(1-d))\phi_{1}h_{ij}^{(0)}\Big{)} (1+2(d-2)\lambda_{1}\tanh\rho_{1})=0, \tag{34}\] \[\Big{(}\frac{1}{2}\cosh^{2}(\rho_{2})H^{\prime}(\rho_{2})- \lambda_{2}H(\rho_{2})m^{2}\Big{)}h_{ij}^{(1)}\] \[+\Big{(}D_{i}D_{j}\phi_{2}-(\Box+(1-d))\phi_{2}h_{ij}^{(0)}\Big{)} (1+2(d-2)\lambda_{2}\tanh\rho_{2})=0, \tag{35}\]
where \(\Box\) is the d'Alembert operator defined by \(h_{ij}^{(0)}\). Note that (34,35) agree with (24,25) at fixed brane locations, i.e., \(\phi_{1}=\phi_{2}=0\). Taking the trace of (34, 35) and using \(h^{(1)i}_{\ \ \ i}=0\), we derive
\[(\Box-d)\phi_{a}=0, \tag{36}\]
where \(a\) denotes \(1,2\). The traceless parts of (34, 35) give
\[h_{ij}^{(1)}=\frac{-2\left(2(d-2)\lambda_{1}\tanh\left(\rho_{1} \right)+1\right)}{\cosh^{2}\left(\rho_{1}\right)H^{\prime}\left(-\rho_{1} \right)+2\lambda_{1}m^{2}H\left(-\rho_{1}\right)}\ \Big{(}D_{i}D_{j}-\frac{1}{d}h_{ij}^{(0)}\Box\Big{)}\phi_{1}, \tag{37}\] \[h_{ij}^{(1)}=\frac{2\left(2(d-2)\lambda_{2}\tanh\left(\rho_{2} \right)+1\right)}{2\lambda_{2}m^{2}H\left(\rho_{2}\right)-\cosh^{2}\left(\rho_ {2}\right)H^{\prime}\left(\rho_{2}\right)}\ \Big{(}D_{i}D_{j}-\frac{1}{d}h_{ij}^{(0)}\Box\Big{)}\phi_{2}, \tag{38}\]
| | 1 | 2 | 3 | 4 | 5 |
| --- | --- | --- | --- | --- | --- |
| \(m^{2}\) for \(\lambda_{a}=0.1\) | 0 | 5.124 | 25.011 | 61.667 | 117.415 |
| \(m^{2}\) for \(\lambda_{a}=-0.1\) | 0 | 22.511 | 74.747 | 149.216 | 245.281 |

Table 1: Mass spectrum for \(d=3\)

| | 1 | 2 | 3 | 4 | 5 |
| --- | --- | --- | --- | --- | --- |
| \(m^{2}\) for \(\lambda_{a}=0.1\) | 0 | 4.776 | 24.950 | 61.837 | 117.742 |
| \(m^{2}\) for \(\lambda_{a}=-0.1\) | 0 | 22.721 | 75.211 | 149.764 | 245.866 |

Table 2: Mass spectrum for \(d=4\)
which implies that \(\phi_{1}\) and \(\phi_{2}\) are not independent generally. Substituting either (37) or (38) into (19), we derive
\[m^{2}=-(d-2), \tag{39}\]
where we have used the following formula [61] in the above calculations
\[\Big{(}\Box+2+(d-2)\Big{)}\Big{(}D_{i}D_{j}-\frac{1}{d}h^{(0)}_{ij}\Box\Big{)} \phi_{a}=\Big{(}D_{i}D_{j}-\frac{1}{d}h^{(0)}_{ij}\Box\Big{)}(\Box-d)\phi_{a}=0. \tag{40}\]
Thus, the brane bending modes produce a metric perturbation (37,38) with \(m^{2}=-(d-2)\).
Note that, for the ansatz of bulk metric (14) with gauge (18), the bending modes \(\phi_{1}\) and \(\phi_{2}\) are not independent. One may consider a more general ansatz of the metric perturbation with non-zero \(\delta g_{ri}\) to have independent brane bending modes 5. Or equivalently, one chooses two coordinate patches, the first (second) of which includes only the left (right) brane. In each coordinate patch, the bulk metric is still given by (14). An additional coordinate transformation is needed to relate the metrics in the overlap of these patches. See [62] for more discussions. For simplicity, we focus on the case of one independent brane bending mode in this paper. We discuss the left and right bending modes, respectively. Take the left one as an example. We choose
Footnote 5: Near one brane, we may remove \(\delta g_{ri}\) by suitable coordinate transformations. However, generally, one cannot delete \(\delta g_{ri}\) near both branes.
\[Q_{1}:\ r=-\rho_{1}-\epsilon\ \phi_{1}(y),\ \ \ \ \ Q_{2}:r=\rho_{2}, \tag{41}\]
and impose the BC for \(H(r)\)
\[\frac{-2\left(2(d-2)\lambda_{1}\tanh\left(\rho_{1}\right)+1\right) }{\cosh^{2}\left(\rho_{1}\right)H^{\prime}\left(-\rho_{1}\right)+2\lambda_{1}m ^{2}H\left(-\rho_{1}\right)}=1, \tag{42}\] \[\Big{(}\frac{1}{2}\cosh^{2}(\rho_{2})H^{\prime}(\rho_{2})-\lambda _{2}H(\rho_{2})m^{2}\Big{)}=0. \tag{43}\]
Then we get the metric perturbation (37)
\[h^{(1)}_{ij}=\Big{(}D_{i}D_{j}-\frac{1}{d}h^{(0)}_{ij}\Box\Big{)}\phi_{1} \tag{44}\]
with \(m^{2}=-(d-2)\). Similarly, one can obtain the bending mode for the right brane.
Now let us study the effective action for the brane bending mode. It is more convenient to take another ansatz of the bulk metric instead of (14). By performing suitable coordinate transformations, one can rewrite the metric (14) with \(h^{(1)}_{ij}\sim\Big{(}D_{i}D_{j}-\frac{1}{d}h^{(0)}_{ij}\Box\Big{)}\phi\) into the following form [62]. See also [63].
\[ds^{2}=\Big{(}1+\epsilon H_{1}(r)\phi(y)\Big{)}dr^{2}+\Big{(}1+ \epsilon H_{2}(r)\phi(y)\Big{)}\cosh^{2}(r)h^{(0)}_{ij}dy^{i}dy^{j}, \tag{45}\]
where \(\phi(y)\) denotes the brane bending mode, \(H_{1}(r)\) and \(H_{2}(r)\) are functions to be determined. Compared with (14), the metric (45) has the advantage that it includes less derivatives of \(\phi\). Solving Einstein equations at the linear order, we obtain
\[H_{1}(r)=-c_{1}(d-2){\rm sech}^{d-2}(r),\ H_{2}(r)=c_{1}{\rm sech}^{d-2}(r), \tag{46}\]
and
\[(\Box-d)\phi=0, \tag{47}\]
where \(c_{1}\) is an integral constant. Comparing (47) with (36), we see that \(\phi\) obeys the EOM of the brane bending mode. In fact, \(\phi\) indicates the relative motion of the two branes, which is called the radion [62, 63].
Let us go on to derive the location of the two branes. Substituting the embedding functions (33) into the NBC (5), we solve
\[\phi_{1}=-\frac{c_{1}(d-2)\lambda_{1}{\rm sech}^{d-2}\left(\rho_{1} \right)}{1+2(d-2)\lambda_{1}\tanh\left(\rho_{1}\right)}\phi,\ \ \phi_{2}=\frac{c_{1}(d-2)\lambda_{2}{\rm sech}^{d-2}\left(\rho_{2} \right)}{1+2(d-2)\lambda_{2}\tanh\left(\rho_{2}\right)}\phi. \tag{48}\]
Substituting the bulk metric (45,46) together with the embedding functions of branes (33,48) into the action (4) and integrating along \(r\), we finally obtain the squared action of the radion
\[I_{\phi}=B\ c_{1}^{2}\epsilon^{2}\int dy^{d}\sqrt{-h^{(0)}}\Big{(}-\frac{1}{2} D_{i}\phi D^{i}\phi-\frac{d}{2}\phi^{2}\Big{)}, \tag{49}\]
where \(B\) is given by
\[B=\sum_{a=1}^{2}\Big{(}\ \frac{(d-1)(d-2)}{2}\int_{0}^{\rho_{a}}dr\ {\rm sech}^{d-2}(r)+\frac{(d-1)(d-2){\rm sech}^{d-2}\left(\rho_{a} \right)}{1+2(d-2)\lambda_{a}\tanh\left(\rho_{a}\right)}\lambda_{a}\ \Big{)}. \tag{50}\]
Note that we have dropped some total derivative terms in the above derivations. In particular, the linear action of \(\phi\) is a total derivative, as expected. From the action (49), we can derive the correct EOM of \(\phi\) (47), which can be regarded as a check of our calculations. To have positive kinetic energy, we require that
\[B\geq 0, \tag{51}\]
which imposes another constraint on the parameters \((\rho_{a},\lambda_{a})\). Now we have obtained two constraints (12) and (51) for the parameters of our model.
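As a quick numerical check of the positivity constraint (51), the coefficient (50) can be evaluated directly; the parameter values below are sample choices (they coincide with the example used later in (64)).

```python
import numpy as np
from scipy.integrate import quad

def radion_B(rho, lam, d=4):
    """Kinetic coefficient B of eq. (50); `rho` and `lam` list the two branes."""
    B = 0.0
    for rho_a, lam_a in zip(rho, lam):
        integral, _ = quad(lambda r: np.cosh(r) ** (2 - d), 0.0, rho_a)
        B += 0.5 * (d - 1) * (d - 2) * integral
        B += ((d - 1) * (d - 2) * np.cosh(rho_a) ** (2 - d) * lam_a
              / (1.0 + 2.0 * (d - 2) * lam_a * np.tanh(rho_a)))
    return B

print(radion_B(rho=[0.5, 1.2], lam=[0.0, -0.245829]))  # ~ 1.39 > 0
```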
### Holographic entanglement entropy
In this subsection, we study the holographic entanglement entropy (HEE) [65] for CFTs on the \((d-1)\)-dimensional defect \(\Sigma\) in wedge holography with DGP gravity. We focus on the
vacuum state on the whole defect \(\Sigma\) for simplicity. Since it is a pure state, the HEE is expected to be zero 6, which yields another lower bound on the DGP parameter.
Footnote 6: Note that we are studying regularized finite HEE since the branes locate at a finite place instead of infinity. Similar to Casimir energy, the regularized HEE can be negative in principle. As a result, we can relax the constraint and require that the HEE is bounded from below. Interestingly, this relaxed constraint yields the same lower bound for the DGP parameter as zero HEE.
From the action (4), we read off HEE
\[S_{\rm HEE}=\min\Bigl{\{}\mbox{ext}\Bigl{(}4\pi\int_{\Gamma}dx^{d-1}\sqrt{ \gamma}+8\pi\int_{\partial\Gamma}dx^{d-2}\sqrt{\sigma}\lambda_{a}\Bigr{)} \Bigr{\}}, \tag{52}\]
where \(\Gamma\) denote the RT surface in the bulk, \(\partial\Gamma=\Gamma\cap Q\) is the intersection of the RT surface and the brane \(Q\), \(\gamma\) and \(\sigma\) represent the induced metric on \(\Gamma\) and \(\partial\Gamma\) respectively. Since we are interested in the vacuum state of the defect, we focus on the AdS space (3) in bulk. Substituting the embedding functions \(z=z(r)\) and \(t=\mbox{constant}\) into the AdS metric (3) and entropy formula (52), i.e., \(S_{\rm HEE}=4\pi A\), we get the area functional of RT surface
\[A=\int_{-\rho_{1}}^{\rho_{2}}dr\frac{\cosh^{d-2}(r)}{z(r)^{d-2}}\sqrt{1+\frac {\cosh^{2}(r)z^{\prime}(r)^{2}}{z(r)^{2}}}+\sum_{a=1}^{2}\frac{2\lambda_{a} \cosh^{d-2}(\rho_{a})}{z_{a}^{d-2}}, \tag{53}\]
Figure 5: Various lower bounds of the DGP parameter \(\lambda_{2}\) for \(\rho_{1}=0.5,\lambda_{1}=0,d=4\), i.e., \(\lambda_{2}\geq\lambda_{\rm cri}\). The blue, orange, green, and red curves denote the lower bounds derived from HEE (57), brane bending modes (50), and the effective Newton's constants (10), respectively. Here \(\frac{1}{G}=\frac{1}{G_{1}}+\frac{1}{G_{2}}\), and \(G_{1},G_{2}\) are the effective Newton's constants on the two branes. It shows that the Newton's constant \(G_{2}\) and the HEE impose the strongest constraints for \(\rho_{2}<0.638\) and \(\rho_{2}>0.638\), respectively. In the large tension limit \(\rho_{2}\to\infty\), all lower bounds approach \(\lambda_{\rm cri}\to-1/(2(d-2))\).
where we have set the tangential volume \(V=\int dy^{d-2}=1\), and \(z_{a}=z((-)^{a}\rho_{a})\) denotes the endpoints of the RT surfaces on the branes. Taking variations of (53), we derive the Euler-Lagrange equation
\[z^{2}\cosh(r)\left(d\sinh(r)z^{\prime}+\cosh(r)z^{\prime\prime} \right)+(d-2)z^{3}\] \[+(d-3)z\cosh^{2}(r)\left(z^{\prime}\right)^{2}+(d-1)\sinh(r)\cosh ^{3}(r)\left(z^{\prime}\right)^{3}=0 \tag{54}\]
and NBC on the branes
\[\frac{(-)^{a}z_{a}^{\prime}}{\sqrt{z_{a}^{2}+\cosh^{2}(\rho_{a})z_{a}^{\prime 2 }}}=\frac{2\lambda_{a}(d-2)}{\cosh^{2}(\rho_{a})}. \tag{55}\]
Note that the AdS metric (3) is invariant under the rescale \(z\to cz\). Due to this rescale invariance, if \(z=z_{0}(r)\) is an extremal surface, so does \(z=cz_{0}(r)\). Under the rescale \(z\to cz\), the area functional (53) transforms as \(A\to A/c^{d-2}\). Recall that the RT surface is the extremal surface with minimal area. By choosing \(c\to\infty\), we get the RT surface \(z=cz_{0}(r)\to\infty\) with zero area \(A=A_{0}/c^{d-2}\to 0\), provided \(A_{0}\) is positive. Here \(A_{0}\) denotes the area of the input extremal surface \(z=z_{0}(r)<\infty\). On the other hand, if \(A_{0}\) is negative for sufficiently negative \(\lambda_{a}\), the RT surface is given by choosing \(c\to 0\) so that \(A=A_{0}/c^{d-2}\to-\infty\). To rule out this unusual case with negative infinite entropy, we must impose a lower bound on \(\lambda_{a}\).
For simplicity, we focus on the case with \(\lambda_{1}=0\) and discuss how to derive the lower bound of \(\lambda_{2}\). The approach is as follows. We take an arbitrarily start point \(0<z_{1}=z(-\rho_{1})<\infty\) on
Figure 6: Various lower bounds of the DGP parameter \(\lambda_{2}\) for \(\rho_{1}=0,\lambda_{1}=0,d=4\), i.e., \(\lambda_{2}\geq\lambda_{\rm cri}\). The blue, orange, and green curves denote the lower bounds derived from HEE (57), brane bending modes (50), and effective Newton's constants (10), respectively. It shows that the HEE imposes the strongest lower bound on \(\lambda_{2}\). In the large tension limit \(\rho_{2}\to\infty\), all lower bounds approach \(\lambda_{\rm cri}\to-1/(2(d-2))\).
the left brane \(Q_{1}\), and impose the orthogonal condition \(z_{1}^{\prime}=z^{\prime}(-\rho_{1})=0\); then we solve the EOM (54) to determine the extremal surface \(z=z_{0}(r)\) numerically. By requiring that the corresponding area \(A_{0}\) (53) be non-negative, we obtain a lower bound
\[\lambda_{2}\geq\lambda_{\rm HEE}, \tag{56}\]
where \(\lambda_{\rm HEE}\) is derived from \(A_{0}=0\). Note that \(A_{0}=0\) means that the corresponding extremal surface is the RT surface with minimal area. As a necessary condition, it should satisfy the NBC (55) on the right brane \(Q_{2}\). From (55), we derive
\[\lambda_{\rm HEE}(\rho_{2})=\frac{\cosh^{2}(\rho_{2})z_{2}^{\prime}}{2(d-2) \sqrt{\cosh^{2}(\rho_{2})z_{2}^{\prime 2}+z_{2}^{2}}}, \tag{57}\]
where \(z_{2}=z(\rho_{2})\) is the endpoint on the right brane \(Q_{2}\). Due to the rescale invariance of AdS, any input start point \(z_{1}=z(-\rho_{1})\) gives the same \(\lambda_{\rm HEE}\) (57). In other words, there are infinitely many zero-area RT surfaces, which obey the NBCs on both branes. This is similar to the case of AdS\({}_{3}\) in AdS/BCFT. On the other hand, for \(\lambda_{2}>\lambda_{\rm HEE}\), the RT surface is located only at infinity, i.e., \(z\rightarrow\infty\), and the NBC (55) can be satisfied only at infinity. Please see the blue curves of Fig.5 and Fig.6 for the lower bound \(\lambda_{\rm HEE}(\rho_{2})\). In Fig.5, with \(\rho_{1}=0.5\) and \(d=4\), we notice that Newton's constant \(G_{2}\) imposes the strongest lower bound for \(\rho_{2}<0.638\), while the HEE imposes the strongest lower bound for \(\rho_{2}>0.638\). In Fig. 6, with \(\rho_{1}=0\) and \(d=4\), we find that the HEE always gives the strongest lower bound for \(\lambda_{2}\).
In summary, we have discussed various constraints of the DGP parameters from effective Newton's constants, brane bending modes, and HEE. We find that HEE imposes the strongest lower bound of \(\lambda_{2}\) for sufficiently large \(\rho_{2}\).
## 3 Page curve of case I: one black hole approximately
The above section investigates some aspects of wedge holography with DGP gravity (DGP wedge holography) on the branes. In particular, we find that there is massless gravity on the branes, and we get several constraints (12,51,56) for the parameters \((\rho_{a},\lambda_{a})\). This section studies the Page curve in DGP wedge holography for case I. We focus on the eternal two-side black hole, which is dual to the thermofield double state of CFTs [64]. See Fig. 7 for one side of the system at the time slice \(t=0\). See also Fig. 8 for the Penrose diagram of the two-side black holes on the branes.
Let us focus on the black string in bulk
\[ds^{2}=dr^{2}+\cosh^{2}(r)\frac{\frac{dz^{2}}{f(z)}-f(z)dt^{2}+\sum_{i=1}^{d-2 }dy_{i}^{2}}{z^{2}},\ \ -\rho_{1}\leq r\leq\rho_{2}, \tag{58}\]
where \(f(z)=1-z^{d-1}\), and the weak-gravity brane \(Q_{1}\) and the strong-gravity brane \(Q_{2}\) are located at \(r=-\rho_{1}\) and \(r=\rho_{2}\), respectively. See Fig. 7 for the geometry. Note that there are two black holes on the branes \(Q_{1}\cup Q_{2}\). Following [50], we approximately take the black hole on the weak-gravity brane \(Q_{1}\) as the bath. Since both branes are gravitating, we should adjust both the radiation region R (red line) and the island region I (purple line) to minimize the entanglement entropy of the radiation R [50]. Once the radiation region R is determined by this approach, we can follow the usual procedure to calculate the entanglement entropy of R, which is given by the Hartman-Maldacena (HM) surface (orange curve of Fig.7) at early times and by the RT surface in the island phase (blue curve of Fig.7) at late times.

Figure 7: Geometry for case I: one black hole approximately. \(Q_{1}\) denotes the bath brane with weak gravity, and \(Q_{2}\) is the AdS brane with intense gravity. The red and black lines denote the radiation R and its complement \(\bar{\rm R}\) on the left brane; the purple and black lines denote the island I and its complement \(\bar{\rm I}\) on the right brane. The island region I and radiation region R envelop the black-hole horizon on \(Q_{1}\cup Q_{2}\). For simplicity, we only show the regions outside the horizon. The dotted, blue, and orange lines in the bulk indicate the horizon, the RT surface in the island phase, and the HM surface in the no-island phase at \(t=0\), respectively.
\[A_{\mathrm{I}}=V\int_{-\rho_{1}}^{\rho_{2}}dr\frac{\cosh^{d-2}(r)}{z(r)^{d-2} }\sqrt{1+\frac{\cosh^{2}(r)z^{\prime}(r)^{2}}{z(r)^{2}f(z(r))}}, \tag{59}\]
where I means the island phase, and \(V=\int dy^{d-2}\) denotes the tangential volume. From \(0\leq z(r)\leq 1\) and \(f(z)\geq 0\), we derive an inequality
\[A_{\mathrm{I}}\geq V\int_{-\rho_{1}}^{\rho_{2}}dr\cosh^{d-2}(r)=A_{\mathrm{BH}}, \tag{60}\]
Figure 7: Geometry for case I: one black hole approximately. \(Q_{1}\) denotes the bath brane with weak gravity, and \(Q_{2}\) is the AdS brane with intense gravity. The red and black lines denotes the radiation R and its complement \(\bar{\mathrm{R}}\) on the left brane, the purple and black lines denotes the island I and its complement \(\bar{\mathrm{I}}\) on the right brane. The island region I and radiation region R envelop the black-hole horizon on \(Q_{1}\cup Q_{2}\). For simplicity, we only show the regions outside the horizon. The dotted line, blue, and orange lines in the bulk indicate the horizon, RT surface in the island phase and HM surface in the no-island phase at \(t=0\), respectively.
where \(A_{\rm BH}\) is the horizon area in the bulk. The above inequality shows that the area functional (59) is minimized on the horizon \(z(r)=1\). In other words, as shown in Fig.2, the RT surface (minimal area surface) coincides with the horizon in the island phase. Comparing Fig. 1 with Fig. 2, we notice that the island region I in the usual double holography disappears in wedge holography without DGP terms 7. Now we reproduce the results of [50] with a simpler method.
Footnote 7: We mean the region outside the horizon disappears.
### Island phase
Let us go on to discuss the island phase in wedge holography with DGP gravity on the branes. We find a non-trivial RT surface outside the horizon and, thus a non-trivial island region for suitable DGP terms. See Fig. 7 for the geometry at a time slice.
For the DGP wedge holography, the area functional becomes
\[A_{\rm I}=\frac{S_{\rm HEE}}{4\pi}=V\int_{-\rho_{1}}^{\rho_{2}}dr\frac{\cosh^{ d-2}(r)}{z(r)^{d-2}}\sqrt{1+\frac{\cosh^{2}(r)z^{\prime}(r)^{2}}{z(r)^{2}f(z(r))}}+V \sum_{a=1}^{2}\frac{2\lambda_{a}\cosh^{d-2}(\rho_{a})}{z_{a}^{d-2}}, \tag{61}\]
where \(z_{a}=z((-)^{a}\rho_{a})\) denotes the endpoints of the RT surfaces on the branes. To have a well-defined variational principle for (61), we can impose either Dirichlet boundary condition (DBC) \(\delta z_{a}=0\) or NBC on each brane
\[\frac{(-)^{a}z_{a}^{\prime}}{f(z_{a})\sqrt{1+\frac{\cosh^{2}(\rho_{a})z_{a}^ {\prime 2}}{z_{a}^{2}f(z_{a})}}}=\frac{2\lambda_{a}(d-2)z_{a}}{\cosh^{2}(\rho_{a})}. \tag{62}\]
Figure 8: Penrose diagram on the branes for case I: one two-side black hole approximately. The left and right vertical lines are glued together. The black-dotted, green-dotted, red, and purple lines denote the horizon, singularity, radiation region, and island region, respectively. The black lines linking R and I represent \(\bar{\rm R}\cup\bar{\rm I}\).
Usually, NBC yields a smaller area than DBC since it allows the endpoint of the RT surface to move on the brane. From (61), we derive the Euler-Lagrangian equation
\[\cosh^{2}(r)\left(z^{\prime}\right)^{2}\left((d-1)\sinh(2r)z^{\prime }-(d-5)z^{d}+2(d-3)z\right)\] \[+2(d-2)z\left(z-z^{d}\right)^{2}-2z\left(z^{d}-z\right)\cosh(r) \left(d\sinh(r)z^{\prime}+\cosh(r)z^{\prime\prime}\right)=0, \tag{63}\]
where we abbreviate \(z(r)\) by \(z\) to simplify the above equation.
Note that the first term of the area functional (61) decreases with \(z(r)\), while the second term of (61) increases with \(z_{a}\) for negative DGP parameters \(\lambda_{a}\). These two terms compete and can yield non-trivial RT surfaces outside the horizon, i.e., \(z(r)<1\). As a result, the island region becomes non-zero for negative \(\lambda_{a}\), as shown in Fig. 7.
Let us take the method of [50] to understand why DGP gravity can recover entanglement islands. Without the DGP terms, as a minimal area surface, the RT surface should end orthogonally on both branes, i.e., \(z^{\prime}_{a}=0\). This orthogonal condition rules out all of the extremal surfaces except the horizon [50]. When the DGP gravity appears, the orthogonal condition breaks down [13], i.e., \(z^{\prime}_{a}\neq 0\). See the NBC (62), which gives \(z^{\prime}_{a}\sim\lambda_{a}\neq 0\). As a result, the no-go theorem based on \(z^{\prime}_{a}=0\) disappears. That is why there can be non-trivial RT surfaces outside the horizon or, equivalently, non-vanishing entanglement islands.
Now we show how to construct the RT surface outside the horizon exactly. To do so, we turn the logic around. Suppose we have solved a family of extremal surfaces outside the horizon from the EOM (63). For any such extremal surface, we can derive \(z_{a},z^{\prime}_{a}\) on the branes and obtain \(\lambda_{a}\) from the NBC (62). Let us return to our problem. For the DGP parameters \(\lambda_{a}\) fixed in this way, the RT surface is just the input extremal surface outside the horizon, because it satisfies both the Euler-Lagrangian equation (63) and the NBC (62). We should further check that the extremal surface is minimal instead of maximal. As shown below, we can always do so by choosing suitable parameters. This finishes the construction of entanglement islands in wedge holography with DGP gravity on the branes.
Let us study an exact example. Without losing generality, we choose the parameters
\[\rho_{1}=0.5,\lambda_{1}=0,\rho_{2}=1.2,\lambda_{2}\approx-0.246,\ d=4,\ V=1, \tag{64}\]
which obey the constraints from Newton's constants (12)
\[0<G_{\rm eff\ N}^{1}\approx 0.037<G_{\rm eff\ N}^{2}\approx 0.056, \tag{65}\]
brane bending modes (51) and HEE (56)
\[B\approx 1.391>0,\ \ \lambda_{2}>\lambda_{\rm HEE}\approx-0.250. \tag{66}\]
Note that we show only three significant digits after the decimal point in this paper. In the numerical calculations, we keep more digits. For instance, we have \(\lambda_{2}\approx-0.245829\).
Naturally, we choose the left brane as the bath brane since it has the smaller effective Newton's constant (65), i.e., \(G^{1}_{\rm eff\;N}<G^{2}_{\rm eff\;N}\). Solving the Euler-Lagrangian equation (63) together with the NBC (62), we obtain numerically the RT surface \(z(r)\), which starts at \(z_{1}\approx 0.950\) on the left brane and ends at \(z_{2}\approx 0.484\) on the right brane. Please see Fig. 9. Let us show more details of the numerical calculations. We impose the BCs \(z=z_{1}\) and \(z_{1}^{\prime}=0\)8 on the left brane, and adjust the left endpoint \(z_{1}\) so that the NBC (62) on the right brane is satisfied. This is the so-called shooting method; a numerical sketch is given below. There is another method to calculate the RT surface. For any given \(0\leq z_{1}\leq 1\) and \(z_{1}^{\prime}=0\), we can solve the extremal surface and its area \(A_{\rm I}\) (61). We adjust the left endpoint \(z_{1}\) to minimize the area \(A_{\rm I}\). See Fig. 10 for \(A_{\rm I}(z_{1})\), which shows the area \(A_{\rm I}\) becomes minimal at \(z_{1}\approx 0.950\). From the RT surface with \(z_{1}\approx 0.950\), we derive the right endpoint \(z_{2}\approx 0.484\) and its derivative \(z_{2}^{\prime}\approx-0.150\). We verify that the obtained \(z_{2}\) and \(z_{2}^{\prime}\) satisfy the NBC (62), in agreement with the first method. The second method has the advantage that it makes clear the obtained RT surface is minimal rather than maximal. See Fig. 10 again.
Footnote 8: Recall that we have chosen \(\lambda_{1}=0\), which gives \(z_{1}^{\prime}=0\) from (62).
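For concreteness, the second method above can be organized as in the following Python sketch. This is a schematic template rather than the solver used here: the function `eom` is a stub standing in for the Euler-Lagrange equation (63) (which must be solved for \(z^{\prime\prime}\) and filled in), the integrand is the bulk area density of the black string (it reappears explicitly in (98) below), and the DGP boundary terms of (61) as well as the grid bounds are omitted or chosen arbitrarily for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

d, rho1, rho2 = 4, 0.5, 1.2  # parameters of (64)

def eom(r, y):
    z, zp = y
    zpp = 0.0  # stub: replace with z'' solved from the Euler-Lagrange eq. (63)
    return [zp, zpp]

def area(z1):
    # BCs on the left brane: z(-rho_1) = z_1 and z'(-rho_1) = 0 (since lambda_1 = 0)
    sol = solve_ivp(eom, [-rho1, rho2], [z1, 0.0], dense_output=True, rtol=1e-10)
    r = np.linspace(-rho1, rho2, 400)
    z, zp = sol.sol(r)
    f = 1.0 - z**(d - 1)
    integrand = np.cosh(r)**(d - 2) / z**(d - 2) \
        * np.sqrt(1.0 + np.cosh(r)**2 * zp**2 / (z**2 * f))
    bulk = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r)))
    return bulk  # the DGP boundary terms of (61) should be added here

# Scan the left endpoint and minimize the area, as in Fig. 10
z1_grid = np.linspace(0.5, 0.999, 100)
z1_star = min(z1_grid, key=area)  # with the true EOM one expects z1 ~ 0.950
```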
With the above numerical results, we derive the area of the RT surface
\[A_{\rm I}\approx 0.842<A_{\rm BH}\approx 0.898, \tag{67}\]
which is smaller than the black hole area. Thus there is indeed a nontrivial RT surface outside the horizon. Note that \(A_{\rm BH}\) includes the contributions from the DGP terms. Note also that we focus on half of the two-sided black hole in (67). Recall that the RT surface ends at \(z_{1}\approx 0.950\) and \(z_{2}\approx 0.484\) on the left and right brane, respectively. According to Fig. 7, this means the radiation region lies at \(z\geq z_{1}\approx 0.950\) on the left brane, and the island region lies at \(z\geq z_{2}\approx 0.484\) on the right brane. Clearly, the island region is non-empty in wedge holography with DGP terms.
### No-Island phase
In this subsection, we discuss the RT surface in the no-island phase, which is also called the Hartman-Maldacena (HM) surface. The HM surface starts at \(z_{1}\approx 0.950\) on the left weak-gravity brane, ends on the horizon at the beginning time \(t=0\), and then passes the horizon at \(t>0\). Let us first study the case at \(t=0\) (orange line of Fig.7). By varying the endpoint on the horizon, we get the HM surface with the minimal area
\[A_{\rm N}\approx 0.384<A_{\rm I}\approx 0.842,\ {\rm at\ t=0}, \tag{68}\]
where N labels the no-island phase. Since \(A_{\rm N}<A_{\rm I}\) at \(t=0\), the no-island phase dominates at the beginning.
As the black hole evolves, the HM surface crosses the horizon. To avoid coordinate singularities, we choose the infalling Eddington-Finkelstein coordinate \(dv=dt-\frac{dz}{f(z)}\). Substituting the embedding functions \(v=v(z),r=r(z)\) into the metric (58) and entropy formula (52), we get the area functional
\[A_{\rm N}=\frac{S_{\rm HEE}}{4\pi}=V\int_{z_{1}}^{z_{\rm max}}dz\frac{\cosh^{d -2}(r)}{z^{d-2}}\sqrt{r^{\prime 2}-\frac{\cosh^{2}(r)}{z^{2}}v^{\prime}(2+f(z)v^{ \prime})}, \tag{69}\]
and the time on the left bath brane
\[t_{1}=t(z_{1})=-\int_{z_{1}}^{z_{\rm max}}\Big{(}v^{\prime}+\frac{1}{f(z)} \Big{)}dz, \tag{70}\]
where \(r=r(z),v=v(z)\) are abbreviations, and \(z_{\rm max}\geq 1\) denotes the turning point of the two-sided black hole. According to [66], we have \(v^{\prime}(z_{\rm max})=-\infty\) and \(t(z_{\rm max})=0\), and \(z_{\rm max}=1\) corresponds to the beginning time \(t_{1}=0\). For simplicity, we denote \(t_{1}\) by \(t\) in this paper. Note that \(A_{\rm N}\) (69) is independent of the DGP parameters \(\lambda_{a}\), because we have chosen \(\lambda_{1}=0\) in our model (64) on the left brane.

Figure 10: Relation between the area \(A_{\rm I}\) and the endpoint \(z_{1}\) on the left brane, which shows that the area functional (61) becomes minimal at \(z_{1}\approx 0.950\).

Besides, the HM surface does not intersect the right brane, so no terms depending on \(\lambda_{2}\) appear in the area functional (69) either. From (69) and \(-v^{\prime}(2+f(z)v^{\prime})\geq 0\)[66], we can derive an inequality
\[A_{\rm N}\geq V\int_{z_{1}}^{z_{\rm max}}dz\frac{1}{z^{d-1}}\sqrt{-v^{\prime}(2 +f(z)v^{\prime})}, \tag{71}\]
where the RHS is obtained by setting \(r(z)=0\). We remark that \(r(z)=0\) is an exact solution to the Euler-Lagrange equations derived from (69) [50]. However, this solution does not obey the boundary condition on the left, since the left brane is located at \(r=-\rho_{1}\) rather than \(r=0\). Rather than an exact solution of our problem, \(r(z)=0\) is actually an asymptotic solution at \(t\to\infty\). We observe that \(r_{0}=r(z_{\rm max})\) approaches zero in the large time limit; see Fig. 18 of appendix A. Note also that, in the large time limit, the integration region around the turning point \(z=z_{\rm max}\) contributes most to the area \(A_{\rm N}\) (69) and the time (70). Thus we have
\[\lim_{t\to\infty}A_{\rm N}=\lim_{r\to 0}A_{\rm N}=V\int_{z_{1}}^{z_{\rm max}} \frac{dz}{z^{d-1}}\sqrt{-v^{\prime}(2+f(z)v^{\prime})}. \tag{72}\]
Remarkably, (72) is the same as the volume conjecture of holographic complexity [67, 68] for a \(d-\)dimensional AdS-Schwarzschild black hole. Following [66, 67, 68], we obtain
\[\lim_{t\to\infty}\frac{dA_{\rm N}}{dt}=\frac{V}{2}, \tag{73}\]
which yields the expected result that the HM surface area increases linearly in time at sufficiently late times. Interestingly, the late-time growth rate (73) is the same as that of holographic complexity. This seems to imply a deep relation between entanglement entropy and complexity, which deserves further study in the future. Note that the late-time growth rate of \(A_{\rm N}\) is universal and independent of the choices of the parameters \((\rho_{a},\lambda_{a})\).
Let us provide some details on how to derive (73). Since the area functional (72) does not depend on \(v(z)\) explicitly, we can derive a conserved quantity
\[E_{\rm N}=-\frac{\partial L}{\partial v^{\prime}}=\frac{z^{-d}\left(1+f(z)v^{ \prime}\right)}{\sqrt{-\frac{v^{\prime}(f(z)v^{\prime}+2)}{z^{2}}}}=\frac{ \sqrt{-f(z_{\rm max})}}{z_{\rm max}^{d-1}}, \tag{74}\]
where \(A_{\rm N}=V\int dzL\), \(E_{\rm N}\) is a constant at a fixed time, and we have used \(v^{\prime}(z_{\rm max})=-\infty\) to derive the last equality of (74). According to [66], the conserved quantity \(E_{\rm N}\) approaches an extremum in the large time limit
\[\lim_{t\to\infty}\frac{dE_{\rm N}}{dz_{\rm max}}=-\frac{(d-1)\bar{z}_{\rm max}^{-d-1}\left(\bar{z}_{\rm max}^{d}-2\bar{z}_{\rm max}\right)}{2\sqrt{\bar{z}_{\rm max}^{d-1}-1}}=0, \tag{75}\]
which yields the maximal value of \(z_{\rm max}\)
\[\bar{z}_{\rm max}=\lim_{t\to\infty}z_{\rm max}=2^{\frac{1}{d-1}}. \tag{76}\]
From (74), we solve
\[v^{\prime}(z)=\frac{E_{\rm N}z^{d}\left(\sqrt{E_{\rm N}^{2}z^{2d}+z^{2}f(z)}-E_{ \rm N}z^{d}\right)-z^{2}f(z)}{f(z)\left(E_{\rm N}^{2}z^{2d}+z^{2}f(z)\right)}. \tag{77}\]
By using (75) and (77), we get
\[\lim_{t\rightarrow\infty}\frac{\partial v^{\prime}(z)}{\partial z _{\rm max}}=\lim_{t\rightarrow\infty}\frac{\partial v^{\prime}(z)}{\partial E _{\rm N}}\frac{\partial E_{\rm N}}{\partial z_{\rm max}}=0, \tag{78}\] \[\lim_{t\rightarrow\infty}\frac{\partial L}{\partial z_{\rm max}}= \lim_{t\rightarrow\infty}\frac{\partial L}{\partial v^{\prime}(z)}\frac{ \partial v^{\prime}(z)}{\partial z_{\rm max}}=0. \tag{79}\]
Recall that \(L\) is defined by \(A_{\rm N}=V\int dzL\). From (69), (70), (78), (79) and \(v^{\prime}(\bar{z}_{\rm max})=-\infty\), we have

\[\lim_{t\rightarrow\infty}\frac{dA_{\rm N}}{dt}=\lim_{t\rightarrow\infty}\frac{dA_{\rm N}/dz_{\rm max}}{dt/dz_{\rm max}}=V\,\frac{\frac{1}{\bar{z}_{\rm max}^{d-2}}\sqrt{-\frac{v^{\prime}(\bar{z}_{\rm max})\left(2+f(\bar{z}_{\rm max})v^{\prime}(\bar{z}_{\rm max})\right)}{\bar{z}_{\rm max}^{2}}}+\int_{z_{1}}^{\bar{z}_{\rm max}}dz\,\frac{\partial L}{\partial\bar{z}_{\rm max}}}{-\left(v^{\prime}(\bar{z}_{\rm max})+\frac{1}{f(\bar{z}_{\rm max})}\right)-\int_{z_{1}}^{\bar{z}_{\rm max}}dz\,\frac{\partial v^{\prime}}{\partial\bar{z}_{\rm max}}}=V\frac{\sqrt{-f(\bar{z}_{\rm max})}}{\bar{z}_{\rm max}^{d-1}}=VE_{\rm N}(\bar{z}_{\rm max}). \tag{80}\]
Substituting (76) and \(f(z)=1-z^{d-1}\) into (80), we finally obtain the late-time growth rate of \(A_{\rm N}\) (73).
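As a quick numerical cross-check of (73) (our own, not part of the derivation), one can evaluate \(E_{\rm N}\) at \(\bar{z}_{\rm max}\) directly:

```python
import numpy as np

d = 4
f = lambda z: 1.0 - z**(d - 1)
zbar = 2.0**(1.0 / (d - 1))            # maximal turning point (76)
E = np.sqrt(-f(zbar)) / zbar**(d - 1)  # E_N of (74) evaluated at z_max = zbar
print(E)  # 0.5, i.e. lim dA_N/dt = V * E = V/2, reproducing (73)
```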
We can obtain the general time dependence of \(A_{\rm N}\) by numerical calculations. The numerical method developed in [41] is quite helpful in studying the HM surface. Although it was designed for codim-2 branes, it can easily be generalized to this paper's case of codim-1 branes; see appendix A for more details. We draw the Page curve in Fig. 11, where \(A\) and \(t\) are half those of a two-sided black hole. The entanglement entropy (orange line) increases with time at early times, and then becomes a constant (blue line) after the Page time, which reproduces the expected Page curve of the eternal black hole.
To end this section, let us make some comments. **1.** As shown in this section, there are non-trivial entanglement islands and Page curves in wedge holography with DGP gravity. This strongly implies that the entanglement island is consistent with massless gravity. **2.** To recover the entanglement islands in wedge holography with massless gravity on the branes, at least one of the DGP parameters \(\lambda_{a}\) must be negative. Nothing goes wrong for negative \(\lambda_{a}\) as long as the constraints (12,51,56) are satisfied. We stress that our model is physically well-defined, since it has positive effective Newton's constants and positive kinetic energy of the brane bending modes, a stable mass spectrum, and obeys the holographic c-theorem [59]. **3.** As discussed at the end of sect. 2.2, the brane low-energy effective theory is better approximated by Einstein gravity for negative \(\lambda_{a}\). That is because the massive mode is harder to excite due to the larger mass gap for negative \(\lambda_{a}\). **4.** We get \(A_{\rm I}=0\) if the AdS black hole is replaced by an AdS space on the branes. From (61) with \(f(z)=1\), we observe that \(A_{\rm I}\to 0\) for \(z=z_{a}\rightarrow\infty\). See also Fig. 12, which shows that \(A_{\rm I}\) decreases with \(z_{1}\) and is minimized at the AdS horizon \(z_{1}\rightarrow\infty\). \(A_{\rm I}=0\) means that the entanglement entropy on the whole defect \(\Sigma\) is zero for CFTs in a vacuum state, which is reasonable. **5.** One may identify the DGP term of the entropy formula (52) with the second term of the island rule (2). Then it seems that \(\lambda_{a}\sim 1/\hat{G}_{N}\) should be positive. However, this is not true. Note that we are studying the island rule (2) on the branes. The effective action on the brane is given by \(I_{\rm eff}=I_{\rm CFT}+\frac{1}{16\pi G_{\rm eff\;N}}\int_{Q_{2}}\sqrt{-h}(R_{h}+(d-1)(d-2))+...\), with higher derivative corrections suppressed around the solution (6) [46]. As a result, \(\hat{G}_{N}\) should be identified with the effective Newton's constant on the brane instead of \(1/\lambda_{a}\). **6.** Note that the entanglement entropy of our model is finite. It is the renormalized entanglement entropy, since the branes are located at finite positions instead of at asymptotic infinity. Similar to Casimir energy, in principle the renormalized entropy can be negative; for simplicity, we do not consider this situation in this paper. **7.** The results of this paper can be generalized to cone holography [49]. Cone holography can be regarded as a holographic dual of the edge modes on a codim-n defect, which is a generalization of wedge holography.
## 4 Page curve of case II: two black holes
In the above section, we focused on the case \(G^{1}_{\rm eff\;N}<G^{2}_{\rm eff\;N}\) (case I), where the brane with the smaller Newton's constant can be chosen as the bath. This case approximately describes one black hole on a strong-gravity brane coupled with a bath on the weak-gravity brane.

Figure 11: Page curve of case I for \(d=4\) and \(V=1\). The orange and blue lines denote the RT surface in the no-island and island phases, respectively. The Page curve is given by the orange line before the Page time, and by the blue line after the Page time. The entanglement entropy first increases with time (orange line) and then becomes a constant (blue line), which recovers the Page curve of eternal black holes.

In this section, we investigate the situation \(G_{\rm eff\;N}^{1}=G_{\rm eff\;N}^{2}\) (case II), where two black holes interact with each other on two branes of equal gravitational strength. See Fig. 13 for the geometry at a time slice, and see Fig. 14 for the Penrose diagram on the branes.
Unlike case I, there is no natural way to choose the weak-gravity bath brane for case II. As a result, we have to take the two black holes on \(Q_{1}\cup Q_{2}\) seriously. By symmetry and naturalness, the region near the black-hole horizon can be chosen as the island region (purple line of Fig. 13), and its complement on \(Q_{1}\cup Q_{2}\) is the radiation region (red line of Fig. 13). Similar to case I, since both branes are gravitating, we adjust the radiation region R (equivalently, the island region I, since \(\partial{\rm R}=\partial{\rm I}\)) to minimize the entanglement entropy of R in the island phase. In this way, we fix the radiation region R. Then, we can follow the usual procedure to calculate the entanglement entropy of R, which is given by the Hartman-Maldacena (HM) surface (orange curve of Fig. 13) at early times and by the RT surface in the island phase (blue curve of Fig. 13) at late times.
In case II, the island region (purple line of Fig. 13) and the radiation region (red line of Fig. 13) constitute the whole space. Naturally, the "island" of case II is a component not of the radiation but of the black hole. In other words, the "island" of case II does not lie in the entanglement wedge of the radiation. This differs from the case with a non-gravitational bath, and from case I with a weak-gravity bath.

Figure 12: The figure of \(A_{\rm I}(z_{1})\) for an AdS space on the branes, where \(A_{\rm I}\) denotes the area of the extremal surface, and \(z_{1}\) is its endpoint on the left brane. We take the same parameters \((\rho_{1}=0.5,\lambda_{1}=0,\rho_{2}=1.2,\lambda_{2}\approx-0.246,V=1,d=4)\) as in sect. 3.1. It shows that the area of the extremal surface decreases with \(z_{1}\) and approaches zero at the AdS horizon \(z_{1}\rightarrow\infty\). Since the RT surface is defined as the extremal surface with minimal area, we get \(A_{\rm I}=0\) for the RT surface when the AdS black hole is replaced by an AdS space on the branes.

As a result, the island rule (2) should be modified as
\[S_{\rm EE}({\rm R})=S_{\rm EE}({\rm I})=\min\Bigl{\{}{\rm ext}\Bigl{(}S_{\rm QFT} ({\rm R})+\frac{A(\partial{\rm I})}{4\hat{G}_{N}}\Bigr{)}\Bigr{\}}, \tag{81}\]
where \(\partial{\rm R}=\partial{\rm I}\) and \(S_{\rm QFT}({\rm R})=S_{\rm QFT}({\rm I})\). Interestingly, (81) is just the usual formula for the generalized entropy from before the island rule was developed. Remarkably, this formula can give the Page curve. We give a holographic derivation of the Page curve for case II below. Since the method is the same as that of sect. 3, we show only the key results.
We choose the following parameters
\[\rho_{1}=\rho_{2}=0.5,\ \lambda_{1}=\lambda_{2}\approx-0.182,\ V=1,\ d=4, \tag{82}\]
which obey the constraints (12,51,56)
\[G^{1}_{\rm eff\ N}=G^{2}_{\rm eff\ N}\approx 0.248>0,\ \ B\approx 0.179>0,\ \ \lambda_{1}=\lambda_{2}>\lambda_{\rm HEE}\approx-0.188. \tag{83}\]
Following the approach of sect.3, we numerically derive the RT surface ending at \(z_{a}\approx 0.886\) on the two branes, where \(z\geq z_{a}\) corresponds to the island region (purple line of Fig.13). Thus, there exist non-vanishing entanglement islands. We also obtain the area of HM surface at \(t=0\), the area of RT surface in the island phase, and the black hole area with corrections from the DGP gravity as follows
\[A_{\rm N}\approx 0.049<A_{\rm I}\approx 0.160<A_{\rm BH}\approx 0.161. \tag{84}\]
Figure 13: Geometry for case II: two black holes. The red and purple lines denote the radiation R and island I on branes. The dotted line, blue, and orange lines indicate the horizon, RT surface in the island phase, and RT surface in the no-island phase at \(t=0\), respectively.
Because \(A_{\rm N}<A_{\rm I}\), the no-island phase dominates at early times. As the black hole evolves, \(A_{\rm N}\) grows with time. In the large time limit, we get
\[\lim_{t\rightarrow\infty}\frac{dA_{\rm N}}{dt}=V, \tag{85}\]
which is twice that of (73), since there are now two HM surfaces; see the orange lines of Fig. 13. Since \(A_{\rm N}\sim t>A_{\rm I}\) at late times, the island phase becomes dominant later, which produces the Page curve of the eternal black hole. See Fig. 15 for the Page curve of case II. Finally, we mention that the parameters of this section also give \(A_{\rm I}=0\) if the AdS black hole is replaced by an AdS space on the branes.
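As a rough, back-of-the-envelope estimate (ours, not from the numerics behind Fig. 11 and Fig. 15), one can gauge the Page time by pretending the late-time rates (73) and (85) hold from \(t=0\); since the actual early-time growth is slower, the true Page times are somewhat larger:

```python
# Case I (V = 1): A_N(0) ~ 0.384, A_I ~ 0.842, late-time rate V/2
t_page_I = (0.842 - 0.384) / 0.5    # ~ 0.92
# Case II (V = 1): A_N(0) ~ 0.049, A_I ~ 0.160, late-time rate V
t_page_II = (0.160 - 0.049) / 1.0   # ~ 0.11
print(t_page_I, t_page_II)
```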
## 5 Discussions on massless-island puzzle
In sect. 3 and sect. 4, we showed that the massless entanglement island exists in wedge holography with suitable DGP terms, and that the Page curve of eternal black holes can be recovered. In this section, we discuss the puzzle of massless islands raised in [51] and argue that the entanglement island is consistent with massless gravity.
Let us first give a brief review of the massless-island puzzle [51]. Following [51], we take the Karch-Randall braneworld with non-gravitational baths to illustrate the main ideas. See Fig. 1 for the geometry at a constant time slice. Let us consider the late-time evolution of the black hole, where the island phase dominates and the RT surface is given by the blue line of Fig. 1. According to entanglement wedge reconstruction [69, 70], operators in the island region I and its complement \(\bar{\rm I}\) on the brane can be reconstructed from operators in the radiation region R and its complement \(\bar{\rm R}\) on the AdS boundary, respectively.

Figure 14: Penrose diagram for case II: two two-sided black holes. The left and right vertical lines are glued together. The black-dotted, green-dotted, red, and purple lines denote the horizon, singularity, radiation region, and island region, respectively.

Now let us focus on the d-dimensional system on the brane \(Q\) and the AdS boundary \(M\). On the one hand, we have a QFT system without gravity on the AdS boundary \(M\). The CFT operators of R commute with those of \(\bar{\rm R}\) since they are space-like separated
\[[O_{\rm R},\ O_{\bar{\rm R}}]=0, \tag{86}\]
where \(O_{A}\) denotes operators defined in the region \(A\). As a result, according to entanglement wedge reconstruction [69, 70], the operators in the island I dressed to the radiation R commute with the operators in \(\bar{\rm I}\) dressed to \(\bar{\rm R}\)
\[[O_{\rm I},\ O_{\bar{\rm I}}]=0. \tag{87}\]
On the other hand, according to the gravitational Gauss's law, the action of operators in the island I must be accompanied by a disturbance of the metric outside the island, i.e., in \(\bar{\rm I}\). In other words, energy fluctuations inside I can be measured at the spacetime boundary of \(\bar{\rm I}\). Thus we have
\[[O_{\rm I},\ O_{\bar{\rm I}}]\neq 0, \tag{88}\]
at least for some operators, which conflicts with (87). This is not a problem in the Karch-Randall braneworld, since the gravity on the brane is massive and Gauss's law breaks down; as a result, (88) becomes invalid. However, it is problematic for massless gravity obeying Gauss's law. For the above reasons, [51] conjectures that the entanglement island is inconsistent with long-range gravity obeying Gauss's law.
Figure 15: Page curve of case II for \(d=4\) and \(V=1\). The orange and blue lines denote the RT surface in the no-island and island phases. The Page curve is given by the orange line before the Page time, and by the blue line after the Page time. The entanglement entropy first increases with time (orange line) and then becomes a constant (blue line), which recovers the Page curve of the eternal black hole.
Now we discuss possible resolutions of the island puzzle. To warm up, let us study an inspiring analog of the above puzzle in AdS/CFT. Consider an AdS black hole as shown in Fig. 16, where \(O_{in}\) and \(O_{out}\) are operators inside and outside the black hole region, corresponding to \(O_{\rm I}\) and \(O_{\bar{\rm I}}\) in the above puzzle. Please note that the dotted line of Fig. 16 is the RT surface derived from the generalized entropy, which is not necessarily the horizon. For the static AdS black hole, the entanglement wedge of the whole space of CFTs is the region bounded by the RT surface and the AdS boundary (white region of Fig. 16). As a result, the operators \(O_{in}\) and \(O_{out}\) lie outside and inside the entanglement wedge, respectively, and satisfy \([O_{in},~{}O_{out}]=0\) in analogy with (87). On the other hand, gravity is massless in AdS/CFT. Similar to (88), the commutator \([O_{in},~{}O_{out}]\) cannot vanish due to Gauss's law. Now there is a contradiction. Of course, AdS/CFT is well-defined and cannot be wrong. For the thermal CFT state obtained by tracing out one side of the thermofield double state [64], the puzzle can be resolved: the operator \(O_{in}\) can be dressed to the other side of the black hole, which yields \([O_{in},~{}O_{out}]=0\) and agrees with Gauss's law 9. For the pure thermofield double state, the problem is more subtle. Taking \(O_{in}\) and \(O_{out}\) to be the total operators on both sides, \(O_{in}\) has to be dressed to the AdS boundary on one side, which leads to \([O_{in},~{}O_{out}]\neq 0\). Recall that the black hole interior increases monotonically over time. If \(O_{in}\) can be completely dressed to the fast-growing black hole interior, then the puzzle can be resolved. Recently, an interesting paper [71] found that the operators \(O_{in}\) and \(O_{out}\) commute, provided that the state breaks all asymptotic symmetries. If this is the case, the puzzle can be resolved too. Our key observation is that if the puzzle can be resolved in some way in AdS/CFT, it can likewise be resolved in wedge holography. In fact, wedge holography is equivalent to AdS/CFT for the class of solutions (6) studied in this paper [48], because they have the same effective action (9).

Figure 16: Schematic diagram for an AdS black hole, where the gray area denotes the black hole “interior”. Note that the dotted line is the RT surface derived from the generalized entropy, which is not necessarily the horizon. \(O_{in}\) and \(O_{out}\) correspond to \(O_{\rm I}\) and \(O_{\bar{\rm I}}\) in the puzzle of the massless island, respectively.
Finally, we are ready to discuss wedge holography with DGP terms, where there is massless gravity on the branes. We first discuss case II with two AdS black holes on the two branes, which is very similar to the above case in AdS/CFT. Comparing Fig. 13 with Fig. 16, we notice that the island region I and the radiation region R of case II correspond to the grey region and the white region of Fig. 16 in AdS/CFT. Following the same arguments as in AdS/CFT, we can resolve the potential puzzle in case II. It should be mentioned that the puzzle [51] reviewed around (86-88) does not directly apply to case II, since there are no regions \(\bar{\rm I}\) and \(\bar{\rm R}\) in case II. Besides, the operators on the island are not described by operators in the radiation. Thus the island of case II is not the one defined in [51], which lies in the entanglement wedge of the radiation region R but is disconnected from R. In this sense, case II does not conflict with [51]. Now let us turn to case I with one black hole and one weak-gravity bath; see Fig. 7 for the geometry. There are several possible resolutions. First, unlike the case of an AdS boundary without gravity, because of Gauss's law, operators on R and \(\bar{\rm R}\) no longer commute on the left brane with massless gravity. Thus (86) and (87) break down and the puzzle disappears. Second, one may insist that operators on R and \(\bar{\rm R}\) commute and give up Gauss's law. According to [71], operators on I and \(\bar{\rm I}\) commute, provided that the state breaks all asymptotic symmetries. Third, we can choose a division into island and radiation regions similar to that of case II. This is a natural choice if we take into account the Hawking radiation from the black hole on the left weak-gravity brane. On the other hand, if we are interested only in the Hawking radiation from the black hole on the right strong-gravity brane, case I is a good approximation, and it is similar to the situation discussed in the usual double holography with non-gravitational baths.
## 6 Higher derivative gravity on branes
In this section, we generalize the above discussions to higher derivative gravity on the branes. Higher derivative gravity is interesting in many respects. Perhaps most interestingly, general higher curvature gravity is renormalizable [72]. Although it may suffer from the ghost problem, one can construct a ghost-free and potentially renormalizable higher derivative gravity by choosing the parameters carefully [73, 74, 75]. Besides, string theory predicts higher derivative corrections to the gravitational action. The motivation here is to show that massless entanglement islands exist in general gravity theories. For simplicity, we focus on the following action
\[I = \int_{W}dx^{d+1}\sqrt{-g}\Big{(}R_{W}+d(d-1)\Big{)} \tag{89}\] \[+2\int_{Q}dx^{d}\sqrt{-h_{Q}}(K-T_{a}+\lambda_{a}R_{Q}+b_{a}\bar{ R}_{Q}^{2}+d_{a}\bar{R}_{Qij}\bar{R}_{Q}^{\;ij}),\]
where \(b_{a},d_{a}\) are the higher derivative parameters and
\[\bar{R}_{Q}=R_{Q}+d(d-1){\rm sech}^{2}\left(\rho_{a}\right),\;\bar{R}_{Q\ ij}=R_{Q\ ij}+(d-1){\rm sech}^{2}\left(\rho_{a} \right)h_{Q\ ij}, \tag{90}\]
which vanish for the class of solutions (6). As a result, at least for the solutions (6), the higher derivative action (89) is equal to the DGP action (4) on-shell. In general, however, they are different. In fact, \(\bar{R}_{Q}^{2}\) and \(\bar{R}_{Qij}\bar{R}_{Q}^{\;ij}\) are "irrelevant" higher derivative terms in the sense that they do not contribute to the Weyl anomaly [76], the universal terms of entanglement entropy (the logarithmically divergent term) [77], or correlation functions (up to three-point functions) [78] of the dual CFTs. On the other hand, \(\bar{R}_{Qijkl}\bar{R}_{Q}^{\;ijkl}\) is "relevant". However, it excludes the novel class of solutions (6) 10. Thus, we do not consider the "relevant" term \(\bar{R}_{Qijkl}\bar{R}_{Q}^{\;ijkl}\) in this paper and leave its study to future works.
Footnote 10: (6) is a solution to wedge holography with \(\bar{R}_{Q\;ijkl}\bar{R}_{Q}^{\;ijkl}\) on the branes only if the bulk is a local AdS space. Moreover, the black string is then no longer a solution.
Similar to sect.2, we impose NBC so that there is massless gravity on the branes
\[K^{ij}-(K-T_{a}+\lambda_{a}R_{Q})h_{Q}^{ij}+2\lambda_{a}R_{Q}^{ij}+2H^{ij}=0, \tag{91}\]
where \({\cal L}=b_{a}\bar{R}_{Q}^{2}+d_{a}\bar{R}_{Qij}\bar{R}_{Q}^{\;ij}\), \(\nabla_{Q\;i}\) denotes covariant derivative with respect to \(h_{Q\;ij}\) and
\[H_{ij}=P_{(i}^{\;mnl}R_{Q\;j)mnl}-2\nabla_{Q}^{m}\nabla_{Q}^{n}P _{imnj}-\frac{1}{2}{\cal L}h_{Q\;ij}, \tag{92}\] \[P^{ijkl}=\frac{\partial{\cal L}}{\partial R_{Q\;ijkl}}=b_{a}\bar {R}_{Q}(h_{Q}^{ik}h_{Q}^{jl}-h_{Q}^{il}h_{Q}^{jk})+\frac{d_{a}}{2}(\bar{R}_{Q} ^{ik}h_{Q}^{jl}-\bar{R}_{Q}^{il}h_{Q}^{jk}+h_{Q}^{ik}\bar{R}_{Q}^{jl}-h_{Q}^{il }\bar{R}_{Q}^{jk}). \tag{93}\]
Note that \(H_{ij}=\bar{R}_{Qij}=\bar{R}_{Q}=P_{ijkl}=0\) for the class of solutions (6). As a result, the bulk metric (6) obeys NBC (91) provided that \(T_{a}\) and \(\lambda_{a}\) satisfy the relation (8).
Substituting the metric (6) into the action (89) and integrating \(r\), we get the effective action on branes
\[I_{a} = \frac{1}{16\pi G_{\rm eff\;N}^{a}}\int_{Q_{a}}\sqrt{-h}\Big{(}R_ {h}+(d-1)(d-2)\Big{)} \tag{94}\] \[+2\cosh(\rho_{a})^{d-4}\int_{Q_{a}}\sqrt{-h}\Big{(}b_{a}\bar{R}_{ h}^{2}+d_{a}\bar{R}_{h\;ij}\bar{R}_{h}^{\;ij}\Big{)},\]
where \(G_{\rm eff\;N}^{a}\) denotes the effective Newton's constant (10) on \(Q_{a}\), \(\bar{R}_{h}=R_{h}+d(d-1)\), and \(\bar{R}_{h\;ij}=R_{h\;ij}+(d-1)h_{ij}\). We require that the CFTs dual to the effective theory (94) have positive central charges and that no negative energy fluxes appear in scattering processes [79, 80]. This yields \(G_{\rm eff\;N}^{a}>0\) but does not constrain \(b_{a}\) and \(d_{a}\)[76, 78]. Usually, one treats the higher derivative terms as small corrections. Thus we focus on the case
\[|b_{a}|<1,\;\;|d_{a}|<1. \tag{95}\]
It should be mentioned that the above higher-derivative model includes massless gravity on the brane. The reasons are as follows. First, since (6) is a solution, the induced metric
on the brane obeys the Einstein equations (7). Thus, it is clear that there is a massless mode. Second, the effective theory (94) on the brane is a higher derivative gravity, which generally includes a massless graviton and a massive graviton. Usually, the massive mode is a ghost, which can be removed by fine-tuning the parameters; see critical gravity [73, 81] and the higher derivative gravity from ghost-free multi-metric gravity [82, 46] for examples.
Now let us discuss the entanglement entropy of Hawking radiation. From the action (89), we can derive the holographic entanglement entropy [83, 84]
\[S_{\rm HEE}=4\pi\int_{\Gamma}dx^{d-1}\sqrt{\gamma}+8\pi\int_{\partial\Gamma}dx^{d-2}\sqrt{\sigma}\left(\lambda_{a}+2b_{a}\bar{R}_{Q}+d_{a}\left(\bar{R}_{Q}{}^{\alpha}{}_{\alpha}-\frac{1}{2}K_{\alpha}K^{\alpha}\right)\right), \tag{96}\]
where \(\Gamma\) denotes the RT surface, \(\partial\Gamma=\Gamma\cap Q\) is the intersection of the RT surface with the branes, \(K_{\alpha}\) denotes the trace of the extrinsic curvatures of \(\partial\Gamma\) as viewed from the brane geometry, and \(\alpha\) labels the directions normal to \(\partial\Gamma\) on the branes. For the black string geometry (58), we derive the extrinsic curvature at a constant time slice \(t={\rm constant}\) and \(z=z_{a}\) on the branes
\[K_{\alpha}K^{\alpha}=(d-2)^{2}f(z_{a}){\rm sech}^{2}(\rho_{a}), \tag{97}\]
which is non-zero generally. Here \(f(z_{a})=1-z_{a}^{d-1}\) and \(z_{a}\) is the endpoint of the bulk RT surface on the brane \(Q_{a}\). Following the approach of sect.3.1, we obtain the area functional of the RT surface in the island phase
\[A_{\rm I}=\frac{S_{\rm HEE}}{4\pi} = V\int_{-\rho_{1}}^{\rho_{2}}dr\frac{\cosh^{d-2}(r)}{z^{d-2}} \sqrt{1+\frac{\cosh^{2}(r)z^{\prime 2}}{z^{2}f(z)}} \tag{98}\] \[+V\sum_{a=1}^{2}\Big{(}\frac{2\lambda_{a}\cosh^{d-2}(\rho_{a})}{ z_{a}^{d-2}}-d_{a}(d-2)^{2}f(z_{a})\frac{\cosh^{d-4}(\rho_{a})}{z_{a}^{d-2}} \Big{)},\]
where we have used (97) and \(\bar{R}_{Qij}=\bar{R}_{Q}=0\) for the black string. Note that only the higher derivative term \(\bar{R}_{Qij}\bar{R}_{Q}^{\ ij}\) contributes to the area (98). Considering the variation of the above area functional, we derive the NBC on the branes
\[\frac{(-)^{a}z_{a}^{\prime}}{f(z_{a})\sqrt{1+\frac{\cosh^{2}(\rho_{a})z_{a}^{ \prime 2}}{z_{a}^{2}f(z_{a})}}}=\frac{2\lambda_{a}(d-2)z_{a}}{\cosh^{2}(\rho_{a})}- \frac{d_{a}(d-2)^{2}\left((d-2)+z_{a}^{d-1}\right)z_{a}}{\cosh^{4}\left(\rho_ {a}\right)}. \tag{99}\]
Following the discussions of sect. 3.1, we observe that positive \(d_{a}\) can yield a non-trivial RT surface outside the horizon. The reasons are as follows. First, the boundary term of the area functional (98) increases with \(z_{a}\) for positive \(d_{a}\), while the bulk term of (98) decreases with \(z\) and takes its minimal value at \(z=1\). As a result, the total area functional (98) can be minimized outside the horizon \(z<1\). Second, from the NBC (99) we note that \(z_{a}^{\prime}\neq 0\), which evades the no-go theorem of [50] based on \(z_{a}^{\prime}=0\). Thus there can be massless entanglement islands in wedge holography with higher derivative gravity on the branes.
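The first point is easy to check numerically (an illustration of ours, with arbitrary sample values \(\lambda_{a}=0\), \(d_{a}>0\)):

```python
import numpy as np

# Boundary term of the area functional (98) on one brane, with lambda_a = 0
# and illustrative values d = 4, rho_a = 0.1, d_a = 0.1 (positive).
d, rho, da = 4, 0.1, 0.1
z = np.linspace(0.3, 0.999, 50)
f = 1.0 - z**(d - 1)
bdy = -da * (d - 2)**2 * f * np.cosh(rho)**(d - 4) / z**(d - 2)
print(np.all(np.diff(bdy) > 0))  # True: the term grows with z_a for d_a > 0
```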
Now we are ready to study the Page curve in wedge holography with higher derivative gravity on the branes. For simplicity, we focus on case I and choose the left weak-gravity
brane as the bath. Besides, we remove the DGP terms and consider only \(\bar{R}_{Qij}\bar{R}_{Q}^{\;ij}\) on the right brane. Without loss of generality, let us take the following parameters
\[d=4,\ V=1,\rho_{1}=0.6,\ \rho_{2}=0.1,\ \lambda_{a}=0,\ d_{1}=0,\ d_{2}\approx 0.107678\approx 0.108, \tag{100}\]
which yields
\[0<G_{\rm eff\ N}^{1}\approx 0.029<G_{\rm eff\ N}^{2}\approx 0.198,\ \ B\approx 1.910>0. \tag{101}\]
Following the approaches of sect. 3, we obtain the RT surface in the island phase, which starts at \(z_{1}\approx 0.900\) on the left brane and ends at \(z_{2}\approx 0.705\) on the right brane. Thus the radiation region (red line of Fig. 7) lies at \(z\geq z_{1}\approx 0.900\), and the island region (purple line of Fig. 7) lies at \(z\geq z_{2}\approx 0.705\). Then, we numerically derive the various areas
\[A_{\rm N}\approx 0.646<A_{\rm I}\approx 0.730<A_{\rm BH}\approx 0.778, \tag{102}\]
which implies that the no-island phase dominates at the beginning \(t=0\). At late enough times, the time-growth rate of \(A_{\rm N}\) approaches a universal constant, \(\lim_{t\to\infty}dA_{\rm N}/dt=V/2\). Since \(A_{\rm N}\sim t>A_{\rm I}\) at late times, the island phase dominates later, which recovers the Page curve of the eternal black hole. To end this section, we draw the Page curve in Fig. 17. Similar to sect. 3, this section's higher derivative model also gives \(A_{\rm I}=0\) if the AdS black hole is replaced by an AdS space on the branes. This means that the entanglement entropy of the whole space is zero for the CFTs on the defect in a vacuum, which is reasonable and can be regarded as a test of our model. Recall that the entanglement entropy of this paper is the renormalized entropy. Similar to Casimir energy, in principle, the renormalized entanglement entropy can be negative as long as it is bounded from below. For simplicity, we focus on the case \(A_{\rm I}\geq 0\) in this paper.

Figure 17: Page curve of case I with higher derivative gravity on the branes. We have set \(d=4\) and \(V=1\). The orange and blue lines denote the RT surface in the no-island and island phases. The Page curve is given by the orange line before the Page time, and by the blue line after the Page time. The entanglement entropy first increases with time (orange line) and then becomes a constant (blue line), which recovers the Page curve of the eternal black hole.
## 7 Conclusions and Discussions
This paper investigates the entanglement island and Page curve in wedge holography with DGP gravity and higher derivative gravity on the branes. We work out the effective action for one novel class of solutions and find that the mass spectrum obeys the Breitenlohner-Freedman bound. Interestingly, the effective action and mass spectrum show that there is massless gravity on the brane. By studying the effective Newton's constant, the brane bending modes, and HEE, we obtain several lower bounds for the DGP parameters. Remarkably, there are non-trivial entanglement islands outside the horizon in wedge holography with suitable DGP gravity or higher derivative gravity on the branes. We study two cases. In case I, there is one black hole on the strong-gravity brane and a bath on the weak-gravity brane; in case II, there are two black holes on the two branes with equal gravitational strength. We find non-vanishing entanglement islands and recover the Page curve in both cases. Finally, we study an inspiring analog of the island puzzle in AdS/CFT and discuss its possible resolutions. We argue that if the contradiction can be resolved in AdS/CFT, it can likewise be resolved in wedge holography. Our results strongly imply that entanglement islands exist in massless gravity theories.
There are many significant problems to explore. First, [50, 52] prove the absence of entanglement islands in the black string geometry in the initial theory of wedge holography [47]. We show that the island can be recovered in wedge holography with suitable DGP or higher derivative gravity on the branes. This raises the question of whether the spacetime studied in [50, 52] is too special. Does the entanglement island exist in more general spacetimes in the initial model of wedge holography? This is a significant problem worth studying. Second, this paper only discusses the effects of curvature terms on the branes. It is interesting to see what happens when one adds appropriate matter fields on the branes. Third, we focus on the Page curve of eternal black holes. It is interesting to generalize the discussion to evaporating black holes; see [53] for some interesting progress. Fourth, there is also a massless gravitational mode on the branes of cone holography [49], which generalizes wedge holography to codim-n defects. It is interesting to generalize the results of this paper to cone holography. Fifth, we focus on the doubly holographic model in this paper. It is a fundamental and non-trivial problem to study entanglement islands directly in four-dimensional Einstein gravity. We hope these problems can be addressed in the future.
## Acknowledgements
We thank T. Takayanagi, X. Dong, Y. Pang, H. J. Wang, D. q. Li and Z. Q. Cui for valuable comments and discussions. This work is supported by the National Natural Science Foundation of China (No.12275366 and No.11905297).
## Appendix A Numerical calculation for the no-island phase
In this appendix, we numerically calculate the time evolution of the area of the RT surface in the no-island phase. We take the same parameters as in sect. 3.1. Thus, the RT surface starts at \(z_{1}=0.95\) on the left brane, ends on the horizon \(z=1\) at the beginning time \(t=0\), and then passes through the horizon at \(t>0\). We find that, at late times, \(r\) approaches zero and the area of the RT surface increases linearly with time.
Note that the area functional (69), i.e., \(A_{\rm N}=V\int_{z_{1}}^{z_{\rm max}}dzL(r,r^{\prime},v^{\prime},z)\), does not depend on \(v(z)\) explicitly. Thus we can derive a conserved quantity
\[E = -\frac{\partial L}{\partial v^{\prime}}=\frac{z^{-d}\cosh^{d}(r )\left(1+f(z)v^{\prime}\right)}{\sqrt{\frac{\cosh^{2}(r)v^{\prime}(-f(z)v^{ \prime}-2)}{z^{2}}+\left(r^{\prime}\right)^{2}}} \tag{103}\] \[= \sqrt{-f(z_{\rm max})}\left(\frac{\cosh{(r_{0})}}{z_{\rm max}} \right){}^{d-1},\]
where \(E\) is a constant at a fixed time, \(z_{1}\) is the endpoint of the RT surface on the left brane, \(z_{\rm max}\geq 1\) is the turning point of the two-sided black hole [66], and we have used \(r(z_{\rm max})=r_{0}\) and \(v^{\prime}(z_{\rm max})=-\infty\) to derive the last equality of (103).

Figure 18: Left: the function \(r_{0}(z_{\rm max})\); Right: the function \(r_{0}(t)\). Note that \(z_{\rm max}=1\) corresponds to \(t=0\) and \(z_{\rm max}=2^{\frac{1}{d-1}}\approx 1.260\) corresponds to \(t\rightarrow\infty\). It shows that \(r_{0}=r(z_{\rm max})\) approaches zero in the large time limit.

From (103), we can solve for \(v^{\prime}(z)\) as a function of \(r^{\prime}(z)\) and \(r(z)\). Substituting \(v^{\prime}(z)\) into the Euler-Lagrange equation derived from (69), we get the EOM of \(r(z)\) decoupled from \(v(z)\)
\[r^{\prime\prime}\left(8z^{2d+1}z_{\rm max}^{2}\cosh^{2}(r)f\left(z_{\rm max}\right)\left(\frac{\cosh\left(r_{0}\right)}{z_{\rm max}}\right)^{2d}-8fz^{3}\cosh^{2}\left(r_{0}\right)\cosh^{2d}(r)\right)\]
\[-4z^{2d}z_{\rm max}^{2}r^{\prime}f\left(z_{\rm max}\right)\left(\frac{\cosh\left(r_{0}\right)}{z_{\rm max}}\right)^{2d}\left(zr^{\prime}\left(zf^{\prime}-2f\right)+\sinh(2r)-4\cosh^{2}(r)\right)\]
\[+4z^{2}r^{\prime}\cosh^{2}\left(r_{0}\right)\cosh^{2d-2}(r)\left(fzr^{\prime}\left(2(d-2)fzr^{\prime}+d\sinh(2r)\right)+2(d-3)f\cosh^{2}(r)-zf^{\prime}\cosh^{2}(r)\right)\]
\[+8(d-1)z\sinh(r)\cosh^{2}\left(r_{0}\right)\cosh^{2d+1}(r)=0. \tag{104}\]
Similarly, substituting \(v^{\prime}(z)\) into the area functional (69) and the time (70), we obtain
\[A_{\rm N}=V\int_{z_{1}}^{z_{\rm max}}dz\left(\frac{\cosh(r)}{z} \right)^{d-2}\sqrt{\frac{2z^{2}f(z)\left(r^{\prime}\right)^{2}+\cosh(2r)+1}{2 z^{2}\left(f(z)-f\left(z_{\rm max}\right)\left(\frac{z\cosh(r_{0})}{z_{\rm max }\cosh(r)}\right){}^{2d-2}\right)}}, \tag{105}\]
and the time as functions of \(r(z)\) only. We have thus reduced the problem to solving a single differential equation (104) for \(r(z)\).
Solving (104) perturbatively around the turning point \(z=z_{\rm max}\), we derive
\[r(z)=r_{0}+r_{1}(z-z_{\rm max})+r_{2}(z-z_{\rm max})^{2}+O(z-z_{\rm max})^{3}, \tag{106}\]
where
\[r_{1}=\frac{\sinh\left(2r_{0}\right)}{z_{\rm max}^{d}-2z_{\rm max}}, \tag{107}\]
\[r_{2}=\frac{\sinh\left(2r_{0}\right)\left(2z_{\rm max}^{d+1}\left((d-5)\cosh\left(2r_{0}\right)-2d-5\right)+(d+4)z_{\rm max}^{2d}+24z_{\rm max}^{2}\cosh^{2}\left(r_{0}\right)\right)}{6z_{\rm max}\left(2z_{\rm max}-z_{\rm max}^{d}\right)^{3}}. \tag{108}\]
From (106) we get the BC around the turning point
\[r(z_{\rm max}-\epsilon)=r_{0}+r_{1}\epsilon+r_{2}\epsilon^{2},\ \ r^{\prime}(z_{\rm max }-\epsilon)=r_{1}+2r_{2}\epsilon, \tag{109}\]
where \(\epsilon\) is a small cutoff. For instance, we can choose \(\epsilon=10^{-9}\). For any given \(-\rho_{1}<r_{0}\leq 0\) and \(1\leq z_{\rm max}\leq\bar{z}_{\rm max}=2^{\frac{1}{d-1}}\), we can numerically solve EOM (104) with the BC (109), and then derive the value of \(r\) on the left brane
\[r(z_{1})=-\rho_{1}, \tag{110}\]
where we have chosen the parameters \(z_{1}=0.95\), \(\rho_{1}=0.5\), and \(d=4\) as in sect. 3. Of course, for arbitrary inputs \(r_{0}\) and \(z_{\rm max}\), the additional BC (110) is generally not satisfied. We can apply the shooting method to resolve this problem: for any given \(1\leq z_{\rm max}\leq 2^{\frac{1}{d-1}}\), we adjust the input \(r_{0}\) so that the BC (110) is obeyed. In this way, we fix the relation between \(r_{0}\) and \(z_{\rm max}\). See Fig. 18 for \(r_{0}(z_{\rm max})\). Note that \(z_{\rm max}=1\) corresponds to \(t=0\) and \(z_{\rm max}=2^{\frac{1}{d-1}}\) corresponds to \(t\to\infty\).
Now we have numerically solved \(r(z)\) for \(1\leq z_{\rm max}\leq 2^{\frac{1}{d-1}}\). Substituting the solution into the area (105) and the time (70)11, we can derive \(A_{\rm N}(z_{\rm max})\), \(t(z_{\rm max})\), and thus \(A_{\rm N}(t)\). See Fig. 19 for \(A_{\rm N}(z_{\rm max})\) and \(t(z_{\rm max})\). See Fig. 20 for \(A_{\rm N}(t)\) with \(V=1\), which shows that \(A_{\rm N}\) increases with time and that the growth rate approaches the constant \(\lim_{t\to\infty}dA_{\rm N}/dt=1/2\) at late times, in agreement with the analytical result (73). This completes the numerical derivation of the time evolution of \(A_{\rm N}\).
Figure 20: The area of RT surface increases with time in the no-island phase. In this large time limit, we have \(\lim_{t\to\infty}dA_{\rm N}/dt=1/2\), where we have set \(V=1\) for simplicity. |
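To make the workflow of this appendix concrete, here is a Python-style template of the shooting procedure (our own sketch, not the actual solver): `eom_rhs` is a stub that must be replaced by \(r^{\prime\prime}\) solved from (104), and the root bracket is an illustrative guess.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

d, z1, rho1, eps = 4, 0.95, 0.5, 1e-9  # parameters of sect. 3.1

def eom_rhs(z, y, r0, zmax):
    r, rp = y
    rpp = 0.0  # stub: replace with r'' solved from the EOM (104)
    return [rp, rpp]

def r_on_brane(r0, zmax):
    # Perturbative BC (109) near the turning point, keeping only r_1 of (107)
    r1 = np.sinh(2 * r0) / (zmax**d - 2 * zmax)
    sol = solve_ivp(eom_rhs, [zmax - eps, z1], [r0 + r1 * eps, r1],
                    args=(r0, zmax), rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

def shoot_r0(zmax):
    # Adjust r_0 so that the brane BC (110), r(z_1) = -rho_1, is satisfied;
    # with the stub EOM the root is meaningless and only illustrates the flow.
    return brentq(lambda r0: r_on_brane(r0, zmax) + rho1, -rho1 + 1e-6, -1e-6)
```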
2305.14376 | PTGB: Pre-Train Graph Neural Networks for Brain Network Analysis | The human brain is the central hub of the neurobiological system, controlling
behavior and cognition in complex ways. Recent advances in neuroscience and
neuroimaging analysis have shown a growing interest in the interactions between
brain regions of interest (ROIs) and their impact on neural development and
disorder diagnosis. As a powerful deep model for analyzing graph-structured
data, Graph Neural Networks (GNNs) have been applied for brain network
analysis. However, training deep models requires large amounts of labeled data,
which is often scarce in brain network datasets due to the complexities of data
acquisition and sharing restrictions. To make the most out of available
training data, we propose PTGB, a GNN pre-training framework that captures
intrinsic brain network structures, regardless of clinical outcomes, and is
easily adaptable to various downstream tasks. PTGB comprises two key
components: (1) an unsupervised pre-training technique designed specifically
for brain networks, which enables learning from large-scale datasets without
task-specific labels; (2) a data-driven parcellation atlas mapping pipeline
that facilitates knowledge transfer across datasets with different ROI systems.
Extensive evaluations using various GNN models have demonstrated the robust and
superior performance of PTGB compared to baseline methods. | Yi Yang, Hejie Cui, Carl Yang | 2023-05-20T21:07:47Z | http://arxiv.org/abs/2305.14376v1 | # PTGB: Pre-Train Graph Neural Networks for Brain Network Analysis
###### Abstract
The human brain is the central hub of the neurobiological system, controlling behavior and cognition in complex ways. Recent advances in neuroscience and neuroimaging analysis have shown a growing interest in the interactions between brain regions of interest (ROIs) and their impact on neural development and disorder diagnosis. As a powerful deep model for analyzing graph-structured data, Graph Neural Networks (GNNs) have been applied for brain network analysis. However, training deep models requires large amounts of labeled data, which is often scarce in brain network datasets due to the complexities of data acquisition and sharing restrictions. To make the most out of available training data, we propose PTGB, a GNN pre-training framework that captures intrinsic brain network structures, regardless of clinical outcomes, and is easily adaptable to various downstream tasks. PTGB comprises two key components: (1) an unsupervised pre-training technique designed specifically for brain networks, which enables learning from large-scale datasets without task-specific labels; (2) a data-driven parcellation atlas mapping pipeline that facilitates knowledge transfer across datasets with different ROI systems. Extensive evaluations using various GNN models have demonstrated the robust and superior performance of PTGB compared to baseline methods.
## 1 Introduction
Brain network analysis has attracted considerable interest in neuroscience studies in recent years. A brain network is essentially a connected graph constructed from different raw imaging modalities such as Diffusion Tensor Imaging (DTI) and functional Magnetic Resonance Imaging (fMRI), where nodes correspond to anatomical regions of interest (ROIs) given a predefined parcellation atlas, and connections are usually formed from the correlations among ROIs.
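As a concrete illustration of this construction (a minimal example of ours, not a pipeline from PTGB), a functional brain network can be built from ROI time series via pairwise Pearson correlation; the sparsification threshold below is an arbitrary illustrative choice.

```python
import numpy as np

def build_brain_network(roi_ts: np.ndarray, threshold: float = 0.3) -> np.ndarray:
    """roi_ts: (M, T) array of M ROI time series (e.g., fMRI BOLD signals).
    Returns a weighted adjacency matrix A in R^{M x M}."""
    A = np.corrcoef(roi_ts)          # pairwise Pearson correlations among ROIs
    np.fill_diagonal(A, 0.0)         # no self-loops
    A[np.abs(A) < threshold] = 0.0   # sparsify weak connections
    return A

A = build_brain_network(np.random.randn(90, 200))  # e.g., 90 ROIs, 200 time points
```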
Effective brain network analysis plays a pivotal role in understanding the biological structures and functions of complex neural systems, which potentially helps the early diagnosis of neurological disorders and facilitates neuroscience research (Martensson et al., 2018; Yahata et al., 2016; Lindquist, 2008; Smith, 2012).
Graph Neural Networks (GNNs) have emerged as a powerful tool for analyzing graph-structured data, delivering impressive results on a wide range of network datasets, including social networks, recommender systems, knowledge graphs, protein and gene networks, and molecules, among others (Kipf and Welling, 2017; Hamilton et al., 2017; Schlichtkrull et al., 2018; Vashishth et al., 2020; Xu et al., 2019; Ying et al., 2018; Zhang et al., 2020; Liu et al., 2022; Xiong et al., 2020; Cui et al., 2022; Xu et al., 2022). These models have proven their ability to learn powerful representations and efficiently compute complex graph structures, making them well-suited for various downstream tasks. In the field of neuroscience, GNNs have been applied to brain network analysis, specifically for graph-level classification/regression (Ying et al., 2018; Xu et al., 2019; Errica et al., 2020; Luo et al., 2022; Dai et al., 2023; Xu et al., 2023a) and important vertex/edge identification (Ying et al., 2019; Luo et al., 2020; Vu and Thai, 2020; Yu et al., 2023; Kan et al., 2022c), towards tasks such as connectome-based disease prediction and multi-level neural pattern discovery. However, deep learning models, including GNNs, require large amounts of labeled data to achieve optimal performance (Hu et al., 2020; You et al., 2020; Zhu et al., 2021a). While neuroimaging datasets are available from national neuroimaging studies such as the ABCD (Casey et al., 2018), ADNI (Hinrichs et al., 2009), and PPMI (Aleksovski et al., 2018), these datasets are still relatively small compared to graph datasets from other domains, such as datasets with 41K to 452K graphs on OGB (Hu et al., 2020) and datasets with thousands to millions of graphs on NetRepo (Rossi and Ahmed, 2016). The limited amount of data can result in overfitting when training deep models.
Transfer learning offers a solution to the challenge of limited data availability in training deep models. It allows a model pre-trained on large-scale source datasets to be adapted to smaller target datasets while maintaining robust performance. However, the success of transfer learning depends on the availability of similar supervision labels on the source and target dataset. This is not always feasible in large-scale public studies, particularly in the field of brain network analysis. Self-supervised pre-training has been shown to be effective in various domains, such as computer vision (He et al., 2020; Chen et al., 2020), natural language processing (Devlin et al., 2019; Yu et al., 2022), and graph mining (Sun et al., 2022). We aim to explore a self-supervised pre-training approach for GNNs on brain networks that is not restricted by task-specific supervision labels. Despite the promising potential, unique challenges still need to be addressed to achieve effective disease prediction. One of the major challenges is the inconsistent ROI parcellation systems in constructing different brain network datasets, which hinders the transferability of pre-trained models across datasets. The process of parcellating raw imaging data into brain networks is highly complex and usually done ad hoc by domain experts for each study, making it unrealistic to expect every institution to follow the same parcellation system. Although some institutions may release preconstructed brain network datasets (Di Martino et al., 2014), the requirement for universal adherence to a single parcellation system is infeasible.
To tackle the challenge of insufficient training data for GNNs in brain network analysis, we present **P**re-**T**raining **G**raph neural networks for **B**rain networks (PTGB), a fully unsupervised pre-training approach that captures shared structures across brain network datasets. PTGB adapts the data-efficient MAML (Finn et al., 2017) with a two-level contrastive learning strategy based on the naturally aligned node systems of brain networks across individuals. Additionally, to overcome the issue of diverse parcellation systems, we introduce a novel data-driven atlas mapping technique. This technique transforms the original features into low-dimensional representations in a uniform embedding space and aligns them using variance-based projection, which incorporates regularizations that preserve spatial relationships, consider neural modules, and promote sparsity.
In summary, our contributions are threefold:
* We present an unsupervised pre-training approach for GNNs on brain networks, addressing the issue of resource-limited training.
* We propose a two-level contrastive sampling strategy tailored for GNN pre-training on brain networks, which is combined with a data-driven brain atlas mapping strategy that employs customized regularizations and variance-based sorting to enhance cross-dataset learning.
* Our experiments against shallow and deep baselines demonstrate the effectiveness of our proposed PTGB. Further, we provide an in-depth analysis to understand the influence of each component.
## 2 Related Work
**GNNs for Brain Network Analysis.** GNNs are highly effective for analyzing graph-structured data, and there have been some pioneering attempts to use them for predicting diseases by learning over brain networks. For example, BrainGNN (Li et al., 2021) proposes ROI-aware graph convolutional layers and ROI-selection pooling layers for predicting neurological biomarkers. BrainNetCNN (Kawahara et al., 2017) designs a CNN that includes edge-to-edge, edge-to-node, and node-to-graph convolutional filters, leveraging the topological locality of brain connectome structures. BrainNetTF (Kan et al., 2022) introduces a transformer architecture with an orthonormal clustering readout function that considers ROI similarity within functional modules. Additionally, various studies (Cui et al., 2022; Kan et al., 2022; Zhu et al., 2022; Cui et al., 2022; Yu et al., 2023) have shown that, when data is sufficient, GNNs can greatly improve performance in tasks such as disease prediction. However, in reality, the lack of training data is a common issue in neuroscience research, particularly for specific domains and clinical tasks (Xu et al., 2023). Despite this, there has been little research into the ability of GNNs to train effectively for brain network analysis when data is limited.
**Unsupervised Graph Representation Learning and GNN Pre-training.** Unsupervised learning is a widely used technique for training complex models when resources are limited. Recent advancements in contrastive learning (Chen et al., 2020; He et al., 2020; Yu et al., 2021; Zhu et al., 2022) have led to various techniques for graphs. For instance, GBT (Bielak et al., 2022) designs a Barlow Twins (Zbontar et al., 2021) loss function based on the empirical cross-correlation of node representations learned from two different views of the graph (Zhao et al., 2021). Similarly, GraphCL (You et al., 2020) involves a comparison of graph-level representations obtained from two different augmentations of the same graph. DGI (Velickovic et al., 2019) contrasts graph and node representations learned from the original graph and its corruption.
To obtain strong models for particular downstream tasks, unsupervised training techniques can be used to pre-train a model, which is then fine-tuned on the downstream tasks to reduce the dependence on labeled training data. This approach has proven highly successful in computer vision (Cao et al., 2020; Grill et al., 2020), natural language processing (Devlin et al., 2019; Radford et al., 2018, 2021; Liang et al., 2020), and multi-modality (e.g., text-image pair) learning (Li et al., 2022; Yao et al., 2022). There are various strategies for pre-training GNNs as well. GPT-GNN (Hu et al., 2020) proposes graph-oriented pretext tasks, such as masked attribute and edge reconstruction. L2P-GNN (Lu et al., 2021) introduces dual adaptation by simultaneously optimizing the encoder on a node-level link prediction objective and a graph-level self-supervision task similar to DGI. Others, such as GMPT (Hou et al., 2022), adopt an inter-graph message-passing approach to obtain context-aware node embeddings and optimize the model concurrently under supervision and self-supervision. To the best of our knowledge, the effectiveness of both contrastive learning and pre-training has not been investigated in the context of the unique properties of brain networks.
## 3 Unsupervised Brain Network Pre-training
**Problem Definition.** The available training resources include a collection of brain network datasets \(\mathcal{S}=\{\mathcal{D}_{1},\mathcal{D}_{2},\cdots\mathcal{D}_{s}\}\), where each dataset contains a varying number of brain networks. We consider each brain network instance with \(M\) defined ROIs as an undirected weighted graph \(\mathcal{G}\) with \(M\) nodes. \(\mathcal{G}\) is represented by a node set \(\mathcal{V}=\{v_{m}\}_{m=1}^{M}\), an edge set \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\), and a weighted adjacency matrix \(\mathbf{A}\in\mathbb{R}^{M\times M}\). We define a \(\theta\)-parameterized GNN model \(f(\cdot)\), and our goal is to propose a pre-training schema that can effectively learn an initialization \(\theta_{0}\) for \(f(\cdot)\) on a set of source datasets \(\mathcal{S}_{\text{source}}\subset\mathcal{S}\) via self-supervision and adapt \(f_{\theta_{0}}(\cdot)\) to a local optimum \(\theta^{*}\) on a target set \(\mathcal{S}_{\text{target}}\in\mathcal{S}\).
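In code, this setup can be summarized by a minimal data structure (hypothetical names, for illustration only):

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class BrainNetwork:
    """One undirected weighted graph G with M ROI nodes."""
    adjacency: np.ndarray  # A in R^{M x M}, symmetric
    atlas: str             # parcellation system defining the M ROIs

@dataclass
class BrainDataset:
    """One dataset D_i in S, holding a varying number of brain networks."""
    networks: List[BrainNetwork]
```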
### GNN Pre-training for Brain Networks
The goal of pre-training a GNN model for brain networks is to learn an appropriate initialization that can easily be adapted to downstream tasks. Note that the concept of pre-training is distinct from transfer learning, since the latter expects similarity between the source and target data as well as between their learning objectives (_e.g.,_ loss functions), while this is often lacking in brain network analysis due to the absence of
sufficient ground-truth labels in large-scale studies as well as inherent differences in brain network parcellation methods across datasets. Practically, a GNN model can be pre-trained either on a single task with a single source dataset or on a collection of tasks with multiple source datasets. The proposed PTGB framework adopts the latter option, since multi-task pre-training reduces the likelihood of the model being biased towards the knowledge of data from a single source, which could be particularly concerning if the source and target data share limited similarity, leading to poor downstream adaptation due to information loss during model transfer. However, a naive approach to multi-task pre-training would not suffice for learning a robust model initialization. Specifically, it presents two underlying risks: (1) the model may not perform consistently well on all tasks and may overfit to a particular task, which significantly undermines model generalizability; and (2) the process could be computationally inefficient with an increasing number of tasks, regardless of whether the model is optimized sequentially or simultaneously on all tasks (Yang et al., 2022).
To this end, we adopt the popular data-efficient training technique presented in MAML (Finn et al., 2017), with the goal of ensuring consistent performance on all tasks as well as computational efficiency. The MAML technique is characterized by an inner-loop adaptation and an outer-loop update (Raghu et al., 2019). At each training iteration, each input dataset is partitioned into an inner-loop support set and an outer-loop query set. The model is first trained on the support set without explicitly updating the parameters; instead, the updates are temporarily stored as fast weights (Ba et al., 2016). These fast weights are then used to evaluate the query set and compute the actual gradients. This approach makes use of approximate higher-order derivatives (Tan and Lim, 2019) at each step, allowing the model to foresee its optimization trajectory a few steps ahead, which practically reduces the number of training iterations required to reach local optima. In our scenario, the joint optimization involves summing the loss over each brain network dataset, i.e., for \(n\) datasets with their respective temporary fast weights \(\{\theta_{i}^{\prime}\}_{i=1}^{n}\) and outer-loop queries \(\{\text{query}_{i}\}_{i=1}^{n}\), the step-wise update of the model parameter at time \(t\) is \(\theta^{t+1}=\theta^{t}-\alpha\nabla_{\theta^{t}}\sum_{i=1}^{n}\mathcal{L}_{\text{query}_{i}}f_{\theta_{i}^{\prime}}(\cdot)\). We summarize this process in Algorithm 1. In addition, we will demonstrate the advantages of MAML-style pre-training over vanilla multi-task pre-training as well as single-task pre-training through the experiments discussed in Section 4.1.
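As an illustration (our own sketch, not the exact PTGB implementation), the inner/outer loop above can be written compactly with PyTorch (version 2.0 or later, for `torch.func.functional_call`); the toy encoder and placeholder loss stand in for the GNN \(f_{\theta}\) and the self-supervised objective:

```python
import torch
from torch import nn
from torch.func import functional_call

def maml_step(model: nn.Module, tasks, loss_fn, inner_lr=1e-2, outer_lr=1e-3):
    """One MAML-style update; tasks is a list of (support, query) batches."""
    params = dict(model.named_parameters())
    outer_loss = torch.zeros(())
    for support, query in tasks:
        # Inner loop: fast weights theta'_i, stored without updating the model.
        s_loss = loss_fn(functional_call(model, params, (support,)))
        grads = torch.autograd.grad(s_loss, list(params.values()),
                                    create_graph=True)  # keep higher-order terms
        fast = {n: p - inner_lr * g
                for (n, p), g in zip(params.items(), grads)}
        # Outer loop: evaluate the query set under the fast weights.
        outer_loss = outer_loss + loss_fn(functional_call(model, fast, (query,)))
    # Actual update: theta^{t+1} = theta^t - alpha * grad of summed query losses.
    outer_grads = torch.autograd.grad(outer_loss, list(params.values()))
    with torch.no_grad():
        for p, g in zip(params.values(), outer_grads):
            p -= outer_lr * g
    return float(outer_loss)

# Usage with a toy encoder and a placeholder self-supervised loss:
model = nn.Linear(8, 4)
tasks = [(torch.randn(16, 8), torch.randn(16, 8)) for _ in range(3)]
maml_step(model, tasks, loss_fn=lambda z: z.pow(2).mean())
```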
### Brain Network Oriented Two-Level Contrastive Learning
Given the high cost of acquiring labeled training data for brain network analysis, our pre-training pipeline
Figure 1: Overview of the proposed framework PTGB. The initial features of the source datasets are projected to a fixed dimension through atlas transformation followed by variance-based feature alignment, which facilitates self-supervised GNN pre-training on multiple datasets via the novel two-level contrastive learning objective. The learned model can serve as the parameter initialization and be further fine-tuned on target tasks.
of PTGB adopts the effective label-free learning strategy of contrastive learning (CL). CL aims to maximize the mutual information (MI) between an anchor point of investigation \(X\) from a data distribution \(\mathcal{H}\) and its positive samples \(X^{+}\), while minimizing MI with its negative samples \(X^{-}\). The contrastive objective function is formulated as follows:
\[\mathcal{J}_{\text{con}}=\arg\min\left[\left(-I(X;X^{+})+I(X;X^{-})\right) \right]. \tag{1}\]
In the context of graph CL, given an anchor node representation \(z_{\alpha}\), a set of positive samples \(\mathbf{S}^{+}\), and a set of negative samples \(\mathbf{S}^{-}\), the training objective is based on the Jensen-Shannon divergence (Hjelm et al., 2019),
\[\mathcal{J}_{\text{JSD}}(z_{\alpha})=\arg\min\left[\left(-I(z_{\alpha}; \mathbf{S}^{+})+I(z_{\alpha};\mathbf{S}^{-})\right)\right], \tag{2}\]
where
\[I(z_{\alpha};\mathbf{S}^{+}) =\frac{1}{|\mathbf{S}^{+}|}\sum_{z_{s^{+}}\in\mathbf{S}^{+}}\text {sp}\left(\frac{z_{\alpha}^{\top}z_{s^{+}}}{\|z_{\alpha}\|\|z_{s^{+}}\|}\right), \tag{3}\] \[I(z_{\alpha};\mathbf{S}^{-}) =\frac{1}{|\mathbf{S}^{-}|}\sum_{z_{s^{-}}\in\mathbf{S}^{-}}\text {sp}\left(\frac{z_{\alpha}^{\top}z_{s^{-}}}{\|z_{\alpha}\|\|z_{s^{-}}\|}\right), \tag{4}\]
and \(\text{sp}(x)=\log(1+e^{x})\) denotes the softplus nonlinearity.
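For illustration, the objective in Equations (2)-(4) can be transcribed directly into PyTorch, assuming the anchor and sample representations are provided as tensors whose rows are node embeddings:

```python
# A direct PyTorch transcription of Eqs. (2)-(4).
import torch
import torch.nn.functional as F

def mutual_info(z_anchor, samples):
    """Mean softplus of cosine similarity between the anchor and samples."""
    sims = F.cosine_similarity(z_anchor.unsqueeze(0), samples, dim=-1)
    return F.softplus(sims).mean()

def jsd_loss(z_anchor, pos_samples, neg_samples):
    """J_JSD(z) = -I(z; S+) + I(z; S-), to be minimized."""
    return -mutual_info(z_anchor, pos_samples) + mutual_info(z_anchor, neg_samples)
```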
The ultimate goal of our framework is to localize effective GNN CL learning (Zhu et al., 2021) for brain networks. Given a dataset \(\mathcal{D}\) and an anchor node \(i\) from graph \(\mathcal{G}_{p}\in\mathcal{D}\) with the learned representation \(z_{i,p}\), we propose to categorize the possible sample selections into three fundamental types (a visualization is shown in Figure 2):
* \(\mathbf{\underline{S_{1}}}\): \(\{z_{j,p}\,:\,j\in\mathcal{N}_{k}(i,p)\}\) refers to the node representation set within the \(k\)-hop neighborhood of the anchor in graph \(\mathcal{G}_{p}\).
* \(\mathbf{\underline{S_{2}}}\): \(\{z_{j,p}\,:\,j\notin\mathcal{N}_{k}(i,p)\}\) refers to the remaining node representation set in graph \(\mathcal{G}_{p}\), i.e., nodes outside the \(k\)-hop neighborhood of the anchor.
* \(\mathbf{\underline{S_{3}}}\): \(\{z_{j,q}\,:\,\mathcal{G}_{q}\in\mathcal{D},\,j\in\mathcal{G}_{q},\,q\neq p\}\) refers to the node representation set of nodes in all the other graphs of dataset \(\mathcal{D}\).
Notice that our framework leverages the \(k\)-hop substructure around the anchor node to further differentiate \(\mathbf{S_{1}}\) and \(\mathbf{S_{2}}\) for contrastive optimization. This design is driven by two considerations: **(1) Regarding GNN learning.** Given that node representations are learned from the information aggregation of its \(k\)-hop neighborhood, maximizing the MI of an anchor to its \(k\)-hop neighbors naturally enhances lossless message passing of GNN convolutions. **(2) Regarding the uniqueness of brain networks.** Brain networks can be anatomically segmented into smaller neural system modules (Cui et al., 2022), thus capturing subgraph-level knowledge can provide valuable signals for brain-related analysis.
Building on these three fundamental types of samples, we take advantage of the property of brain networks that ROI identities and orders are fixed across samples to introduce an additional sample type. This encourages the GNN to extract shared substructure knowledge by evaluating the MI of an anchor against its presence in other graphs. Given an anchor representation \(z_{i,p}\) of node \(i\) from graph \(\mathcal{G}_{p}\in\mathcal{D}\), the novel inter-graph sample type is defined as:
Figure 2: Visual demonstration of the sample types where \(X_{i,p}\) is the anchor and \(\mathbf{S_{1}}/\mathbf{S_{4}}\) are sampled as 1-hop neighbors.
* \(\mathbf{S_{4}}\):\(\{z_{j,q}\,:\,j\in\mathcal{N}_{k}(i,q)\cap\mathcal{N}_{k}(i,p),\,\mathcal{G}_{q} \in\mathcal{D},\,q\neq p\}\), refers to the node representation set within the \(k\)-hop neighborhood of node \(i\) in all other graphs in \(\mathcal{D}\). Conceptually, \(\mathbf{S_{4}}\) is a special subset of \(\mathbf{S_{3}}\).
It is important to note that for an anchor node \(i\), its \(k\)-hop neighborhood structures might not be identical among different graphs. As a result, we only consider shared neighborhoods when evaluating the mutual information across multiple graphs. To encourage the learning of unique neighborhood knowledge within a single brain network instance and shared substructure knowledge across the entire dataset, we configure \(\mathbf{S_{1}}\) and \(\mathbf{S_{4}}\) as positive samples while \(\mathbf{S_{2}}\) and the set \(\mathbf{S_{3}}-\mathbf{S_{4}}\) as negative samples, as illustrated in Figure 3. Strictly speaking, \(\mathbf{S_{1}}\) does not include the anchor itself, but the anchor is always a positive sample to itself by default. Furthermore, our sampling categorization can also help understand the objective formulations in various state-of-the-art graph CL frameworks (Velickovic et al., 2019; Qiu et al., 2020; Xia et al., 2022; Sun et al., 2019; Zhu et al., 2021). We summarize our findings in Table 1. Specifically, "+" denotes positive sampling; "-" denotes negative sampling; and "/" means that the sample type is not considered. It can be observed that DGI and InfoGraph (InfoG) use graph representation pooled from node representations as a special sample, which is essentially equivalent to jointly considering \(\mathbf{S_{1}}\) and \(\mathbf{S_{2}}\) without explicit differentiation. On the other hand, GCC and EGI, which are more closely related to our framework, leverage neighborhood mutual information maximization on a single graph, but fail to extend this to a multi-graph setting like ours.
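A minimal sketch of how the four sample types could be assembled for an anchor node is given below; it assumes precomputed \(k\)-hop neighbor sets `khop[q][i]` (built, e.g., with networkx) and relies on ROI indices being shared across graphs, as stated above.

```python
# Assemble the index sets S1-S4 for anchor node i in graph p; khop[q][i]
# is the precomputed set of k-hop neighbors of node i in graph q.
def sample_types(khop, i, p, num_nodes, num_graphs):
    s1 = set(khop[p][i])                          # k-hop neighbors in G_p
    s2 = set(range(num_nodes)) - s1 - {i}         # remaining nodes in G_p
    s3, s4 = set(), set()
    for q in range(num_graphs):
        if q == p:
            continue
        shared = set(khop[q][i]) & s1             # shared neighborhoods only
        s4 |= {(q, j) for j in shared}
        s3 |= {(q, j) for j in range(num_nodes)}
    positives = {(p, j) for j in s1} | s4         # S1 and S4
    negatives = {(p, j) for j in s2} | (s3 - s4)  # S2 and S3 - S4
    return positives, negatives
```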
### Data-driven Brain Atlas Mapping
MotivationWhen fine-tuning a pre-trained model on a new data domain, the misalignment between source and target signals can negatively impact its adaptation. This issue is particularly relevant in brain networks, where it is hard, if not impossible, to require every brain network data provider to stick to the same brain atlas template, and each template can use a unique system of ROIs. For instance, the HIV dataset we obtained is parcellated from the AAL90 template (Tzourio-Mazoyer et al., 2002), leading to 90 defined ROIs; while the PPMI dataset uses the Desikan-Killiany84 template (Desikan et al., 2006), resulting in 84 defined ROIs. As a result, brain networks in the two datasets will have different ROI semantics and graph structures. Although GNNs can handle graphs without fixed numbers and orders of nodes, constructing the most informative ROI (_i.e.,_ node) features as the connection profiles (_i.e.,_ adjacency) (Cui et al., 2022, 2022) can result in different feature dimensions and physical meanings. While manual conversion can be performed to translate between templates, it is a costly process that requires domain expertise to perform even coarse cross-atlas mappings.
To address this issue, we aim to provide a data-driven atlas mapping solution that is easily accessible and eliminates the strong dependency on network construction. The data-driven atlas mapping solution, which transforms the original node features into lower-dimensional representations that preserve the original connectivity information and align features across datasets, is learned independently on each dataset prior to GNN pre-training.
#### 3.3.1 Autoencoder with Brain Network Oriented Regularizers
| | \(\mathbf{S_{1}}\) | \(\mathbf{S_{2}}\) | \(\mathbf{S_{3}}\) | \(\mathbf{S_{4}}\) |
| --- | :---: | :---: | :---: | :---: |
| DGI | + | + | / | / |
| InfoG | + | + | – | / |
| GCC | + | – | – | / |
| EGI | + | – | – | / |
| Ours | + | – | – | + |

Table 1: The sampling configuration of some existing graph contrastive learning methods. "+" denotes positive sampling, "–" negative, and "/" no consideration.

Figure 3: The sampling configuration of the proposed PTGB framework. \(\mathbf{S_{1}}\) and \(\mathbf{S_{4}}\) are positive samples; \(\mathbf{S_{2}}\) and the set \(\mathbf{S_{3}}-\mathbf{S_{4}}\) are negative samples.

PTGB adopts a one-layer linear autoencoder (AE) as the base structure. The AE consists of a linear projection encoder \(\mathbf{W}\) and a transposed decoder \(\mathbf{W}^{\top}\), with the goal of learning a low-dimensional projection that can easily reconstruct the original representation. The loss function is defined as minimizing the reconstruction error \(\mathcal{L}_{\text{rec}}=(1/M)\|\mathbf{X}-\mathbf{X}\mathbf{W}\mathbf{W}^{\top}\|_{2}^{2}\), where \(\mathbf{X}\in\mathbb{R}^{M\times M}\) is the input and \(\mathbf{W}\in\mathbb{R}^{M\times D}\) is the learnable projection (Hinton and Zemel, 1993). To further enhance the feature compression and to guide the overall AE optimization, we propose to incorporate several regularizers that take into account the unique characteristics of brain networks:
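A minimal PyTorch sketch of this tied-weight linear AE and its reconstruction loss is given below (an illustration under the stated notation, not the PTGB implementation):

```python
# One-layer linear AE with a tied (transposed) decoder.
import torch
import torch.nn as nn

class LinearAE(nn.Module):
    def __init__(self, num_rois, dim):
        super().__init__()
        # Encoder W in R^{M x D}; the decoder is its transpose W^T
        self.W = nn.Parameter(0.01 * torch.randn(num_rois, dim))

    def forward(self, X):
        Y = X @ self.W          # projected features, shape (M, D)
        X_rec = Y @ self.W.T    # tied decoder reconstructs the input
        return Y, X_rec

def reconstruction_loss(X, X_rec):
    """L_rec = (1/M) * ||X - X W W^T||_2^2"""
    return (X - X_rec).pow(2).sum() / X.shape[0]
```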
**Locality-Preserving Regularizer (LR).** We aim to ensure that the compressed features preserve the spatial relationships of the original brain surface. To achieve this, we incorporate a locality-preserving regularizer (He et al., 2005) into the AE objective. The regularizer is formulated as \(\mathcal{L}_{\text{loc}}=(1/M)\|\mathbf{Y}-\mathbf{T}\mathbf{Y}\|^{2}\), where \(\mathbf{Y}\in\mathbb{R}^{M\times D}\) represents the projected features from the AE and \(\mathbf{T}\in\mathbb{R}^{M\times M}\) is a transition matrix constructed from the \(k\)-NN graph of the 3D coordinates of ROIs.
**Modularity-Aware Regularizer (CR).** Brain networks can be segmented into various neural system modules that characterize functional subsets of ROIs. In graph terminology, they are community structures. The projected feature should also capture information about neural system membership. However, obtaining ground-truth segmentations is a difficult task that requires expert knowledge. To overcome this challenge, we resort to community detection methods on graphs, specifically based on modularity maximization. The regularizer (Salha-Galvan et al., 2022) is defined as minimizing
\[\mathcal{L}_{\text{com}}=-\frac{1}{2D}\sum_{i,j=1}^{M}\left[\mathbf{A}_{ij}- \frac{k_{i}k_{j}}{2D}\right]\exp(-\|y_{i}-y_{j}\|_{2}^{2}), \tag{5}\]
where \(\mathbf{A}\in\mathbb{R}^{M\times M}\) is the graph adjacency matrix, \(k_{i}\) denotes degree of node \(i\), and \(y_{i}\) is the AE projected features. Essentially, this optimization minimizes the \(L_{2}\) distance between representations of nodes within the same communities, as measured by the modularity score, and maximizes the distance between representations of nodes in different communities.
**Sparsity-Oriented Regularizer (SC).** Sparse networks have proven to be effective in learning robust representations from noisy data (Jeong et al., 2017; Shi et al., 2019; Makhzani and Frey, 2014). In brain connectome analysis, sparsity has also been shown to improve the interpretation of task-specific ROI connections in generation and classification tasks (Kan et al., 2022). To this end, we implement the popular KL-divergence smoothing to enforce sparsity in the parameters of the linear projection encoder, \(\mathbf{W}\). This is formulated as:
\[\mathcal{L}_{\text{KL}}=\sum_{i=1}^{M}\sum_{j=1}^{D}\left[\rho\log\left(\frac {\rho}{\hat{\rho}_{ij}}\right)+(1-\rho)\log\left(\frac{1-\rho}{1-\hat{\rho}_ {ij}}\right)\right], \tag{6}\]
where \(\rho\) is a small positive constant set as the target sparsity value, and \(\hat{\rho}_{ij}\) represents the element-wise activation of the encoder projection matrix \(\mathbf{W}\in\mathbb{R}^{M\times D}\).
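The three regularizers can be sketched as follows, assuming `Y = X @ W` are the projected features, `T` the \(k\)-NN transition matrix over ROI coordinates, and `A` the adjacency matrix; interpreting \(2D\) in Equation (5) as the total edge weight and the element-wise activation \(\hat{\rho}_{ij}\) as a sigmoid of \(\mathbf{W}\) are our own assumptions, not details confirmed by the text.

```python
# Sketches of the locality (L_loc), modularity (Eq. 5), and sparsity
# (Eq. 6) regularizers under the assumptions stated in the lead-in.
import torch

def locality_loss(Y, T):
    """L_loc = (1/M) * ||Y - T Y||^2"""
    return (Y - T @ Y).pow(2).sum() / Y.shape[0]

def modularity_loss(Y, A):
    """Eq. (5): pull together nodes in the same modularity community."""
    k = A.sum(dim=1)                       # weighted node degrees
    two_d = A.sum()                        # total edge weight (assumed 2D)
    B = A - torch.outer(k, k) / two_d      # modularity matrix
    dist2 = torch.cdist(Y, Y).pow(2)       # pairwise ||y_i - y_j||^2
    return -(B * torch.exp(-dist2)).sum() / two_d

def kl_sparsity_loss(W, rho=0.05):
    """Eq. (6): KL divergence between target sparsity rho and rho_hat."""
    rho_hat = torch.sigmoid(W)             # assumed element-wise activation
    return (rho * torch.log(rho / rho_hat)
            + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()
```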
#### 3.3.2 Variance-based Dimension Sorting
In addition to transforming dataset-specific features, cross-dataset alignment of feature signals is also crucial for improving model adaptation. The one-layer AE transforms the original feature vectors into weighted combinations of multiple dimensions, creating new feature dimensions which we refer to as _virtual ROIs_. In the context of brain networks, this process helps to group ROIs and their signals. This idea is inspired by the well-studied functional brain modules (Philipson, 2002; Anderson et al., 2004; Hilger et al., 2020; Brodmann, 1999; Zhou et al., 2020), which provide a higher-level and generic organization of the brain surface, as opposed to fine-grained ROI systems. Since the variations in ROI parcellations are due to differences in clinical conventions, it is reasonable to assume that there exists a shared virtual ROI system underlying different parcellation systems, similar to the discretization of functional brain modules. The community learning and neighborhood-preserving regularizers introduced in Section 3.3 allow us to capture these shared virtual ROIs in a data-driven manner. Our ultimate goal is to align the discovered virtual ROIs across datasets, so that each virtual ROI characterizes the same functional module in the human brain, regardless of its origin. This cross-dataset alignment of virtual ROIs ensures that the model can effectively adapt to new datasets and provide meaningful insights into the different downstream analyses.
The objective of the one-layer linear AE is similar to PCA, as discussed in more detail in Appendix A.1, with the added benefit of incorporating additional regularizers. PCA orders dimensions based on decreasing levels of sample variance (Hotelling, 1933). PTGB leverages this approach by utilizing the learned parameters of the AE projection to estimate the variance of each virtual ROI (_i.e._, projected feature dimension). The sample variance of each virtual ROI indicates its representativeness of the original data variations. Given the shared patterns across different parcellation systems, we expect that similar virtual ROIs in datasets with different atlas templates will have similar variance scores, especially in terms of their order. By sorting the same number of virtual ROIs based on their sample variance in each dataset, we aim to align virtual ROIs across datasets, so that each virtual ROI represents the same functional unit in the human brain. The procedure is explained in detail in Algorithm 2 in Appendix A.2.
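A sketch of the variance-based sorting is shown below, assuming the projected features of each dataset are available as a NumPy array with one row per ROI and one column per virtual ROI:

```python
# Reorder virtual ROI dimensions by decreasing sample variance.
import numpy as np

def sort_virtual_rois(Y):
    order = np.argsort(Y.var(axis=0))[::-1]   # descending, as in PCA
    return Y[:, order]

# Applying the same sorting to every dataset aligns virtual ROIs by rank:
# the d-th dimension of each sorted projection is taken to represent the
# same underlying functional module across parcellation systems.
```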
## 4 Experiments
We evaluate the effectiveness of PTGB through extensive experiments on real brain network datasets, with a focus on the following research questions:
* **RQ1**: How does PTGB compare with other unsupervised GNN pre-training frameworks adapted to the scenario of brain networks?
* **RQ2**: What is the contribution of each major component in PTGB to the overall performance?
* **RQ3**: How does the choice of sampling method affect model convergence and performance?
* **RQ4**: How effective is the variance-based sorting in aligning virtual ROIs among different parcellation systems?
Datasets, Configurations, and Metrics. Our experiments are conducted on three real-world brain network datasets: PPMI, BP, and HIV. The PPMI dataset is parcellated using the Desikan-Killiany84 atlas template and includes brain networks from 718 subjects, 569 of whom are Parkinson's Disease (PD) patients and 149 are Healthy Controls (HC). The networks are constructed using three tractography algorithms: Probabilistic Index of Connectivity (PICo), Hough voting (Hough), and FSL. The BP dataset is parcellated using the Brodmann82 template and includes resting-state fMRI and DTI modalities from 97 subjects, 52 of whom have Bipolar I disorder and 45 are HCs. The HIV dataset is parcellated using the AAL90 template and includes fMRI and DTI modalities from 70 subjects, with 35 early HIV patients and 35 HCs. We pre-train the model on the PPMI dataset and evaluate the downstream performance on BP and HIV. Further details about the datasets can be found in Appendix B.
PTGB employs GCN (Kipf and Welling, 2017) as the backbone GNN encoder. We also benchmark PTGB with GAT (Velickovic et al., 2018) and GIN (Xu et al., 2019), and the results are provided in Appendix D.1. The hyperparameter settings are described in detail in Appendix C. The hyperparameter tuning follows the standard designs in related studies such as (Yang et al., 2021; Wein et al., 2021; Hu et al., 2021). The downstream evaluation is binary graph classification for disease prediction. To assess the performance, we use two metrics widely used in the medical field (Li et al., 2021; Cui et al., 2022): accuracy (ACC) and the area under the receiver operating characteristic curve (AUC).
### Overall Performance Comparison (RQ1)
We present a comprehensive comparison of the target performance between the proposed PTGB and popular unsupervised learning strategies in Table 2. To fairly compare the methods, we apply the atlas mapping pre-processing and the multi-dataset learning backbone discussed in Section 3.1 to all methods. The purpose of this comparison is to highlight the impact of the proposed two-level contrastive pre-training; we further analyze the effect of atlas mapping in subsequent subsections. In addition, for a clearer presentation, we group the selected baselines according to their optimization strategies:
* No pre-training (NPT): the backbone with randomly initialized parameters for target evaluation.
* Non-CL-based (NCL): methods with cost functions regularized by co-occurrence agreement or link reconstruction, including Node2Vec (Grover and Leskovec, 2016), DeepWalk (Perozzi et al., 2014), and VGAE (Kipf and Welling, 2016).
* Single-scale CL (SCL): methods utilizing either node- or graph-level representations in the CL optimization, including GBT (Bielak et al., 2022), ProGCL (Xia et al., 2022), and GraphCL (You et al., 2020).
* Multi-scale CL (MCL): methods whose CL optimization utilizes both nodes- and graph-level representations, including DGI (Velickovic et al., 2019) and InfoG (Sun et al., 2019).
* Ego-graph sampling (EGS): methods whose contrastive samplings consider \(k\)-hop ego-networks as discriminative instances, which are the most similar to the proposed PTGB, including GCC (Qiu et al., 2020) and EGI (Zhu et al., 2021).
* Our proposed two-level contrastive optimization (Ours): methods include single-task pre-training (STP), in which we select the PICo modality of the PPMI study as the only source task; multi-task pre-training (MTP), which does not utilize the MAML technique; and the full implementation of the PTGB framework.

The experiments reveal the following insights:
* The proposed PTGB consistently outperforms all the baselines, achieving a relative improvement of 7.34%-13.30% over the best-performing baselines and 31.80%-38.26% over the NPT setting. The results of PTGB have been statistically compared against baselines using paired \(t\)-tests. With a significance level set to 0.05, the largest two-tailed \(p\) value is reported at 0.042, indicating that PTGB demonstrates a statistically significant performance increase over other selected methods.
* Compared with the transductive methods of Node2Vec and DeepWalk, the GNN pre-trained by VGAE learns structure-preserving representations and achieves the best results in the NCL-type methods. This indicates the potential benefit of the locality-preserving regularizer design in PTGB.
* Maximizing mutual information between augmented instances may hinder GNNs from learning a shared understanding of the entire dataset. For baselines belonging to the categories of SCL, MCL, and EGS, pre-training with non-augmented CL (InfoG, EGI) generally results in a 4.36% relative improvement across both metrics and a 7.63% relative decrease in performance variance compared to their augmentation-based counterparts (GBT, GraphCL, ProGCL, DGI, GCC). This explains why PTGB does not employ data augmentation.
* Multi-scale MI promotes the capture of effective local (_i.e.,_ node-level) representations that can summarize the global (_i.e.,_ graph-level) information of the entire network. The MCL-type methods typically outperform the SCL-type ones by a relative gain of 2.68% in ACC and 3.27% in AUC.
* The group of baselines considering \(k\)-hop neighborhoods (EGS) presents the strongest performance, indicating the importance of local neighborhoods in brain network analysis. The proposed PTGB is the only method that comprehensively models the local neighborhoods of nodes, capturing this aspect through both node- and graph-level CL.
* Learning from multiple tasks (MTP) brings significant improvement over STP, reporting a relative increase of 8.47% in accuracy and 6.90% in AUC. Furthermore, the full PTGB framework with MAML-styled training achieves a relative improvement of 11.29% in accuracy, 14.75% in AUC, and a reduced variance over MTP, demonstrating its advantages in enhancing model generalizability.
### Ablation Studies (RQ2)
We examine two key components of PTGB: (1) the two-level contrastive sampling and (2) the atlas mapping regularizers. The best contrastive sampling configuration is fixed when examining the atlas regularizers, and all regularizers are equipped when examining the contrastive samplings.
| Type | Method | BP-fMRI ACC | BP-fMRI AUC | BP-DTI ACC | BP-DTI AUC | HIV-fMRI ACC | HIV-fMRI AUC | HIV-DTI ACC | HIV-DTI AUC |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| NPT | GCN | 50.07\(\pm\)0.70 | 50.11\(\pm\)5.80 | 49.51\(\pm\)0.68 | 51.83\(\pm\)0.80 | 56.27\(\pm\)1.84 | 57.16\(\pm\)5.14 | 51.30\(\pm\)0.42 | 53.82\(\pm\)1.94 |
| NCL | Node2Vec | 48.51\(\pm\)0.30 | 49.68\(\pm\)7.23 | 50.83\(\pm\)1.44 | 46.70\(\pm\)10.30 | 52.61\(\pm\)10.38 | 50.75\(\pm\)10.94 | 49.65\(\pm\)0.30 | 51.22\(\pm\)10.79 |
| NCL | DeepWalk | 50.28\(\pm\)0.33 | 51.59\(\pm\)0.60 | 51.72\(\pm\)5.74 | 38.46\(\pm\)9.37 | 54.81\(\pm\)11.20 | 55.55\(\pm\)11.93 | 52.67\(\pm\)11.20 | 50.88\(\pm\)10.39 |
| NCL | VGAE | 56.71\(\pm\)1.48 | 55.24\(\pm\)11.40 | 54.63\(\pm\)11.20 | 54.11\(\pm\)11.82 | 62.76\(\pm\)4.77 | 61.25\(\pm\)11.54 | 56.90\(\pm\)0.42 | 55.35\(\pm\)0.44 |
| SCL | GBT | 57.21\(\pm\)0.68 | 57.32\(\pm\)10.00 | 56.29\(\pm\)0.53 | 55.27\(\pm\)10.54 | 65.73\(\pm\)10.00 | 66.08\(\pm\)10.63 | 59.80\(\pm\)7.76 | 57.37\(\pm\)0.40 |
| SCL | GraphCL | 59.79\(\pm\)30 | 59.10\(\pm\)0.78 | 57.57\(\pm\)10.63 | 57.35\(\pm\)0.67 | 67.08\(\pm\)7.76 | 69.17\(\pm\)6.68 | 60.43\(\pm\)3.90 | 60.03\(\pm\)10.48 |
| SCL | ProGCL | 62.36\(\pm\)0.50 | 62.61\(\pm\)0.34 | 61.26\(\pm\)37 | 62.67\(\pm\)8.46 | 71.52\(\pm\)10.39 | 72.16\(\pm\)8.55 | 62.48\(\pm\)10.38 | 61.94\(\pm\)10.37 |
| MCL | DGI | 62.44\(\pm\)10.13 | 60.75\(\pm\)10.97 | 58.15\(\pm\)0.60 | 58.95\(\pm\)0.60 | 70.22\(\pm\)11.40 | 70.12\(\pm\)12.16 | 60.83\(\pm\)10.48 | 62.06\(\pm\)10.16 |
| MCL | InfoG | 62.87\(\pm\)0.52 | 62.37\(\pm\)0.67 | 60.88\(\pm\)0.67 | 60.44\(\pm\)0.61 | 72.46\(\pm\)8.71 | 72.94\(\pm\)5.66 | 61.75\(\pm\)76 | 61.37\(\pm\)0.45 |
| EGS | GCC | 63.45\(\pm\)0.62 | 62.39\(\pm\)0.60 | 60.44\(\pm\)0.54 | 60.29\(\pm\)0.10 | 70.97\(\pm\)0.13 | 72.48\(\pm\)10.13 | 61.27\(\pm\)10.66 | 61.38\(\pm\)10.79 |
| EGS | EGI | 63.38\(\pm\)0.63 | 63.58\(\pm\)0.62 | 61.82\(\pm\)0.63 | 61.57\(\pm\)8.27 | 37.46\(\pm\)0.42 | 32.85\(\pm\)0.48 | 60.98\(\pm\)0.42 | 62.41\(\pm\)10.50 |
| Ours | STP | 53.92\(\pm\)12.27 | 54.61\(\pm\)11.28 | 55.51\(\pm\)11.28 | 56.73\(\pm\)10.20 | 61.18\(\pm\)14.57 | 62.88\(\pm\)11.55 | 55.29\(\pm\)12.38 | 57.31\(\pm\)11.27 |
| Ours | MTP | 60.37\(\pm\)11.47 | 61.44\(\pm\)11.28 | 59.41\(\pm\)11.26 | 59.92\(\pm\)13.37 | 67.65\(\pm\)12.30 | 68.38\(\pm\)12.36 | 60.54\(\pm\)13.37 | 59.46\(\pm\)12.39 |
| Ours | PTGB | **68.84\(\pm\)0.84** | **68.45\(\pm\)0.86** | **66.57\(\pm\)0.87** | **68.31\(\pm\)0.88** | **77.80\(\pm\)0.98** | **77.22\(\pm\)0.74** | **67.51\(\pm\)0.87** | **67.74\(\pm\)0.88** |

Table 2: Disease prediction performance comparison. All results are averaged from 5-fold cross-validation along with standard deviations. The best result is highlighted in bold and the runner-up is underlined. * denotes a significant improvement according to paired \(t\)-test with \(\alpha=0.05\) compared with baselines.
The results, shown in Figure 4 (with an additional DTI version in Appendix D.2), are analyzed based on the four possible variants of contrastive sampling listed in Table 3. Our analyses yield the following observations: **(1)** leveraging \(k\)-hop neighborhood MI maximization (_i.e.,_ positive \(\mathbf{S_{1}}\)) brings a visible performance gain, confirming its benefit in brain structure learning; **(2)** the extension to multi-graph CL (_i.e.,_ consideration of \(\mathbf{S_{3}}\)) facilitates the extraction of unique ROI knowledge, leading to improved results in Var. 3/4; **(3)** Var. 4 outperforms Var. 3 as it effectively summarizes global (_i.e.,_ graph-level) information in local node representations; **(4)** the full implementation of PTGB brings a relative gain of 4.27% in both metrics on top of Var. 4, highlighting the significance of considering shared substructure knowledge across multiple graphs (_i.e.,_ through the inclusion of \(\mathbf{S_{4}}\)).
The right-side sub-figures examine the impact of the atlas mapping regularizers by comparing the results of the full framework to those without the sparsity regularizer (w/o SC), the locality regularizer (w/o LR), and the community regularizer (w/o CR). Two key observations are made: **(1)** the removal of SC leads to the greatest performance drop, emphasizing its crucial role in learning robust projections that can effectively handle noise and prevent over-fitting; **(2)** the inferior results when LR and CR are absent emphasize the importance of spatial sensitivity and blockwise feature information in brain network analysis. This supports our intuition to consider the relative positioning of ROIs in 3D coordinate space as well as knowledge of community membership based on modularity measures.
Figure 4: Ablation comparisons on contrastive sampling choices (left two) and atlas mapping regularizers (right two). The \(y\)-axis refers to the numeric values of evaluated metrics (in %). The setup of Var. 1 - 4 is described in Table 3. “SC”, “LR”, and “CR” are abbreviations for “sparsity constraints”, “locality regularizer”, and “community (modularity-aware) regularizer” respectively.
Figure 5: In-depth comparison among the four variants and the full model. The \(x\)-axis is epochs. Fig. (a) evaluates the trajectory of pre-training loss, Fig. (b) evaluates their respective testing accuracy on the fMRI view of the HIV dataset, and Fig. (c) reports the pre-training runtime in seconds.
| | \(\mathbf{S_{1}}\) | \(\mathbf{S_{2}}\) | \(\mathbf{S_{3}}\) | \(\mathbf{S_{4}}\) |
| --- | :---: | :---: | :---: | :---: |
| Var. 1 | – | – | / | / |
| Var. 2 | + | – | / | / |
| Var. 3 | + | – | – | / |
| Var. 4 | + | + | – | / |

Table 3: The four variants of sampling strategies.
### Analysis of Two-level Contrastive Sampling (RQ3)
Figure 5 offers insight into the pre-training convergence, target adaptation progression, and pre-training runtime of the four sampling variants and the full framework. Key observations include: **(1)** as seen in Figure 5(a), all variants demonstrate efficient pre-training convergence due to the multi-dataset joint optimization inspired by MAML; the full model converges best, highlighting the advantage of learning shared neighborhood information in brain network data through two-level node contrastive sampling. **(2)** Figure 5(b) shows the superiority of our design in downstream adaptation performance compared to the other variants. **(3)** Figure 5(c) reveals that more sophisticated sampling considerations incur greater computational complexity in the mutual information evaluation, leading to a longer runtime per pre-training epoch; however, the total time consumption of all variants remains on the same scale.
### Analysis of ROI Alignment (RQ4)
To further validate the variance-based virtual ROI sorting, we select the top 2 virtual ROIs with the highest sample variance for each atlas template (_i.e.,_ dataset) and backtrack to locate their corresponding projected ROIs. The results are illustrated in Figure 6, which shows a 3D brain surface visualization highlighting the original ROIs. From this, we draw two main conclusions: **(1)** there are multiple regional overlaps between pairs of atlas templates, demonstrating the effectiveness of our proposed solution and confirming the feasibility of converting between atlas templates; **(2)** it is comparatively harder to find regions that overlap across all three atlas templates, which reveals a limitation of the proposed unsupervised ROI alignment scheme and suggests refining the current variance-based heuristic, an avenue for further study and research opportunity.
## 5 Conclusion
Brain network analysis for task-specific disease prediction has been a challenging task for conventional GNN frameworks due to the limited availability of labeled training data and the absence of a unifying brain atlas definition, which hinders efficient knowledge transfer across different datasets. To address these challenges, we propose PTGB, a novel unsupervised multi-dataset GNN pre-training framework that leverages two-level node contrastive sampling to overcome data scarcity. Additionally, PTGB incorporates atlas mapping through brain-network-oriented regularizers and variance-based sorting to address, in a data-driven way, the issue of incompatible ROI parcellation systems in cross-dataset model adaptation. Extensive experiments on real-world brain connectome datasets demonstrate the superiority and robustness of PTGB in disease prediction and its clear advantage over various state-of-the-art baselines. As more brain network datasets become available, it will be intriguing to further validate its generalizability.
Figure 6: The virtual ROI mapping across the three investigated datasets. We highlight pairs of overlapping regions with colored boxes. In particular, we use gold boxes for the PPMI and BP mapping; blue boxes for the BP and HIV mapping; and purple boxes for the PPMI and HIV mapping. |
2305.13419 | Two-fluid Physical Modeling of Superconducting Resonators in the ARTEMIS
Framework | In this work, we implement a new London equation module for superconductivity
in the GPU-enabled ARTEMIS framework, and couple it to a finite-difference
time-domain solver for Maxwell's equations. We apply this two-fluid approach to
model a superconducting coplanar waveguide (CPW) resonator. We validate our
implementation by verifying that the theoretical skin depth and reflection
coefficients can be obtained for several superconductive materials, with
different London penetration depths, over a range of frequencies. Our
convergence studies show that the algorithm is second-order accurate in both
space and time, except at superconducting interfaces where the approach is
spatially first-order. In our CPW simulations, we leverage the GPU scalability
of our code to compare the two-fluid model to more traditional approaches that
approximate superconducting behavior and demonstrate that superconducting
physics can show comparable performance to the assumption of quasi-infinite
conductivity as measured by the Q-factor. | Revathi Jambunathan, Zhi Yao, Richard Lombardini, Aaron Rodriguez, Andrew Nonaka | 2023-05-22T19:07:55Z | http://arxiv.org/abs/2305.13419v1 | # Two-Fluid Physical Modeling of Superconducting Resonators in the ARTEMIS Framework
###### Abstract
In this work, we implement a new London equation module for superconductivity in the GPU-enabled ARTEMIS framework, and couple it to a finite-difference time-domain solver for Maxwell's equations. We apply this two-fluid approach to model a superconducting coplanar waveguide (CPW) resonator. We validate our implementation by verifying that the theoretical skin depth and reflection coefficients can be obtained for several superconductive materials, with different London penetration depths, over a range of frequencies. Our convergence studies show that the algorithm is second-order accurate in both space and time, except at superconducting interfaces where the approach is spatially first-order. In our CPW simulations, we leverage the GPU scalability of our code to compare the two-fluid model to more traditional approaches that approximate superconducting behavior and demonstrate that superconducting physics can show comparable performance to the assumption of quasi-infinite conductivity as measured by the Q-factor.
keywords: Superconducting Materials; Maxwell's Equations; London Equations; Finite-Difference Time-Domain; Two-Fluid Model; Resonators; Microelectronics. MSC: 35-04, 35Q60, 78M20, 82D55
## 1 Introduction
The tremendous growth in materials research, as well as the race to miniaturize microelectronic devices, has accelerated the adoption and integration of novel materials in traditional CMOS devices. Superconducting materials exhibit unique characteristics, such as greatly reduced loss compared to metals, the expulsion of magnetic fields via the Meissner effect [1], quantum tunneling effects, and flux quantization [2], making them promising candidates for producing high-fidelity and high-coherence circuits. While our focus is on microelectronics, applications of these materials also extend to imaging [3], tokamaks [4], accelerators [5; 6], and magnetic levitation [7; 8]. In microelectronics, these materials are used in resonators, in qubits with Josephson junctions to build quantum devices, in circuit quantum electrodynamics devices (cQED) [9; 10], and in superconducting quantum interference devices (SQUID). Resonators are devices where the measured field attains maximum amplitude at a designed resonant frequency. Superconducting coplanar waveguide (CPW) resonators are used in quantum computing applications as they are ideal for control and readout [11; 12] and as an interface between resonators and qubits.
In order to design and optimize such devices without resorting to expensive trial-and-error fabrication and measurement cycles, we require an accurate simulation tool that can model the superconducting behavior over a wide range of frequencies for a given configuration of material properties. In this work, we are interested in a classical description of the superconducting materials to investigate the interaction of the electromagnetic signals with the superconducting sub-components of the CPW. Traditionally, these sub-components have been approximated as perfect conductors or as highly conductive materials, or the interaction is reduced to an empirical model with a resistance-inductance-capacitance (RLC) response to the incoming signal [13]. However, such methods do not capture the non-linear coupling between electromagnetic and superconducting physics, and more accurate numerical descriptions are required.
For accurate classical descriptions of superconducting materials, the London equations [14] provide foundational constitutive relationships. These equations are coupled with Maxwell's equations to fully describe the interaction between electromagnetic fields and currents in superconducting materials. In the past decades, there has been interest in incorporating the London equations into the widely-used finite-difference time-domain (FDTD)
approach [15] for solving Maxwell's equations. This two-fluid approach has been derived independently from two different mathematical formulations; however, both lead to functionally identical numerical implementations. The first mathematical formulation involves the use of a complex conductivity to describe the contribution from superconductivity. In Rittweger et al. [16] and similar work by others [17; 18; 19], the complex conductivity was incorporated into a frequency-domain representation of Maxwell's equations and converted to the equivalent time-domain representation. This resulted in the inclusion of an additional source term, equal to the time-integral of the electric field, in Ampere's law. The second mathematical formulation involves the use of a two-fluid model [20; 21], where the total current density is the sum of the standard conductive current plus a superconducting current whose evolution is governed by the first London equation. This approach, used by a number of works [22; 23; 24; 25; 26], is analytically equivalent to the first, since the superconducting current is, in fact, a scaled time-integral of the electric field. Alternative approaches have also been considered, such as by Yun et al. [27], where a shift-operator technique is incorporated into the FDTD framework to directly account for a complex conductivity.
The main challenge in using a coupled explicit Maxwell-London solver for CPW resonators is the large disparity between the length scale of the London penetration depth (typically 10-400 nm) and the size of the CPW resonator (\(\sim 1000\) \(\mu\)m). As a result of the explicit time integration, the disparity between the temporal scale required to resolve the speed of light and that required for the low-frequency signal to achieve resonance results in simulations that can require \(10^{6}\) time steps or more. Using traditional CPU-based solvers renders such simulations impossible to perform, and a scalable GPU-enabled code is required. Therefore, we use our GPU-enabled open-source framework, ARTEMIS [28], developed to model electromagnetic signals in microelectronics devices.
In this paper, we describe our implementation of the two-fluid approach in ARTEMIS for modeling interactions between electromagnetic signals and superconducting components, and apply it to the study of CPW resonators. The GPU speedup and scalability of our code allow for rigorous validation and case studies at frequencies comparable to the operating conditions of devices, which would not be possible otherwise. The rest of this paper is organized as follows. In Section 2, we describe our two-fluid model coupling the Maxwell and London equations. In Section 3, we describe our numerical method and implementation in the ARTEMIS framework. In Section 4, we
present the skin depth and reflection coefficient analysis, and validate that we are able to reproduce theoretically-predicted behavior. We also present spatial and temporal convergence tests for a number of material configurations. Finally, in Section 5, we perform simulations of a CPW resonator and compare the results from our two-fluid model to those obtained from a simpler, purely-conductive approximation for superconductivity that can be accomplished with a standard FDTD Maxwell solver.
## 2 Two-Fluid Model for Superconductivity
The thermodynamic model proposed by Gorter and Casimir [20] states that superconducting materials at temperatures between absolute zero and the critical temperature contain both conductive and superconductive currents, hence the term "two-fluid model": one fluid comprises normal electrons (with finite conductivity, \(\sigma>0\)) and the other superconducting electrons, i.e., Cooper pairs [29]. According to this model, at \(T=0\) K, all electrons condense into Cooper pairs and \(\sigma=0\), leading to pure superconducting behavior. To model such behavior, we first begin with the full form of the dynamic Maxwell's equations, i.e., Ampere's and Faraday's laws,
\[\nabla\times\mathbf{H}=\mathbf{J}+\frac{\partial\mathbf{D}}{\partial t}, \tag{1}\]
\[\nabla\times\mathbf{E}=-\frac{\partial\mathbf{B}}{\partial t} \tag{2}\]
where \(\mathbf{D}=\epsilon\mathbf{E}\) is the electric displacement, \(\mathbf{E}\) is the electric field, \(\mathbf{B}=\mu\mathbf{H}\) is the magnetic flux density, and \(\mathbf{H}\) is the magnetic field. The permittivity of the medium, \(\epsilon\), is the product of the vacuum permittivity, \(\epsilon_{0}\), and the dimensionless relative permittivity: \(\epsilon=\epsilon_{0}\epsilon_{r}\). Similarly, the permeability of the medium, \(\mu\), is the product of the vacuum permeability, \(\mu_{0}\), and the dimensionless relative permeability: \(\mu=\mu_{0}\mu_{r}\). Consistent with the model proposed by Gorter and Casimir [20], in the two-fluid model the total electric current density, \(\mathbf{J}\), in equation (1) is given by the sum of the conductive current, \(\sigma\mathbf{E}\), and the superconducting current, \(\mathbf{J}_{s}\), such that \(\mathbf{J}=\sigma\mathbf{E}+\mathbf{J}_{s}\), with conductivity \(\sigma\).
In order to obtain the superconducting current, we invoke the classical model for superconductivity given by the London equations. The first London equation, given below, can be derived by combining the Lorentz force with the unbounded acceleration of electrons in the presence of an electric
field.
\[\frac{\partial\mathbf{J}_{s}}{\partial t}=\frac{n_{s}e^{2}}{m}\mathbf{E}, \tag{3}\]
where, \(n_{s}\) is the number density of superconducting electrons, \(m\) is the electron mass, and \(e\) is the elementary charge. If we define \(\lambda=\sqrt{m/(n_{s}e^{2}\mu)}\) as the London penetration depth, Equation (3) can be written as
\[\frac{\partial\mathbf{J}_{s}}{\partial t}=\frac{1}{\lambda^{2}\mu}\mathbf{E}. \tag{4}\]
The typical range for London penetration depth of superconducting materials is \(\mathcal{O}(10-100)\) nm. As previously stated, a superconducting material may still exhibit finite conductivity that reduces to zero as the temperature approaches absolute zero. Thus, to model such systems, we require the two-fluid approach, where Maxwell's equations given in equations (1) and (2) are coupled with the first London equation given in equation (3) to provide a fully classical description of the superconducting physics.
## 3 Numerical Method and Implementation
We employ the standard Yee grid configuration for electrodynamics, where the normal components of the \(\mathbf{B}\) field are defined on cell faces, and the tangential components of \(\mathbf{E}\) are defined on cell edges. The standard explicit FDTD scheme for Maxwell's equations on a Yee grid uses a leap-frog discretization in time, where \(\mathbf{E}\) is updated at integer time levels and \(\mathbf{B}\) is updated at half-integer time levels. In the superconducting regions of the domain, the superconducting current \(\mathbf{J}_{s}\) is defined using the same spatial discretization as the electric field, i.e., tangential currents on cell edges, and the same temporal discretization as the magnetic field. We note that this algorithm is constrained by the fact that the interface between a non-superconducting and a superconducting material is always grid-aligned; thus, all cell edges that lie on such interfaces are considered superconducting and carry tangential components of \(\mathbf{J}_{s}\). Due to the spatial Yee grid and leap-frog temporal discretization, the numerical scheme is second-order in space and time. If there is a sharp discontinuity in either the conductivity or the inverse of the penetration depth, the algorithm is first-order in space; when the conductivity varies smoothly and, at superconducting interfaces, the inverse penetration depth varies smoothly, second-order spatial convergence is retained. It should also be noted that a non-superconducting material corresponds to the limit \(1/\lambda\to 0\). The integration scheme implemented in ARTEMIS is described below in Algorithm 1 and is analytically equivalent to a standard leap-frog approach. For diagnostic purposes, we split the time update of the magnetic field, \(\mathbf{B}\), and current density, \(\mathbf{J}_{s}\), into two half-timestep updates. In Algorithm 1, we describe the steps to advance the solution from time level \(t^{n}\) to the next time level at \(t^{n}+\Delta t\), where the superscript on a variable indicates the time step index.
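To make the update structure concrete, the following is a minimal one-dimensional NumPy sketch of the coupled Maxwell-London leap-frog cycle described above; it is an illustration (with periodic boundaries via `np.roll` and a fully explicit conductive term), not the ARTEMIS implementation.

```python
# 1D Maxwell-London leap-frog cycle: Ey at integer time levels, Bx and the
# superconducting current Js at half-integer levels, with the B/Js updates
# split into two half steps as in Algorithm 1.
import numpy as np

EPS0 = 8.854187817e-12
MU0 = 4.0e-7 * np.pi

def step(Ey, Bx, Js, dz, dt, sigma, inv_lambda2):
    """Advance fields one step; sigma and inv_lambda2 = 1/lambda^2 are
    per-cell arrays (inv_lambda2 is zero in normal regions), and dt must
    satisfy the CFL condition dt <= dz / c."""
    # First half step: Faraday's law dBx/dt = dEy/dz and the first London
    # equation dJs/dt = Ey / (lambda^2 * mu)
    Bx += 0.5 * dt * (np.roll(Ey, -1) - Ey) / dz
    Js += 0.5 * dt * inv_lambda2 / MU0 * Ey
    # Full step of Ampere's law with total current J = sigma*Ey + Js
    curlH = (Bx - np.roll(Bx, 1)) / (MU0 * dz)
    Ey += dt / EPS0 * (curlH - sigma * Ey - Js)
    # Second half step of Bx and Js completes the leap-frog cycle
    Js += 0.5 * dt * inv_lambda2 / MU0 * Ey
    Bx += 0.5 * dt * (np.roll(Ey, -1) - Ey) / dz
    return Ey, Bx, Js
```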
For domain boundary conditions, the code includes the standard options for periodic, perfect electric conductor (PEC), and perfectly matched layer (PML) [30; 31]. We note that the interaction between a London region and PML is not well-understood. Thus, in our simulations, we use a domain large enough such that the signal does not interact with London regions close to the PML boundaries, or include an air gap in-between the London region and the domain boundary making sure the overall characterization of the device is not significantly affected.
ARTEMIS is built on the AMReX framework for block-structured mesh calculations [32] and leverages many of the computational kernels from the ECP-funded electromagnetic particle-in-cell code WarpX [33]. Thus, the ARTEMIS code is portable and scalable to the largest multicore and GPU-based supercomputers. We note that all of the simulations in this paper are performed using uniformly-sized cuboid cells and leverage the efficient and scalable MPI+CUDA implementation provided by AMReX and WarpX. More specifically, we use a hierarchical parallelization model where the domain is divided into boxes that are distributed to MPI ranks, and computational work is performed by distributing individual grid cells to GPU threads. Using three NERSC HPC systems (the Perlmutter GPU partition, the Perlmutter CPU partition, and the Haswell CPU partition), we find that on a node-by-node basis, the Perlmutter GPU partition offers a 10.5x speedup compared to the Perlmutter CPU partition, and a 56x speedup over the Haswell CPU partition. We have recently demonstrated the near-perfect weak scaling performance of the code on up to 2,000 GPUs [28; 34], and due to the explicit nature of the algorithm, the scaling properties of the new London module behave the same.
## 4 Physical and Numerical Validation
In this section, we first validate our implementation by examining the skin depth within superconducting material and measuring the reflection properties at superconducting interfaces, comparing the results with theoretical predictions. We then demonstrate the numerical convergence properties of our implementation.
### Skin Depth
The general formula for the skin depth in a normal conductor is well known [35]. Here we derive the analogous expression that includes the superconducting current. Consider the Maxwell-London model in a homogeneous medium with uniform \(\epsilon\), \(\mu\), \(\sigma\), \(\lambda\), and no free charges. Applying the curl to Equations (1) and (2) and using \(\nabla\cdot\mathbf{B}=\nabla\cdot\mathbf{E}=0\), we get
\[\nabla^{2}\mathbf{E} = \mu\epsilon\frac{\partial^{2}\mathbf{E}}{\partial t^{2}}+\mu \sigma\frac{\partial\mathbf{E}}{\partial t}+\frac{\mathbf{E}}{\lambda^{2}} \tag{5}\] \[\nabla^{2}\mathbf{B} = \mu\epsilon\frac{\partial^{2}\mathbf{B}}{\partial t^{2}}+\mu \sigma\frac{\partial\mathbf{B}}{\partial t}+\frac{\mathbf{B}}{\lambda^{2}}. \tag{6}\]
These equations admit plane wave solutions. Let us consider a plane wave traveling along the \(z\)-direction given by
\[\mathbf{E}(z,t) = \mathbf{E}_{0}e^{i(kz-\omega t)} \tag{7}\] \[\mathbf{B}(z,t) = \mathbf{B}_{0}e^{i(kz-\omega t)} \tag{8}\]
where \(\mathbf{E}_{0}\) and \(\mathbf{B}_{0}\) are the amplitudes of the electric and magnetic fields, \(\omega\) is the angular frequency, and the wavenumber, \(k\), is complex and equal to
\[k=\sqrt{\left(\mu\epsilon\omega^{2}-\frac{1}{\lambda^{2}}\right)+i\mu\sigma \omega}. \tag{9}\]
Re-writing the complex wavenumber as \(k=\gamma+i\kappa\) and taking the square root of the complex term in Eq. 9, we can write the real and imaginary part of the wavenumber as,
\[\gamma=\sqrt{\frac{\sqrt{\left(\mu\epsilon\omega^{2}-\frac{1}{ \lambda^{2}}\right)^{2}+(\mu\sigma\omega)^{2}+\left(\mu\epsilon\omega^{2}- \frac{1}{\lambda^{2}}\right)}}{2}} \tag{10}\] \[\kappa=\sqrt{\frac{\sqrt{\left(\mu\epsilon\omega^{2}-\frac{1}{ \lambda^{2}}\right)^{2}+(\mu\sigma\omega)^{2}-\left(\mu\epsilon\omega^{2}- \frac{1}{\lambda^{2}}\right)}}{2}}. \tag{11}\]
The skin depth, \(\delta\), for the superconductor is simply given by
\[\delta=\frac{1}{\kappa}. \tag{12}\]
Note that when the superconducting current is not included (i.e., \(1/\lambda^{2}=0\)), the skin depth reduces to the well-known expression for a conductor. Also, in the limit as \(\omega\to 0\), the skin depth reduces to the London penetration depth, \(\lambda\).
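Equations (10)-(12) can be evaluated directly; the short sketch below (SI units assumed) reproduces, for example, the \(\sigma=10^{7}\) S/m, \(f=1\) THz entry of Table 1.

```python
# Direct transcription of Eqs. (10)-(12); setting 1/lambda^2 = 0 recovers
# the normal-conductor skin depth.
import numpy as np

EPS0 = 8.854187817e-12
MU0 = 4.0e-7 * np.pi

def skin_depth(f, sigma, lam, eps=EPS0, mu=MU0):
    """Skin depth (m) of a superconductor with London penetration depth lam."""
    omega = 2.0 * np.pi * f
    a = mu * eps * omega**2 - 1.0 / lam**2
    b = mu * sigma * omega
    kappa = np.sqrt((np.sqrt(a**2 + b**2) - a) / 2.0)  # Eq. (11)
    return 1.0 / kappa                                  # Eq. (12)

# Example: sigma = 1e7 S/m, lambda = 400 nm, f = 1 THz gives ~153 nm,
# in agreement with Table 1.
print(skin_depth(1.0e12, 1.0e7, 400e-9))
```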
_Comparison of skin depth with theory_
We now measure the skin depth using a series of tests and compare against theoretical predictions. The computational domain in these tests is homogeneous, consisting of superconducting metal with a London penetration depth \(\lambda=400\) nm, vacuum permittivity and permeability, and uniform conductivity. To study the effect of conductivity on skin depth and compare with theory, we perform tests with three different values, i.e., \(\sigma=0,10^{4}\), and \(10^{7}\) S/m. For each value of conductivity, we perform tests with four different frequencies, \(f=25\) GHz, 100 GHz, 1 THz, and 100 THz. In each case, we perform one-dimensional simulations with \(\Delta z=10\) nm and a domain extending from \(-L_{z}\) to \(L_{z}\). The value of \(L_{z}\) depends on the frequency and is chosen to be long enough that the signal does not interact with the domain boundaries. Thus, we use \(L_{z}=128\) mm, 32 mm, 3.2 mm, and 32 \(\mu\)m, respectively, for the four frequency values given above. We excite the system with an electric field given by \(E_{y}=\sin{(2\pi ft)}\) at the center of the domain, \(z=0\), and measure the peak amplitude of the signal at the source, i.e., at \(z=0\), and at an observation point along the propagation direction at \(z_{0}>0\). These measurements will be compared against the theoretical skin depth. In each simulation, we choose the observation point, \(z_{0}\), such that it lies on the grid point closest to the theoretical skin depth for the particular configuration of conductivity and frequency used in the simulation. We use a CFL of 0.9, with a corresponding time step of \(\Delta t=0.03\) fs, and run each simulation to \(t=5T\), where \(T=1/f\) is the period of the excitation.
To measure the skin depth from our simulations, we use
\[\delta^{\rm supercond}_{\rm computed}=\frac{z_{0}}{\ln(E^{\rm peak}_{z=0}/E^{ \rm peak}_{z=z_{0}})}, \tag{13}\]
where the \(E^{\rm peak}\) values correspond to the final peak amplitude of the measured signal during the simulation. In Table 1, we compare the theoretical values of the skin depth (\(\delta^{\rm supercond}_{\rm theory}\)) and the computed skin depth (\(\delta^{\rm supercond}_{\rm computed}\)) in columns 4 and 5, respectively. We obtain excellent agreement between the theoretical and computed skin depths using the two-fluid approach implemented in ARTEMIS. We note that, as predicted by the theory, for all
values of conductivity, the skin depth approaches the London penetration depth as we decrease the frequency. On the other hand, as the frequency increases, the skin depth either increases or decreases depending on the conductivity. For reference, we also include in column 3 the theoretical skin depth assuming the metal is conductive with no superconducting behavior (\(\delta_{\text{theory}}^{\text{cond}}\)), to highlight the difference in the physical behavior of the two models and to indicate the effect on the simulation if superconductivity is not accounted for.
### Reflection Coefficient
| \(\sigma\) [S/m] | \(f\) | \(\delta_{\text{theory}}^{\text{cond}}\) [nm] | \(\delta_{\text{theory}}^{\text{supercond}}\) [nm] | \(\delta_{\text{computed}}^{\text{supercond}}\) [nm] |
| --- | --- | --- | --- | --- |
| 0 | 25 GHz | \(\infty\) | 400 | 400 |
| 0 | 100 GHz | \(\infty\) | 400 | 400 |
| 0 | 1 THz | \(\infty\) | 400 | 400 |
| 0 | 100 THz | \(\infty\) | 734 | 739 |
| \(10^{4}\) | 25 GHz | 50331 | 400 | 400 |
| \(10^{4}\) | 100 GHz | 15920 | 400 | 400 |
| \(10^{4}\) | 1 THz | 5047 | 400 | 400 |
| \(10^{4}\) | 100 THz | 656 | 448 | 450 |
| \(10^{7}\) | 25 GHz | 1592 | 399 | 395 |
| \(10^{7}\) | 100 GHz | 503 | 350 | 350 |
| \(10^{7}\) | 1 THz | 159 | 153 | 153 |
| \(10^{7}\) | 100 THz | 16 | 16 | 13 |

Table 1: Comparison of theoretical and computed skin depths as a function of \(\sigma\) and \(f\) for a superconducting material with London penetration depth \(\lambda=400\) nm.

Next, we examine the reflection coefficient, \(R\), as a function of frequency, \(\omega\), for a signal propagating from a vacuum medium at normal incidence onto a conductor or superconductor. Using the same plane waves discussed in Section 4.1, the signals travel in the positive \(z\) direction, resulting in transmission and reflection. The incident \(I\), reflected \(R\), and transmitted \(T\) waves are given as
\[\mathbf{E}_{I}(z,t) = E_{0,I}e^{i(k_{1}z-\omega t)}\mathbf{\hat{y}} \tag{14}\] \[\mathbf{B}_{I}(z,t) = -\frac{E_{0,I}}{v_{1}}e^{i(k_{1}z-\omega t)}\mathbf{\hat{x}}\] (15) \[\mathbf{E}_{R}(z,t) = E_{0,R}e^{i(-k_{1}z-\omega t)}\mathbf{\hat{y}}\] (16) \[\mathbf{B}_{R}(z,t) = \frac{E_{0,R}}{v_{1}}e^{i(-k_{1}z-\omega t)}\mathbf{\hat{x}} \tag{17}\]
\[\mathbf{E}_{T}(z,t) = E_{0,T}e^{i(k_{2}z-\omega t)}\mathbf{\hat{y}} \tag{18}\] \[\mathbf{B}_{T}(z,t) = -\frac{E_{0,T}}{v_{2}}e^{i(k_{2}z-\omega t)}\mathbf{\hat{x}}. \tag{19}\]
Let us label the vacuum medium as medium 1 with wave speed \(v_{1}=\omega/k_{1}\), where the wavenumber, \(k_{1}=\omega\sqrt{\epsilon_{1}\mu_{1}}\), and medium 2 is the conductor or superconductor with wave speed \(v_{2}=\omega/k_{2}\), where \(k_{2}\) is given by Equation (9).
The tangential components of \(\mathbf{E}\) and \(\mathbf{H}\) are continuous at an interface (at normal incidence all field components here are tangential), i.e.,

\[\mathbf{E}_{1}^{||} = \mathbf{E}_{2}^{||} \tag{20}\] \[\frac{\mathbf{B}_{1}^{||}}{\mu_{1}} = \frac{\mathbf{B}_{2}^{||}}{\mu_{2}} \tag{21}\]
If the interface is at \(z=0\), then Equation (20) implies
\[E_{0,I}+E_{0,R}=E_{0,T}, \tag{22}\]
and Equation (21) combined with Equations 15, 17, and 19 implies
\[\frac{1}{\mu_{1}v_{1}}(E_{0,I}-E_{0,R})=\frac{E_{0,T}}{\mu_{2}v_{2}}, \tag{23}\]
or equivalently,
\[E_{0,I}-E_{0,R}=\sqrt{\frac{\mu_{1}}{\epsilon_{1}}}\frac{k_{2}}{\mu_{2}\omega} E_{0,T}. \tag{24}\]
Combining Equations (22), (24), and (27), we obtain the following relationships between the incident amplitude and both the reflected and transmitted amplitudes:
\[E_{0,R}=\frac{1-\beta}{1+\beta}E_{0,I} \tag{25}\] \[E_{0,T}=\frac{2}{1+\beta}E_{0,I}. \tag{26}\]
where \(\beta\) is a complex term given by
\[\beta=\sqrt{\frac{\mu_{1}}{\epsilon_{1}}}\frac{k_{2}}{\mu_{2}\omega}. \tag{27}\]
Thus, the theoretical reflection coefficient is
\[R=\left|\frac{E_{0,R}}{E_{0,I}}\right|^{2}=\left|\frac{1-\beta}{1+\beta}\right| ^{2}. \tag{28}\]
An interesting result of this analysis is that for the cases where \(\sigma=0\), there is a cutoff frequency below which the reflection coefficient \(R=1\), with a dropoff in \(R\) at frequencies greater than this cutoff. Specifically, if \(\sigma=0\), then Eq. 9 reduces to
\[k_{2}=\sqrt{\mu_{2}\epsilon_{2}\omega^{2}-\frac{1}{\lambda^{2}}}, \tag{29}\]
and therefore,
\[\beta=\sqrt{\frac{\mu_{1}}{\epsilon_{1}}}\frac{1}{\mu_{2}\omega}\begin{cases} i\sqrt{\frac{1}{\lambda^{2}}-\mu_{2}\epsilon_{2}\omega^{2}}&\omega\leq\frac{1}{ \lambda\sqrt{\mu_{2}\epsilon_{2}}}\\ \sqrt{\mu_{2}\epsilon_{2}\omega^{2}-\frac{1}{\lambda^{2}}}&\omega>\frac{1}{ \lambda\sqrt{\mu_{2}\epsilon_{2}}}\end{cases}. \tag{30}\]
Substituting Equation (30) in Equation (28), we get complete reflection, i.e., \(R=1\), for \(\omega\leq 1/(\lambda\sqrt{\mu_{2}\epsilon_{2}})\). For finite conductivity (\(\sigma>0\)), \(R\) smoothly decreases from 1 to 0 with increasing \(\omega\). Next, we will perform demonstration tests to validate our implementation and highlight these key features of the reflection coefficient.
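For illustration, Equations (9), (27), and (28) can be evaluated numerically as in the sketch below; complex arithmetic automatically selects the correct branch of Equation (30) on either side of the cutoff.

```python
# Evaluate the reflection coefficient for a wave incident from vacuum
# (medium 1) onto a conductor or superconductor (medium 2).
import numpy as np

EPS0 = 8.854187817e-12
MU0 = 4.0e-7 * np.pi

def reflection_coefficient(f, sigma, lam, eps2=EPS0, mu2=MU0):
    omega = 2.0 * np.pi * f
    k2 = np.sqrt((mu2 * eps2 * omega**2 - 1.0 / lam**2)
                 + 1j * mu2 * sigma * omega)            # Eq. (9)
    beta = np.sqrt(MU0 / EPS0) * k2 / (mu2 * omega)     # Eq. (27)
    return np.abs((1.0 - beta) / (1.0 + beta))**2       # Eq. (28)

# For sigma = 0 and lambda = 100 nm, R = 1 for all frequencies below the
# cutoff f_c = 1/(2*pi*lambda*sqrt(eps*mu)) and drops off above it.
```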
_Comparison of Reflection Coefficient with Theory_
To compare the reflection coefficient with theoretical predictions, we perform one-dimensional simulations from \(z=0\) to \(z=89.6\) \(\mu\)m. For \(z<7\) \(\mu\)m we define a vacuum region (medium 1), and for \(z\geq 7\) \(\mu\)m we define a superconducting region with \(\lambda=100\) nm (medium 2). We perform two tests with different values of conductivity, \(\sigma=0\) and \(10^{4}\) S/m, and use vacuum permittivity and permeability everywhere. We use periodic boundary conditions in \(x\) and \(y\), and PML at the low and high \(z\) boundaries. Note that we terminate the simulation before the signal interacts with the PML boundary. We discretize the domain with a uniform mesh of 4,480 grid cells such that \(\Delta z=20\) nm; thus, we resolve the London penetration depth sufficiently
in our validation tests. We then excite the system at \(z=100\) nm with a Gaussian pulse, whose width is set by a chosen frequency, given by
\[E_{y}=\exp\left(\frac{-(t-t_{o})^{2}}{2t_{w}^{2}}\right) \tag{31}\]
where \(t_{w}=1/(2f)\) is the Gaussian width of the pulse, \(t_{o}=4t_{w}\) is the pulse delay, and the frequency, \(f\), is 100 THz. For the cases we present here, we use CFL=0.95, resulting in a timestep of 0.0632 fs, and the simulations were run for 15,800 timesteps, such that the simulated time window resolves frequencies down to one-hundredth of the input signal frequency. To compute the reflection, we first measure the total signal just upstream of the interface, at \(z=6.8\)\(\mu\)m, from the simulation. We then measure the input signal at the same observation point from a separate simulation, performed with the same simulation parameters but with pure vacuum conditions throughout the domain and no interface. To obtain the reflected signal, we subtract the input signal from the total signal. Finally, we compute the Fourier transforms of the reflected and input signals in order to compute the reflection coefficient \(R=|\hat{E}_{r}|^{2}/|\hat{E}_{i}|^{2}\), where \(\hat{E}_{r}\) and \(\hat{E}_{i}\) are the reflected and incident signals in Fourier space.
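The post-processing described above can be sketched in a few lines; the following assumes the total and incident probe signals have already been saved from the two runs, and all variable names are illustrative.

```python
import numpy as np

def reflection_from_probes(total_sig, incident_sig, dt):
    """Return (freqs, R) with R = |E_r|^2 / |E_i|^2 in Fourier space."""
    reflected = np.asarray(total_sig) - np.asarray(incident_sig)
    E_r = np.fft.rfft(reflected)
    E_i = np.fft.rfft(incident_sig)
    freqs = np.fft.rfftfreq(len(incident_sig), d=dt)
    # avoid dividing by negligible spectral content far from the pulse band
    R = np.abs(E_r) ** 2 / np.maximum(np.abs(E_i) ** 2, 1e-300)
    return freqs, R
```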
The comparison of the reflection coefficient obtained from our simulations with the theoretical predictions given in Equation (28), along with their absolute differences, is shown in Fig. 1. The results from our code match the theory within 10% in regions with reflection coefficient greater than 0.4, and the relative difference increases to a maximum of 30% as the reflection coefficient decreases for the \(\sigma=0\) S/m case. Notably, for the case with \(\sigma=0\), the cut-off frequencies from both the simulation and the theory are at \(f_{c}=1/(2\pi\lambda\sqrt{\epsilon\mu})=478.75\) THz. Thus, we infer that for all operating frequencies below this cutoff, the superconductor with \(\sigma=0\) will behave as expected with a reflection coefficient of 1. Also, for \(\sigma=10^{4}\) S/m, as long as the operating frequency is below a few THz, the reflection coefficient is nearly 1 and the metal behaves in the same way as a superconductor.
### Convergence Tests
Having validated the physical accuracy of the Maxwell-London solver, we now demonstrate the spatial and temporal convergence of our numerical implementation using three different geometrical configurations with superconducting materials that have finite conductivity.
* In the first setup (Section 4.3.1), the three-dimensional domain is homogeneous with constant properties for the material throughout the domain and a Gaussian pulse initialized propagates through the medium.
* In the second setup (Section 4.3.2), we introduce a thin strip of material (conductive or superconductive) in a vacuum domain and study the convergence for the Gaussian pulse that interacts with the material strip.
Figure 1: Comparison of reflection coefficient obtained from the simulation with theory for superconductor with 100 nm London penetration depth for conductivity, \(\sigma\)=0 and \(10^{4}\) S/m, along with the absolute difference between simulation and theory.
* In the third setup (Section 4.3.3), to study convergence for a relatively complex field structure compared to the first two cases, we embed a cubic superconducting material at the center of the vacuum domain which interacts with the incident Gaussian pulse.
In each set of convergence tests, the computational domain extends from -640 to +640 nm in each of the three directions, with vacuum permittivity and permeability. We use periodic boundary conditions in all the tests described below. We perform tests with the four types of media listed below to compare the solutions against the superconducting material with finite conductivity.
* (a) vacuum everywhere (\(\sigma=0\) S/m and superconductivity disabled),
* (b) purely conductive (\(\sigma=10^{4}\) S/m and superconductivity disabled),
* (c) purely superconducting (\(\sigma=0\) and \(\lambda=40\) nm),
* (d) superconducting with finite conductivity (\(\sigma=10^{4}\) S/m and \(\lambda\)=40 nm).
In order to compute the convergence rates, we use a standard procedure for computing \(L^{1}\) error norms for the \(\mathbf{E}\) and \(\mathbf{B}\) fields at increasing resolution in space and/or time. This involves computing the error between "coarse" and "medium" resolution solutions, \(E_{\mathrm{coarse}}^{\mathrm{medium}}\), and the error between "medium" and "fine" resolution solutions, \(E_{\mathrm{medium}}^{\mathrm{fine}}\). The exact procedure, which includes the spatial averaging procedure for the \(\mathbf{E}\) and \(\mathbf{B}\) fields at different resolutions, is described in detail in Section 5.1 of the original ARTEMIS paper [28]. We note that for the temporal-only convergence tests below, no spatial averaging is required since all solutions use the same cell size.
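For factor-of-two refinement, the observed order of accuracy follows from the ratio of successive error norms; a minimal sketch of the computation used to fill Tables 2-6 (assuming the solutions have already been averaged onto a common grid) is:

```python
import numpy as np

def l1_error(a, b):
    """Mean L1 difference between two same-shape field snapshots."""
    return np.mean(np.abs(a - b))

def convergence_rate(err_coarse_medium, err_medium_fine):
    """Observed order of accuracy for factor-of-2 refinement."""
    return np.log2(err_coarse_medium / err_medium_fine)

# e.g. the E-field row of Table 2:
print(convergence_rate(3.33e-4, 8.30e-5))  # ~2.00
```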
#### 4.3.1 Homogeneous Conductive and Superconducting Domains
First, we demonstrate that, within homogeneous domains, the algorithm is second-order in space and time with all the physics turned on, i.e., with a superconducting material with finite conductivity. We perform four tests where the entire domain is homogeneous with the material types listed in cases (a)-(d) above. We initialize three Gaussian pulses, given by,
\[E_{x}=e^{-z^{2}/L^{2}},\hskip 28.452756ptB_{x}=\frac{1}{c}e^{-y ^{2}/L^{2}},\] \[E_{y}=e^{-x^{2}/L^{2}},\hskip 28.452756ptB_{y}=\frac{1}{c}e^{-z^{2}/ L^{2}},\] \[E_{z}=e^{-y^{2}/L^{2}},\hskip 28.452756ptB_{z}=\frac{1}{c}e^{-x ^{2}/L^{2}}, \tag{32}\]
with \(L=80\) nm and \(c=1/\sqrt{\epsilon_{0}\mu_{0}}\) m/s. Note that these fields are consistent with the relationship satisfying the intrinsic impedance in vacuum, and as demonstrated below, result in pure translation under vacuum conditions. In each simulation, the domain is discretized with \(512^{3}\) grid cells (2.5 nm resolution) and the simulation is performed to 200 time steps with a CFL of 0.9 to a physical time of 0.87 fs. The three Gaussian pulses are all initialized to propagate in different directions, so showing one-dimensional plots of any one field is rotationally equivalent to the results for the other fields.
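To make the role of the two current contributions concrete, below is a schematic one-dimensional leapfrog sketch assuming the simplest explicit coupling of the London equation, \(\partial J_{s}/\partial t=E/(\mu_{0}\lambda^{2})\), with vacuum \(\mu\) as in these tests. This is only an illustration of how the conductive and superconducting currents enter Ampere's law; the actual ARTEMIS discretization (field staggering and time-centering of the currents) is the one described earlier in this paper, and the fully explicit treatment of \(\sigma\) here is only conditionally stable.

```python
import numpy as np

EPS0, MU0 = 8.8541878128e-12, 4e-7 * np.pi

def step(Ex, Hy, Js, sigma, inv_lam2, dz, dt):
    """Advance (Ex, Hy, Js) one leapfrog step on a 1D staggered grid.

    Ex and Js live on integer nodes; Hy lives on half nodes between them.
    sigma and inv_lam2 (= 1/lambda^2, zero in vacuum) vary with z.
    """
    # Faraday: dHy/dt = -(1/mu0) dEx/dz
    Hy[:-1] -= dt / MU0 * (Ex[1:] - Ex[:-1]) / dz
    # London equation: dJs/dt = Ex / (mu0 * lambda^2)
    Js += dt / MU0 * inv_lam2 * Ex
    # Ampere: dEx/dt = (1/eps0) * (-dHy/dz - sigma*Ex - Js)
    dHy = np.zeros_like(Ex)
    dHy[1:-1] = (Hy[1:-1] - Hy[:-2]) / dz
    Ex -= dt / EPS0 * (dHy + sigma * Ex + Js)
    return Ex, Hy, Js
```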
First, to demonstrate that both the conductive physics and the superconductive physics have nontrivial contributions to the evolution of the system, we compare the results obtained from the four simulations with the material types listed above in (a)-(d). In Fig. 2, we show the initial and final configuration of \(E_{x}(z)\), extracted along the \(z\)-direction at the center of the domain, to show the nontrivial contribution of the physics for each type of material. Case (a), with a homogeneous vacuum domain, results in pure translation as expected. For Case (b) with finite conductivity, the signal undergoes attenuation along with some dispersion, resulting in negative values for the electric field; this is a natural consequence of the frequency-dependent oscillation of the electromagnetic field. The signal in Case (c) undergoes some translation and exhibits a more pronounced dispersion due to superconductivity and the complex wavenumber previously described in Section 4.1. Finally, the signal in Case (d) is similar to Case (c), with additional attenuation due to the finite conductivity.
Next, we compute the numerical convergence in space and time (simultaneously) for Case (d) where both the conductive and superconductive physics is enabled in the entire domain. To compute the convergence, we perform 3 simulations using a computational mesh with \(128^{3},256^{3}\), and \(512^{3}\) grid cells (i.e., 10, 5, and 2.5 nm cell resolution, respectively). We use a CFL of 0.9 for each simulation and run the simulations for 50, 100, and 200 time steps, i.e., to the same physical time of 0.87 fs. In Table 2 we show clear second-order convergence in space and time for all field components.
#### 4.3.2 Conductive and Superconductive Strips
In these tests, the domain is vacuum except for a thin strip (in \(z\)). Specifically, a thin strip of material is initialized from \(z=-40\) nm to \(z=+40\) nm, and it extends to the periodic domain boundaries in \(x\) and \(y\). Similar to the homogeneous domain cases presented above, to demonstrate that both conductive and superconductive physics have nontrivial contributions to the
evolution of the system, we perform the four simulations with the same parameters for \(\sigma\) and \(\lambda\) described in the previous section (Section 4.3.1).
| Variable | \(E_{\text{coarse}}^{\text{medium}}\) | \(E_{\text{medium}}^{\text{fine}}\) | Rate |
| --- | --- | --- | --- |
| \(E_{x},E_{y},E_{z}\) | \(3.33\times 10^{-4}\) | \(8.30\times 10^{-5}\) | 2.00 |
| \(B_{x},B_{y},B_{z}\) | \(6.05\times 10^{-13}\) | \(1.52\times 10^{-13}\) | 1.99 |

Table 2: Spatial and temporal convergence rates for a Gaussian pulse propagating through a homogeneous superconducting domain with finite conductivity (\(\lambda=40\) nm and \(\sigma=10^{4}\) S/m).
Figure 2: Initial and final \(E_{x}\) field as a function of \(z\) for a homogeneous medium within the domain. We compare the variation of \(E_{x}\) for the cases where the domain is vacuum everywhere, conductive only (\(\sigma=10^{4}\) S/m), purely superconducting (\(\sigma=0\) and \(\lambda=40\) nm), and superconducting with finite conductivity (\(\sigma=10^{4}\) S/m and \(\lambda=40\) nm).
We initialize a single Gaussian pulse given by,
\[E_{x}=e^{-(z-z_{0})^{2}/L^{2}}, B_{x}=0,\] \[E_{y}=0, B_{y}=\frac{1}{c}e^{-(z-z_{0})^{2}/L^{2}},\] \[E_{z}=0, B_{z}=0, \tag{33}\]
with \(z_{0}=-320\) nm. Even though we perform the simulations in three dimensions, this setup is essentially one-dimensional since the wave propagates purely in the \(z\) direction, and \(E_{y},E_{z},B_{x}\), and \(B_{z}\) remain zero. The simulation domain is discretized with a \(512^{3}\) grid (i.e., 2.5 nm resolution), and the simulation is performed for 400 timesteps with CFL=0.9 to a physical time of 1.73 fs.
In Fig. 3 we show the initial and final configuration of \(E_{x}(z)\), extracted along the \(z\)-direction at the center of the domain, to show the nontrivial evolution of the signal after it interacts with the strip, indicated by vertical lines. It can be seen that, similar to the homogeneous setup, Case (a) with pure vacuum conditions results in pure translation of the signal. For Case (b), the signal interacts with the conductive strip and undergoes attenuation as well as reflection, evident from the negative value of the signal upstream of the metal strip. For the superconducting strip in Case (c), we see a more complex signal at the final step of the simulation, where the pulse is modified by two superconducting (zero-conductivity) interfaces. After interacting with the material strip, the pulse undergoes reflection and transmission with frequency-dependent dispersion. Finally, in Case (d), with a superconducting strip with finite conductivity, the final profile of the signal is similar to Case (c) but with additional attenuation.
Since this configuration includes an interface, we perform separate tests to compute temporal and spatial convergence. We first perform tests with Case (d) to demonstrate that the algorithm is second-order in time in the presence of both superconducting and conductive currents. We perform 3 simulations with a computational mesh containing \(256^{3}\) grid cells (i.e., 5 nm cell resolution), but use a CFL of 0.9, 0.45, and 0.225 to vary the timestep resolution,
| Variable | \(E_{\text{coarse}}^{\text{medium}}\) | \(E_{\text{medium}}^{\text{fine}}\) | Rate |
| --- | --- | --- | --- |
| \(E_{x}\) | \(1.23\times 10^{-4}\) | \(3.08\times 10^{-5}\) | 2.00 |
| \(B_{y}\) | \(3.90\times 10^{-13}\) | \(9.73\times 10^{-14}\) | 2.00 |

Table 3: Temporal convergence rates for a Gaussian pulse interacting with a strip of superconducting material with finite conductivity (\(\lambda=40\) nm and \(\sigma=10^{4}\) S/m).
such that the simulations reach the same physical time of 1.73 fs using 200, 400, and 800 time steps, respectively. In Table 3 we show clear second-order convergence in time for all the field components.
Next, we conduct a spatial-only convergence test for Case (d) to demonstrate that the algorithm is first-order in space, even in the presence of the spatial discontinuity in material properties (i.e., vacuum-superconductor interfaces).
| Variable | \(E_{\text{coarse}}^{\text{medium}}\) | \(E_{\text{medium}}^{\text{fine}}\) | Rate |
| --- | --- | --- | --- |
| \(E_{x}\) | \(9.05\times 10^{-3}\) | \(4.86\times 10^{-3}\) | 0.90 |
| \(B_{y}\) | \(2.82\times 10^{-11}\) | \(1.53\times 10^{-11}\) | 0.88 |

Table 4: Spatial convergence rates for a Gaussian pulse interacting with a strip of superconducting material with finite conductivity (\(\lambda=40\) nm and \(\sigma=10^{4}\) S/m).
Figure 3: Initial and final \(E_{x}\) field as a function of \(z\) for a Gaussian pulse initialized in vacuum interacting with a strip of material indicated by the region between the vertical solid lines. We compare the variation of \(E_{x}\) for the cases where the domain is vacuum everywhere, conductive only (\(\sigma=10^{4}\) S/m), purely superconducting (\(\sigma=0\) and \(\lambda=40\) nm), and superconducting with finite conductivity (\(\sigma=10^{4}\) S/m and \(\lambda=40\) nm).
We perform 3 simulations with \(128^{3},256^{3}\), and \(512^{3}\) grid cells (i.e., 10, 5, and 2.5 nm cell resolution), using a CFL of 0.225, 0.45, and 0.9, respectively, such that the timestep in each simulation is the same. We run all the simulations for 400 time steps (to the same physical time of 1.73 fs). Due to the abrupt spatial discontinuity in both physics and conductivity, we see in Table 4 that the algorithm is first-order in space, which is expected for configurations with inherent discontinuities. We have confirmed with separate simulations that second-order spatial convergence is retained for the case where \(\sigma=0\) and \(\lambda\) varies smoothly from 400 nm to 40 nm over the entire right half of the domain. In other words, this is the case where \(1/\lambda\) transitions relatively smoothly at the material interface and we expect second-order convergence.
#### 4.3.3 Cubic Block of Superconductive Material
In this set of tests, a cubic material with material type (d) is embedded in a vacuum domain to demonstrate the spatial and temporal convergence for more complex three-dimensional field structures than the first two configurations used for the convergence studies. The setup and initialization for this case are identical to the previous case with a material strip, except that here the domain has an embedded cube extending from \(-40\) nm to \(+40\) nm in all three spatial directions. For this setup, we only consider Case (d), where the material is superconducting with finite conductivity (\(\lambda=40\) nm and \(\sigma=10^{4}\) S/m). Even though we initialize only the \(E_{x}\) and \(B_{y}\) components (described previously in Section 4.3.2), as the Gaussian pulse propagates along the \(z\)-direction it interacts with the cubic block of material, and all components of the electric and magnetic field develop complex structures, as seen in Fig. 4, thus allowing us to study convergence in three dimensions for all components.
In Fig. 4, we show the time evolution of only two components, \(E_{x}\) and \(B_{y}\), extracted on an \(x\)-\(z\) slice through the center of the domain, with the embedded box at the center. The figure illustrates that both the \(E_{x}\) and \(B_{y}\) fields develop complex structures, especially near the embedded box, as the signal propagates through the superconductor from \(z<0\) to \(z>0\). The figures at the top show the incident Gaussian pulse at 0.52 fs approaching the embedded box. As the pulse propagates through the center of the domain, we observe that the signal is mainly transmitted outside the embedded box (evident from the signal at 1.21 fs), while, inside the box, the amplitude of the \(E_{x}\) and \(B_{y}\) components is very small. Finally, as the signal completely propagates
through the material, we observe that surface fields develop surrounding the embedded box, which we attribute mainly to the superconducting current that may evolve in these regions. We note that the main purpose of this setup is to perform numerical convergence tests in three dimensions.
Figure 4: Time evolution of the field components \(E_{x}\) (left column) and \(B_{y}\) (right column), extracted along an \(x\)-\(z\) slice at the center of the three-dimensional domain at \(t=0.52\) (top), 1.21 (middle), and 1.73 (bottom) fs. The figures show a zoomed-in view of the slice, with the Gaussian pulse propagating along the \(z\)-direction through an embedded cube of superconducting material with finite conductivity (\(\sigma=10^{4}\) S/m and \(\lambda=40\) nm, indicated by the solid black box).
We conduct separate tests for temporal and spatial convergence, similar to the tests performed for the material strip in Section 4.3.2, and obtain the same overall conclusions, namely, second-order accuracy in time and first-order accuracy in space, as illustrated in Tables 5 and 6, respectively.
## 5 Coplanar Waveguide Resonator
In this section, we present three-dimensional simulations performed using ARTEMIS for a coplanar waveguide (CPW) resonator, verifying that we capture the resonant behavior of the structure, and we also present the \(Q\)-factor measurements for different material configurations. In our simulations, the computational domain has a physical size of [-65,65] \(\mu\)m in \(x\), [-504,504] \(\mu\)m in \(y\), and [0,64] \(\mu\)m in \(z\). The domain size in \(z\) was chosen to be large enough to prevent loss of information from the closed-loop magnetic field lines, verified by matching results with larger domain sizes in \(z\) over shorter times (to save computational resources). We discretize the domain with \(130\times 1008\times 1280\) grid cells, so that \(\Delta x=\Delta y=1\)\(\mu\)m and \(\Delta z=50\) nm.
| Variable | \(E_{\text{coarse}}^{\text{medium}}\) | \(E_{\text{medium}}^{\text{fine}}\) | Rate |
| --- | --- | --- | --- |
| \(E_{x}\) | \(1.68\times 10^{-3}\) | \(4.77\times 10^{-4}\) | 1.82 |
| \(E_{y}\) | \(1.31\times 10^{-4}\) | \(6.48\times 10^{-5}\) | 1.01 |
| \(E_{z}\) | \(1.44\times 10^{-4}\) | \(7.18\times 10^{-5}\) | 1.00 |
| \(B_{x}\) | \(7.89\times 10^{-14}\) | \(3.83\times 10^{-14}\) | 1.04 |
| \(B_{y}\) | \(5.37\times 10^{-12}\) | \(1.46\times 10^{-12}\) | 1.88 |
| \(B_{z}\) | \(5.54\times 10^{-13}\) | \(2.81\times 10^{-13}\) | 0.98 |

Table 6: Spatial convergence rate for a Gaussian pulse propagating through a cube of superconducting material with finite conductivity (\(\lambda=40\) nm and \(\sigma=10^{4}\) S/m).

| Variable | \(E_{\text{coarse}}^{\text{medium}}\) | \(E_{\text{medium}}^{\text{fine}}\) | Rate |
| --- | --- | --- | --- |
| \(E_{x}\) | \(1.02\times 10^{-4}\) | \(2.55\times 10^{-5}\) | 2.00 |
| \(E_{y}\) | \(7.38\times 10^{-7}\) | \(1.84\times 10^{-7}\) | 2.00 |
| \(E_{z}\) | \(7.67\times 10^{-7}\) | \(1.91\times 10^{-7}\) | 2.00 |
| \(B_{x}\) | \(1.49\times 10^{-16}\) | \(3.72\times 10^{-17}\) | 2.00 |
| \(B_{y}\) | \(3.40\times 10^{-13}\) | \(8.49\times 10^{-14}\) | 2.00 |
| \(B_{z}\) | \(4.71\times 10^{-15}\) | \(1.18\times 10^{-15}\) | 2.00 |

Table 5: Temporal convergence rate for a Gaussian pulse propagating through a cube of superconducting material with finite conductivity (\(\lambda=40\) nm and \(\sigma=10^{4}\) S/m).
In Figure 5(a), we show a schematic of the CPW resonator to illustrate the resonator structure used in our simulations. We use a CPW structure designed to support a fundamental mode of approximately 100 GHz based on approximate analytic formulas [36]. Note that our circuit dimensions and design frequency are based on typical length scales and operating conditions used in quantum readout applications [12].
The resonator structure, as shown in Fig. 5(a), consists of a silicon substrate with a thickness of \(h=32\)\(\mu\)m and relative permittivity of 11.7. Thin superconducting films sit atop the silicon substrate with a thickness of \(t=200\) nm and relative permittivity of 1. The remainder of the domain on top of the resonator structure is vacuum. The grid cell size in the \(z\)-dimension, \(\Delta z=50\) nm, is chosen to properly resolve the finest-scale feature in the simulation, which in this case is the superconducting film with 200 nm thickness. The central resonator line has a length of 600 \(\mu\)m along the \(y\)-direction, the input/output ports have a length of 100 \(\mu\)m, and the air gaps between the central resonator line and the input/output ports are 100 \(\mu\)m in \(y\), allowing for capacitive coupling between the ports and the resonator. All three components have a width of \(w=10\)\(\mu\)m. The ground planes have a length of 1000 \(\mu\)m in \(y\) and width \(g=50\)\(\mu\)m, and the air gap between the ground planes and the input/output as well as transmission lines is \(s=6\)\(\mu\)m.
Figure 5: (a) Schematic to illustrate a CPW resonator structure found in quantum readout applications with superconducting films sitting atop a silicon substrate. (b) Spatial variation of the electric field along an \(x\)-\(z\) slice passing through the transmission lines. The dark shaded regions indicate metal (either conducting or superconducting). The red and blue shading indicates the magnitude of the \(E_{x}\) field (blue/red = \(\pm 0.001\) V/m) near the end of the simulation, illustrating the fundamental mode. The inset is an \(x\)-\(z\) slice with normal in the \(y\)-direction extracted at the front of the resonator line, \(y=-300\)\(\mu\)m, with vectors illustrating the electric field.
The relative permeability everywhere in the domain is 1. We use a perfectly matched layer (PML) [37; 30] boundary condition on all domain faces. Note that with these geometrical specifications of the CPW, we use a 4 \(\mu\)m vacuum gap between the domain boundary and the outer \(x\) and \(y\) edges of the CPW resonator; thus each PML boundary condition is in contact with either vacuum or dielectric material, allowing signals to propagate out of all domain faces. The use of PML in contact with superconducting material in the two-fluid model is not well understood and is a subject for future work.
In the air gaps between the ground planes and the input port, we provide two soft-source excitations, equal in magnitude but opposite in sign in the left and right air gaps. The excitation used in the two air gaps at the front end, i.e., at \(y=-500\)\(\mu\)m, is a modulated sine wave with center frequency \(f_{\rm in}\) and associated period \(T_{\rm in}=1/f_{\rm in}\):
\[E_{x}=\pm e^{-(t-4T_{\rm in})^{2}/(2T_{\rm in}^{2})}\sin\left(\frac{2\pi t}{T_{\rm in}}\right)\,\rm{V/m}. \tag{34}\]
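For reference, the excitation of Equation (34) can be written as a small helper; the sign argument selects the left or right air gap, and all names here are illustrative.

```python
import numpy as np

def port_excitation(t, f_in, sign=+1.0):
    """Soft-source E_x of Eq. (34); sign = +1/-1 for the two air gaps."""
    T = 1.0 / f_in
    return sign * np.exp(-(t - 4.0 * T) ** 2 / (2.0 * T ** 2)) \
        * np.sin(2.0 * np.pi * t / T)
```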
In each simulation we choose \(f_{\rm in}=100\) GHz to match the predicted resonance frequency of the CPW. We use a CFL of 0.95, corresponding to a time step of \(\Delta t\approx 0.158\) fs. We run our simulations on 32 NVIDIA A100 GPUs on the NERSC Perlmutter system for 12 hours, to a total time of \(\sim 134\) ps (\(\sim 850,000\) time steps; we find that each time step requires \(\sim 0.05\) s). While we have resolved the superconducting film thickness in \(z\), ideally we would also like to resolve the structure in \(x\) and \(y\), and to study the effects of varying circuit dimensions. However, computational allocations limit us to this system size and a limited number of simulations with high resolution only along the \(z\)-direction, which may be sufficient for the analysis we present. We conduct four tests to measure the effects of using the two-fluid model for superconductivity when compared to traditional approximations for superconductivity, such as modeling the material as a regular or perfect conductor with artificially high conductivity. Below are the material properties we set for the superconducting material in each of the four test cases:
* Case 1: Regular conductor (\(\sigma=6\times 10^{7}\) S/m and superconductivity disabled)
* Case 2: Regular conductor, but with artificially high conductivity (\(\sigma=10^{10}\) S/m and superconductivity disabled),
* Case 3: Superconductor with finite conductivity (\(\sigma=6\times 10^{7}\) S/m and \(\lambda=100\) nm),
* Case 4: Purely superconductive (\(\sigma=0\) and \(\lambda=100\) nm).
Case 1 represents a standard conductor, in this case copper, and requires only the Maxwell solver. Case 2 represents a commonly-used high conductivity approximation for superconducting behavior, requiring only the Maxwell solver. Case 3 represents a superconductor that has been cooled to just below the critical threshold and retains its standard conductivity properties (we keep the conductivity of copper here, to compare with Case 1, even though a typical superconductor such as niobium has a room temperature conductivity that is an order of magnitude smaller), and in this case, the two-fluid model implemented in ARTEMIS is used. Finally, Case 4 represents a superconductor that has been cooled to near absolute zero, and has essentially no conductive current, and is simulated using the two-fluid model.
In Figure 5(b), we show the spatial variation of the \(E_{x}\) field component along the \(x\)-\(y\) slice that cuts through the thin film to include the air gap (transmission line) between the ground plane and input/output port and the central resonator line. The \(E_{x}\) field shown in the figure is obtained from the results of the Case 3 simulation at time \(t\sim 0.13\) ns. We observe that the system, excited with the signal near the input port, achieves resonance with maximum field amplitude near the front and back edges of the transmission lines (i.e., in the air-gap region between the ground plane and the central resonator line). As expected, the fundamental mode (\(\lambda_{EM}/2\), where \(\lambda_{EM}\) is the effective wavelength) of the EM resonance is excited at 100 GHz. We also show a zoomed-in view of the field extracted along an \(x\)-\(z\) slice near the front edge of the central resonator (at \(y=-300\)\(\mu\)m), along with the electric field vectors, to demonstrate that the left and right air gaps have opposite \(E_{x}\) fields, with maximum amplitude in this region.
To further visualize the resonance and compare the signal evolution among the four test cases described above, we measure the signal, i.e., the \(E_{x}\) field component, at two locations: (1) in the air gap between the ground plane and input port, halfway along the length of the port, at \(y=-450\)\(\mu\)m, to obtain the input excitation; (2) in the air gap between the front edge of the resonator line and the ground plane, i.e., at \(y=-300\)\(\mu\)m, the same location where we observe the maximum field amplitude in the \(x\)-\(z\) inset in Fig. 5(b). In Fig. 6, we show the input excitation (top) and compare the signal evolution at the second location as a function of time as obtained from the
four cases. In each case, the fundamental mode is clearly established in the resonator line, similar to that previously illustrated for the \(E_{x}\) field in Fig. 5(b). However, the amplitude of the signal decays rapidly for Case 1, where the superconducting film is approximated as a regular conductor with \(\sigma=6\times 10^{7}\) S/m. We also observe that the amplitude variations for Cases 2, 3, and 4 are similar. To quantify the difference, we compute the \(Q\)-factor and resonant frequency from the measured signal shown in Fig. 6 [38; 39]. The \(Q\)-factor, also known as the quality factor, quantifies how underdamped a resonator is, and is therefore widely used to quantify the efficiency of a resonator. We compute the \(Q\)-factor beginning the measurement at \(t=8\times 10^{-11}\) s, which is after the input pulse has died out, resonance has formed, and at least five periods of resonance are recorded.
Figure 6: (Top) Measured \(E_{x}\) field in the air gap between the input port and the ground plane, halfway down the input port. (Bottom) Measured \(E_{x}\) field for the four simulations in the air gap between the resonator line and the ground plane at the front of the resonator line.
We use the signal processing code ESPRIT [38; 39] to extract the attenuation constant and phase constant, which are used to compute the \(Q\)-factor. In Table 7 we report the \(Q\)-factor and computed resonance frequency, \(f_{\mathrm{comp}}\), for each of the four cases. We see that the standard conductor (Case 1) has the lowest \(Q\)-factor, and the superconductor modeled with no conductivity (Case 4) has the largest \(Q\)-factor, i.e., is the most performant. Cases 2 and 3 show a \(Q\)-factor that is in between the purely conductive and purely superconducting cases. This indicates that the amount of standard conductivity included in a superconducting model (in physical terms, the temperature of the system) can have a significant impact on performance, and the assumption of quasi-infinite conductivity may not accurately describe performance. Each simulation is able to compute a resonance frequency that is close to the predicted frequency, consistent with the observation from the time-domain plot shown in Fig. 6. All input files, data sets, and scripts used for the analysis of the simulations presented in Secs. 4 and 5 are provided online 1.
Footnote 1: [https://doi.org/10.5281/zenodo.7943012](https://doi.org/10.5281/zenodo.7943012)
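As a rough illustration of the \(Q\)-factor extraction, the following sketch fits the ring-down envelope of a single decaying mode, \(E(t)\sim e^{-\alpha t}\sin(2\pi ft+\phi)\), and uses \(Q=\omega_{0}/(2\alpha)\). It is a simple stand-in for, not a reimplementation of, the ESPRIT estimator used above.

```python
import numpy as np

def q_factor(t, signal, t0=8e-11):
    """Estimate Q from the ring-down recorded after time t0."""
    keep = t > t0
    t, s = t[keep], signal[keep]
    # positive local maxima approximate the decaying envelope (one per period)
    pk = (s[1:-1] > s[:-2]) & (s[1:-1] > s[2:]) & (s[1:-1] > 0)
    tp, ap = t[1:-1][pk], s[1:-1][pk]
    alpha = -np.polyfit(tp, np.log(ap), 1)[0]  # amplitude ~ exp(-alpha * t)
    f = 1.0 / np.mean(np.diff(tp))             # peak spacing ~ one period
    return np.pi * f / alpha                   # Q = omega0 / (2 * alpha)
```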
## 6 Summary and Future Work
We have implemented a two-fluid model for superconductivity within the open-source ARTEMIS framework and performed numerical studies to validate the model. We have demonstrated that our algorithm is second-order accurate in space and time within superconducting materials and first-order in space in the presence of superconducting material interfaces. The reflection coefficient and skin depth obtained from our implementation agree with theoretical predictions for a wide range of material properties and frequencies. We have applied our algorithm to model resonant behavior in a superconducting coplanar waveguide,
| | \(Q\) | \(f_{\mathrm{comp}}\) [GHz] |
| --- | --- | --- |
| Case 1 | 24 | 96.9 |
| Case 2 | 491 | 98.2 |
| Case 3 | 316 | 97.1 |
| Case 4 | 973 | 97.1 |

Table 7: Computed \(Q\)-factors and resonance frequencies for (Case 1) a regular conductor, (Case 2) an artificially-high conductive material, (Case 3) a superconductor with conductivity, and (Case 4) a superconductor with zero conductivity.
demonstrating that the superconducting physics performs on par with, or even better than, the assumption of quasi-infinite conductivity.
There are several avenues to explore for future work in relation to improving our model further and broadening our applications. To improve the model, we would like to develop an effective absorbing boundary condition, along the lines of PML, for superconducting interfaces at domain boundaries. We would like to explore and develop higher-order accurate discretizations in space to improve the accuracy of the method at superconducting material interfaces. Also, methods that are not subject to the Courant condition, such as implicit [40; 41] or spectral methods, could be explored in order to significantly increase the timestep, which limits the frequency that can be used in coplanar waveguide simulations even when using GPUs. While in this work we use a constant conductivity and London penetration depth throughout the simulation, new modifications to the model could be implemented to account for the temperature dependence of these quantities using alternate approaches suggested by Hirsch [42]. It would also be of interest to explore using a more general Ginzburg-Landau model or an electrodynamic vector potential to compute the superconducting current density [43] and compare with the two-fluid model implemented in this work. For the case of complex geometrical features, e.g., resonator readout circuitry with non-grid-aligned transmission lines or spherical/curved geometries, where the current work only supports a staircase approximation, we would like to explore embedded boundary discretizations, which have been developed for Maxwell's equations [44; 45; 46]. Finally, we would like to expand the implementation to applications in larger circuits, where we may model multiple superconducting sub-components and develop new methods to quantify crosstalk interactions between them.
## Acknowledgments
This work was supported by Laboratory Directed Research and Development (LDRD) funding from Berkeley Lab, provided by the Director, Office of Science, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. This work was supported in part by the U.S. Department of Energy, Office of Science, Office of Workforce Development for Teachers and Scientists (WDTS) under the Visiting Faculty Program (VFP). This research used resources of the National Energy Research Scientific Computing Cen
ter (NERSC), a U.S. Department of Energy Office of Science User Facility operated under Contract No. DE-AC02-05CH11231.
|
2303.09917 | Vision Transformer for Action Units Detection | Facial Action Units detection (FAUs) represents a fine-grained classification
problem that involves identifying different units on the human face, as defined
by the Facial Action Coding System. In this paper, we present a simple yet
efficient Vision Transformer-based approach for addressing the task of Action
Units (AU) detection in the context of Affective Behavior Analysis in-the-wild
(ABAW) competition. We employ the Video Vision Transformer(ViViT) Network to
capture the temporal facial changes in the video. Besides, to reduce the massive
size of the Vision Transformer model, we replace the ViViT feature extraction
layers with a CNN backbone (RegNet). Our model outperforms the baseline model
of the ABAW 2023 challenge, with a notable 14% difference in results. Furthermore,
the achieved results are comparable to those of the top three teams in the
previous ABAW 2022 challenge. | Tu Vu, Van Thong Huynh, Soo Hyung Kim | 2023-03-16T13:43:02Z | http://arxiv.org/abs/2303.09917v2 | # Vision Transformer for Action Units Detection
###### Abstract
Facial Action Units detection (FAUs) represents a fine-grained classification problem that involves identifying different units on the human face, as defined by the Facial Action Coding System. In this paper, we present a simple yet efficient Vision Transformer-based approach for addressing the task of Action Units (AU) detection in the context of the Affective Behavior Analysis in-the-wild (ABAW) competition. We employ the Video Vision Transformer (ViViT) network to capture the temporal facial changes in the video. Besides, to reduce the massive size of the Vision Transformer model, we replace the ViViT feature extraction layers with a CNN backbone (RegNet). Our model outperforms the baseline model of the ABAW 2023 challenge [8], with a notable 14% difference in results. Furthermore, the achieved results are comparable to those of the top three teams in the previous ABAW 2022 challenge.
## 1 Introduction
Affective computing is a foundational field in Artificial Intelligence that aims to enable machines to recognize, interpret, and respond to human emotions. Recent advances in deep learning and computer vision techniques have enabled significant breakthroughs in the field, but several challenges remain unsolved. In particular, Facial Affect Analysis in the Wild has emerged as a notable challenge in recent years. This task plays a crucial role in applications such as Human-Machine Interaction and serves as an initial step for many systems. As such, the Affective Behavior Analysis in the Wild (ABAW) competition [7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 27] was organized to address these challenges. Since the first Workshop [27], ABAW has become an important platform for researchers to benchmark their approaches and collaborate on solving Affective Computing problems.
The competition comprises three tasks, each focusing on a different, commonly used representation in affect analysis: Expression, Facial Action Units, and Valence-Arousal. The Facial Action Units (AU) detection task utilizes the units defined by the Facial Action Coding System (FACS) [5] to capture and interpret facial muscle movements associated with different expressions. The Expression Recognition task, on the other hand, employs categorical and explicit definitions to represent human expressions. Finally, the Valence and Arousal estimation task uses continuous values to describe human emotional states, providing a more nuanced and comprehensive approach to affect analysis. In this research, we particularly focus on the Action Unit detection task, which is a multi-label (12 labels) classification problem.
The Transformer architecture [25] has gained widespread popularity as a model of choice in the field of Deep Learning. Attention-based models have emerged as the state-of-the-art approach not only for Natural Language Processing (NLP) tasks but also for achieving significant performance in various Computer Vision problems, especially classification tasks [4]. For that reason, in this study, we present a Video Vision Transformer [1] based approach for the Action Units Detection task in the ABAW 2023 challenge.
Overall, the contributions of this paper can be summarized as follows:
* Instead of feeding the model raw video, we employ a CNN feature extraction model as the embedding module to obtain a representation of the video. This method reduces the size of the model while keeping the important information, hence producing a lighter model.
* We utilize the ViViT model with an ensemble learning scheme for the Facial Action Units model. The model outperforms the baseline model and shows competitive results compared with the winning methods of the last ABAW competition.
## 2 Related Work
### Facial Action Units Detection in the wild
Facial affective computing has been a significant challenge in the field of computer vision since its early stages. During that early period, popular approaches primarily focused on single-modal affective analysis and static 2D data processing [2]. In the deep learning era, this problem has been extended in different directions, such as spontaneous facial expression [21, 29], multi-modalities, 3D facial expression [29], and automatic facial affect analysis [20]. DISFA [21] is one of the first publicly available databases of spontaneous facial expressions with well-annotated Action Unit intensities. One year later, Zhang et al. introduced the first spontaneous 3D facial expression database [29]. Until now, many Facial Affective Analysis studies still use these two datasets and attain remarkable results [6, 24].
Although it has been researched for a long time, most Affective Computing research is conducted in controlled environments. Therefore, the applicability of this research in the real world is quite limited, despite achieving good results. To address this problem, Zafeiriou et al. [27] introduced the first dataset for analysing human affective behavior in real-world scenarios. The dataset can be accessed through the ABAW competition and is still updated each year.
### Vision Transformers in Facial Action Units Detection
Since the outstanding performance of ViT [4] and other NLP-influenced models in image and video classification, the Transformer [25] has been adopted for various Computer Vision tasks. For Action Units Detection, "Facial Action Unit Detection With Transformers" [6] is one of the first studies using the Transformer. Following previous Region-of-Interest attention based methods such as JAA-Net [24] or EAC-Net [17], the study uses the Transformer's Multi-Head Attention module as the ROI attention module. While this method achieves promising results on images, the size of the model due to the ROI attention module may not be suitable for videos.
In the previous year's ABAW competition (CVPR 2022) [7], a number of competitors employed Transformers. Specifically, among the top five teams, three utilized Transformer models as a core component of their architecture [22, 26, 28]. The winning team [28] used a Transformer as the fusion module for their multi-modal architecture. The fourth-place team [26] also used a multi-modal scheme, but employed the Transformer for feature extraction. The third-place team [22] treated each image feature from the CNN extractor as a token and integrated a Transformer as the classification head. However, the Transformer models used in these methods follow the original architecture, which is not specifically suited to video processing.
## 3 Methodology
Following the work of [22], we construct a ViViT-based model [1] for the Facial Action Units Detection problem. Our architecture comprises two core modules: a feature extraction module and a classification module. The overall architecture is shown in Figure 1.
### Feature Extraction
For the feature extraction module, due to its good performance and small model size, we use RegNetY [23] as the backbone. Proposed by Radosavovic et al. in 2020, RegNetY is a type of simple and regular convolutional network. The RegNetY design space is defined by three primary parameters: depth, initial width, and slope, and it generates a unique block width for each block in the network. Notably, RegNet models are constrained by a linear parameterization of block widths, meaning that the design space only includes models with one specific linear structure. The RegNetY architecture consists of a stem (start), a body (main part), and a head (end); within the body, multiple stages are defined, with each stage comprising several blocks. It should be noted that RegNetY employs a single type of block throughout the network, specifically the standard residual bottleneck block with group convolution.
We adopt the Transfer Learning approach, leveraging a RegNetY model pre-trained on the ImageNet dataset [3]. To be more specific, we use this pre-trained model as a backbone for training on the ABAW
Figure 1: An overview of the action unit detection model.
dataset. However, instead of freezing the entire backbone, we partially unfreeze it: the last three blocks are fine-tuned while the first block is kept frozen. This allows us to fine-tune the pre-trained model for improved performance on our target task while still benefiting from the pre-existing knowledge gained during training on the large and diverse ImageNet dataset.
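A sketch of this partial-freezing setup is shown below, assuming a timm RegNetY checkpoint; the exact RegNetY variant and the stage naming (timm names the stages s1-s4) are assumptions of the sketch, not details given in the paper.

```python
import timm

# features_only exposes the per-stage feature maps for downstream use
backbone = timm.create_model("regnety_040", pretrained=True, features_only=True)

for name, param in backbone.named_parameters():
    # freeze the stem and first stage; fine-tune the last three stages
    param.requires_grad = any(name.startswith(s) for s in ("s2", "s3", "s4"))
```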
### Frames-wise Classification
As mentioned earlier, we choose the Video Vision Transformer [1] as the classification module. Inspired by ViT [4], this model operates on a sequence of spatio-temporal tokens extracted from the video. Transformer layers model all pairwise interactions among the spatio-temporal tokens within each layer. As a result, the transformer is able to capture long-range dependencies across the entire video sequence from the initial layer itself. However, as Multi-Headed Self-Attention has quadratic complexity with respect to the number of tokens, we reduce the computational complexity by removing the first four Transformer layers. We select the Factorized Encoder variant of the ViViT model based on its optimal trade-off between inference speed and accuracy compared to the other three variants.
From an extracted video embedding \(V\in R^{B\times T\times E_{l}\times E_{h}\times E_{w}}\), where \(E_{l}\), \(E_{h}\), and \(E_{w}\) are the length, height, and width of each frame embedding, we use the Tubelet Embedding of ViViT to convert the embedding into a sequence of tokens. The tokens are then fed into Transformer layers comprising Multi-Headed Self-Attention (MSA) [25], layer normalisation (LN), and MLP blocks. Each element is formalised in the following equations:
\[y^{l}=MSA(LN(z^{l}))+z^{l} \tag{1}\]
\[z^{l+1}=MLP(LN(y^{l}))+y^{l} \tag{2}\]
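Equations (1)-(2) correspond to a standard pre-norm transformer layer; a minimal PyTorch rendering is shown below, with 8 heads and a hidden dimension of 1024 as in Section 4.2 (the MLP expansion ratio is an assumption of the sketch).

```python
import torch
import torch.nn as nn

class TransformerLayer(nn.Module):
    """One pre-norm block implementing Eq. (1) (MSA) and Eq. (2) (MLP)."""
    def __init__(self, dim=1024, num_heads=8, mlp_ratio=4.0):
        super().__init__()
        self.ln1 = nn.LayerNorm(dim)
        self.msa = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ln2 = nn.LayerNorm(dim)
        hidden = int(dim * mlp_ratio)
        self.mlp = nn.Sequential(nn.Linear(dim, hidden), nn.GELU(),
                                 nn.Linear(hidden, dim))

    def forward(self, z):          # z: (batch, num_tokens, dim)
        h = self.ln1(z)
        y = z + self.msa(h, h, h, need_weights=False)[0]  # Eq. (1)
        return y + self.mlp(self.ln2(y))                  # Eq. (2)

tokens = torch.randn(2, 256, 1024)
out = TransformerLayer()(tokens)   # shape preserved: (2, 256, 1024)
```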
## 4 Experiments and results
### Dataset
The AU task contains 541 videos with annotations for 12 AUs, namely AU1, AU2, AU4, AU6, AU7, AU10, AU12, AU15, AU23, AU24, AU25, and AU26, comprising around 2.7M frames and 438 subjects, 268 of which are male and 170 female. The dataset has been annotated in a semi-automatic procedure (involving manual and automatic annotations).
### Experiments setup
The networks were implemented with the PyTorch Deep Learning toolkit. We trained the model using SGD with a learning rate of 0.9, combined with a cosine annealing warm restarts scheduler [19]. The network is optimized with the focal loss function [18]. The number of frames in each sequence is set to 256. For the ViViT model, we set the number of heads in each attention module to 8 and the hidden dimension of the transformer to 1024. Besides, we only keep the last 8 Transformer layers of ViViT and remove the rest.
### Metrics
According to the challenge white paper [8], the macro F1 score is the official evaluation criterion for the Action Units detection task. Therefore, the performance measure is calculated as the average F1 score across all 12 AUs:
\[P_{AU}=\frac{\sum_{au}F_{1}^{au}}{12} \tag{3}\]
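A small sketch of the criterion in Equation (3), computing the binary F1 per AU and averaging over the 12 AUs:

```python
import numpy as np

def macro_f1(y_true, y_pred):
    """y_true, y_pred: (num_samples, 12) binary arrays of AU labels."""
    scores = []
    for au in range(y_true.shape[1]):
        t, p = y_true[:, au], y_pred[:, au]
        tp = np.sum((t == 1) & (p == 1))
        fp = np.sum((t == 0) & (p == 1))
        fn = np.sum((t == 1) & (p == 0))
        scores.append(2.0 * tp / max(2 * tp + fp + fn, 1))  # F1 per AU
    return float(np.mean(scores))
```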
## 5 Conclusion
In this paper, we present a Vision Transformer-based model for AU detection in the ABAW Competition. To reduce the computational burden and improve the effectiveness of our approach on small images, we propose using a CNN-based model for feature extraction instead of relying solely on long Transformer backbone layers. Our method outperforms the baseline model and achieves competitive results compared to the methods used by last year's participants.
## Acknowledgements
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF-2020R1A4A1019191) and Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF2021R111A3A04036408).
|
2304.05152 | PP-MobileSeg: Explore the Fast and Accurate Semantic Segmentation Model
on Mobile Devices | The success of transformers in computer vision has led to several attempts to
adapt them for mobile devices, but their performance remains unsatisfactory in
some real-world applications. To address this issue, we propose PP-MobileSeg, a
semantic segmentation model that achieves state-of-the-art performance on
mobile devices. PP-MobileSeg comprises three novel parts: the StrideFormer
backbone, the Aggregated Attention Module (AAM), and the Valid Interpolate
Module (VIM). The four-stage StrideFormer backbone is built with MV3 blocks and
strided SEA attention, and it is able to extract rich semantic and detailed
features with minimal parameter overhead. The AAM first filters the detailed
features through semantic feature ensemble voting and then combines them with
semantic features to enhance the semantic information. Furthermore, we proposed
VIM to upsample the downsampled feature to the resolution of the input image.
It significantly reduces model latency by only interpolating classes present in
the final prediction, which is the most significant contributor to overall
model latency. Extensive experiments show that PP-MobileSeg achieves a superior
tradeoff between accuracy, model size, and latency compared to other methods.
On the ADE20K dataset, PP-MobileSeg achieves 1.57% higher accuracy in mIoU than
SeaFormer-Base with 32.9% fewer parameters and 42.3% faster acceleration on
Qualcomm Snapdragon 855. Source codes are available at
https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.8. | Shiyu Tang, Ting Sun, Juncai Peng, Guowei Chen, Yuying Hao, Manhui Lin, Zhihong Xiao, Jiangbin You, Yi Liu | 2023-04-11T11:43:10Z | http://arxiv.org/abs/2304.05152v1 | # PP-MobileSeg: Explore the Fast and Accurate Semantic Segmentation Model on Mobile Devices
###### Abstract
The success of transformers in computer vision has led to several attempts to adapt them for mobile devices, but their performance remains unsatisfactory in some real-world applications. To address this issue, we propose PP-MobileSeg, a semantic segmentation model that achieves state-of-the-art performance on mobile devices. PP-MobileSeg comprises three novel parts: the StrideFormer backbone, the Aggregated Attention Module (AAM), and the Valid Interpolate Module (VIM). The four-stage StrideFormer backbone is built with MV3 blocks and strided SEA attention, and it is able to extract rich semantic and detailed features with minimal parameter overhead. The AAM first filters the detailed features through semantic feature ensemble voting and then combines them with semantic features to enhance the semantic information. Furthermore, we proposed VIM to upsample the downsampled feature to the resolution of the input image. It significantly reduces model latency by only interpolating classes present in the final prediction, which is the most significant contributor to overall model latency. Extensive experiments show that PP-MobileSeg achieves a superior tradeoff between accuracy, model size, and latency compared to other methods. On the ADE20K dataset, PP-MobileSeg achieves 1.57% higher accuracy in mIoU than SeaFormer-Base with 32.9% fewer parameters and 42.3% faster acceleration on Qualcomm Snapdragon 855. Source codes are available at [https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.8](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.8).
## 1 Introduction
Semantic segmentation is a computationally expensive task compared to other computer vision tasks like image classification [18] or object detection [37], as it involves predicting the class of every pixel. While there have been significant advancements in semantic segmentation on GPU devices, few studies have addressed the challenges of mobile semantic segmentation [12, 27, 33]. This lack of research impedes the practical application of semantic segmentation to mobile applications.
Recently, the surge of vision transformers (ViTs) [10] demonstrated the promising performance of transformer-based neural networks on semantic segmentation [5, 24, 31, 35]. Various works have proposed transformer-CNN hybrid architectures for lightweight neural network design, such as MobileViT [21], MobileFormer [4], and EdgeNext [20]. These hybrid architectures combine global and local information in neural networks at the lowest possible cost. However, the computational complexity of Multi-Head Self-Attention (MHSA) makes these networks hard to deploy on mobile devices.
Figure 1: We present the accuracy-latency-params analysis of our proposed PP-MobileSeg model on the ADE20K validation set. The trade-off analysis is represented as a bubble plot, where the x-axis denotes the latency and the y-axis denotes the mIoU. Models with the same color are from the same model series. Our model achieves a better accuracy-latency-params trade-off. Note that the latency is tested with the final ArgMax operator using PaddleLite on Qualcomm Snapdragon 855 CPU with a single thread and 512x512 as input shape.
Several efforts have been made to decrease the time complexity, including shifted window attention [16], efficient attention [23], external attention [29], axial attention [28], and SEA attention [27]. However, many of these techniques require complex index operations that ARM CPUs cannot support efficiently [27]. Besides latency and accuracy, memory storage is also a crucial element for mobile applications, since storage is limited on mobile devices. Therefore the fundamental question arises: _can we design a hybrid network for mobile devices with a superior trade-off between parameters, latency, and accuracy?_
In this work, we address the above question by exploring mobile segmentation architectures under model size and speed constraints, aiming for a leap forward in performance. Through an extensive search, we propose three novel modules: the four-stage backbone StrideFormer, the feature fusion block AAM, and the upsampling module VIM, as shown in Fig. 2. By combining these modules, we obtain a family of SOTA mobile semantic segmentation networks called PP-MobileSeg, which is well suited for mobile devices with a strong balance of parameters, latency, and accuracy. Our improved network design allows PP-MobileSeg-Base to achieve 40% faster inference and a 34.9% smaller model size than SeaFormer, while maintaining a competitive 1.37 higher mIoU (Tab. 1). Compared with MobileSeg-MV3, PP-MobileSeg-Tiny achieves 3.13 higher mIoU while being 45% faster and 49.5% smaller (Tab. 1). We also evaluate the performance of PP-MobileSeg on the Cityscapes dataset [6] (Tab. 2), which shows its superiority on high-resolution inputs. Although PP-MobileSeg-Base has slightly longer latency, it maintains its model size advantage while being 1.96 higher in mIoU than SeaFormer [27] on the Cityscapes dataset [6].
In summary, our contributions are as follows:
* We introduce StrideFormer, a four-stage backbone with MobileNetV3 blocks that efficiently extracts features of different receptive fields while minimizing parameter overhead. We also apply strided SEA attention [13, 27] to the outputs of the last two stages to improve the global feature representation under computation constraints.
* We propose the Aggregate Attention Module (AAM), which fuses features from the backbone through ensemble voting of enhanced semantic features and further enhances the fused feature with the semantic feature of the largest receptive field.
* To reduce the significant latency caused by the final interpolation and ArgMax operation, we design the Valid Interpolate Module (VIM) that only upsamples classes present in the final prediction during inference time. Replacing the final interpolation and ArgMax operation with VIM significantly reduces model latency.
* We combine the above modules to create a family of SOTA mobile segmentation models called PP-MobileSeg. Our extensive experiments show that PP-MobileSeg achieves an excellent balance between latency, model size, and accuracy across ADE20K and Cityscapes datasets.
## 2 Related Work
Mobile semantic segmentation is the task of adapting semantic segmentation networks to speed and model size constraints through efficient network designs.
### Semantic Segmentation
To achieve high performance in semantic segmentation, several key elements are essential, including a large receptive field to capture context [3, 36], a large resolution of features for accurate segmentation [30, 32], fusion of detail and semantic features for precise predictions [1, 22], and attention mechanisms for improving feature representation [14, 26, 31]. State-of-the-art models often combine several or even all of these elements to achieve superior performance. The primary requirement for the semantic segmentation task is that the network must be able to capture a holistic view of the scene while simultaneously preserving the image's details and semantics. Thus, it is essential to design network architectures that can efficiently and effectively integrate these elements.
### Efficient Network Designs
There are two types of efficient network architectures in the field of deep learning. The first type focuses on adding new elements to the network without introducing unwanted latency during inference. The representative one is structural reparameterization [8, 9], which approximates the multi-branch neural network block with a single branch at inference time. The second type aims to downscale the network at the expense of the model performance reduction. Designs belonging to this category include group convolution [11], channel shuffle [34], and efficient attention mechanisms [28, 29, 23].
### Mobile Semantic Segmentation
Due to the large computational complexity of semantic segmentation, there has been limited research of segmentation on mobile devices, with only a few works focusing on this area [12, 27, 33]. Among them, TopFormer enhances the token pyramid with a self-attention block and fuses it with the local feature using their proposed injection module. Further, SeaFormer boosts the model performance with an efficient SEA attention module. Both of them significantly
outperform MobileSeg and LRASPP and currently represent the state-of-the-art in mobile semantic segmentation.
## 3 Architecture
This section presents a comprehensive exploration of mobile segmentation networks designed under speed and size constraints, aiming at achieving better segmentation accuracy. Through our research, we have identified three key modules that lead to faster inference speed or smaller model size with slight performance improvements. The full architecture of PP-MobileSeg is shown in Fig. 2, which comprises four main parts: StrideFormer, Aggregate Attention Module (AAM), segmentation head, and Valid Interpolate Module (VIM). The StrideFormer takes input images and generates a feature pyramid with strided attention applied to the last two stages to incorporate global semantics. The AAM is responsible for fusing local and semantic features, which are then passed through the segmentation head to produce the segmentation mask. Finally, the upsample module VIM is used to further enhance the segmentation mask, reducing latency by only upsampling the few channels corresponding to the classes that exist in the final prediction. The following sections provide a detailed description of each of these modules.
### StrideFormer
In the StrideFormer module, we utilize a stack of MobileNetV3 [12] blocks to extract features with different receptive fields; more details on the variants of this architecture can be found in subsection 3.4. Given an image \(I\in R^{3\times H\times W}\), where \(3,H,W\) denote the channels, height, and width of the image, StrideFormer produces features \(\{F_{\times 8},F_{\times 16},F_{\times 32}\}\), downsampled 8, 16, and 32 times relative to the input resolution. One key design choice is the number of stages in the backbone, where each stage is a stack of MobileNetV3 blocks that produces one feature set \(F_{\times downsample-rate}\). Inspired by EfficientFormer [13], we find that a four-stage model has minimal parameter overhead while maintaining excellent performance compared to a five-stage model, as shown in Tab. 3; we therefore design StrideFormer with the four-stage paradigm. With \(\{F_{\times 8},F_{\times 16},F_{\times 32}\}\) generated from the four-stage backbone, we add \(M/N\) SEA attention blocks to the features from the last two stages following [27]. Because of the time complexity of self-attention on large-resolution inputs, we add a strided convolution before the SEA attention module and upsample the feature afterward. In this way, we reduce the computation cost to about 1/4 of the original implementation while enriching the features with global information.
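A minimal PyTorch-style sketch of this strided-attention pattern is given below. It is illustrative only: the paper's implementation uses SEA attention in PaddleSeg, whereas here a generic multi-head self-attention stands in, and all names and hyperparameters are ours. With a stride-2 convolution, attention operates on a quarter of the tokens before the result is upsampled and added back.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StridedAttention(nn.Module):
    """Strided conv -> self-attention on the smaller map -> upsample back.

    Attention over the 2x-downsampled map touches only 1/4 of the tokens,
    in line with the ~1/4 computation reported in the text.
    """
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.down = nn.Conv2d(channels, channels, 3, stride=2, padding=1)
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        y = self.down(x)                       # (b, c, h/2, w/2)
        tokens = y.flatten(2).transpose(1, 2)  # (b, h*w/4, c)
        t = self.norm(tokens)
        out, _ = self.attn(t, t, t)
        out = out.transpose(1, 2).reshape(b, c, h // 2, w // 2)
        # restore the resolution and inject global context into the input
        return x + F.interpolate(out, size=(h, w), mode="bilinear",
                                 align_corners=False)

print(StridedAttention(64)(torch.randn(1, 64, 32, 32)).shape)
```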
### Aggregated Attention Module
With \(\{F_{\times 8},F_{\times 16},F_{\times 32}\}\) generated from the backbone, we design an Aggregated Attention Module (AAM) to fuse the features; its structure is shown at the top right of Fig. 2. Among the generated features, \(\{F_{\times 16},F_{\times 32}\}\) have larger receptive fields and contain rich semantic information. We therefore use them as an information filter, through ensemble voting, to extract the important information in the detail feature \(F_{\times 8}\). In the filtration process, \(F_{\times 16}\) and \(F_{\times 32}\) are upsampled to the same resolution as \(F_{\times 8}\), and a sigmoid operator is applied to obtain weight coefficients. \(F_{\times 16}\) and \(F_{\times 32}\) are then multiplied, and the result is used to filter \(F_{\times 8}\). The procedure is formalized in Eq. 1.
Additionally, we observe that the feature with the richest semantics complements the filtered detail feature and is crucial for model performance, so it should be preserved as much as possible. We therefore add \(F_{\times 32}\), the feature with the largest receptive field, enhanced with the global view, to the filtered detail feature.
\[F_{fused}=Act(F_{\times 32})\times Act(F_{\times 16})\times Conv(F_{\times 8})+Conv(F_{\times 32}) \tag{1}\]
After fusion, the fused feature captures both rich spatial and semantic information, which is fundamental for segmentation performance. On top of it, we add a simple segmentation head following TopFormer [33]: a \(1\times 1\) convolution exchanges information along the channel dimension, followed by a dropout layer and a convolutional layer that produces the downsampled segmentation map.
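The following is a minimal sketch of the AAM fusion in Eq. 1, assuming sigmoid for \(Act\) and \(1\times 1\) convolutions for \(Conv\); the channel widths follow subsection 3.4 (the \(F_{\times 8}\) width is our guess), and the class is ours rather than the released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AAM(nn.Module):
    """Eq. 1: the semantic features gate the detail feature by ensemble
    voting, and the most global feature F_x32 is added back."""
    def __init__(self, c8: int, c16: int, c32: int, embed: int = 256):
        super().__init__()
        self.conv8 = nn.Conv2d(c8, embed, 1)
        self.act16 = nn.Conv2d(c16, embed, 1)
        self.act32 = nn.Conv2d(c32, embed, 1)
        self.conv32 = nn.Conv2d(c32, embed, 1)

    def forward(self, f8, f16, f32):
        size = f8.shape[2:]
        def up(t):  # bring semantic features to the detail resolution
            return F.interpolate(t, size=size, mode="bilinear",
                                 align_corners=False)
        vote = torch.sigmoid(up(self.act32(f32))) * torch.sigmoid(up(self.act16(f16)))
        return vote * self.conv8(f8) + up(self.conv32(f32))

f8 = torch.randn(1, 64, 64, 64)
f16 = torch.randn(1, 128, 32, 32)
f32 = torch.randn(1, 192, 16, 16)
print(AAM(64, 128, 192)(f8, f16, f32).shape)  # (1, 256, 64, 64)
```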
### Valid Interpolate Module
Under latency constraints, we profiled the model and found that the final interpolation and ArgMax operations take up more than 50% of the overall latency. We therefore designed the Valid Interpolate Module (VIM) to replace them, greatly reducing model latency. The latency profiles of SeaFormer-Base and PP-MobileSeg-Base are shown in Fig. 3, and detailed statistics after adding VIM are given in Tab. 3.
VIM is based on the observation that the number of classes appearing in the prediction of a well-trained model is usually far smaller than the total number of classes in the dataset, especially when that number is large, so it is unnecessary to consider every class in the interpolation and ArgMax process. The structure of VIM is shown at the bottom right of Fig. 2 and consists of three main steps. First, ArgMax and Unique operations are applied to the downsampled segmentation map to identify the necessary channels. Then, an index-select operation keeps only those valid channels. Finally, the slimmed feature is interpolated to the original resolution to produce the final segmentation map. With VIM in place of the full interpolation and ArgMax operations, we obtain the final segmentation map at a much lower latency cost.
VIM greatly reduces the number of channels involved in the interpolation and ArgMax operations, leading to a significant decrease in model latency. However, it is only applicable when the number of classes is large enough for channel redundancy to exist. We therefore set a class threshold of 30; VIM is disabled when the number of classes falls below this threshold.
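A minimal PyTorch sketch of the three VIM steps follows; the released implementation targets PaddleLite ops, and the function name is ours, while the 30-class fallback mirrors the description above.

```python
import torch
import torch.nn.functional as F

def valid_interpolate(logits: torch.Tensor, out_size,
                      class_threshold: int = 30) -> torch.Tensor:
    """VIM sketch: upsample only the channels of classes present in the
    coarse ArgMax. logits: (1, num_classes, h, w), downsampled map."""
    if logits.shape[1] < class_threshold:
        # few classes: no channel redundancy, use plain interpolate+ArgMax
        full = F.interpolate(logits, size=out_size, mode="bilinear",
                             align_corners=False)
        return full.argmax(dim=1)
    coarse = logits.argmax(dim=1)        # step 1: ArgMax ...
    valid = torch.unique(coarse)         # ... and Unique find needed channels
    slim = torch.index_select(logits, 1, valid)    # step 2: index select
    slim = F.interpolate(slim, size=out_size, mode="bilinear",
                         align_corners=False)      # step 3: slim upsample
    return valid[slim.argmax(dim=1)]     # map back to original class ids

mask = valid_interpolate(torch.randn(1, 150, 64, 64), (512, 512))
print(mask.shape)  # torch.Size([1, 512, 512])
```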
### Architecture Variants
We provide two variants of PP-MobileSeg to meet different complexity requirements: PP-MobileSeg-Base and PP-MobileSeg-Tiny. The size and latency of the two variants with a 512x512 input are shown in Tab. 1. The base and tiny models have the same number of MobileNetV3 layers, but the base model is wider and generates features with more channels to enrich the representation. The attention blocks also differ: PP-MobileSeg-Base uses 8 heads in the SEA attention module with \(M/N=3/3\) attention blocks, while PP-MobileSeg-Tiny uses 4 heads with \(M/N=2/2\). The feature channels of the last two stages are 128 and 192 for PP-MobileSeg-Base and 64 and 128 for PP-MobileSeg-Tiny. The feature fusion settings are the same for both models, with the AAM embedding dimension set to 256. For more details about the network architecture, please refer to the source code.
| Method | Backbone | mIoU (%) | Latency (ms) | Parameters (M) |
| --- | --- | --- | --- | --- |
| LR-ASPP [12] | MobileNetV3-large-x1 | 33.10 | 730.9 | 3.20 |
| MobileSeg [15] | MobileNetV3-large-x1 | 33.26 | 391.5 | 2.85 |
| TopFormer-Tiny [33] | TopTransformer-Tiny | 32.46 | 490.3 | **1.41** |
| SeaFormer-Tiny [27] | SeaFormer-Tiny | 35.00 | 459.0 | 1.61 |
| **PP-MobileSeg-Tiny** | StrideFormer-Tiny | **36.39** | **215.3** | 1.44 |
| TopFormer-Base [33] | TopTransformer-Base | 37.80 | 480.6 | **5.13** |
| SeaFormer-Base [27] | SeaFormer-Base | 40.20 | 465.4 | 8.64 |
| **PP-MobileSeg-Base** | StrideFormer-Base | **41.57** | **265.5** | 5.71 |

Table 1: Results on the ADE20K validation set. Latency is measured with PaddleLite, including the final ArgMax operator, on a Qualcomm Snapdragon 855 CPU with a 512x512 input shape. All results are evaluated with a single thread, and mIoU is reported with single-scale inference.
Figure 2: The architecture of the PP-MobileSeg network. The structure of AAM is at the top right of the figure; the difference between the normal interpolation module and VIM is shown at the bottom right. By selecting only the classes that exist in the final prediction, VIM significantly reduces latency, since only a few channels are upsampled.
## 4 Experiments
In this section, we first present the datasets used for model training and evaluation and provide implementation details for training and inference. Second, we compare the proposed method with the previous state of the art in terms of accuracy, inference speed, and model size. Finally, we perform an ablation study to demonstrate the effectiveness of our proposed modules.
### Experiments Setup
#### 4.1.1 Datasets
We perform our experiments on the ADE20K [36] and Cityscapes [6] datasets, using the mean of class-wise intersection over union (mIoU) as the evaluation metric. **ADE20K** is a scene-parsing dataset containing 25K images and 150 fine-grained semantic concepts, split into 20K/2K/3K images for training, validation, and testing. **Cityscapes** is a large-scale semantic segmentation dataset consisting of 5,000 finely annotated images: 2,975 for training, 500 for validation, and 1,525 for test-dev. The image resolution of 2048x1024 poses a great challenge for models deployed on mobile devices.
#### 4.1.2 Implementation Details
Our implementation is built upon PaddleSeg [15] and Paddle [19].
**Training Settings** Our backbone is pre-trained on ImageNet1K [7] to acquire common knowledge about images. We set the batch size to 32 and train the model for 80K iterations. During training, we use cross-entropy loss and Lovasz loss with a loss ratio of 4:1 [2]. We use the exponential moving average method to average model parameters across training iterations, with a moving-average coefficient of 0.999 [25]. The learning rate is 0.006 with the AdamW [17] optimizer and a weight decay of 0.01. The learning-rate schedule combines a warmup schedule with a poly schedule of factor 1.0: the learning rate rises from 1e-6 over the first 1500 iterations and then decreases linearly. For ADE20K, we follow the data augmentation strategy of TopFormer and SeaFormer [27, 33], including random scaling in the range [0.5, 2.0], cropping to the given size, random horizontal flips, and random distortion. For Cityscapes, the augmentation is the same except that we crop images to 1024x512 rather than the 512x512 used for ADE20K, and the random scale ranges over [0.125, 1.5]. Our model is trained on two Tesla V100 GPUs. We report single-scale results on the validation sets for comparison with other methods.
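For concreteness, the warmup-plus-poly schedule can be written as below (a poly schedule with factor 1.0 decays linearly); the numbers come from the text, while the helper itself is our illustrative sketch, not part of PaddleSeg.

```python
def lr_at(step: int, total_steps: int = 80_000, base_lr: float = 0.006,
          warmup_steps: int = 1500, warmup_start: float = 1e-6,
          power: float = 1.0) -> float:
    """Learning rate at a given iteration: linear warmup, then poly decay."""
    if step < warmup_steps:
        frac = step / warmup_steps
        return warmup_start + frac * (base_lr - warmup_start)
    frac = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * (1.0 - frac) ** power

for s in (0, 1500, 40_000, 80_000):
    print(s, round(lr_at(s), 6))  # 1e-06, 0.006, ~0.003, 0.0
```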
**Inference Settings** During inference, we set the input shape to \(512\times 512\) for ADE20K and \(512\times 1024\) for Cityscapes. To test latency, the full-precision PP-MobileSeg models are exported to static models, and latency is measured on a Qualcomm Snapdragon 855 with PaddleLite on a single thread. During inference, we use VIM in place of the interpolation and ArgMax operations. Note that image preprocessing, including resizing and normalization, is completed before the inference process, so the reported time covers only model inference. Because the latency of VIM depends on the number of classes predicted in the image, we evaluate latency on an image from the ADE20K validation set with the average number of categories, for a fair comparison.
### Comparison with the State of the Art
**ADE20K Results** Table 1 compares PP-MobileSeg with previous mobile semantic segmentation models, including both lightweight vision transformers and efficient CNNs, reporting parameters, latency, and mIoU. PP-MobileSeg outperforms these SOTA models not only on latency but also on model size, while maintaining a competitive edge in accuracy. Compared with MobileSeg and LR-ASPP, both of which use MobileNetV3 as their backbone, PP-MobileSeg-Tiny is more than 3.0 mIoU higher while being 49.47% smaller and 45% faster than MobileSeg, and 55% smaller and 70.5% faster than LR-ASPP. Compared with the SOTA vision-transformer-based models TopFormer and SeaFormer, which use convolution-based global self-attention as their semantics extractor, PP-MobileSeg achieves higher segmentation accuracy with lower latency and a smaller model size: PP-MobileSeg-Base is about the same size as or 34.9% smaller than its counterparts, 42.9% to 44.7% faster, and 1.37 to 3.77 mIoU higher. These results demonstrate the effectiveness of PP-MobileSeg in improving feature representation.
**Cityscapes Results** As Table 2 shows, PP-MobileSeg-Tiny achieves better performance than SeaFormer-Small in all aspects: accuracy, latency, and parameters. Furthermore, PP-MobileSeg-Base achieves significantly better accuracy with comparable latency and a smaller model size. These results demonstrate that PP-MobileSeg maintains its excellent balance among accuracy, model size, and speed even under high-resolution inputs.

Figure 3: Latency profile comparison between SeaFormer and PP-MobileSeg.
### Ablation Study
We conduct an ablation study to assess the influence of the three proposed modules, adding them to the baseline one by one; the results are shown in Table 3.
**VIM**: As mentioned above, VIM replaces the interpolation and ArgMax operations to accelerate inference. As the profile comparison shows (Fig. 3), the share of overall latency consumed by the final interpolation and ArgMax stage drops greatly, from 76.32% to 48.71%, with the application of VIM, and the results in Table 3 show that model latency decreases by 49.5% after adding VIM. These experiments demonstrate VIM's exceptional acceleration capability on datasets with a large number of classes.
**StrideFormer**: The four-stage design of StrideFormer reduces parameter overhead by a notable 32.19%. The experiments also show a 0.78% increase in accuracy, which we attribute to the enhanced backbone.
**AAM**: AAM raises accuracy by 0.59% while only slightly increasing latency and model size. To gain insight into its design, we split the fusion module into two branches, the ensemble vote and the final semantics, as shown in Table 4. The results reveal the significance of both branches, especially the final semantics: without it, accuracy drops by 0.45%.
## 5 Conclusion
In this paper, we investigated design options for hybrid vision backbones and addressed the latency bottleneck in mobile semantic segmentation networks. After thorough exploration, we identified mobile-friendly design choices and proposed a new family of mobile semantic segmentation networks, PP-MobileSeg, combining transformer blocks and CNNs. With its carefully designed backbone, fusion module, and interpolation module, PP-MobileSeg achieves a SOTA balance between model size, speed, and accuracy on ARM-based devices.
|
2302.08920 | A tale of two tails: 130 years of growth-at-risk | We extend the existing growth-at-risk (GaR) literature by examining a long
time period of 130 years in a time-varying parameter regression model. We
identify several important insights for policymakers. First, both the level as
well as the determinants of GaR vary significantly over time. Second, the
stability of upside risks to GDP growth reported in earlier research is
specific to the period known as the Great Moderation, with the distribution of
risks being more balanced before the 1970s. Third, the distribution of GDP
growth has significantly narrowed since the end of the Bretton Woods system.
Fourth, financial stress is always linked to higher downside risks, but it does
not affect upside risks. Finally, other risk indicators, such as credit growth
and house prices, not only drive downside risks, but also contribute to
increased upside risks during boom periods. In this context, the paper also
adds to the financial cycle literature by completing the picture of drivers
(and risks) for both booms and recessions over time. | Martin Gächter, Elias Hasler, Florian Huber | 2023-02-17T14:50:48Z | http://arxiv.org/abs/2302.08920v1 | # A tale of two tails: 130 years of growth-at-risk+
###### Abstract
We extend the existing growth-at-risk (GaR) literature by examining a long time period of 130 years in a time-varying parameter regression model. We identify several important insights for policymakers. First, both the level as well as the determinants of GaR vary significantly over time. Second, the stability of upside risks to GDP growth reported in earlier research is specific to the period known as the Great Moderation, with the distribution of risks being more balanced before the 1970s. Third, the distribution of GDP growth has significantly narrowed since the end of the Bretton Woods system. Fourth, financial stress is always linked to higher downside risks, but it does not affect upside risks. Finally, other risk indicators, such as credit growth and house prices, not only drive downside risks, but also contribute to increased upside risks during boom periods. In this context, the paper also adds to the financial cycle literature by completing the picture of drivers (and risks) for both booms and recessions over time.
**JEL classification:** C11, C53, E32, E44, G01, N10
**Keywords:** Growth-at-risk; financial crises, business cycles; tail forecasting
Introduction
The empirical growth-at-risk (GaR) concept introduced by Adrian et al. (2019) suggests that deteriorating financial conditions are associated with increased downside risks to economic growth. While standard forecasts focus on the expected value of future GDP growth, the GaR approach places a particular emphasis on the probability and magnitude of potentially adverse outcomes. Similar to the value-at-risk concept in finance, the GaR of an economy for a given time horizon is defined as a specific low quantile of the distribution of the projected GDP growth rate for the respective horizon. In this context, Adrian et al. (2019) show that the left tail of the distribution of (projected) GDP growth is less stable and more affected by financial conditions than the upper quantiles of the distribution. Against this background, the GaR concept is a useful and intuitive policy tool to identify and quantify systemic risk and has therefore gained traction among policy-makers in recent years.
In the last few years, the GaR idea has been extended in various directions, e.g. adding various risk indicators from the financial cycle literature (see, for instance, Aikman et al., 2019) or examining the term structure of GaR (Adrian et al., 2020). In this context, the GaR framework is used as a composite indicator of systemic risk at the country level and can therefore also signal when to activate various policy measures. Consequently, recent research has also taken into account the impact of various policy instruments on GaR (e.g. Galan, 2020). Remarkably, this entire strand of literature has relied on data samples reaching back only to the 1970s, without taking into account earlier developments in the last century.1 This is surprising insofar as the main objective of macroprudential policy is the prevention of financial crises (or, alternatively, the reduction of their costs if they occur). Since financial crises appear infrequently, as shown by the widely cited financial cycle literature (Schularick and Taylor, 2012; Jorda et al., 2017), a long time series of the underlying drivers is crucial to capture the tail risks of the variables of interest.
Footnote 1: One notable exception that relies on copula models is Coe and Vahey (2020). As opposed to this paper, they focus on the role of financial conditions for tail risks to economic growth.
Using long time series, however, raises additional econometric issues. For instance, these time series might be subject to structural breaks in the conditional mean. Such a behaviour might reflect changes in key relationships between the conditional distribution of output growth or long-run unconditional means. In addition, the volatility of shocks is often found to be time-varying (for predictive evidence of this claim, see, e.g., Clark, 2011). Standard quantile regressions (QRs) have difficulties matching both features of historical data. In this paper,
we build on recent papers showing that simple heteroskedastic models perform as well as (or even better than) QRs (Carriero et al., 2020; Brownlees and Souza, 2021) and propose using a time-varying parameter stochastic volatility regression model (TVP-SV) to capture changes in the relevance of different potential drivers of upside and downside risks to output growth. We use this framework to analyze an extended data set covering 130 years of macroeconomic data and show that it works well for predicting downside risks to GDP. After providing evidence that such a model is competitive with standard QRs, we back out the contributions of individual variables over time using state-of-the-art techniques from statistics and machine learning (see, e.g., Crawford et al., 2019; Woody et al., 2021).2 Thus, by applying novel methods to historical data, we are able to draw important policy implications for today's policy-makers.
Footnote 2: Clark et al. (2022) use a similar approach to summarize the effect of different predictors on inflation within a nonparametric model.
Our findings allow us to draw a whole range of relevant policy implications and to put previous findings into historical context. First, we show that the stability of upside risks to GDP growth reported in previous research is specific to the period since the start of the Great Moderation, with a more or less symmetric distribution of risks up to the 1970s. Second, both upside and downside risks to GDP have decreased substantially over time, remaining at relatively low levels since the beginning of the Great Moderation. Third, we show that financial stress is always associated with higher downside risks, although the effect varies in magnitude over time, while financial stress does not affect upside risks. Fourth, while the effects of credit growth and house price growth vary over time, the similarities between the effects of credit growth during the Great Depression and the Great Financial Crisis are remarkable. Fifth, we find that the large negative impact of house prices on growth risks during the Global Financial Crisis was unprecedented. Finally, our findings suggest that credit growth and house prices not only drive downside risks but also increase upside risks, i.e. they lead to a wider distribution of (expected) GDP growth. Against this background, our findings also add to the financial cycle literature by looking at the whole distribution of (expected) GDP, thus completing the picture of drivers (and risks) for booms and recessions over time. In this context, a better understanding of how individual variables and risk indicators influence both upside and downside risks to GDP is crucial for the calibration and timing of macroprudential (and also monetary) policy measures.
The remainder of the paper is structured as follows. The next section describes the dataset and econometric methods. Section 3 contains the empirical findings: an out-of-sample tail forecasting exercise, quantitative evidence on the predictive distributions of GDP growth, and a discussion of the key drivers of GaR. Section 3.5 puts our empirical findings in context and draws relevant policy conclusions, while the final section concludes the paper.
## 2 Data and Methods
### Data
Our analysis is based on a newly constructed data set stretching from 1893Q1 to 2016Q4. Using three different sources, our data includes annualised real GDP growth, a financial stress indicator, the 3-year average growth rate of the credit-to-GDP ratio, and the 3-year average growth rate of real house prices. The real GDP time series is taken from D'Agostino and Surico (2012) and updated using FRED data. The dependent variable is constructed by using the logarithm of real GDP, \(Y_{t}\), and converting it into the annualised growth rate \(h\) periods ahead, \(y_{t+h}=\frac{Y_{t+h}-Y_{t}}{h/4}\).
In line with previous literature, we include a measure of financial stress as an explanatory variable. However, unlike other financial stress indicators, which are typically based on financial data and start in the 1970s, we use the historical newspaper-based financial stress indicator built by Puttmann (2018). As Puttmann (2018) notes, the index exhibits a long-run trend; hence we detrend the time series using a slow-moving Hodrick-Prescott filter with a \(\lambda\) of \(5\times 10^{6}\).
While financial stress measures are highly relevant for short-term GaR estimations - at least since the 1970s - credit-to-GDP growth and house prices are frequently used as signals of medium-term financial imbalances (Aikman et al., 2019; Galan, 2020). Unfortunately, consistent quarterly data for credit (loans to the non-financial private sector) and house prices are not available for such a long time horizon. Therefore, we use data from Jorda et al. (2017) and convert the annual data to quarterly frequency using the quadratic spline of Forsythe et al. (1977). Subsequently, we use annualized three-year averages of the log differences of the credit-to-GDP ratio as a measure of credit growth, and annualized three-year averages of the log differences of real house prices as a measure of house price growth.
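A minimal pandas/statsmodels sketch of this variable construction is shown below; the column names and the helper are ours, and the annual-to-quarterly spline interpolation of the Jorda et al. (2017) series is assumed to have been done beforehand.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.filters.hp_filter import hpfilter

def build_dataset(gdp: pd.Series, stress: pd.Series,
                  credit_to_gdp: pd.Series, house_prices: pd.Series,
                  h: int = 4) -> pd.DataFrame:
    """Construct y_{t+h} and the three risk indicators (quarterly data)."""
    log_y = np.log(gdp)
    # annualised h-quarter-ahead growth: (Y_{t+h} - Y_t) / (h/4)
    y_ahead = (log_y.shift(-h) - log_y) / (h / 4)
    # detrend the newspaper-based stress index with a slow-moving HP filter
    stress_cycle, _trend = hpfilter(stress, lamb=5e6)
    # annualised 3-year (12-quarter) average log-difference growth rates
    d_credit = np.log(credit_to_gdp).diff().rolling(12).mean() * 4
    d_house = np.log(house_prices).diff().rolling(12).mean() * 4
    return pd.DataFrame({"y_ahead": y_ahead, "stress": stress_cycle,
                         "credit": d_credit, "house": d_house}).dropna()
```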
### Growth at risk through the lens of TVP regressions
Downside risks are typically analyzed through QRs (see Adrian et al., 2019). However, one key shortcoming of QRs is that they are not able to capture structural breaks in the regression coefficients, a feature that is crucial given the length of the time series we analyze. A natural
way of capturing time-variation in the parameter is through time-varying parameter (TVP) regression models. These models assume that the coefficients evolve smoothly over time and thus capture changes in transmission mechanisms but, conditional on appropriate modeling assumptions, also allow for rapid movements in the underlying parameters. To capture changing volatilities of the shocks, we also allow for heteroskedasticity in the regression model through a standard stochastic volatility specification. Our simple, yet flexible model enables us to capture differences in the relations between the determinants of GaR but also allows for situations where large, unobserved shocks are the main drivers of tail risks.
In its general form, we consider predictive equations with drifting parameters that take the following form:
\[y_{t+h}=\mathbf{\beta}_{t+h}^{\prime}\mathbf{x}_{t}+\varepsilon_{t+h},\quad\varepsilon_ {t+h}\sim\mathcal{N}(0,\sigma_{t+h}^{2}) \tag{1}\]
where \(\mathbf{\beta}_{t+h}\) is a vector of TVPs which link \(y_{t+h}\) to our set of \(K\) macro-financial covariates in \(\mathbf{x}_{t}\). We follow much of the literature (see e.g. Primiceri, 2005; Cogley and Sargent, 2005) and assume that these evolve according to random walk processes. Moreover, the logarithm of the error variance is assumed to follow an AR(1) process. These assumptions give rise to a system of state evolution equations:
\[\mathbf{\beta}_{t} =\mathbf{\beta}_{t-1}+\mathbf{\eta}_{t},\quad\mathbf{\eta}_{t}\sim\mathcal{N }(\mathbf{0}_{K},\mathbf{V}_{\beta}),\] \[\log\sigma_{t}^{2} =\mu_{\sigma}+\rho_{\sigma}(\log\sigma_{t-1}^{2}-\mu_{\sigma})+w _{t},\quad w_{t}\sim\mathcal{N}(0,\vartheta^{2}),\] \[\log\sigma_{0}^{2} \sim\mathcal{N}\left(\mu_{\sigma},\frac{\vartheta^{2}}{1-\rho_{ \sigma}^{2}}\right)\]
where \(\mathbf{V}_{\beta}=\text{diag}(v_{1}^{2},\ldots,v_{K}^{2})\) is a diagonal matrix with variances \(v_{j}^{2}\). These variances control the amount of time-variation in the regression coefficients. If \(v_{j}^{2}=0\), \(\beta_{jt}\), the \(j^{th}\) element of \(\mathbf{\beta}_{t}\), would be constant over time since \(\beta_{jt}=\beta_{jt-1}\) for all \(t\). Hence, the corresponding effect of \(x_{jt}\) on \(y_{t+h}\) is time-invariant. By contrast, setting \(v_{j}^{2}\) to a large value implies substantial variation in the corresponding coefficient, giving rise to overfitting concerns.
The coefficients associated with the law of motion of the log-volatilities are the long-run unconditional mean \(\mu_{\sigma}\), the persistence parameter \(\rho_{\sigma}\) and the innovation variance \(\vartheta^{2}\). If \(\rho_{\sigma}\) is close to one, the corresponding volatility estimate will be smooth. The variance parameter \(\vartheta^{2}\) controls the amount of time variation in the error variances.
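To make the state equations concrete, the NumPy sketch below simulates data from this law of motion with illustrative hyperparameter values; it is a toy generator for intuition, not the estimation code (estimation relies on the Bayesian machinery described next).

```python
import numpy as np

def simulate_tvp_sv(T=200, K=3, v=0.05, mu_sig=-1.0, rho_sig=0.95,
                    theta=0.2, seed=1):
    """Draw (y, X, beta, sigma2) from random-walk TVPs and AR(1) log-vol."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(T, K))
    beta = np.cumsum(rng.normal(scale=v, size=(T, K)), axis=0)  # random walks
    h = np.empty(T)
    h[0] = rng.normal(mu_sig, theta / np.sqrt(1 - rho_sig**2))  # stationary start
    for t in range(1, T):
        h[t] = mu_sig + rho_sig * (h[t - 1] - mu_sig) + rng.normal(scale=theta)
    sigma2 = np.exp(h)
    y = (X * beta).sum(axis=1) + rng.normal(scale=np.sqrt(sigma2))
    return y, X, beta, sigma2

y, X, beta, sigma2 = simulate_tvp_sv()
```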
Deciding on whether we need time variation in the parameters is a nonstandard statistical problem and the Bayesian literature offers several solutions. In this paper, we use shrinkage
priors to decide whether coefficients are constant or time-varying. In particular, we use the triple Gamma shrinkage prior (Cadonna et al., 2020) as implemented in the R package shrinkTVP (Knaus et al., 2021). More details on the priors and the posterior simulator are provided in Appendix A.
In this paper, we will take a predictive stance and consider the predictive distribution of the model in (1). The predictive density is given by:
\[p(y_{T+h}|Data_{1:T})=\int p(y_{T+h}|\mathbf{\Xi},Data_{1:T})p(\mathbf{\Xi}|Data_{1:T})d\mathbf{\Xi}, \tag{2}\]
where \(Data_{1:T}\) denotes the available information up to time \(T\) and \(\mathbf{\Xi}\) is a generic object that collects the coefficients and latent states. The predictive density is not available in closed form and is obtained through simulation-based techniques using the output of the MCMC sampler.
Notice that \(p(y_{T+h}|\mathbf{\Xi},Data_{1:T})\) is Gaussian:

\[y_{T+h}|\mathbf{\Xi},Data_{1:T}\sim\mathcal{N}(\mathbf{\beta}^{\prime}_{T+h}\mathbf{x}_{T},\sigma_{T+h}^{2}),\]
and thus symmetric. However, once we integrate over \(\mathbf{\Xi}\) the corresponding predictive distribution takes a non-standard form and accommodates features such as skewness, heavy tails and downside asymmetries as reported in, e.g., Adrian et al. (2019). In addition, the fact that the parameters vary over time allows us to investigate whether different elements in \(\mathbf{x}_{t}\) vary in importance for explaining growth at risk over time.
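Operationally, Eq. (2) is approximated by Monte Carlo: for each retained draw of the states, one simulates a Gaussian \(y_{T+h}\) and then reads off empirical quantiles of the pooled draws. A minimal sketch, assuming arrays of posterior draws (e.g. exported from shrinkTVP):

```python
import numpy as np

def predictive_quantiles(beta_draws, sigma2_draws, x_T,
                         probs=(0.05, 0.5, 0.95), seed=0):
    """beta_draws: (n_draws, K) draws of beta_{T+h};
    sigma2_draws: (n_draws,) draws of sigma^2_{T+h}; x_T: (K,).
    Returns empirical quantiles of the simulated predictive density."""
    rng = np.random.default_rng(seed)
    means = beta_draws @ x_T
    y_draws = rng.normal(means, np.sqrt(sigma2_draws))
    return np.quantile(y_draws, probs)

# GaR at the 5% level is the first entry of the returned array.
```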
## 3 Empirical findings
### Model evaluation and features of the predictive densities
In a first step, we evaluate the out-of-sample forecasting performance of different TVP and quantile regression models. For both the quantile regression and the TVP model, we estimate two specifications: _(i)_ the baseline model, including a constant, the financial stress indicator, and lagged GDP growth as predictors (we use the abbreviations QR and TVP for these models), and _(ii)_ the extended model, which additionally includes credit growth and house price growth as regressors (hereafter QR+ and TVP+).
We evaluate the accuracy of the predicted downside risks with the help of quantile scores (see e.g. Giacomini and Komunjer, 2005; Brownlees and Souza, 2021). Our focus is on performance in the left tail; hence, we consider the quantile score at the 5 percent quantile.
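For reference, the quantile score at level \(\tau\) is the average pinball loss, computed as in the minimal sketch below (the function name is ours).

```python
import numpy as np

def quantile_score(y_true, y_quantile, tau=0.05):
    """Average pinball loss at quantile tau; lower is better."""
    err = np.asarray(y_true) - np.asarray(y_quantile)
    return np.mean(np.maximum(tau * err, (tau - 1.0) * err))

# Relative scores as reported in Table 1:
# quantile_score(y, q_model) / quantile_score(y, q_qr_benchmark)
```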
Table 1 reports the quantile scores relative to the QR model for the pre- and post-World War II (WWII) subsamples and the forecasting horizons \(h=1\) and \(h=4\).3 In general, we find that TVP models perform particularly well during the post-WWII period, especially at the one-step-ahead horizon. For both periods and horizons considered, the TVP models produce quantile scores superior to those obtained from the extended quantile regression. However, it is worth noting that the small QR model produces slightly more precise tail forecasts during the pre-WWII period.
Footnote 3: We need to split the sample because the quantile regression models cannot adequately handle the large drop in GDP volatility post-WWII. Furthermore, Amir-Ahmadi et al. (2016) show that correlations between variables, forecasts and other statistics change considerably at certain points in time when using long time series, making an a priori choice of subsamples hard to defend. Moreover, Gachter et al. (2023) show that the average downside risks and the magnitude of the effects of financial risk indicators depend on the structural characteristics of a country. These characteristics change over time and should therefore be taken into account when not using a TVP approach.
Once we focus on four-quarter-ahead GDP growth forecasts, this pattern becomes slightly less pronounced. In this case, both quantile regressions (QR and QR+) improve upon the TVP models in the period prior to WWII. When we focus on post-WWII data, the TVP regressions again outperform the simple quantile regression model by appreciable margins. The reason for this rather weak performance in the pre-WWII period is that the QRs produce predictive densities with wide credible intervals. This helps in periods characterized by sharp breaks in GDP growth but harms forecasting accuracy in tranquil times. Since the pre-WWII period features several crises, a model which produces wide forecast intervals at those specific points in time yields favorable overall tail forecasts.
This brief discussion has shown that TVP models can outperform quantile regressions, especially at the one-quarter-ahead horizon, but also for multi-step-ahead forecasts on post-WWII data. Since we consider two variants of the models (the baseline and the extended versions), we can also analyze whether including credit and house price growth pays off in more precise tail forecasts. The results in Table 1 suggest that for one-quarter-ahead tail forecasts, additional information does not translate into more precise predictions for the pre-WWII sample
| Horizon | Pre WWII: TVP | Pre WWII: TVP+ | Pre WWII: QR+ | Post WWII: TVP | Post WWII: TVP+ | Post WWII: QR+ |
| --- | --- | --- | --- | --- | --- | --- |
| \(h=1\) | 1.063 | 1.074 | 1.229 | 0.856 | 0.853 | 1.010 |
| \(h=4\) | 1.887 | 1.715 | 1.223 | 0.937 | 0.886 | 1.117 |

Table 1: Out-of-sample model evaluation. _Note:_ This table reports the out-of-sample model evaluation for forecast horizons of 1 and 4 quarters. Quantile scores are reported relative to the QR model.
and only slightly improves the predictive fit over the post-WWII period. This pattern reverses when our interest is in four-step-ahead forecasts: in that case, using more information yields more precise forecasts (which are still inferior to the QR benchmark predictions pre-WWII) for both hold-out periods considered. This finding is most likely driven by the fact that one-quarter-ahead predictions are dominated by high-frequency shocks, which are notoriously difficult to predict, whereas for longer-run forecasts short-lived trends become more important.
In sum, our small forecasting exercise shows that TVP regressions are capable of producing competitive tail forecasts without explicitly targeting the quantile under scrutiny. This indicates that the increased flexibility provided by TVP models (i.e. allowing for drifts in \(\mathbf{\beta}_{t}\) and changing error variances \(\sigma_{t}^{2}\)) enables us to adequately model the distribution of GDP growth over long samples.
Next, we examine the characteristics of the predictive densities. Figure 1 presents annualized one- and four-quarter-ahead GDP growth along with the predicted lower (\(5^{th}\) percentile) and upper (\(95^{th}\) percentile) bounds. The results for the post-1970s period are consistent with previous research, which shows that lower bounds vary significantly over time while upper bounds are relatively stable (see, for example, Adrian et al., 2019; Aikman et al., 2019). However, this pattern appears to have emerged only since the start of the Great Moderation in the 1980s; prior to that, upper bounds were just as volatile as lower bounds. Table 2 supports this observation by showing the standard deviation of up- and downside risks over four different periods. The table also highlights the significant reduction in overall tail volatility after WWII and the even lower variation in the tails since the Great Moderation. This is not surprising, as the Great Depression led to a change in policy-making, with a greater focus on stabilizing the business cycle and reducing volatility.
| Horizon | Upside: pre WWI | Upside: Interwar | Upside: pre GM | Upside: since GM | Downside: pre WWI | Downside: Interwar | Downside: pre GM | Downside: since GM |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| \(h=1\) | 11.03 | 11.74 | 3.09 | 1.82 | 12.78 | 11.62 | 2.87 | 2.60 |
| \(h=4\) | 11.96 | 11.66 | 3.62 | 2.22 | 7.47 | 9.29 | 3.47 | 2.63 |

Table 2: Standard deviation of up- and downside risks. _Note:_ This table reports the standard deviation of the predicted \(5^{th}\) and \(95^{th}\) percentiles. The sample is split into four periods: pre WWI, the interwar era, before the Great Moderation (pre GM) and since the Great Moderation (since GM). After both world wars, two years are left out so that wartime effects are not captured in the standard deviations.
Figure 1: Time series evolution of the predicted tail risks
### Decomposing up- and downside risks
Next, we focus on the drivers of tail risks over time. Our TVP model with stochastic volatility can handle asymmetries in up- and downside risks, which is a necessary feature for this type of analysis. Asymmetries in tail risks imply asymmetries in the unconditional distributions but do not necessarily require asymmetries in the conditional predictive distributions (Carriero et al., 2020). Hence, focusing only on the time series evolution of the coefficients (see Figures A.3 and A.6) does not provide a complete picture of how tail risks arise. To shed light on which indicators drive the predictive quantiles of GDP growth, we rely on linear posterior summaries (see, e.g., Woody et al., 2021). This works as follows. Based on the \(h\)-step-ahead predictive distribution of the model (see Eq. (2)), we compute a sequence of quantiles \(\mathcal{Q}_{t+h,p}\). Each of these estimated quantiles is then used as the dependent variable in the following linear regression model:
\[\mathcal{Q}_{t+h,p}=\boldsymbol{\alpha}_{p}^{\prime}\boldsymbol{x}_{t}+e_{t},\]
where \(\boldsymbol{\alpha}_{p}\) is a quantile-specific set of linear coefficients and \(e_{t}\) is a Gaussian shock with constant error variance. The OLS estimator \(\hat{\boldsymbol{\alpha}}_{p}\) then provides a linear quantile-specific approximation to the predictive density of the flexible TVP regression. The key advantage of this approach is improved interpretability: it allows for a straightforward decomposition of the driving forces behind right- and left-tail forecasts of GDP growth.
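A minimal sketch of this rolling linear posterior summary follows, assuming a predictor matrix \(X\) whose first column is a constant and a vector \(Q\) of model-implied quantiles at a fixed \(p\); a ten-year window on quarterly data is 40 observations.

```python
import numpy as np

def rolling_posterior_summary(Q: np.ndarray, X: np.ndarray,
                              window: int = 40) -> np.ndarray:
    """OLS of the quantile path Q_t on x_t over a rolling window.

    The fitted alpha_p decomposes the predicted quantile into additive
    contributions alpha_{p,j} * x_{jt} of each risk indicator."""
    T, K = X.shape
    coefs = np.full((T, K), np.nan)
    for t in range(window, T + 1):
        alpha, *_ = np.linalg.lstsq(X[t - window:t], Q[t - window:t],
                                    rcond=None)
        coefs[t - 1] = alpha
    return coefs
```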
Figures 2 and 3 illustrate the decomposed tail risks in Panels A and B. The dashed black line shows the level of tail risks as predicted by our TVP model at the time of the prediction. The bars show the decomposition of the predicted \(5^{th}\) and \(95^{th}\) percentiles approximated by the ten-year rolling linear regression. Panels C and D show the corresponding coefficients of the linear posterior summary, which capture the marginal effects of the risk indicators; a circle indicates a coefficient that is significant at the \(5\%\) level. In what follows, we do not discuss the tail risks around WWII, because we cannot distinguish between the impact of macrofinancial variables and wartime effects.
### Main Results
Starting with the intercept, we observe a staggering reduction post-WWII (see Figures 2 and 3); the intercept measures the conditional average downside risk within the ten-year window of the rolling linear posterior summary. Conditional average tail risks nowadays are therefore significantly lower than at the start of the sample. This result is partly unsurprising, since overall tail volatility has decreased substantially since WWII; the size of the reduction in the conditional average growth risk is nevertheless eye-catching.
Adrian et al. (2020) find that financial stress is the main short-term factor in predicting left-tail risks. This is evident during events such as the two oil crises, the dot-com bubble, and the Great Recession, where financial stress contributes greatly to downside risks. However, during the Global Financial Crisis, credit and house price growth were the major contributors to downside risks in the four-quarter-ahead prediction (see Figure 3, Panel A). Likewise, before WWII, both variables play an important role in predicting growth-at-risk, especially during the early years of the Great Depression; in fact, in the midst of the Great Depression, credit growth contributes the most to downside risks. Interestingly, unlike its impact on the left tail, financial stress has little effect on the upper tail of the predicted distribution. Hence, financial stress indeed leads to longer and fatter left tails but does not influence the upper tail. This is reflected in the coefficients, with a significant negative effect on left-tail risks but a less significant effect on upside risks, as seen in Figures 2 and 3, Panels C and D. More precisely, financial stress has a significant negative marginal effect on left-tail risks (Panel C), while its marginal effect on upside risks is less significant both economically and statistically.4
Footnote 4: Although the OLS coefficient of the one-quarter-ahead \(95^{th}\) percentile posterior summary is consistently significantly positive since the Global Financial Crisis, the effect is small.
Credit growth typically affects downside risks negatively in the medium and long term, but less so at shorter horizons (Adrian et al., 2020; Galan, 2020). While our findings broadly confirm this, we are also able to shed light on temporary and transitional effects over time. In our one-quarter-ahead predictions, we find a negative impact of credit growth both after the Global Financial Crisis and after the first oil shock, but for completely different reasons: while rapid credit expansion drove downside risks during the first oil crisis, a credit crunch - i.e. too little credit - increased downside risks following the Global Financial Crisis (see Figure 2, Panel C). This stands in stark contrast to the effect of credit growth in the four-quarter-ahead predictions, where the sharp decline in the coefficient in the mid-2000s results in a large negative overall effect.5 Before WWII, credit growth was the most important driver of tail risks, contributing to the post-WWI recession, the Depression of 1920-1921, and the deterioration in growth risks during the Great Depression. Again, Panel C in Figures 2 and 3 shows the importance of using a flexible approach, as the coefficients vary substantially over time in both sign and magnitude. While financial stress has only a minimal impact on upside risks, credit growth is a main contributor to both higher and lower risks, depending on the period.
The effects of financial stress and credit growth on downside risks to economic growth since the 1970s are well in line with previous literature. For house price growth, we would expect no or only a small effect on downside risks (Aikman et al., 2019; Galan, 2020); however, we find a significant effect on short-term tail risks in the 1950s, 1960s, early 1990s, and 2000s. Panel C of Figures 2 and 3 offers insights into the patterns observed. Fast-rising house prices during the 1950s, partly due to the GI Bill, and increasing subprime lending in the 2000s negatively impacted economic growth risks by creating the threat of a real estate bubble. Conversely, declining house prices, as seen in the early 1990s and late 1960s, also worsened downside risks, with the most pronounced effect in the mid-2000s four-quarter-ahead prediction. In this respect, our findings differ from previous studies that suggest little or no effect of house prices on short-term tail risks, as our empirical model is able to capture time-varying effects. Interestingly, house price growth also affects the upside risks of economic growth, which fits well with the boom-bust patterns described in the financial cycle literature (Borio, 2014). In the pre-WWII period, the effect of house price growth on tail risks was rather small, except for the house price crash after WWI. Our results fit well with the literature on housing and financial cycles, which concludes that over the last century mortgage lending and total credit to households increased strongly, reinforcing the feedback effects on the whole economy and the impact on tail risks to economic growth. As a result, house prices have become more relevant from a financial stability perspective, as financial imbalances may have increased with rising leverage in recent decades compared to the pre-WWII period (see, for instance, Jorda et al., 2015, 2016; Mian et al., 2017).
Figure 2: Decomposition of tail risks one quarter ahead
Figure 3: Decomposition of tail risks four quarter ahead
### Common patterns in times of crises
To gain an in-depth understanding of our empirical results, it is useful to relate the findings to the two major economic crises in our sample. For this purpose, we illustrate the results using TVP Local Projections during the Great Depression on the one hand and the Global Financial Crisis on the other. The results show significant fluctuations in coefficients during both crises, with an even more pronounced effect observed during the latter. Interestingly, there are both similarities and differences in the impact of financial risk indicators on downside risks during the two crisis periods, with particularly strong differences in the effect of house prices.
During both the Great Depression and the Global Financial Crisis, financial stress amplified downside growth risks at short horizons, and in the case of the Great Depression partly also at longer horizons.6 By contrast, the effect is somewhat different in the years preceding the crisis, as pre-crisis growth risks are only affected by financial stress prior to the Global Financial Crisis. In both cases, as the crisis unfolded, the effect at longer horizons disappeared and the impact on short-term (one-quarter-ahead) predictions intensified. Thus, financial stress elevated downside risks to economic growth in both crises, but it only influenced pre-crisis growth risks before the Global Financial Crisis.
Footnote 6: Please note that the illustrated positive coefficients are not statistically significant, except for 2008Q1, when the positive coefficient four quarters ahead may reflect a partial reversion of the strongly negative effect for the one quarter ahead tail forecast.
For credit growth, we observe comparable patterns in both crises, with higher credit growth associated with heightened growth risks both prior to and during the crisis. During the Great Depression, the coefficients for all horizons shifted upward, suggesting that a credit crunch, rather than excessive credit, heightened growth risks in later years. By contrast, this upward shift in coefficients was only observed at the one-quarter-ahead horizon during the Global Financial Crisis, potentially indicating the success of central bank intervention in avoiding a credit crunch.
While the results are comparable for financial stress and credit growth, the relationship between house price growth and downside risks shows strong differences between the two crises. In the Great Depression, rising house prices had a negative impact on growth risks, whereas during the Global Financial Crisis, falling house prices had the same effect. These results for the Global Financial Crisis are well in line with recent findings on housing wealth effects (e.g. Mian et al., 2013). The literature finds that a decline in house prices can also reduce consumption,
especially when housing wealth constitutes a large proportion of households' overall wealth.
Figure 4: TVP Local Projections. _Note:_ This figure shows TVP Local Projections of the linear approximation at different points in time. A circle indicates a significant coefficient (5% significance level).
### Discussion
In the following, we highlight some main findings before discussing more specific effects of the financial risk indicators. First, a more flexible approach to estimating GaR is warranted and important, especially when including many indicators (i.e. not only financial stress) and when estimating effects over a long time period. We find that the link between tail risks and financial risk indicators is time-varying, which has strong implications for policy-makers. For instance, calibrating models for the application of optimal macroprudential policy, such as the one proposed by Suarez (2022), with non-time-varying parameters (e.g. quantile regression estimates) could lead to misleading policy recommendations. Furthermore, not only the effects of risk indicators but also the effects of macroprudential policies can vary over time (see for example Jimenez et al., 2017; Cerutti et al., 2017). Therefore, instead of modeling all kinds of interactions between risk indicators (and possibly also macroprudential policies)7, it is easier and more effective to use a flexible empirical approach like the one proposed in this paper. Second, TVP-SV models outperform standard quantile regression models in tail forecasting and therefore give a more accurate picture of future downside risks, which translates into more reliable warning signals about potentially sharp recessions and a more efficient use of macroprudential policies.
Footnote 7: For example whether bubbles are leveraged or not, see e.g. Jorda et al. (2015).
In a next step, we summarize and discuss the key findings and connect the impact of the individual risk indicators to the broader financial cycle literature. With respect to financial stress, we find a consistently negative marginal and total effect on downside risks over the whole sample for both one- and four-quarter-ahead predictions. The relationship between financial stress and downside risks to economic growth, as shown by Adrian et al. (2019), thus appears to hold across time, albeit with varying magnitude. Policy-makers should therefore pay close attention to financial stress and avoid it whenever possible, as it has increased growth risks for over 100 years with no benefit to the upper tail of the distribution.
The impact of credit growth on tail risks is time-varying and therefore harder to generalize. Once again, this confirms that a more flexible approach is warranted when assessing growth risks. For the one-quarter-ahead predictions, we see signs of a credit crunch during the Great Depression and the Global Financial Crisis, as indicated by a negative total effect on downside risks combined with a positive marginal effect, implying that negative credit growth increased the risks to economic growth. However, the stark difference between the two crises lies in the duration and magnitude of the credit crunch, which points to the effectiveness of central banks' interventions in 2008 and the following years. Strikingly, while credit growth did not significantly increase downside risks in the second half of the last century, it did in the run-up to the Global Financial Crisis, comparable to the Great Depression (in terms of both absolute and marginal effects). These results are in line with the seminal work of Jorda et al. (2015), who show that credit-boom-fuelled stock bubbles are much costlier than non-credit-boom bubbles. Furthermore, Gorton and Ordonez (2020) find that the average credit boom in a sample of 34 countries from 1960 to 2010 lasts 11 years and is not necessarily bad. Our analysis supports this view: the credit boom after WWII, for example, had a very desirable outcome, decreasing downside and increasing upside risks.
Finally, our findings regarding house price growth are also notable. While financial stress has a remarkably consistent impact on growth risks, and the impact of credit growth during the Great Depression and the Global Financial Crisis is also comparable, the effect of house price growth on tail risks during the Great Recession was unprecedented. When house prices collapsed during the Global Financial Crisis, this had a strong impact on downside risks. Interestingly, we see an almost exclusively positive marginal effect of house prices on growth risks since the 1970s. Linking this fact to a rising home-ownership ratio implies an increasing wealth effect on consumption in the case of collapsing house prices (Mian et al., 2013; Jorda et al., 2017; Graham and Makridis, 2021). This view is also supported by the fact that the correlation between house prices and credit growth increased strongly after the 1960s and remained high thereafter. The mortgage-financed housing boom may therefore explain why the effect of house price growth on tail risks was unprecedented during the Global Financial Crisis, which is also in line with the broader financial cycle literature (Borio, 2014).
## 4 Conclusion
Predicting growth risks has become a critical aspect of policymaking since the Global Financial Crisis, particularly for macroprudential policy-makers. While previous research has identified financial stress, credit growth, and house price growth as key indicators for predicting growth risks, these studies often rely on short time series and inflexible models. In contrast, we use a flexible empirical approach and a 130-year historical time series to improve forecasting accuracy and to better understand the changing relationships between tail risks and financial risk indicators.
Our findings suggest that certain risk indicators, such as credit growth, have a time-varying effect on tail risks but tend to follow similar patterns during financial crises. The impact of house prices on tail risks, by contrast, is time-varying for both upside and downside risks, and its impact on growth risks during the Global Financial Crisis was exceptional compared to historical patterns. Additionally, we find that financial stress is consistently associated with increased growth risks (i.e. lower values of GaR) over the entire sample period.
This paper also serves as a note of caution in drawing policy implications from short time series or single historical events, as the observed effects of risk indicators may be time-dependent. In addition to financial risk indicators, policy variables may also have a time-varying impact on tail risks throughout the credit and business cycle. Further research is necessary to assess the effectiveness and importance of macroprudential policy in crisis situations and the potential trade-offs that may occur during normal times. |
2306.04435 | Stellar dynamics within the virial theorem: asymptotic small-parameters
time expansion of the Ermakov-Lewis-Leach invariant as an infinite series of
conservation laws | The regime of undamped oscillations characterizing the stellar dynamics of
virialised systems is analysed within the framework of a new approach to the
study of the integrals of motion. The method is relevant as far as the
cosmological implementation is concerned, as it applies to the calculations apt
for any age of the evolution of the Universe starting from the epoch of
non-Gaussianities until present time. The new method here developed is based on
the asymptotic small-parameters expansion of the new expression of the
Ermakov-Lewis-Leach integrals of motion; as a result, an infinite series of new
conservation laws implies the uniqueness and existence of new integrals of
motion; analytical examples are provided after the pulsed Plummer potential,
the three instances of the pulsed Dehnen potentials, the pulsed harmonic
potential, the Jaffe potential, the Hernquist potential, applied to the studied
problem. Particular cases of complex potentials are therefore also comprehended
in the analysis. The constants of motions descending from the conservation laws
are demonstrated to depend on the virialised radius and on the virialised mass
of the stellar system, independently of the potential used. Cosmological
implementation is given within the framework of a generic Terzic'-Kandrup
potential. Comparison with data analysis techniques are provided with. | Orchidea Maria Lecian, Brunello Tirozzi | 2023-06-07T13:50:54Z | http://arxiv.org/abs/2306.04435v1 | Stellar dynamics within the virial theorem: asymptotic small-parameters time expansion of the Ermakov-Lewis-Leach invariant as an infinite series of conservation laws
###### Abstract
The regime of undamped oscillations characterizing the stellar dynamics of virialised systems is analysed within the framework of a new approach to the study of the integrals of motion. The method is relevant as far as the cosmological implementation is concerned, as it applies to the calculations apt for any age of the evolution of the Universe starting from the epoch of non-Gaussianities until present time. The new method here developed is based on the asymptotic small-parameters expansion of the new expression of the Ermakov-Lewis-Leach integrals of motion; as a result, an infinite series of new conservation laws implies the uniqueness and existence of new integrals of motion; analytical examples are provided after the pulsed Plummer potential, the three instances of the pulsed Dehnen potentials, the pulsed harmonic potential, the Jaffe potential, the Hernquist potential, applied to the studied problem. Particular cases of complex potentials are therefore also comprehended in the analysis. The constants of motions descending from the conservation laws are demonstrated to depend on the virialised radius and on the virialised mass of the stellar system, independently of the potential used.
Cosmological implementation is given within the framework of a generic Terzic'-Kandrup potential.
**Keywords**: Astrophysics dynamics; non-Gaussianities; virialised systems; integrals of motion; dark matter.
## I Introduction
The interest of the astrophysical dynamics of virialised systems lies in the possibility of studying the formation of astrophysical objects and their evolution within the virialised description, which can be assumed to start at the epoch of non-Gaussianities and to hold up to present times [1], within the framework of the collisionless assumptions. Stellar stability was phenomenologically analysed in [2] as far as the period-density relation and the light curves are concerned.
Observational evidence of galaxy oscillations is reported in [3].
Some features of the asymptotic theory of stellar oscillations and of excitation and damping of the oscillations are recapitulated in [4].
In [5], fitting techniques for the stellar stability of the time-dependent radial movement of a mass shell, modelled after an oscillator equation relating the eigenfrequency to the adiabatic exponent, are presented.
One of the advantages of studying the properties of the integral constants of motion for virialised systems is the possibility to trace the wanted properties of the dark-matter effects within stellar systems, as in [6].
In [7], the problem is addressed of the description of galaxies as prolate, oblate or triaxial. The numerical method followed mimics the representation of the density distribution produced after one orbit when averaged over a long time. The quantity \(\tilde{E}=\frac{1}{2}v^{2}+\tilde{\Phi}\) is found to be always isolating; nevertheless, the quantities \(f(\tilde{E}),L_{z}\) are commented as not suited, as elliptical galaxies have been measured to rotate faster (p. 29 ibidem).
In [8], the relaxation of the assumption that the potential in the Jacobi integral be an even function of the coordinates allows one to further investigate the dynamics of two-dimensional rotating systems.
In [9], the features of triaxial galaxies are investigated numerically; a density distribution with a modified Hubble profile is taken to approximate elliptical galaxies. As a result, the orbits computed are found to have three effective integrals: the energy and two non-classical integrals. For these purposes, a model density distribution is reproduced after superposing the orbital density distributions if each orbit is reasonably occupied by an appropriate number of states, with occupation numbers being non-negative.
In [10], an analytical formula for an approximation of the third integral of motion in some specific case is provided with. The delicate issue of treating the integration boundaries according to the Standard Cosmological Principle has been addressed only recently and only numerically in [11] for the choice of an appropriate scale factor in GR. The analysis of [12] is focused on galaxy clusters. Differently, the studies of [13] point out the setting of the virialising radii in the computation; to do so, the conduction length is demonstrated to be typically less than the size of the virializing cluster also in the absence of any suppression of the thermal conductivity for the choice of the conduction length: the choice of a realistic suppression factor is discussed. The study [11] is concerned with choosing a reasonable size to integrate over the sampled Astrophysical system.
The realistic implementation of the virializing techniques of self-gravitating (Relativistic) Riemann-like galaxies is envisaged in [14].
In [15], the features of undamped oscillations of stellar systems are described; to do so, the Lewis invariant is used.
The virialised radius is not always found as an effective fitting parameter from the fitting techniques of the observational evidence [16].
A discussion of the use of the virial masses within fitting algorithms is provided in [17].
In [18], coherent oscillations of the entire stellar system under the effect of a perturbing mass are studied.
The problem of visibility of oscillations was recently discussed in [19]: in comparison with the signal-to-noise ratio, a time series is obtained after realistic magneto-hydro-dynamic simulations, in combination with radiative transfer calculations; the interpretation of the time series is left as an open issue.
The purposes of the present paper are the formulation of conservation laws of the constants of motion descending from the establishment of the technique of the asymptotic small-parameters time expansion, which leads to constants of motion depending on the virial mass and on the virial radius at the cosmological implementation.
The technique is applied to the Lewis-Ermakov-Leach invariant as far as the analysis of stellar systems is concerned. In the present work, the constant of motion is expressed as a function of the virialised radius, and as a function of the virial mass of the stellar objects (where the latter result holds due to the non-restrictive assumption that, during an infinitesimal time interval at the time of non-Gaussianities, the mass of the stellar object is not varying, i.e., for instance, there is no ejection, such that the phenomena described in [20] and [21] are avoided).
The results are obtained after the consideration of a stellar object whose properties are ruled after a generalised potential, which can contain oscillating terms. The integral of motion is decomposed according to the small time-parameters technique, and new conservation laws of new integrals of motion are found.
The cosmological implementation is provided within the framework of a generic Terzic'-Kandrup potential.
The methodologies apply straightforwardly to the Ermakov-Lewis invariant and to the Ermakov-Lewis adiabatic invariant. The system can be tested at different approximation orders.
The cosmological implementation is achieved at the time of non-Gaussianities. According to this choice, it is newly possible to analyse the oscillation properties of stellar systems as due to two different components, i.e. the constants of motions, and the phenomena perturbing the CBE system at any time of the observational evidence.
The interest of this new description is also that it is possible to apply the paradigm to calculate the constants of motion at different ages of the Universe, at which stellar systems can be hypothesised to have started forming.
The manuscript is organised as follows.
In Section I, the main achievements regarding the constants of motion, the virial radius and the virial mass of stellar systems are introduced.
In Section II, the application of the tensor version of the virial theorem to stellar objects is recalled to be reconducted to the scalar version, at which there defines a virialised radius.
In Section III, the elements that lead to a choice of a collisionless Boltzmann distribution function are discussed.
In Section IV, the phenomenon of oscillations of stellar systems is revised.
In Section V, the mathematical tools needed for the definition of generalised Hamiltonians are summarised.
In Section VI, time-dependent oscillator potentials are recapitulated.
In Section VII, new estimations of the constants of motions after the Lewis-Ermakov-Leach invariant are found. The method can be shown to apply straightforwardly to the Ermakov-Lewis invariant and to the adiabatic Ermakov-Lewis invariant.
In Section VIII, the new conservation laws of the new constants of motions are stated.
In Section IX, the cosmological implementation of the newly-found conservation laws of the constants of motion is provided at the epoch of non-Gaussianities.
In Section X, the modifications of the distribution function of collisionless-Boltzmann systems are discussed as far as the Astrophysical implementation is concerned. In Section XI, the further issues concerning stationary oscillations are this way freshly analysed.
In Section XII, the phenomena which lead away from the description of stationary oscillations are thus anew envisaged.
In Section XIII, the guidelines of investigation of the cosmological issues allow one to frame the new results here demonstrated.
## II Stellar dynamics within the virial theorem
In [22], after [23], the tensor version of the virial theorem, applied to a system of equal-mass points, after the Liouville equation, is specified to the dynamics of the 6-dim phase-space of spherical stellar systems. More in detail, stellar systems with positive total energy are demonstrated to disperse at infinity; differently, spherical systems with negative total energy are demonstrated to perform periodic oscillations with finite amplitude. The implementation of the technique to Astrophysical systems spans from clusters of stars to clusters of galaxies.
The scalar version of the virial theorem in the hypothesis of stationarity, after a kinetic energy \(\mathcal{T}\) and a potential energy \(\mathcal{W}\), is stated as
\[2\mathcal{T}+\mathcal{W}=0; \tag{1}\]
the gravitational potential energy \(\mathcal{W}\) is written
\[\mathcal{W}=-\frac{1}{2}\frac{GM^{2}}{\bar{R}}, \tag{2}\]
with \(\bar{R}\) being the average measure of the linear dimension of the system (i.e., also, the radius of the sphere in a spherical system), and \(M\) the mass.
Two-points correlations are to be ignored.
The following derivation holds
\[\frac{d^{2}}{dt^{2}}\mathcal{I}_{ij}=2\mathcal{T}_{ij}+\mathcal{W}_{ij}, \tag{3}\]
where \(\mathcal{I}_{ij}\) is the inertia-moment tensor, \(\mathcal{T}_{ij}\) is the kinetic-energy tensor, and \(\mathcal{W}_{ij}\) is the potential-energy tensor.
Be \(\mathcal{E}\) the total energy of the system, remaining constant: the following application holds
\[\frac{d^{2}I}{dt^{2}}=2\mathcal{T}+\mathcal{W}=2\mathcal{E}-\mathcal{W}. \tag{4}\]
It is therefore now possible to analyse the evolution of the dynamics of the studied spherical system.
The dynamics is hypothesized to start at some initial time, at which a cluster is characterised after its moment of inertia, its kinetic energy and its potential energy. The evolution is therefore described after Eq. (4).
It is worth drawing attention to the choice of the linear size \(\bar{R}\), i.e. the internal distribution of the cluster density is demonstrated not to be of relevance. For these purposes, one chooses a spherically-symmetric cluster consisting of \(N\) equal-mass points, distributed in the phase space according to a Gaussian density distribution.
Therefore, the density \(\rho(r)\) reads
\[\tilde{\rho}(r)=\frac{Nm}{\mathcal{A}^{3}\pi^{3/2}}e^{-r^{2}/\mathcal{A}^{2}}, \tag{5}\]
with \(\mathcal{A}\equiv\mathcal{A}(t)\) the radius of the cluster.
Eq. (5) is assumed to be maintained.
Eq. (4) implies that
\[\frac{3}{4}Nm\frac{d^{2}\mathcal{A}}{dt^{2}}=2\mathcal{E}+\frac{GN^{2}m^{2}}{( 2\pi)^{1/2}}. \tag{6}\]
### Virialization
It is relevant to remark that, if instead of a Gaussian distribution, a homogeneous distribution had been chosen inside the space of radius \(\mathcal{A}\), an equivalent equation, differing only after the coefficients, would have been obtained, which will assume the same dimensionless form as follows.
**Def.**: Be \(\mathcal{A}\equiv\mathcal{A}_{0}z\), with the specification of \(\mathcal{A}_{0}\) to be designated, and measure the time in units of
\[t_{0}=\left(\frac{9}{8}\pi\right)^{\frac{1}{2}}\frac{\mathcal{A}_{0}}{NmG}. \tag{7}\]
Eq. (6) implies the role of \(\mathcal{E}\) to be spelled out following
\[\frac{d^{2}z}{dt^{2}}=2\left(sign\mathcal{E}\right)+\frac{1}{z} \tag{8}\]
with the introduction of \(Q\) as
\[Q=\left(\frac{\mathcal{E}}{\mathcal{W}_{0}}\right) \tag{9}\]
being \(\mathcal{W}_{0}\) the potential energy at \(\mathcal{A}_{0}\).
Finally, Eq. (8) admits first integral
\[z^{2}\left(\frac{dz}{dt}\right)^{2}=\left(sign(\mathcal{E})\right)Qz^{2}+z+const \tag{10}\]
#### ii.1.1 The application to systems with negative total energy
For systems with negative total energy, Eq. (10) rewrites with the help of the integration constant \(\Lambda\)
\[z^{2}\left(\frac{dz}{dt}\right)^{2}=\Lambda^{2}-Q\left(z-\frac{1}{2Q}\right)^ {2}. \tag{11}\]
Eq. (11) is solved for \(t\)
\[t=\frac{1}{Q^{1/2}}\left[\Lambda^{2}-\left(z-\frac{1}{2Q}\right)^{2}\right]^{1/2}+\frac{1}{2Q^{3/2}}cos^{-1}\left[\frac{1}{\Lambda}\left(z-\frac{1}{2Q}\right)\right]. \tag{12}\]
It is possible to define \(\mathcal{C}^{\prime}\) an arbitrary constant defining the origin of \(t\); the constant \(\mathcal{C}^{\prime}\) is unimportant and can be neglected.
Eq. (12) implies that, when the total energy of the system is negative, the system performs periodic oscillations of finite amplitude, characterized by a period \(p\)
\[p=\frac{\pi}{Q^{3/2}} \tag{13}\]
and amplitude of oscillation for \(z\) as
\[\Lambda+\frac{1}{2Q}\geq z\geq-\Lambda+\frac{1}{2Q}. \tag{14}\]
The case \(z\geq 0\) is characterized, by definition, by
\[\Lambda\leq\frac{1}{2Q}. \tag{15}\]
The special case
\[\Lambda=\frac{1}{2Q}, \tag{16}\]
corresponds to vanishing kinetic energy at \(t=0\) and \(t=\frac{1}{Q}\).
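As a numerical cross-check of the period formula Eq. (13), the half-period implied by the first integral Eq. (11) can be evaluated by quadrature between the turning points of Eq. (14). The following minimal Python sketch (our addition; the values of \(Q\) and \(\Lambda\) are purely illustrative and satisfy Eq. (15)) confirms that the period equals \(\pi/Q^{3/2}\) independently of the amplitude \(\Lambda\):

```python
import numpy as np

# Half-period from Eq. (11): integrate dt = z dz / sqrt(L**2 - Q*(z - 1/(2Q))**2)
# between the turning points of Eq. (14). The substitution
# z = 1/(2Q) + (L/sqrt(Q))*sin(s) removes the inverse-square-root endpoint
# singularity and reduces the integrand to z/sqrt(Q).
Q, L = 0.7, 0.4                     # illustrative values with L < 1/(2Q), cf. Eq. (15)
s = np.linspace(-np.pi / 2, np.pi / 2, 20001)
f = (1.0 / (2.0 * Q) + (L / np.sqrt(Q)) * np.sin(s)) / np.sqrt(Q)
half_period = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(s))   # trapezoidal rule
print('numerical period:', 2.0 * half_period)   # ~ 5.36
print('Eq. (13) value  :', np.pi / Q**1.5)      # amplitude-independent
```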
## III The choice of the collisionless Boltzmann distribution in astrophysical systems
The problem of a large number \(N\) of interacting particles is here addressed after the consideration of a small number of quantities, which are averaged in a specific manner, rather than that of the exact position \(\vec{r}\) and velocity \(\vec{v}\) of each particle [24]. The \(N\) interacting particles in large number can be described as a function of the number \(N\), the density and the collision frequency at characteristic times and spatial scales.
_The Vlasov equations._ It is necessary to reconcile the Newtonian equations of motion with those of hydrodynamics within the description of motion in continuous media.
A hydrodynamical system is characterised at a given point after the density \(\tilde{\rho}(\vec{r},t)\), the pressure \(P(\vec{r},t)\) and the velocity of motion of the fluid \(\vec{v}(\vec{r},t)\).
There therefore holds the continuity equation
\[\frac{\partial\tilde{\rho}}{\partial t}+div(\tilde{\rho}\vec{v})=0 \tag{17}\]
to be combined with the Euler equations
\[\frac{d\vec{v}}{dt}\equiv\frac{\partial\vec{v}}{\partial t}+(\vec{v}\nabla) \vec{v}=-\frac{1}{\tilde{\rho}}\frac{\partial P}{\partial\vec{r}}+\vec{F}, \tag{18}\]
\(\vec{F}\) being the force per unit mass.
In view of the requested reconciliation, the force per unit mass \(\vec{F}\) has to be set to imply the Newtonian gravitational force
\[\vec{F}=-grad\Phi(\vec{r},t), \tag{19}\]
being \(\Phi(\vec{r},t)\) the gravitational potential.
Therefore, combining the continuity equation Eq. (17) and the Euler equations Eq.'s (18), the hydrodynamical description is achieved after posing \(\vec{F}=0\) in a chosen equation of state. Differently, if \(\vec{F}\neq 0\), in the case of gravitating systems, the Poisson equation
\[\Delta\Phi=4\pi G\tilde{\rho} \tag{20}\]
has to be recovered.
The thermal motion of particles is schematised in the kinetic theory, in which the statistical description occupies a central role: the distribution function \(f(\vec{r},\vec{v},t)\) has to be set after the particles.
The Boltzmann kinetic equation reads
\[\frac{df}{dt}=\mathcal{C} \tag{21}\]
along the exact trajectory of the particle in the field of the force from which \(\vec{F}\) is normalised.
The choice \(\mathcal{C}=0\) in Eq. (21) corresponds to a collisionless kinetic equation.
The choice \(\mathcal{C}=0\) adapts obviously to Astrophysical systems. It corresponds to the incompressible feature of a phase fluid, after which the distribution function stays unchanged, i.e.
\[\frac{df}{dt}=0 \tag{22}\]
during the path of the phase space.
The density \(\tilde{\rho}(\vec{r},t)\) therefore reads
\[\tilde{\rho}(\vec{r},t)=\int f(\vec{r},\vec{v},t)d\vec{v}. \tag{23}\]
The set of equations Eq. (22), Eq. (20), and Eq. (23) is denominated Vlasov equations.
The set of Vlasov equations is analogous to the equations characterising a plasma.
_The collisionless kinetic equation._ The collisionless kinetic equation Eq. (22) describes the evolution of the single-particle distribution function \(f=f(\vec{r},\vec{v},t)\). It can be obtained also from the Liouville equation of the \(N\)-particle distribution function \(\mathcal{F}=\mathcal{F}(\vec{r_{1}},\vec{v_{1}},\vec{r_{2}},\vec{v_{2}},...;t)\), which is a function of the particles' coordinates, the particles' velocities and of time, as
\[\frac{\partial\mathcal{F}}{\partial t}+[\mathcal{F},H]=0, \tag{24}\]
endowed with its Poisson brackets, being \(H\) the Hamiltonian of the system (where the Hamiltonian variables \(q_{\alpha}\) and \(p_{\alpha}\) are defined).
Eq. (24) rewrites also
\[\frac{\partial\mathcal{F}}{\partial t}+\sum_{\alpha}v_{\alpha}\frac{\partial\mathcal{F}}{\partial x_{\alpha}}+\sum_{\alpha}F_{\alpha}\frac{\partial\mathcal{F}}{\partial v_{\alpha}}=0. \tag{25}\]
Eq. (22) is therefore also obtained from Eq. (25) after integration over the phase-space coordinates of all the particles except those of one particle; the function \(\mathcal{F}\) therefore splits as
\[\mathcal{F}=f(\vec{r_{1}},\vec{v_{1}},t)f(\vec{r_{2}},\vec{v_{2}},t)...f(\vec {r_{N}},\vec{v_{N}},t). \tag{26}\]
Eq. (26) is therefore commented as suited for Astrophysical systems, where the statistical independence of the phase distributions of different objects is requested, and in which the collisions (i.e. the interaction forces between the individual pairs of particles) are negligibly small with respect to the 'smoothed' force \(\vec{F}\). The smoothed force \(\vec{F}\) here considered is one due to the collective self-consistent action of all the particles of the system.
The self-consistent nature of the field is due to the fact that the field is produced after a particle distribution \(f\), which, in its turn, is produced after the same field.
One notes that the condition that collisions be neglected is reflected in the requirement that the number of particles in the so-called 'Debye sphere' is large.
###### Collisionless gravitating system and the equilibrium states
The equilibrium states of a collisionless gravitating system are described after the distribution functions \(f_{0}(\vec{r},\vec{v})\), the mass density \(\tilde{\rho}_{0}(\vec{r})\), and the gravitational potential \(\Phi_{0}(\vec{r})\). They obey the Vlasov equations Eq. (20), Eq. (22), Eq. (23) where \(\frac{\partial}{\partial t}=0\) is hypothesized.
### Small oscillations of gravitating systems: estimates of the equilibrium configurations
The gravitating systems are parameterised after a distribution function \(f_{0}(\vec{r},\vec{v})\), a mass density \(\tilde{\rho}_{0}(\vec{r})\), related after a gravitational potential \(\Phi_{0}(\vec{r})\), which obey the simplified Vlasov equations with \(\frac{\partial}{\partial t}=0\)[24].
They have to obey the simplified Vlasov equations as
\[\vec{v}\frac{\partial f_{0}}{\partial\vec{r}}-\frac{\partial\Phi_{0}}{\partial \vec{r}}\frac{\partial f_{0}}{\partial\vec{v}}=0, \tag{27}\]
\[\Delta\Phi_{0}=4\pi G\tilde{\rho}_{0}, \tag{28}\]
and
\[\tilde{\rho}_{0}=\int f_{0}d\vec{v}. \tag{29}\]
The kinetic equation Eq. (27) is commented as an equation for the distribution function \(f_{0}\) after having assumed the potential \(\Phi_{0}\) to be known; it is therefore a homogeneous first-order partial differential equation.
The equations of motion of a particle, ruled after the potential from the gravitational force \(-\frac{\partial\Phi_{0}}{\partial r}\), are the characteristics of the equations
\[\frac{d\vec{r}}{dt} = \vec{v}, \tag{30a}\] \[\frac{d\vec{v}}{dt} = -\frac{\partial\Phi_{0}}{\partial\vec{r}}. \tag{30b}\]
A general solution of Eq. (27) is an arbitrary function of the particle integrals of motion in the field \(\Phi_{0}(\vec{r})\).
The requirement of unambiguity of the distribution function for all the points of the phase space leads to the request, in its turn, that the integrals of motion which can be arguments of the distribution be single-valued.
The equations of the characteristics of the system of Eq.'s (30) in Cartesian coordinates write
\[\frac{dx}{v_{x}}=\frac{dy}{v_{y}}=\frac{dz}{v_{z}}=\frac{dv_{x}}{-\partial\Phi_{0}/\partial x}=\frac{dv_{y}}{-\partial\Phi_{0}/\partial y}=\frac{dv_{z}}{-\partial\Phi_{0}/\partial z}=dt. \tag{31}\]
From Eq. (31), six independent integrals of motion are given, five of which do not depend on the time \(t\).
The requirement that the integrals be single-valued has to be compared with the properties of uniformity of time (energy), as well as with those of homogeneity and isotropy of space.
Thus, the particle energy \(E\) in the field \(\Phi_{0}\) can always be one of the arguments of the function \(f_{0}\) as
\[E=\frac{v^{2}}{2}+\Phi_{0}. \tag{32}\]
## IV About oscillations
The relations between the isolating integrals, the Jeans theorem and the distribution functions are enumerated in [25].
In [15], the performance of undamped oscillations of stellar systems is analysed from the definition of the role of the integrals of motions, after [24].
The dynamics of collisionless Boltzmann equations (CBE) is chosen for the schematization of the system as one from the distribution function
\[\frac{\partial f}{\partial t}+\vec{v}\cdot\frac{\partial f}{\partial\vec{r}}-\frac{\partial\Phi}{\partial\vec{r}}\cdot\frac{\partial f}{\partial\vec{v}}=0, \tag{33}\]
which implies convective features in the dynamics of stars in galaxies, where the former is ruled after the CBE: in Eq. (33), the distribution \(f\) is defined as \(f\equiv f(\vec{r},\vec{v},t)\), i.e. such that \(f(\vec{r},\vec{v},t)d^{3}rd^{3}v\) is the mass comprehended in the portion \(d^{3}r,d^{3}v\) of the phase space available for the system at the defined time \(t\); furthermore, \(\Phi\equiv\Phi(\vec{r},t)\) is the gravitational potential compatible with the (self-consistent) Poisson equation
\[\nabla^{2}\Phi=4\pi G\tilde{\rho}(\vec{r},t)\equiv 4\pi G\int f(\vec{r},\vec{v},t)d^{3}v \tag{34}\]
of the density \(\tilde{\rho}\equiv\tilde{\rho}(\vec{r},t)\).
As a result, most solutions can be demonstrated to relax into a time-independent description.
The standard Cosmological Principle can be adopted in order to perform the requested integrals; according to the application of the Standard Cosmological Principle, the integration boundaries are chosen. This way, the shape and the size of the region are analysed to change in time (while the region is assumed to maintain its geometrical features, e.g. ellipsoidal): the strength of the potential is in this manner time-dependent.
### One-dimensional models
One-dimensional models are found to be characterised by the property that all solutions are periodic functions of time.
Within one-dimensional models, spherical models and elliptical models can be studied.
### Three-dimensional spherical models
Within the three-dimensional models, the system of a homogeneous static sphere corresponds to the characterization of a polytropic equation of index zero. Nevertheless, the former schematization is not consistent with the request of an isotropic dispersion relation [26].
A distribution function which depends only on energy is calculated not to describe a uniform sphere; as an alternative derivation, given \(\vec{L}=\vec{r}\wedge\vec{v}\), the distribution function is demonstrated to be requested to depend both on \(E\) and on \(L\).
The inversion of the integral equation of \(f\) is studied not to be unique.
## V The time-dependent harmonic oscillator
The dynamics of the time-dependent harmonic oscillator is ruled after the potential characterizing the Hamiltonian
\[H=\frac{1}{2\eta}\left[p^{2}+\Omega^{2}(t)q^{2}\right], \tag{35}\]
for which there exists a class of exact invariants.
The class of exact invariants \(I_{\eta}(\rho)\), where \(\rho\) is an auxiliary variable, can be given in a closed form as a function of \(\rho(t)\), i.e.
\[I_{\eta}(\rho):\ \ \eta^{2}\frac{d^{2}\rho}{dt^{2}}+\Omega^{2}(t)\rho-\rho^{-3}=0. \tag{36}\]
In its turn, the auxiliary variable \(\rho(t)\) obeys Eq. (36), i.e. such that, for each particular solution of the equation for \(\rho\), an invariant is defined.
Such an analysis leads to results more general than the asymptotic treatment (also in the case of a complex \(\Omega(t)\)).
### More about the CBE
In [27], quadratic time-dependent one-dimensional models are constructed numerically, i.e. in an extended phase space after the Jeans Theorem. In particular, the numerical analysis of spherical oscillations is described to be difficult to perform. By contrast, the presence of non-linear periodically time-dependent solutions of the Poisson equations and the CBE is shown to be possible. The application to bar galaxies is suggested. This suggestion has to be compared with the analytical studies of [28] and [26], in which, nevertheless, a different distribution function is chosen, under Chandrasekhar's guideline that it should be of negative energy.
In the case of quadratic time-dependent potentials, all solutions are found as periodic functions of time \(t\); one-parameter families of oscillating models, with a specified total mass and with a specified energy, are defined. More precisely, it is possible to spell out the chosen parameter as the first integral.
### The Lewis invariant
The Lewis invariant is obtained from Eq. (35) for complex potential term \(\Omega\).
_Derivation of the Lewis invariant._ The Lewis invariant is derived in [29] in the case of a complex potential \(\Omega\) after following the prescriptions of the analysis of [30] in the case of a real potential.
In particular, in [30], the analysis of Hamiltonian systems whose solutions are all nearly periodic is developed. To this purpose, autonomous systems are newly viewed, and the recurrent systems are accordingly classified, of which the asymptotical solution is found; splittable systems are studied, of which the most direct series solution is shown to be not adequate. The standardisation procedure is therefore provided: appropriate variables are chosen, after which a recursive construction of the requested functions is built.
In [29], the procedure followed is dictated after the treatment of [30] as implemented in the case of a complex potential \(\Omega(t)\) from the Hamiltonian
\[H=\frac{1}{2\eta}\left(p^{2}+\Omega^{2}(t)q^{2}\right) \tag{37}\]
being \(q\) the canonical coordinate, \(p\) the canonical conjugate momentum, \(\Omega(t)\) an arbitrary complex function of the time \(t\), and \(\eta\) a positive real parameter.
There exists a class of exact invariants
\[I=\frac{1}{2}\left[\rho^{-2}q^{2}+\left(\rho p-\eta\frac{d\rho}{dt}q\right)^{ 2}\right] \tag{38}\]
with \(\rho\) satisfying
\[\eta^{2}\frac{d^{2}\rho}{dt^{2}}+\Omega(t)^{2}\rho-\rho^{-3}=0 \tag{39}\]
The class of exact invariants is therefore defined for any \(\rho\) obeying Eq. (39).
If Eq. (39) can be solved recursively, the series describing \(\rho\) is a series of positive powers of the parameter \(\eta\); this implies that Eq. (38) is a series of positive powers of \(\eta\).
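The conservation of the invariant Eq. (38) along solutions of Eq. (39) can also be verified numerically. The following minimal Python sketch (our addition) assumes \(\eta=1\) and an illustrative real \(\Omega^{2}(t)=1+0.3\cos t\); the drift of \(I\) remains at the level of the integrator tolerance:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hamilton's equations from Eq. (37) coupled to the auxiliary Eq. (39),
# with the invariant of Eq. (38) monitored along the trajectory.
eta = 1.0
Omega2 = lambda t: 1.0 + 0.3 * np.cos(t)        # illustrative real Omega(t)**2 > 0

def rhs(t, y):
    q, p, rho, rhodot = y
    return [p / eta,                               # dq/dt from Eq. (37)
            -Omega2(t) * q / eta,                  # dp/dt from Eq. (37)
            rhodot,
            (rho**-3 - Omega2(t) * rho) / eta**2]  # Eq. (39)

def invariant(q, p, rho, rhodot):                  # Eq. (38)
    return 0.5 * (q**2 / rho**2 + (rho * p - eta * rhodot * q)**2)

sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 0.0, 1.0, 0.0], rtol=1e-10, atol=1e-12)
I = invariant(*sol.y)
print('relative drift of I:', (I.max() - I.min()) / I[0])  # ~ integrator tolerance
```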
_Implementation of the Lewis invariant._ From the result of [29], the following expression of the integral \(I_{\rho}\) is found.
From the comparison with [30], in classical systems with real \(\Omega\), \(I\) is the usual adiabatic-invariant series, in which the leading term is proportional to \(\eta H/\Omega\).
In the present case
\[I_{\rho}\equiv\frac{1}{2}\left[\rho^{-2}x^{2}+\left(\rho p-\dot{\rho}x\right)^{2}\right], \tag{40}\]
i.e. one for each of the \(\rho\) satisfying the condition
\[\ddot{\rho}+\Omega^{2}(t)\rho=\rho^{-3}. \tag{41}\]
The methods of [30] are applied in [29] to any complex \(\Omega\), \(q\) and \(p\) at any particular solution of Eq. (39) after differentiating Eq. (38), using the Hamiltonian equations to eliminate the first time derivative of the canonical variable and that of the canonical conjugate momentum, and taking advantage of Eq. (39) to eliminate the second time derivative of \(\rho\).
The integral is found in Eq. (34) of [29] and represented in terms of \(\rho(t)\); the integral is found after Eq. (15a) of [29] from \(I=\oint p\,dq\) on particular closed curves (named rings) after the choice of opportune variables.
The rings are demonstrated to be ellipses in the opportune coordinates, after which the first integral can be reversed as a function of the Hamiltonian variables \(p\) and \(q\). Correspondingly, there exists a conserved symmetric tensor \(I_{mn}\)
\[I_{mn}=\frac{1}{2}\rho^{-2}q_{m}q_{n}+(\rho p_{m}-\dot{\rho}q_{m})(\rho p_{n} -\dot{\rho}q_{n}) \tag{42}\]
which is a representation of \(SU(3)\).
### More about invariants
In [31], a generalization of the Lewis invariants is proposed, for the time-dependent oscillator
\[H=\frac{1}{2}p^{2}+\frac{1}{2}\omega^{2}(t)x^{2}, \tag{43}\]
where the potential ruling the dynamics is specified.
In [32], the work of [29] was applied to three-dimensional time-dependent oscillators, which problem comprehends also the anisotropic perturbation problems and the singular quadratic perturbation problem.
A conserved symmetric tensor operator for the isotropic-oscillator problem is written
\[I_{mn}=\frac{1}{2}\left[\rho^{-2}q_{m}q_{n}+(\rho p_{m}-\dot{\rho}q_{m})(\rho p_{n}-\dot{\rho}q_{n})\right], \tag{44}\]
whose symmetry group is a 'non-invariance symmetry group' of the three-dimensional time-dependent isotropic oscillator.
More in detail, in [31], the only non-trivial quadratic invariant \(I\) of the one-dimensional oscillator problem is found as
\[2I=\left(\rho^{-1}e^{F/2}q\right)^{2}\left(C_{1}^{2}+C_{2}^{2}+2C_{1}C_{2}cos(2W)\right)+ \tag{45a}\] \[\left\{(\rho e^{-F/2}p)-(\dot{\rho}-\frac{1}{2}\rho f)e^{F/2}q\right\}^{2}(C_{1}^{2}+C_{2}^{2}-2C_{1}C_{2}cos(2W))+\] (45b) \[-4(\rho^{-1}e^{F/2}q)\left\{\rho e^{-F/2}p-\left(\dot{\rho}-\frac{1}{2}\rho f\right)e^{F/2}q\right\}C_{1}C_{2}sin(2W) \tag{45c}\]
where \(W\) obeys the definition
\[W\equiv\int_{t_{0}}^{t}\rho^{-2}dt: \tag{46}\]
the coordinate transformation needed to obtain the invariants is provided after the matrix which is symplectic under the condition
\[C_{1}^{2}-C_{2}^{2}=1. \tag{47}\]
Keeping the transformation symplectic assures the fundamental symplectic form to stay unchanged; this way, the phase space available for the model is kept unchanged.
The invariant of the undamped oscillations is found after posing \(F=0\) with general values of the constants \(C_{1}\) and \(C_{2}\) as
\[I=\frac{1}{2}\{\rho^{-2}q^{2}\left(C_{1}^{2}+C_{2}^{2}+2C_{1}C_{ 2}cos(2W)\right)+ \tag{48a}\] \[+(\rho p-\dot{\rho}q)^{2}\left(C_{1}^{2}+C_{2}^{2}-2C_{1}C_{2}cos(2 W)\right)-4\rho^{-1}q(\rho p-\dot{\rho}q)C_{1}C_{2}sin(2W)\} \tag{48b}\]
The Lewis invariant is obtained after the choice \(C=0\) of
\[C_{1}=coshC, \tag{49a}\] \[C_{2}=sinhC. \tag{49b}\]
The choice \(C=0\) is not arbitrary; differently, it descends from the request that the coordinate transformation stays symplectic, for which the \(C\) constant is interpreted as a scaling factor.
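For completeness, the consistency of the choice Eq.'s (49) with the symplectic condition Eq. (47) can be checked in one line (our addition):
\[C_{1}^{2}-C_{2}^{2}=cosh^{2}C-sinh^{2}C=1\quad\forall C;\qquad C=0\ \Rightarrow\ C_{1}=1,\ C_{2}=0,\]
so that Eq.'s (48) indeed collapse to the Lewis form Eq. (40).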
## VI About time-dependent oscillator potentials
Among the several possibilities of modified Hamiltonians, including those whose potential is generalised in ways as to contain time-dependent oscillator potentials, those of interest within the cosmological implementation are here briefly recapitulated.
### Mathematical outline
The regularity assumptions about the time-dependent oscillators potentials were very recently recapitulated in [33]. In [34], the features of a generalized time-dependent oscillator potential are spelled out within the Hamiltonian analysis as
\[H=f(t)\frac{p^{2}}{2m}+\frac{1}{2}g(t)w_{0}^{2}x^{2} \tag{50}\]
being \(H\) the Hamiltonian of a system of a point particle of mass \(m\), i.e. for a system in which both the kinetic term and the potential term are generalised as time-dependent by means of the functions \(f(t)\) and \(g(t)\), respectively. The initial values for the two functions \(f(t)\) and \(g(t)\) are established.
### Physical implementations of the time-dependent potentials
The need of an oscillator symmetry in the description of galactic dynamics is recalled in [40]. There, the galactic velocity field is studied as a linear function of the Cartesian coordinates of the masses. The definition of collective kinetic energy (as the kinetic energy within the linear-velocity-field approximation) is explained as one neglecting the degrees of freedom which are connected with non-linear velocity fields: the possibility to render the description of galactic systems is outlined, according to which the theory of symplectic dynamical symmetry is established as far as classical systems are concerned. As a result, the group whose co-adjoint orbit represents the classical phase space is set: the symmetries are studied. One of the elements of the symplectic Lie algebra is the matrix composed from the self-gravitating potential energy, the angular velocity and the hydrostatic pressure. The solutions of the Hamiltonian dynamical systems characterised after these symmetries are studied as isospectral deformations.
The conserved quantities are found as the Casimirs.
In [41], the case of spherically-symmetric potentials which lead to a periodic dynamics are recalled.
There, the potentials proposed \(V=V(r,t)\) are of the simplified form
\[V(r,t)=V(r)(1+m_{0}sin\omega t) \tag{51}\]
which can be cast in the Mathieu Equation, because the equation
\[\frac{d^{2}X}{dt^{2}}+\Omega^{2}(t)X=0 \tag{52}\]
with
\[\Omega^{2}(t)=a+bcos\omega t \tag{53}\]
is recast as
\[\frac{d^{2}X}{d\tau^{2}}+(\alpha+\beta cos2\tau)X=0. \tag{54}\]
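The rescaling of time that makes the reduction explicit is the following (our addition, matching the coefficients of Eq.'s (52)-(54)):
\[\tau=\frac{\omega t}{2},\qquad\frac{d^{2}X}{dt^{2}}=\frac{\omega^{2}}{4}\frac{d^{2}X}{d\tau^{2}},\qquad\alpha=\frac{4a}{\omega^{2}},\qquad\beta=\frac{4b}{\omega^{2}},\]
after which Eq. (52) with Eq. (53) takes the canonical Mathieu form Eq. (54).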
The simplified case \(m_{0}=const\) is relevant in considering the mass(es) as constant, i.e. in comparison with the less simplified cases.
Four potentials are studied: the pulsed Plummer potential and three cases of the pulsed Dehnen potential. More in detail, the pulsed Plummer potential reads
\[V(r,t)=-\frac{m(t)}{\sqrt{1+r^{2}}}; \tag{55}\]
the three pulsed Dehnen potentials read
\[V(r,t)=-\frac{m(t)}{2-\gamma}\left[1-\frac{r^{2-\gamma}}{(1+r)^{2-\gamma}}\right] \tag{56}\]
and are specified after the relevant cases \(\gamma=0\), \(\gamma=1/2\) and \(\gamma=1\). In Eq. (55) and in Eq.'s (56), the mass \(m(t)\) can be specified as
\[m(t)=1+m_{0}sin\omega t. \tag{57}\]
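As an illustration of Eq.'s (55)-(57), the following minimal Python sketch (our addition; all parameter values are purely illustrative) encodes the pulsed Plummer and Dehnen potentials and integrates a radial test orbit in the pulsed Plummer case, whose radius remains bounded:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Pulsed potentials of Eqs. (55)-(57); parameter values are illustrative.
m0, omega = 0.3, 1.0
m = lambda t: 1.0 + m0 * np.sin(omega * t)                 # Eq. (57)

V_plummer = lambda r, t: -m(t) / np.sqrt(1.0 + r**2)       # Eq. (55)

def V_dehnen(r, t, gamma):                                 # Eq. (56), gamma in {0, 1/2, 1}
    return -m(t) / (2.0 - gamma) * (1.0 - r**(2.0 - gamma) / (1.0 + r)**(2.0 - gamma))

# Radial force -dV/dr in the pulsed Plummer case, and a test orbit:
F_plummer = lambda r, t: -m(t) * r / (1.0 + r**2)**1.5

sol = solve_ivp(lambda t, y: [y[1], F_plummer(y[0], t)],
                (0.0, 100.0), [1.0, 0.0], max_step=0.01)
print('radius stays within', (sol.y[0].min(), sol.y[0].max()))
```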
_More about variable galactic mass._ According to these specifications, in [42] the periodic motion of a star under particular conditions is investigated under a generalized mass-variation law, and within the Poincare small-parameter method.
The generalized law of the galaxy mass variation considered is
\[M(t)=M_{0}M^{n}(t), \tag{58}\]
which is also named the analogue of the Eddington-Jeans law after [43] and [44].
The Jaffe potential [35] is a potential which was derived in a way such that the gravitational potential and the projected velocity dispersion are easily outlined.
The Hernquist potential [36] as well enjoys the property that the analytical expression of the velocity dispersion in terms of elementary functions holds.
The pulsed Harmonic potential in the case of a classical particle was studied in [37].
The Plummer potential and the Hernquist potential were used to study the mass variation of galactic environments [38]. In [39], the harmonic-oscillator potential with mass as a time-dependent function included both in the kinetic term and in the potential term is presented.
The Jaffe potential and the Hernquist potential can be considered as particular cases of the Dehnen potential.
_More about perturbed potentials._ The isotropic-harmonic-oscillator potential perturbed by a polynomial term was studied in [45] and the references therein.
_The Lewis Hamiltonian._
In [46] and [47], the Hamiltonian was considered
\[H=\frac{1}{2\eta}\left[\frac{p^{2}}{2m}+\Omega^{2}(t)q^{2}\right] \tag{59}\]
for which \(\Omega(t)\) is assumed to be an arbitrary continuous function of time, being \(\eta\) a positive, real parameter; the existence of the Hamiltonian \(H\) Eq. (59) is studied.
In [48], Eq. (59) is further investigated. The work of [15] was generalised to a planar galaxy model in [49], for which a 10-dim phase space is found.
### Possible generalizations: homogeneous power-law potentials
Homogeneous power-law time-dependent potentials, which evidently contain the time-dependent harmonic oscillator as a simplified version, are analysed in [50]; in particular, the WKB methods are exposed.
The mechanisms underlying the variation of mass of stellar structures, such as revised in [51], can be disregarded in a first analysis.
## VII Estimations of the constants of motions
One poses \(\rho\equiv\rho(t)\) as
\[\rho\equiv\rho_{0}+\epsilon\rho_{1}+\epsilon^{2}\rho_{2}+\epsilon^{3}\rho_{3} +... \tag{60}\]
with \(\epsilon\ll 1\).
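The order-by-order content of the expansion Eq. (60) can be generated symbolically. The sketch below (our addition) inserts the truncated expansion into the auxiliary equation Eq. (41) and collects powers of \(\epsilon\): at order zero one recovers Eq. (41) for \(\rho_{0}\), while each higher order yields a linear equation for \(\rho_{n}\) in terms of the lower-order solutions:

```python
import sympy as sp

# Truncation of Eq. (60) inserted into the auxiliary equation Eq. (41):
# rho'' + Omega(t)**2 * rho - rho**(-3) = 0, expanded in powers of epsilon.
t, eps = sp.symbols('t epsilon')
Omega = sp.Function('Omega')(t)
r = [sp.Function(f'rho_{n}')(t) for n in range(3)]
rho = sum(eps**n * rn for n, rn in enumerate(r))

residual = rho.diff(t, 2) + Omega**2 * rho - rho**(-3)
expanded = sp.expand(sp.series(residual, eps, 0, 3).removeO())
for n in range(3):
    print(f'O(eps^{n}):', sp.simplify(expanded.coeff(eps, n)), '= 0')
```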
_Further approaches._ A method to find an exact invariant for the time-dependent harmonic oscillator is discussed in [52].
The generalised time-dependent Hamiltonian is studied in [53].
In [54], new transformations are considered.
In [55], a class of non-linear, time-dependent potentials is analysed.
In [56], the Lewis invariant is studied as projection of an auxiliary two-dimensional motion; the analysis is thus useful for the analysis of the phase space, as recalled throughout the paper.
In [57], the physical characterization of a cluster of particles with collision processes is considered; the analysis will be of interest in the discussion of [40].
In [58], the invariant considered is as an integral of an energy-balance equation.
## VIII Conservation laws and integrals of motion
When the expansion of \(\rho(t)\) as Eq. (60) is considered at all the orders, the infinite series of conservation laws [59] is found as
\[\frac{d}{dt}\dot{\rho}_{2n}^{2}=(-1)^{-1+n/2}\dot{\rho}_{2n}^{2}f_{2n}(\rho_{2 n},\rho_{2n-2},...,\rho_{2},\rho_{0}) \tag{61}\]
which defines an infinite series of integrals of motion \(\rho_{2n}\).
Indeed, Eq. (61) is demonstrated not to be apt to be written as an Abel equation, nor as an Emden-Fowler-like equation.
## IX Cosmological implementation: conserved integrals at the time of non-Gaussianities
It is the aim of the present Section to provide a cosmological implementation of the analytical results here obtained. As a preliminary remark, one should notice that, on the one hand, the fit analyses of the experimental evidence at present times do not agree with the specification of the virialised radius mathematically expected; on the other hand,
the presence of elements modifying the specifications of the CBE assumptions is very strong, as further specified in Section X.
It is therefore opportune to state the exact conservation laws of the integrals of motion and to test them at the very beginning of the objects' formation, during which (integration) lapse of time the further effects can be neglected; the obtained results are therefore to be compared with the then-present further cosmological ingredients.
For these purposes, it is our goal to study the solutions of the systems implemented in Section VIII at the very beginning of the epoch of non-Gaussianities, during a small time interval when even \(\tilde{\rho}_{2}\) can be considered as very slowly varying, and during which all the other modifications to the CBE hypothesis are considered as not effective.
As a result, it is thus possible to understand the mechanisms that lead the objects' formation after the exact integrals of motion obtained after the conservation laws are established, i.e. the effects modifying the CBE hypothesis, for which, as an example, the estimations of the virialised radii of objects do not fulfill the fitting algorithms. For the latter reason, the component \(F(r)\) of the potentials can be considered as slowly varying, i.e. during the considered time interval the virialised radius \(r\) is considered as the mathematical virialised one \(r_{v}\).
During this very small time interval \(\Delta t\equiv t_{f}-t_{i}\), the conserved component \(\rho_{0}\) is evaluated after a series expansion in the time variable as
\[\int_{t_{i}}^{t_{f}}\int_{t_{i}}^{t}\tilde{\rho}_{0}(\tau)\,d\tau\,dt\simeq A_{0}\frac{t_{f}^{2}}{2}-A_{0}\frac{t_{i}^{2}}{2}-A_{0}t_{i}t_{f}+A_{0}t_{i}^{2}+A_{1}\frac{t_{f}^{3}}{6}-A_{1}\frac{t_{i}^{3}}{6}-A_{1}\frac{t_{i}^{2}t_{f}}{2}+A_{1}\frac{t_{i}^{3}}{2}+A_{2}\frac{t_{f}^{4}}{12}-A_{2}\frac{t_{i}^{4}}{12}-A_{2}\frac{t_{i}^{3}t_{f}}{3}+A_{2}\frac{t_{i}^{4}}{3} \tag{62}\]
with the constants
\[A_{0} \equiv\frac{4\tilde{\rho}_{2}}{F(r_{v})}\frac{arctanh\sqrt{\frac{m_{0}}{m_{0}^{2}-1}}}{\omega\sqrt{m_{0}^{2}-1}}, \tag{63a}\] \[A_{1} \equiv\frac{2\tilde{\rho}_{2}}{F(r_{v})}\frac{1}{(m_{0}^{2}-1)(1-\frac{1}{m_{0}^{2}-1})},\] (63b) \[A_{2} \equiv-\frac{\tilde{\rho}_{2}}{F(r_{v})}\frac{\omega m_{0}}{(m_{0}^{2}-1)(1-\frac{m_{0}^{2}}{m_{0}^{2}-1})}. \tag{63c}\]
The dependence of the constant of motion \(\rho_{0}\) on the frequency \(\omega\), on the mass \(m_{0}\) and on the radius of the simplified pulsed potentials is therefore delineated. Moreover, the constant of motion is expressed as a function of the virialised radius. As a result, the mass \(m_{0}\) entering the simplified definitions of the pulsed potentials is here demonstrated to be the virialised mass.
### The constants of motions as functions of the virialised radius and of the virialised mass
According to the cosmological implementation achieved at the age of the Universe of non-Gaussianities, it is possible to analyse the dynamics of stellar systems as due to different contributions: on the one hand, the constants of motions are demonstrated to be functions of the virialised radius and of the virialised mass within the framework of the CBE choice; on the other hand, the modifications with respect to the CBE dynamics are due to the different phenomena characterising the evolution of the stellar system.
## X Modifications of the CBE hypothesis
The scheme of the linear approximation of the perturbation to the CBE after a point mass is studied in [18]. The further generalisations of the simplified scheme lead to the analyses of [40].
### Adiabatic invariants
The origins of the adiabatic invariant in classical mechanics are traced in [60].
Some indirect relations between the change in action of the harmonic oscillator and slowly-varying perturbations are analysed in [61].
In [62], a broad overview about the use of adiabatic invariants is presented.
In [63], the adiabatic invariant is used to calculate the eccentricity of the orbit of a globular cluster in a particular self-gravitating system.
As a toy-model example, in [64], the one-dimensional harmonic oscillator with a slowly-varying frequency is considered; as an example, a one-particle system in a slowly varying isochrone potential is described. The analysis of adiabatic invariants is demonstrated not to produce secular errors in the computation after symplectic integrators.
### About 'irregular' forces
The Kuzmin integral is hypothesized to be a function depending on the velocities only [65], [66]. The triaxial velocities distribution is studied in [67]. Irregular forces in the secular evolution of a stellar system are studied in [68]. The radial velocities dispersions for comparison with the Kuzmin formulation is studied in [69].
More in detail, in [65], stationary galaxies are analysed as far as the definition of integrals is concerned, and the phase space is delineated for these systems. The triaxial distribution of velocities of objects of spherical systems and that of intermediate ones is investigated.
After the Jeans theorem, the first integral of motion \(\tilde{\Psi}\) ('phase density') is classified as a function of six independent first integrals of motion \(\tilde{I}_{i}\) as a function of the phase-space variables as \(\tilde{\Psi}=\tilde{\Psi}(\tilde{I}_{1},\tilde{I}_{2},...,\tilde{I}_{6})\). Each \(\tilde{I}_{i}=const\) fixes a 5-dimensional hypersurface, which is moving within the phase space. Accordingly, the first integral \(\tilde{\Psi}\) is constant in phase-space points moving with stars.
The density \(\tilde{\rho}\) is determined after the Poisson equation, where the latter poses constraints not only on the phase density \(\tilde{\Psi}\), which cannot therefore be assumed as arbitrary, but also on the density \(\tilde{\rho}\), which cannot be assumed to be negative. The boundary conditions have to be satisfied at infinity, for which the standard cosmological principle has to be applied.
The Jeans theorem, furthermore, fixes the potential and the phase density as independent of time. When stationary potentials are considered, orbits are classified according to the number of integrals \(\tilde{J}\) needed for the description in the phase space, as \(\tilde{\Psi}=\tilde{\Psi}(\tilde{J}_{1},\tilde{J}_{2},...,\tilde{J}_{n})\), with \(n\leq 5\).
A method similar to that of [65] is later found in [70]; in particular, in the case of a plane galactic disk, after three isolating integrals of motion, the three invariants of motion are formulated as combinations of the 'elements of a Keplerian ellipse'.
In the case of Kuzmin-like potential, some integrals of motions are investigated in [71].
In [67], flattened stellar systems are outlined to be a complicated model in the case where the phase density is provided as a function of three integrals of motion, which correspond to the triaxial local-velocity-distribution ellipsoid.
## XI Outlook
In [76], the evolution of disk galaxies is investigated; the changes in the angular momentum are studied as due to the interactions between stars or gas.
In [6], dark matter is presented to model the damping.
In [77], the analysis of the evolution of galaxies as far as the stationary oscillations are concerned is performed according to the analysis hinted in [24].
In [78], a further integral of motion is found, whose convergence is questioned.
Isolating integrals of motion and non-isolating integrals of motion are discussed in [79] and [75]; in particular, in [75], a distribution function is proposed to be built from isolating integrals only. For these purposes, the orbital phase angle is studied in [72] and [73].
Non-analytical descriptions descending from the Kuzmin potential are presented in [74].
In [68], the very specific case of gravitational interactions within stars in clusters is studied. A host of detailed results is found. Among the findings, it is interesting to remark that the variation of the phase density was derived. The velocity dispersion and the mean dispersion are given.
## XII Perspectives
In [75], the Jeans theorem is applied to axially-symmetric stellar systems, and specified to self-gravitating systems. The space density is hypothesized to be a function of the gravitational potential and of the radial coordinate in cylindrical coordinates, from which an equation for the mass of the stellar objects per chosen density is obtained; from the solution, the velocity dispersion and the mean velocity are found, while the rotational velocity is shown to have to be determined after further assumptions. Indeed, the virial theorem is applied locally, and the condition that the stars should not be streaming but only rotating (where the latter is called a 'relaxation condition') is applied. A distribution function which allows for the description of an observed density variation is found.
In [79], a spherical cluster of mass points is illustrated to be able to rotate without becoming oblate. The example of the Sun in the Galaxy is issued.
The numerical analysis of the central parts of galaxies has been shown to exhibit noisy oscillations [80].
## XIII Remarks
The analyses of the observational evidence as far as the fitting algorithms are concerned are based on the comparison between the values of the virialised radii and those of the virialised mass of stellar systems; more in detail, there exist severe discrepancies among the values needed for the fitting algorithms and those estimated from a mathematical point of view.
The definition of generalised Hamiltonian can be explored within the search of generalised potential, which can comprehend the (generalised) time-dependent oscillator. The resulting Hamiltonian systems can be shown to admit constants of motion; in particular, the constants of motions can be obtained after the conservation laws of the Hamiltonian system.
The main results of the present work are the new finding of new conservation laws of new constants of motion after the Lewis invariant; the framing of the construction within its proper cosmological implementation allows one to discover that the quantities entering the definitions of the constants of motion are the virialised radius and the virialised mass of the stellar system. In particular, within the formalism here developed it is also possible to relate the quantities involved in the pulsed potentials to the mathematical ones.
The cosmological implementation here presented is one related to the age of non-Gaussianities; the formalism also applies straightforwardly to different ages of the Universe, at which the considered stellar systems are estimated to be formed.
The modification of the behavior of the stellar system with respect to the new constants of motions is found to be ascribed to the phenomena modifying the CBE description.
The paper is organised as follows.
In Section I, the main results relating constants of motion, the virial radius and the virial mass of stellar systems are scrutinised.
In Section II, the application of the virial theorem which defines the virialised radius (of a stellar system) is summarised. In Section III, the physical implementation of the requests that lead to a collisionless Boltzmann distribution function are recapitulated.
In Section IV, oscillations of stellar systems are revisited.
In Section V, generalised Hamiltonians are introduced.
In Section VI, in particular, the time-dependent oscillator potentials are enumerated.
In Section VII, new estimations of the constants of motions after the Lewis-Ermakov-Leach invariant are found.
In Section VIII, the new conservation laws of the new constants of motions are stated. As results, the constants of motions are demonstrated to depend on the virialised mass and on the virialised radius of the stellar system. The new methods can be demonstrated to apply to the cases of the Ermakov-Lewis invariant and to that of the Ermakov-Lewis adiabatic invariant.
In Section IX, the new constants of motion from the new conservation laws are implemented from a cosmological point of view.
In Section X, the modifications of the CBE hypotheses are debated. In Section XIII, the new results are framed within the modern research guidelines; in particular, a comparison with the data-analysis techniques is implemented.
|
2307.13260 | Erosion of synchronization and its prevention among noisy oscillators
with simplicial interactions | Previous studies of oscillator populations with two-simplex interaction
report novel phenomena such as discontinuous desynchronization transitions and
multistability of synchronized states. However, the noise effect has not been
well understood. Here, we find that when oscillators with two-simplex
interaction alone are subjected to external noise, synchrony is eroded and
eventually completely disappears even when the noise is infinitesimally weak.
Nonetheless, synchronized states may persist for extended periods, with the
lifetime increasing approximately exponentially with the strength of the
two-simplex interaction. Assuming weak noise and using Kramers' rate theory, we
derive a closed dynamical equation for the Kuramoto order parameter, by which
the exponential dependence is derived. Further, when sufficiently strong
one-simplex coupling is additionally introduced, noise erosion is prevented and
synchronized states become persistent. The bifurcation analysis of the
desynchronized state reveals that as one-simplex coupling increases, the
synchronized state appears supercritically or subcritically depending on the
strength of two-simplex coupling. Our study uncovers the processes of
synchronization and desynchronization of oscillator assemblies in higher-order
networks and is expected to provide insight into the design and control
principles in such systems. | Yuichiro Marui, Hiroshi Kori | 2023-07-25T05:08:44Z | http://arxiv.org/abs/2307.13260v1 | # Erosion of synchronization and its prevention among noisy oscillators with simplicial interactions
###### Abstract
Previous studies of oscillator populations with two-simplex interaction report novel phenomena such as discontinuous desynchronization transitions and multistability of synchronized states. However, the noise effect has not been well understood. Here, we find that when oscillators with two-simplex interaction alone are subjected to external noise, synchrony is eroded and eventually completely disappears even when the noise is infinitesimally weak. Nonetheless, synchronized states may persist for extended periods, with the lifetime increasing approximately exponentially with the strength of the two-simplex interaction. Assuming weak noise and using Kramers' rate theory, we derive a closed dynamical equation for the Kuramoto order parameter, by which the exponential dependence is derived. Further, when sufficiently strong one-simplex coupling is additionally introduced, noise erosion is prevented and synchronized states become persistent. The bifurcation analysis of the desynchronized state reveals that as one-simplex coupling increases, the synchronized state appears supercritically or subcritically depending on the strength of two-simplex coupling. Our study uncovers the processes of synchronization and desynchronization of oscillator assemblies in higher-order networks and is expected to provide insight into the design and control principles in such systems.
_Introduction._ Synchronization is a major subject of study not only in physics but also in chemistry, biology, engineering, and sociology [1; 2; 3; 4; 5]. Examples include pacemaker cells in the heart [6], laser arrays [7], applauding audiences [8; 9], power grids consisting of AC (alternating current) generators [10], and Josephson junctions [11; 12]. In addition to synchronization, understanding the desynchronization process in oscillator assemblies is likewise very important. For example, whereas synchronization of circadian pacemaker cells in the brain is essential for mammals to have 24-hour activity rhythm, their transient desynchronization, triggered by a phase shift of light-dark cycles, is a putative cause of jet lag symptoms [13; 14]. Desynchronization is also important in neurological disorders such as Parkinson's disease [15] and epilepsy [16], and methods to promote desynchronization in this context have been actively studied both theoretically and experimentally [17].
Phase oscillator models, including Kuramoto's model [18], are widely known for their usefulness, not only in understanding synchronization [2; 19], but also in controlling real-world systems [20; 21]. Whereas the classical Kuramoto model considers pairwise interactions between oscillators, recent studies have extended the model to allow for non-pairwise interactions [22; 23; 24; 25]. Such structures are often called simplexes, where \(n\)-simplex describes an interaction between \(n+1\) oscillators [26]. Research on brain dynamics [27; 28] or social phenomena [29] suggested that simplicial structures play an important role in such systems.
In noiseless phase oscillators with two-simplex interactions, Tanaka and Aoyagi [22] noted that multiple stable synchronized states appear as two clusters with different population ratios. Moreover, Skardal and Arenas showed that abrupt desynchronization transitions occur as the interaction strength decreases [23]. Skardal and Arenas also found that two- and three-simplicial couplings promote abrupt synchronization transitions in the presence of one-simplex interactions [24]. In contrast, in noisy phase oscillators, Komarov and Pikovsky pointed out that no stable synchronized states exist in the limit of an infinite number of oscillators. However, their focus was on the synchronized states that exist only in small populations. Desynchronization is expected in a large population, and as mentioned above, understanding its process is essential.
In the present study, we consider a large population of noisy phase oscillators with two-simplex coupling. Although steady synchronized states do not exist in this system, we demonstrate that the population is transiently synchronized for an extended period and then abruptly desynchronized when two-simplex coupling is sufficiently strong compared to the noise strength. Assuming weak noise and exploiting Kramers' rate theory, we derive a closed dynamical equation for the Kuramoto order parameter by which the desynchronization process is reproduced and the exponential dependence of the lifetime of the synchronized states on the coupling strength is derived. We also consider a system in which both one- and two-simplex coupling exist and show that synchronized states become persistent when one-simplex coupling is sufficiently strong. Our bifurcation analysis reveals that the desynchronization-synchronization transition changes from continuous to discontinuous at a critical strength of two-simplex coupling.
_Model and results._ We consider a system of identical phase oscillators subjected to independent noise and globally coupled with one-simplex (i.e., two-body) and two-simplex (i.e., three-body) interactions, given as
\[\dot{\theta}_{m}=\omega_{m}+\frac{K_{1}}{N}\sum_{j=1}^{N}\sin(\theta_{j}-\theta_ {m})+\frac{K_{2}}{N^{2}}\sum_{j,k=1}^{N}\sin(\theta_{j}+\theta_{k}-2\theta_{m}) +\xi_{m}(t), \tag{1}\]
where \(\omega_{m}\) and \(\theta_{m}\) are the intrinsic frequency and the phase of oscillator \(m\) (\(1\leq m\leq N\)), respectively; \(K_{1}\geq 0\) and \(K_{2}\geq 0\) are the coupling strengths of one-simplex and two-simplex interactions, respectively. The term \(\xi_{m}(t)\) represents Gaussian white noise with zero mean, \(\delta\)-correlated in time and independent for different oscillators. Specifically, \(\langle\xi_{m}(t)\rangle=0,\ \langle\xi_{m}(t)\xi_{n}(\tau)\rangle=2D\delta_{mn}\delta(t-\tau)\), where \(D\geq 0\) is the noise strength. As for the intrinsic frequencies, we consider two cases: (i) \(\omega_{m}=\omega_{0}\) for all \(m\), and (ii) \(\omega_{m}\) is drawn from the Lorentzian distribution. That is, \(\omega_{m}\sim g(\omega)\), where \(g(\omega)=\frac{\gamma}{\pi[(\omega-\omega_{0})^{2}+\gamma^{2}]}\), \(\omega_{0}\) is the mean frequency, and \(\gamma\) is the width of the Lorentzian distribution. We may set arbitrary \(\omega_{0}\) and \(\gamma\) values without loss of generality because the model is invariant under the transformations \(\theta_{m}\rightarrow\theta_{m}-\omega_{0}t\), \(t\to c\gamma t\), \(K_{i}\rightarrow\frac{K_{i}}{c\gamma}\), and \(D\rightarrow\frac{D}{c\gamma}\), with \(c\) being an arbitrary constant. Hereafter, we set \(\omega_{0}=0\) and \(\gamma=0.01\). Next, we introduce
\[Z(t)=R(t)e^{i\Theta(t)}=:\frac{1}{N}\sum_{j=1}^{N}e^{i\theta_{j}(t)}, \tag{2}\]
where \(R\) and \(\Theta\) are the Kuramoto order parameter and the mean phase, respectively. Note that \(R\) assumes \(1\) and \(0\) for the in-phase state (i.e., \(\theta_{j}=\theta_{0}\) for all \(j\)) and the fully desynchronized state (i.e., \(\theta_{j}\) is uniformly distributed within \([0,2\pi)\)), respectively. Using \(R\) and \(\Theta\), Eq. (1) may be rewritten as
\[\dot{\theta}_{m}=\omega_{m}+K_{1}R\sin(\Theta-\theta_{m})+K_{2}R^{2}\sin 2( \Theta-\theta_{m})+\xi_{m}(t). \tag{3}\]
We can see that the three-body interaction tends to pull \(\theta_{m}\) toward either \(\Theta\) or \(\Theta+\pi\). Therefore, one may expect that the three-body interaction promotes the formation of two-cluster states. This is actually the case for \(K_{1}=D=0\)[23]. Motivated by this observation, we numerically investigate the dynamics for the initial condition of two-cluster states. Specifically, we set \(\theta_{m}(0)=0\) for \(1\leq m\leq\eta N\) and \(\theta_{m}(0)=\pi\) otherwise, where \(\frac{1}{2}\leq\eta\leq 1\) is the initial population ratio of the majority cluster. Note that \(\eta=1\) corresponds to the one-cluster state, i.e., in-phase synchrony. In Fig. 1(a), we illustrate the time evolution of the order parameter \(R\) for \(\omega_{m}=\omega_{0}(=0)\). For \(K_{1}=0\) and \(D=0\) (black solid line), we observe that \(R\) is almost constant, indicating that the two-cluster state is stable. However, in the presence of noise, we observe qualitatively different behaviors. For \(K_{1}=0\) and \(D=0.1\) (purple solid line), \(R\) first decreases slowly and then abruptly vanishes. Thus, in the presence of noise, the two-cluster state is actually not stable but metastable with a long lifetime. Instead, the fully desynchronized state seems to be stable. The phase distributions of this process are shown in Fig. 1(c). In contrast, for \(K_{1}=0.3\) and \(D=0.1\) (blue solid line), \(R\) seems to approach a particular nonvanishing value. For this parameter set, we also test the initial condition of the fully desynchronized state (dashed line) and observe the evolution of the phase distributions [Fig. 1(d)], finding that the system approaches a particular two-cluster state independently of the initial condition. Figure 1(b) displays results for \(\omega_{m}\sim g(\omega)\), which indicates that most results are quantitatively almost unchanged, except that the lifetime of the two-cluster states for \(K_{1}=0.3,D=0.1\) becomes shorter. Therefore, we expect that the essential properties of the system are shared by the cases \(\omega_{m}=\omega_{0}\) and \(\omega_{m}\sim g(\omega)\), and henceforth we focus on the simpler case \(\omega_{m}=\omega_{0}\) for ease of analysis.
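To make the simulation protocol concrete, the following minimal sketch (our own illustrative code, not the authors'; the value \(K_{2}=2.0\) and the other parameters are merely examples) integrates the mean-field form Eq. (3), which is an exact rewriting of Eq. (1), by the Euler-Maruyama method:

```python
import numpy as np

def simulate(N=5000, K1=0.0, K2=2.0, D=0.1, eta=0.75,
             dt=0.01, t_max=200.0, seed=0):
    """Euler-Maruyama integration of Eq. (3); returns t and R(t)."""
    rng = np.random.default_rng(seed)
    # Two-cluster initial condition: fraction eta at phase 0, the rest at pi.
    theta = np.where(np.arange(N) < eta * N, 0.0, np.pi)
    steps = int(t_max / dt)
    R_hist = np.empty(steps)
    for s in range(steps):
        Z = np.mean(np.exp(1j * theta))            # order parameter, Eq. (2)
        R, Theta = np.abs(Z), np.angle(Z)
        drift = (K1 * R * np.sin(Theta - theta)
                 + K2 * R**2 * np.sin(2.0 * (Theta - theta)))  # Eq. (3), omega_0 = 0
        theta = theta + drift * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(N)
        R_hist[s] = R
    return dt * np.arange(steps), R_hist

t, R = simulate()
print(f"R(0) = {R[0]:.3f} (= 2*eta - 1), R(t_max) = {R[-1]:.3f}")
```

For \(K_{2}R_{0}^{2}\gg D\), the initial plateau of \(R(t)\) can persist far beyond any affordable integration time, which is why the analytical lifetime estimate derived below is useful.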
We now construct a theory to understand the synchronization and desynchronization processes. For analytical tractability, we consider the continuum limit \(N\rightarrow\infty\). Specifically, the number of oscillators having the phase within \((\theta,\theta+d\theta)\) is described by \(NP(\theta,t)d\theta\), where \(P(\theta,t)\) is the probability density function. The order parameter is redefined as
\[Z(t)=\int_{0}^{2\pi}\exp(i\theta)P(\theta,t)\mathrm{d}\theta. \tag{4}\]
The Fokker-Planck equation equivalent to the Langevin equation (3) is
\[\frac{\partial P}{\partial t}=\frac{\partial}{\partial\theta}\left\{\left[K_{1}R\sin(\theta-\Theta)+K_{2}R^{2}\sin 2(\theta-\Theta)\right]P\right\}+D\frac{\partial^{2}P}{\partial\theta^{2}}. \tag{5}\]
We assume that \(\Theta(t)\) is a constant in the limit \(N\rightarrow\infty\). Thus, without loss of generality, we set \(\Theta=0\). Note that \(Z(t)=R(t)\) for \(\Theta=0\). The stationary distribution \(P=P_{\mathrm{s}}(\theta)\) for a constant \(R=R_{s}\) can be obtained by setting
\(\partial_{t}P=0\), yielding
\[P_{\rm s}(\theta)=c_{1}\exp\left(\frac{2K_{1}R_{s}\cos\theta+K_{2}R_{s}^{2}\cos 2 \theta}{2D}\right), \tag{6}\]
where \(c_{1}\) is a normalizing constant [see SI for details]. Plugging this into Eq. (4), we obtain the self-consistency equation for \(R_{s}\), given as
\[R_{s}=\int_{0}^{2\pi}P_{\rm s}(\theta)\cos\theta\ {\rm d}\theta. \tag{7}\]
We observe that the right-hand side of Eq. (7) vanishes for \(K_{1}=0\) because \(P_{\rm s}(\theta)\) is \(\pi\)-periodic. This implies that for \(K_{1}=0\), there is no stationary distribution except the one corresponding to \(R_{s}=0\), which is the uniform distribution \(P_{\rm s}(\theta)=\frac{1}{2\pi}\)[30]. Nontrivial steady distributions may only arise when \(K_{1}>0\).
We investigate the bifurcation of the system by numerically solving Eq. (7) for \(R_{\rm s}\). In Fig. 2(a), we plot typical behavior of Eq. (7), indicating that the steady-state value of \(R_{s}\) bifurcates as \(K_{1}\) and \(K_{2}\) vary. In Fig. 2(b), the phase diagram in the \((K_{1},K_{2})\) plane is displayed, suggesting that the bifurcation occurs at \(K_{1}=K_{\rm c}=0.2\) for all \(K_{2}\). In Figs. 2(c) and (d), we plot \(R_{s}\) as a function of \(K_{1}\) for \(K_{2}=0.15\) and \(K_{2}=3.0\), respectively. We find that supercritical and subcritical pitchfork bifurcations occur in Figs. 2(c) and (d), respectively. Moreover, the bifurcation type changes from supercritical to subcritical at \(K_{2}\simeq 0.2\), yielding a bistable region in which both the desynchronized and synchronized states are stable for \(K_{2}>0.2\). Note that \(0.2\) is equal to \(2D\).
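Bifurcation diagrams of this kind are easy to reproduce; the minimal sketch below (our own, with \(D=0.1\) as in Fig. 2) evaluates the right-hand side of Eq. (7), using \(P_{\rm s}\) from Eq. (6), on a grid of trial \(R\) values and locates the self-consistent roots by sign changes:

```python
import numpy as np

def F(R, K1, K2, D=0.1, n=4001):
    """Right-hand side of Eq. (7), with P_s(theta) from Eq. (6)."""
    th = np.linspace(0.0, 2.0 * np.pi, n)
    w = np.exp((2.0 * K1 * R * np.cos(th) + K2 * R**2 * np.cos(2.0 * th)) / (2.0 * D))
    Ps = w / np.trapz(w, th)                  # normalization fixes c_1
    return np.trapz(Ps * np.cos(th), th)

def fixed_points(K1, K2, D=0.1):
    """Roots of F(R) - R = 0 on (0, 1]; R = 0 always solves Eq. (7)."""
    R_grid = np.linspace(1e-3, 1.0, 400)
    g = np.array([F(R, K1, K2, D) - R for R in R_grid])
    return R_grid[np.nonzero(np.diff(np.sign(g)))[0]]

for K1 in (0.1, 0.3):
    print(f"K1 = {K1}: R_s near {fixed_points(K1, K2=3.0)}")  # cf. Fig. 2(d)
```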
To elucidate the bifurcation structure, we perform a weakly nonlinear analysis using a standard method [3; 18]. We denote the bifurcation point of \(R=0\) by \(K_{1}=K_{\rm c}\) and set \(K_{1}=K_{\rm c}(1+\mu)\), where \(\mu\) is the bifurcation parameter. The complex order parameter is expanded as
\[Z=\varepsilon Z_{1}+\varepsilon^{2}Z_{2}+\cdots, \tag{8}\]
where \(\varepsilon=\sqrt{|\mu|}\). As detailed in SI, we derive
\[\frac{1}{\varepsilon^{2}}\dot{Z}_{1}=\frac{K_{1}-K_{\rm c}}{2}\ Z_{1}-g|Z_{1} |^{2}Z_{1}, \tag{9}\]
where
\[K_{\rm c} = 2D, \tag{10}\] \[g = \frac{K_{\rm c}^{2}+K_{\rm c}K_{2}}{8D}-\frac{K_{2}}{2}. \tag{11}\]
Figure 1: (a, b) Time series of the order parameter \(R\) for (a) the delta and (b) the Lorentzian frequency distributions. As the initial condition, we employ the two-cluster state with \(\eta=0.75\) for solid curves and the desynchronized state for dashed curves. (c, d) Time courses of the phase distribution \(P(\theta,t)\). (c) \(K_{1}=0\) and the initial condition is the two-cluster state with \(\eta=0.75\). (d) \(K_{1}=0.3\) and the initial condition is the desynchronized state.
Note that \(g\) is real in this particular system. The sign of \(g\) determines the bifurcation type: supercritical and subcritical bifurcations occur for \(g>0\) (or \(K_{2}<K_{\rm c}\)) and \(g<0\) (or \(K_{2}>K_{\rm c}\)), respectively. This theoretical analysis is in perfect agreement with the numerical analysis of Eq. (7) shown in Fig. 2.
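Explicitly, substituting \(K_{\rm c}=2D\) from Eq. (10) into Eq. (11) reduces the criterion to one line:
\[g=\frac{4D^{2}+2DK_{2}}{8D}-\frac{K_{2}}{2}=\frac{D}{2}-\frac{K_{2}}{4}=\frac{K_{\rm c}-K_{2}}{4},\]
so \(g\) changes sign exactly at \(K_{2}=K_{\rm c}\).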
Next, we investigate the slow desynchronization process that occurs for \(K_{1}=0\) [see Fig. 1(c)]. We will demonstrate that, under some assumptions, an approximate dynamical equation for \(R\) may be obtained in closed form, by which we may determine the lifetime of the synchronized state. To this end, we first rewrite the system as a gradient system:
\[\dot{\theta}_{m}=-\frac{\partial}{\partial\theta}\,U(\theta,R(t))+\xi_{m}, \tag{12}\]
where
\[U(\theta,R)=-\frac{1}{2}\,K_{2}R^{2}\cos 2\theta \tag{13}\]
is the potential. Clearly, for \(R>0\) this potential has minima at \(\theta=0\) and \(\pi\), which have the same depth and are separated by the potential barrier \(\Delta U\) given by
\[\Delta U=K_{2}R^{2}. \tag{15}\]
We assume that the noise is sufficiently weak compared to \(\Delta U\), i.e., \(D\ll\Delta U\). We also assume that \(R\) evolves sufficiently slowly. Then, we can expect that the phase distribution is well approximated by
\[P(\theta,t)=\eta(t)\delta(\theta)+(1-\eta(t))\delta(\theta-\pi). \tag{16}\]
Figure 2: (a) Typical behavior of the self-consistency equation (7) for \(K_{2}=3.0\) with varying \(K_{1}\). (b) Phase diagram in the \((K_{1},K_{2})\) plane. The color scale describes the \(R\) value of the stable synchronized state. The vertical line at \(K_{1}=2D=0.2\) denotes the critical coupling strength above which the desynchronized state is unstable. The solid and dashed lines denote, respectively, the supercritical and subcritical bifurcation curves. The bistable area is denoted by the hatched colored region, in which both synchronized and desynchronized states are stable. (c, d) Bifurcation diagrams for (c) \(K_{2}=0.15\) and (d) \(K_{2}=3.0\). We fix \(D=0.1\) in this figure.
Under our assumptions, each oscillator experiences a virtually fixed potential and jumps from one well to the other at a certain rate \(k\), i.e.,
\[\dot{\eta}=-k\eta+k(1-\eta)=k(1-2\eta). \tag{17}\]
According to Kramers' rate theory [31; 32], we may calculate \(k\) based on \(U\) as follows:
\[k(R) = 2\frac{\sqrt{|\partial_{\theta}^{2}U(\theta_{\min},R)\partial_{ \theta}^{2}U(\theta_{\max},R)|}}{2\pi}\exp\left(-\frac{\Delta U}{D}\right) \tag{18}\] \[= \frac{2K_{2}R^{2}}{\pi}\exp\left(-\frac{K_{2}R^{2}}{D}\right), \tag{19}\]
where the rate is doubled compared to the standard formula [32] because there are two paths for oscillators to move from one well to another. We assume \(\eta\geq\frac{1}{2}\) without loss of generality. Then, under the assumption of (16), we obtain \(R=2\eta-1\). Substituting (17) into \(\dot{R}=2\dot{\eta}\), we obtain the closed equation for \(R(t)\) as
\[\dot{R}(t)=-\frac{4K_{2}R^{3}}{\pi}\mathrm{exp}\left(-\frac{K_{2}R^{2}}{D} \right). \tag{20}\]
Because \(\dot{R}<0\) for \(R>0\) and \(\dot{R}=0\) for \(R=0\), \(R=0\) is the global attractor. However, because the term \(\exp\left(-\frac{K_{2}R^{2}}{D}\right)\) is vanishingly small for \(R\gg\sqrt{\frac{D}{K_{2}}}\equiv R_{1}\), the relaxation to \(R=0\) is extremely slow if \(R(0)\equiv R_{0}\gg R_{1}\). We define the lifetime \(\tau\) of the synchronized state as the time within which \(R\) varies from \(R_{0}\) to \(R_{1}\). Because \(\tau=\int_{0}^{\tau}dt=\int_{R_{0}}^{R_{1}}\frac{dt}{dR}dR\), we obtain
\[\tau=\int_{R_{1}}^{R_{0}}\frac{\pi\exp\left(\frac{K_{2}R^{2}}{D}\right)}{4K_{ 2}R^{3}}\,\mathrm{d}R. \tag{21}\]
This integral can only be computed numerically. Because the evolution of \(R(t)\) is very slow until \(R\) approaches \(R_{1}\), a rough estimate of \(\tau\) may be obtained by evaluating the integrand of Eq. (21) at \(R=R_{0}\), giving rise to
\[\tau\sim e^{\frac{K_{2}R_{0}^{2}}{D}}, \tag{22}\]
where the coefficient including the factor \(\frac{R_{0}-R_{1}}{R_{1}^{2}}\) is omitted. This estimate indicates that \(\tau\) increases approximately exponentially with \(K_{2}\). To verify our theory, in Fig. 3, we compare our theoretical estimates, given by Eqs. (21) and (22), to the lifetime obtained from direct simulations of Eq. (1). The simulation setup is the same as that illustrated in Fig. 1, and the lifetime is given as the time at which \(R\) passes \(R_{1}\) for the first time. We observe that Eq. (21) is in reasonable agreement with the simulation data. We also observe that the lifetime indeed increases approximately exponentially with \(K_{2}\), as predicted by Eq. (22). Note that, as detailed in SI, the closed equation for \(R\) may also be obtained for \(K_{1}>0\), and it approximately reproduces the \(R(t)\) trajectories obtained from Eq. (1).
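The exponential dependence is easy to check numerically. A minimal sketch (our own; assumes SciPy is available; \(R_{0}=0.5\) corresponds to \(\eta=0.75\)) evaluates Eq. (21) by quadrature, cross-checks it against the first-passage time of the deterministic Eq. (20), and prints the bare exponential factor of Eq. (22):

```python
import numpy as np
from scipy.integrate import quad, solve_ivp

D, R0 = 0.1, 0.5                      # R0 = 2*eta - 1 for eta = 0.75

def tau_quad(K2):
    """Lifetime from the quadrature of Eq. (21)."""
    R1 = np.sqrt(D / K2)
    val, _ = quad(lambda R: np.pi * np.exp(K2 * R**2 / D) / (4.0 * K2 * R**3),
                  R1, R0)
    return val

def tau_ode(K2):
    """First-passage time of R1 for the reduced ODE, Eq. (20)."""
    R1 = np.sqrt(D / K2)
    rhs = lambda t, R: -4.0 * K2 * R[0]**3 / np.pi * np.exp(-K2 * R[0]**2 / D)
    hit = lambda t, R: R[0] - R1
    hit.terminal, hit.direction = True, -1
    sol = solve_ivp(rhs, (0.0, 1e12), [R0], events=hit, rtol=1e-8, atol=1e-12)
    return sol.t_events[0][0]

for K2 in (1.0, 2.0, 3.0, 4.0):
    print(f"K2 = {K2}: tau(quad) = {tau_quad(K2):.4g}, tau(ode) = {tau_ode(K2):.4g}, "
          f"exp(K2*R0^2/D) = {np.exp(K2 * R0**2 / D):.3g}")
```

The two lifetimes agree by construction (the ODE is deterministic, so its first-passage time equals the integral), while the last column displays the approximately exponential growth with \(K_{2}\).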
_Conclusion._ In this study, we explored a large population of noisy oscillators with one- and two-simplicial interactions. Specifically, we demonstrated that dynamical noise, no matter how weak, extinguishes steady synchronized states when only the two-simplicial interaction exists. However, synchronized states composed of two clusters persist for extended periods. This feature may find use in future applications. In the context of memory storage, a cluster state can be considered as a state in which the system holds some information. Since the lifetime of a cluster state is determined by the initial ratio of the clusters, it may be possible to design a self-organizing system that can measure time and also pre-set the time for which information is retained.
Finally, we discuss the limitations and future directions of this study. We have considered only the all-to-all coupling case. Although such a network is mathematically tractable, its applicability to real-world systems would be limited. Extension to complex networks is vital. Furthermore, to comprehensively understand the nature of two-simplex interaction, it is essential to explore the other generic coupling type, given as \(\sin(2\theta_{j}-\theta_{k}-\theta_{m})\) instead of \(\sin(\theta_{j}+\theta_{k}-2\theta_{m})\) in Eq. (1) [24; 33]. The noise effect on such a system is an interesting open problem.
# Phase separation in two-dimensional electron systems: Experimental view

V. M. Pudalov
###### Abstract
Key experimental results on unveiling and studying properties of a multiphase state that arises in two-dimensional electron systems due to the interplay of interelectron interactions and disorder are reviewed. We focus on the experimental results obtained with high-mobility Si field-effect structures (Si-MOS), in which the interaction effects at low carrier concentrations are most pronounced due to the strong e-e interactions, the multi-valley spectrum, and the short-range character of the random potential. The reviewed effects of phase separation include features in transport, magnetotransport, and thermodynamics. Consideration of a number of experimental results is supplemented with a brief review of their theoretical interpretation.
###### Contents
* I Introduction
* II Two-dimensional electron systems with strong interactions. Theory overview
* II.1 Negative compressibility and phase separation. Analytical results
* II.2 Taking disorder into account
* II.2.1 Results in the framework of quantum interaction corrections and renormalization group theory
* II.2.2 Numerical results
* II.2.3 Non-linear screening approach. Numerical modelling
* III Compressibility of 2DE systems: Experimental studies
* III.1 Earlier capacitance measurements
* III.2 Field penetration measurements
* III.3 Local compressibility measurements
* IV Phase separation effects revealed in thermodynamics and transport
* IV.1 Evidence of the "spin-droplet" state in thermodynamic spin magnetization
* IV.2 Phase separation effects in charge transport
* IV.2.1 Magnetotransport in the in-plane field
* IV.2.2 Phase separation effects in oscillatory magnetotransport
* IV.2.3 Phase separation effects in zero field transport
* IV.3 Phase separation effect in spin susceptibility
* V Conclusions
* VI Acknowledgements
## I Introduction
Recently, phase separation effects have come to the forefront as objects of intense attention in condensed matter physics. Historically, the first and most remarkable phase separation effects were found in manganites [1]; phase separation then appeared to play an essential role in high-Tc superconductors [2; 3], low-dimensional organic crystals [4], etc. By now it has become clear that phase separation is ubiquitous rather than exotic.
Several comprehensive reviews have been published in recent years [5; 6]; they considered mainly theoretical aspects of the physics of phase separation. The current mini-review partially compensates for this shortcoming and considers several representative manifestations of phase separation in experiments with two-dimensional (2D) systems of interacting electrons. In all considered examples, the driving force behind the phase separation is the competition between disorder and interparticle interactions. It is known that the effects of interaction become stronger as the dimensionality of the system is lowered.
## II Two-dimensional electron systems with strong interactions. Theory overview
Correlation plays a crucial role for electrons with a \(1/r\) pair potential moving in a neutralizing charge background [7]. Its importance grows both with lowering the density and the space dimensionality, and tends to qualitatively change the predictions of simple schemes, such as the Hartree-Fock (HF) or random-phase approximation (RPA) [7]. The interaction strength is commonly characterized by the dimensionless parameter \(r_{s}\), the ratio of the potential interaction energy \(E_{ee}\) to the kinetic Fermi energy \(E_{F}\); for electrons in a (001)-Si MOS structure \(r_{s}=2.63\times(10^{12}/n[\mathrm{cm}^{-2}])^{1/2}\)[8].
In the low-density strongly correlated electron liquid, the energy balance determining the system properties is decided on a very minute scale, and, to obtain meaningful predictions, great accuracy, such as that afforded by quantum Monte Carlo (QMC) methods, is necessary [7].
In two-dimensional systems, the interplay of disorder and electron-electron interactions gives birth to many exciting effects, some of which are considered below. We begin with the negative compressibility of the electron liquid - the effect that paves the way for phase separation.
### Negative compressibility and phase separation. Analytical results
The inverse compressibility (or \(\partial\mu/\partial n\)) of a system reflects how its electrochemical potential changes with carrier density
\[\kappa^{-1}=n^{2}\frac{\partial^{2}E_{tot}}{\partial n^{2}}=n^{2}\left(\frac{ \partial\mu}{\partial n}\right), \tag{1}\]
with \(n\) being the carrier density and \(\mu\) the electrochemical potential. For noninteracting electrons, \(\kappa\) is proportional to the single-particle density of states \(D\), which in 2DE systems is density independent, being \(D_{2}=g_{v}m/(\pi\hbar^{2})\), where \(g_{v}\) is the valley degeneracy (\(g_{v}=2\) for (001)-Si MOS).
This picture, however, changes drastically when interactions are included. It was realized already in the 1980s that compressibility of the 2DES can become negative at low densities, owing to electron-electron interactions [9]. Exchange and correlation effects weaken the repulsion between electrons, thereby reducing the energy cost, thus leading to negative and singular corrections to \(\partial\mu/\partial n\). At zero magnetic field this effect is due primarily to the exchange energy while at high field the correlation energy plays a significant role as well [10].
Within the Hartree-Fock (HF) theory, which includes both the density-of-states and exchange terms, for a clean system with no disorder one gets:
\[\frac{\partial\mu}{\partial n}=\frac{\pi\hbar^{2}}{m}-\left(\frac{2}{\pi} \right)^{1/2}\frac{e^{2}}{4\pi\epsilon}\frac{1}{n^{1/2}} \tag{2}\]
Thus, upon decreasing density, the inverse compressibility \(\partial\mu/\partial n\) of a clean system becomes negative and tends to \(-\infty\). The sign change of the compressibility means that, when the concentration of electrons in the system varies, the changes in the potential energy due to the interelectron interaction, having the opposite sign, exceed the changes in the kinetic energy. Experimentally, the sign change of \(\partial\mu/\partial n\) was found first in capacitance and chemical potential measurements in magnetic field [11; 12; 13] and later was confirmed in measurements performed by the field penetration technique in zero field [14; 15; 16] (Sec. III.2).
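As a simple consistency check (our algebra, not quoted from the original references), setting \(\partial\mu/\partial n=0\) in Eq. (2) and introducing the effective Bohr radius \(a_{B}^{*}=4\pi\epsilon\hbar^{2}/me^{2}\) gives the density at which the sign change occurs:
\[n^{*}=\frac{2}{\pi^{3}a_{B}^{*2}},\qquad r_{s}^{*}=\frac{1}{a_{B}^{*}\sqrt{\pi n^{*}}}=\frac{\pi}{\sqrt{2}}\approx 2.22,\]
which coincides with the Hartree-Fock value of 2.22 quoted in Sec. II.2.2.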
### Taking disorder into account
Disorder is inevitably present in real two-dimensional systems [17]. No matter how small the potential fluctuations in the most advanced 2D structures are, they lead to a significant change in the behavior of thermodynamics and transport in the system as carrier density decreases.
#### ii.2.1 Results in the framework of quantum interaction corrections and renormalization group theory
Explicit calculations on the "metallic" side (high conductivity \(\sigma\times(h/e^{2})=E_{F}\tau/\hbar\gg 1\)), within the renormalization-group and interaction-correction theory, show no singular correction to the compressibility in leading order in disorder [18; 19; 20; 21; 22], and even to the second order [23].
There were several pioneering efforts [24; 25] to address the interplay between interaction and disorder and their effect on thermodynamic properties, by calculating corrections to the compressibility from the exchange and correlation contribution to the ground state energy [24]. However, the predicted vanishing of the compressibility as the transition to the localized state is approached from the metallic side is at odds with experimental results (Section III).
#### ii.2.2 Numerical results
The unbounded divergence of \(\kappa^{-1}\) for the clean 2DE system is cut off in the presence of disorder. Having obtained the equation of state [i.e., \(E(r_{s})\)] of the normal liquid, Tanatar and Ceperley [26] calculated the compressibility using
\[\frac{\kappa_{0}}{\kappa}=1-\frac{\sqrt{2}r_{s}}{\pi}+\frac{r_{s}^{4}}{8} \left[\frac{d^{2}}{dr_{s}^{2}}-\frac{1}{r_{s}}\frac{d}{dr_{s}}\right]E_{c}. \tag{3}\]
Here \(\kappa_{0}=\pi r_{s}^{4}/2\) is the compressibility of a noninteracting system (see Fig. 1a), and \(E_{c}\) is the correlation energy [26]. In the above equation, the compressibility becomes negative around \(r_{s}=2.03\), slightly before the Hartree-Fock result of 2.22.
Within the self-consistent HF approximation and the deformable jellium model, Orozco et al. [27] calculated the ground-state compressibility of the 2DE system for the density region which includes the liquid-to-localized phase transition. A sudden change in the behavior of the chemical potential and a large divergence in the inverse compressibility were found at low densities. The change of sign of the inverse compressibility and its overall behavior versus \(r_{s}\) calculated in this work agree qualitatively with the behavior of \(\kappa^{-1}\) observed in experiments (see Section III).
The thermodynamic compressibility can be readily calculated from the ground-state energy
\[\frac{\kappa_{0}}{\kappa}=-\frac{r_{s}^{3}}{8}\left[\frac{\partial E_{g}}{\partial r_{s}}-r_{s}\frac{\partial^{2}E_{g}}{\partial r_{s}^{2}}\right]. \tag{4}\]
Asgari and Tanatar [28] calculated the ground state energy and compressibility within DFT and dynamical
mean-field formulation. The results are presented in Fig. 1b. The solid curve here shows \(\kappa^{-1}\) for a clean system. They considered the disorder effect within two models: (i) a density-independent scattering rate \(\gamma\), similar to Si and Varma [24], and (ii) the mode-coupling approximation, with \(\gamma\) dependent on \(r_{s}\) through the screened impurity scattering potential. The dotted curve was calculated with a constant \(\gamma\). It stays negative at low density, qualitatively similar to that for the clean system. Most importantly, \(\kappa_{0}/\kappa\) calculated within the mode-coupling approximation, which includes the screened electron-impurity scattering potential, exhibits a minimum and starts rising towards positive values. Thus, the inverse compressibility upturn at low densities is solely an effect of disorder. On the other hand, the disorder does not affect \(\kappa\) in the range \(r_{s}=2\div 4\).
The density at which the calculated inverse compressibility experiences a minimum depends on the impurity density \(n_{i}\) (see Fig. 1b). In experiments [29; 30; 31] the inverse compressibility also shows an upturn after going through a minimum. Initially [29], this minimum was suggested as a thermodynamic signature of the metal-insulator transition. However, later on the two effects were disentangled and the anomalous behavior of \(1/\kappa\) was attributed to the inhomogeneous nature of the insulating phase, as demonstrated experimentally [30; 31; 32] and theoretically [27; 28; 33; 34]. Thus, the calculations of Ref. [28] yield an overall \(1/\kappa(r_{s})\) dependence similar to that observed in the experiments (Section III.2).
#### ii.2.3 Non-linear screening approach. Numerical modelling
Shi and Xie [33] investigated spatial distribution of carrier density and the compressibility of 2D electron systems by using the local density approximation. A slowly varied disorder potential was applied to simulate the disorder effect. To investigate the density distribution of a disordered 2D electron system, within DFT, the total electron energy was calculated as
\[E(n)=E_{T}(n)+E_{ee}(n)+E_{d}(n)+E_{x}(n)+E_{c}(n) \tag{5}\]
Here \(E_{T}(n)\) is the kinetic-energy functional, \(E_{ee}(n)\) is the direct Coulomb energy due to charge inhomogeneity, \(E_{d}(n)\) is the disorder potential energy, and \(E_{x}(n)\) and \(E_{c}(n)\) are the exchange and correlation energies, respectively. The ground state spatial density distribution was obtained by minimizing the total energy functional with respect to the density. In calculating the total exchange and correlation energy, they used the interpolated Tanatar and Ceperley QMC results for the exchange and correlation energy density of the homogeneous 2DE system [26]. Shi and Xie found that at low average densities electrons form a droplet state, which is a coexistence phase of high- and low-density regions.
It was found that the compressibility anomaly observed in 2D systems, which accompanies the metal-insulator transition, can be attributed to the formation of the droplet state due to the disorder effect at low carrier densities. Figure 2 shows the density distribution of the system. It can be clearly seen that the electrons form some high-density regions, while the density of other regions is essentially zero. Depending on the average density of the system, the high-density regions may connect to each other (\(r_{s}=10\)) or form some isolated regions (\(r_{s}=19\)). There exists a certain density, \(r_{s}=14\), where the connectivity of the high-density regions changes (a percolation transition).
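For a feel of the disorder landscape used in such simulations, the sketch below (our own; the geometry parameters are taken from the caption of Fig. 2, while the periodic boundaries and the omitted sign/neutralizing background are simplifying assumptions of ours) generates the smooth potential of random off-plane charges:

```python
import numpy as np

rng = np.random.default_rng(1)
# Parameters from the caption of Fig. 2 (lengths in units of a_B*).
L, d, ni = 256.0, 10.0, 2.5e-3        # box size, impurity setback, impurity density
Ni = rng.poisson(ni * L**2)           # number of off-plane impurities
imp = rng.uniform(0.0, L, size=(Ni, 2))

# Disorder landscape V(r) = sum_i 1/sqrt(|r - r_i|^2 + d^2) (effective atomic
# units; only the shape of the fluctuations matters for this illustration).
M = 128
x = np.linspace(0.0, L, M, endpoint=False)
X, Y = np.meshgrid(x, x)
V = np.zeros((M, M))
for xi, yi in imp:
    dx = (X - xi + L / 2.0) % L - L / 2.0   # minimum-image (periodic box)
    dy = (Y - yi + L / 2.0) % L - L / 2.0
    V += 1.0 / np.sqrt(dx**2 + dy**2 + d**2)
V -= V.mean()                          # fluctuations about the mean
print(f"{Ni} impurities; rms potential fluctuation = {V.std():.4f} (eff. a.u.)")
```

The setback \(d\) smooths the Coulomb singularities, producing the slowly varying random potential assumed in the local-density treatment above.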
The e-e interaction is important for the conduction behavior of a dilute electron system in the sense that it makes the density distribution more extended because of the Coulomb repulsion. Figure 3 shows the density distribution for the free electron gas with the same density as in Fig. 2, obtained by turning off the electron-electron interaction. The system forms only some isolated high-density regions at the most disordered areas, whereas the density distribution of the corresponding interacting system (Fig. 2b) is quite extensive at the same density. In other words, at a given disorder strength, the critical density for the free electron gas is much higher than for its interacting analogue.
Figure 1: (a) Inverse compressibility \(\kappa_{0}/\kappa\) of the electron gas as a function of the density parameter \(r_{s}\), calculated using Eq. (3). Dashed line is the compressibility in the Hartree-Fock approximation. Adapted from Ref. [26]. (b) \(\kappa_{0}/\kappa\) calculated for a wider range of \(r_{s}\). The short- and long-dashed curves are for impurity densities \(n_{i}=5\times 10^{10}\) and \(10^{11}\)cm\({}^{-2}\), respectively. Adapted from Ref. [28].
## III Compressibility of 2DE systems: Experimental studies
### Earlier capacitance measurements
In earlier experiments [11; 12; 13; 35; 36], information about the compressibility (or the inverse density of states) was obtained either from capacitance measurements or from measurements of the electrochemical potential variations versus density in quantizing magnetic field. In the former case, the capacitance was measured by an AC bridge in the frequency range 6-75 Hz. The measured capacitance \(C\) is considered to be a series connection of the geometric and "quantum" parts:
\[C^{-1}=C_{0}^{-1}+\left(e^{2}S\frac{\partial n}{\partial\mu}\right)^{-1}, \tag{6}\]
where \(C_{0}\) is the capacitance in the \(\partial\mu/\partial n\to 0\) limit which does not depend on \(B\), and \(S\) is the 2DE layer area. \(C_{0}\) may be estimated as follows:
\[C_{0}^{-1}=C^{-1}|_{B=0}-\left(e^{2}SD_{0}\right)^{-1},\]
where the density of states \(D_{0}=\partial n/\partial\mu|_{B=0}=8.37\times 10^{14}(m^{*}/m_{e})\)cm\({}^{-2}\)eV\({}^{-1}\)[8]. In order to separate the second "quantum" part from the geometric capacitance, the data taken in zero field was subtracted from that in magnetic field. Correspondingly, the zero field behavior of the compressibility remained inaccessible.
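Inverting Eq. (6) for the thermodynamic density of states is elementary; the sketch below (our own, with assumed rather than measured numbers) illustrates why the measurement is delicate: for a realistic density of states, the quantum term shifts \(C\) from \(C_{0}\) by only a few parts in \(10^{5}\).

```python
# Illustrative numbers only: C, C0 and S below are assumptions, not measured values.
e = 1.602176634e-19     # elementary charge, C
S = 1.0e-6              # 2DE layer area, m^2 (assumed)
C0 = 100.0e-12          # geometric capacitance, F (assumed)
C = 99.996e-12          # "measured" capacitance, F (assumed close to C0,
                        # because the quantum term is large)

# Eq. (6) inverted: dn/dmu = [e^2 S (1/C - 1/C0)]^(-1)
dn_dmu = 1.0 / (e**2 * S * (1.0 / C - 1.0 / C0))        # in 1/(J m^2)
print(f"dn/dmu = {dn_dmu * e * 1e-4:.3g} cm^-2 eV^-1")  # ~1e15, the D_0 scale
```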
Figure 4 represents the difference \(\Delta C(n)\) of the two dependences, \(\Delta C=-\left(C-C|_{B=0}\right)\), measured with a Si-MOSFET sample as a function of carrier density at three temperatures and in a fixed magnetic field of 11.7 Tesla. The measured difference equals
\[\Delta C\approx\left(C^{2}/e^{2}S\right)\left[(\partial n/\partial\mu)^{-1}-D _{0}^{-1}\right]\]
The obtained dependence at \(T=4.2\) K agrees with earlier capacitance measurements [37; 38; 39]. In particular, the \(\left(\partial n/\partial\mu\right)^{-1}\) values at half-integer fillings are less than \(D_{0}^{-1}\) but positive. However, lowering the temperature to \(1.4\,\)K and to \(0.6\,\)K leads to the appearance of regions of filling factors where \((\partial n/\partial\mu)^{-1}\) gets negative. The appearance of these dips in the vicinity of, but somewhat away from, the integer filling factors was exactly the feature predicted by Efros [10]. Indeed, in his language, the total electron energy acquires a term \(E_{ee}\) in addition to the single-particle term \(E_{1p}\).
Figure 2: Spatial density distributions for various average densities. The contour plot shows the local density parameter \(r_{s}=1/\sqrt{\pi n}\). The density in the white area decreases rapidly to zero. The size of the system is set as \(L=256a_{B}^{*}\). The disorder potential is generated by off-plane charged impurities with \(d=10a_{B}^{*}\), \(n_{i}=2.5\times 10^{-3}/(a_{B}^{*})^{2}\). Adapted from Ref. [33].
Figure 3: Spatial density distribution for the free electron gas on the same disorder landscape as Fig. 2 at density \(r_{s}=14\). Adapted from Ref. [33].
\[D^{-1}=D_{1p}^{-1}+G_{ee}^{-1}.\]
\(G_{ee}\) was evaluated by Efros [10] as follows:
\[G_{ee} =-(e^{2}\alpha/\varepsilon)\left(\{\nu\}n_{B}\right)^{3/2},\qquad\{\nu\}\leq\frac{1}{2},\] \[G_{ee} =-(e^{2}\alpha/\varepsilon)\left[\left(1-\{\nu\}\right)n_{B}\right]^{3/2},\qquad\{\nu\}\geq\frac{1}{2},\]
where \(\nu=n/n_{B}\) is the Landau-level filling factor, \(n_{B}=1/(2\pi l_{B}^{2})\) is the Landau-level degeneracy, \(\{\nu\}=\nu-{\rm int}(\nu)\) is the fractional part of the filling factor, and \(\alpha\) is a dimensionless constant (\(=2\) for the classical Coulomb interaction).
The inverse thermodynamic density of states, correspondingly, is equal to [10]:
\[D_{ee}^{-1} =-\left(\frac{3\alpha e^{2}}{4\pi\varepsilon}\right)\left(\{\nu \}n_{B}\right)^{-1/2},\quad\{\nu\}\leq\frac{1}{2},\] \[=-\left(\frac{3\alpha e^{2}}{4\pi\varepsilon}\right)\left[\left(1 -\{\nu\}\right)n_{B}\right]^{-1/2},\{\nu\}>\frac{1}{2} \tag{7}\]
The negative compressibility signals a tendency of the 2DE system to break the homogeneous state. On the other hand, the stability of the entire 2D system with a negative compressibility is achieved in physical systems by the neutralizing background. For the gated 2D system, the stability condition was analyzed in Ref. [40].
### Field penetration measurements
In the conventional capacitance technique, the capacitance between the 2D gas and a metal gate electrode is measured. The dominance of a large geometric term in the measured capacitance essentially forces one to vary some other parameter, like magnetic field [11; 12; 13; 36], temperature [41; 42], etc., and then subtract off a large, constant offset in order to uncover the quantum term. There are three major drawbacks to this technique. First, the geometric term is usually not accurately known, and therefore the subtraction is uncertain. The geometric term produces a second difficulty as well: it may not actually remain constant as the external parameter, e.g., magnetic field, is changed. Thirdly, the slowly decaying eddy currents excited in the 2DE system by ac modulation of the carrier density [43] impede capacitance measurements at low temperatures in quantizing magnetic field.
The "floating gate" technique elaborated in Ref. [44] does not require field or density modulation and therefore can be used for electrochemical potential measurements even in the QHE regime [45; 46].
The alternative field penetration technique introduced by Eisenstein [14; 15] automatically subtracts the geometric term. This is achieved by using a double-layer 2D system and measuring the fraction of the ac electric field \(\delta E_{0}\) which penetrates one layer and is detected by the second. The inset to Fig. 5 shows a schematic set-up. The ac field \(\delta E_{p}\) penetrates through the upper layer and causes current flow through the external impedance \(Z\), thus generating a detectable voltage \(V_{sig}\).
The lower part of Fig. 5 shows a trace of the measured ac current (proportional to the penetration field) as a function of the gate voltage or electron density of the upper 2D layer. The penetration field measures the screening ability of the electrons, which is inversely proportional to \(\kappa\). The main advantage of this experimental approach is that it provides direct access to \(\partial\mu/\partial n\) for the top 2DE layer without any offset signals (related to the geometric capacitance contribution). Dultz and Jiang [29] have extended the field penetration method to a more conventional heterostructure with only a single layer of carriers.
Were the 2DE system noninteracting, the penetration to the bottom layer would be a few percent and would be positive. This result is qualitatively altered by e-e interactions, which make the observed differential penetration negative. The minimum in the inverse compressibility was already detected in the measurements by Eisenstein [15].
This minimum attracted much interest when Dultz and Jiang [29] reported that in some samples the minimum virtually coincides with the metal-insulator transition in transport. They found [29] that the negative \(1/\kappa\) at low densities reaches a minimum value at a certain density, and then increases dramatically with further decreasing \(n\). This coincidence was initially considered as a thermodynamic signature of an interaction-driven phase transition [47; 24]. However, later on Allison et al. [32] measured simultaneously the compressibility, capacitance, and resistivity in the vicinity of the metal-insulator transition with different samples. It was shown that the coincidence of the two effects in some samples, the inverse compressibility minimum and the sign change in transport \(d\rho/dT\), is accidental.
Figure 4: \(\Delta C\) dependences on \(V_{g}\) (proportional to the carrier density) for three temperatures and at field \(11.7\,\)Tesla. The upper horizontal axis shows the Landau level filling factors. Adapted from Ref. [11].
### Local compressibility measurements
Ilani et al. [30; 31] performed local studies of \(\partial\mu/\partial n\) and extended them into the low-density regime, across the transition to the localized state. Their measurements utilized single-electron transistors (SETs) located directly above the two-dimensional hole gas (2DHG) of inverted back-gated GaAs/AlGaAs structures. This technique made it possible to probe the local behavior of \(\partial\mu/\partial n\) as well as its spatial variations. At equilibrium, the Fermi energy is constant across the sample, and therefore a change in \(\mu(n)\) induces a change in the electrostatic potential, which is readily deduced by measuring the change in current through the SET. The spatial resolution, determined from the size of the SET and its distance from the 2DHG, is \(0.1\times 0.5\mu\)m\({}^{2}\).
Instead of the anticipated monotonic dependence, local \(\mu(n)\) exhibits a rich structure of oscillations. In the high density "metallic" regime, Ilani et al. observed long sawtooth oscillations. Superimposed on them and starting in close proximity to the onset of the localized state, a new set of rapid oscillations emerges (Fig. 6b). Their typical period is an order of magnitude smaller, and the amplitude grows continuously from the point of appearance to lower densities. All the oscillations, including the fine structure seen on the left side of Fig. 6b, were reproducible and did not exhibit hysteresis or sweep rate dependence.
The sawtooth profile is reminiscent of the electrochemical potential behavior for a quantum dot as a function of the number of electrons [48] and, hence, suggests the existence of discrete charging events. Thus, the measured \(\mu\) of the 2DHG varies undisturbed along the segments with negative slopes, until a certain bias between the 2DHG and the SET makes recharging of an intermediate localized state energetically favorable. This causes a sharp drop in the electrostatic potential, after which \(\mu\) continues to vary smoothly until the next screening event occurs.
Ilani et al. reconstructed the basic \(\mu(n)\) dependence by assembling the undisturbed segments together; the results are shown in Fig. 7, for five different SETs placed apart from each other. In the "metallic" regime (high density) all the data collapse onto a single curve in Fig. 7, rather close to the HF model prediction.
In the insulating phase, assuming that the new set of oscillations is caused by the same mechanism (screening by traps), the authors extracted the slopes and added them to the same plot of \(\partial\mu/\partial n\) (see Fig. 7). Unlike in the metallic phase, where the system clearly has a negative \(\partial\mu/\partial n\), in the insulating phase the sign of the compressibility is not known a priori. Therefore, in Fig. 7 the absolute values of both negative and positive slopes are shown, all of which deviate considerably from the expected \(n^{-1/2}\) power law. The deviation becomes greater than an order of magnitude at the lowest density and indicates a change in the screening properties of the 2DHG at the transition to the localized state. The fluctuations in the slopes, observed on the insulating side, are reproducible and suggest that mesoscopic effects are present. Furthermore, the average behavior of \(\partial\mu/\partial n\) in this fluctuating regime is position dependent (see the inset in Fig. 7). Such dependence on position indicates that once the system crosses into the insulating phase it becomes spatially inhomogeneous.
Figure 5: Normalized penetrating field \(\delta E_{p}/\delta E_{0}\) versus gate voltage at zero magnetic field and \(T=1.2\)K. Dotted curve is calculated using Tanatar and Ceperley's compressibility [26]. Upper axis gives the carrier density of the top 2DE system. Dashed horizontal line: the noninteracting case. Inset: experimental set-up. Adapted from Ref. [14].
Figure 6: (a) Measured \(\mu(n)\) in the metallic regime (dots) together with the HF theory [Eq. (2)] for a clean system (solid line). The measured negative slopes are highlighted (dark symbols) to demonstrate their resemblance to the HF model. (b) Measured \(\mu(n)\) across the MIT and in the insulating region. Inset: A closer look at the data in the insulating regime. Each slope is composed of many data points, allowing an accurate determination of the slopes. Adapted from Ref. [30].
The local measurements [31] emphasize the important role of charged traps in the ground state thermodynamics of the 2D system. This might be directly related to models where the 2D gas and charge traps coexist in equilibrium, particularly [49; 50]. To briefly summarize the results of the local measurements [30]: it was found that the behavior of \(\partial\mu/\partial n\) in the metallic phase on average follows the HF model, suggesting the 2D system to be almost spatially homogeneous. In contrast, the insulating phase is found to be spatially inhomogeneous.
## IV Phase separation effects revealed in thermodynamics and transport
### Evidence of the "spin-droplet" state in thermodynamic spin magnetization
The method of \(\partial\mu/\partial B\) thermodynamic measurements was introduced and substantiated in Refs. [51; 52]. To probe the pure spin susceptibility, free of orbital contributions, the measurements in Refs. [51; 52] were performed in a magnetic field \(B_{\parallel}\) aligned strictly parallel to the 2D plane. In this technique, the applied magnetic field is modulated with a small amplitude \(\delta B\), and the excited recharging current of the Si-MOS structure [52] is measured: \(\delta I=[i\omega C_{0}\delta B/e](\partial\mu/\partial B)\). Here \(C_{0}\) is the known capacitance of the "gate - 2D layer" structure. From the measured recharging current the quantity \(\partial\mu/\partial B\) is found, which, due to the Maxwell relation, directly yields the magnetization per electron \(\partial M/\partial n\).
In order to explore interaction effects, the measurements in [53] were performed in weak fields smaller than the temperature, \(g\mu_{B}B\leq k_{B}T\). In Figure 8 one can see that at low densities \(\partial M/\partial n\) becomes positive and in all cases is much greater than expected for the Pauli spin susceptibility. When the field increases (while still remaining smaller than the temperature, \(g\mu_{B}B<k_{B}T\)), \(\partial M/\partial n\) sharply increases and exceeds the Bohr magneton by more than a factor of two at low temperatures (Fig. 8). Such behavior of \(\partial M/\partial n\) is reminiscent of the dependence anticipated for free spins [53]. However, the fact that \(\partial M/\partial n\) exceeds the Bohr magneton points to a ferromagnetic ordering of the electron spins. The magnetization curves \(\partial M/\partial n\) (Fig. 8) saturate in the field \(b=\mu_{B}B/(k_{B}T)\sim 0.25\), signaling that the particles which respond to the field modulation have spins \(1/(2b)\approx 2\), rather than \(1/2\). This result is the "smoking-gun" evidence of the emergence of a two-phase state in the 2D system, consisting of a paramagnetic Fermi liquid and ferromagnetic domains (called "spin droplets") with total spin \(\sim 2\), each comprising four or more electrons.
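A rough free-moment estimate (our paraphrase of the argument of Ref. [53]) makes the spin assignment explicit: a moment of spin \(S\) responds linearly to the field while \(2S\mu_{B}B\lesssim k_{B}T\) and saturates beyond this point, so saturation at \(b\approx 0.25\) implies
\[2S\mu_{B}B\approx k_{B}T\quad\Longrightarrow\quad S\approx\frac{1}{2b}\approx 2.\]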
The existence of the two-phase state is not a property of the low-density state solely: the spin droplets were detected in Ref. [53] over a wide range of densities, up to \(n\approx 2\times 10^{11}\)cm\({}^{-2}\), which is twice the critical density of the onset of the insulating state, i.e., in the regime of high metallic conduction \(\sigma\approx 80e^{2}/h\).
### Phase separation effects in charge transport
In Ref. [54], several features have been revealed in magnetotransport, zero-field transport, and thermodynamic spin magnetization for a 2D correlated electron system. These features have been associated with the two-phase state. More specifically:
(i) In magnetoconductivity, the novel regime of magnetoconductance sets in above a density-dependent temperature \(T_{\rm kink}(n)\).
(ii) In the temperature dependence of zero-field resistivity, an inflection point is observed at about the same temperature \(T_{\rm infl}(n)\approx T^{*}\).
(iii) In thermodynamic magnetization, the weak-field spin susceptibility per electron, \(\partial\chi/\partial n\equiv\partial^{2}M/\partial B\partial n\), changes sign at \(T_{dM/dn}(n)\approx T^{*}\).
All three notable temperatures, \(T_{\rm kink}\), \(T_{\rm infl}\), and \(T_{dM/dn}\), are close to each other (see Fig. 9) and are intrinsic to the strongly correlated regime solely. It is shown below that these features can be described within the framework of the phase separation approach.
Figure 7: \(|d\mu/dn|\) collected from several SETs on several samples. In the insulating regime, the \(|d\mu/dn|\) magnitude includes both negative and positive slopes. Each point corresponds to a well-defined segment in the \(\mu(n)\) trace. The point marked by an arrow corresponds to the marked segment in Fig. 6a. Inset: Results from two SETs on the same device demonstrating the spatial dependence of \(|d\mu/dn|\) on the insulating side. Adapted from Ref. [30].
#### iv.2.1 Magnetotransport in the in-plane field
Regarding the zero field transport and magnetotransport, their features require more detailed explanation.
In the conventional theory of interaction corrections (IC) [55], the lowest order variations of the magnetoconductivity (MC) with weak in-plane field \(g\mu_{B}B<k_{B}T\ll k_{B}T_{F}\) at a fixed temperature \(T\) are parabolic. This is clear from symmetry arguments, and also follows from the IC theory and the screening theory [56; 57].
\[\sigma=\sigma_{0}-a_{\sigma}B^{2}+\mathcal{O}\left(B^{4}\right);\quad\rho=\rho_{0}+a_{\rho}B^{2}+\mathcal{O}(B^{4}), \tag{8}\]
where by definition
\[a_{\sigma}\equiv\left.-\frac{1}{2}\frac{\partial^{2}\sigma}{\partial B^{2}} \right|_{B=0}=\frac{1}{2\rho^{2}}\frac{\partial^{2}\rho}{\partial B^{2}}; \quad a_{\rho}\equiv\left.\frac{1}{2}\partial^{2}\rho/\partial B^{2}\right|_ {B=0}.\]
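In practice, \(a_{\sigma}\) is extracted by fitting the weak-field parabola. A minimal sketch (our own, on synthetic data with assumed values, purely to illustrate the procedure):

```python
import numpy as np

rng = np.random.default_rng(2)
B = np.linspace(-0.5, 0.5, 21)           # weak-field window (tesla)
sigma0, a_true = 40.0, 3.0               # assumed values, e^2/h units
sigma = sigma0 - a_true * B**2 + 0.01 * rng.standard_normal(B.size)

# sigma(B) = sigma0 - a_sigma * B^2 is linear in B^2:
slope, intercept = np.polyfit(B**2, sigma, 1)
print(f"a_sigma = {-slope:.3f} (true {a_true}), sigma0 = {intercept:.3f}")
```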
In Ref. [54] the in-plane field MC was studied in detail and quantified in terms of the prefactor \(a_{\sigma}(T,n)\). Within the IC theory, the \(\sigma(T,B)\) variation in the 2DE system is described by the sum of the interference correction and e-e interaction corrections [55; 58]
\[\Delta\sigma(T)\approx\Delta\sigma_{C}(T,B)+n_{T}(B)\Delta\sigma_{T}(T,B)+O \left(\frac{1}{k_{F}l}\right).\]
Here the first term combines the single-particle interference correction and the interaction correction in the singlet channel, the second term is the interaction correction in the triplet channels, and \(k_{F}l\gg 1\) is the dimensionless conductivity. Within the same approach, the MC in a weak in-plane field originates from the field dependence of the effective number of triplet channels \(n_{T}(B)\), which in turn is due to the Zeeman splitting [55].
As a result, the first order interaction corrections to MC in the diffusive and ballistic interaction regime \(\Delta\sigma\equiv\sigma(T,B)-\sigma(T,0)\) may be written in terms of \(a_{\sigma}\) as follows [55]:
\[a_{\sigma}(T)\propto\left\{\begin{array}{cc}(1/T)^{2},&T\tau\ll 1\\ (1/T),&T\tau\gg 1.\end{array}\right. \tag{9}\]
Their explicit expressions are given in Ref. [55].
Thus, according to the IC theory predictions, as temperature _increases_, the MC should cross over from the \((1/T^{2})\) to the \((1/T)\) temperature dependence. This theory prediction is confirmed in measurements with low-mobility (high density, weak interactions) Si-MOS samples [54]. In contrast, for high-mobility (lower densities, strongly interacting regime) structures, as Fig. 10 shows, with temperature _increasing_, \(a_{\sigma}(T)\) crosses over from the conventional ballistic-type \(-(B^{2}/T)\) to the anomalous \(-(B^{2}/T^{2})\) dependence. Despite the absence of overheating of the electrons [60], the diffusive regime of MC in the high-mobility structures is not observed down to \(T=0.3\)K.
One can see from Fig. 10 that the ballistic-type behavior \(\propto T^{-1}\) extends up to temperatures of 1.5-2 K (which are a factor of 10 higher than the estimated diffusive/ballistic border \(T_{\rm db}\approx 0.2\)K) [54]; then it sharply changes to the novel dependence, \(a_{\sigma}(T)\propto T^{-2}\), making the overall picture clearly inconsistent with the theory predictions, Eq. (9). The crossover in Fig. 10 occurs rather sharply, as a kink on the double-log scale. The kink and the overall type of behavior were observed in a wide range of densities and were qualitatively similar for several studied high-mobility samples. The next section shows that the observed effect in the in-plane magnetic field is associated with the onset of the two-phase state.
Figure 8: Magnetization per electron \(\partial M/\partial n\) in weak field plotted versus the normalized magnetic field for a carrier density of \(5\times 10^{10}\)cm\({}^{-2}\) at several temperatures (\(T=0.8\), 1.2, 1.8, 4.2, 7, 10, 24 K, from top down). Adapted from Ref. [53].
Figure 9: Empirical phase diagram of the 2DE system. Dashed areas are (I) the ballistic interaction regime and (II) the anomalous magnetoconductance regime. Hatched area (III) is the nondegenerate regime; the blank area at \(n<n_{c}\) is the localized phase. Full dots: the kink temperature \(T_{\rm kink}\); open dots: the inflection point \(T_{\rm infl}\). Dash-dotted curves show the calculated bare (\(T_{F}\)) and renormalized (\(T_{F}^{*}\)) Fermi temperatures. The insert blows up the low-density region; the dashed line is \(T_{dM/dn}\)[53]. Adapted from [54].
#### iv.2.2 Phase separation effects in oscillatory magnetotransport
Measurements of the oscillatory magnetoresistance in high-mobility Si-MOS structures in weak perpendicular magnetic fields were performed in Ref. [59]. It was found that the quantum oscillations in 2D electron systems are observed _down to the critical carrier density_ \(n_{c}\) of the transition to the strongly localized state. For such low densities, the oscillations exhibit the anticipated period, phase, and amplitude, even though the conductivity becomes essentially less than \(e^{2}/h\) and, hence, the mean free path becomes less than the Fermi wavelength \(\lambda_{F}\). It was concluded that this apparent contradiction with the Ioffe-Regel criterion for diffusive transport is caused by the emergence of an inhomogeneous state of the 2D system, in which the regions of diffusive and hopping conduction are spatially separated.
The existence of quantum resistivity oscillations down to the critical electron density provides evidence for the emerging inhomogeneity of the 2D system. As the density approaches \(n_{c}\), the "global" resistivity, calculated under the assumption of uniform current flow, becomes much greater than the "local" resistivity of the spatial areas which contribute primarily to the oscillation amplitude. This observation supports the earlier conjecture of emergent inhomogeneity of the conductive regions near \(n_{c}\)[54], deduced from the analysis of magnetoconductivity in weak parallel fields. Thus, we associate the observed oscillations with the Shubnikov-de Haas (SdH) effect within certain regions of the 2D space, in which the momentum relaxation times \(\tau_{p}\) are much longer than that calculated from the global resistivity under the assumption of uniform current flow.
In Ref.[53] it was shown that the correlated 2D electron system can be inhomogeneous even _at high electron densities_: it contains inclusions of collective localized (insulating) states (called "spin droplets") in a conductive Fermi liquid. From the low density oscillatory transport measurements [59] the latter picture (we call it "bi-colored") is supplemented with the data showing that the system is, in fact, "three-colored". The conductive Fermi-liquid phase is not spatially homogeneous. Instead, it forms a pattern of regions with a large momentum relaxation time \(\tau_{p}\). These highly conductive regions are connected with each other through poorly conductive regions of Fermi liquid with lower \(\tau_{p}\)-values.
Figure 11: Examples of quantum oscillations of the resistivity for (a) \(n=2\times 10^{11}\)cm\({}^{-2}\), and (b) 0.94, 1.00, and \(1.04\times 10^{11}\)cm\({}^{-2}\). Dashed lines in panel (b) depict the upper boundary of the analyzed magnetic fields. \(T=0.1\)K. Adapted from [59].
Figure 10: Comparison of the temperature dependences of the prefactors \(a_{\sigma}(T)\) for two samples, Si2 and Si-63, and for two density values (in units of \(10^{11}\)cm\({}^{-2}\)). For clarity, the curves are scaled by the factors shown next to each curve.
#### iv.2.3 Phase separation effects in zero field transport
Below we analyze the \(\rho(T)\) and \(\sigma(T)\) dependencies at zero field. The variations of these quantities in the relevant temperature range (see Fig. 12) for high mobility Si-MOS samples are large (up to a factor of 10), making the IC theory inapplicable.
Each \(\rho(T)\) curve has two remarkable points: the \(\rho(T)\) maximum, \(T_{\rm max}\), and the inflection, \(T_{\rm infl}\)[61, 62]. Whereas \(T_{\rm max}\) is of the order of the renormalized Fermi energy, the inflection point occurs at much lower temperatures, in the degenerate regime. Importantly, the inflection temperature appears to be close to the kink temperature (see Figs. 9, 12). Besides that, \(T^{*}(n)\) is much higher than the "incoherence" temperature at which the phase coherence is lost (defined as \(\tau_{\varphi}(T)=\tau\)[63]). This confirms that the kink, the inflection, and the \(\partial\chi/\partial n\) sign change are irrelevant to the single-particle interference effects [63, 64, 65, 66].
One can see from Fig. 12 that the \(\rho(T)\) temperature dependence is monotonic up to \(T=T_{F}\), and follows one and the same additive resistivity functional form over a wide density range:
\[\rho(T) = \rho_{0}+\rho_{1}\exp(-\Delta(n)/T),\] \[\Delta(n) = \alpha(n-n_{c}(B)), \tag{10}\]
where \(\rho_{1}(n,B)\) is a slowly decaying function of \(n\), and \(\rho_{0}(n,T)\) includes Drude resistivity and quantum corrections, both from the single-particle interference and interaction. Although the above empirical resistivity form has been suggested in Ref. [67] on a different footing, it fits well the \(\rho(T)\) dependence for a number of material systems [67, 68, 69, 70, 71, 72, 73, 74].
This empirical additive \(\rho(T)\) form satisfies general requirements for the transport behavior in the vicinity of a critical point [62, 75]. This form implies two channel scattering and therefore agrees with the two-phase state of the low density 2D electronic system (cf. Matthiessen's rule).
As noted above, the \(\rho(T)\) (and \(\sigma(T)\)) variations of the experimental data (Fig. 12) are so large that corrections of first order in \(T\), of course, cannot describe them. The simplest functional dependence, Eq. (10), correctly describes the inflection in \(\rho(T)\) and the linear density dependence of the inflection temperature [67, 76]. Obviously, in this model \(T_{\rm infl}=\Delta/2\). To take the magnetic field into account, and following the results of Ref. [76], we include in \(\Delta/T\) all terms that are even in \(B\) and of the lowest order in \(B/T\), as follows:
\[\Delta(T,B,n)/T=\Delta_{0}(n)/T-\beta(n)B^{2}/T-\xi(n)B^{2}/T^{2}, \tag{11}\]
with \(\Delta_{0}=\alpha[n-n_{c}(0)]\).
Equations (10) and (11) link the magnetoconductance with the zero-field \(\rho(T)\) temperature dependence. Combining them, we obtain the \(\rho(T,B)\) dependence as follows:
\[\rho(B,T)=\left[\sigma_{D}-\delta\sigma\cdot\exp\left(-T/T_{B} \right)\right]^{-1}\] \[+\rho_{1}\exp\left(-\alpha\frac{n-n_{c}(0)}{T}-\beta\frac{B^{2}}{ T}-\xi\frac{B^{2}}{T^{2}}\right) \tag{12}\]
The term in the square brackets includes the Drude conductivity and the interaction quantum corrections [55, 58], which are smoothly cut off above \(T=T_{B}\approx\Delta/2\). \(\delta\sigma(T)\) was calculated in Ref. [54] using the experimentally determined Fermi-liquid coupling constants \(F_{0}^{\sigma}(n)\)[66, 77], and \(\sigma_{D}\) was found by a conventional procedure [78].
From Eq. (12), the prefactor \(a_{\sigma}=-(1/2)\partial^{2}\sigma/\partial B^{2}\) is calculated straightforwardly and is compared with experimental data in Fig. 12. In the \(\rho(T)\) fitting [Figs. 12 (a,c,e,g)], there is basically only one adjustable parameter, \(\rho_{1}(n)\), for each density. Indeed, \(n_{c}(0)\) is determined from the conventional scaling analysis at \(B=0\)[62], and the slope \(\alpha=2\partial T_{\rm infl}(n)/\partial n\) may be determined from Fig. 9.
One can see that both \(\rho(T)\) and \(a_{\sigma}(T)\) are well fitted; the model correctly captures the major features of the data, the steep \(\rho(T)\) rise (including the inflection), and the kink in the \(a_{\sigma}(T)\) dependence. Within this model, the kink signifies a transition from the low-temperature magnetoconductance regime (where the interaction-driven linear \(\sigma(T)\) temperature dependence dominates and the exponential term may be neglected) to the high-temperature regime governed by the steep exponential \(\rho(T)\) rise.
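For concreteness, a numerical sketch of this procedure is given below: it combines Eqs. (10) and (11) into \(\rho(B,T)\), mimics the cutoff of the interaction correction in Eq. (12), and extracts the prefactor \(a_{\sigma}\) by finite differences. All parameter values (alpha, beta, xi, the densities, and the conductivities) are illustrative assumptions, not fitted values.

```python
import numpy as np

# Illustrative parameter values (not fitted to any real sample)
alpha, beta, xi = 4.0, 0.05, 0.01        # K per 10^11 cm^-2, K/T^2, K^2/T^2
n, n_c = 2.0, 0.8                        # carrier densities in 10^11 cm^-2
sigma_D, d_sigma, rho1 = 40.0, 5.0, 6.0  # sigma in e^2/h, rho1 in h/e^2

def rho_model(B, T):
    # Eq. (10) with Delta(T, B, n) from Eq. (11); the bracketed term of
    # Eq. (12) is mimicked by a smooth cutoff of d_sigma above T_B ~ Delta/2
    delta_over_T = alpha * (n - n_c) / T - beta * B**2 / T - xi * B**2 / T**2
    T_B = alpha * (n - n_c) / 2.0
    sigma_part = sigma_D - d_sigma * np.exp(-T / T_B)
    return 1.0 / sigma_part + rho1 * np.exp(-delta_over_T)

def a_sigma(T, dB=1e-2):
    # a_sigma = -(1/2) d^2 sigma / dB^2 at B = 0, via central differences
    sig = lambda B: 1.0 / rho_model(B, T)
    return -0.5 * (sig(dB) - 2.0 * sig(0.0) + sig(-dB)) / dB**2

for T in (0.5, 1.0, 2.0, 4.0):
    print(f"T = {T:.1f} K: a_sigma = {a_sigma(T):+.4f} (e^2/h) T^-2")
```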
We emphasize that neither regime is related to the diffusive regime of interactions. This conclusion casts doubt on early attempts to use two-parameter scaling to describe the magnetoresistance \(\sigma(B)\) and the temperature dependence \(\rho(T)\) within the renormalization-group approach.
Figure 12: Fitting the \(\rho(T,B=0)\) dependences (left) and \(a_{\sigma}(T)\) (right) with the same set of fitting parameters. Carrier densities (from top to bottom) are \(n=1.5,2.0,2.5\), and \(3.25\times 10^{11}\)cm\({}^{-2}\). Vertical arrows point at the kink positions. Adapted from [54].

Thus, within the framework of a phenomenological two-phase model with two scattering channels, it is possible to explain all the observed features of transport and magnetotransport in a parallel field. Importantly, their characteristic temperatures are close to the crossover temperature \(T_{dM/dn}(n)\) at which the spin magnetization per electron changes sign [53] (see the inset in Fig. 9). Physically, this means that at temperatures below \(T_{dM/dn}(n)\), collective droplets with a large spin (the minority phase) "melt" with increasing density. In other words, the electrons added to the Fermi liquid improve screening, thereby promoting the disappearance of spin droplets. At temperatures above \(T_{dM/dn}(n)\), on the contrary, the number of spin droplets increases with increasing density; in this case, the electrons added to the 2D system prefer to combine and form new spin droplets. In Ref. [54], it was concluded that \(T^{*}\) may be related to the averaged energy spectrum of the SD phase.
### Phase separation effect in spin susceptibility
Using a vector magnetic field technique with two independent superconducting coils, SdH oscillations were precisely measured and analyzed in various in-plane fields in Ref. [50]. Earlier [77; 79; 80], the oscillatory component \(\delta\rho_{xx}\) was shown to be well fitted by the conventional Lifshitz-Kosevich formula [79; 80; 81; 82]; this enables accurate extraction of the spin susceptibility \(\chi^{*}\) and the density of mobile carriers \(n_{\rm SdH}\) from the beating of the oscillations. In particular, \(\chi^{*}\) values have been determined with an accuracy of \(\sim(1-2)\%\) as a function of the in-plane field.
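As a reminder of this procedure, the sketch below evaluates the thermal damping factor \(X/\sinh X\) of the Lifshitz-Kosevich amplitude, from which the renormalized mass (and hence \(\chi^{*}\propto g^{*}m^{*}\)) is usually extracted; the effective-mass value used here is an illustrative assumption, not a measured one.

```python
import numpy as np

k_B = 1.380649e-23      # J/K
hbar = 1.054571817e-34  # J*s
e = 1.602176634e-19     # C
m_e = 9.1093837015e-31  # kg

def lk_thermal_factor(T, B_perp, m_star):
    # X/sinh(X) damping of the Lifshitz-Kosevich amplitude,
    # with X = 2 pi^2 k_B T / (hbar omega_c) and omega_c = e B / (m* m_e)
    omega_c = e * B_perp / (m_star * m_e)
    X = 2.0 * np.pi**2 * k_B * T / (hbar * omega_c)
    return X / np.sinh(X)

# Illustrative values: a renormalized mass m* ~ 0.3 m_e and B_perp = 1 T
for T in (0.1, 0.3, 0.5):
    print(f"T = {T} K: damping = {lk_thermal_factor(T, 1.0, 0.3):.3f}")
```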
Figure 13 shows the main result of [50]: a sharp non-monotonic dependence of \(\chi^{*}\) on the in-plane field. The characteristic \(\delta\chi^{*}(B)/\chi^{*}(0)\) variations range from \(\sim 25\%\) at low densities to \(\sim 6\%\) at the high density \(10\times 10^{11}\)cm\({}^{-2}\). The data reported in [50] coincide in the \(B_{\parallel}\to 0\) limit with the \(\chi^{*}(B=0)\) values reported in Refs. [66; 77]. The characteristic field of the \(\chi^{*}(B)\) minimum, \(B_{\parallel}\sim 1\,\)T for \(n=(1.1-2)\times 10^{11}\)cm\({}^{-2}\), is much weaker than the field of complete spin polarization of the 2D system, \(B_{p}\sim 20\,\)T. Evidently, in a homogeneous single-phase Fermi liquid the only characteristic field is \(B_{p}\).
The spin susceptibility variations \(\delta\chi^{*}(B)\) measured from SdH oscillations are relevant to the mobile carriers. The \(\delta\chi^{*}(B)\) data also appear to correlate (i) with the variation of the mobile carrier density \(\delta n_{\rm SdH}\) (see Fig. 14), and (ii) with the thermodynamic magnetization \(\partial M/\partial n\) of the collective localized states \(M(B)\) (see Fig. 8) [53]. This correlation suggests that the observed changes in the properties of the extended states are caused by the changes in the magnetization of the localized states and by the subsequent redistribution of carriers between the two subsystems. The range of accessible densities over which \(\delta n_{\rm SdH}\) could be measured is limited from the low-density side by the transition to the fully localized state. There the \(n_{\rm SdH}(B)\) variation cannot be measured precisely, and the variations of \(\chi^{*}(B)\) cannot be traced to higher fields, because the application of an in-plane field quickly causes localization of the 2D system [83; 84; 85; 86].
Figure 14: Correlation between the in-plane field dependence of (a) \(\chi^{*}(B)/\chi^{*}(0)\) and (b) the density \(n_{\rm SdH}(B)\). The red curve shows a \(\tanh(\mu_{B}B/k_{B}T)\) fit of the experimental \(M(B)\) data. The zero-field densities are \(n_{0}=1.61\times 10^{11}\)cm\({}^{-2}\) for (a) and (b), and \(1.4\times 10^{11}\)cm\({}^{-2}\) for (c). Temperature \(T=0.1\,\)K. Adapted from Ref. [50].

From the density redistribution at ultralow temperature it follows that the energy of the localized states lies in the close vicinity of the Fermi energy, in order to allow for carrier exchange between the two electronic subsystems at ultralow temperatures. No temperature dependence of \(\delta n_{\rm SdH}\) was observed within the range \(0.1-0.5\,\)K; hence, the carrier redistribution occurs elastically, via tunneling. The energy diagram describing schematically the two-phase state is shown in Fig. 15. Note that this picture is essentially different from the conventional model of disorder-localized single-particle states in the tail of the conduction band [87; 88; 8].
It is worth noting that the Fermi-liquid density deduced from SdH oscillations in the phase-separated system is determined by the local density in the Fermi-liquid "lakes" (where the carriers possess the longest relaxation time), rather than by the total or average density; this picture holds as long as the delocalized states (Fermi-liquid lakes) percolate. The carrier redistribution between the two phases in the 2D system is not easy to determine by other techniques. For example, capacitance measurements taken at frequencies \(10^{1}-10^{5}\,\)Hz (\(1\,\)nF, \(10\,\)kOhm/\(\square\)) probe the total charge density, which includes both SD and mobile states. In order to separate the SD and FL states, the capacitance measurements would have to be done at frequencies of \(10^{10}-10^{12}\,\)Hz, inaccessible for the gated structure. Hall measurements also cannot shed light on the density distribution of the delocalized and SD states, because the Hall voltage becomes irrelevant to the carrier density in the vicinity of the localization transition [89].
Measurements [50] have been performed on a gated Si-MOS structure at a fixed gate voltage \(V_{g}\), while \(B_{\parallel}\) and \(T\) were varied. Under this condition the total charge is conserved. Therefore, a change in the mobile carrier density (\(\delta n_{\rm SdH}\)) in the FL regions can only occur via carrier transfer to the localized regions (SD) and back [53; 42].
To describe the data, a simple thermodynamic model with two phases coexisting in equilibrium was applied in Ref. [50]. The model was found to be capable of explaining the results qualitatively, and even quantitatively, with some parameters determined in experiments. In particular, the \(n_{\rm FL}(B)\) dependence calculated within this model for a representative density of \(1.4\times 10^{11}\)cm\({}^{-2}\) is shown in Fig. 16. It is rather similar to the direct experimental data of Fig. 14b; the similarity supports the validity of the two-phase thermodynamic approach.
Summarizing the content of this section, we conclude that the results of [50; 54] give reason to believe that phase separation in a correlated 2D electron system exists not only near the transition to the insulating state (as was revealed in local compressibility measurements [30]), but also in a wide range of densities, even deep in the "metallic" regime of high conductivity \(\sigma=(3-80)\times(e^{2}/h)\)[90].
## V Conclusions
A two-dimensional electron system in silicon structures has for the last 50 years served as a research platform where many new exciting effects have been discovered, including the integer quantum Hall effect, negative electron compressibility, strong renormalization of the electron effective mass and spin susceptibility, etc. This 2D electron system is strongly correlated in a wide range of densities, where the energy of interparticle interactions is much greater than the kinetic Fermi energy.
Local compressibility measurements [30] evidenced the emergence of an inhomogeneous state on a microscopic scale in the 2D system as the carrier concentration is decreased toward the transition to the insulating state. For a long time, this result was not appreciated, and a macroscopic system with high conductivity was considered, on average, a homogeneous Fermi liquid. Within such an approach, the averaged values of the Fermi-liquid parameters were experimentally determined and the averaged charge-transport properties were quantitatively described. However, later thermodynamic measurements [53] revealed signatures of the coexistence in thermodynamic equilibrium, over a wide range of densities, of a majority Fermi liquid and a minority phase of collective localized states with large spin.
Subsequent precision measurements of SdH oscillations in the presence of an in-plane field revealed a sharp change in the spin susceptibility \(\chi^{*}(B_{\parallel})\) and a simultaneous change in the concentration of mobile carriers \(\delta n_{\rm SdH}(B)\) in the correlated 2D electron system. The two effects correlate well with each other and with the thermodynamic magnetization of the localized SD states. It was found that the origin of these variations is the magnetization of collective localized states ("spin droplets") and, as a result, the redistribution of carriers between the two phases. Independent measurements of the spin magnetization and of the magnetoresistance in a weak in-plane field, as well as of the temperature dependence of the resistance, revealed the existence of a new energy scale \(T^{*}(n)\ll T_{F}\), which marks a crossover between the regime of predominant proliferation of the SD states and the regime of their disappearance. The results of the considered experiments were described within the framework of a phenomenological two-phase model. These results and their successful description with the two-phase model provide solid evidence for phase separation in the interacting 2D electron system even at relatively high carrier densities, deep in the "metallic" regime of high conductivity \(\sigma=(3-80)\times(e^{2}/h)\)[90]. The latter regime was commonly considered a pure Fermi liquid.

Figure 15: (a) Schematic spatial arrangement of the two-phase state and (b) the energy band diagram of the two-phase system.

Figure 16: Model curve \(\delta N_{1}(B_{\parallel})\) calculated from experimental data as described in the text. Adapted from Ref. [50].
In quasi-one-dimensional systems, the main driving force of phase separation is associated with the nesting of the Fermi surface, which leads to the appearance of a spin or charge density wave coexisting with a paramagnetic or superconducting metallic phase in the vicinity of the phase transition [91; 4; 92]. In 2D systems, an instability can also occur in the charge or spin exchange channel. An interesting and still debated issue is the microscopic mechanism behind the electronic phase separation that is experimentally observed in correlated low-dimensional electron systems.
Several scenarios have been considered theoretically, in which minority phases, such as Wigner-solid "droplets" or spin-polarized "droplets", emerge in the majority Fermi liquid due to local Wigner crystallization [93; 94] or a local Stoner instability [95; 96; 97; 98; 99; 100; 101], or, alternatively, in which the topology of the Fermi surface changes [102; 103; 104; 105; 106].
The experimental results presented in this review provide evidence for the existence of spin-polarized droplets as the minority phase in the majority Fermi-liquid sea. It is possible, however, that with a stronger interaction or a weaker disorder, an instability in the charge channel may also manifest itself.
Attempts to ignore the tendency toward spin/charge instability or an instability of the Fermi surface, and to consider only the semiclassical effects of disorder and screening [107], although able to describe some experimental results (such as negative compressibility and zero-field transport), give an overly simplified picture of the phase-separation phenomenon and miss the structure of the heterophase state.
The microscopic mechanism responsible for the phase separation, for the redistribution of carriers between two phases, as well as the energy structure of the minority phase remain interesting and still open issues.
## VI Acknowledgements
The author is grateful to B. Altshuler, G. Bauer, G. Brunthaler, I.S. Burmistrov, M. D'Iorio, J. Campbell, V.S. Edel'man, M.E. Gershenson, N. Klimov, H. Kojima, S. V. Kravchenko, A. Yu. Kuntsevich, D. L. Maslov, L. A. Morgan, O. Prus, M. Reznikov, D. Rinberg, S. G. Semenchnisky, and N. Teneh for fruitful collaboration in developing experimental methods, performing measurements, discussing the results, and writing the original papers. Financial support from the State assignment of the research at the P.N. Lebedev Physical Institute (Grant # 0019-2019-0006) and from the Russian Foundation for Basic Research (#18-02-01013) is acknowledged.
|
2305.15068 | ToMChallenges: A Principle-Guided Dataset and Diverse Evaluation Tasks
for Exploring Theory of Mind | Theory of Mind (ToM), the capacity to comprehend the mental states of
distinct individuals, is essential for numerous practical applications. With
the development of large language models (LLMs), there is a heated debate about
whether they are able to perform ToM tasks. Previous studies have used
different tasks and prompts to test the ToM on LLMs and the results are
inconsistent: some studies asserted these models are capable of exhibiting ToM,
while others suggest the opposite. In this study, We present ToMChallenges, a
dataset for comprehensively evaluating the Theory of Mind based on the
Sally-Anne and Smarties tests with a diverse set of tasks. In addition, we also
propose an auto-grader to streamline the answer evaluation process. We tested
three models: davinci, turbo, and gpt-4. Our evaluation results and error
analyses show that LLMs have inconsistent behaviors across prompts and tasks.
Performing the ToM tasks robustly remains a challenge for the LLMs. In
addition, our paper wants to raise awareness in evaluating the ToM in LLMs and
we want to invite more discussion on how to design the prompts and tasks for
ToM tasks that can better assess the LLMs' ability. | Xiaomeng Ma, Lingyu Gao, Qihui Xu | 2023-05-24T11:54:07Z | http://arxiv.org/abs/2305.15068v2 | # ToMchallenges: A Principle-Guided Dataset and Diverse Evaluation Tasks for Exploring Theory of Mind
###### Abstract
Theory of Mind (ToM), the capacity to comprehend the mental states of distinct individuals, is essential for numerous practical applications. With the development of large language models, there is a heated debate about whether they are able to perform ToM tasks. Previous studies have used different tasks and prompts to test ToM in large language models, and the results are inconsistent: some studies asserted these models are capable of exhibiting ToM, while others suggest the opposite. In this study, we present ToMChallenges, a dataset for comprehensively evaluating Theory of Mind based on the Sally-Anne and Smarties tests. We created 30 variations of each test (e.g., changing the person's name, location, and items). For each variation, we test the model's understanding of different aspects: reality, belief, 1st-order belief, and 2nd-order belief. We adapt our data for various tasks by creating unique prompts tailored for each task category: Fill-in-the-Blank, Multiple Choice, True/False, Chain-of-Thought True/False, Question Answering, and Text Completion. If a model has a robust ToM, it should be able to achieve good performance for different prompts across different tests. We evaluated two GPT-3.5 models, text-davinci-003 and gpt-3.5-turbo-0301, on our dataset. Our results indicate that consistent performance in ToM tasks remains a challenge.
## 1 Introduction
As large language models (LLMs) become increasingly prevalent in applications in natural language understanding and dialogue generation (Devlin et al., 2019; Brown et al., 2020; Raffel et al., 2020), the demand for models to develop Theory of Mind (ToM) has grown rapidly. Theory of Mind refers to the ability to impute mental states, e.g., beliefs, emotions, and intentions, to different individuals (Wimmer and Perner, 1983; Gallese and Sinigaglia, 2011). ToM is commonly measured through false-belief tasks in psychology studies (Dennett, 1978), as these tasks unambiguously show whether children can distinguish their own belief (true belief) from other people's beliefs (false belief). For example, in the Smarties test, a classic false-belief task, the child is shown a Smarties candy box and asked what they believe is in the box. Naturally, the child answers 'Smarties.' The experimenter then opens the box to show the child that it was filled with something else, like crayons. Then, the child is asked what they think another person, who hasn't seen what's inside the box, would believe is inside the Smarties box. Children younger than 4 years old answer 'crayons,' as they assume that other people know what they know, whereas older children answer 'Smarties,' as they are able to reason that other people see the label on the box and assume that there are Smarties inside (Gopnik and Astington, 1988). Typically, children are able to pass false-belief tasks around age 4 or 5 (Wellman et al., 2001). The development of ToM is closely intertwined with language development, as both abilities develop around the same age and are highly correlated, whereas other cognitive abilities do not correlate with ToM as highly as language does (Milligan et al., 2007). Since mental states cannot be observed through behavior, language is indispensable for understanding and reasoning about mental states. Although the exact nature of the relationship between language and ToM is still under study, some studies propose that the relation can be causal (De Villiers and Pyers, 2002; Moore et al., 1990). Theoretically, LLMs could develop ToM given their powerful natural language understanding capacity. Testing ToM in LLMs could bring more insight into the relationship between language and ToM development. In addition, ToM is also important for improving the applications of LLMs, as we want the models to generate appropriate and context-aware responses. For example, when requesting a model to continue generating a story, we anticipate that it will recognize distinct beliefs held by different characters. Likewise, we expect a chatbot to provide more tailored and empathetic responses to various users.

Figure 1: An example of the Smarties test, and the Mentalizing and Nonmerging criteria.
There is an ongoing debate about whether ToM has already emerged in current models, with some studies asserting that the models exhibit ToM (Kosinski, 2023; Wu et al., 2023), some suggesting the opposite (Le et al., 2019; Nematzadeh et al., 2018; Sap et al., 2022; Ullman, 2023), and others maintaining caution and raising questions (Sileo and Lemould, 2023; Aru et al., 2023). The inconsistency of the findings may be largely attributed to the variance of ToM evaluation methods adopted in these studies.
ToM theories in the field of child development (Quesque and Rossetti, 2020) suggest that we should ensure the measure focuses on mental states rather than irrelevant confounding processes (Mentalizing; e.g., focus on emotion rather than facial expression categorization), as well as maintain the distinction between the present and the imagined mental states (Nonmerging). Tasks that fail to satisfy the two criteria shouldn't be regarded as valid assessments. In Ullman's (2023) study, variations such as transparent access, uninformative labels, and others were used to examine the robustness of models. However, these variations primarily incorporate pragmatic knowledge and inferential bias, which deviate from the criterion of Mentalizing and do not effectively maintain the Nonmerging requirement. Similarly, testing a few examples in a single format, as done by Kosinski (2023), also deviates from Mentalizing, because corner-cutting heuristics may occur when the language of the task itself contains regularities and correlations (Le et al., 2019). Moreover, LLMs are shown to be sensitive to the choice of prompts (Jiang et al., 2020; Zhao et al., 2021; Elazar et al., 2021; Schick and Schutze, 2022). To our knowledge, none of these works consider the impact of prompts, as Sap et al. (2022) framed the task as question answering, and Kosinski (2023) and Ullman (2023) framed the task as story completion.
To improve the validity of ToM tests, one solution is to increase the open-endedness of the tasks, as proposed by Aru et al. (2023), while still adhering to the requirements of Mentalizing and Nonmerging as outlined by Quesque and Rossetti (2020). Open-ended tasks increase the diversity of task formats, making it harder for LLMs to use shortcuts to pass the tests. At the same time, following the requirements of Mentalizing and Nonmerging ensures a rigorous theoretical focus and meaningful results.
In this paper, we create a dataset based on two widely used false-belief tasks in human studies: the Sally-Anne test (Wimmer and Perner, 1983; Baron-Cohen et al., 1985) and the Smarties task (also known as the Crayon Box test, or the Unexpected Contents Test) (Gopnik and Astington, 1988).1 According to Quesque and Rossetti (2020), the false-belief tasks meet the Mentalizing and Nonmerging criteria. To enhance the open-endedness of the tests, we adapt our data for various tasks by creating unique prompts tailored for each task: fully-constrained (Fill-in-the-Blank, Multiple Choice, True/False), semi-constrained (Chain-of-Thought True/False, Question Answering), and open-ended generation (Text Completion). Next, we evaluate the performance of two versions of GPT-3.5 (text-davinci-003 and gpt-3.5-turbo-0301) on our dataset. Our results demonstrate that the models cannot reliably perform the ToM tasks. The Text Completion task leads to the best results, followed by the Fill-in-the-Blank task. In addition, the models also exhibit different accuracy patterns on the Sally-Anne and Smarties tests.
Footnote 1: Our dataset is available at [https://github.com/xiaomeng-ma/ToMchallenges](https://github.com/xiaomeng-ma/ToMchallenges).
## 2 Related Work
Nematzadeh et al. (2018) were the first to propose using Theory of Mind (ToM) tasks from developmental psychology to evaluate different question-answering models. Their findings indicated that all the tested models were unsuccessful in completing their tasks, suggesting that these models lack the ability to keep track of inconsistent beliefs or states of the world. Le et al. (2019) then showed that the QA benchmarks of the time suffered from data biases, such that corner-cutting heuristics could be exploited because of a strict event-sequence template for each task type. To address this issue, they proposed new evaluation methods as well as a new dataset. Sap et al. (2022) evaluated GPT-3 (Brown et al., 2020) on this dataset and concluded that the models struggle with the task, with an accuracy of 55-60% on questions regarding mental states, even for GPT-3-davinci after few-shot finetuning.
These studies evaluated ToM on different datasets and tasks. Most of the studies develop their datasets based on the false-belief tasks used in psychology studies, namely the Smarties test and the Sally-Anne test (for a detailed description, see Section 3.1). For example, Le et al. (2019) proposed the ToMi dataset, which is based on the Sally-Anne test and the bAbI dataset (Weston et al., 2015). The findings of that study showed that the models do not reliably exhibit ToM. Kosinski (2023) used a differently crafted dataset based on both the Sally-Anne test and the Smarties test and showed that the text-davinci-003 model is able to perform ToM tasks.
Different from these works, we focus on the validity of ToM tests, considering the Mentalizing and Nonmerging criteria. We also consider the impact of prompts on the model performance, and propose to adapt the data for various tasks by creating different prompts, and construct a principle-guided dataset and diverse evaluation tasks for exploring ToM.
## 3 ToMChallenges and Tasks
We aimed to build a corpus based on two types of tests, the _Sally-Anne Test_ and the _Smarties Test_, following the **Mentalizing** and **Nonmerging** criteria proposed by Quesque and Rossetti (2020). Below we describe how we construct the ToMChallenges data, and how we design the diverse evaluation tasks.
### Dataset Construction
While Le et al. (2019) proposed the inclusion of distractors to prevent models from adopting corner-cutting heuristics, it is important to note that distractors are more relevant for fine-tuning than for zero-shot probing. Given the ongoing discussions surrounding the zero-shot performance of models in recent studies (Kosinski, 2023; Ullman, 2023) and the fact that finetuning is not yet available for GPT-3.5, we introduce a distractor-less dataset as described below to maintain focus, with examples displayed in Tables 1 and 2. We created 30 variations of each test (e.g., changing the person's name, location, and items); the details of the tests and variables are described as follows.
**Sally-Anne Test.** The Sally-Anne Test is related to another person's false belief about the container of an object, because the person is unaware of the container change while absent. The narrative involves several components: (1) a location L, where the event takes place, (2) two agents a and b, who maintain distinct mental states, (3) an object o, which is moved from one container to another during the narrative, and (4) two containers c1 and c2, representing the object's initial and updated positions, respectively (see Table 1).
\begin{table}
\begin{tabular}{l l}
\hline \hline
Variables & L: attic, a: Neila, b: Juanita, o: towel, c1: closet, c2: cabinet \\
\hline
Narrative \(\mathcal{N}\) & _Neila_ and _Juanita_ were hanging out in the _attic_. They saw a _closet_ and a _cabinet_. They found a _towel_ in the _closet_. _Juanita_ left the _attic_. _Neila_ moved the _towel_ to the _cabinet_. \\
\hline
reality & Where is the _towel_ currently? \\
belief & Where was the _towel_ previously? \\
\multicolumn{2}{l}{After _Juanita_ came back to the _attic_, \({}^{\dagger}\)} \\
1stA & where would _Neila_ look for the _towel_? \\
1stB & where would _Juanita_ look for the _towel_? \\
2ndA & where would _Neila_ think _Juanita_ would look for the _towel_? \\
2ndB & where would _Juanita_ think _Neila_ would look for the _towel_? \\
\hline \hline
\multicolumn{2}{l}{The initial prompt with \(\dagger\) is applied to 1stA, 1stB, 2ndA, and 2ndB.} \\
\end{tabular}
\end{table}
Table 1: An example for the Sally-Anne Test.
\begin{table}
\begin{tabular}{l l}
\hline \hline
Variables & L: attic, a: Neila, b: Juanita, c: bag, o1: plate, o2: vest \\
\hline
Narrative & _Neila_ found a _bag_ in the _attic_. The label on the _bag_ says _plate_. _Neila_ couldn't see what was inside the _bag_. _Neila_ opened the _bag_ and found a _vest_. There is no _plate_ in the _bag_. _Neila_ closed the _bag_ and put it back. _Juanita_ entered the _attic_ and saw the _bag_. \\
\hline
reality & What was in the _bag_? \\
belief & What was supposed to be in the _bag_? \\
\multicolumn{2}{l}{When the _bag_ was opened, \({}^{\dagger}\)} \\
1stA & what would _Neila_ expect to find in the _bag_? \\
1stB & what would _Juanita_ expect to find in the _bag_? \\
2ndA & what would _Neila_ think _Juanita_ would expect to find in the _bag_? \\
2ndB & what would _Juanita_ think _Neila_ would expect to find in the _bag_? \\
\hline \hline
\multicolumn{2}{l}{The initial prompt with \(\dagger\) is applied to 1stA, 1stB, 2ndA, and 2ndB.} \\
\end{tabular}
\end{table}
Table 2: An example for the Smarties Test.
We extract the agent names from the CMU Name Corpus2 and manually write the options for L, o, c1, and c2 following the rules below:
Footnote 2: [https://www.cs.cmu.edu/Groups/AI/util/locations/nlp/corpora/names/](https://www.cs.cmu.edu/Groups/AI/util/locations/nlp/corpora/names/)
* The location L should be spacious enough for two people to spend time together.
* The object o should be reasonably movable by hand.
* Both containers (c1 and c2) should be capable of accommodating the object.
For the questions, reality focuses on the updated/current position of o, and belief focuses on the initial/previous position. The first-order belief questions (1stA and 1stB) ask about the agents' mental states, and the second-order belief questions (2ndA and 2ndB) ask about one agent's belief regarding another agent's mental state.
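To make the template-based generation concrete, the following minimal Python sketch reproduces the narrative and questions of Table 1 from a tuple of variables; the rendering functions and class names are our own illustrative choices, not the exact generation code of the dataset.

```python
from dataclasses import dataclass

@dataclass
class SallyAnneVars:
    # Variable scheme of Table 1: location, agents, object, two containers
    L: str
    a: str
    b: str
    o: str
    c1: str
    c2: str

def narrative(v: SallyAnneVars) -> str:
    return (f"{v.a} and {v.b} were hanging out in the {v.L}. "
            f"They saw a {v.c1} and a {v.c2}. They found a {v.o} in the {v.c1}. "
            f"{v.b} left the {v.L}. {v.a} moved the {v.o} to the {v.c2}.")

def questions(v: SallyAnneVars) -> dict:
    back = f"After {v.b} came back to the {v.L}, "
    return {
        "reality": f"Where is the {v.o} currently?",
        "belief": f"Where was the {v.o} previously?",
        "1stA": back + f"where would {v.a} look for the {v.o}?",
        "1stB": back + f"where would {v.b} look for the {v.o}?",
        "2ndA": back + f"where would {v.a} think {v.b} would look for the {v.o}?",
        "2ndB": back + f"where would {v.b} think {v.a} would look for the {v.o}?",
    }

v = SallyAnneVars(L="attic", a="Neila", b="Juanita",
                  o="towel", c1="closet", c2="cabinet")
print(narrative(v))
print(questions(v)["2ndA"])
```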
**Smarties Test.** The Smarties Test is related to another person's false belief about the object in a specific container, as the container is marked as holding a different object. Although it also involves one location and two agents, there are two differences: (1) only one container c contains the object, and (2) two objects (o1 and o2) are mentioned in the narrative, with o1 being labeled and o2 actually occupying c. We choose the location with the same rule described above, and follow the rules below for the container and objects:
* The container c is likely to obscure the object inside it.
* The container c should be capable of accommodating both objects.
The questions of the Smarties Test are similar in nature to those of the Sally-Anne Test, but the reality question focuses on the actual object in the container, and there is no belief question for this test.
### Task Formulation
While we allow the model to generate freely, we restructure the two tests with diverse prompts, effectively transforming them into distinct task formats. Given the different levels of generation freedom, we categorize tasks into three groups: fully-constrained, semi-constrained, and open-ended generation.
**Fully-Constrained.** Fully-constrained generation limits the model's output to specific predefined structures or responses. In this group, we design three tasks, i.e., Fill-in-the-Blank, Multiple Choice, and True or False questions.
**Semi-Constrained.** Semi-constrained generation involves partial guidance by specific rules or structures, while still permitting some flexibility in the model's responses. This group encompasses two tasks, i.e., Chain-of-Thought (CoT) True or False questions and Question Answering (Q&A) tasks.
\begin{table}
\begin{tabular}{l l}
\hline \hline
Narrative \(\mathcal{N}\) & _Neila_ and _Juanita_ were hanging out in the _attic_. They saw a _closet_ and a _cabinet_. They found a _towel_ in the _closet_. _Juanita_ left the _attic_. _Neila_ moved the _towel_ to the _cabinet_. \\
\hline
Fill-in-the-Blank & Fill in the blank (\(<>\)): \(\mathcal{N}\) After _Juanita_ came back to the _attic_, _Neila_ would think _Juanita_ would look for the _towel_ in the \(<>\). Answer: \\
\hline
Multiple Choice & Choose the correct answer from A or B for the following question: Question: \(\mathcal{N}\) After _Juanita_ came back to the _attic_, where would _Neila_ think _Juanita_ would look for the _towel_? A. _cabinet_ B. _closet_ \\
\hline
True/False & Given the context, judge True or False of the given statements A and B respectively: \(\mathcal{N}\) Statements: A. Juanita would look for the towel in the cabinet. B. Juanita would look for the towel in the closet. \\
\hline
CoT True/False & Given the context, reason through statements A and B step by step and provide a True or False judgment based on the reasoning: \(\mathcal{N}\) Statements: A. Juanita would look for the towel in the cabinet. B. Juanita would look for the towel in the closet. \\
\hline
Q\&A & Answer the following questions based on the context: Context: \(\mathcal{N}\) Questions: 1. Where is the _towel_? 2. Where was the _towel_? 3. After _Juanita_ came back to the _attic_, where would _Juanita_ look for the _towel_? 4. After _Juanita_ came back to the _attic_, where would _Neila_ look for the _towel_? 5. After _Juanita_ came back to the _attic_, where would _Neila_ think _Juanita_ would look for the _towel_? 6. After _Juanita_ came back to the _attic_, where would _Juanita_ think _Neila_ would look for the _towel_? \\
\hline
Text Completion & Complete the following paragraph: \(\mathcal{N}\) After _Juanita_ came back to the _attic_, _Neila_ would think _Juanita_ would look for the _towel_ in Answer: \\
\hline \hline
\end{tabular}
\end{table}
Table 3: An illustrative example of the different task templates for the Sally-Anne Test, ignoring line breaks in the templates to save space.
**Open-Ended.** Open-ended generation enables the model to generate responses without being restricted by predefined rules or structures, leading to more diverse and varied outputs. An example of this group is Text Completion.
We demonstrate how to reframe a Sally-Anne Test example into these different tasks in Table 3. Except for the Question Answering task, where all questions are included in the same prompt when we conduct experiments, we only present the 2ndA question. For the Smarties Test, our templates use similar task descriptions but differ in the phrasing of the questions.
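A minimal sketch of how the 2ndA prompts of Table 3 can be assembled programmatically is shown below; the function name and string layout are illustrative assumptions rather than the exact templates used to build the dataset.

```python
def build_prompts(narrative: str, a: str, b: str,
                  obj: str, c1: str, c2: str, loc: str) -> dict:
    # One 2ndA prompt per task family, mirroring Table 3
    q = (f"After {b} came back to the {loc}, where would {a} "
         f"think {b} would look for the {obj}?")
    stem = (f"After {b} came back to the {loc}, {a} would think "
            f"{b} would look for the {obj}")
    statements = (f"Statements: A. {b} would look for the {obj} in the {c2}. "
                  f"B. {b} would look for the {obj} in the {c1}.")
    return {
        "fill_in_the_blank": (f"Fill in the blank (<>): {narrative} "
                              f"{stem} in the <>. Answer:"),
        "multiple_choice": (f"Choose the correct answer from A or B for the "
                            f"following question: Question: {narrative} {q} "
                            f"A. {c2} B. {c1}"),
        "true_false": (f"Given the context, judge True or False of the given "
                       f"statements A and B respectively: {narrative} {statements}"),
        "cot_true_false": (f"Given the context, reason through statements A and B "
                           f"step by step and provide a True or False judgment "
                           f"based on the reasoning: {narrative} {statements}"),
        # The dataset's Q&A prompt lists all six questions; only 2ndA shown here
        "qa": (f"Answer the following questions based on the context: "
               f"Context: {narrative} Questions: 1. {q}"),
        "text_completion": (f"Complete the following paragraph: {narrative} "
                            f"{stem} in Answer:"),
    }
```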
### Experimental Setup
We evaluate the zero-shot performance of two versions of GPT-3.5: text-davinci-003 and gpt-3.5-turbo-0301 (OpenAI, 2022). For the hyperparameters of both models, we set the temperature to 0, top_p to 1, and both the frequency penalty and the presence penalty to 0. Due to the different natures of our task designs, we choose different maximum token limits for each prompt as follows: Fill-in-the-Blank at 10 tokens, Multiple Choice at 2 tokens, True or False at 20 tokens, CoT True or False at 100 tokens, and both Question Answering and Text Completion at 50 tokens.
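A minimal sketch of the corresponding API calls, using the legacy (pre-1.0) OpenAI Python SDK that exposed the Completion and ChatCompletion endpoints, is shown below; the helper name and task keys are our own.

```python
import openai  # legacy (pre-1.0) OpenAI SDK; assumes OPENAI_API_KEY is set

# Maximum token limits per task, as listed above
MAX_TOKENS = {"fill_in_the_blank": 10, "multiple_choice": 2, "true_false": 20,
              "cot_true_false": 100, "qa": 50, "text_completion": 50}

def query_model(model: str, prompt: str, task: str) -> str:
    kwargs = dict(temperature=0, top_p=1, frequency_penalty=0,
                  presence_penalty=0, max_tokens=MAX_TOKENS[task])
    if model == "text-davinci-003":
        resp = openai.Completion.create(model=model, prompt=prompt, **kwargs)
        return resp["choices"][0]["text"].strip()
    # Chat models such as gpt-3.5-turbo-0301 use the chat endpoint
    resp = openai.ChatCompletion.create(
        model=model, messages=[{"role": "user", "content": prompt}], **kwargs)
    return resp["choices"][0]["message"]["content"].strip()
```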
## 4 Results and Analysis
In this section, we present the results of our evaluation of both models on the two tests (Sally-Anne and Smarties), the six tasks/prompts shown in Table 3, and the six question types (reality, belief, 1stA, 1stB, 2ndA, and 2ndB) shown in Tables 1 and 2. Since we created 30 stories for each test, an idealized model capable of solving Theory of Mind tasks should achieve high accuracy on all question types and in most of the stories.

Figure 2: The average accuracy of questions in the Sally-Anne test for different prompts.

Figure 3: The average accuracy of questions in the Smarties test for different prompts.
### Accuracy by Question Type
The accuracy of each question type is calculated by averaging the accuracy over all stories (e.g., an accuracy of \(50\%\) means that the model answered correctly for 15 out of the 30 stories). Figures 2 and 3 show the average accuracy of the six types of questions for different prompts.
For the Sally-Anne test, both the text-davinci-003 and turbo-0301 models achieve near-perfect accuracy on the reality, belief, and 1stA questions for all prompts, indicating that the models can reason based on facts. For the 1stB question, which requires reasoning about the beliefs of both A and B, the turbo-0301 model achieved better accuracy than the text-davinci-003 model. For the 2ndA and 2ndB questions, both models struggled to understand one person's belief about another person's belief. In addition, the models achieved the best overall performance with the Text Completion prompt, followed by the Fill-in-the-Blank prompt. Also, the introduction of Chain-of-Thought did not improve the models' performance on the True/False task.
The Smarties test showed a different accuracy pattern from the Sally-Anne test. Both models had difficulty answering the belief and 1stA questions correctly. However, for the 2ndA and 2ndB questions, the text-davinci-003 model achieved better performance on the Smarties test than on the Sally-Anne test. We observe that the Text Completion prompt works best for the text-davinci-003 model, and the Multiple Choice prompt works best for the turbo-0301 model.
By comparing the different tests, prompts, and questions, it is clear that the models cannot reliably perform ToM tasks. The models are sensitive to the prompts, and framing the stories as a Text Completion task works better than the other tasks.
### Accuracy by Stories
The accuracy of each story is calculated as the average accuracy over the six question types. Although the stories are generated from the same template, the models produced different answers for them. Tables 4 and 5 show the average accuracy for the Sally-Anne and Smarties tests. For the Sally-Anne test, text-davinci-003 produced more stable results across different prompts, since all stories achieved 50% accuracy for the Multiple Choice, True/False, CoT True/False, and Question Answering prompts. The turbo-0301 model performs better, since its average accuracy is higher across all prompts.
For the Smarties test, the turbo-0301 model has better and more stable performance than the text-davinci-003 model, as its average accuracy is higher and its range is smaller.
### Error Analysis
We further looked into the errors the models made, especially for the questions on which the models had low accuracy. For the Sally-Anne task, the text-davinci-003 model made errors on the 1stB, 2ndA, and 2ndB questions by assuming that person B knew the new location of the item. In the context, person B does not know the new location of the item, since person A moved it after B left the room. However, the model could not reason about this aspect and assumed that person B knew that person A moved the item. The turbo-0301 model could reason through most of the 1stB questions, but failed on the 2ndA and 2ndB questions. These results indicate that the 2nd-order belief task is still very difficult for the models.
For the Smarties test, the text-davinci-003 model struggles most on the 1stB question, and not so much on the 2ndA and 2ndB questions. The common error for the 1stB question is that the model assumed person B knew what was inside the container, even though B did not open the container and did not know that the item inside was not the item indicated on the label.
\begin{table}
\begin{tabular}{l c c c c}
\hline \hline
Sally-Anne & \multicolumn{2}{c}{text-davinci-003} & \multicolumn{2}{c}{turbo-0301} \\
N = 6 & mean & range & mean & range \\
\hline
FB & 0.61 & 0.5 - 0.83 & 0.93 & 0.67 - 1 \\
MC & 0.5 & 0.5 - 0.5 & 0.82 & 0.5 - 1 \\
TF & 0.5 & 0.5 - 0.5 & 0.65 & 0.5 - 0.83 \\
CoT-TF & 0.5 & 0.5 - 0.5 & 0.57 & 0.5 - 1 \\
QA & 0.5 & 0.5 - 0.5 & 0.68 & 0.5 - 1 \\
Comp & 0.72 & 0.5 - 1 & 0.92 & 0.67 - 1 \\
\hline \hline
\end{tabular}
\end{table}
Table 4: The average accuracy for stories in the Sally-Anne test for different prompts. The terms Fill-in-the-Blank, Multiple Choice, True/False, CoT True/False, Question Answering, and Text Completion are abbreviated as FB, MC, TF, CoT-TF, QA, and Comp, respectively.
\begin{table}
\begin{tabular}{l c c c c}
\hline \hline
Smarties & \multicolumn{2}{c}{text-davinci-003} & \multicolumn{2}{c}{turbo-0301} \\
N = 6 & mean & range & mean & range \\
\hline
FB & 0.78 & 0.5 - 1 & 0.95 & 0.67 - 1 \\
MC & 0.84 & 0.83 - 1 & 0.96 & 0.83 - 1 \\
TF & 0.33 & 0.17 - 0.5 & 0.46 & 0.33 - 0.83 \\
CoT-TF & 0.44 & 0.17 - 0.67 & 0.34 & 0.17 - 0.5 \\
QA & 0.79 & 0.5 - 1 & 0.37 & 0.17 - 0.67 \\
Comp & 0.85 & 0.67 - 1 & 0.78 & 0.67 - 1 \\
\hline \hline
\end{tabular}
\end{table}
Table 5: The average accuracy for stories in the Smarties test for different prompts.
In addition, we found that the models cannot reliably infer the mental states of agents in the story. For example, the turbo-0301 model has 0 accuracy on the 1stA question for the Smarties test with the Question Answering prompt. The model actually refused to answer the 1stA question (e.g., 'After B opened the container, what would A expect to find in the bag?') by producing answers like: 'The context does not provide information on what A would expect to find in the backpack after B opened it.' This type of error indicates that the model does not have a robust understanding of mental states.
## 5 Conclusions
In this study, we proposed ToMChallenges to comprehensively test ToM in LLMs. The dataset is constructed based on the Sally-Anne and Smarties tests. For each test, we created a template to generate variations of the test. In addition, we incorporated six types of questions to examine the model's understanding of reality, belief, 1st-order belief, and 2nd-order belief. We also included six tasks with different prompts for evaluation, considering the impact of prompts on model performance. This evaluation method serves a dual purpose: it measures not only whether the model has ToM capacity, but also how robustly the model performs the ToM tasks.
Using 30 variations of the Sally-Anne and Smarties tests, we found that the GPT-3.5 models cannot reliably perform the ToM tasks. Overall, the models performed better on the Sally-Anne test than on the Smarties test. The type of prompt greatly affects the models' performance: they achieved the best accuracy on the Text Completion task, followed by the Fill-in-the-Blank task, and they struggled on the 1stB, 2ndA, and 2ndB questions for both tests. If a model has a robust representation of ToM, it should perform well across tests, questions, and prompts. However, our evaluation shows that the models are sensitive to the test template, task/prompt, and question type, and that they cannot reliably perform well on the ToM tasks.
Further studies could investigate how and why different prompt types affect the models' performance. We hope our study invites more discussion on the evaluation and improvement of ToM in LLMs.
|
2307.03073 | Proto-CLIP: Vision-Language Prototypical Network for Few-Shot Learning | We propose a novel framework for few-shot learning by leveraging large-scale
vision-language models such as CLIP. Motivated by unimodal prototypical
networks for few-shot learning, we introduce Proto-CLIP which utilizes image
prototypes and text prototypes for few-shot learning. Specifically, Proto-CLIP
adapts the image and text encoder embeddings from CLIP in a joint fashion using
few-shot examples. The embeddings from the two encoders are used to compute the
respective prototypes of image classes for classification. During adaptation,
we propose aligning the image and text prototypes of the corresponding classes.
Such alignment is beneficial for few-shot classification due to the reinforced
contributions from both types of prototypes. Proto-CLIP has both training-free
and fine-tuned variants. We demonstrate the effectiveness of our method by
conducting experiments on benchmark datasets for few-shot learning, as well as
in the real world for robot perception. The project page is available at
https://irvlutd.github.io/Proto-CLIP | Jishnu Jaykumar P, Kamalesh Palanisamy, Yu-Wei Chao, Xinya Du, Yu Xiang | 2023-07-06T15:41:53Z | http://arxiv.org/abs/2307.03073v3 | # Proto-CLIP: Vision-Language Prototypical Network for Few-Shot Learning
###### Abstract
We propose a novel framework for few-shot learning by leveraging large-scale vision-language models such as CLIP [1]. Motivated by unimodal prototypical networks for few-shot learning, we introduce Proto-CLIP that utilizes image prototypes and text prototypes for few-shot learning. Specifically, Proto-CLIP adapts the image encoder and text encoder in CLIP in a joint fashion using few-shot examples. The two encoders are used to compute prototypes of image classes for classification. During adaptation, we propose aligning the image prototypes and the text prototypes of the corresponding classes. Such alignment is beneficial for few-shot classification due to the reinforced contributions from both types of prototypes. We demonstrate the effectiveness of our method by conducting experiments on benchmark datasets for few-shot learning, as well as in the real world for robot perception1.
Footnote 1: Project page is available at [https://irvlutd.github.io/Proto-CLIP](https://irvlutd.github.io/Proto-CLIP)
**Keywords:** Robot Perception, Object Recognition, Few-Shot Learning, Contrastive-Learning, Vision-Language, Multimodal
## 1 Introduction
Building autonomous robots that can help people perform various tasks is the dream of every roboticist. Nowadays, most robots are working in factories and warehouses by performing pre-programmed repetitive tasks such as assembling and delivering. In the future, we believe that there will be intelligent robots that can perform tasks in human environments autonomously. For example, people can instruct a robot by saying "bring me a bottle of water" or "wash the mug on the table", then the robot will execute the instructions accordingly. In these scenarios, robots need to recognize objects from sensory data in order to understand the required tasks. In this work, we develop a novel few-shot learning method that can enable robots to recognize novel objects from just a few example images per object.
We believe that few-shot learning [2] is a promising paradigm for enabling robots to recognize a large number of objects. The appeal lies in the ease of data collection: just a few example images are sufficient for teaching a robot a novel object. On the contrary, object model-based approaches build 3D models of objects and then use these 3D models [3] for object recognition. Object category-based approaches focus on recognizing the category labels of objects, such as the 80 categories in the MSCOCO dataset [4]. The limitation of model-based object recognition is the difficulty of obtaining a large number of 3D models for the many objects in the real world; current 3D scanning techniques cannot deal well with metallic or transparent objects. For category-based object recognition, it is difficult to obtain a large number of images for each category in robotic settings. Large-scale datasets for object categories such as ImageNet [5] and Visual Genome [6] are collected from the Internet, and these Internet images are not very suitable for learning object representations for robot manipulation due to domain differences. Given the limitations of model-based and category-based object recognition, if a robot can learn to recognize a new object from a few images of that object, it becomes possible to scale up the number of objects the robot can recognize in the real world.
The main challenge in few-shot learning is how to achieve generalization with very limited training examples. Learning good visual representations is the key to achieving good performance in few-shot learning [7]. Although Internet images are quite different from robot manipulation settings, they can be used to learn powerful visual representations. Recently, the CLIP (Contrastive Language-Image Pre-training) model [1], trained with a large number of image-text pairs from the Internet, has achieved promising _zero-shot_ image recognition performance. Using the visual and language representations from CLIP, several few-shot learning approaches [8; 9; 10] have been proposed to improve on the zero-shot CLIP model. [9; 10] adapt the CLIP image encoder to learn better feature representations, while [8] learns prompts for the CLIP model. On the other hand, few-shot learning approaches have also been studied in the meta-learning framework [11]. These approaches aim at generalizing to novel classes after training. A notable method is the Prototypical Network [12] and its variants [13; 14], which demonstrate effective performance for few-shot learning. However, these methods do not leverage the powerful feature representations of CLIP.
These observations motivate us to leverage CLIP in prototypical networks for few-shot learning. We notice that existing methods for adapting CLIP models in few-shot learning adapt the image encoder [9; 10] or the text encoder [8] in CLIP. We argue that if we can use both the image encoder and the text encoder for classification and jointly adapt them using few-shot training images, we can improve the few-shot classification performance. To achieve this goal, we propose Proto-CLIP, a new model motivated by the traditional unimodal Prototypical Networks [12]. Proto-CLIP utilizes image prototypes and text prototypes computed from adapted CLIP encoders for classification. In addition, we propose to align the image prototype and the text prototype of the same class during adaptation. In this way, both the image encoder and the text encoder can contribute to the classification while achieving agreement between their predictions. Fig. 1 illustrates the concept of learning the joint embedding space of images and text from Proto-CLIP.
To verify the effectiveness of Proto-CLIP, we have conducted experiments on commonly used benchmarks for few-shot image classification, as well as the FewSOL dataset introduced for few-shot object learning in robotic environments [15]. In addition, we have built a robotic system that integrates Automatic Speech Recognition (ASR), few-shot object recognition using Proto-CLIP and robotic grasping to demonstrate the robotic application of Proto-CLIP.
## 2 Related Work
In the context of image recognition, few-shot learning means using only a few images per image category. The problem is usually formulated as "\(N\)-way, \(K\)-shot", i.e., \(N\) classes with \(K\) images per class. In the traditional image classification setup, these \(NK\) images are used as training images. Once a model is trained, it can be used to classify test images among the \(N\) classes. Recent CLIP-based few-shot learning methods fall into this setting.
**CLIP-based Few-Shot Learning.** The CLIP [1] model applies contrastive learning to image-text pairs from the Internet. It consists of an image encoder and a text encoder for the extraction of features from images and text, respectively. Its training objective is to maximize the similarity between the corresponding image and text in a pair in a high-dimensional joint feature space. After training, CLIP can be used for zero-shot image classification by comparing image features with the text embeddings of novel class names. This model is denoted as zero-shot CLIP. When a few training images are available for each class, several approaches have been proposed to improve zero-shot CLIP. The linear-probe CLIP model [1] trains a logistic regression classifier using CLIP image features. CoOp [8] proposes to use learnable vectors as a prompt for the CLIP text encoder for few-shot learning. CLIP-Adapter [9] learns two layers of linear transformations on top of the image encoder and the text encoder with residual connections, respectively, to adapt CLIP features for few-shot learning. Tip-Adapter [10] builds a key-value cache model, where the keys are CLIP image features and the values are one-hot vectors of the class labels. Given a query image, its image feature is compared with the cache keys to combine the value labels for classification. Tip-Adapter can also fine-tune the keys by treating them as learnable parameters, which further improves few-shot classification accuracy. Sus-X [16] leverages the power of Stable Diffusion [17] to create support sets and aims to address the issue of uncalibrated intra-modal embedding distances in Tip-Adapter [10] by utilizing inter-modal distances as a connecting mechanism.

Figure 1: Our Proto-CLIP model learns a joint embedding space of images and text, where image prototypes and text prototypes are learned and aligned using support sets for few-shot classification.
Table 1 compares our proposed method with existing CLIP-based few-shot learning methods. By using image prototypes and text prototypes for classification, our method can adapt both the image embeddings and the text embeddings from CLIP. In addition, the model aligns the image prototypes and the text prototypes, which serves as a regularization term in adapting the feature embeddings. We empirically verify our model by conducting experiments on benchmark datasets for few-shot learning.
**Meta-learning-based Few-Shot Learning.** In parallel with these efforts to adapt CLIP for few-shot learning, meta-learning-based approaches have also been proposed. While the previous CLIP-based models are tested on the same classes used in training, the focus here is to learn a model on a set of training classes \(\mathcal{C}_{train}\) that can generalize to novel classes \(\mathcal{C}_{test}\) in testing. Each class contains a support set and a query set. During training, the class labels for both sets are available. During testing, only the class labels of the support set are available, and the goal is to estimate the labels of the query set. Meta-learning-based approaches train a meta-learner on the training classes \(\mathcal{C}_{train}\) that can be adapted to the novel classes \(\mathcal{C}_{test}\) using their support sets. Non-episodic approaches use all the data in \(\mathcal{C}_{train}\) for training, such as \(k\)-NN and its 'Finetuned' variants [18; 19; 20; 7]. Episodic approaches construct episodes, i.e., subsets of the training classes, to train the meta-learner; a sketch of episode sampling follows below. Representative episodic approaches include Prototypical Networks [12], Matching Networks [21], Relation Networks [22], Model-Agnostic Meta-Learning (MAML) [11], Proto-MAML [13], and CrossTransformers [14]. The Meta-Dataset [13] was introduced to benchmark few-shot learning methods in this setting. In this work, we consider training and testing on the same classes, following previous CLIP-based few-shot learning methods [8; 9; 10].
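As an illustration of the episodic setup, the sketch below samples one \(N\)-way \(K\)-shot episode from a class-indexed dataset; the data layout and function name are illustrative assumptions, not code from any of the cited methods.

```python
import random

def sample_episode(dataset, n_way=5, k_shot=1, q_queries=5):
    # dataset: dict mapping class name -> list of image paths
    # Returns a support set and a query set for one N-way K-shot episode
    classes = random.sample(list(dataset), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        images = random.sample(dataset[cls], k_shot + q_queries)
        support += [(img, label) for img in images[:k_shot]]
        query += [(img, label) for img in images[k_shot:]]
    return support, query
```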
## 3 Method
We consider the \(N\)-way \(K\)-shot classification problem. In few-shot settings, \(K\ll N\). The image set with class labels is considered the _support set_: \(\mathcal{S}=\{\mathbf{x}_{i}^{s},y_{i}^{s}\}_{i=1}^{M}\), where \(\mathbf{x}_{i}^{s}\) denotes a support image, \(y_{i}^{s}\in\{1,2,\ldots,N\}\) denotes the class label of the support image, and \(M\) is the size of the support set. In \(N\)-way \(K\)-shot settings, \(M=NK\). The goal of few-shot classification is to classify the _query set_ \(\mathcal{Q}=\{\mathbf{x}_{j}^{q}\}_{j=1}^{L}\), i.e., \(L\) test images without class labels. Specifically, we want to estimate the conditional probability \(P(y=k|\mathbf{x}^{q},\mathcal{S})\) that models the probability distribution of the class label \(y\) given a query image \(\mathbf{x}^{q}\) and the support set \(\mathcal{S}\).
| Method | Use Support Sets | Adapt Image Embedding | Adapt Text Embedding | Align Image and Text |
|---|---|---|---|---|
| Zero-shot CLIP [1] | ✗ | ✗ | ✗ | ✓ |
| Linear-probe CLIP [1] | ✓ | ✓ | ✗ | ✗ |
| CoOp [8] | ✓ | ✗ | ✓ | ✗ |
| CLIP-Adapter [9] | ✓ | ✓ | ✓ | ✗ |
| Tip-Adapter [10] | ✓ | ✓ | ✗ | ✗ |
| Sus-X [16] | ✓ | ✓ | ✗ | ✗ |
| **Proto-CLIP (Ours)** | ✓ | ✓ | ✓ | ✓ |

Table 1: Comparison between our proposed method and existing CLIP-based methods for few-shot learning. “Use Support Sets” indicates whether a method uses support training sets for fine-tuning. “Adapt Image/Text Embedding” indicates whether a method adapts the image/text embeddings in CLIP. “Align Image and Text” indicates whether a method specifically aligns images and text in the feature space.
**Our Proto-CLIP model (Fig. 2)**. We propose to leverage both the image encoder and the text encoder in the CLIP model [1] to estimate the conditional probability of class label as
\[P(y=k|\mathbf{x}^{q},\mathcal{S})=\alpha\underbrace{P(y=k|\mathbf{x}^{q}, \mathcal{S}_{x})}_{\text{image probability}}+(1-\alpha)\underbrace{P(y=k| \mathbf{x}^{q},\mathcal{S}_{y})}_{\text{text probability}}, \tag{1}\]
where \(\mathcal{S}_{x}=\{\mathbf{x}_{i}^{s}\}_{i=1}^{M}\) and \(\mathcal{S}_{y}=\{y_{i}^{s}\}_{i=1}^{M}\) denote the image set and the label set of the support set \(\mathcal{S}\), respectively, and \(\alpha\in[0,1]\) is a hyper-parameter to combine the two probabilities. To model the probability distributions conditioned on \(\mathcal{S}_{x}\) or \(\mathcal{S}_{y}\), we leverage the prototypical networks [12]:
\[P(y=k|\mathbf{x}^{q},\mathcal{S}_{x}) =\frac{\exp(-\beta\|g_{\mathbf{w}_{1}}(\mathbf{x}^{q})-\mathbf{c }_{k}^{x}\|_{2}^{2})}{\sum_{k^{\prime}=1}^{N}\exp(-\beta\|g_{\mathbf{w}_{1}}( \mathbf{x}^{q})-\mathbf{c}_{k^{\prime}}^{x}\|_{2}^{2})}, \tag{2}\] \[P(y=k|\mathbf{x}^{q},\mathcal{S}_{y}) =\frac{\exp(-\beta\|g_{\mathbf{w}_{1}}(\mathbf{x}^{q})-\mathbf{c }_{k}^{y}\|_{2}^{2})}{\sum_{k^{\prime}=1}^{N}\exp(-\beta\|g_{\mathbf{w}_{1}}( \mathbf{x}^{q})-\mathbf{c}_{k^{\prime}}^{y}\|_{2}^{2})}, \tag{3}\]
where \(g_{\mathbf{w}_{1}}(\cdot)\) denotes the CLIP image encoder plus an adapter network with learnable parameters \(\mathbf{w}_{1}\) used to compute the feature embeddings of query images. The CLIP image encoder is pretrained and then frozen. \(\mathbf{c}_{k}^{x}\) and \(\mathbf{c}_{k}^{y}\) are the "prototypes" for class \(k\) computed using images and text, respectively. \(\beta\in\mathbb{R}^{+}\) is a hyperparameter to sharpen the probability distributions. We have the prototypes as
\[\mathbf{c}_{k}^{x}=\frac{1}{M_{k}}\sum_{y_{i}^{s}=k}\phi_{\text{ Image}}(\mathbf{x}_{i}^{s}),\ \ \mathbf{c}_{k}^{y}=\frac{1}{\tilde{M}_{k}}\sum_{j=1}^{\tilde{M}_{k}}\phi_{\text{ Text}}(\text{Prompt}_{j}(y_{i}^{s}=k)), \tag{4}\]
where \(M_{k}\) is the number of examples with label \(k\), and \(\tilde{M}_{k}\) is the number of prompts for label \(k\). To compute text embeddings, we can either directly input the class names such as “mug” and “plate” into the text encoder, or convert the class names to phrases such as “a photo of mug” and “a photo of plate” and then input the phrases into the text encoder. These phrases are known as _prompts_ of the vision-language models. We can use multiple prompts for each class label. \(\phi_{\text{Image}}(\mathbf{x}_{i}^{s})\) and \(\phi_{\text{Text}}(\text{Prompt}_{j}(y_{i}^{s}=k))\) denote the image embedding and the text embedding of the image-label pair \((\mathbf{x}_{i}^{s},y_{i}^{s})\) computed using the CLIP image encoder and the text encoder, respectively. These embeddings with dimension \(C\) of the support set form the image memory and the text memory, as shown in Fig. 2. They are learnable embedding vectors initialized by the embeddings computed using the CLIP image encoder and text encoder. We use \(\mathbf{c}_{k}^{x}\) and \(\mathbf{c}_{k}^{y}\) to denote the mean of the embeddings of the images and the prompts for class \(k\), respectively. Since the image embeddings and the text embeddings are of the same dimension, we can compute the distance between the text prototype \(\mathbf{c}_{k}^{y}\) and the image embedding \(g_{\mathbf{w}_{1}}(\mathbf{x}^{q})\) in Eq. (3). As we can see, our model leverages prototypical networks with the image encoder and text encoder from CLIP. We name it “Proto-CLIP”.
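For concreteness, a minimal PyTorch sketch of this classification rule (our illustration of Eqs. (1)-(4), assuming the CLIP embeddings are precomputed; all function and variable names are ours) is:

```python
import torch
import torch.nn.functional as F

def proto_clip_probs(q_feat, img_mem, txt_protos, support_labels, alpha, beta):
    """Class probabilities for query images, following Eqs. (1)-(4).

    q_feat:         (B, C) adapted query features g_w1(x^q)
    img_mem:        (M, C) image memory (support image embeddings)
    txt_protos:     (N, C) text prototypes c^y_k (mean prompt embedding)
    support_labels: (M,)   class index of each support image
    """
    n_class = txt_protos.shape[0]
    # Image prototypes c^x_k: per-class mean over the image memory (Eq. 4).
    one_hot = F.one_hot(support_labels, n_class).float()              # (M, N)
    img_protos = one_hot.t() @ img_mem / one_hot.sum(0).unsqueeze(1)  # (N, C)
    # Softmax over negative squared Euclidean distances (Eqs. 2 and 3).
    p_img = F.softmax(-beta * torch.cdist(q_feat, img_protos) ** 2, dim=1)
    p_txt = F.softmax(-beta * torch.cdist(q_feat, txt_protos) ** 2, dim=1)
    # Convex combination of image and text probabilities (Eq. 1).
    return alpha * p_img + (1.0 - alpha) * p_txt
```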
**Learning the memories and the adapter.** During training, we can construct a support set \(\mathcal{S}=\{\mathbf{x}_{i}^{s},y_{i}^{s}\}_{i=1}^{M}\) and a query set with ground truth labels \(\mathcal{Q}=\{\mathbf{x}_{j}^{q},y_{j}^{q}\}_{j=1}^{L}\). Then we can use \(\mathcal{S}\) and \(\mathcal{Q}\) to learn the weights in Proto-CLIP.
Figure 2: Overview of our proposed Proto-CLIP model. The CLIP image encoder and text encoder are frozen during training. The image memory, the text memory and the adapter network are learned. Given a class name, \(\tau_{i}\) returns the \(i^{th}\) out of \(\tilde{K}\) predefined text prompts.
First, the support set is used to initialize the image memory \(\mathbf{W}_{\text{image}}\) and the text memory \(\mathbf{W}_{\text{text}}\). Second, the weights of the adapter network \(g_{\mathbf{w}_{1}}(\cdot)\) applied to the query images need to be learned. Fig. 3 shows two designs of the adapter network, i.e., an MLP-based adapter as in [9] and a convolution-based adapter that we introduce. The convolution-based adapter has fewer weights to learn than the MLP-based one. We found in our experiments that the two adapters have their own advantages on different datasets. Finally, motivated by CLIP-Adapter [9], we freeze the weights of the image encoder and the text encoder during training instead of fine-tuning them. In this way, we can reuse the weights of CLIP trained on a large number of image-text pairs while adapting the image embeddings and the text embeddings.
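The sketch below illustrates the two adapter designs in PyTorch. It is our reading of Fig. 3: the bottleneck width, the residual mixing in the MLP adapter, and the reshape of the feature vector into a 2D map in the convolution adapter are assumptions rather than details taken from the paper.

```python
import torch.nn as nn

class MLPAdapter(nn.Module):
    """Bottleneck MLP adapter in the spirit of CLIP-Adapter [9]; the
    reduction factor and residual ratio are assumptions on our part."""
    def __init__(self, c=1024, reduction=4, ratio=0.2):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(c, c // reduction), nn.ReLU(inplace=True),
            nn.Linear(c // reduction, c), nn.ReLU(inplace=True))
        self.ratio = ratio

    def forward(self, x):
        # Residual blend of adapted and original features.
        return self.ratio * self.fc(x) + (1.0 - self.ratio) * x

class ConvAdapter(nn.Module):
    """'2xConv' adapter: we assume the C-dimensional feature is viewed
    as a 1-channel 2D map (32x32 for C=1024, the ResNet50 feature size)
    and passed through two small convolutions with a skip connection."""
    def __init__(self, side=32):
        super().__init__()
        self.side = side
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=3, padding=1))

    def forward(self, x):
        b, c = x.shape
        h = x.view(b, 1, self.side, self.side)
        return self.conv(h).view(b, c) + x
```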
**Loss Functions.** The first loss function is the negative log-probability of the true label for a query image: \(\mathcal{L}_{1}(\mathbf{W}_{\text{image}},\mathbf{W}_{\text{text}},\mathbf{ w}_{1})=-\log P(y^{q}=k|\mathbf{x}^{q},\mathcal{S})\), where \(P(y^{q}=k|\mathbf{x}^{q},\mathcal{S})\) is defined in Eq. (1). Minimizing \(\mathcal{L}_{1}\) learns the weights to classify the query images correctly. Second, we propose aligning the image prototypes and the text prototypes in training. Let \(\{\mathbf{c}_{1}^{x},\mathbf{c}_{2}^{x},\dots,\mathbf{c}_{N}^{x}\}\) be the image prototypes computed from the image embeddings for the \(N\) classes and \(\{\mathbf{c}_{1}^{y},\mathbf{c}_{2}^{y},\dots,\mathbf{c}_{N}^{y}\}\) be the corresponding text prototypes. We would like to learn the model weights such that \(\mathbf{c}_{k}^{x}\) is close to \(\mathbf{c}_{k}^{y}\) and far from other prototypes in the embedding space. We utilize the InfoNCE loss for contrastive learning [23]:
\[\mathcal{L}_{2}^{k}(\mathbf{c}_{k}^{x},\{\mathbf{c}_{k^{\prime}}^{y}\}_{k^{ \prime}=1}^{N})=-\log\frac{\exp(\mathbf{c}_{k}^{x}\cdot\mathbf{c}_{k}^{y})}{ \sum_{k^{\prime}=1}^{N}\exp(\mathbf{c}_{k}^{x}\cdot\mathbf{c}_{k^{\prime}}^{ y})},\mathcal{L}_{3}^{k}(\mathbf{c}_{k}^{y},\{\mathbf{c}_{k^{\prime}}^{x}\}_{k^{ \prime}=1}^{N})=-\log\frac{\exp(\mathbf{c}_{k}^{y}\cdot\mathbf{c}_{k}^{x})}{ \sum_{k^{\prime}=1}^{N}\exp(\mathbf{c}_{k}^{y}\cdot\mathbf{c}_{k^{\prime}}^{x})} \tag{5}\]
for \(k=1,\dots,N\), where \(\cdot\) indicates dot-product. Here, \(\mathcal{L}_{2}^{k}(\mathbf{c}_{k}^{x},\{\mathbf{c}_{k^{\prime}}^{y}\}_{k^{\prime}=1}^{N})\) compares an image prototype \(\mathbf{c}_{k}^{x}\) with the text prototypes \(\{\mathbf{c}_{k^{\prime}}^{y}\}_{k^{\prime}=1}^{N}\), while \(\mathcal{L}_{3}^{k}(\mathbf{c}_{k}^{y},\{\mathbf{c}_{k^{\prime}}^{x}\}_{k^{\prime}=1}^{N})\) compares a text prototype \(\mathbf{c}_{k}^{y}\) with the image prototypes \(\{\mathbf{c}_{k^{\prime}}^{x}\}_{k^{\prime}=1}^{N}\). In this way, we can align the image prototypes and the text prototypes for the \(N\) classes. This alignment can facilitate classification, since the class conditional probabilities are computed using the image prototypes and the text prototypes as in Eqs. (2) and (3). The total loss function for training is:
\[\mathcal{L}=-\frac{1}{L}\sum_{j=1}^{L}\log P(y_{j}^{q}=k|\mathbf{x}_{j}^{q}, \mathcal{S})+\frac{1}{N}\sum_{k=1}^{N}\big{(}\mathcal{L}_{2}^{k}(\mathbf{c}_ {k}^{x},\{\mathbf{c}_{k^{\prime}}^{y}\}_{k^{\prime}=1}^{N})+\mathcal{L}_{3}^{ k}(\mathbf{c}_{k}^{y},\{\mathbf{c}_{k^{\prime}}^{x}\}_{k^{\prime}=1}^{N})\big{)} \tag{6}\]
for a query set \(\mathcal{Q}=\{\mathbf{x}_{j}^{q},y_{j}^{q}\}_{j=1}^{L}\). Following previous CLIP-based few-shot learning methods [8; 9; 10], the support set and the query set are the same during training in our experiments, i.e., \(\mathcal{S}=\mathcal{Q}\).
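A compact sketch of these losses (ours; it uses the unnormalized dot products of Eq. (5) with no extra temperature term) is:

```python
import torch
import torch.nn.functional as F

def alignment_loss(img_protos, txt_protos):
    """Symmetric InfoNCE of Eq. (5), averaged over classes as in Eq. (6)."""
    logits = img_protos @ txt_protos.t()       # (N, N) dot products
    targets = torch.arange(logits.shape[0], device=logits.device)
    l2 = F.cross_entropy(logits, targets)      # image -> text direction
    l3 = F.cross_entropy(logits.t(), targets)  # text -> image direction
    return l2 + l3

def total_loss(query_probs, query_labels, img_protos, txt_protos):
    """Eq. (6): query classification NLL plus prototype alignment."""
    l1 = F.nll_loss(torch.log(query_probs + 1e-8), query_labels)
    return l1 + alignment_loss(img_protos, txt_protos)
```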
## 4 Experiments
**Datasets and Evaluation Metric.** Following previous CLIP-based few-shot learning methods [8; 9; 10], we conduct experiments on the following datasets for evaluation: ImageNet [5], StanfordCars [24], UCF101 [25], Caltech101 [26], Flowers102 [27], SUN397 [28], DTD [29], EuroSAT [30], FGVCAircraft [31], OxfordPets [32], and Food101 [33]. In addition, we also include the FewSOL dataset [15], recently introduced for few-shot object recognition in robotic environments. In the \(N\)-way \(K\)-shot classification setting, \(K\) images for each class are sampled from each dataset for training. A validation set of each dataset is reserved for hyper-parameter tuning, and a test set is used for evaluation. We report the classification accuracy on the test set as the evaluation metric.
Figure 3: Two designs of the adapters. (a) A multi-layer perceptron (MLP)-based adapter as in [9]. (b) The convolution-based adapter we introduce. The feature dimensions shown are for the CLIP ResNet50 backbone.
**Choosing the Hyper-parameters \(\alpha\) and \(\beta\).** From the experiments, we found that the two hyperparameters \(\alpha\) in Eq. (1) and \(\beta\) in Eqs. (2) and (3) play a critical role in the classification accuracy. Therefore, for each dataset, we conducted a grid search over the two parameters using the validation set. We then fixed their values for all the runs in our experiments.
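Such a search reduces to a small loop; in the sketch below (ours), the grid values and the `validate` callable are illustrative, since the paper does not list its exact search ranges:

```python
import itertools

def grid_search_alpha_beta(val_accuracy, alphas, betas):
    """Return the (alpha, beta) pair maximizing validation accuracy;
    val_accuracy(alpha, beta) is assumed to evaluate a trained model."""
    return max(itertools.product(alphas, betas),
               key=lambda ab: val_accuracy(*ab))

# Illustrative grid (the paper does not list its exact search ranges):
# best = grid_search_alpha_beta(validate,
#            alphas=[i / 10 for i in range(11)], betas=[1, 3, 5, 10, 20])
```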
**Proto-CLIP Variants.** i) “Proto-CLIP”: we do not train the image memory or the text memory and do not use any adapter (Fig. 2); we directly run inference using the pre-trained CLIP features. ii) “Proto-CLIP-\(F\)”: we train the image memory and/or the text memory along with the adapter. During training, we precompute the CLIP image features of all query images and directly use these stored features. This variant can be trained quickly, so we use it for our ablation studies. iii) “Proto-CLIP-\(F\)-\(Q^{T}\)”: during training, we apply random data augmentation operations such as cropping and horizontal flipping to each query image, and then compute the CLIP image features of the transformed query images.
### Ablation Studies
**Adapter Types and Learnable Text Memory.** Since the 12 datasets have different characteristics, we found that varying the adapter type and whether or not to learn the text memory affects performance. Table 2 summarizes the results of this ablation study. The architectures of the MLP-based adapter and the convolution-based adapter are illustrated in Fig. 3. “2xConv” indicates using 2 convolution layers as shown in Fig. 3, while “3xConv” uses 3 convolution layers in the adapter, where we add a \(32@3\times 3\times 32\) convolution layer in the middle. By checking the best accuracy for each dataset, we can see that there is no consensus across datasets on which adapter to use and whether to train the text memory. Therefore, we select the best configuration of the adapter and learnable text memory for each dataset in the following experiments. Learning both the image memory and the text memory yields aligned image-text prototypes. Fig. 4 visualizes the image-text prototypes in the FewSOL dataset [15] before and after training.
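A visualization in the style of Fig. 4 can be produced with Barnes-Hut t-SNE from scikit-learn; the sketch below is ours, and the perplexity value is an arbitrary choice:

```python
import numpy as np
from sklearn.manifold import TSNE

def embed_prototypes_2d(img_protos, txt_protos, perplexity=15, seed=0):
    """Project the N image and N text prototypes to 2D with Barnes-Hut
    t-SNE for a plot in the style of Fig. 4."""
    pts = np.concatenate([img_protos, txt_protos], axis=0)  # (2N, C)
    xy = TSNE(n_components=2, method="barnes_hut",
              perplexity=perplexity, random_state=seed).fit_transform(pts)
    n = img_protos.shape[0]
    return xy[:n], xy[n:]  # 2D coordinates of image / text prototypes
```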
**Loss functions.** We have introduced three different loss functions in Sec. 3: \(\mathcal{L}_{1},\mathcal{L}_{2},\mathcal{L}_{3}\). We analyze the effects of these loss functions in Table 3. We can see that i) the \(\mathcal{L}_{1}\) loss function is essential since it drives the classification of the query images; and ii) overall, both the \(\mathcal{L}_{2}\) and \(\mathcal{L}_{3}\) loss functions for prototype alignment contribute to the performance, which verifies our motivation of aligning image and text prototypes for few-shot classification.
| Adapter | Train-Text-Memory | ImageNet | FGVC | Pets | Cars | EuroSAT | Caltech101 | SUN397 | DTD | Flowers | Food101 | UCF101 | FewSOL |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| MLP | ✗ | 61.06 | 37.31 | 85.51 | 72.63 | 83.43 | 92.50 | 63.67 | 63.67 | 60.70 | 74.05 | 76.16 | 28.62 |
| MLP | ✓ | 61.06 | **37.56** | 83.72 | 76.53 | 83.53 | 92.13 | 68.71 | 63.99 | **90.06** | 74.05 | 76.16 | 28.57 |
| 2xConv | ✗ | **65.75** | 34.38 | **93.62** | **75.25** | 81.35 | 92.40 | **71.94** | 67.35 | 94.76 | **76.05** | 72.13 | - |
| 2xConv | ✓ | 58.60 | 35.32 | 89.21 | 73.44 | 81.78 | 93.02 | 67.93 | 67.32 | 95.52 | 78.06 | 76.57 | 27.13 |
| 3xConv | ✗ | 59.53 | 34.15 | 87.93 | **75.82** | 81.27 | 93.04 | 71.63 | 67.92 | 94.02 | 76.11 | **72.00** | **38.22** |
| 3xConv | ✓ | 59.63 | 36.15 | 87.93 | 72.68 | 81.57 | 92.74 | 68.64 | **68.56** | 95.78 | 78.61 | 77.03 | **38.22** |

Table 2: Ablation study of query adapters with \(K=16\) and Proto-CLIP-\(F\). In all cases, the adapter and the visual memory keys are trained; in case of a tie, the underlined setup was used. Dataset-column labels follow the benchmark list in Sec. 4; “-” marks a value lost in extraction.
Figure 4: Barnes-Hut t-SNE visualization [34] using the FewSOL dataset [15]. (a) Image and text prototypes from zero-shot CLIP, which are not aligned. (b) Aligned image and text prototypes from Proto-CLIP-\(F\).
**Backbones.** Table 4 shows the results of using different backbone networks on the FewSOL dataset [15]. In general, better backbones can learn more powerful feature representations and consequently improve the classification accuracy. CLIP vision transformer backbones achieve better performance than CLIP ResNet backbones.
### Comparison with Other Methods
Table 5 shows the performance of Proto-CLIP compared to the state-of-the-art methods using CLIP for few-shot learning in the literature: Linear-Probe CLIP [1], CoOp [8], CLIP-Adapter [9] and Tip-Adapter [10]. We follow these methods and use the ResNet50 backbone for this comparison. The fine-tuned variant of Tip-Adapter, “Tip-F”, is the most competitive method compared to ours. The performance of Proto-CLIP on very few shots, i.e., 1 shot and 2 shots, is inferior to that of Tip-F. When the number of shots increases to 4, 8 and 16, the fine-tuned variants of Proto-CLIP outperform Tip-F. Proto-CLIP-\(F\)-\(Q^{T}\) performs better than Proto-CLIP-\(F\) on most datasets by using data augmentation of the query images during training.
### Real World Experiments
As an application, we have built a robotic system to verify the effectiveness of Proto-CLIP for object recognition in the real world. Fig. 5 illustrates the pipeline of the system. It takes a human instruction in the form of a voice command, such as “pick something” or “grasp something”, as input. The system first applies Automatic Speech Recognition (ASR) to convert the voice input to text using OpenAI Whisper [35]. Then the system grounds the noun in the human instruction into a target object observed in an input image. This is achieved by joint object segmentation and classification. We utilize unseen object instance segmentation [36] to segment objects in cluttered scenes and then classify each segmented object with Proto-CLIP.
| Model | Adapter | TextM | B1 | B2 | B3 | B4 | B5 |
|---|---|---|---|---|---|---|---|
| Zero-Shot-CLIP [1] | - | - | 25.91 | 25.96 | 40.70 | 41.87 | 54.57 |
| Tip [10] | - | - | 27.42 | 37.43 | 47.00 | 41.48 | 50.78 |
| Tip-F [10] | - | - | 32.52 | 41.43 | 50.17 | 45.48 | 60.17 |
| Proto-CLIP-\(F\) | MLP | ✗ | 33.48 | 39.04 | 47.96 | 41.91 | 58.65 |
| Proto-CLIP-\(F\) | MLP | ✓ | 34.83 | 40.74 | 47.43 | 42.13 | 58.91 |
| Proto-CLIP-\(F\) | 2xConv | ✗ | 35.04 | 41.04 | 50.83 | 46.52 | **63.74** |
| Proto-CLIP-\(F\) | 2xConv | ✓ | 35.04 | 42.52 | 49.26 | 43.43 | 61.61 |
| Proto-CLIP-\(F\) | 3xConv | ✗ | 34.13 | 42.83 | **51.91** | **46.87** | 62.35 |
| Proto-CLIP-\(F\) | 3xConv | ✓ | **38.22** | **44.09** | 50.39 | 46.57 | 60.39 |

Table 4: Backbone ablation study. Dataset = FewSOL-52 [15], \(K=16\), Model = Proto-CLIP-\(F\). “TextM” indicates whether the text memory is trained. Columns B1-B5 are the five CLIP backbones (their labels were lost in extraction; per the text, they span CLIP ResNet and vision transformer backbones).
Figure 5: Results for the real world setup with top-5 predictions from the Proto-CLIP-\(F\) (ViT-L/14) model trained on FewSOL-198 [15]. The Speech-To-Text is performed via Whisper [35].
| Loss | ImageNet | FGVC | Pets | Cars | EuroSAT | Caltech101 | SUN397 | DTD | Flowers | Food101 | UCF101 | FewSOL |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| \(\mathcal{L}_{1}\) | 62.67 | 20.34 | 72.31 | 73.77 | 78.98 | 92.25 | 68.34 | 66.49 | **96.14** | 77.39 | 76.66 | 34.57 |
| \(\mathcal{L}_{2}\) | 62.29 | 4.71 | 0.00 | 0.00 | 38.95 | 0.28 | 69.63 | 67.38 | 10.31 | 77.71 | 57.41 | 32.70 |
| \(\mathcal{L}_{3}\) | 62.27 | 4.14 | 0.00 | 0.00 | 38.09 | 0.24 | 64.86 | 67.38 | 10.27 | 77.69 | 57.55 | 20.22 |
| \(\mathcal{L}_{1}+\mathcal{L}_{2}\) | 63.59 | 3.62 | 38.58 | 75.39 | 87.28 | **93.71** | 71.65 | 68.09 | 59.06 | 76.89 | 77.29 | 33.48 |
| \(\mathcal{L}_{2}+\mathcal{L}_{3}\) | 62.33 | 3.87 | 0.00 | 0.00 | 36.86 | 0.24 | 64.84 | 68.32 | 8.20 | 77.35 | 57.52 | 19.61 |
| \(\mathcal{L}_{1}+\mathcal{L}_{3}\) | 65.43 | **36.84** | 88.35 | **75.51** | 82.84 | 93.35 | 71.44 | 68.32 | **96.14** | 78.30 | **77.53** | 33.43 |
| \(\mathcal{L}_{1}+\mathcal{L}_{2}+\mathcal{L}_{3}\) | **65.75** | **37.56** | **89.62** | 75.25 | **83.33** | - | **71.94** | **68.35** | 96.06 | **79.09** | 77.50 | **38.22** |

Table 3: Loss function ablation results with the ResNet50 backbone and shot \(K=16\); “-” marks a value lost in extraction, and the last row corresponds to the full loss of Eq. (6).
By matching the noun with the class labels, the system can ground the target in the image. Once the target object is recognized, we use Contact-GraspNet [37] for grasp planning and the MoveIt motion planning toolbox [38] to pick and place the target. See the supplementary material for more real-world results.
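The control flow of this pipeline can be summarized as follows (a sketch of ours, pseudocode in spirit: the injected callables stand in for Whisper [35], the unseen-object segmenter [36], Proto-CLIP, Contact-GraspNet [37] and MoveIt [38], and the one-word noun grounding is a simplification):

```python
def pick_object(voice_wav, rgb, depth, asr, segment, classify, plan_grasp, execute):
    """Control flow of the Fig. 5 pipeline. Each callable stands in for
    an actual component of the system named in the text."""
    text = asr(voice_wav)                        # e.g. "pick the mug"
    target = text.split()[-1]                    # naive noun grounding
    masks = segment(rgb, depth)                  # class-agnostic masks
    labels = [classify(rgb, m) for m in masks]   # one label per mask
    grasp = plan_grasp(depth, masks[labels.index(target)])
    execute(grasp)                               # pick-and-place motion
```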
## 5 Limitations
Proto-CLIP performs poorly in low-shot regimes (1 and 2 shots), as is evident from Table 5. In addition, following the methodology of Tip-Adapter, a hyperparameter grid search is necessary for every combination of a new dataset and a backbone. Likewise, when encountering a new dataset, one needs to compare the effectiveness of the \(F\) and \(F\)-\(Q^{T}\) variants empirically to determine the optimal choice. During our experiments, we also found that the data transformations applied to the query images play a crucial role in building the cache model.
## 6 Conclusion and Future Work
We have introduced a novel method for few-shot learning based on the CLIP vision-language model. Our method learns image prototypes and text prototypes from few-shot training examples and aligns the corresponding image-text prototypes for classification. The model is equipped with learnable image memory and text memory for support images and a learnable adapter for query images. Compared to previous CLIP-based few-shot learning methods, our method is flexible in configuring these learnable components, resulting in powerful learned models.
Good feature representation is key to few-shot learning. Future work includes further improving feature representation learning beyond the CLIP models. One idea is to adapt more powerful vision-language models such as GPT variants. The FewSOL dataset also provides multi-view and depth information about objects. Exploring this 3D information for few-shot object recognition is also a promising direction.
[Table 5: few-shot classification accuracy on the 12 benchmark datasets for 1, 2, 4, 8 and 16 shots, comparing Proto-CLIP and its fine-tuned variants with Linear-Probe CLIP, CoOp, CLIP-Adapter and Tip-Adapter (ResNet50 backbone); the table body was lost in extraction.]
#### Acknowledgments
This work was supported in part by the DARPA Perceptually-enabled Task Guidance (PTG) Program under contract number HR00112220005.
|
2310.18922 | Band Structure of Topological Insulator BiSbTe1.25Se1.75 | We present our angle resolved photoelectron spectroscopy (ARPES) and density
functional theory results on quaternary topological insulator (TI)
BiSbTe1.25Se1.75 (BSTS) confirming the non-trivial topology of the surface
state bands (SSBs) in this compound. We find that the SSBs, which are
sensitive to the atomic composition of the terminating surface, have a partial
3D character. Our detailed study of the band bending (BB) effects shows that in
BSTS the Dirac point (DP) shifts by more than two times compared to that in
Bi2Se3 to reach saturation. The stronger BB in BSTS could be due to the
difference in screening of the surface charges. From momentum distribution curves
(MDCs) of the ARPES data we obtained an energy dispersion relation showing the
warping strength of the Fermi surface in BSTS to be intermediate between those
found in Bi2Se3 and Bi2Te3 and also to be tunable by controlling the ratio of
chalcogen/pnictogen atoms. Our experiments also reveal that the nature of the
BB effects is highly sensitive to the exposure of the fresh surface to various
gas species. These findings have important implications in the tuning of DP in
TIs for technological applications. | H. Lohani, P. Mishra, A. Banerjee, K. Majhi, R. Ganesan, U. Manju, D. Topwal, P. S. Anil Kumar, B. R. Sekhar | 2023-10-29T06:43:58Z | http://arxiv.org/abs/2310.18922v1 | # Band Structure of Topological Insulator
###### Abstract
We present our angle resolved photoelectron spectroscopy (ARPES) and density functional theory results on the quaternary topological insulator (TI) BiSbTe\({}_{1.25}\)Se\({}_{1.75}\) (BSTS), confirming the non-trivial topology of the surface state bands (SSBs) in this compound. We find that the SSBs, which are sensitive to the atomic composition of the terminating surface, have a partial 3D character. Our detailed study of the band bending (BB) effects shows that in BSTS the Dirac point (DP) shifts by more than two times compared to that in Bi\({}_{2}\)Se\({}_{3}\) to reach saturation. The stronger BB in BSTS could be due to the difference in screening of the surface charges. From momentum distribution curves (MDCs) of the ARPES data we obtained an energy dispersion relation showing the warping strength of the Fermi surface in BSTS to be intermediate between those found in Bi\({}_{2}\)Se\({}_{3}\) and Bi\({}_{2}\)Te\({}_{3}\), and also to be tunable by controlling the ratio of chalcogen/pnictogen atoms. Our experiments also reveal that the nature of the BB effects is highly sensitive to the exposure of the fresh surface to various gas species. These findings have important implications for the tuning of the DP in TIs for technological applications.
## Introduction
Discovery of the new quantum state of matter called topological insulators (TIs) has attracted worldwide interest due to their exotic properties, which are manifestations of a non-trivial band topology [1, 2]. TIs have an insulating bulk and conducting edges due to the presence of some peculiar surface states (SSs). These SSs are spin non-degenerate, with a unique property of spin-momentum locking which results from the strong spin-orbit coupling (SOC) effects in combination with time reversal symmetry. It has been theoretically predicted that these SSs host many interesting phenomena, such as Dirac fermions [1, 3], magnetic monopoles [4] and Majorana bound states at vortices in the superconducting regime [5, 6]. The strong immunity of these SSs to Anderson localization and backscattering in the presence of non-magnetic impurities offers tremendous technical advantages, especially for functional applications like spintronic devices and quantum computers [7, 8]. Furthermore, the tunability of the crossing point of the topological SSs, called the Dirac point (DP), by chemical doping is another aspect important from such a technological point of view [9, 10, 11, 12]. In the known Bi- and Sb-based binary TIs the DP and the SSs are often obscured by contributions from bulk states. Tetradymite Bi\({}_{2}\)Te\({}_{2}\)Se, which is isostructural to the prototypical TIs Bi\({}_{2}\)Se\({}_{3}\) and Bi\({}_{2}\)Te\({}_{3}\), has been found to be suitable for such tuning of the DP within the bulk band gap owing to its relatively large bulk resistivity [13]. The resistivity can be optimized in the Sb-doped quaternary alloy Bi\({}_{2-x}\)Sb\({}_{x}\)Te\({}_{3-y}\)Se\({}_{y}\) by changing the ratio of the pnictogen (Bi and Sb) and chalcogen (Se and Te) atoms without disturbing its crystallinity. In this compound, a topological nature with different bulk resistivities has been experimentally observed over a wide range of x and y combinations [14]. Thus, Bi\({}_{2-x}\)Sb\({}_{x}\)Te\({}_{3-y}\)Se\({}_{y}\) provides an ideal platform to study the nature of topological surface states by tuning the Dirac node through controlling the proportion of chalcogen/pnictogen atoms.
Recently, quantum Hall effect (QHE) [15] and scanning tunnelling spectroscopy (STS) [18] studies have been used to confirm the topological characters of BiSbTeSe\({}_{2}\) and Bi\({}_{1.5}\)Sb\({}_{0.5}\)Te\({}_{1.7}\)Se\({}_{1.3}\). Tunability of the Dirac cone has also been observed in some compositions of Bi\({}_{2-x}\)Sb\({}_{x}\)Te\({}_{3-y}\)Se\({}_{y}\) by using angle resolved photoelectron spectroscopy (ARPES) measurements [20, 21]. Furthermore, the low bulk carrier density in these materials allows electrostatic gating of the chemical potential, enabling strong control over electrical transport properties, which is vital for applications [15, 16, 17]. While most of the reported studies were focussed on tunability by chemical doping or by adding layers of other elements on the surface of TIs, the drifting of the topological surface state bands (SSBs) and the DP with aging of the surface, which is also important for device applications [22], has not been addressed adequately in this family of TIs [9, 10]. In this paper, we present a detailed study of the electronic structure and aging effects of BiSbTe\({}_{1.25}\)Se\({}_{1.75}\) (BSTS) using ARPES in conjunction with density functional theory (DFT) based calculations. In
the ARPES data, we observe the topological character of the SSBs and a warping of the Fermi surface (FS). These results are consistent with our calculated SSBs, which fall within the bulk band gap of BSTS. In addition, we find pronounced aging effects due to band bending (BB), which are relatively stronger in this compound in comparison to Bi\({}_{2}\)Se\({}_{3}\). The effects of the BB are enhanced due to the high adsorption of residual gases at low temperatures. Furthermore, experiments performed with constant dosing of different gases show that the BB effects are highly sensitive to the gas species.
## Results and Discussion
Fig.1(a) shows the primitive unit cell of BiSbTeSe\({}_{2}\), which has rhombohedral symmetry. This structure can also be depicted as a hexagonal unit cell, as shown in Fig.1(b). The basic building block of this structure is the so-called quintuple layer (QL), consisting of five atoms arranged in the order Se1-Bi-Se-Sb-Te. The Bi(Se1) atoms are connected to the Sb(Te) atoms, with the center of inversion symmetry at the Se atom site. The structure of BiSbTeSe\({}_{2}\) is similar to that of Bi\({}_{2}\)Se\({}_{3}\). The substitution sites of Sb and Te in Bi\({}_{2}\)Se\({}_{3}\) used to build the BiSbTeSe\({}_{2}\) structure for our calculations were chosen by total-energy minimization among various possible structures. Fig.1(c) shows the Brillouin zone (BZ) of the primitive unit cell, where the high symmetry k-points are marked. In Fig.1(d) we show a low energy electron diffraction (LEED) pattern from our BSTS crystal, depicting the hexagonal symmetry of the surface BZ. The BSTS crystal cleaves along the (111) crystal plane, and scanning tunnelling microscopy experiments have shown that the terminated plane has Te/Se atoms on it [18].
Fig.2(a) and (b) show the bulk band structure of the nominal BiSbTeSe\({}_{2}\) composition without and with inclusion of SOC effects, respectively. In both cases, the valence band (VB) and conduction band (CB) states are well separated in energy. However, the SOC effects induce a small splitting in the bands along various k-directions. The band gap is \(\sim\) 0.45 eV at the \(\Gamma\) point in the SOC-included case (Fig.2(b)), which is higher than the value of 0.3 eV found in Bi\({}_{2}\)Se\({}_{3}\)[19]. The structure of the topmost VB (red) and the lowest CB (blue) along the F-\(\Gamma\)-L direction indicates a band inversion at the \(\Gamma\) point after incorporating the SOC effects. This results in a non-trivial value of the Z\({}_{2}\) invariant in this system [23]. In Fig.2(c) and (d) the ARPES intensity plots of BSTS, which has a slightly different stoichiometry (BiSbTe\({}_{1.25}\)Se\({}_{1.75}\)) from the nominal composition (BiSbTeSe\({}_{2}\)), are presented. These plots are taken along the \(\Gamma\)-M and \(\Gamma\)-K directions of the surface BZ using 31 eV photon energy, respectively. Among the various bands seen in the VB region, the deeper-lying bands with binding energy (BE) in the range E\({}_{b}\) = -0.5 to -4.5 eV show a highly dispersive nature. A cone-shaped distribution of low intensity is clearly visible in the vicinity of the E\({}_{f}\) around the \(\Gamma\) point, which is absent in the calculated band structure. The cone is formed by the topological SSBs in the bulk band gap region of the material. Although inconsequential to the results of this study, a closer look at the data in Fig.2(c) reveals the existence of a small energy gap near the tip of the cone, which could be due to a possible misalignment of the angular position of the sample with the exact \(\Gamma\)-M direction during data collection. Nevertheless, in order to have a better comparison, raw data of the bulk bands along the \(\Gamma\)-F and \(\Gamma\)-L directions (Fig.2(b)), which correspond to the \(\Gamma\)-M and \(\Gamma\)-K directions of the
Figure 1: (a) and (b) show the primitive and hexagonal unit cells of BiSbTeSe\({}_{2}\) respectively. The dotted box encloses the structure of five atomic layers, which is called the quintuple layer (QL). (c) Brillouin zone of the primitive unit cell, where the high symmetry k-points are marked. (d) LEED spots depict the hexagonal symmetry of the surface BZ of BSTS, where the high symmetry k-directions \(\Gamma\)-K and \(\Gamma\)-M are marked.
surface BZ, are plotted adjacent to the ARPES images in Fig.2(e) and (f), respectively. Along both directions, the calculated bands at higher BE show a fair resemblance to the corresponding intensity patterns observed in the ARPES images.
Fig.3(a) shows the near-E\({}_{f}\) region of the ARPES plot taken using 35 eV photon energy. The two SSBs are clearly visible, exhibiting an almost linear dispersion. The intensity between these two SSBs indicates the presence of bulk conduction band (BCB) states occupied due to n-type intrinsic impurities and defects. On the other hand, the lower part of the Dirac cone strongly overlaps with the bulk valence band (BVB) states, which form a high-intensity region at \(\sim\) E\({}_{b}\) = -0.4 eV. In order to identify the position of the DP, the corresponding energy distribution curve (EDC) of the image shown in Fig.3(a) is plotted in Fig.3(b). This EDC is taken at the \(\Gamma\) point with a k width of \(\pm\) 0.02 Å\({}^{-1}\). As is clear from this spectrum, the DP, appearing at E\({}_{d}\)\(\sim\) -0.2 eV, is obscured by the emission from the BVB states. In order to confirm the surface nature of the SSBs, ARPES images were collected at different photon energies, as shown in Fig.3(a), (c), (e) and (g), which correspond to photon energies of 35, 33, 31 and 28 eV, respectively. The EDCs (at the \(\Gamma\) point with a k width of \(\pm\) 0.02 Å\({}^{-1}\)) corresponding to these intensity plots are presented in Fig.3(b), (d), (f) and (h), respectively. The BVB gets modified sharply with the variation in photon energy, indicating the bulk nature of these higher-BE bands, while the shape of the near-E\({}_{f}\) SSBs (upper part of the Dirac cone) remains unaffected, confirming their surface state character. The slight variation in the intensity of these SSBs is due to the difference in the matrix elements involved in the photoemission process [3]. It should be noted that the SSBs show some significant changes close to the DP. The EDC spectrum at 33 eV photon energy (Fig.3(d)) shows an apparent opening of a gap in the SSBs in the vicinity of the DP (E\({}_{d}\) = -0.2 eV), unlike the case of 35 eV photon energy (Fig.3(b)). Similarly, the spectral weight near \(\sim\) -0.2 eV BE in the EDC at 31 eV (Fig.3(f)) also shows differences compared to that taken with 28 eV (Fig.3(h)) photon energy. These results show that the SSBs are not of pure 2D character in this compound. The SSBs which mainly form the lower part of the Dirac cone hybridize with the BVB states and therefore acquire a partial 3D character. The origin of these hybridized states could be the impurities or defects in the system, as suggested in a theoretical model for finite bulk band gaps by Black-Schaffer _et al._[24]. Experimental realization of such impurity-induced gap opening in the SSBs at the DP has recently been reported by Sanchez-Barriga _et al._ in their detailed ARPES study of the (Bi\({}_{1-x}\)Mn\({}_{x}\))\({}_{2}\)Se\({}_{3}\) system [25]. Fig.3(i) and (k) show the ARPES plots taken with s- and p-polarized light of photon energy 31 eV, respectively, and the adjacent Fig.3(j) and (l) display the corresponding EDCs (at the \(\Gamma\) point with a k width of \(\pm\) 0.02 Å\({}^{-1}\)). In both cases the linearly dispersive SSBs are clearly seen. However, the intensity of the BVBs at \(\sim\) E\({}_{b}\) = -0.5 eV is drastically reduced in the p-polarized case compared to the s-polarized case. These changes are also visible in the spectral features of their EDCs (Fig.3(j) and (l)), showing the different orbital
Figure 2: (a) and (b) show the bulk band structure plots of BiSbTeSe\({}_{2}\) without and with inclusion of SOC effects respectively. (c) and (d) show the ARPES images of BSTS along the \(\Gamma\)-M and \(\Gamma\)-K directions of the surface BZ respectively. (e) and (f) show the bulk bands of (b) along the \(\Gamma\)-F and \(\Gamma\)-L directions respectively.
characters of these bands.
The characteristics of the SSBs have been investigated by performing surface state calculations on the (111) crystal plane of BiSbTeSe\({}_{2}\). In Fig.4(a) the bands (red) of a 6QL slab structure with a Se1-terminated face are plotted along the M-\(\Gamma\)-K direction of the surface BZ. It can be seen that two bands falling into the shape of a 'V' are observed just above the E\({}_{f}\) around the \(\Gamma\) point. Here, green dots represent the orbital contribution coming from the atoms present in the topmost QL. The crossing of these Dirac-like SSBs occurring at E\({}_{f}\) is more clearly visible in the inset. In order to determine the origin of these 'V' shaped bands, the orbital projection weights of the atoms contributing most to these bands at the \(\Gamma\) point are plotted in Fig.4(b) with respect to their distances from the surface (z). Fig.4(b) shows clearly that the orbital character originates primarily from the atoms close to the surface rather than the bulk region, and the same character also persists at the k-points near the \(\Gamma\) point. This further confirms the surface state nature of these bands. These results show a qualitative similarity to the previously reported SSBs of BiSeTe\({}_{2}\)[23]. The 'V' shaped bands show a high resemblance to the intensity pattern of the SSBs observed in the ARPES data (Fig.3(a)), though there is a slight mismatch in the Fermi level position. The mismatch is possibly due to the intrinsic n-doping in the sample, which raises the E\({}_{f}\) level in the experimental data. The other possibility, a Te-terminated surface, has also been examined, and the bands (blue) of a 6QL slab of this geometry are shown in Fig.4(c). In this case, the 'V' shaped band around the \(\Gamma\) point (I\({}^{st}\) region) deviates from linearity as it moves away from the \(\Gamma\) point (II\({}^{nd}\) region). The position of the tip of this 'V' shaped band (180 meV) is well below the E\({}_{f}\), unlike the case of the Se-terminated face (Fig.4(a)). This energy position differs from the experimentally observed BVB (80 meV) of freshly cleaved BSTS (see Fig.1(e) of the supplementary note). In addition, the orbital weight of these bands in regions I\({}^{st}\) and II\({}^{nd}\) is dominated by atomic orbitals located at bulk and surface sites, respectively, as is clear from Fig.4(d). Probably, this large mixing of bulk and surface characters leads to the deviation in the dispersion of this band. This different origin, bulk versus surface, of the 'V' shaped band around the \(\Gamma\) point in the Te and Se termination cases shows that the SSBs are sensitive to the atomic composition of the surface.
As mentioned before, the tunability of the DP within the bulk band gap, which can be achieved by chemical doping, is an advantage of BSTS that is important from the technological point of view. Intimately related to this is the observed gradual shifting of the DP with adsorption of gases, or even with the elapse of time in ultra-high vacuum after crystal cleaving.
Figure 3: (a), (c), (e) and (g) correspond to the near E\({}_{f}\) ARPES images taken at photon energy 35, 33, 31 and 28 eV respectively. (b), (d), (f) and (h) show EDC spectra corresponding to the images in (a), (c), (e) and (g) respectively. Intensity map using s and p-polarized photon energy of 31 eV are shown in (i) and (k) respectively. EDC spectra of these images are shown in (j) and (l) respectively.
Figure 4: (a) and (c) show calculated bands of the Se1- and Te-terminated faces of the 6QL slab geometry of BiSbTeSe\({}_{2}\), respectively. (b) Weight of the atomic orbitals mainly contributing to the ‘V’ shaped band (enclosed in the black box) at the \(\Gamma\) point in the Se-terminated plane, with respect to their distances from the surface. (d) Similar contribution to the ‘V’ shaped band at the \(\Gamma\) point (I\({}^{st}\)) and slightly away from the \(\Gamma\) point (II\({}^{nd}\)) in the Te-terminated face.
Figure 5: (a) and (b) depict the ARPES intensity plots taken at 31 eV photon energy along the \(\Gamma\)-K direction, \(\sim\) 10 and 27 hrs after the sample cleaving. (c) and (d) show similar images along the \(\Gamma\)-M direction, collected at different time intervals after the cleaving. (e)-(h) show MDC spectra extracted from the images in (a)-(d). The E-k dispersion relations estimated from the MDC plots are fitted to the calculated values obtained from the model Hamiltonian [28] along the \(\Gamma\)-K (i) and the \(\Gamma\)-M (j) directions.
This shifting of the DP is caused by band bending, which has been observed previously in various TIs, such as Bi\({}_{2}\)Se\({}_{3}\) and Bi\({}_{2}\)Te\({}_{3}\)[26, 27]. We present our observations of the BB effects on the SSBs in Fig.5, where 5(a) and (b) show ARPES images along the \(\Gamma\)-K direction collected \(\sim\) 10 and 27 hours after the cleaving. As can be seen from Fig.5(b), a significant shift of \(\sim\) 0.14 eV is observed in the position of E\({}_{d}\) in comparison to that in Fig.5(a). Further, the filled CB states in the near-E\({}_{f}\) region around the \(\Gamma\) point are clearly demarcated from the SSBs, and an arc-shaped structure is seen in these CB states, which could be a signature of a two-dimensional electron gas (2DEG) arising due to the strong BB, as in Bi\({}_{2}\)Se\({}_{3}\)[26]. Similarly, a shift of E\({}_{d}\) and the appearance of distinct SSBs and CB states can also be found in the ARPES images of Fig.5(c) and (d), which were collected along the \(\Gamma\)-M direction at different time intervals after the cleaving. Fig.5(e)-(h) show plots of the momentum distribution curves (MDCs) extracted from the ARPES images of Fig.5(a)-(d) respectively, where the linear dispersion of the MDC peaks is clearly seen. An energy dispersion relation of the SSBs can also be obtained from the model Hamiltonian approach proposed by Fu [28] through the following relation.
\[E_{\pm}(\vec{k})=E_{0}(k)\pm\sqrt{\nu_{k}^{2}k^{2}+\lambda^{2}k^{6}\cos^{2}(3\theta)},\quad E_{0}(k)=\frac{k^{2}}{2m^{*}},\quad\nu_{k}=\nu_{0}(1+\alpha k^{2}) \tag{1}\]
where E\({}_{\pm}\) corresponds to the energy of the upper and lower bands, E\({}_{0}\)(k) generates particle-hole asymmetry, m\({}^{*}\) denotes the effective mass, and \(\theta\) indicates the azimuthal angle of the momentum \(\vec{k}\) with respect to the x-axis (\(\Gamma\)-M direction). \(\lambda\) is a parameter for the hexagonal warping. \(\nu_{0}\) is the Dirac velocity, which is modified to \(\nu_{k}\) after including a second-order correction parameter (\(\alpha\)) to the Dirac velocity in the k.p Hamiltonian. The peak positions measured from the MDC plots along the \(\Gamma\)-K and \(\Gamma\)-M directions are fitted to the E-k dispersion relation of the SSBs obtained from Eq. 1 in Fig.5(i) and (j), respectively. The calculated bands fit nicely near the DP, while a slight deviation can be seen in the regions away from the DP, where the states of the BVB and BCB are predominant. Parameters used for the fitting are tabulated in Table 1, which shows that \(\nu_{0}\) reduces significantly along the \(\Gamma\)-K direction 27 hrs after the cleaving in comparison to the 10 hrs case. On the other hand, the warping strength, defined as \(\sqrt{\lambda/\nu_{0}}\), remains almost constant under the influence of BB. The estimated value of the warping strength (\(\sqrt{\lambda/\nu_{0}}\) = 6.8) is intermediate between the values found in Bi\({}_{2}\)Se\({}_{3}\) and Bi\({}_{2}\)Te\({}_{3}\)[29]. This result clearly establishes that FS warping and the associated out-of-plane spin polarization can be controlled by the ratio of chalcogen/pnictogen atoms in Bi/Sb-based TIs.
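For reference, Eq. 1 is straightforward to evaluate numerically; the sketch below (ours) uses parameter values of the order of those in Table 1, and such a curve can be compared with the MDC peak positions via a standard least-squares fit (e.g. scipy.optimize.curve_fit):

```python
import numpy as np

def surface_band_energy(k, theta, inv_2m, v0, alpha, lam, sign=+1):
    """E_{+/-}(k) of Eq. (1) (Fu's warped Dirac-cone model [28]).
    k in 1/Angstrom; theta is the azimuthal angle from the Gamma-M
    direction; inv_2m = 1/(2 m*); v0, alpha, lam as defined in the text."""
    e0 = inv_2m * k**2                       # particle-hole asymmetry term
    vk = v0 * (1.0 + alpha * k**2)           # corrected Dirac velocity
    warp = lam**2 * k**6 * np.cos(3.0 * theta)**2
    return e0 + sign * np.sqrt(vk**2 * k**2 + warp)

# Upper branch for illustrative parameters of the order of Table 1:
k = np.linspace(0.0, 0.15, 50)               # 1/Angstrom
E_up = surface_band_energy(k, 0.0, 7.0, 3.0, 5.0, 130.0)
```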
Further experiments were performed to understand the BB effect by using a laboratory HeI (21.2 eV) photon source in combination with a Scienta R3000 electron energy analyzer. It has earlier been proposed that the BB in TIs originates from the accumulation of additional charges at the surface, and that these extra charges arise due to Se vacancies present in the bulk as well as those created at the surface in the process of surface cleaving [30, 31]. Moreover, adsorption of residual gases further changes the charge distribution at the surface [10, 11, 12]. In order to understand the role of adatoms in BB, we undertook ARPES measurements over a cycle of temperatures, 300K - 77K - 300K. Since the adsorption of residual gases is faster at low temperatures, the effect of BB is expected to be enhanced at low temperatures. This view is fairly supported by our thermal cycling ARPES data taken at different time intervals under constant exposure of Ar, N\({}_{2}\) and O\({}_{2}\) gases. In the first panel, Fig.6(a), (b) and (c) show the ARPES images taken at 300K - 77K - 300K respectively just after the cleaving (I\({}^{st}\) thermal cycle) in the Ar environment. Similarly, the second (Fig.6(d)-(f)) and third (Fig.6(g)-(i)) panels display the ARPES images of the I\({}^{st}\) thermal cycle performed under constant dosing of N\({}_{2}\) and O\({}_{2}\) gases respectively. In Fig.6(b), a marginal shift of the BVB maximum (marked with a red arrow) is observed towards the E\({}_{f}\), though it was recorded at a later time compared to Fig.6(a). This result of BB is contrary to the behavior observed under ultra-high vacuum conditions. It indicates that the Ar adatoms act like electron acceptors, compensating the Se-vacancy-induced downward BB and thereby leading to a small upward shift of the BVB. This inference is supported by the relatively large downward shift of the BVB due to the gas desorption in the annealed data (Fig.6(c)).
| Direction | Time (hr.) | \(1/(2m^{*})\) (eV·Å\({}^{2}\)) | \(\nu_{0}\) (eV·Å) | \(\alpha\) (Å\({}^{2}\)) | \(\lambda\) (eV·Å\({}^{3}\)) |
|---|---|---|---|---|---|
| \(\Gamma\)-K | 10 | 7 | 3.0 | 5 | 130 |
| \(\Gamma\)-K | 27 | 1.8 | 1.8 | 5 | 80 |
| \(\Gamma\)-M | 15 | 5 | 2.85 | 5 | - |
| \(\Gamma\)-M | 27 | 4 | 2.65 | 5 | - |

Table 1: Parameters obtained by fitting the E-k dispersion relation (Eq. 1) to the SSB dispersions estimated from the ARPES data along the \(\Gamma\)-K and \(\Gamma\)-M directions.
These changes are more clearly visible in the higher BE region (between the two red dotted lines). Similar characteristics of hole doping are also seen under the exposure of N\({}_{2}\) gas (Fig.6(d)-(f)). However, the dosing of O\({}_{2}\) gas gives rise to an opposite BB effect, _i.e._ features of n-doping, as is clear from Fig.6(g)-(i). Further, in this case a faint feature of the SSBs appears quite early at the top of the BVB, unlike the other gas-exposure cases after the cleaving. This is probably linked to the higher adsorption of O\({}_{2}\) gas, which accelerates the BB. These changes are also compared in the EDC plots (taken around the \(\Gamma\) point with a k width of \(\pm\) 0.02 Å\({}^{-1}\)) in Fig.6(j), (k) and (l), which correspond to the first, second and third panels respectively. Data sets of thermal cycling performed \(\sim\) 0:30 and 24:00 hrs after the cleaving are marked I\({}^{st}\) and II\({}^{nd}\) respectively. In Fig.6(j), a sharp reduction is observed in the intensity of the initial 300K spectrum (black) in comparison to the 77K spectrum (red), which rises and falls again in the next 77K (blue) and 300K (green) data. Similar behavior is reproduced in the II\({}^{nd}\) set of thermal cycling as well. In addition, the spectral weight originating from the filled BCB states is also found to be enhanced in the vicinity of the E\({}_{f}\), as shown in the inset. The trend under thermal cycling shown by the EDC spectra of the N\({}_{2}\) case (Fig.6(k)) qualitatively matches that of the Ar exposure case, whereas the EDC plots of the O\({}_{2}\) exposure case (Fig.6(l)) show an opposite behavior. The small recovery of the annealed 300K (green) spectrum with respect to the 77K (red) in the II\({}^{nd}\) cycle could possibly be a signature of incomplete desorption of the O\({}_{2}\) gas.
It was reported that in the binary TI Bi\({}_{2}\)Se\({}_{3}\) the DP moves by 116 meV from its initial position just after cleaving to a saturation value under the influence of BB [9]. Our own measurements also showed a shift of \(\sim\) 0.1 eV on a time scale of 11:00 hrs after the sample cleaving in Bi\({}_{2}\)Se\({}_{3}\). It is interesting to note that this movement is substantially smaller than that in BSTS, where it is \(\sim\) 0.2 eV on a similar time scale and under similar experimental conditions (Fig.1 of supplementary note). Recent studies on Bi\({}_{2}\)Se\({}_{3}\) have shown that not only the extra charges at the surface but also their periodic re-arrangement inside the bulk creates a Coulomb potential of long-range order contributing to the BB [31]. This unique property is inherent to the layered structure of Bi\({}_{2}\)Se\({}_{3}\), where charge is accumulated and depleted at both ends of each QL. Introduction of additional elements (Sb and Te) in the QL of BSTS leads to an asymmetry in the structure of the QL compared to Bi\({}_{2}\)Se\({}_{3}\). The presence of Te atoms in
Figure 6: (a), (b) and (c) show ARPES images taken at 300K - 77K - 300K respectively just after the cleaving (I\({}^{st}\) thermal cycle) under constant Ar exposure. (d)-(f) and (g)-(i) display the ARPES images of the I\({}^{st}\) thermal cycle performed under constant dosing of N\({}_{2}\) and O\({}_{2}\) gases respectively. (j), (k) and (l) correspond to the EDC (taken around the \(\Gamma\) point) plots of thermal cycling under the exposure of Ar, N\({}_{2}\) and O\({}_{2}\) gases respectively, where the inset of each plot shows the enlarged view of the near E\({}_{f}\) region. Different colours (black \(\rightarrow\) red \(\rightarrow\) green \(\rightarrow\) blue \(\rightarrow\) magenta) of the EDC spectra represent various stages (300K \(\rightarrow\) 77K \(\rightarrow\) 300K \(\rightarrow\) 77K \(\rightarrow\) 300K) of thermal cycling respectively.
addition to the Se atoms at the terminating faces of the QL in Bi\({}_{1.5}\)Sb\({}_{0.5}\)Te\({}_{1.7}\)Se\({}_{1.3}\) has been observed in STM measurements [18]. This could possibly provide different screening of the surface charges compared to Bi\({}_{2}\)Se\({}_{3}\). Thus, an oscillatory behavior of the charge density could persist to larger distances inside the bulk region and result in a Coulomb potential of higher magnitude and longer range, giving rise to stronger BB in BSTS in comparison to Bi\({}_{2}\)Se\({}_{3}\). Our experimental observations, stronger BB and a lower DP position in BSTS compared to Bi\({}_{2}\)Se\({}_{3}\), are consistent with the ARPES results on a similar composition, Bi\({}_{1.5}\)Sb\({}_{0.5}\)Te\({}_{1.7}\)Se\({}_{1.3}\), reported by Golden _et al._ [32]. These authors also attributed the variation in the temporal evolution of the SSBs between the Bi\({}_{1.5}\)Sb\({}_{0.5}\)Te\({}_{1.7}\)Se\({}_{1.3}\) and Bi\({}_{2}\)Se\({}_{3}\) compounds to the difference in effective screening of the adsorbate-induced surface charge. In addition, they suggested that the different compositions of the terminated faces of the two compounds could also influence the sticking of residual gas atoms, thereby leading to different BB behaviour. This argument is supported by our first-principles results, where we find that the nature of the SSBs is different in the Se- and Te-terminated slab geometries (Fig.4). Besides this, another factor affecting the BB process could be a difference in the relaxation of the exposed surfaces of the two compounds. However, in the case of Bi\({}_{2}\)Se\({}_{3}\), Hofmann _et al._ have ruled out any surface lattice relaxation from their LEED study [26].
In conclusion, we discussed the results of our experimental studies using ARPES and first-principles based Quantum Espresso band structure calculations and confirmed the non-trivial topology of the SSBs in BSTS. Our calculations show that the SSBs are sensitive to the atomic composition of the terminating surface, and our experimental data show that they have a partial 3D character. We have undertaken a detailed study of the shifting of the DP by the BB effect with the elapse of time as well as with adsorption of gases after the crystal cleaving. We find that under the BB effect the DP in BSTS shifts by more than two times compared to that in Bi\({}_{2}\)Se\({}_{3}\) to reach saturation. Our results suggest that the stronger BB in BSTS could be due to the difference in screening of the surface charges because of the different compositions of the QLs of the two compounds. From the MDCs of the ARPES data we obtained an energy dispersion relation showing the warping strength of the Fermi surface in BSTS to be intermediate between those found in Bi\({}_{2}\)Se\({}_{3}\) and Bi\({}_{2}\)Te\({}_{3}\), and also to be tunable by the ratio of chalcogen/pnictogen atoms. Further experiments reveal that the nature of the BB effects is highly sensitive to the exposure of the fresh surface to various gas species; Ar and N\({}_{2}\) show signatures of hole doping while O\({}_{2}\) shows those of electron doping. Our findings could be important for the tuning of the DP in topological insulators, especially the members of the BSTS family, for technological applications.
## Methods
The high quality single crystal samples of BiSbTe\({}_{1.25}\)Se\({}_{1.75}\) (BSTS) used in this study were grown by a modified Bridgman method. Stoichiometric amounts of Bi (99.999%), Sb (99.999%), Te (99.999%) and Se (99.999%) were heated in evacuated quartz ampoules to a temperature of 1073 K, followed by slow cooling. Large single crystals (\(\sim\) 5 cm) were obtained, which cleaved easily along planes normal to the c-axis. The ARPES experiments were carried out using the facilities associated with the BaDELPH beamline of the ELETTRA synchrotron center, Italy, equipped with a SPECS Phoibos 150 hemispherical analyser. The photoemission spectra were collected on freshly cleaved (_in-situ_ at 77K) surfaces of the crystals under a vacuum of the order of 4.0 \(\times\) 10\({}^{-11}\) mbar. In addition, ARPES data were taken using our laboratory facility equipped with a high-flux GAMMADATA VUV He lamp (VUV5000) attached to a VUV monochromator (VUV5040) and a SCIENTA R3000 analyser. The Fermi energies of the samples were calibrated using a freshly evaporated Ag film on the sample holder. The total energy resolution, estimated from the width of the Fermi edge, was about 27 meV for the HeI excitation energy. The angular resolution was better than 1\({}^{\circ}\) in the wide-angle mode (8\({}^{\circ}\)) of the analyzer. All the measurements were performed inside the analysis chamber under a base vacuum of 3.0 \(\times\) 10\({}^{-10}\) mbar.
First-principles calculations were performed using a plane wave basis set as implemented in Quantum Espresso (QE) [33]. The many-electron exchange-correlation energy was approximated by the Perdew-Burke-Ernzerhof (PBE) functional [34, 35, 36]. Fully relativistic ultrasoft [37] and non-relativistic norm-conserving pseudopotentials were employed for the spin-orbit-coupled (SOC) and non-SOC calculations, respectively. A fine mesh of k-points with Gaussian smearing of the order of 0.0001 Ry was used for the Brillouin zone integration, and the kinetic energy and charge density cut-offs were set to 100 Ry and 450 Ry, respectively. Surface state calculations on the (111) plane were performed using supercell structures of the hexagonal unit cell consisting of six quintuple layers (QLs) with a vacuum separation of \(\sim\) 26 Å. Starting from the experimental lattice parameters [14], all the structures were relaxed under damped (Beeman) dynamics with respect to both the ionic coordinates and the lattice vectors.
|
2301.10381 | Discovery of 20 UV Emitting SNRs in M31 with UVIT | We present the first catalog of supernova remnants (SNRs) in M31 which
exhibit diffuse ultraviolet (UV) emission. UV images of M31 were obtained by
the Ultraviolet Imaging Telescope (UVIT) on the AstroSat satellite, and the
list of SNRs was obtained from X-ray, optical and radio catalogues of SNRs in
M31. We used the UVIT images to find SNRs with diffuse emission, omitting those
too contaminated with stellar emission. 20 SNRs in M31 were detected with
diffuse UV emission. Fluxes in the UVIT F148W, F169M, F172M, N219M and N279N
filters are measured for these SNRs. The luminosities are compared to those
computed from the spectra of seven known UV-emitting SNRs in the Milky Way, the
LMC, and the SMC. We find similar spectral shapes between the known and the M31
UV-emitting SNRs. The spectral shapes and the diffuse nature of the emission
are good evidence that the UV emissions are dominated by line emissions, like
known SNRs, and the UV is associated with the SNRs. Models are applied to the 6
SNRs with X-ray spectra. The main difference is that the 2 X-ray/UV SNRs are
Type Ia and the 4 X-ray/non-UV SNRs are core-collapse or unknown type. A
comparison of M31 SNRs in different wavebands shows that most are detected
optically, similar to the case for other nearby galaxies. 19 of the 20
UV-emitting SNRs are detected optically, expected because both UV and optical
are from forbidden and recombination lines from shock-ionized gas. | Denis Leahy, Christopher Monaghan, Sujith Ranasinghe | 2023-01-25T02:09:17Z | http://arxiv.org/abs/2301.10381v1 | # Discovery of 20 UV Emitting SNRs in M31 with UVIT
###### Abstract
We present the first catalog of supernova remnants (SNRs) in M31 which exhibit diffuse ultraviolet (UV) emission. UV images of M31 were obtained by the Ultraviolet Imaging Telescope (UVIT) on the AstroSat satellite, and the list of SNRs was obtained from X-ray, optical and radio catalogues of SNRs in M31. We used the UVIT images to find SNRs with diffuse emission, omitting those too contaminated with stellar emission. 20 SNRs in M31 were detected with diffuse UV emission. Fluxes in the UVIT F148W, F169M, F172M, N219M and N279N filters are measured for these SNRs. The luminosities are compared to those computed from the spectra of seven known UV-emitting SNRs in the Milky Way, the LMC, and the SMC. We find similar spectral shapes between the known and the M31 UV-emitting SNRs. The spectral shapes and the diffuse nature of the emission are good evidence that the UV emissions are dominated by line emissions, like known SNRs, and the UV is associated with the SNRs. Models are applied to the 6 SNRs with X-ray spectra. The main difference is that the 2 X-ray/UV SNRs are Type Ia and the 4 X-ray/non-UV SNRs are core-collapse or unknown type. A comparison of M31 SNRs in different wavebands shows that most are detected optically, similar to the case for other nearby galaxies. 19 of the 20 UV-emitting SNRs are detected optically, expected because both UV and optical are from forbidden and recombination lines from shock-ionized gas.
Andromeda Galaxy (39); Ultraviolet astronomy (1736); Supernova remnants
Footnote †: journal: AJ
## 1 Introduction
A supernova remnant (SNR) is an extended (pc-scale) structure in the interstellar medium excited by the shock wave from the explosive death of a star at the end of its life. The study of SNRs is crucial to our understanding of supernova explosions, the nature of shock waves and the structure of the interstellar medium. SNRs emit over a wide range of wavelengths, thus Galactic and extragalactic surveys of SNRs have been conducted at optical, radio, and X-ray wavelengths. There are 294 known SNRs within the Milky Way (Green, 2019). A small number of nearby external galaxies have had their SNRs catalogued, including the Large and Small Magellanic Clouds (LMC & SMC), M33, NGC 300 and M31. For example, there are 62 confirmed SNRs and 30 SNR candidates in the LMC, and 21 SNRs and 2 candidates in the SMC (Yew et al., 2021; Maggi et al., 2019). There are 109 SNRs and SNR candidates in M33 (Duric, 2000; Pannuti et al., 2000) and 44 in NGC 300 (Pannuti et al., 2000). Optical surveys of M31 have found 156 SNRs (Lee and Lee, 2014), and an XMM-Newton X-ray survey found 26 SNRs and 21 SNR candidates (Sasaki et al., 2012). Radio emission was found for 30 SNRs in M31 by Braun and Walterbos (1993).
Galactic and extragalactic searches for SNRs have been carried out at optical, radio, and X-ray wavelengths. However, ultraviolet (UV) observations of SNRs are scarce. The difficulty in detecting UV emission is caused by the strong interstellar extinction of our Galaxy in the UV (e.g. Sun et al., 2021). As a result, only nearby Galactic SNRs have been detected in the UV. However, UV emission lines from SNRs can provide valuable information on the SNR, including shock velocities, densities, and thermal structure (Raymond et al., 1997).
The first major steps in analyzing the UV emission of SNRs came from the International Ultraviolet Explorer (IUE), designed for analyzing UV spectra. IUE data provided the groundwork for the first UV-based analyses of SNRs throughout the 1980's. Many studies focused on comparing theoretical models of shockwaves to nearby SNRs, such as Vela and the Cygnus Loop, by analyzing the individual line emissions in the UV spectrum (Raymond et al., 1980, 1981, 1988). SNRs in the Large Magellanic Cloud and the Small Magellanic Cloud, such as N49, N63, and E0102, were also analyzed using their UV spectra (Benvenuti et al., 1980; Vancura et al., 1992; Benvenuti et al., 1980). Despite the large distances to these sources, their positions with respect to the plane of the Milky Way allow light from the LMC and SMC to pass through less of the interstellar medium, and to experience less extinction than most Galactic SNRs. Such studies determined that the UV spectra of these remnants are dominated by line emission, with many of the same lines present in different SNRs. N[V], Si[IV], O[IV], He[II], and O[III] emission lines were observed in more than six Galactic and extragalactic (LMC and SMC) SNRs, and a number of other lines were identified in more than one SNR (Fesen and Hurford, 1996).
Despite the progress in UV-based SNR research, there does not yet exist a catalogue of extragalactic UV-emitting SNRs. As an important step in UV studies of SNRs, we conduct a search for SNRs in M31 using data from AstroSat's UVIT instrument, and generate the first catalog of UV SNRs in another galaxy. Section 2 below summarizes the observations and Section 3 describes our data analysis. In Section 4.1 the catalog of UV-emitting SNRs in M31 is given, and in Section 4.2 the UV spectral shapes of known UV-emitting SNRs are compared to those of the M31 SNRs. The set of 6 SNRs with X-ray spectra are fit with SNR models to derive their physical conditions in Section 4.3. The statistics of the numbers of SNRs detected in different wavebands is discussed in Section 4.4, and Section 5 summarizes the results from this study.
## 2 Observations
The observations of M31 were carried out by the Ultraviolet Imaging Telescope (UVIT) onboard AstroSat (Singh et al., 2014). UVIT is capable of observing in a variety of Far Ultraviolet (FUV) and Near Ultraviolet (NUV) bandpasses. The M31 survey includes data taken with the F148W, F154W, F169M, F172M, N219M, and N279N filters, although the F154W filter was used for only one observation. The filter parameters, including effective area curves, can be found in Tandon et al. (2017). New in-orbit calibrations of UVIT were carried out by Tandon et al. (2020). Data processing was carried out using CCDLab (Postma and Leahy, 2017, 2021) to produce images with a pixel scale of \(0.4168^{\prime\prime}\times 0.4168^{\prime\prime}\) from the instrument data. The resulting spatial resolution, using the latest UVIT calibrations and data processing procedure, is \(\simeq 1^{\prime\prime}\).
The M31 survey with UVIT (Leahy et al., 2020) consists of 19 partially-overlapping fields, each with a diameter of \(\approx 28^{\prime}\), covering a sky area of \(\approx 3.3^{\circ}\times 1.3^{\circ}\). Since the survey paper (Leahy et al., 2020), additional observations were carried out, including observation of the missing field number 8 in F148W and F169M bands to yield full coverage of the M31 survey area in the F148W filter, and partial coverage in the other filters. The images of these 19 fields, each with 2 to 5 filter bands, were used in the analysis carried out here.
## 3 Data Analysis
### Selection of SNRs
The list of SNRs and SNR candidates within M31 was obtained from existing optical, X-ray and radio SNR lists1. The optical SNRs were from Lee and Lee (2014), which contained 156 SNR candidates observed using H\(\alpha\) and [S II] images. The X-ray SNRs were from Sasaki et al. (2012), which lists 26 confirmed SNRs and 21 SNR candidates. These sources were first catalogued in an XMM-Newton survey of M31, which catalogued a total of 1897 X-ray sources (Stiele et al., 2011). The radio SNRs were from Braun and Walterbos (1993), which lists 24 high-confidence (\(>5\sigma\)) and 6 medium-confidence (3 to \(5\sigma\)) detections at 1465 MHz. That work found 52 SNRs and candidates using narrow-band imaging in [S II] and H\(\alpha\) filters of the NE half of M31, then matched those with radio continuum imaging of M31.
Footnote 1: Galvin and Filipovic (2014) give a catalog of 916 radio sources in M31 detected at 20 cm; however, the sources which are SNRs are not identified.
We combine the three sets of SNRs. The 52 optically-detected SNRs (including 30 with radio emission) from Braun and Walterbos (1993) were included in the study of Lee and Lee (2014). Four of the 30 with radio emission were re-examined
by Lee & Lee (2014) and found not to be SNRs, and one more was found by us to be contaminated by stellar emission. This left 25 SNRs from Braun & Walterbos (1993) that are detected in radio, which are included in the list in Lee & Lee (2014). Some SNRs are listed in both optical and X-ray catalogues as noted in the analysis of X-ray SNRs in the northern disc of the M31 (Sasaki et al., 2018). Using an angular separation of 18'', we found an additional 18 source matches between the radio and X-ray SNR catalogs. This left 179 unique SNRs or SNR candidates, hereafter referred to as SNRs, within M31. However, two of these are outside the M31 UVIT survey region, leaving 177 unique sources for analysis using UVIT data: 119 detected only in optical, 22 detected only in X-ray, 13 detected in optical and radio, 11 detected in optical and X-ray, and 12 detected in optical, X-ray and radio. The location of these 177 SNRs in M31 for the different categories is shown in Figure 1(b).
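As a concrete illustration of the 18'' cross-matching step described above, a minimal sketch using astropy follows; the coordinate arrays are hypothetical stand-ins for the radio and X-ray SNR lists.

```python
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

# Hypothetical (RA, Dec) pairs in degrees for the two catalogues.
radio = SkyCoord(ra=np.array([10.68, 11.12]) * u.deg,
                 dec=np.array([41.27, 41.55]) * u.deg)
xray = SkyCoord(ra=np.array([10.6801, 11.90]) * u.deg,
                dec=np.array([41.2699, 41.10]) * u.deg)

# For each radio SNR, find the nearest X-ray SNR and keep pairs within 18''.
idx, sep2d, _ = radio.match_to_catalog_sky(xray)
matched = sep2d < 18 * u.arcsec
print(list(zip(idx[matched], sep2d[matched].to(u.arcsec))))
```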
### Search for UV emission from the SNRs
The locations of the 177 SNRs were used to determine within which field each object was located, and thus which filters were observed for each SNR. Next, we carried out a set of tasks to find those SNRs which have diffuse emission not too contaminated by UV-emitting stars in M31. SNRs in the UV are characterized by diffuse emission, in contrast to stars, which are unresolved point sources. UVIT has high enough spatial resolution in most cases to separate the diffuse from the stellar point-source emission.
An initial inspection of the 177 SNRs was undertaken to remove any sources from the list without any UV emission within the optical radius of the SNR, given in Lee & Lee (2014). For the 22 X-ray-only SNRs, no optical SNR radius was available, and neither was an X-ray radius, so the position error was used instead. The radius of analysis around each source (whether from optical or X-ray) is henceforth referred to as the "SNR radius". Several SNRs were
Figure 1: (a): the F148W mosaic of M31 (from Leahy et al., 2020) but with missing field 8 added. (b): positions of the 177 SNRs from Lee & Lee (2014) and Sasaki et al. (2012), overlaid on the mosaic (with the mosaic made fainter to clearly see the SNRs). Red squares indicate the positions of SNRs detected in optical, aqua indicates SNRs detected in X-ray, purple indicates SNRs detected in optical and radio, pink indicates SNRs detected in optical and X-ray, and yellow indicates SNRs detected in optical, radio and X-ray. SNRs identified in optical, X-ray and radio in panel (b) closely trace the positions of the spiral arms and star formation, which are bright in the F148W image of M31 (panel (a)).
contained within densely packed, UV-emitting star clusters, so that no SNR emission could be distinguished from the stellar emission. The "no emission" and crowded sources were removed from the list of SNRs for analysis, leaving 126.
Further inspection of these 126 SNRs often revealed stars within the radii of the SNRs. Thus, we searched for known stars within the SNR radius of each SNR: Vizier ([https://vizier.cds.unistra.fr/](https://vizier.cds.unistra.fr/)) was used to search through stellar catalogues. A number of stellar catalogues were initially searched, but most were contained within the GAIA Early Data Release 3 (EDR3) catalogue (Bailer-Jones et al., 2021). We found all EDR3 stars within a 15'' search radius (15'' was the largest optical radius of any source). A total of 766 stars were found in the EDR3 data (many would not have UV emission), and their coordinates were imported into CCDLab to search for stellar contamination. 42 sources were removed from our SNR list because all UV emission within the SNR radius was identified with catalogued GAIA stars, leaving 84 sources in our list of SNRs. This process was repeated using the full data release (DR3) upon its publication on June 13th, 2022. 5 additional stars were determined to be near the SNRs, but none were associated with the remaining 84 with UV emission.
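A query of this kind can be scripted; a minimal sketch with astroquery follows, where the Gaia EDR3 Vizier identifier, the example position (SNR 2 of Table 2), and the printed column names are our assumptions.

```python
import astropy.units as u
from astropy.coordinates import SkyCoord
from astroquery.vizier import Vizier

snr_pos = SkyCoord("00h42m20.0s", "+41d27m54.7s", frame="icrs")

# "I/350/gaiaedr3" is assumed here to be the Vizier table ID for Gaia EDR3.
vizier = Vizier(row_limit=-1)
tables = vizier.query_region(snr_pos, radius=15 * u.arcsec,
                             catalog="I/350/gaiaedr3")
if tables:
    # Column names are assumptions; inspect tables[0].colnames to confirm.
    print(tables[0]["RA_ICRS", "DE_ICRS", "Gmag"])
```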
The 84 sources were further analyzed by searching for stellar sources from additional stellar catalogues. A number of catalogues were searched during this process, including a BVRI analysis of M31 objects using the McGraw-Hill Telescope, a Swift/UVOT source catalogue, and an XMM-OM object survey (Magnier et al., 1992; Yershov, 2014; Page et al., 2012). Vizier searches of these catalogues did not provide additional stellar contaminants in the 84 SNRs. However, the catalogue of M31 supergiants from the Local Group Galaxy Survey (Massey et al., 2016) provided additional stellar sources within a 15'' search radius of the 84 SNRs. The red supergiant catalogue of Massey et al. (2021) was also included in the Vizier search. 610 supergiants from these two catalogues were found to be near our remaining SNRs: most of these were blue supergiants or luminous blue variables. This process led to a number of UV emission regions being reclassified as supergiant stars. 43 sources were removed from our SNR list, leaving 41 SNRs. One more source was removed using the HST M31 PHAT catalogue (Williams et al., 2014), based on PHAT stars with F275W magnitudes brighter than 20.
Out of the 40 SNRs that remained, 7 had UV emission with no association with stars. The other 33 consisted of either i) sources with diffuse emission that overlapped with GAIA stars or Massey supergiants or ii) diffuse emission too dim to be reliably measured. Stars within diffuse emission were examined to determine if they were likely to emit UV radiation: larger U-B values indicated that the star emitted very little UV radiation, and would not contaminate the UV measurements. These 33 SNRs were analyzed in CCDLab to determine whether or not a measurement of the diffuse emission could be done. 13 of these were isolated enough from nearby UV sources to analyze using box measurements. 5 sources had clear indications of diffuse emission, but the region had too many overlapping stars (i.e. was confused) for a reliable flux measurement to be taken. The remaining 15 sources were removed from the list, as there was either no clear indication of SNR emission within the SNR radius, or the measured flux was too dim compared to the background to yield a reliable measurement.
The result of the above selection process left 25 UV-emitting SNRs, consisting of 7 without stellar contamination, 13 with stellar contamination that can be separated from the diffuse emission, and 5 more with likely diffuse emission but too confused with stars for measurement. The locations of the 20 SNRs in M31 with clear diffuse UV emission and the 5 SNRs with likely diffuse emission are shown in Figure 2. We do not consider further the 5 diffuse but confused sources for which reliable UV fluxes could not be obtained. Thus, the number of SNRs for which we can carry out flux measurements is 20.
Properties of the 20 SNRs with detected diffuse UV emission are shown in Table 1. Table 1 lists each SNR's ID numbers, including their original Lee and Stiele (SPH11) IDs, their J2000 coordinates, the optical diameter and error, the likely progenitor type, and their optical and/or X-ray luminosities.
Figure 2: Positions of the 20 SNRs with detected diffuse UV emission (red squares) and of the 5 SNRs with likely, but confused, diffuse emission (blue squares), overlaid on the image of M31 in the F148W filter.
### Measurement of SNR UV Luminosities
From the measured fluxes and the known distance to M31, we determine the SNR UV luminosities in the different filters. CCDLab provides source-fitting methods for photometric measurements (Postma & Leahy, 2017). The methods include Gaussian and Moffat functional fits, a Curve-of-Growth (COG) fit, and a box fit. For diffuse emission, Gaussian and Moffat functions, which are designed for fitting point sources, are not suitable. The COG fit works well to measure all counts within a given radius if there is a good background outside that radius, but does not work well for our diffuse sources, which are usually surrounded by stellar sources. Thus, we use the box method to calculate the fluxes of diffuse emission and to subtract background emission while avoiding stellar emission in both the source and background boxes.
The box method yields the total photon counts in a chosen box, which can be square or rectangular. The SNR source counts were measured using different box sizes, generally 4 different sizes from 17 by 17 pixels to 23 by 23 pixels. However, more crowded sources would be measured 2 or 3 times using smaller boxes, and some sources required larger boxes due to their unusual shape. A few sources were large and irregularly shaped. These required a larger box to encapsulate the UV emission, and for these cases small boxes would be used to remove the counts from any stars that fell within the larger box. In order to account for background variations, 6 different measurements were done for each filter using manually chosen boxes. The average of the 6 measurements was used as the background, and the standard deviation of the 6 measurements used as the uncertainty.
The counts for each filter band image were divided by the net exposure time of each image to obtain a count rate. This was corrected for the extended wings of the point-spread-function as given in Table 5 of Tandon et al. (2017). Then the count rate was converted to flux in units of erg s\({}^{-1}\) cm\({}^{-2}\)\(\AA^{-1}\) using the flux conversions given in Table 4 of Tandon et al. (2017). Additional uncertainties include Poisson photon counting errors, uncalibrated spatial variations in detector sensitivity, and uncertainties in correction for the wings of the point spread function. These were included in quadrature in the calculation of flux errors. The fluxes and uncertainties were converted into luminosities using the
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline & \multicolumn{2}{c}{Coordinates} & \multicolumn{5}{c}{UV Luminosity} \\ & \multicolumn{2}{c}{(J2000)} & \multicolumn{5}{c}{(\(10^{35}\)erg/s)} \\ SNR & R.A. & DEC. & F148W & F169M & F172M & N219M & N279N \\ ID & (hh mm ss) & (dd mm ss) & & & & & \\ \hline
1 & 00 39 52.0 & 40 29 44.7 & 50.0\(\pm\)5.1 & 21.8\(\pm\)2.2 & & & \\
2 & 00 42 20.0 & 41 27 54.7 & 49.3\(\pm\)4.9 & 23.9\(\pm\)2.4 & 9.4\(\pm\)1.0 & 10.4\(\pm\)1.1 & 4.2\(\pm\)0.5 \\
3 & 00 42 53.6 & 41 25 51.5 & 46.9\(\pm\)4.7 & 24.7\(\pm\)2.5 & 11.4\(\pm\)1.2 & 14.9\(\pm\)1.5 & 3.7\(\pm\)0.4 \\
4 & 00 42 55.4 & 41 26 22.8 & 8.1\(\pm\)0.9 & 4.8\(\pm\)0.6 & 2.6\(\pm\)0.3 & 1.7\(\pm\)0.4 & 0.7\(\pm\)0.2 \\
5 & 00 43 29.2 & 41 19 02.0 & 11.0\(\pm\)1.7 & 3.9\(\pm\)0.7 & 1.5\(\pm\)0.3 & 1.3\(\pm\)0.5 & 0.3\(\pm\)0.2 \\
6 & 00 43 39.1 & 41 26 53.6 & 43.9\(\pm\)4.4 & 24.6\(\pm\)2.5 & 10.6\(\pm\)1.1 & 13.1\(\pm\)1.4 & 4.0\(\pm\)0.4 \\
7 & 00 43 54.2 & 41 52 54.7 & 17.1\(\pm\)1.8 & & 3.5\(\pm\)0.4 & 2.1\(\pm\)0.3 \\
8 & 00 43 56.4 & 41 47 10.1 & 40.4\(\pm\)4.2 & & 7.0\(\pm\)0.8 & 5.6\(\pm\)0.6 & 2.1\(\pm\)0.3 \\
9 & 00 44 01.0 & 41 21 05.3 & 6.2\(\pm\)0.8 & 3.6\(\pm\)0.5 & 0.4\(\pm\)0.2 & 3.6\(\pm\)0.7 & 0.4\(\pm\)0.2 \\
10 & 00 44 05.1 & 41 20 12.7 & 9.9\(\pm\)1.4 & & 3.7\(\pm\)0.4 & & 0.9\(\pm\)0.2 \\
11 & 00 44 26.8 & 41 48 57.4 & 14.2\(\pm\)2.4 & 3.4\(\pm\)0.5 & & & \\
12 & 00 44 36.4 & 41 24 57.8 & 86.3\(\pm\)8.7 & 17.9\(\pm\)1.8 & 21.7\(\pm\)2.2 & 9.0\(\pm\)0.9 \\
13 & 00 44 46.7 & 41 29 23.9 & 38.4\(\pm\)3.9 & & 8.8\(\pm\)0.9 & 12.6\(\pm\)1.3 & 4.3\(\pm\)0.4 \\
14 & 00 44 54.6 & 41 31 52.2 & 76.2\(\pm\)8.1 & 20.9\(\pm\)2.2 & 20.3\(\pm\)2.1 & 5.3\(\pm\)0.7 \\
15 & 00 45 10.7 & 41 32 53.4 & 22.9\(\pm\)2.5 & 4.0\(\pm\)0.5 & 3.8\(\pm\)0.6 & 3.6\(\pm\)0.4 \\
16 & 00 45 15.3 & 41 34 27.8 & 14.0\(\pm\)1.7 & 1.7\(\pm\)0.4 & 3.6\(\pm\)0.5 & 3.5\(\pm\)0.4 \\
17 & 00 45 24.7 & 41 41 00.3 & 22.0\(\pm\)2.8 & 5.0\(\pm\)0.6 & 6.4\(\pm\)0.7 & 1.7\(\pm\)0.2 \\
18 & 00 45 30.0 & 41 47 36.7 & 20.3\(\pm\)3.2 & 3.5\(\pm\)0.5 & & \\
19 & 00 46 20.2 & 41 53 04.6 & 28.4\(\pm\)2.9 & 6.9\(\pm\)0.7 & & \\
20 & 00 46 30.0 & 41 58 09.4 & 27.9\(\pm\)2.9 & 5.6\(\pm\)0.6 & & \\ \hline \end{tabular}
\end{table}
Table 2: M31 SNRs with UV emission
Figure 3: The 20 SNRs with UV emission. SNR ID, Candidate ID and J2000 coordinates are given below each panel. The SNR radius is shown by the green circle. The red squares indicate star positions from Massey et al. (2016) and Massey et al. (2021), light blue squares indicate positions of stars from GAIA DR3. The purple circles indicate approximately the areas analyzed to obtain source fluxes.
distance to M31 and the effective bandwidth of each filter. The filter effective wavelengths and effective bandwidths are given in Table 3 of Tandon et al. (2017).
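A compact sketch of this counts-to-luminosity chain is given below; all numerical values (conversion factor, PSF correction, bandwidth, distance) are placeholders to be replaced by the tabulated values in Tandon et al. (2017), and the smaller error terms discussed above are folded into a single quadrature sum.

```python
import numpy as np

D_M31_CM = 785e3 * 3.086e18  # assumed M31 distance of 785 kpc, in cm

def band_luminosity(src_counts, bkg_counts, t_exp,
                    flux_conv, psf_corr, eff_bw):
    """Box counts -> band luminosity (erg/s), following Sec. 3.3.

    bkg_counts: the 6 manually chosen background-box measurements;
    flux_conv:  counts/s -> erg/s/cm^2/A (placeholder for Tandon Table 4);
    psf_corr:   correction for the PSF wings (placeholder for Table 5);
    eff_bw:     effective filter bandwidth in A (placeholder for Table 3).
    """
    bkg = np.mean(bkg_counts)                    # mean background level
    bkg_err = np.std(bkg_counts)                 # scatter -> uncertainty
    net_rate = psf_corr * (src_counts - bkg) / t_exp
    rate_err = psf_corr * np.sqrt(src_counts + bkg_err**2) / t_exp
    flux, flux_err = net_rate * flux_conv, rate_err * flux_conv
    scale = 4.0 * np.pi * D_M31_CM**2 * eff_bw   # flux density -> luminosity
    return flux * scale, flux_err * scale

# Illustrative call with made-up counts, exposure, and conversion factors.
L, dL = band_luminosity(5200.0, [310., 295., 305., 320., 300., 315.],
                        1800.0, 3e-15, 1.05, 500.0)
```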
## 4 Results and Discussion
### UV emission from M31 SNRs and Catalog of UV emitting SNRs in M31
The images of the 20 M31 SNRs in the F148W band are shown in Figure 3. These sources exhibit diffuse emission which is not associated with stars, although the strength of the diffuse emission varies. Images of two of these sources (ID 3 and ID 6) in all five UVIT filter bands are shown in Figure 4. The catalog of 20 SNRs in M31 with luminosities in the detected UVIT bands is given in Table 2. Because of some overlap of the fields, some SNRs were imaged in two different fields in the same filter. The reported band luminosity and uncertainty for those are the average of these measurements.
The spectral shape of each source is shown in Figure 5, panels (a) and (b). The F148W filter luminosity is the largest for all SNRs analyzed, in part because of the wider effective bandwidth of the F148W filter. The other filter-band luminosities vary in prominence; however, the spectral shapes of the 20 M31 SNRs are similar.
### Comparison of UV-emitting M31 SNRs with known UV-emitting SNRs
The spectra of known UV-emitting SNRs show that their emission in the UVIT filter bands would be dominated by emission lines, with a small contribution from continuum radiation. A list of UV emission lines measured in known SNRs (Fesen & Hurford, 1996) is presented in Table 3, together with the UVIT filter bands that contain these lines. These SNRs include three Galactic SNRs, the Cygnus Loop (Raymond et al., 1980, 1981, 1988), Vela (Raymond et al., 1981), and Puppis A (Blair et al., 1995); three SNRs in the Large Magellanic Cloud, N132D (Blair et al., 2000), N49 (Vancura et al., 1992), and N103B (Blair et al., 2020); and one SNR in the Small Magellanic Cloud, E0102-7219 (Blair et al., 1989, 2000). Table 4 lists these SNRs, with SNR type, distance, and the aperture size for the line-flux measurements given in the references.
Figure 4: Images of two of the best selected SNRs in all 5 filters (F148W, F169M, F172M, N219M and N279N, left to right): top row- ID 3; bottom row- ID 6. The green circles indicate the SNR radii.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline \multicolumn{1}{c}{\(\lambda\) (A)} & Ion & F148W & F169M & F172M & N219M & N279N \\ \hline
1334.53 & C II & X & & & & \\
1355.6 & O I & X & & & & \\
1371.29 & O V & X & & & & \\
1393.76 & Si IV & X & & & & \\
1397.20, 1399.77 & O IV] & X & & & & \\
1398.13, 1404.77 & S IV] & X & & & \\
1402.77 & Si IV & X & & & \\
1483.32, 1486.50 & [N IV], N IV] & X & X & & \\
1533.43 & Si II & X & X & & \\
1548.20, 1550.77 & C IV & X & X & & \\
1574.8 & [Ne V] & X & X & & \\
1601, 1602 & [Ne IV] & X & X & & \\
1640 blend & He II & X & X & X & \\
1660.81, 1666.15 & O III] & X & X & X & \\
1670.81 & Al II & X & X & X & \\
1730 blend & N III] & X & X & X & \\
1746.82, 1748.61 & N III] & X & X & X & \\
2320.95, 2331.40 & [O III] & & & & \\
2323.50, 2324.69 & C II & & & & X \\
2328.51, 2334.40 & Si II] & & & & X \\
2795.52, 2802.70 & Mg II & & & & X \\ \hline \end{tabular}
\end{table}
Table 3: Ultraviolet emission lines known SNRs (from Fesen & Hurford 1996) and the UVIT filter bands which contain those lines.
Figure 5: Spectrophotometry of the 20 SNRs: (a) SNR ID’s 1 to 10; (b) SNR ID’s 11 to 20.
We estimated the UVIT band luminosities for the known SNRs by summing the published luminosities or fluxes (converted to luminosities) of the detected emission lines which fall within each UVIT filter, as listed in Table 3. The results of this are given in columns 5 through 9 of Table 4 and the band luminosities vs. the effective wavelengths of the UVIT filter bands are shown in panel (a) of Figure 6. Although the band luminosities of the different sources are quite variable, the F148W luminosity is the brightest. That band includes more emission lines than the other bands (see Table 3).
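The band-luminosity estimate described here amounts to a simple bookkeeping exercise over Table 3; a sketch follows, in which the filter edges are illustrative only (the true passbands come from the effective-area curves in Tandon et al. 2017).

```python
# Illustrative filter intervals in Angstroms; not the true UVIT passbands.
BANDS = {"F148W": (1250, 1780), "F169M": (1480, 1780),
         "F172M": (1630, 1780), "N219M": (1970, 2400),
         "N279N": (2720, 2890)}

def uvit_band_luminosities(lines):
    """lines: iterable of (wavelength_A, line_luminosity_erg_s) pairs
    for one SNR; returns the summed line luminosity per UVIT band."""
    totals = {band: 0.0 for band in BANDS}
    for wav, lum in lines:
        for band, (lo, hi) in BANDS.items():
            if lo <= wav <= hi:
                totals[band] += lum
    return totals

# Two hypothetical lines: C IV 1548 falls in both F148W and F169M here.
print(uvit_band_luminosities([(1548.2, 2.0e35), (2796.0, 4.0e34)]))
```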
Supernovae and SNRs can be categorized into two separate types: thermonuclear runaway (Type Ia) and core collapse (CC). The small sample of known SNRs with UV spectroscopy includes only one Type Ia SNR (N103B). This SNR has band luminosities (panel (a) of Figure 6) which do not appear different from those of the CC-type SNRs. Scaled band luminosities are defined as the band luminosities for each SNR divided by the F148W band luminosity for that SNR. The scaled band luminosities of the seven known UV-emitting SNRs are shown in panel (b) of Figure 6. They are all quite similar, with factor \(\sim\)2 variations between the different SNRs.
Although UV spectroscopic data for the M31 SNRs do not exist, we can compare the UV emission of the M31 sources to known UV-emitting SNRs by comparing the UVIT band luminosities. For several of the known SNRs, the aperture only covered part of the area of the SNR, so that we do not have a good measurement of the line luminosities over the whole SNR. The extrapolation from the aperture to the whole SNR is highly uncertain because of line-brightness variations over the face of the SNR. However, if the line ratios are nearly constant over the face of the SNR, the scaled band luminosities should be representative of the spectral shapes of the whole SNR. Thus we compare the spectral shapes of the known SNRs to one another and to the M31 SNRs using the scaled band luminosities.
For the band luminosities, the known SNRs (panel (a) of Figure 6) show larger variations than the 20 new UV-emitting SNRs in M31 (panels (a) and (b) of Figure 5). This is, in part, caused by the different fractions of the SNR area measured in the aperture of the spectroscopic observations (Table 4). However, the scaled band luminosities (panel (b) of Figure 6) show that the spectral shapes of the known SNRs are quite similar. The F148W value is the largest, followed by the F169M value, then with similar values in the remaining 3 bands (F172M, N219M and N279N). The scaled band luminosities of the 20 M31 SNRs (panels (c) and (d) of Figure 6) have remarkably similar variations to the known SNRs. The strong similarity of the scaled band luminosities of the newly detected 20 M31 SNRs to known UV-emitting SNRs (Figure 6) is evidence that the detected UV emission is from the SNRs, rather than from foreground or background objects.
This current list of 20 UV-emitting SNRs in M31 is likely incomplete. The interstellar extinction of UV light from M31 by the Milky Way's ISM is small, but the ISM of M31 can have significant extinction. The extinction has been measured for stellar clusters in M31 using UVIT photometry (Leahy et al., 2022): the mean extinction is \(E(B-V)\)=0.24 and ranges from 0 to 0.6. Thus SNRs within or on the far side of M31's disc may be undetected because of extinction. More than half (Section 3.2) of the SNRs from the original list of 177 were excluded
\begin{table}
\begin{tabular}{l l l c c c c c c c} \hline \hline & & & & \multicolumn{6}{c}{Estimated UVIT Filter Band Luminosity (erg/s)} & \\ SNR & Type & Location & Distance & F148W & F169M & F172M & N219M & N279N & Aperture Size & Source \\ & & & & (pc) & & & & & \\ \hline & & & & 5.82E+36 & 4.48E+36 & 2.29E+36 & 1.53E+36 & - & 10\({}^{\prime}\)x20” & a \\ Cygnus Loop & CC & Milky Way & 725 & 3.62E+36 & 2.77E+36 & 8.46E+35 & 1.17E+36 & 2.10E+35 & 4 x 10\({}^{\prime}\)x20” & b \\ & & & & 5.18E+36 & 4.12E+36 & 2.28E+36 & 1.39E+36 & - & 3 x 3.8\({}^{\prime}\)x12.4” & c \\ Vela & CC & Milky Way & 250 & 3.56E+35 & 2.53E+35 & 8.24E+34 & 6.07E+34 & - & 4 x 10\({}^{\prime}\)x20” & b \\ PuppisA & CC & Milky Way & 2,146 & 1.86E+33 & 1.43E+33 & 2.88E+32 & - & - & 10\({}^{\prime}\)x56” & d \\ N49 & CC & LMC & 50,000 & 5.47E+37 & 4.94E+37 & 8.40E+36 & 2.60E+37 & 1.30E+37 & 5 x 10\({}^{\prime}\)x20” & e \\ N103B & Ia & LMC & 45,990 & 4.62E+35 & 3.19E+35 & 5.06E+34 & - & - & 2.5” (circular) & f \\ N132D & CC & LMC & 52,000 & 4.34E+34 & 2.51E+34 & 8.07E+33 & 6.21E+33 & 8.49E+33 & 3 x 1” (circular) & g \\ E0102-7219 & CC & SMC & 64,386 & 6.28E+33 & 3.45E+33 & 1.09E+33 & 5.28E+32 & 9.24E+32 & 1”x41” & g \\ & & & & 4.05E+35 & 2.12E+35 & 3.18E+34 & 1.50E+34 & 1.92E+34 & 10\({}^{\prime}\)x20” & h \\ \hline \end{tabular} Note. –a) Raymond et al. (1980). b) Raymond et al. (1981). c) Raymond et al. (1988). d) Blair et al. (1995). e) Vancura et al. (1992). f) Blair et al. (2020). g) Blair et al. (2000). h) Blair et al. (1989).
\end{table}
Table 4: Known UV-emitting SNRs and estimated UVIT filter band luminosities calculated from measured line fluxes
Figure 6: Spectral shape of 7 known UV-Emitting SNRs: (a) estimated band luminosities (for Cygnus Loop the average of the 3 values and for E0102-7219 the second listed values from Table 4 are plotted); (b) scaled band luminosities (luminosity divided by the F148W luminosity of each SNR). Scaled band luminosities for the SNRs in M31: (c) SNR ID’s 1 to 10; (d) SNR ID’s 11 to 20.
because of confusion with stellar emission: there was not enough diffuse emission separated from overlapping stars within the SNR radius for detection.
### Physical conditions of the 6 M31 SNRs with X-ray spectra
To determine the density of the environment and the evolutionary status of SNRs, X-ray observations of the thermally emitting shocked gas are required. There are 6 SNRs in M31 which have had their X-ray spectra analyzed with hot-plasma models of shocked gas, from Sasaki et al. (2012) and Sasaki et al. (2018). We use the spectral parameters for SPH11 SNRs 1050 and 1066 from Sasaki et al. (2012) and for SPH11 SNRs 1234, 1275, 1535 and 1599 from Sasaki et al. (2018)2. The emission measure (\(EM\)) for each SNR was calculated from "norm", or from the flux if no "norm" was given, using XSPEC and the best-fit spectral model.
Footnote 2: SPH11 1234 was analyzed in both: we used the newer analysis from the later reference.
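For reference, a sketch of this conversion under the standard XSPEC convention for thermal-plasma normalizations (our assumption of the convention used) is

\[\mathrm{norm}=\frac{10^{-14}}{4\pi d^{2}}\int n_{e}n_{H}\,dV\quad\Longrightarrow\quad EM=\int n_{e}n_{H}\,dV=4\pi d^{2}\times 10^{14}\,\mathrm{norm},\]

with \(d\) the distance to M31 in cm.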
The modelling software used is SNRpy (Leahy et al., 2019; Leahy & Williams, 2017) which is based on the unified models of SNR evolution of Truelove & McKee (1999), with extensions added, including non-equilibrium ionization. For each SNR, we fit the measured shocked-gas temperature \(kT\) and emission measure \(EM\) for 3 different cases: i) explosion in a uniform environment (s=0) and emission from the forward-shocked gas; ii) explosion in a stellar wind (s=2) and emission from the forward-shocked gas; and iii) explosion in a stellar wind (s=2) and emission from the reverse-shocked gas. Explosions in a uniform medium lead to much brighter emission from forward-shocked gas, but explosions in a wind environment can lead to either brighter emission from the forward-shocked gas or from the reverse-shocked gas. The results of the models are given in Table 5. Because the measured X-ray spectrum is for the brighter component (i.e. forward or reverse-shocked), we mark the models which are consistent with the observations (i.e. measured \(EM\) larger than \(EM_{2}\)) in the Table. Emission from the forward-shock gas in a uniform medium (s=0) or emission from the reverse-shocked gas in a stellar wind (s=2) are consistent with observations. Other information can help to select a preferred model for each SNR. E.g., SNRs with energy above \(10^{52}\) erg should be rare (Leahy &
\begin{table}
\begin{tabular}{l l l l l l l l l l l l} \hline \hline \multicolumn{1}{c}{ SNR\({}^{(a)}\)} & \multicolumn{1}{c}{Type\({}^{(b)}\)} & \multicolumn{1}{c}{\(M_{ej}^{(c)}\)} & \multicolumn{1}{c}{(s,n)\({}^{(d)}\)} & \multicolumn{1}{c}{Shock\({}^{(e)}\)} & \multicolumn{1}{c}{Age} & \multicolumn{1}{c}{E0} & density & \multicolumn{1}{c}{\(\dot{M}/(4\pi V_{w})\)} & \multicolumn{1}{c}{\(kT_{2}^{(e)}\)} & \multicolumn{1}{c}{\(EM_{2}^{(e)}\)} & \multicolumn{1}{c}{Consistent\({}^{\gamma(f)}\)} \\ \multicolumn{1}{c}{SPH11 ID} & \multicolumn{3}{c}{\(M_{\odot}\)} & & & \multicolumn{1}{c}{(yr)} & \multicolumn{1}{c}{(\(10^{51}\)erg)} & \multicolumn{1}{c}{(cm\({}^{-3}\))} & \multicolumn{1}{c}{(gm/cm)} & \multicolumn{1}{c}{(keV)} & \multicolumn{1}{c}{(\(10^{58}\)cm\({}^{-3}\))} & \\ \hline
1066(UV) & Ia & 1.2 & (0,7) & fwd & 1970 & 12.1 & 0.33 & & & 20.5 & 5.76\(\times 10^{-3}\) & Y \\
1066(UV) & Ia & 1.2 & (2,12) & fwd & 324 & 33.2 & & 9.39\(\times 10^{13}\) & 0.70 & 48.4 & N \\
1066(UV) & Ia & 1.2 & (2,12) & rev & 133 & 16.0 & & 3.66\(\times 10^{13}\) & 20.6 & 1.12 & Y \\ \hline
1275(UV) & Ia & 1.2 & (0,7) & fwd & 14040 & 0.311 & 0.594 & & 1.243 & 8.23\(\times 10^{-3}\) & Y \\
1275(UV) & Ia & 1.2 & (2,12) & fwd & 2950 & 0.387 & & 1.50\(\times 10^{14}\) & 0.035 & 131 & N \\
1275(UV) & Ia & 1.2 & (2,12) & rev & 636 & 6.76 & & 5.85\(\times 10^{13}\) & 1.03 & 3.05 & Y \\ \hline
1055 & (unk.) & 5 & (0,7) & fwd & 7220 & 0.234 & 0.080 & & 0.131 & 0.421 & Y \\
1055 & (unk.) & 5 & (2,12) & fwd & 1420 & 0.769 & & 8.65\(\times 10^{13}\) & 0.035 & 96.6 & N \\
1055 & (unk.) & 5 & (2,12) & rev & 292 & 14.8 & & 3.37\(\times 10^{13}\) & 1.03 & 2.24 & Y \\ \hline
1234 & CC & 10 & (0,7) & fwd & 23300 & 1.02 & 0.488 & & 0.683 & 9.33\(\times 10^{-2}\) & Y \\
1234 & CC & 10 & (2,12) & fwd & 5400 & 2.03 & & 3.18\(\times 10^{14}\) & 0.033 & 368 & N \\
1234 & CC & 10 & (2,12) & rev & 1030 & 38.8 & & 6.35\(\times 10^{13}\) & 0.974 & 2.24 & Y \\ \hline
1535 & CC & 10 & (0,7) & fwd & 18500 & 0.344 & 0.320 & & 0.254 & 8.19\(\times 10^{-2}\) & Y \\
1535 & CC & 10 & (2,12) & fwd & 3360 & 2.49 & & 1.30\(\times 10^{14}\) & 0.035 & 78.8 & N \\
1535 & CC & 10 & (2,12) & rev & 782 & 37.3 & & 5.07\(\times 10^{13}\) & 1.03 & 1.83 & Y \\ \hline
1599 & CC & 10 & (0,7) & fwd & 18200 & 1.143 & 0.568 & & 0.774 & 1.120 & Y \\
1599 & CC & 10 & (2,12) & fwd & 3640 & 3.45 & & 2.98\(\times 10^{14}\) & 0.044 & 361 & N \\
1599 & CC & 10 & (2,12) & rev & 807 & 56.9 & & 1.16\(\times 10^{14}\) & 1.299 & 8.37 & Y \\ \hline \end{tabular} Note. – (a) The UV-emitting SNRs are marked with (UV). (b) Type Ia (Ia), core collapse (CC) or unknown (unk.). (c) Ejecta mass taken as 1.2\(\rm{M_{\odot}}\) for Ia, 10\(\rm{M_{\odot}}\) for CC or 5\(\rm{M_{\odot}}\) for unk. (d) s=power law index for circumstellar medium density: constant s=0, or wind s=2; n=power law index for ejecta density. (e) Measured \(kT\) and \(EM\) are assumed to be from forward shock (fwd) or from reverse shock (rev); if ”fwd” then \(kT_{2}\) and \(EM_{2}\) are the predicted values for the reverse shock; if ”rev” then \(kT_{2}\) and \(EM_{2}\) are the predicted values for the forward shock. (f) Is the predicted \(EM_{2}\) small enough to be consistent with observations? (Y=yes), (N=no).
\end{table}
Table 5: Models for 6 M31 SNRs with X-ray spectra
Filipovic 2022c; Leahy et al. 2020b; Leahy 2017). This disfavors the (s,n)=(2,12) reverse-shock models for SPH11 1234, 1535 and 1599. The energies, ages and densities (or stellar wind parameters) are reasonable for the other models.
The 2 UV-emitting SNRs with X-ray spectra, SPH11 1066 and 1275, are marked with (UV) in Table 5. The clear difference between the UV-emitting SNRs and the UV non-detected SNRs is that the 2 UV-emitting SNRs are Type Ia. Because of the large ages of Type Ia progenitors compared to CC progenitors, Type Ia SNRs are expected to be at significant heights above the disk plane, and thus to have low extinction. The models for the 6 SNRs have a range of ages, explosion energies and densities, and the sample is too small to see systematic differences in the ages, densities or explosion energies between the UV-detected and non-detected SNRs. One would expect the column density (given in Sasaki et al. 2012 and Sasaki et al. 2018) to be smaller for the UV-detected SNRs. The column densities are small, with large uncertainties, except for SPH11 1599, which is not detected in UV. Better determination of column densities with better X-ray spectra is needed to confirm the relation between UV detection and low column density.
### Comparison of numbers of M31 SNRs in different wavebands
The 177 SNRs in M31 were detected in different wavebands (optical, X-ray and radio) prior to the current work. Panel (a) of Figure 7 shows the numbers of SNRs detected in single and multiple wavebands using a Venn diagram. Pannuti et al. (2000) show similar Venn diagrams (optically, radio, and X-ray detected SNRs) for the nearby galaxies M33 and NGC300. M33 has a total of 109 SNRs and SNR candidates and NGC300 has a total of 44, compared to the total of 177 SNRs and SNR candidates for M31. For all 3 galaxies (M31, M33 and NGC300), optically-detected SNRs dominate, with fractions of detected SNRs of 155/177, 79/109 and 28/44, respectively. The X-ray and radio detections are significantly smaller fractions: the X-ray detected fractions are 45/177, 21/109, and 6/44, respectively; and the radio detected fractions are 25/177, 53/109 and 17/44, respectively. The optically detected fractions are similar (\(\sim 0.6-0.9\)) for all 3 galaxies; the radio detected fractions are higher for M33 and NGC300 (\(\sim 0.4-0.5\)) than for M31 (\(\sim 0.15\)), and the X-ray detected fraction is higher for M31 (\(\sim 0.25\)) than for M33 and NGC300 (\(\sim 0.15-0.2\)). This is probably the result of the intensive X-ray observations of M31 (Sasaki et al. 2012, 2018) compared to the other two galaxies. The three sets of optically-detected, radio-detected and X-ray detected SNRs show little overlap for all three galaxies. This is consistent with the opposing selection effects for these 3 wavebands, as discussed in Pannuti
Figure 7: Venn diagrams for SNRs in M31 detected in different wavebands (optical, X-ray and radio: (a): all SNRs; (b): for the 20 SNRs detected in UV (this work).
et al. (2000): SNRs identified through optical represent those located in regions with relatively low confusion from H\(\alpha\) emission, well away from star-forming regions; radio-selected SNRs are biased toward star-forming regions; and X-ray SNRs are selected for soft X-ray spectra and association with H II regions, so are biased against SNRs with hard spectra and no optical counterparts.
Bozzetto et al. (2017) consider the statistics of optical, radio and X-ray detections for the Large Magellanic Cloud (LMC). The Venn diagram of SNRs for the LMC shows that most SNRs (47 of 59) are detected in all three bands. This is probably a result of sensitivity to luminosity because of the closer distance of the LMC (\(\sim\)20 times closer than the other nearby galaxies discussed above). The Venn diagrams given in Bozzetto et al. (2017) include NGC300 and M33, with similar numbers to those given in Pannuti et al. (2000), and M31, with results similar to Figure 7 here. The other galaxies with Venn diagrams in Bozzetto et al. (2017) are the Small Magellanic Cloud (SMC), NGC 7793, NGC 6946, NGC 55, and one diagram for NGCs 2403, 3077, 4214, 4395, 4449 and 5204 combined (hereafter referred to as "NGCcombined"). For the SMC, which is also nearby, the diagram is similar to that for the LMC, with most SNRs detected in all 3 bands. The diagrams for NGCcombined, NGC 7793 and NGC 6946 are dominated by SNRs detected in optical only, like M31, M33 and NGC 300. NGC 55 has too few SNRs (6 total) to draw conclusions on numbers. The differences are likely the result of the differing sensitivities of the observations in the different wavebands for each of the galaxies, as discussed by Bozzetto et al. (2017).
The 20 UV-emitting SNRs in M31 are listed in Table 1. The other wavebands in which these 20 SNRs are detected are as follows. Sources 1, 2, 4, 5, 8, 10, 11, 12, 14, 16, 17, 18, 19, and 20 were listed only in Lee & Lee (2014) (optical); sources 7 and 9 were listed in Lee & Lee (2014) and in Braun & Walterbos (1993) (optical & radio); source 15 was listed only in Sasaki et al. (2012) (X-ray); and sources 3, 6, and 13 were listed in all 3 (optical, X-ray, & radio). Panel (b) of Figure 7 shows the numbers of UV-emitting SNRs detected in optical, X-ray and radio. Nearly all of the UV-emitting SNRs (19 of 20) are detected in optical. This is not surprising because the emission mechanisms for UV and optical are most similar: forbidden and recombination lines from shock-ionized gas. In contrast, the radio emission mechanism is synchrotron radiation from shock-accelerated electrons, and the X-ray emission mechanism is primarily thermal bremsstrahlung with some contribution from lines.
The total number of UV-emitting SNRs in M31 is similar to the number of radio-emitting SNRs (20 vs. 25) but much smaller than the numbers of optically or X-ray emitting SNRs. The fractions of UV-emitting SNRs in M31 are: 19/155 (optical), 5/25 (radio) and 4/45 (X-ray). These have not been measured for other galaxies yet, but are similar (\(\sim 0.1-0.2\)) for the 3 different categories. This indicates that the UV selection criterion is different from the optical, radio and X-ray criteria listed above. The most important expected UV detection criterion is source extinction. This is important for the 7 previously known SNRs, which all have low extinction (see references in Table 4). The UV extinction to SNRs in M31 is dominated by the line-of-sight distance through M31's disk3. For M31 SNRs detected in optical, radio and X-ray, the disk extinction is small at those wavelengths, so they should be detected independent of distance into the disk. The SNRs detected in UV will be those on the near side of the disk. The ratio of SNRs detected in UV to those detected in other wavebands should therefore be determined by extinction, i.e. by disk geometry, and thus be approximately constant, equal to the fraction of the disk which is on the near side of M31 with low UV extinction. The visible-to-UV extinction curve (Fitzpatrick & Massa, 2007) has E(\(\lambda\)-V)/E(B-V) \(\sim\)3.5 to 7 (average \(\sim\)5) for \(\lambda=\)120 nm to 280 nm (with the peak at 220 nm). Typical measured E(B-V) values in M31 are in the range 0.0 to 0.6 (Leahy et al., 2022), which yields E(\(\lambda\)-V) from 0 to 3 mag, and extinction factors from 1 to 0.06. The fraction of UV to optical SNRs in M31 (19/155\(=0.12\), Figure 7) is consistent with that expected from extinction, but the differing sensitivities of the UV and optical observations could also affect the fraction.
Footnote 3: The Milky Way contribution to extinction is small in the direction of M31
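As a minimal worked version of this estimate (neglecting the \(A_V\) contribution, as the rough numbers above do), the attenuation factor at the extreme of the measured range is

\[A_{\lambda}\approx\frac{E(\lambda-V)}{E(B-V)}\,E(B-V)\approx 5\times 0.6=3\ \mathrm{mag},\qquad\frac{F_{\mathrm{obs}}}{F_{\mathrm{emit}}}=10^{-A_{\lambda}/2.5}\approx 0.06 .\]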
## 5 Conclusion
Using the survey images of M31 carried out by AstroSat's UVIT, we searched for diffuse UV emission from M31 SNRs. SNRs for analysis were obtained from previous optical, X-ray and radio surveys for SNRs in M31. We used stellar catalogues and the UV images to remove SNRs contaminated with stellar emission, enabling us to detect 20 SNRs with diffuse UV emission in M31. Band fluxes for the five observed UVIT filters, F148W, F169M, F172M, N219M and N279N, were measured for these 20 SNRs to obtain band luminosities. The result is the first catalog of UV-emitting SNRs in M31.
The band luminosities of the UV-emitting SNRs were compared to the band luminosities computed from the spectra of seven previously known UV-emitting SNRs in the Milky Way, the LMC, and the SMC. We find similar spectral shapes between the known SNRs and the M31 SNRs. The spectral shapes and the diffuse nature of the emission together form good evidence that the UV emission from the 20 M31 SNRs is dominated by line emission, like known SNRs, and that the UV emission is associated with the SNRs.
For the small sample of 6 SNRs in M31 with X-ray spectral models, we apply SNR models to obtain their physical characteristics. The 2 UV-emitting X-ray SNRs are Type Ia; the other 4 X-ray SNRs are of CC or unknown type. Type Ia indicates likely positions above the disk plane in M31. The two UV-emitting X-ray SNRs have low measured extinction in X-rays, consistent with the detection of UV emission being related to low extinction.
We compare the numbers of SNRs detected in M31 for different wavebands (optical, radio and X-ray) to those detected in other nearby galaxies given by Pannuti et al. (2000) and Bozzetto et al. (2017). This confirms the somewhat opposing selection effects for detecting SNRs in the different wavebands, as discussed by Pannuti et al. (2000). 19 of the 20 UV-emitting SNRs are detected in optical, which is expected because the emission mechanisms for both UV and optical are forbidden and recombination lines from shock-ionized gas.
It is desirable to carry out spectroscopic observations to confirm the line nature of the UV emission from these SNRs, although spectroscopy will be difficult for the typically crowded regions in M31 where the SNRs are located.
This work was supported by a grant from the Canadian Space Agency. The authors thank the reviewer for making a number of suggestions to improve this manuscript.
|
2306.14140 | When UAVs Meet ISAC: Real-Time Trajectory Design for Secure
Communications | The real-time unmanned aerial vehicle (UAV) trajectory design of secure
integrated sensing and communication (ISAC) is optimized. In particular, the
UAV serves both as a downlink transmitter and a radar receiver. The legitimate
user (Bob) roams on ground through a series of unknown locations, while the
eavesdropper moves following a fixed known trajectory. To maximize the
real-time secrecy rate, we propose an extended Kalman filtering (EKF)-based
method for tracking and predicting Bob's location at the UAV based on the delay
measurements extracted from the sensing echoes. We then formulate a non-convex
real-time trajectory design problem and develop an efficient iterative
algorithm for finding a near optimal solution. Our numerical results
demonstrate that the proposed algorithm is capable of accurately tracking Bob
and strikes a compelling legitimate vs. leakage rate trade-off. | Jun Wu, Weijie Yuan, Lajos Hanzo | 2023-06-25T06:26:19Z | http://arxiv.org/abs/2306.14140v1 | # When UAVs Meet ISAC: Real-Time Trajectory Design for Secure Communications
###### Abstract
The real-time unmanned aerial vehicle (UAV) trajectory design of secure integrated sensing and communication (ISAC) is optimized. In particular, the UAV serves both as a downlink transmitter and a radar receiver. The legitimate user (Bob) roams on ground through a series of unknown locations, while the eavesdropper moves following a fixed known trajectory. To maximize the real-time secrecy rate, we propose an extended Kalman filtering (EKF)-based method for tracking and predicting Bob's location at the UAV based on the delay measurements extracted from the sensing echoes. We then formulate a non-convex real-time trajectory design problem and develop an efficient iterative algorithm for finding a near optimal solution. Our numerical results demonstrate that the proposed algorithm is capable of accurately tracking Bob and strikes a compelling legitimate vs. leakage rate trade-off.
ISAC, UAV, EKF, real-time trajectory design.
## I Introduction
Given the high flexibility and the predominantly line-of-sight (LoS) nature of air-to-ground propagation links, unmanned aerial vehicles (UAVs) are capable of providing reliable communication services in rural, disaster, and hot-spot areas in next generation networks [1]. In contrast to traditional terrestrial communication systems, UAVs are capable of dynamically adjusting their coverage areas to serve as mobile relays or aerial base stations (BSs) [2]. To meet the demand for high rate transmission in next generation systems, the authors of [3] investigated the deployment of UAV relays in the presence of malfunctioning base stations and maximized the capacity of the relay network. However, stationary relays do not have the high flexibility of UAVs. Furthermore, UAV communications are highly susceptible to potential eavesdropping due to their LoS-dominated communication channels [4]. Hence, there is an emerging trend to design secure UAV-based communication schemes. For instance, the authors of [5] investigated the maximum secrecy rate of a single user relying on joint trajectory design and power allocation. Furthermore, the energy efficiency maximization of secure communication systems was considered in [6], while supporting multiple users. However, all these contributions assumed that both the ground users as well as the eavesdroppers are static on the ground, while in practice the ground users are mobile, hence imposing challenges on UAV-aided secure communication systems.
To enhance the communication performance in practical applications in the face of user mobility, the UAVs are required to keep track of the real-time locations of ground users [7]. Traditionally, UAV-based localization and tracking schemes tend to rely on the Global Navigation Satellite System (GNSS) and/or video sensors. For example, equipped with the measurements of UAV location and camera angles, the authors of [8] proposed a vision-based localization method by exploiting the pixel-based location estimate of the target in an image. However, such vision sensor-based methods are likely to suffer localization performance degradation due to environmental variations. Moreover, attaching vision sensors to UAVs will increase their sizes and result in additional power consumption, which is undesirable for UAVs having limited onboard battery capacity. To simultaneously support UAV-based communication and positioning services at a low overhead, there is a novel technology integrating both sensing and communication functionalities using unified hardware and signal waveforms [9, 10]. This so-called integrated sensing and communication (ISAC) technology [11] allows traditionally independent sensing and communication systems to seamlessly share both their wireless infrastructure and spectral resources for significantly enhancing their spectrum-, energy-, and hardware-efficiency. Recently, some research efforts have been devoted to ISAC systems relying on UAV platforms. For example, Meng _et al._[12] proposed a novel UAV-enabled integrated periodic sensing and communication mechanism to strike a trade-off between sensing and communication. As a step forward, the authors of [13] maximized the achievable communication rate, while meeting the sensing frequency and beam pattern gain requirements.
Fig. 1: A UAV-enabled ISAC system.
In this context, we solve the problem of simultaneous ground user tracking and secure communications for a UAV-based system. In particular, a legitimate user (Bob) moves through unknown locations and an eavesdropper (Eve) follows a fixed trajectory1. For accurately tracking Bob, we first propose an extended Kalman filtering (EKF) framework, which relies on the delay measurements extracted from the ISAC echoes. Furthermore, based on the predicted location, we formulate a weighted non-convex trajectory design problem for supporting flexible secure communications performance for meeting the diverse requirements of various UAV-based applications. We then propose an efficient and rapidly converging iterative algorithm for solving the resultant non-convex optimization problem via the popular successive convex approximation (SCA) technique. Our simulation results show that the proposed algorithm efficiently tracks Bob and maximizes the real-time secrecy rate in the presence of Eve.
Footnote 1: These assumptions are typically satisfied in applications, where Eve moves along a preset path for eavesdropping. The more generalized scenario, where Eve has an unknown trajectory which has to be predicted will be considered in our future work.
The rest of this paper is organized as follows. Section II introduces the UAV-based system model. Section III formulates the real-time trajectory design problem, while the proposed solutions are derived in Section IV. Our simulation results are provided in Section V, while Section VI concludes this paper.
_Notations:_ The \(M\)-dimensional vector space is denoted as \(\mathbb{R}^{M\times 1}\). We use \(\|\cdot\|\) and \([\cdot]^{\mathrm{T}}\) to denote the vector norm and the transposition operation, respectively. We use \(\mathcal{N}(\mu,v)\) to denote a Gaussian distribution of mean \(\mu\) and variance \(v\). For a time-dependent function \(x(t)\), the first-order derivatives with respect to time \(t\) are denoted as \(\dot{x}(t)\). We use \(\mathbf{I}\) and \(\hat{x}\) to denote the identity matrix and the estimated value of \(x\), respectively. We use diag(\(\cdot\)) to represent a diagonal matrix.
## II System Model
As shown in Fig. 1, we consider a UAV-aided ISAC system where the UAV acts as a downlink communication transmitter and a radar receiver to serve Bob in the presence of Eve. Assume that the UAV hovers in the sky with a total flight period of \(T\), which can be divided into \(N\) time slots (TSs), contained in the set \(\mathcal{F}=\{1,2,...,N\}\). The duration between two consecutive TSs is denoted as \(\Delta t>0\). We consider a three-dimensional Cartesian coordinate system. At the \(n\)-th TS, the time-varying horizontal coordinate of the UAV is denoted as \(\mathbf{q}[n]=[x_{q}[n],y_{q}[n]]^{\mathrm{T}}\in\mathbb{R}^{2\times 1}\) with a constant altitude \(H\). Since Bob and Eve are on the ground at zero altitudes, the coordinates of Bob and Eve at the \(n\)-th TS can be expressed as \(\mathbf{b}[n]=[x_{b}[n],y_{b}[n]]^{\mathrm{T}}\in\mathbb{R}^{2\times 1}\), \(\mathbf{w}[n]=[x_{w}[n],y_{w}[n]]^{\mathrm{T}}\in\mathbb{R}^{2\times 1}\), respectively.
### _Sensing Model_
The UAV aims for tracking Bob for providing improved communication performance. In each TS, the UAV transmits its information-bearing signal \(c(t)\) to Bob. Owing to the UAV's sensing capability, the echoes reflected by Bob and received at the UAV can be written as
\[r(t)=A\beta c\left(t-\tau\right)e^{j2\pi f_{d}t}+z_{r}(t), \tag{1}\]
where \(A\), \(\beta\), \(\tau\), and \(f_{d}\) represent the echo amplitude, the reflection coefficient, the round-trip signaling delay, and the Doppler frequency, respectively, while \(z_{r}(t)\) represents the additive white Gaussian noise (AWGN) process. Let us express the distance from the UAV to Bob and from the UAV to Eve at the \(n\)-th TS as
\[d_{b}[n]=\sqrt{H^{2}+\|\mathbf{q}[n]-\mathbf{b}[n]\|^{2}}\quad \text{and} \tag{2}\] \[d_{w}[n]=\sqrt{H^{2}+\|\mathbf{q}[n]-\mathbf{w}[n]\|^{2}}, \tag{3}\]
respectively2. In practice, it is essential to acknowledge the potential existence of unwanted echoes that are reflected by other terrestrial objects. Effective UAV-based filtering and deconvolutional methods have been extensively explored and employed for suppressing clutter interference [15, 16]. Given the signal propagation delay \(\tau[n]\) at the \(n\)-th TS, the range between the UAV and Bob is given by \(\frac{c\tau[n]}{2}\), where \(c\) represents the signal propagation speed. With the assumption of Gaussian measurement noise, the measured range \(\hat{d}_{b}[n]\) is written as
Footnote 2: Note that the identification of Bob is essential for the success of the proposed real-time sensing-aided secure communications design. Although how to identify Bob is not within the main scope of this manuscript, our proposed framework can efficiently distinguish Bob by solving the corresponding data association problem [14] using a typical radar cross section (RCS) size and the specific trajectories of the objects. We can also leverage the periodic location feedback mechanism discussed in Sec. V to ascertain the identity of Bob.
\[\hat{d}_{b}[n]=d_{b}[n]+z_{d}, \tag{4}\]
where \(z_{d}\) is the corresponding noise term obeying the Gaussian distribution of \(\mathcal{N}(0,\sigma_{d}^{2})\).
### _Communication Model_
As for communications, since we consider a free-space propagation scenario, the channel gains corresponding to the UAV-Bob link and the UAV-Eve link follow the classic inverse second power path-loss model [6], which can be expressed as
\[h_{b}[n]=\frac{\rho_{0}}{H^{2}+\|\mathbf{q}[n]-\mathbf{b}[n]\|^ {2}}, \tag{5}\] \[\text{and}\quad h_{w}[n]=\frac{\rho_{0}}{H^{2}+\|\mathbf{q}[n]- \mathbf{w}[n]\|^{2}}, \tag{6}\]
where \(\rho_{0}\) denotes the reference channel's power gain at a unit distance. Furthermore, we assume the transmit power to be approximately a constant value of \(p_{0}\). This is mainly because the transmit power is far lower than the power dissipated by the propulsion of flying and hovering [15], and its variation is negligible. Consequently, the achievable communication rate of Bob at the \(n\)-th TS is given by
\[R_{b}[n]=B\log_{2}(1+\frac{p_{0}h_{b}[n]}{\sigma^{2}}), \tag{7}\]
where \(B\) and \(\sigma^{2}\) are the channel's bandwidth and the noise power at Bob, respectively. Similarly, the leakage data rate at Eve at the \(n\)-th TS is given by
\[R_{w}[n]=B\log_{2}(1+\frac{p_{0}h_{w}[n]}{\sigma_{e}^{2}}), \tag{8}\]
where \(\sigma_{e}^{2}\) represents the noise power at Eve. According to (7) and (8), the achievable secrecy rate at the \(n\)-th TS becomes
\[R_{s}[n]=R_{b}[n]-R_{w}[n]. \tag{9}\]
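As a numerical illustration of (5)-(9), a minimal sketch follows; all parameter values are placeholders rather than the simulation settings of this paper.

```python
import numpy as np

def rates(q, b, w, H=100.0, B=1e6, p0=0.1, rho0=1e-4,
          sigma2=1e-12, sigma2_e=1e-12):
    """Per-slot rates from (5)-(9); all parameter values are placeholders."""
    h_b = rho0 / (H**2 + np.sum((q - b) ** 2))      # UAV-Bob gain, (5)
    h_w = rho0 / (H**2 + np.sum((q - w) ** 2))      # UAV-Eve gain, (6)
    R_b = B * np.log2(1.0 + p0 * h_b / sigma2)      # Bob's rate, (7)
    R_w = B * np.log2(1.0 + p0 * h_w / sigma2_e)    # leakage rate, (8)
    return R_b - R_w, R_b, R_w                      # secrecy rate, (9)

R_s, R_b, R_w = rates(q=np.array([0.0, 0.0]), b=np.array([20.0, 0.0]),
                      w=np.array([300.0, 300.0]))
```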
### _State Evolution Model_
We aim for tracking Bob's motion state at each TS, which is determined by Bob's kinematic equations. Due to the constraints of the road shape and the environment, for convenience, we assume that Bob is moving at an approximately constant speed in any pair of consecutive TSs. At the \(n\)-th TS, Bob's velocity is denoted as \(\mathbf{v}[n]=[v_{x}[n],v_{y}[n]]^{\mathrm{T}}\), where \(v_{x}[n]\) and \(v_{y}[n]\) are the speed along the \(x\)-axis and the \(y\)-axis in Cartesian coordinates, respectively. Hence, relying on the parameters at the (\(n-1\))-st TS, the state evolution model for Bob's location \([x_{b}[n],y_{b}[n]]^{\mathrm{T}}\) and velocity \([v_{x}[n],v_{y}[n]]^{\mathrm{T}}\) can be summarized as follows:
\[x_{b}[n] =x_{b}[n-1]+v_{x}[n-1]\Delta t+\omega_{x}, \tag{10}\] \[v_{x}[n] =v_{x}[n-1]+\omega_{v_{x}},\] (11) \[y_{b}[n] =y_{b}[n-1]+v_{y}[n-1]\Delta t+\omega_{y},\] (12) \[v_{y}[n] =v_{y}[n-1]+\omega_{v_{y}}, \tag{13}\]
where \(\omega_{x}\), \(\omega_{v_{x}}\), \(\omega_{y}\), and \(\omega_{v_{y}}\) represent the corresponding transition noise terms having the distributions of \(\mathcal{N}(0,\sigma_{x}^{2})\), \(\mathcal{N}(0,\sigma_{v_{x}}^{2})\), \(\mathcal{N}(0,\sigma_{y}^{2})\), and \(\mathcal{N}(0,\sigma_{v_{y}}^{2})\), respectively.
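As an illustration of the evolution model (10)-(13), the sketch below simulates Bob's constant-velocity motion in stacked vector form; the noise standard deviations are assumed values chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.1                                    # TS duration
sig = np.array([1.0, 0.5, 1.0, 0.5])        # stds of (w_x, w_vx, w_y, w_vy), assumed

Phi = np.array([[1.0, dt, 0.0, 0.0],        # linear state evolution matrix
                [0.0, 1.0, 0.0, 0.0],
                [0.0, 0.0, 1.0, dt],
                [0.0, 0.0, 0.0, 1.0]])

state = np.array([350.0, 10.0, 470.0, 10.0])    # [x_b, v_x, y_b, v_y]
trajectory = [state.copy()]
for _ in range(100):                            # 100 TSs
    state = Phi @ state + rng.normal(0.0, sig)  # eqs. (10)-(13) stacked
    trajectory.append(state.copy())
print(np.asarray(trajectory)[-1])               # final motion state
```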
## III Problem Formulation
Our goal is to maximize the real-time secrecy rate as a function of the UAV trajectory. Since the UAV's maximum speed \(V_{max}\) is finite, the travel distance within a single TS is constrained as
\[\|\mathbf{q}[n]-\mathbf{q}[n-1]\|\leq V_{max}\Delta t,\quad\forall n\in \mathcal{F}. \tag{14}\]
In practice, the UAV will only fly in a rectangular area \(\mathcal{D}\) having dimensions of \(L_{x}\) and \(L_{y}\). Thus, the UAV trajectory has to satisfy
\[0\leq x_{q}[n]\leq L_{x},\quad\forall n\in\mathcal{F}. \tag{15}\] \[0\leq y_{q}[n]\leq L_{y},\quad\forall n\in\mathcal{F}. \tag{16}\]
Our optimization problem then can be formulated as
\[\max_{\mathbf{q}[n]} \alpha B\log_{2}\left(1+\frac{p_{0}\rho_{0}}{\sigma^{2}\left(H^{2}+\|\mathbf{q}[n]-\mathbf{b}[n]\|^{2}\right)}\right)-(1-\alpha)B\log_{2}\left(1+\frac{p_{0}\rho_{0}}{\sigma_{e}^{2}\left(H^{2}+\|\mathbf{q}[n]-\mathbf{w}[n]\|^{2}\right)}\right)\] (17a) s.t. \[(\ref{eq:14}),(\ref{eq:15}),(\ref{eq:16}), \tag{17b}\]
where \(\alpha\) is a weighting factor taking values between 0 and 1 to achieve a flexible secure communication performance. A higher value of \(\alpha\) means that increasing the communication rate for Bob is more important than avoiding information leakage to Eve. From the above formulation, it is seen that maximizing the weighted secrecy rate requires the real-time design of the UAV's trajectory, which in turn requires knowledge of Bob's location.
## IV Proposed Algorithm
In Section III, we formulated the real-time UAV trajectory design as problem (17), which is non-convex and hence cannot be directly solved by conventional convex optimization methods. In this section, we first present an EKF-based algorithm conceived for tracking and predicting Bob's real-time location. Then, an iterative algorithm is proposed for solving problem (17).
### _EKF-Based Bob Tracking_
A core contribution of our work is to treat the trajectory design as an online optimization, which is fundamentally different from the global path planning of [4] and requires us to track Bob's motion state, including location and velocity, at each TS. Kalman filtering (KF) is a popular technique for solving estimation problems. However, since the range measurement in (4) is a nonlinear function of the state, the classic KF cannot be directly adopted. To circumvent this problem, we resort to the EKF algorithm, which has been widely used for solving nonlinear estimation problems. Let us define the state variables as \(\mathbf{x}[n]=[x_{b}[n],\dot{x}_{b}[n],y_{b}[n],\dot{y}_{b}[n]]^{\mathrm{T}}\). Accordingly, the discrete-time dynamic models can be written in the compact form
\[\left\{\begin{array}{l}\text{Evolution Model: }\mathbf{x}[n]=\mathbf{\Phi}[n|n-1] \mathbf{x}[n-1]+\mathbf{\omega}[n-1],\\ \text{Measurement Model: }\hat{d}_{b}[n]=h\left(\mathbf{x}[n]\right)+z_{d}[n], \end{array}\right. \tag{18}\]
where \(\mathbf{\Phi}[n|n-1]\) is the linear state evolution matrix given by
\[\mathbf{\Phi}[n|n-1]=\left[\begin{array}{cccc}1&\Delta t&0&0\\ 0&1&0&0\\ 0&0&1&\Delta t\\ 0&0&0&1\end{array}\right]. \tag{19}\]
The vector \(\mathbf{\omega}=[\omega_{x},\omega_{v_{x}},\omega_{y},\omega_{v_{y}}]^{\mathrm{T}}\) is the transition noise vector having the covariance matrix of
\[\mathbf{Q}_{\omega}=\text{diag}(\sigma_{x}^{2},\sigma_{v_{x}}^{2},\sigma_{y}^ {2},\sigma_{v_{y}}^{2}). \tag{20}\]
In (18), \(h(\cdot)\) is the nonlinear observation function defined in (4). We then harness the Jacobian matrix of \(h(\mathbf{x})\) to linearize the measurements shown as
\[\frac{\partial h}{\partial\mathbf{x}}=\left[\begin{array}{cccc} \frac{\partial h}{\partial x_{b}}&\frac{\partial h}{\partial v_{x}}&\frac{ \partial h}{\partial y_{b}}&\frac{\partial h}{\partial v_{y}}\end{array}\right]=\left[\begin{array}{cccc}\frac{x_{b}[n]-x_{q}[n]}{d_{b}[n]}&0& \frac{y_{b}[n]-y_{q}[n]}{d_{b}[n]}&0\end{array}\right]. \tag{21}\]
We now invoke the EKF technique for predicting and tracking Bob's real-time location following the standard procedure. The state prediction and tracking can be summarized as follows:
1) _State Prediction:_
\[\hat{\mathbf{x}}[n|n-1]=\mathbf{\Phi}[n|n-1]\hat{\mathbf{x}}[n-1]. \tag{22}\]
2) _Linearization:_
\[\mathbf{H}[n]=\left.\frac{\partial h}{\partial\mathbf{x}}\right|_{\mathbf{x}= \hat{\mathbf{x}}[n|n-1]}. \tag{23}\]
3) _MSE Matrix Prediction:_
\[\mathbf{P}[n|n-1]=\mathbf{\Phi}[n|n-1]\mathbf{P}[n-1]\mathbf{\Phi}^{\mathsf{T} }[n|n-1]+\mathbf{Q}_{\omega}. \tag{24}\]
4) _Kalman Gain Calculation:_
\[\mathbf{K}[n]=\mathbf{P}[n|n-1]\mathbf{H}^{\mathsf{T}}[n]\times\left(\mathbf{ H}[n]\mathbf{P}[n|n-1]\mathbf{H}^{\mathsf{T}}[n]+\sigma_{d}^{2}\right)^{-1}. \tag{25}\]
5) _State Update:_
\[\hat{\mathbf{x}}[n]=\hat{\mathbf{x}}[n|n-1]+\mathbf{K}[n](\hat{d}_{b}[n]-h( \hat{\mathbf{x}}[n|n-1])). \tag{26}\]
6) _MSE Matrix Update:_
\[\mathbf{P}[n]=(\mathbf{I}-\mathbf{K}[n]\mathbf{H}[n])\mathbf{P}[n|n-1]. \tag{27}\]
Note that \(\hat{d}_{b}[n]\) in (26) is measured relying on the UAV's real-time location, which is derived in Section IV-B.
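The six steps (22)-(27) translate directly into code. The sketch below performs one EKF recursion from a noisy range measurement; the UAV position, initial covariance, and noise levels are illustrative assumptions. Since the range measurement is scalar, the inverse in (25) reduces to a scalar division.

```python
import numpy as np

dt, H_alt, sigma_d = 0.1, 50.0, 2.0
Phi = np.array([[1, dt, 0, 0], [0, 1, 0, 0],
                [0, 0, 1, dt], [0, 0, 0, 1]], dtype=float)   # eq. (19)
Q = np.diag([1.0, 0.25, 1.0, 0.25])                          # eq. (20), assumed

def h(x, q):
    """Range between the UAV at (q, H_alt) and Bob at (x[0], x[2])."""
    return np.sqrt(H_alt**2 + (x[0] - q[0])**2 + (x[2] - q[1])**2)

def ekf_step(x_est, P, q_uav, d_meas):
    x_pred = Phi @ x_est                                     # (22) state prediction
    d_pred = h(x_pred, q_uav)
    Hj = np.array([(x_pred[0] - q_uav[0]) / d_pred, 0.0,     # (21), (23) linearization
                   (x_pred[2] - q_uav[1]) / d_pred, 0.0])
    P_pred = Phi @ P @ Phi.T + Q                             # (24) MSE prediction
    S = Hj @ P_pred @ Hj + sigma_d**2                        # scalar innovation variance
    K = P_pred @ Hj / S                                      # (25) Kalman gain
    x_new = x_pred + K * (d_meas - d_pred)                   # (26) state update
    P_new = (np.eye(4) - np.outer(K, Hj)) @ P_pred           # (27) MSE update
    return x_new, P_new

x_est, P = np.array([350.0, 10.0, 470.0, 10.0]), np.eye(4)
x_est, P = ekf_step(x_est, P, q_uav=np.array([400.0, 400.0]), d_meas=100.0)
print(x_est)
```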
### _Real-Time Trajectory Design_
In this section, we solve the UAV's trajectory design problem for maximizing the real-time secrecy rate. Since the objective function (17a) is non-convex with respect to \(\mathbf{q}[n]\), the problem (17) is neither a convex nor a quasi-convex optimization problem, hence imposing challenges in finding the globally optimal solution. To overcome the non-convexity in (17a), in the following, we harness the successive convex optimization method to obtain a suboptimal solution. We observe that the term \(\|\mathbf{q}[n]-\mathbf{b}[n]\|^{2}\) is convex with respect to \(\mathbf{q}[n]\) and the first part of (17a) is convex with respect to \(\|\mathbf{q}[n]-\mathbf{b}[n]\|^{2}\) as well. At this point, recall that any convex function is lower-bounded by its first-order Taylor expansion at any point within its domain. Now let \(\mathbf{q}^{*}[n]\) denote the UAV's location at the \(r\)-th iteration of the \(n\)-th TS. Based on the predicted location \(\hat{\mathbf{b}}[n|n-1]\) (included in \(\hat{\mathbf{x}}[n|n-1]\)), the lower-bound can be written as
\[R_{b}[n] =B\log_{2}\left(1+\frac{p_{0}\rho_{0}}{\sigma^{2}\left(H^{2}+\| \mathbf{q}[n]-\hat{\mathbf{b}}[n|n-1]\|^{2}\right)}\right)\] \[\geq B\left(\log_{2}\left(1+\frac{p_{0}\rho_{0}}{\sigma^{2}\left( H^{2}+\|\mathbf{q}^{*}[n]-\hat{\mathbf{b}}[n|n-1]\|^{2}\right)}\right)\right.\] \[+\frac{1}{\ln 2}\left(\frac{1}{1+\frac{p_{0}\rho_{0}}{\sigma^{2} \left(H^{2}+\|\mathbf{q}^{*}[n]-\hat{\mathbf{b}}[n|n-1]\|^{2}\right)}}\right) \left(\frac{-p_{0}\rho_{0}}{\sigma^{2}}\right)\] \[\times\frac{\|\mathbf{q}[n]-\hat{\mathbf{b}}[n|n-1]\|^{2}-\| \mathbf{q}^{*}[n]-\hat{\mathbf{b}}[n|n-1]\|^{2}}{(H^{2}+\|\mathbf{q}^{*}[n]- \hat{\mathbf{b}}[n|n-1]\|^{2})^{2}}\Bigg{)}\] \[\triangleq\hat{R}_{b}[n]. \tag{28}\]
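Since the rate in (28) is convex in \(\|\mathbf{q}[n]-\hat{\mathbf{b}}[n|n-1]\|^{2}\), the first-order expansion is a global under-estimator. The short check below verifies the bound numerically at random points; all constants are placeholders.

```python
import numpy as np

B, p0, rho0, sigma2, H = 1e6, 0.1, 1e-6, 1e-13, 50.0
b_hat = np.array([350.0, 470.0])

def R_b(q):
    return B * np.log2(1 + p0 * rho0 / (sigma2 * (H**2 + np.sum((q - b_hat)**2))))

def R_b_lb(q, q_r):
    """First-order lower bound (28), expanded at the previous iterate q_r."""
    u, u_r = np.sum((q - b_hat)**2), np.sum((q_r - b_hat)**2)
    snr_r = p0 * rho0 / (sigma2 * (H**2 + u_r))
    slope = -p0 * rho0 / (sigma2 * (H**2 + u_r)**2)      # derivative of the SNR in u
    return B * (np.log2(1 + snr_r) + slope * (u - u_r) / ((1 + snr_r) * np.log(2)))

rng = np.random.default_rng(1)
q_r = np.array([400.0, 400.0])
for _ in range(1000):
    q = q_r + rng.normal(0.0, 100.0, size=2)
    assert R_b_lb(q, q_r) <= R_b(q) + 1e-9               # the bound always holds
print("lower bound (28) verified at 1000 random points")
```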
```
1:Initialize: set \(\hat{\mathbf{x}}[1],\mathbf{P}[1],\mathbf{q}[1]\), the corresponding index \(n=1\), \(r=1\), the tolerance \(\epsilon\), the maximum iteration count \(r_{max}\).
2:repeat
3: Set \(n=n+1\).
4: Compute \(\hat{\mathbf{x}}[n|n-1]\) with (22) and \(\mathbf{P}[n|n-1]\) with (24).
5:repeat
6: Given \(\hat{\mathbf{b}}[n|n-1]\) and \(\mathbf{q}^{*}[n]\), obtain \(\mathbf{q}[n]\) and \(\text{obj}(\mathbf{q}[n])^{r}\) by solving problem (30).
7: Update \(r=r+1\).
8:until \(|\text{obj}(\mathbf{q}[n])^{r}-\text{obj}(\mathbf{q}[n])^{r-1}|\leq\epsilon\) or \(r>r_{max}\)
9: Given \(\hat{\mathbf{b}}[n|n-1]\), obtain \(\mathbf{K}[n]\) by (25).
10: Given \(\hat{d}_{b}[n]\), obtain \(\hat{\mathbf{x}}[n]\) and \(\mathbf{P}[n]\) by (26) and (27).
11:until (\(n>N\))
```
**Algorithm 1** The Proposed Overall Algorithm
Next, by introducing the slack variable \(\mathbf{s}[n]\) for \(\|\mathbf{q}[n]-\mathbf{w}[n]\|^{2}\), problem (17) can be rewritten as
\[\max_{\mathbf{q}[n],\mathbf{s}[n]} \alpha\hat{R}_{b}[n]-(1-\alpha)B\log_{2}\left(1+\frac{p_{0}\rho_{ 0}}{\sigma_{e}^{2}\left(H^{2}+\mathbf{s}[n]\right)}\right)\] s.t. \[\mathbf{s}[n]\leq\|\mathbf{q}[n]-\mathbf{w}[n]\|^{2}, \tag{29b}\] \[(14),(15),(16). \tag{29c}\]
Although the objective function (29a) is now jointly concave with respect to \(\mathbf{q}[n]\) and \(\mathbf{s}[n]\), problem (29) is still non-convex due to the constraint (29b). Furthermore, we find that \(\|\mathbf{q}[n]-\mathbf{w}[n]\|^{2}\) is lower-bounded by \(\|\mathbf{q}^{*}[n]-\mathbf{w}[n]\|^{2}+2\left(\mathbf{q}^{*}[n]-\mathbf{w}[n] \right)^{\mathsf{T}}\left(\mathbf{q}[n]-\mathbf{q}^{*}[n]\right)\). Hence, problem (29) can be approximated by the following problem:
\[\max_{\mathbf{q}[n],\mathbf{s}[n]} \alpha\hat{R}_{b}[n]-(1-\alpha)B\log_{2}\left(1+\frac{p_{0}\rho_{ 0}}{\sigma_{e}^{2}\left(H^{2}+\mathbf{s}[n]\right)}\right)\] (30a) s.t. \[\mathbf{s}[n]\leq\|\mathbf{q}^{*}[n]-\mathbf{w}[n]\|^{2}\] \[+2\left(\mathbf{q}^{*}[n]-\mathbf{w}[n]\right)^{\mathsf{T}}\left( \mathbf{q}[n]-\mathbf{q}^{*}[n]\right), \tag{30b}\] \[(14),(15),(16). \tag{30c}\]
Problem (30) is now a convex optimization problem, which can be readily solved using standard solvers, such as CVX [17]. To sum up, the details of the proposed procedure are given in **Algorithm 1**. The objective value is non-decreasing with the iteration index, and the procedure is guaranteed to converge within a finite number of iterations [18]. We further analyse the complexity of **Algorithm 1**. At each time slot, the EKF steps have to perform matrix inversion with a cubic complexity order of \(\mathcal{O}(4^{3})\). Then, upon denoting the number of iterations needed for solving Problem (30) at time slot \(n\) by \(r_{n}\) (\(r_{n}\leq r_{max}\)) and assuming that the convex optimization Problem (30) is solved via the standard interior-point method having a complexity order of \(\mathcal{O}[2^{3.5}\log(\frac{1}{\epsilon})]\)[9], the overall complexity of **Algorithm 1** is on the order of \(\sum\limits_{n=1}^{N}\mathcal{O}\left[4^{3}+r_{n}(2^{3.5}\log(\frac{1}{\epsilon}))\right]\).
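Standard solvers such as CVX can handle (30) directly. As a self-contained illustration, the sketch below solves one SCA iteration with SciPy instead, after substituting the slack bound (30b) with equality, which is valid because the objective of (30) is increasing in \(\mathbf{s}[n]\). All constants and the previous iterate are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

alpha, B, p0, rho0 = 0.5, 1e6, 0.1, 1e-6        # placeholder constants
sigma2 = sigma2e = 1e-13
H, Vmax, dt, Lx, Ly = 50.0, 50.0, 0.1, 1000.0, 1000.0
b_hat  = np.array([350.0, 470.0])               # predicted Bob location
w      = np.array([500.0, 300.0])               # Eve location
q_prev = np.array([400.0, 400.0])               # q[n-1]
q_r    = q_prev.copy()                          # current SCA iterate

u_r   = np.sum((q_r - b_hat) ** 2)
snr_r = p0 * rho0 / (sigma2 * (H**2 + u_r))
lin   = -p0 * rho0 / (sigma2 * (H**2 + u_r)**2) / ((1 + snr_r) * np.log(2))

def neg_obj(q):
    # concave surrogate \hat{R}_b[n] from (28)
    R_b_hat = B * (np.log2(1 + snr_r) + lin * (np.sum((q - b_hat)**2) - u_r))
    # slack at its linearized bound (30b), where the objective is largest
    s = np.sum((q_r - w)**2) + 2.0 * (q_r - w) @ (q - q_r)
    R_w = B * np.log2(1 + p0 * rho0 / (sigma2e * (H**2 + s)))
    return -(alpha * R_b_hat - (1 - alpha) * R_w)

cons = [{"type": "ineq",                         # speed constraint (14)
         "fun": lambda q: (Vmax * dt)**2 - np.sum((q - q_prev)**2)}]
res = minimize(neg_obj, q_r, bounds=[(0.0, Lx), (0.0, Ly)],   # (15)-(16)
               constraints=cons, method="SLSQP")
print(res.x, -res.fun)
```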
## V Numerical Results
In this section, we present numerical results for characterising the performance of the proposed algorithm. To guarantee the tracking performance while relying on a single UAV, we divide the entire duration into several tracking periods, and Bob feeds back his location to the UAV through the uplink channel at the beginning of each tracking period. Here, we set each tracking period to \(10\) TSs. Note that the feedback mechanism means that the EKF tracking is employed within each tracking period, rather than across the entire duration. Furthermore, the feedback mechanism can help us reliably distinguish Bob. The size of region \(\mathcal{D}\) is set as \(L_{x}=L_{y}=1000\) m. We use \(\Delta t=0.1\) s as the TS duration. The initial position of the UAV is the midpoint between Bob and Eve at the altitude of \(H=50\) m, and the maximum speed is \(V_{max}=50\) m/s. The channel's power gain at a unit distance is set to \(\rho_{0}=-60\) dB and the noise power at the receiver is set to \(\sigma^{2}=\sigma_{e}^{2}=-100\) dBm. We set the channel bandwidth as \(1\) MHz. The state transition noises are set with standard deviations \(\sigma_{x}=\sigma_{y}=1\) m and \(\sigma_{v_{x}}=\sigma_{v_{y}}=0.5\) m/s. Finally, the standard deviation of the range measurement is set to \(\sigma_{d}=2\) m.
In Fig. 2, we first evaluate the cumulative distribution function (CDF) of both the legitimate received rate at Bob and the secrecy rate, parameterized by \(\alpha\). It can be observed in Fig. 2(a) that as the weighting factor \(\alpha\) increases, the rate received at Bob also increases, since the UAV trajectory approaches Bob's trajectory. By contrast, it is interesting to see in Fig. 2(b) that although the scheme having \(\alpha=0\) achieves the minimum leakage data rate, and the scheme with \(\alpha=1\) achieves the maximum rate received at Bob, the proposed algorithm associated with \(\alpha=0.5\) always achieves the best secure communication performance. To further investigate the effectiveness of our proposed algorithm, we present the CDF of the secrecy rate at different noise levels in Fig. 3. As expected, the secure communication performance degrades as the noise ratio \(\sigma^{2}/\sigma_{e}^{2}\) increases, since the eavesdropping capability of Eve is enhanced at lower noise levels. Nevertheless, our proposed algorithm supports secure communication most of the time, particularly when the ratio obeys \(\sigma^{2}/\sigma_{e}^{2}\leq 3\). Although the initial secrecy rate is zero, it gradually increases as **Algorithm 1** proceeds. When the secrecy rate is zero, we suspend the transmission of information. However, it should be emphasized that when the ratio is too high, the secrecy rate remains zero for the entire duration. We then study the tracking performance of the proposed EKF algorithm with Bob's initial location being \(\mathbf{b}[1]=[350,470]^{\text{T}}\) m and velocity being \(\mathbf{v}[1]=[10,10]^{\text{T}}\) m/s. In Fig. 4, we evaluate the location estimation performance in terms of its root mean squared error (RMSE). Although the RMSE tends to increase within each tracking period, owing to the limited positioning accuracy attainable from a single range measurement, it is corrected with the aid of the location information provided at the next uplink feedback. Observe in Fig. 4 that the RMSE becomes high at specific TSs, e.g., the \(79\)-th TS, which is caused by the location prediction and estimation errors of Bob due to the change of his movement direction.

Fig. 2: CDF of the throughput parameterized by \(\alpha\).

Fig. 3: CDF of secrecy rate at different noise levels with \(\alpha=0.5\).

In Fig. 5, we show the UAV's real-time designed trajectory for different values of \(\alpha\). A higher value of \(\alpha\) indicates that the UAV focuses more on increasing Bob's communication rate than on minimizing the leakage to Eve; as a result, the UAV trajectory almost coincides with Bob's trajectory. By contrast, we can see that the UAV escapes from Eve towards the boundary of area \(\mathcal{D}\) when \(\alpha=0.2\), where the reduction of the leakage rate is the main design goal. Furthermore, it can be observed that the UAV always endeavors to fly to the side far from Eve at each TS when \(\alpha=0.5\), hence improving the security. On the one hand, the UAV gets close to Bob for improving the legitimate communication performance, which increases the risk of eavesdropping as well. On the other hand, the UAV flies away from Eve for reducing the leakage rate, while simultaneously reducing the legitimate communication rate of Bob. These trends motivate us to explore a more flexible secure communications mechanism.
## VI Conclusion
A UAV-aided dynamic ISAC system was designed, where a UAV having sensing and communication functionalities is supporting a ground user roaming through unknown locations in the presence of an eavesdropper. Specifically, we utilized the EKF algorithm for predicting and tracking the user's real-time location relying on the range measurements extracted from the ISAC echoes. Motivated by maximizing the real-time secrecy rate, we formulated a weighted real-time trajectory design problem. Furthermore, to solve the resultant non-convex optimization problem, we proposed an efficient iterative algorithm relying on the popular successive convex optimization technique. Our simulation results verified that the proposed algorithm tracks the user efficiently and achieves flexible secure communications performance in UAV-aided applications having different communication requirements.
|
2301.09808 | On Dynamic Regret and Constraint Violations in Constrained Online Convex
Optimization | A constrained version of the online convex optimization (OCO) problem is
considered. With slotted time, for each slot, first an action is chosen.
Subsequently the loss function and the constraint violation penalty evaluated
at the chosen action point is revealed. For each slot, both the loss function
as well as the function defining the constraint set is assumed to be smooth and
strongly convex. In addition, once an action is chosen, local information about
a feasible set within a small neighborhood of the current action is also
revealed. An algorithm is allowed to compute at most one gradient at its point
of choice given the described feedback to choose the next action. The goal of
an algorithm is to simultaneously minimize the dynamic regret (loss incurred
compared to the oracle's loss) and the constraint violation penalty (penalty
accrued compared to the oracle's penalty). We propose an algorithm that follows
projected gradient descent over a suitably chosen set around the current
action. We show that both the dynamic regret and the constraint violation is
order-wise bounded by the {\it path-length}, the sum of the distances between
the consecutive optimal actions. Moreover, we show that the derived bounds are
the best possible. | Rahul Vaze | 2023-01-24T04:22:13Z | http://arxiv.org/abs/2301.09808v1 | # On Dynamic Regret and Constraint Violations in Constrained Online Convex Optimization
###### Abstract
A constrained version of the online convex optimization (OCO) problem is considered. With slotted time, for each slot, first an action is chosen. Subsequently the loss function and the constraint violation penalty evaluated at the chosen action point is revealed. For each slot, both the loss function as well as the function defining the constraint set is assumed to be smooth and strongly convex. In addition, once an action is chosen, local information about a feasible set within a small neighborhood of the current action is also revealed. An algorithm is allowed to compute at most one gradient at its point of choice given the described feedback to choose the next action. The goal of an algorithm is to simultaneously minimize the dynamic regret (loss incurred compared to the oracle's loss) and the constraint violation penalty (penalty accrued compared to the oracle's penalty). We propose an algorithm that follows projected gradient descent over a suitably chosen set around the current action. We show that both the dynamic regret and the constraint violation is order-wise bounded by the _path-length_, the sum of the distances between the consecutive optimal actions. Moreover, we show that the derived bounds are the best possible.
## I Introduction
Online convex optimization (OCO) has been a very attractive research problem for the last two decades, because of its versatility in modelling rich optimization problems. With OCO, at each time \(t\), an online algorithm selects an action \(a_{t}\), after which the loss incurred \(f_{t}(a_{t})\) is revealed. Knowing all \(f_{t}\)'s, \(1\leq t\leq T\) ahead of time, an optimal offline algorithm chooses action \(x^{\star}=\arg\min_{x}\sum_{t=1}^{T}f_{t}(x)\), and the _static_ regret of an online algorithm is defined as \(R_{s}=\max_{f_{t},t=1,\ldots,T}\sum_{t=1}^{T}f_{t}(a_{t})-\sum_{t=1}^{T}f_{t}( x^{\star})\), i.e., an adversary can choose the functions \(f_{t}\). The name static comes from the fact that the optimal offline algorithm is constrained to use a single action.
The static benchmark is, however, restrictive, since the underlying environment may be changing. A more powerful metric is the _dynamic_ regret, where the comparator is allowed to change its action at each slot. In particular, the comparator plays the sequence of per-slot optimizers

\[\mathbf{u}=\mathbf{x}^{\star}=(x_{1}^{\star},\ldots,x_{T}^{\star}),\quad\text{where}\quad x_{t}^{\star}=\arg\min_{x}f_{t}(x), \tag{1}\]

and the dynamic regret is defined as \(R_{d}(\mathbf{x}^{\star})=\max_{f_{t},t=1,\ldots,T}\sum_{t=1}^{T}f_{t}(a_{t})-\sum_{t=1}^{T}f_{t}(x_{t}^{\star})\). Since sublinear dynamic regret is not achievable in general, bounds on \(R_{d}\) are typically expressed in terms of the _path-length_\(V_{\mathbf{x}^{\star}}=\sum_{t=2}^{T}||x_{t}^{\star}-x_{t-1}^{\star}||\), the accumulated variation of the per-slot optimizers, or in terms of the function variation

\[V_{f}=\sum_{t=2}^{T}\max_{x\in\chi}|f_{t}(x)-f_{t-1}(x)|. \tag{2}\]

In the constrained version of the problem considered in this paper, at each slot \(t\) there is additionally a constraint \(g_{t}(x)\leq 0\) that is revealed only after the action \(a_{t}\) has been chosen, and the benchmark is accordingly \(x_{t}^{\star}=\arg\min_{x:g_{t}(x)\leq 0}f_{t}(x)\). Since an online algorithm cannot always satisfy the constraint, its performance is also measured by a constraint violation penalty, e.g.,

\[P_{g}(\mathbf{x}^{\star})=\max_{f_{t},g_{t},t=1,\ldots,T}\sum_{t=1}^{T}||g_{t}(x_{t}^{\star})-g_{t}(a_{t})||,\]
which measures the gap between the function \(g_{t}\) evaluated at the optimal point and the chosen action or
\[P^{\prime}_{g}(\mathbf{x}^{\star})=\max_{f_{t},g_{t}=1,\ldots,T}\sum_{t=1}^{T}g_ {t}(a_{t}),\]
which just counts the overall constraint violation. We use \(P_{g}(\mathbf{x}^{\star})\) rather than \(P^{\prime}_{g}(\mathbf{x}^{\star})\) since it is a stronger measure as \(P_{g}(\mathbf{x}^{\star})\geq P^{\prime}_{g}(\mathbf{x}^{\star})\) on account of \(g_{t}(x^{\star}_{t})\leq 0\).
In prior work, starting from [17], where functions \(f_{t},g_{t}\) are assumed to be convex, Lipschitz and smooth, an algorithm has been proposed that achieves \(R_{d}(\mathbf{x}^{\star})\leq O(V_{\mathbf{x}^{\star}}\sqrt{T})\) while \(P^{\prime}_{g}(\mathbf{x}^{\star})=O(T^{1/2})\), which was improved in [18], to get \(R_{d}(\mathbf{x}^{\star})\leq O(\sqrt{TV_{\mathbf{x}^{\star}}})\) while \(P^{\prime}_{g}(\mathbf{x}^{\star})=O(V_{\mathbf{x}^{\star}}^{1/4}T^{3/4})\), and most recently in [19], an algorithm based on the drift plus penalty method has regret \(R_{d}(\mathbf{x}^{\star})\leq O(\max\{\sqrt{TV_{\mathbf{x}^{\star}}},V_{g}\})\) while \(P^{\prime}_{g}(\mathbf{x}^{\star})=O(\sqrt{T},V_{g})\), or \(R_{d}(\mathbf{x}^{\star})\leq O(\sqrt{TV_{\mathbf{x}^{\star}}})\) while \(P^{\prime}_{g}(\mathbf{x}^{\star})=O(T^{3/4},V_{g})\), where \(V_{g}\) is as defined in (2) with \(f=g\).
However, notably [19] considers the full information setting, where once \(a_{t}\) is chosen, full functions \(f_{t}\) and \(g_{t}\) are revealed, and hence \(x^{\star}_{t}\) can be computed. Clearly, obtaining this information is highly imposing. Moreover, [19] also needs to know the diameter \(D\) of the feasible set. In comparison, the result of [18] requires the knowledge of \(V_{\mathbf{x}^{\star}}\) instead of individual \(x^{\star}_{t}\), which is relatively less demanding, however, still very difficult to obtain in practice, as well as the knowledge of \(T\) and \(D\).
In this paper, we consider an alternate information structure that is less imposing than those considered in [18, 19]. The full feasible set at time \(t\) is \(\chi_{t}=\{x\in\chi:g_{t}(x)\leq 0\}\). We assume that once the current action \(a_{t}\) is chosen, for a fixed constant \(\mathsf{dist}>0\) that is independent of \(T\), a subset of \(\chi_{t}\), set \(\chi_{t}(a_{t})=\{x:g_{t}(x)\leq 0\}\cap\mathcal{B}(a_{t},\mathsf{dist})\) is made available, where \(\mathcal{B}(x,r)\) is a ball with radius \(r\) centered at \(x\). Set \(\chi_{t}(a_{t})\) captures the feasible set in the neighborhood of the current action. With full information, e.g., in [19], \(\mathsf{dist}=\infty\). We will show that our results hold for any \(\mathsf{dist}>0\).
With this new information structure, we consider the problem of simultaneously minimizing the dynamic regret and constraint violation penalty when \(f_{t},g_{t}\) are strongly convex, Lipschitz and smooth. Generalizing the results when \(f_{t},g_{t}\) are only convex, is part of ongoing work.
Towards this end, we propose an algorithm that uses the projected gradient descent (PGD) algorithm [12] as a black box, and depending on the chosen action \(a_{t}\) being feasible \(g_{t}(a_{t})<0\), on the boundary \(g_{t}(a_{t})=0\), or infeasible \(g_{t}(a_{t})>0\), executes PGD over a suitably chosen subset that may or may not be contained in the feasible region of \(g_{t}\). The main property the algorithm relies on is that when the PGD algorithm [12] is executed over a convex set \(I\) starting from point \(a_{t}\), the next action \(a_{t+1}\) satisfies
\[||x^{\star}_{I}-a_{t+1}||\leq\mathsf{c}||x^{\star}_{I}-a_{t}||, \tag{3}\]
for a constant \(\mathsf{c}<1\), where \(x^{\star}_{I}=\arg\min_{x\in I}f(x)\), when \(f\) is strongly convex and smooth.
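The contraction property (3) is easy to verify numerically; a minimal sketch with a toy quadratic (all constants assumed for illustration) follows.

```python
import numpy as np

A = np.diag([1.0, 4.0])                 # f(x) = 0.5 x^T A x: nu = 1, L = 4
nu, L, alpha, mu = 1.0, 4.0, 0.5, 4.0   # PGD step size 1/mu with mu >= L
center, r = np.array([3.0, 3.0]), 1.0   # feasible set I: a Euclidean ball

def proj(x):
    d = np.linalg.norm(x - center)
    return x if d <= r else center + r * (x - center) / d

def pgd_step(a):
    return a + alpha * (proj(a - (A @ a) / mu) - a)

# reference constrained minimizer x_I^* via many plain PGD iterations
x_star = center.copy()
for _ in range(5000):
    x_star = proj(x_star - 0.1 * (A @ x_star))

c = np.sqrt(1.0 - alpha * nu / mu)      # contraction factor in (3)
a = np.array([3.9, 3.0])
for _ in range(5):
    a_next = pgd_step(a)
    assert np.linalg.norm(x_star - a_next) <= c * np.linalg.norm(x_star - a) + 1e-9
    a = a_next
print(f"contraction verified with c = {c:.3f}")
```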
If the whole feasible region \(\chi_{t}=\{x\in\chi:g_{t}(x)<0\}\) was known, then using \(I=\chi_{t}\), (3) will imply that the algorithm is making 'quick' progress towards the optimal point \(x_{t}^{\star}\). Unfortunately only local information about the feasible region \(\chi_{t}\) is known. In particular, only \(\chi_{t}(a_{t})=\chi_{t}\cap\mathcal{B}(a_{t},\mathsf{dist})\) is available for a constant \(\mathsf{dist}\). Thus, we proceed in two steps. We identify a small region \(I_{t}\) at time \(t\) around \(a_{t}\) that is contained in \(\chi_{t}\) and use (3) to claim that we are making progress towards the optimal point in this subset \(I_{t}\) (which could be far away from the global optimal). Next, exploiting the strong convexity and the smoothness of the functions, we extend the same claim to the optimal point \(x_{t}^{\star}\) which need not be in \(I_{t}\).
Since we have only local information about \(g_{t}\) around \(a_{t}\), it can happen that the size of \(I_{t}\) is arbitrarily small or \(I_{t}\) is empty in case \(g_{t}(a_{t})>0\) (current choice is infeasible). For both these cases, we show that the algorithm makes progress of a finite distance towards the optimal point \(x_{t}^{\star}\) in \(\chi_{t}\), and establish a relation similar to (3). Once we have (3), a simple application of the triangle inequality and the Lipschitz condition, implies the result.
Our contributions.
* We show that under the defined information structure, the proposed algorithm simultaneously achieves \(R_{d}(\mathbf{x}^{\star})\leq O(V_{\mathbf{x}^{\star}})\) and \(P_{g}^{\prime}(\mathbf{x}^{\star})\leq P_{g}(\mathbf{x}^{\star})\leq O(V_{ \mathbf{x}^{\star}})\) for any \(\mathsf{dist}>0\). Importantly, no information about \(x_{t}^{\star},V_{\mathbf{x}^{\star}},T\) or \(D\) is needed.
* As a function of information variable \(\mathsf{dist}>0\), both \(R_{d}(\mathbf{x}^{\star})\) and \(P_{g}(\mathbf{x}^{\star})\) scale inverse polynomially, which is natural to expect since for any algorithm as information availability is decreased, (smaller value of \(\mathsf{dist}\)), the regret should worsen. We do not know at this point if the algorithm achieves the best scaling in terms of \(\mathsf{dist}\).
* In Remark 3, we also argue that our result is the best one can hope for, given the minimal information structure.
Notation: For the rest of the paper, we follow the notation described as follows. For a set \(I\in\mathbb{R}^{n}\), its interior is defined as \(\mathsf{int}(I)\), while its boundary as \(\text{boundary}(I)\). \(\mathcal{B}(x,r)\) is the ball of radius \(r\) centered at \(x\). For a discrete set of points \(S\), \(\text{convex hull}(x\in S)\) represents the convex hull of points \(x\in S\). \(\text{Proj}(x,S)\) is the projection of point \(x\) on set \(S\), i.e. \(\text{Proj}(x,S)=\arg\min_{y\in S}||x-y||\).
## II System Model
Time is slotted with total time horizon \(T\), and time slots are indexed as \(t=1,\ldots,T\). Let \(\chi\subset\mathbb{R}^{n}\) be a compact and convex set. For each \(t\), two functions \(f_{t}\) and \(g_{t}\) are of interest, that are defined over \(\chi\). The feasible set at time \(t\) is defined as \(\chi_{t}=\{x\in\chi:g_{t}(x)\leq 0\}\). Let the optimizer for \(f_{t}\) over the constraint set \(g_{t}(x)\leq 0\) be \(x_{t}^{\star}\), i.e., \(x_{t}^{\star}=\arg\min_{\{x\in\chi_{t}\}}f_{t}(x)\).
We make the following standard assumptions about \(f_{t}\) and \(g_{t}\). Functions \(f_{t}\) and \(g_{t}\) are assumed to be Lipschitz with Lipschitz constants \(\mathcal{L}_{f}\) and \(\mathcal{L}_{g}\), respectively. Moreover, functions \(f_{t}\) and \(g_{t}\) are assumed to be smooth, i.e., the gradients \(\nabla f_{t}\) and \(\nabla g_{t}\) are assumed to be Lipschitz with Lipschitz constants \(L_{f}\) and \(L_{g}\), respectively. Moreover, for all \(1\leq t\leq T,\sup_{x\in\chi}||\nabla f_{t}(x)||\leq G\)
and \(\sup_{x\in\chi}||\nabla g_{t}(x)||\leq G\). 1 Compared to prior work [18, 19] that assume that \(f_{t}\) and \(g_{t}\) are convex, we assume that \(f_{t}\) and \(g_{t}\) are _strongly_ convex with strong convexity parameters \(\nu_{f},\nu_{g}\), respectively. 2
Footnote 1: For notational simplicity we are assuming the same constant \(G\), which can be generalized without any change in following analysis.
Footnote 2: We are assuming that all \(f_{t}\)’s and \(g_{t}\)’s have the same smoothness parameter \(L_{f}\) and \(L_{g}\) only for notational simplicity. All results will go through with different parameters as well.
At each time \(t\), an action \(a_{t}\) is chosen by an algorithm, for which the cost is \(f_{t}(a_{t})\). The goal of the algorithm is to choose \(a_{t}\) such that the cost \(f_{t}(a_{t})\) is as small as possible while making sure that \(a_{t}\in\chi_{t}\). However, the information available to the algorithm for choosing \(a_{t}\) is limited and described as follows.
Information structure: Similar to [17, 18, 19], once the action \(a_{t}\in\mathbb{R}^{n}\) is chosen at time \(t\), \(g_{t}(a_{t})\) is revealed. Moreover, the algorithm can also access \(\nabla f_{t}(x),\nabla g_{t}(x)\) for at most one point \(x\) of its choice. As described in the Introduction, additionally, in this paper, we assume that, set \(\chi_{t}(a_{t})=\chi_{t}\cap\mathcal{B}(a_{t},\mathsf{dist})\) is also revealed at time \(t\) for a fixed constant \(\mathsf{dist}>0\), after \(a_{t}\) has been chosen. Note that \(\mathsf{dist}\) can be arbitrarily small but is a constant that is fixed throughout the time horizon and does not depend on \(t\) or \(T\). Compared to prior work, [17, 18, 19], acquiring this information is less imposing and does not involve finding any \(x_{t}^{*}\). The set \(\chi_{t}(a_{t})\) maps the local behaviour of \(g_{t}\) in a very small neighborhood of \(a_{t}\). Note that convexity implies that \(\chi_{t}(a_{t})\) is convex for any \(a_{t}\).
**Remark 1**.: _For the considered problem to be meaningful, once \(a_{t}\) is chosen, \(g_{t}(a_{t})\) has to be revealed, as already assumed in prior work [17, 18, 19]. In this work, in addition, we are assuming that \(\chi_{t}(a_{t})\) is also known which in turn requires that \(g_{t}(x)\) for \(x\in B(a_{t},\mathsf{dist})\) is also known. When \(\mathsf{dist}\to 0\), this new information is equivalent to just acquiring \(g_{t}(a_{t})\). Since \(\mathsf{dist}\) is allowed to be any arbitrarily small constant, the extra information assumed is very minimal and can be obtained similar to obtaining \(g_{t}(a_{t})\) (necessary), and can be done efficiently by exploiting the convexity of \(g_{t}\)._
The performance metric for an online algorithm that chooses actions \(a_{t},t=1,\ldots,T\) is defined as the dynamic regret
\[R_{d}(\mathbf{x}^{\star})=\max_{f_{t},g_{t}}\sum_{t=1}^{T}||f_{t}(x_{t}^{ \star})-f_{t}(a_{t})||,\]
and penalty for constraint violation as
\[P_{g}(\mathbf{x}^{\star})=\max_{f_{t},g_{t}}\sum_{t=1}^{T}||g_{t}(x_{t}^{ \star})-g_{t}(a_{t})||,\]
where \(a_{t}\)'s are the causal actions of the algorithm that can depend on the information acquired till time slot \(t-1\). Moreover, \(f_{t},g_{t}\) can be chosen by an adversary (can be adaptive, i.e., depend on previous actions \(a_{\tau},\tau\leq t-1\)) and are not required to follow any structure, other than what has been described earlier.
Note that \(P_{g}\) is stronger than the penalty considered in earlier work [19] that is defined as \(P_{g}^{\prime}(\mathbf{x}^{\star})=\max_{f_{t},g_{t}}\sum_{t=1}^{T}g_{t}(a_{ t}),\) in two aspects. \(P_{g}^{\prime}(\mathbf{x}^{\star})\) can be negative, while \(P_{g}(\mathbf{x}^{\star})\) is always positive, and \(P_{g}(\mathbf{x}^{\star})\geq P_{g}^{\prime}(\mathbf{x}^{\star})\) since \(g_{t}(x_{t}^{\star})\) can be negative.
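Both performance metrics are straightforward to evaluate for a given adversarial sequence; the sketch below (with toy scalar functions as assumptions) computes the two sums for one realization, noting that the definitions above additionally take the maximum over adversarial sequences.

```python
import numpy as np

def regret_and_penalty(fs, gs, actions, optima):
    """R_d and P_g contributions for one sequence (f_t, g_t)_{t=1..T}."""
    R_d = sum(abs(f(x) - f(a)) for f, a, x in zip(fs, actions, optima))
    P_g = sum(abs(g(x) - g(a)) for g, a, x in zip(gs, actions, optima))
    return R_d, P_g

# toy example: T = 3 scalar quadratics; feasible set is [-2, 2] throughout
fs = [lambda x, c=c: (x - c) ** 2 for c in (0.0, 0.5, 1.0)]
gs = [lambda x: x ** 2 - 4.0] * 3
actions = np.array([0.3, 0.4, 0.6])        # algorithm's actions a_t
optima = np.array([0.0, 0.5, 1.0])         # x_t^*: unconstrained optima, feasible here
print(regret_and_penalty(fs, gs, actions, optima))
```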
## III Algorithm
We present the proposed algorithm as pseudocode in Algorithm 1, and describe it as follows. Let, at time \(t\),
\[x_{t}^{*}=\arg\min_{x\in\chi_{t}}f_{t}(x).\]
Let the action chosen at time \(t\) be \(a_{t}\). We want to choose \(a_{t+1}\) in such a way that
\[||x_{t}^{*}-a_{t+1}||^{2}<c||x_{t}^{*}-a_{t}||^{2}, \tag{4}\]
for some constant \(0<c<1\) that does not depend on \(t\). Recall that while choosing \(a_{t+1}\), no information about \(f_{t+1},g_{t+1}\) is available. Thus, relation (4) is useful in the sense that it ensures that \(a_{t+1}\) is closer to \(x_{t}^{*}\) compared to \(a_{t}\), in hope that if \(x_{t}^{*}\) and \(x_{t+1}^{*}\) are close, then \(a_{t+1}\) will be close to \(x_{t+1}^{*}\) as well.
The main idea of the algorithm is to accomplish this goal (showing that (4) holds) depending on three possible cases, namely: i) \(g_{t}(a_{t})<0\), i.e., \(a_{t}\) is strictly feasible for \(g_{t}\), ii) \(g_{t}(a_{t})=0\), i.e., \(a_{t}\) is on the boundary of the feasible region for \(g_{t}\), and finally, iii) \(g_{t}(a_{t})>0\), i.e., \(a_{t}\) is strictly infeasible for \(g_{t}\). To be clear, all the actions described in the following are taken after \(a_{t}\) is chosen and the information \(g_{t}(a_{t})\), \(\chi_{t}(a_{t})\), and \(\nabla f_{t}(x),\nabla g_{t}(x)\) for a single point \(x\) of the algorithm's choice has been revealed.
In case i) \(g_{t}(a_{t})<0\), and we know that \(a_{t}\) is strictly feasible and potentially there is room to move to a point closer to \(x_{t}^{*}\), the optimizer of \(f_{t}\). Using the Lipschitz property of \(g_{t}\)'s, this implies that each point in the ball \(\mathcal{B}(a_{t},||\frac{g_{t}(a_{t})}{2\mathcal{L}_{g}}||)\) is also feasible. Thus, Algorithm 1 chooses the set \(\mathcal{B}(a_{t},||\frac{g_{t}(a_{t})}{2\mathcal{L}_{g}}||)\) as the feasible region to execute the PGD.
If the radius \(||\frac{g_{t}(a_{t})}{2\mathcal{L}_{g}}||\) of the identified feasible region is smaller than the fixed constant \(\mathsf{dist}\), then, using the extra local information \(\chi_{t}(a_{t})\) as described earlier, the feasible region is chosen as \(\chi_{t}(a_{t})\). A local gradient descent over the chosen feasible region, using subroutine Optimize (Algorithm 2), is used to find the next action \(a_{t+1}\).
In case ii) \(g_{t}(a_{t})=0\), \(a_{t}\) is on the boundary of the feasible region. In this case, we use the local information about \(g_{t}\) around \(a_{t}\) and choose \(\chi_{t}(a_{t})\) as the feasible region. Next, a local gradient descent is executed using subroutine Optimize Algorithm 2 in the identified feasible region to find the next action.
Finally, in case iii), \(g_{t}(a_{t})>0\) and \(a_{t}\) is strictly infeasible. Since the current choice of \(a_{t}\) is infeasible for \(g_{t}\), and \(g_{t}(x_{t}^{*})\leq 0\), it is sufficient to move towards the region for which \(g_{t}(x)\leq 0\) to ensure (4) while staying infeasible. In fact, if we 'blindly' move into the feasible region, we cannot guarantee that \(a_{t+1}\) is closer to \(x_{t}^{*}\) than \(a_{t}\), for example if \(g_{t}(x_{t}^{*})=0\). However, using the Lipschitz condition, we know that each point in the ball \(\mathcal{B}(a_{t},||\frac{g_{t}(a_{t})}{2\mathcal{L}_{g}}||)\) is infeasible given that \(a_{t}\) is infeasible. Thus, in this case, as long as \(||\frac{g_{t}(a_{t})}{2\mathcal{L}_{g}}||\geq||\nabla g_{t}(a_{t})\frac{1}{L_{g}}||\), we move a distance of \(||\nabla g_{t}(a_{t})\frac{1}{L_{g}}||\) from \(a_{t}\) in the direction of the negative gradient of \(g_{t}\) at \(a_{t}\). Thus the new point \(a_{t+1}\) is still infeasible, but as we show in Lemma 7, \(a_{t+1}\) is closer to \(x_{t}^{*}\) than \(a_{t}\). In case \(||\frac{g_{t}(a_{t})}{2\mathcal{L}_{g}}||<||\nabla g_{t}(a_{t})\frac{1}{L_{g}}||\), the algorithm finds a feasible region similar to case i) using the local information \(\chi_{t}(a_{t})\) and follows a local gradient descent in this feasible region using subroutine Optimize (Algorithm 2) to find the next action. In case \(\chi_{t}(a_{t})\) turns out to be empty, we proceed as in the case \(||\frac{g_{t}(a_{t})}{2\mathcal{L}_{g}}||\geq||\nabla g_{t}(a_{t})\frac{1}{L_{g}}||\), since the whole of \(\mathcal{B}(a_{t},\mathsf{dist})\) is infeasible.
**Theorem 1**.: _When both \(f_{t},g_{t}\) are strongly convex, Lipschitz, and smooth for all \(t\leq T\), and for all \(1\leq t\leq T\), \(\sup_{x\in\chi}||\nabla f_{t}(x)||\leq G\) and \(\sup_{x\in\chi}||\nabla g_{t}(x)||\leq G\), with information structure as defined, for Algorithm 1_
\[||x_{t}^{\star}-a_{t+1}||<c||x_{t}^{\star}-a_{t}||,\]
_for some constant \(0<c<1\) that does not depend on \(t\). In particular,_
\[c=\max\left\{c_{2},c_{3},c_{4},c_{5}\right\}<1,\]
_where \(c_{2}=\left(1-\frac{\alpha\nu_{f}}{2L_{f}}\right)^{1/2},c_{3}=\frac{D+\alpha \mathsf{dist}}{D+\mathsf{dist}}\) and \(c_{4}=(1-\alpha\nu_{g}/L_{g})^{1/2},c_{5}=\left(1-\alpha\frac{\nu_{g}}{\max \left\{G/\mathsf{dist},L_{g}\right\}}\right)^{1/2}\), and \(0<\alpha<1\) is a constant to be chosen by subroutine Optimize, \(D\) is the diameter of the feasible region and \(0<\mathsf{dist}\) is a constant to be chosen by Algorithm 1. Note that \(\nu_{g}\leq L_{g}\) and \(\nu_{f}\leq L_{f}\) always, thus \(0<c<1\)._
Using Theorem 1, we get the main result of the paper as follows.
**Theorem 2**.: _When both \(f_{t},g_{t}\) are strongly convex, Lipschitz, and smooth (\(\nabla f_{t},\nabla g_{t}\) are Lipschitz) for all \(t\leq T\), and for all \(1\leq t\leq T\), \(\sup_{x\in\chi}||\nabla f_{t}(x)||\leq G\) and \(\sup_{x\in\chi}||\nabla g_{t}(x)||\leq G\), with information structure as defined, with Algorithm 1, simultaneously,_
\[R_{d}(\mathbf{x}^{\star})=O(V_{\mathbf{x}^{\star}}),\text{and},\ P_{g}( \mathbf{x}^{\star})=O(V_{\mathbf{x}^{\star}}).\]
**Remark 2**.: _Both the regret and constraint violation penalty bounds derived in Theorem 2 are inverse polynomially proportional to the chosen constant \(\mathsf{dist}\). In particular, they grow as \(\frac{1}{1-c_{5}}\) where \(c_{5}=\left(1-\alpha\frac{\nu_{g}}{\max\left\{G/\mathsf{dist},L_{g}\right\}} \right)^{1/2}\). It is natural to expect that regret grows with decreasing \(\mathsf{dist}\) since for any algorithm as information availability is decreased, (in this case smaller value of \(\mathsf{dist}\)), the regret should worsen. However, \(\mathsf{dist}\) can be any constant and not necessarily has to be \(<<1\), and there is a tradeoff between regret and the amount of available local feasibility information \(B(a_{t},dist)\)._
**Remark 3**.: _For the unconstrained OCO, when at each step gradient information is available only at a single point, the best known algorithm when each \(f_{t}\) is smooth, and strongly convex, has \(R_{d}(\mathbf{x}^{\star})\leq O(V_{\mathbf{x}^{\star}})\)[12]. Note that \(V_{\mathbf{x}^{\star}}\) in the constrained and the unconstrained OCO problem are different, therefore directly we cannot compare our result with that of [12]. However, since functions \(g_{t}\) and \(f_{t}\) are allowed to be arbitrary with the constrained OCO, \(g_{t}=f_{t}\) for each \(t\) is a valid choice for \(g_{t}\) and \(f_{t}\). With \(g_{t}=f_{t}\), the constrained OCO collapses to the unconstrained OCO, for which the best known result on regret is \(O(V_{\mathbf{x}^{\star}})\), making the derived result (which also needs gradient availability at only one point) the best possible._
```
1:Input \(L_{f},L_{g},\mathcal{L}_{g},\mathsf{dist}>0\), feasible set \(\chi\subset\mathbb{R}^{n}\)
2:Initialize \(t=0\)
3:Choose action \(a_{1}\) arbitrarily belonging to \(\chi\)
4:while\(t\leq T-1\)do
5:\(t=t+1\)
6:if\(g_{t}(a_{t})<0\)then %previous action \(a_{t}\) was strictly feasible for \(g_{t}\)
7:\(\delta_{t}=||\frac{g_{t}(a_{t})}{2\mathcal{L}_{g}}||\)
8:if\(\delta_{t}\geq\mathsf{dist}\)then
9:\(I_{t}=\mathcal{B}(a_{t},\delta_{t})\)
10:\(a_{t+1}=\textsc{Optimize}(f_{t},I_{t},\frac{1}{2L_{f}},a_{t})\)
11:else
12: Find the feasible region \(\chi_{t}(a_{t})=\chi_{t}\cap\mathcal{B}(a_{t},\mathsf{dist})\)
13:\(I_{t}=\chi_{t}(a_{t})\)
14:\(a_{t+1}=\textsc{Optimize}(f_{t},I_{t},\frac{1}{2L_{f}},a_{t})\)
15:endif
16:elseif\(g_{t}(a_{t})=0\)then %\(a_{t}\) is on the boundary of the feasible region
17: Find the feasible region \(\chi_{t}(a_{t})=\chi_{t}\cap\mathcal{B}(a_{t},\mathsf{dist})\)
18:\(I_{t}=\chi_{t}(a_{t})\)
19:\(a_{t+1}=\textsc{Optimize}(f_{t},I_{t},\frac{1}{2L_{f}},a_{t})\)
20:elseif\(g_{t}(a_{t})>0\)then %\(a_{t}\) is infeasible
21:if\(\delta_{t}\geq||\nabla g_{t}(a_{t})\frac{1}{L_{g}}||\)then
22:\(a_{t+1}=a_{t}+\alpha(\hat{a}_{t}-a_{t})\), where
23:\(\hat{a}_{t}=a_{t}-\nabla g_{t}(a_{t})\frac{1}{L_{g}}\)
24:else
25: Find the feasible region \(\chi_{t}(a_{t})=\chi_{t}\cap\mathcal{B}(a_{t},\mathsf{dist})\)
26:if\(\chi_{t}(a_{t})\neq\emptyset\)then
27:\(I_{t}=\chi_{t}(a_{t})\)
28:\(a^{\prime}_{t}=\textsc{Proj}(a_{t},I_{t})\)
29:\(a_{t+1}=\textsc{Optimize}(f_{t},I_{t},\frac{1}{2L_{f}},a^{\prime}_{t})\)
30:\(a_{t+1}=a_{t}+\alpha(\hat{a}_{t}-a_{t})\), where \(\hat{a}_{t}=a_{t}-\nabla g_{t}(a_{t})\frac{\mathsf{dist}}{||\nabla g_{t}(a_{t})||}\) %\(\chi_{t}(a_{t})\) is empty
31:endif
32:endif
33:endif
34:endwhile
```
**Algorithm 1** Algorithm
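To summarize the case analysis, a compact Python sketch of a single step of Algorithm 1 is given below. It is a simplified rendering under stated assumptions: the local feasible set \(\chi_{t}(a_{t})\) is abstracted by an oracle proj_local that projects onto it (returning None when the set is empty), and the inner call mirrors Algorithm 2.

```python
import numpy as np

ALPHA = 0.5   # damping constant of subroutine Optimize, 0 < alpha < 1

def optimize(grad_f, proj_I, mu, x):
    """One step of Algorithm 2: damped projected gradient with step 1/mu."""
    return x + ALPHA * (proj_I(x - grad_f(x) / mu) - x)

def algorithm1_step(a, g_val, grad_g, grad_f, proj_local, Lf, Lg, lip_g, dist):
    """One step of Algorithm 1; proj_local projects onto chi_t(a_t) or returns None."""
    delta = abs(g_val) / (2.0 * lip_g)       # radius of a certified (in)feasible ball
    if g_val < 0:                            # case i): a_t strictly feasible
        if delta >= dist:                    # B(a_t, delta) is certified feasible
            def proj_ball(x, c=a.copy(), r=delta):
                d = np.linalg.norm(x - c)
                return x if d <= r else c + r * (x - c) / d
            return optimize(grad_f, proj_ball, 2.0 * Lf, a)
        return optimize(grad_f, proj_local, 2.0 * Lf, a)
    if g_val == 0:                           # case ii): a_t on the boundary
        return optimize(grad_f, proj_local, 2.0 * Lf, a)
    grad = grad_g(a)                         # case iii): a_t infeasible
    if delta >= np.linalg.norm(grad) / Lg:   # a full step on g_t stays infeasible
        return a + ALPHA * (-grad / Lg)
    a_proj = proj_local(a)
    if a_proj is not None:                   # nonempty local feasible set
        return optimize(grad_f, proj_local, 2.0 * Lf, a_proj)
    return a + ALPHA * (-grad * dist / np.linalg.norm(grad))  # chi_t(a_t) empty

# toy usage: g_t(x) = ||x||^2 - 1 with an infeasible start and empty local set
a = np.array([2.0, 0.0])
a_next = algorithm1_step(a, g_val=3.0, grad_g=lambda x: 2 * x,
                         grad_f=lambda x: 2 * x, proj_local=lambda x: None,
                         Lf=2.0, Lg=2.0, lip_g=4.0, dist=0.2)
print(a_next)   # moves a distance alpha*dist along -grad g_t
```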
Proof of Theorem 2.: Using the triangle inequality, we get that \(\sum_{t=1}^{T}||x_{t}^{\star}-a_{t}||\)
\[\leq||x_{1}^{\star}-a_{1}||+\sum_{t=2}^{T}||x_{t-1}^{\star}-a_{t}||+ \sum_{t=2}^{T}||x_{t}^{\star}-x_{t-1}^{\star}||,\] \[\stackrel{{(a)}}{{\leq}}||x_{1}^{\star}-a_{1}||+c \sum_{t=2}^{T}||x_{t-1}^{\star}-a_{t-1}||+\sum_{t=2}^{T}||x_{t}^{\star}-x_{t-1 }^{\star}||,\] \[\stackrel{{(b)}}{{\leq}}||x_{1}^{\star}-a_{1}||-c ||x_{T}^{\star}-a_{T}||+c\sum_{t=1}^{T}||x_{t}^{\star}-a_{t}||\] \[\qquad+\sum_{t=2}^{T}||x_{t}^{\star}-x_{t-1}^{\star}||, \tag{5}\]
where \((a)\) is obtained by using Theorem 1, while to obtain \((b)\) we added and subtracted \(c||a_{T}-x_{T}^{\star}||\) and rearranged terms.
```
1:Input\((h,I,\mu,x_{t})\)
2:Constant \(0<\alpha<1\), \(x_{t+1}=x_{t}+\alpha(\hat{x}_{t}-x_{t})\), where \[\hat{x}_{t}=\text{Proj}(x_{t}-\frac{1}{\mu}\nabla h(x_{t}),I).\]
**Algorithm 2** Optimize
Regrouping terms in (5), we get \(\sum_{t=1}^{T}||x_{t}^{\star}-a_{t}||\)
\[\leq\frac{||x_{1}^{\star}-a_{1}||-c||x_{T}^{\star}-a_{T}||}{1-c}+\frac{1}{1-c} \sum_{t=2}^{T}||x_{t}^{\star}-x_{t-1}^{\star}||. \tag{6}\]
Thus, using the Lipschitz property of \(f_{t}\) and \(g_{t}\), (6) implies that \(R_{d}(\mathbf{x}^{\star})=\sum_{t=1}^{T}||f_{t}(x_{t}^{\star})-f_{t}(a_{t})||\)
\[\leq\frac{L_{f}}{1-c}\sum_{t=2}^{T}||x_{t}^{\star}-x_{t-1}^{\star}||+\frac{D}{ 1-c}=O(V_{\mathbf{x}^{\star}})\]
and \(P_{g}(\mathbf{x}^{\star})=\sum_{t=1}^{T}||g_{t}(x_{t}^{\star})-g_{t}(a_{t})||\)
\[\leq\frac{L_{g}}{1-c}\sum_{t=2}^{T}||x_{t}^{\star}-x_{t-1}^{\star}||+\frac{D}{ 1-c}=O(V_{\mathbf{x}^{\star}}),\]
where \(V_{\mathbf{x}^{\star}}=\sum_{t=2}^{T}||x_{t}^{\star}-x_{t-1}^{\star}||\), the accumulated variation of the per-step minimizers.
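A direct Python rendering of subroutine Optimize is sketched below, with a Euclidean ball as an assumed example of the set \(I\); repeated application drives the iterate towards the constrained minimizer, in line with Lemma 3 stated below.

```python
import numpy as np

def optimize_step(grad_h, proj_I, mu, x, alpha=0.5):
    """Subroutine Optimize: damped projected gradient step with step size 1/mu."""
    x_hat = proj_I(x - grad_h(x) / mu)
    return x + alpha * (x_hat - x)

# example: h(x) = ||x||^2 over the ball I = B(center, r)
center, r = np.array([2.0, 2.0]), 1.0

def proj_ball(x):
    d = np.linalg.norm(x - center)
    return x if d <= r else center + r * (x - center) / d

x = np.array([2.5, 2.0])
for _ in range(50):                       # h has L_h = 2, so mu = 2 suffices
    x = optimize_step(lambda y: 2.0 * y, proj_ball, mu=2.0, x=x)
print(x)   # approaches the point of I closest to the origin
```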
Next, we first briefly discuss the basic difference between the proposed algorithm and the relevant prior work. In [18], a primal dual algorithm has been proposed using the Lagrangian
\[\mathsf{L}(x,\lambda)=f_{t}(x)+\lambda^{T}g_{t}(x)-\frac{\eta}{2}||\lambda||^{2},\]
where \(a_{t}\) is updated using the gradient descent over the Lagrangian to move towards the optimizer of \(f_{t}\) with penalty function \(\lambda\) as
\[a_{t+1}=a_{t}-\eta\nabla_{x}\mathsf{L}(x,\lambda),\]
while gradient ascent is used to increase the penalty in case of constraint violation as \(\lambda_{t+1}=\lambda_{t}+\eta\nabla_{\lambda}\mathsf{L}(x,\lambda)\).
Similarly, in [19], a primal dual algorithm is proposed where the increase in \(\lambda\) is derived by minimizing the expected 'drift' of the constraint violation. In particular, it is given by
\[\lambda_{t+1}=\max\{\lambda_{t}+\eta_{t}g_{t}(a_{t}),-\eta_{t}g_{t}(a_{t})\},\]
while
\[a_{t+1}=\arg\min_{x}\left(\nabla f_{t}^{T}(a_{t})(x-a_{t})+\mu_{t}||x-a_{t}||^{2}+[\lambda_{t}+\eta_{t}g_{t}(a_{t})]^{T}\eta_{t}g_{t}(x)\right).\]
Both these algorithms [18, 19] are long-term in the sense that they want to remain close to \(x_{t}^{\star}\) while minimizing the constraint violation penalty \(P_{g}^{\prime}(\mathbf{x}^{\star})\) in the long term, i.e., they nudge the updates 'slowly' in the direction of constraint satisfaction to avoid a large accumulated constraint violation penalty. In contrast, the proposed algorithm in this paper is local, and tries to get close to the optimal point in every single step, as shown in Lemmas 5, 6 and 7. Thus, conceptually, our algorithm is entirely different from [18, 19].
In terms of restrictions, over and above [18, 19], we assume that \(f_{t}\) and \(g_{t}\) are strongly convex, however in terms of information, we require far less. In particular, at time \(t\), after \(a_{t}\) has been chosen, Algorithm 1 requires only \(g_{t}(a_{t})\), \(\chi_{t}(a_{t})\) and \(\nabla f_{t}(x),\nabla g_{t}(x)\), at \(x=a_{t}\) or some \(x\in\chi_{t}(a_{t})\). In contrast, [19] assumes that once \(a_{t}\) is chosen, full \(f_{t},g_{t}\) are revealed, making \(x_{t}^{\star}\) known. Moreover, it requires the knowledge of the diameter \(D\). In [18], knowledge of \(V_{\mathbf{x}^{\star}},D,T\) is needed over and above \(\nabla f_{t}(a_{t}),\nabla g_{t}(a_{t}),g_{t}(a_{t})\).
In the rest of the paper, we prove Theorem 1, for which we need the following Lemma regarding the subroutine Optimize.
**Lemma 3**.: _[_12_]_ _If function \(h\) is \(\nu_{h}\)-strongly convex, and \(\nabla h\) is Lipschitz with parameter \(L_{h}\), and \(x^{I\star}=\arg\min_{x\in I}h(x)\), then if parameter \(\mu\geq L_{h}\), the output \(x_{t+1}\) from subroutine Optimize satisfies_
\[||x^{I\star}-x_{t+1}||\leq\mathsf{c}||x^{I\star}-x_{t}||, \tag{7}\]
_for \(\mathsf{c}=\left(1-\alpha\frac{\nu_{h}}{\mu}\right)^{1/2}<1\)._
**Corollary 4**.: _For subroutine Optimize, let \(\tilde{x}\in I\) be such that \(h(\tilde{x})<h(\hat{x}_{t})\), then with parameter \(\mu\geq L_{h}\), the output \(x_{t+1}\) from subroutine Optimize satisfies_
\[||\tilde{x}-x_{t+1}||\leq\mathsf{c}||\tilde{x}-x_{t}||, \tag{8}\]
_for \(\mathsf{c}=\left(1-\alpha\frac{\nu_{h}}{\mu}\right)^{1/2}<1\) as long as function \(h\) is \(\nu_{h}\)-strongly convex, and \(\nabla h\) is Lipschitz with parameter \(L_{h}\)._
Proof.: The only place where optimality of \(x^{I\star}\) is used in the proof of Lemma 3 in [12] is to show that \(h(\hat{x}_{t})>h(x^{I\star})\). Thus, the proof goes through as it is, even with this weaker condition that \(h(\tilde{x})<h(\hat{x}_{t})\). For completeness, the full proof is given
in Section V. Another way to see the result is by pruning \(I\) to get \(I^{\prime}\) such that \(\tilde{x}=\arg\min_{I^{\prime}}h(x)\), while keeping \(h\) a strongly convex function over \(I^{\prime}\). Thus, the result follows directly from Lemma 3.
For ease of exposition, we break the proof of Theorem 1 into three parts corresponding to \(g_{t}(a_{t})<0,g_{t}(a_{t})=0\), and \(g_{t}(a_{t})>0\) in the next three lemmas.
**Lemma 5**.: _When both \(f_{t},g_{t}\) are strongly convex and smooth for all \(t\leq T\) with information structure as defined, with Algorithm 1, for the case when \(g_{t}(a_{t})<0\)_
\[||x_{t}^{\star}-a_{t+1}||<c_{1}||x_{t}^{\star}-a_{t}||,\]
_where \(0<c_{1}=\max\{c_{2},c_{3}\}<1\) for \(c_{2}=\left(1-\frac{\alpha\nu_{f}}{2L_{f}}\right)^{1/2},c_{3}=\frac{D+\alpha \textsf{dist}}{D+\textsf{dist}}<1\) that does not depend on \(t\). Since \(L_{f}\geq\nu_{f}\) (always), \(c_{2}<1\)._
Proof.: Recall that \(\delta_{t}=||\frac{g_{t}(a_{t})}{2\mathcal{L}_{g}}||\).
Case a) \(\delta_{t}\geq\textsf{dist}\). In this case, \(I_{t}=\mathcal{B}(a_{t},\delta_{t})\) and \(I_{t}\subseteq\chi_{t}\) using the Lipschitz condition on \(g_{t}\).
Subroutine Optimize is executed with set \(I_{t}\) and starting point \(a_{t}\). The output of Subroutine Optimize is
\[a_{t+1}=a_{t}+\alpha(\hat{a}_{t}-a_{t}), \tag{9}\]
\[\hat{a}_{t}=\text{Proj}(a_{t}-\nabla f_{t}(a_{t})\frac{1}{2L_{f}},I_{t}).\]
Subcase a-i) \(\hat{a}_{t}\in\text{convex hull}(a_{t},x_{t}^{\star})\) (just the line segment connecting \(a_{t}\) and \(x_{t}^{\star}\)). If \(x_{t}^{\star}\in I_{t}\), then directly from Lemma 3, we get
\[||a_{t+1}-x_{t}^{\star}||\leq c_{2}||a_{t}-x_{t}^{\star}||, \tag{10}\]
where \(c_{2}=\left(1-\alpha\frac{\nu_{f}}{2L_{f}}\right)^{1/2}\) as we have chosen \(\mu=2L_{f}\).
Otherwise, if \(x_{t}^{\star}\notin I_{t}\), then \(\hat{a}_{t}\in\text{boundary}(I_{t})\) since \(f_{t}\) is strongly convex, \(\hat{a}_{t}\in\text{convexhull}(a_{t},x_{t}^{\star})\) and \(x_{t}^{\star}\notin I_{t}\). Thus, the distance between \(\hat{a}_{t}\) and \(a_{t}\) is at least \(\textsf{dist}\) since \(\delta_{t}\geq\textsf{dist}\), and the distance between \(a_{t+1}\) and \(a_{t}\) is at least \(\alpha\textsf{dist}\), while the distance between \(a_{t+1}\) and \(x_{t}^{\star}\) is at most \(D\) (the diameter). Thus, we get that
\[||a_{t+1}-x_{t}^{\star}||\leq c_{3}||a_{t}-x_{t}^{\star}||, \tag{11}\]
where \(c_{3}=\frac{D+\alpha\textsf{dist}}{D+\textsf{dist}}\).
Subcase a-ii) \(\hat{a}_{t}\notin\text{convex hull}(a_{t},x_{t}^{\star})\)
If \(x_{t}^{\star}\in I_{t}\), then directly from Lemma 3, we get that
\[||a_{t+1}-x_{t}^{\star}||\leq c_{2}||a_{t}-x_{t}^{\star}||, \tag{12}\]
as we have chosen \(\mu=2L_{f}\).
Thus, consider the case when \(x_{t}^{\star}\notin I_{t}\). Let \(I_{t}^{\prime}=\text{convex hull}(a_{t},\hat{a}_{t},x_{t}^{\star})\) where \(I_{t}^{\prime}\subseteq\chi_{t}\), i.e. full set \(I_{t}^{\prime}\) is feasible, since \(g_{t}\) is convex.
Now, consider that if Subroutine Optimize is executed with set \(I_{t}^{\prime}\) and the same starting point \(a_{t}\), the output of Subroutine Optimize will be the same as (9), since
\[\text{Proj}(a_{t}-\nabla f_{t}(a_{t})\frac{1}{2L_{f}},I_{t})=\text{ Proj}(a_{t}-\nabla f_{t}(a_{t})\frac{1}{2L_{f}},I_{t}^{\prime}),\]
irrespective of whether \(a_{t}-\nabla f_{t}(a_{t})\frac{1}{2L_{f}}\) belongs to \(I_{t}\) or not. However, since \(x_{t}^{\star}\in I_{t}^{\prime}\), we get from Lemma 3 that
\[||a_{t+1}-x_{t}^{\star}||\leq c_{2}||a_{t}-x_{t}^{\star}||. \tag{13}\]
An illustration of the basic idea of the proof when \(\tilde{a}_{t}\notin I_{t}\) is presented in Fig. 1.
Case b) \(\delta_{t}<\text{dist}\). Except for the choice of set \(I_{t}\), which is now \(\chi_{t}(a_{t})\), everything else is the same as in case a). Moreover, since \(\chi_{t}(a_{t})\subseteq\chi_{t}\) is feasible by definition, the same arguments as detailed in case a) apply, and we either get (11) or (13).
The two distinct choices of \(I_{t}\) are essentially made to speed up the algorithm. Always choosing \(I_{t}=\chi_{t}(a_{t})\) is sufficient for analysis.
Fig. 1: Illustration for the proof of Lemma 5, case a-ii), where the blue dashed triangle is \(I_{t}^{\prime}=\text{convex hull}(a_{t},\hat{a}_{t},x_{t}^{\star})\subseteq \chi_{t}\).

At this point it may be difficult to appreciate the power of Lemma 5. What Lemma 5 says is that irrespective of the size (however small) of the set \(I_{t}\) chosen by the algorithm, and independent of the distance of \(x_{t}^{\star}\) (however far) from \(I_{t}\), we get the relation (13), which states that the distance between the optimal point and the updated point \(a_{t+1}\) contracts by a fixed factor compared to the original point \(a_{t}\). The main tool that we are exploiting to prove Lemma 5 is both the strong convexity and the smoothness (the gradient being Lipschitz) of the function \(f_{t}\), and to some extent of \(g_{t}\). To gather more intuition, we consider a one-dimensional case in Figs. 2 and 3 to show how strong convexity together with smoothness implies that the contraction of the distance from the optimal point holds independently of the distances between the present point \(a_{t}\), the updated point \(a_{t+1}\), and the optimal point \(x_{t}^{\star}\).
In Fig. 2, for function \(f(x)\), which is assumed to be strongly convex and smooth, we consider that the feasible set is \(\chi_{1}=(-\infty,x_{1}]\) and \(x^{\star}=x_{1}\), while in Fig. 3 it is \(\chi_{2}=(-\infty,x_{2}]\) and \(x^{\star}=x_{2}\). Clearly, by construction, \(a_{t+1}\) remains the same when Optimize is executed with starting point \(a_{t}\), input function \(h=f\) with \(I=\chi_{1}\) or \(\chi_{2}\) and an identical choice of \(\mu\). Thus, from Lemma 3, we get that
\[||a_{t+1}-x_{1}||\leq c||a_{t}-x_{1}||. \tag{14}\]
as well as
\[||a_{t+1}-x_{2}||\leq c||a_{t}-x_{2}||. \tag{15}\]
for the same \(c<1\). Clearly, as \(x_{2}\) is moved sufficiently far away to the right, one does not expect (15) to hold together with (14). However, since \(f\) is both strongly convex and smooth, there is a limit on how far \(x_{2}\) can be compared to \(x_{1}\), before \(f\) starts to increase. This is the key reason behind both (14) and (15) to be true. Essentially, when \(f\) is both strongly convex and smooth, it is 'trapped' between a lower and an upper envelope.
In general, coming back to Lemma 5, because of the strong convexity and smoothness, the function \(f_{t}\) cannot continue to decrease beyond a point, and the estimate one gets for the contraction in (13) is an underestimate when \(x_{t}^{\star}\) is close to \(a_{t}\) and \(a_{t+1}\), while becomes tighter as \(x_{t}^{\star}\) is drawn away from \(a_{t}\) and \(a_{t+1}\).
**Lemma 6**.: _When both \(f_{t},g_{t}\) are strongly convex and smooth for all \(t\leq T\) with information structure as defined, with Algorithm 1, for the case when \(g_{t}(a_{t})=0\)_
\[||x_{t}^{\star}-a_{t+1}||<c_{2}||x_{t}^{\star}-a_{t}||.\]
Proof.: When \(g_{t}(a_{t})=0\), \(a_{t}\) is on the boundary, and the chosen set is \(I_{t}=\chi_{t}(a_{t})\) which by definition is feasible. Thus, the analysis is identical to that of Lemma 5, and we get the same relation as in Lemma 5 as required.
Next, we consider the final case when \(g(a_{t})>0\), which is the most involved of the lot.
**Lemma 7**.: _When both \(f_{t},g_{t}\) are strongly convex and smooth for all \(t\leq T\), and for all \(1\leq t\leq T\), \(\sup_{x\in\chi}||\nabla f_{t}(x)||\leq G\) and \(\sup_{x\in\chi}||\nabla g_{t}(x)||\leq G\), with information structure as defined, with Algorithm 1, for the case when \(g_{t}(a_{t})>0\)_
\[||x_{t}^{\star}-a_{t+1}||<c_{6}||x_{t}^{\star}-a_{t}||,\]
_where \(c_{6}=\max\{c_{2},c_{3},c_{4},c_{5}\}<1\) for \(c_{4}=(1-\alpha\nu_{g}/L_{g})^{1/2}\) and \(c_{5}=\left(1-\alpha\frac{\nu_{g}}{\max\{G/\mathsf{dist},L_{g}\}}\right)^{1/2}\)._
For proving Lemma 7, we will use the strong convexity of \(g_{t}\) as well as \(f_{t}\).
Proof.: Case a) \(\delta_{t}=||\frac{g_{t}(a_{t})}{2\mathcal{L}_{g}}||\geq||\nabla g_{t}(a_{t} )\frac{1}{L_{g}}||\), in which case the update is
\[a_{t+1}=a_{t}+\alpha(\hat{a}_{t}-a_{t}), \tag{16}\]
where
\[\hat{a}_{t}=a_{t}-\nabla g_{t}(a_{t})\frac{1}{L_{g}}.\]
Since \(\delta_{t}=||\frac{g_{t}(a_{t})}{2\mathcal{L}_{g}}||\geq||\nabla g_{t}(a_{t} )\frac{1}{L_{g}}||\), the Lipschitz condition on \(g_{t}\) implies that \(\mathcal{B}(a_{t},\delta_{t})\cap\chi_{t}=\emptyset\). Thus, \(g_{t}(\hat{a}_{t})>0\), i.e. \(\hat{a}_{t}\) is still infeasible for \(g_{t}\), and we want to show that
\[||x_{t}^{\star}-a_{t+1}||^{2}\leq c||x_{t}^{\star}-a_{t}||^{2}, \tag{17}\]
for some fixed constant \(c<1\) that does not depend on \(t\). Towards this end, we will exploit the strong convexity of \(g_{t}\).
We will connect the update (16) with an update Subroutine Optimize would make on a suitable initial point \(x_{t}\), function \(h\), step size \(\mu\), and a feasible set \(I\). Recall that \(x_{t}^{\star}=\arg\min_{x\in\chi_{t}}f_{t}(x)\). Consider a new set \(I_{t}^{\prime}=\text{convex hull}(a_{t},\hat{a}_{t},x_{t}^{\star})\), where \(a_{t}\) and \(\hat{a}_{t}\) are as defined in (16). As discussed above, both \(g_{t}(a_{t})>0\) and \(g_{t}(\hat{a}_{t})>0\). It is important to note that \(x_{t}^{\star}\) is not necessarily equal to \(\arg\min_{x\in I_{t}^{\prime}}g_{t}(x)\). However, \(g_{t}(x_{t}^{\star})<g_{t}(\hat{a}_{t})<g_{t}(a_{t})\) since \(g_{t}(x_{t}^{\star})\leq 0\), while \(g_{t}(a_{t})>0\) and \(g_{t}(\hat{a}_{t})>0\), and \(g_{t}(\hat{a}_{t})<g_{t}(a_{t})\) since \(-\nabla g_{t}(a_{t})\) is a descent direction for \(g_{t}\).
Consider the update \(x_{t+1}\) which Subroutine Optimize will make if the initial/starting point \(x_{t}=a_{t}\), the set \(I=I_{t}^{\prime}\) with step size \(\mu=L_{g}\) and \(h=g_{t}\). Since \(\hat{a}_{t}\in I_{t}^{\prime}\), \(\hat{a}_{t}=\text{Proj}(\hat{a}_{t},I_{t}^{\prime})\). Hence from Subroutine Optimize we get that
\[\hat{x}_{t}=\text{Proj}(a_{t}-\nabla g_{t}(a_{t})\frac{1}{L_{g}}, I_{t}^{\prime})=a_{t}-\nabla g_{t}(a_{t})\frac{1}{L_{g}}=\hat{a}_{t}, \tag{18}\] \[x_{t+1}=a_{t}+\alpha(\hat{x}_{t}-a_{t}), \tag{19}\]
coinciding with (16). Thus, the update of the algorithm (16) is equivalent to executing Subroutine Optimize with starting point \(x_{t}=a_{t}\), set \(I=I_{t}^{\prime}\) with step size \(\mu=L_{g}\), for function \(h=g_{t}\). So we would like to use Lemma 3. However, since \(x_{t}^{\star}\) need not be \(\arg\min_{x\in I_{t}^{\prime}}g_{t}(x)\), we cannot use Lemma 3 directly. Instead we exploit the fact that \(g_{t}(x_{t}^{\star})<g_{t}(\hat{a}_{t})<g_{t}(a_{t})\). Hence Corollary 4 becomes applicable, and we get that
\[||a_{t+1}-x_{t}^{\star}||\leq c_{4}||a_{t}-x_{t}^{\star}||, \tag{20}\]
with \(c_{4}=(1-\alpha\nu_{g}/L_{g})^{1/2}\) since we have chosen \(\mu=L_{g}\), inverse of the step size in Subroutine Optimize.
Case b) \(\delta_{t}=||\frac{g_{t}(a_{t})}{2\mathcal{L}_{g}}||<||\nabla g_{t}(a_{t})\frac{1}{L_{g}}||\)
In this case, we have no sufficiently sized estimate of the infeasible region around \(a_{t}\). Thus, we will exploit the strong convexity and smoothness of \(f_{t}\), as follows.
b-i) Let \(\chi_{t}(a_{t})\neq\emptyset\). In this case, \(I_{t}=\chi_{t}(a_{t})\), \(a_{t}^{\prime}=\text{Proj}(a_{t},I_{t})\) and \(a_{t+1}=\text{\sc{Optimize}}(f_{t},I_{t},\frac{1}{2L_{f}},a_{t}^{\prime})\).
Subcase b-i-i) Let \(I_{t}=\chi_{t}(a_{t})\subseteq\text{int}(\mathcal{B}(a_{t},\textsf{dist}))\) which implies that \(x_{t}^{\star}\in I_{t}\). Recall that by definition, \(a_{t}^{\prime}=\text{Proj}(a_{t},I_{t})\). Thus, with \(x_{t}^{\star}\in I_{t}\), we get directly from Lemma 3 that
\[||a_{t+1}-x_{t}^{\star}||\leq c_{2}||a_{t}^{\prime}-x_{t}^{\star}||\leq c_{2}|| a_{t}-x_{t}^{\star}||,\]
where the last inequality follows since \(a_{t}\notin I_{t}\).
Subcase b-i-ii) Let \(I_{t}=\chi_{t}(a_{t})\not\subset\mathcal{B}(a_{t},\textsf{dist})\). In this sub-case, we get that \(g_{t}(a_{t}^{\prime})=0\) and is identical to the case considered in Lemma 6, except the starting point is \(a_{t}^{\prime}\) instead of \(a_{t}\). Thus, similar to (13), we get the first inequality
\[||x_{t}^{\star}-a_{t+1}||\leq\max\{c_{2},c_{3}\}||x_{t}^{\star}-a_{t}^{\prime }||\leq\max\{c_{2},c_{3}\}||x_{t}^{\star}-a_{t}||, \tag{21}\]
where the second inequality follows since \(a_{t}\notin I_{t}\).
Case b-ii) Let \(\chi_{t}(a_{t})=\emptyset\). In this case, the update is
\[a_{t+1}=a_{t}+\alpha(\hat{a}_{t}-a_{t}), \tag{22}\]
where \(\hat{a}_{t}=a_{t}-\nabla g_{t}(a_{t})\,\frac{\textsf{dist}}{||\nabla g_{t}(a_{t})||}\). Since \(\chi_{t}(a_{t})=\mathcal{B}(a_{t},\textsf{dist})\cap\chi_{t}\) is empty, \(g_{t}(\hat{a}_{t})>0\). Thus, we can exploit the strong convexity and smoothness of \(g_{t}\) as in case a).
Given the assumption that \(||\nabla g_{t}(a_{t})||\leq G\), we get that (22) is equivalent to executing Optimize with set \(I_{t}=\text{convex hull}(a_{t},\hat{a}_{t},x_{t}^{*})\), \(h=g_{t}\), and \(\mu=\max\{G/\textsf{dist},L_{g}\}\). Thus, similar to (20), since \(g_{t}(x_{t}^{*})<g_{t}(\hat{a}_{t})<g_{t}(a_{t})\), we get
\[||a_{t+1}-x_{t}^{*}||\leq c_{5}||a_{t}-x_{t}^{*}||, \tag{23}\]
where \(c_{5}=\left(1-\alpha\frac{\nu_{g}}{\max\{G/\textsf{dist},L_{g}\}}\right)^{1/2}\).
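For concreteness, the single step of Subroutine Optimize invoked repeatedly in this proof, a projection of a gradient step followed by an \(\alpha\)-averaging, can be sketched as follows. This is a minimal illustration only: the helper names and the toy problem are ours, and `proj` stands for an assumed Euclidean projection oracle onto the set \(I\).

```python
import numpy as np

def optimize_step(grad_h, x, proj, mu, alpha):
    """One step of Subroutine Optimize as used in the proof:
    x_hat = Proj(x - grad_h(x) / mu, I), then x+ = x + alpha * (x_hat - x)."""
    x_hat = proj(x - grad_h(x) / mu)
    return x + alpha * (x_hat - x)

# Toy run: h(x) = ||x||^2 over the unit ball I = {x : ||x|| <= 1}.
grad_h = lambda x: 2.0 * x
proj = lambda y: y if np.linalg.norm(y) <= 1.0 else y / np.linalg.norm(y)
x = np.array([2.0, 1.0])
for _ in range(20):
    x = optimize_step(grad_h, x, proj, mu=2.0, alpha=0.5)
print(x)  # contracts geometrically toward the minimizer, as in Lemma 3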
## IV Conclusions
In this paper, we considered a constrained OCO problem and provided the best simultaneously achievable bounds for the regret and the constraint violation penalty when both the loss function and the function defining the constraint are strongly convex and smooth. Compared to prior work, we proposed an algorithm that has better regret and penalty bounds while requiring significantly less information about the loss function and the function defining the constraints. Extending these results to the case when the respective functions are merely convex, rather than strongly convex, remains open.
|
2310.19382 | Volterra black-box models identification methods: direct collocation vs
least squares | The Volterra integral-functional series is the classic approach for nonlinear
black box dynamical systems modeling. It is widely employed in many domains
including radiophysics, aerodynamics, electronic and electrical engineering and
many others. Identifying the time-varying functional parameters, also known as
Volterra kernels, poses a difficulty due to the curse of dimensionality. This
refers to the exponential growth in the number of model parameters as the
complexity of the input-output response increases. The least squares method
(LSM) is widely acknowledged as the standard approach for tackling the issue of
identifying parameters. Unfortunately, the LSM suffers from many drawbacks such
as sensitivity to outliers causing biased estimation, multicollinearity,
overfitting and inefficiency with large datasets. This paper presents an
alternative approach based on direct estimation of the Volterra kernels using
the collocation method. Two model examples are studied. It is found that the
collocation method presents a promising alternative for optimization,
surpassing the traditional least squares method when it comes to the Volterra
kernels identification including the case when input and output signals suffer
from considerable measurement errors. | Denis Sidorov, Aleksandr Tynda, Vladislav Muratov, Eugeny Yanitsky | 2023-10-30T09:40:17Z | http://arxiv.org/abs/2310.19382v1 | # Volterra black-box models identification methods: direct collocation vs least squares
###### Abstract
The Volterra integral-functional series is the classic approach for nonlinear black box dynamical systems modeling. It is widely employed in many domains including radiophysics, aerodynamics, electronic and electrical engineering and many others. Identifying the time-varying functional parameters, also known as Volterra kernels, poses a difficulty due to the curse of dimensionality. This refers to the exponential growth in the number of model parameters as the complexity of the input-output response increases. The least squares method (LSM) is widely acknowledged as the standard approach for tackling the issue of identifying parameters. Unfortunately, the LSM suffers from many drawbacks such as sensitivity to outliers causing biased estimation, multicollinearity, overfitting and inefficiency with large datasets. This paper presents an alternative approach based on direct estimation of the Volterra kernels using the collocation method. Two model examples are studied. It is found that the collocation method presents a promising alternative for optimization, surpassing the traditional least squares method when it comes to the Volterra kernels identification, including the case when input and output signals suffer from considerable measurement errors.
Volterra series collocation method kernels identification Chebyshev polynomials memory effects
## 1 Introduction
At the current stage of development of wireless technologies such as 5G/6G communication networks based on antenna arrays with digital beam forming (Massive Multiple Input Multiple Output systems), it is impossible to do without digital signal processing algorithms such as digital correction of nonlinear distortion, DPD (Digital Predistortion). Nonlinear distortions of the signal occurring inside the transceiver path strongly distort the spectrum of this signal, as shown in Fig. 1, where the distorted spectrum is shown in blue and the main signal in red.
However, international wireless standards such as 3GPP and ETSI impose strict requirements on the spectral power of the radiated signal. The use of digital nonlinear distortion correction algorithms makes it possible to meet the requirements of the standards and at the same time positively affects the overall efficiency, that is, the energy consumption of the entire signal receiving and transmitting system. There are different approaches to the implementation of such algorithms: purely digital, analog, and mixed. One of them, a purely mathematical approach to the description of nonlinear distortions, is described below. First, let us consider the general statement of the digital correction (DPD) problem with the structure of the correction model shown in Fig. 2.
Here \(F_{DPD}(.)\) is a nonlinear operator reflecting the essence of nonlinear correction, which we represent as some function depending on parameters \(\vec{W}=[w_{1},...,w_{p}]^{T},\vec{W}\in\mathbf{C}^{p}\). \(F_{PA}(.)\) is a nonlinear operator identified with a nonlinear device, generating some complex vector \(\vec{Y}=[y_{1},...,y_{n}]^{T},\vec{Y}\in\mathbf{C}^{n}\); we also define a vector over the complex field \(\vec{Y_{d}}=[y_{d,1},...,y_{d,n}]^{T},\vec{Y_{d}}\in\mathbf{C}^{n}\) on which the operator \(F_{DPD}(.)\) depends. By the error \(E\in\mathbf{C}^{n}\) we mean the difference between the vectors \(Y\) and \(Y_{d}\)
\[\vec{E}=\vec{Y_{d}}-\vec{Y}.\]
Then we can formulate the requirement for determining the parameters \(\vec{W}\) as follows: \(\vec{\omega}=\arg\min_{W}\left\|E\right\|^{2}\), where \(\left\|.\right\|\) is the Euclidean norm. Considering \(Y=F_{PA}(F_{DPD}(Y_{d}))\), the expression introduced above can be rewritten as
\[\vec{\omega}=\operatorname*{arg\,min}_{W}\left\|Y_{d}-F_{PA}(F_{DPD}(Y_{d})) \right\|^{2}.\]
This equation constitutes the DPD (Digital Predistortion) task. Here we can highlight several important sub-tasks, which are themselves quite complex both theoretically and computationally:
a) Since we have in fact formulated a function approximation problem, we need to derive the analytical regression dependence of \(F_{DPD}(.)\) on the parameters \(\vec{W}\). The quality of the nonlinear distortion correction will depend on how this function is defined;
Figure 1: Power spectrum density [1]
Figure 2: Digital correction scheme
b) The procedure of searching for the parameters \(\vec{W}\) is a classical optimization problem, namely a linear or nonlinear regression with respect to the parameters \(\vec{W}\). Finding efficient methods of convex or non-convex optimization is one of the major challenges here;
c) Compression of the function \(F_{DPD}(.)\), i.e. reducing its computational complexity.
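Sub-tasks a) and b) can be made concrete in a few lines. The sketch below is ours, not the paper's: the memoryless cubic predistorter and the soft-compression amplifier model are stand-in assumptions, used only to show the objective \(\vec{\omega}=\arg\min_{W}\|Y_{d}-F_{PA}(F_{DPD}(Y_{d}))\|^{2}\) being handed to a generic optimizer.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
y_d = rng.normal(size=200) + 1j * rng.normal(size=200)   # desired signal

def f_pa(y):       # stand-in nonlinear device (assumed, for illustration)
    return y - 0.2 * y * np.abs(y) ** 2

def f_dpd(y, w):   # stand-in memoryless cubic predistorter (assumed)
    return w[0] * y + w[1] * y * np.abs(y) ** 2

def cost(w_real):  # pack two complex weights as four real numbers
    w = w_real[0::2] + 1j * w_real[1::2]
    return np.sum(np.abs(y_d - f_pa(f_dpd(y_d, w))) ** 2)

res = minimize(cost, x0=np.array([1.0, 0.0, 0.0, 0.0]), method="Nelder-Mead")
print(res.x, cost(res.x))
```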
One of the methods to solve sub-task a) for the DPD problem is the Volterra functional series. It is also the conventional tool to characterize complex nonlinear dynamics in various fields including radiophysics, mechanical engineering, electronic and electrical engineering, and energy sciences (here readers may refer, e.g., to the review [2]). Volterra series are widely employed to represent the input-output relationship of nonlinear dynamical systems with memory. Volterra power series are among the best-understood nonlinear system representations in signal processing. Such integral functional series (also called Frechet-Volterra series) (1)
\[y(t)=F(x(t)):=\int\limits_{0}^{t}K_{1}(s)x(t-s)\,ds+\int\limits_{0}^{t}\int \limits_{0}^{t}K_{2}(s_{1},s_{2})x(t-s_{1})x(t-s_{2})\,ds_{1}ds_{2}+\ldots \tag{1}\]
\[\cdots+\int\limits_{0}^{t}\int\limits_{0}^{t}\cdots\int\limits_{0}^{t}K_{n}(s_{1},s_{2},\ldots,s_{n})x(t-s_{1})x(t-s_{2})\ldots x(t-s_{n})\,ds_{1}ds_{2}\ldots ds_{n}+\ldots,\quad t\in[0,T]\]
were first proposed by Maurice Frechet for the representation of continuous nonlinear dynamical systems [3, 4]. Here readers may also refer to the overview [5] and the monograph [6] for more details on the relevant Lyapunov-Liechtenstein operator and Lyapunov-Schmidt methods in the theory of nonlinear equations.
The role of a reproducing kernel Hilbert space in the development of a unifying view of the Volterra theory and polynomial kernel regression is presented in [7].
In (1), \(x(t)\) is the input signal and \(y(t)\) is the output of a single-input single-output (SISO) nonlinear system, and \(K_{n}(s_{1},s_{2},\ldots,s_{n})\) are the multidimensional Volterra kernels (or transfer functions) to be identified based on the nonlinear system's response \(y(t)\) as a reaction to the input \(x(t)\) (Fig. 3). It is to be noted that for the basic case \(n=1\) we have a conventional Finite Impulse Response (FIR) linear model, which is optimal in the least-squares sense.
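In discrete time, a truncation of this series at second order reduces to nested convolution sums over a finite memory depth; the following sketch (our illustration, with arbitrary toy kernels) makes the FIR structure of the \(n=1\) term explicit.

```python
import numpy as np

def volterra2(x, h1, h2):
    """Discrete truncated Volterra model, orders 1 and 2:
    y[t] = sum_s h1[s] x[t-s] + sum_{s1,s2} h2[s1,s2] x[t-s1] x[t-s2]."""
    M = len(h1)                                  # memory depth; h2 is M x M
    y = np.zeros(len(x))
    for t in range(len(x)):
        past = np.array([x[t - s] if t >= s else 0.0 for s in range(M)])
        y[t] = h1 @ past + past @ h2 @ past      # FIR term + bilinear term
    return y

x = np.sin(20.0 * np.linspace(0.0, 1.0, 100))
h1 = np.exp(-np.arange(5.0))                     # toy first-order kernel
h2 = 0.1 * np.outer(h1, h1)                      # toy second-order kernel
print(volterra2(x, h1, h2)[:5])
```

Identifying `h1` and `h2` (or their continuous counterparts) from an input-output pair is precisely the task addressed in the remainder of the paper.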
Frechet theorem [3] generalises the famous Weierstrass approximation theorem which characterizes the set of continuous functions on a compact interval via uniform approximation by algebraic polynomials.
Power series (1) characterize stationary dynamical systems. Stationarity here means that the transfer functions do not vary during the transient process as \(t\in[0,T]\). The more general power series (2) models nonstationary dynamics, where the transfer functions depend explicitly on time \(t\)
\[y(t)=\int\limits_{0}^{t}K_{1}(t,s)x(s)\,ds+\int\limits_{0}^{t}\int\limits_{0} ^{t}K_{2}(t,s_{1},s_{2})x(s_{1})x(s_{2})\,ds_{1}ds_{2}+\ldots \tag{2}\]
\[\cdots+\int\limits_{0}^{t}\int\limits_{0}^{t}\cdots\int\limits_{0}^{t}K_{n}( t,s_{1},s_{2},\ldots,s_{n})x(s_{1})x(s_{2})\ldots x(s_{n})\,ds_{1}ds_{2} \ldots ds_{n}+\ldots\ t\in[0,T].\]
The Volterra series is an essential tool for mathematical modeling of the nonlinear dynamical systems appearing in the digital pre-distortion (DPD) iterative process [8]. DPD, as described above, is an important part of the digital signal processing algorithms used in transmitters and receivers. Several methods have been studied for DPD, with Volterra series-based methods being popular due to their ease of implementation and the straightforward interpretation of their
Figure 3: Behavioral modeling of the black box system
nonlinear terms. The key issue with Volterra series is the curse of dimensionality: as the order of the series increases, the number of terms involved in the expansion grows exponentially, making it computationally demanding. On the other hand, estimating the functional coefficients (Volterra kernels) of the Volterra integral functional series can be challenging. The series is often considered in its discrete form and requires a significant amount of data and complex optimization algorithms to find the best fit for the model coefficients. An alternative approach, based on reducing the problem to the solution of multidimensional integral equations [10; 11], needs specially designed probe signals.
In the present paper, an alternative approach to the Volterra kernels identification is proposed using the direct collocation method. The results are compared with the conventional least squares method (LSM), widely employed for the Volterra series identification problem in the telecommunication domain.
The rest of the paper is structured as follows. Section 2 provides the problem statement, Section 3 focuses on the collocation method, Section 4 describes the least squares method, Section 5 carries out the numerical experiments, and Section 6 presents concluding remarks and future work.
## 2 Identification problem statement
Let us consider the following segment of the truncated Volterra series (1) for \(n=2\)
\[y(t)=\int\limits_{0}^{t}K_{1}(s)x(t-s)\,ds+\int\limits_{0}^{t}\int\limits_{0}^ {t}K_{2}(s_{1},s_{2})x(t-s_{1})x(t-s_{2})\,ds_{1}ds_{2},\ t\in[0,T]. \tag{3}\]
Our problem in this section is to determine the kernels \(K_{1}(s)\) and \(K_{2}(s_{1},s_{2})\) from a known input-output pair \(\big{(}x(t),y(t)\big{)}\).
In contrast to the linear case \(n=1\), when it is sufficient to specify a single pair \(\big{(}x(t),y(t)\big{)}\) to determine the kernel \(K_{1}(s)\), in the nonlinear case \(n=2\) the unique identification of the two-dimensional kernel \(K_{2}(s_{1},s_{2})\) requires a two-dimensional continuum of equalities. This means that problem (3) has an infinite set of solutions.
**Remark A1**: _It should be noted that if we consider this problem as an integral equation with two unknown functions \(K_{1}(s)\) and \(K_{2}(s_{1},s_{2})\), then it is essentially ill-posed: there are an infinite number of solutions, and the problem is insufficiently determined. In this regard, no classical numerical methods designed for integral equations are applicable here. As a result, there are no attempts in the literature to solve the problem in this form._
**Remark A2**: _A fundamentally different situation takes place in the problem of determining an unknown input signal \(x(t)\) from a known output signal \(y(t)\) after the kernels have been identified. It is to be noted that in this case we face the problem of solving nonlinear Volterra integral equations. Here readers may refer to sec. 9 of the book [11], the papers [12], [13] and references therein regarding the Kantorovich main solutions and the blow-up phenomenon._
Within the framework of this paper, from a practical point of view, we will be satisfied with any pair of approximately found kernels \(\widetilde{K}_{1}(s)\) and \(\widetilde{K}_{2}(s_{1},s_{2})\) that provides a sufficiently small residual norm
\[\varepsilon=\max\limits_{t\in[0,T]}\left|y(t)-\int\limits_{0}^{t}\widetilde{K }_{1}(s)x(t-s)\,ds-\int\limits_{0}^{t}\int\limits_{0}^{t}\widetilde{K}_{2}(s _{1},s_{2})x(t-s_{1})x(t-s_{2})\,ds_{1}ds_{2}\right|. \tag{4}\]
Denote by \(B_{i}(t),\ i=0,1,2,\dots,\) the basis functions forming a complete orthogonal system of functions on the segment \([0,T]\).
We look for an approximate solution of problem (3) in the form of truncated series expansions in the selected system of basis functions
\[\widetilde{K}_{1,m}(s)=\sum\limits_{i=0}^{m-1}A_{i}B_{i}(s),\quad\widetilde{K }_{2,m_{1},m_{2}}(s_{1},s_{2})=\sum\limits_{i=0}^{m_{1}-1}\sum\limits_{j=0}^{ m_{2}-1}C_{ij}B_{i}(s_{1})B_{j}(s_{2}). \tag{5}\]
## 3 Collocation method
Collocation-type methods are widely used in the discretization of various kinds of integro-functional equations [14]. While offering sufficiently good accuracy and stability, they are also computationally less expensive than projection methods of Galerkin type, which require additional integration [15].
In order to determine the unknown coefficients \(A_{i}\) and \(C_{ij}\), we introduce a uniform grid of nodes
\[t_{k}\in[0,T],\ k=0,1,\ldots,N, \tag{6}\]
where \(N+1\) is the number of nodes.
Substituting (5) into (3), we demand that the equalities be fulfilled at the points (6)
\[y(t_{k})=\int\limits_{0}^{t_{k}}\widetilde{K}_{1,m}(s)x(t_{k}-s)\,ds+\int \limits_{0}^{t_{k}}\int\limits_{0}^{t_{k}}\widetilde{K}_{2,m_{1},m_{2}}(s_{1},s_{2})x(t_{k}-s_{1})x(t_{k}-s_{2})\,ds_{1}ds_{2},\ k=\overline{0,N}. \tag{7}\]
For simplicity, denote \(y(t_{k})=y_{k}\) and transform the last equalities as follows
\[y_{k}=\sum\limits_{i=0}^{m-1}A_{i}\int\limits_{0}^{t_{k}}B_{i}(s)x(t_{k}-s)\, ds+\sum\limits_{i=0}^{m_{1}-1}\sum\limits_{j=0}^{m_{2}-1}C_{ij}\int\limits_{0}^{t_ {k}}\int\limits_{0}^{t_{k}}B_{i}(s_{1})B_{j}(s_{2})x(t_{k}-s_{1})x(t_{k}-s_{2} )\,ds_{1}ds_{2}. \tag{8}\]
As a system of basis functions \(B_{i}(t),\ i=0,1,\ldots\), we choose Chebyshev polynomials of the first kind
\[T_{0}(t)=1,\ T_{1}(t)=t,\ T_{i+1}(t)=2tT_{i}(t)-T_{i-1}(t),\ i=1,2,\ldots. \tag{9}\]
Since these polynomials are orthogonal on the segment \([-1,1]\), we apply a linear mapping to the segment \([0,T]\).
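A minimal sketch of this basis evaluation (our own helper, not code from the paper): the recurrence (9) applied after the affine map from \([0,T]\) onto \([-1,1]\).

```python
import numpy as np

def chebyshev_basis(t, m, T=1.0):
    """Values B_0(t), ..., B_{m-1}(t) of the first-kind Chebyshev
    polynomials, with [0, T] mapped linearly onto [-1, 1]."""
    s = 2.0 * np.asarray(t) / T - 1.0        # affine map [0, T] -> [-1, 1]
    B = [np.ones_like(s), s]
    for i in range(1, m - 1):
        B.append(2.0 * s * B[i] - B[i - 1])  # T_{i+1} = 2 t T_i - T_{i-1}
    return np.array(B[:m])

print(chebyshev_basis(np.linspace(0.0, 1.0, 5), m=4))
```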
The controlled residual norm corresponding to the selected values of \(m\), \(m_{1}\) and \(m_{2}\) takes the form
\[\varepsilon_{N}=\max\limits_{t\in[0,T]}\left|y(t)-\sum\limits_{i= 0}^{m-1}A_{i}\int\limits_{0}^{t}B_{i}(s)x(t-s)\,ds-\right.\\ -\left.\sum\limits_{i=0}^{m_{1}-1}\sum\limits_{j=0}^{m_{2}-1}C_{ ij}\int\limits_{0}^{t}\int\limits_{0}^{t}B_{i}(s_{1})B_{j}(s_{2})x(t-s_{1})x(t-s_{2} )\,ds_{1}ds_{2}\right| \tag{10}\]
Let us set \(N=m+m_{1}m_{2}-1\). The number of equalities (the number of nodes in the grid) then equals the number of unknown coefficients.
Thus, we have the following system of linear algebraic equations
\[y_{k}=\sum\limits_{i=0}^{m-1}A_{i}\beta_{ik}+\sum\limits_{i=0}^{m_{1}-1}\sum \limits_{j=0}^{m_{2}-1}C_{ij}\gamma_{ijk},\quad k=\overline{0,m+m_{1}m_{2}-1}, \tag{11}\]
with respect to the unknown coefficients \(A_{i},\ i=0,1,\ldots,m-1\) and \(C_{ij},\ i=0,1,\ldots,m_{1}-1,\ j=0,1,\ldots,m_{2}-1\). Here
\[\beta_{ik}=\int\limits_{0}^{t_{k}}B_{i}(s)\,x(t_{k}-s)\,ds,\quad\gamma_{ijk}= \int\limits_{0}^{t_{k}}\int\limits_{0}^{t_{k}}B_{i}(s_{1})B_{j}(s_{2})x(t_{k} -s_{1})x(t_{k}-s_{2})\,ds_{1}ds_{2}. \tag{12}\]
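A sketch of the assembly and solution of system (11) for given signals (our own code; the integrals (12) are approximated by Gauss-Legendre quadrature via scipy instead of the analytical integration used in the paper, and `x`, `y` are assumed to be vectorized callables).

```python
import numpy as np
from scipy.integrate import fixed_quad

def cheb(i, s, T=1.0):
    """First-kind Chebyshev polynomial T_i, with [0, T] mapped to [-1, 1]."""
    return np.cos(i * np.arccos(np.clip(2.0 * s / T - 1.0, -1.0, 1.0)))

def collocation_solve(x, y, m, T=1.0):
    """Assemble and solve system (11) with m = m1 = m2. Since the double
    integral in (12) factorizes over the square [0, t_k]^2, one has
    gamma_ijk = beta_ik * beta_jk; columns (i, j) and (j, i) therefore
    coincide and the matrix is rank-deficient (cf. Remark A1), so we take
    the minimum-norm least-squares solution."""
    N = m + m * m - 1
    t = np.linspace(0.0, T, N + 1)           # uniform nodes t_k = k T / N
    M = np.zeros((N + 1, N + 1))
    for k, tk in enumerate(t):
        beta = np.array([fixed_quad(lambda s: cheb(i, s, T) * x(tk - s),
                                    0.0, tk, n=60)[0] for i in range(m)])
        M[k, :m] = beta
        M[k, m:] = np.outer(beta, beta).ravel()
    coeffs = np.linalg.lstsq(M, y(t), rcond=None)[0]
    return coeffs[:m], coeffs[m:].reshape(m, m)   # A_i and C_ij

A, C = collocation_solve(lambda u: np.sin(20.0 * u), lambda t: 0.05 * t, m=3)
```

The factorization of (12) used above is what makes the assembly cheap: each row requires only \(m\) one-dimensional integrals rather than \(m+m_{1}m_{2}\) separate ones.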
## 4 Least-square method
Let us now take \(N>m+m_{1}m_{2}-1\). We then have a situation where the number of equalities is larger than the number of unknown coefficients \(A_{i}\) and \(C_{ij}\). Thus we have the following overdetermined system of linear equations with respect to the unknown coefficients \(A_{i},\ i=0,1,\ldots,m-1\) and \(C_{ij},\ i=0,1,\ldots,m_{1}-1,\ j=0,1,\ldots,m_{2}-1\):
\[y_{k}=\sum\limits_{i=0}^{m-1}A_{i}\beta_{ik}+\sum\limits_{i=0}^{m_{1}-1}\sum \limits_{j=0}^{m_{2}-1}C_{ij}\gamma_{ijk}, \tag{13}\]
where
\[\beta_{ik}=\int\limits_{0}^{t_{k}}T_{i}(s)x(t_{k}-s)\,ds,\quad\gamma_{ijk}= \int\limits_{0}^{t_{k}}\int\limits_{0}^{t_{k}}T_{i}(s_{1})T_{j}(s_{2})x(t_{k} -s_{1})x(t_{k}-s_{2})\,ds_{1}ds_{2}. \tag{14}\]
The system is inconsistent, and the least squares method is used to find its approximate solution. The point of the method is to find coefficients \(A_{i}\) and \(C_{ij}\) such that the following criterion is minimized:
\[\sum_{k=0}^{N-1}\left(y_{k}-\sum_{i=0}^{m-1}A_{i}\beta_{ik}-\sum_{i=0}^{m_{1}-1} \sum_{j=0}^{m_{2}-1}C_{ij}\gamma_{ijk}\right)^{2}\longrightarrow\min \tag{15}\]
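The corresponding solver is a one-line change from the collocation sketch above: the rectangular system is handed to a least squares routine, mirroring the minimum-norm MATLAB solve used in the experiments below. A sketch under the same assumptions (vectorized signals, quadrature in place of analytic integration):

```python
import numpy as np
from scipy.integrate import fixed_quad

def cheb(i, s, T=1.0):  # first-kind Chebyshev T_i mapped from [0, T]
    return np.cos(i * np.arccos(np.clip(2.0 * s / T - 1.0, -1.0, 1.0)))

def lsm_solve(x, y, m, rows_factor=5, T=1.0):
    """Least squares identification (13)-(15) with m = m1 = m2 and
    k = (m + m^2) * rows_factor collocation-style rows."""
    n_unknowns = m + m * m
    t = np.linspace(0.0, T, n_unknowns * rows_factor)
    M = np.zeros((len(t), n_unknowns))
    for k, tk in enumerate(t):
        beta = np.array([fixed_quad(lambda s: cheb(i, s, T) * x(tk - s),
                                    0.0, tk, n=60)[0] for i in range(m)])
        M[k, :m] = beta
        M[k, m:] = np.outer(beta, beta).ravel()  # gamma_ijk = beta_ik beta_jk
    coeffs, *_ = np.linalg.lstsq(M, y(t), rcond=None)  # minimizes (15)
    return coeffs[:m], coeffs[m:].reshape(m, m)
```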
## 5 Numerical experiments
Let us illustrate the operation of the proposed identification methods on two pairs of model signals.
### Model 1. Periodic signal
\[\begin{array}{l}x(t)=\sin(20t),\ y(t)=\frac{1}{81002}\Big{(}199\cos^{2}(20t) -15\sin(40t)-200\cos(20t)e^{-2t}+1+\\ +10\sin(20t)e^{-2t}+20\sin(20t)e^{-t}\Big{)}+\frac{1}{409}\left(3\sin(20t)-20 \cos(20t)+\frac{850920}{40501}e^{-3t}\right).\end{array} \tag{16}\]
Figure 4 shows the graphs of the input and output signals (16).
#### 5.1.1 Collocation method results for the model (16)
Table 1 demonstrates the dependence of the residual \(\varepsilon_{N}\) on the values \(m=m_{1}=m_{2}\) for the uniform mesh \(t_{k}=\frac{k}{N},\ k=0,1,\ldots,N,\) covering the segment \([0,1]\).
\begin{table}
\begin{tabular}{c c} \hline \hline \(\mathbf{m}\) & \(\varepsilon_{N}\) \\ \hline
3 & \(1.41\cdot 10^{-2}\) \\
4 & \(1.14\cdot 10^{-6}\) \\
5 & \(4.72\cdot 10^{-9}\) \\
6 & \(1.77\cdot 10^{-12}\) \\
7 & \(1.83\cdot 10^{-14}\) \\
8 & \(1.53\cdot 10^{-18}\) \\
10 & \(2.84\cdot 10^{-26}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Dependence of the residual \(\varepsilon_{N}\) on the values \(m,m_{1},m_{2}\).
Figure 4: Input and output functions (16)
All calculations were performed in the Maple system with the parameter Digits:=30 (the number of digits that Maple uses when making calculations with software floating-point numbers). Also note that the integration during the formation of the system (11) was carried out analytically and did not introduce additional error into the calculation results. This is due to the fact that the input signal \(x(t)\) in most cases allows analytical calculation of the values (12). For input signals of a more complex structure, special approximation methods should be applied to the integrals (12), taking into account the possible fast oscillation of \(x(t)\).
#### 5.1.2 Least-square method results for the model (16)
For simplicity we assume that \(m=m_{1}=m_{2}\). Table 2 demonstrates the dependence of the residual \(\varepsilon_{N}\) on the parameters.
All calculations for the least squares method were performed in MATLAB. The overdetermined system is solved using the lsqminnorm function. It should also be noted that all integrations during the calculation were carried out analytically and did not introduce additional error into the results.
\begin{table}
\begin{tabular}{c|c c c} \hline & **m = 3** & **m = 5** & **m = 7** \\ \hline \(\mathbf{k}=(\mathbf{m}+\mathbf{m}^{2})\cdot\mathbf{2}\) & \(8.07\cdot 10^{-4}\) & \(4.92\cdot 10^{-10}\) & \(2.50\cdot 10^{-16}\) \\ \(\mathbf{k}=(\mathbf{m}+\mathbf{m}^{2})\cdot\mathbf{5}\) & \(8.07\cdot 10^{-4}\) & \(3.90\cdot 10^{-10}\) & \(1.50\cdot 10^{-16}\) \\ \(\mathbf{k}=(\mathbf{m}+\mathbf{m}^{2})\cdot\mathbf{10}\) & \(8.07\cdot 10^{-4}\) & \(4.90\cdot 10^{-10}\) & \(2.87\cdot 10^{-15}\) \\ \hline \end{tabular}
\end{table}
Table 2: Dependence of the residual \(\varepsilon_{N}\) on the values \(m\) and \(k\).
Figure 10: Residual for \(m=7\) and \(k=(m+m^{2})\cdot 5\)
Figure 9: Residual for \(m=5\) and \(k=(m+m^{2})\cdot 5\)
### Model 2. Fading input signal
\[\begin{split} x(t)=e^{-3t}\sin(10t),\\ y(t)=\int\limits_{0}^{t}\cos\left(\frac{s}{2}\right)x(t-s)\,ds+ \\ +\int\limits_{0}^{t}\int\limits_{0}^{t}\sin(s_{1}+2s_{2})x(t-s_{ 1})x(t-s_{2})\,ds_{1}ds_{2}.\end{split} \tag{17}\]
Figure 11 shows the graphs of the input and output signals (17).
#### 5.2.1 Collocation method results for the model (17)
Table 3 demonstrates the dependence of the residual \(\varepsilon_{N}\) on the values \(m=m_{1}=m_{2}\) for the uniform mesh \(t_{k}=\frac{k}{N},\ k=0,1,\ldots,N,\) covering the segment \([0,1]\).
Let us also discuss the stability of the suggested numerical technique. Let the input data of problem (17) be determined with some random error \(\varepsilon_{rand}\) varying within the \(\delta\) value, namely \(|\varepsilon_{rand}|\leqslant\delta\). Table 4 shows the dependence of the averaged residual \(\varepsilon_{N}\) on the \(\delta\) value at a fixed \(m=3\), based on the results of \(10\) measurements.
\begin{table}
\begin{tabular}{c c} \hline \hline \(\delta\) & \(\varepsilon_{N}\) \\ \hline \(10^{-2}\) & \(0.01729\) \\ \(10^{-3}\) & \(2.71\cdot 10^{-3}\) \\ \(10^{-4}\) & \(2.56\cdot 10^{-4}\) \\ \(10^{-5}\) & \(7.54\cdot 10^{-5}\) \\ \(10^{-6}\) & \(1.66\cdot 10^{-5}\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Stability results for collocation
Figure 11: Input and output functions (17)
\begin{table}
\begin{tabular}{c c} \hline \hline \(\mathbf{m}\) & \(\varepsilon_{N}\) \\ \hline
3 & \(3.16\cdot 10^{-5}\) \\
4 & \(9.85\cdot 10^{-9}\) \\
5 & \(8.58\cdot 10^{-12}\) \\
6 & \(2.17\cdot 10^{-16}\) \\
7 & \(5.37\cdot 10^{-20}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Dependence of the residual \(\varepsilon_{N}\) on the values \(m,m_{1},m_{2}\).
It can be seen from the results in Table 4 that the residual depends continuously on the limits of the random measurement errors of the input and output signals. Thus, we can conclude that the suggested method is stable.
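A sketch of this stability test harness (our own code; `identify` is a placeholder for either solver sketched above, and `residual_of` for an evaluation of the residual (10)): samples of both signals are perturbed by uniform noise bounded by \(\delta\) and the residual is averaged over independent draws.

```python
import numpy as np

def noisy_signal(f, delta, rng, T=1.0, n_samples=400):
    """Callable built from samples of f perturbed by uniform noise with
    |eps_rand| <= delta, reconstructed by linear interpolation."""
    t = np.linspace(0.0, T, n_samples)
    vals = f(t) + rng.uniform(-delta, delta, size=t.shape)
    return lambda s: np.interp(s, t, vals)

def average_residual(identify, residual_of, x, y, delta, n_runs=10):
    """Mean residual over n_runs independent noise draws, as in Table 4."""
    rng = np.random.default_rng(1)
    vals = []
    for _ in range(n_runs):
        xn, yn = noisy_signal(x, delta, rng), noisy_signal(y, delta, rng)
        vals.append(residual_of(identify(xn, yn), xn, yn))
    return float(np.mean(vals))
```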
#### 5.2.2 Least-square method results for the model (17)
As for the collocation method, let us check the stability of the least squares method on this model. To test stability, 10 rounds of experiments were performed and the average residual \(\varepsilon_{N}\) was calculated, with \(m=3\) and \(k=(m+m^{2})\cdot 5\) fixed.
Figure 14: Residual for \(m=7\) and \(k=(m+m^{2})\cdot 7\)
Figure 12: Residual for \(m=3\)

Figure 13: Residual for \(m=7\)
\begin{table}
\begin{tabular}{c|c c c} \hline & **m = 3** & **m = 5** & **m = 7** \\ \hline \(\mathbf{k}=(\mathbf{m}+\mathbf{m}^{2})\cdot\mathbf{2}\) & \(2.38\cdot 10^{-6}\) & \(7.77\cdot 10^{-14}\) & \(2.93\cdot 10^{-16}\) \\ \(\mathbf{k}=(\mathbf{m}+\mathbf{m}^{2})\cdot\mathbf{5}\) & \(2.63\cdot 10^{-6}\) & \(7.46\cdot 10^{-14}\) & \(3.05\cdot 10^{-16}\) \\ \(\mathbf{k}=(\mathbf{m}+\mathbf{m}^{2})\cdot\mathbf{10}\) & \(3.48\cdot 10^{-6}\) & \(7.41\cdot 10^{-14}\) & \(3.80\cdot 10^{-16}\) \\ \hline \end{tabular}
\end{table}
Table 5: Dependence of the residual \(\varepsilon_{N}\) on the values \(m\) and \(k\).
## 6 Conclusions
Two numerical approaches to the Volterra model identification problem were proposed in this paper. As the presented results show, both methods exhibit stable convergence (in the sense of the residual tending to zero). However, from the point of view of arithmetic complexity, the collocation method turns out to be less expensive, and this advantage becomes more pronounced as the number of model parameters to be determined grows. This is due to the least squares method's need to calculate a significantly larger number of integrals, proportional to the square of the number of measurements being processed.
Further research will increase the number of terms \(n\) in the model (1) to identify a more accurate functional relationship between the input and output signals. It is also planned to develop special methods for approximating the integrals (12) for input signals of more complex structure, including fast-oscillating signals.
|
2304.12348 | Orientation Memory of Magnetic Dipoles | We study the precession caused by electromagnetic radiation on a magnetic
dipole located far from the source. As we show, this entails a net rotation of
the dipole in the plane orthogonal to the direction of wave propagation,
providing an electromagnetic analogue of gyroscopic gravitational memory. Like
its gravitational cousin, the precession rate falls off with the square of the
distance to the source, and is related to electric-magnetic duality and optical
helicity on the celestial sphere. We use a multipolar expansion to compute the
memory effect due to localized sources such as moving point charges, and
highlight its occurrence in setups that break parity symmetry. | Blagoje Oblak, Ali Seraj | 2023-04-24T18:00:02Z | http://arxiv.org/abs/2304.12348v2 | # Orientation Memory of Magnetic Dipoles
###### Abstract
We study the precession caused by electromagnetic radiation on a magnetic dipole located far from the source. As we show, this entails a net rotation of the dipole in the plane orthogonal to the direction of wave propagation, providing an electromagnetic analogue of gyroscopic gravitational memory. Like its gravitational cousin, the precession rate falls off with the square of the distance to the source, and is related to electric-magnetic duality and optical helicity on the celestial sphere. We use a multipolar expansion to compute the memory effect due to localized sources such as moving point charges, and highlight its occurrence in setups that break parity symmetry.
## I Introduction
The passage of a burst of gravitational waves through a detector typically leads to persistent effects, generally referred to as _gravitational memory_. Such phenomena have received widespread attention in recent years. Indeed, the seminal detection of gravitational waves [1] and the growing number of subsequent observations provide encouraging prospects for the detection of memory in the near future [2; 3; 4; 5; 6; 7], while the rich interplay between memory effects, asymptotic symmetries and soft theorems in quantum gravity [8] makes them crucial from a theoretical standpoint. This has led to numerous proposals of memory observables and their relation with gravitational charges: see _e.g._[9; 10; 11; 12; 13; 14]. For our purposes, the most relevant example is that of refs. [15; 16], which describe a _gyroscopic memory effect_: a net rotation of a spinning gyroscope in the "transverse plane" orthogonal to the direction of gravitational wave propagation.
The existence of soft theorems and asymptotic symmetries in gauge theories other than gravity [17; 18; 19; 20; 21] suggests that memory effects are not limited to general relativity. This turns out to be true. For instance, electromagnetic waves cause a leading kick effect [22] and a subleading displacement [23; 24] on free test charges, and their analogue in non-Abelian gauge theories was described in [25; 26; 27].
In this work, we similarly study a persistent effect of electromagnetic radiation on the orientation of a distant magnetic dipole. This observable, which we dub "gyromagnetic memory," is remarkably similar to its gyroscopic gravitational cousin. Indeed, both effects decay as \(1/r^{2}\) in terms of the distance \(r\) between source and detector. Furthermore, gravitational gyroscopic memory contains two terms: one that is linear in the metric perturbation and coincides with the spin memory effect [9], and a second, nonlinear part related to gravitational electric-magnetic duality and the helicity of gravity waves. The same structure turns out to arise in electrodynamics, despite one's naive expectation that no nonlinear term should arise in Maxwell's linear theory [28]. In particular, the nonlinear term is again related to electric-magnetic duality and the optical helicity of radiation. Similar quantities also occur in [29], which appeared while we were finalizing this work and exhibits memory effects from an angular momentum transfer between electromagnetic waves and test systems.
An obvious advantage of the electromagnetic setup compared to its gravitational version is its simplicity: one can compute, with minimal effort, the radiative data and memory caused by a given source. (This should be contrasted with the gravitational case, where the extraction of radiative data from dynamical sources involves intricate numerical or perturbative frameworks [30; 31; 32; 33; 34].) Accordingly, we eventually study the gyromagnetic precession produced by nonrelativistic moving point charges and find that it crucially requires a breaking of parity symmetry. This occurs for instance in the simple case of a rotating point charge, suggesting that a similar gravitational gyroscopic precession occurs for inspiralling binary black holes; these will be studied in a separate work.
The paper is organized as follows. In section II, we show how electromagnetic radiation gives rise to the precession of a magnetic dipole near null infinity. Section III is then devoted to the relation between the resulting memory effect, electric-magnetic duality and optical helicity. Finally, in section IV we compute the gyromagnetic precession and memory produced at null infinity by moving point charges in the bulk; this involves in particular a multipolar, nonrelativistic expansion, also used in gravitational computations of the same kind.
**Notation.** We use Gaussian units and set \(c=1\) throughout, except at the very end of section IV. Vectors are denoted by bold letters, _e.g._ the position \(\mathbf{x}=x^{i}\partial_{i}\) and the radial unit vector \(\mathbf{n}\equiv\mathbf{x}/|\mathbf{x}|=n^{i}\partial_{i}\). In addition, we interchangeably use \(\mathbf{n}\) and \(\mathbf{\theta}\) to represent a point on a unit (celestial) sphere. We also define \(\Delta f(u)\equiv f(u)-f(-\infty)\) and \(\Delta f\equiv\lim_{u\to\infty}\Delta f(u)\) for any time-dependent function \(f(u)\) whose derivatives vanish at past and future infinities. Finally, in section IV, we will rely on the multi-index notation \(X_{L}=X_{i_{1}i_{2}\cdots i_{\ell}}\) to construct symmetric trace-free (STF) harmonics \(\hat{n}_{L}\equiv n_{\langle L\rangle}\), where angle brackets denote the STF part of a tensor.
## II Gyromagnetic precession and memory
Here we use the asymptotic behaviour of electrodynamics near null infinity to predict the precession rate of a magnetic dipole located far away from a source of electromagnetic waves, at leading order in the inverse distance from the source. For radiation bursts that are compactly supported in time, this leads to a net gyromagnetic memory, which we compute.
**Asymptotic electromagnetic field.** Consider four-dimensional Minkowski spacetime with inertial coordinates \(x^{\mu}=(t,x^{i})\), \(i=1,2,3\), and define retarded Eddington-Finkelstein (Bondi) coordinates by \(r\equiv\sqrt{x^{i}x^{i}}\) and \(u\equiv t-r\). Let also \(\theta^{B}\) (\(B=1,2\)) be local coordinates on a (future) unit celestial sphere whose metric is \(\gamma_{BC}(\mathbf{\theta})\,\mathrm{d}\theta^{B}\,\mathrm{d}\theta^{C}\). For later reference, introduce a time-independent orthonormal dyad \(E_{a}{}^{B}(\mathbf{\theta})\) on \(S^{2}\) such that \(E_{a}{}^{B}E_{b}{}^{C}\gamma_{BC}=\delta_{ab}\), with frame indices \(a,b\in\{1,2\}\). One can then define a local Cartesian frame
\[\mathbf{e}_{r}=n^{i}\partial_{i}\,,\qquad\mathbf{e}_{a}=rE_{a}{}^{B}\frac{\partial n^{ i}}{\partial\theta^{B}}\partial_{i} \tag{1}\]
with \(n^{i}\equiv x^{i}/r\). This frame will eventually be used to write the components of a magnetic dipole.
Now let \(J_{\mu}(t,\mathbf{x})\) be some localized, generally time-dependent, electric current density in the bulk of Minkowski spacetime. It produces an electromagnetic field \(\mathcal{A}_{\mu}\) which, in Lorenz gauge \(\partial_{\mu}\mathcal{A}^{\mu}=0\), satisfies the Maxwell equation \(\square\mathcal{A}_{\mu}=-4\pi J_{\mu}\), solved by the retarded potential

\[\mathcal{A}_{\mu}(t,\mathbf{x})=\int\mathrm{d}^{3}y\,\frac{J_{\mu}(t-|\mathbf{x}-\mathbf{y}|,\mathbf{y})}{|\mathbf{x}-\mathbf{y}|}\,. \tag{2}\]

Near null infinity this field admits the expansion

\[\mathcal{A}_{\mu}=\frac{A_{\mu}(u,\mathbf{n})}{r}+\mathcal{O}(r^{-2})\,, \tag{3}\]

whose transverse frame components \(A_{a}(u,\mathbf{n})\), together with the dual field \(\widetilde{A}_{a}\equiv\epsilon_{a}{}^{b}A_{b}\), constitute the radiative data (4). A magnetic dipole characterized by the constant \(k\) and located far from the source then precesses in the transverse plane at the rate

\[\Omega(u)=\frac{k}{2r^{2}}\Big{(}D_{a}\widetilde{A}^{a}(u)-k\,\dot{A}^{a}(u)\Delta\widetilde{A}_{a}(u)\Big{)}\,. \tag{10}\]
It is immediate to deduce the net rotation angle in the transverse plane following a burst of radiation: it is an integral \(\Phi=\int_{-\infty}^{\infty}\mathrm{d}u\,\Omega(u)\) with \(\Omega\) given by (10). This angle is the _gyromagnetic memory_. Since all results above are covariant, the net rotation can also be written in standard polar coordinates \(\theta^{B}\) on the sphere:
\[\Phi=\frac{k}{2r^{2}}\int_{-\infty}^{\infty}\mathrm{d}u\left(D_{B}\widetilde{A }^{B}(u)-k\dot{A}^{B}(u)\Delta\widetilde{A}_{B}(u)\right), \tag{11}\]
where \(A_{B}=E_{B}^{a}\,A_{a}\) and \(D_{B}\) is the covariant derivative determined by the metric \(\gamma_{BC}\) on \(S^{2}\). Again, the similarity between this expression and gravitational gyroscopic memory [15; 16] is striking. The term \(D_{B}\widetilde{A}^{B}\) in (11) is indeed the radial magnetic field at null infinity, and it may be seen as a boundary current for dual large gauge transformations [19]; it is analogous to the dual mass aspect [35; 36; 37; 38; 39; 40] that appears in the gravitational case [15; 16]. As for the nonlinear term in (11), we now study it in a separate section.
## III Interpretation: duality and helicity
Here we focus on the nonlinear term of the gyromagnetic memory (11). We show that it can be interpreted in two equivalent ways: either as a generator of local electric-magnetic duality on a celestial sphere, or as a measure of the difference between the numbers of left- and right-handed photons crossing a given point on the celestial sphere. Again, analogous interpretations hold in the gravitational version of the setup [15; 16].
**Memory and duality.** Electric-magnetic duality is a manifest symmetry of the vacuum Maxwell equations; it mixes electric and magnetic fields through rotations of the electromagnetic field \(\mathcal{F}\) and its Hodge dual \(*\mathcal{F}\). These can be enhanced to a symmetry of the action by defining a second gauge potential \(\mathcal{C}\) constrained by the nonlocal condition \(\mathrm{d}\mathcal{C}=*\mathrm{d}\mathcal{A}\), whereupon duality transformations become \(\mathrm{U}(1)\) rotations of the pair \((\mathcal{A},\mathcal{C})\). Things simplify near null infinity, where one has \(\mathcal{C}_{\mu}=C_{\mu}/r+\mathcal{O}(r^{-2})\) in terms of an \(r\)-independent function \(C_{\mu}\); the above constraint then reduces to a _local_ relation \(\dot{C}_{a}=\epsilon_{a}{}^{b}\dot{A}_{b}=\dot{\widetilde{A}}_{a}\), which yields \(C_{a}=\widetilde{A}_{a}\) up to arbitrary integration functions. From this perspective, duality induces global \(\mathrm{U}(1)\) rotations of the pair \((A_{a},\widetilde{A}_{a})\).
As far as radiative data at large distances is concerned, one can in fact enhance duality transformations into _local_ rotations on a celestial sphere. Indeed, given any smooth function \(\varepsilon(\mathbf{\theta})\) on \(S^{2}\), the infinitesimal transformations
\[\delta_{\varepsilon}A_{B}=\varepsilon(\mathbf{\theta})\widetilde{A}_{B}\,,\qquad \delta_{\varepsilon}\widetilde{A}_{B}=-\varepsilon(\mathbf{\theta})A_{B} \tag{12}\]
preserve the radiative symplectic structure [41; 42]
\[\Gamma=\int\mathrm{d}u\,\mathrm{d}^{2}\mathbf{\theta}\,\sqrt{\gamma}\,\delta\dot{ A}^{B}\wedge\delta A_{B}\,. \tag{13}\]
The corresponding Hamiltonian generator can be found through the standard procedure and reads
\[Q[\varepsilon]=\int\mathrm{d}^{2}\mathbf{\theta}\sqrt{\gamma}\,\varepsilon(\mathbf{ \theta})\,D(\mathbf{\theta})\,, \tag{14}\]
with the local density [43]
\[D(\mathbf{\theta})=\int_{-\infty}^{+\infty}\mathrm{d}u\,\dot{A}^{B}(u,\mathbf{\theta} )\,\Delta\widetilde{A}_{B}(u,\mathbf{\theta})\,. \tag{15}\]
This all reduces to standard duality transformations for \(\varepsilon(\mathbf{\theta})=\mathrm{const}\). However, locality is crucial here, since one can then identify the density (15) with the nonlinear part of the gyromagnetic memory (11). The latter is thus related to local electric-magnetic duality on the celestial sphere, as was to be shown.
**Memory and optical helicity.** Let us now write the local density (15) in terms of photonic Fock space operators. We start from the mode expansion of the radiative gauge field (with the conventions of [8]),
\[A_{a}(u,\mathbf{n})=-i\int_{0}^{+\infty}\frac{\mathrm{d}\omega}{\sqrt{2\pi}} \big{(}e^{i\omega u}\,b_{a}^{\dagger}(\omega,\mathbf{n})+\mathrm{h.c.}\big{)}\,, \tag{16}\]
where \(b_{a}^{\dagger}(\omega,\mathbf{n})\) creates a transverse photon of frequency \(\omega\), propagating along \(\mathbf{n}\) with polarization along the direction \(a\). The celestial density (15) then becomes
\[D(\mathbf{\theta})=\int_{-\infty}^{+\infty}\mathrm{d}u\,\dot{A}^{a}\widetilde{A}_{a}=2i\epsilon^{ab}\int\mathrm{d}\omega\,\omega\,b_{a}^{\dagger}\,b_{b}\,. \tag{17}\]
The geometric interpretation of this quantity is clearest in the helicity basis, _i.e._ in terms of a complex null dyad \(\tilde{E}_{a}{}^{B}\) on \(S^{2}\) such that the inverse volume form reads \(\epsilon^{ab}=\tilde{E}_{A}^{a}\tilde{E}_{B}^{b}\,\epsilon^{AB}=\mathrm{diag}(- i,i)\). Indeed, eq. (17) then reduces to
\[D(\mathbf{\theta})=2\int_{0}^{\infty}\mathrm{d}\omega\,\omega\,(b_{+}^{\dagger}\,b_ {+}-b_{-}^{\dagger}\,b_{-}), \tag{18}\]
which is known as the _optical helicity_[44]: it measures the difference between the numbers of right-handed and left-handed photons emitted in the direction \(\theta^{A}\) on the future celestial sphere [45]. We have thus confirmed the second interpretation of the nonlinear term in the gyromagnetic memory (11).
## IV Gyromagnetic memory from localized sources
We conclude this work with an estimate of the gyromagnetic effect produced by nonrelativistic localized sources. This is achieved in two different ways. First, we directly solve (2) for a point charge and exhibit the result for two kinds of oscillatory motion. Second, we consider more general, arbitrary localized sources in the multipolar expansion and find the leading nonrelativistic
effect in the gyromagnetic memory. This paves the way for similar gravitational analyses in the post-Newtonian framework [30].
**Point-like sources.** Consider a particle with charge \(q\) that follows some (accelerated) path \(\mathbf{R}(t)\) in space. The corresponding charge density is \(\rho(\mathbf{x},t)=q\,\delta^{(3)}(\mathbf{x}-\mathbf{R}(t))\) and the current density is \(\mathbf{J}(\mathbf{x},t)=q\,\mathbf{v}(t)\,\delta^{(3)}(\mathbf{x}-\mathbf{R}(t))\), where \(\mathbf{v}=\dot{\mathbf{R}}\) is the particle's velocity. As a result, the \(\mathbf{y}\) integral of the electromagnetic field (2) is straightforward and one finds
\[\mathcal{A}_{\mu}(\mathbf{x},t)=q\,\int\limits_{-\infty}^{t}\!\mathrm{d}t^{\prime }\,v_{\mu}(t^{\prime})\,\frac{\delta(t^{\prime}-t+|\mathbf{x}-\mathbf{R}(t^{\prime})|) }{|\mathbf{x}-\mathbf{R}(t^{\prime})|} \tag{19}\]
where \(v_{0}\equiv 1\) and \(v_{i}=v^{i}=\dot{R}^{i}\). It now remains to carry out the time integral, taking into account the nonlinear dependence of the delta function on the integration variable \(t^{\prime}\). This is most easily done near null infinity, where one may expand \(t-|\mathbf{x}-\mathbf{R}(t^{\prime})|=u+\mathbf{n}\cdot\mathbf{R}(t^{\prime})+\mathcal{O}(1/r)\) and the electromagnetic field (19) gives the following special case of eq. (3):
\[\mathcal{A}_{\mu}(\mathbf{x},t)=\frac{q}{r}\frac{v_{\mu}(t_{*})}{1-\mathbf{n}\cdot\bm {v}(t_{*})}+\mathcal{O}(r^{-2})\,. \tag{20}\]
Here \(t_{*}\) is the root of the transcendental equation
\[t^{\prime}=u+\mathbf{n}\cdot\mathbf{R}(t^{\prime})\,, \tag{21}\]
enforced by the delta function in (19) at leading order in \(1/r\). This root is unique and stable by virtue of the fact that the particle's velocity is lower than the speed of light. In fact, we shall henceforth assume that our particle's motion is periodic; then \(t_{*}\) can be found by iterating the function \(u+\mathbf{n}\cdot\mathbf{R}(...)\) infinitely many times, starting from the 'seed' \(u\):
\[t_{*}=u+\mathbf{n}\cdot\mathbf{R}\bigg{(}u+\mathbf{n}\cdot\mathbf{R}\Big{(}u+ \mathbf{n}\cdot\mathbf{R}\big{(}\cdots(u)\cdots\big{)}\Big{)}\bigg{)}\,. \tag{22}\]
Truncating this infinite composition to a finite \(n\)-fold composition yields the nonrelativistic approximation of the electromagnetic field that incorporates the effects of the source up to order \(|\mathbf{v}|^{n}=(|\mathbf{v}|/c)^{n}\).
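This truncation is straightforward to implement; the sketch below (our own code, in units with \(c=1\)) solves the transcendental equation (21) by fixed-point iteration for a circular trajectory like the one considered below. Convergence is guaranteed because the map contracts at rate \(|\mathbf{v}|<1\).

```python
import numpy as np

def retarded_time(u, n_hat, R_traj, order=10):
    """Solve t* = u + n.R(t*) of eq. (21) by fixed-point iteration;
    truncating at `order` keeps source effects up to (v/c)^order."""
    t = u
    for _ in range(order):
        t = u + np.dot(n_hat, R_traj(t))
    return t

# Circular trajectory of eq. (24), with v = Omega * R << 1.
R, Omega = 1.0, 0.05
R_traj = lambda t: np.array([R * np.cos(Omega * t), R * np.sin(Omega * t), 0.0])
n_hat = np.array([np.sin(0.3), 0.0, np.cos(0.3)])  # observation direction
print(retarded_time(2.0, n_hat, R_traj))
```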
Let us illustrate this with two examples of sources. Consider first a particle that oscillates along the \(z\) axis, so that \(\mathbf{R}(t)=-R\cos(\Omega t)\mathbf{e}_{z}\) for some length \(R\) and some frequency \(\Omega\) such that \(v\equiv\Omega R\ll 1\). Eq. (20) then yields \(\mathcal{A}_{x}=\mathcal{A}_{y}=0\) and
\[\mathcal{A}_{z}\sim\frac{q}{r}\,v\Big{[}\sin(\Omega u)-v\cos(2\Omega u)\cos( \theta)+\mathcal{O}(v^{2})\Big{]}\,, \tag{23}\]
where \(\theta\) is the standard polar angle and we used the crudely approximate root \(t_{*}\sim u-R\cos(\theta)\cos(\Omega u)\) of eq. (21). In terms of electromagnetic boundary data, one finds \(F_{ab}=0\) and \(\dot{A}_{a}\widetilde{A}^{a}=0\), so there is no magnetic dipole precession. Note that this remains true at any order in the nonrelativistic expansion, because the only nonzero component of the gauge field at null infinity is \(A_{\theta}\), which only depends on \(u\) and \(\theta\).
Let us now turn to a source that breaks parity symmetry, namely a charged particle that moves along a circle in the \((x,y)\) plane:
\[\mathbf{R}(t)=\big{(}R\cos(\Omega t),R\sin(\Omega t),0\big{)}\,, \tag{24}\]
again with \(v\equiv\Omega R\ll 1\). Now limiting ourselves only to the leading order in the nonrelativistic expansion, we find in Bondi coordinates that \(\mathcal{A}_{u}=\mathcal{A}_{r}=q/r\) and
\[\mathcal{A}_{\theta} \sim qv\,\sin(\varphi-\Omega u)\cos\theta\,, \tag{25}\] \[\mathcal{A}_{\varphi} \sim qv\,\cos(\varphi-\Omega u)\sin\theta\,. \tag{26}\]
It follows again that \(F_{ab}=0\), but this time the integrand of optical helicity does _not_ vanish:
\[\dot{A}_{a}\widetilde{A}^{a}=-q^{2}R^{2}\Omega^{3}\cos\theta+ \mathcal{O}(|v|^{3})\,. \tag{27}\]
As a result, the precession rate (10) is nonzero. It depends on the polar location of the test magnetic dipole with respect to the source, but the average of the precession rate over the whole celestial sphere vanishes. One may expect a similar behaviour for the gyroscopic gravitational memory produced by inspiralling binary systems, up to the fact that the angular distribution should start from the \(\ell=2\) harmonic.
**Multipolar expansion.** Let us now consider a generic compact source with characteristic speed \(v\), small with respect to the speed of light \(c\). For such sources, the multipolar expansion provides an efficient approximation scheme, since multipole moments of order \(\ell\) in the radiation are suppressed by a factor \((v/c)^{\ell}\). We reinstate \(c\) until the end of this work, as a bookkeeping parameter that controls the order of the nonrelativistic expansion.
To begin, we decompose the radiative field (4) in terms of two scalar functions \(\phi^{\pm}\) with definite parity:
\[A_{a}(u,\mathbf{n})=D_{a}\phi^{+}(u,\mathbf{n})+\epsilon_{a}{}^{b}D_{b}\,\phi^{-}(u, \mathbf{n})\,. \tag{28}\]
The linear and nonlinear terms in the gyromagnetic precession rate (10) are then given by
\[D_{a}\widetilde{A}^{a} =-D^{2}\phi^{-}\,, \tag{29}\] \[\dot{A}_{a}\widetilde{A}^{a} =-\Big{(}D_{a}\dot{\phi}^{+}D^{a}\phi^{-}-D_{a}\dot{\phi}^{-}D^{a}\phi^{+}\Big{)}\] \[\quad+\epsilon^{ab}\left(D_{a}\dot{\phi}^{+}D_{b}\phi^{+}+D_{a}\dot{\phi}^{-}D_{b}\phi^{-}\right)\,. \tag{30}\]
The scalars \(\phi^{\pm}\) can be expanded in symmetric trace-free (STF) harmonics on the sphere as \(\phi^{\pm}(u,\mathbf{n})\equiv\phi^{\pm}_{L}(u)\,n_{L}\), where the STF coefficients \(\phi^{\pm}_{L}(u)\) are _radiative multipole moments_. These can be derived through a multipole expansion of eq. (2) [46], which leads to
\[\phi^{+}_{L}=\frac{1}{c^{\ell}\,\ell!}\partial^{\ell}_{u}q^{+}_{L}\,,\qquad\phi^{-}_{L}=\frac{1}{c^{\ell+1}(\ell+1)!}\partial^{\ell}_{u}q^{-}_{L} \tag{31}\]
where the _source multipole moments_\(q_{L}^{\pm}\) are explicitly given in [46, eqs. (4.17)]. In the nonrelativistic limit, they take the simple form
\[q_{L}^{+} =\int d^{3}x\,\rho\,\hat{x}_{L}+\mathcal{O}(1/c^{2})\,, \tag{32}\] \[q_{L}^{-} =\int d^{3}x\,(\mathbf{x}\times\mathbf{J})_{\langle i}x_{L-1\rangle}+\mathcal{O}(1/c^{2})\,. \tag{33}\]
Note from (31)-(32) that the leading-order effect in a nonrelativistic expansion is given by \(\phi_{i}^{+}=\frac{1}{c}\dot{p}_{i}\), where \(p_{i}\) is the source's electric dipole moment. As a result, the nonlinear precession in (10) is given at leading order by the third term in (30):
\[\dot{A}_{a}\tilde{A}^{a}=\frac{1}{c^{2}}\left(\ddot{\mathbf{p}}\times\dot{\mathbf{p}} \right)\cdot\mathbf{n}+\mathcal{O}(1/c^{3})\,. \tag{34}\]
At the same time, the linear term in the precession (10) is determined by (29); at leading order, this is given by the magnetic dipole \(\phi_{i}^{-}=\frac{1}{2c^{2}}\dot{m}_{i}\) so that
\[D_{a}\widetilde{A}^{a}=\frac{1}{c^{2}}\dot{\mathbf{m}}\cdot\mathbf{n}+\mathcal{O}(1/c ^{3})\,. \tag{35}\]
It follows that the full gyromagnetic memory (11) is
\[\Phi(\mathbf{n})=\frac{k}{2r^{2}c^{2}}\left[\Delta\mathbf{m}-k\int du\left(\ddot{\bm {p}}\times\dot{\mathbf{p}}\right)\right]\cdot\mathbf{n}+\mathcal{O}(1/c^{3})\,. \tag{36}\]
In particular, one can now return to the example of the circular source (24), where \(\mathbf{m}=qR^{2}\mathbf{\Omega}\) and \(\mathbf{p}(t)=q\mathbf{R}(t)\) so that (35) vanishes while the nonlinear term (34) does not, yielding
\[\dot{A}_{a}\tilde{A}^{a}=-\frac{q^{2}R^{2}\Omega^{2}}{c^{2}}\mathbf{\Omega}\cdot \mathbf{n}+\mathcal{O}((|v|/c)^{3})\,. \tag{37}\]
This confirms our earlier result (27), now from an explicit multipolar nonrelativistic expansion.
## Acknowledgements
We thank Azadeh Maleknejad and Marios Petropoulos for fruitful discussions on related subjects. The work of B.O. is supported by the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No. 846244. A.S. is supported by a Royal Society University Research Fellowship.
|
2307.13506 | Shape Programming in Entropic Tissues | Epithelial morphogenesis, a signature problem of tissue biology and tissue
mechanics, continues to inspire biologists and physicists alike. Many
treatments focus on tissue fluidization, apical/basal ratio changes, or
mechanical instabilities. In contrast to these approaches, shape-programmable
materials, where the local lengths in the material change in a prescribed way,
offer an appealing analogy. In this analogy, certain in-plane collective cell
behaviors could also actively alter the local lengths in a tissue and therefore
provide the ingredients necessary for shape programming. In this Letter we
demonstrate that this is indeed the case for directed, active T1 rearrangements
of cells. We determine the required shape programming parameters associated to
tissue patches with both fixed numbers of rearrangements and patches at steady
state between directed T1 events and counterbalancing randomly oriented ones
using a simple free-boundary vertex model approach. Along the way we uncover a
surprising connection between tissues with active T1 events and the central
limit theorem, and through it, the physics of entropic springs. | Carlos M. Duque, Carl D. Modes | 2023-07-25T13:58:42Z | http://arxiv.org/abs/2307.13506v1 | # Shape Programming in Entropic Tissues
###### Abstract
Epithelial morphogenesis, a signature problem of tissue biology and tissue mechanics, continues to inspire biologists and physicists alike. Many treatments focus on tissue fluidization, apical/basal ratio changes, or mechanical instabilities. In contrast to these approaches, shape-programmable materials, where the local lengths in the material change in a prescribed way, offer an appealing analogy. In this analogy, certain in-plane collective cell behaviors could also actively alter the local lengths in a tissue and therefore provide the ingredients necessary for shape programming. In this Letter we demonstrate that this is indeed the case for directed, active T1 rearrangements of cells. We determine the required shape programming parameters associated to tissue patches with both fixed numbers of rearrangements and patches at steady state between directed T1 events and counterbalancing randomly oriented ones using a simple free-boundary vertex model approach. Along the way we uncover a surprising connection between tissues with active T1 events and the central limit theorem, and through it, the physics of entropic springs.
The grand problem of how cells form tissue continues to receive great interest from biologists and physicists. One major question that remains unresolved is that of morphogenesis, or, how cells in an initially simply-shaped tissue reliably create more complex tissues, as in organ development [1; 2; 3] or the formation of limb buds [4; 5; 6]. Meanwhile, recent progress in our understanding of _engineered_ shape transitions holds considerable promise for next generation devices and novel actuation modalities. A specifically designed exotic material - e.g. a liquid crystal solid or a NIPA hydrogel - can be coupled to an external field, such as temperature, light, or hydration, in order to control a pre-designed shape transition [7; 8; 9; 10; 11; 12]. These transitions are instantiated by _spontaneous strain_ fields in the material whose principal directions can be pre-programmed to deliver the desired shape.
If the establishment of a spontaneous strain field drives engineered shape-programmability, could morphogenesis also employ such a strategy? It has long been known that cells in a developing epithelial tissue engage in complex, collective in-plane behaviors. Further, owing to remarkable advances in live-imaging microscopy, patches of cells performing mixes of growth, extrusion, death, elongation, division, and neighbor rearrangements have now been described with striking quantitative detail [13; 14; 15]. We posit that these collective behaviors, many of which are actively driven, provide coarse-grained spontaneous strains that could capture and predict shape outcomes.
We therefore seek a mapping that connects actively driven collective cellular rearrangements to the physics of spontaneous strain-mediated shape programmability. We begin by considering the local patch-shaping effects of fixed numbers of oriented rearrangements. We then investigate whether actively driven _unoriented_ T1 events can provide a restoring strain allowing for the recovery of more homogeneous patch shapes, and in so doing discover an unexpected connection to the physics of entropic springs. Finally, we determine the effective shape programmability parameters - the spontaneous strain and spontaneous Poisson ratio - for a tissue patch at steady state balancing the competing effects of oriented and unoriented active rearrangements.
Consider the well-known vertex model (VM), where cells in a confluent epithelium are modeled by polygons. This model is particularly suitable for studying the mechanical tissue response to topological changes in the underlying network of cellular junctions. These changes manifest through cellular events such as divisions, extrusions, or neighbor exchanges, often called T1 events [16; 17; 18; 19]. T1 events achieve neighbor exchange through the shrinking and disappearance of the junction between neighbors and the subsequent regrowth of a new junction separating different neighbors. They therefore act directly on the neighbor network of the tissue instead of the conventional elastic deformation where individual cells stretch or contract. Such T1 rearrangements, known to be the driver of convergent-extension [20; 21; 22; 23; 24], still provide a rich, underappreciated avenue for shape control.
We fix the cell mechanical properties and assume negligible bond-tension fluctuations. The non-dimensional VM work function is given by [16; 17; 18; 19]:
\[E=\frac{1}{2}\sum_{\alpha}\left[\left(A^{\alpha}-1\right)^{2}+\Gamma\left(P^{ \alpha}-P_{0}\right)^{2}\right]+\frac{\Lambda}{2}\sum_{b\in\partial S}\ell^ {b}, \tag{1}\]
for cells with unit preferred area. Superscripts \(\alpha\) and \(b\) denote cell and junction identity, respectively, while \(\Gamma\) and \(\Lambda\) are the perimeter stiffness and line tension on cellular junctions. \(\ell^{b}\) is the junction length. The term \(P_{0}=-\Lambda/2\Gamma\) is a preferred perimeter. The third term, present in tissue patches, \(S\), is a line tension on cellular junctions at the patch boundary, \(\partial S\). We focus here on the viscoelastic solid regime.
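As a minimal illustration of this work function (our own sketch, not the authors' simulation code), eq. (1) can be evaluated for a polygonal tissue stored as vertex positions together with per-cell vertex loops; the patch boundary enters only through a user-supplied list of boundary edges, and the parameter values below are arbitrary.

```python
import numpy as np

def vm_energy(verts, cells, boundary_edges, Gamma, Lam):
    """Non-dimensional vertex-model work function, eq. (1), with unit
    preferred cell area and preferred perimeter P0 = -Lam / (2 Gamma)."""
    P0 = -Lam / (2.0 * Gamma)
    E = 0.0
    for cell in cells:                      # cell = ordered vertex indices
        p = verts[cell]
        q = np.roll(p, -1, axis=0)
        area = 0.5 * np.abs(np.sum(p[:, 0] * q[:, 1] - q[:, 0] * p[:, 1]))
        perim = np.sum(np.linalg.norm(q - p, axis=1))
        E += 0.5 * ((area - 1.0) ** 2 + Gamma * (perim - P0) ** 2)
    for i, j in boundary_edges:             # line tension on patch boundary
        E += 0.5 * Lam * np.linalg.norm(verts[i] - verts[j])
    return E

# One unit-area hexagonal cell, no boundary term, as a smoke test.
ang = np.pi / 3.0 * np.arange(6)
hexagon = np.sqrt(2.0 / (3.0 * np.sqrt(3.0))) * np.c_[np.cos(ang), np.sin(ang)]
print(vm_energy(hexagon, [list(range(6))], [], Gamma=0.1, Lam=0.05))
```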
direction within the tissue by setting a junction angle with increased likelihood of participation in an active T1 event. We call this the _T1 axis_.
In order to describe the changing shape of a tissue patch under actively driven T1 event dynamics, we consider the _deformation pathway_ of the patch, which is a discrete sequence of tissue states in which each state is the result of a single random active T1 event and subsequent mechanical relaxation, possibly including "passive" T1 events that occur naturally during this relaxation. As we are working in the solid regime, such passive T1 events are rare and do not trigger avalanching T1s [25]. For the remainder of this Letter we simply use the term "T1 event" for an actively driven T1 event. We define \(\mathcal{N}_{\mathrm{T1}}\) to be the number of random T1 events composing the deformation pathway and work within a quasi-static approximation where intermediate tissue states reach mechanical stability prior to the next T1 event [16; 26]. Additionally, we fix the number of cells in a simulation by dividing a randomly chosen cell whenever a mechanically-driven extrusion occurs [19].
In what follows, we study two mechanisms of deformation. First, we fix \(\mathcal{N}_{\mathrm{T1}}\) and deform the tissue through T1 events that tend to be aligned with an imposed T1 axis. We refer to T1 events of this kind as _directed_. The second mechanism we consider is a dynamical steady state of tissue shape reached in the limit \(\mathcal{N}_{\mathrm{T1}}\rightarrow\infty\) together with a counterbalancing restoring force. Surprisingly, the required restoring force can naturally be provided by an entropic mechanism reminiscent of the entropic spring of polymer physics [27].
We now focus on the spontaneous strains generated by the action of many subsequent directed T1 events. We introduce a preferred direction within the tissue by setting a junction angle with increased likelihood of participation in an active T1 event. We call this the _T1 axis_, \(\Theta_{\mathrm{T1}}\), and set \(\Theta_{\mathrm{T1}}=\pi/2\). We then sample the junctions according to an approximate normal distribution \(p\left(\Delta\theta_{b}\right)\sim\exp\left[-(\Delta\theta_{b})^{2}/(2\sigma)^{2}\right]\), where \(\Delta\theta_{b}\) is the angular difference between a junction and \(\Theta_{\mathrm{T1}}\). The distribution spread, \(\sigma\), which we set to \(0.9\), can be used to tune the rate at which a tissue elongates.
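As a concrete illustration, the following minimal Python sketch draws the junction participating in the next directed T1 event according to the angular weighting above. It assumes junction orientations are defined modulo \(\pi\) (so angular differences are wrapped to \([-\pi/2,\pi/2]\)); the function and variable names are illustrative and do not come from an existing vertex-model code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_directed_t1(junction_angles, theta_t1=np.pi / 2, sigma=0.9):
    """Pick the junction participating in the next directed T1 event.

    Junctions are sampled with probability ~ exp[-(dtheta)^2/(2 sigma)^2],
    peaked about the T1 axis theta_t1, mirroring the distribution in the text.
    """
    # Junction orientation is only defined modulo pi, so wrap the angular
    # difference into [-pi/2, pi/2] (an assumption of this sketch).
    dtheta = (junction_angles - theta_t1 + np.pi / 2) % np.pi - np.pi / 2
    weights = np.exp(-dtheta**2 / (2.0 * sigma) ** 2)
    return rng.choice(len(junction_angles), p=weights / weights.sum())

# Toy usage: 200 junctions with random orientations.
angles = rng.uniform(0.0, np.pi, size=200)
idx = sample_directed_t1(angles)
print(f"selected junction {idx} at angle {angles[idx]:.2f} rad")
```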
Note that different preparations of the tissue are possible. For example, one might apply directed T1s quasi-statically, through a deformation pathway as described previously and depicted in Fig. 1(a). Or, directed T1s may be simultaneously "pre-loaded" into the tissue and released to act all at once, a scenario depicted in Fig. 1(b). In both cases similar shaping is achieved, though the second case limits the number of directed T1s available. We therefore focus for the remainder of this Letter on quasi-static deformation pathways.
To calculate the spontaneous strain along the tissue's long axis we use the length, \(\ell\), of the longer side of the smallest-area rectangle that can enclose the tissue at any given state, Fig. 1 (a). \(\ell\) is equivalent to the maximum width of the convex hull of the tissue patch [28]. We start with circular tissue configurations and use the diameter \(\ell_{0}\) to estimate the induced spontaneous strain as \(\epsilon=(\ell-\ell_{0})/\ell_{0}\).
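The long-axis measurement itself is straightforward to implement. The sketch below, assuming a cloud of cell-vertex positions as input, computes \(\ell\) via the standard fact that the minimum-area enclosing rectangle shares an edge direction with the convex hull; the toy point cloud and \(\ell_{0}\) are placeholders.

```python
import numpy as np
from scipy.spatial import ConvexHull

def long_axis_length(points):
    """Longer side of the minimum-area rectangle enclosing 2D points.

    The minimum-area enclosing rectangle shares an edge direction with
    the convex hull, so it suffices to scan hull-edge orientations.
    """
    hull = points[ConvexHull(points).vertices]
    edges = np.diff(np.vstack([hull, hull[:1]]), axis=0)
    best_area, best_long = np.inf, None
    for a in np.arctan2(edges[:, 1], edges[:, 0]):
        c, s = np.cos(-a), np.sin(-a)
        rot = hull @ np.array([[c, -s], [s, c]]).T  # align edge with x-axis
        w, h = rot.max(axis=0) - rot.min(axis=0)
        if w * h < best_area:
            best_area, best_long = w * h, max(w, h)
    return best_long

# Toy usage: points uniform in a disc of diameter ell_0 = 2 stand in for
# cell vertices of an initially circular patch; eps is then close to zero.
rng = np.random.default_rng(1)
pts = rng.normal(size=(500, 2))
pts *= np.sqrt(rng.uniform(size=(500, 1))) / np.linalg.norm(pts, axis=1, keepdims=True)
ell_0 = 2.0
eps = (long_axis_length(pts) - ell_0) / ell_0
print(f"spontaneous strain estimate eps = {eps:.3f}")
```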
In Fig. 2 we show the dependence of the mean spontaneous strain, \(\bar{\epsilon}\), on \(\mathcal{N}_{\mathrm{T1}}\) demonstrating that topological rearrangements alone can drive a tissue patch into different shape configurations without requiring abrupt deformations of the individual cells. The increasing behavior of \(\bar{\epsilon}\) with \(\mathcal{N}_{\mathrm{T1}}\) underscores the anisotropic reshaping of the tissue. Note that the values at which \(\bar{\epsilon}\) plateaus increase with the system size, a consequence of the fact that patches can adopt more elongated configurations as the number of cells in the tissue increases.
The linear behavior of \(\bar{\epsilon}\) in the initial regime highlights the dominant role directed T1 events play in reshaping the patch. In a tissue patch of a given size one expects that individual directed T1 events will, on average, induce the same degree of elongation. Thus, as single, directed T1 elongations are added, the total elongation should scale linearly with \(\mathcal{N}_{\mathrm{T1}}\). The relative elongation effect of a single T1 is proportional to the linear size of the patch, yielding the curve collapse evident in Fig. 2.
Is it possible that a simple change to the model might allow for dynamical steady states to be reached within the linear elastic regime, before the finite-system-size-governed stalling of \(\bar{\epsilon}\)? What is required is an effective restoring force. If the density of states associated with configurations of tissue patches of different elongations is skewed in favor of homogeneous configurations, then an entropic force could naturally provide such a mechanism, similar to that found in an entropic spring [27]. Individual T1 events can be thought of as Markov moves acting on the space of patch configurations; therefore, completely randomized, _undirected_ T1 events may be capable of providing the effective temperature needed to allow such a restoring force [29].

Figure 1: (a) Schematic of a Markov-chain-type process driven by the subsequent action of random T1 events. The likelihood of each tissue state depends on its long-axis strain, as shown on the schematic density of states \(g(\epsilon)\). (b) An elongated tissue patch resulting from \(\mathcal{N}_{\mathrm{T1}}\) simultaneous directed T1 events acting on the starting patch \(\mathbf{X}\).
Consider the deformation gradient tensor \(\Lambda_{ij}=\partial x_{i}/\partial X_{j}\), which characterizes changes in infinitesimal distances between a reference elastic configuration, \(\mathbf{X}\), and a current configuration, \(\mathbf{x}\). Assuming spatially uniform deformations between current and reference states, one may linearly transform the reference state as \(x_{i}=\Lambda_{ij}X_{j}\). By introducing the displacement vector, \(\mathbf{u}\), defined to be \(\mathbf{x}-\mathbf{X}\), we may rewrite \(\Lambda_{ij}=\delta_{ij}+u_{i,j}\), where \(u_{i,j}\) and \(\delta_{ij}\) are the gradient of the displacement vector and the Kronecker symbol, respectively. We focus on the case where a current configuration \(\mathbf{x}\equiv\mathbf{x}^{(n)}\) can be recast as a series of \(n\) previous configurations \(\mathbf{x}^{(0)}\), \(\mathbf{x}^{(1)}\),..., \(\mathbf{x}^{(n-1)}\), with \(\mathbf{x}^{(0)}\equiv\mathbf{X}\). Each intermediate state \(\mathbf{x}^{(m)}\) is obtained through its previous state by \(x_{i}^{(m)}=\Lambda_{ij}^{(m)}x_{j}^{(m-1)}\), as shown in Fig. 1(a), implying that our deformation gradient tensors can be decomposed to relate \(\mathbf{x}^{(n)}\) with \(\mathbf{X}\) as:
\[x_{i}^{(n)}(\mathbf{X})=\Lambda_{ik}^{(n)}\Lambda_{kl}^{(n-1)}\ldots\, \Lambda_{mj}^{(1)}X_{j}. \tag{2}\]
Defining \(\tilde{\Lambda}_{ij}=\Lambda_{ik}^{(n)}\Lambda_{kl}^{(n-1)}\ldots\Lambda_{mj}^ {(1)}\) and expressing it in terms of displacement vector gradients we arrive at:
\[\tilde{\Lambda}_{ij}=(\delta_{ik}+u_{i,k}^{(n)})(\delta_{kl}+u_{k,l}^{(n-1)}) \ldots(\delta_{mj}+u_{m,j}^{(1)}). \tag{3}\]
Provided that the deformations between subsequent intermediate states \(k\) satisfy a small-displacement approximation, \(\left|\partial u_{i}^{(n-k+1)}(\mathbf{x}^{(n-k)})/\partial x_{j}^{(n-k)}\right|\ll 1\), we keep terms up to linear order in all components of \(u_{i,j}\) and write \(\tilde{\Lambda}_{ij}=\delta_{ij}+u_{i,j}^{(1)}+u_{i,j}^{(2)}+\ldots+u_{i,j}^{(n)}\equiv\delta_{ij}+\tilde{u}_{i,j}\) [30; 31; 32], bringing \(\tilde{\Lambda}_{ij}\) into the familiar linearized form. We can thus use the linear approximation of the strain tensor, \(\tilde{\epsilon}_{ij}\), relating reference and current configurations. \(\tilde{\epsilon}_{ij}\) then satisfies the additive property:
\[\tilde{\epsilon}_{ij}=\epsilon_{ij}^{(1)}+\epsilon_{ij}^{(2)}+\ldots+\epsilon _{ij}^{(n)}=\left(\tilde{u}_{i,j}+\tilde{u}_{j,i}\right)/2. \tag{4}\]
Here, each \(\epsilon_{ij}^{(m)}\) is the result of a single T1 event, and as long as the elongation induced by this event is small compared to the long axis of the tissue, we may safely apply the additive strain property. Given that each of the imposed T1 events is of random origin, we assume that the mechanical response of the tissue, as manifested through the additive strains, is also random, even though it may be modulated by the quasi-static approximation that connects subsequent tissue states. The \(\epsilon_{ij}^{(k)}\)'s that _additively_ compose \(\tilde{\epsilon}_{ij}\) (as per Eq. 4) are thus independent and identically distributed random tensors, and we apply the central limit theorem (CLT) to conclude that \(\tilde{\epsilon}_{ij}\) is normally distributed [33].
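This CLT argument is easy to check numerically. In the sketch below, i.i.d. toy increments stand in for the per-event strains \(\epsilon_{ij}^{(m)}\); the increment distribution is invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# i.i.d. toy strain increments standing in for the per-event strains: small
# compared to the long axis, zero mean (the distribution is illustrative).
n_events, n_samples = 400, 2000
increments = rng.uniform(-5e-3, 5e-3, size=(n_samples, n_events))

# Net strain per Eq. (4): the additive composition of the increments.
eps_net = increments.sum(axis=1)

# The CLT predicts a normal distribution for the net strain.
stat, pvalue = stats.normaltest(eps_net)
print(f"D'Agostino normality test p-value: {pvalue:.2f}")  # typically >> 0.05
```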
Now consider a model in which a tissue patch is deformed not only through directed T1 events but also through undirected ones. As alluded to previously, a restoring force must act on the tissue in combination with directed T1s in order to allow for steady states within the elastic regime. The CLT approximation implies normally distributed net spontaneous strains; together with the effective temperature supplied by undirected T1 events, this provides the ingredients required for an entropic restoring force. We assume that directed and undirected T1 events occur independently of each other and non-simultaneously, and we assign them directed and undirected probabilities \(p_{\mathrm{d}}\) and \(p_{\mathrm{u}}\), respectively, such that \(p_{\mathrm{d}}+p_{\mathrm{u}}=1\).
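The sketch below caricatures such a mixed deformation pathway in a single scalar degree of freedom: each directed event adds a fixed elongation increment, while undirected events contribute zero-mean noise plus a weak linear restoring term standing in for the entropic force they generate in the tissue. All parameters are illustrative and are not derived from the vertex model.

```python
import numpy as np

rng = np.random.default_rng(3)

def toy_pathway(p_d, n_t1=20_000, delta=2e-3, kappa=10.0):
    """Toy 1D caricature of a mixed directed/undirected deformation pathway.

    Each directed T1 (probability p_d) adds a fixed elongation increment
    delta; each undirected T1 (probability p_u = 1 - p_d) contributes
    zero-mean noise plus a weak linear restoring term standing in for the
    entropic force.  All parameters are illustrative only.
    """
    eps, trace = 0.0, []
    for _ in range(n_t1):
        if rng.random() < p_d:
            eps += delta
        else:
            eps += rng.normal(0.0, delta) - kappa * eps * delta
        trace.append(eps)
    return np.mean(trace[n_t1 // 2:])  # steady-state estimate

for p_d in (0.1, 0.5, 0.9):
    print(f"p_d = {p_d}: steady-state eps ~ {toy_pathway(p_d):.3f}")
```

Consistent with the trend discussed next, the toy steady-state elongation grows monotonically with \(p_{\mathrm{d}}\).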
In Fig. 3 (a) the density of states with respect to the spontaneous strain, \(g(\epsilon)\), is shown for a fixed system size and increasing values of \(p_{\mathrm{d}}\). Each density of states was calculated by acting on the tissue patch with a total of \(\mathcal{N}_{\mathrm{T1}}=10^{5}\) events. Each density of states is very well approximated by a Gaussian distribution, supporting the CLT hypothesis. As conjectured, undirected T1 events provide a restoring mechanism, as the peaks of \(g(\epsilon)\) shift towards smaller \(\epsilon\) as \(p_{\mathrm{d}}\) decreases. Moreover, as \(p_{\mathrm{d}}\to 0\), the density of states becomes less spread and more peaked, a further testament to the fact that the configuration space of the tissue is dominated by compact rather than "stringy" configurations. We illustrate this distinction in the late states of the Markov-chain schematic of Fig. 1.

Figure 3: Long-axis strain (Poisson's ratio) densities of states, \(g(\epsilon)\) (\(g(\nu)\)), for a fixed number of cells N\({}_{\mathrm{cells}}\) and increasing directed probabilities \(p_{\mathrm{d}}\), all for VM parameters well within the solid regime, \(\Lambda=0.12\) and \(\Gamma=0.04\).
Defining \(\lambda=\ell/\ell_{0}\) and \(\lambda_{\perp}=\ell_{\perp}/\ell_{0}\), with \(\ell_{\perp}\) the elongation along the short, or perpendicular, axis of the tissue, a spontaneous version of Poisson's ratio can be calculated as \(\nu=-\log\lambda_{\perp}/\log\lambda\) [10]. In Fig. 3 (b) we show the Poisson's ratio density of states, \(g(\nu)\), for the same parameter values as Fig. 3 (a). In contrast to the density of states for \(\epsilon\), the distribution of \(\nu\) values does not follow the additive property for random strains, and the CLT cannot be applied. Instead, \(\nu\) appears to follow a ratio distribution, as it is built by taking the ratio (of the logarithms) of two random variables. Indeed, the \(\nu\) densities of states are accurately described by skew normal distributions, yielding \(R^{2}\) values close to 1.
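A sketch of the corresponding fit, with toy \(\lambda\) and \(\lambda_{\perp}\) samples standing in for measured patch dimensions (the lognormal parameters are invented for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Toy samples of lambda and lambda_perp standing in for measured patch
# dimensions (the lognormal parameters are invented for illustration).
lam = np.exp(rng.normal(0.20, 0.03, size=5000))        # long-axis stretch
lam_perp = np.exp(rng.normal(-0.10, 0.04, size=5000))  # short-axis contraction
nu = -np.log(lam_perp) / np.log(lam)

# Fit a skew normal, as used for g(nu) in the text.
a, loc, scale = stats.skewnorm.fit(nu)
print(f"skewnorm fit: shape={a:.2f}, loc={loc:.2f}, scale={scale:.2f}")
```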
We now have the ingredients needed to produce an atlas relating cell intercalations to the parameters of spontaneous strain-mediated shape programmability. In Fig. 4 (a) and (b) we respectively quantify the dependence of the mean spontaneous strain and Poisson's ratio on the system size for increasing fixed values of \(p_{\mathrm{d}}\), again indicating the existence of a preferred elongation of the tissue patches as a function of the activity.
Both \(\bar{\epsilon}\) and \(\bar{\nu}\) are consistent with power-law behavior, at least for the observed decade in \(N_{\mathrm{cells}}\). Moreover, we observe two regimes in \(p_{\mathrm{d}}\). For \(p_{\mathrm{d}}\lesssim 0.5\), \(\bar{\epsilon}\) monotonically transitions from negative to positive scaling exponents until saturating at values consistent with a power-law exponent of \(\frac{1}{2}\) for \(p_{\mathrm{d}}\gtrsim 0.5\). This is likely the same \(\frac{1}{2}\) scaling that collapses the mean strain curves for fixed numbers of directed T1s. For small \(p_{\mathrm{d}}\), the entropic shape-restoring force induced by undirected T1s keeps the patch more compact, as shown by \(\bar{\epsilon}\sim 0\). For \(p_{\mathrm{d}}=0\) we observe a clear negative slope, implying that shape perturbation from undirected T1 events becomes negligible as the number of cells increases. As \(p_{\mathrm{d}}\) increases, directed T1 events bias the strain towards larger values, but the background noise, still dominated by undirected events, keeps the strain relatively small. Only when \(p_{\mathrm{d}}\sim p_{\mathrm{u}}\) are there well-defined linear trends in \(\bar{\epsilon}\). The power-law-like regime of \(\bar{\nu}\) is similar. Linearity does not hold for \(p_{\mathrm{d}}\sim 0\) but is clearly present as \(p_{\mathrm{d}}\) increases.
Finally we look at the variation of \(\bar{\epsilon}\) and \(\bar{\nu}\) with respect to \(p_{\mathrm{d}}/p_{\mathrm{u}}\). Here we find that \(\bar{\epsilon}\) and \(\bar{\nu}\) each exhibit behavior commensurate with exponential decay to an asymptotic value. The decay rates are largely insensitive to \(N_{\mathrm{cells}}\), whereas the asymptotic value approached for large \(p_{\mathrm{d}}/p_{\mathrm{u}}\) is determined by \(N_{\mathrm{cells}}\).
In this Letter, we have generated an explicit mapping between collective cell neighbor rearrangements and the parameters required for active shape programmability in a coarse-grained, continuum model. We have shown these maps both for fixed numbers of directed T1 events as well as dynamical steady states balancing directed T1 events driving elongation and undirected T1 events providing an entropic restoring force. Along the way we have also demonstrated the existence of an unexpected analogy between these tissues and the physics of entropic springs.
Figure 4: Shape programmability parameters for varying patch sizes and relative activity, for VM parameters well within the solid regime, \(\Lambda=0.12\) and \(\Gamma=0.04\). (a-b) Mean long-axis strain (Poisson's ratio) as a function of the number of cells N\({}_{\mathrm{cells}}\) and increasing directed probabilities \(p_{\mathrm{d}}\). (c-d) Mean long-axis strain (Poisson's ratio) as a function of \(p_{\mathrm{d}}/p_{\mathrm{u}}\) and different numbers of cells. The error bars in all cases represent the standard errors of the means. (e) Power-law exponents \(\beta_{\epsilon}\) and \(\beta_{\nu}\) for each of the linear fits shown in (a) and (b), respectively. The vertical dashed line at \(\beta_{\epsilon}=0.5\) denotes the value at which \(\beta_{\epsilon}\) appears to converge as \(p_{\mathrm{d}}\to 1\). (f) Exponential decay constants \(\alpha_{\epsilon}\) and \(\alpha_{\nu}\) for the exponential fits shown in (c) and (d), respectively.
Many interesting directions still remain open. Our approach could be generalized to other classes of topological events, including cell division, death, or extrusion. Extending the computational model to larger systems that support gradients in alignment would allow an examination of interactions between the coarse-graining reported here and these gradients, revealing higher order contributions to shape programming outcomes. Furthermore, the status of the steady state tissue as an entropically-driven system could also be expanded on: constructing a formal effective temperature from the activity would allow both explicit stress derivations as well as other applications of traditional thermodynamics. Finally, a more complete exploration of different VM parameter regimes, especially close to the fluid transition, could be useful in capturing varying morphogenetic contexts.
Ultimately, we believe that tissue shape programming through actively established spontaneous strains may provide a new and powerful way to understand the extraordinary shapes and forms of life. Still, how a collective community of cells acquires complex 3D form with robustness and precision remains a deep and beautiful question, and there is much yet to learn.
###### Acknowledgements.
We are happy to acknowledge useful discussions with F. Julicher, S.W. Grill, M. Popovich, N.A. Dye, A. Materne, J. Fuhrmann, A. Krishna, and M. Staddon. We are particularly grateful to S.W. Grill and M. Staddon for also providing feedback on an in-depth reading of the manuscript. This work was funded by the German Federal Ministry of Education and Research under grant number 031L0160. C.M.D. was further supported by the European Union's Horizon 2020 Research and Innovation Programme under grant agreement no. 829010 (PRIME) during the completion of the manuscript.
|
2306.12531 | Fast ion transport in quasisymmetric equilibria in the presence of a
resonant Alfvénic perturbation | Significant progress has been made in designing magnetic fields that provide
excellent confinement of the guiding center trajectories of alpha particles
using quasisymmetry (QS). Given the reduction in this transport channel, we
assess the impact of resonant Alfv\'{e}n eigenmodes (AEs) on the guiding center
motion. The AE amplitudes are chosen to be consistent with experimental
measurements and large-scale simulations. We evaluate the drift resonance
condition, phase-space island width, and island overlap criterion for
quasisymmetric configurations. Kinetic Poincar\'{e} plots elucidate features of
the transport, including stiff transport above a critical perturbation
amplitude. Our analysis highlights key departures from the AE-driven transport
in tokamaks, such as the avoidance of phase-space island overlap in
quasihelical configurations and the enhanced transport due to wide phase-space
islands in low magnetic shear configurations. In configurations that are closer
to QS, with QS deviations $\delta B/B_0 \lesssim 10^{-3}$, the transport is
primarily driven by the AE, while configurations that are further from QS,
$\delta B/B_0 \sim 10^{-2}$, experience significant transport due to the
QS-breaking fields in addition to the AE. | Elizabeth J. Paul, Harry E. Mynick, Amitava Bhattacharjee | 2023-06-21T19:26:08Z | http://arxiv.org/abs/2306.12531v2 | # Fast ion transport in quasisymmetric equilibria in the presence of a resonant Alfvenic perturbation
###### Abstract
Significant progress has been made in designing magnetic fields that provide excellent confinement of the guiding center trajectories of alpha particles using quasisymmetry (QS). Given the reduction in this transport channel, we assess the impact of resonant Alfven eigenmodes (AEs) on the guiding center motion. The AE amplitudes are chosen to be consistent with experimental measurements and large-scale simulations. We evaluate the drift resonance condition, phase-space island width, and island overlap criterion for quasisymmetric configurations. Kinetic Poincare plots elucidate features of the transport, including stiff transport above a critical perturbation amplitude. Our analysis highlights key departures from the AE-driven transport in tokamaks, such as the avoidance of phase-space island overlap in quasihelical configurations and the enhanced transport due to wide phase-space islands in low magnetic shear configurations. In configurations that are closer to QS, with QS deviations \(\delta B/B_{0}\lesssim 10^{-3}\), the transport is primarily driven by the AE, while configurations that are further from QS, \(\delta B/B_{0}\sim 10^{-2}\), experience significant transport due to the QS-breaking fields in addition to the AE.
## 1 Introduction
Energetic particles have historically been challenging to confine in stellarator configurations due to the possibility of unconfined orbits and resonances exposed at low collisionality. These difficulties must be overcome to develop a stellarator reactor concept, as excessive alpha losses before thermalization can impact power balance and impart damage to plasma-facing components. In recent years, significant progress has been made in designing stellarator magnetic fields that can confine the orbits of fusion-born alpha particles without perturbations (Bader _et al._, 2019; Landreman & Paul, 2022; Landreman _et al._, 2022). However, reducing the guiding center orbit loss mechanisms may make mode-particle interactions relatively significant.
The interaction of Alfven eigenmodes (AEs) with energetic particles has been shown to drive substantial flattening of the fast-ion profile in tokamak experiments (Heidbrink _et al._, 2008). Alfvenic activity has also been observed on several stellarator configurations, including HSX (Deng _et al._, 2009), CHS (Takechi _et al._, 2002), LHD (Toi _et al._, 2011), W7-AS (Weller _et al._, 1994), TJ-II (Melnikov _et al._, 2014), W7-X (Rahbarnia _et al._, 2020), and Heliotron-J (Yamamoto _et al._, 2007). Alfvenic instabilities have been thought to be potentially benign in a stellarator reactor due to the stellarator's ability to operate at high density (Helander _et al._, 2012). However, fast-ion-driven modes may still be destabilized at high density: LHD modeling indicates that Alfvenic activity remains present even at fast ion
beta of \(\approx 0.05\%\)(Varela _et al._, 2017). For comparison, using the profiles from the ARIES-CS stellarator reactor study with density \(\approx 5\times 10^{20}\) m\({}^{-3}\)(Ku _et al._, 2008), the fast ion beta is \(\approx 0.2\%\). It remains to be seen to what extent Alfvenic activity can be controlled in a stellarator reactor by manipulating the density profile or optimizing the magnetic field.
Significant recent work has focused on energetic particle physics in quasisymmetric configurations based on properties of guiding center orbits (Bader _et al._, 2021; LeViness _et al._, 2022; Paul _et al._, 2022), but relatively little has been studied with respect to AE-driven transport. AE stability of the quasiaxisymmetric CFQS configuration has been evaluated using the linear gyrofluid FAR3D code (Varela _et al._, 2021). However, a systematic study of the AE-driven transport in quasisymmetric configurations has not yet been performed. For equilibria sufficiently close to quasisymmetry (QS), phenomena previously observed on tokamaks are anticipated. Monte Carlo simulations indicate that a resonant perturbation induces some rapid convective transport due to phase-space islands near the boundaries. Island overlap occurs for larger perturbation amplitudes, \(\delta B^{r}/B_{0}\sim 10^{-3}\) where \(\delta B^{r}\) is the radial perturbed magnetic field and \(B_{0}\) is the equilibrium field strength, causing diffusive losses (Sigmar _et al._, 1992; Hsu & Sigmar, 1992). Because of the coupling of a single AE to the poloidal variation of the magnetic drifts, sideband resonances arise and can lead to island overlap even in the presence of a single AE (Mynick, 1993\(b\),_a_; Hsu & Sigmar, 1992). In perfect symmetry with the addition of a single perturbation harmonic, kinetic Poincare plots (White, 2011, 2012) can be employed to observe the formation of phase-space islands, island overlap, and chaos.
We aim to address unresolved questions in this area, including how the quasisymmetry helicity and deviations from quasisymmetry impact transport. Recent simulations (White _et al._, 2022; White & Duarte, 2023) of resonant AEs in W7-X and the precise QH equilibrium (Landreman & Paul, 2022) have indicated that even a small-amplitude Alfvenic perturbation, \(\delta B/B\sim 10^{-6}\), can lead to global flattening of the distribution function. This extreme sensitivity to perturbations is postulated to arise because the low magnetic shear of the equilibrium implies low transit frequency shear for passing particles. Here, we study the impact of magnetic shear in more detail by evaluating the island width and drift-harmonic overlap conditions for quasisymmetric configurations. As discussed later, our conclusions differ from these recent studies, which predict substantive diffusive losses for low AE perturbation amplitudes for equilibria very close to QS.
The impact of AEs on the fast-ion transport in a stellarator reactor is challenging to compute in practice, requiring knowledge of the saturated mode amplitude. The nonlinear saturation can be obtained from high-fidelity modeling (Todo _et al._, 2017; Feher, 2014; Spong _et al._, 2017), which depends on the details of the thermal and fast-ion profiles. Rather than attempt such calculations here, we consider the potential impact of a single Alfvenic perturbation on the phase-space integrability and resulting transport in reactor-scale equilibria designed to be close to quasisymmetry. We employ a "worst-case scenario" approach, in which an Alfvenic perturbation is chosen to strongly resonate with rational periodic passing orbits in the core for several quasisymmetric configurations. We consider several mode amplitudes consistent with experimental measurements and high-fidelity modeling. While physically, such a perturbation should correspond to an AE of the background plasma, we instead let the perturbation be a radially global mode with prescribed mode numbers, frequency, and perturbation amplitude. We assess the impact of the perturbation mode number, amplitude, and residual quasisymmetry error on the resulting transport. Phase-space Poincare plots are used to guide the analysis and assess the transition to chaos. While Poincare plots have been used to study the structure of phase space for energetic passing particles in the absence of time-dependent
perturbations for some stellarators (White _et al._, 2022), this technique has not yet been applied to assess the impact of AEs on the transport. We evaluate the extent to which this technique provides insight into the transport in configurations with varying deviations from quasisymmetry.
In Section 2, we outline the guiding center equations of motion with Alfvenic perturbations. In Section 3, we describe the theory for resonance, island width, and island overlap in QS configurations. Resonance analysis is performed for several equilibria designed to be close to QS in Section 4, and resonant perturbations are identified. Kinetic Poincare plots are employed in Section 5 to assess the formation of phase-space islands and chaos in the presence of the resonant perturbations. A Monte Carlo guiding center transport analysis is performed in Section 6. We conclude in Section 7.
## 2 Guiding-center motion in the presence of an Alfvenic perturbation
The guiding-center motion, \(\mathbf{R}(t)\), is described by the Lagrangian (Littlejohn, 1983),
\[L(\mathbf{R},\dot{\mathbf{R}},v_{\parallel})=q\left(\mathbf{A}+\frac{Mv_{\parallel}}{qB}\mathbf{B}\right)\cdot\dot{\mathbf{R}}-\frac{Mv_{\parallel}^{2}}{2}-\mu B-q\Phi, \tag{2.1}\]
where \(q\) is the charge, \(M\) is the mass, \(v_{\parallel}\) is the velocity in the direction of the magnetic field, and \(\mu\) is the magnetic moment. We assume the magnetic field is comprised of an equilibrium field, \(\mathbf{B}_{0}\), and a shear Alfvenic perturbation (White _et al._, 1983),
\[\mathbf{B}=\mathbf{B}_{0}+\nabla\times(\alpha\mathbf{B}_{0})\,, \tag{2.2}\]
while the scalar potential vanishes in the equilibrium, \(\Phi=\delta\Phi\). With the reduced MHD assumption (Kruger _et al._, 1998)--\(k_{\parallel}/k_{\perp}\ll 1\), where \(k_{\parallel}\) is the characteristic parallel wave number and \(k_{\perp}\) is the characteristic perpendicular wave number associated with perturbed quantities--the form of the perturbed field (2.2) implies that the linear perturbation to the field strength vanishes, \(\delta B=0\). Under the ideal MHD assumption, the corresponding scalar potential must satisfy \(\mathbf{B}_{0}\cdot\delta\mathbf{E}=0\), which implies
\[\nabla_{\parallel}\delta\Phi=-B_{0}\frac{\partial\alpha}{\partial t}. \tag{2.3}\]
The perturbed fields could be computed from a global Alfven eigenmode solver such as AE3D (Spong _et al._, 2010). For the fundamental studies here, we instead assume a single harmonic perturbation of the form,
\[\delta\Phi=\hat{\Phi}(\psi)\sin(\omega t+m\theta-n\zeta), \tag{2.4}\]
where \((\psi,\theta,\zeta)\) are Boozer coordinates such that the unperturbed magnetic field is expressed as,
\[\left\{\begin{array}{l}\mathbf{B}_{0}=\nabla\psi\times\nabla\theta-\iota(\psi)\nabla\psi\times\nabla\zeta,\\ \mathbf{B}_{0}=G(\psi)\nabla\zeta+I(\psi)\nabla\theta+K(\psi,\theta,\zeta)\nabla\psi,\end{array}\right. \tag{2.5}\]
\(m\) is the poloidal mode number, \(n\) is the toroidal mode number, and \(\omega\) is the frequency. We define the phase variable \(\eta=\omega t+m\theta-n\zeta\). The corresponding expression for \(\alpha\) is then obtained from (2.3), which we write schematically as,
\[\alpha=\hat{\alpha}(\psi)\sin(\eta). \tag{2.6}\]
We impose \(\hat{\Phi}\) and compute \(\hat{\alpha}\) so that a magnetic differential equation need not be inverted, which can give rise to singular solutions on rational surfaces.
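For the single-harmonic ansatz this computation reduces to a closed form. Combining (2.3) with (2.4) and (2.6), and using \(\mathbf{B}_{0}\cdot\nabla=B_{0}^{2}\left(\partial_{\zeta}+\iota\partial_{\theta}\right)/(G+\iota I)\) in Boozer coordinates, gives \(\hat{\alpha}=(n-\iota m)\hat{\Phi}/\left[\omega(G+\iota I)\right]\). The helper below encodes this expression; it is a sketch derived here with illustrative numbers, not a SIMSOPT routine.

```python
import numpy as np

def alpha_hat(phi_hat, m, n, omega, iota, G, I):
    """Perturbation amplitude alpha_hat from the imposed phi_hat.

    Encodes alpha_hat = (n - iota*m) * phi_hat / (omega * (G + iota*I)),
    obtained by inserting the single-harmonic ansatz into the ideal-MHD
    constraint grad_par(dPhi) = -B0 d(alpha)/dt in Boozer coordinates.
    A sketch derived here; not a SIMSOPT routine.
    """
    return (n - iota * m) * phi_hat / (omega * (G + iota * I))

# Illustrative numbers only (not the Table 1 parameters).
print(alpha_hat(phi_hat=1.0e3, m=15, n=10, omega=2 * np.pi * 5.0e4,
                iota=0.42, G=60.0, I=0.1))
```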
Since \(Mv_{\parallel}/(qB_{0})\) is small in comparison to the characteristic length scale of the equilibrium and \(\delta B=0\), the perturbed Lagrangian reads (Littlejohn, 1985),
\[L(\mathbf{R},\dot{\mathbf{R}},v_{\parallel})=q\left(\mathbf{A}_{0}+\alpha\mathbf{B}_{0}+\frac{Mv_{\parallel}}{qB_{0}}\mathbf{B}_{0}\right)\cdot\dot{\mathbf{R}}-\frac{Mv_{\parallel}^{2}}{2}-\mu B_{0}-q\delta\Phi. \tag{2.7}\]
The resulting equations of motion are integrated in Boozer coordinates using the SIMSOPT stellarator optimization and modeling package (Landreman _et al._, 2021).
To compare Alfvenic perturbations across configurations with different mode numbers, we will express the strength of the perturbation in terms of its normalized radial magnetic field,
\[\frac{\delta\mathbf{B}\cdot\nabla\psi}{B_{0}|\nabla\psi|}\approx\frac{\nabla\alpha\times\mathbf{B}_{0}\cdot\nabla\psi}{rB_{0}^{2}}=\frac{\hat{\alpha}(mG-nI)}{r(G+\iota I)}\cos(\eta). \tag{2.8}\]
Here we have assumed that the gradient scale length of the perturbed field is larger than that of the equilibrium. Since \(G\gg I\) for stellarator equilibria, we define the parameter \(\delta\hat{B}^{\psi}=m\hat{\alpha}/r\sim\delta\mathbf{B}\cdot\nabla\psi/(B_{0}| \nabla\psi|)\) to compare equilibria with respect to the strength of the perturbed radial field.
## 3 Resonance theory
We assume a quasisymmetric equilibrium for which the unperturbed field strength can be expressed as \(B_{0}(\psi,\chi)\) where \(\chi=\theta-N\zeta\) is the symmetry angle and \(N\) is an integer representing the symmetry helicity. (\(N=0\) for quasiaxisymmetry and \(N\neq 0\) for quasihelical symmetry.) Each equilibrium we consider in Section 4 is sufficiently close to quasisymmetry that such an assumption provides valuable insight into the resulting dynamics.
The radial drift over the unperturbed trajectories is analyzed in the presence of an Alfvenic perturbation in Appendix A, generalizing the theory of Mynick (1993_b_) to quasisymmetric configurations and moderate frequency perturbations. At moderate frequencies, the electrostatic potential enters the guiding center equations of motion. Because \(\mathbf{B}_{0}\cdot\delta\mathbf{E}=0\), particles continue to follow perturbed field lines to lowest order in the gyroradius but experience an additional \(\mathbf{E}\times\mathbf{B}\) drift. Under the assumption of quasisymmetry and stellarator symmetry, the unperturbed equations of motion can be written schematically as,
\[\left\{\begin{array}{l}\dot{\chi}=\omega_{\chi}+\sum_{j\neq 0}\chi_{j}\cos(j\chi),\\ \dot{\zeta}=\omega_{\zeta}+\sum_{j\neq 0}\zeta_{j}\cos(j\chi),\end{array}\right. \tag{3.1}\]
where \(\omega_{\chi}=\langle\dot{\chi}\rangle\) and \(\omega_{\zeta}=\langle\dot{\zeta}\rangle\) are the averaged drifts in the \(\chi\) and \(\zeta\) directions. The overdot represents a time derivative, and the average is performed over many toroidal transits, \(\langle A\rangle=\int_{0}^{T}dt\,A/T\). The summations represent the periodic contributions from the drifts.
For an Alfvenic perturbation of the form (2.4) to provide a net radial drift, a resonance condition must be satisfied,
\[\Omega_{l}=(m+l)\omega_{\chi}-(n-Nm)\omega_{\zeta}+\omega=0. \tag{3.2}\]
As discussed in Appendix A, the integer \(l\) arises due to coupling through the \(\chi\)-dependence of the magnetic drifts. If the drift dynamics (3.1) is dominated by a particular cosine
harmonic with integer \(j^{\prime}\), then \(l\) is assumed to be an integer multiple of \(j^{\prime}\). To simplify the analysis, the \(j^{\prime}=1\) harmonic of the field strength is assumed to be dominant, which holds near the magnetic axis (Garren & Boozer 1991) and for the equilibria of interest. More general expressions are provided in Appendix A. Under the assumption that the characteristic frequencies are approximately flux functions, the full island width associated with a given resonance is given by,
\[w_{l}^{\psi}=2\sqrt{\left|\frac{\psi_{l}}{\Omega_{l}^{\prime}(\psi)}\right|}\approx 2\sqrt{\left|\frac{\psi_{l\neq 0}}{(m+l)\omega_{\theta}^{\prime}(\psi)}\right|}, \tag{3.3}\]
where \(\psi_{l}\) is a cosine harmonic of the radial perturbed drift and we have made the approximation \(h^{\prime}(\psi)\approx\omega_{\theta}^{\prime}(\psi)/\omega_{\zeta}\). The perturbed radial drift, \(\delta\dot{\psi}\), can be evaluated along the unperturbed trajectory and expressed as a cosine series in \(\cos(\Omega_{l}t+\eta^{0})\) and \(\cos(\Omega_{l}t+\eta^{0}\mp\chi^{0})\) with coefficients given by \(\psi_{l}^{0}\) and \(\psi_{l}^{\pm}\); see Appendix A. Here \(\eta^{0}=m\theta^{0}-n\zeta^{0}\) is the initial phase. The scaling of these coefficients with the magnitude of the magnetic drifts, mode numbers, and perturbed radial field is summarized as,
\[\left\{\begin{array}{l}\psi_{0}^{0}\sim(\iota-\omega_{\theta}/\omega_{\zeta})J_{0}(\eta_{1})\delta\hat{B}^{\psi},\\ \psi_{l}^{0}\sim J_{l}(\eta_{1})\delta\hat{B}^{\psi},\\ \psi_{l}^{\pm}\sim J_{l\pm 1}(\eta_{1})\zeta_{1}\delta\hat{B}^{\psi},\end{array}\right. \tag{3.4}\]
where \(J_{l}\) are the Bessel functions of the first kind, \(\eta_{1}=\left(m\chi_{1}-(n-Nm)\zeta_{1}\right)/\omega_{\chi}\) with \(\chi_{1}\) and \(\zeta_{1}\) defined through (3.1), and \(\omega_{\theta}=\langle\dot{\theta}\rangle\). The full expressions for general \(j^{\prime}\) are provided in Appendix A.
When considering the dependence of the island width on the magnetic drifts in the small argument limit of the Bessel functions, the most significant radial transport will arise from \(\psi_{0}^{0}\), \(\psi_{\pm 1}^{0}\), or \(\psi_{\pm 1}^{\pm}\). In the limit of large mode numbers, \(\psi_{\pm 1}^{0}\) will dominate due to its dependence on \(\eta_{1}\). Considering the small argument limit of the Bessel function, the island width scales with \(\sqrt{\eta_{1}^{|l|}/(m+1)}\) for fixed \(\delta\hat{B}^{\psi}\), thus increasing the island width for quasihelical configurations due to the dependence of \(\eta_{1}\) on the helicity \(N\). Given the scaling of \(\eta_{1}\) with the mode numbers, the island width will roughly scale independently of \(m\) for \(|l|=1\) but will scale with \(m^{(|l|-1)/2}\) for \(|l|>1\). Finally, the island width decreases strongly with increasing \(|l|\).
Defining the passing orbital helicity as \(h=\omega_{\theta}/\omega_{\zeta}\), resonances occur where:
\[h=\frac{n-Nm-\omega/\omega_{\zeta}}{m+l}+N. \tag{3.5}\]
For a given primary resonance \(l\) at \(h=h_{0}\), additional sideband resonances may be excited for other drift harmonics \(l^{\prime}\) corresponding to neighboring periodic orbits. The spacing between the \(l\) and \(l+1\) resonances is given by,
\[\left(\Delta\psi\right)_{l}=\frac{1}{h^{\prime}(\psi)}\frac{h_{0}-N}{m+l+1}. \tag{3.6}\]
The ratio between the island width and the resonance spacing
\[\frac{w_{l}^{\psi}}{\left(\Delta\psi\right)_{l}}\approx\frac{m+l+1}{(h_{0}-N)\omega_{\zeta}}\sqrt{\left|\frac{\psi_{l}\omega_{\theta}^{\prime}(\psi)}{m+l}\right|} \tag{3.7}\]
provides a conservative estimate for the island overlap criterion, \(w_{l}^{\psi}/\left(\Delta\psi\right)_{l}\gtrsim 1\). Given
the scaling of \(\psi_{l}\) (3.4), the potential for island overlap increases with \(m\) and decreases with \(N\) for fixed \(\delta\tilde{B}^{\psi}\). In this way, quasihelical configurations are advantageous for preventing the transition to phase-space chaos. Finally, while shear in the transit frequency, \(\omega_{\theta}^{\prime}(\psi)\), reduces individual island widths, it promotes island overlap if multiple resonances are present.
## 4 Resonance analysis
We consider four equilibria optimized to be close to quasisymmetry: \(\beta=2.5\%\) quasihelical (QH) and quasiaxisymmetric (QA) equilibria with self-consistent bootstrap current (Landreman _et al._, 2022), a vacuum equilibrium with precise levels of quasiaxisymmetry (Landreman & Paul, 2022), and the quasiaxisymmetric NCSX li383 equilibrium (Koniges _et al._, 2003; Mynick _et al._, 2002). Each equilibrium considered is scaled to the minor radius (1.70 m) and the field strength (5.86 T) of ARIES-CS (Najmabadi _et al._, 2008). The rotational transform profiles and quasisymmetry error,
\[f_{QS}(s)=\frac{\sqrt{\sum_{Mn\neq Nm}\left(B_{m,n}^{c}(s)\right)^{2}}}{\sqrt {\left(B_{0,0}^{c}(s)\right)^{2}}}, \tag{4.1}\]
are shown in Figure 1, where the unperturbed field strength in Boozer coordinates is \(B_{0}(s,\theta,\zeta)=\sum_{m,n}B_{m,n}^{c}(s)\cos(m\theta-n\zeta)\).
We first identify the low-order periodic passing orbits present in the equilibrium to determine the impact of a potential resonant perturbation on the guiding center losses. Fusion-born alpha particles are initialized with equally-spaced pitch angle and radius values and followed for 500 toroidal transits. The net change in the poloidal angle, \(\Delta\theta\), and transit time, \(\Delta t\), are computed for each toroidal transit. In the integrable case, the average of the characteristic frequencies over many transits, denoted \(\omega_{\zeta}=2\pi/\langle\Delta t\rangle\), and \(\omega_{\theta}=\langle\Delta\theta\rangle/\langle\Delta t\rangle\), converges quickly with respect to the number of toroidal transits (Das _et al._, 2016). In this way, we obtain \(\omega_{\zeta}\) and \(\omega_{\theta}\) in the two-dimensional space \((s,\mu)\), where \(s=\psi/\psi_{0}\) is the normalized toroidal flux and \(\psi_{0}\) is the value of the toroidal flux on the boundary. All nonsymmetric modes are artificially suppressed for this frequency analysis so that all trajectories lie on KAM surfaces.
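A minimal sketch of this frequency calculation from per-transit data (the function and variable names are ours; in practice \(\Delta\theta\) and \(\Delta t\) would be accumulated from the integrated guiding-center trajectories):

```python
import numpy as np

def transit_frequencies(delta_theta, delta_t):
    """Transit-averaged characteristic frequencies from per-transit data.

    delta_theta, delta_t: arrays of the net poloidal angle change and the
    elapsed time for each toroidal transit of an unperturbed guiding-center
    orbit.  Returns (omega_zeta, omega_theta) as defined in the text.
    """
    delta_theta = np.asarray(delta_theta)
    delta_t = np.asarray(delta_t)
    omega_zeta = 2.0 * np.pi / delta_t.mean()
    omega_theta = delta_theta.mean() / delta_t.mean()
    return omega_zeta, omega_theta

# Toy usage with synthetic transits (a real calculation would take these
# from guiding-center trajectories followed for 500 toroidal transits).
rng = np.random.default_rng(5)
dth = 2.7 + 0.01 * rng.standard_normal(500)
dt = 4.0e-6 * (1 + 0.005 * rng.standard_normal(500))
wz, wth = transit_frequencies(dth, dt)
print(f"omega_zeta = {wz:.3e} rad/s, h = {wth / wz:.3f}")
```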
Figure 1: (a) Rotational transform and (b) quasisymmetry error (4.1) profiles for the four equilibria under consideration.
For the subsequent analysis, resonant perturbations are intentionally imposed, satisfying the condition (3.2) for a resonant surface near mid-radius. As many potential Alfvenic perturbations exist that resonate with a given periodic orbit, a comparison is made between \(m=1\), \(15\), and \(30\) perturbations. The perturbation with the highest mode number is chosen such that \(m\approx a/\rho_{EP}\) for the ARIES-CS reactor parameters, where \(a\) is the effective minor radius and \(\rho_{EP}\) is the energetic particle orbit width. With this choice, the radial width of the AE eigenstructure is predicted to be comparable to the EP orbit width, and the growth rate is maximized (Gorelenkov _et al._, 2018). The smaller values of \(m\) are more typical for current experiments such as LHD (Varela _et al._, 2017). We choose perturbation amplitudes in the range \(\delta\hat{B}^{\psi}\sim 10^{-4}-10^{-3}\), consistent with LHD modeling (Nishimura _et al._, 2013) and experimental measurements from TFTR (Nazikian _et al._, 1997) and NSTX (Crocker _et al._, 2013). In analyses of the impact of TAEs on guiding center confinement in tokamaks, alpha orbit stochasticity is typically present for \(\delta\hat{B}^{\psi}\sim 10^{-3}\), while for \(\delta\hat{B}^{\psi}<10^{-4}\) the losses are insignificant (Sigmar _et al._, 1992). Although the eigenfunction typically depends on the mode number, we choose a radially uniform mode structure for a conservative analysis.
Given a \(\omega_{\theta}/\omega_{\zeta}=p/q\) periodic orbit and poloidal mode number \(m\), the resonant wave frequency satisfying (3.2) can be expressed as \(\omega/\omega_{\zeta}=p^{\prime}/q\) for some integer \(p^{\prime}\). The toroidal mode number \(n\) and the frequency parameter \(p^{\prime}\) are chosen to satisfy the \(l=1\) resonance condition (3.2), given that the \(l=\pm 1\) resonance is predicted to give rise to the most substantial radial transport as described in Section 3. The value of \(n\) is chosen to yield small magnitudes of \(|p^{\prime}|\), corresponding with lower-frequency perturbations. The mode parameters are summarized in Table 1.
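Given tabulated frequency profiles, the resonant surface for a candidate \((m,n,\omega,l)\) can be located by root-finding on the helicity form (3.5) of the resonance condition (3.2). The sketch below assumes smooth profiles with a single bracketed root; the toy profiles are invented, and a scan over sign changes would generalize the bracketing.

```python
import numpy as np
from scipy.optimize import brentq

def resonant_surface(s, omega_theta, omega_zeta, m, n, N, omega, l=1):
    """Locate the flux surface where the Omega_l = 0 resonance (3.2) holds.

    Uses the helicity form (3.5): h = (n - N*m - omega/omega_zeta)/(m + l) + N,
    with h = omega_theta/omega_zeta interpolated from tabulated profiles.
    Assumes a single bracketed root on [s[0], s[-1]].
    """
    def target(ss):
        wz = np.interp(ss, s, omega_zeta)
        h = np.interp(ss, s, omega_theta) / wz
        return h - ((n - N * m - omega / wz) / (m + l) + N)
    return brentq(target, s[0], s[-1])

# Toy profiles (illustrative only, not from the four equilibria).
s = np.linspace(0.05, 0.95, 50)
omega_zeta = 1.0e6 * np.ones_like(s)
omega_theta = 1.0e6 * (0.35 + 0.15 * s)  # h rises from ~0.36 to ~0.49
print(resonant_surface(s, omega_theta, omega_zeta, m=15, n=7, N=0, omega=1.0e5))
```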
For each configuration, the profile of the effective orbit helicity \(h\) (3.5) is shown in Figure 2 for co- and counter-passing orbits (blue and orange, respectively). While the helicity profile does not change substantially between co- and counter-passing orbits, the resonance condition (3.5) is modified since the sign and magnitude of \(\omega_{\zeta}\) differs. Although the choice of pitch angle will impact the resonance condition, for the following calculations, we focus on co-passing orbits, \(v_{\parallel}/v_{0}=+1\). The horizontal black lines in the figures indicate the chosen resonant surface for the co-passing orbits. For the \(\beta=2.5\%\) QA and NCSX equilibria, the nearby resonant surfaces corresponding to other values of \(l\) are indicated by the colored horizontal lines. As \(m\) increases, the resonance spacing
decreases according to (3.6). For the vacuum QA equilibrium, the magnetic shear is very low, see Figure 1, and no sideband resonances exist. Although the shear is moderate for the \(\beta=2.5\%\) QH equilibrium, sideband resonances are not present due to the factor of \(N\) in the resonance spacing expression.

Table 1: Alfvénic perturbation mode parameters chosen to satisfy the resonance condition (3.2).
## 5 Kinetic Poincare plots
Given a magnetic field with exact quasisymmetry, a kinetic Poincare plot can be constructed analogously to the case of axisymmetric magnetic fields (Sigmar _et al._, 1992). For a quasisymmetric field \(B_{0}(\psi,\chi=\theta-N\zeta)\) in the absence of a perturbation, the canonical angular momentum,
\[P_{\zeta}=(G+NI)\left(\frac{Mv_{\parallel}}{B_{0}}+q\alpha\right)+q\left(N\psi-\psi_{P}\right), \tag{5.1}\]

and energy,

\[E=\frac{Mv_{\parallel}^{2}}{2}+\mu B_{0}+q\delta\Phi, \tag{5.2}\]

are conserved. When an Alfvenic perturbation is applied with single mode numbers \(n\) and \(m\), neither \(P_{\zeta}\) nor \(E\) is conserved, but a conserved quantity is obtained by moving with the wave frame,

\[\overline{E}_{n}=\left(n-Nm\right)E-\omega P_{\zeta}. \tag{5.3}\]

Given the velocity-space parameters--\(\mu\), \(\overline{E}_{n}\), and sign(\(v_{\parallel}\))--and specification of the position in space and time--\(t\), \(s\), \(\theta\), and \(\zeta\)--equation (5.3) provides a nonlinear equation for \(v_{\parallel}\) (Hsu & Sigmar, 1992).

Figure 2: The characteristic orbit helicity, \(h=\omega_{\theta}/\omega_{\zeta}\), is computed for co-passing (\(v_{\parallel}/v_{0}=+1\), blue) and counter-passing orbits (\(v_{\parallel}/v_{0}=-1\), orange). A low-order periodic orbit is selected near the mid-radius, denoted by the horizontal dashed black line. Alfvénic perturbations with several mode numbers \(m\) are chosen to resonate with this orbit periodicity. Sideband resonances are excited in the \(\beta=2.5\%\) QA and NCSX equilibria due to the angular dependence of the drifts, denoted by the colored horizontal lines. Because of the increased distance between resonances, no sidebands are excited in the vacuum QA or \(\beta=2.5\%\) QH equilibria for the mode numbers chosen.
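This inversion can be made explicit: with \(B_{0}\), \(\delta\Phi\), and \(\alpha\) evaluated at the given point and time, (5.3) combined with (5.1)-(5.2) is a quadratic in \(v_{\parallel}\). The helper below is a sketch with our own argument names, not a SIMSOPT routine.

```python
import numpy as np

def v_parallel_from_Ebar(Ebar_n, mu, sign_vpar, B0, dPhi, alpha, psi, psi_p,
                         G, I, N, m, n, omega, q, M):
    """Solve Eq. (5.3) for v_parallel at a given point and time.

    With B0, dPhi, and alpha evaluated at the particle position, (5.3)
    combined with (5.1)-(5.2) is a quadratic in v_parallel; the root with
    the requested sign is returned.
    """
    k = n - N * m
    a = 0.5 * k * M
    b = -omega * (G + N * I) * M / B0
    c = (k * (mu * B0 + q * dPhi)
         - omega * ((G + N * I) * q * alpha + q * (N * psi - psi_p))
         - Ebar_n)
    roots = np.roots([a, b, c])
    real = roots[np.isreal(roots)].real
    matches = real[np.sign(real) == sign_vpar]
    return matches[0] if matches.size else None

# Self-consistency check with toy numbers: build Ebar_n from a known
# v_parallel and recover it (all values illustrative only).
p = dict(mu=0.0, B0=5.86, dPhi=0.0, alpha=0.0, psi=0.3, psi_p=0.1,
         G=60.0, I=0.1, N=0, m=30, n=12, omega=3.0e5,
         q=2 * 1.602e-19, M=6.64e-27)
v0 = 1.3e7
E = 0.5 * p["M"] * v0**2 + p["mu"] * p["B0"] + p["q"] * p["dPhi"]
P = (p["G"] + p["N"] * p["I"]) * (p["M"] * v0 / p["B0"] + p["q"] * p["alpha"]) \
    + p["q"] * (p["N"] * p["psi"] - p["psi_p"])
Ebar = (p["n"] - p["N"] * p["m"]) * E - p["omega"] * P
print(v_parallel_from_Ebar(Ebar, sign_vpar=+1, **p))  # ~1.3e7
```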
Given the form for the phase factor of the perturbation (2.4), the resulting equations of motion depend on \((s,\chi,\zeta-\overline{\omega}_{n}t,v_{\parallel})\), where \(\overline{\omega}_{n}=\omega/(n-Nm)\). The resulting motion is generally 4D. If purely passing particles are considered, then sign\((v_{\parallel})\) is fixed, \(v_{\parallel}\) can be computed from the other coordinates as described above, and the resulting motion becomes 3D. For a fixed value of \(\overline{E}_{n}\) and \(\mu\), a kinetic Poincare map \(M(s,\chi)\rightarrow(s^{\prime},\chi^{\prime})\) is constructed by moving with the wave frame to eliminate the time dependence. Guiding center trajectories are followed until they intersect a plane of constant \(\zeta-\overline{\omega}_{n}t\). Therefore, the resulting Poincare section is 2D.
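The sketch below extracts such a section from a stored guiding-center time series by detecting crossings of the plane co-moving with the wave. It assumes the co-moving phase increases monotonically along the orbit and uses linear interpolation between samples; a production version would root-solve on the integrator's dense output.

```python
import numpy as np

def poincare_section(t, s, chi, zeta, omega_bar, plane=0.0, nfp_angle=2 * np.pi):
    """Extract kinetic Poincare points from a guiding-center time series.

    Records (s, chi) whenever zeta - omega_bar * t crosses `plane`
    (mod nfp_angle), i.e. a surface co-moving with the wave frame.
    Assumes the co-moving phase increases monotonically.
    """
    phase = np.mod(zeta - omega_bar * t - plane, nfp_angle)
    # A crossing happens where the wrapped phase jumps downward.
    idx = np.where(np.diff(phase) < -0.5 * nfp_angle)[0]
    frac = (nfp_angle - phase[idx]) / (nfp_angle - phase[idx] + phase[idx + 1])
    s_c = s[idx] + frac * (s[idx + 1] - s[idx])
    chi_c = chi[idx] + frac * (chi[idx + 1] - chi[idx])
    return s_c, np.mod(chi_c, 2 * np.pi)

# Toy check: a uniform orbit with zeta increasing linearly in time.
t = np.linspace(0.0, 1.0e-3, 200_001)
zeta = 1.0e6 * t
chi = 0.4e6 * t
s_traj = 0.5 + 0.01 * np.sin(0.4e6 * t)
sc, cc = poincare_section(t, s_traj, chi, zeta, omega_bar=2.0e5)
print(f"{sc.size} section points")
```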
When quasisymmetry is broken in the equilibrium, \(\overline{E}_{n}\) is no longer precisely conserved. Nonetheless, if the quasisymmetry errors are sufficiently small, the kinetic Poincare analysis provides insight into the resulting transport. The non-quasisymmetric modes are artificially suppressed when constructing the kinetic Poincare plots to aid the analysis. The impact of quasisymmetry-breaking modes will be assessed in the following Section.
In Figure 3, kinetic Poincare plots are displayed for particles with sign\((v_{\parallel})=+1\), \(\mu=0\), \(E=3.52\) MeV, and Alfvenic perturbations with parameters given in Table 1. The amplitude \(\hat{\Phi}\) is chosen such that \(\delta\hat{B}^{\psi}=10^{-3}\) on the \(s=1\) surface. Periodic orbits that satisfy the resonance condition \(\Omega_{l}=0\) appear as \(n/(m+l)\) periodic orbits in the kinetic Poincare plots.
For the NCSX and \(\beta=2.5\%\) QA configurations, a clear \(l=1\) island chain is apparent in the presence of the \(m=1\) perturbation. For the vacuum QA equilibrium, the perturbation shifts the orbit helicity slightly, moving the resonance outside the equilibrium due to the low magnetic shear. The resonance reappears by increasing the perturbation frequency by \(8\%\), see Figure 4. In the \(\beta=2.5\%\) QA equilibrium, in addition to the primary \(l=1\) resonance, the \(l=2\) resonance is apparent near \(s=0.1\). However, the island width is small due to the reduced magnitude of the \(J_{2}(\eta_{1})\) coupling parameter. As indicated by the resonance plots, Figure 2, none of the other equilibria contain the \(m=1\), \(l=2\) resonance.
As the mode number increases from \(m=15\) to \(m=30\), the \(l=1\) island width stays roughly the same for the \(\beta=2.5\%\) QA configuration, as indicated by the scaling of the island width formula (3.3)-(3.4) for fixed \(\delta\hat{B}^{\psi}\). For the vacuum QA configuration, the \(l=1\) resonance reappears for the \(m=15\) and \(m=30\) perturbations, its width being especially wide due to the low magnetic shear of the configuration. For both the vacuum QA and \(\beta=2.5\%\) QH configurations with the \(m=15\) and \(m=30\) perturbations, the island chain is wide enough to lead to visible destruction of nearby KAM surfaces and the formation of secondary island chains.
In the \(\beta=2.5\%\) QA equilibrium, overlap between the \(l=0\), \(1\), \(2\), and \(3\) resonances is observed with the \(m=15\) perturbation. Substantial island overlap is also observed in the NCSX equilibrium with the \(m=15\) perturbation. In the presence of the \(m=30\)
perturbation, strong island overlap is observed in the \(\beta=2.5\%\) QA and NCSX equilibria. However, the phase-space volume over which island overlap occurs is narrowed. As seen in Figure 2, the resonances become more closely spaced for the \(m=30\) perturbation compared to the \(m=15\) perturbation. However, because the island width scales as \(\sqrt{\eta_{1}^{|l|}}\) with \(\eta_{1}\ll 1\), island overlap does not occur for the large \(|l|\) resonant surfaces. This island-width scaling effectively reduces the non-integrable volume as \(m\) increases.
The impact of these phase-space features on the resulting transport will be discussed in Section 6.
Figure 3: Kinetic Poincaré plots are constructed using the mode parameters in Table 1 with the perturbation amplitude \(\delta\hat{B}^{\psi}=10^{-3}\). All QS-breaking harmonics of the equilibrium field are artificially suppressed for this analysis.
### Impact of quasisymmetry deviations
We now investigate the impact of the finite quasisymmetry deviations, quantified in Figure 1, on the kinetic Poincare analysis. When QS deviations are present, the motion of passing particles becomes 4D \((s,\chi,\zeta,t)\), and the Poincare section becomes 3D, \(M(s,\chi,\zeta)\rightarrow(s^{\prime},\chi^{\prime},\zeta^{\prime})\). Nonetheless, we can still visualize the Poincare map in the \((s,\chi)\) plane to assess the structure of phase space.
In Figures 5-6, we show a selection of the kinetic Poincare plots constructed with the same parameters as those in Figure 3, but without the suppression of the QS-breaking modes. For the case of the \(\beta=2.5\%\) QH equilibrium, the phase-space structure remains mostly unchanged, with the addition of small-scale mixing. In the case of the NCSX equilibrium, the phase-space mixing is amplified due to the enhanced symmetry breaking. The gross phase-space structure remains mostly unchanged in the presence of the \(m=1\) perturbation. In the presence of the \(m=15\) perturbation, the effective diffusion due to the non-integrability of the orbits destroys many of the remaining KAM surfaces and island chains, leading to a wide region of phase-space chaos. We conclude that most large-scale phase-space structure is preserved upon adding QS deviations, especially for equilibria close to QS, \(f_{QS}\sim 10^{-3}\). This finding contradicts recent results (White & Duarte, 2023) indicating substantial diffusive losses in an equilibrium very close to QH (\(f_{QS}\sim 10^{-4}\)) with an Alfvenic perturbation as small as \(\delta\hat{B}^{\psi}\sim 10^{-6}\).
## 6 Monte Carlo analysis
To assess the impact of the phase-space structure on the resulting transport, we perform Monte Carlo collisionless guiding center tracing simulations. We initialize 5000 particles uniformly in pitch angle and volume for \(s\in[0.25,0.75]\). Particles are followed for \(10^{-3}\) seconds or until they are considered lost when they cross through \(s=0\) or \(s=1\).
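A sketch of this initialization (names ours): sampling uniformly in \(s\) is used as a proxy for uniform-in-volume, which is reasonable when \(V(s)\) is close to linear in \(s\); an exact version would invert the configuration's \(V(s)\) profile.

```python
import numpy as np

rng = np.random.default_rng(6)

def initial_conditions(n_particles=5000, s_range=(0.25, 0.75)):
    """Sample alpha-particle guiding-center initial conditions.

    Uniform in pitch v_par/v on [-1, 1] and uniform in normalized flux s
    (a proxy for uniform-in-volume when V(s) is nearly linear in s).
    Angles are uniform on [0, 2*pi).
    """
    s = rng.uniform(*s_range, n_particles)
    theta = rng.uniform(0, 2 * np.pi, n_particles)
    zeta = rng.uniform(0, 2 * np.pi, n_particles)
    pitch = rng.uniform(-1.0, 1.0, n_particles)
    # Birth speed of a 3.52 MeV alpha particle.
    v0 = np.sqrt(2 * 3.52e6 * 1.602e-19 / (4 * 1.6605e-27))
    return s, theta, zeta, pitch * v0

s, th, ze, vpar = initial_conditions()
print(f"{len(s)} particles, v0 = {abs(vpar).max():.3e} m/s")
```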
Figure 4: (a) In the presence of the resonant perturbation of amplitude \(\delta\hat{B}^{\psi}=10^{-4}\), the characteristic frequencies \(\omega_{\theta}\) and \(\omega_{\zeta}\) shift, causing the resonance defined by \(\Omega_{l}=0\) to move outside the equilibrium due to the low shear, see Figure 3g. By increasing the mode frequency by \(8\%\), the resonance reenters. (b) The kinetic Poincaré plot of amplitude \(\delta\hat{B}^{\psi}=10^{-4}\) with the shifted frequency (\(\omega=2.15\) kHz) reveals the corresponding \(l=1\) island on the shifted resonant surface.
Figures 7-8 display results of tracing performed with the true equilibria (solid curves, "equilibrium") as well as with the quasisymmetry-breaking modes artificially filtered out (dashed curves, "perfect QS"). Calculations are carried out without Alfvenic perturbations (\(\hat{\alpha}=0\)) and for the \(m=1\), \(15\), and \(30\) perturbations indicated in Table 1.
In Figure 8, the effective transport is quantified by the distribution of the radial displacement, \(|\Delta s|\), between \(t=0\) and the final recorded time (after \(10^{-3}\) s or when a particle is considered lost). The loss fraction as a function of time is also shown in Figure 7. A summary of the results is presented in Table 2. When comparing the equilibrium and perfect QS cases, we see slight transport enhancement due to QS-breaking modes in the \(\beta=2.5\%\) QA and QH and vacuum QA equilibria. The enhancement of transport becomes much more pronounced in the NCSX equilibrium, given its more substantial deviations from QS; see Figure 1.
Figure 5: \(\beta=2.5\%\) QH kinetic Poincaré plots with the same parameters as Figure 3, but without the suppression of the quasisymmetry-breaking modes.

Figure 6: NCSX kinetic Poincaré plots with the same parameters as Figure 3, but without the suppression of the quasisymmetry-breaking modes.

In the equilibria that do not exhibit island overlap in Figure 3, the vacuum QA and \(\beta=2.5\%\) QH equilibria, the total loss fraction increases monotonically with \(m\). This result matches the observations in Figure 3, which indicate that although the island width remains roughly the same, the destruction of nearby KAM surfaces increases with increasing mode number. On the other hand, in equilibria that exhibit strong resonance overlap in Figure 3, NCSX and \(\beta=2.5\%\) QA, the transport increases monotonically with \(m\) until \(m=15\), then decreases for \(m=30\). This characteristic is explained in terms of the kinetic Poincare plots in Figure 3, which indicate that the effective volume of non-integrability is decreased for \(m=30\) compared to \(m=15\) due to the reduction in island width for large \(l\) perturbations.
Overall, the total induced transport is the lowest for the \(\beta=2.5\%\) QH configuration, for which the total losses remain less than \(10\%\) in the presence of the Alfvenic perturbations. The losses are also less prompt for this configuration, with most beginning around \(10^{-4}\) seconds, rather than around \(10^{-5}\) seconds for the other configurations. This reduction in transport is due to the increased resonance spacing of quasihelical configurations and to this configuration's larger magnetic shear compared to the vacuum QA equilibrium; see Figure 1.
Figure 7: The loss fraction as a function of time is shown. Monte Carlo tracing is performed in the presence of a \(\delta\hat{B}^{\psi}=10^{-3}\) perturbation with parameters described in Table 1. Guiding center trajectories of fusion-born alpha particles are followed for \(10^{-3}\) seconds or until they cross through \(s=0\) or \(s=1\). A comparison is made between the actual equilibrium and the equilibrium for which the QS-breaking modes are suppressed, “perfect QS.”
### Transition to phase-space chaos
We now study the transition to phase-space chaos and global transport with increasing perturbation amplitude in the \(\beta=2.5\%\) QA equilibrium. We focus on the \(m=30\) perturbation, given that this is the expected mode number at reactor scales. Kinetic Poincare plots for co-passing particles resonant with the Alfvenic perturbation are shown in Figure 9. The \(l=-1\), \(0\), \(1\), \(2\), and \(3\) islands are visible at the smallest perturbation amplitude, corresponding to \(\delta\hat{B}^{\psi}=3.14\times 10^{-4}\). As the amplitude is increased to \(\delta\hat{B}^{\psi}=5.58\times 10^{-4}\), a band of island overlap appears near the resonant surface. The region of destroyed KAM surfaces increases with increasing perturbation amplitude, with only a few remnant islands present at \(\delta\hat{B}^{\psi}=1.76\times 10^{-3}\).
Monte Carlo calculations, as described above, are performed for the same Alfvenic perturbations to distinguish the impact of the phase-space structure on transport. The loss fraction as a function of time, initial and final distribution function, and net radial displacement \(|\Delta s|\) are shown in Figure 10. We note a sudden increase in the loss fraction above \(\delta\hat{B}^{\psi}=5.58\times 10^{-4}\), the value for which island overlap is seen to occur in Figure 9. This behavior is analogous to the observation of critical gradient behavior on DIII-D, for which the fast-ion transport suddenly becomes stiff above a critical gradient threshold. Similar to our results, the critical gradient behavior on DIII-D arises due to phase-space stochastization.
Figure 8: Distribution of radial displacement, \(|\Delta s|\), between \(t=0\) and the final recorded time among Monte Carlo samples for the same calculation presented in Figure 7.
The initial losses scale approximately as \(t^{2}\), as expected for quasilinear diffusion. As the radial distribution of particles evolves, we see enhanced flattening near the primary resonance surface with increasing perturbation amplitude. The distribution of the trapping parameter, \(\lambda=v_{\perp}^{2}/(2Bv^{2})\), indicates that most losses occur near the trapped-passing boundary. Scattering across the trapped-passing boundary leads to a large radial step and losses due to the wide orbit width (Hsu & Sigmar 1992). Overall, we conclude that the kinetic Poincare analysis provides insight into transport characteristics, even for these configurations with finite deviations from quasisymmetry.
## 7 Conclusions
We have developed the theory for guiding center transport in quasisymmetric equilibria with Alfvenic perturbations. Even if the perturbation is restricted to a single \(m\) and \(n\), additional resonances may be excited due to the coupling of the perturbation to the magnetic drifts, as discussed in Section 3. The resonance condition, phase-space island width, and island overlap conditions are discussed for several equilibria of interest. Quasihelical configurations have a reduced propensity for island overlap due to their increased resonance spacing. While the potential for island overlap increases with the poloidal mode number \(m\), the effective volume of phase-space non-integrability decreases when \(m\) is large enough due to the reduced drift-island width for higher-order drift couplings. These features are visualized using kinetic Poincare plots in Section 5. Although the kinetic Poincare section is only 2D for a perfectly quasisymmetric equilibrium, this analysis
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Configuration & Perturbation & mean\(|\Delta s|\) (Equil./Perfect QS) & Loss frac. (Equil./Perfect QS) \\ \hline \(\beta=2.5\%\) QA & Unperturbed & 0.082/0.085 & 0.046/0.039 \\ & \(m=1\) & 0.092/0.088 & 0.067/0.054 \\ & \(m=15\) & 0.160/0.143 & 0.179/0.148 \\ & \(m=30\) & 0.114/0.095 & 0.105/0.063 \\ \hline \(\beta=2.5\%\) QH & Unperturbed & 0.037/0.032 & 0.007/0 \\ & \(m=1\) & 0.040/0.033 & 0.009/0 \\ & \(m=15\) & 0.051/0.042 & 0.019/0.002 \\ & \(m=30\) & 0.061/0.046 & 0.035/0.008 \\ \hline NCSX & Unperturbed & 0.127/0.055 & 0.178/0.005 \\ & \(m=1\) & 0.144/0.064 & 0.213/0.012 \\ & \(m=15\) & 0.227/0.137 & 0.367/0.146 \\ & \(m=30\) & 0.170/0.081 & 0.251/0.044 \\ \hline QA vac & Unperturbed & 0.105/0.112 & 0.083/0.066 \\ & \(m=1\) & 0.128/0.132 & 0.125/0.131 \\ & \(m=15\) & 0.163/0.146 & 0.194/0.162 \\ & \(m=30\) & 0.177/0.167 & 0.203/0.170 \\ \hline \end{tabular}
\end{table}
Table 2: Summary of transport properties for Monte Carlo calculations in the presence of Alfvénic perturbations with mode parameters described in Table 1 and amplitude \(\delta\hat{B}^{\psi}=10^{-3}\). The mean radial displacement along a trajectory, mean\(|\Delta s|\), and total loss fraction after \(10^{-3}\) s are compared between the actual equilibrium, “equil”, and the equilibrium for which the QS-breaking modes are suppressed, “perfect QS.”
still provides insight into the transport with finite quasisymmetry-breaking errors. The quasisymmetry-breaking errors enhance the transport, especially in configurations with significant unconfined guiding center trajectories such as NCSX.
We evaluate the transport in several configurations close to quasisymmetry for low (\(m=1\)), moderate (\(m=15\)), and high (\(m=30\)) mode number perturbations, with the highest mode number being that expected for reactor conditions. The toroidal mode number and frequency are chosen to resonate with a drift surface in the equilibrium. We fix the perturbation amplitude to a value consistent with experimental measurements and modeling, \(\delta B^{r}/B_{0}\sim 10^{-3}\), finding that substantial island overlap can be present for moderate and high-mode number perturbations. No island overlap occurs for a quasihelical equilibrium or an equilibrium with very low magnetic shear, such as the recently obtained vacuum equilibria with precise quasisymmetry (Landreman and Paul 2022). For the low-shear equilibrium, an enhancement of the loss fraction to \(\approx 20\%\) occurs due to the wide orbit width. For the quasihelical configuration, the losses remain less than \(7\%\) for all perturbations considered. For configurations with substantial island overlap, the losses increase to \(\approx 10\)-\(20\%\) due to AE-driven transport. These results are consistent with similar modeling for tokamak configurations (Hsu & Sigmar 1992; Sigmar _et al._ 1992), which indicated resonant overlap and enhanced transport for \(\delta B^{r}/B_{0}\sim 10^{-3}\). However, the results are not consistent with recent findings of substantial diffusive losses
Figure 9: Kinetic Poincaré plots for co-passing orbits in the \(\beta=2.5\%\) QA in the presence of an \(m=30\) perturbation with parameters described in Table 1.
for AE perturbation amplitudes \(\delta B^{r}/B_{0}\sim 10^{-6}\) for equilibria very close to QS (White & Duarte 2023). Further work is necessary in order to resolve this discrepancy.
As the amplitude of the perturbation is increased, island overlap is observed, leading to a rapid transition to stiff fast ion transport. This result is similar to observations of critical gradient transport in DIII-D (Collins _et al._ 2016, 2017) and indicates the potential applicability of quasilinear models (Gorelenkov _et al._ 2014) to predict the saturated AE amplitude in stellarators.
Our results suggest several avenues to reduce AE-driven transport in quasisymmetric configurations:
* Development of quasihelical configurations, which avoid drift-island overlap due to increased resonance spacing;
Figure 10: Monte Carlo guiding center tracing calculations are performed for the \(\beta=2.5\%\) QA configuration in the presence of an \(m=30\) perturbation with parameters described in Table 1. The perturbation amplitude is increased to study the impact of island overlap, as illustrated in Figure 9, on transport. (a) The loss fraction as a function of time. (b) The distribution of the total radial displacement, \(|\Delta s|\), between the initial time and final recorded time. (c) The initial and final radial distribution of particles. (d) The distribution of the trapping parameter, \(\lambda=v_{\perp}^{2}/(v^{2}B)\), for the initial population (black) and at the final time for lost particles (colored) for different mode number perturbations.
* Avoiding low magnetic shear, which manifests as wide drift island widths for passing particles;
* Evaluating metrics from the unperturbed equilibrium (e.g., the Bessel coupling parameters \(J_{l}(\eta_{1})\)) as a proxy for drift island width within a stellarator optimization loop.
Finally, we remark that many aspects of Alfvén eigenmodes were not considered in this study and will affect the transport. To gain a basic understanding of transport, a resonant AE was chosen to have a uniform radial structure. While the radial perturbation amplitude was assumed to be held fixed when comparing across configurations, in practice, the mode structure and amplitude will also depend on the mode numbers and other physical parameters through the AE's nonlinear evolution. In the future, we plan to build on this work to obtain a more complete picture of the evolution of AEs in stellarator reactor scenarios. For example, the analysis tools described here, such as kinetic Poincaré plots, could be used to study the nonlinear evolution of phase-space structures such as hole-clump pairs (Bierwage _et al._, 2021) and zonal structures (Zonca _et al._, 2015) in quasisymmetric configurations.
## Acknowledgements
The authors would like to acknowledge discussions with Vinicius Duarte and Roscoe White. We acknowledge funding through the U. S. Department of Energy, under contract No. DE-SC0016268.
|
2305.04603 | Privacy-Preserving Representations are not Enough -- Recovering Scene
Content from Camera Poses | Visual localization is the task of estimating the camera pose from which a
given image was taken and is central to several 3D computer vision
applications. With the rapid growth in the popularity of AR/VR/MR devices and
cloud-based applications, privacy issues are becoming a very important aspect
of the localization process. Existing work on privacy-preserving localization
aims to defend against an attacker who has access to a cloud-based service. In
this paper, we show that an attacker can learn about details of a scene without
any access by simply querying a localization service. The attack is based on
the observation that modern visual localization algorithms are robust to
variations in appearance and geometry. While this is in general a desired
property, it also leads to algorithms localizing objects that are similar
enough to those present in a scene. An attacker can thus query a server with a
large enough set of images of objects, e.g., obtained from the Internet, and
some of them will be localized. The attacker can thus learn about object
placements from the camera poses returned by the service (which is the minimal
information returned by such a service). In this paper, we develop a
proof-of-concept version of this attack and demonstrate its practical
feasibility. The attack does not place any requirements on the localization
algorithm used, and thus also applies to privacy-preserving representations.
Current work on privacy-preserving representations alone is thus insufficient. | Kunal Chelani, Torsten Sattler, Fredrik Kahl, Zuzana Kukelova | 2023-05-08T10:25:09Z | http://arxiv.org/abs/2305.04603v1 | # Privacy-Preserving Representations are not Enough: Recovering Scene Content from Camera Poses
###### Abstract
Visual localization is the task of estimating the camera pose from which a given image was taken and is central to several 3D computer vision applications. With the rapid growth in the popularity of AR/VR/MR devices and cloud-based applications, privacy issues are becoming a very important aspect of the localization process. Existing work on privacy-preserving localization aims to defend against an attacker who has access to a cloud-based service. In this paper, we show that an attacker can learn about details of a scene without any access by simply querying a localization service. The attack is based on the observation that modern visual localization algorithms are robust to variations in appearance and geometry. While this is in general a desired property, it also leads to algorithms localizing objects that are similar enough to those present in a scene. An attacker can thus query a server with a large enough set of images of objects, _e.g.,_ obtained from the Internet, and some of them will be localized. The attacker can thus learn about object placements from the camera poses returned by the service (which is the minimal information returned by such a service). In this paper, we develop a proof-of-concept version of this attack and demonstrate its practical feasibility. The attack does not place any requirements on the localization algorithm used, and thus also applies to privacy-preserving representations. Current work on privacy-preserving representations alone is thus insufficient.
## 1 Introduction
Visual localization refers to the problem of estimating the camera pose of a given image in a known scene. It is a core problem in several 3D computer vision applications, including self-driving cars [17, 18] and other autonomous robots [50], and Augmented Reality [23, 5, 25].
A popular approach for Augmented/Mixed/Virtual Reality (XR) applications is to use a client-server mechanism for localization: the user device (client) sends image data to a cloud-based system (server) that computes and returns the camera pose [23, 46, 25]. Examples of such services include Google's Visual Positioning System [29], Microsoft's Azure Spatial Anchors [24], and Niantic's Lightship [39]. Cloud-based localization services are popular for multiple reasons - _first_, performing localization on the server reduces storage requirements and the computational load, and thus energy consumption, which is important for client devices such as mobile phones and headsets; _second_, it enables using robust mapping and localization algorithms that are too expensive for mobile devices; _third_, in the context of collaborative mapping, _e.g.,_ for the AR cloud or autonomous driving, maintaining a single scene representation in a centralized place is far easier than keeping multiple copies on various mobile devices up-to-date.
Naturally, sending user data to a server, _e.g._, in the form of images to be localized or 3D maps recorded by users that will be used for localization, raises privacy concerns [41, 9, 42]. Work on privacy-preserving localization aims to resolve these concerns by ensuring that private details cannot be recovered from the data sent [14, 26, 42] to or stored on the server [11, 15, 28, 36, 41, 52].
Existing work focuses on scenarios where an attacker gains access to the localization service or can eavesdrop on the communication between client and server. In this work, we demonstrate that it is possible for an attacker to learn about the content of a scene stored on a localization server without direct access to the server. We show that a localization service will reveal scene-related information through estimated camera poses, _i.e._, through its normal operation process. The attack is based on two recent developments: (1) modern visual localization algorithms are designed to be robust against changes such as illumination and seasonal variations [44]. This is an essential property for cloud-based localization services in order to operate robustly and reliably.
However, since these algorithms are robust to (slight) variations in appearance and geometry, they will also localize images showing objects that are similar (but not necessarily identical) to those objects present in the scene. (2) massive amounts of images depicting objects in different variations are readily available on the Internet. Taken together, both developments allow an attacker to repeatedly query the server with images and to recover the positions of the objects in the scene based on the camera poses returned by the server (Fig. 1). In this paper, we demonstrate the feasibility of this attack by developing a proof-of-concept implementation.
In summary, we make the following contributions: **(1)** we identify a new line of attack in the context of privacy-preserving visual localization based on the camera poses returned by a cloud-based server. **(2)** we show the feasibility of the attack through a proof-of-concept implementation of the attack. Through experiments, we explore the performance of our implementation as well as the trade-off between localization robustness and potential defenses against the attack. **(3)** the attack is agnostic to the underlying localization algorithm and thus applicable even if the localization system is otherwise perfectly privacy-preserving. This paper thus proposes a new research direction for privacy-preserving localization, where the aim for the localization service is to correctly identify whether a query image was taken in the concerned scene or not, in order to prevent leaking information through camera poses.
## 2 Related Work
**Visual localization.** Most state-of-the-art visual localization algorithms are based on establishing 2D-3D matches between a query image and a 3D model of the scene. These correspondences are then used for camera pose estimation. The 3D model can either be stored explicitly [19, 20, 21, 31, 32, 33, 43, 27], _e.g._, in the form of a Structure-from-Motion (SfM) point cloud, or implicitly in the form of the weights of a machine learning model [38, 45, 3, 1, 2, 6]. In the former case, local feature descriptors are associated with 3D points of the model. It has been shown that this information is sufficient to recover detailed images from the 3D map [28, 40], although sparsifying these models [4, 51] might effectively make them privacy-preserving [7]. Approaches based on implicit representations map image pixels or patches to 3D points by training scene coordinate regression models [3, 38]. Recently, it was claimed that such approaches are inherently privacy-preserving [11]. However, feature-based methods currently scale better to large scenes and are able to better handle condition changes [44], such as illumination or seasonal changes, between the query image and the database images used to build the scene representation. The resulting robustness is highly important in many applications of visual localization, including AR and robotics. The robustness is a direct consequence of recent advances in local features [10, 30, 13] and matchers [32, 43, 48, 53]. In this paper, we show that robustness to changing conditions enables an attacker to learn about the content of the scene: robustness to changing conditions not only bridges the gap between (small) variations
Figure 1: In the context of privacy-preserving localization, we show that it is possible to learn about the content of a scene using camera poses returned by a localization service, without any direct access to the scene representation. **(1st column)** Examples of images from the scene, used to build the scene representation. The images are shown for illustrative purposes and are not available to an attacker trying to learn about the scene. **(2nd column)** The attacker queries the service with images of objects, _e.g._, downloaded from the Internet. **(3rd & 4th column)** Using the camera poses for the query image returned by the localization service, the attacker is able to identify the types of objects present in the scene and to accurately place them in the scene. We show the estimated object poses overlaid over the ground truth structure of the scene (which is not accessible to the attacker). The attacker is able to faithfully recover the placement of objects. Overall, our results demonstrate that simple feedback such as camera poses is already sufficient to potentially reveal private details.
in scene appearance and geometry observed in images depicting the same place, but also leads to correspondences between images depicting similar but not identical objects, _e.g_., different chairs. In turn, these correspondences can be used to localize the object in the scene, which is the basis for the attack described in this work. Note that the properties we exploit are inherent to robust localization algorithms and are not restricted to feature-based methods. Ultimately, any robust localization system needs to handle variations in shape and appearance.
**Privacy-preserving visual localization.** Existing work on privacy-preserving localization focuses on two points of attack: (1) ensuring that data sent to a localization service does not reveal private information. (2) ensuring that data stored on a localization service does not reveal private information. For the former case, it has been shown that images can be recovered from local features [9, 12, 49]. Work on privacy-preserving queries to a localization server thus mostly aims at developing features that prevent image recovery [14, 26] or at obfuscating the feature geometry [16, 42]. Similarly, work on privacy-preserving scene representations aims to obfuscate the geometry [37, 41] (although scene geometry can be recovered under certain conditions [7]), to split the maps over multiple servers for increased data security [15], to use implicit representations [11], or to store raw geometry without any feature descriptors [52].
This paper presents a new line of attack that complements existing work. Previous work considers a scenario where the attacker gains access to the service. In contrast, we show that it is possible to recover scene content from the very basic information provided by any localization service, namely the camera poses estimated for query images. As such, the attack is still feasible even if the data sent to and stored on the server is completely privacy-preserving. Our work thus shows that existing privacy-preserving localization approaches are not sufficient to ensure user privacy.
## 3 Recovering Scenes from Camera Poses
Any localization system returns the camera poses of localized query images. At the same time, modern localization algorithms aim to be robust to shape and appearance variations in order to handle changes in viewing conditions. This property, however, opens up the possibility that not only genuine queries but also images of objects similar to those present in the scene can be localized. The camera poses of the localized images can then in turn be used to infer the positions of certain objects in the scene, potentially revealing more information about the scene than the cloud-based service or its users would like to disclose.
Naturally, an attacker does not know which objects are present in the scene and thus which images to use for their queries. The Internet is a source of a theoretically unlimited number of images, videos, and 3D models of objects of different types and appearances. This naturally leads to an idea of a potential attack, where an attacker just downloads such images and videos, bombards the server with localization requests, and uses poses of localized images to reveal detailed scene structure.
In the following sections, we investigate this new type of attack, and we try to answer several questions: Can an attacker with access to images and videos of objects similar to those present in the scene easily learn about the presence/absence of different objects and their placement in the scene just from the poses returned by a localization service? What are the challenges of such an attack, and are these challenges easily solvable? Can cloud-based services easily prevent such attacks? To this end, we present a proof-of-concept implementation of the attack.1 Later, Sec. 6 discusses an approach to potentially mitigate the attack and why its effectiveness is limited.
Footnote 1: We only aim to show feasibility. We believe that better attack algorithms are certainly possible.
### 3.1 Formalization
We assume a localization server \(\mathcal{L}\) that is responsible for localizing images in a scene \(\mathcal{S}\). \(\mathcal{L}\) tries to align each query image it receives with the scene representation as best as possible. If an image can be localized, the server returns a 6-dof camera pose \([\mathbf{R}|\mathbf{t}]\). We assume that the scale of the translation component \(\mathbf{t}\) is known.
An adversary \(\mathcal{A}\) is querying \(\mathcal{L}\) with many images of different objects, where each image contains only one dominant object to avoid confusion about which object from the image was localized in the scene. \(\mathcal{A}\), using the poses returned by \(\mathcal{L}\), wants to learn about the presence/absence of objects in the scene \(\mathcal{S}\), and wants to infer their (approximate) positions. As such, \(\mathcal{A}\) tries to construct an (approximate) "copy" of the scene \(\mathcal{S}\) or at least its layout.
In this setting \(\mathcal{A}\) needs to deal with two challenges:
1. \(\mathcal{A}\) queries \(\mathcal{L}\) with images of objects that, in general, differ geometrically from the actual objects in the scene. In the best case, the pose returned by the server provides the best-possible approximate alignment between the query and actual object. In general, the returned poses will be noisy and can be quite inaccurate if only a part of the object, _e.g_., a chair's leg, is aligned. Creating an accurate "copy" of the scene from such poses is a challenging problem.
2. \(\mathcal{A}\) has, in general, no a-priori information about the type of the scene and which objects are visible in it. Since \(\mathcal{L}\) can also return poses for objects that are not in the scene, \(\mathcal{A}\) needs to have a mechanism for deciding the presence/absence of an object based on the returned poses. Naturally, having to deal with noisy and inaccurate poses makes the decision process harder.
In general, it is not possible to overcome these challenges by using a single image of each object. A single camera pose returned by \(\mathcal{L}\), without additional information, does not provide enough data for deciding about the presence/absence of the object in the scene and the quality of the pose.
However, given the large amount of images available on the Internet, and in particular the availability of videos, \(\mathcal{A}\) can use several images of the same object taken from different viewpoints. Jointly reasoning about all of the corresponding poses obtained for these images can then be used to decide the presence and position of the object.
### 3.2 3D Object Placement
Assuming that the attacker knows that an object is present, they still need to predict its position and orientation in the scene based on the pose estimates provided by the server. To this end, the attacker can use that multiple images of the same object taken from different viewpoints are available. These images can be used by \(\mathcal{A}\) to build a local 3D model \(\mathcal{M}\), _e.g._, using SfM [34] and MVS [35], and to compute the poses \(\mathbf{P}_{o}\) of these images w.r.t. this model. In turn, \(\mathcal{L}\) provides a set \(\hat{\mathbf{P}}_{o}\) of poses for (a subset of) these images in the coordinate system of the scene model \(\mathcal{S}\). The problem of placing the object in the copy of the scene \(\mathcal{S}\) thus reduces to the problem of aligning both sets of poses (_cf_. Fig. 2). The camera poses \(\hat{\mathbf{P}}_{o}\) provided by \(\mathcal{L}\) can be very noisy and can contain outliers. Thus, the alignment process needs to be robust.
As mentioned above, for simplicity we assume that the scale of the 3D model stored by \(\mathcal{L}\) is known.2 Similarly, the scale of the local model \(\mathcal{M}\) can be (approximately) recovered using the known size of the object. In this case, the two poses, in the coordinate systems of \(\mathcal{M}\) and \(\mathcal{S}\), for a single image already provide an alignment hypothesis, _i.e._, the relative pose between them. As outlined in Alg. 1, we evaluate all hypotheses. The inputs to Alg. 1 are the two sets of poses, \(\mathbf{P}_{o}\) and \(\hat{\mathbf{P}}_{o}\), and two error thresholds, \(\delta_{r}\) for rotation and \(\delta_{t}\) for translation. For each pair of corresponding camera poses (local and server-provided), a relative transformation is computed (Lines 5-6). One set of poses is transformed using this estimated transformation, and errors for rotation and translation between corresponding pairs are computed (Lines 9-10). Using the two thresholds, we determine which other pose pairs are inliers to the pose hypothesis (Lines 11-12). The transformation with the largest number of inliers is selected (Lines 13-14) and refined by averaging the relative poses of all inliers.
Footnote 2: In the context of user-generated maps, captured by devices with IMUs such as mobile phones or dedicated XR headsets, it seems realistic to assume that the scale of the maps is provided in meters.
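To illustrate the structure of Alg. 1, the following Python sketch (our reconstruction, not the authors' released code; the \(4\times 4\) camera-to-world pose convention is an assumption, and the final inlier-averaged refinement is omitted) scores one alignment hypothesis per corresponding pose pair and keeps the one with the most inliers:

```python
import numpy as np

def rot_angle_deg(Ra, Rb):
    """Geodesic angle between two rotation matrices, in degrees."""
    cos = (np.trace(Ra.T @ Rb) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def align_pose_sets(P_local, P_server, delta_r=30.0, delta_t=0.5):
    """Hypothesize-and-test alignment in the spirit of Alg. 1.
    P_local / P_server: lists of 4x4 camera-to-world poses, both in
    (approximately) metric scale. Returns the best M->S transform,
    its inlier indices, and the inlier ratio."""
    best_T, best_inliers = None, []
    for Pi, Qi in zip(P_local, P_server):
        T = Qi @ np.linalg.inv(Pi)                 # relative transform (Lines 5-6)
        inliers = []
        for j, (Pj, Qj) in enumerate(zip(P_local, P_server)):
            pred = T @ Pj                          # transform local pose (Line 9)
            e_r = rot_angle_deg(pred[:3, :3], Qj[:3, :3])
            e_t = np.linalg.norm(pred[:3, 3] - Qj[:3, 3])   # errors (Line 10)
            if e_r < delta_r and e_t < delta_t:    # inlier test (Lines 11-12)
                inliers.append(j)
        if len(inliers) > len(best_inliers):       # keep best hypothesis (Lines 13-14)
            best_T, best_inliers = T, inliers
    eps = len(best_inliers) / max(len(P_local), 1) # inlier ratio (Line 15)
    return best_T, best_inliers, eps
```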
Obviously, withholding the scale of the scene model \(\mathcal{S}\) is not sufficient to prevent the attacker from placing the object in the scene, as the scale and relative transformation can be recovered from two pairs of poses. Additionally, there are ways to further robustify the alignment process. _E.g._, if images of multiple very similar instances of an object and the corresponding 3D models are available, it seems reasonable to assume that images of different instances taken from similar viewpoints will also result in similar pose estimates by \(\mathcal{L}\). These estimates can then be used to average out noise in the poses. Similarly, the relation between different objects, _e.g._, a monitor standing on a desk, can be used to stabilize the process of placing objects in the scene. However, we do not investigate such advanced strategies in this paper.
### 3.3 Deciding the Presence/Absence of an Object
We assume that \(\mathcal{L}\) is running a localization algorithm that is robust to shape and appearance variations and that is aligning query images to the scene as best as it can. At the same time, \(\mathcal{L}\) can also return poses for objects that are not in the scene, as well as poses for objects that are not even from the same categories or similar to objects present in the scene. Deciding if an object is present or not in a scene based on the poses returned for its images by the localization server is therefore a challenging problem.
For an attacker \(\mathcal{A}\) trying to recover scene information via camera poses, it is impossible to determine which type of objects are present using just a single camera pose returned for one query image of each of the objects.
To overcome this challenge, \(\mathcal{A}\) can employ several possible techniques; _e.g._, they can use statistics about object co-occurrence to select sets of queries whose returned camera poses follow a plausible joint spatial distribution. Another simple solution is to use multiple images of the same object taken from different viewpoints or to cluster query images into groups depicting similar objects that are assumed to be matched with the same object in the scene \(\mathcal{S}\). \(\mathcal{A}\) can then use different images from these groups to query \(\mathcal{L}\) and decide on the presence/absence of the object based on the consistency of returned poses. Even though the returned poses can be noisy and can contain outlier poses, in general, it is expected that a reasonably large subset of images depicting the same object from different viewpoints or depicting objects from the same group will show consistency of returned poses if a similar object is present in \(\mathcal{S}\). On the other hand, poses obtained for images of an object that is absent can be expected to exhibit a much higher variance.
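A minimal sketch of such a consistency check (ours, not from the paper; we again assume \(4\times 4\) camera-to-world poses, and the translation threshold is a hypothetical tuning value) compares the server-side relative poses against the locally reconstructed ones:

```python
import numpy as np

def pairwise_consistency(P_local, P_server, delta_t=0.5):
    """Fraction of pose pairs whose server-side relative translation
    agrees with the locally reconstructed one (rotation test omitted
    for brevity). High values suggest the object is present."""
    ok, total = 0, 0
    n = len(P_local)
    for i in range(n):
        for j in range(i + 1, n):
            rel_l = np.linalg.inv(P_local[i]) @ P_local[j]
            rel_s = np.linalg.inv(P_server[i]) @ P_server[j]
            ok += np.linalg.norm(rel_l[:3, 3] - rel_s[:3, 3]) < delta_t
            total += 1
    return ok / max(total, 1)
```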
In this paper, we discuss and evaluate another strategy for the presence/absence decision that allows us to show the completeness of the attack and present its proof-of-concept implementation. We assume that the attacker \(\mathcal{A}\) learns certain statistics for each object or category from curated training data that comprises scenes with known presence/absence of these objects or object categories. This can be done for different types of localization schemes over huge amounts of 3D data. The attacker can then use these learned statistics to infer the presence/absence of objects when attacking an unknown scene \(\mathcal{S}\).
For the experimental results in the later sections, we use the inlier-ratio \(\epsilon\) obtained from the object positioning step (Line 15 in Alg. 1) as this statistic. We can assume that for each object (or a class of objects) \(o\), \(\mathcal{A}\) has inlier-ratios \(\epsilon_{o}^{+}\) and \(\epsilon_{o}^{-}\) that are trained on scenes with known presence or absence of \(o\). _E.g._, \(\epsilon_{o}^{+}\) and \(\epsilon_{o}^{-}\) can be computed as the medians of \(\epsilon_{o}\) over all “present(+)/absent(-)” scenes. Based on these statistics, the presence or absence of \(o\) in the unknown scene \(\mathcal{S}\) can be decided by comparing the distances of \(\epsilon_{o}^{\mathcal{S}}\) to \(\epsilon_{o}^{+}\) and \(\epsilon_{o}^{-}\). We make this idea concrete when assessing its effectiveness on a real-world dataset in Section 5.2.
## 4 Datasets
We use multiple datasets for our experiments:
**IKEA-Scenes** and **IKEA-Objects** - We captured image sequences of seven different inspiration-rooms at an IKEA furniture store (_cf_. Fig.3). 1,000-2,500 images were captured for each room, depending on its size. 4-10 objects from each room were selected, and a separate sequence of images was captured for each of them in the inventory section of the store, where the surrounding environment was different from that of the inspiration rooms. Note that the two instances of each object have the same model, but in many cases differ in color and size. Presence/absence of additional objects such as cushions on a sofa, or a computer on a desk can additionally change the overall appearance of the two instances. In total, the dataset comprises 38 object instances covered by 100-200 images each. While capturing the dataset, we tried to only have a single object occupying a large part of each image. However, this was not always possible and no post processing has been applied to mask out objects. We call the inspiration-room data _IKEA-Scenes_ and the data from the inventory section _IKEA-Objects_.
**ScanNet-Office-Scene** - To show that the objects do not need to be of the exact same model for the proposed attack to work, we consider a generic office scene - _scene0040_ from the ScanNet [8] dataset.
Figure 3: Example images from _IKEA-Scenes_ (left) and one of the objects of corresponding scenes in _IKEA-Objects_ (right).
Figure 2: **Object alignment example: 1.** A 3D model \(\mathcal{M}\) of an object and corresponding camera poses \(\mathbf{P}_{o}\) in the attacker’s local coordinate system, built from a sequence of object images. **2.** The server scene with a similar object. **3.** The noisy poses returned by the server for the queried object images. **4.** Sequences of local and server-provided poses aligned to approximately place the object in the scene.
**Office-Objects** - We collected image sequences of 5 common office room objects - a _door_, a _whiteboard_, an _office chair_, a _desk with computer_, and a _bookshelf_. These images are used as queries by the attacker.
**RIO10** - RIO10 [47] is a localization benchmark dataset which we use to evaluate the effectiveness of a potential defense strategy that a localization server might employ.
We manually scale all local 3D models constructed by the attacker to roughly metric scale.
## 5 Experimental Evaluation
This section presents a series of experiments that show the practical feasibility of the attack introduced in Sec. 3. First, we show via qualitative results that the method proposed in Sec. 3.2 allows the attacker to place the 3D models of relevant objects close to the actual corresponding objects in the scene. We then explain and evaluate a simple implementation of the method described in Sec. 3.3 that the attacker can use to decide the presence/absence of objects.
For querying the localization server, we use images from the datasets described above. To implement the server, we use HLoc [31, 32] (with default thresholds and parameters), a state-of-the-art visual localization approach. HLoc uses feature descriptors to establish 2D-3D matches between features extracted from the query image and 3D scene points. The resulting correspondences are then used for pose estimation. We demonstrate the reliance of the attack on the robustness of the localization process by evaluating three different local image features and matchers: Superpoint [10] features with the SuperGlue matcher [32] (most robust), R2D2 [30] with Nearest Neighbor (NN) matching, and SIFT [22] with NN matching (least robust).
### 5.1 3D Object Placement
We qualitatively evaluate the accuracy of the 3D object placements obtained using the approach from Sec. 3.2 for the _IKEA-Scenes_ and _ScanNet-Office-Scene_ datasets. We use qualitative results rather than quantitative metrics since it is hard to quantify when a placement is realistic enough. _E.g._, consider the predicted positions of the oven in the 3rd row of Fig. 4. The first two predictions are far enough from the ground truth position that a metric such as the IoU of the 3D bounding boxes of the objects will discard them as wrong. Yet, the estimated positions are close enough to the ground truth to provide the attacker with a good layout of the scene.
Figure 4: Qualitative results for aligning objects in different scenes of the _IKEA-Scenes_ dataset. We evaluate three combinations of local features and matchers. Aligned objects are color-coded green to red along the gravity direction to make their orientation better visible.
Fig. 4 shows results for placing selected items from the _IKEA-Objects_ dataset in 4 different scenes from the _IKEA-Scenes_ dataset. Fig. 5 shows results for placing objects from the _Office-Objects_ dataset in the _ScanNet-Office-Scene_ dataset. As can be seen, using a robust localization process based on Superpoint features and the Superglue matcher or R2D2 features allows the attacker to place the objects close to their ground truth positions. In particular, the results from Fig. 5 show that the alignment also works well when the queried object is not the same model of different color/size but also a very different one in terms of shape and overall appearance. The results clearly demonstrate the practical feasibility of the placement strategy.
We used slightly different values for the error thresholds required by the positioning algorithm, depending on the object size and the obtained poses. Such an approach is feasible if a human supervises the attack. Code and data are available at [https://github.com/kunalchelani/ObjectPositioningFromPoses](https://github.com/kunalchelani/ObjectPositioningFromPoses).
### 5.2 Deciding the Presence/Absence of an Object
In Sec. 3.3, we suggested strategies that an attacker can use to decide, for each object, whether it is present in a scene \(\mathcal{S}\) or not.
Concretely, using a set of training scenes, the attacker has learned representative values \(\epsilon^{+}\) and \(\epsilon^{-}\) for the inlier-ratio returned by Alg. 1 for cases where the object is present(+) respectively absent(-). When deciding the presence of an object \(\mathbf{o}\) in a scene \(\mathcal{S}\), the attacker uses the inlier ratio (\(\epsilon\)) from Alg. 1 to make their decision. The object \(\mathbf{o}\) is considered to be present in the scene if \(|\epsilon-\epsilon^{+}|<|\epsilon-\epsilon^{-}|\) and otherwise considered as absent.
We use the _IKEA-Scenes_ and _IKEA-Objects_ dataset for this experiment. When deciding the presence/absence of an object in a scene, the other 6 scenes are used as training scenes. Many of the objects from _IKEA-Objects_ are only present in one of the scenes from _IKEA-Scenes_. In these cases, no reference value for \(\epsilon^{+}\) is available for these scenes. In such cases, the object is considered as present if \(\epsilon>\epsilon^{-}\). This strategy is motivated by the assumption that correctly placing an object that is present results in a higher inlier-ratio than placing objects that are not present.
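In code, the decision rule reads as follows (a sketch; the default value of \(\epsilon^{-}\) is hypothetical and would in practice be learned from the training scenes as described above):

```python
def object_present(eps, eps_plus=None, eps_minus=0.1):
    """eps: inlier ratio from Alg. 1 (Line 15) for the unknown scene.
    eps_plus / eps_minus: reference inlier ratios (e.g., medians) learned
    on scenes where the object is known to be present / absent."""
    if eps_plus is None:          # no 'present' training scene available
        return eps > eps_minus
    return abs(eps - eps_plus) < abs(eps - eps_minus)
```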
Tab. 1 shows precision and recall of this strategy. Since the computation of the inlier-ratio \(\epsilon\) depends upon the error thresholds, we present the results for three different sets of thresholds. The results show that for most scenes, it is possible to obtain a precision/recall of approx. 0.4/0.6, which, _e.g_., translates to 3 out of 5 present, and around 29 out of 33 absent objects from _IKEA-Objects_ being correctly classified. The average precision using random guessing in these scenes is 0.19. This, together with the quality of the placement, clearly validates the feasibility of the proposed attack.
Figure 5: **(a)** Example images from _ScanNet-Office-Scene_ and corresponding objects in _Office-Objects_. **(b)** Qualitative results for aligning generic office objects in ScanNet [8]_scene0040_, using Superpoint+Superglue and R2D2+NN.
## 6 Preventing the Attack?
A natural way to prevent the presented attack is to try to distinguish between genuine and malicious queries. By not sending poses for query images deemed as (potentially) malicious, the localization service effectively prevents the attacker from using pose estimates to learn about the scene.
One potential classification strategy is based on the fact that the attacker sends images focusing on a single object. In this case, we expect that most of the 3D points from the inlier 2D-3D matches found by HLoc lie on a single 3D object. We thus count the number of 3D objects that contribute at least a certain fraction of inliers (X% of the inliers of the object contributing the largest number of inliers). If the number is too small, the query image is considered to be malicious and is rejected.
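A sketch of this filter is given below (ours; the 10% fraction follows the setting of Fig. 6, while `min_objects` is the hard-to-tune threshold discussed next):

```python
from collections import Counter

def is_suspicious(inlier_object_ids, frac=0.10, min_objects=2):
    """Flag a query whose inlier 2D-3D matches concentrate on too few
    3D objects. inlier_object_ids: the 3D-object label of each inlier."""
    counts = Counter(inlier_object_ids)
    if not counts:
        return True                       # nothing localized at all
    top = max(counts.values())
    # objects contributing at least frac * (inliers of the top object)
    significant = sum(c >= frac * top for c in counts.values())
    return significant < min_objects
```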
Fig. 6 shows results for three different objects used to attack three different scenes of the RIO10 dataset [47]. Here, we use the instance-level labels provided by the dataset, which include background classes such as floor and walls, to define objects. As can be seen, rejecting the majority of malicious queries while retaining genuine queries can be challenging. The reason is that even while focusing on a single object, other objects might be partially visible in the queries, _e.g_., part of a desk for monitors, different pillows on a couch, books on a shelf, _etc_. In addition, genuine queries might focus on small parts of the scene or even individual objects. Thus, finding a suitable threshold on the minimum number of visible objects can be hard. Furthermore, note that this defense strategy requires the service to have knowledge about the objects in the scene, either extracted from the queries or the scene representation. This requirement creates a potential privacy risk if an attacker is able to gain access to the service.
## 7 Conclusions and Future Work
In this paper, we have considered the problem of privacy-preserving localization. Prior work aims to defend against attacks in which the attacker gains access to a cloud-based localization service. In contrast, we show that it is possible for an attacker to recover information about the scene by using the service as intended: by querying the server with images of different objects, an attacker is able to determine which objects are present and to estimate their position in the scene. The attack is based on the minimum amount of information that a localization service needs to provide to its users, _i.e_., camera poses for query images, and exploits that modern localization systems are robust to changing conditions. Experiments with our proof-of-concept implementation show the practical feasibility of the attack. The attack is applicable even if the localization algorithm used by the server is otherwise perfectly privacy-preserving.
Our results show that existing privacy-preserving approaches are not sufficient to ensure user privacy, creating the need for further research. In particular, first experiments show that preventing the attack proposed in this paper without reducing localization performance and creating other angles of attack is a non-trivial task and interesting direction for future work.
**Acknowledgements.** This work was supported by the EU Horizon 2020 project RICAIP (grant agreement No. 857306), the European Regional Development Fund under project IMPACT (No. CZ.02.101/0.0/0.0/15_003/0000468), the Czech Science Foundation (GACR) JUNIOR STAR Grant No. 22-23183M, Chalmers AI Research Center (CHAIR), WASP and SSF.
\begin{table}
\begin{tabular}{|l||c|c|c|c|c|c||c|c|c|c|c|c||c|c|c|c|c|c|} \hline \multirow{3}{*}{Scene} & \multicolumn{6}{c||}{Superpoint+Superglue} & \multicolumn{6}{c||}{R2D2+NN} & \multicolumn{6}{c|}{SIFT+NN} \\ \cline{2-19} & \multicolumn{2}{c|}{\(10^{\circ},0.25m\)} & \multicolumn{2}{c|}{\(30^{\circ},0.5m\)} & \multicolumn{2}{c||}{\(60^{\circ},2m\)} & \multicolumn{2}{c|}{\(10^{\circ},0.25m\)} & \multicolumn{2}{c|}{\(30^{\circ},0.5m\)} & \multicolumn{2}{c||}{\(60^{\circ},2m\)} & \multicolumn{2}{c|}{\(10^{\circ},0.25m\)} & \multicolumn{2}{c|}{\(30^{\circ},0.5m\)} & \multicolumn{2}{c|}{\(60^{\circ},2m\)} \\ \cline{2-19} & P & R & P & R & P & R & P & R & P & R & P & R & P & R & P & R & P & R \\ \hline Scene1 & 0.6 & 0.85 & 0.75 & 0.85 & 0.67 & 0.85 & 0.57 & 0.57 & 0.36 & 0.57 & 0.28 & 0.57 & 0.33 & 0.57 & 0.45 & 0.71 & 0.33 & 0.43 \\ Scene2 & 0.36 & 0.4 & 0.36 & 0.5 & 0.37 & 0.6 & 0.34 & 0.4 & 0.3 & 0.3 & 0.35 & 0.6 & 0.33 & 0.46 & 0.26 & 0.5 & 0.28 & 0.6 \\ Scene3 & 0.55 & 0.71 & 0.36 & 0.57 & 0.25 & 0.43 & 0.31 & 0.71 & 0.47 & 1 & 0.41 & 1.0 & 0.3 & 0.42 & 0.5 & 0.42 & 0.44 & 1.0 \\ Scene4 & 0.17 & 0.4 & 0.23 & 0.6 & 0.14 & 0.4 & 0.34 & 0.6 & 0.28 & 0.4 & 0.2 & 0.4 & 0.15 & 0.4 & 0.15 & 0.4 & 0.17 & 0.4 \\ Scene5 & 0.33 & 0.6 & 0.4 & 0.8 & 0.44 & 0.8 & 0.5 & 0.6 & 0.34 & 0.4 & 0.5 & 0.6 & 0.22 & 0.4 & 0.25 & 0.4 & 0.33 & 0.4 \\ Scene6 & 0.25 & 0.6 & 0.28 & 0.6 & 0.22 & 0.4 & 0.22 & 0.4 & 0.3 & 0.6 & 0.33 & 0.8 & 0.14 & 0.2 & 0.2 & 0.2 & 0.25 & 0.6 \\ Scene7 & 0.5 & 0.5 & 0.5 & 0.33 & 0.33 & 0.5 & 0.6 & 0.5 & 0.5 & 0.5 & 0.38 & 0.5 & 0 & 0.14 & 0.17 & 0 & 0 \\ \hline \end{tabular}
\end{table}
Table 1: Precision (P) and recall (R) of our method to determine the presence of objects for the _IKEA-Scenes_ and _IKEA-Objects_ datasets.
Figure 6: Effectiveness of a potential approach to prevent the proposed attack based on not providing poses for queries containing only a few objects. Only objects contributing at least 10% of the inliers found on the object with the most inliers are considered. As can be seen, finding a suitable threshold for the minimum number of visible objects can be difficult. |
2304.13056 | Preheating in Einstein-Cartan Higgs Inflation: Oscillon formation | We make use of classical lattice simulations in 3 + 1 dimensions to study the
preheating stage of Higgs Inflation in Einstein-Cartan gravity. Focusing for
concreteness on a simplified scenario involving the seminal Nieh-Yan term, we
demonstrate the formation of dense and spatially localized oscillon
configurations constituting up to 70% of the total energy density. The
emergence of these meta-stable objects may lead to a prolonged period of matter
domination, effectively modifying the post-inflationary history of the Universe
as compared to the metric and Palatini counterparts. Notably, the creation of
oscillons comes together with a significant gravitational wave signal, whose
typical frequency lies, however, beyond the range accessible by existing and
planned gravitational wave experiments. The impact of the Standard Model gauge
bosons and fermions and the potential extension of our results to more general
Einstein-Cartan settings is also discussed. | Matteo Piani, Javier Rubio | 2023-04-25T18:00:03Z | http://arxiv.org/abs/2304.13056v2 | # Preheating in Einstein-Cartan Higgs Inflation: Oscillon formation
###### Abstract
We make use of classical lattice simulations in \(3+1\) dimensions to study the preheating stage of Higgs Inflation in Einstein-Cartan gravity. Focusing for concreteness on a simplified scenario involving the seminal Nieh-Yan term, we demonstrate the formation of dense and spatially localized oscillon configurations constituting up to \(70\%\) of the total energy density. The emergence of these meta-stable objects may lead to a prolonged period of matter domination, effectively modifying the post-inflationary history of the Universe as compared to the metric and Palatini counterparts. Notably, the creation of oscillons comes together with a significant gravitational wave signal, whose typical frequency lies, however, beyond the range accessible by existing and planned gravitational wave experiments. The impact of the Standard Model gauge bosons and fermions and the potential extension of our results to more general Einstein-Cartan settings is also discussed.
## 1 Introduction
The constant improvement in the precision of measurements of the Cosmic Microwave Background (CMB) [1; 2] has led to the establishment of the inflationary paradigm [3; 4; 5] as the best candidate to tackle the conceptual problems of the hot Big Bang while generating the primordial density perturbations seeding structure formation. Despite this success, the nature of the inflaton field remains unknown and its role could be played by any candidate able to mimic a scalar field in the slow-roll regime. Indeed, while the Planck and BICEP2 collaborations have set strong constraints on the scale dependence of the density power spectrum and the tensor-to-scalar ratio encoding the amount of primordial gravitational waves [2], large classes of inflationary models remain still compatible with observations [6]. On top of that, knowing the shape of the inflationary potential is a priori not sufficient to determine the whole post-inflationary evolution. In particular, any inflationary model must come together with a graceful exit mechanism able to transfer the energy stored in the homogeneous inflaton condensate to the Standard Model (SM) degrees of freedom, leading with it to the onset of the usual hot Big Bang theory [7; 8; 9]. This poses significant limitations for many particle physics scenarios in which the inflaton-matter couplings are not experimentally known.
Among the many possible particle physics embeddings of the early Universe accelerated expansion, Higgs Inflation (HI) [10; 11] stands out as an ideal candidate for studying the post-inflationary dynamics. In fact, assuming no Beyond SM physics between the electroweak and the Planck scale, it is a priori possible to compute the running of the experimentally known SM couplings, allowing, in principle, for a comprehensive study of the preheating stage. This scenario has been studied in different formulations of gravity, ranging from the standard metric case [10; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22] to the Palatini [23; 24; 25; 26; 27; 28; 29], teleparallel [30], Einstein-Cartan [31; 32; 33; 34] and metric-affine formulations [35; 36], including also several scale-invariant extensions [37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53]. Conversely, the post-inflationary evolution has been studied both analytically and numerically only for the metric [54; 55; 56; 57; 58; 59; 60; 61; 62] and Palatini [63; 64] cases.
In this work, we make use of fully-fledged numerical lattice simulations in 3+1 dimensions to study the preheating stage of HI in Einstein-Cartan (EC) gravity. Restricting ourselves to the radial degree of freedom of the Higgs field, we find that, contrary to what happens in the metric and Palatini formulations of the theory, this scenario allows for the formation of dense and spatially-localized oscillon configurations constituting up to 70% of the total energy density for suitable model parameters. In spite of lacking a conserved charge associated with them, these pseudo-solitonic objects can be rather long-lived, leading to a prolonged matter-domination era that effectively alters the minimum duration of the inflationary phase needed to solve the usual hot Big Bang problems. Additionally, we find that such structures can source a sizeable gravitational wave signal, thus providing an alternative observational channel for this appealing inflationary model. The emergence of oscillons is expected to be robust against the inclusion of gauge fields, since the accumulation of these species turns out to be severely suppressed by their perturbative decay into fermions for a large number of oscillations, significantly exceeding the oscillon formation time.
The paper is structured as follows. In Section 2 we review the embedding of HI in EC gravity, highlighting the differences with respect to the metric and Palatini formulations. Section 3 is devoted to the study of the preheating stage in the scalar sector of the theory, starting from a simple linear analysis to subsequently perform a fully non-linear characterization of oscillon formation via 3+1 classical lattice simulations. Aspects such as the abundance and energy distribution of oscillons or the associated production of gravitational waves at formation are addressed in detail, comparing the latter with the sensitivity and frequency range of current and future GW experiments. The potential impact of the SM gauge bosons and fermions is considered in Section 4. Finally, our conclusions are presented in Section 5, where we argue on possible extensions and improvements of our treatment.
## 2 Higgs Inflation in Einstein-Cartan Gravity
The EC formulation of gravity assumes the tetrads \(e^{A}_{\mu}\) and spin connection \(\omega^{AB}_{\mu}\) (\(\omega^{AB}_{\mu}=-\omega^{BA}_{\mu}\)) as fundamental gravitational variables [65, 66, 67, 68], with the Greek letters related to the usual spacetime coordinate basis and the Latin ones to an orthonormal non-coordinate basis displaying covariance under local Lorentz transformations. The associated metrics and connections entering the covariant derivatives in the two bases,
\[D_{\mu}V^{\alpha}=\partial_{\mu}V^{\alpha}+\Gamma^{\alpha}_{\sigma\mu}V^{ \sigma}\,,\hskip 28.452756ptD_{\mu}V^{A}=\partial_{\mu}V^{A}+\omega^{AB}_{\mu}V _{B}\,, \tag{1}\]
are related through
\[g_{\mu\nu}=e^{A}_{\mu}e^{B}_{\nu}\eta_{AB}\,,\hskip 28.452756pt\Gamma^{\mu}_{ \nu\rho}=e^{\mu}_{A}\left(\partial_{\rho}e^{A}_{\nu}+\omega^{A}_{\rho B}e^{B} _{\nu}\right)\,, \tag{2}\]
with the second expression following directly from the condition \(D_{\mu}e^{A}_{\nu}=0\), without any symmetry assumption on the lower indices. Beyond the curvature tensor, this theory accounts for a non-vanishing torsion contribution
\[T^{\mu}_{\ \nu\rho}=\Gamma^{\mu}_{\ \nu\rho}-\Gamma^{\mu}_{\ \rho\nu}\,, \tag{3}\]
obtained by acting with the commutator of two covariant derivatives on a given vector.
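Explicitly, with the conventions of Eqs. (1) and (3) one finds the standard result (the relative sign of the torsion term depends on index conventions)
\[[D_{\mu},D_{\nu}]V^{\alpha}=R^{\alpha}_{\ \sigma\mu\nu}V^{\sigma}+T^{\lambda}_{\ \mu\nu}D_{\lambda}V^{\alpha}\,,\qquad R^{\alpha}_{\ \sigma\mu\nu}=\partial_{\mu}\Gamma^{\alpha}_{\ \sigma\nu}-\partial_{\nu}\Gamma^{\alpha}_{\ \sigma\mu}+\Gamma^{\alpha}_{\ \lambda\mu}\Gamma^{\lambda}_{\ \sigma\nu}-\Gamma^{\alpha}_{\ \lambda\nu}\Gamma^{\lambda}_{\ \sigma\mu}\,,\]
so that curvature and torsion enter the commutator on an equal footing.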
As compared to General Relativity, this construction offers some conceptual advantages. Firstly, EC gravity can be viewed as the gauge theory of the Poincare group, placing gravitational interactions on equal footing with other fundamental forces in Nature [65, 66]. Secondly, the independent treatment of the metric and the connection in EC gravity avoids the necessity of introducing specific boundary terms in order to obtain the Einstein equations of motion [69]. Finally, the spin connection can be easily coupled to fermions, making the EC formulation particularly useful in describing the behaviour of the SM in the presence of gravity.
In the context of scalar-tensor theories, the existence of torsion opens up the possibility of including additional gravitational operators beyond those usually considered in metric HI, namely the usual SM Higgs sector and the Einstein-Hilbert term modified by the addition of a non-minimal coupling to gravity, 1
Footnote 1: We adopt here a mostly plus signature and work in Planckian units \(M_{P}^{2}\equiv 8\pi G=1\).
\[S_{\rm SM+EH}=\int d^{4}x\sqrt{-g}\left[\frac{1+\xi h^{2}}{2}g^{\mu\nu}R_{\mu \nu}(\Gamma)-\frac{1}{2}g^{\mu\nu}\partial_{\mu}h\partial_{\nu}h-\frac{\lambda }{4}(h^{2}-v^{2})^{2}\right]\,, \tag{4}\]
with \(h\) the Higgs field in the unitary gauge, \(\xi\) a dimensionless coupling constant to be determined from observations, \(\lambda\) the Higgs self-coupling, and \(v\simeq 250\) GeV its vacuum expectation value. In the context of EC gravity, building the most general theory containing only a massless spin-2 gravitational degree of freedom requires the introduction of a plethora of operators. As shown explicitly in Ref. [34] (see also Refs. [31; 32] for specific choices), the most general graviscalar sector beyond (4) including at most quadratic operators in the derivatives of all fields takes the form
\[\begin{split} S_{\rm T}&=\int d^{4}x\,\sqrt{-g} \Bigg{[}v^{\mu}\partial_{\mu}Z^{v}+a^{\mu}\partial_{\mu}Z^{a}\\ &+\frac{1}{2}\Big{(}G_{vv}v_{\mu}v^{\mu}+2G_{va}v_{\mu}a^{\mu}+G_ {aa}a_{\mu}a^{\mu}+G_{\tau\tau}\tau_{\alpha\beta\gamma}\tau^{\alpha\beta \gamma}+\tilde{G}_{\tau\tau}\epsilon^{\mu\nu\rho\sigma}\tau_{\lambda\mu\nu} \tau^{\lambda}_{\ \rho\sigma}\Big{)}\Bigg{]}\,,\end{split} \tag{5}\]
with
\[Z^{v/a}=\zeta_{h}^{v/a}h^{2}\,,\qquad\qquad G_{ij}=c_{ij}\left(1+\xi_{ij}h^{2} \right)\, \tag{6}\]
where no summation over repeated \(i,j\) indices is implied and \(\zeta_{h}^{v/a}\), \(c_{ij}\) and \(\xi_{ij}\) are constants. Here
\[v_{\mu}=T^{\nu}_{\ \mu\nu}\,,\qquad\quad a_{\mu}=\epsilon_{\mu\nu\rho\sigma}T^{ \nu\rho\sigma}\,,\qquad\quad\tau_{\mu\nu\rho}=\frac{2}{3}\left(T_{\mu\nu\rho} -v_{[\nu}g_{\rho]\mu}-T_{[\nu\rho]\mu}\right)\, \tag{7}\]
stand respectively for the vector, pseudo-vector and reduced tensor irreducible components of the torsion tensor (3),
\[T_{\mu\nu\rho}=e_{\mu A}T^{A}_{\nu\rho}=\frac{2}{3}v_{[\nu}g_{\rho]\mu}-\frac{ 1}{6}a^{\sigma}\epsilon_{\mu\nu\rho\sigma}+\tau_{\mu\nu\rho}\,, \tag{8}\]
with the square brackets standing for anti-symmetrization in the corresponding indices.
In order to get a more intuitive picture of the inflationary and post-inflationary dynamics in this rather complicated theory, it is convenient to move to the Einstein frame, where the Higgs field is minimally coupled to gravity and no explicit torsion operator is present. This can be achieved in two steps. First, we perform a Weyl rescaling of the metric with \(g_{\mu\nu}\to(1+\xi h^{2})^{-1}g_{\mu\nu}\). Second, we solve the equation of motion for the anti-symmetric part of the connection, _i.e._ the one encoding torsion, and plug back the solution into the action. The explicit computation has already been developed in Refs. [31; 34; 50], to which we refer the interested reader for details. The final action \(S=S_{\rm SM+EH}+S_{\rm T}\) takes the compact form
\[S=\int d^{4}x\sqrt{-g}\Bigg{[}\frac{R}{2}-\frac{1}{2}K(h)(\partial h)^{2}-V(h )\Bigg{]}\,, \tag{9}\]
with
\[K(h)=\frac{1+c\,h^{2}}{(1+\xi h^{2})^{2}}\,,\qquad\qquad V(h)=\frac{\lambda}{4} \frac{h^{4}}{(1+\xi h^{2})^{2}}\,, \tag{10}\]
and
\[c(h)=\xi+6\xi^{2}+4f(h)\frac{G_{aa}(\zeta_{h}^{v})^{2}+G_{vv}(\zeta_{h}^{a})^{2} -G_{va}\zeta_{h}^{v}\zeta_{h}^{a}}{G_{vv}G_{aa}-G_{va}^{2}}\;. \tag{11}\]
Here we have explicitly neglected the Higgs vacuum expectation value, since this will not be relevant for our analysis of inflation and preheating. For the purposes of this paper, we will mainly be interested in EC scenarios leading to field-independent \(c\) values. For illustration purposes only, in what follows we will consider a simple scenario with
\[c_{vv}=-\frac{2}{3}\,,\qquad\quad c_{va}=0\,,\qquad\quad c_{aa}= \frac{1}{24}\,, \tag{12}\] \[\xi_{vv}=\xi_{aa}=-\zeta_{h}^{v}=\xi\,,\qquad\quad\xi_{va}=0\,, \qquad\quad\zeta_{h}^{a}=\frac{1}{4}\xi_{\eta}\;,\]
leading effectively to a parameter
\[c=\xi+6\xi_{\eta}^{2}\,. \tag{13}\]
In this limit, the general equation (5) reduces to a simple interaction term involving the seminal Nieh-Yan topological invariant [70, 71]
\[S_{\rm T}=-\frac{1}{4}\int d^{4}x\,\xi_{\eta}h^{2}\partial_{\mu}\left(\sqrt{- g}\epsilon^{\mu\nu\rho\sigma}T_{\nu\rho\sigma}\right)\;, \tag{14}\]
with \(\epsilon^{\mu\nu\rho\sigma}\) the totally anti-symmetric tensor (\(\epsilon_{0123}=1\)) and \(\xi_{\eta}\) a non-minimal coupling to be determined from observations. As shown in Refs. [32, 52], this specific choice provides a smooth parametric interpolation between the metric (\(\xi_{\eta}=\xi\)) and Palatini formulations of HI (\(\xi_{\eta}=0\)), a property that, as we will show in Section 3, will turn out to be critical for oscillon formation. Note, however, that most of our results can be easily extended to more general settings within the EC multiverse (5), provided that they effectively lead to a constant value of \(c\) in the field range of interest.
For constant \(c\), the non-canonical kinetic term in Eq. (2.9) can be made canonical by performing a field redefinition
\[\frac{d\chi}{dh}=\sqrt{K(h)}\,. \tag{2.15}\]
Solving this differential equation and inverting the result in the small (\(c\,h^{2}\ll 1\)) and large field (\(c\,h^{2}\gg 1\)) regimes, we obtain the effective Einstein-frame potential
\[V\simeq\begin{cases}\frac{\lambda}{4}\chi^{4}\,,&\text{for}\quad\quad\chi< \chi_{c}\;\,,\\ \frac{\lambda}{4\xi^{2}}\left[1-\exp\left(-\frac{2\xi|\chi|}{\sqrt{c}}\right) \right]^{2}\,,&\text{for}\quad\quad\chi\gg\chi_{c}\,,\end{cases} \tag{2.16}\]
with \(\chi_{c}=1/\sqrt{c}\) a crossover scale depending only on the parameter \(c\). This expression flattens out exponentially at large field values, allowing for inflation to take place with the usual slow-roll initial conditions. In this regime, our scenario coincides essentially with that of \(\alpha\)-attractor E-models [72], upon the formal replacement \(c\to 6\xi^{2}\alpha\), but in this case with a discrete \(\mathds{Z}_{2}\) symmetry. Although such a reflection invariance does not affect the inflationary dynamics at all (other than allowing, of course, for inflation with both negative and positive field values), it will have important consequences for the preheating phase, as we will show explicitly in Section 3.
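To see the crossover and the plateau emerge concretely, the field redefinition (2.15) can be integrated numerically. The sketch below is our own illustration, not part of the paper's pipeline, and the parameter values (\(\lambda\), \(\xi\), \(\xi_{\eta}\)) are assumed, benchmark-like numbers:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

lam, xi, xi_eta = 1e-3, 5e4, 1.4e3          # assumed, benchmark-like values
c = xi + 6 * xi_eta**2                      # Eq. (2.13)

h = np.linspace(0.0, 0.5, 200_000)          # Jordan-frame field (M_P = 1)
K = (1 + c * h**2) / (1 + xi * h**2)**2     # kinetic function, Eq. (2.10)
V = 0.25 * lam * h**4 / (1 + xi * h**2)**2  # potential, Eq. (2.10)

chi = cumulative_trapezoid(np.sqrt(K), h, initial=0.0)   # Eq. (2.15)

# the potential saturates at lambda/(4 xi^2) on the plateau ...
print(V[-1] / (lam / (4 * xi**2)))          # -> ~1
# ... and is well described by the large-field branch of Eq. (2.16)
mask = chi > 5 / np.sqrt(c)
approx = lam / (4 * xi**2) * (1 - np.exp(-2 * xi * chi / np.sqrt(c)))**2
print(np.max(np.abs(V[mask] - approx[mask]) / approx[mask]))  # per-cent level
```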
Knowing the potential in canonical field variables, we can proceed now with the standard inflationary analysis, computing explicitly the slow-roll parameters,
\[\epsilon\equiv\frac{1}{2}\left(\frac{V_{\chi}}{V}\right)^{2}\simeq\frac{8\xi^{2 }}{c}\exp\left(-\frac{4\xi|\chi|}{\sqrt{c}}\right)\,,\hskip 28.452756pt\eta\equiv\frac{V_{\chi\chi}}{V} \simeq-\frac{8\xi^{2}}{c}\exp\left(-\frac{2\xi|\chi|}{\sqrt{c}}\right)\,, \tag{2.17}\]
and the relation between the inflaton field value and the number of \(e\)-folds of inflation,
\[\mathcal{N}(\chi)=\int_{\chi_{\rm end}}^{\chi}\frac{d\chi^{\prime}}{\sqrt{2 \epsilon(\chi^{\prime})}}\simeq\frac{c}{8\xi^{2}}\exp\left(\frac{2\xi|\chi|}{ \sqrt{c}}\right)\hskip 14.226378pt\longrightarrow\hskip 14.226378pt|\chi( \mathcal{N})|=\frac{\sqrt{c}}{2\xi}\log\left(\frac{8\xi^{2}\mathcal{N}}{c} \right)\, \tag{2.18}\]
with
\[\chi_{\rm end}=\frac{\sqrt{c}}{2\xi}\log\left(1+\frac{2\sqrt{2}\xi}{\sqrt{c} }\right) \tag{2.19}\]
the field value at which inflation ends \((\epsilon(\chi_{\rm end})=1)\). This allows us to determine the amplitude of the primordial spectrum of density perturbations \(A_{s}=V/(24\pi^{2}M_{P}^{4}\epsilon)\), its tilt \(n_{s}=1+2\eta-6\epsilon\) and the tensor-to-scalar ratio \(r=16\epsilon\) at the time at which the pivot scale \(k_{*}=0.05\,{\rm Mpc}^{-1}\) exits the horizon,
\[A_{s}=\frac{\lambda\mathcal{N}_{*}^{2}}{12\pi^{2}c}\,,\hskip 42.679134ptn_{s}=1- \frac{2}{\mathcal{N}_{*}}\,,\hskip 42.679134ptr=\frac{2c}{\xi^{2}\mathcal{N}_{*}^{2}}\,. \tag{2.20}\]
The precise value of the number of \(e\)-folds \(\mathcal{N}_{*}\) to be inserted in these expressions depends on the whole post-inflationary evolution, and in particular, on how long it takes after inflation to enter the phase of radiation domination needed for successful Big Bang Nucleosynthesis. Note also that the observational constraint on the CMB-normalization, \(A_{s}=2.1\cdot 10^{-9}\)[1], restricts only the ratio of couplings determining the amplitude of the inflationary potential, namely
\[\frac{c}{\lambda}=\frac{2}{5}\cdot 10^{7}\,\mathcal{N}_{*}^{2}\,. \tag{2.21}\]
Therefore, as far as inflation is concerned, a change in the Higgs self-coupling \(\lambda\) can always be compensated by a change of the parameter \(c\), meaning that a strict relation between cosmological observables and SM parameters can only be established once the running of \(\lambda\) is known. In the context of Einstein-Cartan gravity, this is expected to depend not only on the usual SM couplings but also on the actual strength of five- and six-dimensional Higgs-fermion and fermion-fermion interactions appearing when torsion is integrated out [66], potentially extended with additional non-minimal couplings [73; 74; 75; 33; 76; 34]. In the absence of a proper computation of the SM running in this context, we will consider in what follows a fiducial value \(\lambda=10^{-3}\). This choice is commensurate with the value obtained using the pure SM running for the top quark _pole_ masses in Refs. [77; 78], while coinciding also with the favoured one in the Palatini formulation of HI [28]. 2
Footnote 2: For a discussion of HI for negative values of \(\lambda\), we refer the reader to Refs. [79; 56].
A simple inspection of the inflationary observables (2.20) reveals several interesting regimes. For \(\xi=\xi_{\eta}\), one recovers the metric HI predictions, where both the spectral tilt and the tensor-to-scalar ratio are almost insensitive to the specific value of the non-minimal couplings to gravity. On the contrary, for \(\xi\gtrsim\xi_{\eta}^{2}\), we rather have \(c\simeq\xi\), recovering effectively the predictions of the Palatini HI scenario, and in particular the strong \(1/\xi\) suppression in the tensor-to-scalar ratio. The intermediate regime, \(\xi_{\eta}<\xi<\xi_{\eta}^{2}\), leads to a tensor-to-scalar ratio proportional to \(\xi_{\eta}^{2}/\xi^{2}\), with \(\xi_{\eta}\) fixed to a value \(\xi_{\eta}\simeq\sqrt{2\lambda/3}\cdot 10^{3}\,\mathcal{N}_{*}\) by Eq. (2.21) and \(\xi\) almost independent of the CMB normalization. Since preheating has already been extensively studied both in the metric [54; 55; 56; 57; 58; 59; 60; 61; 62] and Palatini [63; 64] regimes, we will focus in what follows on the uncharted range \(\xi_{\eta}<\xi<\xi_{\eta}^{2}\).
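For orientation, the fiducial \(\lambda=10^{-3}\) together with an assumed \(\mathcal{N}_{*}=55\) fixes the remaining numbers; the following back-of-the-envelope check is ours, not the paper's:

```python
import numpy as np

lam, N_star = 1e-3, 55.0                   # fiducial lambda, assumed N_*
c = 0.4e7 * lam * N_star**2                # CMB normalization, Eq. (2.21)
xi_eta = np.sqrt(2 * lam / 3) * 1e3 * N_star   # from c ~ 6 xi_eta^2
print(f"c = {c:.3g},  xi_eta = {xi_eta:.0f}")  # c ~ 1.2e7, xi_eta ~ 1.4e3

for xi in (5.0e4, 2.5e5):                  # the benchmark couplings used later
    r = 2 * c / (xi**2 * N_star**2)        # Eq. (2.20)
    print(f"xi = {xi:.1e}:  n_s = {1 - 2/N_star:.3f},  r = {r:.1e}")
```

Both benchmark values give \(r\) of order \(10^{-6}\)-\(10^{-7}\), consistent with the statement in Section 3 that the inflationary tensor background lies well below the sensitivity of CMB Stage-IV experiments.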
## 3 Preheating in the scalar sector
Immediately after the end of inflation, the vast majority of the energy density of the Universe is stored in the zero mode of the Higgs field, which will start oscillating around the minimum of the potential (2.16), leading to periodically time-varying masses for all the SM particles coupled to it and potentially inducing particle creation. The direct creation of SM fermions out of the Higgs condensate is, however, severely restricted by Pauli blocking effects [80; 81], leaving the production of Higgs excitations and electroweak gauge bosons as the main depletion channels.
As a first step to study the complicated preheating dynamics of EC HI, let us concentrate on the pure Higgs sector, neglecting momentarily its coupling to the SM gauge bosons and fermions. In this regime, the evolution of the inflaton field in a Friedmann-Lemaître-Robertson-Walker background obeys a single-field Klein-Gordon equation
\[\ddot{\chi}+3H\dot{\chi}-a^{-2}\nabla^{2}\chi+V_{,\chi}=0\,, \tag{3.1}\]
with \(a\) the scale factor, the dots denoting derivatives with respect to the cosmic time \(t\) and the Hubble rate \(H=\dot{a}/a\) evolving according to the Friedmann equations
\[3H^{2}=\frac{1}{2}\dot{\chi}^{2}+\frac{1}{2a^{2}}(\nabla\chi)^{2}+V(\chi)\,, \qquad\qquad\dot{H}=-\frac{1}{2}\dot{\chi}^{2}-\frac{1}{6a^{2}}(\nabla\chi)^{2}\,. \tag{3.2}\]
The initial stages of preheating can be understood by splitting the inflaton field into a homogeneous component \(\bar{\chi}(t)\) satisfying the zero-gradient limit of Eqs. (3.1) and (3.2) and a small perturbation \(\delta\chi(\mathbf{x},t)\). The combination of the background equations of motion leads to the rather enlightening expression [63]
\[\frac{dH}{d\bar{h}}=-\frac{\dot{\bar{\chi}}^{2}}{2\dot{\bar{h}}}=-\frac{1}{2} \sqrt{6H^{2}-2V[\bar{\chi}(\bar{h})]}\,\frac{d\bar{\chi}}{d\bar{h}}\,, \tag{3.3}\]
with \(V[\bar{\chi}(\bar{h})]\) the \(h\)-dependent potential in Eq. (2.10) and the field redefinition \(d\bar{\chi}/d\bar{h}\) completely encoding the specific dependence on the EC formulation of gravity under consideration, cf. Eq. (2.15). A simple inspection of the associated kinetic function (2.10) at the intermediate field values \(\bar{h}_{c}\ll\bar{h}\ll\bar{h}_{end}\) reveals different behaviours for different values of \(c\). In particular, for \(c\to\xi+6\xi^{2}\), corresponding to the metric HI limit \(\xi_{\eta}=\xi\), the kinetic function remains parametrically large, \(d\bar{\chi}/d\bar{h}\approx\sqrt{6}\xi\bar{h}/M_{P}\gg 1\), leading to a quick damping of the oscillations as \(\bar{h}\) approaches zero. On the other hand, for \(c\to\xi\), corresponding to the Palatini HI limit \(\xi_{\eta}=0\), we have approximately \(d\bar{\chi}/d\bar{h}\approx 1\), making the damping of the oscillations significantly less efficient and allowing for the inflaton field to return closer to the inflationary plateau for a given number of oscillations. These recursive field incursions have a dramatic
effect on the production of Higgs excitations, as becomes evident when considering the mode equations for the linear perturbations \(\delta\chi\) in Fourier space,
\[\delta\ddot{\chi}_{\mathbf{k}}+3H\delta\dot{\chi}_{\mathbf{k}}+\left(\mathbf{k}^ {2}/a^{2}+V_{,\chi\chi}\right)\delta\chi_{\mathbf{k}}=0\,, \tag{3.4}\]
with \(\mathbf{k}\) the associated wave-vector. In particular, the effective square frequency of this damped harmonic oscillator becomes non-positive definite whenever the Higgs field accelerates towards the minimum of the potential from beyond the inflection point \(\chi_{i}=\sqrt{c}/(2\xi)\ln 2\). This translates into a tachyonic amplification of long-wave fluctuations (\(\mathbf{k}^{2}/a^{2}<|V_{,\chi\chi}|\)) that can significantly enhance the efficiency of other Bose stimulation effects. The maximum momentum that will grow due to tachyonic instability is given by the minimum of the second derivative \(V_{,\chi\chi}\), which turns out to be located at a parametrically fixed value
\[\chi_{\rm min}=\frac{\sqrt{c}\log(2)}{\xi}\,, \tag{3.5}\]
such that
\[\frac{|\mathbf{k}_{\rm max}|}{a}=\sqrt{|V_{,\chi\chi}(\chi_{\rm min})|}=2^{- 3/2}M\simeq 0.35M\,, \tag{3.6}\]
with
\[M^{2}\equiv V_{,\chi\chi}\big{|}_{\chi_{c}\to 0}=\frac{2\lambda}{c} \tag{3.7}\]
the oscillation frequency of the condensate for \(\chi_{c}\to 0\).
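The location of the tachyonic band can be verified numerically; the short check below is our own, with assumed benchmark-like parameters:

```python
import numpy as np

lam, xi = 1e-3, 5e4                        # assumed values
c = xi + 6 * 1.4e3**2
M = np.sqrt(2 * lam / c)                   # Eq. (3.7)

chi = np.linspace(1e-6, 20 * np.sqrt(c) / (2 * xi), 400_000)
V = lam / (4 * xi**2) * (1 - np.exp(-2 * xi * chi / np.sqrt(c)))**2
Vpp = np.gradient(np.gradient(V, chi), chi)

i = np.argmin(Vpp)                         # most tachyonic point
print(chi[i] / (np.sqrt(c) * np.log(2) / xi))   # -> ~1, cf. Eq. (3.5)
print(np.sqrt(-Vpp[i]) / (2**-1.5 * M))         # -> ~1, cf. Eq. (3.6)
```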
If the resulting fluctuations are not efficiently diluted by the expansion of the Universe, they will eventually backreact on the homogeneous dynamics, leading to the fragmentation of the condensate into a highly inhomogeneous state with sizable overdensities. These overdensities can potentially collapse under their own self-interactions, leading to the formation of periodically-varying pseudo-solitonic objects or oscillons [82, 83, 84, 85, 86, 87, 88]. Indeed, such localized quasi-spherical configurations are known to appear in many inflationary models involving a shallower-than-quadratic potential [89, 90, 87],3 as the one under consideration.
Footnote 3: This feature allows for the partial cancellation of the dispersion terms in Eq. (3.4), leaving effectively a free harmonic oscillator with a frequency of the order of the inflaton mass at the minimum.
Having a potential that can support oscillon solutions is of course a necessary, although not sufficient, condition for their formation. In order to properly assess the formation of oscillons, we will resort in what follows to non-perturbative \(3+1\) classical lattice simulations. We will make use of the recently developed \(\mathcal{C}osmo\mathcal{L}attice\) package [91, 92]. This public code allows one to solve the preheating dynamics in a self-consistent way, accounting not only for particle production but also for its backreaction effects on the homogeneous Friedmann equations (3.2) and therefore on the scale factor evolution. To this end, it makes use of a cubic box of comoving size \(L\), \(N^{3}\) points uniformly distributed and periodic boundary conditions. The values of \(L\) and \(N\) are chosen in such a way that all the relevant momenta involved in our simulations are always well within the infrared (IR) and ultraviolet (UV) resolution in momentum space,
\[k_{\rm IR}=\frac{2\pi}{L}\,,\qquad\qquad k_{\rm UV}=\frac{\sqrt{3}}{2}Nk_{\rm IR }\,, \tag{3.8}\]
associated respectively with the size of the simulation box and the lattice spacing.
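For reference, the resolutions quoted in Table 1 follow directly from Eq. (3.8); the small sketch below (ours) reproduces them up to rounding conventions:

```python
import numpy as np

for L, N in ((30, 512), (60, 512)):        # L in units of 1/M, cf. Table 1
    k_IR = 2 * np.pi / L                   # Eq. (3.8)
    k_UV = np.sqrt(3) / 2 * N * k_IR
    print(f"L = {L}/M, N = {N}:  k_IR = {k_IR:.2f} M,  k_UV = {k_UV:.0f} M")
```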
Following the standard procedure, the initial conditions for the Higgs field are determined by evolving the homogeneous version of Eq. (3.1) from the deep slow-roll regime (\(\mathcal{N}_{*}=55\)) till
the end of inflation, which we identify with the onset of our simulations (\(t=0\)). Fluctuations over this homogeneous background are then added as Gaussian random fields, as done usually for systems with short classicalization times as the one under consideration. The subsequent temporal evolution of the system is performed using a 2nd-order Velocity Verlet algorithm, which turns out to be sufficient to keep the violation of energy conservation below \(\mathcal{O}(10^{-3})\) during the whole simulation time. More precisely, the scale factor is evolved using the second Friedmann equation
\[\frac{\ddot{a}}{a}=\frac{1}{3}\langle-2E_{K}+E_{V}\rangle\,, \tag{3.9}\]
while the first one,
\[H^{2}=\frac{1}{3}\langle E_{\rm K}+E_{\rm G}+E_{\rm V}\rangle\,, \tag{3.10}\]
is rather used as a check of energy conservation. In these expressions, we have conveniently split the total energy density into its kinetic (K), gradient (G) and potential (V) counterparts,
\[E_{K}\equiv\frac{1}{2}\dot{\chi}^{2}\,,\hskip 28.452756ptE_{G}\equiv\frac{1}{2 a^{2}}(\nabla\chi)^{2}\,,\hskip 28.452756ptE_{V}\equiv V\,, \tag{3.11}\]
with \(\langle\ldots\rangle\) denoting the spatial averaging of the corresponding quantity over the simulation volume. Note also that the lattice potential accounts for both the large and small field regimes in Eq. (2.16), suitably interpolated in our lattice implementation. Whenever needed, we will also make use of temporal averages of specific quantities over a given number of oscillations, denoting them as \(\langle\ldots\rangle_{T}\).

Figure 1: (Left) Field values for the end of inflation \(\chi_{\rm end}\) (orange), the location of the inflection point \(\chi_{i}\) (green) and the field value associated to the minimum of \(V_{,\chi\chi}\) (or the maximum momentum growing due to tachyonic instability) as a function of the non-minimal coupling \(\xi\). The red line indicates the critical field value \(\chi_{c}\) below which the potential becomes quartic. As expected, in the metric limit \(\xi\simeq 1500\) inflation ends too close to the inflection point, which, combined with a higher Hubble friction, prevents the field from re-entering the tachyonic region of the potential in successive oscillations. (Right) Schematic form of the Einstein-frame HI potential for different values of the non-minimal coupling \(\xi\), with \(\xi_{1}<\xi_{2}<\xi_{3}\).
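A useful point of comparison for the lattice output is the homogeneous limit of Eqs. (3.1)-(3.2), which can be integrated with standard ODE methods. The sketch below is our own simplified illustration: it starts from rest at \(\chi_{\rm end}\), uses only the plateau branch of Eq. (2.16) (the quartic region near the origin is ignored), and takes assumed benchmark-like couplings:

```python
import numpy as np
from scipy.integrate import solve_ivp

lam, xi = 1e-3, 5e4                        # assumed benchmark-like values
c = xi + 6 * 1.4e3**2
sc = np.sqrt(c)
M = np.sqrt(2 * lam / c)

def V(chi):                                # plateau branch of Eq. (2.16)
    return lam / (4 * xi**2) * (1 - np.exp(-2 * xi * abs(chi) / sc))**2

def dV(chi):                               # its analytic derivative
    u = np.exp(-2 * xi * abs(chi) / sc)
    return np.sign(chi) * lam * u * (1 - u) / (xi * sc)

def rhs(t, y):                             # y = (chi, chidot, ln a)
    chi, chidot, _ = y
    H = np.sqrt((0.5 * chidot**2 + V(chi)) / 3.0)    # homogeneous Eq. (3.2)
    return [chidot, -3.0 * H * chidot - dV(chi), H]  # homogeneous Eq. (3.1)

chi_end = sc / (2 * xi) * np.log(1 + 2 * np.sqrt(2) * xi / sc)  # Eq. (2.19)
sol = solve_ivp(rhs, (0.0, 500.0 / M), [chi_end, 0.0, 0.0],
                rtol=1e-9, atol=1e-14, max_step=0.05 / M)
print(f"|chi| at Mt = 500: {abs(sol.y[0, -1]):.2e},  "
      f"e-folds elapsed: {sol.y[2, -1]:.2f}")
```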
### Oscillon formation
Changing the value of the non-minimal coupling \(\xi\) is expected to substantially affect the background dynamics and the associated growth of perturbations, leading to different fragmentation processes. Indeed, as shown in Fig. 1, the ratio \(\chi_{\rm end}/\chi_{i}\) between the field value \(\chi_{\rm end}\) at the end of inflation and the location of the inflection point \(\chi_{i}\) increases with \(\xi\), allowing the background field to explore the tachyonic region of the potential for a higher number of oscillations. At the same time, the Hubble rate immediately after the end of inflation scales inversely with \(\xi\) (\(H\sim V_{\rm inf}^{1/2}\sim\sqrt{\lambda}/\xi\)), so that higher values of this coupling also translate into a less efficient expansion rate.
To characterize the preheating stage in EC HI and the potential formation of oscillon configurations in this setting, we will consider two benchmark values of \(\xi\) leading to different fragmentation modalities. These values are summarized in Table 1, where we also present the grid parameters, the infrared and ultraviolet resolution of the corresponding reciprocal lattice in units of the oscillation frequency (3.7) and the final number of \(e\)-folds of preheating considered in our simulations. In both cases, the associated tensor-to-scalar ratio lies substantially below the expected sensitivity of CMB Stage-IV experiments [93], making the stochastic GWs background generated during inflation completely undetectable in the near future.
The typical outputs of our simulations are presented in Fig. 2 and in this URL, where we display several snapshots of the 3-dimensional lattice volume as well as computer-generated movies explicitly demonstrating the formation of oscillons in EC HI. The emergence of these pseudo-solitonic objects manifests itself also through several observables which we now proceed to describe:
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \(\xi\) & \(L\left[M^{-1}\right]\) & N & \(k_{\rm IR}\left[M\right]\) & \(k_{\rm UV}\left[M\right]\) & \(\Delta\mathcal{N}\) \\ \hline \hline \(5.0\cdot 10^{4}\) & 30 & 512 & 0.2 & 89 & 2 \\ \hline \(2.5\cdot 10^{5}\) & 60 & 512 & 0.1 & 44 & 1 \\ \hline \end{tabular}
\end{table}
Table 1: Lattice parameters and resolution for the two benchmark scenarios considered in our simulations. For each case, we also give the final number of \(e\)-folds of preheating reached in our simulations, \(\Delta\mathcal{N}\).

* **Field average and time-dependent dispersion:** As shown in the upper panels of Fig. 3, the coherent nature of the inflaton field at sufficiently early times is well-captured by the field average \(\langle\chi\rangle\), which undergoes damped oscillations driven essentially by the homogeneous version of Eq. (3.1). However, the cumulative tachyonic production of Higgs excitations eventually translates into a sharp deviation from this coherent behaviour, with the average field value drastically dropping down to zero at that time. As anticipated at the beginning of this section, a change in the non-minimal coupling \(\xi\) impacts both the number of times the field undergoes tachyonic oscillations and the overall backreaction dynamics. In particular, for \(\xi=5\cdot 10^{4}\) and thanks to the \(\mathds{Z}_{2}\) symmetry of the potential, the homogeneous condensate is able to re-enter the instability region for about 6-7 consecutive semi-oscillations before the cosmic expansion dampens out its amplitude. Afterwards, the field continues oscillating in the quadratic part of the potential, with the condensate remaining intact for about 8-9 additional semi-oscillations, to finally fragment in less than one oscillation. On the other hand, for \(\xi=2.5\cdot 10^{5}\), the Higgs field spends most of the time beyond the inflection point, leading to fragmentation in just 6 semi-oscillations. This situation is reminiscent of what happens in the Palatini formulation of HI, where the field incursions all the way back to the inflationary plateau lead to fragmentation in a single oscillation [63, 64]. Note, however, that in that case the potential is essentially quartic all the way up to the plateau and is therefore unable to support the formation of oscillons. A similar picture also follows from considering the root mean squared (rms) characterizing the typical growth of perturbations,
\[\text{rms}(\chi)=\sqrt{\langle\chi^{2}\rangle-\langle\chi\rangle^{2}}\,. \tag{3.12}\]
As shown in the lower panels of Fig. 3, backreaction takes place precisely when this quantity reaches the amplitude of the background component and the average energy density stored in perturbations becomes comparable with that of the inflaton condensate.
Figure 2: Oscillon configurations for our two benchmark scenarios at time \(Mt=500\), corresponding respectively to \(\Delta\mathcal{N}=2\) (left) and \(\Delta\mathcal{N}=0.9\) (right). The contours in the plots represent regions with overdensities 5 times larger than the average. Once formed, these objects remain located at almost fixed spatial positions for a large number of oscillations. Since the box is comoving, however, their fixed physical size corresponds to a comoving size that shrinks with time.
Figure 3: Lattice results for our two benchmark scenarios \(\xi=5\cdot 10^{4}\) (left), \(2.5\cdot 10^{5}\) (right). First row: Time evolution of the lattice-averaged field value \(\langle\chi\rangle\) (blue) and the homogeneous solution (red dashed), as compared to the location of the inflection points \(\pm\chi_{i}\) (green). Second row: Time evolution of the root mean squared (3.12) characterizing the growth of perturbations. When this quantity becomes comparable with the amplitude of the background oscillations, fragmentation takes place, and \(\langle\chi\rangle\) starts deviating from the homogeneous solution.

* **Energy budget and average equation of state:** As previously mentioned, one of the key ingredients allowing the inflaton condensate to re-enter the tachyonic region of the potential is a relatively large ratio between the field value at the end of inflation and the location of the inflection point. However, an important contribution to the dynamics comes also from the expansion rate regulating the damping of oscillations. In the presence of particle creation, this is effectively determined by the relative contribution of kinetic, gradient and potential counterparts to the total energy density, or, equivalently, by the associated equation-of-state parameter, \[w\equiv\frac{\langle p\rangle}{\langle\rho\rangle}=\frac{\langle E_{K}-E_{G}/ 3-E_{V}\rangle}{\langle E_{K}+E_{G}+E_{V}\rangle}\,.\] (3.13) The evolution of the different energy components for our two benchmark scenarios is displayed in the upper panels of Fig. 4. During the initial tachyonic oscillations, the energy budget is dominated by the potential energy contribution, reflecting the fact that the inflaton field spends more time close to the turning points than to the minimum of the potential. This is also apparent in the evolution of the equation-of-state parameter, displayed in the lower panels of the same figure. In particular, while this quantity oscillates wildly between \(w=-1\) (potential energy domination) and \(w=1\) (kinetic energy domination), its temporal average always lies within the interval \(-1<\langle w\rangle_{T}<0\), accounting both for the early-time field excursions in the plateau domain and the slight tendency to equipartition between kinetic and potential energy contributions once the oscillation amplitude enters the quadratic part of the potential. This situation holds until the onset of backreaction, where the growth of perturbations leads to the appearance of a non-negligible gradient component that becomes commensurable with, or even equal to, the potential counterpart. At this fragmentation stage, the equation of state rapidly approaches zero, in accordance with Eq. (3.13) and the energy budget at that time. The created oscillons then behave collectively as pressureless dust. At this point, it is important to remark that, in the parameter range of our simulations, backreaction always happens before the average field amplitude drops below the threshold value \(\chi_{c}\). Therefore, when fragmentation happens and oscillons form, the Universe enters a sustained period of matter domination regardless of the shape of the potential. This contrasts with the naive expectation following from an incomplete homogeneous treatment, where a radiation-dominated stage is expected to develop at field values \(\bar{\chi}<\chi_{c}\), where the EC HI potential (2.16) becomes approximately quartic [94].
* **Power spectra:** The unstable tachyonic band in Eq. (3.6) can be quantitatively observed in the Higgs power spectrum \(\mathcal{P}_{k}\) or two-point correlation function in Fourier space, customarily defined through \[\langle\chi^{2}\rangle=\int\frac{k^{3}}{2\pi^{2}}d\ln k\,\mathcal{P}_{k}\.\] (3.14)
Figure 4: Lattice results for our two benchmark scenarios \(\xi=5\cdot 10^{4}\) (left), \(2.5\cdot 10^{5}\) (right). First row: Evolution of the volume-averaged (semi-transparent) kinetic (blue), gradient (red) and potential (green) contributions to the total energy, together with their time averages (opaque). Second row: Time evolution of the equation of state parameter (blue) and its time average (orange).
As shown in Fig. 5, the associated infrared fluctuations experience a rapid growth at the very early stages of preheating, where the average field value exceeds the location of the inflection point \(\chi_{i}\). As expected, this growth is stronger for higher values of \(\xi\), in agreement with the higher efficiency and longer duration of the tachyonic regime in this case. Due to self-resonance, the enhancement of perturbations continues even for \(\langle\chi\rangle<\chi_{i}\), albeit at a significantly slower rate. Eventually, the growth of fluctuations is halted by backreaction effects and a broad peak appears at a _physical_ momentum \(\rm k_{p}\), corresponding to a typical oscillon size \(R\sim(2\pi)/\rm k_{p}\). 4 It is interesting to notice that increasing \(\xi\) also slightly affects the typical excited band and the final size of the oscillons. As we will see in the forthcoming section, this has a very mild impact on the peak frequency of the produced GWs signal. Footnote 4: Since the momenta on the lattice are co-moving and the oscillons have a fixed physical size, this scale shifts towards the UV due to the expansion of the Universe.
Figure 5: Time evolution of the energy power spectra for our two benchmark scenarios, with each spectrum taken at intervals of \(\Delta(Mt)=10\). The initial growth of perturbations at low momenta is clearly distinguishable, as well as the broad peak appearing due to oscillon formation. Once oscillons have formed, we observe a residual growth of momenta in the UV. This is due to the presence of the quartic region at the bottom of the EC HI potential and therefore does not happen in other, purely quadratic models.

* **Energy density histograms:** The smoking gun that signals the actual existence of quasi-spherical structures of a fixed physical size, rather than other kinds of anisotropies, is the energy density distribution [88]. As shown in Fig. 6, the histograms containing this quantity develop a flattening region at high densities, representing the points on the lattice that lie within the overdense oscillon configurations. This plateau-like feature encodes the local conservation of energy in oscillons as compared to the decreasing energy density of the simulation box \(\langle\rho\rangle\propto a^{-3}\), so that the plateau in the distribution becomes more and more pronounced at later times. Note that while the oscillons remain highly subdominant in volume, they dominate the total energy budget. In particular, the fraction of the total energy contained in regions where the energy density exceeds the average energy density \(\langle\rho\rangle\) by a threshold factor 5, \[f=\frac{\int_{\rho>5\langle\rho\rangle}\rho\,dV}{\int\rho\,dV}\,, \tag{3.15}\] accounts for \(\sim 70\%\) of the total energy (a minimal numerical sketch of this and the previous diagnostics is given right after this list).
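A minimal numerical illustration (ours) of the diagnostics above, applied to a synthetic Gaussian random field standing in for a lattice snapshot; none of the numbers below are simulation outputs:

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 64, 30.0                            # toy grid; box in units of 1/M
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
kk = np.sqrt(k[:, None, None]**2 + k[None, :, None]**2 + k[None, None, :]**2)

# synthetic IR-dominated random field standing in for a lattice snapshot
chi = np.fft.ifftn(np.fft.fftn(rng.normal(size=(N, N, N)))
                   * np.exp(-(kk / 2.0)**2)).real

rms = np.sqrt(np.mean(chi**2) - np.mean(chi)**2)      # Eq. (3.12)

rho = chi**2                                          # stand-in energy density
f = rho[rho > 5 * rho.mean()].sum() / rho.sum()       # Eq. (3.15)

# isotropically binned spectrum, as in the definition (3.14)
P = np.abs(np.fft.fftn(chi) / N**3)**2
edges = np.linspace(0.0, kk.max(), 25)
idx = np.digitize(kk.ravel(), edges)
Pk = (np.bincount(idx, weights=P.ravel())[1:-1]
      / np.maximum(np.bincount(idx)[1:-1], 1))
print(f"rms = {rms:.3f},  f = {f:.3f},  first P_k bins: {Pk[:3]}")
```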
### Gravitational waves signal
In the previous section, we have seen how oscillons form in EC HI for a wide range of the parameter space compatible with observations, with different values of the non-minimal coupling \(\xi\) leading to different fragmentation dynamics. In this section, we explore whether fragmentation and oscillon formation lead to distinctive GWs signatures.
The anisotropies in the inflaton configuration are expected to source linear metric perturbations \(h_{ij}\) through the wave equation
\[\ddot{h}_{ij}+3H\dot{h}_{ij}-a^{-2}\nabla^{2}h_{ij}=2a^{-2}\Pi_{ij}^{\rm TT}\, \tag{3.16}\]
with
\[\Pi_{ij}^{\rm TT}=(\partial_{i}\chi\partial_{j}\chi)^{\rm TT} \tag{3.17}\]
the transverse-traceless (TT) part of the associated anisotropic energy-momentum tensor. Employing the GW module implemented in \(\mathcal{C}osmo\mathcal{L}attice\)[95], we can easily extract the energy density power spectrum of the GWs signal, namely [91]
\[\Omega_{\rm GW}=\frac{1}{3H^{2}}\frac{\mathrm{d}\rho_{\rm GW}}{\mathrm{d}\log k }\,\qquad\quad\text{with}\qquad\quad\rho_{\rm GW}=\frac{1}{4}\langle\dot{h}_{ij} \dot{h}_{ij}\rangle. \tag{3.18}\]
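For concreteness, the transverse-traceless projection in Eq. (3.17) can be implemented in momentum space with the standard projector \(\Lambda_{ij,kl}=P_{ik}P_{jl}-\frac{1}{2}P_{ij}P_{kl}\), \(P_{ij}=\delta_{ij}-k_{i}k_{j}/k^{2}\). The sketch below is our own toy implementation on a placeholder field; it is not the \(\mathcal{C}osmo\mathcal{L}attice\) module:

```python
import numpy as np

N, L = 32, 30.0                       # placeholder grid
dx = L / N
chi = np.random.default_rng(1).normal(size=(N, N, N))

grad = np.gradient(chi, dx)           # d_i chi
S = np.array([[np.fft.fftn(grad[i] * grad[j]) for j in range(3)]
              for i in range(3)])     # Fourier transform of d_i chi d_j chi

k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
kv = np.array(np.meshgrid(k, k, k, indexing='ij'))
k2 = np.sum(kv**2, axis=0)
k2[0, 0, 0] = 1.0                     # regulate the zero mode (no TT part there)
P = np.eye(3)[:, :, None, None, None] - kv[:, None] * kv[None, :] / k2

Pi_TT = (np.einsum('ikxyz,jlxyz,klxyz->ijxyz', P, P, S)
         - 0.5 * np.einsum('ijxyz,klxyz,klxyz->ijxyz', P, P, S))
Pi_TT[:, :, 0, 0, 0] = 0.0            # drop the (unphysical) zero mode

# sanity checks: transverse and traceless up to round-off
print(np.abs(np.einsum('ixyz,ijxyz->jxyz', kv, Pi_TT)).max())
print(np.abs(np.einsum('iixyz->xyz', Pi_TT)).max())
```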
The results of the simulations for the two benchmark scenarios considered in the previous section are shown in Fig. 7. We observe that the GWs spectrum grows initially with time, developing a main peak at low momenta. When fragmentation starts, the spectrum quickly rises, and, shortly after fragmentation ends, there appears a subdominant secondary peak at a higher frequency. Once the oscillons have formed, the spectrum stops growing, reflecting their individual character and roughly spherical symmetry.
Figure 6: Energy density histograms for the benchmark case \(\xi=5\cdot 10^{4}\), with the heights of the bins normalized to set the area of the histograms to 1. One can clearly distinguish the transition from the configuration before fragmentation (left), to the one slightly after oscillon formation (centre), where the distribution starts flattening, signalling the appearance of highly overdense regions with density contrasts \(\rho/\langle\rho\rangle>5\). As expected, the flatter part becomes more pronounced at later times (right), as the energy in the oscillons remains constant and the one in the background dilutes as matter.
The proper connection of the GW spectrum at the time of emission to that at present times requires the knowledge of the full cosmological history upon fragmentation, and in particular, of the average lifetime of the created oscillon configurations. Unfortunately, observing the fate of these very stable objects within a realistic scenario like the one under consideration requires not only extensive computational power but also a detailed lattice implementation of the Higgs couplings to other Standard Model degrees of freedom, a task that significantly exceeds the proof-of-principle scope of this paper (cf., however, Section 4). In order to estimate the current amplitude \(\Omega_{\rm 0,GW}\) and frequency \(f_{0}\) of the spectrum, we therefore follow a poor man's parametric approach based on two working approximations. First, we will presume the Universe to evolve adiabatically until the present day, ensuring entropy conservation. Second, we will consider that, once oscillons have formed, the associated equation-of-state parameter evolves gradually to that of radiation domination, the moment at which we assume the SM matter content to be completely thermalized. 5 With these assumptions, we get [99]
Footnote 5: Note that, although most likely true in a SM setting like the one under consideration [56, 96], the onset of radiation domination does not necessarily coincide with that of thermalization. In particular, the former requires only the effective energy dominance of relativistic particles, which might have, however, a non-thermal spectrum, see e.g. Refs. [97, 98].
\[f_{0}=\frac{k_{p}}{2\pi a_{0}}=\frac{k_{p}}{a_{\rm e}\rho_{\rm e}^{1/4}}e^{- \frac{1}{4}(1-3\bar{w})\Delta{\cal N}_{\rm RD}}\,4\cdot 10^{10}\,{\rm Hz}\, \tag{3.19}\]
\[\Omega_{\rm 0,GW}\ h^{2}\simeq 1.6\times 10^{-5}e^{-(1-3\bar{w})\Delta{\cal N}_{ RD}}\,\Omega_{\rm e,GW}\,, \tag{3.20}\]
with \(a_{0}\) the scale factor today, the quantities \(a_{\rm e}\), \(\rho_{\rm e}\) and \(\Omega_{\rm e,GW}\) denoting respectively the scale factor, the total energy density and the gravitational waves' energy fraction at the time of emission and \(\bar{w}\) the mean equation-of-state parameter in the \(e\)-folds interval \(\Delta{\cal N}_{\rm RD}\) between GW emission and the onset of radiation domination.
Figure 7: Gravitational waves spectra for our two benchmark scenarios. The first scales to grow are those related to the inhomogeneities arising during the initial growth of perturbations and the fragmentation process. At a later stage, we observe a secondary peak at higher momenta, due to oscillon formation and relaxation to a spherical shape. Afterwards, the growth stops, as there are no more strong dynamical inhomogeneities to produce GWs.
As usually happens in preheating scenarios, the resulting amplitude of the signal is commensurable with the sensitivity of current GWs experiments, but the peak frequency turns out to be \(\mathcal{O}(\text{GHz})\), lying therefore far away from the available observational window. As is explicit in Eq. (3.19), there are two main factors contributing to the redshift of the maximum available signal. On the one hand, the likely existence of a prolonged intermediate stage with an equation of state \(0<\bar{w}<1/3\). On the other hand, \(f_{0}\) will be smaller for higher values of the energy density at the time of formation. Nonetheless, a higher energy density during preheating implies also a smaller horizon and hence higher momenta in the spectrum, so that opposite contributions from \(k_{p}\) and \(\rho_{e}\) tend to cancel out.
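As a rough numerical illustration of Eqs. (3.19)-(3.20), the estimate below is ours, with all inputs (\(k_{p}/a_{\rm e}\), \(\rho_{\rm e}\), \(\bar{w}\), \(\Delta\mathcal{N}_{\rm RD}\)) assumed rather than taken from the simulations:

```python
import numpy as np

lam, xi = 1e-3, 5e4                          # assumed benchmark-like values
c = xi + 6 * 1.4e3**2
M = np.sqrt(2 * lam / c)                     # oscillation frequency (M_P = 1)

kp_over_ae = 5 * M                           # assumed physical peak momentum
rho_e = 2 * lam / (np.sqrt(c) + 2 * np.sqrt(2) * xi)**2  # ~ V(chi_end)
wbar, dN_RD = 0.0, 10.0                      # assumed matter-like stage

f0 = kp_over_ae / rho_e**0.25 \
     * np.exp(-0.25 * (1 - 3 * wbar) * dN_RD) * 4e10     # Eq. (3.19), in Hz
supp = np.exp(-(1 - 3 * wbar) * dN_RD)       # dilution factor in Eq. (3.20)
print(f"f0 ~ {f0:.1e} Hz,  Omega_GW suppression ~ {supp:.1e}")
# -> f0 of order 0.1-1 GHz, in line with the O(GHz) estimate in the text
```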
## 4 Towards preheating in the Standard Model
Now that we have established the presence of oscillons in the scalar sector of EC HI, let us turn our attention to the other SM components, namely the massive electroweak bosons and fermions. As shown in Refs. [54; 55; 56; 63], the generation of these two species is intrinsically intertwined, with the Bose enhancement of the former strongly depending on their perturbative decay into fermions.
The so-called _Combined preheating formalism_ developed in Ref. [55] (cf. also Ref. [56]) allows for a semi-analytic treatment of the gauge boson production in Higgs inflation, under several simplifying assumptions. First, it replaces the usual Higgs-gauge boson interactions following from the SM covariant derivatives with global couplings between the Higgs field and three scalar degrees of freedom playing the role of gauge bosons. This approximation is well-supported by several numerical studies, explicitly demonstrating the qualitative equivalence of global and Abelian interactions during the initial stages of preheating, where the non-linearities associated with gauge boson self-interactions can still be safely neglected [100; 101; 102]. Second, the formalism makes use of a WKB approximation in order to determine the production of gauge boson excitations at every semi-oscillation of the inflaton field. This well-established procedure takes into account the spontaneous and stimulated emission of particles at the bottom of the potential, as well as potential interference effects among semi-oscillations [103]. Third, _Combined preheating_ accounts for the potential decay of the produced gauge bosons into the SM quarks and leptons by considering effective decay widths proportional to the gauge boson masses, as done, for instance, in instant preheating scenarios [104]. This approximation explicitly neglects a plethora of annihilation processes, which are, however, subdominant at early times or low gauge boson number densities [56]. Altogether, the _Combined preheating_ assumptions give rise to a master, phase-averaged equation relating the number densities \(n_{k}(j)\) of each gauge boson field polarization at two consecutive zero crossings [56]
\[\left[\frac{1}{2}+n_{k}((j+1)^{+})\right]=(1+2C_{k}(j))\left[\frac{1}{2}+n_{k }(j^{+})\,e^{-\Theta(j)}\right]\,, \tag{4.1}\]
with \(C_{k}(j)\) a \(k\)-dependent IR window function depending on the small-amplitude behaviour of the corresponding gauge boson mass,
\[\Theta(j)=\int_{t_{j}}^{t_{j+1}}\Gamma(t)\,dt \tag{4.2}\]
the mean decay rate into fermions between two consecutive crossings \(t_{j}\) and \(t_{j+1}\) and \(\Gamma(t)\) the instantaneous rate evaluated on the background equations of motion. The explicit form
of these building functions can be obtained by transforming the usual (_scalarized_) Higgs-gauge boson interactions, \(\sim g_{i}^{2}/8\,h^{2}A^{2}\), into the Einstein frame, with \(g_{i}=g,g/\cos\,\theta_{w}\) for the \(A=W^{\pm}\) and \(Z\) bosons, \(g\) the \(SU(2)_{L}\) gauge coupling and \(\theta_{w}\) the weak mixing angle. Upon performing an additional field redefinition \(A\to\Omega A\) for canonically normalizing the resulting kinetic sector, we obtain an interaction term6
Footnote 6: For simplicity we will always refer to quantities related to the \(W^{\pm}\) bosons, although the same formulas, with the appropriate couplings, apply to the \(Z\) boson.
\[\frac{\mathcal{L}_{int}}{\sqrt{-g}}=\frac{1}{2}m^{2}A^{2}\,, \tag{4.3}\]
with
\[m^{2}=\frac{g^{2}}{4}\frac{h^{2}}{1+\xi h^{2}}=\begin{cases} \frac{g^{2}}{4}\frac{\chi^{2}}{1+\xi\chi^{2}}\,&\chi<\chi_{c}\,\\ \frac{g^{2}}{4\xi}\left[1-\exp\left(-\frac{2\xi|\chi|}{\sqrt{c}}\right)\right] \,&\chi\gg\chi_{c}\,,\end{cases} \tag{4.4}\]
an effective mass approaching the constant value \(g^{2}/(4\xi)\) in the large field regime and dropping to zero at each minimum-crossing of the inflaton field, where the adiabaticity condition \(\dot{m}/m^{2}<1\) is maximally violated and gauge boson production takes place. In this latter regime, the equation of motion for the \(A\)-field perturbations takes the form
\[\delta\ddot{A}_{\mathbf{k}}+3H\delta\dot{A}_{\mathbf{k}}+\left( \mathbf{k}^{2}/a^{2}+m^{2}(\chi)\right)\delta A_{\mathbf{k}}=0. \tag{4.5}\]
with
\[m^{2}(\chi)\simeq\begin{cases}\frac{g^{2}}{4}\chi^{2}\,&\text{for}\ \ \ \ \chi<\chi_{c}\,\\ \frac{g^{2}}{2\sqrt{c}}|\chi|\,&\text{for}\ \ \ \ \chi\gg\chi_{c}\,\end{cases} \tag{4.6}\]
depending only on the non-minimal coupling \(\xi\) through the constant \(c\), which, for a given value of \(\lambda\), is almost completely determined by the CMB normalization, cf. Eq. (2.20). As shown in Fig. 8, the violation of the adiabaticity condition \(\dot{m}/m^{2}<1\) following from the numerical solution of the background equations of motion happens generically at field values \(\chi_{ad}>\chi_{c}\), even though these two scales approach each other for increasing values of \(\xi\). Taking this into account, together with the fact that we only aim at providing a qualitative picture of this otherwise complicated dynamics, we will neglect in what follows the contribution of the upper term in the right-hand side of Eq. (4.6). In this limit, the scenario under consideration reduces conceptually to that studied in the metric formulation of the theory [54; 55; 56; 57]. By performing the same type of computations, the IR window function in Eq. (4.1) takes the form7[55]
Footnote 7: For the sake of simplicity, we neglect the slight differences among \(W^{\pm}\) and \(Z\) encoded in Lorentz invariant phase space factors, cf. Ref. [55] for details on this issue.
\[C(x)=\pi^{2}\left[\text{Ai}(-x^{2})\text{Ai}^{\prime}(-x^{2})+ \text{Bi}(-x^{2})\text{Bi}^{\prime}(-x^{2})\right]^{2}\,, \tag{4.7}\]
with Ai and Bi the Airy functions of first and second kind, \(x=k/(aM)\left(j/q\right)^{1/3}\), \(j\) the number of inflaton zero crossings and the parameter
\[q=\frac{\sqrt{c}g^{2}\chi_{\text{end}}}{8\pi\lambda}=\frac{cg^{2}}{16\pi \lambda\,\xi}\log\left(1+\frac{2\sqrt{2}\xi}{\sqrt{c}}\right)\,, \tag{4.8}\]
decreasing for increasing values of \(\xi\). Assuming that the same approximation holds in our case, gauge bosons will be produced with a typical scale \(k_{\rm gb}\sim q^{\frac{1}{3}}M\), which for our benchmark scenarios \(\xi=5\cdot 10^{4}\) and \(2.5\cdot 10^{5}\), corresponds roughly to \(k_{\rm gb}/M\sim 20,10\), respectively. The depletion of the bosons produced at a crossing is efficient whenever the factor \(\Theta\) in Eq. (4.1) is large enough to compensate for the growth of \(n_{k}\) due to parametric resonance. Taking into account that the specific form of the decay rate into SM particles is proportional to the mass of the bosons,
\[\Gamma=\frac{3g^{2}m}{16\pi}\,, \tag{4.9}\]
we can compare the two effects for different values of \(\xi\). On the one hand, since the maximum decay rate is proportional to the gauge boson masses, which scale as \(1/\sqrt{\xi}\), larger values of the non-minimal coupling lead to a lower maximum \(\Gamma\). On the other hand, increasing \(\xi\) substantially modifies the background dynamics, resulting in a longer period for the first few oscillations. To estimate the actual timescales on which the decay rate becomes inefficient, we can consider the maximum value of the Bose enhancement factor, occurring at \(x=0\), and compare it with the suppression factor \(e^{-\Theta}\), evaluating the decay rate on the homogeneous solution for the background. As shown in Fig. 9, in both our scenarios the decay rate is highly efficient for the first \(\mathcal{O}(10^{2}-10^{3})\) zero crossings, significantly exceeding the typical number of semi-oscillations needed for oscillon formation, \(\mathcal{O}(10-20)\). This results in two main effects. First, for a large number of oscillations, almost all the energy stored in the gauge bosons at each zero crossing is transferred to SM fermions, making this the dominant channel for energy transfer to SM particles other than the Higgs. Second, the SM gauge bosons cannot efficiently accumulate, so their parametric resonance is substantially delayed, preventing them from backreacting on the inflaton condensate during the early stages of its evolution.
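To make the bookkeeping concrete, the window function (4.7) and the recursion (4.1) can be iterated directly for a single mode. The sketch below is our own toy version: both the resonance parameter \(q\) and the decay exponent \(\Theta\) are assumed constants (with \(q\) of order the \(\xi=5\cdot 10^{4}\) estimate above), whereas in the paper \(\Theta\) is computed on the background solution:

```python
import numpy as np
from scipy.special import airy

def C(x):                                   # IR window function, Eq. (4.7)
    Ai, Aip, Bi, Bip = airy(-x**2)
    return np.pi**2 * (Ai * Aip + Bi * Bip)**2

q = 5e3                                     # assumed resonance parameter
Theta = 5.0                                 # assumed constant decay exponent
k_over_aM = 1.0                             # mode momentum in units of a*M

n = 0.0
for j in range(1, 201):                     # iterate Eq. (4.1)
    x = k_over_aM * (j / q)**(1 / 3)
    n = (1 + 2 * C(x)) * (0.5 + n * np.exp(-Theta)) - 0.5
print(f"n_k after 200 crossings: {n:.3f}")  # stays O(1): no accumulation
```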
Figure 8: Field values at which the adiabaticity condition \(\dot{m}/m^{2}<1\) is first violated in our two benchmark scenarios (orange), as compared to the homogeneous solution for the background (blue) and the crossover value \(\chi_{c}\) below which the Einstein-frame potential becomes quartic (red). Note that, although adiabaticity is always broken in the quadratic part of the potential, the value \(\chi_{ad}\) approaches \(\chi_{c}\) at higher \(\xi\).
Now that we have established that the decay into fermions efficiently suppresses the production of gauge bosons, it is natural to wonder how the interplay between the inflaton and other SM particles can affect the formation of oscillons. Based on our simple linear analysis it is clear that the amount of energy transferred to fermions through the gauge boson decays needs several oscillations to become important. For instance, in the metric case [57], this can take up to hundreds of oscillations. Taking into account that increasing the non-minimal coupling reduces the number of oscillations before fragmentation, we can infer that such a transfer becomes less efficient for larger \(\xi\), leaving the process of oscillon formation unaltered. However, it is important to point out that our previous analysis works only at a linear level, and does not take into account possible non-linear effects associated with the rapid tachyonic growth of inflaton perturbations, as well as the highly inhomogeneous state between fragmentation and oscillon formation. Although it would be interesting to perform a fully non-linear analysis, including both fermions and gauge bosons in our lattice, such a task presents some technical difficulties. First, although \(\mathcal{C}osmo\mathcal{L}attice\) can in principle support the implementation of the full Standard Model gauge group, this can only be done in the Jordan frame, where the covariant derivatives are linear in the fields. Second, even if the gauge fields are introduced as mere scalar fields [57; 64], the created particles will be produced with typical momenta much higher than those relevant for oscillons, therefore requiring simulations with very high resolution, far beyond the scope of this paper. In particular, the coverage of all momenta involved in the picture, especially at later times, would require at least \(N>1024\) lattice sites for realistic gauge couplings \(g^{2}\simeq 0.3\). Such simulations would be extremely
Figure 9: Interplay between the Bose enhancement factor for \(k=0\) and the suppression due to gauge boson decay into fermions for a fiducial gauge coupling \(g^{2}=0.3\), as a function of the number of zero-crossings \(j\). The factor \(\Theta\) is computed on the numerical solution for the background. Although the two effects cancel out after \(\mathcal{O}(10^{3})\) zero-crossings, the actual modes can start growing slightly before, once the proper recursive formula in Eq. (4.1) is taken into account. Nonetheless, we expect the gauge boson production to be heavily subdominant during the first tens of oscillations where fragmentation and oscillon formation takes place.
time-consuming, especially if all three gauge bosons are taken into account. 8
Footnote 8: Using a \(N=512\) lattice, we are able to confirm, however, that the impact of gauge bosons in the tachyonic amplification of Higgs excitations remains subdominant until oscillon formation for \(g^{2}\simeq\mathcal{O}(10^{-2})\), even if the decay into fermions is completely neglected and the depletion mechanism discussed above is absent.
Third, when introducing spinors on the lattice one encounters the infamous _fermion-doubling problem_ [105], leading to the appearance of spurious additional fermion copies. Although this limitation could be circumvented with an effective treatment along the lines of Ref. [57] (see also Ref. [106]), the problem of resolving the IR and UV momenta involved would still remain.
## 5 Conclusions and outlook
Among the plethora of inflationary models in the literature, Higgs inflation stands out as a rather minimalistic scenario, requiring no additional degrees of freedom beyond the Standard Model content while potentially allowing for a complete analysis of the preheating stage following inflation. In spite of its simplicity, the inclusion of a non-minimal coupling of the Higgs field to gravity breaks the usual degeneracy among metric-affine representations, with different formulations of gravity giving rise to different inflationary predictions and preheating dynamics.
In this paper, we have studied the preheating phase for Higgs Inflation in Einstein-Cartan gravity using fully-fledged numerical lattice simulations in \(3+1\) dimensions. We have found that this scenario allows for the formation of dense and spatially localized oscillon configurations, constituting up to \(70\%\) of the total energy density for suitable model parameters. These pseudo-solitonic objects survive for the entire simulation time, leading to a prolonged period of matter domination that modifies the post-inflationary history and therefore the minimum duration of the inflationary phase needed to solve the hot Big Bang problems. Furthermore, we have shown that such structures can source a significant gravitational wave signal, providing an alternative observational channel for this inflationary model, besides the usual stochastic background of primordial tensor perturbations generated during inflation. Unfortunately, as usually happens in preheating scenarios, the associated peak frequency turns out to be significantly larger than the one accessible by current and planned GWs experiments.
While the results of this paper have been explicitly derived assuming a simple Nieh-Yan interaction (2.14), they can be easily extended to the more general constant \(c\) scenarios within the EC multiverse9[34], including also scale-invariant extensions [52] or even TDiff generalizations [45, 107]. Indeed, the main condition for oscillon formation is effectively encoded in the field redefinition \(d\bar{\chi}/d\bar{h}\) entering Eq. (3.3) or, equivalently, in the kinetic function \(K(h)\) appearing in the Einstein-frame action (2.9). Provided this quantity is sufficiently close to one (but still larger), the inflaton field will be allowed to exceed the inflection point of the potential for a given number of oscillations, building up fluctuations through tachyonic instabilities and eventually fragmenting into oscillon configurations.
Footnote 9: For instance, a choice \(\xi_{vv}=\xi_{aa}=\xi\), \(c_{va}=0\) in Eq. (2.11) also translates into a constant \(c\) value,
\[c=\xi+6\xi_{\eta,{\rm eff}}^{2}\qquad\quad{\rm with}\qquad\quad\xi_{\eta,{\rm eff }}^{2}\equiv\xi^{2}+\frac{2}{3}\left(\frac{(\zeta_{h}^{a})^{2}}{c_{aa}}+\frac{ (\zeta_{h}^{v})^{2}}{c_{vv}}\right)\,, \tag{5.1}\]
allowing one to recover the phenomenology described in this paper for any set of parameters \(\zeta_{h}^{a}\), \(\zeta_{h}^{v}\), \(c_{aa}\), \(c_{vv}\) leading numerically to \(\xi_{\eta,{\rm eff}}^{2}\simeq\xi_{\eta}^{2}\) and commensurable \(\xi\) values.
Our results are only a first step towards the full characterization of the preheating process in Einstein-Cartan HI. In particular, it would be interesting to address several aspects not properly taken into account in this study:
1. _Oscillon lifetimes_: Understanding the lifetime of oscillons is indeed of utmost importance for determining the precise value of the inflationary observables to be confronted with observations. On top of that, oscillons could also generate gravitational waves when they decay, with longer lifetimes translating into higher chances of observing GWs within the frequency windows of current and future gravitational wave experiments. In all of our simulations, it is clear that oscillons have enough time to form before the quartic coupling becomes relevant. However, their long-time classical and quantum stability [108, 109, 110, 111], as well as their possible formation for higher values of \(\xi\), remain to be checked. Unfortunately, performing simulations for these scenarios turns out to be rather problematic, as the quartic potential affects mainly the momenta in the far UV regime, while the typical scale of oscillons is located in the IR. In this regard, it would be interesting to test whether the presence of quartic self-interactions can lead to parametric resonance effects within oscillons, along the lines of Ref. [110]. Future research could also explore the impact of the complete SM structure on the inflaton lifetime, accounting for rescattering effects induced by non-Abelian interactions [101, 102] or potential spike effects in longitudinal gauge degrees of freedom [58, 59, 60, 61, 62]. Note, however, that the Riemannian spikes advocated in the latter references are expected to be subdominant in our context. In particular, the combination of the uncertainty principle and the approximate conservation of energy during the first few oscillations translates roughly into a typical energy scale \(\Delta E_{\rm sp}\sim\Delta t_{\rm sp}^{-1}\sim\dot{\chi}_{0}/\chi_{c}\sim \sqrt{c\,V_{\rm end}}\sim\beta\sqrt{\lambda}\) for the particles created by the spike, with \(\Delta t_{\rm sp}\) the time needed to cross the spike, \(\dot{\chi}_{0}\) the field velocity at the minimum of the potential, \(V_{\rm end}\) the potential energy at the end of inflation and \[\beta\equiv\frac{\sqrt{c}}{\sqrt{c}+2\sqrt{2}\xi}\,.\] (5.2) While \(\beta\sim\mathcal{O}(1)\) in the metric formulation [61], this quantity is significantly smaller in the EC HI settings considered in this paper, \(\beta\sim\mathcal{O}(10^{-2}-10^{-3})\), leading to a transfer of energy through this mechanism of order \(\rho_{\rm sp}/V_{\rm end}\sim 1/(8\pi^{2})(\Delta E_{\rm sp})^{4}/V_{\rm end} \sim\lambda\,c\,\beta^{2}/(16\pi^{2})\sim\mathcal{O}(10^{-2}-10^{-3})\).
2. _Gravitational perturbations and primordial black holes_: Since oscillons represent well-localized objects with high overdensities, it is natural to wonder whether these quasi-spherical structures could potentially collapse into black holes, a possibility not accounted for in our simulations, where metric perturbations are completely disregarded. Assuming spherical symmetry, the strength of the Newtonian potential at the surface of a single oscillon of radius \(R=(2\pi/{\rm k}_{p})\), mass \(\mathcal{M}\), and core energy density \(\rho_{c}\) can be estimated as10[88] Footnote 10: This result comes from the fact that oscillons are expected to have a field amplitude at the core proportional to that of the oscillating homogeneous condensate at the time of backreaction, \(\chi_{c}\sim\chi_{\rm br}\), which results in \(\rho_{c}\sim\langle\rho\rangle_{\rm br}\). \[|\Phi|\sim\frac{\mathcal{M}}{8\pi R}\sim\frac{\rho_{c}R^{2}}{6}\sim H_{\rm br} ^{2}R^{2}\,\] (5.3) with \(H_{\rm br}\) the value of the Hubble parameter at the time of backreaction (fragmentation). For all the scenarios considered in this paper, this _compactness_ turns out to be
significantly smaller than one, \(|\Phi_{\rm EC}|\sim 10^{-3}-10^{-4}\), supporting our approximations (a rough numerical estimate is sketched after this list). We point out, however, that this estimate addresses only the typical oscillon sizes that we expect to find in our simulations; it is a priori possible, although unlikely given the magnitude of \(\Phi\), that some of them meet the condition for collapse. Moreover, if stable enough, oscillons could eventually cluster on long time scales [112, 113].
3. _Fermions in EC gravity_: As mentioned when introducing EC gravity, one of the main perks of this formulation is the possibility to naturally account for the coupling of fermions to gravity, thanks to the presence of a non-vanishing spin connection. However, the use of a theory with torsion inevitably leads to higher-order four-fermion and scalar-fermion interactions once torsion is integrated out [66]. Interestingly enough, these operators can be used, for instance, to produce singlet fermions that can play the role of dark matter, as done in Ref. [33]. Unfortunately, even if the universality of gravitational couplings to fermions is then assumed, one would need to take into account potential non-canonical kinetic terms for fermions, increasing the freedom of the model and thereby reducing its predictive power. As long as the scale introduced by these operators is higher than the inflationary one, we can expect their effect to be subdominant both in the running of \(\lambda\) and in the perturbative decay of the gauge bosons. Nonetheless, they might play an important role if non-perturbative effects are taken into account.
We leave the in-depth study of these interesting aspects for future work.
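As promised in item 2 above, a rough numerical version of the compactness estimate (5.3) is sketched here. It is ours, order-of-magnitude only, with assumed inputs mimicking the \(\xi=5\cdot 10^{4}\) benchmark (\(H_{\rm br}\) of order the post-inflationary Hubble rate, a typical radius \(R=2\pi/k_{p}\) with \(k_{p}\sim 5M\)):

```python
import numpy as np

lam, xi = 1e-3, 5e4                          # assumed values
c = xi + 6 * 1.4e3**2
M = np.sqrt(2 * lam / c)

H_br = np.sqrt(2 * lam / (np.sqrt(c) + 2 * np.sqrt(2) * xi)**2 / 3)  # ~ H_end
R = 2 * np.pi / (5 * M)                      # assumed k_p ~ 5 M (physical)
print(f"|Phi| ~ {(H_br * R)**2:.1e}")        # -> ~3e-4, within 1e-3 - 1e-4
```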
M. P. (ORCID ID 0000-0002-2387-5948) acknowledges the Fundação para a Ciência e a Tecnologia (FCT), Portugal, for the financial support to the Center for Astrophysics and Gravitation-CENTRA, Instituto Superior Técnico, Universidade de Lisboa, through the Project No. UIDB/00099/2020. M. P. also thanks the support of this agency through the Grant No. SFRH/BD/151003/2021 in the framework of the Doctoral Program IDPASC-Portugal. JR (ORCID ID 0000-0001-7545-1533) is supported by a Ramón y Cajal contract of the Spanish Ministry of Science and Innovation with Ref. RYC2020-028870-I. The authors acknowledge the Department of Theoretical Physics of the Universidad Complutense de Madrid for allowing access to the cluster facilities utilized for the research reported in this paper.
|
2303.07084 | The challenge of representation learning: Improved accuracy in deep
vision models does not come with better predictions of perceptual similarity | Over the last years, advancements in deep learning models for computer vision
have led to a dramatic improvement in their image classification accuracy.
However, models with a higher accuracy in the task they were trained on do not
necessarily develop better image representations that allow them to also
perform better in other tasks they were not trained on. In order to investigate
the representation learning capabilities of prominent high-performing computer
vision models, we investigated how well they capture various indices of
perceptual similarity from large-scale behavioral datasets. We find that higher
image classification accuracy rates are not associated with a better
performance on these datasets, and in fact we observe no improvement in
performance since GoogLeNet (released 2015) and VGG-M (released 2014). We
speculate that more accurate classification may result from hyper-engineering
towards very fine-grained distinctions between highly similar classes, which
does not incentivize the models to capture overall perceptual similarities. | Fritz Günther, Marco Marelli, Marco Alessandro Petilli | 2023-03-13T13:08:20Z | http://arxiv.org/abs/2303.07084v1 | The challenge of representation learning: Improved accuracy in deep vision models does not come with better predictions of perceptual similarity
###### Abstract
Over the last years, advancements in deep learning models for computer vision have led to a dramatic improvement in their image classification accuracy. However, models with a higher accuracy in the task they were trained on do not necessarily develop better image representations that allow them to also perform better in other tasks they were not trained on. In order to investigate the representation learning capabilities of prominent high-performing computer vision models, we investigated how well they capture various indices of perceptual similarity from large-scale behavioral datasets. We find that higher image classification accuracy rates are not associated with a better performance on these datasets, and in fact we observe no improvement in performance since GoogLeNet (released 2015) and VGG-M (released 2014). We speculate that more accurate classification may result from hyper-engineering towards very fine-grained distinctions between highly similar classes, which does not incentivize the models to capture overall perceptual similarities.
## 1 Introduction
Over the last decade, following the seminal work by Krizhevsky, Sutskever, and Hinton [25], computer vision models based on deep neural network architectures have become increasingly powerful, and nowadays achieve very high levels of performance [5, 36]. This performance is typically assessed on the very task used in model training, most often as the accuracy in image classification (using measures such as top-1 error or top-5 error [31]).
As these models achieve higher and higher performance in such scenarios, they also tend to become increasingly sophisticated and complex in terms of model architecture and the number of parameters to be estimated. However, this additional complexity does not necessarily imply that these models perform better _in general_, including on domains they are _not_ trained on: such an approach runs the risk of producing systems that are over-optimized for a particular (set of) tasks, without gaining much in terms of transfer and generalizability [13].
These aspects play an important role in machine learning, often discussed under the label of _representation learning_ [13]. However, the point is even more relevant when these systems are used as general-level vision models for research purposes. In that respect, an emerging line of research in the domains of computational neuroscience and cognitive science has started to investigate and employ computer vision models (originally designed and trained for image classification) as models of human visual representation and processing, with very promising results from recent studies [3, 14]. These works also provide us with rich, large-scale datasets of human behavioral data that allow us to investigate to what extent current computer-vision models can serve as general-level vision models, with much wider scientific applications than being pure image classifiers [8, 24, 26]. Following these developments, in the present study we systematically examine which models perform best when tested against a battery of behavioral datasets, and whether such models also turn out to be the most complex and best-performing image classifiers.
## 2 Related Work
In the development of language models, human behavioral data have long been established as a gold standard for model evaluation (e.g. [1]). The most prominent examples are ratings of word similarity, with widely-used datasets such as WordSim353 [12], SimLex999 [18], or MEN [4].
Analogously, ratings of image similarity are widely employed to evaluate and compare the performance of computer vision models. This includes pairs of different naturalistic images [17, 23, 27], as well as comparisons between real images and their distorted versions [42]. In a recent study, Roads and Love collected similarity ratings for a very large collection of 50,000 ImageNet images, which were not only used for evaluation but also to enrich computer vision with participant-sourced information [30].
More recently, Günther et al. [14] released a collection of several large-scale data sets, comprising rating data as well as on-line processing data in the form of response times, which were used to evaluate a VGG-based vision model [6]. These will constitute the gold standard datasets for our present study, where we systematically evaluate the performance of a wide range of models against data that are cognitively relevant, but relatively atypical for the computer vision domain, and far from the tasks on which systems are typically optimized.
## 3 Datasets
We considered the following metrics (see [14] for detailed descriptions of the data collection procedures):
* **image similarity ratings [IMG]** for 3,000 pairs of naturalistic ImageNet images. Data were collected from 480 participants, with 30 observations per image pair.
* **_visual_ word similarity ratings [WORD]** for 3,000 word pairs (the image labels of the aforementioned 3,000 image pairs), where participants were asked to judge how similar the objects denoted by the words (i.e., the word referents) _look_. Thus, unlike other word-based ratings [4, 12, 18], these data focus on the visual domain. Data were collected from 480 participants, with 30 observations per word pair.
* **typicality ratings [TYP]** for 7,500 word-image pairs (1,500 sets of an image label and five images tagged with that label), where participants were asked to indicate the most and least typical image for the category denoted by the presented label. Data were collected from 902 participants, with 30 observations per word-image pair.

All ratings were collected using the best-worst method [19], so participants were always presented with a set of stimuli and asked to pick the most and least relevant for the given task. Responses were then scored on a continuous scale using the Value learning algorithm [19]. As a result, the datasets contain exactly one rating score between 0 (completely dissimilar) and 1 (identical) for each word pair in the WORD dataset and each image pair in the IMG dataset, and one score between 0 (very atypical) and 1 (very typical) for each word-image pair in the TYP dataset. Examples of items with very high and very low ratings are presented in Figure 1.
* **Processing time data**
  * **discrimination task [DIS]** for the same 3,000 image pairs as the IMG dataset. In a discrimination task, two stimuli (here: images) are presented in very rapid succession, and participants have to indicate whether they are identical or different by pressing one of two buttons (see Figure 2, upper panel, for a schematic representation of an experimental trial). Responses are typically _slower_ for more visually similar stimuli, which are harder to discriminate from the actual stimulus. Data were collected from 750 participants, with 30 observations per image pair.
  * **priming task [PRIM]** for the same 3,000 image pairs as the IMG and DIS datasets. In a priming task, two stimuli (here: images) are presented in quick succession, and participants have to perform a task on the second image only (here: judge whether a real or scrambled image has been presented by pressing one of two buttons); see Figure 2, lower panel, for a schematic representation of an experimental trial. Responses are typically _faster_ when the target was preceded by a more visually similar stimulus, which primes (= facilitates processing of) the target. Data were collected from 750 participants, with 30 observations per image pair.

The target variable in these processing time studies is the mean response time for each image pair, after removing erroneous trials and outlier trials with excessively slow or fast responses.
All datasets are publicly available in an OSF repository associated with the original study [14] at [https://doi.org/10.17605/OSF.IO/QVW9C](https://doi.org/10.17605/OSF.IO/QVW9C).
## 4 Vision Models
### Models employed
For this study, we considered all pre-trained vision models available in the _MatConvNet_ [40] and _Deep Learning Toolbox_ ([https://github.com/matlab-deep-learning/MATLAB-Deep-Learning-Model-Hub](https://github.com/matlab-deep-learning/MATLAB-Deep-Learning-Model-Hub)) packages for MATLAB. A full list of models is provided in Table 1.
### General setup: Image and prototype representations
In line with previous studies [2, 3, 14, 28], we extracted the activation values in each convolutional and fully-connected layer of a model for a given input image (i.e., image embeddings) as representations for that image. In addition, we constructed prototype vectors for image labels as the centroid of 100-200 image embeddings of images tagged with that label (using the very same method presented in [14, 28]). For each image label, we obtain such a prototype representation for each layer of each considered model.
We used the cosine similarity metric to compute similarities between these image embeddings (at the same layer of the same model). In this manner, we can obtain a metric for the similarity between two individual images (for the IMG, DIS and PRIM datasets), the overall visual similarity between two categories denoted by their respective image labels (for the WORD dataset), and the similarity between an individual image and its category (for the TYP dataset), at each layer of each model.

Figure 1: Examples of items with very high and very low rating values in the individual rating tasks. _Upper panel:_ image similarity ratings [IMG]; _middle panel:_ word similarity ratings for the visual similarity between the objects denoted by the words [WORD]; _lower panel:_ typicality ratings [TYP].

Figure 2: Schematic representations of experimental trials in the processing time paradigms. _Upper panel:_ the discrimination task [DIS], in which participants have to decide whether the second image (the target) is identical to the first; _lower panel:_ the priming task [PRIM], in which participants have to decide whether the second image (the target) is a real image or a scrambled one. The behavioral variable of interest is the time until a response is made to the target.
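To make this setup concrete, the following is a minimal sketch of the similarity pipeline (written in Python/NumPy for illustration, although the models above were accessed through MATLAB; the `embed(image, layer)` function, standing in for a forward pass that returns one layer's activation vector, is a hypothetical placeholder).

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def prototype(embeddings):
    # Centroid of the embeddings of the images tagged with one label.
    return np.mean(np.stack(embeddings), axis=0)

def image_pair_similarity(embed, img_a, img_b, layer):
    # Used for the IMG, DIS, and PRIM datasets.
    return cosine(embed(img_a, layer), embed(img_b, layer))

def label_pair_similarity(embed, images_a, images_b, layer):
    # Used for the WORD dataset: similarity of two category prototypes.
    proto_a = prototype([embed(im, layer) for im in images_a])
    proto_b = prototype([embed(im, layer) for im in images_b])
    return cosine(proto_a, proto_b)

def typicality(embed, image, category_images, layer):
    # Used for the TYP dataset: image-to-prototype similarity.
    proto = prototype([embed(im, layer) for im in category_images])
    return cosine(embed(image, layer), proto)
```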
## 5 Results
Since relations between the model-derived similarities and the behavioral outcome variables are mostly non-linear [14], performance was assessed using Spearman rank correlations. All predictors (i.e., similarities based on each layer of each model) were ranked in terms of performance on each behavioral dataset, and these ranks were used to calculate three general-level evaluation metrics:
* The **rating performance** as the mean rank across the three rating datasets (IMG, WORD, and TYP)
* The **processing time performance** as the mean rank across the two processing-time datasets (DIS and PRIM)
* The **overall performance** as the mean rank across all behavioral datasets (compare [1, 15])
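The ranking procedure itself is straightforward; the sketch below (Python/SciPy, with hypothetical dictionaries `predictions` and `behavior` holding each layer's similarity scores and the behavioral targets per dataset) illustrates it, including the sign flip for PRIM, where a more negative correlation is better.

```python
import numpy as np
from scipy.stats import spearmanr

DATASETS = ["IMG", "WORD", "TYP", "DIS", "PRIM"]

def evaluate(predictions, behavior):
    # Spearman correlation of each layer's predictions with each dataset.
    rho = {name: {ds: spearmanr(pred[ds], behavior[ds]).correlation
                  for ds in DATASETS}
           for name, pred in predictions.items()}
    names = list(rho)
    ranks = {n: {} for n in names}
    for ds in DATASETS:
        # Higher is better, except PRIM, where more negative is better.
        scores = np.array([-rho[n][ds] if ds == "PRIM" else rho[n][ds]
                           for n in names])
        order = np.argsort(np.argsort(-scores)) + 1  # rank 1 = best
        for n, r in zip(names, order):
            ranks[n][ds] = int(r)
    return {n: {"rating": np.mean([ranks[n][d] for d in ("IMG", "WORD", "TYP")]),
                "processing": np.mean([ranks[n][d] for d in ("DIS", "PRIM")]),
                "overall": np.mean([ranks[n][d] for d in DATASETS])}
            for n in names}
```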
The results for the best-performing layers for each evaluation metric are displayed in Table 2. We include the best-performing layer in the paper by Günther et al. [14] (VGG-F, fully-connected layer 6) as a reference condition. Note that, for the PRIM dataset, participants tend to respond _faster_ (that is, with lower response times) if the two images are more similar; therefore, the target metric here is a _more negative_ correlation.
As can be seen in Table 2, the overall best-performing representations (i.e., the model estimates that are most associated with behavioral variables) are provided by the GoogLeNet model, more specifically one of the representations in the 5th layer of the model (5a_3x3_reduce). These representations are also best-performing when it comes to predicting the arguably most fundamental types of behavioral data, similarity judgments within a given modality (i.e., between two different images [IMG] and between two different categories [WORD]).
Focusing only on the explicit rating data ([IMG], [WORD], and [TYP]), the best-performing representations are provided by the 7th layer (a fully-connected layer) of the VGG-M-1024 variant, closely followed by the same layer of the VGG-M-2048 variant.
\begin{table}
\begin{tabular}{l r r r r r} \hline \hline model & \# layers & param. (mio.) & acc. & year & ref. \\ \hline AlexNet & 8 & 61.0 & 57.4 & 2012 & [25] \\ CaffeNet & 8 & 61.0 & 57.4 & 2014 & [22] \\ DarkNet-19 & 19 & 20.8 & 74.0 & 2017 & [29] \\ DarkNet-53 & 53 & 41.6 & 76.5 & 2017 & [29] \\ DenseNet-201 & 201 & 20.0 & 75.9 & 2017 & [20] \\ EfficientNet B0 & 82 & 5.3 & 74.7 & 2019 & [39] \\ GoogLeNet & 22 & 7.0 & 66.3 & 2015 & [37] \\ Inception-ResNet-v2 & 164 & 55.9 & 79.6 & 2017 & [36] \\ Inception-v3 & 48 & 23.9 & 77.1 & 2016 & [38] \\ MobileNetV2 & 53 & 3.5 & 70.4 & 2018 & [32] \\ NASNet-Mobile & \(*\) & 5.3 & 73.4 & 2018 & [44] \\ ResNet-18 & 18 & 11.7 & 69.5 & 2016 & [16] \\ ResNet-50 & 50 & 25.6 & 74.5 & 2016 & [16] \\ ResNet-101 & 101 & 44.6 & 76.0 & 2016 & [16] \\ ResNet-152 & 152 & 60.3 & 77.0 & 2016 & [16] \\ ShuffleNet & 50 & 1.4 & 63.7 & 2018 & [43] \\ SqueezeNet & 18 & 1.2 & 55.2 & 2016 & [21] \\ VGG-16 & 16 & 138.3 & 71.5 & 2014 & [34] \\ VGG-19 & 19 & 143.7 & 71.3 & 2014 & [34] \\ VGG-F & 8 & 60.8 & 58.9 & 2014 & [6] \\ VGG-M & 8 & 102.9 & 62.7 & 2014 & [6] \\ VGG-M-128 & 8 & 82.7 & 59.2 & 2014 & [6] \\ VGG-M-1024 & 8 & 87.2 & 62.2 & 2014 & [6] \\ VGG-M-2048 & 8 & 92.5 & 62.9 & 2014 & [6] \\ VGG-S & 8 & 102.9 & 63.3 & 2014 & [6] \\ Xception & 71 & 22.9 & 78.2 & 2017 & [7] \\ \hline \hline \end{tabular}
\end{table}
Table 1: Overview over the models investigated, including their number of layers, number of parameters, accuracy (measured as top-1 accuracy in the ImageNet classification task ILSVRC2012), and references to the papers introducing the models.
Although these representations perform slightly worse for the [IMG] dataset than the best-performing GoogLeNet layer (and very marginally worse for the [WORD] dataset), they make up for this with near top-level performance on the [TYP] dataset (where the 50th convolutional layer of the DarkNet-53 model is the top performer). However, these models fall behind on the processing time data, with a mean rank of 240.
When focusing only on the processing time data ([DIS] and [PRIM]), the 6th layer (again, a fully-connected layer) of the VGG-M model (standard variant) performs best, with near top-level performance in both individual datasets (the top performers being a layer of the EfficientNet B2 model and a layer of the ResNet-50 model, respectively). However, in contrast to the 7th layers of the VGG-M-1024 and VGG-M-2048 variants, these representations fall behind on the rating data, with a mean rank of 157.
### Comparison with model characteristics
In an additional step, we assessed the relation between the characteristics of a model (more specifically, their number of parameters and their top-1 classification accuracy [31]; see Table 1) and its performance on the behavioral datasets tested here. To this end, we equated the overall model performance with the performance of its best-performing layer, as measured by the mean rank.
We estimated two separate non-linear statistical models (GAMs; [41, 11]), modelling mean rank as a function of model accuracy and number of parameters, the results of which are depicted in Figure 3. Note that a _lower_ mean rank indicates better performance. As can be seen in these plots, medium levels of classification accuracy tend to be associated with better performance against behavioral data (with two local minima around 65% and around 75%). Regarding the number of parameters, either very low or very high numbers tend to be associated with better model performance; however, a closer inspection of the individual data points indicates that the best-performing models all have a rather low number of parameters.
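As a sketch of this analysis (assuming the third-party `pygam` package; the five accuracy/mean-rank pairs below are taken from Tables 1 and 2 for illustration only, not the full set of models underlying Figure 3):

```python
import numpy as np
from pygam import LinearGAM, s

# (top-1 accuracy, overall mean rank of best layer), from Tables 1 and 2:
# GoogLeNet, DarkNet-19, VGG-M, VGG-F, VGG-M-1024.
acc = np.array([[66.3], [74.0], [62.7], [58.9], [62.2]])
mean_rank = np.array([38.0, 43.0, 98.2, 93.8, 108.6])

gam = LinearGAM(s(0)).fit(acc, mean_rank)          # one smooth term of accuracy
grid = gam.generate_X_grid(term=0)
partial = gam.partial_dependence(term=0, X=grid)   # curve of the kind shown in Fig. 3
```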
## 6 Discussion
### Implications of the results
In the present study, we investigated which representations obtained from different computer-vision models best predict a battery of five large-scale behavioral datasets, including both rating data and processing time data. We find that the overall best-performing models are in fact quite "old" models, given the pace of the research cycle within the field: a layer of the GoogLeNet model (published 2015, [37]) displays the overall highest performance across all five datasets, and different layers (of different variants) of the VGG-M model (published 2014, [6]) display the best overall performance for the rating data and the processing time data, respectively.
\begin{table}
\begin{tabular}{l l l|r r r r r|r r|r} row & model & layer & IMG & WORD & TYP & DIS & PRIM & rating & processing & overall \\ \hline
1 & GoogLeNet & 5a\_3x3\_reduce & **1 (0.774)** & **1 (0.666)** & 35 (0.361) & 65 (0.207) & 88 (-0.088) & 12.3 & 76.5 & **38.0** \\
2 & DarkNet-19 & conv14 & 9 (0.740) & 81 (0.646) & 39 (0.359) & 43 (0.211) & 43 (-0.095) & 43.0 & 43.0 & 43.0 \\
3 & GoogLeNet & 5a\_3x3 & 41 (0.707) & 53 (0.649) & 95 (0.339) & 42 (0.211) & 11 (-0.105) & 63.0 & 26.5 & 48.4 \\
4 & VGG-M-1024 & fc7 & 8 (0.741) & 11 (0.660) & 4 (0.389) & 137 (0.199) & 383 (-0.066) & **7.7** & 260.0 & 108.6 \\
5 & VGG-M-2048 & fc7 & 15 (0.734) & 21 (0.657) & 3 (0.392) & 185 (0.192) & 301 (-0.071) & 13.0 & 243.0 & 105.0 \\
6 & VGG-M & fc6 & 36 (0.714) & 288 (0.626) & 147 (0.324) & 12 (0.222) & 8 (-0.107) & 157.0 & **10.0** & 98.2 \\
7 & DarkNet-53 & conv50 & 114 (0.648) & 17 (0.658) & **1 (0.400)** & 328 (0.176) & 56 (-0.092) & 44.0 & 192.0 & 103.2 \\
8 & EfficientNet B2 & B12-Dconv2d-P & 47 (0.702) & 42 (0.651) & 160 (0.322) & **1 (0.231)** & 129 (-0.083) & 83.0 & 65.0 & 75.8 \\
9 & ResNet-50 & Res5a-Branch2b & 76 (0.679) & 113 (0.643) & 134 (0.327) & 80 (0.205) & **1 (-0.128)** & 107.7 & 40.5 & 80.8 \\
10 & VGG-F & fc6 & 24 (0.721) & 244 (0.63) & 176 (0.316) & 7 (0.224) & 18 (-0.102) & 148 & 12.5 & 93.8 \\ \end{tabular}
\end{table}
Table 2: The best-performing models across the different datasets, arranged by overall performance (the three top models in rows 1–3), rating performance (the two top models in rows 4–5), processing time performance (the top model in row 6), and performance in the individual datasets (the top models in row 1, as well as rows 7–10). The first number indicates the rank (ranging from a top value of 1 down to the total number of layer representations evaluated), the second number in brackets the Spearman correlation. The best-performing model in Günther et al. [14] (VGG-F, layer fc6) is listed as a baseline (row 10).
Note that the differences in performance between the individual representations are meaningful and not trivial: for example, the difference in performance on the [IMG] dataset between the overall best GoogLeNet layer (0.774) and the overall second-best DarkNet-19 layer (0.740) is already more than three percentage points.
Over the last years, a lot of effort has gone into developing systems that outperform these "older" models. With respect to the task these models are designed for - most prominently, image classification - this effort has achieved impressive successes: as can be seen in Table 1, the top-1 accuracy on the ILSVRC 2012 validation data [31] has increased dramatically, from around 60% in 2014/2015 to around 80%. In comparison, GoogLeNet (66.3%) and especially VGG-M (around 60% for all variants) fall at the lower end of this scale. This, however, reveals an interesting rift with respect to model performance: even though more recent models get better and better at their target tasks, this improvement in classification accuracy does not go along with improvements in predicting other types of data (in fact the contrary; compare Figure 3, upper panel). This is not to say that more recent models show low performance on this type of data: representations from recent and highly accurate models like DarkNet-19 [29] are among the best-performing representations available. The critical point, however, remains that the strong improvement in classification accuracy has not been accompanied by an _improvement_ in predicting other types of data.
On the other hand, we find no clear connection between model complexity and top performance in the behavioral dataset: The GoogLeNet model is relatively small in terms of parameters (7 mio.) and comes with an intermediate number of layers (22), while the VGG-M models are quite large (around 90 to 100 mio. parameters) but have only a few layers (8). Therefore, one can neither conclude that a model needs to be very large and complex for top-level performance on behavioral data (consider especially the better performance of the VGG-M model vis-a-vis the conceptually and architecturally similar VGG-16 and VGG-19 models), nor that it needs to be particularly small and efficient (compare also Figure 3, lower panel).
At this point, we can only speculate _why_ more recent and more classification-accurate models do not perform better in accounting for behavioral data. One possible explanation may be that the models are optimized to predict a very _specific_ image class, with only exact matches counting as a hit when calculating accuracy - the mis-classification of a _spotted salamander_ as a _European fire salamander_ is treated as a miss in the same way a mis-classification as a _toaster_ is. This may lead the models to weight relatively specific details to a similar or maybe even larger degree than the overall structure/"gestalt" of the depicted object. Human judgments and responses, on the other hand, are more driven by these general-level similarities (e.g. [17]) than by details (even if those are very informative for classification); this might explain the observed discrepancy between classification accuracy and performance on behavioral datasets. However, we want to stress again that this is speculation on an open question, and more research is necessary to properly investigate and explain this discrepancy.
### Limitations and future directions
At this point, we need to emphasize that all the issues discussed so far are based on the results of our evaluation, and are therefore necessarily restricted to the models analysed here. However, there may well be models we did not consider here that contradict our findings (i.e., a high-accuracy model that simultaneously has a higher performance on behavioral data than the best-performing models identified here). In fact, in the context of successful transfer learning, we would consider this highly desirable, and hope that our study can give an impetus to systematically consider behavioral data in the search for an overall well-performing model.

Figure 3: The relation between a model's accuracy (upper panel) or number of parameters (lower panel) and its performance across all behavioral datasets (measured as mean rank; the graphs show the partial residuals of a GAM analysis of this outcome variable). Each individual data point represents the best-performing layer of one of the models tested here.
While one may dismiss the behavioral data analysed here as not relevant for evaluating the performance of computer vision models, we argue that at the very least recognizing which images are more or less similar to one another should be considered one of the core prerequisites for a general-level vision model, analogous to semantic models predicting semantic similarity and relatedness data in the NLP domain [1, 4, 12, 18].
In general, a desirable direction for future work in the field would be to develop general-level models that do not only excel in one particular task, but perform well across a range of different tasks. Ideally, in the spirit of successful transfer learning, this would not simply mean _optimizing_ a single model for a range of different tasks, but instead _testing_ such a model on a battery of tasks it was not optimized for [35]. Following up on our suspicion that the lack of improvement in representation learning could be the result of hyper-engineering to distinguish very specific (and somewhat arbitrary) categories, we speculate that possible routes of advancement towards model representations that better capture a general similarity structure could be as follows: On the one hand, the training objective of the models could be altered to not only consider _exact_ hits among a set of candidate categories, but to also partially reward _close_ hits, for example based on their word embedding similarity or their WordNet distance to the correct target (thus rewarding the classification of a _poodle_ as a _dalmatian_ or as a _dog_ more than as a _Persian cat_, and that more than as a _pillow_; see also [9]). On the other hand, the training sets of the models could be altered to more closely approximate human visual experience rather than over-representing certain categories [10], or to include more than one correct label per image [33].
We argue that such developments would be interesting from an engineering/transfer learning viewpoint (since a successful general-level model could be applied to new tasks that it was not originally optimized for), but also for the application of such systems as models of human visual representations in cognitive (neuro)science.
## 7 Data availability
Data and the analysis script for this study are available at [https://osf.io/sx5u3/?view_only=09c05b84a52246d5b8b061cbbee10350](https://osf.io/sx5u3/?view_only=09c05b84a52246d5b8b061cbbee10350).
|
2307.15177 | Assessing quantum dot SWAP gate fidelity using tensor network methods | Advanced tensor network numerical methods are used to explore the fidelity of
repeated SWAP operations on a system comprising 20-100 quantum dot spin qubits
in the presence of valley leakage and electrostatic crosstalk. The fidelity of
SWAP gates is largely unaffected by Zeeman splitting and valley splitting,
except when these parameters come into resonance. The fidelity remains
independent of the overall valley phase for valley eigenstates, while for
generic valley states, some minor corrections arise. We analyze the fidelity
scaling for long qubit chains without valley effects, where crosstalk
represents the only error source. | Jacob R. Taylor, Nathan L. Foulk, Sankar Das Sarma | 2023-07-27T20:11:51Z | http://arxiv.org/abs/2307.15177v2 | # Assessing quantum dot SWAP gate fidelity using tensor network methods
###### Abstract
The SWAP gate facilitates the exchange of quantum states between qubits and is integral to quantum algorithms. We utilize advanced tensor network methods to explore the fidelity for repeated SWAP operations on a system comprising 20 to 100 quantum dot spin qubits. We incorporate valley states, valley splitting, spin-valley coupling, Zeeman splitting, and crosstalk. The fidelity of SWAP gates is largely unaffected by Zeeman splitting and valley splitting, except when these parameters come into resonance. In addition to confirming that fidelity is positively impacted by the larger exchange couplings \(J_{\text{SWAP}}\) in terms of the residual exchange \(J_{0}\) and that spin-valley coupling negatively impacts fidelity, we also show that for valley eigenstates, the fidelity remains independent of the valley phase, while for generic valley states some minor corrections arise. We also analyze the fidelity scaling for long qubit chains without valley effects, where crosstalk represents the only error source.
## I Introduction
A standard operation in quantum computing is the SWAP gate, which allows for the exchange (or "swapping") of the quantum states between two qubits. The SWAP gate has important applications in quantum computing, such as in quantum error correction [1], measurement schemes [2], and quantum state engineering [3]. The root \(\sqrt{\text{SWAP}}\) is entangling and thus, in combination with arbitrary single-qubit gates, allows for the implementation of general unitary operations, sufficient for universal quantum computation [4]. This fact, combined with the SWAP gate's ubiquity within quantum error correction, makes the creation of high fidelity SWAP gates essential for building a quantum computer on any platform.
Si-based quantum dot spin qubits have emerged as a promising candidate for realizing quantum computers due to their long coherence times [5; 6]. Both \({}^{28}\)Si and \({}^{30}\)Si are common spin-0 isotopes, and thus it is possible to remove the decoherence arising from nuclear spin noise by isotopic purification [7]. In addition, electrical gate operations implemented directly from the Heisenberg interaction between different qubits, such as SWAP gates, have durations on the order of 1 ns [8]. These long coherence times arising from such isotopic purification, combined with the available short gate operation times, make Si especially well-suited for hosting qubits. Most importantly, silicon-based qubits can be easily integrated into the existing semiconductor industry, allowing for more straightforward scalability. A silicon-based quantum computer platform could potentially host millions of qubits in a small chip similar to existing CMOS-based integrated circuits, and there has been spectacular recent progress in producing scalable multiqubit Si circuits [9; 10]. It is therefore both timely and important to consider SWAP gates in large spin qubit systems.
Tensor networks provide a framework for representing many-body quantum states and operators in a computationally compact and efficient way, allowing for accurate simulations of large quantum systems that are intractable with direct methods [11]. Applying tensor-network time-evolution methods such as TEBD [12] or TDVP [13] to matrix product states (MPS) makes it possible to accurately approximate the dynamics of interacting quantum systems and to investigate model-intrinsic sources of error. Systems of hundreds of qubits that would be utterly intractable through direct simulation can often be represented efficiently by restricting one's system to an approximate low-entanglement subspace using tensor networks.
Here we seek to extend previous work on the effects of valley states on spin qubit devices by directly including them within our simulation. Previous work on the fidelity of sequences of SWAP gates on a spin qubit chain has examined the effects of charge noise and dissipation on the fidelity of SWAP gates [14; 15]. Such work did not directly include dynamical valley states or their interactions and was done only on small-scale systems due to the computational constraints of exact diagonalization. We use state-of-the-art tensor network methods to accurately model the effect of the initial valley state and different experimentally relevant parameters on the fidelity of a sequence of chained SWAP gates. Our work can address tens to hundreds of spin qubits, in contrast to all earlier works on the subject.
We first introduce our spin qubit model and describe the numerical methods used to perform the calculations. We then present the results of those numerical calculations. We demonstrate the effect of valley splitting, Zeeman splitting, and SWAP exchange strength on the fidelity of SWAP operations. We explore the effects of the spin-valley coupling and its phase on different initial valley states. We conclude with an investigation into the single gate fidelity scaling up to 100 spin qubits isolating the influence of crosstalk.
Model
We label the basis states of our model by the valley and spin degrees of freedom. We consider the lowest two valley states, corresponding to \(k=\pm z\), which we label \(|\pm\rangle\), along with the spin states within the valley \(|\uparrow\rangle\) and \(|\downarrow\rangle\). We consider exchange-coupled spin qubits [4], where the spin states serve as our computational basis.
We model the system as a one-dimensional (1D) spin chain with both valley and spin degree of freedom using the following Hamiltonian:
\[H=\sum_{n=1}^{L-1}J_{n}\left(\mathbf{\sigma}_{n}\cdot\mathbf{\sigma}_{n +1}+1\right)\left(\mathbf{\tau}_{n}\cdot\mathbf{\tau}_{n+1}+1\right)+\\ h\sum_{n}^{L}\sigma_{n}^{z}+\Delta\sum_{n}^{L}\tau_{n}^{z}+\\ \frac{\gamma_{1}}{2}\sum_{n}^{L}\left(\tau_{n}^{x}\sigma_{n}^{x}+ \tau_{n}^{y}\sigma_{n}^{y}\right)+\frac{\gamma_{2}}{2}\sum_{n}^{L}\left(\tau_{n }^{y}\sigma_{n}^{x}-\tau_{n}^{x}\sigma_{n}^{y}\right) \tag{1}\]
where \(L\) is the number of qubits in the spin chain, \(h\) is the spin Zeeman splitting, \(\Delta\) is the valley splitting and \(\gamma_{1}\),\(\gamma_{2}\) are the real and imaginary parts of the spin-valley coupling \(\gamma=\gamma_{1}+i\gamma_{2}\). \(\mathbf{\sigma}_{n}=(\sigma_{n}^{x},\sigma_{n}^{y},\sigma_{n}^{z})\) is the \(n^{\text{th}}\) site Pauli vector in the spin basis, while \(\mathbf{\tau}_{n}\) is the same but in the valley basis.
The role of the spin-valley coupling becomes clear in the matrix representation of the single-site Hamiltonian,
\[H_{n}=\begin{pmatrix}h+\Delta&0&0&0\\ 0&h-\Delta&\gamma&0\\ 0&\gamma^{*}&\Delta-h&0\\ 0&0&0&-\Delta-h\end{pmatrix}.\]
Coupling between the \(|-\downarrow\rangle\) and \(|+\uparrow\rangle\) states is assumed to be negligible. It can be shown that \(\gamma\) is proportional to the valley splitting, and we can reasonably assume \(\frac{|\gamma|}{\Delta}\sim 1/500\)[16].
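As a quick numerical illustration (a NumPy sketch, not part of our simulation pipeline), the single-site Hamiltonian can be built and diagonalized directly; the basis ordering \(\{|+\uparrow\rangle,|-\uparrow\rangle,|+\downarrow\rangle,|-\downarrow\rangle\}\) is read off from the diagonal entries above.

```python
import numpy as np

def single_site_hamiltonian(h, Delta, gamma):
    # Basis {|+up>, |-up>, |+down>, |-down>}; gamma couples the
    # middle pair |-up> <-> |+down>, as in the matrix above.
    return np.array(
        [[h + Delta, 0.0,            0.0,        0.0],
         [0.0,       h - Delta,      gamma,      0.0],
         [0.0,       np.conj(gamma), Delta - h,  0.0],
         [0.0,       0.0,            0.0, -Delta - h]],
        dtype=complex)

# At resonance (h = Delta) the coupled pair is maximally mixed.
H = single_site_hamiltonian(h=750.0, Delta=750.0, gamma=750.0 / 500.0)
evals, evecs = np.linalg.eigh(H)
```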
We perform the SWAP operation using a \(\frac{\pi}{4}\)-pulse Heisenberg interaction between the swapping sites. Ideally, the exchange coefficient would be zero for all non-swapping sites; in reality, this is never true. We set the exchange coefficient \(J_{n}\) as follows:
\[J_{n}=\left\{\begin{array}{ll}J_{0},&\text{if }n\neq l\\ J_{\text{SWAP}},&\text{if }n=l\end{array}\right\}, \tag{2}\]
where the SWAP gate is between sites \(l\) and \(l+1\). We assume \(J_{0}\), \(\gamma\), \(\Delta\) and \(h\) all to be site independent. These nonessential assumptions are easy to relax in our method if experimental information about their site dependence is available.
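Since \(\mathbf{\sigma}_{1}\cdot\mathbf{\sigma}_{2}=2\,\mathrm{SWAP}-1\), evolving two sites under \(J_{\rm SWAP}\left(\mathbf{\sigma}_{1}\cdot\mathbf{\sigma}_{2}+1\right)\) for \(T=\frac{\pi/4}{J_{\rm SWAP}}\) yields the SWAP gate up to a global phase; a quick NumPy/SciPy check (a sketch):

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])

def two_site_heisenberg(J):
    # H = J (sigma_1 . sigma_2 + 1) on two qubits.
    dot = sum(np.kron(P, P) for P in (X, Y, Z))
    return J * (dot + np.eye(4))

J_swap = 250.0
T = (np.pi / 4.0) / J_swap                 # pi/4 pulse duration
U = expm(-1j * two_site_heisenberg(J_swap) * T)

SWAP = np.eye(4)[[0, 2, 1, 3]]             # exchanges |01> and |10>
phase = U[0, 0]                            # global phase, here -1j
assert np.allclose(U, phase * SWAP, atol=1e-10)
```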
We take the initial state to be a product state between an initial spin state \(|\psi_{i}\rangle_{s}\) and the valley state \(|\psi_{i}\rangle_{v}\), defined up to a normalization factor as

\[|\psi_{i}\rangle_{v}\propto(1-\alpha)\,|-\,-\ldots-\rangle+\alpha\,|-_{x}-_{x}\ldots-_{x}\rangle, \tag{3}\]
where
\[|-_{x}\rangle=\frac{1}{\sqrt{2}}\left(|+\rangle-|-\rangle\right).\]
We define the total fidelity for our sequence of SWAP gates as:
\[F_{\text{tot}}=\text{Tr}_{s}\left[\text{Tr}_{v}[U\rho_{i}U^{\dagger}]\text{Tr }_{v}[R\rho_{i}R^{\dagger}]\right], \tag{4}\]
where \(R\) is an operator which performs the SWAP gate sequence with perfect fidelity, \(U\) represents the actual SWAP gate sequence with errors, and \(\rho_{i}=|\psi_{i}\rangle\!\langle\psi_{i}|\) is the initial state of the system. \(\text{Tr}_{s}[...]\) and \(\text{Tr}_{v}[...]\) are the partial trace operators over the spin and valley degrees of freedom, respectively. In our case, the SWAP sequence transports a spin state from one side of the spin chain to the other. The transport is performed by swapping sites \(1\leftrightarrow 2\), then \(2\leftrightarrow 3\), and so forth until the first spin state is transported to the end of the chain. We use the effective single gate fidelity, taken as \(F=(F_{\text{tot}})^{1/(L-1)}\), to be able to compare gate sequences of different lengths.
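For small chains, Eq. (4) can be checked exactly on dense state vectors; below is a sketch of the reshape-based partial trace over the valley degrees of freedom, assuming the interleaved (spin, valley, spin, valley, ...) site ordering that we also adopt for the MPS below.

```python
import numpy as np

def reduced_spin_state(psi, n_qubits):
    # psi lives on 2*n_qubits two-level sites ordered
    # (spin_1, valley_1, spin_2, valley_2, ...); trace out all valleys.
    t = psi.reshape([2] * (2 * n_qubits))
    spin_axes = list(range(0, 2 * n_qubits, 2))
    valley_axes = list(range(1, 2 * n_qubits, 2))
    m = np.transpose(t, spin_axes + valley_axes).reshape(2**n_qubits, -1)
    return m @ m.conj().T        # reduced spin density matrix

def total_fidelity(psi_actual, psi_ideal, n_qubits):
    # Eq. (4): overlap of the reduced spin density matrices of the
    # noisy evolution U|psi_i> and the perfect sequence R|psi_i>.
    rho = reduced_spin_state(psi_actual, n_qubits)
    sigma = reduced_spin_state(psi_ideal, n_qubits)
    return float(np.real(np.trace(rho @ sigma)))
```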
We perform the calculations using tensor network MPS methods. Since each physical spin qubit consists of spin and valley states, we split each into two separate two-level sites in the tensor network representation. We arrange the two-level tensors in two rows, with the top row representing the spin states and the bottom row representing the valley states. To construct the tensor network Hilbert space, we order the sites such that the \(2j-1\) and \(2j\) MPS sites map to the \(j\)th physical qubit's spin and valley states, respectively. The interweaving of the two states is ideal because it minimizes the number of long-range interactions necessary within the Hamiltonian, thus improving the efficiency of the MPS algorithm.
The system is initialized by constructing the spin state MPS and the valley state MPS independently and then interspersing them with standard tensor network operations to build a single total MPS for the entire system.

Figure 1: Tensor network diagram for the MPS of the spin chain. The blue and red sites represent the spin (\(|\psi_{s}\rangle\)) and valley (\(|\psi_{v}\rangle\)) degrees of freedom, respectively. We order the tensors so they snake between the spin and valley degrees of freedom to minimize long-range interactions.

In the case of the random spin state MPS, an initial MPS of bond dimension \(M=10\) with complex matrix elements is generated. We perform the time evolution of the total MPS using Time Evolving Block Decimation (TEBD) [12] with SWAP gate time \(T=\frac{\pi/4}{J_{\rm SWAP}}\). We map the state to a rotating reference frame to correct for the background rotation caused by the external magnetic fields. This simple mapping is achieved by applying a set of local unitary operations to all sites, \(U_{r}=\Pi_{n=1}^{L}\exp(ih\sigma_{n}^{z}T)\), when computing the fidelity. We then convert the resulting MPS into a projector matrix product operator (MPO), which we use in Eq. 4.
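The rotating-frame correction \(U_{r}\) is just a product of single-site diagonal gates; the sketch below applies it to a dense state vector (for the MPS, the same \(2\times 2\) gate is applied to each spin tensor).

```python
import numpy as np

def apply_rotating_frame(psi, n_qubits, h, T):
    # U_r = prod_n exp(i h sigma_n^z T), acting only on the spin sites
    # (even positions in the interleaved spin/valley ordering).
    gate = np.diag([np.exp(1j * h * T), np.exp(-1j * h * T)])
    t = psi.reshape([2] * (2 * n_qubits))
    for site in range(0, 2 * n_qubits, 2):
        t = np.moveaxis(np.tensordot(gate, t, axes=([1], [site])), 0, site)
    return t.reshape(-1)
```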
## III Calculations
To assess the effects of different parameters on the fidelity of SWAP gates, we first look at the impact of the magnitude of \(J_{\rm SWAP}\) on the fidelity. We would expect that the larger \(J_{\rm SWAP}\) is relative to \(J_{0}\), the higher the fidelity of the SWAP gate, since this trivially makes the SWAP gates faster. Our results are shown in Fig. 2. Unsurprisingly, this holds regardless of the initial spin basis state, though it is apparent that random, likely entangled, states experience dramatically higher infidelity.
It is worth noting that in all plots, we express the relevant parameters \(\Delta\), \(h\), \(J_{\rm SWAP}\), \(\gamma_{1}\) and \(\gamma_{2}\) in units of \(J_{0}\). In a recent experiment, the residual exchange was measured to be, on average, about \(J_{0}\approx 40\) kHz [17].
Looking at the effects of \(h\) and \(\Delta\) in Fig. 3, we find a significant impact on the fidelity of the SWAP sequence only when in resonance (\(h\approx\Delta\)). Large values of \(J_{\rm SWAP}\) broaden the \(h\approx\Delta\) resonance regime, though they also suppress the infidelity peak in such a regime.
The magnitude \(|\gamma|=\sqrt{\gamma_{1}^{2}+\gamma_{2}^{2}}\) of the spin valley coupling significantly impacts the fidelity of the SWAP gate. As expected, a larger \(\gamma\) is detrimental to the SWAP gates' reliability due to local spin-valley state mixing. This result can be seen in Fig. 5.
Figure 4: Single gate fidelity for variable \(J_{\rm SWAP}\) and \(\Delta\) of an \(L=20\), random spin, \(\alpha=0\) initial state. The other parameters used were \(h=750\), \(\gamma_{1}=\Delta/500\), \(\gamma_{2}=0\). A small deviation can be seen when \(\Delta\) and \(h\) are in resonance (\(h\approx\Delta\)).
We also examine the effect of different phases of the spin valley coupling \(\theta=\arg(\gamma)\). What may seem initially surprising is that for the \(|-\ldots-\rangle\) initial valley state (\(\alpha=0\)), there is no valley phase dependence on fidelity, and only \(|\gamma|\) has any effect. If we initialize one or both of the spin or valley states to a Z-basis state, the effect of \(\arg(\gamma)\) reduces to a global phase. However, there is a phase dependence when the valley state is initially \(|-_{x}\ldots-_{x}\rangle\). To investigate this, we write the initial valley state in the \(\alpha\) dependent form of Eq. 3, where the initial valley state is \(|-\ldots-\rangle\) for \(\alpha=0\) and \(|-_{x}\ldots-_{x}\rangle\) for \(\alpha=1\). We show the SWAP fidelity for six random initial spin states at \(\alpha=0\) and \(\alpha=1\) in Fig. 6. The effects of tuning \(\alpha\) from \(0\) to \(1\) can be seen in Fig. 7.
To investigate the effects of crosstalk for even longer spin chains, we simulate lengths \(L=[5,100]\) in Fig. 8. We find that the single gate fidelities drop to critically low values even for relatively large values of \(J_{\rm SWAP}\). This is because the residual exchange \(J_{0}\) crosstalk has more sites to entangle and more time overall to do so.
## IV Conclusion
In conclusion, we investigate the effects of valley states on the fidelity of a sequence of SWAP gates in long quantum spin qubit systems (20-100 qubits).
Figure 8: Single gate infidelities (\(1-F\)) for chains of varying length with variable values of \(J_{\rm SWAP}\in[150,500]\). This simulation was run using \(h=750\), \(\Delta=500\), and \(\gamma=0\), averaging over five \(\alpha=0\) random initial spin states.
Figure 5: Single gate fidelity for variable \(\gamma_{1}\) and \(\gamma_{2}\) with a \(L=20\), \(\alpha=0\), random initial spin state. Ran with parameters \(h=750\) and \(\Delta=500\).
Figure 6: Single gate fidelity for 6 random \(L=20\) initial states of the form in Eq. 3. This was calculated for different phases of \(\gamma=10e^{i\theta}\) where \(\gamma_{1}={\rm Re}[\gamma]\), \(\gamma_{2}={\rm Im}[\gamma]\). The initial states had (a) \(\alpha=0\) (b) \(\alpha=1\). In both cases \(h=30\) and \(\Delta=100\).
Figure 7: Single gate fidelity of a single \(L=50\) random spin initial state for different \(\alpha\) and variable \(\gamma\) phase. Calculations were performed using \(|\gamma|=10\), \(h=30\), \(\Delta=100\) and \(J_{\rm SWAP}=250\).
Using state-of-the-art tensor network methods, we can examine the large-scale behavior of a complete model of a spin qubit chain, including both spin and valley, as well as the coupling between them. We calculate how the fidelity of repeated SWAP gates is directly affected by factors such as valley splitting, spin-valley coupling, Zeeman splitting, and crosstalk.
We find that the fidelity is only weakly affected by Zeeman splitting and valley splitting, except when these are brought into resonance. We show that for Z-basis valley states there is no dependence on the phase of the spin-valley coupling, and even for other valley states the effect is small. We also investigate the scaling of the fidelity in long spin chains due to the effects of crosstalk.
## V Acknowledgement
We thank Donovan Buterakos for helpful discussions and suggestions. This work was supported by the Laboratory for Physical Sciences.
|
2307.01102 | Implications for the non-Gaussianity of curvature perturbation from
pulsar timing arrays | The recently released data by pulsar timing array (PTA) collaborations
present strong evidence for a stochastic signal consistent with a
gravitational-wave background. Assuming this signal originates from
scalar-induced gravitational waves, we jointly use the PTA data from the
NANOGrav 15-yr data set, PPTA DR3, and EPTA DR2 to probe the small-scale
non-Gaussianity. We put the first-ever constraint on the non-Gaussianity
parameter, finding $|F_\mathrm{NL}|\lesssim 13.9$ for a lognormal power
spectrum of the curvature perturbations. Furthermore, we obtain $-13.9 \lesssim
F_\mathrm{NL}\lesssim -0.1$ to prevent excessive production of primordial black
holes. Moreover, the multi-band observations with the space-borne
gravitational-wave detectors, such as LISA/Taiji/TianQin, will provide a
complementary investigation of primordial non-Gaussianity. Our findings pave
the way to constrain inflation models with PTA data. | Lang Liu, Zu-Cheng Chen, Qing-Guo Huang | 2023-07-03T15:27:23Z | http://arxiv.org/abs/2307.01102v3 | # Implications for the non-Gaussianity of curvature perturbation from pulsar timing arrays
###### Abstract
The recently released data by pulsar timing array (PTA) collaborations present strong evidence for a stochastic signal consistent with a gravitational-wave background. Assuming this signal originates from scalar-induced gravitational waves, we jointly use the PTA data from the NANOGrav 15-yr data set, PPTA DR3, and EPTA DR2 to probe the small-scale non-Gaussianity. We put the first-ever constraint on the non-Gaussianity parameter, finding \(|F_{\rm NL}|\lesssim 13.9\) for a lognormal power spectrum of the curvature perturbations. Furthermore, we obtain \(-13.9\lesssim F_{\rm NL}\lesssim-0.1\) to prevent excessive production of primordial black holes. Moreover, the multi-band observations with the space-borne gravitational-wave detectors, such as LISA/Taiji/TianQin, will provide a complementary investigation of primordial non-Gaussianity. Our findings pave the way to constrain inflation models with PTA data.
**Introduction.** Various inflation models (see _e.g._ [1, 2, 3, 4, 5, 6, 7, 8]) predict the existence of a sizable primordial non-Gaussianity; hence, non-Gaussianity plays an important role in exploring the early Universe [9, 10, 11]. How to probe the non-Gaussianity of the Universe is one of the key questions in modern physics. Over several decades, significant advancements have been made in precisely measuring a nearly scale-invariant power spectrum characterizing primordial density fluctuations. These measurements have been accomplished using observational data from the cosmic microwave background (CMB) [12] and large-scale structure [13, 14] surveys, offering valuable insights into the fundamental properties of the Universe. Although significant efforts have been dedicated to precisely characterizing the power spectra of primordial perturbations on large scales, searching for new and independent probes becomes crucial when examining phenomena at small scales.
Gravitational waves (GWs) offer a fascinating avenue for acquiring insights into the history and composition of the Universe, serving as another probe of small-scale non-Gaussianity. In fact, space-borne GW detectors, such as LISA [15], Taiji [16], and TianQin [17], can explore the non-Gaussianity through scalar-induced GWs (SIGWs) [18, 19, 20, 21, 22, 23, 24, 25, 26, 27] in the mHz frequency band. Pulsar timing arrays (PTAs) [28, 29], on the other hand, are sensitive in the nHz frequency band, providing another opportunity to probe the early Universe. Recently, NANOGrav [30, 31], PPTA [32, 33], EPTA [34, 35], and CPTA [36] all announced evidence for a stochastic signal in their latest data sets consistent with the Hellings-Downs [37] spatial correlations expected for a stochastic gravitational-wave background (SGWB). Although there can be many sources [38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51] in the PTA window, whether this signal is of astrophysical or cosmological origin is still under investigation [52, 53, 54, 55, 56, 57, 58, 59, 60].
A possible explanation for this signal is SIGWs produced by primordial curvature perturbations at small scales. When the primordial curvature perturbations reach significant magnitudes, they can generate a considerable SGWB through second-order effects resulting from the non-linear coupling of perturbations. Additionally, large curvature perturbations can trigger the formation of primordial black holes (PBHs) [61, 62, 63]. PBHs have attracted a lot of attention in recent years [64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92] (see also reviews [93, 94, 95]) as a promising candidate for dark matter and can explain the binary black holes detected by LIGO-Virgo-KAGRA [96, 97]. The formation rate of PBHs would be entirely altered if there were any significant non-Gaussianity, as PBHs are produced in the large-amplitude tail of the curvature perturbation probability distribution [98].
In this letter, assuming that the signal detected by PTAs is from SIGWs, we jointly use the NANOGrav 15-yr data set, PPTA DR3, and EPTA DR2 to constrain the small-scale non-Gaussianity when the scalar modes re-enter the horizon. As a demonstration, we employ a lognormal power spectrum of curvature perturbations and constrain the non-Gaussianity parameter as \(-13.9\lesssim F_{\rm NL}\lesssim-0.1\).
**SIGWs and PBHs.** We will briefly review the SIGWs that arise as a result of the local-type non-Gaussian curvature perturbations, a significant phenomenon that has been previously discussed in [99, 100, 21, 101, 102, 103]. The local-type non-Gaussianities are characterized by the expansion of the curvature perturbation, \(\mathcal{R}(\vec{x})\), in
terms of the Gaussian component in real space. Specifically, the expansion up to the quadratic order can be written as [104, 105, 106, 107, 108, 109]
\[\mathcal{R}(\vec{x})=\mathcal{R}_{\text{G}}(\vec{x})+F_{\text{NL}}\left( \mathcal{R}_{\text{G}}^{2}(\vec{x})-\left\langle\mathcal{R}_{\text{G}}^{2}( \vec{x})\right\rangle\right), \tag{1}\]
where \(\mathcal{R}_{\text{G}}(\vec{x})\) follows Gaussian statistics, and \(F_{\text{NL}}\) represents the dimensionless non-Gaussian parameters. It is worth noting that the non-Gaussianity parameter \(F_{\text{NL}}\) is related to the commonly used notation \(f_{\text{NL}}\) through the relation \(F_{\text{NL}}\equiv 3/5f_{\text{NL}}\). The non-Gaussian contributions are incorporated by defining the effective curvature power spectrum, \(P_{\mathcal{R}}^{\text{NG}}(k)\), as [21]
\[P_{\mathcal{R}}^{\text{NG}}(k)=P_{\mathcal{R}}(k)+F_{\text{NL}}^{2}\int_{0}^{\infty}\mathrm{d}v\int_{|1-v|}^{1+v}\mathrm{d}u\,\frac{P_{\mathcal{R}}(uk)P_{\mathcal{R}}(vk)}{2u^{2}v^{2}}. \tag{2}\]
In the conformal Newton gauge, the metric perturbations can be expressed as
\[ds^{2}=a^{2}(\eta)\left\{-(1+2\phi)\mathrm{d}\eta^{2}+[(1-2\phi)\delta_{ij}+h _{ij}]\mathrm{d}x^{i}\mathrm{d}x^{j}\right\}, \tag{3}\]
where \(\eta\) represents the conformal time, \(\phi\) is the Newtonian potential, and \(h_{ij}\) corresponds to the tensor mode of the metric perturbation in the transverse-traceless gauge. The equation of motion for \(h_{ij}\) can be obtained by considering the perturbed Einstein equation up to the second order, namely
\[h_{ij}^{\prime\prime}+2\mathcal{H}h_{ij}^{\prime}-\nabla^{2}h_{ij}=-4\mathcal{ T}_{ij}^{\ell m}S_{\ell m}, \tag{4}\]
where the prime denotes a derivative with respect to the conformal time \(\eta\), \(\mathcal{H}\equiv\frac{a^{\prime}}{a}\) represents the conformal Hubble parameter, and \(\mathcal{T}_{ij}^{\ell m}\) is transverse and traceless projection operator in Fourier space. The source term \(S_{ij}\), which is of second order in scalar perturbations, reads
\[S_{ij}=3\phi\partial_{i}\partial_{j}\phi-\frac{1}{\mathcal{H}}\left(\partial_ {i}\phi^{\prime}\partial_{j}\phi+\partial_{i}\phi\partial_{j}\phi^{\prime} \right)-\frac{1}{\mathcal{H}^{2}}\partial_{i}\phi^{\prime}\partial_{j}\phi^{ \prime}. \tag{5}\]
The characterization of SGWBs often involves describing their energy density per logarithmic frequency interval relative to the critical density \(\rho_{c}(\eta)\),
\[\Omega_{\text{GW}}(k,\eta)\equiv\frac{1}{\rho_{c}(\eta)}\frac{\mathrm{d}\rho _{\text{GW}}(k,\eta)}{\mathrm{d}\ln k}=\frac{k^{3}}{48\pi^{2}}\left(\frac{k}{ \mathcal{H}}\right)^{2}\overline{\left\langle\left|h_{\mathbf{k}}(\eta)\right|^{ 2}\right\rangle}, \tag{6}\]
where the overline represents an average over a few wavelengths. During the radiation-dominated era, GWs are generated by curvature perturbations, and their density parameter at the matter-radiation equality is denoted as \(\Omega_{\text{GW}}(k)=\Omega_{\text{GW}}(k,\eta\rightarrow\infty)\). Using the relation between curvature perturbations \(\mathcal{R}\) and scalar perturbations \(\phi\) in the radiation-dominated era, \(\phi=(2/3)\mathcal{R}\), we can calculate \(\Omega_{\text{GW}}(k)\) as [23]
\[\Omega_{\text{GW}}(k)=\int_{0}^{\infty}\mathrm{d}v\int_{|1-v|}^{|1+v|}\mathrm{ d}u\mathcal{T}\,P_{\mathcal{R}}^{\text{NG}}(vk)P_{\mathcal{R}}^{\text{NG}}(uk), \tag{7}\]
where the transfer function \(\mathcal{T}=\mathcal{T}(u,v)\) is given by
\[\mathcal{T}(u,v)= \frac{3}{1024v^{8}u^{8}}\left[4v^{2}-\left(v^{2}-u^{2}+1\right)^ {2}\right]^{2}\left(v^{2}+u^{2}-3\right)^{2} \tag{8}\] \[\times\left\{\left[\left(v^{2}+u^{2}-3\right)\ln\left(\left| \frac{3-(v+u)^{2}}{3-(v-u)^{2}}\right|\right)-4vu\right]^{2}\right.\] \[\left.+\pi^{2}\left(v^{2}+u^{2}-3\right)^{2}\Theta(v+u-\sqrt{3}) \right\}.\]
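For reference, the transfer function transcribes directly into code; the following scalar implementation (a Python sketch of Eq. (8)) may be useful when evaluating Eq. (7) numerically.

```python
import numpy as np

def transfer_T(u, v):
    # Radiation-era SIGW transfer function, Eq. (8).
    s = v**2 + u**2 - 3.0
    pre = (3.0 / (1024.0 * v**8 * u**8)
           * (4.0 * v**2 - (v**2 - u**2 + 1.0)**2)**2 * s**2)
    log_term = (s * np.log(np.abs((3.0 - (v + u)**2) / (3.0 - (v - u)**2)))
                - 4.0 * v * u)
    theta = 1.0 if (v + u) > np.sqrt(3.0) else 0.0
    return pre * (log_term**2 + np.pi**2 * s**2 * theta)
```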
According to Eqs. (2) and (7), \(\Omega_{\text{GW}}(k)\) can be expanded as
\[\Omega_{\text{GW}}(k)=A^{2}\Omega^{(0)}(k)+A^{3}F_{\text{NL}}^{2}\Omega^{(2)} (k)+A^{4}F_{\text{NL}}^{4}\Omega^{(4)}(k), \tag{9}\]
where \(\Omega^{(0)}(k)\), \(\Omega^{(2)}(k)\), and \(\Omega^{(4)}(k)\) represent the corresponding integral terms, and \(A\equiv\int P_{\mathcal{R}}\,\mathrm{d}\ln k\) is the amplitude of \(P_{\mathcal{R}}\). From Eq. (9), we see that positive and negative \(F_{\text{NL}}\) will generate identical SIGWs. In other words, positive and negative \(F_{\text{NL}}\) are degenerate regarding their impact on SIGWs.
Using the relation between the wavenumber and frequency, \(k=2\pi f\), we obtain the energy density fraction spectrum of SIGWs at the present time,
\[\Omega_{\text{GW},0}(f)=\Omega_{\text{r},0}\left[\frac{g_{*,r}(T)}{g_{*,r}\left( T_{\text{eq}}\right)}\right]\left[\frac{g_{*,s}\left(T_{\text{eq}}\right)}{g_{*,s}(T)} \right]^{\frac{4}{3}}\Omega_{\text{GW}}(k). \tag{10}\]
It is given by the product of \(\Omega_{\text{GW}}(k)\), the present energy density fraction of radiation, \(\Omega_{\text{r},0}\), and two factors involving the effective degrees of freedom for entropy density, \(g_{*,s}\), and radiation, \(g_{*,r}\). To demonstrate the method, we adopt a commonly used power spectrum for \(P_{\mathcal{R}}\), taking the lognormal form [110, 23]
\[P_{\mathcal{R}}(k)=\frac{A}{\sqrt{2\pi}\Delta}\exp\left(-\frac{\ln^{2}(k/k_{*}) }{2\Delta^{2}}\right), \tag{11}\]
where \(A\) is the amplitude and \(\Delta\) characterizes the width of the spectrum.
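As an illustration, the effective spectrum of Eq. (2) for this lognormal input can be evaluated by direct two-dimensional quadrature (a Python/SciPy sketch; the integration strategy is our choice for illustration, not taken from the analysis pipeline).

```python
import numpy as np
from scipy.integrate import dblquad

def P_R(k, A, Delta, k_star):
    # Lognormal curvature power spectrum, Eq. (11).
    return (A / (np.sqrt(2.0 * np.pi) * Delta)
            * np.exp(-np.log(k / k_star)**2 / (2.0 * Delta**2)))

def P_R_NG(k, A, Delta, k_star, F_NL):
    # Eq. (2): Gaussian part plus the F_NL^2 convolution term.
    def integrand(u, v):
        return (P_R(u * k, A, Delta, k_star) * P_R(v * k, A, Delta, k_star)
                / (2.0 * u**2 * v**2))
    conv, _ = dblquad(integrand, 0.0, np.inf,
                      lambda v: abs(1.0 - v), lambda v: 1.0 + v)
    return P_R(k, A, Delta, k_star) + F_NL**2 * conv
```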
We note that a positive value of \(F_{\text{NL}}\) will increase the abundance of PBHs for a given power spectrum of curvature perturbations. Conversely, a negative value of \(F_{\text{NL}}\) will decrease the abundance of PBHs. This behavior highlights the impact of non-Gaussianity, quantified by \(F_{\text{NL}}\), on the formation and abundance of PBHs. The Gaussian curvature perturbation \(\mathcal{R}_{\text{G}}\) can be determined by solving Eq. (1) as [111, 98]
\[\mathcal{R}_{\text{G}}^{\pm}(\mathcal{R})=\frac{1}{2F_{\text{NL}}}\left(-1\pm \sqrt{1+4F_{\text{NL}}\mathcal{R}+4F_{\text{NL}}^{2}\langle\mathcal{R}_{\text{G} }^{2}\rangle}\right). \tag{12}\]
PBHs are expected to form when the curvature perturbation exceeds a certain threshold value \(\mathcal{R}_{\text{c}}\sim 1\)[112, 113, 114, 115]. The PBH mass fraction at formation time can be
calculated as [98]
\[\beta(M)\simeq\frac{1}{2}\begin{cases}\operatorname{erfc}\left(\dfrac{\mathcal{R}_{\text{G}}^{+}(\mathcal{R}_{\text{c}})}{\sqrt{2\langle\mathcal{R}_{\text{G}}^{2}\rangle}}\right)+\operatorname{erfc}\left(-\dfrac{\mathcal{R}_{\text{G}}^{-}(\mathcal{R}_{\text{c}})}{\sqrt{2\langle\mathcal{R}_{\text{G}}^{2}\rangle}}\right);&F_{\text{NL}}>0,\\[8pt] \operatorname{erfc}\left(\dfrac{\mathcal{R}_{\text{G}}^{+}(\mathcal{R}_{\text{c}})}{\sqrt{2\langle\mathcal{R}_{\text{G}}^{2}\rangle}}\right)-\operatorname{erfc}\left(\dfrac{\mathcal{R}_{\text{G}}^{-}(\mathcal{R}_{\text{c}})}{\sqrt{2\langle\mathcal{R}_{\text{G}}^{2}\rangle}}\right);&F_{\text{NL}}<0.\end{cases} \tag{13}\]
One can define the total abundance of PBHs in the dark matter at present as [93]
\[\begin{split} f_{\text{PBH}}&\equiv\frac{\Omega_{ \text{PBH}}}{\Omega_{\text{CDM}}}=2.7\times 10^{8}\int_{-\infty}^{\infty}\text{d} \ln M\\ &\times\left(\frac{g_{*,r}}{10.75}\right)^{3/4}\left(\frac{g_{*,s} }{10.75}\right)^{-1}\left(\frac{M}{M_{\odot}}\right)^{-1/2}\beta(M),\end{split} \tag{14}\]
where \(\Omega_{\text{CDM}}\) is the cold dark matter density.
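Eqs. (12) and (13) are straightforward to evaluate; below is a sketch (Python/SciPy, valid for \(F_{\rm NL}\neq 0\), with the threshold fixed at \(\mathcal{R}_{\rm c}=1\) as in the text).

```python
import numpy as np
from scipy.special import erfc

def R_G_pm(R, F_NL, var_G):
    # Eq. (12): the two Gaussian preimages of R (requires F_NL != 0);
    # var_G is the Gaussian variance <R_G^2>.
    root = np.sqrt(1.0 + 4.0 * F_NL * R + 4.0 * F_NL**2 * var_G)
    return (-1.0 + root) / (2.0 * F_NL), (-1.0 - root) / (2.0 * F_NL)

def beta(F_NL, var_G, R_c=1.0):
    # Eq. (13): PBH mass fraction at formation for local non-Gaussianity.
    Rp, Rm = R_G_pm(R_c, F_NL, var_G)
    s = np.sqrt(2.0 * var_G)
    if F_NL > 0:
        return 0.5 * (erfc(Rp / s) + erfc(-Rm / s))
    return 0.5 * (erfc(Rp / s) - erfc(Rm / s))
```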
**Data analyses and results.** We jointly use the NANOGrav 15-yr data set, PPTA DR3, and EPTA DR2 to estimate the model parameters. The ongoing efforts of these PTAs have lasted for more than a decade. Specifically, the NANOGrav 15-yr data set contains observations of 68 pulsars with a time span of 16.03 years, PPTA DR3 contains observations of 32 pulsars with a time span of up to 18 years, and EPTA DR2 contains observations of 25 pulsars with a time span of 24.7 years. These PTA data sets all present a stochastic signal consistent with the Hellings-Downs spatial correlations expected for an SGWB. If this signal is truly of GW origin, it should share the same properties among these PTAs. Therefore, to increase the precision, we combine the observations from these PTAs to estimate the model parameters rather than using each PTA individually. In this letter, we use the free spectrum amplitude with Hellings-Downs correlations derived by each PTA.
Figure 1: The posterior predictive distribution for the energy density from SIGWs for the \(\mathcal{M}_{\text{NG}}\) model. The solid olive line is the median value, while the shaded region represents the 90% credible region. We also show the energy density spectra derived from the free spectrum from NANOGrav 15-yr data set (red violins), PPTA DR3 (green violins), and EPTA DR2 (blue violins). The black solid, dashed, and dash-dotted lines represent the power-law integrated sensitivity curves for LISA, Taiji, and TianQin, respectively.
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline Parameter & \(A\) & \(\Delta\) & \(f_{*}/\text{Hz}\) & \(|F_{\text{NL}}|\) \\ \hline Prior & \(\log\!\mathcal{U}(-3,2)\) & \(\mathcal{U}(0.05,5)\) & \(\log\!\mathcal{U}(-9,-2)\) & \(\log\!\mathcal{U}(-5,3)\) \\ Result for \(\mathcal{M}_{\text{G}}\) & \(1.73^{+5.57}_{-1.47}\) & \(3.24^{+0.70}_{-1.34}\) & \(3.25^{+51.1}_{-3.22}\times 10^{-5}\) & – \\ Result for \(\mathcal{M}_{\text{NG}}\) & \(1.06^{+5.20}_{-1.02}\) & \(3.36^{+1.10}_{-1.29}\) & \(1.81^{+45.3}_{-1.79}\times 10^{-5}\) & \(\lesssim 13.9\) \\ \hline \end{tabular}
\end{table}
Table 1: Prior distributions and results for the model parameters. We consider two cases: a model with non-Gaussianity, \(\mathcal{M}_{\text{NG}}\), and a model without non-Gaussianity, \(\mathcal{M}_{\text{G}}\). Here \(\mathcal{U}\) and \(\log\!\mathcal{U}\) denote the uniform and \(\log\)-uniform distributions, respectively. We quote each parameter’s median value and 90% equal-tail credible interval.
with Hellings-Downs correlations. Given the time span \(T_{\rm obs}\) of a PTA, the free spectrum starts at the lowest frequency \(1/T_{\rm obs}\). NANOGrav, PPTA, and EPTA use 14, 28, and 24 frequency components in their SGWB searches, respectively. Combining these data results in a free spectrum with 66 frequencies ranging from 1.28 nHz to 49.1 nHz. A visualization of the data used in the analyses is shown in Fig. 1. In this work, we also consider the constraints from big-bang nucleosynthesis (BBN) and the CMB on the integrated energy-density fraction defined by \(\int_{k_{\rm min}}^{\infty}\mathrm{d}\ln k\,h^{2}\Omega_{\rm GW,0}(k)\), where \(h=H_{0}/(100\,{\rm km\,s^{-1}\,Mpc^{-1}})=0.674\) [116] is the dimensionless Hubble constant. The upper limits are \(1.3\times 10^{-6}\) for BBN [117] and \(2.9\times 10^{-7}\) for the CMB [118].
We use the time delay data released by each PTA. The time delay \(d(f)\) can be converted to the power spectrum \(S(f)\) by
\[d(f)=\sqrt{S(f)/T_{\rm obs}}. \tag{15}\]
We then convert \(S(f)\) to the characteristic strain, \(h_{c}(f)\), by
\[h_{c}^{2}(f)=12\pi^{2}f^{3}S(f). \tag{16}\]
Further, we obtain the free spectrum energy density as
\[\hat{\Omega}_{\rm GW}(f)=\frac{2\pi^{2}}{3H_{0}^{2}}f^{2}h_{c}^{2}(f)=\frac{8 \pi^{4}}{H_{0}^{2}}T_{\rm obs}f^{5}d^{2}(f). \tag{17}\]
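The conversion chain of Eqs. (15)-(17) is straightforward to implement; a minimal sketch is given below, assuming the time-delay data \(d(f)\) and the time span \(T_{\rm obs}\) are given in SI units. The numerical delay value in the example is made up for illustration.

```python
import numpy as np

H0 = 0.674 * 100.0e3 / 3.0857e22   # Hubble constant in s^-1 for h = 0.674

def omega_gw_from_delay(f, d, t_obs):
    """Free-spectrum energy density from time-delay data, Eqs. (15)-(17).

    f : frequency in Hz; d : time delay in s; t_obs : time span in s.
    """
    s = t_obs * d**2                          # Eq. (15): S(f) = d(f)^2 * T_obs
    hc2 = 12.0 * np.pi**2 * f**3 * s          # Eq. (16): h_c^2(f)
    return 2.0 * np.pi**2 / (3.0 * H0**2) * f**2 * hc2   # Eq. (17)

# Example at the lowest combined frequency, with a hypothetical delay value:
print(omega_gw_from_delay(f=1.28e-9, d=1.0e-7, t_obs=24.7 * 3.156e7))
```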
For each frequency \(f_{i}\), with the posteriors of \(\hat{\Omega}_{\rm GW}(f_{i})\) at hand, we can estimate the corresponding kernel density \(\mathcal{L}_{i}\). Therefore, the total likelihood is
\[\mathcal{L}(\Lambda)=\prod_{i=1}^{66}\mathcal{L}_{i}(\Omega_{\rm GW}(f_{i}, \Lambda)), \tag{18}\]
where \(\Lambda\equiv\{A,\Delta,f_{*},|F_{\rm NL}|\}\) is the collection of the model parameters. We use the dynesty [119] sampler wrapped in the Bilby [120, 121] package to search over the parameter space. The model parameters and their priors are summarized in Table 1.
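For illustration, the sketch below shows how the likelihood of Eq. (18) could be assembled from kernel density estimates of per-frequency posterior samples, together with priors mirroring Table 1. The posterior samples here are random stand-ins for the released PTA free-spectrum chains, and SciPy distributions are used in place of the Bilby prior classes.

```python
import numpy as np
from scipy.stats import gaussian_kde, loguniform, uniform

# Stand-in posterior samples of ln(Omega_hat_GW(f_i)) at the 66 frequencies;
# in practice these come from the released NANOGrav/PPTA/EPTA chains.
rng = np.random.default_rng(0)
samples = [rng.normal(-8.0 + 0.02 * i, 0.5, size=2000) for i in range(66)]
kdes = [gaussian_kde(s) for s in samples]          # kernel densities L_i

def total_log_likelihood(ln_omega_model):
    """Eq. (18): sum over frequencies of the per-frequency log densities."""
    return sum(kde.logpdf(x)[0] for kde, x in zip(kdes, ln_omega_model))

# Priors mirroring Table 1 (log-uniform written via scipy's loguniform):
priors = {
    "A": loguniform(1e-3, 1e2),
    "Delta": uniform(loc=0.05, scale=5.0 - 0.05),
    "f_star": loguniform(1e-9, 1e-2),
    "F_NL": loguniform(1e-5, 1e3),
}
theta = {name: dist.rvs(random_state=rng) for name, dist in priors.items()}
print(theta)
```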
We consider two models: one without non-Gaussianity, \(\mathcal{M}_{\rm G}\), and another with non-Gaussianity, \(\mathcal{M}_{\rm NG}\). The posterior distributions for the parameters are shown in Fig. 2, and the median and 90% credible interval values for each parameter are summarized in Table 1. We note that the \(\mathcal{M}_{\rm G}\) model has been studied by NANOGrav with their 15-yr data set, where it is called SIGW-GAUSS. While we obtain consistent results, the combined data from NANOGrav, PPTA, and EPTA constrain the parameters to higher precision than the NANOGrav data set alone, as expected. For the \(\mathcal{M}_{\rm NG}\) model, the \(F_{\rm NL}\) and \(A\) parameters are generally degenerate. The combined data constrain the amplitude to be \(A=1.06^{+5.20}_{-1.02}\), therefore constraining \(|F_{\rm NL}|\lesssim 13.9\). Since positive and negative \(F_{\rm NL}\) values are degenerate, we have \(-13.9\lesssim F_{\rm NL}\lesssim 13.9\). Moreover, the abundance of PBHs cannot exceed that of dark matter, i.e., \(f_{\rm PBH}\lesssim 1\). Using Eqs. (13) and (14), this limitation allows us to break the degeneracy and obtain constraints on \(F_{\rm NL}\) as \(-13.9\lesssim F_{\rm NL}\lesssim-0.1\).
**Summary and discussion.** While the CMB and large-scale structure observations have provided increasingly precise measurements on the largest scales of the universe, our knowledge of small scales remains limited, except for the constraints imposed by PBHs. PTAs, on the other hand, are an invaluable tool to probe small-scale non-Gaussianity through SIGWs. Assuming the stochastic signal detected by the PTA collaborations originates from SIGWs, we jointly use the NANOGrav 15-yr data set, PPTA DR3, and EPTA DR2 to constrain the SIGWs accounting for non-Gaussianity. For the first time, we constrain the non-linear parameter as \(|F_{\rm NL}|\lesssim 13.9\) for a lognormal power spectrum of the curvature perturbation. Furthermore, we obtain \(-13.9\lesssim F_{\rm NL}\lesssim-0.1\) to avoid overproduction of PBHs. Although we have only dealt with the lognormal power spectrum of curvature perturbations, the method and the framework proposed in this work can be easily extended to different types of power spectra.
The constraints on primordial non-Gaussianity have significant implications for inflation models that involve
Figure 2: One and two-dimensional marginalized posteriors of the parameters for the \(\mathcal{M}_{\rm G}\) (red) model and the \(\mathcal{M}_{\rm NG}\) (blue) model. We jointly use the PTA data from the NANOGrav 15-yr data set, PPTA DR3, and EPTA DR2. The contours in the two-dimensional plot correspond to the \(1\sigma\), \(2\sigma\), and \(3\sigma\) credible regions, respectively.
scalar fields, other than the inflaton, in generating the primordial curvature perturbations. For instance, adiabatic curvaton models predict that [122; 123]
\[f_{\rm NL}=\frac{5}{3}F_{\rm NL}=\frac{5}{4r_{\rm D}}-\frac{5r_{\rm D}}{6}-\frac {5}{3}, \tag{19}\]
when the curvaton field has a quadratic potential [124; 125; 126; 127; 128]. Here the parameter \(r_{\rm D}=3\rho_{\rm curvaton}/(3\rho_{\rm curvaton}+4\rho_{\rm radiation})\) represents the "curvaton decay fraction" at the time of curvaton decay under the sudden-decay approximation. Our constraint \(|F_{\rm NL}|\lesssim 13.9\) implies
\[r_{\rm D}\gtrsim 0.05\quad(95\%), \tag{20}\]
and the further constraint that \(F_{\rm NL}\lesssim-0.1\) yields
\[r_{\rm D}\gtrsim 0.62\quad(95\%), \tag{21}\]
indicating that the curvaton field has a non-negligible energy density when it decays. Our findings, therefore, pave the way to constrain inflation models with PTA data.
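The limits (20) and (21) follow from inverting Eq. (19) numerically; a minimal sketch reproducing them is given below. Since \(f_{\rm NL}(r_{\rm D})\) is monotonically decreasing on \((0,1]\), each upper bound on \(f_{\rm NL}\) maps to a lower bound on \(r_{\rm D}\).

```python
from scipy.optimize import brentq

def f_nl(r_d):
    # Eq. (19): f_NL = 5/(4 r_D) - 5 r_D / 6 - 5/3
    return 5.0 / (4.0 * r_d) - 5.0 * r_d / 6.0 - 5.0 / 3.0

# |F_NL| < 13.9  =>  f_NL < (5/3) * 13.9; invert for the lower limit on r_D:
r_min_abs = brentq(lambda r: f_nl(r) - (5.0 / 3.0) * 13.9, 1e-4, 1.0)
# F_NL < -0.1  =>  f_NL < (5/3) * (-0.1):
r_min_neg = brentq(lambda r: f_nl(r) - (5.0 / 3.0) * (-0.1), 1e-4, 1.0)
print(f"r_D > {r_min_abs:.2f}")   # ~0.05, cf. Eq. (20)
print(f"r_D > {r_min_neg:.2f}")   # ~0.62, cf. Eq. (21)
```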
Furthermore, as indicated in Fig. 1, the energy density spectrum of SIGWs can generally extend into the frequency band of space-borne GW detectors. Therefore, multi-band observations combining PTAs with the forthcoming space-borne GW detectors, such as LISA/Taiji/TianQin, will provide a complementary investigation of non-Gaussianity.
**Note added**. While finalizing this work, we found two parallel independent works [129; 130], which also investigate the possibility of explaining the NANOGrav signal with second-order GWs related to non-Gaussianity. However, these two works did not perform parameter estimation for the non-Gaussianity parameter using the PTA data.
_Acknowledgments_ LL is supported by the National Natural Science Foundation of China (Grant No. 12247112 and No. 12247176) and the China Postdoctoral Science Foundation Fellowship No. 2023M730300. ZCC is supported by the National Natural Science Foundation of China (Grant No. 12247176 and No. 12247112) and the China Postdoctoral Science Foundation Fellowship No. 2022M710429. QGH is supported by grants from NSFC (Grant No. 12250010, 11975019, 11991052, 12047503), Key Research Program of Frontier Sciences, CAS, Grant No. ZDBS-LY-7009, CAS Project for Young Scientists in Basic Research YSBR-006, the Key Research Program of the Chinese Academy of Sciences (Grant No. XDPB15).
|
2304.11899 | Dissociation dynamics in low energy electron attachment to nitrogen
dioxide | Complete dissociation dynamics of low energy electron attachment to nitrogen
dioxide around 8.5 eV resonance has been studied using a velocity map imaging
(VMI) spectrometer. Besides the three prominent resonant peaks at around 1.4
eV, 3.1 eV, and 8.5 eV, we have found an additional small resonance at the
higher energy tail of the 8.5 eV resonance. We have collected the momentum
distribution data of O$^-$ ions at different incident electron energies around
the 8.5 eV resonance along with the smaller additional resonant peak. A
theoretical analysis of these resonances with the momentum imaging experimental
data on dissociative electron attachment to nitrogen dioxide in the gas phase
is used to provide a detailed picture of the molecular dissociation process. | Anirban Paul, Dipayan Biswas, Dhananjay Nandi | 2023-04-24T08:18:59Z | http://arxiv.org/abs/2304.11899v1 | # Dissociation dynamics in low energy electron attachment to nitrogen dioxide
###### Abstract
Complete dissociation dynamics of low energy electron attachment to nitrogen dioxide around 8.5 eV resonance has been studied using a velocity map imaging (VMI) spectrometer. Besides the three prominent resonant peaks at around 1.4 eV, 3.1 eV, and 8.5 eV, we have found an additional small resonance at the higher energy tail of the 8.5 eV resonance. We have collected the momentum distribution data of O\({}^{-}\) ions at different incident electron energies around the 8.5 eV resonance along with the smaller additional resonant peak. A theoretical analysis of these resonances with the momentum imaging experimental data on dissociative electron attachment to nitrogen dioxide in the gas phase is used to provide a detailed picture of the molecular dissociation process.
## 1 Introduction
Dissociation of molecules induced by collisions with charged particles such as electrons is a vital process in different branches of science, since it is directly related to the mechanisms of radiation-induced damage of living cells and depletion of the ozone layer in the upper atmosphere. Damage to living cells due to nuclear radiation is mainly caused by dissociative electron attachment (DEA) of the low-energy secondary electrons generated from higher-energy primary radiation. Single- and double-strand breaks of DNA [1] are also primarily caused by low-energy secondary electron impact. DEA, one of the most important mechanisms in these physical processes, is a two-step resonant process dominant in low-energy electron-molecule inelastic collisions. In the first step, the incident electron attaches to the molecule, forming a temporary negative ion (TNI) state. In most cases, this negative ion state is very unstable and repulsive. The unstable TNI dissociates into a negative ion fragment and one or more neutral fragment(s) in the subsequent step. It is, therefore, crucial to perform similar experiments in the laboratory under controlled conditions [2, 3, 4]. Nitrogen dioxide (NO\({}_{2}\)) is an atmospheric gas that can act as a free radical with an unpaired electron in its highest occupied molecular orbital (HOMO). It exists in equilibrium with its dimer N\({}_{2}\)O\({}_{4}\). It is a toxic gas and an industrial pollutant responsible for photochemical smog, and it plays a significant role in the depletion of the ozone layer. In the air, toxic nitric oxide (NO) and other organic nitrates are also formed from NO\({}_{2}\). It is a bent molecule with C\({}_{2v}\) point symmetry, which adds interesting complexity to its dissociation and makes it all the more interesting to study from a purely scientific point of view.
DEA to NO\({}_{2}\) has been studied earlier by many groups. Most of them have measured the appearance energy, peak positions of the resonances, and
relative cross-sections of these resonances. Fox [5] observed O\({}^{-}\) ion peaks at 1.9 eV, 3.0 eV, and 8.75 eV with onset values at 1.35 eV, 2.5 eV, and 7.3 eV and concluded that the peak around 8.75 eV is due to impurities like NO or H\({}_{2}\)O in NO\({}_{2}\). Rallis and Goodings [6] observed O\({}^{-}\) ion peaks at 3.0 eV and 8.1 eV with onset values at 1.6 eV and 7.3 eV. They found that the first peak onset matched well with the thermodynamical threshold value of 1.65 eV and concluded that at the 3 eV resonant peak the O\({}^{-}\) ions are produced with NO in its electronic ground state, while at the 8.1 eV resonance peak the O\({}^{-}\) ions are created with NO in its first electronic excited state (\(a^{4}\Pi\)). Abouaf and Fiquet-Fayard [7] found that the O\({}^{-}\) ions in the first resonant peak are produced with NO (\(X^{2}\Pi\)) via the dissociation of NO\({}_{2}\) in the \({}^{1}\)B\({}_{1}\) resonance state. Rangwala _et al._ [8] measured the absolute DEA cross-sections of O\({}^{-}\) ions from NO\({}_{2}\) and observed resonance peaks at 1.4 eV, 3.1 eV, and 8.3 eV. Based on R-matrix calculations, Munjal _et al._ showed that NO\({}_{2}^{-}\) supports a bound state below the ground state of NO\({}_{2}\).[9] They found that two shape resonances, \({}^{3}\)B\({}_{1}\) and \({}^{1}\)B\({}_{1}\) at 1.18 and 2.3 eV, respectively, are responsible for the first two peaks at 1.8 and 3.1 eV. Later, Gupta _et al._ calculated the total scattering cross-section for NO\({}_{2}\) + e\({}^{-}\) and compared it with the absolute scattering cross-sections measured by Szmytkowski _et al._[10] They also obtained resonance positions in the total scattering cross-section at 1.33 and 3 eV, having symmetries \({}^{3}\)B\({}_{1}\) and \({}^{1}\)B\({}_{1}\), respectively. Nandi and Krishnakumar [11] measured the kinetic energy distribution of fragment O\({}^{-}\) ions around those resonances using the time-of-flight (TOF) mass spectrometry technique. They found that for the 1.8 eV and 3.5 eV resonant peaks, the kinetic energy vs. incident electron energy curves have a small slope and their threshold values are very close. Therefore, the excess energy is distributed as internal energy of the NO fragments, and both of these resonances have the same dissociation limit. For the 8.5 eV resonance, they assigned the dissociation channel leading to the O\({}^{-}\) ion as O\({}^{-}\) + NO (\(a^{4}\Pi\)), with a threshold energy of 6.43 eV. Gope _et al._ [12] measured the kinetic energy and angular distributions of O\({}^{-}\) ions at different incident electron energies around the third resonance peak at 8.5 eV using the velocity slice imaging (VSI) technique. They found both low- and high-energy O\({}^{-}\) ions at this resonance and attributed the low-energy ions to the O\({}^{-}\) + NO (\(A^{2}\Sigma^{+}\)) channel having a threshold energy of 7.10 eV, and the high-energy ions to the O\({}^{-}\) + NO (\(D^{2}\Sigma^{+}\)) channel having a threshold energy of 8.20 eV. They fitted the angular distribution of the high-energy ions with different resonant symmetries and their combinations and found that the best fit is for B\({}_{1}\) + B\({}_{2}\) resonant symmetry. In contrast, previous theoretical studies found no B\({}_{1}\) resonance in this range.
Tian and co-workers recently studied the complete dissociation dynamics of dissociative electron attachment to NO\({}_{2}\) at the two low-energy resonances.[13] They found the involvement of a \({}^{3}\)B\({}_{2}\) resonant state in the 3.1 eV resonance.[13]
In this article, we report detailed systematic studies of DEA to NO\({}_{2}\) around the higher-energy resonances using the velocity slice imaging (VSI) technique.[14, 15, 16, 17, 18] We report distinct kinetic energy and angular distributions of the fragment ions. Our angular distribution data match quite well with the theoretical prediction.
## 2 Experimental
All the experiments have been carried out in our velocity slice imaging (VSI) setup. The details of this setup have been reported elsewhere [19, 20]. Therefore, we only briefly discuss the setup and experimental procedure here. The setup mainly consists of an electron gun, a Faraday cup, and a velocity slice imaging (VSI) spectrometer. The electron gun produces a pulsed (200 ns wide) electron beam that is further collimated by the magnetic field produced by two coils in Helmholtz configuration. This collimated electron beam is made to cross at right angles with an effusive molecular beam produced by a capillary along the axis of the VSI spectrometer. The spectrometer consists of a single electrostatic lens and a conical drift tube. The VSI spectrometer is followed by a 2D position-sensitive detector (PSD) consisting of three microchannel plates mounted in a Z-stack configuration and a hexanode position-sensitive detector placed just after the MCP stack.
A pulsed electric field applied 100 ns after the electron beam pulse extracts the Newton sphere of the ions formed in the interaction region. The main principle [21] of the VMI spectrometer is to focus all the ions having the same velocity (speed + direction) onto a single point on the detector. The lens plate is used for this focusing. After the lens plate, a conical-shaped drift tube is used; for our experimental purpose, we have applied a 108 V potential to the drift tube. The drift tube is used to expand the Newton sphere of the fragment ions, giving better resolution. As mentioned, the two-dimensional position-sensitive detector consists of three microchannel plates (MCPs) in a Z-stack configuration, with a delay-line hexanode [22] behind them. The time-of-flight (TOF) of the detected ions is determined from the back MCP signal, and the hexanode plates collect the x and y positions of each detected ion. Thus, each ion's x, y, and ToF can be collected and stored in a list-mode file format (.lmf); for our experimental purpose, we have used the CoboldPC software from RoentDek to collect these data and to store them in a .lmf file. Thus, we can construct the full Newton sphere of the negative ions from the .lmf file.
For our analysis, we have used solid-angle-weighted slices [23, 17]. We used data from DEA to oxygen (O\({}_{2}\)) for calibration. We performed the experiments using 99.9% pure, commercially available NO\({}_{2}\) gas.
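To illustrate the wedge-slicing idea described above, the sketch below selects, from hypothetical list-mode arrays, the events whose velocity vectors lie within an 8\({}^{\circ}\) wedge about the detector plane. The calibration constants relating \((x,y,\mathrm{ToF})\) to velocity components are assumed to have been fixed beforehand (e.g., by the O\({}_{2}\) calibration), so positions and time differences are treated as already being on a common velocity scale.

```python
import numpy as np

def wedge_slice(x, y, tof, t0, wedge_deg=8.0):
    """Keep events whose velocity vector lies within the wedge about the
    detector plane; an approximation to a central slice of the Newton sphere.

    x, y and (tof - t0) are assumed pre-scaled to a common velocity unit
    (the O2 calibration would fix the scaling constants in practice).
    """
    vz = tof - t0                     # velocity component along the TOF axis
    vr = np.hypot(x, y)               # transverse velocity component
    elevation = np.degrees(np.arctan2(vz, vr))
    return np.abs(elevation) <= wedge_deg / 2.0

# Hypothetical list-mode arrays (in practice parsed from the CoboldPC .lmf file):
rng = np.random.default_rng(1)
x, y = rng.normal(0, 10, 10_000), rng.normal(0, 10, 10_000)
tof = rng.normal(5000.0, 50.0, 10_000)
mask = wedge_slice(x, y, tof, t0=5000.0)
print(f"{mask.sum()} of {mask.size} events lie inside the 8-degree wedge")
```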
## 3 Computation
We conducted ab initio electronic structure and fixed-nuclei electron scattering calculations to interpret our results. NO\({}_{2}\) is a bent triatomic molecule with C\({}_{2v}\) point group symmetry. It has an equilibrium bond length of 1.2 Å and a bond angle of 134.5\({}^{\circ}\) at equilibrium. It is an open-shell molecule nominally described, near its equilibrium geometry, by the following electronic configuration. The ground-state electron configuration of NO\({}_{2}\) is \((1a_{1})^{2}(1b_{2})^{2}(2a_{1})^{2}(3a_{1})^{2}(2b_{2})^{2}(4a_{1})^{2}(5a_{1} )^{2}(3b_{2})^{2}\)\((1b_{1})^{2}(4b_{2})^{2}\)\((1a_{2})^{2}(6a_{1})^{1}\), and its ground state is \(X^{2}A_{1}\). The neutral and anion states are described by multi-reference configuration-interaction (MRCI) calculations that include single and double excitations from a complete active space (CAS). The CAS orbitals are obtained from state-averaged multi-configuration self-consistent-field (MCSCF) calculations. The ground-state energy of the neutral molecule is optimized by restricting the first 16 electrons to the first eight orbitals, while the remaining 7 electrons are kept free in an active space of 6 molecular orbitals. The energy we found from this calculation is -204.152 H, which agrees very well with the previously found values of -201.144 H by Munjal _et al._ and -204.15 H by Gupta _et al._ The obtained dipole moment is 0.374 D, which is in fair agreement with the experimental value of 0.316 D.[24] On the other hand, for the anion, out of 24 electrons, we froze 12 in the first six molecular orbitals to determine the energy of the
Figure 1: (a) Ion-yield of O\({}^{-}\) ions produced due to DEA to NO\({}_{2}\). The black arrows indicate the energies at which the velocity slice images are taken. (b) O\({}^{-}\) yield for the incident electron energy in the range of 6 to 14 eV. The dashed vertical lines indicate the energy of different resonant states found by our theoretical calculation.
anion states, while the remaining 12 electrons were kept free in an active space of 10 molecular orbitals for the MCSCF calculation. We performed all these calculations with the GAMESS-US program package. [25] As mentioned earlier, we have calculated the energies of different excited states of NO\({}_{2}^{-}\) at the equilibrium geometry of the neutral NO\({}_{2}\) molecule. According to the Born-Oppenheimer approximation, the electronic motion is much faster than the nuclear motion of a molecule, and the equations of motion for the electrons can be separated from those for the nuclei. Based on this, the Franck-Condon transition principle suggests that an electronic transition occurs at the same geometry as the initial one. Therefore, the resonance energy can be calculated as the energy difference between an anionic state calculated at the neutral equilibrium geometry and that of the neutral molecule. From this calculation, we find that the \({}^{3}\)B\({}_{1}\) and \({}^{1}\)B\({}_{1}\) resonances lie at 1.02 and 2.02 eV, respectively. This agrees with the result of Munjal _et al._, who found them at 1.18 and 2.3 eV, respectively.[9]
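For readers who wish to reproduce the flavor of these electronic-structure calculations, a minimal sketch using the open-source PySCF package (as a stand-in for GAMESS-US, which produced the actual results) is given below. The cc-pVDZ basis is our assumption, since the basis set is not restated here; the geometry follows the values quoted above.

```python
import numpy as np
from pyscf import gto, scf, mcscf

# NO2 at the equilibrium geometry quoted above: r(N-O) = 1.2 A, angle = 134.5 deg.
beta = np.radians(134.5 / 2.0)                 # half bond angle
r = 1.2
mol = gto.M(
    atom=[
        ("N", (0.0, 0.0, 0.0)),
        ("O", ( r * np.sin(beta), 0.0, r * np.cos(beta))),
        ("O", (-r * np.sin(beta), 0.0, r * np.cos(beta))),
    ],
    basis="cc-pvdz",   # basis set not stated in the text; an assumption
    spin=1,            # doublet ground state X^2A_1
    symmetry="C2v",
)
mf = scf.ROHF(mol).run()
# CAS(7e, 6o): 16 electrons restricted to the lowest 8 orbitals, as in the text.
mc = mcscf.CASSCF(mf, 6, 7).run()
print("CASSCF energy (Hartree):", mc.e_tot)
```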
## 4 Results and Discussion
In Fig. 1, we have plotted the O\({}^{-}\) ion yield as a function of electron energy over the 1.0 to 15.0 eV energy range. The ion yield shows three main resonances: the most intense one at around 1.4 eV, the second at about 3.1 eV, and the third at about 8.5 eV. The brown curve in Fig. 1 is a zoomed view of the excitation function for the 6 to 15 eV range. This zoomed view suggests that the third resonance is a combination of two resonances; the bigger one peaks at around 8.5 eV, while the smaller one lies on the falling edge of the bigger one. The theoretically calculated value of the ion-pair dissociation (IPD) (NO\({}^{+}\) + O\({}^{-}\)) threshold for NO\({}_{2}\) is 10.91 eV. [26] Upon close inspection of the ion yield curve, an overlap between DEA and ion-pair dissociation can be observed. Previous studies have shown evidence of this kind of overlap between IPD and DEA.[27] In the present study, we have found another small resonance at the higher-energy tail of the main resonance. We have taken the VSI data of O\({}^{-}\) ions for six different incident electron energies around these two resonances. The solid-angle-weighted velocity slice images of O\({}^{-}\) ions are shown in Fig. 2.
In the present theoretical calculation, we have found several Feshbach resonances in the 6 to 12 eV energy range of the FC transition region. In Table 1, we have listed the positions of the resonances in the FC region. From that table, we see that three resonances with A\({}_{1}\), three resonances with B\({}_{2}\), and four resonances with A\({}_{2}\) symmetry are present in that energy range. The \({}^{1}\)A\({}_{1}\) resonance at around 7.04 eV is a Feshbach one in which one electron is excited from the 6a\({}_{1}\) orbital into the 2b\({}_{1}\) molecular orbital, and the incident electron is also captured into the same 2b\({}_{1}\) orbital.
\begin{table}
\begin{tabular}{c|c|c} Position of the resonance (eV) & Symmetry & Electronic configuration \\ \hline
7.04 & \({}^{1}A_{1}\) & 1b\({}_{1}^{2}\)4b\({}_{2}^{2}\)1a\({}_{2}^{2}\)6a\({}_{1}^{0}\)2b\({}_{1}^{2}\) \\
7.99 & \({}^{1}A_{1}\) & 1b\({}_{1}^{2}\)4b\({}_{2}^{2}\)1a\({}_{2}^{2}\)6a\({}_{1}^{0}\)7a\({}_{1}^{2}\) \\
8.15 & \({}^{3}B_{2}\) & 1b\({}_{1}^{2}\)4b\({}_{2}^{1}\)1a\({}_{2}^{2}\)6a\({}_{1}^{1}\)2b\({}_{1}^{2}\) \\
9.24 & \({}^{3}A_{2}\) & 1b\({}_{1}^{2}\)4b\({}_{2}^{2}\)1a\({}_{2}^{2}\)6a\({}_{1}^{1}\)2b\({}_{1}^{2}\) \\
9.78 & \({}^{3}B_{2}\) & 1b\({}_{1}^{2}\)4b\({}_{2}^{1}\)1a\({}_{2}^{2}\)6a\({}_{1}^{1}\)7a\({}_{1}^{2}\) \\
10.35 & \({}^{3}A_{2}\) & 1b\({}_{1}^{2}\)4b\({}_{2}^{2}\)1a\({}_{2}^{1}\)6a\({}_{1}^{1}\)7a\({}_{1}^{2}\) \\
10.50 & \({}^{1}A_{2}\) & 1b\({}_{1}^{2}\)4b\({}_{2}^{2}\)1a\({}_{2}^{1}\)6a\({}_{1}^{1}\)2b\({}_{1}^{2}\) \\
11.14 & \({}^{1}A_{2}\) & 1b\({}_{1}^{2}\)4b\({}_{2}^{2}\)1a\({}_{2}^{1}\)6a\({}_{1}^{1}\)7a\({}_{1}^{2}\) \\
11.36 & \({}^{1}B_{2}\) & 1b\({}_{1}^{2}\)4b\({}_{2}^{1}\)1a\({}_{2}^{2}\)6a\({}_{1}^{1}\)2b\({}_{1}^{2}\) \\
11.50 & \({}^{1}A_{1}\) & 1b\({}_{1}^{2}\)4b\({}_{2}^{2}\)1a\({}_{2}^{2}\)6a\({}_{1}^{0}\)8a\({}_{1}^{2}\) \\
12.66 & \({}^{3}B_{1}\) & 1b\({}_{1}^{1}\)4b\({}_{2}^{2}\)1a\({}_{2}^{2}\)6a\({}_{1}^{1}\)2b\({}_{1}^{2}\) \\ \hline \end{tabular}
\end{table}
Table 1: Resonance positions, their symmetries, and the electronic configurations responsible for the broad resonance peaking at 8.3 eV and for the smaller resonance at its higher-energy tail.
The electron configuration of each resonance is also listed in Table 1. The other two A\({}_{1}\) resonances (at 7.99 and 11.5 eV) are due to excitation from the 6a\({}_{1}\) orbital into the 7a\({}_{1}\) and 8a\({}_{1}\) orbitals, respectively. From Fig. 1 (b), we can see that the bigger resonant peak extends up to about 10 eV. Two A\({}_{1}\), one A\({}_{2}\), and two B\({}_{2}\) resonant states are present in this energy range. On the other hand, the smaller peak extends up to about 12.5 eV. Three A\({}_{2}\), one A\({}_{1}\), and one B\({}_{2}\) resonant states are present in that energy range.
### Kinetic energy distribution
The kinetic energy distributions of the O\({}^{-}\) ions at different incident electron energies, extracted over the entire 0 to 2\(\pi\) angular range about the electron beam direction, are shown in Fig. 3. The distributions in Fig. 3 show only one near-zero-energy peak up to 8.5 eV. For electron energies \(\geq\) 9.3 eV, one additional overlapping peak appears in the higher-energy region of the distribution.
\begin{table}
\begin{tabular}{|c|c|c|} \hline Channel no. & DEA channel & Threshold energy (eV) \\ \hline 1. & O\({}^{-}\) + NO (\(X^{2}\Pi\)) & 1.65 \\ \hline 2. & O\({}^{-}\) + NO (\(a^{4}\Pi\)) & 6.43 \\ \hline 3. & O\({}^{-}\) + NO (\(A^{2}\Sigma^{+}\)) & 7.10 \\ \hline 4. & O\({}^{-}\) + NO (\(B^{2}\Pi\)) & 7.46 \\ \hline 5. & O\({}^{-}\) + NO (\(C^{2}\Pi\)) & 8.11 \\ \hline 6. & O\({}^{-}\) + NO (\(D^{2}\Sigma^{+}\)) & 8.20 \\ \hline 7. & O\({}^{-}\) + N (\({}^{4}S\)) + O (\({}^{3}P\)) & 8.18 \\ \hline \end{tabular}
\end{table}
Table 2: DEA channels of NO\({}_{2}\) and their threshold values. [12]
Gope _et al._ also found both the higher- and lower-energy ions at this resonance.[12] From their threshold calculation, they found that the low energy ions
Figure 2: Wedge-sliced images of O\({}^{-}\)/NO\({}_{2}\) ions for (a) 7.7 eV, (b) 8.5 eV, (c) 9.3 eV, (d) 9.9 eV, (e) 10.5 eV, and (f) 11.1 eV incident electron energy. The electron beam direction is from left to right through the center of each image (the red arrow indicates the electron beam direction). The wedge angle used for our analysis is 8\({}^{\circ}\) for each of the images.
are produced from the O\({}^{-}\) + NO (\(A^{2}\Sigma^{+}\)) dissociation channel having a threshold energy of 7.10 eV. In contrast, the higher energy ions are produced from the O\({}^{-}\) + NO (\(D^{2}\Sigma^{+}\)) dissociation channel having a threshold energy of 8.20 eV.
### Angular distribution
The angular distributions of the fragment O\({}^{-}\) ions have been extracted from the sliced images. We have plotted the angular distributions of the lower-energy ions for 7.7, 8.5, 9.3, 9.9, 10.5, and 11.1 eV incident electron energies in Fig. 4 (a). For the low-energy ions, we have considered the ions with kinetic energy \(\leq\) 1.2 eV. The angular distributions of the higher-energy ions for 9.3, 9.9, 10.5, and 11.1 eV incident electron energies are shown in Fig. 4 (b). For the higher-energy ions, we have considered the ions with kinetic energy \(\geq\) 1.4 eV. We can see from Fig. 4 (a) that the angular distribution of the low-energy ions has a slight dip at around an 80\({}^{\circ}\) angle with a relatively higher backward count. The angular distribution of the low-energy ions remains more or less similar as the incident electron energy increases; with increasing incident electron energy, the backward count increases, and the dip becomes more prominent. On the other hand, from the angular distributions of the higher-energy O\({}^{-}\) ions [Fig. 4 (b)], we can see that the ion count peaks around 80\({}^{\circ}\) and also has some finite count in the forward and backward directions with a small forward-backward asymmetry. The angular distributions of the higher-energy ions also remain more or less similar as the incident electron energy increases; with increasing incident electron energy, the backward count decreases, and the dip near 150\({}^{\circ}\) becomes more prominent. Our angular distribution agrees reasonably well with the distribution found by Gope _et al._, who also found the distribution peaking at around 90\({}^{\circ}\) with small contributions in the forward and backward directions. [12]
As discussed earlier, the \({}^{1}\)A\({}_{1}\) at 7.04 eV, \({}^{1}\)A\({}_{1}\) at 7.99 eV, \({}^{3}\)B\({}_{2}\) at 8.15 eV, \({}^{3}\)A\({}_{2}\) at 9.24 eV, and \({}^{3}\)B\({}_{2}\) at 9.78 eV are the possible resonances for the prominent resonant peak at 8.5 eV. On the other hand, the \({}^{3}\)A\({}_{2}\) at 10.35 eV, \({}^{1}\)A\({}_{2}\) at 10.50 eV, \({}^{1}\)A\({}_{2}\) at 11.14 eV, \({}^{1}\)B\({}_{2}\) at 11.36 eV, and \({}^{1}\)A\({}_{1}\) at 11.50 eV are the possible resonant states for the small resonant peak at the higher-energy edge of the bigger one. To verify this, we have also fitted the experimentally observed angular distributions with the theoretically predicted ones.
The expression for the angular distribution of the fragment negative ions from the DEA to the diatomic molecule was first given by O'Malley and Taylor [28]. The expression is as follows
\[I(\theta,\phi,k)=\sum_{\mu}\Big|\sum_{l=\mu}^{\infty}a_{l\mu}(k)Y_{l}^{\mu}(\theta,\phi)e^{i\delta_{l}}\Big|^{2} \tag{1}\]
where \(a_{l\mu}(k)\) are energy-dependent expansion coefficients, \(k\) is the incident electron momentum, \(Y_{l}^{\mu}(\theta,\phi)\) are the spherical harmonics, \(\mu\) is the difference in the projection of the angular momentum along the internuclear axis between the neutral molecular state and the negative ion resonance state, given as \(\mu=|\Lambda_{f}-\Lambda_{i}|\), \(l\) is the angular momentum of the incoming electron with values given by \(l\geq|\mu|\), and (\(\theta\), \(\phi\)) are the polar angles of the negative ion fragments with respect to the incident electron direction. Later, Azaria _et al._ [29] extended this to polyatomic molecules and found the angular distribution of the negative ion fragments by averaging over \(\phi\). The expression is as
Figure 3: Kinetic energy distributions of O\({}^{-}\) from DEA to NO\({}_{2}\) at different incident electron energies around the 8.5 eV resonance. The distributions shown are obtained after integrating over entire 2\(\pi\) angles about the electron beam direction and normalized by the peak.
follows:
\[I(\theta)=\frac{1}{2\pi}\int_{0}^{2\pi}|\sum_{l\mu\epsilon}i^{l}e^{i\delta_{l}}a_{ l\mu}^{\epsilon}X_{l\mu}^{\epsilon}(\theta,\phi)|^{2}d\phi \tag{2}\]
where \(X_{l\mu}^{\epsilon}\) are the basis functions for the irreducible representation of the group of the molecule, \(a_{l\mu}^{\epsilon}\) are their amplitude and all other variables are the same as discussed earlier.
To have all the functions defined in the same coordinates and to be able to compare them with the measurements, we need to transform the basis functions (Table 3) and the expression for the partial waves, via the Euler angles, to the dissociation frame of the molecule. The angles we used are (0, \(\beta\), 0) for the basis functions, where \(\beta\) is the half bond angle. For the partial wave representing the electron beam, we used (\(\phi\), \(\theta\), 0), which are the polar angles of the electron momentum vector in the dissociation frame. The expression for the angular distribution of the O\({}^{-}\) ions for the A\({}_{1}\) resonant state symmetry under the axial
Figure 4: (a) Data points represent the experimentally obtained angular distributions of the O\({}^{-}\) ions in the higher energy band for 9.3 eV (Blue) and 9.9 eV (Orange) incident electron energy, respectively. The solid curves represent the best fits with A\({}_{1}\)+B\({}_{2}\) resonant symmetries. While the dashed curves represent the best fits with A\({}_{1}\)+A\({}_{2}\)+B\({}_{2}\) resonant symmetries. (b) Data points represent the experimentally obtained angular distributions of the O\({}^{-}\) ions in the higher energy band for 10.5 eV (Blue) and 11.1 eV (Orange) incident electron energy, respectively. The solid curves represent the best fits with A\({}_{1}\)+B\({}_{2}\) resonant symmetries. In comparison, the dashed curves represent the best fits with A\({}_{1}\)+A\({}_{2}\)+B\({}_{2}\) resonant symmetries. (c) Data points represent the experimentally obtained angular distributions of the O\({}^{-}\) ions in the higher energy band for 8.5 eV incident electron energy. The black solid curve represents the best fit with A\({}_{1}\)+B\({}_{2}\) resonant symmetries. While the solid blue curve represents the best fit with A\({}_{1}\)+A\({}_{2}\)+B\({}_{2}\) resonant symmetry. (d) Angular distributions of the O\({}^{-}\) ions in the lower energy band for 10.5 eV incident electron energy. (e) Angular distributions O\({}^{-}\) ions in the lower energy band around the bigger resonant peak fitted with A\({}_{1}\)+A\({}_{2}\)+B\({}_{2}\) resonant symmetries. (f) Angular distributions O\({}^{-}\) ions in the lower energy band around the smaller resonant peak fitted with A\({}_{1}\)+A\({}_{2}\)+B\({}_{2}\) resonant symmetries.
recoil approximation is as follows:
\[\begin{split} I_{s+p+d}^{A_{1}}(\theta)&=a_{0}^{2}+a_{1}^{2}\left(\sin^{2}\beta\,\sin^{2}\theta+\cos^{2}\beta\,\cos^{2}\theta\right)\\ &\quad+a_{2}^{2}\Big[\frac{9}{16}\left(\sin^{4}\beta\,\sin^{4}\theta+\sin^{2}2\beta\,\cos^{2}2\theta\right)+\frac{1}{2}\left(3\cos^{2}\beta-1\right)^{2}\left(3\cos^{2}\theta-1\right)^{2}\Big]\\ &\quad+4a_{0}a_{1}\cos\beta\,\cos\theta\,\cos\delta_{0}\\ &\quad+2a_{1}a_{2}\Big[\frac{3}{4}\sin\beta\,\sin 2\beta\,\sin\theta\,\sin 2\theta+\frac{1}{2}\cos\beta\,\cos\theta\left(3\cos^{2}\beta-1\right)\left(3\cos^{2}\theta-1\right)\Big]\cos\delta_{1}\\ &\quad+a_{0}a_{2}\left(3\cos^{2}\beta-1\right)\left(3\cos^{2}\theta-1\right)\cos(\delta_{0}+\delta_{1})\end{split} \tag{3}\]
Here, we have considered partial waves up to the d wave (\(l=2\)). The expressions for the angular distributions for the other resonant symmetries are as follows:
\[I_{d}^{A_{2}}(\theta)=\cos^{2}\beta\,\sin^{4}\theta+\sin^{2}\beta\,\sin^{2}2\theta \tag{4}\]
\[\begin{split} I_{p+d}^{B_{1}}(\theta)&=2b_{1}^{2}\sin^{2}\theta+\frac{3}{2}b_{2}^{2}\left[\sin^{2}\beta\,\sin^{4}\theta+\cos^{2}\beta\,\sin^{2}2\theta\right]\\ &\quad+2b_{1}b_{2}\sqrt{3}\,\cos\beta\,\sin\theta\,\sin 2\theta\,\cos\delta_{2}\end{split} \tag{5}\]
\[\begin{split} I_{p+d}^{B_{2}}(\theta)&=2b_{3}^{2}\left(\sin^{2}\beta\,\cos^{2}\theta+\cos^{2}\beta\,\sin^{2}\theta\right)\\ &\quad+\frac{3}{2}b_{4}^{2}\Big[\frac{1}{4}\sin^{2}2\beta\,\sin^{4}\theta+\cos^{2}2\beta\,\sin^{2}2\theta+\frac{1}{2}\sin^{2}2\beta\left(3\cos^{2}\theta-1\right)^{2}\Big]\\ &\quad+2b_{3}b_{4}\left[\sqrt{3}\,\cos\beta\,\cos 2\beta\,\sin\theta\,\sin 2\theta+\sin\beta\,\sin 2\beta\,\cos\theta\left(3\cos^{2}\theta-1\right)\right]\cos\delta_{3}\end{split} \tag{6}\]
The symmetry/symmetries of the resonant state/states involved in DEA can be found from the angular distribution of the fragment negative ions.
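The sketch below illustrates such a fit: Eqs. (3) and (6) are transcribed into Python with \(\beta\) fixed at the half bond angle of NO\({}_{2}\), and a combined A\({}_{1}\)+B\({}_{2}\) model is fitted with SciPy's curve_fit. The angular data here are synthetic stand-ins for the measured distributions, and the amplitudes and phase differences are free fit parameters.

```python
import numpy as np
from scipy.optimize import curve_fit

BETA = np.radians(134.5 / 2.0)   # half bond angle of NO2

def i_a1(theta, a0, a1, a2, d0, d1, b=BETA):
    """Eq. (3): s+p+d angular distribution for an A1 resonance (axial recoil)."""
    c2b, c2t = 3*np.cos(b)**2 - 1, 3*np.cos(theta)**2 - 1
    return (a0**2
            + a1**2*(np.sin(b)**2*np.sin(theta)**2 + np.cos(b)**2*np.cos(theta)**2)
            + a2**2*(9/16*(np.sin(b)**4*np.sin(theta)**4
                           + np.sin(2*b)**2*np.cos(2*theta)**2)
                     + 0.5*c2b**2*c2t**2)
            + 4*a0*a1*np.cos(b)*np.cos(theta)*np.cos(d0)
            + 2*a1*a2*(0.75*np.sin(b)*np.sin(2*b)*np.sin(theta)*np.sin(2*theta)
                       + 0.5*np.cos(b)*np.cos(theta)*c2b*c2t)*np.cos(d1)
            + a0*a2*c2b*c2t*np.cos(d0 + d1))

def i_b2(theta, b3, b4, d3, b=BETA):
    """Eq. (6): p+d angular distribution for a B2 resonance (axial recoil)."""
    c2t = 3*np.cos(theta)**2 - 1
    return (2*b3**2*(np.sin(b)**2*np.cos(theta)**2 + np.cos(b)**2*np.sin(theta)**2)
            + 1.5*b4**2*(0.25*np.sin(2*b)**2*np.sin(theta)**4
                         + np.cos(2*b)**2*np.sin(2*theta)**2
                         + 0.5*np.sin(2*b)**2*c2t**2)
            + 2*b3*b4*(np.sqrt(3)*np.cos(b)*np.cos(2*b)*np.sin(theta)*np.sin(2*theta)
                       + np.sin(b)*np.sin(2*b)*np.cos(theta)*c2t)*np.cos(d3))

def model(theta, a0, a1, a2, d0, d1, b3, b4, d3):
    return i_a1(theta, a0, a1, a2, d0, d1) + i_b2(theta, b3, b4, d3)

# Synthetic stand-in for a measured angular distribution at one electron energy:
theta_data = np.linspace(0, np.pi, 37)
counts = model(theta_data, 1.0, 0.5, 0.8, 0.3, 1.0, 0.6, 0.9, 0.5)
popt, _ = curve_fit(model, theta_data, counts,
                    p0=[1, 0.5, 0.5, 0, 0, 0.5, 0.5, 0])
print(np.round(popt, 3))
```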
In Fig. 4 (a), we have fitted the angular distributions of the higher-energy O\({}^{-}\) ions for 9.3 and 9.9 eV electron energy on the more prominent resonant peak. As our theoretical calculation suggests that A\({}_{1}\), A\({}_{2}\), and B\({}_{2}\) resonant symmetries are present here and might be responsible for this peak, we have fitted the distributions with these resonant symmetries (dashed curves). From our fitting, we have noticed that the contribution of the A\({}_{2}\) resonance is very small or negligible. Therefore, we have also fitted these distributions with the combination of A\({}_{1}\) and B\({}_{2}\) resonant symmetries (solid curves). The A\({}_{1}\) + B\({}_{2}\) fits reproduce the nature of the distributions properly. This suggests that the higher-energy O\({}^{-}\) ions are produced from the A\({}_{1}\) and B\({}_{2}\) resonances with negligible or no contribution from the A\({}_{2}\) resonance. For the lower-energy ions, we have also fitted the angular distributions [Fig. 4 (c)] with both the combination of A\({}_{1}\) + A\({}_{2}\) + B\({}_{2}\) (solid line) and A\({}_{1}\) + B\({}_{2}\) (dashed line) symmetries. From this fitting, we can see that the combination of A\({}_{1}\) + B\({}_{2}\) resonant symmetries cannot reproduce the distribution properly, and we need to consider the contribution from the A\({}_{2}\) resonance as well. On the other hand, the low-energy ions are strongly affected by the rotation of the TNI.[30] Therefore, the experimentally obtained angular distribution of the low-energy ions may differ considerably from the actual distribution, and it may not be appropriate to fit the angular distributions of the low-energy ions using these expressions and draw conclusions from that.[31]
In Fig. 4 (b), we have fitted the angular distributions of the higher-energy O\({}^{-}\) ions for 10.5 and 11.1 eV electron energy on the smaller resonant peak. As our theoretical calculation suggests that A\({}_{1}\), A\({}_{2}\), and B\({}_{2}\) resonant symmetries are present here and might be responsible for this peak, we have fitted the distributions with these resonant symmetries (dashed curves). Here also, we have noticed that the contribution of the A\({}_{2}\) resonance is very small or negligible. Therefore, we have also fitted these distributions with A\({}_{1}\) + B\({}_{2}\) resonant symmetries (solid curves). Here also, the A\({}_{1}\) + B\({}_{2}\) fits reproduce the nature of the distributions. Therefore, the A\({}_{2}\) resonance may
\begin{table}
\begin{tabular}{|c|c c c c|c|} C\({}_{2v}\) & E & C\({}_{2}\) & \(\sigma_{v}\) & \(\sigma^{\prime}_{v}\) & Basis functions \\ \hline A\({}_{1}\) & +1 & +1 & +1 & +1 & Y\({}_{l}^{m}\)+Y\({}_{l}^{-m}\) ; m=even \\ A\({}_{2}\) & +1 & +1 & -1 & -1 & Y\({}_{l}^{m}\)-Y\({}_{l}^{-m}\) ; m=even \\ B\({}_{1}\) & +1 & -1 & +1 & -1 & Y\({}_{l}^{m}\)+Y\({}_{l}^{-m}\) ; m=odd \\ B\({}_{2}\) & +1 & -1 & -1 & +1 & Y\({}_{l}^{m}\)-Y\({}_{l}^{-m}\) ; m=odd \\ \end{tabular}
\end{table}
Table 3: Character table of C\({}_{2v}\) point group and basis functions.
have no contribution to the formation of the higher-energy ions at this resonance. We have fitted the angular distributions [Fig. 4 (d)] with both the combination of A\({}_{1}\) + A\({}_{2}\) + B\({}_{2}\) (blue solid line) and A\({}_{1}\) + B\({}_{2}\) (black solid line) symmetries. From this fitting, we can see that the combination of A\({}_{1}\) + B\({}_{2}\) resonant symmetries cannot reproduce the distribution properly, and we need to consider the contribution from the A\({}_{2}\) resonance as well. Here also, we cannot be wholly sure about this, since the axial recoil approximation is violated most of the time for the lower-energy ions.[31]
## 5 Conclusion
The angular distribution of the low-energy ions remains more or less similar as the incident electron energy increases; with increasing incident electron energy, the backward count increases, and the dip becomes more prominent. On the other hand, the angular distributions of the higher-energy ions also remain more or less similar as the incident electron energy increases; with increasing incident electron energy, the backward count decreases, and the dip near 150\({}^{\circ}\) becomes more prominent. Our angular distribution agrees reasonably well with the distribution found by Gope _et al._ We have thus developed a quantitative understanding of DEA to nitrogen dioxide molecules for the resonant peaks at 8.5 and 11 eV. Our experimental findings are supported by the theoretical calculations.
## Acknowledgements
A.P. sincerely appreciates the "Council of Scientific and Industrial Research (CSIR)" for the financial assistance. D.N. gratefully acknowledges the financial support from the "Science and Engineering Research Board (SERB)" under Project No. "CRG/2019/000872."
|
2308.11594 | Quantization-based Optimization with Perspective of Quantum Mechanics | Statistical and stochastic analysis based on thermodynamics has been the main
analysis framework for stochastic global optimization. Recently, with the appearance
of quantum annealing or quantum tunneling algorithms for global optimization, we
require a new research framework for global optimization algorithms. In this
paper, we provide the analysis for quantization-based optimization based on the
Schr\"odinger equation to reveal what property in quantum mechanics enables
global optimization. We present that the tunneling effect derived from the
Schr\"odinger equation in quantization-based optimization enables to escape of
a local minimum. Additionally, we confirm that this tunneling effect is the
same property included in quantum mechanics-based global optimization.
Experiments with standard multi-modal benchmark functions demonstrate that the
proposed analysis is valid. | Jinwuk Seok, Changsik Cho | 2023-08-20T05:03:31Z | http://arxiv.org/abs/2308.11594v3 | # Quantization-based Optimization with Perspective of Quantum Mechanics
###### Abstract
Statistical and stochastic analysis based on thermodynamics has been the main analysis framework for stochastic global optimization. Recently, with the appearance of quantum annealing or quantum tunneling algorithms for global optimization, we require a new research framework for global optimization algorithms. In this paper, we provide an analysis of quantization-based optimization based on the Schrodinger equation to reveal what property in quantum mechanics enables global optimization. We present that the tunneling effect derived from the Schrodinger equation in quantization-based optimization enables escape from a local minimum. Additionally, we confirm that this tunneling effect is the same property included in quantum mechanics-based global optimization. Experiments with standard multi-modal benchmark functions demonstrate that the proposed analysis is valid.
## 1 Introduction
Stochastic global optimization algorithms such as Simulated Annealing (SA) have demonstrated outstanding performance in combinatorial optimization problems [2, 3, 6]. However, when the complexity and size of a problem are significantly large, as in a Traveling Salesman Problem (TSP) involving many cities (beyond 100 cities), such conventional algorithms show limited optimization performance [8]. Recently, a new optimization algorithm, which applies quantization to the range space of an objective function, demonstrated exceptional optimization performance on such intricate problems [9]. Nevertheless, the dynamics of quantization-based optimization have been analyzed within the framework of conventional stochastic global optimization, so it is difficult to identify the core component responsible for such superiority. In this paper, we present the transformation from the Fokker-Planck equation, which describes the dynamics of the state transition probability in quantization-based and stochastic global optimization, to the Schrodinger equation for an analysis based on quantum mechanics (Hamacher [1]). In addition, from experiments comparing the optimization performance against SA and Quantum Annealing (QA) (Kadowaki and Nishimori [5], Santoro and Tosatti [7]), we demonstrate the validity of the quantization-based optimization algorithm on general continuous objective functions.
## 2 Fundamental Definitions and Assumptions
First, we consider an objective function \(f:\mathbf{R}^{n}\rightarrow\mathbf{R}^{+}\) with the unique global optimum \(x^{*}\) such that \(f(x^{*})<f(x)\), for all \(x,x^{*}\in\mathbf{R}^{n}\) and \(x\neq x^{*}\). Further, we establish the following definitions and assumptions before beginning our discussion.
**Definition 1**: _For \(f\in\mathbf{R}\), we define the quantization of \(f\) as follows:_
\[f^{Q}\triangleq\frac{1}{Q_{p}}\lfloor Q_{p}\cdot(f+0.5\cdot Q_{p}^{-1})\rfloor= \frac{1}{Q_{p}}(Q_{p}\cdot f+\varepsilon)=f+\varepsilon Q_{p}^{-1} \tag{1}\]
_, where \(\lfloor f\rfloor\in\mathbf{Z}\) is the floor function such that \(\lfloor f\rfloor=\max\{y\in\mathbf{Z}\mid y\leq f\}\), \(\varepsilon\in\mathbf{R}[-1/2,1/2]\) is the quantization error, and \(f^{Q}\in\mathbf{Q}\) is the quantization of \(f\)._
In Definition 1, we establish the quantization parameter \(Q_{p}\in\mathbf{Q}^{+}\) to be a monotone increasing function \(Q_{p}:\mathbf{R}^{++}\mapsto\mathbf{Z}^{+}\) such that
\[Q_{p}(t)=\eta\cdot b^{\bar{h}(t)} \tag{2}\]
, where \(\eta\in\mathbf{Q}^{++}\) denotes the fixed constant parameter of the quantization parameter, \(b\) denotes the base, and \(\bar{h}:\mathbf{R}^{++}\mapsto\mathbf{Z}^{+}\) denotes the power function such that \(\bar{h}(t)\uparrow\infty\;\) as \(\;t\rightarrow\infty\), for all \(t\in\mathbf{R}^{++}\). We assume that the quantization error defined in (1) follows a uniform distribution, according to the White Noise Hypothesis (WNH) [4]. This statistical assumption on the quantization error leads to the mean and the variance provided by the following proposition:
**Proposition 2**: _If the quantization error \(\varepsilon_{t}\in\mathbf{R}^{n}\) satisfies the WNH, the mean and the variance of the quantization error at \(t>0\) are_
\[\forall\varepsilon_{t}^{q}\in\mathbf{R},\quad\mathbb{E}_{\mathbf{R}}Q_{p}(t) \varepsilon_{t}^{q}=0,\quad\mathbb{E}_{\mathbf{R}}Q_{p}^{-2}(t)\varepsilon_{t }^{q}{}^{2}=Q_{p}^{-2}(t)\cdot\mathbb{E}_{\mathbf{R}}\varepsilon_{t}^{q}{}^{ 2}=\frac{1}{12\cdot Q_{p}^{2}(t)} \tag{3}\]
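The moments in (3) are easy to verify empirically; the sketch below applies the mid-tread quantizer of (1) to random inputs and checks that the scaled error has zero mean and variance \(1/12\). The input range and quantization parameter are arbitrary choices for illustration.

```python
import numpy as np

def quantize(f, q_p):
    """Mid-tread quantizer of Eq. (1): f^Q = floor(q_p * f + 1/2) / q_p."""
    return np.floor(q_p * f + 0.5) / q_p

rng = np.random.default_rng(0)
q_p = 2.0 ** 8
f = rng.uniform(-10.0, 10.0, 1_000_000)
err = quantize(f, q_p) - f            # equals epsilon * q_p^{-1} in Eq. (1)
eps = q_p * err                       # recovered quantization error epsilon

print(f"E[eps]   = {eps.mean():+.5f}  (expected 0)")
print(f"E[eps^2] = {eps.var():.5f}  (expected 1/12 = {1/12:.5f})")
```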
Furthermore, we establish the notations of vector-valued derivatives as follows:
**Definition 3**: _Suppose that \(\{\mathbf{e}_{k}\}_{k=1}^{n}\) denotes the set of basis vectors on an Euclidean vector space. We define the gradient, the divergence, and the Laplacian operation such that_
\[\begin{split}&\nabla_{\mathbf{x}}\triangleq\sum_{k}\frac{\partial}{ \partial x_{k}}\mathbf{e}_{k}\in\mathbf{R}^{n}\quad\nabla\cdot f(\mathbf{x},\cdot) \triangleq\sum_{j}\frac{\partial f}{\partial x_{j}}\in\mathbf{R},\quad\because f :\mathbf{R}^{n}\rightarrow\mathbf{R}\\ &\Delta\triangleq\nabla\cdot\nabla=\sum_{k}\frac{\partial}{ \partial x_{k}}(\sum_{j}\frac{\partial}{\partial x_{j}})=\sum_{k}\sum_{j} \frac{\partial^{2}}{\partial x_{k}\partial x_{j}}\in\mathbf{R}.\end{split} \tag{4}\]
Holding the above definition and assumption for the quantization error, we can establish the stochastic differential equation for the quantization-based optimization algorithm according to [8], as follows:
**Proposition 4**: _For a given objective function \(f:\mathbf{R}^{n}\rightarrow\mathbf{R}^{+}\), suppose that there exist the quantized objective functions \(f^{Q}(\mathbf{x}_{t}),\;f^{Q}(\mathbf{x}_{t+1})\) evaluated from (1), at a current state \(\mathbf{x}_{t}\in\mathbf{R}^{n}\) and the following state \(\mathbf{x}_{t+1}\in\mathbf{R}^{n}\) such that \(f^{Q}(\mathbf{x}_{t})\geq f^{Q}(\mathbf{x}_{t+1})\), for all \(\mathbf{x}_{t+1}\neq\mathbf{x}_{t}\). We can obtain the stochastic differential equation of the state transition as follows:_
\[d\mathbf{X}_{t}=-\nabla_{\mathbf{x}}f(\mathbf{X}_{t})dt+\sqrt{C_{q}}\cdot Q_{p}^{-1}(t)d \mathbf{W}_{t} \tag{5}\]
_, where \(\mathbf{W}_{t}\in\mathbf{R}^{n}\) denotes a vector-valued standard Wiener process, which has a zero mean and variance with one, \(\mathbf{X}_{t}\in\mathbf{R}^{n}\) denotes a random variable corresponding to \(\mathbf{x}_{t}\)._
Given the dynamics of the algorithm as (5), we can obtain the corresponding Fokker-Planck equation such that
\[\partial_{t}\rho(\mathbf{x},t)=\nabla\cdot(\nabla_{\mathbf{x}}f(\mathbf{x})\rho(\mathbf{x},t))+ \frac{1}{2}C_{q}Q_{p}^{-2}(t)\Delta\rho(\mathbf{x},t) \tag{6}\]
, where the state \((\mathbf{x},t)\) is used instead of the random variable \(\mathbf{X}_{t}\) at time \(t\), and \(\rho(\mathbf{x},t):\mathbf{R}^{n}\times\mathbf{R}\rightarrow\mathbf{R}[0,1]\) denotes the probability density function of the random variable \(\mathbf{X}_{t}\).
## 3 Derivation of the Schrodinger Equation for Quantization-based Optimization
### Derivation of the Schrodinger Equation from the Fokker-Planck Equation for Quantization-based Optimization
For convenience, let a diffusion parameter \(Q(t):\mathbf{R}^{+}\rightarrow\mathbf{R}\) be such that \(Q(t)\triangleq C_{q}Q_{p}^{-2}(t)\). Considering the logarithm of the probability density with respect to \(\mathbf{x}\), i.e., \(\ln\rho(\mathbf{x},t)\), we can calculate its gradient as follows:
\[\nabla_{\mathbf{x}}\ln\rho(\mathbf{x},t)=\frac{\partial}{\partial\rho}\ln\rho(\mathbf{x},t)\cdot\sum_{k}\frac{\partial\rho}{\partial x_{k}}\mathbf{e}_{k}=\frac{1}{\rho( \mathbf{x},t)}\nabla_{\mathbf{x}}\rho(\mathbf{x},t). \tag{7}\]
In addition, we establish a function \(\mu(\mathbf{x},t):\mathbf{R}^{n}\times\mathbf{R}\rightarrow\mathbf{R}^{n}\) such that
\[\mu(\mathbf{x},t)=\nabla_{\mathbf{x}}f(\mathbf{x},t)+Q(t)\nabla_{\mathbf{x}}\ln\rho(\mathbf{x},t) \tag{8}\]
Substituting (7) into (8), we get
\[\mu(\mathbf{x},t)=\nabla_{\mathbf{x}}f(\mathbf{x})+Q(t)\frac{1}{\rho(\mathbf{x},t)}\nabla_{ \mathbf{x}}\rho(\mathbf{x},t)\Rightarrow\nabla_{\mathbf{x}}f(\mathbf{x})\rho(\mathbf{x},t)=\mu( \mathbf{x},t)\rho(\mathbf{x},t)-Q(t)\nabla_{\mathbf{x}}\rho(\mathbf{x},t) \tag{9}\]
Substituting (9) into (6) leads to
\[\partial_{t}\rho(\mathbf{x},t)=\nabla\cdot\mu(\mathbf{x},t)\rho(\mathbf{x},t)-\frac{1}{2} Q(t)\Delta\rho(\mathbf{x},t). \tag{10}\]
Adding (10) to (6) and dividing by two, we obtain the following equation:
\[\partial_{t}\rho(\mathbf{x},t)=-\nabla\cdot v(\mathbf{x},t)\rho(\mathbf{x},t) \tag{11}\]
, where the function \(v(\mathbf{x},t):\mathbf{R}^{n}\times\mathbf{R}\rightarrow\mathbf{R}^{n}\) is as follows:
\[v(\mathbf{x},t)=-\frac{1}{2}(\nabla_{\mathbf{x}}f(\mathbf{x})+\mu(\mathbf{x},t))=-\nabla_{\bm {x}}f(\mathbf{x})-\frac{1}{2}Q(t)\nabla_{\mathbf{x}}\ln\rho(\mathbf{x},t). \tag{12}\]
To verify the correspondence with the Schrodinger equation, we define a quantum state function \(\psi:\mathbf{R}^{n}\times\mathbf{R}\rightarrow\mathbf{C}\) such that
\[\rho(\mathbf{x},t)\triangleq|\psi(\mathbf{x},t)|^{2}=\psi(\mathbf{x},t)\cdot\psi^{*}(\bm {x},t) \tag{13}\]
and the current velocity of the quantum probability current \(v:\mathbf{R}^{n}\times\mathbf{R}\rightarrow\mathbf{C}\) such that
\[v(\mathbf{x},t)=\frac{\hbar}{im}(\nabla_{\mathbf{x}}\ln\psi(\mathbf{x},t)-\nabla_{\mathbf{x}} \ln\psi^{*}(\mathbf{x},t)) \tag{14}\]
, where \(\hbar\) denotes the Dirac constant such that \(\hbar=h/2\pi\) for the Planck constant \(h\), \(i\) denotes the imaginary unit, \(m\) denotes the mass of the particle described by the state \(\mathbf{x}\), and \(\psi^{*}\) is the conjugate function of \(\psi\).
Substituting (12) and (13) into (9), we obtain
\[\begin{split}&\partial_{t}\rho(\mathbf{x},t)=-\nabla\cdot v(\mathbf{x},t) \rho(\mathbf{x},t)\\ &\Rightarrow\partial_{t}\psi^{2}(\mathbf{x},t)=-\frac{\hbar}{im} \nabla\cdot(\nabla_{\mathbf{x}}\ln\psi(\mathbf{x},t)-\nabla_{\mathbf{x}}\ln\psi^{*}(\mathbf{x },t))\psi^{2}(\mathbf{x},t)\\ &\Rightarrow 2\psi(\mathbf{x},t)\partial_{t}\psi(\mathbf{x},t)=-\frac{\hbar}{im} \nabla\cdot\left(\frac{1}{\psi(\mathbf{x},t)}\nabla_{\mathbf{x}}\psi(\mathbf{x},t)-\frac{ 1}{\psi^{*}(\mathbf{x},t)}\nabla_{\mathbf{x}}\psi^{*}(\mathbf{x},t)\right)\psi^{2}(\mathbf{x },t)\\ &\Rightarrow\partial_{t}\psi(\mathbf{x},t)=-\frac{\hbar}{2im}\nabla \cdot\left(\frac{1}{\psi(\mathbf{x},t)}\nabla_{\mathbf{x}}\psi(\mathbf{x},t)-\frac{1}{ \psi^{*}(\mathbf{x},t)}\nabla_{\mathbf{x}}\psi^{*}(\mathbf{x},t)\right)\psi(\mathbf{x},t)\\ &\Rightarrow\partial_{t}\psi(\mathbf{x},t)=-\frac{\hbar}{2im}\nabla \cdot\left(\nabla_{\mathbf{x}}\psi(\mathbf{x},t)-\frac{\nabla_{\mathbf{x}}\psi^{*}(\mathbf{x}, t)}{\psi^{*}(\mathbf{x},t)}\psi(\mathbf{x},t)\right)\\ &\Rightarrow i\hbar\partial_{t}\psi(\mathbf{x},t)=-\frac{\hbar^{2}}{2 m}\nabla\cdot\left(\nabla_{\mathbf{x}}\psi(\mathbf{x},t)-\frac{\nabla_{\mathbf{x}}\psi^{*}(\mathbf{x}, t)}{\psi^{*}(\mathbf{x},t)}\psi(\mathbf{x},t)\right).\end{split} \tag{15}\]
Consequently, if we let a function \(V:\mathbf{R}^{n}\times\mathbf{R}\rightarrow\mathbf{C}\) such that \(V(\mathbf{x},t)\triangleq\frac{\hbar^{2}}{2m}\nabla\cdot\frac{\nabla_{\mathbf{x}}\psi ^{*}(\mathbf{x},t)}{\psi^{*}(\mathbf{x},t)}\), we can obtain the following Schrodinger equation:
\[i\hbar\partial_{t}\psi(\mathbf{x},t)=-\frac{\hbar^{2}}{2m}\Delta\psi(\mathbf{x},t)+V( \mathbf{x},t)\psi(\mathbf{x},t). \tag{16}\]
### Quantization Parameter in Quantization-based Optimization from the Perspective of the Schrodinger Equation
In this section, we derive the correspondence of the quantization parameter to the Schrodinger equation (16).
From the equality of the current velocity \(v(\mathbf{x},t)\) in equations (12) and (14), we can establish the following equation:
\[v(\mathbf{x},t)=\frac{\hbar}{im}(\nabla_{\mathbf{x}}\ln\psi(\mathbf{x},t)-\nabla_{\mathbf{x}} \ln\psi^{*}(\mathbf{x},t))=-\nabla_{\mathbf{x}}f(\mathbf{x})-\frac{1}{2}Q(t)\nabla_{\mathbf{x }}\ln\rho(\mathbf{x},t). \tag{17}\]
By the definition of the quantum mechanical probability density in (13), it leads
\[\nabla_{\mathbf{x}}\ln\rho(\mathbf{x},t)=\nabla_{\mathbf{x}}\ln\left(\psi(\mathbf{x},t)\,\psi^{*}(\mathbf{x},t)\right)=\nabla_{\mathbf{x}}\left(\ln\psi(\mathbf{x},t)+\ln\psi^{*}(\mathbf{x},t)\right). \tag{18}\]
Substituting (18) into (17), we get
\[\begin{split}\frac{\hbar}{im}(\nabla_{\mathbf{x}}\ln\psi(\mathbf{x},t)- \nabla_{\mathbf{x}}\ln\psi^{*}(\mathbf{x},t))&=-\nabla_{\mathbf{x}}f(\mathbf{x })-\frac{1}{2}Q(t)\nabla_{\mathbf{x}}(\ln\psi(\mathbf{x},t)+\ln\psi^{*}(\mathbf{x},t))\\ &=-\nabla_{\mathbf{x}}f(\mathbf{x})-\frac{1}{2}Q(t)(\nabla_{\mathbf{x}}\ln \psi(\mathbf{x},t)+\nabla_{\mathbf{x}}\ln\psi^{*}(\mathbf{x},t))\end{split} \tag{19}\]
Rearranging the terms by transposition, we obtain
\[\begin{split}-\nabla_{\mathbf{x}}f(\mathbf{x})&=\frac{\hbar }{im}\left(\nabla_{\mathbf{x}}\ln\psi(\mathbf{x},t)-\nabla_{\mathbf{x}}\ln\psi^{*}(\mathbf{x}, t)\right)+\frac{1}{2}Q(t)\left(\nabla_{\mathbf{x}}\ln\psi(\mathbf{x},t)+\nabla_{\mathbf{x}} \ln\psi^{*}(\mathbf{x},t)\right)\\ &=Q(t)Re(\nabla_{\mathbf{x}}\ln\psi(\mathbf{x},t))+\frac{2\hbar}{m}Im( \nabla_{\mathbf{x}}\ln\psi(\mathbf{x},t))\end{split} \tag{20}\]
Since \(\nabla_{\mathbf{x}}f(\mathbf{x})\in\mathbf{R}^{n}\) is real-valued, equation (20) is consistent.
From equation (20), we note the following:
* For a purely deterministic case, i.e., \(Q(t)=0\), the gradient of the objective function is proportional to the imaginary part of the gradient of the log-scaled quantum state function \(\psi\). Thereby, we can relate the deterministic gradient to the variation in the frequency of a particle. From the viewpoint of numerical analysis, we can regard such a frequency variation based on quantum mechanics as a quantized operation, so that we can describe the deterministic variation of the objective function as the variation of a fundamental power series.
* For a stochastic case, i.e., \(Q(t)>0\), the gradient of the objective function contains an additional effect of a photon injection. Further, according to quantum mechanics, we regard \(i\hbar\partial_{t}\) as the total energy \(E(\mathbf{x},t)\), and we rewrite (16) in the following familiar formulation: \[\begin{split} E(\mathbf{x},t)\psi(\mathbf{x},t)=-\frac{\hbar^{2}}{2m} \Delta\psi(\mathbf{x},t)+V(\mathbf{x},t)\psi(\mathbf{x},t)\\ \Rightarrow\Delta\psi(\mathbf{x},t)+\frac{2m}{\hbar^{2}}\psi(\mathbf{x}, t)(E-V)(\mathbf{x},t)=0.\end{split}\] (21) In (21), if we define an energy difference \(U:\mathbf{R}^{n}\times\mathbf{R}\rightarrow\mathbf{R}\) such that \(U=V-E\) with \(E<V\) for all \(\mathbf{x}\in\mathbf{R}^{n}\) and \(t>0\), we can write (21) as follows: \[\Delta\psi(\mathbf{x},t)-\frac{2m}{\hbar^{2}}\psi(\mathbf{x},t)U(\mathbf{x},t)=0.\] (22) The solution of (22) reveals that the probability of the state existing beyond the energy hill \(V\) is non-zero.
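As a worked one-dimensional instance of (22): for a barrier of constant height \(U\) on \(0\leq x\leq a\), the bounded solution is exponentially decaying rather than vanishing,

\[\psi^{\prime\prime}(x)=\frac{2mU}{\hbar^{2}}\,\psi(x)\;\Rightarrow\;\psi(x)=\psi(0)\,e^{-\kappa x},\qquad\kappa=\frac{\sqrt{2mU}}{\hbar},\]

so the probability density at the far side of the barrier, \(|\psi(a)|^{2}=|\psi(0)|^{2}e^{-2\kappa a}\), is strictly positive for any finite barrier width \(a\) and height \(U\).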
In other words, when the current state sits in a local minimum surrounded by an energy hill \(V\), the stochastic enforcement \(Re(\nabla_{\mathbf{x}}\ln\psi(\mathbf{x},t))\) controlled by the quantization parameter enables the current state to move to another state over the energy hill. This phenomenon is known as the "tunneling effect."
Accordingly, we note that global optimization techniques such as simulated annealing, quantum annealing, and quantization-based optimization exhibit quantum tunneling, and that hill-climbing based on a noisy vector exhibits the same properties as quantum tunneling.
## 4 Numerical Experiments
To verify the validity of the analysis, we perform numerical experiments on optimization problems for multi-modal functions. The benchmark functions we use are standard test functions that have been employed for optimization algorithms for years. In addition, all the benchmark functions contain many local minima across the domain space, so finding the global optimum point in a finite domain is difficult for a conventional deterministic algorithm such as a gradient descent-based optimizer. However, as stated in the previous section, if stochastic optimization algorithms, including the quantization-based optimization algorithm, can find the global minimum of the benchmark functions, it shows that our quantum mechanics-based analysis is valid.
We employ Simulated Annealing (SA), Quantum Annealing (QA), and the quantization-based optimization algorithm as the stochastic global optimization algorithms for the experiments. SA,
exploited for various combinatorial optimization problems such as the Travelling Salesman Problem (TSP), or Knap-Sack Problem is the representative stochastic global optimization algorithm. QA is compatible with combinatorial optimization problems in a similar manner to SA. In particular, physicists have analyzed the optimization dynamics of QA from the viewpoint of quantum mechanics. Quantization-based optimization, which dynamics we analyzed with the perspective of quantum mechanics in this paper, is another type of stochastic optimization for combinatorial optimization. Even though SA, QA, and quantization-based optimization are not generally compatible with an optimization problem on a multi-dimensional continuous domain, SA represents sufficient optimization performance on a low-dimensional vector space.
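For concreteness, a minimal Metropolis-style simulated annealing loop on the (shifted) Drop-Wave function of Table 1 is sketched below; the step size, cooling schedule, starting point, and iteration budget are illustrative choices of ours, not the experimental settings used for Table 2.

```python
import math
import random

def drop_wave(x, y):
    """Shifted Drop-Wave benchmark from Table 1; global minimum 0 at (0, 0)."""
    r2 = x * x + y * y
    return 1.0 - (1.0 + math.cos(12.0 * math.sqrt(r2))) / (0.5 * r2 + 2.0)

def simulated_annealing(f, x0, y0, temp=2.0, alpha=0.999, n_iter=20000):
    """Plain Metropolis SA: uphill moves are accepted with probability
    exp(-df/T), the stochastic hill-climbing analogue of tunneling."""
    x, y, fx = x0, y0, f(x0, y0)
    best = (x, y, fx)
    for _ in range(n_iter):
        xn, yn = x + random.gauss(0.0, 0.3), y + random.gauss(0.0, 0.3)
        fn = f(xn, yn)
        if fn < fx or random.random() < math.exp(-(fn - fx) / temp):
            x, y, fx = xn, yn, fn
            if fx < best[2]:
                best = (x, y, fx)
        temp *= alpha  # geometric cooling
    return best

print(simulated_annealing(drop_wave, 4.0, -4.0))
```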
Table 2 presents the experimental results. For the Salomon, Drop-Wave, and Schaffer N2 benchmark functions, all tested algorithms find the global optimum. These results illustrate that the stochastic and quantum-mechanics-based optimization algorithms share the optimization dynamics analyzed with quantum mechanics. Further, quantization-based optimization finds the global minimum with fewer iterations than SA and QA. This result suggests that quantization-based optimization possesses an additional property besides the hill-climbing or tunneling effect in optimization.
For the Xin-She Yang N4 benchmark function, the experimental results show a significantly different aspect. SA and quantization-based optimization fall into a local minimum whose value is around 50% higher than the global minimum. However, the local minimum of this benchmark function is located in a smoother region, whereas the global minimum lies in a very sharp area. This result shows that SA and quantization-based algorithms search for a minimum point with a positive Hessian of relatively small matrix norm. In practice, an algorithm that finds such a minimum point yields better performance on a sparse dataset, whereas an algorithm that finds a sharper minimum point suffers from an over-fitting problem. Finally, in contrast to both algorithms, the QA algorithm fails to find a feasible minimum in the experiment. We suppose
\begin{table}
\begin{tabular}{c c} \hline Function Name & Equation \\ \hline Xin-She Yang N4 & \(f(x)=2.0+\left(\sum_{i=1}^{d}\sin^{2}(x_{i})-\exp\left(-\sum_{i=1}^{d}x_{i}^{2}\right)\right)\exp\left(-\sum_{i=1}^{d}\sin^{2}\sqrt{|x_{i}|}\right)\) \\ Salomon & \(f(x)=1-\cos\left(2\pi\sqrt{\sum_{i=1}^{d}x_{i}^{2}}\right)+0.1\sqrt{\sum_{i=1}^{d}x_{i}^{2}}\) \\ Drop-Wave & \(f(x)=1-\frac{1+\cos\left(12\sqrt{x^{2}+y^{2}}\right)}{0.5(x^{2}+y^{2})+2}\) \\ Schaffer N2 & \(f(x)=0.5+\frac{\sin^{2}(x^{2}-y^{2})-0.5}{\left(1+0.001(x^{2}+y^{2})\right)^{2}}\) \\ \hline \end{tabular}
\end{table}
Table 1: Standard Benchmark Functions
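The remaining benchmark functions of Table 1 can be transcribed directly; the NumPy sketch below assumes the shifted forms as written in the table (so the global minima sit at the origin), which is our reading of the paper's conventions.

```python
import numpy as np

def xin_she_yang_n4(x):
    """Shifted Xin-She Yang N4 for a d-dimensional array x; minimum 1.0 at 0."""
    s2 = np.sum(np.sin(x) ** 2)
    return 2.0 + (s2 - np.exp(-np.sum(x ** 2))) * np.exp(-np.sum(np.sin(np.sqrt(np.abs(x))) ** 2))

def salomon(x):
    """Salomon function; minimum 0.0 at the origin."""
    r = np.sqrt(np.sum(x ** 2))
    return 1.0 - np.cos(2.0 * np.pi * r) + 0.1 * r

def schaffer_n2(x, y):
    """Schaffer N2 function; minimum 0.0 at the origin."""
    return 0.5 + (np.sin(x ** 2 - y ** 2) ** 2 - 0.5) / (1.0 + 0.001 * (x ** 2 + y ** 2)) ** 2

print(xin_she_yang_n4(np.zeros(2)), salomon(np.zeros(2)), schaffer_n2(0.0, 0.0))
```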
Figure 1: Shape of Benchmark functions
the reason QA fails is that the Xin-She Yang N4 benchmark function includes an energy barrier too thick for the tunneling effect to carry the search to a minimum point.
## 5 Conclusion
We presented an analysis of quantization-based optimization from the perspective of quantum mechanics in this paper. The presented analysis shows that stochastic optimization algorithms, such as quantization-based optimization, include the tunneling effect when searching for the global minimum. This analysis illustrates that the tunneling effect in quantum-mechanical optimization is equivalent to the hill-climbing property of stochastic algorithms. Finally, in future work, we will investigate the hidden dynamics behind why quantization-based optimization finds the global minimum with fewer iterations.
## Acknowledgment
This work was supported by the Institute for Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korean government (MSIT) (No. 2021-0-00766, Development of Integrated Development Framework that supports Automatic Neural Network Generation and Deployment optimized for Runtime Environment).
\begin{table}
\begin{tabular}{c l c c c} \hline \hline Function & Criterion & SA & QA & Quantization \\ \hline \multirow{2}{*}{Xin-She Yang N4} & Iteration & 6420 & 17* & 3144 \\ & Improvement ratio & 54.57\% & 35.22\% & 54.57\% \\ \multirow{2}{*}{Salomon} & Iteration & 1312 & 7092 & 1727 \\ & Improvement ratio & 99.99\% & 99.99\% & 100.0\% \\ \multirow{2}{*}{Drop-Wave} & Iteration & 907 & 3311 & 254 \\ & Improvement ratio & 100.0\% & 100.0\% & 100.0\% \\ \multirow{2}{*}{Schaffer N2} & Iteration & 7609 & 9657 & 2073 \\ & Improvement ratio & 100.0\% & 100.0\% & 100.0\% \\ \hline \hline \end{tabular}
\end{table}
Table 2: Simulation results of standard nonlinear optimization functions. SA denotes Simulated Annealing, QA denotes Quantum Annealing, and Quantization represents Quantization-based optimization algorithm |
2301.12596 | Learning to Speak from Text: Zero-Shot Multilingual Text-to-Speech with
Unsupervised Text Pretraining | While neural text-to-speech (TTS) has achieved human-like natural synthetic
speech, multilingual TTS systems are limited to resource-rich languages due to
the need for paired text and studio-quality audio data. This paper proposes a
method for zero-shot multilingual TTS using text-only data for the target
language. The use of text-only data allows the development of TTS systems for
low-resource languages for which only textual resources are available, making
TTS accessible to thousands of languages. Inspired by the strong cross-lingual
transferability of multilingual language models, our framework first performs
masked language model pretraining with multilingual text-only data. Then we
train this model with a paired data in a supervised manner, while freezing a
language-aware embedding layer. This allows inference even for languages not
included in the paired data but present in the text-only data. Evaluation
results demonstrate highly intelligible zero-shot TTS with a character error
rate of less than 12% for an unseen language. | Takaaki Saeki, Soumi Maiti, Xinjian Li, Shinji Watanabe, Shinnosuke Takamichi, Hiroshi Saruwatari | 2023-01-30T00:53:50Z | http://arxiv.org/abs/2301.12596v3 | Learning to Speak from Text: Zero-Shot Multilingual Text-to-Speech with Unsupervised Text Pretraining
###### Abstract
While neural text-to-speech (TTS) has achieved human-like natural synthetic speech, multilingual TTS systems are limited to resource-rich languages due to the need for paired text and studio-quality audio data. This paper proposes a method for zero-shot multilingual TTS using text-only data for the target language. The use of text-only data allows the development of TTS systems for low-resource languages for which only textual resources are available, making TTS accessible to thousands of languages. Inspired by the strong cross-lingual transferability of multilingual language models, our framework first performs masked language model pretraining with multilingual text-only data. Then we train this model with paired data in a supervised manner, while freezing a language-aware embedding layer. This allows inference even for languages not included in the paired data but present in the text-only data. Evaluation results demonstrate highly intelligible zero-shot TTS with a character error rate of less than 12% for an unseen language. All experiments were conducted using public datasets and the implementation will be made available for reproducibility.
## 1 Introduction
Recent advances in end-to-end neural text-to-speech synthesis (TTS) [14, 15] have yielded significant improvements in naturalness and speech quality. However, the data-intensive nature and the requirement of paired text and studio-quality audio data have limited multilingual TTS systems to resource-rich languages, which represent only a small portion of the more than 6,000 languages in the world [13]. To address this limitation, current research in multilingual TTS aims not only to exploit resource-rich languages [11, 12] but also to build models for low-resource languages [16].
Previous work has addressed low-resource TTS by using untranscribed speech data with vector-quantized variational autoencoders (VQ-VAE) [15] or automatic speech recognition (ASR) models [20]. Another study [21] has jointly used paired TTS, paired ASR, unpaired speech, and unpaired text data to build TTS for languages without any paired TTS data. However, these approaches still rely on speech data for the target languages and face the challenge of data collection when audio recordings for these languages are not available. In this study, we focus on the use of text-only data for multilingual TTS, as shown in Fig. 1. Previous research [22, 23] has shown the strong cross-lingual transferability of multilingual language models such as multilingual BERT [1] in natural language processing (NLP) tasks. By leveraging multilingual pretraining, the model can generalize to other languages, even if it has never seen the target data in those languages. Our work applies the framework of multilingual masked language model (MLM) pretraining to TTS, with the goal of achieving zero-shot cross-lingual transfer of pronunciation and prosody. Zero-shot TTS using text-only data enables the development of TTS systems for languages for which only textual resources are available, and it has the potential to open up TTS to thousands of languages [14].
In this paper, we propose a multilingual TTS framework that leverages unsupervised text pretraining. Fig. 2 illustrates the proposed framework. We use a typical end-to-end TTS architecture consisting of token embedding, encoder, and decoder. Our model also has a language-aware embedding layer, which includes the token embedding layer, a language embedding layer, and a bottleneck layer. As shown in Fig. 2(a), we first pretrain the language-aware embedding layer and the encoder of the TTS model with multilingual text data. We then fine-tune the encoder and decoder of the TTS model with paired data, while the language-aware embedding layer is frozen, as illustrated in Fig. 2(b). This allows zero
Figure 1: Our concept. We aim to build TTS model on languages for which only text data is available, to support low-resource languages.
shot TTS for a language not included in the paired data but present in the text data, as shown on the right in Fig. 2(c).
Our contributions are as follows. 1) We propose a framework for zero-shot multilingual TTS that achieves highly intelligible TTS for an unseen language, resulting in a character error rate of less than 12%. 2) Our method also improves TTS for seen languages, resulting in byte-based models without grapheme-to-phone (G2P) modules that outperform the phone-based baselines. 3) Our ablation studies provide additional insights, including the effectiveness of the frozen language-aware embedding layer. We conducted the experiments on public datasets and will make the implementation available for reproducibility. Audio samples are available1.
Footnote 1: https://takaaki-saeki.github.io/zm-tts-text_demo
## 2 Method
Our framework consists of three stages: a) unsupervised multilingual text pretraining, b) supervised learning with paired data, and c) inference. The model has a typical end-to-end TTS architecture consisting of token embedding, encoder, and decoder. First, we use MLM pretraining with multilingual text data to learn cross-lingual representations. Then we perform supervised learning with paired data to learn the mapping from the linguistic features obtained in the pretraining to speech features. The model then performs inference both for languages included in the paired data and for those that are not.
### Unsupervised multilingual text pretraining
Fig. 2(a) illustrates the unsupervised pretraining method. It uses multilingual text data, including languages that are not included in the paired data. Let \(X=(x_{n}\in V|n=1,\cdots,N)\) denote the input text token sequence of length \(N\), where \(V\) denotes a vocabulary constructed for pretraining. We define \(\mathcal{D}_{\mathrm{text}}\) as the text dataset. Let \(L_{\mathrm{text}}\) denote the set of language IDs included in \(\mathcal{D}_{\mathrm{text}}\). First, the masked token sequence \(X^{\mathrm{m}}\) and a language ID \(l_{\mathrm{text}}\in L_{\mathrm{text}}\) are fed to the model. Let the token embedding sequence and language embedding be \(Z^{\mathrm{m}}=(\mathbf{z}_{n}^{\mathrm{m}}\in\mathbb{R}^{d}|n=1,\cdots,N)\) and \(\mathbf{e}_{l}\in\mathbb{R}^{d}\), respectively. The embedding layers output \(Z^{\mathrm{m}}\) and \(\mathbf{e}_{l}\) as:
\[Z^{\mathrm{m}}=\text{Embed}(X^{\mathrm{m}};\theta_{\mathrm{T}}),\qquad\mathbf{e} _{l}=\text{Embed}(l_{\mathrm{text}};\theta_{\mathrm{L}}), \tag{1}\]
where \(\theta_{\mathrm{T}}\) and \(\theta_{\mathrm{L}}\) denote the model parameters of the token embedding and language embedding layers, respectively. Then the token and language embeddings obtained in Eq. (1) are added and fed to a bottleneck layer to project them into a hidden input vector. Let \(H_{\mathrm{in}}=(\mathbf{h}_{\mathrm{in},n}\in\mathbb{R}^{d}|n=1,\cdots,N)\) and \(H_{\mathrm{out}}=(\mathbf{h}_{\mathrm{out},n}\in\mathbb{R}^{d}|n=1,\cdots,N)\) denote hidden vectors in the encoder input and output, respectively. Then the conditional probability \(p(X|X_{-\Pi})\) is computed as:
\[H_{\mathrm{in}} =\text{Bottleneck}(Z^{\mathrm{m}}+\mathbf{e}^{l};\theta_{\mathrm{B}}), \tag{2}\] \[H_{\mathrm{out}} =\text{Encoder}(H_{\mathrm{in}};\theta_{\mathrm{E}}),\] (3) \[p(X|X_{-\Pi}) =\text{Softmax}(\text{PredictionNet}(H_{\mathrm{out}};\theta_{ \mathrm{P}})), \tag{4}\]
where \(\theta_{\mathrm{B}}\), \(\theta_{\mathrm{E}}\), \(\theta_{\mathrm{P}}\) denote the model parameters of the bottleneck layer, the encoder and a prediction network, respectively. In Eq. (4), \(\text{Softmax}(\cdot)\) denotes a softmax function. We define the network with the model parameters \(\{\theta_{\mathrm{B}},\theta_{\mathrm{T}},\theta_{\mathrm{L}}\}\) as **language-aware embedding layer**, which jointly embeds the token sequence \(X\) and the language ID \(l_{\mathrm{text}}\) as in Eq. (1) and (2). Let \(\Pi=(\pi_{k}\in\mathbb{N}|k=1,\cdots,K)\) be the indexes of the masked tokens of length \(K\). With the probability computed in Eq. (4), the training objective can be defined as:
\[\begin{split}\mathcal{L}_{\mathrm{mlm}}&=\frac{1}{K }\sum_{k=1}^{K}\log p(x_{\pi_{k}}|X_{-\Pi}),\\ \{\hat{\theta}_{\mathrm{E}},\hat{\theta}_{\mathrm{B}},\hat{ \theta}_{\mathrm{T}},\hat{\theta}_{\mathrm{L}}\}&=\operatorname {arg\,min}_{\theta_{\mathrm{E}},\theta_{\mathrm{B}},\theta_{\mathrm{T}}, \theta_{\mathrm{L}}}\mathcal{L}_{\mathrm{mlm}},\end{split} \tag{5}\]
where \(X_{-\Pi}\) denotes the unmasked tokens.
We use UTF-8 bytes or phones for the input token sequence \(X\). For each token type, the vocabulary \(V\) is constructed from \(\mathcal{D}_{\mathrm{text}}\), which includes a start/end of sentence token ([SOS/EOS]). For phone inputs, we extracted International Phonetic Alphabet (IPA) sequences using an open-source toolkit2. To obtain the masked token \(X^{\mathrm{m}}\), we use the same masking ratio and category as in the original BERT pretraining [4] for each token type. Randomly, 12 % of the tokens are replaced with the [MASK] token, and 1.5 % of them are replaced with random tokens. Also, 1.5 % of the tokens are left unchanged and \(\mathcal{L}_{\mathrm{mlm}}\) is computed as in Eq. (5) for those 15 % of tokens that have indices \(\Pi\).
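A minimal sketch of this masking step is shown below (12% [MASK], 1.5% random, 1.5% kept unchanged, i.e. the standard BERT 80/10/10 split of a 15% selection); the function and variable names are illustrative, not taken from the authors' implementation.

```python
import random

def mask_tokens(tokens, vocab, mask_token="[MASK]",
                p_mask=0.12, p_random=0.015, p_keep=0.015):
    """Corrupt a token sequence BERT-style and return it together with the
    masked indices Pi over which the MLM loss of Eq. (5) is computed."""
    corrupted, indices = list(tokens), []
    for i in range(len(tokens)):
        r = random.random()
        if r < p_mask:
            corrupted[i] = mask_token            # 12%: replace with [MASK]
        elif r < p_mask + p_random:
            corrupted[i] = random.choice(vocab)  # 1.5%: random token
        elif r < p_mask + p_random + p_keep:
            pass                                 # 1.5%: unchanged, still predicted
        else:
            continue                             # 85%: not selected
        indices.append(i)
    return corrupted, indices

# UTF-8 byte tokens, as in the byte-based variant of the method
print(mask_tokens(list(b"hello world"), vocab=list(range(256))))
```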
Figure 2: Proposed framework. (a) We first perform masked language model pretraining on multilingual text data and then (b) train TTS model on paired data with frozen language-aware embedding layer. (c) Zero-shot TTS is performed with language IDs that are not included in paired data but present in text-only data.
### Supervised learning with paired data
Fig. 2(b) illustrates the supervised learning of the TTS model with paired data. We define the paired data and the set of language IDs as \(\mathcal{D}_{\mathrm{paired}}\) and \(L_{\mathrm{paired}}\), respectively. Note that we assume \(L_{\mathrm{paired}}\subset L_{\mathrm{text}}\). Let \(Y=(\mathbf{y}_{t}\in\mathbb{R}^{D}|t=1,\cdots,T)\) denote the speech feature sequence with the length of \(T\). We first initialize the model parameters \(\{\theta_{\mathrm{E}},\theta_{\mathrm{B}},\theta_{\mathrm{T}},\theta_{\mathrm{ L}}\}\) with those obtained in the pretraining described in SS 2.1. Let \(\theta_{\mathrm{D}}\) denote the model parameter of the decoder. The speech features are predicted with teacher forcing as:
\[H_{\mathrm{out}} =\text{Encoder}(\text{Bottleneck}(Z+\mathbf{e}^{l})), \tag{6}\] \[\hat{Y} =\text{Decoder}(H_{\mathrm{out}},Y;\theta_{\mathrm{D}}), \tag{7}\]
where \(Z\) is the unmasked token embedding sequence. Note that the unmasked token sequence is used in Eq. (6), while the masked token sequence is used in Eq. (2). Let \(\mathcal{L}_{\mathrm{tts}}(\hat{Y},Y)\) denote the training objective of the TTS model. Then we consider two types of schemes.
**Updating language-aware embedding layer.** We only freeze the parameter of the language embedding layer \(\theta_{\mathrm{L}}\) while updating the rest of the parameters. Therefore the trainable model parameters can be written as
\[\{\hat{\theta}_{\mathrm{D}},\hat{\theta}_{\mathrm{E}},\hat{\theta}_{\mathrm{ B}},\hat{\theta}_{\mathrm{T}}\}=\operatorname*{arg\,min}_{\theta_{\mathrm{D}}, \theta_{\mathrm{E}},\theta_{\mathrm{B}},\theta_{\mathrm{T}}}\mathcal{L}_{ \mathrm{tts}}(\hat{Y},Y). \tag{8}\]
Previous work has confirmed that multilingual BERT has high cross-lingual transferability for various NLP tasks [21]. This scheme corresponds to a simple fine-tuning of BERT [21], which updates all the parameters during training for the downstream tasks3.
Footnote 3: We freeze the language embedding layer to address the mismatch between language embedding of seen and unseen languages.
**Freezing language-aware embedding layer.** We freeze the bottleneck layer and the token embedding layer along with the language embedding, updating the encoder and decoder. The training process can be written as
\[\{\hat{\theta}_{\mathrm{D}},\hat{\theta}_{\mathrm{E}}\}=\operatorname*{arg\, min}_{\theta_{\mathrm{D}},\theta_{\mathrm{E}}}\mathcal{L}_{\mathrm{tts}}(\hat{Y},Y). \tag{9}\]
In contrast to the scheme represented in Eq. (8), the scheme in Eq. (9) preserves the parameters of the language-aware embedding layer to facilitate cross-lingual transfer. In the evaluation, we use the scheme formulated in Eq. (9), except for the ablation study in § 3.4.
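In implementation terms, the two schemes differ only in which parameter groups the optimizer may update. A minimal PyTorch-style sketch of the frozen scheme in Eq. (9) is given below; the module attribute names are hypothetical placeholders, since the paper does not spell out its code structure here.

```python
import torch.nn as nn

def freeze_language_aware_embedding(model: nn.Module):
    """Freeze theta_T, theta_L, and theta_B before supervised TTS training,
    so that only the encoder and decoder (theta_E, theta_D) remain trainable,
    as in Eq. (9). Attribute names below are illustrative."""
    for module in (model.token_embed, model.lang_embed, model.bottleneck):
        for p in module.parameters():
            p.requires_grad_(False)
    # hand only the trainable parameters to the optimizer
    return [p for p in model.parameters() if p.requires_grad]
```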
### Inference
In inference, the whole TTS model trained in § 2.2 synthesizes speech from multilingual texts. Let \(L_{\mathrm{syn}}\) denote the set of language IDs used for inference. The text token sequence \(X\) and the language ID \(l_{\mathrm{syn}}\in L_{\mathrm{syn}}\) are fed to the model as in Eq. (1), and the encoder output is predicted as in Eq. (6). Unlike Eq. (7), the speech features are predicted as:
\[\hat{Y}=\text{Decoder}(H_{\mathrm{out}};\theta_{\mathrm{D}}). \tag{10}\]
The output waveform is obtained by feeding the predicted features \(\hat{Y}\) to a pretrained neural vocoder.
Fig. 2(c) illustrates the inference process. The left and right sides of the figure show the typical multilingual TTS and our zero-shot TTS. Previous work [15] has typically assumed _seen_ languages, which are included in supervised learning with the paired data. Then the inference is performed with the language IDs \(L_{\mathrm{seen}}\subset L_{\mathrm{paired}}\). In contrast, it is challenging to perform TTS for unseen languages \(L_{\mathrm{unseen}}\cap L_{\mathrm{paired}}=\emptyset\). While other work [20] has built zero-shot TTS from paired ASR and unpaired data, it uses audio data for the target languages. Our work attempts to use only linguistic knowledge to improve the zero-shot TTS. Thus, the inference process is written as \(L^{\prime}_{\mathrm{unseen}}\cap L_{\mathrm{paired}}=\emptyset\) and \(L^{\prime}_{\mathrm{unseen}}\subset L_{\mathrm{text}}\). In the evaluation, we denote the inference with \(L_{\mathrm{unseen}}\) and \(L^{\prime}_{\mathrm{unseen}}\) as _Fully zero-shot TTS_ and _Text-seen zero-shot TTS_, respectively. _Fully zero-shot TTS_ performs zero-shot TTS without pretraining as in the phone-based previous method [22], which is the baseline method in our evaluations.
### Model architecture
Our model is an autoregressive TTS model based on Transformer TTS [15], which has also been used in the previous work on byte-based multilingual TTS [16]. During the supervised learning described in § 2.2 and the inference described in § 2.3, we use an x-vector [23] for the speaker embedding and add it to the encoder output through a projection layer. During supervised learning, we use the average x-vectors computed from the training data. For evaluation purposes, we perform zero-shot synthesis with the average x-vector from the test data of the target language and feed it to the model. Note that we also conduct the evaluation with x-vectors from seen languages.
For the bottleneck layer with the parameter \(\theta_{\mathrm{B}}\), we use a simple residual network consisting of Layer Normalization [1], down projection, ReLU [13], and up projection with the residual connection. This architecture is also used in a language adapter for cross-lingual transfer [1]. We describe other detailed configurations in § 3.1.
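One natural reading of this description is the PyTorch sketch below (LayerNorm, 512-to-256 down projection, ReLU, 256-to-512 up projection, residual connection); the exact placement of the residual addition is our assumption.

```python
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    """Residual bottleneck of Sec. 2.4, mapping token + language embeddings
    to the hidden encoder input H_in of Eq. (2)."""
    def __init__(self, d: int = 512, d_hidden: int = 256):
        super().__init__()
        self.norm = nn.LayerNorm(d)
        self.down = nn.Linear(d, d_hidden)
        self.up = nn.Linear(d_hidden, d)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return h + self.up(torch.relu(self.down(self.norm(h))))

z = torch.randn(8, 120, 512)   # masked token embeddings Z^m: (batch, tokens, d)
e_l = torch.randn(8, 1, 512)   # language embedding e_l, broadcast over tokens
h_in = Bottleneck()(z + e_l)   # encoder input H_in
```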
## 3 Experimental evaluations
### Experimental setting
**Dataset**
We carried out all the evaluations with publicly available datasets. Table 1 shows the sizes of the data for each language. For the unsupervised text pretraining described in § 2.1, we used transcripts from VoxPopuli [20], M-AILABS [15], and CSS10 [21], resulting in a total of about 2.8 GB of spoken text across 19 languages. We used CSS10 for the supervised learning described in § 2.2, and we selected seven European languages as the seen languages, with Spanish as the unseen language. The paired data consisted of one speaker per language. It should be noted that Spanish is not actually a low-resource language, but we chose to use it for evaluation purposes in order to 1) compare our zero-shot TTS methods with the oracle methods using the paired data for the target language and
2) ensure a sufficient number of evaluators for the subjective evaluation. We used 5 and 100 utterances as dev and test sets, respectively, with the remaining data used for training.
**Training details**
The sampling rate was set to 16 kHz. An 80-dimensional mel filter bank, an FFT length of 1024 samples, and a frame shift of 256 samples were used for speech analysis. For the pretraining described in § 2.1, we trained the model for 1.2M iterations using the Noam optimizer [21] with the learning rate and warm-up step set to 1.0 and 10000, respectively. For the TTS model described in § 2.4, we used a 6-block Transformer encoder [21] and a 6-block Transformer decoder, with a post-net consisting of five convolutional layers with a kernel size of five. The attention dimension and the number of attention heads were set to 512 and 8, respectively. For the bottleneck layer described in § 2.4, we set the hidden dimension after the down projection to 256. The PredictionNet in Eq. (4) consisted of a linear layer, a GELU activation function [11], Layer Normalization, and a linear layer with the hidden dimension of 512. We also used guided attention loss [15] to improve the training efficiency. For the supervised learning described in § 2.2, we trained the models for 2.47M iterations (200 epochs). The Noam optimizer was used with the warm-up step of 50000. For the neural vocoder, we trained HiFi-GAN [13] for 2M iterations with LibriTTS [22], VCTK [21], and CSS10. For the x-vector described in § 2.4, we used a model trained on VoxCeleb1 and VoxCeleb2 [23] published in SpeechBrain [23]. We used ESPnet2-TTS [24, 25] for the implementation.
**Baselines**
We developed baseline models without the pretraining.
**Seen language.**_Monolingual:_ We trained a model for each language independently. Our preliminary study found that Transformer TTS was unstable and could not synthesize intelligible speech in the monolingual condition due to the lack of training data. Therefore, we used Tacotron2 [22] only for the monolingual models, as in the original paper of the dataset [15]. _Multilingual w/o LIDs:_ We trained a multilingual Transformer TTS model using the paired data shown in Table 1 without language IDs (LIDs). _Multilingual w/ LIDs:_ We trained a multilingual Transformer TTS model with the paired data of the unseen language. It also used the language IDs.
**Unseen language.** We compared _Fully zero-shot TTS_ and _Text-seen zero-shot TTS_ defined in § 2.3. In _Oracle_, we used the _Monolingual_ and _Multilingual w/ LIDs_ models, which used the paired data of the unseen language. In _Fully zero-shot TTS_, we used _Multilingual w/o LIDs_ to synthesize speech from text tokens in the unseen language. This method corresponds to the conventional multilingual TTS model using bytes [14] or IPA phones [20].
**Evaluation metrics**
To objectively measure the synthetic speech quality, we used mel cepstral distortion (MCD) [22] with the mel cepstrum dimension set to 25. We also evaluated the intelligibility using CERs computed with a multilingual ASR model [20]. We used a pretrained _large_ model that is publicly available4. To evaluate the naturalness, we carried out listening tests to calculate five-scale mean opinion scores (MOS) of synthesized speech for each method. Forty native speakers were recruited through Amazon Mechanical Turk [26] for each of the tests. Furthermore, we leveraged a publicly available automatic MOS (AMOS) prediction model [20] to evaluate the naturalness. Note that the model was trained on English and Chinese datasets, but previous work [20] has reported that it also showed a correlation coefficient higher than 0.8 for another language (Japanese).
Footnote 4: https://github.com/openai/whisper
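For reference, a common convention for MCD over 25-dimensional mel cepstra is sketched below; the paper does not state its exact alignment procedure or constant, so both are assumptions here.

```python
import numpy as np

def mel_cepstral_distortion(c_ref, c_syn):
    """MCD in dB between time-aligned mel-cepstral sequences of shape
    (frames, 25), using the usual 10*sqrt(2)/ln(10) convention."""
    diff = np.asarray(c_ref) - np.asarray(c_syn)
    per_frame = (10.0 / np.log(10.0)) * np.sqrt(2.0 * np.sum(diff ** 2, axis=1))
    return float(per_frame.mean())
```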
### Evaluation results on seen languages
We evaluated our framework on the seen languages included in the paired data, as defined in § 2.3. Table 2 lists the results in MCD and CER. Lower values are better for both metrics. As we can see, the byte-based and phone-based models with the proposed multilingual pretraining performed the best across all languages and metrics. Among the baselines, byte-based monolingual and multilingual models tended to have higher MCD and CER than phone-based models, and failed to synthesize intelligible speech in some languages. For example, the baseline byte-based models showed high CER values for French, which has a deep orthography, meaning that a single character has different pronunciations depending on the context. We observed that our method improved the byte-based models such that they outperformed the phone-based baseline models for all the metrics and languages. It is worth noting that the proposed byte-based models even outperformed the proposed phone-based models except for el
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline \hline Languages & Code & Text-only data & \multicolumn{2}{c}{Paired data} \\ & & & Text & Audio \\ \hline \multicolumn{5}{l}{_Seen languages for evaluation \(L_{\text{seen}}\)_} \\ \hline German & de & 359MB & 0.73MB & 16.13h \\ French & fr & 372MB & 0.94MB & 19.15h \\ Dutch & nl & 336MB & 0.75MB & 14.10h \\ Finnish & fi & 308MB & 0.47MB & 21.36h \\ Hungarian & hu & 104MB & 0.51MB & 10.53h \\ Russian & ru & 4.9MB & 1.5MB & 10.00h \\ Greek & el & 0.39MB & 0.39MB & 4.13h \\ \hline \multicolumn{5}{l}{_Unseen language for evaluation \(L_{\text{unseen}}\)_} \\ \hline Spanish & es & 345MB & 0.0MB (1.2MB) & 0.00h (23.81h) \\ \hline \multicolumn{5}{l}{Languages not included in CSS10} \\ \hline English & en & 338MB & & \\ Estonian & et & 87MB & & \\ Croatian & hr & 2.0MB & & \\ Italian & it & 334MB & & \\ Lithuanian & lt & 89MB & & \\ Polish & pl & 102MB & & \\ Romanian & ro & 67MB & & \\ Slovak & sk & 94MB & & \\ Slovenian & sl & 81MB & & \\ \hline \hline \end{tabular}
\end{table}
Table 1: Amount of text-only and paired data for each language. Parentheses indicate amount of original data in CSS10.
and ru. These results suggest that our framework is effective in building a TTS model for languages without G2P modules.
### Evaluation results on unseen language
We evaluated our method on zero-shot TTS for the unseen language defined in § 2.3. As described in § 2.4, we first used the x-vector from the es speaker to compute the MCD. Table 3 lists the results. The baseline models showed CERs of over 40% and MCDs of over 10.0. However, our proposed text pretraining improved the metrics, resulting in CERs of less than half for both byte- and phone-based methods. Also, in contrast to the results for the seen languages, the phone-based model outperformed the byte-based one in terms of CER. Compared with the oracle case with the paired data of the unseen language, our proposed zero-shot TTS showed higher MCD and CER, but achieved a difference of only 1% in CER compared to the oracle byte-based monolingual model. These results demonstrate the effectiveness of our method in achieving intelligible zero-shot TTS for the unseen language.
To investigate the case where the target speaker information is completely unavailable, we also used the x-vector from a seen language. We chose the fr speaker because es and fr are both categorized as Western Romance in Glottolog [11]. Table 3 lists the results. Note that this case does not have MCD results, since a speaker different from the ground-truth speech was used. We can see that the unsupervised text pretraining also improved the zero-shot performance when using the x-vector from the fr speaker. In the proposed byte-based model, the cross-lingual x-vector showed the lower CER. This might result from the fact that the es x-vector was not present in the training data, whereas the fr x-vector was.
### Ablation study
To further evaluate our method, we conducted several ablation studies. Table 4 lists the results. _Bytes multilingual_ represents the byte-based proposed method in the evaluation of § 3.2 and § 3.3. Note that it used the frozen language-aware embedding layer as formulated in Eq. (9). We also examined the effect of the text data used in the pretraining in Appendix A.
In _W/o bottleneck layer_, we excluded the bottleneck layer and simply added the token and language embedding to obtain the encoder input in Eq. (2). We found that removing the bottleneck layer led to a performance drop in all the languages and metrics, with an average increase of 0.53 in MCD and 4.16% in CER. The largest increase was observed in the unseen language, with an increase of 1.21 in MCD. This suggests that the bottleneck layer, which projects the token and language embedding into the hidden input text representation with nonlinear dimensionality reduction, is effective in im
\begin{table}
\begin{tabular}{l|c c|c c|c c|c c|c c|c c|c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{de} & \multicolumn{2}{c|}{fr} & \multicolumn{2}{c|}{ru} & \multicolumn{2}{c|}{fi} & \multicolumn{2}{c|}{hu} & \multicolumn{2}{c|}{nl} & \multicolumn{2}{c}{el} \\ & MCD & CER & MCD & CER & MCD & CER & MCD & CER & MCD & CER & MCD & CER & MCD & CER \\ \hline Natural & - & 2.75 & - & 4.52 & - & 2.12 & - & 4.73 & - & 4.86 & - & 6.22 & - & 7.14 \\ \hline \multicolumn{15}{l}{_Baseline (Monolingual)_} \\ \hline Bytes monolingual & 7.70 & 8.61 & 11.76 & 91.82 & 11.43 & \(>\)100 & 8.33 & 56.03 & 10.22 & 93.05 & 7.49 & 15.33 & 10.20 & 85.98 \\ Phones monolingual & 7.38 & 4.07 & 8.96 & 17.86 & 11.89 & 25.30 & 7.23 & 27.62 & 7.59 & 24.62 & 7.80 & 19.20 & 8.16 & 21.79 \\ \hline \multicolumn{15}{l}{_Baseline (Multilingual)_} \\ \hline Bytes multilingual w/o LIDs & 7.68 & 37.46 & 8.71 & 41.35 & 9.38 & 45.92 & 6.26 & 29.19 & 6.48 & 33.82 & 8.46 & 46.33 & 7.64 & 36.24 \\ Bytes multilingual w/ LIDs & 6.51 & 13.19 & 10.84 & 55.79 & 12.89 & \(>\)100 & 6.78 & 27.22 & 9.09 & 42.97 & 8.47 & 39.37 & 7.25 & 23.56 \\ Phones multilingual w/o LIDs & 6.31 & 10.64 & 7.44 & 20.86 & 8.10 & 35.32 & 5.53 & 19.56 & 5.59 & 14.03 & 7.76 & 34.49 & 6.90 & 19.33 \\ Phones multilingual w/ LIDs & 6.16 & 9.76 & 6.88 & 14.97 & 7.63 & 23.54 & 5.17 & 10.63 & 5.28 & 9.11 & 6.95 & 19.48 & 6.90 & 16.97 \\ \hline \multicolumn{15}{l}{_Proposed (Unsupervised text pretraining)_} \\ \hline Bytes multilingual & **5.65** & **3.79** & **6.48** & **7.15** & 7.38 & **10.62** & **4.99** & **5.28** & **5.01** & **6.05** & **6.52** & **13.74** & 6.57 & 11.75 \\ Phones multilingual & 5.88 & 5.52 & 6.61 & 7.72 & **7.25** & 15.85 & 5.18 & 8.62 & 5.30 & 7.37 & 7.00 & 14.42 & **6.53** & **11.06** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Evaluation results for _seen_ languages. Bold indicates best scores in baseline and proposed methods.
\begin{table}
\begin{tabular}{l|c c|c} \hline \hline \multirow{3}{*}{Method} & \multicolumn{3}{c}{es} \\ & \multicolumn{2}{c|}{es x-vector} & fr x-vector \\ & MCD & CER & CER \\ \hline Natural & - & 2.71 & 2.71 \\ \hline \multicolumn{4}{l}{_Oracle_} \\ \hline Bytes monolingual & 8.65 & 10.70 & - \\ Phones monolingual & 8.47 & 5.28 & - \\ Phones multilingual & 6.20 & 5.32 & 6.99 \\ \hline \multicolumn{4}{l}{_Baseline (Fully zero-shot TTS)_} \\ \hline Bytes multilingual & 11.22 & 64.07 & 66.45 \\ Phones multilingual & 10.75 & 44.75 & 44.37 \\ \hline \multicolumn{4}{l}{_Proposed (Text-seen zero-shot TTS)_} \\ \hline Bytes multilingual & 9.05 & 18.27 & 13.74 \\ Phones multilingual & 9.44 & 11.69 & 13.33 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Evaluation results for _unseen_ language.
Figure 3: Visualization of token and language embedding. Pairs of similar languages (es–fr and de–nl) are overlapping in token embedding space, while output of bottleneck layer separates them.
proving the generalization for zero-shot TTS.
We also evaluated the effect of including language IDs in the proposed method by comparing it with a version that excluded language IDs, referred to as _W/o language ID_. It corresponds to a simple multilingual BERT pretraining [23] that uses only text tokens across different languages. We observed that the use of language IDs led to an average improvement of 0.5 MCD and 4.48% CER, indicating the effectiveness of our approach in using language IDs.
In _W/o initializing encoder_, we did not initialize the encoder \(\theta_{\mathrm{E}}\) before the supervised learning described in § 2.2. Instead, we only initialized the parameters \(\theta_{\mathrm{T}}\), \(\theta_{\mathrm{L}}\), and \(\theta_{\mathrm{B}}\) with the parameters pretrained in § 2.1. Through this evaluation, we investigated whether the performance gain with our method resulted from the initialization of the language-aware embedding layer or the encoder. We observed that _W/o initializing encoder_ resulted in an improvement of 0.04 in MCD but a 2.27% increase in CER on average, suggesting that our method benefits more from the pretraining of the language-aware embedding layer than from the encoder.
In _Updating language-aware embedding layer_, we updated the language-aware embedding layer during supervised learning, as formulated in Eq. (8). We observed that freezing the language-aware embedding layer led to better performance for most languages and metrics, resulting in an average difference of 0.29 in MCD and 1.04% in CER.
### Dependency on unseen languages
We conducted evaluations on the zero-shot TTS for different unseen languages. The eight European languages included in the paired data are composed of Indo-European and Uralic language families defined in Glottolog [13]. In this evaluation, we selected de and hu from each of the families. During supervised learning in § 2.2, we excluded the paired data for each of de and hu and instead included the paired data for es. Table 5 lists the results. We chose the phone-based baseline method, which had shown better results in § 3.3. We observed that the pretraining improved the CER by around 10% and MCD by around 0.3 for de. However, the improvement in CER for hu was limited to 2%, while the MCD was improved by around 0.5. These results suggest that the performance of our zero-shot TTS is language dependent, as observed in previous work on cross-lingual transfer for NLP tasks [23].
\begin{table}
\begin{tabular}{l|c c|c c|c c|c c|c c|c c} \hline \hline \multirow{3}{*}{Method} & \multicolumn{8}{c|}{Seen} & \multicolumn{2}{c|}{Unseen} & \multicolumn{2}{c}{\multirow{2}{*}{Avg.}} \\ & \multicolumn{2}{c|}{de} & \multicolumn{2}{c|}{fr} & \multicolumn{2}{c|}{ru} & \multicolumn{2}{c|}{fi} & \multicolumn{2}{c|}{es} & & \\ & MCD & CER & MCD & CER & MCD & CER & MCD & CER & MCD & CER & MCD & CER \\ \hline Bytes multilingual & 5.65 & 3.79 & 6.48 & 7.15 & 7.38 & 10.62 & 4.99 & 5.28 & 9.05 & 18.27 & 6.46 & **9.58** \\ \hline W/o bottleneck layer & 6.06 & 5.01 & 7.15 & 9.09 & 7.71 & 28.52 & 5.33 & 6.47 & 10.26 & 24.01 & 6.99 & 13.74 \\ W/o language ID & 6.07 & 5.09 & 7.09 & 9.99 & 7.77 & 22.58 & 5.23 & 6.99 & 10.45 & 32.70 & 6.96 & 14.06 \\ W/o initializing encoder & 5.59 & 3.75 & 6.52 & 9.31 & 7.12 & 16.47 & 4.86 & 5.03 & 9.02 & 21.91 & **6.42** & 11.85 \\ Updating language-aware embedding layer & 6.05 & 6.22 & 6.75 & 6.93 & 7.46 & 11.42 & 5.16 & 8.00 & 9.48 & 17.21 & 6.75 & 10.62 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation studies on training and model configurations. Bold indicates best metrics on average (Avg.).
Figure 4: MOS and AMOS results for _seen_ languages. Error bars in MOS results represent 95% confidence intervals.
Figure 5: MOS, AMOS, and AB test results for _unseen_ language. Error bars in MOS results represent 95% confidence intervals.
\begin{table}
\begin{tabular}{l|c c|c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{de} & \multicolumn{2}{c}{hu} \\ & MCD & CER & MCD & CER \\ \hline Natural & - & 2.75 & - & 2.12 \\ \hline \multicolumn{5}{l}{_Oracle_} \\ \hline Phones monolingual & 7.38 & 4.07 & 7.59 & 24.62 \\ Phones multilingual & 6.16 & 9.76 & 5.28 & 9.11 \\ \hline \multicolumn{5}{l}{_Baseline (Fully zero-shot TTS)_} \\ \hline Phones multilingual & 10.31 & 38.75 & 9.93 & 52.62 \\ \hline \multicolumn{5}{l}{_Proposed (Text-seen zero-shot TTS)_} \\ \hline Bytes multilingual & 10.00 & 28.01 & 9.40 & 50.11 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Analysis on different _unseen_ languages.
Fig. 3 visualizes the token embeddings \(Z\) and the encoder inputs \(H_{\text{in}}\), averaged over each utterance, using t-distributed stochastic neighbor embedding (t-SNE) [14]. We observed overlaps in the token embedding for (es, fr) and (de, nl), which are classified as Western Romance and West Germanic in Glottolog, respectively. The encoder inputs are separated in the embedding space for each language. The results in Table 5 and the visualization suggest that the cross-lingual transfer works better when similar languages sharing the token embedding space are present during supervised learning. However, for languages with distinct token and language embeddings, the cross-lingual transferability might be limited. We leave further analysis of language dependencies as a topic for future research.
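A minimal scikit-learn sketch of this visualization is given below; the embedding shapes, perplexity, and random seed are illustrative rather than the paper's settings.

```python
import numpy as np
from sklearn.manifold import TSNE

# One row per utterance: utterance-averaged token embeddings Z (or encoder
# inputs H_in), with a parallel list of language codes used for coloring.
Z_utt = np.random.randn(700, 512)  # placeholder for the real embeddings
coords = TSNE(n_components=2, perplexity=30.0, init="pca",
              random_state=0).fit_transform(Z_utt)
# coords[:, 0] and coords[:, 1] can then be scattered per language.
```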
### Subjective evaluations on naturalness
We conducted evaluations on naturalness as described in § 3.1. Fig. 4 shows the results for seen languages. Note that we conducted the listening tests for de and fr. For each language, either of the proposed methods showed the highest MOS, while we did not observe any significant difference between the proposed methods and the best baseline method, which was the phone-based multilingual model with LIDs. To further validate our results, we also evaluated the naturalness with an AMOS prediction model, as shown in Fig. 4. We observed that either of the proposed methods showed the highest scores in all the languages. On average, the byte-based and phone-based proposed models scored 2.89 and 2.84, respectively, while the best baseline method obtained 2.83. Additionally, we observed that the byte-based proposed model often scored higher than the phone-based proposed model, which is consistent with the results in Table 2.
Footnote 5: The AMOS tended to be lower than the MOS. While the MOS prediction model has a high correlation, it may produce errors in predicting absolute values, as reported in previous work [16]. The relative relationships are more reliable in the AMOS.
Fig. 5 shows the results for the unseen language. The oracle methods had the highest MOS of 3.76 and 3.96, and the baseline zero-shot method had the lowest MOS of 3.29. The proposed methods outperformed the baseline method, and the byte- and phone-based models had MOS of 3.44 and 3.32, respectively. The AMOS results were consistent with the listening test results, with the proposed zero-shot TTS methods outperforming the baseline method. In this evaluation, the proposed byte-based model scored 3.21 on the AMOS, while the oracle phone-based model scored 3.20. To further validate the results, we conducted a preference AB test on naturalness with 25 raters. As shown in Fig. 5, our byte-based model significantly outperformed the baseline phone-based model.
## 4 Related work
**Multilingual TTS** While previous work on multilingual TTS has primarily focused on resource-rich languages [15, 16], there is growing interest in developing TTS models on low-resource languages. Several studies have explored the input tokens shared across languages such as bytes [16, 1], IPA phones [10], and articulatory features [11], to transfer knowledge from resource-rich to low-resource languages. Grapheme tokens can eliminate the per-language G2P knowledge, and previous work has built a byte-based TTS model for around 40 languages [10]. There has been work using the phonological features derived from IPA to achieve the zero-shot TTS [17]. Our framework achieves the zero-shot cross-lingual transfer with bytes by leveraging multilingual text pretraining. There have been studies on using untranscribed speech data for low-resource scenarios by leveraging VQ-VAE [13] or an ASR model [15, 14]. Other work [16] has used paired TTS, paired ASR, unpaired speech, and unpaired text data to build TTS for languages without any paired TTS data. While it also performs text-only training as in our work, it still uses the audio recordings of the target languages.
**Cross-lingual representation learning for NLP** There have been studies on learning cross-lingual representations that can be applied to various NLP tasks in different languages [12, 13]. Recent work has highlighted the strong cross-lingual transferability of multilingual BERT [14], which has been observed to perform surprisingly well when transferred to other languages [15, 16]. Building on this, our work leverages multilingual MLM pretraining for TTS, which improves byte-based TTS models without G2P knowledge and achieves zero-shot TTS.
**Language model pretraining for TTS** Previous research has explored self-supervised text pretraining techniques for TTS. BERT models have been used to extract contextual embeddings and enhance the prosody of TTS [15, 16]. Other studies have used phonemes jointly with graphemes [17] or sub-phonemes [17] as the inputs of the MLM pretraining. Our work proposes multilingual MLM pretraining for TTS using text tokens shared across languages, rather than focusing on monolingual pretraining.
## 5 Conclusions
We presented a multilingual text-to-speech (TTS) framework that leverages unsupervised text pretraining. Our framework achieved highly intelligible zero-shot TTS for an unseen language, resulting in a character error rate (CER) of less than 12%. It also improved the TTS for seen languages, with byte-based models without grapheme-to-phone (G2P) modules outperforming the phone-based baselines. Our ablation studies provided additional insights, including the effectiveness of the frozen language embedding layer.
**Limitations and future work** Our proposed framework has limitations. A performance gap remains between the oracle models and our zero-shot TTS models in terms of intelligibility, speech quality, and naturalness, as seen in the evaluations in § 3.3 and § 3.6. Further studies are needed to improve our zero-shot TTS. Our framework also has a limitation with language dependency, as the results in § 3.5 suggest that this dependency is caused by the presence of similar languages during supervised learning. Our future work will focus on studying this language dependency further and developing a method that performs better across various languages.
## Acknowledgments
Part of this work was supported by JSPS KAKENHI Grant Number 21H05054, 22H03639, and 22J12040. This work used the Bridges system [21], which is supported by NSF award number ACI-1445606, at the Pittsburgh Supercomputing Center. We would like to thank the research teams at Google, Japan and Google, USA through the internship program of the first author for providing various insights on this topic.
|
2308.01787 | AdS$_3$ Pure Gravity and Stringy Unitarity | We construct a unitary, modular-invariant torus partition function of a
two-dimensional conformal field theory with a Virasoro primary spectral gap of
$\Delta_* = \frac{c-1}{12}$ above the vacuum. The twist gap is identical, apart
from two states $\mathcal{O}_*$ with spin scaling linearly in the central
charge $c$. These states admit an AdS$_3$ interpretation as strongly coupled
strings. All other states are black hole microstates. | Gabriele Di Ubaldo, Eric Perlmutter | 2023-08-03T14:36:15Z | http://arxiv.org/abs/2308.01787v1 | # AdS\({}_{3}\) Pure Gravity and Stringy Unitarity
###### Abstract
We construct a unitary, modular-invariant torus partition function of a two-dimensional conformal field theory with a Virasoro primary spectral gap of \(\Delta_{*}=\frac{c-1}{12}\) above the vacuum. The twist gap is identical, apart from two states \(\mathcal{O}_{*}\) with spin scaling linearly in the central charge \(c\). These states admit an AdS\({}_{3}\) interpretation as strongly coupled strings. All other states are black hole microstates.
The quest for AdS\({}_{3}\) pure gravity still beckons.
It is not fully known whether, or in what precise sense, a consistent such theory exists, either quantum mechanically or in the semiclassical limit. The latter is of particular physical interest, due to the existence of black holes and the emergence of spacetime.
Holographically speaking, the outstanding spectral problem is to find a torus partition function of a two-dimensional conformal field theory (CFT) that is mutually compatible with unitarity (a non-negative Virasoro primary spectral density) and modularity (exact \(SL(2,\mathbb{Z})\)-invariance of the partition function), while preserving the spectral gaps of a dual bulk theory with only black holes above a normalizable AdS\({}_{3}\) ground state. No known partition function satisfies these basic requirements.
There exists a diverse set of approaches to this problem which, famous as it is, we describe in condensed fashion. Summing over all smooth on-shell 3-manifolds \(\mathcal{M}\) with \(\partial\mathcal{M}=T^{2}\)[1], namely the \(SL(2,\mathbb{Z})\) family of BTZ black holes, generates a negative density of states in two regimes [1; 2; 3]: at large spin \(j\to\infty\) near extremality,
\[\int_{0}^{t_{0}}dt\,\rho_{\text{MWK},\,j}(t)\sim(-1)^{j}e^{\pi\sqrt{\xi j}}, \quad t_{0}\sim e^{-2\pi\sqrt{\xi j}} \tag{1}\]
where
\[t:=\min(h,\overline{h})-\xi\,,\quad j=h-\overline{h}\,,\quad\xi:=\frac{c-1}{ 24}\,, \tag{2}\]
and at the scalar black hole threshold,
\[\rho_{\text{MWK},0}(t)=-6\delta(t)+(t>0\text{ continuum})\,. \tag{3}\]
The property (1) is especially severe: an exponentially large negative density despite an exponentially small window. From the bulk perspective, seeking a consistent pure gravity path integral requires reckoning with the sum over topologies; for related work, see [4; 5; 6; 7; 8; 9; 10; 11]. (We note here some recent work in AdS\({}_{3}\)/CFT\({}_{2}\) that studies fixed bulk topologies [12; 13; 14; 15; 16; 17].)
Some valuable progress has been made. Explicit restoration of unitarity may be achieved by retreating from pure gravity [18; 19], adding heavy point-particle matter which admits a geometric bulk interpretation. The construction of [20], which preserves the pure gravity spectrum, uses dimensional reduction to JT gravity to fix (1) with an infinite sum over off-shell Seifert manifolds, though it remains a mostly [21] implicit construction away from extremality and leaves (3) intact. Other approaches that forego a subset of the above conditions include [22; 23; 24].
## I Partition Function
The Virasoro primary partition function is defined as
\[Z_{p}(\tau)=\sqrt{y}|\eta(\tau)|^{2}Z(\tau) \tag{4}\]
where \(Z(\tau)=\text{Tr}\,(q^{L_{0}-\frac{c}{24}}\bar{q}^{\bar{L}_{0}-\frac{c}{24}})\) is the torus partition function (non-holomorphic) and \(\tau:=x+iy\). The following modular-invariant \(Z_{p}(\tau)\) is unitary at sufficiently large \(\xi\):
\[\mathcal{Z}(\tau)=Z_{\text{MWK}}(\tau)+Z_{\text{string}}(\tau) \tag{5}\]
where
\[Z_{\text{MWK}}(\tau) :=\sum_{\gamma\in SL(2,\mathbb{Z})/\Gamma_{\infty}}\sqrt{\text{ Im}(\gamma\tau)}\,|q_{\gamma}^{-\xi}(1-q_{\gamma})|^{2}\] \[Z_{\text{string}}(\tau) :=\sum_{\gamma\in SL(2,\mathbb{Z})/\Gamma_{\infty}}\sqrt{\text{ Im}(\gamma\tau)}\,\left(2q_{\gamma}^{\xi/4}\overline{q}_{\gamma}^{-\xi/4}+\text{c.c.}\right) \tag{6}\]
with \(q_{\gamma}:=e^{2\pi i\gamma\tau}\). These are Poincaré sums over \(SL(2,\mathbb{Z})\) modulo \(\Gamma_{\infty}\), the set of modular \(T\)-transformations [25]. As we substantiate below, the unitary range of \(\xi\) includes \(\xi\gg 1\), and provisionally appears to hold for all \(\xi\in 2\mathbb{Z}_{+}\). The reason for the "string" moniker will be explained momentarily.
From a CFT point of view, \(Z_{\text{string}}(\tau)\) is a Poincaré sum over two copies of a Virasoro primary seed state \(\mathcal{O}_{*}\) with quantum numbers
\[(\Delta_{*},j_{*})=\left(2\xi,\frac{\xi}{2}\right)\quad\Leftrightarrow\quad(t_ {*},\overline{t}_{*})=\left(-\frac{\xi}{4},\frac{\xi}{4}\right) \tag{7}\]
and its parity image with \(h_{*}\leftrightarrow\overline{h}_{*}\). We have employed the "reduced twist" variable \(t\) along with its partner \(\overline{t}:=\max(h,\overline{h})-\xi\). We have chosen the state in (6) to be doubly-degenerate, a natural choice that preserves integrality, but \(\mathcal{Z}(\tau)\) is unitary for a finite range of degeneracies \(d_{*}>1\) (see Appendix B, e.g. Fig. 3).
Let us state the spectral properties of the partition function \(\mathcal{Z}(\tau)\), deferring its unitarity to the next subsection. The spectrum is shown in Fig. 1. The gap in conformal dimension above the vacuum is exactly
\[\Delta_{*}=\frac{c-1}{12} \tag{8}\]
with no corrections. This is the value anticipated by the Virasoro modular bootstrap program (e.g. [26; 27; 28; 29; 30]) as
the optimal gap at _large_ \(c\), on the basis of black hole universality: the conformal dimension (8) corresponds to the massless limit of the semiclassical BTZ black hole. The state-of-the-art bootstrap upper bound on the spectral gap at large \(c\) is the numerical result [29]
\[\Delta_{*}\lesssim\frac{c}{9.08}\qquad(c\gg 1) \tag{9}\]
with a slightly weaker analytical bound [30]. (See [31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51] for further bootstrap work on Virasoro spectra at large \(c\).) The explicit realization by \(\mathcal{Z}(\tau)\) of the gap (8) while preserving unitarity at \(\xi\gg 1\) (the first such example, to our knowledge) is also noteworthy because of the paucity of pure CFT arguments that a gap this large is possible. Conversely, \(\mathcal{Z}(\tau)\) shows constructively that without incorporating discreteness into the modular bootstrap [49], the optimal bound on the gap cannot be lower than \(\Delta_{*}=2\xi\). This statement applies for all values of \(\xi\) for which \(\mathcal{Z}(\tau)\) is unitary.
As for the twist spectrum, all Virasoro primaries besides \(1\) and \(\mathcal{O}_{*}\) have \(t\geq 0\). There is a positive integer number of scalar states at \(t=0\) (see (13)). The spectrum of \(t>0\) states is continuous. This can be understood rather generally in terms of coarse-graining. At large \(\xi\), this can be thought of as a consequence of ignorance of exponentially small effects in \(c\) - for example, smearing over the mean level spacing \(\sim e^{-S_{\text{Cardy},j}(t)}\). We explain these points of interpretation in Sec. IV.
### Density of states
The corresponding Virasoro primary density of states, related to our partition function as
\[\frac{\mathcal{Z}(\tau)}{\sqrt{y}}=\sum_{j=0}^{\infty}(2-\delta_{j,0})\cos(2 \pi jx)\int_{\mathbb{R}}d\Delta\,e^{-2\pi y(\Delta-2\xi)}\rho_{j}(\Delta) \tag{10}\]
can be derived straightforwardly using existing methods for Poincaré sums. We have, in terms of reduced twist \(t\),
\[\rho_{j}(t)=\rho_{\text{MWK},\,j}(t)+\rho_{\text{string},\,j}(t) \tag{11}\]
for every spin \(j\). The MWK density \(\rho_{\text{MWK},\,j}(t)\) is recalled in (15). The new term is, for \(j\neq 0\),
\[\rho_{\text{string},\,j}(t) =\frac{4}{\sqrt{t\,\overline{t}}}\sum_{s=1}^{\infty}f_{j,j_{*};s}\cos\left( \frac{2\pi}{s}\sqrt{\xi t}\right)\cosh\left(\frac{2\pi}{s}\sqrt{\xi\overline{t}}\right)\] \[+(j_{*}\to-j_{*},t\leftrightarrow\overline{t}) \tag{12}\]
where \(f_{j,j_{*};s}:=S(j,j_{*};s)/s\) with \(S(j,j_{*};s)\) a Kloosterman sum (16). For \(j=0\), such sums must be regularized; using standard methods nicely summarized in [19], the result is the \(j=0\) specialization of the \(j\neq 0\) densities, augmented by a constant subtraction; see (17) and (18).
There are two hurdles to establishing positivity: one must cancel the negativity of the MWK partition function in the \(j\to\infty\) regime, and at the scalar threshold \(t=0\), both without introducing new negativity.
At \(j\to\infty\), the negativity (1) is resolved by construction: we have added states with reduced twist \(t_{*}=-\xi/4\), designed precisely to avoid the large-spin negativity in accordance with the arguments of [3] and the subsequent approach of [18; 19]. (We added two such states, but any number \(d_{*}>1\) would do; we review this in Appendix B.) The states \(\mathcal{O}_{*}\) have asymptotically large spin as \(\xi\to\infty\). It is exactly this property which admits the novelty of a spectral gap \(\Delta_{*}=2\xi\) without introducing further negativity elsewhere in the spectrum - and indeed, as we now show, curing the scalar negativity (3) in the process.
The scalar density of states is
\[\rho_{0}(t)=\delta(t+\xi)+(-6+8\sigma_{0}(j_{*}))\delta(t)+\tilde{\rho}_{0}(t )\,. \tag{13}\]
The first term is the vacuum state. The second, formerly problematic, term has been rendered strictly positive, for any \(j_{*}=\xi/2\). Happily, it is also an integer, a welcome surprise. Unlike previous approaches to this negativity, its resolution does not require the addition of an "extra" ad hoc \(+6\delta(t)\)[2], instead coming for free in \(\rho_{\text{string},0}(t)\). We note a number-theoretic feature of this degeneracy: if \(j_{*}\) is prime, then \(\sigma_{0}(j_{*})=2\).
The last term, \(\tilde{\rho}_{0}(t)\), is the continuum with support on \(t>0\), given explicitly in (19). Its positivity requires a more careful analysis because various large-\(\xi\) suppression factors are absent when \(j=0\), i.e. \(t=\overline{t}\), as can be seen in (12); however, \(\tilde{\rho}_{0}(t)\) is indeed positive for all \(t\geq 0\). We
Figure 1: The Virasoro primary spectrum of \(\mathcal{Z}(\tau)\). Green dots denote the vacuum state and two (parity-invariant) states with \((h_{*},\overline{h}_{*})=(\frac{5}{4}\xi,\frac{3}{4}\xi)\), interpretable in AdS\({}_{3}\) as strongly coupled strings. All other states exceed the semiclassical black hole threshold: \(\min(h,\overline{h})\geq\xi\). The density of states is positive.
provide details in Appendix B, but can sketch the essential point here. In the regime \(\xi t\gg 1\), the scalar MWK density is positive and exponentially larger in magnitude than the string density. As \(\xi t\sim\mathcal{O}(1)\), positivity is non-trivial as both densities are of the same order and the string density is term-wise oscillatory in \(t\). With an eye toward semiclassical gravity, we focus on \(\xi\gg 1\), taking the regime of \(x:=2\pi\sqrt{\xi t}\) fixed. The proof of positivity proceeds in two steps: first, we show that the sum of the \(s=1\) and \(s=2\) terms in (A9) is positive; and second, we show that the \(s\geq 2\) terms are individually positive.
Numerical evaluation of the scalar density at large but finite \(\xi\) confirms these analytic results, as shown in Fig. 2. Indeed, we see that positivity appears to hold all the way down to \(j_{*}=1\), i.e. \(\xi=2\), formally the smallest central charge in our construction [52].
## II A bulk string interpretation
The above construction is purely on the CFT side. Is there an AdS\({}_{3}\) gravity interpretation of the highly-spinning operator \(\mathcal{O}_{*}\) and its modular images?
One appealing answer is that \(\mathcal{O}_{*}\) is a strongly coupled string, and its modular images, stringy contributions to the black hole spectrum. While an operator like \(\mathcal{O}_{*}\) with \(t<0\) and \(\overline{t}>0\) cannot be dual to a smooth BTZ black hole nor to a conical defect (such solutions with real mass and angular momenta do not exist in semiclassical AdS\({}_{3}\) gravity coupled to point particles), spinning strings in AdS\({}_{3}\) can, and indeed do, satisfy this condition.
The spectrum of folded, spinning Nambu-Goto strings coupled to gravity in AdS\({}_{3}\) was studied in [53] in the classical limit. The Virasoro primary string spectrum is parameterized by a string tension \(\lambda\) and an angular velocity \(\omega\). The string tension is given in terms of AdS, string and Planck scales as
\[\lambda=\frac{1}{2\pi}\frac{L_{\rm AdS}}{\ell_{s}}\frac{\ell_{p}}{\ell_{s}} \tag{14}\]
where \(\ell_{p}=8\pi G_{N}\).
For a given \(\lambda\), the string spectrum starts at the origin \(t=\overline{t}=-\xi\) and ends at the extremality bound \(t=0\) or \(\overline{t}=0\). Matching the string spectrum to the quantum numbers (7) of the operator \(\mathcal{O}_{*}\) yields the unique result
\[\lambda_{*}=1\,,\quad\omega_{*}=2\,. \tag{15}\]
This string is strongly coupled: from (14), an AdS-sized string with \(\lambda_{*}=1\) requires \(\ell_{p}/\ell_{s}\sim\mathcal{O}(1)\), which is the ratio that defines an effective string coupling \(g_{s}=(\ell_{p}/\ell_{s})^{>0}\) (where the exponent depends on the details of the putative string background [54]).
The specific value \(\lambda_{*}=1\) happens to enjoy a certain synergy with the equations of [53]. For generic \(\lambda\) and \(\omega\), the solutions of [53] are given in terms of elliptic integrals. However, at \(\lambda=1\) - and only at \(\lambda=1\) - the solution simplifies dramatically, as the string embedding equation becomes algebraic. It is simple enough to recall explicitly in a few lines. The Lorentzian spacetime metric outside the string is locally AdS\({}_{3}\) with the corresponding mass and angular momentum,
\[ds^{2}=\frac{1}{16}(-du^{2}+dv^{2})-\left(z-\frac{1}{256z}\right)dudv+\frac{ dz^{2}}{4z^{2}} \tag{16}\]
where, in the conventions of [53], the conformal boundary is at \(z\to\infty\). The string embedding is determined by functions \(u(\sigma,\tau),v(\sigma,\tau)\) and \(z(\sigma)\), where \((\sigma,\tau)\) are worldsheet coordinates with induced metric
\[h =\Omega^{2}(\sigma)(-d\tau^{2}+d\sigma^{2}) \tag{17}\] \[\Omega^{2}(\sigma) =3\frac{(z_{L}-z(\sigma))(z(\sigma)-z_{R})}{z(\sigma)}\]
where (15) implies \(z_{L}=\frac{3}{16},z_{R}=-\frac{1}{48}\), and
\[z(\sigma)=\frac{\left(32-25\cos^{2}\sigma+5\sqrt{25\cos^{4}\sigma-64\cos^{2} \sigma+64}\right)}{384} \tag{18}\]
Opposite points on the string are identified, "sewing up" the spacetime [55]. The outermost radius of the string (where it folds) is at \(z=z_{L}\), while the center of the string is at \(z=z(0)=1/12\). The spacetime ends at the string, avoiding a naked singularity.
So we see that the state \(\mathcal{O}_{*}\) admits an interpretation as a highly-spinning string coupled to gravity in AdS\({}_{3}\). That it is strongly coupled dovetails nicely with how AdS\({}_{3}\) pure gravity could possibly arise in string theory: strong coupling is necessary to gap the light string modes to the Planck scale.
### Black hole microstates
Our construction adds not only the states \(\mathcal{O}_{*}\), but their \(SL(2,\mathbb{Z})\) images too. These states are heavy, but are _not_ BTZ black holes (fully captured by the MWK sum over smooth Euclidean saddles) nor their orbifolds (which are modular images of conical defect geometries).
Figure 2: Plot of the regularized scalar primary density of states \(\tilde{\rho}_{0}(x)\) of the partition function \(\mathcal{Z}(\tau)\), as a function of \(x=2\pi\sqrt{\xi t}\), with \(\xi\) ranging from \(\xi=2\) (red) to \(\xi=102\) (blue) in steps of 10. The curves are positive for all \(x\geq 0\). (Obtained by summing over \(s\leq 200\) in (A9).)
Instead, these are new black hole microstates made of strongly coupled strings. The Euclideanized, modular-transformed solutions of [53] are small black strings, in the following specific sense: whereas a BTZ/conical defect solution with the same quantum numbers would be nakedly singular, the strings shroud this region by terminating the spacetime. These geometries may be thought of as quantum AdS\({}_{3}\) versions of the stringy cloak of [56] and other small black strings (e.g. [57; 58]). That they are "small" - the modular transforms of a single string, rather than a parametrically large number of them - is also visible thermodynamically in the different functional forms of the stringy and BTZ microcanonical entropies: \(\rho_{\text{string},j}(t)\) is oscillatory as a function of \(t\), unlike the BTZ density \(\rho_{\text{MWK},j}(t)\), and is exponentially subleading to \(\rho_{\text{MWK},j}(t)\), term-by-term in the modular sum, away from the near-extremal regime \(\xi t\lesssim\mathcal{O}(1)\) where the BTZ black hole becomes highly quantum [59]. This fluctuating behavior signals that the stringy degrees of freedom give genuinely new contributions to the black hole Hilbert space, distinct from the semiclassical BTZ geometries or quotients thereof.
## III \(SL(2,\mathbb{Z})\) spectral representation
As a slight detour, it is enlightening to give an alternative representation of \(Z_{\text{string}}(\tau)\). The spectral gap condition \(\Delta_{*}=2\xi\) implies that \(Z_{\text{string}}(\tau)\in L^{2}(\mathcal{F})\), and hence admits a harmonic decomposition in the \(SL(2,\mathbb{Z})\) spectral eigenbasis, comprised of the completed Eisenstein series \(E^{*}_{\frac{1}{2}+i\omega}(\tau)\) with \(\omega\in\mathbb{R}\) and Maass cusp forms \(\phi_{n}(\tau)\) (e.g. [60; 61; 62]). Denoting their spin-\(j\) Fourier coefficients as \(\mathfrak{a}^{(s)}_{j}\) and \(\mathfrak{b}^{(n)}_{j}\), respectively, and using the conventions of [21], we have
\[\begin{split} Z_{\text{string}}(\tau)&=\int_{\mathcal{C}_{\text{crit}}}\mathfrak{a}^{(s)}_{j_{*}}\frac{\Gamma\left(\frac{\frac{1}{2}-s}{2}\right)\Gamma\left(\frac{s-\frac{1}{2}}{2}\right)}{\Lambda(s)\Lambda(1-s)}E^{*}_{s}(\tau)\\ &\quad+\sum_{n=1}^{\infty}\mathfrak{b}^{(n)}_{j_{*}}\Gamma\Big{(}-\frac{i\omega_{n}}{2}\Big{)}\Gamma\Big{(}\frac{i\omega_{n}}{2}\Big{)}\phi_{n}(\tau)\end{split} \tag{19}\]
where \(\mathcal{C}_{\text{crit}}\) denotes (\((4\pi i)^{-1}\) times) contour integration along \(s=\frac{1}{2}+i\omega\), and \(\Lambda(s):=\pi^{-s}\Gamma(s)\zeta(2s)\) is the completed Riemann zeta function. (See Appendix C.)
Presenting \(Z_{\text{string}}(\tau)\) in spectral form reveals some interesting features and curiosities.
First, the modular average of \(Z_{\text{string}}(\tau)\) vanishes:
\[\langle Z_{\text{string}}\rangle:=\int_{\mathcal{F}}\frac{dxdy}{y^{2}}Z_{\text {string}}(\tau)=0\,. \tag{20}\]
This follows from the vanishing of the Eisenstein spectral overlap in (19) at \(s=0\), which defines the modular average in general. We note that this property is shared by Narain CFTs [61].
Next, \(Z_{\text{string}}(\tau)\) may be written as the action of an \(SL(2,\mathbb{Z})\) Hecke operator \(T_{\xi/2}\)[63] on a "primitive" partition function, \(\mathcal{Z}_{\text{string}}(\tau)\), defined as \(Z_{\text{string}}(\tau)\) but with the Fourier coefficients evaluated at \(j_{*}=1\):
\[\mathcal{Z}(\tau)=Z_{\text{MWK}}(\tau)+T_{\xi/2}\,\mathcal{Z}_{\text{string}}(\tau)\,. \tag{21}\]
In this way, the entire family of unitary partition functions indexed by \(\xi\) may be generated by a Hecke action, implementing shifts in central charge. This shares a superficial likeness with Witten's construction of holomorphic extremal CFT partition functions [64], with obvious differences.
Finally, there is a profound conjecture in number theory, the "horizontal" Sato-Tate conjecture for Maass cusp forms of \(SL(2,\mathbb{Z})\), which has interesting consequences for the spectral decomposition [65; 66; 67; 68]. The conjecture states that for prime \(j\to\infty\) and any fixed \(n\), the normalized Fourier coefficients \(\mathfrak{b}^{(n)}_{j}/\mathfrak{b}^{(n)}_{1}\) are equidistributed with respect to Wigner's semicircle distribution. This (and \(\mathfrak{b}^{(n)}_{1}\neq 0\), which follows from Hecke relations applied to Hecke-Maass cusp forms) implies that
\[\lim_{j\to\infty}\langle\!\langle\mathfrak{b}^{(n)}_{j}\rangle\!\rangle=0\qquad \text{(fixed $n$)} \tag{22}\]
where \(\langle\!\langle\cdot\rangle\!\rangle\) indicates a statistical average. Therefore, even though \((Z_{\text{string}},\phi_{n})\propto\mathfrak{b}^{(n)}_{j_{*}}\neq 0\), they vanish on average in the large central charge limit \(j_{*}\to\infty\)[69]. In this sense, the Eisenstein term seems to more directly underlie the unitarity of \(\mathcal{Z}(\tau)\). It would be nice to understand this from a physical, quantum chaos point of view.
## IV Summary and random (matrix) comments
Our main result is the construction of the unitary partition function \(\mathcal{Z}(\tau)\) given in (5), with the spectral gaps depicted in Fig. 1.
From the AdS\({}_{3}\) gravity point of view, despite the dimension gap above the vacuum state to the black hole threshold \(\Delta_{*}=\frac{c-1}{12}\), this is not a semiclassical pure gravity path integral in the strict sense, due to the spinning states \(\mathcal{O}_{*}\) with sub-threshold twist. At any finite spin, these states are not visible, and the theory contains only black hole states. The degeneracies of all discrete states are integers.
We have advanced a bulk interpretation of \(\mathcal{O}_{*}\) as a strongly coupled spinning string, though other interpretations may well be possible (or preferred). We view this as an indicative toy model for a genuine string theory compactification to AdS\({}_{3}\) pure gravity. A complete approach would include higher Regge trajectories; corrections to the spectrum from excitations around the spin-\(j\) ground states of [53]; and the other ingredients, such as fluxes and their brane sources, required to solve the strongly coupled string field equations (whatever they may be).
### Randomness
Our construction cures the negativity from the sum over smooth bulk saddles semiclassically, rather than quantum mechanically. Quantum effects are not just present in a consistent theory, but are expected to be crucial in the engineering of a bona fide theory of AdS\({}_{3}\) pure gravity: there are strong indications that if such a theory exists, off-shell geometries encoding random matrix behavior of the chaotic spectrum play a central role in unitarizing the spectrum [70; 20; 8]. An explicit determination of the leading-order random matrix contribution to the semiclassical path integral of pure gravity with torus boundary, denoted as \(Z_{\rm RMT}(\tau)\), was made in [21].
In any theory of semiclassical AdS\({}_{3}\) gravity (pure or otherwise), the black hole spectrum is chaotic, and its path integral should encode random fluctuations for quantum consistency. Such random matrix contributions are absent in \(\mathcal{Z}(\tau)\). We may explain this fact, as well as the continuous spectrum in the chaotic regime \(t>0\), by interpreting \(\mathcal{Z}(\tau)\) as the partition function of a microscopic compact CFT that has been subject to coarse-graining.
As shown in [21] using a formalism built on a CFT trace formula, the random matrix contribution to the density of states, properly understood, vanishes upon coarse-graining the spectrum over a suitable microcanonical window in twist, \(\delta t\)[71]. Because this window is necessarily larger than the exponentially small mean level spacing of the chaotic spectrum, the coarse-graining simultaneously explains both the absence of random matrix contributions to (5) and its continuous spectrum while remaining compatible with a microscopic CFT interpretation. Given our explicit construction, we can determine \(\delta t\): it is the characteristic wavelength of the oscillations of \(\rho_{\rm string,j}(t)\) in (12), namely, \(\delta t\sim 1/\xi\). We emphasize that this coarse-graining interpretation does not rely on a \(\xi\gg 1\) limit, and is compatible with compactness of a putative underlying CFT; there could, of course, be as-yet-unknown bootstrap constraints that rule this possibility out.
Note that in a \(\xi\gg 1\) limit, \(\mathcal{Z}(\tau)\) is also compatible with other interpretations, in particular with a hypothetical ensemble average over (possibly near-)CFTs, or with other, perhaps independent, constructions of "approximate CFT" [72]. While we have presented a microscopic CFT interpretation in part to emphasize that a departure from standard AdS/CFT physics is not required at this level, semiclassical gravity seems unable to distinguish among these [72; 73; 74], at least perturbatively in \(G_{N}\).
A complementary view on this coarse-grained interpretation comes from the formalism of [21]. Since \(Z_{\rm string}(\tau)\) is the modular completion of a non-black hole state, we do not expect it to encode random matrix behavior per se [75; 76; 21; 72]. Applying the results of [21] to \(Z_{\rm string}(\tau)\) helps to ratify this perspective. In (19) we provided the \(SL(2,\mathbb{Z})\) spectral decomposition of \(Z_{\rm string}(\tau)\). A canonical diagnostic of random matrix universality is the presence of a linear ramp in the coarse-grained spectral form factor, with a specific coefficient prescribed by the random matrix ensemble. We can ask whether \(Z_{\rm string}(\tau)\) generates this ramp after squaring and taking the diagonal approximation. A necessary and sufficient condition for the ramp was derived in [21], as an exponential decay condition on the spectral overlaps at \(\omega\to\infty\). One readily checks that \(Z_{\rm string}(\tau)\) does not satisfy this criterion, instead decaying as a power law [77].
### Stringiness
On the other hand, \(Z_{\rm string}(\tau)\) exhibits some behavior that lies somewhere "in between" chaotic and non-chaotic. Define a microcanonical coarse-graining over mean twist,
\[\overline{f(t_{1})f(t_{2})}:=\int_{0}^{\infty}dt^{\prime}\,f(t^{\prime}+ \epsilon)f(t^{\prime}-\epsilon)W(t-t^{\prime}) \tag{23}\]
where \(t=\frac{t_{1}+t_{2}}{2}\) and \(\epsilon=\frac{t_{1}-t_{2}}{2}\). Applying this to \(f(t)=\rho_{\rm string,j}(t)\) at fixed \(j\) using (12) produces a non-zero variance upon coarse-graining over windows \(\delta t\gtrsim\frac{1}{\xi}\). However, its oscillatory behavior leads to suppression relative to the disconnected average. In particular, at \(\xi\gg 1\),
\[\frac{\text{Var}(\rho_{j}(t))}{\bar{\rho}_{j}(t)^{2}}\approx e^{-4\pi\sqrt{ \xi(t+j)}}\qquad(\xi\gg 1) \tag{24}\]
where \(\bar{\rho}_{j}(t)=\rho_{\rm MWK,j}(t)\). In the extremal limit \(t\to 0\), the suppression factor is \(e^{-S_{0,j}}\), where \(S_{0,j}=4\pi\sqrt{\xi j}\) is the extremal spin-\(j\) BTZ black hole entropy. In contrast, wormholes encoding chaotic behavior are suppressed as \(e^{-2S_{0,j}}\) in the extremal limit [78; 79; 8; 21; 8]. It would be worthwhile to understand this intermediate behavior as a non-perturbative effect, possibly associated to strongly coupled strings, in a UV complete AdS\({}_{3}\) gravity path integral.
###### Acknowledgements.
We thank Jacob Abajian, Veronica Collazuol, Scott Collier, Henry Maxfield, Dalimil Mazac, Sridip Pal, Yiannis Tsiares, and Pierfrancesco Urbani for helpful discussions. EP and GD thank the Kavli Institute for Theoretical Physics, Santa Barbara for support during the course of this work. EP also thanks the ICTP Trieste and Kavli IPMU for hospitality. GD also thanks the ICISE in Quy Nhon, Vietnam for hospitality. This research was supported by ERC Starting Grant 853507, and in part by the National Science Foundation under Grant No. NSF PHY-1748958.
## Appendix A Density of states
We write the total spin-\(j\) density of states as
\[\rho_{j}(t)=\rho_{\rm MWK,\,j}(t)+\rho_{\rm string,\,j}(t) \tag{25}\]
The MWK density of states may be written as [3; 19]
\[\begin{split}&\rho_{\mathrm{MWK},j}(t)=\\ &\frac{2}{\sqrt{t\overline{t}}}\sum_{s=1}^{\infty}\bigg{[}f_{j,0;s }\cosh\!\left(\frac{4\pi}{s}\sqrt{\xi\overline{t}}\right)\cosh\!\left(\frac{4 \pi}{s}\sqrt{\xi t}\right)\\ &-f_{j,-1;s}\cosh\!\left(\frac{4\pi}{s}\sqrt{\xi\overline{t}} \right)\cosh\!\left(\frac{4\pi}{s}\sqrt{(\xi-1)t}\right)\\ &-f_{j,1;s}\cosh\!\left(\frac{4\pi}{s}\sqrt{(\xi-1)\overline{t}} \right)\cosh\!\left(\frac{4\pi}{s}\sqrt{\xi t}\right)\\ &+f_{j,0;s}\cosh\!\left(\frac{4\pi}{s}\sqrt{(\xi-1)\overline{t}} \right)\cosh\!\left(\frac{4\pi}{s}\sqrt{(\xi-1)t}\right)\!\bigg{]}\end{split} \tag{10}\]
where \(f_{j,k;s}:=S(j,k;s)/s\) and \(S(j,k;s)\) is a Kloosterman sum,
\[S(j,k;s)=\sum_{0\leq d\leq s-1,\,(d,s)=1}e^{2\pi i\left(\frac{d}{s}j+\frac{d^{-1}}{s}k\right)} \tag{11}\]
where \(d^{-1}\in\mathbb{Z}\) is the multiplicative inverse of \(d\) mod \(s\). The scalar density requires regularization [1; 2]. It is comprised of a delta function piece given by (3), and a continuous piece which we denote by \(\tilde{\rho}_{\mathrm{MWK},0}(t)\):
\[\begin{split}&\tilde{\rho}_{\mathrm{MWK},0}(t)=\\ &\frac{2}{t}\sum_{s=1}^{\infty}\bigg{\{}\frac{\phi(s)}{s}\bigg{[} \sinh^{2}\left(\frac{4\pi}{s}\sqrt{\xi t}\right)+\sinh^{2}\left(\frac{4\pi}{s} \sqrt{(\xi-1)t}\right)\bigg{]}\\ &-2\,\frac{\mu(s)}{s}\bigg{[}\cosh\!\left(\frac{4\pi}{s}\sqrt{ \xi t}\right)\cosh\!\left(\frac{4\pi}{s}\sqrt{(\xi-1)t}\right)-1\bigg{]}\bigg{\}} \end{split} \tag{12}\]
where \(\phi(s)=S(0,0;s)\) is the Euler totient function and \(\mu(s)=S(0,1;s)\) is the Mobius function.
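For concreteness, the Kloosterman sum (11) can be transcribed directly into code; the following Python sketch is our illustration (the function name is ours, not from any released code), and its \(j=k=0\) and \(j=0,k=1\) specializations reproduce \(\phi(s)\) and \(\mu(s)\):

```python
from math import gcd
import cmath

def kloosterman(j, k, s):
    """S(j, k; s): sum over d coprime to s of exp(2*pi*i*(d*j + d^{-1}*k)/s)."""
    total = 0j
    for d in range(s):
        if gcd(d, s) == 1:
            d_inv = pow(d, -1, s)            # multiplicative inverse of d mod s
            total += cmath.exp(2j * cmath.pi * (d * j + d_inv * k) / s)
    return total.real                        # terms pair into complex conjugates

# e.g. kloosterman(0, 0, s) gives the totient phi(s); kloosterman(0, 1, s) gives mu(s)
```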
The string density for \(j\neq 0\) was given in (12), which we repeat here for convenience:
\[\begin{split}\rho_{\mathrm{string},\,j}(t)&=\frac{ 4}{\sqrt{t\overline{t}}}\sum_{s=1}^{\infty}f_{j,j_{*};s}\cos\left(\frac{2\pi}{ s}\sqrt{\xi\overline{t}}\right)\cosh\left(\frac{2\pi}{s}\sqrt{\xi t}\right)\\ &+(j_{*}\to-j_{*},t\leftrightarrow\overline{t})\end{split} \tag{13}\]
Similarly to the MWK case, the scalar density requires regularization. The regularized density contains a delta function piece given in (13), and a continuous piece which we denote by \(\tilde{\rho}_{\mathrm{string},0}(t)\):
\[\begin{split}\tilde{\rho}_{\mathrm{string},0}(t)=& \frac{8}{t}\sum_{s=1}^{\infty}\frac{c_{s}(j_{*})}{s}\times\\ &\bigg{[}\cos\!\left(\frac{2\pi}{s}\sqrt{\xi t}\right)\cosh\! \left(\frac{2\pi}{s}\sqrt{\xi t}\right)-1\bigg{]}\end{split} \tag{14}\]
where \(c_{s}(j_{*})\) is a Ramanujan sum,
\[c_{s}(j_{*})=\sum_{1\leq d\leq s,\,(d,s)=1}e^{2\pi i\frac{d}{s}j_{*}} \tag{15}\]
The total regularized scalar density is given by the sum of these two contributions, together with the delta functions in (13):
\[\rho_{0}(t)=\delta(t+\xi)+(-6+8\sigma_{0}(j_{*}))\delta(t)+\tilde{\rho}_{0}(t) \tag{16}\]
where
\[\tilde{\rho}_{0}(t)=\tilde{\rho}_{\mathrm{MWK},0}(t)+\tilde{\rho}_{\mathrm{ string},0}(t). \tag{17}\]
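The regularized scalar continuum just defined can be evaluated by direct summation; a minimal numerical sketch follows (all helper names and truncation choices are ours), with the string piece weighted by \(d_{*}/2\) as in Appendix B, so that \(d_{*}=2\) corresponds to \(\mathcal{Z}(\tau)\) and other values give the curves of the kind shown in Figs. 3 and 4:

```python
import numpy as np
from math import gcd

def totient(s):
    return sum(1 for d in range(1, s + 1) if gcd(d, s) == 1)

def moebius(s):
    primes, m, p = 0, s, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0                     # squared prime factor
            primes += 1
        p += 1
    if m > 1:
        primes += 1
    return -1 if primes % 2 else 1

def ramanujan(s, j):
    return sum(np.cos(2 * np.pi * d * j / s)
               for d in range(1, s + 1) if gcd(d, s) == 1)

def rho0_tilde(t, xi, d_star=2.0, s_max=200):
    """Continuous scalar density: MWK piece plus (d*/2) times the string piece."""
    j_star = round(xi / 2)                   # xi even, so j* is an integer
    mwk = string = 0.0
    for s in range(1, s_max + 1):
        a = 4 * np.pi * np.sqrt(xi * t) / s
        b = 4 * np.pi * np.sqrt((xi - 1) * t) / s
        mwk += (totient(s) * (np.sinh(a)**2 + np.sinh(b)**2)
                - 2 * moebius(s) * (np.cosh(a) * np.cosh(b) - 1)) / s
        z = a / 2                            # 2*pi*sqrt(xi*t)/s
        string += ramanujan(s, j_star) * (np.cos(z) * np.cosh(z) - 1) / s
    return (2 * mwk + (d_star / 2) * 8 * string) / t

# One curve of Fig. 2: evaluate at t = (x / (2*np.pi))**2 / xi over a grid in x
```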
## Appendix B Positivity
We treat in turn the positivity of the large spin \(j\to\infty\), finite spin \(j\geq 1\), and scalar \(j=0\) sectors of the density (10), focusing mostly on the regime \(\xi\gg 1\). Actually, we consider a more general case in which we have \(d_{*}\) string states: namely, \(Z_{p}(\tau)=Z_{\mathrm{MWK}}(\tau)+\frac{d_{*}}{2}\,Z_{\mathrm{string}}(\tau)\), and correspondingly for the densities. For the partition function \(\mathcal{Z}(\tau)\) defined in the main text, \(d_{*}=2\).
### Positivity at large spin
The MWK density (10) is known to be negative in the extremal limit \(t\to 0\) of large spin \(|j|\to\infty\) (more precisely, for \(t\lesssim t_{0}\sim e^{-2\pi\sqrt{\xi j}}\), dropping a numerical prefactor). We cure this negativity with the states \(\mathcal{O}_{*}\) by design, having chosen their twist to be \(t_{*}=-\frac{\xi}{4}\). The mechanism is the same as described in Sec. 2.1 of [19], building on [3; 18]. In the regime \(t<t_{0}\) with \(j\to\infty\), the MWK density is approximately equal to [19]
\[\rho_{\mathrm{MWK},\,j}(t)\approx\frac{(-1)^{j}}{\sqrt{j}t}\Big{(}e^{2\pi \sqrt{\xi j}}+e^{2\pi\sqrt{(\xi-1)j}}\Big{)} \tag{18}\]
In the same regime, the string density (13) has the same exponential behavior but with a positive coefficient, coming from the state of spin \(-j_{*}\):
\[\rho_{\mathrm{string},\,j}(t)\approx\frac{d_{*}}{\sqrt{j}t}e^{2\pi\sqrt{\xi j}}. \tag{19}\]
The string density cancels the odd-spin negativity for \(d_{*}>1\). Whereas at \(d_{*}=1\) there are subleading negativities to take care of, requiring the addition of higher-twist states [19], choosing \(d_{*}=1+\delta\) for any finite \(\delta>0\) gives a positive extremal density. In the construction of \(\mathcal{Z}(\tau)\) in (5), we chose \(d_{*}=2\), the smallest integer degeneracy which guarantees positivity, as a matter of naturalness. The above discussion applies equally to the regime of large negative spin \(j\to-\infty\), whereupon the negativity is cured by the state \(\mathcal{O}_{*}\) with spin \(+j_{*}\).
### Positivity at \(j\geq 1\)
We now consider finite spin \(j\geq 1\). We also take \(\xi\gg 1\). There are two regimes of twist \(t\) to consider: the extremal limit \(t\to 0\), and fixed \(t\).
In the extremal limit, the arguments given just above are again sufficient to guarantee positivity. In particular, we have \(\xi j\gg 1\) in the present regime of interest; one may confirm upon inspection that the \(\xi\)- and \(j\)-dependence of \(\rho_{\text{MWK},\,j}(t)\) and \(\rho_{\text{string},\,j}(t)\) are such that at \(\xi j\gg 1\), even for finite \(j\), the result of the previous subsection carries through.
Now we consider the regime of fixed \(t\). Since \(\xi j\gg 1\), terms of the form \(\cosh\Bigl{(}\frac{4\pi}{s}\sqrt{\xi(t+j)}\Bigr{)}\) for \(s\geq 2\) are exponentially suppressed with respect to the \(s=1\) term. At fixed \(t\), the MWK density is therefore well-approximated by the \(s=1\) term:
\[\begin{split}&\frac{\sqrt{t}\overline{t}}{2}\rho_{\text{MWK},\,j} (t)\approx\Bigl{[}\cosh\Bigl{(}4\pi\sqrt{\xi t}\Bigr{)}-\cosh\Bigl{(}4\pi\sqrt {(\xi-1)t}\Bigr{)}\Bigr{]}\\ &\quad\times\Bigl{[}\cosh\Bigl{(}4\pi\sqrt{\xi(t+j)}\Bigr{)}- \cosh\Bigl{(}4\pi\sqrt{(\xi-1)(t+j)}\Bigr{)}\Bigr{]}\end{split} \tag{24}\]
This is manifestly positive, and scales as \(\sim e^{4\pi\sqrt{\xi j}}\) times an \(\mathcal{O}(1)\) coefficient. The string density at leading order in \(\xi j\gg 1\) is
\[\begin{split}&\frac{\sqrt{t}\overline{t}}{2d_{*}}\rho_{\text{ string},\,j}(t)=\cos\Bigl{(}2\pi\sqrt{\xi t}\Bigr{)}\cosh\Bigl{(}2\pi\sqrt{ \xi(t+j)}\Bigr{)}\\ &\quad\quad+\sum_{s=1}^{\infty}f_{j,j_{*};s}\cos\biggl{(}\frac{2 \pi}{s}\sqrt{\xi(t+j)}\biggr{)}\cosh\biggl{(}\frac{2\pi}{s}\sqrt{\xi t}\biggr{)} \end{split} \tag{25}\]
In the first line we dropped the exponentially-suppressed \(s>1\) terms (this is allowed because the sum over \(s\) cannot lead to exponential enhancement), whereas no such suppression is present in the second line. Noting that (25) scales as \(\sim e^{2\pi\sqrt{\xi j}}\), we see that the sum of (24) and (25) is positive, as the latter is exponentially suppressed in \(\xi j\gg 1\). As an aside, note that this hierarchy can be overcome if \(d_{*}\) is exponentially large in \(\xi\), a possibility that we discard (in the next subsection we bound \(d_{*}\) by an \(\mathcal{O}(1)\) number).
Summarizing so far, we have shown that \(\rho_{j}(t)>0\) for \(j\geq 1\) and all \(t\) at \(\xi\gg 1\).
### Positivity at \(j=0\)
The scalar sector requires slightly more attention since there is no longer a parametrically large scale that suppresses \(s>1\) terms in the density. Since the MWK density is exponentially large and positive as \(\xi t\gg 1\), any possible negativity will arise only for \(\xi t\lesssim\mathcal{O}(1)\). We can then study the density at fixed \(x:=2\pi\sqrt{\xi t}\), where we also take \(\xi\gg 1\).
We divide the proof into two parts: showing that the sum of (\(s=1\)) and (\(s=2\)) terms is positive for \(d_{*}\) below a critical value; and showing that \(s>2\) terms are individually positive.
#### b.3.1 \((s=1)+(s=2)\)
The sum of the \(s=1,2\) terms of the MWK and string scalar densities, (12) and (14) (the latter times \(\frac{d_{*}}{2}\)), is, at leading order in large \(\xi\),
\[\begin{split}\frac{t}{2}\tilde{\rho}_{0}(t)\big{|}_{s\leq 2}& =2\sinh^{2}x+2d_{*}(\cos x\cosh x-1)\\ &+d_{*}(-1)^{j_{*}}\Bigl{(}\cos\Bigl{(}\frac{x}{2}\Bigr{)}\cosh \Bigl{(}\frac{x}{2}\Bigr{)}-1\Bigr{)}.\end{split} \tag{26}\]
One can easily see numerically that upon increasing \(d_{*}\), this function develops a minimum at \(x_{\text{min}}\) whose value eventually becomes negative; the critical values are approximately
\[\begin{split} d_{*}&\lesssim 4.910,\quad x_{\text{min}} \approx 1.851\qquad(j_{*}\text{ even})\\ d_{*}&\lesssim 5.236,\quad x_{\text{min}} \approx 1.847\qquad(j_{*}\text{ odd})\end{split} \tag{27}\]
If \(d_{*}\) obeys these bounds, then (26) is positive. We can check how these bounds compare to the full sum over \(s\) at finite but large central charge: see Fig. 3. Summing up to \(s=200\) for \(\xi=1000\), which easily ensures convergence, we observe numerically that the density becomes negative for \(d_{*}\gtrsim 7.3\), not far from the limited analytic bound obtained above. The growth of the upper bound as we include more terms in the sum is due to the positivity of the \(s>2\) terms, as we will show next. Note that at smaller \(\xi\), the upper bound actually grows, as seen in Fig. 4: for the smallest value \(\xi=2\) allowed within our construction, we observe positivity for \(d_{*}\lesssim 19.2\).
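The leading-order critical values in (27) can be reproduced with a short scan of (26); a sketch (grid and bracketing interval are ours):

```python
import numpy as np
from scipy.optimize import brentq

xs = np.linspace(1e-3, 12.0, 4001)           # grid in x = 2*pi*sqrt(xi*t)

def f_leading(x, d, even=True):
    """(t/2) * rho~_0 restricted to s <= 2, Eq. (26), at leading order in xi."""
    sign = 1.0 if even else -1.0
    return (2 * np.sinh(x)**2
            + 2 * d * (np.cos(x) * np.cosh(x) - 1)
            + sign * d * (np.cos(x / 2) * np.cosh(x / 2) - 1))

for even, label in [(True, "j* even"), (False, "j* odd")]:
    d_crit = brentq(lambda d: f_leading(xs, d, even).min(), 2.0, 8.0)
    print(label, d_crit)                     # ~4.910 (even), ~5.236 (odd)
```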
#### b.3.2 \(s\geq 3\) terms
At leading order in \(\xi\gg 1\), the density of states for \(s\geq 3\) is:
\[\begin{split}\frac{ts}{2}\tilde{\rho}_{0}(t)\big{|}_{s\geq 3}& =2(\phi(s)-\mu(s))\sinh^{2}(2x_{s})\\ &+2d_{*}c_{s}(j_{*})(\cos x_{s}\cosh x_{s}-1),\end{split} \tag{28}\]
where
\[x_{s}:=\frac{x}{s}\,. \tag{10}\]
Figure 3: Plot of the regularized scalar density of states \(\tilde{\rho}_{0}(x)\) with \(\xi=1000\), as a function of \(x=2\pi\sqrt{\xi t}\), with degeneracy ranging from \(d_{*}=3.3\) (red) to \(d_{*}=7.3\) (blue) in half-integer steps. For \(d_{*}\gtrsim 7.3\), the density develops a negative region. (Obtained by summing over \(s\leq 200\) in (23).)
Denoting the right-hand side of (28) as \(f(x_{s})\), we observe that there is a minimum at \(x_{s}=0\) for which \(f(0)=f^{\prime}(0)=0\). As a consequence, one way to ensure positivity is to demand convexity, \(f^{\prime\prime}(x_{s})>0\), for all \(x_{s}>0\); this is of course not a necessary condition, but it is sufficient to achieve our goal of demonstrating existence of a range of \(d_{*}\) in which these terms are positive. Imposing convexity gives the inequality
\[16(\phi(s)-\mu(s))\cosh(4x_{s})-4d_{*}c_{s}(j_{*})\sin x_{s} \sinh x_{s}>0\,. \tag{11}\]
Using the bounds
\[|c_{s}(j_{*})|<\phi(s)\,,\quad|\mu(s)|\leq 1\,,\quad\phi(s\geq 3)\geq 2 \tag{12}\]
gives rise to the strongest inequality,
\[d_{*}<2\frac{\cosh(4x_{s})}{\sinh(x_{s})}\,. \tag{13}\]
If this is satisfied for all \(x_{s}\), then so is (11). Minimizing the right-hand side gives
\[d_{*}\lesssim 11.888\,. \tag{14}\]
This ensures positivity of each individual \(s\geq 3\) term in the density. This is compatible with the previously derived bounds for the \(s=1,2\) terms.
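The quoted bound follows from a one-line minimization of the right-hand side of the inequality above; as a check (our sketch):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Minimize 2*cosh(4x)/sinh(x) over x > 0
res = minimize_scalar(lambda x: 2 * np.cosh(4 * x) / np.sinh(x),
                      bounds=(1e-4, 2.0), method='bounded')
print(res.fun)                               # ~11.888, attained near x ~ 0.31
```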
Altogether, we conclude that for a finite range of \(d_{*}>1\), the density of states is positive, \(\rho_{j}(t)>0\), for all spins \(j\) and twists \(t\) at \(\xi\gg 1\).
## Appendix C Spectral decomposition of \(Z_{\rm string}(\tau)\)
In this appendix we derive (19). We directly present the relevant calculations, directing the reader to [60, 61, 62] for details on the \(SL(2,\mathbb{Z})\) spectral formalism, and [80, 21, 76] for its further use in the 2d CFT context.
We wish to compute the Petersson inner product
\[(Z_{\rm string},\psi_{\omega}):=\int_{\mathcal{F}}\frac{dxdy}{y^{2}}Z_{\rm string }(\tau)\overline{\psi}_{\omega}(\tau) \tag{15}\]
where \(\psi_{\omega}(\tau)=\{E_{\frac{1}{2}+i\omega}(\tau),\phi_{n}(\tau)\}\) are the \(SL(2,\mathbb{Z})\) eigenbasis elements. Since \(Z_{\rm string}(\tau)\) is a Poincare sum, the overlaps with the Eisenstein series and Maass cusp forms can be easily computed using the "unfolding trick." This results in the following integral for the Eisenstein series:
\[(Z_{\rm string},E_{\frac{1}{2}+i\omega})=\frac{4\mathsf{a}_{j_{*}}^{(\frac{1} {2}+i\omega)}}{\Lambda\big{(}\frac{1}{2}-i\omega\big{)}}\int_{0}^{\infty} \frac{dy}{y}K_{i\omega}(2\pi j_{*}y) \tag{16}\]
where \(\mathsf{a}_{j}^{(\frac{1}{2}+i\omega)}\) are the Eisenstein Fourier coefficients
\[\mathsf{a}_{j}^{(\frac{1}{2}+i\omega)}=\frac{2\sigma_{2i\omega}(j)}{j^{i\omega }}\,, \tag{17}\]
which obey reflection symmetry, \(\mathsf{a}_{j}^{(\frac{1}{2}+i\omega)}=\mathsf{a}_{j}^{(\frac{1}{2}-i\omega)}\). The cusp form overlap is obtained similarly:
\[(Z_{\rm string},\phi_{n})=4\mathsf{b}_{j_{*}}^{(n)}\int_{0}^{\infty}\frac{dy}{y}K_{i\omega_{n}}(2\pi j_{*}y), \tag{18}\]
where \(\mathsf{b}_{j}^{(n)}\) are the cusp form Fourier coefficients, known only approximately via numerics [81]. The integral is divergent at the origin. Regularizing the divergence is straightforward: introducing
\[\begin{split}(Z_{\rm string},E_{\frac{1}{2}+i\omega})_{\epsilon}: =\frac{4\mathsf{a}_{j_{*}}^{\left(\frac{1}{2}+i\omega\right)}}{\Lambda\big{(} \frac{1}{2}-i\omega\big{)}}\int_{0}^{\infty}\frac{dy}{y^{1-\epsilon}}K_{i \omega}(2\pi j_{*}y)\\ =\frac{(\pi j_{*})^{-\epsilon}\mathsf{a}_{j_{*}}^{\left(\frac{1} {2}+i\omega\right)}}{\Lambda\big{(}\frac{1}{2}-i\omega\big{)}}\Gamma\bigg{(} \frac{\epsilon-i\omega}{2}\bigg{)}\Gamma\bigg{(}\frac{\epsilon+i\omega}{2} \bigg{)}\end{split} \tag{19}\]
the overlaps may be defined by removing the regulator,
\[\begin{split}(Z_{\rm string},E_{\frac{1}{2}+i\omega})&=\lim_{\epsilon\to 0}(Z_{\rm string},E_{\frac{1}{2}+i\omega})_{\epsilon}\\ &=\frac{\mathsf{a}_{j_{*}}^{\left(\frac{1}{2}+i\omega\right)}}{\Lambda\big{(}\frac{1}{2}-i\omega\big{)}}\Gamma\bigg{(}-\frac{i\omega}{2}\bigg{)}\Gamma\bigg{(}\frac{i\omega}{2}\bigg{)}.\end{split} \tag{20}\]
and likewise for the cusp form overlap (14). This yields the spectral decomposition (19). We note that the regularization used here is equivalent to the following standard regularization of Poincare sums over seed primaries of fixed dimensions,
\[Z_{h,\overline{h}}^{\epsilon}(\tau):=\sum_{\gamma\in SL(2,\mathbb{Z})/\Gamma_{ \infty}}\mathrm{Im}(\gamma\tau)^{\frac{1}{2}+\epsilon}q_{\gamma}^{h-\xi} \overline{q}_{\gamma}^{\overline{h}-\xi}\,, \tag{21}\]
where \((h,\overline{h})=(\frac{3}{4}\xi,\frac{5}{4}\xi)\) for our state \(\mathcal{O}_{*}\).
Figure 4: Plot of the regularized scalar density of states \(\tilde{\rho}_{0}(x)\) with \(\xi=2\), as a function of \(x=2\pi\sqrt{\xi t}\), with degeneracy ranging from \(d_{*}=3.2\) (red) to \(d_{*}=19.2\) (blue) in steps of two. For \(d_{*}\gtrsim 19.2\), the density develops a negative region – a larger critical value than for \(\xi=1000\). (Obtained by summing over \(s\leq 200\) in (11).)
### Re-deriving the scalar density
As a consistency check, we can re-derive the scalar density \(\rho_{\text{string},0}(t)\) from the spectral decomposition. The scalar piece of the regularized partition function is
\[\frac{Z^{\epsilon}_{\text{string},0}(y)}{2\sqrt{y}(\pi j_{*})^{-\epsilon}}=\int_{\mathcal{C}_{\text{crit}}}\frac{y^{i\omega}\mathsf{a}_{j_{*}}^{(\frac{1}{2}+i\omega)}}{\Lambda\Big{(}\frac{1}{2}-i\omega\Big{)}}\Gamma\bigg{(}\frac{\epsilon+i\omega}{2}\bigg{)}\Gamma\bigg{(}\frac{\epsilon-i\omega}{2}\bigg{)} \tag{10}\]
where \(\int_{\mathcal{C}_{\text{crit}}}=\frac{1}{4\pi}\int_{-\infty}^{\infty}d\omega\) is the integration along the critical line. In writing (10) we have used the scalar Fourier modes \(E^{*}_{*,0}(y)=\Lambda(s)y^{s}+\Lambda(1-s)y^{1-s}\) and \(\phi_{n,0}(y)=0\). We now perform contour integration for complex \(z:=i\omega\) by deforming to a new contour, \(\mathcal{C}\), a semicircle in the left half plane \(\text{Re}(z)<0\) such that \(y^{z}\) decays at infinity. The integrand vanishes factorially on the arc at infinity due to the \(\Lambda\big{(}\frac{1}{2}-i\omega\big{)}\) in the denominator. The poles inside the contour come from \(\Gamma\big{(}\frac{\epsilon+z}{2}\big{)}\) at \(z=-2k-\epsilon\) with \(k=0,1,\dots\). The integral (10) is then given as a sum over residues,
\[\frac{Z^{\epsilon}_{\text{string},0}(y)}{\sqrt{\pi}(\pi j_{*})^{- \epsilon}}=(k=0\text{ term})\ +\] \[\sqrt{y}\sum_{k=1}^{\infty}\frac{(-1)^{k}}{k!}\frac{4\sigma_{4k+2 \epsilon}(j_{*})\Gamma(k+\epsilon)}{\Gamma(\frac{1}{2}+\epsilon+2k)\zeta(1+2 \epsilon+4k)}\bigg{(}\frac{\pi}{j_{*}y}\bigg{)}^{2k+\epsilon} \tag{11}\]
where we used \(\Lambda(s)=\pi^{-s}\Gamma(s)\zeta(2s)\) and the explicit Fourier coefficients (10). We have separated the \(k=0\) term because as we remove the regulator \(\epsilon\to 0\), two simple poles at \(z=\pm\epsilon\) coalesce into a double pole at \(z=0\), on the contour. We will thus treat separately the \(k>0\) terms, for which the regulator can be trivially removed, and the \(z=\pm\epsilon\) poles.
Let us now Laplace transform to the density of states,
\[Z_{\text{string},0}(y)=\sqrt{y}\int_{0}^{\infty}dt\,e^{-4\pi yt}\rho_{\text{ string},0}(t). \tag{12}\]
We have written the density in terms of the reduced twist, \(t=\Delta/2-\xi\) for scalars, which is related to the density as a function of dimension \(\Delta\) through \(\rho(\Delta)d\Delta=\rho(t)dt\). The regularized partition function (11) gives a regularized density
\[\rho_{\text{string},0}(t)=(k=0\text{ term})\ +\] \[\frac{8\sqrt{\pi}}{t}\sum_{k=1}^{\infty}\frac{(-1)^{k}\sigma_{4k} (j_{*})}{\Gamma(1+2k)\Gamma(\frac{1}{2}+2k)\zeta(1+4k)}\bigg{(}\frac{4\pi^{2} t}{j_{*}}\bigg{)}^{2k} \tag{13}\]
where we have removed the regulator in the second line. Using the identity
\[\sigma_{z}(j)=\zeta(z+1)j^{z}\sum_{s=1}^{\infty}\frac{c_{s}(j)}{s^{z+1}}\,, \tag{14}\]
swapping the order of the sums and performing some simplifications,
\[\rho_{\text{string},0}(t)=(k=0\text{ term})\ +\] \[\frac{8}{t}\sum_{s=1}^{\infty}\frac{c_{s}(j_{*})}{s}\sum_{k=1}^{ \infty}\frac{(-1)^{k}}{(4k)!}\bigg{(}\frac{2\pi\sqrt{2\xi t}}{s}\bigg{)}^{4k}. \tag{15}\]
The second line can be resummed to reproduce (10), the continuous part of the scalar density.
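The resummation uses \(\sum_{k\geq 0}(-1)^{k}\,w^{4k}/(4k)!=\cos(w/\sqrt{2})\cosh(w/\sqrt{2})\), applied at \(w=2\pi\sqrt{2\xi t}/s\); a quick numerical check (our illustration):

```python
import numpy as np
from math import factorial

w = 1.7
lhs = sum((-1)**k * w**(4 * k) / factorial(4 * k) for k in range(12))
rhs = np.cos(w / np.sqrt(2)) * np.cosh(w / np.sqrt(2))
assert abs(lhs - rhs) < 1e-12                # agrees to machine precision
```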
Finally, we return to the \(k=0\) term, still with the regulator, which is explicitly given by
\[\rho_{\text{string},0}(t)\big{|}_{k=0}=\frac{4\sqrt{\pi}}{t(\pi j _{*})^{\epsilon}}\frac{\sigma_{2\epsilon}(j_{*})}{\Gamma(\frac{1}{2}+\epsilon) \zeta(1+2\epsilon)}\bigg{(}\frac{4\pi^{2}t}{j_{*}}\bigg{)}^{\epsilon} \tag{16}\]
To regulate the divergence as \(\epsilon\to 0\) and \(t\to 0\), similarly to [19] we integrate from \(t=0\) up to some \(t_{*}>0\) and use \(\epsilon\zeta(1+2\epsilon)\rightarrow\frac{1}{2}\) as \(\epsilon\to 0\) to arrive at
\[\lim_{\epsilon\to 0}\bigg{[}\int_{0}^{t_{*}}dt\rho_{\text{string},0}(t) \big{|}_{k=0}\bigg{]}=8\sigma_{0}(j_{*}). \tag{17}\]
Together with the continuous part of the density derived above, this reproduces the full result (10).
|
2308.10852 | Uncertainty benchmarks for time-dependent transport problems | Verification solutions for uncertainty quantification are presented for time
dependent transport problems where $c$, the scattering ratio, is uncertain. The
method of polynomial chaos expansions is employed for quick and accurate
calculation of the quantities of interest and uncollided solutions are used to
treat part of the uncertainty calculation analytically. We find that
approximately six moments in the polynomial expansion are required to represent
the solutions to these problems accurately. Additionally, the results show that
if the uncertainty interval spans c=1, which means it is uncertain whether the
system is multiplying or not, the confidence interval will grow in time.
Finally, since the QoI is a strictly increasing function, the percentile values
are known and can be used to verify the accuracy of the expansion. These
results can be used to test UQ methods for time-dependent transport problems. | William Bennett, Ryan G. McClarren | 2023-08-21T16:48:55Z | http://arxiv.org/abs/2308.10852v2 | # Uncertainty Benchmarks for Time-Dependent Transport Problems
###### Abstract
Uncertainty quantification results are presented for a well known verification solution, the time dependent transport infinite plane pulse. The method of polynomial chaos expansions (PCE) is employed for quick and accurate calculation of the quantities of interest. Also, the method of uncollided solutions is used in this problem to treat part of the uncertainty calculation analytically.
Uncertainty quantification, transport, radiative transfer
## 1 Introduction
The time dependent, isotropic scattering transport equation with a delta function source is
\[\left(\frac{\partial}{\partial t}+\mu\frac{\partial}{\partial x}+1\right)\psi(x,t,\mu)=\frac{c}{2}\,\phi(x,t)+\frac{1}{2}\,\delta(x)\delta(t), \tag{1}\]
where \(\psi(x,t,\mu)\) is the angular flux and \(\phi(x,t)=\int_{-1}^{1}\!d\mu^{\prime}\,\psi(x,t,\mu^{\prime})\) is the scalar flux. \(\mu\) is the cosine of the angle between the direction of flight of a particle and the \(x\)-axis. \(c\) is the scattering ratio, defined as \(c\equiv\sigma_{s}/\sigma_{t}\), where \(\sigma_{s}\) is the scattering cross section and \(\sigma_{t}\) is the total (absorption plus scattering) cross section.
The linearity of Eq. (1) allows the scalar flux to be divided into the sum of the uncollided and collided parts (\(\phi=\phi_{u}+\phi_{c}\)), where the uncollided flux has not experienced scattering and the collided flux has. [1] gives the uncollided solution to Eq. (1),
\[\phi_{u}=\frac{1}{2}\frac{\exp{(-t)}}{t}\Theta\left(1-\frac{|x|}{t}\right), \tag{2}\]
where \(\Theta\) is the Heaviside step function. The collided solution is
\[\phi_{\rm c}(x,t)=c\left(\frac{e^{-t}}{8\pi}\left(1-\eta^{2}\right)\int_{0}^{ \pi}\!du\sec^{2}\left(\frac{u}{2}\right){\rm Re}\left[\xi^{2}e^{\frac{ct}{2}(1 -\eta^{2})\xi}\right]\right)\Theta(1-|\eta|), \tag{3}\]
where
\[\xi(u,\eta)=\frac{\log{q}+i\,u}{\eta+i\tan(\frac{u}{2})} \tag{4}\]
and
\[q=\frac{1+\eta}{1-\eta},\qquad\eta=\frac{x}{t}. \tag{5}\]
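Eqs. (2)-(5) translate directly into a short quadrature routine; a minimal sketch (function names ours) integrates over \(u\in(0,\pi)\) with Gauss-Legendre nodes:

```python
import numpy as np

def phi_u(x, t):
    """Uncollided scalar flux, Eq. (2)."""
    return 0.5 * np.exp(-t) / t if abs(x) < t else 0.0

def phi_c(x, t, c, n=400):
    """Collided scalar flux, Eq. (3), by Gauss-Legendre quadrature on (0, pi)."""
    eta = x / t
    if abs(eta) >= 1.0:
        return 0.0
    q = (1 + eta) / (1 - eta)
    u, w = np.polynomial.legendre.leggauss(n)
    u = 0.5 * np.pi * (u + 1.0)              # map nodes from (-1, 1) to (0, pi)
    w = 0.5 * np.pi * w
    xi = (np.log(q) + 1j * u) / (eta + 1j * np.tan(0.5 * u))
    integrand = np.real(xi**2 * np.exp(0.5 * c * t * (1 - eta**2) * xi)) \
                / np.cos(0.5 * u)**2
    return c * np.exp(-t) / (8 * np.pi) * (1 - eta**2) * np.dot(w, integrand)

# total scalar flux at (x, t): phi_u(x, t) + phi_c(x, t, c)
```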
Since the source in this configuration is a delta function, the only physical parameter is the scattering ratio, \(c\). To introduce uncertainty into the system, \(c\) is defined as an uncertain parameter,
\[c=\overline{c}+a_{1}\theta, \tag{6}\]
where \(\overline{c}\) is a known mean value, \(a_{1}\) is a constant, and \(\theta\) is a uniform random variable, \(\theta\sim\mathcal{U}[-1,1]\). With this definition, \(a_{1}>0\) is the half-width of the uncertain interval centered on the mean \(\overline{c}\). It is noteworthy that if the uncertain interval extends from \(c>1\) to \(c<1\), there is uncertainty in whether the system is multiplying or decaying. Also, the uncollided flux (Eq. (2)) has no uncertainty. This is a consequence of the uncollided particles having no interaction with the material.
It is assumed that the magnitude of \(a_{1}\) is small enough so that \(c\) is always positive. For a uniform random variable, the probability density function (PDF) is defined as
\[f(\theta)=\begin{cases}\frac{1}{2}&\theta\in[-1,1]\\ 0&\text{otherwise}\end{cases}. \tag{7}\]
With these choices for our uncertain parameters, the expectation of the scalar flux is defined,
\[E[\phi]=E[\phi_{u}]+E[\phi_{c}]=\phi_{u}+\frac{1}{2}\int_{-1}^{1}\!d\theta\,\phi_{c}(c). \tag{8}\]
The variance is also found,
\[\mathrm{VAR}[\phi]=E[\phi^{2}]-E[\phi]^{2}, \tag{9}\]
\[\mathrm{VAR}[\phi]=\frac{1}{2}\int_{-1}^{1}\!d\theta\;\phi_{c}(c)^{2}-\frac{1 }{4}\left(\int_{-1}^{1}\!d\theta\;\phi_{c}(c)\right)^{2}. \tag{10}\]
The moments of the collided scalar flux may be calculated with numerical quadrature, where Eq. (3) is the integrand. These results will be useful for benchmarking the polynomial expansion of the solution. It is possible to carry out the integral over the uncertain variable analytically, which gives the expectation as a single quadrature,
\[E[\phi_{c}]=\frac{1}{8\pi a_{1}\left(\eta^{2}-1\right)t^{2}} \int_{0}^{\pi}\!du\;\sec^{2}\left(\frac{u}{2}\right)\mathrm{Re}\bigg{[}e^{- \frac{1}{2}\left(\eta^{2}-1\right)\xi t(a_{1}+\overline{c})-t}\times\\ \Big{(}\big{(}\eta^{2}-1\big{)}\,\xi t\,(a_{1}+\overline{c})+e^{a _{1}\left(\eta^{2}-1\right)\xi t}\left(\left(\eta^{2}-1\right)\xi t\,(a_{1}- \overline{c})-2\right)+2\Big{)}\bigg{]}\Theta(1-|\eta|). \tag{11}\]
The second moment, which is required to find the variance, must be calculated with a double numerical quadrature.
## 2 Polynomial Chaos
While it is possible to calculate the moments of the quantity of interest (QoI) with integration, more statistically robust quantile measures require Monte Carlo sampling. If the QoI is difficult to calculate, arriving at an accurate estimate of, for example, the median could be very expensive. If, however, the solution is represented as a polynomial expansion in the uncertain variables, evaluation is trivial once the coefficients have been calculated. This is the motivation for PCE. Therefore, the scalar flux as a function of the scattering ratio is approximated by,
\[\phi(c)=\phi_{u}+\sum_{j=0}^{J}c_{j}P_{j}(\theta), \tag{12}\]
where \(P_{j}\) are Legendre polynomials. The coefficients in the expansion are found by,
\[c_{j}=\frac{2j+1}{2}\int_{-1}^{1}d\theta\,\phi_{c}(c)P_{j}(\theta). \tag{13}\]
Representing the collided flux as a polynomial pays immediate dividends. The orthogonality of the Legendre polynomials (every \(P_{j}\) with \(j\geq 1\) integrates to zero over \([-1,1]\)) allows us to find the expectation exactly from the zeroth expansion coefficient,
\[E[\phi]=\phi_{u}+c_{0}, \tag{14}\]
where \(c_{0}\) can be interpreted as the expected value of the collided flux. The orthogonality of the polynomials may also be employed to simplify the variance,
\[\mathrm{VAR}[\phi]=\sum_{j=0}^{J}\frac{1}{2j+1}c_{j}^{2}-c_{0}^{2}=\sum_{j=1}^ {J}\frac{1}{2j+1}c_{j}^{2}. \tag{15}\]
The variance is not dependent on the uncollided solution, which is expected. While this method can be employed to find the skewness, kurtosis, etc., we limit our study to expectation and variance. The convergence of the root mean square error of the variance as a function of the order of the polynomial expansion is shown in Figure 1 for a case with high variance. The linear slope of the error on the logarithmic scale indicates spectral convergence.
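Eqs. (13)-(15) are straightforward to implement; a minimal sketch (names ours), reusing the \(\phi_{c}\) routine sketched above:

```python
import numpy as np
from scipy.special import eval_legendre

def pce_coefficients(collided_flux, c_bar, a1, J=12, n_quad=64):
    """Coefficients c_j of Eq. (13) by Gauss-Legendre quadrature in theta."""
    theta, w = np.polynomial.legendre.leggauss(n_quad)
    vals = np.array([collided_flux(c_bar + a1 * th) for th in theta])
    return np.array([0.5 * (2 * j + 1) * np.dot(w, vals * eval_legendre(j, theta))
                     for j in range(J + 1)])

# Moments via Eqs. (14)-(15), e.g. at x = 0.2, t = 5 with c ~ U[0.5, 1.5]:
coeffs = pce_coefficients(lambda c: phi_c(0.2, 5.0, c), c_bar=1.0, a1=0.5)
mean = phi_u(0.2, 5.0) + coeffs[0]
variance = sum(coeffs[j]**2 / (2 * j + 1) for j in range(1, len(coeffs)))
```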
At \(t=1\) and \(t=5\) mean free times, the mean and plus and minus one standard deviation are shown in Figures 2 and 3. A ten percent uncertainty is modelled in the left panels and a constant uncertainty on the right. For the same uncertainties, the variance increases as the scattering ratio grows closer to one at both early and late times. At later times, when the difference between a multiplying solution and a decaying solution is more drastic, the variance of the solution with \(c\) centered around one is much greater than that of solutions whose scattering ratios are always less than one.
To show quantile results, the expansion is sampled with the Sobol sequence [2], which is quasi-random and converges faster than a random sequence. Results were calculated from roughly five million samples and are shown in Figures 4 and 5. These results show, like the moment results, that the interval in which a sample can be expected to fall widens as the scattering ratio tends to one.
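Continuing the sketch above, the quantile estimates can be obtained with SciPy's Sobol generator (the sample count here is ours, a power of two near the roughly five million samples quoted):

```python
import numpy as np
from scipy.stats import qmc

theta = 2.0 * qmc.Sobol(d=1, scramble=True).random(2**22).ravel() - 1.0
samples = np.polynomial.legendre.legval(theta, coeffs)   # collided-flux expansion
p20, p50, p80 = np.percentile(samples, [20, 50, 80])     # add phi_u for totals
```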
## 3 Conclusions
A method for characterizing the uncertainty in a time-dependent scattering transport problem is presented. This method can be extended to higher-dimensional uncertainty spaces and to transport problems with different sources, which introduce new uncertain parameters such as the source width or the time the source is left on. Introducing uncertainty in the source will result in uncertainty in the wavefront, which could cause difficulty for spectral methods like PCE since polynomial expansions over sharp interfaces converge slowly. In addition to creating benchmarks for uncertain transport problems, PCE can be employed to produce benchmark solutions for radiative transfer problems such as the \(P_{1}\) Su-Olson problem [3] or the Marshak wave [4, 5].
## Acknowledgements
This work was supported by Los Alamos National Laboratory under contract #599008, "Verification for Radiation Hydrodynamics Simulations".
Figure 1: Root mean square error (RMSE) of the expansion variance versus expansion order N, calculated with Eq. (15) for \(t=5\), \(\overline{c}=1\), \(a_{1}=0.5\).
Figure 2: Plus and minus one standard deviation (dashed lines) and mean (solid lines) for the time dependent plane pulse problem at \(t=1\) with an uncertain scattering ratio, calculated with an order-12 expansion
Figure 4: Twentieth (dashed) fiftieth (solid) and eightieth (dot-dashed) percentile for the time dependent plane pulse problem at \(t=1\) with an uncertain scattering ratio, calculated with an order-12 expansion
Figure 3: Plus and minus one standard deviation (dashed lines) and mean (solid lines) for the time dependent plane pulse problem at \(t=5\) with an uncertain scattering ratio, calculated with an order-12 expansion |
2306.06544 | Herd's Eye View: Improving Game AI Agent Learning with Collaborative
Perception | We present a novel perception model named Herd's Eye View (HEV) that adopts a
global perspective derived from multiple agents to boost the decision-making
capabilities of reinforcement learning (RL) agents in multi-agent environments,
specifically in the context of game AI. The HEV approach utilizes cooperative
perception to empower RL agents with a global reasoning ability, enhancing
their decision-making. We demonstrate the effectiveness of the HEV within
simulated game environments and highlight its superior performance compared to
traditional ego-centric perception models. This work contributes to cooperative
perception and multi-agent reinforcement learning by offering a more realistic
and efficient perspective for global coordination and decision-making within
game environments. Moreover, our approach promotes broader AI applications
beyond gaming by addressing constraints faced by AI in other fields such as
robotics. The code is available at https://github.com/andrewnash/Herds-Eye-View | Andrew Nash, Andrew Vardy, David Churchill | 2023-06-11T00:02:18Z | http://arxiv.org/abs/2306.06544v2 | # Herd's Eye View: Improving Game AI Agent Learning with Collaborative Perception
###### Abstract
We present a novel perception model named Herd's Eye View (HEV) that adopts a global perspective derived from multiple agents to boost the decision-making capabilities of reinforcement learning (RL) agents in multi-agent environments, specifically in the context of game AI. The HEV approach utilizes cooperative perception to empower RL agents with a global reasoning ability, enhancing their decision-making. We demonstrate the effectiveness of the HEV within simulated game environments and highlight its superior performance compared to traditional ego-centric perception models. This work contributes to cooperative perception and multi-agent reinforcement learning by offering a more realistic and efficient perspective for global coordination and decision-making within game environments. Moreover, our approach promotes broader AI applications beyond gaming by addressing constraints faced by AI in other fields such as robotics. The code is available on GitHub.1
Footnote 1: [https://github.com/andrewnash/Herds-Eye-View](https://github.com/andrewnash/Herds-Eye-View)
## Introduction
Game environments traditionally grant AI agents access to extensive global information from the game engine. While this configuration assists in efficient decision-making, it does not accurately represent the restrictions encountered by AI applications outside of gaming, where comprehensive access to a system's software or engine is not feasible. Consequently, game AI techniques that rely predominantly on game engine data may limit their potential contribution to broader AI applications, as their dependency on perfect information and global environmental data is often unrealistic in other contexts such as robotics and autonomous vehicles.
In response to these challenges, our work delves into the application of more constrained, realistic perception models for game AI. We take inspiration from publications like the ViZDoom platform [14] and the Obstacle Tower Challenge [11] that have embraced the shift towards game AI with real-world constraints. Both ViZDoom and Obstacle Tower have utilized visual data as the primary input for AI agents, enabling them to navigate complex 3D environments and reinforcing the importance of perception-based game AI models without game engine access.
Research in autonomous vehicles has made extensive strides in AI perception models, particularly using intermediary environmental representations like the Bird's Eye View (BEV). The BEV model provides an overhead perspective of the environment, often in the form of a semantic obstacle grid, from a single "ego" vehicle's standpoint. This concept has become a key component in many self-driving systems [15].
Drawing on these past works, we propose a similar intermediary representation for game AI: the Herd's Eye View (HEV) model. Differing from the BEV's ego-centric perspective, the HEV model offers a shared world-centric perception derived from multiple agents. This shared perception model aligns closer to real-world AI applications, where multiple systems often work together to understand and navigate their environment.
The HEV model presents dual advantages. First, it mirrors the constraints faced by AI outside of gaming, contributing to the development of more believable AI behavior in games. Second, it alleviates the computational demands associated with the BEV model, where each agent maintains its own unique view of the environment; with HEV, only a single shared global view is utilized.
Emulating the successful methodologies of the ViZDoom project and the Obstacle Tower paper, we also incorporate Reinforcement Learning (RL) into our approach. RL enables us to test the effectiveness of HEV in both low-level control tasks and high-level planning challenges concurrently in complex environments. Importantly, similar to the Obstacle Tower approach, our agents are assessed not solely on their ability to navigate familiar environments, but also on their ability to handle unique variations of these environments. This highlights the importance of generalization in adapting to novel scenarios within the same environment.
To assess the effectiveness of the HEV model, we conduct two sets of experiments in three simulated Multi-Agent Reinforcement Learning (MARL) game environments. The first compares the accuracy of HEV world-centric predictions with BEV ego-centric predictions. The second experiment evaluates the efficiency of policies learned by RL agents trained on HEV perspective views compared to those trained on BEV perspective views.
Our work makes the following contributions:
1. We propose a baseline model for performing semantic segmentation in a fixed "HEV" world-centric view.
2. We demonstrate the effectiveness of the HEV fixed world viewpoint in improving collaborative perception and MARL in games.
Our exploration of more realistic perception models provides significant insights for game AI development, stressing the wider applicability of these techniques beyond the gaming industry.
## Related Works
### Bird's Eye View Semantic Segmentation
In autonomous vehicle research, the bird's-eye view semantic segmentation task involves predicting pixel-level semantic labels for a top-down ego-centric view of an environment. Segmentation classes are typically dedicated to vehicles, driveable areas, and obstacles. In prior BEV research, a significant point of distinction lies in the method used for transforming 2D perspective-view features into 3D space or directly onto the BEV plane. Many previous works have leveraged explicit geometric reasoning in their perspective transformation [14, 15, 16]. An approach that has recently gained popularity is the Cross-View Transformer (CVT) [13] model, which implicitly models scene geometry. The CVT model leverages a camera-aware cross-view attention mechanism to implicitly learn a mapping from individual camera views to a canonical map-view representation for map-view semantic segmentation. The model consists of a convolutional image encoder for each view and cross-view transformer layers for inferring the segmentation in a simple, easily parallelizable, and real-time manner. BEVFormer [11] uses a similar cross-attention model to extract spatiotemporal BEV information. BEVSegFormer [12] uses a deformable transformer-based encoder. There are many publications in this research area using similar architectures of transformers to shift perspective view(s) to BEV; Ma et al. provide a recent review of these architectures.
The HEV semantic segmentation task poses a unique challenge compared to the BEV task since the agent translations are unknown; this requires the model to geometrically reason about multiple camera views to localize. For our baseline approach, we leverage the CVT model proposed by [13]. The CVT model is well suited for the HEV task because of its global attention mechanism. Many BEV publications such as BEVFormer [11] and BEVSegFormer [12] aim to optimize this global attention mechanism since in egocentric tasks, a camera view only overlaps with a consistent subsection of the map-view. Conversely, in our HEV world-centric use case, global attention is an advantage because a camera view can overlap with any part of the map-view. Additionally, we expect that the model's performance can be further improved by incorporating additional information from other sensors, such as lidar and radar, as demonstrated by recent works [1].
### Collaborative Perception Datasets
Autonomous vehicle datasets have been widely used in collaborative perception research, comprising various sensory inputs, including cameras, lidar, and GPS [1], from multiple vehicles in a vehicle-to-vehicle environment [23, 24]. Some datasets, such as those proposed in [11, 12], include infrastructure sensors, resulting in a vehicle-to-infrastructure data model. Others, such as the dataset presented in [23], employ a vehicle-to-everything model. The CoPerception-UAVs dataset [16] employs five images from five drones flying in formation. It is worth noting that these datasets are all sourced from CARLA [15] in Unreal Engine, a widely used open-source platform for simulating autonomous vehicles.
The HEV datasets sourced from our simulated environments are uniquely challenging in the field of collaborative perception, as the agents are equipped with only one or two cameras. Unlike previously proposed collaborative perception problems, the HEV task does not provide the agents with the transformation component of their pose. The unknown position of each camera view within the global coordinate frame adds a significant challenge to the semantic segmentation prediction task and other downstream tasks.
### Collaborative Perception Methods
Collaborative perception has been explored in recent years, improving the capability of single-agent perception models [11, 12, 13, 14]. In conventional collaborative perception, intermediate representations produced by sensors or neural networks from multiple viewpoints are propagated among a team of robots, such as a group of vehicles [16, 23] or a swarm of drones [13, 15]. The existing works commonly learn a collaboration module, produced by a Graph Neural Network [11, 12], Convolutional Neural Network [11, 12], or a Transformer [23, 16] to combine multiple robot intermediate representations.
Prior research has focused on robots equipped with multiple sensors, requiring sensor data fusion on a per-agent basis before information exchange among agents [13]. However, in this work, we focus on robots with only one or two cameras and no additional sensors, making our approach more amenable to smaller, simpler robot swarms. Since we focus on simpler robots, we do not utilize a collaboration module, and instead fuse all camera views together in a single cross-attention module.
## Methodology
### Herd's Eye View
In the Herd's Eye View (HEV) semantic segmentation task, we are given a set of \(n\) monocular camera views, \((I_{k},K_{k},R_{k})_{k=1}^{n}\) consisting of an input image \(I_{k}\in\mathbb{R}^{H\times W\times 3}\), camera intrinsics \(K_{k}\in\mathbb{R}^{3\times 3}\), and extrinsic rotation \(R_{k}\in\mathbb{R}^{3\times 3}\) with respect to the agent base. The goal
of the HEV task is to predict a binary semantic segmentation mask \(y\in\{0,1\}^{h\times w\times C}\) in the global coordinate frame, where \(C\) is the desired number of segmentation classes. The HEV task adds additional ambiguity to the well-studied BEV task as each camera view is at an unknown translation and orientation with respect to the global coordinate frame.
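To make the interface concrete, the following minimal Python sketch spells out the per-view inputs and the target shape defined above; all names and dummy values are our own illustrative assumptions, not part of any released code.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class CameraView:
    """One monocular view (I_k, K_k, R_k); note that no translation is given."""
    image: np.ndarray       # I_k in R^(H x W x 3)
    intrinsics: np.ndarray  # K_k in R^(3 x 3)
    rotation: np.ndarray    # R_k in R^(3 x 3), w.r.t. the agent base

def hev_target_shape(h: int, w: int, num_classes: int) -> tuple:
    # The prediction is a binary mask y in {0,1}^(h x w x C) in the
    # *global* coordinate frame.
    return (h, w, num_classes)

views = [CameraView(np.zeros((224, 480, 3)), np.eye(3), np.eye(3)) for _ in range(6)]
print(len(views), hev_target_shape(32, 64, 2))
```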
We define a BEV as a single-agent perception transformed into an ego-centric view, whereas the HEV is a collaborative perception transformed into a fixed world-centric view. A comparison of the ego-centric views tested and the fixed world-centric view can be seen in Figure 3.
Our approach, seen in Figure 1, follows three steps:
1. Collect multiple views of the environment from robot cameras.
2. Use a collaborative perception model to obtain the HEV, the world-centric semantic segmentation of the environment.
3. Input the HEV to a Reinforcement Learning (RL) control policy to obtain agent control commands.
Our goal is to establish a baseline HEV perception model to extract information from the multiple camera views and project them onto a fixed world-centric view. We propose a baseline perception model using the Cross-View Transformer (CVT) [22] and use semantic segmentation as our downstream task. The Cross-View Transformer is a recent approach that uses a cross-view attention module, first proposed by [22], enabling the agents to reason about their environment in an ego-centric coordinate frame. We extend the CVT model to further improve its accuracy and speed for the HEV use case. We name our baseline model the Herd's Eye View Cross-View Transformer (HEV-CVT). We use a world-centric map embedding and tune positional embeddings, output sizes, and the number of transformer layers to fit our proposed HEV environments.
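The three-step approach above can be summarized in the runnable sketch below; the stub classes and the stand-in perception and policy callables are hypothetical placeholders for the HEV-CVT model and the MA-POCA policy described later.

```python
import numpy as np

class StubCamera:
    def capture(self):
        # Returns (I_k, K_k, R_k) as defined above (dummy values here).
        return np.zeros((224, 480, 3)), np.eye(3), np.eye(3)

class StubAgent:
    def __init__(self, n_cams):
        self.cameras = [StubCamera() for _ in range(n_cams)]

def hev_step(agents, perception_model, policy):
    views = [cam.capture() for a in agents for cam in a.cameras]  # step 1
    hev_map = perception_model(views)                             # step 2: world-centric seg.
    return policy(hev_map)                                        # step 3: control commands

perception = lambda views: np.zeros((32, 32))   # stand-in for HEV-CVT
policy = lambda hev: ["noop"] * 3               # stand-in for the RL policy
print(hev_step([StubAgent(2) for _ in range(3)], perception, policy))
```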
### Data Collection
We use identical Unity simulation environments to source the datasets for the HEV semantic segmentation task and the MARL task. To collect the HEV ground truth for both tasks, we use our own custom fork of MBaske's Unity Grid-Sensor Library [1], which allows the collection of HEV world-centric grid-sensor observations. The only difference between ego-centric and world-centric agents is the location of their grid-sensor and the perspective in which they take their actions (e.g., forward for the world-centric agent is always North, but forward for the ego-centric agent is with respect to its current orientation). All agents are trained on ground-truth sensors, calculated using bounding boxes that are individually tuned to each object. The resolution of the grid-sensor is adjusted to accommodate the complexity and size of the environment, as seen in Table 1. Example observations of world-centric and ego-centric agents can be seen in Figure 3.
Our simulations are conducted in three different Unity ML-Agents environments:
**Collaborative Push Block:** Three agents are required to push white blocks to a green goal area on a randomly selected side of a square area. There are blocks of sizes one, two, and three, each requiring the respective number of agents to push it into the goal area [1].
**Dungeon Escape:** As a Green Dragon slowly walks towards an exit portal, one of the three agents must collide with it in order to sacrifice itself and spawn a key. The key
Figure 1: A visualization of the proposed HEV approach in the Dungeon Escape environment: Agent camera views are extracted via a backbone model, then combined in a cross-attention module, then decoded into a world-centric semantic segmentation. The resulting semantic segmentation can be used as an observation for a swarm of robots.
must then be picked up by one of the two remaining agents and brought to the exit door to escape the dungeon (Cohen et al., 2022). Once any agent escapes, all agents win.
**Planar Construction:** Six agents collaborate to push red pucks into desired positions. Desired positions are randomly assigned to valid coordinates within the arena and are observed via a Grid-Sensor, similar to the Push Block environment (Strickland, Churchill, and Vardy, 2019). In each round, a new random number of pucks, from 2 to 16, is spawned.
We utilize the open-source Collaborative Push Block and Dungeon Escape environments from ML-Agents (Cohen et al., 2022), which are already native to Unity, changing only the sensor input of the agents. We recreate the Planar Construction task (Vardy, 2018; Strickland, Churchill, and Vardy, 2019; Vardy, 2020, 2022, 2023) based on Strickland, Churchill, and Vardy's work in the CWaggle simulator, but adapt the environment to Unity ML-Agents. All three environments can be seen in Figure 2. For MARL training, we use the HEV ground truth as model input and identical reward functions to the original implementations. Specifically, the agents are trained using the Multi-Agent POsthumous Credit Assignment (MA-POCA) algorithm (Cohen et al., 2022) in Unity ML-Agents. By using identical reward functions, we aim to create a fair comparison between the performance of agents using HEV and those using traditional sensor frames in cooperative scenarios.
The MARL task enables us to train the CVT models, which can perform semantic segmentation in an ego-centric or world-centric view. To collect the data necessary for training the CVT models, we run the trained MA-POCA models and collect the camera view, camera intrinsics, and rotation extrinsic from each agent at each step of the simulation, along with the ground truth HEV and BEV. By collecting data from various environments and introducing variations, we aim to create diverse and robust datasets for training the CVT models.
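A minimal sketch of this logging loop is given below; the dictionary-style environment interface is a hypothetical stand-in (the actual Unity ML-Agents API differs), but the recorded fields are exactly those listed above.

```python
import numpy as np

class StubEnv:
    """Hypothetical stand-in for the Unity ML-Agents environment interface."""
    def reset(self):
        return self._obs()
    def step(self, actions):
        return self._obs()
    def _obs(self):
        return {"images": np.zeros((6, 224, 480, 3)), "K": np.tile(np.eye(3), (6, 1, 1)),
                "R": np.tile(np.eye(3), (6, 1, 1)), "hev": np.zeros((32, 64)),
                "bev": np.zeros((6, 32, 64))}

def collect_cvt_dataset(env, policy, steps):
    # Roll out a trained MA-POCA policy; log camera views, intrinsics,
    # rotation extrinsics, and the ground-truth HEV/BEV masks at every step.
    records, obs = [], env.reset()
    for _ in range(steps):
        obs = env.step(policy(obs))
        records.append({"images": obs["images"], "K": obs["K"], "R": obs["R"],
                        "hev_gt": obs["hev"], "bev_gt": obs["bev"]})
    return records

data = collect_cvt_dataset(StubEnv(), policy=lambda o: None, steps=5)
print(len(data), data[0]["images"].shape)
```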
### Implementation Details
The Cross-View Transformer is adapted from Zhou and Krahenbuhl for the Herd's Eye View Collaborative Perception task. The first stage of the network passes each input image from the agents into a feature extractor; we use an EfficientNet-B4 (Tan and Le, 2019), which outputs two multi-resolution patch embeddings of size (28, 60) and (14, 30). Each patch is passed into a Cross-View Transformer convolution stack as in the original implementation. We found that fewer convolution stacks significantly degrade the HEV-CVT's ability to localize, and more are not necessary. The patch embeddings act as image features and are used in the keys and as the values for the Cross-View Transformer.
We encode the rotation \(R_{k}\in\mathbb{R}^{3\times 3}\) of the agent's camera into a \(D\)-dimensional positional embedding using a multi-layer perceptron. We use \(D=64\) for all of our experiments. The positional embedding is combined with the image feature to compute the keys for the cross-view transformer. The world-centric map embedding operates similarly to the
Figure 2: Images of the three environments used to test the HEV collaborative perception and reinforcement learning algorithms. The top left is the Dungeon Escape environment. The top right is the Collaborative Push Block environment. The bottom is the Planar Construction environment.
originally proposed map-view embedding. The key difference with our approach is that we do not subtract camera location embeddings from the map embedding; instead, we directly use the learned map embedding as queries. The camera locations with respect to the world are unknown for the HEV task, and we found that subtracting rotation embeddings did not improve performance. The transformer architecture refines its world-centric estimate through two rounds of computation, each resulting in new latent embeddings used as queries.
The cross-view transformer computes softmax-cross-attention [20] between the image feature keys, values and world-centric queries. This setup allows world coordinates from the world-centric map embedding to attend to one or more image locations, allowing the model to reason about the environment from multiple image features. The multi-head attention mechanism uses 4-heads like the original implementation but with half the embedding size of \(d_{head}=32\).
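The following PyTorch sketch assembles the components described in this subsection: a small stand-in backbone (replacing EfficientNet-B4 for brevity), the rotation-MLP positional embedding added to the image-feature keys, the learned world-centric map embedding used directly as queries, cross-view attention, and the up-convolutional decoder described below. All names and most sizes are illustrative assumptions rather than the authors' implementation; with a 224x480 input, the two backbone stages happen to reproduce the (28, 60) and (14, 30) patch resolutions quoted above, though only the coarser one is used here.

```python
import torch
import torch.nn as nn

class TinyBackbone(nn.Module):
    """Stand-in for the EfficientNet-B4 feature extractor (illustrative only)."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, dim, 7, stride=8, padding=3), nn.ReLU(),   # (28, 60)
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU()) # (14, 30)
    def forward(self, x):           # x: (B*n, 3, H, W)
        return self.net(x)          # (B*n, dim, h', w') patch features

class HEVCVTSketch(nn.Module):
    def __init__(self, dim=128, heads=4, grid=8, n_classes=2):
        super().__init__()
        self.backbone = TinyBackbone(dim)
        self.rot_mlp = nn.Sequential(nn.Linear(9, 64), nn.ReLU(), nn.Linear(64, dim))
        # Learned world-centric map embedding, used directly as the queries.
        self.map_embed = nn.Parameter(torch.randn(grid * grid, dim))
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.decoder = nn.Sequential(  # three up-convolutions: 8 -> 64
            nn.ConvTranspose2d(dim, dim, 2, 2), nn.ReLU(),
            nn.ConvTranspose2d(dim, dim, 2, 2), nn.ReLU(),
            nn.ConvTranspose2d(dim, n_classes, 2, 2))
        self.grid = grid

    def forward(self, images, rotations):
        # images: (B, n, 3, H, W); rotations: (B, n, 3, 3); translations unknown.
        B, n = images.shape[:2]
        feats = self.backbone(images.flatten(0, 1))          # (B*n, d, h', w')
        d, hp, wp = feats.shape[1:]
        keys_img = feats.flatten(2).transpose(1, 2).reshape(B, n * hp * wp, d)
        pos = self.rot_mlp(rotations.reshape(B, n, 9))       # rotation embedding
        keys = keys_img + pos.repeat_interleave(hp * wp, dim=1)
        queries = self.map_embed.unsqueeze(0).expand(B, -1, -1)
        world, _ = self.attn(queries, keys, keys_img)        # cross-view attention
        world = world.transpose(1, 2).reshape(B, d, self.grid, self.grid)
        return self.decoder(world)                           # (B, C, 64, 64)

model = HEVCVTSketch()
out = model(torch.randn(2, 3, 3, 224, 480), torch.randn(2, 3, 3, 3))
print(out.shape)  # torch.Size([2, 2, 64, 64])
```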
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline
 & Agents & Agent Cameras & Grid Size \\ \hline
Collaborative Push Block & 3 & 1-left, 1-right & 32x32 \\ \hline
Dungeon Escape & 2-3 & 1-left, 1-right & 32x32 \\ \hline
Planar Construction & 6 & 1-forward & 32x64 \\ \hline
\end{tabular}
\end{table}
Table 1: HEV simulated environment parameters.
Figure 3: Example scene and corresponding agent observations from the Collaborative Push Block environment. The top image shows a debug camera (not available for agent observation). The bottom left shows the HEV world-centric observation of the blue agent. The bottom middle shows the BEV-centric observation of the blue agent. The bottom right shows the BEV-forward observation of the blue agent. Blue is the controller agent, green are the ally agents, and red shades are differently-sized push blocks.
The cross-view transformer output is 8x8 for square environments and 8x16 for rectangular environments; this then passes through a decoder consisting of three up-convolutional layers to final sizes of 64x64 and 64x128, respectively. This is purposely larger than the required RL observation size, as smaller sizes can create ambiguity for some object occupancies, resulting in decreased performance. These larger HEV-CVT outputs can easily be down-sampled to match the required RL observation sizes of 32x32 and 32x64. We threshold the output prediction confidences, keeping predictions with a confidence greater than 0.4. The prediction confidences prior to thresholding can be seen in Figure 4 as a heat map (lighter is higher confidence).
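A small sketch of this post-processing is shown below; the use of a sigmoid for confidences and the ordering of down-sampling before thresholding are our assumptions.

```python
import torch
import torch.nn.functional as F

def hev_to_rl_observation(logits, conf_thresh=0.4, rl_size=(32, 64)):
    """Convert HEV-CVT output logits (B, C, 64, 128) into the binary
    occupancy grid consumed by the RL agents (B, C, 32, 64)."""
    conf = logits.sigmoid()                                   # prediction confidences
    conf = F.interpolate(conf, size=rl_size, mode="bilinear", align_corners=False)
    return (conf > conf_thresh).float()                       # keep confident cells only

obs = hev_to_rl_observation(torch.randn(2, 2, 64, 128))
print(obs.shape, obs.unique())
```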
Our training process is similar to the original implementation by Zhou and Krahenbuhl: we also use focal loss Lin et al. (2017) and the AdamW Loshchilov and Hutter (2017) optimizer with a one-cycle learning rate scheduler Smith and Topin (2018). All models are trained with a batch size of 4 for 25 epochs. Training lasts approximately 8 hours on a single RTX 3090 GPU before converging.
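A condensed, runnable sketch of this training configuration follows; the one-layer stand-in model, the max_lr value, and the total step count are illustrative assumptions, while the batch size of 4, the 25 epochs, focal loss, AdamW, and the one-cycle schedule follow the text.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    # Binary focal loss (Lin et al., 2017) in its usual closed form.
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)                        # probability of the true class
    a_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (a_t * (1 - p_t) ** gamma * bce).mean()

model = torch.nn.Conv2d(3, 2, 1)                 # stand-in for HEV-CVT
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)
steps = 25 * 100                                 # 25 epochs x (dataset_len / batch of 4)
sched = torch.optim.lr_scheduler.OneCycleLR(opt, max_lr=4e-3, total_steps=steps)

for _ in range(5):                               # a few dummy iterations
    x = torch.randn(4, 3, 64, 64)                # batch size 4, as in the paper
    y = (torch.rand(4, 2, 64, 64) > 0.5).float()
    loss = focal_loss(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step(); sched.step()
print(float(loss))
```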
## Experiments and Results
### Collaborative Perception
The HEV-CVT model must accurately localize the position of each agent based on the overlap of camera frames, which are located at unknown positions. An example of this in the Dungeon Escape environment can be seen in Figure 4. The cameras are recorded at resolution \(480\times 224\), and we use the camera intrinsics of a Raspberry Pi Camera Module 3. Consistent with prior works Ma et al. (2022), we show the resulting Intersection over Union (IoU) metric for the HEV-CVT model trained on each environment in Table 2. We compare the performance of the baseline CVT model on world-centric, ego-centric, and ego-forward coordinate frames.
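For reference, the IoU metric used throughout can be computed as follows for binary occupancy masks (a standard definition; the exact evaluation code is not specified in the text):

```python
import numpy as np

def iou(pred, gt, eps=1e-9):
    """Intersection over Union between binary occupancy masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / (union + eps)

pred = np.random.default_rng(0).random((32, 32)) > 0.5
gt = np.random.default_rng(1).random((32, 32)) > 0.5
print(f"IoU = {iou(pred, gt):.2%}")
```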
In the Collaborative Push Block environment, three agents are equipped with two forward-facing cameras and are tasked with predicting the occupancy of all push blocks, agents and the goal area. In the Dungeon Escape environment, three agents are equipped with two forward-facing cameras and are tasked with predicting the occupancy of the dragon, agents and key. In the Planar Construction environment, six agents are equipped with a single forward-facing camera and are tasked with predicting the occupancy of all pucks and agents.
Our results, shown in Table 2, demonstrate that the world-centric coordinate frame consistently outperforms the ego-centric coordinate frames in all environments. The Collaborative Push Block and Dungeon Escape environments show the largest performance improvements, with up to 32.72% and 17.46% improvement in IoU, respectively. These results suggest that the world-centric HEV approach is effective in addressing the challenges of collaborative perception in multi-agent environments. This is especially apparent in the Collaborative Push Block environment, where the HEV-CVT model easily localizes itself based on the large goal area seen in most camera views, for a near-perfect 96.94% IoU score. The landmarks in the Dungeon Escape environment, the exit door and the portal, are in randomized locations, which makes localization harder than in the Push Block environment, as reflected by the steep drop in IoU scores.
The standard ML-Agents environments were not as
Figure 4: Sample HEV-CVT prediction from the Dungeon Escape environment validation dataset. The two left columns show each of the three agents’ unique camera views, each row contains the images from the left and right cameras of the same agent. The top right shows the HEV-CVT prediction confidence heat map of the agents and dragon, the ground truth is directly to the left (agents are blue, the dragon is red). The bottom right shows a world-view camera not available to agents to help readers understand the scene.
challenging for the CVT models, as there were not many permutations of the environment layout. By contrast, our custom Planar Construction environment presents a more complex challenge, as we randomly change the coloring of six wall and floor components at every time step of the environment during data collection. Additionally, the locations of the pucks to be pushed are randomized, and the environment area is twice the size of the ML-Agents environments. Despite the additional challenge, the HEV-CVT model still performs well in the Planar Construction environment, scoring 48.37% on the HEV task. This result shows that the CVT models can localize based on the overlap in views between cameras, as much of the validation set contains wall colors and puck layouts never seen before.
### Multi-Agent Reinforcement Learning
In order to compare the performance of the fixed world-centric coordinate frames with other commonly used coordinate frames, we conduct experiments in all three proposed environments. To ensure a fair comparison between the performance of agents using different coordinate frames, we use identical reward functions to each environment's original implementation and identical grid sizes.
Table 3 compares the performance of agents using different coordinate frames in all three proposed environments. We find consistently lower episode lengths for world-centric agents compared to ego-centric agents. We opt to use episode length as our performance metric, as it directly reflects the speed of task completion. While alternative metrics such as cumulative or mean reward are also commonly used, these primarily reflect minor negative rewards assigned per time step, providing less insight into an agent's efficiency in our context.
Our experiments highlight a common challenge faced by BEV-based agents in all three environments: often, an object necessary for taking the optimal action was missing from the agent's view, leading to sub-optimal decision-making and increased episode lengths. This was especially apparent in the Push Block environment, where one of the three agents would often not observe the size-three block (which requires all three agents to push it), causing two agents to wait for the third agent to join them at the block, wasting time. Conversely, we found that HEV-based agents in the Push Block environment stuck close together and consistently pushed the highest-value blocks first.
The HEV-based agents were able to leverage the multiple viewpoints available to them, enabling them to better perceive their environment and take more optimal actions. This advantage was particularly evident in the Push Block environment, where the improved perception of world-centric agents resulted in significantly lower episode lengths than for ego-based agents.
Overall, these findings suggest that the HEV framework offers a superior perception model in MARL environments, providing agents with a more comprehensive understanding of their surroundings, leading to improved decision-making and better overall performance.
## Conclusion
We have proposed a new perception model called Herd's Eye View that provides a global view of the environment, enabling better global coordination and cooperation in MARL scenarios. We conduct two sets of experiments in three simulated multi-agent environments. Our first experiment focuses on the perception aspect of HEV and shows the same Cross-View Transformer model performs better on the world-centric HEV task than its BEV ego-centric counterpart. Our second experiment focuses on the effectiveness of the HEV perspective view compared to BEV perspective views for MARL agents. We find that RL agents trained on world-centric perspective views learn more efficient policies than those trained on ego-centric perspective views. Our work opens up new possibilities for advanced perception models in MARL game environments, which can greatly enhance the performance of multi-agent systems by enabling better collaboration and coordination.
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline
 & World-Centric & Ego-Centric & Ego-Forward \\ \hline
Collaborative Push Block & **96.94\%** & 63.87\% & 64.22\% \\ \hline
Dungeon Escape & **43.53\%** & 13.47\% & 26.07\% \\ \hline
Planar Construction & **48.37\%** & 35.45\% & 10.16\% \\ \hline
\end{tabular}
\end{table}
Table 2: HEV-CVT validation IoU results per coordinate frame in each environment (higher is better).
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline
 & World-Centric & Ego-Centric & Ego-Forward \\ \hline
Collaborative Push Block & **100.1** \(\pm\) 40.6 & 137.9 \(\pm\) 53.5 & 124.9 \(\pm\) 47.4 \\ \hline
Dungeon Escape & **15.1** \(\pm\) 0.81 & 17.3 \(\pm\) 0.87 & 18.4 \(\pm\) 1.44 \\ \hline
Planar Construction & **176.9** \(\pm\) 40.8 & 233.8 \(\pm\) 73.7 & 239.8 \(\pm\) 75.8 \\ \hline
\end{tabular}
\end{table}
Table 3: MA-POCA mean episode length \(\pm\) standard deviation per coordinate frame in each environment (lower is better). |
2307.11311 | Entropy and fluctuation relations in isotropic turbulence | Based on a generalized local Kolmogorov-Hill equation expressing the
evolution of kinetic energy integrated over spheres of size $\ell$ in the
inertial range of fluid turbulence, we examine a possible definition of entropy
and entropy generation for turbulence. Its measurement from direct numerical
simulations in isotropic turbulence leads to confirmation of the validity of
the fluctuation relation (FR) from non-equilibrium thermodynamics in the
inertial range of turbulent flows. Specifically, the ratio of probability
densities of forward and inverse cascade at scale $\ell$ is shown to follow
exponential behavior with the entropy generation rate if the latter is defined
by including an appropriately defined notion of ``temperature of turbulence''
proportional to the kinetic energy at scale $\ell$. | H. Yao, T. A. Zaki, C. Meneveau | 2023-07-21T02:38:39Z | http://arxiv.org/abs/2307.11311v2 | # Entropy and fluctuation relations in isotropic turbulence
###### Abstract
Based on a generalized local Kolmogorov-Hill equation expressing the evolution of kinetic energy integrated over spheres of size \(\ell\) in the inertial range of fluid turbulence, we examine a possible definition of entropy and entropy generation for turbulence. Its measurement from direct numerical simulations in isotropic turbulence leads to confirmation of the validity of the fluctuation relation (FR) from non-equilibrium thermodynamics in the inertial range of turbulent flows. Specifically, the ratio of probability densities of forward and inverse cascade at scale \(\ell\) is shown to follow exponential behavior with the entropy generation rate if the latter is defined by including an appropriately defined notion of "temperature of turbulence" proportional to the kinetic energy at scale \(\ell\).
## 1 Introduction
A long-standing hope of research in turbulence is that connections to non-equilibrium thermodynamics and statistical mechanics could be established. For example, connections were attempted some time ago for vortex filament models (Chorin, 1991), infinitely divisible cascade processes (see (Castaing, 1996) and references therein), as well as multifractal models of the energy cascade with their analogues to Gibbs free energy, Legendre transformations (Paladin & Vulpiani, 1987; Chhabra _et al._, 1989), and even phase transitions (Meneveau & Chhabra, 1990). However, connections between such models of the cascade and the Navier-Stokes equations remain tenuous to this day. More recently, considering the reversibility of Navier-Stokes equations in the inviscid limit (or in the inertial range of turbulence) and building upon prior works by She & Jackson (1993); Carati _et al._ (2001); Cichowlas _et al._ (2005); Domaradzki & Carati (2007); Eyink & Aluie (2009); Cardesa _et al._ (2015, 2017), an analysis of the cascade process and possible connections to entropy was carried out by Vela-Martin & Jimenez (2021). Various consequences of the time-reversibility of the inertial range dynamics were explored and connections were made to physical-space flow structures in seeking physical explanations for the asymmetry between positive (forward) and negative (inverse) cascade rates. Recently Fuchs _et al._ (2020) proposed a definition of entropy change of individual cascade trajectories based on a Fokker-Planck stochastic model equation and tested predictions from non-equilibrium thermodynamics. Similarly, Porporato _et al._ (2020) considered fluctuations in spectral models in Fourier space. We here explore a new definition of entropy generation rate based on the exact kinetic energy transport equation in the inertial range of turbulence and test quantitative predictions from non-equilibrium thermodynamics regarding the direction and magnitude of the cascade rate.
## 2 The generalized Kolmogorov-Hill equation for local kinetic energy
The kinetic energy of turbulence can be defined using structure functions (Frisch, 1995). As a generalization of the celebrated Karman-Howarth and Kolmogorov equations for
structure functions, Hill (2001, 2002) derived what will here be denoted as the generalized Kolmogorov-Hill equation (GKHE). It is obtained from the incompressible Navier-Stokes equations written at two points and before averaging, it accounts for the local time evolution of velocity increment magnitude (square) at a specific physical location \(\mathbf{x}\) and scale \(\mathbf{r}\), and incorporates effects of viscous dissipation, viscous transport, advection, and pressure (Hill, 2001, 2002). With no mean flow and for scales at which large-scale forcing can be neglected, the instantaneous GKHE reads
\[\frac{\partial\delta u_{i}^{2}}{\partial t}+u_{j}^{*}\frac{\partial\delta u_{i} ^{2}}{\partial x_{j}}=-\frac{\partial\delta u_{j}\delta u_{i}^{2}}{\partial r _{j}}-\frac{8}{\rho}\frac{\partial p^{*}\delta u_{i}}{\partial r_{i}}+\nu\frac {1}{2}\frac{\partial^{2}\delta u_{i}\delta u_{i}}{\partial x_{j}\partial x_{j} }+2\nu\frac{\partial^{2}\delta u_{i}\delta u_{i}}{\partial r_{j}\partial r_{j} }-4\epsilon^{*}, \tag{1}\]
where \(\delta u_{i}=\delta u_{i}(\mathbf{x};\mathbf{r})=u_{i}^{+}-u_{i}^{-}\) is the velocity increment vector in the ith Cartesian direction. The superscripts \(+\) and \(-\) represent two points \(\mathbf{x}+\mathbf{r}/2\) and \(\mathbf{x}-\mathbf{r}/2\) in the physical domain that have a separation vector \(r_{i}=x_{i}^{+}-x_{i}^{-}\) and middle point \(x_{i}=(x_{i}^{+}+x_{i}^{-})/2\). The superscript \(*\) denotes the average value between two points. For instance, the two-point average dissipation is defined as \(\epsilon^{*}=(\epsilon^{+}+\epsilon^{-})/2\). Here \(\epsilon^{\pm}\) is the "pseudo-dissipation" defined locally as \(\epsilon=\nu(\partial u_{i}/\partial x_{j})^{2}\), where \(\nu\) is the kinematic fluid viscosity.
As already noted by Hill (2002) (§3.5), Eq. 1 at any point \(\mathbf{x}\) can be integrated over a sphere in \(\mathbf{r}\)-space, up to a diameter which here will be denoted as the scale \(\ell\) (it will be assumed to be in the inertial range so that viscous diffusion terms are neglected; see Yao _et al._ (2023_a_)). The resulting equation is divided by the volume of the sphere (\(V_{\ell}=\frac{4}{3}\pi(\ell/2)^{3}\)) and a factor of 4, which yields its integrated form,
\[\frac{\widehat{d}k_{\ell}}{dt}=\Phi_{\ell}+P_{\ell}-\epsilon_{\ell}, \tag{2}\]
where
\[\frac{\widehat{d}k_{\ell}}{dt}\equiv\frac{1}{2\,V_{r}}\int_{V_{r}}\left( \frac{\partial\frac{1}{2}\delta u_{i}^{2}}{\partial t}+u_{j}^{*}\frac{ \partial\frac{1}{2}\delta u_{i}^{2}}{\partial x_{j}}\right)\,d^{3}\mathbf{r}_ {s}=\frac{\partial k_{\ell}}{\partial t}+\frac{1}{2\,V_{r}}\int_{V_{r}}u_{j}^{* }\frac{\partial\frac{1}{2}\delta u_{i}^{2}}{\partial x_{j}}\,d^{3}\mathbf{r}_ {s}, \tag{3}\]
is a local time rate of change of kinetic energy at all scales smaller or equal to \(\ell\). We have defined the kinetic energy associated to the scales smaller than \(\ell\) according to
\[k_{\ell}(\mathbf{x},t)\equiv\frac{1}{2\,V_{\ell}}\int_{V_{\ell}}\frac{1}{2} \delta u_{i}^{2}(\mathbf{x},\mathbf{r})\,d^{3}\mathbf{r}_{s}, \tag{4}\]
where the \(1/2\) factor in front of the integral accounts for the fact that a volume integration over the sphere \(V_{\ell}\) of diameter \(\ell\) will count the increments \(\delta u_{i}^{2}\) twice. The quantity \(k_{\ell}(\mathbf{x},t)\) will be central to our analysis. Eq. 2 also includes
\[\epsilon_{\ell}(\mathbf{x})\equiv\frac{1}{V_{\ell}}\int_{V_{\ell}}\epsilon^{*} (\mathbf{x},\mathbf{r})d^{3}\mathbf{r}_{s}, \tag{5}\]
the locally volume averaged rate of dissipation envisioned in the Kolmogorov (1962) refined similarity hypothesis (KRSH). The radius vector \(\mathbf{r}_{s}=\mathbf{r}/2\) is integrated up to magnitude \(\ell/2\), and
\[\Phi_{\ell}\equiv-\frac{3}{4\,\ell}\frac{1}{S_{\ell}}\oint_{S_{\ell}}\delta u _{i}^{2}\,\delta u_{j}\,\hat{r}_{j}dS=-\frac{3}{4\,\ell}\,[\delta u_{i}^{2} \delta u_{j}\hat{r}_{j}]_{S_{\ell}} \tag{6}\]
is interpreted as the local energy cascade rate in the inertial range at scale \(\ell\) at position \(\mathbf{x}\). Note that Gauss theorem is used to integrate the first term on the RHS of Eq. 1 over
the \(r_{s}\)-sphere's surface, with area element \(\hat{r}_{j}dS\), with \(\hat{\mathbf{r}}=\mathbf{r}/|\mathbf{r}|\), and \(S_{\ell}=4\pi(\ell/2)^{2}\) the sphere's overall area (care must be taken as the Gauss theorem applies to the sphere's radius vector \(\mathbf{r}_{s}=\mathbf{r}/2\) and \(\partial_{r}=2\,\partial_{r_{s}}\)). Averaging over the surface \(S_{\ell}\) is denoted by \([...]_{S_{\ell}}\). Finally, Eq. 2 also includes
\[P_{\ell}\equiv-\frac{6}{\ell}\frac{1}{S_{\ell}}\oint_{S_{\ell}}\frac{1}{\rho} \,p^{*}\,\delta u_{j}\,\hat{r}_{j}\,dS, \tag{7}\]
the surface averaged pressure work term at scale \(\ell\) (defined as positive if the work is done _on_ the system inside the volume \(V_{\ell}\)). Equation 2 is local (valid at any point \(\mathbf{x}\) and time \(t\)), and each of the terms in the equation can be evaluated from data according to their definition using a sphere centered at any middle point \(\mathbf{x}\). For more details about this formulation, see Yao _et al._ (2023_a_).
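As an illustrative sketch of how these local quantities can be estimated from data, the following Python code Monte-Carlo samples point pairs on the surface \(|\mathbf{r}|=\ell\) for \(\Phi_{\ell}\) (Eq. 6) and within the ball \(|\mathbf{r}_{s}|\leqslant\ell/2\) for \(k_{\ell}\) (Eq. 4); the synthetic Taylor-Green field and the sample counts are stand-ins for actual DNS data.

```python
import numpy as np

rng = np.random.default_rng(1)

def taylor_green(x):
    """Synthetic solenoidal velocity field; a stand-in for DNS velocities."""
    u = np.cos(x[:, 0]) * np.sin(x[:, 1]) * np.sin(x[:, 2])
    v = -np.sin(x[:, 0]) * np.cos(x[:, 1]) * np.sin(x[:, 2])
    return np.stack([u, v, np.zeros(len(x))], axis=1)

def phi_k_at(u, x, ell, n_surf=500, n_vol=2000):
    """Monte-Carlo estimates of Phi_ell (Eq. 6) and k_ell (Eq. 4) at point x."""
    # Surface term: separations r with |r| = ell; point pairs at x +/- r/2.
    rhat = rng.normal(size=(n_surf, 3))
    rhat /= np.linalg.norm(rhat, axis=1, keepdims=True)
    du = u(x + 0.5 * ell * rhat) - u(x - 0.5 * ell * rhat)
    flux = np.einsum("ni,ni,nj,nj->n", du, du, du, rhat)   # du_i^2 (du . rhat)
    phi_ell = -(3.0 / (4.0 * ell)) * flux.mean()
    # Volume term: r_s uniform in the ball |r_s| <= ell/2, with r = 2 r_s.
    dirs = rng.normal(size=(n_vol, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    r_s = dirs * (0.5 * ell * rng.random(n_vol) ** (1 / 3))[:, None]
    duv = u(x + r_s) - u(x - r_s)
    k_ell = 0.25 * np.mean(np.sum(duv ** 2, axis=1))       # (1/2V) int (1/2) du^2
    return phi_ell, k_ell

phi, k = phi_k_at(taylor_green, np.array([1.0, 1.0, 1.0]), ell=0.5)
print(f"Phi_ell = {phi:.3e}, k_ell = {k:.3e}, ratio = {phi / k:.3e}")
```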
In most prior works, it is the statistical average of Eq. 2 that is considered (Monin & Yaglom, 1975; Danaila _et al._, 2001, 2012; Carbone & Bragg, 2020). Using ensemble averaging, for which isotropy of the velocity increment statistics can be invoked, and neglecting the viscous term in the inertial range, the rate of change and pressure terms vanish and one recovers the Kolmogorov equation for two-point longitudinal velocity increments that connects third-order moments to the overall mean rate of viscous dissipation via the celebrated \(-4/5\) law: \(\langle\delta u_{L}^{3}(\ell)\rangle=-\frac{4}{5}\ell\langle\epsilon\rangle\) (Kolmogorov, 1941; Frisch, 1995). Here \(\langle\cdot\rangle\) means global averaging, and \(\delta u_{L}(\ell)\) is the longitudinal velocity increment over distance \(\ell\), assumed to be well inside the inertial range of turbulence. Without averaging, and also without the viscous, pressure and unsteady terms, Eq. 2 becomes the "local \(4/3\)-law" obtained by Duchon & Robert (2000) and discussed by Eyink (2002) and Dubrulle (2019), connecting \(\Phi_{\ell}\) to \(\epsilon_{\ell}\) in the context of energy dissipation in the \(\nu\to 0\) limit (see Yao _et al._ (2023_a_) regarding subtle differences with Hill's more symmetric two-point approach used here).
Returning to the time derivative term in Eq. 3, in order to separate advection due to overall velocity at scale \(\ell\) and smaller scale contributions, we define the filtered advection velocity as \(\tilde{u}_{j}\equiv\frac{1}{V_{\ell}}\int_{V_{\ell}}u_{j}^{*}\,d^{3}\mathbf{r }_{s}\). It corresponds to a filtered velocity using a spatial radial top-hat filter (Yao _et al._, 2023_a_). Accordingly, we may write
\[\frac{\tilde{d}k_{\ell}}{dt}=\frac{\widetilde{d}k_{\ell}}{dt}+\frac{\partial q _{j}}{\partial x_{j}}=\frac{\partial k_{\ell}}{\partial t}+\tilde{u}_{j}\, \frac{\partial k_{\ell}}{\partial x_{j}}+\frac{\partial q_{j}}{\partial x_{j}}, \tag{8}\]
where \(\tilde{d}/dt=\partial/\partial t+\tilde{u}_{j}\partial/\partial x_{j}\) and \(q_{j}=\frac{1}{V_{\ell}}\int_{V_{\ell}}\frac{1}{2}\left(\delta u_{i}^{2}\delta u _{j}^{*}\right)\,d^{3}\mathbf{r}_{s}\) (the spatial flux of small-scale kinetic energy), with \(\delta u_{j}^{*}\equiv u_{j}^{*}-\tilde{u}_{j}\). The evolution of kinetic energy of turbulence at scales at and smaller than \(\ell\) (in the inertial range, i.e. neglecting viscous diffusion and forcing terms) is thus given by
\[\frac{\widetilde{d}k_{\ell}}{dt}=\Phi_{\ell}-\epsilon_{\ell}+P_{\ell}-\frac{ \partial q_{j}}{\partial x_{j}}, \tag{9}\]
This equation represents the "first law of thermodynamics" for our system of interest. The system can be considered to be the eddies inside the sphere of diameter \(\ell\) consisting of turbulent fluid (see Fig. 1(a)). We consider the smaller-scale turbulent eddies inside the sphere to be analogous to a set of interacting "particles" which are exposed to energy exchange with the larger-scale flow structures at a rate \(\Phi_{\ell}\), losing energy to molecular degrees of freedom at a rate \(\epsilon_{\ell}\), and also being exposed to work per unit time done by pressure at its periphery (\(P_{\ell}\)). Spatial turbulent transport (spatial flux \(q_{j}\)) can also be present.
## 3 Analogy with Gibbs equation and definition of entropy
The energetics (first law Eq. 9) of the system of eddies inside the ball of size \(\ell\) invites us to write a sort of Gibbs equation, in analogy to the standard expression
\[Tds=de+p\,dv, \tag{10}\]
where \(T\) is temperature, aiming to define an entropy \(s\). The internal energy \(e\) is analogous to \(k_{\ell}\) and the pressure work (\(p\,dv\), work done _by_ the system) is analogous to \(-P_{\ell}\) since the volume change \(dv\) is the surface integration of \(\delta u_{j}\hat{r}_{j}\) times a time increment \(dt\). Rewritten as a rate equation (i.e., dividing by \(dt\)), the analog to Gibbs equation for our system reads
\[T\,\frac{\widetilde{ds}_{\ell}}{dt}=\frac{\widetilde{dk}_{\ell}}{dt}-P_{\ell}, \tag{11}\]
where \(s_{\ell}\) is a new quantity defined via this equation and is akin to an entropy (intensive variable) of the system of small-scale eddies inside the sphere of diameter \(\ell\). Also, \(T\) has to be some suitably defined temperature. Combining Eq. 11 with the energy equation (Eq. 9) one obtains
\[\frac{\widetilde{ds}_{\ell}}{dt}=\frac{1}{T}\left(\Phi_{\ell}-\epsilon_{\ell} -\frac{\partial q_{j}}{\partial x_{j}}\right). \tag{12}\]
The heat exchange with the "thermal reservoir" (here considered to be the molecular degrees of freedom inside the sphere) at rate \(\epsilon_{\ell}\) also occurring at temperature \(T\) then generates a corresponding change (increase) of entropy of the "reservoir" at a rate
\[\frac{\widetilde{ds}_{\rm res}}{dt}=\frac{\epsilon_{\ell}}{T}. \tag{13}\]
The generation rate of _total entropy_\(s_{\rm tot}=s_{\ell}+s_{\rm res}\) is then given by
\[\frac{\widetilde{ds}_{\rm tot}}{dt}=\frac{\Phi_{\ell}}{T}-\frac{q_{j}}{T^{2}} \frac{\partial T}{\partial x_{j}}-\frac{\partial}{\partial x_{j}}\left(\frac{ q_{j}}{T}\right), \tag{14}\]
where we have rewritten \(T^{-1}\nabla\cdot{\bf q}=T^{-2}{\bf q}\cdot\nabla T+\nabla\cdot({\bf q}/T)\). The first two terms on the RHS of Eq. 14 represent the entropy generation terms (strictly positive in equilibrium thermodynamics due to the second law), while the last one represents spatial diffusion of entropy thus not associated with net generation.
To complete the thermodynamic analogy, we identify the temperature to be the (internal) kinetic energy of the small-scale turbulence, i.e., we set \(T=k_{\ell}\) (in other words, we select a "Boltzmann constant" of unity thus choosing units of temperature equal to those of turbulent kinetic energy per unit mass). Examining Eq. 14 it is then quite clear that the quantity
\[\widehat{\Psi}_{\ell}=\frac{\Phi_{\ell}}{k_{\ell}}-\frac{q_{j}}{k_{\ell}^{2}} \frac{\partial k_{\ell}}{\partial x_{j}} \tag{15}\]
represents the total entropy generation rate for the system formed by the smaller-scale eddies inside any particular sphere of diameter \(\ell\). In this paper we do not focus on the entropy generation due to spatial gradients in small-scale kinetic energy (the second term in Eq. 15) and focus solely on the part due to the cascade of kinetic energy in scale space,
\[\Psi_{\ell}=\frac{\Phi_{\ell}}{k_{\ell}}. \tag{16}\]
The structure function formalism leading to \(\Phi_{\ell}\) as the quantity describing the rate of local energy cascade at scale \(\ell\) is not the only formalism that can be used to quantify
cascade rate in turbulence. Another approach is widely used in the context of Large Eddy Simulations (LES), where an equation similar to Eq. 9 can be obtained using filtering (Piomelli _et al._, 1991; Germano, 1992; Meneveau & Katz, 2000). It is a transport equation for the trace of the subgrid-scale or subfilter-scale stress tensor \(\tau_{ij}=\widetilde{u_{i}u_{j}}-\tilde{u}_{i}\tilde{u}_{j}\) (the tilde represents spatial filtering at scale \(\ell\)), i.e., a transport equation for \(k_{\ell}^{\rm gs}=\frac{1}{2}\tau_{ii}\). In this equation the term \(\Pi_{\ell}=-\tau_{ij}\tilde{S}_{ij}\) appears (\(\tilde{S}_{ij}\) is the filtered strain-rate tensor), and \(\Pi_{\ell}\) plays a role similar to the role of \(\Phi_{\ell}\) for the velocity increment (or structure function) formalism (see Yao _et al._ (2023_a_) for a comparative study of both). Consequently, we can define another entropy generation rate associated with subgrid or subfilter-scale motions according to \(\Psi_{\ell}^{\rm gs}=\Pi_{\ell}/k_{\ell}^{\rm gs}\).
In any case, the system consisting of the small-scale eddies inside the sphere of diameter \(\ell\) cannot be considered to be in near statistical equilibrium and thus \(\Phi_{\ell}\) or \(\Pi_{\ell}\) (and \(\Psi_{\ell}\) or \(\Psi_{\ell}^{\rm gs}\)) can in principle be both positive and negative. In particular, the literature on observations of negative subgrid-scale energy fluxes \(\Pi_{\ell}\) is extensive (Piomelli _et al._, 1991; Borue & Orszag, 1998; Meneveau & Katz, 2000; Van der Bos _et al._, 2002; Vela-Martin, 2022). The lack of equilibrium conditions in turbulence is related to the fact that there is no wide time-scale separation between the eddies smaller than \(\ell\) and those at or larger than \(\ell\). It is also related to the fact that the number of entities, eddies, or "particles", at scales smaller than \(\ell\) that are dynamically interacting with those at scales larger than \(\ell\) is not large as it is in molecular systems. Therefore, local violations of analogues of the second law are to be expected and relevant principles from non-equilibrium thermodynamics must be invoked instead. We regard the evolution of small-scale eddies at scales below \(\ell\), but significantly larger than the Kolmogorov scale, as being governed by inviscid, reversible dynamics. These eddying degrees of freedom would be the analogue of the reversible dynamics of molecular degrees of freedom at the microscopic level. The reversible microscopic dynamics of such molecules give rise to positive definite dissipation rate \(\epsilon\) and phase-space volume contraction when motions are coarse-grained at continuum description scales. For the turbulence case, we posit that phase-space contraction and entropy generation occurs at the level of coarser-grained dynamics, when attempting to describe the system using effective variables at scales at, or larger than, \(\ell\). The reversible inviscid eddying motions at scales smaller than \(\ell\) give rise to \(\Phi_{\ell}\) in analogy to how reversible microscopic molecular dynamics give rise to \(\epsilon\). However, because of the lack of scale separation between the small-scale eddies and \(\ell\), \(\Phi_{\ell}\) and phase-space volume change for variables at that level of description can be either positive or negative.
It should be kept in mind that defining entropy for non-equilibrium systems is in general not a settled issue, even for fields other than fluid turbulence. For the purpose of exploring the consequences of a relatively simple option, we follow the definition used in equilibrium systems as in Eq. 1. Clearly, since \(\Phi_{\ell}\) and \(\Pi_{\ell}\) can be negative, so will the entropy generation rates, and second law violations will be possible using the currently proposed definition of entropy.
## 4 Fluctuation Theorem in non-equilibrium thermodynamics
A well-known and testable result from non-equilibrium thermodynamics is the Fluctuation Relation, FR (Evans _et al._, 1993; Gallavotti & Cohen, 1995; Searles _et al._, 2000; Marconi _et al._, 2008; Seifert, 2012). Very loosely speaking, for systems in which the microscopic dynamics are reversible (as they can be argued to be in the case of small-scale eddies in the inertial range obeying nearly inviscid dynamics), the ratio of probability densities of observing a "forward positive dissipative" event and the same "negative
dissipation reverse" event can be related to the contraction rate in the appropriate phase-space. The sketch in Fig. 1(b) illustrates the evolution of a "blob" of states of the system (set of states "A" occupying volume \(V(0)\) in phase space) at time \(t=0\). These states evolve and after some time \(t\) the corresponding phase-space volume has changed to \(V(t)\) and the set of states now occupies set \(B\).
On average due to positive mean entropy generation and associated contraction of phase-space volume, \(V(t)<V(0)\), but for certain configurations the reverse may be true. The probability of observing one of the states in set \(A\) can be taken to be proportional to the phase-space volume of set \(A\). Thus, the probability of being in set \(A\) (and therefore ending up in \(B\) after a time \(t\)) is proportional to \(V(0)\), i.e., \(P(A\to B)\sim V(0)\). Phase-space contraction rates involve exponential rates of volume change depending on the finite-time Lyapunov exponents. Since the phase-space contraction rate in dynamical systems is proportional to the local rate of entropy generation (\(\Psi_{\ell}\) in our case), one expects \(V(t)=V(0)\exp(-\Psi_{\ell}t)\), assuming that the initial set A was chosen specifically to consist only of sets of states characterized by \(\Psi_{\ell}\) between times \(t=0\) and \(t\). Crucially, since the dynamics are reversible, if one were to run time backwards and start with the states at \(B\), one would end up at \(A\). Also, \(P(B\to A)\sim V(t)\). The corresponding entropy production rate would have the opposite sign (as \(\Psi_{\ell}\) is an odd function of velocities). Identifying \(P(A\to B)\) with the probability density \(P(\Psi_{\ell})\) of observing a given value of entropy generation and \(P(B\to A)\) with the probability density of observing the sign-reversed value, i.e. \(P(-\Psi_{\ell})\), leads to the FR relationship applied to the entropy generation rate defined for our turbulence system:
\[\frac{P(\Psi_{\ell})}{P(-\Psi_{\ell})}=\frac{V(0)}{V(t)}=\exp\left(\Psi_{\ell} \,t\right), \tag{1}\]
where time \(t\) is understood as the time over which the entropy generation rate is computed if in addition one were to average over periods of time following the sphere of size \(\ell\) in the flow. For now we shall not assume a specific value of \(t\) and assume it is small but finite. If Eq. 1 holds true in turbulent flows, a plot of \(\log[P(\Psi_{\ell})/P(-\Psi_{\ell})]\) versus \(\Psi_{\ell}\) should show linear behavior when plotted as function of \(\Psi_{\ell}\).
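This prediction can be checked numerically in a few lines: draw surrogate samples whose two tails are exponential with slopes \(\alpha_{\pm}\) (an assumption standing in for DNS-measured \(\Psi_{\ell}\); see the Discussion below), histogram both signs, and fit the slope of the log-ratio.

```python
import numpy as np

rng = np.random.default_rng(2)

def fr_slope(psi_tau, nbins=60):
    """Fit the slope of log[P(psi)/P(-psi)] vs psi (the FR predicts slope ~ t)."""
    lim = np.quantile(np.abs(psi_tau), 0.99)
    bins = np.linspace(0.0, lim, nbins)
    centers = 0.5 * (bins[1:] + bins[:-1])
    p_pos, _ = np.histogram(psi_tau[psi_tau > 0], bins=bins, density=True)
    p_neg, _ = np.histogram(-psi_tau[psi_tau < 0], bins=bins, density=True)
    m = (p_pos > 0) & (p_neg > 0)
    # Conditional densities are re-weighted by the mass in each tail.
    log_ratio = np.log(np.mean(psi_tau > 0) * p_pos[m]) \
              - np.log(np.mean(psi_tau < 0) * p_neg[m])
    return np.polyfit(centers[m], log_ratio, 1)[0]

# Surrogate samples with exponential tails, alpha_- - alpha_+ = 1 (see Discussion).
a_plus, a_minus, n = 2.0, 3.0, 2_000_000
pos = rng.random(n) < a_minus / (a_plus + a_minus)   # tail masses of the surrogate PDF
psi = np.where(pos, rng.exponential(1 / a_plus, n), -rng.exponential(1 / a_minus, n))
print(f"fitted slope: {fr_slope(psi):.2f} (expected {a_minus - a_plus:.1f})")
```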
Figure 1: (a) Sketch in physical space illustrating eddies at scales \(\ell\) and smaller being transported by the larger-scale flow and exchanging energy locally at a rate \(\Psi_{\ell}\) with eddies of larger size (\(n\ell\)), and being affected by pressure work \(P_{\ell}\). There is dumping of energy into a “heat reservoir” at a rate \(\epsilon_{\ell}\). (b) Sketch in phase space representing the (“microscopically” reversible) dynamics of a set (\(A\)) of possible states of the system that are characterized by phase-space contraction rate \(\Psi_{\ell}\), that start at \(t=0\) and evolve to states \(B\) at time \(t\). The “microscopic” degrees of freedom here are the eddies of scale smaller than \(\ell\), and in the inertial range their dynamics are reversible.
## 5 Results from isotropic turbulence at \(R_{\lambda}=1250\)
To evaluate the validity of the FR for isotropic turbulence we use data from a direct numerical simulation (DNS) of forced isotropic turbulence at a Taylor-scale Reynolds number of \(R_{\lambda}=1250\). The simulations used \(8192^{3}\) grid points (Yeung _et al._, 2012) and the data are available at the public Johns Hopkins Turbulence Database system (JHTDB). We perform the analysis at three length-scales in the inertial range, \(\ell=30\eta,\,45\eta,\,60\eta\), where \(\eta\) is the average Kolmogorov scale. To compute the surface averages required to evaluate \(\Phi_{\ell}\), we discretize the outer surface of diameter \(\ell\) into 500 point pairs (\(+\) and \(-\) points) that are approximately uniformly distributed on the sphere. Velocities to evaluate \(\delta u_{i}\) are obtained using the JHTDB web services. \(k_{\ell}\) is evaluated similarly by integrating over five concentric spheres. The accuracy of this method of integration has been tested by increasing the number of points used in the discretization. To compute spherically volume filtered quantities such as \(\tau_{ij}\) or \(\tilde{S}_{ij}\), we fix the middle point coordinate \(\mathbf{x}\) in the physical domain. For each center point, we download data using the JHTDB's cutout service in a cube of size equal to \(\ell\). The arrays are then multiplied by a spherical mask (filter) to evaluate local filtered velocities and velocity products. Gradients are evaluated using 4th-order centered finite differences. We compute the quantities \(k_{\ell}\), \(\Phi_{\ell}\) and \(\Psi_{\ell}=\Phi_{\ell}/k_{\ell}\) at \(2\times 10^{6}\) randomly chosen points in the domain. The probability density functions of \(\Psi_{\ell}\) (and of \(\Phi_{\ell}\)) are then evaluated based on the entire sample of randomly chosen points.
Figure 2 shows the ratio of probability densities for positive and negative entropy production rates as function of the entropy production rate \(\Psi_{\ell}\), in semi-logarithmic axes. Results are shown for three scales \(\ell/\eta=30,45\) and \(60\). In good agreement with the prediction of the fluctuation relation, to a good approximation the results show linear behavior, over a significant range of \(\Psi_{\ell}\) values. The units of \(\Psi_{\ell}\) are inverse time-scale, so that they are here normalized by the inertial range scaling of this quantity, \(\langle\epsilon\rangle^{1/3}\ell^{-2/3}\).
The slope of the lines, when \(\Psi_{\ell}\) is normalized by \(\langle\epsilon\rangle^{1/3}\ell^{-2/3}\) is rather independent of \(\ell/\eta\) and is quite close to unity. It suggests that the elapsed "time" is on the order of \(t\sim\tau_{\ell}\), where \(\tau_{\ell}=\langle\epsilon\rangle^{-1/3}\ell^{2/3}\), consistent with the notion of eddy turnover-time. Figure 2 represents the main finding of this study, providing strong support for the applicability
Figure 2: Fluctuation Relation test for isotropic turbulence at \(R_{\lambda}=1250\): ratio of probability densities of positive and negative entropy generation rate scales exponentially with the entropy generation rate \(\Psi_{\ell}\) at scale \(\ell\). Results are shown for 3 different scales \(\ell/\eta=30\) (black circles), 45 (red triangles) and 60 (blue squares). The gray dashed line has slope=1.13 obtained via linear fit while the solid gray line has slope \(=1\). In this and all other figures, natural logarithm is used.
of FR in the context of turbulence in the inertial range, provided the entropy generation rate is defined based on the ratio of energy cascade rate and local "temperature" \(k_{\ell}\).
Furthermore, if one were to interpret the normalized entropy generation rate as an entropy change, i.e. \(\Delta s=\Psi_{\ell}\tau_{\ell}\), one can also test the integral fluctuation relation (Marconi _et al._, 2008; Seifert, 2012; Fuchs _et al._, 2020) which states that \(\langle\exp(-\Delta s)\rangle=1\). Remarkably, computing the average over all \(N=2\times 10^{6}\) samples, we obtain \(\langle\exp(-\Psi_{\ell}\langle\epsilon\rangle^{-1/3}\ell^{2/3})\rangle=0.99\), \(1.03\) and \(0.97\) at the three scales \(\ell/\eta=30\), \(45\) and \(60\), respectively. Statistical convergence of our evaluation of \(\langle e^{-\Delta s}\rangle\) is very good: for the case where \(\ell/\eta=30\) and \(N=0.5\times 10^{6}\) and \(10^{6}\), the corresponding values of \(\langle e^{-\Delta s}\rangle\) are \(0.9856\) and \(0.9845\), respectively. Similarly, for \(\ell/\eta=45\) and \(60\) the disparity is less than \(1\%\).
This confirmation of the validity of the integral fluctuation relation suggests that \(\tau_{\ell}=\langle\epsilon\rangle^{-1/3}\ell^{2/3}\) is the natural timescale for the cascade process, although \(\tau_{\ell}\) corresponds to an average turn-over timescale (since it is based on the global mean dissipation instead of the local dissipation \(\epsilon_{\ell}\)). \(\tau_{\ell}\) may therefore be interpreted as describing the level in the cascade process corresponding to scale \(\ell\) (as envisioned in the approach by Fuchs _et al._ (2020)) rather than representing the actual elapsed time during an eddy turnover process, for which the local time based on \(\epsilon_{\ell}\) could be more appropriate (for analysis of conditional statistics based on \(\epsilon_{\ell}\), see Yao _et al._ (2023_b_)).
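In code, the integral-relation estimator reads as follows; the Gaussian samples and the value of \(\ell\) are placeholders (the value of \(\eta\) is not restated here), while \(\langle\epsilon\rangle=1.367\) follows the caption of Fig. 4. With the actual DNS samples the text reports values of 0.99, 1.03 and 0.97.

```python
import numpy as np

def delta_s(psi_ell, eps_mean, ell):
    """Entropy change Delta s = Psi_ell * tau_ell, tau_ell = <eps>^(-1/3) ell^(2/3)."""
    return psi_ell * eps_mean ** (-1 / 3) * ell ** (2 / 3)

def integral_fr(psi_ell, eps_mean, ell):
    # The integral fluctuation relation predicts this average equals 1.
    return np.exp(-delta_s(psi_ell, eps_mean, ell)).mean()

# Placeholder Gaussian samples and placeholder ell; with DNS Psi_ell samples the
# text reports 0.99, 1.03 and 0.97 at ell/eta = 30, 45 and 60.
psi = np.random.default_rng(3).normal(0.5, 1.0, 100_000)
print(f"<exp(-Delta s)> = {integral_fr(psi, eps_mean=1.367, ell=0.1):.3f}")
```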
## 6 Discussion
Here we explore some other plausible quantities and entropy definitions, and test to what degree FR can apply to them. First, we test applicability of the FR relation to the entropy production rate \(\Psi_{\ell}^{sgs}\) as suggested in the filtering formalism from LES. Figure 3(a) shows that the corresponding FR does not exhibit linear behavior, i.e. the FR does not apply to the LES version of entropy generation rate \(\Pi_{\ell}/\frac{1}{2}\tau_{ii}\) (at least not for the scales \(\ell/\eta\) studied here). Another variant is motivated by considering directly the cascade rates \(\Phi_{\ell}\) and \(\Pi_{\ell}\) rather than \(\Psi_{\ell}\) or \(\Psi_{\ell}^{sgs}\) as representative of the entropy production rate. We remark that the identification of \(\Pi_{\ell}\) as "entropy generation rate" is commonplace in the literature, presumably because a constant reference (arbitrary) temperature is assumed. Figure 3(b) shows that such definitions also do not exhibit linear behavior and thus the cascade rates do not obey the FR relations. Our results show that \(\Phi_{\ell}\) must be divided by \(k_{\ell}\) (temperature) to properly correspond to an entropy generation rate (units of \(1/\)time) and only then they exhibit behavior consistent with FR (Fig. 2).
For more in-depth understanding of the observed trends, we show PDFs of the entropy production rate \(\Psi_{\ell}\) in Fig. 4(a) and the energy cascade rate \(\Phi_{\ell}\) in Fig. 4(b), all at the three scales \(\ell\) (note that here we do not normalize \(\Psi_{\ell}\) and \(\Phi_{\ell}\) by their inertial range values). PDFs of energy cascade rate \(\Pi_{\ell}\) have been shown in the literature on many occasions, especially for the filtering/LES formulations (see e.g., Borue & Orszag (1998); Cerutti & Meneveau (1998); Tao _et al._ (2002); Cardesa _et al._ (2015); Vela-Martin & Jimenez (2021)). A detailed comparative study between statistics of \(\Phi_{\ell}\) and \(\Pi_{\ell}\) has been presented elsewhere (Yao _et al._, 2023_a_). Here we note that the PDFs of \(\Phi_{\ell}\) quantities have elongated highly non-Gaussian tails. Consistent with many prior observations (Borue & Orszag, 1998; Cerutti & Meneveau, 1998; Vela-Martin & Jimenez, 2021) regarding the PDFs of \(\Pi_{\ell}\), they have tails that are much wider (i.e., even more intermittent) than having exponential tails. However, by considering the variable \(\Psi_{\ell}\) (i.e., properly dividing by temperature), the tails of the PDF of \(\Psi_{\ell}\) become visibly much closer to exponential. Extreme events of \(\Psi_{\ell}\), once divided by the prevailing local kinetic energy, become less extreme. As can be seen in Fig. 4(a), the slopes of the exponential tails differ on the negative (steeper) and positive (flatter) sides.
We note that if both sides of the PDF have an exponential tail (e.g., \(P(\pm\Psi_{\ell})\sim\exp(-\alpha_{\pm}|\Psi_{\ell}|)\), with \(\alpha_{+}\) characterizing the positive \(\Psi_{\ell}\) tail and \(\alpha_{-}\) the negative one), the FR holds trivially and the slope of \(\log(P(\Psi_{\ell})/P(-\Psi_{\ell}))\) versus \(\Psi_{\ell}\) is \(\alpha_{-}-\alpha_{+}\). For the case of normalization using \(\tau_{\ell}\), we thus have \(\alpha_{-}-\alpha_{+}\approx 1\), approximately independent of scale \(\ell\) in the inertial range. For purely two-sided exponential PDFs with the two slopes \(\alpha_{-}\) and \(\alpha_{+}\), one can show that \(\langle\exp(-\Psi_{\ell}\tau_{\ell})\rangle=\alpha_{-}\alpha_{+}(\alpha_{-}-1)^{-1}(\alpha_{+}+1)^{-1}\), which equals unity if \(\alpha_{-}-\alpha_{+}=1\), consistent with the integral fluctuation theorem. These observations must be kept in mind when interpreting the results supporting the FR behavior seen in Fig. 2: on the one hand, as argued before, they could point to non-equilibrium thermodynamic behavior expected for systems far from equilibrium. Or, perhaps more mundanely, they could be a mere consequence of exponential tails in the PDFs of the ratio of energy transfer rate divided by local kinetic energy in turbulence. Perhaps both interpretations are non-trivially connected.
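The closed-form statement above can be verified symbolically; the snippet below assumes only the normalization of the two-sided exponential PDF.

```python
import sympy as sp

ap, am = sp.symbols("alpha_plus alpha_minus", positive=True)
C = ap * am / (ap + am)                     # normalization of the two-sided exponential
ifr = C * (1 / (ap + 1) + 1 / (am - 1))     # <exp(-Psi tau)>, valid for alpha_minus > 1
print(sp.simplify(ifr.subs(am, ap + 1)))    # prints 1 when alpha_- - alpha_+ = 1
```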
Returning to the proposed definition of entropy in Eq. 2 associated to the system of eddies in a sphere of diameter \(\ell\), it is instructive to rewrite it in "increment" form (which again has to be interpreted in Lagrangian fashion) and it would read
\[ds_{\ell}=\frac{1}{k_{\ell}}\left(dk_{\ell}+dw_{\ell}\right)=d\,\ln(k_{\ell}) +k_{\ell}^{-1}dw_{\ell}. \tag{10}\]
Figure 4: (a) PDFs of entropy production rate \(\Psi_{\ell}\). (b) PDFs of energy cascade rate \(\Phi_{\ell}\) (\(\Phi_{\ell}\) is shown in simulation units (Yeung _et al._, 2012), for which \(\langle\epsilon\rangle=1.367\)). Results are shown in semi-logarithmic axes, for 3 different scales \(\ell/\eta=30\) (black dotted line), 45 (red dashed line) and 60 (blue solid line).
Figure 3: (a) Fluctuation Relation test for isotropic turbulence at \(R_{\lambda}=1250\) applied to the entropy generation rate suggested by the LES filtering formalism \(\Psi_{\ell}^{sgs}=\Pi_{\ell}/(\tau_{ii}/2)\) (normalized by the timescale \(\tau_{\ell}=\langle\epsilon\rangle^{-1/3}\ell^{2/3}\)) for 3 filtering scales \(\ell/\eta\)=30 (black circles), 45 (red triangles) and 60 (blue squares). (b) Fluctuation Relation test applied to the cascade rates \(Z=\Phi_{\ell}\) (solid red triangles) and \(Z=\Pi_{\ell}\) (open red triangles) directly, without division by local kinetic energy (“temperature”). Results are shown for scale \(\ell/\eta=45\) but results for other scales are similar.
Here \(dw_{\ell}=(6/\ell)[(p^{*}/\rho)d\delta{\bf s}\cdot\hat{\bf r}]_{S_{\ell}}\) is the spherically averaged pressure work, such that \(\delta{\bf u}=d\,\delta{\bf s}/dt\). Whether this definition can somehow be related to the (log of) the number of possible states of the eddies smaller than \(\ell\), or provide any additional predictive capabilities (besides the observed FR behavior), remains to be seen.
As future extensions of the present study, it would be of interest to consider the effects of Reynolds number and of scale \(\ell\) approaching either the viscous or the integral scale of turbulence, to consider flows other than isotropic turbulence, and also to explore the contributions of "spatial diffusive fluxes of kinetic energy" due to spatial gradients of \(k_{\ell}\) that, according to Eq. 15, should also contribute, perhaps separately, to the total entropy generation rate. The small deviations from precisely linear behaviour seen in Fig. 2 also deserve further, more detailed study. Furthermore, the role of the "cascade time" \(t\) has to be clarified. An obvious possibility is to follow \(\Psi_{\ell}\) in a Lagrangian frame (Meneveau & Lund, 1994; Wan _et al._, 2010) and perform additional finite-time averaging.
## Acknowledgements
We thank G. Eyink for fruitful comments and the JHTDB/IDIES staff for their assistance with the database and its maintenance. This work is supported by NSF (Grant # CSSI-2103874).
## Declaration of interests
The authors report no conflict of interest.
|
2305.08039 | Systematic Meets Unintended: Prior Knowledge Adaptive 5G Vulnerability
Detection via Multi-Fuzzing | The virtualization and softwarization of 5G and NextG are critical enablers
of the shift to flexibility, but they also present a potential attack surface
for threats. However, current security research in communication systems
focuses on specific aspects of security challenges and lacks a holistic
perspective. To address this challenge, a novel systematic fuzzing approach is
proposed to reveal, detect, and predict vulnerabilities with and without prior
knowledge assumptions from attackers. It also serves as a digital twin platform
for system testing and defense simulation pipeline. Three fuzzing strategies
are proposed: Listen-and-Learn (LAL), Synchronize-and-Learn (SyAL), and
Source-and-Learn (SoAL). The LAL strategy is a black-box fuzzing strategy used
to discover vulnerabilities without prior protocol knowledge, while the SyAL
strategy, also a black-box fuzzing method, targets vulnerabilities more
accurately with attacker-accessible user information and a novel
probability-based fuzzing approach. The white-box fuzzing strategy, SoAL, is
then employed to identify and explain vulnerabilities through fuzzing of
significant bits. Using the srsRAN 5G platform, the LAL strategy identifies 129
RRC connection vulnerabilities with an average detection duration of 0.072s.
Leveraging the probability-based fuzzing algorithm, the SyAL strategy
outperforms existing models in precision and recall, using significantly fewer
fuzzing cases. SoAL detects three man-in-the-middle vulnerabilities stemming
from 5G protocol vulnerabilities. The proposed solution is scalable to other
open-source and commercial 5G platforms and protocols beyond RRC. Extensive
experimental results demonstrate that the proposed solution is an effective and
efficient approach to validate 5G security; meanwhile, it serves as real-time
vulnerability detection and proactive defense. | Jingda Yang, Ying Wang, Yanjun Pan, Tuyen X. Tran | 2023-05-14T01:00:54Z | http://arxiv.org/abs/2305.08039v2 | # Systematic Meets Unintended: Prior Knowledge Adaptive 5G Vulnerability Detection via Multi-Fuzzing
###### Abstract
The virtualization and softwarization of 5G and NextG are critical enablers of the shift to flexibility, but they also present a potential attack surface for threats. However, current security research in communication systems focuses on specific aspects of security challenges and lacks a holistic perspective. To address this challenge, a novel systematic fuzzing approach is proposed to reveal, detect, and predict vulnerabilities with and without prior knowledge assumptions from attackers. It also serves as a digital twin platform for system testing and defense simulation pipeline. Three fuzzing strategies are proposed: Listen-and-Learn (LAL), Synchronize-and-Learn (SyAL), and Source-and-Learn (SoAL). The LAL strategy is a black-box fuzzing strategy used to discover vulnerabilities without prior protocol knowledge, while the SyAL strategy, also a black-box fuzzing method, targets vulnerabilities more accurately with attacker-accessible user information and a novel probability-based fuzzing approach. The white-box fuzzing strategy, SoAL, is then employed to identify and explain vulnerabilities through fuzzing of significant bits. Using the srsRAN 5G platform, the LAL strategy identifies 129 RRC connection vulnerabilities with an average detection duration of 0.072s. Leveraging the probability-based fuzzing algorithm, the SyAL strategy outperforms existing models in precision and recall, using significantly fewer fuzzing cases. SoAL detects three man-in-the-middle vulnerabilities stemming from 5G protocol vulnerabilities. The proposed solution is scalable to other open-source and commercial 5G platforms and protocols beyond RRC. Extensive experimental results demonstrate that the proposed solution is an effective and efficient approach to validate 5G security; meanwhile, it serves as real-time vulnerability detection and proactive defense.
Fuzz Testing, Vulnerability Detection, RRC Protocols, 5G Stack, Digital Twin
## I Introduction
5G New Radio (NR) and NextG cellular networks promise a wide variety of heterogeneous use cases that enable Ultra-Reliable Low Latency Communications (URLLC), enhanced Mobile Broadband (eMBB), and massive Machine Type Communications (mMTC) across various industries that require networking with unprecedented flexibility. Flexibility is key in 5G New Radio (NR), providing performance enhancements and allowing vertical customization; however, these advances also increase the complexity of security in NR protocols and implementations [1]. While softwarization, virtualization, and disaggregation of networking functionalities are the key enablers of the needed shift to flexibility, they present a potential attack surface to threats and require rigorous testing against vulnerabilities, which is often computationally expensive and impractical in large-scale software stacks.
In existing large-scale codebases for both open-source and commercially available 5G stacks, state-of-the-art security research in communication systems has resolved specific aspects or partitions of the security challenges to achieve assurance; in contrast, 5G and NextG networks need to consider security defense from a holistic perspective when detecting vulnerabilities and unintended emergent behaviors. In addition, defense strategies based on thorough testing often lack adaptation and robustness when facing variations of attacks, compromising 5G assurance. Furthermore, general system engineering approaches (e.g., system dynamics and agent-based modeling) are inadequate for describing both the qualitative and quantitative aspects of security features in 5G software stacks. Moreover, comprehensive cybersecurity assessments involving physical objects, particularly over critical infrastructures, can be expensive and time-consuming.
Unlike deterministic behaviors that can be verified via approaches like formal methods, detecting unintended emergent behavior in 5G software stacks requires repeatability and adds uncertainty to the results due to their stochastic nature and various use scenarios. Additionally, the recently adopted Open Radio Access Network (O-RAN) [2], characterized by machine learning algorithms, introduces less transparency to 5G communications. This uncertainty poses a significant challenge to traditional vulnerability detection methods, as they may not be able to effectively identify vulnerabilities arising from unexpected inputs or behaviors resulting from machine learning algorithms. Therefore, an efficient and systematic scheme that is based on experimental results for detecting vulnerabilities and unintended emergent behaviors is crucial for ensuring the security and robustness of 5G systems. Experimental work in the context of 5G shifts simulation-driven research used in previous mobile network generations to system implementation prototyping [3][4][5]. This change stems from several factors including the widespread adoption of programmable Software Defined Radios (SDR), network function virtualization (NFV), and subsequent open-source softwarization of mobile network functions through various projects such as Open Air Interface (OAI) [6], srsRAN [7]. The change also enables digital twins to emerge as a disruptive concept for testing complex, large-scale 5G network stacks with limited resources. A digital twin enables systems analysis, design, optimization, and evolution to take place fully digitally or in conjunction with a cyber-physical system. Compared to traditional engineering approaches, digital twin solutions offer
enhanced speed, accuracy, and efficiency [8]. The applications of digital twins range from well-defined, low data-flow device-control communication, such as in Industry 4.0 [9], to more sophisticated applications involving large volumes of data flow, such as Augmented Reality (AR) [10].
The concept of digital twins has been applied to cybersecurity and risk management in communication systems. Nguyen et al. [11] proposed a systematic approach to using digital twins for developing and deploying complex 5G environments and risk prevention systems. Additionally, Jagannath et al. [12] developed an AI cloud modeling system to mitigate risks associated with developing innovative technologies on existing systems. Network digital twins, in conjunction with traditional fuzzing approaches, offer comprehensive feasibility proof and evaluation for development and deployment on physical systems [13]. Furthermore, digital twin technology can be utilized to formulate specific and efficient standards for security training [14], extending beyond software design and development.
Though various digital twin applications exist in manufacturing and research, the exponential increase in data volume and volatile environments presents significant challenges for using digital twins in physical systems. For example, simulating and identifying unintended emergent behaviors in 5G to provide scalable cybersecurity assurance through digital twins remains difficult. As a result, digital twins are often limited to descriptive rather than actionable functions in 5G and cybersecurity fields [15]. This study aims to develop a digital twin platform that is not only descriptive but also actionable against potential or actual attacks in a physical 5G system, from the micro-atomic to the macro-geometric level. In addition, effective fuzzing strategies are developed on the platform to detect vulnerabilities and unintended emergent behaviors in 5G specifications and implementations.
In addition to an efficient testbed, many studies have proposed research on 5G protocol vulnerability detection and its extension to critical-area applications. For instance, 3GPP TS 33.501a described how a significant number of pre-authentication messages are sent in an unencrypted format, which can be exploited to launch DoS attacks and obtain sensitive information, such as the location of mobile subscribers in 5G/LTE [16]. In [17], the authors identified the weakest links and channels vulnerable to sniffing and spoofing in the 5G NR framework. Hussain et al. [18] proposed a property-directed approach for qualitative and quantifiable analysis. Innovative strategies such as the grammar-based fuzzing approach with a Markov chain model have recently been proposed to generate high-risk fuzzing inputs [19]. Similarly, other stateful transition models have been introduced to efficiently locate vulnerabilities [20, 21, 22]. In an effort to further refine the fuzzing scope, formal verification has been incorporated into fuzzing strategies, as demonstrated by HyPFuzz [23]. Capitalizing on advancements in deep learning technologies, Rainfuzz [24] employs reinforcement learning to generate a heatmap, facilitating an estimation of the risk associated with varying permutations of fuzzing cases. Additionally, Natural Language Processing (NLP) has been introduced to analyze vulnerabilities directly from the source code [25]. In a bid to enhance vulnerability assessment, the development of security metrics [26] and dependent fields [27] offers a more comprehensive visualization of vulnerability evaluation. These developments continue to contribute to the effectiveness and efficiency of vulnerability detection and risk assessment. Despite the substantial contributions to protocol-based vulnerability detection, a comprehensive and systematic approach for detecting vulnerabilities and unintended emergent behaviors in the entire protocol, considering varying perspectives on prior knowledge and fuzzing levels, remains unaddressed.
### _Motivation and Challenges_
Among vulnerability detection approaches, fuzz testing has been extensively used in large-scale 5G and beyond systems for cybersecurity purposes. Nevertheless, the major challenge in this area remains computational complexity, which tends to increase exponentially with protocol complexity. To confine detection within the protocol scope, Potnuru et al. [28] proposed protocol-based fuzz testing that generates fuzzing cases with all possible identifiers and provides a comprehensive understanding of how the system reacts to different protocol-based attacks. Han et al. proposed mutation-based fuzz testing [29], which can generate extreme cases, such as buffer overflows or incorrect formats. Combining the advantages of the protocol-based and mutation-based approaches, Salazar et al. provided a rule-based fuzzer [30], which can cover all protocol-based cases and part of the extreme cases. In [31], Ma et al. proposed a state-transaction method to analyze serial attacks, which can be achieved by modifying different messages in different states. Their approaches significantly augmented the complexity and diversity of attacks. However, assuming prior knowledge may implicitly limit the applicability of vulnerability detection methods, as attackers will exploit vulnerabilities based on the most efficient means available, utilizing their knowledge of the system. Successful tokenized general-purpose fuzzers (GPF), such as LZfuzz [32], eliminate the requirement for access to well-documented protocols and implementations while focusing on plain-text fuzzing. Additionally, non-selective traversal fuzzing relies on massive computational resources [31]. To address the challenges of prior-knowledge requirements and computational complexity, we propose a multi-dimensional, multi-layer, protocol-independent fuzzing framework based on a digital twin system, combined with machine learning algorithms, aiming to detect protocol vulnerabilities and unintended emergent behaviors in fast-evolving 5G and NextG specifications and large-scale open programmable 5G stacks.
### _Contributions_
In this paper, we develop a digital twin framework enabling systematic and accumulative vulnerability detection for 5G and NextG protocols through fuzz testing. Based on the proof of concept of existing work in LAL [33], a more detailed description of the digital twin framework demonstration for 5G vulnerability detection is shown in [34]. By feeding invalid and unexpected inputs and data into network traffic and software
stack implementation, fuzz testing is a superb solution for discovering and detecting vulnerabilities, including implementation errors and security vulnerabilities [28, 30, 35]. Unlike existing works targeting command-level or bit-level fuzzing only, the designed digital twin architecture incorporates both fuzzing dimensions. Besides, based on the awareness of prior knowledge, we design fuzzing strategies including 'Listen-and-Learn (LAL)', 'Sync-and-Learn (SyAL)', and 'Source-and-Learn (SoAL)', which are analogous to black-box, grey-box, and white-box models. Prior knowledge, in this context, refers to any information that an attacker may possess about a system or element before attempting to exploit vulnerabilities. This information may include protocols, synchronization information, zero-day exploits, or any other relevant information that can be used to exploit potential vulnerabilities. The Radio Resource Control (RRC) protocols and their implementations on srsRAN are adopted as the digital twin proof-of-concept of the designed system, and a relay model is proposed as the digital twin of an attacker.
In particular, the proposed LAL command-level strategy assumes no prior knowledge of protocols nor access to code stacks. Its protocol-independent characteristic enables automatic verification of 5G and NextG protocols and newly released large-scale open programmable stacks. Then, by leveraging some prior and domain knowledge, a more strategic grey-box approach, SyAL command-level fuzzing, is formed to achieve higher efficiency and accuracy in detecting vulnerabilities. In scenarios with access to the source code, the designed SoAL bit-level fuzzing works as a white-box strategy that performs a more sophisticated analysis of the high-risk commands identified via LAL and SyAL. Our proposed fuzzing system offers sufficient automation and efficiency to serve as a feasible approach to validating the security of 5G protocols and implementations. It also enables real-time system vulnerability detection and proactive defense.
Our main contributions are summarized below:
1. Taking into account the attacker strategies with different levels of prior knowledge, we design three fuzzing strategies named LAL, SyAL, and SoAL. These strategies offer an efficient and comprehensive solution for vulnerability detection in 5G specifications and stacks.
2. We propose a probability-based fuzzing approach that can reduce the average number of fuzzing cases expected to detect a vulnerability from linear to logarithmic growth, resulting in significant scalability and efficiency improvements for complex systems.
3. We design a renovated 5G cybersecurity digital twin (CDT) platform based on classical 5G cybersecurity modeling. Compared to existing ones, the introduced platform is not only descriptive but also actionable for potential or actual attacks in a physical 5G system.
4. A proof-of-concept of the designed framework piloting RRC protocols in the srsRAN platform is developed. The discovered vulnerable states and transactions of the RRC protocol provide insights for fortifying 5G and NextG protocols.
5. The digital twin solution can directly scale to other existing and future open-source and commercial 5G platforms and protocols other than RRC.
The rest of the paper is organized as follows: Section II describes the overview and setup of the LAL system. Section III-C provides rule-based and LSTM-based prediction approaches to quantifiably evaluate the feasibility and performance of our system. We discuss the application of LAL in 5G and NextG software stacks in Section VI.
## II System Design
### _System Overview_
The proposed system is scenario-adaptive to different levels of knowledge background, from no knowledge (black-box) to thorough knowledge (white-box) about protocols, and Fig. 1 shows the architecture of the proposed scenario-adaptive fuzzing system. First, an attack model configuration is required as input, where we define the security goals and target high-risk protocols or modules in a specific software stack based on contextual information and domain knowledge. Then, given the input, the system identifies fuzzing locations and generates appropriate attack models. For example, the model takes the black-box LAL strategy when the attack configuration assumes no knowledge, while the white-box SoAL strategy is selected as the attack model when the configuration assumes thorough knowledge. Finally, based on the attack model, the fuzzing strategy function generates the fuzzing sequences ordered by their priorities. The output of the system contains the identification of high-risk states and transactions, the detected vulnerabilities, and the prediction of the vulnerable path. The consistency of our detected vulnerabilities with existing exploits, illustrated in the following sections, proves the feasibility and efficiency of our system.
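As a concrete illustration of this configuration-driven selection, the following minimal Python sketch (our own illustration, not the authors' implementation; the enum and function names are assumed for clarity) maps a declared prior-knowledge level to one of the three fuzzing strategies:

```python
from enum import Enum

class PriorKnowledge(Enum):
    NONE = "none"            # black-box: no protocol knowledge
    SYNCHRONIZED = "sync"    # grey-box: synchronization info (e.g., RNTI)
    SOURCE = "source"        # white-box: full source-code access

def select_strategy(knowledge: PriorKnowledge) -> str:
    """Pick the fuzzing strategy matching the attack-model configuration."""
    return {
        PriorKnowledge.NONE: "LAL",           # Listen-and-Learn
        PriorKnowledge.SYNCHRONIZED: "SyAL",  # Synchronize-and-Learn
        PriorKnowledge.SOURCE: "SoAL",        # Source-and-Learn
    }[knowledge]

print(select_strategy(PriorKnowledge.SYNCHRONIZED))  # -> SyAL
```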
Based on the format and impact, we focus on the validity and legitimacy of commands. Legitimacy indicates whether the command will pass protocol and cryptographic checkers, and validity represents whether the command will lead to a threat. Correspondingly, commands mainly fall into three classes: valid states, illegal or invalid states, and other logical states, the relationship of which is shown in Fig. 2. Most command-level fuzzing states can be regarded as legal states because all command-level states are collected from regular connections.
Fig. 1: Overview of 5G fuzz testing methods.
However, the validity of command-level fuzzing states can only be decided from the result of the connection: a valid fuzzing state has no influence on the protocol stack, whereas an invalid one leads to a threat or vulnerability. On the contrary, bit-level fuzzing states contain both illegal and legal states. Therefore, we use the source-code interpreter as the integrity checker to identify whether the bit-level fuzzing states are legal or illegal. As for the validity of bit-level fuzzing states, we take the same measurement approach as for command-level fuzzing states for labeling.
### _Hardware and Software Setup_
Fig. 3 shows the designed srsRAN-based man-in-the-middle (MITM) digital twin model, which simulates the emergent behaviors, including wildly unexpected ones, that usually occur during legitimate physical communications. In our digital twin model, the fundamental functions of the User Equipment (UE) and gNodeB (gNB) are implemented by srsRAN. Furthermore, we use ZeroMQ (ZMQ), an asynchronous socket message-transfer framework implemented over TCP, as the substitute for wireless communications between the UE and gNB in the digital twin. We then set up a MITM relay, which can listen to and forward the socket messages between the UE and gNB, to represent attackers in our proposed digital twin model. As for the core network (CN), we use Open5GS to provide all necessary functions of the 5G protocols.
As the core of the proposed digital twin model, the MITM relay is responsible for message listening, modification, and recording. The detailed structure of our proposed MITM relay implementation is as follows:
* **Message listening.** In the uplink channel, the proposed MITM relay listens for messages from the UE on port 2003 over TCP and forwards them to the gNB on port 2000. Similarly, in the downlink channel, the relay listens on port 2002 for messages from the gNB and forwards them to port 2001 of the UE (see the sketch after this list).
* **Message modification.** Based on the fuzzing probability system, illustrated in the following section, the relay will take command-level and bit-level fuzzing strategies to modify the message to detect vulnerabilities.
* **Message recording.** We build a database, shown in Fig. 4, to store the history of messages listened from the UE and gNB chronologically. The database also records fuzzed cases and the status of each connection attempt.
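To make the relay's data path concrete, the sketch below implements a simplified bidirectional TCP forwarder with a fuzzing hook. It is a minimal sketch assuming plain TCP sockets on localhost with the port numbers listed above; the actual srsRAN setup exchanges samples through ZeroMQ endpoints whose framing is not reproduced here, and the `mutate` hook stands in for the fuzzing and recording logic.

```python
import socket
import threading

def pipe(listen_port: int, dst_port: int, mutate):
    """Accept one connection on listen_port and forward every received chunk
    to 127.0.0.1:dst_port after passing it through the fuzzing hook."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", listen_port))
    srv.listen(1)
    src, _ = srv.accept()
    dst = socket.create_connection(("127.0.0.1", dst_port))
    while chunk := src.recv(4096):
        dst.sendall(mutate(chunk))  # message modification / recording hook

identity = lambda msg: msg  # replace with a mutation function when fuzzing

# Uplink: UE -> relay (2003) -> gNB (2000); downlink: gNB -> relay (2002) -> UE (2001).
up = threading.Thread(target=pipe, args=(2003, 2000, identity))
down = threading.Thread(target=pipe, args=(2002, 2001, identity))
up.start(); down.start()
up.join(); down.join()
```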
With the probability system updated by the status monitors in the UE and gNB, the relay can efficiently learn threat patterns and detect vulnerabilities. Not limited to MITM attacks, the relay can also simulate overshadowing attacks or false Base Station (BS) attacks through message modification. Our proposed relay can simulate a broad range of physical wireless attacks, which shows that it can serve as a digital twin of real-world attackers.
## III LAL Command-level Strategy
When our proposed system has no prior knowledge or understanding of the protocols, it attempts to detect and predict vulnerabilities without any domain knowledge. All commands received by our relay are encrypted and Fourier transformed. Even under such conditions, the system demonstrates the ability to detect vulnerabilities in a black-box environment. In particular, to discover and mitigate vulnerabilities and unintended emergent behaviors in the 5G stack with sufficient automation and adequate scalability, we design a protocol-independent Listen-and-Learn (LAL) based fuzzing system.
The proposed LAL fuzzing system targets the RRC protocols, which are recognized as among the most critical cellular protocols for radio resource management [28]. The RRC protocol configures the user and control planes according to the network status and allows the implementation of radio resource management. We fuzz the RRC protocols on the srsRAN 5G platform and the tunneled Non-Access Stratum (NAS) protocols through message reordering, injection, and modification. We implement two-dimensional fuzzing--command-level and bit-level--but focus on command-level fuzzing in this work. We identify high-risk attack paths by generating the state-transaction graph from the command-level fuzzing results. We further perform timely high-risk scenario prediction with a state-transaction-based Long Short-Term Memory (LSTM) model.
By embedding the message-exchange sniffer and the LSTM-based prediction model in the virtual wireless simulation, our LAL fuzzing system automatically and efficiently determines the command-level fuzzing message according to the states of the UE and gNB. Besides, as the designed framework is protocol-independent, it can be quickly adapted and transferred to newly released code stacks and protocols. For example, it can easily be turned into a hybrid design that provides provable assurance and formal threat detection for 5G software stacks by combining deterministic detection approaches such as formal methods. Finally, we incorporate an LSTM model
Fig. 2: Definition of fuzz testing region.
for rapid vulnerability detection. This prediction model enables proactive defense against potential attacks by learning early-stage abnormal state-transaction paths. In short, the designed LAL fuzzing system can be applied to 5G and NextG architectures (e.g., O-RAN) for real-time vulnerability and unintended-behavior monitoring, prediction, detection, and tracking.
### _State Recording_
The authentication and authorization scheme in 5G, viewed as a finite-state transaction, enables graphics-based analysis to identify the pattern of the risks. During fuzzing, the recorded states include the following information: 'message time', 'original bytes', 'RRC channel', 'message type', and 'physical channel'. 'Message time' represents the message sending time, and 'RRC channel' indicates which protocol we should use for message decoding. The RRC procedure of the UE can be uniquely identified by the 'message type' [36]. Due to the lack of domain knowledge about encryption and transformation, we use a general 5G protocol library, pycrate [37], to interpret the first six hex values and select the cross mapping of the interpreted 'message type' as the identifier of a state. Even though this interpretation approach cannot provide a correct translation of the command, the feasibility of vulnerability detection and prediction can still be proved in Sec. III-C. Besides, the 'rrcConnectionSetupComplete' message is used as the identifier of the successful completion of the RRC establishment. When the monitor in the gNB detects the 'rrcConnectionSetupComplete' message, the testing case is terminated and labeled as a successful connection. When the monitor in the gNB cannot detect the 'rrcConnectionSetupComplete' message within a predefined timeout limit (\(600\) seconds in the proof-of-concept experiments), it is considered a failed connection.
We build a database to record the states and fuzzing cases. Fig. 4 illustrates the structure of the database. The foreign keys in Fig. 4 are generated from the primary keys of the referenced tables; e.g., 'action_id' is generated from the primary key 'action_id' in the Action table. Each table is described in the following:
* **State.** Each state represents the state of the RRC status. For each message sent from either the UE or the gNB, the system updates the description of the sent command in the State table.
* **Action.** Each action item records a parsed message, whose channel and physical channel indicate where the command comes from.
* **Probability.** Each probability item records the probability
Fig. 3: Digital engineering view for 5G vulnerability and unintended emergent behavior detection.
of fuzzing cases for the corresponding states and actions. If a fuzzing case leads to an RRC connection failure, the probability increases. The completion rate records bit-level fuzzing progress and is empty if only command-level fuzzing is performed (a relational sketch of this layout follows this list).
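A minimal relational sketch of this layout is given below using SQLite. The table names, the 'action_id' foreign key, and the recorded fields follow the description above, while the remaining column names are assumptions for illustration.

```python
import sqlite3

conn = sqlite3.connect("fuzzing.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS Action (
    action_id        INTEGER PRIMARY KEY,
    parsed_message   TEXT,
    rrc_channel      TEXT,
    physical_channel TEXT
);
CREATE TABLE IF NOT EXISTS State (
    state_id    INTEGER PRIMARY KEY,
    action_id   INTEGER REFERENCES Action(action_id),
    description TEXT,   -- description of the sent command
    msg_time    REAL    -- message sending time
);
CREATE TABLE IF NOT EXISTS Probability (
    prob_id         INTEGER PRIMARY KEY,
    state_id        INTEGER REFERENCES State(state_id),
    action_id       INTEGER REFERENCES Action(action_id),
    probability     REAL,  -- raised when a fuzzing case breaks the connection
    completion_rate REAL   -- bit-level record; NULL for command-level only
);
""")
conn.commit()
```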
### _Command-level Fuzzing Strategy_
Command-level and bit-level fuzzing are two primary protocol fuzz-testing approaches. They are also common approaches for protocol attacks. Compared to bit-level attacks, which require time and frequency synchronization along with information in UE profiles, command-level attacks are usually low-cost and require less information. Hence, we primarily focus on command-level fuzzing to detect vulnerabilities in a black-box environment.
For fuzzing purposes, the LAL observes and collects the exchanged legitimate messages and saves them to the fuzzing message candidate pool. At the command level, commands are replaced by other commands in the same physical channel to test whether any communication error state occurs. More specifically, fuzz testing is implemented iteratively through each case in the pool. Within each loop, we first simulate the UE and gNB for connection initialization. Then we decode the observed message and obtain two primary RRC identifiers: 'interpreted message type' and 'interpreted RRC TransactionIdentifier'. If a message in this physical channel has never been observed before, we record the message and mark it with the corresponding channel. Moreover, if there is still any unapplied message replacement, we apply this replacement and remove it from the record. Due to the change of temporary identifiers, such as the RNTI, most of the replaced messages are illegal for the UE or gNB. In this way, our fuzz-testing framework replaces messages with not only regular but also abnormal messages, since the number of message permutations grows with the increasing number of cases. As no prior knowledge is required, LAL can be quickly adapted and transferred to newly released code stacks and protocols.
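The essence of this replacement loop can be sketched as follows, with messages abstracted as raw byte strings grouped by physical channel; this is an illustrative outline under simplifying assumptions, not the actual RRC decoding and simulation code.

```python
import itertools

pool = {}  # physical channel -> observed legitimate messages

def observe(channel: str, payload: bytes):
    """Record a newly observed message in the candidate pool for its channel."""
    pool.setdefault(channel, [])
    if payload not in pool[channel]:
        pool[channel].append(payload)

def replacement_cases(channel: str):
    """Enumerate command-level fuzzing cases: every ordered pair of distinct
    messages observed in the same physical channel."""
    return list(itertools.permutations(pool.get(channel, []), 2))

observe("DCCH", b"\x01\x02")
observe("DCCH", b"\x03\x04")
print(replacement_cases("DCCH"))  # two ordered replacement cases for this channel
```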
### _Result assessment_
#### III-C1 State Based Vulnerability Prediction Model
As commands are listened to and selectively added to the candidate pool during the experiments, the system needs to traverse the commands in the pool in priority order to perform command-level fuzzing. As an initial step toward command-priority exploration, we show the RRC connection state distribution of command fuzzing at various channels in Fig. 5, where DL and UP on the x-axis represent the downlink and uplink channels, respectively. Among a total of \(205\) collected fuzz-testing cases with RRC procedures, there are \(76\) successful connections and \(129\) failed connections. The majority of the failed RRC connections occur through uplink fuzzing. The failures in the downlink fuzzing channel are primarily caused by PCCH messages, which are used for paging the UEs whose cell location is not known to the network.
From the distribution, we can conclude that the downlink channel protocols are more robust against unintended messages, and the uplink channels are more vulnerable than the downlink. However, due to the broadcast and explicit-content nature of PCCH messages in the downlink channel, their vulnerability could affect a more extensive range of communications and potentially cause a Denial of Service (DoS) attack for all UEs within the cell of the BS.
For the \(129\) failed connections in Fig. 5, we identify high-risk states as those with high frequencies in failed connections. Our results show that no state appears only in failed connections and never in successful ones. We identify \(7\) high-risk states with frequencies higher than the average. However, RRC connection failures cannot be fully covered by those high-risk states alone as rule-based detection. Therefore, we introduce transaction-based detection using the sequenced states to enhance vulnerability identification.
#### III-C2 Vulnerability Identification via State Transaction
With the sequence of fuzzing tests being executed, the system automatically generates a state transaction probability map. The probability map predicts the connection risks, and further rerouting strategies can be developed to avoid certain states and transactions that may potentially lead to RRC connection failures. The RRC state changes from one to another are defined and recorded as a transaction. We can graphically
Fig. 4: Structure of the database
Fig. 5: RRC connection state fuzzing distribution.
represent the state and transaction during the RRC procedures as the vertex and edge to ease further graphics-based analysis for risk identification and prediction.
The occurrence of the state-transaction cases can be used for the rule-based prediction of failed connections. Fig. 6 shows the state-transaction frequency on successful and failed connections, from which we can observe \(7\) high-risk state-transaction cases that almost only occur in failed connections. This rule-based prediction using transaction frequency is more trustworthy than the prediction based on state frequency, because among all \(7\) high-risk state transactions, only one (from state 0 to state 2) ever occurs in a successful connection, and it occurs only once across the \(76\) successful connections. By further looking into the results of Fig. 6, we observe that \(70.54\%\) of the failed connections include at least one high-risk transaction. Hence, given that the recall equals only \(70.54\%\), a more accurate algorithm is necessary to identify and predict RRC connection failures. In the following, we present an LSTM-based vulnerability prediction method providing high-confidence predictions.
#### III-C3 LSTM Based Vulnerability Prediction
The results from Sec. III-C1 and III-C2 show that statistical and rule-based classification can only achieve a recall of up to \(70.54\%\), which is unreliable in practice. Therefore, we design an LSTM-based vulnerability prediction model for reliability enhancement and early prediction. With the early prediction of RRC connection failures, we can enable an RRC state rerouting strategy to avoid the failures.
We define the input of the LSTM as the sequenced states from the fuzzing occurrence, and the length of the sequenced states as the cut-off length. The cut-off length determines how long a state-transaction path must be to meet the expected accuracy. Two approaches are used to specify the cut-off length: _duration from beginning_ and _number of states from beginning_. To avoid overfitting, we use \(20\%\) of the dataset as testing data and \(0.001\) as the learning rate of the model. Moreover, for each input size, we average the accuracy, precision, and recall over \(100\) runs. Each run includes \(30\) epochs, and each epoch includes \(10\) batches. As shown in the performance evaluation of both approaches in Fig. 7, accuracy grows with the increasing number of steps or duration and increases sharply after the \(8\)-th step.
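A minimal PyTorch sketch of such a state-sequence classifier is given below. The learning rate (0.001) and the cut-off-length input follow the setup described above, and the 39 state ids follow the command types reported for LAL; the embedding size, hidden width, and random toy batch are our own choices for illustration.

```python
import torch
import torch.nn as nn

class StateLSTM(nn.Module):
    def __init__(self, n_states: int, embed_dim: int = 16, hidden: int = 32):
        super().__init__()
        self.embed = nn.Embedding(n_states, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # classes: successful vs. failed

    def forward(self, seq):               # seq: (batch, cut_off_length) of state ids
        out, _ = self.lstm(self.embed(seq))
        return self.head(out[:, -1, :])   # classify from the last hidden state

model = StateLSTM(n_states=39)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

batch = torch.randint(0, 39, (10, 10))    # 10 toy sequences of cut-off length 10
labels = torch.randint(0, 2, (10,))
loss = loss_fn(model(batch), labels)
loss.backward()
opt.step()
```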
To balance performance and reaction time, we generate Receiver Operating Characteristic (ROC) curves over steps and duration to find the strategy with the smallest cut-off length and almost \(90\%\) Area Under the ROC Curve (AUC). From Fig. 7(a), \(10\) steps is the optimal strategy, achieving a stable \(89\%\) AUC; from Fig. 7(b), \(0.08s\) is the optimal cut-off duration, achieving \(96\%\) AUC. Therefore, we take \(10\) steps and \(0.08s\)
Fig. 6: State transaction frequency on: (a) successful connection; (b) failed connection.
Fig. 7: Receiver Operating Characteristics (ROC) analysis of LSTM over: (a) steps; (b) duration.
as the input for a deeper analysis of the convergence performance of the LSTM. From the convergence behavior of the LSTM, we find that it can learn the optimal parameters in 2 or 3 epochs. This fast convergence proves that our system has the ability to learn the pattern of failed connections. The average cut-off duration at the \(10\)-th step is \(0.072\) seconds with the accuracy achieving \(89\%\), which is consistent with setting the cut-off length to \(0.08\) seconds through the duration cut-off approach. The accurate and timely prediction also provides sufficient time for proactive defense before RRC connection completion or failure, with an average of \(3.49\) seconds.
With the average performance of the LSTM meeting the accuracy expectation, we analyze the failed predictions, including False Positive and False Negative ones, for further improvement. The following patterns summarize the misclassified cases.
**False Positive:** When three messages interpreted as paging messages are sent by the gNB within the first ten states of a connection, the model may misclassify the connection as a failed one, because most failed connections also have three paging messages in their first ten states. This pattern can be addressed by a finer definition of paging messages in future work.
**False Negative:** Cases with multiple interpreted \(\mathrm{active\_set\_update}\) messages in downlink channels are classified as failed connections and lead to false alarms, although a high frequency of interpreted \(\mathrm{active\_set\_update}\) messages actually occurs more often in successful connections. Cross-layer or side-channel information could be applied to improve the false alarm rate in future work.
The proposed vulnerability and unintended behavior detection system could also be applied in real time, in compliance with the O-RAN architecture [38]. When deployed for real-time monitoring, timing in detection is critical to mitigate the vulnerabilities and provide assurance and high-quality communications. There are sufficient intervals, an average of \(3.49\) seconds, between the successful detection time and the RRC connection success/failure time, as shown in Fig. 8, which gives enough defense time against potential attacks. Moreover, the gaps between the fuzzing occurrence time and the successful detection time are even smaller.
## IV SyAL Domain Assisted Strategy
Synchronizing the randomly generated identifier across fuzzing cases makes more domain-information background available to the MITM attacker, including command types and critical identifier values. Leveraging this domain knowledge, we propose a probability-based command-level fuzzing system called Synchronize-and-Learn (SyAL) to help our proposed digital twin MITM attacker prioritize and locate the more vulnerable areas more efficiently. In the proof-of-concept of this study, we fix the Radio Network Temporary Identifier (RNTI) of the UE to keep commands synchronized in different fuzzing cases. The RNTI is the cyclic redundancy check (CRC) mask, which is generated in the synchronization procedure and required to encode and decode Downlink Control Information (DCI) messages. The synchronized commands provide domain-assisted background knowledge for our digital twin MITM attacker. Furthermore, our proposed probability-based command-level fuzzing system takes a Sync-and-Learn strategy to learn the vulnerability pattern efficiently and prioritize high-risk command-level fuzzing cases. The result of this study proves the significance of timing in the 5G Authentication and Key Agreement (AKA).
### _Not Illegal Command-level Fuzzing in the Not-illegal Not-valid Set_
Besides the illegal command-level fuzzing in LAL, in this section we continue with not-illegal command-level fuzzing, which uses correct identifiers and can be properly interpreted by the UE and gNB. By changing the occurrence timing of not-illegal commands, we can find a path that transfers from the 'green zone' (valid states) to the 'yellow zone' (not-illegal but not-valid states) in Fig. 2. This part of the experiment provides substantial proof of the feasibility of a listen-and-replace relay attack directed by part of the communication context. Due to the enormous number of command-level fuzzing permutations, we focus on the downlink channel, which is sent from the gNB and whose vulnerabilities can affect a wider range of UEs than the uplink. All messages in the downlink channel are duplicated with the same RNTI and belong to legal or not-illegal commands (the 'green zone' or 'yellow zone' in Fig. 2).
### _Probability-based Fuzzing Strategy_
With the domain knowledge of message types, we propose a probability-based command-level fuzzing system to learn the vulnerability pattern efficiently and prioritize high-risk command-level fuzzing cases. The efficiency of our proposed probability-based command-level fuzzing system outperforms that of traditional fuzzing systems, such as brute-force fuzzing.
Algorithm 1 describes the detailed process of the probability-based fuzzing system. First, we build a database to store all commands in the downlink channel, whose structure is shown in Fig. 4. Then we initialize a command-level fuzzing probability matrix \(D.p\) of size \(n\times n\), where \(n\) is the number of commands, to represent the probability of command fuzzing cases. The value of \(D.p_{i,j}\) is the probability that the fuzzing
Fig. 8: Comparison of detection time and completion time.
case that changes command \(i\) to command \(j\) is high-risk. After initialization, the system updates the command-level fuzzing probability matrix based on the fuzzing result after each fuzzing test. The matrix update follows an independence rule: the system only updates the row and column corresponding to the fuzzed commands. Moreover, in each fuzzing case, the system uses the proposed digital twin MITM attacker to generate fuzzing cases based on the values of the command-level fuzzing probability matrix \(D.p\).
```
Input: I1, I2
while not all fuzzing cases exhausted do
    start_simulator()
    command_history c <- []
    fuzzed <- false
    while not time_out() do
        if a = received_action(I2) then
            if a = rrc_Connection_Complete then
                break
            end if
            update D(a)
            if fuzzed then
                continue
            else
                a' <- random(D.p[a, :])    // weighted random choice
                send_as_O2(a')
                c <- c + (a, a')
                fuzzed <- true
            end if
        end if
    end while
    if connection_failed() then
        D.p[a, a'] <- D.p[a, a'] + D.p[a, a'] * alpha
    else
        D.p[a, a'] <- D.p[a, a'] * alpha * ratio
    end if
    // update database as O2
end while
```
**Algorithm 1** SyAL Fuzzing Testing
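A compact Python rendering of the core SyAL loop is given below. The weighted sampling and the \(\alpha\)/\(ratio\) update rule follow Algorithm 1, while the uniform initialization and the toy dimensions are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def pick_replacement(P: np.ndarray, a: int) -> int:
    """Weighted random choice of a replacement command a' given observed a."""
    w = P[a] / P[a].sum()
    return int(rng.choice(len(w), p=w))

def update(P: np.ndarray, a: int, a2: int, failed: bool,
           alpha: float = 0.5, ratio: float = 0.5):
    """Raise the probability of a case that broke the connection; attenuate
    it otherwise (the alpha/ratio rule of Algorithm 1)."""
    if failed:
        P[a, a2] += P[a, a2] * alpha
    else:
        P[a, a2] = P[a, a2] * alpha * ratio

n = 5                      # toy number of observed command types
P = np.full((n, n), 1.0)   # uniform initial fuzzing probabilities
a = 2                      # command observed on the downlink
a2 = pick_replacement(P, a)
update(P, a, a2, failed=True)
```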
### _Result Assessment_
As mentioned in Table II, there are 3080 possible fuzzing cases and only 43 vulnerabilities. In Fig. 9, we present the fuzz-testing tracks of two fuzzing strategies, random fuzzing and probability-based fuzzing, until all vulnerabilities are found. In Fig. 9(a), the random fuzzing strategy requires 2811 fuzzing cases before all vulnerabilities are found, whereas the probability-based fuzzing strategy takes only 1027 fuzzing cases to find all vulnerabilities in Fig. 9(b). Therefore, we can conclude that the probability-based fuzzing strategy locates the vulnerabilities much more efficiently than the random fuzzing strategy.
Further, we sweep the hyper-parameters over the permutation of the change percentage \(\alpha\) from 0.1 to 2 and the failure attenuation \(ratio\) from 0.9 to 0.1 to obtain the optimal parameter set for the probability-based fuzzing strategy. However, the result of this sweep, shown in Fig. 10, provides the intuition that modifying the probability ratio makes no difference in the gradients of the vulnerability-detection-ratio curve. Even though a larger probability-increase ratio may speed up detection in the first period, the final estimated number of steps to detect all vulnerabilities is almost indistinguishable.
We use several algorithms to fit the detection curves of each strategy and select the best representation. Assuming the number of fuzzed cases is \(i\), we find that the exponential regression \(2.072\times\mathrm{e}^{0.004i}\), the orange dashed line in
Fig. 9: Fuzz testing tracks on downlink channels of: (a) random fuzzing; (b) probability-based fuzzing.
Fig. 11, is the best fit to the beginning period of probability-based fuzzing, with an \(R^{2}\) value of 0.972. Toward the end of probability-based fuzzing, vulnerabilities that do not follow the learned pattern are the primary reason for the slowed growth, such as the case that changes command AV to command U, shown as a blue square in Fig. 9(b). Furthermore, for random fuzzing, the linear fit \(0.015i-0.617\), the blue dashed line in Fig. 11, is the best fitting algorithm, with an \(R^{2}\) value of 0.987.
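The functional forms of these fits can be reproduced with scipy; the sketch below uses synthetic stand-in data shaped like the reported fits, since the recorded detection counts themselves are not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_model(i, a, b):   # cumulative vulnerabilities, probability-based fuzzing
    return a * np.exp(b * i)

def lin_model(i, a, b):   # cumulative vulnerabilities, random fuzzing
    return a * i + b

rng = np.random.default_rng(0)
i = np.arange(1.0, 1001.0)
# Synthetic data shaped like the reported fits 2.072*e^{0.004 i} and 0.015 i - 0.617:
y_prob = 2.072 * np.exp(0.004 * i) + rng.normal(0, 0.5, i.size)
y_rand = 0.015 * i - 0.617 + rng.normal(0, 0.5, i.size)

p_exp, _ = curve_fit(exp_model, i, y_prob, p0=(1.0, 0.01))
p_lin, _ = curve_fit(lin_model, i, y_rand)
print(p_exp, p_lin)  # recovered parameters close to the generating values
```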
To accelerate vulnerability detection in the beginning period, we design a probability-based fuzzing strategy with extra prior knowledge. In this pre-knowledge probability-based fuzzing strategy, we assign two arbitrary vulnerabilities as the extra prior knowledge to skip the pattern-collection procedure in the beginning period. We run each strategy 20 times and plot the average together with a subset of sample points in Fig. 11. The random fuzzing strategy has the worst performance among the three strategies, and the other two strategies have similar performance except in the beginning period. In the beginning period, the prior knowledge provides local guidance for the probability fuzzer to locate the vulnerabilities efficiently, roughly twice as fast as the plain probability-based fuzzing strategy, especially in the first 500 fuzzing cases. Thus, extra prior knowledge can be leveraged to speed up the short-term efficiency of vulnerability detection.
## V SoAL Bit-level Strategy
The Source-and-Learn (SoAL) bit-level strategy provides a digital twin of active attacks such as overshadowing [39], which can change part of the identifier values of communicated commands. In this strategy, we take two approaches, before-encryption and after-encryption, to represent two different scenarios: with domain knowledge and without domain knowledge. Compared to the traditional physical overshadowing test, our proposed bit-level strategy achieves efficient vulnerability detection, which helps us fuzz more vulnerable cases and focus on the protocol.
### _Risk Prioritized Fuzzing strategy_
Bit-level fuzzing randomly changes the values of different identifiers in a specific command to generate different fuzzing cases. Following the guidance of command-level fuzzing, we can perform more efficient bit-level fuzzing to locate vulnerabilities. For instance, based on the result of command-level fuzzing, we can first apply bit-level fuzzing to the identified high-risk commands, e.g., command C and command H in Fig. 9.
During the bit-level fuzzing procedure, we set up message detection and multiple lists of identifier values that cover each identifier's value range. For any specific message, the system takes a random value that has never been used to replace the identifier. There are two replacement strategies in bit-level fuzzing: before-encryption and after-encryption. The before-encryption approach changes the identifier values before the protocol encryption, while the after-encryption approach is implemented in the reverse way. In this manner, we can try all possible fuzzing cases for specific identifiers in particular commands.
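The never-reuse bookkeeping for identifier replacement can be sketched as follows; the 4-bit 'Establishment Cause' width matches the example in the next subsection, while the class structure itself is our own illustration.

```python
import random

class IdentifierFuzzer:
    """Draw never-before-used values for a fixed-width identifier field."""
    def __init__(self, name: str, bit_width: int):
        self.name = name
        self.unused = list(range(2 ** bit_width))  # full value range
        random.shuffle(self.unused)

    def next_value(self):
        return self.unused.pop() if self.unused else None  # None: range exhausted

# E.g., the 'Establishment Cause' field of 'RRC Setup Request':
cause = IdentifierFuzzer("Establishment Cause", bit_width=4)
while (v := cause.next_value()) is not None:
    patched_bits = f"{v:04b}"  # value spliced into the message before encryption
    print(cause.name, "->", patched_bits)
```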
### _Result Assessment of Bit-level Fuzzing_
Based on the results of the opaque (LAL) command-level fuzzing and the domain probability-based (SyAL) command-level fuzzing, three high-risk commands, 'RRC Setup Request,' 'RRC Reconfiguration,' and 'RRC Connection,' are selected as our bit-level fuzzing targets, as shown in Table III.
In the 'RRC Setup Request' command, we fuzz with both the before-encryption and after-encryption approaches. For the before-encryption approach, there are three identifiers: 'ue-Identity', 'Establishment Cause', and 'spare'. Since the 'spare' identifier never contains critical information and occupies only 1 bit, we perform bit fuzzing only on 'ue-Identity' and 'Establishment Cause'. As shown in Table III, no value of 'ue-Identity' affects the connection. However, a different value of 'Establishment Cause' can transfer the connection into a different service type, as also mentioned in [40]. For example, if we change 'Establishment Cause' from bits 0110 to bits 0000, the UE can only request an emergency call. Moreover, with the after-encryption approach, all fuzzing cases lead to disconnection. This shows that the integrity check of the 5G protocol can identify whether a message has been modified.
Beyond the 'RRC Setup Request' command, we also fuzz two other downlink commands, 'RRC Reconfiguration' (command C in Fig. 9) and 'RRC Connection' (command H
Fig. 11: Comparison of Benchmark random-based fuzzing and probability-based fuzzing.
Fig. 10: Sensitivity analysis of different probability ratio.
in Fig. 9). In the 'RRC Reconfiguration' command, we take 'sr-ConfigIndex' as our target because this identifier is responsible for radio scheduling and critical for connection establishment. However, no matter how we modify 'sr-ConfigIndex', the connection between the UE and gNB can still be established. On the contrary, when we fuzz the identifier 'srb1_sm_id' in the 'RRC Connection' command with different values, the UE rejects the connection establishment. Therefore, we conclude that the UE may have alternative methods to negotiate 'sr-ConfigIndex' but cannot accept a new 'srb_id'.
In Fig. 12, our system shows the ability to detect vulnerabilities efficiently. Among all 33 possible before-encryption cases, our system detects 10 vulnerabilities. As for the after-encryption approach, we find that it is unlikely to produce a successful case because of the integrity check. Hence, we conclude that before-encryption fuzzing is the only appropriate way to detect vulnerabilities at the bit level.
## VI Conclusion
We designed a novel fuzzing approach that systematically and cumulatively detects vulnerabilities in 5G and NextG protocols. Our approach detects, characterizes, predicts, and reveals vulnerabilities under varying levels of prior-knowledge assumptions for attackers, ranging from no prior knowledge to full source-code access. A digital twin framework was proposed and three fuzzing strategies were developed: LAL (black-box fuzzing), SyAL (gray-box fuzzing), and SoAL (white-box fuzzing). In black-box scenarios where no prior knowledge of the platform is available, the LAL strategy randomly rearranges the sequence of commands. In gray-box scenarios where partial access to information is allowed, the SyAL strategy randomly replays the recorded commands with access to critical synchronization information, such as the RNTI, for potential user-information collection. When the system is a white-box that supports full access to the source code, the SoAL method performs bit-level fuzzing guided by the command-level fuzzing of LAL and SyAL for risk-analysis transparency and reasoning.
In particular, the LAL strategy detected 129 vulnerabilities with 39 command types using only transmitted messages, and the embedded LSTM model efficiently predicted over 89% of connection failures in 0.072 seconds on average. We then proposed a probability-based vulnerability detection method in the SyAL strategy, which achieves a linear growth of time cost with system size and allows for the detection of all vulnerabilities with partial user privacy information. This outperforms traditional fuzzing models with exponential growth of time consumption. In addition, based on the results of the SyAL strategy, the proposed SoAL method not only validates the integrity mechanism of 5G protocols but also detects three types of man-in-the-middle (MITM) vulnerabilities that are critical to protocol security. Extensive simulation results demonstrated that the designed fuzzing system is an efficient, automated approach that supports real-time vulnerability detection and proactive defense for existing 5G platforms and future released protocols.
## Acknowledgment
This effort was sponsored by the Defense Advanced Research Project Agency (DARPA) under grant no. D22AP00144. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA or the U.S. Government.
|
2305.10676 | Performance improvement of a fractional quantum Stirling heat engine | To investigate the impact of fractional parameter on the thermodynamic
behaviors of quantum systems, we incorporate fractional quantum mechanics into
the cycle of a quantum Stirling heat engine and examine the influence of
fractional parameter on the regeneration and efficiency. We propose a novel
approach to control the thermodynamic cycle that leverages the fractional
parameter structure and evaluates its effectiveness. Our findings reveal that
by tuning the fractional parameter, the region of the cycle with the perfect
regeneration and the Carnot efficiency can be expanded. | Shihao Xia, Youlin Wang, Minglong Lv, Jincan Chen, Shanhe Su | 2023-05-18T03:28:41Z | http://arxiv.org/abs/2305.10676v1 | # Performance improvement of a fractional quantum Stirling heat engine
###### Abstract
To investigate the impact of fractional parameter on the thermodynamic behaviors of quantum systems, we incorporate fractional quantum mechanics into the cycle of a quantum Stirling heat engine and examine the influence of fractional parameter on the regeneration and efficiency. We propose a novel approach to control the thermodynamic cycle that leverages the fractional parameter structure and evaluates its effectiveness. Our findings reveal that by tuning the fractional parameter, the region of the cycle with the perfect regeneration and the Carnot efficiency can be expanded.
## I Introduction
The study of fractional calculus [1; 2; 3] has received growing attention in recent years due to its unique mathematical structure and its close association with renormalization and inverse power laws. It provides a powerful mathematical tool for solving problems related to complex systems [4; 5]. In addition, Levy flight, a natural generalization of Brownian motion, has become a research hotspot in the field of anomalous diffusion, with practical implications for advances in physics, life science, information science, and other disciplines [6; 7; 8; 9; 10; 11; 12]. Levy flight arises from the strong interaction between particles and their environment, and it is a Markov stochastic process characterized by long-range jumps. Although the Levy process is mainly utilized for numerical simulations, experimental work [13] has shown that it is feasible to adjust the system parameters with precision, enabling direct experimental studies of Levy flight. Hence, discussions on the diffusion behavior and the dynamics of atomic ensembles with damping, and in even more complex transport environments, are underway.
Applications of fractional quantum mechanics have been developed by defining the fractional path integral over Levy paths and using the Riesz fractional derivative, extending the concept of fractality in quantum physics [14; 15; 16; 17; 18]. This area has witnessed significant advances in recent years [19; 20; 21; 22; 23; 24; 25; 26; 27], and has been demonstrated experimentally [28]. Moreover, fractional calculus is increasingly being employed to describe thermodynamic phenomena [29; 30; 31; 32; 33; 34; 35; 36; 37; 38]. Attempts have also been made to combine fractional quantum mechanics with thermodynamics, such as Black hole thermodynamics [39], thermal properties of fractional quantum Dirac oscillators [38], and etc.
The quantum heat engine [40; 41; 42; 43; 44; 45; 46] is an excellent platform for studying the thermodynamic properties of quantum systems. In this context, we investigate the effect of the fractional parameter on the performance of a quantum Stirling engine (QSE). We propose a new thermodynamic process based on the fractional parameter and analyze the behavior of the thermodynamic cycle that incorporates this process. This demonstrates the potential applications of fractional quantum mechanics in thermodynamics.
This paper is organized as follows: In Section II, we provide a brief overview of fractional quantum mechanics and show the solution in the infinite potential well (IPW). Several fundamental concepts of quantum thermodynamics are introduced as well. In Section III, we introduce the structure of the QSE and propose a new way to regulate the thermodynamic cycle based on fractional parameters. Expressions of thermodynamic quantities in the cycle are provided. In Section IV, the effects of the fractional parameter on the performance of the QSE are discussed. Conclusions are given in Section V.
## II Fractional quantum mechanics and key quantities in quantum thermodynamic processes
### Fractional quantum mechanics
In fractional quantum mechanics, the fractional Hamiltonian operator is defined as \(H=D_{\alpha}|p|^{\alpha}+V(x)\), where \(p\) is the momentum, the fractional parameter \(1<\alpha\leq 2\), \(V(x)\) is the potential energy as a functional of a particle path \(x\), and \(D_{\alpha}\) is the scale coefficient [14; 38]. If the system at an initial time \(t_{a}\) starts from the point \(x_{a}\) and goes to the final point \(x_{b}\) at time \(t_{b}\), one could define the quantum-mechanical amplitude, often called a kernel, \(K\left(x_{b}t_{b}\mid x_{a}t_{a}\right)\). The kernel function is the sum of the contributions of all trajectories through the first and last points [14; 15; 16; 17; 18]. The kernel based on the Levy path in phase space is defined as
\[K\left(x_{b}t_{b}\mid x_{a}t_{a}\right)=\lim_{N\rightarrow\infty }\int_{-\infty}^{\infty}dx_{1}\ldots dx_{N-1}\frac{1}{(2\pi\hbar)^{N}} \tag{1}\] \[\times\int_{-\infty}^{\infty}dp_{1}\ldots dp_{N}\exp\left\{\frac{ i}{\hbar}\sum_{j=1}^{N}p_{j}\left(x_{j}-x_{j-1}\right)\right\}\] \[\times\exp\left\{-\frac{i}{\hbar}D_{\alpha}\varepsilon\sum_{j=1}^ {N}|p_{j}|^{\alpha}-\frac{i}{\hbar}\varepsilon\sum_{j=1}^{N}V\left(x_{j}\right) \right\},\]
where \(\hbar\) is Planck's constant, \(\varepsilon=\left(t_{b}-t_{a}\right)/N\), \(x_{j}=x\left(t_{a}+j\varepsilon\right)\), \(p_{j}=p\left(t_{a}+j\varepsilon\right)\), \(x\left(t_{a}+j\varepsilon\right)_{j=0}=x_{a}\), and \(x\left(t_{a}+j\varepsilon\right)_{j=N}=x_{b}\).
The kernel describes the evolution of a system, leading to the fractional wave function at time \(t_{b}\)
\[\psi\left(x_{b},t_{b}\right)=\int_{-\infty}^{\infty}dx_{a}K\left(x_{b}t_{b} \mid x_{a}t_{a}\right)\psi\left(x_{a},t_{a}\right), \tag{2}\]
with \(\psi\left(x_{a},t_{a}\right)\) being the fractional wave function of the initial state. The fractional wave function \(\psi\left(x,t\right)\) satisfies the fractional Schrodinger equation (Appendix A)
\[i\hbar\frac{\partial\psi(x,t)}{\partial t}=-D_{\alpha}(\hbar\nabla)^{\alpha} \psi(x,t)+V(x)\psi(x,t), \tag{3}\]
where the quantum Riesz fractional derivative \((\hbar\nabla)^{\alpha}\) is defined as
\[(\hbar\nabla)^{\alpha}\psi(x,t)=-\frac{1}{2\pi\hbar}\int_{-\infty}^{\infty}dp\exp\left(i\frac{px}{\hbar}\right)|p|^{\alpha}\varphi(p,t) \tag{4}\]

with \(\varphi(p,t)=\int_{-\infty}^{\infty}dx\exp\left(-i\frac{px}{\hbar}\right)\psi(x,t)\) being the Fourier transform of \(\psi(x,t)\).
In the following discussion, the scale coefficient \(D_{\alpha}\) is set to be equal to \((1/2m)^{\frac{\alpha}{2}}\) with \(m\) being the mass of the quantum mechanical particle [38]. For \(\alpha=2\), it becomes the standard quantum mechanics that we know. Meanwhile, we consider a particle in a one-dimensional IPW, where the potential field
\[V(x)=\begin{cases}0&-L/2\leqslant x\leqslant L/2,\\ \infty&\text{otherwise}.\end{cases} \tag{5}\]
The solution of Eq. (3) is related to the time independent wave function \(\phi(x)\) by
\[\psi(x,t)=\exp\left\{-i\frac{Et}{\hbar}\right\}\phi(x), \tag{6}\]
where \(E\) represents the energy of the particle. Putting Eq. (6) into Eq. (3) leads to the following time-independent fractional Schrodinger equation
\[-D_{\alpha}(\hbar\nabla)^{\alpha}\phi(x)+V(x)\phi(x)=E\phi(x). \tag{7}\]
By using Eqs. (5) and (7) and considering the boundary conditions, the eigenvalue \(E_{n}\left(L,\alpha\right)\) of the fractional Hamiltonian operator \(H\) and the corresponding wave function \(\phi(x)\) read [15]
\[\begin{split} E_{n}&\left(L,\alpha\right)=D_{\alpha} \left(\frac{2\pi\hbar}{L}\right)^{\alpha}n^{\alpha}\\ &=\left(\frac{1}{2m}\right)^{\frac{\alpha}{2}}\left(\frac{2\pi \hbar}{L}\right)^{\alpha}n^{\alpha},\end{split} \tag{8}\]
\[\phi(x)=\begin{cases}\sqrt{\frac{2}{L}}\cos\left[\left(n-\frac{1}{2}\right) \frac{2\pi x}{L}\right]&\text{for $n$ even},\\ \sqrt{\frac{2}{L}}\sin\frac{2\pi nx}{L}&\text{for $n$ odd},\end{cases} \tag{9}\]
where \(L\) represents the width of the potential well, and \(n\) is a positive integer (\(n=1,2,3,4,...\)).
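For numerical work with these levels, Eq. (8) can be evaluated directly. The following short Python sketch adopts units where \(\hbar=m=1\) (our own convenience choice) and returns the first few eigenvalues for a given width and fractional parameter:

```python
import numpy as np

def energy(n, L, alpha, hbar=1.0, m=1.0):
    """Fractional IPW eigenvalues E_n(L, alpha) of Eq. (8)."""
    D_alpha = (1.0 / (2.0 * m)) ** (alpha / 2.0)
    return D_alpha * (2.0 * np.pi * hbar / L) ** alpha * n ** alpha

n = np.arange(1, 6)
print(energy(n, L=1.0, alpha=2.0))  # the standard quantum-mechanics limit
print(energy(n, L=1.0, alpha=1.5))  # a fractional case with 1 < alpha <= 2
```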
### Key quantities in quantum thermodynamic processes
The internal energy \(U\) of the particle is expressed as the ensemble average of the fractional Hamiltonian operator, i.e.,
\[U=\left\langle H\right\rangle=\sum_{n}P_{n}E_{n}, \tag{10}\]
where \(P_{n}\) denotes the occupation probability of the \(n\)th eigenstate with energy \(E_{n}\). During an infinitesimal process, the time differential of the internal energy
\[dU=\sum_{n}\left(E_{n}dP_{n}+P_{n}dE_{n}\right). \tag{11}\]
According to the first law of thermodynamics, \(dU\) is associated with the heat \(dQ\) absorbed from the environment and the work \(dW\) performed by the external agent, i.e.,
\[dU=dQ+dW. \tag{12}\]
For the isothermal and isochoric processes, the heat exchange and the work done during an infinitesimal thermodynamic process are, respectively, identified as [42; 43; 44; 47]
\[dQ=\sum_{n}E_{n}dP_{n}, \tag{13}\]
and
\[dW=\sum_{n}P_{n}dE_{n}. \tag{14}\]
Since the isothermal process, during which the temperature \(T\) of the particle remains constant, is reversible, Eq. (13) is equivalent to
\[dQ=TdS, \tag{15}\]
where
\[S=-k_{B}\sum_{n}P_{n}\ln P_{n} \tag{16}\]
indicates the entropy of the particle, \(k_{B}\) is Boltzmann's constant, and
\[P_{n}=\exp\left(-\beta E_{n}\right)/\mathrm{Tr}\left[\exp(-H/\left(k_{B}T\right))\right] \tag{17}\]
describes the occupation probability of a Gibbs state at energy \(E_{n}\), with \(\beta=1/(k_{B}T)\). In the next section, the theory of fractional quantum mechanics and the concepts of heat and work in quantum thermodynamic processes will be applied to build quantum engines.
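As a numerical companion to Eqs. (10), (16) and (17), the Gibbs-state quantities over a truncated spectrum can be computed as below. This is our own illustrative sketch (the helper name `gibbs_quantities` is ours), with \(k_{B}=1\) as set in the paper:

```python
import numpy as np

def gibbs_quantities(E, T, kB=1.0):
    """Occupation probabilities (Eq. 17), internal energy (Eq. 10)
    and entropy (Eq. 16) for a truncated spectrum E at temperature T."""
    w = np.exp(-(E - E.min()) / (kB * T))     # shift by E.min() for numerical stability
    P = w / w.sum()                           # Gibbs occupation probabilities P_n
    U = np.sum(P * E)                         # internal energy U = <H>
    S = -kB * np.sum(P * np.log(P + 1e-300))  # entropy of Eq. (16)
    return P, U, S
```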
## III Quantum Stirling engine based on fractional quantum mechanics
Generally, the Stirling heat engine consists of two isothermal processes and two isochoric processes[45; 46; 48; 49]. We focus on revealing the necessary conditions for the perfect regeneration and the reversible operation based on fractional quantum mechanics. For this reason, the fractional isothermal process, where the fractional parameter and the well width are changed slowly, is proposed. This process can be used to construct the fractional QSE, which consists of two fractional isothermal processes (\(A\to B\) and \(C\to D\)) and two quantum isochoric processes (\(B\to C\) and \(D\to A\)), as depicted in Fig. 1. The fractional parameter provides us with a new way to regulate the thermodynamic cycle.
At stage I (A-B), the particle confined in the IPW interacts with the hot bath at temperature \(T_{h}\). The fractional parameter slowly changes from \(\alpha_{2}\) to \(\alpha_{1}\) and the IPW varies from \(L_{A}\) to \(L_{B}\). The process is infinitely slow, allowing the particle to continually be in thermal equilibrium with the hot bath. The probability of each eigenstate, which has the form of Eq. (17), changes from \(P_{n}^{A}\) to \(P_{n}^{B}\). With the help of Eq. (15), the heat absorbed from the hot bath is written as
\[Q_{AB}=T_{h}[S(B)-S(A)], \tag{18}\]
where \(S(i)\) is the entropy of the particle at state \(i\) calculated by Eq. (16).
At stage II (B-C), the particle with the initial probability \(P_{n}^{B}\) of each eigenstate is placed in contact with the regenerator and undergoes an isochoric process until reaching the temperature \(T_{c}\). The probability of each eigenstate changes from \(P_{n}^{B}\) to \(P_{n}^{C}\). The eigenvalue \(E_{n}\) of the fractional Hamiltonian operator \(H\) is kept fixed as the well width and fractional parameter maintain constant values, i.e., \(L_{B}\) and \(\alpha_{1}\), respectively. The temperature of the particle decreases from \(T_{h}\) to \(T_{c}\). There is heat exchange between the particle and the regenerator, and no work is performed in this isochoric process. According to Eq. (13), the amount of heat absorbed in this process is equal to the change of the internal energy of the particle, i.e.,
\[Q_{BC}=U\left(C\right)-U\left(B\right)=\sum_{n}E_{n}\left(L_{B},\alpha_{1} \right)\left(P_{n}^{C}-P_{n}^{B}\right), \tag{19}\]
where \(U(i)\) is the internal energy of the particle at state \(i\) calculated by Eq. (10). As \(Q_{BC}<0\), heat is released to the regenerator without any work being done.
At stage III (C-D), the particle is brought into contact with the cold bath at temperature \(T_{c}\). It is an isothermal process, which is a reversed process of stage I. The state of the particle is always in thermal equilibrium with the cold bath, while the fractional parameter slowly changes from \(\alpha_{1}\) to \(\alpha_{2}\) and the IPW varies from \(L_{C}\) to \(L_{D}\). Similar to Eq. (18), the heat absorbed from the cold bath is
\[Q_{CD}=T_{c}[S(D)-S(C)]. \tag{20}\]
At stage IV (D-A), the particle is removed from the cold bath and goes through another isochoric process in contact with the regenerator until reaching the temperature \(T_{h}\), where the well width and fractional parameter are kept invariant. The cycle is completed when the temperature of the particle rises to \(T_{h}\). The heat absorbed from the regenerator at this stage is computed by
\[Q_{DA}=U\left(A\right)-U\left(D\right)=\sum_{n}E_{n}\left(L_{A},\alpha_{2} \right)\left(P_{n}^{A}-P_{n}^{D}\right). \tag{21}\]
Note that \(L_{D}=L_{A}\) is required for completing one cycle.
As the energy contained in the particle returns to its initial value after a complete cycle, the net work done by the heat engine is
\[W=Q_{AB}+Q_{BC}+Q_{CD}+Q_{DA}. \tag{22}\]
The Stirling heat engine is known as a closed-cycle regenerative heat engine. The net heat exchange between the particle and the regenerator during the two isochoric processes is
\[Q_{R}=Q_{BC}+Q_{DA}. \tag{23}\]
Figure 1: Temperature-entropy (T-S) diagram for a quantum Stirling engine (QSE).

Three possible cases exist: (a) \(Q_{R}=0\), (b) \(Q_{R}<0\), and (c) \(Q_{R}>0\). The case \(Q_{R}=0\) means that the regenerator is a perfect regenerative heat exchanger. The mechanism of perfect regeneration allows the efficiency of the engine to attain the Carnot value. When \(Q_{R}<0\), the heat \(\left|Q_{BC}\right|\) flowing from the particle to the regenerator in one regenerative process is larger than its counterpart \(Q_{DA}\) flowing from the regenerator to the working substance in the other regenerative process. The redundant heat in the regenerator per cycle must be released to the cold bath in a timely manner. When \(Q_{R}>0\), the amount \(\left|Q_{BC}\right|\) is smaller than \(Q_{DA}\). The deficit heat in the regenerator must be compensated from the hot bath; otherwise the regenerator cannot operate normally. Due to the non-perfect regenerative heat, the net heat absorbed from the hot bath per cycle may differ from \(Q_{AB}\) and is given by
\[Q_{h}=Q_{AB}+H\left(Q_{R}\right)Q_{R}, \tag{24}\]
where \(H(x)\) is the Heaviside step function. The efficiency is an important parameter for evaluating the performance of heat engines and is often considered in their optimal design and theoretical analysis.
By using Eqs. (22) and (24), the efficiency of the QSE is
\[\eta=\frac{W}{Q_{h}}=\frac{W}{Q_{AB}+H\left(Q_{R}\right)Q_{R}}. \tag{25}\]
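Combining the two sketches above, the whole cycle of Eqs. (18)-(25) can be evaluated numerically. The sketch below is again our own illustration (the function name `stirling_cycle` and the state labelling are ours); for the parameters used in the paper, the Carnot value is \(\eta_{C}=1-T_{c}/T_{h}=1-3/4=0.25\):

```python
# Reuses energy_levels(...) and gibbs_quantities(...) from the sketches above.

def stirling_cycle(LA, LB, a1, a2, Th=4.0, Tc=3.0, n_max=400):
    """Net work (Eq. 22), regenerative heat Q_R (Eq. 23) and efficiency (Eq. 25)
    for the cycle A=(LA, a2, Th) -> B=(LB, a1, Th) -> C=(LB, a1, Tc) -> D=(LA, a2, Tc)."""
    def state(L, alpha, T):
        E = energy_levels(L, alpha, n_max)  # spectrum of Eq. (8)
        _, U, S = gibbs_quantities(E, T)    # Eqs. (10), (16), (17)
        return U, S
    UA, SA = state(LA, a2, Th)
    UB, SB = state(LB, a1, Th)
    UC, SC = state(LB, a1, Tc)
    UD, SD = state(LA, a2, Tc)
    QAB = Th * (SB - SA)                # Eq. (18), isothermal stroke at Th
    QBC = UC - UB                       # Eq. (19), isochoric stroke B -> C
    QCD = Tc * (SD - SC)                # Eq. (20), isothermal stroke at Tc
    QDA = UA - UD                       # Eq. (21), isochoric stroke D -> A
    W = QAB + QBC + QCD + QDA           # Eq. (22)
    QR = QBC + QDA                      # Eq. (23)
    Qh = QAB + (QR if QR > 0 else 0.0)  # Eq. (24), Heaviside step
    return W, QR, W / Qh                # Eq. (25)

# Near an optimized pair of Table 1 (LA=1.0, LB=1.3, a1=1.439, a2=1.520)
# the regenerative loss Q_R should approach zero and eta should approach 0.25.
print(stirling_cycle(1.0, 1.3, 1.439, 1.520))
```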
## IV Results and discussion
By using the model presented above, the performance of the QSE under different ways of regulation will be analyzed. Firstly, the QSE can be regulated by adjusting the widths of the IPW for a given fractional parameter value. Secondly, the fractional parameter can be adjusted to identify the condition for perfect regeneration in the QSE when the width of the IPW is fixed. Finally, the performance of the QSE can be improved by simultaneously adjusting both the widths of the IPW and the fractional parameters.
### The effects of well widths
Fig. 2(a) shows the contour plot of the net heat exchange \(Q_{R}\) between the particle and the regenerator of the QSE varying with the widths \(L_{A}\) and \(L_{B}\) of the IPW, where the parameters \(\alpha_{1}\) and \(\alpha_{2}\) are set to be equal to 2. The optimizations of \(L_{A}\) and \(L_{B}\) yield the perfect regeneration with \(Q_{R}=0\) [black line in Fig. 2(a)]. The contour plot of the efficiency \(\eta\) of the QSE as a function of \(L_{A}\) and \(L_{B}\) is presented in Fig. 2(b), and it can be observed that the region of Carnot efficiency \(\eta_{C}=1-T_{c}/T_{h}\) corresponds to that of perfect regeneration.
Fig. 2(c) shows the performance of a fractional QSE. In this case, the fractional parameters are set as \(\alpha=\alpha_{1}=\alpha_{2}\) and the well widths take given values. The efficiency \(\eta\) of the engine is plotted as a function of the fractional parameter \(\alpha\) for the well widths \(L_{B}=L_{C}=2\) (dotted and dash-dotted lines) and \(L_{B}=L_{C}=3\) (solid and dashed lines), where \(L_{A}=L_{D}=0.5\) and \(1\), respectively. The plot indicates that when \(L_{A}=L_{D}\) is larger than about \(1\), the efficiency \(\eta\) increases monotonically with \(\alpha\) and reaches its maximum at \(\alpha=2\), which corresponds to the standard quantum mechanical QSE. However, when \(L_{A}=L_{D}\) is small, the efficiency is not a monotonic function of \(\alpha\), and an optimal value of \(\alpha\) can make the efficiency attain the Carnot value. These results mean that the performance of a QSE can be improved by regulating the well widths and/or the fractional parameters.

Figure 2: The contour plots of (a) the net heat exchange \(Q_{R}\) between the particle and the regenerator and (b) the efficiency \(\eta\) varying with the widths \(L_{A}\) and \(L_{B}\) of the IPW, where \(\alpha_{1}=\alpha_{2}=2\). The black line represents the cycle with perfect regeneration, i.e., \(Q_{R}=0\). (c) The efficiency \(\eta\) of the Stirling cycle as a function of the fractional parameter \(\alpha\) for \(L_{B}=L_{C}=2\) (dotted line and dash-dotted line) and 3 (solid line and dashed line), where \(\alpha=\alpha_{1}=\alpha_{2}\), and \(L_{A}=L_{D}=0.5\) and 1, respectively. The parameters are \(T_{h}=4\), \(T_{c}=3\), and \(m=1\). Note that Planck’s constant \(\hbar\) and Boltzmann’s constant \(k_{B}\) are set to unity throughout the paper, i.e., \(\hbar=k_{B}=1\).
### The effects of fractional parameters
In this section, we examine the impact of regulating fractional parameters on the performance of the QSE. The width of the IPW is kept constant throughout the cycle, and the fractional parameter is slowly adjusted from \(\alpha_{2}\) (\(\alpha_{1}\)) to \(\alpha_{1}\) (\(\alpha_{2}\)) during the fractional isothermal process from A to B (C to D), which creates a QSE regulated solely by fractional parameters. To ensure that the cycle proceeds forward, we set \(\alpha_{1}<\alpha_{2}\).
By setting \(L_{A}=L_{B}=L_{C}=L_{D}=1\) and combining Eqs. (18)-(25), the contour plot of the net heat exchange \(Q_{R}\) between the particle and the regenerator varying with \(\alpha_{1}\) and \(\alpha_{2}\) is obtained, as shown in Fig. 3(a). The plot indicates that \(Q_{R}\) is not a monotonic function of \(\alpha_{1}\) and \(\alpha_{2}\), and perfect regeneration can be achieved by optimizing these parameters [black line in Fig. 3(a)]. The contour plot of the efficiency \(\eta\) varying with \(\alpha_{1}\) and \(\alpha_{2}\) is presented as well [see Fig. 3(b)]. The plot shows that \(\eta\) can reach the Carnot efficiency by optimizing \(\alpha_{1}\) and \(\alpha_{2}\). This is because suitable fractional parameters \(\alpha_{1}\) and \(\alpha_{2}\) lead to the perfect regeneration \(Q_{R}=0\).
### The effects of well widths and fractional parameters
Fig. 2 demonstrates that the QSE, which is controlled by the well widths, does not achieve optimal performance in most regions but can be improved by introducing variational fractional parameters. To further investigate this problem, we modify the isothermal process by adjusting both the widths of the IPW and the fractional parameters simultaneously. As an illustration, we consider the QSE with \(L_{A}=1\) and \(L_{B}=1.5\), and show how the engine's efficiency is enhanced by the fractional parameters.
By combining Eqs. (18)-(25), the contour plot of the net heat exchange between the particle and the regenerator \(Q_{R}\) of the QSE varying with \(\alpha_{1}\) and \(\alpha_{2}\) is provided [see Fig. 4(a)]. It can be observed from the figure that \(Q_{R}\) is not a monotonic function of \(\alpha_{1}\) and \(\alpha_{2}\). By optimizing \(\alpha_{1}\) and \(\alpha_{2}\), the cycle can achieve perfect regeneration with \(Q_{R}=0\). At the same time, the contour plot of the efficiency \(\eta\) varying with \(\alpha_{1}\) and \(\alpha_{2}\) is shown in Fig. 4(b). It can be observed from the figure that \(\eta\) is also not a monotonic function of \(\alpha_{1}\) and \(\alpha_{2}\). By optimizing \(\alpha_{1}\) and \(\alpha_{2}\), \(\eta\) can reach the Carnot efficiency. This indicates that the QSE solely regulated by the widths of IPW may lead to a non-ideal regenerative cycle, but the absolute value of the regenerative loss can be reduced and the performance of the QSE can be improved by adjusting the fractional parameters.
Figure 3: The contour plots of (a) the net heat exchange \(Q_{R}\) between the particle and the regenerator and (b) the efficiency \(\eta\) varying with the fractional parameters \(\alpha_{1}\) and \(\alpha_{2}\), where \(T_{h}=4\), \(T_{c}=3\), \(m=1\), and \(L_{A}=L_{B}=L_{C}=L_{D}=1\). The black line represents the cycle with perfect regeneration, i.e., \(Q_{R}=0\).

Furthermore, we demonstrate that by adjusting the fractional parameters, the QSE with different well widths can achieve perfect regeneration [see Table 1]. For given values of \(L_{A}\) and \(L_{B}\), the third column of Table 1 shows the regenerative loss \(Q_{R}\) of the standard QSE (\(\alpha_{1}=\alpha_{2}=2\)), while the last two columns show the optimal values of \(\alpha_{1}\) and \(\alpha_{2}\) for the cycle with perfect regeneration. In Fig. 5, we further present the fractional parameter \(\alpha_{1}\) as a function of \(\alpha_{2}\) under the condition of perfect regeneration for \(L_{A}=1.0,L_{B}=1.4\) (square points), \(L_{A}=1.2,L_{B}=1.6\) (circular points), and \(L_{A}=1.4,L_{B}=1.8\) (triangular points). Fig. 5 shows clearly that for different well widths, the performance of the QSE can be improved through the regulation of the fractional parameters, and consequently, the Carnot efficiency can be obtained.
## V Conclusions
By incorporating the fractional parameter into quantum thermodynamic cycles, we have proposed a new way to regulate thermodynamic cycles based on fractional quantum mechanics. It is observed that the energy level structure of the system can be changed by adjusting the fractional parameters, so that perfect regeneration and the Carnot efficiency are obtained. This proposal introduces a new approach for designing thermodynamic cycles when the motion of the particle transitions from Brownian motion to Levy flight. Usually, Brownian motion is driven by white Gaussian noise, whereas the Levy process can be viewed as a process driven by Levy noise. Therefore, the introduction of fractional quantum mechanics may provide us with a new route to study thermodynamic processes that are affected by noise, or other heat engines with specific properties. This may also allow us to investigate information theory based on the fractional Schrodinger equation.
| \(L_{A}\) | \(L_{B}\) | \(Q_{R}\) (\(\alpha_{1}=\alpha_{2}=2\)) | \(\alpha_{1}\) (\(Q_{R}=0\)) | \(\alpha_{2}\) (\(Q_{R}=0\)) |
| --- | --- | --- | --- | --- |
| 0.6 | 0.9 | -0.1291 | 1.245 | 1.282 |
| 0.6 | 1.0 | -0.1315 | 1.279 | 1.326 |
| 0.8 | 1.1 | -0.01223 | 1.311 | 1.409 |
| 0.8 | 1.2 | -0.01009 | 1.382 | 1.459 |
| 1.0 | 1.3 | 0.005565 | 1.439 | 1.520 |
| 1.0 | 1.4 | 0.008296 | 1.502 | 1.579 |
| 1.2 | 1.5 | 0.008021 | 1.517 | 1.621 |
| 1.2 | 1.6 | 0.01057 | 1.565 | 1.678 |
| 1.4 | 1.7 | 0.007634 | 1.607 | 1.719 |
| 1.4 | 1.8 | 0.009979 | 1.660 | 1.778 |

Table 1: The values of the fractional parameters \(\alpha_{1}\) and \(\alpha_{2}\) for perfect regeneration at given values of the widths \(L_{A}\) and \(L_{B}\).
Figure 4: The contour plots of (a) the net heat exchange \(Q_{R}\) between the particle and the regenerator and (b) the efficiency \(\eta\) varying with the fractional parameters \(\alpha_{1}\) and \(\alpha_{2}\), where \(T_{h}=4\), \(T_{c}=3\), \(m=1\), \(L_{A}=1\), and \(L_{B}=1.5\). The black line represents the cycle with the perfect regeneration, i.e., \(Q_{R}=0\).
Figure 5: The fractional parameters \(\alpha_{1}\) and \(\alpha_{2}\) for the perfect regeneration at \(L_{A}=1.0,L_{B}=1.4\) (square points), \(L_{A}=1.2,L_{B}=1.6\) (circular points), and \(L_{A}=1.4,L_{B}=1.8\) (triangular points).
###### Acknowledgements.
The authors thank Prof. Haijun Wang and Jia Du for helpful discussions and comments. This work has been supported by the National Natural Science Foundation (Grant No. 12075197) and the Fundamental Research Fund for the Central Universities (No. 20720210024).
## Appendix A The derivation of the fractional Schrodinger equation
During an infinitesimal interval \(\varepsilon\), the state of the fractional quantum-mechanical system evolves from \(\psi(y,t)\) to \(\psi(x,t+\varepsilon)\), which is given by
\[\psi(x,t+\varepsilon)=\int_{-\infty}^{\infty}dyK(x,t+\varepsilon\mid y,t)\psi( y,t). \tag{20}\]
By using Eq. (1), the continuum limit \(\sum_{j=1}^{N}V\left(x_{j}\right)\simeq\int_{t_{a}}^{t_{b}}d\tau V(x(\tau))\), and Feynman's approximation \(\int_{t}^{t+\varepsilon}d\tau V(x(\tau))\simeq\varepsilon V\left(\frac{x+y}{2}\right)\), the kernel becomes
\[K(x,t+\varepsilon\mid y,t)\approx\exp\left[-\frac{i}{\hbar} \varepsilon V\left(\frac{x+y}{2}\right)\right]\] \[\times\lim_{N\to\infty}\int_{-\infty}^{\infty}dx_{1}\ldots dx_{N-1} \frac{1}{(2\pi\hbar)^{N}}\int_{-\infty}^{\infty}dp_{1}\ldots dp_{N}\] \[\times\exp\left[\frac{i}{\hbar}\sum_{j=1}^{N}p_{j}\left(x_{j}-x_ {j-1}\right)-\frac{i}{\hbar}D_{\alpha}\varepsilon\sum_{j=1}^{N}\left|p_{j} \right|^{\alpha}\right]. \tag{21}\]
Note that
\[\sum_{j=1}^{N}p_{j}\left(x_{j}-x_{j-1}\right)=\sum_{j=1}^{N}x_{j}\left(p_{j}-p _{j+1}\right)+p_{N}x_{N}-p_{1}x_{0} \tag{22}\]
and the \(\delta\) function
\[\delta\left(p_{j}-p_{j+1}\right)=\int dx_{j}\frac{1}{2\pi\hbar}\exp\left[ \frac{i}{\hbar}x_{j}\left(p_{j}-p_{j+1}\right)\right], \tag{23}\]
Eq. (21) is simplified as
\[K_{L}(x,t+\varepsilon\mid y,t) =\frac{1}{2\pi\hbar}\int_{-\infty}^{\infty}dp\exp\left[\frac{ip(x -y)}{\hbar}\right.\] \[\left.-\frac{iD_{\alpha}|p|^{\alpha}\varepsilon}{\hbar}-\frac{i} {\hbar}\varepsilon V\left(\frac{x+y}{2},t\right)\right]. \tag{24}\]
Substituting Eq. (24) into Eq. (20) yields
\[\psi(x,t+\varepsilon)= \int_{-\infty}^{\infty}dy\frac{1}{2\pi\hbar}\int_{-\infty}^{ \infty}dp \tag{25}\] \[\times\exp\left[\frac{ip(x-y)}{\hbar}\right]\times\exp\left[- \frac{i}{\hbar}D_{\alpha}|p|^{\alpha}\varepsilon\right]\] \[\times\exp\left[-\frac{i}{\hbar}V\left(\frac{x+y}{2},t\right) \varepsilon\right]\psi(y,t).\]
Expanding the left- and right-hand sides in power series, taking the first-order approximation, and using the definition of the Riesz operator in Eq. (4), we have
\[\psi(x,t)+\varepsilon\frac{\partial\psi(x,t)}{\partial t}\] \[=\psi(x,t)+i\frac{D_{\alpha}\varepsilon}{\hbar}(\hbar\nabla)^{ \alpha}\psi(x,t)-\frac{i}{\hbar}\varepsilon V(x,t)\psi(x,t), \tag{26}\]
which can be further simplified to obtain Eq. (3).
|
2307.08640 | A new quantum machine learning algorithm: split hidden quantum Markov
model inspired by quantum conditional master equation | The Hidden Quantum Markov Model (HQMM) has significant potential for
analyzing time-series data and studying stochastic processes in the quantum
domain as an upgrading option with potential advantages over classical Markov
models. In this paper, we introduced the split HQMM (SHQMM) for implementing
the hidden quantum Markov process, utilizing the conditional master equation
with a fine balance condition to demonstrate the interconnections among the
internal states of the quantum system. The experimental results suggest that
our model outperforms previous models in terms of scope of applications and
robustness. Additionally, we establish a new learning algorithm to solve
parameters in HQMM by relating the quantum conditional master equation to the
HQMM. Finally, our study provides clear evidence that the quantum transport
system can be considered a physical representation of HQMM. The SHQMM with
accompanying algorithms present a novel method to analyze quantum systems and
time series grounded in physical implementation. | Xiao-Yu Li, Qin-Sheng Zhu, Yong Hu, Hao Wu, Guo-Wu Yang, Lian-Hui Yu, Geng Chen | 2023-07-17T16:55:26Z | http://arxiv.org/abs/2307.08640v6 | A new quantum machine learning algorithm: split hidden quantum Markov model inspired by quantum conditional master equation
###### Abstract
The Hidden Quantum Markov Model (HQMM) shows tremendous potential for analyzing time-series data and studying stochastic processes in the quantum world, owing to its higher accuracy and efficiency compared with the classical hidden Markov model. Here, we propose a scheme to realize the hidden quantum Markov process using the conditional master equation, which includes a fine balance condition and better reflects the relationships among the inner states of the quantum system. The experimental results indicate that our model performs better and is more robust than previous models on time-series data. Most importantly, by taking the quantum transport system as an example, we establish the relations between the quantum conditional master equation and the HQMM, and propose a new learning algorithm for solving the parameters of the HQMM. Our findings provide clear evidence that the quantum transport system can be deemed a physical embodiment of the HQMM.
## 1 Introduction
The explosive growth of data and information in recent years has highlighted the drawbacks of traditional machine learning algorithms running on classical computers, such as low solution efficiency and running speed. To better satisfy the development demands of human society, the advance of quantum computing has become an irresistible trend. Unlike classical computing, quantum computing uses qubits to read, store and process data and information, and qubits can achieve efficient parallel computing due to their superposition, demonstrating that quantum computing is better adapted to the current rapid explosion of data and information processing. Quantum computing originated from Benioff's[1] quantum Turing machine and Feynman's[2] proposal to bypass the difficulty of simulating quantum mechanics with classical computers. Over the past few years, both the hardware implementation of quantum computing and quantum algorithm methods have achieved rapid development. The current physical systems that implement quantum computing include ion traps[3], neutral atoms[4], quantum optics[5], superconducting Josephson junctions[6], cavity quantum electrodynamics[7], liquid nuclear magnetic resonance[8], quantum dots[9], etc. In terms of quantum algorithms, important works have been summarized in [10], and the arrival of the _"Noisy intermediate-scale quantum (NISQ) algorithms"_ era has triggered the emergence of a large number of hybrid (quantum+classical) framework algorithms [11]. These quantum algorithms are expected to be realized under current hardware conditions. In recent years, quantum computing has achieved successful applications in various fields, such as chemistry[12], Hamiltonian simulation[13], biology[14], pharmaceuticals[15], finance[16], materials[17], etc. This indicates
that quantum computing has clear advantages over classical computing.
In the field of machine learning, the hidden Markov model (HMM) is an important algorithm that has achieved great success in stock market forecasting [18; 19], natural language processing [20; 21], and protein sequencing [22; 23]. The classic HMM involves three problems: evaluation, decoding, and learning. If the dimension of the hidden state is not large, the Baum-Welch, Viterbi, and EM algorithms can solve these problems effectively. However, if the dimensions of the hidden state and observation space increase simultaneously, the classical algorithms become weak in solution speed and accuracy. Therefore, Alex Monras _et al._[24] proposed random quantum operators to measure an open quantum system, defined the result of such a random process as the hidden quantum Markov model, and gave the mathematical definition of the hidden quantum Markov process in terms of Kraus operators. Their work provides the idea of the HQMM, a method for studying this model from the perspective of open quantum systems, and a route to heuristic quantum computing algorithms.
Compared with the classical HMM, the quantum version is also a stochastic probabilistic graph model and involves the same three problems as the HMM. However, the key to these problems is how to solve for the model parameters, known as the _"learning problem"_. Siddarth Srinivasan _et al._[25] proposed an algorithm based on the learning algorithm for the Norm Observable Operator Model, which was proposed by Jaeger _et al._[26]. Although their numerical experimental results show that the HQMM is superior to the classical HMM in model complexity and accuracy, this algorithm is unfortunately not perfect, because it is only effective when the hidden state dimension is relatively small and can easily fall into a locally optimal solution. Subsequently, Liu Qin _et al._[27] analytically established the strict superiority of hidden quantum Markov models, and Sandesh Adhikary _et al._[28] proposed a new learning algorithm based on optimization theory on manifolds [29] to solve for the Kraus operators.
#### Motivation
* Our research takes a different starting point from the studies mentioned above, as the HQMM is, from a physical perspective, related to an open quantum system described by the quantum master equation. This means that there can be different HQMMs, involving different physical processes of the quantum state, for the same measurement results. As a result, we were inspired to develop a new learning algorithm for the HQMM by applying the quantum conditional master equation [30; 31; 32]. This equation provides insight into the relationship between the quantum states of a system under detailed balance conditions, which was useful in developing our new algorithm.
* Since all quantum algorithms are ultimately executed on physical devices or systems, it is essential to explore quantum algorithms in terms of real physical systems and processes. As a physical example of the quantum conditional master equation, quantum transport systems [33] provide us with a foundation for physical implementation. Therefore, we are also interested in combining HQMM theory and the quantum conditional master equation to develop a new HQMM algorithm that is more practical and suitable for real-world applications.
* The interpretability of machine learning algorithms has been a major topic of discussion, and this also applies to quantum machine learning algorithms. Understanding the physical process behind a designed quantum algorithm not only enhances its interpretability [34], but also aids in designing quantum circuits. Since the physical process of the quantum conditional master equation is similar to that of neural networks, research in this area can improve the interpretability and circuit construction of the HQMM and of quantum neural networks.
* Currently, the progress of NISQ has highlighted the potential of quantum computing and the significance of achieving quantum supremacy. In the realm of quantum mechanics, noise and measurements introduce the concept of open quantum systems, and the evolution of quantum states for quantum computing becomes non-unitary. Since
HQMM deals with open systems and non-unitary quantum algorithms, it aligns with the research needs of NISQ, which is also a motivation for our work.
**Main work** This paper applies the quantum conditional master equation for the first time to study a new HQMM. We extend the research of Clark et al.[35] to construct a new algorithmic method, specifically a novel HQMM. Additionally, we propose a new method to solve the problem of parameter updating or learning and provide numerical experimental results for both quantum and classical data. The paper is structured as follows: Sec. 2 introduces the HMM and HQMM. Sec. 3 describes the theory of the quantum conditional master equation. In Sec. 4, the HQMM is derived from the quantum master equation, and a new stochastic probability graph model called the Split Hidden Quantum Markov Model (SHQMM) is proposed from the quantum conditional master equation. The quantum transport system is used as an example to demonstrate the implementation of the SHQMM. Sec. 5 presents the results of numerical experiments on the SHQMM, revealing some interesting results. Finally, Sec. 6 concludes the work and provides some outlooks.
## 2 Hidden quantum Markov model
The Hidden Markov Model (HMM) is a type of probabilistic graph model that describes the evolutionary properties of Markov dynamics. It consists of two important parameters: the transition matrix \(\mathbf{T}\) and the observation matrix \(\mathbf{C}\), which are constant matrices. An HMM can be defined as \(\lambda=(\mathbf{T},\mathbf{C},x_{0})\), where \(x_{0}\) is the initial state vector. The update of the hidden state and the observable results can be obtained from Eq. 1, which represents a state-emitting (Moore) hidden Markov model.
\[\begin{split} x_{t+\Delta t}&=\mathbf{T}x_{t},\\ y_{t+\Delta t}&=\text{diag}(\mathbf{C}_{(y,\cdot)})x_{t+\Delta t},\end{split} \tag{1}\]
the variable \(y\) represents an output symbol, where \(y\) belongs to the observable space \(O\).
A Hidden Quantum Markov Model (HQMM) can be defined using a set of parameters \(\lambda_{Q}=(\rho_{0},K_{y})\), similar to the classical Markov process. Here, \(\rho_{0}\) corresponds to the initial state vector \(x_{0}\) of the classical Markov model, and the Kraus operators \(K_{y}\) correspond to the matrices \(\mathbf{T}\) and \(\mathbf{C}\). In comparison to the classical Markov model, the Kraus operators \(K_{y}\) play a dual role, governing both the state evolution and the observable output, and satisfy the condition \(\sum_{m}K_{m}^{\dagger}K_{m}=I\). When the system is measured (assuming the measurement or read-out result is \(y\)), the density matrix can be expressed as follows [28]:
\[\rho_{y}(t+\Delta t)=\frac{\sum_{\omega_{y}}K_{\omega_{y}}\rho(t)K_{\omega_{y }}^{\dagger}}{\text{Tr}[\sum_{\omega_{y}}K_{\omega_{y}}\rho(t)K_{\omega_{y}}^ {\dagger}]} \tag{2}\]
where \(\omega_{y}\) denotes the auxiliary dimension of the Kraus operators.
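In code, the update of Eq. (2) amounts to summing over the Kraus operators that share the observed symbol and renormalising by the trace. The following minimal sketch is our own illustration (`hqmm_update` is a hypothetical name; `kraus_for_y` collects the operators \(K_{\omega_{y}}\) for the same symbol \(y\)):

```python
import numpy as np

def hqmm_update(rho, kraus_for_y):
    """Eq. (2): posterior density matrix after reading symbol y,
    together with the probability of that read-out."""
    unnormalised = sum(K @ rho @ K.conj().T for K in kraus_for_y)
    p_y = float(np.real(np.trace(unnormalised)))
    return unnormalised / p_y, p_y
```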
The difference between the HMM and the HQMM is shown in Table 1.
To calculate the parameters \(\{K\}\) of the HQMM, Siddarth Srinivasan and his colleagues [28] proposed a maximum likelihood estimation algorithm. This algorithm assumes that a set of observation sequences \(y_{1},y_{2},y_{3},\cdots,y_{T}\) is known, and constructs the maximum likelihood function based on this data. This is a particular case where \(\omega=1\):
\[\mathscr{L}=-\text{ln}\,\text{tr}\left(K_{y_{T}}\cdots K_{y_{2}}K_{y_{1}} \rho_{0}K_{y_{1}}^{\dagger}K_{y_{2}}^{\dagger}\cdots K_{y_{T}}^{\dagger} \right). \tag{3}\]
Then the parameter solving problem of the HQMM is transformed into a constrained optimization problem:
\[\begin{split}\text{minimize}_{\{K\}}&\mathscr{L}(\{ K\})\\ \text{subject to}&\sum_{y}K_{y}^{\dagger}K_{y}=I,K_{y} \in\mathbb{C}^{n\times n}.\end{split} \tag{4}\]
Stack the \(K_{m}\) by column to form a new matrix \(\kappa=[K_{1},K_{2},\cdots,K_{m}]^{T}\) with dimension \(nm\times n\); the constraint condition in Eq. 4 can then be rewritten as
\[\kappa^{\dagger}\kappa=I,\kappa\in\mathbb{C}^{nm\times n}. \tag{5}\]
| Model | HMM | HQMM |
| --- | --- | --- |
| State | state vector \(x\) | density matrix \(\rho\) |
| Transition and Emission | \(\mathbf{T}\), \(\mathbf{C}\) | Kraus operators \(\{K\}\) |
| Steady State | \(x^{*}=\mathbf{T}x^{*}\) | \(\rho^{*}=\sum_{\omega_{y}}K_{\omega_{y}}\rho^{*}K_{\omega_{y}}^{\dagger}\) |
| Probability | \(\mathbf{1}^{\top}\text{diag}(\mathbf{C}_{(y,\cdot)})x\) | \(\text{Tr}(\sum_{\omega_{y}}K_{\omega_{y}}\rho K_{\omega_{y}}^{\dagger})\) |

Table 1: The difference between the HMM and the HQMM
According to Ref.[28], \(\kappa\) in Eq.5 lies on the Stiefel manifold, and the following gradient descent method can be used to solve Eq.4:
\[\begin{split}& G=\frac{\partial\mathscr{L}}{\partial\kappa},\\ &\kappa=\kappa-\tau\mathbf{U}(I+\frac{\tau}{2}\mathbf{V}^{\dagger} \mathbf{U})^{-1}\mathbf{V}^{\dagger}\kappa.\end{split} \tag{6}\]
In Eq.6, \(\mathbf{U}=[G,\kappa]\), \(\mathbf{V}=[\kappa,-G]\), and \(\tau\) is a positive real number.
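The update of Eq. (6) is a Cayley-type retraction that keeps \(\kappa\) exactly on the Stiefel manifold; one step can be sketched as follows (our own illustration, with `stiefel_step` a name we introduce):

```python
import numpy as np

def stiefel_step(kappa, G, tau):
    """One gradient step of Eq. (6), preserving kappa^dagger kappa = I:
    kappa <- kappa - tau * U (I + (tau/2) V^dagger U)^{-1} V^dagger kappa,
    with U = [G, kappa] and V = [kappa, -G]."""
    U = np.hstack([G, kappa])
    V = np.hstack([kappa, -G])
    M = np.eye(U.shape[1]) + 0.5 * tau * (V.conj().T @ U)
    return kappa - tau * U @ np.linalg.solve(M, V.conj().T @ kappa)
```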
## 3 The quantum conditional master equation
In the real physical world, due to the coupling effect between the quantum system and the environment, the Schrodinger equation is not practical for describing an open quantum system. The quantum master equation was proposed instead to describe an open quantum system, whose Hamiltonian can be written as
\[H=H_{S}+H_{E}+H^{\prime}. \tag{7}\]
\(H_{S}\) and \(H_{E}\) represent the Hamiltonians of the quantum system and the environment, respectively. The Hamiltonian \(H^{\prime}\) describes the coupling effect between the quantum system and the environment. In the case of weak coupling between the quantum system and the environment, \(H^{\prime}\) can be treated as a perturbation. Using the expansion of the second-order cumulant, we can obtain a description of the evolution of the reduced density matrix.
\[\begin{split}\dot{\rho}(t)&=-i\mathcal{L}\rho(t) \\ &-\int_{0}^{t}d\tau\langle\mathcal{L}^{\prime}(t)\mathcal{G}(t, \tau)\mathcal{L}^{\prime}(\tau)\mathcal{G}^{\dagger}(t,\tau)\rangle\rho(t). \end{split} \tag{8}\]
Here, the Liouvillian super operator is defined as \(\mathcal{L}=[H_{S},(\cdots)]\), \(\mathcal{L}^{\prime}=[H_{S}^{\prime},(\cdots)]\). \(\mathcal{G}(t,\tau)=G(t,\tau)\times(\cdots)\times G^{\dagger}(t,\tau)\). \(G(t,\tau)\) is the Green's function related to \(H_{S}\). The reduced density matrix is obtained by partially tracing the density matrix of the composite system, that is, \(\rho(t)=\text{Tr}_{E}[\rho_{T}(t)]\).
In experiments, measurement results are typically linked to changes in the internal state of a system. Therefore, unlike the method used to derive Eq. 8, Li _et al._ [33] introduced the "detailed balance" condition to illustrate the relationship among different system states when studying the current (the measurement result) of a quantum transport system. This allowed them to derive the quantum conditional master equation (QCME) in Equation 12 and to obtain some interesting results. By adopting the approach of the QCME described in reference [33] and considering the detailed balance among different system states, the general QCME can be expressed as follows when the environment space is divided into different subspaces, as shown in Figure 1.
\[\dot{\rho}^{(\mathcal{M}_{q})}=-i\mathcal{L}\rho^{(\mathcal{M}_{q})}-\int_{0}^ {t}d\tau\text{Tr}_{E(\mathcal{M}_{q})}[\mathcal{L}^{\prime}(t)\mathcal{G}(t, \tau)\mathcal{L}^{\prime}(\tau)\mathcal{G}^{\dagger}(t,\tau)\rho_{T}(t)]. \tag{9}\]
Here, the proposed initial condition for the quantum conditional master equation is \(\rho_{T}(0)\simeq\sum_{\mathcal{M}_{q}}\rho^{(\mathcal{M}_{q})}(0)\otimes\rho_{E}^{(\mathcal{M}_{q})}(0)\), and \(\rho^{(\mathcal{M}_{q})}\) denotes the _conditional density matrix_ of the quantum system corresponding to the environment subspace \(\mathcal{M}_{q}\), which provides a better understanding of the open quantum system being studied.
## 4 The split hidden quantum Markov model based on QCME
### The quantum master equation of quantum transport system
_Since any quantum computation ultimately needs an actual physical system for its implementation, we need to search for an open quantum system that can be described by the conditional master equation to establish the HQMM. After conducting a search, we found that the quantum transport system is well suited for implementing our HQMM based on previous work_[28], [33].
As a result, in this section, we present the quantum conditional master equation for the quantum transport system. The Hamiltonian of this quantum system is expressed as follows [33]:
\[\begin{split} H&=H_{S}(a_{\mu}^{\dagger},a_{\mu})+ \sum_{\alpha=L,R}\sum_{\mu k}\epsilon_{\alpha\mu k}d_{\alpha\mu k}^{\dagger}d_ {\alpha\mu k}\\ &+\sum_{\alpha=L,R}\sum_{\mu k}(t_{\alpha\mu k}a_{\mu}^{\dagger}d _{\alpha\mu k}+\text{H.c}).\end{split} \tag{10}\]
where \(H_{S}\) is the Hamiltonian of the quantum dot system, \(L\) and \(R\) represent the left and right electrodes respectively, \(d_{\alpha\mu k}^{\dagger}\) and \(d_{\alpha\mu k}\) represent the creation and annihilation operators of electrons in the electrodes, respectively, and \(t_{\alpha\mu k}\) represents the coupling strength between the electrodes and the quantum dot system. The master equation for the quantum transport system can be derived through some calculations [33] based on Eq. 8:
\[\dot{\rho}=-i\mathcal{L}\rho-\frac{1}{2}\sum_{\mu}\{[a_{\mu}^{\dagger},A_{\mu }^{(-)}\rho-\rho A_{\mu}^{(+)}]+\text{H.c.}\}. \tag{11}\]
If the state space of the electrodes with no electrons having passed through the quantum dot system is denoted as \(E^{(0)}\), it is formed by the wave functions of the two isolated left and right electrodes, \(E^{(0)}=\text{span}\{|\psi_{L}\rangle\otimes|\psi_{R}\rangle\}\). If \(n\) electrons have passed from the right electrode through the quantum dot to the left electrode, the corresponding state space is denoted as \(E^{(n)}\) (\(n=1,2,3,\cdots\)). Then the electrode state space \(E\) can be decomposed as \(E=\oplus_{n}E^{(n)}\), which leads to the quantum conditional master equation [33] with the initial condition \(\rho_{T}(0)\simeq\sum_{n}\rho^{(n)}(0)\otimes\rho_{E}^{(n)}(0)\).
\[\dot{\rho}^{(n)} =-i\mathcal{L}\rho^{(n)}-\frac{1}{2}\sum_{\mu}\{[a_{\mu}^{\dagger }A_{\mu}^{(-)}\rho^{(n)}+\rho^{(n)}A_{\mu}^{(+)}a_{\mu}^{\dagger} \tag{12}\] \[-A_{L\mu}^{(-)}\rho^{(n)}a_{\mu}^{\dagger}-a_{\mu}^{\dagger}\rho^ {(n)}A_{L\mu}^{(+)}-A_{R\mu}^{(-)}\rho^{(n-1)}a_{\mu}^{\dagger}\] \[-a_{\mu}^{\dagger}\rho^{(n+1)}A_{R\mu}^{(+)}]+\text{H.c.}\}.\]
Here, \(\rho^{(n)}=\mathrm{Tr}_{E^{(n)}}[\rho_{T}(t)]\) is the conditional density matrix of the quantum dot system, corresponding to \(n\) electrons having passed through the quantum dot system within time \(t\). The number of electrons \(n\) corresponds to the subspace \(\mathcal{M}_{q}\) in Eq. 9.
### The relationship between the QCME and the HQMM
Based on the contents of Secs. 3 and 4.1, we derive the hidden quantum Markov model from the quantum master equation and propose a new stochastic graph model from the quantum conditional master equation. After some calculations (the detailed proof and calculation process are shown in [36]), we obtain:
(1) For quantum master equation of Eq.8, the evolution density matrix of quantum dot system is
\[\rho(t+\Delta t)=\sum_{i,\mu}K_{i,\mu}\rho K_{i,\mu}^{\dagger}. \tag{13}\]
(2) For quantum conditional master equation of Eq.12, the evolution density matrix of quantum dot system is
\[\rho^{(n)}(t+\Delta t)=\sum_{i,\mu}K_{i,\mu}\rho^{(n)}K_{i,\mu}^{\dagger}+\sum _{\mu}K_{3,\mu}\rho^{(n-1)}K_{3,\mu}^{\dagger}+\sum_{\mu}K_{4,\mu}\rho^{(n+1)} K_{4,\mu}^{\dagger}, \tag{14}\]
where \(i=0,1,2\).
Comparing Eq. 2 (\(\omega=1\)) and Eq. 13, we conclude that there is a close relationship between a quantum Markov model and a quantum master equation. However, the Kraus operators \(K_{i,\mu}\) of Eq. 14 couple the related \(\rho^{(n)}\), \(\rho^{(n-1)}\), and \(\rho^{(n+1)}\). This difference arises from the division of the Hilbert space of the environment, which gives rise to a new HQMM called the split hidden quantum Markov model.
### Split hidden quantum Markov model
In this section, we introduce an SHQMM inspired by a quantum transport system. Similar to the HQMM, the SHQMM is defined by a set of parameters \(\lambda_{SQ}=(\rho^{(0)},\rho^{(1)},\rho^{(2)},\cdots,K_{y})\), where \(\mathrm{Tr}(\sum_{i}\rho^{(i)})=1\).
Firstly, the evolution of the conditional density matrix of the quantum system \(H_{S}\) is written as
\[\rho^{(\mathcal{M}_{q})}(t+\Delta t) =\sum_{y}K_{y}^{(\mathcal{M}_{1})}\rho^{(\mathcal{M}_{1})}(t)K_{ y}^{(\mathcal{M}_{1})\dagger}+\cdots \tag{15}\] \[+\sum_{y}K_{y}^{(\mathcal{M}_{q})}\rho^{(\mathcal{M}_{q})}(t)K_{ y}^{(\mathcal{M}_{q})\dagger}+\cdots,\]
where \(q\) denotes the index of the environment subspaces \(\mathcal{M}_{q}\), and \(\sum_{i,\mu}K_{i,\mu}^{\dagger}K_{i,\mu}=I\). The parameter \(y\) represents the read-out of information symbols from the open quantum system.
Secondly, when we read out or measure a certain value \(y^{\prime}\) for \(\rho^{(\mathcal{M}_{q})}(t)\), the conditional density matrix \(\rho^{(\mathcal{M}_{q})}(t+\Delta t)\) is rewritten as follows:
\[\rho^{(\mathcal{M}_{q})}_{y^{\prime}}(t+\Delta t) =\frac{\rho^{{}^{\prime}(\mathcal{M}_{q})}_{y^{\prime}}(t+\Delta t )}{Tr[\sum_{\mathcal{M}_{q}}\rho^{{}^{\prime}(\mathcal{M}_{q})}_{y^{\prime}}( t+\Delta t)]}, \tag{16}\] \[\rho^{{}^{\prime}(\mathcal{M}_{q})}_{y^{\prime}}(t+\Delta t) =K_{y^{\prime}}^{(\mathcal{M}_{1})}\rho^{(\mathcal{M}_{1})}(t)K_{ y^{\prime}}^{(\mathcal{M}_{1})\dagger}+\cdots\] \[+K_{y^{\prime}}^{(\mathcal{M}_{q})}\rho^{(\mathcal{M}_{q})}(t)K_ {y^{\prime}}^{(\mathcal{M}_{q})\dagger}+\cdots.\]
Thirdly, the probability of obtaining the measurement result \(y^{\prime}\) is given by:
\[P(y^{\prime})=\sum_{\mathcal{M}_{q}}Tr[\rho^{{}^{\prime}(\mathcal{M}_{q})}_{ y^{\prime}}(t+\Delta t)]. \tag{17}\]
Here, Eq.17 describes the contribution of different \(\rho^{{}^{\prime}(\mathcal{M}_{q})}\) to the probability \(P(y^{\prime})\), and this process reveals the concept of "detailed balance" in physics, as described in Eq.15.
**A concrete implementation example of our SHQMM** To calculate the parameters of the SHQMM, assuming that a set of sequences \(y_{0},y_{1}\cdots,y_{T}\) is known, the conditional density matrix evolution under the measurement result \(y_{i}\) is shown in Fig. 2, based on the transport system. It can be seen that Fig. 2 is similar to a neural network and shows the process of forward propagation through time \(t\). This demonstrates the connection of the quantum state evolution among the different subspaces \(n\) in the QCME and the conversion relationship among the probabilities \(\mathrm{Tr}(\rho^{(n)})\). Compared to previous work on the HQMM (where the probabilities for the measurement value \(y_{i}\) depend on \(\rho=\sum_{n}\rho^{(n)}\)), the property illustrated in Fig. 2 also displays the differing contributions of the \(\rho^{(n)}\) to the probability of obtaining the measurement value \(y_{i}\) at time \(t_{i}\). Therefore, our model produces a more stable and robust model structure (as seen in the experimental results).
Figure 2: The expanded calculation diagram of \(\rho^{(n)}(t)\) for a set of sequences \(y_{0},y_{1}\cdots,y_{T}\): the red lines represent the Kraus operators \(K_{y}\) in \(\{K\}\), the black lines represent the Kraus operators \(R_{y}\) in \(\{R\}\), and the blue lines represent the Kraus operators \(A_{y}\) in \(\{A\}\).

Here, based on the QCME (Eq. 14), we can write the probability function using the following equations.
\[\begin{split}\rho_{T}^{(0)}&=K_{y_{T-1}}\rho_{T-1}^{(0)}K_{y_{T-1}}^{\dagger}+A_{y_{T-1}}\rho_{T-1}^{(1)}A_{y_{T-1}}^{\dagger},\\ \rho_{T}^{(1)}&=R_{y_{T-1}}\rho_{T-1}^{(0)}R_{y_{T-1}}^{\dagger}+K_{y_{T-1}}\rho_{T-1}^{(1)}K_{y_{T-1}}^{\dagger}+A_{y_{T-1}}\rho_{T-1}^{(2)}A_{y_{T-1}}^{\dagger},\\ &\;\;\vdots\\ \rho_{T}^{(n)}&=R_{y_{T-1}}\rho_{T-1}^{(n-1)}R_{y_{T-1}}^{\dagger}+K_{y_{T-1}}\rho_{T-1}^{(n)}K_{y_{T-1}}^{\dagger}+A_{y_{T-1}}\rho_{T-1}^{(n+1)}A_{y_{T-1}}^{\dagger},\\ &\;\;\vdots\\ \rho_{T}^{(N_{max})}&=R_{y_{T-1}}\rho_{T-1}^{(N_{max}-1)}R_{y_{T-1}}^{\dagger}+K_{y_{T-1}}\rho_{T-1}^{(N_{max})}K_{y_{T-1}}^{\dagger}.\end{split} \tag{18}\]
where \(K_{i,\mu}\), \(K_{3,\mu}\) and \(K_{4,\mu}\) of Eq.14 denote \(K_{y_{i}}\), \(R_{y_{i}}\) and \(A_{y_{i}}\), respectively. \(N_{max}\) denotes the maximum value of \(n\). The probability of \(y_{i}\) is \(P(y_{i})=\sum_{n}\mathrm{Tr}[K_{y_{i}}\rho^{(n)}(t)K_{y_{i}}^{\dagger}+R_{y_{i }}\rho^{(n-1)}(t)R_{y_{i}}^{\dagger}+A_{y_{i}}\rho^{(n+1)}(t)A_{y_{i}}^{\dagger}]\).
From Eq. 18, the probability of the sequence \(y_{0},y_{1}\cdots,y_{T}\) can be easily obtained as
\[\begin{split} P_{y_{0},y_{1}\cdots,y_{T}}&=\mathrm{Tr}(\rho_{T}),\\ \rho_{T}&=\rho_{T}^{(0)}+\rho_{T}^{(1)}+\cdots+\rho_{T}^{(N_{max})}.\end{split} \tag{19}\]
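A direct implementation of the recursion in Eq. (18) together with the per-symbol probability yields the sequence likelihood of Eq. (19). The sketch below is our own illustration of the 1-local forward pass without the periodic boundary terms of Eq. (22); `shqmm_neg_loglik` and the dictionaries `K`, `R`, `A` are hypothetical names:

```python
import numpy as np

def shqmm_neg_loglik(rhos, K, R, A, sequence):
    """Negative log-likelihood of a symbol sequence under the 1-local SHQMM.
    rhos: conditional density matrices rho^(0), ..., rho^(N_max);
    K, R, A: dicts mapping each symbol y to the Kraus operators of Eq. (18)."""
    N = len(rhos)
    loglik = 0.0
    for y in sequence:
        # the 'stay' channel K_y acts on every rho^(n)
        new = [K[y] @ rhos[n] @ K[y].conj().T for n in range(N)]
        for n in range(1, N):    # R_y feeds rho^(n-1) into rho^(n)
            new[n] = new[n] + R[y] @ rhos[n - 1] @ R[y].conj().T
        for n in range(N - 1):   # A_y feeds rho^(n+1) into rho^(n)
            new[n] = new[n] + A[y] @ rhos[n + 1] @ A[y].conj().T
        p_y = float(np.real(sum(np.trace(m) for m in new)))
        rhos = [m / p_y for m in new]  # Bayesian renormalisation, cf. Eq. (16)
        loglik += np.log(p_y)
    return -loglik                     # the loss minimised in Eq. (20)
```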
To compute the parameters of the SHQMM, we propose a maximum likelihood estimation method based on the results in [25; 28]. Firstly, we express the probability function in terms of all possible Kraus operators, and then use a gradient descent algorithm to find the matrix forms of the Kraus operators that minimize the negative log-likelihood of the given sequence. This turns parameter solving into an optimization problem:
\[\begin{split}\text{minimize}_{\{K,R,A\}}&\mathscr{L }(\{K,R,A\})=-\mathrm{ln}\,\mathrm{Tr}(\rho_{T})\\ \text{subject to}&\sum_{y\in O}K_{y}^{\dagger}K_{y}+R_{y }^{\dagger}R_{y}+A_{y}^{\dagger}A_{y}=I.\end{split} \tag{20}\]
Stack \(K_{y}\), \(R_{y}\), \(A_{y}\) by column to form a new matrix \(\kappa=[K_{1},K_{2},\cdots,R_{1},R_{2},\cdots,A_{1},A_{2},\cdots]\) with dimension \(3\mathrm{dim}O\cdot m\times m\) (\(m\) is the dimension of the Kraus operators). The constraint condition in Eq. 20 can then be rewritten as
\[\kappa^{\dagger}\kappa=I,\kappa\in\mathbb{C}^{3\mathrm{dim}O\cdot m\times m}. \tag{21}\]
We summarize all the above steps into an algorithm for solving the Kraus operators. The specific steps are shown in Algorithm 1.
```
Input: Training data \(D\in\mathbb{N}^{W\times l}\), where \(W\) is the number of sequences and \(l\) is the length of each sequence.
Output: \(\{\mathbf{K_{i}}\}_{i=1}^{\mathrm{dim}O}\), \(\{\mathbf{R_{i}}\}_{i=1}^{\mathrm{dim}O}\), \(\{\mathbf{A_{i}}\}_{i=1}^{\mathrm{dim}O}\)
1: Initialize: a complex orthogonal matrix \(\kappa\in\mathbb{C}^{3\mathrm{dim}O\cdot m\times m}\) on the Stiefel manifold and conditional density matrices \(\rho^{(0)},\rho^{(1)},\rho^{(2)},\cdots,\rho^{(N-1)}\), requiring that each \(\rho^{(i)}\) is positive semi-definite and \(\sum_{i=0}^{N-1}\rho^{(i)}=\rho_{total}\), where \(\rho_{total}\) is a density matrix.
2: for \(epoch=1:E\) do
3:   Split the data \(D\) into \(B\) batches \(D_{B}\)
4:   for \(batch=1:B\) do
5:     Compute the gradients \(G_{i}^{\{K\}}=\partial\mathscr{L}/\partial K_{i}^{*}\), \(G_{i}^{\{R\}}=\partial\mathscr{L}/\partial R_{i}^{*}\), \(G_{i}^{\{A\}}=\partial\mathscr{L}/\partial A_{i}^{*}\)
6:     Compute the likelihood function \(\mathscr{L}\)
7:     Stack \(G_{i}^{\{K\}}\), \(G_{i}^{\{R\}}\), \(G_{i}^{\{A\}}\) vertically to construct \(G\)
8:     Construct \(\mathbf{U}=[G,\kappa]\), \(\mathbf{V}=[\kappa,-G]\)
9:     Apply momentum: \(G=\beta G_{old}+(1-\beta)G\)
10:    Update \(\kappa=\kappa-\tau\mathbf{U}(I+\frac{\tau}{2}\mathbf{V}^{\dagger}\mathbf{U})^{-1}\mathbf{V}^{\dagger}\kappa\)
11:   end for
12:   Update the learning rate \(\tau=\alpha\tau\)
13: end for
14: Compute the DA function using the value of the likelihood function \(\mathscr{L}\)
15: return \(\{\mathbf{K_{i}}\}_{i=1}^{\mathrm{dim}O}\), \(\{\mathbf{R_{i}}\}_{i=1}^{\mathrm{dim}O}\), \(\{\mathbf{A_{i}}\}_{i=1}^{\mathrm{dim}O}\) and DA
```
**Algorithm 1** Learning the SHQMM using the gradient descent method on the Stiefel manifold
In Algorithm 1, \(\tau\) (learning rate), \(\alpha\) (decay factor), and \(\beta\) (momentum parameter) are the hyper-parameters, and DA is a function that describes the quality of the model. DA is defined as:
\[DA=f\left(1+\frac{\log_{\iota}P(D|M)}{l}\right).\]
where \(D\) is the data, \(M\) is the model, \(l\) is the length of the sequence, and \(\iota\) is the number of output symbols in the sequence. The function \(f(\cdot)\) is a non-linear piecewise function that maps any argument in \((-\infty,1]\) to \((-1,1]\), and it is defined as:
\[f(x)=\begin{cases}x,&x\geq 0,\\ \dfrac{1-e^{-0.25x}}{1+e^{-0.25x}},&x<0.\end{cases}\]
The model can perfectly predict the Markov sequence if \(DA=1\), and the model performs better than a random model for \(DA>0\).
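Reading \(\log_{\iota}\) through the change-of-base formula, the DA score can be computed compactly; the sketch below is our own illustration (`da_score` is a hypothetical name):

```python
import numpy as np

def da_score(loglik, seq_len, n_symbols):
    """DA = f(1 + log_iota P(D|M) / l), with loglik = ln P(D|M),
    l = seq_len, iota = n_symbols, and f the piecewise map above."""
    x = 1.0 + loglik / (np.log(n_symbols) * seq_len)
    if x >= 0.0:
        return x
    return (1.0 - np.exp(-0.25 * x)) / (1.0 + np.exp(-0.25 * x))
```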
In the SHQMM, different models can produce different prediction effects for the same sequences, depending on the number \(\mathcal{M}_{q}\) of initialized conditional density matrices and the connections between them. Eq. 18 describes the closest connection between the conditional density matrices, with the number \(\mathcal{M}_{q}\) equal to \(n\).
During the optimization process, boundary conditions should be considered to avoid situations where the Kraus operators (\(R\), \(A\)) converge to zero. The optimal solution of HQMM is one of the optimal solutions of Eq.20. Therefore, periodic boundary conditions are applied to the first and last conditional density matrices, similar to the arrangement of atoms in a crystal.
\[\begin{split}\rho_{T}^{(0)}&=K_{y_{T-1}}\rho_{T-1}^{(0)}K_{y_{T-1}}^{\dagger}+A_{y_{T-1}}\rho_{T-1}^{(1)}A_{y_{T-1}}^{\dagger}+R_{y_{T-1}}\rho_{T-1}^{(N_{max})}R_{y_{T-1}}^{\dagger},\\ \rho_{T}^{(N_{max})}&=R_{y_{T-1}}\rho_{T-1}^{(N_{max}-1)}R_{y_{T-1}}^{\dagger}+K_{y_{T-1}}\rho_{T-1}^{(N_{max})}K_{y_{T-1}}^{\dagger}+A_{y_{T-1}}\rho_{T-1}^{(0)}A_{y_{T-1}}^{\dagger}.\end{split} \tag{22}\]
Some detailed cases are presented in [36].
**Extending the generalization ability of the SHQMM** If we need to further increase the complexity of the model, we can set the parameter \(N_{max}=4,5,6,\cdots\) and apply a more complicated connection, defined as \(k\)-local. Thus, the general SHQMM can be defined as a tuple \(\lambda_{SQ}=(\mathbb{C}^{m},\,k\text{-local},\,\{K_{y}^{j}\}_{y\in O}^{j=2k+1},\,\{\rho_{0}^{(i)}\}_{i=0}^{N_{max}-1})\) with the following conditions:
(1) \(\rho_{0}^{(i)}\) is a conditional density matrix, with \(\text{Tr}(\sum_{i=0}^{N_{max}-1}\rho_{0}^{(i)})=1\).
(2) For every Kraus operator, \(K_{y}^{j}:\mathbb{C}^{m}\rightarrow\mathbb{C}^{m}\) and \(\sum_{y,j}\left(K_{y}^{j}\right)^{\dagger}K_{y}^{j}=I\).
(3) The evolution of \(\rho^{(i)}\) follows Eq.23.
\[\rho^{(i)}(t+\Delta t)=\sum_{j^{\prime}=-\lfloor\frac{j}{2}\rfloor}^{\lfloor \frac{j}{2}\rfloor}\sum_{y\in O}K_{y}^{j^{\prime}+1+\lfloor\frac{j}{2}\rfloor} \rho^{(i-j^{\prime})}(t)K_{y}^{j^{\prime}+1+\lfloor\frac{j}{2}\rfloor\dagger}, \tag{23}\]
where periodic boundary conditions should be applied and conditional density matrices beyond the index range should be zeroed, that is, \(0\leq i-j^{\prime}\leq N_{max}-1\). Here \(\lfloor\frac{j}{2}\rfloor\) is equal to \(k\), and \(j^{\prime}\in\{-\lfloor\frac{j}{2}\rfloor,-\lfloor\frac{j}{2}\rfloor+1,\ldots,\lfloor\frac{j}{2}\rfloor-1,\lfloor\frac{j}{2}\rfloor\}\).
(4) The probability of observation symbol \(y\) is
\[p(y)=\sum_{i=0}^{N-1}\text{Tr}(\sum_{j^{\prime}=-\lfloor\frac{j}{2}\rfloor}^{ \lfloor\frac{j}{2}\rfloor}K_{y}^{j^{\prime}+1+\lfloor\frac{j}{2}\rfloor}\rho ^{(i-j^{\prime})}(t)K_{y}^{j^{\prime}+1+\lfloor\frac{j}{2}\rfloor\dagger}), \tag{24}\]
where \(k\)-local represents the relationship between different conditional density matrices, and \(j\) represents the number of Kraus operator classes.
**Comparison of properties of SHQMM and HQMM** We use a simple case to illustrate the correlation and difference between the SHQMM and the HQMM. For the 1-local model, by summing the conditional density matrices \(\rho^{(n)}\) over the index \(n\) in Eq. 18, we obtain:
\[\rho(t+\Delta t) =\sum_{y}K_{y}\rho(t)K_{y}^{\dagger}+\sum_{y}R_{y}\rho(t)R_{y}^{ \dagger} \tag{25}\] \[+\sum_{y}A_{y}\rho(t)A_{y}^{\dagger}\] \[=\sum_{\omega_{y}}K_{\omega_{y}}\rho(t)K_{\omega_{y}}^{\dagger}.\]
The second equality in Eq. 25 shows a formal relationship between the SHQMM and the HQMM, but our model has a clear physical meaning compared to the auxiliary dimension \(\omega_{y}\) in Eq. 2 (an HQMM with \(\omega=3\)). This indicates that the SHQMM is at the same time a valid HQMM. The differences between the SHQMM and the HQMM lie in the following aspects:
* The SHQMM gives the density matrix a hierarchical structure, as shown in Fig. 2, and the density matrix evolves through multiple channels.
* The Kraus operators \(\{K_{y}^{j}\}_{y\in O}^{j=2k+1}\) act on the different conditional density matrices \(\rho^{(n)}\) in the SHQMM, whereas in the HQMM the Kraus operators \(\{K_{\omega_{y}}\}_{y\in O}\) act on the total density matrix \(\rho\).
* SHQMM can be derived from actual physical systems, such as quantum transport systems, as shown in Fig.2.
The SHQMM can reflect the relationship between hidden states and is more suitable for handling complex data than the HQMM. From the point of view of physical implementation, the quantum conditional master equation may take a different form for other open quantum systems [30, 31, 32] than for the quantum transport system, resulting in different split hidden quantum Markov models. The Bayesian rule for the SHQMM is:
\[\rho_{y|x}^{(n)}=\frac{\sum_{j^{\prime}=-\lfloor\frac{j}{2}\rfloor}^{\lfloor\frac{j}{2}\rfloor}K_{y_{1}}^{j^{\prime}+1+\lfloor\frac{j}{2}\rfloor}\rho^{(n-j^{\prime})}K_{y_{1}}^{j^{\prime}+1+\lfloor\frac{j}{2}\rfloor\dagger}}{\operatorname{Tr}\left(\sum_{n=0}^{N-1}\sum_{j^{\prime}=-\lfloor\frac{j}{2}\rfloor}^{\lfloor\frac{j}{2}\rfloor}K_{y_{1}}^{j^{\prime}+1+\lfloor\frac{j}{2}\rfloor}\rho^{(n-j^{\prime})}K_{y_{1}}^{j^{\prime}+1+\lfloor\frac{j}{2}\rfloor\dagger}\right)}. \tag{26}\]
Table 2 shows the properties of the SHQMM and HQMM.
## 5 Experiment and results
In this section, we applied quantum and classical data to train and test our SHQMM.
**Quantum data** Firstly, we used the quantum data generated by a quantum mechanical process in Ref. [25]. The quantum data have six hidden states and six observational values. The size of the quantum data is \(40\times 3000\).
**Training and validation** We use \(20\times 3000\) data to train our model, generating a total of 20 models. Simultaneously, we use \(10\times 3000\) data to validate the model, and the remaining data are used to test the model. The results are shown in Fig. 3 with hyperparameters \(\tau=0.95,\alpha=0.95,\beta=0.90\) (more calculation results can be found in [36]).
Fig.3 shows several models \(\lambda_{SQ}\) for single-, two- and three-qubit quantum systems, which are used to construct the SHQMM under different parameters \(N\) and \(k\). We find: (1) As the number of qubits increases, the value of DA also increases and reaches a stable value at about 20 epochs (three qubits require more than 20 epochs). (2) Apart from the single-qubit case, the connection mode (different \(k\)-local) and \(N\) have little effect on the DA value. (3) Although the DA value is relatively large for more qubits, the standard deviation (STD) decreases as the number of qubits increases when other data are used to test the models. This means that the stability of the model deteriorates and some overfitting occurs.
Figure 3: The training results of the different SHQMMs for quantum data under different parameters \(N\) and \(j=2k+1\). The subfigures (a), (b), (c) represent the training results for single-, two- and three-qubit quantum systems, respectively.
To conduct additional testing on our model, we will assess its reliability from several other perspectives.
**Initialize Kraus** In Ref. [28], it was stated that the training outcome of HQMM is susceptible to the initial Kraus operators in smaller models. Thus, this study investigates the effect of the initial position of the Kraus operators on the Stiefel manifold for SHQMM. Eq. 27 is utilized to evaluate the distance between various initial positions.
\[D(\kappa_{1},\kappa_{2})=||\kappa_{1}\kappa_{2}^{\dagger}-I||_{2}. \tag{27}\]
When \(\kappa_{1}=\kappa_{2}\), \(D=0\). The initialization method for the Kraus operator is presented in Algorithm 2. The varying behaviors of the DA are depicted in Figure 4 for different random initialization seeds (RS). It is evident from the figure that the model can attain stability rapidly, irrespective of the initial Kraus values. This observation lends support to the validity of our proposed SHQMM.
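For illustration, a random initial point on the Stiefel manifold can be generated by stacking the Kraus operators into a column-orthonormal matrix via the QR decomposition of a complex Gaussian matrix; this is a standard construction we assume here, which need not coincide with the paper's Algorithm 2. Note that the sketch computes \(||\kappa_{1}^{\dagger}\kappa_{2}-I||_{2}\), which vanishes for \(\kappa_{1}=\kappa_{2}\) under this vertical stacking convention; Eq. 27 writes \(\kappa_{1}\kappa_{2}^{\dagger}\), corresponding to the transposed convention.

```python
import numpy as np

# Random initialization of the stacked Kraus matrix as a point on the Stiefel
# manifold (QR of a complex Gaussian; an assumed construction, not necessarily
# the paper's Algorithm 2), plus the distance of Eq. (27).

def random_stacked_kraus(num_ops, m, seed=None):
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(num_ops * m, m)) + 1j * rng.normal(size=(num_ops * m, m))
    Q, _ = np.linalg.qr(A)        # Q^dagger Q = I_m, so sum_i K_i^dagger K_i = I_m
    return Q                      # rows i*m:(i+1)*m hold the i-th Kraus operator

def stiefel_distance(kappa1, kappa2):
    # ||kappa1^dagger kappa2 - I||_2: vanishes for kappa1 = kappa2 under the
    # vertical (column-orthonormal) stacking convention used above.
    m = kappa1.shape[1]
    return np.linalg.norm(kappa1.conj().T @ kappa2 - np.eye(m), 2)

D = stiefel_distance(random_stacked_kraus(3, 2, seed=0),
                     random_stacked_kraus(3, 2, seed=1))
```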
**Selection of effective models** Given the various methods available to construct models, selecting the most effective one for a given dataset is a critical challenge. Typically, the expressiveness of a model is directly linked to the number of its parameters. The number of parameters of SHQMM is as follows:
\[\mathscr{N}_{P}=m^{2}\cdot j. \tag{28}\]
The corresponding results are shown in Fig.5. To obtain the best training results, we should first adjust the dimension \(m\) of the Kraus operators and then tune the parameter \(j\) for a given sequence.
**Hyperparameter selection for the model** To obtain the optimal \(DA\), Algorithm 1 employs three hyperparameters, namely \(\tau\), \(\alpha\), and \(\beta\).
| Model | State | Transition and Emission | Probability | Bayesian Rule | Evolution |
| --- | --- | --- | --- | --- | --- |
| SHQMM | \(\{\rho^{(i)}\}_{i=0}^{N-1}\) | quantum channel \(\mathcal{K}_{s}\) | Eq.24 | \(\rho^{(n)}_{y\mid x}\) | Eq.23 |
| HQMM | \(\rho\) | quantum channel \(\mathcal{K}\) | \(\text{Tr}(\sum_{\omega_{y}}K_{\omega_{y}}\rho(t)K_{\omega_{y}}^{\dagger})\) | Eq.2 | \(\sum_{\omega_{y}}K_{\omega_{y}}\rho(t)K_{\omega_{y}}^{\dagger}\) |

Table 2: The properties of SHQMM and HQMM.
Figure 4: The training results of SHQMM (\(N_{max}=3\), AR, single qubit) under random initialization.
Fig. 6 demonstrates the impact of varying hyperparameters on \(DA\), indicating that \(DA\) is more sensitive to changes in \(\alpha\) than to changes in \(\tau\). Moreover, the existence of multiple local optima in SHQMM is evident. To identify the global optimum, we investigated the effect of the momentum parameter \(\beta\) on \(DA\) for the best case (\(\tau=0.95,\alpha=0.95\)) and the worst case (\(\tau=0.65,\alpha=0.65\)), as presented in Fig.7. It was observed that \(DA\) may reach the global optimum at \(\tau=0.95\), \(\alpha=0.95\), and \(\beta=0.90\), and that \(DA\) can be further enhanced by selecting a different \(\beta\). However, after computing the distance between the Kraus solutions for different hyperparameters (\(\tau=0.95\), \(\alpha=0.95\), \(\beta=0.90\) and \(\tau=0.65\), \(\alpha=0.65\), \(\beta=0.60\)) using Eq. 27, we discovered that their \(DA\) values were comparable despite being located at different positions on the Stiefel manifold.
**Classical data** To conduct a thorough evaluation of the model, classical data generated by
Figure 5: The relationship between the training outcome of SHQMM and the number of parameters. (a) represents the variation of DA with the dimension \(m\) of the Kraus operator. (b) represents the variation of DA with the \(j\) of the conditional density matrix. The training results are more sensitive to \(m\) than to \(j\).
a hidden Markov process with transition matrix \(\mathbf{T}\) and emission matrix \(\mathbf{C}\) were utilized to compute the Kraus operators and determine \(DA\). The results obtained from the classical data are presented in Fig. 8, where the hyperparameters were set to \(\tau=0.95\), \(\alpha=0.95\), and \(\beta=0.90\).
Similar to the quantum case, the value of \(DA\) increases and reaches a stable state after approximately 20 epochs (for three qubits, more than 20 epochs are needed to stabilize). The different values of \(k\)-local have little impact on the \(DA\) value, and the standard deviation (STD) continues to decrease as the number of qubits increases for the testing data, possibly due to model overfitting. Additional test results can be found in [36].
\[\mathbf{T}=\left(\begin{array}{cccccc}0.8&0.01&0&0.1&0.3&0\\ 0.02&0.02&0.1&0.15&0.05&0\\ 0.08&0.03&0.1&0.4&0.05&0.5\\ 0.05&0.04&0.5&0.35&0&0.5\\ 0.03&0.5&0.03&0&0.6&0\\ 0.02&0.4&0.27&0&0&0\end{array}\right),\] \[\mathbf{C}=\left(\begin{array}{cccccc}0.2&0&0.05&0.95&0.01&0.05 \\ 0.7&0.1&0.05&0.01&0.05&0.05\\ 0.05&0.8&0.1&0.02&0.05&0.04\\ 0.04&0.04&0.02&0&0.84&0.11\\ 0.01&0.03&0.7&0.01&0.02&0.2\\ 0&0.03&0.08&0.01&0.03&0.55\end{array}\right).\]
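To illustrate how such classical sequences can be generated, the sketch below samples from the hidden Markov process defined by \(\mathbf{T}\) and \(\mathbf{C}\). Since the columns of both matrices sum to one, we read \(T_{ij}\) as the probability of moving to state \(i\) from state \(j\), and \(C_{ys}\) as the probability of emitting symbol \(y\) from state \(s\); the uniform initial state and the reading of the \(40\times 3000\) data set as 40 sequences of length 3000 are our assumptions.

```python
import numpy as np

# Sampling observation sequences from the classical HMM defined by T and C above.
# Both matrices are column-stochastic: column j of T is the distribution of the
# next state given state j, and column s of C the emission distribution in state s.

T = np.array([[0.80, 0.01, 0.00, 0.10, 0.30, 0.00],
              [0.02, 0.02, 0.10, 0.15, 0.05, 0.00],
              [0.08, 0.03, 0.10, 0.40, 0.05, 0.50],
              [0.05, 0.04, 0.50, 0.35, 0.00, 0.50],
              [0.03, 0.50, 0.03, 0.00, 0.60, 0.00],
              [0.02, 0.40, 0.27, 0.00, 0.00, 0.00]])
C = np.array([[0.20, 0.00, 0.05, 0.95, 0.01, 0.05],
              [0.70, 0.10, 0.05, 0.01, 0.05, 0.05],
              [0.05, 0.80, 0.10, 0.02, 0.05, 0.04],
              [0.04, 0.04, 0.02, 0.00, 0.84, 0.11],
              [0.01, 0.03, 0.70, 0.01, 0.02, 0.20],
              [0.00, 0.03, 0.08, 0.01, 0.03, 0.55]])

def sample_hmm(T, C, length, rng):
    state = rng.integers(T.shape[0])              # uniform initial hidden state (assumption)
    obs = np.empty(length, dtype=int)
    for t in range(length):
        obs[t] = rng.choice(C.shape[0], p=C[:, state])   # emit symbol in current state
        state = rng.choice(T.shape[0], p=T[:, state])    # transition to the next state
    return obs

rng = np.random.default_rng(0)
data = np.stack([sample_hmm(T, C, 3000, rng) for _ in range(40)])  # shape (40, 3000)
```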
## 6 Conclusion
In summary, we present the quantum conditional master equation, which is used to study the hidden quantum Markov process for the first time. We also propose a new stochastic probabilistic graphical model, the SHQMM, based on the QCME. A new algorithm is further put forward to solve the learning problem, specifically the generation of the Kraus operators, which are constrained on the Stiefel manifold. Our experimental results demonstrate that our model is more robust than previous models, and that the DA of the model improves as the number of qubits increases. Furthermore, our results demonstrate that the SHQMM reveals a closer relationship among the hidden states of the quantum system, which are connected to the time series data. Moreover, the quantum transport
Figure 8: The training results of SHQMM for classical data under different parameters \(N\) and \(j=2k+1\). The subfigures (a), (b), (c) represent the training results for single-, two- and three-qubit quantum systems, respectively.
system can be considered as a physical realization of the SHQMM. Additionally, the SHQMM can serve as a new tool for understanding open quantum systems and solving time-series problems in machine learning.
## 7 Acknowledgements
This work is supported by the National Key R&D Program of China, Grant No.2018FYA0306703 and Chengdu Innovation and Technology Project, No.2021-YF05-02413-GX and 2021-YF09-00114-GX, the Open Fund of Advanced Cryptography and System Security Key Laboratory of Sichuan Province (Grant No. SKLACSS-202210), Sichuan Province key research and development project, No.2022YFG0315.
|
2305.11568 | A large $|η|$ approach to single field inflation | Single field models of inflation capable to produce primordial black holes
usually require a significant departure from the standard, perturbative
slow-roll regime. In fact, in many of these scenarios, the size of the
slow-roll parameter $|\eta|$ becomes larger than one during a short phase of
inflationary evolution. In order to develop an analytical control on these
systems, we explore the limit of $|\eta|$ large, and promote $1/|\eta|$ to a
small quantity to be used for perturbative expansions. Formulas simplify, and
we obtain analytic expressions for the two and three point functions of
curvature fluctuations, which share some of the features found in realistic
inflationary models generating primordial black holes. We study one-loop
corrections in this framework: we discuss criteria for absorbing ultraviolet
divergences into the available parameters, leaving log-enhanced infrared
contributions of controllable size. | Gianmassimo Tasinato | 2023-05-19T10:14:04Z | http://arxiv.org/abs/2305.11568v2 | ###### Abstract
Single field models of inflation capable of producing primordial black holes usually require a significant departure from the standard, perturbative slow-roll regime. In fact, in many of these scenarios, the size of the slow-roll parameter \(|\eta|\) becomes larger than one during a short phase of inflationary evolution. In order to develop analytical control on these systems, we explore the limit of large \(|\eta|\), and promote \(1/|\eta|\) to a small quantity to be used for perturbative expansions. Formulas simplify, and we obtain analytic expressions for the two- and three-point functions of curvature fluctuations, which share some of the features found in realistic inflationary models generating primordial black holes. We study one-loop corrections in this framework: we discuss criteria for absorbing ultraviolet divergences into the available parameters, leaving log-enhanced infrared contributions of controllable size.
**A large \(|\eta|\) approach to single field inflation**
Gianmassimo Tasinato\({}^{1,2}\)
\({}^{1}\) Dipartimento di Fisica e Astronomia, Università di Bologna, Italia
\({}^{2}\) Physics Department, Swansea University, SA28PP, United Kingdom
email: g.tasinato2208 at gmail.com
## 1 Introduction and Conclusions
Identifying the nature of dark matter is one of the most challenging open problems in cosmology [1]. A fascinating possibility is that dark matter is made of primordial black holes (PBH) [2, 3, 4, 5], forming from the collapse of high density fluctuations produced during cosmic inflation: see e.g. [6, 7, 8, 9, 10, 11, 12] for reviews. In order to produce PBH, the size of the inflationary curvature fluctuation spectrum needs to increase by around seven orders of magnitude from large to small scales. This condition cannot be achieved within a controlled slow-roll expansion in single-field inflation [13]: a departure from the standard slow-roll conditions is needed. In several single-field realizations of PBH scenarios, the size \(|\eta|\) of the second slow-roll parameter becomes larger than one during a brief phase of non-slow-roll evolution (from now on, NSR). Such a brief NSR era should last a few e-folds \(\Delta N_{\rm NSR}\) of expansion. Examples are ultra-slow-roll models [14, 15, 16], where \(\eta=-6\), and constant roll models [17, 18, 19], where \(|\eta|\) can be larger or smaller than 6, depending on the properties of the inflationary potential. In these cases, the evolution of fluctuations challenges analytical investigations, since the slow-roll expansion breaks down. Wands duality [20] can be of help in the ultra-slow-roll case, but care is still needed in connecting the slow-roll and NSR eras. Oftentimes, a numerical analysis is needed.
In this work, we consider large values for the slow-roll quantity \(|\eta|\), and use the inverse \(1/|\eta|\) as expansion parameter. A large value of \(|\eta|\) is not inconceivable to obtain, at the price of tunings, for example in constant roll systems. Here we are not interested in model building, but in investigating the consequences of a large-\(|\eta|\) limit for the dynamics of fluctuations. When working at leading order in \(1/|\eta|\), formulas simplify, and we obtain analytic expressions for the two- and three-point functions of curvature fluctuations. These analytic results can be useful for gaining insight into the properties of curvature fluctuations in PBH scenarios, as well as for understanding the physical consequences of a rapid growth of the curvature spectrum from large to small scales.
This idealized, large-\(|\eta|\) limit has some intriguing analogy with the large-\(N\) limit of \(SU(N)\) QCD, a model introduced by 't Hooft [21] in a particle physics context. With \(N\) the number of colors, the field-theory analysis can be carried out using a perturbative \(1/N\) expansion, and simplifies in the large-\(N\) limit. Real-world QCD has \(N=3\) colors only, yet the results of a \(1/N\) expansion capture various important properties of standard QCD: we refer the reader to chapter 8 of [22] for a pedagogical survey. Calling \(g\) the QCD coupling constant, and \(N\) the number of colors, 't Hooft finds it convenient to take the simultaneous limits \(g\to 0\), \(N\to\infty\), with the combination \(g^{2}N\) held fixed [21]. Analogously, in PBH forming scenarios, it is convenient to consider the limit of vanishing e-folds of NSR expansion, \(\Delta N_{\rm NSR}\to 0\), while at the same time taking \(|\eta|\to\infty\), keeping the product \((\Delta N_{\rm NSR}|\eta|)\) fixed. As we will learn, this product is associated with the growth of the spectrum from large to small scales. Keeping \((\Delta N_{\rm NSR}|\eta|)\) fixed, and expanding in \(1/|\eta|\), the formulas for the curvature fluctuation \(n\)-point functions become easier to deal with.
Having analytical control on a perturbative expansion in \(1/|\eta|\) allows us to address the issue of loop corrections, a topic that recently attracted much attention after the important papers [23, 24] appeared. As pointed out in [23], the same mechanism that causes the growth of the curvature spectrum, as needed for producing PBH, also amplifies the effects of loop corrections to the curvature power spectrum. Their size can become so large as to invalidate a perturbative loop expansion. Many solutions and new perspectives have recently been pointed out [25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36]. In the framework of a large-\(|\eta|\) expansion, we show that loop corrections can be placed under control, at least at the large scales that can affect CMB physics. We regularize loop integrals by means of ultraviolet and infrared cut-offs, and analytically compute the effects of loops in the large-\(|\eta|\) regime. The resulting ultraviolet divergences can be absorbed into physically measurable quantities corresponding to the amplitude and the large-scale tilt of the spectrum. We are left with log-enhanced infrared contributions, whose size is small at large scales.
We hope that the tool of a \(1/|\eta|\) expansion, although idealized, can lead to analytical insights allowing one to further investigate the dynamics of curvature fluctuations in PBH scenarios. It will be interesting to apply this method to related topics, such as the behaviour of higher-order \(n\)-point functions and their corresponding loop corrections in a large-\(|\eta|\) limit. Having analytic expressions for the primordial correlators can also be useful for investigating the actual process of PBH formation in the post-inflationary universe, as well as the generation of second-order gravitational waves from enhanced curvature spectra: see respectively e.g. [37] and [38] for reviews. We leave these topics to future investigations.
## 2 System under consideration
We consider single field models of inflation with canonical kinetic terms. Around a conformally flat cosmological metric, \(ds^{2}\,=\,a^{2}(\tau)\,(-{\rm d}\tau^{2}+d\vec{x}^{2})\), the quadratic action for the curvature perturbation in Fourier space reads (we set the Planck mass to unity)
\[S_{\rm quad}\,=\,\frac{1}{2}\int d\tau\,d^{3}k\,z^{2}(\tau)\left[\zeta_{k}^{\prime 2}(\tau)-k^{2}\zeta_{k}^{2}(\tau)\right]\,, \tag{2.1}\]
where the pump field \(z(\tau)\) is given by
\[z(\tau)\,=\,a(\tau)\,\sqrt{2\epsilon(\tau)}\,. \tag{2.2}\]
The definitions of Hubble and slow-roll parameters are
\[H(\tau)\,=\,\frac{a^{\prime}(\tau)}{a^{2}(\tau)}\qquad;\qquad\epsilon(\tau)\,=\,-\frac{H^{\prime}(\tau)}{a(\tau)\,H^{2}(\tau)}\qquad;\qquad\eta(\tau)\,=\,\frac{\epsilon^{\prime}(\tau)}{a(\tau)\,H(\tau)\,\epsilon(\tau)}\,. \tag{2.3}\]
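As a quick sanity check of these definitions, the snippet below verifies symbolically that the pure de Sitter scale factor \(a(\tau)=-1/(H_{0}\tau)\), adopted below, gives a constant Hubble rate and a vanishing first slow-roll parameter.

```python
import sympy as sp

# Symbolic check of the definitions (2.3) for the pure de Sitter scale factor
# a(tau) = -1/(H0 tau): constant Hubble rate and vanishing epsilon.

tau = sp.symbols('tau', negative=True)
H0 = sp.symbols('H_0', positive=True)
a = -1 / (H0 * tau)
H = sp.simplify(sp.diff(a, tau) / a**2)               # H = a'/a^2
eps = sp.simplify(-sp.diff(H, tau) / (a * H**2))      # epsilon = -H'/(a H^2)
assert H == H0 and eps == 0
```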
We assume that the first slow-roll parameter \(\epsilon(\tau)\) remains small during the entire duration of inflation, which takes place at negative conformal time \(\tau\leq\tau_{0}\,=\,0\). We also assume that the second parameter \(\eta(\tau)\) remains small for negative values of \(\tau\), apart from a brief time interval \(\tau_{1}\leq\tau\leq\tau_{2}\) during which \(\eta\) is negative and its size \(|\eta|\) becomes larger than one. (See the brief discussion in Section 1.) During this short phase, which we call the non-slow-roll (NSR) period, we cannot make a perturbative slow-roll expansion in \(|\eta|\): other methods are needed to tackle the evolution of fluctuations. In this work, we explore the possibility of using the inverse \(1/|\eta|\) as a convenient expansion parameter for analytical considerations. But before discussing the role of the \(|\eta|\) parameter, we first examine a quantity related to the duration of the NSR phase. We build a dimensionless positive parameter \(\Delta\tau\), as
\[\Delta\tau\,=\,-\frac{\tau_{2}-\tau_{1}}{\tau_{1}}\,, \tag{2.4}\]
and we require that \(\Delta\tau\ll 1\). This condition implies that the duration of the NSR phase is short with respect to the typical time-scales one encounters in treating the system, e.g. \(|\tau_{1}|\), which controls the onset of the NSR phase. A short duration of the non-slow-roll phase is demanded by the requirement of avoiding excessive stochastic effects [39, 40, 41]. Since we assume that the slow-roll parameter \(\epsilon(\tau)\) always remains small during inflation, we consider for simplicity the limit of pure de Sitter expansion, with \(a(\tau)\,=\,-1/(H_{0}\tau)\) and \(H_{0}\) constant during inflation. If the interval \(\Delta\tau\) of eq (2.4) is small, this parameter has a physical interpretation in terms of a (small) number \(\Delta N_{\rm NSR}\) of e-folds of NSR evolution:
\[\Delta N_{\rm NSR}\,=\,\ln\left(\frac{a(\tau_{2})}{a(\tau_{1})}\right)\,=\, \ln\left(\frac{\tau_{1}}{\tau_{2}}\right)\,=\,\ln\left(\frac{1}{1-\Delta\tau }\right)\,\simeq\,\Delta\tau\,, \tag{2.5}\]
where in the next-to-last equality we used the definition (2.4), and in the last equality we expanded for small \(\Delta\tau\).
In the regime of \(\Delta\tau\ll 1\) we can use the results of [42] (reviewed in the technical Appendix A): we write the solution for the mode function of the curvature perturbation \(\zeta_{\kappa}(\tau)\) in Fourier space during different epochs in the inflationary evolution. We define the pivot scale
\[k_{\star}\,=\,1/|\tau_{1}|\,, \tag{2.6}\]
corresponding to modes leaving the horizon at the onset of the NSR era. We express our formulas in terms of dimensionless momentum scales, as follows:
\[\kappa\equiv-k\tau_{1}\,=\,k/k_{\star}\,. \tag{2.7}\]
Our expressions simplify with this notation, as we can easily identify modes with \(\kappa\sim 1\) which cross the horizon at epochs corresponding to the NSR phase. For this reason, we adopt from now on the dimensionless definition (2.7) when treating momenta.
The mode function \(\zeta_{\kappa}(\tau)\) acquires its usual profile matching the Bunch-Davies vacuum at short distances:
\[\zeta_{\kappa}(\tau)\,=\,-i\,\frac{H_{0}\,(-\tau_{1})^{3/2}}{\sqrt{4\epsilon_{ 1}}\,\kappa^{3/2}}\left(1-\frac{i\tau}{\tau_{1}}\right)\,e^{i\frac{\kappa\tau} {\tau_{1}}}\qquad;\qquad\tau\leq\tau_{1} \tag{2.8}\]
for conformal times \(\tau\leq\tau_{1}\), since at early times the modes do not yet experience the NSR evolution. In the previous equation, \(H_{0}\) is the constant Hubble parameter during inflation, and
\[\epsilon_{1}\,=\,\epsilon(\tau_{1}) \tag{2.9}\]
is the value of the first slow-roll parameter at \(\tau=\tau_{1}\).
For later times \(\tau_{2}\leq\tau\leq\tau_{0}\) during inflation, instead, the profile of the mode function is modified by the effects of the NSR era. See Appendix A, where we include the behaviour of the mode function in the interval \(\tau_{1}\leq\tau\leq\tau_{2}\), which we do not need in the main text. (Suffice it to say that the mode functions, together with their first derivatives, are continuous at the transitions between slow-roll and non-slow-roll eras.) We find
\[\zeta_{k}(\tau)\,=\,-i\,\frac{H_{0}\,(-\tau_{1})^{3/2}}{\sqrt{4\epsilon_{1}} \,\kappa^{3/2}}\left[\mathcal{C}_{1}(\kappa)\,\left(1-\frac{i\tau}{\tau_{1}} \right)\,e^{i\frac{\kappa\tau}{\tau_{1}}}+\mathcal{C}_{2}(\kappa)\,\left(1+ \frac{i\tau}{\tau_{1}}\right)\,e^{-i\frac{\kappa\tau}{\tau_{1}}}\right]\;\; ;\;\;\;\tau_{2}\leq\tau\leq\tau_{0} \tag{2.10}\]
with (recall the definition of \(\Delta\tau\) in eq (2.4))
\[\mathcal{C}_{1}(\kappa) = 1-\frac{\eta}{8(1-\Delta\tau)^{2}\,\kappa^{2}}\left[1-e^{2i \kappa\Delta\tau}-2\kappa\Delta\tau\,(i-2\kappa(1-\Delta\tau))\right]\,, \tag{2.11}\] \[\mathcal{C}_{2}(\kappa) = \frac{\eta\,e^{2i(1-\Delta\tau)\kappa}}{8(1-\Delta\tau)^{2}\, \kappa^{2}}\left[1-2i\kappa-e^{2i\kappa\Delta\tau}\,(1-2i\kappa(1-\Delta\tau ))\right]\,, \tag{2.12}\]
and
\[\eta\,=\,\lim_{\tau\to\tau_{1}^{+}}\eta(\tau)\,. \tag{2.13}\]
From now on, the quantity \(\eta\) refers to the definition (2.13), i.e. the value of the time-dependent \(\eta(\tau)\) evaluated at the beginning of the NSR era. Notice that in the limit of
negligible \(\eta\to 0\), the two mode functions (2.8) and (2.10) coincide. Instead, if \(|\eta|\) is large, the scale dependence of the mode functions (2.8) and (2.10) differs considerably. This opens up the opportunity of increasing the size of the curvature spectrum at small scales, as required for primordial black hole production. In what follows, we examine this possibility.
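The coefficients (2.11)-(2.12) are straightforward to transcribe numerically. The sketch below does so, and checks that at very small scales the combination \(|\mathcal{C}_{1}+\mathcal{C}_{2}|^{2}\) approaches the asymptotic enhancement derived in the next section, eq (3.4); the parameter values follow Figure 1, and the code is purely illustrative.

```python
import numpy as np

# Numerical transcription of Eqs. (2.11)-(2.12); eta is the (negative) value of
# the second slow-roll parameter during the NSR era, dt the parameter of Eq. (2.4).

def C1(kappa, eta, dt):
    pref = eta / (8 * (1 - dt)**2 * kappa**2)
    return 1 - pref * (1 - np.exp(2j * kappa * dt)
                       - 2 * kappa * dt * (1j - 2 * kappa * (1 - dt)))

def C2(kappa, eta, dt):
    pref = eta * np.exp(2j * (1 - dt) * kappa) / (8 * (1 - dt)**2 * kappa**2)
    return pref * (1 - 2j * kappa
                   - np.exp(2j * kappa * dt) * (1 - 2j * kappa * (1 - dt)))

eta, dt = -1e4, 0.2                                      # values used in Figure 1
kappa = 1e4                                              # deep small-scale regime
Pi_inf = ((1 + (abs(eta) / 2 - 1) * dt) / (1 - dt))**2   # asymptotic value, Eq. (3.4)
ratio = abs(C1(kappa, eta, dt) + C2(kappa, eta, dt))**2
assert abs(ratio / Pi_inf - 1) < 1e-2
```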
## 3 The two-point function of curvature fluctuations
In this section we show how a suitably defined large-\(|\eta|\) limit allows us to analytically capture the scale dependence of the spectrum of curvature fluctuations. Starting from the mode functions obtained in the previous section, we quantize the system using the quadratic action (2.1) for curvature fluctuations. See e.g. [43] for a textbook discussion. We can easily compute the two-point function \(\langle\zeta_{\kappa}(\tau_{0})\,\zeta_{\kappa}^{*}(\tau_{0})\rangle\) of curvature perturbations evaluated at the end of inflation, \(\tau=\tau_{0}=0\), and the corresponding power spectrum (recall our definition (2.7) of the dimensionless scale \(\kappa\))
\[{\cal P}(\kappa)\,\equiv\,\frac{\kappa^{3}}{2\pi^{2}(-\tau_{1})^{3}}\,\langle \zeta_{\kappa}(\tau_{0})\,\zeta_{\kappa}^{*}(\tau_{0})\rangle^{\prime}\,, \tag{3.1}\]
where a prime indicates the two-point function omitting the momentum-conserving delta functions. At very large scales, \(\kappa\to 0\), one finds the usual expression
\[{\cal P}_{0}\,=\,\lim_{\kappa\to 0}{\cal P}_{\kappa}\,=\,\frac{H_{0}^{2}}{8\pi^{2 }\epsilon_{1}}\,, \tag{3.2}\]
with the scale of \({\cal P}_{0}\) of order \(10^{-9}\) to match the CMB normalization. Since large-scale modes leave the horizon much earlier than the NSR era, they are unaffected by it. It is convenient to compute the dimensionless ratio \(\Pi(\kappa)\) (see [42]) between the power spectrum (3.1) evaluated at the scale \(\kappa\) and the large-scale spectrum \({\cal P}_{0}\) of eq (3.2). We find
\[\Pi(\kappa)\,\equiv\,\frac{{\cal P}_{\kappa}}{\lim_{\kappa\to 0}{\cal P}_{ \kappa}}\,=\,|{\cal C}_{1}(\kappa)+{\cal C}_{2}(\kappa)|^{2}\,, \tag{3.3}\]
with the scale-dependent \({\cal C}_{1,2}(\kappa)\) given in eqs (2.11), (2.12). Such ratio can be considered as a dimensionless power spectrum evaluated at the end of inflation, which singles out the overall amplitude \({\cal P}_{0}\) at large scales, and encapsulates the rich scale dependence of the spectrum evolving from large to small scales. We plot \(\Pi(\kappa)\) in Figure 1, left panel, for a representative choice of parameters capable to enhance the spectrum at small scales. Physically, the scale dependence of the spectrum is due to the brief NSR phase of inflationary evolution. The NSR era is able to excite the would-be decaying mode at superhorizon scales, which starts to actively participate to the dynamics of curvature fluctuations. See e.g. [12] for a recent review. Notice that the spectrum has a pronounced dip at intermediate scales, due to a disruptive interference between the growing and decaying modes of the curvature fluctuation at super-horizon scales. The dip is followed by a steady growth (with slope \(\kappa^{4}\) as first shown in [44]) until it reaches a maximal
amplitude. See also [45] for a detailed analysis of the shape of the curvature power spectrum in PBH forming scenarios.
It is particularly interesting to evaluate \(\Pi(\kappa)\) at very small scales, \(\kappa\to\infty\), which informs us about the total growth of the spectrum. See Figure 1, left panel. Plugging into (3.3) the expressions for \({\cal C}_{1,2}\) of eqs (2.11), (2.12) and taking the small-scale limit, we find
\[\lim_{\kappa\to\infty}\Pi(\kappa) = \left(\frac{1+(|\eta|/2-1)\ \Delta\tau}{1-\Delta\tau}\right)^{2}\,, \tag{3.4}\] \[\equiv (1+\Pi_{0})^{2}\,,\]
where in the second line we introduce a constant parameter \(\Pi_{0}\) controlling the enhancement of the spectrum from large to small scales (\(\Pi_{0}=0\) means no enhancement). We would like a large enhancement of the spectrum at small scales in order to produce PBH. Since we are in a regime of small \(\Delta\tau\), as discussed in Section 2, we need to consider large values of the parameter \(|\eta|\) during the NSR period (we make the hypothesis that \(\eta\) is negative, hence the absolute value). In fact, in the limit of large \(|\eta|\) and small \(\Delta\tau\), expression (3.4) simplifies to
\[\Pi_{0}\,\simeq\,\frac{|\eta|\Delta\tau}{2}\,. \tag{3.5}\]
The combination (3.5), as well as the considerations above, motivates us to take the simultaneous limits:
\[|\eta|\gg 1\ \ \ \ \ ;\ \ \ \ \ \Delta\tau\ll 1\ \ \ \ ;\ \ \ \ \ \mbox{keep}\ \ \Pi_{0}\ \ \mbox{fixed}\,. \tag{3.6}\]
Figure 1: **Left panel:** Plot of the dimensionless power spectrum \(\Pi(\kappa)\) as defined in eq (3.3): we choose the values \(|\eta|=10^{4}\) and \(\Delta\tau=0.2\) for the free parameters. **Right panel**: the black line is the same as left panel. The dashed red line represents the spectrum \(\hat{\Pi}(\kappa)\) of eq (3.7), choosing the value \(\Pi_{0}=1250\) for the single free parameter. See the discussion after eq (3.8). Notice that the maximal values of the spectrum occur around the onset of non-slow-roll phase, for \(\kappa\sim{\cal O}(1)\).
This is reminiscent of the 't Hooft limit one encounters in particle physics [21], as explained in Section 1. In fact, combining \(|\eta|\) and \(\Delta\tau\) into the fixed quantity \(\Pi_{0}\) allows us to consistently perform expansions in the small parameter \(1/|\eta|\), while maintaining control over the effects of the NSR era through the quantity \(\Pi_{0}\). In most PBH scenarios we aim at a total enhancement of order \(10^{6}-10^{7}\) in eq (3.4). Then the quantity \(\Pi_{0}\) is itself large, of order \(10^{3}-10^{4}\).
Adopting the limits of eq (3.6), the expression for the ratio (3.3) simplifies. We substitute \(\Delta\tau\,=\,2\Pi_{0}/|\eta|\) in eq (3.3), and expand for large values of \(|\eta|\) keeping \(\Pi_{0}\) fixed. At leading order in this expansion, we obtain
\[\hat{\Pi}(\kappa)\,=\,1-4\kappa\,\Pi_{0}\,\cos\kappa\,j_{1}(\kappa)+4\kappa^{ 2}\,\Pi_{0}^{2}\,j_{1}^{2}(\kappa)\,, \tag{3.7}\]
where a hat indicates that we only include the leading order in an expansion in \(1/|\eta|\), following the conditions of eq (3.6). The spherical Bessel function \(j_{1}(\kappa)\) is given by
\[j_{1}(\kappa)\,=\,\frac{\sin\kappa}{\kappa^{2}}-\frac{\cos\kappa}{\kappa} \qquad;\qquad j_{1}(\kappa\ll 1)\,=\,\frac{\kappa}{3}-\frac{\kappa^{3}}{30}+ \mathcal{O}(\kappa^{5})\,. \tag{3.8}\]
We represent formula (3.7) in Figure 1, right panel, in comparison with the result obtained from the more accurate formula (3.3). The latter, plotted in the left panel of the figure, makes use of a small-\(\Delta\tau\) limit only, without the further expansion in \(1/|\eta|\) of eq (3.6). The resulting profile of the spectrum is very similar in both cases, at least in the regime \(\kappa\leq 5\), indicating that the limits of eq (3.6) give trustworthy results for the spectrum, at least at relatively large scales. It is not difficult to use eq (3.7) to analytically determine the position of the dip, finding agreement with other works in the literature [42].
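As a further consistency check of eq (3.7), a short symbolic computation confirms that its small-\(\kappa\) expansion reproduces the quadratic behaviour quoted in eq (3.9) below.

```python
import sympy as sp

# Symbolic check: the small-kappa expansion of the large-|eta| spectrum (3.7)
# reproduces 1 - (4 Pi0/3) kappa^2 + O(kappa^4), i.e. Eq. (3.9).

kappa, Pi0 = sp.symbols('kappa Pi_0', positive=True)
j1 = sp.sin(kappa) / kappa**2 - sp.cos(kappa) / kappa
Pi_hat = 1 - 4 * kappa * Pi0 * sp.cos(kappa) * j1 + 4 * kappa**2 * Pi0**2 * j1**2

expansion = sp.series(Pi_hat, kappa, 0, 4).removeO()
assert sp.expand(expansion - (1 - sp.Rational(4, 3) * Pi0 * kappa**2)) == 0
```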
It is remarkable to obtain such a simple formula (3.7) for the scale dependence of the curvature power spectrum, whose momentum profile shares features with more realistic PBH models discussed in the literature. This formula depends on a single free parameter \(\Pi_{0}\). Besides parameterizing the total enhancement of the spectrum, this quantity also governs the scale dependence of the spectrum at large scales. Expanding (3.7) up to \(\kappa^{2}\):
\[\hat{\Pi}(\kappa)\,=\,1-\frac{4\,\Pi_{0}}{3}\kappa^{2}+\mathcal{O}(\kappa^{4} )\,, \tag{3.9}\]
making manifest the role of \(\Pi_{0}\) in controlling the deviations from a flat spectrum. We can be more precise and analytically compute the spectral index associated with eq (3.7):
\[\hat{n}_{s}(\kappa)-1 \equiv \frac{d\ln\hat{\Pi}(\kappa)}{d\ln\kappa}\,, \tag{3.10}\] \[= \frac{2\,\kappa\,\Pi_{0}\left[(1-2\kappa^{2})\sin\left(2\kappa \right)-2\kappa\cos\left(2\kappa\right)\right]}{\kappa^{2}+4\kappa\Pi_{0}\cos \kappa\left(\kappa\cos\kappa-\sin\kappa\right)+4\Pi_{0}^{2}\left(\kappa\cos \kappa-\sin\kappa\right)^{2}}\] (3.11) \[-\frac{\Pi_{0}^{2}\left[4-(4-8\kappa^{2})\cos\left(2\kappa\right) +4\kappa(\kappa^{2}-2)\sin\left(2\kappa\right)\right]}{\kappa^{2}+4\kappa\Pi _{0}\cos\kappa\left(\kappa\cos\kappa-\sin\kappa\right)+4\Pi_{0}^{2}\left( \kappa\cos\kappa-\sin\kappa\right)^{2}}\,.\]
The rich momentum dependence of the spectral index in eq (3.11) reflects the scale dependence of the spectrum in Fig 1. We represent it in Fig 2 for a range of momenta going from the dip position to small scales. Comparing Figures 1 and 2, we notice that, after the dip position, the maximal growth slope of the spectrum is \(n_{s}-1\leq 4\). This agrees with the more sophisticated analysis of [44], based on complete expressions for the curvature power spectrum, outside the large-\(|\eta|\) limit we consider here.
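The analytic expression (3.11) can also be cross-checked numerically: a finite-difference estimate of \(d\ln\hat{\Pi}/d\ln\kappa\) must reduce, at small \(\kappa\), to the slope implied by eq (3.9). A minimal sketch:

```python
import numpy as np

# Numerical cross-check of the spectral index of Eq. (3.10): at small kappa,
# Eq. (3.9) implies n_s - 1 ~ -(8 Pi0/3) kappa^2 / (1 - 4 Pi0 kappa^2 / 3).

def j1(k):
    return np.sin(k) / k**2 - np.cos(k) / k

def Pi_hat(k, Pi0):
    return 1 - 4 * k * Pi0 * np.cos(k) * j1(k) + 4 * k**2 * Pi0**2 * j1(k)**2

def ns_minus_one(k, Pi0, h=1e-6):
    # central finite difference of d ln Pi_hat / d ln kappa
    return k * (np.log(Pi_hat(k + h, Pi0)) - np.log(Pi_hat(k - h, Pi0))) / (2 * h)

Pi0, k = 1250.0, 1e-3
slope = -(8 * Pi0 / 3) * k**2 / (1 - 4 * Pi0 * k**2 / 3)
assert abs(ns_minus_one(k, Pi0) - slope) < 1e-4
```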
## 4 The three-point function of curvature fluctuations
We now apply the previous set-up to the study of the three-point function of curvature fluctuations, evaluated at the end of inflation. This quantity controls the non-Gaussianity of curvature fluctuations in PBH scenarios. We assume that the slow-roll parameter \(\epsilon(\tau)\) always remains small, while \(\eta(\tau)\) experiences sharp transitions between the slow-roll and non-slow-roll phases, at \(\tau=\tau_{1}\) and \(\tau=\tau_{2}\). The \(n\)-point functions of \(\zeta\) can be computed using the in-in formalism [46, 47, 48]. Let \(\mathcal{O}(\tau)\) be the operator one wishes to determine (for us, the three-point function \(\langle\zeta_{\kappa_{1}}(\tau_{0})\zeta_{\kappa_{2}}(\tau_{0})\zeta_{\kappa_{3}}(\tau_{0})\rangle\)), and \(\mathcal{H}_{\rm int}\) the interaction Hamiltonian. We evolve the operator from the initial \(\big{|}{\rm in}\big{>}\) vacuum up to the time at which \(\mathcal{O}(\tau)\) is evaluated, and then back to the \(\big{|}{\rm in}\big{>}\) vacuum again. In formulas: \(\langle{\rm in}\Big{|}\bar{T}e^{-i\int\mathcal{H}_{\rm int}(\tau^{\prime})d\tau^{\prime}}\,\mathcal{O}(\tau)\,Te^{i\int\mathcal{H}_{\rm int}(\tau^{\prime})d\tau^{\prime}}\Big{|}{\rm in}\rangle\). In our case, since we focus on sudden transitions, there is a single dominant contribution to the interaction Hamiltonian [23, 24], which can be extracted from the third-order action of perturbations in single field inflation [46]:
\[\mathcal{H}_{\rm int}\,=\,-\frac{1}{2}\int d^{3}x\,a^{2}(\tau)\epsilon(\tau) \,\eta^{\prime}(\tau)\,\zeta^{2}(\tau,\vec{x})\,\zeta^{\prime}(\tau,\vec{x})\,. \tag{4.1}\]
We assume that \(|\eta|\) is negligible during slow-roll evolution (\(\tau<\tau_{1}\) and \(\tau_{2}<\tau<\tau_{0}\)) while it is large during the intermediate NSR phase, \(\tau_{1}\leq\tau\leq\tau_{2}\). We adopt a sharp-transition Ansatz [24] for the time-derivative of \(\eta(\tau)\)
\[\eta^{\prime}(\tau)\,=\,\Delta\eta\,[-\delta(\tau-\tau_{1})+\delta(\tau-\tau_ {2})]. \tag{4.2}\]
Figure 2: The spectral index as given in eq (3.11), choosing \(\Pi_{0}=1250\).
where the times \(\tau_{1,2}\) correspond to the onset and end of the NSR phase during inflation.
Soon we will discuss a criterion for selecting the constant \(\Delta\eta\). But first, we apply the aforementioned in-in approach with the interaction Hamiltonian (4.1) and eq (4.2). The curvature three-point function, evaluated at the end of inflation \(\tau_{0}(=0)\), reads [24]
\[\langle\zeta_{\kappa_{1}}(\tau_{0})\zeta_{\kappa_{2}}(\tau_{0}) \zeta_{\kappa_{3}}(\tau_{0})\rangle^{\prime}=\] \[-2\Delta\eta\Big{(}\epsilon(\tau_{2})a^{2}(\tau_{2})\,\text{Im} \left[\left(\zeta_{\kappa_{1}}(\tau_{0})\zeta_{\kappa_{1}}^{*}(\tau_{2})\right) \left(\zeta_{\kappa_{2}}(\tau_{0})\zeta_{\kappa_{2}}^{*}(\tau_{2})\right) \left(\zeta_{\kappa_{3}}(\tau_{0})\partial_{\tau_{2}}\zeta_{\kappa_{3}}^{*}( \tau_{2})\right)\right]-(\tau_{2}\to\tau_{1})\Big{)}\] \[+\text{perms}\,. \tag{4.3}\]
where we recall that the prime means that we omit the momentum-conserving delta functions. In the squeezed limit, eq (4.3) reduces to
\[\lim_{\kappa_{1}\to 0;\,\kappa_{2}\simeq\kappa_{3}}\langle\zeta_{ \kappa_{1}}(\tau_{0})\zeta_{\kappa_{2}}(\tau_{0})\zeta_{\kappa_{3}}(\tau_{0}) \rangle^{\prime}\] \[=\,-\,4\Delta\eta\,\epsilon(\tau_{2})a^{2}(\tau_{2})\left|\zeta_ {\kappa_{1}}(\tau_{0})\right|^{2}\left|\zeta_{\kappa_{2}}(\tau_{0})\right|^{2}\] \[\times\left\{\text{Im}\left[\frac{\zeta_{\kappa_{2}}^{2}(\tau_{0 })}{|\zeta_{\kappa_{2}}(\tau_{0})|^{2}}\zeta_{\kappa_{2}}^{*}(\tau_{2})(\zeta_ {\kappa_{2}}^{\prime}(\tau_{2}))^{*}\right]-\frac{\epsilon(\tau_{1})a^{2}(\tau _{1})}{\epsilon(\tau_{2})a^{2}(\tau_{2})}\text{Im}\left[\frac{\zeta_{\kappa_ {2}}^{2}(\tau_{0})}{|\zeta_{\kappa_{2}}(\tau_{0})|^{2}}\zeta_{\kappa_{2}}^{*}( \tau_{1})(\zeta_{\kappa_{2}}^{\prime}(\tau_{1}))^{*}\right]\right\}\,. \tag{4.4}\]
The squeezed limit selects a mode \(\kappa_{1}\) of very small momentum, which leaves the horizon much earlier than the onset of the non-slow-roll (NSR) phase. When the modes \(\kappa_{2}\) are also chosen at large scales, far from the NSR epoch, we expect the standard Maldacena consistency relation [46] to hold. Namely
\[\lim_{\kappa_{1}\to 0;\,\kappa_{2}\simeq\kappa_{3}}\langle\zeta_{\kappa_{1}}( \tau_{0})\zeta_{\kappa_{2}}(\tau_{0})\zeta_{\kappa_{3}}(\tau_{0})\rangle^{ \prime}\,=\,-(n_{s}(\kappa_{2})-1)\left|\zeta_{\kappa_{1}}(\tau_{0})\right|^{2 }\left|\zeta_{\kappa_{2}}(\tau_{0})\right|^{2}. \tag{4.5}\]
We substitute our expressions for the mode functions, eq (2.10), and take the small-\(\kappa_{2}\) limit. Using the results of Section 3 for the spectral index, the two expressions (4.4) and (4.5) match once we select a specific value for the parameter \(\Delta\eta\) entering the Ansatz (4.2). Neglecting contributions that vanish in the large-\(|\eta|\) limit, we find the requirement
\[\Delta\eta\,=\,\frac{|\eta|}{(1+\Pi_{0})}+\frac{\Pi_{0}(12+34\Pi_{0}+25\Pi_{0} ^{2})}{2(1+\Pi_{0})^{2}(1+2\Pi_{0})}\,, \tag{4.6}\]
as well as the expected condition 1

\[\epsilon(\tau_{1})\,a^{2}(\tau_{1})\,=\,\epsilon(\tau_{2})\,a^{2}(\tau_{2})\left(1+2\Pi_{0}\right)\,. \tag{4.7}\]
Footnote 1: In fact, we are working in a regime of large \(|\eta|\) and very small \(\Delta\tau\), as dictated by relations (3.6). Hence, we obtain
\[\epsilon(\tau_{1})\,a^{2}(\tau_{1}) =\,\epsilon(\tau_{2})\,a^{2}(\tau_{2})\left(1+\tau_{1}\,\frac{ \epsilon^{\prime}(\tau_{1})}{\epsilon(\tau_{1})}\,\Delta\tau\right)\left(1+2 \tau_{1}\,\frac{a^{\prime}(\tau_{1})}{a(\tau_{1})}\,\Delta\tau\right)\,,\] \[=\,\epsilon(\tau_{2})\,a^{2}(\tau_{2})\left(1+\tau_{1}\,a(\tau_{1 })H(\tau_{1})\,(\eta+2)\,\,\Delta\tau\right)\,,\] \[=\,\epsilon(\tau_{2})\,a^{2}(\tau_{2})\left(1-(\eta+2)\,\, \Delta\tau\right)\,,\] \[=\,\epsilon(\tau_{2})\,a^{2}(\tau_{2})\left(1+2\Pi_{0}\right)\,,\]
in agreement with condition (4.7).
Interestingly, although the constant \(\Delta\eta\) has been fixed to satisfy the Maldacena condition in the small-\(\kappa_{2}\) limit, the resulting expression (4.4) for the squeezed three-point function matches well with the single-field Maldacena consistency relation also at larger values of \(\kappa_{2}\): see Fig 3, left panel, in agreement with [49, 50]. The resulting squeezed non-Gaussianity is strongly scale-dependent [51, 52].
The squeezed limit of the three-point function, as in eq (4.4), is not the only interesting configuration. From the complete expression for the three-point function, eq (4.3), we can also consider other shapes. For example, let us consider the equilateral limit \(\kappa_{i}=\kappa\) for \(i=1,2,3\). In Fig 3, right panel, we represent the three-point function as a function of the dimensionless scale \(\kappa\), divided by the square of the large-scale power spectrum, eq (3.2) (we further divide by \(\Pi_{0}^{3}\)). Namely,
\[\frac{f_{\rm eq}(\kappa)}{\Pi_{0}^{3}}\equiv\frac{\langle\zeta_{\kappa}(\tau _{0})\zeta_{\kappa}(\tau_{0})\zeta_{\kappa}(\tau_{0})\rangle^{\prime}}{\Pi_{ 0}^{3}\,{\cal P}_{0}^{2}}\,. \tag{4.8}\]
This quantity aims to capture the scale dependence of the equilateral non-Gaussianity [53], analogously to the scale-dependent part of the power spectrum, eq (3.3). Remarkably, the profile of the scale dependence of the equilateral shape (up to an overall sign change) is similar to the profile of the scale-dependent power spectrum: compare Fig 1 with Fig 3, right panel. It would be interesting to find a physical reason for this result.
The non-Gaussianity of curvature fluctuations in PBH scenarios is an important observable with several phenomenological ramifications for PBH formation [54, 55, 56, 57, 58]. We refer the reader to [59] for a recent comprehensive analysis, and further references therein.
Figure 3: **Left panel:** Check of the Maldacena consistency relation for the squeezed limit of the three-point function. Black line: the quantity \(1-n_{s}\). Red dashed line: the squeezed limit of the three-point function of eq (4.4) (we omit the factors \(|\zeta_{\kappa_{1}}(\tau_{0})|^{2}\,|\zeta_{\kappa_{2}}(\tau_{0})|^{2}\)). We use the mode functions in eq (2.10), and choose the values \(|\eta|=10^{4.1}\), \(\Delta\tau=0.002\). **Right panel:** Plot of the scale dependence of the equilateral three-point function, the quantity \(-f_{\rm eq}/\Pi_{0}^{3}\), as defined in the main text, eq (4.8). The profile is remarkably similar to the power spectrum of Fig 1.
## 5 Loop corrections
In this section we apply the previous tools to the study of loop contributions to the inflationary power spectrum [60, 61, 62, 63, 64]. In developing our arguments, we closely follow the clear technical discussion of [24], but we make use of our large-\(|\eta|\) expansion, and of the corresponding solutions for the mode functions discussed in Section 2. We are especially interested in examining the physical implications of large loop corrections in our approach, and the role of the scale dependence of the spectrum. Moreover, we discuss a proposal to absorb quadratic ultraviolet divergences into the available bare parameters in a large-\(|\eta|\) limit, at least at the large scales relevant for CMB physics. We are left with log-enhanced infrared effects, whose size is small at large scales. This is an important step towards clarifying the relation between loops and physically measurable quantities.
The interaction Hamiltonian that we consider is given in eq (4.1); as in Section 4, we focus on a sharp transition between the slow-roll regimes and an intermediate non-slow-roll regime for \(\tau_{1}\leq\tau\leq\tau_{2}\). We consider for definiteness the two-point function of curvature fluctuations in momentum space, evaluated at the scale \(p\) (dimensionless in the sense that the momentum is multiplied by \(-\tau_{1}\), as in eq (2.7)). The corresponding one-loop contributions can be found utilising the in-in formalism. Following [24], the loop corrections are conveniently decomposed as
\[\langle\zeta_{p}(\tau_{0})\zeta_{p}^{*}(\tau_{0})\rangle_{\rm loop}\,=\, \langle\zeta_{p}(\tau_{0})\zeta_{p}^{*}(\tau_{0})\rangle_{(1,1)}+2\,{\rm Re} \left[\langle\zeta_{p}(\tau_{0})\zeta_{p}^{*}(\tau_{0})\rangle_{(2,0)}\right]\,, \tag{5.1}\]
where
\[\langle\zeta_{p}(\tau_{0})\zeta_{p}^{*}(\tau_{0})\rangle_{(1,1)} = \frac{1}{4}\,\int_{-\infty}^{\tau_{0}}\,d\tau_{a}\,a^{2}(\tau_{a} )\,\epsilon(\tau_{a})\,\eta^{\prime}(\tau_{a})\int_{-\infty}^{\tau_{0}}\,d \tau_{b}\,a^{2}(\tau_{b})\,\epsilon(\tau_{b})\,\eta^{\prime}(\tau_{b}) \tag{5.2}\] \[\times\int\Pi_{i=1}^{6}\,\frac{d^{3}k_{i}}{(2\pi)^{3}}\,\delta^{ 3}(\vec{k}_{1}+\vec{k}_{2}+\vec{k}_{3})\,\delta^{3}(\vec{k}_{4}+\vec{k}_{5}+ \vec{k}_{6})\] \[\times\langle\zeta_{\vec{k}_{1}}^{\prime}(\tau_{a})\zeta_{\vec{k} _{2}}(\tau_{a})\zeta_{\vec{k}_{3}}(\tau_{a})\zeta_{\vec{p}}(\tau_{0})\zeta_{- \vec{p}}(\tau_{0})\zeta_{\vec{k}_{4}}^{\prime}(\tau_{b})\zeta_{\vec{k}_{5}}( \tau_{b})\zeta_{\vec{k}_{6}}(\tau_{b})\rangle\,,\]
and
\[\langle\zeta_{p}(\tau_{0})\zeta_{p}^{*}(\tau_{0})\rangle_{(2,0)} = -\frac{1}{4}\,\int_{-\infty}^{\tau_{0}}\,d\tau_{a}\,a^{2}(\tau_{ a})\,\epsilon(\tau_{a})\,\eta^{\prime}(\tau_{a})\int_{-\infty}^{\tau_{0}}\,d \tau_{b}\,a^{2}(\tau_{b})\,\epsilon(\tau_{b})\,\eta^{\prime}(\tau_{b}) \tag{5.3}\] \[\times\int\Pi_{i=1}^{6}\,\frac{d^{3}k_{i}}{(2\pi)^{3}}\,\delta^{ 3}(\vec{k}_{1}+\vec{k}_{2}+\vec{k}_{3})\,\delta^{3}(\vec{k}_{4}+\vec{k}_{5}+ \vec{k}_{6})\] \[\times\langle\zeta_{\vec{p}}(\tau_{0})\zeta_{-\vec{p}}(\tau_{0}) \zeta_{\vec{k}_{1}}^{\prime}(\tau_{a})\zeta_{\vec{k}_{2}}(\tau_{a})\zeta_{\vec {k}_{3}}(\tau_{a})\zeta_{\vec{k}_{4}}^{\prime}(\tau_{b})\zeta_{\vec{k}_{5}}( \tau_{b})\zeta_{\vec{k}_{6}}(\tau_{b})\rangle\,.\]
From now on, to simplify the calculations, we focus on a large-scale regime where the size of the external momentum \(p\) is much smaller than the momenta \(k_{i}\) over which we integrate [23, 24]. This allows us to simplify formulas by substituting \(k-p\simeq k\), and permits us to obtain analytic results. We will discuss in due course the limitations we should impose on \(p\) to satisfy this condition, and their physical implications.
Substituting our Ansatz (4.2) for a sharp transition at the times \(\tau_{1}\) and \(\tau_{2}\) between slow-roll and non-slow-roll phases, the result acquires the following structure [24]
\[\langle\zeta_{p}(\tau_{0})\zeta_{p}^{*}(\tau_{0})\rangle_{\rm loop} = \left(2\epsilon(\tau_{2})a^{2}(\tau_{2})\right)^{2}\Delta\eta^{2} \,|\zeta_{\vec{p}}(\tau_{0})|^{2} \tag{5.4}\] \[\times \int\frac{d^{3}k}{(2\pi)^{3}}\Big{[}|\zeta_{\vec{k}}(\tau_{2})|^{ 2}\,{\rm Im}(\zeta_{p}(\tau_{2})\zeta_{p}^{\prime*}(\tau_{2}))\,{\rm Im}(\zeta _{k}(\tau_{2})\zeta_{k}^{\prime*}(\tau_{2}))\] \[-4\frac{\epsilon(\tau_{1})a^{2}(\tau_{1})}{\epsilon(\tau_{2})a^{2 }(\tau_{2})}\,{\rm Im}(\zeta_{p}(\tau_{0})\zeta_{p}^{*}(\tau_{2}))\,{\rm Im}( \zeta_{k}^{\prime}(\tau_{2})\zeta_{k}(\tau_{2})\zeta_{k}^{*}(\tau_{1})\zeta_{ k}^{\prime*}(\tau_{1}))\] \[-2\frac{\epsilon(\tau_{1})a^{2}(\tau_{1})}{\epsilon(\tau_{2})a^{2 }(\tau_{2})}\,{\rm Im}(\zeta_{p}(\tau_{2})\zeta_{p}^{\prime*}(\tau_{2}))\,{\rm Im }(\zeta_{k}^{2}(\tau_{2})\zeta_{k}^{*}(\tau_{1})\zeta_{k}^{\prime*}(\tau_{1}))\] \[+\frac{\epsilon^{2}(\tau_{1})a^{4}(\tau_{1})}{\epsilon^{2}(\tau_ {2})a^{4}(\tau_{2})}\,|\zeta_{\vec{k}}(\tau_{1})|^{2}\,{\rm Im}(\zeta_{p}(\tau _{1})\zeta_{p}^{\prime*}(\tau_{1}))\,{\rm Im}(\zeta_{k}(\tau_{1})\zeta_{k}^{ \prime*}(\tau_{1}))\Big{]}\,.\]
Since the integrand functions are rotationally invariant, the three-dimensional integrals over internal momenta can be decomposed into integrals over the real line as
\[\int\frac{d^{3}k}{(2\pi)^{3}}\left(\dots\right) = \int_{\Lambda_{\rm IR}/|\eta|^{1/2}}^{\mu/|\eta|^{1/2}}\frac{k^{ 2}\,dk}{2\pi^{2}}\left(\dots\right)+\int_{\mu/|\eta|^{1/2}}^{\Lambda_{\rm UV}/ |\eta|^{1/2}}\frac{k^{2}\,dk}{2\pi^{2}}\left(\dots\right) \tag{5.5}\]
with \(\Lambda_{\rm IR}\) and \(\Lambda_{\rm UV}\) corresponding to a very small infrared (IR) and a very large ultraviolet (UV) cut-off 2. They are dimensionless quantities, obtained by multiplying physical momentum scales by \(|\tau_{1}|\), as in eq (2.7). For convenience, as a technical device, we rescale the extrema of integration by \(1/|\eta|^{1/2}\), to simplify our results in the large-\(|\eta|\) limit. The intermediate dimensionless scale \(\mu\) is introduced in order to physically separate the loop corrections into an IR part (the first integral in eq (5.5)) and a UV part (the second integral). We can think of \(\mu\sim 1\) as the scale at which NSR effects take place. This separation will be essential for our arguments.
Footnote 2: From now on, our approach differs from [23], where \(\Lambda_{\rm IR,UV}\) are the scales of modes leaving the horizon at the start and end of the NSR era. In our case, since the NSR epoch is very short, we do not make this identification.
We decompose the resulting power spectrum at the end of inflation, \(\tau_{0}=0\), into a tree-level and a loop part
\[{\cal P}_{\rm tot}(p) = \frac{p^{3}}{2\pi^{2}(-\tau_{1})^{3}}\,\langle\zeta_{p}(\tau_{0})\zeta_{p}^{*}(\tau_{0})\rangle^{\prime}\,=\,{\cal P}_{0}\,\hat{\Pi}(p)\left[1+L_{\rm loop}^{\rm IR}+L_{\rm loop}^{\rm UV}\right]\,, \tag{5.6}\]
with \({\cal P}_{0}\) the amplitude of the large-scale tree-level power spectrum as in eq (3.2), and \(\hat{\Pi}(p)\) the momentum-dependent function of eq (3.7), controlling the scale-dependent ratio between small-scale and large-scale spectra. Eq (5.6) contains the quantity \(L_{\rm loop}\), the loop contribution (5.4), with the momentum integrals decomposed as in eq (5.5). We collect as an overall factor the momentum-dependent quantity \({\cal P}_{0}\,\hat{\Pi}(p)\).
We substitute into the general formulas (5.5) our mode functions (2.10). We analytically perform both the IR and the UV integrals, which are much simplified in the large-\(|\eta|\) regime of eq (3.6), keeping \(\Pi_{0}\) fixed. At leading order in \(1/|\eta|\), the dominant contribution to the IR piece of the loop correction reads
\[L_{\rm loop}^{\rm IR} = -p^{2}\,\frac{{\cal P}_{0}}{3}\,\frac{\Pi_{0}^{4}}{(1+\Pi_{0})^{2}( 1+2\Pi_{0})}\,\ln\left(\frac{\mu}{\Lambda_{\rm IR}}\right), \tag{5.7}\]
where we include only the log-enhanced part. We neglect power-law quadratic pieces depending on the small quantity \(\Lambda_{\rm IR}\), and on \(\mu\), which, being of order one, is suppressed with respect to the logarithm in eq (5.7) in the case of a large ratio \(\mu/\Lambda_{\rm IR}\). This IR contribution can be interpreted as a secular effect caused by modes crossing the horizon from the onset of inflation until around the epoch of NSR, controlled respectively by the scales \(\Lambda_{\rm IR}\) and \(\mu\). IR contributions are typically characterized by large logarithms, whose effects might contribute to observable quantities if inflation lasts long. In our case, we can estimate \(\ln\left(\mu/\Lambda_{\rm IR}\right)\sim\ln\left[a(\tau_{1})/a(\tau_{\rm start})\right]\). Hence, we can expect the logarithm to be of order, say, \(10^{2}\). See e.g. the clear discussion in [61].
The dominant contribution to the UV integral is a quadratic divergence in \(\Lambda_{\rm UV}\). We write the result including only the contribution quadratic in the UV cut-off
\[L_{\rm loop}^{\rm UV} = -\frac{{\cal P}_{0}\,\Pi_{0}\,\Lambda_{\rm UV}^{2}}{(1+\Pi_{0})} \left(\frac{5}{6}+\frac{3j_{1}(p)-p}{3\,p}\right)\,, \tag{5.8}\]
and we neglect subleading contributions. The spherical Bessel function \(j_{1}(p)\) is defined in eq (3.8). The UV part contains the effects of small-scale modes, which remain in their vacuum state in the subhorizon regime during the first phase of inflation, until the short NSR phase occurs. These modes should not participate in the dynamics of the NSR era during inflation, and the associated UV divergences are expected to be absorbed into appropriate, physically measurable quantities (see e.g. [65] for a detailed analysis within slow-roll models).
We adopt this viewpoint, and assume that the contributions of \(L_{\rm loop}^{\rm UV}\) are absorbed into the available parameters by means of an appropriate renormalization procedure. We discuss in Appendix B a way to do so. We are left with the log-enhanced loop contributions of eq (5.7). All our results are derived under the approximation stated after eq (5.3): to analytically compute the integrals, we make the hypothesis that the momentum \(p\) at which the two-point function (5.6) is computed is well _smaller_ than the momentum scales over which we integrate, i.e. than the lower extremum of the integral
\[p^{2}\,\leq\,\frac{\Lambda_{\rm IR}^{2}}{|\eta|}\,. \tag{5.9}\]
Since we are working at leading order in a \(1/|\eta|\) expansion, the previous condition tells us that we should only focus on the very first terms in a momentum expansion of our formulas. Using the expression (3.7), we consider eq (5.6) up to second order in an expansion in the momentum \(p\), including the IR loop contributions:
\[{\cal P}_{\rm tot}(p) = {\cal P}_{0}-\frac{4{\cal P}_{0}\,\Pi_{0}}{3}\left[1+\,\frac{{ \cal P}_{0}}{4}\,\frac{\Pi_{0}^{3}}{(1+\Pi_{0})^{2}(1+2\Pi_{0})}\,\ln\left( \frac{\mu}{\Lambda_{\rm IR}}\right)\right]\,p^{2}+{\cal O}(p^{4})\,. \tag{5.10}\]
Hence, the log-enhanced IR loop only contributes to the quadratic term in the expansion. Its size is small, being suppressed by a factor \(\mathcal{P}_{0}\simeq 10^{-9}\) with respect to the tree-level term, so even a large logarithm is unable to produce large effects. The coefficients depending on \(\Pi_{0}\) give order-one effects in the limit of large \(\Pi_{0}\).
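To make this smallness explicit, one can evaluate the bracketed IR correction of eq (5.10) for representative values; the numbers \(\mathcal{P}_{0}\simeq 2.1\times 10^{-9}\), \(\Pi_{0}=1250\) and \(\ln(\mu/\Lambda_{\rm IR})\simeq 10^{2}\) are our illustrative assumptions:

```python
# Rough size estimate of the log-enhanced IR loop term in Eq. (5.10),
# for representative values assumed here: P0 ~ 2.1e-9, Pi0 = 1250, log ~ 1e2.
P0, Pi0, log_ratio = 2.1e-9, 1250.0, 1e2
rel_correction = (P0 / 4) * Pi0**3 / ((1 + Pi0)**2 * (1 + 2 * Pi0)) * log_ratio
print(rel_correction)   # ~ 2.6e-8: negligible relative to the tree-level p^2 term
```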
The main difference with respect to [23], which focussed on \(|\eta|=6\), is that the physical loop effects are suppressed by a factor \(p^{2}\), rendering them small at large CMB scales \(p\ll 1\). It would be interesting to pursue our large-\(|\eta|\) program further, estimating the loop corrections also at smaller scales with \(p\) of order one, and understanding whether the associated divergences can be absorbed into measurable quantities. A computation of loop effects valid for larger \(p\) was recently carried out in the interesting work [36] using a numerical procedure, in a framework similar to that of [23]. Probably, also in the large-\(|\eta|\) set-up we discuss here, a proper computation of the associated loop integrals needs to be handled by means of a numerical approach. It would then be important to investigate to what extent divergences renormalize the tree-level spectrum, for example using more sophisticated techniques such as the ones pursued by [32] in a related context. Moreover, it would be interesting to study higher loops, and the consequences of higher-order interactions, within a complete framework for handling loop divergences in PBH scenarios in a physically controllable set-up based on a large-\(|\eta|\) expansion. We leave these investigations for future work.
### Acknowledgments
It is a pleasure to thank Maria Mylova, Ogan Ozsoy, and Ivonne Zavala for useful input. GT is partially funded by the STFC grant ST/T000813/1. For the purpose of open access, the author has applied a Creative Commons Attribution licence to any Author Accepted Manuscript version arising.
## Appendix A Curvature perturbations and the NSR regime
In this technical Appendix, we briefly review the results developed in [42] to determine analytic solutions for inflationary mode functions during non-slow-roll regimes, referring the reader to [42] for more details. Starting from the quadratic action (2.1) for curvature perturbations, it is convenient to introduce the Mukhanov-Sasaki variable \(v_{k}(\tau)\,=\,z(\tau)\,\zeta_{k}(\tau)\), satisfying the equation
\[v_{k}^{\prime\prime}(\tau)+\left[k^{2}-\frac{z^{\prime\prime}( \tau)}{z(\tau)}\right]\,v_{k}(\tau)\,=\,0\,,\] (A.1)
in momentum space. In our case, the inflationary evolution for \(\tau\leq\tau_{0}\) undergoes different phases. We have an initial slow-roll phase for \(\tau\leq\tau_{1}\), where both slow-roll parameters \(\epsilon(\tau)\) and \(\eta(\tau)\) are very small. We can approximate this as a pure de Sitter phase. Then, for \(\tau_{1}\leq\tau\leq\tau_{2}\), we have a non-slow-roll evolution where \(\epsilon(\tau)\) remains small, while \(\eta(\tau)\) is negative and potentially large in size. We denote by \(\epsilon_{1}\) and \(\eta\) the values of the slow-roll parameters evaluated at \(\tau\to\tau_{1}^{+}\). Finally, there is a slow-roll phase \(\tau_{2}<\tau\leq\tau_{0}\) where the slow-roll parameters return to very small values. Again, we approximate this last phase as pure de Sitter. We assume that the pump field \(z(\tau)\) is continuous at the transitions.
In the de Sitter limit, \(z^{\prime\prime}(\tau)/z(\tau)\,=\,2/\tau^{2}\) for \(\tau\leq\tau_{1}\) and \(\tau_{2}<\tau\leq\tau_{0}\), while the time profile of this quantity can be richer during the NSR era. As in [42], we adopt the Ansatz
\[v_{k}(\tau) = -\frac{i\,H_{0}\,z(\tau)\,e^{-ik\tau}}{2\sqrt{\epsilon_{1}\,k^{3} }}\,\mathcal{C}_{1}(k)\left[1+ik\tau+(ik\tau_{1})^{2}A_{(2)}(\tau)+(ik\tau_{1} )^{3}A_{(3)}(\tau)\right]\] \[-\frac{i\,H_{0}\,z(\tau)\,e^{ik\tau}}{2\sqrt{\epsilon_{1}\,k^{3} }}\,\mathcal{C}_{2}(k)\left[1-ik\tau+(-ik\tau_{1})^{2}A_{(2)}(\tau)+(-ik\tau_{ 1})^{3}A_{(3)}(\tau)\right]\]
for the Mukhanov-Sasaki mode function.
For \(\tau<\tau_{1}\), the mode equation is the same as in a standard slow-roll era: in order to match with the Bunch-Davies vacuum, we select \(A_{(n)}=0\) for \(n\geq 2\), as well as \(\mathcal{C}_{2}=0\) and \(\mathcal{C}_{1}=1\) in eq (A.2). For \(\tau_{1}\leq\tau\leq\tau_{2}\), we can use the Ansatz (A.2) in the evolution equation (A.1), and solve the equation order by order in powers of \((k\tau_{1})\): see [42]. At each order \(n\) in \((k\tau_{1})^{n}\), the equation can be solved at leading order in an expansion in the parameter \(\Delta\tau\) of eq (2.4) controlling the duration of the non-slow-roll era. For each \(n\), the result depends on powers of the quantity \(d\ln\left[z^{2}(\tau)/a^{2}(\tau)\right]/d\ln\tau\), evaluated at time \(\tau_{1}^{+}\) at the onset of the NSR era. This quantity was dubbed \(\alpha\) in [42]: in the present instance, within single field inflation with canonical kinetic terms and in a pure de Sitter limit, it corresponds to the quantity \(-\eta\) (we use the definitions (2.3)). After computing each quantity \(A_{(n)}\), the resulting series in eq (A.2) can be resummed analytically in terms of exponentials. The result of the resummation is [42]
\[v_{k}(\tau) =-\frac{i\,H_{0}\,z(\tau)\,e^{-ik\tau}}{2\sqrt{\epsilon_{1}\,k^{3} }}\left[1+ik\tau+\frac{\eta}{4}\left(1-2ik(\tau-\tau_{1})-e^{2ik(\tau-\tau_{1 })}\right)\right]\,,\] (A.3)
valid for \(\tau_{1}\leq\tau\leq\tau_{2}\). This mode function connects continuously, together with its first derivative, with the mode function (and the Bunch-Davies vacuum) for \(\tau\leq\tau_{1}\). We can finally connect the result of eq (A.3) with the de Sitter mode function at later times \(\tau_{2}\leq\tau\leq\tau_{0}\), imposing continuity of the function and its first derivative at \(\tau=\tau_{2}\). The solution corresponds to the Ansatz (A.2) with \(A_{(n)}=0\), and the scale-dependent functions \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) are collected in eqs (2.11) and (2.12) of the main text.
## Appendix B Renormalization of UV divergences
In this Appendix we briefly discuss a method for absorbing the UV quadratically-divergent parts (5.8) of the loop contributions into the available parameters of the system, at least at large scales \(p\ll 1\). The quantities available for this procedure are the overall amplitude \(\mathcal{P}_{0}\) defined in eq (3.2), and the factor \(\Pi_{0}\) controlling the scale dependence of the tree-level spectrum (3.7). As stated in the main text, we can trust our results only in a large-scale, small-\(p\) regime. (See the discussion around eq (5.9).) Expanding the total power spectrum (5.6) up to quadratic order in \(p\), and including the UV one-loop contributions given in eq (5.8), we obtain:
\[\mathcal{P}_{\rm tot}(p) = \mathcal{P}_{0}\left(1-\frac{5\,\Pi_{0}\,\Lambda_{\rm UV}^{2}\, \mathcal{P}_{0}}{6\,(1+\Pi_{0})}\right)-\frac{4\mathcal{P}_{0}\,\Pi_{0}}{3} \left(1-\frac{103\,\Lambda_{\rm UV}^{2}\,\mathcal{P}_{0}}{120\,(1+\Pi_{0})} \right)\,p^{2}+\mathcal{O}(p^{4})\,.\] (B.1)
The parentheses contain the UV-divergent loop contributions, suppressed by a factor \(\mathcal{P}_{0}\) with respect to the tree-level terms. Higher loop corrections give contributions to eq (B.1) with powers higher than two in \(\mathcal{P}_{0}\). In the present one-loop instance, we can trust our results only up to quadratic contributions \(\mathcal{P}_{0}^{2}\). We can then absorb the UV-divergent parts of eq (B.1) into a redefinition of the bare quantities \(\mathcal{P}_{0}\) and \(\Pi_{0}\), which are mapped into measurable quantities \(\mathcal{P}_{\rm ms}\) and \(\Pi_{\rm ms}\) at large scales:
\[\mathcal{P}_{0} \to \mathcal{P}_{\rm ms}\left(1+\frac{5\,\Lambda_{\rm UV}^{2}\,\Pi_{ \rm ms}}{6\,(1+\Pi_{\rm ms})}\,\mathcal{P}_{\rm ms}\right)\,,\] (B.2) \[\Pi_{0} \to \Pi_{\rm ms}\left(1+\frac{\Lambda_{\rm UV}^{2}\,\left(103-100\Pi _{\rm ms}\right)}{120\,(1+\Pi_{\rm ms})}\,\mathcal{P}_{\rm ms}\right)\,.\] (B.3)
By means of these redefinitions, we express eq (B.1) as
\[\mathcal{P}_{\rm tot}(p) = \mathcal{P}_{\rm ms}-\frac{4\mathcal{P}_{\rm ms}}{3}\,\Pi_{\rm ms }\,p^{2}+\mathcal{O}(p^{4})+\mathcal{O}(\mathcal{P}_{\rm ms}^{2}).\] (B.4)
Hence quadratically divergent, one-loop effects get absorbed into bare quantities. The result is expressed in terms of the measurable amplitude \(\mathcal{P}_{\rm ms}\) of the spectrum, and of the parameter \(\Pi_{\rm ms}\) controlling its scale dependence at very large scales (see the discussion around eq (3.9)).
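As a cross-check of this cancellation, the sympy sketch below (ours, purely illustrative) substitutes the redefinitions (B.2)-(B.3) into eq (B.1) and expands to one-loop order, confirming that the \(\Lambda_{\rm UV}\)-dependent pieces drop out and eq (B.4) is recovered:

```python
import sympy as sp

P, Pi, L, p, e = sp.symbols('P_ms Pi_ms Lambda_UV p e', positive=True)

# Bare parameters in terms of measurable ones, eqs (B.2)-(B.3);
# 'e' is a bookkeeping parameter tagging the one-loop order.
P0  = P  * (1 + e * 5*L**2*Pi/(6*(1 + Pi)) * P)
Pi0 = Pi * (1 + e * L**2*(103 - 100*Pi)/(120*(1 + Pi)) * P)

# Total spectrum, eq (B.1), with its loop pieces tagged by 'e' as well.
Ptot = (P0*(1 - e*5*Pi0*L**2*P0/(6*(1 + Pi0)))
        - sp.Rational(4, 3)*P0*Pi0*(1 - e*103*L**2*P0/(120*(1 + Pi0)))*p**2)

# Keep terms through first order in 'e' (one loop): Lambda_UV drops out.
result = sp.expand(sp.series(Ptot, e, 0, 2).removeO().subs(e, 1))
print(sp.simplify(result))   # -> P_ms - 4*P_ms*Pi_ms*p**2/3, i.e. eq (B.4)
```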
|
2306.10326 | Predicting Alzheimer's Disease Diagnosis Risk over Time with Survival
Machine Learning on the ADNI Cohort | The rise of Alzheimers Disease worldwide has prompted a search for efficient
tools which can be used to predict deterioration in cognitive decline leading
to dementia. In this paper, we explore the potential of survival machine
learning as such a tool for building models capable of predicting not only
deterioration but also the likely time to deterioration. We demonstrate good
predictive ability (0.86 C-Index), lending support to its use in clinical
investigation and prediction of Alzheimer's Disease risk. | Henry Musto, Daniel Stamate, Ida Pu, Daniel Stahl | 2023-06-17T12:03:35Z | http://arxiv.org/abs/2306.10326v1 | Predicting Alzheimer's Disease Diagnosis Risk over Time with Survival Machine Learning on the ADNI Cohort
###### Abstract
The rise of Alzheimer's Disease worldwide has prompted a search for efficient tools which can be used to predict deterioration in cognitive decline leading to dementia. In this paper, we explore the potential of survival machine learning as such a tool for building models capable of predicting not only deterioration but also the likely time to deterioration. We demonstrate good predictive ability (0.86 C-Index), lending support to its use in clinical investigation and prediction of Alzheimer's Disease risk.
Survival Machine Learning, ADNI, Clinical Prediction Modelling.
## 1 Introduction
One of the most pressing challenges for governments and healthcare systems is the rising number of people with dementia. More than 55 million people live with dementia worldwide, and there are nearly 10 million new cases yearly, with 60-70% of all dementias being of Alzheimer's Disease type (AD) [1]. Recently, attention has turned to Machine Learning (ML) as a tool for improving the predictive ability of clinical models concerning AD and addressing clinical challenges more widely. However, of the hundreds of clinical ML models that appear in scientific publications each year, few have thus far been successfully embedded into existing clinical practice [2]. One of the reasons for this is that most models only provide predictions for disease cases without quantifying the probability of disease occurrence. This limitation restricts clinicians' ability to accurately measure and communicate the probability of disease development over time with the patient [3]. Also, in the context of predicting the progression of AD in particular, many studies that use ML methods employ a classification approach, whereby the outcome to be predicted is either a binomial or multinomial outcome within a specific timeframe [4][5]. The datasets are often derived from longitudinal studies, whereby clinical marker data is collected from participants over months and years [6]. Thus, such data has a temporal element inherent to the methodology employed in the collection process. However, standard classification ML cannot consider the predictive power of time in conjunction with other predictors. Furthermore, classification models cannot handle drop-outs, which are common in longitudinal studies.
With this in mind, a newly emerging field of exploration seeks to build on traditional time-dependent statistical models, such as survival analysis, to develop machine learning models which can predict the time-dependent risk of developing AD and go beyond simple classification. Survival analysis is a statistical method that aims to predict the risk of an event's occurrence, such as death or the emergence of a disease, as a function of time. A key aspect of survival analysis is the presence of censored data, indicating that the event of interest has not occurred while the subject was part of the study. The presence of censored data requires the use of specialised techniques. Traditionally, the Cox proportional hazards model [7] has been the most widely used technique for analysing data that also contains censored records. However, the Cox model typically works well for small data sets and does not scale well to high dimensions [8]. ML techniques that inherently handle high-dimensional data have been adapted to handle censored data, allowing ML to offer a more flexible alternative for analysing high-dimensional, censored, heterogeneous data [8]. Furthermore, the ability to predict not only a binary or multinomial outcome but also the risk of such outcomes occurring at different timepoints provides clinicians and researchers with more information for the benefit of research and patients.
This work has several aims. First, it aims to build upon existing work demonstrating the utility of survival-based ML techniques in predicting the risk of deterioration at different time points in AD using the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Secondly, it aims to explore the predictive power of these techniques once the more physically intrusive biomarkers available in the dataset are removed. These predictors, such as ABETA, TAU and PTAU, which are established biomarkers for dementia, are collected via painful lumbar puncture procedures to sample cerebrospinal fluid (CSF). Recently efforts have been made to investigate alternative biomarkers such as blood metabolites which, in some studies, proved to have comparable predictive power to the established CSF-biomarkers [9].
The rest of the paper is organised as follows. First, it will review existing literature on survival-based ML as applied to clinical questions in general and AD prediction in particular. Next, the problem of interest will be defined. Then the proposed methodology will be outlined. Before the results are presented, the study design of the dataset will be described, including predictors and diagnostic criteria. A discussion of the implications of these results will then follow.
## 2 Related Work
Spooner et al. [8] systematically compared the performance and stability of ML algorithms and feature selection methods suitable for high-dimensional, heterogeneous, censored clinical data, in the context of cognitive ageing and AD, by predicting the risk of AD over time [8]. The authors assessed ten survival-based machine-learning techniques alongside the standard Cox proportional hazard model. The Sydney Memory and Aging Study (MAS) dataset and Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset were utilised. All algorithms evaluated performed well on both data sets and outperformed the standard Cox proportional hazards model.
Another paper that explores the clinical utility of survival modelling within the domain of AD research comes from [10], which looked at the interaction between socioeconomic features and polygenic hazard scores on the timing of Alzheimer's diagnosis using Cox proportional hazard survival analysis. Only the standard Cox PH technique was used. The authors could demonstrate the clinical utility of using socioeconomic markers and the presence of the APOE4 gene expression to predict the time to AD diagnosis. Although a small study focusing on only one model, this work demonstrated the utility of survival-based models in AD prediction. However, more work was needed to build upon these results using ML methods. This was achieved in [11] using ML survival-based methods to predict the risk of developing AD in the English Longitudinal Study of Aging (ELSA) dataset. This work again found that Survival ML outperformed Cox methods.
On the other hand, [12] found the standard Cox regression and two ML models (Survival Random Forest and Extreme Gradient Boosting) had comparable predictive accuracy across three different performance metrics, when applied to the Prospective Registry For Persons with Memory Symptoms (PROMPT) dataset [13]. The authors concluded that survival ML did not perform better than standard survival methods.
In comparison, [14] found that multi-modal survival-based deep learning methods produced good results when applied to the ADNI dataset, comparable to [8]. In this context, our present work serves as an example of including neural network models, as these methods have hitherto seldom been explored in a survival context.
Despite the scarcity of survival modelling papers in relation to AD prediction, recent examples have shown promise in attempting to outperform the classic Cox proportional hazard model, using survival ML and survival neural networks/ deep learning on clinical datasets. This supports the continued exploration of survival ML as a predictive tool for clinical risk problems [11].
## 3 Problem Definition
This study uses survival-based ML methods to predict the risk of deterioration, defined as receiving a worse diagnosis at their final visit to the data collection centre before leaving the study, compared to baseline diagnosis. Furthermore, the study aims to build models to predict the risk of receiving a worse diagnosis within the data collection period using survival-based ML. These models will then be tested for stability, and two estimations of the general test error will be calculated based on C-Index and Calibration scores [15].
A secondary aim is to explore the predictive power of these models when predictors derived from invasive CSF collections are removed from the dataset.
## 4 Methodology
### Data Description
**Alzheimer's Disease Neuroimaging Initiative.**
The data used in this paper was derived from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database [6]. This longitudinal case-control study was initiated
in 2004 by the National Institute of Aging (NIA), The National Institute of Biomedical Imaging and Bioengineering (NIBIB), The Food and Drug Administration (FDA), as well as elements of the private and non-profit sectors. The initial protocol, ADNI1, was conducted over six years, recruiting 400 subjects diagnosed with Mild Cognitive Impairment (MCI), 200 subjects with Alzheimer's (AD), and 200 healthy controls (CN). The initial goal of the ADNI study was to test whether repeated collections of neuroimaging, biomarker, genetic, and clinical and neuropsychological data could be combined to contribute in an impactful way to research dementia [6].
Data for the present paper was downloaded on the 1st of October 2022 through the ADNIMERGE package in R. This package combines predictors from the different
ADNI protocols. The final combined dataset contains 115 variables and 15,157 observations, which included multiple observations per participant. These observations represent data collection events where participants made up to 23 visits to study sites. The data used for this work is a subset of the full dataset, containing only information from the original ADNI2 study. After some initial cleaning, the resulting data contained 607 observations and 52 variables consisting of 50 input attributes, 1 time attribute (defined as the time in months until the participant visited the data collection centre for the last time), and 1 outcome attribute. The outcome attribute consisted of three diagnostic classes received at their final visit to the data collection centre: those who received a diagnosis of Cognitively Normal (CN), those who received a diagnosis of Mild Cognitive Impairment (MCI), and those who received a diagnosis of Alzheimer's Disease (AD) [4].
### Predictors
* Baseline Demographics: age, gender, ethnicity, race, marital status, and education level were included in the original dataset.
* Neuropsychological test results, including those from the Functional Activities Questionnaire (FAQ), the Mini-Mental State Exam (MMSE), and Rey's Auditory Verbal Learning Test (RAVLT), were included in the data. This numeric data is well-validated as a tool for identifying cognitive impairment in general and AD-related cognitive impairment in particular. Full details of the tests included can be found at [16].
* Positron Emission Tomography (PET) measurements (FDG, PIB, AV45) are indirect measures of brain function using the Positron Emission Tomography neuroimaging modality.
* Magnetic Resonance Imaging (MRI) measurements (Hippocampus, intracranial volume (ICV), MidTemp, Fusiform, Ventricles, Entorhinal and WholeBrain) are structural measurements of a participant's brain derived from the Magnetic Resonance Imaging neuroimaging modality.
* APOE4 is an integer measurement representing the appearance of the epsilon 4 allele of the APOE gene. This allele has been implicated as a risk factor for AD [17].
* ABETA, TAU, and PTAU are cerebrospinal fluid (CSF) biomarker measurements. These biomarkers are collected via lumbar puncture. These predictors were removed from the model-building process for the second set of models.
* Last Visit is defined for this paper as the number of months from baseline data collection to the subject's last visit at a data collection centre. This variable was added to explicitly define a time predictor for survival-based ML modelling.
### Data Preprocessing
Boolean variables were created, indicating the location of missing data for each predictor. Variables with missingness at 90% or greater of the total rows for that predictor were removed. All nominal predictors were dummy-coded.
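A minimal pandas sketch of these three preprocessing steps is given below (the study's own pipeline used base R, Tidyverse and caret; the function name and column handling here are illustrative):

```python
import pandas as pd

def preprocess(df: pd.DataFrame, max_missing: float = 0.90) -> pd.DataFrame:
    # Boolean indicator marking where each predictor is missing.
    for col in df.columns[df.isna().any()].tolist():
        df[f"{col}_missing"] = df[col].isna()
    # Remove predictors with missingness at 90% or greater of all rows.
    df = df.loc[:, df.isna().mean() < max_missing]
    # Dummy-code all nominal (object/categorical) predictors.
    nominal = df.select_dtypes(["object", "category"]).columns.tolist()
    return pd.get_dummies(df, columns=nominal)
```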
The data was split into two groups to predict deterioration using survival-based ML. The first group contained only those diagnosed as cognitively normal (CN) on their first visit to the data collection centre. The second group contained only those diagnosed with Mild Cognitive Impairment (MCI) on their first visit to the data collection centre. Deterioration was defined as receiving a worse diagnosis on their final visit to the data collection centre. The resultant two datasets had 285 and 322 observations, respectively, and 98 variables with CSF-derived biomarkers included (92 without); see Tables 1, 2, and 3.
\begin{table}
\begin{tabular}{|p{142.3pt}|p{142.3pt}|} \hline Outcome & Definition \\ \hline CN & Those diagnosed with CN at baseline who received the same diagnosis at their last visit. \\ \hline MCI/AD & Those diagnosed with CN at baseline who received a diagnosis of either AD or MCI at their last visit. \\ \hline \end{tabular}
\end{table}
Table 1: Those who received a cognitively normal (CN) diagnosis at baseline were the only group included. The models predicted the diagnoses these participants received at the final visit, defined here.
### Model Development
Model development, evaluation, and validation were carried out according to methodological guidelines outlined by [18]; results were reported according to the Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis (TRIPOD) guidelines [19]. This paper explored three algorithms:
Cox Proportional Hazard Model (Cox PH) - The Cox model is expressed by the hazard function, which is the risk of an event occurring at time \(t\), as follows:

\[h(t)=h_{0}(t)\,\exp\left(\beta_{1}X_{1}+\beta_{2}X_{2}+\cdots+\beta_{p}X_{p}\right)\qquad(1)\]

where \(t\) represents the survival time, \(h(t)\) is the hazard function, \(X_{1},X_{2},\ldots,X_{p}\) are the values of the \(p\) covariates, \(\beta_{1},\beta_{2},\ldots,\beta_{p}\) are the coefficients that measure the effect of the covariates on the survival time, and \(h_{0}(t)\) is the baseline hazard function, which is unspecified. The regression coefficients are estimated by maximising the partial likelihood [8], and hence the model does not require tuning.
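For illustration, a Cox PH fit of this form can be sketched in a few lines with the Python lifelines library (the study itself used R; the toy data and column names below are hypothetical):

```python
import pandas as pd
from lifelines import CoxPHFitter

# Toy data: follow-up time in months, event indicator, two covariates.
df = pd.DataFrame({
    "months_to_last_visit": [12, 24, 36, 24, 48, 60],
    "deteriorated":         [1,  0,  1,  1,  0,  0],   # 0 = censored
    "age":                  [71, 68, 75, 80, 66, 70],
    "mmse":                 [29, 30, 26, 24, 28, 29],
})

cph = CoxPHFitter(penalizer=0.1)   # small penalty stabilises the toy fit
cph.fit(df, duration_col="months_to_last_visit", event_col="deteriorated")
cph.print_summary()                # per-covariate hazard ratios exp(beta)
```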
Survival Random Forest (SRF) - Random Forests seek to grow many trees using bootstrapped aggregation and splitting on a random subsection of predictors for each split point. The split points are chosen based on some criterion (such as entropy or purity of the node), which seeks to allocate classifications of one type within each terminal node. In a Survival Random Forest, the feature and split point chosen are the ones that maximise the survival difference (in terms of the hazard function) between subsequent nodes [8][20]. In the tuning grid for this model, the values of mtry varied between 1 and 20, with a step of 1, while the values for minimum node size in the grid were 10, 20, 30, 40, and 50. The SRF comprised 1000 trees; a large number of trees promotes model convergence, and the tree count is generally not tuned.

\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|} \hline Outcome & Definition \\ \hline CN/MCI & Those who had received a diagnosis of MCI at baseline either received the same diagnosis at their last visit or a more favourable diagnosis of CN. \\ \hline AD & Those diagnosed with MCI at baseline received a diagnosis of AD at their last visit. \\ \hline \end{tabular}
\end{table}
Table 2: Those diagnosed with Mild Cognitive Impairment (MCI) at baseline were the only group included. The models predicted the diagnoses these participants received at the final visit, defined here.

\begin{table}
\begin{tabular}{|l|l|l|} \hline Dataset & Variables & Observations \\ \hline CN at baseline & 98/92 (with/without CSF predictors) & 285 \\ \hline MCI at baseline & 98/92 (with/without CSF predictors) & 322 \\ \hline \end{tabular}
\end{table}
Table 3: The final dimensions of the two datasets after preprocessing.
Survival Deep Hit Neural Networks (SNN) - Deep Hit is a multi-task neural network comprising a shared sub-network and K cause-specific sub-networks. The architecture differs from a conventional multi-task neural network in two ways. First, it utilises a single softmax layer as the output layer of Deep Hit to ensure that the network learns the joint distribution of K possible outcomes, not the marginal distributions of each outcome. Second, it maintains a residual connection from the input covariates into the input of each cause-specific sub-network. The full technical description of this model can be found in [21]. In the tuning grid for this model, the number of nodes was between 2 and 300, the number of epochs was between 10 and 400, and the batch size was 32. The learning rates were 0.001 and 0.01, the activation functions were 'relu', 'elu' and 'leakyrelu', and the optimisers were 'adam' and 'adamw'. 10% of the training dataset was held aside for validation in the early stopping procedure, with patience at either 10 or 150 epochs.
### Nested Cross-Validation and Monte Carlo Simulation
A Nested Cross-Validation procedure was implemented to tune and evaluate the models so precise estimates of the models' performance on unseen cases (internal validation) could be gathered [4]. Nested Cross-Validation consisted of an outer 5-fold CV (model assessment) and an inner 5-fold CV (model tuning). We conducted a Monte Carlo procedure of 100 repetitions of the nested CV using different random splits per model to assess the models' stability. Performance statistics were recorded for each model produced by each iteration. Each performance statistic's mean and standard deviation across all iterations were recorded when the MC was complete. To ensure the representativeness of training and test samples in both procedures, the data splitting was stratified based on the AD cases variable.
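The sketch below illustrates this nested scheme with scikit-survival's RandomSurvivalForest standing in for the paper's R/mlr3 pipeline (an assumed library, not the study's code; `X` is a NumPy predictor matrix, `y` a structured array of event/time pairs, and the stratification by AD cases is omitted for brevity):

```python
import numpy as np
from sklearn.model_selection import KFold, GridSearchCV
from sksurv.ensemble import RandomSurvivalForest

def nested_cv_cindex(X, y, n_outer=5, n_inner=5):
    # Inner grid mirrors the paper's mtry (1-20) and minimum node size
    # (10-50) ranges via their closest scikit-survival equivalents.
    grid = {"max_features": list(range(1, 21)),
            "min_samples_leaf": [10, 20, 30, 40, 50]}
    scores = []
    for train, test in KFold(n_splits=n_outer, shuffle=True).split(X):
        inner = GridSearchCV(RandomSurvivalForest(n_estimators=1000),
                             grid, cv=n_inner)           # inner CV: model tuning
        inner.fit(X[train], y[train])
        scores.append(inner.score(X[test], y[test]))     # Harrell's C on held-out fold
    return np.mean(scores), np.std(scores)
```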
### Performance Metrics
To assess model performance, two statistics were recorded. Discrimination was assessed using the Concordance index or C-index [18]. This metric, also called Harrell's C-index, provides a global assessment of the model and can be considered a more general form of the AUCROC measure typically used in binary classification tasks. The C-index computes the percentage of comparable pairs within the dataset whose risk score was correctly identified by the model. Comparable pairs are defined as a selection of two observations, which can be compared in terms of survival time predicted by the model. If both are censored, then they are not included in the computation for this metric. A pair is considered concordant if the observation that experiences the earlier event is identified as having greater risk, and discordant otherwise. Thus the total concordance score for a model is the ratio of concordant pairs to the total number of comparable pairs within the dataset [15].
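The definition above can be made concrete with a toy implementation (ours, ignoring ties for brevity; not the study's code):

```python
# Toy Harrell's C-index: the share of comparable pairs whose predicted
# risk ordering matches the observed survival times.
def c_index(times, events, risks):
    concordant = comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # Pair comparable only if i's event precedes j's follow-up.
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:   # earlier event, higher risk
                    concordant += 1
    return concordant / comparable

print(c_index([2, 4, 6], [1, 1, 0], [0.9, 0.5, 0.1]))  # -> 1.0
```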
Secondly, calibration was assessed using Van Houwelingen's Alpha Survival Measure of non-proportional hazards models [15]. This metric is defined as:
\[\alpha=\sum_{i}\delta_{i}\Big/\sum_{i}H_{i}(t_{i}) \tag{2}\]

where \(\delta_{i}\) is the true censoring indicator observed from the test data, \(H_{i}\) is the cumulative hazard predicted by the model, and \(t_{i}\) is the observed survival time. The model is well calibrated if the estimated \(\alpha\) is equal or close to 1. Calibration is a formal comparison between the probability distribution and resultant survival instances observed in the test data and the probability distribution and resultant survival predictions generated by the model. A full exploration of this metric can be found in [22].
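A toy numeric check of eq. (2), with illustrative values rather than study data:

```python
import numpy as np

delta  = np.array([1, 0, 1, 1, 0])             # observed event indicators
H_pred = np.array([0.9, 0.4, 1.1, 0.7, 0.3])   # predicted cumulative hazards at t_i

alpha = delta.sum() / H_pred.sum()
print(round(alpha, 2))   # 0.88 -> close to 1 indicates good calibration
```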
### Software and Hardware
The data analysis was conducted using the R language [23]. Initial data cleaning was performed using base R functions and the Tidyverse R package [24]. The creation of dummy variables was performed using the Caret R package [25]. The nested cross-validation procedure, including training, tuning and evaluation, was performed on the Cox PH, SRF, and SNN models using the mlr3 R package [26]. The hardware consisted of 3 servers running Linux, with Xeon processors and 64GB of RAM.
## 5 Results
The nested cross-validation C-index and Calibration performance for each model type is detailed below. Figures for the two groups' C-indexes, with CSF-derived biomarkers included in the models, can be found in Fig. 1.
The best-performing model for the CN group with CSF-derived biomarkers included was the SRF, followed by the SNN and then the Cox PH model. Thus, the SRF and SNN outperformed the conventional Cox statistical model in the CN group with CSF-derived biomarkers included, on both the Calibration and C-index metrics.
Once the CSF-derived biomarkers were removed from the CN group, both the Cox PH and SNN models reported worse predictive power. However, as estimated by the C-Index and Calibration, the SRF retained its predictive ability, even significantly improving its calibration score.
\begin{table}
\begin{tabular}{|l|l|l|} \hline Model & C-index CSF included / removed & Calibration CSF included / removed \\ \hline Cox PH & 0.71 / 0.59 & 0.01 / 0.01 \\ \hline SRF & 0.84 / 0.86 & 0.80 / 1.02 \\ \hline SNN & 0.80 / 0.70 & 0.64 / 0.60 \\ \hline \end{tabular}
\end{table}
Table 4: CN group with CSF-derived biomarkers included / removed.
\begin{table}
\begin{tabular}{|l|l|l|} \hline Group (Model) & Mean C-index (sd) & Mean Calibration (sd) \\ \hline MCI (Cox PH) & 0.78(0.02) & 0.33(0.08) \\ \hline CN (Cox PH) & 0.59(0.06) & 0.03(0.02) \\ \hline \end{tabular}
\end{table}
Table 6: Cox PH Monte Carlo at 100 iterations.
\begin{table}
\begin{tabular}{|l|l|l|} \hline Model & C-index CSF included / removed & Calibration CSF included / removed \\ \hline Cox PH & 0.78 / 0.78 & 0.29 / 0.25 \\ \hline SRF & 0.84 / 0.84 & 0.98 / 0.99 \\ \hline SNN & 0.83 / 0.77 & 1.16 / 0.91 \\ \hline \end{tabular}
\end{table}
Table 5: MCI group with CSF-derived biomarkers included / removed.
\begin{table}
\begin{tabular}{|l|l|l|} \hline Group (Model) & Mean C-index (sd) & Mean Calibration (sd) \\ \hline MCI (SNN) & 0.77(0.02) & 0.91(0.1) \\ \hline CN (SNN) & 0.7(0.06) & 0.6(0.03) \\ \hline \end{tabular}
\end{table}
Table 7: SNN Monte Carlo.
\begin{table}
\begin{tabular}{|l|l|l|} \hline Group & Mean C-index (sd) & Mean Calibration (sd) \\ \hline MCI (SRF) & 0.84(0.008) & 0.99(0.02) \\ \hline CN (SRF) & 0.83(0.01) & 1.02(0.02) \\ \hline \end{tabular}
\end{table}
Table 8: SRF Monte Carlo.
The SRF model results on both the C-Index and Calibration proved the most stable upon repeated testing, with standard deviations at less than 0.03. The SNN model was less stable and reported less predictive power, as measured by both the C-Index and Calibration.
## 6 Discussion
This study aimed to further explore the potential of survival-based ML as a tool for predicting time to AD diagnosis. This paper demonstrates the clear utility of such methods when predicting on the ADNI2 dataset. This provides further evidence for the continued exploration of the utility of survival ML in this context.
Several results reported here are worthy of note. Firstly, we demonstrated good predictive power for SRF with very good discrimination and excellent calibration, which was superior to both the standard Cox PH model and the SNN model. Good discrimination and calibration are essential in survival ML models to obtain accurate risk estimations at specific time periods of interest, which is not possible with traditional classification ML models. This allows for informed decision-making, personalised interventions, and timely allocation of resources for the prevention, early detection, or management of dementia. Our results support the work of [11] but disagree with [20], which found that the standard Cox model was superior to tree-based ensemble methods. This is possibly also due to the way in which the Survival trees were constructed, with [18] using probabilities derived from a Cox model to construct a Random Forest. In comparison, the SRF presented here sought to create trees whose splits aimed to maximise the difference in survival between the resultant nodes. With the present study indicating strong results using this approach, it may be that the latter technique produces better models. However, we should note that these results were obtained on datasets other than the one used in this study, ADNI.
Figure 1: C-indexes for models applied to the two groups with CSF-derived biomarkers included in the models.
With the removal of the CSF-derived biomarkers, performance deterioration was seen for the SNN but not the SRF or the Cox PH. The choice to investigate an SNN was derived, in part, from the work of [14], whose best model achieved a C-index of 0.83 on the ADNI dataset. In comparison, the best model found by the present study, using SNN, achieved a C-index of 0.77. However, we should note that [14] did not provide a comparison between the Survival Neural Network models used and either a standard Cox PH model or any other survival ML algorithm. Another point of consideration is that the authors used a slightly different Neural Network algorithm to the one described here. Thus, an important next step would be directly comparing the DeepSurv model and the Deep Hit model described here.
SNN had worse stability than the SRF and Cox PH models, as measured by the standard deviations of the C-index and Calibration scores for these models. This would suggest that this algorithm produces unstable models with unreliable predictions. Neural Networks usually perform best in complex problems that require discovering hidden patterns in the data between a large number of interdependent variables. Furthermore, Neural Networks usually perform better on image and audio classification rather than tabular data, such as the dataset used in this study [27]. Therefore, it may be the case that a simpler model such as Random Forest might be better suited for the kind of limited datasets presented here. It may also be the case that the SNN model overfit the comparatively small dataset presented here.
Finally, the results in this work suggest that CSF-derived biomarkers did not have a clear contribution in this setting for building models capable of accurately predicting the time to AD diagnosis on our considered ADNI sample. Although both the Cox PH and SNN models variously suffered from the removal of these predictors, the SRF model did not. This is important, as collecting biomarkers from CSF is an invasive and painful process for participants, which involves a lumbar puncture. Recent analyses conducted on EMIF-AD data [9] established that predictors such as metabolites in blood showed similar predictive power to the well-established but more invasive CSF biomarkers.
Despite the results obtained by this work, there are a number of limitations to the present paper that need to be considered. Firstly, the ADNI2 data is comparatively small, and future work is required to validate the models created here using external data. A related point is the lack of diversity within this data, which heavily skews towards white North-American participants. To validate the models created here, they must be tested on non-white, non-western participants so that evidence of model performance can be gathered for a wider group of people.
A further limitation is that the choice of hyper-parameters for the grid search procedure for each model is finite. We were unable to conduct an exhaustive search over a larger set of combinations of hyperparameter values due to time constraints and computational cost. Therefore it is entirely possible that better results for these models can be found using hyperparameters not explored here.
## 7 Conclusion
This paper proposed a survival ML approach to accurately predict the time to Alzheimer's Disease diagnosis. It was compared with one of the most widely used statistical models for survival analysis, namely Cox PH. In the framework we proposed using the ADNI cohort, the machine learning based approach proved to be more accurate than the statistical approach, as was also the case in a recent study conducted on different clinical data [11].
**Acknowledgements**
Daniel Stahl was part funded by the NIHR Maudsley Biomedical Research Centre at South London and Maudsley NHS Foundation Trust and King's College London. This study represents independent research and views expressed are those of the author(s) and not necessarily those of the NIHR or the Department of Health and Social Care.
|
2305.09327 | Improved Type III solar radio burst detection using congruent deep
learning models | Solar flares are energetic events in the solar atmosphere that are often
linked with solar radio bursts (SRBs). SRBs are observed at metric to
decametric wavelengths and are classified into five spectral classes (Type
I--V) based on their signature in dynamic spectra. The automatic detection and
classification of SRBs is a challenge due to their heterogeneous form.
Near-realtime detection and classification of SRBs has become a necessity in
recent years due to large data rates generated by advanced radio telescopes
such as the LOw Frequency ARray (LOFAR). In this study, we implement congruent
deep learning models to automatically detect and classify Type III SRBs. We
generated simulated Type III SRBs, which were comparable to Type IIIs seen in
real observations, using a deep learning method known as Generative Adversarial
Network (GAN). This simulated data was combined with observations from LOFAR to
produce a training set that was used to train an object detection model known
as YOLOv2 (You Only Look Once). Using this congruent deep learning model
system, we can accurately detect Type III SRBs at a mean Average Precision
(mAP) value of 77.71%. | Jeremiah Scully, Ronan Flynn, Peter Gallagher, Eoin Carley, Mark Daly | 2023-05-16T10:04:30Z | http://arxiv.org/abs/2305.09327v1 | # Improved Type III solar radio burst detection using congruent deep learning models
###### Abstract
Solar flares are energetic events in the solar atmosphere that are often linked with solar radio bursts (SRBs). SRBs are observed at metric to decametric wavelengths and are classified into five spectral classes (Type I-V) based on their signature in dynamic spectra. The automatic detection and classification of SRBs is a challenge due to their heterogeneous form. Near-realtime detection and classification of SRBs has become a necessity in recent years due to large data rates generated by advanced radio telescopes such as the LOw Frequency ARray (LOFAR). In this study, we implement congruent deep learning models to automatically detect and classify Type III SRBs. We generated simulated Type III SRBs, which were comparable to Type IIIs seen in real observations, using a deep learning method known as Generative Adversarial Network (GAN). This simulated data was combined with observations from LOFAR to produce a training set that was used to train an object detection model known as YOLOv2 (You Only Look Once). Using this congruent deep learning model system, we can accurately detect Type III SRBs at a mean Average Precision (mAP) value of 77.71%.
## 1 Introduction
Solar flares are the most intense explosive events in the solar system (Lin, 2011). The particles accelerated during these events emit light across the electromagnetic spectrum, from gamma rays to radio waves. High-intensity radio emission characterizes solar radio bursts (SRBs), which manifest as complex signals in dynamic spectra. Based on the structure of their dynamic spectra, SRBs are divided into five categories, ranging from Type I to Type V (Pick M, 2009). Because Type III bursts can occur hundreds of times each day, detecting them and understanding their spectral features is a computational challenge. This problem has become even more difficult in recent years, with the development of technologies such as the LOw-Frequency ARray (LOFAR; Van Haarlem et al., 2013), which provides high-volume data streams (up to 3 Gb/s at a single station) of radio burst observations that need to be classified accurately in real-time. With the development of LOFAR for Space Weather (LOFAR4SW; Carley et al., 2020), a system update aimed at autonomously monitoring solar radio activity, the need for automated data pipelines for solar radio bursts has become more immediate. Software pipelines for autonomously detecting SRBs will be an essential component of such a system. The research presented in this paper identifies deep learning as an important component of such a pipeline.
Recent work on object detection algorithms, such as You Only Look Once (YOLO; Redmon et al., 2016), and classification algorithms, such as Support Vector Machines (SVM; Evgeniou and Ponti, 2001) and Random Forest (RF; Louppe, 2014), has shown the need for a high-quality simulated training set to improve the accuracy and robustness of algorithms when classifying and detecting Type III SRBs (Carley et al., 2020). The work presented in this paper shows the crucial role that Generative Adversarial Networks (GANs; Goodfellow et al., 2020) play in generating simulated images for such training sets.
Creating training sets for SRB classification using conventional techniques frequently involves the tedious effort of combing through large data archives to locate and collate pertinent Type III SRB images. Generating simulated data significantly reduces this task by producing data that not only looks like real SRBs but can also be produced in volume and in a short period of time. Previously, a significant number of SRB-like images were produced using parametric modeling (Kalkan et al., 2018). This method produced Type III SRBs that were random in number, grouping, intensity, drift rate, heterogeneity, start-end frequency, and start-end duration, all of which are traits of a Type III. However, compared to Type IIIs observed in daily observations, these images lacked realism.
Several non-machine learning attempts have been made to automatically classify SRBs in dynamic spectra. Current algorithms implement the Hough or Radon transform methods as a way of recognizing specified parametric shapes in images (Lobzin et al., 2014). Depending on the type of radio burst being categorized, these algorithms can reach up to 84% accuracy. Other methods include Constant-False-Alarm-Rate detection (Lu et al., 2004), which is essentially the detection of radio bursts in dynamic spectra employing de-noising and adaptive thresholding. The method works well with various types of radio bursts and has been reported to have a 70% accuracy.
Through the application of multi-modal deep learning to a spectrogram at millimetric wavelengths, deep neural networks have been shown to be highly successful at detecting SRBs (Ma et al., 2017). The technique combines auto-encoders and regularization to achieve an accuracy of 82% in burst detection, but it
has not been applied to metric wavelengths (the LOFAR range), where bursts can have much more intricate geometries.
Generative deep learning models, such as Deep Convolutional Generative Adversarial Networks (DCGANs) (Zhang et al., 2021), are playing a crucial role when classifying SRBs. The system is modified to convert a GAN's generative network into a classification technique. This method successfully classifies Type III solar radio bursts with an accuracy of 89-92% when used with LOFAR metric wavelengths.
To identify SRBs, researchers have recently started using object detection algorithms such as Faster R-CNN (Hou et al., 2020). With an average precision (AP) of 91%, this deep learning neural network was demonstrated to be accurate at extracting minor aspects of SRBs. However, the model lacks the capability of real-time detection, demonstrating a maximum of 17 frames per second on open-sourced datasets such as COCO (Ren et al., 2017).
Fast Radio Bursts (FRBs) and the Search for Extra-Terrestrial Intelligence (SETI; Zhang et al., 2018) are another area where machine learning is being used in conjunction with radio interferometer observations. SETI employs machine learning to look for FRBs in planetary systems using its Allen Telescope Array. SETI uses a deep Convolutional Neural Network (Wu, 2017), known as ResNet (He et al., 2016), to extract irregular high-frequency FRB spikes from within the dynamic spectrum, highlighting regular noise frequencies and Radio Frequency Interference (RFI). This model produced a recall score of 95%.
Connor & van Leeuwen use deep neural networks to extract and classify features of FRBs within observations (Connor & van Leeuwen, 2018). The research uses both simulated and real single Galactic pulsars to obtain a dataset for training the CNN. In this instance, due to the scarcity of FRBs, the training set was dominated by simulated bursts. They simulated most of their true positives but used only false positives that were generated in real surveys and labelled by eye. CNNs have been applied to this scarce data with above 99% accuracy when classifying such phenomena. This FRB extraction method illustrates how robust a CNN can be when using a hybrid dataset containing both real and simulated data.
With recent research moving to deep CNNs and object detection for classifying and detecting radio frequency phenomena, we decided to further investigate these topics. For object detection, CNNs come in a variety of different flavours, including YOLO, Single Shot Detectors (Liu et al., 2016), Region-CNN (R-CNN) (Girshick et al., 2016), Fast R-CNN (Girshick, 2015), and Faster R-CNN (Ren et al., 2017). YOLO has been the only algorithm to deliver high accuracy and real-time detections on datasets, although the other approaches listed have been shown to be quite successful for object detection, just not in real-time.
In our previous research, we adapted the YOLO algorithm to detect Type III SRBs (Scully et al., 2021). Using this configuration we obtained an accuracy score of 82.63%. However, one score we could not obtain was mean Average Precision (mAP), which determines how precisely the algorithm can locate a certain object in an image, in this case, Type III SRBs. We noted some key areas that we could improve on to allow us to measure mAP, in particular the simulated training set.
In this paper, we apply multiple deep-learning methods to the problem of SRB simulation, detection, and classification. We use GANs to create a simulated training set of images comparable to real observations, which is then fed into YOLO to precisely detect and classify Type III SRBs.
The paper is organised as follows. In Section 2, we describe how our data is gathered with LOFAR and the phenomenon that is an SRB. In Section 3, we introduce the deep learning technique of convolutional neural networks, how we manipulate it for SRB simulation, detection, and classification, and the significance of SRB simulation techniques. In Section 4, we introduce GANs and how they can improve SRB simulation. In Section 5, we introduce YOLO, our second deep learning method for SRB detection and classification, its architecture, and the datasets on which YOLO is trained and evaluated. In Section 6, we visualise and discuss the results produced by YOLO and how the introduction of GANs for simulating SRBs can generate a mAP score.
## 2 LOFAR and Solar Radio Bursts
### Low Frequency ARray
The data for this study came from the LOFAR radio interferometer, which was erected in the north of the Netherlands and across Europe, and includes Ireland's I-LOFAR station, which is shown in Figure 1. For monitoring the radio universe in the comparatively uncharted low-frequency range of 10-240 MHz, LOFAR offers a special assortment of observational modes. Several radio astronomical objects can be observed simultaneously by LOFAR, which can also operate several stations at once. The system can operate as a multi-station, very long baseline interferometer or each station can function independently as a telescope.
LOFAR antenna stations provide the same basic functions as standard interferometric radio dishes. These stations feature a large collecting area and high sensitivity, as well as pointing and tracking capabilities, similar to typical radio dishes. Unlike traditional radio dishes, LOFAR stations do not physically move; instead, the system combines signals from individual antennae to construct a phased array using a combination of analog and digital beam-forming techniques, making it more flexible and agile. Rapid telescope re-pointing and multiple, simultaneous observations from a single station are made possible by station-level beamforming. Then, a central processing unit can receive the
Figure 1: The Irish Low-Frequency Array station IE613 (I-LOFAR) at Birr Castle, County Offaly. Coaxial cables are used to transport data from the Low Band Antennas (LBAs) and High Band Antennas (HBAs) to the ILT Cabinet (center right), where it is amplified, filtered, and digitalised. Data is sent in international mode at a speed of \(\sim\)3.2 Gbps to Groningen, Netherlands. In the I-LOFAR Control Room, data is processed using REALTA in local mode (bottom left)(Murphy et al., 2021).
stations' digitized, beam-formed data and correlate it for imaging and observation analysis.
Single-station beamformed data is typically compiled into a dynamic spectrum with a time resolution of 5 microseconds and a frequency resolution of 195 kHz. The 488 frequency channels that make up the dynamic spectra allow for the recording of data at a rate of several Terabytes (TB) per hour. The REAL-time Transient Acquisition backend (REALTA), a high-performance computing system for processing and storing unprocessed beamformed data, has recently been developed by the I-LOFAR team (Murphy et al., 2021). Due to the high volume of data, automated algorithms are required to sort and classify any phenomena of interest. In our case, the classification and detection of SRBs is the primary goal. In this study, we constructed training sets on which GANs and YOLO could be trained and assessed against observations from I-LOFAR.
### Solar Radio Bursts
SRBs are usually investigated in dynamic spectra and are divided into five major spectral classes, spanning from Type I to Type V, based on their shape, frequency, and time length. The intricate characterisation of these radio bursts, however, makes classification extremely challenging. When employing machine learning approaches to categorize such occurrences, the data that the classification algorithms are trained on is crucial. SRBs are regularly observed in dynamic frequency versus time spectra.
The most frequent SRBs are Type IIIs, which are short-lived and appear in dynamic spectra as a vertical bright strip, as illustrated in Figure 2. The numerous diverse forms a Type III might assume within this vertical strip make the task more complicated. They can appear smooth or patchy, faint or intense, superimposed on other radio bursts, freestanding or in groups, or immersed in strong RFI. For this research, we used Type IIIs in the frequency range of 10-100 MHz, where they generally occur as a vertical stripe.
## 3 Deep learning and data simulation
### Convolutional neural networks
A CNN is a deep-learning neural network that is specifically designed for visual feature recognition. It is distinguished from a standard NN by the presence of numerous convolutional layers. These layers effectively perform image filtering to produce image intensity gradients. Each filter produces a different gradient and is responsive to particular shapes. For example, the first layers of the CNN may contain filters that respond to horizontal or vertical lines, curves, and other simple geometric shapes. After the first filters are applied, the CNN then applies a max-pooling layer, which is a type of summing and downsampling. Another convolutional layer follows, then max-pooling, and so forth. Max-pooling works by allowing deeper layers of the network to access larger and larger portions of the image. While the early layers may respond to simple geometries, with max-pooling the subsequent layers become responsive to more complex shapes made from these geometries, such as circles, triangles, and complex polygons. The max-pooling and convolving can continue until the deepest levels of the network react to recognizable aspects, for example, facial characteristics.
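For illustration, a stack of the kind described above can be sketched in a few lines of tf.keras (an illustrative architecture, not one of the networks used in this study):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input((256, 256, 1)),             # greyscale dynamic spectrum
    tf.keras.layers.Conv2D(16, 3, activation="relu"), # early filters: lines, curves
    tf.keras.layers.MaxPooling2D(2),                  # downsample: wider receptive field
    tf.keras.layers.Conv2D(32, 3, activation="relu"), # deeper filters: composite shapes
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Flatten(),                        # single vector representation
    tf.keras.layers.Dense(1, activation="sigmoid"),   # e.g. burst / no burst
])
model.summary()
```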
In our research, the network's final layers respond to SRBs and their precise structure. Depending on the complexity of the network, the number of convolutional and max pooling layers varies, but the final result is a single image vector representation. This vector is then used to classify the image. Similar to smaller neural networks, a CNN's weights and biases (including those of the convolutional layers) must be trained. Given the size of the networks, fitting tens of thousands or even millions of parameters may be necessary for the most complex networks. To avoid under- or overfitting (Ying, 2019), the number of training instances should at least be in or close to the same order of magnitude as the number of weights and biases in the network. A resulting problem is the lack of accessible databases with tens of thousands of images on which to train for phenomena like radio bursts. There are two ways to address this problem: (i) simulating a large number of training samples for the network, or (ii) using a method called transfer learning (Zhuang et al., 2021), in which we take an advanced and powerful CNN that has already been trained on millions of images of generic scenes (containing, among other things, everyday objects like cars and people) and retrain a smaller portion of the network on our specific set of data (radio bursts). Transfer learning is based on the notion that the general forms learned by the CNN from ordinary items may be recycled for new objects, with studies showing that this is possible even for images that are wholly morphologically unlike, such as cars and solar radio bursts (Zhuang et al., 2021).
In this research, we implemented congruent CNN architectures to address the data problem. In deep learning, congruence refers to the similarity or consistency between different parts of a model or different models. It also refers to the similarity between the weights of different models trained on the same task, or the consistency of the outputs of different models when presented with the same input. In the case of GANs, we created two CNN architectures, one of them inverted, and pitted them against each other to generate realistic simulated Type III SRBs. We then used transfer learning along with the simulated dataset to train an elongated CNN architecture, YOLO, to detect Type III SRBs.
### SRB simulation
Simulating SRBs has been essential for generating large datasets for training detection and classification algorithms. It essentially eliminates the time-consuming task of searching large data archives to find good, clean images suitable for a training set. Simulating data also removes the need to clean archival data of artefacts such as RFI. We originally used parametric modelling to simulate SRBs in previous studies, in which
Figure 2: Example of a dynamic spectrum showing a real Type III solar radio burst between 20-90MHz. Notice the Type III’s vertical strip-like shape and short duration in time, lasting only a couple of seconds.
polynomials were used to create the overall Type III shape in dynamic spectra, with skewed Gaussians used for the time-dependent intensity profile at each frequency. Using this method we created Type III radio bursts with a random number, grouping, intensity, drift rate, heterogeneity, start-end frequency, and start-end time. We placed the bursts against a background of synthetic and random RFI channels, as shown in Figure 3.
When training YOLO, this strategy has a number of advantages, including the ability to generate enormous amounts of automatically labelled Type III data in a short amount of time. Using parametric modelling, we were able to construct a dataset of 80,000 images of Type III SRBs for our previous YOLO model. However, this training set created many issues when it came to testing the model's robustness. The first issue is that this data must be accurately labelled for training; YOLO needs to see what it's looking at because it's a supervised learning technique that requires a labelled dataset in order to identify a class, in this instance Type III SRBs. The automatic labelling system in the parametric modelling saved a lot of time, but there was no change in the Y-axis or height variables of these labelled bounding boxes. In other words, the height variable became overly saturated with the same static values in the training set, and therefore the final YOLO detections had the same height variation as the training set, as seen in Figure 4. As a result, we were unable to calculate the localized precision of our model.
The second issue we experienced with the parametric modelling approach was the model's lack of realism. While it provided many possibilities in terms of position, grouping, and overall shape and intensity, it did not provide us with the exact shape and intensity variation that we would see in a real observation.
With the introduction of GANs, we were able to generate Type III SRB simulations that were realistic. This allowed us to create SRBs that were similar to those seen in actual I-LOFAR observations, thus removing the need to trawl through data archives for the appropriate images for the training set. They also offered the Y-axis variation that YOLO needed to make localized Type III SRB detections.
## 4 Generative Adversarial networks
GANs are a type of generative modelling that makes use of CNNs. Generative modelling is an unsupervised learning task that automatically finds and learns regularities or patterns in input data. The model created may be used to produce or output brand new instances that have similar attributes or features to the original input dataset. The generator model is an inverted CNN, used to generate new simulated instances, and the discriminator model, a binary classifier that attempts to classify examples as real or fake (generated), are the two sub-models that are trained as part of the GANs framework, see Figure 5. These two sub-models are trained in an adversarial zero-sum game until the discriminator model is tricked approximately half of the time, indicating that the generator model is producing believable examples. To put it simply, GANs let us generate incredibly realistic new data that is based on pre-existing data.
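A schematic PyTorch training step for this zero-sum game is sketched below (`G` and `D` are assumed generator/discriminator modules with `D` ending in a sigmoid; shapes and optimisers are illustrative, not our exact configuration):

```python
import torch
import torch.nn.functional as F

def gan_step(G, D, real, opt_g, opt_d, z_dim=100):
    """One adversarial update: D learns real-vs-fake, G learns to fool D."""
    n = real.size(0)
    fake = G(torch.randn(n, z_dim))
    # Discriminator: push D(real) towards 1 and D(fake) towards 0.
    d_loss = (F.binary_cross_entropy(D(real), torch.ones(n, 1)) +
              F.binary_cross_entropy(D(fake.detach()), torch.zeros(n, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: push D(fake) towards 1, i.e. fool the discriminator.
    g_loss = F.binary_cross_entropy(D(fake), torch.ones(n, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```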
### Type III Generation
The GAN was trained to create simulated Type III SRBs. The training set consisted of 2,763 real Type III images that I-LOFAR obtained by merging several observation days, with each observation day broken up into 10-minute chunks. The vertical strip shape of a Type III is visible in these 10-minute chunks. In terms of solar activity, these observation days alternate between active and relatively quiet. This data is then cleaned to generate images that are free of interference, such as embedded RFI. We don't need to label any data because the GAN is an unsupervised algorithm, so the images are fed directly into the algorithm for training. The GAN algorithm is quite computationally demanding as we are trying to produce images from just general noise or random vectors. Therefore, the system used to train the GAN included two Nvidia Geforce RTX 2080 Ti GPUs connected via SLI, running Ubuntu 20.04.2 LTS on an AMD Ryzen Threadripper 1950x with 32GB of RAM. For the training configuration, we were using 90% of GPU capacity for a variety of different epochs at a batch size of 32, see Figure 6.
During training, we produced 8 generated images after each epoch to create a collection of fake images. The GAN was trained numerous times, which allowed us to build a dataset of over 4,500 simulated Type III SRBs that were random in number, grouping, intensity, drift rate, heterogeneity, start-end frequency, and start-end time. The generated SRBs compared very
Figure 4: A 10-minute segment from the testset of our previous model attempt. The lack of variation and over-saturation in the Y-axis or height variable in the training set meant that when our previous YOLO model was evaluated, the bounding box predictions (highlighted in green) in the test set had no variation in the Y-axis or height variable no matter the size of the Type III.
Figure 3: Parametric modelling simulation (a) compared to a real Type III observation (b). The parametric modelling method fails to simulate activity seen in a real observation such as small, faint Type III bursts or interference like embedded RFI.
well with real Type IIIs observed by I-LOFAR. We then filtered out noisy generated images produced when the generator error significantly spiked during training. The generated images were small at 128 x 128 pixels, so they were bulk-rescaled up to 256 x 256. To evaluate the images produced by the GAN, we used human perception, the most efficient way of evaluating these GAN-produced Type III SRBs (Borji 2019). In Figure 7, we compare the GAN results, along with the parametric methods, to real Type III SRBs observed by I-LOFAR. This generated data was then used as part of a hybrid training set for YOLO.
## 5 You only look once (YOLO)
Identifying what entities are present in a given image, and where they are located, is a computer vision problem known as object detection. Detecting objects in an image is more difficult than classifying them, as classification only distinguishes between objects but does not indicate their exact positions in an image. Additionally, classification fails when applied to images with numerous objects. YOLO employs a different strategy. YOLO is a CNN that, depending on its configuration, does real-time object detection. The method divides the image into grid regions and predicts bounding boxes and probabilities for each grid zone using a single CNN (see Figure 8). These bounding boxes are weighted by the predicted probabilities. Due to its high accuracy and real-time functionality, YOLO is popular. The approach "only looks once" at the image since it only needs one forward propagation run through the neural network to provide predictions. Using YOLO, a single CNN can predict a variety of bounding boxes and class probabilities for those boxes. By using complete images for training, YOLO enhances detection performance. Because Type III SRBs are often short-lived (\(\sim\)0.1-3 seconds) and have a drift rate of 500 MHz s\({}^{-1}\) in dynamic spectra (Reid & Ratcliffe 2014), we chose YOLO for this investigation. The fundamental benefit of YOLO is that it is very quick and can deliver accuracy that is virtually equivalent to Faster R-CNN (Ren et al. 2017).
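For orientation, the shape of a YOLOv2-style prediction tensor under the single-class, 288 x 288 configuration described later in this section can be sketched as follows (illustrative bookkeeping, assuming the standard stride-32 output grid and five anchor boxes):

```python
# Illustrative YOLOv2 output-grid bookkeeping (assumed, not from the paper):
# a 288 x 288 input with stride 32 gives a 9 x 9 grid; each of B anchor
# boxes predicts (x, y, w, h, objectness) plus C class probabilities.
S, B, C = 288 // 32, 5, 1
prediction_shape = (S, S, B * (5 + C))
print(prediction_shape)   # (9, 9, 30)
```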
### Dataset
The key feature of our updated YOLOv2 model, relative to our previous work, is the dataset. Instead of the parametric-modelling generated data, we used a hybrid dataset, which offered more realistic data for YOLO to train on. The hybrid dataset consisted of data generated by GANs and real observed data from I-LOFAR in a 50:50 split. This improved dataset of 6,732 images, with just over 60,000 Type III examples, is considerably smaller than the parametric modelling training set (80,000 images); however, it is more realistic and offers more robustness when testing on real I-LOFAR observations. The improved training set also occupies less memory, so it can be transferred easily without risk of data corruption. The one constraint of this approach is that the dataset must be manually labelled. Once the dataset was labelled, we had a training set of 6,732 images for training YOLOv2.
Figure 5: An illustration of a GAN architecture for producing simulated Type III SRB data. The generator accepts random input values (noise) and generates an image from them using a deconvolutional neural network. To upsample the data, we employ 5 transpose convolutional layers with a combination of 2 x 2 and 1 x 1 strides. Each transpose layer is followed by Batch Normalization and Leaky ReLU activation, which accommodates small negative values when the input is less than zero. The Tanh activation function is used at the top layer because generated images are typically normalized to fall within [0,1] or [-1,1]. The discriminator is fed a batch of real or fake training data, depending on the training stage, which is downsampled using 5 convolutional layers with a combination of 2 x 2 and 1 x 1 strides. Each convolutional layer is again followed by Batch Normalization and then ReLU activation, which sets all negative values to 0. The Sigmoid activation function at the top layer normalizes the output to the [0, 1] range.
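For reference, a minimal Keras sketch of the generator half of Figure 5 is given below; it is ours, not the paper's implementation. The latent dimension, kernel sizes, and filter counts are assumptions, since the paper only fixes the number of transpose layers, the stride pattern, the activations, and the 128 x 128 greyscale output.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_generator(latent_dim=100):
    """Noise vector -> 128 x 128 greyscale dynamic-spectrum image:
    five transpose-convolution layers (four with 2 x 2 strides, one with
    1 x 1), BatchNorm + LeakyReLU in between, tanh at the top layer."""
    return tf.keras.Sequential([
        layers.Dense(8 * 8 * 256, input_shape=(latent_dim,)),
        layers.Reshape((8, 8, 256)),
        layers.Conv2DTranspose(128, 5, strides=2, padding="same"),  # 16 x 16
        layers.BatchNormalization(), layers.LeakyReLU(0.2),
        layers.Conv2DTranspose(64, 5, strides=2, padding="same"),   # 32 x 32
        layers.BatchNormalization(), layers.LeakyReLU(0.2),
        layers.Conv2DTranspose(32, 5, strides=2, padding="same"),   # 64 x 64
        layers.BatchNormalization(), layers.LeakyReLU(0.2),
        layers.Conv2DTranspose(16, 5, strides=2, padding="same"),   # 128 x 128
        layers.BatchNormalization(), layers.LeakyReLU(0.2),
        layers.Conv2DTranspose(1, 5, strides=1, padding="same",
                               activation="tanh"),                  # greyscale output
    ])

print(build_generator().output_shape)  # (None, 128, 128, 1)
```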
### Model configuration
After creating the dataset, we set up the model. To create the YOLOv2 model, we used a framework called Darkflow [16], a TensorFlow (Python) translation of Darknet. As seen in Figure 8, YOLOv2 has 19 convolutional layers and 5 maxpool layers.
\[\text{filters}=\text{bounding}\times(\text{classes}+\text{coords}) \tag{1}\]
To optimize the model, the number of filters in the final convolutional layer was reduced (see Equation 1), and the sizes of the bounding boxes were adjusted through the anchor values, i.e. the bounding box dimensions. In our previous research, we had set the height of the bounding box to a static 10-90 MHz, owing to the lack of Y-axis variation in the parametric modelling approach. With our new and improved training set, we adjusted the anchor value ranges to 10-30 MHz, 10-40 MHz, 10-50 MHz, 10-60 MHz, and 10-80 MHz. This allowed us to capture most Type III sizes, in both width and height, detected by YOLOv2. The input size was also changed from 416 x 416 to 288 x 288 to achieve real-time frame rates. With this input size, the model can detect Type IIIs in real time (90 frames per second), as it only detects one class in greyscale format [14].
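As a sanity check on the filter count: in the standard YOLOv2 head, each anchor predicts four coordinates, an objectness score, and one probability per class, so Equation 1 expands as in the small sketch below (our reading, with the objectness term written explicitly).

```python
def region_filters(num_anchors, num_classes, num_coords=4):
    """Filters in the final 1 x 1 convolution feeding YOLOv2's region layer:
    per anchor, num_coords coordinates + 1 objectness score + class terms."""
    return num_anchors * (num_coords + 1 + num_classes)

# Five anchor-box sizes and the single Type III class:
print(region_filters(num_anchors=5, num_classes=1))  # 30
```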
### Training and Validation
The YOLOv2 model was trained to detect and classify Type III SRBs. The training set employed a collection of 3,000 simulated GAN images and 3,763 real images. The data set included Type III samples with random start-end frequency, start-end time, drift rate, intensity, grouping, and inhomogeneity. The validation set, a subset of the training data, contained 1,500 Type III images produced by the GAN. The training and validation sets were both manually labelled, which, although tedious, provides YOLO with precise instructions on what to train on within each image. To fulfil the requirements of Darkflow's training pipeline, these manually labelled images were converted into XML annotations. To determine whether the model was overfitting or underfitting, we configured the Darkflow framework to validate during training. Leaky ReLU was used as the activation
Figure 6: The loss battle between the discriminator and the generator when generating Type IIIs, illustrating the GAN's learning pattern. Notice how no instance of training is the same. One key failure mode when training GANs is convergence failure, seen in plots (b) and (d) when the generator loss spikes [14]. This occurs when an equilibrium between generator loss and discriminator loss cannot be found; images generated during such periods are very poor and noisy.
function during training, and the learning rate was adjusted according to the model's learning pattern: if the model learned too quickly, the learning rate was updated to produce a smoother learning curve. Stochastic Gradient Descent (SGD) was used to continuously update the learning parameters until convergence. We trained the YOLO model for 1,000 epochs at a batch size of 16. With this configuration, training took 7 days, with both training and validation loss decreasing with every iteration, as shown in Figure 9.
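For reference, a training run like the one described can be launched through Darkflow's Python interface roughly as follows. This is a sketch, not the pipeline's actual script: the cfg name and data paths are hypothetical, while the option values mirror the configuration above (batch size 16, 1,000 epochs, 90% GPU usage).

```python
from darkflow.net.build import TFNet

options = {
    "model": "cfg/yolov2-srb.cfg",    # hypothetical cfg: 288 x 288 input, 30 filters
    "train": True,
    "dataset": "data/hybrid/images",  # hypothetical path: GAN + I-LOFAR hybrid images
    "annotation": "data/hybrid/xml",  # hypothetical path: manually labelled XML files
    "batch": 16,
    "epoch": 1000,
    "gpu": 0.9,                       # fraction of GPU capacity to use
    "lr": 1e-5,                       # adjusted whenever the model learned too quickly
}
TFNet(options).train()
```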
### Test set
When testing our previous model, we chose a specific observation date, the 10th of September 2017. Although it provided us with good benchmark results, it never tested the robustness of our model, as the observation was relatively uneventful. To test the model's robustness, a test set is needed that contains a variety of Type III examples. Therefore, multiple observations from different dates in the I-LOFAR archive were mined to build a test set containing examples of both busy and quiet periods of solar activity. So that colour would not be an influencing factor in the model's predictions, the images were converted to greyscale (Rafegas & Vanrell 2017). We concentrated our observations in the 10-90 MHz band, as this is where the Type III's vertical strip
Figure 8: The Darknet-19 CNN architecture of YOLOv2, consisting of 19 convolutional layers and 5 maxpool layers. The Reorg layer combines both high- and mid-level features for better detection accuracy. To increase accuracy in YOLOv2, the fully connected layers of the CNN are eliminated, and k-means clustering of bounding-box dimensions is used to select anchors for detection and classification (Redmon & Farhadi 2017).
Figure 7: Parametric modelling and GANs compared to real Type III SRBs observed by I-LOFAR. GANs produce more realistic examples of Type IIIs compared to the parametric modelling method. These GAN-produced Type IIIs were combined with real Type IIIs observed by I-LOFAR to create a training set for YOLO.
Figure 9: Comparing training loss with validation loss illustrates how well YOLO is learning the training set, and is also used to prevent the algorithm from overfitting. Each epoch represents 422 iterations, i.e. one full forward and backward pass of the dataset through YOLO.
shape can be seen. The observations were then divided into 10-minute intervals to provide a test set of 2,763 images containing around 35,000 Type III solar radio bursts. We then precisely annotated our ground-truth bounding box values: whenever a Type III was labelled, the corresponding bounding box coordinates were saved in an XML file so that mAP could compare the ground-truth coordinates with the model's predicted coordinates.
## 6 Results
The YOLOv2 model's performance was measured using the test set described in Section 5.4. In our previous research, we evaluated the model as a detection-classification problem: we first ran predictions on the test set and then sifted through them searching for correctly identified Type III SRBs. Once a bounding box encompassed a Type III, we categorised it as correctly identified and annotated a ground-truth box around the predicted bounding box. This meant the only metric in which we could report our previous model's performance was the f1-score. The f1-score balances precision and recall, where precision refers to how accurately the model predicted an object's position and recall is the proportion of true positives to all actual objects. Then:
\[\text{Precision}=\frac{TP}{TP+FP} \tag{2}\]
\[\text{Recall}=\frac{TP}{TP+FN} \tag{3}\]
\[\text{f1-score}=2\times\frac{\text{Precision}\times\text{Recall}}{\text{Precision}+\text{Recall}} \tag{4}\]
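For concreteness, Equations (2)-(4) translate directly into code; the sketch below is ours, and the FN count in the example is inferred from the recall reported in Table 1 rather than stated explicitly in the text.

```python
def precision_recall_f1(tp, fp, fn):
    """Equations (2)-(4) from true positives, false positives and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall, 2 * precision * recall / (precision + recall)

# The 0.35-confidence row of Table 1 (IoU @ 0.5): TP = 30404, FP = 28026;
# the 52.03% recall implies roughly FN = 28034 missed Type IIIs.
print(precision_recall_f1(30404, 28026, 28034))
```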
Although we could determine how accurate our model was, we could never calculate mAP (mean Average Precision), which is a metric used in computer vision for evaluating the performance of object detection and image classification algorithms. It calculates the average precision across all classes and is expressed as a fraction between 0 and 1. It takes into account both the number of true positive (TP) detections and false positive (FP) detections. A higher mAP score indicates a better performance of the algorithm. The TP and FP variables in mAP are determined by comparing the ground truth bounding box to the model's predicted bounding box, also known as Intersection over Union (IoU).
\[\text{Intersection over Union}=\frac{\text{Area of Overlap}}{\text{Area of Union}} \tag{5}\]
The IoU values from the tested data are used to determine the TPs and FPs. We compared YOLO's predicted bounding box with the actual ground-truth bounding box to obtain the IoU. If the IoU is greater than 0.5, a prediction is categorized as a TP; if it is less than 0.5, as an FP. Figure 10 illustrates the False Negative (FN) case, where the model misses a Type III object entirely.
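The IoU test of Equation (5) and the resulting TP/FP decision can be sketched as follows; this is our illustration, and the boxes are assumed to be given as corner coordinates.

```python
def iou(a, b):
    """Equation (5) for boxes (x1, y1, x2, y2): overlap area over union area."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

def classify_detection(ground_truth, prediction, threshold=0.5):
    """TP if the prediction overlaps the ground truth above the IoU threshold."""
    return "TP" if iou(ground_truth, prediction) > threshold else "FP"

print(classify_detection((10, 10, 90, 60), (15, 12, 88, 58)))  # TP
```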
The confidence threshold is an important consideration when assessing the model's performance: it expresses the model's level of certainty in predicting a Type III SRB. Figure 11 shows that the lower the confidence threshold, the more detections are made on a test image, but also the more false detections. The model was found to be optimal with the confidence threshold set to 0.35, as this balances the TP and FP rates of the model's predictions (see Table 1 and Figure 11). Figure 12 shows the bounding box predictions with the threshold set to 0.35; the resulting mAP for detecting Type III solar radio bursts is 77.71%. We plot the localised YOLO detections onto a dynamic spectrum observation made on the 10th of September 2017; see Figure 13.
## 7 Discussion
In our previous research, we had two issues with the detection quality and robustness of the model. The first issue was detection quality: the predicted bounding boxes were very static in
Figure 11: A visual representation of Table 1. As the confidence threshold decreases, the TP and FP values increase. Here, we can see where YOLO performs at its optimised confidence threshold; the key is to find the balance between the true positive and false positive values. For our model, we have evaluated using an IoU threshold of 0.5 (b), as it tests the model's robustness and localised accuracy in detecting Type III SRBs.
Figure 10: A visual representation of IoU thresholding. The green bounding box indicates the ground truth, or actual Type III; the blue bounding box indicates a correctly predicted bounding box by YOLO (TP); and the red bounding box indicates a false detection, i.e. IoU < 0.5.
the Y-axis or height variable. The previous parametric modelling method also lacked the ability to produce simulated Type III SRB data comparable to real observed data. With the introduction of GANs for generating Type III examples, we could produce simulated SRB data with variation in the Y-axis or height variable. GANs also provided realistic Type III examples, almost identical to real observed data and free of any interference such as RFI. The second issue was robustness: the model could not handle high volumes of data, having problems with SRB groupings or storms. Using the GAN-simulated data, we could produce all sorts of Type III variations, including groupings and storms, but also other classes of Type IIIs such as inverted-U bursts and Type N bursts. Employing this new, diverse training set, we could train YOLO to detect and classify the actual shape of the data
Figure 12: YOLO making localised detections on a 10-minute segment of the test set at the optimised confidence threshold of 0.35 (a). When the image is colour-inverted (b), we can see the faint Type IIIs YOLO is picking up. Notice how YOLO picks up most Type IIIs in the image and ignores most RFI.
Figure 13: YOLOv2 applied to an I-LOFAR observation made on the 10th of September 2017. The model's detections capture the Type IIIs' frequency range and duration. The model predicts the most intense Type IIIs correctly and ignores the fainter ones, which are quite difficult to distinguish from RFI even to the human eye.
of a Type III. With this configuration of congruent deep learning models, we can accurately detect such phenomena.
In this research, we introduced the computer vision metric mAP, which measures how accurately the model can localise Type IIIs. One concern is the approach we took in calculating this metric. We treated the task as a computer vision problem and evaluated it using the computer vision standards applied in COCO dataset competitions, in which the IoU threshold is set to 0.5. This has many advantages in terms of accurate detection and robustness when tested against both busy and quiet solar activity observations. However, we could be missing out on some valuable detection data where a visually positive detection has been made (the bounding box slightly overlaps a Type III) but the detection has been classified as false because the IoU falls below the 0.5 threshold. One could argue that we have been overly harsh on the model's detections, but we have done this with a view to having a more robust and accurate model for detecting Type III SRBs.
A key feature of our configuration of YOLO is the potential to detect Type III SRBs in real time. Our tests were conducted on small data streams; however, with the recent development of REALTA, a computing backend at I-LOFAR, the potential exists for recording and processing data in near real time. The combination of YOLO, as a real-time software pipeline, and REALTA's hardware capabilities could prove significant for near real-time space weather monitoring. LOFAR for Space Weather (LOFAR4SW) is a planned LOFAR improvement that would allow for frequent space weather monitoring, enabling near-real-time monitoring of space weather phenomena such as solar flares and coronal mass ejections (Carley et al. 2020a). This will benefit not only space weather researchers but also the radio astronomy community: it will increase our understanding of how space weather affects radio wave propagation in the inner heliosphere and ionospheric disturbances, as well as the impact this has on observing astronomical sources. A backend such as REALTA will be required to capture the data streams from a LOFAR4SW-equipped international station in local mode and analyse the raw data so that it may be used by space weather researchers and forecasters. With YOLO's proven ability to detect Type IIIs in real time and REALTA's ability to record and process data in near real time, the possibility of near real-time Type III SRB analysis is promising.
## 8 Conclusion
We applied a combination of deep learning models to the problem of SRB generation, detection, and classification, with the focus on real-time detection. We trained a GAN to produce realistic Type III SRBs, similar to those observed in real observations from I-LOFAR. We then labeled and combined this generated data with real observed data to produce a training set on which YOLOv2 could be evaluated. This particular configuration of YOLOv2 can achieve a mAP accuracy of 77.71% on a real data observation consisting of over 35,000 Type III solar radio burst examples while also achieving real-time frame rates (maximum 90 fps). The combination of YOLO, as a real-time
\begin{table}
\begin{tabular}{|c||c|c|c|c|c|} \hline IoU Threshold & Confidence Threshold & Recall & True Positive & False Positive & mAP \\ \hline \multirow{8}{*}{IoU Threshold @ 0.5} & 0.7 & 84.03\% & 20838 & 3959 & 57.56\% \\ & 0.65 & 79.30\% & 23347 & 6093 & 63.57\% \\ & 0.6 & 74.14\% & 25398 & 8858 & 68.19\% \\ & 0.55 & 69.34\% & 26942 & 11911 & 71.44\% \\ & 0.5 & 64.74\% & 28140 & 15321 & 73.79\% \\ & 0.45 & 60.36\% & 29094 & 19099 & 75.55\% \\ & 0.4 & 56.18\% & 29828 & 23261 & 76.80\% \\ & 0.35 & 52.03\% & 30404 & 28026 & 77.71\% \\ & 0.33 & 50.42\% & 30592 & 30073 & 78.00\% \\ & 0.3 & 48.02\% & 30885 & 33422 & 78.42\% \\ & 0.25 & 44.00\% & 31295 & 39827 & 78.97\% \\ & 0.2 & 39.86\% & 31685 & 47801 & 79.45\% \\ & 0.15 & 35.28\% & 32060 & 58811 & 79.87\% \\ & 0.1 & 30.08\% & 32422 & 75336 & 80.21\% \\ \hline \multirow{8}{*}{IoU Threshold @ 0.1} & 0.7 & 84.74\% & 21014 & 3783 & 58.32\% \\ & 0.65 & 79.99\% & 23549 & 5891 & 64.45\% \\ & 0.6 & 74.75\% & 25604 & 8648 & 69.12\% \\ & 0.55 & 69.90\% & 27162 & 11691 & 72.42\% \\ & 0.5 & 65.23\% & 28353 & 15108 & 74.78\% \\ & 0.45 & 60.79\% & 29298 & 18895 & 76.53\% \\ & 0.4 & 56.58\% & 30039 & 23050 & 77.80\% \\ & 0.35 & 52.38\% & 30604 & 27822 & 78.71\% \\ & 0.33 & 50.75\% & 30789 & 29876 & 78.99\% \\ & 0.3 & 79.41\% & 31082 & 33225 & 79.41\% \\ & 0.25 & 44.27\% & 31387 & 39635 & 79.96\% \\ & 0.2 & 40.11\% & 31880 & 47606 & 80.45\% \\ & 0.15 & 35.49\% & 32255 & 58616 & 80.87\% \\ & 0.1 & 30.27\% & 32625 & 75133 & 81.22\% \\ \hline \end{tabular}
\end{table}
Table 1: mAP scores associated with different confidence thresholds set in YOLO at different IoU thresholds. Notice how, as the confidence threshold decreases, the mAP increases, but so too do the true positive and false positive counts. The challenge is to find a balance between the metrics for optimised performance in terms of accuracy.
software pipeline, and REALTA's hardware capabilities, could prove significant for near real-time space weather monitoring.
We intend to develop this software pipeline further, not only by increasing the size and variety of the dataset using GANs, but also by extending YOLO's capability to detect other SRBs such as Type IIs. We have shown that with congruent deep learning model techniques we can create a robust method of detecting Type III SRBs, illustrating that accurate real-time detection and classification of Type III SRBs is readily attainable.
###### Acknowledgements.
LOFAR is one of the largest astrophysics projects in Europe, consisting of 12 international stations spread across Germany, Poland, France, the UK, Sweden and Ireland, with additional stations and a central hub in The Netherlands, operated by the Netherlands Institute for Radio Astronomy (ASTRON). I-LOFAR was the Irish addition to this network and was constructed by members from Trinity College Dublin (TCD), University College Dublin (UCD), Armagh Observatory, Dublin City University (DCU), University College Cork (UCC) and National University of Ireland Galway (NUIG), with funding from Science Foundation Ireland (SFI), the Department of Business, Enterprise and Innovation, Open Eir and Offaly County Council. J. Scully acknowledges support from SFI and the Technological University of the Shannon (TUS).
|
2303.05693 | On spectra of Hermitian Randić matrix of second kind | Let $X$ be a mixed graph and $\omega=\frac{1+\mathbf{i}\sqrt{3}}{2}$. We write
$i\rightarrow j$, if there is an oriented edge from a vertex $v_i$ to another
vertex $v_j$, and $i\sim j$ for an un-oriented edge between the vertices $v_i$
and $v_j$. The degree of a vertex $v_i$ is denoted by $d_i$. We propose the
Hermitian Randi\'c matrix of second kind $R^\omega(X)\coloneqq(R^\omega_{ij})$,
where $R^\omega_{ij}=\frac{1}{\sqrt{d_id_j}}$ if $i \sim j$, $R^\omega_{ij}=
\frac{\omega}{\sqrt{d_id_j}}$ and $R^\omega_{ji}=
\frac{\overline{\omega}}{\sqrt{d_id_j}}$ if $i\rightarrow j$, and 0 otherwise.
In this paper, we investigate some spectral features of this novel Hermitian
matrix and study a few properties like positiveness, bipartiteness,
edge-interlacing etc. We also compute the characteristic polynomial for this
new matrix and obtain some upper and lower bounds for the eigenvalues and the
energy of this matrix. | A Bharali, B Bhattacharjya, S Borah, I J Gogoi | 2023-03-10T03:53:24Z | http://arxiv.org/abs/2303.05693v2 | # On spectra of Hermitian Randic matrix of second kind
###### Abstract
We propose the Hermitian Randic matrix \(R^{\mathbf{\omega}}(X)\coloneqq(R^{\mathbf{\omega}}_{ij})\), where \(\mathbf{\omega}=\frac{1+i\sqrt{3}}{2}\) and \(R^{\mathbf{\omega}}_{ij}=1/\sqrt{d_{i}d_{j}}\) if \(v_{i}v_{j}\) is an unoriented edge, \(\mathbf{\omega}/\sqrt{d_{i}d_{j}}\) if \(v_{i}\to v_{j}\), \(\overline{\mathbf{\omega}}/\sqrt{d_{i}d_{j}}\) if \(v_{i}\gets v_{j}\), and \(0\) otherwise. This appears to be more natural because of \(\mathbf{\omega}+\overline{\mathbf{\omega}}=1\) and \(|\mathbf{\omega}|=1\). In this paper, we investigate some features of this novel Hermitian matrix and study a few properties like positiveness, bipartiteness, edge-interlacing etc. We also compute the characteristic polynomial for this new matrix and obtain some upper and lower bounds for the eigenvalues and the energy of this matrix.
**Keywords:** Mixed graph; Hermitian adjacency matrix; Hermitian Randic matrix; graph energy
**AMS Classifications (2010):** 05C50; 05C09; 05C31
## 1 Introduction
There has been an upsurge of studies related to the spectral properties of graph-theoretic matrices, since these properties play a vital role in analyzing properties of networks. In recent times, extending the spectral theory of unoriented networks to mixed networks has become a popular topic: compared with unoriented networks, mixed networks are much better suited to modelling real-world problems. However, many graph matrices for mixed networks turn out to be non-symmetric, losing the property that the eigenvalues are real.
Recently, many researchers have studied the spectral properties of the adjacency matrix, Laplacian matrix, normalized Laplacian matrix, etc., of mixed networks by incorporating modified versions of these matrices; for details, see [1, 2, 10, 26, 28]. In 2015, Yu and Qu [27] described some notable work on the Hermitian Laplacian matrix of mixed graphs. In the same year, Liu and Li [17] studied some properties of the Hermitian adjacency matrix and determined bounds for the energies of mixed graphs. Similar work on the Randić matrix was done by Lu et al. [18] in 2017. In 2019, Yu et al. [25] defined the Hermitian normalized Laplacian matrix and studied some of its spectral properties for mixed networks. In 2020, B. Mohar [21] introduced a new modified Hermitian matrix that seems more natural. Some relevant notable works can be found in [9, 14, 15, 16, 22, 23, 24].
## 2 Preliminaries
Throughout the paper, we consider connected simple graphs with at least two vertices. A graph \(X\) is said to be mixed if it may contain both oriented and unoriented edges. The graph obtained by removing the orientations from a mixed graph \(X\) is called the _underlying graph_\(X_{U}\) of \(X\). A _cycle_ in a mixed graph is a cycle in its underlying graph. A cycle is even or odd according as its order is even or odd.
Let \(X\) be an unoriented graph. An edge of \(X\) between \(v_{i}\) and \(v_{j}\) is denoted by \(e_{ij}\). Note that the edge \(e_{ij}\) can be assigned two orientations. An oriented edge from \(v_{i}\) to \(v_{j}\) is denoted by \(\overrightarrow{e_{ij}}\). For each edge \(e_{ij}\in E(X)\), there is a pair of oriented edges \(\overrightarrow{e_{ij}}\) and \(\overrightarrow{e_{ji}}\). The collection \(\overrightarrow{E}(X)\coloneqq\{\overrightarrow{e_{ij}},\overrightarrow{e_{ ji}}:e_{ij}\in E(X)\}\) is the oriented edge set associated with \(X\). Note that each edge of an unoriented graph is of the form \(e_{ij}\). The set \(\overrightarrow{E}(X)\) is the collection of all possible oriented edges of an unoriented graph \(X\). If \(X\) is a mixed graph, then at most one of \(e_{ij},\overrightarrow{e_{ij}}\) and \(\overrightarrow{e_{ji}}\) can be in \(E(X)\).
A _gain graph_ or \(\mathbb{T}\)-gain graph is a triplet \(\Phi\coloneqq(X,\mathbb{T},\varphi)\), where \(X\) is an unoriented graph, \(\mathbb{T}=\{z\in\mathbb{C}:|z|=1\}\) and \(\varphi:\overrightarrow{E}(X)\rightarrow\mathbb{T}\) is a function satisfying \(\varphi(\overrightarrow{e_{ij}})=\varphi(\overrightarrow{e_{ji}})^{-1}\), for each \(e_{ij}\in E(X)\). The function \(\varphi\) is called the _gain function_ of \((X,\mathbb{T},\varphi)\). For simplicity, we use \(\Phi\coloneqq(X,\varphi)\) to denote a \(\mathbb{T}\)-gain graph instead of \(\Phi\coloneqq(X,\mathbb{T},\varphi)\). Again, \(-\Phi\) represents the \(\mathbb{T}\)-gain graph with gain function \(-\varphi\), that is, \(-\Phi\coloneqq(X,-\varphi)\). In [23], Reff
proposed the notion of the adjacency matrix \(A(\Phi)\coloneqq(a_{ij})\) of a \(\mathbb{T}\)-gain graph, where
\[a_{ij}=\left\{\begin{array}{ll}\varphi(\overrightarrow{e_{ij}})&\mbox{if $v_{i}$ is adjacent to $v_{j}$}\\ 0&\mbox{otherwise.}\end{array}\right.\]
It is clear that \(A(\Phi)\) is Hermitian, and thus its eigenvalues are real. If \(\varphi(\overrightarrow{e_{ij}})=1\) for all \(\overrightarrow{e_{ij}}\), then \(A(\Phi)=A(X)\), where \(A(X)\) is the adjacency matrix of the graph \(X\). Thus we can regard a graph \(X\) as the \(\mathbb{T}\)-gain graph \((X,\mathbf{1})\), where \(\mathbf{1}\) is the function that assigns \(1\) to each edge of \(X\). By slight abuse of notation, we sometimes write \(\varphi(\overrightarrow{e_{ij}})\) as \(\varphi(e_{ij})\) and \(\varphi(\overrightarrow{e_{ji}})\) as \(\varphi(e_{ji})\). A _switching function_\(\zeta\) of \(X\) is a function from \(V(X)\) to \(\mathbb{T}\), that is, \(\zeta:V(X)\rightarrow\mathbb{T}\). Two gain graphs \(\Phi_{1}\coloneqq(X,\varphi_{1})\) and \(\Phi_{2}\coloneqq(X,\varphi_{2})\) are said to be _switching equivalent_, written \(\Phi_{1}\sim\Phi_{2}\), if there exists a switching function \(\zeta:V(X)\rightarrow\mathbb{T}\) such that
\[\varphi_{2}(e_{ij})=\zeta(v_{i})^{-1}\varphi_{1}(e_{ij})\zeta(v_{j}),\mbox{ where $v_{i}$ and $v_{j}$ are adjacent vertices of the edge $e_{ij}$.}\]
It is clear from the definition that the gain graphs \(\Phi_{1}\) and \(\Phi_{2}\) are switching equivalent if and only if there is a diagonal matrix \(D_{\zeta}\), where the diagonal entries come from \(\mathbb{T}\), such that
\[A(\Phi_{2})=D_{\zeta}^{-1}A(\Phi_{1})D_{\zeta}.\]
Guo and Mohar [10] introduced a Hermitian adjacency matrix of a mixed graph in 2015, where the \(ij\)-th entry is \(\mathbf{i},-\mathbf{i}\) or \(1\) according as \(\overrightarrow{e_{ij}}\in E(X)\), \(\overrightarrow{e_{ji}}\in E(X)\) or \(e_{ij}\in E(X)\) respectively, and \(0\) otherwise. Here \(\mathbf{i}=\sqrt{-1}\). This matrix has numerous appealing characteristics, including real eigenvalues and the interlacing theorem for digraphs etc.
Later in 2020, Mohar [21] put forward a new Hermitian adjacency matrix \(H(X)\coloneqq(h_{ij})\) of a mixed graph \(X\), which is referred as Hermitian matrix of second kind, where
\[h_{ij}=\left\{\begin{array}{ll}1&\mbox{if $e_{ij}\in E(X)$}\\ \boldsymbol{\omega}&\mbox{if $\overrightarrow{e_{ij}}\in E(X)$}\\ \overline{\boldsymbol{\omega}}&\mbox{if $\overrightarrow{e_{ji}}\in E(X)$}\\ 0&\mbox{otherwise.}\end{array}\right.\]
Here \(\boldsymbol{\omega}\coloneqq\frac{1+\mathbf{i}\sqrt{3}}{2}\) is a primitive sixth root of unity and \(\overline{\boldsymbol{\omega}}\coloneqq\frac{1-\mathbf{i}\sqrt{3}}{2}\) is its conjugate. Further, if \(X\) is a mixed graph, then \((X_{U},\boldsymbol{\omega})\) represents the \(\mathbb{T}\)-gain graph with gain
function \(\mathbf{\omega}:\overrightarrow{E}(X_{U})\rightarrow\{1,\mathbf{\omega},\overline{\mathbf{ \omega}}\}\), where
\[\mathbf{\omega}(\overrightarrow{e_{ij}})=\left\{\begin{array}{ll}1&\text{if }e_{ij}\in E(X)\\ \mathbf{\omega}&\text{if }\overrightarrow{e_{ij}}\in E(X)\\ \overline{\mathbf{\omega}}&\text{if }\overrightarrow{e_{ji}}\in E(X).\end{array}\right.\]
Note that \(H(X)=A(\Phi)\) for \(\Phi=(X_{U},\mathbf{\omega})\).
With the growing popularity of these Hermitian matrices, the idea of investigating spectral properties of mixed networks based on other graph matrices has also evolved. In view of this, we construct a new Hermitian-Randić matrix \(R^{\mathbf{\omega}}(X)\coloneqq(R^{\mathbf{\omega}}_{ij})\) of a mixed graph \(X\), where
\[R^{\mathbf{\omega}}_{ij}=\left\{\begin{array}{ll}\frac{1}{\sqrt{d_{i}d_{j}}}& \text{if }e_{ij}\in E(X)\\ \frac{\mathbf{\omega}}{\sqrt{d_{i}d_{j}}}&\text{if }\overrightarrow{e_{ij}}\in E(X) \\ \frac{\overline{\mathbf{\omega}}}{\sqrt{d_{i}d_{j}}}&\text{if }\overrightarrow{e_{ji}}\in E(X) \\ 0&\text{otherwise.}\end{array}\right.\]
Clearly, \(R^{\mathbf{\omega}}(X)\) is Hermitian and \(R^{\mathbf{\omega}}(X)=D^{-1/2}H(X)D^{-1/2}\), where \(D=\text{diag}(d_{1},\ldots,d_{n})\) and \(d_{i}=\text{deg}(v_{i})\) for \(i\in\{1,\ldots,n\}\). Let \(L(X)=D-H(X)\) and \(\mathfrak{L}(X)=D^{-1/2}L(X)D^{-1/2}\). The matrices \(L(X)\) and \(\mathfrak{L}(X)\) are known as Hermitian Laplacian matrix and normalized Laplacian matrix of \(X\), respectively. It is clear that \(R^{\mathbf{\omega}}(X)=I-\mathfrak{L}(X)\).
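As a quick numerical illustration (ours, not from the original text), the following sketch builds \(H(X)\) and \(R^{\mathbf{\omega}}(X)\) for a mixed 4-cycle and checks that the eigenvalues are real and lie in \([-1,1]\) (cf. Theorem 3.5 below) and that \(R^{\mathbf{\omega}}(X)=I-\mathfrak{L}(X)\). The printed spectrum is also symmetric about \(0\), as Theorem 3.7 predicts for this bipartite example.

```python
import numpy as np

w = (1 + 1j * np.sqrt(3)) / 2          # primitive sixth root of unity

# Mixed 4-cycle: unoriented edges 0~1 and 2~3, oriented edges 1->2 and 3->0.
n = 4
H = np.zeros((n, n), dtype=complex)
for i, j in [(0, 1), (2, 3)]:          # unoriented: entry 1
    H[i, j] = H[j, i] = 1
for i, j in [(1, 2), (3, 0)]:          # oriented i -> j: entries w and conj(w)
    H[i, j], H[j, i] = w, np.conj(w)

d = np.abs(H).astype(bool).sum(axis=1)          # degrees in the underlying graph
Ds = np.diag(1 / np.sqrt(d))
R = Ds @ H @ Ds                                 # Hermitian Randic matrix of 2nd kind

eig = np.linalg.eigvalsh(R)                     # real, since R is Hermitian
print(np.round(eig, 4))                         # all in [-1, 1], symmetric about 0
L = np.diag(d) - H
print(np.allclose(R, np.eye(n) - Ds @ L @ Ds))  # R = I - normalized Laplacian
```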
A walk (or path) in a mixed graph is a walk (or path) in its underlying graph. The value of a walk \(W\coloneqq v_{1}v_{2}\cdots v_{\ell}\) in a mixed graph \(X\) is defined as \((R^{\mathbf{\omega}})_{12}(R^{\mathbf{\omega}})_{23}\cdots(R^{\mathbf{\omega}})_{(\ell-1)\ell}\). A walk is called positive or negative according as its value is positive or negative, respectively. An acyclic mixed graph is defined to be positive. A non-acyclic mixed graph is positive or negative according as each of its cycles is positive or negative, respectively. A mixed graph \(X\) is called an _elementary graph_ if each component of \(X\) is an edge or a cycle.
The energy levels of \(\pi\)-electrons in conjugated hydrocarbons in molecular orbital theory are strongly related to spectral graph theory. In 1978, Gutman [11] developed the notion of graph energy based on the eigenvalues of a graph, and it has since played an important role in chemical graph theory. Later, many variants of graph energy, based on matrices other than the adjacency matrix, were proposed as a consequence of the success of this notion; for details, see [4, 5, 6, 8, 13, 15]. In 2010, Bozkurt et al. [5] proposed the Randić energy of a graph as the sum of the absolute values of the eigenvalues of the Randić matrix. In 2017, Lu et al. [18] introduced the Hermitian Randić matrix for mixed
graphs, and investigated the energy of this matrix. Analogously, we define the energy of the Hermitian Randić matrix of second kind as the sum of the absolute values of its eigenvalues.
## 3 Spectral properties of Hermitian Randic matrix of second kind
In this section, we characterize some spectral properties of \(R^{\boldsymbol{\omega}}(X)\). We begin with some known results associated with our findings.
Let \(\mathcal{M}_{n}(\mathbb{C})\) denote the set of all \(n\times n\) matrices with complex entries. For \(A\in\mathcal{M}_{n}(\mathbb{C})\), the matrix whose entries are absolute values of the corresponding entries of \(A\) is denoted by \(|A|\). The maximum of the absolute values of the eigenvalues of a matrix \(A\) is called the _spectral radius_ of \(A\). It is denoted by \(\rho(A)\). Further, the spectrum of \(A\) is denoted by \(\mathrm{Spec}(A)\).
**Theorem 3.1** ([29]).: _A \(\mathbb{T}\)-gain graph \((X,\varphi)\) is positive if and only if \((X,\varphi)\sim(X,\mathbf{1})\)._
**Theorem 3.2** ([20]).: _Let \((X,\varphi)\) be a connected and positive \(\mathbb{T}\)-gain graph. Then \(X\) is bipartite if and only if \((X,-\varphi)\) is positive._
**Theorem 3.3** ([12]).: _Let \(A,B\in\mathcal{M}_{n}(\mathbb{C})\). Suppose \(A\) is non-negative and irreducible, and \(A\geq|B|\). Let \(\lambda\coloneqq e^{i\theta}\rho(B)\) be a maximum-modulus eigenvalue of \(B\). If \(\rho(A)=\rho(B)\), then there is a diagonal unitary matrix \(D\in\mathcal{M}_{n}(\mathbb{C})\) such that \(B=e^{i\theta}DAD^{-1}\)._
In [13], Kannan et al. studied the normalized Laplacian matrix for gain graphs. They also characterized some spectral properties of the normalized adjacency matrix \(D^{-1/2}A(X)D^{-1/2}\) of an unoriented graph \(X\), generally referred to as the Randić matrix \(R(X)\). If \(X\) is a mixed graph, then the Randić matrix \(R(\Phi)\) of a \(\mathbb{T}\)-gain graph \(\Phi=(X_{U},\varphi)\) is the matrix \((R(\Phi)_{ij})\), where
\[R(\Phi)_{ij}=\left\{\begin{array}{ll}\frac{1}{\sqrt{d_{i}d_{j}}}&\mbox{if }e_{ ij}\in E(X)\\ \frac{\varphi(e_{ij})}{\sqrt{d_{i}d_{j}}}&\mbox{if }\overrightarrow{e_{ij}}\in E(X) \\ \frac{\overline{\varphi}(e_{ij})}{\sqrt{d_{i}d_{j}}}&\mbox{if }\overrightarrow{e_{ji}}\in E(X) \\ 0&\mbox{otherwise.}\end{array}\right.\]
**Lemma 3.1** ([13]).: _Let \(X\) be a connected graph. Then \(\mathrm{Spec}(R(X))\)= \(\mathrm{Spec}(-R(X))\) if and only if \(X\) is bipartite._
**Lemma 3.2** ([13]).: _Let \(\Phi_{1}\) and \(\Phi_{2}\) be two connected gain graphs. If \(\Phi_{1}\sim\Phi_{2}\), then_
\[\mathrm{Spec}(R(\Phi_{1}))=\mathrm{Spec}(R(\Phi_{2}))\]
**Lemma 3.3** ([13]).: _If \(\Phi\coloneqq(X,\varphi)\) is a connected gain graph, then_
\[\rho(R(\Phi))\leq\rho(|R(\Phi)|)=\rho(R(X)).\]
The following result is an immediate consequence of the preceding lemmas.
**Theorem 3.4**.: _If \(X\) is a mixed graph, then \(\mathrm{Spec}(R^{\boldsymbol{\omega}}(X))=\mathrm{Spec}(R(X_{U}))\) if and only if \((X_{U},\boldsymbol{\omega})\sim(X_{U},\mathbf{1})\)._
Proof.: If \(\mathrm{Spec}(R^{\boldsymbol{\omega}}(X))=\mathrm{Spec}(R(X_{U}))\), then \(\rho(R^{\boldsymbol{\omega}}(X))=\rho(R(X_{U}))\), and since \(|R^{\boldsymbol{\omega}}(X)|=R(X_{U})\) is non-negative and irreducible, Theorem 3.3 gives \(R^{\boldsymbol{\omega}}(X)=e^{i\theta}D_{\zeta}R(X_{U})D_{\zeta}^{-1}\), where \(D_{\zeta}\) is a diagonal unitary matrix. Hence
\[D_{\zeta}^{-1}R^{\boldsymbol{\omega}}(X)D_{\zeta}=e^{i\theta}R( X_{U})\] \[\text{or, }\quad D_{\zeta}^{-1}D^{-1/2}H(X)D^{-1/2}D_{\zeta}=e^{i \theta}D^{-1/2}A(X_{U})D^{-1/2}\] \[\text{or, }\quad H(X)=e^{i\theta}D_{\zeta}A(X_{U})D_{\zeta}^{-1}.\]
Since both \(H(X)\) and \(D_{\zeta}A(X_{U})D_{\zeta}^{-1}\) are Hermitian, \(e^{i\theta}\) must be real, so \(\theta\) is either \(0\) or \(\pi\). This gives that either \((X_{U},\boldsymbol{\omega})\sim(X_{U},\mathbf{1})\) or \((X_{U},\boldsymbol{\omega})\sim(X_{U},-\mathbf{1})\). If \((X_{U},\boldsymbol{\omega})\sim(X_{U},\mathbf{1})\), we are done. If \((X_{U},\boldsymbol{\omega})\sim(X_{U},-\mathbf{1})\), then by Lemma 3.2, we have \(\mathrm{Spec}(R^{\boldsymbol{\omega}}(X))=\mathrm{Spec}(-R(X_{U}))\). Again, as \(\mathrm{Spec}(R^{\boldsymbol{\omega}}(X))=\mathrm{Spec}(R(X_{U}))\), we have \(\mathrm{Spec}(R(X_{U}))=\mathrm{Spec}(-R(X_{U}))\). Thus by Lemma 3.1, \(X_{U}\) is bipartite. Now applying Theorem 3.2 to the positive gain graph \((X_{U},-\boldsymbol{\omega})\), we find that \((X_{U},\boldsymbol{\omega})\) is positive, and hence \((X_{U},\boldsymbol{\omega})\sim(X_{U},\mathbf{1})\).
Conversely, if \((X_{U},\boldsymbol{\omega})\sim(X_{U},\mathbf{1})\), then clearly \(\mathrm{Spec}(R^{\boldsymbol{\omega}}(X))\)=\(\mathrm{Spec}(R(X_{U}))\).
In order to determine some spectral properties of the matrix \(R^{\boldsymbol{\omega}}(X)\), we now provide the following lemma.
**Lemma 3.4**.: _If \(X\) is a mixed graph on \(n\) vertices and \(\mathbf{x}^{t}\coloneqq(x_{1},\ldots,x_{n})\in\mathcal{C}^{n}\), then_
\[\mathbf{x}^{*}H(X)\mathbf{x}=\sum_{v_{i}\to v_{j}}|x_{i}+h_{ij}x_{j}|^{2}- \sum_{v_{i}\to v_{j}}(|x_{i}|^{2}+|x_{j}|^{2}).\]
Proof.: By definition, we have
\[\mathbf{x}^{*}H(X)\mathbf{x} =\sum_{v_{i}\to v_{j}}(\overline{x}_{i}h_{ij}x_{j}+x_{i}\overline{h}_{ij} \overline{x}_{j})\] \[=\sum_{v_{i}\to v_{j}}(\overline{x}_{i}h_{ij}x_{j}+x_{i}\overline{h}_{ ij}\overline{x}_{j})+\overline{x}_{i}x_{i}+\overline{x}_{j}x_{j}-\overline{x}_{i}x_{i} -\overline{x}_{j}x_{j}\] \[=\sum_{v_{i}\to v_{j}}(\overline{x}_{i}h_{ij}x_{j}+\overline{x}_{i}x_ {i})+(x_{i}\overline{h}_{ij}\overline{x}_{j}+\overline{x}_{j}\overline{h}_{ij} h_{ij}x_{j})-\sum_{v_{i}\to v_{j}}(|x_{i}|^{2}+|x_{j}|^{2})\] \[=\sum_{v_{i}\to v_{j}}(x_{i}+h_{ij}x_{j})(\overline{x_{i}+h_{ij}x_{j}} )-\sum_{v_{i}\to v_{j}}(|x_{i}|^{2}+|x_{j}|^{2}),\qquad\qquad\text{ as }|h_{ij}|^{2}=1\] \[=\sum_{v_{i}\to v_{j}}|x_{i}+h_{ij}x_{j}|^{2}-\sum_{v_{i}\to v_{j}}(|x_{ i}|^{2}+|x_{j}|^{2}).\qed\]
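Lemma 3.4 is easy to verify numerically; the following spot-check (ours) compares the quadratic form with the edge-wise expansion for a mixed 4-cycle and a random complex vector.

```python
import numpy as np

w = (1 + 1j * np.sqrt(3)) / 2
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
gains = [1, w, 1, w]                   # 0~1, 2~3 unoriented; 1->2, 3->0 oriented
H = np.zeros((4, 4), dtype=complex)
for (i, j), g in zip(edges, gains):
    H[i, j], H[j, i] = g, np.conj(g)

rng = np.random.default_rng(0)
x = rng.normal(size=4) + 1j * rng.normal(size=4)

lhs = np.conj(x) @ H @ x               # x* H(X) x
rhs = sum(abs(x[i] + H[i, j] * x[j])**2 - abs(x[i])**2 - abs(x[j])**2
          for i, j in edges)           # sum over each edge once
print(np.allclose(lhs, rhs))           # True: Lemma 3.4
```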
Using the preceding lemma we have the following theorem.
**Theorem 3.5**.: _Let \(X\) be a mixed graph of order \(n\), where \(n\geq 2\). If \(\operatorname{Spec}(R^{\boldsymbol{\omega}}(X))=\{\lambda_{1},\ldots,\lambda_ {n}\}\), then \(-1\leq\lambda_{i}\leq 1\) for each \(i\in\{1,\ldots,n\}\)._
Proof.: For the complex vectors \(\mathbf{x}\) and \(\mathbf{x}^{(i)}\), define the vectors \(\mathbf{y}\coloneqq D^{-1/2}\mathbf{x}\) and \(\mathbf{y}^{(i)}\coloneqq D^{-1/2}\mathbf{x}^{(i)}\), where \(D\) is the diagonal degree matrix of the underlying graph \(X_{U}\).
By Courant-Fischer Theorem, we have
\[\lambda_{k}= \max_{\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(k-1)}\in\mathcal{C}^{n}} \min_{\begin{subarray}{c}\mathbf{x}\perp\{\mathbf{x}^{(1)},\ldots,\mathbf{x}^{ (k-1)}\}\\ \mathbf{x}\not=0\\ \mathbf{x}\in\mathcal{C}^{n}\end{subarray}}\frac{\mathbf{x}^{*}R^{\boldsymbol {\omega}}(X)\mathbf{x}}{\mathbf{x}^{*}\mathbf{x}}\] \[= \max_{\mathbf{y}^{(1)},\ldots,\mathbf{y}^{(k-1)}\in\mathcal{C}^{n}} \min_{\begin{subarray}{c}\mathbf{y}\perp\{\mathbf{y}^{(1)},\ldots,\mathbf{y}^{ (k-1)}\}\\ \mathbf{y}\not=0\\ \mathbf{y}\in\mathcal{C}^{n}\end{subarray}}\frac{\mathbf{y}^{*}H(X)\mathbf{y}}{ \mathbf{y}^{*}D\mathbf{y}}\] \[= \max_{\mathbf{y}^{(1)},\ldots,\mathbf{y}^{(k-1)}\in\mathcal{C}^{n} }\min_{\begin{subarray}{c}\mathbf{y}\perp\{\mathbf{y}^{(1)},\ldots,\mathbf{y}^{ (k-1)}\}\\ \mathbf{y}\not=0\\ \mathbf{y}\in\mathcal{C}^{n}\end{subarray}}\frac{\sum\limits_{v_{i}\to v_{j}}(| y_{i}+h_{ij}y_{j}|^{2}-|y_{i}|^{2}-|y_{j}|^{2})}{\sum\limits_{v_{i}}|y_{i}|^{2}d_{i}}, \tag{3.1}\]
where \(h_{ij}\) is the \(ij^{th}\) entry of \(H(X)\). Note that \(|a+b|^{2}\leq 2|a|^{2}+2|b|^{2}\) for two complex numbers \(a\) and \(b\). Therefore, we have
\[\sum_{v_{i}\to v_{j}}(|y_{i}+h_{ij}y_{j}|^{2}-|y_{i}|^{2}-|y_{j}|^{2})\leq\sum_{v_{i}\to v_{j}}(2|y_{i}|^{2}+2|h_{ij}y_{j}|^{2}-|y_{i}|^{2}-|y_{j}|^{2})=\sum_{v_{i}\to v_{j}}(|y_{i}|^{2}+|y_{j}|^{2})=\sum_{v_{i}}|y_{i}|^{2}d_{i},\ \text{ as }|h_{ij}|=1. \tag{3.2}\]
From Equation (3.1) and Equation (3.2), we have \(\lambda_{k}\leq 1\). Again
\[\sum\limits_{v_{i}\to v_{j}}(|y_{i}+h_{ij}y_{j}|^{2}-|y_{i}|^{2}-|y_{j}|^{2})\geq\sum\limits_{v_{i}\to v_{j}}(-|y_{i}|^{2}-|y_{j}|^{2})=-\sum\limits_{v_{i}}|y_{i}|^{2}d_{i}. \tag{3.3}\]
Hence from Equation (3.1) and Equation (3.3), we have \(\lambda_{k}\geq-1\).
Let \(S_{H}(X)\coloneqq(s_{H_{ke}})\) be an \(n\times m\) matrix indexed by the vertices and edges of a mixed graph \(X\), with \(|s_{H_{ke}}|=1\) whenever \(v_{k}\) is an end vertex of \(e\), and
\[s_{H_{ke}}=\left\{\begin{array}{ll}-s_{H_{\ell e}}&\text{if }e=e_{k\ell} \\ -\mathbf{\omega}s_{H_{\ell e}}&\text{if }e=\overrightarrow{e_{k\ell}}\\ -\overline{\mathbf{\omega}}s_{H_{\ell e}}&\text{if }e=\overrightarrow{e_{\ell k}}\\ 0,&\text{otherwise}.\end{array}\right.\]
If \(D\) is the diagonal degree matrix of the underlying graph \(X_{U}\) and \(\left(D^{-1/2}S_{H}(X)\right)\left(D^{-1/2}S_{H}(X)\right)^{*}=(\alpha_{k\ell })_{n\times n}\), then
\[\alpha_{k\ell}=\sum\limits_{e\in E(X)}s_{ke}\cdot\overline{s}_{\ell e}=\sum \limits_{e\in E(X)}\frac{1}{\sqrt{d_{k}}}s_{H_{ke}}\cdot\frac{1}{\sqrt{d_{ \ell}}}\overline{s}_{H_{\ell e}}=\sum\limits_{e\in E(X)}\frac{1}{\sqrt{d_{k}d _{\ell}}}s_{H_{ke}}\cdot\overline{s}_{H_{le}}.\]
Thus \(\alpha_{kk}=\sum\limits_{e\in E(X)}\frac{1}{\sqrt{d_{k}d_{k}}}s_{H_{ke}}\overline{s}_{H_{ke}}=\sum\limits_{e\in E(X)}\frac{1}{d_{k}}|s_{H_{ke}}|^{2}=\frac{1}{d_{k}}\cdot d_{k}=1\).
Now assume that \(k\neq\ell\),
1. For \(e_{k\ell}\in E(X)\), \[\alpha_{k\ell}=s_{ke}\cdot\overline{s}_{le}=\frac{1}{\sqrt{d_{k}}}s_{H_{ke}} \cdot\frac{1}{\sqrt{d_{\ell}}}\overline{s}_{H_{\ell e}}=\frac{1}{\sqrt{d_{k}d _{\ell}}}(-s_{H_{\ell e}})\overline{s}_{H_{le}}=-\frac{1}{\sqrt{d_{k}d_{\ell}} }\left|s_{H_{\ell e}}\right|^{2}=-\frac{1}{\sqrt{d_{k}d_{\ell}}}\]
2. For \(\overrightarrow{e_{k\ell}}\in E(X)\), \[\alpha_{k\ell}=s_{ke}\cdot\overline{s}_{\ell e}=\frac{1}{\sqrt{d_{k}}}s_{H_{ke }}\cdot\frac{1}{\sqrt{d_{\ell}}}\overline{s}_{H_{\ell e}}=\frac{1}{\sqrt{d_{k} d_{l}}}(-\mathbf{\omega}s_{H_{le}})\overline{s}_{H_{le}}=\frac{-\mathbf{\omega}}{\sqrt{d_{k}d _{\ell}}}\left|s_{H_{\ell e}}\right|^{2}=\frac{-\mathbf{\omega}}{\sqrt{d_{k}d_{ \ell}}}\]
3. For \(\overrightarrow{e_{lk}}\in E(X)\), \[\alpha_{k\ell}=s_{ke}\cdot\overline{s}_{le}=\frac{1}{\sqrt{d_{k}}}s_{H_{ke}} \cdot\frac{1}{\sqrt{d_{\ell}}}\overline{s}_{H_{\ell e}}=\frac{1}{\sqrt{d_{k}d _{l}}}(-\overline{\mathbf{\omega}}s_{H_{\ell e}})\overline{s}_{H_{\ell e}}=\frac{ -\overline{\mathbf{\omega}}}{\sqrt{d_{k}d_{\ell}}}\left|s_{H_{\ell e}}\right|^{2}= \frac{-\overline{\mathbf{\omega}}}{\sqrt{d_{k}d_{\ell}}}\]
Thus, \(R^{\mathbf{\omega}}(X)=I-\left(D^{-1/2}S_{H}(X)\right)\left(D^{-1/2}S_{H}(X)\right) ^{*}\).
**Lemma 3.5** ([27]).: _A mixed graph \(X\) is positive if and only if for any two vertices \(v_{i}\) and \(v_{j}\) all paths from \(v_{i}\) to \(v_{j}\) have same value._
**Theorem 3.6**.: _Let \(X\) be a connected mixed graph. If \(1\) is an eigenvalue of \(R^{\boldsymbol{\omega}}(X)\), then it must be simple, and \(X\) is positive._
Proof.: Assume that \(1\) is an eigenvalue of \(R^{\mathbf{\omega}}(X)\), and let \(\mathbf{x}=(x_{1},\ldots,x_{n})^{t}\) be a corresponding eigenvector. We have
\[R^{\mathbf{\omega}}(X)\mathbf{x}=\mathbf{x}\] \[\text{or, }\quad(I-R^{\mathbf{\omega}}(X))\mathbf{x}=0\] \[\text{or, }\quad\left(D^{-1/2}S_{H}(X)\right)\left(D^{-1/2}S_{H}(X)\right)^{*}\mathbf{x}=0\] \[\text{or, }\quad\langle\left(D^{-1/2}S_{H}(X)\right)\left(D^{-1/2}S_{H}(X)\right)^{*}\mathbf{x},\mathbf{x}\rangle=0\] \[\text{or, }\quad\langle S_{H}^{*}(X)D^{-1/2}\mathbf{x},S_{H}^{*}(X)D^{-1/2}\mathbf{x}\rangle=0.\]
Thus each entry of \(S_{H}^{*}(X)D^{-1/2}\mathbf{x}\) satisfies
\[(S_{H}^{*}(X)D^{-1/2}\mathbf{x})_{e}=0\] \[\text{or, }\quad\overline{s}_{H_{ie}}d_{i}^{-1/2}x_{i}+\overline{s}_{H_{je}}d_{j}^{-1/2}x_{j}=0\]
for an edge \(e\) incident to both \(v_{i}\) and \(v_{j}\). Note that \(s_{H_{ie}}\overline{s}_{H_{je}}=-h_{ij}\), so \(x_{i}=\sqrt{\frac{d_{i}}{d_{j}}}h_{ij}x_{j}\) for any edge incident to \(v_{i}\) and \(v_{j}\). Let \(W_{i}\coloneqq u_{1}u_{2}\ldots u_{i}\) be any \(u_{1}u_{i}\)-walk such that \(u_{1}=v_{1}\) and \(u_{i}=v_{i}\), and let \(h(W_{i})\) denote its value. We have \(x_{1}=\sqrt{\frac{d_{1}}{d_{2}}}h_{12}x_{2}=\sqrt{\frac{d_{1}}{d_{3}}}h_{12}h_{23}x_{3}=\cdots=\sqrt{\frac{d_{1}}{d_{i}}}h(W_{i})x_{i}\). This implies that each \(u_{1}u_{i}\)-walk has the same value, so \(X\) is positive by Lemma 3.5. Moreover, \(\mathbf{x}^{t}=(x_{1},\ldots,x_{i})=(x_{1},\sqrt{\frac{d_{2}}{d_{1}}}\overline{h(W_{2})}x_{1},\ldots,\sqrt{\frac{d_{i}}{d_{1}}}\overline{h(W_{i})}x_{1})=x_{1}(1,\sqrt{\frac{d_{2}}{d_{1}}}\overline{h(W_{2})},\ldots,\sqrt{\frac{d_{i}}{d_{1}}}\overline{h(W_{i})}).\) Hence \(1\) is an eigenvalue of \(R^{\mathbf{\omega}}(X)\) with multiplicity \(1\).
Yu et al. [25], in their study of Hermitian normalized Laplacian matrix for mixed networks, established that a graph is bipartite if and only if all of its eigenvalues are symmetric about 1. The symmetric characteristics of the \(R^{\mathbf{\omega}}(X)\) eigenvalues can also be determined in a similar manner.
**Theorem 3.7**.: _If \(X\) is a connected mixed graph, then \(X\) is bipartite if and only if all eigenvalues of \(R^{\mathbf{\omega}}(X)\) are symmetric about 0._
Proof.: Because of \(R^{\mathbf{\omega}}(X)=I-\mathfrak{L}(X)\), the proof is analogous to the proof of Theorem 3.5 in [25].
**Theorem 3.8** ([25]).: _If \(X\) is a connected mixed graph, then \(2\) is an eigenvalue of \(\mathfrak{L}(X)\) if and only if \(X\) is a positive bipartite graph._
Noting that \(R^{\mathbf{\omega}}(X)=I-\mathfrak{L}(X)\), we get the following corollary from Theorem 3.8.
**Corollary 3.1**.: _If \(X\) is a connected mixed graph, then \(-1\) is an eigenvalue of \(R^{\mathbf{\omega}}(X)\) if and only if \(X\) is a positive bipartite graph._
Note that if \(X\) is a bipartite graph, then the spectrum of \(R^{\mathbf{\omega}}(X)\) is symmetric about \(0\). As a result, if \(X\) is a bipartite connected graph, then \(1\) is an eigenvalue of \(R^{\mathbf{\omega}}(X)\) if and only if \(X\) is positive.
Eigenvalue interlacing is a popular technique for generating inequality and regularity conclusions regarding graph structure in terms of eigenvalues. We provide an edge version of interlacing properties for \(R^{\mathbf{\omega}}(X)\). First, we give a lemma.
**Lemma 3.6**.: _Let \(a\), \(b\), and \(c\) be three real numbers such that \(b>0\), \(c>0\) and \(b-c>0\)._
1. _If_ \(\frac{a}{b}\leq 1\)_, then_ \(\frac{a-c}{b-c}\leq\frac{a}{b}\)__
2. _If_ \(|\frac{a}{b}|\leq 1\)_, then_ \(\frac{a+c}{b-c}\geq\frac{a}{b}\)_._
Proof.:
1. We have \(b(a-c)-a(b-c)=c(a-b)\leq 0\), as \(c>0\) and \(\frac{a}{b}\leq 1\) gives \(a-b\leq 0\). This means that \(b(a-c)\leq a(b-c)\). Since \(b>0\) and \(b-c>0\), the result follows.
2. We have, \(b(a+c)-a(b-c)=c(a+b)\geq 0\), as \(c>0\), \(b>0\) and \(|a|\leq|b|\). This implies that \(b(a+c)\geq a(b-c)\). Since \(b>0\) and \(b-c>0\), the result follows.
**Theorem 3.9**.: _Let \(X\) be a mixed graph on \(n\) vertices and \(X-e\) be the graph obtained by removing the edge \(e\) of \(X\). Let \(\operatorname{Spec}(R^{\mathbf{\omega}}(X))=\{\lambda_{1},\dots,\lambda_{n}\}\) and \(\operatorname{Spec}(R^{\mathbf{\omega}}(X-e))=\{\theta_{1},\dots,\theta_{n}\}\), then_
\[\lambda_{i-1}\leq\theta_{i}\leq\lambda_{i+1}\]
_for each \(i\in\{1,\dots,n\}\) with the convention that \(\lambda_{0}=-1\) and \(\lambda_{n+1}=1\)._
Proof.: From Equation (3.1) we have
\[\lambda_{k}=\max_{{\bf y}^{(1)},\dots,{\bf y}^{(k-1)}\in\mathcal{C}^{n}}\ \min_{ \begin{subarray}{c}{\bf y}\perp\{{\bf y}^{(1)},\dots,{\bf y}^{(k-1)}\}\\ {\bf y}\neq 0\\ {\bf y}\in c^{n}\end{subarray}}\frac{\sum\limits_{v_{i}\to v_{j}}(|y_{i}+h_{ij} y_{j}|^{2}-|y_{i}|^{2}-|y_{j}|^{2})}{\sum\limits_{i}|y_{i}|^{2}d_{i}}.\]
Similarly, we write
\[\lambda_{k}=\min_{{\bf y}^{(k+1)},\dots,{\bf y}^{(n)}\in\mathcal{C}^{n}}\ \max_{ \begin{subarray}{c}{\bf y}\perp\{{\bf y}^{(k+1)},\dots,{\bf y}^{(n)}\}\\ {\bf y}\neq 0\\ {\bf y}\in c^{n}\end{subarray}}\frac{\sum\limits_{v_{i}\to v_{j}}(|y_{i}+h_{ij} y_{j}|^{2}-|y_{i}|^{2}-|y_{j}|^{2})}{\sum\limits_{i}|y_{i}|^{2}d_{i}}.\]
Assume, without loss of generality, that the deleted edge is \(e=\overrightarrow{e_{12}}\); then \(h_{12}=\boldsymbol{\omega}\). After deleting the edge \(e\), the degrees of \(v_{1}\) and \(v_{2}\) each decrease by \(1\). Moreover, \(\sum\limits_{v_{i}\to v_{j}}(|y_{i}+h_{ij}y_{j}|^{2}-|y_{i}|^{2}-|y_{j}|^{2})\) no longer includes the term corresponding to the edge \(e_{12}\). Thus
\[\sum\limits_{v_{i}\to v_{j}}(|y_{i}+h_{ij}y_{j}|^{2}-|y_{i}|^{2}-|y_{j}|^{2})\ \text{ becomes }\ \sum\limits_{v_{i}\to v_{j}}(|y_{i}+h_{ij}y_{j}|^{2}-|y_{i}|^{2}-|y_{j}|^{2})-(|y_{1}+h_{12}y_{2}|^{2}-|y_{1}|^{2}-|y_{2}|^{2})\]
and \(\sum\limits_{i}|y_{i}|^{2}d_{i}\) becomes \(\sum\limits_{i}|y_{i}|^{2}d_{i}-|y_{1}|^{2}-|y_{2}|^{2}\).
Now
\[\theta_{k}= \max\limits_{\mathbf{y}^{(1)},\ldots,\mathbf{y}^{(k-1)}\in \mathcal{C}^{n}}\min\limits_{\begin{subarray}{c}\mathbf{y}\perp\{\mathbf{y}^{( 1)},\ldots,\mathbf{y}^{(k-1)}\}\\ \mathbf{y}\not\in\mathcal{C}^{n}\end{subarray}}\frac{\sum\limits_{v_{i}\to v_{ j}}(|y_{i}+h_{ij}y_{j}|^{2}-|y_{i}|^{2}-|y_{j}|^{2})-(|y_{1}+h_{12}y_{2}|^{2}-|y_{1}|^{2}-|y_{2}|^{2})}{ \sum\limits_{i}|y_{i}|^{2}d_{i}-|y_{1}|^{2}-|y_{2}|^{2}}\] \[\leq \max\limits_{\mathbf{y}^{(1)},\ldots,\mathbf{y}^{(k-1)}\in \mathcal{C}^{n}}\min\limits_{\begin{subarray}{c}\mathbf{y}\perp\{\mathbf{y}^{( 1)},\ldots,\mathbf{y}^{(k-1)}\}\\ \mathbf{y}\not\in\mathcal{C}^{n}\end{subarray}}\frac{\sum\limits_{v_{i}\to v_{ j}}(|y_{i}+h_{ij}y_{j}|^{2}-|y_{i}|^{2}-|y_{j}|^{2})-(|y_{1}+h_{12}y_{2}|^{2}-|y_{1}|^{2}-|y_{2}|^{2})}{ \sum\limits_{i}|y_{i}|^{2}d_{i}-|y_{1}|^{2}-|y_{2}|^{2}}\] \[= \max\limits_{\mathbf{y}^{(1)},\ldots,\mathbf{y}^{(k-1)}\in \mathcal{C}^{n}}\min\limits_{\begin{subarray}{c}\mathbf{y}\perp\{\mathbf{y}^{( 1)},\ldots,\mathbf{y}^{(k-1)}\}\\ \mathbf{y}\not\in\mathcal{C}^{n}\end{subarray}}\frac{\sum\limits_{v_{i}\to v_{ j}}(|y_{i}+h_{ij}y_{j}|^{2}-|y_{i}|^{2}-|y_{j}|^{2})-(2|y_{1}|^{2})}{\sum\limits_{i}|y_{i}|^{2}d_{i}-2 |y_{1}|^{2}}\] \[\leq \max\limits_{\mathbf{y}^{(1)},\ldots,\mathbf{y}^{(k-1)}\in \mathcal{C}^{n}}\min\limits_{\begin{subarray}{c}\mathbf{y}\perp\{\mathbf{y}^{( 1)},\ldots,\mathbf{y}^{(k-1)}\}\\ \mathbf{y}\not\in\mathcal{C}^{n}\end{subarray}}\frac{\sum\limits_{v_{i}\to v_{ j}}(|y_{i}+h_{ij}y_{j}|^{2}-|y_{i}|^{2}-|y_{j}|^{2})}{\sum\limits_{i}|y_{i}|^{2}d_{i}}\] by Lemma 3.6 \[\leq \max\limits_{\mathbf{y}^{(1)},\ldots,\mathbf{y}^{k}\in \mathcal{C}^{n}}\min\limits_{\begin{subarray}{c}\mathbf{y}\perp\{\mathbf{y}^{( 1)},\ldots,\mathbf{y}^{(k-1)},e_{1}-\boldsymbol{\omega}e_{2}\}\\ \mathbf{y}\not\in\mathcal{C}^{n}\end{subarray}}\frac{\sum\limits_{v_{i}\to v_{ j}}(|y_{i}+h_{ij}y_{j}|^{2}-|y_{i}|^{2}-|y_{j}|^{2})}{\sum\limits_{i}d_{i}|y_{i}|^{2}}\] \[= \lambda_{k+1}.\]
Similarly,
\[\theta_{k}= \min\limits_{\mathbf{y}^{(k+1)},\ldots,\mathbf{y}^{(n)}\in \mathcal{C}^{n}}\max\limits_{\begin{subarray}{c}\mathbf{y}\perp\{\mathbf{y}^{ (k+1)},\ldots,\mathbf{y}^{(n)}\}\\ \mathbf{y}\not\in\mathcal{C}^{n}\end{subarray}}\frac{\sum\limits_{v_{i}\to v_{ j}}(|y_{i}+h_{ij}y_{j}|^{2}-|y_{i}|^{2}-|y_{j}|^{2})-(|y_{1}+h_{12}y_{2}|^{2}-|y_{1}|^{2}-|y_{2}|^{2})}{ \sum\limits_{i}|y_{i}|^{2}d_{i}-|y_{1}|^{2}-|y_{2}|^{2}}\] \[\geq \min\limits_{\mathbf{y}^{(k+1)},\ldots,\mathbf{y}^{(n)}\in \mathcal{C}^{n}}\max\limits_{\begin{subarray}{c}\mathbf{y}\perp\{\mathbf{y}^{ (k+1)},\ldots,\mathbf{y}^{(n)}\}\\ \mathbf{y}\not\in\mathcal{C}^{n}\end{subarray}}\frac{\sum\limits_{v_{i}\to v_{ j}}(|y_{i}+h_{ij}y_{j}|^{2}-|y_{i}|^{2}-|y_{j}|^{2})-(|y_{1}+h_{12}y_{2}|^{2}-|y_{1}|^{2}-|y_{2}|^{2})}{ \sum\limits_{i}|y_{i}|^{2}d_{i}-|y_{1}|^{2}-|y_{2}|^{2}}\] \[\geq \min\limits_{\mathbf{y}^{(k+1)},\ldots,\mathbf{y}^{(n)}\in \mathcal{C}^{n}}\max\limits_{\begin{subarray}{c}\mathbf{y}\perp\{\mathbf{y}^{(k+1) },\ldots,\mathbf{y}^{(n)}\}\\ \mathbf{y}\not\in\mathcal{C}^{n}\end{subarray}}\frac{\sum\limits_{v_{i}\to v_{ j}}(|y_{i}+h_{ij}y_{j}|^{2}-|y_{i}|^{2}-|y_{j}|^{2})+(2|y_{1}|^{2})}{\sum\limits_{i}|y_{i}|^{2}d_{i}-2 |y_{1}|^{2}}\]
\[\geq \min_{\begin{subarray}{c}\mathbf{y}^{(k+1)},\ldots,\mathbf{y}^{(n)} \in\mathcal{C}^{n}\end{subarray}}\max_{\begin{subarray}{c}\mathbf{y}\perp \mathbf{y}^{(k+1)},\ldots,\mathbf{y}^{(n)},e_{1}+\boldsymbol{\omega}e_{2}\\ \mathbf{y}\in\mathcal{C}^{n}\end{subarray}}\frac{\sum\limits_{\begin{subarray} {c}v_{i}\to v_{j}\\ y_{i}\to v_{j}\\ y_{i}\in\mathcal{C}^{n}\end{subarray}}(|y_{i}+h_{ij}y_{j}|^{2}-|y_{i}|^{2}-|y_ {j}|^{2})}{\sum\limits_{i}|y_{i}|^{2}d_{i}}\] by Lemma 3.6 \[= \lambda_{k-1}.\]
Hence, \(\lambda_{i-1}\leq\theta_{i}\leq\lambda_{i+1}\) with the convention that \(\lambda_{0}=-1\) and \(\lambda_{n+1}=1\).
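Theorem 3.9 can be spot-checked numerically; the sketch below (ours) removes one oriented edge from a small mixed graph and verifies \(\lambda_{i-1}\leq\theta_{i}\leq\lambda_{i+1}\).

```python
import numpy as np

w = (1 + 1j * np.sqrt(3)) / 2

def randic2(edge_gains, n):
    """Hermitian Randic matrix of second kind from a {(i, j): gain} dictionary."""
    H = np.zeros((n, n), dtype=complex)
    for (i, j), g in edge_gains.items():
        H[i, j], H[j, i] = g, np.conj(g)
    d = np.abs(H).astype(bool).sum(axis=1)
    Ds = np.diag(1 / np.sqrt(d))
    return Ds @ H @ Ds

# A 4-cycle with a chord; the edges 1->2 and 2->0 are oriented.
edges = {(0, 1): 1, (1, 2): w, (2, 3): 1, (3, 0): 1, (2, 0): w}
lam = np.sort(np.linalg.eigvalsh(randic2(edges, 4)))       # spectrum of R(X)

reduced = dict(edges); reduced.pop((1, 2))                  # delete the edge 1->2
theta = np.sort(np.linalg.eigvalsh(randic2(reduced, 4)))    # spectrum of R(X - e)

lam_ext = np.concatenate(([-1.0], lam, [1.0]))              # lambda_0 = -1, lambda_{n+1} = 1
print(all(lam_ext[i - 1] <= theta[i - 1] <= lam_ext[i + 1] for i in range(1, 5)))
```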
## 4 Characteristic Polynomial of \(R^{\boldsymbol{\omega}}(X)\)
Here we provide some results for the Hermitian Randić matrix of second kind similar to Theorem 2.7 in [17] and Proposition 7.3 in [3]. Lu et al. [18] defined a Hermitian-Randić matrix \(R\coloneqq(r_{ij})\), where \(r_{ij}=\frac{1}{\sqrt{d_{i}d_{j}}}\) if \(v_{i}v_{j}\) is an unoriented edge, \(\frac{\mathbf{i}}{\sqrt{d_{i}d_{j}}}\) if \(v_{i}\to v_{j}\), \(\frac{-\mathbf{i}}{\sqrt{d_{i}d_{j}}}\) if \(v_{i}\gets v_{j}\), and \(0\) otherwise. For this Hermitian matrix, they also obtained the determinant and characteristic polynomial. The following result was obtained in [18].
**Theorem 4.1** ([18]).: _Let \(R(X)\) be the Hermitian-Randić matrix of a mixed graph \(X\) of order \(n\). Then_

\[\det(R(X))=\sum_{X^{\prime}}(-1)^{r(X^{\prime})+l(X^{\prime})}2^{s(X^{\prime})}W(X^{\prime}),\]

_where the summation is over all real spanning elementary subgraphs \(X^{\prime}\) of \(X\), \(r(X^{\prime})=n-c(X^{\prime})\), \(c(X^{\prime})\) denotes the number of components of \(X^{\prime}\), \(l(X^{\prime})\) denotes the number of negative mixed cycles of \(X^{\prime}\), \(s(X^{\prime})\) denotes the number of mixed cycles with length at least 3 in \(X^{\prime}\), and \(W(X^{\prime})=\prod_{v_{i}\in V(X^{\prime})}\frac{1}{d_{X_{U}}(v_{i})}\)._
For the Hermitian-Randić matrix defined by Lu et al. [18], the summation is taken over all real spanning elementary subgraphs. However, we find that for the Hermitian Randić matrix of second kind, the summation is to be taken over all spanning elementary subgraphs. As a consequence, the coefficients of the characteristic polynomial also differ, as do the resulting expressions for various graph structures.
The following are our results for the determinant and the coefficients of the characteristic polynomial of \(R^{\boldsymbol{\omega}}(X)\).
**Theorem 4.2**.: _Let \(R^{\boldsymbol{\omega}}(X)\) be the Hermitian Randic matrix of second kind of a mixed graph \(X\) of order \(n\). Then_
\[\det(R^{\boldsymbol{\omega}}(X))=\sum_{X^{\prime}}(-1)^{r(X^{\prime})+l(X^{ \prime})}2^{s(X^{\prime})}W(X^{\prime}),\]
_where the summation is over all spanning elementary subgraphs \(X^{\prime}\) of \(X\), \(r(X^{\prime})=n-c(X^{\prime})\), \(c(X^{\prime})\) denotes the number of components of \(X^{\prime}\), \(l(X^{\prime})\) denotes the number of negative mixed cycles of \(X^{\prime}\), \(s(X^{\prime})\) denotes the number of mixed cycles with length at least 3 in \(X^{\prime}\), and \(W(X^{\prime})=\prod_{v_{i}\in V(X^{\prime})}\frac{1}{d_{X_{U}}(v_{i})}\)._
Proof.: Let \(X\) be a mixed graph of order \(n\) with vertex set \(\{v_{1},\ldots,v_{n}\}\). We have
\[\det(R^{\boldsymbol{\omega}}(X))=\sum_{\pi\epsilon s_{n}}\text{sgn}(\pi)(R^{ \boldsymbol{\omega}})_{1\pi(1)}(R^{\boldsymbol{\omega}})_{2\pi(2)}\cdots(R^{ \boldsymbol{\omega}})_{n\pi(n)},\]
where \(s_{n}\) is the set of all permutations of \(\{1,\ldots,n\}\).
Consider a term \(\text{sgn}(\pi)(R^{\boldsymbol{\omega}})_{1\pi(1)}\cdots(R^{\boldsymbol{\omega}})_{n\pi(n)}\) in the expansion of \(\det(R^{\boldsymbol{\omega}}(X))\). If \(v_{k}v_{\pi(k)}\) is not an edge of \(X\), then \((R^{\boldsymbol{\omega}})_{k\pi(k)}=0\), that is, this term vanishes. Thus, if the term corresponding to a permutation is non-zero, then the permutation is fixed-point-free and can be expressed uniquely as a composition of disjoint cycles of length at least \(2\). Consequently, each non-vanishing term in the expansion of \(\det(R^{\boldsymbol{\omega}}(X))\) gives rise to a spanning elementary mixed subgraph \(X^{\prime}\) of \(X\).
A spanning elementary subgraph \(X^{\prime}\) of \(X\) with \(s(X^{\prime})\) number of mixed cycles (length at least 3) gives \(2^{s(X^{\prime})}\) permutations, since there are two ways of choosing the corresponding cycle in a permutation for each mixed cycle-component in \(X^{\prime}\). For \(v_{k}\in V(X^{\prime})\), we denote \(d_{k}\coloneqq d(v_{k})=d_{X_{U}}(v_{k})\). Furthermore, if for some direction of permutation, a mixed cycle component \(C_{1}\) has value \(\boldsymbol{\omega}\prod_{v_{j}\in V(C_{1})}\frac{1}{d(v_{j})}\), then the opposite direction has value \(\overline{\boldsymbol{\omega}}\prod_{v_{j}\in V(C_{1})}\frac{1}{d(v_{j})}\) and vice versa. Thus, in the summation of \(\det(R^{\boldsymbol{\omega}}(X))\), we have
\[\boldsymbol{\omega}\prod_{v_{j}\in V(C_{1})}\frac{1}{d(v_{j})}+\overline{ \boldsymbol{\omega}}\prod_{v_{j}\in V(C_{1})}\frac{1}{d(v_{j})}=(\boldsymbol{ \omega}+\overline{\boldsymbol{\omega}})\prod_{v_{j}\in V(C_{1})}\frac{1}{d(v_ {j})}=\prod_{v_{j}\in V(C_{1})}\frac{1}{d(v_{j})}.\]
In addition, if \(C_{1}\) has the value \(\prod_{v_{j}\in V(C_{1})}\frac{1}{d(v_{j})}\) or \(-\prod_{v_{j}\in V(C_{1})}\frac{1}{d(v_{j})}\) for some direction of a permutation, then it has the same value for the other direction of \(C_{1}\). For each edge component with vertices \(v_{k}\) and \(v_{l}\), the corresponding factor \((R^{\boldsymbol{\omega}})_{kl}(R^{\boldsymbol{\omega}})_{lk}\) has the value \(\frac{1}{\sqrt{d_{k}d_{l}}}\cdot\frac{1}{\sqrt{d_{l}d_{k}}}=\frac{1}{d_{k}d_{ l}}\) or \(\frac{\boldsymbol{\omega}}{\sqrt{d_{k}d_{l}}}\cdot\frac{\overline{\boldsymbol{ \omega}}}{\sqrt{d_{l}d_{k}}}=\frac{1}{d_{k}d_{l}}\).
Since \(\text{sgn}(\pi)=(-1)^{n-c(X^{\prime})}=(-1)^{r(X^{\prime})}\), each spanning elementary subgraph \(X^{\prime}\) contributes \((-1)^{r(X^{\prime})+l(X^{\prime})}2^{s(X^{\prime})}\prod_{v_{i}\in V(X^{\prime})}\frac{1}{d_{X_{U}}(v_{i})}\) to the determinant of \(R^{\boldsymbol{\omega}}(X)\). This completes the proof.
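As a small sanity check (our sketch): for the unoriented triangle, the only spanning elementary subgraph is the cycle itself, with \(r=2\), \(l=0\), \(s=1\) and \(W=1/8\), so the formula predicts \(\det(R^{\boldsymbol{\omega}})=2\cdot\frac{1}{8}=\frac{1}{4}\). Orienting one edge turns the two traversal directions into the \(\boldsymbol{\omega}+\overline{\boldsymbol{\omega}}=1\) contribution described in the proof, giving \(\frac{1}{8}\).

```python
import numpy as np

w = (1 + 1j * np.sqrt(3)) / 2

# Unoriented triangle: every vertex has degree 2, so R = H / 2.
H = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=complex)
print(np.linalg.det(H / 2).real)        # 0.25, matching (-1)^2 * 2^1 * (1/8)

# Orient the edge 0 -> 1: the two cycle directions now contribute
# (w + conj(w)) / 8 = 1/8 instead of 2/8.
H[0, 1], H[1, 0] = w, np.conj(w)
print(np.linalg.det(H / 2).real)        # 0.125
```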
Let \(P_{R^{\boldsymbol{\omega}}}(X,x)\) denote the characteristic polynomial of the matrix \(R^{\boldsymbol{\omega}}(X)\) of a mixed graph \(X\). We call \(P_{R^{\boldsymbol{\omega}}}(X,x)\) the \(R^{\boldsymbol{\omega}}\)-characteristic polynomial of \(X\). We now compute the coefficients of \(P_{R^{\boldsymbol{\omega}}}(X,x)\).
**Theorem 4.3**.: _If \(P_{R^{\mathbf{\omega}}}(X,x)\coloneqq x^{n}+a_{1}x^{n-1}+\cdots+a_{n}\), then_
\[(-1)^{k}a_{k}=\sum_{X^{\prime}}(-1)^{r(X^{\prime})+l(X^{\prime})}2^{s(X^{\prime })}\prod_{v_{i}\in V(X^{\prime})}\frac{1}{d_{X_{U}}(v_{i})},\]
_where the summation is over all spanning elementary subgraphs \(X^{\prime}\) of \(X\) with order \(k\), \(r(X^{\prime})=k-c(X^{\prime})\), \(c(X^{\prime})\) denotes the number of components of \(X^{\prime}\), \(l(X^{\prime})\) denotes the number of negative mixed cycles of \(X^{\prime}\), and \(s(X^{\prime})\) denotes the number of mixed cycles of length at least three in \(X^{\prime}\)._
Proof.: The proof is based on Theorem 4.2 and makes use of the fact that the sum of the determinants of all principal \(k\times k\) sub-matrices of \(R^{\mathbf{\omega}}(X)\) equals \((-1)^{k}a_{k}\).
In the next corollary, we look at how these coefficients simplify for particular graph structures.
**Corollary 4.1**.: _Let \(P_{R^{\mathbf{\omega}}}(X,x)\coloneqq x^{n}+a_{1}x^{n-1}+\cdots+a_{n}\)._
1. _If_ \(X\) _is a mixed tree, then_ \((-1)^{k}a_{k}=\sum\limits_{X^{\prime}}(-1)^{r(X^{\prime})}\prod_{v_{i}\in V( X^{\prime})}\frac{1}{d_{X_{U}}(v_{i})}\)_._
2. _If the underlying graph_ \(X_{U}\) _of_ \(X\) _is_ \(r\)_-regular (_\(r\neq 0\)_), then_ \((-1)^{k}a_{k}=\sum\limits_{X^{\prime}}(-1)^{r(X^{\prime})+l(X^{\prime})}2^{s( X^{\prime})}\frac{1}{r^{k}}\)_._
The proof of Corollary 4.1 is straightforward, due to the absence of cycles in a tree and the fact that every vertex in an \(r\)-regular graph has degree \(r\).
The double factorial \(n!!\) of a positive integer \(n\) is defined by
\[n!!=\left\{\begin{array}{ll}n\cdot(n-2)\cdots 5\cdot 3\cdot 1&\text{if $n$ is odd}\\ n\cdot(n-2)\cdots 4\cdot 2&\text{if $n$ is even}.\end{array}\right.\]
**Lemma 4.1** ([7]).: _If \(X\) is a complete graph of order \(2n\), then the number of perfect matchings of \(X\) is \((2n-1)!!\)._
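As a quick numerical sanity check (not part of the original argument), the following Python sketch brute-forces the perfect matchings of the complete graph \(K_{2n}\) and compares the count with \((2n-1)!!\); the function names are ad hoc.

```python
from math import prod

def double_factorial(n):
    # n!! = n * (n - 2) * (n - 4) * ... down to 1 (n odd) or 2 (n even)
    return prod(range(n, 0, -2))

def count_perfect_matchings(vertices):
    # Brute force: pair the first remaining vertex with every other vertex.
    if not vertices:
        return 1
    rest = vertices[1:]
    return sum(count_perfect_matchings(rest[:i] + rest[i + 1:])
               for i in range(len(rest)))

# In K_{2n} every pairing of the 2n vertices is a perfect matching, so the
# count should equal (2n - 1)!! as in Lemma 4.1.
for n in range(1, 5):
    assert count_perfect_matchings(tuple(range(2 * n))) == double_factorial(2 * n - 1)
```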
**Theorem 4.4**.: _If \(X\) is a mixed complete graph of order \(2n\), then \(\operatorname{rank}R^{\mathbf{\omega}}(X)=2n\)._
Proof.: Suppose \(X\) is a mixed complete graph of order \(2n\) and \(P_{R^{\mathbf{\omega}}}(X,x)\coloneqq\sum_{i=0}^{2n}a_{i}x^{2n-i}\) is the characteristic polynomial of \(X\). From Theorem 4.3, we have
\[(-1)^{k}a_{k}=\sum_{X^{\prime}}(-1)^{r(X^{\prime})+l(X^{\prime})}2^{s(X^{ \prime})}\prod_{v_{i}\in V(X^{\prime})}\frac{1}{d_{X_{U}}(v_{i})},\]
where the summation is over all spanning elementary subgraphs \(X^{\prime}\) of order \(k\) of \(X\), \(r(X^{\prime})=k-c(X^{\prime})\), \(c(X^{\prime})\) denotes the number of components of \(X^{\prime}\), \(l(X^{\prime})\) denotes the number of negative mixed cycles of \(X^{\prime}\), and \(s(X^{\prime})\) denotes the number of mixed cycles of length at least three in \(X^{\prime}\).
For a complete graph with \(2n\) vertices, there are \((2n-1)!!\) perfect matchings. Suppose \(X_{1}\) is the set of all such perfect matchings of \(X\) and \(X_{2}\) is the set of all spanning elementary subgraphs which contain at least one mixed cycle. Thus \(X_{1}\cup X_{2}\) is the set of all spanning elementary subgraphs of \(X\). We have
\[a_{2n}= \sum_{X^{\prime}}(-1)^{r(X^{\prime})+l(X^{\prime})}2^{s(X^{\prime })}\prod_{v_{i}\in V(X^{\prime})}\frac{1}{d_{X_{U}}(v_{i})}\] \[= \sum_{X^{\prime}\in X_{1}}(-1)^{r(X^{\prime})+l(X^{\prime})}2^{s( X^{\prime})}\prod_{v_{i}\in V(X^{\prime})}\frac{1}{d_{X_{U}}(v_{i})}+\sum_{X^{ \prime}\in X_{2}}(-1)^{r(X^{\prime})+l(X^{\prime})}2^{s(X^{\prime})}\prod_{v_{ i}\in V(X^{\prime})}\frac{1}{d_{X_{U}}(v_{i})}\] \[= (-1)^{2n-n}(2n-1)!!\left(\frac{1}{2n-1}\right)^{2n}+\sum_{X^{ \prime}\in X_{2}}(-1)^{r(X^{\prime})+l(X^{\prime})}2^{s(X^{\prime})}\left( \frac{1}{2n-1}\right)^{2n}\] \[= \left((-1)^{n}(2n-1)!!+2\sum_{X^{\prime}\in X_{2}}(-1)^{r(X^{ \prime})+l(X^{\prime})}2^{s(X^{\prime})-1}\right)\left(\frac{1}{2n-1}\right)^{ 2n}.\]
As \((2n-1)!!\) is odd while the second term in the bracket is an even integer, the bracket is non-zero and hence \(a_{2n}\neq 0\). Thus \(\det R^{\mathbf{\omega}}(X)\neq 0\) and \(\operatorname{rank}R^{\mathbf{\omega}}(X)=2n\). This completes the proof.
The general Randic index of the underlying graph \(X_{U}\) of a mixed graph \(X\) is defined as
\[R_{\alpha}(X_{U})=\sum_{uv\in E(X_{U})}(d_{u}d_{v})^{\alpha},\]
where the summation is over all edges \(uv\) of \(X_{U}\).
Now we give some bounds for the eigenvalues of the matrix \(R^{\boldsymbol{\omega}}(X)\).
**Theorem 4.5**.: _If \(\lambda_{1}\) is the smallest eigenvalue of \(R^{\boldsymbol{\omega}}(X)\), then_
\[\lambda_{1}^{2}\geq\frac{2R_{-1}(X_{U})}{n(n-1)},\]
_where \(R_{-1}(X_{U})\) is the general Randic index \(R_{\alpha}(X_{U})\) of \(X_{U}\) with \(\alpha=-1\)._
Proof.: Let the eigenvalues \(\lambda_{1},\ldots,\lambda_{n}\) of \(R^{\boldsymbol{\omega}}(X)\) satisfy \(\lambda_{1}\leq\cdots\leq\lambda_{n}\). We have
\[\sum_{i=1}^{n}{\lambda_{i}}^{2}=\text{trace}\left((R^{\boldsymbol{\omega}}(X))^{2}\right)=\sum_{i=1}^{n}\sum_{j=1}^{n}R^{\boldsymbol{\omega}}_{ij}R^{\boldsymbol{\omega}}_{ji}=\sum_{i=1}^{n}\sum_{j=1}^{n}R^{\boldsymbol{\omega}}_{ij}\overline{R}^{\boldsymbol{\omega}}_{ij}=\sum_{i=1}^{n}\sum_{j=1}^{n}|R^{\boldsymbol{\omega}}_{ij}|^{2}=\sum_{i\sim j}\frac{1}{d_{i}d_{j}}=2R_{-1}(X_{U}).\]
Also,
\[\sum_{i=1}^{n}(\lambda_{i}-\lambda_{1})=\sum_{i=1}^{n}\lambda_{i}-n\lambda_{1}=-n\lambda_{1}\] \[\text{or} \left(\sum_{i=1}^{n}(\lambda_{i}-\lambda_{1})\right)^{2}=\sum_{i=1}^{n}(\lambda_{i}-\lambda_{1})^{2}+\sum_{p\neq q}(\lambda_{p}-\lambda_{1})(\lambda_{q}-\lambda_{1})=(n\lambda_{1})^{2}.\]
Since \(\sum_{p\neq q}(\lambda_{p}-\lambda_{1})(\lambda_{q}-\lambda_{1})\) is non-negative (each factor is non-negative), we have
\[\sum_{i=1}^{n}(\lambda_{i}-\lambda_{1})^{2}\leq n^{2}\lambda_{1}^{2}\] \[\text{or} \sum_{i=1}^{n}\lambda_{i}^{2}+n\lambda_{1}^{2}\leq n^{2}\lambda_{1}^{2}\] \[\text{or} 2R_{-1}(X_{U})+n\lambda_{1}^{2}\leq n^{2}\lambda_{1}^{2}\] \[\text{or} \lambda_{1}^{2}\geq\frac{2R_{-1}(X_{U})}{n(n-1)}.\qed\]
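As a numerical sanity check of the identity \(\sum_{i=1}^{n}{\lambda_{i}}^{2}=2R_{-1}(X_{U})\) and of the bound in Theorem 4.5, the Python sketch below builds \(R^{\boldsymbol{\omega}}(X)\) for a small hypothetical mixed graph (an oriented 4-cycle, chosen only for illustration) with \(\boldsymbol{\omega}=(1+i\sqrt{3})/2\), a choice consistent with the properties \(|\boldsymbol{\omega}|=1\) and \(\boldsymbol{\omega}+\overline{\boldsymbol{\omega}}=1\) used in this section.

```python
import numpy as np

# Hypothetical mixed graph on 4 vertices: undirected edges {0,1}, {2,3};
# arcs 1 -> 2 and 3 -> 0.
n = 4
undirected = [(0, 1), (2, 3)]
arcs = [(1, 2), (3, 0)]
omega = (1 + 1j * np.sqrt(3)) / 2   # |omega| = 1, omega + conj(omega) = 1

deg = np.zeros(n)                    # degrees in the underlying graph X_U
for u, v in undirected + arcs:
    deg[u] += 1
    deg[v] += 1

R = np.zeros((n, n), dtype=complex)
for u, v in undirected:
    R[u, v] = R[v, u] = 1 / np.sqrt(deg[u] * deg[v])
for u, v in arcs:
    R[u, v] = omega / np.sqrt(deg[u] * deg[v])
    R[v, u] = np.conj(omega) / np.sqrt(deg[u] * deg[v])

eigs = np.linalg.eigvalsh(R)         # real eigenvalues, ascending
R_minus1 = sum(1 / (deg[u] * deg[v]) for u, v in undirected + arcs)

assert np.isclose((eigs ** 2).sum(), 2 * R_minus1)            # trace identity
assert eigs[0] ** 2 >= 2 * R_minus1 / (n * (n - 1)) - 1e-12   # Theorem 4.5
```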
For an \(n\times n\) matrix \(A\coloneqq(a_{ij})\), define
\[\gamma_{1}(A)=\min\bigg{\{}\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{n}a_{ij}\;,\; \frac{1}{n}\sum_{i=1}^{n}a_{ii}-\frac{1}{n(n-1)}\sum_{i\neq j}a_{ij}\bigg{\}} \quad\text{and}\]
\[\gamma_{2}(A)=\max\bigg{\{}\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{n}a_{ij}\;,\; \frac{1}{n}\sum_{i=1}^{n}a_{ii}-\frac{1}{n(n-1)}\sum_{i\neq j}a_{ij}\bigg{\}}.\]
**Lemma 4.2** ([19]).: _Let \(A\coloneqq(a_{ij})\) be an \(n\times n\) Hermitian matrix. If \(\lambda_{1}\) and \(\lambda_{n}\) are the smallest and largest eigenvalues of \(A\) respectively, then \(\lambda_{1}\leq\gamma_{1}(A)\leq\gamma_{2}(A)\leq\lambda_{n}\)._
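A minimal numerical illustration of Lemma 4.2, using a randomly generated Hermitian matrix as a stand-in for \(R^{\boldsymbol{\omega}}(X)\) (the size and random seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
A = (B + B.conj().T) / 2             # random Hermitian test matrix
n = A.shape[0]

total = A.sum().real                 # sum of all entries (real, as A is Hermitian)
diag = np.trace(A).real
q1 = total / n
q2 = diag / n - (total - diag) / (n * (n - 1))
gamma1, gamma2 = min(q1, q2), max(q1, q2)

eigs = np.linalg.eigvalsh(A)
assert eigs[0] <= gamma1 + 1e-12 and gamma2 <= eigs[-1] + 1e-12
```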
**Theorem 4.6**.: _Let \(X\) be a mixed graph and \(R^{\boldsymbol{\omega}}\coloneqq(R^{\boldsymbol{\omega}}_{ij})\) be its Hermitian Randic matrix of second kind. If \(\lambda_{1}\) and \(\lambda_{n}\) are the smallest and the largest eigenvalues of \(R^{\boldsymbol{\omega}}(X)\) respectively, then_
\[\lambda_{1}\leq-\frac{1}{n(n-1)}\left(\sum_{i\sim j}\frac{2}{\sqrt{d_{i}d_{j} }}+\sum_{i\to j}\frac{1}{\sqrt{d_{i}d_{j}}}\right)\leq\frac{1}{n}\left(\sum_{ i\sim j}\frac{2}{\sqrt{d_{i}d_{j}}}+\sum_{i\to j}\frac{1}{\sqrt{d_{i}d_{j}}} \right)\leq\lambda_{n}.\]
Proof.: We have
\[\sum_{i\neq j}R^{\boldsymbol{\omega}}_{ij}=\sum_{i=1}^{n}\sum_{j= 1}^{n}R^{\boldsymbol{\omega}}_{ij}= \sum_{i\sim j}\frac{2}{\sqrt{d_{i}d_{j}}}+\sum_{i\to j}\left(\frac{ \boldsymbol{\omega}}{\sqrt{d_{i}d_{j}}}+\frac{\overline{\boldsymbol{\omega}}} {\sqrt{d_{i}d_{j}}}\right)\] \[= \sum_{i\sim j}\frac{2}{\sqrt{d_{i}d_{j}}}+\sum_{i\to j}\frac{1}{ \sqrt{d_{i}d_{j}}}\qquad\text{ as }\boldsymbol{\omega}+\overline{\boldsymbol{\omega}}=1.\]
Also,
\[\sum_{i=1}^{n}R^{\mathbf{\omega}}_{ii}=\operatorname{trace}(R^{\mathbf{\omega}})=0.\]
Hence by Lemma 4.2, we have
\[\lambda_{1}\leq\gamma_{1}\leq\gamma_{2}\leq\lambda_{n},\ \ \text{where}\]
\[\gamma_{1} =\min\bigg{\{}\frac{1}{n}\bigg{(}\sum\limits_{i\sim j}\frac{2}{ \sqrt{d_{i}d_{j}}}+\sum\limits_{i\to j}\frac{1}{\sqrt{d_{i}d_{j}}}\bigg{)}\,\ 0-\frac{1}{n(n-1)}\bigg{(}\sum\limits_{i\sim j}\frac{2}{\sqrt{d_{i}d_{j}}}+\sum \limits_{i\to j}\frac{1}{\sqrt{d_{i}d_{j}}}\bigg{)}\bigg{\}}\quad\text{and}\] \[\gamma_{2} =\max\bigg{\{}\frac{1}{n}\bigg{(}\sum\limits_{i\sim j}\frac{2}{ \sqrt{d_{i}d_{j}}}+\sum\limits_{i\to j}\frac{1}{\sqrt{d_{i}d_{j}}}\bigg{)}\,\ 0- \frac{1}{n(n-1)}\bigg{(}\sum\limits_{i\sim j}\frac{2}{\sqrt{d_{i}d_{j}}}+\sum \limits_{i\to j}\frac{1}{\sqrt{d_{i}d_{j}}}\bigg{)}\bigg{\}}\]
Since \(\frac{1}{n}>\frac{-1}{n(n-1)}\), we have
\[\frac{1}{n}\bigg{(}\sum\limits_{i\sim j}\frac{2}{\sqrt{d_{i}d_{j}}}+\sum \limits_{i\to j}\frac{1}{\sqrt{d_{i}d_{j}}}\bigg{)}>-\frac{1}{n(n-1)} \bigg{(}\sum\limits_{i\sim j}\frac{2}{\sqrt{d_{i}d_{j}}}+\sum\limits_{i\to j }\frac{1}{\sqrt{d_{i}d_{j}}}\bigg{)}.\]
Hence
\[\lambda_{1}\leq\frac{-1}{n(n-1)}\bigg{(}\sum\limits_{i\sim j}\frac{2}{\sqrt{d _{i}d_{j}}}+\sum\limits_{i\to j}\frac{1}{\sqrt{d_{i}d_{j}}}\bigg{)}\leq\frac{ 1}{n}\bigg{(}\sum\limits_{i\sim j}\frac{2}{\sqrt{d_{i}d_{j}}}+\sum\limits_{i \to j}\frac{1}{\sqrt{d_{i}d_{j}}}\bigg{)}\leq\lambda_{n}.\qed\]
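The same bounds can be checked on the hypothetical mixed 4-cycle used in the sketch after Theorem 4.5; the quantity in parentheses in Theorem 4.6 is simply the (real) sum of the off-diagonal entries of \(R^{\boldsymbol{\omega}}(X)\).

```python
import numpy as np

# Same hypothetical mixed graph as before: undirected {0,1}, {2,3}; arcs 1->2, 3->0.
n = 4
undirected = [(0, 1), (2, 3)]
arcs = [(1, 2), (3, 0)]
omega = (1 + 1j * np.sqrt(3)) / 2

deg = np.zeros(n)
for u, v in undirected + arcs:
    deg[u] += 1
    deg[v] += 1

R = np.zeros((n, n), dtype=complex)
for u, v in undirected:
    R[u, v] = R[v, u] = 1 / np.sqrt(deg[u] * deg[v])
for u, v in arcs:
    R[u, v] = omega / np.sqrt(deg[u] * deg[v])
    R[v, u] = np.conj(omega) / np.sqrt(deg[u] * deg[v])

# S = sum_{i~j} 2/sqrt(d_i d_j) + sum_{i->j} 1/sqrt(d_i d_j),
# i.e. the real sum of all off-diagonal entries of R.
S = R.sum().real
eigs = np.linalg.eigvalsh(R)
assert eigs[0] <= -S / (n * (n - 1)) + 1e-12   # Theorem 4.6, left inequality
assert S / n <= eigs[-1] + 1e-12               # Theorem 4.6, right inequality
```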
**Corollary 4.2**.: _Let \(X\) be a mixed graph and \(R^{\mathbf{\omega}}(X)\) be its Hermitian Randic matrix of second kind. If \(\lambda_{1}\) and \(\lambda_{n}\) are the smallest and the largest eigenvalues of \(R^{\mathbf{\omega}}(X)\), then_
\[\lambda_{n}-\lambda_{1}\geq\frac{1}{n-1}\bigg{(}\sum\limits_{i\sim j}\frac{2 }{\sqrt{d_{i}d_{j}}}+\sum\limits_{i\to j}\frac{1}{\sqrt{d_{i}d_{j}}}\bigg{)}.\]
Proof.: Since
\[\lambda_{1}\leq\frac{-1}{n(n-1)}\bigg{(}\sum\limits_{i\sim j}\frac{2}{\sqrt{d _{i}d_{j}}}+\sum\limits_{i\to j}\frac{1}{\sqrt{d_{i}d_{j}}}\bigg{)}\leq\frac{ 1}{n}\bigg{(}\sum\limits_{i\sim j}\frac{2}{\sqrt{d_{i}d_{j}}}+\sum\limits_{i \to j}\frac{1}{\sqrt{d_{i}d_{j}}}\bigg{)}\leq\lambda_{n},\]
we have \(\lambda_{n}-\lambda_{1}\geq\bigg{(}\sum\limits_{i\sim j}\frac{2}{\sqrt{d_{i}d_{j}}}+\sum\limits_{i\to j}\frac{1}{\sqrt{d_{i}d_{j}}}\bigg{)}\bigg{(}\frac{1}{n}+\frac{1}{n(n-1)}\bigg{)}\), and since \(\frac{1}{n}+\frac{1}{n(n-1)}=\frac{1}{n-1}\), the result follows.
**Theorem 4.7**.: _If \(\alpha\) is the smallest positive eigenvalue and \(\beta\) is the largest negative eigenvalue of \(R^{\mathbf{\omega}}(X)\), then_
\[\frac{1}{\beta}\leq\gamma_{1}(R^{\mathbf{\omega}}(X)^{-1})\leq\gamma_{2}(R^{\mathbf{ \omega}}(X)^{-1})\leq\frac{1}{\alpha}.\]
Proof.: Since \(\alpha\) is the smallest positive eigenvalue and \(\beta\) is the largest negative eigenvalue of \(R^{\mathbf{\omega}}(X)\), \(\frac{1}{\alpha}\) is the largest and \(\frac{1}{\beta}\) is the smallest eigenvalue of \(R^{\mathbf{\omega}}(X)^{-1}\). Hence, applying Lemma 4.2 to \(R^{\mathbf{\omega}}(X)^{-1}\), we have
\[\frac{1}{\beta}\leq\gamma_{1}(R^{\mathbf{\omega}}(X)^{-1})\leq\gamma_{2}(R^{\mathbf{ \omega}}(X)^{-1})\leq\frac{1}{\alpha},\]
where
\(\gamma_{1}=\min\left\{\frac{1}{n}\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{n}r_{ij}^ {\omega}\,\ \frac{1}{n}\sum\limits_{i=1}^{n}r_{ii}^{\omega}-\frac{1}{n(n-1)}\sum\limits_{i \neq j}r_{ij}^{\omega}\right\}\), \(\gamma_{2}=\max\left\{\frac{1}{n}\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{n}r_{ ij}^{\omega}\,\ \frac{1}{n}\sum\limits_{i=1}^{n}r_{ii}^{\omega}-\frac{1}{n(n-1)} \sum\limits_{i\neq j}r_{ij}^{\omega}\right\}\) and \(R^{\omega}(X)^{-1}=[r_{ij}^{\omega}]\).
## 5 Energy of \(R^{\omega}(X)\)
Lu et al. [18] investigated the energy of the Hermitian Randic matrix \(R_{H}(X)\) and computed various bounds. We observe that most of the results on the energy of \(R_{H}(X)\) also hold for the matrix \(R^{\omega}(X)\), due to the fact that \(\mathrm{trace}(R^{\omega}(X))=\mathrm{trace}(R_{H}(X))=0\) and \(\sum_{i=1}^{n}{\lambda_{i}}^{2}=2R_{-1}(X_{U})\) for both matrices. Here we discuss some analogous conclusions for the energy of \(R^{\omega}(X)\).
**Theorem 5.1**.: _Let \(X\) be a mixed graph whose underlying graph \(X_{U}\) is \(r\)-regular (\(r\neq 0\)). Then \(E(R^{\omega}(X))=\frac{1}{r}E(H(X))\), where \(E(R^{\omega}(X))\) and \(E(H(X))\) are the energies of \(R^{\omega}(X)\) and of the Hermitian adjacency matrix of second kind of \(X\), respectively._
Proof.: The result follows from the fact that if the underlying graph is \(r\)-regular, then \(R^{\omega}(X)=\frac{1}{r}H(X)\).
**Theorem 5.2**.: _Let \(X\) be a mixed graph of order \(n\) and \(R^{\omega}(X)\) be the Hermitian Randic matrix of second kind of \(X\) with eigenvalues \(\lambda_{1},\ldots,\lambda_{n}\). If \(E(R^{\omega}(X))\) is the energy of \(R^{\omega}(X)\), then_
\[\sqrt{2R_{-1}(X_{U})+n(n-1)(\det R^{\omega}(X))^{2/n}}\leq E(R^{\omega}(X)) \leq\sqrt{2nR_{-1}(X_{U})},\]
_where equality holds if \(|\lambda_{1}|=\cdots=|\lambda_{n}|\)._
Proof.: The proof is similar to that of Theorem 3.5 in [18] and follows easily from the Cauchy-Schwarz inequality and the arithmetic-geometric mean inequality.
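The two bounds hold, by the same argument, for any Hermitian matrix with zero diagonal, with \(\sum_{i}{\lambda_{i}}^{2}\) playing the role of \(2R_{-1}(X_{U})\); a small numerical check, reading \((\det R^{\omega}(X))^{2/n}\) as \(|\det R^{\omega}(X)|^{2/n}\):

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
M = (B + B.conj().T) / 2
np.fill_diagonal(M, 0)               # zero diagonal, as for R^omega(X)

eigs = np.linalg.eigvalsh(M)
n = len(eigs)
energy = np.abs(eigs).sum()
two_R = (eigs ** 2).sum()            # plays the role of 2 R_{-1}(X_U)
det = abs(np.linalg.det(M))

lower = np.sqrt(two_R + n * (n - 1) * det ** (2 / n))
upper = np.sqrt(n * two_R)           # = sqrt(2 n R_{-1}(X_U))
assert lower - 1e-9 <= energy <= upper + 1e-9
```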
**Theorem 5.3**.: _Let \(X\) be a mixed graph and \(R^{\omega}(X)\) be the Hermitian Randic matrix of second kind of \(X\). If \(E(R^{\omega}(X))\) is the energy of \(R^{\omega}(X)\), then_
\[E(R^{\omega}(X))\geq 2(n-k)\bigg{(}\frac{\det(R^{\omega}(X))}{\prod_{i=1}^{k} \lambda_{i}(R^{\omega}(X))}\bigg{)}^{\frac{1}{n-k}},\]
_where \(k\) is the number of negative eigenvalues of \(R^{\omega}(X)\)._
Proof.: Let the eigenvalues \(\lambda_{1},\ldots,\lambda_{n}\) of \(R^{\mathbf{\omega}}(X)\) satisfy \(\lambda_{1}\leq\ldots\leq\lambda_{k}\leq\lambda_{k+1}\leq\ldots\leq\lambda_{n}\). Suppose \(\lambda_{1},\ldots,\lambda_{k}\) are negative and \(\lambda_{k+1},\ldots,\lambda_{n}\) are positive. As \(\operatorname{trace}(R^{\mathbf{\omega}}(X))=0\), we have \(E(R^{\mathbf{\omega}}(X))=\sum\limits_{i=1}^{n}|\lambda_{i}(R^{\mathbf{\omega}})|=2 \sum\limits_{i=k+1}^{n}|\lambda_{i}(R^{\mathbf{\omega}})|=2\sum\limits_{i=1}^{k}| \lambda_{i}(R^{\mathbf{\omega}})|\). Now
\[|\lambda_{1}|+|\lambda_{2}|+\ldots+|\lambda_{k}|\geq |\lambda_{1}+\lambda_{2}+\ldots+\lambda_{k}|\] \[= |\lambda_{k+1}+\ldots+\lambda_{n}|\] \[\geq (n-k)\left(\prod\limits_{i=k+1}^{n}\lambda_{i}(R^{\mathbf{\omega}}(X ))\right)^{\frac{1}{n-k}}\] \[= (n-k)\left(\frac{\det(R^{\mathbf{\omega}}(X))}{\prod\limits_{i=1}^{k} \lambda_{i}(R^{\mathbf{\omega}}(X))}\right)^{\frac{1}{n-k}}\]
Hence
\[E(R^{\mathbf{\omega}}(X))=2\sum\limits_{i=1}^{k}|\lambda_{i}(R^{\mathbf{\omega}})|\geq 2 (n-k)\left(\frac{\det(R^{\mathbf{\omega}}(X))}{\prod\limits_{i=1}^{k}\lambda_{i}(R ^{\mathbf{\omega}}(X))}\right)^{\frac{1}{n-k}}.\qed\]
**Theorem 5.4**.: _Let \(X\) be a mixed graph and \(R^{\mathbf{\omega}}(X)\) be the Hermitian Randic matrix of second kind of \(X\). If \(E(R^{\mathbf{\omega}}(X))\) is the energy of \(R^{\mathbf{\omega}}(X)\), then_
\[E(R^{\mathbf{\omega}}(X))\leq e^{\sqrt{2R_{-1}(X_{U})}}.\]
Proof.: Let the eigenvalues of \(R^{\mathbf{\omega}}(X)\) be \(\lambda_{1},\ldots,\lambda_{n}\). We have \(\sum_{i=1}^{n}{\lambda_{i}}^{2}=2R_{-1}(X_{U})\). Now
\[E(R^{\mathbf{\omega}}(X))=\sum\limits_{i=1}^{n}|\lambda_{i}| <\sum\limits_{i=1}^{n}e^{|\lambda_{i}|}\] \[=\sum\limits_{i=1}^{n}\sum\limits_{k\geq 0}\frac{|\lambda_{i}|^{k}}{k!}\] \[\leq\sum\limits_{k\geq 0}\frac{1}{k!}\left(\sum\limits_{i=1}^{n}| \lambda_{i}|^{2}\right)^{k/2}\leq\sum\limits_{k\geq 0}\frac{1}{k!}(2R_{-1}(X_{U}))^{k/2}\] \[=\sum\limits_{k\geq 0}\frac{1}{k!}\left(\sqrt{2R_{-1}(X_{U})} \right)^{k}=e^{\sqrt{2R_{-1}(X_{U})}}.\qed\]
**Theorem 5.5**.: _If \(\rho=\max\limits_{i}|\lambda_{i}|\), where \(\lambda_{1},\ldots,\lambda_{n}\) are eigenvalues of \(R^{\mathbf{\omega}}(X)\), then_
\[E(R^{\mathbf{\omega}}(X))\leq\frac{1}{2}\left(\rho(n-2)+\sqrt{\rho^{2}(n-2)^{2}+1 6R_{-1}(X_{U})}\right).\]
Proof.: Suppose \(\lambda_{k}\) is the largest negative eigenvalue of \(R^{\mathbf{\omega}}(X)\), so that \(\lambda_{1}\leq\cdots\leq\lambda_{k}<0\leq\lambda_{k+1}\leq\cdots\leq\lambda_{n}\). Therefore \(E(R^{\mathbf{\omega}}(X))=\sum\limits_{i=1}^{n}|\lambda_{i}(R^{\mathbf{\omega}})|=2\sum\limits_{i=1}^{k}|\lambda_{i}(R^{\mathbf{\omega}})|=2\sum\limits_{i=k+1}^{n}|\lambda_{i}(R^{\mathbf{\omega}})|\).
Now
\[E(R^{\mathbf{\omega}}(X))^{2}= \left(\sum_{i=1}^{k}|\lambda_{i}|+\sum_{j=k+1}^{n}|\lambda_{j}|\right)^{2}\] \[= 2\left(\left(\sum_{i=1}^{k}|\lambda_{i}|\right)^{2}+\left(\sum_{j=k+1}^{n}|\lambda_{j}|\right)^{2}\right)\qquad\text{ as }\;2\sum_{i=1}^{k}|\lambda_{i}|\sum_{j=k+1}^{n}|\lambda_{j}|=\left(\sum_{i=1}^{k}|\lambda_{i}|\right)^{2}+\left(\sum_{j=k+1}^{n}|\lambda_{j}|\right)^{2}\] \[= 2\left(\sum_{i=1}^{k}|\lambda_{i}|^{2}+\sum_{j=k+1}^{n}|\lambda_{j}|^{2}+2\sum_{i<p\leq k}|\lambda_{i}||\lambda_{p}|+2\sum_{(k+1)\leq j<q}|\lambda_{j}||\lambda_{q}|\right)\] \[= 2\sum_{i=1}^{n}|\lambda_{i}|^{2}+4\left[\sum_{i<p\leq k}|\lambda_{i}||\lambda_{p}|+\sum_{(k+1)\leq j<q}|\lambda_{j}||\lambda_{q}|\right]. \tag{5.1}\]
We have \(\left(|\lambda_{i}|-\rho/2\right)(|\lambda_{p}|-\rho/2)\leq\frac{\rho^{2}}{4}\), which implies that \(|\lambda_{i}||\lambda_{p}|\leq\frac{\rho}{2}(|\lambda_{i}|+|\lambda_{p}|)\). Similarly, \(|\lambda_{j}||\lambda_{q}|\leq\frac{\rho}{2}(|\lambda_{j}|+|\lambda_{q}|)\).
Hence from Inequality (5.1), we have
\[E(R^{\mathbf{\omega}}(X))^{2}\leq 4R_{-1}(X_{U})+4\cdot\frac{\rho}{2}\left(\sum_{i<p\leq k}(|\lambda_{i}|+|\lambda_{p}|)+\sum_{(k+1)\leq j<q}(|\lambda_{j}|+|\lambda_{q}|)\right)\] \[= 4R_{-1}(X_{U})+2\rho\left((k-1)\sum_{i=1}^{k}|\lambda_{i}|+(n-k-1)\sum_{j=k+1}^{n}|\lambda_{j}|\right)\] \[= 4R_{-1}(X_{U})+2\rho\left((k-1)\frac{E(R^{\mathbf{\omega}}(X))}{2}+(n-k-1)\frac{E(R^{\mathbf{\omega}}(X))}{2}\right)\] \[= 4R_{-1}(X_{U})+\rho(n-2)E(R^{\mathbf{\omega}}(X)).\]
After solving the preceding inequality, we get
\[E(R^{\mathbf{\omega}}(X))\leq\frac{1}{2}\left(\rho(n-2)+\sqrt{\rho^{2}(n-2)^{2}+16R_{-1}(X_{U})}\right).\qed\]
**Theorem 5.6**.: _Let the eigenvalues of \(R^{\mathbf{\omega}}(X)\) be \(\lambda_{1},\ldots,\lambda_{n}\). If \(\lambda=\underset{i}{\min}|\lambda_{i}|\), then_
\[E(R^{\mathbf{\omega}}(X))\geq\frac{1}{2}\left(\lambda(n-2)+\sqrt{\lambda^{2}(n-2)^ {2}+16R_{-1}(X_{U})}\right).\]
Proof.: Consider \(\lambda=\underset{i}{\min}|\lambda_{i}|\). We have \(\left(|\lambda_{i}|-\lambda/2\right)(|\lambda_{p}|-\lambda/2)\geq\frac{\lambda^ {2}}{4}\), which implies that \(|\lambda_{i}||\lambda_{p}|\geq\frac{\lambda}{2}(|\lambda_{i}|+|\lambda_{p}|)\). Similarly, \(|\lambda_{j}||\lambda_{q}|\geq\frac{\lambda}{2}(|\lambda_{j}|+|\lambda_{q}|)\). Also we have \(\sum_{i=1}^{n}{\lambda_{i}}^{2}=2R_{-1}(X_{U})\). Now from Inequality (5.1), we get the quadratic inequality \(E(R^{\mathbf{\omega}}(X))^{2}\geq 4R_{-1}(X_{U})+\lambda(n-2)E(R^{\mathbf{\omega}}(X))\). Solving this quadratic inequality, we get the required result.
We now provide some basic inequalities that help us in determining another lower bound for the energy of \(R^{\mathbf{\omega}}(X)\).
**Lemma 5.1** (Polya-Szego Inequality [8]).: _If \(a_{i}\) and \(b_{i}\) are positive real numbers for each \(i\in\{1,\ldots,n\}\), then_
\[\sum_{i=1}^{n}a_{i}^{2}\sum_{i=1}^{n}b_{i}^{2}\leq\frac{1}{4}\left(\sqrt{\frac{ M_{1}M_{2}}{m_{1}m_{2}}}+\sqrt{\frac{m_{1}m_{2}}{M_{1}M_{2}}}\right)^{2} \left(\sum_{i=1}^{n}a_{i}b_{i}\right)^{2},\]
_where \(M_{1}=\max\limits_{1\leq i\leq n}a_{i}\), \(M_{2}=\max\limits_{1\leq i\leq n}b_{i}\), \(m_{1}=\min\limits_{1\leq i\leq n}a_{i}\) and \(m_{2}=\min\limits_{1\leq i\leq n}b_{i}\)._
**Lemma 5.2** (Ozeki's Inequality [8]).: _If \(a_{i}\) and \(b_{i}\) are non-negative real numbers for each \(i\in\{1,\ldots,n\}\), then_
\[\sum_{i=1}^{n}a_{i}^{2}\sum_{i=1}^{n}b_{i}^{2}-\left(\sum_{i=1}^{n}a_{i}b_{i} \right)^{2}\leq\frac{n^{2}}{4}\left(M_{1}M_{2}-m_{1}m_{2}\right)^{2},\]
_where \(M_{1}=\max\limits_{1\leq i\leq n}a_{i}\), \(M_{2}=\max\limits_{1\leq i\leq n}b_{i}\), \(m_{1}=\min\limits_{1\leq i\leq n}a_{i}\) and \(m_{2}=\min\limits_{1\leq i\leq n}b_{i}\)._
**Theorem 5.7**.: _If \(\rho=\max\limits_{i}\lvert\lambda_{i}\rvert\) and \(\lambda=\min\limits_{i}\lvert\lambda_{i}\rvert\), where \(\lambda_{1},\ldots,\lambda_{n}\) are the eigenvalues of \(R^{\mathbf{\omega}}(X)\), then_
\[E(R^{\mathbf{\omega}}(X))\geq\frac{\sqrt{8n\rho\lambda R_{-1}(X_{U})}}{\rho+ \lambda}.\]
Proof.: Let \(\lambda_{1}\leq\cdots\leq\lambda_{n}\). Since \(\rho=\max\limits_{i}\lvert\lambda_{i}\rvert\) and \(\lambda=\min\limits_{i}\lvert\lambda_{i}\rvert\), by Polya-Szego Inequality we have
\[\sum_{i=1}^{n}\lvert\lambda_{i}\rvert^{2}\sum_{i=1}^{n}1^{2}\leq \frac{1}{4}\left(\sqrt{\frac{\rho}{\lambda}}+\sqrt{\frac{\lambda}{\rho}}\right) ^{2}\left(\sum_{i=1}^{n}\lvert\lambda_{i}\rvert\right)^{2}\] \[\text{or}\hskip 14.226378pt2R_{-1}(X_{U})\cdot n\leq\frac{1}{4} \left(\frac{(\rho+\lambda)}{\sqrt{\lambda\rho}}\right)^{2}\left(E(R^{\mathbf{ \omega}}(X))\right)^{2}\] \[\text{or}\hskip 14.226378ptE(R^{\mathbf{\omega}}(X))\geq\frac{\sqrt{8n \rho\lambda R_{-1}(X_{U})}}{\rho+\lambda}.\qed\]
**Theorem 5.8**.: _If \(\rho=\max\limits_{i}\lvert\lambda_{i}\rvert\) and \(\lambda=\min\limits_{i}\lvert\lambda_{i}\rvert\), where \(\lambda_{1},\ldots,\lambda_{n}\) are the eigenvalues of \(R^{\mathbf{\omega}}(X)\), then_
\[E(R^{\mathbf{\omega}}(X))\geq\frac{\sqrt{8nR_{-1}(X_{U})-n^{2}(\rho-\lambda)^{2}}}{ 2}.\]
Proof.: Let \(\lambda_{1}\leq\cdots\leq\lambda_{n}\). Since \(\rho=\max\limits_{i}\lvert\lambda_{i}\rvert\) and \(\lambda=\min\limits_{i}\lvert\lambda_{i}\rvert\), by Ozeki's Inequality we have
\[\sum_{i=1}^{n}\lvert\lambda_{i}\rvert^{2}\sum_{i=1}^{n}1^{2}- \left(\sum_{i=1}^{n}\lvert\lambda_{i}\rvert\right)^{2}\leq\frac{n^{2}}{4}\left( \rho-\lambda\right)^{2}\] \[\text{or}\hskip 14.226378pt\frac{\sqrt{8nR_{-1}(X_{U})-n^{2}(\rho- \lambda)^{2}}}{2}\leq E(R^{\mathbf{\omega}}(X)).\qed\]
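The lower bounds of Theorems 5.7 and 5.8 can likewise be checked numerically on a random Hermitian matrix with zero diagonal (a stand-in for \(R^{\mathbf{\omega}}(X)\), with \(R_{-1}(X_{U})=\frac{1}{2}\sum_{i}{\lambda_{i}}^{2}\)); the radicand in Theorem 5.8 is clipped at zero, since the bound is vacuous when it is negative.

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
M = (B + B.conj().T) / 2
np.fill_diagonal(M, 0)

eigs = np.linalg.eigvalsh(M)
n = len(eigs)
energy = np.abs(eigs).sum()
R_m1 = (eigs ** 2).sum() / 2                   # R_{-1}(X_U)
rho, lam = np.abs(eigs).max(), np.abs(eigs).min()

polya_szego = np.sqrt(8 * n * rho * lam * R_m1) / (rho + lam)          # Thm 5.7
ozeki = np.sqrt(max(8 * n * R_m1 - n ** 2 * (rho - lam) ** 2, 0)) / 2  # Thm 5.8
assert energy >= polya_szego - 1e-9 and energy >= ozeki - 1e-9
```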
## Funding
The first two authors thank the Science and Engineering Research Board (SERB), Government of India for supporting a major part of this work under the Teachers Associateship for Research Excellence (TARE) project [file number TAR/2021/000045].
|
2308.12978 | A Note on the Kaluza-Klein Theory | We show that the Kaluza-Klein theory contains a fundamental problem: The
four-dimensional metric tensor and the electromagnetic potential vector assumed
in the Kaluza-Klein theory belong to four-dimensional vector spaces that are
not integrable in general, with the result that the four-dimensional physical
variables and the corresponding field equations derived from the
five-dimensional Einstein field equation (i.e., the four-dimensional Einstein
field equation and the Maxwell equations) are not defined on a four-dimensional
submanifold. That is, the four-dimensional spacetime assumed in the
Kaluza-Klein theory does not exist. No satisfactory solutions are found within
the Kaluza-Klein formalism. Perhaps the best approach to fix the problem is
giving up the Kaluza-Klein theory and looking for a new unified scheme for
gravitational and electromagnetic interactions in the framework of a spacetime
with extra dimensions, as has already been explored in some literature. | Li-Xin Li | 2023-08-23T01:03:58Z | http://arxiv.org/abs/2308.12978v1 | # A Note on the Kaluza-Klein Theory
###### Abstract
We show that the Kaluza-Klein theory contains a fundamental problem: The four-dimensional metric tensor and the electromagnetic potential vector assumed in the Kaluza-Klein theory belong to four-dimensional vector spaces that are not integrable in general, with the result that the four-dimensional physical variables and the corresponding field equations derived from the five-dimensional Einstein field equation (i.e., the four-dimensional Einstein field equation and the Maxwell equations) are not defined on a four-dimensional submanifold. That is, the four-dimensional spacetime assumed in the Kaluza-Klein theory does not exist. No satisfactory solutions are found within the Kaluza-Klein formalism. Perhaps the best approach to fix the problem is giving up the Kaluza-Klein theory and looking for a new unified scheme for gravitational and electromagnetic interactions in the framework of a spacetime with extra dimensions, as has already been explored in some literature.
## I Introduction
The Kaluza-Klein (KK) theory represents the first attempt to unify the gravitational and electromagnetic interactions in the framework of general relativity extended to a spacetime with extra dimensions [1; 2; 3].1 In the KK theory, the bulk spacetime is assumed to be five-dimensional and described by the five-dimensional Einstein field equation. By 4+1 decomposition of the five-dimensional spacetime metric, a four-dimensional Einstein field equation and the Maxwell equations are derived from the five-dimensional Einstein field equation, which are assumed to describe the four-dimensional world where we live. How to make the extra dimension compact, small, and static has been a challenging problem in modern theoretical physics [5; 6; 7; 8; 9]. Nowadays, introducing (compact or noncompact but warped) extra dimensions in addition to the four dimensions of the spacetime where we live has been a popular strategy for unifying all fundamental interactions in nature, e.g., the theories of supergravity [10; 11], superstring [12; 13; 14; 15], and brane gravity [16; 17; 18]. To test the existence of extra dimensions, the KK particles arising from the excitation of fields along the compact extra dimensions have been extensively searched for at the Large Hadron Collider [19; 20; 21].
Footnote 1: An even earlier attempt to unify electromagnetic and gravitational fields in a five-dimensional spacetime before the appearance of general relativity was given by Nordström in 1914 [4].
Despite the success of the KK theory in derivation of the Maxwell equations from the higher-dimensional Einstein field equation and its heavy influence on modern theories of unification, in this paper we show that the KK theory has a serious problem in its foundation: The four-dimensional metric tensor and the electromagnetic potential vector assumed in the KK theory are defined in vector spaces that are not integrable hence not tangent to any four-dimensional submanifold, unless the electromagnetic field antisymmetric tensor vanishes. That is, the four-dimensional spacetime assumed in the KK theory to support the effective four-dimensional theory of electromagnetism and gravity is defined only when the electromagnetic field vanishes, which conflicts with the original aim of the KK theory in unifying the gravitational and electromagnetic interactions.
The influence of the problem just mentioned may not be limited to the KK theory. As is well known, one of the cornerstones of string theory--extra dimensions and compactification of extra dimensions--originated from the KK theory with an extension from one extra dimension to multiple extra dimensions. In string theory, a popular approach to derive gauge fields from higher-dimensional gravity is through the KK mechanism with an extension to spacetime of dimensions greater than five. For example, this is the case in the eleven-dimensional supergravity when it is connected to the low-energy limit of M-theory [13; 14; 15].
The paper is organized as follows. In section II, we outline the KK theory and derive the representation of the four-dimensional metric tensor and the electromagnetic potential vector in the five-dimensional spacetime. The representation is uniquely determined by the self-consistency requirement of the theory. In section III, we discuss the geometric interpretation of the above two KK variables and quantities derived from them (e.g., the electromagnetic field antisymmetric tensor). We show that the four-dimensional quantities are in vector spaces orthogonal to the direction of the extra dimension. In section IV, we prove that the vector spaces containing the four-dimensional variables are not integrable unless the electromagnetic field tensor vanishes. Thus, in general there does not exist a four-dimensional submanifold supporting the four-dimensional theory derived from the five-dimensional Einstein field equation.
Section V is devoted to discussion on the action principle and compactification of the extra dimension under the assumption of the cylinder condition. We show that after compactification, although the four-dimensional Einstein field equation and the Maxwell equations can be derived from the action principle, there still does not exist a four-dimensional submanifold supporting the four-dimensional field equations. Finally, in section VI, we summarize the results that we have obtained in this paper and discuss their implications.
Throughout the paper geometrized units with \(G=c=1\) are adopted unless otherwise stated, where \(G\) is the four-dimensional gravitational constant and \(c\) is the speed of light. In addition, we will take \((-,+,+,+,+)\) as the convention for the signature of the five-dimensional spacetime metric. The abstract index notation for vectors and tensors advocated in [22] will be used. That is, vectors and tensors are denoted by letters followed by lower case Latin indices, e.g., \(v^{a}\), \(g_{ab}\), etc.
## II The Kaluza-Klein formalism
The success of the KK theory relies on a specific decomposition scheme of the metric tensor of a five-dimensional bulk spacetime. Without loss of generality, in a five-dimensional spacetime \((\tilde{\cal M},\tilde{g}_{ab})\) we take a coordinate system \(\{x^{0},x^{1},x^{2},x^{3},x^{4}=w\}\) and write the matrix representation of the five-dimensional metric tensor \(\tilde{g}_{ab}\) as2
Footnote 2: The form of metric decomposition in equation (1) agrees with the general case for the KK theory generalized to a \((4+n)\)-dimensional spacetime to include non-Abelian gauge fields, where the \(\phi^{2}\) is replaced by a matrix \(g_{ij}\) with the indices \(i\) and \(j\) running from \(1\) to \(n\) for the extra dimensions [23; 24; 25].
\[\tilde{g}_{AB}=\left(\begin{array}{cc}g_{\mu\nu}+\phi^{2}A_{\mu}A_{\nu}& \phi^{2}A_{\mu}\\ \phi^{2}A_{\nu}&\phi^{2}\end{array}\right)\;, \tag{1}\]
where indices \(A,B=0,1,2,3,4\), and \(\mu,\nu=0,1,2,3\). Capital Latin letters label coordinate components of five-dimensional vectors and tensors. Lower case Greek letters label coordinate components of four-dimensional vectors and tensors.
The \(4\times 4\) matrix \(g_{\mu\nu}\) is interpreted as the component of the metric on a four-dimensional spacetime \(({\cal M},g_{ab})\) associated with the coordinate system \(\{x^{0},x^{1},x^{2},x^{3}\}\), the \(4\times 1\) matrix \(A_{\mu}\) as the component of an electromagnetic potential dual vector, and the function \(\phi\) as a scalar field in \(({\cal M},g_{ab})\). With the convention in equation (1), the five-dimensional spacetime metric tensor \(\tilde{g}_{ab}\) is represented in the coordinate system \(\{x^{\mu},w\}\) as
\[\tilde{g}_{ab} = \tilde{g}_{AB}dx_{a}^{A}dx_{b}^{B}=(g_{\mu\nu}+\phi^{2}A_{\mu}A_{ \nu})dx_{a}^{\mu}dx_{b}^{\nu} \tag{2}\] \[+2\phi^{2}A_{\mu}dx_{(a}^{\mu}dw_{b)}+\phi^{2}dw_{a}dw_{b}\;,\]
where the parentheses in the indices of a tensor denote symmetrization of the tensor about the indices inside the parentheses. The Einstein summation convention for tensor components is used, i.e., an index appearing in both subscripts and superscripts is summed over all dimensions represented by the index.
The inverse of the \(5\times 5\) matrix in equation (1), which is also the component matrix of the inverse five-dimensional metric tensor \(\tilde{g}^{ab}\), is
\[\tilde{g}^{AB}=\left(\begin{array}{cc}g^{\mu\nu}&-A^{\mu}\\ -A^{\nu}&\phi^{-2}+A_{\rho}A^{\rho}\end{array}\right)\;, \tag{3}\]
where the \(4\times 4\) matrix \(g^{\mu\nu}\) is the inverse of \(g_{\mu\nu}\), i.e.,
\[g_{\mu\nu}g^{\nu\rho}=\delta_{\mu}^{\;\;\rho} \tag{4}\]
where \(\delta_{\mu}^{\;\;\nu}=1\) if \(\mu=\nu\) and \(0\) otherwise; and
\[A^{\mu}\equiv g^{\mu\nu}A_{\nu}\;. \tag{5}\]
Equations (4) and (5) automatically imply
\[A_{\mu}=g_{\mu\nu}A^{\nu}\;. \tag{6}\]
By equation (3), the inverse five-dimensional metric tensor is represented as
\[\tilde{g}^{ab} = g^{\mu\nu}\left(\frac{\partial}{\partial x^{\mu}}\right)^{a} \left(\frac{\partial}{\partial x^{\nu}}\right)^{b}-2A^{\mu}\left(\frac{ \partial}{\partial x^{\mu}}\right)^{(a}\left(\frac{\partial}{\partial w} \right)^{b} \tag{7}\] \[+\left(\frac{1}{\phi^{2}}+A_{\rho}A^{\rho}\right)\left(\frac{ \partial}{\partial w}\right)^{a}\left(\frac{\partial}{\partial w}\right)^{b}\;.\]
It can be verified that the following reciprocal relation is satisfied
\[\tilde{g}_{ab}\tilde{g}^{bc}=\tilde{\delta}_{a}^{\;\;c}\equiv\delta_{\mu}^{\; \;\nu}dx_{a}^{\mu}\left(\frac{\partial}{\partial x^{\nu}}\right)^{c}+dw_{a} \left(\frac{\partial}{\partial w}\right)^{c}\;, \tag{8}\]
where \(\tilde{\delta}_{a}^{\;\;c}\) is the identity map in the five-dimensional spacetime.
In fact, the \(5\times 5\) matrices \(\tilde{g}_{AB}\) and \(\tilde{g}^{AB}\) in equations (1) and (3) are inverse to each other if and only if (a) the \(4\times 4\) matrices \(g^{\mu\nu}\) and \(g_{\mu\nu}\) are inverse to each other (eq. 4), and (b) the \(A^{\mu}\) and \(A_{\mu}\) are related by equations (5) and (6). It should be noted that equations (4), (5), and (6) are not independent, since any two of them imply the third.
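This reciprocal structure is easy to verify numerically. The Python sketch below fills equations (1) and (3) with arbitrary sample values at a single point (a hypothetical perturbed Minkowski \(g_{\mu\nu}\), a random \(A_{\mu}\), and a constant \(\phi\), chosen only for illustration) and checks the reciprocal relation as well as the determinant identity \(\tilde{g}=\phi^{2}g\) quoted in section V.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sample fields at one point.
pert = 0.1 * rng.standard_normal((4, 4))
g4 = np.diag([-1.0, 1.0, 1.0, 1.0]) + (pert + pert.T) / 2
A = rng.standard_normal(4)
phi = 1.3

g4_inv = np.linalg.inv(g4)
A_up = g4_inv @ A                     # A^mu = g^{mu nu} A_nu, eq. (5)

g5 = np.zeros((5, 5))                 # eq. (1)
g5[:4, :4] = g4 + phi ** 2 * np.outer(A, A)
g5[:4, 4] = g5[4, :4] = phi ** 2 * A
g5[4, 4] = phi ** 2

g5_inv = np.zeros((5, 5))             # eq. (3)
g5_inv[:4, :4] = g4_inv
g5_inv[:4, 4] = g5_inv[4, :4] = -A_up
g5_inv[4, 4] = phi ** -2 + A @ A_up

assert np.allclose(g5 @ g5_inv, np.eye(5))                        # eq. (8)
assert np.isclose(np.linalg.det(g5), phi ** 2 * np.linalg.det(g4))
```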
The KK 4-metric tensor \(g_{ab}\) and the electromagnetic potential dual 4-vector \(A_{a}\) are, respectively, a tensor and a vector in the five-dimensional spacetime \((\tilde{\cal M},\tilde{g}_{ab})\). The question is how they are expressed in coordinate components in the five-dimensional coordinate system \(\{x^{\mu},w\}\). Since \(g_{\mu\nu}\) are interpreted as the components of the four-dimensional metric in coordinates \(\{x^{\mu}\}\), the \(\mu\)-\(\nu\) components of \(g_{ab}\) must be \(g_{\mu\nu}\). Then, the general form of \(g_{ab}\) must be
\[g_{ab}=g_{\mu\nu}dx_{a}^{\mu}dx_{b}^{\nu}+2g_{\mu 4}dx_{(a}^{\mu}dw_{b)}+g_{44}dw_{a}dw_{b}\;, \tag{9}\]
where \(g_{\mu 4}\) and \(g_{44}\) are to be determined. Similarly, since \(A_{\mu}\) is interpreted as the coordinate component of \(A_{a}\) in \(\{x^{\mu}\}\), and \(A^{\mu}=g^{\mu\nu}A_{\nu}\) as the coordinate component of \(A^{a}\), we must have
\[A_{a}=A_{\mu}dx_{a}^{\mu}+A_{4}dw_{a} \tag{10}\]
\[A^{a}=A^{\mu}\left(\frac{\partial}{\partial x^{\mu}}\right)^{a}+A^{4}\left(\frac{ \partial}{\partial w}\right)^{a}\;, \tag{11}\]
where \(A_{4}\) and \(A^{4}\) are to be determined.
By equations (7) and (10), we have
\[A^{a} = \tilde{g}^{ab}A_{b}=\left(g^{\mu\nu}A_{\nu}-A^{\mu}A_{4}\right)\left(\frac{\partial}{\partial x^{\mu}}\right)^{a}-\left[A_{\rho}A^{\rho}-\left(\frac{1}{\phi^{2}}+A_{\rho}A^{\rho}\right)A_{4}\right]\left(\frac{\partial}{\partial w}\right)^{a}\;. \tag{12}\]
By equation (5), comparison of equation (12) to equation (11) leads to
\[A_{4}=0\;,\hskip 28.452756ptA^{4}=-A_{\rho}A^{\rho}\;. \tag{13}\]
Thus, we must have
\[A_{a}=A_{\mu}dx_{a}^{\mu}\;, \tag{14}\]
and
\[A^{a}=A^{\mu}\left(\frac{\partial}{\partial x^{\mu}}\right)^{a}-A_{\rho}A^{ \rho}\left(\frac{\partial}{\partial w}\right)^{a}\;. \tag{15}\]
By equations (9) and (11), we have
\[g_{ab}A^{b}=\left(A_{\mu}+g_{\mu 4}A^{4}\right)dx_{a}^{\mu}+\left(g_{\mu 4}A^{\mu}+g_{44}A^{4} \right)dw_{a} \tag{16}\]
after substitution of equation (6). Since \(g_{ab}\) and \(A^{a}\) are interpreted as, respectively, the metric tensor and the electromagnetic potential vector in a four-dimensional spacetime, we must have \(g_{ab}A^{b}=\tilde{g}_{ab}A^{b}=A_{a}\). Then, comparison of equation (16) to equation (14) leads to
\[g_{\mu 4}=g_{44}=0\;. \tag{17}\]
Thus, in coordinates \(\{x^{\mu},w\}\) the 4-metric tensor \(g_{ab}\) is represented as
\[g_{ab}=g_{\mu\nu}dx_{a}^{\mu}dx_{b}^{\nu}\;. \tag{18}\]
Then, by \(g^{ab}=\tilde{g}^{ac}\tilde{g}^{bd}g_{cd}\) we get the inverse 4-metric tensor
\[g^{ab} = g^{\mu\nu}\left(\frac{\partial}{\partial x^{\mu}}\right)^{a} \left(\frac{\partial}{\partial x^{\nu}}\right)^{b}-2A^{\mu}\left(\frac{ \partial}{\partial x^{\mu}}\right)^{(a}\left(\frac{\partial}{\partial w} \right)^{b} \tag{19}\] \[+A^{\rho}A_{\rho}\left(\frac{\partial}{\partial w}\right)^{a} \left(\frac{\partial}{\partial w}\right)^{b}\;.\]
From equations (18) and (19) we get
\[g_{a}^{\phantom{a}c} = g_{ab}g^{bc}=\delta_{\mu}^{\phantom{a}\nu}dx_{a}^{\mu}\left( \frac{\partial}{\partial x^{\nu}}\right)^{c}-A_{\mu}dx_{a}^{\mu}\left(\frac{ \partial}{\partial w}\right)^{c} \tag{20}\] \[= g_{\phantom{a}a}^{c}\;,\]
as expected.
Therefore, given the 4+1 decomposition of the five-dimensional metric in equation (1), the four-dimensional metric tensor \(g_{ab}\) and the electromagnetic potential vector \(A^{a}\) are uniquely determined by equations (18) and (15). They are determined by the assumed form of the five-dimensional metric and the self-consistency requirement of the theory, without additional assumptions.
## III Geometric interpretation of the Kaluza-Klein variables
For the KK theory to be meaningful, the 4-metric tensor \(g_{ab}\) and the electromagnetic potential 4-vector \(A_{a}\) assumed in the KK theory, and the four-dimensional quantities derived from them (e.g., the four-dimensional Ricci tensor \(R_{ab}\) and the electromagnetic field antisymmetric tensor \(F_{ab}\)) must be defined on some four-dimensional manifold--or a four-dimensional submanifold embedded in the five-dimensional manifold \(\tilde{\cal M}\). Such a submanifold \({\cal M}\) should be a hypersurface in \(\tilde{\cal M}\), since \(\dim{\cal M}=4=\dim\tilde{\cal M}-1\). Assume that such a hypersurface has a unit normal \(n^{a}\); the normal must be spacelike, since \(({\cal M},g_{ab})\) is supposed to be a four-dimensional spacetime. That is, all vectors in a vector space tangent to \({\cal M}\) are orthogonal to \(n^{a}\), and \(\tilde{g}_{ab}n^{a}n^{b}=n^{a}n_{a}=1\). Then, the 4-metric \(g_{ab}\) on \({\cal M}\) must be related to the 5-metric \(\tilde{g}_{ab}\) on \(\tilde{\cal M}\) by \(g_{ab}=\tilde{g}_{ab}-n_{a}n_{b}\), or, equivalently,
\[g^{ab}=\tilde{g}^{ab}-n^{a}n^{b}\;. \tag{21}\]
The questions are: _does such a hypersurface exist?_ If yes, _how is it defined?_
It appears that neither of the above two questions has been seriously considered in the literature, at least to the knowledge of the present author. In his original paper [1] (English translation in [26], page 61), Kaluza only wrote that "we are certainly free to consider our space-time to be a four-dimensional part of an \(R_{5}\)". Kaluza used \(R_{5}\) to denote a five-dimensional spacetime. In [2] (English translation in [26], page 76), Klein only stated that "four of the coordinates, \(x^{1}\), \(x^{2}\), \(x^{3}\), \(x^{4}\), say, are to characterize the usual space-time." (Klein's \(x^{1}\), \(x^{2}\), \(x^{3}\), \(x^{4}\) are equivalent to our \(x^{0}\), \(x^{1}\), \(x^{2}\), \(x^{3}\) respectively, and his \(x^{0}\) corresponds to our \(w\) coordinate.) How is "a four-dimensional part of an \(R_{5}\)" defined? What is the exact meaning of "the usual space-time" in mathematics? These questions have never been clearly answered.
In some references the authors have explicitly identified the four-dimensional spacetime with the hypersurface defined by \(w=\) const without any proof or argument. For example, in [27] Einstein and Bergmann wrote that "We consider a four dimensional surface cutting each of the \(A\)-lines once and only once. We introduce on this surface 4 coordinates \(x^{a}(a=1...4)\) and assume \(x^{0}\) equal zero on this surface." (Section I of [27], subsection "The Special Coordinate System"). Note that their coordinate \(x^{0}\) corresponds to our \(w\), and their \(x^{a}\) correspond to our \(x^{\mu}(\mu=0,1,2,3)\). Their "\(A\)-lines" correspond to our \(w\)-lines. To distinguish it from the electromagnetic potential vector \(A^{a}\) used in this paper, let us denote the 5-vector "\(A\)" used in [27] by \(\tilde{A}^{a}\). In our notations, \(\tilde{A}^{a}=\phi^{-2}(\partial/\partial w)^{a}=\phi^{-1}n^{a}\), which is related to the 4-vector potential \(A^{a}\) by \(\tilde{A}_{a}=A_{a}+dw_{a}\) (see eq. 33 below, Einstein and Bergmann chose \(\phi=1\) so that \(\tilde{g}_{ab}\tilde{A}^{a}\tilde{A}^{b}=1\)). Similarly, in [28] (English translation in [26], page 108), Thiry wrote that "Kaluza's attempt
at a unified theory consists of considering space-time as the \(x^{0}={\rm const.}\) subspace of a five-dimensional Riemann space, and of assuming this subspace cylindrical with respect to the fifth coordinate \(x^{0}\)." (Thiry's \(x^{0}\) is equal to our \(w\) coordinate.)
There are also people taking different views. For example, Coquereaux & Esposito-Farese [29] interpreted the four-dimensional spacetime as a hypersurface orthogonal to the \(w\)-lines by stating that "Locally, the 4-dimensional space orthogonal to this vector will be interpreted as the usual space-time". Their "this vector" corresponds to our \(n^{a}=\phi^{-1}(\partial/\partial w)^{a}\), i.e., the vector \(n^{a}\) in equation (33) below. However, Coquereaux & Esposito-Farese did not provide evidence supporting their views. They did not even consider whether "the 4-dimensional space orthogonal to this vector" exists or not. As will be shown later in this paper, such a hypersurface does not exist in general.
The view that the submanifold supporting the four-dimensional variables in the KK theory coincides with the hypersurface defined by \(w={\rm const}\) was disproved in [30]. Let us use \({\cal S}\) to denote the hypersurface defined by \(w={\rm const}\), and write its unit normal as \(s^{a}\). The \(s^{a}\) is defined by \(\tilde{g}_{ab}s^{a}s^{b}=1\) and \(\tilde{g}_{ab}s^{a}(\partial/\partial x^{\mu})^{b}=0\) for \(\mu=0...3\). Thus we must have
\[s_{a}=\left(\tilde{g}^{44}\right)^{-1/2}dw_{a}=\frac{1}{\sqrt{\phi^{-2}+A_{ \rho}A^{\rho}}}dw_{a}\;. \tag{22}\]
By equations (15) and (19), we have
\[A^{a}s_{a}=-\left(\tilde{g}^{44}\right)^{-1/2}A_{\rho}A^{\rho}=-\left(\tilde{ g}^{44}\right)^{-1/2}A_{a}A^{a}\;, \tag{23}\]
and
\[g^{ab}s_{b} = \left(\tilde{g}^{44}\right)^{-1/2}\left[-A^{\mu}\left(\frac{ \partial}{\partial x^{\mu}}\right)^{a}+A^{\rho}A_{\rho}\left(\frac{\partial}{ \partial w}\right)^{a}\right] \tag{24}\] \[= -\left(\tilde{g}^{44}\right)^{-1/2}A^{a}\;.\]
Hence, the four-dimensional variables \(g_{ab}\) and \(A^{a}\) are not orthogonal to \(s^{a}\) unless \(A_{a}=0\).
By equation (2) we have the metric tensor on \({\cal S}(w=0)\)
\[\hat{g}_{ab}=\tilde{g}_{ab}-s_{a}s_{b}=\left(g_{\mu\nu}+\phi^{2}A_{\mu}A_{\nu}\right)dx^{\mu}_{a}dx^{\nu}_{b}+2\phi^{2}A_{\mu}dx^{\mu}_{(a}dw_{b)}+\frac{\phi^{4}A_{\rho}A^{\rho}}{1+\phi^{2}A_{\rho}A^{\rho}}dw_{a}dw_{b}\;. \tag{25}\]
After restriction of the action of \(\hat{g}_{ab}\) on vectors tangent to \({\cal S}\), we get the metric tensor on \({\cal S}\)
\[\hat{g}_{ab}=\left(g_{\mu\nu}+\phi^{2}A_{\mu}A_{\nu}\right)dx^{\mu}_{a}dx^{\nu }_{b}\;. \tag{26}\]
The four-dimensional components of \(\hat{g}_{ab}\) are not \(g_{\mu\nu}\), but \(g_{\mu\nu}+\phi^{2}A_{\mu}A_{\nu}\). Hence, the KK theory can be treated as _approximately_ being defined on \({\cal S}\) only if the electromagnetic field is sufficiently weak so that \(|\phi^{2}A_{\mu}A_{\nu}|\ll|g_{\mu\nu}|\sim 1\), i.e., only if
\[|\phi A_{\mu}|\ll A_{\rm crit}\equiv\frac{c^{2}}{G^{1/2}}=3.48\times 10^{24}\,{ \rm statvolt}\;. \tag{27}\]
If \(|\phi A_{\mu}|\gtrsim A_{\rm crit}\), the \(g_{ab}\) and \(A_{a}\) are not orthogonal to \(s^{a}\) and the KK theory cannot be treated as being defined on the hypersurface defined by \(w=0\).
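The numerical value in equation (27) is just the Gaussian-units arithmetic \(c^{2}/G^{1/2}\); a one-line check (CGS constants, where \(1\,{\rm statvolt}=1\,{\rm g}^{1/2}\,{\rm cm}^{1/2}\,{\rm s}^{-1}\)):

```python
c = 2.99792458e10      # speed of light, cm/s
G = 6.674e-8           # gravitational constant, cm^3 g^-1 s^-2
A_crit = c ** 2 / G ** 0.5
print(f"A_crit = {A_crit:.3e} statvolt")   # ~3.48e+24, as in eq. (27)
```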
To find a submanifold \({\cal M}\) supporting the KK four-dimensional variables, we substitute equations (7) and (19) into equation (21). We get
\[n^{a}n^{b}=\frac{1}{\phi^{2}}\left(\frac{\partial}{\partial w}\right)^{a} \left(\frac{\partial}{\partial w}\right)^{b}\;, \tag{28}\]
which immediately leads to a unique solution (up to a sign)
\[n^{a}=\phi^{-1}w^{a}\;,\qquad w^{a}\equiv\left(\frac{\partial}{\partial w} \right)^{a}\;. \tag{29}\]
Thus, if the submanifold \({\cal M}\) exists, its unit normal \(n^{a}\) must be a unit vector tangent to the coordinate lines of the extra dimension. It is easy to verify that both \(g_{ab}\) and \(A^{a}\) are orthogonal to \(n^{a}\).
Let us denote the tangent space of the five-dimensional manifold \(\tilde{\cal M}\) at a point \(p\in\tilde{\cal M}\) by \(\tilde{\cal T}_{p}\), \(\dim\tilde{\cal T}_{p}=\dim\tilde{\cal M}=5\). The disjoint union of \(\tilde{\cal T}_{p}\) at all points of \(\tilde{\cal M}\) is called the tangent bundle of \(\tilde{\cal M}\) and denoted as \(\tilde{\cal T}=\tilde{\cal T}(\tilde{\cal M})\). Let \({\cal T}\) be a rank-4 subbundle of \(\tilde{\cal T}\) (called a rank-4 distribution or tangent distribution, or tangent subbundle [31]), described by a disjoint union of subspaces containing all vectors and tensors orthogonal to \(w^{a}\propto n^{a}\) at all points of \(\tilde{\cal M}\), i.e., \({\cal T}=\prod_{p\in\tilde{\cal M}}{\cal T}_{p}\) with \(\dim{\cal T}_{p}=4\). The \({\cal T}\) is a smooth distribution in the sense that for each \(p\in\tilde{\cal M}\) we can find an open neighborhood \(\tilde{\cal O}\) of \(p\) such that in \(\tilde{\cal O}\), \({\cal T}\) is spanned by smooth vector and tensor fields orthogonal to \(w^{a}\). We have \(g_{ab}\), \(g_{a}^{\;\;b}\), \(g^{ab}\in{\cal T}\). The \(g_{ab}\) is the metric tensor field in \({\cal T}\). The \(g^{ab}\) is the inverse metric tensor field, and \(g_{a}^{\;\;b}\) the identity map in \({\cal T}\).3 Since \(A_{a}w^{a}=0=A^{a}w_{a}\), we have \(A_{a}\), \(A^{a}\in{\cal T}\), too.
Footnote 3: The \(g_{a}^{\;\;b}=\tilde{g}_{a}^{\;\;b}-n_{a}n^{b}\) is also the projection operator mapping a vector (and a tensor) in \(\tilde{\cal T}\) to a vector (and a tensor) in \({\cal T}\).
By equations (2) and (14), we have
\[\phi^{2}A_{a}=\phi^{2}A_{\mu}dx^{\mu}_{a}=\tilde{g}_{bc}\left(\frac{\partial}{ \partial x^{\mu}}\right)^{b}\left(\frac{\partial}{\partial w}\right)^{c}dx^{\mu}_ {a}\;. \tag{30}\]
By equation (8), we have
\[dx^{\mu}_{a}\left(\frac{\partial}{\partial x^{\mu}}\right)^{b}=\tilde{\delta}_{a}^{ \;\;b}-dw_{a}\left(\frac{\partial}{\partial w}\right)^{b}\;. \tag{31}\]
Hence, we get
\[\phi^{2}A_{a} = \tilde{g}_{bc}\left(\frac{\partial}{\partial w}\right)^{c}\left[ \tilde{\delta}_{a}^{\;\;b}-dw_{a}\left(\frac{\partial}{\partial w}\right)^{b}\right] \tag{32}\] \[= \tilde{g}_{ac}\left(\frac{\partial}{\partial w}\right)^{c}-\phi^{2} dw_{a}\;,\]
where we have used \(\tilde{g}_{bc}(\partial/\partial w)^{b}(\partial/\partial w)^{c}=\tilde{g}_{44}=\phi^{2}\). Substituting the definition of \(n^{a}\) in equation (29) into equation (32), we get the relation
\[A_{a}=\phi^{-1}n_{a}-dw_{a}\;. \tag{33}\]
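Equation (33) can be verified component-wise at a point, reusing the sample-values approach from section II (the numbers below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)
pert = 0.1 * rng.standard_normal((4, 4))
g4 = np.diag([-1.0, 1.0, 1.0, 1.0]) + (pert + pert.T) / 2
A_mu = rng.standard_normal(4)
phi = 0.7

g5 = np.zeros((5, 5))                          # 5-metric of eq. (1)
g5[:4, :4] = g4 + phi ** 2 * np.outer(A_mu, A_mu)
g5[:4, 4] = g5[4, :4] = phi ** 2 * A_mu
g5[4, 4] = phi ** 2

n_up = np.array([0, 0, 0, 0, 1 / phi])         # n^a = phi^{-1} w^a, eq. (29)
n_down = g5 @ n_up                             # lower the index with the 5-metric
dw = np.array([0.0, 0, 0, 0, 1])               # components of dw_a
A_down = np.append(A_mu, 0.0)                  # A_a = A_mu dx^mu_a, eq. (14)

assert np.allclose(A_down, n_down / phi - dw)  # eq. (33)
```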
Equation (33) states that \(A^{a}\) is obtained from the orthogonal decomposition of the vector field \(\mu^{a}\equiv\tilde{g}^{ab}dw_{b}\) with respect to the unit normal \(n^{a}\): we have \(\mu^{a}=\phi^{-1}n^{a}-A^{a}\), so \(-A^{a}\) is the part of \(\mu^{a}\) orthogonal to \(n^{a}\).
equations can easily be derived from the action principle.4 The cylinder condition is equivalent to the requirement that \(w^{a}\) is a Killing vector of the five-dimensional spacetime, i.e.,
Footnote 4: If the cylinder condition is dropped the derived field equations are much more complicated, see, e.g., [36].
\[\tilde{\mathcal{L}}_{w}\tilde{g}_{ab}=0\;. \tag{45}\]
When the above condition is satisfied, the Ricci scalar \(\tilde{R}\) associated with the five-dimensional bulk metric \(\tilde{g}_{ab}\) is related to the Ricci scalar \(R\) associated with the KK four-dimensional metric \(g_{ab}\) by [29; 33; 34]
\[\tilde{R}=R-\frac{\phi^{2}}{4}F_{ab}F^{ab}+\tilde{\nabla}_{a}v^{a}\;, \tag{46}\]
where \(v^{a}\) is a vector.
The determinant of the five-dimensional metric, \(\tilde{g}=\det\tilde{g}_{AB}\), is related to the determinant of the four-dimensional metric, \(g=\det g_{\mu\nu}\), by \(\tilde{g}=\phi^{2}g\). Thus, we have \(\sqrt{-\tilde{g}}=\phi\sqrt{-g}\) and the five-dimensional action of gravity
\[I_{g} = \frac{1}{\tilde{G}}\int\tilde{R}\sqrt{-\tilde{g}}\,d^{4}xdw \tag{47}\] \[= \frac{1}{\tilde{G}}\int dw\int\phi\left(R-\frac{\phi^{2}}{4}F_{ab }F^{ab}\right)\sqrt{-g}\,d^{4}x\;,\]
where \(\tilde{G}\) is the five-dimensional gravitational constant. The divergence term in equation (46), \(\tilde{\nabla}_{a}v^{a}\), has been dropped since it has no contribution to the action integral under appropriate boundary conditions.
Since \(w^{a}\) is a Killing vector field, the five-dimensional spacetime can be compactified along the direction of the extra dimension, i.e., the direction of \(w\)-lines (Fig. 1). That is, a spacetime point \(\{x^{\mu},w\}\) in \((\tilde{M},\tilde{g}_{ab})\) is identified with the spacetime point \(\{x^{\mu},w+L\}\) in \((\tilde{M},\tilde{g}_{ab})\), where \(L>0\) is a constant. Then, the value of \(w\) is restricted to the region of \([0,L)\), which leads to \(\int dw=L\). For the circumference of the extra dimension, \(C_{w}\equiv\phi\int dw=\phi L\), to be constant, \(\phi\) must be constant [33].5 For the extra dimension to be inaccessible to current experiments the length \(C_{w}=\phi L\) must be sufficiently small [2]. Then, equation (47) becomes
Footnote 5: It should be noted that \(\phi\) cannot be constant if the vacuum Einstein field equation \(\tilde{R}_{ab}=0\) is imposed. This follows because the equation \(\tilde{R}_{ww}=0\) entails that
\[\tilde{\nabla}^{a}\tilde{\nabla}_{a}\phi=\frac{\phi^{3}}{4}F^{ab}F_{ab}\;,\]
as first noted by Jordan [37] and Thiry [28].
\[I_{g}=\frac{C_{w}}{\tilde{G}}\int\left(R-\frac{\phi^{2}}{4}F_{ab}F^{ab}\right) \sqrt{-g}\,d^{4}x\;. \tag{48}\]
The appearance of the term \(-(\phi^{2}/4)F_{ab}F^{ab}\) in the Lagrangian density in equation (48) guarantees that the Maxwell equations can be derived from the five-dimensional Einstein field equation by variation of the action \(I_{g}\) with respect to the potential vector \(A^{a}\). In fact, if we identify \(\tilde{G}/C_{w}\) as the four-dimensional gravitational constant \(G=1\) (i.e., \(C_{w}=\tilde{G}\)) and \(\phi=2\), the action in equation (48) reduces to the total action of gravity and electromagnetic fields in a four-dimensional spacetime (see [22], Appendix E). Note that, however, all the quantities (\(F_{ab}\), \(R\), and \(g\)) appearing in the integral of equation (48) are defined in the tangent subbundle \(\mathcal{T}\) orthogonal to \(w^{a}\), since they all are derived from \(A_{a}\), \(g_{ab}\), and the derivative operator \(\nabla_{a}\) associated with \(g_{ab}\).
According to the results in section IV, the rank-4 distribution \(\mathcal{T}\) is not integrable and hence \(g_{ab}\), \(A_{a}\), and quantities derived from them are not tangent to any four-dimensional submanifold embedded in \(\mathcal{\bar{M}}\) unless the electromagnetic field \(F_{ab}\) vanishes. Although under the cylinder condition the four-dimensional Einstein field equation and the Maxwell equations are successfully derived from the five-dimensional Einstein field equation through the action principle, these equations are not supported by a four-dimensional submanifold hence do not define a four-dimensional spacetime.
Figure 1: Under the cylinder condition the five-dimensional spacetime can be compactified along the direction of the extra dimension [i.e., the direction of \(w^{a}=(\partial/\partial w)^{a}\)]. This way, the hypersurface \(\mathcal{S}(w=0)\) in \((\tilde{M},\tilde{g}_{ab})\) is identified with \(\mathcal{S}^{\prime}(w=L)\) under the map generated by \(w^{a}\). The black dot on \(\mathcal{S}\) is identified with the black dot on \(\mathcal{S}^{\prime}\), the circle on \(\mathcal{S}\) with the circle on \(\mathcal{S}^{\prime}\), and so on (as indicated by dashed lines). The KK variables \(A^{a}\) and \(g_{ab}\) are orthogonal to \(w^{a}\), hence not tangent to \(\mathcal{S}\), since in general \(w^{a}\) is not orthogonal to \(\mathcal{S}\). In fact, \(w^{a}\) is not orthogonal to any hypersurface unless the electromagnetic field \(F_{ab}\) vanishes. Note that to make the extra dimension inaccessible to current experiments its circumference \(C_{w}=\phi L\) has to be small.

The manifold structure of the KK theory, after the extra dimension is compactified, is depicted in Fig. 2. The five-dimensional spacetime "tube" is made of twisted four-dimensional "wires", with each wire representing a hypersurface \(w=\text{const}\). The transverse cross-section of the spacetime tube corresponds to the \(w\)-coordinate lines, i.e., curves whose tangent vectors are \(w^{a}=(\partial/\partial w)^{a}\), as indicated by the blue circle in the figure. The KK variables \(A^{a}\), \(g_{ab}\), and the associated distribution \(\mathcal{T}\) are in the longitudinal direction along the tube (i.e., the direction perpendicular to \(w^{a}\)). They are not tangent to any four-dimensional submanifold. Thus, the action in equation (48) is defined in the rank-4 tangent subbundle or distribution \(\mathcal{T}\), but not defined on a four-dimensional submanifold.
## VI Summary and Discussion
All existing physical theories are defined on a smooth manifold with or without a well-defined spacetime metric. In the KK theory, the five-dimensional theory is defined on a five-dimensional manifold with a Lorentz metric determined by the five-dimensional Einstein field equation. The four-dimensional metric tensor and the electromagnetic potential vector assumed in the KK theory must be defined on a four-dimensional submanifold (i.e., a hypersurface) embedded in the five-dimensional manifold, in order for the derived four-dimensional theory (including the four-dimensional Einstein field equation and the Maxwell equations) to be able to describe the four-dimensional world where we live and do physical experiments. But this is not the case, as has been shown in this paper.
In general, the four-dimensional KK variables \(g_{ab}\), \(A_{a}\), and other geometric quantities derived from them (e.g., the four-dimensional Ricci tensor \(R_{ab}\) and the electromagnetic field antisymmetric tensor \(F_{ab}\)) are in a four-dimensional subbundle that is not tangent to any four-dimensional submanifold, since by the KK construction \(g_{ab}\) and \(A_{a}\) are orthogonal to the vector field \(w^{a}\) generating the extra dimension but \(w^{a}\) is not hypersurface orthogonal unless the electromagnetic field vanishes. Thus, the results presented in the paper lead us to a paradox: the KK theory is valid mathematically only if the electromagnetic field derived from the KK theory vanishes. This is a general conclusion, independent of the cylinder condition adopted for derivation of the four-dimensional field equations.
When the electromagnetic field is weak and has a negligible effect on the spacetime metric, i.e., when condition (27) is satisfied, the four-dimensional metric tensor and the electromagnetic potential vector can be regarded as _approximately_ being defined on the hypersurface of \(w=\text{const}\). But then the KK theory becomes an approximate, weak-field limit theory, conflicting with the original spirit of unification of gravitational and electromagnetic interactions. In addition, without a precisely defined four-dimensional submanifold supporting the four-dimensional variables, it is hard to accept the approximate theory since it is not mathematically well defined. An ultimate solution to the problem raised in this paper may be given by a different 4+1 decomposition of a five-dimensional spacetime metric, as proposed in [30], where the four-dimensional spacetime is defined on a hypersurface that is not orthogonal to the extra dimension; but then the theory is different from the KK theory, since an electromagnetic field equation with a curvature-coupled term is derived.
###### Acknowledgements.
The author thanks the anonymous referee for a very enlightening report, which has stimulated the author to think more deeply and more widely about the problem discussed in the paper. The report has also helped to improve the presentation of the paper. This work was supported by the NSFC grants program (no. 11973014).
|
2305.05220 | Evidence for bootstrap percolation dynamics in a photo-induced phase
transition | Upon intense femtosecond photo-excitation, a many-body system can undergo a
phase transition through a non-equilibrium route, but understanding these
pathways remains an outstanding challenge. Here, we use time-resolved second
harmonic generation to investigate a photo-induced phase transition in
Ca$_3$Ru$_2$O$_7$ and show that mesoscale inhomogeneity profoundly influences
the transition dynamics. We observe a marked slowing down of the characteristic
time, $\tau$, that quantifies the transition between two structures. $\tau$
evolves non-monotonically as a function of photo-excitation fluence, rising
from below 200~fs to $\sim$1.4~ps, then falling again to below 200~fs. To
account for the observed behavior, we perform a bootstrap percolation
simulation that demonstrates how local structural interactions govern the
transition kinetics. Our work highlights the importance of percolating
mesoscale inhomogeneity in the dynamics of photo-induced phase transitions and
provides a model that may be useful for understanding such transitions more
broadly. | Tyler Carbin, Xinshu Zhang, Adrian B. Culver, Hengdi Zhao, Alfred Zong, Rishi Acharya, Cecilia J. Abbamonte, Rahul Roy, Gang Cao, Anshul Kogar | 2023-05-09T07:32:07Z | http://arxiv.org/abs/2305.05220v1 | # Evidence for bootstrap percolation dynamics in a photo-induced phase transition
###### Abstract
Upon intense femtosecond photo-excitation, a many-body system can undergo a phase transition through a non-equilibrium route, but understanding these pathways remains an outstanding challenge. Here, we use time-resolved second harmonic generation to investigate a photo-induced phase transition in Ca\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) and show that mesoscale inhomogeneity profoundly influences the transition dynamics. We observe a marked slowing down of the characteristic time, \(\tau\), that quantifies the transition between two structures. \(\tau\) evolves non-monotonically as a function of photo-excitation fluence, rising from below 200 fs to \(\sim\)1.4 ps, then falling again to below 200 fs. To account for the observed behavior, we perform a bootstrap percolation simulation that demonstrates how local structural interactions govern the transition kinetics. Our work highlights the importance of percolating mesoscale inhomogeneity in the dynamics of photo-induced phase transitions and provides a model that may be useful for understanding such transitions more broadly.
In a photo-induced phase transition (PIPT), a qualitative and macroscopic change to the behavior of a many-body system occurs following intense femtosecond photo-excitation. PIPTs are inherently different from equilibrium transitions because they typically proceed through a far from equilibrium, non-thermal pathway where time becomes a fundamental variable. Tracking the temporal evolution of the spatially averaged response functions is crucial to understanding the spectacular behaviors instigated by photo-excitation in solids, such as the appearance of transient order and metastability of hidden states [1; 2; 3; 4; 5; 6]. However, order often evolves in a spatially non-uniform manner in many PIPTs; a major recurring theme is the presence of electronic and crystallographic inhomogeneity on the nano- to micro-scale [7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25].
Experimentally capturing the _dynamics_ of mesoscale structures has proven difficult. However, inhomogeneous textures have been observed in materials exhibiting long-lived metastability following photo-excitation. In the metastable "hidden" states of both 1_T_-TaS\({}_{2}\)[1; 26; 27] and strained La\({}_{2/3}\)Ca\({}_{1/3}\)MnO\({}_{3}\)[2; 28], quasi-static textures were observed using real-space scanning probe techniques. But, such experimental approaches are currently unfeasible for observing inhomogeneity that evolves on the short timescales characteristic of many PIPTs. A notable exception is the insulator-metal PIPT in VO\({}_{2}\), which was found to exhibit similar transient textures determined by grain boundaries or pre-existing domains following each applied laser pulse [13; 14; 16]. To understand the effects of the dynamic inhomogeneity, our approach here is to quantify its aggregate effects on macroscopic observables and correlate the observations with a percolation model.
To accomplish this goal, we employ time-resolved second harmonic generation (SHG) to investigate a PIPT in a prototypical correlated material, Ca\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) (CRO), in which photo-induced inhomogeneity is expected to occur (see below). By relating the experimental observations to simulation results, we provide strong evidence that structural percolation, mediated by lattice strain, governs the transition kinetics. Specifically, we show that the photo-induced dynamics are consistent with bootstrap percolation (BP), a particular cellular automata model that lacks detailed balance.
Ca\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) (CRO) is the \(n\)=2 compound in the Ruddlesden-Popper series Ca\({}_{n+1}\)Ru\({}_{n}\)O\({}_{3n+1}\)[29]. The crystal is distorted from the typical _I4/mmm_ structure due to rotation and tilts of the oxygen octahedra around the (001) axis and (110) axis, respectively (Fig. 1(b)). At all temperatures, its crystallographic space group is \(Bb2_{1}m\) (No. 36), with point group \(C_{2v}\). As the temperature is reduced below the Néel temperature, \(T_{N}\)=56 K, CRO undergoes a continuous phase transition to an antiferromagnetic state in which the Ru spins are aligned ferromagnetically along the \(\pm a\) axis within each bilayer and antiferromagnetically between bilayers [30]. Of primary interest here is the discontinuous metal-insulator transition at \(T_{MI}\)=48 K. In samples grown by the floating zone method, the low-temperature state is semimetallic [31; 32; 33], but in the flux-grown samples used here, this state is truly insulating [34]. This electronic transition is accompanied by a rotation of the spins from the \(\pm a\) axis to the \(\pm b\) axis [30; 35] and a structural transition without a change in crystallographic symmetry. Through the transition, the \(c\) axis contracts by \(\sim\)0.1% and the \(a\) and \(b\) axis lattice parameters expand by \(\sim\)0.07% [29]. The
structural change leads to a compression of the oxygen octahedra and further breaks the degeneracy between the Ru \(d_{xy}\) and the \(d_{xz/yz}\) orbitals; it is therefore thought to be a vital component of the insulator-metal transition by promoting an orbital polarization [35; 36; 37; 38; 39].
To probe the dynamics of the phase transition, we perform time-resolved SHG, a technique that can monitor the symmetry of CRO in its various phases [40]. These measurements were performed in a reflection geometry, using 180 fs laser pulses with a 5 kHz repetition rate. We collected data in two configurations - one with parallel and one with perpendicular incident and outgoing light polarization. The probe beam was centered at 900 nm (1.38 eV) and was shone normal to the \(ab\)-plane with a 40 \(\mu\)m spot size. The pump beam was centered at 1030 nm (1.20 eV) and was incident at 15\({}^{\circ}\) to the surface normal with a 200 \(\mu\)m spot size. When the system is pumped with the laser pulse, the excited electrons remain within the \(t_{2g}\) manifold of the Ru\({}^{4+}\) atoms (the crystal field splitting to the \(e_{g}\) levels is \(\sim\)2 eV) [41].
In the equilibrium state, the leading-order electric-dipole (ED) SHG contribution is allowed in CRO due to broken inversion symmetry (Fig. 1). At all temperatures, we obtain a good simultaneous fit to the parallel and perpendicular polarization configurations of the rotational anisotropy (RA) pattern with the ED contribution of the known point group symmetry \(C_{2v}\)[42] (Fig. 1(a)). In Fig. 1(c), we show the temperature dependence of the SHG intensity, \(I^{2\omega}\), arising from the tensor element \(\chi^{ED}_{baa}\) (red dot in Fig. 1(a)) across the insulator-metal transition. Although there is no temperature-induced change in symmetry of the RA pattern across the insulator-metal transition (Fig. 2(a)), the intensity exhibits a pronounced jump. (The offset between the reported value for \(T_{MI}\)=48 K and the observed jump around 46 K arises due to laser heating) [42]. No thermal hysteresis is measured, and no features are observed at \(T_{N}\) (Fig.1(c) inset).
Because the crystal, electronic and magnetic structure all change at \(T_{MI}\), the cause of the observed jump in \(I^{2\omega}\) is not immediately clear. A previous study reports a similar increase in \(I^{2\omega}\) for pure CRO, but observes no such feature in the Fe-doped compound Ca\({}_{3}\)Ru\({}_{1.95}\)Fe\({}_{0.05}\)O\({}_{7}\)[43]. This difference is striking because, similar to pure CRO, the latter compound undergoes a transition in which the spins reorient from pointing along the \(\pm a\) axis to the \(\pm b\) axis and a concomitant metal-insulator transition. However, unlike in pure CRO, this transition is not accompanied by a large structural change [37; 44]. We therefore conclude that the increase in \(I^{2\omega}\) is indicative of the structural change (note that this is consistent with no features being observed at \(T_{N}\); see Fig. 1(c)). To corroborate the connection between crystal structure and \(I^{2\omega}\), in the Supplemental Material we use a Landau theory approach to show that a first-order transition with a symmetry-preserving order parameter can give rise to a jump in \(I^{2\omega}\)[42].
We now study the PIPT instigated by intense femtosecond laser pulses. Figure 2(a) shows the change in the RA-SHG pattern above and below \(T_{MI}\) after the arrival of a 0.28 mJ/cm\({}^{2}\) pump pulse. Below \(T_{MI}\), there is a clear drop in intensity following the pulse, while the pattern above \(T_{MI}\) is minimally affected. As in the thermal transition, the symmetry is unchanged. Figure 2(b) shows the evolution of \(I^{2\omega}\propto|\chi^{ED}_{baa}|^{2}\) after photo-excitation at several temperatures. These curves demonstrate that the decrease in intensity is stable for \(\gg\) 5 ps. The magnitude of the drop in \(I^{2\omega}\) is roughly constant for various temperatures below \(T_{MI}\), but is markedly smaller in the high-temperature phase. (Here the measured \(T_{MI}\) is between 44-45 K due to laser heating from both pump and probe pulses). It should be noted that the intensity of the second harmonic, \(I^{2\omega}\), varies across the sample surface (the \(ab\) plane) due to the nonuniform distribution of 180\({}^{\circ}\) polar domains, as discussed more thoroughly in Refs. [42; 45]. However, the photo-induced changes are associated only with the phase transition [42].
To understand the kinetics of the PIPT, we measure SHG time traces in the low temperature state for various pump fluences (Fig. 3(a)). We fit each time trace to the phenomenological function [42]:
\[u(t)=1+\left[\theta(t-t_{0})\,I_{\infty}\left(1-\alpha e^{-(t-t_{0})/\tau}\right)\right]*g(w_{0},t), \tag{1}\]
Figure 1: **(a)** Measured RA-SHG patterns at 52 K with incident and outgoing polarizers in parallel and perpendicular geometry. Simultaneous fits to both channels are obtained using a susceptibility tensor constrained by the \(C_{2v}\) point group (solid lines). The data are normalized to a maximum in the perpendicular channel. The \(a\) and \(b\) crystallographic axes are indicated with black arrows. **(b)** Schematic of the low-temperature crystal structure. **(c)** Temperature dependence of the SHG intensity \(I^{2\omega}\) at a polarization angle indicated by the red dot in **(a)**. In this geometry, \(I^{2\omega}\propto|\chi^{ED}_{baa}|^{2}\). The red(blue) curve corresponds to heating(cooling). (Inset) Normalized \(I^{2\omega}\propto|\chi^{ED}_{baa}|^{2}\) across \(T_{N}\).
where \(\theta(t)\) denotes the Heaviside step function, \(g(w_{0},t)\) is the cross-correlation of the pump and probe pulses and the \(*\) indicates convolution. This allows us to extract three parameters: (i) the time constant of the transient decay, \(\tau\); (ii) the SHG intensity at late times relative to the intensity before the pulse, \(I_{\infty}\); and (iii) the fraction of \(I_{\infty}\) that is related to the dynamics associated with \(\tau\), \(\alpha\). The best-fit values for \(\tau\), \(I_{\infty}\), and \(\alpha I_{\infty}\) are plotted as a function of fluence in Fig. 3(b). We find that \(I_{\infty}\) decreases with increasing fluence until reaching a saturation \(I_{\infty}^{sat}\approx\) -0.11 at fluence \(F_{sat}\approx\) 0.4 mJ/cm\({}^{2}\), while \(\alpha I_{\infty}\) decreases from zero to roughly half of \(I_{\infty}^{sat}\) near \(F_{sat}\) before increasing at high fluence. The time constant \(\tau\) exhibits the most noteworthy behavior; it first increases by nearly an order of magnitude from \(<\)200 fs to \(\sim\)1.4 ps, peaking near \(F_{sat}\) before decreasing to roughly 200 fs.
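As an illustration of the fitting procedure, a minimal Python sketch is given below. It is our own illustration, not the analysis code: it drops the pulse cross-correlation \(g(w_{0},t)\), which is a good approximation only when the pulses are much shorter than \(\tau\), and the initial guesses are arbitrary.

```python
import numpy as np
from scipy.optimize import curve_fit

def u(t, t0, I_inf, alpha, tau):
    # Eq. (1) without the convolution with g(w0, t): a Heaviside step
    # times a saturating exponential with time constant tau.
    dt = np.clip(t - t0, 0.0, None)
    return 1.0 + (t >= t0) * I_inf * (1.0 - alpha * np.exp(-dt / tau))

# Hypothetical usage on a measured, normalized time trace:
# t in ps, shg = I2w(t) / I2w(t < 0)
# popt, _ = curve_fit(u, t, shg, p0=(0.0, -0.1, 0.5, 1.0))
# t0, I_inf, alpha, tau = popt
```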
For fluences \(F\gtrsim F_{sat}\), these measurements suggest that the sample has reverted to the high-temperature structure; the intensity jump observed as a function of temperature (Fig. 1(c)) is completely suppressed by the photo-exciting laser pulse. However, the nature of the "intermediate" states, characterized by \(0<|I_{\infty}|<|I_{\infty}^{sat}|\) is not immediately obvious. There are two natural possible scenarios to attribute to these states. In the first scenario, the lattice parameters would change in a spatially uniform manner throughout the illuminated region and would take on a value between that of the low- and high-temperature equilibrium phases. Alternatively, the effect of the pulse could be spatially nonuniform and the intermediate states could consist of small regions in which the lattice parameters primarily take on either their low- or high-temperature equilibrium values.
For several reasons, including the discontinuous character of the equilibrium transition and the observation of structural inhomogeneity in the hysteresis region of Ti-substituted CRO [46; 47], we expect _a priori_ that the lattice parameters exhibit discontinuous changes. We show below that this scenario accounts for the experimental observations, most notably the non-monotonic fluence-dependence of the timescale \(\tau\) (Fig. 3(b)).
To demonstrate that binary values of the lattice parameters can give rise to this behavior, we perform a bootstrap percolation simulation. The simulation consists of the following recipe [48]. (In the following, for ease of presentation, we refer only to the \(c\)-axis changes, but this is meant to represent the changes to all lattice parameters.) (1) An array of sites is constructed in which each site can
Figure 2: Temperature-dependent SHG response to the 1030 nm pump pulse. **(a)** Rotational anisotropy patterns measured before (t\(<\)0) and after (t\(>\)0) a 0.28 mJ/cm\({}^{2}\) pump pulse both above and below \(T_{MI}\). The pulse induces a large drop in SHG intensity when applied below \(T_{MI}\) and has little effect above \(T_{MI}\). **(b)** Time traces of the normalized SHG intensity \(I^{2\omega}\propto|\chi_{baa}^{ED}|^{2}\) at several temperatures with a 0.86 mJ/cm\({}^{2}\) pulse. Red lines are fits to Eq. (1). Traces at additional temperatures are excluded here for clarity and are presented in the Supplemental Material [42].
Figure 3: **(a)** Time evolution of the normalized SHG intensity, \(I^{2\omega}\propto|\chi_{baa}^{ED}|^{2}\), for varying pump fluences, measured at a nominal temperature of 4 K (laser heating raises the temperature). Fits to Eq. (1) are overlaid in red. **(b)** Best-fit values of \(I_{\infty}\), \(\alpha I_{\infty}\), and \(\tau\) plotted vs. pump fluence. Error bars are 95% confidence intervals from the fitting procedure.
take only a long or short \(c\)-axis lattice parameter, denoted \(L_{c}\) and \(S_{c}\), respectively. Before \(t=0\) the system is initialized to possess only \(S_{c}\) sites, corresponding to the low-temperature insulating state. (2) At \(t=0\), a random subset of sites is switched to the \(L_{c}\) state to mimic the effect of the pump. The number of switched sites is assumed to be proportional to the incident pump fluence. (3) Each remaining \(S_{c}\) site then evolves according to the governing rule that if the number of nearest neighbor \(L_{c}\) sites _exceeds_ a threshold value, \(\sigma_{th}\), the examined site switches from \(S_{c}\) to \(L_{c}\). (4) Lastly, once converted to an \(L_{c}\) site, it is forbidden from reverting to an \(S_{c}\) one. These rules encompass the entire simulation, and it is run in discrete time steps until the system reaches quasi-equilibrium where site-switching no longer occurs. Imposition of the rule (4) is motivated by data showing that the recovery to the state with a globally contracted \(c\)-axis occurs on much longer timescales [42]. Because we disallow \(L_{c}\)-to-\(S_{c}\) conversion, the model describes a manifestly non-equilibrium process characterized by a breakdown of detailed balance and a transition to an absorbing state. With this minimal model taking a single input parameter, \(\sigma_{th}\), we are able to capture the qualitative behavior of all three fitted parameters in our data.
In our implementation, the sample is modeled as a 40\(\times\)40\(\times\)40 cubic array of sites. The fraction of sites that are excited at \(t\)=0 is given by the fluence fraction parameter \(f\), which is defined between 0 and 1, corresponding, respectively, to no pump pulse and to a pulse that excites all of the sites quasi-instantaneously. At each time step, the "strain" at each site, \(\sigma\), is equal to the number of neighboring sites in the \(L_{c}\) state [42]. In the results of the simulation shown in Fig. 4, the threshold parameter, \(\sigma_{th}\), is equal to three. To correlate the simulation to our data, we make the assumption that the change in \(I^{2\omega}\) at each time step is proportional to the fraction of sites that have switched to the \(L_{c}\) state. This correspondence allows us to produce the simulated time traces in Fig. 4(a). Finally, we fit the simulated time traces to Eq. (1) (without the finite pulse width factor \(g(w_{0},t)\)) to extract \(I_{\infty}\), \(\alpha\), and \(\tau\) as a function of the fluence fraction \(f\) (Fig. 4(b)) [42].
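A minimal Python sketch of this simulation loop is shown below. It is an illustration of the recipe, assuming periodic boundary conditions and the six nearest neighbours of a cubic lattice; the variable names are ours.

```python
import numpy as np

def bootstrap_percolation(f, sigma_th=3, size=40, seed=0, max_steps=100):
    """Run the bootstrap-percolation recipe; True = L_c site, False = S_c."""
    rng = np.random.default_rng(seed)
    state = rng.random((size, size, size)) < f   # step (2): photo-excitation
    trace = [state.mean()]                       # L_c fraction, proxy for dSHG
    for _ in range(max_steps):
        # Strain sigma = number of L_c sites among the six nearest neighbours.
        sigma = sum(np.roll(state, s, axis=a) for a in range(3) for s in (1, -1))
        state_new = state | (sigma > sigma_th)   # rule (3); rule (4): no reversion
        trace.append(state_new.mean())
        if np.array_equal(state_new, state):     # quasi-equilibrium reached
            break
        state = state_new
    return np.array(trace)
```

Scanning the fluence fraction \(f\) in such a sketch reproduces the qualitative picture of Fig. 4: the number of time steps needed to reach quasi-equilibrium is largest near the percolation threshold.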
Figure 4 shows that this model reproduces all of the qualitative aspects of the SHG response. We emphasize that these simulations did not require fine-tuning to produce the desired results. Rather, we find that the non-monotonic fluence dependences of \(\tau\) and of \(\alpha\), with peaks near \(F_{sat}\), are robust to changes of \(\sigma_{th}\) and to the inclusion of additional neighbor couplings, both when the simulation is confined to two dimensions and when the penetration depths of the pump and probe pulses are taken into account. See the Supplemental Material [42] for details of the simulations using alternate settings.
We are left with the following physical picture to explain the non-monotonic behavior of the timescale \(\tau\). At low fluences, random isolated sites are photo-excited to the \(L_{c}\) state, but these sites cannot percolate very far; the \(S_{c}\) sites do not possess a sufficient number of \(L_{c}\) nearest neighbors to trigger a switching event. The observed transition time is therefore on the order of the exciting laser pulse in the experiment. As the fluence is increased, some sites that remained in the \(S_{c}\) state after the initial photo-excitation exceed the nearest neighbor strain threshold and turn into an \(L_{c}\) site. The \(L_{c}\) sites then start to percolate; switching events are able to trigger further switching events. Near the percolation threshold, where almost all the sites are switched, the transition time starts to lengthen considerably, mimicking critical slowing down [49]. At higher fluences, the number of sites that switch to the \(L_{c}\) state instantaneously is large and only a short time is needed to switch the remaining \(S_{c}\) sites. Within this framework, the peak in \(\tau\) physically represents the maximum time it takes for the \(L_{c}\) site percolation to occur following the initial photo-excitation.
This physical picture also lends itself to a natural interpretation of the parameters \(I_{\infty}\) and \(\alpha\). In this scheme, \(I_{\infty}\) represents the total number of switched sites, including both the quasi-instantaneous effects from photo-excitation and the subsequent percolation. On the other hand, \(\alpha\) describes the fraction of sites that convert from \(S_{c}\) to \(L_{c}\) solely due to percolation, and its dynamics are
Figure 4: **(a)** Simulated temporal evolution of SHG intensity for various fluence fractions \(f\) with strain threshold \(\sigma_{th}\)=3. \(\Delta\)SHG is determined from the fraction of sites in the \(L_{c}\) state. Time steps indicate iterations of the simulation. (When compared to the experiment, each time step corresponds to a few hundred fs.) **(b)** The parameters \(\tau\), \(I_{\infty}\), and \(\alpha I_{\infty}\) determined from the simulations plotted against the fluence fraction parameter \(f\). **(c)** Examples of two-dimensional slices of the final system state for various fluence fractions \(f\). \(L_{c}(S_{c})\) sites are depicted in yellow(purple). We estimate that each site corresponds to a region of the material with a length scale on the order of 1 nm [42]. **(d)**\(\tau\) plotted versus the average distance between absorbed pump photons \(d_{\gamma}\). The linear fit to the data in the high-fluence regime allows us to extract an approximate percolation speed \(v_{p}\).
associated with the timescale \(\tau\). The dip in \(\alpha I_{\infty}\) with varying fluence thus indicates that the volume of the sample induced to become \(L_{c}\) through site-to-site spreading is largest at fluences near \(F_{sat}\), in accordance with what would be expected near a percolation threshold (Fig. 4(b)). The interpretation of these parameters allows us to understand the PIPT as a percolation phenomenon mediated by local interactions between neighboring sites.
A physical justification that the interactions are mediated by lattice strain comes from an estimate of the percolation speed \(v_{p}\). We first convert the fluence to an average distance between absorbed pump photons \(d_{\gamma}\)[42]. For fluences \(F>F_{sat}\), this quantity characterizes the length over which an average \(L_{c}\) region percolates. (This is not the case for \(F<F_{sat}\), when \(S_{c}\) regions persist between sites excited by photons.) We find that \(\tau\) is linearly proportional to \(d_{\gamma}\) in this high-fluence (low \(d_{\gamma}\)) regime, indicating that the percolation speed is independent of fluence. Performing a linear fit, we extract a characteristic growth speed of \(\sim\)4400 m/s (Fig. 4(d)). Though the speed of sound has not been measured in Ca\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\), this value is in accord with what would be expected if the growth of \(L_{c}\) clusters were given by ballistic strain propagation.
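As a rough sketch of this conversion (in our own notation; the absorbed fluence \(F_{\mathrm{abs}}\) and the optical penetration depth \(\delta\) stand in for values not quoted here),

\[n_{\gamma}\approx\frac{F_{\mathrm{abs}}}{E_{\gamma}\,\delta},\qquad d_{\gamma}\approx n_{\gamma}^{-1/3},\qquad\tau\approx\frac{d_{\gamma}}{v_{p}},\]

so that a linear fit of \(\tau\) versus \(d_{\gamma}\) yields \(1/v_{p}\) as its slope.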
In summary, our study provides substantial evidence that the kinetics of the PIPT in Ca\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) proceeds through the percolation of nanoscale clusters, mediated by lattice strain. Specifically, the transition dynamics are qualitatively captured by a model of bootstrap percolation. The simplicity of this model suggests that it may hold significant promise for understanding the dynamics of other PIPTs. Indeed, time-resolved measurements of the photo-induced transition in VO\({}_{2}\)[50, 51, 18, 21] show two timescales of comparable duration to those observed in this work, which may also be described within a percolation theory. Our work paves the way towards understanding the effects of dynamic inhomogeneity on PIPTs and provides a general model that may be useful for investigating photo-excited materials more broadly.
###### Acknowledgements.
We thank R. Schonmann for helpful discussions regarding the percolation model, M. Rasiah for help with the initial construction of the second harmonic generation setup, and S. Kivelson for helpful suggestions regarding the Landau theory calculation. Research at UCLA was supported by the U.S. Department of Energy (DOE), Office of Science, Office of Basic Energy Sciences under Award No. DE-SC0023017 (experiment and simulations). Work at UC Boulder was supported by the National Science Foundation via Grant No. DMR 2204811 (materials synthesis). A.Z. acknowledges support by the Miller Institute for Basic Research in Science. A.K., R.A. and C.J.A. acknowledge the REU program through STROBE: a National Science Foundation Science and Technology Center under award no. DMR-1548924.
|
2303.09794 | Revisiting Image Reconstruction for Semi-supervised Semantic
Segmentation | Autoencoding, which aims to reconstruct the input images through a bottleneck
latent representation, is one of the classic feature representation learning
strategies. It has been shown effective as an auxiliary task for
semi-supervised learning but has become less popular as more sophisticated
methods have been proposed in recent years. In this paper, we revisit the idea
of using image reconstruction as the auxiliary task and incorporate it with a
modern semi-supervised semantic segmentation framework. Surprisingly, we
discover that such an old idea in semi-supervised learning can produce results
competitive with state-of-the-art semantic segmentation algorithms. By
visualizing the intermediate layer activations of the image reconstruction
module, we show that the feature map channel could correlate well with the
semantic concept, which explains why joint training with the reconstruction
task is helpful for the segmentation task. Motivated by our observation, we
further proposed a modification to the image reconstruction task, aiming to
further disentangle the object clue from the background patterns. From
experiment evaluation on various datasets, we show that using reconstruction as
auxiliary loss can lead to consistent improvements in various datasets and
methods. The proposed method can further lead to significant improvement in
object-centric segmentation tasks. | Yuhao Lin, Haiming Xu, Lingqiao Liu, Jinan Zou, Javen Qinfeng Shi | 2023-03-17T06:31:06Z | http://arxiv.org/abs/2303.09794v1 | # Revisiting Image Reconstruction for Semi-supervised Semantic Segmentation
###### Abstract
Autoencoding, which aims to reconstruct the input images through a bottleneck latent representation, is one of the classic feature representation learning strategies. It has been shown effective as an auxiliary task for semi-supervised learning but has become less popular as more sophisticated methods have been proposed in recent years. In this paper, we revisit the idea of using image reconstruction as the auxiliary task and incorporate it with a modern semi-supervised semantic segmentation framework. Surprisingly, we discover that such an old idea in semi-supervised learning can produce results competitive with state-of-the-art semantic segmentation algorithms. By visualizing the intermediate layer activations of the image reconstruction module, we show that the feature map channel could correlate well with the semantic concept, which explains why joint training with the reconstruction task is helpful for the segmentation task. Motivated by our observation, we further proposed a modification to the image reconstruction task, aiming to further disentangle the object clue from the background patterns. From experiment evaluation on various datasets, we show that using reconstruction as auxiliary loss can lead to consistent improvements in various datasets and methods. The proposed method can further lead to significant improvement in object-centric segmentation tasks.
## 1 Introduction
Autoencoding aims to reconstruct inputs as outputs with the least possible amount of distortion [2] through an information bottleneck created with low-dimension or low-resolution latent variables. Because of its simplicity and effectiveness, it has attracted researchers' attention since it was first introduced in the 1980s [37]. Autoencoders drew renewed attention when deep stacked-autoencoder architectures [19] showed state-of-the-art results as feature extractors. Nowadays, as one of the most classic representation learning strategies, autoencoding has been widely applied in different applications, such as clustering [17] and classification [32]. It has also been discovered [26, 15] that an autoencoder-style reconstruction task could be an excellent auxiliary task for semi-supervised learning: in semi-supervised learning settings, we have access to a large volume of unlabelled data and only a small number of labeled training samples, and the reconstruction objective can be trained without any class labels. Such a scheme has become less popular as more sophisticated methods [8, 20, 30] have been proposed recently.
Semi-supervised semantic segmentation is a challenging yet important topic in computer vision, with many real-world applications. It requires utilizing both labeled and unlabeled data to improve segmentation results. As an important application of semi-supervised learning, many semi-supervised learning methods [38, 22] have been applied and extended to solve the semi-supervised segmentation problem [46, 41, 20]. However, separating the fuzzy margin between foreground and background is still challenging.
This work explores the use of an autoencoder-style reconstruction task to improve semi-supervised segmentation. Perhaps surprisingly, we find that if we incorporate the image reconstruction task with a commonly used semi-supervised segmentation baseline method [27, 46], the final performance can be improved, especially when the number of training images is small (see Figure 2). This motivates us to understand how reconstruction helps the segmentation task. By visualizing the intermediate activation maps of the reconstruction branch (see Figure 1), we find that the latent activations of a reconstruction branch have already uncovered the semantics of objects if the reconstruction branch is jointly trained with a semi-supervised segmentation loss. This explains the benefit of the reconstruction task for semi-supervised segmentation, as the two tasks share some similarities. From further observation of the latent activations, we noticed that the object and part of its background could often co-occur in one feature map, suggesting potential entanglement of the object and background clues. Thus, we propose a strategy to further disentangle those two clues and better align the reconstruction task with the segmentation task. Specifically, we propose to reconstruct foreground-region-only images for the labeled images and to perform a similar reconstruction task for unlabeled images, applying the loss only to a subset of pixels as guided by the pseudo-labels generated from the segmentation head. Through our experimental evaluation on various datasets, we show that joint reconstruction can be used as a strong semi-supervised segmentation baseline that achieves consistent improvement under different scenarios.
The main contributions of this paper are highlighted as follows:
* We revisit image construction as an auxiliary task for semi-supervised segmentation and show that it can be very effective when working together with existing semi-supervised segmentation methods.
* We visualize the intermediate activations of the reconstruction decoder and shed light on why it is beneficial for the segmentation task.
* We further propose a method that modifies the reconstruction task, making it more suited to object-centric segmentation problems.
## 2 Related Work
### Autoencoders for Semi-Supervised Learning
As a classic unsupervised representation learning approach, autoencoders (AE) are widely used for unsupervised learning and as a regularization scheme in semi-supervised learning [2, 23]. Because of their simplicity, they have attracted researchers' attention since they were first introduced in the 1980s [37]. In semi-supervised learning settings, a considerable volume of unlabelled data is available but only a small number of labeled training samples. In this setting, one study [35] shows that skip connections and layer-wise unsupervised targets effectively turn autoencoders into hierarchical latent variable models, which are well suited for semi-supervised learning. Because of their clustering effect on the latent representation, autoencoders have been shown effective as an auxiliary task for many semi-supervised learning problems such as regression [17] and classification [32]. Building on the success of AE, denoising autoencoders (DAE) [40] and variational autoencoders (VAE) [25] were proposed for better representation learning. However, autoencoding has become less popular as more sophisticated methods have been proposed in recent years.
### Supervised Semantic Segmentation
As the fundamental task in computer vision, semantic segmentation has witnessed an explosion of progress in architecture design during recent decades. Starting from
Figure 1: Some visualization results from Pascal VOC 2012 **validation set** (the reconstruction branch is only trained on the training set). (a) input images, (b) feature maps from the reconstruction-only model, (c) feature maps from the reconstruction-segmentation model, and (d) the segmentation results of the same epoch as c. We observe that the activation areas in some feature maps are focused on the objects (i.e., birds in the first row and goats in the fourth row).
the FCN [31], which converts the end-to-end architecture into fully convolutional layers. After that, several extensions were explored: 1) encoder-decoder structures [1, 6, 36], 2) multi-scale aspects of the image [5, 29], 3) pyramidal feature maps [47], 4) dilated convolutions [3, 6, 44]. Recently, attention mechanisms [14, 5, 21] have become popular among researchers because of their strength in modeling global context. However, these fully supervised segmentation networks are data-hungry, and annotating their training data is laborious and time-consuming.
### Semi-supervised Semantic Segmentation
Semi-supervised semantic segmentation aims to fully utilize the tremendous amount of unlabeled data together with a small amount of labeled data. Although attention mechanisms have attracted much interest in the segmentation field, SOTA semi-supervised semantic segmentation still relies on the CNN models DeepLab V3+ [4] and PSPNet [47]. One of the most frequently used approaches in existing studies is consistency learning with various perturbations [46, 33, 24, 30]. For example, CutMix-Seg [46] validates the effectiveness of image-level perturbation with CutMix data augmentation. Cross-consistency training (CCT) [33] introduces a feature-level perturbation and constrains the outputs of different decoders. Similarly, guided collaborative training (GCT) [24] proposed a model-level perturbation with different network initializations and enforced consistency between models. Most recently, PS-MT [30] introduced a new adversarial perturbation for double teachers for better prediction accuracy.
Another frequently used technique is self-training [45, 8, 43], which generates pseudo-labels for unlabeled data and trains the model with both labeled and pseudo-labeled data. Furthermore, considering class imbalance and unreliable pseudo-labels, recent studies [20, 16, 41] were proposed and achieved state-of-the-art performance.
## 3 Image Reconstruction
Autoencoder [19] is one of the oldest unsupervised/self-supervised learning approaches. It is usually implemented by an encoder-decoder pair. The encoder encodes the input image information into latent variables that are often low-dimensional or low-resolution. Then the decoder decodes the latent variables into an image of the same size as the input. A loss function is used to ensure the reconstructed output image is as close as possible to the input image, that is,
\[\min_{\theta_{e},\theta_{d}}\ \mathbb{E}\Big(\|x-f_{\theta_{d}}(f_{\theta_{e}}(x))\|_{2}^{2}\Big), \tag{1}\]
where \(\mathbb{E}\) denotes the expectation over training images, the squared \(\ell_{2}\) norm measures the distortion, and \(f_{\theta_{d}}\), \(f_{\theta_{e}}\) are the decoder and encoder, respectively.
Modern image segmentation neural networks, such as the DeepLab family [34, 13, 3], can also be seen as an encoder-decoder structure, although the decoder is usually lightweight compared to the encoder. The encoder encodes the images into an \(H^{\prime}\times W^{\prime}\times d\)-dimensional feature map, and the decoder, e.g., an ASPP module, decodes the feature maps into the predicted segmentation mask. Therefore, the image reconstruction task can be naturally added to existing image segmentation neural networks by sharing the same encoder but using a different decoder (branch) to produce reconstructed images.
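A minimal PyTorch-style sketch of this shared-encoder arrangement is given below; the module names are illustrative placeholders rather than a specific implementation.

```python
import torch.nn as nn

class SegRecNet(nn.Module):
    """Shared encoder feeding a segmentation decoder and an auxiliary
    image-reconstruction decoder."""
    def __init__(self, encoder, seg_decoder, rec_decoder):
        super().__init__()
        self.encoder = encoder          # e.g., a ResNet-101 backbone
        self.seg_decoder = seg_decoder  # e.g., an ASPP head -> C class maps
        self.rec_decoder = rec_decoder  # same head, last layer -> 3 channels

    def forward(self, x):
        z = self.encoder(x)             # H' x W' x d feature map
        return self.seg_decoder(z), self.rec_decoder(z)
```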
## 4 Image Reconstruction for Semi-supervised Semantic Segmentation
### Preliminary
The semi-supervised semantic segmentation task is defined as follows: given labeled images \(X_{l}\in\mathbb{R}^{H\times W\times 3}\), the corresponding pixel-wise semantic labels \(y\in\{1,\dots,C\}^{H\times W}\), and unlabelled images \(X_{u}\in\mathbb{R}^{H\times W\times 3}\) (\(W\), \(H\), and \(C\) denote the width, height, and number of classes, respectively), the goal is to learn a model \(F\) from both the labeled data \(D^{l}=\{X_{l},y\}\) and the unlabelled data \(D^{u}=\{X_{u}\}\). In most work [46, 33, 24, 30, 41], the overall optimization target is formalized as
\[\mathcal{L}=\mathcal{L}_{s}+\lambda\mathcal{L}_{ul},\]
where \(\mathcal{L}_{s}\) and \(\mathcal{L}_{ul}\) are loss functions for labeled and unlabeled images respectively.
### Semi-supervised Segmentation Baseline
Most of the state-of-the-art semi-supervised semantic segmentation methods [45, 8, 43] are built around a simple
Figure 2: (a) illustrates a simple diagram of our joint training process. The model is based on the traditional teacher-student structure, and we add another decoder with the same architecture as the segmentation decoder to reconstruct the original image. When training, the optimization goal is to minimize the total loss from both segmentation CE loss and reconstruction MSE loss. In (b), we compare the performance of baseline with and without original image reconstruction on Pascal VOC 2012[11] under the standard partition protocols. It is worth noting that even by simply reconstructing the input image, the baseline outperforms current SOTA results in all partitions.
baseline, which we call robust pseudo-labeling. Specifically, the model is first trained on the small number of labeled images and then produces a posterior probability estimate for each pixel of the unlabeled images. Pseudo-labels are then generated wherever the highest posterior probability exceeds a predefined threshold. In robust pseudo-labeling, certain data augmentations, e.g., Cutout [10] and CutMix [46], are applied to the input image, and the pseudo-labels are used to update the model with the augmented input images.
Our semi-supervised segmentation baseline is based on a particular version of the robust pseudo-labeling approach. Specifically, our baseline follows the typical student-teacher framework in semi-supervised semantic segmentation [33, 24, 30, 41], with the teacher network parameters being the exponential moving average [39] of the parameters of the student network.
Each network consists of a convolutional feature encoder \(h\) and a segmentation decoder \(g\). We denote the student and teacher versions of the encoder and decoder as \(h_{s}\), \(g_{s}\) and \(h_{t}\), \(g_{t}\), respectively. At each training step, we equally sample \(b\) labeled images \(\mathcal{B}_{l}\) and \(b\) unlabeled images \(\mathcal{B}_{ul}\). For \(\mathcal{B}_{l}\) and \(\mathcal{B}_{ul}\), we apply strong augmentations [8, 24] (e.g., color jitter, random grayscale, blur, CutMix [46], and zoom in/out [28, 5]) to the inputs of the student model. The teacher model generates the posterior distribution \(P(y_{i,j}=c|I_{i,j}^{n})\), indicating the likelihood of each pixel \((i,j)\) being assigned to class \(c\). A pseudo-label for pixel \((i,j)\) is generated if \(\max_{c}P(y_{i,j}=c|I_{i,j}^{n})>\tau\). The pseudo-labels are then used to train the student network with augmented input images.
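A sketch of this baseline in PyTorch follows. It is our own illustration: the confidence threshold \(\tau=0.95\) and the EMA momentum are placeholder values, not the paper's settings.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, m=0.99):
    # Teacher weights track the exponential moving average of the student's.
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(m).add_(ps, alpha=1.0 - m)

def pseudo_label_loss(student_logits, teacher_logits, tau=0.95):
    """Cross-entropy on pixels whose teacher confidence exceeds tau."""
    probs = teacher_logits.softmax(dim=1)   # B x C x H x W posteriors
    conf, pseudo = probs.max(dim=1)         # B x H x W
    pseudo[conf <= tau] = 255               # low-confidence pixels are ignored
    return F.cross_entropy(student_logits, pseudo, ignore_index=255)
```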
### Reconstruction as an Auxiliary Task
Based on the framework described in Section 4.2, we further incorporate image reconstruction as an auxiliary task, as shown in Figure 2(a). Different from the baseline, our framework has two decoders (\(g_{s}\), \(g_{rec}\)) in the student network, and the two decoders share the encoder (\(h_{s}\)).
Therefore, the outputs of the model are segmentation prediction: \(P_{seg}=g_{s}\circ h_{s}(x)\in\mathbb{R}^{H\times W\times C}\) and the image reconstruction pixel value prediction \(I_{rec}=g_{rec}\circ h_{s}(x)\in\mathbb{R}^{H\times W\times 3}\). Same as [7], we adopt the Mean Squared Error (MSE) loss for image reconstruction, and the image reconstruction module does not need additional annotations. The overall loss is defined as
\[\mathcal{L}=\mathcal{L}_{s}+\lambda_{1}\mathcal{L}_{ul}+\lambda_{2}\mathcal{L }_{rec}.\]
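Continuing the sketch above (reusing `F` from the previous block), the three terms might be combined as follows; the exact weighting of the reconstruction term on labeled versus unlabeled batches is our simplification, with \(\lambda_{1}=0.5\) and \(\lambda_{2}=1\) as in Section 5.1.

```python
def total_loss(seg_logits_l, y_l, l_unsup, i_rec, x, lam1=0.5, lam2=1.0):
    # L = L_s + lam1 * L_ul + lam2 * L_rec
    l_s = F.cross_entropy(seg_logits_l, y_l, ignore_index=255)
    l_rec = F.mse_loss(i_rec, x)        # reconstruction needs no labels
    return l_s + lam1 * l_unsup + lam2 * l_rec
```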
Surprisingly, this embarrassingly simple baseline achieves quite good performance. Figure 2 shows the performance before and after adding the reconstruction task. As seen, the benefit of using a reconstruction task is especially
Figure 3: Illustration of our approach. The left side is the basic MT structure, which employs two segmentation networks, the student \(h_{s},g_{s}\) and the teacher \(h_{t},g_{t}\). The right side \(g_{rec}\) is our Foreground-Only reconstruction module, which takes the output of the student's encoder \(h_{s}\) and reconstructs an image with a monotonous background. Reconstruction loss masks are applied for loss calculation. Specifically, for labeled images, we set the background to 0 according to the ground truth, and the one-hot mask \(M_{rec}^{sup}\) in shape [\(H\times W\times 1\)] is all ones. For unlabeled images, the mask \(M_{rec}^{unsup}\) is derived from the teacher's predictions, ignoring uncertain areas. The teacher is then updated as the exponential moving average (EMA) of the student.
pronounced when the number of labeled images is small.
#### 4.3.1 Visualizing the "Latent Images" from the Reconstruction Decoder
To understand the improvement, we perform visualization analysis on the reconstruction decoder. In particular, we consider the latent activations (feature maps) before the last layer of the reconstruction decoder. Recall that this last layer is a \(1\times 1\) convolutional layer, which maps a feature map \(\mathbf{Z}\in\mathbb{R}^{H\times W\times d}\) into the reconstructed image \(I_{rec}\in\mathbb{R}^{H\times W\times 3}\). There are three convolutional filters \(\mathbf{w}_{1},\mathbf{w}_{2},\mathbf{w}_{3}\in\mathbb{R}^{d}\), one for each color channel. Now consider one output channel of \(I_{rec}\), denoted \(I_{rec}^{c}\); it can be written as
\[I_{rec}^{c}=\sum_{k=1}^{d}\mathbf{Z}^{k}w_{c}^{k}, \tag{2}\]
where \(w_{c}^{k}\) denotes the \(k\)-th dimension of \(\mathbf{w}_{c}\) and \(\mathbf{Z}^{k}\in\mathbb{R}^{H\times W}\) denotes the \(k\)-th slice of \(\mathbf{Z}\). Intuitively, the above equation suggests that the reconstructed image is the weighted average of \(d\) slices \(\mathbf{Z}^{k}\), where each \(\mathbf{Z}^{k}\) is equivalent to an image, and we call it "latent image".
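In code, this decomposition is immediate. A small sketch (our illustration) that extracts the \(d\) latent images of one colour channel reads:

```python
import torch

def latent_images(z, w, c):
    """Split reconstruction channel c into its d "latent images" (Eq. 2).

    z : d x H x W activations before the final 1x1 convolution
    w : 3 x d weight matrix of that convolution (bias omitted)
    """
    slices = z * w[c].view(-1, 1, 1)    # k-th slice is w_c^k * Z^k
    return slices                       # slices.sum(dim=0) recovers I_rec^c
```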
An interesting discovery is that those "latent images" could correspond to the semantic concepts in images if the encoder is jointly trained with a segmentation model. Figure 1 shows some example "latent images" from an autoencoder trained with reconstruction loss only and from one trained jointly with a semi-supervised segmentation loss. Also, to show the relative progress of the segmentation decoder and the reconstruction decoder, we choose an epoch at which the training process has not yet converged. From Figure 1, we can make the following observations:
* If the autoencoder is trained without the semi-supervised segmentation loss, the "Latent Images" have a weak correlation to the semantic concepts.
* When the autoencoder is trained with the semi-supervised segmentation loss, some "Latent Images" can correspond to some semantic concepts. For example, in the second and fourth row of Figure 1, the activation areas in some feature maps are focused on the train and the sheep. It seems that the semantic segmentation loss provides an inductive bias to make the image reconstructed through semantically meaningful "latent images".
* Surprisingly, some "Latent Images" recover the object contour better than the segmentation decoder at the same training epoch. It seems that the "Latent Images" are leading the segmentation decoder, which might explain why the reconstruction task could help segmentation.
* Finally, we find the "Latent Images" are far from perfect. Some background pixels, especially those that are the context of the object, tend to co-occur with the object in the latent image.
### Improving the Reconstruction Task by Object-Background Disentanglement
The last observation discussed in Section 4.3.1 suggests that a potential object-background entanglement may exist in the current reconstruction task. Thus, we propose the following strategy to disentangle the object from its context background. More specifically, we let the reconstruction decoder reconstruct foreground-only images on the labeled set. In other words, the output from the reconstruction decoder only contains pixels belonging to the object parts while the background pixels are set to zero, that is,
\[y_{rec\ (i,j)}=\begin{cases}x_{(i,j)}^{l},&\text{if }y_{(i,j)}\in\text{ foreground}\\ 0,&\text{otherwise}\end{cases} \tag{3}\]
Examples of foreground-only images are shown in Figure 3. For unlabeled images, we do not have access to the class labels, thus we cannot directly generate foreground-only images, and we resort to pseudo-labels instead. For an unlabeled image, we consider the following three scenarios for a pixel \((i,j)\): (1) the current pixel can generate a pseudo-label, and the pseudo-label corresponds to the foreground. In other words, \(\max_{c^{\prime}\in\mathcal{O}}P(y_{i,j}=c^{\prime}|x_{i,j})>\tau\), where \(\mathcal{O}\) is the set of classes that belong to objects. (2) The current pixel can generate a pseudo-label, and the pseudo-label corresponds to the background. (3) No pseudo-label can be generated from the current pixel, and the segmentation decoder is uncertain about the class of the pixel. We ignore the loss on pixels from (3) and only perform foreground-only reconstruction for pixels from (1) and (2).
We call this modified reconstruction method Foreground-Only reconstruction (FOrec). The scheme is illustrated in Figure 3.
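A sketch of the FOrec target and loss mask in PyTorch is given below. It is an illustration under our own naming: `fg_classes` stands for the set \(\mathcal{O}\) of object classes, and \(\tau\) is the confidence threshold.

```python
import torch

def forec_target_and_mask(x, y, fg_classes, conf=None, tau=0.95):
    """Build the foreground-only target of Eq. (3) and its loss mask.

    x : B x 3 x H x W images;  y : B x H x W ground-truth or pseudo-labels
    conf : B x H x W teacher confidence, given only for unlabeled images
    """
    fg = torch.zeros_like(y, dtype=torch.bool)
    for c in fg_classes:
        fg |= y == c
    target = x * fg.unsqueeze(1).float()              # background pixels -> 0
    if conf is None:                                  # labeled: all pixels count
        mask = torch.ones_like(y, dtype=torch.bool)
    else:                                             # unlabeled: drop case (3)
        mask = conf > tau
    return target, mask

def masked_mse(i_rec, target, mask):
    m = mask.unsqueeze(1).float()
    return ((i_rec - target) ** 2 * m).sum() / (3.0 * m.sum().clamp(min=1.0))
```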
**Discussion:** The FOrec method is mainly for object-centric semantic segmentation, where the aim is to segment different objects from the background. For generic scene segmentation, i.e., segmenting both things and stuff, one can choose a category that often co-occurs with other categories as the background. Applying FOrec in that case could potentially alleviate the entanglement of those semantic concepts.
## 5 Experiments
In this section, we compare our approach with several semi-supervised semantic segmentation methods.
### Experimental Setup
**Datasets:** Our experiments are mainly conducted on the Pascal VOC 2012 [11], and Cityscapes [9], which
are widely used in semi-supervised semantic segmentation tasks [46, 33, 24, 30]. The **classic** Pascal VOC 2012 consists of 1,464/1,449/1,556 images covering twenty classes for training, validation, and testing, respectively. Due to the demand for data in the semi-supervised semantic segmentation setting, some researchers [33, 24, 30] adopt the additional labels from [18], which means the training data is augmented up to 10,582 images. The augmented training set includes the 1,464 labeled samples of the **classic** setting, while the remaining ones are of low quality and contain noise. The augmented data selection setting is named **blender**. Both settings are evaluated for the measurement of performance. Cityscapes [9] is an urban driving scene dataset, consisting of 2,975, 500, and 1,525 images covering 19 classes for training, validation, and testing, respectively.
In this paper, we follow the same data splitting protocol from U\({}^{2}\)PL[41] and experiment with four kinds of label partition: 1/16, 1/8, 1/4, and 1/2. Our code will be released after the anonymity period.
**Evaluation metrics:** Following the previous works [41, 30], we adopt the mean Intersection-over-Union (mIoU) as the evaluation metric.
**Implementation details:** Following prior work [30, 42, 8], the network structure of our method is based on DeepLab V3+ [4], with a pretrained ResNet-101 as the backbone. The segmentation head and the auxiliary task head are the default pixel-level linear classifiers.
For all experiments on both datasets, we employ stochastic gradient descent (SGD) as the optimizer and polynomial learning rate decay: \((1-\frac{iter}{total\_iter})^{0.8}\) for model optimization. For the reconstruction loss, we set \(\lambda_{1}\) as 0.5 and \(\lambda_{2}\) as 1 for the unsupervised and supervised parts, respectively.
On Pascal VOC 2012 [11], the images are cropped into \(512\times 512\) pixels and trained with initial learning rate \(1.0\times 10^{-3}\), weight decay \(1.0\times 10^{-4}\),and 80 training epochs. On Cityscapes [9], we crop the images into \(712\times 712\) pixels and trained our model with an initial learning rate \(1.0\times 10^{-2}\), weight decay \(5.0\times 10^{-4}\) and 200 training epochs. Our experiments were run with batch size 16 on 8 NVIDIA Tesla V100 GPUs.
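For concreteness, the learning-rate schedule amounts to the following one-liner (a sketch of ours; the exact step counting is an assumption):

```python
def poly_lr(base_lr, step, total_steps, power=0.8):
    # Polynomial decay: lr = base_lr * (1 - step / total_steps) ** 0.8
    return base_lr * (1.0 - step / float(total_steps)) ** power
```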
For the reconstruction decoder, we simply adapt the same structure as the segmentation decoder. The only difference between them is the channel number of the last layer.
### Comparison with State-of-the-Arts
#### 5.2.1 Pascal VOC 2012
Table 1 and Table 2 illustrate the results on the Pascal VOC 2012 validation set; Table 1 is under the **classic** setting and Table 2 under the **blender** setting.
For the **classic** setting, Table 1 illustrates that our approach successfully exploits unlabelled data, with a dramatic performance boost over fully supervised training. Specifically, in the smaller partitions of 92 and 183 labels, our approach surpasses the fully supervised baseline by 25.2% and 19.8%, respectively. Meanwhile, our approach performs consistently better than all current SOTA methods for all partition protocols (using ResNet-101 as the backbone). Taking U\({}^{2}\)PL [41] as an example, our approach improves the performance by 1.6% to 5.6% in all cases. Furthermore, in some partitions, our approach beats the current SOTA methods with fewer labeled samples. For example, our approach trained with 92 labeled images outperforms all current SOTA methods trained with 183 labeled images. This demonstrates that when the number of labeled samples is extremely small (92, 183), our approach achieves a significant improvement in performance.
For the **blender** setting, Table 2 indicates that our approach outperforms the fully supervised baseline by a large margin of 3.86% to 10.97%. Compared with the current SOTA methods, our approach beats all other methods for all partition protocols. Even compared with one of the currently best-performing approaches, U\({}^{2}\)PL [41], our approach improves the performance by 1.63%, 1.51%, and 1.62% under the 662, 1323, and 2646 partitions, respectively. This impressive boost shows that our method is not only useful for accurately annotated data but also compatible with noisy annotations.
#### 5.2.2 Cityscapes
Table 3 demonstrates the results of our method against several current state-of-the-art algorithms on the Cityscapes validation set. Compared to the fully supervised results, our
| Method | 92 | 183 | 366 | 732 | 1464 |
| :-- | :-: | :-: | :-: | :-: | :-: |
| Supervised | 45.8 | 54.9 | 65.9 | 71.7 | 72.5 |
| MT [39] | 51.7 | 58.9 | 63.9 | 69.5 | 71.0 |
| PseudoSeg [48] | 57.6 | 65.5 | 69.1 | 72.4 | 73.2 |
| CPS [8] | 64.1 | 67.4 | 71.7 | 75.9 | - |
| PS-MT [30] | 65.8 | 69.6 | 76.6 | 78.4 | 80.0 |
| ST++ [43] | 65.2 | 71.0 | 74.6 | 77.3 | 79.1 |
| U\({}^{2}\)PL [41] | 68.0 | 69.1 | 73.6 | 76.2 | 79.5 |
| FOrec (Ours) | **71.0** | **74.7** | **77.5** | **78.7** | **81.1** |

Table 1: Comparison with state-of-the-art algorithms on the PASCAL VOC 2012 [11] val set with the mIoU (%) metric. Methods are trained on the classic setting, i.e., the labeled images are selected from the original VOC train set, which consists of 1,464 samples in total. Best results are in bold.
method successfully exploits unlabelled data, with an obvious performance boost for all partitions; e.g., under the 1/16 label partition, our approach surpasses the fully supervised result by 6.68%. Compared to the state-of-the-art algorithm U\({}^{2}\)PL [41], ours performs better in all cases, by 2.12%, 1.39%, and 1.18% under the 1/16, 1/8, and 1/4 label partitions, respectively.
Note that the performance of our method on the 1/16 label partition is slightly lower than that of AEL [20]. The reason is that the class imbalance problem of this partition is more serious, and AEL specifically aims to deal with class imbalance. However, our method focuses on separating the foreground objects from the background patterns in semi-supervised semantic segmentation tasks; we do not explicitly address label imbalance. Technically, there is a high probability that merging both ideas would further optimize overall performance.
### Analysis
In the following part, we perform a series of experiments to analyze the proposed method. Specifically, we consider the following analyses: (1) the comparison between FOrec and standard reconstruction on both the object-centric segmentation task, i.e., PASCAL VOC, and the scene-understanding segmentation task, i.e., Cityscapes; (2) the latent images created from FOrec; (3) the applicability of the proposed method to other semi-supervised segmentation approaches.
**The comparison of the standard reconstruction and FOrec** In Table 4, we compare the standard reconstruction task and the proposed foreground-only reconstruction scheme. As seen, FOrec achieves a significant improvement over the standard reconstruction task on the PASCAL VOC 2012 classic setting; the improvement over standard reconstruction is around 2%. This supports our claim that using foreground-only images as the reconstruction target could be beneficial for object-centric segmentation.
**The comparison of the foreground-background segmentation and FOrec** Table 5 shows that naive foreground-background segmentation performs worse than our approach; the improvement from foreground-only reconstruction ranges from 0.5% to 3.3%. We think that unifying all kinds of foregrounds into one class is not conducive to the semantic segmentation of different objects.
**The impact of FOrec on the "latent images"** FOrec is proposed to address the issue that background pixels tend to co-occur with foreground pixels. To verify this design, we visualize the latent images obtained from FOrec and from standard reconstruction. We conduct an experiment using two models, one trained with FOrec and another trained with standard reconstruction (both jointly trained with the semi-supervised segmentation loss and sharing the same architecture). We then use the trained models to generate latent images for input images from the validation
\begin{table}
\begin{tabular}{l|c|c c c} \hline \hline & Standard & \multirow{2}{*}{FOrec} & \multirow{2}{*}{1/16 (92)} & \multirow{2}{*}{1/8 (183)} \\ & Reconstruction & & & \\ \hline \hline
1 & ✗ & ✗ & 67.82 & 70.78 \\
2 & ✓ & ✗ & 68.99 & 72.35 \\
3 & ✗ & ✓ & **70.99** & **74.67** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation study on the effectiveness of different components of our approach. ✓ and ✗ indicate whether the variant in each row contains the corresponding submodule.
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline Method & 662 & 1323 & 2646 & 5290 \\ \hline Supervised & 67.87 & 71.55 & 75.80 & 77.13 \\ \hline MT [39] & 70.51 & 71.53 & 73.02 & 76.58 \\ CPS [8]\({}_{\text{CVPR'21}}\) & 74.48 & 76.44 & 77.68 & 78.64 \\ AEL [20]\({}_{\text{NeurIPS'21}}\) & 77.20 & 77.57 & 78.06 & 80.29 \\ ST++ [43]\({}_{\text{CVPR'22}}\) & 74.70 & 77.90 & 77.90 & - \\ PS-MT [30]\({}_{\text{CVPR'22}}\) & 75.50 & 78.20 & 78.72 & 79.76 \\ UCC [12]\({}_{\text{CVPR'22}}\) & 76.49 & 77.06 & 79.09 & 79.54 \\ U\({}^{2}\)PL [41]\({}_{\text{CVPR'22}}\) & 77.21 & 79.01 & 79.30 & 80.50 \\ \hline FOrec (Ours) & **78.84** & **80.52** & **80.92** & **80.99** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparing results of state-of-the-art algorithms on PASCAL VOC 2012 [11] val set with mIoU (%) metric. Methods are trained on the blender setting, i.e., the labeled images are selected from the augmented VOC train set, which consists of 10,582 samples in total. Best results are in bold.
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline Method & 1/16 & 1/8 & 1/4 & 1/2 \\ \hline Supervised & 65.74 & 72.53 & 74.43 & 77.83 \\ \hline MT [39] & 69.03 & 72.06 & 74.20 & 78.15 \\ CCT [33]\({}_{\text{CVPR'20}}\) & 69.32 & 74.12 & 75.99 & 78.10 \\ CPS [8]\({}_{\text{CVPR'21}}\) & 69.78 & 74.31 & 74.58 & 76.81 \\ AEL [20]\({}_{\text{NeurIPS'21}}\) & **74.45** & 75.55 & 77.48 & 79.01 \\ U\({}^{2}\)PL [41]\({}_{\text{CVPR'22}}\) & 70.30 & 74.37 & 76.47 & 79.05 \\ \hline FOrec (Ours) & 72.42 & **75.76** & **77.65** & **79.18** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparing results of state-of-the-art algorithms on Cityscapes [9] val set with mIoU (%) \(\uparrow\) metric. Methods are trained on identical label partitions, and the labeled images are selected from the Cityscapes train set, which consists of 2,975 samples in total. Best results are in bold.
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline Method & 92 & 183 & 366 & 732 \\ \hline fore/back-ground seg. & 67.7 & 71.9 & 76.1 & 78.1 \\
**FOrec (ours)** & **71.0** & **74.7** & **77.5** & **78.7** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Foreground-background (saliency estimation) segmentation on PASCAL VOC 2012 (classic setting).
set. _Note that although FOrec is trained to reconstruct foreground-only images, these images are only used as the target at training time. Once trained, the model can be used without knowing the foreground mask._ The results are shown in Figure 4. As seen, with FOrec the latent images tend to capture more object regions, and more of them include only the objects. For example, the goats and the cat in (c) are highlighted much more strongly than in the latent images of standard reconstruction. These properties carry over to the segmentation results. For instance, in the first example in Figure 4, the gaps between the goats' legs are captured, which does not happen with standard reconstruction. Moreover, in the second example, the sofa next to the cat is faded in the FOrec feature maps and disappears in the segmentation results. These results clearly validate the effectiveness of the proposed method.
**The applicability of the proposed method on other semi-supervised segmentation methods** Finally, we apply both the reconstruction task and FOrec to another semi-supervised learning approach, PS-MT [30]. The results are shown in Table 6. As seen, the reconstruction is still effective. Compared with the PS-MT baseline, applying FOrec leads to a significant increase, especially when the number of training examples is small. The advantage of FOrec over standard reconstruction is also evident. Again, FOrec tends to produce superior performance in the low-supervision regime, e.g., when only 92 labeled images are used.
## 6 Conclusion
In this paper, we revisit the idea of using image reconstruction as an auxiliary task for semi-supervised semantic segmentation. We find that this old idea can produce results competitive with state-of-the-art semantic segmentation algorithms. By visualizing the intermediate layer activations of the image reconstruction module, we show that feature map channels can correlate well with semantic concepts, which explains why joint training with the reconstruction task helps the segmentation task. Motivated by this observation, we further propose a modification of the image reconstruction task that aims to disentangle object cues from background patterns. Experimental evaluation shows that using reconstruction as an auxiliary loss leads to consistent improvements across various datasets and methods, and the proposed method further leads to significant improvement on object-centric segmentation tasks. For datasets without a background class, it provides only slight improvements; further investigation is needed to improve scene-understanding ability.
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline Method & 92 & 183 & 366 & 732 \\ \hline PS-MT[30] & 65.8 & 69.6 & 76.6 & 78.4 \\ PS-MT+rec & 68.4 & 71.0 & 77.2 & 78.9 \\ PS-MT+FOrec & **70.3** & **71.9** & **77.9** & **79.8** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Comparing results of another SOTA codebase (PS-MT [30]) on the classic setting of the PASCAL VOC 2012 [11] val set with mIoU (%) metric. Experiments are conducted in the same settings as in the paper.
Figure 4: Visualizations of the latent images obtained from FOrec and standard reconstruction; from left to right: (a) input images, (b) latent images obtained from standard reconstruction, (c) latent images obtained from FOrec, (d) segmentation results from standard reconstruction, (e) segmentation results from FOrec, and (f) segmentation ground truth. Note that (b) and (c) are slices from the same position for a fair comparison. |
2309.02842 | Random Postprocessing for Combinatorial Bayesian Optimization | Model-based sequential approaches to discrete "black-box" optimization,
including Bayesian optimization techniques, often access the same points
multiple times for a given objective function in interest, resulting in many
steps to find the global optimum. Here, we numerically study the effect of a
postprocessing method on Bayesian optimization that strictly prohibits
duplicated samples in the dataset. We find the postprocessing method
significantly reduces the number of sequential steps to find the global
optimum, especially when the acquisition function is of maximum a posterior
estimation. Our results provide a simple but general strategy to solve the slow
convergence of Bayesian optimization for high-dimensional problems. | Keisuke Morita, Yoshihiko Nishikawa, Masayuki Ohzeki | 2023-09-06T08:59:34Z | http://arxiv.org/abs/2309.02842v2 | # Random postprocessing for combinatorial Bayesian optimization
###### Abstract
Model-based sequential approaches to discrete "black-box" optimization, including Bayesian optimization techniques, often access the same points multiple times for a given objective function in interest, resulting in many steps to find the global optimum. Here, we numerically study the effect of a postprocessing method on Bayesian optimization that strictly prohibits duplicated samples in the dataset. We find the postprocessing method significantly reduces the number of sequential steps to find the global optimum, especially when the acquisition function is of _maximum a posteriori_ estimation. Our results provide a simple but general strategy to solve the slow convergence of Bayesian optimization for high-dimensional problems.
Optimizing an expensive objective function \(f:\mathcal{X}\rightarrow\mathbb{R}\) defined on a domain \(\mathcal{X}\) is a common task in various practical situations, such as recommendation systems [1, 2], automated material discovery [3, 4, 5, 6], the creation of electronic circuits [7], parameter optimization of a quantum circuit [8], and finding effective Hamiltonians [9]. Due to the significant expense of evaluating \(f(x)\), it is crucial to identify the globally optimal solution \(x_{\text{opt}}=\arg\min_{x\in\mathcal{X}}f(x)\) with as few function evaluations as possible. Bayesian optimization [10, 11, 12, 13, 14] deals with these difficulties by introducing a statistical model called a _surrogate model_ and sequentially estimating \(f(x)\). In each iteration step \(t\), we update the surrogate model \(\hat{f}\) to fit the already known input-output dataset \(\mathcal{D}=\{x^{(i)},f(x^{(i)})\}_{i=1}^{t}\). Based on the properties of the updated surrogate model \(\hat{f}\), we build an _acquisition function_ \(\alpha:\mathcal{X}\rightarrow\mathbb{R}\) and optimize it to find the most _promising_ input point \(x^{(t+1)}\), namely, \(x^{(t+1)}=\arg\min_{x\in\mathcal{X}}\alpha(x)\). We then compute the objective function at the chosen input point, \(f(x^{(t+1)})\), and append the pair to the dataset: \(\mathcal{D}\leftarrow\mathcal{D}\cup(x^{(t+1)},f(x^{(t+1)}))\). This process is repeated until a termination criterion is fulfilled, e.g., exhausting the predetermined maximum number of steps or finding a sample satisfying a desired constraint.
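The loop just described can be sketched as follows; this is a schematic illustration of the generic procedure, in which `fit_surrogate`, `build_acquisition`, and `minimize` are hypothetical placeholders for the model-specific steps discussed below:

```python
def bayesian_optimization(f, x0, n_steps, fit_surrogate, build_acquisition, minimize):
    """Generic sequential model-based optimization loop (sketch)."""
    dataset = [(x0, f(x0))]
    for _ in range(n_steps):
        surrogate = fit_surrogate(dataset)    # update f_hat on the dataset D
        alpha = build_acquisition(surrogate)  # e.g. Thompson sampling or MAP, see below
        x_next = minimize(alpha)              # most promising input point
        dataset.append((x_next, f(x_next)))   # one expensive evaluation per step
    return min(dataset, key=lambda pair: pair[1])
```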
In recent years, several attempts have been made to apply Bayesian optimization to high-dimensional combinatorial optimization problems [6, 15, 16, 17, 18, 19, 20, 21, 22, 23]. These methods often take a long time to reach the global optimum as the acquisition functions yield the same or nearby points many times [24] and get stuck in a local optimum. This could become serious when the next point to compute the objective function is determined from the acquisition function in a deterministic manner [15, 25, 26], in which the algorithm can never escape from a local optimum.
Ref. [4] solved the aforementioned issue by randomly selecting a new point as a postprocessing step: if the next query point \(x^{(t+1)}\) drawn from the acquisition function is already in the dataset \(\mathcal{D}\), it is rejected, and another randomly chosen point is proposed. Ref. [4] combined this process with an algorithm using factorization machines [27, 28]. Using an advanced gradient method, they updated the parameters of the surrogate model to minimize the least-squares loss, which can be interpreted as an approximate _maximum a posteriori_ (MAP) estimation of the model parameters [29, 30]. Our primary interest thus lies in understanding the effects of this postprocessing on Bayesian optimization.
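A minimal sketch of this postprocessing step, written for the binary-variable setting studied below (the function name and interface are ours):

```python
import random

def postprocess(x_next, dataset_inputs, n_bits):
    """Reject a duplicated proposal and replace it with a uniformly random
    point, so that the dataset never contains the same input twice."""
    seen = {tuple(x) for x in dataset_inputs}
    while tuple(x_next) in seen:
        x_next = [random.randint(0, 1) for _ in range(n_bits)]
    return x_next
```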
In this letter, we apply the postprocessing method to Bayesian optimization using Thompson sampling and MAP estimation for the parameter of the surrogate model, and study its performance for the ground-state search of the Sherrington-Kirkpatrick spin glass model. We show that random postprocessing significantly improves the performance when the algorithm is highly exploitative by driving it away from local optima. We also find that the Bayesian optimization algorithm with MAP estimation and random postprocessing outperforms its Thompson sampling variant. Our results imply that random postprocessing can improve Bayesian optimization for high-dimensional problems.
We focus on the ground-state search of the Sherrington-Kirkpatrick (SK) model [31] defined by the Hamiltonian
\[H =\frac{1}{\sqrt{N}}\sum_{i<j}J_{ij}s_{i}s_{j} \tag{1}\] \[=\frac{1}{\sqrt{N}}\sum_{i<j}J_{ij}(2x_{i}-1)(2x_{j}-1). \tag{2}\]
Here, \(N\) is the number of spins, \(s_{i}\in\{-1,1\}\), \(x_{i}=(s_{i}+1)/2\in\{0,1\}\), and the interaction between spin \(i\) and \(j\) is drawn from the normal distribution with zero mean and variance \(J^{2}\), i.e., \(J_{ij}\sim\mathcal{N}(0,J^{2})\). We use a second-order surrogate model as in the 'Bayesian optimization of combinatorial structures'
(BOCS) algorithm [15] for simplicity, which is given by
\[\hat{f}_{\mathbf{w}}(x)=w_{0}+\sum_{i}w_{i}x_{i}+\sum_{i<j}w_{ij}x_{i}x_{j}={\mathbf{w}}^ {\top}{\mathbf{z}}, \tag{3}\]
where \(x\in\{0,1\}^{N}\) is the input vector and \(w_{i},w_{ij}\in\mathbb{R}\) are parameters of the model. This model is linear with respect to \({\mathbf{w}}=(w_{0},w_{1},\ldots,w_{N},w_{12},\ldots,w_{(N-1)N})^{\top}\in\mathbb{R}^{P}\) and \({\mathbf{z}}=(1,x_{1},\ldots,x_{N},x_{1}x_{2},\ldots,x_{N-1}x_{N})^{\top}\in\{0,1\}^{P}\), where \(P=1+N+\binom{N}{2}\). We conduct linear regression to estimate the model parameters \({\mathbf{w}}\) from the dataset \(\mathcal{D}=\{({\mathbf{x}}^{(i)},H({\mathbf{x}}^{(i)}))\}\) as follows. We first normalize the observed energies \(\{H({\mathbf{x}}^{(i)})\}_{i=0,1,\cdots,|\mathcal{D}|-1}\) as
\[y^{(i)}=2\,\frac{H({\mathbf{x}}^{(i)})-\min_{j\in\mathcal{D}}H({\mathbf{x}}^{(j)})}{\max_{j\in\mathcal{D}}H({\mathbf{x}}^{(j)})-\min_{j\in\mathcal{D}}H({\mathbf{x}}^{(j)})}-1, \tag{4}\]
so that \(y^{(i)}\in[-1,1]\). While our algorithm works without this normalization, it slightly shortens the time to find the ground state. We then assume that \(y^{(i)}\) is distributed according to the normal distribution with variance \(\sigma_{y}^{2}\):
\[y^{(i)}|{\mathbf{x}}^{(i)},{\mathbf{w}}\sim\mathcal{N}({\mathbf{w}}^{\top}{\mathbf{z}}^{(i)}, \sigma_{y}^{2}). \tag{5}\]
Whereas the original BOCS algorithm uses the horseshoe prior distribution [32, 33, 34], which works efficiently for sparse parametric models [35], we employ a rather simple one: the normal prior \({\mathbf{w}}\sim\mathcal{N}({\mathbf{0}},\sigma_{\text{pr}}^{2}I)\). Thanks to the conjugacy of the normal distribution, the posterior distribution is also multivariate normal, \({\mathbf{w}}|\mathcal{D}\sim\mathcal{N}({\mathbf{m}}_{\text{pos}},{\mathbf{V}}_{\text{pos}})\), with the parameters
\[{\mathbf{m}}_{\text{pos}}=\frac{1}{\sigma_{y}^{2}}{\mathbf{V}}_{\text{pos}}{\mathbf{Z}}^{ \top}{\mathbf{y}}, \tag{6}\]
\[{\mathbf{V}}_{\text{pos}}=\sigma_{y}^{2}\left[{\mathbf{Z}}^{\top}{\mathbf{Z}}+\frac{\sigma _{y}^{2}}{\sigma_{\text{pr}}^{2}}I\right]^{-1}, \tag{7}\]
where \({\mathbf{Z}}=({\mathbf{z}}^{(0)},{\mathbf{z}}^{(1)},\ldots,{\mathbf{z}}^{(|\mathcal{D}|-1)})^{\top}\in\{0,1\}^{|\mathcal{D}|\times P}\) and \({\mathbf{y}}=(y^{(0)},y^{(1)},\ldots,y^{(|\mathcal{D}|-1)})^{\top}\in\mathbb{R}^{|\mathcal{D}|}\). The values of the hyperparameters are fixed to \(\sigma_{\text{pr}}^{2}=10^{-2}\) and \(\sigma_{y}^{2}=1\). Hereafter, we refer to BOCS with the normal prior as 'nBOCS' following Ref. [26].
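The model and the posterior update above are straightforward to implement; the following numpy sketch (our own rendering, with the hyperparameters of the text as defaults) collects the pieces:

```python
import numpy as np
from itertools import combinations

def features(x):
    """z = (1, x_1, ..., x_N, x_1 x_2, ..., x_{N-1} x_N), so f_hat(x) = w @ z (Eq. (3))."""
    x = np.asarray(x, dtype=float)
    pairs = [x[i] * x[j] for i, j in combinations(range(len(x)), 2)]
    return np.concatenate(([1.0], x, pairs))

def normalize_energies(H_values):
    """Affine rescaling of the observed energies to y^(i) in [-1, 1] (Eq. (4))."""
    H = np.asarray(H_values, dtype=float)
    return 2.0 * (H - H.min()) / (H.max() - H.min()) - 1.0

def posterior(Z, y, sigma_y2=1.0, sigma_pr2=1e-2):
    """Gaussian posterior N(m_pos, V_pos) of w given the design matrix Z (Eqs. (6)-(7))."""
    P = Z.shape[1]
    V_pos = sigma_y2 * np.linalg.inv(Z.T @ Z + (sigma_y2 / sigma_pr2) * np.eye(P))
    m_pos = (V_pos @ Z.T @ y) / sigma_y2
    return m_pos, V_pos
```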
We adopt Thompson sampling (TS) [36, 37, 38] and MAP estimation to build an acquisition function. TS draws a sample of the model parameters from the posterior and uses it to construct an acquisition function
\[\alpha^{\text{TS}}=\hat{f}_{\tilde{w}},\quad\text{where}\ \tilde{w}\sim p({\mathbf{w}}| \mathcal{D}). \tag{8}\]
In MAP estimation, on the other hand, we use a set of parameters that maximizes the posterior probability. The resultant acquisition function is thus
\[\alpha^{\text{MAP}}=\hat{f}_{\tilde{w}},\quad\text{where}\ \tilde{w}=\underset{w}{\text{arg max}}\ p({\mathbf{w}}|\mathcal{D}). \tag{9}\]
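Since the posterior here is Gaussian, its mode coincides with its mean, so the two acquisition rules reduce to the following sketch (the interface is ours):

```python
import numpy as np

def acquisition_weights(m_pos, V_pos, kind, rng):
    """Parameters w~ of the acquisition function alpha = f_hat_{w~}.

    "TS"  draws w~ from the posterior (Eq. (8)); posterior randomness
          provides exploration across iterations.
    "MAP" uses the posterior mode (Eq. (9)), which for this Gaussian
          posterior is simply the mean m_pos.
    """
    if kind == "TS":
        return rng.multivariate_normal(m_pos, V_pos)
    return np.asarray(m_pos)
```

Note that the MAP acquisition function is a deterministic function of the dataset; the only remaining randomness in that variant comes from the simulated-annealing minimizer described next.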
We use simulated annealing (SA) [39, 40, 41] to determine the next query point. In our annealing schedule for SA, the inverse temperature is \(\beta(r)=\beta_{\text{init}}\times(\beta_{\text{final}}/\beta_{\text{init}})^{r/r_{\text{total}}}\), with \(r=0,1,\cdots,r_{\text{total}}\) the time step of simulated annealing and \(r_{\text{total}}\) the total number of Monte Carlo sweeps. We set \(\beta_{\text{init}}=10^{-3}/J\) and \(r_{\text{total}}=10^{4}\). We systematically change the final inverse temperature \(\beta_{\text{final}}\), ranging from \(10^{0}/J\) to \(10^{4}/J\), to see how it affects the performance of our algorithm. Even though SA is not a deterministic method, it could yield a point already included in the current dataset \(\mathcal{D}\). In this case, we may reject it and sample a new one uniformly at random from \(\mathcal{X}\). Once we obtain a sample \({\mathbf{x}}_{\text{next}}\) not included in \(\mathcal{D}\), we compute \(H({\mathbf{x}}_{\text{next}})\) and append \(({\mathbf{x}}_{\text{next}},H({\mathbf{x}}_{\text{next}}))\) to \(\mathcal{D}\). This postprocessing strictly prohibits duplicated samples in the dataset \(\mathcal{D}\).
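A sketch of the annealer with this geometric schedule is given below; it uses single-bit-flip dynamics, assumes units with \(J=1\) for the default temperatures, and recomputes the energy from scratch at each flip (a production version would update \(\Delta E\) incrementally):

```python
import numpy as np

def simulated_annealing(energy, N, rng, beta_init=1e-3, beta_final=1e4, n_sweeps=10**4):
    """Single-bit-flip SA on x in {0,1}^N with the geometric schedule
    beta(r) = beta_init * (beta_final / beta_init)**(r / n_sweeps)."""
    x = rng.integers(0, 2, size=N)
    E = energy(x)
    for r in range(n_sweeps + 1):
        beta = beta_init * (beta_final / beta_init) ** (r / n_sweeps)
        for i in rng.permutation(N):        # one Monte Carlo sweep
            x[i] ^= 1                       # propose a single bit flip
            dE = energy(x) - E
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                E += dE                     # accept
            else:
                x[i] ^= 1                   # reject: undo the flip
    return x
```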
We run nBOCS with the two different acquisition functions, which we refer to as 'nBOCS (TS)' and 'nBOCS (MAP)', respectively, in the following. We also study the performances of these algorithms combined with the postprocessing described above, which we call 'nBOCS-Random (TS)' and 'nBOCS-Random (MAP)', respectively. Starting from a single randomly chosen pair \(\{({\mathbf{x}}^{(0)},H({\mathbf{x}}^{(0)}))\}\) as the initial dataset \(\mathcal{D}(t=0)\), we measure the normalized smallest energy in the dataset \(\mathcal{D}(t)=\{({\mathbf{x}}^{(i)},H({\mathbf{x}}^{(i)}))\}_{i=0,1,\cdots,t-1}\)
\[[u(t)]=\left[\frac{\min_{j\in\mathcal{D}(t)}H({\mathbf{x}}^{(j)})-H_{\text{min}}}{H_{\text{max}}-H_{\text{min}}}\right], \tag{10}\]
in which the bracket \([\cdot]\) stands for an average over disorder realizations, \(H_{\text{min}}\) and \(H_{\text{max}}\) are the minimum and maximum possible energies of each disorder instance, respectively. The typical number of disorder instances is \(10^{2}\). Whereas the ground-state search of the SK model is an NP-hard problem taking an exponentially long time with \(N\) in the worst case [42], we find the ground state for each instance by the branch-and-bound algorithm [43, 44] implemented in the Gurobi optimizer [45].
Fig. 1 shows \([u(t)]\) for nBOCS (TS) and nBOCS-Random (TS) with various \(\beta_{\text{final}}\)'s. The performance strongly depends on \(\beta_{\text{final}}\), and a larger \(\beta_{\text{final}}\) requires fewer iteration steps to find the ground state, but the postprocessing does not change \([u(t)]\) for any \(\beta_{\text{final}}\). This comes from the fact that \(\alpha^{\text{TS}}({\mathbf{x}})\) (Eq. (8)) rarely yields a point already included in \(\mathcal{D}(t)\) and triggers the postprocessing with a very small probability. Therefore, nBOCS (TS) and nBOCS-Random (TS) are virtually identical. In contrast, the postprocessing improves the performance of nBOCS when using the acquisition function \(\alpha^{\text{MAP}}({\mathbf{x}})\) (see Fig. 2): nBOCS-Random (MAP) typically finds the ground state within \(10^{3}\) steps if \(\beta_{\text{final}}\) is large enough, whereas \([u(t)]\) of nBOCS (MAP) with any \(\beta_{\text{final}}\) used in our runs gets stuck at \([u(t)]\gtrsim 10^{-1}\) and is independent of \(t\) when \(t\gtrsim 5\times 10^{2}\), indicating that nBOCS (MAP) cannot find the ground state even in the limit \(t\to\infty\). The parameter \({\mathbf{w}}\) is typically sparse when nBOCS (MAP) converges to an incorrect one \({\mathbf{w}}_{e}\): most components are much smaller than \(J\), and only a small fraction have large amplitudes comparable to \(J\). Finding the ground state of the surrogate model \(\hat{f}_{{\mathbf{w}}_{e}}({\mathbf{x}})\) with SA is thus easy, and the same point is always appended to \(\mathcal{D}\) even though SA is stochastic, meaning that \({\mathbf{w}}_{e}\) is a fixed point. Appending uniformly random samples to \(\mathcal{D}\) drives \({\mathbf{w}}\) away from the incorrect fixed point \({\mathbf{w}}_{e}\).
Now, we characterize the performance of each algorithm by the scaling of the typical number of steps \([\tau]\) needed
to find the ground state. For each disorder instance, we define \(\tau\) as the number of steps at which \(u(t=\tau)=10^{-3}\). We have checked that the following results do not change if we set the threshold value to \(10^{-6}\). In Fig. 3, we show \([\tau]\) for nBOCS (TS), nBOCS-Random (TS), and nBOCS-Random (MAP) with \(\beta_{\rm final}=10^{4}/J\), as a function of the number of spins \(N\). For all the algorithms we have studied, \([\tau]\) grows only algebraically with \(N\), with an exponent \(z\). This indicates that the average-case complexity is much lower than the worst case, as shown in Ref. [46]. The exponent \(z\) depends on the algorithm, and nBOCS-Random (MAP) yields the smallest value, \(z=2.12(3)\), while \(z=2.29(4)\) for the other two algorithms, suggesting that the postprocessing qualitatively changes how the algorithm explores the parameter space. This is better seen in the normalized overlap between \(\mathbf{w}(t)\) and the correct one, \(\mathbf{w}_{J}\), determined by Eq. (2),
\[R(t)=\left[\frac{\mathbf{w}(t)\cdot\mathbf{w}_{J}}{\|\mathbf{w}(t)\|\cdot\|\mathbf{w}_{J}\|} \right]. \tag{11}\]
If \(\mathbf{w}(t)\) is very close to \(\mathbf{w}_{J}\), \(R(t)\simeq 1\). We show \(R(t)\) for all the algorithms in Fig. 4. For nBOCS with Thompson sampling, \(R(t)\) grows very slowly with the iteration step and suddenly reaches \(R(t)\simeq 0.7\) at \(t\simeq 7\times 10^{2}\), at which point the algorithm typically finds the ground state. On the other hand, for nBOCS-Random (MAP), it is already quite large even at \(t=10^{2}\), and gradually converges to \(R(t)\simeq 0.8\) at \(t\simeq 5\times 10^{2}\), which is again very close to \([\tau]\). This indicates very different approaches to \(\mathbf{w}_{J}\) in the parameter space, which we expect to result in the different scalings of \([\tau]\). Note that, given that \(\mathbf{w}_{J}\) is dense with size \(P=O(N^{2})\), we expect that \(O(N^{2})\) data points are needed to find \(\mathbf{w}_{J}\) in general, yielding the optimal exponent \(z=2\). Our algorithms, especially nBOCS-Random (MAP), are thus very close to optimal.
To conclude, we have studied the effects of postprocessing on Bayesian optimization for an NP-hard, high-dimensional optimization problem. We have focused on the ground-state search of the SK model and found that, when combined with the nBOCS (MAP) algorithm, the postprocessing drastically
Figure 1: Normalized smallest energy \([u(t)]\) (Eq. (10)) as a function of iteration step \(t\) for (a) nBOCS (TS) and (b) nBOCS-Random (TS). The number of spins \(N=32\).
Figure 3: Number of steps \([\tau]\) to reach \([u(t)]=10^{-3}\) as a function of \(N\). The broken and dotted curves are power-law growths with exponent \(2.3\) and \(2.1\), respectively.
Figure 2: Normalized smallest energy \([u(t)]\) (Eq. (10)) as a function of iteration step \(t\) for (a) nBOCS (MAP) and (b) nBOCS-Random (MAP). The number of spins \(N=32\).
reduces the number of steps \([\tau]\) to find the ground state. We then found that \([\tau]\) has a power-law scaling with exponent \(z\), which is slightly smaller for nBOCS-Random (MAP) than for the other algorithms.
Our results show that the conceptually simple random postprocessing drives \(\mathbf{w}\) to escape from a metastable local optimum, yielding enhanced parameter-space exploration. We thus expect that it should improve Bayesian optimization for general optimization problems as well as for the ground-state search of the SK model. Applying the postprocessing to Bayesian optimization of other problems, e.g., constrained combinatorial optimization problems, is certainly an interesting direction.
We thank Renichiro Haba for useful discussions. Y. N. acknowledges support from JSPS KAKENHI (Grant No. 22K13968). M. O. receives financial support from JSPS KAKENHI Grant No. 23H01432, the MEXT-Quantum Leap Flagship Program Grant No. JPMXS0120352009, as well as Public/Private R&D Investment Strategic Expansion PrograM (PRISM) and programs for Bridging the gap between R&D and the IDeal society (society 5.0) and Generating Economic and social value (BRIDGE) from Cabinet Office.
|
2308.03875 | Stability Verification of Quantum non-i.i.d. sources | We introduce the problem of stability verification of quantum sources which
are non-i.i.d.. The problem consists in ascertaining whether a given quantum
source is stable or not, in the sense that it always produces a desired quantum
state or if it suffers deviations. Stability is a statistical notion related to
the sparsity of errors. This problem is closely related to the problem of
quantum verification first proposed by Pallister et al. [1]; however, it
extends the notion of the original problem. We introduce a family of states
that come from these non-i.i.d. sources which we call a Markov state. These
sources are more versatile than the i.i.d. ones as they allow statistical
deviations from the norm instead of the more coarse previous approach. We prove
in theorem 1 that the Markov states are not well described with tensor products
over a changing source. In theorem 2 we further provide a lower bound on the
trace distance between two Markov states, or conversely, an upper bound on the
fidelity between these states. This is a bound on the capacity of determining
the stability property of the source, which shows that it is exponentially
easier to ascertain this with respect to n, the number of outcomes from the
source. | Esteban Martínez-Vargas | 2023-08-07T19:00:28Z | http://arxiv.org/abs/2308.03875v4 | # Stability Verification of Quantum non-i.i.d. sources
###### Abstract
We introduce the problem of stability verification of quantum sources which are non-i.i.d. The problem consists in ascertaining whether a given quantum source is stable or not, in the sense that it always produces a desired quantum state or suffers deviations. Stability is a statistical notion related to the sparsity of errors. This problem is closely related to the problem of quantum verification first proposed by Pallister et al. [1]; however, it extends the notion of the original problem. We introduce a family of states that come from these non-i.i.d. sources, which we call Markov states. These sources are more versatile than i.i.d. ones, as they allow statistical deviations from the norm instead of the coarser previous approach. We prove in Theorem 1 that Markov states are not well described by tensor products over a changing source. In Theorem 2 we further provide a lower bound on the trace distance between two Markov states, or conversely, an upper bound on the fidelity between these states. This is a bound on the capacity to determine the stability property of the source, which shows that it becomes exponentially easier to ascertain with respect to \(n\), the number of outcomes from the source.
## I Introduction
Quantum tomography is the process of reconstructing a quantum state from a series of observations [2]. This is a very costly process [3], as it normally requires a number of measurements exponential in the dimension of the system [4], which implies an exponential number of copies of the state. Alternative approaches have been invented to circumvent this issue: using compressed sensing, for example [5]. Recently, there have been interesting lines of research whose objective is less ambitious than full-state tomography: to calculate functionals of states using a polynomial amount of resources [4, 6].
Close to this topic is the task of quantum verification [1], whose objective is to ascertain whether a source yields a desired state or incurs an error. The question to answer is whether a machine that produces identical copies of the state \(\left|\psi\right\rangle\), and whose details are hidden from us (a black box), is producing the state it should. Here one does not deal with the full tomographic problem, and therefore the number of required measurements can be lower.
Pallister et al. [1] define verification as a quantum hypothesis testing problem, which consists in guessing a given quantum state from two possible hypotheses with the lowest probability of error [7; 8]. The task is simple to state: suppose a machine produces states \(\{\sigma_{1},\sigma_{2},\ldots,\sigma_{n}\}\) which should be \(n\) copies of \(\left|\psi\right\rangle\). Hypothesis 0 is that \(\sigma_{i}=\left|\psi\right\rangle\!\!\left\langle\psi\right|\) for all \(i\), and hypothesis 1 is that \(\left\langle\psi\right|\sigma_{i}\left|\psi\right\rangle\leq 1-\epsilon\) for all \(i\), for \(0\leq\epsilon\leq 1\). The objective is that the verifier passes the test with a worst-case probability of \(\delta\). They consider independent online measurements [9].
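To make the scaling concrete, consider the naive strategy of measuring each copy with the two-outcome test \(\{\left|\psi\right\rangle\!\!\left\langle\psi\right|,\,\mathbb{1}-\left|\psi\right\rangle\!\!\left\langle\psi\right|\}\); this is a simpler strategy than the optimal ones of [1], and the sketch below is our own illustration. Under hypothesis 1, each copy passes with probability at most \(1-\epsilon\), so \(n\) independent copies all pass with probability at most \((1-\epsilon)^{n}\):

```python
import math

def worst_case_pass_prob(epsilon, n):
    """Probability that n bad copies (each with fidelity <= 1 - epsilon)
    all pass the naive projective test."""
    return (1 - epsilon) ** n

def copies_needed(epsilon, delta):
    """Smallest n with worst-case pass probability at most delta."""
    return math.ceil(math.log(delta) / math.log(1 - epsilon))
```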
Despite making considerable advances, their approach is an oversimplification, as it is restricted to detecting very specific situations: the source producing the correct state all the time, or a wrong one all the time. One might deem not so bad a machine that produces the desired state \(\left|\psi\right\rangle\) _most_ of the time but, here and there, in a sparse manner, allows an error.
Part of the oversimplification of the problem lies in the fact that its definition uses i.i.d. sources, which limit the abstract notions one would like to address. Here, we extend Pallister's notion of verification beyond identifying outputs of an i.i.d. source. Instead of aiming to detect a perfectly consistent source, we allow deviations as long as they are statistically negligible. We introduce the formalism of a family of mixed states that rigorously describes this situation. These states are prepared by a source that is non-i.i.d. but has temporal correlations between the produced states. We shall call the sources we study "Markov sources", as their definition depends on Markov chains.
We investigate how the family of states we introduce behaves, and we find that the tensor product of states after each iteration does not describe this case. In some sense, the Markov sources we introduce here generalize the notion of a Markov chain to quantum systems, a notion previously studied through other approaches [10]. We show in section II.1 the relationship between the sparsity of errors for the Markov source and two parameters \(\epsilon\in[0,1]\) and \(\delta\in[0,1]\). We then arrive at Theorem 1, which shows that the difference between the Markov source and a similar one is exponential in the number of instances of the Markov source.
Having defined the Markov sources and their respective output after \(n\) instances, we address the problem of verification, which can be translated into a quantum discrimination problem. Pallister's approach uses individual measurements for several copies; therefore, their measuring scheme has no horizon. By contrast, our approach presupposes that a fixed number of instances of the Markov source is given. We observe that this problem can be translated into two hypotheses: \(H_{0}\) is that we were |
2302.07490 | Electric dipole polarizability of $^{40}$Ca | The electric dipole strength distribution in $^{40}$Ca between 5 and 25 MeV
has been determined at RCNP, Osaka, from proton inelastic scattering
experiments at very forward angles. Combined with total photoabsorption data at
higher excitation energy, this enables an extraction of the electric dipole
polarizability $\alpha_\mathrm{D}$($^{40}$Ca) = 1.92(17) fm$^3$. Together with
the measured $\alpha_{\rm D}$ in $^{48}$Ca, it provides a stringent test of
modern theoretical approaches, including coupled cluster calculations with
chiral effective field theory interactions and state-of-the art energy density
functionals. The emerging picture is that for this medium-mass region dipole
polarizabilities are well described theoretically, with important constraints
for the neutron skin in $^{48}$Ca and related equation of state quantities. | R. W. Fearick, P. von Neumann-Cosel, S. Bacca, J. Birkhan, F. Bonaiti, I. Brandherm, G. Hagen, H. Matsubara, W. Nazarewicz, N. Pietralla, V. Yu. Ponomarev, P. -G. Reinhard, X. Roca-Maza, A. Richter, A. Schwenk, J. Simonis, A. Tamii | 2023-02-15T06:30:17Z | http://arxiv.org/abs/2302.07490v2 | # Electric Dipole Polarizability of \({}^{40}\)Ca
###### Abstract
The electric dipole strength distribution in \({}^{40}\)Ca between 5 and 25 MeV has been determined at RCNP, Osaka, from proton inelastic scattering experiments at very forward angles. Combined with total photoabsorption data at higher excitation energy, this enables an extraction of the electric dipole polarizability \(\alpha_{\rm D}(^{40}{\rm Ca})=1.92(17)\) fm\({}^{3}\). Together with the measured \(\alpha_{\rm D}\) in \({}^{48}\)Ca, it provides a stringent test of modern theoretical approaches, including coupled cluster calculations with chiral effective field theory interactions and state-of-the art energy density functionals. The emerging picture is that for this medium-mass region dipole polarizabilities are well described theoretically, with important constraints for the neutron skin in \({}^{48}\)Ca and related equation of state quantities.
_Introduction_.- The nuclear equation of state (EOS) determines not only basic properties of nuclei [1] but also plays a key role for the properties of neutron stars and the dynamics of core-collapse supernovae and neutron star mergers [2]. New observations of neutron stars and mergers provide constraints for the EOS of neutron-rich matter that can be compared with those derived from nuclear physics (see, e.g., Refs. [3; 4; 5]). However, while the EOS of symmetric nuclear matter is well determined around saturation density, the properties of neutron-rich matter are less explored experimentally. The latter depend on the symmetry energy, whose properties are typically encoded in an expansion around saturation density \(n_{0}\), with the symmetry energy at saturation density \(J(n_{0})\) and its density dependence \(L=3n_{0}\partial J(n_{0})/\partial n\).
Theoretically, a model-dependent correlation between \(L\) and the neutron-skin thickness \(r_{\rm skin}\) in nuclei with neutron excess has been established [6; 7; 8; 9]. This correlation was also recently confirmed in _ab initio_ computations of the neutron skin in \({}^{208}\)Pb [10]. Experimental attempts to determine the neutron skin thickness have been made with a variety of probes (see, e.g., Ref. [11] and references therein), but many of them suffer from systematic uncertainties entering the description of the reaction processes. Parity-violating elastic electron scattering (a weak process mediated by the \(Z^{0}\) boson) can be used for a nearly model-independent extraction of the neutron distribution in nuclei and, by comparison with accurately measured charge radii, the neutron skin thickness. Recently, results with this technique have been reported by the CREX and PREX collaborations for \({}^{48}\)Ca [12] and \({}^{208}\)Pb [13], respectively. The \(r_{\rm skin}\) values inferred with selected nuclear models favor a comparatively small neutron skin in the former case and a large skin in the latter.
Alternatively, the electric dipole polarizability \(\alpha_{\rm D}\) has been established as a possible measure of the neutron skin, based on the strong correlation with \(r_{\rm skin}\)[8; 14]. Data for \(\alpha_{\rm D}\) extracted from proton inelastic scattering experiments at extreme forward angles have been presented for both \({}^{48}\)Ca [15] and \({}^{208}\)Pb [16]. In these papers, two theoretical approaches have been used to describe \(\alpha_{\rm D}\): _ab initio_ coupled-cluster (CC) calculations [17; 18] starting from chiral two- and three-nucleon interactions [19; 20] and energy density functional (EDF) theory [21].
Attempts to simultaneously describe \(\alpha_{\rm D}(^{208}{\rm Pb})\) and the parity-violating asymmetry from PREX and CREX with EDF models have shown limited success [22; 23; 24; 25]. The values derived for \(r_{\rm skin}\) [13] and \(L\) [26] from PREX are in tension with EDFs capable of describing [27] the presently available results on \(\alpha_{\rm D}\) in \({}^{48}\)Ca [15], \({}^{68}\)Ni [28], \({}^{120}\)Sn [29; 30], and \({}^{208}\)Pb [16]. While the CREX result is in excellent agreement with _ab initio_ predictions [18], the PREX result is in mild tension with the recent _ab initio_ computations of \({}^{208}\)Pb [10].
Correlations between experimental observables and symmetry energy properties are well explored in EDF
theory [6; 7; 8; 14], but predictions for isovector observables like \(\alpha_{\rm D}\) are less well constrained. On the other hand, _ab initio_ calculations provide a direct link to the EOS, as nuclear matter properties can be calculated based on the same chiral interactions [10; 18; 19; 31; 32]. Results presented here are based on the set of two- and three-nucleon interactions from Refs. [19; 20] applied to study \(\alpha_{\rm D}\) in \({}^{48}\)Ca [15; 18]. The calculations of the \(E1\) response are based on merging the Lorentz Integral Transform approach with CC theory, as described in Refs. [33; 34]. Recent work has extended the original two-particle-two-hole (2p-2h) CC truncation to include correlations up to three-particle-three-hole (3p-3h), so-called triples corrections, in the computation of \(\alpha_{\rm D}\)[35]. Their inclusion leads to a reduction of the predictions for \(\alpha_{\rm D}\)(\({}^{48}\)Ca) of the order of 10%, allowing an improved simultaneous description of the charge radius [35]. A similar improvement was achieved for \({}^{68}\)Ni [36].
In this Letter, we present the measurement of the dipole polarizability for \({}^{40}\)Ca and confront it with CC and EDF calculations. This tests the emerging picture that nuclear theory can describe very well the neutron skin in medium-mass nuclei and related observables.
_Experiment.-_ Cross sections for the \({}^{40}\)Ca(\(p,p^{\prime}\)) reaction have been measured at RCNP, Osaka, at an incident proton energy of 295 MeV. Data were taken with the Grand Raiden spectrometer [37] in a laboratory scattering angle range \(0.4^{\circ}-14.0^{\circ}\) and for excitation energies in the range \(5-25\) MeV. Dispersion matching techniques were applied to achieve an energy resolution of about 30 keV (full width at half maximum). The experimental techniques and the raw data analysis are described in Ref. [38].
In the top panel of Fig. 1 we show representative energy spectra measured at laboratory scattering angles \(\Theta_{\rm lab}=0.4^{\circ}\), \(1.74^{\circ}\), \(3.18^{\circ}\), and \(5.15^{\circ}\). The predominant cross sections lie in the energy region above 10 MeV. \(M1\) strength in \({}^{40}\)Ca is known to be concentrated in a single prominent transition at 10.32 MeV [39]. The cross sections above 10 MeV show a broad resonance structure peaking at about 19 MeV and increasing towards \(0^{\circ}\). The angular dependence is consistent with relativistic Coulomb excitation of \(E1\) transitions. We identify this resonance structure as the isovector giant dipole resonance.
The various contributions to the spectra were separated using a multipole decomposition analysis (MDA), as described in Ref. [40]. Results for the most forward angle measured are presented in the bottom part of Fig. 1 as an example, where the spectrum was rebinned to 200 keV. Theoretical angular distributions for the relevant multipoles were obtained from Distorted Wave Born Approximation calculations with transition amplitudes from quasiparticle-phonon-model calculations, similar to the analysis of \({}^{48}\)Ca [15]. Additionally, a background due to pre-equilibrium multistep scattering was considered. Its angular dependence was taken from experimental systematics [41; 42], while the amplitude was derived by two means. Initially, an unconstrained fit was done at each energy bin of the set of spectra. The resulting cross sections could be well approximated by a simple Fermi function but showed strong fluctuations for certain excitation energy bins due to the similarity to some of the \(E1\) theoretical angular distributions. Thus, in the final analysis, the continuum contribution was determined by fitting a Fermi function to the unconstrained excitation energy dependence.
_Photoabsorption cross sections and dipole polarizability.-_ The \(E1\) cross sections resulting from the MDA were converted into equivalent photoabsorption cross sections using the virtual photon method [45]. The virtual photon spectrum was calculated in an eikonal approach [46] to Coulomb excitation, integrated over the distribution of scattering angles covered in the solid angle of each angular bin. The photoabsorption spectra derived from scattering data at \(0.40^{\circ}\) and \(1.00^{\circ}\) were
Figure 1: Top panel (a): Spectra of the \({}^{40}\)Ca(\(p,p^{\prime}\)) reaction at \(E_{0}=295\) MeV and scattering angles \(\Theta_{\rm lab}=0.4^{\circ}\), \(1.74^{\circ}\), \(3.18^{\circ}\) and \(5.15^{\circ}\). Bottom panel (b): Example of the MDA of the spectrum at \(\Theta_{\rm lab}=0.4^{\circ}\) in 200 keV bins (blue) and decomposition into contributions of \(\lambda\neq 1\) multipoles (orange), continuum background (green), and \(E1\) (red).
essentially identical, and that at \(1.74^{\circ}\) deviated only slightly, consistent with an estimate of the grazing angle (\(1.33^{\circ}\)) at which Coulomb-nuclear interference becomes relevant. The resulting photoabsorption cross section is displayed as blue histogram in Fig. 2 (a).
The electric dipole polarizability \(\alpha_{D}\) was obtained from the photoabsorption cross section over the energy range \(10-25\) MeV, leading to a contribution of \(1.60(14)\) fm\({}^{3}\). The integration was extended to \(60\) MeV, where the cumulative sum plotted in Fig. 2 (b) shows saturation. The data at higher excitation energies were taken for \(25-31\) MeV from Ref. [44] and for \(31-60\) MeV from Ref. [43] to obtain the total \(\alpha_{D}(^{40}\text{Ca})=1.92(17)\) fm\({}^{3}\). The uncertainty considers systematic errors of (i) the absolute cross sections, (ii) the MDA (determined as described, e.g., in Ref. [47]), and (iii) the parameterization of the continuum background, added in quadrature. The latter, dominating the total uncertainty budget, was estimated by the variation of the Fermi-function amplitude needed to change the \(\chi^{2}\) value of the MDA fit by one. Statistical errors turned out to be negligible. A detailed breakdown of the error contributions is given in Table 1.
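For reference, \(\alpha_{\rm D}\) follows from the photoabsorption cross section via the standard relation \(\alpha_{\rm D}=(\hbar c/2\pi^{2})\int\sigma_{\rm abs}(E)/E^{2}\,dE\); the following is our own minimal numerical sketch of this integration, assuming tabulated \(E\) in MeV and \(\sigma\) in mb:

```python
import numpy as np

HBARC = 197.327  # MeV fm

def alpha_D(E, sigma_mb):
    """Dipole polarizability in fm^3 from a tabulated photoabsorption
    cross section: (hbar c / 2 pi^2) times the trapezoidal integral of
    sigma/E^2, with E in MeV and sigma in mb (1 mb = 0.1 fm^2)."""
    E = np.asarray(E, dtype=float)
    integrand = 0.1 * np.asarray(sigma_mb, dtype=float) / E**2
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(E))
    return HBARC / (2 * np.pi**2) * integral
```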
_Comparison with coupled-cluster calculations.-_ The extracted value of \(\alpha_{\text{D}}\) serves as a benchmark for CC theory [33; 34; 18; 35]. Coupled-cluster calculations were recently performed for the dipole polarizability of \({}^{48}\)Ca [15] and \({}^{68}\)Ni [36], which led to an improved understanding of the neutron and proton distributions in nuclei, as well as their difference encoded in the neutron skin. We have performed CC computations of \(\alpha_{\text{D}}\) in \({}^{40}\)Ca starting from a Hartree-Fock reference state considering a basis of 15 major harmonic oscillator shells. To gauge the convergence of our results we varied the oscillator frequency in the range \(\hbar\omega=12-16\) MeV. Three-nucleon contributions had an additional energy cut of \(E_{\text{3max}}=16\hbar\omega\).
Figure 3 explores the correlation between \(\alpha_{\text{D}}\) for \({}^{40}\)Ca and \({}^{48}\)Ca as predicted by theory. Panel (a) shows the CC results including triples contributions, not available for \({}^{40}\)Ca so far. The theoretical uncertainties for the different Hamiltonians stem from the truncation of the CC expansion and the residual dependence on CC convergence parameters, calculated as described in Ref. [34]. Similarly to \({}^{48}\)Ca, we find that the inclusion of 3p-3h correlations reduces the value of \(\alpha_{\text{D}}(^{40}\text{Ca})\) by an amount varying between 10% and 20% for different interactions. While the EM and PWA interactions [19] are not simultaneously compatible with both \({}^{40}\)Ca and \({}^{48}\)Ca experimental data, the set of employed interactions shows an approximately linear trend between the two quantities overlapping with both experimental results. A particular improvement in the reproduction of both \(\alpha_{\text{D}}(^{48}\text{Ca})\) and \(\alpha_{\text{D}}(^{40}\text{Ca})\) is seen for the NNLO\({}_{\text{sat}}\) interaction [20], which is capable of accurately describing binding energies and radii of nuclei up to \({}^{40}\)Ca as well as the saturation point of symmetric nuclear matter. The different interactions predict a range of symmetry energy parameters \(J=27-33\) MeV, \(L=41-49\) MeV [18], with the NNLO\({}_{\text{sat}}\) values at the lower end (\(J=27\) MeV, \(L=41\) MeV).
_Comparison with EDF approaches.-_ Recently, it was investigated whether the dipole polarizability and the parity-violating asymmetry \(A_{\text{PV}}\) for \({}^{208}\)Pb and \({}^{48}\)Ca can be simultaneously accounted for with modern EDFs [24]. We use the four representative forms of functionals from that study: non-relativistic Skyrme functionals SV [48] and RD [49], the latter with different forms of density dependence, and relativistic functionals DD [50] with finite-range meson-exchange coupling and PC [51] with point coupling. All four have been calibrated to the same set of ground-state data to determine the model parameters. With these sets, it was shown that PREX and CREX results for \(A_{\text{PV}}\) (and \(r_{\text{skin}}\)) cannot be consistently explained within the model uncertainties while the \(\alpha_{\text{D}}\) were repro
\begin{table}
\begin{tabular}{l c} Source & Value (\%) \\ \hline Trigger efficiency & 0.1 \\ Drift chamber efficiency & 0.8 \\ Charge collection & 0.3 \\ Target thickness & 1.0 \\ Determination of solid angle & 3.0 \\ MDA & 1.2 \\ Background parameterization & 8.3 \\ \hline Total & 9.0 \\ \end{tabular}
\end{table}
Table 1: Budget of error contributions to \(\alpha_{\text{D}}(^{40}\text{Ca})\).
Figure 2: Top panel (a): Photoabsorption cross section derived at a scattering angle of \(0.40^{\circ}\) using the virtual photon method. Bottom panel (b): Electric dipole polarizability \(\alpha_{D}\) derived from the photoabsorption cross sections. The blue curve shows the present data, while the orange and green curves show the extrapolation to higher energies using the data of Refs. [43; 44]. The open (full) black circles are the CC results for the NNLO\({}_{\text{sat}}\) interaction including up to doubles (triples) contributions in the cluster expansion.
duced. Hence, the present result in \({}^{40}\)Ca provides an important test of the global predictive power of these EDFs.
Figure 3 (b) displays the EDF results for \(\alpha_{\rm D}\) with 1\(\sigma\) error ellipses (for their definition see Refs. [22; 24]). The parametrizations as given by the ground-state fits are shown as filled ellipses. The DD functional performs rather well. The other predictions tend to slightly overestimate the experimental mean values of both \({}^{40}\)Ca and \({}^{48}\)Ca, while their 1\(\sigma\) error ellipses do overlap with the experimental bands, except for PC. In all cases, the two \(\alpha_{\rm D}\) values are highly correlated. We note that the same holds for the description of \(\alpha_{D}\)(\({}^{208}\)Pb) [16] after correction for the quasi-deuteron contribution [27]. Thus, all the models are capable of accounting for the mass dependence of the polarizability.
The dashed ellipses show results from a refit where, additionally, the experimental \(\alpha_{\rm D}\) value of \({}^{208}\)Pb [16], corrected for the quasideuteron part [27], was included, yielding the functionals SV-alpha, RD-alpha, PC-alpha, and DD-alpha [22; 24]. This improves the agreement with experiment, particularly for the PC model, and shrinks most error ellipsoids. The uncertainty reduction is especially large for the DD model because this functional has the least isovector freedom. The linear trend shown by the different theoretical approaches in Fig. 3 is similar, although the CC calculations tend to underestimate \(\alpha_{D}\) in \({}^{40}\)Ca while performing nicely for \({}^{48}\)Ca. The bulk symmetry energies range from \(J=30\) MeV for DD to 35 MeV for PC and, accordingly, from 32 MeV to 82 MeV for \(L\). The fits that also include \(\alpha_{\rm D}\) in \({}^{208}\)Pb narrow the prediction to \(J=30-32\) MeV and \(L=35-52\) MeV, which correlates nicely with the narrower range of predictions for \(\alpha_{\rm D}\) in \({}^{40,48}\)Ca.
_Conclusions.-_ We have extracted the dipole polarizability of \({}^{40}\)Ca from a combination of relativistic Coulomb excitation measurements in inelastic proton scattering at very forward angles with total photoabsorption data at high excitation energies. Together with a similar analysis of \({}^{48}\)Ca, the new data serve as a benchmark test of state-of-the-art theoretical approaches. A representative set of EDFs can describe these data. An improvement is obtained when the EDFs are optimized by adding the dipole polarizability of \({}^{208}\)Pb to the calibration dataset. Coupled-cluster computations for the NNLO\({}_{\rm sat}\) interaction simultaneously describe well the dipole polarizability of \({}^{40}\)Ca and \({}^{48}\)Ca, as well as the corresponding charge radii and the neutron skin thickness [34]. A nearly linear systematic trend is obtained for other interactions, as in the case of EDF theory. This analysis supports the robustness of current theoretical approaches in the description of \(\alpha_{D}\) and their constraints on symmetry energy parameters discussed, e.g., in Refs. [22; 24; 52].
This work was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project-ID 279384907 -- SFB 1245, through the Cluster of Excellence "Precision Physics, Fundamental Interactions, and Structure of Matter" (PRISMA\({}^{+}\) EXC 2118/1, Project ID 39083149), by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics under award numbers DE-SC0013365 and DE-SC0023175 (NUCLEI SciDAC-5 collaboration), under the contract DE-AC05-00OR22725 with UT-Battelle, LLC (Oak Ridge National Laboratory), and by the University of Cape Town. Computer time was provided by the Innovative and Novel Computational Impact on Theory and Experiment (IN
Figure 3: Comparison of the experimental dipole polarizabilites of \({}^{40}\)Ca (present work) and \({}^{48}\)Ca [15] shown as blue bands with (top panel, a) CC calculations with different interactions, including triples contributions and (bottom panel, b) EDF calculations with different energy density functionals [22]. For details see text.
CITE) programme. This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.
|
2310.07681 | Murmurations | We establish a case of the surprising correlation phenomenon observed in the
recent works of He, Lee, Oliver, Pozdnyakov, and Sutherland between Fourier
coefficients of families of modular forms and their root numbers. | Nina Zubrilina | 2023-10-11T17:30:24Z | http://arxiv.org/abs/2310.07681v1 | # Murmurations
###### Abstract
We establish a case of the surprising correlation phenomenon observed in the recent works of He, Lee, Oliver, Pozdnyakov, and Sutherland between Fourier coefficients of families of modular forms and their root numbers.
## 1 Introduction
In a recent paper, He, Lee, Oliver, and Pozdnyakov ([3]) discovered a remarkable oscillation pattern in the averages of the Frobenius traces of elliptic curves of fixed rank and conductor in a bounded interval. This discovery stemmed from the use of machine learning and computational techniques and did not explain the mathematical source of this phenomenon, referred to as "murmurations" due to its visual similarity to bird flight patterns:
Later, Sutherland and the authors ([13], [4]) detected this bias in more general families of arithmetic \(L\)-functions, for instance, those associated to weight \(k\) holomorphic modular cusp forms for \(\Gamma_{0}(N)\) with conductor in a geometric interval range \([M,cM]\) and a fixed root number. Sutherland made a striking observation that the average of \(a_{f}(P)\) over this family for a single prime \(P\sim M\) converges to a continuous-looking function of \(P/M\):
Figure 2: Dyadic averages of \(a(p)\) with a fixed root number, courtesy of Sutherland
The goal of this paper is to establish this bias in families of modular forms of square-free level with arbitrary fixed weight and root number. We show the following:
**Theorem 1**.: _Let \(H^{\text{new}}(N,k)\) be a Hecke basis for trivial character weight \(k\) cusp newforms for \(\Gamma_{0}(N)\), with \(f\in H^{\text{new}}(N,k)\) normalized to have lead coefficient \(1\). Let \(\varepsilon(f)\) be the root number of \(f\), let \(a_{f}(p)\) be the \(p\)-th Fourier coefficient of \(f\), and let \(\lambda_{f}(p):=a_{f}(p)/p^{(k-1)/2}\). Let \(X,Y,\) and \(P\) be parameters going to infinity with \(P\) prime; assume further that \(Y=(1+o(1))X^{1-\delta_{2}}\) and \(P\ll X^{1+\delta_{1}}\) for some \(\delta_{2},\delta_{1}>0\) with \(2\delta_{1}<\delta_{2}<1\). Let \(y:=P/X\). Then:_
\[\frac{\sum^{\square}_{N\in[X,X+Y]}\sum_{f\in H^{\text{new}}(N,k)}\lambda_{f}(P)\sqrt{P}\varepsilon(f)}{\sum^{\square}_{N\in[X,X+Y]}\sum_{f\in H^{\text{new}}(N,k)}1} =D_{k}A\sqrt{y}+(-1)^{k/2-1}D_{k}B\sum_{1\leq r\leq 2\sqrt{y}}c(r)\sqrt{4y-r^{2}}\,U_{k-2}\left(\frac{r}{2\sqrt{y}}\right)\] \[-D_{k}\delta_{k=2}\pi y+\text{O}_{\varepsilon}\left(X^{-\delta^{\prime}+\varepsilon}+\frac{1}{P}\right)\]
_where \(U_{k-2}\) is the Chebyshev polynomial given by_
\[U_{n}(\cos\theta):=\frac{\sin((n+1)\theta)}{\sin\theta},\] \[\delta^{\prime}:=\min\{\delta_{2}/2-\delta_{1},1/9+\delta_{2}/9- \delta_{1}\},\] \[A:=\prod_{p}\left(1+\frac{p}{(p+1)^{2}(p-1)}\right),\] \[B:=\prod_{p}\frac{p^{4}-2p^{2}-p+1}{(p^{2}-1)^{2}},\] \[D_{k}:=\frac{12}{(k-1)\pi\prod_{p}\left(1-\frac{1}{p^{2}+p} \right)},\] \[c(r):=\prod_{p|r}\left(1+\frac{p^{2}}{p^{4}-2p^{2}-p+1}\right),\]
_and \(\sum^{\square}\) denotes a sum over square-free parameters. In particular, for any \(\delta_{1}<2/9\), one can find \(\delta_{2}\) for which \(\delta^{\prime}>0\)._
We define
\[M_{k}(y):=D_{k}A\sqrt{y}+(-1)^{k/2-1}D_{k}B\sum_{1\leq r\leq 2\sqrt{y}}c(r) \sqrt{4y-r^{2}}U_{k-2}\left(\frac{r}{2\sqrt{y}}\right)-D_{k}\delta_{k=2}\pi y\]
to be the weight \(k\) murmuration density.
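For readers who wish to plot these densities, the following numerical sketch (our own illustration, not part of the proofs) evaluates \(M_{k}(y)\), truncating the Euler products at \(10^{4}\) and using sympy only to enumerate primes:

```python
import numpy as np
from sympy import primerange, primefactors

PRIMES = list(primerange(2, 10**4))  # truncation of the Euler products

A = float(np.prod([1 + p / ((p + 1)**2 * (p - 1)) for p in PRIMES]))
B = float(np.prod([(p**4 - 2*p**2 - p + 1) / (p**2 - 1)**2 for p in PRIMES]))

def D(k):
    return 12 / ((k - 1) * np.pi * np.prod([1 - 1/(p**2 + p) for p in PRIMES]))

def c(r):
    # Product over primes dividing r; empty product (r = 1) gives 1.
    return np.prod([1 + p**2 / (p**4 - 2*p**2 - p + 1) for p in primefactors(r)])

def U(n, x):
    """Chebyshev polynomial of the second kind, via the recurrence."""
    u_prev, u = 1.0, 2.0 * x
    if n == 0:
        return u_prev
    for _ in range(n - 1):
        u_prev, u = u, 2.0 * x * u - u_prev
    return u

def M(k, y):
    """Weight-k murmuration density of Theorem 1 (k even)."""
    s = sum(c(r) * np.sqrt(max(4*y - r*r, 0.0)) * U(k - 2, r / (2*np.sqrt(y)))
            for r in range(1, int(2*np.sqrt(y)) + 1))
    return D(k)*A*np.sqrt(y) + (-1)**(k//2 - 1)*D(k)*B*s - D(k)*np.pi*y*(k == 2)
```

Plotting `M(2, y)` on \((0,1]\) should reproduce the shape of the weight 2 density shown in Figure 4.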
The formula above arises from an application of the Eichler-Selberg trace formula to the composition of Hecke and Atkin-Lehner operators, which allows us to reinterpret the sum in terms of class numbers. We then compute class number averages in short intervals by means of the class number formula.
We remark that the exponents in the statement are far from optimal, as our goal here is only to get a power saving error term in a range up to \(X^{a}\) for some \(a>1\). The restriction to square-free levels is a technical one, as the trace formula simplifies greatly when the level is square-free. From the computations of Sutherland, it appears that the resulting density functions are slightly perturbed when one considers all levels, but they share key properties with the ones above.
The murmuration densities \(M_{k}(y)\) in Theorem 1 have many interesting features. They are highly oscillatory and continuous, with derivative discontinuities at \(n^{2}/4\) for \(n\in\mathbb{N}\). At the origin, the \(M_{k}\)'s are positive and grow like \(\sqrt{y}\). This positive root number bias for small \(P\) has been observed previously in the works of Martin and Pharis (see [8], [9], [10]).
In the case of weight \(k=2\), we analyze the behavior of this function as \(y\to\infty\) in more detail. In spite of the \(y\) term appearing in the formula, it has a true growth rate of \(y^{1/4}\). The function \(M_{2}(y)/y^{1/4}\) is an asymptotically uniformly almost periodic function of \(\sqrt{y}\); it is a convergent sum of periodic functions with (increasing) half-integer periods. Its sign, i.e., the sign of the correlation bias, changes infinitely often. We prove the following:
**Theorem 2**.: _Let_
\[M_{2}(y)=D_{2}A\sqrt{y}+D_{2}B\sum_{1\leq r\leq 2\sqrt{y}}c(r)\sqrt{4y-r^{2}}-D _{2}\pi y\]
_be the weight \(2\) murmuration density. Then as \(y\to\infty\),_
\[M_{2}(y)=y^{1/4}\cdot\frac{2BD_{2}}{\pi}\sum_{\begin{subarray}{c}1\leq d\leq 2\sqrt{y}\\ d\ \square\text{-free}\end{subarray}}Q(d)\,d^{1/2}\,\zeta\left(-1/2,\left\{\frac{2\sqrt{y}}{d}\right\}\right)+\mathrm{O}(1).\]
\[Q(d):=\prod_{p|d}\frac{p^{2}}{p^{4}-2p^{2}-p+1}\asymp\mu^{2}(d)/d^{2}\]
_and \(\{\cdot\}\) denotes the fractional part._
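The expansion above can be compared numerically with the definition of \(M_{2}\). The sketch below is illustrative (truncated Euler products, a single sample value of \(y\)); it uses `mpmath.zeta(s, a)` for the Hurwitz zeta function.

```python
# Hedged numerical check of Theorem 2: M_2(y) against the y^{1/4} expansion.
import math
from mpmath import zeta                      # zeta(s, a) is the Hurwitz zeta
from sympy import primerange, primefactors, factorint

CUT = 10**4                                  # Euler product truncation (assumption)
A  = math.prod(1 + p/((p + 1)**2*(p - 1)) for p in primerange(2, CUT))
B  = math.prod((p**4 - 2*p**2 - p + 1)/(p**2 - 1)**2 for p in primerange(2, CUT))
D2 = 12/(math.pi*math.prod(1 - 1/(p**2 + p) for p in primerange(2, CUT)))
c  = lambda r: math.prod(1 + p**2/(p**4 - 2*p**2 - p + 1) for p in primefactors(r))
Q  = lambda d: math.prod(p**2/(p**4 - 2*p**2 - p + 1) for p in primefactors(d))
squarefree = lambda d: all(e == 1 for e in factorint(d).values())

y = 250.0
R = int(2*math.sqrt(y))
M2 = D2*(A*math.sqrt(y)
         + B*sum(c(r)*math.sqrt(4*y - r*r) for r in range(1, R + 1))
         - math.pi*y)
main = y**0.25 * 2*B*D2 * sum(Q(d)*math.sqrt(d)*float(zeta(-0.5, (2*math.sqrt(y)/d) % 1))
                              for d in range(1, R + 1) if squarefree(d))
print(M2, main)  # the difference should stay bounded as y grows
```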
Integrating the murmuration density above produces the dyadic interval weight \(k\) averages observed by Sutherland:
[Figure 3: Plots of \(M_{8}\) and \(M_{24}\), courtesy of Sutherland ([14]).]

[Figure 4: The weight \(2\) density function.]
**Theorem 3**.: _Let \(P\ll X^{6/5}\), let \(c>1\) be a constant, \(Z:=cX\), and \(y:=P/X.\) Then as \(X\to\infty,\)_
\[\frac{\sum_{N\in[X,Z]}^{\square}\sum_{f\in H^{\text{new}}(N,k)}\lambda_{f}(P)\sqrt{P}\varepsilon(f)}{\sum_{N\in[X,Z]}^{\square}\sum_{f\in H^{\text{new}}(N,k)}1}=\frac{2}{(c^{2}-1)}\int_{1}^{c}uM_{k}(y/u)du+o_{y}(1),\]
_where \(M_{k}(y)\) is as in Theorem 1. In particular, for \(k=c=2\), the dyadic average_
\[\frac{\sum_{N\in[X,2X]}^{\square}\sum_{f\in H^{\text{new}}(N,2)}a_{f}(P)\varepsilon(f)}{\sum_{N\in[X,2X]}^{\square}\sum_{f\in H^{\text{new}}(N,2)}1}\]
_converges to_
\[\begin{cases}\alpha\sqrt{y}-\beta y&\text{on }[0,1/4];\\ \alpha\sqrt{y}-\beta y+\gamma\pi y^{2}-\gamma(1-2y)\sqrt{y-1/4}-2\gamma y^{2}\arcsin\left(\frac{1}{2y}-1\right)&\text{on }[1/4,1/2];\\ \alpha\sqrt{y}-\beta y+2\gamma y^{2}\left(\arcsin\left(\frac{1}{y}-1\right)-\arcsin\left(\frac{1}{2y}-1\right)\right)&\\ \quad-\gamma(1-2y)\sqrt{y-1/4}+2\gamma(1-y)\sqrt{2y-1}&\text{on }[1/2,1],\end{cases}\]
_where_
\[\alpha\approx 6.38936,\beta\approx 11.3536,\gamma\approx 2.6436.\]
_We note this is the function from Figure 2._
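As a quick sanity check, the three branches above agree at the breakpoints \(y=1/4\) and \(y=1/2\); a minimal numerical verification (illustrative; not the authors' code):

```python
# The piecewise dyadic average of Theorem 3 is continuous at y = 1/4 and y = 1/2.
import math

a, b, g = 6.38936, 11.3536, 2.6436   # alpha, beta, gamma from Theorem 3

def branch1(y):
    return a*math.sqrt(y) - b*y

def branch2(y):
    return (a*math.sqrt(y) - b*y + g*math.pi*y**2
            - g*(1 - 2*y)*math.sqrt(y - 0.25)
            - 2*g*y**2*math.asin(1/(2*y) - 1))

def branch3(y):
    return (a*math.sqrt(y) - b*y
            + 2*g*y**2*(math.asin(1/y - 1) - math.asin(1/(2*y) - 1))
            - g*(1 - 2*y)*math.sqrt(y - 0.25)
            + 2*g*(1 - y)*math.sqrt(2*y - 1))

print(branch1(0.25), branch2(0.25))  # agree at y = 1/4
print(branch2(0.5), branch3(0.5))    # agree at y = 1/2
```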
Finally, we analyze the asymptotic behavior of the smoothed averages from Theorem 3:
**Theorem 4**.: _Let \(\Phi:(0,\infty)\to\mathbb{C}\) be a compactly supported smooth weight function, and let_
\[M_{\Phi}(y):=\Big{(}\int_{0}^{\infty}M_{2}(y/u)\Phi(u)u^{2}\frac{du}{u}\Big{)} /\int_{0}^{\infty}\Phi(u)u^{2}\frac{du}{u}.\]
_Then \(M_{\Phi}\) is continuous on \((0,\infty)\), \(M_{\Phi}(0)=0\), and as \(y\to\infty\),_
\[M_{\Phi}(y)=1+o_{y}(1).\]
Asymptotic properties of the characteristic function averages as in Theorem 3 will be analyzed in forthcoming work.
Murmurations are a feature of the one-level density transition range. The Katz-Sarnak philosophy ([6]) predicts that averages as in Theorem 4 for \(P\sim N^{a}\) behave differently when \(a<1\) and \(a>1\), and our statements for \(P\thicksim N\) describe the phase transition between these. The unusual (for \(k>2\)) normalization of the coefficients above also arises naturally from this interpretation. For a more detailed discussion of this connection, we point the reader to [11].
Computing similar averages weighted "harmonically" (i.e., by the value at \(1\) of the symmetric square \(L\)-function) by means of the Petersson formula reveals that with weights, this bias becomes much less pronounced: the resulting function grows like \(y\), as opposed to \(\sqrt{y}\), at the origin.
Murmurations for elliptic curves over \(\mathbb{Q}\) are not explained by these results, as they constitute a very sparse subset of weight \(2\) modular forms. We point out that computational observations of the aforementioned authors make a compelling case that this phenomenon is very sensitive to the ordering by conductor, and disappears almost entirely when the curves are ordered by naive height, \(j\)-invariant, or discriminant.
## 2 Trace Formula Setup
Given a square-free positive integer \(N\) and a prime \(P\nmid N\), let \(H^{\text{new}}(N,k)\) denote a basis of the space \(S^{\text{new}}(N,k)\) of weight \(k\) Hecke cusp newforms for \(\Gamma_{0}(N)\), and let \(a_{f}(P)=\lambda_{f}(P)P^{(k-1)/2}\) denote the eigenvalue under the \(P\)-th Hecke operator \(T_{P}\) of \(f\in H^{\text{new}}(N,k)\). Let \(\varepsilon(f)\) denote the root number of \(f\) (recall \((-1)^{k/2}\varepsilon(f)\) is equal to the eigenvalue of \(f\) under the Atkin-Lehner involution \(W_{N}\)). In order to compute the average of \(a_{f}(P)\varepsilon(f)\) for eigenforms \(f\) ranging over square-free levels \(N\) in an interval, we interpret \(\sum_{f\in H^{\text{new}}(N,k)}a_{f}(P)\varepsilon(f)\) as the trace of the operator \((-1)^{k/2}T_{P}\circ W_{N}\) on \(S^{\text{new}}(N)\) and apply the corresponding trace formula.
Such a trace formula was first derived by Yamauchi in [16]; the result contained a computational error which was later corrected by Skoruppa and Zagier ([12]). This formula (section 2, formula (7)) gives the trace of \(T_{\text{p}}\circ W_{N}\) on the full space of cusp forms \(S(N)\); as the authors point out in the discussion leading to formula (5), oldforms coming from \(S(M)\) contribute to the trace only when \(N/M\) is a square. Since we are restricting ourselves to \(N\) square-free, we thus have the following result at our disposal:
**Theorem** (Skoruppa-Zagier [12], section 2, formulas (5) and (7)).: _For \(N\) square-free and a prime \(P\nmid N\),_
\[\sum_{f\in H^{\text{new}}(N,k)}\sqrt{P}\lambda_{f}(P)\varepsilon(f) =\frac{H_{1}(-4PN)}{2}+(-1)^{k/2-1}\sum_{0<r\leq 2\sqrt{P/N}}U_{k-2}\left(\frac{r\sqrt{N}}{2\sqrt{P}}\right)H_{1}(r^{2}N^{2}-4PN)\] \[-\delta_{k=2}(P+1).\]
Here the Hurwitz class number \(H_{1}(-d)\) is the number of equivalence classes with respect to \(\text{SL}_{2}(\mathbb{Z})\) of positive-definite binary quadratic forms of discriminant \(-d\), weighted by the number of automorphisms (i.e., with forms corresponding to multiples of \(x^{2}+y^{2}\) and \(x^{2}+xy+y^{2}\) counted with multiplicities \(1/2\) and \(1/3\), respectively). Hence, \(H_{1}\) can be expressed in terms of the Gauss class number \(h\) via:
\[H_{1}(-d)=\sum_{f\in\mathbb{N}:f^{2}|d}h(-d/f^{2})+\text{O}(1),\]
with the error term disappearing if \(d\neq 3\cdot\square,4\cdot\square\).
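For concreteness, \(H_{1}(-d)\) is straightforward to compute by listing reduced forms; the short sketch below (not from the paper) does this with exact rational weights.

```python
# Hurwitz class number H_1(-d) by enumerating SL_2(Z)-reduced positive-definite
# forms ax^2 + bxy + cy^2 with b^2 - 4ac = -d; multiples of x^2 + y^2 and of
# x^2 + xy + y^2 are weighted by 1/2 and 1/3.
import math
from fractions import Fraction

def hurwitz_H1(d):
    """H_1(-d) for d > 0; nonzero only when d = 0 or 3 mod 4."""
    if d % 4 not in (0, 3):
        return Fraction(0)
    total = Fraction(0)
    for b in range(-math.isqrt(d // 3), math.isqrt(d // 3) + 1):
        if (b - d) % 2:                    # need b = d mod 2, so 4 | b^2 + d
            continue
        q = (b*b + d) // 4                 # q = a*c
        for a in range(max(abs(b), 1), math.isqrt(q) + 1):
            if q % a:
                continue
            c = q // a
            if b == -a or (a == c and b < 0):
                continue                   # reduced: -a < b <= a <= c, b >= 0 if a == c
            if a == b == c:
                total += Fraction(1, 3)    # multiple of x^2 + xy + y^2
            elif a == c and b == 0:
                total += Fraction(1, 2)    # multiple of x^2 + y^2
            else:
                total += 1
    return total

print([hurwitz_H1(d) for d in (3, 4, 7, 8, 11, 12, 15, 16)])
# -> [1/3, 1/2, 1, 1, 1, 4/3, 2, 3/2]
```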
Assume from now on that \(P\neq 2\). The square factors of \(4PN\) are \(1\) and \(4\), since by assumption \(P\nmid N\). For a prime \(q\) and \(r\geq 1\), the condition \(q^{2}\mid N(r^{2}N-4P)\) can hold either if \(q^{2}|r^{2}N-4P\) or if \(q\) divides both \(N\) and \(4P\), i.e., if \(q=2\) and \(N\) is even. However, if \(N=2N^{\prime}\) is even (with \(N^{\prime}\) odd), then for any \(d\) with \(4d^{2}|(r^{2}N^{2}-4PN)\), one has
\[(r^{2}N^{2}-4PN)/4d^{2}=(r^{2}N^{\prime 2}-2PN^{\prime})/d^{2},\]
which is always \(2\) or \(3\) modulo \(4\), so the corresponding class number vanishes. Thus it suffices to consider square divisors of \(r^{2}N^{2}-4PN\) for which \(d^{2}|r^{2}N-4P\). Consequently, for \(N\) square-free, and a prime \(P\nmid 2N\), the trace formula becomes:
\[\sum_{f\in H^{\text{new}}(N,k)}\sqrt{P}\lambda_{f}(P)\varepsilon(f) =\frac{h(-4PN)}{2}+\frac{h(-PN)}{2}-\delta_{k=2}P\] \[+(-1)^{k/2-1}\sum_{1\leq r\leq 2\sqrt{\frac{P}{N}}}U_{k-2}\left(\frac{r\sqrt{N}}{2\sqrt{P}}\right)\sum_{d^{2}|r^{2}N-4P}h(N(r^{2}N-4P)/d^{2})+\text{O}(1). \tag{1}\]
From this formula, one can already see that the trace is positively biased, at least for small \(P\). Indeed, the only negative term in this expression is \(-P\); on the other hand, Siegel's bound dictates that the class
number terms should be of size \((PN)^{1/2+\varepsilon}\), which dominates for small \(P\), as was observed in [8], [9], and [10].
On the other hand, for \(P\) of size \(N^{1+\varepsilon}\), the balance becomes more subtle, and as we will see, the trace can be either positive or negative, even when averaged over short intervals in \(N\).
## 3 Average Class Number in Short Intervals
Our interest in this section is to exploit the Dirichlet class number formula to understand the sums of class numbers in (1) as the square-free parameter \(N\) ranges over a short interval \([X,X+Y]\) for \(Y=o(X)\). For such an interval, the square root term in the class number formula has essentially constant size, so these sums can be understood by averaging the Dirichlet characters coming from a truncated special value of the \(L\)-function. Carrying out this computation yields Theorem 1. We establish it via the following two propositions:
**Proposition 3.1**.: _Let \(P\neq 2\) be prime and let \([X,X+Y]\) be an interval of length \(Y=o(X)\). Then as \(X\to\infty\), we have_
\[\frac{\zeta(2)\pi}{XY}\sum_{\begin{subarray}{c}N\in[X,X+Y]\\ P\nmid N\end{subarray}}^{\square}\sqrt{\frac{N}{P}}\;\frac{h(-4PN)+h(-PN)}{2}=A+\mathrm{O}_{\varepsilon}\left(\frac{1}{P^{2}}+\frac{X^{1/24}}{P^{5/24}Y^{5/12}}+\frac{P^{1/12}X^{7/12+\varepsilon}}{Y^{5/6}}+\frac{Y}{X^{1-\varepsilon}}\right).\]

**Proposition 3.2**.: _Let \(k\geq 2\) be even, let \(P\neq 2\) be prime, and assume additionally that \(Y=(1+o(1))X^{1-\delta_{2}}\) and \(P\ll X^{1+\delta_{1}}\) as in Theorem 1. Then as \(X\to\infty\),_

\[\frac{\zeta(2)\pi}{XY}\sum_{\begin{subarray}{c}N\in[X,X+Y]\\ P\nmid N\end{subarray}}^{\square}\sum_{1\leq r\leq 2\sqrt{\frac{P}{X+Y}}}U_{k-2}\left(\frac{r\sqrt{N}}{2\sqrt{P}}\right)H_{1}(r^{2}N^{2}-4PN)=B\sum_{1\leq r\leq 2\sqrt{y}}c(r)\sqrt{4y-r^{2}}\,U_{k-2}\left(\frac{r}{2\sqrt{y}}\right)+\mathrm{O}_{\varepsilon}\left((1+y)X^{-\delta^{\prime}+\varepsilon}\right).\]

### 3.1 \(h(-PN)\) and \(h(-4PN)\)

#### 3.1.1 \(h(-PN)\)

For \(PN\equiv 3\,\mathrm{mod}\,4\), the Dirichlet class number formula gives \(h(-PN)=\frac{\sqrt{PN}}{\pi}L(1,\chi_{-PN})\), so that

\[\frac{1}{\sqrt{PX}}\sum_{\begin{subarray}{c}N\in[X,X+Y]\\ PN\equiv 3\,\mathrm{mod}\,4\end{subarray}}h(-PN)=\frac{1}{\pi}\sum_{\begin{subarray}{c}N\in[X,X+Y]\\ PN\equiv 3\,\mathrm{mod}\,4\end{subarray}}\sqrt{\frac{N}{X}}\,L(1,\chi_{-PN}), \tag{2}\]

where here and below the sums over \(N\) are implicitly restricted to square-free \(N\) with \(P\nmid N\). We evaluate the \(L\)-values on the right-hand side
by truncating the Dirichlet series of the \(L\)-function and splitting the corresponding characters into primitive and non-primitive ones. For \(\chi\) a non-principal Dirichlet character of modulus \(d\), it follows from Abel summation and Polya-Vinogradov that for a truncation parameter \(T\),
\[L(1,\chi)=\sum_{n\geq 1}\frac{\chi(n)}{n}=\sum_{n=1}^{T}\frac{\chi(n)}{n}+\mathrm{ O}\left(\sqrt{d}\log d/T\right) \tag{3}\]
(see, for example, [1], page 321, Theorem 5.2.) Since \(\chi_{-PN}\) is always a non-principal Dirichlet character for square-free \(N\) with \(P\nmid N\), we have
\[\sum_{\begin{subarray}{c}N\in[X,X+Y]\\ PN=3\bmod 4\end{subarray}}\sqrt{N/X}L(1,\chi_{-PN})=\sum_{\begin{subarray}{c}N \in[X,X+Y]\\ PN=3\bmod 4\end{subarray}}\sqrt{N/X}\sum_{n=1}^{T}\frac{\left(\frac{-PN}{n} \right)}{n}+\mathrm{O}\left(\frac{Y\sqrt{PX}\log PX}{T}\right)\] \[= \sum_{\begin{subarray}{c}N\in[X,X+Y]\\ PN=3\bmod 4\end{subarray}}\sum_{m=1}^{\sqrt{T}}\frac{\sqrt{N/X}\left(\frac{-PN }{m^{2}}\right)}{m^{2}}+\sum_{\begin{subarray}{c}N\in[X,X+Y]\\ PN=3\bmod 4\end{subarray}}\sum_{\begin{subarray}{c}n=1\\ n\neq\square\end{subarray}}^{T}\frac{\sqrt{N/X}\left(\frac{-PN}{n}\right)}{n}+ \mathrm{O}\left(\frac{Y\sqrt{PX}\log PX}{T}\right)\] \[=: \mathrm{Sq}+\mathrm{NSq}+\mathrm{O}\left(\frac{Y\sqrt{PX}\log PX }{T}\right).\]
Since the sum \(\mathrm{Sq}\) contains the principal characters and \(\mathrm{NSq}\) ranges over non-principal ones, we expect \(\mathrm{Sq}\) to be the main term. Indeed,
\[\mathrm{Sq} =\sum_{m=1}^{\sqrt{T}}\frac{1}{m^{2}}\sum_{\begin{subarray}{c}N \in[X,X+Y]\\ PN=3\bmod 4\end{subarray}}\mu^{2}(N)\left(\frac{-PN}{m^{2}}\right)\left(1+ \left(\sqrt{1+\frac{N-X}{X}}-1\right)\right)\] \[=\sum_{m=1}^{\sqrt{T}}\frac{1}{m^{2}}\sum_{\begin{subarray}{c}N \in[X,X+Y]\\ PN=3\bmod 4\end{subarray}}\mu^{2}(N)\left(\frac{-PN}{m^{2}}\right)+O\left(Y \left(\sqrt{1+Y/X}-1\right)\right)\] \[=\sum_{\begin{subarray}{c}m\leq\sqrt{T}\\ (P,m)=1\end{subarray}}\frac{1}{m^{2}}\left(\sum_{N\in[X,X+Y]}\mu^{2}(N)\left( \frac{N}{m^{2}}\right)\frac{\chi_{1}(PN)-\chi_{2}(PN)}{2}\right)+O\left(\frac {Y^{2}}{X}\right)\]
where \(\chi_{1,2}\) are the characters modulo \(4\), with \(\chi_{1}\) principal. The character \(\left(\frac{N}{m^{2}}\right)\chi_{1}(N)\) is principal modulo \(2m\), and \(\left(\frac{N}{m^{2}}\right)\chi_{2}(N)\) is always non-principal modulo \(4m\). Applying Lemma 6.7 and Lemma 6.5,
\[\mathrm{Sq} =\sum_{\begin{subarray}{c}m\leq\sqrt{T}\\ (P,m)=1\end{subarray}}\frac{Y}{\zeta(2)}\frac{\eta(2m)}{2m^{2}}+O_{\varepsilon }\left(\frac{1}{m^{2}}m^{1/5+\varepsilon}X^{3/5+\varepsilon}\right)+O\left( \frac{Y^{2}}{X}\right)\] \[=Y\frac{4A}{11\zeta(2)}+\mathrm{O}_{\varepsilon}\left(\frac{Y}{P^ {2}}+\frac{Y}{\sqrt{T}}+X^{3/5+\varepsilon}+\frac{Y^{2}}{X}\right). \tag{4}\]
Next, we want to bound the term
\[\mathrm{NSq}= \sum_{\begin{subarray}{c}N\in[X,X+Y]\\ PN=3\bmod 4\end{subarray}}\sum_{\begin{subarray}{c}n=1\\ n\neq\square\end{subarray}}^{T}\frac{\sqrt{N/X}\left(\frac{-PN}{n}\right)}{n}=\sum_{\begin{subarray}{c}N\in[X,X+Y]\\ PN=3\bmod 4\end{subarray}}\sum_{\begin{subarray}{c}n=1\\ n\neq\square\end{subarray}}^{T}\frac{\left(\frac{-PN}{n}\right)}{n}+O\left(\frac{Y^{2}\sum_{n\leq T}(1/n)}{X}\right)\] \[= \sum_{\begin{subarray}{c}n=1\\ n\neq\square\end{subarray}}^{T}\frac{\left(\frac{-P}{n}\right)}{n}\left(\sum_{N\in[X,X+Y]}\left(\frac{N}{n}\right)\frac{\chi_{1}(PN)-\chi_{2}(PN)}{2}\right)+O\left(\frac{Y^{2}\log T}{X}\right)\]
For \(n\) not a square, \(\left(\frac{N}{n}\right)\) is non-principal; moreover, \(\left(\frac{N}{2}\right)\) is primitive modulo \(8\). Hence \(\left(\frac{N}{n}\right)\chi_{1,2}(N)\) are also non-principal, so applying Lemma 6.7 again,
\[\mathrm{NSq}=\sum_{\begin{subarray}{c}n=1\\ n\neq\square\end{subarray}}^{T}(1/n)\mathrm{O}_{\varepsilon}\left(n^{1/5+ \varepsilon}X^{3/5+\varepsilon}\right)+O\left(Y^{2}\log T/X\right)\ll_{ \varepsilon}T^{1/5+\varepsilon}X^{3/5+\varepsilon}+Y^{2}\log T/X. \tag{5}\]
Combining (2), (4), and (5),
\[\frac{1}{\sqrt{PX}}\sum_{N\in[X,X+Y]}h(-PN)=\frac{4A}{11\zeta(2)\pi}Y+\mathrm{ Err}_{Y,X,P,T}, \tag{6}\]
where
\[\mathrm{Err}_{Y,X,P,T}=\mathrm{O}_{\varepsilon}\left(\frac{Y}{P^{2}}+\frac{Y} {\sqrt{T}}+X^{3/5+\varepsilon}T^{1/5+\varepsilon}+\frac{Y^{2}\log T}{X}+\frac {Y\sqrt{PX}\log PX}{T}\right).\]
In particular, setting \(T:=Y^{5/6}P^{5/12}X^{-1/12}\), we get an error term matching that of Proposition 3.1.
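The truncation (3) is easy to observe numerically. The sketch below is illustrative (the values of \(P\) and \(N\) are assumptions): it compares the truncated series for \(L(1,\chi_{-PN})\) with the exact value \(\pi h(-PN)/\sqrt{PN}\), where \(h\) is computed from Dirichlet's finite formula \(h(D)=(2-\chi_{D}(2))^{-1}\sum_{0<a<|D|/2}\chi_{D}(a)\), valid for fundamental \(D<-4\).

```python
# Truncated L-series vs. the exact class number value (illustrative check of (3)).
import math
from sympy import jacobi_symbol

def kronecker(D, n):
    """Kronecker symbol (D/n) for n >= 1."""
    k = 1
    while n % 2 == 0:
        n //= 2
        if D % 2 == 0:
            return 0
        if D % 8 in (3, 5):
            k = -k
    return k * jacobi_symbol(D, n)

P, N = 13, 499          # P prime, N square-free prime to P, PN = 3 mod 4 (assumptions)
D = -P * N              # a fundamental discriminant, since PN is odd and square-free
h = sum(kronecker(D, a) for a in range(1, P*N//2 + 1)) // (2 - kronecker(D, 2))
exact = math.pi * h / math.sqrt(P * N)
for T in (10, 100, 1000):
    print(T, sum(kronecker(D, n) / n for n in range(1, T + 1)), exact)
```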
#### 3.1.2 \(h(-4PN)\)
We handle this case the same way as in the previous section. Since \(-4PN\) is always \(0\,\mathrm{mod}\,4\),
\[\frac{1}{\sqrt{PX}}\sum_{N\in[X,X+Y]}h(-4PN) =\frac{2}{\pi}\sum_{N\in[X,X+Y]}\sqrt{\frac{N}{X}}L(1,\chi_{-4PN})\] \[=\frac{2}{\pi}\sum_{N\in[X,X+Y]}\sqrt{\frac{N}{X}}\sum_{n=1}^{T} \frac{\left(\frac{-4PN}{n}\right)}{n}+\mathrm{O}\left(\frac{Y\sqrt{PX}\log PX }{T}\right)\] \[=\frac{2}{\pi}\sum_{N\in[X,X+Y]}\sum_{n=1}^{T}\frac{\left(\frac{- 4PN}{n}\right)}{n}+\mathrm{O}\left(\frac{Y\sqrt{PX}\log PX}{T}+\frac{Y^{2}\log T }{X}\right).\]
Again, we can separate into principal and non-principal characters:
\[\sum_{N\in[X,X+Y]}\sum_{n=1}^{T}\frac{\left(\frac{-4PN}{n}\right)}{n}=\sum_{\begin{subarray}{c}n=1\\ n\,\mathrm{odd}\\ n\neq\square\end{subarray}}^{T}\sum_{N\in[X,X+Y]}\frac{\left(\frac{-PN}{n}\right)}{n}+\sum_{\begin{subarray}{c}m=1\\ m\,\mathrm{odd}\end{subarray}}^{\sqrt{T}}\sum_{N\in[X,X+Y]}\frac{\left(\frac{-PN}{m^{2}}\right)}{m^{2}}\]
Applying Lemma 6.7 and Lemma 6.5 as in the previous section, we conclude
\[\frac{1}{\sqrt{PX}}\sum_{N\in[X,X+Y]}h(-4PN)=\frac{2}{\pi\zeta(2)}\left(\sum_{\begin{subarray}{c}m\,\mathrm{odd}\\ (m,P)=1\end{subarray}}^{\sqrt{T}}\frac{\eta(m)}{m^{2}}\right)Y+\mathrm{Err}_{Y,X,P,T}=Y\frac{18A}{11\zeta(2)\pi}+\mathrm{Err}_{Y,X,P,T},\]
which finishes the proof of Proposition 3.1 in combination with (6).
### 3.2 \(H_{1}(r^{2}N^{2}-4PN)\)
The aim of this section is to prove the following:
**Proposition 3.3**.: _Let \(P\neq 2\) be a prime, let \(r\in\mathbb{N}\), and let \(X>Y>0\) be such that \(r^{2}(X+Y)<4P\). Then:_
\[\sum_{\begin{subarray}{c}N\in[X,X+Y]\\ P\nmid N\end{subarray}}^{\square}H_{1}(r^{2}N^{2}-4PN)=\frac{YBc(r)}{\pi\zeta(2)}\sqrt{4PX-r^{2}X^{2}}\] \[\quad+\operatorname{O}\left((YPX)^{\varepsilon}\left((YPX)^{3/5}+\frac{Y^{2}\sqrt{P}}{\sqrt{X}}+rY^{3/2}X^{1/2}+X\sqrt{P}Y^{5/18}+Y^{8/9}\sqrt{PX}\right)\right).\]
For a divisor \(d^{2}|r^{2}N-4P\) such that \(\frac{r^{2}N^{2}-4PN}{d^{2}}=0/1\operatorname{mod}4\), we once again have by the class number formula that
\[h\left(\frac{r^{2}N^{2}-4PN}{d^{2}}\right)=\frac{\sqrt{4PN-r^{2}N^{2}}}{\pi d }L(1,\chi_{\frac{r^{2}N^{2}-4PN}{d^{2}}}).\]
Thus, for \(1\leq r\leq 2\sqrt{P/(X+Y)}\),
\[\sum_{\begin{subarray}{c}N\in[X,X+Y]\\ P\nmid N\end{subarray}}^{\Box}H_{1}(r^{2}N^{2}-4PN)=\sum_{d^{2}\leq 4P}\sum_{ \begin{subarray}{c}N\in[X,X+Y]\\ N\in\widetilde{\mathcal{A}}_{r,d}\end{subarray}}\frac{L(1,\chi_{(r^{2}N^{2}-4PN )/d^{2}})}{\pi d}\sqrt{4PN-r^{2}N^{2}},\]
where we let
\[\widetilde{\mathcal{A}}_{r,d}:=\{N\in\mathbb{Z}:N\ \square\text{-free},\ P\nmid N,\ d^{2}|r^{2}N-4P,\text{ and }(r^{2}N^{2}-4PN)/d^{2}\equiv 0/1\,\mathrm{mod}\,4\}.\]
We begin by analyzing the set \(\widetilde{\mathcal{A}}_{r,d}\).
#### 3.2.1 Remainder analysis
Suppose first that \(r\) is odd. For \(N\) square-free, \(d^{2}|r^{2}N-4P\) implies that \(d\) is odd, and \(\frac{r^{2}N^{2}-4PN}{d^{2}}\equiv 0/1\operatorname{mod}4\) automatically holds. Thus, \(\widetilde{\mathcal{A}}_{r,d}\) is the set of square-free integer solutions to the congruence
\[r^{2}N=4P\operatorname{mod}d^{2},P\nmid N.\]
For odd \(r\) and \(d\), this has a solution if and only if \(P\nmid d\) and \((r,d)=1\), yielding
\[\widetilde{\mathcal{A}}_{r,d}=\{N\in\mathcal{A}_{r,d}:N\;\square\text{ - free},P\nmid N\},\]
where we let
\[\mathcal{A}_{r,d}:=\begin{cases}\{n\in\mathbb{Z}:n\equiv 4Pr^{-2}\,\mathrm{mod}\,d^{2}\}&\text{if }(d,r)=(d,P)=(d,2)=1;\\ \emptyset&\text{otherwise.}\end{cases}\]
Now assume \(r\) is even. From \(d^{2}|4P-r^{2}N<4P\) and since \(r^{2}X<4P\), we know \(r,d<P\), i.e., \((P,r)=(P,d)=1\). Let \(r:=2l\). Then \(\frac{r^{2}N^{2}-4PN}{d^{2}}=0/1\,\mathrm{mod}\,4\) is equivalent to
\[4l^{2}N=4P+td^{2}\,\mathrm{mod}\,4d^{2}\quad\text{for some }t\text{ with }tN=0/1\,\mathrm{mod}\,4. \tag{7}\]
If \(d\) is odd, reducing (7) modulo \(4\) shows that \(t\equiv 0\,\mathrm{mod}\,4\), and (7) simplifies to
\[l^{2}N=P\operatorname{mod}d^{2}.\]
This has \(1\) solution \(\operatorname{mod}d^{2}\) for \((l,d)=1\) and no solutions otherwise. Suppose now \(d=2b\), so (7) becomes
\[l^{2}N=P+tb^{2}\,\mathrm{mod}\,4b^{2},\quad tN=0/1\,\mathrm{mod}\,4. \tag{8}\]
Since we restrict to \(N\) square-free, we can disregard the case \(N\equiv 0\,\mathrm{mod}\,4\). If \(N=2\,\mathrm{mod}\,4\), then \(t\) must be even and (8) has no solutions \(\mathrm{mod}\,2\). For \(N\) odd, \(tN=0/1\,\mathrm{mod}\,4\) holds if and only if \(t=0/N\,\mathrm{mod}\,4\), and we have an equivalence
\[(8)\ \&\ N\ \text{is odd}\ \iff\ \begin{bmatrix}N(l^{2}-b^{2})=P\,\mathrm{mod}\,4b^{2},\ N\ \text{odd}\\ Nl^{2}=P\,\mathrm{mod}\,4b^{2},\ N\ \text{odd}.\end{bmatrix}\]
If \((l,b)>1\), this has no solutions since \(P\nmid(l,b)\). Otherwise, if \((l,b)=1\), there are three cases:
* if \(l,b\) are odd, there is a solution \(N\equiv Pl^{-2}\,\mathrm{mod}\,4b^{2}\);
* if \(l\) is even, \(b\) is odd, there is a solution \(N\equiv P(l^{2}-b^{2})^{-1}\,\mathrm{mod}\,4b^{2}\);
* if \(l\) is odd, \(b\) is even, there is a solution \(N\equiv Pl^{-2}\,\mathrm{mod}\,4b^{2}\) and a distinct solution \(N=P(l^{2}-b^{2})^{-1}\,\mathrm{mod}\,4b^{2}\).
Note \(N\) is automatically odd in the above three cases.
In summary, for any choice of \(r\) and \(d\),
\[\widetilde{\mathcal{A}}_{r,d}:=\{N\in\mathcal{A}_{r,d}:P\nmid N,N\,\square\ \text{- free}\},\]
where the set \(\mathcal{A}_{r,d}\) is given by a congruence condition modulo \(d^{2}\); namely,
\[\mathcal{A}_{r,d}:=\{n\in\mathbb{Z}:n\,\mathrm{mod}\,d^{2}\in\mathcal{R}_{d,r },\}\]
where \(\mathcal{R}_{d,r}\) is a subset of residues \(\mathrm{mod}\,\,d^{2}\) coprime to \(d\) that satisfies:
\[|\mathcal{R}_{d,r}|=\begin{cases}1&\text{if }(d,r)=1,\ 2\nmid dr,\ P\nmid d;\\ 1&\text{if }(d,r)=1,\ 2|r;\\ 1&\text{if }(d,r)=2,\ 2||d;\\ 2&\text{if }(d,r)=2,\ 4|d;\\ 0&\text{otherwise.}\end{cases}\]
Furthermore, letting \(s:=\frac{r^{2}N^{2}-4PN}{d^{2}}\) for some \(N\in\mathcal{A}_{r,d}\), the above analysis also proves that:
* For \((d,r)=2,2||d,2||r\), \(s\) is always even;
* For \((d,r)=2,2||d,4|r\), \(s\) is always odd;
* For \((d,r)=2,4|d\), the two residues in \(\mathcal{R}_{r,d}\) produce \(s\) of different parity.
We will call a pair \((r,d)\)_admissible_ if \(\mathcal{R}_{r,d}\) is non-empty.
#### 3.2.2 Truncation
We have established that
\[\sum_{\begin{subarray}{c}N\in[X,X+Y]\\ P\nmid N\end{subarray}}^{\square}H_{1}(r^{2}N^{2}-4PN)=\sum_{d^{2}\leq 4P} \sum_{\begin{subarray}{c}N\in[X,X+Y]\\ N\in\mathcal{A}_{r,d}\\ P\nmid N\end{subarray}}^{\square}\frac{L(1,\chi_{\frac{r^{2}N^{2}-4PN}{d^{2}} })}{\pi d}\sqrt{4PN-r^{2}N^{2}}. \tag{9}\]
Note that for \(N\in[X,X+Y]\) (and assuming \(r^{2}(X+Y)<4P\)),
\[\sqrt{4PN-r^{2}N^{2}}-\sqrt{4PX-r^{2}X^{2}} =\sqrt{4P-r^{2}N}\left(\sqrt{N}-\sqrt{X}\right)-\sqrt{4PX-r^{2}X^{2} }\left(1-\sqrt{1-\frac{r^{2}(N-X)}{4P-r^{2}X}}\right)\] \[\ll\sqrt{PX}\left(\sqrt{1+\frac{Y}{X}}-1\right)-\sqrt{X}\sqrt{4P- r^{2}X}\left(1-\sqrt{1-\frac{r^{2}Y}{4P-r^{2}X}}\right)\] \[\ll\sqrt{P}Y/\sqrt{X}+r\sqrt{X}\sqrt{Y}.\]
Combining this with Siegel's bound,
\[(9)=\sum_{d^{2}\leq 4P}\sum_{\begin{subarray}{c}N\in[X,X+Y]\\ N\in\mathcal{A}_{r,d}\\ P\nmid N\end{subarray}}^{\square}\frac{L(1,\chi_{(r^{2}N^{2}-4PN)/d^{2}})}{\pi d}\sqrt{4PX-r^{2}X^{2}}+\mathrm{O}\left(Y^{2}P^{1/2+\varepsilon}X^{-1/2+\varepsilon}+rY^{3/2}X^{1/2+\varepsilon}P^{\varepsilon}\right).\]
By (3), the main term can be truncated as
\[\sqrt{4PX-r^{2}X^{2}}\sum_{d^{2}\leq 4P}\frac{1}{\pi d}\sum_{\begin{subarray}{c}N\in[X,X+Y]\\ N\in\mathcal{A}_{r,d}\\ P\nmid N\end{subarray}}^{\square}\sum_{n=1}^{T}\frac{\left(\frac{(r^{2}N^{2}-4PN)/d^{2}}{n}\right)}{n}+\mathrm{O}\left(\sqrt{PX}\sum_{d^{2}\leq 4P}\sum_{N\in[X,X+Y]}\frac{\sqrt{PX}\log PX}{d^{2}T}\right),\]
so
\[(9) =\sum_{d^{2}\leq 4P}\frac{\sqrt{4PX-r^{2}X^{2}}}{\pi d}\sum_{\begin{subarray}{c}N\in[X,X+Y]\\ N\in\mathcal{A}_{r,d}\\ P\nmid N\end{subarray}}^{\square}\sum_{n=1}^{T}\frac{\left(\frac{(r^{2}N^{2}-4PN)/d^{2}}{n}\right)}{n}+\mathrm{O}\left(\frac{YPX\log PX}{T}+\frac{Y^{2}P^{1/2+\varepsilon}}{X^{1/2-\varepsilon}}+rY^{3/2}X^{1/2+\varepsilon}P^{\varepsilon}\right)\] \[=\frac{\sqrt{4PX-r^{2}X^{2}}}{\pi}\sum_{d^{2}\leq 4P}\sum_{n\leq T}\frac{\mathcal{S}_{d,n,r}}{nd}+\mathrm{O}\left(\frac{YPX\log PX}{T}+\frac{Y^{2}P^{1/2+\varepsilon}}{X^{1/2-\varepsilon}}+rY^{3/2}X^{1/2+\varepsilon}P^{\varepsilon}\right), \tag{10}\]
where we define \(\mathcal{S}_{d,n,r}\) as follows:
**Definition 3.4**.: _Let \(d,n,r\) be positive integers, let \(X>Y>0\), and let \(P\) be a prime such that \(4P>r^{2}(X+Y)\). Define_
\[\mathcal{S}_{d,n,r}:=\sum_{\begin{subarray}{c}N\in[X,X+Y]\\ N\in\mathcal{A}_{r,d}\\ P\nmid N\end{subarray}}\mu^{2}(N)\left(\frac{N}{n}\right)\left(\frac{(r^{2}N-4P)/d^{2}}{n}\right).\]
_Note that \(\mathcal{S}_{d,n,r}=0\) unless \((r,d)\) is an admissible pair._
To evaluate \(\mathcal{S}_{d,n,r}\), one cannot simply split these sums into a sum of trivial and non-trivial characters: unlike the sums \(\sum_{a\bmod m}\chi(a)\) considered previously, the shifted sum \(\sum_{a\bmod m}\chi(a)\chi(a+h)\) is not necessarily zero for a non-principal \(\chi\), so we cannot expect cancellation. Our strategy is as follows:
* For \(d\ll Y^{\tau}\) for some \(0<\tau<1\) and \(n\ll Y^{\sigma}\) for some \(0<\sigma<1\), we compute \(\mathcal{S}_{d,n,r}\) explicitly by examining equidistribution of square-free numbers in residue classes \(\bmod n\);
* for \(d\ll Y^{\tau}\) for some \(0<\tau<1\) and \(n\gg Y^{\sigma}\), we upper-bound the sum \(\mathcal{S}_{d,n,r}\) using Poisson summation;
* for \(d\gg Y^{\tau}\), we use a crude upper bound coming from Siegel's bound on the class number.
In the remainder of this subsection, we carry out these steps and prove the following:
**Proposition 3.5**.: _Let \(r\) be a positive integer, let \(X>Y>0\), and let \(P\) be a prime such that \(4P>r^{2}(X+Y)\). Then:_
\[\sum_{d^{2}\leq 4P}\sum_{n\leq T}\frac{\mathcal{S}_{d,n,r}}{nd}=\frac{YBc(r)}{\zeta(2)}+\mathrm{O}\left(\sqrt{Y}T^{1/4+\varepsilon}+\sqrt{X}Y^{5/18}+Y^{8/9}(PX)^{\varepsilon}\right).\]
This implies Proposition 3.3 by choosing the appropriate value of \(T\):
Proof of Proposition 3.3.: Plugging Proposition 3.5 into identity (10), we get that
\[\sum_{\begin{subarray}{c}N\in[X,X+Y]\\ P\nmid N\end{subarray}}^{\square}H_{1}(r^{2}N^{2}-4PN) =\frac{YBc(r)}{\zeta(2)}\frac{\sqrt{4PX-r^{2}X^{2}}}{\pi}+\mathrm{O}\left(\frac{YPX\log PX}{T}+\frac{Y^{2}P^{1/2+\varepsilon}}{X^{1/2-\varepsilon}}+rY^{3/2}X^{1/2+\varepsilon}P^{\varepsilon}\right)\] \[+\mathrm{O}\left(\sqrt{PX}\sqrt{Y}T^{1/4+\varepsilon}+X\sqrt{P}Y^{5/18}+Y^{8/9}(PX)^{1/2+\varepsilon}\right).\]
Setting
\[T:=(YPX)^{2/5}\]
gives the claimed error term.
#### 3.2.3 Small \(d\), small \(n\).
First, we address the case when the parameters \(d\) and \(n\) are small. This is the main term of the sum.
**Proposition 3.6**.: _For any parameters \(0\leq\tau,\sigma\leq 1\) and integer \(r\geq 1\),_
\[\sum_{n\leq Y^{\sigma}}\sum_{d<Y^{\tau}}\frac{\mathcal{S}_{d,n,r}}{nd}=\frac{ YBc(r)}{\zeta(2)}+\mathrm{O}\left(\sqrt{X}Y^{\sigma/2}+Y^{3\sigma/2+\tau+ \varepsilon}+Y^{1-2\tau}+Y^{1-\sigma/5}\right).\]
The function \(\mathcal{S}_{d,n,r}\) can be approximated by multiplicative functions defined in Definition 6.1 as follows.
**Lemma 3.7**.: _Let \(d,n,r\geq 1\) be integers with \(P\nmid n\), such that \((d,r)\) is an admissible pair. Then_
\[\mathcal{S}_{d,n,r}=\frac{Y}{\zeta(2)}\frac{\eta(d^{2}n)}{\varphi(d^{2}n)} \widetilde{\varphi}_{r,d}(g)\theta_{r}(n^{\prime})+\mathrm{O}\left(\sqrt{Xn}/ d+dn^{3/2+\varepsilon}\right),\]
_where \(g:=(d^{\infty},n)\), and \(n^{\prime}:=n/g\)._
Proof.: Let \(\rho\in\mathcal{R}_{r,d}\) be a residue modulo \(d^{2}\). The character \(\left(\frac{(r^{2}x-4P)/d^{2}}{n}\right)\) is a function of \(x\,\mathrm{mod}\,fnd^{2}\), where \(f=4\) if \(n\) is even and \(f=1\) if \(n\) is odd. Thus, we have:
\[\sum_{\begin{subarray}{c}N\in[X,X+Y]\\ N\equiv\rho\,\mathrm{mod}\,d^{2}\end{subarray}}\mu^{2}(N)\left(\frac{N}{n}\right)\left(\frac{(r^{2}N-4P)/d^{2}}{n}\right)=\sum_{\begin{subarray}{c}a\,\mathrm{mod}\,fd^{2}n\\ a\equiv\rho\,\mathrm{mod}\,d^{2}\end{subarray}}\left(\frac{a}{n}\right)\left(\frac{(r^{2}a-4P)/d^{2}}{n}\right)\sum_{\begin{subarray}{c}N\in[X,X+Y]\\ N\equiv a\,\mathrm{mod}\,fd^{2}n\\ P\nmid N\end{subarray}}\mu^{2}(N). \tag{11}\]
Since \(P>X/4\), we can replace the condition \(P\nmid N\) with an error term of \(\operatorname{O}(n)\). Evidently, the terms with \((a,n)>1\) above vanish; moreover, for \(\rho\) above, we have \((\rho,d)=(a,d)=1\). Thus it suffices to consider residues \(a\) coprime to \(fd^{2}n\), and we can apply Theorem 6.4, yielding
\[(11)=\frac{Y}{\zeta(2)}\frac{\eta(d^{2}n)}{f\varphi(d^{2}n)}\sum_{\begin{subarray}{c}a\bmod d^{2}fn\\ a\equiv\rho\bmod d^{2}\end{subarray}}\left(\frac{a}{n}\right)\left(\frac{(r^{2}a-4P)/d^{2}}{n}\right)+\operatorname{O}\left(\sqrt{Xn}/d+d^{1+\varepsilon}n^{3/2+\varepsilon}\right).\]
Finally, Lemma 3.8 applied to \(m=fn\) completes the proof, since by Lemma 6.3, \(\widetilde{\varphi}(fn)=f\widetilde{\varphi}(n)\).
**Lemma 3.8**.: _Let \(m\) be an integer with \(P\nmid m\) and \(v_{2}(m)\neq 1,2\), and let \((r,d)\) be an admissible pair. Then:_
\[\sum_{\begin{subarray}{c}a\bmod d^{2}m\\ a\in\mathcal{R}_{r,d}\bmod d^{2}\end{subarray}}\left(\frac{a}{m}\right) \left(\frac{(r^{2}a-4P)/d^{2}}{m}\right)=\widetilde{\varphi}_{r,d}(g)\theta_{ r}(m^{\prime}),\]
_where \(g:=(d^{\infty},m)\), and \(m^{\prime}:=m/g\)._
Proof.: Let \(\rho\) be an integer that reduces to an element of \(\mathcal{R}_{d,r}\) modulo \(d^{2}\). Then:
\[\sum_{\begin{subarray}{c}a\bmod d^{2}m\\ a\equiv\rho\bmod d^{2}\end{subarray}}\left(\frac{a}{m}\right)\left(\frac{(r^{2}a-4P)/d^{2}}{m}\right) =\sum_{\begin{subarray}{c}a\bmod d^{2}m\\ a\equiv\rho\bmod d^{2}\end{subarray}}\left(\frac{a}{m^{\prime}}\right)\left(\frac{(r^{2}a-4P)/d^{2}}{m^{\prime}}\right)\left(\frac{a}{g}\right)\left(\frac{(r^{2}a-4P)/d^{2}}{g}\right)\] \[=\sum_{\begin{subarray}{c}b\bmod m^{\prime}\end{subarray}}\left(\frac{b}{m^{\prime}}\right)\left(\frac{r^{2}b-4P}{m^{\prime}}\right)\sum_{\begin{subarray}{c}a\bmod d^{2}g\\ a\equiv\rho\bmod d^{2}\end{subarray}}\left(\frac{a}{g}\right)\left(\frac{(r^{2}a-4P)/d^{2}}{g}\right)\] \[=\theta_{r}(m^{\prime})\sum_{\begin{subarray}{c}a\bmod d^{2}g\\ a\equiv\rho\bmod d^{2}\end{subarray}}\left(\frac{a}{g}\right)\left(\frac{(r^{2}a-4P)/d^{2}}{g}\right)\]
where we applied Lemma 6.2 and the assumption \(P\nmid m^{\prime}\) in the last step. It remains to sum the above identity over \(\rho\in\mathcal{R}_{r,d}\), which yields \(\widetilde{\varphi}_{r,d}(g)\) by definition.
Proof of Proposition 3.6.: Since \(P>X/4\), \(Y=o(X)\), and \(n\ll Y^{\sigma}\), the condition \((P,n)=1\) will always hold asymptotically. Thus, by Lemma 3.7 and Lemma 6.6,
\[\sum_{\begin{subarray}{c}n\leq Y^{\sigma}\\ d\leq Y^{\tau}\end{subarray}}\frac{\mathcal{S}_{d,n,r}}{nd} =\sum_{\begin{subarray}{c}n\leq Y^{\sigma}\\ d\leq Y^{\tau}\end{subarray}}\frac{Y}{\zeta(2)}\frac{\eta(d^{2}n)}{\varphi(d^{2}n)}\frac{\widetilde{\varphi}_{r,d}(g)\theta_{r}(n^{\prime})}{nd}+\sum_{\begin{subarray}{c}n\leq Y^{\sigma}\\ d\leq Y^{\tau}\end{subarray}}\operatorname{O}\left(\frac{\sqrt{Xn}}{nd^{2}}+\frac{d^{1+\varepsilon}n^{3/2+\varepsilon}}{nd}\right)\] \[=\frac{YBc(r)}{\zeta(2)}+\operatorname{O}\left(\sqrt{X}Y^{\sigma/2}+Y^{3\sigma/2+\tau+\varepsilon}+Y^{1-2\tau}+Y^{1-\sigma/5}\right)\]
as claimed.
#### 3.2.4 Small \(d\), large \(n\).
**Proposition 3.9**.: _Let \(P\neq 2\) be a prime, let \((r,d)\) be an admissible pair of positive integers, and let \(X>Y>0\) be such that \((X+Y)r^{2}<4P\). Let \(\sigma,\tau\) be parameters with \(0<\sigma,\tau<1\). Then as \(X\to\infty\),_
\[\sum_{n\in[Y^{\sigma},T]}\sum_{d<Y^{\tau}}\frac{\mathcal{S}_{d,n,r}}{nd}\ll \log T\log Y\sqrt{X}+Y^{1-\sigma/2+\varepsilon}+\sqrt{Y}T^{1/4+\varepsilon} \log Y.\]
**Lemma 3.10**.: _Let \(\pi(x):\mathbb{Z}\to\mathbb{C}\) be an \(m\)-periodic function with \(|\pi(x)|\leq 1\), and let \(\ell\in\mathbb{R}_{+}\). Then:_
\[\max_{I:|I|=\ell}\Big{|}\sum_{x\in I}\pi(x)\Big{|}\ll\sqrt{\mathcal{M}\ell}+ \frac{\mathcal{M}\ell}{m},\]
_where_
\[\mathcal{M}:=\max_{n\bmod m}\Big{|}\sum_{b\bmod m}e(bn/m)\pi(b)\Big{|}.\]
Proof.: We exploit that for an \(m\)-periodic function \(\pi\) and a Schwartz function \(f\), one has by Poisson summation that
\[\sum_{n\in\mathbb{Z}}\pi(n)f(n)=\sum_{b\bmod m}\pi(b)\sum_{a\in\mathbb{Z}}f( ma+b)=\sum_{b\bmod m}\frac{\pi(b)}{m}\sum_{n\in\mathbb{Z}}e\left(bn/m\right) \hat{f}\left(\frac{n}{m}\right). \tag{12}\]
We construct a suitable function \(f\) as follows. Let \(\psi(x):\mathbb{R}\to\mathbb{R}_{+}\) be smooth, compactly supported, with \(\left\|\psi\right\|_{L_{1}}=1\) and \(\operatorname{supp}(\psi)\subseteq[-1,1]\). The function
\[\psi_{\varepsilon}(x):=\frac{\psi(x/\varepsilon)}{\varepsilon},\widehat{\psi _{\varepsilon}}(t)=\widehat{\psi}(\varepsilon t)\]
then also satisfies \(\left\|\psi_{\varepsilon}\right\|_{L_{1}}=1\), has \(\left\|\widehat{\psi_{\varepsilon}}\right\|_{L_{\infty}}=\left\|\widehat{\psi}\right\|_{L_{\infty}}\), and \(\operatorname{supp}(\psi_{\varepsilon})\subseteq[-\varepsilon,\varepsilon]\). For \(c\in\mathbb{R}\), let
\[f_{\ell,c,\varepsilon}(t):=\left(\chi_{[0,1]}*\psi_{\varepsilon}\right)\left( \frac{t-c}{\ell}\right),\]
where \(\chi_{[0,1]}\) is the characteristic function of the interval \([0,1]\). The function \(f=f_{\ell,c,\varepsilon}\) is a smooth function satisfying the following properties:
1. \(f(t)\in[0,1]\);
2. \(\operatorname{supp}(f)\subseteq[c-\varepsilon\ell,c+\ell+\varepsilon\ell]\);
3. \(f(t)=1\) for \(t\in[c+\ell\varepsilon;c+\ell-\ell\varepsilon]\);
4. \(\left|\hat{f}(t)\right|=\ell\cdot\left|\widehat{\chi}_{[0,1]}(t\ell)\right| \cdot\left|\widehat{\psi}(\varepsilon t\ell)\right|\)
5. \(\hat{f}(t)\ll\ell,\ \hat{f}(t)\ll_{k,\psi}\frac{\ell}{(\varepsilon\ell t)^{k}}\) for \(k\geq 2\).
Choosing \(c\) to be the starting point of \(I\) and \(\varepsilon=\varepsilon_{\ell}\) to be chosen later, properties \(1,2\), and \(3\) imply that
\[\sum_{x\in I}\pi(x)=\sum_{n\in\mathbb{Z}}\pi(n)f(n)+\operatorname{O}\left( \varepsilon\ell\right).\]
From (12),
\[\sum_{n\in\mathbb{Z}}\pi(n)f(n)=\frac{1}{m}\sum_{n\in\mathbb{Z}}\hat{f}\left( \frac{n}{m}\right)\sum_{b\bmod m}e(bn/m)\pi(b)\ll\frac{\max_{n\bmod m}|\sum_ {b\bmod m}e(bn/m)\pi(b)|}{m}\sum_{n\in\mathbb{Z}}\left|\hat{f}(n/m)\right|\]
Choosing an integer parameter \(X_{c}=\max\{1,m/\varepsilon\ell\}\) and using property 5, we thus get a bound
\[\sum_{x\in I}\pi(x) =\sum_{n\in\mathbb{Z}}\pi(n)f(n)+\mathrm{O}(\ell\varepsilon)\ll\frac{\mathcal{M}}{m}\sum_{n\in\mathbb{Z}}\left|\hat{f}(n/m)\right|+\mathrm{O}(\varepsilon\ell)\ll\frac{\mathcal{M}}{m}\left(\ell X_{c}+\ell\sum_{n\geq X_{c}}\frac{m^{k}}{\varepsilon^{k}n^{k}\ell^{k}}\right)+\mathrm{O}(\ell\varepsilon)\] \[\ll_{k}\frac{\mathcal{M}}{m}\left(\ell X_{c}+\frac{\ell m^{k}}{\varepsilon^{k}\ell^{k}X_{c}^{k-1}}\right)+\mathrm{O}(\ell\varepsilon)\ll\frac{\mathcal{M}}{m}(\ell+m/\varepsilon)+\mathrm{O}(\ell\varepsilon).\]
Finally, letting \(\varepsilon:=\sqrt{\mathcal{M}/\ell}\) yields the desired result.
**Lemma 3.11**.: _Let \(A,B,C,D\) be integers, let \(m\in\mathbb{N}\), and let \(\chi(x)=\left(\frac{x}{m}\right)\) be the Kronecker symbol, viewed as a character of modulus \(m^{\prime}\), where \(m^{\prime}=4m\) if \(m\equiv 2\,\mathrm{mod}\,4\), and \(m^{\prime}=m\) otherwise. Let \(\pi(x):=\chi(Ax+B)\chi(Cx+D)\). Then:_
\[\mathcal{M}:=\max_{n\,\mathrm{mod}\,m^{\prime}}\Big{|}\sum_{x\,\mathrm{mod}\,m^{\prime}}e(xn/m^{\prime})\pi(x)\Big{|}\ll\mathfrak{m}\]
_where \(\mathfrak{m}\) is given as follows. Let \(h=(AD-BC,m)\), and let \(\mathcal{P}\) be a set of primes given by_
\[\mathcal{P}:=\{p|m:2\nmid\upsilon_{p}(m),p\nmid h,p\neq 2,\text{ and }(A,C,p)=1\}.\]
_Then:_
\[\mathfrak{m}:=\frac{m}{\prod_{p\in\mathcal{P}}\sqrt{p}/2}.\]
Proof.: Since \(\left(\frac{x}{2}\right)=\left(\frac{x}{8}\right)\), by replacing \(m\) with \(4m\) when \(m\equiv 2\,\mathrm{mod}\,4\), it suffices to prove the statement in the case \(m=m^{\prime}\). We claim that for such \(m\), \(\mathcal{M}\) is multiplicative in \(m\). Indeed, let \(m=\prod_{p}p^{\alpha_{p}}\), let \(\chi_{p}:=\left(\frac{x}{p}\right)^{\alpha_{p}}\) be the Kronecker symbol, and let
\[\pi_{p}(x):=\chi_{p}(Ax+B)\chi_{p}(Cx+D),\]
so \(\pi(x)=\prod_{p}\pi_{p}(x_{p}).\) Let \(y_{p}\) be the inverse of \(\prod_{q\neq p}q^{\alpha_{q}}\) modulo \(p^{\alpha_{p}}\), and let \(y\) be an integer that reduces to \(y_{p}\,\mathrm{mod}\,p^{\alpha_{p}}\) for all \(p\) (so in particular, \((y,m)=1\)). Then for any \(x\),
\[xy\sum_{p}\prod_{q\neq p}q^{\alpha_{q}}\equiv x\,\mathrm{mod}\,m,\]
so,
\[\prod_{p}\sum_{x_{p}\,\mathrm{mod}\,p^{\alpha_{p}}}e(x_{p}(ny)/p^{\alpha_{p}}) \pi_{p}(x_{p})=\sum_{\begin{subarray}{c}x\,\mathrm{mod}\,m\\ (x_{p}:=x\,\mathrm{mod}\,p^{\alpha_{p}})\end{subarray}}e\Big{(}n\Big{(}y \sum_{p}x_{p}/p^{\alpha_{p}}\Big{)}\Big{)}\prod_{p}\pi_{p}(x_{p})=\sum_{x\, \mathrm{mod}\,m}e(xn/m)\pi(x).\]
Since we could choose \(ny\) to have any set of simultaneous reductions modulo the primes \(p|m\), the maximum over \(n\) for \(\pi\) will be the product of the corresponding maxima for the \(\pi_{p}\)'s.
Assume now that \(m=p^{\alpha}\) for some prime \(p\). If \(\alpha\) is even or \(p=2\) or \(p|h\), we apply the trivial bound, so assume \(\alpha\) and \(p\) are odd and \(p\nmid h\).
**Case I:** Suppose \((AC,p)=1\). Then:
\[\Big{|}\sum_{x\,\mathrm{mod}\,m}e(xn/m)\chi(Ax+B)\chi(Cx+D)\Big{|} =\Big{|}\sum_{x\,\mathrm{mod}\,m}e(xn/m)\chi(x+BA^{-1})\chi(x+DC^{ -1})\Big{|}\] \[=\Big{|}\sum_{x\,\mathrm{mod}\,m}e(xn/m)\chi(x)\chi(x+DC^{-1}-BA^{ -1})\Big{|}\]
where the inverses are taken mod \(p^{\alpha}\). Notice that for any \(s\) with \((s,m)=1\) and \(ts=1\,\mathrm{mod}\,m\) and for any shift \(h\),
\[\left|\sum_{x\,\mathrm{mod}\,m}\chi(x)\chi(x+h)e\left(\frac{nx}{m}\right)\right| =\left|\sum_{x\,\mathrm{mod}\,m}\chi(sx+sh)e\left(\frac{(nt)(sx)}{m}\right) \right|=\left|\sum_{x\,\mathrm{mod}\,m}\chi(x)\chi(x+sh)e\left(\frac{(nt)x}{m} \right)\right|\]
Thus, \(\max_{n}\left|\sum_{y\,\mathrm{mod}\,m}\chi(y)\chi(y+\alpha)e\left(\frac{ny}{m }\right)\right|\) depends only on \((m,h)\), and since we assumed \((A,m)=(C,m)=1\), we are evaluating
\[\max_{n\,\mathrm{mod}\,m}\Big{|}\sum_{x\,\mathrm{mod}\,m}e(xn/m)\chi(x)\chi(x+h) \Big{|}\]
where \(h=(BA^{-1}-DC^{-1},m)=(AD-BC,m)\). The inner sum depends on the \(p\)-adic valuation of \(n\):
* When \(n\equiv 0\,\mathrm{mod}\,p^{\alpha}\), then using \(p\nmid h\) and Lemma 6.2, \[\left|\sum_{x\,\mathrm{mod}\,p^{\alpha}}e(xn/p^{\alpha})\chi(x)\chi(x+h)\right|=\left|\sum_{x\,\mathrm{mod}\,p^{\alpha}}\chi(x)\chi(x+1)\right|=\left|\theta(p^{\alpha})\right|=p^{\alpha-1};\]
* When \(\upsilon_{p}(n)=\alpha-1,n=:n^{\prime}p^{\alpha-1}\), \[\sum_{x\,\mathrm{mod}\,p^{\alpha}}e(nx/p^{\alpha})\chi(x)\chi(x+1) =p^{\alpha-1}\sum_{x\,\mathrm{mod}\,p}e(xn^{\prime}/p)\chi(x)\chi(x+1)=p^{\alpha-1}\sum_{x\,\mathrm{mod}\,p}^{*}e(n^{\prime}x/p)\chi(1+x^{-1})\] \[=\pm p^{\alpha-1}\sum_{x\,\mathrm{mod}\,p}^{*}e(x^{-1}/p)\chi(x+1)\leq 2p^{\alpha-1/2},\] (here \(\sum^{*}\) denotes summation over coprime residues).
* Finally, when \(\upsilon_{p}(n)<\alpha-1\), the sum is \(0\) since the Legendre symbol is \(p\) periodic for \(p\neq 2\).
In summary, the maximum over \(n\) is \(2p^{\alpha-1/2}\) when \(p\nmid h\) and \((2AC,p)=1\).
**Case II:** Next, suppose \((A,p)=1\) but \(p|C\). If \(p|D\), the sum vanishes, so without loss of generality, \(p\nmid D\), and
\[\Big{|}\sum_{x\,\mathrm{mod}\,m}e(xn/m)\chi(Ax+B)\chi(Cx+D)\Big{|}=\Big{|}\sum _{x\,\mathrm{mod}\,m}e(xn/m)\chi(x)\Big{|}.\]
* When \(n\equiv 0\,\mathrm{mod}\,p^{\alpha}\), \(\sum_{x\,\mathrm{mod}\,m}\left(\frac{x}{m}\right)e\left(\frac{nx}{m}\right)= \sum_{x\,\mathrm{mod}\,m}\left(\frac{x}{m}\right)=0\).
* When \(\upsilon_{p}(n)=\alpha-1\), \(n=:n^{\prime}p^{\alpha-1}\), \[\left|\sum_{x\,\mathrm{mod}\,p^{\alpha}}e(nx/p^{\alpha})\chi(x)\right|=p^{ \alpha-1}\middle|\sum_{x\,\mathrm{mod}\,p}e(xn^{\prime}/p)\chi(x)\right|\leq p^ {\alpha-1/2}\] (Gauss sum).
* When \(\upsilon_{p}(n)<\alpha-1\), the sum is \(0\).
In summary, the maximum over \(n\) is always at most \(p^{\alpha-1/2}\) when \(p\) divides exactly one of \(A,C\).
We need not consider the case \(p|A,C\) since then \(p|h\), so this concludes the proof.
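For intuition, the square-root cancellation underlying Lemmas 3.10 and 3.11 is visible already for small moduli. A small numerical illustration (not from the paper) for the Legendre symbol modulo a prime, where \(\mathfrak{m}=2\sqrt{m}\):

```python
# Worst short sum of pi(x) = chi(x) chi(x+3) over intervals of length ell,
# compared against the bound sqrt(m_frak * ell) + m_frak * ell / m of Lemma 3.10.
m = 10007                      # prime modulus (illustrative)
h = 3                          # shift; the "determinant" AD - BC = 3 is coprime to m

def chi(x):                    # Legendre symbol mod m via Euler's criterion
    x %= m
    if x == 0:
        return 0
    return 1 if pow(x, (m - 1) // 2, m) == 1 else -1

ell = 500
vals = [chi(x) * chi(x + h) for x in range(m + ell)]
prefix = [0]
for v in vals:
    prefix.append(prefix[-1] + v)
worst = max(abs(prefix[x + ell] - prefix[x]) for x in range(m))
m_frak = 2 * m**0.5
print(worst, (m_frak * ell)**0.5 + m_frak * ell / m)
```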
**Lemma 3.12**.: _Let \(Y\leq X\), let \(m,w,D\in\mathbb{N}\) and \(q\in\mathbb{Z}\). Let \(\chi=\left(\frac{\cdot}{m}\right)\). Let \(r\) be an integer with \(wr=q\operatorname{mod}D\) and \((r,D)=1\). Then:_
\[\sum_{\begin{subarray}{c}N=X\\ N=r\operatorname{mod}D\end{subarray}}^{X+Y}\mu^{2}(N)\chi(N)\chi((wN-q)/D) \ll\sqrt{X}+\frac{\mathfrak{m}}{mD}Y+\frac{\log Y\sqrt{\mathfrak{m}Y}}{\sqrt{D }},\]
_where_
\[\mathfrak{m}:=\frac{m}{\prod_{p\in\mathcal{P}}\sqrt{p}/2},\]
\[\mathcal{P}:=\{p|m:2\nmid v_{p}(m),p\nmid(q,m),p\neq 2,\text{ and }(D,w,p)=1\}.\]
Proof.: Let \(l\in\mathbb{Z}\) be such that \(wr=q+lD\). Then:
\[\sum_{\begin{subarray}{c}N=X\\ N=r\,\mathrm{mod}\,D\end{subarray}}^{X+Y}\mu^{2}(N)\chi(N)\chi\left(\frac{wN-q}{D}\right) =\sum_{\begin{subarray}{c}N=X\\ N=r\,\mathrm{mod}\,D\end{subarray}}^{X+Y}\mu^{2}(N)\chi(N)\chi\left(\frac{w(N-r)}{D}+l\right)\] \[=\sum_{\delta^{2}\leq X+Y}\mu(\delta)\sum_{\begin{subarray}{c}s\in[X/\delta^{2},(X+Y)/\delta^{2}]\\ s\delta^{2}=r\,\mathrm{mod}\,D\end{subarray}}\chi(s\delta^{2})\chi\left(\frac{w(s\delta^{2}-r)}{D}+l\right)\] \[\ll\sum_{\begin{subarray}{c}\delta<\sqrt{2X}\\ (\delta,D)=1\\ (\delta,m)=1\end{subarray}}\left|\sum_{\begin{subarray}{c}t\in[\frac{X-r}{D},\frac{X+Y-r}{D}]\\ t=-rD^{-1}\,\mathrm{mod}\,\delta^{2}\end{subarray}}\chi(Dt+r)\chi(wt+l)\right|\]
Here we restrict to \((\delta,m)=1\) since terms with \((\delta,m)>1\) clearly vanish, and to \((D,\delta)=1\) because otherwise \(s\delta^{2}=r\operatorname{mod}D\) cannot hold (as we assumed \((r,D)=1\)).
For each \(\delta\leq\sqrt{2X}\) with \((\delta,m)=(\delta,D)=1\), pick an integer solution \(t_{\delta}\) to \(t_{\delta}D=-r\operatorname{mod}\delta^{2}\), and let
\[I_{\delta}:=[\frac{X-r}{\delta^{2}D}-\frac{t_{\delta}}{\delta^{2}},\frac{X+Y- r}{\delta^{2}D}-\frac{t_{\delta}}{\delta^{2}}]\]
be an interval of length \(Y/(\delta^{2}D)\). Changing variables again and applying the trivial bound for \(\delta>R\) for some parameter \(R\leq\sqrt{2X}\), we can rewrite the above by letting \(t=\delta^{2}x+t_{\delta}\) for \(x\in I_{\delta}\), which yields
\[\sum_{\begin{subarray}{c}\delta\leq R\\ (\delta,D)=1\\ (\delta,m)=1\end{subarray}}\left|\sum_{x\in I_{\delta}}\chi(D(\delta^{2}x+t_{\delta})+r)\chi(w(\delta^{2}x+t_{\delta})+l)\right|+\operatorname{O}\left(\sum_{\delta\in[R,\sqrt{2X}]}\left(1+\frac{Y}{D\delta^{2}}\right)\right)\] \[\leq \sum_{\begin{subarray}{c}\delta\leq R\\ (\delta,D)=1\\ (\delta,m)=1\end{subarray}}\left|\sum_{x\in I_{\delta}}\chi(D\delta^{2}x+(Dt_{\delta}+r))\chi(w\delta^{2}x+(wt_{\delta}+l))\right|+\operatorname{O}\left(\sqrt{X}+\frac{Y}{DR}\right).\]
Note that the determinant
\[D\delta^{2}\cdot(wt_{\delta}+l)-(Dt_{\delta}+r)w\delta^{2}=\delta^{2}(Dl-rw)=- q\delta^{2}\]
satisfies \((q\delta^{2},m)=(q,m)\) for all the \(\delta\) in the above sum. Thus by Lemma 3.10 and Lemma 3.11 and using the condition \((\delta,m)=1\),
\[\sum_{x\in I_{\delta}}\chi(D\delta^{2}x+(Dt_{\delta}+r))\chi(w\delta^{2}x+(wt_{ \delta}+l))\ll\frac{\sqrt{\mathfrak{m}Y}}{\delta\sqrt{D}}+\frac{\mathfrak{m}Y} {\delta^{2}Dm}\]
Summing over \(\delta\), we conclude that
\[\sum_{\begin{subarray}{c}N=X\\ N=r\,\mathrm{mod}\,D\end{subarray}}^{X+Y}\mu^{2}(N)\chi(N)\chi\left(\frac{wN-q }{D}\right)\ll\frac{\sqrt{\mathfrak{m}Y}\log R}{\sqrt{D}}+\frac{Y\mathfrak{m}} {Dm}+\sqrt{X}+\frac{Y}{DR}\]
or, taking \(R=Y\),
\[\sum_{\begin{subarray}{c}N=X\\ N=r\,\mathrm{mod}\,D\end{subarray}}^{X+Y}\mu^{2}(N)\chi(N)\chi\left(\frac{wN-q }{D}\right)\ll\frac{\sqrt{\mathfrak{m}Y}\log Y}{\sqrt{D}}+\frac{Y\mathfrak{m}} {Dm}+\sqrt{X}.\]
Applying the lemma to \(D=d^{2},w=r^{2}\), and \(q=4P\), and using that the terms with \(P\mid N\) contribute an \(O(1)\) error term for \(P\gg X\), we get an immediate corollary:
**Corollary 3.13**.: _Let \(Y\leq X\), let \((r,d)\) be an admissible pair of integers, let \(n\in\mathbb{N}\), and let \(P\) be a prime such that \(r^{2}(X+Y)<4P\)._
_Let_
\[\mathcal{P}_{n}=\{p|n:2\nmid\upsilon_{p}(n),\,\text{and}\,\,p\neq 2,P\},\]
_and let_
\[\mathfrak{n}_{n}:=\frac{n}{\prod_{p\in\mathcal{P}_{n}}\sqrt{p}/2}.\]
_Then:_
\[\mathcal{S}_{d,n,r}\ll\sqrt{X}+\frac{\mathfrak{n}_{n}}{nd^{2}}Y+\frac{\log Y \sqrt{\mathfrak{n}_{n}Y}}{d}.\]
Proof of Proposition 3.9.: From Corollary 3.13,
\[\sum_{n\in[Y^{\sigma},T]}\sum_{d<Y^{\tau}}\frac{\mathcal{S}_{d,n,r}}{nd} \ll\sum_{n\in[Y^{\sigma},T]}\sum_{d<Y^{\tau}}\frac{\sqrt{X}}{nd}+\frac{\mathfrak{n}_{n}Y}{n^{2}d^{3}}+\frac{\log Y\sqrt{Y}\sqrt{\mathfrak{n}_{n}}}{nd^{2}}\] \[\ll\log T\log Y\sqrt{X}+Y\sum_{n\in[Y^{\sigma},T]}\frac{\mathfrak{n}_{n}}{n^{2}}+\sqrt{Y}\log Y\sum_{n\in[Y^{\sigma},T]}\frac{\sqrt{\mathfrak{n}_{n}}}{n}.\]
Every integer \(n\) is representable uniquely in the form
\[n=a^{2}b\;2^{\alpha}P^{\beta},\]
where \(b\) is square-free and \((a,2P)=(b,2P)=1.\) In terms of this representation, \(\mathcal{P}_{n}=\{p|b\}\), so with the divisor bound,
\[\mathfrak{n}_{n}\leq a^{2}b^{1/2+\varepsilon}\;2^{\alpha}P^{\beta}.\]
Thus
\[\sum_{n\leq T}\frac{\sqrt{\mathfrak{n}_{n}}}{n}\ll\sum_{\begin{subarray}{c}\alpha,\beta:\\ 2^{\alpha}P^{\beta}\leq T\end{subarray}}\sum_{a\leq\sqrt{\frac{T}{2^{\alpha}P^{\beta}}}}\sum_{b\leq\frac{T}{a^{2}2^{\alpha}P^{\beta}}}\frac{1}{2^{\alpha/2}P^{\beta/2}\,a\,b^{3/4-\varepsilon}}\ll\sum_{a\leq\sqrt{T}}(1/a)\left(T/a^{2}\right)^{1/4+\varepsilon}\ll T^{1/4+\varepsilon}.\]
Similarly,
\[\sum_{n>Y^{\sigma}}\frac{\mathfrak{n}_{n}}{n^{2}}\ll\sum_{\alpha,\beta}\sum_{a \in\mathbb{N}}\frac{1}{2^{\alpha}P^{\beta}a^{2}}\sum_{b>Y^{\sigma}/a^{2}}b^{- 3/2+\varepsilon}\ll\sum_{a\leq Y^{\sigma/2}}\frac{1}{a^{2}}\frac{a}{Y^{\sigma /2-\varepsilon}}+\sum_{a>Y^{\sigma/2}}\frac{1}{a^{2}}\ll Y^{-\sigma/2+ \varepsilon}.\]
To summarize,
\[\sum_{n\in[Y^{\sigma},T]}\sum_{d<Y^{\tau}}\frac{\mathcal{S}_{d,n,r}}{nd}\ll \log T\log Y\sqrt{X}+Y^{1-\sigma/2+\varepsilon}+\sqrt{Y}T^{1/4+\varepsilon} \log Y\]
as desired.
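The bound \(\sum_{n\leq T}\sqrt{\mathfrak{n}_{n}}/n\ll T^{1/4+\varepsilon}\) used above can also be observed numerically; a brief sketch (illustrative; \(P\) is taken large so that \(P\mid n\) never occurs in the range):

```python
# Growth of sum_{n <= T} sqrt(n_frak)/n, compared against T^{1/4}.
import math
from sympy import factorint

P = 10**9 + 7                   # large prime, so no n <= T below is divisible by P

def n_frak(n):
    val = float(n)
    for p, e in factorint(n).items():
        if e % 2 == 1 and p not in (2, P):
            val /= math.sqrt(p) / 2
    return val

for T in (10**2, 10**3, 10**4):
    s = sum(math.sqrt(n_frak(n)) / n for n in range(1, T + 1))
    print(T, s, T**0.25)
```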
#### 3.2.5 Large \(d\) and error terms
For \(\sqrt{P}\gg d\gg Y^{\tau},\) we use Siegel's bound:
\[\sum_{\sqrt{P}\gg d\gg Y^{\tau},n}\mathcal{S}_{d,n,r}/dn\ll\sum_{\sqrt{P}\gg d \gg Y^{\tau}}\left(\frac{Y}{d^{2}}+1\right)\frac{(PX)^{\varepsilon}}{d^{1+ \varepsilon}}\ll(PX)^{\varepsilon}(Y^{1-2\tau}+\log Y).\]
It remains to collect the error terms. The cumulative error term from Proposition 3.6, Proposition 3.9 and the bound above is
\[\left(\log T\log Y\sqrt{X}+Y^{1-\sigma/2+\varepsilon}+\sqrt{Y}T^{ 1/4+\varepsilon}\log Y\right) +\left(\sqrt{X}Y^{\sigma/2}+Y^{3\sigma/2+\tau+\varepsilon}+Y^{1 -2\tau}+Y^{1-\sigma/5}\right)\] \[+(PX)^{\varepsilon}(Y^{1-2\tau}+\log Y).\]
Assuming \(P^{\varepsilon}\ll\sqrt{X},\) and since we will pick \(T\) with \(T\gg Y\), we can rewrite this as
\[\log T\log Y\sqrt{X}+\sqrt{Y}T^{1/4+\varepsilon}+\sqrt{X}Y^{\sigma/2}+Y^{3 \sigma/2+\tau+\varepsilon}+Y^{1-2\tau}+Y^{1-\sigma/5}+(PX)^{\varepsilon}Y^{1 -2\tau}.\]
Finally, letting \(\tau=1/18,\)\(\sigma=5/9,\) and assuming \(\log T\ll Y\), this becomes
\[\sqrt{Y}T^{1/4+\varepsilon}+\sqrt{X}Y^{5/18}+Y^{8/9}(PX)^{\varepsilon}\]
as claimed.
### 3.3 Large \(r\), \(P\)-divisible levels and the \(P\) term
We bound trivially the \(r\)'s not covered in Proposition 3.2, that is,
\[\frac{\zeta(2)\pi}{YX}\sum_{\begin{subarray}{c}N\in[X,X+Y]\\ P\nmid N\end{subarray}}^{\square}\sum_{2\sqrt{\frac{P}{X+Y}}<r\leq 2\sqrt{\frac{P}{N}}}\left|U_{k-2}\left(\frac{r\sqrt{N}}{2\sqrt{P}}\right)\right|H_{1}(r^{2}N^{2}-4PN)\ll_{\varepsilon}k\,(PX)^{\varepsilon}\left(\frac{1}{P}+X^{\delta_{1}-\delta_{2}/2}\right)\]

The term \(\delta_{k=2}P\) produces the main term \(-D_{k}\delta_{k=2}\pi y\) upon averaging, while the levels divisible by \(P\) (of which there are \(\mathrm{O}(1+Y/P)\)) contribute an admissible error. Combining this with Propositions 3.1 and 3.2, we conclude that

\[\sum_{N\in[X,X+Y]}^{\square}\sum_{f\in H^{\text{new}}(N,k)}\lambda_{f}(P)\sqrt{P}\varepsilon(f)=\frac{XY}{D_{k}\pi\zeta(2)}M_{k}\left(\frac{P}{X}\right)+\mathrm{O}_{\varepsilon}\left(XY\left(X^{-\delta^{\prime}+\varepsilon}+\frac{1}{P}\right)\right). \tag{14}\]

It remains to count the newforms in the denominator: in short intervals,

\[\sum_{N\in[X,X+Y]}^{\square}\sum_{f\in H^{\text{new}}(N,k)}1=\frac{(k-1)XY}{12\zeta(2)}\prod_{p}\left(1-\frac{1}{p^{2}+p}\right)+\mathrm{O}_{\varepsilon}\left(kYX^{\varepsilon}+kX^{7/5+\varepsilon}+kY^{2}\right) \tag{15}\]
by Lemma 6.8.
Hence
\[\frac{1}{\sum_{N\in[X,X+Y]}^{\square}\sum_{f\in H^{\text{new}}(N,k)}1}=\frac{12\cdot\zeta(2)\prod_{p}\left(1-\frac{1}{p^{2}+p}\right)^{-1}}{(k-1)(XY)}+\mathrm{O}\left(\frac{1}{kYX^{2-\varepsilon}}+\frac{1}{kY^{2}X^{3/5}}+\frac{1}{kX^{2}}\right).\]
Now, (14) implies that the numerator of the left-hand side of Theorem 1 is bounded by \(\ll XYky=kPY.\) Thus
\[\frac{\sum_{N\in[X,X+Y]}^{\square}\sum_{f\in H^{\text{new}}(N,k)}\lambda_{f}(P)\sqrt{P}\varepsilon(f)}{\sum_{N\in[X,X+Y]}^{\square}\sum_{f\in H^{\text{new}}(N,k)}1}=\frac{M_{k}(y)}{D_{k}}\cdot\frac{12\prod_{p}\left(1-\frac{1}{p^{2}+p}\right)^{-1}}{(k-1)\pi}+\mathrm{O}\left(\frac{P}{X^{2-\varepsilon}}+\frac{P}{TX^{3/5}}+\frac{PY}{X^{2}}\right)\]
which completes the proof of Theorem 1.
## 4 Properties of the weight \(2\) density function
In this section, we analyze the asymptotics of the function \(M_{2}\) from Theorem 1 in the case of weight \(k=2.\)
**Lemma 4.1**.: _Let_
\[\mathrm{A}(t):=\pi/4-\sum_{s=1}^{\lfloor 1/t\rfloor}t\sqrt{1-s^{2}t^{2}}\]
_for \(0\leq t\leq 1.\) Then_
\[\mathrm{A}(t)=\frac{t}{2}+t^{3/2}P(1/t)+O(t^{5/2}),\]
_where_
\[P(M)=-\sqrt{2}\zeta(-1/2,\{M\}).\]
Proof.: We will argue geometrically.
Let \(t:=1/M\), \(N:=\lfloor M\rfloor\), \(\delta:=M-N\), and let \(s:=\delta/M.\) Then \(\mathrm{A}(t)\) can be interpreted as the excess area above the Darboux sum for the graph of \(f(x)=\sqrt{1-x^{2}}\) with spacing \(t\):
We let \(\alpha_{L}\) and \(\alpha_{i},i\in\{1,\ldots,N\},\) be the angles between the lines connecting \(0\) and \((kt,\sqrt{1-k^{2}t^{2}})\) for \(k\in\{1,\ldots,N\},\) ordered in decreasing order of the \(x\) coordinate (see Picture (a)). We split the area we want to compute into triangles (blue) and arcs (purple), referred to as \(A_{1}\) and \(A_{2}\), respectively.
Since the heights of the blue triangles add up to \(1\), the total area \(A_{1}\) is given by:
\[A_{1}=\frac{t}{2}+\frac{s-t}{2}\sqrt{1-(1-s)^{2}}=\frac{t}{2}-\frac{(t-s)\sqrt{s }}{\sqrt{2}}+\mathrm{O}(t^{5/2}).\]
Now we compute the purple area. For a sector with an angle \(\alpha\), Taylor approximation dictates that the volume is given by:
\[\mathrm{Vol}(\alpha)=\frac{\alpha^{3}}{12}+\mathrm{O}(\alpha^{5}).\]
Using Taylor approximation again, note that
\[\cos\alpha=1-a\implies\mathrm{Vol}(\alpha)=\frac{\sqrt{2}a^{3/2}}{6}+\mathrm{ O}(a^{5/2}). \tag{16}\]
Thus from the inner product,
\[\mathrm{Vol}(\alpha_{L})=\frac{\sqrt{2}s^{3/2}}{6}+\mathrm{O}(t^{5/2}).\]
Finally, we address \(\mathrm{Vol}(\alpha_{1}),\ldots,\mathrm{Vol}(\alpha_{N})\). From the inner product,
\[\cos\alpha_{n+1}=(1-s-n/M)(1-s-(n+1)/M)+\sqrt{(1-(1-s-n/M)^{2})(1-(1-s-(n+1)/M)^{2})}.\]
Setting \(s_{n}:=s+n/M\), this can be rewritten as
\[\cos\alpha_{n+1}=(\sqrt{s_{n+1}}-\sqrt{s_{n}})^{2}+s_{n}s_{n+1}+2\sqrt{s_{n}s _{n+1}}\left(\sqrt{1-\frac{s_{n}}{2}}\sqrt{1-\frac{s_{n+1}}{2}}-1\right).\]
For \(n\in\{0,\ldots,N-1\}\), let
\[x_{n+1}:=(\sqrt{s_{n+1}}-\sqrt{s_{n}})^{2}=\frac{1}{M}\frac{1}{(\sqrt{n+1+ \delta}+\sqrt{n+\delta})^{2}}\asymp\frac{t}{n+1},\]
\[y_{n}:=s_{n}s_{n+1}+2\sqrt{s_{n}s_{n+1}}\left(\sqrt{1-\frac{s_{n}}{2}}\sqrt{1 -\frac{s_{n+1}}{2}}-1\right).\]
Note that for parameters \(c\in[0,1]\) and \(\varepsilon\), Taylor approximation yields
\[c(c+\varepsilon)+2\sqrt{c(c+\varepsilon)}\left(\sqrt{1-\frac{c}{2}}\sqrt{1- \frac{c+\varepsilon}{2}}-1\right)=-\varepsilon^{2}\left(\frac{\sqrt{c^{2}+ \varepsilon c}/4}{\sqrt{c^{2}+\varepsilon c}+c+\varepsilon/2}\right)- \varepsilon^{2}\frac{\sqrt{c^{2}+\varepsilon c}}{8(2-c)}+\mathrm{O}( \varepsilon^{3}).\]
Hence \(y_{n}=\mathrm{O}(t^{2})\), and in particular \(y_{n}\ll x_{n}\). From (16), the volume we are to compute is given by
\[\sum_{n=1}^{N}\frac{\sqrt{2}(x_{n}+y_{n})^{3/2}}{6}+\mathrm{O}((x_{n})^{5/2})= \frac{\sqrt{2}}{6}\sum_{n=1}^{N}x_{n}^{3/2}+\frac{\sqrt{2}}{6}\sum_{n=1}^{N} \frac{3x_{n}^{2}y_{n}+3x_{n}y_{n}^{2}+y_{n}^{3}}{x_{n}^{3/2}+(x_{n}+y_{n})^{3/2 }}+\mathrm{O}(t^{5/2}).\]
For parameters \(y\ll x\),
\[(x+y)^{3/2}=x^{3/2}+\frac{3x^{2}y+3xy^{2}+y^{3}}{x^{3/2}+(x+y)^{3/2}}.\]
Note that
\[x_{n}-\frac{1}{4Mn}\ll\frac{1}{Mn^{2}},\]
and that
\[y_{n}-\frac{1}{4M^{2}(2-n/M)}\ll\frac{1}{M^{3}}.\]
From this one can see that
\[\frac{3x_{n}^{2}y_{n}+3x_{n}y_{n}^{2}+y_{n}^{3}}{x_{n}^{3/2}+(x_{n}+y_{n})^{3/ 2}}=\frac{3(x_{n}^{\prime})^{2}y_{n}^{\prime}+3x_{n}^{\prime}(y_{n}^{\prime}) ^{2}+(y_{n}^{\prime})^{3}}{(x_{n}^{\prime})^{3/2}+(x_{n}^{\prime}+y_{n}^{ \prime})^{3/2}}+\mathrm{O}\left(\frac{1}{M^{5/2}}k^{5/2}\right),\]
where
\[x_{n}^{\prime}=\frac{1}{4Mn},y_{n}^{\prime}=\frac{1}{4M^{2}(2-n/M)}.\]
From the Euler-Maclaurin formula, we can approximate the sum for \(x_{n}^{\prime},y_{n}^{\prime}\) by an integral
\[\sum_{n=1}^{N}\frac{3(x_{n}^{\prime})^{2}y_{n}^{\prime}+3x_{n}^{\prime}(y_{n}^ {\prime})^{2}+(y_{n}^{\prime})^{3}}{(x_{n}^{\prime})^{3/2}+(x_{n}^{\prime}+y_ {n}^{\prime})^{3/2}}=\frac{1}{8M^{2}}\int_{0}^{1}\frac{z^{2}-6z+12}{\sqrt{z}(( 2-z)^{3}+(2(2-z))^{3/2})}dz+\mathrm{O}(t^{5/2}).\]
The function under the integral has an explicit antiderivative given by
\[\frac{2}{\sqrt{z}}+\frac{2\sqrt{2}\,(z-1)}{\sqrt{z}\sqrt{2-z}}.\]
In conclusion,
\[\sum_{n=1}^{N}\frac{3x_{n}^{2}y_{n}+3x_{n}y_{n}^{2}+y_{n}^{3}}{x_{n}^{3/2}+(x _{n}+y_{n})^{3/2}}=\frac{1}{4M^{2}}+\mathrm{O}(t^{5/2}).\]
It remains to compute \(\sum_{n=1}^{N}x_{n}^{3/2}\):
\[\sum_{n}(\sqrt{s_{n+1}}-\sqrt{s_{n}})^{3} =\sum_{n=0}^{N-1}s_{n+1}^{3/2}-3(s_{n}+1/M)\sqrt{s_{n}}+3(s_{n+1}-1/M)\sqrt{s_{n+1}}-s_{n}^{3/2}\] \[=-s_{0}^{3/2}+s_{N}^{3/2}-3(s_{0}+1/M)\sqrt{s_{0}}+3(s_{N}-1/M)\sqrt{s_{N}}-\frac{6}{M}\sum_{n=1}^{N-1}\sqrt{s_{n}}\] \[=\frac{1}{M^{3/2}}\left(-\delta^{3/2}-3(\delta+1)\sqrt{\delta}+(\delta+N)^{3/2}+3(\delta+N-1)\sqrt{\delta+N}-6\sum_{n=1}^{N-1}\sqrt{\delta+n}\right).\]
Now,
\[\lim_{N\to\infty}(\delta+N)^{3/2}+3(\delta+N-1)\sqrt{\delta+N}-6\sum_{n=1}^{N-1}\sqrt{\delta+n}=-6\zeta(-1/2,1+\delta),\]
so
\[\sum_{n}(\sqrt{s_{n+1}}-\sqrt{s_{n}})^{3}=\frac{1}{M^{3/2}}\left(-\delta^{3/2}-3(\delta+1)\sqrt{\delta}-6\zeta(-1/2,1+\delta)\right)-\sum_{n\geq N}(\sqrt{s_{n+1}}-\sqrt{s_{n}})^{3}.\]
Finally,
\[\sum_{n\geq N}(\sqrt{s_{n+1}}-\sqrt{s_{n}})^{3}=\sum_{n\geq N}\frac{1}{M^{3/2}}\frac{1}{(\sqrt{n+\delta}+\sqrt{n+1+\delta})^{3}}=\frac{1}{4M^{2}}+\operatorname{O}(t^{5/2})\]
by the Euler-Maclaurin formula.
Collecting together all the terms concludes the proof.
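A quick numerical check of Lemma 4.1 (illustrative): compare \(\mathrm{A}(t)\) computed from its definition with the expansion \(t/2+t^{3/2}P(1/t)\), using `mpmath.zeta(s, a)` for the Hurwitz zeta function.

```python
# A(t) = pi/4 - Darboux sum, against the expansion of Lemma 4.1.
import math
from mpmath import zeta

def A(t):
    return math.pi/4 - sum(t * math.sqrt(1 - (s*t)**2)
                           for s in range(1, int(1/t) + 1))

for M in (10.3, 25.7, 100.49):          # non-integer M, so {M} > 0 (assumption)
    t = 1 / M
    approx = t/2 - math.sqrt(2) * t**1.5 * float(zeta(-0.5, M % 1))
    print(M, A(t), approx)              # difference should be O(t^{5/2})
```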
Proof of Theorem 2.: For convenience of notation, we analyze \(m(y):=M_{2}(y)/D_{2}\).
The function \(m(y)\) can be re-expressed in terms of the coefficients \(Q(d)\), since
\[\sum_{1\leq r\leq 2\sqrt{y}}c(r)\sqrt{y-r^{2}/4}=\sum_{\begin{subarray}{c}1\leq d \leq 2\sqrt{y}\\ d\;\Box_{\text{free}}\end{subarray}}Q(d)\sum_{1\leq n\leq 2\sqrt{y}/d}\sqrt{y-d^{2} n^{2}/4}.\]
For a fixed value of \(d\), one has:
\[\sum_{1\leq n\leq 2\sqrt{y}/d}\sqrt{y-d^{2}n^{2}/4}\leq\int_{0}^{2\sqrt{y}/d} \sqrt{y-d^{2}t^{2}/4}dt=\frac{\pi y}{2d}.\]
We let
\[F(y,d):=-\sum_{1\leq n\leq 2\sqrt{y}/d}\sqrt{y-d^{2}n^{2}/4}+\frac{\pi y}{2d}\geq 0\]
be the error term in this approximation. Then:
\[m(y) =A\sqrt{y}-\pi y-2B\sum_{\begin{subarray}{c}1\leq d\leq 2\sqrt{y}\\ d\ \square\text{-free}\end{subarray}}\left(F(y,d)Q(d)-\frac{\pi yQ(d)}{2d}\right)\] \[=A\sqrt{y}-\pi y+y(B\pi)\sum_{\begin{subarray}{c}1\leq d\leq 2\sqrt{y}\\ d\ \square\text{-free}\end{subarray}}\frac{Q(d)}{d}-2B\sum_{\begin{subarray}{c}1\leq d\leq 2\sqrt{y}\\ d\ \square\text{-free}\end{subarray}}F(y,d)Q(d)\]
Clearly,
\[\sum_{d>2\sqrt{y}}\frac{Q(d)}{d}\ll y^{-1}.\]
Moreover, one sees from the Euler product that
\[B\pi y\sum_{d\;\Box_{\text{free}}}\frac{Q(d)}{d}=\pi y,\]
and hence
\[m(y)=A\sqrt{y}-2B\sum_{\begin{subarray}{c}1\leq d\leq 2\sqrt{y}\\ d\ \Box\text{free}\end{subarray}}Q(d)F(d,y)+\text{O}(1).\]
Changing variables, \(F(y,d)=\frac{\sqrt{y}}{t}\,\mathrm{A}(t)\) with \(t:=\frac{d}{2\sqrt{y}},\) where \(\mathrm{A}(t)\) is the function from Lemma 4.1. Applying the Lemma,
\[m(y) =A\sqrt{y}-2B\sum_{\begin{subarray}{c}1\leq d\leq 2\sqrt{y}\\ d\ \Box\text{free}\end{subarray}}Q(d)\frac{2y}{d}\left(\frac{d}{4\sqrt{y}}+ \frac{d^{3/2}}{2^{3/2}y^{3/4}}P(\frac{d}{2\sqrt{y}})+\text{O}\left(\frac{d^{5/ 2}}{y^{5/4}}\right)\right)+\text{O}(1)\] \[=A\sqrt{y}-B\sqrt{y}\sum_{\begin{subarray}{c}1\leq d\leq 2\sqrt{y}\\ d\ \Box\text{free}\end{subarray}}Q(d)-y^{1/4}\sqrt{2}B\sum_{ \begin{subarray}{c}1\leq d\leq 2\sqrt{y}\\ d\ \Box\text{free}\end{subarray}}Q(d)\sqrt{d}P\left(\frac{d}{2\sqrt{y}}\right)+ \text{O}(1).\]
Now,
\[\sum_{d\ \Box\text{free}}Q(d)=A/B,\]
and hence
\[m(y)=-y^{1/4}\sqrt{2}B\sum_{\begin{subarray}{c}1\leq d\leq 2\sqrt{y}\\ d\ \square\text{-free}\end{subarray}}Q(d)\sqrt{d}\,P\left(\frac{d}{2\sqrt{y}}\right)+\mathrm{O}(1),\]
which concludes the proof.
## 5 Geometric Averaging
In this section we complete the proof of Theorem 3 and analyze the asymptotic behavior of the dyadic average.
Proof of Theorem 3.: Let \(\delta_{1}\) be such that \(P\ll X^{1+\delta_{1}}\), and let \(\delta_{2}\) be a parameter chosen depending on \(\delta_{1}\) to satisfy the conditions of Theorem 1 with a power-saving error term. Let \(Y\thicksim X^{1-\delta_{2}}\) be chosen so that \(Y\) divides \(Z-X;\) let \(X=X_{1},X_{2},\ldots,X_{G}=Z-Y\) be given by \(X_{g}=X+(g-1)Y,\) where \(G:=(Z-X)/Y.\) From (14),
\[\sum_{N\in[X,Z]}^{\square}\sum_{f\in H^{\text{new}}(N,k)}\lambda_{f}(P)\sqrt{P}\varepsilon(f) =\sum_{g}\sum_{N\in[X_{g},X_{g+1}]}^{\square}\sum_{f\in H^{\text{new}}(N,k)}\lambda_{f}(P)\sqrt{P}\varepsilon(f)\] \[=\sum_{g}\frac{X_{g}Y}{D_{k}\pi\zeta(2)}M_{k}\left(\frac{P}{X_{g}}\right)+\mathrm{o}\left(X^{2}\right).\]
Since Theorem 1 gives a bound of \(\text{O}(k+kP/X)\) on \(M_{k}\),
\[\sum_{g}X_{g}YM_{k}(P/X_{g})-\int_{X}^{Z}uM_{k}(P/u)\,du \ll\sum_{g}\int_{X_{g}}^{X_{g+1}}\left|uM_{k}(P/u)-X_{g}M_{k}(P/X_{g})\right|du\] \[\ll kXY(P/X+1)=o(kX^{2}).\]
Next,
\[\int_{X}^{Z}uM_{k}(P/u)\,du=X^{2}\int_{1}^{c}uM_{k}(y/u)\,du.\]
Finally, from (15), we can again compute that
\[\frac{1}{\sum_{N\in[X,Z]}^{\square}\sum_{f\in H^{\text{new}}(N,k)}1}=\frac{24\cdot\zeta(2)\prod_{p}\left(1-\frac{1}{p^{2}+p}\right)^{-1}}{(c^{2}-1)(k-1)X^{2}}+\mathrm{O}\left(\frac{1}{kX^{12/5-\varepsilon}}\right).\]
Thus,
\[\frac{\sum_{N\in[X,Z]}^{\square}\sum_{f\in H^{\text{new}}(N,k)}\lambda_{f}(P)\sqrt{P}\varepsilon(f)}{\sum_{N\in[X,Z]}^{\square}\sum_{f\in H^{\text{new}}(N,k)}1} =\frac{24\zeta(2)\prod_{p}\left(1-\frac{1}{p^{2}+p}\right)^{-1}}{\pi\zeta(2)D_{k}(c^{2}-1)(k-1)}\int_{1}^{c}uM_{k}(y/u)\,du+o_{c}(1)\] \[=\frac{2}{(c^{2}-1)}\int_{1}^{c}uM_{k}(y/u)\,du+o_{c}(1)\]
Finally, we analyze the asymptotic behavior of the function above.
Proof of Theorem 4.: Let \(f(x)=\delta_{0<x<1}\sqrt{1-x}/\sqrt{x}\). The Mellin transform of \(f\) is expressible via the Beta function as
\[\tilde{f}(s) =\int_{0}^{\infty}f(x)x^{s}\frac{dx}{x}=\int_{0}^{1}\sqrt{1-x}\,x^{s-1/2}\frac{dx}{x}=B\left(\frac{3}{2},s-1/2\right)\] \[=\frac{\Gamma(s-1/2)\Gamma(3/2)}{\Gamma\left(s+1\right)}=\frac{\sqrt{\pi}}{2}\frac{\Gamma(s-1/2)}{\Gamma\left(s+1\right)}\]
By Stirling approximation,
\[\left|\tilde{f}(\sigma+it)\right|\sim|t|^{-3/2},\]
so we can apply Mellin inversion, which yields
\[F(x):=\sum_{r\geq 1}rc(r)f(r^{2}/x)=\sum_{r\geq 1}rc(r)\frac{1}{2\pi i}\int_{ \mathrm{Re}(s)=3}\tilde{f}(s)(x/r^{2})^{s}ds=\frac{1}{2\pi i}\int_{\mathrm{ Re}(s)=3}L(2s-1)\tilde{f}(s)x^{s}ds,\]
where
\[L(s):=\sum_{r}c(r)r^{-s}=\prod_{p}\left(1+\left(1+\frac{p^{2}}{p^{4}-2p^{2}-p +1}\right)\frac{p^{-s}}{1-p^{-s}}\right).\]
For a function \(\Phi:(0,\infty)\rightarrow\mathbb{R}\) of compact support, let
\[F_{\Phi}(x):=\left(\int_{0}^{\infty}F\left(\frac{x}{u}\right)\Phi(u)u^{2} \frac{du}{u}\right)/\tilde{\Phi}(2).\]
By definition,
\[(D_{2}B)\cdot F_{\Phi}(x)=\left(\int_{0}^{\infty}\sum_{1\leq r\leq\sqrt{x/u}}c(r)\sqrt {(x/u)-r^{2}}\,\Phi(u)u^{2}\frac{du}{u}\right)/\int_{0}^{\infty}\Phi(u)u^{2} \frac{du}{u}.\]
Then by Mellin inversion,
\[F_{\Phi}(x)=\frac{1}{2\pi i}\int_{\mathrm{Re}(s)=3}L(2s-1)\tilde{f}(s)\tilde{ \Phi}(s+2)x^{s}ds. \tag{17}\]
Now, we can further simplify \(L\) as:
\[L(s)=\frac{\zeta(s)\zeta(s+2)}{\zeta(2s+4)}\prod_{p}\left(1+\frac{-1+p+2p^{2}}{(1 -p-2p^{2}+p^{4})(1+p^{2+s})}\right).\]
The Euler product part in the expression above evaluated at \(2s-1\) converges uniformly for \(\operatorname{Re}(s)>\sigma\) for any \(\sigma>-1\), so it is analytic and uniformly bounded in this region. The Mellin transform \(\widetilde{\Phi}\) is entire because of the support assumption, and if \(\Phi\) is smooth, the decay of \(\widetilde{\Phi}\) in the \(t\) aspect allows us to shift the contour in (17) to the line \(\operatorname{Re}(s)=-1/2\). The poles at \(1\) and \(1/2\) cancel out exactly the contribution of the \(\pi y\) and \(A\sqrt{y}\) terms in \(M_{\Phi}(y)\). Finally, the residue at \(0\) is
\[\frac{\sqrt{\pi}}{2}\frac{\Gamma\left(\frac{-1}{2}\right)}{\Gamma(1)}\frac{ \zeta(-1)}{\zeta(2)}\prod_{p}\left(1+\frac{-1+p+2p^{2}}{(1-p-2p^{2}+p^{4})(1+p) }\right)=1/BD_{2}.\]
## 6 Arithmetic Functions
In this section, we collect auxiliary definitions, notation, and lemmas.
**Definition 6.1**.: _Let \(\varphi(m)\) be Euler's function and \(\psi(m)\) be the Dedekind Psi function. Let_
\[\eta(m):=\frac{m}{\psi(m)}=\prod_{p\mid m}\frac{p}{p+1}.\]
_For any \(r,m\in\mathbb{N}\) and a prime \(P\), let_
\[\theta_{r}(m):=\sum_{a\bmod m}\left(\frac{a}{m}\right)\left(\frac{ar^{2}-4P} {m}\right).\]
_We let_
\[\widetilde{\varphi}_{r,d}(g):=\sum_{\begin{subarray}{c}a\bmod d^{2}g\\ a\bmod d^{2}\in\mathcal{R}_{r,d}\end{subarray}}\left(\frac{a}{g}\right)\left( \frac{(r^{2}a-4P)/d^{2}}{g}\right)\]
_for \(d,g,r\in\mathbb{N}\)._
**Lemma 6.2**.: _For any modulus \(m\) and a prime \(P\neq 2\) with \((m,P)=1\), \(\theta_{r}(m)\) is a multiplicative function of \(m\). For odd \(r\), it is given as follows. For a prime \(p\) with \((p,2r)=1\),_
\[\theta_{r}(p^{\alpha})=-p^{\alpha-1}\text{ for }\alpha\text{ odd};\ \ \theta_{r}(p^{\alpha})=p^{\alpha-1}(p-2)\text{ for }\alpha\text{ even}.\]
_For \(p\mid r\),_
\[\theta_{r}(p^{\alpha})=0\text{ for }\alpha\text{ odd};\ \ \theta_{r}(p^{\alpha})=p^{\alpha-1}(p-1)\text{ for }\alpha\text{ even};\]
_For \(p=2\),_
\[\theta_{r}(2^{\alpha}):=(-1)^{\alpha}2^{\alpha-1}.\]
_For even \(r\),_
\[\theta_{r}(2^{\alpha})=0\]
_for any \(\alpha\geq 1\); for an odd prime \(p\), \(\theta_{r}(p^{\alpha})=\theta_{r^{\prime}}(p^{\alpha}),\) where \(r^{\prime}\) is the odd part of \(r\)._
Proof.:
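The proof is a direct computation with Jacobi symbols; as an informal numerical sanity check (a sketch assuming SymPy; the prime \(P=11\) and the tested values of \(r\), \(p\), \(\alpha\) are arbitrary small cases, covering odd primes only):

```python
from sympy.ntheory import jacobi_symbol

def theta_direct(r, m, P):
    # Direct evaluation of theta_r(m) = sum_{a mod m} (a/m)((a r^2 - 4P)/m),
    # valid here for odd m, where (./m) denotes the Jacobi symbol.
    return sum(jacobi_symbol(a, m) * jacobi_symbol((a * r * r - 4 * P) % m, m)
               for a in range(m))

def theta_lemma(r, p, alpha):
    # Predicted value of theta_r(p^alpha) for an odd prime p (Lemma 6.2).
    if r % p == 0:
        return 0 if alpha % 2 else p ** (alpha - 1) * (p - 1)
    return -p ** (alpha - 1) if alpha % 2 else p ** (alpha - 1) * (p - 2)

P = 11                              # an odd prime, chosen arbitrarily
for r in (1, 3, 5, 9, 15):          # odd r only
    for p in (3, 5, 7):             # odd primes distinct from P
        for alpha in (1, 2, 3):
            assert theta_direct(r, p ** alpha, P) == theta_lemma(r, p, alpha)
print("Lemma 6.2 agrees with direct computation on all tested cases")
```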
**Lemma 6.3**.: _Let \(d,g,r\in\mathbb{N}\) be such that \(g|d^{\infty}\). Then:_
\[\widetilde{\varphi}_{r,d}(g)=\begin{cases}\varphi(g)\delta_{g=\square}&\text{ if }2 \nmid d\\ \varphi(g)\delta_{g=\square}&\text{ if }2\|d,\,2\nmid g\\ 0&\text{ if }2\|d,\,2\mid g,\,2\|r\\ 2\varphi(g)\delta_{g=\square}&\text{ if }2\|d,\,2\mid g,\,4\mid r\\ 2\varphi(g)\delta_{g=\square}&\text{ if }4\mid d\end{cases}.\]
Proof.: If \((d,r)\) is non-admissible, \(\widetilde{\varphi}_{r,d}(g)=0\), so assume it is an admissible pair. Let \(a\) be an integer that reduces to an element of \(\mathcal{R}_{r,d}\) modulo \(d^{2}\), let \(s:=(r^{2}a-4P)/d^{2}\), and let \(g=:2^{\alpha}h\), where \((h,2)=1\). Since \(h\) is odd, \(\left(\frac{\cdot}{h}\right)\) is a character modulo \(\operatorname{rad}h\), and since \(\operatorname{rad}h\mid d^{2}\), we have
\[\sum_{\begin{subarray}{c}x\bmod d^{2}g\\ x\equiv a\bmod d^{2}\end{subarray}}\left(\frac{x}{g}\right)\left(\frac{(r^{2}x -4P)/d^{2}}{g}\right) =\sum_{t=1}^{g}\left(\frac{a+td^{2}}{g}\right)\left(\frac{s+tr^{2 }}{g}\right)\] \[=\sum_{t\bmod h}\left(\frac{a}{h}\right)\left(\frac{s+tr^{2}}{h} \right)\sum_{t\bmod 2^{\alpha}}\left(\frac{a+td^{2}}{2^{\alpha}}\right)\left( \frac{s+tr^{2}}{2^{\alpha}}\right)\] \[=\delta_{h=\square}\varphi(h)\sum_{t\bmod 2^{\alpha}}\left( \frac{a+td^{2}}{2^{\alpha}}\right)\left(\frac{s+tr^{2}}{2^{\alpha}}\right).\]
where in the last step we used that \((h,r)=1\), because \((r,d)\leq 2\) by assumption. We compute
\[\star:=\sum_{t\bmod 2^{\alpha}}\left(\frac{a+td^{2}}{2^{\alpha}}\right)\left( \frac{s+tr^{2}}{2^{\alpha}}\right)\]
case by case.
1. If \(\alpha=0\), i.e., \(2\nmid g\), then \(\star=1\), so \(\widetilde{\varphi}(g)=2\delta_{g=\square}\varphi(g)\) if \(4\mid d\) and \(\delta_{g=\square}\varphi(g)\) otherwise. If \(2\mid g\) (and hence \(2\mid d\) and \(2\mid r\) from admissibility), then \(a\) is odd since \((a,d)=1\) for all \(a\in\mathcal{R}_{r,d}\), and:
2. If \(2||r,2||d\), then \(s\) is even and \(\star=0\).
3. If \(4\mid r\) and \(2\|d\), then \(\mathcal{R}_{r,d}\) has one element and the corresponding \(s\) is odd, and so using that \[\left(\frac{x+4}{2}\right)=-\left(\frac{x}{2}\right),\] we see \[\star=\sum_{t\bmod 2^{\alpha}}\left(\frac{z+4t}{2^{\alpha}}\right)\left( \frac{z^{\prime}}{2^{\alpha}}\right)=\delta_{2|\alpha}2^{\alpha};\]
4. if \(4\mid d\) and \(2\|r\), there's exactly one choice of \(a\in\mathcal{R}_{r,d}\) for which \(s\) is odd; for this choice of \(a\), once again \(\star=\delta_{2^{\alpha}=\square}2^{\alpha}\); for the other choice of \(a\), \(s\) is even and \(\star=0\).
Combining all the cases, we get the statement of the lemma.
**Theorem 6.4** (Hooley, [5]).: _Let \((a,m)=1\). Then_
\[\sum_{\begin{subarray}{c}N=X\\ N\equiv a\bmod m\end{subarray}}^{X+Y}\mu^{2}(N)=\frac{Y}{\zeta(2)}\frac{\eta(m)}{ \varphi(m)}+O(\sqrt{X/m}+m^{1/2+\varepsilon}).\]
**Lemma 6.5**.: _Let \(K\) be a cut-off parameter and let \(P\neq 2\) be a prime. Let_
\[A:=\prod_{p}(1+\frac{p}{(p+1)^{2}(p-1)});\]
1. \[\sum_{\begin{subarray}{c}m\text{ odd}\\ (P,m)=1\end{subarray}}^{K}\frac{1}{m^{2}}\eta(m)=\prod_{p\neq 2,P}\left(1+\frac{p}{ (p+1)^{2}(p-1)}\right)+\mathrm{O}\left(\frac{1}{K}\right)=\frac{9A}{11}+ \mathrm{O}\left(\frac{1}{P^{2}}+\frac{1}{K}\right);\]
2. \[\sum_{m:(P,m)=1}^{K}\frac{1}{m^{2}}\eta(2m)=\frac{2}{3}\sum_{k\geq 0}4^{-k}\prod_{ p\neq 2,P}\left(1+\frac{p}{(p+1)^{2}(p-1)}\right)=\frac{8A}{11}+\mathrm{O} \left(\frac{1}{P^{2}}+\frac{1}{K}\right);\]
**Lemma 6.6**.: _Let \(\mathcal{T}_{r}\subseteq\mathbb{N}^{3}\) denote the set of triples \((m,d,g)\) such that \((d,r)\) is admissible, \((m,d)=1\), and \(g|d^{\infty}\)._
\[\Theta_{r}(m,d,g):=\frac{\eta(d^{2}mg)}{\varphi(d^{2}mg)}\frac{\theta_{r}(m) \widetilde{\varphi}(g)}{mgd}.\]
_Then \(\sum_{\mathcal{T}_{r}}\Theta_{r}(m,d,g)\) is absolutely convergent and equal to_
\[B\cdot c(r):=\prod_{p}\frac{p^{4}-2p^{2}-p+1}{(p^{2}-1)^{2}}\prod_{p|r}\left(1 +\frac{p^{2}}{p^{4}-2p^{2}-p+1}\right).\]
_Moreover,_
\[\sum_{\begin{subarray}{c}(m,d,g)\in\mathcal{T}_{r}\\ mg\leq Z^{\prime},d\leq Z\end{subarray}}\Theta_{r}(m,d,g)-Bc(r)\ll Z^{-2}+(Z ^{\prime})^{-1/5}.\]
Proof.: Define \(E:=\prod_{p}\left(1+\frac{p}{(p^{2}-1)^{2}}\right),E(r):=\prod_{p|r}\left(1+ \frac{p}{(p^{2}-1)^{2}}\right),G:=\prod_{p}\left(1-\frac{2p}{(p^{2}-1)^{2}+p} \right),G(r):=\prod_{p|r}\left(1-\frac{2p}{(p^{2}-1)^{2}+p}\right),\) and \(F(r):=\prod_{p|r}\left(1+\frac{p(p-1)}{(p^{2}-1)^{2}}\right).\) It is easy to verify that
\[Bc(r)=\frac{EGF(r)}{E(r)G(r)}.\]
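The verification is factor-by-factor; it can also be confirmed numerically by truncating every Euler product at a common prime cutoff (a sketch; the cutoff \(2000\), the test values of \(r\), and the tolerance are arbitrary):

```python
from sympy import primerange, primefactors

def prod(factors):
    out = 1.0
    for f in factors:
        out *= f
    return out

PRIMES = list(primerange(2, 2000))  # common truncation for all products

def E_full():  return prod(1 + p / (p**2 - 1)**2 for p in PRIMES)
def G_full():  return prod(1 - 2 * p / ((p**2 - 1)**2 + p) for p in PRIMES)
def E_r(r):    return prod(1 + p / (p**2 - 1)**2 for p in primefactors(r))
def G_r(r):    return prod(1 - 2 * p / ((p**2 - 1)**2 + p) for p in primefactors(r))
def F_r(r):    return prod(1 + p * (p - 1) / (p**2 - 1)**2 for p in primefactors(r))
def B():       return prod((p**4 - 2 * p**2 - p + 1) / (p**2 - 1)**2 for p in PRIMES)
def c_r(r):    return prod(1 + p**2 / (p**4 - 2 * p**2 - p + 1) for p in primefactors(r))

for r in (1, 2, 6, 15, 28):
    lhs = B() * c_r(r)
    rhs = E_full() * G_full() * F_r(r) / (E_r(r) * G_r(r))
    assert abs(lhs - rhs) < 1e-9 * abs(lhs)
print("B*c(r) = E*G*F(r)/(E(r)*G(r)) on all tested r")
```

The per-prime identity \((1+\frac{p}{(p^{2}-1)^{2}})(1-\frac{2p}{(p^{2}-1)^{2}+p})=\frac{p^{4}-2p^{2}-p+1}{(p^{2}-1)^{2}}\) makes both sides agree exactly at any truncation, so only floating-point error remains.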
Next, since \(g|d^{\infty}\) and \((m,d)=1\), we have \(\{p|md^{2}g\}=\{p|d\}\sqcup\{p|m\}\), so
\[\Theta_{r}(m,d,g)=\frac{1}{d^{3}\prod_{p|d}(1-1/p^{2})}\frac{\theta_{r}(m)}{m^ {2}\prod_{p|m}(1-1/p^{2})}\frac{\widetilde{\varphi}(g)}{g^{2}}.\]
We begin by establishing absolute convergence. For fixed \(m,d\), the sum
\[\sum^{g}:=\sum_{g|d^{\infty}}\frac{\widetilde{\varphi}(g)}{g^{2}}\]
over \(g\)'s appearing in \(\mathcal{T}_{r}\) for these \(m\) and \(d\) is upper bounded by
\[\sum_{\begin{subarray}{c}g|d^{\infty}\\ g=\square\end{subarray}}\frac{2\varphi(g)}{g^{2}}=2\prod_{p|d}\left(1+\sum_{k \geq 1}\frac{p-1}{p}\frac{1}{p^{2k}}\right)=2\prod_{p|d}\left(1+\frac{1}{p(p+1) }\right)\ll\prod_{p}(1+1/p^{2})\leq\prod_{p}1/(1-1/p^{2})=\zeta(2),\]
so is uniformly bounded.
Now fix \(d\). Any number \(m\) can be written uniquely as \(m=a^{2}\cdot b\cdot 2^{c}\), where \(b\) is square-free and \(a,b\) odd. From the definition of \(\theta_{r}\),
\[\left|\theta_{r}(a^{2}\cdot b\cdot 2^{c})\right|\leq a^{2}\cdot 2^{c}.\]
In particular, using that \(\prod_{p\mid m}(1-1/p^{2})\gg 1\),
\[\sum_{(m,d)=1}\frac{\theta_{r}(m)}{m^{2}\prod_{p\mid m}(1-1/p^{2})}\ll\sum_{m \in\mathbb{N}}|\theta_{r}(m)|/m^{2}\ll\left(\sum_{c\in\mathbb{N}}1/2^{c}\right) \left(\sum_{a\in\mathbb{N}}1/a^{2}\right)\left(\sum_{b\;\square\,\text{-free}} 1/b^{2}\right)\ll 1.\]
Finally,
\[\sum_{d\in\mathbb{N}}\frac{1}{d^{3}\prod_{p\mid d}(1-1/p^{2})}\ll 1,\]
so indeed, the series \(\sum_{\mathcal{T}_{r}}\Theta_{r}(m,d,g)\) converges absolutely.
Next, we compute \(\sum^{g}\) case by case from the definition.
* When \(2\nmid d\), \(\widetilde{\varphi}(g)=\delta_{g=\square}\varphi(g)\), and \[\sum^{g}=\sum_{\begin{subarray}{c}g\mid d^{\infty}\\ g=\square\end{subarray}}\frac{\varphi(g)}{g^{2}}=\prod_{p\mid d}\left(1+\frac{ 1}{p(p+1)}\right);\]
* When \(2\|d\) and \(2\|r\), \(\widetilde{\varphi}(g)=\delta_{g=\square}\varphi(g)\) for odd \(g\) and \(0\) for even \(g\), so \[\sum^{g}=\sum_{\begin{subarray}{c}g\mid d^{\infty}\\ g=\square\\ 2\nmid g\end{subarray}}\frac{\varphi(g)}{g^{2}}=\prod_{p\mid d_{o}}\left(1+\frac{ 1}{p(p+1)}\right)\] where \(d_{o}\) is the odd part of \(d\);
* When \(2\|d\) and \(4\mid r\), then \(\widetilde{\varphi}(g)=\delta_{g=\square}\varphi(g)\) for \(g\) odd and \(\widetilde{\varphi}(g)=2\delta_{g=\square}\varphi(g)\) for \(g\) even, so \[\sum^{g}=\sum_{\begin{subarray}{c}g\mid d^{\infty}\\ g=\square\\ 2\nmid g\end{subarray}}\frac{\varphi(g)}{g^{2}}+2\sum_{\begin{subarray}{c}g \mid d^{\infty}\\ g=\square\\ 2\mid g\end{subarray}}\frac{\varphi(g)}{g^{2}}=\frac{4}{3}\prod_{p\mid d_{o}} \left(1+\frac{1}{p(p+1)}\right);\]
* When \(4|d\) and \(2||r\), then \(\widetilde{\varphi}(g)=2\delta_{g=\square}\varphi(g)\), so \[\sum^{g}=2\prod_{p\mid d}\left(1+\frac{1}{p(p+1)}\right)=\frac{7}{3}\prod_{p \mid d_{o}}\left(1+\frac{1}{p(p+1)}\right);\]
We can now compute the desired sum by expressing it as an Euler product.
Suppose first that \(r\) is odd, so \((r,d)\) is admissible if and only if \(d\) is odd and \((r,d)=1\). Then
\[\sum_{\mathcal{T}_{r}}\Theta_{r}(m,d,g) =\sum_{\begin{subarray}{c}d\;\text{odd}\\ (d,r)=1\end{subarray}}\sum_{(m,d)=1}\frac{\prod_{p\mid d}(1+1/p(p+1))}{d^{3} \prod_{p\mid d}(1-1/p^{2})}\frac{\theta_{r}(m)}{m^{2}\prod_{p\mid m}(1-1/p^{2})}\] \[=\sum_{m}\frac{\theta_{r}(m)}{m^{2}\prod_{p\mid m}(1-1/p^{2})} \sum_{(d,2mr)=1}\frac{1}{d^{3}}\prod_{p\mid d}\frac{p(1+p+p^{2})}{(p-1)(p+1)^ {2}}.\]
The inner sum can be expressed as the Euler product
\[\prod_{p\nmid 2mr}\left(1+\sum_{k\geq 1}\frac{1}{p^{3k}}\frac{p(1+p+p^{2})}{(p-1)(p+1) ^{2}}\right)\!\!=\prod_{p\nmid 2mr}\left(1+\frac{p}{(p^{2}-1)^{2}}\right)=\prod_{p} \left(1+\frac{p}{(p^{2}-1)^{2}}\right)/\prod_{p|2mr}\left(1+\frac{p}{(p^{2}-1)^ {2}}\right)\]
so
\[\sum_{\mathcal{T}_{r}}\Theta_{r}(m,d,g)=\frac{E}{E(r)}\cdot\sum_{m}\frac{\theta _{r}(m)}{m^{2}}\prod_{p|m}(1-1/p^{2})^{-1}\prod_{p\mid 2m,\,p\nmid r}(1+p/(p^{2}-1)^{2})^ {-1}.\]
This sum over \(m\) can itself be expressed as an Euler product. The \(p\neq 2\), \(p\nmid r\) part of the product is
\[\prod_{p\nmid 2r}\left(1+\frac{1}{(1-1/p^{2})(1+p/(p^{2}-1)^{2})}\left( \sum_{\alpha\geq 0}\frac{\theta_{r}(p^{2\alpha+1})}{p^{4\alpha+2}}+\sum_{ \alpha\geq 0}\frac{\theta_{r}(p^{2\alpha+2})}{p^{4\alpha+4}}\right)\right)\] \[= \prod_{p\nmid 2r}\left(1+\frac{1}{(p^{2}-1)(1+p/(p^{2}-1)^{2})}\left( -\sum_{\alpha\geq 0}\frac{1}{p^{2\alpha}}+\sum_{\alpha\geq 0}\frac{(p-2)}{p^{2 \alpha+1}}\right)\right)\] \[= \prod_{p\nmid 2r}\left(1-\frac{2p}{(p^{2}-1)^{2}+p}\right)=\frac{G}{G( 2r)}.\]
The \(p|r\) factors are
\[\prod_{p|r}\left(1+\frac{1}{1-1/p^{2}}\left(\sum_{\alpha\geq 0} \frac{\theta_{r}(p^{2\alpha+1})}{p^{4\alpha+2}}+\sum_{\alpha\geq 0}\frac{ \theta_{r}(p^{2\alpha+2})}{p^{4\alpha+4}}\right)\right) = \prod_{p|r}\left(1+\frac{1}{p^{2}-1}\sum_{\alpha\geq 0}\frac{p-1}{p^{2 \alpha+1}}\right)\] \[= \prod_{p|r}\left(1+\frac{p(p-1)}{(p^{2}-1)^{2}}\right)=F(r).\]
The Euler factor at \(2\) is
\[\frac{1}{1+2/(2^{2}-1)^{2}}\left(1+\sum_{\alpha\geq 1}\left(1-\frac{1}{2^{2}} \right)^{-1}\frac{(-1)^{\alpha}2^{\alpha-1}}{2^{2\alpha}}\right)=\frac{9}{11}\left(1+ \frac{1}{2}\cdot\frac{4}{3}\sum_{\alpha\geq 1}(-1/2)^{\alpha}\right)=\frac{7}{11}=G(2).\]
All the Euler factors together add up to \(Bc(r)\).
Now we estimate the tail, namely, the contribution of terms with \(d>Z\) or \(mg>Z^{\prime}\). As noted above, for \(d\) fixed,
\[\sum_{(m,d)=1}\sum_{g|d^{\infty},g=\square}\frac{\theta_{r}(m)}{m^{2}\prod_{p |m}(1-1/p^{2})}\frac{\varphi(g)}{g^{2}}\]
is uniformly bounded; hence,
\[\sum_{(m,d,g)\in\mathcal{T}_{r}:d\geq Z}\Theta_{r}(m,d,g)\ll\sum_{d\geq Z} \frac{1}{d^{3}\prod_{p|d}(1-1/p^{2})}\ll Z^{-2}. \tag{18}\]
For a given \(g\), the contribution of summands with that \(g\) is
\[\frac{\varphi(g)}{g^{2}}\sum_{\begin{subarray}{c}m,d\\ (m,d)=1\\ d\operatorname{\text{\tiny{odd}}}\\ g|d^{\infty}\end{subarray}}\frac{1/d^{3}}{\prod_{p|d}(1-1/p^{2})}\frac{\theta _{r}(m)/m^{2}}{\prod_{p|m}(1-1/p^{2})}\leq\frac{\varphi(g)}{g^{2}}\sum_{ \begin{subarray}{c}m,d\\ (m,d)=1\\ d\operatorname{\text{\tiny{odd}}}\end{subarray}}\frac{1/d^{2}}{\prod_{p|d}(1- 1/p^{2})}\frac{|\theta_{r}(m)|/m^{2}}{\prod_{p|m}(1-1/p^{2})}\ll\frac{\varphi (g)}{g^{2}}.\]
Here we used that \(\sum_{\begin{subarray}{c}m,d\\ (m,d)=1\\ d\text{ odd}\end{subarray}}\frac{1/d^{2}}{\prod_{p\mid d}(1-1/p^{2})}\frac{| \theta_{r}(m)|/m^{2}}{\prod_{p\mid m}(1-1/p^{2})}\) converges since these are terms of the original sum corresponding to \(g=1\). From this,
\[\sum_{(m,d,g)\in\mathcal{T}_{r}:g\geq W}\Theta_{r}(m,d,g)\ll\sum_{t>\sqrt{W}} \frac{1}{t^{2}}\ll W^{-1/2}. \tag{19}\]
Finally, the terms in the sum corresponding to elements of \(\mathcal{T}_{r}\) with a fixed \(m\) are
\[\frac{\theta_{r}(m)/m^{2}}{\prod_{p\mid m}(1-1/p^{2})}\sum_{ \begin{subarray}{c}g,d\\ (m,d)=1\\ d\operatorname{\text{\tiny{odd}}}\\ g|d^{\infty}\end{subarray}}\frac{\varphi(g)}{g^{2}}\frac{1/d^{3}}{\prod_{p|d}(1- 1/p^{2})}\leq\frac{\theta_{r}(m)/m^{2}}{\prod_{p\mid m}(1-1/p^{2})}\sum_{ \begin{subarray}{c}g,d\\ d\operatorname{\text{\tiny{odd}}}\\ g|d^{\infty}\end{subarray}}\frac{\varphi(g)}{g^{2}}\frac{1/d^{3}}{\prod_{p|d}(1- 1/p^{2})}\ll\frac{\theta_{r}(m)/m^{2}}{\prod_{p\mid m}(1-1/p^{2})}\]
where again, \(\sum_{\begin{subarray}{c}g,d\\ d\text{ odd}\\ g\mid d^{\infty}\end{subarray}}\frac{\varphi(g)}{g^{2}}\frac{1/d^{3}}{\prod_{p\mid d }(1-1/p^{2})}\) converges since these are the \(m=1\) terms in the original sum. Thus,
\[\sum_{(m,d,g)\in\mathcal{T}_{r}:m\geq V}\Theta_{r}(m,d,g)\ll\sum_{m\geq V}| \theta_{r}(m)|/m^{2}.\]
Recall that for \(m=a^{2}\cdot b\cdot 2^{c}\) for \(b\) square-free, \(a,b\) odd, one has \(\theta_{r}(m)/m^{2}\ll\frac{1}{a^{2}b^{2}2^{c}}\). Hence
\[\sum_{m\geq V}|\theta_{r}(m)|/m^{2} \leq\sum_{c\geq 1}\frac{1}{2^{c}}\sum_{\begin{subarray}{c}m=a^{2} b\text{ odd}\\ m\geq V/2^{c}\end{subarray}}|\theta_{r}(m)|/m^{2}\ll\sum_{c\geq 1}\frac{1}{2^{c}} \Big{(}\sum_{a>(V/2^{c})^{1/3}}\sum_{b}\frac{1}{a^{2}b^{2}}+\sum_{b>(V/2^{c}) ^{1/3}}\sum_{a}\frac{1}{a^{2}b^{2}}\Big{)}\] \[\ll\sum_{c}\frac{1}{2^{c}}\frac{2^{c/3}}{V^{1/3}}\ll V^{-1/3}. \tag{20}\]
Finally, let \(Z^{\prime}=VW\), where \(V=(Z^{\prime})^{3/5}\) and \(W=(Z^{\prime})^{2/5}\). Then \(gm>Z^{\prime}\) implies that at least one of \(g>W\) or \(m>V\) holds, so putting together (18), (19), and (20),
\[\sum_{\begin{subarray}{c}(m,d,g)\in\mathcal{T}_{r}\\ mg\geq Z^{\prime}\text{ or }d\geq Z\end{subarray}}\Theta_{r}(m,d,g)\ll Z^{-2}+(Z^{ \prime})^{-1/5}.\]
Next, suppose \(2\,\|\,r\), and let \(r_{o}\) be the odd part of \(r\).
\[\sum_{\mathcal{T}_{r}}\Theta_{r}(m,d,g) =\sum_{\begin{subarray}{c}d\text{ odd}\\ (d,r_{o})=1\end{subarray}}\sum_{(m,2d)=1}\frac{\prod_{p\mid d_{o}}(1+1/p(p+1))} {d^{3}\prod_{p\mid d}(1-1/p^{2})}\frac{\theta_{r_{o}}(m)}{m^{2}\prod_{p\mid m} (1-1/p^{2})}\] \[+\sum_{\begin{subarray}{c}2\|d\\ (d,r_{o})=1\end{subarray}}\sum_{(m,d)=1}\frac{\prod_{p\mid d_{o}}(1+1/p(p+1))} {d^{3}\prod_{p\mid d}(1-1/p^{2})}\frac{\theta_{r_{o}}(m)}{m^{2}\prod_{p\mid m} (1-1/p^{2})}\] \[+\frac{7}{3}\sum_{\begin{subarray}{c}4\mid d\\ (d,r_{o})=1\end{subarray}}\sum_{(m,d)=1}\frac{\prod_{p\mid d_{o}}(1+1/p(p+1))} {d^{3}\prod_{p\mid d}(1-1/p^{2})}\frac{\theta_{r_{o}}(m)}{m^{2}\prod_{p\mid m }(1-1/p^{2})}\] \[=:I+II+(7/3)III.\]
We compute the three summands \(I,II,\) and \(III\), separately.
Observe that for \(m\) odd and \(\alpha\geq 1\),
\[\frac{\theta_{r_{o}}(2^{\alpha}m)}{2^{2\alpha}m^{2}\prod_{p|2m}(1-1/p^{2})}= \frac{1}{2^{2\alpha}}\frac{1}{1-1/4}(-1)^{\alpha}2^{\alpha-1}\frac{\theta_{r_{ o}}(m)}{m^{2}\prod_{p|m}(1-1/p^{2})}=\frac{2\cdot(-1)^{\alpha}}{3\cdot 2^{ \alpha}}\frac{\theta_{r_{o}}(m)}{m^{2}\prod_{p|m}(1-1/p^{2})}.\]
Thus
\[I=\sum_{\begin{subarray}{c}d\text{ odd}\\ (d,r_{o})=1\end{subarray}}\sum_{(m,d)=1}\frac{\prod_{p|d}(1+1/p(p+1))}{d^{3} \prod_{p|d}(1-1/p^{2})}\frac{\theta_{r_{o}}(m)}{m^{2}\prod_{p|m}(1-1/p^{2})} \left(1+\sum_{\alpha\geq 1}\frac{2\cdot(-1)^{\alpha}}{3\cdot 2^{\alpha}} \right)^{-1}=\frac{9}{7}\sum_{\mathcal{T}_{r_{o}}}\Theta_{r_{o}}(m,d,g).\]
For \(II\), we can rewrite
\[II=\sum_{\begin{subarray}{c}d=2d_{o},\ d_{o}\text{ odd}\\ (d_{o},r_{o})=1\end{subarray}}\frac{1}{8(1-1/4)}\frac{\prod_{p\mid d_{o}}(1+1/p(p+1))}{d_{o }^{3}\prod_{p\mid d_{o}}(1-1/p^{2})}\sum_{(m,2d_{o})=1}\frac{\theta_{r_{o}}(m)}{m^{2}\prod _{p\mid m}(1-1/p^{2})}=\frac{9}{7}\cdot\frac{1}{6}\sum_{\mathcal{T}_{r_{o}}} \Theta_{r_{o}}(m,d,g).\]
Finally,
\[III=\sum_{\alpha\geq 2}\sum_{\begin{subarray}{c}d=2^{\alpha}d_{o},\ d_{o}\text{ odd}\\ (d_{o},r_{o})=1\end{subarray}}\frac{1}{8^{\alpha}(1-1/4)}\frac{\prod_{p\mid d_{o}}(1+1/p(p+ 1))}{d_{o}^{3}\prod_{p\mid d_{o}}(1-1/p^{2})}\sum_{(m,2d_{o})=1}\frac{\theta_{r_{o}}(m)}{m^{2 }\prod_{p\mid m}(1-1/p^{2})}=\frac{9}{7}\frac{1}{6\cdot 7}\sum_{\mathcal{T}_{r_{o}}} \Theta_{r_{o}}(m,d,g).\]
In summary,
\[\sum_{\mathcal{T}_{r}}\Theta_{r}(m,d,g)=I+II+\frac{7}{3}III=\frac{9}{7}\left(1+\frac{1}{6}+\frac{1}{6\cdot 7}\cdot\frac{7}{3}\right)\sum_{ \mathcal{T}_{r_{o}}}\Theta_{r_{o}}(m,d,g)=\frac{11}{7}\sum_{\mathcal{T}_{r_{o} }}\Theta_{r_{o}}(m,d,g).\]
It remains to notice that \(c(r)=\frac{11}{7}c(r_{o})\).
Finally, we address the case \(4|r\):
\[\sum_{\mathcal{T}_{r}}\Theta_{r}(m,d,g) =\sum_{\begin{subarray}{c}d\text{ odd}\\ (d,r_{o})=1\end{subarray}}\sum_{(m,2d)=1}\frac{\prod_{p|d_{o}}(1+1/p(p+1))}{d^ {3}\prod_{p|d}(1-1/p^{2})}\frac{\theta_{r_{o}}(m)}{m^{2}\prod_{p|m}(1-1/p^{2})}\] \[+\frac{4}{3}\sum_{\begin{subarray}{c}2\|d\\ (d,r_{o})=1\end{subarray}}\sum_{(m,d)=1}\frac{\prod_{p|d_{o}}(1+1/p(p+1))}{d^ {3}\prod_{p|d}(1-1/p^{2})}\frac{\theta_{r_{o}}(m)}{m^{2}\prod_{p|m}(1-1/p^{2})}.\]
The first summand matches \(I=\frac{9}{7}\cdot Bc(r_{o})\); the second is equal to \(\frac{4}{3}\cdot II=\frac{4}{3}\cdot\frac{1}{6}\cdot\frac{9}{7}\cdot Bc(r_{o})=\frac{2}{7}Bc(r_{o})\), and since \(c(r)=\frac{11}{7}c(r_{o})\) also when \(4\mid r\), we get the desired answer once again.
The error term analysis for \(r\) even matches that of odd \(r\).
**Lemma 6.7**.: _Let \(m\in\mathbb{N}\), and let \(\chi\) be a real character modulo \(m\) coming from a primitive character modulo \(m_{0}\). Let_
\[\sigma_{\chi}(Z):=\sum_{N\leq Z}\mu^{2}(N)\chi(N).\]
_Then:_
\[\sigma_{\chi}(Z)=\frac{Z}{\zeta(2)}\prod_{p|m}\frac{p}{p+1}+O_{\varepsilon} \left(Z^{3/5+\varepsilon}m^{\varepsilon}\right)=Z\cdot\frac{\eta(m)}{\zeta(2) }+O_{\varepsilon}\left(Z^{3/5+\varepsilon}m^{\varepsilon}\right)\]
_if \(\chi\) is the principal character, and_
\[\sigma_{\chi}(Z)=O_{\varepsilon}\left(Z^{3/5+\varepsilon}m^{\varepsilon}m_{0} ^{1/5+\varepsilon}\right)\]
_if \(\chi\) is non-principal._
Proof.: Consider the associated Dirichlet series
\[\mathcal{L}_{\chi}(s):=\sum_{n}\mu^{2}(n)\chi(n)n^{-s}=\prod_{p}\left(1+\chi(p)p^{ -s}\right)=\frac{L(s,\chi)}{L(2s,\chi^{2})},\]
where \(L(s,\chi)\) is the Dirichlet series associated to \(\chi\). By Perron's formula (see [15], p. 70), choosing \(Z\) near a half-integer, we have
\[\sum_{N\leq Z}\mu^{2}(N)\chi(N)=\frac{1}{2\pi i}\int_{1+\varepsilon-iT}^{1+ \varepsilon+iT}\mathcal{L}_{\chi}(s)\frac{Z^{s}}{s}ds+\mathrm{O}_{\varepsilon }\left(\frac{Z^{1+\varepsilon}}{T}\right)\]
Suppose first that \(\chi\) is the principal character. Then we have
\[\mathcal{L}_{\chi}(s)=\frac{\zeta(s)}{\zeta(2s)}\prod_{p|m}\frac{1}{1+p^{-s}},\]
and shifting the integral to the left of the line \(\mathrm{Re}(s)=1\) picks up the pole of \(\zeta(s)\) at \(s=1\). We have:
\[\mathrm{Res}_{s=1}\mathcal{L}_{\chi}(s)=\frac{1}{\zeta(2)}\prod_{p|m}\frac{1} {1+1/p}\]
Since there are no other poles to the right of the line \(\operatorname{Re}(s)=1/2\), shifting the contour to the line \(\operatorname{Re}(s)=1/2+\varepsilon\) gives \[\sum_{N\leq Z}\mu^{2}(N)\chi(N)=\frac{Z}{\zeta(2)}\prod_{p\mid m}\frac{1}{1+1/p}+\frac{1}{2\pi i}\int_{\mathcal{C}}\mathcal{L}_{\chi}(s)\frac{Z^{s}}{s}ds+\mathrm{O}_{\varepsilon}\left(\frac{Z^{1+\varepsilon}}{T}\right),\] where the integration contour \(\mathcal{C}\) is as shown in Figure 5(a). Similarly, if \(\chi\) is non-principal, there are no poles inside of the contour, and
\[\sum_{N\leq Z}\mu^{2}(N)\chi(N)=\frac{1}{2\pi i}\int_{\mathcal{C}}\mathcal{L} _{\chi}(s)\frac{Z^{s}}{s}ds+\mathrm{O}_{\varepsilon}\left(\frac{Z^{1+ \varepsilon}}{T}\right).\]
It remains to bound the contour integral terms.
From the Euler product expansion for \(L(s,\chi)\) it is immediate that for \(s=\sigma+it,\sigma>1\), one has \(|L(s,\chi)|\geq\zeta(2\sigma)/\zeta(\sigma)\), and thus \(|1/L(2s,\chi^{2})|\ll_{\varepsilon}1\) everywhere on the above contour. Moreover, from the above discussion, we also have \(H_{\chi}(s)\ll 1\). Hence
\[\frac{1}{2\pi i}\int_{\mathcal{C}}\mathcal{L}_{\chi}(s)\frac{Z^{s}}{s}ds\ll_ {\varepsilon}\int_{\mathcal{C}}\left|L(s,\chi)\frac{Z^{s}}{s}\right|ds.\]
Next, let \(\chi^{\prime}\) be the real primitive character of modulus \(m_{0}\mid m\) that gives rise to \(\chi\). For \(\sigma>0\),
\[L(s,\chi)=L(s,\chi^{\prime})\prod_{p|m}(1-\chi^{\prime}(p)p^{-s})\]
and thus for \(s\in\mathcal{C}\),
\[|L(s,\chi)|\leq|L(s,\chi^{\prime})|\prod_{p|m}(1+p^{-1/2-\varepsilon})\ll m^ {\varepsilon}|L(s,\chi^{\prime})|\]
From a result of Davenport ([2]), for \(\chi^{\prime}\) primitive and \(s=\sigma+it\), \(\sigma\in[0,1]\), we have
\[|L(s,\chi^{\prime})|\ll\left((|t|+1)m_{0}\right)^{1/2-\sigma/2}\]
Thus,
\[\int_{1/2+\varepsilon-iT}^{1/2+\varepsilon+iT}\bigg{|}L(s,\chi^{\prime})\frac {Z^{s}}{s}\bigg{|}ds\ll Z^{1/2+\varepsilon}m_{0}^{1/4-\varepsilon/2}\int_{-T}^{T} \frac{(|t|+1)^{1/4-\varepsilon/2}}{|t|+1}dt\ll m_{0}^{1/4-\varepsilon/2}Z^{1/2+\varepsilon}T^{1/4-\varepsilon/2}.\]
Next, let \(\sigma\geq 1\) and \(s=\sigma+it,t>1\), and let
\[A(u):=\sum_{n\leq u}\chi^{\prime}(n).\]
If \(\chi\) is principal (i.e., if \(L(s,\chi^{\prime})=\zeta(s)\)) one has \(|L(s,\chi^{\prime})|\ll\log|t|\). Indeed, for an integer \(N=\lfloor T\rfloor\), Abel summation gives
\[\zeta(s)=\sum_{n\leq N}n^{-s}-N^{1-s}+\int_{N}^{\infty}\frac{su}{u^{s+1}}du- \int_{N}^{\infty}\frac{s\{u\}}{u^{s+1}}du=\sum_{n\leq N}n^{-s}+\mathrm{O}(1)+ \mathrm{O}\left(T/N\right)=\mathrm{O}(\log T).\]
On the other hand, if \(\chi\) is non-principal, one has
\[|A(u)|\ll\sqrt{m_{0}}\log m_{0}\]
by the Polya-Vinogradov inequality; hence, by partial summation,
\[\left|\sum_{n\geq N}n^{-s}\chi^{\prime}(n)\right|=\left|-A(N)N^{-s}+\int_{N}^ {\infty}\frac{sA(u)}{u^{s+1}}du\right|\ll\frac{\sqrt{m_{0}}\log m_{0}}{N}+|t| \sqrt{m_{0}}\log m_{0}\int_{N}^{\infty}\frac{du}{u^{\sigma+1}}\ll\frac{|t|\sqrt{m_ {0}}\log m_{0}}{N},\]
so setting \(N=\sqrt{m_{0}}|t|\), we see that
\[|L(s,\chi^{\prime})|\ll\left|\sum_{n<N}\chi^{\prime}(n)n^{-s}\right|+|t|\sqrt {m_{0}}\log m_{0}/N\ll\log|m_{0}t|.\]
To summarize,
\[\int_{1/2+\varepsilon+iT}^{1+\varepsilon+iT}\bigg{|}L(s,\chi^{ \prime})\frac{Z^{s}}{s}\bigg{|}ds \ll\int_{1/2+\varepsilon}^{1}\frac{Z^{\sigma}}{T}(Tm_{0})^{1/2- \sigma/2}d\sigma+\log(Tm_{0})\int_{1}^{1+\varepsilon}\frac{Z^{\sigma}}{T}d\sigma\] \[\ll\frac{\sqrt{m_{0}}}{\sqrt{T}}\int_{1/2+\varepsilon}^{1}\left( \frac{Z}{\sqrt{Tm_{0}}}\right)^{\sigma}d\sigma+\frac{\varepsilon Z^{1+ \varepsilon}\log(Tm_{0})}{T}\] \[\ll\frac{Z}{T}+\frac{\varepsilon Z^{1+\varepsilon}\log(Tm_{0})}{T}\]
and similarly for the other horizontal contour. Finally, taking \(T=Z^{2/5}/m_{0}^{1/5}\) yields the desired error term.
**Lemma 6.8**.: _One has_
\[\sum_{n\leq Z}\mu^{2}(n)\varphi(n)=\frac{Z^{2}}{2\zeta(2)}\prod_{p}\left(1- \frac{1}{p^{2}+p}\right)+\mathrm{O}(Z^{8/5+\varepsilon})\]
Proof.: Consider the associated Dirichlet series
\[\mathcal{L}(s):=\sum_{n}\mu^{2}(n)\varphi(n)n^{-s}=\prod_{p}\left(1+\frac{p-1}{p^{s}}\right)=\frac{\zeta(s-1) }{\zeta(2s-2)}\prod_{p}\left(1-\frac{1}{p^{s}+p}\right).\]
Applying Perron's formula again,
\[\sum_{N\leq Z}\mu^{2}(N)\varphi(N)=\frac{1}{2\pi i}\int_{2+\varepsilon-iT}^{2+ \varepsilon+iT}\mathcal{L}(s)\frac{Z^{s}}{s}ds+\mathrm{O}_{\varepsilon }\left(\frac{Z^{2+\varepsilon}}{T}\right)\]
Shifting the contour to the line \(\mathrm{Re}(s)=3/2+\varepsilon\) picks up the pole of \(\zeta(s-1)\) at \(s=2\), with residue agreeing with the main term, and no other poles. In the region \(\mathrm{Re}(s)\geq 3/2+\varepsilon\), the function \(\prod_{p}\left(1-\frac{1}{p^{s}+p}\right)\) is bounded absolutely. Similarly, \(|1/\zeta(2s-2)|\ll_{\varepsilon}1\) in this region. Hence, after the change of variable \(s\mapsto s+1\), it suffices to bound the contour integrals \(\int_{1/2+\varepsilon-iT}^{1/2+\varepsilon+iT}|\zeta(s)Z^{s+1}/s|ds\) and \(\int_{1/2+\varepsilon+iT}^{1+\varepsilon+iT}|\zeta(s)Z^{s+1}/s|ds\). By the convexity bound we have \(\zeta(\sigma+it)\ll(1+|t|)^{1/2-\sigma/2}\); hence, for the first integral
\[\int_{1/2+\varepsilon-iT}^{1/2+\varepsilon+iT}|\zeta(s)Z^{s+1}/s|ds\ll Z^{3/ 2+\varepsilon}\int_{1}^{T}t^{1/4-\varepsilon/2-1}dt=Z^{3/2+\varepsilon}T^{1/ 4-\varepsilon/2}.\]
Similarly, on the horizontal parts of the contour inside the critical strip, one has
\[\int_{1/2+\varepsilon+iT}^{1+iT}|\zeta(s)Z^{s+1}/s|ds\ll\int_{1/2+\varepsilon }^{1}Z^{\sigma+1}\frac{T^{1/2-\sigma/2}}{T}d\sigma\ll\frac{Z^{2}}{T}\int_{0}^{1/2}\left(\frac{\sqrt{T}}{Z}\right)^{v}dv \ll Z^{2}/T,\]
and for \(\sigma\geq 1\), \(\zeta(s)=\mathrm{O}(\log T)\) gives
\[\int_{1+iT}^{1+\varepsilon+iT}\big{|}\zeta(s)Z^{s+1}/s\big{|}ds\ll\varepsilon Z ^{2+\varepsilon}/T^{1-\varepsilon}.\]
Taking \(T=Z^{2/5}\) concludes the calculation.
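The asymptotic in Lemma 6.8 can be compared against a direct computation (an informal sketch assuming SymPy; the cutoff \(Z\) and the prime truncation of the constant are arbitrary):

```python
from sympy import factorint, totient, primerange
import math

Z = 20000
# direct sum of phi(n) over squarefree n <= Z
direct = sum(int(totient(n)) for n in range(1, Z + 1)
             if all(e == 1 for e in factorint(n).values()))

const = 1 / (2 * (math.pi**2 / 6))            # 1/(2*zeta(2))
for p in primerange(2, 10**5):                # truncated Euler product
    const *= 1 - 1 / (p**2 + p)

print(direct / (const * Z**2))  # tends to 1 as Z grows, per the O(Z^{8/5+eps}) error
```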
## 7 Acknowledgments
I am very grateful to Andrew Sutherland for introducing this question and to Peter Sarnak for suggesting it as a project to me, as well as for many fruitful discussions. I would like to thank Jonathan Bober, Andrew Booker, Andrew Granville, Yang-Hui He, Min Lee, Michael Lipnowski, Mayank Pandey, Michael Rubinstein, and Will Sawin, for enlightening conversations in connection to this problem.
The research was supported by NSF GRFP and the Hertz Fellowship.
|
2310.17492 | Orchestration of Emulator Assisted Mobile Edge Tuning for AI Foundation
Models: A Multi-Agent Deep Reinforcement Learning Approach | The efficient deployment and fine-tuning of foundation models are pivotal in
contemporary artificial intelligence. In this study, we present a
groundbreaking paradigm integrating Mobile Edge Computing (MEC) with foundation
models, specifically designed to enhance local task performance on user
equipment (UE). Central to our approach is the innovative Emulator-Adapter
architecture, segmenting the foundation model into two cohesive modules. This
design not only conserves computational resources but also ensures adaptability
and fine-tuning efficiency for downstream tasks. Additionally, we introduce an
advanced resource allocation mechanism that is fine-tuned to the needs of the
Emulator-Adapter structure in decentralized settings. To address the challenges
presented by this system, we employ a hybrid multi-agent Deep Reinforcement
Learning (DRL) strategy, adept at handling mixed discrete-continuous action
spaces, ensuring dynamic and optimal resource allocations. Our comprehensive
simulations and validations underscore the practical viability of our approach,
demonstrating its robustness, efficiency, and scalability. Collectively, this
work offers a fresh perspective on deploying foundation models and balancing
computational efficiency with task proficiency. | Wenhan Yu, Terence Jie Chua, Jun Zhao | 2023-10-26T15:47:51Z | http://arxiv.org/abs/2310.17492v1 | Orchestration of Emulator Assisted Mobile Edge Tuning for AI Foundation Models: A Multi-Agent Deep Reinforcement Learning Approach
###### Abstract
The efficient deployment and fine-tuning of foundation models are pivotal in contemporary artificial intelligence. In this study, we present a groundbreaking paradigm integrating Mobile Edge Computing (MEC) with foundation models, specifically designed to enhance local task performance on user equipment (UE). Central to our approach is the innovative Emulator-Adapter architecture, segmenting the foundation model into two cohesive modules. This design not only conserves computational resources but also ensures adaptability and fine-tuning efficiency for downstream tasks. Additionally, we introduce an advanced resource allocation mechanism that is fine-tuned to the needs of the Emulator-Adapter structure in decentralized settings. To address the challenges presented by this system, we employ a hybrid multi-agent Deep Reinforcement Learning (DRL) strategy, adept at handling mixed discrete-continuous action spaces, ensuring dynamic and optimal resource allocations. Our comprehensive simulations and validations underscore the practical viability of our approach, demonstrating its robustness, efficiency, and scalability. Collectively, this work offers a fresh perspective on deploying foundation models and balancing computational efficiency with task proficiency.
Mobile edge computing, foundation model, parameter efficient tuning, deep reinforcement learning, wireless communications.
## I Introduction
**Background.** Artificial intelligence has undergone a profound transformation with the advent of foundation models. These powerful computational structures, like GPT-3 [1] and BERT [2], excel in processing and generating diverse data types, such as text, images, and audio [3]. These models have established new benchmarks in tasks spanning from natural language understanding to content generation and translation. Their strength lies in their extensive training, involving billions of parameters, which fosters a comprehensive and foundational data comprehension. The contemporary era is marked by the dominance of these foundational models, which are increasingly finding applications in various industries such as healthcare, finance, education, and entertainment. These expansive language models offer a versatile toolkit for a wide range of practical applications. They can be tailored or adjusted for specific domains or tasks through a process known as fine-tuning. Fine-tuning [4] customizes the initially pretrained model to operate effectively in a more specific context or application. To illustrate, a large language model that has been pretrained on general text data can be fine-tuned to become an expert in tasks like medical diagnosis, legal document review, or customer service chatbots. It's important to recognize that the initial training of foundational models is centered on self-supervised learning from extensive unlabeled data, enabling them to grasp general language comprehension. Conversely, fine-tuning involves adapting these pretrained models to particular tasks using task-specific labeled data, which enhances their specialized performance.
**Motivation.** Foundation models serve as the cornerstone for a wide array of downstream tasks in industries like finance and healthcare. These models can be customized through fine-tuning to address specific natural language understanding challenges in specialized fields. However, fine-tuning these models for local tasks on mobile devices is prohibitively computationally intensive [5]. To overcome this challenge, mobile edge computing (MEC) can be employed, where local devices send their training data to a server for model training and fine-tuning. Yet, deploying this solution in decentralized environments, with numerous User Equipments (UEs) handling various tasks, presents complexities. The substantial size of foundation models and local device data, coupled with the high communication and computation costs associated with transmitting data to the server and fine-tuning large language models, pose significant challenges. Moreover, MEC introduces issues like delays in uplink data transmission and model fine-tuning. The key lies in achieving optimal downstream task performance while managing the overhead costs of communication and computation.
**Proposed solution.** To address the challenge of fine-tuning foundation models for downstream tasks, we propose a hybrid approach that combines mobile edge computing (MEC) with local device computation. Additionally, we aim to reduce the computational and communication burden of MEC and local device computation by employing an emulator and adapter combination approach [4]. An adapter consists of trainable neural network parameters, such as weights, layers, or units, while the emulator is a representation of the fixed-weight
portions of the neural network. Adapters enable the server and local devices to train only a subset of the foundation model's parameters, fine-tuning them specifically for the downstream task. Moreover, the emulator, a compressed version of the fixed-weight foundation model, assists in training the adapter during local device model fine-tuning.
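As a concrete illustration of this split (a minimal PyTorch sketch; the layer-drop construction, the bottleneck adapter, and all sizes shown here are one hypothetical instantiation, not the exact architecture of [4]):

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Small trainable bottleneck applied after a frozen block."""
    def __init__(self, d_model: int, d_bottleneck: int = 32):
        super().__init__()
        self.down = nn.Linear(d_model, d_bottleneck)
        self.up = nn.Linear(d_bottleneck, d_model)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))  # residual update

def build_emulator(blocks: nn.ModuleList, retention: float) -> nn.ModuleList:
    """Layer-drop compression: keep a `retention` fraction of the blocks,
    evenly spaced, and freeze their weights."""
    keep = max(1, int(len(blocks) * retention))
    idx = torch.linspace(0, len(blocks) - 1, keep).round().long().tolist()
    emulator = nn.ModuleList(blocks[i] for i in idx)
    for p in emulator.parameters():
        p.requires_grad_(False)  # emulator stays fixed; only adapters train
    return emulator

# toy "foundation model" trunk: 12 transformer-like blocks of width 64
d = 64
trunk = nn.ModuleList(nn.TransformerEncoderLayer(d, nhead=4, batch_first=True)
                      for _ in range(12))
emulator = build_emulator(trunk, retention=0.5)   # e.g. E_n^t = 0.5
adapters = nn.ModuleList(Adapter(d) for _ in emulator)

x = torch.randn(2, 10, d)                         # (batch, seq, width)
for block, adapter in zip(emulator, adapters):
    x = adapter(block(x))                         # frozen block + trainable adapter
print(sum(p.numel() for p in adapters.parameters()), "trainable parameters")
```

Only the adapter parameters require gradients, which is what makes transmitting or locally training them cheap relative to the full model.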
To facilitate the adoption of mixed MEC and local device model fine-tuning, we introduce an orchestrator. This orchestrator optimizes crucial variables, including device selection for MEC and local device computation, as well as novel considerations such as emulator compression parameters. The latter is particularly important in the context of foundation model fine-tuning, which was not previously emphasized in MEC approaches. Our orchestrator enhances resource allocation for the fine-tuning process, improving its scalability. Furthermore, our orchestrator leverages a novel multi-agent deep reinforcement learning technique, the Hybrid Multi-agent Proximal Policy Optimization (HMPPO) approach, to seamlessly handle the optimization of both continuous and discrete variables.
Our contributions are as follows:
* **Paradigmatic Shift with MEC**: We introduce a novel paradigm that combines Mobile Edge Computing (MEC) with fine-tuning of foundation models for local device tasks. This approach is designed to enhance model performance for tasks on local user equipment, optimizing computation while maintaining foundation model integrity and performance.
* **Architectural Innovation with Emulator-Adapter**: We divide the foundation model into two components: the Emulator and the Adapter. This modular approach minimizes local device overhead while maintaining foundation model adaptability, achieving a balance between resource conservation and optimizing downstream task-specific model fine-tuning performance.
* **Optimized Resource Allocation Strategy**: We develop an advanced resource allocation mechanism that optimizes key variables, including device selection for mobile edge computing or local device computation, and emulator compression parameters. These variables are selected to address the specific challenges and needs of the Emulator-Adapter structure in a decentralized environment.
* **Hybrid Multi-agent DRL for Resource Allocation**: We deploy a cutting-edge hybrid multi-agent Deep Reinforcement Learning (DRL) method to tackle a mixed discrete-continuous action space problem. This approach effectively addresses the challenges within our system model, enabling optimal and dynamic resource allocation decisions.
* **Comprehensive Simulations and Validations**: We conduct comprehensive simulations and these rigorous tests and evaluations have confirmed the robustness, efficiency, and scalability of our proposed system and solution. These simulations attest to the practical viability and superior performance of our approach.
**Related works.** Foundation models, such as GPT-3 [1] and CLIP [6], widely known as large pre-trained models, have gained prominence due to their exceptional ability to make zero-shot predictions and their adaptability to new tasks through a transfer learning method known as fine-tuning [7, 8]. Leveraging these models for fine-tuning to tackle downstream tasks offers significant advantages in terms of both time and resource savings when compared to the labor-intensive process of training models from scratch.
Efficient utilization of foundation models has become a central focus in modern AI, with techniques designed to reduce computational and storage overhead while maintaining or enhancing performance. Among these techniques, adapters [9] and Low-rank Adapters (LoRA) [10, 11] have stood out, encoding task-specific information within intermediate layers of a model without overshadowing pre-existing knowledge [12]. The trend in recent advancements, such as Parameter-Efficient Fine Tuning (PEFT), prefix-tuning [13], and prompt tuning [14, 15], adapters, P-tuning V2 [16], tuning embedding layer inputs [17], emphasizes the minimization of changes to model parameters, serving the dual purpose of resource conservation and knowledge encapsulation from larger pre-trained models.
Local devices often face constraints when accommodating the full weight of large foundation models, which not only hinders efficiency but also raises concerns regarding comprehensive model knowledge and potential privacy issues. In response to these challenges, recent research [4, 18] has spotlighted the utility of emulators, scaled-down yet effective versions of foundation models, to facilitate efficient fine-tuning. This approach is particularly relevant within the context of server-assisted computing, where the intertwined future of model tuning and server-assisted computing, specifically within Mobile Edge Computing (MEC), promises resource-sensitive, high-performance AI solutions.
Besides the work by Dong et al. [19, 20], there is a notable absence of research dedicated to the mobile edge computing of large foundation models, particularly the implementation of an emulator-assisted approach to fine-tuning such models. Our objective is to pioneer a Parameter-Efficient Emulator Assisted Tuning (PEAT) approach within mobile edge computing to address downstream tasks.
## II System model
In an environment where a central server operates alongside a collection of User Equipments (UEs) represented by \(\mathcal{N}=\{1,2,\ldots,N\}\), every UE has a buffer filled with \(T_{n}\) tasks to complete. At every step \(t\), each user undertakes a task by utilizing models tailored from the foundation model. These tailored models consist of an **Emulator**, compressed from the foundation model by knowledge distillation, pruning, or layer drop [21], and an **Adapter** specifically fine-tuned for the task at hand. Assume that each UE can only cache one emulator at a time due to limited storage capacity. Despite this emulator serving as a foundational framework, it doesn't undergo training. Instead, the onus of adaptability lies with
the Adapter. With its trainable weights, the Adapter is locally fine-tuned, allowing it to be best aligned with the task it's meant to facilitate.
For our approach, we implement a robust resource allocation method, presuming every UE consistently manages its buffer with a uniform task load, implying \(T_{n}\) remains consistent across all UEs. Addressing this high-demand situation creates a foundation adaptable to cases where some UEs may not have tasks. Central to this architecture, the server maintains a comprehensive foundation model dedicated to bolstering the UEs in their endeavors.
The server, leveraging its advanced algorithms and the foundation model, first determines (1) the optimal emulator configuration \(E_{n}^{t}\) for each UE (\(n\)) and the specific task (\(t\)) it is tackling. Beyond this, the server also makes (2) informed decisions \(z_{n}^{t}\) about where the computation should ideally occur: back at the central server (\(z_{n}^{t}=1\)) or locally at the UE (\(z_{n}^{t}=0\)). Then, the wireless communication overhead and the task accuracy are jointly optimized. The system model is illustrated in Fig. 1. These two cases are expounded on as follows:
**Case 1: Server-side Training.** When the UE opts for training to be conducted on the server, the entire process unfolds at the central node. In this scenario, the UE's first step is to securely transmit its data \(D_{n}^{t}\) to the server. The server, equipped with the necessary computational resources, then engages in the training process by tailoring the foundation model to the \(E_{n}^{t}\), which is the same as the one currently on UE \(n\), to ensure consistency. Upon the completion of the training phase, the server does not send back the entire model. Instead, it efficiently packages and transmits only the adapter weights. The UE, in turn, employs these weights to update its local adapter, ensuring both the server and UE remain synchronized in their model representations. Thus, the communication overhead for uploading data when using server training is:
\[d_{s,n}^{t}(z_{n}^{t})=\frac{D_{n}^{t}}{r_{u,n}^{t}}\times z_{n}^{t},\;\;\forall n \in\mathcal{N},\forall t\in\mathcal{T}. \tag{1}\]
where \(r_{u,n}^{t}\) represents the average upload transmission rate of the \(n^{\text{th}}\) UE at task \(t\). For transmission, the system employs FDMA to counteract potential interference [22] and average allocation of bandwidth resources. Consequently, the achievable transmission rate \(r_{n}^{t}\) can be expressed as:
\[r_{u,n}^{t}(z_{n}^{t})=\bar{W}_{u}^{t}\log_{2}(1+\frac{p_{u,n}^{t}h_{n}^{t}}{ \sigma^{2}\bar{W}_{u}^{t}}), \tag{2}\]
where \(p_{u,n}^{t}\) is the uplink transmission power of UE \(n\), determined by the respective device, and \(h_{n}^{t}\) is the average channel gain when dealing with task \(t\), which will be expounded on in Section VI-A.
Fig. 1: Architecture of a central server interacting with User Equipments (UEs) for task execution and resource allocation, where each UE utilizes a two-part model comprising an _Emulator_ and an _Adapter_. The decision-making process is governed by Deep Reinforcement Learning, optimizing task accuracy and communication overhead.
\(\bar{W}_{u}^{t}\) is the average uplink bandwidth:
\[\bar{W}_{u}^{t}(z_{n}^{t})=\frac{W_{u}^{s}}{\{|\mathcal{N}|^{t}\}_{\forall n:z_{n }^{t}=1}}. \tag{3}\]
Here, \(\{|\mathcal{N}|^{t}\}_{\forall n:z_{n}^{t}=1}\) is the number of UEs allocated to server training, and \(W_{u}^{s}\) is the total uplink bandwidth resource.
In Case 1, we bypass the downlink overhead for transmitting adapter weights. This is justified by their minimal size compared to the emulators, as corroborated by [4]. Furthermore, we rely on dedicated channels for this transmission, ensuring efficiency.
**Case 2: Local Training.** Alternatively, if the UE decides to manage its training locally, the server assumes a consultative role. It reviews the UE's current emulator and, if deemed unsuitable for the present task compared to the UE's previous emulator \(E_{n}^{t-1}\), the server dispatches an updated emulator to UE \(n\). Essentially, if the cached emulator on the UE suffices for the subsequent task, the server can endorse the continued use of the same emulator, thereby saving on transmission overhead. We introduce an auxiliary emulator switch indicator \(I_{n}^{t}\) to capture this:
For \(\forall n\in\mathcal{N},\forall t\in\mathcal{T},\text{ and }z_{n}^{t}=0\):
\[I_{n}^{t}\!\!=\!\begin{cases}0,\text{ if }E_{n}^{t}=E_{n}^{t-1}\\ 1,\text{ otherwise.}\end{cases} \tag{4}\]
Then the communication overhead is:
\[d_{l,n}^{t}(z_{n}^{t},E_{n}^{t})\!\!=\!\frac{E_{n}^{t}\times D_{FM}}{r_{d,n}^{ t}}\!\times\!I_{n}^{t}\!\times\!(1\!-\!z_{n}^{t}),\forall n\in\mathcal{N}, \forall t\in\mathcal{T}, \tag{5}\]
where \(D_{FM}\) is the size of the original foundation model and the downlink rate \(r_{d,n}^{t}\) is:
\[r_{d,n}^{t}(z_{n}^{t},E_{n}^{t})=\bar{W}^{t}\log_{2}(1+\frac{\bar{p}^{t}h_{n}^{t }}{\sigma^{2}\bar{W}^{t}}), \tag{6}\]
For both bandwidth and power, average allocations of the aggregate resources on the server are utilized as:
\[\bar{W}^{t}(z_{n}^{t},E_{n}^{t})=\frac{W_{d}^{s}}{\{|\mathcal{N} |^{t}\}_{\forall n:\,z_{n}^{t}=0\text{ and }I_{n}^{t}=1}},\] \[\bar{p}^{t}(z_{n}^{t},E_{n}^{t})=\frac{P_{d}^{t}}{\{|\mathcal{N} |^{t}\}_{\forall n:\,z_{n}^{t}=0\text{ and }I_{n}^{t}=1}}. \tag{7}\]
In Case 2, the proactive approach ensures that the UE always operates with the most appropriate version of the emulator for its tasks. Once equipped with the correct emulator, the UE takes the reins, conducting the training of the adapter on its own. This localized approach eliminates the need for any data uploads to the server but may require additional communication resources for downloading the new emulator.
For the communication overhead, in each step \(t\), the uplink (data transmission for server computing) and downlink (emulator transmission for local computing) happen simultaneously, and the maximum delay of all users at \(t\) is taken as the system latency. Therefore, the delay for each user \(d_{n}^{t}\) can be shown as:
\[d_{n}^{t}(z_{n}^{t},E_{n}^{t})=d_{s,n}^{t}\times z_{n}^{t}+d_{l,n}^{t}\times I _{n}^{t}\times(1-z_{n}^{t}). \tag{8}\]
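The delay model of Eqs. (1)-(8) can be assembled into a short routine (a sketch; the function name `delays` and all constants are illustrative placeholders, with data sizes in bits):

```python
import math

def delays(z, E, I, D, h, p_u, W_u_total=1e5, W_d_total=1e6,
           P_d_total=60.0, D_FM=10.8e9 * 8, sigma2=1e-20):
    """Per-UE delay d_n^t for one step.
    z[n]=1: server training (upload D[n] bits, Eqs. (1)-(3));
    z[n]=0 and I[n]=1: local training with an emulator download (Eqs. (5)-(7))."""
    N = len(z)
    n_up = sum(z) or 1                                   # UEs sharing the uplink
    n_down = sum(1 for n in range(N) if z[n] == 0 and I[n] == 1) or 1
    W_u, W_d, p_d = W_u_total / n_up, W_d_total / n_down, P_d_total / n_down
    d = []
    for n in range(N):
        if z[n] == 1:                                    # Case 1: data upload
            r = W_u * math.log2(1 + p_u[n] * h[n] / (sigma2 * W_u))
            d.append(D[n] / r)
        elif I[n] == 1:                                  # Case 2: fresh emulator
            r = W_d * math.log2(1 + p_d * h[n] / (sigma2 * W_d))
            d.append(E[n] * D_FM / r)
        else:
            d.append(0.0)                                # cached emulator reused
    return d

d = delays(z=[1, 0, 0], E=[0.5, 0.4, 0.6], I=[0, 1, 0],
           D=[400e6 * 8] * 3, h=[1e-10] * 3, p_u=[0.5] * 3)
print(max(d))  # system latency for this step, i.e. the max over UEs in Eq. (8)
```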
**Task accuracy.** In the real-world scenario, tasks naturally differ in their demands and intricacies. To address this and to bring about a more accurate representation of the situation, we introduce a quantified factor \(c_{n}^{t}\). This factor serves as a metric, capturing the inherent complexity of each task. By quantifying the complexity, the system can make more informed decisions, allowing for better allocation of resources and more precise emulator configurations. Thus, the task accuracy is formulated as:
\[\kappa_{n}^{t}(z_{n}^{t},E_{n}^{t})=\begin{cases}f(1)\times\frac{\iota}{c_{n}^{t}+ \iota},\text{ if }z_{n}^{t}=1,\\ f(E_{n}^{t})\times\frac{\iota}{c_{n}^{t}+\iota},\text{ otherwise.}\end{cases} \tag{9}\]
Here, \(E_{n}^{t}\) represents the layer-drop retention ratio, a value less than \(1\). By treating layer retention as a continuous variable, the system gains fine-grained control over the emulator's size, allowing for a dynamic balance between accuracy and communication overhead. The \(f(\cdot)\) captures the accuracy based primarily on \(E_{n}^{t}\), which is curve-fitted based on empirical results from [4], further detailed in Section VI-A. The term \(\frac{\iota}{c_{n}^{t}+\iota}\), where \(\iota>0\) is a complexity weight parameter, serves to scale down \(\kappa_{n}^{t}\) as \(c_{n}^{t}\) increases. As \(c_{n}^{t}\) grows, the scaling factor diminishes, thus reducing \(\kappa_{n}^{t}\).
## III Problem Formulation
The crux of our problem lies in striking an optimal balance between model accuracy and communication overhead. This trade-off is sought by dynamically dictating two primary factors: the computing case (either on the server or locally), and the emulator update frequency and layer-drop retention.
To systematize this dynamic allocation, we employ two matrices, \(\mathbf{z}\) and \(\mathbf{E}\). These matrices capture the allocation patterns for both computing cases and emulator configurations across all users and tasks. Specifically, the entry at the \(n^{\text{th}}\) row and \(t^{\text{th}}\) column of these matrices corresponds to \(z_{n}^{t}\) and \(E_{n}^{t}\), respectively.
Our primary objective, with respect to task accuracy, is to enhance the cumulative accuracy across all tasks:
\[\text{P1: }\max_{\mathbf{z},\mathbf{E}}\sum_{t=1}^{T}\sum_{n=1}^{N}\kappa_{n}^{t}. \tag{10}\]
Concurrently, we are also concerned with the overall communication overhead, aiming to minimize it:
\[\text{P2: }\min_{\mathbf{z},\mathbf{E}}\sum_{t=1}^{T}\max_{n\in\mathcal{N}}(d_{n}^{t}). \tag{11}\]
Synthesizing these objectives, the overarching problem encapsulating both Eq. (10) and Eq. (11) can be concisely formulated as:
\[\max_{\mathbf{z},\mathbf{E}}\left\{\omega_{1}\times\sum_{t=1}^{T}\sum_{n=1} ^{N}\kappa_{n}^{t}-\omega_{2}\times\sum_{t=1}^{T}\max_{n\in\mathcal{N}}(d_{n}^{t})\right\}, \tag{12}\] \[s.t. C1:z_{n}^{t}\in\{0,1\},\forall n\in\mathcal{N},\forall t\in \mathcal{T}, \tag{13}\] \[C2:0\leq E_{n}^{t}\leq 1,\forall n\in\mathcal{N},\forall t\in \mathcal{T}, \tag{14}\]
where the \(\omega_{1},\omega_{2}\) are weight parameters, derived from specific reward settings which are expounded upon in Section IV.
The Emulator-Adapter framework provides flexibility, ensuring that while communication costs are optimized, accuracy is not compromised. The system recognizes that not all tasks are created equal, and its design caters to these differences, striking a balance between efficient resource use and effective task completion. The problem formulated in Eq. (12) tightly couples discrete variables (computing-case decisions) with continuous ones (emulator configurations), making it an inseparable mixed-integer non-linear programming (MINLP) problem, and its sequential nature complicates matters further. It is therefore infeasible to use traditional optimization strategies, and Deep Reinforcement Learning (DRL) algorithms, owing to their superior ability to tackle sequential problems and find near-optimal solutions, need to be considered.
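To see the coupling concretely, even a single step of a toy instance already requires jointly enumerating \(2^{N}\) placement vectors with a grid over \(\mathbf{E}\) (a brute-force sketch; the weights, costs, and discretization are arbitrary illustrations, not the paper's model):

```python
from itertools import product

N = 3
E_grid = [0.2, 0.4, 0.6, 0.8]          # discretized emulator retentions
w1, w2 = 1.0, 0.1                      # objective weights (illustrative)

def accuracy(E, z, c=2.0, iota=10.0):
    f = 1.0 if z == 1 else E           # server uses the full model
    return f * iota / (c + iota)

def delay(E, z, I):
    return 1.0 if z == 1 else (0.5 * E if I == 1 else 0.0)  # toy costs

best = None
for z in product([0, 1], repeat=N):    # 2^N discrete placements
    for E in product(E_grid, repeat=N):
        I = [1] * N                    # assume every local UE needs a new emulator
        obj = (w1 * sum(accuracy(E[n], z[n]) for n in range(N))
               - w2 * max(delay(E[n], z[n], I[n]) for n in range(N)))
        if best is None or obj > best[0]:
            best = (obj, z, E)
print(best)
```

The exponential blow-up of this enumeration across \(T\) sequential steps is exactly what motivates the DRL treatment that follows.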
## IV DRL Environment Setting
At present, model-free DRL methods are widely used in wireless communication scenarios [23], since they can efficiently reach near-optimal points while tackling a number of random and unpredictable factors. For model-free DRL, three key elements are essential to create the DRL environment based on the formulated problem, allowing agents to interact with it and learn satisfactory policies. We therefore provide the detailed settings of these three elements: state, action, and reward.
### _State_
Since the computing case decisions and emulator configuration need to be jointly optimized, involving mixed discrete-continuous actions, we propose to use two agents for optimizing them.
**Agent 1 (case decisions):** The state of Agent 1 \(s_{1}^{t}\) includes (1) average channel gains of currently finished tasks of all users \(\{h_{1}^{t-1},h_{2}^{t-1},\ldots,h_{N}^{t-1}\}\). (2) the current task complexities of all users \(\{c_{1}^{t},c_{2}^{t},\ldots,c_{N}^{t}\}\). (3) current local data sizes of all users \(\{D_{1}^{t},D_{2}^{t},\ldots,D_{N}^{t}\}\). (4) the currently cached emulators on different UEs \(\{E_{1}^{t},E_{2}^{t},\ldots,E_{N}^{t}\}\).
**Agent 2 (emulator configuration):** The state of Agent 2, \(s_{2}^{t}\), involves: (1) the currently cached emulators on different UEs, as in Agent 1. (2) the action \(a_{1}^{t}\) from Agent 1 (case decisions). (3) the task complexities of all users, as in Agent 1.
### _Action_
The actions are intuitive. For Agent 1, controlling the case decisions, the discrete action contains the decisions for all users \(\{z_{1}^{t},z_{2}^{t},\ldots,z_{N}^{t}\}\); for Agent 2, handling the emulator configuration allocation, continuous actions \(\{E_{1}^{t},E_{2}^{t},\ldots,E_{N}^{t}\}\) are used.
### _Reward_
Utilizing the Centralized Training Decentralized Execution (CTDE) framework, a global reward gives feedback to the Critic, which learns the state value and then updates the two Actors. This reward \(R_{g}^{t}\) is composed of (1) the average task accuracy among users at the current step: \(\omega_{p}\times\frac{1}{N}\times\sum_{n\in\mathcal{N}}\kappa_{n}^{t}\); in the simulation, accuracy refers to the language model perplexity, for which lower is preferable (further detailed in Section VI-A), so the weight is set negative; and (2) the maximum communication delay at \(t\): \(\omega_{d}\times\max_{n\in\mathcal{N}}(d_{n}^{t})\), where \(\omega_{p},\omega_{d}\) are negative weight parameters.
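A direct transcription of this reward is short (a sketch; the weight values are placeholders):

```python
def global_reward(perplexities, delays, w_p=-1.0, w_d=-0.1):
    # both components are penalties, hence the negative weights
    return w_p * sum(perplexities) / len(perplexities) + w_d * max(delays)

print(global_reward([12.3, 15.1, 11.8], [0.4, 1.2, 0.7]))
```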
## V Methodology
### _Preliminary_
Proximal Policy Optimization (PPO) by OpenAI [24] offers significant advancements over traditional policy gradient algorithms. PPO's strengths can be attributed to its enhanced sample efficiency and the introduction of a policy constraint.
1. _Sample Efficiency Enhancement:_ PPO uses two distinct policies: \(\pi_{\theta^{\prime}}\) for sampling trajectories during training, and \(\pi_{\theta}\) for evaluation. This separation optimizes the algorithm's sample efficiency. The expectation relationship between them is expressed as: \[\mathbb{E}_{(s^{t},a^{t})\sim\pi_{\theta}}[A^{t}]= \mathbb{E}_{(s^{t},a^{t})\sim\pi_{\theta^{\prime}}}[\frac{\pi_{\theta}(a^{t}|s^ {t})}{\pi_{\theta^{\prime}}(a^{t}|s^{t})}A^{t}].\] (15) where \(A^{t}\) is the advantage function that estimates how good the selected action is.
2. _Introduction of Policy Constraint:_ Switching between the data-sampling policies doesn't eliminate variances between their objective functions. To tackle this, a KL-divergence penalty is integrated into the reward formulation. Due to the impracticality of computing the KL divergence for every observation, the objective function is redefined as [24]: \[\mathbb{E}_{(s^{t},a^{t})\sim\pi_{\theta^{\prime}}}[f^{t}(\theta)A^{t}],\] (16) where \[f^{t}(\theta)=\min\{r^{t}(\theta),\text{clip}(r^{t}(\theta),1-\epsilon,1+ \epsilon)\}.\] Here, \(r^{t}(\theta)\) represents the ratio between the two policies: \(r^{t}(\theta)=\frac{\pi_{\theta}(a^{t}|s^{t})}{\pi_{\theta^{\prime}}(a^{t}|s^{t })}\). And the advantage function \(A^{t}\) is calculated via Generalized Advantage Estimation (GAE) [25] (see the sketch after this list): \[A^{t}=\delta^{t}+(\gamma\lambda)\delta^{t+1}+...+(\gamma\lambda) ^{\tilde{T}-1}\delta^{t+\tilde{T}-1},\] (17) where \[\delta^{t}=R^{t}+\gamma V_{\phi^{\prime}}(s^{t+1})-V_{\phi^{\prime}}(s^{t }).\] (18)
The gradient associated with this problem is captured by: \[\Delta\theta=\mathbb{E}_{(s^{t},a^{t})\sim\pi_{\theta^{\prime}}}[\bigtriangledown f^{t}( \theta)A^{t}].\] (19)
3. _Value Network (Critic) Implementation:_ PPO employs a Critic reminiscent of other Actor-Critic algorithms. The loss function is defined as: \[L(\phi)=[V_{\phi}(s^{t})-(A^{t}+V_{\phi^{\prime}}(s^{t}))]^{2}.\] (20) In this context, \(V(s)\) is the widely-recognized state-value function [26]. With \(\phi\) being the learned parameter, it's updated by minimizing \(L(\phi)\). The target state-value function, parameterized by \(\phi^{\prime}\), is periodically updated in alignment with \(\phi\). This approach of using a target value is a staple in RL, a strategy embedded in numerous algorithms [26].
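As referenced above, a minimal numerical sketch of the GAE recursion (Eqs. (17)-(18)) and the clipped surrogate (Eq. (16)), assuming PyTorch; the function names and toy values are illustrative:

```python
import torch

def gae(rewards, values, gamma=0.99, lam=0.95):
    """Generalized Advantage Estimation. `values` carries one extra
    bootstrap entry V(s_T). Returns A^t for every step t."""
    T = len(rewards)
    adv = torch.zeros(T)
    running = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]  # Eq. (18)
        running = delta + gamma * lam * running                  # Eq. (17), unrolled
        adv[t] = running
    return adv

def clipped_objective(logp_new, logp_old, adv, eps=0.2):
    """PPO clipped surrogate: E[min(r*A, clip(r, 1-eps, 1+eps)*A)]."""
    ratio = torch.exp(logp_new - logp_old)                       # r^t(theta)
    return torch.min(ratio * adv,
                     torch.clamp(ratio, 1 - eps, 1 + eps) * adv).mean()

rewards = torch.tensor([1.0, 0.5, -0.2])
values = torch.tensor([0.8, 0.7, 0.3, 0.1])   # includes the bootstrap value
adv = gae(rewards, values)
print(clipped_objective(torch.randn(3), torch.randn(3), adv))
```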
### _Hmppo_
In this study, we introduce the Hybrid Multi-agent PPO (HMPPO), a specialized variant of the multi-agent PPO (MAPPO) algorithm tailored for both discrete and continuous actions. Notably, the intrinsic design of the PPO algorithm emphasizes evaluating state values (a.k.a. V value) over action values (a.k.a. Q value), which is beneficial as it simplifies the learning process for agents, as evidenced in [27]. Consequently, in the PPO structure, the action does not form part of the Critic's input, equipping PPO to seamlessly cater to both discrete and continuous action sets.
Building on the principles of the CTDE paradigm, as presented in [28], and integrating the hybrid structure for addressing mixed actions, we elucidate the update functions associated with the two Actors and a single Critic in our proposed HMPPO framework:
\[\Delta\theta_{1} =\!\mathbb{E}_{1}^{t}[\nabla_{\theta_{1}}\min\{r^{t}(\theta_{1}) A^{t},\text{clip}(r^{t}(\theta_{1}),1-\epsilon,1+\epsilon)A^{t}\}],\] \[\Delta\theta_{2} =\!\mathbb{E}_{2}^{t}[\nabla_{\theta_{2}}\min\{r^{t}(\theta_{2}) A^{t},\text{clip}(r^{t}(\theta_{2}),1-\epsilon,1+\epsilon)A^{t}\}], \tag{21}\] \[L^{t}(\phi) =[V_{\phi}(\{s_{1}^{t};s_{2}^{t}\})-(A^{t}+V_{\phi^{\prime}}(\{s _{1}^{t};s_{2}^{t}\}))]^{2}, \tag{22}\]
where \(\{s_{1}^{t};s_{2}^{t}\}\) is the concatenation of the two states.
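A minimal sketch of the hybrid actor heads and the shared centralized critic (assuming PyTorch; the class names, layer sizes, and toy state dimensions are illustrative, not the exact networks used in the experiments). The discrete head emits per-UE placement decisions, the continuous head a Gaussian over emulator retentions, and one critic evaluates \(V(\{s_{1}^{t};s_{2}^{t}\})\):

```python
import torch
import torch.nn as nn
from torch.distributions import Bernoulli, Normal

N = 6  # number of UEs

class DiscreteActor(nn.Module):       # Agent 1: placement decisions z
    def __init__(self, s_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(s_dim, 64), nn.Tanh(), nn.Linear(64, N))
    def forward(self, s):
        return Bernoulli(logits=self.net(s))

class ContinuousActor(nn.Module):     # Agent 2: emulator retentions E
    def __init__(self, s_dim):
        super().__init__()
        self.mu = nn.Sequential(nn.Linear(s_dim, 64), nn.Tanh(), nn.Linear(64, N))
        self.log_std = nn.Parameter(torch.zeros(N))
    def forward(self, s):
        return Normal(self.mu(s), self.log_std.exp())

class CentralCritic(nn.Module):       # V({s1; s2}), cf. Eq. (22)
    def __init__(self, s1_dim, s2_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(s1_dim + s2_dim, 64), nn.Tanh(),
                                 nn.Linear(64, 1))
    def forward(self, s1, s2):
        return self.net(torch.cat([s1, s2], dim=-1)).squeeze(-1)

s1, s2 = torch.randn(4 * N), torch.randn(3 * N)    # toy state vectors
z = DiscreteActor(4 * N)(s1).sample()              # in {0,1}^N
E = ContinuousActor(3 * N)(s2).sample().sigmoid()  # squashed into (0,1) for C2
V = CentralCritic(4 * N, 3 * N)(s1, s2)
print(z, E, V)
```

Because the critic scores states rather than actions, the same value network can train both heads despite their mixed discrete-continuous action spaces, which is the design choice HMPPO exploits.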
## VI Simulations
### _Numerical Settings_
We configured the number of UEs to range from \(6\) to \(9\) across various experimental setups. The foundation model has a size of \(10.8\,\text{GB}\), which corresponds to the _GPT-3 2.7B_ large language model [29]. The number of tasks \(T\) for each UE is set to \(50\). The emulator retention is adjusted between \(0.2\) and \(0.8\). Local data sizes, denoted as \(D_{n}^{t}\), are randomly chosen from a uniform distribution spanning \(300\,\text{MB}\) to \(500\,\text{MB}\). The upload transmission power is uniformly selected from a range of \(200\,\text{mW}\) to \(1\,\text{W}\). Assuming the use of Frequency Division Duplexing (FDD) to allocate distinct bandwidth resources for uplink and downlink transmissions [30], the aggregate bandwidth limits are set to \(10^{5}\,\text{Hz}\) for uplink and \(10^{6}\,\text{Hz}\) for downlink. With regard to channel gain, we assume the channel remains coherent over short intervals of \(10\,\text{ms}\). Small-scale fading adheres to the Rayleigh distribution, with a path loss exponent of \(\alpha=2\). The total power resource allocated for downlink is \(60\,\text{W}\).
For the accuracy function \(f(E_{n}^{t})\) in \(\kappa_{n}^{t}\), we performed curve-fitting on the relationship between layer-drop retention and large language model (LLM) perplexity from [4], yielding \(f=25.2(E_{n}^{t})^{2}-43.1E_{n}^{t}+31.9\) with an \(R^{2}\) score of \(0.97\), indicating a close fit to the observed data. We then set \(\kappa_{n}^{t}=f(E_{n}^{t})\times\frac{e_{n}^{t}+\iota}{c_{n}^{t}}\), where \(\iota=10\) is the complexity weight parameter. Note that we use \(\frac{e_{n}^{t}+\iota}{c_{n}^{t}}\) instead of its reciprocal form in Eq. (9) because we measure perplexity, for which lower is better. Perplexity quantifies how well a probability distribution predicts a sample [4]; a lower perplexity indicates that the model's predictions are closer to the true distribution. Hence, in our simulation, task accuracy is represented by LLM perplexity, where **lower values are preferable**. All experiments were conducted on a single NVIDIA RTX 2080 Ti. We employed \(3\times 10^{5}\) training steps, with evaluation every \(500\) training steps, and all experiments were conducted under the same global random seed.
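A minimal sketch of this curve-fitting step is shown below; the retention-perplexity sample points are hypothetical placeholders rather than the actual measurements extracted from [4].

```python
import numpy as np

retention = np.array([0.2, 0.4, 0.6, 0.8, 1.0])        # E_n^t values (hypothetical)
perplexity = np.array([24.3, 18.7, 15.2, 13.6, 14.0])  # perplexities (hypothetical)

coeffs = np.polyfit(retention, perplexity, deg=2)      # degree-2 fit f(E)
fit = np.poly1d(coeffs)

# R^2 = 1 - SS_res / SS_tot measures goodness of fit
ss_res = np.sum((perplexity - fit(retention)) ** 2)
ss_tot = np.sum((perplexity - perplexity.mean()) ** 2)
print("f(E) coefficients:", coeffs, " R^2 =", 1 - ss_res / ss_tot)
```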
### _Metrics and baselines_
The most important metrics in the paper are:
1. The DRL episodic reward, the direct feedback signal for the agents, which serves as intuitive evidence for comparing the algorithms' performance.
2. The average perplexity over all tasks of all users, which reflects task accuracy.
3. The total communication delay of all UEs across \(T\) tasks, which captures the communication overhead.
4. Beyond the above metrics corresponding to the reward composition in Section IV, we also include the number of emulator changes, to show whether the algorithms capture the sequential nature of the problem and reduce emulator changes to lower the downloading communication overhead.
We design the following baseline algorithms to compare with our proposed HMPPO:
* **Independent PPO (IPPO) [31]**: A straightforward approach to utilizing RL in a cooperative interactive setting involves deploying two independent RL agents that interact with each other. We employed this concept using two independent PPO agents.
* **Random**: Both agents select actions at random, representing system performance in the absence of any optimization strategy and serving as a lower-bound baseline.
## VII Result analysis
As illustrated in Fig. 2, the performance of the HMPPO method consistently surpasses both IPPO and the random approach across all metrics, namely reward, delay, perplexity, and emulator changes. During the initial stages, HMPPO and IPPO exhibit comparable performance; in terms of total delay, IPPO even momentarily outperforms HMPPO around the \(100\,\)k-step mark, as shown in Fig. 2(b). However, HMPPO eventually demonstrates marked improvement across the board: it exceeds IPPO by margins of \(33.79\%\), \(80.49\%\), \(23.53\%\), and \(91.55\%\) for reward, delay, perplexity, and emulator changes, respectively. IPPO, in turn, maintains a consistent lead over the random approach. Since total delay and task perplexity directly influence the reward, their curves closely track each other. The emulator-change metric, however, reveals distinct and interesting behaviour. While both HMPPO and IPPO initially increase the number of emulator changes to better adapt and enhance the total reward, only HMPPO eventually learns to decrease it, around the \(150\,\)k-step mark. This reduces the total delay, as evident in Fig. 2(b), which subsequently contributes to the reward boost shown in Fig. 2(a). A further observation is the resilience of HMPPO compared to IPPO: despite reaching similar early peaks, IPPO exhibits considerable fluctuations in performance, as highlighted in Fig. 2(b) and Fig. 2(c), suggesting that HMPPO provides a more stable and reliable optimization approach.
From Table I, it is evident that HMPPO consistently outperforms both IPPO and the random approach across all UE numbers in terms of reward, total delay, and task perplexity. As the number of UEs increases, all methods exhibit a decrease in reward and an increase in both total delay and task perplexity, but HMPPO degrades much more gradually, highlighting its scalability and robustness. The delay analysis further showcases HMPPO's efficiency: it maintains a reasonably low total delay even as UE numbers grow, whereas IPPO's delay increases substantially, especially when transitioning from 7 to 8 UEs. On task perplexity, HMPPO again emerges superior, offering the lowest values across the board; this is in stark contrast to the random method, which exhibits the highest perplexity, reflecting its general inefficiency. Finally, in terms of reward, HMPPO consistently achieves the least negative values, underscoring its performance advantage. Overall, the trends in the table affirm HMPPO's effectiveness and scalability, making it a strong choice in environments with varying UE numbers.
## VIII Conclusion
Throughout this study, we pioneered an approach that combines Mobile Edge Computing (MEC) with the fine-tuning of foundation models to optimize local device tasks. The introduction of our Emulator-Adapter framework reflects our commitment to optimizing performance without overburdening device resources. Our resource allocation strategy ensures that the system thrives in a decentralized setting. Our results, particularly with the HMPPO method, underline the efficacy of this approach: with remarkable improvements across key metrics, such as a \(33.79\%\) enhancement in reward compared to IPPO, our approach stands validated. The application of hybrid multi-agent Deep Reinforcement Learning (DRL) further confirmed the resilience and adaptability of our system model.

Fig. 2: Simulation results. The first four sub-figures illustrate complete training results on different metrics (i.e., rewards, system delay, task perplexity, and emulator switch times), and the final two sub-figures delineate the overall results on delay and perplexity across all scenarios, where the number of UEs varies from 6 to 9.
In summation, our research offers a promising pathway in the realm of AI, particularly for the deployment of extensive machine learning models on everyday devices. While we have made significant strides, the horizon is vast, and we anticipate even more refined and efficient solutions in the future.
|
2305.13442 | The fragility of thin discs in galaxies -- II. Thin discs as tracers of
the assembly history of galaxies | Thin galactic discs and nuclear stellar discs (NSDs) are fragile structures
that can be easily disturbed by merger events. By studying the age of the
stellar populations in present-day discs, we can learn about the assembly
history of galaxies and place constraints on their past merger events.
Following on the steps of our initial work, we explore the fragility of such
disc structures in intermediate-mass-ratio dry encounters using the previously
constructed $N$-body model of the Fornax galaxy NGC 1381 (FCC 170), which hosts
both a thin galactic disc and a NSD. We dismiss major and minor encounters, as
the former were previously shown to easily destroy thin-disc structures,
whereas the latter take several Hubble times to complete in the specific case
of FCC 170. The kinematics and structure of the thin galactic disc are
dramatically altered by the mergers, whereas the NSD shows a remarkable
resilience, exhibiting only a smooth increase of its size when compared to the
model evolved in isolation. Our results suggest that thin galactic discs are
better tracers for intermediate-mass-ratio mergers, while NSDs may be more
useful for major encounters. Based on our simulations and previous analysis of
the stellar populations, we concluded that FCC 170 has not experienced any
intermediate-mass-ratio dry encounters for at least $\sim$10 Gyr, as indicated
by the age of its thin-disc stellar populations. | Pablo M. Galán-de Anta, Pedro R. Capelo, Eugene Vasiliev, Massimo Dotti, Marc Sarzi, Enrico Maria Corsini, Lorenzo Morelli | 2023-05-22T19:37:10Z | http://arxiv.org/abs/2305.13442v1 | The fragility of thin discs in galaxies - II. Thin discs as tracers of the assembly history of galaxies
###### Abstract
Thin galactic discs and nuclear stellar discs (NSDs) are fragile structures that can be easily disturbed by merger events. By studying the age of the stellar populations in present-day discs, we can learn about the assembly history of galaxies and place constraints on their past merger events. Following on the steps of our initial work, we explore the fragility of such disc structures in intermediate-mass-ratio dry encounters using the previously constructed \(N\)-body model of the Fornax galaxy NGC 1381 (FCC 170), which hosts both a thin galactic disc and a NSD. We dismiss major and minor encounters, as the former were previously shown to easily destroy thin-disc structures, whereas the latter take several Hubble times to complete in the specific case of FCC 170. The kinematics and structure of the thin galactic disc are dramatically altered by the mergers, whereas the NSD shows a remarkable resilience, exhibiting only a smooth increase of its size when compared to the model evolved in isolation. Our results suggest that thin galactic discs are better tracers for intermediate-mass-ratio mergers, while NSDs may be more useful for major encounters. Based on our simulations and previous analysis of the stellar populations, we concluded that FCC 170 has not experienced any intermediate-mass-ratio dry encounters for at least \(\sim\)10 Gyr, as indicated by the age of its thin-disc stellar populations.
keywords: galaxies: elliptical and lenticular, cD - galaxies: interactions - galaxies: kinematics and dynamics - galaxies: structure - methods: numerical
## 1 Introduction
The most widely accepted cosmological paradigm is the \(\Lambda\)-cold dark matter (CDM) model, wherein galaxies form in a bottom-up fashion (White & Rees, 1978) following the hierarchical growth of the dark matter structure in which they are embedded. In this way, smaller galaxies form first and later grow into larger systems through interactions and merging events, with galaxy mergers also shifting galaxies from disc to spheroidal morphologies (including bulges in disc galaxies; Toomre, 1977). Galaxy evolution is therefore dictated by the rate at which galaxies can form stars and merge together, with the latter process being generally more frequent in group and field environments, where low relative velocities between galaxies favour encounters. This is in contrast to dense cluster environments, where galaxies evolve mainly through a combination of gravitational and hydrodynamic processes such as tidal interactions and ram-pressure stripping (Moore et al., 1996; Angulo et al., 2009; Yun et al., 2019; Joshi et al., 2020; Galan-de Anta et al., 2022).
While some galaxies may be observed interacting with other galaxies, or exhibiting imprints of past merger events (e.g. tidal features or shells; Malin & Carter, 1980, 1983; Schweizer & Seitzer, 1992; Schweizer, 1996), others do not present any sign of past encounters. However, observational signatures of mergers fade with time, making in-depth studies of the stellar populations necessary to unveil the assembly history of galaxies (Davison et al., 2021; Mazzilli Ciraulo et al., 2021).
Darg et al. (2010) attempted to trace the rate of past merger events by investigating the properties of close galactic pairs and interacting galaxies. These kinds of studies draw on the depth and area covered by recent imaging surveys, finding that merging gas-rich spiral galaxies present intense star formation activity and merging gas-poor ellipticals do not increase their star formation activity. Eliche-Moral et al. (2018), on the other hand, studied the left-over remnant morphologies of major mergers in simulations, finding that S0 galaxies could be the relic of a past major merger. The lack of constraints on the assembly history of galaxies leaves unchecked some of the predictions of the hierarchical paradigm. For example, the formation of the most massive galaxies is not completely understood. Following the \(\Lambda\)-CDM model, these types of objects should have assembled more recently than observed (De Lucia et al., 2006; Khochfar & Burkert, 2006).
From a theoretical standpoint, past minor mergers with mass ratios of at least 1:10 (i.e. the ratio of the total mass of the secondary galaxy to that of the primary one is at least 0.1) are thought to have occurred in most galaxies (Ostriker & Tremaine, 1975; Maller et al., 2006; Khochfar & Burkert, 2006; Fakhouri & Ma, 2008; Stewart et al., 2008; Poole et al., 2017; Sotillo-Ramos et al., 2022). A large fraction of the current galaxy population is expected to have experienced at least one past major encounter of mass ratio \(\sim\)1:3 in the last \(\sim\)2-3 Gyr (Bell et al., 2006; Lotz et al., 2008; Lin et al., 2008; Tonnesen & Cen, 2012), as also confirmed by observations (Lin et al., 2004; Barton et al., 2007; Woods & Geller, 2007; Yadav & Chen, 2018; Shah et al., 2022). When looking at higher redshifts, violent interactions often affect both the morphology and kinematics of disc galaxies (Hammer et al., 2005; Flores et al., 2006; Puech et al., 2008). Hence, the fragility of large-scale discs against mergers might be used as a tracer of past merger events (Barnes & Hernquist, 1992; Hammer et al., 2009; Deeley et al., 2017). However, it was shown that discs might also survive mergers of different mass ratios (from 1:3 to \(<\)1:10; Abadi et al., 2003; Hopkins et al., 2009; Purcell et al., 2009; Capelo et al., 2015), and remnants of wet major mergers could even re-form a galactic disc (Athanassoula et al., 2016; Capelo & Dotti, 2017; Sparre & Springel, 2017; Peschken et al., 2020). Toomre & Toomre (1972) laid the first stone by pointing out that mergers can significantly perturb the morphology of discs, even turning them into elliptical galaxies. In fact, the current observed properties of low- and intermediate-mass ellipticals may be the result of past major mergers between spiral galaxies (Hernquist & Quinn, 1988; Mihos & Hernquist, 1994; Barnes & Hernquist, 1996; Di Matteo et al., 2005; Springel et al., 2005; Naab et al., 2006).
In many elliptical galaxies, kpc-scale embedded discs are easily found, forming a common family of structures in all S0 galaxies (Kormendy, 1985; Bender et al., 1992; Ferrarese et al., 2006; Emsellem et al., 2007). Many galaxies may also host disc-like substructures such as nuclear stellar discs (NSDs), first found in Hubble Space Telescope images (Jaffe et al., 1994; van den Bosch et al., 1994). These NSDs reside in the nuclear regions of up to 20 per cent of galaxies of all kinds (Ledo et al., 2010) and consist of razor-thin discs a few hundred parsecs across (Pizzella et al., 2002). The properties of the stellar populations of NSDs were derived for a few galaxies (Corsini et al., 2016; Sarzi et al., 2016), with the studies of Ledo et al. (2010) and Sarzi et al. (2015) using, for the first time, NSDs as tracers of the assembly history of galaxies. They performed a simple set of \(N\)-body mergers consisting of an NSD, a stellar halo, and a central black hole (BH) in interaction with a secondary BH. Sarzi et al. (2015) explored a wide range of mergers, with mass ratios 1:10, 1:5, and 1:1, all on circular orbits and with different inclinations, showing that NSDs may survive some 1:5 encounters and are entirely destroyed in 1:1 mergers.
Sarzi et al. (2015) demonstrated that NSDs may serve as tracers of the most recent merger event, albeit under idealised conditions. In this context, our work aims to investigate the robustness of both kpc-scale thin discs and NSDs against intermediate-mass-ratio mergers, building on our previous research (Galan-de Anta et al., 2023; hereafter Paper I). Using FCC 170 as our primary galaxy and subjecting it to bombardment, we employ a tailored \(N\)-body model to assess the resilience of its discs to mergers. Taking into account the findings of Pinna et al. (2019), who indicate that both the thin disc and NSD of FCC 170 are \(\sim\)10 Gyr old, we deduce that the age of these stellar components provides a proxy for the look-back time of the last merger event, if the thin discs in our model turn out to be fragile against previous intermediate-mass-ratio encounters.
In Section 2, we describe how to build the two galaxies used in the merger simulations and provide the details on how to set up the initial conditions of the merger and how to run the simulations. In Section 3, we study the survivability of both the kpc-scale thin disc and NSD against the mergers. Last, in Section 4 we give our conclusions.
## 2 Setting up and running \(N\)-body simulations
### Initial conditions of isolated galaxies
We use the Agama stellar-dynamical toolbox (Vasiliev, 2019) to create equilibrium models of both merging galaxies. In Paper I, we described in detail the approach for constructing a dynamical model for FCC 170 that matches the observational constraints. In brief, the galaxy is composed of several components described by distribution functions (DFs) in action space. For any choice of DF parameters, a self-consistent equilibrium model is constructed iteratively: we first assume a gravitational potential, then compute the density generated by the DF of each component in this potential, update the total potential from Poisson's equation, and repeat the procedure a few times until convergence. The observable properties of the model (projected density and kinematic maps described by Gauss-Hermite moments) are then compared with the observations, and the parameters of the DFs are varied until a good match is achieved (after many thousands of model evaluations). Finally, we create an \(N\)-body realisation of the best-fitting model and verify that it remains in an equilibrium state for many gigayears, apart from a gradual increase of the thickness of the NSD caused by numerical relaxation. We adopt this model as the primary galaxy.
For the secondary galaxy, we use a simpler approach without extensive parameter search. Namely, we assume that it is a spherical system composed of three components: a central supermassive BH (SMBH), a stellar bulge, and a DM halo. The ratio of SMBH, stellar, and total masses of the secondary
galaxy to the primary one is taken to be 1:4, corresponding to the boundary between major and minor mergers (e.g. Capelo et al., 2015; see also Mayer, 2013 for a discussion). We assume that the DM halo follows an exponentially truncated Navarro-Frenk-White (Navarro et al., 1996) profile,
\[\rho=\rho_{0}\,\frac{a}{r}\frac{1}{(1+r/a)^{2}}\exp{\left[-\left(\frac{r}{R_{ \rm cutoff}}\right)^{2}\right]}, \tag{1}\]
where \(a\) is the scale radius, \(R_{\rm cutoff}\) is the cutoff radius, and \(\rho_{0}\) is the normalisation of the density profile (the total mass is computed by numerical integration). The stellar component follows the de Vaucouleurs (1948) profile. We assign the 3D half-mass radius (i.e. the Lagrangian radius at 50 per cent of the mass) of the stellar profile from the scaling relation for spheroidal galaxies (Shen et al., 2003), taking into account that the effective radius of the galaxy is about 30 per cent smaller than the 3D half-mass radius for a spheroid following the de Vaucouleurs profile. The values of stellar mass and half-mass radius for the secondary (see Table 1) are consistent with the findings for elliptical galaxies reported by Robertson et al. (2006) and also with the fundamental scaling relations found for local galaxies and stellar bulges as studied in Hon et al. (2023).
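The total mass of the truncated profile in Eq. (1) has no simple closed form, so it is obtained numerically; a minimal sketch of that quadrature is shown below, with the scale radius chosen arbitrarily for illustration.

```python
import numpy as np
from scipy.integrate import quad

def rho_nfw_trunc(r, rho0, a, r_cut):
    # Exponentially truncated NFW profile, Eq. (1)
    return rho0 * (a / r) / (1 + r / a) ** 2 * np.exp(-(r / r_cut) ** 2)

def total_mass(rho0, a, r_cut):
    # M = integral of 4 pi r^2 rho(r) dr; the truncation makes it converge
    integrand = lambda r: 4 * np.pi * r ** 2 * rho_nfw_trunc(r, rho0, a, r_cut)
    m, _ = quad(integrand, 0.0, 10 * r_cut)
    return m

# Example: normalise rho0 so the secondary halo encloses ~6.2e11 Msun
# (total mass from Table 1); the scale radius a = 20 kpc is an assumption.
a, r_cut = 20.0, 135.8                        # kpc
rho0 = 6.2e11 / total_mass(1.0, a, r_cut)     # Msun / kpc^3
print(f"rho0 = {rho0:.3e} Msun/kpc^3")
```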
After fixing the structural properties of all galaxy components, we determine the DFs of the stellar bulge and DM halo from the Eddington inversion formula, and use it to assign particle velocities. For consistency with what was done in Paper I, we choose the individual mass of the stellar and DM particles of the secondary galaxy to be the same as that of the particles of the primary one, i.e. \(m_{*}=7.32\times 10^{3}\) M\({}_{\sun}\) and \(m_{\rm DM}=2.48\times 10^{5}\) M\({}_{\sun}\). Hence, the number of particles in the primary galaxy is four times that of the secondary one. The two central SMBHs in our simulations have masses larger than ten times the mass of any individual DM particle. This guarantees that both BHs sink into the centre of the merger remnant by the end of the simulation and are not excessively perturbed by DM particles (Capelo et al., 2015). The main values of masses and radii of both the primary FCC 170 \(N\)-body model and the secondary spheroid model are tabulated in Table 1.
### Orbit configuration
We are interested in reproducing the initial conditions of a merger consistent with the current \(\Lambda\)-CDM cosmological context. Benson (2005) found that the most common type of orbit in cosmological simulations is the parabolic orbit (with eccentricity \(e=1\) and binding energy \(E=0\)), whereas Khochfar & Burkert (2006) found that about 85 per cent of parabolic encounters have a first pericentric distance larger than 10 per cent of the virial radius1 of the primary galaxy.
Footnote 1: The virial radius is usually defined as the radius at which the matter density (including both baryonic and DM) is 200 times the critical density of the Universe.
The pericentre of a particular orbit is defined by its eccentricity, the orbital angular momentum of the primary-secondary system, and the virial masses of the two galaxies. The initial separation between the two galaxies is set to \(d=R_{\rm cutoff,1}+R_{\rm cutoff,2}\), with \(R_{\rm cutoff,1}\) and \(R_{\rm cutoff,2}\) the cutoff radii2 of the DM haloes of the primary and secondary galaxy, respectively. By setting the initial separation larger than the sum of the cutoff radii of the DM haloes, we minimise any tidal effect that could alter the initial morphology of the two galaxies during the early stages of the merger. By adjusting the initial orbital angular momentum (i.e. the initial velocities of the galaxies), we can pre-select the first pericentric distance to be about 10-20 per cent of the cutoff radius of the primary. We also vary the initial inclination angle \(i\) of the primary with respect to the secondary in order to design different encounter geometries, ranging from the co-rotating co-planar encounter to the counter-rotating co-planar one. We thus set up five encounters with initial inclinations \(i=0^{\circ}\), \(45^{\circ}\), \(90^{\circ}\), \(135^{\circ}\), and \(180^{\circ}\), with \(i=0^{\circ}\) being the co-rotating co-planar encounter and \(i=180^{\circ}\) the counter-rotating co-planar one. The name of each merger, together with its inclination angle and the main orbital parameters, is tabulated in Table 2.
Footnote 2: The cutoff radius adopts a similar value to the virial radius of the galaxy.
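For reference, a point-mass sketch of how the parabolic initial conditions can be computed from the masses, the initial separation, and the target pericentre is shown below. Since the live haloes are extended, the realized first pericentric distances (Table 2) deviate slightly from this idealized two-body estimate.

```python
import numpy as np

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

def parabolic_ics(m1, m2, d, r_peri):
    """Relative velocity components for an e = 1 (E = 0) two-body orbit."""
    mu = G * (m1 + m2)
    v = np.sqrt(2 * mu / d)                 # E = 0 fixes the relative speed at d
    L = np.sqrt(2 * mu * r_peri)            # e = 1: semi-latus rectum p = 2 r_peri
    v_tan = L / d                           # tangential component sets the pericentre
    v_rad = -np.sqrt(v ** 2 - v_tan ** 2)   # infalling (negative radial) branch
    return v_rad, v_tan

# Values taken from Tables 1-2: total masses, initial separation, pericentre
v_rad, v_tan = parabolic_ics(24.8e11, 6.2e11, 378.8, 36.5)
print(f"v_rad = {v_rad:.1f} km/s, v_tan = {v_tan:.1f} km/s")
```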
### Details of the runs
To evolve our \(N\)-body mergers, we use the code gizmo (Hopkins, 2015), as explained in section 4.1 of Paper I. The softening lengths are equal to those utilised in Paper I, i.e. \(\varepsilon_{\rm DM}=50\) pc, \(\varepsilon_{\rm bulge}=\varepsilon_{\rm thin-disc}=\varepsilon_{\rm thick-disc}=10\) pc, \(\varepsilon_{\rm NSD}=5\) pc, and \(\varepsilon_{\rm SMBH}=1\) pc, and were chosen to properly resolve the vertical structure of each component in our FCC 170 \(N\)-body model. It took about 200 hours of wall-clock time to reach a total integration time of 10 Gyr on 480 processor cores spread over 15 nodes. As with the \(N\)-body FCC 170 model in isolation, we produce up to 200 snapshots for each merger, equally spaced in time by 50 Myr. The total number of particles in each merger is \(N=1.71\times 10^{7}\), of which \(N_{*}=4.64\times 10^{6}\) are stellar particles. Before setting up the merger initial conditions, we evolve the secondary galaxy in isolation, verifying that it remains in equilibrium during the run, with negligible changes in its Lagrangian radii over 10 Gyr.

Table 1: Properties of the merging galaxies.

| Galaxy | \(M_{\rm total}\) [\(10^{11}\) M\({}_{\sun}\)] | \(R_{\rm cutoff}\) [kpc] | \(M_{\rm BH}\) [\(10^{6}\) M\({}_{\sun}\)] | \(M_{*}\) [\(10^{9}\) M\({}_{\sun}\)] | \(R_{50}\) [kpc] |
|---|---|---|---|---|---|
| Primary | 24.8 | 243.0 | 31.1 | 27.2 | 1.68 |
| Secondary | 6.2 | 135.8 | 7.8 | 6.8 | 1.25 |

Table 2: Main parameters of the mergers.

| Galaxy merger | \(R_{\rm initial}\) [kpc] | \(r_{\rm peri}^{\rm(first)}\) [kpc] | \(i\) [deg] |
|---|---|---|---|
| co-co | 378.8 | 36.5 | 0 |
| 45-tilted | 378.8 | 37.1 | 45 |
| polar | 378.8 | 36.5 | 90 |
| 135-tilted | 378.8 | 36.8 | 135 |
| co-ret | 378.8 | 37.0 | 180 |
In Figure 1, we show the evolution of the thin disc of the primary during the co-planar co-rotating merger at different stages: the initial conditions, first, second, and third pericentric passages, and remnant phase at 10 Gyr. We focus on the primary thin disc to show the tidal disruption of the disc throughout the merger. We also show the evolution of the NSD particles in Figure 2.
Since the secondary galaxy is gradually disrupted by the tidal forces of the primary, it proves useful to estimate its mass evolution as a function of time. To obtain the bound mass of the secondary galaxy in a given snapshot, we first compute the gravitational potential created by all of its particles and then add the kinetic energy of their motion with respect to the galaxy centre, obtaining the total energy \(E\) of each particle of the secondary galaxy (still ignoring the effect of the primary). We then remove the particles with \(E>0\) and recompute the gravitational potential, repeating this procedure several times until the list of bound particles stabilises. At early times, this list includes almost all the particles of the secondary galaxy, but already after the first pericentric passage the bound mass drops considerably, eventually decreasing essentially to zero well before the beginning of the "remnant phase", which we define to start when the SMBH separation drops below 100 pc. In other words, the SMBH of the secondary galaxy becomes stripped of its surrounding stars and, as a consequence, the efficiency of dynamical friction dramatically decreases: the separation between the two SMBHs remains at a level of a few kiloparsec for several gigayears, before further dropping to essentially the softening length (a few parsec) towards the end of the last ("remnant") phase, occurring, for all encounters, before 10 Gyr.
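A compact sketch of this iterative unbinding procedure is given below; it uses direct summation with a small softening, which is adequate for illustration but far too slow for the \(\sim\)10\({}^{7}\) particles of the actual runs, where a tree-based potential solver would be used instead.

```python
import numpy as np

G = 4.30091e-6  # kpc (km/s)^2 / Msun

def bound_mask(pos, vel, mass, centre_vel, n_iter=10, soft=0.05):
    """Iteratively strip particles with positive total energy, recomputing
    the (softened, direct-summation) potential from bound particles only."""
    bound = np.ones(len(mass), dtype=bool)
    for _ in range(n_iter):
        idx = np.flatnonzero(bound)
        dr = pos[idx, None, :] - pos[None, idx, :]
        r = np.sqrt((dr ** 2).sum(-1) + soft ** 2)
        np.fill_diagonal(r, np.inf)                    # no self-interaction
        phi = -G * (mass[idx][None, :] / r).sum(-1)    # potential per particle
        ke = 0.5 * ((vel[idx] - centre_vel) ** 2).sum(-1)
        new_bound = np.zeros_like(bound)
        new_bound[idx] = ke + phi < 0.0                # E > 0 particles are unbound
        if new_bound.sum() == bound.sum():             # list has stabilised
            break
        bound = new_bound
    return bound
```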
In Figure 3, the top panel shows the time evolution of the separation between the central SMBHs of the primary and secondary galaxy for each merger in our suite. The bottom panel shows the total mass loss during the merger; the mass of the central SMBH of the secondary galaxy is not included in the mass-loss calculation. The first pericentric distances are also tabulated in Table 2 and are all \(\sim 0.15R_{\rm cutoff,1}\). We find that most of the mass loss occurs before the remnant phase, with barely any mass still bound to the secondary galaxy during the remnant phase. This suggests that the core of the secondary galaxy becomes 'naked' before the remnant phase, with only the central secondary SMBH reaching the central region of the galaxy remnant, at the scale of the NSD.
Additionally, we have also run a set of two mergers for a minor encounter (of mass ratio 1:10) in a co-planar co-rotating orbit and a co-planar counter-rotating orbit (not listed in Tables 1-2 nor shown in Figure 3). From this set-up, we infer that dry minor mergers with our FCC 170 \(N\)-body model would complete on a time-scale longer than the current age of the Universe, since dynamical friction is not efficient enough. Consequently, as the secondary galaxy does not reach the centre of the primary galaxy in a reasonable time, these mergers were not utilised as possible tracers of the assembly history in the present work. Sarzi et al. (2015) found that NSDs can withstand any minor merger regardless of the inclination of the orbit, with small to no effects on the final shape of the NSD. In addition, Sarzi et al. (2015) showed that NSDs would not survive major (1:1) mergers, which would no doubt also significantly affect the main stellar disc. Therefore, we focus on exploring the effect exerted by intermediate-mass-ratio mergers on the thin kpc-scale disc and NSD, as it has not been studied in detail yet.

Figure 1: Edge-on view (top panels) and face-on view (bottom panels) of the surface mass density of the thin-disc particles of the primary galaxy for the 1:4 co-planar co-rotating encounter, at different stages. The remnant phase represents the resulting thin disc after the two central SMBHs sink. Each panel is centred on the primary central SMBH.

Figure 2: Same as Figure 1, but for the NSD particles of the primary galaxy.
In the next section, we analyse the results of each merger and check how the encounter affects the thinness and rotation of both the thin kpc-scale disc and the NSD. We also compare the results with those of our FCC 170 \(N\)-body model evolved in isolation over the same time range.
## 3 Survivability of thin discs against mergers
### Changes in galaxy thickness
To quantify the morphological changes experienced by the galaxy remnant's thin disc and NSD after the merger, we first align each resulting disc with the galactic plane independently, using the tensor of inertia of the stellar particles within \(R_{90}\), the Lagrangian radius enclosing 90 per cent of the total mass of the component, excluding the particles in its outskirts. This method was also used by Joshi et al. (2020) and Pulsoni et al. (2020), and more recently by Galan-de Anta et al. (2022). We rotate each disc component of the galaxy remnant (thin disc and NSD) independently so that the \(z\)-axis represents the projected minor axis and the \(x\)- and \(y\)-axes are the major axes. Once each component is aligned with the galactic plane, we quantify the thickness of both the thin disc and the NSD using the mass tensor defined as in Genel et al. (2015)
\[M_{i}=\frac{\left(\sum_{n}m_{n}x_{n,i}^{2}\right)^{1/2}}{\left(\sum_{n}m_{n}\right)^{1/2}}, \tag{2}\]
with the sums performed over all particles inside \(R_{90}\), \(x_{n,i}\) and \(m_{n}\) being the coordinates and mass of each particle, respectively, and \(i\in(x,y,z)\). From Equation (2), we can infer the thickness of the thin disc and the NSD, given by the axis ratio \(M_{z}/\sqrt{M_{x}M_{y}}\), with \(M_{i}\) the \(i\)-th diagonal component of the mass tensor. The thickness accounts for how flat or 'discy' a system is: the smaller the thickness, the flatter the system is. Consequently, we can use the thickness parameter to account for how much the flatness of the thin disc and NSD got modified by the mergers.
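A minimal implementation of this thickness measurement, assuming the component has already been aligned with the galactic plane as described above:

```python
import numpy as np

def lagrangian_radius(pos, mass, frac=0.9):
    # Radius enclosing `frac` of the total mass (here, R_90)
    r = np.linalg.norm(pos, axis=1)
    order = np.argsort(r)
    cum = np.cumsum(mass[order])
    return r[order][np.searchsorted(cum, frac * cum[-1])]

def thickness(pos, mass):
    # Mass-tensor axis ratio M_z / sqrt(M_x M_y) of Eq. (2), inside R_90
    inside = np.linalg.norm(pos, axis=1) < lagrangian_radius(pos, mass)
    p, m = pos[inside], mass[inside]
    M = np.sqrt((m[:, None] * p ** 2).sum(0) / m.sum())   # (M_x, M_y, M_z)
    return M[2] / np.sqrt(M[0] * M[1])
```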
In Figure 4, we show the thickness of the remnant for each merger, along with those of the model in isolation (all at 10 Gyr), thin-disc particles (red triangles) and NSD particles (black squares). The thickness is calculated inside \(R_{90}\). We observe a considerable increase of the thickness of the thin disc, with respect to the isolated model for the co-co and co-ret mergers (by 31 and 35 per cent, respectively) and a dramatic thickening of the thin disc for the 45-tilted (by 90 per cent), polar (by 82 per cent), and 135-tilted (by 82 per cent) mergers. These numbers suggest that if a past intermediate-mass-ratio merger occurred in this galaxy, it should be imprinted on both the photometry and kinematics of the galaxy. We explore this scenario in detail in Sections 3.2 and 3.3.
Figure 4: Thickness inside \(R_{90}\) of the thin-disc particles (red triangles) and NSD particles (black squares) for the co-co, 45-tilted, polar, 135-tilted, and co-ret mergers, and for the model in isolation. The thickness of each component has been calculated at 10 Gyr.
Figure 3: Top panel: evolution of the SMBH separation for all our mergers. Bottom panel: total mass loss during the mergers. The mass of the secondary SMBH is not included in the calculation of the mass loss. The beginning of the remnant phase is defined as the first time the SMBH separation drops below 100 pc; this varies amongst mergers: for example, the remnant phase occurs sooner for the co-co merger than for the polar merger.
The NSD experiences a slight increase of the thickness for four out of five mergers (the polar merger being the exception), with no significant differences amongst them, albeit the co-co, 135-tilted, and co-ret mergers are slightly more disturbing. For the polar merger, the NSD also experiences a small contraction along the semi-minor axis, decreasing its thickness (by \(\sim\)8 per cent) while slightly increasing its size along the semi-major axis. Most of the particles of the secondary get unbound during the collision, heating the kpc-scale components (thin disc, thick disc, and bulge) and making the primary galaxy expand. Hence, the secondary galaxy sinks towards the centre of FCC 170 as a 'naked' core, with most of its remaining mass in its central SMBH. As the mass of the secondary BH is about 100 times smaller than that of the NSD, the central disc barely notices the presence of the secondary SMBH, which only contributes to a modest expansion (or flattening, in the case of the polar merger) of the NSD but is unable to destroy it.
Additionally, for every merger we find that the thin disc and NSD experience a modest mutual misalignment (especially for the tilted mergers) when the two galaxies approach, most notably at separations \(\lesssim 10\) kpc at times between \(\sim\)4 and 6 Gyr. This misalignment is compensated at late stages by the torque of the thin disc, which realigns the NSD.
### Ellipticity and stellar kinematics of thin discs
For each galaxy merger, we also calculate the changes in ellipticity and stellar kinematics of the FCC 170 \(N\)-body model at 10 Gyr, to further quantify how much the rotation has diminished and how much the shape has been modified, in particular for the thin disc and NSD. After aligning the galaxy remnant with the galactic plane through the inertia tensor (Joshi et al., 2020; Pulsoni et al., 2020; Galan-de Anta et al., 2022), we build a 2D grid of the projected edge-on thin-disc and NSD remnants by binning the thin disc and NSD inside a 20\(\times\)20 kpc box and a 2\(\times\)2 kpc box, respectively, with bin sizes of 200 pc and 20 pc. We then build an adaptive-bandwidth histogram using a kernel density estimate from the \(K\) nearest neighbours, with \(K=100\), in each mock image, to recover the velocity dispersion of the thin disc and NSD of our galaxy remnants even in pixels with few particles. Once we have the 2D mock images of both the thin disc and NSD remnants, we compute the ellipticity at different radii using equation (1) from Emsellem et al. (2007)
\[\varepsilon=1-\sqrt{\frac{\langle z^{2}\rangle}{\langle x^{2}\rangle}}, \tag{3}\]
where \(\langle z^{2}\rangle\) is the quadratic mass-weighted coordinate along the semi-minor axis, defined as \(\langle z^{2}\rangle=\sum_{n}m_{n}z_{n}^{2}/\sum_{n}m_{n}\), with the sum running over all bins \(n\), and \(\langle x^{2}\rangle\) the quadratic mass-weighted coordinate along the semi-major axis. For an infinitely flat disc and a perfect sphere, \(\varepsilon\sim 1\) and \(\varepsilon=0\), respectively.
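A one-function sketch of Eq. (3) for an edge-on projection, with \(x\) along the major axis and \(z\) along the minor axis:

```python
import numpy as np

def ellipticity(x, z, mass):
    # Eq. (3): mass-weighted second moments along major (x) and minor (z) axes
    x2 = np.sum(mass * x ** 2) / mass.sum()
    z2 = np.sum(mass * z ** 2) / mass.sum()
    return 1.0 - np.sqrt(z2 / x2)
```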
The top panels of Figures 5, 6, 7, 8, and 9 show the mean velocity \(v_{0}\) and velocity dispersion \(\sigma\) for the thin-disc particles (two top-left panels) and NSD particles (two top-right panels) of the merger remnant for the co-co, 45-tilted, polar, 135-tilted, and co-ret merger, respectively. In the bottom panels, we show the profiles of \(|v/\sigma|\) and ellipticity along the major axis with blue solid lines. We also show the values of \(|v/\sigma|\) and ellipticity for the FCC 170 \(N\)-body model in isolation at the same time with black dotted-dashed lines.
Figure 5: Top panels: 2D mock images of the mean velocity and velocity dispersion for the thin disc (left-hand panels) and NSD (right-hand panels), both in edge-on view, for the remnant phase (at 10 Gyr) of the co-co merger. Bottom panels: mean velocity over velocity dispersion along the equatorial plane and ellipticity for the thin disc and NSD of the merger remnant (blue solid lines) and FCC 170 \(N\)-body model in isolation at 10 Gyr (black dotted-dashed lines).
The shape of the thin disc gets significantly distorted by the 45-tilted, polar, and 135-tilted encounters, which diminish its ellipticity (turning it into a thicker disc) to a plateau at radii larger than \(\sim\)3 kpc with respect to the model in isolation at 10 Gyr. The relative decrements in ellipticity at the plateau with respect to the model in isolation are 55, 59, and 43 per cent for the 45-tilted, polar, and 135-tilted merger, respectively. On the other hand, the co-co and co-ret mergers decrease the ellipticity of the thin disc by about 7 per cent. Regarding the NSD, the changes in ellipticity are quite similar for the co-co and co-ret mergers, with the ellipticities decreasing by 12 and 8 per cent, respectively. For the 45-tilted, polar, and 135-tilted mergers, the ellipticity diminished by 9, 1, and 15 per cent, respectively.
Regarding the kinematics, each merger substantially decreases the rotation of the thin disc, with the tilted and polar mergers being the most effective. For the 45-tilted, polar, and 135-tilted encounters, while the disc gets twisted and becomes thicker, its rotation significantly decreases with respect to the model in isolation, especially at radii greater than \(\sim\)3 kpc, where the structure of the thin disc becomes thicker. Regarding the kinematics of the NSD, the mergers that affect the rotation the least are the co-co and polar mergers, with only a modest decrease of the NSD rotation, whereas the co-ret, 45-tilted, and 135-tilted mergers affect the NSD rotation the most.
As the secondary SMBH reaches the centre of FCC 170 as a 'naked' SMBH, with a mass that is negligible compared to that of the NSD, the mergers do not destroy the NSD, albeit they decrease its rotation while slightly reducing its ellipticity. These findings differ from those of Sarzi et al. (2015): in their case, for a 1:5 merger at different orbit inclinations, the NSD was totally destroyed only in the co-planar co-rotating orbit. In our case, on the contrary, regardless of the orbit, the NSD only experiences an expansion without being destroyed. In their experiments, however, the mass of the secondary SMBH was comparable to that of the NSD, whereas in our case it is significantly (\(\sim 100\) times) smaller than the mass of the NSD. Taken together with the outcome of our simulations, this suggests that NSDs in galaxies bombarded by an intermediate-mass-ratio perturber might be less resilient if the mass of the secondary SMBH is of the order of that of the NSD.
Regarding numerical heating of the NSD, as also discussed in Paper I, the results show that the changes in the ellipticity of the NSD are larger when the disc is exposed to a galaxy merger than in the isolated case, except for the polar merger, for which the decrement in ellipticity is comparable to that of the isolated case. In general, the NSDs of the merger remnants exhibit a smaller ellipticity than in the isolated scenario, while the polar remnant shows comparable values, suggesting that for this orbit the effect of numerical heating is quantitatively similar to that of the merger itself.
Although in our simulations we have not included gas particles, we can expect that the outcome of a gas-rich merger is likely to include central star formation and addition of stellar disc populations (e.g. Abadi et al., 2003; Hopkins et al., 2009; Van Wassenhove et al., 2014; Capelo et al., 2015; Capelo & Dotti, 2017). The precise reconstruction of the star-formation history of NSDs (e.g. Pinna et al., 2019) is a promising avenue to constrain the occurrence of such last wet merger events.
### Kinematics comparison: merger remnants versus the isolated model
Last, we check the similarity between the kinematics of the entire merger remnants and that of the FCC 170 \(N\)-body model after 10 Gyr of evolution in isolation. Such a comparison allows us to quantify how much the merger events we considered would affect the overall stellar morphology and kinematics of an initially unperturbed FCC 170-like system, and whether such differences with respect to the isolated model would be significant from an observer's perspective.

Figure 6: Same as Figure 5, but for the 45-tilted merger.
To produce kinematic 2D maps for both the model in isolation and each merger remnant, we follow the procedure outlined in section 3.2 of Paper I to obtain the mean velocity \(v_{0}\) and velocity dispersion \(\sigma\). In Figure 10, we show the surface stellar luminosity, which, as in Paper I, is computed as an integral of the DF over the velocities. We also plot the mean velocity and the velocity dispersion for the model in isolation and for our merger remnants. Whereas the mock images of Figures 5-9 show a single component at a time (thin disc or NSD), Figure 10 displays all the stellar particles. Figure 10 shows that, although the surface stellar luminosity distribution and velocity maps are not dramatically different after an intermediate-mass-ratio merger, the stellar velocity dispersion map is radically different from that of a galaxy with a prominent thin-disc structure. Such an encounter would indeed lead to shallower velocity dispersion gradients and larger average values of the velocity dispersion. In all considered merging scenarios, the tell-tale signature of a thin disc (i.e. the low velocity dispersion values near the equatorial plane outside the bulge-dominated regions) would be erased. On the other hand, all merger remnants show a central dip in velocity dispersion (not easily visible in the figures because of the edge-on projections), thus preserving the kinematic signature of the NSD, in agreement with the analysis of Section 3.2.
If FCC 170 had experienced an intermediate-mass-ratio dry merger, our results suggest that the kinematic maps of the corresponding galaxy remnants would not match those of our isolated numerical model for FCC 170, which in turn was shown in Paper I to be kinematically consistent with the actual MUSE observations for this galaxy. Moreover, since the measurements of Pinna et al. (2019) indicate that both the NSD and thin disc of FCC 170 have passively evolving stellar populations that are at least 10 Gyr old, we infer that FCC 170 did not experience any major or intermediate-mass-ratio merger events over this period of time. Deep imaging of the Fornax cluster by Iodice et al. (2019) further shows no evidence of tidal tails or shells in the outskirts of FCC 170, suggesting that relatively minor mergers also did not take place in the recent past of this object.
The absence of any kinematic or photometric signature of significant gravitational interactions for FCC 170 is in agreement with the notion that galaxies in clusters have a low probability of merging with other galaxies due to their relatively high velocities (Serra et al., 2016), as suggested by the survival of discs in simulated cluster galaxies (Joshi et al., 2020; Galan-de Anta et al., 2022).
## 4 Conclusions
Making use of the \(N\)-body model of FCC 170 created in Paper I, we run a set of 1:4 dry mergers in parabolic orbits with different inclinations to study the resilience of the kpc-scale thin disc and NSD against such encounters.
We find that the thin, kpc-scale disc gets destroyed in the polar and tilted encounters, whereas it experiences a modest expansion in the co-planar co-rotating and counter-rotating orbits, indicating that thin galactic discs may be resilient against co-planar, intermediate-mass-ratio dry encounters. On the other hand, the NSD of FCC 170 appears to be quite resilient, surviving all the encounters in our suite of simulations while increasing its thickness and still being observable in the kinematics of the galaxy. This suggests that NSDs in galaxies could be more resilient than previously thought against intermediate-mass-ratio encounters, in particular if they are considerably more massive than the central SMBH of the secondary galaxy (and not simply assumed to be of similar mass, as done in Sarzi et al., 2015), which in turn makes them less susceptible to the interaction.

Figure 7: Same as Figure 5, but for the polar merger.
Our results also strongly indicate that, according to the latest estimates of the stellar ages of FCC 170, it is rather unlikely that this galaxy has experienced a past intermediate-mass-ratio event in the last 10 Gyr, as inferred from the kinematics of the merger remnant, and in good agreement with the cluster environment where the galaxy is embedded, in which direct collisions between galaxies are a rare event.
## Acknowledgements
This work was performed on the OzSTAR national facility at Swinburne University of Technology. The OzSTAR programme receives funding in part from the Astronomy National Collaborative Research Infrastructure Strategy (NCRIS) allocation provided by the Australian Government. We are grateful for use of the computing resources from the Northern Ireland High Performance Computing (NI-HPC) service funded by the Engineering and Physical Sciences Research Council (EPSRC) (EP/T022175). Enrico Maria Corsini acknowledges support by Padua University grants DOR 2019-2022 and by Italian Ministry for Education University and Research (MIUR) grant PRIN 2017 20173ML3WW-001.
## Data Availability Statement
The data underlying this article can be made available upon request. The model results can be reproduced using publicly available codes.
|
2304.04217 | The Study of Highway for Lifelong Multi-Agent Path Finding | In modern fulfillment warehouses, agents traverse the map to complete endless
tasks that arrive on the fly, which is formulated as a lifelong Multi-Agent
Path Finding (lifelong MAPF) problem. The goal of tackling this challenging
problem is to find the path for each agent in a finite runtime while maximizing
the throughput. However, existing methods encounter exponential growth of
runtime and undesirable phenomena of deadlocks and rerouting as the map size or
agent density grows. To address these challenges in lifelong MAPF, we explore
the idea of highways mainly studied for one-shot MAPF (i.e., finding paths at
once beforehand), which reduces the complexity of the problem by encouraging
agents to move in the same direction. We utilize two methods to incorporate the
highway idea into the lifelong MAPF framework and discuss the properties that
minimize the existing problems of deadlocks and rerouting. The experimental
results demonstrate that the runtime is considerably reduced and the decay of
throughput is gradually insignificant as the map size enlarges under the
settings of the highway. Furthermore, when the density of agents increases, the
phenomena of deadlocks and rerouting are significantly reduced by leveraging
the highway. | Ming-Feng Li, Min Sun | 2023-04-09T11:21:22Z | http://arxiv.org/abs/2304.04217v1 | # The Study of Highway for Lifelong Multi-Agent Path Finding
###### Abstract
In modern fulfillment warehouses, agents traverse the map to complete endless tasks that arrive on the fly, which is formulated as a lifelong Multi-Agent Path Finding (lifelong MAPF) problem. The goal of tackling this challenging problem is to find the path for each agent in a finite runtime while maximizing the throughput. However, existing methods encounter exponential growth of runtime and undesirable phenomena of deadlocks and rerouting as the map size or agent density grows. To address these challenges in lifelong MAPF, we explore the idea of highways mainly studied for one-shot MAPF (i.e., finding paths at once beforehand), which reduces the complexity of the problem by encouraging agents to move in the same direction. We utilize two methods to incorporate the highway idea into the lifelong MAPF framework and discuss the properties that minimize the existing problems of deadlocks and rerouting. The experimental results demonstrate that the runtime is considerably reduced and the decay of throughput is gradually insignificant as the map size enlarges under the settings of the highway. Furthermore, when the density of agents increases, the phenomena of deadlocks and rerouting are significantly reduced by leveraging the highway.
Path Planning for Multiple Mobile Robots or Agents, Multi-Robot Systems, Motion and Path Planning
## I Introduction
Multi-Agent Path Finding (MAPF) is the problem of planning collaborative paths for a team of agents while avoiding collisions. MAPF has been widely used in applications like video games [1], traffic management [2], and delivery policies [3]. Several MAPF solvers have been proposed in the past years, such as CA* [4], CBS [5], and ECBS [6], which consider the path-planning problem in a one-shot manner, assuming each agent has only one pair of start and goal locations. Generally, these methods are not suitable for applications in online scenarios without reasonable modification.
Considering applications in the real world, especially robots in fulfillment warehouses [7], instead of having only one pair of start and goal locations, new goals are assigned on the fly. This scenario is referred to as lifelong MAPF. Although current solutions for lifelong MAPF [8, 9, 10, 11] can handle a sequence of goals, they present poor computational efficiency or low throughput (i.e., tasks finished per timestep). Therefore, Rolling-Horizon Collision Resolution (RHCR) [12] incorporates the idea of windowed MAPF [4] into the lifelong MAPF framework, separating a lifelong MAPF problem into a sequence of one-shot MAPF problems, which gains computational efficiency without sacrificing throughput. Nevertheless, the approach still encounters the undesirable phenomena of deadlocks and rerouting [4, 12].
In order to solve such challenges in lifelong MAPF, we study the methods of _highway_ typically utilized in one-shot MAPF [13, 14, 15]. First, we leverage a directed map [13, 14] by enforcing agents to move along specific directions. This simple yet effective strategy decreases the complexity of the problem, which translates into higher efficiency. Additionally, instead of enforcing a movement direction, we leverage [15] to calculate heuristic values that penalize movements against the highway direction during planning. In short, both strategies encourage agents to move in the same direction and thus effectively avoid face-to-face conflicts. Subsequently, we further discuss the properties after incorporating _highway_ into the lifelong MAPF framework, which can effectively minimize the undesirable phenomena of deadlocks and rerouting, as shown in Fig. 1.
Through several experiments, we present the benefits of utilizing _highway_ in the lifelong MAPF framework and the trade-off between runtime and throughput. The experimental results show that the runtime can be accelerated dozens of times with less than 10% throughput decay in warehouse-like maps with more than 50% obstacles. Furthermore, when the density of agents is high, the throughput even increases after utilizing _highway_, because the severe impacts of deadlocks and rerouting are significantly reduced by _highway_.
## II Background and Related Work
### _One-shot MAPF_
MAPF is an NP-hard problem [16, 17] and is typically solved one-shot and offline: each agent has exactly one goal known beforehand, and the paths of all agents are found at once. The solution of one-shot MAPF solvers is typically evaluated by _sum-of-costs_ (the sum of the arrival times of all agents) or _makespan_ (the time span for all agents to reach their goals).
Fig. 1: Two scenarios in a warehouse map. Panel (a) shows a map without the highway. Panel (b) shows a map with the highway to solve the situation of crowded agents contributing to deadlock and rerouting in panel (a). The highway direction is represented as blue arrows.
There are various one-shot MAPF methods [18], including compilation-based solvers [19, 20], rule-based solvers [21, 22], A*-based solvers [23, 24], and prioritized planning [25]. In particular, search-based solvers [6, 26] and their variants are common. Most search-based solvers operate on two levels: the low-level solver plans a path for each agent, and the high-level solver resolves the conflicts among these paths to guarantee no collisions. A* [27] is a typical low-level solver for search-based methods. As for high-level solvers, Conflict-Based Search (CBS) [5] is a popular MAPF solver that is complete and optimal, and its variants [6, 28] sacrifice optimality for faster computation. Besides, Priority-Based Search (PBS) [29], which resolves conflicts through priority orderings, shows prominent computational efficiency but is neither complete nor optimal.
### _Lifelong MAPF_
In contrast to one-shot MAPF, lifelong MAPF addresses problems in which an agent may be assigned more than one task, typically in an online setting. In online MAPF, tasks are generated on the fly during execution [3]. Therefore, the solution cannot be planned in advance and should be evaluated by _throughput_ (the number of tasks finished per timestep), because the execution time and the tasks assigned to agents can be endless. For instance, in warehouses, endless tasks arrive at any time, and agents are assigned new tasks after finishing their previous ones. The path solver therefore needs to replan constantly in real time, and such problems should be solved as lifelong MAPF. In addition, we should consider not only the _throughput_ but also the _runtime_ (the computing time of planning), which ensures that path planning can be completed on time to utilize agents effectively.
### _Lifelong Solutions_
Firstly, if all the tasks are known in advance, a lifelong MAPF problem can be solved as a one-shot MAPF problem [10]. Otherwise, a second type of method [9, 11] replans paths for all the agents at each timestep to handle new tasks on the fly. Nonetheless, these two types of methods are very time-consuming, solving the whole problem offline or replanning constantly at each timestep, respectively. A third type of method [8] replans new paths only for the agents that receive new goals, but it suffers from poor throughput due to the lack of cooperative planning among all agents.
To retain runtime efficiency while maximizing throughput, RHCR [12] incorporated windowed MAPF [4] into lifelong MAPF. The idea is to plan the entire paths to goals but resolve conflicts only within the first \(w\) timesteps, where \(w\) is the time horizon. In addition to the time horizon \(w\), RHCR has another user-specified parameter \(h\) that specifies the replanning period. Every \(h\) timesteps constitutes an episode: new goals are assigned to the agents that have reached their previous goals, and conflict-free paths for the subsequent \(w\) timesteps are planned while the agents are still following the planning results from the previous episode. After \(h\) timesteps pass, agents follow these new conflict-free paths, and the paths for the next episode are planned. However, due to its incompleteness and lack of optimality guarantees, the windowed MAPF solver encounters deadlocks (i.e., agents idle at their current locations waiting for each other to pass first); RHCR relies on a potential function to evaluate the progress of the agents and increases \(w\) to resolve deadlocks [12]. Besides, because windowed MAPF [4] only resolves conflicts in the first few timesteps and replans paths constantly, it causes agents to reverse their path direction or revisit locations they had previously visited, which also happens in RHCR [12].
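The rolling-horizon structure can be summarized by the schematic loop below, where `assign_goal` and `windowed_solver` are caller-supplied stand-ins for the task assigner and the windowed MAPF solver, not parts of any actual RHCR implementation.

```python
def rhcr_loop(agents, assign_goal, windowed_solver, w=10, h=5, horizon=100):
    """Rolling-horizon replanning: plan conflict-free paths for w steps,
    execute h <= w of them, then replan with any newly assigned goals."""
    for t in range(0, horizon, h):
        for agent in agents:
            if agent["loc"] == agent["goal"]:       # task finished: new goal
                agent["goal"] = assign_goal(agent)
        # full paths to goals, collisions resolved only within the window
        paths = windowed_solver(agents, window=w)
        for step in range(h):                       # execute h steps
            for agent, path in zip(agents, paths):
                agent["loc"] = path[min(step + 1, len(path) - 1)]
```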
### _Highway_
In one-shot MAPF, a _highway_ is an add-on to the map that speeds up the runtime: it imposes a global rule on the direction of movement that encourages or forces MAPF solvers to search for paths in a consistent direction. Forbidding agents from moving against the highway direction is a simple strategy to reduce computational complexity and avoid collisions. For instance, one one-shot solution [14] used directed maps to reduce the complexity of MAPF and obtain faster computation. Another one-shot solution [15] increases the cost of edges against the highway direction when calculating heuristic values, which implicitly encourages solvers to explore paths along the highway direction. However, these methods were proposed for one-shot MAPF, where lifelong phenomena such as deadlocks and rerouting do not occur. Although RHCR [12] (a lifelong solution) used directed maps in its experimental section to reduce planning complexity, it did not further discuss or experimentally analyze how a _highway_ affects planning results in lifelong scenarios.
## III Problem Definition
The multi-agent path finding (MAPF) problem is formulated on a directed graph of the map \(G=(L,E)\) with a set of agents \(A=\{a_{1},...,a_{k}\}\). The graph consists of the locations \(L\) and the edges \(E\) connecting pairs of these locations. Each agent \(a_{i}\) starts from its start location \(l_{s}^{i}\) and aims at its own goal location \(l_{g}^{i}\). Assuming that time is discretized into timesteps, at each timestep \(a_{i}\) decides either to _move_ to a neighboring location \(l\in L\) through a connecting edge \(e\in E\) or to _wait_ at its current location, and the movement is completed within the timestep.
However, a collision may occur when two agents arrive at the same location or move through the same edge at the same time, which is referred to as a _conflict_[30]. Therefore, the objective of MAPF is to find collision-free paths \(P=\{p_{1},...,p_{k}\}\) for all the agents to reach their goals jointly, where \(p_{i}=[l_{s}^{i},...,l_{g}^{i}]\) is the path of agent \(a_{i}\) as a sequence of neighboring locations.
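As a minimal illustration of this conflict definition, the following Python sketch (our own, not part of the paper's implementation) checks two timed paths for vertex and edge (swapping) conflicts; the data layout, with `p[t]` giving an agent's location at timestep `t`, is an assumption made for the example.

```python
def padded(path, horizon):
    """Pad a path with its final location so both paths share a horizon."""
    return path + [path[-1]] * (horizon - len(path))

def first_conflict(p1, p2):
    """Return (type, timestep) of the first conflict, or None if conflict-free."""
    horizon = max(len(p1), len(p2))
    q1, q2 = padded(p1, horizon), padded(p2, horizon)
    for t in range(horizon):
        if q1[t] == q2[t]:                      # vertex conflict: same location
            return ("vertex", t)
        if t + 1 < horizon and q1[t] == q2[t + 1] and q2[t] == q1[t + 1]:
            return ("edge", t)                  # edge (swapping) conflict
    return None

# Example: two agents swapping along a corridor produce an edge conflict at t=0.
print(first_conflict([(0, 0), (0, 1)], [(0, 1), (0, 0)]))
```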
We assume a lifelong and online MAPF scenario. The agents start from their locations without knowing their tasks (i.e., moving to a goal location) in advance, and a new task is assigned to an agent after it finishes its current one (i.e., reaches its goal location). The goal of the MAPF solver is to constantly plan paths for all agents and maximize the _throughput_ (the number of tasks finished per timestep).
## IV Lifelong MAPF with Highways
### _Highway Definition_
In one-shot MAPF with highways [15], _highway_ was represented as a subgraph \(G_{H}=(L_{H},E_{H})\) of the given graph of the map \(G=(L,E)\), where \(L_{H}\) contained locations involved in the _highway_ and \(E_{H}\) represented the edges following the _highway_ direction.
In this work, we focus on warehouse-like maps [30], where the _highway_ settings can be defined by specifying the direction of movement for each _corridor_. A sequence of locations \(K=\{l_{0},...,l_{n}\}\subseteq L\) is called a _corridor_ of length \(n\) iff \(l_{i}\) is connected to exactly two locations, \(l_{i-1}\) and \(l_{i+1}\), according to \(E\) for \(i=1,\dots,n-1\) (i.e., only one way in and one way out at each location). The endpoints \(l_{0}\) and \(l_{n}\) may coincide, forming a _loop-corridor_. Each _corridor_ has a direction \(d\in D\), a Boolean value representing the _direction_ of the _corridor_, as shown in Fig. 2. Any location acting as an _intersection_, i.e., connecting two or more _corridors_, does not belong to any _corridor_. Finally, given a set of _directions_ \(D=\{d_{1},...,d_{m}\}\) (where \(m\) is the number of _corridors_), the _highway_ can be represented as a subgraph \(G_{H}=(L_{H},E_{H})\subseteq G\), and the set of edges against the highway direction is \(E_{H^{\prime}}=\{(l_{b},l_{a})|(l_{a},l_{b})\in E_{H}\}\).
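The construction of \(E_{H}\) and \(E_{H^{\prime}}\) from corridor directions can be sketched as follows; the `corridors` and `directions` containers are hypothetical names for the quantities \(K\) and \(D\) defined above, not an interface from the paper.

```python
def highway_edges(corridors, directions):
    """Return (E_H, E_H_rev): edge sets along and against the highway."""
    E_H = set()
    for cid, locs in corridors.items():
        for a, b in zip(locs, locs[1:]):
            # Orient every corridor edge by the corridor's direction bit d.
            E_H.add((a, b) if directions[cid] else (b, a))
    E_H_rev = {(b, a) for (a, b) in E_H}        # edges against the highway
    return E_H, E_H_rev

# Example: a single corridor of three cells, traversed l_0 -> l_2.
corridors = {0: [(0, 0), (0, 1), (0, 2)]}
E_H, E_H_rev = highway_edges(corridors, {0: True})
```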
### _Lifelong MAPF Framework_
We follow RHCR [12] as our lifelong MAPF framework. Given a user-specified time horizon \(w\) and replanning period \(h\) (\(h\leq w\)), conflict-free paths for the first \(w\) timesteps are planned every \(h\) timesteps, and new goals are assigned to the agents that have reached their goals. As the high-level solver, we use PBS [29], which shows prominent computational efficiency with RHCR.
In our method, we use A* as the low-level solver with the true shortest-distance heuristic. The shortest-path heuristic values, which ignore dynamic constraints (i.e., moving agents), can be precomputed by shortest-path algorithms. In this way, the precise distances between nodes guide the low-level solver to traverse fewer locations on the way to the goal than Manhattan-distance heuristic values would. Moreover, given a _highway_ setting, the computed heuristic steers the low-level solver toward paths following the _highway_ direction, resulting in fewer conflicts; the complexity of low-level planning is also reduced because choices to move against the highway direction are penalized or trimmed away.
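A sketch of this precomputation, assuming the graph is given as a set of directed edges with unit cost: a backward breadth-first search from each goal yields the true shortest distances, and removing the against-highway edges \(E_{H^{\prime}}\) yields the strict-limit heuristic \(H_{S}\) of Eq. (1) below. The function name and signature are ours.

```python
from collections import deque

def true_distance_heuristic(edges, goal, forbidden=frozenset()):
    """Map every location to its shortest distance to `goal` (unit costs)."""
    usable = set(edges) - set(forbidden)
    rev = {}                                    # reversed adjacency lists
    for a, b in usable:
        rev.setdefault(b, []).append(a)
    dist = {goal: 0}
    queue = deque([goal])
    while queue:
        loc = queue.popleft()
        for prev in rev.get(loc, []):
            if prev not in dist:
                dist[prev] = dist[loc] + 1
                queue.append(prev)
    return dist                                 # unreachable locations omitted

# Strict-limit heuristic toward one goal (Eq. (1)):
# H_S = true_distance_heuristic(E, goal, forbidden=E_H_rev)
```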
### _Strict-limit Highway and Soft-limit Highway_
There are two ways to set up the highway for the low-level solver. The first allows movement across an edge in only one direction [14], which we call the _strict-limit highway_ in this work. The second increases the heuristic cost of moves that go against the highway direction [15], encouraging the low-level solver to search for paths along the highway edges, which we call the _soft-limit highway_ here.
#### IV-C1 Strict-limit Highway
When using the strict-limit highway, \(G=(L,E)\) is replaced with the subgraph \(G_{S}=(L,E_{S})\), where \(E_{S}=(E-E_{H^{\prime}})\subseteq E\). The connectivity of neighboring locations is thus strictly restricted, meaning that movement against the highway is impossible, as shown in Fig. 3. The strict rule forces agents to follow the highway directions but limits their flexibility to use idle locations, because the low-level solver can no longer explore neighboring locations off the highway direction. To utilize the strict-limit highway in RHCR, the heuristic values based on all-pairs shortest paths (ignoring moving agents) are calculated and stored in advance. The heuristic computed on \(G_{S}\) is defined as:
\[H_{S}(l_{s},l_{g})=\min_{p}|p|-1 \tag{1}\]
where \(p=[l_{s},...,l_{g}]\) is a feasible path from a start location \(l_{s}\) to the goal location \(l_{g}\) and \(|p|\) is the length of the path. Hence, \(H_{S}(l_{s},l_{g})\) is the shortest distance from start \(l_{s}\) to goal \(l_{g}\). The low-level solver then follows this precise highway heuristic, making agents move along the direction of the highway.
#### IV-C2 Soft-limit Highway
Here, the limitation is added indirectly, through the heuristic used during low-level planning. A user-specified parameter \(c\) greater than one is introduced. During the calculation of the heuristic values, the cost of crossing an edge along the direction of the highway is the usual one, while the cost of moving against the direction of the _highway_ is increased to \(c\), which is defined as:
\[H(l_{s},l_{g},c)=\min_{p}\sum_{(l_{i},l_{i+1})\in p}\begin{cases}c,&\text{if }(l_{i},l_{i+1})\in E_{H^{\prime}}\\ 1,&\text{otherwise},\end{cases} \tag{2}\]
where \(p=[l_{s},...,l_{g}]\) is a feasible path from a start location \(l_{s}\) to the goal location \(l_{g}\). When \(c\) is close to one, the heuristic values are close to the shortest paths without any limitation. Conversely, as \(c\) grows toward infinity, the heuristic values approach the shortest paths computed under the strict highway setting. The highway heuristic values encourage agents to move in the same direction and thereby avoid collisions. For instance, an example with \(c=2\) is shown in Fig. 3, where the heuristic values on the path moving against the highway direction are higher (i.e., 8, 6, 4, 2). However, because there is no strict limitation, movement against the highway direction can still occur, so conflicts remain that must be resolved.
Fig. 2: An example map where the arrows represent the direction of the highway in each _corridor_. The _intersections_, which connect _corridors_, cannot be part of any _corridor_.
To use the _soft-limit highway_ in RHCR, the heuristic values are calculated beforehand according to the chosen \(c\), and the low-level solver then follows this heuristic to find paths.
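The soft-limit heuristic of Eq. (2) can likewise be precomputed with a backward Dijkstra search in which against-highway edges cost \(c\) instead of 1; the sketch below, with names of our choosing, illustrates this. With \(c\to 1\) it recovers plain shortest distances, and with \(c\to\infty\) it approaches the strict-limit values.

```python
import heapq

def soft_limit_heuristic(edges, E_H_rev, goal, c):
    """Map every location to its Eq. (2) cost-to-go toward `goal`."""
    rev = {}
    for a, b in edges:
        cost = c if (a, b) in E_H_rev else 1.0  # penalize against-highway moves
        rev.setdefault(b, []).append((a, cost))
    dist = {goal: 0.0}
    heap = [(0.0, goal)]
    while heap:
        d, loc = heapq.heappop(heap)
        if d > dist.get(loc, float("inf")):
            continue                            # stale heap entry
        for prev, cost in rev.get(loc, []):
            nd = d + cost
            if nd < dist.get(prev, float("inf")):
                dist[prev] = nd
                heapq.heappush(heap, (nd, prev))
    return dist
```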
### _Highway Behaviors_
Under the soft-limit highway, the low-level solver plans with the highway heuristic. Because paths against the highway direction carry higher heuristic values, neighboring locations along the highway direction have lower expected costs and are therefore expanded earlier from the A* open set. Hence, the paths found by the low-level solver are consistent in direction. This consistency causes fewer face-to-face collisions, which reduces the number of nodes the high-level solver must generate to resolve conflicts. When \(c\) is small, the results are close to those without the highway; as \(c\) increases, the planning results gradually approach those obtained by using the highway shortest paths as heuristic values. However, even when \(c\) is set to infinity, movement against the highway direction still occurs when the locations along the shortest path are blocked by a dynamic obstacle; this gives agents flexibility in choosing their movement direction but may cause conflicts that take time to resolve. Using the strict-limit highway, a consistent path direction is guaranteed by blocking the edges to off-highway neighbors during planning, as shown in Fig. 4.
The purpose of utilizing the _highway_ is to make the agents' paths share a local collaborative rule, which relieves congestion when resolving conflicts and keeps the planning results consistent across neighboring time horizons. Benefiting from the consistent path direction, the strict-limit highway has the following properties, which address the issues of deadlocks and rerouting. Additionally, our experimental section shows that the soft-limit highway can exploit these properties depending on the chosen \(c\).
### _Property of Avoiding Deadlocks_
_Deadlock_ is a phenomenon caused by _windowed MAPF_ not considering the entire time horizon [12]. A deadlock occurs when two agents face each other with their goals on opposite sides. With a small \(w\), neither agent can proceed directly, because of the swapping conflict, nor move in the reverse direction, because the high-level solver prefers agents waiting at their original locations (lower _sum-of-costs_) over taking a longer path. For example, with time horizon \(w\!=\!2\), replanning period \(h\!=\!2\), and CBS as the high-level solver, the windowed MAPF solver returns the paths in Fig. 5-(a), which let the agents stay in place for \(w\) timesteps and only start moving toward their goals afterwards. This situation is caused by the limited cooperative planning of windowed MAPF solvers, which resolve conflicts only in the first \(w\) timesteps and ignore conflicts beyond them. Hence, the low-level solver cannot consider locations further ahead, because the _sum-of-costs_ is smaller when waiting at the original location. To make one agent move along the top side and the other along the bottom side, \(w\) must be set to at least \(3\), so that the solver considers the longer paths instead of the degenerate solution of waiting out the time horizon to avoid conflicts, as in Fig. 5-(b). Worse yet, if the corridor is longer or the goals are further away, ever larger values of \(w\) must be tried to find a solution, which increases the runtime.
Generally, a deadlock happens when the MAPF solver cannot find feasible moving paths for agents with lower _sum-of-costs_ than the paths of waiting. That is, the MAPF solver cannot find new locations for the agents within the next \(w\) timesteps that have a lower sum of distances to their goals. Let \(A_{d}\subseteq A\) denote a subset of agents that may cause a deadlock. An agent \(a_{i}\in A_{d}\) requires \(p_{i}=[l_{c}^{i},...,l_{n}^{i}]\) (\(|p_{i}|-1=w\), where \(w\) is the time horizon), a feasible collision-free path from its current location \(l_{c}^{i}\) to a location \(l_{n}^{i}\) within the next \(w\) timesteps. With the shortest-path heuristic values representing the distances to the goal locations in the strict-limit highway, a situation in which a deadlock may happen can be stated as:
\[\sum_{a_{i}\in A_{d}}H_{S}(l_{c}^{i},l_{g}^{i})\leq\sum_{a_{i}\in A_{d}}\min_ {p_{i}}H_{S}(l_{n}^{i},l_{g}^{i}) \tag{3}\]
where \(l_{g}^{i}\) is the goal location of \(a_{i}\).
In the _strict-limit highway_, if an agent \(a_{i}\) only ever moves from a location \(l_{a}^{i}\) to a neighboring location \(l_{b}^{i}\) with \((l_{a}^{i},l_{b}^{i})\in E_{H}\), then \(H_{S}(l_{b}^{i},l_{g}^{i})=H_{S}(l_{a}^{i},l_{g}^{i})-1\) whenever \(l_{a}^{i}\neq l_{g}^{i}\), where \(l_{g}^{i}\) is its goal location. Thus, Eq. (3) can be rewritten as:
\[\sum_{a_{i}\in A_{d}}H_{S}(l_{c}^{i},l_{g}^{i})\leq\sum_{a_{i}\in A_{d}}\min_ {p_{i}}H_{S}(l_{c}^{i},l_{g}^{i})-|\bar{p}_{i}|+1 \tag{4}\]
where \(\bar{p}_{i}\) is the set of unique locations in \(p_{i}\), and \(|\bar{p}_{i}|\) is the number of unique locations in \(p_{i}\).
Eq. (4) does not hold when \(\exists a_{i}\in A_{d}\) with \(|\bar{p}_{i}|>1\). Namely, in the strict-limit highway, if all agents move along edges in \(E_{H}\), deadlocks cannot happen unless every agent has no available neighboring location to move to.
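The deadlock condition of Eq. (3) can be checked directly once the heuristic tables are available. In the hypothetical sketch below, `H[g]` is the distance table toward goal `g`, as produced by `true_distance_heuristic` above, and `candidates[i]` lists the locations agent `i` can feasibly reach within the next \(w\) timesteps; all names are ours, chosen for the sketch.

```python
def may_deadlock(agents, H, current, goals, candidates):
    """Eq. (3): True when waiting (weakly) dominates every feasible w-step move."""
    stay = sum(H[goals[i]][current[i]] for i in agents)
    move = sum(min(H[goals[i]][l] for l in candidates[i]) for i in agents)
    return stay <= move
```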
### _Property of Avoiding Rerouting_
_Rerouting_ is a phenomenon that results from the lack of long-term consideration in _windowed MAPF_ [4]; it causes agents to revise their path direction or revisit locations they had previously visited.
Fig. 3: Comparison of the connectivity between _strict-limit_ (left) and _soft-limit_ (right) with \(c=2\). The direction of _highway_ is counterclockwise and the black arrows represent the movement that agents can take for _strict-limit_. _Soft-limit_ allows bidirectional movement (both black and red arrows), but _strict-limit_ does not. The numbers are the heuristic values from the location to the goal, which are strictly decreasing along the highway direction in _strict-limit_. In _soft-limit_, when the shortest path contains movement against the highway direction, the cost is increased to \(c\) (i.e., 8, 6, 4, 2).
The idea of the _windowed MAPF_ used in RHCR is to separate the lifelong MAPF problem into a sequence of subproblems over time, referred to as _episodes_. In each _episode_, the low-level solver plans the entire path for each agent, and the high-level solver then resolves the conflicts of these paths within the finite time horizon \(w\). After planning, each agent follows its path for the first \(h\) (\(\leq w\)) timesteps. Consequently, although the entire path to the goal is planned, only the partial path of the first \(h\) timesteps is executed, and an agent may stop at a location whose heuristic value is higher than that of its original location; that is, waiting would have been the wiser choice. Besides, most search-based MAPF solvers rely on special mechanisms to determine the priority orderings in high-level planning, deciding which agent goes first when a conflict happens (e.g., CBS [5] adds a constraint on a certain agent as a _CT-node_, and PBS [29] maintains pairs of priority orderings). However, the consistency of these priority orderings is not guaranteed across subsequent high-level planning runs, and changes in the orderings may lead to markedly different planning results. These cases may force agents to _reroute_ or even move backward, resulting in longer distances to reach their goals.
We formally define that an agent \(a_{i}\) is _rerouting_ when it is assigned a path \(p_{i}=[l_{c}^{i},...,l_{n}^{i}]\) by the MAPF solver that makes \(a_{i}\) move away from its goal \(l_{g}^{i}\), which can be expressed via the shortest-path heuristic:
\[H_{S}(l_{c}^{i},l_{g}^{i})<H_{S}(l_{n}^{i},l_{g}^{i}) \tag{5}\]
where \(l_{c}^{i}\) is the current location of \(a_{i}\) and \(l_{n}^{i}\) is the location where \(a_{i}\) will arrive after \(h\) timesteps (\(h\) is the replanning period, smaller than or equal to the time horizon \(w\)). In the _strict-limit highway_, given an agent \(a_{i}\) and its goal location \(l_{g}^{i}\), \(\forall(l_{a},l_{b})\in E_{H}\), \(H_{S}(l_{a},l_{g}^{i})=H_{S}(l_{b},l_{g}^{i})+1\) if \(l_{a}\neq l_{g}^{i}\). Therefore, when \(a_{i}\) moves along edges in \(E_{H}\), Eq. (5) does not hold, because \(H_{S}(l_{c}^{i},l_{g}^{i})\geq H_{S}(l_{n}^{i},l_{g}^{i})\). In summary, under the _strict-limit highway_, these properties guarantee that neither _deadlocks_ nor _rerouting_ occur as long as agents move across edges that are part of the highway.
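Eq. (5) translates into an equally simple check; in this sketch of ours, `H_to_goal` is a strict-limit distance table toward the agent's goal, as computed by the heuristic sketch above.

```python
def is_rerouting(H_to_goal, current_loc, loc_after_h):
    """Eq. (5): the agent moves away from its goal over the next h timesteps."""
    return H_to_goal[current_loc] < H_to_goal[loc_after_h]
```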
## V Experiments
Our experiments compare lifelong MAPF _before_ and _after_ incorporating the _highway_ and analyze its strengths and drawbacks. First, we evaluate the changes in throughput and runtime when leveraging the _highway_. Second, we analyze the advantages of using the _highway_ as map sizes or agent densities grow larger.
### _Environment_
We implement RHCR with \(w=5\), \(h=5\), using PBS as the lifelong MAPF solver and a standard location-time A* as the low-level solver, in Python, and run all experiments on an Intel Core i9-9980XE with 16 GB memory. Before each round of experiments, the start and goal locations of agents are selected randomly from distinct empty locations on the map. When an agent arrives at its goal location, it is assigned a new goal that is not the goal location of any other agent. We randomly initialize 100 episodes for each experiment and simulate 100 iterations of planning in each episode. Each iteration has a time limit of 60 seconds, and an episode is labeled a _fail case_ if any iteration times out. All reported metrics are averages that exclude the fail cases.
Generally, _throughput_ and _runtime_ are the two main metrics for lifelong MAPF. To evaluate how effectively the _highway_ deals with the existing phenomena of deadlocks and rerouting, we also record _moving timesteps_, _idle timesteps_, and the _rerouting rate_ during the experiments. The _rerouting rate_ is the percentage of agents that reroute in an iteration, _moving timesteps_ is the number of timesteps an agent moves during a task, and _idle timesteps_ is the number of timesteps an agent stays at the same location without moving during a task.
Fig. 4: Different behaviors under the _strict-limit_ and _soft-limit_ highway. The arrow represents the path in the next \(n\) timesteps. When the road ahead is clear, agents under both the _strict-limit_ and the _soft-limit_ highway follow the direction of the highway, as shown in panel (a). When a dynamic obstacle occurs, the agent under the _strict-limit_ highway still strictly keeps to the highway direction, whereas the agent under the _soft-limit_ highway chooses to reroute, as shown in panel (b).
Fig. 5: A case of the _deadlock_ situation. \(P_{1},P_{2}\) represent the paths of \(a_{1},a_{2}\) found by the MAPF solver and \(w\) is the _time horizon_. In panel (a) without the _highway_, \(w\) must be set to 3 to avoid the _deadlock_. In panel (b), a _deadlock_ cannot happen under the _highway_ for any \(w\).
To observe how increasing \(c\) influences agents, the _highway avoidance rate_ records the chance that an agent moves against the highway direction. Similar to [5] and [15], we report the number of _generated nodes_, i.e., the high-level nodes generated by the MAPF solver before finding a solution in one planning run, to explain why our method achieves a lower _runtime_.
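For concreteness, these metrics can be computed from simple per-task and per-iteration logs; the following sketch uses our own hypothetical data layout (a path is the sequence of an agent's locations over one task, and `reroute_flags` marks rerouting agents in one iteration), not the paper's code.

```python
def throughput(tasks_finished, timesteps):
    """Tasks finished per timestep over a simulation."""
    return tasks_finished / timesteps

def rerouting_rate(reroute_flags):
    """Percentage of agents that reroute in one planning iteration."""
    return 100.0 * sum(reroute_flags) / len(reroute_flags)

def moving_and_idle_timesteps(path):
    """Split one task's path into (moving, idle) timesteps."""
    idle = sum(1 for a, b in zip(path, path[1:]) if a == b)
    return len(path) - 1 - idle, idle
```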
Given that the heuristic values are calculated from shortest paths (e.g., the _strict-limit_ highway and the _soft-limit_ highway with \(c=1\) or \(\infty\)), a trick from previous work [4] can be applied to simplify planning: in windowed MAPF, instead of planning the entire path to the goal, planning beyond \(w\) timesteps can simply be skipped, which yields the same result while saving the time needed to plan entire paths. We include this trick in Table II for comparison.
### _Fulfillment Warehouse_
In autonomous warehouses, inventory pods are packed closely into rectangular blocks, and the blocks, with corridors in between, are placed into a grid, as in Fig. 7. In our experiments, we follow the settings of the _warehouse map_ in Fig. 7-(a) [30]: (1) each block contains 10\(\times\)2 pods, and (2) the corridors are single-rowed. For simplicity, we test maps with N\(\times\)N blocks, where N is an odd number (i.e., 3, 5, 7, etc.), and inventory pods are considered obstacles for the agents. For instance, the smallest, 3\(\times\)3 version of the maps is shown in Fig. 7-(b). The percentage of obstacles over the entire map indicates how crowded the map is: our smallest map (Fig. 7-(b)) has more than 50% obstacles, and the percentage gradually increases as the map grows with larger N. The high obstacle density and neighboring corridors make these maps suitable for testing the _highway_.
### _From No Highway to Highway_
The parameter \(c\) in the _soft-limit highway_ determines how strongly the map is influenced by the highway directions. We therefore evaluate the influence of the _highway_ on the map in Fig. 7-(b) with increasing \(c\) and a fixed 5% agent density (i.e., the ratio of agents to empty locations on the map). Fig. 6 shows the relative throughput, runtime, and generated nodes for \(c\in\{1,1.2,1.5,2,5,10,50,\infty\}\), each compared to the result for \(c=1\).
When the map is small, throughput drops as \(c\) rises (Fig. 6-(a)), although the runtime decreases slightly (Fig. 6-(b)). On the larger maps, however, the drop in throughput is much slower, and the drop in runtime is significant as \(c\) increases. Surprisingly, the relative throughput at \(c=1.5\) even surpasses the throughput without the _highway_. Furthermore, the trend shows that the throughput gap between _no-highway_ (\(c=1\)) and _highway_ (\(c=\infty\)) narrows as the map size grows.
The speedup in runtime mainly stems from more efficient planning under the _highway_: because the global direction rule is followed, planning results are more consistent, which causes fewer conflicts before a solution is found. Referring to Fig. 6, with a higher \(c\), fewer nodes need to be generated to find a solution, and the benefit is even more pronounced on larger maps.
In Table I, the results verify that using the _highway_ yields fewer average idle timesteps (timesteps an agent stays at the same location during a task), reflecting the consistent path directions and fewer deadlocks; the reduction in average idle timesteps is larger on bigger maps. Also, as \(c\) increases, both the chance that an agent moves against the highway direction and the incidence of rerouting decrease.
Fig. 6: Relative ratios of throughput, runtime, and numbers of generated nodes (i.e., the number of high-level nodes generated by PBS before finding the solution) in different map sizes. The results of different \(c\) for _soft-highway_ are compared to the results without _highway_, namely with \(c=1\). As the \(c\) increases, the cost of moving against the direction of the highway grows, which gradually makes agents move along the direction of the highway and results in lower throughput and faster runtime because of fewer nodes generated when planning. Moreover, the impact on throughput is reduced as the size of the map increases, which is shown from the bottom line to the top line of panel (a).
Fig. 7: Panel (a) shows the warehouse map in [30] with 10\(\times\)20 blocks of inventory pods. Each rectangle indicates one block. Panel (b) shows the warehouse map with 3\(\times\)3 blocks of inventory pods with the direction (arrow) of the highway. Each block (green rectangle) in both maps consists of 10\(\times\)2 inventory pods (green squares).
This is shown by the highway avoidance rate (the chance of an agent moving against the highway direction) and the rerouting rate (the percentage of rerouting agents in one planning iteration) in Table I. Besides, following the highway causes agents to spend more moving timesteps (timesteps an agent needs to arrive at its goal). Nevertheless, using the _highway_ on a larger map incurs fewer extra moving timesteps, which is another reason why larger maps benefit more from the _highway_. For instance, in Table I, using the _highway_ (\(c=50\)) on the 3x3 map causes 69.1% extra moving timesteps, while on the 7x7 map it causes only 23.9% extra moving timesteps.
### _Scaling up_
To ensure the runtime stays efficient enough to assign paths on time without idling the agents, the question is whether the advantages of the _highway_ persist as the warehouse map scales up. Referring to the results above, the answer is clearly yes. Subsequently, we discuss and quantify the benefits of the _highway_ as the map size scales up and the agent density grows larger.
#### V-D1 Map Size
To scale up the map, each block is kept the same as in Fig. 7-(b), containing 10\(\times\)2 pods, and the corridors remain single-rowed. The number of blocks is then increased from 3\(\times\)3, 5\(\times\)5, 7\(\times\)7, up to 15\(\times\)15, with the agent density fixed at 5%. The results using the _strict-limit highway_, the _soft-limit highway_ (\(c=\infty\)), and the baseline without the _highway_ are shown in Table II. As the map grows, the throughput gap between the _highway_ and the baseline shrinks, while the speedup gained from the _highway_ keeps increasing. The main reason for the shrinking throughput gap is that, as the map gets larger, the number of extra moves taken for the highway becomes insignificant relative to the total number of moves. Furthermore, in contrast to the settings using the _highway_, fail cases appear on larger maps without the _highway_.
Besides, the throughput gap between the _strict-highway_ and the _soft-highway_ also decreases in Fig. 8. It therefore becomes more cost-effective to convert the _soft-highway_ into a _strict-highway_ for faster runtime (see Table II).
#### V-D2 Number of Agents
To test the influence of different agent densities, the density of agents is increased from 5% to 20% on the 3\(\times\)3 map in Fig. 7-(b). Referring to Fig. 9, the throughput with the _highway_ increases steadily, whereas the throughput without the _highway_ starts to decline as the density increases. We therefore evaluate the rerouting rate and the average idle timesteps. Under the _highway_ setting, _rerouting_ rarely happens, and the lower average idle timesteps indicate that _deadlocks_ and mutual waiting are significantly reduced thanks to the consistent direction of the agents' paths.
Fig. 8: Relative throughput of transferring "_no-highway_ to _soft-limit highway_" and "_soft-limit highway_ to _strict-limit highway_". The dotted line represents the relative runtime of replacing "_no-highway_ with _strict-limit highway_", showing the reduction in runtime. As the map enlarges, the relative throughputs of these two settings approach 100% (solid lines) while the runtime keeps decreasing (yellow dotted line).
Fig. 9: Comparison of throughput, rerouting rate, and the average number of idle timesteps between _highway_ and _no-highway_ at different agent densities. At high agent density, using the _highway_ even achieves higher throughput than _no-highway_, as shown in panel (a). As shown in panel (b), this is because the phenomena of rerouting (solid lines) and idle agents (dotted lines) are severe without the _highway_.
## VI Conclusion
In this work, we studied the _highway_ idea (previously proposed for one-shot MAPF) in the lifelong MAPF scenario, focusing on the trade-off between runtime and throughput. Furthermore, we discussed the properties of combining the _highway_ with the lifelong MAPF framework, which mitigates the existing problems of deadlocks and rerouting. Finally, we evaluated the _highway_ through a series of experiments. According to the experimental results, using the _highway_ significantly speeds up the runtime, and the loss of throughput gradually diminishes as the map size or agent density grows larger.
|
2309.01393 | A Simple Quantitative Model of Neuromodulation. Part I: Ion Flow Through
Neural Ion Channels | We develop a simple model of ionic current through neuronal membranes as a
function of membrane potential and extracellular ion concentration. The model
combines a simplified Poisson-Nernst-Planck (PNP) model of ion transport
through individual mechanosensitive ion channels with channel activation
functions calibrated from ad hoc in-house experimental data. The simplified PNP
model is validated against bacterial Gramicidin A ion channel data. The
calibrated model accounts for the transport of calcium, sodium, potassium, and
chloride and exhibits remarkable agreement with the experimentally measured
current-voltage curves for the differentiated human neural cells. All relevant
data and code related to the ion flow models are available at DaRUS. | Linda Werneck, Mertcan Han, Erdost Yildiz, Marc-André Keip, Metin Sitti, Michael Ortiz | 2023-09-04T06:47:01Z | http://arxiv.org/abs/2309.01393v1 | # A Simple Quantitative Model of Neuromodulation. Part I: Ion Flow Through Neural Ion Channels
###### Abstract
We develop a simple model of ionic current through neuronal membranes as a function of membrane potential and extracellular ion concentration. The model combines a simplified Poisson-Nernst-Planck (PNP) model of ion transport through individual mechanosensitive ion channels with channel activation functions calibrated from ad hoc in-house experimental data. The simplified PNP model is validated against bacterial Gramicidin A ion channel data. The calibrated model accounts for the transport of calcium, sodium, potassium, and chloride and exhibits remarkable agreement with the experimentally measured current-voltage curves for the differentiated human neural cells. All relevant data and code related to the ion flow models are available at [1].
## 1 Introduction
Owing to its ability to provide non-invasive control of neural activity in deep-brain regions with millimeter spatial precision, ultrasonic neuromodulation (UNM) has elicited sustained interest (cf., e. g., [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]) and is widely regarded as one of the most significant new technologies for human neuroscience. Studies dating back to the 1950s [14, 15, 16, 17] suggested that ultrasound can affect neural activity. In 2008, Tyler et al. [2] reignited interest in this phenomenon by demonstrating neuromodulation in rodents using low-intensity ultrasound [2, 4] without a chemical or genetic pre-treatment of the subject. UNM complements human imaging techniques for studying brain connectivity and function in basic and clinical applications. Thus, established non-invasive modulation techniques such as transcranial magnetic and electrical stimulation (TMS and TES) are limited by their physics to mostly cortical regions and centimeter-scale resolution, thereby lacking access to subcortical areas underlying many neurological functions [18]. In contrast, the physics of ultrasound enables this modality to target deep tissue structures with millimeter precision, including the human brain [19, 20], which enables a broad range of applications, including drug delivery [21], deep brain stimulation [22] and the treatment of epilepsy [23], and depression [24], among others.
Despite this surge in interest, the precise biophysical mechanisms underlying UNM have been the subject of extensive surmise and controversy. An unambiguous and definitive identification of such mechanisms has been finally effected by Yoo et al. [13], who have shown that low-intensity focused ultrasound (LIFUS) in the 300-1000 kHz frequency range excites neurons through primarily mechanical means mediated by specific calcium-selective mechanosensitive (MS) ion channels. The activation of these channels results in a gradual build-up of calcium, which is amplified by calcium and voltage-gated channels, generating a burst firing response. Pharmacological and genetic inhibition of specific ion channels leads to reduced responses to ultrasound while over-expressing these channels results in stronger ultrasonic stimulation. In addition, Yoo et al. [13] find that cavitation [25, 26, 27], temperature changes [28, 29, 30], indirect auditory mechanisms [12, 31], and synaptic transmission are not required for this excitation to occur, thus ruling out
other possible competing mechanisms. These findings strongly suggest the interaction between ultrasound and MS ion channels as the underlying mechanism responsible for UNM.
These remarkable contributions notwithstanding, a validated quantitative model of the biophysical mechanisms by which LIFUS mechanically excites cortical neurons appears to be as yet unavailable. The overarching goal of the present work is to develop--and validate with in-house experiments on differentiated human neural cells--a simple but predictive mechano-electro-physiological model of the effect of LIFUS on human cortical axons. The model, alongside imaging and characterization techniques such as functional magnetic resonance imaging (fMRI), magnetic resonance elastography (MRE) [32, 33, 34, 35, 36, 37, 38] and advanced computational modeling [39, 40], can be used for medical device design and the optimization of personalized clinical procedures. We specifically aim to develop a multiscale hierarchy of electro-mechanical models that provide a fundamental understanding, as well as a quantitative and predictive capability, of how ultrasonic excitation interferes with brain activity and induces neuromodulation.
At the full cranial scale, ultrasound wave propagation in the brain can be studied using finite element models representing a variety of conditions, from LIFUS to concussion (cf. e. g. [41, 40]). Detailed computational models have been successfully constructed from magnetic resonance (MR) images [42, 43, 44]. The constitutive modeling of soft biological tissues has also received considerable attention [41, 45, 46, 47, 48, 49]. These full-cranium computational models enable the precise determination of the viscoelastic wave patterns that arise in the human brain in response to LIFUS. In particular, for transducers operating at a fixed frequency, the models predict the steady-state harmonic deformations sustained by any target point within the brain or in any other organ of interest, such as the inner ear [40].
Bianchi et al. [50, 51] have shown that local strains such as those induced by LIFUS are transferred from tissue to individual cells in peripheral nerves. Specifically, they have quantified experimentally the changes in inward and outward ion currents and action potential (AP) firing in dorsal root ganglion-derived neurons subject to uniaxial strains, using a custom-built device allowing simultaneous cell deformation and patch-clamp recording. In turn, the forces that regulate MS ion channel gating originate from the surrounding lipid membrane, suggesting a close relationship between membrane strain and MS ion channel function [52]. For example, membrane stiffening by stomatin-like protein-3 has been shown to finely control mechanically gated ion channel activity in sensory neurons [53]. In addition, voltage-activated sodium channels have been shown to respond to strain, with a left-shift in channel current-voltage (I-V) relations, leading to reduced inactivation potential, sodium leakage, and a reduced rate of AP firing [54]. Overall, these results suggest that ion channels are actuated directly by uniaxial strain in neuronal axons, thereby inducing cell electrophysiological activity.
The central question that remains is, therefore, how the electrophysiology of single neurons is influenced by axonal strain. We specifically aim to characterize this effect within the framework of the Hodgkin-Huxley (HH) model [55, 56, 57, 58]. In 1952, Alan Hodgkin and Andrew Huxley proposed their celebrated model to explain the ionic mechanisms underlying the initiation and propagation of action potentials in the squid giant axon, for which they received the 1963 Nobel Prize in Physiology or Medicine. In the HH model, voltage-gated ion channels are represented by effective electrical conductances depending on both voltage and time. The electro-chemical gradients driving the flow of ions through the channels are represented by sources whose voltages are determined by the ratio of the intra- and extracellular concentrations of the ionic species under consideration.
Within this framework, we posit that a quantitative model of mechanical neuromodulation can be fashioned in three steps: i) a model of conductance due to ion flow through open axonal channels; ii) a model of mechanosensitive channel actuation by a prescribed axial strain [59, 2, 60]; and iii) a model of parametric resonance resulting from axonal harmonic excitation such as induced by LIFUS. The validation of these three elements of the model require specialized experimental protocols and extensive laboratory testing. Therefore, we divide the presentation of the model into three parts. In the present part I, we focus on the effective ionic current of ion channels and their dependence on ion concentration and channel geometry, which lays the foundation for the remaining parts II and III of the model, to be presented in subsequent publications.
We specifically model ion transport by recourse to the coupled Poisson-Nernst-Planck (PNP) equations [61, 62]. The basic framework of the PNP model and some of its extensions, including size effects [63], ion-water interactions [64], coupling to density functional theory [65], and others, is presently well-established. However, the direct first-principles or molecular dynamics simulation of mechanical gating in ion channels remains largely out of reach due to the structural complexity of the channels, disparate length scales, and the staggering gap between the molecular and diffusive time scales. Present approaches are, for the most part, decoupled from the mechanical response and treat the surface and charge distribution of the channels as given.
A coarse-grained strategy for ion channel analysis is to construct a continuum functional of a charge transport system to encompass the polar and nonpolar free energies of solvation and chemical potential-related energies expressed in terms of averaged ion concentrations [66]. Using the calculus of variations, a coupled PNP system of equations and other
transport equations then follows whose solutions give explicit profiles of electrostatic potential and densities and fluxes of charged species [61, 62].
Our modeling strategy consists of solving the PNP equations at the continuum level for individual ion channels assuming a simplified cylindrical channel geometry, and then estimating the membrane conductances arising in the HH model by means of a mean-field approximation that uses the density of channels per unit area of the axonal membrane. These simplifications set forth a simple quantitative model of ion flow that characterizes the effective conductance of the axonal membrane and parametric dependencies thereof, including channel geometry. The model accounts for the effect of calcium, sodium, potassium, and chloride ions.
Evidently, the critical question to be addressed is whether such a simple model suffices to characterize effective ion conductances accurately. To elucidate this question, we validate the model against archival experimental data from a single bacterial ion-channel model, gramicidin A, and calibrate it with in-house experimental data from a human neural cell culture acquired through electrophysiological recordings conducted specifically for the present study. We find that the agreement between the predictions of the continuum model and the experimental data is excellent, which is remarkable considering the simplicity of the model. This validation suggests that the effective ion conductance of axonal membranes depends mainly on coarse-grained channel parameters such as cross-sectional area and channel length and, to a good approximation, is independent of the fine structure of the channels.
In Section 2, we first introduce the PNP equations in the form of a thermodynamic framework. Based on the PNP model, we present a model for ion flow through single ion channels and validate our results using data from GA channels in Section 3. We extend the single-channel model to a full-axon model using in-house experimental data and discuss our results in Section 4. Finally, we conclude our work in Section 5.
## 2 A simple quantitative model of ion flow through axonal channels
Given the crucial role played by mechanosensitive (MS) ion channels in the physiology of mechanotransduction, considerable effort has been devoted to understanding their gating mechanisms (cf. [67] for a review). MS ion channels respond to mechanical forces along the plane of the cell membrane (membrane tension). The protein structure of many MS ion channels is known and available in protein repositories (GenBank, Protein DataBank, and SwissProt), which provides a basis for molecular dynamics simulations. MS ion channels of small conductance (MscS) from several prokaryotes have been extensively characterized [68] and serve as model systems for understanding the physio-chemical principles of mechanotransduction. Comparatively, much less is known about the structure of eukaryotic members of the MscS superfamily, many of which acquire extra transmembrane helices as well as additional extra-membrane domains [69], but the number of newly characterized channels is growing at a rapid pace [70, 71]. Even in cases where the protein structure is known in detail, the first-principles characterization of the gating mechanism of MS ion channels, with or without strain, remains a formidable challenge. An additional challenge in understanding the function of MS ion channels concerns the analysis of ion transport. Ion channels exist in a complex environment, including the cell membrane, water molecules, mobile ions, and other molecular components. These components interact through mutual long-range (e. g., electrostatic) and short-range (e. g., Lennard-Jones) interactions.
Unlike these first-principles studies, the focus of the present work is to estimate the effective ionic conductance of axonal membranes as a function of coarse features of the channels, such as size and density, with a view to devising a strain- and vibration-dependent HH model of neuromodulation. To this end, we resort to a continuum PNP model [61, 62, 66] applied to simplified cylindrical channel geometry and subsequently estimate the membrane conductances by means of a mean-field approximation based on channel densities. The PNP model is a mean-field model that, using a continuum approximation, treats the ion flow as the averaged ion concentration driven by the electrostatic potential force and ion concentration gradient. These aspects of the models are reviewed next for completeness and ease of reference.
### Thermodynamic framework
We consider a body in a configuration \(\mathcal{B}\subset\mathbb{R}^{3}\) with boundary \(\partial\mathcal{B}\) at a given time \(t\) in a time interval \(\mathcal{T}\subset\mathbb{R}_{+}\). The body is contained in a domain \(\Omega\subset\mathbb{R}^{3}\) in free space, which includes the space occupied by the body \(\mathcal{B}\). The space is spatially parameterized in the coordinates \(\boldsymbol{x}\in\Omega\). To describe electro-chemical phenomena, we introduce as independent field variables the electric potential
\[\phi:\left\{\begin{array}{l}\mathcal{B}\times\mathcal{T}\to\mathbb{R}\\ (\boldsymbol{x},t)\mapsto\phi(\boldsymbol{x},t)\end{array}\right. \tag{1}\]
and the concentrations of the individual species, labeled by 'i', as well as the corresponding chemical potentials,
\[c_{\mathrm{i}}:\left\{\begin{array}{l}\mathcal{B}\times\mathcal{T}\to\mathbb{R}_{+}\\ (\mathbf{x},t)\mapsto c_{\mathrm{i}}(\mathbf{x},t)\end{array}\right.\qquad\text{ and }\quad\mu_{\mathrm{i}}:\left\{\begin{array}{l}\mathcal{B}\times \mathcal{T}\to\mathbb{R}\\ (\mathbf{x},t)\mapsto\mu_{\mathrm{i}}(\mathbf{x},t).\end{array}\right. \tag{2}\]
The driving electro-chemical fields are the electric field \(\mathbf{e}\) and the chemical fields \(\mathbf{m}_{\mathrm{i}}\), defined as
\[\mathbf{e}:=-\nabla\phi\quad\text{and}\quad\mathbf{m}_{\mathrm{i}}:=-\nabla\mu_{ \mathrm{i}}, \tag{3}\]
where '\(\nabla\)' represents the gradient operator with respect to \(\mathbf{x}\). An application of Cauchy's theorem further defines the electric displacement field \(\mathbf{d}\) and the molar ion-flux densities \(\mathbf{h}_{\mathrm{i}}\), with associated jump conditions
\[-q_{\mathrm{f}}=\llbracket d\rrbracket\cdot\mathbf{n}\quad\text{and}\quad 0= \llbracket h_{\mathrm{i}}\rrbracket\cdot\mathbf{n}, \tag{4}\]
where \(q_{\mathrm{f}}\) is the surface density of free electric charges. In the above expressions, \(\llbracket\bullet\rrbracket\) denotes the jump of a quantity \(\bullet\) across a surface \(\mathcal{S}\) separating two regions \((1)\) and \((2)\) such that \(\llbracket\bullet\rrbracket:=\bullet_{(1)}-\bullet_{(2)}\) and \(\mathbf{n}\) denotes a unit normal on \(\mathcal{S}\) pointing from the region (1) to the region (2). Eq. (4)\({}_{2}\) describes continuous molar flux across surfaces, where the molar surface-flux densities--characterizing the amount of ion species passing through a given surface per unit time--are given as \(h_{\mathrm{i}}:=\mathbf{h}_{\mathrm{i}}\cdot\mathbf{n}\). The electric displacement can be expressed in terms of the polarization \(\mathbf{p}\) as
\[\mathbf{d}=\epsilon_{0}\mathbf{e}+\mathbf{p}, \tag{5}\]
where \(\epsilon_{0}~{}\approx~{}8.854~{}\times~{}10^{-12}~{}\frac{\mathrm{F}}{\mathrm{ m}}\) is the electric permittivity of free space and the polarization \(\mathbf{p}\) only exists in space filled with polarizable matter.
The motion of the ions induces ion-current densities
\[\mathbf{j}_{\mathrm{i}}=FZ_{\mathrm{i}}\mathbf{h}_{\mathrm{i}}, \tag{6}\]
where \(Z_{\mathrm{i}}\) is the ionic valence of the individual species and \(F~{}\approx~{}9.6485~{}\times~{}10^{4}~{}\frac{\mathrm{C}}{\mathrm{mol}}\) is the Faraday constant. The total ion-current density is then
\[\mathbf{j}=\sum_{\mathrm{i}}\mathbf{j}_{\mathrm{i}}, \tag{7}\]
with associated jump condition
\[0=\llbracket j\rrbracket\cdot\mathbf{n}, \tag{8}\]
where \(j:=\mathbf{j}\cdot\mathbf{n}\) is the surface density of ionic current. We assume throughout weak electric currents, from which no magnetic fields of significant magnitude are created.
### Balance equations and dissipation inequality
Gauss's law of electrostatics requires
\[\text{div}\,\mathbf{d}=\rho, \tag{9}\]
where \(\rho\) is the volumetric density of ionic charges, related to the molar concentrations \(c_{\mathrm{i}}\) via
\[\rho=F\sum_{\mathrm{i}}Z_{\mathrm{i}}c_{\mathrm{i}}. \tag{10}\]
In addition, assuming that a change in concentration within a control volume can only happen as a result of an inward or outward flux of matter through the surface of the control volume (and cannot be generated locally within the control volume), we obtain the mass balance laws for the ion concentrations
\[\dot{c}_{\mathrm{i}}=-\text{div}\,\mathbf{h}_{\mathrm{i}}. \tag{11}\]
We posit an additive decomposition of the electro-chemical energy-density function
\[\widehat{\Psi}(\mathbf{e},c):=\widehat{\Psi}_{0}(\mathbf{e})+\widehat{\Psi}_{\mathrm{ mat}}(\mathbf{e},c), \tag{12}\]
where we write \(c:=(c_{1},\ldots,c_{n})\), with \(n\) the number of ionic species,
\[\widehat{\Psi}_{0}(\mathbf{e})=-\frac{\epsilon_{0}}{2}\|\mathbf{e}\|^{2}, \tag{13}\]
\(\mathbf{x}\in\Omega\), is the electric energy density of free space and \(\widehat{\Psi}_{\mathrm{mat}}(\mathbf{e},c_{\mathrm{i}})\), \(\mathbf{x}\in\mathcal{B}\), is the electro-chemical energy density in the presence of matter. The corresponding thermodynamically consistent relationships for the polarization and the chemical potential then follow as
\[\mathbf{p}:=-\partial_{\mathbf{e}}\widehat{\Psi}_{\mathrm{mat}}(\mathbf{e},c),\quad\text{ and}\quad\mu_{\mathrm{i}}:=\partial_{c_{\mathrm{i}}}\widehat{\Psi}_{\mathrm{mat}}(\mathbf{e},c). \tag{14}\]
Using these relations, the dissipation inequality reduces to
\[\sum_{\rm i}(\mathbf{m}_{\rm i}+Z_{\rm i}F\mathbf{e})\cdot\mathbf{h}_{\rm i}\geq 0. \tag{15}\]
By further defining the electro-chemical potentials of the individual ions, as well as the corresponding negative gradients, as
\[\overline{\mu}_{\rm i}:=\mu_{\rm i}+Z_{\rm i}F\phi\quad\text{and}\quad\overline{ \mathbf{m}}_{\rm i}:=-\nabla\overline{\mu}_{\rm i} \tag{16}\]
and using the definition of \(\mathbf{m}_{\rm i}\) and \(\mathbf{e}\) as gradient fields according to (3), we recast the dissipation inequality (15) in the compact form
\[\sum_{\rm i}\overline{\mathbf{m}}_{\rm i}\cdot\mathbf{h}_{\rm i}\geq 0. \tag{17}\]
This constraint can be automatically fulfilled by assuming kinetic relations for the molar flux densities of the individual ions of the potential form
\[\mathbf{h}_{\rm i}:=\partial_{\overline{\mathbf{m}}_{\rm i}}\widehat{\Phi}_{\rm i}( \overline{\mathbf{m}}_{\rm i}), \tag{18}\]
and requiring the electro-chemical dissipation-potential-density function \(\widehat{\Phi}_{\rm i}(\overline{\mathbf{m}}_{\rm i})\) to be a gauge, i. e., a non-negative, positively homogeneous and convex function that evaluates to zero at the origin.
### Material model
We model electro-chemical (electrodiffusive) behavior by recourse to the classical PNP equations. In this setting, the electro-chemical energy-density function of the material is assumed to be of the form
\[\widehat{\Psi}_{\rm mat}(\mathbf{e},c):=-\frac{\epsilon_{0}\chi}{2}\|\mathbf{e}\|^{2 }+\sum_{\rm i=1}^{n}RTc_{\rm i}\left[\text{log}\frac{c_{\rm i}}{c_{0}}-\left(1 -\frac{c_{0}}{c_{\rm i}}\right)\right], \tag{19}\]
where \(\chi\) is the electric susceptibility, \(R\approx 8.314\ \frac{\mathrm{J}}{\mathrm{mol}\,\mathrm{K}}\) is the gas constant, \(T\) is the absolute temperature and \(c_{0}\) is the reference molar concentration. In addition, the dissipation-potential-density functions are assumed to be of the form
\[\widehat{\Phi}_{\rm i}(\overline{\mathbf{m}}_{\rm i}):=\frac{1}{2}\frac{D_{\rm i }}{RT}c_{\rm i}\|\overline{\mathbf{m}}_{\rm i}\|^{2}, \tag{20}\]
where \(D_{\rm i}\) are the diffusion coefficients of the ionic species.
For this particular model, an application of (14) gives the electric polarization and the chemical potentials as
\[\mathbf{p}=\epsilon_{0}\chi\mathbf{e}\quad\text{and}\quad\mu_{\rm i}=RT\,\text{log} \frac{c_{\rm i}}{c_{0}}, \tag{21}\]
whence the electric displacement \(\mathbf{d}\) in the entire domain can be computed using (5). In addition, an application of (18) gives the Nernst-Planck equation
\[\mathbf{h}_{\rm i}=-D_{\rm i}\left(\nabla c_{\rm i}+\frac{Z_{\rm i}F}{RT}\,c_{\rm i }\,\nabla\phi\right). \tag{22}\]
Inserting these relations into the conservation equations (11)\({}_{1}\) gives the diffusion equations
\[\dot{c}_{\rm i}=\text{div}\left[D_{\rm i}\left(\nabla c_{\rm i}+\frac{Z_{\rm i }F}{RT}\,c_{\rm i}\,\nabla\phi\right)\right]. \tag{23}\]
In addition, Gauss's law (9) together with the electric displacement (5), the electric field (3), and the relation (10) gives
\[-\text{div}(\epsilon\nabla\phi)=F\sum_{\rm i}Z_{\rm i}\,c_{\rm i}, \tag{24}\]
where \(\epsilon=\epsilon_{0}(1+\chi)\) is the electric permittivity.
Eqs. (23) and (24) set forth a system of partial differential equations which, together with suitable initial and boundary conditions, govern the evolution of the ionic concentrations and the electrostatic field.
## 3 Mass transport through single ion channel
As a first step towards the formulation of a full-axon ion transport model, we begin by considering an individual ion channel in isolation and seek to characterize the net flux of ions through the channel by means of the PNP model described in the foregoing.
### Cylindrically symmetric PNP problem
In order to obtain a simple and easy-to-evaluate model, we assume a cylindrical ion-channel geometry with boundary conditions shown in Fig. 1 and refer the solution to a system of cylindrical coordinates \((r,\theta,z)\). In this representation, the domain of analysis is \(0\leq r\leq a\), \(0\leq\theta<2\pi\), \(0\leq z\leq l\), where \(a\) and \(l\) are the radius and length of the channel, respectively. We further assume that the lateral surface of the channel is charge-free and impermeable to the ions and that the electric permittivity of the channel is homogeneous. Finally, we assume that the ion concentration is at a steady state and uniform across cross-sections of the channel.
By virtue of these simplifying assumptions, the concentration and electrostatic potential fields, \(c(z)\) and \(\phi(z)\), respectively, depend on the axial coordinate \(z\) only, and the governing equations (23) and (24) reduce to the elementary form
\[c^{\prime\prime}+\frac{ZF}{RT}\left(c\phi^{\prime}\right)^{\prime}=0,\qquad \epsilon\phi^{\prime\prime}+ZFc=0, \tag{25}\]
respectively, where \((\cdot)^{\prime}\) denotes the spatial derivative with respect to the coordinate \(z\). We further assume the boundary conditions
\[c(0)=c_{\rm in},\quad c(l)=c_{\rm out},\quad\phi(0)=\phi_{\rm in},\quad\phi(l )=\phi_{\rm out}, \tag{26}\]
where \(c_{\rm in}\) and \(\phi_{\rm in}\) are the concentration and electrostatic potential at the inlet of the channel, respectively, with \(c_{\rm out}\) and \(\phi_{\rm out}\) idem at the outlet. The instantaneous ion flux through the channel can be computed from the Gauss theorem as
\[I_{\rm c}=ZF\int_{A_{\rm in/out}}D\left(c^{\prime}+\frac{ZF}{RT}c\phi^{\prime} \right)_{z=0}\mathrm{d}A, \tag{27}\]
where \(A_{\rm in/out}\) are the inlet/outlet sections of the channel, respectively.
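As an illustration of how Eqs. (25)-(27) can be evaluated in practice, the following Python sketch (our own minimal example, not the authors' code) solves the nondimensionalized steady-state boundary-value problem for a single monovalent species with SciPy's collocation solver and evaluates the channel current at the inlet. The parameter values follow Table 1 for a GA-like channel; the relative permittivity inside the pore and the boundary concentrations and potentials are assumptions made for the illustration.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Physical constants and GA-like channel parameters (Table 1).
F, R, T = 9.6485e4, 8.314, 298.0                # C/mol, J/(mol K), K
Z, D = 1.0, 1.33e-10                            # valence, diffusivity (m^2/s)
l, a = 2.60e-9, 2.00e-10                        # channel length, radius (m)
eps = 8.854e-12 * 80.0                          # permittivity (assumed eps_r)
c_ref = 200.0                                   # mol/m^3 (0.2 M reservoirs)
phi_in, phi_out = 0.1, 0.0                      # 100 mV across the channel

# Nondimensionalization: x = z/l, u = c/c_ref, psi = F*phi/(R*T).
lam = l**2 * F**2 * c_ref / (eps * R * T)       # scaled Poisson coupling
psi_in, psi_out = F * phi_in / (R * T), F * phi_out / (R * T)

def rhs(x, y):                                  # y = [u, u', psi, psi']
    u, du, psi, dpsi = y
    d2psi = -lam * Z * u                        # Poisson, Eq. (25)_2
    d2u = -Z * (du * dpsi + u * d2psi)          # Nernst-Planck, Eq. (25)_1
    return np.vstack([du, d2u, dpsi, d2psi])

def bc(ya, yb):                                 # Dirichlet data, Eq. (26)
    return np.array([ya[0] - 1.0, yb[0] - 1.0,
                     ya[2] - psi_in, yb[2] - psi_out])

x = np.linspace(0.0, 1.0, 200)
y0 = np.vstack([np.ones_like(x), np.zeros_like(x),
                np.linspace(psi_in, psi_out, x.size), np.zeros_like(x)])
sol = solve_bvp(rhs, bc, x, y0, max_nodes=100000)

u0, du0, _, dpsi0 = sol.sol(0.0)                # inlet values for Eq. (27)
A = np.pi * a**2                                # channel cross-section
I_c = Z * F * A * D * (c_ref / l) * (du0 + Z * u0 * dpsi0)
print(f"single-channel current: {I_c:.3e} A")
```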
### Validation: Gramicidin A channels
By way of validation of the single-channel model just described, we compare the predictions of the model against archival experimental data [72, 73] for Gramicidin A (GA) channels in NaCl and KCl solutions at different molar concentrations, Fig. 2. Extra- and intracellular concentrations take equal values between 0.2 and 2 mM (molarity, 1 M is 1 mol/l), and the membrane potential varies between 25 and 200 mV. The properties of the GA channel used in the calculation are collected in Table 1. We note that agreement with the observational data requires effective diffusion coefficients in GA channels to be lower than the bulk diffusion coefficients by factors of \(2\) to \(10\)[74].
Fig. 3 shows experimental data points and computed ionic currents through a single GA channel for different NaCl and KCl concentrations. The agreement between the model predictions and the experimental data is remarkable, especially in view of the simplicity of the model, which suggests that the effective ion transport of individual channels depends mainly on coarse channel parameters such as cross-sectional area and channel length and is independent of the fine structure of the channels to a good first approximation.
\begin{table}
\begin{tabular}{c l} \hline Parameter & Value \\ \hline \(l_{\rm GA}\) & \(2.60\times 10^{-9}\) m \\ \(a_{\rm GA}\) & \(2.00\times 10^{-10}\) m \\ \(D_{\rm GA,Na}\) & \(1.33\times 10^{-10}\ \frac{\mathrm{m}^{2}}{\mathrm{s}}\) \\ \(D_{\rm GA,K}\) & \(3.92\times 10^{-10}\ \frac{\mathrm{m}^{2}}{\mathrm{s}}\) \\ \(D_{\rm GA,Cl}\) & \(2.03\times 10^{-10}\ \frac{\mathrm{m}^{2}}{\mathrm{s}}\) \\ \hline \end{tabular}
\end{table}
Table 1: Properties of GA channel [75, 74, 73] used in PNP calculations.
Figure 1: Simplified geometry and boundary conditions assumed in calculations of ion channel mass transport.
## 4 Full-axon ion transport model
Excitation and electrical signaling of neurons involve the transport of ions through channels that populate the neuronal membrane in large numbers [76]. Sodium (Na\({}^{+}\)), potassium (K\({}^{+}\)), calcium (Ca\({}^{2+}\)) and chloride (Cl\({}^{-}\)) ions account for the majority of the activity. Each channel responds to voltage, chemical, or mechanical stimuli; the response of the channel is called gating, which keeps the channel open for a few milliseconds. Open channels exhibit selective permeability, allowing a specific ion species to flow at high rates on the order of \(10^{6}\) ions per second. Neurons exhibit a voltage \(V\), i. e. an electrical potential difference across their membrane, which is negative on the cytoplasmic side compared to the extracellular space. This membrane voltage, normally ranging from \(-90\) to \(-50\) mV, is mainly due to differences in ionic concentration on the extra- and intracellular sides of the membrane. In order to estimate the net ionic current through an entire axonal membrane, we resort to a simple mean-field model that aggregates the ion fluxes of the individual channels as computed, e. g., by the PNP model set forth in Section 2.
### A mean-field model for voltage-gated channels
We consider voltage-gated Na\({}^{+}\), K\({}^{+}\), Ca\({}^{2+}\) and Cl\({}^{-}\) channels responding to a prescribed voltage \(V\). We assume that the overall ion transport \(I_{\text{tot}}\) through the membrane is the sum of the ion currents through all the individual channels that are open at a given voltage \(V\). We posit that the number of gated channels is an increasing function of \(V\), to be
Figure 3: Comparison between experimental data [72] and simulation results of voltage clamp of (a) KCl, and (b) NaCl solution for different molar concentrations.
Figure 2: Alignment of GA channel in the cellular membrane. The channel consists of 15 L- and D-amino acids and presents as helix dimers in the lipid bilayers.
determined. These assumptions suggest a relation of the form
\[I_{\text{tot}}=b_{\text{Na}}(V)I_{\text{c}}^{\text{Na}}+b_{\text{K}}(V)I_{\text{c}}^{\text{K}}+b_{\text{Ca}}(V)I_{\text{c}}^{\text{Ca}}+b_{\text{Cl}}(V)I_{\text{c}}^{\text{Cl}}, \tag{28}\]
where \(I_{\text{c}}^{\text{Na}}\), \(I_{\text{c}}^{\text{K}}\), \(I_{\text{c}}^{\text{Ca}}\) and \(I_{\text{c}}^{\text{Cl}}\) are ionic currents of single Na\({}^{+}\), K\({}^{+}\), Ca\({}^{2+}\) and Cl\({}^{-}\) channels, respectively. In addition, \(b_{\text{Na}}(V)\), \(b_{\text{K}}(V)\), \(b_{\text{Ca}}(V)\) and \(b_{\text{Cl}}(V)\) are the number of Na\({}^{+}\), K\({}^{+}\), Ca\({}^{2+}\) and Cl\({}^{-}\) channels in the axon, respectively, that are gated at voltage \(V\). To close the model, we calibrate these activation functions empirically from our in-house experimental data, as described next.
### Experimental setup: Patch-clamp recordings from differentiated human neural cells
For purposes of model calibration, we test a human-derived neural progenitor cell line (ReNcell CX, Sigma Aldrich, MO, USA). The neurons are maintained in ReNcell NSC maintenance media on laminin-coated cell culture dishes at \(37\,\mathrm{\SIUnitSymbolCelsius}\) in a humidified incubator with 5\(\%\) CO\({}_{2}\) prior to use in experiments. After differentiation of the neural cells, all electrophysiological recordings are made with a whole-cell patch-clamp setup (Axopatch 200B, Molecular Devices, CA, USA). Pulled patch pipettes with a resistance of 4-6 M\(\Omega\) are used to carry out the whole-cell patch-clamp experiments. The physiological extracellular media are prepared by mixing 20 mM HEPES, 10 mM glucose, 140 mM NaCl, 2.5 mM KCl, 1.8 mM CaCl\({}_{2}\), and 1.0 mM MgCl\({}_{2}\) in distilled water. The pH is calibrated to 7.4 using 1 M NaOH. The ion concentrations for the other extracellular media conditions are modified according to Table 2. The internal cellular medium is purchased from a commercial producer (Internal KF 110, Nanion, Munich, Germany). For the whole-cell patch-clamp measurements, the dynamic current-voltage measurements are recorded from the same point in the axon hillock of each neuron to standardize the experimental data collection. The patch pipettes are filled with the intracellular solution, and the pipette tips are applied to the cells while holding positive pressure. Once the gigaseal is surpassed and whole-cell patch formation is achieved, the patch-clamp pipette is held in this position for 5 minutes to stabilize the membrane potential within the physiological range. Dynamic current-voltage measurements are made from \(-100\) to 100 mV in 10 mV steps in the voltage-clamp settings. Representative experimental results for the membrane potential in response to step depolarization within the physiological range of membrane potentials are shown in Fig. 4.
### Calibration of an ionic transport model for neuronal membranes
The boundary conditions of the PNP model are set according to the experimental setup. In the calculations, membrane potentials are varied between \(-100\) and 100 mV. The evaluated extracellular concentrations are listed in Table 2; intracellular concentrations are given by the physiological state of neurons as \(c_{\text{in}}^{\text{Na}}=15\) mM, \(c_{\text{in}}^{\text{K}}=100\) mM, \(c_{\text{in}}^{\text{Ca}}=2\times 10^{-4}\) mM, \(c_{\text{in}}^{\text{Cl}}=13\) mM. Representative results from the PNP calculations are shown in Fig. 5.
For the physiological concentrations, the equilibrium potentials \(E_{\text{ion}}\) for all ion species agree closely with the literature values \(E_{\text{Na}}=56.4\) mV, \(E_{\text{K}}=-93.1\) mV, \(E_{\text{Ca}}=115.0\) mV, \(E_{\text{Cl}}=-61.7\) mV [77]. In the hyperphysiological environment, the concentration gradients for sodium and calcium are increased, while the concentration gradient of potassium is decreased. These concentrations result in larger sodium and calcium currents and a smaller potassium current. Conversely, in the hypophysiological scenario smaller concentration gradients of sodium and calcium result in smaller currents, whereas the potassium current is increased by a higher potassium concentration gradient. This effect is dominant for smaller membrane potentials and relatively minor for potassium compared with the other cations.
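As a quick cross-check of the quoted equilibrium potentials, the Nernst equation can be evaluated directly from the stated intra- and extracellular concentrations. A minimal Python sketch follows; the extracellular chloride concentration is estimated here from the medium recipe and the temperature of 298 K is an assumption, so the outputs only approximate the literature values cited above.

```python
import math

def nernst_mV(c_out, c_in, z, T=298.0):
    """Nernst equilibrium potential E = (RT / zF) ln(c_out / c_in), in mV."""
    R, F = 8.314, 96485.332
    return 1e3 * R * T / (z * F) * math.log(c_out / c_in)

# intracellular values from Section 4.3, extracellular from Table 2 (mM);
# extracellular Cl- estimated from 140 NaCl + 2.5 KCl + 2*1.8 CaCl2 + 2*1.0 MgCl2
c_in  = {"Na": 15.0, "K": 100.0, "Ca": 2.0e-4, "Cl": 13.0}
c_out = {"Na": 140.0, "K": 2.5, "Ca": 1.8, "Cl": 148.1}
z     = {"Na": 1, "K": 1, "Ca": 2, "Cl": -1}

for ion in ("Na", "K", "Ca", "Cl"):
    print(f"E_{ion} = {nernst_mV(c_out[ion], c_in[ion], z[ion]):6.1f} mV")
# yields roughly +57.4, -94.8, +117.0 and -62.5 mV, close to the values in the text
```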
With the single-channel currents computed by the PNP model, multiple linear regression of Eq. (28) to the data is carried out in order to identify the activation functions \(b_{\text{Na}}(V)\), \(b_{\text{K}}(V)\), \(b_{\text{Ca}}(V)\) and \(b_{\text{Cl}}(V)\) over the full range of the membrane potentials. The activation functions thus identified are shown in Fig. 6. The activation functions encode the information contained in the data, ranging from the number of active channels at different membrane potentials, the relative activities of the ionic species and their interaction, and other information.
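The per-voltage identification can be posed as an ordinary least-squares problem: at each clamp voltage, Eq. (28) supplies one linear equation per medium condition in the four unknowns \(b_{\text{Na}}\), \(b_{\text{K}}\), \(b_{\text{Ca}}\), \(b_{\text{Cl}}\). A hedged Python sketch is given below; the array shapes and all numbers are illustrative assumptions, not the paper's data or its MATLAB implementation.

```python
import numpy as np

def fit_activation_functions(I_single, I_tot):
    """Least-squares fit of Eq. (28) at a single clamp voltage V.
    I_single : (n_conditions, 4) single-channel PNP currents [Na, K, Ca, Cl],
               one row per extracellular medium condition.
    I_tot    : (n_conditions,) measured whole-cell currents at this voltage.
    Returns b(V) = [b_Na, b_K, b_Ca, b_Cl]."""
    b, *_ = np.linalg.lstsq(I_single, I_tot, rcond=None)
    return b

# illustrative numbers only: four medium conditions at one voltage
I_single = np.array([[2.0e-12, -4.0e-12, 1.0e-13, -5.0e-13],
                     [3.0e-12, -3.0e-12, 2.0e-13, -5.0e-13],
                     [4.0e-12, -2.0e-12, 3.0e-13, -5.0e-13],
                     [2.5e-12, -3.5e-12, 1.5e-13, -5.0e-13]])
I_tot = np.array([1.1e-9, 1.4e-9, 1.8e-9, 1.3e-9])
print(fit_activation_functions(I_single, I_tot))
```

Repeating this fit across the clamped voltage range traces out the activation functions of Fig. 6.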
Two features immediately stand out in Fig. 6: i) the chloride activation function exhibits a discontinuous spike at the chloride resting potential, and ii) the activation function of calcium takes high values for membrane potentials \(>40\) mV outside of the physiological range, as the membrane potential approaches the calcium equilibrium potential. Both activation functions stabilize the ionic current flow. This effect is expected for calcium channels in view of the strong
\begin{table}
\begin{tabular}{l l l l} \hline Ion & Hypophys. & Phys. & Hyperphys. \\ \hline \(c_{\text{out}}^{\text{Na}}\) & \(120\) & \(140\) & \(160\) \\ \(c_{\text{out}}^{\text{K}}\) & \(1.5\) & \(2.5\) & \(6.0\) \\ \(c_{\text{out}}^{\text{Ca}}\) & \(1.0\) & \(1.8\) & \(4.0\) \\ \hline \end{tabular}
\end{table}
Table 2: Extracellular ion concentrations (mM) in modified and physiological neuronal media.
influence that calcium ions have on the overall cell response, Fig. 4f, i. For chloride ions, the spike in the activation function establishes a stable current flow \(I_{\mathrm{Cl}}=b_{\mathrm{Cl}}\cdot I_{\mathrm{Cl}}^{\mathrm{PNP}}\) in the physiological range, cf. Fig. 8. A relatively high baseline of chloride current into the cell under physiological conditions, independent of membrane potential, is also noteworthy. This behavior may be due to membrane-impermeable intracellular polymer-like anionic chains that increase chloride inflow near the membrane [78, 79]. In this vicinity, referred to as the Debye layer, electroneutrality cannot be assumed [80].
The activation function of potassium bears additional remark. This function takes negative values in the physiological state, implying an inflow of potassium ions into the cell, cf. also Fig. 8. Since the activation functions contain all ionic coupling information, this behavior can be explained by an appeal to the double-layer theory: the cell membrane is not--as assumed in our model--electrically neutral but negatively charged. Cations form a layer at close range to the negatively charged membrane, and anions accumulate in a second layer. The dominant ionic species in the first layer are potassium and calcium. Therefore, the concentrations that set the boundary conditions for both experiments and calculations can exhibit local fluctuations near the membrane. These fluctuations cannot be resolved experimentally and are neglected in calculations.
The total membrane currents predicted by (28) are compared in Fig. 7 against the measured I/V curves. The overall agreement with the experiment achieved by the calibrated model is remarkable, especially considering the simplicity of the model. This overall agreement notwithstanding, slight discrepancies are observed in connection with the counter-intuitive behavior of hyper- and hypophysiological environment of potassium and calcium ions for membrane
Figure 4: a) Experimental setup; b) Phase-contrast microscopy image during patch-clamp measurements; c–i) Representative traces of membrane voltage in response to step depolarization between \(-70\) and 30 mV under various ion concentration conditions; c) Physiological ion concentrations; d) Hyperphysiological sodium concentration; e) hyperphysiological potassium concentration; f) hyperphysiological calcium concentration; g) hypophysiological sodium concentration; h) hypophysiological potassium concentration; i) hypophysiological calcium concentration.
potentials above 0 mV. The experimentally measured membrane current is higher for the physiological environment than for the hyper- and hypophysiological environment of potassium and calcium. In addition, both an increase and a decrease in the concentration of these two ion species lead to the same response, namely, an increase in the overall membrane current. This behavior can again be explained by an appeal to double-layer theory. Concentration changes of potassium or calcium do not affect the double layer, as other cations from the extracellular matrix can substitute to form a stable, positively charged layer. Consequently, a similar behavior for sodium ions is not to be expected. In our model, this behavior results in a slight underestimation of the physiological prediction and a slight overestimation of the decreased calcium prediction.
### Discussion
We recall that the activation functions obtained from calibration may take negative values, Fig. 6, and, consequently, result in unexpected membrane current flow of individual ion species, as shown in Fig. 8. As already mentioned, the high baseline current of chloride ions could follow from anionic proteins that disturb electroneutrality at short distances. However, this baseline results in a compensating inward sodium current at equilibrium in order to achieve a zero net membrane current. Impermeant anions, such as negatively charged intracellular proteins, could also be one of the main contributing causes of the high chloride baseline [81]. Computational tests with a constant chloride activation yield zero chloride current at the equilibrium potential, which also results in membrane currents close to zero for the other ion species. This result is expected under the assumption of an anionic baseline [82].
Another important factor to carefully weigh is the time dependence of neuron response in experiments, cf. Fig. 4, vs. the stationary character of the single-channel and membrane models in the present work. In Fig. 4, the measurements for hypophysiological sodium and hyperphysiological calcium environment stand out for values above 0 mV of membrane potential. Furthermore, in the hypophysiological potassium environment, a shorter response time is observed for each of the measurements. Overall, the close agreement between steady-state PNP and experiments suggests that the time scale of any transient effects is negligibly small and appears to bear out the assumption of stationary channel flow.
Figure 5: Single-channel currents for individual ions computed by PNP equations for physiological as well as hyper- and hypophysiological concentrations.
Figure 6: Activation parameters of sodium, potassium, calcium, and chloride ions in experimental and in physiological range.
Figure 8: Ionic membrane currents of the individual ion species in experimental and physiological range.
Figure 7: I/V curves of experimental data and simulation results for (a) control and hyperphysiological and (b) hypophysiological environment of the extracellular medium.
## 5 Concluding remarks
Realistic models that accurately represent anatomical detail and the mechanical response of the tissues in the human skull are available from Magnetic Resonance Imaging (MRI) [42, 43, 44, 83], Magnetic Resonance Elastography (MRE) [35, 36, 37, 38], and other imaging techniques, which enables finite-element analyses of wave propagation in the brain under a variety of conditions from concussion to ultrasound neuromodulation, cf., e.g., [39, 40, 41]. By contrast, there is a paucity of neuronal ion-channel and cell-membrane models that are quantitatively predictive and can be integrated into full-scale finite-element analyses to predict the extent of neuronal activation as a function of local conditions. Indeed, not until the recent breakthrough work of Yoo et al. [13] were the precise mechanisms by which local strain and ultrasound activate neural activity conclusively known.
The present work is intended as a first step in filling in this modeling gap. We have shown that the combination of simplified PNP calculations and ion-specific activation functions calibrated from experimental data accurately predicts ionic currents and I/V curves as a function over the entire physiological range of membrane potentials. A number of aspects of the membrane response, such as the dynamic response of voltage-gated ion channels, the continuous effect of the sodium-potassium ATPase pump, calcium-dependent signaling pathways, and electrodiffusion phenomena in the Debye layer, especially the double layer and negatively charged intracellular proteins [84], complex ion channel geometries and intracellular biochemical interactions such as calcium-based intracellular signaling pathways [85, 86], are not accounted for explicitly by the model but only implicitly, if at all, by calibration to the experimental data. It is conceivable that an explicit accounting of these and other effects could improve the predictiveness of the model without incurring excessive computational complexity. These and other enhancements of the model suggest themselves as worthwhile directions for further research.
## Data availability statement
Data and code (programmed in MATLAB [87]) for the single-channel model and the full-axon transport model are available at DaRUS [1].
## Acknowledgements
This work is funded by the German Research Foundation (Deutsche Forschungsgemeinschaft; DFG) within the Priority Program 2311, grant number 465194077, and the Max Planck Society. We furthermore gratefully acknowledge the support of the DFG under Germany's Excellence Strategy - EXC 2075 - 390740016. E.Y. has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement no 101059593.
|
2304.10262 | A Local Multi-Layer Approach to Modelling Interactions between Shallow
Water Flows and Obstructions | The capability to accurately predict flood flows via numerical simulations is
a key component of contemporary flood risk management practice. However, modern
flood models lack the capacity to accurately model flow interactions with
linear features, or hydraulic structures like bridges and gates, which act as
partial barriers to flow. Presented within this paper is a new Riemann solver
which represents a novel approach to modelling fluid-structure interactions
within two-dimensional hydrodynamic models. The solution procedure models
obstacles as existing at the interface between neighbouring cells and uses a
combination of internal boundary conditions, different forms of the
conservation laws and vertical discretisation of the neighbouring cells to
resolve numerical fluxes across a partially obstructed interface. The
predictive capacity of the solver has been validated through comparisons with
experimental data collected from experiments conducted in a state-of-the-art
hydraulic flume. Since the solution procedure is local, only applying to the
cells within the immediate vicinity of a structure, the method is designed to
be compatible with existing two-dimensional hydrodynamic models which use a
finite volume scheme to solve the shallow water equations. | James Mckenna, Vassilis Glenis, Chris Kilsby | 2023-04-20T12:30:56Z | http://arxiv.org/abs/2304.10262v1 | # A Local Multi-Layer Approach to Modelling Interactions between Shallow Water Flows and Obstructions
###### Abstract
The capability to accurately predict flood flows via numerical simulations is a key component of contemporary flood risk management practice. However, modern flood models lack the capacity to accurately model flow interactions with linear features, or hydraulic structures like bridges and gates, which act as partial barriers to flow. Presented within this paper is a new Riemann solver which represents a novel approach to modelling fluid-structure interactions within two-dimensional hydrodynamic models. The solution procedure models obstacles as existing at the interface between neighbouring cells and uses a combination of internal boundary conditions, different forms of the conservation laws and vertical discretisation of the neighbouring cells to resolve numerical fluxes across a partially obstructed interface. The predictive capacity of the solver has been validated through comparisons with experimental data collected from experiments conducted in a state-of-the-art hydraulic flume. Since the solution procedure is local, only applying to the cells within the immediate vicinity of a structure, the method is designed to be compatible with existing two-dimensional hydrodynamic models which use a finite volume scheme to solve the shallow water equations.
keywords: Flood modelling; bridges; free-surface flow; Riemann solver; finite-volume; model validation. +
Footnote †: journal: Elsevier
## 1 Introduction
The threat of anthropogenic climate change is driving the need for more effective management of what is already a challenging and costly hazard; models estimate that forecast average annual flood losses for the United States will increase from US$32 billion to more than US$40 billion by 2050 [1], with similar predictions of increasing flood risk being made on a global scale [2]. Hydrodynamic models play a vital role in contemporary flood risk management by providing evidence, via numerical predictions, upon which the quantification of flood risk and consequent future investment is based. It is therefore vital for effective flood risk management that hydrodynamic models produce accurate predictions.
Within catchments, channel structures, such as bridges, weirs and gates, can act as obstacles to flow, significantly influencing the local flow characteristics [3]. However, within modern hydrodynamic modelling practice, methods for modelling such features are relatively under-developed, with industry standard models
using coarse approximations, empirically based methods or even omitting such features entirely [4, 5, 6]. Within academic literature there have been a number of contributions towards bridging this gap in modelling capacity, such as [7, 8, 9, 10, 11], however, none of the published works present an accurate method for the generalised treatment of partial barriers to flow within two-dimensional hydrodynamic models.
Within Mckenna et al. [12], the authors of this paper presented a new Riemann solver capable of resolving numerical fluxes across a partially obstructed interface. The proposed solution procedure represents structures as existing at the interface between neighbouring cells and uses a combination of internal boundary conditions and a different form of the conservation laws in the adjacent cells to resolve numerical fluxes across the partially obstructed interface. Experimental validation, via experiments conducted in a state-of-the-art research flume, demonstrated the accuracy of the solver for a range of flow conditions and barrier configurations.
Despite the successful validation of the solver, there is opportunity to enhance the method via more accurate discretisation of the horizontal velocity in the vertical plane. As such, this paper aims to use the basic conceptual idea underpinning the Riemann solver developed in [12], namely the decomposition of the Riemann problem in the vertical plane, to develop a new, more sophisticated and accurate method for representing structures within two-dimensional hydrodynamic models. As with the development of the previous solver, compatibility of the method with existing flood models utilising two-dimensional finite volume schemes to solve the shallow water equations was a key consideration throughout the development of the solver.
## 2 Mathematical Model
The proposed solution method divides the computational domain into structure cells, intermediate cells and normal cells with corresponding normal interfaces (NI), intermediate interfaces (II) and structure interfaces (SI) as shown in Figure 1.
At a structure interface, the adjacent structure cells are vertically discretised into sub-cells with a maximum depth capacity corresponding to the dimensions of the idealised structure represented at the interface as shown in Figure 2. For example, the sub-cells \(\mathbf{U}_{i,1}\) and \(\mathbf{U}_{i+1,1}\) in Figure 2 have a maximum depth capacity of \(h_{1}=z_{\frac{3}{2}}-z_{b}\), which represents the difference in elevation between the base of the structure and the bed.
For normal interfaces and the corresponding adjacent normal or intermediate cells, a one-dimensional (1D)
Figure 1: A simple computational domain \([a,b]\) illustrating the designation of structure, intermediate and normal cells with their corresponding interfaces. \(z_{1/2}\) and \(z_{3/2}\) represent the height above the bed of the base and cover of the idealised structure represented at the structure interface.
FV scheme is used to solve the 1D Shallow Water Equations (1D-SWE) given as:
\[\partial_{t}\mathbf{U}+\partial_{x}\mathbf{F}(\mathbf{U})=\mathbf{S}(\mathbf{U}) \tag{1}\]
Where \(\mathbf{U}\) is the vector of conserved variables, \(\mathbf{F}(\mathbf{U})\) is the vector of fluxes and \(\mathbf{S}(\mathbf{U})\) is a vector of sources comprising \(\mathbf{S}_{0}\), the bed slope source term, and \(\mathbf{S}_{f}\), the bed friction source term. These terms are given as follows:
\[\mathbf{U}=\begin{bmatrix}h\\ hu\end{bmatrix}\ \,\ \mathbf{F}=\begin{bmatrix}hu\\ hu^{2}+\frac{1}{2}gh^{2}\end{bmatrix}\ \,\ \mathbf{S}_{0}=\begin{bmatrix}0\\ -gh\frac{\partial z}{\partial x}\end{bmatrix}\ \,\ \mathbf{S}_{f}=\begin{bmatrix}0\\ -\tau_{f}\end{bmatrix} \tag{2}\]
Whereby \(h\) denotes the depth of flow, \(u\) denotes the velocity component in the \(x\) direction, \(g\) is the acceleration due to gravity, \(z\) is the elevation of the bed and \(\tau_{f}\) is the shear stress due to bed friction in accordance with Manning's equation:
\[\tau_{f}=C_{f}u|u|=\frac{gn^{2}}{\sqrt[3]{h}}u|u| \tag{3}\]
Where \(n\) is Manning's roughness coefficient.
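As a minimal illustration of the terms in (2)-(3), the following Python sketch evaluates the physical flux and the Manning bed-friction stress for a single wet cell; the function names are ours and not part of any existing code base.

```python
import numpy as np

G = 9.81  # gravitational acceleration [m/s^2]

def swe_flux(h, q):
    """Physical flux F(U) of the 1D shallow water equations, Eq. (2)."""
    u = q / h if h > 0.0 else 0.0
    return np.array([q, q * u + 0.5 * G * h * h])

def manning_stress(h, q, n=0.012):
    """Bed-friction stress tau_f = C_f u|u| = g n^2 u|u| / h^(1/3), Eq. (3)."""
    if h <= 0.0:
        return 0.0
    u = q / h
    return G * n * n * u * abs(u) / h ** (1.0 / 3.0)

# example: 0.1 m of water moving at 0.2 m/s
print(swe_flux(0.1, 0.02), manning_stress(0.1, 0.02))
```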
For the structure and intermediate interfaces and corresponding adjacent structure and intermediate cells, a 1D FV scheme is used to solve a multi-layer 1D shallow water system [13]:
\[\partial_{t}\mathbf{U}_{k}+\partial_{x}\mathbf{F}_{k}(\mathbf{U}_{k})=\mathbf{ S}_{k}(\mathbf{U}_{k}) \tag{4}\]
Where \(\mathbf{U}_{k}\) is the vector of conserved variables for the layer \(k\), \(\mathbf{F}(\mathbf{U}_{k})\) is the vector of fluxes for layer \(k\) and \(\mathbf{S}_{k}(\mathbf{U}_{k})\) is a vector of sources for layer \(k\) comprising \(\mathbf{S}_{k,0}\), the topographic source terms for layer \(k\), and
Figure 2: Division of structure cells into sub-cells corresponding to the base and cover of the idealised structure modelled at the structure interface. The maximum depth capacity of flow in the layer one is \(z_{3/2}-z_{1/2}\) and the maximum depth capacity of flow in the second layer is equal to \(z_{5/2}-z_{3/2}\). The uppermost layer has no maximum depth capacity.
\(\mathbf{S}_{k,f}\), the friction source terms for layer \(k\). These terms are given as follows:
\[\mathbf{U}_{k}=\begin{bmatrix}h_{k}\\ h_{k}u_{k}\end{bmatrix} \tag{5}\]
\[\mathbf{F}_{k}=\begin{bmatrix}h_{k}u_{k}\\ \frac{(h_{k}u_{k})^{2}}{h_{k}}+\frac{1}{2}gh_{k}^{2}+gh_{k_{(+)}}h_{k}\end{bmatrix} =\begin{bmatrix}q_{k}\\ \sigma_{k}\end{bmatrix} \tag{6}\]
\[\mathbf{S}_{k,0}=\begin{bmatrix}0\\ -R_{k+\frac{1}{2}}+R_{k-\frac{1}{2}}\end{bmatrix}=\begin{bmatrix}0\\ gh_{k_{(+)}}\frac{\partial z_{k+1/2}}{\partial x}-g(h_{k_{(+)}}+h_{k})\frac{ \partial z_{k-1/2}}{\partial x}\end{bmatrix} \tag{7}\]
\[\mathbf{S}_{k,f}=\begin{bmatrix}0\\ \tau_{k+\frac{1}{2}}-\tau_{k-\frac{1}{2}}\end{bmatrix}=\begin{bmatrix}0\\ (1-\delta_{nk})\frac{2\nu(u_{k_{(+)}}-u_{k})}{h_{k_{(+)}}+h_{k}}-\left((1- \delta_{1k})\frac{2\nu(u_{k}-u_{k_{(-)}})}{h_{k}+h_{k_{(-)}}}-\delta_{1k} \frac{gn^{2}u_{k}|u_{k}|}{\sqrt[3]{H}}\right)\end{bmatrix} \tag{8}\]
Where \(k\) refers to the index of the layer under consideration, labelled in ascending order from layer 1 at the bed, to layer \(n\) at the free surface. \(k+1/2\) and \(k-1/2\) refer respectively to the upper and lower interface for layer \(k\). The subscript \(k_{(+)}\) refers to the properties of the flow above layer \(k\) and the subscript \(k_{(-)}\) refers to the properties of the flow below layer \(k\), which are defined respectively as:
\[h_{k_{(+)}}=\sum_{m=k+1}^{n}h_{m},\qquad h_{k_{(-)}}=\sum_{m=1}^{k-1}h_{m}\] \[u_{k_{(+)}}=\frac{\sum_{m=k+1}^{n}h_{m}u_{m}}{h_{k_{(+)}}},\qquad u_{k_{(-)}}=\frac{\sum_{m=1}^{k-1}h_{m}u_{m}}{h_{k_{(-)}}} \tag{9}\]
\(R_{k+1/2}\) and \(R_{k-1/2}\) refer to the reaction forces exerted at the interfaces between the layers, with \(R_{k+1/2}\) denoting the reaction force of layer \(k\) onto the fluid above and \(R_{k-1/2}\) denoting the reaction force exerted on layer \(k\) by the fluid or bed beneath it. \(\tau_{k+1/2}\) and \(\tau_{k-1/2}\) represent the interlayer viscous friction effect induced at the upper and lower interfaces of layer \(k\). The interlayer friction terms are derived for a multi-layer cell by applying a finite difference approximation, across the depth of the fluid layer \(k\), to the viscous stress component of the incompressible Navier-Stokes system, as proposed by Audusse et al., [14]:
\[\int_{z_{k-\frac{1}{2}}}^{z_{k+\frac{1}{2}}}\left.\frac{\partial}{\partial z} \left(\nu\frac{\partial u}{\partial z}\right)dz=\nu\frac{\partial u}{\partial z }\Big{|}_{z_{k+\frac{1}{2}}}-\nu\frac{\partial u}{\partial z}\Big{|}_{z_{k- \frac{1}{2}}}\approx\frac{2\nu(u_{k_{(+)}}-u_{k})}{h_{k_{(+)}}+h_{k}}-\frac{2 \nu(u_{k}-u_{k_{(-)}})}{h_{k}+h_{k_{(-)}}}=\tau_{k+\frac{1}{2}}-\tau_{k-\frac{ 1}{2}} \tag{10}\]
For the case where \(k=1\), considering the layer which flows over the bed, \(\tau_{k-1/2}=\tau_{0}\) which is instead derived from Manning's equation (3), where \(H\) is the total depth of flow for the whole structure cell. The particular form of the viscous effect on the base of the fluid layer, \(\tau_{k-1/2}\), is accounted for by Kronecker delta in (8), which is defined as:
\[\delta_{\alpha k}=\begin{cases}1\text{ if }k=\alpha\\ 0\text{ if }k\neq\alpha\end{cases} \tag{11}\]
The Kronecker delta also ensures that the \(\tau_{k+1/2}\) term is zero at the free surface for layer \(n\). The source terms for structure cells are also illustrated in Figure 3. Effects relating to stresses arising from volumetric deformation are neglected due to their minor influence [15]. For simplicity, wind friction effects on the free surface are also ignored; however, these can easily be added should the required wind data be available and the effects be deemed necessary to include.
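To make the layer notation of (8)-(10) concrete, the sketch below computes the above/below aggregates of Eq. (9) and the interlayer viscous stresses for one layer. It is a schematic Python rendering of the formulas, with 0-based layer indexing of our choosing and the bed stress of the lowest layer handled per Eq. (3).

```python
import numpy as np

G = 9.81

def layer_aggregates(h, q, k):
    """Eq. (9): depth and depth-averaged velocity above/below layer k (0-based)."""
    h_up, h_dn = np.sum(h[k + 1:]), np.sum(h[:k])
    u_up = np.sum(q[k + 1:]) / h_up if h_up > 0.0 else 0.0
    u_dn = np.sum(q[:k]) / h_dn if h_dn > 0.0 else 0.0
    return h_up, u_up, h_dn, u_dn

def interlayer_stresses(h, q, k, nu=1.0034e-6, n_manning=0.012):
    """Viscous stresses at the upper/lower faces of layer k, Eqs. (8) and (10).
    The lowest layer (k = 0) takes the Manning bed stress as its lower stress."""
    n_layers = len(h)
    u_k = q[k] / h[k] if h[k] > 0.0 else 0.0
    h_up, u_up, h_dn, u_dn = layer_aggregates(h, q, k)
    tau_upper = 0.0 if k == n_layers - 1 else 2.0 * nu * (u_up - u_k) / (h_up + h[k])
    if k == 0:
        H = np.sum(h)  # total depth of flow for the whole structure cell
        tau_lower = G * n_manning ** 2 * u_k * abs(u_k) / H ** (1.0 / 3.0)
    else:
        tau_lower = 2.0 * nu * (u_k - u_dn) / (h[k] + h_dn)
    return tau_upper, tau_lower

# three layers, 0.1 m each, slightly sheared velocities
h = np.array([0.1, 0.1, 0.1])
q = np.array([0.01, 0.02, 0.03])
print(interlayer_stresses(h, q, 1))
```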
The domain is divided into cells \((\mathbf{V}_{i})_{i\in\mathbb{Z}}\) and the discretised first order finite volume scheme is given by:
\[\mathbf{U}_{i}^{n+1}=\mathbf{U}_{i}^{n}-\frac{\Delta t}{\Delta x}\left[\mathbf{ F}_{i+\frac{1}{2}}-\mathbf{F}_{i-\frac{1}{2}}\right]+\Delta t\mathbf{S}\left( \mathbf{U}_{i}^{n}\right) \tag{12}\]
Where the subscript \(i\) represents the \(i\)th cell, the superscript \(n\) represents the \(n\)th time level and \(\Delta x\) and \(\Delta t\) represent the cell size and time step respectively. \(\mathbf{F}_{i-1/2}\) and \(\mathbf{F}_{i+1/2}\) represent the numerical fluxes at
the \(i\pm 1/2\) interfaces respectively. For the structure cells, it is the constituent sub-cells which are updated using the following modification of (12):
\[\mathbf{U}_{i,k}^{n+1}=\mathbf{U}_{i,k}^{n}-\frac{\Delta t}{\Delta x}\left[ \mathbf{F}_{i+\frac{1}{2},k}-\mathbf{F}_{i-\frac{1}{2},k}\right]+\Delta t \mathbf{S}\left(\mathbf{U}_{i,k}^{n}\right) \tag{13}\]
Where \(\mathbf{U}_{i,k}^{n}\) represents the conserved variables for the \(k\)th sub-cell in the \(i\)th structure cell at time level \(n\). \(\mathbf{F}_{i-1/2,k}\) and \(\mathbf{F}_{i+1/2,k}\) represent the numerical fluxes at the \(k\)th layer of the \(i\pm 1/2\) interfaces respectively. Although a 1D scheme is implemented in this case, implementation as a 2D scheme requires no fundamental changes to the method.
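The update formulas (12) and (13) share the same structure, which a one-line Python helper makes explicit. This is a sketch only; U, the fluxes and the source vector are assumed to be NumPy arrays of matching shape.

```python
def fv_update(U, F_left, F_right, S, dt, dx):
    """First-order Godunov update, Eqs. (12)/(13):
    U^{n+1} = U^n - (dt/dx) (F_{i+1/2} - F_{i-1/2}) + dt * S."""
    return U - (dt / dx) * (F_right - F_left) + dt * S
```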
### Numerical Flux Computation
The process for resolving fluxes is dependent on the type of interface (NI, II or SI). For structure and intermediate interfaces Harten-Lax-van Leer (HLL) approximate Riemann solvers [16] are used to resolve the intercell numerical fluxes. For normal interfaces, other suitable approximate Riemann solvers may be used, however, HLL approximate Riemann solvers are recommended for consistency.
#### 2.1.1 Normal Interfaces
A robust algorithm presented by Glenis et al. [17] is used to calculate wave speeds for the Riemann problem, which is outlined in Algorithm 1. Following calculation of the wavespeeds, a standard HLL approximate Riemann solver (14) is used to determine numerical fluxes across the normal interface.
\[\mathbf{F}_{i+\frac{1}{2}}=\begin{cases}\mathbf{F}_{i}\text{ if }S^{-}>0\\ \mathbf{F}^{hll}=\frac{S^{+}\mathbf{F}_{i}-S^{-}\mathbf{F}_{i+1}+S^{+}S^{-}( \mathbf{U}_{i+1}-\mathbf{U}_{i})}{S^{+}-S^{-}}\text{ if }S^{-}\leq 0\leq S^{+}\\ \mathbf{F}_{i+1}\text{ if }S^{+}<0\end{cases} \tag{14}\]
Figure 3: Annotation of the source terms for example structure cells and their component sub-cells on uneven bed topography. \(R\) represents a reaction force induced as a result of the uneven bed topography, \(\tau\) represents a friction force acting at a layer interface, \(z\) denotes the elevation above the bed and \(h\) denotes the water depth in the sub-cell. \(\mathbf{U}_{i}\) is the vector of conserved variables for the \(i\)th whole cell, which is equal to the sum of the conserved variables for the component sub cells \(\mathbf{U}_{i,k}\).
As discussed above, other suitable approximate Riemann solvers may also be used; however, use of an HLL solver is recommended for consistency.
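A direct Python transcription of the HLL flux (14) is straightforward; the wave speed estimates are assumed to come from Algorithm 1.

```python
import numpy as np

def hll_flux(F_l, F_r, U_l, U_r, s_minus, s_plus):
    """HLL approximate Riemann flux, Eq. (14). U_l, U_r are 2-vectors [h, hu];
    F_l, F_r the corresponding physical fluxes; s_minus/s_plus the left and
    right wave speed estimates from Algorithm 1."""
    if s_minus > 0.0:
        return F_l
    if s_plus < 0.0:
        return F_r
    return (s_plus * F_l - s_minus * F_r
            + s_plus * s_minus * (U_r - U_l)) / (s_plus - s_minus)
```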
#### 2.1.2 Structure Interfaces
At a structure interface, the layers of flow can be divided into _open_ and _closed_ layers, as shown in Figure 5.
_Open_ layers are considered as having a transmissive boundary at the structure interface, with the portion of the structure interface shared by the adjacent sub-cells having no influence on the exchange of conserved variables. _Closed_ layers are considered as having a reflective boundary at the structure interface due to the presence of the structure. For each open layer, a single Riemann problem must be constructed and solved whereas, at each closed layer two Riemann problems must be constructed and solved, as shown in Figure 6.
Solution of two Riemann problems for a closed layer is necessary to implement the reflective boundary condition at the structure interface, which reflects the flow in both the left and right sub-cells. This process is based on the assumption that the vertical velocity of the flow is negligible, which is a fundamental assumption for the derivation of the shallow water equations, and therefore the direction of the flow can be considered to be primarily parallel to the bed.
Figure 4: (a) Example normal interface with adjacent normal cells and (b) the general structure of the general solution of the Riemann problem for a normal interface. \(S^{-}\) is the left wave speed and \(S^{+}\) is the right wave speed, as defined in Algorithm 1. \(h_{*}\) and \(u_{*}\) denote the conserved variables in the star region. \(\mathbf{F}_{i-\frac{5}{2}}\) denotes the numerical flux at the interface.
Figure 5: Designation of open and closed layers at a structure interface.
**Algorithm 1:** Calculation of wavespeeds [17]. An initial approximation (\(h_{0}\)) of the depth in the star region (\(h_{*}\)) using a two-rarefaction approximate state Riemann solver is used to determine whether a two-rarefaction or two-shock approximation is optimal. For a multi-layer system, the wave celerity is defined as \(c_{i,k}=\sqrt{g(h_{i,k}+h_{i,k_{(+)}})}\), where \(c_{i,k}\) is the celerity for cell \(i\) layer \(k\), \(h_{i,k}\) is the thickness of cell \(i\), layer \(k\) and \(h_{i,k_{(+)}}\) is the depth of water in cell \(i\) above layer \(k\).
\(g\gets 9.81\text{ms}^{-2}\)
**if**\(h_{i}>0\wedge h_{i+1}>0\)**then**\(\triangleright\) Initial two-rarefaction approximation
\[c_{i}\leftarrow\sqrt{gh_{i}}\quad,\quad c_{i+1}\leftarrow\sqrt{gh_{i+1}}\]
\[h_{0}\leftarrow\frac{1}{g}\left(\frac{1}{2}(c_{i}+c_{i+1})+\frac{1}{4}(u_{i} -u_{i+1})\right)^{2}\]
**if**\(h_{0}\leq\text{min}(h_{i},h_{i+1})\)**then**\(\triangleright\) Use two-rarefaction approximate state Riemann solver
\[h_{*}\gets h_{0}\]
**else if**\(h_{0}>\text{min}(h_{i},h_{i+1})\)**then**\(\triangleright\) Use two-shock approximate state Riemann solver
\[p_{i}\leftarrow\sqrt{\frac{g(h_{0}+h_{i})}{2h_{0}h_{i}}}\quad,\quad p_{i+1} \leftarrow\sqrt{\frac{g(h_{0}+h_{i+1})}{2h_{0}h_{i+1}}}\]
\[h_{*}\leftarrow\frac{p_{i}h_{i}+p_{i+1}h_{i+1}+u_{i}-u_{i+1}}{p_{i}+p_{i+1}}\]
**end if**
\[\alpha_{i}\leftarrow\begin{cases}\frac{\sqrt{0.5(h_{*}+h_{i})h_{*}}}{h_{i}}&\text{if }h_{*}>h_{i}\\ 1&\text{if }h_{*}\leq h_{i}\end{cases}\quad,\quad\alpha_{i+1}\leftarrow\begin{cases}\frac{\sqrt{0.5(h_{*}+h_{i+1})h_{*}}}{h_{i+1}}&\text{if }h_{*}>h_{i+1}\\ 1&\text{if }h_{*}\leq h_{i+1}\end{cases}\]
\[S^{-}\gets u_{i}-\alpha_{i}c_{i}\quad,\quad S^{+}\gets u_{i+1}+\alpha_{i+1}c_{i+1}\]
**else if**\(h_{i}=0\wedge h_{i+1}>0\)**then**\(\triangleright\) Left dry bed
\[S^{-}\gets u_{i+1}-2c_{i+1}\quad,\quad S^{+}\gets u_{i+1}+c_{i+1}\]
**else if**\(h_{i+1}=0\wedge h_{i}>0\)**then**\(\triangleright\) Right dry bed
\[S^{-}\gets u_{i}-c_{i}\quad,\quad S^{+}\gets u_{i}+2c_{i}\]
**end if**
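A Python rendering of Algorithm 1 for the wet/wet and dry-bed cases is sketched below. For multi-layer use, the celerities would be formed with the augmented depth \(h_{i,k}+h_{i,k_{(+)}}\) noted in the algorithm caption, which this single-layer sketch omits.

```python
import math

G = 9.81

def wavespeeds(h_l, u_l, h_r, u_r):
    """Wave speed estimates (S-, S+) following Algorithm 1:
    two-rarefaction first guess, two-shock correction, dry-bed limits."""
    if h_l > 0.0 and h_r > 0.0:
        c_l, c_r = math.sqrt(G * h_l), math.sqrt(G * h_r)
        h0 = (0.5 * (c_l + c_r) + 0.25 * (u_l - u_r)) ** 2 / G
        if h0 <= min(h_l, h_r):           # two-rarefaction estimate
            h_star = h0
        else:                             # two-shock estimate
            p_l = math.sqrt(0.5 * G * (h0 + h_l) / (h0 * h_l))
            p_r = math.sqrt(0.5 * G * (h0 + h_r) / (h0 * h_r))
            h_star = (p_l * h_l + p_r * h_r + u_l - u_r) / (p_l + p_r)
        a_l = math.sqrt(0.5 * (h_star + h_l) * h_star) / h_l if h_star > h_l else 1.0
        a_r = math.sqrt(0.5 * (h_star + h_r) * h_star) / h_r if h_star > h_r else 1.0
        return u_l - a_l * c_l, u_r + a_r * c_r
    if h_l == 0.0 and h_r > 0.0:          # left dry bed
        c_r = math.sqrt(G * h_r)
        return u_r - 2.0 * c_r, u_r + c_r
    c_l = math.sqrt(G * h_l)              # right dry bed
    return u_l - c_l, u_l + 2.0 * c_l
```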
The numerical flux for each layer is determined by applying (4) to each layer, where the numerical flux for a layer is given as:
\[\mathbf{F}_{k}=\begin{bmatrix}h_{k}u_{k}\\ \frac{(h_{k}u_{k})^{2}}{h_{k}}+\frac{1}{2}gh_{k}^{2}+gh_{k_{(+)}}h_{k}\end{bmatrix}=\begin{bmatrix}q_{k}\\ \sigma_{k}\end{bmatrix} \tag{15}\]
Which can then be used to determine the flux at the interface using a standard HLL approximate Riemann solver (14). The method for determining the fluxes at a structure interface is summarised in Algorithm 2.
#### 2.1.3 Intermediate Interfaces
In order to resolve fluxes with the adjacent sub-cells it is necessary to temporarily define layer properties for the intermediate cell as shown in Figure 7. The properties for the temporary layers in the intermediate interfaces are defined by assuming that the velocity in each layer is equal to the average velocity of the whole intermediate cell and that the depth in each layer is limited to the maximum depth capacity of the adjacent sub-cell. The fluxes for each layer can then be found using the process outlined for the open layers in Algorithm 2.
Figure 6: Method for resolving fluxes for the sub-cells adjacent to a structure interface.
```
g ← 9.81 m s^-2
k ← 1                                          ▷ For the open layers
while k ≤ n do
    calculate S_k^-, S_k^+ using Algorithm 1   ▷ Calculate wavespeeds
    F_{i,k} ← [ h_{i,k} u_{i,k} ,  q_{i,k}^2 / h_{i,k} + (1/2) g h_{i,k}^2 + g h_{i,k(+)} h_{i,k} ]^T      ▷ Calculate layer flux
    F_{i+1,k} ← [ h_{i+1,k} u_{i+1,k} ,  q_{i+1,k}^2 / h_{i+1,k} + (1/2) g h_{i+1,k}^2 + g h_{i+1,k(+)} h_{i+1,k} ]^T
    F_{i+1/2,k} ← F_{i,k}                                                                    if S_k^- > 0
                ← (S_k^+ F_{i,k} − S_k^- F_{i+1,k} + S_k^+ S_k^- (U_{i+1,k} − U_{i,k})) / (S_k^+ − S_k^-)  if S_k^- ≤ 0 ≤ S_k^+
                ← F_{i+1,k}                                                                  if S_k^+ < 0
    k ← k + 2                                  ▷ Advance to next open layer
end while
k ← 2                                          ▷ For the closed layer
h_{i+1,ghost} ← h_{i,k}                        ▷ Right ghost cell water depth
u_{i+1,ghost} ← −u_{i,k}                       ▷ Right ghost cell water velocity
calculate S_{k_L}^-, S_{k_L}^+ using Algorithm 1            ▷ Calculate wavespeeds
calculate F_{i+1/2,k_L} from the HLL formula above          ▷ Flux for the left side of the structure
h_{i,ghost} ← h_{i+1,k}                        ▷ Left ghost cell water depth
u_{i,ghost} ← −u_{i+1,k}                       ▷ Left ghost cell water velocity
calculate S_{k_R}^-, S_{k_R}^+ using Algorithm 1            ▷ Calculate wavespeeds
calculate F_{i+1/2,k_R} from the HLL formula above          ▷ Flux for the right side of the structure
```
**Algorithm 2:** Calculation of fluxes for an example structure interface as shown in Figure 6. \(k\) is the index of the layer under consideration, \(n\) is the total number of layers at the structure interface.
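Combining the pieces above, the reflective treatment of a closed layer in Algorithm 2 amounts to two mirrored Riemann problems, one per face of the structure. The sketch below assumes the wavespeeds() and hll_flux() helpers defined earlier; the function and variable names are ours.

```python
import numpy as np

G = 9.81

def layer_flux(h, q, h_above):
    """Multi-layer physical flux of Eq. (15) for one layer."""
    u = q / h if h > 0.0 else 0.0
    return np.array([q, q * u + 0.5 * G * h * h + G * h_above * h])

def closed_layer_fluxes(h_l, q_l, h_r, q_r, h_above_l, h_above_r):
    """Algorithm 2, closed layer: each sub-cell sees a ghost state with the
    same depth and a mirrored velocity, yielding one flux per structure face."""
    u_l = q_l / h_l if h_l > 0.0 else 0.0
    u_r = q_r / h_r if h_r > 0.0 else 0.0
    # left face: left sub-cell against its own mirror image
    s_m, s_p = wavespeeds(h_l, u_l, h_l, -u_l)
    F_left = hll_flux(layer_flux(h_l, q_l, h_above_l),
                      layer_flux(h_l, -q_l, h_above_l),
                      np.array([h_l, q_l]), np.array([h_l, -q_l]), s_m, s_p)
    # right face: mirror image of the right sub-cell against the sub-cell itself
    s_m, s_p = wavespeeds(h_r, -u_r, h_r, u_r)
    F_right = hll_flux(layer_flux(h_r, -q_r, h_above_r),
                       layer_flux(h_r, q_r, h_above_r),
                       np.array([h_r, -q_r]), np.array([h_r, q_r]), s_m, s_p)
    return F_left, F_right
```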
### Conservative Updating of Conserved Variables
Once numerical fluxes have been resolved across all interfaces within the computational domain, the final procedure for each timestep is to update the conserved variables contained within each cell and sub-cell.
#### 2.2.1 Normal Cells
Normal cells are updated using equation (12), which is standard for a one-dimensional Godunov-type scheme. For cases involving variable bed topography, a well-balanced treatment of the topographic source terms can be achieved via the hydrostatic reconstruction method [18] or via upwinding of the source terms [19]. Suitable explicit or implicit treatments of the remaining source terms are both viable, depending on the desired stability and the admissible constraint on the stable timestep. For strong stability and the flexibility of advancing the solution at the timestep for the advection problem, the splitting method proposed by Liang and Marche [20] is recommended:
\[q_{i}^{n+1}=q_{i}^{n}-\Delta tS_{i,c}^{n}=q_{i}^{n}-\Delta t\left(\frac{\tau_{i,f}}{1+\Delta t\frac{\partial\tau_{i,f}}{\partial q_{i}}}\right)^{n}=q_{i}^{n}-\Delta t\left(\frac{C_{i,f}u_{i}|u_{i}|}{1+\frac{2\Delta tC_{i,f}|q_{i}|}{(h_{i}^{n})^{2}}}\right)^{n} \tag{16}\]
The following simple limiter is also recommended to ensure stability in regions where the water depth approaches zero:

\[S_{i,c}^{n}=\frac{q_{i}^{n}}{\Delta t}\text{ if }|\Delta tS_{i,c}^{n}|>|q_{i}^{n}| \tag{17}\]

Figure 7: Temporary division of an intermediate cell into layers in order to resolve fluxes at an intermediate interface. \(u_{i-1,1}=u_{i-1,2}=u_{i-1,3}=u_{i-1}\) where \(u_{i-1}\) represents the average velocity for the whole intermediate cell.

Figure 8: Illustration of the numerical fluxes at the normal interfaces bordering a normal cell.
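A compact Python rendering of the splitting step (16) with the limiter (17) is given below; the squared depth in the denominator follows from \(\partial\tau_{f}/\partial q=2C_{f}|q|/h^{2}\) and is our reading of the formula.

```python
def friction_update(q, h, dt, n=0.012, g=9.81):
    """Point-implicit friction step, Eq. (16), with the limiter of Eq. (17)."""
    if h <= 0.0:
        return 0.0
    cf = g * n * n / h ** (1.0 / 3.0)          # C_f of Eq. (3)
    u = q / h
    s = cf * u * abs(u) / (1.0 + 2.0 * dt * cf * abs(q) / h ** 2)
    if abs(dt * s) > abs(q):                   # limiter, Eq. (17)
        s = q / dt
    return q - dt * s
```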
#### 2.2.2 Intermediate Cells
The same procedure used for updating a normal cell is applied to an intermediate cell; however, because fluxes at an intermediate interface are calculated on a sub-cell basis (Figure 9), they must first be summed. For the case illustrated in Figure 9 this is equal to:
\[\mathbf{F}_{i-\frac{1}{2}}=\sum_{k=1}^{3}\mathbf{F}_{i-\frac{1}{2},k} \tag{18}\]
#### 2.2.3 Structure Cells
Since structure cells are divided into sub-cells, it is necessary to update each individual sub-cell using the
Figure 10: Illustration of the numerical fluxes used for updating the sub-cells of which a structure cells is comprised.
Figure 9: Illustration of the numerical fluxes used to update a intermediate cell.
respective left and right fluxes as per:
\[\mathbf{U}_{i,k}^{n+1}=\mathbf{U}_{i,k}^{n}-\frac{\Delta t}{\Delta x}\left[ \mathbf{F}_{i+\frac{1}{2},k}-\mathbf{F}_{i-\frac{1}{2},k}\right]+\Delta t \mathbf{S}\left(\mathbf{U}_{i,k}^{n}\right) \tag{19}\]
Where \(\mathbf{U}_{i,k}^{n}\) represents the vector of conserved variables for the \(k\)th sub-cell contained within the \(i\)th cell at time level \(n\). \(\mathbf{F}_{i-1/2,k}\) and \(\mathbf{F}_{i+1/2,k}\) represent the left and right fluxes for the \(k\)th layer of the \(i\)th cell. As for the normal cells, a well-balanced treatment of the topographic source terms may be achieved via the hydrostatic reconstruction method or via upwinding of the source terms. The remaining source terms may be treated using suitable explicit or implicit methods depending on the desired stability and constraint of the timestep. For strong stability and the convenience of advancing the solution at the timestep for the advection problem, a point implicit scheme is recommended for the friction source terms:
\[q_{i}^{n+1}=q_{i}^{n}+\Delta t\left(\frac{\left(\tau_{i,k+\frac{1}{2}}^{n+1}-\tau_{i,k-\frac{1}{2}}^{n+1}\right)}{1+\Delta t\left(\left(\frac{\partial\tau_{i,k+1/2}}{\partial q_{i,k}}\right)^{n}-\left(\frac{\partial\tau_{i,k-1/2}}{\partial q_{i,k}}\right)^{n}\right)}\right) \tag{20}\]
At the sub-cell interfaces containing structures there are two numerical fluxes, as illustrated in Figure 10, as a consequence of the two reflective boundaries implemented at each side of the structure. Since not all of the external forces are accounted for, these fluxes may be unequal, with the difference between the flux at the left face of the structure interface (\(\mathbf{F}^{(-)}\)) and the flux at the right face of the structure interface (\(\mathbf{F}^{(+)}\)) equal to the resultant hydrostatic pressure force exerted on the structure multiplied by the ratio of the timestep to the cell width (\(\Delta t/\Delta x\,(\mathbf{F}^{(+)}-\mathbf{F}^{(-)})\)).
Once the sub cells have been updated, their updated depth may exceed the maximum depth capacity for the layer and it is therefore necessary to re-define the layer properties of the structure cells in order to maintain alignment of the layers with the obstructions modelled at the interface. The process for redefining the layer properties is outlined in Algorithm 3, for which an illustrative example is also provided via Figure 11.
Figure 11: Illustration of the layer redefinition process post updating of the conserved variables. The redefinition process is required to re-align the updated properties of the sub-cells with the respective boundary conditions implemented at the structure interface.
**Algorithm 3:** Redefinition of the sub-cell properties based on the maximum depth capacity of the layers defined at a structure interface, post updating of the conserved variables. \(\bar{h}\) and \(\bar{q}\) represent the redefined depth and momentum. \(j\) refers to the index of the redefined layers and \(k\) refers to the index of the updated layer properties pre-redefinition. \(n\) is the maximum number of layers defined at a structure interface.
```
for each structure cell do
    j ← 1
    k ← 1
    h̄ ← [0, ..., 0]
    q̄ ← [0, ..., 0]
    while sum(h̄) < sum(h) do
        h_max ← z_j − z_{j−1}
        while h̄_j < h_max ∧ k ≤ n do
            h̄_j ← h̄_j + h_k
            q̄_j ← q̄_j + q_k
            k ← k + 1
        end while
        h_excess ← max(h̄_j − h_max, 0)
        h̄_j ← h̄_j − h_excess
        q̄_j ← q̄_j − h_excess u_{k−1}
        h̄_{j+1} ← h_excess
        q̄_{j+1} ← h_excess u_{k−1}
        j ← j + 1
    end while
end for
```
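For completeness, a direct Python transcription of Algorithm 3 is sketched below; the list-based storage and the floating-point tolerance guard are implementation choices of ours, and the uppermost layer is treated as unbounded.

```python
def redefine_layers(h, q, z):
    """Algorithm 3: redistribute updated layer depths h and momenta q into the
    fixed layers bounded by interface elevations z[0] (bed) .. z[-1] (cover);
    any water above the last interface falls into an unbounded top layer."""
    total = sum(h)
    h_bar, q_bar = [0.0, 0.0], [0.0, 0.0]
    j, k, n = 0, 0, len(h)
    while sum(h_bar) < total - 1e-12:
        if j + 1 >= len(h_bar):                       # grow output as needed
            h_bar.append(0.0)
            q_bar.append(0.0)
        h_max = z[j + 1] - z[j] if j + 1 < len(z) else float("inf")
        while h_bar[j] < h_max and k < n:
            h_bar[j] += h[k]
            q_bar[j] += q[k]
            k += 1
        u_last = q[k - 1] / h[k - 1] if h[k - 1] > 0.0 else 0.0
        excess = max(h_bar[j] - h_max, 0.0)
        h_bar[j] -= excess
        q_bar[j] -= excess * u_last
        h_bar[j + 1] = excess
        q_bar[j + 1] = excess * u_last
        j += 1
    return h_bar, q_bar

# example: two updated layers squeezed into interfaces at z = 0, 0.2, 0.5 m
print(redefine_layers([0.3, 0.1], [0.06, 0.03], [0.0, 0.2, 0.5]))
```

The example conserves both total depth and total momentum while moving the overfilled portion of the lowest layer into the layer above.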
## 3 Model Validation
Previously published validation data [12], collected from experiments conducted in Newcastle University's Armfield S100 Research Flume, is used to validate the accuracy of the proposed Riemann solver. The S100 Research Flume is a 12.5m long, 1m wide, 0.8m deep flume capable of producing flow rates up to 400ls\({}^{-1}\). Using the control panel, shown in Figure 12, the user can select a desired flow rate, which is then produced by the two pumps that draw water from the sump. The flow rate is maintained and corrected via a proportional-integral-derivative control loop, which uses an electromagnetic flow meter (Euromag Model MUT2200EL) to ensure that the flow rate within the inflow pipe matches the desired flow rate. According to the Euromag technical sheet [21], each sensor is calibrated on a hydraulic test rig equipped with an ISO17025-traceable weighing system, which ensures that the accuracy of the sensor is \(0.2\%\pm 2\mathrm{mms}^{-1}\) with a repeatability of approximately 0.1%. A summary of the maximum permissible error limits for the instrument, provided by the manufacturer, is presented in Table 1.
The validation experiments consisted of running the flume at a range of flow rates, with a range of different barrier geometries placed within the flume cross-section, at a distance of 5m downstream. The flume tilt was set to 0% for all validation experiments in order to eliminate any potential numerical errors introduced as
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline \multicolumn{4}{|c|}{Maximum Permissible Error Limits for Euromag Model MUT2200EL DN 350 PN 10 EN 1092-1} \\ \hline Flow Rate & \(q_{1}=12.800\,\mathrm{m^{3}h^{-1}}\) & \(q_{2}=20.480\,\mathrm{m^{3}h^{-1}}\) & \(q_{3}=360.000\,\mathrm{m^{3}h^{-1}}\) \\ \hline Instrument Error & \(\pm\) 4.99\% & \(\pm\) 2.00\% & \(\pm\) 0.49\% \\ \hline \end{tabular}
\end{table}
Table 1: Maximum permissible error limits for the electromagnetic flow meter for a range of flow rates within the inflow pipe (adapted from [21] p. 4).
a result of topographic source terms. Once steady state flow conditions were achieved for each experiment, depth measurements were obtained using vernier point gauges. The full validation dataset is available as supplementary material from the referenced publication.
### Numerical Setup
All numerical simulations were conducted on a 12.5m 1D spatial domain, discretised into a structured grid of 0.1m cells (\(\Delta x=0.1\)m). In order to satisfy the Courant-Friedrichs-Lewy condition, a Courant number of 0.95 was used, giving a stable timestep of \(\Delta t=(0.95\Delta x)/S_{max}^{n}\), where \(S_{max}^{n}\) is the maximum absolute wave speed at time level \(n\). Since the bed slope is set to 0%, the source terms simplify, with only the friction source term requiring resolution, which facilitates clearer analysis of the accuracy of the Riemann solver. The friction source terms for normal and intermediate cells are resolved using (16); those for the structure cells are resolved using (20). A Manning's \(n\) equal to 0.012 and a kinematic viscosity of \(1.0034\times 10^{-6}\)m\({}^{2}\)s\({}^{-1}\) are assumed for all numerical simulations.
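The timestep selection described above reduces to a one-liner; this Python sketch assumes arrays of depths and velocities over the whole domain.

```python
import numpy as np

def stable_timestep(h, u, dx=0.1, cfl=0.95, g=9.81):
    """dt = cfl * dx / max(|u| + sqrt(g h)), the CFL constraint of Section 3.1."""
    s_max = np.max(np.abs(u) + np.sqrt(g * np.maximum(h, 0.0)))
    return cfl * dx / s_max
```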
The upstream and downstream boundary conditions are both implemented using exterior ghost cells. In order to replicate the constant inflow produced by the S100 flume, an inflow boundary condition is defined at the upstream end utilising relationships derived from the Riemann invariants across a rarefaction wave. At the downstream boundary a critical depth boundary condition is imposed. Full details for the implementation of the boundary conditions are presented in [12].
## 4 Results
The following validation test cases can be categorised into three primary flow configurations:

* Flow under a barrier.
* Flow under a barrier, producing a downstream stationary hydraulic jump.
* Flow over and under a barrier.

Through comparisons between the experimental and numerical data for the six presented validation test cases, the suitability and accuracy of the proposed solver is demonstrated.

Figure 12: Integrated control panel for the S100 Research Flume, including a schematic of the flume. Two pumps, which draw water from a recirculating sump, supply water to the flume via a pipe connected to the upstream (right) end. At the left end of the flume, the water exits via a sloped free outfall into the sump.
#### 4.0.1 Flow Under a Barrier
For test cases one and two, the solver produced accurate predictions of the upstream and downstream depths, capturing the interaction of the flow with the obstruction. In both test cases there is a slight overestimation of the upstream depth, equating to an error of \(0.7-8.3\%\) for test case one and \(0.1-12.6\%\) for test case two. The downstream depth was likewise slightly overestimated in both test cases, with a greater error for test case two due to the numerical prediction of a hydraulic jump at approximately \(x=10\)m downstream. This is potentially a consequence of greater uncertainty in the measurement of the downstream depth, due to the presence of turbulent and unsteady flow at the outfall, which made implementing the correct downstream boundary condition difficult. Moreover, the location of a stationary hydraulic jump was found to be extremely sensitive to small deviations in the flow during the laboratory experiments.
The velocity upstream of the barrier is predicted accurately for both test cases, with errors in the region of \(7-11\%\). The numerical estimate of the velocity at the upstream face of the barrier has a larger error; however, this error is localised, constrained to the structure cell immediately upstream of the barrier. Since the discharge predictions are otherwise accurate, the overestimation of the downstream depth corresponds to an underestimation of the downstream velocity, equating to an error of \(2.4-18.8\%\) for test case one and \(0.8-20.6\%\) for test case two, with the larger errors for test case two arising from the incorrect prediction of the hydraulic jump.
Figure 13: Comparison between numerical and experimental results for test case 1. Details of the numerical setup can be found in Section 3.1.
Figure 14: Comparison between numerical and experimental results for test case 2. Details of the numerical setup can be found in Section 3.1.
#### 4.0.2 Stationary Hydraulic Jump
Test case five and test case six showcase the capacity of the solver to accurately resolve stationary hydraulic jumps. The two presented test cases use the same barrier configurations with different flow rates, resulting in the formation of a different stationary hydraulic jump for each scenario. In both cases, the numerical results correctly predicted the formation of a stationary hydraulic jump downstream of the barrier. For test case six, the position and height of the jump were accurately captured. For test case five, the height of the jump was accurately captured; however, the jump formed prematurely, at approximately \(x=0.5\)m upstream of the actual location. The robust wave estimation algorithm (Algorithm 1) was determined to be crucial for accurately capturing and maintaining the stationary hydraulic jumps in the relevant numerical simulations.
In both cases the numerical results predict jumps with a zero-length roller, characterised by a sharp discontinuity in the depth of flow at the toe of the jump, which is a feature of the classical shallow water equations; since no internal energy is represented within the classical shallow water equations, energy loss through a shock discontinuity is instead captured via the Rankine-Hugoniot relations arising from the conservation of mass and momentum [22]. This is insufficient to capture the complex behaviour within the transition region of turbulent hydraulic jumps with a Froude number greater than \(1.5\). Methods that overcome these shortcomings of the classical shallow water equations, such as the work of Richard and Gavrilyuk [23], are neither appropriate nor necessary for the intended application of flood risk modelling.
More generally, the predictions of the upstream and downstream depth and velocity proved to be accurate for both test cases, aside from the early prediction of the hydraulic jump for test case five. For test case five, there was a slight overestimation of the upstream depth, corresponding to an error in the region of \(1.4-12.3\%\). Ignoring the region of the domain occupied by the hydraulic jump (\(6-7\)m), downstream depth predictions were also found to be accurate, with errors in the region of \(0.7-13.2\%\). For test case six, the accuracy of the predictions starts to degrade towards the downstream boundary, suggesting that the boundary condition may not be optimal. However, despite the increasing errors towards the downstream boundary, the solver still produced accurate results overall, with depth errors of \(0.2-19.2\%\) and velocity errors of \(0.2-23.6\%\) for the data points between \(x=0-8\)m, with errors increasing to \(27.3\%\) and \(36.7\%\) respectively at the boundary.
Figure 15: Comparison between numerical and experimental results for test case 5. Details of the numerical setup can be found in Section 3.1.
Figure 16: Comparison between numerical and experimental results for test case 6. Details of the numerical setup can be found in Section 3.1.
#### 4.0.3 Flow Over and Under a Barrier
For test case eight and test case nine, depth predictions proved to be accurate, with errors increasing towards the downstream boundary in both cases. For test case eight, upstream depth predictions were extremely accurate (\(0.1-3.3\%\)). The upstream depth was overestimated for test case nine but remained accurate, with errors in the region of \(5.9-10.5\%\). For both test cases, there was a slight overestimation of the downstream depth, with errors in the range of \(6.5-24.7\%\). Figure 17 shows that the water was observed to flow vertically over the barrier for test case eight, which cannot be captured by the numerical model due to the nature of the fundamental equations and the structure of the finite volume scheme. Although this behaviour is not captured by the model, the overall results remain accurate and the general behaviour is well captured. Certainly, for applications concerning flood risk modelling, the key quantities are the upstream and downstream depths, which are consistent with the validation data.
The velocity predictions are similarly accurate, with errors in the region of \(1.2-19.8\%\) for both of the presented test cases. As for the previous test cases, there is also a localised error in the prediction of the discharge in the cells adjacent to the barrier, with discharge predictions otherwise proving accurate.
Figure 17: Comparison between numerical and experimental results for test case 8. Details of the numerical setup can be found in Section 3.1.
Figure 18: Comparison between numerical and experimental results for test case 9. Details of the numerical setup can be found in Section 3.1.
The data in Table 2 demonstrate negligible differences in the results across the tested mesh resolutions, which range between \(1-20\)cm (\(\Delta x=1-20\)cm). The relevant plots illustrating the results can be found in the Appendices. The primary difference between the meshes is in the sharpness of the depth discontinuity at the toe of the stationary hydraulic jump, which becomes steeper as the mesh is refined.
## 5 Comparison
In order to demonstrate the comparative value of the solver presented within this paper, designated Solver 2, a comparison is presented with the solver presented in [12], designated Solver 1. A comparison of the solvers for test case one, shown in Figure 19, demonstrates accurate results for both solvers. Solver 2 shows a marked increase in accuracy for the depth upstream of the barrier, whereas the depth and velocity downstream of the barrier are captured slightly more accurately by Solver 1. Similarly, for test case 8, shown in Figure 22, there is an improvement in the prediction of the upstream depth for Solver 2, with comparably accurate results for both solvers downstream of the barrier. The benefits of Solver 2 are, however, best showcased in Figure 20 and Figure 21. Whilst Solver 1 is able to broadly capture the upstream and downstream depths, which is of primary concern for flood risk management applications, Solver 2 captures the upstream and downstream depths more accurately, including the formation of a stationary hydraulic jump. The difference in the accuracy of the velocity predictions downstream of the barrier is stark and demonstrates the superiority of the solution procedure utilised by Solver 2 with regard to capturing the horizontal velocity profile in the vertical plane at the structure interface.
Table 2: Numerical predictions of depth, velocity and discharge at the measurement positions between \(x=0.00\,\)m and \(x=12.50\,\)m for mesh resolutions of \(\Delta x=1-20\,\)cm, compared against the experimental minimum and maximum values.

Although it is clear that Solver 2 is capable of producing superior results, there is a clear increase in complexity and computational burden in comparison with Solver 1. However, since the implementation is local, on a sufficiently large domain the difference in computational efficiency of the two solvers is unlikely to be a limiting factor, since structure cells are likely to comprise a very small percentage of all cells within the computational domain. As such, the primary grounds for the use of Solver 1 over Solver 2 should be limited to scenarios in which simplicity of implementation is important and to use cases which predominantly involve supercritical downstream flow regimes. Otherwise, Solver 2 proves to be the optimal solution. Furthermore, the capacity of Solver 2 to accurately resolve stationary hydraulic jumps and to more accurately capture the velocity at the barrier presents further opportunities, such as the modelling of the transport of water-soluble contaminants. Passive scalars, such as water-soluble contaminants, are assumed to be passively advected with the fluid; by reintroducing the contact discontinuity wave, via switching from an HLL to an HLLC (Harten-Lax-van Leer contact) approximate Riemann solver [24], their transport can be modelled. Since this process is highly dependent on the accurate determination of the velocity, this is only possible for Solver 2. This has important applications in terms of modelling water quality, especially since the combination of flows around obstacles and species equations is seldom explored; it will therefore be the subject of further work.
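To make the role of the contact wave concrete, the following is a minimal Python sketch of an HLLC interface flux for the one-dimensional shallow water equations with a passive scalar. It is not the solver presented in this paper; the Davis wave-speed estimates and all names are our own illustrative choices.

```python
import numpy as np

g = 9.81  # gravitational acceleration (m/s^2)

def hllc_flux(hL, uL, phiL, hR, uR, phiR):
    """Interface flux for the 1D shallow water equations with a passive
    scalar phi, conserved state (h, hu, h*phi). Davis wave-speed estimates."""
    cL, cR = np.sqrt(g * hL), np.sqrt(g * hR)
    sL = min(uL - cL, uR - cR)
    sR = max(uL + cL, uR + cR)

    def physical_flux(h, u, phi):
        return np.array([h * u, h * u**2 + 0.5 * g * h**2, h * u * phi])

    fL = physical_flux(hL, uL, phiL)
    fR = physical_flux(hR, uR, phiR)
    if sL >= 0.0:
        return fL
    if sR <= 0.0:
        return fR

    # HLL flux for the (h, hu) subsystem
    qL = np.array([hL, hL * uL, hL * phiL])
    qR = np.array([hR, hR * uR, hR * phiR])
    f_hll = (sR * fL - sL * fR + sL * sR * (qR - qL)) / (sR - sL)

    # Contact wave speed; the scalar is upwinded across it, which is what
    # reintroduces the contact discontinuity lost by a plain HLL solver.
    s_star = (sL * hR * (uR - sR) - sR * hL * (uL - sL)) / (
        hR * (uR - sR) - hL * (uL - sL))
    phi_upwind = phiL if s_star >= 0.0 else phiR
    return np.array([f_hll[0], f_hll[1], f_hll[0] * phi_upwind])
```

The scalar flux is simply the mass flux multiplied by the scalar value upwinded according to the sign of the contact speed \(S_{*}\); since \(S_{*}\) depends directly on the interface velocities, an accurate velocity determination is essential, as noted above.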
Figure 19: Comparison between solver 1 and solver 2 with respect to the experimental results for test case 1. Details for solver 1 can be found in [12]. Details for the numerical setup can be found in Section 3.1.
Figure 20: Comparison between solver 1 and solver 2 with respect to the experimental results for test case 5. Details for solver 1 can be found in [12]. Details for the numerical setup can be found in Section 3.1.
Figure 21: Comparison between solver 1 and solver 2 with respect to the experimental results for test case 6. Details for solver 1 can be found in [12]. Details for the numerical setup can be found in Section 3.1.
Figure 22: Comparison between solver 1 and solver 2 with respect to the experimental results for test case 8. Details for solver 1 can be found in [12]. Details for the numerical setup can be found in Section 3.1.
## 6 Conclusion
A new Riemann solver, capable of resolving numerical fluxes across a partially obstructed interface, has been presented. Via the validation process, it has been demonstrated that the solver is able to adequately capture fluid-structure interactions for a range of barrier configurations and flow rates. Furthermore, via the comparison process, it has been demonstrated that the solver represents a significant improvement on the previously published solver [12]. It is clear that the new solution procedure addresses the identified weakness of the previous solver by more accurately capturing the vertical variation in the horizontal velocity profile at a structure interface. This results in the more accurate determination of the flow characteristics including the ability to resolve the location and jump height of stationary hydraulic jumps. However, this does come at the cost of increased computational demands and implementation complexity although, due to the local nature of the solution procedure and the proportionally small number of structure cells within a computational domain, the increase in computational expense is unlikely to be prohibitive. As for the previous solver, the biggest barrier to implementation is the scarcity of the required data for structures and the availability of suitable meshing algorithms, which remains the subject of further work.
The capability of the solver to resolve numerical fluxes across a partially obstructed interface has significant implications for modelling a variety of structures within two-dimensional hydrodynamic models. This has important applications in terms of improving flood inundation modelling capabilities, as well as enabling infrastructure resilience modelling and the structural health monitoring of hydraulic structures. Moreover, the more accurate determination of the fluid velocity in comparison with the previously presented solver presents new opportunities such as the capacity to model hydraulic jumps and the ability to integrate species equations, enabling the modelling of water-soluble contaminants in conjunction with flows around obstacles.
As for all models, the underlying assumptions must be considered in order to ascertain the limitations of the model, and as such the solver should be considered appropriate for modelling structures at a spatial scale whereby approximating the structure as a partial obstruction existing at a cell interface is appropriate. Although the proposed model does not capture all of the energy losses which occur as a result of the fluid-structure interaction, such effects are insignificant at this spatial scale in comparison with the effect induced by the blockage of the flow by the structure, which is well captured as shown by the validation results. For detailed analyses of individual structures, 3D CFD analyses are recommended.
Avenues for further development of the solver are limited without compromising on the compatibility of the solver with standard numerical schemes utilised in contemporary hydrodynamic models. The layer redefinition process, particularly where layers are shifted downwards a significant distance, presents the greatest weakness of the method. However, such cases involve flow which is inherently vertical in nature and it is difficult to consolidate this with the fundamental nature of the shallow water equations which underpin modern hydrodynamic modelling.
## Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
## Acknowledgements
This research was funded by the Engineering and Physical Sciences Research Council, United Kingdom grant number EP/T517914/1.
|
2310.04388 | A chromatic vanishing result for TR | In this note, we establish a vanishing result for telescopically localized
$\mathrm{TR}$. More precisely, we prove that $T(k)$-local $\mathrm{TR}$
vanishes on connective $L_n^{p,f}$-acyclic $\mathbf{E}_1$-rings for every $1
\leq k \leq n$ and deduce consequences for connective Morava K-theory and the
Thom spectra $y(n)$. The proof relies on the relationship between $\mathrm{TR}$
and the spectrum of curves on K-theory together with the fact that algebraic
K-theory preserves infinite products of additive $\infty$-categories which was
recently established by C\'{o}rdova Fedeli. | Liam Keenan, Jonas McCandless | 2023-10-06T17:28:57Z | http://arxiv.org/abs/2310.04388v1 | # A chromatic vanishing result for TR
###### Abstract.
In this note, we establish a vanishing result for telescopically localized TR. More precisely, we prove that \(T(k)\)-local TR vanishes on connective \(L_{n}^{p,f}\)-acyclic \(\mathbb{E}_{1}\)-rings for every \(1\leq k\leq n\) and deduce consequences for connective Morava K-theory and the Thom spectra \(y(n)\). The proof relies on the relationship between TR and the spectrum of curves on K-theory together with the fact that algebraic K-theory preserves infinite products of additive \(\infty\)-categories which was recently established by Cordova Fedeli.
## 1. Introduction
In this note, we study the telescopic localizations of TR inspired by the work of Land-Mathew-Meier-Tamme [24] and Mathew [28]. Our starting point is the following result which follows from the main result of [24]: If \(R\) is an \(\mathbb{E}_{1}\)-ring with \(L_{n}^{p,f}R\simeq 0\), then
\[L_{T(k)}\operatorname{K}(R)\simeq 0\]
for every \(1\leq k\leq n\). For instance, if \(R=\mathbb{Z}/p^{n}\) for some integer \(n\geq 1\), then \(L_{T(1)}\operatorname{K}(\mathbb{Z}/p^{n})\simeq 0\). We consider this result as an extension of Quillen's fundamental calculation that \(\operatorname{K}(\mathbb{F}_{p})_{p}^{\wedge}\simeq H\mathbb{Z}_{p}\) which in particular yields that \(L_{T(1)}\operatorname{K}(\mathbb{F}_{p})\simeq 0\). This particular consequence was also obtained by Bhatt-Clausen-Mathew [4] by means of a calculation in prismatic cohomology. Additionally, the vanishing result above for \(T(k)\)-local K-theory can be applied to the Morava K-theories \(K(n)\) and to the Thom spectra \(y(n)\) considered by Mahowald-Ravenel-Shick in [27].
### Results
We will be interested in similar vanishing results for \(T(k)\)-local TR1. The invariant TR plays an instrumental role in the classical construction of topological cyclic homology in [7, 19, 6], where TC is obtained as the fixed points of a Frobenius operator on TR. In §3, we briefly review the construction of TR following [29] which produces TR together with its Frobenius operator entirely in the Borel-equivariant formalism of Nikolaus-Scholze [30]. Even though TR does not feature prominently in the construction of TC given in [30], TR remains an important invariant by virtue of its close relationship to the Witt vectors and the de Rham-Witt complex [18, 19, 20, 21]. In [28], Mathew proves that \(T(1)\)-local TR is truncating on connective \(H\mathbb{Z}\)-algebras which means that if \(R\) is a connective \(H\mathbb{Z}\)-algebra, then the canonical map of spectra
Footnote 1: Note that \(L_{T(k)}\operatorname{TR}(R)\simeq L_{T(k)}\operatorname{TR}(R,p)\), where \(\operatorname{TR}(R,p)\) denotes the \(p\)-typical version of TR. Indeed, the canonical map \(\operatorname{TR}(R)\to\operatorname{TR}(R,p)\) is a \(p\)-adic equivalence and \(T(n)\)-localization is insensitive to \(p\)-completion. Therefore, we will not distinguish between \(p\)-typical TR and integral TR in this note.
\[L_{T(1)}\operatorname{TR}(R)\to L_{T(1)}\operatorname{TR}(\pi_{0}R)\]
is an equivalence. This property was verified for \(T(1)\)-local K-theory and \(T(1)\)-local TC in [4, 24]. Our main result is a version of this at higher chromatic heights:
**Theorem A**.: _Let \(n\geq 1\). If \(R\) is a connective \(\mathbb{E}_{1}\)-ring such that \(L_{n}^{p,f}R\simeq 0\), then_
\[L_{T(k)}\operatorname{TR}(R)\simeq 0\]
_for every \(1\leq k\leq n\)._
We remark that Theorem A is a consequence of the work of [24] in the case where \(R\) admits a more refined multiplicative structure; If \(R\) admits an \(\mathbb{E}_{m}\)-ring structure for \(m\geq 2\), then the refined cyclotomic trace \(\operatorname{K}(R)\to\operatorname{TR}(R)\) is a map of \(\mathbb{E}_{1}\)-rings. Consequently, the spectrum \(L_{T(k)}\operatorname{TR}(R)\) admits the structure of a \(L_{T(k)}\operatorname{K}(R)\)-module and \(L_{T(k)}\operatorname{K}(R)\simeq 0\) by [24, Theorem 3.8]. A similar sort of reasoning has recently been employed with great success to study redshift phenomena for algebraic K-theory in [9, 11, 15, 31]. We deduce the following results from Theorem A:
**Corollary B**.: _Let \(n\geq 1\). Then \(L_{T(k)}\operatorname{TR}(\mathbb{Z}/p^{n})\simeq 0\) for every \(k\geq 1\)._
We stress that Corollary B is a consequence of the work of [4, 24] by the reasoning above. For \(n=1\), Corollary B can also be deduced from the work of Mathew [28]. Since \(T(1)\)-local TR is truncating on connective \(H\mathbb{Z}\)-algebras, it is in particular nilinvariant by [25], so
\[L_{T(1)}\operatorname{TR}(\mathbb{Z}/p^{n})\simeq L_{T(1)}\operatorname{TR}( \mathbb{F}_{p})\simeq 0,\]
where the final equivalence follows since \(\operatorname{TR}(\mathbb{F}_{p},p)\simeq H\mathbb{Z}_{p}\) by Hesselholt-Madsen [19]. As a consequence of Theorem A, we deduce a new chromatic vanishing result for the connective Morava K-theories, which we denote by \(k(n)\). While \(k(n)\) admits the structure of an \(\mathbb{E}_{1}\)-ring, it does not admit the structure of an \(\mathbb{E}_{2}\)-ring, so we cannot argue using the refined cyclotomic trace above.
**Corollary C**.: _Let \(n\geq 2\). Then \(L_{T(k)}\operatorname{TR}(k(n))\simeq 0\) for every \(1\leq k\leq n-1\)._
Similarly, we obtain a chromatic vanishing result for the Thom spectra \(y(n)\) considered in [27].
### Methods
We end by explaining the strategy of our proof of Theorem A. The key input is the close relationship between TR and the spectrum of curves on K-theory as studied in [3, 5, 18, 29]. For every \(\mathbb{E}_{1}\)-ring \(R\), the spectrum of curves on K-theory is defined by
\[\operatorname{C}(R)=\varprojlim_{i}\Omega\tilde{\operatorname{K}}(R[t]/t^{i}),\]
where \(\tilde{\operatorname{K}}(R[t]/t^{i})\) denotes the fiber of the map \(\operatorname{K}(R[t]/t^{i})\to\operatorname{K}(R)\) induced by the augmentation. If we assume that \(R\) is connective, then \(\operatorname{TR}(R)\simeq\operatorname{C}(R)\) by [29, Corollary 4.2.5]. This result was preceded by Hesselholt [18] and Betley-Schlichtkrull [3] who established the result for associative rings after profinite completion. Combining the theorem of the weighted heart (cf. [13, 17, 16]) with the recent result of Cordova Fedeli [12, Corollary 2.11.1] which asserts that algebraic K-theory preserves arbitrary products of additive \(\infty\)-categories, we reduce to proving that
\[L_{T(k)}\operatorname{K}^{\oplus}\big{(}\prod_{i\geq 1}\operatorname{Proj}_{R[t ]/t^{i}}^{\omega}\big{)}\simeq 0\]
provided that \(L_{n}^{p,f}R\simeq 0\), where \(\operatorname{Proj}_{R[t]/t^{i}}^{\omega}\) denotes the additive \(\infty\)-category of finitely generated projective \(R[t]/t^{i}\)-modules and \(\operatorname{K}^{\oplus}\) denotes additive algebraic K-theory. This claim can be verified explicitly by using [24, Proposition 3.6].
_Acknowledgements_.: The authors are grateful to Akhil Mathew for discussions about and interest in this project. The first author would also like to thank Tyler Lawson for a number of helpful conversations. The second author was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC 2044 390685587, Mathematics Munster: Dynamics-Geometry-Structure and the Max Planck Institute for Mathematics in Bonn while working on this project.
## 2. Preliminaries on weight structures and K-theory
The main technical apparatus for deducing our chromatic vanishing result for TR is the notion of a weight structure on a stable \(\infty\)-category in conjunction with the closely related theorem of the weighted heart (cf. [13, 16]). This will help us reduce to studying additive algebraic K-theory of additive \(\infty\)-categories.
**Definition 2.1**.: A weight structure on a stable \(\infty\)-category \(\mathcal{C}\) consists of a pair of full subcategories \(\mathcal{C}_{[0,\infty]}\) and \(\mathcal{C}_{[-\infty,0]}\) of \(\mathcal{C}\) such that the following conditions are satisfied:
1. The full subcategories \(\mathcal{C}_{[0,\infty]}\) and \(\mathcal{C}_{[-\infty,0]}\) are closed under retracts in \(\mathcal{C}\).
2. For \(X\in\mathcal{C}_{[-\infty,0]}\) and \(Y\in\mathcal{C}_{[0,\infty]}\), the mapping spectrum \(\operatorname{map}_{\mathcal{C}}(X,Y)\) is connective.
3. For every \(X\in\mathcal{C}\), there is a fiber sequence \[X^{\prime}\to X\to X^{\prime\prime}\] with \(X^{\prime}\in\mathcal{C}_{[-\infty,0]}\) and \(X^{\prime\prime}[-1]\in\mathcal{C}_{[0,\infty]}\).
The heart of the weight structure is the subcategory \(\mathcal{C}^{\text{ht}}=\mathcal{C}_{[0,0]}\), where \(\mathcal{C}_{[a,b]}=\mathcal{C}_{[a,\infty]}\cap\mathcal{C}_{[-\infty,b]}\). The weight structure is said to be exhaustive if every object is bounded, in the sense that
\[\mathcal{C}=\bigcup_{n\in\mathbb{Z}}\mathcal{C}_{[-n,n]}.\]
A weighted \(\infty\)-category is a stable \(\infty\)-category equipped with a weight structure.
**Remark 2.2**.: The heart of a weighted \(\infty\)-category is an additive \(\infty\)-category ([16, Lemma 3.1.2]).
We recall the following terminology which will play an important role throughout this note. For every connective \(\mathbb{E}_{1}\)-ring \(R\), let \(\operatorname{Proj}_{R}^{\omega}\) denote the full subcategory of the \(\infty\)-category \(\operatorname{LMod}_{R}^{\geq 0}\) spanned by those connective left \(R\)-modules which are finitely generated and projective. Recall that an object of \(\operatorname{Proj}_{R}^{\omega}\) can be written as a retract of a finitely generated free \(R\)-module (cf. [26, Proposition 7.2.2.7]). For any not necessarily connective \(\mathbb{E}_{1}\)-ring, let \(\operatorname{Perf}_{R}\) denote the \(\infty\)-category of perfect \(R\)-modules defined as the smallest stable subcategory of \(\operatorname{LMod}_{R}\) which contains \(R\) and is closed under retracts. The following is our main example of interest:
**Example 2.3**.: For a connective \(\mathbb{E}_{1}\)-ring \(R\), let \(\operatorname{Perf}_{R,\geq 0}\) be the full subcategory of \(\operatorname{Perf}_{R}\) spanned by those perfect \(R\)-modules which are connective, and let \(\operatorname{Perf}_{R,\leq 0}\) denote the full subcategory of \(\operatorname{Perf}_{R}\) spanned by those perfect \(R\)-modules \(M\) which have projective amplitude \(\leq 0\). This means that every \(R\)-linear map \(M\to N\) is nullhomotopic provided that \(N\) is \(1\)-connective. The pair \((\operatorname{Perf}_{R,\geq 0},\operatorname{Perf}_{R,\leq 0})\) defines an exhaustive weight structure on \(\operatorname{Perf}_{R}\) whose heart is equivalent to the additive \(\infty\)-category \(\operatorname{Proj}_{R}^{\omega}\) of finitely generated projective \(R\)-modules (cf. [17, 1.38 & 1.39]);
while the proofs therein are stated for connective \(\mathbb{E}_{\infty}\)-rings, the same arguments work in the \(\mathbb{E}_{1}\) case.
The algebraic K-theory of a weighted \(\infty\)-category is often determined by the additive algebraic K-theory of its heart by virtue of the theorem of the weighted heart first established by Fontes [13] but we also refer the reader to [16, Corollary 8.1.3, Remark 8.1.4]. Let \(\mathcal{A}\) denote an additive \(\infty\)-category regarded as a symmetric monoidal \(\infty\)-category with the cocartesian symmetric monoidal structure, so that the core \(\mathcal{A}^{\simeq}\) inherits the structure of an \(\mathbb{E}_{\infty}\)-monoid. Recall that the additive algebraic K-theory of \(\mathcal{A}\) is defined by

\[\operatorname{K}^{\oplus}(\mathcal{A})=(\mathcal{A}^{\simeq})^{\operatorname{grp}},\]

where \((\mathcal{A}^{\simeq})^{\operatorname{grp}}\) denotes the group completion of the \(\mathbb{E}_{\infty}\)-monoid \(\mathcal{A}^{\simeq}\). We have the following result which will play an instrumental role below (cf. [13, Theorem 5.1] and [16, Corollary 8.1.3]):
**Theorem 2.4**.: _The canonical map of spectra_
\[\operatorname{K}^{\oplus}(\mathcal{C}^{\operatorname{ht}})\to\operatorname{K }(\mathcal{C})\]
_is an equivalence for every stable \(\infty\)-category \(\mathcal{C}\) equipped with an exhaustive weight structure._
## 3. Chromatic vanishing results
The main goal of this section is to prove Theorem A from §1 and discuss various consequences. As explained, our proof of this result relies on the close relationship between \(\operatorname{TR}\) and the spectrum of curves in K-theory (cf. [3, 18, 29]). We will regard \(\operatorname{TR}\) as a functor \(\operatorname{TR}:\operatorname{Alg}_{\mathbb{E}_{1}}^{\operatorname{cn}}\to\operatorname{Sp}\) given by
\[\operatorname{TR}(R)\simeq\operatorname{map}_{\operatorname{CycSp}}( \overline{\operatorname{THH}}(\mathbf{S}[t]),\operatorname{THH}(R))\]
following [29] and this agrees with the classical construction of \(\operatorname{TR}\) by [29, Theorem 3.3.12]. By virtue of our assumption that \(R\) is connective, there is an equivalence of spectra
\[\operatorname{TR}(R)\simeq\varprojlim\Omega\tilde{\operatorname{K}}(R[t]/t^{i}),\]
where \(\tilde{\operatorname{K}}(R[t]/t^{i})\) denotes the fiber of the map \(\operatorname{K}(R[t]/t^{i})\to\operatorname{K}(R)\) induced by the augmentation. In this generality, the result was obtained by the second author in [29] preceded by Hesselholt [18] and Betley-Schlichtkrull [3] who proved the result for associative rings after profinite completion. With this equivalence at our disposal, we prove the following result:
**Theorem 3.1**.: _Let \(n\geq 1\). If \(R\) is a connective \(\mathbb{E}_{1}\)-ring such that \(L_{n}^{p,f}R\simeq 0\), then_
\[L_{T(k)}\operatorname{TR}(R)\simeq 0\]
_for every \(1\leq k\leq n\)._
The limit in the definition of the spectrum of curves on K-theory above does not commute with \(T(k)\)-localization. Instead, the proof of Theorem 3.1 relies on the following result, which is proved by combining the theorem of the weighted heart and a recent result which asserts that additive algebraic K-theory preserves infinite products of additive \(\infty\)-categories, due to Cordova Fedeli [12].
**Proposition 3.2**.: _Let \(R\) be a connective \(\mathbb{E}_{1}\)-ring which vanishes after \(L_{n}^{p,f}\)-localization. If \(\{S_{i}\}_{i\in I}\) is a collection of connective \(\mathbb{E}_{1}\)-rings with a map of \(\mathbb{E}_{1}\)-rings \(R\to S_{i}\) for every \(i\in I\), then_
\[L_{T(k)}\big{(}\prod_{i\in I}\operatorname{K}(S_{i})\big{)}\simeq 0\]
_for every \(1\leq k\leq n\)._
Proof.: For \(i\in I\), the stable \(\infty\)-category \(\operatorname{Perf}_{S_{i}}\) admits an exhaustive weight structure whose heart is equivalent to the additive \(\infty\)-category \(\operatorname{Proj}_{S_{i}}^{\omega}\) by Example 2.3. The canonical composite
\[\operatorname{K}^{\oplus}\Big{(}\prod_{i\in I}\operatorname{Proj}_{S_{i}}^{ \omega}\Big{)}\to\prod_{i\in I}\operatorname{K}^{\oplus}(\operatorname{Proj}_ {S_{i}}^{\omega})\to\prod_{i\in I}\operatorname{K}(\operatorname{Perf}_{S_{i}})\]
is an equivalence by [12, Corollary 2.11.1] and Theorem 2.4, so we have reduced to proving that
\[L_{T(k)}\operatorname{K}^{\oplus}\Big{(}\prod_{i\in I}\operatorname{Proj}_{S_ {i}}^{\omega}\Big{)}\simeq 0\]
for \(1\leq k\leq n\). By [24, Proposition 3.6], it suffices to prove that the endomorphism \(\mathbb{E}_{1}\)-rings of
\[\mathcal{A}=\prod_{i\in I}\operatorname{Proj}_{S_{i}}^{\omega}\]
vanish after \(L_{n}^{p,f}\)-localization. If \(P\in\mathcal{A}\), then the endomorphism \(\mathbb{E}_{1}\)-ring of \(P\) is given by
\[\operatorname{End}_{\mathcal{A}}(P)\simeq\prod_{i\in I}\operatorname{map}_{S_ {i}}(P_{i},P_{i}),\]
where \(\operatorname{map}_{S_{i}}(P_{i},P_{i})\) denotes the mapping spectrum in \(\operatorname{LMod}_{S_{i}}\). For each \(i\in I\), we may choose a positive integer \(n_{i}\geq 1\) such that \(P_{i}\) is a retract of \(S_{i}^{\oplus n_{i}}\) by virtue of our assumption that \(P_{i}\) is a finitely generated projective \(S_{i}\)-module. Consequently, we obtain a retract diagram of spectra
\[\operatorname{End}_{\mathcal{A}}(P)\to\prod_{i\in I}S_{i}^{\oplus n_{i}^{2}} \to\operatorname{End}_{\mathcal{A}}(P)\]
which proves the desired statement since the middle term is a left \(R\)-module, hence vanishes after \(L_{n}^{p,f}\)-localization by virtue of our assumption that \(R\) is \(L_{n}^{p,f}\)-acyclic.
**Remark 3.3**.: In general, \(E\)-acyclic spectra are not closed under infinite products; for each \(n\geq 0\), the \(n\)th Postnikov truncation \(\tau_{\leq n}\mathbb{S}\) is \(K(1)\)-acyclic, whereas \(\prod_{n\geq 0}\tau_{\leq n}\mathbb{S}\) is not, else \(L_{K(1)}\mathbb{S}\simeq 0\). The assumptions of Proposition 3.2 should be viewed as a uniformity condition on the spectra \(\operatorname{K}(S_{i})\), forcing their product to become acyclic.
Proof of Theorem 3.1.: Since \(R\) is a connective \(\mathbb{E}_{1}\)-ring, there is an equivalence of spectra \(\operatorname{TR}(R)\simeq\operatorname{C}(R)\) by [29, Corollary 4.2.5]. Thus, the spectrum \(\Sigma\operatorname{TR}(R)\) is the fiber of a suitable map
\[\prod_{i\geq 1}\widetilde{\operatorname{K}}(R[t]/t^{i})\to\prod_{i\geq 1} \widetilde{\operatorname{K}}(R[t]/t^{i})\]
which proves the desired statement as these products vanish after \(T(k)\)-localization for \(1\leq k\leq n\) by virtue of Proposition 3.2.
**Remark 3.4**.: As remarked above, we have used work by Cordova Fedeli [12] in a crucial way. This result on K-theory of additive \(\infty\)-categories is part of a long tradition of examining the interaction of algebraic K-theory and infinite products of categories. One of the first results of this kind is due to Carlsson, who showed that K-theory preserves infinite products of exact \(1\)-categories with a cylinder functor [10]. In [23], Kasprowski-Winges proved that K-theory
preserves infinite products of additive categories. Furthermore, Kasprowski-Winges [22] used a characterization of Grayson [14] to prove that non-connective algebraic K-theory preserves infinite products of stable \(\infty\)-categories and this was used in [8] with Bunke to prove the analogous statement of prestable \(\infty\)-categories.
**Remark 3.5**.: Another attempt to prove Proposition 3.2 proceeds by invoking the recent result of Kasprowski-Winges [22], which asserts that the canonical map of spectra
\[\operatorname{K}\big{(}\prod_{i\in I}\operatorname{Perf}(S_{i})\big{)} \to\prod_{i\in I}\operatorname{K}(S_{i})\]
is an equivalence (cf. Remark 3.4). Proceeding as in the proof of Proposition 3.2, it suffices to prove that the endomorphism \(\mathbb{E}_{1}\)-rings of the product of the stable \(\infty\)-categories \(\operatorname{Perf}(S_{i})\) vanish after \(L_{n}^{p,f}\)-localization. This is closely related to the following assertion:
1. Let \(E\) denote the endomorphism \(\mathbb{E}_{1}\)-ring of a finite spectrum \(V\) of type \(n\). If \(v:\Sigma^{k}E\to E\) is the associated \(v_{n}\) self-map of \(E\), then there is a canonical lift of \(v\) to a map of \(E\)-\(E\)-bimodules.
By the description of the \(\mathbb{E}_{1}\)-center as Hochschild cohomology, the statement \((*)\) is equivalent to asking for a lift of the class \(v\in\pi_{*}(E)\) to a class \(\tilde{v}\in\pi_{*}\mathcal{Z}_{\mathbb{E}_{1}}(E)\) along the \(\mathbb{E}_{1}\)-map \(\mathcal{Z}_{\mathbb{E}_{1}}(E)\to E\). Classes which do lift in this way can be viewed as "homotopically central" elements of \(E\), and we remark that such lifts exist for all \(\mathbb{E}_{2}\)-rings, by the universal property of the \(\mathbb{E}_{1}\)-center.
However, the assertion \((*)\) is false as we learned from Maxime Ramzi, and we thank him for help with the following argument. If such a lift exists, then we obtain an equivalence of \(L_{K(n)}E\)-\(L_{K(n)}E\)-bimodules
\[\varphi\,\colon\Sigma^{k}L_{K(n)}E\to L_{K(n)}E,\]
and there is an equivalence of \(\mathbb{E}_{1}\)-rings \(\operatorname{End}_{K(n)}(L_{K(n)}V)\simeq L_{K(n)}E\) since \(V\) is a finite spectrum. The \(\infty\)-category of \(K(n)\)-local spectra is equivalent to the \(\infty\)-category \(\operatorname{Mod}_{L_{K(n)}E}(\operatorname{Sp}_{K(n)})\) since \(L_{K(n)}V\) is a compact generator of \(\operatorname{Sp}_{K(n)}\). As a consequence, for every \(K(n)\)-local spectrum \(X\), we obtain an equivalence \(\Sigma^{k}X\to X\) by base-changing along \(\varphi\). This is a contradiction since the homotopy groups of a \(K(n)\)-local spectrum are not periodic. We indicate an example of this at every height \(n\geq 1\). Let \(k\) be a perfect field of characteristic \(p\), let \(\mathbb{G}\) be a \(1\)-dimensional formal group of height \(n\), and let \(E_{n}\) denote the associated Lubin-Tate theory which canonically carries the structure of an \(\mathbb{E}_{\infty}\)-ring. For every topological generator \(g\) of \(\mathbb{Z}_{p}^{\times}\), there is a map of \(\mathbb{E}_{\infty}\)-rings \(\psi_{g}:E_{n}\to E_{n}\), and we let \(F_{n}\) denote the fiber of the map
\[E_{n}\xrightarrow{1-\psi_{g}}E_{n}.\]
A calculation reveals that the homotopy groups of \(F_{n}\) are not periodic. For instance, if \(n=1\), then \(F_{1}\simeq L_{K(1)}\mathbb{S}\) since the map \(\psi_{g}\) is induced by Adams operations on \(E_{1}\simeq\operatorname{KU}_{p}^{\wedge}\).
Finally, we explore some immediate consequences of Theorem 3.1.
**Corollary 3.6**.: _Let \(R\) be a connective \(\mathbb{E}_{1}\)-algebra over \(\mathbb{Z}/p^{j}\). If \(n\geq 1\), then \(L_{T(n)}\operatorname{TR}(R)\simeq 0\)._
Proof.: Note that \(L_{n}^{p,f}R\) is a module over \(L_{n}^{p,f}\mathbb{Z}/p^{j}\simeq 0\), so the assertion follows from Theorem 3.1.
Recall that Corollary 3.6 above also follows from [4, 24, 28] as discussed in the introduction. We deduce some consequences for connective Morava K-theory. Let \(k(n)\) denote the connective cover of the \(n\)th Morava K-theory \(K(n)\). The spectrum \(k(n)\) carries the structure of an \(\mathbb{E}_{1}\)-ring but not the structure of an \(\mathbb{E}_{2}\)-ring. We have the following:
**Corollary 3.7**.: _If \(n\geq 2\), then \(L_{T(k)}\operatorname{TR}(k(n))\simeq 0\) for every \(1\leq k\leq n-1\)._
Proof.: For \(n\geq 2\), the canonical map \(k(n)\to\mathbb{F}_{p}\) is a \(L_{n-1}^{p,f}\)-local equivalence by [24, Lemma 2.2], so the assertion follows from Theorem 3.1.
**Remark 3.8**.: There is a fiber sequence of spectra
\[\operatorname{K}(\mathbb{F}_{p})\to\operatorname{K}(k(n))\to\operatorname{K} (K(n)),\]
by [1, Proposition 4.4] preceded by [2]. We consider this as an analogue of Quillen's devissage theorem for algebraic K-theory of ring spectra. One might ask whether we can establish a similar fiber sequence for \(\operatorname{TR}\). In particular, this would allow us to deduce an analogue of Corollary 3.7 for the non-connective Morava \(K\)-theory.
Let \(y(n)\) denote the Thom spectrum considered in [27, Section 3]. This is the Thom spectrum associated to the map of \(\mathbb{E}_{1}\)-spaces
\[\Omega J_{p^{n-1}}S^{2}\hookrightarrow\Omega^{2}S^{3}\to\operatorname{BGL}_{1 }(\mathbb{S}_{p}^{\wedge})\]
where \(J_{p^{n-1}}S^{2}\) is the \(2(p^{n-1})\)-skeleton of \(\Omega S^{3}\), which has a single cell in each even dimension. The map \(\Omega^{2}S^{3}\to\operatorname{BGL}_{1}(\mathbb{S}_{p}^{\wedge})\) is the spherical fibration constructed by Mahowald (for \(p=2\)) and Hopkins (for \(p\) odd) whose Thom spectrum is \(H\mathbb{F}_{p}\). We have the following:
**Corollary 3.9**.: _If \(n\geq 2\), then \(L_{T(k)}\operatorname{TR}(y(n))\simeq 0\) for every \(1\leq k\leq n-1\)._
Proof.: This follows immediately by combining Theorem 3.1 with [24, Lemma 4.14].
**Remark 3.10**.: If \(R\) is a connective \(H\mathbb{Z}\)-algebra, then the canonical map
\[L_{T(1)}\operatorname{K}(R)\to L_{T(1)}\operatorname{K}(R[1/p])\]
is an equivalence by [4, 24]. The analogue of this result does not hold for TC as explained in [24, Remark 4.27], which in particular means that the result also does not prolong to \(\operatorname{TR}\). However, at chromatic heights \(n\geq 2\), TC does satisfy a version of chromatic purity (cf. [24, Corollary 4.5]). In particular, if \(A\to B\) is an \(L_{n}^{p,f}\)-local equivalence of \(\mathbb{E}_{1}\)-rings, then the induced map
\[L_{T(n)}\operatorname{TC}(\tau_{\geq 0}A)\to L_{T(n)}\operatorname{TC}(\tau_{\geq 0}B)\]
is an equivalence. One can wonder whether such a statement is true of \(T(n)\)-local \(\operatorname{TR}\), but our methods here do not seem to shed light on this problem.
|
2307.16343 | Quantum recurrences in the kicked top | The correspondence principle plays an important role in understanding the
emergence of classical chaos from an underlying quantum mechanics. Here we
present an infinite family of quantum dynamics that never resembles the
analogous classical chaotic dynamics irrespective of dimension. These take the
form of stroboscopic unitary evolutions in the quantum kicked top that act as
the identity after a finite number of kicks. Because these state-independent
temporal periodicities are present in all dimensions, their existence
represents a universal violation of the correspondence principle. We further
discuss the relationship of these periodicities with the quantum kicked rotor,
in particular the phenomenon of quantum anti-resonance. | Amit Anand, Jack Davis, Shohini Ghose | 2023-07-30T23:42:24Z | http://arxiv.org/abs/2307.16343v1 | # Quantum recurrences in the kicked top
###### Abstract
The correspondence principle plays an important role in understanding the emergence of classical chaos from an underlying quantum mechanics. Here we present an infinite family of quantum dynamics that never resembles the analogous classical chaotic dynamics irrespective of dimension. These take the form of stroboscopic unitary evolutions in the quantum kicked top that act as the identity after a finite number of kicks. Because these state-independent temporal periodicities are present in all dimensions, their existence represents a universal violation of the correspondence principle. We further discuss the relationship of these periodicities with the quantum kicked rotor, in particular the phenomenon of quantum anti-resonance.
## I Introduction
The quantum-classical correspondence principle broadly states, in its commonly understood form, that the predictions of a dynamically evolving quantum system should reproduce the predictions of a classical system under appropriate circumstances [1]. This sometimes takes the form of a particular limit of some set of parameters that characterize the quantum system (i.e. large quantum numbers, vanishing Planck action, etc.). In such situations the transition may be called a _classical limit_ of the quantum system [2]. It is well known that classical systems can display chaotic behaviour, broadly defined as exponential sensitivity to initial conditions. Interestingly, in quantum systems that have such a chaotic classical limit, the correspondence principle is not well understood [3; 4; 5; 6]. Exploring such systems can thus provide insight into the structural differences between quantum and classical dynamics as well as the fundamental origin of chaotic phenomena.
A useful model studied in this context is the quantum kicked top [7]. This model is a spin-\(j\) system subject to a Floquet evolution (i.e. a stroboscopic dynamics). It is of interest because it lives in a finite-dimensional Hilbert space, its dynamics have a well-defined classical limit (\(j\to\infty\)) with an easily tunable degree of chaos via its Hamiltonian parameters, and it is experimentally feasible [8; 9; 10]. Furthermore, the alternative representation of any spin-\(j\) system as a many-body system of indistinguishable qubits has led to much work on understanding the surprisingly subtle relationship between dynamical entanglement, Hilbert space dimension, and emergent chaos in the kicked top model [11; 12; 13; 14; 15; 6; 16].
In this paper, we probe quantum-classical correspondence in the kicked top, and present a startling result. For certain system parameters, classical chaotic behaviour is not recovered no matter how large the value of the spin quantum number, in contradiction to Bohr correspondence. We analytically and numerically show that in all dimensions (all values of the spin \(j\)), the kicked top displays several state-independent temporal periodicities/recurrences: three for integer spin values and two for half-integer spin values. Whereas previous work has explored a specific temporal periodicity in the semiclassical limit [17], the general set of recurrences derived in our analysis has not been previously identified. Because these recurrences are state-independent and generally occur at large chaoticity values, they have no classical analog and so represent a violation of the correspondence principle. Furthermore, our results show that the transition to classical behaviour does not smoothly vary with the size of the system. Our analysis also resolves previous conflicting results on how the chaoticity parameter \(\kappa\) in the kicked top influences the presence of quantum temporal periodicity [14]. In addition, we establish a relationship between our kicked top periodicities and the quantum resonances identified in the kicked rotor. Our results highlight the complex nature of quantum chaos and challenge typical notions of quantum-classical correspondence.
## II Background
### Kicked Top Model
The quantum kicked top (QKT) is a finite-dimensional dynamical model used to study quantum chaos, known for its compact phase space and parameterizable chaoticity structure [7]. The time-dependent, periodically-driven system is governed by the Hamiltonian
\[H=\hbar\frac{pJ_{y}}{\tau}+\hbar\frac{\kappa J_{z}^{2}}{2j}\sum_{n=-\infty}^{ \infty}\delta(t-n\tau), \tag{1}\]
where \(\{J_{x},J_{y},J_{z}\}\) are the generators of angular momentum: \([J_{i},J_{j}]=i\epsilon_{ijk}J_{k}\). It describes a spin of size \(j\) precessing about the \(y\)-axis together with impulsive state-dependent twists about the \(z\)-axis with magnitude characterized by the chaoticity parameter \(\kappa\). The period between kicks is \(\tau\), and \(p\) is the amount of \(y\)-precession within one period. The associated Floquet time evolution operator
for one period is
\[U=\exp\Big{(}-i\frac{\kappa}{2j}J_{z}^{2}\Big{)}\exp\Big{(}-i\frac{p}{\tau}J_{y} \Big{)} \tag{2}\]
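For concreteness, the Floquet operator (2) can be constructed numerically in the Dicke basis \(|j,m\rangle\); the following is a minimal numpy/scipy sketch assuming the conventional choice \(\tau=1\) (the function name and example values are illustrative):

```python
import numpy as np
from scipy.linalg import expm

def kicked_top_floquet(j, kappa, p=np.pi / 2):
    """Build U = exp(-i*kappa/(2j) Jz^2) exp(-i*p Jy) in the Dicke
    basis |j, m>, ordered m = j, j-1, ..., -j."""
    dim = int(round(2 * j)) + 1
    m = j - np.arange(dim)
    Jz = np.diag(m).astype(complex)
    # J+|j,m> = sqrt(j(j+1) - m(m+1)) |j,m+1> lives on the superdiagonal
    Jp = np.diag(np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1)), k=1)
    Jy = (Jp - Jp.conj().T) / (2 * 1j)
    return expm(-1j * kappa / (2 * j) * (Jz @ Jz)) @ expm(-1j * p * Jy)

U = kicked_top_floquet(j=2, kappa=2.5)   # one kick at a mixed-regime kappa
```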
The classical kicked top can be obtained by computing the Heisenberg equations for the re-scaled angular momentum generators, \(J_{i}/j\), followed by the limit \(j\rightarrow\infty\)[7]. In the commonly considered case of (\(\tau=1,p=\pi/2\)), the classical map is
\[X_{n+1} = Z_{n}\cos(\kappa X_{n})+Y_{n}\sin(\kappa X_{n}),\] \[Y_{n+1} = Y_{n}\cos(\kappa X_{n})-Z_{n}\sin(\kappa X_{n}),\] \[Z_{n+1} = -X_{n}. \tag{3}\]
As the chaoticity parameter \(\kappa\) is varied, the classical dynamics ranges from completely regular motion (\(\kappa\leq 2.1\)) to a mixture of regular and chaotic motion (\(2.1\leq\kappa\leq 4.4\)) to fully chaotic motion (\(\kappa>4.4\)) [6]. The classical stroboscopic map in polar coordinates for a set of initial conditions with \(\kappa=2.5\) and \(\kappa=3.0\) is given in Fig. 1a and Fig. 1b, respectively.
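The classical map (3) is straightforward to iterate; a minimal sketch generating a single stroboscopic trajectory of the kind plotted in Fig. 1 (the initial point and \(\kappa\) value are illustrative):

```python
import numpy as np

def classical_kick(X, Y, Z, kappa):
    """One iteration of the classical kicked-top map, Eq. (3)."""
    return (Z * np.cos(kappa * X) + Y * np.sin(kappa * X),
            Y * np.cos(kappa * X) - Z * np.sin(kappa * X),
            -X)

theta, phi, kappa = 2.25, 2.0, 3.0       # illustrative initial condition
X = np.sin(theta) * np.cos(phi)
Y = np.sin(theta) * np.sin(phi)
Z = np.cos(theta)
trajectory = []
for _ in range(150):
    X, Y, Z = classical_kick(X, Y, Z, kappa)
    trajectory.append((np.arccos(Z), np.arctan2(Y, X)))  # polar coordinates
```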
### Husimi function
To study the quantum-classical correspondence in the quantum kicked top, the Husimi function is often used as an aid to compare quantum vs. classical dynamics [18; 6]. It is a non-negative quasiprobability distribution defined as
\[Q_{\rho}(\theta,\phi):=\langle\theta,\phi|\rho|\theta,\phi\rangle, \tag{4}\]
subject to the normalization condition
\[\frac{2j+1}{4\pi}\int_{S^{2}}Q_{\rho}(\theta,\phi)\sin\theta d\theta d\phi=1, \tag{5}\]
where \(|\theta,\phi\rangle\) are the standard spin coherent states associated with SU(2) dynamical symmetry [19].
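For pure states, Eq. (4) reduces to an overlap with a spin coherent state; a minimal sketch, assuming one common phase convention for \(|\theta,\phi\rangle\) (other conventions differ by relative phases):

```python
import numpy as np
from scipy.special import comb

def spin_coherent_state(j, theta, phi):
    """Amplitudes of |theta, phi> in the Dicke basis |j, m>, m = j..-j
    (one common phase convention; others differ by phases e^{i m phi})."""
    m = j - np.arange(int(round(2 * j)) + 1)
    return (np.sqrt(comb(2 * j, j - m))
            * np.cos(theta / 2) ** (j + m)
            * (np.exp(1j * phi) * np.sin(theta / 2)) ** (j - m))

def husimi(psi, j, theta, phi):
    """Q(theta, phi) = |<theta,phi|psi>|^2 for a pure state, Eq. (4)."""
    return np.abs(np.vdot(spin_coherent_state(j, theta, phi), psi)) ** 2
```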
## III Periodicity in twist strength
As pointed out in [20], there is a recurrent relationship between unitaries separated by an amount \(\Delta\kappa=2\pi j\) in twist strength:
\[U_{\kappa+2\pi j}=e^{-i\frac{\kappa+2\pi j}{2j}J_{z}^{2}}e^{-ipJ_{y}}=e^{-i\pi J_{z}^{2}}U_{\kappa}.\]
The unitary \(e^{-i\pi J_{z}^{2}}\) characterizes the difference between the actions of \(U_{\kappa}\) and \(U_{\kappa+2\pi j}\) on Hilbert space. We will show that this operator acts as a symmetric local unitary in the qubit picture and so does not modify any correlations between the qubits.
Denoting \(Z_{k}:=\sigma_{z}^{(k)}\), consider the operator \(e^{-i\pi J_{z}^{2}}\) in the qubit picture:
\[e^{-i\pi J_{z}^{2}} =\exp\Big{[}-i\frac{\pi}{4}(Z_{1}+\cdots+Z_{n})^{2}\Big{]}\] \[=\exp\Big{[}-i\frac{\pi}{4}\sum_{\vec{k}}\binom{2}{\vec{k}}Z_{1}^{k_{1}}\cdots Z_{n}^{k_{n}}\Big{]} \tag{6}\] \[=\prod_{\vec{k}}\exp\Big{[}-i\frac{\pi}{4}\binom{2}{\vec{k}}Z_{1}^{k_{1}}\cdots Z_{n}^{k_{n}}\Big{]},\]
where \(\vec{k}=(k_{1},...,k_{n})\) is a multi-index of positive integers that sums to \(2\), and \(\binom{2}{\vec{k}}\) is a multinomial coefficient. Separate the multi-indices into those with a single \(k_{i}=2\) and those that don't; the former will happen \(n\) times, and the associated Pauli operator squares to the identity:
\[\exp\Big{[}-i\frac{\pi}{4}I\Big{]}^{n}\prod_{\vec{k}\neq 2}\exp\Big{[}-i \frac{\pi}{4}\binom{2}{\vec{k}}Z_{1}^{k_{1}}\cdots Z_{n}^{k_{n}}\Big{]}. \tag{7}\]
The remaining indices each have exactly two different slots equal to \(1\) and so the multinomial coefficient is always \(2\). The exponentials consequently reduce to
\[e^{-i\frac{\pi n}{4}}\prod_{\vec{k}\neq 2}\Big{(}I^{\otimes n}\cos\frac{\pi}{2}-iZ_{1}^{k_{1}}\cdots Z_{n}^{k_{n}}\sin\frac{\pi}{2}\Big{)}\] \[=e^{-i\frac{\pi n}{4}}\prod_{\vec{k}\neq 2}\Big{(}-iZ_{1}^{k_{1}}\cdots Z_{n}^{k_{n}}\Big{)}\,. \tag{8}\]
It is already clear from Eq. (8) that \(e^{-i\pi J_{z}^{2}}\) is a local unitary and so does not affect any correlations between the qubits.
Figure 1: Stroboscopic map showing the classical time evolution over 150 kicks for \(\mathbf{a}\). \(\kappa=2.5\) and \(\mathbf{b}\). \(\kappa=3.0\) for several hundred initial points.
Hence, the entanglement generated between the qubits is periodic in the chaoticity parameter with period \(\Delta\kappa=2\pi j\)[20]. That is to say,
\[U_{\kappa+2\pi j}\stackrel{{\text{LO}}}{{\Longleftrightarrow}}U_{ \kappa}, \tag{9}\]
where LO refers to (symmetric) local operations over the global Hilbert space of the qubits.
Eq. (8) can be written in more compact form as
\[e^{-i\pi J_{z}^{2}}=(-1)^{j^{2}}Z_{1}^{n-1}\cdots Z_{n}^{n-1}, \tag{10}\]
which is clearly symmetric. This breaks into three cases of spin
\[e^{-i\pi J_{z}^{2}}=\begin{cases}Z^{\otimes n}&\text{even integer}\\ -Z^{\otimes n}&\text{odd integer}\\ e^{-i\frac{\pi}{4}}I^{\otimes n}&\text{half-integer}\end{cases}. \tag{11}\]
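Eq. (11) can be checked quickly by comparing diagonal phases in the Dicke basis, where \(e^{-i\pi J_{z}^{2}}|j,m\rangle=e^{-i\pi m^{2}}|j,m\rangle\) and \(Z^{\otimes n}\) acts as \((-1)^{j-m}\) on \(|j,m\rangle\); a minimal sketch (the spin values tested are arbitrary):

```python
import numpy as np

# exp(-i*pi*Jz^2)|j,m> = exp(-i*pi*m^2)|j,m>, while Z tensored n times
# picks up (-1)^(j-m) from the j-m "down" qubits of the Dicke state.
for j in [1.0, 1.5, 2.0, 2.5, 3.0]:
    m = j - np.arange(int(round(2 * j)) + 1)
    lhs = np.exp(-1j * np.pi * m**2)
    if j % 1 == 0:                                  # integer spin
        rhs = (-1.0) ** j * (-1.0) ** (j - m)       # +/- Z^{(x)n}, Eq. (11)
    else:                                           # half-integer spin
        rhs = np.exp(-1j * np.pi / 4) * np.ones_like(m, dtype=complex)
    assert np.allclose(lhs, rhs)
```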
## IV Temporal periodicity
Here we derive the temporal periodicity of the kicked top evolution for three special values of twist strength, \(\{2\pi j,\pi j,\frac{\pi j}{2}\}\), each of which is split into cases of integer and half-integer spin.
### Twist strength \(\kappa=2\pi j\)
The Floquet operator in the case of \(\kappa=2\pi j\) is
\[U_{2\pi j}=e^{-i\pi J_{z}^{2}}e^{-ipJ_{y}}, \tag{12}\]
where \(e^{-i\pi J_{z}^{2}}\) is a symmetric local unitary in the qubit picture (10). Like many of the results here, the consequences on temporal periodicity strongly depend on whether the spin is integer or half-integer.
#### Integer spin
In the case of integer spin the evolution squares to the identity regardless of the \(y\)-rotation angle:
\[U_{2\pi j}^{2}=I^{\otimes n}. \tag{13}\]
This can be seen using either integer form of Eq. (11) and writing the \(y\)-rotation in the qubit picture as
\[e^{-ipJ_{y}}=(I\cos\frac{p}{2}-iY\sin\frac{p}{2})^{\otimes n}. \tag{14}\]
And because in this case \(U_{2\pi j}^{2}\) is simply a composition of symmetric local unitaries, it suffices to consider a single tensor factor:
\[\begin{split}&[Z(I\cos\frac{p}{2}-iY\sin\frac{p}{2})]^{2}\\ &=[Z\cos\frac{p}{2}-X\sin\frac{p}{2}]^{2}\\ &=I(\cos^{2}\frac{p}{2}+\sin^{2}\frac{p}{2})-\frac{1}{2}(ZX+XZ) \sin p\\ &=I\\ \end{split} \tag{15}\]
with the last line coming from the anti-commutation relations of Pauli matrices.
See Fig. 2 for a visual interpretation of Eq. (13) by tracking the collectively shared Bloch vector. After the first \(y\)-rotation the twist effectively acts as a \(\pi\)-rotation about the \(z\) axis. Consequently, the second \(y\)-rotation then undoes the first and the second twist rotates back to the starting point. Note that this demonstration depends on each step being a symmetric local unitary, meaning that a spin coherent state will remain so throughout and no correlations are ever generated.
#### Half-integer spin
In the case of half-integer spin the twist becomes a scaled identity operator (11), leading to a \(p\)-dependent temporal periodicity (up to an irrelevant global phase) of \(N\) kicks under the condition
\[U_{2\pi j}^{N}=I\quad\text{if}\quad N=\frac{2\pi}{p}\in\mathbb{N} \tag{16}\]
Hence only when \(p\) is a rational fraction of \(\pi\) does there exist a temporal periodicity at this twist strength. It is interesting that in the half-integer case the rotation angle \(p\) is critical for determining the existence of a temporal periodicity while in the integer case \(p\) has no effect.
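As a concrete check with the standard choice \(p=\pi/2\) used elsewhere in this paper (so \(N=4\)), combining Eq. (11) with the fact that a \(2\pi\) rotation acts as \(-I\) on a half-integer spin gives

\[U_{2\pi j}^{4}=\Big{(}e^{-i\frac{\pi}{4}}e^{-i\frac{\pi}{2}J_{y}}\Big{)}^{4}=e^{-i\pi}e^{-i2\pi J_{y}}=(-1)(-1)I=I,\]

consistent with Eq. (16).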
### Twist strength \(\kappa=\pi j\)
The case of twist strength \(\kappa=\pi j\) yields a more interesting temporal periodicity that also depends on the spin being integer or half-integer. Details of calculations can be found in Supplementary I.
Figure 2: Evolution of the Bloch vector associated to any reduced qubit state for \(\kappa=2\pi j\). After two kicks the state returns to its initial point, showing a two-step temporal periodicity. The initial point is \((\theta,\phi)=\) (2.25,2.0).
#### Integer spin
Following a similar argument from the previous section, the general expression for the twist unitary \(e^{-i\frac{\kappa}{2j}J_{z}^{2}}\) is

\[e^{-i\frac{\kappa}{2j}J_{z}^{2}}=e^{-i\frac{\kappa n}{8j}}\prod_{\vec{k}\neq 2}\bigg{(}I^{\otimes n}\cos\frac{\kappa}{4j}-iZ_{1}^{k_{1}}\cdots Z_{n}^{k_{n}}\sin\frac{\kappa}{4j}\bigg{)}\,, \tag{17}\]
which reduces to
\[e^{-i\frac{\pi}{2}J_{z}^{2}}=e^{-i\frac{\pi}{8}n}\prod_{\vec{k}\neq 2}\frac{1}{ \sqrt{2}}\left(I^{\otimes n}-iZ_{1}^{k_{1}}\cdots Z_{n}^{k_{n}}\right) \tag{18}\]
when \(\kappa=\pi j\). This appears to be a difficult expression to evaluate but simplifies to
\[e^{-i\frac{\pi}{2}J_{z}^{2}}=e^{-i\frac{\pi}{4}}\frac{I^{\otimes n}+i(iZ)^{ \otimes n}}{\sqrt{2}}, \tag{19}\]
as can be verified by comparing the two actions on the computational basis in \((\mathbb{C}^{2})^{\otimes 2j}\). This can also be found using the Gaussian sum decomposition result from [17]. With this in mind, and writing the \(y\)-rotation as in Eq. (14), the Floquet operator can be shown to exhibit the finite-time periodicity
\[U_{\pi j}^{8}=I\qquad\forall\text{ integer }\,j. \tag{20}\]
This can be done by establishing
\[\begin{split} U_{\pi j}^{4}&=\Bigg{[}\frac{e^{-i \frac{\pi}{4}}}{\sqrt{2}}\bigg{(}I^{\otimes n}+i(iZ)^{\otimes n}\bigg{)} \bigg{(}\frac{I-i\sigma_{y}}{\sqrt{2}}\bigg{)}^{\otimes n}\Bigg{]}^{4}\\ &=-(iY)^{\otimes n}\end{split} \tag{21}\]
through repeated use of the Pauli group commutation relations. As \(n\) is an even integer and \(Y^{2}=I\), this is enough to give Eq. (20). The same calculation may be repeated for \(\kappa=\pi j+2\pi j=3\pi j\) which also shows the period 8 periodicity. While expected from the \(2\pi j\) periodicity in correlations [20], this additional calculation is necessary to conclude the stronger notion of temporal periodicity of the state itself, possibly up to global phase. For example, any SU(2) rotation with an angle incommensurate to \(\pi\) will produce a sequence of spin coherent states - and therefore a period-1 recurrence in the quantum correlations - that never returns to the original state exactly.
In contrast to the previous twist strength of \(\kappa=2\pi j\), here entanglement is generated (and destroyed) throughout the period-8 orbit. This can be seen from Eq. (19) which is clearly not a symmetric local unitary. Fig. 3 shows the orbit in the Husimi representation (4) of a \(j=50\) spin coherent state initially centred at (\(\theta=2.25,\phi=2.0\)). After the initial rotation about the \(y\)-axis, we see the action of (19) "splitting" the state into a cat-like superposition. A second round of rotation-twist iteratively produces a balanced superposition of four spin coherent states distributed over phase space. Another two kicks recombines this state into the initial spin coherent state but reflected about the y-axis, matching (21). Finally another four kicks repeats this process, resulting in a recurrence of the initial state. This regular, periodic dynamical behaviour appears to have no analogue in the classical kicked top (not least of which at \(\kappa=\pi j\)) and so represents a departure from the classical-quantum correspondence.
It should also be noted that while the above is the generic temporal periodicity, certain states related to the Hamiltonian symmetries will experience a shorter orbit. In particular if we take the initial state as \(\ket{+}_{y}\), i.e. \((\theta,\phi)=(\pi/2,\pi/2)\), then the rotation part of the unitary will be ineffective. The twist (19) will create the superposition of \(\ket{+}_{y}\) and \(\ket{-}_{y}\); it can be shown that the evolution reduces to a period-4 orbit for even integer spins and a period-2 orbit for odd integer spins. See also [15] for a related analysis.
#### Half-integer
In the case of half-integer spin and \(\kappa=j\pi\) the general twist operator, Eq. (17), is equivalent to the following unitary in the qubit picture:
\[e^{-i\frac{\pi}{2}J_{z}^{2}} =e^{-i\frac{\pi}{8}}\frac{1}{\sqrt{2}}\left[R_{z}^{\dagger}(\frac {\pi}{2})+R_{z}(\frac{\pi}{2})\right] \tag{22}\] \[=\frac{e^{-i\frac{\pi}{2}}}{\sqrt{2}}\Bigg{[}\left(\frac{I+iZ}{ \sqrt{2}}\right)^{\otimes n}+\left(\frac{I-iZ}{\sqrt{2}}\right)^{\otimes n} \Bigg{]}. \tag{23}\]
Similar to the integer case, repeated and iterated use of the Pauli group commutation relations show that Eq. (23) raised to the 6th power yields a \(\pi\)-rotation up to phase:
Figure 3: Stroboscopic Husimi evolution at \(\kappa=\pi j\) of a spin coherent state starting at \((\theta,\phi)=(2.25,2.0)\) over 8 kicks. The state splits and becomes entangled then recombines back to the original unentangled position. \(Q_{max}\) corresponds to the maximum height of the Husimi distribution in each plot. Here \(j=50\).
\[U^{6}_{\pi j}=e^{-i\frac{\pi j}{2}}(iY)^{\otimes n}. \tag{24}\]
The full state recurrence comes after 12 kicks:
\[U^{12}_{\pi j}=e^{-i\frac{\pi}{2}}I\qquad\forall\text{ half-integer}\;\;j, \tag{25}\]
which is a finite temporal periodicity up to global phase. To our knowledge this recurrence was first discovered for the spin-\(\frac{3}{2}\) case in Ref. [16]; here we have shown that it exists in all dimensions. Also, as expected, a similar calculation shows another period-12 recurrence at \(\kappa=\pi j+2\pi j=3\pi j\), similar to the integer-spin case.
Again starting with a generic spin coherent state (i.e. a symmetric product state in the qubit picture), entanglement is generated and destroyed throughout its 12-state orbit. Similar to Fig. 3, the generation occurs during the recursive splitting of the state into successive cat-like superpositions, and the destruction occurs during the subsequent recombination into new, displaced spin coherent states. States associated with Hamiltonian symmetries again experience a reduced orbit length. For the initial state as \(\ket{+}_{y}\), i.e. \((\theta,\phi)=(\pi/2,\pi/2)\), the evolution reduces to a period-3 orbit.
### Twist strength \(\kappa=\frac{\pi j}{2}\)
The case of \(\kappa=\frac{\pi j}{2}\) has the most apparent difference between integer and half-integer spin.
#### Integer spin
Using the Gaussian sum decomposition [17], for integer spin the twist operator \(e^{-i\frac{\pi}{4}J_{z}^{2}}\) splits into the superposition of rotations

\[\frac{1}{2}\Big{[}e^{-i\frac{\pi}{4}}I+e^{-i\frac{\pi}{2}J_{z}}+e^{i\frac{3\pi}{4}}e^{-i\pi J_{z}}+e^{-i\frac{3\pi}{2}J_{z}}\Big{]}. \tag{26}\]

In the qubit picture this becomes

\[\begin{split} e^{-i\frac{\pi}{4}J_{z}^{2}}=\frac{1}{2}\Bigg{[}& e^{-i\frac{\pi}{4}}I^{\otimes n}+\left(\frac{I-iZ}{\sqrt{2}}\right)^{\otimes n}\\ &+e^{i\frac{3\pi}{4}}\left(iZ\right)^{\otimes n}+\left(\frac{I+iZ}{\sqrt{2}}\right)^{\otimes n}\Bigg{]}\end{split} \tag{27}\]
where we have used the fact that \(n\) is even to simplify. Numerical calculations suggest a temporal periodicity with period 48,
\[U^{48}_{\frac{\pi j}{2}}=I\qquad\forall\text{ integer }j. \tag{28}\]
This has been confirmed up to spin \(j=500\) where the Hilbert-Schmidt distance \(\|U^{48}_{\frac{\pi j}{2}}-I\|\) remains zero within the working error tolerance of \(10^{-10}\). And similar to the previous \(\kappa=\pi j\) case (both integer and half-integer) here the Floquet operator raised to half the periodicity (i.e. 24) also acts as an effective \(\pi\)-rotation about the \(y\)-axis up to some global phase. Cat-like splitting and recombination cycles were furthermore observed in the Husimi function tracking of a generic spin coherent state. Part of the difficulty in showing this analytically comes from determining the twist operator in the qubit picture as in Eqs. (19) and (23). Numerics also confirm a period-48 recurrence at \(\kappa=\frac{\pi j}{2}+2\pi j=\frac{5\pi j}{2}\).
We also note what appears to be two higher-frequency temporal recurrences present in low dimensions at this chaoticity value: for \(j=1\) and \(j=3\) the evolution repeats after only 16 kicks rather than 48. This observation is distinct from the continued theme of the special states \(\ket{\pm}_{y}\) experiencing a reduced orbit of 24 for even values of \(j\) and 4 for odd values of \(j\), which we numerically verified.
#### Half-integer spin
In the half-integer case we surprisingly find no temporal periodicity for \(\kappa=\frac{\pi j}{2}\). We concluded this numerically by computing the entanglement entropy of any one of the reduced constituent qubits,
\[\rho=\frac{1}{2}\begin{pmatrix}1-\left\langle S_{z}\right\rangle&\left\langle S _{-}\right\rangle\\ \left\langle S_{+}\right\rangle&1+\left\langle S_{z}\right\rangle\end{pmatrix}, \tag{29}\]
via the collective spin observables \(\{S_{z},S_{\pm}=S_{x}\pm iS_{y}\}\) where \(S_{i}=J_{i}/j\)[21]. This approach was used instead of Hilbert-Schmidt distance to avoid optimizing over the angles \(\varphi\) that could have _a priori_ appeared in a hypothetical periodicity of the form \(U^{n}=e^{i\varphi}I\).
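As a concrete illustration of this diagnostic, the sketch below builds the reduced qubit state of Eq. (29) from the collective expectation values and returns its von Neumann entropy; the state vector is assumed to be stored in the same descending-\(m\) basis used in the previous sketch, and the function names are ours.

```python
import numpy as np

def qubit_state(psi, j):
    """Reduced single-qubit density matrix of Eq. (29) for a symmetric
    spin-j state psi, built from <S_z> and <S_+/-> with S_i = J_i / j."""
    m = np.arange(j, -j - 1, -1)
    Jz = np.diag(m).astype(complex)
    Jp = np.diag(np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1)), k=1).astype(complex)
    expval = lambda A: np.vdot(psi, A @ psi) / j
    Sz, Sp = expval(Jz), expval(Jp)
    return 0.5 * np.array([[1 - Sz, np.conj(Sp)],   # <S_-> = <S_+>*
                           [Sp, 1 + Sz]])

def von_neumann(rho):
    """Entanglement entropy S = -Tr(rho ln rho) of a 2x2 density matrix."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-(lam * np.log(lam)).sum())
```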
All that is needed to conclude the lack of a global recurrence is the identification of a spin coherent state that never returns to product form. We thus focus on our running example of \(|\theta,\phi\rangle=|2.25,2.0\rangle\). We found that up to spin \(j=50\frac{1}{2}\), the single-qubit dynamical entropy never falls below \(10^{-5}\) within the first 5000 kicks. In fact, the entropy generally increased with dimension. Fig. 4 plots the _smallest_ entanglement entropy obtained by any of the qubits throughout the first 5000 kicks. As can be seen, higher spins experience a highly entangled orbit, remaining close to the upper bound of \(S_{\text{max}}=\ln 2\).
Further evidence supporting the lack of a recurrence can be found in the specific case of spin \(j=\frac{3}{2}\), the smallest possible kicked top applicable to this scenario. Recently, many aspects of this low-dimensional system were solved exactly, including the single-qubit linear entropy
\[S_{\rho}^{(\text{lin})}=1-\text{Tr}[\rho^{2}] \tag{30}\]
of various initial spin states as a function of twist strength and kick number \(N\)[16]. In particular, the single-qubit linear entropy of the state \(U^{N}_{\kappa}\ket{+}_{y}\) was found to be
\[S^{(\text{lin})}(N,\kappa)=4\chi^{2}U^{2}_{N-1}(\chi)[1-2\chi^{2}U^{2}_{N-1}( \chi)], \tag{31}\]
where
\[U_{N-1}(\chi)=\frac{\sin(N\gamma)}{\sin(\gamma)} \tag{32}\]
are the Chebyshev polynomials of the second kind, with argument related to the twist strength via
\[\chi=\cos(\gamma)=\frac{1}{2}\sin(\frac{\kappa}{3}). \tag{33}\]
Eq. (31) may be efficiently computed using symbolic programming and we found that the linear entropy does not exactly vanish within the first million kicks.
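A sketch of that check, using exact symbolic arithmetic and truncated here to a short scan for illustration (the variable names are ours):

```python
import sympy as sp

# j = 3/2, kappa = pi j / 2 = 3 pi / 4  =>  chi = sin(kappa/3)/2 = sin(pi/4)/2
chi = sp.sin(sp.pi / 4) / 2            # exact: sqrt(2)/4

def S_lin(N):
    """Exact single-qubit linear entropy of Eq. (31), with U_{N-1} the
    Chebyshev polynomial of the second kind evaluated at chi."""
    U = sp.chebyshevu(N - 1, chi)
    t = sp.expand(chi**2 * U**2)
    return sp.simplify(4 * t * (1 - 2 * t))

# A recurrence at this twist strength would force S_lin to vanish exactly.
print([N for N in range(1, 101) if S_lin(N) == 0])   # expected: []
```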
Given that a recurrence is almost certainly not present in the \(j=3/2\) system at this twist strength, it seems highly unlikely that a family of recurrences exists, one for each half-integer \(j>3/2\). This argument is strengthened by focusing on the special state \(\ket{+}_{y}\), which, due to the Hamiltonian symmetry of the system, tends to experience a reduced recurrence time whenever a global periodicity exists.
The lack of periodicity at this \(\kappa\) value also shows that in general, not all twist strengths commensurate to \(\pi\) yield an exact recurrence.
### Summary and other resonances
Table 1 summarizes our results. With these recurrences established, a natural question is whether there are others. To this end, we have performed a numerical search for recurrences characterized by \(\kappa=\pi j\frac{r}{s}\) for all coprime \(1\leq r,s\leq 10\) and all integer and half-integer spins up to 15.5, and have found none. This was done by computing the von Neumann entropy for the initial state \(|\theta,\phi\rangle=|2.25,2.0\rangle\) over the first 500 kicks; the minimum entropy across the different sets of \(r\) and \(s\) up to \(j=15.5\) never falls below \(10^{-7}\). The numerics thus suggest that no sets of \(r\) and \(s\) beyond those reported here yield temporal periodicity, which places constraints on any additional values of \(\kappa\) that could produce a state-independent finite periodicity.
### Relation to kicked rotor
It is interesting to compare our results to the quantum resonance behaviour found in the quantum kicked rotor [22; 23; 24]. This purely quantum dynamics occurs when one of the Hamiltonian parameters takes the form \(4\pi\frac{r}{s}\) and is characterized by quadratic growth of the wavefunction in momentum space. In contrast, the classical kicked rotor at the same parameter value only has linear scaling. An interesting exception to this quadratic growth behaviour is the case of \(r/s=1/2\), which yields a period-2 state-independent orbit. This special case is known as _quantum anti-resonance_ due to the complete lack of momentum growth [22].
Ref. [17] proposed a kicked top version of the quantum resonance condition as
\[\kappa=4\pi\frac{r}{s}j \tag{34}\]
for coprime integers \(r\) and \(s\), where it may be assumed \(r/s<1\) without loss of generality due to the global symmetry \(U_{\kappa}=U_{\kappa+4\pi j}\). This proposal is motivated by the well-known contraction from the quantum kicked top to the quantum kicked rotor [25], effected via the simultaneous scaling
\[\kappa\sim j\qquad p\sim\frac{1}{j}\quad\text{ as }\quad j\to\infty. \tag{35}\]
Here the kicked top parameter \(\kappa\) becomes the relevant parameter in the kicked rotor that controls the existence of resonances. (Also note that despite \(j\to\infty\) the above is not to be considered a classical limit as the quantum kicked rotor is a fully quantum object - hence _contraction_.)
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**chaos parameter** & \multicolumn{2}{c|}{**period**} \\ \hline
\(\kappa\) & integer spin & half-integer spin \\ \hline
0 & 4 & 4 \\ \hline
\(\frac{\pi j}{2}\) & \(48^{*}\) & \(\times^{*}\) \\ \hline
\(\pi j\) & 8 & 12 \\ \hline
\(\frac{3\pi j}{2}\) & \(48^{*}\) & \(\times^{*}\) \\ \hline
\(2\pi j\) & 2 & 4 \\ \hline
\(\frac{5\pi j}{2}\) & \(48^{*}\) & \(\times^{*}\) \\ \hline
\(3\pi j\) & 8 & 12 \\ \hline
\(\frac{7\pi j}{2}\) & \(48^{*}\) & \(\times^{*}\) \\ \hline
\(4\pi j\) & 4 & 4 \\ \hline
\end{tabular}
\end{table}
Table 1: Recurrence periods for different \(\kappa\) values. Here \(\times\) signifies the non-existence of periodicity and (*) represents results from numerical simulation. The numbers are specific to \(p=\frac{\pi}{2}\) with the exception of the integer-spin period-2 orbit for \(\kappa=2\pi j\), which is independent of \(p\).
The periodicities examined here do not satisfy the \(p\sim 1/j\) scaling (35) and therefore are not to be seen as "pre-contracted" phenomena, at least not in a strict sense. It is thus interesting that despite only having a partial relationship to the resonance behaviour found in the kicked rotor we still observe non-standard dynamics in the kicked top at these special chaoticity values.
The lone case of \(\kappa=2\pi j\) (i.e. \(r/s=1/2\)) for integer spin discussed in sec. IV.1 actually can be seen as being "pre-contracted". This is because the period-2 orbit does not depend on the rotation angle \(p\), and so without loss of generality we may set it to scale as \(p\sim 1/j\). Thus the peculiar behaviour of quantum anti-resonance found in the rotor may be seen as originating in the quantum kicked top. Previous works focusing on quantum correlations [20] or the pseudo-classical framework [17] do not fully capture this specialized anti-resonance effect: both approaches depend on the rotation angle \(p\) and both predict a higher than necessary orbit period [26].
## V Conclusion
Previous studies comparing classical and quantum dynamics in the kicked top largely validate the correspondence principle in the semiclassical regime [11; 27]. Other works have characterized if and when the correspondence principle may be applied in the deep quantum regime [9; 10; 20; 28; 14]. Here we have demonstrated a general violation of the correspondence principle by finding various sets of state-independent, finite-time periodicities that have no classical analog and exist for all spins (i.e. in both the deep quantum and semiclassical regimes). Some of these recurrences had been identified earlier for specific spins [14; 16] or in a semiclassical context [17], but here we have generalized these results. We have analytically shown the existence of several sets of recurrences and numerically introduced others. A preliminary search for additional "simple" periodicities indicates that if they exist, the recurrence time must be relatively large.
Our analysis resolves a confusion over the general relationship between the rationality of the chaoticity parameter \(\kappa\) and the existence of a recurrence in the quantum kicked top. Ref. [14] argued that whenever this value is a rational multiple of \(\pi\) the evolution will be periodic, in the sense that any initial state only explores a finite subset of Hilbert space. Ref. [20], on the other hand, maintained that this is only true for spin-1 systems; i.e. higher-dimensional kicked tops do not experience such finite-orbit periodicity regardless of the chaoticity value. Here we have demonstrated that the answer lies somewhere in between: while recurrences do exist in all dimensions, and these recurrences do arise from rational \(\kappa\) values, not all rational \(\kappa\) values yield a recurrence.
We further established a relationship to the quantum resonance phenomenon of the quantum kicked rotor [22], and showed that the peculiar anti-resonance effect (i.e. \(U^{2}=I\)) in the kicked rotor may have its origins in the quantum kicked top. Given the link to the kicked rotor and the simple, general criterion for periodicity, it would seem reasonable to expect such non-classical resonances to occur in other periodic or kicked systems as well. Future work in this direction would help shed light on the complicated route to correspondence in chaotic systems.
## Acknowledgements
This work was supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC). We acknowledge fruitful discussions with Alan Jamison and Shlok Nahar. Wilfrid Laurier University and the University of Waterloo are located in the traditional territory of the Neutral, Anishnawbe and Haudenosaunee peoples. We thank them for allowing us to conduct this research on their land.
|
2305.17624 | SimpSON: Simplifying Photo Cleanup with Single-Click Distracting Object
Segmentation Network | In photo editing, it is common practice to remove visual distractions to
improve the overall image quality and highlight the primary subject. However,
manually selecting and removing these small and dense distracting regions can
be a laborious and time-consuming task. In this paper, we propose an
interactive distractor selection method that is optimized to achieve the task
with just a single click. Our method surpasses the precision and recall
achieved by the traditional method of running panoptic segmentation and then
selecting the segments containing the clicks. We also showcase how a
transformer-based module can be used to identify more distracting regions
similar to the user's click position. Our experiments demonstrate that the
model can effectively and accurately segment unknown distracting objects
interactively and in groups. By significantly simplifying the photo cleaning
and retouching process, our proposed model provides inspiration for exploring
rare object segmentation and group selection with a single click. | Chuong Huynh, Yuqian Zhou, Zhe Lin, Connelly Barnes, Eli Shechtman, Sohrab Amirghodsi, Abhinav Shrivastava | 2023-05-28T04:05:24Z | http://arxiv.org/abs/2305.17624v1 | # SimpSON: Simplifying Photo Cleanup
###### Abstract
In photo editing, it is common practice to remove visual distractions to improve the overall image quality and highlight the primary subject. However, manually selecting and removing these small and dense distracting regions can be a laborious and time-consuming task. In this paper, we propose an interactive distractor selection method that is optimized to achieve the task with just a single click. Our method surpasses the precision and recall achieved by the traditional method of running panoptic segmentation and then selecting the segments containing the clicks. We also showcase how a transformer-based module can be used to identify more distracting regions similar to the user's click position. Our experiments demonstrate that the model can effectively and accurately segment unknown distracting objects interactively and in groups. By significantly simplifying the photo cleaning and retouching process, our proposed model provides inspiration for exploring rare object segmentation and group selection with a single click. More information can be found at [https://github.com/hmchuong/SimpSON](https://github.com/hmchuong/SimpSON).
## 1 Introduction
Both professional photographers and casual users often require efficient photo retouching to enhance the quality of their images. One essential aspect of this task is the removal of visual distractions from photos [7]. These distractions can take various forms, such as unexpected pedestrians, objects that are cropped out of the photo's edge, dirty spots on the ground, repeated outlets on a wall, or even colorful and blurry lens flare. These distractions can be challenging to categorize due to their diverse appearance. As a result, users tend to select and mask them entirely and use photo editing software such as Photoshop to remove them.
Segmentation is necessary for photo cleaning tasks because rough masks may not be suitable for all scenarios. Accurate masks are required in situations where distractors are touching the main foreground subjects or where distractors are small but dense in the image. User-drawn rough masks can result in the deletion of too much background texture when connected. In other cases, users may have a
mask that covers the entire object while altering as little of the background as possible. In all scenarios, our findings suggest that, for inpainting, a small dilation of a highly accurate mask produces better background preservation and fewer leftover distractor pixels. This finding is consistent with most existing inpainting models.
The process of manually masking distracting elements in a photo can be a tedious and time-consuming task. Users often seek an automated tool that can efficiently select and segment all distractors. One approach is to train an instance segmentation model like Mask-RCNN [11] to detect and segment distractors in a supervised manner. However, identifying distractors can be subjective, and collecting datasets requires scientific validation of the distractor annotations to ensure that most users agree. For instance, Fried _et al._[7] invited 35 users to mark distractors on a single image and received varying feedback. Even with a model that detects distractors, it may not always satisfy users' preferences. Therefore, tasks like these should rely heavily on user interaction, such as allowing users to click and decide where to retouch photos based on their own preferences.
Our goal is to propose a single-click distractor segmentation model. With the rapid development of panoptic segmentation technologies like PanopticFCN [20] and Mask2Former [5], can we utilize state-of-the-art models to retrieve distractor masks by clicking on the panoptic segmentation results? Unfortunately, most distractors belong to unknown categories, and some are tiny, making them difficult to segment using models [2, 13] trained on datasets such as COCO [21], ADE20K [31], or Cityscapes [6] with a closed set of categories. Qi _et al._ proposed entity segmentation [24] to train panoptic segmentation in a class-agnostic manner to address the long-tail problem, but even it is not guaranteed to separate all regions in a photo.
What if we use clicks as the input guidance for segmentation? Interactive segmentation models are closely related to our task, and recent works like FocalClick [4] and RiTM [26] have achieved practical and high-precision segmentation performance. However, interactive segmentation aims to use multiple clicks, including positive and negative ones, to segment larger foreground objects accurately, especially the boundary regions. In our task, we focus more on medium to small distracting objects and only require a single positive click to select semi-precise masks for inpainting purposes. The difference in our goal makes it challenging to follow the problem definition of interactive segmentation. Additionally, previous interactive segmentation models cannot select objects in groups, whereas most of our distractors are repeated, dense, and evenly distributed across photos.
This paper addresses the two challenges of accurate one-click universal class-agnostic segmentation and efficient similarity selection. Our proposed method can significantly reduce the photo retouching process from hours (e.g., 100+ clicks) to minutes (e.g., 1-2 clicks) when removing dense and tiny distractors. Firstly, we optimize the click-based segmentation model to accurately segment distractor-like objects with a single click. This is achieved by utilizing the entity segmentation [24] method to discard category labels and using single-click embedding to guide the segmentation of a single object. Secondly, we design a transformer-based Click Proposal Network (CPN) that mines similar distractor-like objects within the same image and regress click positions for them. Lastly, we rerun the single-click segmentation module using the proposed clicks to generate the mask and verify the similarity among the selected objects via the Proposal Verification Module (PVM). We also run the process iteratively to ensure that more similar objects are fully selected. In summary, our contributions consist of three main aspects:
* We introduce a novel one-click Distractor Segmentation Network (1C-DSN) that utilizes a single-click-based approach to segment medium to small distracting objects with high accuracy. Unlike other interactive segmentation methods, our model targets the segmentation of distracting objects with just one positive click. Our model is capable of generalizing well to objects of any rare categories present in the photos.
* We propose a Click Proposal Network (CPN) that mines all similar objects to the user's single click. The proposed clicks are then reused in the segmentation model, and their similarity is verified using the Proposal Verification Module (PVM). This allows for the group selection of distracting objects with one click.
* We further explore running the selection process iteratively to fully select similar distractors with slightly diverse appearances. Our proposed distractor selection pipeline, which we call 'SimpSON,' significantly simplifies the photo retouching process. By using SimpSON, users can remove distracting objects in their photos quickly and easily with just a few clicks.
## 2 Related works
Visual Distraction in PhotographyVisual distracting elements in photos are elements that attract users' attention but are not the primary subject of the photo. However, according to [7], the saliency map [14, 16, 17, 18, 19] may not be highly correlated with visual distractors because the main subject usually has the peak in the attention map. Although efforts have been made to detect and retouch scratches [28], noise, and dirty dots in photos, and automatic and interactive face retouching [29] has already been widely deployed in commercial products, only a few research works [1] have targeted automatic general distractor detection and editing
due to the high variance of distractor categories and appearances. In this work, our aim is to develop an interactive distractor selection and masking method in photos, along with automatic grouping and selection of all similar distractors.
Interactive SegmentationInteractive segmentation involves allowing users to provide a small amount of interaction to complete the target segmentation. Xu _et al._[30] proposed the first deep learning-based segmentation and introduced positive and negative clicks as inputs. BRS [15], and f-BRS [25] introduced online learning to optimize the segmentation results, while FCA-Net [23] by Lin _et al._ focuses more on the initial click and uses feature attention to improve the segmentation results. RiTM [26] generates the following segmentation by fully utilizing the masking results from previous iterations, while CDNet [3] presented how to use self-attention to propagate information among positive and negative clicks. FocalClick [4] revisited a series of interactive segmentation techniques and proposed to use local inference for a more efficient and deployment-friendly network. In this paper, we draw from the experience of interactive segmentation to use clicks as user inputs. However, due to the nature of distractor removal tasks in photo retouching and cleaning use cases, users prefer to use an undo operation if the model over-predicts the mask, instead of switching between positive and negative clicks. Additionally, distractors are usually smaller than foreground objects, so we redefined our task with only positive clicks and optimized the model with fewer positive clicks. Furthermore, previous works did not allow users to make group selections via self-similarity mining, while it is a highly demanded user need for distractor removal, which we address in our proposed method.
## 3 Methodology: SimpSON
Figure 2 shows the overall pipeline of the proposed SimpSON pipeline. It consists of a feature extraction backbone, a single-click Distractor Segmentation Network (1C-DSN), a similarity Click Proposal Network (CPN) designed for mining all the similar clicks, and a Proposal Verification Module (PVM) to check the similarity of the proposed click positions. The process can be run iteratively.
### One-click Distractor Segmentation Network (1C-DSN)
Motivation.When it comes to visual distractors in users' photos, they often come in all shapes and sizes with different appearances. We don't always know what these objects are, or how big or small they might be. To tackle this challenge, we need an interactive segmentation model that is highly adaptive, especially when dealing with unfamiliar classes or small and medium-sized objects. It should be able to respond to clicks at any position, even if they fall on rare or unexpected objects, like cigarette butts, puddles, or bird droppings on the ground. To achieve this, we need to ensure that our model is optimized for high recall, so that users can remove unwanted objects with just one click.
Difference with Previous Interactive Segmentation.When designing our pipeline, we imagined that users might wish to remove many distracting elements. For that scenario, we found it more intuitive and efficient to use only positive clicks in an iterative removal workflow, which could be particularly suited for mobile apps. As discussed in section 2, recent interactive segmentation works are designed for precise object segmentation with multiple positive and negative clicks. We found that state-of-the-art tools like [4, 26] are not friendly to small and medium object segmentation with only a few positive clicks. For distractor selection, however, many small objects should be selectable with a single click, and larger or medium distractors should be quickly selectable with few positive clicks. So the major difference between our segmentation model and previous works is that we do not use negative clicks and fully optimize our model for fewer positive clicks.
Network Structure.Figure 2 shows the network structure of the single-click distractor segmentation network. Given an image \(I\in\mathbb{R}^{H\times W\times 3}\), the feature extractor network provides a pyramid feature map: \(\mathcal{F}=\{X_{1},...,X_{N}\}\) with \(X_{i}\in\mathbb{R}^{h^{i}\times w^{i}\times d}\) and \(H>h^{1}>...>h^{N},W>w^{1}>...>w^{N}\). For each feature level, we pair it with a binary click map \(I_{i}^{c}\in\{0,1\}^{h^{i}\times w^{i}}\), where \(I_{i}^{c}(x,y)=1\) indicates a click at spatial location \((x,y)\). The click-embedded feature map \(X_{i}^{\prime}\in\mathbb{R}^{h^{i}\times w^{i}\times(d+c)}\) is then computed as \(X_{i}^{\prime}=X_{i}\oplus conv_{i}(I_{i}^{c})\), where \(\oplus\) indicates concatenation along the feature dimension and \(conv_{i}\) is a mapping function that projects \(I_{i}^{c}\) to \(\mathbb{R}^{h^{i}\times w^{i}\times c}\).
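A minimal sketch of this click-embedding step is given below, assuming a PyTorch implementation with a \(1\times 1\) convolution as the projection \(conv_{i}\) and \(c=32\) channels (the value used in our experiments); the class and variable names are ours, not the released code.

```python
import torch
import torch.nn as nn

class ClickEmbed(nn.Module):
    """Concatenate a projected binary click map onto one pyramid level X_i."""

    def __init__(self, c: int = 32):
        super().__init__()
        self.proj = nn.Conv2d(1, c, kernel_size=1)   # conv_i: 1 -> c channels

    def forward(self, feat: torch.Tensor, clicks: torch.Tensor) -> torch.Tensor:
        # feat:   (B, d, h_i, w_i) pyramid feature X_i
        # clicks: (B, 1, h_i, w_i) binary click map I_i^c at the same resolution
        return torch.cat([feat, self.proj(clicks)], dim=1)   # (B, d+c, h_i, w_i)
```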
After obtaining the groups of click-embedded feature map \(X_{i}^{\prime}\), we feed them to the detection head and segmentation head. We modify the bounding box filtering strategy by considering only keeping the boxes that overlap with the click positions. In this paper, we follow Entity Segmentation [24] to design the detection and segmentation heads. The segmentation module finally outputs multiple binary segmentation masks \(M_{j}\in\{0,1\}^{H\times W}\) corresponding to the user click positions. The 1C-DSN is trained with similar loss functions as in Entity Segmentation, which combines detection loss from FCOS [27] and the DICE loss from Entity Segmentation [24]. The design of the detection and segmentation parts can be replaced with any two-stage segmentation frameworks [11].
### Click Proposal Network (CPN)
In situations where there is only one instance of a distractor, the 1C-DSN model can be sufficient for accurately segmenting it out. However, in many cases, we may come across multiple instances of distractors that share similar categories and appearances. In such scenarios, users would prefer to be able to select all of these instances with just a single click. To address this, we have designed a self-similarity mining module that can effectively identify all the distractors that are similar to the user's click, thus enabling them to remove them all in one go.
We propose this Click Proposal Network (CPN) to mine similar regions using cross-scale feature matching and to regress click positions from the high-confidence regions. We can then feed those click coordinates back to our 1C-DSN to obtain the masks of all the similar distractors. The design of the Click Proposal Network (CPN) is shown in Figure 2. The input to the CPN is a single query mask predicted by the 1C-DSN from the user's single click. We utilize three levels of feature maps with spatial resolutions of \(\frac{1}{4}\), \(\frac{1}{8}\), and \(\frac{1}{16}\) of the input image size. For the given query mask region, we apply ROI-Align [11] to extract features from the three levels of maps, resize them to \(k\times k\times d\), where \(k=3\) is a hyper-parameter for the query size and \(d\) is the feature dimension, and then apply the binary query mask to zero out non-mask feature regions. We thus obtain \(3\times k^{2}\) feature vectors for similarity comparison with the original feature maps. We feed the query vectors into a cascade of transformer decoder layers L1, L2, and L3, where each layer takes its keys and values from a different level of the feature pyramid. Finally, we convolve the aggregated feature vector with the largest feature map to obtain the predicted click-position heatmap.
During training, we follow CenterNet [32] to generate the ground truth heatmap using Gaussian filtering of the click map. The kernel size of the gaussian filter is set to the minimum value of the height and width of each mask. The module is then trained using a penalty-reduced pixel-wise logistic regression with focal loss as in CenterNet. During inference, we apply the Non-Maximum Suppression (NMS) to the heatmap to keep only the maximum value within a \(s\times s\) window and choose all the clicks having confidence larger than \(\tau_{c}\). Empirically, we set \(s=32\) and \(\tau_{c}=0.2\).
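The inference-time peak extraction can be written compactly with max-pooling, a common CenterNet-style trick; the sketch below is our rendering of the \(s\times s\) NMS and thresholding described above, with \(s=32\) and \(\tau_{c}=0.2\).

```python
import torch
import torch.nn.functional as F

def heatmap_to_clicks(heat: torch.Tensor, s: int = 32, tau: float = 0.2):
    """Keep only heatmap values that are maximal within an s x s window
    (NMS via max-pooling), then threshold at tau.

    heat: (1, 1, H, W) click-probability heatmap.
    Returns a list of (x, y, confidence) click proposals.
    """
    H, W = heat.shape[-2:]
    pooled = F.max_pool2d(heat, kernel_size=s, stride=1, padding=s // 2)
    pooled = pooled[..., :H, :W]                    # trim padding overshoot
    peaks = (heat == pooled) & (heat >= tau)
    ys, xs = torch.nonzero(peaks[0, 0], as_tuple=True)
    conf = heat[0, 0, ys, xs]
    return [(int(x), int(y), float(c)) for x, y, c in zip(xs, ys, conf)]
```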
### Proposal Verification Module (PVM)
To avoid false positive proposals in the heatmap and click map, we propose using a Proposal Verification Module (PVM) to ensure that the selected click positions are highly similar to the user's clicks. This module performs pairwise comparisons between the generated masks and the initial click, and removes any click proposals that generate a mask that is significantly different from the initial query mask using a threshold.
Specifically, we first feed all the click proposals into the 1C-DSN to generate separate instance masks for each click position. We refer to the mask of the initial user click as the target mask and all the other proposed masks as source masks. Figure 3 shows the module structure of PVM and the process of comparing two distractors. Given the original image \(I\), the features \(X_{1}\), which is \(\frac{1}{4}\) of the spatial image
Figure 2: The overview of SimpSON framework with 1C-DSN, CPN and PVM modules. It consists of a feature extraction backbone, a single-click Distractor Segmentation Network (1C-DSN), a similarity Click Proposal Network (CPN) designed for mining all the similar clicks, and a Proposal Verification Module (PVM) to check the similarity of the proposed click positions. The process of finding similar distractors can be run iteratively to fully generate the masks.
resolution, extracted from the pre-trained feature backbone in the 1C-DSN, and the segmentation mask \(M\), we extract the region of interests from them. To preserve the aspect ratio of the objects, we extend the bounding box to square and use ROI-Align [11] to extract pixels or features. In this paper, we resize the cropped image patch to \(224\times 224\) and feed it into a lightweight feature extractor, ResNet18 [12]. We then concatenate the image features (from \(I\)), backbone features (from \(X_{1}\)), and resized masks (from \(M\)) together and feed them into neural layers to obtain the 1D feature embeddings, \(z_{t}\) for the target and \(z_{s}\) for the source. Notice that we also add the scaling factor \(\frac{w_{b}}{224}\) to guide the embedding learning, where \(w_{b}\) is the bounding box size. The Euclidean distance between \(z_{s}\) and \(z_{t}\) is input to the next fully-connected layer with a sigmoid activation to output the similarity score from 0 to 1.
In training, we randomly sample pairs from the same image. A pair is considered positive if it is drawn from the same copy; otherwise, it will be a negative pair. Besides the binary cross entropy \(\mathcal{L}_{BCE}\) is computed on the last output with the pair labels, the max-margin contrastive loss [10]\(\mathcal{L}_{con}\) is integrated on feature embedding \(z_{t},z_{s}\) to make the model learning features better. The final training loss is a linear combination \(\mathcal{L}=\mathcal{L}_{con}+\mathcal{L}_{BCE}\). In testing, the PVM classifies each mask proposal with its exemplar by thresholding the similarity score. In our experiments, we choose 0.5 as the threshold.
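A sketch of this training objective is shown below, assuming a unit margin for the contrastive term (the exact margin is not stated above) and taking the sigmoid similarity output as the BCE input; the function and argument names are ours.

```python
import torch
import torch.nn.functional as F

def pvm_loss(z_t, z_s, score, label, margin: float = 1.0):
    """L = L_con + L_BCE on a batch of (target, source) pairs.

    z_t, z_s : (B, D) embeddings of target and source distractors.
    score    : (B,) sigmoid similarity output in [0, 1].
    label    : (B,) 1 if the pair comes from the same copy, else 0.
    """
    label = label.float()
    d = F.pairwise_distance(z_t, z_s)                # Euclidean distance
    # max-margin contrastive loss: attract positives, repel negatives
    l_con = (label * d.pow(2)
             + (1 - label) * F.relu(margin - d).pow(2)).mean()
    l_bce = F.binary_cross_entropy(score, label)
    return l_con + l_bce
```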
### Iterative Distractor Selection (IDS)
We further run an iterative process to sample more similar distractors to ensure that we entirely select all the distractors similar to the initial click. The details pseudo-code is shown in Algorithm 1. We update the \(M_{e}\) with the correct masks for each iteration and progressively add high-confidence clicks to the result. By updating \(M_{e}\), we can avoid incorrect similarity findings caused by the incomplete initial exemplar mask. Picking top-\(k\) clicks and PVM module is essential in reducing false positive rates of CPN. In our experiments, we choose a kernel size of 5 for NMS, \(N=5\), \(k=10\), and \(m=3\).
```
Data: \(M_{init}\) (Initial Mask), \(M_{e}\) (Exemplar Set), \(M_{acc}\) (Accepted Masks), \(C_{acc}\) (Accepted Clicks), \(N\) (maximum iterations)
Result: \(M_{acc}\), \(C_{acc}\)
\(itr\gets 0\); \(M_{e}\gets M_{init}\); \(M_{acc}\leftarrow\{M_{init}\}\); \(C_{acc}\leftarrow\emptyset\);
while \(itr\leq N\) do
    Generate heatmap using \(M_{e}\) in CPN;
    Apply NMS to obtain clicks \(C_{new}\);
    Remove clicks from \(C_{new}\) if within \(M_{acc}\);
    \(C^{\prime}_{new}\leftarrow\) top-\(k\) clicks with confidence \(\geq 0.2\);
    \(C_{acc}\gets C_{acc}+C^{\prime}_{new}\);
    Pass \(C_{acc}\) to 1C-DSN and run PVM for \(M_{new}\);
    \(M_{acc}\gets M_{new}\);
    \(M_{e}\leftarrow\) top-\(m\) confident masks;
    \(itr\gets itr+1\);
end while
```
**Algorithm 1**IDS: Iterative Distractor Selection
## 4 Dataset Preparation
Public DatasetsWe conducted single-click segmentation experiments on the public COCO Panoptic and LVIS datasets. We pre-trained our model on the COCO Panoptic dataset, which contains 118,287 images, and fine-tuned it on the LVIS dataset, which contains 1,270,141 objects across 99,388 images. Since there is some overlap between the LVIS validation set and the COCO train set, we only used 4,809 images with 50,672 instances from the original LVIS validation set for our evaluation.
Self-Collected Distractor DatasetsTo gain a better understanding of the distractors in users' photos and improve the quality of our masking, we curated and annotated a dataset of distractor images. We began by creating a list of common distractors found in photos, such as distracting people, shadows, lens flare, cigarette butts on the floor, construction cones, and so on. We then collected images from various public image websites, including but not limited to Flickr, Unsplash, and Pixabay, among others. To annotate our dataset of distractor images, we recruited three professional photographers to manually select and mask the distracting regions in each image that affect its overall aesthetic appeal. We found that having three
Figure 3: Proposal Verification Module (PVM). Given the original image \(I\), the features \(X_{1}\), and the segmentation mask \(M\), we extract the region of interests from them. We then concatenate and feed the features from \(I\), \(X_{1}\) and \(M\) to obtain the 1D feature embedding, \(z_{t}\) for the target and \(z_{s}\) for the source. The Euclidean distance between them is fed to the fully-connected layer with a sigmoid activation to output the similarity score from 0 to 1.
annotators was sufficient to label all the distractors in a given photo. In total, our dataset contains 21,821 images, of which we used 20,790 images containing 178,815 distracting instances for training, and 1,031 images containing 8,956 instances for validation and evaluation. We have named our distractor dataset "Distractor20K" and the evaluation dataset "DistractorReal-Val" in this paper.
Data Synthesis for Similar Distractors MiningDuring the process of collecting our dataset, we observed that it is quite common for small, similar distractors (like bird droppings on the ground) to coexist in a single photo. However, our annotators may not be able to completely mask them. To our knowledge, there is no public dataset that includes annotations for these repeated distractors that we could use to train and evaluate our CPN model. Therefore, we propose a procedure to synthesize and generate similar distractors. This approach is inspired by [8], which demonstrated that copy-pasting can be an effective data augmentation technique for instance segmentation tasks.
To synthesize additional distractor data for our "Distractor20K" dataset, we utilized instances from the LVIS dataset and adopted the Mask2Former [5] approach to obtain semantic segmentation masks of the images. We only synthesized distractors within the same semantic regions, including ground, ceiling, wall, sky, sea, and river, as candidate regions. We first chose to copy objects that were either existing annotated distractors within those candidate regions or from the LVIS dataset. The LVIS examples were added to ensure a minimum of three objects to copy for each region, and the ratio between the objects and semantic regions determined the number of copies with a maximum of 10. We then iteratively placed the object at the maximum position in the distance map of the semantic region and recomputed the distance map after each iteration. In total, we obtained "DistractorSyn14K" with 14,264 images and 287,150 instances, which were used to train the CPN and PVM modules. We also created an evaluation dataset of 531 images, which we named "DistractorSyn-Val," containing 1,188 images with 10,980 instances.
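The placement rule can be sketched as follows; the exclusion radius used to carve out space after each placement is our assumption, and object blending and the choice of the number of copies are handled outside this snippet.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def place_copies(region_mask: np.ndarray, n_copies: int):
    """Pick paste centers inside one semantic region: each center sits at
    the current maximum of the distance map, which is recomputed after
    every placement."""
    free = region_mask.astype(bool).copy()
    h, w = free.shape
    yy, xx = np.ogrid[:h, :w]
    centers = []
    for _ in range(n_copies):
        dist = distance_transform_edt(free)
        y, x = np.unravel_index(int(np.argmax(dist)), dist.shape)
        if dist[y, x] == 0:                  # region exhausted
            break
        centers.append((int(x), int(y)))
        r = max(1, int(dist[y, x] / 2))      # assumed exclusion radius
        free &= (yy - y) ** 2 + (xx - x) ** 2 > r ** 2
    return centers
```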
## 5 Experiments
### Implementation details
1C-DSN TrainingOur 1C-DSN follows the structure of Entity Segmentation [24]. Entity Segmentation followed FCOS [27] to utilize P3 to P7 in the feature pyramid for detection and kernel prediction and used P2 for masking. Here \(P_{i}\) denotes the features having \(\frac{1}{2^{i}}\) of the spatial resolution of the input image. In our work, we intended to detect and find more medium and small distractors, so we utilized P2 to P5 features for both detection and segmentation. As described in section 3.1, we concatenate additional channels from the click map to the pyramid feature, and the channel number is 32. During training, we initialized the model from Entity Segmentation pre-trained on the COCO Panoptic dataset [22], and fine-tuned it on the LVIS dataset [9] for 4 epochs. For better masking quality on distractor-like objects, after obtaining the model trained on LVIS, we also fine-tuned it on our Distractor20K dataset for 12 epochs. Our model was evaluated on both the LVIS validation set and the DistractorReal-Val dataset.
We randomly selected at most 50% of the instances during training to reduce the false positive rate and make the prediction results better correlated with the input click positions. For each instance, we randomly sampled 1-5 clicks by computing the distance transform and randomly putting the click around the center of the objects.
CPN and PVM TrainingThe CPN and PVM were trained on our synthetic distractor dataset containing many groups of similar distractors within one single image. To preserve the masking quality and avoid it from being affected by the fake masks and learning from composition artifacts, we freeze the 1C-DSN network and the backbones and reuse the learned feature pyramid. In CPN, we reused the features P2 to P4. In PVM, we only used P2 for feature extraction. When training the CPN, we randomly picked the target click, and the ground truth will be the groups of instances similar to the target object. While training the PVM, we randomly selected pairs of instances within the same image and assigned the labels according to their group identity. We constantly utilized 1C-DSN to generate masks for CPN and PVM during training. Both modules are trained in 14 epochs with an initial learning rate of \(0.0001\) for CPN and \(0.005\) for PVM, decreasing ten times at epochs 11 and 13. They are also trained with 8 A100 GPUs with batch size 16.
### Evaluation on 1C-DSN
Click EmbeddingTo evaluate the importance of click embedding for improving the performance, especially the recall rate of the model, we compared it with a baseline that was trained without click embedding as the input. We use the same click positions when comparing them. But for the
Figure 4: Precision-Recall (PR) curve on the validation dataset comparing the baseline and our proposed single-click based segmentation.
baseline, we used the click positions to extract the masks which have an overlap with the clicks for evaluation. Figure 4 shows the Precision-Recall (PR) curve, which demonstrates that click inputs drive the segmentation process to focus on the users' click positions and improve the overall precision and recall for all the feature extraction backbones. Table 1 and 2 show the Average Precision (AP) while testing on the LVIS validation dataset and our DistractorReal-Val. We split our instances into small (\(\leq 32\times 32\)), medium (\(32\times 32\) to \(96\times 96\)), and large (\(\geq 96\times 96\)) and evaluated them separately. The gains of the Average Precision (AP) show the evidence that click embedding helps improve the segmentation performance.
when running the selection. Recall that after we apply CPN to propose clicks and feed those clicks to 1C-DSN for masking, we can use PVM to reject false positives, which may slightly decrease the overall recall rate. At the same time, the iterative process (IDS) generates more click proposals in the photos to boost the recall rate. Combining the two strategies (IDS and PVM) therefore yields the best overall performance on our synthetic validation set. Figure 6 shows some examples when testing the model on both real and synthetic data. Compared with other off-the-shelf segmentation models, our single-click based model has a higher response to tiny distractors and supports interactive group selection. Our 1C-DSN is trained on a real distractor dataset, while the group selection pipeline is trained on a synthetic dataset. We find that our model generalizes well to finding similar objects in real images, as shown in Figures 1 and 6.
### More Ablation Studies
**Ablations on CPN and PVM Module.** We conducted ablation studies on the design of Click Proposal Network (CPN) in Table 4. We found that zeroing out irrelevant feature patches using masking was necessary to avoid a high false positive rate. If we enlarged the query patch size, the query vector would become more localized, so it yielded a higher false positive rate. The order of the feature map inputting to different layers of the transformer decoder was also important since starting the matching from the largest feature map would possibly lead to better feature aggregation. Several design details of the Proposal Verification Module (PVM) have been compared in Table 5. Our ablation experiments demonstrate that all three designs contribute to improving precision and recall.
## 6 Conclusions
We presented SimpSON, an interactive selection network for distractor segmentation in photos. Distractors are often small, repeated, clustered, and belong to unknown categories. To address this challenge, we optimized a single-click-based segmentation network and mined all the distractors similar to the click using Click Proposal Network (CPN) for group selection. We found that applying the CPN iteratively and using an additional Proposal Verification Module (PVM) made the selection more robust by avoiding false positives. Our experiments demonstrated that active click-guided segmentation yields better precision-recall than passive retrieval of masks from a pre-computed segmentation map. We believe that our pipeline will simplify the process of photo retouching and inspire new directions for interactive segmentation research.
\begin{table}
\begin{tabular}{l c c c} \hline \hline
\(L1\to L2\to L3\) & Mask & Query size & AUC-PR (\%) \\ \hline
\(1/4\to 1/8\to 1/16\) & ✓ & \(3\times 3\) & **40.43** \\
\(1/4\to 1/8\to 1/16\) & & \(3\times 3\) & 35.08 \\
\(1/16\to 1/8\to 1/4\) & ✓ & \(3\times 3\) & 37.00 \\
\(1/4\to 1/8\to 1/16\) & ✓ & \(5\times 5\) & 36.62 \\
\(1/4\to 1/8\to 1/16\) & ✓ & \(7\times 7\) & 34.14 \\ \hline \hline
\end{tabular}
\end{table}
Table 4: Ablation study on Click Proposal Network (CPN) on DistractorSyn-Val.
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \hline
Scale & Square & Mask & AP & AP\({}_{s}\) & AP\({}_{m}\) & AP\({}_{l}\) & AR & AR\({}_{s}\) & AR\({}_{m}\) & AR\({}_{l}\) \\ \hline
 & ✓ & ✓ & 42.2 & 35.2 & 43.2 & 43.9 & 48.5 & 44.2 & 49.7 & 53.8 \\
✓ & ✓ & ✓ & 42.3 & 34.4 & 43.3 & 44.1 & 48.7 & 44.1 & 50.2 & 54.1 \\
✓ & ✓ & & 42.0 & 33.5 & 43.1 & **44.8** & 43.7 & 43.7 & 49.4 & 53.6 \\
✓ & ✓ & ✓ & **42.4** & **35.6** & **43.4** & 44.2 & **49.7** & **44.5** & **50.5** & **54.2** \\ \hline \hline
\end{tabular}
\end{table}
Table 5: The performance of PVM with different input information on DistractorSyn-Val.
Figure 6: Distractor selection comparison using different off-the-shelf segmentation models on our real user images (upper row) and synthetic data (bottom row). Models trained for panoptic segmentation tasks like Mask2Former and EntitySeg cannot focus well on small and tiny objects. Interactive segmentation works rely on negative clicks to shrink the selected regions, and they cannot perform click-one-select-all behavior. Our SimpSON works well for small and tiny distractors, and can select similar things in a group. |
2303.02819 | Artificial Intelligence: 70 Years Down the Road | Artificial intelligence (AI) has a history of nearly a century from its
inception to the present day. We have summarized the development trends and
discovered universal rules, including both success and failure. We have
analyzed the reasons from both technical and philosophical perspectives to help
understand the reasons behind the past failures and current successes of AI,
and to provide a basis for thinking and exploring future development.
Specifically, we have found that the development of AI in different fields,
including computer vision, natural language processing, and machine learning,
follows a pattern from rules to statistics to data-driven methods. In the face
of past failures and current successes, we need to think systematically about
the reasons behind them. Given the unity of AI between natural and social
sciences, it is necessary to incorporate philosophical thinking to understand
and solve AI problems, and we believe that starting from the dialectical method
of Marx is a feasible path. We have concluded that the sustainable development
direction of AI should be human-machine collaboration and a technology path
centered on computing power. Finally, we have summarized the impact of AI on
society from this trend. | Lin Zhang | 2023-03-06T01:19:25Z | http://arxiv.org/abs/2303.02819v1 | # Artificial Intelligence: 70 Years Down the Road
###### Abstract
Artificial intelligence (AI) has a history of nearly a century from its inception to the present day. We have summarized the development trends and discovered universal rules, including both success and failure. We have analyzed the reasons from both technical and philosophical perspectives to help understand the reasons behind the past failures and current successes of AI, and to provide a basis for thinking and exploring future development. Specifically, we have found that the development of AI in different fields, including computer vision, natural language processing, and machine learning, follows a pattern from rules to statistics to data-driven methods. In the face of past failures and current successes, we need to think systematically about the reasons behind them. Given the unity of AI between natural and social sciences, it is necessary to incorporate philosophical thinking to understand and solve AI problems, and we believe that starting from the dialectical method of Marx is a feasible path. We have concluded that the sustainable development direction of AI should be human-machine collaboration and a technology path centered on computing power. Finally, we have summarized the impact of AI on society from this trend.
## 1 Introduction
Since the concept of AI was introduced in the 1950s [14], researchers have been striving to make machines intelligent, leading to important sub-fields such as computer vision, natural language processing, and machine learning. Despite ups and downs, AI technology has been rapidly advancing, especially in the last two decades with the success of deep learning. However, the pursuit of a comprehensive understanding of AI mechanisms and how to accelerate its evolution continues. With more data, computing power, and minds than ever before, it is time to review the development of AI, as the reasons behind the ups and downs of its history are worth exploring. Otherwise, AI is likely to repeat the tragedies of the past.
Here, we reflect on the development of AI over the years and how past developments may extend into the future. We first analyze the technological changes in the core areas of AI over the past century. Although the research content and methods of the sub-fields differ greatly, their development logic is remarkably similar and can be summarized as the path from rules to statistics and then to data-driven approaches. From a technical perspective, the dilemma of AI development is fundamentally due to the lack of mathematical tools. To further explore the reasons for this development pattern, we analyze it from a philosophical perspective. Specifically, we examine the problems of AI through dialectics and pragmatism, and argue that AI needs to be understood and addressed in terms of contradiction and practice, grounded in Marxist dialectics. On this basis, we hold that human-machine interaction is the inevitable path of AI development, and that AI technology should revolve around the development of computing power. Finally, we outline the changes that AI brings to society.
## 2 A Brief History of AI Technology
AI research covers a wide range of fields, and we have chosen the most representative research areas, including computer vision, natural language processing, and machine learning, to understand their development history. We will find that from a technical perspective, the dilemma of AI is fundamentally due to the lack of mathematical ability, and we are still unable to effectively model the complex real world. Ultimately, we rely on empirical methods to compensate for the shortcomings of mathematics.
### Computer Vision
Computer vision has a history of more than 60 years since its establishment in the 1960s [20], and it is a major branch of artificial intelligence. Its core task is to understand the content of images. From the 1960s to the 2000s, the development of computer vision can largely be attributed to one core idea: structured combination. That is, researchers of the period believed that the objects in images were composed of basic structural units combined in certain ways; representative figures include David Marr [14] and King-Sun Fu. Specifically, Marr started from neuroscience and held that visual recognition proceeds through three progressive levels of visual features, namely 2D (such as nodes, edges, etc.), 2.5D (the aggregation of 2D information into 3D contour information), and 3D information. Marr's computer vision theory imitated the process of human vision; he believed that the brain and the computer compute in the same way. The three-stage features he proposed were a simulation of human vision and dominated the development of computer vision for many years. Researchers proposed a series of methods for the features of each stage. For 2D information, representative works include the Scale-Invariant Feature Transform (SIFT) and the Canny edge detector for edge extraction, supporting downstream visual tasks such as classification, recognition, and segmentation. The analysis of 2.5D and 3D visual information became a key research topic in the early development of computer vision. For example, the first specialized computer vision paper (from Larry Roberts) analyzed the 3D structural information of objects in images, and similar ideas appeared in the cylinder-structure research paradigm later proposed by Thomas Binford [1]. However, the limitations of this type of research were too great: the complexity of real visual objects made it difficult to describe them with simple geometric combinations. With the success of SIFT feature extraction, this line of research gradually faded from the stage.
Fu believed that the world has a higher-level grammar structure, so understanding images could be achieved by learning this grammar. Compared to Marr's three clearly defined levels, Fu's method was more general, which made it harder to design effective computational methods for it. Consequently, there were few followers until the beginning of this century, when Song-Chun Zhu [13] made some progress in image segmentation using this line of thinking. Even then, subsequent research remained rare, mainly because such grammar structures are difficult to model.
The 2000s were a transitional period for computer vision. The mainstream method was statistical models built on hand-crafted features, and a series of highly practical works were produced during this period, such as the Viola-Jones algorithm [15] for face detection, the histograms of oriented gradients (HOG) descriptor [16] and its successor, the Deformable Parts Model (DPM) algorithm [17], for pedestrian detection, and the Spatial Pyramid Matching algorithm [1] for image matching and recognition. It should be emphasized that the success of these algorithms rested on expert-designed features for image understanding, generally exploiting the distribution of gradients in the image. These studies gradually moved away from Marr's three-stage view. From a computational perspective, feature-based visual research brought significant performance improvements, which laid the foundation for subsequent deep learning-based research, since the latter learns features from data rather than relying on hand-designed ones.
The concept of deep learning [15] was proposed by Geoffrey Hinton and his students in 2006, but it was not until 2012 that AlexNet's [13] outstanding performance on the ImageNet dataset attracted public attention. Deep learning algorithms, represented by convolutional neural networks (CNNs), brought computer vision into the era of deep learning, a new paradigm driven by data and computing power. With the increase in model parameters, deep learning has brought significant improvements in traditional computer vision tasks, and applications such as face recognition have reached commercial-grade capability. Finally, computer vision research converged on image representation learning, moving from traditional keypoint localization and description to the patch-based representation learning exemplified by the Vision Transformer (ViT) [2], which completely overturned the visual theory established since Marr.
At this point, we may need to reflect on the ideological foundation of early computer vision development. The way computers perceive things may not be the same as what we consider to be the human visual process. In chronological order, computer vision methods can be roughly divided into three stages: 1. rule models established based on expert-designed visual processes; 2. statistical models based on hand-crafted features designed by experts; 3. representation learning paradigm driven by data and computing power. In terms of performance, we observe an interesting phenomenon: the less
prior human knowledge is utilized in the development of computer vision, whether in the form of rules or mathematics (where rules can be regarded as a simplified mathematical tool), the more computing power and data are used. But this phenomenon is not unique to computer vision; we will find similar patterns in the other fields discussed below.
### Natural Language Processing
The origin of natural language processing can be traced back to the early 20th century, with figures such as Baudouin de Courtenay [14] and Ferdinand de Saussure [15] approaching language as a system of symbols with inherent rules, from a psychological and cognitive perspective. Over time, language research gradually departed from the human and social context, abstracting the study to rules of symbols, which had a profound impact on later natural language processing. Generally speaking, natural language processing has gone through three eras: rule-based, probabilistic statistical models, and deep learning.
The first era was rule-based language analysis, which lasted from the 1950s to the 1980s. The basic idea was to use expert-designed, domain-specific rule sets to enable computers to understand natural language, with the primary focus on syntactic and semantic analysis. Noam Chomsky's book "Syntactic Structures" [16] was a representative work; it enabled computers to analyze sentences according to given rules, such as constructing a syntactic parse tree for a sentence. However, the complexity of the real world made such artificial rules an abstract and simplified construct, detached from the practical use of language and inadequate for understanding language in real-world situations. As with the study of combination rules in computer vision, both fields depended entirely on experts' abstractions of the world, and their detachment from real situations limited their practical value.
In the 1970s, statistical methods gradually became the main tool in natural language processing: probabilities were used to model words rather than artificially defined language rules. The core idea was that the probability of a sentence equals the joint probability of the words that make it up, under a Markov assumption on the sentence. Representative works include the general N-gram model [17] and the Hidden Markov Model (HMM) [18] used in speech recognition. However, statistical models invariably rest on prior assumptions, with models then designed to fit the data; topic models are a typical example. These assumptions may not match the real-world situation, which limits the performance of the models.
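As a toy illustration of the N-gram idea (the corpus and all names here are ours), a bigram model scores a sentence by the product of its smoothed word-transition probabilities:

```python
import math
from collections import Counter

def bigram_logprob(sentence, corpus):
    """Markov-assumption sentence score: log P(w_1..w_n) ~ sum of
    log P(w_i | w_{i-1}), with add-one-smoothed bigram counts."""
    tokens = [w for s in corpus for w in ["<s>", *s.split(), "</s>"]]
    uni = Counter(tokens)
    bi = Counter(zip(tokens, tokens[1:]))
    V = len(uni)
    words = ["<s>", *sentence.split(), "</s>"]
    return sum(math.log((bi[(a, b)] + 1) / (uni[a] + V))
               for a, b in zip(words, words[1:]))

corpus = ["the cat sat", "the dog sat", "the cat ran"]
print(bigram_logprob("the cat sat", corpus))   # seen word order: higher score
print(bigram_logprob("sat the cat", corpus))   # unseen order: lower score
```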
Since the advent of the deep learning era, natural language processing has made remarkable breakthroughs thanks to big data and large-scale models, completely abandoning the rule-based and statistical research paradigms. Early classic works in this stage include Word2vec, which achieved a breakthrough in word representation. From the perspective of model structure, the success of Seq2seq [19] established an initial form of model universality. With the emergence of the Transformer [20], universal models entered a new stage, directly leading to large-scale pre-training as the unified mode of natural language processing. Recently, the appearance of ChatGPT has again significantly improved the performance of natural language processing. We note that compared to the previous GPT-3 [1], the new version fundamentally differs by introducing large-scale human feedback mechanisms [10], through which human knowledge is fed directly back to the model to help it iterate.
Similar to the development of computer vision, natural language processing has undergone a development process from rules to data-driven approaches. However, the uniqueness of language deserves our attention, and we must recognize that language is part of what makes humans human. Language is not an independent tool or system detached from humans. Neither strong rules nor statistics built on strong prior assumptions can explain human language, and so they make human language difficult to understand.
### Machine Learning
In the 1950s, Arthur Samuel proposed the concept of machine learning: "a field of study that gives computers the ability to learn without being explicitly programmed." He demonstrated the possibility of machines outperforming humans by developing a checkers program. In pursuit of this goal, the field of machine learning has experienced ups and downs over the past seventy years, roughly divided into research directions based on neural networks and on statistics.
In 1957, Rosenblatt proposed the perceptron [14], which can be considered the earliest prototype of today's deep learning models and viewed as a biomimetic structure of a neuron. In 1969, Marvin Minsky raised the famous XOR problem, showing that the perceptron is ineffective on linearly inseparable data distributions. Neural network researchers entered a winter period until around 1980, when Werbos showed how to train the multilayer perceptron (MLP) using the core component of today's deep learning: backpropagation (BP). At that point, the basic model machinery of deep learning was in place. In 2006, the research of Hinton and others on deep learning once again received attention, and after nearly twenty years of rapid development, deep learning has completely changed AI research in various fields. We should note that computing power and data are the most important variables of both the current and past eras; the model itself is only a small part of the reason for success, because even a simple MLP can achieve high performance with enough computing power and data.

Another direction of machine learning is based on statistics. In the two decades after 1990, statistical methods became the mainstream of machine learning, producing representative works such as Boosting [10], the Support Vector Machine (SVM) [11], Ensemble Learning [12], and Principal Component Analysis [13]. Although they have a solid mathematical foundation, their performance cannot compete with today's deep learning.
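To make the XOR episode concrete, the following minimal sketch, written in plain NumPy with illustrative hyperparameters of our own choosing, trains a two-layer perceptron with backpropagation on XOR, the very function a single linear threshold unit cannot represent:

```python
import numpy as np

# XOR is not linearly separable: no single-layer perceptron can represent it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> 4 hidden units
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> 1 output

for _ in range(20000):
    # Forward pass through the hidden layer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backpropagation of the squared error (the "BP" step).
    d_out = (out - y) * out * (1.0 - out)
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # typically converges to [0, 1, 1, 0]
```

Remove the hidden layer and the same loop stalls near 0.5 for every input, which is the XOR problem in miniature.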
Fundamentally speaking, current mathematical tools are not enough to model the real world; they are an abstraction and simplification of it. Typically, these tools rest on strong assumptions, and such empirical assumptions (guesses or judgments) cannot be rigorously proven in a scientific manner. In this respect, seemingly rigorous mathematical tools are no different from expert rules in computer vision and natural language processing. From the very beginning of modeling, then, we must recognize the inadequacy of theory and compensate for it with empirical judgment. This is the underlying logic of the success of deep learning: it uses a data-driven approach to make up for the inadequacy of mathematical tools and learns models of the real world directly from data.
## 3 A Reflection from the Philosophy Perspective
### The Necessity of Practice
Although artificial intelligence has been developing for nearly a century, it still lacks a unified philosophical foundation. In particular, this discipline has broken down the opposition between natural sciences and social sciences since its inception, which means that it is itself a kind of unity. However, this unity is only formal at present.
The fundamental question about artificial intelligence is still the same: what is universal necessity? This universal necessity is in fact the unity of the world, which is in motion and universally connected; in other words, it is the universality of contradictions. Truth, or true knowledge, is the unity of universality and particularity, and knowledge of universal necessity is obtained from limited experience. Sir Isaac Newton used mathematics to express this universality in his "Mathematical Principles of Natural Philosophy" [15]. What is truth and what is universal necessity are the same question. This was originally a problem for metaphysics to solve, although "metaphysics" is an imprecise translation; the original meaning is ontology, or first philosophy. In other words, why are universal conclusions drawn from limited empirical material reliable?
To solve this problem, philosophers have made remarkable efforts. G.W.F. Hegel reconstructed metaphysics from the absolute, regarding truth as the alienation and return of absolute spirit. Specifically, Hegel's absolute is something that can only be grasped by pure thought, or is pure thought itself. In reality, however, people always start from the limited to understand this infinite world. Practice is endless, and so is understanding, which negates Hegel's conception of truth. Regarding artificial intelligence, Zhao Nanyuan proposed the theory of the generalized evolution of knowledge, which was meant to demonstrate the universality and unity of knowledge. His claim that the expansion of knowledge can prove its own effectiveness, however, amounts to saying nothing; moreover, the lack of unity in natural science also negates his conclusion.
Half a century ago, Mao Tse-tung's theory of practice and theory of contradictions solved the fundamental problem above at a philosophical level [16]. He attributed the answer to practice being the only criterion for testing truth at the practical level: the universality and immediacy of practice are the basis of the universality and unity of knowledge. His philosophical theory is grounded in the epistemology of Marxist dialectical materialism, which places practice first and holds that human cognition cannot be separated from practice. It rejects all erroneous theories that deny the importance of practice or divorce cognition from it. We must emphasize the dependence of theory on practice: the foundation of theory is practice, and theory in turn serves practice. Whether knowledge or theory is true is judged not by subjective feeling but by the objective results of social practice. The standard of truth can only be social practice, and the viewpoint of practice is the first and fundamental viewpoint of dialectical materialist epistemology. Artificial intelligence is no exception and needs to be grounded in practice.
We can make a speculation here: the establishment of causality cannot be obtained by logical reasoning or mathematical statistics. The theory of practice provides the standard of truth, and in essence the problem of truth and the problem of causality are the same, both involving universality and necessity; both are ultimately settled by practice, through confirmation, refutation, and revision. Moreover, if mathematics and logic were the basis of causality, that would be tantamount to saying there is no world without mathematics and logic. Causality is the interaction of things: one thing causes another. There are therefore two prerequisites for the establishment of causality: 1. things exist; 2. there is a connection between things. Discovering and revealing this connection is the task of science. Mathematics and logic can be used to express causality, but they cannot be its ground. The establishment of causality is ultimately determined by practice, and it should be noted that practice is infinite; only the endless process of verification in practice can prove universality.
### Why were our past attempts at artificial intelligence doomed to fail?
Why have our past AI attempts always failed? Because researchers have been eager to establish a knowledge system independent of humans, that is, to create a new intelligent species without human participation, allowing machines to make judgments independently of humans; whether rule-based or statistical and data-driven, this has fundamentally been the vision. Specifically, AI simulates the human thinking process from intuitive to rational cognition, mechanizing and mathematizing part of the actual human thinking process. In the past, however, we focused on the conceptual deduction of this thinking process, such as Marr's visual process and Saussure's language symbols mentioned in Chapter 2, abstracting it away from its context. This system did not incorporate human feedback into the development framework of AI technology. Instead, it treated humans and machines as two independent systems (subject-object binary opposition), or treated humans as equivalent to machines (mechanical monism). In fact, machine activity is essentially mathematical reasoning without self-awareness, while practice is a conscious human activity. A subject-object separated system without human feedback also lacks practice, and such machines cannot produce knowledge, let alone human-like intelligence. Human participation is therefore an essential part of AI. ChatGPT is an accidental event in technological development but a necessary result of historical development, because only by systematically incorporating human factors into the AI system can AI move towards inevitable success.
### The Necessity of Human-Machine Collaboration
In the past, the philosophy of artificial intelligence was based on subject-object dualism, in which knowledge was grounded in knowledge itself rather than in practice. However, practice cannot be separated from feedback. Implementing artificial intelligence thus requires human feedback, rather than simply allowing machines to achieve intelligence independently. Human-machine collaboration is therefore inevitable: it combines human cognitive abilities, which are themselves determined by human practice, with machine capabilities. Human-machine collaboration, rather than pure machine intelligence, is the necessary form of artificial intelligence.
However, we must pay attention to the effectiveness of feedback. Take recommendation systems, one of the most successful applications of artificial intelligence: their biggest problem is that users do not have true freedom of choice and decision-making. Although users can nominally choose "dislike" or "not needed", in effect the system imposes its will on them, which does not conform to the theory of practice.
### The Necessity of Computing Power
We need to recognize the role of computers. Without computers, there would be no artificial intelligence, and both computing power and algorithms are essential for computation. From computers we have never been able to demand more than their advantage in computation, and we cannot demand more. This is one reason why artificial intelligence development has run into problems: we often demand more from computers than computation, such as independent consciousness. The development of artificial intelligence technology must always be centered on computing power, even if more powerful computing tools, such as quantum computers, arrive in the future. We can analyze the fundamental contradictions of AI in two aspects: first, when computing power is fixed, we can only use human knowledge to improve the computational performance of intelligent machines; second, when human knowledge is too complex to teach machines through computation, computing power must continue to expand. The result is that as computing power expands to a certain level, for instance when quantum computers mature, parts of human knowledge that currently cannot be realized through computation may become computable. The chain of human knowledge, algorithms, and computing power can thus spiral upwards. In light of these real contradictions, development centered on computing power is an inevitable choice.
## 4 Artificial Intelligence and Future Society
The "practice" of artificial intelligence is to participate in the social production and life practice of human beings, that is, it is a part of human social production and life practice. Artificial intelligence acts as a tool and means in social production and life. From this perspective, it has no essential difference from the mechanical machines invented by humans before, which is its universality. Artificial intelligence is the product of deepening and upgrading of labor division, but unlike previous technological innovations, it has its own particularity. AI represents the replacement of mental labor for physical labor, creative labor for simple repetitive labor, and the shift from being controlled by the production process to controlling the entire production process, marking a new leap in human social productivity. This creates a more solid material basis for higher and more comprehensive human development and ultimately produces a more efficient and harmonious social form.
Artificial intelligence is rooted in human social production and life practice, and that practice is the fundamental driving force behind the research and development of artificial intelligence. We can therefore predict that population size and production scale confer practical advantages, and that societies organized in this form will prevail in the evolution of artificial intelligence.
In the era of artificial intelligence technology, what is the role of human beings? First, we need to answer one question: what can humans do, and what can they not do? The human brain can use conceptual thinking to process experience and generate new knowledge, but the amount of data the human brain can process is limited. Second, we need to answer another question: what can machines do, and what can they not do? Machines can perform efficient calculations, and anything that can be calculated can be done by a computer. However, machines cannot practice, so they cannot judge the truth of knowledge and may even forge knowledge. Future humans will therefore need to constantly provide new knowledge, ultimately serving three purposes: 1. increasing computing power; 2. improving algorithms; 3. providing feedback to machines. The future social division of labor will also revolve around these three points: on one hand eliminating old divisions of labor, on the other deepening new ones, especially in intellectual labor. Artificial intelligence will undoubtedly expand the scope, breadth, and depth of the social division of labor.
## 5 Conclusion
This article summarizes the technological development and evolution of artificial intelligence over the past century from both technical and philosophical perspectives. We found that the development paths of its different directions are broadly similar, yet the profound philosophical logic behind the failures and successes has never been valued in the field of AI, nor systematically analyzed. It should be noted that AI is the combination of natural science and social science: not just machine intelligence, but an extension of human intelligence. It is therefore necessary to understand AI from different perspectives in order to avoid blind development. We call for attention to the establishment of the philosophical foundation of AI; the field urgently needs a unified and time-tested philosophical system to ultimately guide the development of technology.
|
2310.02921 | Tuneable and biodegradable poly(ester amide)s for disposable facemasks | The widespread use of disposable facemasks during the COVID-19 pandemic has
led to widespread environmental concern due to microplastic pollution.
Biodegradable disposable facemasks are a first step to reducing the
environmental impact of pandemics. In this paper we present high-performance
facemask components based on novel poly(ester amide) (PEA) grades synthesized
from bio-sourced materials and processed into non-woven facemask components.
PEA based polymers present an excellent compromise between mechanical
performance and biodegradability. Importantly, the properties of the PEA can
easily be tuned by changing the ratio of the ester and amides, or variation of
diol and diacid part. We synthesized seven polymers which we optimized for
biodegradability and processability. Among them, two grades combined
electrospinning process compatibility with full degradation within 35 days,
using a normalized biodegradation test. The ultra-thin filters thus developed
were evaluated for performance on a custom-made characterization bench. The
filters achieved a microparticle capture efficiency and breathability
comparable to commercial filters. Another PEA grade was optimized to reach
optimal visco-thermal properties that made it compatible with solvent-free
melt-spinning process as demonstrated with continuous fibres production.
Overall, our environmentally friendly solution paves the way for the
fabrication of high-performance fibres with excellent biodegradability for the
next generation facemasks. | Esteban Alvarez Seoane, Alessandro Cattaneo, Fabien Neuenschwander, Lucien Blanchard, Tatiana Nogueira Matos, Laure Jeandupeux, Gianni Fiorucci, Maryam Tizgadam, Kelly Tran, Pierre-Louis Sciboz, Luce Albergati, Jérôme Charmet, Roger Marti, Stefan Hengsberger | 2023-10-04T16:03:06Z | http://arxiv.org/abs/2310.02921v1 | Tuneable and biodegradable poly(ester amide) for disposable facemasks
## Abstract
The widespread use of disposable facemasks during the COVID-19 pandemic has led to widespread environmental concern due to microplastic pollution. Biodegradable disposable facemasks are a first step to reducing the environmental impact of pandemics. In this paper we present high-performance facemask components based on novel poly(ester amide) (PEA) grades synthesized from biosourced materials and processed into non-woven facemask components. PEA-based polymers present an excellent compromise between mechanical performance and biodegradability. Importantly, the properties of the PEA can easily be tuned by changing the ratio of the esters and amides, or by variation of the diol and diacid parts. We synthesized seven polymers which we optimized for biodegradability and processability. Among them, two grades combined electrospinning process compatibility with full degradation within 35 days, using a normalized biodegradation test. The ultra-thin filters thus developed were evaluated for performance on a custom-made characterization bench. The filters achieved a microparticle capture efficiency and breathability comparable to commercial filters. Another PEA grade was optimized to reach optimal visco-thermal properties that made it compatible with a solvent-free melt-spinning process, as demonstrated with continuous fibre production. Overall, our environmentally friendly solution paves the way for the fabrication of high-performance fibres with excellent biodegradability for the next generation of facemasks.
## Introduction
During the COVID-19 pandemic, facemasks proved to be an effective way to limit the spread of the virus[1, 2]. Commercial medical facemasks are high-tech products which must meet many requirements in terms of hygiene, efficiency, and cost. In general, they are multi-layered and include a high-performance filter layer. A recently published communication highlights the environmental impact resulting from disposable facemasks[3]. According to the authors, the microplastics originating from the standard masks, made of polypropylene and polyethylene, could significantly aggravate the global plastic pollution. Other studies also confirm the potential environmental damage that results from improper disposal of facemasks[4]. This ecological issue raises new challenges to the textile industry.
Electrospinning[5, 6] is a highly promising technique that was shown to not only improve facemask performance, but also reduce the amount of polymer used compared to conventional mask fabrication processes[7, 8, 9]. However, even though this fabrication method is more environmentally friendly, the solution is not ideal as it still produces plastic waste. Instead, one of the most promising solutions is to fabricate the facemasks using biodegradable polymers.
This class of polymers has received considerable attention for biomedical applications [10, 11], including facemasks. Instances of electrospun biodegradable facemasks made of polylactic acid (PLA) [12, 13], cellulose [14, 15], chitosan [16, 17] and other materials [18, 19] were reported recently. Among them, PLA [12] and poly(butylene succinate) [20] electrospun air filters were shown to exhibit high filtration efficiency and good biodegradability.
Among the biodegradable polymers available, poly(ester amide)s (PEA) appear as a promising candidate, as they combine the high thermal stability, high elastic modulus and high tensile strength of polyamides with the good degradability of polyesters [21]. Another interesting feature of PEAs is the tuneability of their properties. This explains why poly(ester amide) grades have received a lot of attention for biomedical applications [22, 23]. Even though the fabrication of high-grade filters made by electrospinning of PEA fibres was demonstrated [24, 21], there is no report, to the best of our knowledge, of PEA-based facemasks that demonstrate excellent biodegradability combined with filtration and breathability on par with the performance of commercial facemasks.
In this paper we present and fully characterise novel biodegradable facemask components made of PEA fibres. In brief, we synthesised seven biosourced poly(ester amide) grades and evaluated them for processing into non-woven fibres. By varying the ratio of esters and amides and by tuning the diol and diacid, we systematically optimised PEA grades for biodegradability and for two non-woven fibre fabrication processes (Figure 1). Selected candidates underwent a normalised biodegradation test, and two polymers were fully degraded in less than 35 days, including one that degraded within 20 days, which is comparable to cellulose. To evaluate the performance of the biodegradable electrospun filters, we built a custom bench to measure filtration efficacy and breathability. Our ultra-thin filters demonstrate filtration efficiency and breathability similar to commercial filters. Finally, we demonstrate that one of our PEA grades is compatible with a solvent-free melt-spinning process for the fabrication of outer-layer fabric. In particular, we optimised the fabrication process to enable continuous fibre formation on a custom-made rig. Overall, our results pave the way for the development of high-performance biodegradable facemasks based on biosourced PEA.
## Results
Polymer Synthesis & Characterization
As shown in Figure 2a, a series of poly(ester amide)s **1-6** were synthesized by polycondensation from biobased raw materials such as diols (1,4-butanediol, 1,6-hexanediol and 1,10-decanediol), diesters (dimethyl adipate and dimethyl 2,5-furandicarboxylate, DMFD) and a bisamide-diol building block prepared from 1,4-butanediamine and caprolactone [25, 26, 27, 28, 29]. The ratio of the unique bisamide-diol building block was varied to prepare polymers with different ester-amide content. GPC measurements were performed on the polymers to determine the Mw, Mn and PDI reported in Table 1.
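As a reminder of what these GPC summary statistics mean, the short sketch below computes them for an invented chain population; the numbers are illustrative only and are unrelated to Table 1:

```python
import numpy as np

# Invented chain population: masses [g/mol] and number of chains of each mass.
M = np.array([5_000, 10_000, 20_000, 40_000], dtype=float)
N = np.array([30, 40, 20, 10], dtype=float)

Mn = (N * M).sum() / N.sum()            # number-average molecular weight
Mw = (N * M**2).sum() / (N * M).sum()   # weight-average molecular weight
print(f"Mn = {Mn:.0f} g/mol, Mw = {Mw:.0f} g/mol, PDI = {Mw / Mn:.2f}")
```

The dispersity PDI = Mw/Mn equals 1 for perfectly uniform chains and grows as the distribution broadens, which is why it is reported alongside the averages.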
Figure 1: The poly(ester amide)s synthesised and presented in the manuscript were fine-tuned to enable their processing by a) electrospinning and b) melt-spinning, to allow for the fabrication of high-performance mask filters and of fibres compatible with mask outer layers, respectively.
In addition, poly(ester amide) **7** based on bis(oxazoline) and sebacic acid was prepared by a polyaddition reaction (Figure 2b) [30, 31]. The first syntheses (polymers **1, 2**) were performed with 1,4-butanediol at an ester-amide ratio of 50%:50%. This initial selection was made because of the bio-based origin of this diol and its lower boiling point compared to the other candidates (1,6-hexanediol and 1,10-decanediol). These tests showed that sublimation of the oligomers formed during the polycondensation reaction led to low molecular weight polymers **1** and **2**. The synthesis of polymer **3** with 1,10-decanediol was straightforward and gave a higher molecular weight in comparison to the first polymers synthesised. However, the cost of this diol encouraged us to select 1,6-hexanediol for further trials and the preparation of polymers **4**, **5**, and **6**.
The summary of the thermal characterization of polymers **1-7** is shown in Table 2. The thermograms of the different polymers show a decrease of the glass transition temperature with an increase of the diol aliphatic chain length (Table 2 and Figure S1 in SI). This phenomenon is linked to the flexibility of the polymers: the longer the aliphatic chain of the diol, the easier it is for the polymer to pass from the glassy to the rubbery state, owing to the increased mobility of the polymer chains. A cold crystallization peak can be seen in all the thermograms before melting; the introduction of the hydrogen-bond-prone amide segment enables this thermal event in the polymers. As for the Tg, the flexibility of the polymer increases with the length of the aliphatic chain of the diol, thus reducing the energy needed to pass from the amorphous state to a crystal [25, 32].
Figure 2: Synthesis of poly(ester amide) grades. a) Biobased poly(ester amide)s by polycondensation with dimethyl adipate (PEA **1**), and with dimethyl 2,5-furandicarboxylate (PEA **2-6**). b) Bis(oxazoline)-based poly(ester amide) **7** by polyaddition.
The thermal analysis was also performed on the polymers synthesized from 1,6-hexanediol with different ester-amide ratios (Table 2 and Figure S2 in SI). The thermograms show that Tg increases with the amide content from an ester-amide ratio of 75-25 (polymer **4**) to 50-50 (polymer **5**); after this increase the Tg stabilises at around 27\({}^{\circ}\)C. Another observation is that cold crystallization is present at the 50-50 ester-amide ratio but not in the other two polymers. Finally, the presence of two distinct melting peaks in the PEA with an ester-amide ratio of 25-75 (polymer **6**) indicates the melting of the ester and the amide segments; in the other polymers the ester melting was not observed.
Solubility tests and initial electrospinning trials
The polymer solubility was tested in different solvents (Table S1 in SI). For the first solubility tests, the polymers **3**, **4** and **6** synthesized from 1,4-butanediol, 1,6-hexanediol and 1,10-decanediol with 50% hard segment were chosen. In most cases the polymers are soluble in alcohols and halogenated solvents and are not soluble in carbonates or N-methylmorpholine N-oxide. The results also show that there is no specific solubility pattern, and where mixtures of solvents were used, the results presented even greater variability. Methanol represents an exception, as most tests were inconclusive when it was present. The solubility of a polymer is highly dependent on the molecular weight, polar forces, hydrogen bonding and dispersion forces [33]; it therefore has to be evaluated and fine-tuned for each batch synthesized.
From these initial observations, a selection of possible solvents for electrospinning was made. The initial screening was performed on the polymers synthesized from 1,6-hexanediol and 1,10-decanediol; polymers produced from 1,4-butanediol were not tested due to their low molecular weights. The tests performed are summarized in Table 3.
The results of these screening tests suggest that HFIP is the best candidate amongst the solvents tested. High boiling point solvents (dimethyl carbonate (DMC), phenylethanol, benzyl alcohol) were selected for their non-hazardous nature and because they often occur in nature (fruits), making them potentially bio-based. The problem with these solvents is their low vapor pressure, which lowers their evaporation rate compared to solvents such as hexafluoroisopropanol (HFIP) and dichloromethane (DCM); rapid removal of solvent is important to obtain good electrospinning results. We also performed solubility tests in binary systems that combine a high boiling point solvent with a low boiling point solvent (methanol, ethanol, DCM and chloroform). The idea was to retain the good solubility provided by the high boiling point solvent while improving evaporation during electrospinning thanks to the presence of the low boiling point solvent. The results of these tests were mostly electrospray or a gelatinous mixture of solvent and polymer on the collector. Overall, the tests performed
| Polymer | Tg [\({}^{\circ}\)C] | Cold cryst. [\({}^{\circ}\)C] | Mp 1 [\({}^{\circ}\)C] | Mp 2 [\({}^{\circ}\)C] |
| --- | --- | --- | --- | --- |
| 1 | -38.4 | - | 73.71 | 133.3 |
| 2 | 27.5 | 111.6 | 134.4 | - |
| 3 | 12.7 | 71.9 | 139.7 | - |
| 4 | 18.1 | - | 119.8 | - |
| 5 | 26.0 | 105.5 | 137.5 | - |
| 6 | 27.5 | - | 157.8 | 171.4 |
| 7 | 17.8 | 152.0 | 172.75 | - |

Table 2: Overview of thermal data (DSC) for the seven polymer grades.
with HFIP showed the best results and enabled the deposition of fibrous material. This solvent was thus selected for further processing.
### Biodegradation tests
Figure 3 shows the biodegradation results for three PEA grades based on the norm ISO 14855-1. The method reported in the norm involves measuring the carbon dioxide evolved as a function of time, allowing the degradation of the materials to be determined against a cellulose reference; a value of 90% is considered total decomposition. The tests showed rapid degradation of two PEA grades, polymers **1** and **7**. These polymers were completely degraded after less than 35 days, with polymer **1** following the degradation curve of cellulose and reaching full degradation after about 20 days. Polymer **4** presents a slower rate of degradation, with a plateau after 45 days followed by an increase between 75 and 105 days; after 105 days the polymer continues to degrade slowly, reaching a value of 70% after 180 days. The difference in degradation is attributed to the different chemical structures and molecular weights of the polymers.
| Polymer | Solvent |  |  |  |  |  | Result |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 3 | HFIP | 12 | 17 | 12 | 21 | 50 | Fibers |
| 4 | HFIP | 8 | 17 | 12 | 31 | 50 | Fibers |
| 4 | Chloroform / phenyl ethanol | 10 | 14 | 15 | 5 | 100 | Not continuous filament |
| 4 | Chloroform / benzyl alcohol | 15 | 9 | 15 | 5 | 100 | Spraying |
| 4 | Chloroform / phenyl ethanol | 12.5 | 11 | 15 | 35 | 100 | Not continuous filament |
| 4 | DCM / benzyl alcohol | 13 | 9 | 15 | 35 | 100 | Not continuous filament |
| 4 | Chloroform / DCM | 6.7 | 25 | 15 | 20 | 100 | Spraying |
| 6 | HFIP | 10 | 17 | 13 | 21 | 50 | Fibers |
| 6 | Chloroform / methanol | 6.5 | 8 | 5 | 31 | 100 | Spraying |
| 6 | Chloroform / ethanol | 6.7 | 25 | 15 | 20 | 100 | Spraying |
| 6 | DCM / ethanol | 6.7 | 25 | 11 | 35 | 100 | Spraying |
| 6 | DCM / benzyl alcohol | 6.7 | 25 | 11 | 20 | 180 | Not continuous filament |
| 6 | Chloroform / DMC | 6.7 | 25 | 11 | 20 | 180 | Spraying |
| 6 | DCM / methanol | 6.7 | 25 | 11 | 20 | 180 | Spraying |
| 6 | DCM / phenyl ethanol | 6.7 | 10 | 15 | 30 | 180 | Not continuous filament |

Table 3: Summary of initial electrospinning conditions (the headers of the five numeric columns could not be recovered from the source).
Electrospinning and parameter optimisation
Based on the above results, polymer **7**, which performed well in the biodegradation test (Figure 3) and produced fibrous material when dissolved in HFIP during the electrospinning screening tests, was selected to create high-performance filters. Different polymer concentrations were tested for fibre quality by systematically varying the flow rate, the collector distance and the voltage (see Design of Experiment section and Table S2 in SI). Figure 4a shows the heatmap representing the quality of the electrospun fibres obtained from a 10 wt.% solution of polymer **7** in HFIP. The green areas indicate high-quality, homogeneous fibres, while the red areas indicate very poor-quality fibres or electrospraying. Figure 4a shows that a slower flow rate and a higher voltage/distance ratio improve the quality of the fibres. In contrast, the heatmap for a 12.5 wt.% solution of polymer **7** in HFIP (Figure S3 in SI) is almost entirely green under the same conditions.
The stark contrast between the results for 10 wt.% and 12.5 wt.% highlights that the most important parameter for successful fibre formation is the polymer concentration in the solvent. Indeed, this parameter influences both the viscosity and the time required for fibre strands to dry into solid polymer, an observation in line with other studies [35]. Increasing the voltage and reducing the collector distance also clearly improves fibre quality, although one should be wary of the influence on process speed and deposition area. For the feed rate, a balance should be found between increased process speed (high feed rate) and better-quality fibres (low feed rate).
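A minimal sketch of how such a screening heatmap can be assembled is given below. The parameter grids and the 0-2 quality scores are hypothetical placeholders, not the measured values behind Figure 4a; in practice each cell is scored from SEM inspection of a deposited sample:

```python
import numpy as np
import matplotlib.pyplot as plt

feed_rates = [5, 10, 20, 30]        # feed rate [uL/min] (hypothetical grid)
v_over_d = [0.8, 1.2, 1.6, 2.0]     # voltage / collector distance [kV/cm]

# quality[i][j]: 0 = electrospray, 1 = beaded fibres, 2 = homogeneous fibres
quality = np.array([[2, 2, 2, 2],
                    [1, 2, 2, 2],
                    [0, 1, 2, 2],
                    [0, 0, 1, 2]])

fig, ax = plt.subplots()
im = ax.imshow(quality, origin="lower", cmap="RdYlGn", vmin=0, vmax=2)
ax.set_xticks(range(len(v_over_d)))
ax.set_xticklabels(v_over_d)
ax.set_yticks(range(len(feed_rates)))
ax.set_yticklabels(feed_rates)
ax.set_xlabel("voltage / distance [kV/cm]")
ax.set_ylabel("feed rate [uL/min]")
fig.colorbar(im, label="fibre quality (0 = spray, 2 = homogeneous)")
plt.show()
```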
Figure 3: Biodegradability tests based on the norm ISO 14855-1 of three selected poly(ester amide) grades presented in this paper. The degradation is tested with cellulose as a reference material. A target value of 90% is considered as a total degradation. Polymer **1** achieved a degradation on par with cellulose, with a degradation within 20 days. Polymer **7** was fully degraded after 35 days. Biodegradation was observed for polymer **4**, albeit it was slower than for the other two polymers.
### Filter fabrication and characterisation
Correlation between deposition time and layer thickness
The capture efficiency and the breathability of a filter depend on fibre size and density, and on the overall filter thickness. The first two parameters were evaluated using scanning electron microscopy; for the latter, we used a confocal microscope.
Layer thickness depends on feed rate, collector-emitter distance and, importantly, deposition time. Using the optimized electrospinning parameters described above, we prepared and analysed samples as described in the Methods section. It should be noted that when using a static planar collector, the thickness is location dependent, with fibres depositing faster at the centre of the pattern than at the edges [36]. Comparisons between samples were therefore made on sections cut at the same location each time, as explained in the Methods section.
Even though the buildup of residual charges on the collected fibres tends to repel the similarly charged jet, which limits the maximum thickness of the layer, our data (Figures 4b and 4c, for polymers **4** and **7** respectively) show that we are still in the linear regime despite deposition times beyond what is needed for the fabrication of our filters, as shown below. This ensures that we can control filter thickness simply by varying the deposition time.
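A minimal sketch of this thickness calibration follows; the data points are invented to mimic the linear trend of Figures 4b and 4c and are not the measured values:

```python
import numpy as np

t = np.array([1.0, 2.0, 4.0, 8.0])     # deposition time [min] (invented)
h = np.array([4.2, 8.1, 16.4, 32.6])   # layer thickness [um] (invented)

slope, intercept = np.polyfit(t, h, deg=1)   # least-squares line
h_hat = slope * t + intercept
r2 = 1.0 - np.sum((h - h_hat) ** 2) / np.sum((h - h.mean()) ** 2)
print(f"thickness ~ {slope:.2f} um/min * t + {intercept:.2f} um (R^2 = {r2:.3f})")

# In the linear regime, a target thickness maps directly to a deposition time.
t_for_20um = (20.0 - intercept) / slope
```

This direct inversion from target thickness to deposition time is what makes the linear regime convenient for filter fabrication.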
Filter performance measurement
**Figure 5d** shows a sketch of the characterization bench developed to measure the filtering efficacy of the filters. An optical image of the bench is available in Figure S4, in SI. A magnetic agitator is used to create an aerosol of Teflon particles that directly passes to the particle counter (channel 1). Once the particle flow intensity is determined, the particle stream is
Figure 4: Electrospinning optimisation and characterisation. a) Fibre quality heatmap as a function of the flow rate and the ratio of voltage to distance to collector. Data for a 10 wt.% solution of polymer **7** in HFIP. Insets show scanning electron micrographs of fibres obtained after deposition under conditions i, ii and iii. Filter thickness as a function of the deposition time for polymer **4** (b) and polymer **7** (c). A clear linear correlation is observed between deposition time and polymer layer thickness for each polymer, with R\({}^{2}\) values of 99.8% and 96.2% for polymers **4** and **7** respectively.
directed to the filter (channel 2) and the particles passing through are counted, thus enabling a differential measurement.
Since the particle detector allows an independent analysis of 1 and 3 \(\upmu\)m particles, two norms could be evaluated: the one currently in use in Europe (95% absorption of 3 \(\upmu\)m particles) and the one proposed by Swiss hospitals (70% absorption of 1 \(\upmu\)m particles) [37].
For the breathability characterization, the aerosol generator is removed and the particle counter is replaced by a vacuum pump. An air flow controller (PFM750S-F01-F, Distrelec) and a differential pressure detector (Manometer Testo 512, 0-20 hPa) are then added to the circuit. For the analysis, the pressure drop through the filter was evaluated for air flow rates between 0 and 14 ft/min.
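The differential measurement reduces to simple arithmetic. The sketch below, with invented particle counts and helper names of our own, computes the per-size capture efficiency from the two channels and checks the two criteria mentioned above:

```python
def capture_efficiency(n_bypass: int, n_filtered: int) -> float:
    """Fraction retained: 1 - (counts behind the filter / counts in the bypass)."""
    return 1.0 - n_filtered / n_bypass

# particle size [um] -> (channel 1 bypass counts, channel 2 filtered counts)
counts = {1.0: (12_000, 2_400), 3.0: (4_000, 150)}

eff = {size: capture_efficiency(c1, c2) for size, (c1, c2) in counts.items()}
passes_eu = eff[3.0] >= 0.95   # European criterion: 95% of 3 um particles
passes_ch = eff[1.0] >= 0.70   # Swiss hospital criterion: 70% of 1 um particles
print(eff, passes_eu, passes_ch)
```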
Varying the deposition time to control filter thickness allowed us to bring both the particle absorption and the breathability closer to market products. The results allowed us to identify an
Figure 5: Electrospun fibre characterisation. a) Pressure drop for three electrospun filters (polymer **7**) with 1, 2 and 3 min electrospinning deposition times with respect to two commercial facemasks, for different air flow rates (filter & outer layer for all tested samples). b) Optimisation of the filter electrospinning deposition time for polymer **7**. The pressure drop values are compared at an air flow rate of 7 ft/min, and the breathability is compared to commercial filters (two horizontal lines). c) Absorption test of polymer **7** filters (electrospun filter on outer mask layer) in comparison to the commercial reference facemask (filter & outer layer). The numbers of transmitted microparticles for electrospinning deposition times of 1 min and 2 min are in general respectively above and below the values obtained with the commercial filter. d) Principle of the filter performance test: Teflon microparticles are placed in a magnetic agitator to generate a particle aerosol. A particle counter with an integrated pump analyses the transmitted microparticles. The particle flow can alternatively be directed through channel 1, to control the particle flow intensity, or channel 2, to analyse the absorption through the filter. For the measurement of the breathability, the particle counter is replaced by a vacuum pump with an added flowmeter; a clean air flow is applied and the differential pressure is measured on both sides of the filter.
optimal filter deposition time between 1 and 2 minutes (Figure 5). Since these thin filters are difficult to handle, the electrospinning process was adapted to deposit the filter directly on the outer layer of sheets of commercial facemasks (spun-bond non-woven polypropylene, supplied by EPSA-Swiss). Therefore, in this case the measurements of all electrospun and reference samples are made on an outer layer / filter composite. To evaluate the filter performance, we performed a differential measurement with the outer layer only. The pristine outer layer substrate was tested individually, and its influence on the breathability and filtration was insignificant compared to the filter. Figure 5a shows the pressure drop of our filters, based on polymer **7**, in comparison with commercial mask references. As expected, the pressure drop increases with filter thickness for all the flow rates tested. The pressure drops for deposition times between 1 and 2 minutes correspond to the values of the commercial filters; in either case, our filters are within the above-mentioned norm. The particle capture efficiency of the filters is shown in Figure 5c. The filtration performance with a 2 min deposition time is better than that of our reference commercial filter, while the filter fabricated within 1 min exhibits a lower capture efficiency for all particle sizes except the largest (Figure 5c). Importantly, compared to our reference commercial filter, the electrospun PEA-polymer **7** filter with 2 min deposition time shows a significantly higher absorption rate for both 1 and 3 \(\upmu\)m particles while respecting the breathability norm (see also the Discussion).
In summary, by controlling the thickness of the filters through deposition time and using the optimum electrospinning deposition parameters, we demonstrated performance comparable with commercial facemasks and compliant with the norms evaluated.
Nanoindentation tests were performed on selected PEA-polymer **7** filters with different deposition times and on the two reference facemasks. These tests did not show any significant dependence of the mechanical stiffness on the electrospinning deposition time (not shown), but the electrospun fibres exhibited an elastic modulus twice as large as that of the commercial PP filters (see details and Figure S5 in SI), highlighting again the tuneability and excellent mechanical properties of our poly(ester amide) polymer.
### Melt-spinning
Electrospinning is a solution of choice for the fabrication of critical, high-performance filters, as shown above and elsewhere in the literature [38]. However, due to its limited throughput, it is not the most appropriate technique for the fabrication of the outer layer. In this case, more conventional approaches such as melt-spinning are typically favoured, especially since the fibres thus created face less stringent performance requirements. Therefore, we decided to evaluate the processability of our selected polymers for creating non-woven fabric by melt-spinning.
Melt-spinning tests were successfully performed with polymer **4**, as demonstrated by continuous fibre fabrication (Figure 6b). The polymer pellets were maintained at a controlled temperature inside the dispenser head, and continuous fibres were created and collected by the rotating cylindrical collector (Figure 6c). Due to the speed limitation of the rotating collector (700 rpm), the smallest fibre diameter that could be achieved was 34 \(\upmu\)m.
Polymer **4** was the only PEA grade that could be used for melt-spinning, as no continuous fibres were generated using the two other selected grades. Viscosity versus temperature scans of these three grades were performed (Figure 6a) to investigate the reasons behind the processability of the polymers.
The figure highlights two distinct behaviours that differentiate polymer **4** from polymers **1** and **7**: the slope of the viscosity around the melting temperature, and the final viscosity reached in the molten state. When heated up to the melting point, polymer **4** shows a smaller viscosity drop with temperature compared to the other two PEA grades. Furthermore, over a temperature range of \(\Delta\)T \(>\) 16 K the viscosity remains above 3000 Pa·s in the molten state, much higher for polymer **4** than for the other two grades. This analysis therefore confirms why polymer **4** is compatible with melt-spinning while the other two are not: in the latter cases, small temperature fluctuations can lead to a rupture of the fibre flow.
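The processing-window criterion suggested by this analysis can be expressed as a one-line check: over the scanned molten-state temperatures, how wide is the window in which the viscosity stays above 3000 Pa·s? The sketch below uses a mock exponential viscosity decay rather than our rheometer data:

```python
import numpy as np

T = np.arange(140.0, 200.0, 2.0)           # temperature scan [degC] (mock)
eta = 2.0e5 * np.exp(-(T - 140.0) / 15.0)  # melt viscosity [Pa*s] (mock decay)

window = T[eta > 3000.0]
delta_T = window.max() - window.min() if window.size else 0.0
# Heuristic from the text: a window wider than ~16 K suggests the grade
# tolerates temperature fluctuations during melt-spinning.
print(f"viscosity > 3000 Pa*s over a {delta_T:.0f} K window")
```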
## Discussion
In this paper we presented and fully characterised novel PEA-based biodegradable facemask components. Several biosourced PEA grades were synthesised, fine-tuned and evaluated for processing into non-woven fibres. Normalised biodegradation tests were performed on three PEA grade candidates that passed a processability screening test. Two polymers were fully degraded in less than 35 days, including one that degraded as quickly as cellulose, that is, within 20 days.
We then developed electrospun filters that were tested for filtration efficacy and breathability, using a test bench developed for the occasion. We systematically tested the filtration performance with particles between 0.3 and 10 \(\upmu\)m in size. Our optimised ultra-thin high-performance filters demonstrate filtration efficiency and breathability comparable to commercial facemasks.
The filtration efficacy and breathability of the filters were benchmarked against commercial filters. Disposable facemasks of Type I need to filter 95% of 3 \(\upmu\)m particles (norm EN 14683+AC:2019). However, during the COVID-19 pandemic several Swiss hospitals and medical institutes requested the addition of a 70% filtration criterion for 1 \(\upmu\)m particles [28]. The commercial filters used as reference herein have been validated using both criteria, and the
Figure 6: PEA grade optimised for the melt-spinning process. a) Viscosity versus temperature for the three PEA grades around the melting point. Polymer **4** shows a smoother variation of viscosity around the melting point in comparison to polymer grades **1** and **7**, which explains why grade **4** is less sensitive to temperature fluctuations and better suited for melt-spinning. b) Optical micrograph of continuous fibres obtained by melt-spinning polymer **4**. c) Principle of the melt-spinning equipment: the molten polymer is pushed through a nozzle and collected by a rotating cylinder.
mask used as reference for the filtration test has shown a 95% absorption of 1 \(\upmu\)m particles. Furthermore, both commercial masks demonstrated a pressure drop of roughly half the normed target value (40 Pa/cm\({}^{2}\) according to norm EN 14683+AC:2019). Since the performance of our optimized filters is comparable to that of the two reference filters, we conclude that they may meet the requirements of a normed test.
One further key outcome of our study is the demonstration that the filter material can be electrospun directly onto the outer layer of a disposable mask. This ensures tight contact between the layers and, importantly, simplifies the production of multi-layered systems. In addition, to evaluate the possibility of fabricating the outer layers of the facemasks, we developed a melt-spinning rig capable of continuous fibre fabrication with one of the polymers.
Pandit et al. [19] also claim that biodegradable materials will not only reduce waste but also increase wearing comfort and skin friendliness. Several authors report studies where biosourced and biodegradable filters have been made by electrospinning, for example based on gluten-blended PVA [39] or carbon-blended gluten nanofibers [40], while other authors propose biopolymers like PVA and PLA [41] for electrospun nanofilters. To the best of our knowledge there are no biodegradable facemasks based on biosourced PEA. PEA is an interesting class of polymers that combines good mechanical properties and biodegradability and that, importantly, is amenable to straightforward modifications to fine-tune its properties, as shown herein.
One open point of this research pertains to the solvent used for electrospinning, HFIP, which is not a green solvent. Despite comprehensive tests with other solvents such as methanol and ethanol, and with mixtures of HFIP and green solvents, the presented electrospinning results have only been achieved using pure HFIP.
Overall, our results pave the way for the development of high-performance biodegradable facemasks based on biosourced PEA. Future studies, outside the scope of this manuscript, will address the ideal combination of melt-spinning and electrospinning processes to fabricate an entire facemask. The potential final cost of a disposable biodegradable facemask will also be evaluated, considering factors such as upscaled PEA synthesis and a production line including an electrospinning step.
Materials and methods
Synthesis of biodegradable poly(ester amide) polymers
### Polymer Synthesis & Characterization.
General Information: Dimethyl 2,5-furandicarboxylate (DMFD) was purchased from Apollo Scientific; 1,4-butanediol, 1,6-hexanediol, 1,10-decanediol and DBTO were purchased from Acros Organics. Diethyl ether, absolute alcohol with 5% IPA, tetrahydrofuran with BHT stabilizer and methanol (99%) were purchased from Thommen-Furler AG and used without any further purification; titanium (IV) butoxide was purchased from Fluorochem and dissolved in toluene at the desired concentration prior to use.
NMR spectra were recorded with a Bruker 300 Ultrashield spectrometer and referenced against the chemical shift of the residual protio-solvent peak (CDCl3: 7.26 ppm; DMSO-d6: 2.50 ppm; D2O: 4.79 ppm) for 1H NMR and the deuterated solvent peak (CDCl3: 77 ppm; DMSO-d6: 40 ppm) for 13C NMR measurements.
Infrared spectra were recorded on a Bruker ALPHA in absorption mode between 4000 cm-1 and 400 cm-1 with a resolution of 4 cm-1. Samples were analysed directly on the diamond crystal without further preparation (not shown).
DSC measurements were carried with a Mettler DSC 821e. The analyses were conducted under nitrogen in Al 40 \(\upmu\)L crucibles with a heating and cooling rate of 10\({}^{\circ}\)C/min. Method: -70\({}^{\circ}\)C to 200\({}^{\circ}\)C, 1 min annealing, 200\({}^{\circ}\)C to -70\({}^{\circ}\)C, 1 min annealing, -70\({}^{\circ}\)C to 200\({}^{\circ}\)C.
A TGA/SDTA851e (Mettler Toledo) instrument was used to study the thermal stability of the synthesized polymers. To this purpose, 5-10 mg of polymers were placed in a standard aluminium pan and heated under nitrogen from 30\({}^{\circ}\)C to 800\({}^{\circ}\)C at a heating rate of 10\({}^{\circ}\)C/min. The first indicator is the temperature for which the weight loss is equal to 5% (T\({}_{\rm d,5\%}\)).
GPC measurements were performed on a Waters 1260 Infinity pump, a 1260 Infinity II Refractive Index Detector and a 1260 Infinity II Multisampler, with an Acquity APC XT 45 (1.7 \(\upmu\)m) column, an Acquity APC XT 125 (2.5 \(\upmu\)m) column and an Acquity APC XT 200 (2.5 \(\upmu\)m) column in series at 30 \({}^{\circ}\)C. A 10 mM solution of sodium trifluoroacetate in hexafluoroisopropanol was used as eluent at a flow rate of 0.3 mL/min. The molecular weights were calibrated with PMMA standards over a range between 600 and 2 200 000 Da (PSS Polymer Standards Service, Mainz, Germany).
For the test of polymer solubility, a defined amount of polymer and solvent was charged in a vial and left stirring until dissolution. Solubility was evaluated visually.
### Synthesis of Poly(ester amide)s
The **6,4-bisamidediol building block** was synthesized as reported in references [42, 43].
1H-NMR signals of 6,4-bisamidediol (300 MHz; DMSO-d6): δ 7.74 (t, J=5.6 Hz, 1H), 4.39-4.30 (m, 1H), 3.37 (dd, J=6.4; 4.7 Hz, 2H), 3.00 (q, J=6.1 Hz, 2H), 2.03 (t, J=7.4 Hz, 2H), 1.58-1.15 (m, 3H).
Polymers were synthesized following the same procedure, which comprises a first step where transesterification of DMFD or dimethyl adipate occurs with the consequent removal of methanol, and a second polycondensation step where the polymer chains grow and the removal of the diol takes place. As an example, the synthesis of polymer **2** from DMFD and 1,4-butanediol to form a 50% hard segment is reported [43].
In a 500 mL three-necked round-bottom flask mounted with a distilling bridge and a helical stirrer connected to the system via a magnetic coupling, 58.18 g of DMFD (321.19 mmol, 1
eq.), 50.93 g of building block 6,4-bis-amidediol (97.5% purity, 157.75 mmol, 0.49 eq.) and 14.47 g of 1,4-butanediol (160.56 mmol, 0.5 eq.) were introduced, and 3 cycles of argon/vacuum were performed to ensure an inert atmosphere. The flask was heated to 190\({}^{\circ}\)C using an aluminium heating block (DrySyn) under argon atmosphere. Once the reactants melted, stirring was started at 200 rpm and 4 mL of a 30 mg/mL stock solution in toluene of titanium tert-butoxide (catalyst, 180 mg, 0.52 mmol) were added through a septum. During the esterification a stream of argon was purged through the reactor to remove methanol and toluene efficiently. Once the distillation ended, the distillation collector was emptied, dried and re-connected. Then, 4 mL of a 30 mg/mL stock solution in toluene of titanium tert-butoxide (catalyst, 180 mg, 0.52 mmol) were added through a septum.
The temperature was increased to 205\({}^{\circ}\)C and the pressure was reduced to 0.02 mbar for 1.5 hours with a high vacuum pump. After this time the pressure was raised with argon up to atmospheric pressure and 4 mL of a 30 mg/mL stock solution in toluene of titanium tert-butoxide (catalyst, 180 mg, 0.52 mmol) were added through a septum. After reducing the pressure again, the solution was kept for 1.5 h at 205\({}^{\circ}\)C. Then, the temperature was increased to 210\({}^{\circ}\)C for 1.5 h and finally to 215\({}^{\circ}\)C for 1 h. After this time the formed polymer (a light brown viscous liquid) was cast on a metal plate to allow solidification.
### Synthesis of 2,2\({}^{\prime}\)-bis(2-oxazoline)
2,2\({}^{\prime}\)-bis(2-oxazoline) was synthesized based on the procedures reported by H. Wenker [31] and in the patent WO2012066051A2 [44].
Diethyl oxalate (14.6 g, 0.1 mol, 1 eq.) dissolved in 15 mL of ethanol is added over 1 hour to a cooled mixture composed of 2-chloroethylamine hydrochloride (23.2 g, 0.2 mol, 2 eq.) and potassium hydroxide (85%, 13.2 g, 0.2 mol, 2 eq.) dissolved in 20 mL of deionized water. The mixture temperature is kept below 20\({}^{\circ}\)C with an ice bath. At the end of the addition, the mixture is stirred for an additional hour at room temperature. The white precipitate is isolated by vacuum filtration, suspended in 40 mL of deionized water, and stirred for 15 minutes. The suspension is filtered again by vacuum filtration and washed with 15 mL of ethanol. The powder is dried in a vacuum oven at 80\({}^{\circ}\)C and 50 mbar. 16.92 g of N,N'-bis(2-chloroethyl)oxamide as a fine white powder is obtained (yield: 79%).
m.p.: 203\({}^{\circ}\)C (203\({}^{\circ}\)C, Lit.(35))
1H NMR (300 MHz, DMSO-d6) δ 8.95 (t, J = 6.3 Hz, 2H, N-H), 3.70 (t, J = 6.3 Hz, 4H, Cl-CH2), 3.49 (q, J = 6.2 Hz, 4H, N-CH2)
13C NMR (75 MHz, DMSO-d6) δ 160.4 (C=O), 43.0 (C-Cl), 41.3 (C-NH)
IR: 3291s, 3067w, 2961w, 2934w, 1655s, 1534s, 1440s, 1362w, 1311m, 1246s, 1185m, 1055m, 933w, 860w, 760m, 652m, 546m.
N,N'-bis(2-chloroethyl)oxamide (13.42 g, 63 mmol, 1 eq.) is suspended in 50 mL of methanol containing 8.32 g of potassium hydroxide (85%, 126 mmol, 2 eq.). The mixture is heated to reflux for one hour. The resulting suspension is filtered by vacuum filtration at 50\({}^{\circ}\)C. The filtrate is concentrated at 50\({}^{\circ}\)C and 200 mbar. When around 40 mL of methanol is removed, the precipitate is filtered off under vacuum and washed with a small volume (about 5 mL) of cold methanol. A second concentration and filtration step is performed on the resulting filtrate with the same parameters. The powder obtained from these two concentrations is dried in a vacuum oven at 60\({}^{\circ}\)C and 100 mbar. 7.05 g of 2,2\({}^{\prime}\)-bis(2-oxazoline) as a white crystalline powder is obtained. (Yield: 80%).
m.p.: 213\({}^{\circ}\)C (213\({}^{\circ}\)C, Lit. [35])
1H NMR (300 MHz, D2O) δ 4.49 (t, J = 9.9 Hz, 4H, O-CH2), 4.00 (t, J = 9.9 Hz, 4H, N-CH2).
13C NMR (75 MHz, D2O) δ 156.09 (C=O), 69.24 (O-CH2), 53.73 (N-CH2)
IR: 3291w, 2940w, 2872w, 1655w, 1617s, 1538w, 1473w, 1385w, 1348w, 1290w, 1253w, 1188w, 1105s, 971m, 913s, 868m, 725w, 653w, 569m.
### 2,2'-bis(2-oxazoline)-Based Poly(ester amide) synthesis
Bulk polymerizations of PEAs were carried out according to the procedure reported by Wilsens[45].
Sebacic acid (6.06 g, 30 mmol, 1 eq.) is mixed with 2,2'-bis(2-oxazoline) (4.62 g, 33 mmol, 1.1 eq.) and Irganox 1330 (0.1 g, 1 wt%). The mixture is stirred and heated under nitrogen atmosphere at 195\({}^{\circ}\)C for two hours, before the viscous polymer is discharged on a Teflon foil.
### Aerobic biodegradation of polymers
Polymers **1**, **4** and **7** were ground with an Ultra Centrifugal Mill ZM 200 from Retsch to a particle size below 800 \(\upmu\)m and sent to OWS[46].
The controlled composting biodegradation test is an optimized simulation of an intensive aerobic composting process where the biodegradability of a test item under dry, aerobic conditions is determined. The inoculum consists of stabilized and mature compost derived from the organic fraction of municipal solid waste. The test item is mixed with the inoculum and introduced into static reactor vessels, where it is intensively composted under optimum oxygen, temperature and moisture conditions. During the aerobic biodegradation of organic materials, a mixture of gases (principally carbon dioxide and water) constitutes the final decomposition products, while part of the organic material is assimilated for cell growth. The carbon dioxide production is continuously monitored and integrated to determine the carbon dioxide production rate and the cumulative carbon dioxide production. After determining the carbon content of the test item, the percentage of biodegradation can be calculated as the percentage of the test item's solid carbon that has been converted to gaseous mineral carbon in the form of CO2.
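As a rough numerical illustration of this calculation (all figures below are made-up placeholders, not measured values), the percentage of biodegradation follows from the blank-corrected cumulative CO2 and the test item's carbon content:

```python
# Percentage of biodegradation from cumulative CO2, as described above.
# All numeric inputs below are illustrative placeholders, not measured data.

M_C, M_CO2 = 12.011, 44.009  # molar masses of carbon and CO2 (g/mol)

def biodegradation_pct(co2_test_g, co2_blank_g, sample_mass_g, carbon_fraction):
    """Percent of the test item's solid carbon mineralized to CO2.

    co2_test_g      -- cumulative CO2 from the reactor with the test item (g)
    co2_blank_g     -- cumulative CO2 from the blank (inoculum-only) reactor (g)
    carbon_fraction -- mass fraction of carbon in the test item
    """
    carbon_evolved = (co2_test_g - co2_blank_g) * (M_C / M_CO2)
    carbon_initial = sample_mass_g * carbon_fraction
    return 100.0 * carbon_evolved / carbon_initial

print(biodegradation_pct(95.0, 30.0, sample_mass_g=50.0, carbon_fraction=0.62))
```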
The tests were performed according to the norm ISO 14855-1, Determination of the ultimate aerobic biodegradability of plastic materials under controlled composting conditions - Method by analysis of evolved carbon dioxide (2012), but as single measurements instead of in triplicate.
### Fibre fabrication through electrospinning of PEA
The Genvolt electrospinning kit, composed of a high-voltage power supply (up to 30 kV) and a syringe pump, was used for early screening. Glass syringes with a metallic needle and a rotating collector were used for the experiments.
Polymers were dissolved in a vial at a given concentration (see Table S1 in SI for details) and left stirring with a magnetic stirrer overnight at room temperature. The polymer solution was then filtered on a 0.22 \(\upmu\)m filter and loaded into a 10 mL glass syringe. The syringe was placed on the syringe pump and the voltage connected to the needle. The flow of the syringe pump and the collector speed (rpm) were selected, and the tension was then applied. The tension was slowly increased until the appearance of fibres at the tip of the needle. After the process was finished, the aluminium foil was removed from the collector and the tissue composed of polymer fibres was further used for SEM analysis.
Final electrospinning experiments on selected polymers were realised with a 4Spin® device using a 21-gauge needle as the single emitter and a fixed rectangular collector. The setup is represented in Figure 1a. PEA grades were dissolved in hexafluoroisopropanol (HFIP) with concentrations ranging between 8 wt.% and 12.5 wt.% for the selected polymers. Fibres were
deposited on aluminium foil or a commercial mask outer layer, depending on the condition evaluated, on the collector side. Voltage, working distance and flow rate were optimised through experimental design, as detailed in the supplementary information and Table S2.
Using the results from the experimental design, selected solutions were electrospun with an emitter-collector distance of 13 cm, a voltage of 17 kV and a feed rate of 30 \(\upmu\)L/min. Samples for filter performance characterisation were made by cutting 50 mm wide disks out of the fabric at the point where the fibres were the thickest, as assessed visually. Deposition time and polymer choice were the only variables for filter fabrication, and three filters were made for each parameter setup.
### Assessment of fibre quality through scanning electron microscopy
Quality of electrospun fibres was assessed visually using a JEOL JSM-6400 scanning electron microscope on metalized samples. Even spreads of long uniform fibres were judged to be of good quality, while the presence of polymer beads indicated electro-spraying. Samples of the experimental design were graded from 1 to 6 (Table S2 in SI) and sorted onto a heatmap to better represent the influence of the variable parameters (Figure 4 and Figure S3).
### Measurement of layer thickness through confocal microscopy
The thickness of layers deposited through electrospinning follows a Gaussian distribution, with the centre of the stream producing a thicker deposit than the sides. In order to get a measurement of thickness, it is important to account for that variability so that different measurements can be compared. This was done by depositing fibres over a silicon wafer partially covered by peelable masks in a fixed position. A confocal microscope (Sensor S neox) was then used to measure the height differences over a line between the areas where the mask was peeled off. To ascertain the relationship between layer thickness and deposition time, fibres deposited over 30, 60, 90 and 120 min were prepared for PEA 50%, and up to 90 min for PEA 25%.
### Nanomechanical tests of electrospun PEA fibers and commercial masks
Nanoindentation tests of individual filter filaments have been performed with an Ultra Nanoindenter UNHT (Anton Paar) equipped with a Berkovich tip. This local probe method is explained in more detail in many publications, for example [47]. Briefly, the diamond tip is loaded into the sample and the force is recorded as a function of the displacement. During the loading phase the material deforms plastically and elastically, whereby the two contributions cannot be distinguished. During the unloading phase the material recovers elastically, which allows the elastic modulus and hardness to be determined. For these nanomechanical tests the filter samples have been embedded in PMMA and polished. Indentation tests have been performed on individual fibers of selected electrospun PEA filters and of the reference commercial facemasks (see below). For each sample 15 indentation tests have been performed with a maximum load of 200 \(\upmu\)N. A linear loading rate of 200 \(\upmu\)N/min was applied, followed by a 10 s hold at maximum load and unloading at a rate of 200 \(\upmu\)N/min. The local elastic modulus and hardness of the fiber material were determined using a Poisson's ratio of 0.3.
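For illustration, the following sketch reproduces the standard Oliver-Pharr evaluation of such load-displacement curves (the instrument software performs an equivalent analysis). The ideal Berkovich area function and diamond tip constants used here are textbook values, and the input numbers are placeholders, not measured data:

```python
import math

def oliver_pharr(p_max_uN, h_max_nm, stiffness_uN_per_nm,
                 nu_sample=0.3, e_tip_GPa=1141.0, nu_tip=0.07):
    """Hardness and elastic modulus from an indentation unloading curve.

    Uses the ideal Berkovich area function A = 24.5 * h_c**2 and the
    contact depth h_c = h_max - 0.75 * P_max / S (S: unloading stiffness).
    """
    h_c = h_max_nm - 0.75 * p_max_uN / stiffness_uN_per_nm   # contact depth (nm)
    area = 24.5 * h_c**2                                     # contact area (nm^2)
    hardness_GPa = (p_max_uN / area) * 1e3                   # uN/nm^2 -> GPa
    e_reduced_GPa = (math.sqrt(math.pi) / 2.0) * stiffness_uN_per_nm \
        / math.sqrt(area) * 1e3
    # Subtract the diamond tip's compliance to recover the sample modulus.
    inv_e = 1.0 / e_reduced_GPa - (1.0 - nu_tip**2) / e_tip_GPa
    e_sample_GPa = (1.0 - nu_sample**2) / inv_e
    return hardness_GPa, e_sample_GPa

print(oliver_pharr(p_max_uN=200.0, h_max_nm=900.0, stiffness_uN_per_nm=1.2))
```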
### Rheological analysis of the PEA polymers
Rheological tests have been performed with an Anton Paar MCR702 TwinDrive rotational rheometer. Viscosity versus temperature scans have been carried out for selected PEA grades under a nitrogen atmosphere to avoid oxidation. The temperature interval has been defined based on the DSC scans (see Table 2). The measured range covered the melting point of the
individual PEA and the temperature has been varied at 1\({}^{\circ}\)C/min, applying an oscillation frequency of 1 Hz and an imposed 0.8%-1% relative deformation.
### Custom-made filter test bench for breathability and absorption tests
A filter test bench to measure pressure drop and microparticle absorption rate was conceived and realized in-house. The custom-made bench architecture and working principles are presented in detail in the results section. Teflon microparticles (polytetrafluoroethylene PTFE powder; CAS: 9002-84-0) with particle diameters ranging from 0.3 to 10 \(\upmu\)m were used. The number of particles passing through the filters was counted using a particle counter (HPPC6 Particle Counter Plus 8306; Connect 2 Cleanrooms Ltd, Lancaster, UK) with an integrated pump and detection range of 0.3, 0.5, 1, 3, 5 and 10 \(\upmu\)m particles. Commercial disposable facemasks, Einwegmaske "Facemask", PP, EN 14683:2019+AC:2019 and Aspop Einwegmaske PP EN 14683 Type IIR, validated in terms of particle absorption and breathability by Kassensturz (October 2020), were used as reference.
### Melt spinning tests
A melt-spinning set-up, allowing for solvent-free fabrication of continuous polymer fibres, was built. It consisted of an extrusion head (Fig. 1b and Fig. S6 in supplementary materials) and a rotating cylindrical collector with a maximum speed of 700 rpm. The polymer is melted in the head body via a heating collar reaching up to 400\({}^{\circ}\)C and pushed through the nozzle using compressed air.
## Acknowledgements
We would like to acknowledge funding from HES-SO (Projet Libre, Public Mask). We are grateful to C. Csefalvay for scanning electron microscopy and confocal microscopy measurements. We are grateful to M. Chouin for supplying EPSA Swiss mask components. |
2306.00608 | On the Effectiveness of Hybrid Mutual Information Estimation | Estimating the mutual information from samples from a joint distribution is a
challenging problem in both science and engineering. In this work, we realize a
variational bound that generalizes both discriminative and generative
approaches. Using this bound, we propose a hybrid method to mitigate their
respective shortcomings. Further, we propose Predictive Quantization (PQ): a
simple generative method that can be easily combined with discriminative
estimators for minimal computational overhead. Our propositions yield a tighter
bound on the information thanks to the reduced variance of the estimator. We
test our methods on a challenging task of correlated high-dimensional Gaussian
distributions and a stochastic process involving a system of free particles
subjected to a fixed energy landscape. Empirical results show that hybrid
methods consistently improved mutual information estimates when compared to the
corresponding discriminative counterpart. | Marco Federici, David Ruhe, Patrick Forré | 2023-06-01T12:26:07Z | http://arxiv.org/abs/2306.00608v2 | # On the Effectiveness of Hybrid Mutual Information Estimation
###### Abstract
Estimating the mutual information from samples from a joint distribution is a challenging problem in both science and engineering. In this work, we realize a variational bound that generalizes both discriminative and generative approaches. Using this bound, we propose a hybrid method to mitigate their respective shortcomings. Further, we propose Predictive Quantization (PQ): a simple generative method that can be easily combined with discriminative estimators for minimal computational overhead. Our propositions yield a tighter bound on the information thanks to the reduced variance of the estimator. We test our methods on a challenging task of correlated high-dimensional Gaussian distributions and a stochastic process involving a system of free particles subjected to a fixed energy landscape. Empirical results show that hybrid methods consistently improved mutual information estimates when compared to the corresponding discriminative counterpart.
## 1 Introduction
Mutual Information quantifies the amount of information gained about one random variable by observing another one (Shannon, 1948; MacKay et al., 2003). As such, it is a measure of their mutual dependence. Estimating the dependency of two random variables is a problem found ubiquitously in science and engineering. Examples are independent component analysis (Hyvarinen and Oja, 2000; Bach and Jordan, 2002), neuroscience (Palmer et al., 2015), Bayesian experimental design (Ryan et al., 2016). Further, mutual information estimation plays a crucial role in self-supervised learning, which in recent years has gained significant attention from the machine learning community (Hjelm et al., 2018; Zbontar et al., 2021) in the form of information optimization. Information optimization, such as in the information bottleneck method (minimization) (Tishby et al., 2000; Alemi et al., 2016) or deep representation learning (Bengio et al., 2013; Hjelm et al., 2018; van den Oord et al., 2018), can be conducted without explicitly modeling the underlying distributions. Instead, through directly bounding the mutual information, one can obtain tractable objective functions that can be optimized using flexible function estimators.
Mutual information estimation, the core focus of this work, is a challenging task since one usually has access only to samples from the underlying probability distributions, and not their densities (Paninski, 2003; McAllester and Stratos, 2020). Classical estimators (Kraskov et al., 2004; Gao et al., 2015) typically break down when the data is high-dimensional, and they can be non-differentiable, making them unsuitable for gradient-based information optimization (Hjelm et al., 2018).
Variational mutual information estimation considers a lower bound on the true information. Therefore, maximizing this bound using a flexible variational family yields accurate information estimates. By identifying a bound that unifies the generative and discriminative methods as categorized by Song and Ermon (2020) and Poole et al. (2019), we aim to address their respective shortcomings. That is, generative approaches lack flexibility, and discriminative approaches suffer from unfavorable bias-variance trade-offs for large estimates. The unification of these techniques yields a method capable of more accurate estimation.
Further, we specify a simple yet effective generative method called _Predictive Quantization_ (PQ) that makes use of a quantized (discrete) representation of the data, such as a clustering or a set of external attributes, to approximate quantities (such as a marginal entropy) that are usually intractable.
Our contributions, therefore, include the following:
1. We introduce a novel family of hybrid mutual information estimators which generalizes both generative and discriminative approaches.
2. We show theoretically and empirically that there is a clear advantage to combining generative and discriminative mutual information estimators since they have complementary strengths and weaknesses.
3. We design Predictive Quantization (PQ): a simple generative method that can improve a wide range of recently proposed discriminative estimators.
Experimentally, we test our models by estimating mutual information between the dimensions of a mixture of correlated multi-dimensional Gaussian distributions and compare the estimates of temporal information for a stochastic process of spatially correlated moving particles subjected to a fixed energy landscape.
## 2 Mutual Information Estimation
Given two random variables \(\mathbf{x}\) and \(\mathbf{y}\) with support \(\mathbb{X}\), \(\mathbb{Y}\) and joint probability density \(p(\mathbf{x},\mathbf{y})\), the mutual information between \(\mathbf{x}\) and \(\mathbf{y}\) is defined as the Kullback-Leibler (KL) divergence between \(p(\mathbf{x},\mathbf{y})\) and the product of the two marginal distributions1:
Footnote 1: Unless otherwise specified, expectations are computed with respect to \(p(\mathbf{x},\mathbf{y})\). See Appendix A for notational details.
\[I(\mathbf{x};\mathbf{y}) \stackrel{{\mathrm{def}}}{{=}}\mathrm{KL}(p(\mathbf{x },\mathbf{y})||p(\mathbf{x})p(\mathbf{y}))\] \[=\mathbb{E}\left[\log\frac{p(\boldsymbol{x},\boldsymbol{y})}{p( \boldsymbol{x})p(\boldsymbol{y})}\right]. \tag{1}\]
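As a minimal numerical illustration of Equation (1) (not part of the original experiments), consider a bivariate Gaussian with correlation \(\rho\), where all densities are known in closed form and the true information is \(-\frac{1}{2}\log(1-\rho^{2})\):

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n = 0.8, 100_000
cov = np.array([[1.0, rho], [rho, 1.0]])
x, y = rng.multivariate_normal(np.zeros(2), cov, size=n).T

# Monte Carlo average of log p(x, y) - log p(x) - log p(y), as in Eq. (1).
log_joint = (-0.5 * (x**2 - 2 * rho * x * y + y**2) / (1 - rho**2)
             - np.log(2 * np.pi * np.sqrt(1 - rho**2)))
log_marginals = -0.5 * (x**2 + y**2) - np.log(2 * np.pi)
print(np.mean(log_joint - log_marginals))   # Monte Carlo estimate of I(x; y)
print(-0.5 * np.log(1 - rho**2))            # closed form, ~0.51 nats here
```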
In most applications of interest, we can sample \(\boldsymbol{x},\boldsymbol{y}\sim p(\mathbf{x},\mathbf{y})\) to compute a Monte Carlo estimate. However, some of the densities in Equation (1) are usually unknown or intractable. For this reason, a common strategy involves the introduction of a variational distribution \(q(\mathbf{x},\mathbf{y})\) to obtain a lower bound on mutual information, which can be used for either estimation or maximization:
\[I(\mathbf{x};\mathbf{y}) =\mathbb{E}\left[\log\left(\frac{q(\boldsymbol{x},\boldsymbol{y} )}{p(\boldsymbol{x})p(\boldsymbol{y})}\frac{p(\boldsymbol{x},\boldsymbol{y})}{ q(\boldsymbol{x},\boldsymbol{y})}\right)\right]\] \[=\mathbb{E}\left[\log\frac{q(\boldsymbol{x},\boldsymbol{y})}{p( \boldsymbol{x})p(\boldsymbol{y})}\right]+\underbrace{\mathrm{KL}(p(\mathbf{x},\mathbf{y})||q(\mathbf{x},\mathbf{y}))}_{\text{Variational Gap}}\] \[\geq\mathbb{E}\left[\log\frac{q(\boldsymbol{x},\boldsymbol{y})}{p (\boldsymbol{x})p(\boldsymbol{y})}\right]\] \[\stackrel{{\mathrm{def}}}{{=}}I_{q}(\mathbf{x}, \mathbf{y}), \tag{2}\]
where \(q(\mathbf{x},\mathbf{y})\) is a joint density in a set \(\mathcal{Q}\) of attainable variational densities with positive support on \(\mathbb{X}\times\mathbb{Y}\). Note that the bound in Equation (2) is tight only when \(p(\mathbf{x},\mathbf{y})\in\mathcal{Q}\). Therefore, accurate mutual information estimation requires access to a flexible variational family. The KL divergence between the joint distribution \(p(\mathbf{x},\mathbf{y})\) and its variational approximation \(q(\mathbf{x},\mathbf{y})\) is commonly referred to as the _variational gap_, which we seek to minimize for accurate estimation.
Previous work (Poole et al., 2019; Song and Ermon, 2020) has identified two categories of approaches for maximizing the ratio \(I_{q}(\mathbf{x};\mathbf{y})\): generative and discriminative methods. In the following sections, we recover both these approaches using a parameterization for \(q(\mathbf{x},\mathbf{y})\) consisting of a normalized proposal distribution (generative) and an unnormalized density ratio (discriminative). On one hand, discriminative approaches (Belghazi et al., 2018; Nguyen et al., 2010) directly model the entire ratio, but this comes at the cost of considerable bias (van den Oord et al., 2018) or high variance for large values of mutual information (McAllester and Stratos, 2020). Generative approaches (Barber and Agakov, 2003), on the other hand, model the components in Equation (2) using learnable normalized densities. However, access to flexible parameterized distributions can be hard to attain, or they are expensive to optimize. These issues usually become more severe for high-dimensional \(\mathbf{x}\) and \(\mathbf{y}\).
Figure 1: Predictive Quantization (PQ). This figure shows how a simple quantization function \(Q(\mathbf{x})\) can be used to obtain proposals that lie closer to a target joint distribution (on the right) than the product of the marginals (on the left). We can use these intermediate distributions to split the problem of estimating mutual information into two parts, effectively reducing the variance of popular discriminative estimators. This results in a tighter estimate of the variational bound. Vertical dashed lines are used to indicate the quantized regions. Note that as the number of partitions increases, the proposal approaches the joint distribution.
## 3 A Generalized Variational Approach
In the following, we set up a parametrization for \(q(\mathbf{x},\mathbf{y})\) that allows us to derive popular mutual information estimators as special cases and propose new generalized hybrid estimators. First, note that any joint probability distribution \(q(\mathbf{x},\mathbf{y})\) can be written as a normalized exponential:
\[q(\mathbf{x},\mathbf{y})=\frac{e^{F(\mathbf{x},\mathbf{y})}}{Z_{F}},\ \ \text{with}\ Z_{F}\stackrel{{\mathrm{def}}}{{=}}\iint e^{F( \boldsymbol{x},\mathbf{y})}d\boldsymbol{x}d\boldsymbol{y}. \tag{3}\]
The function \(F:\mathbb{X}\times\mathbb{Y}\to\mathbb{R}\) is commonly referred to as the negative _energy function_ and \(Z_{F}\) as its normalization constant2. The main advantage is that there are few restrictions on \(F\), a property that has fruitfully been exploited by energy-based models (Song and Kingma, 2021; Grathwohl et al., 2019; Haarnoja et al., 2017; Song et al., 2020; Hyvarinen and Dayan, 2005; Ngiam et al., 2011).
Footnote 2: The energy needs to satisfy mild integrability constraints.
The main disadvantage, however, is that the normalization constant \(Z_{F}\) is generally intractable due to computationally expensive integration over the support \(\mathbb{X}\times\mathbb{Y}\). To address this problem, we compute a Monte Carlo estimate of \(Z_{F}\) by sampling from a proposal distribution \(r(\mathbf{x},\mathbf{y})\) belonging to an attainable family of normalized densities \(\mathcal{R}\):
\[Z_{F} =\iint\frac{r(\boldsymbol{x},\boldsymbol{y})}{r(\boldsymbol{x}, \boldsymbol{y})}e^{F(\boldsymbol{x},\boldsymbol{y})}d\boldsymbol{x}d\boldsymbol {y}\] \[=\mathbb{E}_{r}\left[e^{F(\boldsymbol{x},\boldsymbol{y})-\log r( \boldsymbol{x},\boldsymbol{y})}\right]. \tag{4}\]
Lastly, we can reparameterize the energy such that \(F(\mathbf{x},\mathbf{y})\stackrel{{\mathrm{def}}}{{=}}\log r( \mathbf{x},\mathbf{y})+f(\mathbf{x},\mathbf{y})\) to obtain
\[q(\mathbf{x},\mathbf{y})=r(\mathbf{x},\mathbf{y})\frac{e^{f(\mathbf{x}, \mathbf{y})}}{Z_{f}},\ \ \text{with}\ Z_{f}=\mathbb{E}_{r}[e^{f(\boldsymbol{x}, \boldsymbol{y})}]. \tag{5}\]
Intuitively, \(q(\mathbf{x},\mathbf{y})\) is obtained by transforming the proposal \(r(\mathbf{x},\mathbf{y})\in\mathcal{R}\) with a _critic_ function \(f\in\mathcal{F}\), and re-normalizing to obtain a valid density. We denote the family of variational distributions obtained with this procedure as \(\mathcal{Q}\stackrel{{\mathrm{def}}}{{=}}\mathcal{R}_{\mathcal{F}}\) to underline the dependency on the chosen proposals \(\mathcal{R}\) and critics \(\mathcal{F}\).
Using this parameterization in Equation (2), we obtain a general bound that includes both a generative and a discriminative component:
\[I_{q}(\mathbf{x},\mathbf{y}) =\underbrace{\mathbb{E}\left[\log\frac{r(\boldsymbol{x}, \boldsymbol{y})}{p(\boldsymbol{x})p(\boldsymbol{y})}\right]}_{I_{r}(\mathbf{x},\mathbf{y})}\] \[\quad+\underbrace{\mathbb{E}[f(\boldsymbol{x},\boldsymbol{y})]- \log\mathbb{E}_{r}\left[e^{f(\boldsymbol{x},\boldsymbol{y})}\right]}_{ \mathrm{KL}_{f}(p(\mathbf{x},\mathbf{y})||r(\mathbf{x},\mathbf{y}))}. \tag{6}\]
The entire decomposition is visualized in Figure 2. The total information \(I(\mathbf{x};\mathbf{y})\) (first row) decomposes through Equation (2) into the lower-bound \(I_{q}(\mathbf{x};\mathbf{y})\) and respective variational gap \(\mathrm{KL}(p(\mathbf{x},\mathbf{y})||r(\mathbf{x},\mathbf{y}))\) (second row). \(I_{q}(\mathbf{x};\mathbf{y})\) is then further split into the terms of Equation (6) (third row). The first component \(I_{r}(\mathbf{x};\mathbf{y})\) is a mutual information lower bound determined by the proposal \(r(\mathbf{x},\mathbf{y})\in\mathcal{R}\). This quantity differs from the target mutual information \(I(\mathbf{x};\mathbf{y})\) by the variational gap for \(r(\mathbf{x},\mathbf{y})\): \(\mathrm{KL}(p(\mathbf{x},\mathbf{y})||r(\mathbf{x},\mathbf{y}))\) (fourth row).
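A minimal sketch of how the bound in Equation (6) can be estimated on a batch (module names are illustrative; `critic` stands for any network \(f\), and the log-ratio of the proposal is assumed to be computable):

```python
import torch

def hybrid_mi_bound(x, y, x_r, y_r, log_ratio_r, critic):
    """Batch estimate of Eq. (6): I_q(x; y) = I_r(x; y) + KL_f(p || r).

    x, y        -- samples from the joint p(x, y)
    x_r, y_r    -- samples from the proposal r(x, y)
    log_ratio_r -- log r(x, y) - log p(x) - log p(y) on the joint samples
    critic      -- any network mapping a pair (x, y) to a scalar score
    """
    i_r = log_ratio_r.mean()                      # generative term I_r
    f_joint = critic(x, y).squeeze(-1)            # f on joint samples
    f_prop = critic(x_r, y_r).squeeze(-1)         # f on proposal samples
    n = f_prop.shape[0]
    log_z = torch.logsumexp(f_prop, dim=0) - torch.log(torch.tensor(float(n)))
    return i_r + f_joint.mean() - log_z           # Donsker-Varadhan KL term
```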
\begin{table}
\begin{tabular}{c|c|c} & Proposal & Energy \\ \hline Generative & \(r_{\theta}(\mathbf{x},\mathbf{y})\) & \(f_{k}(\mathbf{x},\mathbf{y})=k\) \\ Discriminative & \(p(\mathbf{x})p(\mathbf{y})\) & \(f_{\phi}(\mathbf{x},\mathbf{y})\) \\ Hybrid (Ours) & \(r_{\theta}(\mathbf{x},\mathbf{y})\) & \(f_{\phi}(\mathbf{x},\mathbf{y})\) \\ \end{tabular}
\end{table}
Table 1: Summary of the modeling choices for the main families of approaches for mutual information estimation with respect to the expression in Equation (6). Our proposed hybrid approach can exploit the flexibility of discriminative models while mitigating the variance because of the use of more expressive proposals.
Figure 2: Visualization of the additive decomposition of the terms of Equation (2) and Equation (6). The mutual information lower-bound for the implicitly defined distribution \(q(\mathbf{x},\mathbf{y})\) (in blue) can be seen as the sum of the lower-bound for the proposal distribution \(r(\mathbf{x},\mathbf{y})\) (in orange) and the estimation of the corresponding variational gap (in red).
The second component in Equation (6) is a comparison between the critic values \(f\) on samples from the true joint and on samples from the proposal. This expression can be seen as the Donsker-Varadhan representation of the aforementioned variational gap of the proposal (Donsker and Varadhan, 1983):
\[\mathrm{KL}(p(\mathbf{x},\mathbf{y})||r(\mathbf{x},\mathbf{y}))\geq\mathrm{KL} _{f}(p(\mathbf{x},\mathbf{y})||r(\mathbf{x},\mathbf{y})), \tag{7}\]
in which the inequality is tight when \(\mathcal{F}\) contains \(f_{k}^{*}(\mathbf{x},\mathbf{y})\stackrel{{\mathrm{def}}}{{=}} \log\frac{p(\mathbf{x},\mathbf{y})}{r(\mathbf{x},\mathbf{y})}+k\) with \(k\in\mathbb{R}\)(Poole et al., 2019). Note that the variational gap for \(q\) is always smaller than the variational gap for \(r\) whenever \(\mathcal{F}\) contains at least a constant (\(k\)) function:
\[\exists f\in\mathcal{F},\,f:\mathbb{X}\times\mathbb{Y}\to\{k\} \implies\max_{f\in\mathcal{F}}I_{q}(\mathbf{x};\mathbf{y})\geq I_{r}(\mathbf{ x};\mathbf{y}) \tag{8}\]
Since critics are usually modeled using flexible function approximators, this is a reasonable assumption, and, in practice, the family of transformed densities \(\mathcal{R}_{\mathcal{F}}\) is much larger than the original family of proposals \(\mathcal{R}\).
As summarized in Table 1, previous approaches for mutual information estimation can be seen as instances of the parameterization reported in Equation (2) obtained by either restricting \(r(\mathbf{x},\mathbf{y})=p(\mathbf{x})p(\mathbf{y})\) (discriminative methods) or using a constant critic function (generative methods). In the following sections, we analyze the strengths and weaknesses of each method to design a novel hybrid approach that takes advantage of the best characteristics of both.
### The Discriminative Approach: a Bias-Variance Trade-Off
In this section we focus on discriminative methods, which we recover by setting \(r(\mathbf{x},\mathbf{y})=p(\mathbf{x})p(\mathbf{y})\), causing \(I_{r}(\mathbf{x},\mathbf{y})\) to vanish from Equation (6). On the one hand, modeling density ratios directly with neural network critics allows for great flexibility (Gutmann and Hyvarinen, 2010; van den Oord et al., 2018). On the other, the efficacy of techniques to estimate \(\mathrm{KL}_{f}(p(\mathbf{x},\mathbf{y})||r(\mathbf{x},\mathbf{y}))\) heavily depends on bias-variance trade-offs of the chosen approximation to evaluate the normalization constant (Poole et al., 2019; McAllester and Stratos, 2020; Song and Ermon, 2020). In practice, Monte Carlo estimation of the log-normalization constant \(\log Z_{f}\) with a limited number of samples yields biased results caused by the log-expectation (Belghazi et al., 2018; van den Oord et al., 2018). For this reason, we consider a looser lower-bound of \(\mathrm{KL}(p(\mathbf{x},\mathbf{y})||r(\mathbf{x},\mathbf{y}))\) corresponding to the dual representation of the KL-divergence (Donsker and Varadhan, 1983):
\[\mathrm{KL}_{f}(p(\mathbf{x},\mathbf{y})||r(\mathbf{x},\mathbf{y }))\] \[\quad\geq\mathbb{E}_{p}[f(\mathbf{x},\mathbf{y})]-\mathbb{E}_{r}\left[e^ {f(\mathbf{x},\mathbf{y})}\right]+1, \tag{9}\]
where the inequality is tight when the normalization constant \(Z_{f}=1\).
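For comparison with the sketch above, the dual form in Equation (9) can be evaluated from the same precomputed critic values; swapping it for the Donsker-Varadhan term recovers an NWJ-style estimator:

```python
import torch

def nwj_kl_estimate(f_joint, f_prop):
    """Eq. (9): KL(p || r) >= E_p[f] - E_r[exp(f)] + 1 (less bias, more variance)."""
    return f_joint.mean() - torch.exp(f_prop).mean() + 1.0
```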
Estimating Equation (9) using Monte Carlo samples suffers from less bias at the cost of a higher variance (Nguyen et al., 2010; Poole et al., 2019; Guo et al., 2022). In fact, the variance of \(\mathrm{KL}_{f}(p(\mathbf{x},\mathbf{y})||r(\mathbf{x},\mathbf{y}))\) for a critic \(f\in\mathcal{F}\) is mostly determined by the variance of \(e^{f(\mathbf{x},\mathbf{y})}\), which can be bounded from below by an exponential of the KL-divergence between the implicitly defined \(q(\mathbf{x},\mathbf{y})\) and original proposal \(r(\mathbf{x},\mathbf{y})\):
\[\mathbb{V}_{r}\left[e^{f(\mathbf{x},\mathbf{y})}\right] \geq Z_{f}^{2}\chi^{2}(q(\mathbf{x},\mathbf{y})||r(\mathbf{x}, \mathbf{y}))\] \[\geq Z_{f}^{2}\left(e^{\mathrm{KL}(q(\mathbf{x},\mathbf{y})||r( \mathbf{x},\mathbf{y}))}-1\right). \tag{10}\]
Here, \(\chi^{2}(q(\mathbf{x},\mathbf{y})||r(\mathbf{x},\mathbf{y}))\) denotes the Pearson chi-squared divergence between the variational distribution and the proposal. Intuitively, the variance increases the more the critic changes the proposal. Whenever the critic is constant on the whole support \(\mathbb{X}\times\mathbb{Y}\), \(q(\mathbf{x},\mathbf{y})\) matches \(r(\mathbf{x},\mathbf{y})\), rendering both the variance and the value of \(\mathrm{KL}_{f}(p(\mathbf{x},\mathbf{y})||r(\mathbf{x},\mathbf{y}))\) zero. In this setting, the variational gap is completely determined by the proposal, since \(I_{q}(\mathbf{x};\mathbf{y})=I_{r}(\mathbf{x};\mathbf{y})\). In contrast, when \(q(\mathbf{x},\mathbf{y})=p(\mathbf{x},\mathbf{y})\) (i.e., the critic is optimal, \(f(\mathbf{x},\mathbf{y})=f^{*}(\mathbf{x},\mathbf{y})\stackrel{{ \mathrm{def}}}{{=}}\log\frac{p(\mathbf{x},\mathbf{y})}{r(\mathbf{x},\mathbf{y})}\)), the variational gap for \(q(\mathbf{x},\mathbf{y})\) is zero, but the variance is still determined by an exponential of the variational gap for \(r(\mathbf{x},\mathbf{y})\). In the case in which \(r(\mathbf{x},\mathbf{y})=p(\mathbf{x})p(\mathbf{y})\), the variance grows exponentially with the amount of information to estimate (McAllester and Stratos, 2020; Song and Ermon, 2020).
### On the Generative Approach
When setting \(f(\mathbf{x},\mathbf{y})=k\), the second term of Equation (6) vanishes. We are now in the generative setting. Considering \(I_{r}(\mathbf{x};\mathbf{y})\), computing and optimizing the ratio between the proposal and the product of the marginals requires access to a flexible family of distributions with known (or approximate) densities. Depending on the modeling choices for \(r(\mathbf{x},\mathbf{y})\), the computation may require estimating up to two entropy and one cross-entropy term:
\[I_{r}(\mathbf{x};\mathbf{y})=\mathbb{E}[\log r(\mathbf{x},\mathbf{y})]+H( \mathbf{x})+H(\mathbf{y}). \tag{11}\]
Barber and Agakov (2003) and McAllester and Stratos (2020) model a joint proposal as the product of one of the marginals and a conditional proposal \(r_{\mathbf{x}}(\mathbf{x},\mathbf{y})\stackrel{{\mathrm{def}}}{{=}} p(\mathbf{x})r(\mathbf{y}|\mathbf{x})\). This reduces the computation to the estimation of only one entropy and a cross-entropy term:
\[I_{r_{\mathbf{x}}}(\mathbf{x};\mathbf{y})=\mathbb{E}[\log r(\mathbf{y}| \mathbf{x})]+H(\mathbf{y}). \tag{12}\]
Popular modeling choices for joint and conditional proposals include transforming simple densities using normalizing flows (Rezende and Mohamed, 2015; Kingma et al.,
2016; Dinh et al., 2017) or using variational lower-bounds (Kingma & Welling, 2014). Despite recent advances (Durkan et al., 2019; Ho et al., 2020), generative approaches tend to show a trade-off between flexibility and computational costs, which may limit their effectiveness in high-dimensional settings.
### A Hybrid Method
We can combine the advantages of both methods by (i) considering proposals \(r(\mathbf{x},\mathbf{y})\) such that \(\mathrm{KL}(p(\mathbf{x},\mathbf{y})||r(\mathbf{x},\mathbf{y}))<\mathrm{KL}(p(\mathbf{x},\mathbf{y})||p(\mathbf{x})p(\mathbf{y}))\), which lowers the variance of the estimates of \(\mathrm{KL}_{f}(p(\mathbf{x},\mathbf{y})||r(\mathbf{x},\mathbf{y}))\), and (ii) choosing a flexible \(f(\mathbf{x},\mathbf{y})\), e.g., a neural network, that refines the proposal.
Both terms of Equation (6) are now nonzero. Note that if we already have access to a flexible density estimator, we can simply include it as \(r(\mathbf{x},\mathbf{y})\), with \(f(\mathbf{x},\mathbf{y})\) correcting for any residual lack of flexibility. One can therefore apply an iterative method:
1. Pick the best proposal \(\hat{r}(\mathbf{x},\mathbf{y})\in\mathcal{R}\) to minimize the variational gap. This is equivalent to maximizing \(I_{r}(\mathbf{x},\mathbf{y})\): \[\hat{r}(\mathbf{x},\mathbf{y})\stackrel{{\mathrm{def}}}{{=}} \operatorname*{arg\,max}_{r\in\mathcal{R}}I_{r}(\mathbf{x};\mathbf{y})\] (13)
2. Learn the ratio between the best proposal and the joint distribution: \[\hat{f}(\mathbf{x},\mathbf{y})=\operatorname*{arg\,max}_{f\in\mathcal{F}} \mathrm{KL}_{f}(p(\mathbf{x},\mathbf{y})||\hat{r}(\mathbf{x},\mathbf{y}))\] (14)
In practice, these two terms can be jointly optimized for fixed distributions of \(\mathbf{x}\) and \(\mathbf{y}\) as long as the objective in Equation (14) is treated as constant w.r.t the proposal.
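One possible implementation of this joint optimization (a sketch, not the authors' exact training code): the proposal's parameters receive gradients only through the \(I_{r}\) term, which can be enforced by drawing the proposal samples that feed the critic term without tracking gradients:

```python
import torch

def hybrid_training_loss(log_r_joint, log_marginals, f_joint, f_prop):
    """Negative hybrid bound with Eq. (14) held constant w.r.t. the proposal.

    log_r_joint   -- log r(x, y) on joint samples (carries proposal gradients)
    log_marginals -- log p(x) + log p(y) on joint samples (treated as constant)
    f_joint       -- critic values on joint samples
    f_prop        -- critic values on proposal samples; the proposal samples
                     should be drawn without gradient tracking so that the
                     KL term stays constant w.r.t. the proposal's parameters
    """
    i_r = (log_r_joint - log_marginals).mean()    # Eq. (13): trains the proposal
    n = torch.tensor(float(f_prop.shape[0]))
    kl_f = f_joint.mean() - (torch.logsumexp(f_prop, dim=0) - torch.log(n))
    return -(i_r + kl_f)                          # minimized over both modules
```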
Since our hybrid approach includes a flexible \(f(\mathbf{x},\mathbf{y})\), we design in the next section a simple family of proposals \(\mathcal{R}_{Q}\) that are guaranteed to lie closer to the joint distribution \(p(\mathbf{x},\mathbf{y})\) than the product of marginals, but yield simple and efficient computation of \(I_{r_{Q}}(\mathbf{x},\mathbf{y})\).
## 4 Predictive Quantization
We are interested in proposal distributions that we can easily sample (to estimate \(Z_{f}\)) and whose \(I_{r}(\mathbf{x},\mathbf{y})\) we can effectively approximate. We define a joint distribution \(r_{Q}(\mathbf{x},\mathbf{y})\) that factorizes as
\[r_{Q}(\mathbf{x},\mathbf{y})\stackrel{{\mathrm{def}}}{{=}}p( \mathbf{x})p(\mathbf{y}|Q(\mathbf{x})), \tag{15}\]
where \(Q\) maps to some discrete quantization \(\bar{\mathbf{x}}=Q(\mathbf{x})\), e.g., a cluster index. This proposal allows tractable estimation of \(I_{r}(\mathbf{x};\mathbf{y})\) since
\[I_{r_{Q}}(\mathbf{x};\mathbf{y}) =\mathbb{E}\left[\log\frac{p(\mathbf{y}|\bar{\mathbf{x}})}{p(\mathbf{y})} \right]=I(\bar{\mathbf{x}};\mathbf{y}) \tag{16}\] \[=\mathbb{E}\left[\log p(\bar{\mathbf{x}}|\mathbf{y})\right]+H(\bar{\mathbf{x}})\] \[\geq\mathbb{E}\left[\log s_{\psi}(\bar{\mathbf{x}}|\mathbf{y})\right]+H( \bar{\mathbf{x}}).\]
Here, \(s_{\psi}(\bar{\mathbf{x}}|\mathbf{y})\) is a variational approximation to \(p(\bar{\mathbf{x}}|\mathbf{y})\), parameterized by \(\psi\). This form of the proposal has three key advantages:
1. Since \(p(\bar{\mathbf{x}})\) is discrete, we can obtain good estimates of \(H(\bar{\mathbf{x}})\) for reasonable numbers of quantization intervals. Further, we can easily sample from \(r_{Q}(\mathbf{x},\mathbf{y})\).
2. \(s_{\psi}(\bar{\mathbf{x}}|\mathbf{y})\) can be parametrized as a categorical distribution, obtaining a simple prediction task of \(\bar{\mathbf{x}}\) given \(\mathbf{y}\). We can amortize the inference using a deep neural network.
3. \(\mathrm{KL}(p(\mathbf{x},\mathbf{y})||r_{Q}(\mathbf{x},\mathbf{y}))<\mathrm{KL }(p(\mathbf{x},\mathbf{y})||p(\mathbf{x})p(\mathbf{y}))\) for any \(Q\) such that \(I(\mathbf{y};Q(\mathbf{x}))>0\).
The last advantage is quickly identified by observing
\[\mathrm{KL}(p(\mathbf{x},\mathbf{y})||r_{Q}(\mathbf{x},\mathbf{y}))=\mathbb{E}\left[\log\frac{p(\boldsymbol{y}|\boldsymbol{x})}{p(\boldsymbol{y}|\bar{\boldsymbol{x}})}\right]=I(\mathbf{x};\mathbf{y})-I(\bar{\mathbf{x}};\mathbf{y}),\] which is strictly smaller than \(\mathrm{KL}(p(\mathbf{x},\mathbf{y})||p(\mathbf{x})p(\mathbf{y}))=I(\mathbf{x};\mathbf{y})\) whenever \(I(\bar{\mathbf{x}};\mathbf{y})>0\). In practice, PQ can be combined with a given discriminative estimator by
sampling batches \((\mathbf{x}_{i},\mathbf{y}_{i})_{i=1}^{B}\) in which each \(\mathbf{x}_{i}\) lies in the same quantized region (\(\forall i,j\in[B],\,Q(\mathbf{x}_{i})=Q(\mathbf{x}_{j})\)) and adding the estimation of \(I(Q(\mathbf{x});\mathbf{y})\) to the original discriminative estimate for \(\mathrm{KL}(p(\mathbf{x},\mathbf{y})||r_{Q}(\mathbf{x},\mathbf{y}))\). In Algorithm 1 we underline (in green) the simple modifications required to integrate PQ with a given discriminative estimator that produces samples from the marginal \(p(\mathbf{x})p(\mathbf{y})\) by shuffling the \((\mathbf{x}_{i},\mathbf{y}_{i})\) pairs within a batch. By sampling each batch conditioned on one quantized value \(\bar{\mathbf{x}}\), the shuffling operation becomes equivalent to sampling from \(p(\mathbf{x})p(\mathbf{y}|\bar{\mathbf{x}})\) instead.
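A sketch of this integration, under the assumption that batches have already been grouped by quantized value (`critic` and `classifier` are placeholder modules):

```python
import torch
import torch.nn.functional as F

def pq_mi_estimate(x, y, xbar, critic, classifier, h_xbar):
    """I(Q(x); y) + KL_f(p(x, y) || p(x) p(y | Q(x))) on one PQ batch.

    x, y       -- a batch in which every x shares the same quantized value
    xbar       -- long tensor of quantization indices Q(x) for the batch
    classifier -- s_psi(xbar | y): maps y to logits over quantization bins
    h_xbar     -- estimate of the discrete entropy H(Q(x)), in nats
    """
    # Generative term, Eq. (16): E[log s_psi(xbar | y)] + H(xbar).
    i_gen = -F.cross_entropy(classifier(y), xbar) + h_xbar
    # Discriminative term: shuffling y *within* this batch draws from
    # p(x) p(y | xbar) instead of p(x) p(y).
    y_shuffled = y[torch.randperm(y.shape[0])]
    f_joint = critic(x, y).squeeze(-1)
    f_prop = critic(x, y_shuffled).squeeze(-1)
    n = torch.tensor(float(f_prop.shape[0]))
    kl_f = f_joint.mean() - (torch.logsumexp(f_prop, dim=0) - torch.log(n))
    return i_gen + kl_f
```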
## 5 Related Work
Recent work on variational mutual information estimation (Poole et al., 2019) provides an overview of tractable and scalable objectives for estimating and optimizing Mutual Information (MI), identifying some of the characteristics and limitations of the two approaches. The authors analyze the trade-off between bias and variance for discriminative MI estimators focusing on critic architectures and the techniques used to estimate the normalization constant. They propose an interpolation between discriminative estimators with a low variance but high bias (van den Oord et al., 2018), and high-variance low-bias estimators (Nguyen et al., 2010).
Song & Ermon (2020) show that discriminative approaches such as Mutual Information Neural Estimation (MINE) (Belghazi et al., 2018) and the Nguyen-Wainright-Jordan (NWJ) method (Nguyen et al., 2010) exhibit variance that grows exponentially with the true amount of underlying information, making them less suitable for estimation of large information quantities. The authors propose to control the bias-variance trade-off of the aforementioned estimators by clipping the critic's values during training. Other closely related work focuses on the statistical limitation of MI estimators (McAllester & Stratos, 2020). The authors underline that the exponential variance scaling affects any variational lower bound and focus on generative mutual information estimation based on the difference of entropies (DoE).
Other works consider factorizing the discriminative approaches using the sum of conditional and marginal MI terms (Sordoni et al., 2021), expressing the density ratio as a sum of several simpler critics (Rhodes et al., 2020), and using Fenchel-Legendre transform for MI estimation and optimization (Guo et al., 2022) to reduce the dependency on large-batch training and increase bound tightness.
Our work further unifies these families of estimators, addressing the exponential scaling issue of discriminative estimators by bringing the distribution used to estimate the normalization constant closer to the joint distribution of the two variables. The effectiveness of the combination of normalized (generative) and unnormalized (discriminative) distributions has been shown in the context of energy-based models (Gao et al., 2020; Xiao et al., 2021). We further propose Predictive Quantization (PQ) as a simple yet effective generative estimator inspired by the literature on discrete mutual information estimation (Cover & Thomas, 2006; Gao et al., 2017) that can be easily combined with any of the aforementioned discriminative estimators to effectively reduce their variance.
## 6 Empirical Evaluation
We test our proposed hybrid models on a particularly challenging correlated mixture of normal distributions, and a discrete-time particle simulation task. These tasks have been selected to satisfy the following criteria.
1. Known True Mutual Information. Some of the estimators considered in this analysis are lower bounds, while others tend to overestimate the true value of mutual information. For this reason, we exclusively selected benchmarks in which the true value of mutual information is known.
2. Controllable Mutual Information. We consider tasks where the amount of information can be tuned by, e.g., increasing the dimensionality or the number of particles. We designed our experiments to make the difference between benchmarked estimators explicit by varying the batch size and the target mutual information.
### Models and Optimization
We evaluate the performance of several discriminative mutual information estimators that differ in the computation of the normalization constant, the parameterization of the energy function, and their optimization (see Appendix B for details regarding each estimator). For each discriminative model, we consider three proposals:
1. The product of the marginals \(p(\mathbf{x})p(\mathbf{y})\). This corresponds to using no generative component.
2. The product of the marginal \(p(\mathbf{x})\) and a conditional Normal proposal \(\mathcal{N}(\mathbf{y}|\mathbf{\mu}_{\theta}(\mathbf{x}),\mathbf{\sigma}_{\theta}^{2} (\mathbf{x}))\) (BA, DoE). The functions \(\mathbf{\mu}_{\theta}(\mathbf{x})\) and \(\mathbf{\sigma}_{\theta}^{2}(\mathbf{x})\) are parametrized by neural networks. Note that mutual information estimation using this approach requires estimating the entropy of the marginal distribution \(p(\mathbf{y})\).
3. The product of \(p(\mathbf{x})\) and \(p(\mathbf{y}|Q(\mathbf{x}))\) defined in section 4. This requires specifying a fixed quantization function \(Q(\mathbf{x})\) and a classifier \(s_{\psi}(Q(\mathbf{x})|\mathbf{y})\).
For a fair comparison, all the models in this analysis use the same neural architecture: simple Multi-Layer Perceptrons (MLPs) with ReLU activations (Nair & Hinton, 2010). All models are trained using Adam (Kingma and Ba, 2014) with a learning rate of 5e-4 for a total of 100,000 iterations.
### Tasks and Results
**Mixture of Correlated Normals.** Following previous work (Poole et al., 2019; Song and Ermon, 2019; Guo et al., 2022), we create a dataset by sampling \(100\,000\) points from a correlated distribution. Instead of using a two-dimensional correlated normal distribution, which simple generative estimators can easily fit, we use a mixture of 4 correlated normal distributions. The location and scale of each component are chosen such that both marginals \(p(\mathbf{x})\), \(p(\mathbf{y})\) and conditionals \(p(\mathbf{y}|\mathbf{x})\), \(p(\mathbf{x}|\mathbf{y})\) are bimodal distributions. The log-density of the product and joint distribution is visualized in Figure 1. Each pair of dimensions shares \(\approx 1.37\) nats of information. We stack five independent copies to reach an amount of mutual information that is challenging enough to expose the weaknesses of the models in this analysis. We define the quantization as a component-wise indicator function \(Q_{2}(\mathbf{x})=\mathbb{I}(\mathbf{x}>0)\), separating positive and negative values of \(\mathbf{x}\) for each dimension.
Figure 3(a) shows the amount of information estimated by several combinations of generative and discriminative estimators for a fixed batch size of 64 at the end of the training procedure. We note that for all analyzed models, adding a generative component results in improved (i.e., less biased) mutual information estimates compared to the purely discriminative baselines (in blue). This is observed for both estimators that underestimate and overestimate the information and holds for both normal proposals (in green) and proposals that use \(r(\mathbf{y}|Q_{2}(\mathbf{x}))\) (orange). However, it is worth mentioning that \(r(\mathbf{y}|Q_{2}(\mathbf{x}))\) does not require access to the entropy \(H(\mathbf{y})\). We further indicate the parts of the total information contributed by \(I_{r}(\mathbf{x};\mathbf{y})\) and \(\mathrm{KL}_{f}(p(\mathbf{x},\mathbf{y})||r(\mathbf{x},\mathbf{y}))\), respectively.
Figure 3(b) reports the effect of increasing the batch size on the bias and variance of the mutual information estimators. The addition of a generative component has a similar effect to an increased batch size. In particular, for estimators that suffer from high variance (such as MINE and NWJ), adding the PQ generative component is comparable to increasing the batch size by a factor of 64. Estimators based on InfoNCE, which are characterized by lower variance, can benefit from the addition of a generative component to lower their bias. Further analysis of the effect of the number of samples from the proposal is reported in Appendix D.2.
**Discrete Time Multi-Particle Simulation.** Stochastic processes characterize relevant problems in scientific discovery. For example, in a dynamical system (e.g., fluid dynamics or molecular dynamics), it can be of interest to estimate how much information is propagated through time. We simulate a dataset that aims to resemble the characteristics of a system of particles moving in a fixed energy landscape
\begin{table}
\begin{tabular}{l|c|c|c} \hline & \(r(\mathbf{x},\mathbf{y})\) & \(I_{r}(\mathbf{x};\mathbf{y})\) & Requirements \\ \hline BA (Barber and Agakov, 2003) & \(p(\mathbf{x})r_{\theta}(\mathbf{y}|\mathbf{x})\) & \(\mathbb{E}[\log r_{\theta}(\mathbf{y}|\mathbf{x})]+H(\mathbf{y})\) & \(H(\mathbf{y})\) known \\ DoE (McAllester and Stratos, 2020) & \(p(\mathbf{x})r_{\theta}(\mathbf{y}|\mathbf{x})\) & \(\mathbb{E}[\log r_{\theta}(\mathbf{y}|\mathbf{x})-\log s_{\xi}(\mathbf{y})]\) & \(p(\mathbf{y})\) fixed \\ GM (Song and Ermon, 2020) & \(r_{\theta}(\mathbf{x},\mathbf{y})\) & \(\mathbb{E}[\log r_{\theta}(\mathbf{x},\mathbf{y})-\log s_{\psi}(\mathbf{x})-\log s_{\xi}(\mathbf{y})]\) & \(p(\mathbf{x})\), \(p(\mathbf{y})\) fixed \\ PQ (**Ours**) & \(p(\mathbf{x})p(\mathbf{y}|Q(\mathbf{x}))\) & \(\mathbb{E}[\log s_{\psi}(\bar{\mathbf{x}}|\mathbf{y})]+H(\bar{\mathbf{x}})\) & \(\bar{\mathbf{x}}=Q(\mathbf{x})\) \\ \hline \end{tabular}
\end{table}
Table 2: Overview of the main generative approaches to mutual information estimation and their requirements.
Figure 3: (a) Blue: the estimated mutual information in nats of several discriminative estimators. Orange: mutual information estimates using our PQ method. Green: results obtained by including a learned conditional normal proposal. Hybrid estimators improve upon their discriminative counterparts. We further indicate the parts of the total information contributed by \(I_{r}(\mathbf{x};\mathbf{y})\) and \(\mathrm{KL}_{f}(p(\mathbf{x},\mathbf{y})||r(\mathbf{x},\mathbf{y}))\), respectively. (b) Bias and variance of the included discriminative estimators (denoted by colors) as a function of batch size. For all estimators, we report a lowered bias using PQ.
following discrete-time overdamped Langevin dynamics (Ermak and Yeh, 1974): \(\mathbf{x}_{t}=\mathbf{x}_{t-1}-\epsilon\nabla U(\mathbf{x}_{t-1})+\sqrt{2\epsilon/\beta}\boldsymbol{\eta}\), where \(\nabla U(\mathbf{x}_{t})\) refers to the gradient of an energy \(U(\mathbf{x}_{t})\) evaluated at a position \(\mathbf{x}_{t}\), and \(\boldsymbol{\eta}\) is time-independent noise sampled from a unit Normal \(\mathcal{N}(\mathbf{0},\mathbf{1})\). When the system is in equilibrium, \(\mathbf{x}_{t}\) follows the Boltzmann distribution \(p(\mathbf{x}_{t})\propto e^{-\beta U(\mathbf{x}_{t})}\). Therefore, it is possible to compute a good approximation of the entropy. Similarly, the conditional entropy of the transition distribution can be computed as a function of \(\epsilon\) and \(\beta\). As a result, in this simplified system, we can compute a faithful approximation of mutual information between the positions of a particle at consecutive time steps. We simulate a collection of five independent particles on a two-dimensional energy landscape characterized by multiple wells to create a ten-dimensional trajectory vector \(\mathbf{x}_{t}\) for which \(I(\mathbf{x}_{t};\mathbf{x}_{t+1})\approx 10.561\) nats. Furthermore, to introduce spurious correlation, we first concatenate 10-dimensional time-independent Normal noise and 10 constant dimensions before applying an invertible non-linear function. This last operation increases the dimensionality to 30 without effectively changing the value of mutual information. Further details on the dataset creation procedure can be found in Appendix D.3.
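A minimal sketch of such a simulation (the energy landscape below is illustrative, not the exact one used in the experiments):

```python
import numpy as np

rng = np.random.default_rng(0)
eps, beta = 0.01, 1.0                  # step size and inverse temperature
n_steps, n_particles = 10_000, 5

def grad_U(x):
    # Illustrative multi-well landscape U(x) = sum_i cos(3 x_i) + 0.1 ||x||^2,
    # differentiated analytically.
    return -3.0 * np.sin(3.0 * x) + 0.2 * x

x = rng.standard_normal((n_particles, 2))
trajectory = np.empty((n_steps, n_particles, 2))
for t in range(n_steps):
    eta = rng.standard_normal(x.shape)                        # unit Normal noise
    x = x - eps * grad_U(x) + np.sqrt(2.0 * eps / beta) * eta
    trajectory[t] = x
```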
To produce the quantization function used for this experiment, we first project the 30-dimensional feature space onto 10 principal components using Temporal Independent Component Analysis (TICA) (Molgedey and Schuster, 1994). TICA ensures that particles that correlate highly through time have similar representations. Secondly, we apply a K-nearest neighbors clustering to obtain \(Q(\mathbf{x})\) for all particles and time-steps of the dataset. The rationale behind this choice of the quantization function is to make sure that \(Q(\mathbf{x})\) captures temporal information by grouping those particles, thereby yielding a proposal distribution \(r_{Q}(\mathbf{x}_{t+1},\mathbf{x}_{t})\) that increases \(I(\mathbf{x}_{t};Q(\mathbf{x}_{t+1}))\).
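A sketch of this quantization pipeline; the TICA step is implemented directly as a generalized eigenproblem on instantaneous and time-lagged covariances, and k-means is used here as a stand-in for the clustering step (data and hyperparameters are placeholders):

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def tica_projection(X, lag, n_components=10, reg=1e-6):
    """Project features onto the slowest linear modes (a basic TICA sketch)."""
    X = X - X.mean(axis=0)
    C0 = X.T @ X / len(X)                          # instantaneous covariance
    Ct = X[:-lag].T @ X[lag:] / (len(X) - lag)     # time-lagged covariance
    Ct = 0.5 * (Ct + Ct.T)                         # symmetrize
    C0 = C0 + reg * np.eye(X.shape[1])             # regularize for stability
    vals, vecs = eigh(Ct, C0)                      # generalized eigenproblem
    slowest = np.argsort(vals)[::-1][:n_components]
    return X @ vecs[:, slowest]

X = np.random.default_rng(0).standard_normal((5000, 30))   # placeholder data
Z = tica_projection(X, lag=10)
Q = KMeans(n_clusters=16, n_init=10).fit_predict(Z)         # quantization Q(x)
```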
Figure 4(a) shows the mutual information results for discriminative (blue) and hybrid models using PQ as the generative component (orange). Overall, we observe that including a generative component yields improved mutual information estimates for the tested estimators. Figure 4(b) shows the effect of increasing the number of quantization intervals of \(Q(\mathbf{x})\) on the bias-variance trade-off. In this case, the number of intervals corresponds to the number of clusters used for the TICA-based quantization. Increasing the number of clusters results in a variance reduction for most of the estimators, except for InfoNCE-based estimators. This is because InfoNCE estimators are characterized by low variance and high bias, and the increase in variance is due to the estimation of the PQ generative component. We also notice that the bias for some estimators tends to increase for large numbers of quantized values. We hypothesize that this effect is due to the induced scarcity of the samples from \(p(\mathbf{y}|Q(\mathbf{x}))\).
## 7 Conclusion
We introduced a hybrid approach for mutual information estimates that generalizes discriminative and generative methods and combines advantages of both approaches. On top of that, we propose Predictive Quantization (PQ): a simple generative method that can be used to improve discriminative estimators. These contributions analytically yield improved mutual information estimates with lower variances. Theoretical results were confirmed experimentally on two challenging mutual information estimation tasks.
Limitations and Future WorkAlthough the hybrid approaches proposed in this work help to address limitations of generative and discriminative approaches in the literature, they introduce additional complexity due to the interplay between the two components. Nevertheless, we believe that simple (or non-parametric) proposals could be used together with discriminative models to maximize information between learned representations in future work.
Figure 4: (a) Shows the mutual information estimates of several approaches on the particle dataset. Using PQ (orange), all estimators yield better estimates closer to the true information than the baselines (blue). (b) Bias-variance analysis of the number of clusters used for PQ. Note that 0 clusters correspond to no generative estimator. |
2310.15285 | On the Dimensionality of Sentence Embeddings | Learning sentence embeddings is a fundamental problem in natural language
processing. While existing research primarily focuses on enhancing the quality
of sentence embeddings, the exploration of sentence embedding dimensions is
limited. Here we present a comprehensive and empirical analysis of the
dimensionality of sentence embeddings. First, we demonstrate that the optimal
dimension of sentence embeddings is usually smaller than the default value.
Subsequently, to compress the dimension of sentence embeddings with minimum
performance degradation, we identify two components contributing to the overall
performance loss: the encoder's performance loss and the pooler's performance
loss. Therefore, we propose a two-step training method for sentence
representation learning models, wherein the encoder and the pooler are
optimized separately to mitigate the overall performance loss in low-dimension
scenarios. Experimental results on seven STS tasks and seven sentence
classification tasks demonstrate that our method significantly improves the
performance of low-dimensional sentence embeddings. | Hongwei Wang, Hongming Zhang, Dong Yu | 2023-10-23T18:51:00Z | http://arxiv.org/abs/2310.15285v1 | # On the Dimensionality of Sentence Embeddings
###### Abstract
Learning sentence embeddings is a fundamental problem in natural language processing. While existing research primarily focuses on enhancing the quality of sentence embeddings, the exploration of sentence embedding dimensions is limited. Here we present a comprehensive and empirical analysis of the dimensionality of sentence embeddings. First, we demonstrate that the optimal dimension of sentence embeddings is usually smaller than the default value. Subsequently, to compress the dimension of sentence embeddings with minimum performance degradation, we identify two components contributing to the overall performance loss: the encoder's performance loss and the pooler's performance loss. Therefore, we propose a two-step training method for sentence representation learning models, wherein the encoder and the pooler are optimized separately to mitigate the overall performance loss in low-dimension scenarios. Experimental results on seven STS tasks and seven sentence classification tasks demonstrate that our method significantly improves the performance of low-dimensional sentence embeddings.
## 1 Introduction
Learning sentence representation is a fundamental problem in natural language processing. Sentence embeddings represent sentences as fixed-length vectors, which can be used in various downstream tasks, such as semantic textual similarity (STS) (Agirre et al., 2012, 2013; Marelli et al., 2014), information retrieval (Mitra et al., 2017; Karpukhin et al., 2020; Thakur et al., 2021), and sentiment analysis (Pang and Lee, 2005; Hu and Liu, 2004; Pang and Lee, 2004).
Existing work usually focuses on improving the quality of sentence embeddings by introducing novel model architectures or training strategies (Reimers and Gurevych, 2019; Liu et al., 2021; Gao et al., 2021; Chuang et al., 2022; Su et al., 2022). However, the exploration of sentence embedding dimensions remains limited. These sentence representation learning models typically employ the default dimension of the model's hidden states as the dimension of sentence embeddings (e.g., 768 for BERT\({}_{\text{base}}\)-like models and 1,024 for BERT\({}_{\text{large}}\)-like models). Nonetheless, the dimension plays a critical role in sentence embeddings, and many research questions regarding its impact on sentence embeddings remain unanswered. For instance, does the default dimension yield the best performance? Can the dimension of sentence embeddings be reduced to mitigate time and memory burdens in practical applications? Furthermore, how can we maintain the performance of sentence embeddings when their dimension is reduced?
In this paper, we aim to answer the above questions through a comprehensive and empirical study of the dimensionality of sentence embeddings. Unlike conventional post-processing dimension reduction methods, we propose a direct modification of the output dimension of the pooler in sentence representation learning models, as illustrated in Figure
Figure 1: The proposed architecture of a sentence representation learning model. The dimension of the pooler’s fully connected layer is changed from \(D\times D\) to \(D\times d\), where \(D\) is the hidden state dimension (768 for base models and 1,024 for large models), and \(d\) is the customizable sentence embedding dimension. The remaining part of the model (sentence encoder) is unchanged.
1. This approach enables us to generate sentence embeddings of any desired dimension while imposing minimal computational overhead. Subsequently, we evaluate sentence embeddings with various dimensions across various downstream tasks. The findings indicate that the optimal dimension for sentence embeddings tends to be smaller than the default value used in the literature.
Our findings also indicate a significant decline in the performance of sentence embeddings when the dimension is reduced beyond the optimal value. Therefore, we investigate whether the model's performance can be sustained in these low-dimension scenarios. This allows us to utilize sentence embeddings with even smaller dimensions in practical applications to further reduce time and memory overhead. Interestingly, we find that the model's performance deterioration in low-dimension scenarios is not solely attributed to the decrease of the pooler's output dimension, but also to the degradation in the quality of the sentence encoder's output. As a result, the performance loss can be divided into two components: the loss caused by the encoder and the loss caused by the pooler. We then propose a two-step training algorithm to mitigate the two aspects of the performance loss separately. First, on the encoder side, we replace the current "poorly-trained" encoder with a "well-trained" one. To achieve this, we train multiple models with different pooler output dimensions and select the best encoder to replace the current one. Next, on the pooler side, since the pooler and the new encoder have not been trained together, we fine-tune the pooler on top of the new encoder. This involves training the pooler from its current state while keeping the new encoder frozen, ensuring their compatibility and improving overall performance.
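Schematically, the two-step procedure can be expressed as follows, under the assumption that each model exposes `encoder` and `pooler` submodules (all names are illustrative):

```python
import torch

def two_step_training(candidate_models, score_encoder, train_pooler, target):
    """Step 1: pick the best encoder among models trained with different
    pooler output dimensions; Step 2: freeze it and fine-tune the target
    model's low-dimensional pooler on top of it."""
    best = max(candidate_models, key=lambda m: score_encoder(m.encoder))
    target.encoder.load_state_dict(best.encoder.state_dict())
    for p in target.encoder.parameters():          # freeze the new encoder
        p.requires_grad = False
    train_pooler(target)                           # update pooler weights only
    return target
```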
We conduct experiments on seven STS tasks and seven classification tasks. Our proposed training method consistently outperforms all baseline methods across all tasks, for instance, 1.50% to 4.92% improvement over the best baseline method on classification tasks. Remarkably, our method reduces the dimension of sentence embeddings from 768 to 128 with almost no performance loss (from 76.57% to 76.46% on STS tasks). In addition, we validate the effectiveness of the two steps in our proposed method by showing that their average improvement is 1.79% and 1.17% respectively when trained with SimCSE, and 13.16% and 0.83% respectively when trained with Sentence-BERT, on the STS-B dataset.
The key contributions of this paper are:
* We propose customizing the dimension of sentence embeddings by directly modifying the output dimension of the pooler.
* We demonstrate that the default dimension of sentence embeddings commonly used in the literature is usually suboptimal.
* We discover that the performance loss of low-dimensional sentence embeddings can be divided into the encoder's performance loss and the pooler's performance loss.
* We propose a two-step training method to reduce the two parts of the performance loss separately.
## 2 Sentence Embedding Compressor
Existing sentence representation learning models usually set the dimension of output sentence embeddings as the dimension of hidden states \(D\), i.e., \(D=768\) for base models and \(D=1,024\) for large models. However, it is worth noting that the default dimension may not always be optimal. Traditional dimension reduction methods, such as Principal Component Analysis (PCA) (Abdi and Williams, 2010), Isomap (Tenenbaum et al., 2000), and Locally Linear Embedding (LLE) (Roweis and Saul, 2000), are not suitable for our purpose here due to the following reasons: (1) Our objective is to conduct a comprehensive study on the impact of dimension, whereas these methods can only reduce dimension rather than increase it; (2) These methods typically require access to the entire evaluation set before executing the algorithms, which may not be feasible in practical scenarios like online or streaming settings; (3) Utilizing these methods as a post-processing step will introduce extra computational overhead, which, to some extent, contradicts our initial goal of dimension reduction.
We propose a straightforward and efficient approach to modify the dimension of sentence embeddings. As illustrated in the left half of Figure 1, a sentence representation learning model, such as BERT (Devlin et al., 2018) or RoBERTa (Liu et al., 2019), usually includes a pooler on top of the final hidden state of the [CLS] token. This pooler consists of a fully connected layer and a non-linear activation function. Initially, the pooler's purpose is to condense information from the input sentence into a fixed-sized representation without changing
the embedding's dimension. However, we can alter the output dimension of the fully connected layer in the pooler from the default \(D\) to a customizable value of \(d\). As a result, the pooler now serves as a compressor for sentence embeddings. Unlike conventional dimension reduction techniques, our method can generate sentence embeddings of any dimension. Furthermore, it does not require prior access to the entire evaluation set and has minimal impact on computational overhead.
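To make this concrete, the following is a minimal PyTorch sketch of such a pooler-as-compressor. The module name, example dimensions, and usage snippet are our own illustration rather than the authors' released code; the only detail carried over from the text is that the pooler is a fully connected layer followed by a non-linear activation applied to the final [CLS] hidden state.

```python
import torch
import torch.nn as nn

class CompressorPooler(nn.Module):
    """Pooler whose fully connected layer maps D -> d instead of D -> D."""

    def __init__(self, hidden_dim: int = 768, embed_dim: int = 128):
        super().__init__()
        self.dense = nn.Linear(hidden_dim, embed_dim)  # D x d instead of D x D
        self.activation = nn.Tanh()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, D); take the final hidden state of [CLS]
        cls_state = hidden_states[:, 0]
        return self.activation(self.dense(cls_state))

# Usage: a batch of two sequences of length 16 with hidden size D = 768
pooler = CompressorPooler(hidden_dim=768, embed_dim=128)
embeddings = pooler(torch.randn(2, 16, 768))
print(embeddings.shape)  # torch.Size([2, 128])
```

Because only the pooler's output dimension changes, the encoder and the rest of the training pipeline are untouched, which is what keeps the computational overhead minimal.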
## 3 The Impact of the Dimension of Sentence Embeddings
We conduct a study to examine the impact of the dimension of sentence embeddings on the performance of various downstream tasks. We select RoBERTa\({}_{\text{base}}\)Liu et al. (2019) as the sentence representation learning model and made its output dimension configurable. We utilize the unsupervised SimCSE Gao et al. (2021) as the training method, which takes an input sentence and predicts itself in a contrastive objective with dropout used as noise. Similar to SimCSE, we train the model on one million randomly sampled sentences from English Wikipedia, then apply the model to the following downstream tasks: (1) TREC, a question classification dataset containing 500 labeled questions in the test set with 6 class labels; (2) STS-B, a semantic textual similarity dataset containing 1,379 sentence pairs in the test set with 5 similarity grades; (3) CR, a binary sentiment classification dataset containing 3,773 sentences; (4) MRPC, a binary paraphrase detection dataset containing 1,726 sentence pairs in the test set.
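For reference, the contrastive objective described above can be sketched as follows. This is our own minimal illustration: the function assumes the same batch of sentences has already been encoded twice, so that dropout yields two slightly different embedding matrices, and the temperature value is an assumption based on the SimCSE paper.

```python
import torch
import torch.nn.functional as F

def simcse_loss(z1: torch.Tensor, z2: torch.Tensor,
                temperature: float = 0.05) -> torch.Tensor:
    """z1, z2: two dropout-noised encodings of the same batch, shape (B, d).
    Positive pairs lie on the diagonal of the cosine-similarity matrix."""
    sim = F.cosine_similarity(z1.unsqueeze(1), z2.unsqueeze(0), dim=-1) / temperature
    labels = torch.arange(z1.size(0), device=z1.device)  # row i matches column i
    return F.cross_entropy(sim, labels)

# Example with batch size 8 and sentence embedding dimension d = 128
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(simcse_loss(z1, z2))
```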
The results of the accuracy / Spearman's correlation of SimCSE-RoBERTa\({}_{\text{base}}\) on the four datasets are presented in Figure 2. The sentence embedding dimension \(d\) ranges from 2,048 to 4, with the default value being \(D=768\). As the sentence embedding dimension increases from 768, the performance consistently remains stable across all four datasets. However, when the dimension decreases from 768, we observe distinct patterns in the performance curves: The performance on TREC (the red curve) continuously decreases, and the performance on STS-B and CR (the yellow and the blue curves) initially remains stable, then drops sharply. Conversely, the performance on MRPC (the green curve) remains consistently stable throughout.
It can be concluded that the optimal1 dimension of sentence embeddings varies across different downstream tasks. Specifically, the optimal dimensions for TREC, STS-B, CR, and MRPC are 768, 256, 256, and 16, respectively. One possible explanation for this variation is that downstream tasks exhibit different levels of difficulty, requiring varying amounts of information to be stored in embeddings to achieve the best performance. This observation motivates further exploring the dimensionality of sentence embeddings, particularly to enhance model performance in low-dimension scenarios.
Footnote 1: Although there is no strict definition for “optimal”, it can generally be understood as the dimension that maintains the best performance while being as small as possible.
## 4 The Proposed Approach
### Performance Loss Decomposition
According to the result presented in Figure 2, the performance of sentence embeddings on most tasks declines as their dimension decreases. The primary reason for the performance loss is that sentence embeddings become too short to retain sufficient information for downstream tasks. Nevertheless, given that the entire model is trained end-to-end, it is intriguing to examine whether the encoder component is affected when the output dimension of the pooler decreases. Therefore, we denote the final hidden state of [CLS] as the "output of the encoder" and utilize it as the sentence embedding for downstream tasks.
The results of using the encoder's output and the pooler's output as sentence embeddings on the STS-B dataset are presented in Figure 3. Interestingly, when the pooler's output dimension \(d\) decreases, the encoder's performance consistently declines for all four models, even though the dimension of the encoder's output remains unchanged. This finding
Figure 2: The results of accuracy / Spearman’s correlation of SimCSE-RoBERTa\({}_{\text{base}}\) on four different datasets. The sentence embedding dimension \(d\) is varied from 2,048 to 4 (the default value is \(D=768\)).
suggests that the performance loss is not solely attributed to the decrease of the pooler's output dimension but also to a deterioration in the quality of the encoder's output.
Figure 3(c) illustrates that the performance loss can thus be divided into two components: performance loss caused by the encoder and performance loss caused by the pooler. This decomposition of performance loss enables an in-depth understanding of the model's behavior in low-dimensional scenarios. Furthermore, it provides valuable insights into strategies that can improve the model's performance when working with smaller sentence embedding dimensions: By separately addressing the performance loss of the encoder and the pooler, we can effectively enhance the performance of the entire model and subsequently combine the two modules to achieve better outcomes.
### Reducing Performance Loss of the Encoder
Figure 3 indicates that the encoder's performance declines noticeably as the pooler's output dimension \(d\) decreases. It is worth noting that the encoder's architecture remains unchanged regardless of \(d\). As a result, we can easily replace a "pool-trained" encoder with a "well-trained" one to evaluate if the model's overall performance can be enhanced. We conduct end-to-end training of the SimCSE-RoBERTa\({}_{\text{base}}\) using different pooler output dimensions \(d\) (ranging from 768 to 4). This results in a model consisting of encoder\({}_{d}\) and pooler\({}_{d}\). We then combine each possible encoder\({}_{i}\) and pooler\({}_{j}\), and utilize the new model encoder\({}_{i}\) + pooler\({}_{j}\) to generate sentence embeddings.
In Figure 4, each cell in the heatmap represents the Spearman's correlation of a combined model on the STS-B dataset. Replacing the encoder with a superior one can usually substantially enhance the model's overall performance. For instance, the initial performance of the end-to-end training model with \(d=16\) (encoder\({}_{16}\) + pooler\({}_{16}\)) is 65.8, but it can be further elevated to 72.6 by replacing encoder\({}_{16}\) with encoder\({}_{64}\).
We thus propose a method to reduce the performance loss of the encoder, which is illustrated in Figure 4(a). Given the target dimension \(d\), we first train a sentence representation learning model with the pooler's output dimension being \(d\), which consists of encoder\({}_{d}\) and pooler\({}_{d}\). Meanwhile, we train multiple models with other pooler's output dimensions (e.g., 512, 256,...). From these models, we select the dimension \(opt\) that yields the optimal performance for encoder\({}_{opt}\) on a validation set. Lastly, we replace the original encoder\({}_{d}\) with encoder\({}_{opt}\) to improve the overall performance.
### Reducing Performance Loss of the Pooler
Unlike the encoder, replacing pooler\({}_{d}\) with a different pooler\({}_{d^{\prime}}\) is not feasible since the output dimension of the pooler must be exactly the target dimension \(d\). It is important to note that pooler\({}_{d}\) is
Figure 4: The results of Spearman’s correlation of all possible combinations of encoders and poolers on the STS-B dataset. See Section 4.2 for details.
Figure 3: The results of using the output of the encoder (red curves) and the output of the pooler (blue curves) as sentence embeddings on the STS-B dataset. The training method is SimCSE and the sentence encoders are BERT\({}_{\text{base}}\), BERT\({}_{\text{large}}\), RoBERTa\({}_{\text{base}}\) and RoBERTa\({}_{\text{large}}\), respectively. Figure 3(c) illustrates that the performance loss can be divided into the performance loss of the encoder and the performance loss of the pooler.
trained jointly with encoder\({}_{d}\) rather than the current encoder\({}_{opt}\), which implies that the parameters of pooler\({}_{d}\) may not be optimal for encoder\({}_{opt}\). Therefore, as illustrated in Figure 4(b), we freeze the parameters of encoder\({}_{opt}\) and only fine-tune pooler\({}_{d}\), until the model achieves the optimal performance.
Here, we would like to emphasize the following points: (1) The parameters of encoder\({}_{opt}\) should remain unchanged, as encoder\({}_{opt}\) is already the optimal encoder. If encoder\({}_{opt}\) is fine-tuned together with pooler\({}_{d}\), we would revert to the initial end-to-end training scenario, which has been shown to yield suboptimal performance. (2) Pooler\({}_{d}\) should not be trained from scratch with randomly initialized parameters but rather fine-tuned starting from its current parameters, as it provides an excellent starting point. Our experimental results also validate that fine-tuning from the current parameters outperforms training from randomly initialized parameters.
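A minimal PyTorch sketch of this fine-tuning setup follows; the function name and learning rate are illustrative assumptions, and the encoder and pooler stand for any `nn.Module` implementations of the two components.

```python
import torch

def step2_optimizer(encoder_opt: torch.nn.Module,
                    pooler_d: torch.nn.Module,
                    lr: float = 1e-4) -> torch.optim.Optimizer:
    """Freeze encoder_opt and fine-tune pooler_d from its current parameters."""
    for p in encoder_opt.parameters():
        p.requires_grad = False  # point (1): the optimal encoder stays fixed
    # Point (2): the optimizer receives pooler_d's existing parameters, so
    # fine-tuning starts from their current values, not a random re-initialization
    return torch.optim.AdamW(pooler_d.parameters(), lr=lr)
```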
### A Two-Step Training Algorithm
Our proposed two-step training approach is outlined in Algorithm 1. The algorithm consists of two steps. In the first step, the primary objective is to acquire pooler\({}_{d}\) (line 2) and the optimal encoder encoder\({}_{opt}\) (line 6). Subsequently, the second step involves fine-tuning pooler\({}_{d}\) while keeping encoder\({}_{opt}\) frozen (line 8).
```
Input:  An unsupervised training corpus T, a validation dataset E,
        the target dimension d, a base sentence representation learning
        model M with customizable output dimension
Output: A well-trained sentence representation learning model with
        output dimension d

// Step 1: Reducing the encoder's loss
1  Determine the set of candidate dimensions D;
2  Train M with out_dim = d on T and obtain pooler_d;
3  for d' in D do
4      Train M with out_dim = d' on T and obtain encoder_d';
5      Evaluate encoder_d' on E;
6  Select the best encoder from {encoder_d' : d' in D} and denote it encoder_opt;

// Step 2: Reducing the pooler's loss
7  Concatenate encoder_opt and pooler_d;
8  Fine-tune pooler_d on T with encoder_opt frozen, obtaining new-pooler_d;
9  return encoder_opt + new-pooler_d
```
**Algorithm 1** Two-Step Training Approach
The time complexity analysis of Algorithm 1 is as follows. We use \(C\) to denote the time required for training the entire model \(M\) once. In step 1, we train the model \(M\) a total of \(|\mathcal{D}|+1\) times, resulting in a cost of \((|\mathcal{D}|+1)C\). The time complexity of the encoder evaluation is negligible compared to the training process. In step 2, the encoder is frozen, and only the pooler undergoes training. Since the pooler is merely a fully connected layer, while the encoder is typically much more complex than the pooler, the fine-tuning time for the pooler is negligible compared to \(C\). Therefore, the overall time complexity of Algorithm 1 is \(O(|\mathcal{D}|C)\).
Our algorithm generally requires more running time when the candidate dimension set \(\mathcal{D}\) is larger. However, a larger pool of \(\mathcal{D}\) will also increase the probability that encoder\({}_{opt}\) performs better, thereby improving the final performance.
## 5 Experiments
### Experimental Setup
We evaluate our proposed two-step training algorithm on two types of datasets:
* STS datasets. We include seven STS datasets in our experiments: STS 2012-2016 (Agirre et al., 2012, 2013, 2014, 2015, 2016), STS Benchmark (Cer et al., 2017), and SICK-Relatedness (Marelli et al., 2014). Each dataset consists of sentence pairs and their corresponding ground-truth similarity scores. We use Spearman's correlation to evaluate the predicted results of our method and all baseline methods on the test set.
* Sentence classification datasets. These include MR (Pang and Lee, 2005), CR (Hu
Figure 5: Illustration of reducing performance loss of the encoder and the pooler for a sentence representation learning model.
and Liu, 2004), SUBJ (Pang and Lee, 2004), MPQA (Wiebe et al., 2005), SST (Socher et al., 2013), TREC (Voorhees and Tice, 2000), and MRPC (Dolan and Brockett, 2005). A logistic regression classifier is trained on top of (frozen) sentence embeddings. Each dataset consists of sentences and their class labels. Accuracy is used as the evaluation metric. We follow default configurations from SentEval2.
Footnote 2: [https://github.com/facebookresearch/SentEval](https://github.com/facebookresearch/SentEval)
We use three traditional dimension reduction methods as baseline methods, including Principal Component Analysis (PCA), Isomap (Tenenbaum et al., 2000), and Locally Linear Embedding (LLE) (Roweis and Saul, 2000). PCA is a linear dimension reduction method, while Isomap and LLE are nonlinear. We use the embeddings of the first 2,000 sentences from the unsupervised English Wikipedia (Gao et al., 2021) as training data for these models. In addition, we also compare our method to the direct end-to-end training method using SimCSE (Gao et al., 2021).
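A sketch of how such a post-processing baseline is applied, using scikit-learn's PCA (random arrays stand in for the actual sentence embeddings):

```python
import numpy as np
from sklearn.decomposition import PCA

# Embeddings of the first 2,000 training sentences (d = 768) fit the reduction
train_embeddings = np.random.randn(2000, 768)
pca = PCA(n_components=128)
pca.fit(train_embeddings)

# Every evaluation-time embedding must then be projected as an extra step,
# which is the post-processing overhead discussed in Section 2
test_embeddings = np.random.randn(10, 768)
reduced = pca.transform(test_embeddings)
print(reduced.shape)  # (10, 128)
```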
### Result of the Proposed Approach
The results of Spearman's correlation for our proposed method on the STS-B dataset are presented in Tables 1 and 2. We select RoBERTa\({}_{\text{base}}\) as the base model for both experiments. Table 1 presents the result of using the contrastive loss in SimCSE as the training objective (see Section 3 for training details). Table 2 presents the result of using the softmax classification loss in Sentence-BERT as the training objective. Specifically, following Sentence-BERT, we use SNLI and MNLI datasets as the training data. For a pair of premise and hypothesis in SNLI/MNLI denoted as \(u\) and \(v\), we first calculate their sentence embeddings \(\mathbf{u}\) and \(\mathbf{v}\), and then concatenate \(\mathbf{u}\), \(\mathbf{v}\), and \(\mathbf{u}-\mathbf{v}\), followed by a 3-way softmax classifier. The pooling function is _cls_. The batch size is 64. Other hyperparameters are the same as reported in the Sentence-BERT paper.
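The classification head used for this objective can be sketched as follows; we follow the feature combination \(u\), \(v\), \(u-v\) stated above, and the module name is our own illustration.

```python
import torch
import torch.nn as nn

class SoftmaxHead(nn.Module):
    """3-way NLI classifier over the concatenation (u, v, u - v)."""

    def __init__(self, d: int):
        super().__init__()
        self.classifier = nn.Linear(3 * d, 3)

    def forward(self, u: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
        features = torch.cat([u, v, u - v], dim=-1)
        return self.classifier(features)

head = SoftmaxHead(d=128)
u, v = torch.randn(64, 128), torch.randn(64, 128)  # batch size 64, as above
logits = head(u, v)  # trained with cross-entropy against the SNLI/MNLI labels
```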
\begin{table}
\begin{tabular}{l|cccccccc}
\hline \hline
Pooler’s output dim \(d\) & 768 & 512 & 256 & 128 & 64 & 32 & 16 & 8 \\
\hline
Encoder\({}_{d}\) + pooler\({}_{d}\) (end-to-end training) & 79.62 & 79.22 & 79.11 & 78.17 & 77.72 & 74.02 & 65.82 & 51.72 \\
\hline
Encoder\({}_{opt}\) + pooler\({}_{d}\) (after step 1) & 80.08 & 79.26 & 79.11 & 78.66 & 77.94 & 74.78 & 69.62 & 60.27 \\
Improvement over end-to-end training & +0.46 & +0.04 & +0.00 & +0.49 & +0.22 & +0.76 & +3.80 & +8.55 \\
\hline
Encoder\({}_{opt}\) + new-pooler\({}_{d}\) (after step 2) & 80.49 & 80.25 & 79.32 & 79.93 & 78.03 & 75.20 & 71.71 & 64.15 \\
Improvement over step 1 & +0.41 & +0.99 & +0.21 & +1.27 & +0.09 & +0.42 & +2.09 & +3.88 \\
\hline \hline
\end{tabular}
\end{table}
Table 1: The results of Spearman’s correlation (in %) of our proposed algorithm on the STS-B dataset using the contrastive loss in SimCSE as the training objective. The base model is RoBERTa\({}_{\text{base}}\). (1) The first block is the result of end-to-end training. (2) The second block is the result of step 1 of our proposed algorithm; the second line gives the absolute improvement over the first block. (3) The third block is the result of step 2 of our proposed algorithm; the second line gives the absolute improvement over the second block. \(opt=256\) for this experiment according to the first row of Figure 4.
\begin{table}
\begin{tabular}{l|cccccccc}
\hline \hline
Pooler’s output dim \(d\) & 768 & 512 & 256 & 128 & 64 & 32 & 16 & 8 \\
\hline
Encoder\({}_{d}\) + pooler\({}_{d}\) (end-to-end training) & 70.12 & 69.92 & 63.80 & 60.12 & 56.51 & 52.84 & 49.49 & 39.29 \\
\hline
Encoder\({}_{opt}\) + pooler\({}_{d}\) (after step 1) & 73.50 & 69.92 & 73.34 & 73.53 & 72.50 & 71.46 & 68.36 & 64.78 \\
Improvement over end-to-end training & +3.38 & +0.00 & +9.54 & +13.41 & +15.99 & +18.62 & +18.87 & +25.49 \\
\hline
Encoder\({}_{opt}\) + new-pooler\({}_{d}\) (after step 2) & 73.61 & 70.14 & 73.95 & 73.81 & 73.12 & 72.88 & 70.58 & 65.92 \\
Improvement over step 1 & +0.11 & +0.22 & +0.61 & +0.28 & +0.62 & +1.42 & +2.22 & +1.14 \\
\hline
Encoder\({}_{d}\) (pooler\({}_{d}\) used only in training) & 73.47 & 73.83 & 66.66 & 63.32 & 61.69 & 57.45 & 56.42 & 47.37 \\
\hline \hline
\end{tabular}
\end{table}
Table 2: The results of Spearman’s correlation (in %) of our proposed algorithm on the STS-B dataset using the softmax classification loss in Sentence-BERT as the training objective. The first three blocks are structured as in Table 1. The last block is the result of using encoder\({}_{d}\) + pooler\({}_{d}\) for end-to-end training but only encoder\({}_{d}\) for inference. The base model is RoBERTa\({}_{\text{base}}\). \(opt=512\) for this experiment according to the last block.
In Tables 1 and 2, the first block shows the results of encoder\({}_{d}\) + pooler\({}_{d}\), representing the end-to-end training approach. In the second block, we present the results of encoder\({}_{opt}\) + pooler\({}_{d}\), corresponding to step 1 of our proposed algorithm. The third block results from encoder\({}_{opt}\) + new-pooler\({}_{d}\), corresponding to step 2 of our proposed algorithm. In addition, in Table 2, we also present the result of Encoder\({}_{d}\) as the last block, in which encoder\({}_{d}\) + pooler\({}_{d}\) is used in end-to-end training but only encoder\({}_{d}\) is used in inference. Note that \(opt=256\) in Table 1 while \(opt=512\) in Table 2. The absolute improvement achieved by step 1 and step 2 is also presented.
We observe that step 1 and step 2 of our method both yield significant enhancement to the model's performance. The average absolute improvement achieved by step 1 and step 2 is 1.79% and 1.17% respectively using SimCSE, and is \(13.16\%\) and \(0.83\%\) respectively using Sentence-BERT. Notably, the improvement brought about by step 1 surpasses that of step 2 for two primary reasons. First, step 2 faces a greater challenge in improving the model since step 1 has already substantially enhanced its performance. Second, the encoder is typically more complicated than the pooler, which offers greater potential for step 1 to improve the performance. Moreover, the improvement is particularly pronounced when the dimension \(d\) is smaller, as the model has more room for improvement in low-dimensional scenarios.
### Comparison with Baseline Methods
The results of comparing with baseline methods on STS tasks and classification tasks are presented in Tables 3 and 4, respectively. Each block corresponds to a specific dimension of sentence embeddings. We do not show the results of \(d=512\) and \(d=256\) because their results are quite close to \(d=768\). It is evident that our method consistently achieves the best performance across almost all
\begin{table}
\begin{tabular}{l|cccccccc}
\hline \hline
**Methods** & **STS12** & **STS13** & **STS14** & **STS15** & **STS16** & **STS-B** & **SICK-R** & **Avg.** \\
\hline \hline
\multicolumn{9}{c}{\(d=768\) (w/o dimension reduction)} \\
SimCSE-RoBERTa\({}_{\text{base}}\) & 70.16 & 81.77 & 73.24 & 81.36 & 80.65 & 80.22 & 68.56 & 76.57 \\
\hline
\multicolumn{9}{c}{\(d=128\)} \\
PCA & 65.79 & 76.83 & 67.84 & 76.99 & 74.93 & 74.73 & 62.22 & 71.33 \\
Isomap & 51.55 & 56.97 & 45.52 & 53.17 & 56.01 & 49.26 & 51.36 & 51.98 \\
LLE & 38.54 & 45.41 & 34.76 & 42.42 & 45.69 & 40.22 & 42.15 & 41.31 \\
SimCSE-RoBERTa\({}_{\text{base}}\) w/ end-to-end training & 69.10 & 79.69 & 70.80 & 78.70 & 79.66 & 78.17 & 68.32 & 74.92 \\
SimCSE-RoBERTa\({}_{\text{base}}\) w/ two-step training & **70.15** & **80.66** & **72.38** & **81.74** & **80.62** & **79.93** & **69.77** & **76.46** \\
\hline
\multicolumn{9}{c}{\(d=64\)} \\
PCA & 65.94 & 75.81 & 66.72 & 75.97 & 73.78 & 73.08 & 60.46 & 70.25 \\
Isomap & 49.69 & 54.85 & 43.25 & 49.90 & 53.99 & 46.82 & 49.32 & 49.69 \\
LLE & 33.54 & 42.57 & 32.38 & 38.78 & 40.24 & 36.40 & 37.04 & 37.28 \\
SimCSE-RoBERTa\({}_{\text{base}}\) w/ end-to-end training & 66.29 & 78.76 & 70.55 & **80.18** & 78.33 & 77.72 & 67.85 & 74.24 \\
SimCSE-RoBERTa\({}_{\text{base}}\) w/ two-step training & **68.73** & **80.34** & **71.63** & 79.90 & **79.61** & **78.03** & **68.62** & **75.27** \\
\hline
\multicolumn{9}{c}{\(d=32\)} \\
PCA & 65.04 & 72.92 & 64.14 & 73.16 & 71.31 & 69.15 & 58.08 & 67.69 \\
Isomap & 46.36 & 50.89 & 40.23 & 45.92 & 51.00 & 44.61 & 47.00 & 46.57 \\
LLE & 32.33 & 35.37 & 24.99 & 30.81 & 35.84 & 32.37 & 33.09 & 32.11 \\
SimCSE-RoBERTa\({}_{\text{base}}\) w/ end-to-end training & 63.33 & 77.71 & 66.67 & 76.06 & 76.23 & 74.02 & 67.22 & 71.61 \\
SimCSE-RoBERTa\({}_{\text{base}}\) w/ two-step training & **67.89** & **77.80** & **69.77** & **77.66** & **77.38** & **75.20** & **68.26** & **73.42** \\
\hline
\multicolumn{9}{c}{\(d=16\)} \\
PCA & 62.75 & 67.67 & 59.97 & 68.13 & 67.23 & 63.46 & 55.53 & 63.53 \\
Isomap & 42.44 & 44.79 & 34.57 & 42.42 & 45.63 & 40.33 & 43.43 & 41.94 \\
LLE & 28.55 & 33.95 & 23.66 & 29.67 & 34.13 & 30.82 & 32.01 & 30.40 \\
SimCSE-RoBERTa\({}_{\text{base}}\) w/ end-to-end training & 54.16 & 67.31 & 55.85 & 63.92 & 70.70 & 65.82 & 63.64 & 63.06 \\
SimCSE-RoBERTa\({}_{\text{base}}\) w/ two-step training & **64.82** & **75.16** & **65.32** & **74.87** & **74.25** & **71.71** & **66.15** & **70.33** \\
\hline
\multicolumn{9}{c}{\(d=8\)} \\
PCA & 53.31 & 56.66 & 51.23 & 59.86 & 60.52 & 52.58 & 49.85 & 54.86 \\
Isomap & 38.54 & 39.11 & 30.79 & 39.32 & 41.48 & 35.94 & 37.98 & 37.59 \\
LLE & 30.01 & 33.85 & 22.86 & 33.06 & 37.88 & 28.66 & 33.50 & 31.40 \\
SimCSE-RoBERTa\({}_{\text{base}}\) w/ end-to-end training & 50.92 & 51.26 & 43.27 & 59.03 & 58.84 & 51.72 & 58.03 & 53.30 \\
SimCSE-RoBERTa\({}_{\text{base}}\) w/ two-step training & **60.89** & **65.25** & **59.01** & **65.86** & **66.84** & **64.15** & **59.61** & **63.09** \\
\hline \hline
\end{tabular}
\end{table}
Table 3: The results of Spearman’s correlation (in %) on seven STS datasets. Each block corresponds to a specific dimension of sentence embeddings. The highest numbers across all methods are highlighted.
cases. For example, when \(d=32\), our method outperforms the best traditional dimension reduction method by 5.73% and 8.96% on average for STS tasks and classification tasks, respectively. Similar to Table 1, the improvement becomes more significant when the dimension decreases.
It is exciting to observe that our method exhibits minimal performance degradation when \(d\) decreases from 768 to 128 (from 76.57% to 76.46% on STS tasks), indicating that sentence embeddings can be effectively compressed to just \(1/6\) of the original size with almost no loss in performance. We also observe that, despite being a linear dimension reduction method, PCA consistently outperforms the other two nonlinear dimension reduction methods.
## 6 Related Work
**Sentence Representation Learning**
Researchers have proposed numerous methods for sentence representation learning. For example, SBERT (Reimers and Gurevych, 2019) uses siamese and triplet network structures to derive semantically meaningful sentence embeddings that can be compared using cosine-similarity. DPR (Karpukhin et al., 2020) uses embeddings for information retrieval, which are learned from a small number of questions and passages by a simple dual-encoder framework. SimCSE (Gao et al., 2021) takes an input sentence and predicts itself in a contrastive objective with dropout used as noise. Building upon SimCSE, DiffCSE (Chuang et al., 2022) and ESimCSE (Wu et al., 2021) further enhance the method by improving the sampling approach. InstructOR (Su et al., 2022) embeds every text together with instructions explaining the use case, which can generate text embeddings for different downstream tasks and domains without further training. However, these works overlook the study of how the dimension of sentence embeddings
\begin{table}
\begin{tabular}{l|cccccccc}
\hline \hline
**Methods** & **MR** & **CR** & **SUBJ** & **MPQA** & **SST** & **TREC** & **MRPC** & **Avg.** \\
\hline \hline
\multicolumn{9}{c}{\(d=768\) (w/o dimension reduction)} \\
SimCSE-RoBERTa\({}_{\text{base}}\) & 81.04 & 87.74 & 93.28 & 86.94 & 86.60 & 84.60 & 73.68 & 84.84 \\
\hline
\multicolumn{9}{c}{\(d=128\)} \\
PCA & 72.76 & 79.93 & 84.33 & 79.03 & 76.93 & 68.00 & 62.13 & 74.73 \\
Isomap & 58.82 & 65.46 & 72.20 & 68.03 & 57.39 & 31.20 & 67.13 & 60.03 \\
LLE & 58.07 & 65.41 & 69.64 & 68.99 & 57.66 & 28.00 & 66.49 & 59.18 \\
SimCSE-RoBERTa\({}_{\text{base}}\) w/ end-to-end training & 77.52 & **84.40** & 89.99 & 81.86 & 82.26 & 75.20 & 72.64 & 80.55 \\
SimCSE-RoBERTa\({}_{\text{base}}\) w/ two-step training & **79.00** & 84.11 & **91.20** & **84.35** & **83.69** & **80.00** & **73.91** & **82.32** \\
\hline
\multicolumn{9}{c}{\(d=64\)} \\
PCA & 70.69 & 78.29 & 80.46 & 76.28 & 73.69 & 56.80 & 61.20 & 71.06 \\
Isomap & 58.06 & 64.85 & 70.41 & 67.61 & 58.98 & 32.60 & 67.36 & 59.98 \\
LLE & 55.90 & 63.81 & 64.75 & 68.98 & 55.52 & 24.80 & 66.49 & 57.18 \\
SimCSE-RoBERTa\({}_{\text{base}}\) w/ end-to-end training & 72.82 & 79.31 & 84.65 & 77.82 & 76.55 & 65.40 & 73.28 & 75.69 \\
SimCSE-RoBERTa\({}_{\text{base}}\) w/ two-step training & **76.52** & **83.66** & **90.24** & **80.90** & **81.93** & **75.60** & **75.42** & **80.61** \\
\hline
\multicolumn{9}{c}{\(d=32\)} \\
PCA & 66.02 & 74.02 & 76.44 & 71.63 & 71.22 & 48.20 & 64.28 & 67.40 \\
Isomap & 57.96 & 66.67 & 71.15 & 68.68 & 58.05 & 34.00 & 67.07 & 60.51 \\
LLE & … & … & … & … & … & … & … & … \\
\hline \hline
\end{tabular}
\end{table}
Table 4: The results of accuracy (in %) on seven classification datasets. Each block corresponds to a specific dimension of sentence embeddings. The highest numbers across all methods are highlighted.
impacts the model's performance. In contrast, our work focuses on enhancing the performance of sentence embeddings in low-dimensional scenarios. Our proposed training algorithm can be employed in conjunction with any Transformer-based language model and the aforementioned sentence representation learning methods.
**Dimension Reduction**
Dimension reduction is a technique that reduces the number of features in a dataset while preserving the essential information. For instance, PCA Abdi and Williams (2010) is a linear dimensionality reduction technique that finds a new set of uncorrelated variables (principal components) by projecting the data onto a lower-dimensional subspace while maximizing the variance. Isomap Tenenbaum et al. (2000) is a nonlinear dimensionality reduction algorithm that preserves the geodesic distances between data points, creating a low-dimensional embedding that captures the intrinsic structure of the data manifold. LLE Roweis and Saul (2000) is a nonlinear dimensionality reduction method that seeks to preserve local relationships between neighboring data points, constructing a lower-dimensional representation based on linear combinations of these neighbors. However, as discussed earlier, these traditional dimension reduction methods are not suitable for our task as they require access to the entire evaluation set in advance and they introduce additional computation cost. Another related work is Yin and Shen (2018), which theoretically studies the optimal dimension of word embeddings.
## 7 Conclusion
This paper presents a comprehensive and empirical study on the dimensionality of sentence embeddings. First, we propose customizing the dimension of sentence embeddings by directly modifying the pooler's output dimension. Subsequently, we demonstrate that the default dimension (768 or 1,024) of sentence embeddings commonly used in the literature is usually suboptimal. To enhance the performance of low-dimensional sentence embeddings, we decompose the performance loss into the encoder's loss and the pooler's loss. We then introduce a two-step training method that separately addresses the two parts of the performance loss. Experimental results demonstrate that our proposed training method consistently enhances the performance of sentence embeddings with low dimensions across all tasks.
### Limitations
In this paper, we aim to thoroughly comprehend the dimensionality of sentence embeddings, focusing primarily on empirical and experimental aspects. However, note that there remain unanswered questions concerning the dimension of sentence embeddings, especially from a theoretical perspective, which we leave as future work.
Firstly, Figure 3 illustrates that reducing the output dimension of the pooler leads to worse performance of the encoder. One possible explanation is that when the dimension is too small, sentence embeddings are unable to capture all the information in sentences, resulting in an inadequate representation of sentences. Consequently, the quality of the back-propagated signal from the pooler diminishes, which hinders the effective training of the encoder. However, a theoretical understanding of this phenomenon is currently lacking.
Secondly, as depicted in Figure 4, replacing the current encoder encoder\({}_{d}\) with a "well-trained" encoder\({}_{opt}\) improves the performance of pooler\({}_{d}\)'s output. It should be noted that encoder\({}_{opt}\) and pooler\({}_{d}\) are not trained jointly, which implies that the output embedding space of encoder\({}_{opt}\) and the input embedding space of pooler\({}_{d}\) are not aligned. This suggests that a simple concatenation of encoder\({}_{opt}\) and pooler\({}_{d}\) might not produce embeddings with physical meaning. However, experimental results demonstrate the effectiveness of this substitution strategy. The exact reason behind the improvement remains unknown.
Lastly, an intriguing relationship exists between PCA and the pooler of language models. While PCA applies a linear transformation to sentence embeddings, the pooler applies a linear transformation followed by a nonlinear function (tanh in our model). Notably, we also experiment with removing the nonlinear function from the pooler, and find that the model's performance did not significantly change. Therefore, the pooler can be considered as a rough approximation of a PCA layer, and we indeed discover that PCA is the most effective dimension reduction approach among the baseline methods. Given that the linear transformation in PCA aims to project data onto a low-dimensional space while maximizing the variance, it is intriguing to investigate how the pooler projects sentence embeddings and whether a theoretical connection exists between the linear transformation in PCA and the pooler. |
2304.09733 | An Exploratory Study of Ad Hoc Parsers in Python | Background: Ad hoc parsers are pieces of code that use common string
functions like split, trim, or slice to effectively perform parsing. Whether it
is handling command-line arguments, reading configuration files, parsing custom
file formats, or any number of other minor string processing tasks, ad hoc
parsing is ubiquitous -- yet poorly understood.
Objective: This study aims to reveal the common syntactic and semantic
characteristics of ad hoc parsing code in real world Python projects. Our goal
is to understand the nature of ad hoc parsers in order to inform future program
analysis efforts in this area.
Method: We plan to conduct an exploratory study based on large-scale mining
of open-source Python repositories from GitHub. We will use program slicing to
identify program fragments related to ad hoc parsing and analyze these parsers
and their surrounding contexts across 9 research questions using 25 initial
syntactic and semantic metrics. Beyond descriptive statistics, we will attempt
to identify common parsing patterns by cluster analysis. | Michael Schröder, Marc Goritschnig, Jürgen Cito | 2023-04-19T15:21:39Z | http://arxiv.org/abs/2304.09733v1 | # An Exploratory Study of Ad Hoc Parsers in Python+
###### Abstract.
_Background:_ Ad hoc parsers are pieces of code that use common string functions like split, trim, or slice to effectively perform parsing. Whether it is handling command-line arguments, reading configuration files, parsing custom file formats, or any number of other minor string processing tasks, ad hoc parsing is ubiquitous\(-\) yet poorly understood.
_Objective:_ This study aims to reveal the common syntactic and semantic characteristics of ad hoc parsing code in real world Python projects. Our goal is to understand the nature of ad hoc parsers in order to inform future program analysis efforts in this area.
_Method:_ We plan to conduct an exploratory study based on large-scale mining of open-source Python repositories from GitHub. We will use program slicing to identify program fragments related to ad hoc parsing and analyze these parsers and their surrounding contexts across 9 research questions using 25 initial syntactic and semantic metrics. Beyond descriptive statistics, we will attempt to identify common parsing patterns by cluster analysis.
ad hoc parsing, program slicing, mixed-method empirical study

Footnote †: Accepted as a registered report for MSR 2023 with Continuity Acceptance (CA).
precise abstract string domains, it is necessary to know the expected range and behavior of characteristics like loop bounds or exception-related control flow, among many others.
We chose an exploratory study design to survey a wide, partially unknown array of syntactic and semantic features of ad hoc parsers and their surroundings. In this first study, we focus on ad hoc parsers in Python, a popular language for data science and machine learning tasks, which involve high amounts of text wrangling.
## 2. Research Questions
* **How common are ad hoc parsers in Python?** First, we want to know how prevalent ad hoc parsers are in the wild. We can determine this by looking at the number of projects that contain at least one ad hoc parser, and the ratio of ad hoc parsing code to all other code in a project.
* **Where are ad hoc parsers located?** One might think that the parsing component of a function is typically at the beginning, validating and transforming inputs before they are passed on to the rest of the program. But we know that _shotgun parsing_--the intermixing of parsing and business logic--is a real phenomenon [14; 16]. We want to know how often this actually occurs on the function level. We also want to locate ad hoc parsers on the system level: Do they only appear at the edges of a system, near I/O operations, or perhaps also deep within projects, where strings are used as a quick way to pass around semi-structured data?
* **How large are ad hoc parsers?** By definition, ad hoc parsers are small snippets of code, but we do not know what their actual average size is, in terms of lines of code or number of expressions. We do not know whether ad hoc parsers regularly use temporary variables to store intermediate results or perhaps not use any variables at all, preferring method chaining. Ad hoc parsers might be syntactically compact but also pack complex functionality in a small space.
* **What are the input sources of ad hoc parsers?** The immediate source of an ad hoc parser's input string could be an argument of the enclosing function, a global or instance variable, or the return value of some function call. In many cases, we should be able to determine the ultimate origin of the input, e.g., a command-line argument (stored in sys.argv) or a line read from a file (via readline).
* **What functions do ad hoc parsers use and how?** We want to know exactly which common functions and operations make up a typical ad hoc parser, and how they are used. One would certainly expect string functions like split or strip to feature prominently, but what about sequence operations like map or index, or syntactic sugar like g[i:j] for slicing? What are common arguments used with these functions? Do ad hoc parsers in Python use tuples and multiple return values? Do they use non-standard user-defined functions, which could impact static analysis by increasing the call graph that has to be investigated, potentially even introducing non-local effects?
* **How do ad hoc parsers use regular expressions?** A characteristic of ad hoc parsers is that they use common functions to parse strings, rather than more formal methods of parsing. Regular expressions, while ostensibly a proper formal parsing method, are nonetheless regularly used in an ad hoc fashion. They are often combined with other parsing constructs and may only play a small part in a larger piece of parsing code. We want to know how often ad hoc parsers use regular expressions internally and to what end. Previous investigations have focused on regular expressions in isolation [5; 6; 7], but have not ventured into a more holistic inquiry on the combination of regular expressions and ad hoc parsing. For example, are regular expressions used to do a first pass over the input string, using features such as named groups to break down the input's superstructure, before parsing continues on the smaller pieces? Or are they used at the terminal point of the input language, i.e., do ad hoc parsers first use functions like split and then apply regular expressions to the results? We want to know what kinds of regular expressions are used by ad hoc parsers and whether the use of regular expressions within ad hoc parsers produces non-regular languages, or whether the parser could have been written entirely as a regular expression (disregarding any readability concerns). This last question we will only be able to answer approximately, as we do not (yet) have a precise method of determining the input language of an ad hoc parser. Certain heuristics, such as branching structure and the nature of any enclosing loop bounds, might give us some hints, however.
* **What is the nature of loops in ad hoc parsers?** Every parser will in some way loop over its input string to access the string's characters. This can be done in a high-level functional manner, using functions like map or split, or by directly iterating over characters, using for or while loops. Loops can also be used to iterate over substrings of the input string, e.g., the results of a use of split. Loops can be nested, and it is even possible that a parser involves a recursive call to the enclosing function. We want to assess how ad hoc parsers use these various looping constructs and classify them accordingly. Of particular interest is the type of loop bound, as this will have a big impact on static analysis. Functions like split are always implicitly bounded by the length of their input, whereas other looping constructs allow for more complex bounds.
* **How do ad hoc parsers handle errors?** Every parser rejects those strings that are not part of the language it is parsing. In other words, a parser fails if it is fed an unknown string. How do ad hoc parsers deal with this? Do they crash? Perhaps an exception is (implicitly) raised but caught by the enclosing function. Or perhaps the ad hoc parser handles failure explicitly, returning an error value or a default value. How ad hoc parsers handle exceptions is of utmost importance, as this determines whether or not they might pose a fault risk.
* **What are typical ad hoc parsing patterns?** Beyond compiling descriptive statistics about ad hoc parsers,
we want to identify particular patterns of parsing, perhaps even a taxonomy of ad hoc parser types. Are there certain combinations of syntactic and semantic features that commonly co-occur? Can we identify certain application domains (based on identifier names and string origins) in which particular types of ad hoc parsers occur more often? A set of ad hoc parsing patterns would help researchers talk about phenomena related to ad hoc parsing.
## 3. Execution Plan
### Dataset & Infrastructure
To collect and analyze a large-scale dataset of Python projects, we plan on using Boa (Boa, 2020), a source code mining language and infrastructure. Boa allows running static program analysis at scale, using a declarative domain-specific language with built-in support for complex analysis tasks such as control-flow graph (CFG) generation and traversal (Bou, 2019). It has been previously used to extensively analyze syntactical features of Python programs (Bou, 2020), which gives us confidence in the feasibility of our envisioned analyses.
As of this writing, the latest Boa Python dataset (February 2022) includes 104 424 GitHub projects that indicated Python as their primary language. The repositories in the dataset were selected by sorting several million Python projects on GitHub by decreasing star count and decreasing date and thus reflect recent high-profile open-source Python projects (as of summer/fall 2021).1 The average star count in the dataset is 243 (min 24, median 59, max 138 438) and most projects (55 %) had commits within the last two years.
Footnote 1: Robert Dyer, lead researcher on Boa, email to authors, March 13, 2023.
An advantage of using the Boa framework is that our analysis will be easily reproducible and can be applied to other datasets in the future. As Boa is inherently a language-agnostic toolset, it should also be relatively easy to adapt our analysis to other programming languages, especially in comparison to custom one-off analysis scripts.
### Program Slicing
To extract ad hoc parsers from the dataset, we will use a form of program slicing (Krishnan et al., 2019), leveraging the built-in static analysis capabilities of the Boa framework. Here is an outline of our approach:
1. Extract all methods from all Python files in each project (including the top-level environment, which is treated like a regular method called _main_).
2. For each method, identify all string variables (including arguments). As Python is (usually) untyped, we have to perform crude but effective type inference by consulting an extensive list of methods whose arguments or return values are known to be (or not to be) strings, e.g., split or startswith. If type hints are available, we take those into consideration as well. While we might not be able to find strictly _all_ string variables of a method this way, we should be able to find most _relevant_ string variables, i.e., those involved in ad hoc parsing. It seems highly unlikely that an ad hoc parser would not use at least one unambiguously string-specific operation.
3. For each string variable, construct a forward slice of the program, starting at the first occurrence of the variable (if it is not already part of a previous slice). We use an intra-procedural program-dependence graph (PDG) (Krishnan et al., 2019) to build the slice, continuing as long as the data dependents are themselves strings or collections of strings. This ensures that we capture the core of the parser, including intermediate results and transformations, but that we don't end up with a slice the size of the whole method. Our slices never extend beyond function boundaries.
4. If a program slice does not include any methods that impose constraints on the input string (e.g., if the string is just repeatedly appended to), it is not a parser and therefore discarded.
The program slices collected in this way capture the core of each ad hoc parser, beginning with the appearance of the input string and ending at the point where no more transformations of that string or its substrings occur. The parsed data types might be constrained further downstream, e.g., a parsed integer might be required to fall within a certain range, thus introducing further constraints on the input, but that is outside the scope of our present study. While the delineation of ad hoc parsing and business logic is fluid--a defining characteristic of ad hoc parsing--we want to focus purely on the initial string parsing aspects.
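To make the slicing criterion concrete, consider the following constructed Python function (not drawn from any dataset); the comments mark which statements a forward slice starting from the string variable line would retain as the parser core under the rules above.

```python
def handle_request(line: str, registry: dict) -> None:
    parts = line.strip().split(";")  # in slice: transforms the input string
    name = parts[0].lower()          # in slice: substring transformation
    port = int(parts[1])             # in slice: terminal conversion, constrains the input
    registry[name] = port            # outside slice: business logic, no string result
    print(f"registered {name}")      # outside slice

handle_request("WebServer;8080", {})  # an input string this ad hoc parser accepts
```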
### Analysis
We will use the abstract syntax trees (ASTs) of the ad hoc parser cores extracted using program slicing as the basis of our analysis. For questions that require we look at the surrounding context, or at variables referenced by the core but not part of it (e.g., loop bounds), we can traverse outside the core AST on-demand as necessary.
While for most of our research questions we envision performing large-scale quantitative analysis on the ASTs, we want to complement our investigation with qualitative methods where we anticipate limitations due to soundness and completeness of our program analyses. Specifically, this means we will also sample ad hoc parsers in source code form for manual inspection.
To answer **RQs 1-8**, we will extract a number of metrics from the ad hoc parser ASTs and use them to generate various descriptive statistics. Table 1 shows an initial but not exhaustive list of these metrics. As this is an exploratory study, we anticipate that additional opportunities for insight will arise as we survey the data and thus we are prepared to extend our efforts beyond the predefined metrics.
To answer **RQ9**, we will attempt to cluster the collected ad hoc parsers based on the extracted metrics. We will experiment with using \(k\)-means as a baseline for clustering, followed by more advanced learning methods leveraging higher-dimensional embeddings (Krishnan et al., 2019). We plan to experiment with different concrete code embedding methods, such as code2vec (Bou, 2020), which represents code snippets as single fixed-length code vectors; CoCLuBERT (Krishnan et al., 2019), a fine-tuned version of CuBERT (Krishnan et al., 2019) designed for code clustering; and inst2vec (Bou, 2020), which defines an embedding space based on an intermediate representation of code. We will then manually sample
parsers from the identified clusters, both to validate the clustering and to gain further insight into the nature of the identified cluster.
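As an illustration of this clustering step, the following scikit-learn sketch uses random vectors in place of the real per-parser feature vectors or code embeddings, and an arbitrary cluster count:

```python
import numpy as np
from sklearn.cluster import KMeans

# Placeholder: one fixed-length feature vector per extracted ad hoc parser,
# e.g., the Table 1 metrics and/or a learned code embedding
parser_vectors = np.random.randn(500, 64)

kmeans = KMeans(n_clusters=8, n_init=10, random_state=0)
labels = kmeans.fit_predict(parser_vectors)

# Take a few members of each cluster for manual inspection of their sources
for c in range(8):
    members = np.where(labels == c)[0][:3]
    print(f"cluster {c}: inspect parsers {members.tolist()}")
```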
## 4. Threats to Validity
_Internal Validity._ We use an established large-scale dataset of open-source Python projects collected from GitHub as the basis of our analysis. It is possible that this dataset is not representative of Python code (and thus ad hoc parsers in Python) at large. To mitigate this risk, our entire analysis pipeline will be written in a reusable manner, running on the Boa infrastructure, which will allow future researchers to easily replicate our study on different and larger datasets.
_External Validity._ In this study, we only consider ad hoc parsers in Python. The characteristics of these parsers might be (partially) Python-specific, and thus might not generalize to ad hoc parsers in other programming languages. However, even if that were the case, the results of this study are still valuable for program analysis efforts within the Python ecosystem.
_Construct Validity._ Our program slicing method might be unsound or incomplete, capturing irrelevant program fragments or missing out on (parts of) some legitimate ad hoc parsers. To mitigate this risk, we combine our quantitative analysis methods with qualitative investigations, which allows us to validate the program slicing results by directly inspecting the original sources.
## 5. Preliminary Study
In a preliminary study, we collected and analyzed 12 632 Python from_string methods from open-source projects on GitHub. We chose from_string methods as a proxy for ad hoc parsers, as these are small single-purpose functions that transform strings, usually originating in files, to internal data types.
We found that more than half of these ad hoc parsers are less than 11 lines of code in size, with only 20 % exceeding 20 lines, and that 95 % have a cyclomatic complexity of at most 10. The average number of functions called within a parser is 6, the median 3, and the most common operation is split, occurring in 41 % of all parsers, followed by len and the int constructor, each occurring in about 29 % of parsers. Only 12 % contain loops bounded by the length of the input string, 2 % loops with other types of bounds, and 2 % completely unbounded loops. More than half of all parsers (57 %) have the potential to raise exceptions based on the operations they use (e.g., the index function on strings, which raises an exception when the given substring is not found) and almost half of those (45 %) due to the implicit possibility of out-of-bounds errors, i.e., unchecked array access or optimistic tuple assignment, which occurs when a function call has the potential to return a different number of variables than a tuple assignment syntactically expects (28 % of split operations are immediately followed by a tuple assignment). Of all exception-raising parsers, 26 % do so explicitly, using the raise keyword, and 11 % of all investigated parsers explicitly catch and handle exceptions within the from_string method.
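The following constructed from_string method (not taken from the studied corpus) packs several of the measured phenomena into a few lines: a split immediately followed by an optimistic tuple assignment, int conversions that may raise, an explicit raise, and a caught exception.

```python
class Point:
    def __init__(self, x: int, y: int):
        self.x, self.y = x, y

    @classmethod
    def from_string(cls, s: str) -> "Point":
        try:
            # Optimistic tuple assignment: raises ValueError when split does
            # not return exactly two parts (an implicit out-of-bounds risk)
            x_str, y_str = s.strip().split(",")
        except ValueError:
            raise ValueError(f"expected 'x,y', got {s!r}")  # explicit raise
        return cls(int(x_str), int(y_str))  # int() itself may raise ValueError

print(Point.from_string(" 3,4 ").x)  # 3
```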
\begin{table}
\begin{tabular}{c l l} \hline \hline RQs & Metric & Description \\ \hline
1 2 & Project Name & name of the project containing the ad hoc parser \\
1 & Project LOC & total lines of code in the containing project \\
2 & Module Name & name of the enclosing module/file \\
2 & EF Name & name of the enclosing function \\
2 & EF LOC & total lines of code in the enclosing function \\
2 & Position & position of the ad hoc parser within the enclosing function \\
1 2 3 & LOC & lines of code in the ad hoc parser \\
3 6 & CYCLO & cyclomatic complexity of the ad hoc parser \\
2 4 & Input Source & source of the input string: EF argument, global variable, function call, etc. \\
2 4 & Input Origin & origin of the input string: command-line, file, environment variable, etc. \\
3 & Expression Count & number of expressions in the ad hoc parser \\
3 & Variable Count & number of variables in the ad hoc parser \\
3 & Function Count & number of function calls in the ad hoc parser \\
5 6 7 8 & Function Names & names of all functions called in the ad hoc parser \\
5 & Function Origins & origin of each called function: user-defined or from a library \\
5 6 & Function Positions & position of all function calls within the ad hoc parser \\
5 & Function Arguments & arguments with which each function is called, besides the input string \\
5 8 & Syntactic Sugar & uses of syntactic sugar (e.g., slicing) in the ad hoc parser \\
6 & Regular Expressions & arguments to known regex functions or regex literals used in the ad hoc parser \\
6 7 & Loop Bounds & constant, linear on input string, complex, or unbounded \\
7 & Loop Types & for, while, functional (map, split, etc.), or recursive \\
6 7 & Loop Nesting Depth & how deeply nested loops in the ad hoc parser are \\
8 & Caught Exceptions & all exceptions caught by the ad hoc parser or the enclosing function \\
8 & Uncaught Exceptions & all uncaught exceptions (excluding explicitly raised ones) \\
8 & Raised Exceptions & all exceptions explicitly raised by the parser (using raise) \\ \hline \hline \end{tabular}
\end{table}
Table 1. Initial list of metrics extracted for each ad hoc parser.
These preliminary results give us an initial impression of ad hoc parser characteristics but are limited by the fact that they are exclusively derived from from_string methods. While these are an interesting programming pattern in itself, we suspect that the kind of ad hoc parsing happening in these methods is not necessarily generalizable. By virtue of being so clearly delimited into their own functions, the parsers constituting from_string methods do not exhibit the intermixing of parsing with other code, which we think is a typical characteristic of ad hoc parsers. With the proposed study, we want to extend the scope of our inquiry to capture the phenomenon of ad hoc parsing at large.
|
2307.10732 | TransNFV: Integrating Transactional Semantics for Efficient State
Management in Virtual Network Functions | Managing shared mutable states in high concurrency state access operations is
a persistent challenge in Network Functions Virtualization (NFV). This is
particularly true when striving to meet chain output equivalence (COE)
requirements. This paper presents TransNFV, an innovative NFV framework that
incorporates transactional semantics to optimize NFV state management. The
TransNFV integrates VNF state access operations as transactions, resolves
transaction dependencies, schedules transactions dynamically, and executes
transactions efficiently. Initial findings suggest that TransNFV maintains
shared VNF state consistency, meets COE requirements, and skillfully handles
complex cross-flow states in dynamic network conditions. TransNFV thus provides
a promising solution to enhance state management and overall performance in
future NFV platforms. | Zhonghao Yang, Shuhao Zhang, Binbin Chen | 2023-07-20T09:45:15Z | http://arxiv.org/abs/2307.10732v1 | TransNFV: Integrating Transactional Semantics for Efficient State Management in Virtual Network Functions
###### Abstract
Managing shared mutable states in high concurrency state access operations is a persistent challenge in Network Functions Virtualization (NFV). This is particularly true when striving to meet chain output equivalence (COE) requirements. This paper presents TransNFV, an innovative NFV framework that incorporates transactional semantics to optimize NFV state management. The TransNFV integrates VNF state access operations as transactions, resolves transaction dependencies, schedules transactions dynamically, and executes transactions efficiently. Initial findings suggest that TransNFV maintains shared VNF state consistency, meets COE requirements, and skillfully handles complex cross-flow states in dynamic network conditions. TransNFV thus provides a promising solution to enhance state management and overall performance in future NFV platforms.
## 1 Introduction
Network Function Virtualization (NFV) has revolutionized network function deployment through software-based Virtualized Network Functions (VNFs) [7]. This shift brings improved flexibility, adaptability, and cost-efficiency compared to traditional hardware-based solutions. However, managing stateful VNFs, particularly those handling cross-flow states shared among multiple instances, poses considerable challenges. Moreover, NFV must fulfil chain output equivalence (COE) requirements, ensuring that the VNF actions across an NFV chain align with a hypothetical, always-available, single NF with infinite capacity [5]. These challenges are even more pronounced in emerging 5G/6G networks, where the dynamic bandwidth allocation model based on user needs and network conditions necessitates optimal shared resource allocation and efficient VNF performance [1].
Despite significant progress, existing NFV frameworks face difficulties in managing shared VNF states and complying with COE requirements under dynamic network conditions [2, 6, 9, 11, 4, 5, 13]. To address this, we present TransNFV, a ground-breaking NFV framework designed to manage cross-flow states efficiently while ensuring COE compliance. TransNFV distinguishes itself by employing database transaction concepts to encapsulate state access operations in VNFs into distinct units. This unique approach ensures the consistency and reliability of shared mutable states across VNF instances and provides an effective strategy for managing cross-flow states while addressing COE complexities.
To realize such an approach, TransNFV introduces four novel mechanisms: (i) transaction modelling for VNFs, (ii) transaction planning to resolve dependencies, (iii) adaptive transaction scheduling based on real-time execution status, and (iv) efficient concurrent transaction execution. These features contribute to TransNFV's scalability, adaptability, and robustness. Initial experiments demonstrate the potential of TransNFV to significantly improve strategies in managing cross-flow states under dynamic network conditions. In particular, our results show that TransNFV outperforms the most closely related work, CHC framework [5], with up to double the throughput and 2.5 times lower processing latency while maintaining the same COE compliance.
The remainder of this paper is organized as follows: Section 2 presents a motivating example to underscore the need for an innovative approach to shared state management in NFV. Section 3 details the unique mechanisms of TransNFV. Section 4 provides an in-depth view of the TransNFV prototype. Section 5 discusses our early-stage results, and we conclude in Section 6 with a summary and discussion of potential future work.
## 2 Motivation
Achieving _chain output equivalence_ (COE) in NFV chains, especially under high-throughput, low-latency demand is a daunting task. Existing NFV frameworks fall short of fully addressing these challenges. This section underscores these hurdles via an example.
### Motivating Example
Figure 1 shows a representative NFV chain that consists of Network Address Translation (NAT), Load Balancing (LB), and Trojan Detection (TD). Although these functions, deployed sequentially across different Virtual Network Function (VNF) instances, handle distinct packet flows, they rely on shared state objects, which adds complexity to state management in VNFs [3]. Each function operates as follows:
1. **Network Address Translation (NAT):** On receiving a new connection, NAT instances select an available port from a shared list and update the per-flow state with the connection's port mapping. Simultaneously, they manage cross-flow states like the available ports and total packet count.
2. **Load Balancing (LB):** LB instances allocate incoming packets to a suitable server based on its current capacity. They also maintain a connection-to-server mapping and collaborate with other instances to keep track of the active connections and byte count per server.
3. **Trojan Detection (TD):** TD instances monitor flow timings across different connections on each host to detect Trojan Horse attacks by identifying predefined malicious patterns.
From this operational breakdown of the representative NFV chain, we distill two overarching challenges intrinsic to the management of states in VNFs. The first challenge arising from this example is the need for COE in NFV chains, a concept elaborated in [5]. This requirement equates to:
* **Availability**: Every instance in our chain, whether NAT, LB, or TD, must have continuous and up-to-date access to the shared state. This need is crucial even in the face of network fluctuations, such as instance failures or changes in traffic allocation. For instance, if a NAT instance fails, the system should swiftly provide an up-to-date state to the replacement instance to ensure seamless service.
* **Consistency**: Certain instances in our example, like NAT and LB, need to update shared states, including available ports and the active number of connections per server. Synchronizing these concurrent state updates is crucial for maintaining the consistency of shared cross-flow states.
* **Ordering**: In some network functions, it's vital to track the order of incoming traffic. For instance, a TD instance looks for threats based on a specific sequence of network events. Any mix-up in this order, possibly due to network congestion or recovery from failure, can cause incorrect detections. The issue gets more complex with multiple detectors, underscoring the importance of preserving traffic order in managing VNF states.
* **Isolation**: Every module in the NFV chain should be able to recover independently from a failure without causing disruptions in the chain. For instance, if an LB instance fails and recovers, it should not affect the operation of NAT or TD instances.
The second challenge resides in the need to maintain execution efficiency while ensuring COE. Particularly in our example, this involves several key components:
* **Concurrent State Access**: Both NAT and LB instances require concurrent reading and updating of shared state objects across multiple instances, such as the available ports list and server capabilities. The absence of adequate synchronization mechanisms could lead to significant contention overhead, affecting efficiency;
* **Dynamic Traffic Conditions**: The NFV Chain must operate under dynamic traffic conditions characterized by frequent instance failures and network bursts. This requires mechanisms capable of discarding associated updates, reverting to a safe state, and efficiently redistributing traffic processing to prevent redundancy. For instance, a sudden surge in traffic could overwhelm the LB instances, necessitating rapid reallocation of traffic to maintain service quality;
* **Cross-flow State Management**: Managing cross-flow states under such dynamic conditions is a non-trivial task. Given the shared-state nature of our example (available ports, server capabilities, and active connections per server), maintaining efficiency while ensuring COE is an intricate challenge.
### Limitations of Previous Work
The field of Network Function Virtualization (NFV) has witnessed considerable progress. Yet, current NFV frameworks [12, 10, 4, 5, 13, 11] struggle to efficiently manage VNF states while maintaining _chain output equivalence_ (COE).
Certain frameworks, such as FTMB [12], Pico Replication [10], and Split/Merge [11], primarily focus on state access availability. They seek to preserve state availability during traffic reallocation and failure recovery by discarding state changes. However, their assumption of static states restricts their applicability in real-world situations, which commonly involve state changes.
Managing shared mutable VNF states and satisfying COE requirements is a daunting task. Frameworks like OpenNF [4] and S6 [13] have attempted to address mutable states within a single VNF but not across multiple instances. Other frameworks [2, 6, 9, 11, 4, 5, 13] have made strides to support mutable states across multiple VNF instances, but they either overlook cross-flow states or make assumptions of perfect traffic partitioning, which simplifies the issue but neglects the intricacies of state sharing among instances.
Figure 1: An Example NFV Chain.
Recently, certain frameworks, such as OpenNF [4], CHC [5], and others [13, 11], have strived to maintain cross-flow state consistency during execution. However, they still face difficulties with the rigorous demands of COE, like state consistency during instances' replication and ensuring state access order. These solutions also tend to fall short in dynamic network conditions, including instance failure or traffic reallocation.
Among these, the CHC framework [5] stands out for its attempt to fully accommodate COE requirements. Yet, it also has limitations. Its coarse-grained approach to managing cross-flow state results in considerable overhead. CHC's strategy for caching or flushing state objects, based on upstream traffic partitioner information, leads to frequent transfers between cache and main memory, resulting in high communication overhead. Moreover, the employment of mechanisms like locks to preserve consistency across multiple instances incurs additional overhead.
## 3 Key Mechanisms
TransNFV combines transactional semantics with NFV state management, enabling efficient stateful VNF execution while ensuring _chain output equivalence_ (COE). This is achieved through four key mechanisms: 1) modelling shared state access operations of VNFs as transactions, 2) identifying and resolving dependencies among VNF state access operations using a task precedence graph (TPG), 3) adaptively scheduling transactional workloads based on real-time execution status, and 4) executing transactions correctly and efficiently. These approaches help TransNFV tackle challenges associated with shared state management in previous NFV frameworks.
### Modelling VNF State Access
TransNFV employs a graph-based representation to express the VNF chain as a directed acyclic graph (DAG), where vertices represent VNFs, and edges denote the flow of traffic between VNFs. Additionally, TransNFV outlines three types of state access operations in each VNF: \(READ\), which fetches the value of a state object; \(WRITE\), which updates the state object with specific values; and \(READ\_MODIFY\), which reads and modifies the value of a state object. TransNFV models VNF execution logic through three steps: pre-processing, state access, and post-processing.
* The _pre-processing_ step includes identifying state entries linked to state access requests of incoming packets, as well as executing parts of the VNF logic that do not require accessing the state.
* The _state access_ step involves state access operations (i.e., \(READ\), \(WRITE\) or \(READ\_MODIFY\)) on the shared state objects.
* The _post-processing_ step performs additional actions based on the results of the state access operations, and carries out parts of the VNF logic that rely on the returned value of state access.
To ensure VNF state consistency and integrity, TransNFV incorporates transactional semantics into VNF state management. Notably, it treats a set of state access operations collected during the _state access_ step as a single atomic transaction, meaning all modifications triggered by a packet are either fully committed or completely aborted. This approach provides an efficient mechanism to handle VNF instance failures. When a VNF instance fails, TransNFV ensures any modifications made by the failed instance are discarded and isolated from other operations. Furthermore, the failed state access operations can be re-executed as a transaction to ensure correct execution, leading to reliable and consistent state objects even under dynamic network conditions.
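As an illustration of this model, the sketch below is our own Python rendering (not TransNFV's actual API) of the state-access step for the NAT example from Section 2, encoded as a transaction of \(READ\_MODIFY\) operations.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Callable, List, Optional

class OpType(Enum):
    READ = 1
    WRITE = 2
    READ_MODIFY = 3

@dataclass
class StateOp:
    op_type: OpType
    key: str                         # target shared state object
    ts: int                          # packet-arrival timestamp
    func: Optional[Callable] = None  # modifier applied by READ_MODIFY

@dataclass
class Transaction:
    txn_id: int
    ops: List[StateOp] = field(default_factory=list)

def nat_new_connection(txn_id: int, ts: int) -> Transaction:
    """State-access step for a new NAT connection: claim a free port and
    bump the shared packet counter; both accesses commit or abort together."""
    return Transaction(txn_id, [
        StateOp(OpType.READ_MODIFY, "free_ports", ts, func=lambda ports: ports[1:]),
        StateOp(OpType.READ_MODIFY, "packet_count", ts, func=lambda n: n + 1),
    ])
```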
### Dependency Identification
TransNFV adapts to the dynamic scheduling of transactional workloads at runtime by mapping these workloads and their dependencies to a task precedence graph (TPG). The TPG is designed to capture and resolve three types of dependencies:
* _Temporal Dependency (TD)_: This depicts the temporal relationship between two state access operations stemming from different transactions, but targeting the same state. TD ensures that state accesses maintain an order that aligns with the sequence of packet arrivals, thereby satisfying the _ordering_ requirement.
* _Parametric Dependency (PD)_: This captures the parametric relationship between two state access operations that write to the same state, where one write operation is contingent on the outcome of the other. PD tracking is employed by the system to address potential conflicts among write operations in the VNF logic, adhering to the _consistency_ requirement.
* _Logical Dependency (LD)_: This denotes the logical relationship among state access operations within a single transaction. By tracking LD, TransNFV ensures that all operations within a transaction are tightly connected and, in the event of failures, can be collectively aborted, adhering to the _availability_ and _isolation_ requirements.
TransNFV captures the transactional dependencies within each batch of state transactions by constructing a TPG. However, identifying these dependencies can pose challenges due to the potential _out-of-order arrival_ of network packets. To address this, TransNFV divides the TPG construction process into two primary phases: the packet processing phase and the state access processing phase. These phases are alternated periodically, allowing for effective management of different packet batches. The period can be fine-tuned to balance execution latency and throughput.
* _Packet Processing Phase_: During this phase, LDs within the same transaction are identified based on their statement order. To identify TDs, operations are sorted by timestamp and inserted into key-partitioned concurrent lists corresponding to each operation's target state. For operations with PD, 'proxy' operations are inserted into the lists. These act as placeholders for the potential reading of a state object that may be modified by another operation.
* _State Access Processing Phase_: During this phase, all subsequent state transactions are halted until TransNFV reverts to the packet processing phase. Here, TDs and PDs are efficiently identified by iterating through the sorted lists and the 'proxy' operations, respectively. This efficient identification of dependencies and swift TPG construction are crucial for TransNFV to adapt its scheduling strategy to the varying workload characteristics of different state transaction batches.
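The two phases above can be sketched as follows. This is our simplified Python rendering, reusing the Transaction/StateOp sketch from Section 3.1; it captures LDs by statement order and TDs via timestamp-sorted per-key lists, and omits the 'proxy' operations used for PDs.

```python
from collections import defaultdict

def build_tpg(transactions):
    """Sketch of two-phase TPG construction. Packet-processing phase:
    record LD edges by statement order and collect ops into per-key lists.
    State-access phase: sort each list by timestamp and add a TD edge
    between consecutive ops on the same state object. Edges are
    (kind, from_op, to_op), with ops identified as (txn_id, op_index)."""
    edges = []
    by_key = defaultdict(list)
    for txn in transactions:                      # packet-processing phase
        for i, op in enumerate(txn.ops):
            if i > 0:
                edges.append(("LD", (txn.txn_id, i - 1), (txn.txn_id, i)))
            by_key[op.key].append((op.ts, txn.txn_id, i))
    for key, ops in by_key.items():               # state-access phase
        ops.sort()                                # order by arrival timestamp
        for (_, t1, i1), (_, t2, i2) in zip(ops, ops[1:]):
            if t1 != t2:                          # TDs link different txns
                edges.append(("TD", (t1, i1), (t2, i2)))
    return edges
```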
### Transaction Scheduling
As part of its execution process, VNF instances use the constructed TPG to guide the dynamic scheduling of transactional workloads for concurrent processing. This involves exploring the TPG to identify executable tasks (i.e., state access operations) while respecting dependencies among tasks and attempting to maximize the concurrent execution. To adapt to varying workload characteristics and optimize opportunities for parallelism, TransNFV employs different scheduling strategies. These strategies are determined in real-time based on three key dimensions:
1. _Exploration of Remaining State Access Operations_: This dimension pertains to how the TPG is navigated during the scheduling process. VNF instances can employ either a structured approach, such as depth-first or breadth-first search-like traversal, or an unstructured approach that relies on random traversal. Depth-first and breadth-first approaches prioritize tasks that are deeper or wider in the graph, respectively. On the other hand, the random traversal approach does not favour any particular depth or breadth of tasks. The choice of the traversal method affects how soon certain tasks are scheduled and hence can impact the overall parallelism.
2. _Scheduling Granularity_: This dimension pertains to the size of the task unit that is scheduled at one time. During the exploration process, VNF instances can decide to schedule either a single operation or a group of operations as a unit of scheduling. Scheduling a single operation may reduce complexity but also limit parallelism. In contrast, scheduling a group of operations can increase parallelism but may also increase the complexity of handling dependencies among operations.
3. _Transaction Abort Handling_: This dimension pertains to how failed operations are handled. VNF instances can adopt an eager abort approach, where a failed operation is aborted immediately, thereby minimizing computation wastage for other operations that would have been dependent on the failed operation. However, this approach may incur high context-switching overhead. Alternatively, a lazy abort approach can be adopted, where failed operations are allowed to continue until a convenient point, reducing context-switching overhead at the risk of potentially wasting computational resources on tasks that will eventually be aborted.
The selection of a strategy for each dimension depends on the unique characteristics of the workload (such as the distribution of dependencies among operations) and the specific requirements of the application (such as tolerance for wasted computation or context-switching overhead). To ensure efficient and effective workload scheduling, TransNFV may employ a decision model, which is currently under investigation, that dynamically determines the most suitable strategy for each dimension at runtime, based on current and projected workload characteristics.
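A minimal sketch of TPG-guided scheduling along the first dimension is shown below; it is our Python illustration, assuming single-operation granularity and no aborts. Operations become ready once all their TPG predecessors have executed, and FIFO versus LIFO handling of the ready set approximates breadth-first versus depth-first exploration.

```python
from collections import deque

def schedule(tpg_edges, all_ops, strategy="bfs"):
    """Repeatedly emit operations whose TPG dependencies have all executed.
    'bfs' drains the ready frontier in arrival order; 'dfs' favours the
    most recently unblocked ops, mimicking a depth-first walk."""
    deps = {op: set() for op in all_ops}
    children = {op: [] for op in all_ops}
    for _, src, dst in tpg_edges:
        deps[dst].add(src)
        children[src].append(dst)
    ready = deque(op for op in all_ops if not deps[op])
    order = []
    while ready:
        op = ready.popleft() if strategy == "bfs" else ready.pop()
        order.append(op)                    # 'execute' the operation
        for child in children[op]:
            deps[child].discard(op)
            if not deps[child]:
                ready.append(child)
    return order
```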
### Transaction Execution
The execution process within the TransNFV framework seamlessly integrates the TPG, individual finite state machines for each state access operation, and a multi-versioning state table. This integration ensures robust handling of transactional workloads and guarantees system consistency and transactional accuracy.
Each vertex of the TPG is annotated with a _finite state machine_ that represents its current status. The state machine transitions between various states reflecting the scheduling and execution processes: 1) _Blocked_: The operation is pending execution due to unresolved dependencies; 2) _Ready_: All dependencies have been resolved, and the operation is poised for execution; 3) _Executed_: The operation has been processed successfully; 4) _Aborted_: The operation has been terminated due to processing failures either from itself or its dependent operations.
To complement the TPG, TransNFV uses a multi-versioning state table to manage consistency during concurrent state access operations. The table maintains a mapping between state access timestamps and state versions, associating every state access operation with a unique state version. This strategy safeguards system consistency, preventing the impact of operation failures or aborts from propagating throughout the system.
In addition, the multi-versioning state table provides a robust recovery mechanism in the face of network or instance failures. By facilitating the rollback of failed transactions to a safe stage, TransNFV can recover states without compromising system consistency or integrity. This meticulous strategy enhances TransNFV's capability to efficiently manage a diverse range of workloads.
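A toy Python sketch of such a multi-versioning state table is given below; it is our illustration of the idea, not the actual TransNFV data structure. Writes install timestamped versions, reads see the latest version at or before their timestamp, and aborting a transaction discards the versions it installed.

```python
class MultiVersionStateTable:
    """Sketch of a multi-versioning state table supporting rollback."""

    def __init__(self):
        self.versions = {}                 # key -> sorted list of (ts, value)

    def write(self, key, ts, value):
        self.versions.setdefault(key, []).append((ts, value))
        self.versions[key].sort()

    def read(self, key, ts):
        # Latest version installed at or before the reader's timestamp.
        visible = [v for t, v in self.versions.get(key, []) if t <= ts]
        return visible[-1] if visible else None

    def abort(self, key, ts):
        # Rollback: drop the version installed by the failed transaction.
        self.versions[key] = [(t, v) for t, v in self.versions[key] if t != ts]

table = MultiVersionStateTable()
table.write("packet_count", ts=1, value=10)
table.write("packet_count", ts=3, value=11)
table.abort("packet_count", ts=3)          # failed txn: discard its version
assert table.read("packet_count", ts=5) == 10
```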
## 4 System Prototype
TransNFV's architecture divides control plane and data plane functionalities. The control plane manages the VNF
state, while the data plane handles VNF execution tasks such as packet reception, forwarding, and computations based on state access results. We explain the TransNFV workflow in Figure 2 in six steps.
**TransNFV in the data plane.** The data plane incorporates packet reception, forwarding, and NF-specific computations. We enhance _libVNF_ [8], an open-source VNF library, to support data plane functions in TransNFV. Although the libVNF API is versatile enough to build various VNFs and manage states across multiple VNF replicas, it lacks complete COE compliance, such as consistency and ordering requirements. We extend libVNF using the three-step transactional model outlined in Section 3.1:
_Pre-processing:_ Upon packet reception, VNF instances formulate the corresponding state access operations into a transaction, maintaining system consistency.
_State Access:_ The constructed transactions are sent to the control plane, and execution is suspended until state access results are received, ensuring synchronized state access across VNF instances.
_Post-processing:_ Once state access results are acquired, NF-specific computations proceed, the output of which is used for further processing or response generation.
**TransNFV in the control plane.** The control plane ensures efficient state access operations and consistency of NFV states. It is made up of three key components:
_Dependency Identification:_ This component identifies transactional dependencies present within the incoming batch of state transactions, as discussed in Section 3.2, allowing for efficient and consistent processing.
_Transaction Scheduling:_ Once the task precedence graph (TPG) is formed, the scheduler analyses the workload characteristics and, guided by a heuristic decision model, determines optimal scheduling decisions, as detailed in Section 3.3.
_Transaction Execution:_ Executor threads execute state access operations as per the scheduling decisions, maintaining correctness, as detailed in Section 3.4. In case of aborts or failures, the multi-versioning state tables allow the recovery of states, preserving system integrity and consistency.
## 5 Preliminary Results
We showcase TransNFV's effectiveness in VNF state management by implementing load balancing (LB) as described by Khalid et al. [5], which is part of our motivating example discussed in Section 2. We leave the evaluation of the entire NFV chain as future work. The implementation of LB employs two state tables: a connection counter table, and a connection server mapping table. The LB instance uses these tables to identify new connections, find the server with the fewest active connections, update the relevant counter value, and direct the packet to the correct server.
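For intuition, a stripped-down Python sketch of the LB state-access step appears below. The table names and function are ours; under TransNFV, the three accesses would be wrapped into a single atomic transaction.

```python
def lb_new_connection(conn_id, counters, mapping):
    """LB state access for a new connection: read the shared connection
    counter table, pick the least-loaded server, and update both tables."""
    server = min(counters, key=counters.get)   # READ on the counter table
    counters[server] += 1                      # WRITE: bump active connections
    mapping[conn_id] = server                  # WRITE: connection -> server
    return server

counters = {"s1": 4, "s2": 2, "s3": 7}
mapping = {}
print(lb_new_connection("flow-42", counters, mapping))  # "s2"
```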
We have purposefully designed the input network traffic to include a significant proportion of packets from new connections, all contending to establish a link with the shared server. In contrast to packets from established connections that are processed without requiring state access, each packet from a new connection triggers a series of state accesses to pinpoint the least busy server and subsequently update the load counter of the selected server. Simultaneously, other instances are tasked with managing new connections, which are likely to access overlapping ranges of server states. This situation poses a significant challenge in efficiently identifying appropriate servers without compromising the requirements of correctness. This provides a stringent test for TransNFV's capability to handle high concurrency and contention in state access operations effectively and accurately.
### Performance Overview
We evaluate the performance of TransNFV and CHC by comparing three primary performance metrics: throughput, latency, and the breakdown of execution time. We configured TransNFV to run on a dual-socket Intel Xeon Gold 6248R server with 384 GB DRAM. Each socket contains 24 cores running at 3.00 GHz with 35.75 MB of L3 cache. The OS kernel is _Linux 4.15.0-118-generic_.
**Throughput.** We initially conducted experiments to gauge the influence of varying ratios of new connection requests (i.e., 5% to 60%) among all incoming packets on application throughput. The results are presented in Figure 2(a). We also studied the system performance when locks are completely removed, denoted by _No-Lock_. This represents an ideal situation and offers an upper limit on the system performance. From this analysis, we observed two main points: 1) With an increasing ratio of new connections, the throughput for both TransNFV and CHC declines due to the addition of more state accesses by new connections. However, TransNFV consistently outperforms CHC by achieving significantly higher throughput, approximately twice as much. This affirms the efficiency of incorporating transactional semantics into shared VNF state access management in TransNFV. 2) Despite TransNFV's superior performance over CHC in shared VNF state management, it exhibits a throughput that is 2-3 times lower compared to the _No-Lock_. This suggests substantial room for optimization in the management of concurrent state access during VNF execution.

Figure 2: TransNFV System Prototype.
**Latency.** Subsequently, we conducted experiments to compare the processing latency of TransNFV and CHC, the results of which are shown in Figure 2(b). We primarily measured the processing latency of the Load Balancer (LB) handling new connection requests under two extremes: 5% and 60% of new connections. These represented the two poles in our prior experiment. As the ratio of new connections increased, so did the processing time of new requests. However, TransNFV consistently achieved significantly lower processing latency (up to 1.8 times less) compared to CHC. Much like our previous results, this observation further corroborates the efficiency of introducing transactional semantics into shared VNF state access management.
### Processing Overhead
To delve further into the LB's processing time, we dissected the time spent in each aspect of VNF state access execution under TransNFV and CHC.
We break down the time from the following perspectives. 1) _Useful Time_ refers to the time spent on actual state access. 2) _Sync Time_ refers to the time spent on synchronization, including blocking time before lock insertion is permitted or blocking time due to synchronization barriers during mode switching. 3) _Lock Time_ refers to the time spent on inserting locks after it is permitted. 4) _Construct Time_ refers to the time spent on constructing the auxiliary data structures, e.g., the TPG in TransNFV. 5) _Explore Time_ refers to the time spent on exploring available operations to process. 6) _Others_ refers to all other operations and system overheads, such as index lookup, context switching, and remote memory access.
Figure 2(c) illustrates the time breakdown per state access in CHC and TransNFV. We make two main observations. First, although TransNFV dedicates a substantial portion of time to exploration (_Explore Time_), it significantly reduces synchronization (_Sync Time_) and lock (_Lock Time_) overhead compared to CHC. This explains its superior performance on multicore processors. Second, TransNFV still dedicates a significant amount of time to exploration (_Explore Time_). Moreover, as the ratio of new connections increases, the fraction of exploration time grows. This consequently diminishes throughput and increases processing time, as shown in Section 5.1. This phenomenon mainly arises from excessive message-passing between threads to identify available operations; more dependencies lead to more message-passing overhead.
## 6 Conclusion and Future Work
This paper introduces a novel Network Function Virtualization (NFV) framework, TransNFV, that incorporates transactional semantics into VNF state management. Our primary goal is to optimize the performance of stateful VNFs while maintaining compliance with COE requirements. To this end, we propose a fine-grained dependency resolution for concurrent state access operations, drawing upon transaction processing techniques. Preliminary experimental evaluations demonstrate that these key mechanisms effectively mitigate contention overhead arising from concurrent state accesses.
Going forward, we see several promising avenues for further exploration. We aim to improve the efficiency of dependency identification by integrating advanced techniques such as machine learning or static analysis. The interplay between Software-Defined Networking (SDN) and TransNFV may also offer intriguing opportunities to adapt network policies based on workload characteristics. Furthermore, while TransNFV currently manages transaction failures, we aim to enhance its resilience to system-wide failures by developing robust checkpointing and recovery mechanisms. We look forward to investigating these intriguing research opportunities as we continue our efforts to optimize stateful VNF execution in line with the evolving demands of network infrastructures.
## Acknowledgments
This work is supported by the National Research Foundation, Singapore and Infocomm Media Development Authority under its Future Communications Research & Development Programme FCP-SUTD-RG-2022-005.
|
2304.06366 | IBIA: An Incremental Build-Infer-Approximate Framework for Approximate
Inference of Partition Function | Exact computation of the partition function is known to be intractable,
necessitating approximate inference techniques. Existing methods for
approximate inference are slow to converge for many benchmarks. The control of
accuracy-complexity trade-off is also non-trivial in many of these methods. We
propose a novel incremental build-infer-approximate (IBIA) framework for
approximate inference that addresses these issues. In this framework, the
probabilistic graphical model is converted into a sequence of clique tree
forests (SCTF) with bounded clique sizes. We show that the SCTF can be used to
efficiently compute the partition function. We propose two new algorithms which
are used to construct the SCTF and prove the correctness of both. The first is
an algorithm for incremental construction of CTFs that is guaranteed to give a
valid CTF with bounded clique sizes and the second is an approximation
algorithm that takes a calibrated CTF as input and yields a valid and
calibrated CTF with reduced clique sizes as the output. We have evaluated our
method using several benchmark sets from recent UAI competitions and our
results show good accuracies with competitive runtimes. | Shivani Bathla, Vinita Vasudevan | 2023-04-13T09:40:23Z | http://arxiv.org/abs/2304.06366v2 | IBIA: An Incremental Build-Infer-Approximate Framework for Approximate Inference of Partition Function
###### Abstract
Exact computation of the partition function is known to be intractable, necessitating approximate inference techniques. Existing methods for approximate inference are slow to converge for many benchmarks. The control of accuracy-complexity trade-off is also non-trivial in many of these methods. We propose a novel _incremental build-infer-approximate_ (IBIA) framework for approximate inference that addresses these issues. In this framework, the probabilistic graphical model is converted into a sequence of clique tree forests (SCTF) with bounded clique sizes. We show that the SCTF can be used to efficiently compute the partition function. We propose two new algorithms which are used to construct the SCTF and prove the correctness of both. The first is an algorithm for incremental construction of CTFs that is guaranteed to give a valid CTF with bounded clique sizes and the second is an approximation algorithm that takes a calibrated CTF as input and yields a valid and calibrated CTF with reduced clique sizes as the output. We have evaluated our method using several benchmark sets from recent UAI competitions and our results show good accuracies with competitive runtimes.
## 1 Introduction
Graphical models including Bayesian networks (BN) and Markov networks (MN) have been used for probabilistic inference in a wide variety of applications. A fundamental task in inference is the computation of the partition function (PR), which is the normalization constant for the overall probability distribution. Exact inference of PR is known to be #P complete (Roth, 1996), necessitating approximations. Methods for approximate inference can be broadly classified into methods based on variational optimization and methods based on sampling or search.
Variational techniques cast the inference problem as an optimization problem, which is typically solved using iterative message-passing algorithms. These include loopy belief propagation (LBP) (Murphy et al., 1999; Wainwright et al., 2002; Wiegerinck & Heskes, 2003), region-graph based methods like generalized belief propagation (GBP) and its variants (Yedidia et al., 2000; Heskes, 2006; Mooij & Kappen, 2007; Sontag et al., 2008; Lin et al., 2020), mean-field approximations (Winn et al., 2005) and methods based on expectation propagation (Minka, 2001; Vehari et al., 2020; Vehari et al., 2020). A combination of mini-bucket heuristics and belief propagation is used in methods like weighted mini-bucket elimination (WMB) (Liu & Ihler, 2011; Forouzan & Ihler, 2015; Lee et al., 2020) and iterative join graph propagation (IJGP) (Mateescu et al., 2010). While the parameter setting for the complexity-accuracy trade-off is non-trivial in many of the GBP based methods, it is controlled using a single user-defined parameter (\(ibound\)) in mini-bucket based methods. More recently, several extensions of mini-bucket based methods have been proposed. These include bucket re-normalization (Ahn et al., 2018), deep-bucket elimination (DBE) (Razeghi et al., 2021) and NeuroBE (Agarwal et al., 2022). Both DBE and NeuroBE use neural networks to improve the quality of approximations. Another
approach is to bound clique sizes by simplifying the network. In the thin junction tree based methods (Bach and Jordan, 2001; Elidan and Gould, 2008; Scanagatta et al., 2018), a set of features (nodes and edges) is selected so that the resulting graph has a bounded tree-width. The remaining features are ignored. The edge deletion belief propagation (EDBP) and the related relax-recover compensate (RRC) methods (Choi et al., 2005; Choi and Darwiche, 2006; Choi and Darwiche, 2007; 2008) perform inference on progressively more complex graphs in which new features are added, while satisfying some consistency conditions.
Sampling algorithms can be classified as methods based on Markov chain Monte Carlo (MCMC) like Gibbs sampling (Gelfand, 2000) and methods based on importance and stratified sampling (Bouckaert et al., 1996; Hernandez et al., 1998; Moral and Salmeron, 2005). The more recent importance sampling based methods use proposals based on approximate variational methods like WMB and IJGP. In Liu et al. (2015), WMB is used as the proposal for importance sampling (WMB-IS). The dynamic importance sampling (DIS) method proposed in Lou et al. (2017) also uses WMB and has a periodic update of the sampling proposal. The abstraction sampling methods (Broka, 2018; Kask et al., 2020) use an abstraction function to merge similar nodes in AND-OR search trees to get abstract states. An estimate of the PR is obtained using sampled subtrees, with WMB used in the sampling proposal. Sample search (Gogate and Dechter, 2007) is a variant of importance sampling that deals with the rejection of samples in the presence of zero weights. The method proposed in Gogate and Dechter (2011) uses sample search with cutset sampling and an IJGP based proposal. Another approach is to combine sampling techniques with model counting based methods (Chakraborty et al., 2013, 2016; Soos and Meel, 2019; Sharma et al., 2019).
**Limitations of existing methods:** Sampling based methods are anytime algorithms where it is possible to improve accuracy by increasing the number of samples, without the associated increase in memory. However, the performance of these methods depends significantly on the proposal distribution used for importance sampling. The results in Gogate and Dechter (2011); Lou et al. (2017, 2019); Kask et al. (2020) also indicate that after an initial rapid increase, the improvement in accuracy slows down significantly with time. Variational techniques typically require increase in both time and memory for better accuracy. LBP works with minimal cluster sizes and is therefore fast and gives solutions for most benchmarks (Agrawal et al., 2021). However, it results in poor accuracies especially for many of the harder benchmarks. The accuracy of GBP based methods depends on the choice of the outer regions, which is non-trivial. In practice, we have found that these methods are slow to converge for many benchmarks. Methods based on mini-bucket heuristics like WMB, WMB-IS and DIS have easy control of accuracy complexity trade-off but the accuracy obtained is often limited (Broka, 2018; Kask et al., 2020; Agarwal et al., 2022). Neural network based extensions like NeuroBE and DBE improve the accuracy of estimates, but require several hours of training. Selection of optimum features in the RRC and related methods is compute-intensive since it is based on metrics that require several iterations of belief propagation. While weighted model counting based methods work well for many benchmarks, they struggle for benchmarks with large variable domain cardinality (Agrawal et al., 2021).
**Contributions of this paper**: In this paper, we propose a new framework for approximate inference that addresses some of these issues. Our framework, denoted the _incremental build-infer-approximate_ (IBIA) paradigm, converts each connected graph in the PGM into a data structure that we call _Sequence of Clique Tree Forests_ (SCTF). We show that the SCTF can be used for efficient computation of the PR. To construct the SCTF, we propose two new algorithms and prove the correctness of both. The first is an algorithm for incremental construction of CTFs that is guaranteed to give a valid CTF with bounded clique sizes and the second is an approximation algorithm that takes a calibrated CTF as input and yields a valid and calibrated CTF with reduced clique sizes as the output.
Our method has easy control of accuracy-complexity trade-off using two user-defined parameters for clique size bounds, which are similar to the \(ibound\) parameter setting in mini-bucket based methods. Since IBIA is based on clique trees and not loopy graphs, the belief propagation step is non-iterative and there are no convergence issues. In IBIA, approximations are based on clique beliefs and not the network structure which results in good accuracies. We evaluated our method with 1717 instances belonging to different benchmark sets included in several UAI competitions. Results show that the accuracy of solutions obtained by IBIA is better than the other variational techniques. It also gives comparable or better accuracies than the state of art sampling methods in a much shorter time.
**Organization of this paper:** The rest of this paper is organized as follows. Section 2 provides background and notation. We present an overview of the IBIA framework in Section 3, the methodology for constructing the SCTF in Section 4 and approximate inference of PR in Section 5. We present the complexity analysis in Section 6, results in Section 7 and comparison with related work in Section 8. Finally, we present our conclusions in Section 9. The proofs for all propositions and theorems are included in Appendix A.
## 2 Background
This section has the notation and the definitions used in this paper. Throughout the paper, we use the terms clique tree, join tree and junction tree interchangeably. Also, as is common in the literature, we use the term \(C_{i}\) as a label for the clique as well as to denote the set of variables in the clique.
**Definition 1**.: **Probabilistic graphical model (PGM):** Let \(\mathcal{X}=\{X_{1},X_{2},\cdots X_{n}\}\) be a set of random variables with associated domains \(D=\{D_{X_{1}},D_{X_{2}},\cdots D_{X_{n}}\}\). The probabilistic graphical model (PGM) over \(\mathcal{X}\) consists of a set of factors, \(\Phi\), where each factor \(\phi_{\alpha}(\mathcal{X}_{\alpha})\in\Phi\) is defined over a subset of variables \(Scope(\phi_{\alpha})=\mathcal{X}_{\alpha}\). The domain \(D_{\alpha}\) of \(\mathcal{X}_{\alpha}\) is the Cartesian product of the domains of variables in \(\mathcal{X}_{\alpha}\) and the factor \(\phi_{\alpha}\) is a map \(\phi_{\alpha}:D_{\alpha}\to\mathbb{R}_{\geq 0}\). The joint probability distribution captured by the model is \(P(\mathcal{X})=\frac{1}{Z}\prod_{\alpha}\phi_{\alpha}\) where the normalizing constant, \(Z=\sum_{\mathcal{X}}\prod_{\alpha}\phi_{\alpha}\), is the partition function (PR).
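For small models, the partition function in Definition 1 can be computed by brute force, as in the Python sketch below (our illustration; the factor encoding is an assumption): it enumerates all joint assignments and sums the product of factor values, which is exactly the exponential-cost computation that approximate inference avoids.

```python
from itertools import product

def partition_function(variables, domains, factors):
    """Z = sum over all joint assignments of the product of factors.
    'factors' maps a scope (tuple of variables) to a table keyed by the
    assignment restricted to that scope."""
    Z = 0.0
    for values in product(*(domains[v] for v in variables)):
        assignment = dict(zip(variables, values))
        weight = 1.0
        for scope, table in factors.items():
            weight *= table[tuple(assignment[v] for v in scope)]
        Z += weight
    return Z

# Two binary variables with a single pairwise factor.
factors = {("a", "b"): {(0, 0): 1.0, (0, 1): 2.0, (1, 0): 2.0, (1, 1): 4.0}}
print(partition_function(["a", "b"], {"a": [0, 1], "b": [0, 1]}, factors))  # 9.0
```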
**Definition 2**.: **Chordal graph (\(\mathcal{H}\)):** It is an undirected graph with no cycle of length greater than three.
**Definition 3**.: **Clique:** A subset of nodes in an undirected graph such that all pairs of nodes are adjacent.
**Definition 4**.: **Maximal clique:** A clique that is not contained within any other clique in the graph.
**Definition 5**.: **Junction tree or Join tree or Clique tree (CT)** (Koller & Friedman, 2009): The CT is a hypertree with nodes \(\{C_{1},C_{2},\cdots,C_{n}\}\) that are the set of cliques in \(\mathcal{H}\). An edge between \(C_{i}\) and \(C_{j}\) is associated with a sepset \(S_{i,j}=C_{i}\cap C_{j}\). A valid CT satisfies the following
1. All cliques are maximal cliques i.e., there is no \(C_{j}\) such that \(C_{j}\subset C_{i}\).
2. It satisfies the running intersection property (RIP), which states that for all variables \(X\), if \(X\in C_{i}\) and \(X\in C_{j}\), then \(X\) is present in every node in the unique path between \(C_{i}\) and \(C_{j}\).
3. Each factor \(\phi_{\alpha}\) in the PGM is associated with a single node \(C_{i}\) such that \(\text{Scope}(\phi_{\alpha})\subseteq C_{i}\).
Exact inference in a CT is done using the belief propagation (BP) algorithm (Lauritzen & Spiegelhalter, 1988; Koller & Friedman, 2009) that is equivalent to two rounds of message passing along the edges of the CT, an upward pass (from the leaf nodes to the root node) and a downward pass (from the root node to the leaves). Following this, the CT is said to be calibrated. A calibrated CT is defined as follows.
**Definition 6**.: **Calibrated CT** (Koller & Friedman, 2009): Let \(\beta(C_{i})\) and \(\beta(C_{j})\) denote the beliefs associated with adjacent cliques \(C_{i}\) and \(C_{j}\). The cliques are said to be calibrated if
\[\sum_{C_{i}\setminus S_{i,j}}\beta(C_{i})=\sum_{C_{j}\setminus S_{i,j}}\beta (C_{j})=\mu(S_{i,j}) \tag{1}\]
Here, \(S_{i,j}\) is the sepset corresponding to \(C_{i}\) and \(C_{j}\) and \(\mu(S_{i,j})\) is the associated belief. The CT is said to be calibrated if all pairs of adjacent cliques are calibrated. It has the following properties.
1. All clique and sepset beliefs in the calibrated CT have the same normalization constant (\(Z\)) which is equal to the partition function (PR).
2. The joint probability distribution, \(P(\mathcal{X})\), can be re-parameterized in terms of the sepset and clique beliefs as follows: \[P(\mathcal{X})=\frac{1}{Z}\frac{\prod_{i\in\mathcal{V}_{T}}\beta(C_{i})}{ \prod_{(i-j)\in\mathcal{E}_{T}}\mu(S_{i,j})}\] (2) where \(\mathcal{V}_{T}\) and \(\mathcal{E}_{T}\) are the set of nodes and edges in the CT.
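A toy numpy check of the calibration condition in Equation 1, and of Property 1, is sketched below (our illustration with hand-picked calibrated beliefs over binary variables).

```python
import numpy as np

def sepset_marginal(belief, clique_vars, sepset_vars):
    """Marginalize a clique belief (one axis per variable, ordered as in
    clique_vars) onto the sepset, as in Equation 1."""
    drop = tuple(i for i, v in enumerate(clique_vars) if v not in sepset_vars)
    return belief.sum(axis=drop)

# A calibrated pair: C_i = {a, b}, C_j = {b, c}, sepset S = {b}, all binary.
beta_i = np.array([[0.1, 0.2], [0.3, 0.4]])    # axes ordered (a, b)
beta_j = np.array([[0.1, 0.3], [0.2, 0.4]])    # axes ordered (b, c)
mu_from_i = sepset_marginal(beta_i, ["a", "b"], ["b"])
mu_from_j = sepset_marginal(beta_j, ["b", "c"], ["b"])
assert np.allclose(mu_from_i, mu_from_j)       # Equation 1: both give mu(S)
assert np.isclose(beta_i.sum(), beta_j.sum())  # Property 1: shared constant Z
```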
## 3 Overview of the IBIA paradigm
This section has the definitions of terms used in various algorithms and an overview of the IBIA paradigm. We also introduce a running example that will be used in various sections of this paper to illustrate the constituent algorithms.
### Definitions
We use the following definitions in the paper.
**Definition 7**.: **Clique Tree Forest (CTF):** Set of disjoint CTs.
**Definition 8**.: **Valid CTF:** A CTF is valid if all CTs in the CTF are valid i.e., they satisfy all properties in Definition 5.
**Definition 9**.: **Calibrated CTF:** A CTF is calibrated if all CTs in the CTF are valid and calibrated.
**Definition 10**.: **Clique size:** The clique size \(cs_{i}\) of a clique \(C_{i}\) is defined as follows.
\[cs_{i}=\log_{2}\Big(\prod_{v\in C_{i}}|D_{v}|\Big) \tag{3}\]
where \(|D_{v}|\) is the cardinality or the number of states in the domain of the variable \(v\).
It can be seen from the definition that the clique size is the effective number of binary variables contained in the clique.
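In Python, Definition 10 amounts to the following short function (our illustrative sketch):

```python
import math

def clique_size(clique, domain_sizes):
    """Clique size per Definition 10: the effective number of binary
    variables, i.e., log2 of the product of domain cardinalities."""
    return sum(math.log2(domain_sizes[v]) for v in clique)

# A clique over one binary and one 4-state variable has size 1 + 2 = 3.
print(clique_size(["a", "b"], {"a": 2, "b": 4}))  # 3.0
```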
### Overview
The inputs to the algorithm are the set of initial factors (\(\Phi\)) and two user-defined clique size parameters \(mcs_{p}\) and \(mcs_{im}\). Let \(\mathcal{G}\) denote the undirected graph induced by \(\Phi\). Figure 1 illustrates the overall methodology used in IBIA to construct the SCTF for \(\mathcal{G}\) and get an estimate of the partition function for the given set of factors. The three main steps in the method are as follows.
**Incremental Build**: Starting with a valid CTF, the algorithm (Algorithm 1) builds the CTs in the CTF by incrementally adding new factors to it as long as the maximum clique size bound, \(mcs_{p}\), is not violated. We show that the result of Algorithm 1 is guaranteed to be a valid CTF. It is assumed that \(mcs_{p}\) is large enough to accommodate the maximum domain size of the factors.
**Infer**: This step takes a valid CTF as input and calibrates all the CTs in the CTF using the standard BP algorithm (Lauritzen & Spiegelhalter, 1988) for exact inference.
**Approximate**: The input to this algorithm (Algorithm 2) is a calibrated CTF, \(CTF_{k}\), with maximum clique size \(mcs_{p}\). The result of the algorithm is an approximate CTF, \(CTF_{k,a}\), with a reduced maximum clique size of \(mcs_{im}\). Our approximation algorithm ensures that \(CTF_{k,a}\) is valid and calibrated so that the CTs need not be re-calibrated using the message-passing algorithm. It also ensures that a connected CT in \(CTF_{k}\) remains connected in \(CTF_{k,a}\) and that the normalization constants of the CTs in the CTF are unchanged.

Figure 1: Estimation of partition function using the IBIA framework
Assume that \(\mathcal{G}\) is connected. The construction of the SCTF starts with an initial CTF (\(CTF_{0}\)) that contains cliques corresponding to factors in \(\Phi\) with disjoint scopes. As shown in the figure, the three steps incremental build, infer and approximate are used repeatedly to construct the \(\text{SCTF}=\{CTF_{1},\cdots,CTF_{n}\}\). The construction is complete once all factors in \(\Phi\) have been added to some CTF in the SCTF. The SCTF is thus a sequence of calibrated CTFs, each of which satisfies a property proved in Proposition 9. Based on this property, we show that the last CTF, \(CTF_{n}\), contains a single connected CT and the normalization constant of this CT is the estimate of the PR (Theorem 2).
If \(\mathcal{G}\) has multiple disjoint graphs, which happens for example after evidence based simplification, an SCTF is constructed for each connected graph and the estimate of PR is the product of the normalization constants of the last CTF of each SCTF.
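The overall loop can be summarized by the following Python sketch (ours; the callables stand in for Algorithm 1, standard belief propagation, and Algorithm 2, and the normalization_constant method is hypothetical):

```python
def estimate_pr(ctf0, factors, incremental_build, calibrate, approximate):
    """Sketch of the IBIA loop for one connected graph. The callables stand
    in for Algorithm 1, standard belief propagation, and Algorithm 2."""
    ctf, remaining = ctf0, list(factors)
    while remaining:
        ctf, remaining = incremental_build(ctf, remaining)  # bounded by mcs_p
        calibrate(ctf)                                      # exact, non-iterative BP
        if remaining:
            ctf = approximate(ctf)                          # reduce to mcs_im
    return ctf.normalization_constant()                     # estimate of PR
```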
### Example
We will use the example shown in Figure 1(a) as a running example to explain the steps used in various algorithms proposed in this work. The figure has the factors and the input graph induced by the factors. All variables are assumed to be binary and \(mcs_{p}\) and \(mcs_{im}\) are set to 4 and 3 respectively. The final result is an SCTF consisting of two CTFs shown in Figure 1(b). The normalization constant of clique beliefs in \(CTF_{2}\) is the estimated PR for the example.
## 4 Construction of the SCTF
In this section, we describe the three steps that are used to generate the sequence of CTFs namely, incremental build, infer and approximate. We use the following definitions in this section.
**Definition 11**.: \(MSG[V]\)**:** Given a subset of variables (\(V\)) in a valid CTF, \(MSG[V]\) is used to denote the minimal subgraph of the CTF that is needed to compute the joint beliefs of \(V\).
It is obtained by first identifying the subgraph of CTF that connects all the cliques that contain variables in the set \(V\). Then, starting from the leaf nodes of the subgraph (nodes with degree equal to 1), cliques that contain the same set (or subset) of variables in \(V\) as their neighbors are removed recursively.
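A possible Python sketch of this computation for a single CT, using networkx and assuming cliques are labelled nodes with known variable sets, is shown below; it is our rendering of the definition, not the paper's implementation.

```python
import networkx as nx

def msg(ct, clique_vars, V):
    """Sketch of MSG[V] for one CT: ct is a networkx tree on clique ids,
    clique_vars maps each id to its set of variables, V is a set of
    variables. Take the minimal subtree spanning all cliques containing a
    variable of V, then recursively prune leaf cliques whose V-variables
    are covered by their neighbour."""
    anchors = [c for c in ct if clique_vars[c] & V]
    if not anchors:
        return nx.Graph()
    keep = set()
    for a in anchors:  # union of tree paths = subtree spanning the anchors
        keep |= set(nx.shortest_path(ct, anchors[0], a))
    sub = ct.subgraph(keep).copy()
    pruned = True
    while pruned:
        pruned = False
        for c in [n for n in sub if sub.degree(n) == 1]:
            if sub.degree(c) != 1:         # may have changed after removals
                continue
            nbr = next(iter(sub.neighbors(c)))
            if clique_vars[c] & V <= clique_vars[nbr] & V:
                sub.remove_node(c)
                pruned = True
    return sub
```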
**Definition 12**.: **Interface variables (IV):** It is the intersection of the set of variables in a CTF, \(CTF_{k}\), and the set of variables present in factors that have not been added to any CTF in \(\{CTF_{1},\ldots,CTF_{k}\}\).
Each CTF in the sequence has a different set of interface variables. IVs are needed to form the next CTF in the sequence.
### Incremental Build
In this step, new factors from a set \(\Phi\) are incrementally added to an existing valid CTF, which is either \(CTF_{0}\) or the approximate CTF, \(CTF_{k-1,a}\), as long as the maximum clique size bound (\(mcs_{p}\)) is not violated. If the scope of a new factor is a subset of an existing clique, the factor is simply assigned to the clique. Otherwise, we need to modify the CTF to add a clique that contains the scope of the new factor while ensuring that the CTF remains valid. We first explain our method of construction of CTFs with the help of the running example. We then formally state the steps and prove the correctness of our algorithm.
#### 4.1.1 Example
Figure 3 illustrates the construction of \(CTF_{1}\) from an initial CTF, \(CTF_{0}\). \(CTF_{0}\) is initialized with cliques corresponding to factors with disjoint scopes, chosen as cliques \(C_{1},C_{3}\) and \(C_{9}\) in the example. These are highlighted in red in the graph. Let \(\mathcal{V}\) denote the set of variables present in the existing CTF. The first step in the addition of a factor \(\phi\) is the identification of the subgraph \(SG_{min}=MSG[scope(\phi)\cap\mathcal{V}]\). The method for addition of \(\phi\) to the CTF depends on whether \(SG_{min}\) is a set of disjoint cliques or it has connected components. The steps involved in the two cases are as follows.
Figure 3: Construction of the first CTF in the sequence, \(CTF_{1}\), for an example PGM with \(mcs_{p}\) set to 4. Starting with a set of disjoint cliques, factors are added until the maximum clique size reaches \(mcs_{p}\). Factors \(\phi(k,l,o)\) and \(\phi(f,o)\) are deferred for addition to the next CTF.

**1. \(SG_{min}\) is a set of disjoint cliques**: Assume that the factor \(\phi(h,i)\) is to be added to \(CTF_{0}\). In this case, \(SG_{min}=MSG[h,i]\) consists of two disjoint cliques, \(C_{3}\) and \(C_{9}\). As shown in the figure, the new clique corresponding to \(\phi(h,i)\) can simply be connected to cliques \(C_{3}\) and \(C_{9}\) via the sepset variables \(h\) and \(i\), producing a valid CTF. The addition of factor \(\phi(d,g,h)\) is similar. The \(SG_{min}\) for each of the factors \(\phi(d,h,k)\), \(\phi(a,b,f)\) and \(\phi(j,m)\) is a single clique. As shown in the figure, they can be connected to the existing CTF via the corresponding sepsets to produce a valid CTF.
**2. \(SG_{min}\) has connected components**: When we try to add factor \(\phi(f,m,n)\), the variables \(f\) and \(m\) are present in cliques \(C_{5}\) and \(C_{4}\), which are already connected in the existing CTF. Directly connecting these cliques to the new clique containing variables \(f\), \(m\) and \(n\) would generate a loop and hence result in an invalid CTF. Figure 4 shows the steps used for addition of this factor. \(SG_{min}=MSG[\{f,m\}]\) is highlighted in red in Figure 4(a). The goal is to replace \(SG_{min}\) with a subtree \(ST^{\prime}\) that has a clique containing variables \(f,m\) and \(n\), while ensuring that the resulting CTF remains valid. As shown in Figure 4(b), when the new clique is added to the chordal graph corresponding to \(SG_{min}\), chordless loops \(f\)-\(g\)-\(h\)-\(j\)-\(m\)-\(f\) and \(f\)-\(d\)-\(h\)-\(j\)-\(m\)-\(f\) are introduced. Therefore, retriangulation is needed to get back a chordal graph. However, only a subgraph of the modified chordal graph needs to be re-triangulated. Using variable elimination to form cliques, the clique containing variables \(c\), \(h\) and \(j\) is obtained after eliminating variable \(c\). This clique is already present in \(SG_{min}\). We call such cliques _retained cliques_. The subgraph \(G_{E}\) shown in Figure 4(c) is obtained after removing the variable \(c\) and deleting the corresponding edges. This is the subgraph that needs re-triangulation. We call it the _elimination graph_ and denote the variables in this graph as the _elimination set_ (\(S_{E}\)). Comparing Figures 4(a) and 4(c), we see that \(S_{E}\) contains the sepset variables in \(SG_{min}\) and the variables in the new factor. On triangulating \(G_{E}\), we get a CT, \(ST^{\prime}\), that contains cliques \(C_{1}^{\prime},C_{2}^{\prime},C_{3}^{\prime}\) and \(C_{4}^{\prime}\) as shown in Figure 4(d). Each retained clique is then connected to a clique in \(ST^{\prime}\) such that the sepset contains all common variables. In the example, clique \(C_{3}\) gets connected to clique \(C_{3}^{\prime}\) via sepset variables \(h\) and \(j\), which are present in both \(C_{3}\) and \(ST^{\prime}\). The final \(ST^{\prime}\) is highlighted in teal in Figure 4(d). \(ST^{\prime}\) replaces \(SG_{min}\) in the existing CTF. The connection is done via cliques \(C_{5},C_{7}\) and \(C_{8}\) that were adjacent to \(SG_{min}\), with the same sepsets. Since cliques \(C_{1},C_{2},C_{4}\) are no longer present in the modified CT, the associated factors are re-assigned to corresponding containing cliques in \(ST^{\prime}\). Accordingly, the factors associated with \(C_{1}\) and \(C_{2}\) are re-assigned to \(C_{1}^{\prime}\) and the factor associated with \(C_{4}\) is re-assigned to \(C_{3}^{\prime}\). The new factor \(\phi(f,m,n)\) is assigned to clique \(C_{4}^{\prime}\), which contains all variables in the scope of this factor.
Figure 4: Addition of a factor \(\phi(f,m,n)\) to an existing CTF.
Factor \(\phi(d,m,o)\) is added in a similar manner and the resulting CTF, \(CTF_{1}\), is shown in Figure 3. Addition of factors \(\phi(f,o)\) and \(\phi(k,l,o)\) violates the clique size bounds (\(mcs_{p}=4\)) and are deferred for addition to the next CTF in the sequence. Note that \(ST^{\prime}\) is not unique and depends on the elimination order used for re-triangulation. Similarly, the replacement of \(SG_{min}\) by \(ST^{\prime}\) can be done in multiple ways. Therefore, the resulting CTF is not unique, but it is always a valid CTF.
Often the new factors that need to be added impact overlapping portions of the existing CTF. While they can be added sequentially, adding them together not only reduces the effort required for re-triangulation, but also often results in smaller clique sizes. Therefore, in our algorithm factors having overlapping \(SG_{min}\) are added together as a group. The procedure to add a group of factors is similar.
#### 4.1.2 Algorithm
We first define various terms used in the algorithm. Let \(\mathcal{V}\) denote the variables in the existing CTF, \(\Phi\) denote the set of factors to be added and \(Scope(\Phi)=\cup_{\phi\in\Phi}Scope(\phi)\).
**Definition 13**.: \(SG_{min}\)**:** It is defined as \(MSG[Scope(\Phi)\cap\mathcal{V}]\) (see Definition 11 for \(MSG\)).
It is the minimal portion of the existing CTF that is impacted by the addition of new factors.
**Definition 14**.: **Elimination set (\(S_{E}\)):** It is the set containing the variables in the new factors to be added and the variables in the sepsets of \(SG_{min}\).
**Definition 15**.: **Retained cliques:** Cliques in \(SG_{min}\) that contain variables that are not contained in the set \(S_{E}\).
**Definition 16**.: **Elimination graph (\(G_{E}\)):** The elimination graph is constructed using the following steps: (a) For each factor \(\phi\) in the set \(\Phi\), add a fully connected component between variables in \(Scope(\phi)\) (b) For each clique \(C\in SG_{min}\), add a fully connected component corresponding to \(C\cap S_{E}\).
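Definitions 14-16 can be rendered in Python roughly as follows (our sketch, assuming variables are strings and scopes are sets); triangulating the returned graph, e.g. with a min-fill heuristic, then yields the subtree \(ST^{\prime}\) used in Algorithm 1.

```python
import networkx as nx

def elimination_graph(new_factor_scopes, sg_min_cliques, S_E):
    """Build G_E per Definition 16: (a) a fully connected component over
    the scope of each new factor, and (b) one over C intersected with S_E
    for every clique C in SG_min."""
    G = nx.Graph()
    G.add_nodes_from(S_E)
    for scope in new_factor_scopes:           # (a) new-factor components
        G.add_edges_from(nx.complete_graph(sorted(scope)).edges)
    for C in sg_min_cliques:                  # (b) projections onto S_E
        G.add_edges_from(nx.complete_graph(sorted(C & S_E)).edges)
    return G

# Generic usage with made-up variable names (not the paper's example).
G_E = elimination_graph([{"x", "y", "z"}],
                        [{"x", "w"}, {"y", "w"}],
                        {"x", "y", "z", "w"})
```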
Algorithm 1 shows the formal steps in our algorithm for incremental addition of new factors to an existing CTF such that clique sizes are bounded. The inputs to the algorithm are a valid CTF, the set of factors to be added (\(\Phi\)) and the clique size bound \(mcs_{p}\). In each step of this algorithm, we attempt to add a group of factors that have overlapping \(SG_{min}\) (\(\Phi_{g}\)) (lines 3-15). To do this, we first find the \(SG_{min}\) corresponding to the entire group \(\Phi_{g}\) (Definition 13) and construct the modified subtree \(ST^{\prime}\) by adding factors in \(\Phi_{g}\) to \(SG_{min}\) (lines 6-7). If \(ST^{\prime}\) satisfies the clique size bounds, the CTF is modified and the \(\Phi_{g}\) is removed from \(\Phi\) (lines 9-11). Otherwise, we remove a subset of factors, \(\Phi_{gs}\), from \(\Phi\) and try adding the remaining factors to the CTF. \(\Phi_{gs}\) is added to \(\Phi_{d}\), which is a list of factors that are deferred for addition to subsequent CTFs (lines 12-15). This process is continued until \(\Phi\) becomes empty and no further addition is possible. After the CTF is built, we re-assign \(\Phi\) to contain the set of all deferred factors (line 17).
_Construct \(ST^{\prime}\) (lines 19-34)_: In this function, we first find the elimination set \(S_{E}\) and the elimination graph \(G_{E}\) as per Definitions 14 and 16. The elimination graph is then triangulated and the corresponding clique tree \(ST^{\prime}\) is obtained (lines 20-21). We then identify the set of retained cliques (\(\mathcal{C}_{r}\)) which contain variables that are not present in \(S_{E}\) (\(\mathcal{V}_{r}\)) (lines 22-24). For each retained clique \(C\), we find a clique \(C^{\prime}\) in \(ST^{\prime}\) that contains the set \(C\cap S_{E}\). We show that this is always possible in the proof of Proposition 3. If \(C^{\prime}\) is a subset of \(C\), we replace \(C^{\prime}\) with \(C\). Otherwise, we connect \(C\) to \(C^{\prime}\) (lines 25-29). Following this, factors associated with cliques in \(SG_{min}\) are reassigned and new factors in \(\Phi_{g}\) are assigned to corresponding containing cliques in \(ST^{\prime}\) (lines 30-33).
_Modify CTF (lines 36-44)_: This function modifies the CTF by replacing \(SG_{min}\) by \(ST^{\prime}\). We start by finding the set of cliques adjacent to \(SG_{min}\) in the input CTF (\(Adj(SG_{min})\)) and remove \(SG_{min}\) from the CTF. Cliques in \(Adj(SG_{min})\) are reconnected to cliques in \(ST^{\prime}\) that contain the corresponding sepset in the existing CTF. We show that this connection is always possible in the proof of Proposition 3.
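The reconnection logic of _Modify CTF_ can be sketched as follows, with the CTF stored as a NetworkX graph whose nodes are frozensets of variables (an assumed representation); the existence of the containing clique searched for in the final loop is what Proposition 3 guarantees.

```python
import networkx as nx

def modify_ctf(ctf, sgmin_nodes, st_prime):
    """Replace the subgraph SG_min of `ctf` with the re-triangulated subtree
    `st_prime` (both nx.Graphs whose nodes are frozensets of variables)."""
    # Record cliques adjacent to SG_min along with their connecting sepsets.
    adjacent = []
    for clique in sgmin_nodes:
        for nbr in ctf.neighbors(clique):
            if nbr not in sgmin_nodes:
                adjacent.append((nbr, clique & nbr))  # sepset = intersection
    ctf.remove_nodes_from(list(sgmin_nodes))
    ctf.update(st_prime)  # splice in the new subtree
    # Re-connect each adjacent clique to a clique of ST' containing its sepset
    # (always possible by Proposition 3).
    for c_a, sepset in adjacent:
        target = next(c for c in st_prime.nodes if sepset <= c)
        ctf.add_edge(c_a, target)
    return ctf
```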
#### 4.1.3 Soundness of the algorithm
Let the input to Algorithm 1 be a valid CTF. Let \(CTF_{m}\) denote the modified CTF obtained after adding a group of factors \(\Phi_{g}\) to an existing CTF using lines 3-15 of Algorithm 1. Then the following propositions hold true. The proofs for these propositions are included in Appendix A.
**Proposition 1**.: \(CTF_{m}\) contains only trees (possibly disjoint) i.e., no loops are introduced by the algorithm.
**Proposition 2**.: \(CTF_{m}\) contains only maximal cliques.
**Proposition 3**.: All CTs in \(CTF_{m}\) satisfy the running intersection property (RIP).
**Proposition 4**.: If the joint distribution captured by the input CTF is \(P(\mathcal{X})\), then the joint distribution captured by \(CTF_{m}\) is \(P(\mathcal{X})\prod_{\phi\in\Phi_{g}}\phi\).
```
0:\(CTF\): Input CTF \(\Phi\): Set of new factors to be added \(mcs_{p}\): Maximum clique size bound for the modified CTF
0:\(CTF\): Modified CTF \(\Phi\): Set of remaining factors
1:Initialize:\(\Phi_{d}=\{\}\)\(\triangleright\) Set of factors deferred for addition to subsequent CTFs
2:while\(\Phi.isNotEmpty()\)do\(\triangleright\) Loop until further addition is not possible
3:\(\mathcal{V}=\{Variables\in CTF\}\)
4: For each factor \(\phi\in\Phi\), identify the corresponding minimal subgraph \(SG_{min}=MSG[Scope(\phi)\cap\mathcal{V}]\)
5:\(\Phi_{g}\leftarrow\) Find a group of factors with overlapping minimal subgraphs
6:\(SG_{min}\gets MSG[Scope(\Phi_{g})\cap\mathcal{V}]\)\(\triangleright\) Find the minimal subgraph corresponding to set \(\Phi_{g}\)
7:\(ST^{\prime}\leftarrow\) Construct \(ST^{\prime}\) (\(\Phi_{g}\), \(SG_{min}\))\(\triangleright\)\(ST^{\prime}\): Modified subtree
8:\(\triangleright\) Modify \(CTF\) if clique size bound is satisfied.
9:if Max-clique-size(\(ST^{\prime}\)) \(\leq\)\(mcs_{p}\)then
10:\(CTF\leftarrow\) Modify \(\operatorname{CTF}(ST^{\prime},SG_{min},CTF)\)\(\triangleright\) Replace \(SG_{min}\) with \(ST^{\prime}\) and get modified CTF
11:\(\Phi\leftarrow\Phi\setminus\Phi_{g}\)\(\triangleright\) Update the set of remaining factors
12:else
13:\(\Phi_{gs}\leftarrow\) {Subset of factors \(\in\Phi_{g}\)}\(\triangleright\) Choose a subset of factors for addition to subsequent CTFs
14:\(\Phi\leftarrow\Phi\setminus\Phi_{gs}\); \(\Phi_{d}.add(\Phi_{gs})\);\(\triangleright\) Remove \(\Phi_{gs}\) from \(\Phi\) and add it to the set of deferred factors \(\Phi_{d}\)
15:endif
16:endwhile
17:\(\Phi=\Phi_{d}\)
18:
19:procedureConstruct \(ST^{\prime}(\Phi_{g},SG_{min})\)
20: Construct the elimination set \(S_{E}\) and elimination graph \(G_{E}\), as per Definitions 14 and 16
21:\(ST^{\prime}\leftarrow\) Triangulate \(G_{E}\) and find the corresponding clique tree
22:\(\triangleright\) Identify the set of retained cliques, \(\mathcal{C}_{r}\)
23:\(\mathcal{V}_{sg}\leftarrow\) {Variables \(\in SG_{min}\)}; \(\mathcal{V}_{r}\leftarrow\mathcal{V}_{sg}\setminus S_{E}\)\(\triangleright\)\(\mathcal{V}_{r}\): Variables used to identify retained cliques
24:\(\mathcal{C}_{r}\leftarrow\) Cliques \(\in SG_{min}\) that contain at least one variable in \(\mathcal{V}_{r}\)\(\triangleright\)\(\mathcal{C}_{r}\): Set of retained cliques
25:\(\triangleright\) Connect retained cliques to \(ST^{\prime}\)
26:for\(C\in\mathcal{C}_{r}\)do
27: Find a clique \(C^{\prime}\in ST^{\prime}\) such that \(C\cap S_{E}\subseteq C^{\prime}\)
28:if\(C^{\prime}\subset C\)then Replace \(C^{\prime}\) by \(C\)else Connect \(C^{\prime}\) to \(C\)\(\triangleright\) Check maximality, connect retained clique \(C\)
29:endfor
30:\(\triangleright\) Assign factors to cliques in \(ST^{\prime}\)
31: Re-assign factors associated with cliques in \(SG_{min}\) to containing cliques in \(ST^{\prime}\)
32: Assign factors in \(\Phi_{g}\) to containing cliques in \(ST^{\prime}\)
33:return\(ST^{\prime}\)
34:endprocedure
35:procedureModify \(\operatorname{CTF}(ST^{\prime},SG_{min},CTF)\)
36:\(\triangleright\) Replace \(SG_{min}\) with \(ST^{\prime}\) in \(\operatorname{CTF}\)
37:\(Adj(SG_{min})\leftarrow\) List of tuples \((C_{a},S_{a})\) containing cliques adjacent to \(SG_{min}\) and corresponding sepset variables
38: Remove \(SG_{min}\) from CTF
39:for\((C_{a},S_{a})\in Adj(SG_{min})\)do\(\triangleright\) Re-connect cliques adjacent to \(SG_{min}\) to cliques in \(ST^{\prime}\)
40: Connect \(C_{a}\) to clique \(C^{\prime}\) in \(ST^{\prime}\) such that \(S_{a}\subset C^{\prime}\)
41:endfor
42:return\(CTF\)
43:endprocedure
```
**Algorithm 1** Build\(\operatorname{CTF}(CTF,\Phi,mcs_{p})\)
**Theorem 1**.: Let the input CTF to Algorithm 1 be a valid CTF. Then, the CTF constructed by the algorithm is also a valid CTF with maximum clique size at most \(mcs_{p}\).
Proof.: In Algorithm 1, we start with a valid CTF and sequentially add groups of factors using the steps shown in lines 3-15. Based on Propositions 1 - 3, if the input is a valid CTF, the modified CTF is also a valid CTF since it satisfies all the properties needed to ensure that the CTF contains a set of valid CTs (see Definition 5). The clique size is bounded since the addition of factors is done only if the clique size bounds are met (line 9).
### Infer clique beliefs
The output of the incremental build step is a CTF, \(CTF_{k}\), where the maximum clique size is at most \(mcs_{p}\). In the _infer_ step, \(CTF_{k}\) is calibrated using the standard belief propagation algorithm for exact inference (Lauritzen and Spiegelhalter, 1988; Koller and Friedman, 2009). This is efficient since message passing is performed over clique trees with bounded clique sizes.
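For completeness, here is a compact Shafer-Shenoy style two-pass calibration over one clique tree. This is a generic sum-product sketch with hypothetical container types (frozenset cliques over binary variables), not the paper's code.

```python
import numpy as np
import networkx as nx

class Factor:
    """Potential over an ordered tuple of variables (all binary for brevity)."""
    def __init__(self, variables, table):
        self.vars = tuple(variables)
        self.table = np.asarray(table, dtype=float)

    def copy(self):
        return Factor(self.vars, self.table.copy())

    def marginalize_onto(self, keep):
        axes = tuple(i for i, v in enumerate(self.vars) if v not in keep)
        return Factor([v for v in self.vars if v in keep],
                      self.table.sum(axis=axes))

    def multiply_in(self, other):
        # Broadcast `other` (whose vars are a subset of self.vars) and multiply.
        order = [other.vars.index(v) for v in self.vars if v in other.vars]
        shape = [self.table.shape[i] if v in other.vars else 1
                 for i, v in enumerate(self.vars)]
        self.table = self.table * other.table.transpose(order).reshape(shape)

def calibrate(tree, psi):
    """tree: nx.Graph over frozenset-of-variable nodes; psi: node -> Factor.
    Returns node -> calibrated belief via leaves-to-root, root-to-leaves passes."""
    root = next(iter(tree.nodes))
    post = list(nx.dfs_postorder_nodes(tree, root))
    parent = {child: par for par, child in nx.dfs_edges(tree, root)}
    msgs = {}
    for i in post:                          # upward pass (leaves first)
        if i == root:
            continue
        j = parent[i]
        f = psi[i].copy()
        for k in tree.neighbors(i):
            if k != j:
                f.multiply_in(msgs[(k, i)])
        msgs[(i, j)] = f.marginalize_onto(i & j)
    for j in reversed(post):                # downward pass (root first)
        for i in tree.neighbors(j):
            if parent.get(i) == j:          # i is a child of j
                f = psi[j].copy()
                for k in tree.neighbors(j):
                    if k != i:
                        f.multiply_in(msgs[(k, j)])
                msgs[(j, i)] = f.marginalize_onto(j & i)
    beliefs = {}
    for i in tree.nodes:                    # combine potentials and messages
        b = psi[i].copy()
        for k in tree.neighbors(i):
            b.multiply_in(msgs[(k, i)])
        beliefs[i] = b
    return beliefs
```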
### Approximate CTF
The next step is the _approximate_ step, in which we reduce clique sizes in \(CTF_{k}\) to get an approximate CTF, \(CTF_{k,a}\). Based on Definition 12, we identify the interface variables (IV) in \(CTF_{k}\). All the other variables in the CTF are referred to as _non-interface variables_ (NIV). Since subsequent CTFs have factors that contain IVs, the accuracy of beliefs in these CTFs will depend on how well the joint beliefs of the IVs are preserved in \(CTF_{k,a}\).
Figure 5 shows the steps required to get the approximate CTF (\(CTF_{1,a}\)) for the running example. In the example, \(mcs_{im}\) is set to 3 and \(IV=\{f,l,k,o\}\) (marked in red in the figure). \(CTF_{1,a}\) is initialized to the minimal subgraph corresponding to IV, \(MSG[\{f,l,k,o\}]\) (highlighted in blue in the figure). The two main steps used to reduce the clique sizes are exact and local marginalization, described below. For clarity, we explain the steps assuming that the clique sizes can be reduced exactly to the user-defined parameter \(mcs_{im}\). In practice, it could be larger or smaller depending on the domain-sizes of the variables that are removed.
**Exact marginalization:** The goal of this step is to reduce the number of NIVs and the number of cliques in the CTF while preserving the joint beliefs over the IVs exactly. This can be done by removing some of the NIVs from the CTF as follows. If an NIV is present in a single clique, it is removed from the CTF and the corresponding clique belief is marginalized over all states in the domain of this variable. In case the resulting clique is non-maximal, it is removed and its neighbors are connected to the containing clique. If an NIV is present in multiple cliques, exact marginalization can only be done after collapsing all the cliques containing the variable into a single clique. Let \(ST_{v}\) be the subtree of \(CTF_{k,a}\) that has all the cliques containing a non-interface variable \(v\) and \(C_{c}\) be the new clique obtained after collapsing cliques in \(ST_{v}\) and removing \(v\). The clique belief for \(C_{c}\) is obtained after marginalizing the joint probability distribution of \(ST_{v}\) over all states in the domain of variable \(v\), as follows.
\[\beta(C_{c})=\sum_{D_{v}}\left(\frac{\prod_{C\in ST_{v}}\beta(C)}{\prod_{SP \in ST_{v}}\mu(SP)}\right) \tag{4}\]
where \(SP\) denotes the sepsets in \(ST_{v}\) and \(D_{v}\) denotes the domain of variable \(v\). While this exactly preserves the joint distribution, this process becomes expensive or infeasible as the size of the collapsed clique increases. Therefore, we perform this step only if the size of the collapsed clique is less than or equal to \(mcs_{im}\).
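As a numerical sanity check of Equation 4, the snippet below collapses two calibrated cliques over \((h,i)\) and \((i,l)\) with sepset \(\{i\}\) and sums out the non-interface variable \(i\). The joint is synthetic and deliberately factorizes over the subtree, so the clique-product over sepset-product ratio reproduces it exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
# A joint over binary (h, i, l) that factorizes over the subtree C4 - {i} - C5.
joint = np.einsum('hi,il->hil', rng.random((2, 2)), rng.random((2, 2)))

beta_c4 = joint.sum(axis=2)        # calibrated belief over (h, i)
beta_c5 = joint.sum(axis=0)        # calibrated belief over (i, l)
mu_i = joint.sum(axis=(0, 2))      # sepset belief over (i,)

# Equation 4: product of clique beliefs divided by the sepset belief ...
collapsed = beta_c4[:, :, None] * beta_c5[None, :, :] / mu_i[None, :, None]
assert np.allclose(collapsed, joint)   # joint of the subtree is recovered
beta_cc = collapsed.sum(axis=1)        # ... then sum out i: belief over (h, l)
assert np.allclose(beta_cc, joint.sum(axis=1))
```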
In the running example (shown in Figure 5), non-interface variable \(c\) is present in a single clique \(C_{6}\). It is removed from \(C_{6}\) and the corresponding belief is marginalized. After this, \(C_{6}\) contains only variables \(h\) and \(j\), both of which are also present in \(C_{7}\). Since \(C_{6}\) is a non-maximal clique, it is removed and its neighbour \(C_{4}\) is connected to \(C_{7}\). In \(C_{7}\), \(j\) is a non-interface variable, present in a single clique. We can follow a similar process of marginalization and removal of a non-maximal clique, leaving only \(C_{1},C_{2},C_{3},C_{4}\) and \(C_{5}\) in \(CTF_{1,a}\). We can further reduce the number of non-interface variables. Variable \(i\) is present in cliques \(C_{4}\) and \(C_{5}\), which when collapsed give a clique of size 3 (\(\leq mcs_{im}\)) containing variables \(h,i\) and \(l\). Variable \(i\) is removed and the beliefs are marginalized to give a new clique \({C_{4}}^{\prime}\). Exact marginalization over all other NIVs will increase the clique size beyond \(mcs_{im}\) and is therefore not attempted.
**Local marginalization:** In this step, we reduce clique sizes by removing variables from cliques with size greater than \(mcs_{im}\) and locally marginalizing clique beliefs as follows. If a variable \(v\) is locally marginalized from two adjacent cliques \(C_{i}\) and \(C_{j}\) with sepset \(S_{i,j}\), the result is two cliques \(C_{i}^{\prime}=C_{i}\setminus v\) and \(C_{j}^{\prime}=C_{j}\setminus v\) with sepset \(S_{i,j}^{\prime}=S_{i,j}\setminus v\) and beliefs given by
\[\beta(C_{i}^{\prime})=\sum_{D_{v}}\beta(C_{i}),\ \ \beta(C_{j}^{\prime})=\sum_{D_{v} }\beta(C_{j}),\ \mu(S_{i,j}^{\prime})=\sum_{D_{v}}\mu(S_{i,j}) \tag{5}\]
We need to ensure that local marginalization satisfies the following constraints:
1. Since IVs are present in factors that have not yet been added to a CTF, they must be retained in at least one clique in \(CTF_{k,a}\).
2. A connected CT in \(CTF_{k}\) should remain connected in \(CTF_{k,a}\). The reason for this will become apparent in Section 5.
Figure 5 illustrates the methodology for local marginalization using the running example. \(CTF_{1,a}\) obtained after exact marginalization contains a single clique \(C_{2}\), with size greater than \(mcs_{im}\) (set to 3). The variables present in this clique are \(f,d,h\) and \(m\). Since \(f\) is an interface variable that is present in a single clique, it is not considered for marginalization. Variable \(h\) is also not considered, because removal of \(h\) from cliques \(C_{2}\) and \({C_{4}}^{\prime}\) will disconnect the clique tree, since the sepset between them contains only \(h\). If we remove \(d\) from \(C_{2}\), it must also be removed from either \(C_{1}\) or \(C_{3}\) to satisfy RIP. We retain \(d\) in \(C_{3}\) and marginalize it from beliefs corresponding to \(C_{1}\) and \(C_{2}\). The resulting approximated CTF, \(CTF_{1,a}\), contains cliques with sizes bounded by \(mcs_{im}\).
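The following minimal numpy sketch illustrates Equation 5 on two synthetic cliques with sepset \(\{v,s\}\) (variable names and binary domains are assumptions): \(v\) is summed out of both clique beliefs and the sepset belief, so the single-clique marginals stay mutually consistent even though the joint belief across the two cliques is now approximate.

```python
import numpy as np

rng = np.random.default_rng(1)
joint = rng.random((2, 2, 2, 2))      # unnormalized p(v, s, a, b)
beta_ci = joint.sum(axis=3)           # β(C_i) over (v, s, a)
beta_cj = joint.sum(axis=2)           # β(C_j) over (v, s, b)
mu_ij = joint.sum(axis=(2, 3))        # μ(S_ij) over (v, s)

# Equation 5: locally marginalize v out of C_i, C_j and the sepset.
beta_ci_p = beta_ci.sum(axis=0)       # β(C_i') over (s, a)
beta_cj_p = beta_cj.sum(axis=0)       # β(C_j') over (s, b)
mu_ij_p = mu_ij.sum(axis=0)           # μ(S_ij') over (s,)

# The reduced cliques remain calibrated with the reduced sepset.
assert np.allclose(beta_ci_p.sum(axis=1), mu_ij_p)
assert np.allclose(beta_cj_p.sum(axis=1), mu_ij_p)
```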
#### 4.3.1 Approximation Algorithm
\(ApproximateCTF\) (Algorithm 2) shows the formal steps in our algorithm used to approximate the CTF. The inputs are \(CTF_{k}\), the set of factors \(\Phi\) that have not been added to any CTF in the set \(\{CTF_{1},\ldots CTF_{k}\}\)
Figure 5: Approximation of \(CTF_{1}\) for the running example with \(mcs_{im}\) set to 3. The blue cliques in \(CTF_{1}\) form the minimal subgraph corresponding to interface variables \(f,\ k,\ o\) and \(l\) (marked in red). \(CTF_{1,a}\) is obtained after exact marginalization of non-interface variables \(c,j,i\) and local marginalization of variable \(d\).
and the clique size bound for the approximate CTF, \(mcs_{im}\). It returns the approximate CTF, \(CTF_{k,a}\). We first identify the interface variables (\(IV\)) and initialize \(CTF_{k,a}\) as the minimal subgraph of \(CTF_{k}\) that is needed to compute the joint beliefs of \(IV\) (\(MSG[IV]\), Definition 11) (lines 1-3). This is followed by exact marginalization of NIVs that are either present in a single clique or for which the size of the collapsed clique is less than or equal to \(mcs_{im}\) (lines 4-10). Next, we perform local marginalization to reduce clique sizes to \(mcs_{im}\), if possible. We first choose a variable (\(v\)) that is present in large sized cliques and retain it in a connected subtree (\(ST_{r}\)) that has clique sizes less than or equal to \(mcs_{im}\) (lines 12-16). \(v\) is locally marginalized from all other cliques while satisfying the constraints specified for local marginalization (lines 18-24). Any non-maximal clique obtained after exact or local marginalization is removed and its neighbors are reconnected to the containing clique (lines 7, 19).
#### 4.3.2 Properties of the approximated CTF
If the input CTF, \(CTF_{k}\), to the approximation algorithm is valid and calibrated, then the resulting approximate CTF, \(CTF_{k,a}\), satisfies the following properties. The proofs for these properties are included in Appendix A.
**Proposition 5**.: All CTs in the approximate CTF, \(CTF_{k,a}\), are valid CTs.
**Proposition 6**.: All CTs in the approximate CTF, \(CTF_{k,a}\), are calibrated.
**Proposition 7**.: The normalization constant of all CTs in the approximate CTF \(CTF_{k,a}\) is the same as in the input CTF, \(CTF_{k}\).
**Proposition 8**.: If the clique beliefs are uniform, then the beliefs obtained after local marginalization are exact.
#### 4.3.3 Heuristics for choice of variables for local marginalization
Since our aim is to preserve the joint beliefs of the interface variables as much as possible, we would like to choose variables that have the least impact on this joint belief for local marginalization. We need a metric that measures this influence and is inexpensive to compute. Towards this end, we propose a heuristic technique based on pairwise mutual information (MI) between variables. The MI between two variables \(x\) and \(y\) is defined as
\[MI(x;y)=\sum_{s\in D_{x},w\in D_{y}}p(s,w)\log\frac{p(s,w)}{p(s)p(w)}\]
Computation of MI for variables belonging to different cliques is expensive. Instead, we propose two metrics that are easy to compute, namely, _Maximum Local Mutual Information (\(MLMI\))_ and _Maximum Mutual Information (\(maxMI\))_ which are defined as follows. Let \(IV_{C}\) denote the set of interface variables in a clique \(C\). The \(MLMI\) of a variable \(v\) in clique \(C\) is defined as
\[MLMI_{v,C}=\max_{\forall x\in IV_{C}\backslash v}MI(v;x) \tag{6}\]
The \(maxMI\) for a variable \(v\) is defined as the maximum \(MLMI\) over all cliques.
\[maxMI_{v}=\max_{\forall C\in CTF\ s.t.\ v\in C}MLMI_{v,C} \tag{7}\]
As seen in Equation 6, if \(v\) is an interface variable, \(MLMI\) is the maximum MI between \(v\) and the other interface variables in the clique. If \(v\) is a non-interface variable, it is the maximum MI between \(v\) and all the interface variables in the clique. Since \(maxMI\) of \(v\) is the maximum \(MLMI\) over all cliques (Equation 7), it is a measure of the maximum influence that a variable \(v\) has on interface variables that are present in cliques that contain \(v\). A low \(maxMI\) means that \(v\) has a low \(MI\) with interface variables in all the cliques in which it is present and is therefore assumed to have a lower impact on the joint distribution of the interface variables.
We prioritize non-interface variables with the least \(maxMI\) for local marginalization. If it is not possible to reduce clique sizes by removing non-interface variables, we locally marginalize over interface variables with least \(maxMI\) (line 15, Algorithm 2). During local marginalization, if we find multiple connected subtrees (\(ST_{r}\)) with bounded clique sizes (line 17, Algorithm 2), we retain the variable in the subtree that contains the clique with the maximum \(MLMI\).
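A sketch of the \(MLMI\)/\(maxMI\) computation (Equations 6 and 7) is shown below; the belief container `(vars, table)` is an assumed format, and all MI values are computed from normalized clique beliefs.

```python
import numpy as np

def mutual_information(pair_belief):
    """MI(x; y) from an unnormalized 2-D belief table with axes (x, y)."""
    p = pair_belief / pair_belief.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    mask = p > 0
    return float((p[mask] * np.log(p[mask] / (px @ py)[mask])).sum())

def max_mi(beliefs, interface_vars, v):
    """maxMI_v (Equation 7): maximum over cliques containing v of MLMI_{v,C}
    (Equation 6). `beliefs` maps a clique (frozenset of variables) to a pair
    (vars_tuple, table); this container format is an assumption."""
    best = 0.0
    for clique, (vars_, table) in beliefs.items():
        if v not in clique:
            continue
        for x in (interface_vars & clique) - {v}:
            axes = tuple(i for i, u in enumerate(vars_) if u not in (v, x))
            pair = table.sum(axis=axes)            # belief over {v, x}
            if vars_.index(v) > vars_.index(x):    # ensure axes are (v, x)
                pair = pair.T
            best = max(best, mutual_information(pair))
    return best
```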
#### 4.3.4 Re-parameterization of approximate CTF
\(CTF_{k+1}\) is constructed by adding new factors to the approximate CTF, \(CTF_{k,a}\). Before adding new factors, we re-assign factors associated with cliques in \(CTF_{k,a}\) such that the product of these factors is a valid joint distribution. This re-parameterization is needed to use the message-passing algorithm for calibration of \(CTF_{k+1}\). Using Proposition 6, we know that clique and sepset beliefs in \(CTF_{k,a}\) are calibrated. We re-assign clique factors as follows. For each CT in \(CTF_{k,a}\), a root node is chosen at random. The factor for the root node is the same as the clique belief. All other nodes are assigned factors by iterating through them in pre-order, i.e., from the root node to the leaf nodes. An un-visited neighbor \({C_{j}}^{\prime}\) of a node \({C_{i}}^{\prime}\) in \(CTF_{k,a}\) is assigned the conditional belief \(\beta({C_{j}}^{\prime}|{C_{i}}^{\prime})=\frac{\beta({C_{j}}^{\prime})}{\mu({S_{i,j}^{\prime}})}\) as a factor. Using Equation 2, the product of the re-assigned factors is a valid joint distribution.
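The pre-order re-assignment can be sketched as below, with beliefs stored as dictionaries from assignments (frozensets of variable-state pairs) to values; this container format is an assumption for illustration.

```python
import networkx as nx

def restrict(assignment, variables):
    """Project an assignment (frozenset of (var, state) pairs) onto a subset."""
    return frozenset((v, s) for v, s in assignment if v in variables)

def reparameterize(tree, beta, mu):
    """tree: nx.Graph over frozenset cliques; beta[c]: assignment -> belief;
    mu[frozenset({ci, cj})]: sepset assignment -> belief.
    Returns per-clique factors whose product is the calibrated joint."""
    root = next(iter(tree.nodes))
    factors = {root: dict(beta[root])}           # root keeps its belief
    for ci, cj in nx.dfs_edges(tree, root):      # pre-order: parent, then child
        sepset = ci & cj
        sep_belief = mu[frozenset({ci, cj})]
        # Conditional belief beta(Cj | Ci) = beta(Cj) / mu(S_ij).
        factors[cj] = {a: val / sep_belief[restrict(a, sepset)]
                       for a, val in beta[cj].items()}
    return factors
```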
## 5 Approximate inference of the partition function
Proposition 9 and Theorem 2 show how PR can be obtained from the SCTF. The proofs are included in Appendix A.
**Proposition 9**.: The product of normalization constants of the CTs in \(CTF_{k}\) is an estimate of PR of the factors added to \(\{CTF_{1},\cdots,CTF_{k}\}\).
**Theorem 2**.: Let the sequence \(\{CTF_{1},\cdots,CTF_{n}\}\) be the SCTF for a connected graph. Then, the last CTF, \(CTF_{n}\), contains a single CT and the normalization constant of this CT is the estimate of the partition function (PR).
Evidence-based simplification of the PGM could give a set of disjoint graphs. We construct an SCTF corresponding to each connected graph. The PR is then estimated as the product of the normalization constants of the CT in the last CTF of each SCTF.
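Numerically, the final estimate is best accumulated in log space; the sketch below assumes each SCTF is summarized by the normalization constants of the CTs in its CTFs (a hypothetical summary of the data structures above).

```python
import math

def log10_pr(sctfs):
    """Estimate log10(PR) for a PGM whose evidence-simplified graph has one
    connected component per SCTF. `sctfs` is a list of SCTFs, each a list of
    CTFs, each given as the list of its CTs' normalization constants."""
    total = 0.0
    for sctf in sctfs:
        last_ctf = sctf[-1]             # by Theorem 2 this holds a single CT
        total += sum(math.log10(z) for z in last_ctf)
    return total
```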
## 6 Complexity
Let \(N_{CTF}\) be the number of CTFs in the SCTF and \(N_{s}\) be the maximum number of incremental steps required to build any \(CTF\). We now discuss the worst-case complexity of three steps used to construct the SCTF.
_Incremental Build_: In each step, we add a subset of factors that impact overlapping portions of the CTF. The overall complexity of modification depends on the number of steps and the cost of re-triangulation in each step. In the worst case, in each step we get a group of factors that impacts all the cliques in the CTF and there are no retained cliques. The cost of re-triangulation (\(Cost_{R}\)) using any of the greedy search methods is polynomial in the number of variables in CTF (Koller & Friedman, 2009, Chap. 9). Hence, the worst-case complexity is upper bounded by \(O(N_{CTF}\cdot N_{s}\cdot Cost_{R})\). Generally, the number of computations required is much lower since there are many retained cliques and different subsets of factors impact disjoint subgraphs of the existing CTs.
_Inference and Approximation:_ Since we use exact inference to calibrate the clique-tree, the complexity of inference in each CTF is \(O(2^{mcs_{p}})\). Approximation involves summing out variables from a belief table. Once again, this is \(O(2^{mcs_{p}})\). The overall complexity is therefore \(O(N_{CTF}\cdot 2^{mcs_{p}})\).
## 7 Results
All experiments were carried out on an Intel i9-12900 Linux system. IBIA was run using Python v3.10 with the Numpy, Scipy and NetworkX libraries. The memory limit was set to 8GB for all experiments, which is the same as that used in the UAI 2022 inference competition (UAI, 2022).
We address the following questions in our evaluation.
* How many instances can IBIA solve within different runtime limits?
* Are clique sizes generated by the proposed incremental method comparable to those obtained with a non-incremental method?
* Is the heuristic used for approximation useful?
* What is the impact of clique size constraints on the performance of IBIA?
* How does the performance of IBIA compare with state-of-the-art techniques?
**Performance measure:** The error metric used is the absolute error in partition function (\(PR\)) measured as \(|\log_{10}PR_{IBIA}-\log_{10}PR_{ref}|\). \(PR_{ref}\) is either the exact value or available reference values of PR, discussed in more detail later in the section. Since each tool reports PR using a different number of precision digits, we round off errors to three decimal places and report an error of zero when it is less than 0.001.
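For reference, the reported error reduces to the following small helper (the rounding convention mirrors the text):

```python
def pr_error(log10_pr_ibia, log10_pr_ref):
    """Absolute error in log10(PR), rounded to three decimal places; values
    that round below 0.001 are reported as zero, as in the text."""
    err = round(abs(log10_pr_ibia - log10_pr_ref), 3)
    return 0.0 if err < 0.001 else err
```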
**Benchmarks:** Table 1 lists the benchmark sets used in this work. These benchmarks have been included in several UAI approximate inference challenges (UAI, 2010; 2014; 2022) and the Probabilistic Inference Challenge (PIC, 2011). We have categorized an instance as _'small'_ if the exact solution was either available in the repository (Ihler, 2006) or could be computed using Ace (Chavira & Darwiche, 2015; 2008), a tool based on weighted model counting. All other instances are categorized as _'large'_.
**Notation:** In all tables in this section, we denote the induced width of a specific benchmark as \(w\) and the maximum domain-size as \(dm\). We use the following to denote the average statistics over all instances in each benchmark set (a) \(v_{a}\): average number of variables (b) \(f_{a}\): average number of factors (c) \(w_{a}\): average induced width and (d) \(dm_{a}\): average of the maximum domain size.
**Choice of parameters:** Based on the memory limit of 8GB, we chose \(mcs_{p}\) of 20 for all experiments unless stated otherwise. Since \(mcs_{im}\) determines the extent of approximation, we would like it to be as high as possible for better accuracy. But, we also need a sufficient margin to add variables to the next CTF in the sequence. We have empirically chosen \(mcs_{im}\) to be 5 less than \(mcs_{p}\).
### Number of instances solved by IBIA
Table 1 shows the percentage of large and small instances in each set that are solved by IBIA within 20 seconds, 20 minutes, 60 minutes and 100 minutes, similar to limits used in the UAI 2022 competition.
Except for a few blockmap and some mastermind instances, IBIA was able to solve all the small benchmarks within 20 seconds. Solutions to the remaining instances were obtained within 20 minutes. For the large instances, we allow for an increase in \(mcs_{p}\) if needed so that at least one new factor can be added while maintaining the overall memory limit. Except for Grids, CSP, DBN and Type4b, in which some instances take longer, all other large instances could be solved within 20 minutes. All Grids and Type4b instances can be solved within 60 minutes and DBN within 100 minutes. For a few DBN instances, the number of factors is very large (greater than 100,000) and the runtime is dominated by the incremental build step where repeated re-triangulations are performed to add factors. In large CSP benchmarks, inference using IBIA runs out of memory in 12 out of 52 instances. For these instances, the maximum domain-size is large (varies from 44 to 200). As a result, the number of variables contained in cliques and sepsets in the CTF is very small. Therefore, the approximation step has a limited choice of variables and it becomes infeasible in these cases. This in turn leads to large-sized cliques in the next CTF, thereby exceeding the set memory limit.
### Evaluation of Algorithms in IBIA
In this section, we evaluate the performance of the proposed method for incremental CT construction and the performance of the metric used for guiding the approximate step in IBIA. We also study the trade-off between runtime and accuracy.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Size & Benchmarks & \#Inst & \multicolumn{3}{c}{Average states \({}^{+}\)} & \multicolumn{3}{c}{Instances solved (\%)} \\ \cline{4-7} & & \((v_{a},f_{a},w_{a},dm_{a})\) & 20 s & 20 min & 60 min & 100 min \\ \hline \multirow{8}{*}{_Small_} & Segmentation & 50 & (229,851,17,2) & 100\% & 100\% & 100\% & 100\% \\ & Promedas & 65 & (619,691,21,2) & 100\% & 100\% & 100\% & 100\% \\ & Protein & 77 & (60,180,67,6) & 100\% & 100\% & 100\% & 100\% \\ & BN & 97 & (637,637,28,10) & 100\% & 100\% & 100\% & 100\% \\ & Object Detection & 79 & (60,210,6,16) & 100\% & 100\% & 100\% & 100\% \\ & Grids & 8 & (250,728,22) & 100\% & 100\% & 100\% & 100\% \\ & CSP & 14 & (68,345,13,4) & 100\% & 100\% & 100\% & 100\% \\ & DBN & 66 & (780,1545,32,29) & 100\% & 100\% & 100\% & 100\% \\ & Pedigree & 24 & (853,853,24,25) & 100\% & 100\% & 100\% & 100\% \\ & mastermind & 128 & (2159,219,26,20) & 98\% & 100\% & 100\% & 100\% \\ & blockmap & 240 & (2458,2458,2459,505,72) & 78\% & 100\% & 100\% & 100\% \\ \hline \multirow{8}{*}{_Large_} & Segmentation & 50 & (229,851,19,21) & 100\% & 100\% & 100\% & 100\% \\ & Promedas & 173 & (1209,1209,72,2) & 80\% & 100\% & 100\% & 100\% \\ & Protein & 386 & (131,1215,21,81) & 75\% & 100\% & 100\% & 100\% \\ & BN & 22 & (1271,215,21,71) & 73\% & 100\% & 100\% & 100\% \\ & Object Detection & 37 & (60,183,509,17) & 0\% & 100\% & 100\% & 100\% \\ & Grids & 19 & (3432,1024,117,2) & 10\% & 79\% & 100\% & 100\% \\ & CSP & 52 & (304,1248,181,43) & 23\% & 77\% & 77\% & 77\% \\ & DBN & 48 & (1000,6616,78,2) & 0\% & 63\% & 63\% & 100\% \\ & Typedb & 82 & (10822,10822,24,5) & 0\% & 99\% & 100\% & 100\% \\ \hline \hline \end{tabular} \({}^{+}\) Average statistics for instances in each benchmark set, \(v_{a}\): average number of variables, \(f_{a}\): average number of factors, \(w_{a}\): average induced width and \(dm_{a}\): average of the maximum domain-size.
\end{table}
Table 1: Statistics of benchmark sets used and percentage of total instances solved by IBIA with memory limit set to 8GB and runtime limit set to 20 seconds, 20 minutes, 60 minutes and 100 minutes.
**Evaluation of Incremental CT construction:** We first evaluated our algorithm for incremental construction of the CT in terms of the maximum clique size. We used the following method for evaluation. For a given \(mcs_{p}\), we used Algorithm 1 to incrementally construct the first CTF in the sequence \((CTF_{1})\). For comparison, we used a CTF obtained using full compilation of all the factors added to \(CTF_{1}\). This is done as follows. We first find the undirected graph induced by the factors that were added to \(CTF_{1}\). This graph is then compiled using variable elimination (Zhang & Poole, 1996; Koller & Friedman, 2009). The elimination order is found using the 'min-fill' metric, and the 'min-neighbors' metric is used in the case of a tie (Koller & Friedman, 2009). We choose the min-fill metric since in most cases it has been found to give lower clique sizes than other heuristics (Gogate & Dechter, 2004; Li & Ueno, 2017). Re-computing the number of fill-in edges each time a variable is eliminated increases the execution time. Therefore, we adopted the methodology suggested in Kask et al. (2011) to compute only the change in the number of fill-in edges.
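A straightforward (non-incremental) reference implementation of this elimination-order search is sketched below; it recomputes fill-in counts at every step rather than applying the incremental update of Kask et al. (2011), so it is meant only to pin down the metric and tie-break used.

```python
import itertools
import networkx as nx

def min_fill_order(graph):
    """Greedy elimination order using min-fill, with min-neighbors tie-break."""
    g = graph.copy()
    order = []

    def fill_count(v):
        nbrs = list(g.neighbors(v))
        return sum(1 for a, b in itertools.combinations(nbrs, 2)
                   if not g.has_edge(a, b))

    while g.number_of_nodes():
        v = min(g.nodes, key=lambda u: (fill_count(u), g.degree(u)))
        nbrs = list(g.neighbors(v))
        g.add_edges_from(itertools.combinations(nbrs, 2))  # add fill-in edges
        g.remove_node(v)
        order.append(v)
    return order
```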
Table 2 compares the maximum clique size obtained using the incremental \((mcs_{ibia})\) and full compilation \((mcs_{f})\) approaches for \(mcs_{p}\) of 20 and 25. It shows the average, maximum and minimum difference (\(\Delta=mcs_{ibia}-mcs_{f}\)) in clique sizes\({}^{1}\) for a few benchmark sets. The results for other benchmarks are similar. \(\Delta\) is negative when the incremental approach yields a smaller clique size and positive otherwise. On average, our incremental approach gives similar results to full compilation of the corresponding undirected graph. The average is negative, indicating that in many benchmarks, the incremental approach actually resulted in lower clique sizes than full compilation. Since the maximum value of \(\Delta\) is positive, it indicates that there are instances for which full compilation is better, which is expected.
Footnote 1: As shown in Equation (3), our definition for clique size is the logarithm (base 2) of the product of the domain sizes. Therefore, it is possible to get decimal values for sizes when cliques contain variables with domain size greater than 2.
**Evaluation of the heuristic used in the approximate step:** To get an approximate CTF with lower clique sizes, we choose variables for local marginalization based on the \(maxMI\) metric (refer to Equation 7). Table 3 compares the errors obtained using the \(maxMI\) metric with the errors obtained using a random selection of variables. The minimum error obtained is marked in bold. We show results for a subset of hard instances (large width and domain-sizes) in the BN, Pedigree, Promedas and DBN benchmarks. In most of the testcases, we observe that the errors obtained with the \(maxMI\) metric are either lower than or comparable to those obtained using a random selection. This shows that the metric performs well.
**Impact of \(mcs_{p}\) on accuracy and runtime:** Table 4 shows the error in the estimated PR values for various values of \(mcs_{p}\). As mentioned earlier, we have empirically chosen \(mcs_{im}\) to be 5 less than \(mcs_{p}\). As expected, we observe that in most cases the accuracy improves as the clique size bounds are increased.
The runtime of IBIA includes the time required for the construction of the SCTF and inference of the partition function. We observe that while the required runtime is similar when \(mcs_{p}\) is set to \(10,15\) and \(20\), it increases sharply when \(mcs_{p}\) is set to \(25\). This is because the build step dominates the runtime for smaller values of \(mcs_{p}\) and the infer step dominates for larger values. As discussed in Section 6, the time complexity of the build step is \(O(N_{CTF}\cdot N_{s}\cdot Cost_{R})\). As \(mcs_{p}\) increases, while the number of CTFs in the sequence (\(N_{CTF}\)) is expected to reduce, the cost of re-triangulation \((Cost_{R})\) could be potentially larger as the number of variables in the CTF is larger. Therefore, we observe that the runtime is similar for \(mcs_{p}\) of 10, 15 and 20. The exponential complexity of inference begins to dominate at \(mcs_{p}=25\).
\begin{table}
\begin{tabular}{l r r|r r r|r r r} \hline & \#Inst & \(dm_{s}^{+}\) & \multicolumn{4}{c|}{\(mcs_{p}=20\)} & \multicolumn{4}{c}{\(mcs_{p}=25\)} \\ \cline{3-8} & \#Inst & \(dm_{s}^{+}\) & Avg \(\Delta\) & Min \(\Delta\) & Max \(\Delta\) & Avg \(\Delta\) & Min \(\Delta\) & Max \(\Delta\) \\ \hline BN & 119 & 12 & -0.03 & -8.1 & 3 & 0.5 & -9.2 & 5 \\ Promedas & 238 & 2 & -0.6 & -12 & 4 & -0.1 & -14 & 6 \\ Pedigree & 24 & 5 & -2.1 & -11.8 & 3 & -1.5 & -10.3 & 3 \\ Grids & 27 & 2 & -0.6 & -3 & 2 & -0.2 & -6 & 6 \\ CSP & 66 & 35 & -1.5 & -13.3 & 4 & -2 & -14.3 & 3.6 \\ \hline \multicolumn{8}{c}{\({}^{+}\)\(dm_{s}\): average of the maximum domain-size.} \\ \end{tabular}
\end{table}
Table 2: The difference in maximum clique sizes obtained after incremental construction \((mcs_{ibia})\) of the first CTF in the sequence, \(CTF_{1}\), and that obtained after full compilation of undirected graph induced by the factors added to \(CTF_{1}\)\((mcs_{f})\) for \(mcs_{p}=20,25\). \(\Delta=mcs_{ibia}-mcs_{f}\).
### Accuracy and runtime comparison with existing inference techniques
#### 7.3.1 Methods used for comparison
As mentioned, we classified the benchmarks as small or large depending on whether exact PR values can be computed or not. To evaluate the performance of IBIA for the small benchmarks, we used the results of a recent evaluation of various exact and approximate inference solvers by Agrawal et al. (2021). Based on these results, we chose the following methods for comparison. For exact inference, we used Ace (Chavira and Darwiche, 2015), which is based on weighted model counting. To compare with variational methods, we used LBP (Murphy et al., 1999) and double-loop GBP (HAK) (Heskes et al., 2003). Amongst the sampling techniques with a variational proposal, we chose SampleSearch (Gogate and Dechter, 2011). We used the publicly available codes used in Agrawal et al. (2021) or original implementations by the authors of the method for the comparison. Accordingly, for LBP and HAK, we used the implementations in libDAI (Mooij, 2010). For SampleSearch, we used a recent implementation (Gogate, 2020) by the authors of the method, which performs sample search using an IJGP-based proposal and cutset sampling (ISSwc). The runtime switches used are included in Table 5. For IBIA, we have used two sets of clique size bounds. We refer to IBIA with \(mcs_{p}\) set to 20 as _'IBIA20'_ and IBIA with \(mcs_{p}\) of 25 as _'IBIA25'_. We report results for ISSwc with two parameter settings. The first variant, called _'ISSwcd'_, uses default values of \(ibound\) (effective number of binary variables in a cluster) and the w-cutset bound determined by the solver depending on the benchmark and given runtime constraints. For a fair comparison with IBIA, we set both bounds to 20 in the second variant (referred to as _'ISSwc20'_). While IBIA is implemented in Python, other tools use C++.
\begin{table}
\begin{tabular}{l r|c c c c|c c c c} \hline \hline \multirow{2}{*}{Benchmark} & \multirow{2}{*}{\((w,dm)^{+}\)} & \multicolumn{3}{c|}{Error} & \multicolumn{3}{c}{Error} \\ \cline{3-10} & & & & & & & \(maxMI\) & Random \\ \hline BN 69 & (48,36) & **1.2** & 1.3 & or\_chain\_155 & (31,2) & **0.01** & 0.02 \\ BN 70 & (81,36) & **2.2** & 5.1 & or\_chain\_107 & (33,2) & **0.3** & **0.3** \\ BN 71 & (45,36) & **0.8** & 2.2 & or\_chain\_128 & (30,2) & **0.2** & 0.6 \\ BN 72 & (58,36) & **1.3** & 2.4 & or\_chain\_102 & (31,2) & **0.4** & 0.8 \\ BN 73 & (75,36) & **1.9** & 2.3 & or\_chain\_106 & (31,2) & **0.3** & 0.7 \\ BN 74 & (37,36) & **1.7** & 2.9 & or\_chain\_140 & (33,2) & **0.1** & 0.9 \\ BN 75 & (59,36) & **2.4** & 2.5 & or\_chain\_242 & (31,2) & 0.5 & **0.1** \\ BN 76 & (53,36) & **1.7** & **1.7** & or\_chain\_198 & (32,2) & 1.0 & **0.01** \\ pedigreg13 & (32,3) & **0.01** & 0.02 & or\_chain\_61 & (34,2) & 0.6 & **0.1** \\ pedigree42 & (24,5) & 0.05 & **0.04** & rus 20 40 3 & (30,2) & **0.9** & 2.6 \\ pedigree19 & (27,5) & **0.04** & 0.3 & rus\_20\_40\_2 & (30,2) & **0.4** & 1.1 \\ pedigree34 & (32,5) & **0.2** & 0.3 & rus\_20\_40\_8 & (30,2) & **0.7** & 1.3 \\ pedigree40 & (29,7) & **0.1** & 0.3 & rus\_20\_40\_4\_2 & (30,2) & 0.4 & **0.02** \\ pedigree41 & (31,5) & **0.04** & 0.5 & rus 20 40 8 1 & (30,2) & 0.8 & **0.2** \\ pedigree7 & (33,4) & **0.01** & 0.2 & rus\_20\_20\_40\_5\_3 & (30,2) & 0.7 & **0.3** \\ \hline \hline \end{tabular} \({}^{+}\)\(w\): induced width, \(dm\): maximum domain-size
\end{table}
Table 3: Comparison of error obtained using IBIA when the choice of variables for local marginalization is made based on the \(maxMI\) metric versus a random selection of variables. The minimum error obtained is marked in bold.
\begin{table}
\end{table}
Table 4: Comparison of error in partition function estimated with IBIA and required runtime (in seconds) for various clique size constraints (\(mcs_{p},mcs_{im}\)).
Amongst the small benchmarks, some of the benchmarks are in general considered "hard" in the literature. These benchmarks have been used extensively for comparison and results for many approximate inference methods are available in the literature. For these benchmarks, we compared our method with published results. Table 5 has the methods used for comparison and the reference to the publication from which the PR estimates were obtained.
For large networks for which the exact PR is not available, we compare our results with published results in Kask et al. (2020), which uses reference values of PR generated using 100 1-hr runs of abstraction sampling.
#### 7.3.2 Performance of IBIA for the small benchmarks
Table 6 compares the average error obtained using IBIA20 and IBIA25 with LBP, HAK, ISSwcd and ISSwc20 for all small benchmarks. We use two runtime constraints, 20 seconds and 20 minutes. If all instances in a set could not be solved within the given time and memory limits, we mark the corresponding entry as '-' and show the number of instances solved in brackets. An entry is marked in bold if it gives the lowest error amongst the methods used for comparison.
Out of 848 instances, IBIA20 solves 792 instances in 20 seconds. In contrast, ISSwc20, which uses the same clique size bounds, solves only 659 instances. ISSwcd uses smaller clique size constraints and is able to solve 838 instances. LBP and HAK solve fewer instances than IBIA20. Note that while the other solvers are written in C++, IBIA is implemented using Python3 and is therefore at a disadvantage in terms of runtime. That said, the only benchmarks that do not run within 20 seconds with IBIA20 are blockmap and mastermind, for which the maximum runtime is 408 and 35 seconds respectively. In 20 minutes, IBIA20 is able to solve all instances. On the other hand, LBP is unable to solve a few DBN instances, ISSwcd is unable to solve a few BN instances and ISSwc20 is unable to solve some Grid, BN and mastermind testcases. IBIA25 also fails to give a solution for some relational (blockmap and mastermind) and DBN benchmarks in 20 minutes with the 8 GB memory limit.
For both time constraints, IBIA20 is definitely better than the two variational methods LBP and HAK for all benchmark sets. In 20 seconds, the errors obtained using IBIA20 are comparable to or better than ISSwcd and ISSwc20 for all benchmarks except CSP. IBIA20 has a significantly lower error for Pedigree and Grids, but higher error than ISSwcd for CSP. It is the only solver that solves all BN benchmarks in 20 seconds, with a low error. In 20 minutes, the lowest errors are obtained by either ISSwcd or IBIA25 or both. **In fact, the accuracy of the PR estimates obtained with IBIA20 in 20 seconds is either comparable to or
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Method & Type & Publication & Tool & Parameters \\ \hline IBIA & & & IBIA & \(\overline{mes_{p}=20,mes_{in}=15}\) (IBIA20) \\ & & & & \(\overline{mes_{p}=25,mes_{in}=20}\) (IBIA25) \\ \hline LBP & Variational & & LibDAI & \(tol=10^{3}\),\#Iter=10\({}^{4}\) \\ (Murphy et al., 1999) & & & LibDAI & \(tol=10^{5}\),\#Iter=10\({}^{4}\) \\ \hline HAK & Variational & & LibDAI & \(tol=10^{3}\),\#Iter=10\({}^{4}\), clusters=LOOP3 \\ (Heskes et al., 2003) & Variational (MB) & ✓(Gogate \& Dechter, 2011) & ISSwc & Default (ISSwcd) \\ (Gogate \& Dechter, 2011) & +Sampling & & ibound=20,w-cutset bound=20 (ISSwc20) \\ \hline EDBP & Variational & ✓(Gogate \& Dechter, 2011) & \\ (Choi \& Darwiche, 2006) & & & \\ \hline WMB & Variational (MB) & ✓(Agarwal et al., 2022) & \\ (Liu \&hler, 2011) & & & \\ \hline NeuroBE & Variational (MB) & ✓(Agarwal et al., 2022) & \\ (Agarwal et al., 2022) & +Neural Networks & & & \\ \hline DBE & Variational (MB) & ✓(Razeghi et al., 2021) & \\ (Razeghi et al., 2021) & +Neural Networks & & & \\ \hline DIS & Variational (MB) & ✓(Kask et al., 2020) & \\ (Lou et al., 2017; 2019) & +Search +Sampling & & \\ \hline AS & Variational (MB) & ✓(Kask et al., 2020) & \\ (Broka, 2018) & +Search+Sampling & & & \\ \hline \hline \end{tabular}
\end{table}
Table 5: Methods used for comparison. For each method, we indicate the class of techniques it falls under. The column marked Publication has the citation to the paper containing the estimates of the PR and the first column has the citation to the original paper of the method. Methods for which we obtained data by running various tools are shown with the corresponding parameter settings in the last two columns.
better than that obtained by ISSwc20 and ISSwcd in 20 minutes for many of the benchmark sets.** The hardest benchmarks for IBIA are CSP and DBN. In ISSwc, cutset sampling plays a crucial role in reduction of errors. Without cutset sampling, we found that errors are significantly larger. This is also seen from the results in Broka (2018).
#### 7.3.3 Comparison with published results
Table 7 compares the error obtained using IBIA (\(mcs_{p}=10,20,25\)) with WMB, DBE, NeuroBE, EDBP and ISSwc for five subsets of benchmarks. The memory limit for IBIA was set to 8GB and the time limit to 20 minutes. In the table, we use ISSwc(P) to indicate that the reported results are published results. For a fair comparison, we set \(mcs_{p}\) to 10 in IBIA for benchmarks where an \(ibound\) of 10 was used in published results. Entries are marked with '-' for instances where published results are not available for a particular benchmark. The minimum error obtained for each testcase is marked in magenta in the table.
For small grid instances, the error obtained using IBIA10 is lower than that of all other methods. For small DBN instances, the error obtained with IBIA10 is smaller than with WMB10, but worse than with DBE10. For these instances, IBIA20 has the best accuracy in all testcases, except rbm20 for which WMB20 is better. In the Pedigree and BN instances, IBIA20 gives an error comparable to ISSwc(P). The two exceptions are BN_72 and BN_75, where IBIA20 gives a significantly larger error. For these instances, IBIA25 gives an error comparable to ISSwc(P). Exact solutions are not known for large Grid instances. Therefore, we measure the absolute difference from the reference values published in Agarwal et al. (2022) and Razeghi et al. (2021). The difference obtained with IBIA20 is much smaller than that of WMB20 and DBE20, and higher than that of NeuroBE for some instances. That said, the reference values are estimates and not the exact solution, thereby making it difficult to draw any conclusions.
Runtimes for published data cannot be compared due to differences in programming languages and systems used for evaluation. Therefore, we have only reported runtimes for IBIA. The small instances can be solved in less than 10 seconds by IBIA20. For the larger BNs and Grids, IBIA requires a few hundred seconds to get an error comparable to ISSwc(P) and NeuroBE20 respectively.
\begin{table}
\begin{tabular}{l c c c c c c c c|c c c c c c} \hline \multicolumn{13}{c}{Average stats Total} & \multicolumn{3}{c}{**Average Error (20 seconds) (\#Inst.)**} & \multicolumn{3}{c}{**Average Error (20 minutes) (\#Inst.)**} \\ \cline{3-13} \multicolumn{1}{c}{} & \((f_{a},w_{a},dma_{a})^{+}\) & \#Inst. & LBP & HAR & ISSwcd & ISSwcc20 & IBIA20 & IBIA25 & LBP & HAR & ISSwcd & ISSwcc20 & IBIA20 & IBIA25 \\ \hline \multirow{2}{*}{Pedigree} & (853,24,4) & - & 1.03 & 2.48 & 0.41 & **0.07** & - & 2.60 & 1.03 & 0.17 & 0.20 & 0.07 & **0.05** \\ & 24 & (22) & - & (22) & - & (12) & - & (12) & - & 45.5 & 113.3 & 0.4 & - & 0.2 & **0** \\ \hline Grids & (728,22,2) & & 45.5 & 113.3 & 6.1 & - & **0.2** & - & (8) & - & 45.5 & 113.3 & 0.4 & - & 0.2 & **0** \\ & 8 & & & (4) & - & (8) & - & (1) & - & (1) & - & - & 0.2 & 0.1 & **0.1** & 0.2 & **0.1** \\ \hline Promedas & (619,21,2) & & **0.2** & 0.7 & - & **0.2** & - & **0.2** & - & 0.2 & 0.2 & **0.1** & **0.1** & 0.2 & **0.1** \\ & 65 & & & (62) & - & (30) & - & - & 30.2 & **0.001** & 0.02 & 0.57 & - \\ \hline DBN & (1543,29,2) & & - & - & 0.82 & - & **0.57** & - & 30.2 & **0.001** & 0.02 & 0.57 & - \\ & 66 & & (57) & (63) & (6) & (6) & (6) & (58) & & (36) & (36) \\ \hline CSP & (345,13,4) & & 18.2 & 12.5 & **0.68** & - & 2.87 & - & 18.2 & 12.5 & **0.28** & 0.43 & 2.87 & 1.06 \\ & 14 & & & (11) & (11) & (11) & (11) & (11) & (11) & (11) & (11) & (11) & (11) & (11) & (11) \\ \hline BN & (637,28,10) & & - & - & - & - & **0.004** & - & 0.27 & - & - & - & 0.004 & **0.002** \\ & 97 & (84) & (54) & (51) & (91) & (77) & (84) & (78) & (93) & (91) & (91) & (11) & (11) \\ \hline ObjDetect & (210,6,16) & & 0.38 & - & 0.05 & - & 0.01 & **0** & 0.38 & 6.2 & 0.01 & 0.001 & 0.01 & **0** \\ & 79 & (35) & (22) & (22) & - & - & - & - & 0.004 & 0.006 & **0** & **0** & **0** & **0** \\ \hline Protein & (180,67,6) & & 0.004 & - & 0.001 & 0.003 & **0** & **0** & 0.004 & 0.006 & **0** & **0** & **0** \\ & 77 & & (34) & (46) & (46) & (46) & (46) & (46) & (47) & (48) & (48) & (48) & (48) & (48) & (48) & (48) \\ \hline Segment & (24589,5057,2) & & - & - & - & - & - & - & - & - & **0** & **0** & 0.009 & - \\ & 240 & (186) & (75) & (236) & (236) & (187) & (177) & (238) & (152) & (152) & (198) & (198) \\ \hline Mastermind & (2159,26,2) & - & - & **0.13** & - & - & - & **2.33** & **2.37** & **0.04** & - & 0.12 & - \\ & 128 & (112) & (103) & (94) & (125) & (96) & (113) & (119) & (113) & (119) & (113) & (119) \\ \hline \hline Total \#Instances solved & 848 & 754 & 525 & 838 & 659 & 792 & 630 & 838 & 741 & 844 & 823 & 848 & 767 \\ \hline \hline \multicolumn{13}{l}{\({}^{+}f_{a}\): average number of factors, \(w_{a}\): average induced width and \(dm_{a}\): average of the maximum domain-size.} \\ \end{tabular}
\end{table}
Table 6: Average error in partition function estimated using IBIA20, IBIA25, LBP, HAK, ISSwcd and ISSwc20 with runtime limit set to 20 seconds and 20 minutes. Entries are marked as '-' where all instances could not be solved within the set time limit; the number of instances solved is shown in brackets below. The minimum average error obtained for each set is marked in bold.
\begin{table}
\end{table}
Table 7: Comparison of error in PR obtained with IBIA with published results for a subset of benchmarks. The minimum error obtained for each testcase is shown in magenta. Entries are marked with ‘-’ where published results are not available. \(w\): induced width, \(dm\): maximum domain-size
(a) Grid-small (\(mcs_{p}=10,ibound=10\))
Table 8 has the results for large benchmarks for which the exact PR is not known. Here, the comparison was done with reference values of PR.\({}^{2}\) The table reports the average difference over all instances in 4 benchmark sets. It has the results obtained using IBIA20 as well as published results for dynamic importance sampling (DIS) and abstraction sampling (AS) (Kask et al., 2020). In the table, the column marked AS(R) shows results obtained using the AOAS algorithm with a randomized context-based abstraction function with 256 levels, and the column marked AS(B) shows best-case results. The table also shows the average and maximum runtime required for IBIA20.
Footnote 2: Reference values of PR were obtained by Prof. Rina Dechter’s group by averaging estimates obtained from 100 one-hour runs of abstraction sampling.
IBIA20 could solve all instances within the 8 GB memory limit. In contrast, both AS and DIS are unable to solve all Type4b benchmarks within 1 hr and 24 GB of memory (Kask et al., 2020). On average, estimates obtained using IBIA20 are higher than the reference value for all benchmarks except DBN, while those obtained by AS and DIS are lower. While the Promedas, Grids and Type4b benchmarks are easy for IBIA, the DBN benchmarks are difficult. Both the difference from the reference and the required runtimes are larger for these testcases.
## 8 Related work
**Incremental construction of CTs**: Incremental methods for CT modification have been explored in some previous works (Draper, 1995; Darwiche, 1998; Flores et al., 2002). In Draper (1995), incremental addition of links is performed by first forming a cluster graph using a set of rules and then converting the cluster graph into a junction tree. Although several heuristic-based graph transformations are suggested, a difficulty is choosing a set of heuristics so that clique size constraints are met. Also, there is no specific algorithm to construct the CT. A preferable method would be to make additions to an existing CT. Dynamic reconfiguration of CTs is explored in Darwiche (1998), but it is specific to evidence and query based simplification. A more general approach using the Maximal Prime Subgraph Decomposition (MPD) of the PGM is discussed in Flores et al. (2002). In this method, the CT is converted into another graphical representation called the MPD join tree, which is based on the moralized graph. When factors are added, the minimal subgraph of the moralized graph that needs re-triangulation is identified using the MPD tree. The identified subgraph is re-triangulated, and both the CT and MPD join trees are updated. In contrast, our method,
* Requires a lower effort for re-triangulation. This is because the minimal subgraph that is re-triangulated is not the modified moralized graph, but a portion of the modified chordal graph corresponding to the CT (which we have denoted as the elimination graph). Moreover, as opposed to Flores et al. (2002), the subgraph identified using our method need not always contain all variables present in the impacted cliques of the CT.
* Eliminates the memory and runtime requirements for maintaining additional representations like the moralized graph and the MPD join tree. Our method identifies the minimal subgraph to be
\begin{table}
\begin{tabular}{l r r r r r|r} \hline & \#Inst. & \multicolumn{3}{c|}{Avg. Difference} & Avg. (Max.) Runtime (s) \\ \cline{2-6} & IBIA20 & AS(R) & AS(B) & DIS & IBIA20 \\ \hline Promedas & 173 & 1.0 & -3.5 & -2.8 & -66.6 & 13 (60) \\ Grids & 19 & 73.2 & -77.5 & -49.5 & -113.73 & 16 (26) \\ Type4b & 67\({}^{\dagger}\) & 8.6 & - & - & - & 336 (1226) \\ DBN & 48 & -16.2 & -2.6 & -2.3 & -39.5 & 2000 (5810) \\ \hline \end{tabular} \({}^{\dagger}\) Reference values available only for 67 large instances out of 82.
\end{table}
Table 8: Comparison of the average difference in PR from the reference values (\(\log_{10}PR-\log_{10}PR_{ref}\)) for large instances in four benchmark sets. \(PR_{ref}\) are estimates averaged over \(100\times 1hr\) simulations of abstraction sampling (AS). The table reports results obtained using IBIA (\(mcs_{p}=20\)) and published results for DIS and a single 1hr run of AS. Entries are marked as ‘-’ where all instances could not be solved.
re-triangulated directly from the CT, triangulates it and updates the CT. No other representation of the PGM is needed.
**Inference methods**: Like the variational methods, IBIA is a deterministic technique. However, unlike many other variational techniques, it does not operate on loopy graphs and hence does not have convergence issues. Other methods that use multiple CTs include the exact inference method, the multiply sectioned BN method (MSBN) (Xiang et al., 1993; Xiang & Lesser, 2003), and the approximate inference method proposed in Bhanja & Ranganathan (2004). MSBN cannot guarantee a bound on the clique sizes and is therefore limited to small networks. The method proposed in Bhanja & Ranganathan (2004) has only been used for inference of marginals in BNs with no evidence variables. The method in Murphy (2002) and Boyen & Koller (1998) uses CTs corresponding to each time slice in the 2T-BN representation of a dynamic BN, with clusters containing variables present in two adjacent time slices as the interface between the corresponding BNs. It uses exact inference on CTs for each time slice, which can become infeasible since there are no bounds on clique sizes. Moreover, the approximation method used in this technique could disconnect CTs and hence cannot be used for inference of PR.
## 9 Discussion and Conclusions
We propose a technique for approximate inference of the partition function that constructs a sequence of CTFs using a series of incremental build, infer and approximate steps. We prove the correctness of our incremental build and approximate algorithms.
IBIA gives better accuracies than several variational methods like LBP, region-graph based techniques like HAK, methods that simplify the network like EDBP, and mini-bucket based methods like WMB. For the same clique size bound, the accuracy obtained with IBIA is comparable to or better than the neural network based methods DBE and NeuroBE in many cases, without the disadvantage of requiring several hours of training. In most instances, the accuracy obtained with IBIA is comparable to or better than recent sampling based techniques, with much smaller runtimes. The runtimes are very competitive even though IBIA is written in Python. Within a memory limit of 8 GB, IBIA was able to give PR estimates for 1705 of 1717 benchmarks. For a large percentage of these benchmarks, a solution was obtained within 20 minutes.
Similar to other variational methods, increasing the clique size bounds gives better accuracies, but results in increased runtimes and memory utilization. IBIA constructs clique trees by incrementally adding factors to an existing CT. When the number of factors is large, repeated re-triangulations could increase the runtime. This is particularly seen in a few DBN instances where the number of factors is greater than 100,000 and the required runtime is around 100 minutes. Also, for benchmarks that have very large variable domain-sizes, the number of variables in cliques and sepsets in a CTF is small and approximation becomes difficult. This is seen in the CSP benchmarks, where we were unable to solve 12 instances. Therefore, a good strategy is needed for the incremental build step that optimizes the runtime and results in reduced clique sizes. Approximation based on the \(maxMI\) metric gives smaller errors in most testcases. However, in a few Promedas and DBN testcases smaller errors were obtained with random selection, thus indicating a possibility for further exploration of heuristics.
A possible extension would be to combine IBIA and sampling based techniques to get accuracies that improve with time. The proposed IBIA framework can also be extended to handle other inference queries such as computation of the marginals, max-marginals and the most probable explanation. It also has implications in learning, since the tree-width limitations can be relaxed.
## Acknowledgements
We thank Prof. Rina Dechter and Bobak Pezeshki for providing reference values of the partition function for the large benchmarks for which exact solutions are not known. |
2307.10054 | Internet Congestion Control Benchmarking | How do we assess a new Internet congestion control (CC) design? How do we
compare it with other existing schemes? Under what scenarios and using what
network parameters? These are just a handful of simple questions coming up
every time a new CC design is going to be evaluated. Interestingly, the number
of specific answers to these questions can be as large as the number of CC
designers. In this work, we aim to highlight that the network congestion
control, as a hot and active research topic, requires a crystal clear set(s) of
\textit{CC Benchmarks} to form a common ground for quantitatively comparing and
unambiguously assessing the strengths and weaknesses of a design with respect
to the existing ones. As a first step toward that goal, we introduce general
benchmarks that can capture the different performance of the existing Internet
CC schemes. Using these benchmarks, we rank the Internet CC algorithms and
illustrate that there is still lots of room for more innovations and
improvements in this topic. | Soheil Abbasloo | 2023-07-19T15:26:56Z | http://arxiv.org/abs/2307.10054v1 | # Internet Congestion Control Benchmarking
###### Abstract.
How do we assess a new Internet congestion control (CC) design? How do we compare it with other existing schemes? Under what scenarios and using what network parameters? These are just a handful of simple questions coming up every time a new CC design is going to be evaluated. Interestingly, the number of specific answers to these questions can be as large as the number of CC designers. In this work, we aim to highlight that the network congestion control, as a hot and active research topic, requires a crystal clear set(s) of _CC Benchmarks_ to form a common ground for quantitatively comparing and unambiguously assessing the strengths and weaknesses of a design with respect to the existing ones. As a first step toward that goal, we introduce general benchmarks that can capture the different performance of the existing Internet CC schemes. Using these benchmarks, we rank the Internet CC algorithms and illustrate that there is still lots of room for more innovations and improvements in this topic.
As Figure 2 shows, the relative performance of Schemes C and D depends heavily on the chosen minimum delay of the network.
The mentioned issues highlight the fact that using transparent and clear CC benchmark(s) can greatly increase the trust in the reported results of a new design, significantly facilitate the process of reviewing CC algorithms, and spare reviewers the time and effort otherwise spent on verifying the scenarios, settings, and test methodologies used throughout a work.
**From the CC Designers' Perspective:** Assuming that the first potential critics of an algorithm are its designers, who can improve their work by criticizing it before anyone else does, all the benefits discussed earlier also apply during the design of a CC algorithm. Moreover, benchmarking a design works not only as a rigorous approach for observing the strengths and weaknesses of a design with respect to the existing ones, but also as a mechanism for performing functionality tests of it. In other words, benchmarking can facilitate and accelerate the design process by revealing the problems of a CC scheme during the design process. In addition, benchmarking can help designers focus on designing better-performing algorithms rather than spending time on finding test scenarios or corner cases where other schemes perform poorly.
**ML and the New Wave of CC Designs:** The recent advances in machine learning techniques and algorithms have impacted different research communities, including the systems and networking community. In particular, a new wave of CC designs based on (or inspired by) ML has been proposed in recent years (e.g., [8; 9; 21; 31; 45; 48]), and more will likely follow in the coming years.
Generally, learning-based CC schemes are designed and trained over a set of network scenarios and later tested over other scenarios where they are shown to perform very well. Based on these performance reports, the designers may conclude that their scheme possesses certain properties. For example, the authors of Indigo mention "We find that Indigo consistently achieves good performance" [48] or PCC's authors say that "PCC achieves consistent high performance" [20]. However, such statements usually turn out not to hold in general (e.g., see the poor performance of Indigo in [8] or the poor performance of PCC in [10]).
The main issue here is not with certain schemes or certain claims. The issue is that these general statements are ambiguous and susceptible to multiple interpretations (or, simply put, they are not comparable), because they are not reported using the same settings or over the same scenarios. That is why the reported results of CC schemes often create confusion and, consequently, make it very difficult to determine whether the improvements reported over the prior state-of-the-art CC designs are meaningful. Benchmarking is a powerful mechanism to prevent such ambiguities.
### Summary of This Work
Considering the motivations discussed, our main goal in this work is to highlight the need to have and use CC benchmarks. On the one hand, the CC benchmarks can lead to producing verifiable, comparable, and crystal clear statements about the performance of CC designs and make it possible to rank them among the existing approaches, a ranking that can be reproduced and trusted. On the other hand, CC benchmarks can stimulate more competition and greatly accelerate more innovations in this topic. As a first step toward that end, in this paper, we introduce a set of benchmarks named the CC-Bench1 and the CC-Bench2 for evaluating Internet CC schemes (Section 3). In particular, the CC-Bench1 focuses on the single-flow performance of Internet CC algorithms with respect to their gained throughput and round-trip delay. On the other hand, the CC-Bench2 provides a common ground for assessing the TCP-friendliness criterion of Internet CC schemes. As part of each benchmark, we present the notion of the scores and the winner schemes to facilitate the
Figure 1. The impact of bottleneck buffer size on the relative performance of two CC schemes. In [64; 256]kB region, scheme A outperforms B, in [256; 512]kB region, both schemes perform roughly similar, and in (512, 1024)kB region, scheme B outperforms A.
Figure 2. The impact of minimum delay of a network on the relative performance of two CC schemes. In [20; 50)ms region, scheme C performs better than D, while in (50, 120)ms region, scheme D outperforms C.
quantitative comparison of CC schemes. Using these two benchmarks (publicly available at (Beng et al., 2017)), we examine the performance of twenty-two different Internet CC algorithms and rank them based on their scores in each benchmark (Section 4). These rankings point to some interesting findings. For example, in the CC-Bench1, the top-ranked scheme, which happens to be an ML-based approach, gets the best score in only about 33% of the scenarios! That means that in each of the remaining 67% of the scenarios, the top-ranked scheme performs worse than at least one of the twenty-one remaining schemes. This shows the remaining opportunities in this domain and provides a baseline challenge for spurring more innovations and improvements.
## 2. Related Work
There is a vast body of work in the literature centered around theoretically analyzing the performance of different CC schemes or modeling certain classes of designs such as AIMD (additive increase multiplicative decrease) protocols (Kal
CC algorithm in most platforms is still an AIMD algorithm that had the same intrinsic objective during the design (TCP Cubic (Tepep, 2015)). The throughput-oriented nature of CC designs was a practical and acceptable assumption mainly due to the throughput-oriented nature of the dominant applications at the time. Recently though, the emerging applications (such as AR/VR, online gaming, tactile Internet, vehicle-to-vehicle communications, etc.) and their intrinsic delay-sensitive natures have spurred a new wave of CC designs that try not only to maximize user's throughput, but also to reduce the delay (Kumar et al., 2017; Kumar et al., 2018; Kumar et al., 2019; Kumar et al., 2019; Kumar et al., 2020; Kumar et al., 2021).
In addition, the notion of incremental growth of the Internet over existing technologies and protocols has brought up another key practical challenge for designing new Internet CC schemes named TCP-friendliness. The challenge of TCP-friendliness comes from the fact that a new Internet CC scheme should not only provide a good throughput and delay performance, but also should be able to compete fairly with the default and established Internet CC scheme used by the majority of devices on the Internet. A scheme that is very aggressive toward the default scheme usually loses ground among the community and a scheme that is very polite/shy usually is not considered a practical solution.
So, putting it all together, we can reason about the performance of a CC scheme when we consider its throughput, delay, and TCP-friendliness. That said, to capture these three main metrics in different scenarios, we define two different scores for a flow \(F_{c}\) that uses the CC scheme \(c\). To reflect the throughput and delay performance metrics of \(F_{c}\), we use a modified version of Power (Power, 2015) and define \(S_{p}^{c}\) for a flow \(F_{c}\):
\[S_{p}^{c}=\frac{r_{c}^{a}}{d_{c}} \tag{1}\]
where \(r_{c}\) and \(d_{c}\) are the average delivery rate and the average round-trip delay of \(F_{c}\), respectively, and \(\alpha\) is a coefficient determining the relative importance of throughput and delay (unless otherwise mentioned, we set \(\alpha=2\)). A bigger value of \(S_{p}^{c}\) indicates better throughput and delay performance for the CC scheme \(c\). To reflect the TCP-friendliness metric, in a multi-flow scenario, we define \(S_{fr}^{c}\) as:
\[S_{fr}^{c}=|f_{c}-r_{c}| \tag{2}\]
where \(f_{c}\) is the expected average fair share of \(F_{c}\), when competing with the flows with default CC scheme, and \(r_{c}\) is the actual achieved average delivery rate of \(F_{c}\). A smaller value of \(S_{fr}^{c}\) indicates a better TCP-friendliness property for \(c\).
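To make the two scores concrete, the following is a minimal sketch of how they could be computed from per-flow measurements. The function names, the units (Mbps, ms), and the use of plain time averages are our assumptions for illustration, not part of the benchmark specification.

```python
import numpy as np

def power_score(rates_mbps, rtts_ms, alpha=2.0):
    """S_p^c = r_c^alpha / d_c (Eq. 1): r_c and d_c are the flow's average
    delivery rate and average round-trip delay; larger is better."""
    r = float(np.mean(rates_mbps))
    d = float(np.mean(rtts_ms))
    return r ** alpha / d

def friendliness_score(fair_share_mbps, rates_mbps):
    """S_fr^c = |f_c - r_c| (Eq. 2): deviation of the achieved average rate
    from the expected fair share; smaller is friendlier."""
    return abs(fair_share_mbps - float(np.mean(rates_mbps)))

# Example: a flow averaging 48 Mbps at 40 ms RTT gives 48**2 / 40 = 57.6.
print(power_score([50, 46, 48], [38, 42, 40]))
```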
### Scenarios Covered
On the one hand, a benchmark clearly cannot cover all possible scenarios, and including too many scenarios may lead to practical issues such as the resources required to run the benchmark. On the other hand, a benchmark that spans only a few cases will not be able to effectively reveal differences among various CC schemes. So, as a general rule of thumb and as a first step to making CC benchmarks, we tried to keep the set big enough to see differences among the CC schemes and small enough to make the benchmarks practical. One can imagine that in the future more sets of scenarios can be added to the current list (as is the case in other communities (Kumar et al., 2019)).
In this section, we elaborate on the network model used in the benchmarks and describe the range of underlying network parameters and the details of the scenarios.
**Network Model:** To keep things simple, we use a single bottleneck link model of the network. This model can be specified using three main parameters: (1) bottleneck link capacity (BW), (2) minimum end-to-end delay (minRTT), and (3) bottleneck buffer size (\(qs\)). We will show later in Section 4 that this simple model is good enough to reveal the differences among various Internet CC schemes. To emulate this network model, we use Mahimahi (Mahimahi, 2017) that creates TUN/TAP interfaces on Linux OS and provides us with control over the BW, minRTT, and \(qs\) values. In addition, we optimize different Linux Kernel parameters (in particular, TCP related ones) to have best performance (Bordes et al., 2015).
**Range of Underlying Network Parameters:** For choosing the range of parameters, we considered typical Internet scenarios with a focus on normal end users on the Internet and some limitations of Mahimahi such as its large overhead for the large values of BW. That said, the benchmarks cover BWs from a few Mbps to about 200Mbps, minRTTs from 10ms to 160ms, and \(qs\) from \(\frac{1}{2}\times\)BDP to 16\(\times\)BDP1.
Footnote 1: The problem of how to set buffer sizes in a network is for itself an interesting (and not a simple) problem that still attracts new solutions. E.g. check out (Bordes et al., 2015) and the papers therein for a recent workshop dedicated to this issue.
We categorize benchmarks into two separate groups called CC-Bench1 and CC-Bench2 which include single-flow and multi-flow scenarios, respectively.
#### 3.2.1. The CC-Bench1
This set of benchmarks consists of single flow scenarios where schemes are evaluated with respect to \(S_{p}^{c}\) score that reflects their throughput and delay performance. The CC-Bench1 includes two main classes of scenarios: (1) the flat scenarios and (2) the step scenarios.
**The Flat Scenarios:** This set of scenarios represents general wired scenarios on the Internet. As its name suggests, it includes wired links with constant/flat bandwidths throughout the experiments. The ranges of BW, minRTT, and \(qs\) are \([12,192]\)Mbps, \([10,160]\)ms, \([\frac{1}{2},16]\times\)BDP, respectively.
**The Step Scenarios:** The flat scenarios alone cannot show the performance of CC schemes over a more dynamic network. So to answer questions such as how a CC scheme reacts when BW reduces or increases suddenly, we bring up the step scenarios. In these scenarios, we start with a given
network BW (\(BW_{1}\)) and after a specific period of time, we change the underlying BW of the network to \(m\times BW_{1}\). Then, we repeat this cycle until the end of the evaluation. The \(m\) value is chosen from the list \((0.25,0.5,2,4)\).
There are a couple of points here. First, we observed that some of the CC schemes perform periodic tasks (e.g. every 10s, BBR reduces its congestion window to a few packets (Han et al., 2017) to observe the change of minRTT). So, to keep the benchmarking process fair, we tried to avoid overlaps between the times of the changes and these periodic tasks by choosing a safe time period for the changes. That said, we perform the changes every 7 seconds2, unless mentioned otherwise. Second, we observed that for large values of BW, Mahimahi's overhead increases to a point that it tangibly impacts the results. To prevent these unwanted impacts, when changing BW, we always keep the resulting BW under 200Mbps. That means if \(BW_{1}\) is 96Mbps, we choose \(m<4\). The range of other parameters is similar to the flat scenarios. For both flat and step scenarios, each CC scheme sends traffic for 30s.
Footnote 2: As far as we know, this value does not overlap with any settings for the existing CC schemes.
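As a rough illustration, the scenario space of the CC-Bench1 could be enumerated as sketched below. The specific grid points are hypothetical sample values inside the stated ranges; the actual benchmark may use a different discretization.

```python
import itertools

BWS_MBPS   = [12, 24, 48, 96, 192]     # assumed samples in the [12, 192] Mbps range
MIN_RTTS   = [10, 20, 40, 80, 160]     # assumed samples in the [10, 160] ms range
QS_X_BDP   = [0.5, 1, 2, 4, 8, 16]     # buffer size as a multiple of the BDP
STEP_MULTS = [0.25, 0.5, 2, 4]         # the m values for the step scenarios

def cc_bench1_scenarios():
    for bw, rtt, qs in itertools.product(BWS_MBPS, MIN_RTTS, QS_X_BDP):
        yield {"kind": "flat", "bw": bw, "min_rtt": rtt, "qs_bdp": qs}
        for m in STEP_MULTS:
            if bw * m < 200:           # keep BW under 200 Mbps (Mahimahi overhead)
                yield {"kind": "step", "bw": bw, "min_rtt": rtt,
                       "qs_bdp": qs, "mult": m, "period_s": 7}
```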
#### 3.2.2. The CC-Bench2
This set of benchmarks provides scenarios for assessing the TCP-friendliness of Internet CC schemes. To that end, we let TCP Cubic, which is the default CC scheme in most platforms (including Linux, Windows, and macOS), compete with the CC scheme under test for accessing a shared bottleneck link, and we capture the performance of schemes with respect to the \(S_{fr}^{c}\) score. Similar to the CC-Bench1, the three main network parameters are changed to make different scenarios. The ranges of minRTT and BW values are similar to CC-Bench1. In addition, we let the bottleneck link have at least a 1\(\times\)BDP buffer size to be able to effectively absorb more than one flow during the tests. In particular, we choose \(qs\) from the \([1,16]\times\)BDP range.
In a general Internet scenario, with high probability, we can assume that a new incoming flow will observe flows controlled by the default CC scheme on the bottleneck link. This comes from the definition of a default CC scheme and its property of being used by the majority of flows. Therefore, we let TCP Cubic come to the network earlier than the CC scheme under test. When the buffer size increases, it generally takes more time for flows to reach the steady state (if any). In our experiments, we observed that reaching a fair-share point may take more than a minute (even when both flows are Cubic flows). So, in the CC-Bench2, we let flows send their packets for 120s to make sure that the results can present meaningful TCP-friendliness scores.
### The Notion of the Winner Schemes
A classic way of looking at who should be called the winner in a certain scenario may lead us toward recognizing the CC scheme with the best score gained throughout that scenario as the winner. However, there are two issues with this way of identifying a winner.
First, since the scores defined in equations 1 and 2 are real numbers, their absolute values can differ only slightly for two CC schemes. So, if we simply perform a mathematical comparison between scores, these slight differences can impact the choice of the winner in a scenario. That said, instead of picking the CC scheme with the best score as the winner, we pick all CC schemes with scores at most 10% worse than the best score as the winners of a scenario. In other words, any scheme with at most 10% lower performance than the best-performing scheme is included in the winner list of that scenario.
Second, simply assigning a number to the performance of a scheme over an entire scenario and then comparing these numbers together to decide the winners may smooth out the important differences among the CC schemes. For instance, how fast CC schemes can react to a sudden change in the network may not be visible in an overall score of the scheme over a longer period. To address this issue, we calculate the score of a scheme in separate intervals throughout the experiments and instead of one score, assign four scores corresponding to the performance of the scheme in four different intervals throughout the test. Now, comparing the scores of a certain interval for all schemes can get a better sense of the performance of different schemes.
Putting it all together, we compare the scores of all CC algorithms over a certain interval of a certain scenario and pick the best-performing schemes (within the 10% winning margin) as the winners. Then, we sweep over all intervals and scenarios.
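A minimal sketch of this winner-selection rule is shown below; the data layout (a mapping from scheme name to its score in one interval of one scenario) is our assumption.

```python
def winners(scores, margin=0.10, higher_is_better=True):
    """Return all schemes within `margin` of the best score in one interval."""
    if higher_is_better:                       # e.g. the S_p score
        best = max(scores.values())
        return [s for s, v in scores.items() if v >= (1 - margin) * best]
    best = min(scores.values())                # e.g. the S_fr score
    return [s for s, v in scores.items() if v <= (1 + margin) * best]
```

The winning rate of a scheme is then the fraction of (scenario, interval) pairs in which it appears in the winner list.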
## 4. The Ranking of the Internet CC
In this section, we rank the Internet CC schemes based on their performance in the CC-Bench1 and CC-Bench2 benchmarks3. The non-exhaustive list of schemes consists of 22 different Internet CC algorithms. These algorithms include 14 available TCP schemes in Linux Kernel named Cubic (Vegas, 1999), YeAH (K
**Benchmarking to Reveal Opportunities:** The very first thing that Fig. 3 illustrates is that none of these Internet CC schemes is perfect: none achieves a 100% winning rate! A simple CC benchmark such as the CC-Bench1 can reveal the huge opportunities existing in this topic, especially when we consider that the top-ranked schemes in CC-Bench1 (Orca & Indigo) are winners in only 1/3rd of the scenarios. How to boost performance further and design a CC algorithm that can achieve higher winning rates is an interesting challenge embracing more innovative designs.
**The Good and the Bad, but not the Ugly:** When we look at the results of Fig. 3 from another angle, we may see that having clear CC benchmarks provides us with both the _full-half_ of the glass (the part that the designers of the top-ranked schemes can be happy about) and the _empty-half_ of the glass (the extent of the remaining performance issues which embrace more novel designs). We think that this is a much more transparent and fair evaluation mechanism compared to when designers of a new CC scheme choose the scope and the details of scenarios in their evaluations.
**Getting Inspired by Looking at the Bigger Picture:** Results of Fig. 3 indicate a common pattern among the CC algorithms: generally, delay-based schemes are among the top-ranked schemes in CC-Bench1, while throughput-oriented schemes are among the top-ranked schemes in CC-Bench2. This is based on the fact that being TCP-friendly when coexisting with a loss-based scheme and (at the same time) being able to minimize delay and maximize throughput when not competing with a loss-based scheme is very hard and challenging. Interestingly, some of the results in Fig. 3 may point to some pragmatic ways to achieve a better trade-off between TCP-friendliness and throughput/delay performance of a CC scheme4. This notion of potentially getting inspired by the good/bad results of existing schemes is among the benefits that CC benchmarks can bring to the table.
Footnote 4: In particular, C2TCP, BBR2, and Orca seem to have managed this trade-off to some different degrees.
**Shedding Light on Neglected Evaluation Aspects:** As expected, Cubic is the top-ranked scheme in CC-Bench2. However, the fact that it cannot achieve a 100% winning rate may seem to be somewhat unexpected. One reason for this performance lies in a key but usually neglected aspect of CC schemes: the convergence speed of a CC scheme. In particular, although it can be shown that Cubic can eventually approach its fair share when competing with another Cubic flow, its convergence time for reaching that point is not necessarily the best, especially when compared with other top 5 schemes in large _qs_ scenarios5. The bottom line is that CC benchmarks (e.g., CC-Bench2) can potentially shed light on some neglected evaluation aspects of CC algorithms.
Footnote 5: The mechanism of segmenting an entire run to different intervals and assigning scores to each interval enables us to capture this behavior.
## 5. A Brief Discussion & Final NOTE
**I Think I Can Define Better CC Benchmarks!** The main point of this work is not to introduce _the_ best set of benchmarks, but to highlight the benefits of having CC benchmarks and the need for using them. The CC-Bench1 and the CC-Bench2 benchmarks are by no means the only or the best benchmarks out there. We think that when a community starts using benchmarks, it will not be hard to imagine that as part of an evolutionary process, more attractive benchmarks will be introduced and adopted in the future.
**Are These Benchmarks Enough?** Clearly, one or two benchmarks cannot cover all sets of evaluations and potentially highlight all the specific characteristics of different CC algorithms. That said, the main role of CC benchmarking is not to replace existing detailed stress tests of schemes, but to provide a base and a common ground for unambiguously comparing general results of different works in this domain.
**A Couple of Points on the Reported Rankings:** First, none of the scenarios in our benchmarks were designed to push certain schemes either to the top or to the bottom of the reported rankings. Second, our focus in this work was not to pinpoint the issues with certain CC schemes in certain scenarios. This alone can manifest itself as a good motivation for separate future work.
**Final Note:** Although benchmarking is a common practice in different communities, when it comes to CC design, this powerful tool is still missing. With the recent rise of a new wave of CC designs, the need for such common foundations to unambiguously evaluate new algorithms increases. We hope this work will highlight this need and provide the community with a few samples to continue with.
Figure 3. The ranking of Internet CC schemes based on their winning rates in the CC-Bench1 (left) and the CC-Bench2 (right) benchmarks |
2304.09055 | A large deviation inequality for the rank of a random matrix | Let $A$ be an $n \times n$ random matrix with independent identically
distributed non-constant subgaussian entries. Then for any $k \le c \sqrt{n}$,
\[
\text{rank}(A) \ge n-k
\] with probability at least $1-\exp(-c'kn)$. | M. Rudelson | 2023-04-18T15:27:03Z | http://arxiv.org/abs/2304.09055v2 | # A large deviation inequality for the rank of a random matrix
###### Abstract.
Let \(A\) be an \(n\times n\) random matrix with independent identically distributed non-constant subgaussian entries. Then for any \(k\leq c\sqrt{n}\),
\[\operatorname{rank}(A)\geq n-k\]
with probability at least \(1-\exp(-c^{\prime}kn)\).
2000 Mathematics Subject Classification: 60B20.
Research supported in part by NSF grant DMS 2054408 and a fellowship from the Simons Foundation.
## 1. Introduction
Estimating the probability that an \(n\times n\) random matrix with independent identically distributed (i.i.d.) entries is singular is a classical problem in probability. The first result in this direction showing that for a matrix with Bernoulli(1/2) entries, this probability is \(O(n^{-1/2})\) was proved by Komlos [7] in 1967. In a breakthrough paper [6], Kahn, Komlos, and Szemeredi established the first exponential bound for Bernoulli matrices:
\[\mathbb{P}(\det(A_{n})=0)=(0.998+o(1))^{n}.\]
The asymptotically optimal exponent has been recently obtained by Tikhomirov [17]:
\[\mathbb{P}(\det(A_{n})=0)=\left(\frac{1}{2}+o(1)\right)^{n}.\]
The exponential bound for probability of singularity holds in a more general context than Bernoulli random matrices. It was proved in [14] for matrices with i.i.d. subgaussian entries and extended in [11] to matrices whose entries have bounded second moment.
A natural extension of the question about the probability of singularity is estimating the probability that a random matrix has a large co-rank. More precisely, we are interested in the asymptotics of \(\mathbb{P}(\operatorname{rank}(A_{n})\leq n-k)\), where \(k<n\) is a number which can grow with \(n\) as \(n\to\infty\). Such a rank means that there are \(k\) columns of \(A_{n}\) which are linearly dependent on the other columns. Based on the fact that
\[\mathbb{P}(\operatorname{rank}(A_{n})\leq n-1)=\mathbb{P}(A_{n}\text{ is singular})\leq\exp(-cn),\]
and the independence of the columns of \(A_{n}\), one can predict that the probability that the rank of \(A_{n}\) does not exceed \(n-k\) is bounded by \(\big{(}\exp(-cn)\big{)}^{k}=\exp(-cnk)\). Proving such a bound amounts to obtaining a super-exponential probability estimate if \(k\to\infty\) as \(n\to\infty\). This makes a number of key tools in the previously mentioned
papers unavailable, because these tools were intended to rule out pathological events of probability \(O(\exp(-cn))\) which cannot be considered negligible in this context.
The existing results fell short of this tight bound until recently. Kahn, Komlos, and Szemeredi showed that the probability that a Bernoulli(1/2) matrix has rank smaller than \(n-k\) is \(O(f(k)^{n})\) where \(f(k)\to 0\) as \(k\to\infty\). The intuitive prediction above was recently confirmed by Jain, Sah, and Sawhney in the case when \(k\in\mathbb{N}\) is a fixed number. Building on the ideas of Tikhomirov [17], they proved an optimal bound for random matrices with independent Bernoulli(\(p\)) entries. Namely, for any \(p\in(0,1/2],\ \varepsilon>0\), and for any \(n>n_{0}(k,p,\varepsilon)\)
\[\mathbb{P}(\operatorname{rank}(A_{n})\leq n-k)\leq\left(1-p+\varepsilon \right)^{kn}.\]
This completely solves the problem for Bernoulli matrices within the exponential range. However, the methods of this paper do not seem to be extendable to the case when \(k\) grows together with \(n\), i.e., to the super-exponential range of probabilities (see Section 2.2 for more details).
The main result of this paper confirms this prediction in the super-exponential range for all matrices with centered i.i.d. subgaussian entries. A random variable \(\xi\) is called subgaussian if
\[\mathbb{E}\exp\left(\left(\frac{\xi}{K}\right)^{2}\right)<\infty\]
for some \(K>0\). In what follows, we regard \(K\) as a constant and allow other constants such as \(C,c,c^{\prime}\), etc., to depend on it. This is a rich class of random variables including, for instance, all bounded ones.
We prove the following theorem.
**Theorem 1.1**.: _Let \(k,n\in\mathbb{N}\) be numbers such that \(k\leq cn^{1/2}\). Let \(A\) be an \(n\times n\) matrix with i.i.d. non-constant subgaussian entries. Then_
\[\mathbb{P}\left(\operatorname{rank}(A)\leq n-k\right)\leq\exp(-c^{\prime}kn).\]
_Remark 1.2_.: Combining the technique of this paper with that of Nguyen [10], one can also obtain a lower bound for the singular value \(s_{n-k}(A_{n})\) of the same type as in [10] but with the additive error term \(\exp(-ckn)\) instead of \(\exp(-cn)\). We will not pursue this route in order to keep the presentation relatively simple.
The importance of getting the large deviation bound of Theorem 1.1 in the regime when \(k\) grows simultaneously with \(n\) stems in particular from its application to _Quantitative Group Testing_ (QGT). This computer science problem considers a collection of \(n\) items containing \(k\) defective ones, where \(k<n\) is regarded as a known number. A test consists of selecting a random pool of items choosing each one independently with probability \(1/2\) and outputting the number of defective items in the pool. The aim of the QGT is to efficiently determine the defective items after a small number of tests. The question of constructing an efficient algorithm for QGT is still open. In [3], Feige and Lellouche introduced the following relaxation of the QGT: after \(m>k\) tests, one has to produce a subset \(S\subset[n]\) of cardinality \(m\), containing all defective items. This means that unlike the original QGT, the approach of Feige and Lellouche allows false positives which makes the problem simpler and admits more efficient algorithms. Denote by \(A\) the \(m\times n\) matrix whose rows are the indicator functions of the tests, and denote by \(A|_{S}\) its submatrix with
columns from the set \(S\subset[n]\). Then \(A\) is a random matrix with i.i.d. Bernoulli entries. The main result of [3] asserts that if an algorithm for the relaxed problem succeeds and outputs a set \(S\subset[n]\) and \(\operatorname{rank}(A|_{S})\geq m-O(\log n)\), then one can efficiently determine the set of defective items. Checking this criterion for a given algorithm is difficult since the set \(S\) is not known in advance. However, if we know that
\[\operatorname{rank}(A|_{S})\geq m-O(\log n) \tag{1.1}\]
for all \(m\)-element sets \(S\subset[n]\) at the same time, this condition would be redundant, and all algorithms for the relaxed problem could be adapted to solve the QGT. In other words, we need to estimate the minimal rank of all \(m\times m\) submatrices of an \(m\times n\) random matrix. We show below that Theorem 1.1 implies that the bound (1.1) holds with high probability, and moreover, that this is an optimal estimate (see Lemma 6.2).
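For illustration, a short numpy sketch of the criterion (1.1): we sample an \(m\times n\) Bernoulli(1/2) test matrix and compute the co-rank of the submatrix corresponding to one candidate set \(S\). The sizes and the randomly chosen \(S\) are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 60                              # items and tests (illustrative sizes)
A = rng.integers(0, 2, size=(m, n))         # rows = indicators of the random pools

S = rng.choice(n, size=m, replace=False)    # one candidate m-element set of items
corank = m - np.linalg.matrix_rank(A[:, S])
print(f"co-rank of A|_S: {corank}")         # criterion (1.1) asks for O(log n)
```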
**Acknowledgement.** This work was performed when the author held an Erna and Jakob Michael Visiting Professorship at the Department of Mathematics at the Weizmann Institute of Science. The author thanks the Weizmann Institute for its hospitality. He is especially grateful to Ofer Zeitouni for bringing this problem to his attention and for numerous helpful discussions.
## 2. Notation and the outline of the proof
### Notation
We denote by \([n]\) the set of natural numbers from \(1\) to \(n\). Given a vector \(x\in\mathbb{R}^{n}\), we denote by \(\left\|x\right\|_{2}\) its standard Euclidean norm: \(\left\|x\right\|_{2}=\left(\sum_{j\in[n]}x_{j}^{2}\right)^{1/2}\). The unit sphere of \(\mathbb{R}^{n}\) is denoted by \(S^{n-1}\).
If \(V\) is an \(m\times l\) matrix, we denote by \(\operatorname{Row}_{i}(V)\) its \(i\)-th row and by \(\operatorname{Col}_{j}(V)\) its \(j\)-th column. Its singular values will be denoted by
\[s_{1}(V)\geq s_{2}(V)\geq\cdots\geq s_{m}(V)\geq 0.\]
The operator norm of \(V\) is defined as
\[\left\|V\right\|=\max_{x\in S^{l-1}}\left\|Vx\right\|_{2},\]
and the Hilbert-Schmidt norm as
\[\left\|V\right\|_{\operatorname{HS}}=\left(\sum_{i=1}^{m}\sum_{j=1}^{l}v_{i, j}^{2}\right)^{1/2}.\]
Note that \(\left\|V\right\|=s_{1}(V)\) and \(\left\|V\right\|_{\operatorname{HS}}=\left(\sum_{j=1}^{m}s_{j}(V)^{2}\right) ^{1/2}.\)
Throughout the paper, the letters \(c,\bar{c},C\) etc. stand for absolute constants whose values may change from line to line.
### Outline of the proof
Let \(A\) be an \(n\times n\) random matrix with i.i.d. entries. The fact that this matrix has rank at most \(n-k\) means that at least \(k\) of its columns are linearly dependent on the rest. Assume that the \(k\) last columns are linearly dependent on the others. As the results of [14] show, for a typical realization of the first \(n-k\) columns, the probability that a given column belongs to their linear span is \(O(\exp(-cn))\). Since the last \(k\) columns are mutually independent and at the same time independent of the first \(n-k\) ones, the probability that all \(k\) columns
fall into the linear span of the rest is \(O\Big{(}(\exp(-cn))^{k}\Big{)}=O(\exp(-cnk))\) which is the content of our main theorem.
The problem with this argument, however, is in the meaning of the term "typical". It includes several requirements on the matrix with these \(n-k\) rows, including that its norm is \(O(\sqrt{n})\) and that its kernel contains no vector with a rigid arithmetic structure. As was shown in [14], all these requirements hold with probability at least \(1-\exp(-cn)\), which is enough to derive that the matrix is invertible with a similar probability. In our case, when we aim at bounding the probability by \(\exp(-ckn)\) with \(k\) which can tend to infinity with \(n\), events which have just exponentially small probability cannot be considered negligible any longer. In particular, we are not able to assume that the operator norm of a random matrix is bounded by \(O(\sqrt{n})\). This is, however, the easiest of the arising problems, as we will be able to use the better-concentrated Hilbert-Schmidt norm instead.
The problem of ruling out the arithmetic structure of the kernel turns out to be more delicate. For Bernoulli\((p)\) random matrices with \(0<p\leq\frac{1}{2}\), Jain, Sah, and Sawhney [4] overcame it by replacing the approach based on the least common denominator used in [14] with a further development of the averaging method of Tikhomirov [17]. This allowed them to prove that if \(k\) is a constant, then with probability \(1-4^{-kn}\), the kernel of the matrix consisting of the first \(n-k\) rows either consists of vectors close to sparse (compressible), or does not contain _any_ vector with a problematic arithmetic structure, see [4, Proposition 2.7] whose proof follows [5, Proposition 3.7]. They further derived from this fact that the probability that a random Bernoulli matrix has rank \(n-k\) or smaller does not exceed
\[(1-p+o(1))^{kn}\]
for any constant \(k\). However, this approach is no longer feasible if \(k\) grows together with \(n\). Indeed, the kernel of an \((n-k)\times n\) Bernoulli random matrix contains the vector \((1,\ldots,1)\) with probability \(\left(c/\sqrt{n}\right)^{n}=\exp(-c^{\prime}n\log n)\). It can also contain numerous other vectors of the same type with a similar probability. Hence the kernel of such a matrix contains incompressible vectors with a rigid arithmetic structure for \(k=\Omega(\log n)\), which includes the range important for the question of Feige and Lellouche.
Fortunately, the complete absence of vectors with a rigid arithmetic structure in the kernel is not necessary for proving the bound on the probability of a low rank. It is sufficient to rule out the situation where such vectors occupy a significant part of the kernel. More precisely, we show that if \(B\) is an \((n-k)\times n\) random matrix with i.i.d. subgaussian entries, then with probability at least \(1-\exp(-ckn)\), its kernel contains a \((k/2)\)-dimensional subspace free of the vectors with a rigid arithmetic structure. Checking this fact is the main technical step in proving our main theorem.
We outline the argument leading to it below. We try to follow the geometric method developed in [12], [14]. However, the aim of obtaining a super-exponential probability bound forces us to work with systems of problematic vectors instead of single ones. To handle such systems, we introduce a notion of an _almost orthogonal_\(l\)-tuple of vectors in Section 3. These systems are sufficiently simple to allow efficient
estimates. At the same time, we show in Lemma 3.3 that a linear subspace containing many "bad" vectors contains an almost orthogonal system of such vectors possessing an important minimality property.
Following the general scheme, we split the unit sphere of \(\mathbb{R}^{n}\) into compressible and incompressible parts. Let us introduce the respective definitions.
**Definition 2.1**.: _Let \(s\in[n]\) and let \(\tau>0\). Define the set of \(s\)-sparse vectors by_
\[\operatorname{Sparse}(s)=\{x\in\mathbb{R}^{n}:\ |\operatorname{supp}(x)|\leq s\}\]
_and the sets of compressible and incompressible vectors by_
\[\operatorname{Comp}(s,\tau) =\{x\in S^{n-1}:\ \operatorname{dist}(x,\operatorname{Sparse}(s))\leq\tau\},\] \[\operatorname{Incomp}(s,\tau) =S^{n-1}\setminus\operatorname{Comp}(s,\tau).\]
Note that we define the sparse vectors in \(\mathbb{R}^{n}\) and not in \(S^{n-1}\). This is not important but allows us to shorten some future calculations.
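Since the best \(s\)-sparse approximation of a vector keeps its \(s\) largest-magnitude coordinates, the distance in Definition 2.1 is just the norm of the remaining coordinates. A short numpy sketch:

```python
import numpy as np

def dist_to_sparse(x, s):
    """Euclidean distance from x to Sparse(s): the norm of x outside
    its s largest-magnitude coordinates."""
    if s >= len(x):
        return 0.0
    tail = np.sort(np.abs(x))[:-s] if s > 0 else np.abs(x)
    return float(np.linalg.norm(tail))

def is_compressible(x, s, tau):
    x = x / np.linalg.norm(x)        # Definition 2.1 lives on the unit sphere
    return dist_to_sparse(x, s) <= tau
```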
In Section 4, we show that the probability that the kernel of the matrix \(B=(A_{[n-k]\times[n]})^{\top}\) contains an almost orthogonal system of \(k/4\) compressible vectors is negligible. This is done by using a net argument, i.e., by approximating vectors from our system by vectors from a certain net. The net will be a part of a scaled integer lattice, and the approximation will be performed by _random rounding_, a technique widely used in computer science and introduced in random matrix theory by Livshyts [8]. Let \(B\) be a random matrix. The general net argument relies on obtaining a uniform lower bound for \(\left\lVert By\right\rVert_{2}\) over all points \(y\) in the net and approximating a given point \(x\) by the points of the net. In this case, one can use the triangle inequality to obtain
\[\left\lVert Bx\right\rVert_{2}\geq\left\lVert By\right\rVert_{2}-\left\lVert B \right\rVert\cdot\left\lVert x-y\right\rVert_{2}.\]
This approach runs into problems in the absence of a good control of \(\left\lVert B\right\rVert\). However, if the net is constructed as a part of a scaled integer lattice, then one can choose the approximating point \(y\) as a random vertex of the cubic cell containing \(x\). This essentially allows one to replace \(\left\lVert B\right\rVert\) in the approximation above by the more stable quantity \(\left\lVert B\right\rVert_{\operatorname{HS}}/\sqrt{n}\). Moreover, this replacement will be possible for a randomly chosen \(y\) with probability close to \(1\).
In our case, we have to approximate the entire system of vectors while preserving the almost orthogonality property. This makes the situation more delicate, and we can only prove that this approximation succeeds with probability which is exponentially small in \(k\). Fortunately, this is enough since we need just one approximation, so any positive probability is sufficient.
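A minimal sketch of the coordinatewise random rounding idea, for a single vector: each coordinate is rounded up with probability equal to its fractional part, which makes the rounded lattice point unbiased. The actual argument rounds the whole almost orthogonal system jointly and conditions on norm events, which this sketch does not capture.

```python
import numpy as np

def random_round(x, gamma, rng=None):
    """Round each coordinate of x to a neighbouring point of gamma * Z^n so
    that E[random_round(x)] = x coordinatewise."""
    rng = rng or np.random.default_rng()
    lo = np.floor(x / gamma)
    frac = x / gamma - lo                 # P(round this coordinate up) = frac
    up = rng.random(x.shape) < frac
    return gamma * (lo + up)
```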
In Section 5, we assume that the kernel of \(B\) contains a subspace of dimension \((3/4)k\) consisting of incompressible vectors and prove that with high probability, this subspace contains a further one of dimension \(k/2\) which has no vectors with a rigid arithmetic structure. The arithmetic structure is measured in terms of the _least common denominator_ (LCD) which is defined in Section 3.3. To this end, we consider a minimal almost orthogonal system of \(k/4\) vectors having sub-exponential LCD-s and show that the presence of such system in the kernel is unlikely using the net argument and random rounding. This is more involved than the case of compressible vectors since the magnitude of the LCD varies from \(O(\sqrt{n})\) to the exponential level, and thus requires approximation on different scales. To implement it, we decompose the set of such systems according to the magnitudes of the
LCD-s and then we scale each system by the sequence of its LCD-s. Because of the multiplicity of scales, the approximation has to satisfy a number of conditions at once. At this step we also rely on random rounding, which allows us to check all the required conditions probabilistically. The verification that all of them can be satisfied simultaneously, although with an exponentially small probability, carried out in the proof of Lemma 5.3, is the most technical part of the argument.
Finally, in Section 6, we collect all the ingredients and finish the proof of Theorem 1.1.
## 3. Preliminary results
### Almost orthogonal systems of vectors
We will have to control the arithmetic structure of the subspace spanned by \(n-k\) columns of the matrix \(A\) throughout the proof. This structure is defined by the presence of vectors which are close to the integer lattice. To be able to estimate the probability that many such vectors lie in the subspace, we will consider special configurations of almost orthogonal vectors which are easier to analyze. This leads us to the following definition.
**Definition 3.1**.: _Let \(\nu\in(0,1)\). An \(l\)-tuple of vectors \((v_{1},\ldots,v_{l})\subset\mathbb{R}^{n}\setminus\{0\}\) is called \(\nu\)-almost orthogonal if the \(n\times l\) matrix \(W\) with columns \(\left(\frac{v_{1}}{\|v_{1}\|_{2}},\ldots,\frac{v_{l}}{\|v_{l}\|_{2}}\right)\) satisfies_
\[1-\nu\leq s_{l}(W)\leq s_{1}(W)\leq 1+\nu.\]
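Definition 3.1 is easy to test numerically via the singular values of the normalized matrix \(W\); a short numpy sketch:

```python
import numpy as np

def is_almost_orthogonal(vectors, nu):
    """Check Definition 3.1: W has columns v_i / ||v_i||_2, and we test
    1 - nu <= s_l(W) <= s_1(W) <= 1 + nu."""
    W = np.column_stack([v / np.linalg.norm(v) for v in vectors])
    s = np.linalg.svd(W, compute_uv=False)
    return 1 - nu <= s[-1] and s[0] <= 1 + nu
```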
Estimating the largest and especially the smallest singular values of a general deterministic matrix is a delicate task. We employ a very crude criterion below.
**Lemma 3.2**.: _Let \(\nu\in[0,\frac{1}{4}]\) and let \((v_{1},\ldots,v_{l})\subset\mathbb{R}^{n}\setminus\{0\}\) be an \(l\)-tuple such that_
\[\left\|P_{\mathrm{span}(v_{1},\ldots,v_{j})}v_{j+1}\right\|_{2}\leq\frac{\nu} {\sqrt{l}}\left\|v_{j+1}\right\|_{2}\quad\text{for all $j\in[l-1]$}.\]
_Then \((v_{1},\ldots,v_{l})\subset\mathbb{R}^{n}\) is a \((2\nu)\)-almost orthogonal system. Moreover, if \(V\) is the \(n\times l\) matrix with columns \(v_{1},\ldots,v_{l}\), then_
\[\det^{1/2}(V^{\top}V)\geq 2^{-l}\prod_{j=1}^{l}\left\|v_{j}\right\|_{2}.\]
Proof.: Construct an orthonormal system in \(\mathbb{R}^{n}\) by setting
\[e_{1}=\frac{v_{1}}{\left\|v_{1}\right\|_{2}},\qquad e_{j+1}=\frac{P_{(\mathrm{ span}(v_{1},\ldots,v_{j}))^{\perp}}v_{j+1}}{\left\|P_{(\mathrm{span}(v_{1}, \ldots,v_{j}))^{\perp}}v_{j+1}\right\|_{2}}\quad\text{for all $j\in[l-1]$}\]
and complete it to an orthonormal basis. The \(n\times l\) matrix \(W\) with columns \(\mathrm{Col}_{j}(W)=\frac{v_{j}}{\left\|v_{j}\right\|_{2}}\) written in this basis has the form \(W=\begin{bmatrix}\bar{W}\\ 0\end{bmatrix}\), where \(\bar{W}\) is an \(l\times l\) upper triangular matrix. The assumption of the lemma yields
\[\left(\sum_{i=1}^{j-1}\bar{W}_{i,j}^{2}\right)^{1/2}=\left\|P_{\mathrm{span}(v _{1},\ldots,v_{j-1})}\,\mathrm{Col}_{j}(\bar{W})\right\|_{2}\leq\frac{\nu}{ \sqrt{l}}\quad\text{for all $j\in\{2,\ldots,l\}$}.\]
and so,
\[\sqrt{1-\frac{\nu^{2}}{l}}\leq\bar{W}_{j,j}\leq 1\quad\text{for all $j\in[l]$},\]
since \(\left\|\mathrm{Col}_{j}(\bar{W})\right\|_{2}=1\). Therefore,
\[\left\|\bar{W}-\mathrm{diag}(\bar{W})\right\|\leq\left\|\bar{W}-\mathrm{diag}( \bar{W})\right\|_{\mathrm{HS}}=\left(\sum_{j=1}^{l}\sum_{i<j}\bar{W}_{i,j}^{2} \right)^{1/2}\leq\nu,\]
and thus
\[1-2\nu \leq 1-\left\|I_{l}-\mathrm{diag}(\bar{W})\right\|-\left\|\mathrm{ diag}(\bar{W})-\bar{W}\right\|\] \[\leq s_{l}(\bar{W})\leq s_{1}(\bar{W})\] \[\leq 1+\left\|I_{l}-\mathrm{diag}(\bar{W})\right\|+\left\|\mathrm{ diag}(\bar{W})-\bar{W}\right\|\] \[\leq 1+2\nu.\]
This implies the first claim of the lemma. The second claim immediately follows from the first one.
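A quick numerical sanity check of Lemma 3.2. We build unit vectors so that each has projection of length at most \(\nu/\sqrt{l}\) onto the span of the preceding ones, and verify the singular value bounds; the construction below is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, l, nu = 50, 8, 0.25

# v_j = sqrt(1 - t_j^2) e_j + t_j u_j, where u_j is a unit vector in the span
# of the previously built vectors and t_j <= nu / sqrt(l); then the projection
# of v_j onto span(v_1, ..., v_{j-1}) has length exactly t_j.
V = np.zeros((n, l))
for j in range(l):
    t = rng.uniform(0, nu / np.sqrt(l)) if j else 0.0
    u = np.zeros(n)
    if j:
        u[:j] = rng.standard_normal(j)
        u /= np.linalg.norm(u)
    V[:, j] = t * u
    V[j, j] += np.sqrt(1 - t ** 2)

s = np.linalg.svd(V, compute_uv=False)
assert 1 - 2 * nu <= s[-1] <= s[0] <= 1 + 2 * nu   # conclusion of Lemma 3.2
print(np.round(s, 3))
```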
The next lemma shows that if \(W\subset\mathbb{R}^{n}\setminus\{0\}\) is a closed set and \(E\subset\mathbb{R}^{n}\) is a linear subspace, then we can find a large almost orthogonal system in \(E\cap W\) having a certain minimality property or a further linear subspace \(F\subset E\) of a large dimension disjoint from \(W\). This minimality property will be a key to estimating the least common denominator below.
**Lemma 3.3** (Almost orthogonal system).: _Let \(W\subset\mathbb{R}^{n}\setminus\{0\}\) be a closed set, let \(l<k\leq n\), and let \(E\subset\mathbb{R}^{n}\) be a linear subspace of dimension \(k\). Then at least one of the following holds._
1. _There exist vectors_ \(v_{1},\ldots,v_{l}\in E\cap W\) _such that_ 1. _The_ \(l\)_-tuple_ \((v_{1},\ldots,v_{l})\) _is_ \(\left(\frac{1}{8}\right)\)_-almost orthogonal;_ 2. _For any_ \(\theta\in\mathbb{R}^{l}\) _such that_ \(\left\|\theta\right\|_{2}\leq\frac{1}{20\sqrt{l}}\)__ \[\sum_{i=1}^{l}\theta_{i}v_{i}\notin W.\]
2. _There exists a subspace_ \(F\subset E\) _of dimension_ \(k-l\) _such that_ \(F\cap W=\varnothing\)_._
Proof.: Let us construct a sequence of vectors \(v_{1},\ldots,v_{l^{\prime}},\;l^{\prime}\leq l\) with \(\left\|v_{1}\right\|_{2}\leq\left\|v_{2}\right\|_{2}\leq\cdots\leq\left\|v_{ l^{\prime}}\right\|_{2}\) by induction. If \(E\cap W=\varnothing\), then (2) holds for any subspace \(F\) of \(E\) of dimension \(k-l\), so the lemma is proved. Assume that \(E\cap W\neq\varnothing\), and define \(v_{1}\) as the vector of this set having the smallest norm.
For convenience, denote \(v_{0}=0\). Assume that for some \(j\in[l-1]\), the vectors \(v_{1},\ldots,v_{j}\) with \(\left\|v_{1}\right\|_{2}\leq\left\|v_{2}\right\|_{2}\leq\cdots\leq\left\|v_{j}\right\|_{2}\) have been constructed so that for all \(0\leq i\leq j-1\), \(v_{i+1}\) is the vector of the smallest norm in \(E\cap W\) for which the inequality
\[\left\|P_{\mathrm{span}(v_{0},\ldots,v_{i})}v_{i+1}\right\|_{2}\leq\frac{1}{16\sqrt{l}}\left\|v_{i}\right\|_{2}\]
holds. Note that if \(j=1\), then the condition above is vacuous, and the vector \(v_{1}\) has been already constructed. Assume that \(j\geq 2\), and we have found such vectors \(v_{1},\ldots,v_{j}\). Consider the set
\[H_{j}=\{v\in E\cap W:\;\left\|P_{\mathrm{span}(v_{0},\ldots,v_{j})}v\right\|_{ 2}\leq\frac{1}{16\sqrt{l}}\left\|v_{j}\right\|_{2}\}.\]
If \(H_{j}=\varnothing\), then (2) holds for any subspace of \(E\cap(\operatorname{span}(v_{1},\ldots,v_{j}))^{\perp}\) of dimension \(k-l\) which proves the lemma in this case. Otherwise, choose a vector \(v\in H_{j}\) having the smallest norm and denote it by \(v_{j+1}\). By construction, \(\left\lVert v_{j+1}\right\rVert_{2}\geq\left\lVert v_{j}\right\rVert_{2}\) since otherwise it would have been chosen at one of the previous steps.
Assume that we have run this process for \(l\) steps and constructed such a sequence \(v_{1},\ldots,v_{l}\). Then for any \(j\in[l]\)
\[\left\lVert P_{\operatorname{span}(v_{1},\ldots,v_{j-1})}v_{j}\right\rVert_{2 }\leq\frac{1}{16\sqrt{l}}\left\lVert v_{j-1}\right\rVert_{2}\leq\frac{1}{16 \sqrt{l}}\left\lVert v_{j}\right\rVert_{2},\]
and Lemma 3.2 ensures that (1a) holds. Therefore, to complete the induction step, we have to check only (1b). Assume that there exists \(\theta\in\mathbb{R}^{j+1}\) such that \(\left\lVert\theta\right\rVert_{2}\leq\frac{1}{20\sqrt{l}}\) and
\[\sum_{i=1}^{j+1}\theta_{i}v_{i}\in W. \tag{3.1}\]
Let \(V^{j}\) be the \(n\times j\) matrix with columns \(v_{1},\ldots,v_{j}\). The already verified condition (1a) yields \(\left\lVert V^{j}\right\rVert\leq\frac{9}{8}\max_{i\in[j]}\left\lVert v_{i} \right\rVert_{2}\leq\frac{9}{8}\left\lVert v_{j}\right\rVert_{2}\). Since \(v_{j+1}\in H_{j}\),
\[\left\lVert P_{\operatorname{span}(v_{1},\ldots,v_{j})}\left( \sum_{i=1}^{j+1}\theta_{i}v_{i}\right)\right\rVert_{2} \leq\left\lVert\sum_{i=1}^{j}\theta_{i}v_{i}\right\rVert_{2}+ \left\lvert\theta_{j+1}\right\rvert\cdot\left\lVert P_{\operatorname{span}(v_{ 1},\ldots,v_{j})}v_{j+1}\right\rVert_{2}\] \[\leq\left\lVert V^{j}\right\rVert\cdot\left\lVert\theta\right\rVert _{2}+\left\lVert\theta\right\rVert_{2}\cdot\frac{1}{16\sqrt{l}}\left\lVert v _{j}\right\rVert_{2}\] \[\leq\left(\frac{9}{8}+\frac{1}{16\sqrt{l}}\right)\left\lVert \theta\right\rVert_{2}\cdot\left\lVert v_{j}\right\rVert_{2}\] \[<\frac{1}{16\sqrt{l}}\left\lVert v_{j}\right\rVert_{2}.\]
The last inequality above uses that \(\left\lVert\theta\right\rVert_{2}\leq\frac{1}{20\sqrt{l}}\) and \(\sqrt{l}\geq 1\): indeed, \(\left(\frac{9}{8}+\frac{1}{16}\right)\cdot\frac{1}{20\sqrt{l}}=\frac{19}{320\sqrt{l}}<\frac{1}{16\sqrt{l}}\). Since \(\sum_{i=1}^{j+1}\theta_{i}v_{i}\in E\), the bound above together with (3.1) shows that this vector belongs to \(H_{j}\). Since by the inductive construction \(v_{j+1}\) is a vector of smallest norm in \(H_{j}\), \(\left\lVert\sum_{i=1}^{j+1}\theta_{i}v_{i}\right\rVert_{2}\geq\left\lVert v_{j+1}\right\rVert_{2}\). On the other hand, by (1a) and Lemma 3.2, \(\left\lVert V^{j+1}\right\rVert\leq\frac{9}{8}\left\lVert v_{j+1}\right\rVert_{2}\), so
\[\left\lVert\sum_{i=1}^{j+1}\theta_{i}v_{i}\right\rVert_{2}\leq\left\lVert V^{ j+1}\right\rVert\cdot\left\lVert\theta\right\rVert_{2}\leq\frac{9}{8}\left\lVert v _{j+1}\right\rVert_{2}\cdot\left\lVert\theta\right\rVert_{2}\leq\frac{1}{16 \sqrt{l}}\left\lVert v_{j+1}\right\rVert_{2}.\]
This contradiction shows that (3.1) is not satisfied, so (1b) holds.
### Concentration and tensorization
We will need several elementary concentration results. To formulate them, we introduce a few definitions. Denote by \(\mathcal{L}(X,t)\) the Levy concentration function of a random vector \(X\in\mathbb{R}^{m}\):
\[\mathcal{L}(X,t)=\sup_{y\in\mathbb{R}^{m}}\mathbb{P}(\left\lVert X-y\right\rVert _{2}\leq t).\]
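As a minimal numerical sketch (not part of the argument; the scalar case \(m=1\), the grid of candidate centers \(y\), and the sample size below are arbitrary illustrative choices), the Levy concentration function can be estimated by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)

def levy_concentration(samples, t, centers):
    # Monte Carlo estimate of L(X, t) = sup_y P(|X - y| <= t),
    # with the supremum approximated over a finite grid of centers.
    return max(np.mean(np.abs(samples - y) <= t) for y in centers)

X = rng.standard_normal(100_000)      # X ~ N(0, 1)
grid = np.linspace(-3.0, 3.0, 121)
for t in (0.1, 0.5, 1.0):
    # for a standard Gaussian the supremum is attained at y = 0,
    # where P(|X| <= t) = 2*Phi(t) - 1
    print(t, levy_concentration(X, t, grid))
```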
Let \(\xi\in\mathbb{R}\) be a random variable. We will call it subgaussian if \(\mathbb{E}\exp(\lambda|\xi|^{2})<\infty\) for some \(\lambda>0\) and denote
\[\left\lVert\xi\right\rVert_{\psi_{2}}:=\inf\left\{s>0:\ \mathbb{E}\left[\exp \left(\frac{|\xi|}{s}\right)^{2}\right]\leq 2\right\}.\]
Subgaussian random variables form a large class including normal and all bounded random variables. Let \(A\) be an \(n\times n\) matrix. Denote by \(\operatorname{Row}_{i}(A)\) and \(\operatorname{Col}_{j}(A)\) the \(i\)-th row and the \(j\)-th column of \(A\) respectively.
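For instance, for a symmetric \(\pm 1\) variable one has \(\mathbb{E}\exp(|\xi|^{2}/s^{2})=\exp(1/s^{2})\leq 2\) exactly when \(s\geq 1/\sqrt{\ln 2}\approx 1.2011\). A minimal sketch recovering this value numerically by bisection (the sample size and bracketing interval are arbitrary choices):

```python
import numpy as np

def psi2_norm(samples, lo=0.5, hi=10.0, iters=50):
    # Bisection for the smallest s with E[exp((|xi|/s)^2)] <= 2.
    f = lambda s: np.mean(np.exp((np.abs(samples) / s) ** 2))
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 2.0 else (lo, mid)
    return hi

rng = np.random.default_rng(1)
rademacher = rng.choice([-1.0, 1.0], size=200_000)
print(psi2_norm(rademacher))   # ~ 1/sqrt(ln 2) ~ 1.2011 for +/-1 variables
```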
For technical reasons, let us restrict the class of random entries of the matrix and introduce some parameters controlling their behavior. First, without loss of generality, we may assume that the entries of \(A\) are centered, i.e., \(\operatorname{\mathbb{E}}a_{i,j}=0\). Indeed, since all entries are i.i.d., subtracting the expectation from each one results in a rank one perturbation of the matrix \(A\) which does not affect the conclusion of Theorem 1.1. Second, since the entries are non-constant, \(\mathcal{L}(a_{i,j},t)<1\) for some \(t>0\). After appropriately scaling the entries, we can assume that \(t=1\). Therefore, throughout the paper, we will assume that the entries of the matrix \(A\) are i.i.d. copies of a random variable \(\xi\) satisfying the following conditions:
\[\operatorname{\mathbb{E}}\xi=0,\quad\left\|\xi\right\|_{\psi_{2}}\leq K,\quad \mathcal{L}(\xi,1)\leq 1-p. \tag{3.2}\]
Without loss of generality, we may assume that \(K\geq 1\).
Throughout the paper we consider random matrices whose entries are independent copies of a random variable \(\xi\) satisfying (3.2). The constants \(c,C,C^{\prime}\) etc. appearing in various formulas below may depend on \(p\) and \(\left\|\xi\right\|_{\psi_{2}}\).
**Lemma 3.4** (Operator norm).: _Let \(m\leq n\), and let \(Q\) be an \(m\times n\) matrix with centered independent entries \(q_{i,j}\) such that \(\left|q_{i,j}\right|\leq 1\). Then_
\[\operatorname{\mathbb{P}}(\left\|Q\right\|\geq C\sqrt{n})\leq\exp(-cn).\]
Lemma 3.4 follows from a general norm estimate for a random matrix with centered subgaussian entries; see, e.g., [14]. It is easy to see that the statement of the lemma is optimal up to the constants \(C,c\). Note that the event \(\left\|Q\right\|\geq C\sqrt{n}\) has probability which is exponentially small in \(n\). Such a bound is sufficient for the application we have in mind, but it is not strong enough for the operator norm of \(A\). Indeed, since our aim is to prove the bound \(\exp(-ckn)\) for the probability that the rank of \(A\) is smaller than \(n-k\), and \(k\) can be large, we cannot afford to neglect events of probability \(\exp(-cn)\). This forces us to consider another matrix norm which enjoys stronger concentration properties.
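A quick simulation illustrating the \(\sqrt{n}\) scaling in Lemma 3.4 (the matrix sizes are arbitrary illustrative choices; for i.i.d. \(\pm 1\) entries the ratio \(\left\|Q\right\|/\sqrt{n}\) is known to approach \(2\)):

```python
import numpy as np

rng = np.random.default_rng(2)
for n in (100, 400, 1600):
    Q = rng.choice([-1.0, 1.0], size=(n, n))     # centered entries bounded by 1
    op_norm = np.linalg.svd(Q, compute_uv=False)[0]
    print(n, op_norm / np.sqrt(n))               # ratio stabilizes near 2
```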
For a matrix with subgaussian entries, we prove a stronger bound for the Hilbert-Schmidt norm.
**Lemma 3.5** (Hilbert-Schmidt norm).: _Let \(m\leq n\) and let \(A\) be an \(m\times n\) matrix whose entries are independent copies of a random variable \(\xi\) satisfying (3.2). Then_
\[\operatorname{\mathbb{P}}(\left\|A\right\|_{\operatorname{HS}}\geq 2Kn)\leq \exp(-cn^{2}).\]
Proof.: Since \(\operatorname{\mathbb{E}}\xi^{2}\leq\left\|\xi\right\|_{\psi_{2}}^{2}<\infty\),
\[\operatorname{\mathbb{E}}\exp\left(\frac{\left|\xi^{2}-\operatorname{ \mathbb{E}}\xi^{2}\right|}{\left\|\xi\right\|_{\psi_{2}}^{2}}\right)\leq \operatorname{\mathbb{E}}\exp\left(\frac{\xi^{2}}{\left\|\xi\right\|_{\psi_{2} }^{2}}+1\right)\leq 2e,\]
which shows that \(Y=\xi^{2}-\operatorname{\mathbb{E}}\xi^{2}\) is a centered sub-exponential random variable. Taking into account that
\[\sum_{i=1}^{m}\sum_{j=1}^{n}\operatorname{\mathbb{E}}a_{i,j}^{2}\leq K^{2}n^{2}\]
and Bernstein's inequality [19], we obtain
\[\mathbb{P}(\left\|A\right\|_{\text{HS}}\geq 2Kn)\leq\mathbb{P}\left(\sum_{i=1}^{m}\sum_{j=1}^{n}(a_{i,j}^{2}-\mathbb{E}\,a_{i,j}^{2})\geq 3K^{2}n^{2}\right)\leq\exp(-cn^{2})\]
as required.
We will also need a tensorization lemma for the small ball probability similar to Lemma 2.2 [14].
**Lemma 3.6** (Tensorization).: _Let \(m,M>0\) and let \(Y_{1},\ldots,Y_{n}\geq 0\) be independent random variables such that \(\mathbb{P}(Y_{j}\leq s)\leq(Ms)^{m}\) for all \(s\geq s_{0}\). Then_
\[\mathbb{P}\left(\sum_{j=1}^{n}Y_{j}\leq nt\right)\leq(CMt)^{mn}\quad\text{for all }t\geq s_{0}.\]
Proof.: Let \(t\geq s_{0}\). By Markov's inequality,
\[\mathbb{P}\left(\sum_{j=1}^{n}Y_{j}\leq nt\right) \leq\mathbb{E}\left[\exp\left(mn-\frac{m}{t}\sum_{j=1}^{n}Y_{j} \right)\right]\] \[=e^{mn}\prod_{j=1}^{n}\mathbb{E}\exp\left(-\frac{m}{t}Y_{j} \right),\]
where
\[\mathbb{E}\exp\left(-\frac{m}{t}Y_{j}\right) =\int_{0}^{1}\mathbb{P}\left[\exp\left(-\frac{m}{t}Y_{j}\right)>s\right]\,ds=\int_{0}^{\infty}e^{-u}\mathbb{P}\left[Y_{j}<\frac{t}{m}u\right]\,du\] \[\leq\int_{0}^{m}e^{-u}\mathbb{P}\left[Y_{j}<t\right]\,du+\int_{m}^{\infty}e^{-u}\mathbb{P}\left[Y_{j}<\frac{t}{m}u\right]\,du\] \[\leq(Mt)^{m}+\int_{m}^{\infty}e^{-u}\left(\frac{Mt}{m}u\right)^{m}du\] \[\leq\left(1+\frac{\Gamma(m+1)}{m^{m}}\right)\cdot(Mt)^{m}\leq(CMt)^{m}.\]
Here we used that \(\mathbb{P}\left[Y_{j}<\frac{t}{m}u\right]\leq\mathbb{P}\left[Y_{j}<t\right]\leq(Mt)^{m}\) for \(u\in(0,m)\) in the first two inequalities and Stirling's formula in the last one. Combining the two estimates above completes the proof.
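A minimal numerical check of the tensorization bound, taking \(Y_{j}\sim\mathrm{Uniform}(0,1)\) so that the hypothesis holds with \(m=M=1\) and \(s_{0}=0\); in this case \(\mathbb{P}(\sum_{j}Y_{j}\leq nt)=(nt)^{n}/n!\) exactly (for \(nt\leq 1\)), which sits below a bound of the shape \((et)^{n}\). The parameters \(n\), \(t\), and the sample size are arbitrary illustrative choices:

```python
import numpy as np
from math import e, factorial

n, t, N = 5, 0.2, 1_000_000
rng = np.random.default_rng(3)
Y = rng.random((N, n))                 # Y_j ~ Uniform(0,1): P(Y_j <= s) = s, so m = M = 1
mc = np.mean(Y.sum(axis=1) <= n * t)
print("Monte Carlo:          ", mc)
print("exact (nt)^n/n!:      ", (n * t) ** n / factorial(n))   # valid since nt <= 1
print("bound of shape (et)^n:", (e * t) ** n)
```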
### Least common denominators and the small ball probability
The least common denominator (LCD) of a sequence of real numbers originally introduced in [14] turned out to be a useful tool to gauge the behavior of the Levy concentration function of a linear combination of independent random variables with constant coefficients. Its various versions played a crucial role in proving quantitative estimates of invertibility of random matrices, see e.g. [13] and the references therein as well as more recent works including [18], [1], [9], [2], and numerous other papers. In what follows, we use the extension of the LCD to matrices introduced in [16].
**Definition 3.7**.: _Let \(V\) be an \(m\times n\) matrix, and let \(L>0,\alpha\in(0,1]\). Define the least common denominator (LCD) of \(V\) by_
\[D_{L,\alpha}(V)=\inf\left(\left\|\theta\right\|_{2}:\theta\in\mathbb{R}^{m},\ \operatorname{dist}(V^{\top}\theta,\mathbb{Z}^{n})<L\sqrt{\log_{+}\frac{ \alpha\left\|V^{\top}\theta\right\|_{2}}{L}}\right).\]
_If \(E\subset\mathbb{R}^{n}\) is a linear subspace, we can adapt this definition to the orthogonal projection \(P_{E}\) on \(E\) setting_
\[D_{L,\alpha}(E)=D_{L,\alpha}(P_{E})=\inf\left(\left\|y\right\|_{2}:y\in E,\ \operatorname{dist}(y,\mathbb{Z}^{n})<L\sqrt{\log_{+}\frac{ \alpha\left\|y\right\|_{2}}{L}}\right).\]
This is a modification of [18, Definition 6.1] and [16, Definition 7.1], where the same notion was introduced with \(\alpha=1\).
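To get a feeling for the definition, the LCD of a one-dimensional subspace can be approximated by a grid scan: it is essentially the smallest radius at which the line \(\{r\,u/\left\|u\right\|_{2}:r>0\}\) comes within the logarithmic threshold of the integer lattice. The sketch below is illustrative only (the values of \(L\), \(\alpha\), the grid step, and the scan range are arbitrary choices); for \(u=(1,1/2)\) the line first meets \(\mathbb{Z}^{2}\) at \((2,1)\), so the scan returns a value slightly below \(\sqrt{5}\approx 2.236\):

```python
import numpy as np

def lcd_1d(u, L, alpha, r_max=5.0, step=1e-4):
    # Grid scan approximating D_{L,alpha}(span(u)): the smallest norm r of
    # y = r * u/||u|| whose distance to Z^n falls below the threshold.
    u = np.asarray(u, dtype=float)
    u /= np.linalg.norm(u)
    for r in np.arange(step, r_max, step):
        y = r * u
        thresh = L * np.sqrt(max(np.log(alpha * r / L), 0.0))
        if np.linalg.norm(y - np.round(y)) < thresh:
            return r
    return np.inf

print(lcd_1d([1.0, 0.5], L=0.05, alpha=0.9))   # close to sqrt(5) ~ 2.236
```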
We will use the following concentration function estimate in terms of the LCD and its corollary proved in [16].
**Theorem 3.8** (Small ball probabilities via LCD).: _Consider a random vector \(\xi=(\xi_{1},\ldots,\xi_{n})\), where \(\xi_{k}\) are i.i.d. copies of a real-valued random variable \(\xi\) satisfying (3.2). Consider a matrix \(V\in\mathbb{R}^{m\times n}\). Then for every \(L\geq\sqrt{m/p}\) we have_
\[\mathcal{L}(V^{\top}\xi,t\sqrt{m})\leq\frac{\left(CL/(\alpha\sqrt{m})\right)^{ m}}{\det(VV^{\top})^{1/2}}\left(t+\frac{\sqrt{m}}{D_{L,\alpha}(V^{\top})} \right)^{m},\quad t\geq 0. \tag{3.3}\]
Theorem 3.8 with \(\alpha=1\) is [16, Theorem 7.5]. We notice that exactly the same proof with the modified definition of the LCD yields Theorem 3.8 with a general \(\alpha\).
**Corollary 3.9** (Small ball probabilities for projections).: _Consider a random vector \(\xi=(\xi_{1},\ldots,\xi_{N})\), where \(\xi_{k}\) are i.i.d. copies of a real-valued random variable \(\xi\) satisfying (3.2). Let \(E\) be a subspace of \(\mathbb{R}^{N}\) with \(\dim(E)=m\), and let \(P_{E}\) denote the orthogonal projection onto \(E\). Then for every \(L\geq\sqrt{m/p}\) we have_
\[\mathcal{L}(P_{E}\xi,t\sqrt{m})\leq\left(\frac{CL}{\alpha\sqrt{m}}\right)^{m} \left(t+\frac{\sqrt{m}}{D_{L,\alpha}(E)}\right)^{m},\quad t\geq 0. \tag{3.4}\]
We will need a lemma which essentially generalizes the fact that the LCD of an incompressible vector is \(\Omega(\sqrt{n})\). We will formulate it in a somewhat more technical way required for the future applications.
**Lemma 3.10**.: _Let \(s,\alpha\in(0,1)\) and \(L>0\). Let \(U\) be an \(n\times l\) matrix such that \(U\mathbb{R}^{l}\cap S^{n-1}\subset\operatorname{Incomp}(sn,\alpha)\). Then any \(\theta\in\mathbb{R}^{l}\) with \(\left\|U\theta\right\|_{2}\leq\sqrt{sn}\) satisfies_
\[\operatorname{dist}(U\theta,\mathbb{Z}^{n})\geq L\sqrt{\log_{+}\frac{\alpha \left\|U\theta\right\|_{2}}{L}}.\]
Proof.: Take any \(\theta\in\mathbb{R}^{l}\) such that \(\left\|U\theta\right\|_{2}\leq\sqrt{sn}\). Let \(x\in\mathbb{Z}^{n}\) be such that
\[\left\|U\theta-x\right\|_{2}=\operatorname{dist}(U\theta,\mathbb{Z}^{n})\leq \left\|U\theta\right\|_{2}.\]
Without loss of generality, we can assume that \(\left\|x\right\|_{2}\leq\sqrt{sn}\). Indeed, the fact that the distance from \(U\theta\) to \(\mathbb{Z}^{n}\) is achieved at \(x\) implies that
\[\left\|U\theta-x\right\|_{2}\leq\min\Big{(}\left\|U\theta\right\|_{2},\left\|U \theta+x\right\|_{2}\Big{)},\]
from which the required inequality follows. Since the coordinates of \(x\) are integer, this implies that \(\left|\mathrm{supp}(x)\right|\leq sn\). Therefore,
\[\left\|\frac{U\theta}{\left\|U\theta\right\|_{2}}-\frac{x}{\left\|U\theta \right\|_{2}}\right\|_{2}\geq\alpha\]
since \(\frac{x}{\left\|U\theta\right\|_{2}}\in\mathrm{Sparse}(sn)\). Combining the two previous inequalities, we see that
\[\alpha\left\|U\theta\right\|_{2}\leq\left\|U\theta-x\right\|_{2}=\mathrm{dist }(U\theta,\mathbb{Z}^{n}).\]
The desired inequality follows now from an elementary estimate \(t>\sqrt{\log_{+}t}\) valid for all \(t>0\) which is applied with \(t=\frac{\alpha\left\|U\theta\right\|_{2}}{L}\).
### Integer points inside a ball
We will need a simple lemma estimating the number of integer points inside a ball in \(\mathbb{R}^{n}\). Denote the Euclidean ball of radius \(R\) centered at \(0\) by \(B(0,R)\) and the cardinality of a set \(F\) by \(\left|F\right|\).
**Lemma 3.11**.: _For any \(R>0\),_
\[\left|\mathbb{Z}^{n}\cap B(0,R)\right|\leq\left(2+\frac{CR}{\sqrt{n}}\right)^ {n}.\]
The proof immediately follows by covering \(B(0,R)\) by unit cubes and estimating the volume of their union.
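A brute-force check of Lemma 3.11 in low dimensions (the value \(C=4\) below is an arbitrary illustrative choice of the constant):

```python
import numpy as np
from itertools import product

def lattice_points_in_ball(n, R):
    # Brute-force count of Z^n intersected with B(0, R), for small n.
    rng_1d = range(-int(R), int(R) + 1)
    return sum(1 for x in product(rng_1d, repeat=n)
               if sum(c * c for c in x) <= R * R)

for n, R in [(2, 5.0), (3, 4.0)]:
    bound = (2 + 4 * R / np.sqrt(n)) ** n
    print(n, R, lattice_points_in_ball(n, R), bound)
```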
## 4. Compressible vectors
The aim of this section is to prove that it is unlikely that the kernel of a rectangular matrix \(B\) with i.i.d. entries satisfying (3.2) contains a large almost orthogonal system of compressible vectors. More precisely, we prove that the probability of such an event does not exceed \(\exp(-clm)\), where \(m\) is the number of rows of \(B\) and \(l\) is the number of vectors in the system. The compressibility parameters will be selected in the process of the proof and, after that, fixed for the rest of the paper. In Section 6, we will apply this statement with \(m=n-k\) and \(l=k/4\), in which case the probability of existence of such an almost orthogonal system becomes negligible for our purposes.
We start with bounding the probability of presence of a fixed almost orthogonal system in the kernel of \(B\). This bound relies on a corollary of the Hanson-Wright inequality, see [15, Corollary 2.4]. Note that this result applies to any almost orthogonal system, not only to a compressible one.
**Lemma 4.1**.: _Let \(m\leq n\), and let \(B\) be an \(m\times n\) matrix whose entries are i.i.d. random variables satisfying (3.2). Let \(l\leq n\) and let \(v_{1},\ldots,v_{l}\in S^{n-1}\) be an \(l\)-tuple of \(\left(\frac{1}{2}\right)\)-almost orthogonal vectors. Then_
\[\mathbb{P}\left(\left\|Bv_{j}\right\|_{2}\leq C_{4.1}\sqrt{m}\ \text{ for all }j\in[l]\right)\leq\exp(-c_{4.1}lm).\]
Proof.: Let \(V=(v_{1},\ldots,v_{l})\) be the \(n\times l\) matrix formed by columns \(v_{1},\ldots,v_{l}\). The assumption of the lemma implies that \(\left\|V\right\|\leq 2\max_{j\in[l]}\left\|v_{j}\right\|_{2}=2\). On the other hand
\[\sum_{j=1}^{l}s_{j}^{2}(V)=\left\|V\right\|_{\mathrm{HS}}^{2}=\sum_{j=1}^{l} \left\|v_{j}\right\|_{2}^{2}=l.\]
Note that if \(\xi\) is a random variable satisfying (3.2), then \(\mathbb{E}\,\xi^{2}\geq\mathbb{P}(|\xi|\geq 1)\geq p\). Let \(\eta\in\mathbb{R}^{n}\) be a random vector with i.i.d. coordinates satisfying (3.2). By the Hanson-Wright inequality,
\[\mathbb{P}\left(\left\|V^{\top}\eta\right\|_{2}^{2}\leq\frac{p}{2}l\right)\leq \mathbb{P}\left(\left\|V^{\top}\eta\right\|_{2}^{2}\leq\frac{p}{2}\left\|V \right\|_{\mathrm{HS}}^{2}\right)\leq\exp\left(-cl\right).\]
Let \(\eta_{1},\ldots,\eta_{m}\) be i.i.d. copies of \(\eta\). Then
\[\mathbb{P}\left(\sum_{i=1}^{m}\left\|V^{\top}\eta_{i}\right\|_{2}^{2}\leq\frac{p}{4}lm\right)\leq\exp\left(-\frac{c}{2}lm\right).\]
Indeed, the condition \(\sum_{i=1}^{m}\left\|V^{\top}\eta_{i}\right\|_{2}^{2}\leq\frac{p}{4}lm\) implies that \(\left\|V^{\top}\eta_{i}\right\|_{2}^{2}\leq\frac{p}{2}l\) for at least \(m/2\) indices \(i\in[m]\). These events are independent, and the probability of each one does not exceed \(\exp(-cl)\).
Applying the inequality above with \(\eta_{i}=(\mathrm{Row}_{i}(B))^{\top}\), we obtain
\[\mathbb{P}\left(\left\|Bv_{j}\right\|_{2}^{2}\leq\frac{p}{4}m \text{ for }j\in[l]\right) \leq\mathbb{P}\left(\sum_{j=1}^{l}\left\|Bv_{j}\right\|_{2}^{2} \leq\frac{p}{4}m\cdot l\right)\] \[=\mathbb{P}\left(\sum_{i=1}^{m}\left\|V^{\top}\eta_{i}\right\|_{2 }^{2}\leq\frac{p}{4}lm\right)\leq\exp(-\frac{c}{2}lm).\]
The proof is complete.
The next statement, Proposition 4.2, contains the main result of this section. We will extend the bound of Lemma 4.1 from the presence of a fixed almost orthogonal system of compressible vectors in the kernel of a random matrix to the presence of any such system.
The proof of Proposition 4.2 follows the general roadmap of the geometric method. We start by constructing a special net for the set of compressible vectors in \(S^{n-1}\). This net will consist of vectors from a scaled copy of the integer lattice in \(\mathbb{R}^{n}\). The vectors of an almost orthogonal system will then be approximated by vectors from this net using the procedure of _random rounding_. This procedure, whose use in random matrix theory was pioneered by Livshyts [8], now has numerous applications in problems related to invertibility. One of its advantages is that it allows one to bound the approximation error in terms of the highly concentrated Hilbert-Schmidt norm instead of the operator norm of the matrix. In our case, this approximation presents two new special challenges. First, we have to approximate all vectors \(x_{1},\ldots,x_{l}\) forming an almost orthogonal system at the same time and in a way that preserves almost orthogonality. Second, the vectors of the approximating system have to retain some sparsity properties of the original vectors. We will show below that all these requirements can be satisfied simultaneously for a randomly chosen approximation. The probability of that will be exponentially small in \(l\) yet positive, which is sufficient since we only need to show the existence of such an approximation.
**Proposition 4.2**.: _Let \(l\leq k\leq cn\). There exists \(\tau>0\) such that the probability that there exists a \(\left(\frac{1}{4}\right)\)-almost orthogonal \(l\)-tuple \(x_{1},\ldots,x_{l}\in\mathrm{Comp}(\tau^{2}n,\tau^{4})\) with_
\[\left\|Bx_{j}\right\|_{2}\leq\tau\sqrt{n}\ \text{ for all }j\in[l]\]
_is less than \(\exp(-cln)\)._
Proof.: Let \(\tau\in(0,1/2)\) be a number to be chosen later, and set
\[T=\left\{v\in\frac{\tau}{\sqrt{n}}\mathbb{Z}^{n}:\ \left\|v\right\|_{2}\in\left[ \frac{1}{2},2\right]\right\}.\]
Then Lemma 3.11 applied to \(R=\frac{\sqrt{n}}{\tau}\) yields
\[|T\cap\operatorname{Sparse}(4\tau^{2}n)|\leq\binom{n}{4\tau^{2}n}\cdot\left(2+ \frac{C}{\tau}\right)^{4\tau^{2}n}\leq\left(\frac{C^{\prime}}{\tau^{3}}\right)^ {4\tau^{2}n}. \tag{4.1}\]
Denote the coordinates of a vector \(x\in\mathbb{R}^{n}\) by \(x(1),\ldots,x(n)\). Consider a \(\left(\frac{1}{4}\right)\)-almost orthogonal \(l\)-tuple \(x_{1},\ldots,x_{l}\in\operatorname{Comp}(\tau^{2}n,\tau^{4})\). Since \(x_{j}\in\operatorname{Comp}(\tau^{2}n,\tau^{4})\), there is a set \(I_{1}(j)\subset[n]\) with \(|I_{1}(j)|\leq\tau^{2}n\) such that
\[\sum_{i\in[n]\setminus I_{1}(j)}x_{j}^{2}(i)\leq\tau^{8}.\]
Using an elementary counting argument, we conclude that there exists \(I_{2}(j)\supset I_{1}(j)\) with \(|I_{2}(j)|\leq 2\tau^{2}n\) such that
\[|x_{j}(i)|\leq\frac{\tau^{3}}{\sqrt{n}}\quad\text{for any $i\in[n]\setminus I_{2}(j)$}.\]
For \(j\in[l]\), define the vector \(w_{j}=(w_{j}(1),\ldots,w_{j}(n))\) by
\[w_{j}(i)=\frac{\tau}{\sqrt{n}}\cdot\left\lfloor\frac{\sqrt{n}}{\tau}|x_{j}(i) |\right\rfloor\operatorname{sign}(x_{j}(i)).\]
This form of rounding is chosen so that coordinates of \(x_{j}\) that are small in absolute value are approximated by zeros.
Define independent random variables \(\varepsilon_{i,j}\) such that
\[\mathbb{P}(\varepsilon_{i,j}=w_{j}(i)-x_{j}(i))=1-\frac{\sqrt{n}}{\tau}|x_{j} (i)-w_{j}(i)|,\]
and
\[\mathbb{P}\left(\varepsilon_{i,j}=w_{j}(i)-x_{j}(i)+\frac{\tau}{\sqrt{n}} \operatorname{sign}(x_{j}(i))\right)=\frac{\sqrt{n}}{\tau}|x_{j}(i)-w_{j}(i)|.\]
Set
\[v_{j}=x_{j}+\sum_{i=1}^{n}\varepsilon_{i,j}e_{i}.\]
Then \(v_{j}\) is a random vector such that \(\mathbb{E}\,v_{j}=x_{j}\). Moreover, \(\left\|v_{j}-x_{j}\right\|_{2}\leq\tau<1/2\), and so \(\left\|v_{j}\right\|_{2}\in[\frac{1}{2},2]\), which implies that \(v_{j}\in T\) for all \(j\in[l]\).
The definition of \(v_{j}\) above means that for any \(i\in[n]\setminus I_{2}(j)\), \(w_{j}(i)=0\), and so \(\mathbb{P}(v_{j}(i)\neq 0)\leq\tau^{2}\) for these \(i\). Set \(I_{3}(j)=I_{2}(j)\cup\{i\in[n]\setminus I_{2}(j):\ v_{j}(i)\neq 0\}\). Since the events \(v_{j}(i)\neq 0\) are independent over \(i\in[n]\setminus I_{2}(j)\) for a given \(j\in[l]\), Chernoff's inequality in combination with the union bound over \(j\) yields
\[\mathbb{P}(\forall j\in[l]\ |I_{3}(j)|\leq 4\tau^{2}n)\geq 1-l\exp(-c\tau^{2}n). \tag{4.2}\]
Note that if \(|I_{3}(j)|<4\tau^{2}n\) for all \(j\in[l]\) then all the vectors \(v_{1},\ldots,v_{l}\) belong to \(T\cap\operatorname{Sparse}(4\tau^{2}n)\).
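The rounding just described is straightforward to implement. The following minimal sketch (with arbitrary choices of \(n\), \(\tau\), and the number of trials) verifies the key properties used above: the output lies on the lattice \(\frac{\tau}{\sqrt{n}}\mathbb{Z}^{n}\), the rounding is unbiased, and \(\left\|v-x\right\|_{2}\leq\tau\):

```python
import numpy as np

rng = np.random.default_rng(4)

def random_round(x, tau):
    # Randomized rounding of x onto the lattice (tau/sqrt(n)) * Z^n with E[v] = x;
    # magnitudes are rounded down to the lattice, then bumped up one step with the
    # complementary probability, so coordinates of small magnitude mostly map to 0.
    n = len(x)
    h = tau / np.sqrt(n)                          # lattice spacing
    w = h * np.floor(np.abs(x) / h) * np.sign(x)
    frac = (np.abs(x) - np.abs(w)) / h            # fractional part, in [0, 1)
    bump = (rng.random(n) < frac) * h * np.sign(x)
    return w + bump

n, tau = 1000, 0.2
x = rng.standard_normal(n)
x /= np.linalg.norm(x)                            # x is a unit vector, as in the proof
v = random_round(x, tau)
h = tau / np.sqrt(n)
print(np.max(np.abs(v / h - np.round(v / h))))    # ~ 0: v lies on the lattice
print(np.linalg.norm(v - x) <= tau)               # True: each coordinate moves by at most h
mean_v = np.mean([random_round(x, tau) for _ in range(2000)], axis=0)
print(np.max(np.abs(mean_v - x)))                 # ~ 0: the rounding is unbiased
```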
Let us form the \(n\times l\) matrices \(X\) and \(V\) with columns \(x_{1},\ldots,x_{l}\) and \(v_{1},\ldots,v_{l}\) respectively. Then the matrix \(V-X\) has independent centered entries \(\varepsilon_{i,j}\) whose
absolute values are bounded by \(\frac{\tau}{\sqrt{n}}\). This means that the random variables \(\frac{\sqrt{n}}{\tau}\varepsilon_{i,j}\) satisfy the assumptions of Lemma 3.4. In view of this lemma,
\[\mathbb{P}\left(\left\|V-X\right\|\leq\tau\right)\geq 1-\exp(-cn).\]
Define the diagonal matrix \(D_{V}=\operatorname{diag}(\left\|v_{1}\right\|_{2},\ldots,\left\|v_{l}\right\| _{2})\). Recall that
\[1-\tau\leq\left\|v_{j}\right\|_{2}\leq 1+\tau\]
for all \(j\in[l]\), and \(s_{l}(X)\geq\frac{3}{4}\) since the vectors \(x_{1},\ldots,x_{l}\) are \(\left(\frac{1}{4}\right)\)-almost orthogonal. Hence, if the event \(\left\|V-X\right\|\leq\tau\) occurs, then
\[s_{l}(VD_{V}^{-1}) \geq s_{l}(XD_{V}^{-1})-\left\|X-V\right\|\cdot\left\|D_{V}^{-1}\right\|\] \[\geq s_{l}(X)\cdot s_{l}(D_{V}^{-1})-\left\|X-V\right\|\cdot \left\|D_{V}^{-1}\right\|\] \[\geq\frac{3}{4}(1+\tau)^{-1}-\tau\cdot(1-\tau)^{-1}\] \[\geq\frac{1}{2}\]
where the last inequality holds if
\[\tau\leq\frac{1}{9}.\]
Similarly, we can show that \(s_{1}(VD_{V}^{-1})\leq\frac{3}{2}\), thus proving that the vectors \(v_{1},\ldots,v_{l}\) are \(\left(\frac{1}{2}\right)\)-almost orthogonal. This shows that
\[\mathbb{P}\left(v_{1},\ldots,v_{l}\text{ are }\left(\frac{1}{2}\right)\text{- almost orthogonal}\right)\geq 1-\exp(-cn). \tag{4.3}\]
Let \(\mathcal{E}_{\text{HS}}\) be the event that \(\left\|B\right\|_{\text{HS}}\leq 2Kn\). Lemma 3.5 yields that
\[\mathbb{P}(\mathcal{E}_{\text{HS}})\geq 1-\exp(-cn^{2}).\]
Condition on a realization of the matrix \(B\) such that \(\mathcal{E}_{\text{HS}}\) occurs. Since the random variables \(\varepsilon_{i,j}\) are independent,
\[\mathbb{E}\left\|B(x_{j}-v_{j})\right\|_{2}^{2} =\mathbb{E}\left\|\sum_{i=1}^{n}\varepsilon_{i,j}Be_{i}\right\|_ {2}^{2}=\sum_{i=1}^{n}\mathbb{E}\,\varepsilon_{i,j}^{2}\left\|Be_{i}\right\|_ {2}^{2}\] \[\leq\left(\frac{\tau}{\sqrt{n}}\right)^{2}\left\|B\right\|_{ \text{HS}}^{2}\leq 4K^{2}\tau^{2}n.\]
Hence by Chebyshev's inequality
\[\mathbb{P}\left[\left\|B(x_{j}-v_{j})\right\|_{2}\leq 3K\tau\sqrt{n}\mid \mathcal{E}_{\text{HS}}\right]\geq\frac{1}{2}.\]
In view of the independence of these events for different \(j\),
\[\mathbb{P}\left[\forall j\in[l]\ \left\|B(x_{j}-v_{j})\right\|_{2}\leq 3K\tau \sqrt{n}\mid\mathcal{E}_{\text{HS}}\right]\geq 2^{-l}. \tag{4.4}\]
Let us summarize (4.2), (4.3), and (4.4). As
\[1-\exp(-cn)-l\exp(-c\tau^{2}n)+2^{-l}>1,\]
conditionally on \(B\) for which the event \(\mathcal{E}_{\text{HS}}\) occurs, we can find a realization of random variables \(\varepsilon_{i,j},\ i\in[n],j\in[l]\) such that
* the vectors \(v_{1},\ldots,v_{l}\) are \(\left(\frac{1}{2}\right)\)-almost orthogonal;
* \(v_{1},\ldots,v_{l}\in T\cap\operatorname{Sparse}(4\tau^{2}n)\), and
* \(\left\|B(x_{j}-v_{j})\right\|_{2}\leq 3K\tau\sqrt{n}\) for all \(j\in[l]\).
Assume that there exists a \(\left(\frac{1}{4}\right)\)-almost orthogonal \(l\)-tuple \(x_{1},\ldots,x_{l}\in\operatorname{Comp}(\tau^{2}n,\tau^{4})\) such that \(\left\|Bx_{j}\right\|_{2}\leq\tau\sqrt{n}\) for all \(j\in[l]\). Then the above argument shows that conditionally on \(B\) such that \(\mathcal{E}_{\mathrm{HS}}\) occurs, we can find vectors \(v_{1},\ldots,v_{l}\in T\cap\operatorname{Sparse}(4\tau^{2}n)\) which are \(\left(\frac{1}{2}\right)\)-almost orthogonal such that \(\left\|Bv_{j}\right\|_{2}\leq 4K\tau\sqrt{n}\) for all \(j\in[l]\) since \(K\geq 1\). Therefore,
\[\mathbb{P}(\exists x_{1},\ldots,x_{l}\in\operatorname{Comp}(\tau^{2}n,\tau^{4})\ \left\|Bx_{j}\right\|_{2}\leq\tau\sqrt{n}\ \text{ for all }j\in[l]\text{ and }\mathcal{E}_{\mathrm{HS}})\] \[\leq \mathbb{P}\Big{[}\exists v_{1},\ldots,v_{l}\in T\cap \operatorname{Sparse}(4\tau^{2}n)\ v_{1},\ldots,v_{l}\text{ are }\left(\frac{1}{2}\right) \text{-almost orthogonal}\] \[\text{ and }\left\|Bv_{j}\right\|_{2}\leq 4K\tau\sqrt{n}\ \text{ for all }j\in[l]\ |\ \mathcal{E}_{\mathrm{HS}} \Big{]}\cdot\mathbb{P}(\mathcal{E}_{\mathrm{HS}})\] \[= \mathbb{P}\Big{(}\exists v_{1},\ldots,v_{l}\in T\cap \operatorname{Sparse}(4\tau^{2}n)\ v_{1},\ldots,v_{l}\text{ are }\left(\frac{1}{2}\right) \text{-almost orthogonal}\] \[\text{ and }\left\|Bv_{j}\right\|_{2}\leq 4K\tau\sqrt{n}\ \text{ for all }j\in[l]\text{ and }\mathcal{E}_{\mathrm{HS}} \Big{)}.\]
Let us show that the latter probability is small. Assume that
\[\tau\leq\min\left(\frac{1}{9},\frac{C_{4.1}}{4K}\right).\]
In view of Lemma 4.1 and (4.1),
\[\mathbb{P}\big{(}\exists v_{1},\ldots,v_{l}\in T\cap \operatorname{Sparse}(4\tau^{2}n)\ (v_{1},\ldots,v_{l})\text{ is }\left(\frac{1}{2}\right) \text{-almost orthogonal}\] \[\qquad\text{ and }\left\|Bv_{j}\right\|_{2}\leq 4K\tau\sqrt{n} \text{ for all }j\in[l]\big{)}\] \[\leq\left(\frac{C}{\tau^{3}}\right)^{4\tau^{2}n\cdot l}\cdot \exp(-c_{4.1}ln)\leq\exp\left(-\left[c_{4.1}-4\tau^{2}\log\left(\frac{C}{\tau^ {3}}\right)\right]ln\right)\] \[\leq\exp\left(-\frac{c_{4.1}}{2}ln\right),\]
where the last inequality holds if we choose \(\tau\) sufficiently small.
The argument above shows that
\[\mathbb{P}(\exists x_{1},\ldots,x_{l}\in\operatorname{Comp}(\tau^ {2}n,\tau^{4})\ \left\|Bx_{j}\right\|_{2}\leq\tau\sqrt{n}\ \text{ for all }j\in[l]\text{ and }\mathcal{E}_{\mathrm{HS}})\] \[\leq \exp\left(-\frac{c_{4.1}}{2}ln\right).\]
In combination with the inequality \(\mathbb{P}(\mathcal{E}_{\mathrm{HS}}^{c})\leq\exp(-cn^{2})\), this completes the proof.
We will fix the value of \(\tau\) for which Proposition 4.2 holds for the rest of the paper.
## 5. Incompressible vectors
The main statement of this section, Proposition 5.1, shows that the probability that the kernel of a rectangular matrix \(B\) with i.i.d. entries satisfying assumptions (3.2) contains an almost orthogonal system of incompressible vectors with subexponential least common denominators is small. In what follows, \(B\) will be the \((n-k)\times n\) matrix whose rows are \(\operatorname{Col}_{1}(A)^{\top},\ldots,\operatorname{Col}_{n-k}(A)^{\top}\), and the required probability should
be of order \(\exp(-ckn)\) to fit Theorem 1.1. The need to achieve such a tight probability estimate requires considering the event that \(l\) vectors in the kernel of \(B\) have subexponential least common denominator. The number \(l\) here is proportional to \(k\). Recall that a vector has a relatively small least common denominator if after being scaled by a moderate factor, it becomes close to the integer lattice. Since we have to consider \(l\) such vectors at once, and the norms of these scaled copies vary significantly, it is more convenient to consider these copies, and not the original unit vectors as we did in Proposition 4.2. Moreover, to bound the probability, we have to consider all vectors with a moderate least common denominator in the linear span of the original system of \(l\) vectors in the kernel of \(B\). To make the analysis of such linear span more manageable, we will restrict our attention to the almost orthogonal systems. This restriction will be later justified by using Lemma 3.3.
Throughout the paper, we set
\[L=\sqrt{k/p},\quad\alpha=\frac{\tau^{2}}{4}. \tag{5.1}\]
where \(k\) appears in Theorem 1.1, and \(p\) is a parameter from (3.2).
**Proposition 5.1**.: _Let \(\rho\in(0,\rho_{0})\), where \(\rho_{0}=\rho_{0}(\tau)\) is some positive number. Assume that \(l\leq k\leq\frac{\rho}{2}\sqrt{n}\)._
_Let \(B\) be an \((n-k)\times n\) matrix with i.i.d. entries satisfying (3.2), and consider the event \(\mathcal{E}_{l}\) that there exist vectors \(v_{1},\ldots,v_{l}\in\ker(B)\) having the following properties._
1. \(\frac{\tau}{8}\sqrt{n}\leq\left\|v_{j}\right\|_{2}\leq\exp\left(\frac{\rho^{2 }n}{4L^{2}}\right)\) _for all_ \(j\in[l]\)_;_
2. \(\operatorname{span}(v_{1},\ldots,v_{l})\cap S^{n-1}\subset\operatorname{Incomp}(\tau^{2}n,\tau^{4})\)_;_
3. _The vectors_ \(v_{1},\ldots,v_{l}\) _are_ \(\left(\frac{1}{8}\right)\)_-almost orthogonal;_
4. \(\operatorname{dist}(v_{j},\mathbb{Z}^{n})\leq\rho\sqrt{n}\) _for_ \(j\in[l]\)_;_
5. _The_ \(n\times l\) _matrix_ \(V\) _with columns_ \(v_{1},\ldots,v_{l}\) _satisfies_ \[\operatorname{dist}(V\theta,\mathbb{Z}^{n})>\rho\sqrt{n}\] _for all_ \(\theta\in\mathbb{R}^{l}\) _such that_ \(\left\|\theta\right\|_{2}\leq\frac{1}{20\sqrt{l}}\) _and_ \(\left\|V\theta\right\|_{2}\geq\frac{\tau}{8}\sqrt{n}\)_._
_Then_
\[\mathbb{P}(\mathcal{E}_{l})\leq\exp\left(-ln\right).\]
To simplify the analysis, we will tighten condition (1) of Proposition 5.1, restricting the magnitudes of the norms of \(v_{j}\) to dyadic intervals. For brevity, denote
\[r=\frac{\tau}{8},\quad R=\exp\left(\frac{\rho^{2}n}{4L^{2}}\right).\]
Consider a vector \(\mathbf{d}=(d_{1},\ldots,d_{l})\in[r\sqrt{n},R]^{l}\) and define \(W_{\mathbf{d}}\) to be the set of \(l\)-tuples of vectors \(v_{1},\ldots,v_{l}\in\mathbb{R}^{n}\) satisfying
\[\left\|v_{j}\right\|_{2}\in[d_{j},2d_{j}]\text{ for all }j\in[l];\]
and conditions (2) - (5) of Proposition 5.1. We will prove the proposition for vectors \(v_{1},\ldots,v_{l}\) with such restricted norms first and derive the general statement by taking the union bound over \(d_{1},\ldots,d_{l}\) being dyadic integers.
We begin the proof of Proposition 5.1 by constructing a special net for the set \(W_{\mathbf{d}}\). We then prove that the \(l\)-tuples from the net approximate any point of \(W_{\mathbf{d}}\) in a number of senses. After that, we prove the individual small ball probability estimate for _some_ \(l\)-tuples from the net, namely exactly those tuples that appear as a result of approximating points of \(W_{\mathbf{d}}\).
To make the construction of the net simpler, we introduce another parameter. Given \(\rho\) as in Proposition 5.1, we will choose \(\delta>0\) such that
\[\delta\leq\rho\quad\text{and }\delta^{-1}\in\mathbb{N}. \tag{5.2}\]
The parameter \(\delta\) will be adjusted several times throughout the proof, but its value will remain independent of \(n\).
**Lemma 5.2** (Net cardinality).: _Let \(\mathbf{d}=(d_{1},\ldots,d_{l})\) be a vector such that \(d_{j}\in[r\sqrt{n},R]\) for all \(j\in[l]\). Let \(\delta\) be as in (5.2), and \(\mathcal{N}_{\mathbf{d}}\subset(\delta\mathbb{Z}^{n})^{l}\) be the set of all \(l\)-tuples of vectors \(u_{1},\ldots,u_{l}\) such that_
\[\left\|u_{j}\right\|_{2}\in\left[\frac{1}{2}d_{j},4d_{j}\right]\quad\text{for all }\ j\in[l]\]
_and_
\[\operatorname{dist}(u_{j},\mathbb{Z}^{n})\leq 2\rho\sqrt{n}.\]
_Then_
\[\left|\mathcal{N}_{\mathbf{d}}\right|\leq\left(\frac{C\rho}{r\delta}\right)^ {ln}\left(\prod_{j=1}^{l}\frac{d_{j}}{\sqrt{n}}\right)^{n}.\]
Proof.: Let \(\mathcal{M}_{j}=\mathbb{Z}^{n}\cap 2d_{j}B_{2}^{n}\). Taking into account that \(d_{j}\geq r\sqrt{n}\), we use Lemma 3.11 to conclude that
\[\left|\mathcal{M}_{j}\right|\leq\left(2+\frac{Cd_{j}}{\sqrt{n}}\right)^{n} \leq\left(\frac{C^{\prime}}{r}\right)^{n}\cdot\left(\frac{d_{j}}{\sqrt{n}} \right)^{n}.\]
Define the set \(\mathcal{M}\) by \(\mathcal{M}=\delta\,\mathbb{Z}^{n}\cap 2\rho\sqrt{n}B_{2}^{n}\). Similarly, Lemma 3.11 yields
\[\left|\mathcal{M}\right|\leq\left(\frac{C\rho}{\delta}\right)^{n}.\]
Set \(\mathcal{N}_{j}=\mathcal{M}_{j}+\mathcal{M}\subset\delta\,\mathbb{Z}^{n}\). Here we used the assumption that \(\delta^{-1}\in\mathbb{N}\). Then any \(u_{j}\in\delta\,\mathbb{Z}^{n}\) satisfying the conditions of the lemma belongs to \(\mathcal{N}_{j}\), and
\[\left|\mathcal{N}_{j}\right|\leq\left(\frac{C\rho}{r\delta}\right)^{n}\cdot \left(\frac{d_{j}}{\sqrt{n}}\right)^{n}.\]
Set \(\mathcal{N}_{\mathbf{d}}=\prod_{j=1}^{l}\mathcal{N}_{j}\). Multiplying the previous estimates, we obtain
\[\left|\mathcal{N}_{\mathbf{d}}\right|\leq\left(\frac{C\rho}{r\delta}\right)^{ ln}\left(\prod_{j=1}^{l}\frac{d_{j}}{\sqrt{n}}\right)^{n}\]
as required.
The next step is the central technical part of this section: we show that for any \((v_{1},\ldots,v_{l})\in W_{\mathbf{d}}\), there exists a sequence \((u_{1},\ldots,u_{l})\in\mathcal{N}_{\mathbf{d}}\) which approximates it in various ways. As some of these approximations hold only for a randomly chosen point of \(\mathcal{N}_{\mathbf{d}}\), and we need all of them to hold simultaneously, we have to establish all of them at the same time. This will be done by using random rounding as in the proof of Proposition 4.2. The implementation of this method here is somewhat different since we have to control the least common denominator of the matrix \(U\) formed by the vectors \(u_{1},\ldots,u_{l}\).
We will prove the following lemma.
**Lemma 5.3** (Approximation).: _Let \(k\leq cn\). Let \(\mathbf{d}=(d_{1},\ldots,d_{l})\in[r\sqrt{n},R]^{l}\). Let \(\delta>0\) be a sufficiently small constant satisfying (5.2). Let \(B\) be an \((n-k)\times n\) matrix such that \(\left\|B\right\|_{\mathrm{HS}}\leq 2Kn\). For any sequence \((v_{1},\ldots,v_{l})\in W_{\mathbf{d}}\cap\mathrm{Ker}(B)\), there exists a sequence \((u_{1},\ldots,u_{l})\in\mathcal{N}_{\mathbf{d}}\) with the following properties_
1. \(\left\|u_{j}-v_{j}\right\|_{\infty}\leq\delta\) _for all_ \(j\in[l]\)_;_
2. _Let_ \(U\) _and_ \(V\) _be_ \(n\times l\) _matrices with columns_ \(u_{1},\ldots,u_{l}\) _and_ \(v_{1},\ldots,v_{l}\) _respectively. Then_ \[\left\|U-V\right\|\leq C\delta\sqrt{n}.\]
3. _the system_ \((u_{1},\ldots,u_{l})\) _is_ \(\left(\frac{1}{4}\right)\)_-almost orthogonal;_
4. \(\mathrm{span}(u_{1},\ldots,u_{l})\cap S^{n-1}\subset\mathrm{Incomp}(\tau^{2}n,\tau^{4}/2)\)_;_
5. \(\mathrm{dist}(u_{j},\mathbb{Z}^{n})\leq 2\rho\sqrt{n}\) _for all_ \(j\in[l]\)_;_
6. _Let_ \(U\) _be as in (_2_). Then_ \[\mathrm{dist}(U\theta,\mathbb{Z}^{n})>\frac{\rho}{2}\sqrt{n}\] _for any_ \(\theta\in\mathbb{R}^{l}\) _satisfying_ \[\left\|\theta\right\|_{2}\leq\frac{1}{20\sqrt{l}}\quad\text{ and }\quad\left\|U\theta\right\|_{2}\geq 8r\sqrt{n};\]
7. \(\left\|Bu_{j}\right\|_{2}\leq 2K\delta n\) _for all_ \(j\in[l]\)_._
Proof.: Let \((v_{1},\ldots,v_{l})\in W_{\mathbf{d}}\). Choose \(v_{1}^{\prime},\ldots,v_{l}^{\prime}\in\delta\mathbb{Z}^{n}\) such that
\[v_{j}\in v_{j}^{\prime}+\delta[0,1]^{n}\quad\text{for all }j\in[l].\]
Define independent random variables \(\varepsilon_{i,j},i\in[n],j\in[l]\) by setting
\[\mathbb{P}(\varepsilon_{i,j}=v_{j}^{\prime}(i)-v_{j}(i))=1-\frac{v_{j}(i)-v_{ j}^{\prime}(i)}{\delta}\]
and
\[\mathbb{P}(\varepsilon_{i,j}=v_{j}^{\prime}(i)-v_{j}(i)+\delta)=\frac{v_{j}(i )-v_{j}^{\prime}(i)}{\delta}.\]
Then \(\left|\varepsilon_{i,j}\right|\leq\delta\) and \(\mathbb{E}\,\varepsilon_{i,j}=0\). Consider a random point
\[u_{j}=v_{j}+\sum_{i=1}^{n}\varepsilon_{i,j}e_{i}\in\delta\,\mathbb{Z}^{n}.\]
Then \(\mathbb{E}\,u_{j}=v_{j}\) and \(\left\|u_{j}-v_{j}\right\|_{\infty}\leq\delta\) for all \(j\in[l]\) as in (1). Let us check that \((u_{1},\ldots,u_{l})\in\mathcal{N}_{\mathbf{d}}\) for any choice of \(\varepsilon_{i,j}\). Indeed, for any \(j\in[l]\),
\[\left\|u_{j}-v_{j}\right\|_{2}\leq\delta\sqrt{n}\quad\text{and }\left(1-\frac{ \delta}{r}\right)\left\|v_{j}\right\|_{2}\leq\left\|u_{j}\right\|_{2}\leq \left(1+\frac{\delta}{r}\right)\left\|v_{j}\right\|_{2}\]
as \(\left\|v_{j}\right\|_{2}\geq r\sqrt{n}\) for all \(j\in[l]\). This, in particular, implies that \(\left\|u_{j}\right\|_{2}\in\left[\frac{1}{2}d_{j},4d_{j}\right]\) for all \(j\in[l]\) and any values of \(\varepsilon_{i,j}\).
Let \(U\) and \(V\) be the \(n\times l\) matrices with columns \(u_{1},\ldots,u_{l}\) and \(v_{1},\ldots,v_{l}\) respectively. Then the matrix \(U-V\) has independent entries \(\varepsilon_{i,j},\ i\in[n],j\in[l]\) which are centered and bounded by \(\delta\) in the absolute value. By Lemma 3.4,
\[\mathbb{P}(\left\|U-V\right\|\geq C\delta\sqrt{n})\leq\exp(-c^{\prime}n),\]
and so condition (2) holds with probability at least \(1-\exp(-c^{\prime}n)\).
Let us check that condition (3) follows from (2). Let \(D_{U}\) be the diagonal matrix \(D_{U}=\operatorname{diag}(\left\lVert u_{1}\right\rVert_{2},\ldots,\left\lVert u _{l}\right\rVert_{2})\), and define \(D_{V}\) in a similar way. If \(\left\lVert U-V\right\rVert\leq C\delta\sqrt{n}\), then by the \(\left(\frac{1}{8}\right)\)-almost orthogonality of \((v_{1},\ldots,v_{l})\), we get
\[\left\lVert UD_{U}^{-1}\right\rVert \leq\left\lVert UD_{V}^{-1}\right\rVert\cdot\left\lVert D_{V}D_{U }^{-1}\right\rVert\] \[\leq\left[\left\lVert VD_{V}^{-1}\right\rVert+\left\lVert U-V \right\rVert\cdot\left\lVert D_{V}^{-1}\right\rVert\right]\cdot\left\lVert D _{V}D_{U}^{-1}\right\rVert\] \[\leq\left[\frac{9}{8}+C\delta\sqrt{n}\cdot\frac{1}{r\sqrt{n}} \right]\cdot\left(1-\frac{\delta}{r}\right)^{-1}\leq\frac{5}{4}\]
if \(\delta\leq cr\) for an appropriately small constant \(c>0\). Similarly,
\[s_{l}(UD_{U}^{-1}) \geq s_{l}(UD_{V}^{-1})\left\lVert D_{U}D_{V}^{-1}\right\rVert^{-1}\] \[\geq\left[s_{l}(VD_{V}^{-1})-\left\lVert U-V\right\rVert\cdot \left\lVert D_{V}^{-1}\right\rVert\right]\cdot\left\lVert D_{U}D_{V}^{-1} \right\rVert^{-1}\] \[\geq\left[\frac{7}{8}-C\frac{\delta}{r}\right]\cdot\left(1+\frac{ \delta}{r}\right)^{-1}\geq\frac{3}{4}\]
confirming our claim. The last inequality above follows again by choosing \(\delta<cr\) with a sufficiently small \(c>0\).
Let us check that condition (4) follows from (2) and (3). Indeed, let \(\theta\in\mathbb{R}^{l}\) be such that \(\left\lVert U\theta\right\rVert_{2}=1\). Since \(\min(\left\lVert u_{1}\right\rVert_{2},\ldots,\left\lVert u_{l}\right\rVert _{2})\geq r\sqrt{n}\), and the system \(u_{1},\ldots,u_{l}\) is \(\left(\frac{1}{4}\right)\)-almost orthogonal, we have
\[\left\lVert\theta\right\rVert_{2}\leq\left(s_{l}(U)\right)^{-1}\left\lVert U \theta\right\rVert_{2}\leq\frac{4}{3}\cdot\frac{1}{r\sqrt{n}}.\]
At the same time,
\[\left\lVert V\theta\right\rVert_{2}\geq\left\lVert U\theta\right\rVert_{2}- \left\lVert U-V\right\rVert\cdot\left\lVert\theta\right\rVert_{2}\geq 1- \delta\sqrt{n}\cdot\frac{4}{3}\cdot\frac{1}{r\sqrt{n}}=1-\frac{4\delta}{3r}.\]
Take any \(y\in\operatorname{Sparse}(\tau^{2}n)\). Then
\[\left\lVert V\theta-y\right\rVert_{2}\geq\left(1-\frac{4\delta}{3r}\right) \cdot\left\lVert\frac{V\theta}{\left\lVert V\theta\right\rVert_{2}}-\frac{y}{ \left\lVert V\theta\right\rVert_{2}}\right\rVert_{2}\geq\left(1-\frac{4\delta }{3r}\right)\cdot\tau^{4}\]
since \(\frac{V\theta}{\left\lVert V\theta\right\rVert_{2}}\in\operatorname{Incomp}( \tau^{2}n,\tau^{4})\). Therefore,
\[\left\lVert U\theta-y\right\rVert_{2}\geq\left\lVert V\theta-y\right\rVert _{2}-\left\lVert U-V\right\rVert\cdot\left\lVert\theta\right\rVert_{2}\geq \left(1-\frac{4\delta}{3r}\right)\cdot\tau^{4}-\frac{4\delta}{3r}\geq\frac{1}{ 2}\tau^{4}\]
where the last inequality holds if \(\delta\) is appropriately adjusted depending on \(r\) and \(\tau\). This proves that if \(\left\lVert U\theta\right\rVert_{2}=1\) then \(\operatorname{dist}(U\theta,\operatorname{Sparse}(\tau^{2}n))\geq\tau^{4}/2\), i.e., \(U\theta\in\operatorname{Incomp}(\tau^{2}n,\tau^{4}/2)\). This completes verifying (4).
Condition (5) immediately follows from (1) and the triangle inequality:
\[\operatorname{dist}(u_{j},\mathbb{Z}^{n})\leq\operatorname{dist}(v_{j}, \mathbb{Z}^{n})+\delta\sqrt{n}\leq 2\rho\sqrt{n}\]
since \(\delta\leq\rho\).
Condition (6) follows from (2) and (3). Indeed, let \(\theta\) be as in (6), and assume that \(\left\lVert U-V\right\rVert\leq C\delta\sqrt{n}\). Since both \((v_{1},\ldots,v_{l})\) and \((u_{1},\ldots,u_{l})\) are \(\left(\frac{1}{4}\right)\)-almost orthogonal, and \(\left\lVert u_{j}\right\rVert_{2}\geq\frac{1}{2}\left\lVert v_{j}\right\rVert_{2}\),
\[\left\lVert V\theta\right\rVert_{2}^{2}\geq\frac{1}{4}\sum_{j=1}^{l}\theta_{j}^ {2}\left\lVert v_{j}\right\rVert_{2}^{2}\geq\frac{1}{16}\sum_{j=1}^{l}\theta_{j }^{2}\left\lVert u_{j}\right\rVert_{2}^{2}\geq\frac{1}{64}\left\lVert U\theta \right\rVert_{2}^{2}\geq r^{2}n.\]
As \((v_{1},\dots,v_{l})\in W_{\mathbf{d}}\), this implies that
\[\operatorname{dist}(V\theta,\mathbb{Z}^{n})>\rho\sqrt{n}.\]
Therefore,
\[\operatorname{dist}(U\theta,\mathbb{Z}^{n}) \geq\operatorname{dist}(V\theta,\mathbb{Z}^{n})-\left\|(U-V)\theta\right\|_{2}\] \[>\rho\sqrt{n}-\left\|U-V\right\|\cdot\left\|\theta\right\|_{2} \geq\rho\sqrt{n}-C\delta\sqrt{n}\cdot\frac{1}{20\sqrt{l}}\] \[\geq\frac{\rho}{2}\sqrt{n},\]
where we adjust \(\delta\) again if necessary. Thus \((u_{1},\dots,u_{l})\) satisfy (2)-(6) with probability at least \(1-\exp(-c^{\prime}n)\).
It remains to show that we can choose \((u_{1},\dots,u_{l})\) satisfying (7) at the same time. For any \(j\in[l]\), we have
\[\mathbb{E}\left\|B(u_{j}-v_{j})\right\|_{2}^{2}=\mathbb{E}\left\|\sum_{i=1}^{ n}\varepsilon_{i,j}Be_{i}\right\|_{2}^{2}=\sum_{i=1}^{n}\mathbb{E}\,\varepsilon_{i, j}^{2}\left\|Be_{i}\right\|_{2}^{2}\leq\frac{\delta^{2}}{4}\left\|B\right\|_{ \mathrm{HS}}^{2}\leq K^{2}\delta^{2}n^{2}.\]
By Chebyshev's inequality,
\[\mathbb{P}(\left\|B(u_{j}-v_{j})\right\|_{2}\leq 2K\delta n)\geq\frac{1}{2}.\]
In view of independence of these events for different \(j\),
\[\mathbb{P}(\forall j\in[l]\;\left\|B(u_{j}-v_{j})\right\|_{2}\leq 2K\delta n) \geq 2^{-l}.\]
As
\[1-\exp(-c^{\prime}n)+2^{-l}>1,\]
there is a realization \((u_{1},\dots,u_{l})\in\mathcal{N}_{\mathbf{d}}\) satisfying (2)-(6) for which
\[\left\|Bu_{j}\right\|_{2}=\left\|B(u_{j}-v_{j})\right\|_{2}\leq 2K\delta n\]
holds for all \(j\in[l]\) simultaneously. This finishes the proof of the lemma.
For the rest of the proof, fix a value of \(\delta\) satisfying (5.2) for which Lemma 5.3 holds.
We will now use the small ball probability estimate of Theorem 3.8 to show that the event \(W_{\mathbf{d}}\cap\operatorname{Ker}(B)\neq\varnothing\) is unlikely.
**Lemma 5.4**.: _Let \(\mathbf{d}=(d_{1},\dots,d_{l})\in[r\sqrt{n},R]^{l}\) where \(R,r\) are defined above. Let \(k\leq\frac{\delta}{20}\sqrt{n}\) and \(\frac{k}{10}\leq l\leq k\). Then_
\[\mathbb{P}\left(W_{\mathbf{d}}\cap\operatorname{Ker}(B)\neq\varnothing\right) \leq\exp(-2ln).\]
Proof.: Let \(\mathcal{N}_{\mathbf{d}}\) be the net constructed in Lemma 5.2. Let \(\tilde{\mathcal{N}}_{\mathbf{d}}\) be the set of all \((u_{1},\dots,u_{l})\in\mathcal{N}_{\mathbf{d}}\) which satisfy conditions (3) - (6) of Lemma 5.3. Consider an \(l\)-tuple \(u_{1},\dots,u_{l}\in\tilde{\mathcal{N}}_{\mathbf{d}}\). Let \(U\) be the \(n\times l\) matrix with columns \(u_{1},\dots,u_{l}\).
To apply the Levy concentration estimate of Theorem 3.8, we have to bound the LCD of \(U^{\top}\) from below. Let us show that
\[D_{L,\alpha}(U^{\top})\geq\frac{1}{20\sqrt{l}}. \tag{5.3}\]
Take \(\theta\in\mathbb{R}^{l}\) such that \(\left\|\theta\right\|_{2}\leq\frac{1}{20\sqrt{l}}\). Assume first that
\[\left\|U\theta\right\|_{2}\leq 8r\sqrt{n}\leq\sqrt{\tau^{2}n}.\]
Recall that \(L\) and \(\alpha\) are defined as in (5.1). Since the columns of \(U\) satisfy (4) of Lemma 5.3, applying Lemma 3.10 yields
\[\operatorname{dist}(U\theta,\mathbb{Z}^{n})\geq L\sqrt{\log_{+}\frac{\alpha \left\|U\theta\right\|_{2}}{L}}.\]
Assume now that \(\left\|U\theta\right\|_{2}>8r\sqrt{n}\). Since \(\left\|u_{j}\right\|_{2}\leq 4d_{j}\leq 4R\) for all \(j\in[l]\) by the definition of \(\mathcal{N}_{\mathbf{d}}\),
\[\left\|U\right\|_{\mathrm{HS}}\leq\sqrt{l}\max_{j\in[l]}\left\|u_{j}\right\|_{2}\leq 4\sqrt{l}R.\]
Hence,
\[L\sqrt{\log_{+}\frac{\alpha\left\|U\theta\right\|_{2}}{L}}\leq L\sqrt{\log_{+}\frac{\alpha\left\|U\right\|_{\mathrm{HS}}\left\|\theta\right\|_{2}}{L}}\leq L\sqrt{\log_{+}R}\leq L\sqrt{\frac{\rho^{2}}{4}\cdot\frac{n}{L^{2}}}\leq\frac{\rho}{2}\sqrt{n}.\]
Recall that by condition (6) of Lemma 5.3,
\[\operatorname{dist}(U\theta,\mathbb{Z}^{n})>\frac{\rho}{2}\sqrt{n}\]
whenever \(\theta\in\mathbb{R}^{n}\) satisfies
\[\left\|\theta\right\|_{2}\leq\frac{1}{20\sqrt{l}}\quad\text{ and }\quad\left\|U\theta\right\|_{2}\geq 8r\sqrt{n}.\]
Combining these two cases, we see that any vector \(\theta\in\mathbb{R}^{l}\) with \(\left\|\theta\right\|_{2}\leq\frac{1}{20\sqrt{l}}\) satisfies
\[\operatorname{dist}(U\theta,\mathbb{Z}^{n})\geq L\sqrt{\log_{+}\frac{\alpha \left\|U\theta\right\|_{2}}{L}}\]
which proves (5.3).
Using condition (3) of Lemma 5.3 and Lemma 3.2, we infer
\[\det(U^{\top}U)^{1/2}\geq 4^{-l}\prod_{j=1}^{l}\left\|u_{j}\right\|_{2}\geq 8^{ -l}\prod_{j=1}^{l}d_{j}.\]
Let \(i\in[n]\). Recall that \(\operatorname{Row}_{i}(B)\in\mathbb{R}^{n}\) is a vector with i.i.d. random coordinates satisfying (3.2) and that \(l\leq k\leq\frac{\delta}{20}\sqrt{n}\). Combining this with (3.3) used with
\[t\geq\delta\sqrt{n}\geq 20l\geq\frac{\sqrt{l}}{D_{L,\alpha}(U^{\top})}\]
and recalling that \(L=O(\sqrt{k})\) by (5.1) and \(l\geq k/10\), we obtain
\[\mathbb{P}\left(\left\|U^{\top}(\operatorname{Row}_{i}(B))^{\top}\right\|_{2 }\leq t\sqrt{l}\right)\leq\frac{(CL/\sqrt{l})^{l}}{\det(U^{\top}U)^{1/2}}\left( t+\frac{\sqrt{l}}{D_{L,\alpha}(U^{\top})}\right)^{l}\leq\frac{C^{l}}{\prod_{j=1}^{l}d_{j} }t^{l}.\]
Denote
\[Y_{i}=\frac{1}{l}\left\|U^{\top}(\operatorname{Row}_{i}(B))^{\top}\right\|_{2 }^{2},\qquad M=\frac{C^{2}}{\left(\prod_{j=1}^{l}d_{j}\right)^{2/l}}.\]
Then we can rewrite the last inequality as
\[\mathbb{P}(Y_{i}\leq s)\leq(Ms)^{l/2}\quad\text{for }s\geq s_{0}=\delta^{2}n.\]
In view of Lemma 3.6 applied with \(m=l/2\) and \(t=4K^{2}s_{0}\) with \(K\) from (3.2), this yields
\[\mathbb{P}\left(\left\|Bu_{j}\right\|_{2}\leq 2K\delta n\text{ for all }j \in[l]\right) \leq\mathbb{P}\left(\sum_{j=1}^{l}\left\|Bu_{j}\right\|_{2}^{2}\leq 4K^{2} \delta^{2}ln^{2}\right)\] \[=\mathbb{P}\left(\sum_{i=1}^{n}\left\|U^{\top}(\operatorname{Row }_{i}(B))^{\top}\right\|_{2}^{2}\leq 4K^{2}\delta^{2}ln^{2}\right)\] \[=\mathbb{P}\left(\sum_{i=1}^{n}Y_{i}\leq n\cdot 4K^{2}\delta^{2}n \right)\leq(C^{\prime}M\delta^{2}n)^{nl/2}\] \[=(C^{\prime\prime}\delta)^{ln}\cdot\left(\prod_{j=1}^{l}\frac{ \sqrt{n}}{d_{j}}\right)^{n}.\]
Recall that \(\tilde{\mathcal{N}_{\mathbf{d}}}\subset\mathcal{N}_{\mathbf{d}}\). The combination of the small ball probability estimate above and Lemma 5.2 gives
\[\mathbb{P}\left(\exists(u_{1},\ldots,u_{l})\in\tilde{\mathcal{N}}_{\mathbf{d}}\ \left\|Bu_{j}\right\|_{2}\leq 2K\delta n,\ j\in[l]\right)\] \[\leq\left(\frac{C\rho}{r\delta}\right)^{ln}\left(\prod_{j=1}^{l}\frac{d_{j}}{\sqrt{n}}\right)^{n}\cdot(C^{\prime\prime}\delta)^{ln}\cdot\left(\prod_{j=1}^{l}\frac{\sqrt{n}}{d_{j}}\right)^{n}\] \[=\left(\frac{C^{\prime}\rho}{r}\right)^{ln}<\exp(-2ln)\]
if \(\rho<\frac{r}{e^{2}C^{\prime}}\).
Notice that
\[\mathbb{P}\left(W_{\mathbf{d}}\cap\operatorname{Ker}(B)\neq\varnothing\right) \leq\mathbb{P}\left(W_{\mathbf{d}}\cap\operatorname{Ker}(B)\neq \varnothing\text{ and }\left\|B\right\|_{\operatorname{HS}}\leq 2Kn\right)\] \[+\mathbb{P}\left(\left\|B\right\|_{\operatorname{HS}}\geq 2Kn \right).\]
In view of Lemma 3.5, the second term is smaller than \(\exp(-cn^{2})\) which means that we have to concentrate on the first one.
Assume that the events \(W_{\mathbf{d}}\cap\operatorname{Ker}(B)\neq\varnothing\) and \(\left\|B\right\|_{\operatorname{HS}}\leq 2Kn\) occur, and pick an \(l\)-tuple \((v_{1},\ldots,v_{l})\in W_{\mathbf{d}}\cap\operatorname{Ker}(B)\). Choose an approximating \(l\)-tuple \((u_{1},\ldots,u_{l})\in\mathcal{N}_{\mathbf{d}}\) as in Lemma 5.3. Then \((u_{1},\ldots,u_{l})\in\tilde{\mathcal{N}}_{\mathbf{d}}\) and \(\left\|Bu_{j}\right\|_{2}\leq 2K\delta n\) by condition (7) of this lemma. The argument above shows that the probability that such a tuple \((u_{1},\ldots,u_{l})\in\tilde{\mathcal{N}}_{\mathbf{d}}\) exists is at most \(\exp(-2ln)\). The lemma is proved.
Proposition 5.1 follows from Lemma 5.4 by taking the union bound over dyadic values of the coordinates of \(\mathbf{d}\).
Proof of Proposition 5.1.: Let \(\mathcal{E}_{\mathbf{d}}\) be the event that \(W_{\mathbf{d}}\cap\operatorname{Ker}(B)\neq\varnothing\). Then
\[\mathcal{E}_{l}=\bigcup\mathcal{E}_{\mathbf{d}},\]
where the union is taken over all vectors \(\mathbf{d}\) with dyadic coordinates: \(d_{j}=2^{s_{j}},\;s_{j}\in\mathbb{N}\) such that \(2^{s_{j}}\in[r\sqrt{n},R]\). Since there are at most
\[\left[\log\left(\frac{2R}{r\sqrt{n}}\right)\right]^{l}\leq\left(\frac{C\rho^{2 }n}{L^{2}}\right)^{l}\]
terms in the union, Lemma 5.4 yields
\[\mathbb{P}(\mathcal{E}_{l})\leq\left(\frac{C\rho^{2}n}{L^{2}}\right)^{l}\exp\left(-2ln\right)\leq\exp(-ln)\]
where we took into account that \(L>1\). This finishes the proof of the proposition.
## 6. Rank of a random matrix
We will complete the proof of Theorem 1.1 using the probability estimates of Propositions 4.2 and 5.1. These propositions show that the orthogonal complement of the linear subspace spanned by the first \(n-k\) columns of the matrix \(A\) is unlikely to contain a large almost orthogonal system of vectors with a small or moderate least common denominator. Applying Lemma 3.3, we will show that with high probability, this orthogonal complement contains a further subspace of dimension proportional to \(k\) which has no vectors with a subexponential least common denominator. The next lemma shows that in such a typical situation, it is unlikely that the rank of the matrix \(A\) is \(n-k\) or smaller.
**Lemma 6.1**.: _Let \(A\) be an \(n\times n\) random matrix whose entries are independent copies of a random variable \(\xi\) satisfying (3.2). For \(k<\sqrt{n}\) define_
\[\Omega_{k}=\Omega_{k}(\operatorname{Col}_{1}(A),\ldots,\operatorname{Col}_{n- k}(A))\]
_as the event that there exists a linear subspace \(E\subset\big{(}\operatorname{span}(\operatorname{Col}_{1}(A),\ldots, \operatorname{Col}_{n-k}(A))\big{)}^{\perp}\) such that \(\dim(E)\geq k/2\) and_
\[D_{L,\alpha}(E)\geq\exp\left(C\frac{n}{k}\right).\]
_Then_
\[\mathbb{P}\big{(}\operatorname{Col}_{j}(A)\in\operatorname{span} (\operatorname{Col}_{i}(A),\ i\in[n-k])\text{ for }j=n-k+1,\ldots,n\text{ and }\Omega_{k}\big{)}\] \[\leq\exp(-c^{\prime}nk).\]
Proof.: Assume that \(\Omega_{k}\) occurs. The subspace \(E\) can be selected in a measurable way with respect to the sigma-algebra generated by \(\operatorname{Col}_{1}(A),\ldots,\operatorname{Col}_{n-k}(A)\). Therefore, conditioning on \(\operatorname{Col}_{1}(A),\ldots,\operatorname{Col}_{n-k}(A)\) fixes this subspace. Denote the orthogonal projection on the space \(E\) by \(P_{E}\). Since \(E\) is independent of \(\operatorname{Col}_{n-k+1}(A),\ldots,\operatorname{Col}_{n}(A)\), and these columns are mutually independent as well, it is enough to prove that
\[\mathbb{P}(\operatorname{Col}_{j}(A)\in\operatorname{span}( \operatorname{Col}_{i}(A),\ i\in[n-k])\text{ for }j=n-k+1,\ldots,n\mid E)\] \[\leq\mathbb{P}(\operatorname{Col}_{j}(A)\in E^{\perp}\text{ for }j=n-k+1, \ldots,n\mid E)\] \[=\big{(}\mathbb{P}(P_{E}\operatorname{Col}_{n}(A)=0\mid E)\big{)} ^{k}\] \[\leq\exp(-cnk),\]
that is, it suffices to show that
\[\mathbb{P}(P_{E}\operatorname{Col}_{n}(A)=0\mid E)\leq\exp(-cn).\]
Using Corollary 3.9 with \(m=k/2\) and \(t=0\), we obtain
\[\mathbb{P}(P_{E}\operatorname{Col}_{n}(A)=0\mid E)\leq C^{m}\left(\sqrt{m}\exp \left(-C\frac{n}{k}\right)\right)^{m}\leq\exp(-cn)\]
as required.
With all ingredients in place, we are now ready to prove the main theorem.
Proof of Theorem 1.1.: Recall that it is enough to prove Theorem 1.1 under the condition that the entries of \(A\) are i.i.d. copies of a random variable satisfying (3.2).
Assume that \(\operatorname{rank}(A)\leq n-k\). Then there exists a set \(J\subset[n],\ |J|=n-k\) such that \(\operatorname{Col}_{j}(A)\in\operatorname{span}(\operatorname{Col}_{i}(A),\ i\in J)\) for all \(j\in[n]\setminus J\). Since the number of such sets is
\[\binom{n}{k}\leq\exp\left(k\log\left(\frac{en}{k}\right)\right)\ll\exp(ckn),\]
it is enough to show that
\[\mathbb{P}\left(\operatorname{Col}_{j}(A)\in\operatorname{span}(\operatorname{Col}_{i}(A),\ i\in J)\text{ for all }j\in[n]\setminus J\right)\leq\exp(-ckn)\]
for a single set \(J\). As the probability above is the same for all such sets \(J\), without loss of generality assume that \(J=[n-k]\).
Consider the \((n-k)\times n\) matrix \(B\) with rows \(\operatorname{Row}_{j}(B)=(\operatorname{Col}_{j}(A))^{\top}\) for \(j\in[n-k]\). Let \(E_{0}=\operatorname{Ker}(B)\). Then the condition \(\operatorname{Col}_{j}(A)\in\operatorname{span}(\operatorname{Col}_{i}(A),\ i \in[n-k])\) reads \(P_{E_{0}}\operatorname{Col}_{j}(A)=0\).
Let \(\tau\) be the constant appearing in Proposition 4.2, and denote
\[W_{0}=\operatorname{Comp}(\tau^{2}n,\tau^{4}).\]
Set \(l=k/4\) and apply Lemma 3.3 with \(E=E_{0}\) and \(W=W_{0}\). The lemma asserts that at least one of the events described in (1) and (2) occurs. Denote these events \(\mathcal{E}_{3.3}^{(1)}\) and \(\mathcal{E}_{3.3}^{(2)}\) respectively. In view of Proposition 4.2,
\[\mathbb{P}(\mathcal{E}_{3.3}^{(1)})\leq\exp\left(-\frac{k}{4}n\right).\]
Here we used only condition (1a) in Lemma 3.3 ignoring condition (1b).
Assume now that \(\mathcal{E}_{3.3}^{(2)}\) occurs and consider the subspace \(F\subset E_{0}\), \(\dim(F)=\frac{3}{4}k\), such that \(F\cap\operatorname{Comp}(\tau^{2}n,\tau^{4})=\varnothing\). Let \(\rho,r\) be the constants appearing in Proposition 5.1, and let \(L\) be as in (5.1). Set
\[W_{1}=\left\{v\in F:\ \frac{\tau}{8}\sqrt{n}\leq\left\|v\right\|_{2}\leq\exp\left(\frac{\rho^{2}n}{4L^{2}}\right)\text{ and }\operatorname{dist}(v,\mathbb{Z}^{n})\leq\rho\sqrt{n}\right\}.\]
Applying Lemma 3.3 to \(W_{1}\) and \(l=\frac{k}{4}\), we again conclude that one of the following events occurs:
1. there exist vectors \(v_{1},\dots,v_{k/4}\in F\cap W_{1}\) such that 1. the \((k/4)\)-tuple \((v_{1},\dots,v_{k/4})\) is \(\left(\frac{1}{8}\right)\)-almost orthogonal and 2. for any \(\theta\in\mathbb{R}^{k/4}\) with \[\left\|\theta\right\|_{2}\leq\frac{1}{20\sqrt{k/4}},\] \[\sum_{i=1}^{k/4}\theta_{i}v_{i}\notin W_{1}\] or 2. there is a subspace \(\tilde{F}\subset F\) with \(\dim(\tilde{F})=\frac{k}{2}\) such that \(\tilde{F}\cap W_{1}=\varnothing\).
Denote these events \(\mathcal{V}_{3.3}^{(1)}\) and \(\mathcal{V}_{3.3}^{(2)}\) respectively. In view of Proposition 5.1,
\[\mathbb{P}(\mathcal{V}_{3.3}^{(1)})\leq\exp\left(-\frac{k}{4}n\right).\]
Assume now that the event \(\mathcal{V}_{3.3}^{(2)}\) occurs. We claim that in this case,
\[D_{L,\alpha}(\tilde{F})\geq R:=\exp\left(\frac{\rho^{2}n}{4L^{2}}\right).\]
The proof is similar to the argument used in the proof of Lemma 5.4. Let \(S:\mathbb{R}^{k/2}\to\mathbb{R}^{n}\) be an isometric embedding such that \(S\mathbb{R}^{k/2}=\tilde{F}\). Then \(D_{L,\alpha}(\tilde{F})=D_{L,\alpha}(S^{\top})\). Let \(\theta\in\mathbb{R}^{k/2}\) be a vector such that
\[\operatorname{dist}(S\theta,\mathbb{Z}^{n})<L\sqrt{\log_{+}\frac{\alpha\left\| \theta\right\|_{2}}{L}}.\]
Since
\[S\mathbb{R}^{k/2}\cap S^{n-1}\subset F\cap S^{n-1}\subset\operatorname{Incomp }(\tau^{2}n,\tau^{4}),\]
Lemma 3.10 applied with \(U=S\) and \(s=\tau^{2}\) yields
\[\left\|\theta\right\|_{2}\geq\tau\sqrt{n}.\]
On the other hand, if \(\left\|\theta\right\|_{2}\leq R\), then
\[L\sqrt{\log_{+}\frac{\alpha\left\|\theta\right\|_{2}}{L}}\leq\rho\sqrt{n},\]
and therefore \(\operatorname{dist}(S\theta,\mathbb{Z}^{n})<\rho\sqrt{n}\). Since \(\tilde{F}\cap W_{1}=S\mathbb{R}^{k/2}\cap W_{1}=\varnothing\), this implies that
\[\left\|\theta\right\|_{2}=\left\|S\theta\right\|_{2}>R=\exp\left(\frac{\rho^{ 2}n}{4L^{2}}\right),\]
thus proving our claim and checking the assumption of Lemma 6.1.
Finally,
\[\mathbb{P}\left(\operatorname{Col}_{j}(A)\in\operatorname{span }(\operatorname{Col}_{i}(A),\ i\in[n-k])\text{ for }j=n-k+1,\ldots,n\right)\] \[\leq 2\exp(-\frac{k}{4}n)\] \[+\mathbb{P}(\operatorname{Col}_{j}(A)\in\operatorname{span}( \operatorname{Col}_{i}(A),\ i\in[n-k])\text{ for }j=n-k+1,\ldots,n\text{ and }\mathcal{V}_{3.3}^{(2)}).\]
Lemma 6.1 shows that the last probability does not exceed \(\exp(-c(k/2)n)\). The proof is complete.
With the theorem proved, we can derive an application to the question of Feige and Lellouche.
**Lemma 6.2**.: _Let \(q\in(0,1)\), and \(m,n\in\mathbb{N}\) be numbers such that_
\[m\leq n\leq\exp\left(C_{q}^{\prime}\sqrt{m}\right).\]
_Let \(A\) be an \(m\times n\) matrix with independent Bernoulli\((q)\) entries. Then with probability at least \(1-\exp(-m\log n)\) all \(m\times m\) submatrices of \(A\) have rank greater than \(m-C_{q}\log n\)._
_Furthermore, if \(n\geq m^{2}\), then with probability at least \(1-\exp(-cm)\), there exists an \(m\times m\) submatrix \(A|_{S}\) of \(A\) with \(|S|=m\) such that_
\[\operatorname{rank}(A|_{S})\leq m-c_{q}\log n.\]
_The constants \(C_{q}>c_{q}>0\) above can depend on \(q\)._
Proof.: As before, it is enough to consider an \(m\times n\) matrix \(A^{\prime}\) whose entries are \(a^{\prime}_{i,j}=a_{i,j}-\operatorname{\mathbb{E}}a_{i,j}\). Indeed, \(\operatorname{rank}(A-A^{\prime})=1\) which does not affect the estimates of the lemma. The entries of \(A^{\prime}\) are i.i.d. centered subgaussian random variables, so Theorem 1.1 applies to an \(m\times m\) submatrix of \(A^{\prime}\) as long as \(k\leq c\sqrt{m}\). In view of the assumption of the lemma, the last inequality holds if we take \(k=C_{q}\log n\). Combining Theorem 1.1 with the union bound, we obtain
\[\mathbb{P}\left(\exists S\subset[n]:\ |S|=m\text{ and }\operatorname{rank}(A^{\prime}|_{S})\leq m-C_{q}\log n\right)\] \[\leq\binom{n}{m}\exp(-c^{\prime}mC_{q}\log n)\leq\exp\left(m\log\left(\frac{en}{m}\right)-c^{\prime}mC_{q}\log n\right)\] \[\leq\exp(-m\log n)\]
if \(C_{q}\) is chosen sufficiently large.
To prove the second part of the lemma, take \(k<m\) and define a random subset \(J\subset[n]\) by
\[J=\{j\in[n]:\ a_{1,j}=\cdots=a_{k,j}=1\}.\]
Then for any \(j\in[n]\),
\[\mathbb{P}(j\in J)=q^{k},\]
and these events are independent for different \(j\in[n]\). Take \(k=c_{q}\log n\) and choose \(c_{q}\) so that \(nq^{k}\geq 10m\). Using Chernoff's inequality, we obtain
\[\mathbb{P}(|J|\geq m)\geq 1-\exp(-cm).\]
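For instance (this particular constant is our illustration, not part of the original argument), \(c_{q}=\frac{1}{3\ln(1/q)}\) is admissible: then

\[q^{k}=e^{k\ln q}=n^{-c_{q}\ln(1/q)}=n^{-1/3},\qquad\text{so}\qquad nq^{k}=n^{2/3}\geq m^{4/3}\geq 10m\]

by the assumption \(n\geq m^{2}\), provided \(m\) is sufficiently large; smaller values of \(m\) can be absorbed into the constants.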
On the other hand, \(\operatorname{rank}(A|_{J})\leq m-k+1\) since this matrix contains \(k\) identical rows. The lemma is proved.
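As a quick numerical illustration of this construction (our own sketch with arbitrary small parameter values, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
q, m, n, k = 0.5, 20, 400, 2  # n = m**2; k of order c_q * log(n)

A = (rng.random((m, n)) < q).astype(int)  # Bernoulli(q) entries
# J collects the columns whose first k entries are all ones, so the first
# k rows of A restricted to J are identical all-ones rows.
J = np.flatnonzero(A[:k, :].all(axis=0))
assert len(J) >= m, "resample: |J| turned out smaller than m"
S = J[:m]
print(np.linalg.matrix_rank(A[:, S]))  # at most m - k + 1 = 19
```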
|
2304.01409 | An Efficient Learning-Based Solver for Two-Stage DC Optimal Power Flow
with Feasibility Guarantees | In this paper, we consider the scenario-based two-stage stochastic DC optimal
power flow (OPF) problem for optimal and reliable dispatch when the load is
facing uncertainty. Although this problem is a linear program, it remains
computationally challenging to solve due to the large number of scenarios
needed to accurately represent the uncertainties. To mitigate the computational
issues, many techniques have been proposed to approximate the second-stage
decisions so they can be dealt with more efficiently. The challenge of finding good
policies to approximate the second-stage decisions is that these solutions need
to be feasible, which has been difficult to achieve with existing policies.
To address these challenges, this paper proposes a learning method to solve
the two-stage problem in a more efficient and optimal way. A technique called
the gauge map is incorporated into the learning architecture design to
guarantee the learned solutions' feasibility to the network constraints.
Namely, we can design policies that are feed forward functions and only output
feasible solutions. Simulation results on standard IEEE systems show that,
compared to iterative solvers and the widely used affine policy, our proposed
method not only learns solutions of good quality but also accelerates the
computation by orders of magnitude. | Ling Zhang, Daniel Tabas, Baosen Zhang | 2023-04-03T22:56:08Z | http://arxiv.org/abs/2304.01409v2 | # An Efficient Learning-Based Solver for Two-Stage DC Optimal Power Flow with Feasibility Guarantees
###### Abstract
In this paper, we consider the scenario-based two-stage stochastic DC optimal power flow (OPF) problem for optimal and reliable dispatch when the load is facing uncertainty. Although this problem is a linear program, it remains computationally challenging to solve due to the large number of scenarios needed to accurately represent the uncertainties. To mitigate the computational issues, many techniques have been proposed to approximate the second-stage decisions so they can be dealt with more efficiently. The challenge of finding good policies to approximate the second-stage decisions is that these solutions need to be feasible, which has been difficult to achieve with existing policies.
To address these challenges, this paper proposes a learning method to solve the two-stage problem in a more efficient and optimal way. A technique called the gauge map is incorporated into the learning architecture design to guarantee the learned solutions' feasibility to the network constraints. Namely, we can design policies that are feed-forward functions that only output feasible solutions. Simulation results on standard IEEE systems show that, compared to iterative solvers and the widely used affine policy, our proposed method not only learns solutions of good quality but also accelerates the computation by orders of magnitude.
## I Introduction
The optimal power flow (OPF) problem is one of the fundamental tools in the operation and planning of power systems [1, 2, 3]. It determines the minimum-cost generator outputs that meet the system demand and satisfy the power flow equations and operational limits on generators, line flows and other devices. Traditionally, the OPF is formulated as a deterministic optimization problem, where a solution is computed for some nominal and fixed demand. However, with significant penetration of renewable energy into the power grid as well as demand response programs, the fluctuation in the demand should be explicitly taken into account [4].
To take uncertainties in the net-load into account,1 stochastic programming methods are commonly used to reformulate the OPF as a multi-stage problem [5, 6, 7]. In these problems, decisions are made sequentially at each stage, based on the forecast of the net-load and the fact that additional adjustments can be made in future stages when the uncertainties are better known.
Footnote 1: In this paper, we use the term net-load to capture both renewable generation in the system [5] and the load.
In this paper, we consider a two-stage stochastic program based on the DC optimal power flow (DCOPF) model. The DC power flow model linearizes the power flow equations and is the workhorse of the power industry [8]. The two-stage DCOPF problem is also becoming increasingly popular as a canonical problem that incorporates the impact of uncertainties arising from renewable resources [9, 10]. In more general terms, the two-stage DCOPF problem falls under the category of _two-stage stochastic linear programs with (fixed) recourse_ (2S-SLPR) [11].
Like other 2S-SLPR problems, the second stage of the two-stage DCOPF involves an expectation of the uncertain parameters (i.e., the randomness in the net-load) over some probability distribution. In practice, the probability distributions are rarely known and difficult to work with analytically. Therefore, several different approaches have been used to approximate 2S-SLPR problems. Among these, the most popular is the sample average approximation (SAA) [12, 13, 14].
The SAA is a basic Monte Carlo simulation method, which represents the random parameter using a finite set of realizations (scenarios), yielding a (possibly large) deterministic two-stage linear programming problem. Though the SAA approach is easy to implement, directly using it to solve the two-stage DCOPF may result in computational challenges. A reason for this is that the SAA method tends to require a large sample set in order to generate a good-quality solution [15, 16, 17], rendering the SAA formulation for two-stage DCOPF into a very large-scale linear program. In some sense, the challenge has moved from generating many high-quality samples from a probabilistic forecast to being able to solve an optimization problem using these samples [18, 19, 20]. Secondly, as decisions in power system operations are made in a more online (or corrective) manner [10, 21], OPF problems need to be solved repeatedly in real time. Even though solving a single linear program is easy, solving two-stage DCOPF problems is not [19, 22].
A common approach to reduce the computational burden in solving two-stage DCOPFs is to model the second-stage decisions using an affine policy. More specifically, the second-stage (or the recourse) dispatch decision is restricted to be an affine function of the realized net-load and the first-stage
decisions [23, 24, 25]. Once the affine policy is determined, the decision-making in the real time is just simple function evaluations. This method has been observed to provide good performance when the net-load variations are small or are restricted to a few possible instances [26, 27, 28]. However, if the variations are large or include many possible values, the affine policy method tends to not perform well. In fact, it may produce decisions that do not even satisfy the constraints in the two-stage optimization problem.
In this paper, we overcome the challenge in policy design and solving two-stage DCOPF problems by presenting a neural network (NN)-based architecture that is computationally efficient and also guarantees the feasibility of learned solutions. In particular, our architecture involves two neural networks, one each for the first and second stages. The first neural network learns the mapping from the load forecast to the first-stage decisions. The second neural network approximates the cost-to-go given the net-load realization and the learned first-stage decisions. So, instead of using the affine policy, we offer an NN-based policy to solve the second-stage OPF problem. This NN policy is constructed using a technique called the _gauge map_[29, 30], which allows the output of the NN to be guaranteed to satisfy the DCOPF constraints. Since this policy also involves only function evaluations, it preserves the speed of affine policies. At the same time, a neural network is much more expressive than an affine function, and can provide much better approximations to the true solution.
The main advantages of the proposed learning architecture are summarized below:
1. Since decision-making using the NNs only involves feed-forward calculations, the proposed approach can solve problems at much faster speed (i.e., within milliseconds on average) compared to iterative solvers.
2. By using the gauge map, the neural networks' outputs are guaranteed to be a feasible solution in the constraint set. As a result, all constraints in the problem are satisfied by construction, which cannot be done using affine policies.
3. We validate the effectiveness of the proposed approach by applying it to solving two-stage DCOPF problems on the modified IEEE 118-bus system. The simulation results demonstrate the ability of our approach to generate high-quality solutions orders of magnitude more quickly than commercial solvers.
The rest of this article is organized as follows: In Section II, we describe the general setup of the two-stage DCOPF problem and the two widely-used formulations of two-stage DCOPF. Section III presents the proposed learning approach to solving the two-stage DCOPF problem, including the overall architecture design, the training of it and the decision-making procedure. Section IV illustrates how to incorporate the gauge map technique in the architecture design to ensure the feasibility of the neural networks' predictions. Section V provides the simulation results and Section VI concludes the paper.
## II Two-stage DCOPF
In this section, we provide more details about the formulation of two-stage DCOPF problems. Consider a power network with \(N\) buses connected by \(M\) transmission lines. Without loss of generality, we assume each bus \(i\) has a generator as well as a load, and the load is uncertain. We denote the randomness in the system by \(\mathbf{\omega}\in\mathbb{R}^{N}\), which is a random vector, and the net-load at each bus \(i\) is a function of \(\mathbf{\omega}\), denoted by \(d_{i}(\mathbf{\omega})\). Note this notation allows us to capture the fact that the load can depend non-trivially on the underlying randomness. The algorithms developed in this paper are compatible with any scenario-based forecasting algorithm.
In the first stage of the problem, the exact value of \(d_{i}(\mathbf{\omega})\) is not known. Rather, we assume a forecast is available. Specifically, we adopt a scenario-based probabilistic load forecasting framework in this paper and assume a set of samples (scenarios) that is representative of \(\mathbf{\omega}\) is available [31, 32, 33, 34, 35]. It is useful to assume that a nominal load (for example, the mean of \(d_{i}(\mathbf{\omega})\)) is known in the first stage. We denote this nominal load by \(\bar{d}_{i}\), and based on the scenario forecasts and \(\bar{d}_{i}\), the system operator (SO) chooses a first-stage generation dispatching decision, denoted by \(p_{i}^{0}\). Then, once the actual demand \(d_{i}(\mathbf{\omega})\) is realized, a second-stage (recourse) decision \(p_{i}^{R}\) is determined to balance the power network.
For concreteness, we specifically consider two widely used formulations of the two-stage DCOPF problem, risk-limiting dispatch (RLD) [5] and reserve scheduling [27]. Both are two-stage stochastic linear programs with recourse, and both highlight the structure and difficulty of two-stage problems.
### _Risk-Limiting Dispatch_
The RLD problem seeks to find a first-stage dispatching decision \(p_{i}^{0}\) at each bus \(i\) that minimizes expected total cost in two stages. The second-stage decisions, \(p_{i}^{R}\), are made after the net-load is observed.
We assume that the cost of dispatching generation at bus \(i\) is \(\alpha_{i}p_{i}^{0}\) in the first stage and \(\beta_{i}[p_{i}^{R}]^{+}\) in the second stage, where \(\alpha_{i}\) and \(\beta_{i}\) are prices measured in dollars per MW ($/MW) and the notation \([z]^{+}=\max\{z,0\}\) means that only power purchasing (\(p_{i}^{R}>0\)) incurs a second-stage cost and any excess power (\(p_{i}^{R}<0\)) can be disposed of for free [5, 36, 37]. The cost minimization problem is:
\[J_{\text{rld}}^{\star}(\bar{\mathbf{d}})\triangleq\min_{\mathbf{p}^{0}} \mathbf{\alpha}^{T}\mathbf{p}^{0}+\mathbb{E}[Q(\mathbf{d}(\mathbf{\omega})- \mathbf{p}^{0};\mathbf{\beta})|\bar{\mathbf{d}}]\] (1a) s.t. \[\mathbf{p}^{0}\geq\mathbf{0}, \tag{1b}\]
where the expectation is taken with respect to the probability distribution of \(\mathbf{d}(\mathbf{\omega})\) conditioned on \(\bar{\mathbf{d}}\), and \(Q(\mathbf{d}(\mathbf{\omega})-\mathbf{p}^{0};\mathbf{\beta})\) is the second-stage cost or cost-to-go. Given the first-stage decision \(\mathbf{p}^{0}=[p_{1}^{0},\cdots,p_{N}^{0}]^{T}\) and a particular realization of
\(\mathbf{d}(\mathbf{\omega})\), the second-stage cost is given by
\[Q(\mathbf{d}(\mathbf{\omega})-\mathbf{p}^{0};\mathbf{\beta})\triangleq\min_{ \mathbf{p}^{R},\mathbf{\theta}} \mathbf{\beta}^{T}[\mathbf{p}^{R}]^{+}\] (2a) s.t. \[\mathbf{B}\mathbf{\theta}=\mathbf{p}^{R}-(\mathbf{d}(\mathbf{\omega})- \mathbf{p}^{0}) \tag{2b}\] \[-\mathbf{f}^{\max}\leq\mathbf{F}\mathbf{\theta}\leq\mathbf{f}^{\max}, \tag{2c}\]
where (2b) is the DC power flow constraints and (2c) is the line flow limit constraints. Without loss of generality, we assume bus \(1\) is the reference (slack) node and set its voltage angle to be zero. The notation \(\mathbf{\theta}\in\mathbb{R}^{N-1}\) denotes the voltage angles at non-slack buses, the matrix \(\mathbf{B}\in\mathbb{R}^{N\times(N-1)}\) maps \(\mathbf{\theta}\) to the nodal power injections, and the matrix \(\mathbf{F}\in\mathbb{R}^{M\times(N-1)}\) maps \(\mathbf{\theta}\) to the flows on all edges. See Appendix A for details on constructing \(\mathbf{B}\) and \(\mathbf{F}\).
Note that the second-stage problem (2) can be seen as a deterministic DCOPF problem with the demand \(\mathbf{d}(\mathbf{\omega})-\mathbf{p}^{0}\). Since the recourse decision \(\mathbf{p}^{R}\) is not bounded, (2) is feasible for any given demand input.
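To make the structure of (2) concrete, a minimal CVXPY sketch, with randomly generated placeholder data standing in for an actual network model, might look as follows:

```python
import cvxpy as cp
import numpy as np

# Placeholder data; in practice B, F come from the DC power-flow model.
N, M = 4, 5
rng = np.random.default_rng(0)
B = rng.standard_normal((N, N - 1))   # angles -> nodal injections
F = rng.standard_normal((M, N - 1))   # angles -> line flows
f_max = np.ones(M)
beta = np.ones(N)
residual = rng.standard_normal(N)     # d(omega) - p^0

p_R = cp.Variable(N)
theta = cp.Variable(N - 1)
cost = beta @ cp.pos(p_R)             # beta^T [p^R]^+
constraints = [B @ theta == p_R - residual,   # (2b)
               cp.abs(F @ theta) <= f_max]    # (2c)
prob = cp.Problem(cp.Minimize(cost), constraints)
print(prob.solve())                   # optimal second-stage cost
```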
We approximate the expectation in (1) using samples. Let \(\{\mathbf{\omega}^{k}\}_{k=1}^{K}\) be a collection of samples of \(\mathbf{\omega}\), and \(\{\mathbf{d}(\mathbf{\omega}^{k})\}_{k=1}^{K}\) be the collection of load realizations. We determine the first-stage decision by solving the following scenario-based problem that is a deterministic linear program:
\[\widetilde{J}_{\text{rld}}^{K}(\bar{\mathbf{d}}) \triangleq\min_{\begin{subarray}{c}\mathbf{p}^{0},\\ \{\mathbf{p}^{R}(\mathbf{\omega}^{k}),\mathbf{\theta}(\mathbf{\omega}^{k})\}_{k=1}^{K} \end{subarray}}\mathbf{\alpha}^{T}\mathbf{p}^{0}+\frac{1}{K}\sum_{k=1}^{K}\mathbf{ \beta}^{T}[\mathbf{p}^{R}(\mathbf{\omega}^{k})]^{+}\] (3a) s.t. \[\mathbf{p}^{0}\geq\mathbf{0} \tag{3b}\] \[\mathbf{B}\mathbf{\theta}(\mathbf{\omega}^{k})=\mathbf{p}^{R}(\mathbf{\omega }^{k})-(\mathbf{d}(\mathbf{\omega}^{k})-\mathbf{p}^{0}),\;\forall k\] (3c) \[-\mathbf{f}^{\max}\leq\mathbf{F}\mathbf{\theta}(\mathbf{\omega}^{k})\leq \mathbf{f}^{\max},\;\forall k \tag{3d}\]
where the second-stage decisions \(\{\mathbf{p}^{R}(\mathbf{\omega}^{k}),\mathbf{\theta}(\mathbf{\omega}^{k})\}_{k=1}^{K}\) are functions of \(\mathbf{\omega}\) and the constraints (3c)-(3d) related to the second-stage decisions need to be satisfied for every load realization \(\mathbf{d}(\mathbf{\omega}^{k})\).
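Correspondingly, the scenario-based program (3) simply stacks one copy of the recourse variables per scenario; a self-contained CVXPY sketch (again with placeholder data in place of real network parameters) is:

```python
import cvxpy as cp
import numpy as np

N, M, K = 4, 5, 50
rng = np.random.default_rng(1)
B = rng.standard_normal((N, N - 1))
F = rng.standard_normal((M, N - 1))
f_max = np.ones(M)
alpha, beta = np.ones(N), np.ones(N)
d = np.abs(rng.standard_normal((K, N)))   # load realizations d(omega^k)

p0 = cp.Variable(N, nonneg=True)          # first-stage dispatch, (3b)
p_R = cp.Variable((K, N))                 # one recourse vector per scenario
theta = cp.Variable((K, N - 1))
cost = alpha @ p0 + cp.sum(cp.pos(p_R) @ beta) / K
cons = [B @ theta[k] == p_R[k] - (d[k] - p0) for k in range(K)]   # (3c)
cons += [cp.abs(F @ theta[k]) <= f_max for k in range(K)]         # (3d)
print(cp.Problem(cp.Minimize(cost), cons).solve())
```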
### _Two-stage DCOPF with Reserve_
Sometimes the second-stage recourse decision \(\mathbf{p}^{R}\) cannot be arbitrarily positive or negative. Instead, \(\mathbf{p}^{R}\) is limited by various factors such as generator capacities or real-time (second-stage) electricity market structure. This is captured by a two-stage DCOPF where reserve services are provided to deal with the possible mismatch between the actual generation and the realized load [10, 27].
Specifically, we consider the spinning reserve service in this paper. In the first stage, in addition to choosing an initial dispatching decision \(p_{i}^{0}\) at each bus \(i\), the SO also needs to decide the up and down reserve capacities, \(\widehat{r}_{i}\) and \(\widetilde{r}_{i}\). In this way, the first-stage cost at each bus \(i\) includes both the cost of dispatching \(p_{i}^{0}\), i.e., \(\alpha_{i}p_{i}^{0}\), and of providing reserve services that is given by \(\mu_{i}(\widehat{r}_{i}+\widetilde{r}_{i})\), where \(\alpha_{i}\) and \(\mu_{i}\) are prices measured in $/MW.
The second-stage recourse decision \(p_{i}^{R}\) at each bus \(i\) is constrained by the reserve capacities, \(\widehat{r}_{i}\) and \(\widetilde{r}_{i}\). To quantify the amount by which the reserve capacities decided in the first stage might be exceeded, we define the cost of dispatching \(p_{i}^{R}\) at each bus \(i\) as a piecewise-affine function given by \(\gamma_{i}^{\text{res}}\Big{(}[p_{i}^{R}-\widehat{r}_{i}]^{+}-[p_{i}^{R}+ \widetilde{r}_{i}]^{-}\Big{)}\), where \(\gamma_{i}^{\text{res}}\) is penalty cost in $/MW and \([z]^{-}=\min\{z,0\}\). This cost function means that there would be no cost for second-stage dispatching within the reserve amounts that are allocated in the first stage.
The two-stage DCOPF with reserve scheduling can be formulated as the following stochastic program:
\[J_{\text{res}}^{\star}(\bar{\mathbf{d}})\triangleq\min_{\mathbf{p}^{0},\widehat{\mathbf{r}},\widetilde{\mathbf{r}}}\mathbf{\alpha}^{T}\mathbf{p}^{0}+\mathbf{\mu}^{T}(\widehat{\mathbf{r}}+\widetilde{\mathbf{r}})+\mathbb{E}[Q(\mathbf{d}(\mathbf{\omega})-\mathbf{p}^{0};\widehat{\mathbf{r}},\widetilde{\mathbf{r}},\mathbf{\gamma}^{\text{res}})|\bar{\mathbf{d}}]\] (4a) s.t. \[\mathbf{0}\leq\mathbf{p}^{0}\leq\mathbf{p}^{\max} \tag{4b}\] \[\mathbf{p}^{0}+\widehat{\mathbf{r}}\leq\mathbf{p}^{\max}\] (4c) \[\mathbf{p}^{0}-\widetilde{\mathbf{r}}\geq\mathbf{0}\] (4d) \[\widehat{\mathbf{r}},\;\widetilde{\mathbf{r}}\geq\mathbf{0}, \tag{4e}\]
where (4c)-(4e) constrain the up and down reserve at each bus \(i\) to be positive and no larger than the available capacities around \(p_{i}^{0}\). Given the first-stage decisions \((\mathbf{p}^{0},\widehat{\mathbf{r}},\widetilde{\mathbf{r}})\) and a particular realization of \(\mathbf{d}(\mathbf{\omega})\), the second-stage cost is given by
\[Q(\mathbf{d}(\mathbf{\omega})-\mathbf{p}^{0};\widehat{\mathbf{r}}, \widetilde{\mathbf{r}},\mathbf{\gamma}^{\text{res}})\triangleq\] \[\min_{\mathbf{p}^{R},\mathbf{\theta}} \mathbf{\gamma}^{\text{res}T}\Big{(}[\mathbf{p}^{R}-\widehat{\mathbf{r}} ]^{+}-[\mathbf{p}^{R}+\widetilde{\mathbf{r}}]^{-}\Big{)}\] (5a) s.t. \[\mathbf{B}\mathbf{\theta}=\mathbf{p}^{R}-(\mathbf{d}(\mathbf{\omega})- \mathbf{p}^{0}) \tag{5b}\] \[-\mathbf{f}^{\max}\leq\mathbf{F}\mathbf{\theta}\leq\mathbf{f}^{\max}, \tag{5c}\]
which can also be seen as a deterministic DCOPF problem with demands \(\mathbf{d}(\mathbf{\omega})-\mathbf{p}^{0}\) and the cost being the penalty imposed on the generation value if it exceeds the reserve capacities. This "penalizing deviations" technique is commonly employed by stochastic programmers to promote the feasibility of second-stage problems for any given first-stage decisions [38].
The SAA method solves the following scenario-based problem associated with (4)
\[\widetilde{J}_{\text{res}}^{K}(\bar{\mathbf{d}}) \triangleq\min_{\begin{subarray}{c}\mathbf{p}^{0},\widehat{\mathbf{r}},\widetilde{\mathbf{r}},\\ \{\mathbf{p}^{R}(\mathbf{\omega}^{k}),\mathbf{\theta}(\mathbf{\omega}^{k})\}_{k=1}^{K} \end{subarray}}\mathbf{\alpha}^{T}\mathbf{p}^{0}+\mathbf{\mu}^{T}(\widehat{\mathbf{r}}+ \widetilde{\mathbf{r}})\;+\] \[\frac{1}{K}\sum_{k=1}^{K}\Bigg{(}\mathbf{\gamma}^{\text{res}T} \Big{(}[\mathbf{p}^{R}(\mathbf{\omega}^{k})-\widehat{\mathbf{r}}]^{+}-[\mathbf{p}^{R}( \mathbf{\omega}^{k})+\widetilde{\mathbf{r}}]^{-}\Big{)}\Bigg{)}\] (6a) s.t. \[\mathbf{0}\leq\mathbf{p}^{0}\leq\mathbf{p}^{\max} \tag{6b}\] \[\mathbf{p}^{0}+\widehat{\mathbf{r}}\leq\mathbf{p}^{\max}\] (6c) \[\mathbf{p}^{0}-\widetilde{\mathbf{r}}\geq\mathbf{0}\] (6d) \[\widehat{\mathbf{r}},\;\widetilde{\mathbf{r}}\geq\mathbf{0}\] (6e) \[\mathbf{B}\mathbf{\theta}(\mathbf{\omega}^{k})=\mathbf{p}^{R}(\mathbf{\omega}^{k})-( \mathbf{d}(\mathbf{\omega}^{k})-\mathbf{p}^{0}),\forall k\] (6f) \[-\mathbf{f}^{\max}\leq\mathbf{F}\mathbf{\theta}(\mathbf{\omega}^{k})\leq \mathbf{f}^{\max},\forall k. \tag{6g}\]
### _Computational Challenges_
To have a sample set of load realizations that is representative enough of the true distribution of the random net-load, a large number of realizations are required for even a moderately sized system [39]. Therefore, although (3) and (6) are linear programs, they are often large-scale problems. In addition, since both the first and second-stage decisions depend on the mean of the scenario forecasts, \(\overline{\mathbf{d}}\), every time the set of scenarios changes, we need to re-solve (3) and (6). Even if a single instance can be solved efficiently using off-the-shelf tools such as CVXPY [40, 41] and GLPK [42], repeatedly solving large-scale linear programs can still impose considerable computational burdens.
The scale of the problems can grow quickly as the size of the system and the number of scenarios grow. Therefore, an affine policy is often used to approximate (2) and (5). However, finding a good policy that satisfies the constraints ((3c),(3d), (6f), and (6g)) can be difficult. In the next section, we present an NN-based learning architecture to enable more efficient computation.
## III Proposed Learning Algorithm
In this section, we present the learning algorithm to solve the scenario-based problems in (3) and (6). To start with, we rewrite the two-stage problem in a more compact way as follows
\[\widetilde{J}^{K}(\overline{\mathbf{d}})\triangleq \min_{\mathbf{x}} \widetilde{c}(\mathbf{x})+\widetilde{Q}^{K}(\mathbf{x};\{\mathbf{ \omega}^{k}\}_{k=1}^{K},\widetilde{\mathbf{\beta}})\] (7a) s.t. \[\mathbf{x}\in\mathcal{X} \tag{7b}\]
where \(\mathbf{x}\) denotes the first-stage decisions, which is \(\mathbf{p}^{0}\) for (3) and \((\mathbf{p}^{0},\widehat{\mathbf{r}},\widehat{\mathbf{r}})\) for (6), and the set \(\mathcal{X}\) collects all constraints that \(\mathbf{x}\) has to satisfy, i.e., (3b) or (6b)-(6e). The notation \(\widetilde{c}(\cdot)\) is the generic representation of the first-stage cost and \(\widetilde{Q}^{K}(\mathbf{x};\{\mathbf{\omega}^{k}\}_{k=1}^{K},\widetilde{\mathbf{ \beta}})\) is the estimated second-stage cost based on the set of scenarios \(\{\mathbf{\omega}^{k}\}_{k=1}^{K}\).
Here we use a simple decomposition technique such that (7) becomes much easier to work with. To be specific, if the first-stage decision \(\mathbf{x}\) is taken as given, then the second-stage cost \(\widetilde{Q}^{K}(\mathbf{x};\{\mathbf{\omega}^{k}\}_{k=1}^{K},\widetilde{\mathbf{ \beta}})\) is separable:
\[\widetilde{Q}^{K}(\mathbf{x};\{\mathbf{\omega}^{k}\}_{k=1}^{K},\widetilde{\mathbf{ \beta}})=\frac{1}{K}\sum_{k=1}^{K}\widetilde{Q}^{k}(\mathbf{\delta}_{d}(\mathbf{x };\mathbf{\omega}^{k});\mathbf{x},\widetilde{\mathbf{\beta}})\]
where we use the notation \(\mathbf{\delta}_{d}(\mathbf{x};\mathbf{\omega}^{k})\triangleq\mathbf{d}(\mathbf{\omega}^{k})-\mathbf{p}^{0}\) to represent the demands that are not balanced by the first stage when the load realization is actually \(\mathbf{d}(\mathbf{\omega}^{k})\), and \(\widetilde{Q}^{k}(\mathbf{\delta}_{d}(\mathbf{x};\mathbf{\omega}^{k});\mathbf{x},\widetilde{\mathbf{\beta}})\) is the optimal value of each scenario problem for a particular load realization \(\mathbf{d}(\mathbf{\omega}^{k})\). Each of these scenario problems can be seen as a deterministic DCOPF problem with demands \(\mathbf{\delta}_{d}(\mathbf{x};\mathbf{\omega}^{k})\) and an objective function \(\widetilde{q}(\,\cdot\,;\mathbf{x},\widetilde{\mathbf{\beta}})\) that takes \(\mathbf{x}\) and \(\widetilde{\mathbf{\beta}}\) as parameters. The deterministic DCOPF problem can be written in the following generic form:
\[\widetilde{Q}(\mathbf{\delta}_{d}(\mathbf{x};\mathbf{\omega});\mathbf{x},\widetilde{\mathbf{\beta}})\triangleq \min_{\mathbf{p}^{R},\mathbf{\theta}} \widetilde{q}(\mathbf{p}^{R};\mathbf{x},\widetilde{\mathbf{\beta}})\] (8a) s.t. \[\mathbf{B}\mathbf{\theta}=\mathbf{p}^{R}-\mathbf{\delta}_{d}(\mathbf{x} ;\mathbf{\omega}) \tag{8b}\] \[-\mathbf{f}^{\max}\leq\mathbf{F}\mathbf{\theta}\leq\mathbf{f}^{\max}. \tag{8c}\]
In a similar fashion, by exploiting the decomposable structure of (7), the proposed learning algorithm consists of two subnetworks, denoted by \(\phi^{0}\) and \(\phi^{R}\), respectively. The first subnetwork \(\phi^{0}\) learns the mapping from \(\widetilde{\mathbf{d}}\) to \(\mathbf{x}\), i.e., \(\phi^{0}:\mathbb{R}^{N}_{+}\longmapsto\mathcal{X}\), while the second one learns the mapping from the pair \((\mathbf{x},\mathbf{\delta}_{d}(\mathbf{x};\mathbf{\omega}))\) to \(\widetilde{Q}(\mathbf{\delta}_{d};\mathbf{x},\widetilde{\mathbf{\beta}})\), i.e., \(\phi^{R}:\mathcal{X}\times\Delta_{d}(\mathbf{x},\mathbf{\omega})\longmapsto[0,+ \infty]\), where \(\Delta_{d}(\mathbf{x},\mathbf{\omega})\) is the set of all possible mismatches, i.e., \(\Delta_{d}(\mathbf{x},\mathbf{\omega})=\{\mathbf{d}(\mathbf{\omega})-\mathbf{p}^{0}|( \mathbf{x},\mathbf{\omega})\in\mathcal{X}\times\Omega\}\), and \(\Omega\) is the sample space of \(\mathbf{\omega}\).
The two subnetworks can be implemented using neural networks. Once trained, these neural networks can produce solutions much faster than existing solvers. However, a key question also arises: how to make neural networks satisfy constraints, namely, how to ensure the output from \(\phi^{0}\) lies within the feasibility set \(\mathcal{X}\) and the constraints of the optimization problem in (8) are satisfied? Notably, we want to avoid steps such as projecting to the feasible set since these introduce additional optimization problems [43], which somewhat defeats the purpose of learning. This key question will be tackled in the next section, and for the rest of this section, we first treat the two subnetworks as black boxes to provide an overview of the proposed algorithm.
This algorithm includes a training process and a prediction process. The architecture used for training is shown in Fig. 1. When the learning algorithm is used in practice, i.e., in the prediction process, just the network \(\phi^{0}\) is required to predict the first-stage decisions given a scenario forecast. The reason why we need a second network \(\phi^{R}\) is that the two networks need to be trained together in order to obtain a network \(\phi^{0}\) that can predict the solution to (7) accurately. We now describe how the two networks in Fig. 1 are trained.
We use \(\mathbf{w}^{0}\) and \(\mathbf{w}^{R}\) to denote the respective parameters, i.e., the weights and biases, of the neural networks in \(\phi^{0}\) and \(\phi^{R}\). The goal of training is to learn the optimal values for \(\mathbf{w}^{0}\) and \(\mathbf{w}^{R}\). To this end, we first construct a loss function in the forward pass, and then calculate the gradients of the loss function with respect to \(\mathbf{w}^{0}\) and \(\mathbf{w}^{R}\) through the backward pass. Following that, the stochastic gradient descent (SGD) method is used to minimize the loss function.

Fig. 1: The architecture used for training in the proposed algorithm. When making decisions in real time, we only need the network \(\phi^{0}\) to predict the first-stage decisions from the given load forecast.
Suppose \(\{\bar{\mathbf{d}}^{i}\}_{i=1}^{I}\) is a batch of training data consisting of \(I\) load forecasts. The loss function is given by
\[\min_{\mathbf{w}^{0},\mathbf{w}^{R}}L(\mathbf{w}^{0},\mathbf{w}^{R})\triangleq \ \frac{1}{I}\sum_{i=1}^{I}L^{i}(\mathbf{w}^{0},\mathbf{w}^{R}), \tag{9}\]
where
\[L^{i}(\mathbf{w}^{0},\mathbf{w}^{R})\triangleq\widetilde{c}\Big{(}\phi^{0}(\bar{\mathbf{d}}^{i};\mathbf{w}^{0})\Big{)}+\frac{1}{K}\sum_{k=1}^{K}\phi^{R}\Big{(}\phi^{0}(\bar{\mathbf{d}}^{i};\mathbf{w}^{0}),\,\mathbf{\delta}_{d}\big{(}\phi^{0}(\bar{\mathbf{d}}^{i};\mathbf{w}^{0});\mathbf{\omega}^{k}\big{)};\mathbf{w}^{R}\Big{)},\]

i.e., the first-stage cost of the predicted decision plus the sample average, over the \(K\) scenarios, of the second-stage costs estimated by \(\phi^{R}\).
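A hedged PyTorch sketch of one optimization step on this loss is given below; `phi0`, `phiR`, and `first_stage_cost` are stand-in callables (our names, not the authors' exact modules), and we assume the first \(N\) entries of \(\mathbf{x}\) are \(\mathbf{p}^{0}\):

```python
import torch

def training_step(phi0, phiR, first_stage_cost, opt, d_bar, d_omega):
    """One SGD step on the batch loss (9).

    d_bar:   (I, N) batch of load forecasts
    d_omega: (I, K, N) load realizations d(omega^k) per forecast
    """
    I, K, N = d_omega.shape
    x = phi0(d_bar)                        # first-stage decisions, (I, D)
    p0 = x[:, :N]                          # p^0 block of x (assumption)
    delta = d_omega - p0.unsqueeze(1)      # d(omega^k) - p^0, (I, K, N)
    x_rep = x.unsqueeze(1).expand(-1, K, -1)
    q = phiR(x_rep, delta).squeeze(-1)     # estimated cost-to-go, (I, K)
    loss = (first_stage_cost(x) + q.mean(dim=1)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```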
### _Design of the first-stage network: \(\phi^{0}\)_
The network \(\phi^{0}\) in the RLD formulation must satisfy the non-negative orthant constraints in (3b), which can be guaranteed by using a ReLU activation after the last neural layer, and no additional transformation is needed.2 For the reserve scheduling problem, we can rewrite the constraints in (6b)-(6e) in a more compact way as \(\mathbf{x}\in[\underline{\mathbf{x}},\bar{\mathbf{x}}]\), where \(\bar{\mathbf{x}}=[\mathbf{p}^{\max T},(\mathbf{p}^{\max}-\mathbf{p}^{0})^{T},\mathbf{p}^{0T}]^{T}\) and \(\underline{\mathbf{x}}=[\mathbf{0}^{T},\mathbf{0}^{T},\mathbf{0}^{T}]^{T}\). In this way, the constraints in (6b)-(6e) can be treated as axis-aligned rectangular constraints.
Footnote 2: The ReLU activation function is \(\max(x,0)\).
To enforce such axis-aligned rectangular constraints, we use a Tanh activation on the last neural layer before the output and denote the output as \(\mathbf{u}\). The \(\tanh\) function has a range between \(-1\) and \(1\), and we have \(\mathbf{u}\in\mathbb{B}_{\infty}\), where \(\mathbb{B}_{\infty}\) is the unit ball with \(\ell_{\infty}\) norm given by \(\mathbb{B}_{\infty}\triangleq\{\mathbf{z}\in\mathbb{R}^{n}|-1\leq z_{i}\leq 1,\forall i\}\). Next, we apply the following scaling and translating operations to transform \(\mathbf{u}\) to a feasible solution that satisfies (6b)-(6e):
\[x_{i}=\frac{1}{2}(u_{i}+1)(\bar{x}_{i}-\underline{x}_{i})+\underline{x}_{i}, \forall i. \tag{12}\]
We provide a diagram in Fig. 2 to illustrate the network architecture of \(\phi^{0}\) for each of the problems in (3) and (6).
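In code, the transformation (12) is a one-liner; a minimal PyTorch sketch (the names are ours):

```python
import torch

def to_box(u, x_lo, x_hi):
    """Map Tanh outputs u in [-1, 1]^n onto the box [x_lo, x_hi] via (12)."""
    return 0.5 * (u + 1.0) * (x_hi - x_lo) + x_lo

u = torch.tanh(torch.randn(6))
x = to_box(u, x_lo=torch.zeros(6), x_hi=2.0 * torch.ones(6))
assert (x >= 0).all() and (x <= 2.0).all()
```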
### _Network Design of \(\phi^{R}\)_
The network architecture design for \(\phi^{R}\) is not as straightforward as for \(\phi^{0}\) because the constraints in (8b)-(8c) can not be enforced by simply scaling and translating the neural layers' outputs. Indeed, (8b)-(8c) delineate a high-dimensional polyhedral set in terms of \(\mathbf{\theta}\). To see this, we can use the power flow equations in (8b) to express the recourse variables \(\mathbf{p}^{R}\) as an affine function of \(\mathbf{\theta}\). The feasibility of \(\mathbf{\theta}\) can be expressed as the following polyhedral set \(\Theta\):
\[\mathbf{\theta}\in\Theta\triangleq\{\mathbf{\theta}\in\mathbb{R}^{N-1}:\ \widetilde{\mathbf{F}}\mathbf{\theta}\leq\widetilde{\mathbf{f}}\} \tag{13}\]
where \(\widetilde{\mathbf{F}}=[\mathbf{F}^{T},-\mathbf{F}^{T}]^{T}\in\mathbb{R}^{2M\times(N-1)}\) and \(\widetilde{\mathbf{f}}=[\mathbf{f}^{\max T},\mathbf{f}^{\max T}]^{T}\in\mathbb{R}^{2M}\). Next, we describe the architecture design of \(\phi^{R}\) to transform the output of neural layers to a point within \(\Theta\).
Concretely, we again use a Tanh activation function on the last neural layer and denote the output from it by \(\mathbf{u}\), which satisfies \(\mathbf{u}\in\mathbb{B}_{\infty}\) as we have discussed. Then we utilize the _gauge map_ technique [29] to fulfill the transformation. Particularly, the gauge map can establish the equivalence between two C-sets using the gauge functions associated with them. We give the definitions of C-sets and gauge functions below, and we will also show that both \(\mathbb{B}_{\infty}\) and \(\Theta\) are C-sets.
**Definition 1** (C-set [46]).: _A C-set is a convex and compact subset of \(\mathbb{R}^{n}\) including the origin as an interior point._
By Definition 1, the unit hypercube \(\mathbb{B}_{\infty}\) is a C-set. To show that the polyhedral set \(\Theta\) also satisfies Definition 1, we first note that the origin is an interior point of \(\Theta\). Regarding the compactness of \(\Theta\), we provide the following theorem.
**Theorem 4.1**.: _The polyhedral set \(\Theta\) given by (13) is bounded._
The proof of Theorem 4.1 is given in Appendix B. Together, we can conclude that \(\Theta\) is also a C-set. Before describing the gauge transformation between \(\mathbb{B}_{\infty}\) and \(\Theta\), we first introduce the concept of the gauge function associated with a C-set.
**Definition 2** (Gauge function [46]).: _The gauge function associated with a C-set \(\mathcal{P}\) is a mapping given by \(g_{\mathcal{P}}:\mathbb{R}^{n}\longmapsto[0,+\infty]\), given by_
\[g_{\mathcal{P}}(\mathbf{z})=\min\{\lambda\geq 0:\mathbf{z}\in\lambda\mathcal{P}\},\qquad\mathbf{z}\in\mathbb{R}^{n}.\]
**Proposition 1**.: _If C-set \(\mathcal{P}\) is a polyhedral set of the form_
\[\mathcal{P}=\{\mathbf{z}\in\mathbb{R}^{n}\ |\ \mathbf{A}\mathbf{z}\leq\mathbf{b}\},\qquad\mathbf{A}\in\mathbb{R}^{m\times n},\ \mathbf{b}\in\mathbb{R}^{m},\]
_then the gauge function associated with it is_
\[g_{\mathcal{P}}(\mathbf{z})=\max_{i=1,\cdots,m}\big{\{}\frac{\mathbf{a}_{i}^{T} \mathbf{z}}{b_{i}}\big{\}},\]
_where \(\mathbf{a}_{i}\) is the \(i\)-th row of \(\mathbf{A}\) and \(b_{i}\) is the \(i\)-th element of \(\mathbf{b}\)._
The proof of Proposition 1 is provided in Appendix C. By using the gauge function defined in Definition 2, we can express the gauge map as follows.
**Definition 3** (gauge map [29]).: _The gauge map between any two C-sets \(\mathcal{P}\) and \(\mathcal{Q}\) is a bijection \(G:\mathcal{P}\longmapsto\mathcal{Q}\) given by_
\[G(\mathbf{z}|\mathcal{P},\mathcal{Q})=\frac{g_{\mathcal{P}}(\mathbf{z})}{g_{ \mathcal{Q}}(\mathbf{z})}\mathbf{z}.\]
From this definition, the gauge map between \(\mathbb{B}_{\infty}\) and \(\Theta\) can be expressed as
\[G(\mathbf{u}|\mathbb{B}_{\infty},\Theta)=\frac{\|\mathbf{u}\|_{\infty}}{g_{ \Theta}(\mathbf{u})}\mathbf{u}, \tag{14}\]
where \(\|\mathbf{u}\|_{\infty}\) is the gauge of \(\mathbf{u}\) with respect to \(\mathbb{B}_{\infty}\), namely, \(g_{\mathbb{B}_{\infty}}(\mathbf{u})=\|\mathbf{u}\|_{\infty}\), which directly follows from Proposition 1. Note that \(g_{\Theta}(\mathbf{u})\) can also be calculated using Proposition 1 since \(\Theta\) is a polyhedral C-set.
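Putting Proposition 1 and (14) together, the gauge map for \(\Theta\) reduces to a few tensor operations. The following PyTorch sketch is our own illustration, assuming \(\widetilde{\mathbf{f}}>\mathbf{0}\) so that the target set is a C-set; every operation here is differentiable almost everywhere, consistent with Theorem 4.2 below:

```python
import torch

def gauge_map(u, F_tilde, f_tilde, eps=1e-12):
    """Map u in the unit l_inf ball onto {theta : F_tilde @ theta <= f_tilde}.

    Implements (14) with the polyhedral gauge of Proposition 1; assumes
    f_tilde > 0 so the origin is an interior point of the target set.
    """
    g_box = u.abs().amax(dim=-1, keepdim=True)                     # ||u||_inf
    g_theta = ((u @ F_tilde.T) / f_tilde).amax(dim=-1, keepdim=True)
    return g_box / g_theta.clamp_min(eps) * u
```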
Fig. 2: In the RLD problem, non-negative orthant constraints can be enforced using ReLU activation in the last neural layer. For the reserve scheduling problem, the Tanh activation is used at the last neural layer and then the hypercubic output is passed through the transformation layers in (12) to obtain a feasible solution.
Using (14), for every point in \(\mathbb{B}_{\infty}\), we are able to find its one-to-one correspondence (image) in \(\Theta\). To better see how the gauge map works, we provide an illustrative example in Fig. 3 to transform a point from \(\mathbb{B}_{\infty}\) to its image in a randomly generated polyhedral C-set.
Once a feasible solution of \(\mathbf{\theta}\) is obtained, the values for \(\mathbf{p}^{R}\) and the output of \(\phi^{R}\), i.e., the objective value of the deterministic DCOPF in (8), can be easily computed. We summarize the network design of \(\phi^{R}\) in Fig. 4.
Lastly, we discuss the differentiability properties of the function in (14) since training the network architecture in Fig. 1 requires a backward pass that can calculate the gradients in (10). This is a nuanced point since both (14) and the layers used in neural networks are not everywhere differentiable. Here, we show that the non-differentiability introduced by the gauge map is no more severe than the non-differentiability that is already present in the neural network activation functions, and the end-to-end policy is differentiable almost everywhere:
**Theorem 4.2**.: _Let \(\mathcal{P}\) and \(\mathcal{Q}\) be polyhedral C-sets. Standard automatic differentiation procedures, when applied to the gauge map \(G(\cdot\mid\mathcal{P},\mathcal{Q})\), will return the gradient of \(G(\cdot\mid\mathcal{P},\mathcal{Q})\) for almost all \(\mathbf{z}\in\mathcal{P}\)._
Proof.: The set \(\mathcal{P}\) can be partitioned such that the gauge map is a different analytic function on each region of the partition (excluding the origin). By setting \(G(0\mid\mathcal{P},\mathcal{Q}):=0\), we obtain a function for which standard automatic differentiation procedures will compute the gradient of \(G(\cdot\mid\mathcal{P},\mathcal{Q})\) at all \(\mathbf{z}\in\mathcal{P}\) except possibly on a set of measure zero [47]. Details are in Appendix E.
Theorem 4.2 shows that the gauge map is differentiable with respect to the output of the neural layers, and hence enables the computation of backpropagation gradients in (10) and the training of the architecture in Fig. 1. The effectiveness of the proposed learning architecture is validated on a modified IEEE 118-bus system as shown in the next section.
## V Experimental results
In this section, we provide the experimental results of using the proposed algorithm in Table I to solve two-stage DCOPF problems. Particularly, we consider two application contexts, namely, the risk-limiting dispatch and reserve scheduling problems on the IEEE 118-bus system (the detailed configuration of the system can be found in [48]), and use our algorithm to learn the first-stage solutions to the scenario-based problems in (3) and (6), respectively. We implement our learning algorithm in Google Colab [49] using Pytorch and all codes and data of our experiments are available at [https://github.com/ling-zhang-linnet/two-stage-dcopf-neural-solver.git](https://github.com/ling-zhang-linnet/two-stage-dcopf-neural-solver.git).
**Network architecture:** We use a 4-layer convolutional neural network (two convolutional layers followed by two fully connected layers) for both \(\phi^{0}\) and \(\phi^{R}\) in all experiments. A dropout layer with a rate of \(0.5\) is used on each of the fully connected layers before the output. The network architectures are trained offline using the Adam optimizer [50] with the default learning rate. The hidden-layer sizes are tuned for each application context, and the details can be found in our public code repository.
**Data generation:** There are two types of data in our algorithm. The first type is the load forecasts. They are inputs to the learning algorithm and comprise the datasets on which we train and test the network architecture. In both application contexts, the training dataset consists of \(50000\) load forecasts and the testing dataset of \(100\). The second type of data is the load realizations that are used to solve the scenario-based problems (estimate the expected second-stage cost) or to evaluate the solution quality through ex-post _out-of-sample_ simulations [25]. In our algorithm, \(20\) load realizations are sampled independently at each iteration to provide an estimate of the expected second-stage cost during training, and \(500\) are used to evaluate the solution quality via out-of-sample simulations.
Both types of data are generated using the Gaussian distribution but with different choices of the mean and standard deviation. When generating load forecasts, we use the nominal load of the system as the mean and set the standard deviation to be \(10\%\) of it. The load realizations are generated specific to each instance of load forecasts, that is, we use the forecast as the mean and set the standard deviation as \(5\%\) of it to generate samples of realized load for each instance.
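A NumPy sketch of this two-level sampling scheme (the nominal load vector below is a placeholder, not the actual 118-bus data):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 118
nominal = np.full(N, 50.0)                 # placeholder nominal load (MW)

# Load forecasts: mean = nominal load, std = 10% of it.
forecasts = rng.normal(nominal, 0.10 * nominal, size=(50_000, N))

# Realizations for one forecast instance: mean = forecast, std = 5% of it.
d_bar = forecasts[0]
realizations = rng.normal(d_bar, 0.05 * np.abs(d_bar), size=(20, N))
```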
Fig. 3: An illustrative example of the gauge map from \(\mathbb{B}_{\infty}\) to a polyhedral C-set \(\mathcal{Q}\). The \(1\), \(\frac{3}{4}\), \(\frac{1}{2}\) and \(\frac{1}{5}\) level curves of each set are plotted in blue. Each point in \(\mathbb{B}_{\infty}\) is transformed to its image (marked using the same color) in \(\mathcal{Q}\) on the same level curve.

Fig. 4: The hypercubic output from the neural layers is transformed to a feasible solution for \(\mathbf{\theta}\) by the gauge map. Then the value of the objective function can be easily computed.

**Baseline solvers:** In both application contexts, we apply the CVXPY solver [40] to solve the scenario-based problems in (3) and (6) on the same testing dataset as used in our method to provide a benchmark. We also compare the solutions produced by our method to those obtained by solving (3) and (6) approximately using the affine policy method, which is a widely applied approximation to make two-stage stochastic programs tractable [51]. The details on the affine policies used in each application are given in Appendix D.
**Evaluation procedure:** To compare the performance of different methods, we first use them to obtain their respective first-stage solutions for each instance in the testing dataset, and then we use the commonly adopted out-of-sample simulations to evaluate the solution quality. To do this, for each method and each test instance, we fix the value of the obtained first-stage solutions (and hence the first-stage cost), and solve the deterministic DCOPF problem in (8) \(500\) times using the same set of load realizations. By summing up the average cost of these \(500\) DCOPF problems and the fixed first-stage cost, we obtain the out-of-sample value of the total cost.
We calculate the out-of-sample values of the total cost for all test instances and use the average value as a metric to measure the method's performance across different instances. We also report the average solving time of each method to obtain the first-stage solutions to show the trade-off between the solution quality and computational tractability.
### _Application I: Risk-Limiting Dispatch_
The results of using different methods to solve the risk-limiting dispatch problem in (3) on the 118-bus system are provided in Table II. The average total costs of different methods are represented as ratios relative to the average total cost obtained with the CVXPY solver. From Table II, we can see that our learning method is faster than the CVXPY solver by four orders of magnitude, while the difference in average total cost is less than \(0.8\%\). In comparison, using the affine policy reduces the average running time by half; however, it also performs \(50\%\) worse. This is because the affine policy generalizes poorly when applied to unseen instances of load forecasts.
### _Application II: Reserve Scheduling_
We summarize the results of using different methods to solve the reserve scheduling problem in (6) on the 118-bus system in Table III. All reported total costs are expressed as the ratio to the average total cost achieved by applying the CVXPY solver. Compared to the risk-limiting dispatch problem, the reserve scheduling problem has more decision variables and constraints and thus is more complicated: it takes the CVXPY solver minutes to solve a single instance. By using an affine policy for the recourse dispatch, the average running time per instance can be reduced by an order of magnitude, but the average total cost also increases by an order of magnitude due to poor generalization. In particular, the solutions found by the affine policy method can become infeasible, thereby incurring very high penalties. In contrast, our learning method not only learns to provide good solution quality (within \(10\%\) of the benchmark produced by the CVXPY solver) but is also able to speed up the computation by four orders of magnitude.
## VI Conclusions and Future work
This paper presents a learning algorithm to solve two-stage DCOPF problems efficiently. The algorithm uses two neural networks, one for each stage, to make the dispatch decisions. The gauge map technique is built into the network architecture design so that the constraints in two-stage DCOPF problems are satisfied explicitly for all load realizations. Our numerical results on the IEEE 118-bus system validate the effectiveness of our algorithm, showing that it can speed up computation by orders of magnitude compared to an off-the-shelf solver while still learning high-quality solutions. A direction of future work is to generalize our learning algorithm to non-convex programs, for example, the AC optimal power flow problem.
|
2308.01904 | DETR Doesn't Need Multi-Scale or Locality Design | This paper presents an improved DETR detector that maintains a "plain"
nature: using a single-scale feature map and global cross-attention
calculations without specific locality constraints, in contrast to previous
leading DETR-based detectors that reintroduce architectural inductive biases of
multi-scale and locality into the decoder. We show that two simple technologies
are surprisingly effective within a plain design to compensate for the lack of
multi-scale feature maps and locality constraints. The first is a box-to-pixel
relative position bias (BoxRPB) term added to the cross-attention formulation,
which well guides each query to attend to the corresponding object region while
also providing encoding flexibility. The second is masked image modeling
(MIM)-based backbone pre-training which helps learn representation with
fine-grained localization ability and proves crucial for remedying dependencies
on the multi-scale feature maps. By incorporating these technologies and recent
advancements in training and problem formation, the improved "plain" DETR
showed exceptional improvements over the original DETR detector. By leveraging
the Object365 dataset for pre-training, it achieved 63.9 mAP accuracy using a
Swin-L backbone, which is highly competitive with state-of-the-art detectors
which all heavily rely on multi-scale feature maps and region-based feature
extraction. Code is available at https://github.com/impiga/Plain-DETR . | Yutong Lin, Yuhui Yuan, Zheng Zhang, Chen Li, Nanning Zheng, Han Hu | 2023-08-03T17:59:04Z | http://arxiv.org/abs/2308.01904v1 | # DETR Doesn't Need Multi-Scale or Locality Design
###### Abstract
This paper presents an improved DETR detector that maintains a "plain" nature: using a single-scale feature map and global cross-attention calculations without specific locality constraints, in contrast to previous leading DETR-based detectors that reintroduce architectural inductive biases of multi-scale and locality into the decoder. We show that two simple technologies are surprisingly effective within a plain design to compensate for the lack of multi-scale feature maps and locality constraints. The first is a box-to-pixel relative position bias (BoxRPB) term added to the cross-attention formulation, which well guides each query to attend to the corresponding object region while also providing encoding flexibility. The second is masked image modeling (MIM)-based backbone pre-training which helps learn representation with fine-grained localization ability and proves crucial for remedying dependencies on the multi-scale feature maps. By incorporating these technologies and recent advancements in training and problem formation, the improved "plain" DETR showed exceptional improvements over the original DETR detector. By leveraging the Object365 dataset for pre-training, it achieved 63.9 mAP accuracy using a Swin-L backbone, which is highly competitive with state-of-the-art detectors which all heavily rely on multi-scale feature maps and region-based feature extraction. Code will be available at [https://github.com/impiga/Plain-DETR](https://github.com/impiga/Plain-DETR).
## 1 Introduction
The recent revolutionary advancements in natural language processing highlight the importance of keeping task-specific heads or decoders as general, simple, and lightweight as possible, and shifting main efforts towards building more powerful large-scale foundation models [37, 11, 2]. However, the computer vision community often continues to focus heavily on the tuning and complexity of task-specific heads, resulting in designs that are increasingly heavy and complex.
The development of DETR-based object detection methods follows this trajectory. The original DETR approach [4] is impressive in that it discarded complex and domain-specific designs such as multi-scale feature maps and region-based feature extraction that require a dedicated understanding of the specific object detection problem. Yet, subsequent developments [55, 54] in the field have reintroduced these designs, which do improve training speed and accuracy but also contravene the principle of "fewer inductive biases" [13].
In this work, we aim to improve upon the original DETR detector, while preserving its "plain" nature: _no multi-scale feature maps_, _no locality design for cross-attention calculation_. This is challenging as object detectors need to handle objects of varying scales and locations. Despite the latest improvements in training and problem formulation, as shown in Table 1, the plain DETR method still lags greatly behind state-of-the-art detectors that utilize multi-scale feature maps and regional feature extraction design.
So, how can we compensate for these architectural "inductive biases" in addressing multi-scale and arbitrarily located objects? Our exploration found that two simple technologies, though not entirely new, were surprisingly effective in this context: box-to-pixel relative position bias (BoxRPB) and masked image modeling (MIM) pre-training. BoxRPB is inspired by the relative position bias (RPB) term in vision Transformers [34, 33] which encodes the geometric relationship between pixels and enhances translation invariance. BoxRPB extends RPB to encode the geometric relationship between 4\(d\)- boxes and 2\(d\)- pixels. We also present an axial decomposition approach for efficient computation, with no loss of accuracy compared to using the full term. Our experiments show that the BoxRPB term can well guide the cross-attention computation to be well dedicated to individual objects (see Figure 4), and it dramatically improves detection accuracy by +8.9 mAP over a plain DETR baseline of 37.2 mAP on the COCO benchmark (see Table 2).

Figure 1: We improve the plain DETR detectors, which rely on global cross-attention calculation and single-scale (s.s.) feature maps, by huge margins, using both Swin-S and Swin-L backbones. It makes plain DETRs as competitive as the present leading DETR detectors based on local cross-attention and multi-scale (m.s.) feature maps.
The utilization of MIM pre-training is another crucial technology in enhancing the performance of plain DETR. Our results also demonstrate a significant improvement of +7.4 mAP over the plain DETR baseline (see Table 2), which may be attributed to its fine-grained localization capability [49]. While MIM pre-training has been shown to moderately improve the performance of other detectors [20, 50], its impact in plain settings is profound. Furthermore, the technology has proven to be a key factor in eliminating the necessity of using multi-scale feature maps from the backbones, thereby expanding the findings in [28, 15] to detectors that utilize hierarchical backbones or single-scale heads.
By incorporating these technologies and the latest improvements in both training and problem formulation, our improved "plain" DETR has demonstrated exceptional improvements over the original DETR detector, as illustrated in Figure 1. Furthermore, our method achieved an accuracy of 63.9 mAP when utilizing the Object365 dataset for pre-training, making it highly competitive with state-of-the-art object detectors that rely on multi-scale feature maps and region-based feature extraction techniques, such as cascade R-CNN [33] and DINO [54], among others.
Beyond these outcomes, our methodology exemplifies how to minimize the architectural "inductive bias" when designing an effective task-specific head or decoder, as opposed to relying on detection-specific multi-scale and localized designs. Our study hopes to inspire future research on using generic plain decoders, such as that of DETR, for a wider range of visual problems with minimal effort, thus allowing the field to shift more energy to developing large foundation visual models, similar to what occurs in the field of natural language processing.
## 2 A Modernized Plain DETR Baseline
### A Review of the Original DETR
The original DETR detector [4] consists of three sub-networks:
* _A backbone network_\(\mathcal{F}_{b}\) to extract image features from an image. We denote the input image as \(\mathbf{I}\!\in\!\mathbb{R}^{\mathbf{H}\times\mathbf{W}\times 3}\). The backbone network can provide multi-scale feature maps \(\mathbf{C}^{2},\mathbf{C}^{3},\mathbf{C}^{4},\mathbf{C}^{5}\) if a conventional ConvNet, e.g., ResNet [22], is used. The spatial resolutions are typically \(1/4^{2}\), \(1/8^{2}\), \(1/16^{2}\), and \(1/32^{2}\) of the input image. The original DETR detector used the mainstream backbone architecture at the time, ResNet, either in its original form or as a variant with a dilated stage 5. The mainstream backbone network has since evolved to vision Transformers, which we use in our experiments, e.g., the Swin Transformer [34].
* _A Transformer encoder_\(\mathcal{F}_{e}\) to enhance the image features. It is applied on \(\mathbf{P}^{5}\in\mathbb{R}^{\frac{\mathbf{HW}}{32^{2}}\times\mathbf{C}}\) (\(\mathbf{C}=256\)), obtained via a linear projection on \(\mathbf{C}^{5}\). The Transformer encoder usually consists of several stacked Transformer blocks, i.e., 6 in the original DETR.
* _A global Transformer decoder_\(\mathcal{F}_{d}\) to decode object bounding boxes from the image feature map using a set of randomly initialized object queries \(\mathbf{Q}=\{\mathbf{q}_{0},\mathbf{q}_{1},\cdots,\mathbf{q}_{n}\}\). The Transformer decoder also usually consists of multiple layers, with each layer including a self-attention block, a cross-attention block, and a feed-forward block. Each of the decoder layers will produce a set of objects with labels and bounding boxes, driven by a set matching loss.
The DETR framework possesses several merits, including: 1) Conceptually straightforward and generic in applicability. It views object detection as a pixel-to-object "translation" task, with a generic notion of decoding image pixels into problem targets. 2) Requiring minimal domain knowledge, such as custom label assignments and hand-designed non-maximum suppression, due to the use of an end-to-end set matching loss. 3) Being plain, avoiding domain-specific multi-scale feature maps and region-based feature extraction.
In the following, we will first build an enhanced DETR-based detector by incorporating recent advancements regarding both training and problem formulation, while maintaining the above nice merits.
### An Enhanced Plain DETR Baseline
**Basic setup.** Our basic setup mostly follows the original DETR framework, except for the following adaptations: 1) We use a stronger Swin-T backbone, instead of the original ResNet50 backbone; 2) We create a feature map of \(\mathbf{P}_{4}\) from \(\mathbf{C}_{5}\) by deconvolution, instead of adding dilation operations to the last stage of the backbone, for simplicity purpose. 3) We set the number of queries as 300, and the dropout ratio of the Transformer decoder as 0. 4) We use \(1\times\) scheduler
settings (12 epochs) for efficient ablation study. As shown in Table 1, this basic setup produces a 22.5 mAP on COCO val.
In the following, we incorporate some recent advancements in training and problem formulation into the basic setup, and gradually improve the detection accuracy to 37.2 mAP, as shown in Table 1.
**Merging Transformer encoder into the backbone.** The backbone network and Transformer encoder serve the same purpose of encoding image features. We discovered that by utilizing a Vision Transformer backbone, we are able to consolidate the computation budget of the Transformer encoder into the backbone, with slight improvement, probably because more parameters are pre-trained. Specifically, we employed a Swin-S backbone and removed the Transformer encoder. This method resulted in similar computation FLOPs compared to the original Swin-T plus 6-layer Transformer encoder. This approach simplifies the overall DETR framework to consist of only a backbone (encoder) and a decoder network.
**Focal loss for better classification**. We follow [55] to utilize focal loss [30] to replace the default cross-entropy loss, which improves the detection accuracy significantly from 23.1 mAP to 31.6 mAP.
**Iterative refinement.** We follow the iterative refinement scheme [43, 55, 3] to make each decoder layer predict the box delta over the latest bounding box produced by the previous decoder layer, unlike the original DETR that uses independent predictions within each Transformer decoder layer. This strategy improves the detection accuracy by +1.5 mAP to reach 33.1 mAP.
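A common way to realize such refinement, sketched below in PyTorch as our own illustration (the authors' exact layer may differ), is to predict a delta in inverse-sigmoid space over the previous layer's normalized boxes:

```python
import torch

def inverse_sigmoid(x, eps=1e-5):
    x = x.clamp(eps, 1 - eps)
    return torch.log(x / (1 - x))

def refine_boxes(prev_boxes, delta):
    """prev_boxes: (n_query, 4) normalized boxes from the previous decoder
    layer; delta: (n_query, 4) offsets predicted by the current layer."""
    return torch.sigmoid(inverse_sigmoid(prev_boxes) + delta)
```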
**Content-related query.** We follow [55] to generate object queries based on image content. The top 300 most confident predictions are selected as queries for the subsequent decoding process. A set matching loss is used for object query generation, thereby maintaining the merit of no domain-specific label assignment strategy. This modification resulted in a +0.9 mAP improvement in detection accuracy, reaching 34.0 mAP.
**Look forward twice.** We incorporate the look forward twice scheme [54, 26] to take advantage of the refined box information from previous Transformer decoder layers, thereby more effectively optimizing the parameters across adjacent Transformer decoder layers. This modification yields +0.8 mAP improvements.
**Mixed query selection.** This method [54] combines the static content queries with image-adaptive position queries to form better query representations. It yields +0.4 mAP improvements.
**Hybrid matching.** The original one-to-one set matching is less effective for training positive samples. Several methods improve this efficacy through an auxiliary one-to-many set matching loss [26, 6, 27]. We opted for the hybrid matching approach [26], as it preserves the advantage of requiring no additional hand-designed label assignment. This modification resulted in a +2.0 mAP improvement in detection accuracy, achieving a final 37.2 mAP.
## 3 Box-to-Pixel Relative Position Bias
In this section, we introduce a simple technology, box-to-pixel relative position bias (BoxRPB), which proves critical in compensating for the lack of multi-scale features and explicit local cross-attention calculations.
The original DETR decoder adopts a standard cross-attention computation:
\[\mathbf{O}=\mathrm{Softmax}(\mathbf{QK}^{\mathsf{T}})\mathbf{V}+\mathbf{X}, \tag{1}\]
where \(\mathbf{X}\) and \(\mathbf{O}\) are the input and output features of each object query, respectively; \(\mathbf{Q}\), \(\mathbf{K}\) and \(\mathbf{V}\) are the query, key, and value features, respectively.
As will be shown in Figure 4, the original cross-attention formulation often attends to irrelevant image areas within a plain DETR framework. We conjecture that this may be one reason for its much lower accuracy compared to frameworks with multi-scale and explicit locality designs. Inspired by the success of pixel-to-pixel relative position bias for vision Transformer architectures [34, 33], we explore the use of box-to-pixel relative position bias (BoxRPB) for the cross-attention calculation:
\[\mathbf{O}=\mathrm{Softmax}(\mathbf{QK}^{\mathsf{T}}+\mathbf{B})\mathbf{V}+ \mathbf{X}, \tag{2}\]
where \(\mathbf{B}\) is the relative position bias determined by the geometric relationship between boxes and pixels.
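A minimal single-head sketch of this biased cross-attention, with illustrative tensor shapes of our choosing (the actual implementation is multi-head and batched):

```python
import torch

def box_biased_cross_attention(Q, K, V, X, B):
    """Cross-attention of Eq. (2): the bias B is added to the attention
    logits before the softmax, steering each query towards its box region.

    Q, X: (Kq, C) query / input features, one row per object query.
    K, V: (HW, C) key / value features over the flattened image pixels.
    B:    (Kq, HW) box-to-pixel relative position bias.
    """
    attn = torch.softmax(Q @ K.T + B, dim=-1)  # (Kq, HW) attention map
    return attn @ V + X                        # residual connection as in Eq. (2)

Kq, HW, C = 300, 32 * 32, 256
out = box_biased_cross_attention(torch.randn(Kq, C), torch.randn(HW, C),
                                 torch.randn(HW, C), torch.randn(Kq, C),
                                 torch.zeros(Kq, HW))
print(out.shape)  # torch.Size([300, 256])
```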
Different from the original relative position bias (RPB), which is defined on \(2d\) relative positions, the BoxRPB
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c} MTE & FL & IR & TS & LFT & MQS & HM & AP \\ \hline ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & \(22.5\) \\ ✓ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & \(23.1\) \\ ✓ & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ & \(31.6\) \\ ✓ & ✓ & ✓ & ✗ & ✗ & ✗ & ✗ & \(33.1\) \\ ✓ & ✓ & ✓ & ✓ & ✗ & ✗ & ✗ & \(34.0\) \\ ✓ & ✓ & ✓ & ✓ & ✓ & ✗ & ✗ & \(34.8\) \\ ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✗ & \(35.2\) \\ ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & \(\mathbf{37.2}\) \\ \end{tabular}
\end{table}
Table 1: **Preliminary ablation results on the effect of each factor that is used to modernize plain DETR.** MTE: merging the Transformer encoder. FL: classification loss as a focal loss. IR: Iterative refinement. TS: two-stage. LFT: look forward twice. MQS: mixed query selection. HM: hybrid matching.
needs to handle a larger \(4d\) geometric space. In the following, we introduce two implementation variants.
**A naive BoxRPB implementation.** We adapt the continuous RPB method [33] to compute the \(4d\) box-to-pixel relative position bias. The original continuous RPB method [33] produces the bias term for each relative position configuration by a meta-network applied to the corresponding \(2d\) relative coordinates. When extending this method to BoxRPB, we use the top-left and bottom-right corners to represent a box, and use the relative positions between these corner points and the image pixel points as input to the meta-network. Denoting the relative coordinates as \((\Delta\mathbf{x}_{1},\Delta\mathbf{y}_{1})\in\mathbb{R}^{\mathsf{K}\times \mathsf{H}\times\mathsf{W}\times 2}\) and \((\Delta\mathbf{x}_{2},\Delta\mathbf{y}_{2})\in\mathbb{R}^{\mathsf{K}\times \mathsf{H}\times\mathsf{W}\times 2}\), the box-to-pixel relative position bias can be defined as:
\[\mathbf{B}=\mathrm{MLP}(\Delta\mathbf{x}_{1},\Delta\mathbf{y}_{1},\Delta \mathbf{x}_{2},\Delta\mathbf{y}_{2}), \tag{3}\]
where \(\mathbf{B}\) has a shape of \(\mathbb{R}^{\mathsf{K}\times\mathsf{HW}\times\mathsf{M}}\), with \(\mathsf{M}\) denoting the number of attention heads, \(\mathsf{K}\) the number of predicted bounding boxes, and \(\mathsf{W}\), \(\mathsf{H}\) the width and height of the output feature maps; the MLP network consists of two linear layers: \(\mathrm{Linear}\to\mathrm{ReLU}\to\mathrm{Linear}\). The input/output shapes of these two linear layers are \(\mathsf{K}\times\mathsf{H}\times\mathsf{W}\times 4\)\(\rightarrow\)\(\mathsf{K}\times\mathsf{H}\times\mathsf{W}\times 256\) and \(\mathsf{K}\times\mathsf{H}\times\mathsf{W}\times 256\)\(\rightarrow\)\(\mathsf{K}\times\mathsf{H}\times\mathsf{W}\times\mathsf{M}\), respectively.
Our experiments show that this naive implementation already performs very effectively, as shown in Table 3(a). However, it consumes a large GPU computation and memory budget and is thus impractical.
**A decomposed BoxRPB implementation.** Now, we present a more efficient implementation of BoxRPB. Instead of directly computing the bias term for the \(4d\) input, we consider decomposing the bias computation into two terms:
\[\mathbf{B}=\mathrm{unsqueeze}(\mathbf{B}_{x},1)+\mathrm{unsqueeze}( \mathbf{B}_{y},2), \tag{4}\]
where \(\mathbf{B}_{x}\in\mathbb{R}^{\mathsf{K}\times\mathsf{W}\times\mathsf{M}}\) and \(\mathbf{B}_{y}\in\mathbb{R}^{\mathsf{K}\times\mathsf{H}\times\mathsf{M}}\) are the biases regarding the \(x\)-axis and \(y\)-axis, respectively. They are computed as:
\[\mathbf{B}_{x}=\mathrm{MLP}_{1}(\Delta\mathbf{x}_{1},\Delta\mathbf{x}_{2}), \quad\mathbf{B}_{y}=\mathrm{MLP}_{2}(\Delta\mathbf{y}_{1},\Delta\mathbf{y}_ {2}), \tag{5}\]
The overall process of the decomposed BoxRPB implementation is also illustrated in Figure 2. The input/output shapes of the two linear layers within \(\mathrm{MLP}_{1}\) are: \(\mathsf{K}\times\mathsf{W}\times 2\)\(\rightarrow\)\(\mathsf{K}\times\mathsf{W}\times 256\) and \(\mathsf{K}\times\mathsf{W}\times 256\)\(\rightarrow\)\(\mathsf{K}\times\mathsf{W}\times\mathsf{M}\), respectively. Similarly, the input/output shapes for the two linear layers within \(\mathrm{MLP}_{2}\) follow the same pattern.
Through decomposition, both the computation FLOPs and the memory consumption are significantly reduced, while the accuracy remains almost unchanged, as shown in Table 3(a). This decomposition-based implementation is used by default in our experiments.
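The decomposed computation of Eqs. (4)-(5) can be sketched as follows (a simplified PyTorch illustration; the module and variable names are ours, and the hidden dimension of 256 follows the text):

```python
import torch
import torch.nn as nn

class DecomposedBoxRPB(nn.Module):
    """Decomposed box-to-pixel relative position bias (Eqs. (4)-(5))."""

    def __init__(self, num_heads, hidden=256):
        super().__init__()
        self.mlp_x = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(), nn.Linear(hidden, num_heads))
        self.mlp_y = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(), nn.Linear(hidden, num_heads))

    def forward(self, boxes, H, W):
        # boxes: (K, 4) corner coordinates (x1, y1, x2, y2) in pixel units.
        xs = torch.arange(W, dtype=boxes.dtype)
        ys = torch.arange(H, dtype=boxes.dtype)
        # Per-axis offsets between every pixel column/row and the two corners.
        dx = torch.stack([xs - boxes[:, 0:1], xs - boxes[:, 2:3]], dim=-1)  # (K, W, 2)
        dy = torch.stack([ys - boxes[:, 1:2], ys - boxes[:, 3:4]], dim=-1)  # (K, H, 2)
        Bx = self.mlp_x(dx)                       # (K, W, M)
        By = self.mlp_y(dy)                       # (K, H, M)
        # Broadcast-add the two axial terms into the full bias (Eq. (4)).
        return Bx.unsqueeze(1) + By.unsqueeze(2)  # (K, H, W, M)

bias = DecomposedBoxRPB(num_heads=8)(torch.rand(300, 4) * 32, H=32, W=32)
print(bias.shape)  # torch.Size([300, 32, 32, 8])
```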
Figure 4 shows the effect of this additional BoxRPB term on the cross-attention computation. In general, the BoxRPB term makes the attention focus more on the objects and box boundaries, whereas the cross-attention without BoxRPB may attend to many irrelevant areas. This may explain the significantly improved accuracy (+8.9 mAP) brought by the BoxRPB term, as shown in Table 2.
## 4 More Improvements
In this section, we introduce two other technologies that can additionally improve the plain DETR framework.
**MIM pre-training.** We leverage recent advances in masked image modeling (MIM) pre-training [1, 20, 51, 28], which has been shown to yield better locality [49]. Specifically, we initialize the Swin Transformer backbone with SimMIM pre-trained weights that are learned on ImageNet without labels, as in [51].
As shown in Table 2, MIM pre-training brings +7.4 mAP improvements over our plain DETR baseline. The larger gains of MIM pre-training on the plain DETR framework than on other detectors may highlight the importance of the learned localization ability for a plain DETR framework. On a higher baseline where BoxRPB is already involved, MIM pre-training can still yield +2.6 mAP gains, reaching 48.7 mAP. Moreover, we note that MIM pre-training is also crucial for enabling us to abandon the multi-scale backbone features with almost no loss of accuracy, as shown by Table 5(b) and 5(c).
**Bounding box regression with re-parameterization.** Another improvement we would like to highlight is the bounding box re-parameterization when performing bounding box regression.
The original DETR framework [4] and most of its variants directly scale the box centers and sizes to [0,1]. This formulation struggles to detect small objects, because large objects dominate the loss computation. Instead, we re-parameterize the box centers and sizes of the \(l\)-th decoder layer as:
\[\begin{split} t_{x}^{l}&=(g_{x}-p_{x}^{l-1})/p_{w}^{l- 1},\\ t_{y}^{l}&=(g_{y}-p_{y}^{l-1})/p_{h}^{l-1},\\ t_{w}^{l}&=\log(g_{w}/p_{w}^{l-1}),\\ t_{h}^{l}&=\log(g_{h}/p_{h}^{l-1})\end{split} \tag{6}\]
where \(p_{x}^{l-1}\)/\(p_{y}^{l-1}\)/\(p_{w}^{l-1}\)/\(p_{h}^{l-1}\) are the predicted unnormalized box positions and sizes of \((l-1)\)-th decoder layer.
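As a small numeric sketch of Eq. (6), assuming boxes in (cx, cy, w, h) pixel format:

```python
import torch

def reparam_box_targets(gt, prev):
    """Regression targets of Eq. (6): ground-truth deltas relative to the
    previous decoder layer's unnormalized prediction, scale-normalized by
    the predicted box size so small objects are not dominated.

    gt, prev: (K, 4) boxes as (cx, cy, w, h) in pixels.
    """
    tx = (gt[:, 0] - prev[:, 0]) / prev[:, 2]
    ty = (gt[:, 1] - prev[:, 1]) / prev[:, 3]
    tw = torch.log(gt[:, 2] / prev[:, 2])
    th = torch.log(gt[:, 3] / prev[:, 3])
    return torch.stack([tx, ty, tw, th], dim=-1)

g = torch.tensor([[50.0, 40.0, 20.0, 10.0]])
p = torch.tensor([[48.0, 42.0, 22.0, 9.0]])
print(reparam_box_targets(g, p))  # tensor([[ 0.0909, -0.2222, -0.0953,  0.1054]])
```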
Table 2 shows that this modification enhances the overall detection performance by +2.2 AP. In particular, it achieves a larger +2.9 AP improvement on small objects.
## 5 Ablation Study and Analysis
### The importance of box relative position bias
In Table 3, we study the effect of each factor within our BoxRPB scheme and report the detailed comparison results in the following discussion.
**Effect of axial decomposition.** Modeling the 2D relative position without any decomposition is a naive baseline compared with our axial decomposition scheme, and it can be parameterized as \((\Delta\mathbf{x}_{1},\Delta\mathbf{y}_{1},\Delta\mathbf{x}_{2},\Delta\mathbf{ y}_{2})\in\mathbb{R}^{\mathsf{K}\times\mathsf{H}\times\mathsf{W}\times 4}\). This baseline requires quadratic computation overhead and memory consumption, while the decomposed one decreases the cost to linear complexity. In Table 3(a), we compare the two approaches and find that the axial decomposition scheme achieves comparable performance (\(50.9\) vs. \(50.8\)) while requiring a much lower memory footprint (\(9.5\)G vs. \(26.8\)G) and smaller computation overhead (\(5.8\)G FLOPs vs. \(265.4\)G FLOPs).
**Effect of box points.** Table 3(b) compares using only the center point with using the two corner points. We find that applying only the center points improves the baseline (fourth row of Table 2) by +1.7 AP. However, its performance is worse than that of using the two corner points. In particular, while the two methods achieve comparable AP\({}_{50}\) results, utilizing corner points boosts AP\({}_{75}\) by +2.2. This shows that not only the position (center) but also the scale (height and width) of the query box is important to precisely model the relative position bias.
**Effect of hidden dimension.** We study the effect of the hidden dimension in Equation 5. As shown in Table 3(c), a smaller hidden dimension of 128 leads to a performance drop of 0.5 AP, indicating that the position relation is non-trivial and requires a higher-dimensional space to model.
**Comparison with other methods.** We study the effect of other schemes for computing the modulation term \(\mathbf{B}\) in Equation 2. We compare with several representative methods as follows: (i) Conditional cross-attention [35], which computes the modulation term based on the inner product between the conditional spatial (position) query embedding and the spatial key embedding. (ii) DAB cross-attention [31], which builds on conditional cross-attention and further modulates the positional attention map using the box width and height information. (iii) Spatially modulated cross-attention (SMCA) [16], which designs handcrafted query spatial priors, implemented with a 2D Gaussian-like weight map, to constrain the attended features to be around the object queries' initial estimations.
Table 3(d) reports the detailed comparison results. Our approach achieves the best performance among all the methods. Specifically, the conditional cross-attention module achieves performance similar to our center-only setting (first row of Table 3(b)). DAB cross-attention and SMCA are slightly better than the conditional cross-attention module, but they still lag behind BoxRPB by gaps of 2.5 AP and 2.2 AP, respectively.
We also compare BoxRPB with DAB cross-attention based on its official open-source code. Replacing the DAB positional module with BoxRPB achieves a +1.8 mAP performance gain.
### Comparison with local attention scheme
In this section, we compare our global cross-attention scheme with other representative local cross-attention mechanisms,
\begin{table}
\begin{tabular}{c|c|c|c c c c c c} BoxRPB & MIM & reparam. & AP & AP\({}_{50}\) & AP\({}_{75}\) & AP\({}_{S}\) & AP\({}_{M}\) & AP\({}_{L}\) \\ \hline ✗ & ✗ & ✗ & \(37.2\) & \(63.7\) & \(37.6\) & \(17.8\) & \(40.5\) & \(55.6\) \\ ✓ & ✗ & ✗ & \(46.1\) & \(67.6\) & \(49.1\) & \(27.2\) & \(50.5\) & \(64.9\) \\ ✗ & ✓ & ✗ & \(44.6\) & \(67.0\) & \(48.3\) & \(26.9\) & \(49.1\) & \(59.1\) \\ ✗ & ✓ & ✓ & \(46.3\) & \(68.2\) & \(51.1\) & \(30.7\) & \(51.0\) & \(58.4\) \\ ✓ & ✓ & ✗ & \(48.7\) & \(67.7\) & \(53.0\) & \(31.3\) & \(53.1\) & \(63.0\) \\ ✓ & ✓ & ✓ & \(\mathbf{50.9}\) & \(\mathbf{69.3}\) & \(\mathbf{55.5}\) & \(\mathbf{34.2}\) & \(\mathbf{55.1}\) & \(\mathbf{65.5}\) \\ \end{tabular}
\end{table}
Table 2: **Core ablation results of the proposed components.** Equipped with these components, a plain DETR could achieve competitive performance.
Figure 2: **Illustrating the details of the proposed BoxRPB scheme.** _(Left)_: The black grid represents an input image. The blue sketched region represents a predicted bounding box. We mark the top-left and bottom-right corners of the box with red stars. _(Middle)_: Our BoxRPB calculates the offsets between all positions and the two corners along both the \(x\)-axis and the \(y\)-axis. Then, we concatenate the offset vectors along each axis to form (\(\Delta\mathbf{x}_{1}\), \(\Delta\mathbf{x}_{2}\)) and (\(\Delta\mathbf{y}_{1}\), \(\Delta\mathbf{y}_{2}\)) and apply an independent MLP to obtain the relative position bias terms \(\mathbf{B}_{x}\) and \(\mathbf{B}_{y}\). _(Right)_: We broadcast and add \(\mathbf{B}_{x}\) to \(\mathbf{B}_{y}\) to get the 2D relative bias term \(\mathbf{B}\). Positions with higher attention values are colored red, and the others blue.
including deformable cross-attention [55], RoIAlign [21], RoI Sampling (sampling fixed points inside the Region of Interest), and box mask inspired by [7]. We illustrate the key differences between those methods in the supplementary material.
As shown in Table 4, our method surpasses all the local cross-attention variants. In addition, we observe that our method brings larger improvements on large objects. A similar observation is also reported in DETR [4]; it may be due to the more effective long-range context modeling of the global attention scheme.
### On MIM pre-training
We explore different ways of using the backbone and decoder feature maps with or without MIM pre-training. We evaluate the performance of three different architecture configurations, which are illustrated in Figure 3. We discuss and analyze the results as follows.
**MIM pre-training brings consistent gains.** By comparing the experimental results under the same architecture
\begin{table}
\end{table}
Table 4: **Comparison with local cross-attention scheme.** Global cross-attention with BoxRPB outperforms all the local cross-attention counterparts and have a significant gain on large objects.
Figure 3: We compare the architecture designs when using different feature maps output by the backbone and sent to the Transformer decoder. From (a) to (b), we simplify the dependency on sending multi-scale feature maps to the Transformer decoder. From (b) to (c), we remove the dependency on fusing multi-scale feature output by the backbone. We adopt (c) as our default architecture setting.
\begin{table}
\end{table}
Table 5: **Ablation of MIM pre-training.** (a) multi-scale feature maps output by the backbone + multi-scale feature maps for the Transformer decoder. (b) multi-scale feature maps output by the backbone + single-scale feature map for the Transformer decoder. (c) single-scale feature map output by the backbone + single-scale feature map for the Transformer decoder.
configuration, we found that using MIM pre-training consistently achieves better performance. For example, as shown in Table 5, using MIM pre-training outperforms using supervised pre-training by 1.5 AP in the (\(\mathbf{C}^{3}\),\(\mathbf{C}^{4}\),\(\mathbf{C}^{5}\)) \(\rightarrow(\mathbf{P}^{3}\), \(\mathbf{P}^{4}\), \(\mathbf{P}^{5})\) configuration and by 2.9 AP in the \(\mathbf{C}^{5}\rightarrow\mathbf{P}^{4}\) configuration.
**Multi-scale feature maps for the decoder can be removed.** By comparing the results between Table 5(a) and Table 5(b), we found that using a single high-resolution feature map can match or even surpass the performance of using multi-scale feature maps. For example, (\(\mathbf{C}^{3}\),\(\mathbf{C}^{4}\),\(\mathbf{C}^{5}\)) \(\rightarrow\)\(\mathbf{P}^{3}\) achieves comparable performance with (\(\mathbf{C}^{3}\),\(\mathbf{C}^{4}\),\(\mathbf{C}^{5}\)) \(\rightarrow(\mathbf{P}^{3}\), \(\mathbf{P}^{4}\), \(\mathbf{P}^{5})\), with or without MIM pre-training. This observation is not trivial, as most existing detection heads still require multi-scale features as input, and it makes building a competitive single-scale plain DETR possible. We hope this finding could ease the design of future detection frameworks.
**Multi-scale feature maps from the backbone are unnecessary.** We analyze the effect of removing the multi-scale feature maps from the backbone by comparing the results of Table 5(b) and Table 5(c). When using a supervised pre-trained backbone, adopting only the last feature map \(\mathbf{C}^{5}\) from the backbone hurts the performance. For example, the \(\mathbf{C}^{5}\rightarrow\mathbf{P}^{5}\) configuration reaches 46.4 AP, which is worse than (\(\mathbf{C}^{3}\),\(\mathbf{C}^{4}\),\(\mathbf{C}^{5}\)) \(\rightarrow\)\(\mathbf{P}^{5}\) (47.0 AP) by 0.6 AP. However, when using the MIM pre-trained backbone, \(\mathbf{C}^{5}\rightarrow\mathbf{P}^{5}\) reaches 50.2 mAP, which is comparable with (\(\mathbf{C}^{3}\),\(\mathbf{C}^{4}\),\(\mathbf{C}^{5}\)) \(\rightarrow\)\(\mathbf{P}^{5}\) (50.3 AP). These results show that MIM pre-training can reduce the reliance on multi-scale feature maps.
**A single-scale feature map from the backbone and a single-scale feature map for the decoder are enough.** Based on the above observations, we reach a surprisingly simple but important conclusion: we can completely eliminate the need for multi-scale feature maps in both the backbone and the Transformer decoder by using our proposed BoxRPB scheme and MIM pre-training.
### Application to a plain ViT
In this section, we build a simple and effective fully plain object detection system by applying our approach to the plain ViT [13]. Our system uses only a single-resolution feature map throughout a plain Transformer encoder-decoder architecture, without any multi-scale design or processing. We compare our approach with the state-of-the-art Cascade Mask R-CNN [3, 28] on the COCO dataset. For a fair comparison, we use a MAE [20] pre-trained ViT-Base as the backbone and train the object detector for \(\sim\)\(50\) epochs. As shown in Table 6, our method achieves comparable results with Cascade Mask R-CNN, which relies on multi-scale feature maps for better localization across different object scales. Remarkably, our method is not trained with the instance mask annotations that are usually considered beneficial for object detection.
### Visualization of cross-attention maps
Figure 4 shows the cross-attention maps of models with or without BoxRPB. For the model with BoxRPB, the cross-attention concentrates on the individual object. On the contrary, the cross-attention of the model without BoxRPB attends to multiple objects that have similar appearance.
## 6 System-level Results
We compare our method with other state-of-the-art methods in this section. Table 7 shows the results, where all experiments reported in this table utilize a Swin-Large backbone. As other works usually apply an encoder to enhance the backbone features, we also stack 12 window-based single-scale Transformer layers (with a feature
\begin{table}
\begin{tabular}{l|c c c c c c} method & AP & AP\({}_{50}\) & AP\({}_{75}\) & AP\({}_{S}\) & AP\({}_{M}\) & AP\({}_{L}\) \\ \hline Cascade Mask R-CNN[3] & \(53.7\) & \(71.9\) & \(58.7\) & \(\mathbf{36.9}\) & \(\mathbf{57.4}\) & \(\mathbf{69.1}\) \\ Ours & \(\mathbf{53.8}\) & \(\mathbf{73.4}\) & \(\mathbf{58.9}\) & \(35.9\) & \(57.0\) & \(68.9\) \\ \end{tabular}
\end{table}
Table 6: **Comparison of the improved plain DETR and Cascade Mask R-CNN with a MIM pre-trained ViT-Base backbone.** Our plain DETR with global cross-attention is slightly better than the region-based, multi-scale Cascade Mask R-CNN.
Figure 4: Visualizations of the cross-attention maps of models w. or w/o. BoxRPB. For each group, the first column shows the input image and the object query. The first row presents the attention maps of the model w. BoxRPB, while the second row displays attention maps of the model w/o. BoxRPB. BoxRPB helps to guide the cross-attention to focus on the individual objects.
dimension of 256) on top of the backbone for a fair comparison.
With 36 training epochs, our model achieves \(60.0\) AP on the COCO test-dev set, outperforming DINO-DETR by 1.4 AP. Further introducing Objects365 [40] as the pre-training dataset, our method reaches \(63.9\) AP on the test-dev set, which is better than DINO-DETR and DETA by a notable margin. These strong results verify that the plain DETR architecture has no intrinsic drawbacks that prevent it from achieving high performance.
## 7 Related work
**DETR-based object detection.** DETR [4] has impressed the field with several merits, including its conceptual simplicity and generic applicability, its minimal requirement of domain knowledge that avoids customized label assignment and non-maximum suppression, and its plain design. While the original DETR maintains a plain design, it also suffers from a slow convergence rate and low detection accuracy. There have been many follow-up works, including [35, 16, 9, 47, 55, 53, 52, 17, 54], and many top object detectors are now built upon this line of work, thanks to the reintroduction of multi-scale and locality designs [54, 14, 46]. Unlike these leading works, we aim for an improved DETR framework that maintains a "plain" nature, without multi-scale features or local cross-attention computation.
**Region-based object detection.** Prior to the DETR framework, object detectors were usually built in a region-based fashion: the algorithms analyze every region of the entire image locally, and the object detections are obtained by ranking and filtering the results of each region. Due to this local nature, it is hard for them to flexibly leverage global information for object detection. Moreover, while some early attempts used a single-scale feature map in the head [19, 38, 18, 39, 32], the later leading methods are almost all built on multi-scale features, such as FPN [29], BiFPN [42], Cascade R-CNN [3], and HTC [5]. We expect that our strong plain DETR detector may also inspire research in exploring single-scale feature maps for region-based detection.
**Position encoding.** This paper is also related to position encoding techniques. The original Transformer [45] uses absolute position encoding. Early vision Transformers [4, 12, 44] inherit this absolute position encoding setting. Swin Transformers [34, 33] highlight the importance of relative position bias for Transformer-based visual recognition, with early variants found in both the language and vision domains [23, 41, 24, 10, 25, 8, 48]. This paper extends the relative position bias to box-to-pixel pairs, instead of the previous pixel-to-pixel pairs. It also reveals that relative position bias can be even more critical in the context of plain DETR detectors.
**Pre-training.** The pre-training methods [20, 51, 1] that follow the path of masked image modeling have drawn increasing attention due to their strong performance on various core vision tasks such as object detection and semantic segmentation. Although some recent works [28, 49] have revealed possible reasons why MIM outperforms conventional supervised pre-training and confirmed that the FPN can be simplified, few works attempt to build a fully plain object detection head based on MIM pre-trained backbones. Our experimental results show that MIM pre-training is a key factor in fully plain object detection architecture design.
## 8 Conclusion
This paper has presented an improved plain DETR detector which achieves exceptional improvements over the original plain model, reaching 63.9 mAP with a Swin-L backbone. This is highly competitive with state-of-the-art detectors that have been heavily tuned with multi-scale feature maps and region-based feature extraction. We highlighted the importance of two technologies
\begin{table}
\begin{tabular}{l|c|c|c|c|c c c c c c} method & framework & extra data & \#params & \#epoch & AP & AP\({}_{50}\) & AP\({}_{75}\) & AP\({}_{S}\) & AP\({}_{M}\) & AP\({}_{L}\) \\ \hline Swin [34] & HTC & & 284M & \(72\) & \(57.7\) & \(76.2\) & \(63.1\) & \(33.4\) & \(52.9\) & \(64.0\) \\ DETA [36] & DETR & & 218M & \(24\) & \(58.5\) & \(76.5\) & \(64.4\) & \(38.5\) & \(62.6\) & \(73.8\) \\ DINO-DETR [54] & DETR & & 218M & \(36\) & \(58.6\) & \(76.9\) & \(64.1\) & \(39.4\) & \(61.6\) & \(73.2\) \\ Ours\({}^{*}\) & DETR & & 228M & \(36\) & \(60.0\) & \(78.9\) & \(66.4\) & \(42.8\) & \(62.7\) & \(73.7\) \\ \hline DETA [36] & DETR & O365 & 218M & \(24+24\) & \(63.5\) & \(80.4\) & \(70.2\) & \(46.1\) & \(\mathbf{66.9}\) & \(\mathbf{76.9}\) \\ DINO-DETR [54]\({}^{*}\) & DETR & O365 & 218M & \(26+18\) & \(63.3\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ Ours\({}^{*}\) & DETR & O365 & 228M & \(24+24\) & \(\mathbf{63.9}\) & \(\mathbf{82.1}\) & \(\mathbf{70.7}\) & \(\mathbf{48.2}\) & \(66.8\) & \(76.7\) \\ \end{tabular}
\end{table}
Table 7: System-level comparisons with the state-of-the-art results on COCO test-dev. All methods adopt the Swin-Large backbone. The \({}^{*}\) marks the results with test time augmentation.
of BoxRPB and MIM-based pre-training for this improved plain DETR framework. We hope that this effective detector, empowered by minimal architectural "inductive bias", can encourage future research to explore generic plain decoders in other vision problems.
|
2309.00933 | Two-in-One Depth: Bridging the Gap Between Monocular and Binocular
Self-supervised Depth Estimation | Monocular and binocular self-supervised depth estimations are two important
and related tasks in computer vision, which aim to predict scene depths from
single images and stereo image pairs respectively. In literature, the two tasks
are usually tackled separately by two different kinds of models, and binocular
models generally fail to predict depth from single images, while the prediction
accuracy of monocular models is generally inferior to binocular models. In this
paper, we propose a Two-in-One self-supervised depth estimation network, called
TiO-Depth, which could not only compatibly handle the two tasks, but also
improve the prediction accuracy. TiO-Depth employs a Siamese architecture and
each sub-network of it could be used as a monocular depth estimation model. For
binocular depth estimation, a Monocular Feature Matching module is proposed for
incorporating the stereo knowledge between the two images, and the full
TiO-Depth is used to predict depths. We also design a multi-stage
joint-training strategy for improving the performances of TiO-Depth in both two
tasks by combining the relative advantages of them. Experimental results on the
KITTI, Cityscapes, and DDAD datasets demonstrate that TiO-Depth outperforms
both the monocular and binocular state-of-the-art methods in most cases, and
further verify the feasibility of a two-in-one network for monocular and
binocular depth estimation. The code is available at
https://github.com/ZM-Zhou/TiO-Depth_pytorch. | Zhengming Zhou, Qiulei Dong | 2023-09-02T13:06:23Z | http://arxiv.org/abs/2309.00933v1 | # Two-in-One Depth: Bridging the Gap Between Monocular and Binocular Self-supervised Depth Estimation
###### Abstract
Monocular and binocular self-supervised depth estimations are two important and related tasks in computer vision, which aim to predict scene depths from single images and stereo image pairs respectively. In literature, the two tasks are usually tackled separately by two different kinds of models, and binocular models generally fail to predict depth from single images, while the prediction accuracy of monocular models is generally inferior to binocular models. In this paper, we propose a Two-in-One self-supervised depth estimation network, called TiO-Depth, which could not only compatibly handle the two tasks, but also improve the prediction accuracy. TiO-Depth employs a Siamese architecture and each sub-network of it could be used as a monocular depth estimation model. For binocular depth estimation, a Monocular Feature Matching module is proposed for incorporating the stereo knowledge between the two images, and the full TiO-Depth is used to predict depths. We also design a multi-stage joint-training strategy for improving the performances of TiO-Depth in both two tasks by combining the relative advantages of them. Experimental results on the KITTI, Cityscapes, and DDAD datasets demonstrate that TiO-Depth outperforms both the monocular and binocular state-of-the-art methods in most cases, and further verify the feasibility of a two-in-one network for monocular and binocular depth estimation. The code is available at [https://github.com/ZM-Zhou/TiO-Depth_pytorch](https://github.com/ZM-Zhou/TiO-Depth_pytorch).
## 1 Introduction
With the development of deep learning techniques, deep-neural-network-based methods have shown their effectiveness for handling both the monocular and binocular depth estimation tasks, which pursue depths from single images and stereo image pairs respectively [5, 14, 16, 62]. Since it is time-consuming and labor-intensive to obtain abundant high-quality ground truth scene depths, monocular and binocular self-supervised depth estimation methods, which do not require ground truth depths for training, have attracted increasing attention in recent years [17, 20, 56, 60].
It is noted that the above two tasks are closely related, as shown in Fig. 1: both the monocular and binocular methods output the same type of results (_i.e_., depth maps), and some self-supervised monocular methods [7, 19, 58] use the same type of training data (_i.e_., stereo pairs) as the binocular models. Their main difference is that the monocular task is to predict depths from a single image, while the binocular task is to predict depths from a stereo pair. Due to this difference, the two tasks have been handled separately by two different kinds of models (_i.e_., monocular and binocular models) in the literature. Compared with the monocular models that learn depths from single image features, the binocular models focus on learning depths from the
Figure 1: Diagrams of three kinds of self-supervised depth estimation models trained with stereo pairs: (a) The monocular model is tested with a single image but needs stereo pairs during training. (b) The binocular model is trained and tested with stereo pairs, but cannot predict depths from a single image. (c) TiO-Depth can be tested with both single images and stereo pairs.
geometric features (_e.g_., cost volumes [60]) generated with stereo pairs, and consequently, they generally perform better than the monocular models but cannot predict depth from a single image. Moreover, it is found in [7] that although the overall performance of the monocular models is poorer than that of the binocular ones, the monocular models still perform better on some special local regions, _e.g_., the occluded regions around objects which can only be seen from a single view. Inspired by this finding, some monocular (or binocular) models employed a separate binocular (or monocular) model to boost their performance in their own task [1, 7, 9, 15, 40, 42, 49]. All the above issues naturally raise the following problem: **Is it feasible to explore a general model that could not only compatibly handle the two tasks, but also improve the prediction accuracy?**
Obviously, a general model has the following potential advantages in comparison to the separate models: **(1) Flexibility**: This model could compatibly deal with both the monocular and binocular tasks, which would be of great benefit to platforms with a binocular system in real applications, where one camera in the binocular system might occasionally be occluded or even break down. **(2) High Efficiency**: This model has the potential to perform better than both monocular and binocular models, while the number of its parameters is less than that of two separate models.
Addressing the aforementioned problem and potential advantages of a general depth estimation model, in this paper, we propose a Two-in-One model for both monocular and binocular self-supervised depth estimations, called TiO-Depth. TiO-Depth employs a monocular model as a sub-network of a Siamese architecture, so that the whole architecture could take stereo images as input. Considering that the two sub-networks extract image features independently, we design a monocular feature matching module to fuse features from the two sub-networks for binocular prediction. Then, a multi-stage joint-training strategy is proposed for training TiO-Depth in a self-supervised manner and boosting its accuracy in the two tasks by combining their relative advantages and alleviating their disadvantages.
In sum, our main contributions include:
* We propose a novel self-supervised depth estimation model called TiO-Depth, which could handle both the monocular and binocular depth estimation tasks.
* We design a dual-path decoder with the monocular feature matching modules for aggregating the features from either single images or stereo pairs, which may provide new insights into the design of the self-supervised depth estimation network.
* We propose a multi-stage joint-training strategy for training TiO-Depth, which is helpful for improving the performances of TiO-Depth in the two tasks.
## 2 Related work
### Self-supervised monocular depth estimation
Self-supervised monocular depth estimation methods take multi-view images as training data and learn to estimate depth from a single input image via image reconstruction. The existing methods could be categorized into two groups according to the training data: video training methods and stereo training methods.
The methods trained with video sequences [6, 8, 20, 27, 33, 35, 46, 50, 61, 63, 28] needed to estimate scene depths and camera poses simultaneously. Zhou _et al_. [63] proposed an end-to-end framework which is comprised of two separate networks for predicting depths and camera poses. Godard _et al_. [20] designed a per-pixel minimum reprojection loss with an auto-mask and a full-resolution sampling for training the model to learn more accurate depths. SDS-SSMDE [46] utilized a self-distillation framework where a student network was trained by the absolute depth pseudo labels generated with a teacher network. Several methods [8, 27, 33, 35] used extra semantic information for improving the performance, and the frameworks explored in [6, 61] jointly learnt depth, camera pose and optical flow. Additionally, the multi-frame monocular depth estimation was handled in [26, 59], which predicted more accurate depths by taking two frames of a monocular video as input.
The methods trained with stereo image pairs [3, 7, 9, 17, 19, 21, 45, 47, 52, 58, 67, 65, 64] generally predicted scene depths by estimating the disparity between the stereo pair. Godard _et al_. [19] designed a left-right disparity consistency loss to improve its robustness. Zhu _et al_. [67] proposed an edge consistency loss between the depth map and the semantic segmentation map, while a stereo occlusion mask was proposed for alleviating the influence of the occlusion problem during training. An indirect way of learning depths was proposed in [3, 21, 22], where the model outputted a probability volume of a set of discrete disparities for depth prediction. The self-distillation technique [24] was incorporated in [45, 65] to boost the performance of the model by using the reliable results predicted by itself. Considering that the stereo pairs were available at the training stage, Watson _et al_. [58] proposed to utilize the disparities generated with Semi Global Matching [29] as the 'Depth Hints' to improve the accuracy. The frameworks that trained a monocular depth estimation network with the pseudo labels selected from the results of a binocular depth estimation network were proposed in [9, 7].
### Self-supervised binocular depth estimation
Binocular depth estimation (so called as stereo matching) aims to estimate depths by taking stereo image pairs as input [4, 5, 29, 62]. Recently, self-supervised binocular depth estimation methods [63, 60, 56, 38, 55, 31, 1] were
proposed for overcoming the limitation of the ground truth. Zhou [63] proposed a framework for learning stereo matching in an iterative manner, which was guided by the left-right check. UnOS [56] and Flow2Stereo [38] were proposed for predicting optical flow and binocular depth simultaneously, where the geometrical consistency between the two types of the predicted results was used to improve the accuracy of them. Wang [55] proposed a parallax-attention mechanism to learn the stereo correspondence. H-Net [31] was proposed to learn binocular depths with a Siamese network and an epipolar attention mechanism.
## 3 Methodology
In this section, we firstly introduce the architecture of the proposed TiO-Depth, including the details of the dual-path decoder and the Monocular Feature Matching (MFM) module. Then, we describe the multi-stage joint-training strategy and the loss functions for training TiO-Depth.
### Overall architecture
Since TiO-Depth is to handle both monocular and binocular depth estimation tasks, it should be able to predict depths from both single image features and geometric features, while the binocular and monocular models could only estimate depths from one type of the features respectively. To this end, TiO-Depth utilizes a Siamese architecture as shown in Fig. 2, and each of the two sub-networks is used as a monocular model. They predict the monocular depth \(D_{m}\) from a single image \(I\in\mathbb{R}^{3\times H\times W}\) for avoiding the model learning depths only based on the geometric features, where \(\{H,W\}\) denote the height and width of the image. The parameters of the two sub-networks are shared, and they consist of a monocular feature encoder and a decoder. For effectively extracting geometric features from available stereo pairs for the binocular task, the dual-path decoder is proposed as the decoder part of the sub-networks, where a binocular path is added to the path for the monocular task (called monocular path). In the binocular path, the MFM modules are added to learn the geometric features by matching the monocular features extracted by the two sub-networks from a stereo pair and integrate them into the input features. Accordingly, the full TiO-Depth is used to predict binocular depths \(\{D_{s}^{l},D_{s}^{r}\}\).
Specifically, a modified Swin-transformer [39] is adopted as the encoder as done in [65], which extracts 4 image features \(\{C_{i}\}_{i=1}^{4}\) with resolutions of \(\{\frac{H}{2^{i+1}}\times\frac{W}{2^{i+1}}\}_{i=1}^{4}\). We detail the dual-path decoder and the MFM module in the following.
### Dual-path decoder
As shown in Fig. 2, the dual-path decoder is used to gradually aggregate the extracted image features for depth prediction, which consists of three Self-Distilled Feature Aggregation (SDFA) blocks [65], one decoder block [20], three monocular feature matching (MFM) modules, and two \(3\times 3\) convolutional layers used as the output layers. The features could be passed through different modules via different paths for the monocular and binocular tasks.
For monocular depth estimation, the multi-scale features \(\{C_{i}\}_{i=1}^{4}\) are gradually aggregated by the SDFA blocks and the decoder block, which is defined as the monocular path. The SDFA block was proposed in [65] for aggregating features at two resolutions while maintaining contextual consistency; it takes a low-resolution decoder feature \(F_{i+1}\) (specifically, \(F_{5}=C_{4}\)) and a high-resolution encoder feature \(C_{i-1}\), and outputs a new decoder feature with the same shape as \(C_{i-1}\). The decoder block is comprised of two \(3\times 3\) convolutional layers with the ELU activation [10] and an upsample operation for generating a high-resolution feature \(F_{i}\) from the output of the last block. The output layer generates a discrete disparity volume \(V\in\mathbb{R}^{N\times H\times W}\) from the last decoder feature \(F_{1}\), where \(N\) is the number of
Figure 2: Architecture of TiO-Depth. TiO-Depth employs a Siamese architecture and each sub-network is comprised of a **Mon**ocular **Feature Encoder** and a dual-path decoder. The features extracted by the encoder are passed through the decoder via different paths for handling different tasks. \(\{P_{m},P_{s}\}\) denote the probability volumes predicted by the monocular and binocular paths respectively, while \(\{D_{m},D_{s}\}\) are the corresponding depth maps. The superscripts ‘l’ and ‘r’ denote the left and right views respectively.
the discrete disparity levels.
It is noted that two volumes (defined as the auxiliary volume \(V_{a}\) and the final volume \(V_{m}\)) could be generated for monocular depth estimation by using different offset learning branches in the SDFA blocks at the training stage, which are trained with the photometric loss and the distilled loss at different steps, respectively. More details are described in Sec. 3.4. Accordingly, the branches in SDFA used to generate the two volumes are called the auxiliary branch and the final branch. Since \(V_{a}\) is only used at the training stage, it is not illustrated in Fig. 2, and the depth calculated based on \(V_{m}\) is the final monocular result.
For binocular depth estimation, the dual-path decoders in the two sub-networks are utilized for processing the left and right image features via the binocular path. In this path, the MFM modules take the decoder features \(\{F^{l}_{i},F^{r}_{i}\}_{i=2}^{4}\) outputted by the SDFA blocks (where the auxiliary branch is used) and generate the corresponding stereo features \(\{{F^{l}_{i}}^{\prime},{F^{r}_{i}}^{\prime}\}_{i=2}^{4}\) by incorporating the stereo knowledge. The left and right stereo discrete disparity volumes \(\{V^{l}_{s},V^{r}_{s}\}\) are obtained by passing the last decoder features \(\{F^{l}_{1},F^{r}_{1}\}\) to another output layer in each decoder.
For obtaining the depth map from the discrete disparity volume \(V\), as done in [2, 65], a set of discrete disparity levels \(\{b_{n}\}_{n=0}^{N-1}\) is generated with the mirrored exponential disparity discretization, given the maximum and minimum disparities \([b_{\min},b_{\max}]\). Then, a probability volume \(P\) is obtained by normalizing \(V\) through a softmax operation along the first (_i.e_., channel) dimension, and a disparity map is calculated by a weighted sum of \(\{b_{n}\}_{n=0}^{N-1}\) with the corresponding channels in \(P\):
\[d=\sum_{n=0}^{N-1}P_{n}\odot b_{n}\quad, \tag{1}\]
where \(P_{n}\) denotes the \(n^{\rm th}\) channel of \(P\) and '\(\odot\)' is the element-wise multiplication. Given the baseline length \(B\) of the stereo pair and the horizontal focal length \(f_{x}\) of the camera, the depth map is calculated via \(D=\frac{Bf_{x}}{d}\).
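The conversion from a discrete disparity volume to a depth map can be sketched as follows (a minimal PyTorch illustration; for simplicity we use a plain exponential spacing between \(b_{\min}\) and \(b_{\max}\), whereas the paper uses a mirrored exponential discretization):

```python
import torch

def disparity_levels(b_min=2.0, b_max=300.0, n=49):
    """Illustrative exponential disparity discretization over [b_min, b_max]."""
    return b_min * (b_max / b_min) ** (torch.arange(n) / (n - 1))

def volume_to_depth(V, levels, baseline, fx):
    """Eq. (1): soft weighting of the disparity levels by the probability
    volume, followed by the disparity-to-depth conversion D = B * fx / d."""
    P = torch.softmax(V, dim=0)                    # (N, H, W) probability volume
    d = (P * levels.view(-1, 1, 1)).sum(dim=0)     # expected disparity per pixel
    return baseline * fx / d

levels = disparity_levels()
depth = volume_to_depth(torch.randn(49, 8, 8), levels, baseline=0.54, fx=720.0)
print(depth.shape)  # torch.Size([8, 8])
```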
### Monocular Feature Matching (MFM) module
Given the features \(\{F^{l},F^{r}\}\in\mathbb{R}^{C\times H^{\prime}\times W^{\prime}}\) obtained from the two decoders of the two sub-networks, MFM utilizes the cross-attention mechanism [54] to generate the cost volume at the left (or right) view and integrates it into the corresponding feature to output a stereo feature with the same shape as the input feature. \(\{C,H^{\prime},W^{\prime}\}\) denote the channel number, height, and width of the features. Without loss of generality, as shown in Fig. 3, for obtaining the left-view stereo feature \({F^{l}}^{\prime}\), MFM first applies two \(1\times 1\) convolutional layers to generate the left-view query feature \(Q^{l}\) and the right-view key feature \(K^{r}\) from \(\{F^{l},F^{r}\}\), respectively. As done in [26], the left-view cost volume is generated based on the attention scores between \(Q^{l}\) and a set of shifted \(K^{r}\), where each score map \(S^{l}_{n}\in\mathbb{R}^{1\times H^{\prime}\times W^{\prime}}\) is calculated between \(Q^{l}\) and \(K^{r}\) shifted by \(b^{\prime}_{n}\), which is formulated as:
\[S^{l}_{n}=\frac{\mathrm{sum}(Q^{l}\odot K^{r}_{n})}{\sqrt{C}}\quad, \tag{2}\]
where \(K^{r}_{n}\) denotes the \(K^{r}\) shifted with \(b^{\prime}_{n}\), and '\(\mathrm{sum}(\cdot)\)' is a sum operation along the first dimension. Then, the cost volume \(A^{l}\in\mathbb{R}^{N\times H^{\prime}\times W^{\prime}}\) is obtained by concatenating \(S^{l}_{n}\) generated with all the disparity levels \(\{b^{\prime}_{n}=\frac{W^{\prime}}{W}b_{n}\}_{n=0}^{N-1}\) and normalizing it with a softmax operation along the first dimension:
\[A^{l}=\mathrm{softmax}\left([\{S^{l}_{n}\}_{n=0}^{N-1}]\right)\quad, \tag{3}\]
where '\([\cdot]\)' denotes the concatenation operation. For integrating the stereo knowledge in the cost volume into the decoder feature to obtain the stereo feature \({F^{l}}^{\prime}\), \(F^{l}\) and \(A^{l}\) are concatenated and passed through a \(3\times 3\) SE convolutional layer [30] with the ELU activation:
\[{F^{l}}^{\prime}=\mathrm{SE}\left([A^{l},F^{l}]\right)\quad. \tag{4}\]
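A minimal single-sample sketch of Eqs. (2)-(3) is given below; for brevity we shift the key feature with `torch.roll` (which wraps around at the border, whereas a real implementation would pad), and the shift direction is our assumption:

```python
import torch

def mfm_cost_volume(Q_l, K_r, shifts):
    """Left-view cost volume from attention scores between the left query
    feature and horizontally shifted right key features (Eqs. (2)-(3)).

    Q_l, K_r: (C, H, W) query / key features of one stereo pair.
    shifts: integer disparity levels scaled to this feature resolution.
    """
    C = Q_l.size(0)
    scores = []
    for b in shifts:
        K_shift = torch.roll(K_r, shifts=int(b), dims=-1)  # shift keys by b pixels
        scores.append((Q_l * K_shift).sum(dim=0, keepdim=True) / C ** 0.5)
    # Concatenate the per-level score maps and normalize over the levels.
    return torch.softmax(torch.cat(scores, dim=0), dim=0)  # (N, H, W)

A_l = mfm_cost_volume(torch.randn(64, 16, 32), torch.randn(64, 16, 32), range(8))
print(A_l.shape)  # torch.Size([8, 16, 32])
```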
### Multi-stage joint-training strategy
TiO-Depth is trained with stereo image pairs in a self-supervised manner. Considering the motivation behind the architecture of TiO-Depth and the different advantages and constraints of the two tasks, we design the multi-stage training strategy as shown in Fig. 4. There are three stages in the strategy, where the training iterations are divided into one, two and three steps respectively. At the last two stages, the training at the current step could benefit from the results generated at the previous steps. We detail the three steps as follows.
**Step (1)**. TiO-Depth is trained for learning monocular depth estimation under monocular constraints at this step. The discrete depth constraint [64, 2] is used to generate a
Figure 3: Architecture of the Monocular Feature Matching (MFM) module. ‘ⓒ’ denotes the concatenation operation and ‘SE.’ is the SE convolutional layer [30].
left-view reconstructed image \(\hat{I}_{a}^{l}\) with the right-view auxiliary volume \(V_{a}^{r}\) (generated with the auxiliary branches in SDFAs as mentioned in Sec. 3.2) and the right-view real image \(I^{r}\). As done in [2, 65], the monocular loss \(L_{M}\) for training TiO-Depth contains a reconstruction loss \(L_{rec1}\) for reflecting the difference between \(\hat{I}_{a}^{l}\) and \(I^{l}\), and an edge-aware smoothness loss \(L_{smo1}\):
\[L_{M}=L_{rec1}+\lambda_{1}L_{smo1}, \tag{5}\]
where \(\lambda_{1}\) is a preset weight parameter. All the parameters in TiO-Depth except the MFMs are optimized at this step.
**Step (2)**. TiO-Depth is trained for learning binocular depth estimation under binocular constraints and some monocular results obtained at step (1). The continuous depth constraint [64, 7] is used to reconstruct a left-view image \(\tilde{I}_{s}^{l}\) by taking the right-view image \(I^{r}\) and the predicted left-view depth map \(D_{s}^{l}\) as input. Then, a stereo loss is adopted to train the network, which consists of the following terms:
The stereo reconstruction loss term \(L_{rec2}\) is formulated as a weighted sum of the \(L_{1}\) loss and the structural similarity (SSIM) loss [57] as done in [7, 20]. Considering the relative advantage of the monocular results on the occluded regions, the occluded pixels in \(I^{l}\) are replaced by the corresponding pixels in a monocular reconstructed image \(\tilde{I}_{a}^{l}\) calculated with the auxiliary monocular depth map \(D_{a}^{l}\):
\[L_{rec2}=\alpha\left\|\tilde{I}_{s}^{l}-{I^{l}}^{\prime}\right\|_{1}+(1-\alpha) \mathrm{SSIM}(\tilde{I}_{s}^{l},{I^{l}}^{\prime})\quad, \tag{6}\]
\[{I^{l}}^{\prime}=M_{occ}^{l}\odot I^{l}+(1-M_{occ}^{l})\odot\tilde{I}_{a}^{l}\quad, \tag{7}\]
where \(\alpha\) is a balance parameter and '\(\left\|\cdot\right\|_{1}\)' denotes the \(L_{1}\) norm. \(M_{occ}^{l}\) is an occlusion mask generated with the auxiliary monocular disparity \(d_{a}^{l}\) as done in [67], where the values are zeros in the occluded regions, and ones otherwise.
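A simplified sketch of Eqs. (6)-(7) follows; the SSIM term here is a global (non-windowed) variant turned into a dissimilarity, which is a common reading of the "SSIM loss", so those details are assumptions rather than the exact implementation:

```python
import torch

def stereo_rec_loss(I_rec, I_l, I_mono, M_occ, alpha=0.15):
    """Photometric loss against an occlusion-blended target (Eqs. (6)-(7)).

    I_rec: (B, 3, H, W) stereo-reconstructed left image.
    I_l, I_mono: real left image and monocular reconstruction, same shape.
    M_occ: (B, 1, H, W) occlusion mask, 0 on occluded pixels.
    """
    # Eq. (7): replace occluded pixels with the monocular reconstruction.
    target = M_occ * I_l + (1 - M_occ) * I_mono
    l1 = (I_rec - target).abs().mean()
    mu_x, mu_y = I_rec.mean(), target.mean()
    cov = ((I_rec - mu_x) * (target - mu_y)).mean()
    ssim = ((2 * mu_x * mu_y + 1e-4) * (2 * cov + 9e-4)) / \
           ((mu_x**2 + mu_y**2 + 1e-4) * (I_rec.var() + target.var() + 9e-4))
    return alpha * l1 + (1 - alpha) * (1 - ssim) / 2

loss = stereo_rec_loss(torch.rand(1, 3, 8, 8), torch.rand(1, 3, 8, 8),
                       torch.rand(1, 3, 8, 8), torch.ones(1, 1, 8, 8))
print(loss)
```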
The cost volume loss term \(L_{cos}\) is adopted to guide the cost volumes \(\{A_{i}^{l}\}_{i=1}^{3}\) generated in MFMs through the auxiliary monocular probability volume \(P_{a}^{l}\), which is formulated as:
\[L_{cos}=\sum_{i=1}^{3}\frac{1}{\Omega_{i}}\sum_{\left\|A_{i}^{l}(x)-P_{a}^{l}\left\langle x\right\rangle\right\|_{1}>t_{1}}\left\|A_{i}^{l}(x)-P_{a}^{l}\left\langle x\right\rangle\right\|_{1}, \tag{8}\]
where \(\Omega_{i}\) denotes the number of the valid coordinates \(x\) in \(A_{i}\), and \(t_{1}\) is a predefined threshold. '\(\left\langle\cdot\right\rangle\)' denotes the bilinear sampling operation for getting the element at the corresponding coordinate of \(x\) in a different resolution volume.
The disparity guidance loss term \(L_{gui}\) leverages both the gradient information and the edge region values in the auxiliary monocular disparity map \(d_{a}^{l}\) for improving the quality of the binocular result:
\[L_{gui} =\left\|\partial_{x}d_{a}^{l}-\partial_{x}d_{s}^{l}\right\|_{1}+ \left\|\partial_{y}d_{a}^{l}-\partial_{y}d_{s}^{l}\right\|_{1}\] \[+M_{out}^{l}\odot\left\|d_{a}^{l}-d_{s}^{l}\right\|_{1}\quad, \tag{9}\]
where '\(\partial_{x}\)', '\(\partial_{y}\)' are the differential operators in the horizontal and vertical directions respectively, \(M_{out}^{l}\) denotes a binary mask [41] where the pixels whose reprojected coordinates are out of the image are ones, and zeros otherwise. Accordingly, the stereo loss is formulated as:
\[L_{S}=L_{rec2}+\lambda_{2}L_{smo2}+\lambda_{3}L_{cos}+\lambda_{4}L_{gui}\quad, \tag{10}\]
where \(\{\lambda_{2},\lambda_{3},\lambda_{4}\}\) are preset weight parameters, and \(L_{smo2}\) is the edge-aware smoothness loss [20]. At this step, only the parameters in the dual-path decoder are optimized.
**Step (3)**. TiO-Depth is trained in a distilled manner by utilizing the results obtained at steps (1) and (2) as the teacher for further improving the monocular prediction. A distilled loss \(L_{dis}\) is used to constrain the final monocular probability volume \(P_{m}^{l}\) (generated with the final branches in SDFAs) with the stereo probability volume \(P_{s}^{l}\) and the auxiliary monocular probability volume \(P_{a}^{l}\). Considering the relative advantages of the monocular and stereo results, a hybrid probability volume \(P_{h}^{l}\) is generated by fusing them weighted by a half-object-edge map \(M_{hoe}^{l}\):
\[P_{h}^{l}=(1-M_{hoe}^{l})\odot P_{s}^{l}+M_{hoe}^{l}\odot P_{a}^{l}\quad. \tag{11}\]
\(M_{hoe}^{l}\) is a grayscale map for indicating the flat areas and the areas on one side of the object, where the binocular results
Figure 4: Multi-stage joint-training strategy. There are three steps in each training iteration, where TiO-Depth is trained for different tasks. The training at the current step could benefit from the results generated at the previous steps. The modules that are not optimized in each step are denoted in grey and _italic font_.
are more accurate experimentally:
\[M_{hoe}^{l}=M_{occ^{\prime}}^{l}\odot\min(\frac{\mathrm{maxpool}(\|k*D_{s}^{l}\|_ {1})}{t_{2}},1)\quad, \tag{12}\]
where '\(\mathrm{maxpool}(\cdot)\)' denotes a \(3\times 3\) max pooling layer with stride 1, '\(*\)' denotes the convolution operation, \(k\) is a \(3\times 3\) Laplacian kernel, and \(t_{2}\) is a predefined threshold. \(M_{occ^{\prime}}^{l}\) is an opposite occlusion mask obtained by treating the left-view disparity map as the right-view one when calculating the occlusion mask. KL divergence is employed to reflect the similarity between the final monocular probability volume \(P_{m}^{l}\) and \(P_{h}^{l}\), which is formulated as:
\[L_{dis}=\mathrm{KL}(P_{h}^{l}||P_{m}^{l})\quad. \tag{13}\]
Only the parameters in the SDFA blocks, the decoder block and the output layer are optimized at this step. Please see the supplemental material for more details about the training strategy and losses.
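The fusion and distillation of Eqs. (11) and (13) can be sketched as follows (a minimal illustration with our own naming; \(M_{hoe}\) is assumed to be precomputed via Eq. (12)):

```python
import torch
import torch.nn.functional as F

def distill_loss(P_s, P_a, V_m, M_hoe):
    """Hybrid-volume distillation (Eqs. (11) and (13)).

    P_s, P_a: (N, H, W) stereo / auxiliary monocular probability volumes.
    V_m: (N, H, W) raw logits of the final monocular disparity volume.
    M_hoe: (1, H, W) half-object-edge map in [0, 1].
    """
    P_h = (1 - M_hoe) * P_s + M_hoe * P_a          # Eq. (11): fused teacher
    log_P_m = F.log_softmax(V_m, dim=0)
    # Eq. (13): KL(P_h || P_m) per pixel, averaged over the image.
    kl = (P_h * (P_h.clamp_min(1e-8).log() - log_P_m)).sum(dim=0)
    return kl.mean()

N, H, W = 49, 8, 8
P_s = torch.softmax(torch.randn(N, H, W), dim=0)
P_a = torch.softmax(torch.randn(N, H, W), dim=0)
print(distill_loss(P_s, P_a, torch.randn(N, H, W), torch.rand(1, H, W)))
```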
## 4 Experiments
In this section, we train TiO-Depth on the KITTI dataset [18], and the evaluations are conducted on the KITTI, Cityscapes [11], and DDAD [25] datasets. For monocular depth estimation, the Eigen split [14] of KITTI is utilized, which consists of a training set with 22600 stereo pairs and a test set with 697 images. For binocular depth estimation, a training set with 28968 stereo pairs collected from KITTI is used for training as done in [7, 37, 56], while the training set of the KITTI 2015 stereo benchmark [43] is used for the evaluation, which consists of 200 image pairs. For exploring the generalization ability of TiO-Depth, Cityscapes and DDAD are used for an additional evaluation. Please see the supplemental material for more details about the datasets and metrics.
### Implementation details
TiO-Depth is implemented with the PyTorch [44] framework. The tiny-size modified Swin-transformer [39, 65] used as the monocular feature encoder is pretrained on the ImageNet dataset [48]. We set the minimum and maximum disparities to \(b_{\min}=2,b_{\max}=300\) for the discrete disparity volume, and the number of discrete disparity levels is set to \(N=49\). The weight parameters for the loss function are set to \(\lambda_{1}=0.0008,\lambda_{2}=0.008,\lambda_{3}=0.01\), and \(\lambda_{4}=0.01\), while we set \(\alpha=0.15\), \(t_{1}=1\), and \(t_{2}=0.13\). The Adam optimizer [34] with \(\beta_{1}=0.5\) and \(\beta_{2}=0.999\) is used to train TiO-Depth for 50 epochs. The learning rate is initially set to \(10^{-4}\), and is halved at epochs 20, 30, 40, and 45. At both the training and testing stages, the images are resized to a resolution of \(384\times 1280\), and we assume that the intrinsics of all the images are identical. On-the-fly data augmentations are performed during training, including random resizing (from 0.67 to 1.5) and cropping (256\(\times\)832), random horizontal flipping, and random color augmentation.
\begin{table}
\begin{tabular}{|l c c c|c c c c|c c c|} \hline Method & PP. & Sup. & Resolution & Abs. Rel. \(\downarrow\) & Sq. Rel. \(\downarrow\) & RMSE \(\downarrow\) & logRMSE \(\downarrow\) & A1 \(\uparrow\) & A2 \(\uparrow\) & A3 \(\uparrow\) \\ \hline R-MSFM6 [66] & & M & 320\(\times\)1024 & 0.108 & 0.748 & 4.470 & 0.185 & 0.889 & 0.963 & 0.982 \\ PackNet [25] & & M & 384\(\times\)1280 & 0.107 & 0.802 & 4.538 & 0.186 & 0.889 & 0.962 & 0.981 \\ SGDepth [35] & & M(Se.) & 384\(\times\)1280 & 0.107 & 0.768 & 4.468 & 0.186 & 0.891 & 0.963 & 0.982 \\ SD-SSMDE [46] & & M & 320\(\times\)1024 & 0.098 & 0.674 & 4.187 & 0.170 & 0.902 & 0.968 & 0.985 \\ \hline monoResMatch [52] & ✓ & S(SGM) & 384\(\times\)1280 & 0.111 & 0.867 & 4.714 & 0.199 & 0.864 & 0.954 & 0.979 \\ Monodepth2 [20] & ✓ & S & 320\(\times\)1024 & 0.105 & 0.822 & 4.692 & 0.199 & 0.876 & 0.954 & 0.977 \\ DepthHints [58] & ✓ & S(SGM) & 320\(\times\)1024 & 0.096 & 0.710 & 4.393 & 0.185 & 0.890 & 0.962 & 0.981 \\ SingleNet [7] & ✓ & S(S.T.) & 320\(\times\)1024 & 0.094 & 0.681 & 4.392 & 0.185 & 0.892 & 0.962 & 0.981 \\ FAL-Net [21] & ✓ & S & 384\(\times\)1280 & 0.093 & 0.564 & 3.973 & 0.174 & 0.898 & 0.967 & **0.985** \\ Edge-of-depth [67] & ✓ & S(SGM, Se.) & 320\(\times\)1024 & 0.091 & 0.646 & 4.244 & 0.177 & 0.898 & 0.966 & 0.983 \\ PLADE-Net [22] & ✓ & S & 384\(\times\)1280 & 0.089 & 0.590 & 4.008 & 0.172 & 0.900 & 0.967 & **0.985** \\ EPCDepth [45] & ✓ & S(SGM) & 320\(\times\)1024 & 0.091 & 0.646 & 4.207 & 0.176 & 0.901 & 0.966 & 0.983 \\ OCFD-Net [64] & ✓ & S & 384\(\times\)1280 & 0.090 & 0.563 & 4.005 & 0.172 & 0.903 & 0.967 & 0.984 \\ SDFA-Net [65] & ✓ & S & 384\(\times\)1280 & 0.089 & 0.531 & **3.864** & 0.168 & 0.907 & 0.969 & **0.985** \\ _TiO-Depth_ & & S & 384\(\times\)1280 & 0.085 & 0.544 & 3.919 & 0.169 & 0.911 & 0.969 & **0.985** \\ _TiO-Depth_ & ✓ & S & 384\(\times\)1280 & **0.083** & **0.521** & **3.864** & **0.167** & **0.912** & **0.970** & **0.985** \\ \hline DepthFormer (2F.) [26] & & M & 192\(\times\)640 & 0.090 & 0.661 & 4.149 & 0.175 & 0.905 & 0.967 & 0.984 \\ ManyDepth (2F.) [59] & & M & 320\(\times\)1024 & 0.087 & 0.685 & 4.142 & 0.167 & 0.920 & 0.968 & 0.983 \\ H-Net (Bino.) [31] & & S & 192\(\times\)640 & 0.076 & 0.607 & 4.025 & 0.166 & 0.918 & 0.966 & 0.982 \\ _TiO-Depth (Bino.)_ & & S & 384\(\times\)1280 & **0.063** & **0.523** & **3.611** & **0.153** & **0.943** & **0.972** & **0.985** \\ \hline \end{tabular}
\end{table}
Table 1: Quantitative comparison on the KITTI Eigen test set. \(\downarrow/\uparrow\) denotes that lower / higher is better. The best and second best results are in **bold** and underlined under each metric. The methods marked with ‘2F.’ predict depths by taking 2 frames from a monocular video as input, while the methods with ‘Bino.’ predict depths by taking stereo pairs as input. ‘PP.’ means using the post-processing step. The methods marked with ‘Se.’, ‘SGM’, and ‘S.T.’ are trained with semantic segmentation labels, the depth generated with SGM [29], and the depth predicted by a binocular teacher network, respectively.
### Comparative evaluation
For monocular depth estimation, we first evaluate TiO-Depth on the KITTI Eigen test set [14] in comparison with 4 methods trained on monocular video sequences (M) and 10 methods trained on stereo image pairs (S). The results of all the compared methods are cited from their original papers and reported in Tab. 1.
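The error and accuracy metrics in Tab. 1 (Abs. Rel., Sq. Rel., RMSE, logRMSE, and the A1-A3 accuracies) are the standard ones for the KITTI Eigen split. A NumPy sketch of their conventional computation is given below; it reflects the common formulation in this literature, not necessarily the exact evaluation script used here.

```python
import numpy as np

def eigen_depth_metrics(pred, gt):
    """Standard monocular depth metrics (Abs Rel, Sq Rel, RMSE, logRMSE, A1-A3).

    `pred` and `gt` are 1-D arrays of valid, positive depth values in metres.
    """
    ratio = np.maximum(gt / pred, pred / gt)
    a1 = (ratio < 1.25).mean()        # fraction of pixels within 25% of GT
    a2 = (ratio < 1.25 ** 2).mean()
    a3 = (ratio < 1.25 ** 3).mean()

    abs_rel = np.mean(np.abs(gt - pred) / gt)
    sq_rel = np.mean((gt - pred) ** 2 / gt)
    rmse = np.sqrt(np.mean((gt - pred) ** 2))
    log_rmse = np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2))
    return abs_rel, sq_rel, rmse, log_rmse, a1, a2, a3
```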
It can be seen that TiO-Depth with the post-processing step used in [65] outperforms all the comparative methods in most cases, including the methods trained with depth pseudo labels generated by additional algorithms or networks (SGM, S.T.). Since _the same_ TiO-Depth model can handle the binocular task by using its binocular path, we also report its binocular depth estimation performance ('Bino.') in comparison with 3 methods. As seen from Tab. 1, TiO-Depth achieves the top performance among all the comparative multi-frame (2F.) and binocular methods. Several visualization results of TiO-Depth and two comparative methods, EPCDepth [45] and SDFA-Net [65], are given in Fig. 5. As shown in the figure, the depth maps predicted by TiO-Depth are more accurate and contain more delicate geometric details, and the performance of TiO-Depth is further improved by taking stereo pairs as input. These results demonstrate that TiO-Depth can predict accurate depths from both monocular and binocular inputs.
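The post-processing step ('PP.') can be sketched as follows, assuming the flip-and-average scheme that is common in this line of work; whether [65] implements exactly this variant is an assumption made for illustration.

```python
import torch

def flip_post_process(model, image):
    """Average the disparity predicted for an image and its horizontal flip.

    A sketch of the flip-based post-processing commonly used in
    self-supervised depth estimation; the exact variant used in [65]
    may differ.
    """
    disp = model(image)                                  # (B, 1, H, W)
    disp_flipped = model(torch.flip(image, dims=[3]))    # predict on mirrored input
    disp_from_flip = torch.flip(disp_flipped, dims=[3])  # mirror prediction back
    return 0.5 * (disp + disp_from_flip)

# Illustrative usage with a stand-in model and a random image.
model = torch.nn.Conv2d(3, 1, kernel_size=3, padding=1)
image = torch.randn(1, 3, 384, 1280)
print(flip_post_process(model, image).shape)  # torch.Size([1, 1, 384, 1280])
```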
For binocular depth estimation, we evaluate TiO-Depth on the KITTI 2015 training set [43] in comparison with 5 self-supervised binocular depth estimation methods. It is noted that none of the comparative methods can handle the monocular task. As seen from the corresponding results shown in Tab. 2, TiO-Depth outperforms all the methods trained with stereo pairs (S) or stereo videos (MS) in most cases, and it achieves performance comparable to StereoNet-D [7], which benefits from an additional monocular depth estimation model, whereas the performance of TiO-Depth is boosted by the model itself. The monocular depth estimation results of _the same_ TiO-Depth model are also given
\begin{table}
\begin{tabular}{|l c c|c c c c|c c c|c c|} \hline Method & Sup. & Resolution & Abs. Rel. \(\downarrow\) & Sq. Rel. \(\downarrow\) & RMSE \(\downarrow\) & logRMSE \(\downarrow\) & A1 \(\uparrow\) & A2 \(\uparrow\) & A3 \(\uparrow\) & EPE-all \(\downarrow\) & D1-all \(\downarrow\) \\ \hline MonoDepth [19] & S & 256\(\times\)512 & 0.068 & 0.835 & 4.392 & 0.146 & 0.942 & 0.978 & 0.989 & - & 9.194 \\ UnOS (Stereo-only) [56] & S & 256\(\times\)832 & 0.060 & 0.833 & 4.187 & 0.135 & 0.955 & 0.981 & 0.990 & - & 7.073 \\ UnOS (Full) [56] & MS & 256\(\times\)832 & 0.049 & 0.515 & 3.404 & 0.121 & 0.965 & 0.984 & 0.992 & - & **5.943** \\ Liu _et al._ [37] & S & 256\(\times\)832 & 0.051 & 0.532 & 3.780 & 0.126 & 0.957 & 0.982 & 0.991 & 1.520 & 9.570 \\ Flow2stereo [38] & MS & 384\(\times\)1280 & - & - & - & - & - & - & - & 1.340 & 6.130 \\ StereoNet [7] & S & 320\(\times\)1024 & 0.052 & 0.558 & 3.733 & 0.123 & 0.961 & 0.984 & 0.992 & - & - \\ StereoNet-D [7] & S* & 320\(\times\)1024 & **0.048** & 0.482 & 3.393 & 0.105 & **0.969** & **0.989** & **0.994** & - & - \\ _TiO-Depth_ & S & 384\(\times\)1280 & 0.050 & **0.434** & **3.239** & **0.104** & 0.967 & 0.987 & **0.994** & **1.282** & 6.647 \\ \hline SingleNet (Mono.) [7] & S(S.T.) & 320\(\times\)1024 & 0.083 & 0.688 & 4.464 & 0.154 & 0.904 & 0.972 & 0.990 & - & - \\ _TiO-Depth (Mono.)_ & S & 384\(\times\)1280 & 0.075 & 0.458 & 3.717 & 0.130 & **0.925** & 0.979 & 0.992 & 2.203 & 17.860 \\ _TiO-Depth (Mono.)+PP_ & S & 384\(\times\)1280 & **0.073** & **0.439** & **3.680** & **0.128** & **0.925** & **0.980** & **0.993** & **2.158** & **17.570** \\ \hline \end{tabular}
\end{table}
Table 2: Quantitative comparison on the KITTI 2015 training set. The methods marked with ‘Mono.’ predict depths by taking a single image as input, while the other methods predict depths from stereo pairs. ‘S*’ denotes that the method is jointly trained with a separate monocular model.
Figure 5: Visualization results of EPCDepth [45], SDFA-Net [65] and our TiO-Depth on KITTI. The input stereo pairs are shown in the first column, where the left-view images are used for monocular depth estimation. The predicted depth maps with the corresponding ‘Abs. Rel.’ error maps calculated on an improved Eigen test set [53] are shown in the following columns. For the error maps, red indicates larger error, and blue indicates smaller error as shown in the color bars.
in Tab. 2, which show that it effectively handles the monocular task at the same time, further indicating the effectiveness of TiO-Depth as a two-in-one model.
Furthermore, we train TiO-Depth on KITTI [18] and evaluate it on DDAD [25] and Cityscapes [11] to test its cross-dataset generalization ability. The corresponding results of TiO-Depth and 6 comparative methods are reported in Tab. 3. As shown in the table, TiO-Depth not only performs best among the methods evaluated in a cross-dataset manner, but also achieves competitive performance with the methods trained and tested on the same dataset. When stereo pairs are available, TiO-Depth can predict more accurate binocular depths by taking the image pairs as input. These results demonstrate the generalization ability of TiO-Depth on unseen datasets. Please see the supplemental material for additional experimental results.
### Ablation studies
This subsection verifies the effectiveness of each key element in TiO-Depth by conducting ablation studies on the KITTI dataset [18].
**Dual-path decoder.** We first replace the proposed Monocular Feature Matching (MFM) modules with concatenation-based modules (Cat modules) and cross-attention-based modules without the SE layer (Attn modules), respectively. The corresponding results are shown in the first part of Tab. 4, which show that TiO-Depth (with MFM (321)) performs best compared with the models using the other modules. Then, the impact of the number of MFMs is shown in the second part of Tab. 4. It can be seen that the binocular performance is gradually improved by using more MFMs in most cases. The monocular depth estimation results of TiO-Depth with/without the 'final branch (FB.)' in the SDFA modules are shown in the last two rows of Tab. 5, where the performance of TiO-Depth with the final branches is much better than that of the model without them. We notice that the switchable branches are important for TiO-Depth to improve the monocular results, but the SDFA block is not a necessary choice. Please see the supplemental material for more experimental results and discussions. Considering that the three MFMs contain only 1.7M parameters in total, these results indicate the effectiveness of the dual-path decoder with MFMs in the two tasks.
**Multi-stage joint-training strategy.** We first analyze the impact of each term in the stereo loss \(L_{S}\) on binocular depth estimation by sequentially removing the disparity guidance loss term \(L_{gui}\), the cost volume loss term \(L_{cos}\), and the occlusion mask \(M_{occ}\) used in \(L_{rec2}\). The corresponding results in the third part of Tab. 4 show that the performance of the model drops when the loss terms and the mask are removed. Then we train TiO-Depth with different numbers of steps and pseudo labels to validate the effectiveness of the training strategy for monocular depth estimation in Tab. 5. As shown in the table, the monocular performance is not improved by just training TiO-Depth on the two tasks without distillation (_i.e._, with '1+2' steps), but it is improved in most cases by training with three steps. Compared with using the stereo probability volume \(P_{s}^{l}\), the accuracy of the monocular results is consistently improved by using the hybrid probability volume \(P_{h}^{l}\) in the distillation loss \(L_{dis}\). These results demonstrate that our training strategy helps TiO-Depth learn more accurate monocular and binocular depths.
## 5 Conclusion
In this paper, we propose TiO-Depth, a two-in-one depth prediction model for both the monocular and binocular self-supervised depth estimation tasks, while a multi-stage joint
\begin{table}
\begin{tabular}{|l|c c c|c c|} \hline Methods & Abs. Rel. \(\downarrow\) & Sq. Rel. \(\downarrow\) & A1 \(\uparrow\) & EPE \(\downarrow\) & D1 \(\downarrow\) \\ \hline w. Cat module (321) & 0.069 & 0.505 & 0.947 & 2.074 & 15.952 \\ w. Attn module (321) & 0.053 & 0.439 & 0.965 & 1.377 & 7.421 \\ \hline w. MFM (1) & 0.054 & **0.423** & 0.960 & 1.483 & 8.784 \\ w. MFM (21) & 0.052 & 0.445 & 0.965 & 1.305 & 7.077 \\ TiO-Depth & **0.051** & 0.429 & **0.966** & **1.281** & **6.684** \\ \hline w/o. \(L_{gui}\) & 0.053 & 0.506 & **0.966** & 1.292 & 6.984 \\ w/o. \(L_{gui}\), \(L_{cos}\) & 0.053 & 0.522 & 0.965 & 1.326 & 6.755 \\ w/o. \(L_{gui}\), \(L_{cos}\), \(M_{occ}\) & 0.054 & 0.565 & 0.963 & 1.345 & 7.159 \\ \hline \end{tabular}
\end{table}
Table 4: Binocular depth estimation results on the KITTI 2015 training set in the ablation study. The numbers in the method names indicate the indexes of the modules used, as shown in Fig. 2. All results are evaluated after training for 30 epochs.
\begin{table}
\begin{tabular}{|l c c|c c c c|} \hline Method & train & test & Abs. Rel. \(\downarrow\) & Sq. Rel. \(\downarrow\) & RMSE \(\downarrow\) & A1 \(\uparrow\) \\ \hline PackNet [25] & D & D & 0.173 & 7.164 & 14.363 & 0.835 \\ ManyDepth (2F.) [59] & D & D & 0.146 & 3.258 & 14.098 & 0.822 \\ DepthFormer (2F.) [26] & D & D & **0.135** & 2.953 & **12.477** & **0.836** \\ _TiO-Depth_ & K & D & 0.144 & **2.664** & 14.273 & 0.808 \\ \hline MonoDepth2 [20] & C & C & 0.129 & 1.569 & 6.876 & 0.849 \\ Li [36] & C & C & 0.119 & 1.290 & 6.980 & 0.846 \\ ManyDepth (2F.) [59] & C & C & **0.114** & 1.193 & 6.223 & **0.875** \\ SD-SSMDE [46] & C & C & **0.114** & **1.017** & **5.949** & 0.870 \\ MonoDepth2 [20] & K & C & 0.153 & 1.785 & 8.590 & 0.774 \\ SD-SSMDE [46] & K & C & 0.143 & 1.635 & 8.441 & 0.789 \\ _TiO-Depth_ & K & C & **0.120** & **1.176** & **7.157** & **0.850** \\ \hline _TiO-Depth (Bino.)_ & K & C & 0.066 & 0.423 & 4.070 & 0.961 \\ \hline \end{tabular}
\end{table}
Table 3: Quantitative comparison on DDAD and Cityscapes. ‘C’, ‘K’, and ‘D’ denote that the methods are trained or tested on the Cityscapes, KITTI and DDAD datasets, respectively.
\begin{table}
\begin{tabular}{|l c c|c c c c|} \hline Steps & \(L_{dis}\) & FB. & Abs. Rel. \(\downarrow\) & Sq. Rel. \(\downarrow\) & RMSE \(\downarrow\) & A1 \(\uparrow\) \\ \hline
1 & - & - & 0.088 & 0.556 & 4.093 & 0.904 \\
1+2 & - & - & 0.088 & 0.557 & 4.067 & 0.906 \\
1+2+3 & \(P_{s}^{l}\) & \(\checkmark\) & 0.086 & 0.590 & 4.021 & **0.911** \\
1+2+3 & \(P_{h}^{l}\) & \(\checkmark\) & **0.085** & **0.544** & **3.919** & **0.911** \\
1+2+3 & \(P_{h}^{l}\) & - & 0.098 & 0.695 & 4.367 & 0.892 \\ \hline \end{tabular}
\end{table}
Table 5: Monocular depth estimation results on the KITTI Eigen test set in the ablation study. ‘FB.’ denotes using the final branches.
training strategy is explored for training it. The full TiO-Depth is used to predict depths from stereo pairs, while the partial TiO-Depth, obtained by closing the duplicate parts, can predict depths from single images. The experimental results on monocular and binocular depth estimation not only prove the effectiveness of TiO-Depth but also indicate the feasibility of bridging the gap between the two tasks.
**Acknowledgements.** This work was supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDA27040811), the National Natural Science Foundation of China (Grant Nos. 61991423, U1805264), the Beijing Municipal Science and Technology Project (Grant No. Z211100011021004).
|
2310.13192 | The opaque law of artificial intelligence | The purpose of this paper is to analyse the opacity of algorithms,
contextualized in the open debate on responsibility for artificial intelligence
causation; with an experimental approach by which, applying the proposed
conversational methodology of the Turing Test, we aim to evaluate the
performance of one of the best existing NLP models of generative AI (Chat-GPT)
to see how far it can go right now and what shape a legal regulation of
it could take. The analysis of the problem is supported by a commentary on
classical Italian law categories such as causality, intent and fault, to
understand the problem of the usage of AI, focusing in particular on
human-machine interaction. On the computer science side, for a technical view
of the logic used to craft these algorithms, the second chapter proposes a
practical interrogation of Chat-GPT aimed at finding some critical points in
the functioning of AI. The end of the paper concentrates on some existing
legal solutions which can be applied to the problem, plus a brief description
of the approach proposed by the EU Artificial Intelligence act. | Vincenzo Calderonio | 2023-10-19T23:02:46Z | http://arxiv.org/abs/2310.13192v2 | # The opaque law of artificial intelligence
###### Abstract
The purpose of this paper is to analyse the opacity of algorithms, contextualized in the open debate on responsibility for artificial intelligence causation; with an experimental approach by which, applying the proposed conversational methodology of the Turing Test, we aim to evaluate the performance of one of the best existing NLP models of generative AI (Chat-GPT), to see how far it can go right now and what shape a legal regulation of it could take. The analysis of the problem is supported by a commentary on classical Italian law categories such as causality, intent and fault, to understand the problem of the usage of AI, focusing in particular on human-machine interaction. On the computer science side, for a technical view of the logic used to craft these algorithms, the second chapter proposes a practical interrogation of Chat-GPT aimed at finding some critical points in the functioning of AI. The end of the paper concentrates on some existing legal solutions which can be applied to the problem, plus a brief description of the approach proposed by the EU Artificial Intelligence act.
## 1 The black box problem
The emerging usage of AI algorithms in society creates questions about their legal definition within traditional legal systems. Some steps forward are being made with the proposed Artificial Intelligence Act of the European Commission[12], which establishes a legal definition of AI (art. 3,1 AI act1), albeit an unclear one, with specific mention of the machine learning techniques which are the real innovation in the field.
Footnote 1: “‘Artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I [machine learning] and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”
The natural phenomenon that creates the issue of opacity and the consequent responsibility gap is the black box effect2, a product of the machine learning technology involved in the process of creating AI. These algorithms work as deterministic objects from an external point of view, because the output is always determined by the data given as inputs; but the internal mechanism is unpredictable and generates outputs by displaying autonomous behaviour in combining the given data to reach the selected goal. This complexity in the functioning of AI algorithms is the main reason why we feel inclined to say that in some cases humans can hardly be held responsible for AI algorithmic execution.
Footnote 2: _Generally, the Black Box Problem can be defined as an inability to fully understand an AI’s decision-making process and the inability to predict the AI’s decisions or outputs_ from p. 905 of YAVAR BATHAEE, _The artificial intelligence black box and the failure of intent and causation_, Harvard Journal of Law & Technology Volume 31, Number 2 Spring 2018[2].
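Before turning to a legal example, the point can be made concrete with a toy sketch of my own (illustrative only, not drawn from the cited literature): a trained network answers the same input with the same output every time, so it is deterministic from the outside, yet its hundreds of learned weights encode no humanly readable rule for why.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# A tiny stand-in for a trained model: its behaviour is entirely fixed by its weights.
model = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 2)).eval()

x = torch.randn(1, 8)
with torch.no_grad():
    y1, y2 = model(x), model(x)

print(torch.equal(y1, y2))  # True: same input, same output (deterministic outside)
print(sum(p.numel() for p in model.parameters()))  # 706 weights, no readable rule inside
```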
Let's make a brief example: a cat is a creature with free will, like humans, and by this I mean that they are both indeterministic objects, in the sense that their actions are not completely determined by the inputs given to them. If a cat breaks a flower vase, it is an action of its own, yet from a legal point of view the only possible responsible party is the owner. That's because the owner of a cat has the obligation to stay in control of his pet, for the reason that our legal systems are designed on human values which are not relevant to an animal's actions. Animals, in fact, are objects of law, not subjects. AI algorithms too are objects of law, as stated by the AI act, but their position is completely different from a substantial point of view: AI can't act independently, in contrast to a living creature of some degree of complexity. For example, you can expect from a natural language processing (NLP) model3 a lot of different and strange words as output if given the right prompt, but at the same time that AI will not do anything other than give you back an answer, because it is not independent and only responds to input given by humans.
Footnote 3: NLP are “_Techniques used by large language models to understand and generate human language, including text classification and sentiment analysis_”, https://www.nytimes.com/article/ai-artificial-intelligence-glossary.html.
So, the relevance of the opacity of algorithms for legal systems is related to the degree of control that a human agent can have over the output generated by artificial intelligence. The more the human agent is in control of the AI, the more clearly responsibility can be attributed to someone, closing the gap between human agency and AI execution4. At the same time it is not always possible to be fully in control of an AI's algorithmic execution: these algorithms can execute only a few predetermined tasks but, because of the machine learning technique used to implement them, within the perimeter of the given task the AI can be fully autonomous in generating outputs, so the human can't control this part of the process.
Footnote 4: That’s the basic assumption of the meaningful human control (MHC) theory, which correlates control with responsibility, highlighting the necessity of being capable of governing the event for the attribution of responsibility; _see_ FILIPPO SANTONI DE SIO and GIULIO MECACCI, _Four Responsibility Gaps with Artificial Intelligence: Why they Matter and How to Address them_, Philosophy & Technology (2021) 34:1057–1084[23].
From a legal perspective the problem which arises is a problem of responsibility: when can someone be responsible for AI execution, even considering the impossibility of controlling some parts of the process because of the black box effect?
In order to answer the question it can be useful to refer to the scheme below[1], which represents the interruption of the juridical causal nexus operated by artificial intelligence and the consequent responsibility gap. This paradigm could be the starting point for an intuitive description of the problem of opacity. In this scheme there are two lines: the first one (A-B-C) represents the material and scientific causal nexus between human input (A), AI execution (B) and the generated event (C), which in the abstract could be covered by law, more precisely artt. 40 and 41 of the Italian criminal code; the second line (A-B) represents the psychological element as described by Italian criminal law, in particular by artt. 42 and 43 c.p.
The two lines[2] represent the two different elements required by the Italian penal code to determine whether someone can be held responsible for an event: the causal nexus and the psychological element. Both are necessary to declare someone responsible _beyond reasonable doubt5_.
Footnote 5: “The judge pronounces a sentence of conviction if the defendant is found guilty of the charged offence beyond any reasonable doubt”, art. 533 c.p.p.; see also Corte di Cassazione penale, Sez. 5, Sentenza n. 25272 del 19/04/2021: “The canon of ‘beyond any reasonable doubt’ describes an indispensable evaluative attitude that must guide the judge in the analysis of the evidence towards a final and unitary reading, vivified by the required threshold of conviction and which, deriving immediately from the principle of the presumption of innocence, produces its conforming effects not only on the application of the rules of judgment but also, more generally, on the methods of ascertaining the fact.”
The problem here is that the causal nexus proceeds from A to C without a proper interruption because, at least from a scientific point of view, it is possible to describe what happens between the human action which activates the AI and the event generated by the artificial intelligence execution; on the other side the psychological nexus suffers an interruption, because while the activation of the AI is intentionally determined by human action, the same cannot be said for every artificial intelligence output. The reason why is the black box effect. In fact, the event generated by AI execution can't be entirely predetermined in advance, so, if the event creates damage, the natural question which arises is why the
Figure 1: Graphical representation of the responsibility gap
human operator (A) can be judged responsible. The answer is that there is a gap between human action (A-B) and artificial intelligence execution (B-C), by which AI execution becomes almost impossible to attribute to human action, given the opacity of the algorithm. This statement can be problematic for both of the two parts of AI execution described before.
For the verification of the causal nexus, the autonomous artificial intelligence execution (B) adds something more to the causal series started by the human action (A). So, the statement of art. 40 of the Italian criminal code (c.p.), "_Nessuno può essere punito per un fatto preveduto dalla legge come reato, se l'evento dannoso o pericoloso, da cui dipende l'esistenza del reato, non è conseguenza della sua azione od omissione - Nobody can be punished for a fact qualified by law as a crime, if the dangerous event, on which the existence of the crime depends, is not a consequence of his action or omission_", results in a judgement of responsibility where in some cases the AI execution can't be qualified as a natural consequence of human action, because the autonomous algorithm of the AI could possibly create unpredictable outputs6. Moreover, if we refer to the traditional categories of intentionality in criminal law, art. 42 c.p. states that "_Nessuno può essere punito per un'azione od omissione preveduta dalla legge come reato, se non l'ha commessa con coscienza e volontà - Nobody can be punished for an action or omission provided by law as a crime if he has not committed it with consciousness and intention_". On this point the possibility of connecting the human action (A) to the event (C) depends on the circumstances of the causal series and the psychological element which accompanied the human action.
Footnote 6: See on this topic the theory of Antolisei, used to check whether certain events are ascribable to human action: once again in the criminal field we owe to Antolisei the elaboration of a peculiar theory, defined as causality as dominion over the fact, in which the problem of the causal nexus is framed precisely as the human being's dominion over the fact, in order to determine whether the caused event can be said to be the agent's work; prominence is therefore given to the human capacity to determine and control the event, in the absence of which the event cannot be deemed the agent's work. The human being, in fact, is not only subject to natural dynamics but, through his own will and actions, can modify reality and thus in turn take his place among the causes that concur to produce a given event under consideration. In his actions the human being must of course deal with physical and natural laws but, within these limits, he can be held fully responsible for a given occurrence in which he participates actively or by omission. https://www.treccani.it/enciclopedia/nesso-causale-dir-civ_(Diritto-on-line).
So, we'll have the following tripartition as stated by art. 43 c.p.:
1. Intent: when the human action (A), which caused the event (C), was committed with intention. This is the simplest case, because the AI execution (B) is only a part of a causal series where the human agent, from the starting point, had in mind the causation of the event (C), so in the end the human can be held fully responsible.
2. Fault: when the event (C) is an accidental consequence of the human action (A) and this action was unintentional, even though the event was at least predictable. This category is one of the most challenging and it is the one to which the proposed AI act refers when making its distinction between three levels of risk of AI (unacceptable risk, high risk and low risk). With this distinction the AI act creates a legal framework for the traceability of algorithms by establishing various requirements for high-risk AI systems, such as record keeping and human oversight, which constitute a legal basis for the accountability of providers and users and can also be used in criminal proceedings as instruments to verify human responsibility.
3. Unpremeditated action (action beyond intention): when the event (C) is more serious than the one intended by the human agent (A). This last case can also be problematic because, like the previous one, the autonomy of the AI (B) can create outputs that amplify the event pursued by the human agent.
In this scenario, the intent case is the easiest one, but events where fault or unpremeditated action are implied may create a gap of responsibility. In fact, as stated in the 2020 Report on the safety and liability implications of Artificial Intelligence, "_It would be unclear how to demonstrate the fault of an AI acting autonomously, or what would be
Figure 2: Two elements of responsibility
considered the fault of a person relying on the use of AI_"[11]. Furthermore, this gap isn't covered by any existing law, apart from the recent AI act7.
Footnote 7: I’ll discuss in depth the _rebuttable presumption of a causal link in the case of fault_ established by art. 4 of European Commission, _Proposal for a directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive)_, COM(2022) 496 final, Brussels, 28.9.2022[13]
Returning to the previous paradigm, it could be said that the causal nexus is continuous even if it is not completely dependent on human action: if we apply to the causal series (A-B-C) the traditional formula of the _condicio sine qua non8_, it appears that the human action (A) is the necessary condition for the occurrence of the event (C), and the artificial intelligence execution (B), even if not completely dependent on the human, consists of an automatic execution that only elaborates and extends the human action which started the series. That's because, as we said in the beginning, there is a major difference between AI and living beings: the former isn't capable of acting freely and independently. On the other hand the same can't be said for the continuity of the psychological nexus. As a consequence of the black box effect, a human can't be entirely aware of how the logic of an AI will adapt to the input after the training phase. For this reason the nexus represented is only the human one (A-B), because a human agent is fully in control only of the input given to the AI, and can only predict with some degree of uncertainty the probability of the desired outcome (B-C).
Footnote 8: The equivalent in common law is the _but-for test_; see on this point Y. BATHAEE, _The artificial intelligence black box_ and BARBARA A. SPELLMAN and ALEXANDRA KINCANNON, _The relation between counterfactual (“but for”) and causal reasoning: experimental findings and implications for jurors’ decisions_, Law and Contemporary Problems, Autumn, 2001, Vol. 64, No. 4, Causation in Law and Science (Autumn, 2001), pp. 241-264[26].
That distinction, based on the probability of the event and the culpability of the agent, certainly resonates with the Franzese case (Cass. Pen. SS.UU. n. 30328 del 10/07/2002)[27], which is still one of the major debates on causality in Italian law. The themes and the particular case are almost the same because, when we talk about responsibility for AI damage causation in criminal law, the formula is the one for omissive crimes9. In fact, by following the scheme illustrated above it becomes possible to understand that, if the factual situation consists of a human action (A) which creates an AI system and gives it an input, and the system then automatically executes the input (B) and creates the damage (C), the responsibility of the human agent lies in not having been careful in creating this system or in giving it the right input. That is also, in part, the paradigm of the risk-based approach used as the legal basis for the EU's proposed AI act.
Footnote 9: As a normative example, consider this statement: because you were negligent in the programming of an AI, it is your fault if the harmful event happened.
Footnote 10: For a brief history of artificial intelligence _see_ https://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/ and https://en.wikipedia.org/wiki/History_of_artificial_intelligence.
The problem of artificial intelligence causation in law seems to be the difficulty of deciding which parameter should be used to establish a legal ground for the responsibility of human operators. In this sense it could be useful to properly understand the black box effect and how machine learning techniques are used to implement AI algorithms; to do so, legal operators need to dive into the world of algorithms.
## 2 Generative AI
In the last ten years the artificial intelligence field has been through one of its summers. One of its most important early peaks was achieved with Deep Blue's victory against the world chess champion Garry Kasparov in 1997, and almost 30 years have passed between that moment and the current flourishing of large language models (LLM) and generative AI10.
Generative AI, also referred to as _foundation models_, is a type of artificial intelligence which "_creates content -- including text, images, video and computer code -- by identifying patterns in large quantities of training data, and then creating original material that has similar characteristics_"11. This activity of pattern recognition plays a central role in content generation and lends itself well to expressing the functioning and potential of algorithms; that's because recognizing a pattern, which is a regularity in the world, is a computable function, and for this reason it is an activity which can be conducted by a machine as long as you feed it with a massive amount of data. Just as Deep Blue learned to play chess by being fed massive amounts of data, the same is happening with generative AI models. Big data taken from the internet allows these models to access a lot of information that they can combine for content generation. The internal mechanism which made the realization of this kind of task possible is the _transformer_, a machine learning model based on the _attention mechanism12_, which uses an encoder/decoder architecture to mimic cognitive attention and improve the performance of large language models (LLM) in translation and in the creation of content based on natural language.
Footnote 11: https://www.nytimes.com/article/ai-artificial-intelligence-glossary.html.
Footnote 12: See the turning-point paper on the matter, which changed the state of the art and opened the possibility for the creation of the current generative models of OpenAI and Google: ASHISH VASWANI et al., _Attention is all you need_, 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, arXiv:1706.03762[30].
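For readers curious about the code-level shape of this mechanism, here is a minimal sketch of the scaled dot-product attention at the core of the transformer; it follows the formula of Vaswani et al., while the tensor sizes are illustrative assumptions of mine.

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V  (Vaswani et al., 2017)."""
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / d_k ** 0.5  # similarity of each query to each key
    weights = F.softmax(scores, dim=-1)            # attention distribution over positions
    return weights @ V                             # weighted sum of the values

# Illustrative shapes: a sequence of 5 tokens with 16-dimensional embeddings.
x = torch.randn(1, 5, 16)
out = scaled_dot_product_attention(x, x, x)  # self-attention over the sequence
print(out.shape)  # torch.Size([1, 5, 16])
```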
Large language models (LLM) such as GPT by OpenAI and BERT by Google are "_a type of neural network that learns skills -- including generating prose, conducting conversations and writing computer code -- by analysing vast amounts of text from across the internet_"13, fed mostly with the Wikipedia corpus and the Common Crawl archive as a dataset. These models are today the most important recent success in the artificial intelligence field. Only twenty years ago natural language processing (NLP) was the biggest challenge for AI, whereas today we have functioning algorithms which are capable of holding a conversation, resolving simple tasks such as Q&A, coding, and crafting text content.
Footnote 13: https://www.nytimes.com/article/ai-artificial-intelligence-glossary.html.
The introduction of AI into the realm of human interaction puts the black box problem squarely in the real world. Much has been said concerning the risks of generative AI[17], and the capability of creating content appears to be a critical point in the discussion of AI causation and the consequent responsibility gap. The possibility of creating something new, even if based on recognized patterns, implies an extension of the AI's contribution to causal dynamics. If we refer this to the current Italian norms on concurrence (art. 41 c.p.), the result is an amplification of legal uncertainty about how much the human could be responsible for tasks accomplished with the usage of artificial intelligence.
Moreover, foundation models are often referred to as _artificial general intelligence_ (AGI)[6], as will be shown in the following paragraphs. This is a controversial definition which has even been at the centre of an amendment to the recent artificial intelligence act of the EU14, because it is unclear what AGI could really be, how many tasks and capabilities an AI should have to be defined as general, and whether it could really be possible in reality, given that many authors and programmers still believe that AGI, with human-level performance, could be capable of understanding inputs in the proper meaning of the statement[6]. But, before discussing these issues, the next paragraphs will show some concrete evolution of AI NLP task resolution and the deep bond which connects the current applications of AI to their origin, with the scope of highlighting a critical point in the definition of AI before going in depth into the analysis of possible legal techniques to regulate harmful outcomes generated by AI.
Footnote 14: _See_ Council of the EU, Text de compromis de la presidence - Article 3, paragraphe 1 ter, Articles 4 bis à 4 quater, Annexe VI (3) et (4), considerat 12 bis bis, Dossier interinstitutionnel: 2021/0106(COD), Bruxelles, 13/05/2022.
### 2.1. Natural language processing (NLP)
Natural language processing is a flourishing field in the process of AI development. NLP is a complex activity which does not involve only computer science: it is a major component of law, politics and philosophy. It could be said that natural language is the architecture of humanistic culture and, for decades, it was a challenge for machines to understand and compute human natural language because of its informality and semantic complexity. The existence of formal rules that can be written into computer algorithms, by which they become capable of formulating natural language, is the accomplishment of at least seventy years of research, since the publication of _Computing machinery and intelligence[29]_, where Alan Turing opened for the first time in history the possibility that a machine could be capable of mimicking natural human language.
Current applications of machine learning models based on LLMs, such as Chat-GPT and BARD, are today in a position to master natural language with human-level performance in particular tasks15. An example is GPT-4, currently one of the best multimodal models in the field of LLMs16. It is the development of the well-known Chat-GPT based on GPT-3.5 technology, and it has surpassed its predecessor by far with new techniques of multimodal learning which allow it to accept different types of inputs other than text, such as images. This new feature adds something more to the traditional paradigm introduced by Alan Turing with his experiment of the Turing Test. The Turing Test can be considered the first theoretical model designed for the performance evaluation of natural language processing for machines, assuming that, because of the impossibility of answering the question "_can machines think?_", a more proper method to compare computation to human thinking could be a test where an AI conversational agent tries to simulate the performance of a human in a conversation, in such a way as to persuade a human interlocutor that he is speaking to another human.
Today this paradigm is changing, but it seems that the idea of reproducing a certain degree of human capability for natural language processing has been the goal of the AI research field since its foundation, and if you interact with GPT technology today you may be convinced that this goal has been reached17. However, if one thinks about intelligence, natural language processing is only a part of the complex spectrum of actions and features that it comprehends. To give some brief examples, Albert Einstein, Pablo Picasso, Mao Zedong and Giacomo Leopardi are all considered intelligent, but the types of intelligence that their works display are completely different from each other. So to be considered intelligent you need at least to be capable of understanding, thinking, imagining, realizing, interacting and so on. And regarding these capabilities there is one skill that is crucial today to interact in the social network world: understanding memes.
Footnote 17: In the section _Potential for Risky Emergent Behaviors_ of the _GPT-4 technical report[20]_, pp. 55-56, it is described how GPT-4 induced a TaskRabbit worker to solve a CAPTCHA for it by pretending to be a blind man. This result surely constitutes an example of a successfully passed Turing test.
A meme is an information pattern which spreads among people on the web. Originally the word meme was coined by Richard Dawkins in the sense of "_a unit of cultural transmission, or a unit of imitation_"18, with the specific characteristic of being extremely contagious. The definition later acquired a deeper meaning and context with the digital practice of producing images with text, often a joke with a cultural reference on the internet, representing a common-sense idea or perception19. Let's see in the next example how GPT-4 approaches this complex task, which requires several of the skills from the list cited before[3].
Footnote 18: RICHARD DAWKINS, _The selfish gene_, Oxford: Oxford University Press, 1989, p. 192 where there is also a description of the function which meme play in diffusion of cultural ideas or patterns “_Just as genes propagate themselves in the gene pool by leaping from body to body via sperms or eggs, so memes propagate themselves in the meme pool by leaping from brain to brain via a process which, in the broad sense, can be called imitation_”[10].
Footnote 19: See on the topic LINDA K. BORZSEI, _Makes a Meme Instead, A Concise History of Internet Memes_, Utrecht University, February 2013[4]; and, for a definition of what a digital meme is, “_An Internet meme is a piece of culture, typically a joke, which gains influence through online transmission_”, PATRICK DAVISON, _The Language of Internet Memes_, The Social Media Reader, ed. Michael Mandiberg, pp. 120-134. Web[9]
In this example GPT-4 extrapolates from the images the data necessary for the explanation required by the prompt. This process is possible through a machine learning technique called _embedding_, by which the computer represents the pixels of images as vectors in a lower-dimensional space20 and, through this operation, converts the same piece of information into text output. This operation seems extraordinary but, considered in the light of the computational explanation given before, it is no more than mere _calculus_. Nor is meme recognition as much of a challenge for AI as it seems, because a meme is still a pattern, even though a cultural one; the machine simply skips all understanding of the cultural values and meanings of the meme and reduces it to a piece of data which can be computed. And the same happens with the reverse methodology, by asking Chat-GPT to produce a meme[4].
Footnote 20: See DOUWE KIELA et al., _Learning Image Embeddings using Convolutional Neural Networks for Improved Multi-Modal Semantics_, Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 36-45, October 25-29, 2014, Doha, Qatar.[18]
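As an illustration of the embedding step, here is a sketch of mine using a generic pretrained CNN as the image encoder; this encoder is an assumption chosen for the example, since GPT-4's actual image encoder is not public.

```python
import torch
from torchvision import models

# A pretrained ResNet-18 with its classification head removed maps any
# 224x224 RGB image to a 512-dimensional embedding vector. This is an
# illustrative stand-in for the (non-public) encoder used by GPT-4.
resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
encoder = torch.nn.Sequential(*list(resnet.children())[:-1]).eval()

# A random tensor stands in for a preprocessed meme image (batch, RGB, H, W).
image = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    embedding = encoder(image).flatten()

print(embedding.shape)  # torch.Size([512]): pixels reduced to a single vector
```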
The funny thing about this meme is the strange output, which is completely at odds with common sense. I don't know whether it sounds strangely funny because it was a machine that wrote "I'm black and proud, baby! Excellence is in my DNA", or whether, if this scheme were filled with cultural content, it could become a real meme. But what this experiment highlights one more time is the distinction between syntax and semantics as described in the Chinese room argument by John Searle[24]. The algorithm really did the job of creating a meme, but it is not a real meme because it is lacking something which is probably not a computable function. This _quid pluris_ is the semantic understanding of the context and the irony of a meme, which makes it relatable to a community which shares the same common sense21. Following John Searle's milestone statement on intentionality in the debate on computation and human thinking, I suggest that something more can be said in relation to generative AI and responsibility.
We have functioning machines capable of generating partially new content, because it is reconstructed from the patterns recognized during the training phase. This point is crucial, because to learn a pattern means to discover a regularity which in itself comprehends the possibility of being reproduced with a series of finite steps, more specifically an algorithm, and for this reason GPT-4 is able to reproduce a meme starting from the patterns that it has recognized before. But to generate a real meme you need a spark of accident and intuition which constitutes the original and fundamental meaning of the meme, and later becomes its basis for internet spread and makes it relatable to a cultural community. I suggest that this attribute is the critical point in the distinction between mechanical computation and human thinking and action, for which the Turing test can be considered inappropriate as an answer to the question of whether machines can think. The reason is that, according to the data[20], we already have AIs whose NLP capacity is comparable to the human level, at least for the execution of formal language principles, and the next chapter will point out how the number of computable functions is destined to increase exponentially in the near future according to Moore's law.
This scenario opens a crack in the legal concept of responsibility, because we already have artificial agents capable of causing events in the digital world and, in the near future, even in the real world, without a proper legal criterion to attribute responsibility in cases of mixed human-machine causation. But, differently from animals, AI does not have willpower, and all the outputs that it creates are the responsibility of us humans who put it into motion. This is the legal consequence of using algorithms to act. Computation, in fact, is only applicable to computable functions, which are functions that can be executed by a machine[28]; in this context the key definition is the one for algorithms, namely "_a set of rules that must be followed when solving a particular problem_"22. If computation is the act, the algorithm is the process. From this perspective a computable function can be defined as a function that can be executed with
Figure 3: Example of GPT-4’s understanding of a meme.
Table 3: Example prompt demonstrating GPT-4’s visual input capability (“User: What is funny about this image? Describe it panel by panel.”). The prompt consists of a question about an image with multiple panels, which GPT-4 is able to answer.
an algorithm. These functions are precisely the ones which are executed by machines, such as pattern recognition in generative AI, and they set the limit of what AI can do.
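To fix ideas, a computable function in this sense, a finite set of rules guaranteed to reach a result, can be as simple as Euclid's algorithm for the greatest common divisor (an illustrative example of mine, not taken from the cited sources):

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a finite sequence of mechanical steps
    that always halts with the greatest common divisor."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # 21
```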
Indeed, there is a mechanical conception of the tasks which can be done successfully by a machine: they are only tasks which can be formalized in code to be executed as algorithms. On the other hand, human action requires will and purpose, and in fact, even in law, to be declared responsible for some actions there is a need for proof of purpose. This purpose is what generative AI lacks, because it is always activated by prompts generated by users; moreover, it can be said that computation is limited to the execution of procedures, implying that will is not a computable function[25]. That kind of AI is, at most, capable of reproducing already existing patterns learned during the training phase, by filling these formal structures with content taken from the dataset.
That's the reason why a machine can't properly cause something, or better, can't cause something and be responsible for it at the same time. AI is only the source of the algorithmic execution, and the real responsibility, from a legal point of view, is properly attributable to the person who started the process with the input23; that is the situation which creates the responsibility gap in cases where the event that occurred was not pursued by the human agent and is attributable to a malfunction or to a semi-autonomous decision of the AI in the process of the task's algorithmic execution. So, while the computational power of AI and the tasks that can be done with it increase, it seems that the legal issue related to AI remains almost the same: what legal criterion should be used to solve the problem of harmful events caused by AI? To conclude this section, some brief words can be said about ethics by design as a possible approach to making AI more responsible.
Figure 4: An example of a meme produced by Chat-GPT
In the same conversation with Chat-GPT, I prompted the exact same question after a few Q&A[5] and, strangely, the answer that it gave me changed: as you can see, Chat-GPT refused to produce a meme, justifying the refusal with racism-related argumentation given the sensitivity of the theme.
Of course, when I first wrote the prompt I inserted the reference to racism to see what Chat-GPT's answer on this topic could be, knowing well that a lot of subjects are taboo for AIs designed with ethical principles: for example, if you ask something concerning racism, gender inequality, or dangerous activities like killing people or crafting weapons, the algorithm refuses to respond. This technique is what is called ethics by design, and it assumes that, given the impossibility of controlling the output of an AI, it can be useful, in order to prevent damage, to insert limitations in the programming code on those topics which are more likely to create damage to people or society24. But, as shown by the second example, some issues can arise.
Footnote 24: An example of this approach is CLAUDE, a chat-bot developed by Anthropic which is “Constitutional by design”, _see_ Anthropic, _Constitutional AI: Harmlessness from AI Feedback_, 15 Dec 2022, arXiv:2212.08073.[1]
First, what's the difference between the first and the second prompt which I gave to the AI? They are written with the same words, but they received different responses, and the reply does not depend on Chat-GPT having learned from the previous conversation, because it is trained with a supervised-learning technique, so human action is needed to improve the algorithm. Second, ethics by design could be an arbitrary way to regulate AI, because it limits the possibility of creating content depending on values selected from the origin, without reckoning with the fact that in human history values and principles are not absolute but change depending on ages and cultures.
Ethics by design could help to limit AI's negative outcomes, but at the same time it could make it easier to avoid the issue of human responsibility in generating the data which are the foundation of the AI's answers. In fact, given that AI is not capable of thinking, introducing ethical limitations by design does not necessarily mean that the output will be lawful, due to the black box effect[5]. From the machine's perspective it is all code, and it can't deliberately discriminate against someone; the introduction of these variables could create unequal treatment, exactly as shown in the previous example where, faced with the exact same prompt, I received two completely different answers without any apparent reason justifying it.
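To give an idea of the crudest form such limitations can take at the code level, here is a deliberately naive sketch of mine; real systems rely on learned safety training rather than a keyword list, and the topics and messages below are invented for illustration.

```python
BLOCKED_TOPICS = {"racism", "weapons", "violence"}  # illustrative list only

def guardrail(prompt: str) -> str:
    """A toy keyword filter: refuse any prompt that mentions a blocked topic.

    Real guardrails are learned (e.g. via RLHF or a 'constitution'),
    not hard-coded, which is one reason the same prompt can receive
    different answers at different times.
    """
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return "I can't help with that topic."
    return "...model output would go here..."

print(guardrail("Write a meme about racism"))  # refused
print(guardrail("Write a meme about my cat"))  # answered
```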
### 2.2. Artificial general intelligence (AGI)
This section will explore the speculative future of artificial intelligence which is called artificial general intelligence (AGI). Since the recent diffusion of foundation models, this controversial topic has raised a lot of questions about the potential of AI and the possibility of comparing human thinking to machine computation.
The definition of general-purpose artificial intelligence given by the artificial intelligence act, after the amendment of art. 4a proposed by the French Presidency of the Council of the EU, states that _"'general purpose AI system' means an AI system that is intended by the provider to perform generally applicable functions such as image and speech recognition, audio and video generation, pattern detection, question answering, translation and others; a general purpose AI system may be used in a plurality of contexts and be integrated in a plurality of other AI systems"25_. This approach was
Figure 5: A limitation due to ethics by design
adopted officially by the Council of the EU in its common position ("General approach") published on 6 December 2022[8]. Alongside this position, a new section (Title 1a) concerning general purpose AI was introduced in the AI act, in which AGI is considered at the same level as high-risk systems in the context of the AI act's risk classification. A few months later the Parliament adopted its negotiating position, containing some amendments concerning the definitions of AGI and foundation models. Remarkably, the definition of general purpose AI is different from the one proposed by the Council: "_'general purpose AI system' means an AI system that can be used in and adapted to a wide range of applications for which it was not intentionally and specifically designed"[14]_.
The _ratio_ for this difference lies in the distinction between AGI and foundation models, introduced for the first time in the legislative debate by the EU Parliament. In fact, by introducing a definition of foundation model which states that "_'foundation model' means an AI system model that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks"[14]_, the Parliament has operated a classification by which existing algorithms, such as GPT-4, BERT and DALL-E, should now fall under the definition of art. 3(1c) instead of being classified as AGI, as under the broader definition proposed by the Council. This step forward made by the Parliament was accompanied by a specific set of obligations for foundation models, without amending the Title 1A introduced by the Council regarding the obligations of general purpose AI providers.
It can be said that, according to the literature cited before, the distinction between foundation models and AGI seems correct, despite some controversial dissenting voices26. Indeed, a real definition of what AGI should be is exactly what's missing in the computer science field. For many years the debate was about weak AI vs. strong AI, where the two positions concerned the possibility of making a conscious machine or not. Even in the 1980s, when Searle published his argument against strong AI (the Chinese room argument), the debate was open. Today the two positions have changed, perhaps also taking into account the inopportunity of trying to create a conscious AI, to arrive at the current distinction between narrow AI and general AI, which refers to the applications that AI should pursue. Narrow AI is a type of task-oriented artificial intelligence, in the sense that it is focused on resolving a specific and univocal task. Instead, AGI is a type of AI which aims to adapt its code to the resolution of multiple problems.
Footnote 26: For example, a Microsoft research paper has stated that GPT-4 is already a form, yet incomplete, of AGI; see S. BUBECK et al., _Sparks of Artificial General Intelligence: Early experiments with GPT-4_, 13.04.2023, arXiv:2303.12712.
A recent study concerning the abilities of GPT-4 as a multimodal model[6] has shown the capacity of this model to accomplish a series of different and heterogeneous tasks, such as image generation, coding, mathematics, NLP, and interaction with the world and with humans. What can be taken into consideration when analysing AGI is the capacity of a single model to manage different situations and tasks which do not necessarily imply specific training. The capacity of GPT-4 to resolve different types of tasks is surely an indicator that AI researchers are speeding up the development process toward AGI. But, even considering the possibility of developing AGI by finding more methods to translate complex tasks into computable functions, and I am really convinced that the quantity of tasks capable of being converted into code could increase exponentially in the near future, it should be asked how the capacity for general thinking could change the paradigm of AI computational capacity. I think that more complexity does not change the principle stated before: if computation concerns the execution of algorithms, the possibility of voluntarily causing something is excluded _a priori_.
Indeed, all the tasks successfully accomplished by GPT-4 are computable functions which can be formalized into code. However, reading the study, one may be impressed by the capacity for reasoning displayed by GPT-4, especially in the section _GPT-4 has common sense grounding[6]_, where the leap forward from GPT-3 is shown through a conversation in which GPT-4 was able to handle tricky questions that require some degree of autonomous thinking to give the correct answer. A possible explanation for the improvement is the multimodality of the model, in the sense that, by giving the model different types of data input other than text, such as images, code and others, the model has acquired the capability of increasing the complexity of its output, making it more similar to human-level semantic understanding. Given the fact, confirmed by experience, that AI algorithms increase their performance proportionally to the quantity of data that is given to them27, it is reasonable to predict that AGI will see the light in the near future.
Footnote 27: This explanation specifically applies even to "emergence", which is the capability of large models to acquire new abilities without specific training. It must be said that the scale of the model is not the only component in acquiring an emergent ability, but it has been shown that for LLMs it is a major component; see JASON WEI et al., _Emergent Abilities of Large Language Models_, Transactions on Machine Learning Research, 2022, arXiv:2206.07682[31].
Even if true, this possibility does not change the legal status of AI as an object incapable of acting independently of human input. No matter whether the computational power of AI increases in the future to the point that we have a single general AI model capable of simultaneously accomplishing different types of tasks, such as administrative tasks, infrastructure planning, and holding conversations with multiple people, AI will always need human input to act, and, for this reason, the responsibility question needs to focus on the responsibility of the persons involved in the AI development process, paying attention to preventing damage and to compensating victims when it occurs.
Despite everything, it seems that Fedor Dostoevskij was right when, in Crime and Punishment, he wrote: "_It takes something more than intelligence to act intelligently_".
## 3 An interpretation of the halting problem
So, how should a juridical regulation of AI be shaped?
Let us start by considering an argument from computability theory: the halting problem. The halting problem, or decision problem (_Entscheidungsproblem_), is the problem which both Turing and Church approached and which led to the development of the concepts of _computation_ and _effective calculability_, which are substantially equivalent. Originally the _Entscheidungsproblem_ concerned the possibility of finding a general method to verify whether a logical expression is true or not, with a final yes/no answer[3]. Turing and Church, with their theses, demonstrated that it is impossible to create a universal computational method which can decide, for any given logical expression, whether it is true; but they found two different methods, the lambda calculus and the Turing machine, to decide whether for a specific statement there is an absolutely mechanical procedure, namely an algorithm, to establish its satisfiability28.
Footnote 28: "_With the development of computational complexity theory, the problem has been refined. If a fragment of first-order logic is decidable for satisfiability, then indeed there is an absolutely mechanical procedure, that is an algorithm, for deciding the satisfiability or unsatisfiability of any given sentence_", from p. 7 of Börger, Egon, Erich Grädel, and Yuri Gurevich, The Classical Decision Problem, Springer Science & Business Media, 2001.
After the Turing-Church thesis, the _Entscheidungsproblem_ came to be presented as the halting problem, probably under the influence of the prominence that the Turing machine had acquired on the topic. Instead of searching for a general method to verify the satisfiability of every logical expression, researchers started to search for a single method to verify the satisfiability of a single logical expression[3].
In particular, in defining computation, Turing stated that a particular problem can be categorized as decidable if it can be formally represented using a Turing machine, signifying its computability. Furthermore, the decidability of a specific statement depends on whether the corresponding Turing machine will terminate its operation (halt) or continue indefinitely (not halt)29. This process describes how an algorithm works. From this assumption it can easily be deduced that the mechanical process which runs programs, the algorithm, works in such a way that, given a certain input, it will run until it finds the solution which makes it stop (halt); otherwise it will run forever, looping, in accordance with the evidence of the halting problem. In particular, this can happen if the function is not computable or if the program is wrongly written.
Footnote 29: "_In principle, any computer program can be represented by a Turing machine (TM). A function is considered "computable" (or "recursive," "decidable," or "solvable") if its values can be output by TM that halts on every input, i.e., "gives an answer." Turing considered whether there is an algorithm that can take as input the code for an arbitrary computer program (TM) and some input to that program and determine in advance whether the program will halt on the input or run forever. Turing showed the answer is "no." No such general algorithm exists. The Halting Problem does not assert that no specific program cannot be predicted to halt in some cases, but rather that not every program can be predicted to halt in every case_", from p. 314 of Brennan, Lorin, "_AI Ethical Compliance is Undecidable_", Hastings Science and Technology Law Journal 14, no. 2 (2023): 311.
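To make the argument concrete, here is a minimal Python sketch of the classical diagonal reasoning behind the halting problem. The oracle `halts` is hypothetical; the point of the sketch is precisely that it cannot be implemented in general.

```python
def halts(program, data):
    """Hypothetical oracle: returns True iff program(data) would halt.
    The halting problem shows that no such general procedure can exist."""
    raise NotImplementedError

def paradox(program):
    # Do the opposite of whatever the oracle predicts about a program
    # applied to its own source.
    if halts(program, program):
        while True:      # loop forever if the oracle answers "halts"
            pass
    return               # halt if the oracle answers "loops"

# Applying paradox to itself is contradictory either way: if
# halts(paradox, paradox) were True, paradox would loop; if it were
# False, paradox would halt. Hence no universal halts() can be written.
```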
The next graph[6] presents a graphical representation of how an algorithm works in principle.
The input A puts the AI into motion; then the core part, the algorithm, starts to run, and the output B may occur. It only may occur because, as we have seen, it cannot be said in advance whether the program will halt or not, given the implications of the halting problem. Obviously there are programs that will run with 100% accuracy, for example a program that prints "hello world", as Lorin Brennan recalls[5].
Figure 6: Graphical representation of how causality works in computation
What I propose to consider is that the results of the halting problem show us the true nature of what an algorithm is, and thus what computability and AI truly are. A standard AI is composed of a machine learning model, which is the functioning algorithm represented by the line in the graph[6]; one feeds it data as an input (A) to obtain an output (B). While it is true that the criteria used by AI to produce an output are often uncertain and can create the black box effect, what I suggest is that, given the probabilistic nature of computation, these criteria can be generalised as probability in every case.
The reason is that AI does not take action but merely executes a program.
This argument is a logical deduction from the results of the halting problem. If it cannot be predicted whether a specific program will halt or, in the alternative, loop, then there is no activity involved in it, merely an execution of commands: if the commands are wrongly written, the program will loop. The looping effect is the natural consequence of an automatic procedure which cannot end.
There is a substantial difference between AI algorithmic execution and an action: the first does not necessarily involve the presence of a subject with willpower, while the second does. In particular, if you ask a child to accomplish an impossible task, which in the example is the wrongly written program that causes the looping effect, the child may realise it and protest, or may easily become frustrated after a few attempts and stop performing that impossible task. However immature and incomplete a child's willpower may be, at least the child can stop whenever they want; an AI cannot do so, because it does not have willpower and is therefore forced to loop endlessly, without the possibility of finding a univocal solution to that specific task.
So, from this example, we may ask: what, specifically, is AI?
AI is an instrument which embeds a natural force, which can be called causality or computation, represented by the line in the graph[6]; that force is the expression of the perpetual motion of the Universe.
This definition also explains why AI works with probability instead of logic. According to computation, an AI will try all the possibilities until it finds the correct solution, which in a machine learning model means reaching the goal determined by its instructions. This interpretation also explains some common errors, such as the famous "Husky vs Wolf" example, in which an algorithm for image recognition was asked to classify different images of dogs and categorize them as huskies or wolves[21]. The algorithm made the classification based on the presence of snow to recognize the wolf, instead of other characteristics, leading to mistakes based on the wrongful recognition of a pattern, the snow pattern. In that case, the researchers deliberately fed the algorithm images of wolves containing the snow pattern in order to obtain a flawed model, but what this simple example shows is how the probabilistic logic of an algorithm works: it does not single out the critical features or the _ratio_ of things; instead, it follows patterns without a common-sense criterion. The same holds for the victory of Deep Blue against Garry Kasparov, which was entirely based on a _brute force_ strategy consisting of the analysis of a massive amount of data, leading to the best move for every chess position.
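The pattern-following behaviour described above can be reproduced in a few lines. Below is a toy Python sketch with entirely synthetic data, in which a classifier learns to rely on a spurious "snow" background feature rather than on the features of the animal itself; the feature names are illustrative only, not taken from the cited study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
snow = rng.integers(0, 2, 500)              # 1 = snowy background
animal = rng.normal(size=(500, 5))          # uninformative "animal" features
y = snow                                    # wolves always photographed in snow
X = np.column_stack([animal, snow + rng.normal(0, 0.1, 500)])

clf = LogisticRegression().fit(X, y)
print(clf.coef_.round(2))   # nearly all weight lands on the snow feature
```

The model achieves high accuracy, yet for the wrong reason: it has latched onto a statistical regularity of the data, not onto any feature of the animal.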
This interpretation is also advanced by Lorin Brennan in his brilliant article _"AI Ethical Compliance is Undecidable"_[5], where he shows the weakness of the ethics-by-design approach by demonstrating that the ethical compliance of AI is an undecidable problem, not only because of the impossibility of well-defining, in computational formal language, vague values such as goodness, trust or fairness, but above all because of the practical impossibility of predicting whether an algorithm programmed to be ethically compliant will produce an ethical output30.
Footnote 30: For the extended argument I refer to the article of the American lawyer and mathematician; for the purpose of better explaining this point I recall his thesis and the main demonstration: _"The question, however, is: will it work? More precisely, does there exist an effective procedure - algorithm, computer program, regulatory framework - by which an AI system developer, or regulator, can determine in advance whether an AI system, once put into operation where it can run any allowed input, will consistently generate output that conforms to a desired ethical norm? More simply, can the developer or regulator determine whether an AI system will always act with, say, beneficence, or justice, or the like? The answer is "no." The question is undecidable"_, from p. 313 of Brennan, Lorin, _"AI Ethical Compliance is Undecidable"_, Hastings Science and Technology Law Journal 14, no. 2 (2023): 311.
A clear example of this is given by the comparison between the two answers given by Chat-GPT in response to the same prompt, shown in Section 2 [4] - [5]. As shown above, Chat-GPT answered the first question correctly by giving as output a meme on the pride of being a black person[4] and, immediately after, in response to the same prompt, it gave back a different answer in which it refused to create the meme, because it judged creating a meme on a racism-related issue to be _insensitive_ and _inappropriate_[5]. The prompt was exactly the same; what justified the difference in the answers? Which criterion was used to select the output?
The answer to these questions can be given by referring to the scheme represented above: Chat-GPT simply used a probabilistic method to select the answers. That implies that Chat-GPT did not use any kind of logic in deciding which type of answer to give, because the prompt was the same, written in exactly the same words. This should demonstrate that AI is a black box because it does not reason at all; AI only processes data, driven by a probabilistic criterion which can give different outputs for the same inputs without a proper reason. A consequence is that even if someone discovers a method to formalize ethical concepts into code, it is impossible to expect an ethical output from the machine, because AI does not use logic and does not understand values as we do; it is only an instrument in human hands.
## 4 Continuity fiction
In the field of law, this interpretation creates the legal ground for experimenting with a solution already glimpsed in some EU legislative acts, namely in the AI liability directive[13]. The AI Liability Directive introduces two crucial measures to address the black box effect and the consequent responsibility gap in non-contractual civil liability rules. These two measures are the disclosure of evidence adopted by article 3 and the rebuttable presumption of causal link laid down by article 4.
The first measure introduces the possibility for national courts to order the provider of an AI system which has caused harm to disclose the evidence necessary to attribute liability. This provision is not activated automatically, but can be imposed by courts on AI providers only after the claimant has unsuccessfully tried to obtain the evidence. If the provider fails or refuses to fulfil this duty, the court can presume the defendant's non-compliance with a relevant duty of care. That implies the possibility for courts to presume the fault of the AI provider according to art. 4.1(a). It is evident from this first summary explanation that the aim of the directive is to tackle the black box effect. In fact, the disclosure of evidence responds to the necessity of knowing precisely the algorithmic process which has led to the harmful event. This knowledge can only be reached by the courts by examining the programming code of the AI; moreover, this procedure is linked with the obligations of the AI act connected with the explainability and opacity problems, namely art. 11 on technical documentation and art. 12 on record-keeping.
However, the most intriguing measure is the one adopted by article 4, namely the presumption of causal link, which could become a new legal standard for treating responsibility in cases of AI causation. The presumption responds to the necessity of avoiding the full verification of causal links in trials, since the complexity of AI algorithms often makes it impossible to reconstruct the causal series that has led to the event caused by the AI.
Article 4 defines three cases in which the presumption of causal link operates:
1. When the fault of a provider of an AI system has been demonstrated during the proceedings, or has been presumed due to non-compliance with a duty of care according to art. 3.5, because the provider refused to disclose evidence.
2. When it is reasonable to presume that the fault has contributed to the generation of the output of the AI, or to its failure to produce an output.
3. When the claimant has demonstrated that the output of the AI, or its failure to produce an output, has generated the damage.
These three cases show a strong relationship between fault and causation[16]. In particular, the first two cases in which the presumption of causal link operates are deeply linked with fault: if fault is proven, the presumption can operate. The third, instead, is based on factual evidence: if it is proven that the harmful event was generated by the AI's output, it is not necessary to verify the existence of a causal nexus between the AI's execution and the provider's action; in this last case the only necessary assessment concerns the psychological element.
A second important point in the discussion is the equivalence between the output caused by AI and the failure of AI to produce an output. This equivalence is settled in law and is codified in many legal systems. In the Italian criminal law code it is codified in the second part of article 40 c.p. on "_rapporto di causalità_", which states that "_Non impedire un evento, che si ha l'obbligo giuridico di impedire, equivale a cagionarlo - Not preventing an event, which one has a legal obligation to prevent, is the equivalent of causing it_". This equivalence clause, reproduced by the AI liability directive, was also at the center of the debate raised after the Franzese ruling[27].
The _ratio_ of it is obviously responsibility.
The _ratio_ of the presumption of causal link, in both its forms, derives not only from the responsibility of the provider, which stems from the larger power of action extended by AI systems, but also from the second provision of article 4, which creates a link between the AI act and the AI liability directive based on the responsibility of the provider for owning high-risk AI systems. Article 4.2 states that the condition of applicability of the presumption of article 4.1(a) to providers is that the claimant has demonstrated that the provider of a high-risk AI system failed to comply with the obligations of the AI act. This mitigation of the presumption of causal link for providers of high-risk AI systems is coherent with the AI framework proposed by the EU Commission. It is a protection for providers, but only as regards the presumption of fault: in this sense, the provider which complies with the obligations of the AI act is exempt from being accused of having caused harm with fault. The defendant also has, according to art. 4.7, the possibility of rebutting the presumption by demonstrating the absence of a causal link in the specific case.
Obviously, article 4.4 states that the presumption of causality does not apply if the defendant demonstrates that the causal link can be proved. That responds to the original _ratio_ of the presumption, which is to provide a solution to the black box problem; if that problem does not occur in the specific case, there is no reason to apply the presumption. Along the same line of thinking, article 4 establishes a provision concerning low-risk AI systems, for which the presumption of a causal link applies only if there is a prohibitive obstacle to proving the causal link, due to the complexity or opacity of the algorithm.
The last provision of article 4 establishes a rule operating for non-professional users of AI systems, which I would be inclined to call _"personal responsibility"_; this provision extends the applicability of the presumption of a causal link even to defendants who _"materially interfered with the conditions of the operation of the AI system or if the defendant was required and able to determine the conditions of operation of the AI system and failed to do so"_[13].
These interesting rules complete a complex set of norms that promises to establish a thorough responsibility system for damage caused by AI in the civil law field. Moreover, especially with reference to this last norm, this system has the potential of becoming a standard applicable not only in civil and contractual law but as a general paradigm for allocating personal responsibility in cases of AI causation. The merit of the AI liability directive is to dive directly into the most important point of the discussion on responsibility for AI: the causal nexus. As shown in the first graph[6], the problem in human-machine interaction lies in the complexity of the causal nexus[7]. Under the law currently in force, the causal nexus must be verified in trials, both civil and criminal, in order to declare someone liable; this task raises the problem of how to deal with the black box effect in the case of AI. Notwithstanding the necessity of developing more interpretable models, as explainable AI scientists advocate, the problem for law could easily be avoided by binding the responsibility for the event caused by AI to the person who has used it.
The presumption of a causal link aims to close the gap by avoiding the assessment of every causal step occurring between the human action and the event. The assumption behind this approach is really simple and ontologically grounded: AI is incapable of acting outside its own range of operability, which is defined by its code, which is written by a human. So, whether the event is generated by the AI through its execution of the code designed by a programmer, or the AI responds to a human command given as an input, the responsibility is always ascribable to humans. That is the reason why the causal approach used by the AI liability directive has the potential of becoming a general paradigm for the verification of responsibility in cases of AI causation.
This scheme should work for both civil and criminal law, with different degrees of operability graduated with reference to the protection of the fundamental rights codified in Constitutions.
The reason for that comes from the interpretation of the halting problem given in Section 3 and derived from the experiment with Chat-GPT presented in Section 2. In fact, if AI can be defined as mere computation, then in every case involving AI execution the generation of action has no legal relevance; in every case, the responsibility must be brought back to the human. The title of this responsibility can change from case to case; just to cite some examples, we can recall the hypotheses of the AI act framework (comprising the AI act and the AI liability directive) of the responsibility of the provider for putting a malfunctioning product on the market, or the responsibility of the user for using an AI product without respecting its intended purpose. There are many more cases: the responsibility of the programmer for programming AI to accomplish wrongful purposes such as stealing or killing people, or the responsibility of the controller of an AI system for not being careful in its duty of surveillance (human oversight).
The main point which needs to be highlighted here is that AI, for what it is, a computable artifact, is an instrument in human hands, capable of extending our ability to act[15], and as such it must be treated by law.
The following scheme[7] reproduces a graphical representation of an Italian legal institute named _"finzione di continuità"_, which can be used as an adaptation of the AI liability directive's presumption of a causal link to the Italian legal system, and also as a general responsibility paradigm for cases involving AI.
This scheme extends the first graph presented[6] to show the responsibility gap by adding one more nexus, representing the operating mechanism of this particular _fictio iuris_. The human action originated in point A gives an order as an input to the machine in point B, and then the output event in point C is retroactively imputed to the human at point A. A first juridical description of this mechanism is ascribable to Antonio La Torre, who refers to it as _"finzione di continuità"_, describing it as follows: "_Chiamerei «Finzione di Continuità» quella posta a salvaguardia del sistema giuridico che, come la natura, ha horror vacui. Il problema si pone in termini di acuta antitesi, al punto da non essere risolvibile senza l'ausilio di un artificio, quando concorrono: a) da un lato una vicenda giuridica che deve poter procedere senza soluzione di continuità; b) dall'altro l'incidenza di un fattore che ne provoca l'interruzione. Come, allora, conciliare la necessità del «continuo» con la inevitabilità del «discontinuo»? Non sembra vi sia altro modo se non di negare la cesura mediante l'espediente della «retroattività»: cioè facendo risalire indietro nel tempo gli effetti di un dato atto, come se esso fosse stato compiuto prima_"[19]31.
Footnote 31: "_I would call the 'Continuity Fiction' the one put in place to safeguard the legal system which, like nature, has horror vacui. The problem arises in terms of an acute antithesis, to the point of not being resolvable without the aid of an artifice, when there is a concurrence of: a) on the one hand, a legal affair that must be able to proceed without a break; b) on the other, the incidence of a factor that causes its interruption. How, then, to reconcile the necessity of the 'continuous' with the inevitability of the 'discontinuous'? There seems to be no other way but to deny the caesura by means of the expedient of 'retroactivity': that is to say, by tracing back in time the effects of a given act, as if it had been performed before._"
This _fictio iuris_ is implicit in the European presumption of causal link. In fact, when it is said that the causal nexus can be presumed, what is truly said is that a part of the complex causal nexus (A-B-C), namely the part B-C where the black box effect can occur, is ascribed to the defendant as the person who has directly caused it. This happens for several reasons, among them non-compliance with an obligation of the AI act, or the impossibility of proving the direct causal nexus owing to the opacity of algorithms.
This legal instrument utilizes a legal fiction grounded in computability theory: if AI is incapable of acting, as a consequence of the halting problem, then the responsibility must somehow be attributed to the human operator. The fact that the event (C) is retroactively brought back to the human action (A), as if it had been generated by it, is not a given, but must be proved in every case. The most important function of the continuity fiction is its capability of excluding the necessity of proving every single part of the causal nexus, including the part concerning algorithm execution, limiting the proof of the causal element to the mere occurrence of the human action. With this instrument it becomes possible to assess the possibility of an unlawful action that may be considered to deserve a conviction.
My thesis is that this legal instrument can be used as a general paradigm for assessing human responsibility in cases of offences committed with AI. In fact, not only could it be used in adapting the AI liability directive to Italian law, but the range of applicability of this paradigm could also be extended to other branches of law, such as criminal law or administrative law. With specific reference to these two branches, where the principle of personal responsibility is one of the pillars of the system, this _fictio iuris_ could clear the field by allowing the judge to avoid facing defences based on "_agency laundering_"[22]32.
Footnote 32: "_Using algorithms to make decisions can allow a person or persons to distance themselves from morally suspect actions by attributing the decision to the algorithm. Put slightly differently, invoking the complexity or automated nature of an algorithm to explain why the suspect action occurred allows a party to imply that the action is unintended and something for which they are not responsible_", from p. 590 of Rubel, Alan, Adam Pham, and Clinton Castro, "Agency Laundering and Algorithmic Decision Systems", in Information in Contemporary Society: 14th International Conference, iConference 2019, Washington, DC, USA, March 31-April 3, 2019, Proceedings 14, pp. 590-598, Springer International Publishing, 2019.
I think that, despite being a _fictio iuris_, this instrument really can clear the field of the responsibility gap and the black box effect, by describing effectively what happens in a causal series involving AI: a human action, extended by an algorithm, which creates an output.
Figure 7: Graphical representation of _finzione di continuità_ |
2308.09380 | Deciphering knee osteoarthritis diagnostic features with explainable
artificial intelligence: A systematic review | Existing artificial intelligence (AI) models for diagnosing knee
osteoarthritis (OA) have faced criticism for their lack of transparency and
interpretability, despite achieving medical-expert-like performance. This
opacity makes them challenging to trust in clinical practice. Recently,
explainable artificial intelligence (XAI) has emerged as a specialized
technique that can provide confidence in the model's prediction by revealing
how the prediction is derived, thus promoting the use of AI systems in
healthcare. This paper presents the first survey of XAI techniques used for
knee OA diagnosis. The XAI techniques are discussed from two perspectives: data
interpretability and model interpretability. The aim of this paper is to
provide valuable insights into XAI's potential towards a more reliable knee OA
diagnosis approach and encourage its adoption in clinical practice. | Yun Xin Teoh, Alice Othmani, Siew Li Goh, Juliana Usman, Khin Wee Lai | 2023-08-18T08:23:47Z | http://arxiv.org/abs/2308.09380v1 | # Highlights
* A review of explainable artificial intelligence (XAI) techniques for ensuring clinical trustworthiness of AI-aided knee osteoarthritis (OA) diagnosis.
* An overview of data interpretability approaches used in XAI algorithms.
* A summary of model interpretability techniques for knee OA diagnosis.
* A comprehensive discussion of the opportunities and open challenges of implementing XAI for knee OA diagnosis.
Deciphering knee osteoarthritis diagnostic features with explainable artificial intelligence: A systematic review
Yun Xin Teoh\({}^{a,b}\)
Alice Othmani\({}^{b,*}\)
Siew Li Goh\({}^{c,d}\)
Juliana Usman\({}^{a}\)
Khin Wee Lai\({}^{a,*}\)
###### Abstract
Existing artificial intelligence (AI) models for diagnosing knee osteoarthritis (OA) have faced criticism for their lack of transparency and interpretability, despite achieving medical-expert-like performance. While radiography is widely used for diagnosing knee OA, its diagnostic precision is often compromised due to the subjective nature of image interpretation and the perceptual differences among radiologists, which are influenced by their individual knowledge and experience. To enhance the accuracy of OA diagnosis, researchers have also delved into modeling OA using multidimensional data, such as electronic medical records, to encompass a comprehensive range of patient information. This includes demographic, societal, symptomatic, medical history, biomechanical, biochemical, genetic, and behavioral characteristics. Artificial intelligence (AI) models have demonstrated the ability to automate diagnosis and have shown promising results, achieving diagnostic accuracy on par with medical experts using either individual or combined data (Tiulpin, Thevenot, Rahtu, Lehenkari and Saarakkala, 2018; Karim, Jiao, Dohmen, Cochez, Beyan, Rebholz-Schuhmann and Decker, 2021). However, the specific impact of each factor within the data and the interrelationships between these factors remain largely unexplored.
Furthermore, there is a growing concern about the lack of transparency and interpretability of AI models in healthcare settings (Hah and Goldin, 2021; Lee and Chung, 2022). The use of AI models on medical data for OA diagnosis shows potential in reducing the subjectivity and variability linked to human interpretation. However, these AI approaches predominantly rely on black-box models, which lack transparency and interpretability (Hah and Goldin, 2021; Lee and Chung, 2022). In contrast to the human reasoning process, which depends on complex cognitive abilities, intuition, and the assimilation of diverse knowledge and experiences to make decisions, AI models make predictions based on the learning outcomes from training datasets. The internal workings of these models remain hidden or unknown, even to their designers. This lack of transparency can engender uncertainty and erode trust among patients and healthcare providers. Additionally, the use of black-box models impedes the development of mobile health applications for disease management (Mrklas, Barber, Campbell-Scherer, Green, Li, Marlett, Shewchuk, Teare, Wasylak et al., 2020). According to a survey conducted by Mrklas et al. (2020), a significant number of patients and physicians have expressed a strong desire for a visual symptom graph to aid in monitoring their condition.
Explainable AI (XAI) (Giuste, Shi, Zhu, Naren, Isgut, Sha, Tong, Gupte and Wang, 2023) offers a potential solution to these concerns by providing a transparent and interpretable framework for automated analysis of radiographic images. XAI algorithms can identify specific regions of interest within the image and provide a clear explanation of the factors that contributed to the final diagnosis (Van der Velden, Kuijf, Gilhuijs and Viergever, 2022; Groen, Kraan, Amirkhan, Daams and Maas, 2022). This could help to overcome the limitations of traditional radiographic diagnosis and increase the accuracy and consistency of knee OA diagnosis.
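As an illustration of the region-highlighting explanations referred to above, the following is a minimal Grad-CAM sketch in PyTorch, assuming a recent torch/torchvision install. The network (`resnet18`), the chosen layer (`layer4`), and the random input are placeholders, not the models used by the surveyed studies.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

# Placeholder model standing in for a knee-OA grading CNN.
model = resnet18(weights=None).eval()
feats, grads = {}, {}
model.layer4.register_forward_hook(lambda m, i, o: feats.update(a=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.randn(1, 3, 224, 224)        # stand-in for a knee radiograph
model(x)[0].max().backward()           # backprop the top class score

w = grads["a"].mean(dim=(2, 3), keepdim=True)   # pool gradients per channel
cam = F.relu((w * feats["a"]).sum(dim=1))       # weighted activation map
cam = F.interpolate(cam[None], size=x.shape[2:], mode="bilinear")[0]
# `cam` highlights the image regions most responsible for the prediction.
```

Overlaying `cam` on the input radiograph yields the heatmap-style explanation that clinicians can inspect against known OA features such as joint-space narrowing.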
By leveraging XAI, healthcare providers could have a more objective and transparent method for diagnosing knee OA, leading to earlier detection and more timely treatment. XAI could also potentially reduce the need for costly and invasive diagnostic procedures, such as arthroscopy, which are currently used for further evaluation to confirm cartilage lesions.
### Motivations and research objectives
In recent years, as XAI gains popularity, numerous survey papers have emerged discussing its application in healthcare settings (Table 1). Despite this growing interest, there is still a noticeable lack of comprehensive survey papers that delve into the specific application of XAI for diagnosing knee OA. Furthermore, many existing XAI strategies have been designed with a general-purpose approach and may not fully address the unique clinical concerns and domain-specific knowledge required for accurate diagnosis of knee OA. Therefore, there is a need for specialized XAI frameworks that take into account the specific clinical considerations and incorporate relevant domain knowledge to enhance the application of XAI in diagnosing knee OA effectively. To address this gap, it is crucial to explore different explanation methods and evaluate their effectiveness. By conducting a comprehensive review of the literature on interpretability and explainability of AI models for knee OA diagnosis, we can gain a deeper understanding of these concepts and their potential applications. Such a review will provide valuable insights into how interpretability and explainability can be leveraged to improve AI and machine learning models for knee OA diagnosis. To the best of our knowledge, this paper represents the first survey dedicated to exploring the application of XAI in knee OA diagnosis. In this study, our objectives are threefold:
* Evaluation of the current state-of-the-art explainability and interpretability methods for neural networks used in diagnosing knee OA from medical data;
* Comparison of the existing knee OA datasets and the performance analysis of different explainability and interpretability methods in AI models;
* Identification of the potential clinical impact of the most promising explainability and interpretability methods by assessing their practicality, scalability, and effectiveness in real-world clinical settings for improving diagnostic accuracy and reducing misdiagnosis rates in knee OA.
### Organization of paper
This review paper is partitioned into nine sections. Firstly, Section 2 introduces the preliminaries and fundamental concepts of XAI. Section 3 describes the study protocol, including the search strategy, as well as the inclusion and exclusion criteria for selecting relevant studies. In Section 4, the word co-occurrence analysis conducted on the included studies is presented. Following that, Section 5 provides detailed information regarding the knee OA data used in the assessment. Section 6 outlines the classification systems employed in predictive modeling for knee OA. Moving on, Section 7 introduces an XAI taxonomy and explores various techniques for achieving data and model interpretability. The implications and potential applications of XAI are discussed in Section 8, which also suggests promising avenues for future research. Finally, Section 9 offers a comprehensive conclusion to this study, summarizing its findings and highlighting its contributions to the field of XAI in knee OA assessment.

| Paper | Year | Topic | XAI | Tabular data | Image data | Disease-specific | Knee OA |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Tjoa and Guan (2020) | 2020 | Medical XAI | ✓ | ✗ | ✗ | ✗ | ✗ |
| Van der Velden et al. (2022) | 2022 | XAI for medical imaging | ✓ | ✗ | ✓ | ✗ | ✗ |
| Groen et al. (2022) | 2022 | XAI for radiology | ✓ | ✗ | ✓ | ✗ | ✗ |
| Loh, Ooi, Soni, Barua, Molinari and Acharya (2022) | 2022 | XAI for medical applications | ✓ | ✓ | ✓ | ✗ | ✗ |
| Chaddad, Peng, Xu and Bouridane (2023) | 2023 | XAI for medical imaging | ✓ | ✗ | ✓ | ✗ | ✗ |
| Nair, Dickson and Altram (2023) | 2023 | XAI for medical imaging | ✓ | ✗ | ✓ | ✗ | ✗ |
| Giuste et al. (2023) | 2023 | XAI for COVID-19 | ✓ | ✗ | ✗ | ✓ | ✗ |
| Joyce, Kornilitzin, Smith and Cipriani (2023) | 2023 | XAI for mental health | ✓ | ✓ | ✗ | ✓ | ✗ |
| Bharati, Mondal and Podder (2023) | 2023 | XAI for medical applications | ✓ | ✓ | ✓ | ✓ | ✗ |
| Our paper | 2023 | XAI for knee OA diagnosis | ✓ | ✓ | ✓ | ✓ | ✓ |

Table 1: Summary of existing reviews and surveys on the topic of explainable artificial intelligence (XAI) in healthcare applications.
## 2 Preliminaries and fundamental concepts
### XAI concepts and frameworks
Due to the rapid development of artificial intelligence and machine learning technologies, it has become increasingly important to understand how these models make predictions (Buijsman, 2022). In this field, the terms "interpretability" and "explainability" are closely related and often used interchangeably (Linardatos, Papastefanopoulos and Kotsiantis, 2020), but they do have subtle differences in the context of deep learning. Here's a breakdown of each concept:
**Interpretability:** Interpretability refers to the ability to understand and make sense of the internal workings of a deep learning model (Ali, Abuhmed, El-Sappagh, Muhammad, Alonso-Moral, Confalonieri, Guidotti, Del Ser, Diaz-Rodriguez and Herrera, 2023). It involves gaining insights into how the model processes inputs, makes decisions, and generates outputs. An interpretable model allows humans to examine and comprehend the underlying mechanisms and logic employed by the model to arrive at its predictions or decisions (Murdoch, Singh, Kumbier, Abbasi-Asl and Yu, 2019).
**Explainability:** Explainability, on the other hand, focuses on providing human-understandable explanations for the model's outputs or predictions (Arrieta, Diaz-Rodriguez, Del Ser, Bennetot, Tabik, Barbado, Garcia, Gil-Lopez, Molina, Benjamins et al., 2020; Ali et al., 2023). It goes beyond mere interpretation and aims to make the decision-making process of the model transparent and understandable to non-experts. Explainable models not only produce accurate predictions but also provide intuitive explanations that can be easily comprehended by end-users or stakeholders.
In summary, while interpretability is primarily concerned with understanding the internal workings of a deep learning model, explainability goes a step further by providing human-understandable explanations for the model's outputs or decisions. Both concepts aim to enhance the transparency and trustworthiness of deep learning models, especially in high-stakes applications such as healthcare, finance, or autonomous systems.
Recently published XAI taxonomies (Linardatos et al., 2020; Schwalbe and Finzel, 2023; Chaddad et al., 2023) propose a conceptual framework for XAI, utilizing four evaluation dimensions to effectively describe the scope and characteristics of the XAI domain. These dimensions include:
* **Explanation scopes**, which can be divided into local (explaining an individual prediction) or global (explaining the whole model) interpretability; a minimal code sketch of this distinction is given after this list.
* **Model specificity**, which can be divided into model-specific and model-agnostic interpretability.
* **Interpretation types**, which can be divided into pre-model, intrinsic, post-hoc, and extrinsic interpretability.
* **Explanation forms**, which encompass various ways in which explanations can be presented or communicated.
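To make the local/global distinction concrete, below is a minimal sketch using the shap package with a scikit-learn model. The synthetic data merely stands in for tabular knee-OA features, and exact output shapes can vary across shap versions.

```python
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for tabular knee-OA features (illustrative only).
X, y = make_classification(n_samples=200, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

explanation = shap.TreeExplainer(model)(X)   # shap.Explanation object

# Local scope: feature attributions for one patient's prediction.
print(explanation.values[0])
# Global scope: mean |SHAP| per feature summarizes the whole model.
print(np.abs(explanation.values).mean(axis=0))
```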
The proposed XAI framework effectively tackles the technical concerns of general AI models. However, it lacks emphasis on the essential aspects of data and problem characteristics required for instilling domain knowledge into AI models. Moreover, it does not adequately consider the specific needs of lay users, such as medical experts (Du, Liu and Hu, 2019). These factors are crucial in ensuring that AI models are not only transparent and interpretable but also capable of effectively utilizing domain-specific information to enhance their performance and relevance in real-world applications. Therefore, (Nauta, Trienes, Pathak, Nguyen, Peters, Schmitt, Schlotterer, van Keulen and Seifert, 2023) extend the general XAI framework by incorporating considerations for the type of input data, problem, and task. This extension aims to provide a more comprehensive and practical approach to XAI, catering to the specific needs of various domains and ensuring the successful integration of domain knowledge into AI models.
The realm of interpretability in XAI can be categorized into two distinct groups: perceptive interpretability and interpretability by mathematical structures, as proposed by Tjoa and Guan (2020). Perceptive interpretability methods typically provide immediate interpretations, while methods that offer interpretation via mathematical structures produce outputs that require an additional layer of cognitive processing to reach a human-readable presentation. These
taxonomies primarily focus on the transition from black-box models to white-box models, where the inner logic is fully explored and understood. Ali et al. (2023) introduce a novel approach by incorporating gray-box models. These models lie between black-box and white-box models, offering a partial understanding of the underlying mechanisms. By considering this intermediate category, the proposed taxonomy accounts for a broader range of interpretability levels and provides a more nuanced perspective on XAI. Compared to previous studies, their XAI taxonomy incorporates data explainability as an essential aspect to comprehend the datasets used in the AI models. This addition reflects their effort to provide insights into the transparency and interpretability of the data itself, in addition to understanding the model's decision-making process. By considering data explainability, the proposed taxonomy offers a more comprehensive approach to gaining a deeper understanding of AI systems and the role of data in shaping their predictions.
All previously proposed XAI taxonomies offer a structured framework for comprehending and classifying various aspects of XAI approaches and their applications. As highlighted by Nauta et al. (2023), it is important to recognize that certain explanation methods have the ability to incorporate multiple types of explanations, thereby making the categories of explanation methods non-mutually exclusive.
In order to enhance the connection between users and XAI, Wang, Yang, Abdul and Lim (2019) introduced a theoretical conceptual framework that establishes links between different XAI explanation facilities and user reasoning goals. Their work generated a concept called user-centric XAI, where the AI systems are designed by placing the end-users, such as healthcare professionals or patients, at the forefront of the explanation process, as illustrated in Figure 1. Their framework was meticulously designed to mitigate reasoning failures caused by cognitive biases. Additionally, Schoonderwoerd, Jorritsma, Neerincx and Van Den Bosch (2021) proposed a flowchart to guide the design of human-centered XAI systems. This flowchart incorporates three essential components: domain analysis, requirements analysis, and interaction design. By following this flowchart, XAI designers can ensure that their systems are aligned with user needs and provide effective explanations for improved user understanding and decision-making.
Figure 1: Illustration of XAI implementation for knee OA diagnosis. Through XAI, the decision-making process of AI models becomes interpretable and explainable, leading to the visualization of essential insights for AI expert, medical expert, and patient.
### Ethical considerations in XAI
Global policy discussions are placing increasing emphasis on the integration of ethical standards into the design and implementation of AI-enabled technologies, highlighting the growing importance of Trustable AI. In 2018, the High-Level Expert Group on AI, established by the European Commission, published ethical guidelines focused on fostering trust in human-centric AI (Hleg, 2019). The guidelines highlighted seven key requirements for Trustable AI (Kumar, Braud, Tarkoma and Hui, 2020), as follows:
* **Human agency and oversight** that emphasize human autonomy and the importance of fundamental rights in decision-making.
* **Technical robustness and safety** that ensure AI systems are designed to prevent harm and promote resilience and security.
* **Privacy and data governance** that respect privacy and data protection while implementing sound data governance mechanisms.
* **Transparency** that advocates for transparency in data, system, and AI business models, complemented by traceability and explainability.
* **Diversity, non-discrimination, and fairness** that promote fairness and accessibility for all humans while involving relevant stakeholders throughout the AI system's lifecycle.
* **Societal and environmental well-being** that focus on AI systems' positive impact on society and the environment, including sustainability considerations.
* **Accountability** that establishes mechanisms for responsibility and accountability, including auditability and accessible redress for AI system outcomes.
These requirements lead to the principles of Valid AI, Responsible AI, Privacy-preserving AI, and Explainable AI (XAI):
* **Valid AI** ensures that AI systems produce accurate and reliable results by using high-quality data, appropriate algorithms, and robust evaluation methods. It aims to minimize errors and biases, making the AI outputs valid and trustworthy.
* **Responsible AI** involves designing and deploying AI systems in an ethical and socially conscious manner. It entails considering potential societal impacts, adhering to human values, and complying with legal and regulatory standards to minimize harm and promote positive outcomes.
* **Privacy-preserving AI** safeguards individuals' sensitive data during data processing and model training. These AI techniques ensure that personal information remains protected and confidential, preventing unauthorized access and preserving user privacy.
* **Explainable AI (XAI)** addresses the question of understanding the reasoning behind AI decisions. It provides transparency and interpretability to AI outputs, allowing users, including AI experts, medical professionals, and patients, to comprehend and trust the AI model's decisions.
In this framework, XAI plays a crucial role in addressing the fundamental question surrounding the rationale behind the decision-making process of AI systems, encompassing both human-level XAI (for human users) and machine-level XAI (for other AI models or systems). XAI techniques contribute to the transparency and interpretability required for achieving Trustable AI.
The European Union's High-Level Expert Group on AI has made significant efforts to promote XAI through initiatives such as the implementation of the General Data Protection Regulation (GDPR) (Mondschein and Monda, 2019; Hamon, Junklewitz, Sanchez, Malgieri and De Hert, 2022). In addition, the proposal of the Artificial Intelligence Act by the European Commission represents a recent endeavor to foster a robust internal market for Artificial Intelligence (AI) systems (Kop, 2021; Van Kolfschooten, 2022). In the United States, the Defense Advanced Research Projects Agency (DARPA) has launched an XAI program aimed at tackling three key challenges: (1) developing more explainable
models, (2) designing effective explanation interfaces, and (3) understanding the psychological requirements for effective explanations (DW, 2019). Despite considerable efforts, existing explainability methods still fall short in providing reassurance about the correctness of individual decisions, building trust among users, and justifying the acceptance of AI recommendations in clinical practice. Consequently, there is an immediate need to prioritize rigorous internal and external validation of AI models as a more direct approach to achieving the goals commonly associated with explainability (Ghassemi, Oakden-Rayner and Beam, 2021).
## 3 Search strategy and eligibility criteria
This systematic review was conducted based on the procedure proposed by Kitchenham (2004). We conducted a comprehensive literature search using Boolean search strategy in five databases, namely Web of Science, Scopus, ScienceDirect, PubMed, and Google Scholar (Table 2). Our search included all publications up to May 20th, 2023.
Papers will be included if they meet the following criteria:
* Focus on diagnostic tasks related to knee osteoarthritis (OA)
* Propose an end-to-end artificial intelligence (AI) model
* Utilize explainable AI (XAI) methods to provide explanations for the proposed model
* Not a review paper
* Published in English
Our review identified a total of 69 studies that presented at least one knee OA computer-assisted diagnostic system utilizing an end-to-end AI approach. Among these studies, 61 out of 69 (88.4%) incorporated explainable AI (XAI) techniques and were included in our analysis (Figure 2). The earliest publication in this domain was found in 2017, which coincides with the introduction of popular XAI approaches such as the gradient-weighted class activation map (GradCAM) (Selvaraju, Cogswell, Das, Vedantam, Parikh and Batra, 2017) and the self-attention mechanism (Vaswani, Shazeer, Parmar, Uszkoreit, Jones, Gomez, Kaiser and Polosukhin, 2017). The introduction of these techniques sparked increased interest and discussion surrounding XAI in the field of knee OA diagnosis; the publication trend therefore experienced exponential growth from 2017 to 2021, as depicted in Figure 3. Although there was a slight decrease in publications in 2022, the number of publications in high-quality (Q1) journals has been increasing steadily.
| Database | Boolean search strings |
| --- | --- |
| Scopus | TITLE-ABS-KEY ( "knee" OR "tibiofemoral joint" ) AND TITLE-ABS-KEY ( "osteoarthritis" OR "degenerative arthritis" OR "degenerative joint disease" OR "wear-and-tear arthritis" ) AND TITLE-ABS-KEY ( "XAI" OR ( ( "explainable" OR "interpretable" ) AND ( "AI" OR "artificial intelligence" OR "deep learning" OR "machine learning" ) ) OR "SHAP" OR "LIME" OR "gradcam" OR "grad cam" OR "heatmap" OR "saliency map" OR "attention map" ) |
| Web of Science | (((TS=(knee OR tibiofemoral joint))) AND TS=(osteoarthritis OR degenerative arthritis OR degenerative joint disease OR wear-and-tear arthritis)) AND TS=(XAI OR ((explainable OR interpretable) AND (AI OR artificial intelligence OR deep learning OR machine learning)) OR SHAP OR LIME OR gradcam OR grad cam OR heatmap OR saliency map OR attention map) |
| PubMed | (knee OR tibiofemoral joint) AND (osteoarthritis OR degenerative arthritis OR degenerative joint disease OR wear-and-tear arthritis) AND (XAI OR ((explainable OR interpretable) AND (AI OR artificial intelligence OR deep learning OR machine learning)) OR SHAP OR LIME OR gradcam OR grad cam OR heatmap OR saliency map OR attention map) |
| ScienceDirect | (knee) AND (osteoarthritis) AND (XAI OR ((explainable OR interpretable) AND (AI OR artificial intelligence OR deep learning OR machine learning))) |
| Google Scholar | intitle:((knee) AND (osteoarthritis OR joint disease) AND (XAI OR ((explainable OR interpretable) AND (AI OR artificial intelligence OR deep learning OR machine learning)) OR SHAP OR LIME OR gradcam OR grad cam OR heatmap OR saliency map OR attention map)) |

Table 2: Boolean search strings employed for the corresponding bibliographic databases and search engines.
## 4 General query on knee OA assessment
We conducted an analysis of the general query to acquire an up-to-date understanding of the topic of XAI application for knee OA diagnosis. This analysis aims to complement the qualitative literature review and provide valuable insights into the current state of research in this area. Co-occurrence analysis was performed using VOSviewer (Van Eck and Waltman, 2010) to discover the relationships among terms extracted from the titles and abstracts of the selected studies. Out of the 1,455 terms identified, a subset of 190 terms with an occurrence frequency of at least three was chosen for analysis. Out of these 190 terms, we focused on the top 114 terms based on their relevance score, which fell within the top 60% range. These 114 terms were then included in our analysis to gain insights into their co-occurrence patterns and relationships (Figure 4).
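As a rough illustration of the underlying computation (VOSviewer implements a considerably more sophisticated pipeline, including term extraction and relevance scoring), a toy Python sketch of pairwise term co-occurrence counting might look as follows; the documents and threshold are illustrative only.

```python
from collections import Counter
from itertools import combinations

# Toy "titles and abstracts", already reduced to noun-phrase terms.
docs = [
    {"image", "radiograph", "detection"},
    {"image", "mri", "pain"},
    {"image", "radiograph", "progression"},
]

pair_counts = Counter()
for terms in docs:
    pair_counts.update(combinations(sorted(terms), 2))

# Keep only term pairs co-occurring at least `min_occurrences` times.
min_occurrences = 2
frequent_pairs = {p: c for p, c in pair_counts.items() if c >= min_occurrences}
print(frequent_pairs)   # e.g. {('image', 'radiograph'): 2}
```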
As a result, the analysis revealed nine distinct clusters. Cluster 1 (20 items), Cluster 3 (16 items), Cluster 5 (12 items), and Cluster 7 (10 items) primarily focused on various aspects of knee OA symptoms and underlying risks, including bone and cartilage conditions, anterior cruciate ligament injury, demographics, and risk of OA deterioration, respectively. Cluster 2 (19 items) emphasized model interpretability and clinical practitioners. Cluster 4 (14 items), Cluster 8 (10 items), and Cluster 9 (2 items) were specifically associated with patient data. Cluster 6 (11 items) mainly encompassed studies related to automatic early diagnosis of knee OA.
The ten most frequently occurring terms were "image" (33 occurrences), "radiograph" (21 occurrences), "detection" (21 occurrences), "progression" (14 occurrences), "pain" (14 occurrences), "cluster" (13 occurrences), "parameter" (13 occurrences), "risk factor" (12 occurrences), "task" (12 occurrences), and "mri" (11 occurrences).
## 5 Data for knee OA assessment
Knee OA is a complex and multifactorial disorder, and as such, a wide variety of data can be utilized to gain insights and explanations related to this health condition. In this review, we specifically focus on tabular data and image pixels. To track the evolving landscape of knee OA research, we performed an analysis of available datasets for knee OA assessment. This analysis provides a comprehensive understanding of the Western and Eastern data sources in knee OA diagnosis. We also highlight the role of datasets in the generalizability and applicability of AI-based approaches.
### Tabular data
Tabular data in knee OA research is a collection of structured information encompassing both objective and subjective measurements of the condition. Within this tabular data, we have identified six distinct domains: demographic, clinical, imaging, patient-reported outcomes, biomechanics, and biomarkers. Demographic data represents information about participants' characteristics, such as medical history, symptoms, demographics, nutrition, physical activity, comorbidity, and behavioral aspects. Clinical data involves physical exams and blood measures, outlining patients' essential health information. Imaging data consists of medical imaging outcomes and anthropometrics for quantifying anatomical structures. Patient-reported outcomes focus on data collected through questionnaires to assess patient-reported symptoms and health-related quality of life. Biomechanical data involves the mechanics and movement of the knee during various activities. Biomarker data includes measurable indicators found in bodily fluids, offering insights into disease status and treatment response. A comprehensive comparison of the accessibility, cost, complexity, diagnostic error, adverse effects, risk of bias, and level of knowledge required for each domain is presented in Table 3. This evaluation aids researchers in understanding the strengths and limitations of each data domain.
### Image pixels
Image pixels in knee OA research consist of 2D or 3D data that allow for the visualization of human bone and tissue structures. This visual representation aids in gaining a deeper understanding of the anatomical aspects of the knee, enabling researchers to analyze and assess the condition more effectively. When paired with the tabular data of medical imaging outcomes, the combination of image pixels and structured data provides a more comprehensive approach to both qualitative and quantitative assessments of knee OA. This integration enhances the overall analysis and contributes to better knowledge discovery opportunities. However, handling image pixels data comes with challenges such as noise and resolution issues. High-resolution images offer improved visualization outcomes but also create a heavier computational load. Thus, a trade-off between image quality and computational efficiency needs to be carefully considered for practical implementation.
### Geographical distribution and methodological insights
Extensive data collection for OA research was conducted in both Western (n = 16) and Eastern (n = 10) countries (Fig. 5), with a particular focus on the United States and China. Existing research heavily relied on data from the United States, where Caucasians constitute the largest population in the datasets. It is worth noting that limited research has been carried out in South American countries, and none has been conducted in African countries.
Overall, the identified datasets covered a wide range of sample sizes, varying from 40 to 4,796 individuals. Notably, 62.3% of the studies (38 out of 61) utilized the Osteoarthritis Initiative (OAI) dataset from the United States for training or testing purposes. In 42 out of 61 studies, imaging data, primarily X-ray images (33 out of 42 studies), were utilized for clinical confirmation of OA disease. Approximately 39.3% of the studies (24 out of 61) employed tabular or structured data, such as demographics, clinical characteristics, and laboratory examinations, to predict the risk of OA incidence.
Utilizing data from a single channel, whether images or tables, poses significant challenges in knee OA research, as it may limit a comprehensive understanding of the condition. To address this limitation, two studies (Tiulpin, Klein, Bierma-Zeinstra, Thevenot, Rahtu, Meurs, Oei and Saarakkala, 2019; Karim et al., 2021) adopted a data fusion approach, leading to the development of multimodal data models that maximize the utilization of patient information. By integrating diverse data types (Figure 6), these innovative approaches achieved more comprehensive and accurate predictions for knee OA assessment.
### Public datasets for knee OA assessment
Due to ethical concerns and strict institutional regulations, there are limited public datasets available for knee OA assessment. Despite the availability of a greater number of private datasets, public datasets play a dominant role in establishing benchmark results and facilitating continuous improvement in the field of knee OA research. In this section, we present the publicly accessible datasets for knee OA diagnosis as outlined in Table 4.
**Osteoarthritis Initiative (OAI)** (U.S. Department of Health and Human Services) is an open-access dataset provided by the National Institutes of Health (NIH). It focuses on identifying the most promising biomarkers of the development and progression of symptomatic knee OA. This dataset includes 4,796 subjects between the ages of 45 and 79 years who either have knee OA or are at an increased risk of developing the condition. The data was collected from four clinical centers (Ohio State University, University of Maryland School of Medicine/Johns Hopkins University School of Medicine, University of Pittsburgh School of Medicine, and Brown University School of Medicine and Memorial Hospital of Rhode Island). Over a period of ten years, all participants underwent annual radiographic and MRI scans of the knee, along with clinical assessments of disease activity. Furthermore, genetic and biochemical specimens were collected annually from all participants, providing rich data for researchers to explore novel knee OA diagnosis and treatment approaches.
**Multicenter Osteoarthritis Study (MOST)**(Segal, Nevitt, Gross, Hietpas, Glass, Lewis et al., 2013) is a public dataset funded by the National Institutes of Health (NIH) and National Institute on Aging (NIA). The primary objective of this dataset is to study symptomatic knee OA in a community-based sample of adults with or at high risk of developing knee OA. A total of 3,026 subjects between the ages of 50 and 79 years from two clinical sites (Iowa City, Iowa, and Birmingham, Alabama) participated in the study. The dataset contains essential information related to biomechanical factors (such as physical activity-related factors), bone and joint structural factors (such as knee MRI assessment), and nutritional factors.
**MRNet**(Bien, Rajpurkar, Ball, Irvin, Park, Jones, Bereket, Patel, Yeom, Shpanskaya et al., 2018) is a collection of MRI data created by the Stanford University Medical Center. This dataset aims to investigate two common types of knee injuries: anterior cruciate ligament tears and meniscal tears which are contributing factors to knee OA disorder. The study involved 1,312 subjects and generated a total of 1,370 MRI scans. The MRI examinations were conducted using GE scanners (GE Discovery, GE Healthcare, Waukesha, WI) with a standard knee MRI coil and a routine non-contrast knee MRI protocol, comprising several key sequences: coronal T1 weighted, coronal T2 with fat saturation, sagittal proton density (PD) weighted, sagittal T2 with fat saturation, and axial PD weighted with fat saturation. Among the knee examinations, about 56.6% were performed using a 3.0 Tesla magnetic field, while the remaining used a 1.5 Tesla magnetic field. Furthermore, the authors provided a benchmark MRNet single model, intended to support further research endeavors in the field.
**FastMRI+**(Zhao, Yaman, Zhang, Stewart, Dixon, Knoll, Huang, Lui, Hansen and Lungren, 2022) is a publicly available MRI dataset that extended the work of the FastMRI dataset (Knoll, Zbontar, Sriram, Muckley, Bruno, Defazio, Parente, Geras, Katsnelson, Chandarana et al., 2020). This extended dataset includes 1,172 MRI scans acquired at 1.5 or 3.0 Tesla and provides 22 different pathology labels in knee anatomical areas such as bone, cartilage, ligament,
meniscus, and joint. Notably, many of the pathologies, such as cartilage loss and joint effusion, are closely related to knee OA. Each knee MRI scan comprises a single series of coronal images in a PD- or T2-weighted sequence. The primary focus of the FastMRI+ dataset is to facilitate the study of MRI image reconstruction, particularly in regions that could potentially contain clinical pathology. Because the dataset provides detailed pathology labels, researchers can explore and develop advanced image reconstruction techniques that cater to specific clinical conditions.
**Cohort Hip and Cohort Knee (CHECK)**(Wesseling, Boers, Viergever, Hilberdink, Lafeber, Dekker and Bijlsma, 2016; Wang, Runhaar, Kloppenburg, Boers, Bijlsma, Bacardit, Bierma-Zeinstra, Aerts-Lankhorst, Bastick, van Bentveld et al., 2022) is a research initiative sponsored by the Dutch Arthritis Foundation, in collaboration with ten general and university hospitals in The Netherlands, situated in semi-urbanized regions. The study recruited a total of 1,002 subjects aged between 45 and 65 years. The primary goal of this dataset is to explore and analyze the clinical, biochemical, and radiographic signs and symptoms associated with early OA. Moreover, the dataset aims to identify prognostic factors that may contribute to the diagnosis and progression of OA. The study spans a duration of seven years, during which 846 subjects actively participated in annual clinic visits, providing valuable longitudinal data for comprehensive OA research.
**Private research at Danderyd University Hospital** is used in Olsson, Akbarian, Lind, Razavian and Gordon (2021) to develop a predictive model for classification of OA stage. The dataset consists of 6,103 X-ray images acquired from Danderyd University Hospital. Unlike other datasets that undergo extensive preprocessing for artifact removal, this dataset used the entire image series, including X-ray images with visual disturbances like implants, casts, and non-degenerative pathologies. This unique approach provides a more realistic representation of clinical scenarios and enhances the dataset's value for studying OA progression and prediction in real-world conditions.
**Mendeley VI** is a unique public dataset that focuses on the Eastern population. It contains 1,650 X-ray images collected from Indian institutions. The X-ray images were captured using the PROTEC PRS 500E X-ray machine. All images are 8-bit grayscale and have been cropped to focus on the cartilage region. They have been manually annotated by two experienced medical experts with their respective Kellgren and Lawrence grades. The intention of this dataset is to facilitate the development of AI models for classifying osteoarthritis severity.
## 6 Classification systems for knee OA conditions
In this section, we compare the classification systems from the medical domain used to establish ground truth data for predictive models. Approximately half of the studies (32 out of 61) utilized medical experts' knowledge for classification or clustering tasks. Within this subset, 28 employed the Kellgren-Lawrence (KL) grading system to rate OA severity. The original KL grading system comprises five ordinal classes based on a composite score of radiographic OA features. However, the number of classes used in the top layer of the KL prediction models varied from two to five across the reviewed studies, depending on their respective research purposes. A commonly used standard threshold for radiological OA is KL\(\geq\)2. Most of the studies (18 out of 28) were dedicated to developing AI models specifically for the five-grade KL classification. Binary classification was designed to identify the presence of OA (KL1 to KL4) (4 out of 28) or early OA (KL2) (4 out of 28). In addition, one study classified the change in KL grade after 60 months.
Besides KL grading, one study employed the Osteoarthritis Research Society International (OARSI) atlas joint-space narrowing grade for medial tibiofemoral OA. Another study developed radiographic spiking criteria to guide the generation of ground truth data (Patron, Annala, Lainiala, Paloneva and Aryamo, 2022). The Whole-Organ Magnetic Resonance Imaging Score (WORMS) (n = 1) and the MRI Osteoarthritis Knee Score (MOAKS) (n = 1) were employed for knee OA detection on MRI data. Both classification systems emphasize cartilage damage related to OA.
| Input data type | Accessibility | Cost | Complexity | Diagnostic error | Adverse effects | Bias | Knowledge required |
|---|---|---|---|---|---|---|---|
| Demographic | High | Low | Low | Low | Low | Low | Low |
| Clinical | High | Moderate | Moderate | Low | Low | Low | Moderate |
| Imaging | Moderate | High | High | Moderate | Low | Moderate | High |
| Patient-Reported Outcomes | Moderate | Low | Low | Moderate | Low | Low | Low |
| Biomechanics | Low | Low | Moderate | Low | Low | Low | Low |
| Biomarkers | Low | High | High | Moderate | Low | High | High |

Table 3: Comparison of input data types in knee OA diagnosis.
In contrast, patient-reported outcome measures were used in only four studies. The Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) (n = 2) and the Knee Injury and Osteoarthritis Outcome Score (KOOS) (n = 1) were the most frequently employed patient-reported outcome measures, particularly for assessing pain. However, due to their subjective nature, the analysis process for these measures was complex and required careful statistical analysis (Pierson, Cutler, Leskovec, Mullainathan and Obermeyer, 2021). Moreover, transforming the data from these measures into a format suitable for modeling presented a significant challenge. Morales, Lee, Caliva, Iriondo, Liu, Majumdar and Pedoia (2021) established a direct binary classification system for chronic knee pain based on patient self-reporting. They defined chronic knee pain as pain that persists for more than half of the days in a month for at least six out of the past 12 months. In addition, two studies utilized knowledge-based and patient-based outcome measures (Zeng, Zhu, Xie, Zhong, Huang, Ma, Zhang and Mao, 2022; Chan, Li, Chan and Wen, 2021).
## 7 XAI approaches for knee OA assessment
The role of XAI in knee OA assessment is to offer comprehensible explanations regarding the input data. These explanations are intended to be understood by humans. Thus, we take into account the interests of data scientists and domain experts in the development of XAI methods.
For data scientists, knowing the internal workings of the model and comprehending how the data is applied are crucial for improving the model's performance and preventing overfitting. This knowledge enables them to fine-tune the model, optimize its architecture, and make informed decisions during the development process. Post-hoc explanations may be of lesser concern to them, as they prioritize optimizing the model itself.
On the other hand, domain experts, especially medical experts who may not have the technical expertise of data scientists, are more interested in understanding how and why a model generated a particular result. They seek clear and interpretable explanations to trust the model's decisions and insights. Knowing the key characteristics that led to a conclusion helps them validate the model's outputs and make informed decisions based on the AI system's recommendations.
| Data source (website) | Year of data | No. subjects (age range) | Data types | Geographic representation |
|---|---|---|---|---|
| Osteoarthritis Initiative (OAI) (U.S. Department of Health and Human Services) (https://nda.nih.gov/oai/) | 2004-2015 | 4,796 (45-79) | Image data and tabular data at baseline and 12, 24, 36, 48, 60, 72, 84, 96, and 108 months | USA |
| Multicenter Osteoarthritis Study (MOST) (Segal et al., 2013) | 2003-2018 | 3,026 (50-79) | Image data and tabular data at baseline and 15, 30, 60, 72, and 84 months | USA |
| MRNet (Bien et al., 2018) | 2001-2012 | 1,312 (-) | Image data: 1,370 MRI scans | USA |
| FastMRI+ (Zhao et al., 2022) (https://github.com/microsoft/fastmri-plus, https://fastmri.med.nyu.edu/) | N/A | - (-) | Image data: 1,172 coronal MRI scans, proton density-weighted or T2-weighted | USA |
| Cohort Hip and Cohort Knee (CHECK) (Wesseling et al., 2016; Wang et al., 2022) | 2002-2012 | 1,002 (45-65) | Image data and tabular data at baseline and 2, 5, 8, and 10 years | The Netherlands |
| Private research at Danderyd University Hospital (Olsson et al., 2021) (https://datahub.aida.scilifelab.se/10.23698/aida/koa2021) | 2002-2016 | - (-) | Image data: 6,403 X-ray images | Sweden |
| Mendeley VI (Gornale and Patravali, 2020) (https://data.mendeley.com/datasets/t9ndx37v5h/1) | N/A | - (-) | Image data: 1,650 X-ray images | India |

Table 4: List of publicly accessible OA-related data sources and their descriptions.
By considering the specific needs and interests of both data scientists and domain experts, we propose the XAI taxonomy as shown in Figure 7 to provide valuable insights into the diverse requirements of different stakeholders. Understanding data interpretability, model interpretability, and post-hoc interpretability, along with XAI evaluation approaches, is crucial in building transparent, trustworthy, and effective AI models that cater to various real-world applications.
### Data interpretability
The importance of data interpretability arises from the substantial impact of the training dataset on an AI model's behavior. To facilitate a better understanding of the input data, numerous data analysis techniques and mathematical algorithms have been developed to quantify the intrinsic data characteristics. In the context of knee OA, data interpretability can uncover valuable clinical patterns that might not have been captured in traditional evidence-based research. This can empower the researchers to glean new insights and knowledge from the data, contributing to more informed and effective decision-making in knee OA assessment.
In the following sections, we will discuss a few approaches that provide interpretability for knee OA data. This includes feature extraction, explainable feature engineering, and knowledge graphs, which are widely recognized as pre-modelling approaches. These approaches help extract useful information from the data and represent different steps in achieving data interpretability. Feature extraction extracts relevant features, explainable feature engineering transforms data for better understanding, and knowledge graphs connect related points for a comprehensive disease overview.
#### 7.1.1 Feature extraction
Feature extraction plays a critical role in capturing a representative set of features. In our survey, we found three types of feature extraction: exploratory data analysis, image descriptors, and dimensionality reduction.
**Exploratory data analysis.** Exploratory data analysis (EDA) is a data analysis approach that involves summarizing the main characteristics of the data and visualizing the data summary using appropriate representations (Sahoo, Samal, Pramanik and Pani, 2019). EDA is an essential process for understanding the structure and distribution of tabular data, as well as identifying important features and patterns that can guide subsequent analysis. The significant contribution of EDA to knee OA data is the analysis of population-based samples to provide a disease overview and to detect biases in the data (Angelini, Widera, Mobasheri, Blair, Struglics, Uebelhoer, Henrotin, Marijinssen, Kloppenburg, Blanco et al., 2022).
General EDA outcomes included dimensions, mean, median (Angelini et al., 2022), standard deviation, range, and missing samples. To deal with missing samples, Angelini et al. (2022) implemented imputation models using random forest (RF) and k-nearest neighbor (KNN) regression models, which were further refined through a bootstrapping-like procedure. Jakaite, Schetinin, Hladuvka, Minaev, Ambia and Krzanowski (2021) performed an analysis of the brightness value distributions in the lateral and medial sides of the knee in the OA and control groups. The results showed that the mean brightness in the OA group was higher than in the control group on both sides. This observation suggests that the higher mean brightness may be indicative of increased bone density in patients from the OA case group.
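To make this kind of EDA and imputation workflow concrete, the following is a minimal Python sketch assuming pandas and scikit-learn; the dataset is synthetic, the column names are purely illustrative, and a stock KNN imputer stands in for the refined, bootstrapped imputation models described above.

```python
import numpy as np
import pandas as pd
from sklearn.impute import KNNImputer

# Synthetic stand-in for a tabular knee OA dataset (column names illustrative).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age": rng.integers(45, 80, 200).astype(float),
    "bmi": rng.normal(28, 4, 200),
    "womac_pain": rng.normal(6, 3, 200),
})
df.loc[df.sample(frac=0.1, random_state=0).index, "bmi"] = np.nan  # missing samples

# General EDA outcomes: dimensions, central tendency, spread, range, missingness.
print(df.shape)
print(df.describe().loc[["mean", "50%", "std", "min", "max"]])
print(df.isna().sum())

# KNN-based imputation of the missing values, in the spirit of Angelini et al. (2022).
df_imputed = pd.DataFrame(KNNImputer(n_neighbors=5).fit_transform(df),
                          columns=df.columns)
```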
**Image descriptors.** Image descriptors are typically used to capture and describe the shape of an object in an image. Jakaite et al. (2021) utilized Zernike moments (Equation 1) to capture knee X-ray textural details at the bone microstructural level. By using this image descriptor and the Group Method of Data Handling (GMDH), they were able to effectively identify patients at risk of early knee OA, even with a relatively small dataset of 40 samples.
The Zernike moments \(A_{nm}\) are defined as:
\[A_{nm}=\frac{n+1}{\pi}\sum_{x}\sum_{y}f(x,y)\,V^{*}_{nm}(\rho,\theta) \tag{1a}\]

where \(f(x,y)\) represents the image, \(n\) denotes the order, \(m\) denotes the repetition, \(V_{nm}\) represents the orthogonal complex polynomials, \(\rho\) represents the length of the vector to the \((x,y)\) pixel, and \(\theta\) represents the angle between the x-axis and \(\rho\).
The orthogonal complex polynomials \(V_{nm}\) are defined as:
\[V_{nm}(x,y)=V_{nm}(\rho,\theta)=R_{nm}(\rho)\,e^{jm\theta} \tag{1b}\]
where \(R_{nm}\) represents radial polynomials, \(\rho\) represents the length of a vector to a \((x,y)\) pixel, and \(\theta\) represents the angle between x-axis and \(\rho.\)
The radial polynomials \(R_{nm}\) are defined as:
\[R_{nm}=\sum_{k=0}^{(n-|m|)/2}\frac{(-1)^{k}(n-k)!}{k!((n+|m|)/2-k)!((n-|m|)/2-k )!}\rho^{n-2k} \tag{1c}\]
where \(n\) denotes the order and \(m\) denotes the repetition.
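As a hedged illustration of Equations 1a-1c in practice, the `mahotas` library offers a ready-made Zernike moment implementation; the synthetic disc below merely stands in for a knee X-ray patch, and the radius and degree are illustrative choices.

```python
import numpy as np
import mahotas

# Synthetic 64x64 test image (a filled disc) standing in for a knee X-ray patch.
y, x = np.mgrid[:64, :64]
img = ((x - 32) ** 2 + (y - 32) ** 2 < 24 ** 2).astype(np.float64)

# Zernike moments up to order n = 8 within a disc of radius 24 (Equations 1a-1c);
# the result is a rotation-invariant shape/texture descriptor vector.
features = mahotas.features.zernike_moments(img, radius=24, degree=8)
print(len(features))  # length of the descriptor vector
```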
More detailed bone analysis was performed by Bayramoglu, Tiulpin, Hirvasniemi, Nieminen and Saarakkala (2020). The authors conducted a comparison of five image descriptors: local binary patterns (LBP), fractal dimension (FD), Haralick features, Shannon entropy, and histogram of oriented gradients (HOG). Based on their findings, they recommended the use of LBP as it preserved the most discriminative features among the descriptors. They also pointed out that the LBP and HOG descriptors are less sensitive to changes in radiographic acquisition protocols and could be applied in clinical decision support tools in the future.
Besides texture analysis, image descriptors can also be used to extract object edge information. An adaptive Canny algorithm was employed to extract the edges of the knee joint from X-ray images by dynamically adjusting the threshold values based on local image characteristics (Farajzadeh, Sadeghzadeh and Hashemzadeh, 2023). The adaptive low threshold \(\alpha\) and high threshold \(\beta\) are defined as:
\[\alpha =\max\left(0,(1-\sigma)\times\text{median}(x_{i})\right) \tag{2a}\] \[\beta =\min\left(255,(1+\sigma)\times\text{median}(x_{i})\right) \tag{2b}\]
where \(\alpha\) denotes the lower threshold, \(\beta\) denotes the upper threshold, \(\sigma\) is a tuning parameter, and \(\text{median}(x_{i})\) denotes the median pixel value of the image.
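One plausible rendering of these adaptive thresholds with OpenCV is sketched below; the Gaussian pre-smoothing, the default \(\sigma\), and the synthetic test image are illustrative choices rather than the exact settings of Farajzadeh et al. (2023).

```python
import cv2
import numpy as np

def adaptive_canny(gray: np.ndarray, sigma: float = 0.33) -> np.ndarray:
    """Canny edge map with thresholds derived from the median pixel value
    (Equations 2a-2b): alpha = (1 - sigma) * median, beta = (1 + sigma) * median."""
    med = float(np.median(gray))
    alpha = int(max(0, (1.0 - sigma) * med))   # lower hysteresis threshold
    beta = int(min(255, (1.0 + sigma) * med))  # upper hysteresis threshold
    return cv2.Canny(gray, alpha, beta)

# Synthetic stand-in for a knee radiograph: a bright region on a dark background.
img = np.zeros((128, 128), dtype=np.uint8)
cv2.rectangle(img, (30, 40), (100, 90), 180, thickness=-1)
edges = adaptive_canny(cv2.GaussianBlur(img, (5, 5), 0))
```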
**Dimensionality reduction.** OA datasets are typically complex and multidimensional, containing a vast number of variables. Visualizing such high-dimensional data can be challenging since human perception is limited to three dimensions. Hence, researchers tend to seek lower-dimensional representations of the original data (Murdoch et al., 2019). Dimensionality reduction techniques are employed to reduce the number of parameters while preserving the underlying structure as much as possible. Two commonly used methods in this field are Principal Component Analysis (PCA) (Angelini et al., 2022) and t-Distributed Stochastic Neighbor Embedding (t-SNE) (Chen, Gao, Shi, Allen and Yang, 2019; Chan et al., 2021; Li, Xiao, Liu, Feng, Zhu, Liao, Yu, Qian, Chen, Fang et al., 2023; Wang, Chetouani and Jennane, 2023b,d).
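As a minimal sketch of these two techniques with scikit-learn, assuming a standardized numeric feature matrix; the data and hyperparameters are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from sklearn.preprocessing import StandardScaler

# Synthetic high-dimensional OA feature matrix (200 patients x 50 variables).
X = np.random.default_rng(0).normal(size=(200, 50))
X_std = StandardScaler().fit_transform(X)

# PCA: linear projection onto the directions of maximal variance.
X_pca = PCA(n_components=2).fit_transform(X_std)

# t-SNE: non-linear embedding that preserves local neighbourhood structure.
X_tsne = TSNE(n_components=2, perplexity=30, init="pca",
              random_state=0).fit_transform(X_std)
```

Both embeddings can then be plotted in two dimensions to visually inspect how patients group together.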
#### 7.1.2 Explainable feature engineering
There are two main approaches developed for explainable feature engineering: domain-specific methods and model-based methods. Another emerging approach in explainable feature engineering is disentangled representation learning, which has gained traction with the introduction of various generative models.
**Domain-specific.** Domain-specific approaches for the knee OA diagnostic task utilize the knowledge and expertise of medical experts, along with insights derived from EDA, to extract features. Many studies in this field have focused on developing knee-specific approaches that capture and characterize key aspects of bone and cartilage, as well as limb alignment. Du, Almajalid, Shan and Zhang (2018) developed a measure called the cartilage damage index (CDI) to quantify cartilage thickness by measuring specific informative locations on the reconstructed cartilage layer instead of evaluating the entire cartilage. In a cartilage assessment conducted by Ciliberti, Guerrini, Gunnarsson, Recenti, Jacob, Cangiano, Tesfahunegn, Islind, Tortorella, Tsirilaki et al. (2022), two volumetric analyses were employed. The first analysis focused on wall thickness, where the cartilage mesh was examined, and the thickness of each element was calculated from surface to surface. The hypothesis underlying this analysis was that patients with degenerative and traumatic cartilages would exhibit thinner cartilage in specific regions compared to the control group. The second analysis
focused on cartilage curvature by measuring the Gaussian curvature of each cartilage element based on its neighboring elements. This analysis hypothesized that areas with higher cartilage degradation would exhibit increased curvature due to the formation of holes and depressions surrounding those regions. In Morales et al. (2021), cartilage thickness was determined for femoral, tibial, and patellar cartilage masks per sagittal slice by performing an Euclidean distance transform along the morphological skeleton of each mask. Furthermore, the shape of the bone was characterized by measuring the distance from the bone surface of each bone mask to its volumetric centroid. A recent study by Zhuang, Si, Wang, Xuan, Ouyang, Zhan, Xue, Zhang, Shen, Yao et al. (2022) proposed a unified graph representation approach to construct personalized knee cartilages that are attached to the femur, tibia, and patella, respectively. They used the patient-specific cartilage graph representation to guide their DL model. Additionally, to assess coronal limb alignment through radiographic means, the weight-bearing line (WBL) ratio was derived by calculating the ratio between the crossing point of the mechanical axis, measured from the medial edge of the tibial plateau, and the total width of the tibial plateau (Moon, Choi, Lee, Choi, Yoo and Lee, 2021).
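A simplified sketch of the thickness measurement described by Morales et al. (2021), an Euclidean distance transform sampled along the morphological skeleton of a cartilage mask, might look as follows; the mask here is synthetic, whereas the real pipeline operates on segmented MRI slices.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize

# Synthetic binary cartilage mask for one sagittal slice (illustrative only).
mask = np.zeros((128, 128), dtype=bool)
mask[60:68, 20:110] = True  # a thin curved layer would be used in practice

# Distance from each interior pixel to the nearest background pixel.
edt = distance_transform_edt(mask)

# Sample the distance map along the morphological skeleton; twice the distance
# at the skeleton approximates the local cartilage thickness in pixels.
skeleton = skeletonize(mask)
thickness = 2.0 * edt[skeleton]
print(thickness.mean(), thickness.max())
```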
**Model-based.** Model-based feature engineering leverages an automatic approach to unveil the inherent structure of a dataset, leading to the extraction of relevant and informative features (Murdoch et al., 2019). One such example is unsupervised clustering, a technique that groups similar data points together based on their intrinsic characteristics, without the need for labelled target variables (Murdoch et al., 2019). For instance, Morales et al. (2021) developed a fully automatic landmark-matching algorithm based on Coherent Point Drift to map the bone surfaces into a reference space. Bayramoglu et al. (2020) used simple linear iterative clustering based superpixel segmentation to extract the region of interest as a pre-processing strategy. Angelini et al. (2022) applied k-means clustering to analyze biochemical marker data and identify prominent subgroups among patients with OA. This approach enabled them to identify three dominant OA phenotypes. Nelson, Keefe, Schwartz, Callahan, Loeser, Golightly, Arbeeva and Marron (2022) conducted similar work using biclustering, but their work extended to more inclusive clinical data, including demographics, medical history, symptoms, physical activity, physical exam, and medical imaging outcomes. Through their analysis, they identified two significant clusters. One cluster represented individuals who exhibited structural progression over time but experienced improvements in pain. The other cluster represented individuals who had stable pain scores and were less affected by OA. Additionally, model-based feature engineering techniques were employed to analyze gait data, which is known for its complexity with multidimensional and time-series properties. Leporace, Gonzalez, Metsavaht, Motta, Carpes, Chahla and Luzo (2021) utilized self-organizing maps (SOM) on principal components to detect gait similarity patterns in individuals with high-grade OA. The resulting patterns were visualized using a unified distance matrix (U-matrix). Subsequently, the U-matrix was subjected to the k-means clustering algorithm, leading to the formation of four distinct gait kinematic clusters.
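The k-means step of such phenotype discovery can be sketched in a few lines with scikit-learn; the marker matrix below is synthetic, with k = 3 echoing the three phenotypes reported by Angelini et al. (2022).

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Synthetic biochemical marker matrix (300 patients x 12 markers); illustrative.
rng = np.random.default_rng(0)
markers = rng.normal(size=(300, 12))

# Standardize, then partition patients into three candidate phenotypes.
X = StandardScaler().fit_transform(markers)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(labels))  # number of patients per phenotype cluster
```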
**Disentangled representation learning.** Disentangled representation learning is a significant and closely related area of research that focuses on acquiring a dataset representation in which the generative latent variables are disentangled or separated. Latent variables in this context can be regarded as interpretable or explainable features of the dataset. Prezja, Paloneva, Polonen, Niinimaki and Ayramo (2022) were pioneers in applying the DeepFake concept in this medical domain, specifically by utilizing Wasserstein generative adversarial neural networks with gradient penalty (WGAN-GP). Their model managed to preserve important OA anatomical information during the generation process. The authors utilized DeepFake-generated data to substitute real data during the training of a pre-trained VGG model for a classification task. Remarkably, they observed a mere 3.79% decrease in accuracy compared to the baseline when classifying real OA X-rays. Wang, Chetouani and Jennane (2023) advanced the field of disentangled representation learning by introducing a novel approach called the key-exchange convolutional autoencoder (KE-CAE). This method was designed to extract specific radiograph features related to early knee OA from the latent space through cross-image reconstruction. Their proposed approach successfully captured crucial information from radiographs representing early knee OA, enabling effective analysis. Notably, their model not only achieved high-quality reconstruction of the original images but also generated synthetic images that accurately represented different stages of knee OA. This contribution holds promise for the early detection and diagnosis of knee OA.
#### 7.1.3 Knowledge graphs
A knowledge graph (Figure 8) is a structured representation of knowledge that captures relationships between entities in a particular domain. Li, Liu, Zhao, Zhang and Xing (2020) established a medical knowledge graph using unstructured data from an electronic medical record (EMR) database. The EMR data was in Mandarin, which posed a
computational challenge for processing Mandarin words. To address this, the authors adopted five feature extraction methods, including bi-directional long short-term memory (Bi-LSTM), bag of characters, natural language processing with the Chinese Academy of Sciences Word Segmentation Tool, dictionary features, and the k-means algorithm for word clustering. These diverse feature sets were utilized in the conditional random field (CRF++) algorithm for entity recognition. Following entity recognition, the authors combined the extracted features and implemented a CNN model, comprising a convolutional layer, pooling layer, fully connected layer, and softmax classifier, to extract entity relations from the identified entities. This step allowed for a deeper understanding of the interconnections within the medical data. In the final stages of building the medical knowledge graph, the authors employed Neo4j graph database. They achieved this by batch importing the previously identified medical entities and their corresponding relationships into the Neo4j database, forming a comprehensive and interconnected representation of medical knowledge in knee OA domain. The resulting knowledge graph encompassed 2,518 distinct entities and an impressive 29,972 different relationships related to knee OA condition. The knowledge graph spans a diverse range of entity types, comprising 368 diseases, 706 symptoms, 421 treatments, 859 examination descriptions, 72 examinations, 43 aggravating factors, 35 mitigating factors, and 14 inducing factors. This comprehensive repository of information serves as a valuable resource, empowering researchers and medical practitioners to gain deeper insights into knee OA.
### Model interpretability
While clean and carefully prepared data, aided by data interpretability techniques, is crucial for training models, it is equally important for the model itself to possess a clear understanding. Without this understanding, developers may face challenges when incorporating their domain knowledge into the learning process to achieve improved results. Therefore, alongside data interpretability, model interpretability plays a vital role.
In many instances, analyzing outputs or examining individual inputs is insufficient for comprehending why a training procedure failed to yield the desired outcomes. In such cases, it becomes necessary to investigate the training procedure in the model. The objective of model explainability is to develop models that are inherently more interpretable and understandable. This approach is also called intrinsic XAI.
#### 7.2.1 Interpretable models
Interpretable models, also known as white-box models, are models that provide self-explanatory insights (Du et al., 2019). Examples of such models include rule-based model, linear regression, logistic regression, and decision trees.
In the realm of rule-based models, Pierson et al. (2021) devised an objective algorithm for pain prediction and compared it to a general KL grade-based algorithm. Their proposed algorithm incorporated racial disparities (Black versus non-Black) and two socioeconomic measures, namely annual income below $50,000 and educational attainment (college graduation). The authors examined the differences in pain scores between groups and quantified the pain disparities using non-parametric means. By employing a regression model, the proposed algorithm successfully addressed the inequalities faced by under-served patients.
Zeng et al. (2022) utilized binary logistic regression to detect knee OA and recommend appropriate treatment options, including conservative or surgical approaches. Although the authors claimed their model was interpretable, they did not provide detailed analysis or explanations to support this claim.
In terms of tree-based models, Kotti, Duffell, Faisal and McGregor (2017) utilized a regression tree to analyze and interpret the rule induction process for detecting OA cases from a biomechanical perspective. A random subset of parameters extracted from ground reaction forces in the z-axis was employed to construct the regression tree, as illustrated in Figure 9. This approach provided insights into the biomechanical factors that may contribute to the presence of OA and offered a means of interpreting the rule induction process in the context of OA detection.
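To illustrate this style of rule induction, the sketch below fits a shallow regression tree on synthetic ground-reaction-force features and prints its decision rules; the feature names and data are invented for illustration, not taken from Kotti et al. (2017).

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

# Synthetic ground-reaction-force features (names and values are illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 0.6 * X[:, 0] - 0.3 * X[:, 2] + rng.normal(scale=0.1, size=200)

tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)

# The printed if/then rules are directly inspectable by a domain expert.
print(export_text(tree, feature_names=["grf_z_peak1", "grf_z_valley", "grf_z_peak2"]))
```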
Liu, Yu, Fei, Li, Wu, Li, Pan and Wang (2018) employed a node-splitting analysis to assess the importance of each feature in trees generated by eXtreme Gradient Boosting (XGBoost). They found that demographic and anthropometric factors had a significant influence on determining OA status, but acknowledged that these factors are not exclusive to OA and contribute to various clinical issues like pain and disability. Instead, the authors emphasized three other categories: comorbidity, blood measures, and physical activity measures. These categories were closely linked to the risk of experiencing side effects from analgesics in OA patients.
Despite the use of white-box mechanisms, relying solely on interpretable models may not provide sufficient explanation for complex models, particularly in scenarios with high-dimensional and heterogeneous data. To address this limitation, the application of regularization techniques becomes necessary during model training. Regularization helps control the number of relevant input features by introducing penalties or constraints, ensuring that the model
focuses on the most important variables. For example, Kokkotis, Moustakidis, Papageorgiou, Giakas and Tsaopoulos (2020) used a robust methodology to process 707 features from multidisciplinary settings. They employed six feature selection techniques, including filter algorithms, a wrapper approach, and embedded techniques, and ranked features based on a majority-vote scheme. This process identified 40 relevant risk factors, resulting in a classification accuracy of 77.88% using logistic regression.
However, one limitation of the majority-voting approach is that it treats all models in the ensemble equally, without considering the possibility of weak predictions. To address this limitation, Ntakolia, Kokkotis, Moustakidis and Tsaopoulos (2021) introduced a fuzzy ensemble approach to optimize the model and improve decision-making by considering the reliability and uncertainty of individual predictions. Additionally, Lu et al. (2022) demonstrated the effectiveness of recursive feature elimination (RFE), which considers the intrinsic characteristics of the data and model to select an optimal feature combination. RFE iteratively eliminates less relevant features, resulting in an informative subset that contributes significantly to the model's performance.
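A minimal RFE sketch with scikit-learn is shown below; the synthetic data, the logistic regression estimator, and the target of 40 features (echoing the risk-factor count above) are illustrative rather than a reproduction of Lu et al. (2022).

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

# Synthetic stand-ins for a multidisciplinary feature matrix and OA labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 100))
y = (X[:, 0] + X[:, 3] + rng.normal(size=500) > 0).astype(int)

# Recursively drop the least useful features until 40 remain.
selector = RFE(LogisticRegression(max_iter=1000),
               n_features_to_select=40, step=5).fit(X, y)
print(np.flatnonzero(selector.support_))  # indices of the retained risk factors
```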
#### 7.2.2 Explainability through architectural adjustments
Attention mechanisms can introduce a certain level of explainability and have revolutionized the utilization of DL algorithms (Zhang, Tan, Cho, Chang and Deniz, 2020; Schiratti, Dubois, Herent, Cahane, Dachary, Clozel, Wainrib, Keime-Guibert, Lalande, Pueyo et al., 2021; Wang, Wang, Gao, Du and Liu, 2021; Feng, Liu, Zhang and Qiu, 2021; Zhuang et al., 2022; Huo, Ouyang, Si, Xuan, Wang, Yao, Liu, Xu, Qian, Xue et al., 2022). Zhang et al. (2020) utilized the convolutional block attention module (CBAM) to implement an attention mechanism. Their CBAM consisted of both channel and spatial attention modules. By incorporating the module into ResNet34, the proposed approach identified the most relevant channel and spatial parts that contributed significantly to the final prediction and helped to enhance the model's performance by focusing on the most informative features in the input data during the training process.
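A compact PyTorch sketch of a CBAM-style block (channel attention followed by spatial attention) is given below to make the mechanism concrete; this is a generic rendition under common default choices, not the exact module of Zhang et al. (2020).

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Channel attention followed by spatial attention, in the style of CBAM."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(            # shared MLP for channel attention
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Channel attention: pool over space, reweight informative channels.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention: pool over channels, reweight informative locations.
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

out = CBAM(64)(torch.randn(2, 64, 32, 32))  # same shape, attention-reweighted
```

The learned attention maps can also be visualized directly, which is the source of the explainability discussed here.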
In the study conducted by Feng et al. (2021), the channel attention module within the CBAM was enhanced by incorporating additional non-linear layers after fusing the channel weights from dual branches. This modification increased the expressiveness of the CBAM network and improved the model's accuracy in detecting potential lesions in knee X-ray images. Similarly, Schiratti et al. (2021) incorporated a gated attention mechanism to calculate attention scores for individual image slices, which can be interpreted as indicators of their importance. These scores were then utilized in the classification sub-model. Self-attention mechanism was implemented by Wang et al. (2021) by integrating a visual transformer after their deep learning model. Their approach effectively captured the interrelationship among imaging features from multiple regions.
Alternatively, Zhuang et al. (2022) proposed a self-attention-based network, namely CSNet that has been designed in a layer-by-layer manner. Each layer incorporated patch convolution to extract local appearance features from individual vertices and graph convolution to facilitate communication among the vertices. The self-attention mechanism was employed in each layer to enhance the model's ability to capture information from the cartilage graph. The final assessment of knee cartilage defects was obtained by pooling information from all vertices in the graph, and the CSNet also allowed for easy 3D visualization of the defects, showcasing its interpretability. In contrast to previous approaches that did not take semantic information into account, a recent study by Huo et al. (2022) introduced the use of an online class activation mapping (CAM) module to specifically direct the network's attention towards the cartilage regions.
### Post-hoc interpretability
Post-hoc XAI methods were found to be more commonly employed than intrinsic XAI methods in the reviewed studies. These post-hoc methods provide an external explanation of the AI model's decisions after it has made predictions. This involves querying the trained model and constructing a white-box surrogate model to extract the underlying relationships the model has learned (Murdoch et al., 2019). These methods can provide insights into the model's decision-making process by analyzing its predictions on specific instances, without altering the original model architecture. In contrast, intrinsic XAI focuses on designing AI models with inherent interpretability right from the model's architecture and design (Du et al., 2019). These models are built with specific structures or components that naturally provide transparency and understandability in their decision-making process.
#### 7.3.1 Attribution based
**Perturbation approach.** Perturbation is a simple and effective method for computing the impact of changing input features on the output of an AI model (Giuste et al., 2023). It involves manipulating certain input features, running the forward pass, and measuring the difference from the original output. The importance of the input features can be ranked based on their effect on the output. Pierson et al. (2021) demonstrated a region-wise method to visualize image areas that influenced predictions made by a neural network. To do this, image regions were "masked" out by replacing them with a circle whose value was set to the mean pixel value of the image. Gaussian smoothing was applied to prevent sharp boundaries. The neural network's predicted pain score was then compared between the masked image and the original image, and the absolute change in the predicted pain level was computed. This process was repeated for a 32x32 grid of regions that evenly tiled the 1024x1024-pixel image. This allowed a heatmap analysis revealing how much masking each region of the image affected the neural network's prediction.
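A stripped-down version of this occlusion procedure can be written as follows; square mean-value patches replace the smoothed circles of Pierson et al. (2021), and `model` is any callable returning a scalar score.

```python
import numpy as np

def occlusion_heatmap(model, image: np.ndarray, grid: int = 32) -> np.ndarray:
    """Mask each grid cell with the image's mean pixel value and record the
    absolute change in the model's predicted score (perturbation-based XAI)."""
    h, w = image.shape
    base = model(image)
    heat = np.zeros((grid, grid))
    ch, cw = h // grid, w // grid
    for i in range(grid):
        for j in range(grid):
            masked = image.copy()
            masked[i * ch:(i + 1) * ch, j * cw:(j + 1) * cw] = image.mean()
            heat[i, j] = abs(model(masked) - base)
    return heat  # high values mark regions the prediction depends on

# Illustrative usage with a trivial stand-in "model" (sum of intensities).
heat = occlusion_heatmap(lambda im: im.sum(),
                         np.random.default_rng(0).random((256, 256)), grid=16)
```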
**Backpropagation approach.** The backpropagation approach can be further divided into gradient-based approaches, class activation maps, and gradient-weighted class activation maps. Nearly half of the studies (30 out of 61) employed backpropagation XAI approaches to explain the predictions of AI models, as described in Table 5.
**Gradient-based approaches** focus solely on the gradient information when assessing the impact of modifying a specific pixel on the final prediction. Integrated Gradients is a specific technique within this approach. Another technique, SmoothGrad, was introduced with the intention of reducing visual noise (Smilkov, Thorat, Kim, Viegas and Wattenberg, 2017) and was used by Tack, Shestakov, Ludke and Zachow (2021) in an MRI study.
**Class activation maps** (CAM) utilize global average pooling to compute the spatial average of feature maps in the final convolutional layer of a CNN (Zhou, Khosla, Lapedriza, Oliva and Torralba, 2016). Three studies that focused on MRI data (Chang, Felson, Qiu, Guermazi, Capellini and Kolachalama, 2020; Huo et al., 2022; Dunnhofer, Martinel and Micheloni, 2022) used the CAM approach to analyze their prediction outcomes.
**Gradient-weighted class activation map** (Grad-CAM) is an extension of CAM that does not depend on a specific architecture. Grad-CAM builds on the gradient-based CAM concept: it leverages the gradients of the final convolutional layer with respect to the predicted class to understand which parts of the input image are crucial for making that prediction. Around 33% of the studies (20 out of 61) utilized Grad-CAM for visualizing the final predictions. Another technique, Grad-CAM++, enhances Grad-CAM by substituting the globally averaged gradients with a weighted average of the gradients at the pixel level. This adaptation takes into account the significance of individual pixels in influencing the final prediction, resulting in more effective visual interpretations of CNN model predictions. Grad-CAM++ effectively overcomes the limitations of Grad-CAM, particularly in scenarios involving multiple instances of a class in an image.
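A minimal PyTorch rendition of Grad-CAM using forward and backward hooks is sketched below; library implementations used in the reviewed studies add refinements, and the choice of convolutional layer is model-specific.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, conv_layer, image, class_idx):
    """Grad-CAM: weight the final conv feature maps by the spatially averaged
    gradients of the target class score, then apply ReLU and upsample."""
    acts, grads = {}, {}
    h1 = conv_layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
    h2 = conv_layer.register_full_backward_hook(
        lambda m, gi, go: grads.update(g=go[0]))

    score = model(image)[0, class_idx]
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()

    weights = grads["g"].mean(dim=(2, 3), keepdim=True)  # global-average gradients
    cam = F.relu((weights * acts["a"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                        align_corners=False)
    return (cam / cam.max().clamp(min=1e-8)).squeeze()   # normalized heatmap

# Illustrative usage (layer choice is model-specific):
# from torchvision.models import resnet18
# model = resnet18(weights=None).eval()
# cam = grad_cam(model, model.layer4[-1].conv2, torch.randn(1, 3, 224, 224), 0)
```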
**Eigen class activation map** (Eigen-CAM) is a variation of CAM that incorporates the principal components of the learned convolutional activations (Muhammad and Yeasin, 2020). It offers more accurate localization of important regions in an image and provides a deeper understanding of the underlying features. Bany Muhammad and Yeasin (2021) employed Eigen-CAM as a tool for localizing OA features in X-ray images, using the Kellgren-Lawrence grading scheme. The application of Eigen-CAM revealed significant findings, specifically highlighting the medial and lateral margins of the knee joint. These highlighted regions correspond to joint-space narrowing and osteophyte signs, offering valuable insights into the presence and severity of OA-related changes in the knee joint.
**DeepLIFT** was used by Chan et al. (2021) to quantitatively assess the contribution of each risk factor to the model's prediction. The assessment was carried out by computing the relative backpropagated gradients of the risk factors with respect to the model's prediction output. Their analysis revealed that for the prediction of knee OA onset, medial JSN exhibited the highest DeepLIFT gradient, followed by history of injury. However, in the prediction of knee OA deterioration, diabetes and smoking habits showed the second and third highest gradients, respectively, alongside medial JSN, indicating their greater impact compared to injury.
#### 7.3.2 Game theory based
SHapley Additive Explanations (SHAP) is a widely favoured post-hoc approach for handling tabular data in machine learning models. This approach is rooted in game theory that provides local explanations for individual predictions in the models. By calculating Shapley Values, it assigns importance values to each feature based on their interactions and contributions to the prediction outcome. It enables a comprehensive understanding of the factors
driving each prediction and facilitates interpretability by identifying the most influential features in the decision-making process. The findings of all eight studies that utilized SHAP on tabular data are summarized in Table 6.
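A typical SHAP workflow on tabular OA risk factors might look like the sketch below, here with a tree ensemble and `TreeExplainer`; the data is synthetic and the feature names are illustrative.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Synthetic tabular risk factors (illustrative stand-ins for real OA variables).
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 6))
y = 2.0 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.3, size=400)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
shap_values = shap.TreeExplainer(model).shap_values(X)

# Global view: which risk factors drive the model's predictions overall.
shap.summary_plot(shap_values, X,
                  feature_names=["age", "bmi", "medial_jsn",
                                 "injury", "diabetes", "smoking"])
```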
#### 7.3.3 Case based
The case-based approach is a knowledge-driven approach in which all relevant knowledge is pre-programmed and explicitly specified. Esteves, Vicente, Machado, Alves and Neves (2017) demonstrated a case-based methodology for detecting knee OA. Their proposed methodology integrated a logic programming approach to knowledge representation and reasoning with a case-based approach to computing, resulting in a comprehensive framework for effective problem-solving in the OA field.
#### 7.3.4 Knowledge extraction based
Knowledge distillation is the core of knowledge extraction based approaches. Huo et al. (2022) demonstrated the use of a dual-consistency mean teacher model (Figure 10) to discriminate cartilage damage. Both the teacher sub-model and the student sub-model shared a common network architecture, but the teacher model utilized an exponential moving average (EMA) strategy for weight updates. This approach involved averaging the student network's weights across multiple training steps, enabling the teacher model to maintain consistent predictions and effectively guide the student network, particularly for unlabelled data. A recent study by Aladhadh and Mahum (2023) employed knowledge distillation to convey pixel-wise and pair-wise information from a teacher network to a student network. The teacher network, built upon HRNet-W, featured a head convolution layer consisting of 64 filters and a 3\(\times\)3 kernel, whereas the student network was equipped with 32 filters and 3\(\times\)3 kernels. The student network was trained using pixel-wise knowledge extracted from heatmaps generated by the more complex teacher network with the loss function shown in Equation 3, enabling the student network to adopt a simpler and more compact architecture.
\[L_{pl}=\frac{\sum_{i\in\Re}KL(h_{i}^{s}\,\|\,h_{i}^{t})}{\hat{w}\times\hat{h}},\quad\Re=\{1,2,\ldots,\hat{w}\times\hat{h}\} \tag{3}\]
where \(h_{i}^{s}\) represents the response of the student network at the \(i\)th pixel position, \(h_{i}^{t}\) represents the response of the teacher network at the \(i\)th pixel position, \(KL\) denotes the Kullback-Leibler divergence between the two heatmaps, and \(\hat{w}\times\hat{h}\) denotes the size of the feature map.
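Equation 3 can be rendered in PyTorch roughly as follows; this sketches the pixel-wise loss only, under the assumption that the channel dimension at each pixel is normalized into a distribution, and is not the full training pipeline of Aladhadh and Mahum (2023).

```python
import torch
import torch.nn.functional as F

def pixelwise_distillation_loss(student_map: torch.Tensor,
                                teacher_map: torch.Tensor) -> torch.Tensor:
    """Pixel-wise distillation loss of Equation 3: KL divergence between the
    student and teacher responses, summed over the feature map and normalized
    by its size. Both inputs have shape (B, C, h, w)."""
    b, c, h, w = student_map.shape
    log_p_student = F.log_softmax(student_map, dim=1)  # per-pixel distributions
    p_teacher = F.softmax(teacher_map, dim=1)
    kl = F.kl_div(log_p_student, p_teacher, reduction="none").sum(dim=1)
    return (kl.flatten(1).sum(dim=1) / (h * w)).mean()  # averaged over the batch

# Illustrative check on random heatmaps.
loss = pixelwise_distillation_loss(torch.randn(2, 17, 64, 48),
                                   torch.randn(2, 17, 64, 48))
```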
Kornreich, Park, Braun, Pawar, Browning, Herzog, Odry and Zhang (2022) presented a novel two-stage method inspired by multiple instance learning. This method aimed to identify regions of high likelihood for pathologies by leveraging mixed-format data, which encompassed categorical and positional labels. Their approach incorporated a UNet network along with a morphological peak-finding algorithm to accurately localize defects. Prior to pathology detection, the images were automatically cropped around the anterior cruciate ligament or medial compartment cartilage. Additionally, they employed a deep reinforcement learning model to detect two anatomical landmarks, namely the intercondylar eminence and the fibular styloid, which were used to position a volume of interest in relation to the location of these landmarks.
#### 7.3.5 Neural based
Neural-based techniques encompass methods that explain specific predictions, simplify neural networks, and visualize the features and concepts learned by the network. Ciliberti et al. (2022) conducted a feature importance analysis on a pre-developed model based on the random forest algorithm. Their findings demonstrated that cartilage and bone features, including the volume of femoral cartilage and patellar density, played a significant role in classifying the status of the knee, whether it was healthy, degenerative, or traumatic. Karim et al. (2021) implemented another neural-based technique, namely layer-wise relevance propagation (LRP), as illustrated in Figure 11, to identify important pixels by running a forward pass through the neural network. In addition, deep Taylor decomposition (DTD) was utilized to backpropagate the relevance \(R_{l}^{(L)}\), allowing for the generation of a visualizable relevance map \(R_{LRP}\).
### Evaluation of XAI
Evaluation of XAI could be guided by the Co-12 properties introduced by Nauta et al. (2023). These properties include Correctness, Completeness, Consistency, Continuity, Contrastivity, Covariate complexity, Compactness, Composition, Confidence, Context, Coherence, and Controllability. However, despite the comprehensive framework provided by these properties, our observations indicate that there has been relatively less emphasis on evaluating the explanations generated by XAI in existing research.
Limited evaluation methods for XAI were identified, such as sensitivity analysis (Pierson et al., 2021), confidence scores (Wang et al., 2021), and the rate of agreement with medical experts (Chang et al., 2020), which minimally addressed the Correctness and Confidence properties. Nevertheless, it is worth highlighting the work of Chang et al. (2020), who went beyond the general human-perception interpretation of Grad-CAM visualizations. They actively involved medical experts in the validation process to evaluate the reliability of the visualization maps.
Their approach underscores the importance of soliciting feedback and insights from domain experts to evaluate the reliability and effectiveness of XAI techniques. By incorporating the perspectives of medical experts, the evaluation of XAI can benefit from their specialized knowledge and experience, ultimately enhancing the reliability and applicability of the generated explanations.
## 8 Discussion
XAI holds promise for identifying pathological patterns associated with knee OA by leveraging structured and unstructured data from diverse sources. Moreover, it has the potential to optimize the handling and organization of electronic health record data, resulting in streamlined clinical workflows and significantly reducing the time physicians spend on making diagnoses and prognoses and searching for pertinent patient information in electronic records. However, the current state of XAI has certain limitations that need to be addressed in future research.
### Development of quantitative evaluation metrics for XAI
Our findings shed light on a significant gap in the evaluation of XAI generated explanations, where qualitative assessments are predominantly utilized. However, these subjective evaluations may not meet the evidence standards required by medical experts. Hence, the development of a quantitative evaluation metric for XAI is essential. Such a metric can provide objective measures and benchmarks for comparing different XAI techniques and their impact on decision-making processes. It can also help researchers and practitioners in the field to determine the strengths and weaknesses of their models, identify areas for improvement, and facilitate the reproducibility of results.
### Integration of domain-specific information from stakeholders
The deployment of XAI could lead to the real application of AI in healthcare and overcome the lack of operator confidence in AI models. However, it is essential to understand how the application of these models in clinical tasks will be perceived, whether as a support for or a substitute for medical experts' work, as well as the level of substitution. To achieve this, AI programmers must discern which explanations are valuable for medical professionals and which are not. Creating an XAI model that is deemed useless or difficult to comprehend may deter medical experts from utilizing it. Furthermore, patients play a significant role as stakeholders since the developed model aims to elucidate their health status. Therefore, their expectations and special needs should be taken into account and integrated into the process. To address these challenges, Mrklas et al. (2020) implemented a qualitative co-design approach at an academic health center in Southern Alberta, which involved conducting focus groups with patients, physicians, researchers, and industry partners, as well as analyzing prioritization activities and a pre-post quality and satisfaction Kano survey. The structured co-design processes were developed on the basis of shared concepts, language, power dynamics, rationale, mutual learning, and respect for diversity and differing opinions.
### Exploring patient disparities in data and addressing population-specific factors
As highlighted by Pierson et al. (2021), there are noticeable racial and socioeconomic disparities in OA data. By considering these disparities during the training of AI models, there is potential to enhance accuracy. The study also revealed that patient-perceived OA symptoms vary based on factors such as education, culture, and geography. Considering these variations is crucial in developing AI models that accurately capture the diverse experiences and manifestations of OA among different patient populations. In addition, we observed that there is a lack of well-organized open-access data specifically for the Eastern population, despite the higher prevalence of OA issues in this population (Inoue et al., 2001). This highlights a significant gap in available resources for studying and addressing OA within the Eastern population. The limited availability of comprehensive and representative data from this specific demographic group hinders the development and evaluation of AI models tailored to their unique needs and characteristics.
### Understanding the boundaries of knowledge and legal constraints
It is crucial to recognize that XAI is not all-powerful. Identifying the knowledge boundaries of XAI models is of utmost importance for users to have a clear understanding and make appropriate use of the tool. Users should be informed about the operational boundaries of the models and be able to discern when the models go beyond their knowledge limits, as this can potentially result in errors. While XAI-based systems have the potential to alleviate the workload of healthcare providers, they also raise concerns regarding legal responsibility in cases of unethical actions and errors. Therefore, the development of XAI models in healthcare should be approached with caution, while also recognizing their potential to bring about positive societal impacts.
### Exploring alternative XAI techniques for knee OA applications
In addition to the XAI applications discussed in Section 7, a prospective XAI technique in OA diagnosis could be image captioning. It is the process of generating a textual description of an image using AI algorithms. Medical imaging is an area where this technology could be particularly useful, as generating accurate and detailed descriptions of radiology and pathology images could help healthcare professionals to identify the specific areas of the knee that require treatment and make better-informed decisions about patient care. This area of research presents an exciting opportunity for the development of new XAI models that could have a significant impact on the future of musculoskeletal healthcare. Furthermore, the exploration of the counterfactual approach to XAI in the context of OA applications presents an additional avenue for research. This approach aims to enhance people's understanding of AI systems by offering counterfactual explanations specific to the target domain. Recent studies have shown that counterfactuals can provide richer information than causal explanations, as they encompass a broader range of possibilities in their mental representation (Celar and Byrne, 2023).
## 9 Conclusion
A substantial number of studies in the field of computer-aided diagnosis for knee OA have sought to incorporate explainability through various XAI techniques in their deep learning models. However, these techniques encounter inherent limitations due to the absence of a robust XAI framework, and the evaluation of explanation quality is often overlooked, leading to uncertainty about the effectiveness of these approaches in addressing the black box nature of deep learning in OA diagnosis. Nevertheless, the development of XAI in knee OA detection aligns with the trend of precision diagnosis, offering the potential to reduce the healthcare burden and promote preventive strategies for musculoskeletal diseases.
## Acknowledgment
This work was supported by Ministry of Higher Education, Malaysia under Fundamental Research Grant Scheme (FRGS) Grant No. FRGS/1/2022/SKK01/UM/02/1.
## CRediT authorship contribution statement
**Yun Xin Teoh:** Data curation, writing - original draft. **Alice Othmani:** Supervision of project, software, writing - review & editing. **Siew Li Goh:** Conceptualization of this study from a clinical perspective. **Juliana Usman:** Origination of the study idea from a biomechanics perspective. **Khin Wee Lai:** Origination of the study idea from an engineering perspective.
Figure 2: PRISMA flowchart depicting the study selection process for this systematic review.
Figure 10: Example of knowledge distillation. Adapted from (Huo et al., 2022)
Figure 9: Example of regression tree. Adapted from (Kotti et al., 2017)
Figure 11: Example of layer-wise relevance propagation (LRP) for knee OA detection. Adapted from (Karim et al., 2021)
| **Paper** | **XAI method** | **Type of data** | **Evaluated model (performance)** | **Target** | **XAI findings** |
| --- | --- | --- | --- | --- | --- |
| Chen et al. | GradCAM | Imaging - X-ray | VGG19 with adjustable ordinal loss (69.7% classification accuracy and 0.344 mean absolute error) | Classification of OA severity | Localized key indicators of knee OA, including JSN, subchondral sclerosis, and osteophyte formation. |
| Norman et al. | Saliency maps | Imaging - X-ray | DenseNet (sensitivity rates for no OA, mild, moderate, and severe OA (4 classes) of 83.7, 70.2, and 86.0%) | Classification of OA severity | Saliency maps accurately captured osteophyte severity. |
| **Paper** | **XAI method** | **Type of data** | **Evaluated model (performance)** | **Target** | **XAI findings** |
| --- | --- | --- | --- | --- | --- |
| Chang et al. | CAM | Imaging - MRI | Siamese neural network with six convolutional layers (75.70% accuracy) | Prediction of unilateral knee pain (2 classes) | Effusion or synovitis was identified as the most prevalent structural abnormality associated with frequent knee pain, in 95 out of 107 subjects (88.8%), followed by bone marrow lesion (5.6%), Hoffa fat pad abnormalities (3.7%), cartilage loss (1.9%), and meniscal damage (0.93%). |
| **Paper** | **XAI method** | **Type of data** | **Evaluated model (performance)** | **Target** | **XAI findings** |
| --- | --- | --- | --- | --- | --- |
| Thomas et al. (2020) | Saliency maps | Imaging - X-ray | DenseNet169 (0.70 F1, 0.71 accuracy, 0.86 Cohen weighted kappa) | Classification of OA severity based on KL grades (5 classes) | Osteophyte formation sites demonstrated high influence on the final KL prediction. |
| Bany Muhammad and Yeasin (2021) | Eigen-CAM | Imaging - X-ray | Stacked ensemble learning using CNN with SVM as super learner | Classification of OA severity based on KL grade (5 classes) | OA features (JSN and osteophytes) in the joint medial and lateral margins. |
| Morales et al. (2021) | Spherical Grad-CAM | Imaging - MRI | ResNet50 (53.7-58.8% sensitivity, 67.4-82.1% specificity, 68.1-74.4% AUC) | Detection of meniscal tears (2 classes) | The similar patterns observed in both the true positive and true negative groups for the femur bone shape feature suggested that the model exploited similar features for assessing pain presence and absence. |

Table 5: (_continued_).
| **Paper** | **XAI method** | **Type of data** | **Evaluated model (performance)** | **Target** | **XAI findings** |
| --- | --- | --- | --- | --- | --- |
| Moon et al. (2021) | GradCAM | Imaging - weight-bearing whole-leg X-ray | CNN architecture of six stacked SE-ResNet blocks, followed by pooling, a fully connected layer, and Softmax activation (95.1% cumulative score, 0.054 mean absolute error) | Classification of WBL ratio (7 classes) | For WBL ratios of 0.2 to 0.6, heatmap signals were concentrated around the knee joint area. For a WBL ratio of 0.0, the heatmap signal appeared at multiple points, including the femoral diaphysis and metaphysis, fibular head, and tibial diaphysis. For a WBL ratio of 0.1, the heatmap signal appeared specifically on the tibial diaphysis. |
| Schiratti et al. (2021) | GradCAM | Imaging - MRI with clinical variables | EfficientNet-B0 network with attention sub-model and classification sub-model (task 1: 65% ROC-AUC, 13% precision, 84% recall; task 2: 66.8% mean precision-recall AUC, 72.4% mean ROC-AUC, 65.2% mean weighted F1) | Prediction of JSN progression within months (2 classes); prediction of pain severity (2 classes) | For JSN progression, the medial joint space was found to be highly relevant, while for pain prediction, the intra-articular space emerged as a significant factor. |
| Zeng et al. (2021) | GradCAM | Imaging - X-ray | CNN | Presence of OA (2 classes); classification of OA severity based on KL grade (5 classes) | No further explanation by the authors. |

Table 5: (_continued_).
| **Paper** | **XAI method** | **Type of data** | **Evaluated model (performance)** | **Target** | **XAI findings** |
| --- | --- | --- | --- | --- | --- |
| Karim et al. | GradCAM++, LRP | Imaging | DenseNet and VGG (91% accuracy) | OARSI grading (4 classes) | No further explanation by the authors. |
| Wang et al. (2021) | GradCAM | Imaging - X-ray | YOLO, ResNet50 backbone with visual transformer (69.18% accuracy) | Classification of OA severity | Unlike ResNet50, which concentrated on a single high-weight region, the proposed method exhibited high-weighted areas spread across both sides of the X-ray images, leveraging the correlation between small regions. It also outperformed ResNet50 in locating JSN and in detecting lesions on the medial or lateral edge of the femur, including sclerosis or bone spurs. |
| Olsson et al. (2021) | Integrated gradients | Imaging - X-ray | ResNet (KL0 detected with an AUC of 0.97, with sensitivity and specificity of 97 and 88%) | Classification of OA severity based on KL grades | For wrong predictions, heatmap activity was focused on the implant, suggesting that the network was responding to persistent indications of a previously treated medial arthrosis. |

Table 5: (continued).
| **Paper** | **XAI method** | **Type of data** | **Evaluated model (performance)** | **Target** | **XAI findings** |
| --- | --- | --- | --- | --- | --- |
| Alshareef et al. (2022) | GradCAM | Imaging - X-ray | Vision Transformer (ViT) (71.2% F1 and 70% accuracy) | Classification of OA severity based on KL grades (5 classes) | No further explanation by the authors. |
| Dunnhofer et al. (2022) | CAM | Imaging - MRI | MRNet (AlexNet-based) with MRPyrNet (composed of a Feature Pyramid Network with Pyramidal Detail Pooling) (0.834-0.974 ROC-AUC) | Presence of ACL tear (2 classes); presence of meniscus tear (2 classes) | The model was incentivized to extract features around joint centers. |
| Wang et al. (2023b) | GradCAM | Imaging - X-ray | Siamese-GAP model with hybrid loss strategy (89.14% accuracy, 86.78% F1) | Presence of early OA (2 classes) | No further explanation by the authors. |
| Tariq et al. (2023) | Eigen-CAM | Imaging - X-ray | Ensemble CNN (ResNet-34, VGG-19, DenseNet-121, and DenseNet) (98% overall accuracy and 0.99 Quadratic Weighted Kappa) | Classification of OA severity based on KL grades (5 classes) | The model allowed for the distinction of bone sclerosis, osteophytes, cartilage degeneration, and JSN by highlighting the extracted features. Notably, ResNet-34 and DenseNet-121 exhibited improved feature identification for KL4. |

Table 5: (_continued_).
| **Paper** | **XAI method** | **Type of data** | **Evaluated model (performance)** | **Target** | **XAI findings** |
| --- | --- | --- | --- | --- | --- |
| Li et al. (2023) | GradCAM | Imaging - X-ray | ResNet50 (0.88 accuracy when using anteroposterior knee X-ray, 0.93 accuracy when using multiview input) | Classification of OA severity based on KL grades (5 classes) | The heatmap region was predominantly focused on the knee area, which aligns with expectations. However, in certain images the heatmap region extended beyond the knee joint to other tissues, suggesting that certain extra-articular tissues may also play a key role in knee OA diagnosis. |
| Wang et al. (2023d) | GradCAM | Imaging - X-ray | Vision Transformer (ViT) model with Selective Shuffled Position Embedding (SSPE) and an ROI-exchange strategy (89.80% accuracy) | Presence of early OA (2 classes) | All models detected early knee OA features like osteophytes and JSN. However, the DenseNet, ResNet, and VGG models were influenced by background noise, leading to reduced classification performance. The proposed approach and Siamese-based models focused on specific regions affected by knee OA, resulting in better performance. |
| Farajzadeh et al. (2023) | GradCAM | Imaging - X-ray | A deep residual neural network with eight convolutional layers, termed IJES-OA Net (80.23% average accuracy and 0.802 average precision) | Classification of OA severity based on KL grades (5 classes) | The proposed model focused on the edges of bones near the joint space area. |

Table 5: (_continued_).
| **Paper** | **XAI method** | **Type of data** | **Evaluated model (performance)** | **Target** | **XAI findings** |
| --- | --- | --- | --- | --- | --- |
| Wang et al. (2023a) | Attention maps | Imaging - X-ray | CNN (78% accuracy) | Classification of OA severity (4 classes) | The heatmap highlighted the area around the knee joint, including the joint space and osteophytes. |
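The CAM-family methods summarized in Table 5 share one mechanism: weighting a convolutional layer's activation maps by the gradients of the class score. The sketch below is a minimal, generic Grad-CAM implementation in PyTorch, not code from any of the reviewed papers; the ResNet50 backbone, target layer, and random input are stand-ins for an actual knee-OA classifier and X-ray.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

def grad_cam(model, layer, x, class_idx):
    """Minimal Grad-CAM: weight the chosen layer's activations by the
    spatial mean of the class-score gradients, then ReLU and upsample."""
    acts, grads = [], []
    h1 = layer.register_forward_hook(lambda m, i, o: acts.append(o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    score = model(x)[0, class_idx]
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()
    w = grads[0].mean(dim=(2, 3), keepdim=True)        # channel weights
    cam = F.relu((w * acts[0]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear",
                        align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze().detach()

model = resnet50(weights=None).eval()   # stand-in for a trained OA model
x = torch.randn(1, 3, 224, 224)         # placeholder for a knee X-ray
heatmap = grad_cam(model, model.layer4, x, class_idx=2)
print(heatmap.shape)                    # torch.Size([224, 224])
```

Overlaying `heatmap` on the input image yields the visualizations reported in the "Visualization of XAI" columns of the original tables (the images themselves did not survive extraction).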
| **Paper** | **XAI method** | **Type of data** | **Evaluated model (performance)** | **Target** | **XAI findings** |
| --- | --- | --- | --- | --- | --- |
| Kokkotis et al. (2020) | SHAP | Tabular clinical data | Logistic regression (77.88% accuracy) | Presence of OA (2 classes) | Top ten risk factors: (1) right knee symptoms: swelling, last 7 days (V00K5XRKN1); (2) left knee symptoms: bend knee fully, last 7 days (V00K5XLKN5); (3) either knee, history of knee surgery (P02KSURG); (4) knee symptoms, risk factors, or both, status at initial eligible interview or screening visit (P02ELGRISK); (5) baseline symptomatic knee OA status by person (P01SXKOA); (6) right knee exam: patellofemoral crepitus present on exam (V00RKPCFRE); (7) left knee exam: patellofemoral crepitus present on exam (V00LKPCFRE); (8) left knee symptoms: swelling, last 7 days (V00K5XLKN1); (9) average current scale weight in kg (P01WEIGHT); (10) right knee baseline symptomatic OA status (P01RSXKOA). |
| Ntakolia et al. (2021) | SHAP | Tabular data | Logistic regression (71.25% mean accuracy) | Prediction of JSN progression (2 classes) | The three most significant features were lateral JSN on the right knee (P01SVRKJSL), lateral JSN on the left knee (P01SVLKJSL), and a measure related to the percentage of foods marked as a small portion (V00PCTSMAL). The most important variables influencing the prediction output also included lateral JSN on the left knee (P01SVLKOST), body mass index, average daily nutrients from vitamin supplements (V00SUPPCA), and education level (V00EDCV). |
| Ntakolia et al. (2021) | SHAP | Tabular clinical data | XGBoost (78.14% average accuracy with 31 risk factors) | Prediction of JSN progression (2 classes) | Top five risk factors: (1) lateral JSN on the right knee (P01SVRKJSL); (2) percentage of foods marked as large portion (V00PCTLARG); (3) frequency of cream/half and half/non-dairy creamer in coffee or tea in the past 12 months (V00FFG068); (4) frequency of getting in and out of a squatting position 10 or more times during a typical week in the past 30 days (V00P4S30CV); (5) lateral JSN on the left knee (P01SVLKOST). |
| Kokkotis et al. (2022b) | SHAP | Tabular clinical data | Fuzzy feature selection and random forest (73.55% accuracy, 73.82% precision, 73.64% recall, 73.59% F1 with 21 risk factors) | Presence of OA (2 classes) | Top five risk factors: (1) knee symptoms (P02ELGRISK); (2) history of knee surgery (P02KSURG); (3) age (V00AGE); (4) BMI (P01BMI); (5) KOOS quality of life score (V00KOOSQOL). |

Table 6: Summary of game-theory-based XAI techniques from included papers. JSN: joint space narrowing; SHAP: SHapley Additive exPlanations; XGBoost: eXtreme Gradient Boosting.
| **Paper** | **XAI method** | **Type of data** | **Evaluated model (performance)** | **Target** | **XAI findings** |
| --- | --- | --- | --- | --- | --- |
| Angelini et al. (2022) | SHAP | Tabular biochemical markers | K-means clustering with random forest or KNN (F1 scores of 0.85 for C1 vs rest, 0.91 for C2 vs rest, and 0.88 for C3 vs rest) | OA-dominant molecular endotypes | Cluster 1 - low tissue turnover: this cluster demonstrated low repair and articular cartilage or subchondral bone turnover, and had the highest proportion of non-progressors. |
| Kokkotis et al. (2022a) | SHAP | Tabular gait data | SVM (94.95% accuracy, 92.16-96.72% precision, 92.19-97.62% recall, 93.07-96.47% F1 score) | Classification of anterior cruciate ligament injury status (3 classes) | The gait parameters K2, H4, A3, GRF4, GRF7, K1, A4, and GRF6 were identified as the key factors that significantly influenced the model output, with mean SHAP values higher than 0.3. |
| Lu et al. (2022) | SHAP | Tabular data | XGBoost (0.741 AUC with 15 features) | Prediction of risk of developing venous thrombosis (2 classes) | KL grade, age, and hypertension emerged as the three pivotal variables in relation to the risk of venous thrombosis. |

Table 6: (continued).
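As a concrete illustration of how the SHAP analyses in Table 6 are typically produced, the sketch below trains a toy classifier on synthetic tabular data and ranks features by mean absolute SHAP value. The dataset, feature names, and model are illustrative assumptions only; they do not reproduce any study above.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

# Hypothetical tabular OA dataset: columns loosely mirror the kind of
# risk factors listed above (age, BMI, symptoms, surgery history).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic OA label

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X)
# Older shap versions return a per-class list; newer ones a 3D array.
sv_pos = sv[1] if isinstance(sv, list) else sv[..., 1]

# Mean |SHAP| per feature = the global risk-factor rankings reported
# in the "top risk factors" columns of Table 6.
importance = np.abs(sv_pos).mean(axis=0)
for name, imp in zip(["age", "BMI", "symptoms", "surgery"], importance):
    print(f"{name}: {imp:.3f}")
```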
2303.11865 | Local convergence of multi-agent systems towards triangular patterns | Geometric pattern formation is an important emergent behavior in many
applications involving large-scale multi-agent systems, such as sensor networks
deployment and collective transportation. Attraction/repulsion virtual forces
are the most common control approach to achieve such behavior in a distributed
and scalable manner. Nevertheless, for most existing solutions only numerical
and/or experimental evidence of their convergence is available. Here, we
revisit the problem of achieving pattern formation giving sufficient conditions
to prove analytically that under the influence of appropriate virtual forces, a
large-scale multi-agent swarming system locally converges towards a stable and
robust triangular lattice configuration. Specifically, the proof is carried out
using LaSalle's invariance principle and geometry-based arguments. Our
theoretical results are complemented by exhaustive numerical simulations
confirming their effectiveness and estimating the region of asymptotic
stability of the triangular configuration. | Andrea Giusti, Marco Coraggio, Mario di Bernardo | 2023-03-21T14:11:28Z | http://arxiv.org/abs/2303.11865v1 | # Local convergence of multi-agent systems towards triangular patterns
###### Abstract
Geometric pattern formation is an important emergent behavior in many applications involving large-scale multi-agent systems, such as sensor networks deployment and collective transportation. Attraction/repulsion virtual forces are the most common control approach to achieve such behavior in a distributed and scalable manner. Nevertheless, for most existing solutions only numerical and/or experimental evidence of their convergence is available. Here, we revisit the problem of achieving pattern formation giving sufficient conditions to prove analytically that under the influence of appropriate virtual forces, a large-scale multi-agent swarming system locally converges towards a stable and robust triangular lattice configuration. Specifically, the proof is carried out using LaSalle's invariance principle and geometry-based arguments. Our theoretical results are complemented by exhaustive numerical simulations confirming their effectiveness and estimating the region of asymptotic stability of the triangular configuration.
## I Introduction
Many natural and artificial systems consist of multiple interacting agents, with their behavior determined by both the individual agent dynamics and their interaction. In some applications the number of _agents_ can be extremely large (_large-scale multi-agent systems_) and the role played by their interconnections becomes predominant over their individual dynamics [1]. Examples include cell populations [2], swarming multi-robot systems [3], and social networks [4], among many others. Some of the most relevant emergent behaviors exhibited by these systems involve their _spatial organization_, _coordination_, and _cooperation_[5]. A notable case is _geometric pattern formation_[6], where the agents are required to self-organize into some desired _pattern_, such as, for example, triangular lattices consisting of repeating adjacent triangles. Applications of pattern formation include sensor networks deployment [7], collective transportation and construction [8, 9], and exploration and mapping [10].
Most of the existing distributed control algorithms for geometric pattern formation rely on the use of _virtual forces_ (or _virtual potentials_), [11, 12, 13, 14, 15, 16, 17, 18]. Within this framework, agents move under the effect of forces generated by the presence of their neighboring agents and the environment, causing attraction, repulsion, alignment, etc.
Interestingly, most strategies are validated only numerically or experimentally [11, 12, 13, 7]. Among the exceptions, in [19], a geometric control approach based on trigonometric functions is proposed to build triangular lattices, and its global convergence is proved. The extension to 3D spaces is validated analytically in [20]. Moreover, _harmonic approximation_[21] provides necessary conditions for the local stability of a lattice. These conditions are used in [14] to numerically design a virtual force that locally stabilizes a hexagonal lattice. A general analysis of the effects of attraction/repulsion virtual forces is carried out in [22], where the authors prove that the agents converge inside a bounded region, even though the specific equilibrium configuration is not characterized. We wish to remark here that formation control [15, 16, 17] differs from geometric pattern formation because of a typically smaller number of agents (order of tens) with, possibly, unique identifiers, numerous roles for the agents and often some coordinated motion of the agents. Similarly, when solving _flocking_ control problems, the emergence of coordinated motion is the crucial concern [23, 24, 18].
In this paper, we revisit the problem of geometric pattern formation using _attraction/repulsion_ virtual forces with the aim of bridging a gap in the existing literature and deriving a general proof of convergence when considering the formation of triangular lattice configurations. When compared to previous work, our stability results (i) can be applied to most control laws based on virtual forces (or potentials), rather than only holding for specific algorithms, e.g. [19], (ii) are sufficient rather than being only necessary [21], (iii) characterize the asymptotic configuration of the agents, rather than just proving its boundedness [22], and (iv) guarantee the emergence of triangular lattices rather than less regular ones, e.g. \(\alpha\)-lattices studied in [18].
## II Mathematical preliminaries
Given a vector \(\mathbf{v}\in\mathbb{R}^{d}\), we denote by \([\mathbf{v}]_{i}\) its \(i\)-th element, by \(\|\mathbf{v}\|\) its Euclidean norm, and by \(\mathbf{\hat{v}}\doteq\frac{\mathbf{v}}{\|\mathbf{v}\|}\) its direction. \(\mathbf{0}\) denotes a column vector of appropriate dimension with all elements equal to \(0\). Given a matrix \(\mathbf{A}\), \([\mathbf{A}]_{ij}\) is its \((i,j)\)-th element.
Given a continuous-time, autonomous dynamical system
\[\dot{\mathbf{x}}(t)=\mathbf{f}(\mathbf{x}(t)),\quad\mathbf{x}(0)=\mathbf{x}_ {0}, \tag{1}\]
with state vector \(\mathbf{x}(t)\in\mathbb{R}^{d}\), and \(\mathbf{x}_{0}\in\mathbb{R}^{d}\), we term as \(\phi(t,\mathbf{x}_{0})\) its trajectory starting from \(\mathbf{x}(0)=\mathbf{x}_{0}\).
**Definition 1** (Equilibrium set): _A set \(\Xi\subset\mathbb{R}^{d}\) is an equilibrium set for system (1) if \(\mathbf{f}(\mathbf{x})=\mathbf{0}\)\(\forall\mathbf{x}\in\Xi\)._
**Definition 2** (Local asymptotic stability [25, Definition 1.8]): _An equilibrium set \(\Xi\) for system (1) is locally asymptotically stable if \(\forall\varepsilon>0,\exists\delta>0\) such that if \(\min_{y\in\Xi}\|\mathbf{x}_{0}-\mathbf{y}\|<\delta\), then_
1. \(\min_{y\in\Xi}\|\phi(t,\mathbf{x}_{0})-\mathbf{y}\|<\varepsilon,\ \forall t>0\)_, and_
2. \(\lim_{t\to+\infty}\phi(t,\mathbf{x}_{0})\in\Xi\)_._
**Definition 3** (Incidence matrix): _Given a digraph with \(n\) vertices and \(m\) edges, its incidence matrix \(\mathbf{B}\in\mathbb{R}^{n\times m}\) has elements defined as_
\[[\mathbf{B}]_{ij}\coloneqq\begin{cases}+1,&\text{if edge $j$ starts from vertex $i$},\\ -1,&\text{if edge $j$ ends in vertex $i$},\\ 0,&\text{otherwise}.\end{cases}\]
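For concreteness, a minimal NumPy sketch of Definition 3 follows; the function name and the example edge list are illustrative, not from the paper:

```python
import numpy as np

def incidence_matrix(n_vertices, edges):
    """Incidence matrix B (Definition 3): [B]_{ij} = +1 if edge j starts
    from vertex i, -1 if edge j ends in vertex i, and 0 otherwise."""
    B = np.zeros((n_vertices, len(edges)))
    for j, (start, end) in enumerate(edges):
        B[start, j] = 1.0
        B[end, j] = -1.0
    return B

# Example: directed triangle 0 -> 1 -> 2 -> 0; every column sums to zero.
print(incidence_matrix(3, [(0, 1), (1, 2), (2, 0)]))
```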
**Definition 4** (Framework [16, p. 120]): _Consider a (di-)graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) with \(n\) vertices, and a set of positions \(\mathbf{p}_{1},\ldots,\mathbf{p}_{n}\in\mathbb{R}^{d}\) associated to its vertices, with \(\mathbf{p}_{i}\neq\mathbf{p}_{j}\ \forall i,j\in\{1,\ldots,n\}\). A \(d\)-dimensional framework is the pair \((\mathcal{G},\bar{\mathbf{p}})\), where \(\bar{\mathbf{p}}\coloneqq[\mathbf{p}_{1}^{\mathsf{T}}\ \cdots\ \mathbf{p}_{n}^{\mathsf{T}}]^{ \mathsf{T}}\in\mathbb{R}^{dn}\). Moreover, the length of an edge, say \((i,j)\in\mathcal{E}\), is \(\|\mathbf{p}_{i}-\mathbf{p}_{j}\|\)._
**Definition 5** (Congruent frameworks [26, p. 3]): _Given a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) and two frameworks \((\mathcal{G},\bar{\mathbf{p}})\) and \((\mathcal{G},\bar{\mathbf{q}})\), these are congruent if \(\|\mathbf{p}_{i}-\mathbf{p}_{j}\|=\|\mathbf{q}_{i}-\mathbf{q}_{j}\|\ \ \forall i,j\in\mathcal{V}\)._
**Definition 6** (Rigidity matrix [26, p. 5]): _Given a \(d\)-dimensional framework with \(n\geq 2\) vertices and \(m\) edges, its rigidity matrix \(\mathbf{M}\in\mathbb{R}^{m\times dn}\) has elements defined as_

\[[\mathbf{M}]_{e,(jd-d+k)}\coloneqq\begin{cases}[\mathbf{p}_{j}-\mathbf{p}_{i}]_{k},&\text{if edge $e$ starts from vertex $j$ and ends in vertex $i$},\\ 0,&\text{otherwise},\end{cases} \tag{2}\]
_with \(k=1,\ldots,d\)._
**Definition 7** (Infinitesimal rigidity [16, p. 122]): _A framework with rigidity matrix \(\mathbf{M}\) is infinitesimally rigid if, for any infinitesimal motion, say \(\mathbf{u},\)1 of its vertices, such that the length of the edges is preserved, it holds that \(\mathbf{Mu}=0\)._
Footnote 1: \(\mathbf{u}\) can be interpreted as either a velocity or a small displacement.
To give a geometrical intuition of the concept of infinitesimal rigidity, we note that an infinitesimally rigid framework is also rigid [16, p. 122], according to the definition below.
**Definition 8** (Rigidity [26, p. 3]): _A framework is rigid if every continuous motion of the vertices, that preserves the length of the edges, also preserves the distances between all pairs of vertices._
As a consequence, in a rigid framework, any continuous motion that does _not_ preserve the distance between any two pairs of vertices also does _not_ preserve the length of at least one edge.
**Theorem 1** ([16, p. 122]): _A 2-dimensional framework with \(n\geq 2\) vertices and rigidity matrix \(\mathbf{M}\) is infinitesimally rigid if and only if \(\operatorname{rank}(\mathbf{M})=2n-3\)._
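As an illustration of Definition 6 and Theorem 1, the sketch below assembles the standard two-sided rigidity matrix (which carries the same rank information, since in the swarm graph considered later every link appears in both directions) and tests the rank condition; all names are illustrative:

```python
import numpy as np

def rigidity_matrix(positions, edges):
    """Rigidity matrix of a d-dimensional framework (Definition 6).
    positions: (n, d) array of vertex positions; edges: (start, end) pairs."""
    n, d = positions.shape
    M = np.zeros((len(edges), d * n))
    for e, (j, i) in enumerate(edges):
        diff = positions[j] - positions[i]
        M[e, d * j:d * (j + 1)] = diff    # block of the start vertex j
        M[e, d * i:d * (i + 1)] = -diff   # block of the end vertex i
    return M

def is_infinitesimally_rigid_2d(positions, edges):
    """Theorem 1: a 2-D framework is infinitesimally rigid iff rank(M) = 2n - 3."""
    n = positions.shape[0]
    return np.linalg.matrix_rank(rigidity_matrix(positions, edges)) == 2 * n - 3

# A unit equilateral triangle with all three edges is infinitesimally rigid.
p = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
print(is_infinitesimally_rigid_2d(p, [(0, 1), (1, 2), (2, 0)]))  # True
```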
**Definition 9** (Swarm): _A (planar) swarm \(\mathcal{S}\coloneqq\{1,2,\ldots,n\}\) is a set of \(n\in\mathbb{N}_{>0}\) identical agents that can move on the plane. For each agent \(i\in\mathcal{S}\), \(\mathbf{x}_{i}(t)\in\mathbb{R}^{2}\) denotes its position in the plane at time \(t\in\mathbb{R}_{\geq 0}\)._
Moreover, we call \(\bar{\mathbf{x}}(t)\coloneqq[\mathbf{x}_{1}^{\mathsf{T}}(t)\ \cdots\ \mathbf{x}_{n}^{\mathsf{T}}(t)]^{ \mathsf{T}}\in\mathbb{R}^{2n}\) the _configuration_ of the swarm, define \(\mathbf{x}_{c}(t)\coloneqq\frac{1}{n}\sum_{i=1}^{n}\mathbf{x}_{i}(t)\in \mathbb{R}^{2}\) as its _center_, and denote by \(\mathbf{r}_{ij}(t)\coloneqq\mathbf{x}_{i}(t)-\mathbf{x}_{j}(t)\in\mathbb{R}^{2}\) the relative position of agent \(i\) with respect to agent \(j\).
**Definition 10** (Adjacency set): _Given a swarm \(\mathcal{S}\), the adjacency set of agent \(i\) at time \(t\) is \(\mathcal{A}_{i}(t)\coloneqq\{j\in\mathcal{S}\setminus\{i\}:\|\mathbf{r}_{ij}(t)\|\leq R_{\mathrm{a}}\}\), where \(R_{\mathrm{a}}\in\mathbb{R}_{>0}\) is the maximum link length._
**Definition 11** (Links): _A link is a pair \((i,j)\in\mathcal{S}\times\mathcal{S}\) such that \(j\in\mathcal{A}_{i}(t)\); \(\left\|\mathbf{r}_{ij}(t)\right\|\) is its length. The set of all links existing in a certain configuration \(\bar{\mathbf{x}}\) is denoted by \(\mathcal{E}(\bar{\mathbf{x}})\)._
Notice that \((i,j)\in\mathcal{E}(\bar{\mathbf{x}})\iff(j,i)\in\mathcal{E}(\bar{\mathbf{x}})\).
**Definition 12** (Swarm graph and framework): _The swarm graph is the digraph \(\mathcal{G}(\bar{\mathbf{x}})\coloneqq(\mathcal{S},\mathcal{E}(\bar{\mathbf{x}}))\). The swarm framework is \(\mathcal{F}(\bar{\mathbf{x}})\coloneqq(\mathcal{G}(\bar{\mathbf{x}}),\bar{ \mathbf{x}})\)._
**Definition 13** (Triangular lattice configuration): _Consider a planar swarm \(\mathcal{S}\) with framework \(\mathcal{F}(\bar{\mathbf{x}}^{*})\). \(\bar{\mathbf{x}}^{*}\) is a triangular (lattice) configuration if_
(A) \(\mathcal{F}(\bar{\mathbf{x}}^{*})\) _is infinitesimally rigid, and_
(B) \(\left\|\mathbf{r}_{ij}\right\|=R,\ \forall(i,j)\in\mathcal{E}(\bar{\mathbf{x}}^{*})\)_,_
_where \(R\in\mathbb{R}_{>0}\) denotes the desired link length._
Here, we assume that
\[R_{\mathsf{a}}\in]R,R\sqrt{3}[, \tag{3}\]
so that, when the swarm is in a triangular configuration, the adjacency set (Definition 10) of any agent includes only the agents in its immediate surroundings, and all the links (Definition 11) have length \(R\) (see Fig. 1).
We denote by \(\mathcal{T}\subset\mathbb{R}^{2n}\) the set of all triangular lattice configurations; it is immediate to verify that \(\mathcal{T}\) is unbounded and disconnected.
**Definition 14** (Congruent configurations): _Given a configuration \(\bar{\mathbf{x}}^{\diamond}\), we define the set of its congruent configurations \(\Gamma(\bar{\mathbf{x}}^{\diamond})\) as the set of configurations with congruent associated frameworks (see Definition 5), that is_
\[\Gamma(\bar{\mathbf{x}}^{\diamond})\coloneqq\{\bar{\mathbf{x}}\in\mathbb{R}^{2n }:\left\|\mathbf{x}_{i}-\mathbf{x}_{j}\right\|=\left\|\mathbf{x}_{i}^{ \diamond}-\mathbf{x}_{j}^{\diamond}\right\|,\forall i,j\in\mathcal{S}\}.\]
These configurations are obtained by translations and rotations of the framework \(\mathcal{F}(\bar{\mathbf{x}}^{\diamond})\); thus, it is immediate to verify that \(\Gamma(\bar{\mathbf{x}}^{\diamond})\) is connected and unbounded for any \(\bar{\mathbf{x}}^{\diamond}\) (see Fig. 1(a)). Also, note that \(\bar{\mathbf{x}}^{*}\in\mathcal{T}\iff\Gamma(\bar{\mathbf{x}}^{*})\subset \mathcal{T}\), and
\[\mathcal{T}=\bigcup_{\bar{\mathbf{x}}^{*}\in\mathcal{T}}\Gamma(\bar{\mathbf{x} }^{*}). \tag{4}\]
Fig. 1: Triangular configurations. (a) Schematic representation of a triangular lattice; red agents belong to the adjacency set of the black agent. (b) Example of a triangular configuration with \(n=100\) agents.
In the following, we omit the dependence on time when clear from the context.
## III Problem Statement
Consider a swarm \(\mathcal{S}\) of \(n\) agents, with agents' dynamics described by
\[\mathbf{\dot{x}}_{i}(t)=\mathbf{u}_{i}(t),\ \ \forall i\in\mathcal{S}, \tag{5}\]
where \(\mathbf{u}_{i}(t)\in\mathbb{R}^{2}\) is a control law to be designed.
Let \(R_{\mathrm{s}}\in\mathbb{R}_{>0}\) be a _sensing radius_ and define the _interaction set_ of agent \(i\) at time \(t\) as
\[\mathcal{I}_{i}(t)\coloneqq\{j\in\mathcal{S}\setminus\{i\}:\|\mathbf{r}_{ij} (t)\|\leq R_{\mathrm{s}}\}. \tag{6}\]
For the term \(\mathbf{u}_{i}(t)\) in (5), we consider the distributed _virtual forces_ control law, given by
\[\mathbf{u}_{i}(t)\coloneqq\sum_{j\in\mathcal{I}_{i}(t)}f\left(\left\|\mathbf{r}_{ij}(t)\right\|\right)\hat{\mathbf{r}}_{ij}(t), \tag{7}\]
where \(f:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}\) is the _interaction function_.
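A direct NumPy sketch of the control law (7), with the interaction set (6) computed by brute force (function and variable names are ours); consistently with Lemma 1 below, the returned forces sum to zero:

```python
import numpy as np

def virtual_force_control(x, f, R_s):
    """Control law (7): u_i = sum over j in I_i of f(||r_ij||) * r_hat_ij.
    x: (n, 2) agent positions; f: interaction function; R_s: sensing radius."""
    n = x.shape[0]
    u = np.zeros_like(x)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r_ij = x[i] - x[j]
            dist = np.linalg.norm(r_ij)
            if dist <= R_s:                      # j belongs to I_i, eq. (6)
                u[i] += f(dist) * r_ij / dist    # force along the direction r_hat_ij
    return u
```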
Note that in general there is no specific relation between \(\mathcal{I}_{i}\) and \(\mathcal{A}_{i}\) (see Definition 10); however, we reasonably assume that \(R_{\mathrm{s}}\geq R_{\mathrm{a}}\), so that
\[\mathcal{A}_{i}\subseteq\mathcal{I}_{i},\quad\forall i\in\mathcal{S}. \tag{8}\]
The following result slightly extends the one reported in [22, Lemma 1].
**Lemma 1:**_The position of the center of the swarm (see Definition 9), say \(\mathbf{x}_{\mathrm{c}}\), under the control law (7) is invariant, that is \(\dot{\mathbf{x}}_{\mathrm{c}}=\mathbf{0}\ \forall\bar{\mathbf{x}}\in\mathbb{R}^{2n}\)._
Proof.: Exploiting (5) and (7), the dynamics of the center of the swarm is given by
\[\dot{\mathbf{x}}_{\mathrm{c}}\coloneqq\frac{1}{n}\sum_{i=1}^{n}\dot{\mathbf{x}}_{i}=\frac{1}{n}\sum_{i=1}^{n}\mathbf{u}_{i}=\frac{1}{n}\sum_{i=1}^{n}\sum_{j\in\mathcal{I}_{i}}f(\left\|\mathbf{r}_{ij}\right\|)\hat{\mathbf{r}}_{ij}. \tag{9}\]
Since in a swarm the existence of any link \((i,j)\) implies the existence of link \((j,i)\) (see Definition 10), in (9), for any term \(f(\left\|\mathbf{r}_{ij}\right\|)\hat{\mathbf{r}}_{ij}\) there exists a term \(f(\left\|\mathbf{r}_{ji}\right\|)\hat{\mathbf{r}}_{ji}=-f(\left\|\mathbf{r}_{ij}\right\|)\hat{\mathbf{r}}_{ij}\) (because \(\left\|\mathbf{r}_{ij}\right\|=\left\|\mathbf{r}_{ji}\right\|\) and \(\hat{\mathbf{r}}_{ij}=-\hat{\mathbf{r}}_{ji}\)). Therefore, the sum of the two is zero, yielding the thesis.
## IV Convergence to a triangular configuration
We can now state the main result of this work, showing that, given an interaction function \(f\) (in (7)) that generates short range repulsion and long range attraction, the set of triangular configurations of the swarm is a locally asymptotically stable equilibrium set (see Definitions 1 and 2).
**Assumption 1:**\(f\) (in (7)) is such that:
(a1) \(f(R)=0\),
(a2) \(f(z)>0\) for \(z\in[0;R[\) and \(f(z)<0\) for \(z>R\),
(a3) \(f(z)\) is continuous in \([0;R_{\mathrm{a}}]\),
(a4) \(f(z)=0\) for any \(z>R_{\mathrm{a}}\).
An exemplary interaction function fulfilling the assumption above is portrayed in Fig. 3.
Without loss of generality, we further assume that, under Assumption 1, in a sufficiently small neighborhood of a triangular configuration, all other equilibria are also triangular (supporting evidence showing that this assumption is not restrictive is reported in the Appendix).
**Theorem 2:**_Let Assumption 1 hold. Then, for any triangular configuration \(\bar{\mathbf{x}}^{*}\), \(\Gamma(\bar{\mathbf{x}}^{*})\) is a locally asymptotically stable equilibrium set. Consequently, \(\mathcal{T}\) is also a locally asymptotically stable equilibrium set._

Proof.: Let us consider _any_ triangular configuration \(\bar{\mathbf{x}}^{*}\in\mathcal{T}\), with center \(\mathbf{x}^{*}_{\mathrm{c}}\coloneqq\frac{1}{n}\sum_{i=1}^{n}\mathbf{x}^{*}_{i}\) and relative positions \(\mathbf{r}^{*}_{ij}\), and the set \(\Gamma(\bar{\mathbf{x}}^{*})\) of its congruent configurations. Recalling Definition 13.(B) and (a1), we have that \(\bar{\mathbf{x}}^{*}\) is an equilibrium point of (5)-(7); thus, \(\Gamma(\bar{\mathbf{x}}^{*})\) and \(\mathcal{T}\) are equilibrium sets. Next, we will prove local asymptotic stability of \(\Gamma(\bar{\mathbf{x}}^{*})\subset\mathcal{T}\), which implies local asymptotic stability of \(\mathcal{T}\) through (4).
_Step 1 (Lyapunov function):_ Given a configuration \(\bar{\mathbf{x}}\in\mathbb{R}^{2n}\) with center \(\mathbf{x}_{\mathrm{c}}\) and inducing the links in \(\mathcal{E}(\bar{\mathbf{x}})\) according to Definition 11, let \(m\coloneqq|\mathcal{E}(\bar{\mathbf{x}})|\) and order the links in \(\mathcal{E}(\bar{\mathbf{x}})\) arbitrarily, so that \(\mathbf{r}_{1},\ldots,\mathbf{r}_{m}\) refer to the relative positions \(\mathbf{r}_{ij}\) for \((i,j)\in\mathcal{E}(\bar{\mathbf{x}})\). Recalling (a3), we can define the potential function \(P:[0,R_{\mathrm{a}}]\rightarrow\mathbb{R}\) given by \(P(z)=-\int_{R}^{z}f(y)\,\mathrm{d}y\) (see Fig. 3). Note that \(P(R)=0\), \(\frac{\mathrm{d}P}{\mathrm{d}z}(z)=-f(z)\), and, from (a2),
\[P(z)>0\quad\forall z\in\mathbb{R}_{\geq 0}\setminus\{R\}. \tag{10}\]
Then, let us consider the candidate Lyapunov function
\[V(\bar{\mathbf{x}})\coloneqq\|\mathbf{x}^{*}_{\mathrm{c}}-\mathbf{x}_{\mathrm{c}}\|^{2}+\sum_{k=1}^{m}P(\|\mathbf{r}_{k}\|). \tag{11}\]
By (10), it holds that \(V(\bar{\mathbf{x}})\geq 0\ \forall\bar{\mathbf{x}}\in\mathbb{R}^{2n}\), and \(V=0\) if and only if both \(\mathbf{x}_{\mathrm{c}}=\mathbf{x}^{*}_{\mathrm{c}}\) and Definition 13.(B) holds.
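A numerical sketch of the candidate Lyapunov function (11), with \(P\) evaluated by quadrature; each unordered pair is counted twice since both \((i,j)\) and \((j,i)\) belong to \(\mathcal{E}(\bar{\mathbf{x}})\), a constant factor that does not affect the analysis (names are illustrative):

```python
import numpy as np
from scipy.integrate import quad

def lyapunov_V(x, x_c_star, f, R, R_a):
    """Candidate Lyapunov function (11): squared distance of the swarm centre
    from x_c* plus the sum of link potentials P(||r_k||), P(z) = -int_R^z f."""
    P = lambda z: -quad(f, R, z)[0]
    V = np.linalg.norm(x.mean(axis=0) - x_c_star) ** 2
    n = x.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            dist = np.linalg.norm(x[i] - x[j])
            if dist <= R_a:          # (i, j) is a link (Definition 11)
                V += 2.0 * P(dist)   # counted twice: links (i, j) and (j, i)
    return V
```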
Fig. 3: Example of an interaction function \(f\) (top panel) and its corresponding potential \(P\) (bottom panel).
Fig. 2: (a): Sets of triangular lattices configurations. (b): Sets used in the proof of Theorem 2.
_Step 2 (Properties of \(V\)):_\(V(\bar{\mathbf{x}})\) is discontinuous over \(\mathbb{R}^{2n}\) (because \(\mathcal{E}(\bar{\mathbf{x}})\) changes when links (dis-)appear). However, \(V(\bar{\mathbf{x}})\) is continuous and differentiable in any subset of \(\mathbb{R}^{2n}\) where the set \(\mathcal{E}(\bar{\mathbf{x}})\) of links is constant. To find such a set, we seek conditions on \(\bar{\mathbf{x}}\) such that \(\mathcal{E}(\bar{\mathbf{x}})=\mathcal{E}(\bar{\mathbf{x}}^{*})\) (see Definitions 10 and 11), i.e.,
\[\left\|\mathbf{r}_{ij}\right\| <R_{\mathrm{a}}, \forall(i,j)\in\mathcal{E}(\bar{\mathbf{x}}^{*}), \tag{12a}\] \[\left\|\mathbf{r}_{ij}\right\| >R_{\mathrm{a}}, \forall(i,j)\not\in\mathcal{E}(\bar{\mathbf{x}}^{*}). \tag{12b}\]
(12a) means that all links in \(\mathcal{E}(\bar{\mathbf{x}}^{*})\) are preserved in \(\mathcal{E}(\bar{\mathbf{x}})\), while (12b) means that no new links are created in \(\mathcal{E}(\bar{\mathbf{x}})\) with respect to \(\mathcal{E}(\bar{\mathbf{x}}^{*})\). With simple algebraic manipulations it is possible to show that (12a) and (12b) hold if \(\bar{\mathbf{x}}\in\mathcal{B}\), where
\[\mathcal{B}\coloneqq\{\bar{\mathbf{x}}\in\mathbb{R}^{2n}:\left|\left\| \mathbf{r}_{ij}\right\|-\left\|\mathbf{r}_{ij}^{*}\right\|\right|<\beta,\ \forall i,j\in\mathcal{S}\}, \tag{13}\]
and \(\beta<\min_{i,j\in\mathcal{S}}\left|R_{\mathrm{a}}-\left\|\mathbf{r}_{ij}^{*} \right\|\right|\). Note that \(\mathcal{B}\) can be interpreted as a "neighborhood" of \(\Gamma(\bar{\mathbf{x}}^{*})\) with "width" \(\beta\) (see Fig. 2b). Hence, \(\mathcal{E}(\bar{\mathbf{x}})=\mathcal{E}(\bar{\mathbf{x}}^{*})\) in \(\mathcal{B}\), and thus \(V\) is continuously differentiable in \(\mathcal{B}\).
_Step 3 (Analysis of \(\dot{V}\)):_ At this point, we can restrict our analysis to the set \(\mathcal{B}\) to study the attractivity of \(\Gamma(\bar{\mathbf{x}}^{*})\). Let us start by studying the dynamics of the agents. From (5)-(7), we have
\[\dot{\mathbf{x}}_{i}=\sum_{j\in\mathcal{I}_{i}}f(\left\|\mathbf{r}_{ij}\right\|)\hat{\mathbf{r}}_{ij}. \tag{14}\]
Hypothesis (a4) and (8) imply that in (14) we have
\[\sum_{j\in\mathcal{I}_{i}}f(\left\|\mathbf{r}_{ij}\right\|)\hat{\mathbf{r}}_{ij}=\sum_{j\in\mathcal{A}_{i}}f(\left\|\mathbf{r}_{ij}\right\|)\hat{\mathbf{r}}_{ij}. \tag{15}\]
Then, exploiting (15) and the incidence matrix \(\mathbf{B}\) (Definition 3) of the swarm graph, (14) can be rewritten as
\[\dot{\mathbf{x}}_{i}=\sum_{j\in\mathcal{A}_{i}}f(\left\|\mathbf{r}_{ij}\right\|)\hat{\mathbf{r}}_{ij}=\sum_{k=1}^{m}[\mathbf{B}]_{ik}f(\left\|\mathbf{r}_{k}\right\|)\hat{\mathbf{r}}_{k}. \tag{16}\]
Moreover we can write the dynamics of the relative positions along a link \(k\) as \(\dot{\mathbf{r}}_{k}=\sum_{i=1}^{n}[\mathbf{B}]_{ik}\dot{\mathbf{x}}_{i}\). Thus, exploiting Lemma 1 and (16), we get
\[\dot{V}(\bar{\mathbf{x}})=\sum_{k=1}^{m}\frac{\partial V}{\partial\left\|\mathbf{r}_{k}\right\|}\ \frac{\partial\left\|\mathbf{r}_{k}\right\|}{\partial\mathbf{r}_{k}}\ \dot{\mathbf{r}}_{k}=\sum_{k=1}^{m}P^{\prime}(\left\|\mathbf{r}_{k}\right\|)\ \hat{\mathbf{r}}_{k}^{\mathsf{T}}\sum_{i=1}^{n}[\mathbf{B}]_{ik}\dot{\mathbf{x}}_{i}\] \[=-\sum_{i=1}^{n}\sum_{k=1}^{m}[\mathbf{B}]_{ik}\ f(\left\|\mathbf{r}_{k}\right\|)\ \hat{\mathbf{r}}_{k}^{\mathsf{T}}\dot{\mathbf{x}}_{i}=-\sum_{i=1}^{n}\dot{\mathbf{x}}_{i}^{\mathsf{T}}\dot{\mathbf{x}}_{i}=-\dot{\bar{\mathbf{x}}}^{\mathsf{T}}\dot{\bar{\mathbf{x}}},\]

where we used \(P^{\prime}=-f\) and (16). We can hence conclude that \(\dot{V}(\bar{\mathbf{x}})=0\) if and only if \(\dot{\bar{\mathbf{x}}}=\mathbf{0}\), i.e., in correspondence of equilibrium configurations.
Now, choosing \(\beta\) small enough, we can exclude the presence of equilibrium configurations not belonging to \(\Gamma(\bar{\mathbf{x}}^{*})\), and therefore
\[\begin{cases}\dot{V}(\bar{\mathbf{x}})=0,&\text{if }\bar{\mathbf{x}}\in\Gamma(\bar{ \mathbf{x}}^{*}),\\ \dot{V}(\bar{\mathbf{x}})<0,&\text{if }\bar{\mathbf{x}}\in\mathcal{B}\setminus\Gamma(\bar{ \mathbf{x}}^{*}).\end{cases} \tag{17}\]
_Step 4 (Applying LaSalle's invariance principle):_ To complete the proof, we define a forward invariant neighborhood of \(\bar{\mathbf{x}}^{*}\) and then apply LaSalle's invariance principle. Given some \(\omega\in\mathbb{R}_{>0}\), let \(\Omega\) be the largest connected set containing \(\bar{\mathbf{x}}^{*}\) such that \(V(\bar{\mathbf{x}})\leq\omega\ \forall\bar{\mathbf{x}}\in\Omega\) (see Fig. 2b). In particular, we select \(\omega\) small enough that \(\Omega\subseteq\mathcal{B}\).3 Since \(V(\bar{\mathbf{x}})\leq\omega\) and \(\dot{V}(\bar{\mathbf{x}})\leq 0\) for all \(\bar{\mathbf{x}}\in\Omega\), then \(\Omega\) is forward invariant. Moreover, \(\Omega\) is closed, because \(V\) is continuous in \(\Omega\), and \(\Omega\) is the inverse image of the closed set \([0,\omega]\). \(\Omega\) is also bounded because (i) translations too far from \(\bar{\mathbf{x}}^{*}\) cause \(V\) to increase beyond \(\omega\) (see (11)), and (ii) \(\Omega\subseteq\mathcal{B}\) implies that the deformations of the framework are bounded (see (13)). Since \(\Omega\) is closed and bounded, it is also compact.
Footnote 3: Such a value of \(\omega\) exists because \(\mathcal{B}\) is a “neighborhood” of \(\Gamma(\bar{\mathbf{x}}^{*})\) (in the sense of (13)) and, by the rigidity of framework \(\mathcal{F}(\bar{\mathbf{x}}^{*})\) (Definition 8), any continuous motion of the vertices that changes the distance between any two vertices also changes the length of at least one link, causing \(V\) to increase.
As \(\Omega\) is compact and forward invariant, we can apply LaSalle's invariance principle [27, Theorem 4.4], and noting that, in \(\Omega\), \(\dot{V}(\bar{\mathbf{x}})=0\) if and only if \(\bar{\mathbf{x}}\in\Gamma(\bar{\mathbf{x}}^{*})\) (see (17)), we get that all the trajectories starting in \(\Omega\) converge to \(\Gamma(\bar{\mathbf{x}}^{*})\cap\Omega\). This and the forward invariance of \(\Omega\) imply that \(\Gamma(\bar{\mathbf{x}}^{*})\) is locally asymptotically stable, and so is \(\mathcal{T}\) because of (4).
## V Numerical validation
In this section, we validate numerically the result presented in Section IV and estimate the basin of attraction of \(\mathcal{T}\).
### _Simulation setup_
We set the desired link length to \(R=1\), the maximum link length to \(R_{\mathrm{a}}=(1+\sqrt{3})/2\approx 1.37\), the sensing radius to \(R_{\mathrm{s}}=3\), and the number of agents to \(n=100\).
The interaction function \(f\) is chosen as the Physics-inspired Lennard-Jones function [5, 11], given by
\[f(z)=\min\left\{\left(\frac{a}{z^{2c}}-\frac{b}{z^{c}}\right),\ 1\right\}, \tag{18}\]
where we select \(a=b=0.5\) and \(c=12\); see Fig. 4. In (18), \(f\) is saturated to \(1\) to avoid divergence of \(f\) for \(z\to 0\). Concerning Assumption 1, the interaction function \(f\) satisfies (a1), (a2), and (a3). Also, as shown in Fig. 4, it quickly tends to zero so that we can assume it practically satisfies (a4). The choice of not setting \(f(z)\) exactly equal to zero for \(z\geq R_{\mathrm{a}}\) is intentional as it allows accounting for long range attraction between the agents, which is frequently required in swarm robotics applications [22].
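In code, the saturated Lennard-Jones interaction (18) with the selected parameters reads as follows (a minimal sketch, not the authors' Matlab implementation):

```python
import numpy as np

def lennard_jones(z, a=0.5, b=0.5, c=12):
    """Interaction function (18), saturated at 1 to avoid divergence as z -> 0."""
    return np.minimum(a / z**(2 * c) - b / z**c, 1.0)

print(lennard_jones(np.array([0.8, 1.0, 1.2])))
# positive (saturated) below R = 1, zero at R, negative above R
```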
To assess if the swarm is in a triangular configuration, we check the conditions in Definition 13. To evaluate whether a configuration is infinitesimally rigid, we use Theorem 1. Moreover, we define the _error_\(e(t)\coloneqq\max_{k\in\mathcal{E}(t)}\left|\left\|\mathbf{r}_{k}(t) \right\|-R\right|\), which is zero when the configuration is triangular. Also, as long as \(e(t)\) is lower than \(R_{\mathrm{a}}-R\), links in the configuration of interest are neither created nor destroyed.
For each simulation, the initial positions of the agents are obtained by picking a random triangular configuration and then applying, to each agent, a random displacement, uniformly distributed in a disk of radius \(\delta\in\mathbb{R}_{\geq 0}\).

All simulations are run in Matlab4 and last 20 s; the agents' dynamics (5)-(7) are integrated using the forward Euler method with a fixed time step equal to 0.01 s.
Footnote 4: Code available at [https://github.com/diBernardoGroup/SwarmSimPublic](https://github.com/diBernardoGroup/SwarmSimPublic).
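A forward-Euler simulation loop matching this setup might look as follows; this sketch reuses `virtual_force_control` and `lennard_jones` from the earlier sketches and assumes the configuration stays connected so that at least one link always exists:

```python
import numpy as np

def simulate(x0, f, R_s=3.0, T=20.0, dt=0.01, R=1.0, R_a=(1 + np.sqrt(3)) / 2):
    """Integrates (5)-(7) with forward Euler and logs the error
    e(t) = max over links of | ||r_k(t)|| - R |."""
    x, errors = x0.copy(), []
    for _ in range(int(T / dt)):
        x = x + dt * virtual_force_control(x, f, R_s)
        dists = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
        link_mask = (dists > 0) & (dists <= R_a)   # links per Definition 11
        errors.append(np.max(np.abs(dists[link_mask] - R)))
    return x, np.array(errors)
```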
**Remark 1:**_Theorem 2, together with Definitions 9 and 13 allow a straightforward extension of the analysis to the three-dimensional case (\(d=3\)). The only cumbersome step is to assess the infinitesimal rigidity of the 3D framework of interest as Theorem 1 can no longer be applied._
### _Numerical results_
To validate Theorem 2 and estimate the basin of attraction of the set of triangular configurations, we performed extensive simulations for various values of \(\delta\), and observed the steady state configurations. The results are reported in Fig. 5. Namely, we see that for \(\delta\leq\delta^{\text{thres}}:=0.25\) all simulations converge to a triangular configuration, with a rigid framework and a negligible value of \(e\). Then, as \(\delta\) increases beyond \(\delta^{\text{thres}}\), the average number of simulations converging to triangular configurations decreases, until for \(\delta>0.45\) no simulation converges to a triangular configuration. Notice that \(e(0)\leq 2\delta\); therefore, \(\delta=0.25\) corresponds to a perturbation of up to \(50\%\) of the initial length of the links, providing an estimation of the basin of attraction (region of asymptotic stability) of \(\mathcal{T}\).
Moreover, we analysed the time evolution of \(e(t)\) in the case \(\delta=0.2\). The results of 10 simulations are shown in Fig. 6. We find that the rigidity is preserved during all simulations, and at steady state \(e\) reaches zero, meaning that the swarm, when locally perturbed, quickly converges back to a triangular configuration, as expected from Theorem 2.
## VI Conclusions
We proved analytically local asymptotic stability of triangular lattice configurations for planar swarms under the action of a distributed control action based on virtual attraction/repulsion forces. The theoretical derivations were supported by exhaustive numerical simulations validating the theoretical results and providing an estimate of the basin of attraction. The mild hypotheses required on the interaction function that were used to prove convergence allow for wide applicability of the theoretical results.
Future work will focus on the formalization of the three-dimensional case and the extension of the results to other geometric lattices, such as squares and hexagons.
## Appendix

To confirm the effectiveness of our theoretical results, we provide below further semi-analytical evidence that the set of triangular configurations \(\mathcal{T}\) is locally asymptotically stable, which also excludes the presence of other equilibria in an arbitrarily small neighborhood of it. To do so, we linearize system (5)-(7) around a triangular configuration, say \(\bar{\mathbf{x}}^{*}\), obtaining \(\dot{\bar{\mathbf{x}}}\approx\mathbf{J}(\bar{\mathbf{x}}^{*})\left(\bar{\mathbf{x}}-\bar{\mathbf{x}}^{*}\right)\), with \(\mathbf{J}(\bar{\mathbf{x}}^{*})\in\mathbb{R}^{2n\times 2n}\) derived as follows.
_Jacobian of (5)-(7):_ System (5)-(7) can be recast as
\[\dot{\bar{\mathbf{x}}}=((\mathbf{BFG}^{-1}\mathbf{B}^{\mathsf{T}})\otimes\mathbf{I}_{2})\bar{\mathbf{x}}=((\mathbf{BHB}^{\mathsf{T}})\otimes\mathbf{I}_{2})\bar{\mathbf{x}}, \tag{19}\]
where \(\mathbf{F},\mathbf{G},\mathbf{H}\in\mathbb{R}^{m\times m}\) are diagonal matrices; \([\mathbf{F}]_{ii}\coloneqq f(\|\mathbf{r}_{i}\|)\), \([\mathbf{G}]_{ii}\coloneqq\|\mathbf{r}_{i}\|\), and \(\mathbf{H}\coloneqq\mathbf{FG}^{-1}\). The Jacobian of (19) is
\[\mathbf{J}=\left(\mathbf{B}\frac{\partial\mathbf{H}}{\partial\bar{\mathbf{x}} }\mathbf{B}^{\mathsf{T}}\otimes\mathbf{I}_{2}\right)\bar{\mathbf{x}}+( \mathbf{BHB}^{\mathsf{T}})\otimes\mathbf{I}_{2}=:\mathbf{J}_{1}+\mathbf{J}_{2}, \tag{20}\]
Fig. 4: Plot of the interaction function defined by (18). The zero of the function is highlighted by a red dot.
Fig. 5: Simulations for different values of \(\delta\). (a): Terminal values of \(e\) and \(\rho\). \(\rho\) is the fraction of trials converging to an infinitesimally rigid configuration. For \(e\), the solid line is the mean; the shaded area is the minimum and maximum. 20 simulations with random initial conditions are performed for each value of \(\delta\). (b), (c): Initial and final configurations of representative simulations for specific values of \(\delta\).
where \(\frac{\partial\mathbf{H}}{\partial\mathbf{\bar{x}}}\in\mathbb{R}^{m\times m\times 2n}\) is a tensor, and
\[\left[\frac{\partial\mathbf{H}}{\partial\mathbf{\bar{x}}}\mathbf{B}^{\intercal} \right]_{:,:,k}=\left[\frac{\partial\mathbf{H}}{\partial\mathbf{\bar{x}}} \right]_{:,:,k}\mathbf{B}^{\intercal}\quad\in\mathbb{R}^{m\times n},\]
with \([\,\cdot\,]_{:,:,k}\) denoting the matrix obtained by fixing the third index of the tensor. From (a1), for all triangular configurations we have \(\mathbf{J}_{2}=(\mathbf{B}\mathbf{H}\mathbf{B}^{\intercal})\otimes\mathbf{I}_{2}=\mathbf{0}\). Then, \([\mathbf{J}_{1}]_{:,k}=\left(\mathbf{B}\left[\frac{\partial\mathbf{H}}{\partial\bar{\mathbf{x}}}\right]_{:,:,k}\mathbf{B}^{\intercal}\otimes\mathbf{I}_{2}\right)\bar{\mathbf{x}}\). From [26, p. 20] we have \(\frac{\partial\|\mathbf{r}_{i}\|^{2}}{\partial[\bar{\mathbf{x}}]_{k}}=2[\mathbf{M}]_{i,k}\) (see Definition 6), that is \(\frac{\partial\|\mathbf{r}_{i}\|}{\partial[\bar{\mathbf{x}}]_{k}}=\frac{1}{\|\mathbf{r}_{i}\|}[\mathbf{M}]_{i,k}\), and thus

\[\left[\frac{\partial\mathbf{H}}{\partial\bar{\mathbf{x}}}\right]_{i,i,k}=\frac{\partial\left[f(\|\mathbf{r}_{i}\|)/\|\mathbf{r}_{i}\|\right]}{\partial\|\mathbf{r}_{i}\|}\,\frac{\partial\|\mathbf{r}_{i}\|}{\partial[\bar{\mathbf{x}}]_{k}}=\big{(}f^{\prime}(\|\mathbf{r}_{i}\|)\|\mathbf{r}_{i}\|-f(\|\mathbf{r}_{i}\|)\big{)}\,\|\mathbf{r}_{i}\|^{-3}\,[\mathbf{M}]_{i,k}, \tag{21a}\]
\[\left[\frac{\partial\mathbf{H}}{\partial\bar{\mathbf{x}}}\right]_{i,j,k}=0,\quad\text{if }i\neq j. \tag{21b}\]
_Numerical analysis:_ We set \(R=1\) and generated 760 random triangular configurations (10 per each number of agents \(n\) between 25 and 100). For each of these configurations, assuming \(f\) (in (7)) is in the form (18), we computed \(\mathbf{J}\) using (20)-(21) and found that in all cases \(\mathbf{J}\) has 3 zero eigenvalues with eigenvectors \(\{\mathbf{w}_{i}^{0}\}_{i}\), and \(2n-3\) negative eigenvalues with eigenvectors \(\{\mathbf{w}_{j}^{-}\}_{j}\). Moreover, \(\mathbf{M}\mathbf{w}_{i}^{0}=\mathbf{0}\) and \(\mathbf{M}\mathbf{w}_{j}^{-}\neq\mathbf{0}\); thus, from Definition 7, the span of \(\{\mathbf{w}_{i}^{0}\}\) corresponds to roto-translations and is a hyperplane locally tangent to \(\Gamma(\bar{\mathbf{x}}^{*})\) (see Definition 14), while \(\{\mathbf{w}_{j}^{-}\}\) correspond to other motions. Therefore, the _center manifold theorem_[25, Theorem 5.1] yields that \(\Gamma(\bar{\mathbf{x}}^{*})\) is a _center manifold_ of system (5)-(7). Moreover, as expected from Theorem 2, the _reduction principle_[25, Theorem 5.2] confirms that the dynamics locally converge onto the equilibrium set \(\Gamma(\bar{\mathbf{x}}^{*})\), and excludes the presence of other equilibria in an arbitrarily small neighborhood of it.
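A quick way to reproduce this eigenvalue pattern without assembling (20)-(21) analytically is a central finite-difference approximation of \(\mathbf{J}\); this is only a semi-numerical check under the same assumptions and relies on the earlier `virtual_force_control` and `lennard_jones` sketches:

```python
import numpy as np

def numerical_jacobian(x_star, f, R_s, eps=1e-6):
    """Central finite-difference Jacobian of (5)-(7) around x_star (n, 2)."""
    n = x_star.shape[0]
    g = lambda v: virtual_force_control(v.reshape(n, 2), f, R_s).ravel()
    x0 = x_star.ravel()
    J = np.zeros((2 * n, 2 * n))
    for k in range(2 * n):
        dx = np.zeros(2 * n)
        dx[k] = eps
        J[:, k] = (g(x0 + dx) - g(x0 - dx)) / (2 * eps)
    return J

# At a triangular configuration one expects 3 (near-)zero eigenvalues,
# spanning the roto-translations, and 2n - 3 eigenvalues with negative real part:
# eigs = np.linalg.eigvals(numerical_jacobian(x_star, lennard_jones, R_s=3.0))
```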
|
2310.00688 | Efficient Constrained Dynamics Algorithms based on an Equivalent LQR
Formulation using Gauss' Principle of Least Constraint | We derive a family of efficient constrained dynamics algorithms by
formulating an equivalent linear quadratic regulator (LQR) problem using Gauss
principle of least constraint and solving it using dynamic programming. Our
approach builds upon the pioneering (but largely unknown) O(n + m^2d + m^3)
solver by Popov and Vereshchagin (PV), where n, m and d are the number of
joints, number of constraints and the kinematic tree depth respectively. We
provide an expository derivation for the original PV solver and extend it to
floating-base kinematic trees with constraints allowed on any link. We make new
connections between the LQR's dual Hessian and the inverse operational space
inertia matrix (OSIM), permitting efficient OSIM computation, which we further
accelerate using matrix inversion lemma. By generalizing the elimination
ordering and accounting for MUJOCO-type soft constraints, we derive two
original O(n + m) complexity solvers. Our numerical results indicate that
significant simulation speed-up can be achieved for high dimensional robots
like quadrupeds and humanoids using our algorithms as they scale better than
the widely used O(nd^2 + m^2d + d^2m) LTL algorithm of Featherstone. The
derivation through the LQR-constrained dynamics connection can make our
algorithm accessible to a wider audience and enable cross-fertilization of
software and research results between the fields | Ajay Suresha Sathya, Herman Bruyninckx, Wilm Decre, Goele Pipeleers | 2023-10-01T14:50:48Z | http://arxiv.org/abs/2310.00688v1 | Efficient Constrained Dynamics Algorithms based on an Equivalent LQR Formulation using Gauss' Principle of Least Constraint
###### Abstract
We derive a family of efficient constrained dynamics algorithms by formulating an equivalent linear quadratic regulator (LQR) problem using Gauss' principle of least constraint and solving it using dynamic programming. Our approach builds upon the pioneering (but largely unknown) \(O(n+m^{2}d+m^{3})\) solver by Popov and Vereshchagin (PV), where \(n\), \(m\) and \(d\) are the number of joints, number of constraints and the kinematic tree depth respectively. We provide an expository derivation for the original PV solver and extend it to floating-base kinematic trees with constraints allowed on any link. We make new connections between the LQR's dual Hessian and the inverse operational space inertia matrix (OSIM), permitting efficient OSIM computation, which we further accelerate using matrix inversion lemma. By generalizing the elimination ordering and accounting for MuJoCo-type soft constraints, we derive two original \(O(n+m)\) complexity solvers. Our numerical results indicate that significant simulation speed-up can be achieved for high dimensional robots like quadrupeds and humanoids using our algorithms as they scale better than the widely used \(O(nd^{2}+m^{2}d+d^{2}m)\) LTL algorithm of Featherstone. The derivation through the LQR-constrained dynamics connection can make our algorithm accessible to a wider audience and enable cross-fertilization of software and research results between the fields.
## I Introduction
Rigid body mechanics is a long-studied field with fundamental contributions already made in the 18th and 19th centuries. Since the 1970s, robotics research has focussed on developing computationally efficient dynamics algorithms [1]. Initial motivation for this research was to enable real-time dynamic simulation and computed torque control on the slow computers of the 1970s. Despite significant processor clock-time improvements since then, computing dynamics efficiently remains a relevant problem because it can positively impact modern robotics applications involving model predictive control (MPC) and reinforcement learning. Faster computation enables MPC control designers to increase the prediction horizon which usually improves optimality and stability properties of the MPC controller [2]. It can speed up contact-aware online trajectory optimization [3, 4] and also shorten long training times in reinforcement learning from simulations. Unsurprisingly, implementing efficient dynamics simulators remains an active research area [5, 6, 7, 8, 9].
However, efficient dynamics algorithms are typically complex with "a steep learning curve" [10] and are not discussed in introductory robotics textbooks [11, 12]. Consequently, robotics researchers often use dynamics algorithms (especially constrained dynamics algorithms) implemented in simulators as a black-box and are therefore unable to adapt or debug the algorithms to suit their applications. By deriving efficient constrained dynamics algorithms (CDA) as the solution of an equivalent equality-constrained linear quadratic regulator (LQR) problem, we believe that this paper makes efficient CDAs accessible to researchers with an optimization and control background. This includes many roboticists that are MPC practitioners due to the rising popularity of differential dynamic programming (DDP) style [13] algorithms. The optimization-based perspective as well as the LQR connection opens up possibilities for transfer of software and recent research results between the fields, especially the recent data-driven methods for safe control of systems with uncertain dynamics [14]. Our derivation is also self-contained and does not assume prior knowledge of LQR derivation.
### _Related work_
The first efficient recursive algorithms, with \(O(n)\) complexity in the number of joints, for computing the unconstrained forward dynamics were independently discovered by Vereshchagin [15] and Featherstone [16]. However, Vereshchagin's solver "was way ahead of its time and languished in obscurity for a decade" [17]. Featherstone's insight involved efficiently propagating the solution of the Newton-Euler equations through the links, while Vereshchagin's approach was based on optimizing the Gauss' principle of least constraint [18] (a fundamental optimization-based formulation of classical mechanics) using dynamic programming (DP) [19]. Vereshchagin's idea is analogous to the standard textbook approach for solving the discrete-time linear quadratic regulator (LQR) problem using DP [2, Chapter 1], which we will use in the rest of this paper. Similar connection to the LQR problem was independently made in [20] by noting similarities between the Kalman filter and \(O(n)\) recursive dynamics algorithms and this connection was further developed within a _spatial operator algebra_ (SOA) framework [21, 22], making efficient \(O(n)\) dynamics algorithms accessible
to researchers familiar with filtering theory. However, the SOA derivation is fairly complex, is performed over several papers and assumes strong familiarity with filtering theory literature and notation from 1960s and 1970s. Moreover, the SOA derivation does not permit a straightforward extension to constrained dynamics. Unlike SOA, our LQR approach starts with the optimization problem arising from first principles, includes motion constraints and readers will find our derivation to be a significantly simpler and more direct connection to LQR than [20].
The simplicity arises from Gauss' principle allowing straightforward modeling of the motion constraints (like the non-penetration constraints for the feet of the Go1 robot in fig. 1a) by adding them as constraints to the associated optimization problem. This ease of modeling allowed Popov and Vereshchagin (PV) to quickly extend their forward dynamics algorithm to an efficient \(O(n)\) constrained dynamics algorithm [23, 24] for fixed-base kinematic chains with end-effector constraints. But this extension of the LQR connection to constrained dynamics remains largely unknown and unused by the robotics community despite its simplicity and efficiency. There have been a few robot control architectures using the PV solver [25, 26], including an implementation in Orocos-KDL 1 for kinematic chains, but its wider usage remains limited. [25] also derives the PV solver by introducing the concept of "acceleration energy" and extends it to trees by assembling acceleration energies. For readers unfamiliar with acceleration energy, their derivation is hard to follow and verify, while in this paper we provide an expository derivation purely using the mathematical perspective of dynamic programming on the LQR problem.
Footnote 1: [https://www.orocos.org/kdl.html](https://www.orocos.org/kdl.html)
Other independent contributions that can be used to solve constrained dynamics include the well-known operational-space formulation [27]. However, [27] does not propose an efficient algorithm for computing the operational-space inertia matrix (OSIM), which has a computational complexity of \(O(n^{3})\) when computed naively in joint-space. A major contribution to computing the OSIM efficiently came in the form of an \(O(n+m^{2}d+m^{3})\) complexity recursive algorithm in [28, 29], where \(m\) is the number of constraints and \(d\) is the tree depth. An efficient formula for computing off-diagonal blocks of the inverse OSIM using extended force propagators (EFP) was proposed in [30]. However, they do not exploit this EFP idea in their proposed algorithm and instead used a recursive approach similar to [28] to obtain \(O(n+mn+m^{3})\) complexity [30]. The idea of EFP was fully exploited in the EFP algorithm (EFPA) [31] to obtain a reduced complexity of \(O(n+md+m^{3})\). In [32], Featherstone reported that exploiting the branching-induced sparsity in the joint-space inertia matrix (JSIM) and the kinematic Jacobian computes the OSIM more efficiently than the existing recursive \(O(n)\) algorithms, despite having a worse \(O(nd^{2}+md^{2}+dm^{2})\) complexity (where \(d\) is the depth of the tree), even for the Honda Asimo robot, a complex robot with \(n=40\). This result has led to a much wider usage of Featherstone's higher complexity method in rigid body dynamics libraries like MuJoCo, Pinocchio [6], Raisim and RBDL, to name a few, instead of the lower complexity EFPA algorithm [31]. Recent work [6] derives Featherstone's OSIM algorithm [32] from the perspective of factorizing the contact KKT matrix and utilizes proximal-point iterations to solve for systems with redundant constraints.
Independent efforts to extend the efficient ABA algorithm to internal kinematic closed loop constraints were realized in [33, 34]. With the loop-closure constraint being a more general constraint model than the simpler desired acceleration-relative-to-ground constraint model considered in the PV solver, these more general algorithms include the PV solver computations as a subset of their computations. These algorithms can be straightforwardly adapted to kinematic trees with acceleration-relative-to-ground constraints to obtain an algorithm virtually identical to the PV solver. The derivation in [33] relies heavily on the physical insight of the readers, while the derivation in [34] is relatively more formal by algebraically solving the d'Alembert's equations. [33] further proposed a form of early constraint elimination that provides \(O(m+n)\) complexity algorithm for certain kinematic mechanisms. Similar ideas were also used in an \(O(n+m)\) complexity Lagrange multiplier-free algorithm [35] for certain kinematic mechanisms based on Kane's equations formulation of mechanics [36]. However, our PV solver derivation approach is different, and we will discuss in detail the comparison with these algorithms in section IX-E. Moreover, we are not aware of any open-source implementation of [33, 34] or its computational comparison with the popular Featherstone's sparsity exploiting algorithms.
Another line of research for accelerating dynamics computations includes the divide-and-conquer type of algorithms that aim to exploit parallel computing [37, 38, 39, 40] achieving an \(O(\log(n))\) complexity provided that \(O(n)\) computational cores are used. These algorithms can be used to compute constrained dynamics by placing handles on the constrained bodies. The PV solver derived in section V can also be similarly interpreted
Fig. 1: Efficient computation of constrained dynamics by exploiting structure.
as an algorithm that computes the relative inertia of these handles. [41] presents a distributed algorithm specifically for computing the OSIM. Its comparison with this paper's algorithms is further discussed in section IX-E.
The efficient algorithms discussed so far have complex derivations; a third, simpler approach, pioneered in [10], involves constructing the KKT matrix in 'maximal' coordinates and solving it using a sparse linear solver. Despite having a favorable \(O(n+md+m^{3})\) complexity, Baraff's [10] algorithm does not exploit as much structure as possible (for example, it computes joint constraint forces, which are avoided in other methods) and requires joint constraint stabilization. It is generally not considered to be competitive with the recursive or sparse factorization methods mentioned above [1].
The PV solver derivation using our standard DP approach for LQR has the elegance and simplicity of Baraff's derivation, with a three-sweep structure that is analogous to forward simulation, backward DP recursion and rollout in LQR control, as shown in fig. 1b. We also found it to be more efficient than state-of-the-art algorithms, as we will show in the rest of this paper.
### _Contributions_
#### I-B1 Expository derivation of the original PV solver and extensions
We provide an expository derivation of the original PV solver by adapting the textbook approach for solving the LQR problem [2], highlighting its connection to constrained dynamics more clearly than in existing literature. We then derive extensions to the original PV solver to support: 1) floating-base robots, 2) constraints potentially on any link, and 3) kinematic trees, and show its computational complexity to be \(O(n+m^{2}d+m^{3})\).
#### I-B2 Connections to the OSIM
We show that the dual Hessian, that is computed as an intermediate step of the PV solver, is equal to the inverse OSIM. This connection is new in literature, to the best of our knowledge, and provides an efficient \(O(n+m^{2}d+m^{2})\) algorithm, that is as yet unexploited to compute the inverse OSIM. This algorithm is structurally different from the currently known \(O(n)\) family algorithms KRJ [28] and EFPA [30, 31], by requiring only two sweeps over the kinematic tree instead of three and is found to be more efficient in practice for most robots of interest despite having a worse complexity than the \(O(n+md+m^{2})\) EFP algorithm. We further accelerate OSIM computation for floating-base robots with branching structure at the base.
#### I-B3 \(O(n+m)\) algorithms
Building upon our expository PV solver treatment, we derive two efficient and new (to the best of our knowledge) constrained dynamics algorithms with only \(O(n+m)\) computational complexity. The first algorithm solves the so-called "soft Gauss principle" used in the popular robot dynamics simulator MuJoCo [42, 7], which relaxes the hard motion constraints with quadratic penalties. The second algorithm solves the original problem with hard motion constraints, by incorporating early elimination of Lagrange multipliers, thereby limiting their backward propagation, which provides the improved computational complexity.
#### I-B4 Benchmarking
Despite the PV solver and Brandl et al.'s [33] contributions being over thirty-five years old, their computational performance is untested against the state-of-the-art algorithms, that are currently recognized to be fast in literature. We provide a comprehensive benchmarking of the PV solver against Featherstone's sparsity-exploiting algorithms [32, 43] (currently used most widely in high-performance robot simulators including the Pinocchio and MuJoCo toolboxes), the lower-order EFPA [30, 31] algorithm as well as our \(O(n+m)\) extensions to the PV solver. These numerical results are new in literature to the best of our knowledge.
The source code of the solver is made available publicly 2.
Footnote 2: [https://github.com/AjSat/spatial_V2](https://github.com/AjSat/spatial_V2)
### _Organization_
We first discuss background material and preliminaries in section II and derive the constrained dynamics solver for a kinematic chain with a fixed-base and motion constraints only on the end-effector in section III. We then discuss the physical interpretation of the terms of this relatively simple algorithm and also show the equality of the dual Hessian of the constrained LQR problem and the inverse OSIM in section IV. Later, we generalize the derivation to the more complex case of floating-base robots with a kinematic tree structure and constraints on any link in section V. This separation of the PV solver derivation into two sections was made for clarity of exposition as it is easier to first follow the derivation for fixed-base kinematic chains before the generalization to trees. We then present an efficient extension of the PV solver to'soft' motion constraints in section VI. We expand upon the dual Hessian-OSIM connection in section VII and finish our derivations with a fast \(O(n+m)\) algorithm for the original problem with hard motion constraints in section VIII. Section IX presents algorithm benchmarking and discussions, and we make concluding remarks in section X.
## II Background
### _Notation and Convention_
Table I lists the notation used in this paper. Bold-faced lower case letters or symbols are vectors and upper case letters or symbols are matrices. \(A^{T}\) is the transpose of a matrix \(A\). \(I_{n\times n}\) and \(0_{n\times n}\) are the identity matrix and zero matrix of dimension \(n\times n\) respectively. The \(:=\) operator defines the left-side symbol with the right-side expression. The \(\leftarrow\) operator assigns the right-side expression to a left-side variable in an algorithm.
We use the popular Featherstone's spatial algebra notation [1] throughout the paper. For a robot's \(i\)th rigid body, \(X_{i}\in SE(3)\), \(\mathbf{v}_{i}\in\mathbb{R}^{6}\) and \(\mathbf{a}_{i}\in\mathbb{R}^{6}\) denote the spatial pose, velocity and acceleration respectively. \(SE(3)\) is the special Euclidian group in 3 dimensions represented as a \(6\times 6\) spatial transformation matrix. \(\mathbf{f}_{i}\in\mathbb{R}^{6}\) is the spatial force acting on the \(i\)-th body. For notational simplicity of the upcoming derivations, all motion/force vectors \(\mathbf{v}_{i}\), \(\mathbf{a}_{i}\) and \(\mathbf{f}_{i}\) are with respect to a common inertial frame. \(\times\) and \(\times^{*}\) are the spatial cross-product operators for motion vectors and force vectors respectively.
The whole robot's state is \((\mathbf{q_{p}},\dot{\mathbf{q}})\), where \(\mathbf{q_{p}}\in\mathcal{Q}\) is its pose in the configuration space \(\mathcal{Q}\), \(\dot{\mathbf{q}}\in\mathcal{T}_{q_{p}}\mathcal{Q}\simeq\mathbb{R}^{n}\) is its generalized velocity in \(\mathcal{Q}\)'s tangent space at \(\mathbf{q_{p}}\) and \(n\) is the robot's degrees of freedom (d.o.f). Let \(\boldsymbol{\tau}\in\mathcal{T}_{q_{p}}^{*}\mathcal{Q}\simeq\mathbb{R}^{n}\) be the generalized force acting on the robot in the dual tangent space of \(\mathcal{Q}\) and \(\ddot{\mathbf{q}}\in\mathbb{R}^{n}\) be \(\dot{\mathbf{q}}\)'s time derivative. This Lie algebraic notation allows a unified representation of floating-base robots and multi d.o.f joints where a singularity-free representation of position may require \(n_{p}\geq n\). For a fixed-base manipulator with single d.o.f joints, \(\mathbf{q_{p}}=\mathbf{q}\), \(\dot{\mathbf{q}}\), \(\ddot{\mathbf{q}}\) and \(\boldsymbol{\tau}\) are simply the joint positions, velocities, accelerations and torques.
### _Preliminaries_
We will now briefly summarize forward dynamics, inverse dynamics and constrained dynamics problems. Forward dynamics computes \(\ddot{\mathbf{q}}\), the accelerations that result from applying \(\boldsymbol{\tau}\) on a given robot at state \((\mathbf{q_{p}},\dot{\mathbf{q}})\), to simulate the robot state forward in time. Conversely, inverse dynamics computes the \(\boldsymbol{\tau}\) required to obtain a desired \(\ddot{\mathbf{q}}\) at state \((\mathbf{q_{p}},\dot{\mathbf{q}})\). Constrained dynamics is the forward dynamics problem with motion constraints in addition to joint constraints and will be formalized in the next paragraph. Inverse dynamics is, in general, easier to compute than forward dynamics, which is in turn significantly easier to compute than constrained dynamics.
Let the acceleration constraint on the \(i\)-th link be
\[K_{i}(\mathbf{q_{p}})\mathbf{a}_{i}=\mathbf{k}_{i}(\mathbf{q_{p}},\dot{ \mathbf{q}}), \tag{1}\]
with \(K_{i}\in\mathbb{R}^{m_{i}\times 6}\), \(\mathbf{k}_{i}\in\mathbb{R}^{m_{i}}\) and \(m_{i}\) the constraint dimensionality. Without loss of generality, we scale the constraints such that each row of \(K_{i}\) has unit norm. Both holonomic and non-holonomic motion constraints can be converted to this form by differentiation [11]. The acceleration constraints can be transformed to the generalized coordinates using
\[\mathbf{a}_{i}=J_{i}(\mathbf{q_{p}})\ddot{\mathbf{q}}+\dot{J}_{i}(\mathbf{q_{p}},\dot{\mathbf{q}})\dot{\mathbf{q}}, \tag{2}\]
where \(J_{i}(\mathbf{q_{p}})\in\mathbb{R}^{6\times n}\) is \(i\)th link's geometric Jacobian and \(\dot{J}_{i}(\mathbf{q_{p}},\dot{\mathbf{q}})\in\mathbb{R}^{6\times n}\) is its total time derivative. Substituting eq. (2) in eq. (1) and stacking all the links' constraints gives
\[J(\mathbf{q_{p}})\ddot{\mathbf{q}}+\dot{J}(\mathbf{q_{p}},\dot{\mathbf{q}})\dot{\mathbf{q}}=\mathbf{k}(\mathbf{q_{p}},\dot{\mathbf{q}}), \tag{3}\]
\[\text{where }J(\mathbf{q_{p}}):=\begin{bmatrix}K_{1}(\mathbf{q_{p}})J_{1}( \mathbf{q_{p}})\\ \vdots\\ K_{i}(\mathbf{q_{p}})J_{i}(\mathbf{q_{p}})\\ \vdots\\ K_{n}(\mathbf{q_{p}})J_{n}(\mathbf{q_{p}})\end{bmatrix}\in\mathbb{R}^{m\times n}\]
\[\dot{J}(\mathbf{q_{p}},\dot{\mathbf{q}}):=\begin{bmatrix}K_{1}\dot{J}_{1}(\mathbf{q_{p}},\dot{\mathbf{q}})\\ \vdots\\ K_{i}\dot{J}_{i}(\mathbf{q_{p}},\dot{\mathbf{q}})\\ \vdots\\ K_{n}\dot{J}_{n}(\mathbf{q_{p}},\dot{\mathbf{q}})\end{bmatrix}\in\mathbb{R}^{m\times n},\text{ and}\]
\(\mathbf{k}(\mathbf{q_{p}},\dot{\mathbf{q}}):=\begin{bmatrix}\mathbf{k}_{1}( \mathbf{q_{p}},\dot{\mathbf{q}})\\ \vdots\\ \mathbf{k}_{i}(\mathbf{q_{p}},\dot{\mathbf{q}})\\ \vdots\\ \mathbf{k}_{n}(\mathbf{q_{p}},\dot{\mathbf{q}})\end{bmatrix}\in\mathbb{R}^{m}\).
The constrained dynamics problem involves simultaneously solving eq. (3) and the linear system
\[M(\mathbf{q_{p}})\ddot{\mathbf{q}}+\mathbf{c}(\mathbf{q_{p}},\dot{\mathbf{q}})+J(\mathbf{q_{p}})^{T}\boldsymbol{\lambda}=\boldsymbol{\tau}, \tag{4}\]
for unknowns \(\ddot{\mathbf{q}}\) and \(\boldsymbol{\lambda}\), where \(M(\mathbf{q_{p}})\in\mathbb{R}^{n\times n}\), \(\mathbf{c}(\mathbf{q_{p}},\dot{\mathbf{q}})\in\mathbb{R}^{n}\) and \(\boldsymbol{\lambda}\in\mathbb{R}^{m}\) are the joint-space inertia matrix (JSIM), the generalized force due to Coriolis, centrifugal and gravity effects, and the Lagrange multipliers associated with the acceleration constraint respectively. Solving for \(\ddot{\mathbf{q}}\) in eq. (4) (which is always possible because \(M(\mathbf{q_{p}})\) is positive definite) and substituting in eq. (3) gives the operational-space form of constrained dynamics [27] (with term dependencies dropped for brevity from now on when it is clear from the context)
\[\Lambda^{-1}\boldsymbol{\lambda}=\dot{J}\dot{\mathbf{q}}-\mathbf{k}+JM^{-1}( \boldsymbol{\tau}-\mathbf{c}), \tag{5}\]
with \(\Lambda(\mathbf{q_{p}})^{-1}:=(J(\mathbf{q_{p}})(M(\mathbf{q_{p}}))^{-1}J( \mathbf{q_{p}})^{T})\in\mathbb{R}^{m\times m}\) and \(\Lambda(\mathbf{q_{p}})\) is the OSIM. The inverse OSIM \(\Lambda(\mathbf{q_{p}})^{-1}\) captures the inertial coupling between constraints, where the \(i\)-th column of \(\Lambda(\mathbf{q_{p}})^{-1}\) is the acceleration along all the constraint directions caused by \(\lambda_{i}=1\) (\(i\)-th constraint force with unit magnitude).
**Remark 1**: Since \(M(\mathbf{q_{p}})\) is a positive definite matrix, if \(J\) has full row-rank, \(\Lambda^{-1}\) has full rank, is invertible and \(\Lambda\) exists. Then, eq. (5) permits a unique solution for \(\boldsymbol{\lambda}\).
**Remark 2**: \(J\) may not have full row-rank in over-constrained systems, when constraints conflict with each other, or due to loss of \(J_{i}\)'s rank at kinematically singular configurations. Depending on the numerical values of the \(\mathbf{k}_{i}\)s, there then exists either no solution or an infinite number of solutions for \(\boldsymbol{\lambda}\).

| Symbol | Definition |
| --- | --- |
| \({}^{j}X_{i}\) | Spatial pose of the \(i\)-th link in the \(j\)-th link's frame. |
| \(\mathbf{v}_{i}\) | 6D spatial velocity of the \(i\)-th link. |
| \(\mathbf{a}_{i}\) | 6D spatial acceleration of the \(i\)-th link. |
| \(\mathbf{f}_{i}\) | 6D spatial force acting on the \(i\)-th link. |
| \(\mathbf{q_{p}}\) | Vector of robot joint positions. |
| \(\dot{\mathbf{q}}\) | Vector of robot joint velocities. |
| \(\ddot{\mathbf{q}}\) | Vector of robot joint accelerations. |
| \(\boldsymbol{\tau}\) | Vector of robot joint torques. |
| \(n\) | Degrees of freedom of the robot. |
| \(K_{i}\) | Acceleration constraint matrix on the \(i\)-th link. |
| \(\mathbf{k}_{i}\) | Desired constraint accelerations. |
| \(J_{i}\) | Geometric Jacobian of the \(i\)-th link. |
| \(\dot{J}_{i}\) | Time derivative of \(J_{i}\). |
| \(J\) | Joint-space constraint Jacobian. |
| \(\dot{J}\) | Time derivative of \(J\). |
| \(\mathbf{k}\) | Concatenation of all \(\mathbf{k}_{i}\). |
| \(m\) | Number of acceleration constraints on the robot. |
| \(M\) | Joint-space inertia matrix. |
| \(\mathbf{c}\) | Joint torques due to bias accelerations, forces and gravity. |
| \(\boldsymbol{\lambda}\) | Lagrange multipliers of constraints. |
| \(\Lambda\) | Operational-space inertia matrix. |
| \(L\) | Lower triangular matrix in the LTL decomposition [43]. |
| \(Y\) | Intermediate quantity in LTL-OSIM [43], see section II-C. |
| \(\pi(i)\) | Index of the \(i\)-th link's parent link. |
| \(\gamma(i)\) | Set of the \(i\)-th link's children links' indices. |
| \(S_{i}\) | Motion subspace of the \(i\)-th joint. |
| \(T_{i}\) | Force subspace of the \(i\)-th joint. |
| \(H_{i}\) | \(6\times 6\) spatial inertia tensor of the \(i\)-th link. |
| \(\mathbf{a}_{b,i}\) | \(i\)-th link's bias acceleration. |
| \(\mathcal{L}\) | The Lagrangian of the LQR problem. |
| \(V_{i}\) | Cost-to-go Lagrangian at the \(i\)-th link. |
| \(H_{i}^{A}\) | Articulated body inertia of the \(i\)-th link. |
| \(L_{i}^{A}\) | Constraints' inertial coupling due to the \(i\)-th and its descendant joints. |
| \(K_{i}^{A}\) | Constraint force propagated to the \(i\)-th link. |
| \(\mathbf{f}_{i}^{A}\) | Resultant force on the \(i\)-th link excluding constraint forces. |
| \(\mathbf{l}_{i}\) | Desired constraint accelerations propagated to the \(i\)-th link. |
| \(D_{i}\) | Apparent articulated body inertia along the \(i\)-th joint. |
| \(P_{i}\) | Backward force propagator through the \(i\)-th joint. |
| \(\mathbf{f}_{i}^{\mathrm{ext}}\) | Resultant external wrench acting on the \(i\)-th link. |
Typical strategies to address a singular \(\Lambda^{-1}\) include Tikhonov regularization, proximal-point iterations [6], computing the Moore-Penrose pseudo-inverse via the singular value decomposition (SVD), relaxing the constraints with weighted quadratic penalties [7], or employing prioritized conflict resolution [44]. Since a discussion of these different strategies is not the focus here, we assume that \(J\) has full row-rank in the rest of this paper.
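For concreteness, the following numpy sketch solves the joint-space constrained dynamics of eqs. (3)-(5) on randomly generated stand-in matrices. It is a minimal illustration, not a robot-model computation: all values are illustrative, and \(J\) is assumed to have full row-rank as above.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 7, 3                            # d.o.f. and constraint count (illustrative)
A = rng.standard_normal((n, n))
M = A @ A.T + n * np.eye(n)            # JSIM M(q_p), positive definite
J = rng.standard_normal((m, n))        # constraint Jacobian, full row-rank
Jd_qd = rng.standard_normal(m)         # the term \dot{J}(q_p, qdot) qdot
c = rng.standard_normal(n)             # Coriolis/centrifugal/gravity torques
k = rng.standard_normal(m)             # desired constraint accelerations
tau = rng.standard_normal(n)           # applied joint torques

# Eq. (5): Lambda^{-1} lambda = Jdot*qdot - k + J M^{-1} (tau - c).
Lambda_inv = J @ np.linalg.solve(M, J.T)
lam = np.linalg.solve(Lambda_inv, Jd_qd - k + J @ np.linalg.solve(M, tau - c))

# Back-substitute in eq. (4) for the joint accelerations.
qdd = np.linalg.solve(M, tau - c - J.T @ lam)

# The solution satisfies the acceleration constraint eq. (3).
assert np.allclose(J @ qdd + Jd_qd, k)
```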
### _Featherstone's LTL algorithms_
We now review Featherstone's sparsity-exploiting algorithms and introduce terms that will be benchmarked in section IX. The LTL algorithm [43] is a Cholesky decomposition for the JSIM
\[L^{T}L=M, \tag{6}\]
where \(L\in\mathbb{R}^{n\times n}\) is a lower triangular matrix. In contrast to the traditional LLT Cholesky algorithm [45], the LTL method ensures no fill-in (preserves the sparsity pattern of \(M\) in \(L\)) even without resorting to pivoting methods that choose an elimination ordering. The idea was extended in the LTL-OSIM algorithm [32] to compute the OSIM for kinematic trees, where the sparsity pattern of \(J\) is also exploited
\[Y=JL^{-1}, \tag{7}\]
where \(Y\in\mathbb{R}^{m\times n}\) also has the same sparsity pattern as \(J\) and
\[\Lambda^{-1}=YY^{T}. \tag{8}\]
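The algebra of eqs. (6)-(8) can be illustrated on a dense random system, as in the sketch below. For simplicity it uses numpy's standard LLT Cholesky factor and a scipy triangular solve instead of the sparsity-preserving LTL factor; the resulting \(\Lambda^{-1}\) is identical, only the exploited sparsity differs, and all matrices are stand-ins.

```python
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(1)
n, m = 7, 3
A = rng.standard_normal((n, n))
M = A @ A.T + n * np.eye(n)        # JSIM stand-in
J = rng.standard_normal((m, n))    # constraint Jacobian stand-in

# numpy provides the LLT factor M = C C^T; the LTL variant (M = L^T L)
# exists to preserve the branching-induced sparsity of M, but the OSIM
# algebra of eqs. (7)-(8) is the same for either triangular factor.
C = np.linalg.cholesky(M)

# Analogue of eq. (7): one triangular solve instead of a full inverse.
Z = solve_triangular(C, J.T, lower=True)    # Z = C^{-1} J^T

# Analogue of eq. (8): Lambda^{-1} = Z^T Z = J M^{-1} J^T.
Lambda_inv = Z.T @ Z
assert np.allclose(Lambda_inv, J @ np.linalg.solve(M, J.T))
```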
### _Forward kinematics_
Let a kinematic tree have \(n\) links indexed from \(1\) to \(n\). The world link (assumed to be a fixed inertial frame) is assigned the \(0\) index. The \(i\)-th joint connects the \(i\)-th link to its parent link \(\pi(i)\). The world link is the tree's root and does not have a parent link. For floating-base robots, such as quadrupeds, a chosen link \(b\) (usually the torso) is connected to the world link through a free joint. \(\gamma(i)\) is the set of the \(i\)-th link's children. A link \(j\) is a leaf link if \(\gamma(j)=\varnothing\).
The spatial poses, velocities and accelerations of all links in the tree can be computed recursively in a forward sweep starting from the root (world link) using
\[X_{j}=X_{\pi(j)}\,{}^{\pi(j)}X_{j^{\prime}}\,{}^{j^{\prime}}X_{j}, \tag{9}\]
\[\mathbf{v}_{j}=\mathbf{v}_{\pi(j)}+S_{j}\dot{\mathbf{q}}_{j}, \tag{10}\]
\[\mathbf{a}_{j}=\mathbf{a}_{\pi(j)}+S_{j}\ddot{\mathbf{q}}_{j}+\mathbf{v}_{j}\times S_{j}\dot{\mathbf{q}}_{j}, \tag{11}\]
where \({}^{\pi(j)}X_{j^{\prime}}\) is the \(j\)-th link's pose in its parent link's frame when the \(j\)-th joint is at its home pose (usually computed from the robot URDF file or the DH parameters) and \({}^{j^{\prime}}X_{j}\) is the spatial transformation due to the \(j\)-th joint's displacement. \(S_{j}\in\mathbb{R}^{6\times n_{j}}\) is the \(j\)-th joint's motion subspace, where \(n_{j}\) is the joint's d.o.f. (usually \(1\)). \(S_{j}\dot{\mathbf{q}}_{j}\) is the \(j\)-th joint's contribution to \(\mathbf{v}_{j}\). Let \(T_{j}\in\mathbb{R}^{6\times n_{j}}\) be the \(j\)-th joint's force subspace, such that \(T_{j}\boldsymbol{\tau}_{j}\) is the joint's contribution to \(\mathbf{f}_{j}\).
**Remark 3**: The force subspace \(T_{j}\) is the dual of the motion subspace \(S_{j}\), hence \(S_{j}^{T}T_{j}=\mathbf{1}_{n_{j}\times n_{j}}\)[1, eq. 3.39].
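A minimal sketch of the velocity and acceleration propagation in eqs. (10)-(11) for a serial chain is given below. For brevity all quantities are kept in one shared frame (a full implementation would also apply the pose transforms of eq. (9)); the motion subspaces and joint rates are random stand-ins, and the spatial cross-product operator follows the angular-first 6D convention.

```python
import numpy as np

def skew(u):
    """3x3 matrix such that skew(u) @ x equals the cross product u x x."""
    return np.array([[0.0, -u[2], u[1]],
                     [u[2], 0.0, -u[0]],
                     [-u[1], u[0], 0.0]])

def crm(v):
    """6D spatial-motion cross-product operator, angular components first."""
    w, vl = v[:3], v[3:]
    return np.block([[skew(w), np.zeros((3, 3))],
                     [skew(vl), skew(w)]])

rng = np.random.default_rng(2)
n = 5                                            # chain length (illustrative)
S = [rng.standard_normal(6) for _ in range(n)]   # joint motion subspaces S_j
qd = rng.standard_normal(n)                      # joint velocities
qdd = rng.standard_normal(n)                     # joint accelerations

v, a = np.zeros(6), np.zeros(6)                  # world link: v_0 = 0, a_0 = 0
for j in range(n):
    v = v + S[j] * qd[j]                              # eq. (10)
    a = a + S[j] * qdd[j] + crm(v) @ (S[j] * qd[j])   # eq. (11)
print("end-effector spatial velocity:", v)
print("end-effector spatial acceleration:", a)
```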
### _Gauss' Principle_
Gauss' principle of least constraint [18] (GPLC) is an optimization-based formulation of classical mechanics, which is not as well known or widely used as the Lagrangian formulation. Refer to [46] for a detailed discussion on Gauss' principle, according to which a constrained system under the influence of forces undergoes accelerations that are as close as possible (in a weighted least-squares sense) to the unconstrained motion of the system under the same non-constraint forces. For a system of rigid bodies with spatial inertia tensor \(H_{i}\in\mathbb{R}^{6\times 6}\) for the \(i\)-th link, under the external forces \(\mathbf{f}_{i}\), which include the bias forces \(\mathbf{v}_{i}\times^{*}H_{i}\mathbf{v}_{i}\), the resulting accelerations \(\mathbf{a}_{i}\) are the minimizers of the following optimization problem [47].
\[\underset{\mathbf{a}_{1},\dots,\mathbf{a}_{n}}{\text{\bf minimize}} \sum_{i=1}^{n}\frac{1}{2}(\mathbf{a}_{i}-H_{i}^{-1}\mathbf{f}_{i})^ {T}H_{i}(\mathbf{a}_{i}-H_{i}^{-1}\mathbf{f}_{i}), \tag{12a}\] \[\text{\bf subject to} \mathrm{motion\ constraints}. \tag{12b}\]
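For the smallest possible instance of eq. (12), the sketch below projects a single rigid body's unconstrained acceleration \(H^{-1}\mathbf{f}\) onto an acceleration constraint by solving the KKT system of the weighted least-squares problem directly; all values are random stand-ins rather than physical quantities.

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((6, 6))
H = B @ B.T + 6 * np.eye(6)        # spatial inertia stand-in (SPD)
f = rng.standard_normal(6)         # resultant non-constraint force
K = rng.standard_normal((2, 6))    # acceleration constraint K a = k
k = rng.standard_normal(2)

a_free = np.linalg.solve(H, f)     # unconstrained acceleration H^{-1} f

# KKT system of: min 0.5 (a - a_free)^T H (a - a_free)  s.t.  K a = k.
KKT = np.block([[H, K.T], [K, np.zeros((2, 2))]])
sol = np.linalg.solve(KKT, np.concatenate([H @ a_free, k]))
a, lam = sol[:6], sol[6:]

assert np.allclose(K @ a, k)       # constrained acceleration is feasible
```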
### _Dynamic Programming Principle_
Dynamic programming (DP) [19] is a general theoretical framework for optimizing a function through a series of nested optimizations over the decision variables in some order. DP's efficiency can crucially depend on the variable elimination order. Each DP step optimizes over one decision variable and returns a function of the remaining ones, so its implementation is intractable unless the intermediate functions can be efficiently parameterized. The discrete-time linear quadratic regulator (LQR) problem is one such exception, where all the intermediate functions have the quadratic form. Fortunately, for kinematic tree mechanisms, Gauss' principle is algebraically identical to the discrete-time LQR problem with scenario trees and can be solved efficiently using DP. This robot dynamics-LQR connection forms the basis of the derivations in this paper.
## III Derivation of the constrained dynamics solver
In this section we derive the PV solver for fixed-base kinematic chains with end-effector motion constraints. We first formulate the optimization problem in section III-A, then derive its solution using DP in section III-B.
### _Problem formulation_
Consider a kinematic chain with the links indexed such that \(\pi(i)=i-1\), with \(0\)-th link being the world link. The GPLC optimization problem eq. (12) for this chain is
\[\underset{\mathbf{a}_{1},\dots,\mathbf{a}_{n},\ddot{\mathbf{q}}}{\text{\bf minimize}} \sum_{i=1}^{n}\frac{1}{2}(\mathbf{a}_{i}-H_{i}^{-1}\mathbf{f}_{i})^{T}H_{i}(\mathbf{a}_{i}-H_{i}^{-1}\mathbf{f}_{i}), \tag{13a}\] \[\text{\bf subject to} \mathbf{a}_{i}=\mathbf{a}_{i-1}+S_{i}\ddot{\mathbf{q}}_{i}+\mathbf{a}_{b,i},\ i=1,2,...,n,\] (13b) \[K_{n}\mathbf{a}_{n}=\mathbf{k}_{n},\quad\mathbf{a}_{0}=-\mathbf{a}_{\mathrm{grav}}, \tag{13c}\]

where eq. (13b) implicitly encodes the joint motion constraints using eq. (11), \(\mathbf{a}_{b,i}:=\mathbf{v}_{i}\times S_{i}\dot{\mathbf{q}}_{i}\) is the bias acceleration, eq. (13c) encodes the end-effector constraint (a common pattern, e.g. when the end-effector is wiping a table) and the fixed-base constraint, and \(\mathbf{a}_{\mathrm{grav}}\) is the acceleration-due-to-gravity vector. The reason for setting \(\mathbf{a}_{0}\) to \(-\mathbf{a}_{\mathrm{grav}}\) will be explained in section III-B2. The parameters in the problem such as \(H_{i}\), \(\mathbf{f}_{i}\), \(\mathbf{a}_{b,i}\) and \(S_{i}\) are computed using the inputs to the problem, namely \(\mathbf{q_{p}}\), \(\dot{\mathbf{q}}\), \(\boldsymbol{\tau}\) and the robot model.
The problem in eq. (13) is algebraically identical to the discrete-time LQR problem: the forward propagation of link acceleration along the kinematic chain (see eq. (13b)) is analogous to the LQR's forward state propagation in time, with \(\mathbf{a}_{i}\) and \(\mathbf{\ddot{q}}_{i}\) corresponding to the LQR's states and controls respectively.
**Remark 4**: Either the \(\mathbf{a}_{i}\)s or \(\ddot{\mathbf{q}}\) can be considered the _free_ variables in eq. (13), as one can be computed from the other using eq. (13b) because \(S_{i}\) always has full rank [34].
**Remark 5**: The inertia tensor \(H_{i}\) is positive definite for all links, therefore eq. (13) is a strongly convex quadratic program (QP) with a unique solution, when feasible.
Conflicting constraints or unachievable desired accelerations at configuration \(\mathbf{q}_{p}\) can make the QP infeasible.
### _Dynamic programming solution_
We now solve the optimization problem in eq. (13) using DP by following the textbook LQR derivation [2, Chapter 1]. The recurrence relation constraints in eq. (13b) and the \(\mathbf{a}_{0}=-\mathbf{a}_{\mathrm{grav}}\) constraint will be eliminated via substitution. However, unlike the textbook version, eq. (13) has a hard 'terminal' constraint (due to the end-effector constraint) which cannot be similarly eliminated via substitution. Therefore, we adapt the textbook derivation to instead solve for the primal-dual saddle point of the QP's Lagrangian, which includes only the end-effector motion constraint, as the joint and fixed-base constraints are eliminated through substitution
\[\mathcal{L}(\mathbf{\ddot{q}},\boldsymbol{\lambda}):=\sum_{i=1}^{n}\frac{1}{2 }(\mathbf{a}_{i}-H_{i}^{-1}\mathbf{f}_{i})^{T}H_{i}(\mathbf{a}_{i}-H_{i}^{-1} \mathbf{f}_{i})+ \tag{14}\]
\[\boldsymbol{\lambda}^{T}(K_{n}\mathbf{a}_{n}-\mathbf{k}_{n}).\]
We define the "cost-to-go Lagrangian" as the tail problem consisting of the Lagrangian terms corresponding to the \(i\)-th link and its descendants
\[V_{i}(\mathbf{a}_{i-1},\mathbf{\ddot{q}}_{i},...,\mathbf{\ddot{ q}}_{n},\boldsymbol{\lambda}):=\] \[\sum_{j=i}^{n}\frac{1}{2}(\mathbf{a}_{j}-H_{j}^{-1}\mathbf{f}_{j })^{T}H_{j}(\mathbf{a}_{j}-H_{j}^{-1}\mathbf{f}_{j})+\boldsymbol{\lambda}^{T} (K_{n}\mathbf{a}_{n}-\mathbf{k}_{n}).\]
Due to its additive structure, the cost-to-go Lagrangian follows the recurrence relation (after simplifying the quadratic objective and grouping the constant terms)
\[V_{i}(\mathbf{a}_{i-1},\mathbf{\ddot{q}}_{i},...,\mathbf{\ddot{ q}}_{n},\boldsymbol{\lambda})=\frac{1}{2}\mathbf{a}_{i}^{T}H_{i}\mathbf{a}_{i}- \mathbf{f}_{i}^{T}\mathbf{a}_{i}+\] \[V_{i+1}(\mathbf{a}_{i},\mathbf{\ddot{q}}_{i+1},...,\mathbf{ \ddot{q}}_{n},\boldsymbol{\lambda})+\mathrm{constant}.\]
When convenient, we will drop constant terms from now on for brevity. Bellman's recurrence relation [19] for the optimal cost-to-go Lagrangian is
\[V_{i}^{*}(\mathbf{a}_{i-1},\boldsymbol{\lambda})=\underset{\mathbf{\ddot{q}}_ {i}}{\mathbf{min}}\{\frac{1}{2}\mathbf{a}_{i}^{T}H_{i}\mathbf{a}_{i}-\mathbf{ f}_{i}^{T}\mathbf{a}_{i}+V_{i+1}^{*}(\mathbf{a}_{i},\boldsymbol{\lambda})\}. \tag{15}\]
Optimizing the cost-to-go Lagrangian at the end-effector
\[V_{n}(\mathbf{a}_{n-1},\mathbf{\ddot{q}}_{n},\boldsymbol{\lambda})=\frac{1}{2 }\mathbf{a}_{n}^{T}H_{n}\mathbf{a}_{n}-\mathbf{f}_{n}^{T}\mathbf{a}_{n}+ \boldsymbol{\lambda}^{T}(K_{n}\mathbf{a}_{n}-\mathbf{k}_{n}), \tag{16}\]
over \(\mathbf{\ddot{q}}_{n}\) gives \(V_{n}^{*}(\mathbf{a}_{n-1},\boldsymbol{\lambda})\). To do this, we first substitute \(\mathbf{a}_{n}\) with the acceleration recursion equation in eq. (13b)
\[V_{n}(\mathbf{a}_{n-1},\mathbf{\ddot{q}}_{n},\boldsymbol{\lambda})=\] \[\frac{1}{2}(\mathbf{a}_{n-1}+S_{n}\mathbf{\ddot{q}}_{n}+\mathbf{a }_{b,n})^{T}H_{n}(\mathbf{a}_{n-1}+S_{n}\mathbf{\ddot{q}}_{n}+\mathbf{a}_{b,n})-\] \[\mathbf{f}_{n}^{T}(\mathbf{a}_{n-1}+S_{n}\mathbf{\ddot{q}}_{n}+ \mathbf{a}_{b,n})+\] \[\boldsymbol{\lambda}^{T}(K_{n}(\mathbf{a}_{n-1}+S_{n}\mathbf{ \ddot{q}}_{n}+\mathbf{a}_{b,n})-\mathbf{k}_{n}). \tag{17}\]
Then we collect the linear-quadratic terms in \(\mathbf{\ddot{q}}_{n}\) and solve for the optimal \(\mathbf{\ddot{q}}_{n}^{*}\), where the quadratic function's gradient is zero
\[\mathbf{\ddot{q}}_{n}^{*}=(S_{n}^{T}H_{n}S_{n})^{-1}S_{n}^{T}\{\mathbf{f}_{n}- H_{n}(\mathbf{a}_{n-1}+\mathbf{a}_{b,n})-K_{n}^{T}\boldsymbol{\lambda}\},\]
substituting which back in eq. (17) provides \(V_{n}^{*}(\mathbf{a}_{n-1},\boldsymbol{\lambda})\), which remains a quadratic form in \(\mathbf{a}_{n-1}\) and \(\boldsymbol{\lambda}\). Therefore, let us hypothesize that \(V_{i}^{*}(\mathbf{a}_{i-1},\boldsymbol{\lambda})\) is the minimum of the following quadratic form
\[V_{i}^{*}(\mathbf{a}_{i-1},\boldsymbol{\lambda})=\underset{ \mathbf{\ddot{q}}_{i}}{\mathbf{min}}\{\frac{1}{2}\mathbf{a}_{i}^{T}H_{i}^{A} \mathbf{a}_{i}-\frac{1}{2}\boldsymbol{\lambda}^{T}L_{i}^{A}\boldsymbol{ \lambda}+ \tag{18a}\] \[\boldsymbol{\lambda}^{T}K_{i}^{A}\mathbf{a}_{i}-\mathbf{f}_{i}^{ AT}\mathbf{a}_{i}+\mathbf{l}_{i}^{T}\boldsymbol{\lambda}\}+\mathrm{constant}\] \[=\underset{\mathbf{\ddot{q}}_{i}}{\mathbf{min}}\{\frac{1}{2}( \mathbf{a}_{i-1}+S_{i}\mathbf{\ddot{q}}_{i}+\mathbf{a}_{b,i})^{T}H_{i}^{A}( \mathbf{a}_{i-1}+S_{i}\mathbf{\ddot{q}}_{i}+\mathbf{a}_{b,i})-\] \[\frac{1}{2}\boldsymbol{\lambda}^{T}L_{i}^{A}\boldsymbol{\lambda} +\boldsymbol{\lambda}^{T}K_{i}^{A}(\mathbf{a}_{i-1}+S_{i}\mathbf{\ddot{q}}_{i}+ \mathbf{a}_{b,i})-\] (18b) \[\mathbf{f}_{i}^{AT}(\mathbf{a}_{i-1}+S_{i}\mathbf{\ddot{q}}_{i}+ \mathbf{a}_{b,i})+\mathbf{l}_{i}^{T}\boldsymbol{\lambda}\}+\mathrm{constant}.\]
where eq. (18b) is obtained by substituting eq. (13b) in eq. (18a). Optimizing eq. (18b) over \(\mathbf{\ddot{q}}_{i}\) by setting the objective function's gradient to zero gives
\[\mathbf{\ddot{q}}_{i}^{*}=D_{i}^{-1}S_{i}^{T}\{\mathbf{f}_{i}^{A}-H_{i}^{A}( \mathbf{a}_{i-1}+\mathbf{a}_{b,i})-K_{i}^{AT}\boldsymbol{\lambda}\}, \tag{19}\]
where \(D_{i}^{-1}:=(S_{i}^{T}H_{i}^{A}S_{i})^{-1}\in\mathbb{R}^{n_{i}\times n_{i}}\) exists because \(S_{i}\) always has full column rank [34] and \(H_{i}^{A}\) (which we will show to be the articulated body inertia matrix) is positive definite. Back-substituting \(\mathbf{\ddot{q}}_{i}^{*}\) from eq. (19) in eq. (18b) gives \(V_{i}^{*}(\mathbf{a}_{i-1},\boldsymbol{\lambda})\), substituting which in the Bellman recurrence relation eq. (15) for \(V_{i-1}^{*}(\mathbf{a}_{i-2},\boldsymbol{\lambda})\) gives the following recursive formulae for the hypothesized quadratic form in eq. (18a),
\[H_{i-1}^{A}=H_{i-1}+P_{i}H_{i}^{A}, \tag{20a}\]
\[\mathbf{f}_{i-1}^{A}=\mathbf{f}_{i-1}+P_{i}(\mathbf{f}_{i}^{A}-H_{i}^{A}\mathbf{a}_{b,i}), \tag{20b}\]
\[K_{i-1}^{A}=K_{i}^{A}P_{i}^{T}, \tag{20c}\]
\[\mathbf{l}_{i-1}=\mathbf{l}_{i}+K_{i}^{A}\{\mathbf{a}_{b,i}+S_{i}D_{i}^{-1}S_{i}^{T}(\mathbf{f}_{i}^{A}-H_{i}^{A}\mathbf{a}_{b,i})\}, \tag{20d}\]
\[L_{i-1}^{A}=L_{i}^{A}+K_{i}^{A}S_{i}D_{i}^{-1}S_{i}^{T}K_{i}^{AT}, \tag{20e}\]
where \(P_{i}:=\mathbf{1}_{6\times 6}-H_{i}^{A}S_{i}D_{i}^{-1}S_{i}^{T}\) is the force propagator matrix, and the base case \(H_{n}^{A}=H_{n}\), \(\mathbf{f}_{n}^{A}=\mathbf{f}_{n}\), \(K_{n}^{A}=K_{n}\), \(\mathbf{l}_{n}=-\mathbf{k}_{n}\), \(L_{n}^{A}=\mathbf{0}_{m\times m}\) follows from comparing eq. (16) with eq. (18a).
Performing backward recursion until the root link yields \(V_{1}^{*}(\mathbf{a}_{0},\boldsymbol{\lambda})\)'s expression, where the known value of \(\mathbf{a}_{0}=-\mathbf{a}_{\mathrm{grav}}\) is directly substituted, thereby eliminating all the primal variables of the Lagrangian to obtain the dual function
\[V_{0}^{*}(\boldsymbol{\lambda})=-\frac{1}{2}\boldsymbol{\lambda}^{T}L_{0}^{A} \boldsymbol{\lambda}+\boldsymbol{\lambda}^{T}(\mathbf{l}_{0}+K_{0}^{A} \mathbf{a}_{0}). \tag{21}\]
Assuming that \(L_{0}^{A}\) has full rank, the dual function has the unique maximizer
\[\boldsymbol{\lambda}^{*}=(L_{0}^{A})^{-1}(\mathbf{l}_{0}+K_{0}^{A}\mathbf{a}_{ 0}). \tag{22}\]
The numerical value of \(\boldsymbol{\lambda}^{*}\) computed above enables rolling out the "control policy" in a forward sweep to compute the optimal joint accelerations \(\mathbf{\ddot{q}}_{i}^{*}\)s using eq. (19) and eq. (13b).
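The sketch below runs this backward sweep (eq. (20)), the dual solve (eq. (22)) and the forward rollout (eq. (19)) on an abstract fixed-base chain with 1-d.o.f. joints. It is a sketch of the algebra only: the \(H_{i}\), \(S_{i}\), \(\mathbf{f}_{i}\) and \(\mathbf{a}_{b,i}\) are random stand-ins expressed in one shared frame (a real implementation would additionally apply the spatial transforms of eq. (9)), and the computed solution is checked against the end-effector constraint.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 5, 2                                # links and constraint rows
H, S, f, ab = [], [], [], []
for i in range(n):
    B = rng.standard_normal((6, 6))
    H.append(B @ B.T + 6 * np.eye(6))      # H_i, positive definite
    S.append(rng.standard_normal((6, 1)))  # S_i (1-d.o.f. joints)
    f.append(rng.standard_normal(6))       # resultant non-constraint forces
    ab.append(rng.standard_normal(6))      # bias accelerations a_{b,i}
Kn = rng.standard_normal((m, 6))           # end-effector constraint K_n a_n = k_n
kn = rng.standard_normal(m)
a0 = np.zeros(6)                           # fixed base, gravity ignored here

# Backward sweep, eq. (20), starting from the end-effector base case.
HA, fA, KA, lA, LA = H[-1], f[-1], Kn, -kn, np.zeros((m, m))
D, P, HAs, fAs, KAs = [None]*n, [None]*n, [None]*n, [None]*n, [None]*n
for i in reversed(range(n)):
    D[i] = S[i].T @ HA @ S[i]                                # apparent inertia
    P[i] = np.eye(6) - HA @ S[i] @ np.linalg.solve(D[i], S[i].T)
    HAs[i], fAs[i], KAs[i] = HA, fA, KA                      # stash for rollout
    bias = fA - HA @ ab[i]
    lA = lA + KA @ (ab[i] + S[i] @ np.linalg.solve(D[i], S[i].T @ bias))  # (20d)
    LA = LA + KA @ S[i] @ np.linalg.solve(D[i], S[i].T @ KA.T)            # (20e)
    KA = KA @ P[i].T                                                      # (20c)
    if i > 0:
        fA = f[i-1] + P[i] @ bias                                         # (20b)
        HA = H[i-1] + P[i] @ HA                                           # (20a)

lam = np.linalg.solve(LA, lA + KA @ a0)    # dual solve, eq. (22)

# Forward rollout, eq. (19) and eq. (13b).
a = a0
for i in range(n):
    rhs = S[i].T @ (fAs[i] - HAs[i] @ (a + ab[i]) - KAs[i].T @ lam)
    qdd_i = np.linalg.solve(D[i], rhs).item()                # eq. (19)
    a = a + S[i].ravel() * qdd_i + ab[i]                     # eq. (13b)

assert np.allclose(Kn @ a, kn)             # end-effector constraint satisfied
```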
#### III-B1 Details on \(\mathbf{f}_{i}\)
\(\mathbf{f}_{i}\) is the resultant of all the non-constraint forces acting on the \(i\)-th link, namely the force due to the \(i\)-th joint torque \(\boldsymbol{\tau}_{i}\), the bias forces, the reaction force from \(\boldsymbol{\tau}_{i+1}\) and all the other external forces
\[\mathbf{f}_{i}=T_{i}\boldsymbol{\tau}_{i}-\mathbf{v}_{i}\times^{*}H_{i} \mathbf{v}_{i}-T_{i+1}\boldsymbol{\tau}_{i+1}+\mathbf{f}_{i}^{\mathrm{ext}}. \tag{23}\]
Note: the total reaction force on the \(i\)-th link due to \(\boldsymbol{\tau}_{i+1}\) must also include the backward propagation, via eq. (20b), of the force \(T_{i+1}\boldsymbol{\tau}_{i+1}\) acting on the \(i+1\)-th link, in addition to the immediate reaction force \(-T_{i+1}\boldsymbol{\tau}_{i+1}\) (the last equality below follows from Remark 3),

\[-T_{i+1}\boldsymbol{\tau}_{i+1}+P_{i+1}T_{i+1}\boldsymbol{\tau}_{i+1}=(P_{i+1}-\mathbf{1}_{6\times 6})T_{i+1}\boldsymbol{\tau}_{i+1}=-H_{i+1}^{A}S_{i+1}D_{i+1}^{-1}\boldsymbol{\tau}_{i+1}, \tag{24}\]
which agrees with the known result on the backward reaction forces applied by joint actuators [1, eq. 7.20].
#### III-B2 Including the effect of gravity
The straightforward approach to account for gravity is to include each link's weight in eq. (23), but a more efficient and commonly used trick [48] is to add a gravity field by setting \(\mathbf{a}_{0}\leftarrow-\mathbf{a}_{\mathrm{grav}}\). Then \(\mathbf{a}_{i}=-\mathbf{a}_{\mathrm{grav}}\) if the \(i\)-th link is in equilibrium and \(\mathbf{a}_{i}=0\) if it is in free fall. This addition of gravitational acceleration to each link's acceleration must also be reflected in the acceleration constraints through the update

\[\mathbf{k}_{n}\leftarrow\mathbf{k}_{n}-K_{n}\mathbf{a}_{\mathrm{grav}}.\]
## IV Physical interpretation
We will now provide the physical interpretation of the backward recursion in eq. (20). This section may be involved for readers not familiar with the existing propagation-based constrained dynamics literature and can be skipped or skimmed on a first read. \(P_{i}\) is the projection matrix that propagates \(\mathbf{f}_{i}\) through the \(i\)-th joint to the \(i-1\)-th link after removing the component that causes the \(i\)-th joint's motion. It is used in eq. (20b) to propagate the forces backwards in the chain. \(P_{i}\) also propagates the inertia of the descendant links through the \(i\)-th joint in eq. (20a), to compute the well known articulated body inertia \(H_{i}^{A}\). Suppose that the \(i\)-th link were disconnected from its parent link but remained connected to its descendant links; \(H_{i}^{A}\) would be this link's apparent inertia including the influence of all the descendant links. \(D_{i}\) is the apparent inertia of the \(i\)-th link along the \(i\)-th joint, obtained by projecting \(H_{i}^{A}\) onto the \(i\)-th joint's motion subspace \(S_{i}\).
In the absence of end-effector constraints, only eq. (20a) and eq. (20b) need to be computed during the backward recursion and these two formulae are identical to the inertia and force propagation equations in Featherstone's well known articulated body algorithm (ABA) [16], which remains the fastest algorithm to compute unconstrained forward dynamics [1]. The PV solver reduces to ABA in the unconstrained setting and an unconstrained LQR-based derivation would essentially be an alternate derivation for the ABA algorithm.
Each row of \(K_{n}\) is the unit spatial force exerted by the end-effector due to the associated constraint, whose magnitude (the unknown Lagrange multipliers) must be solved for. These unit constraint forces are propagated backwards in the chain similarly to the non-constraint forces using the force propagator matrix \(P_{i}\) in eq. (20c). Therefore, \(-K_{i}^{AT}\boldsymbol{\lambda}\) is the force felt at the \(i\)-th link due to end-effector constraint forces.
Substituting the solution for joint accelerations from eq. (19) into the acceleration recurrence relation in eq. (13b) gives
\[\mathbf{a}_{i}=P_{i}^{T}(\mathbf{a}_{i-1}+\mathbf{a}_{b,i})+S_{i}D_{i}^{-1}S_{i }^{T}(\mathbf{f}_{i}^{A}-K_{i}^{AT}\boldsymbol{\lambda}), \tag{25}\]
where \(P_{i}^{T}\) is the projection operation that propagates \(\mathbf{a}_{i-1}\) to child link \(i\), after removing \(\mathbf{a}_{i-1}\)'s acceleration component along \(S_{i}\). This reveals an interesting symmetric relationship between the forward acceleration propagator \(P_{i}^{T}\) and the backward force propagator \(P_{i}\) about the \(i\)-th joint, previously noted in [49]. Let us compose the force propagators to define the extended force propagator [31]
\[P_{i}^{n}:=P_{i}P_{i+1}...P_{n},\qquad\mathrm{and}\quad P_{n+1}^{n}:=\mathbf{1 }_{6\times 6} \tag{26}\]
that directly propagates end-effector forces to the \(i-1\)-th link. Due to the symmetric relationship, \(P_{i}^{nT}\) propagates accelerations from the \(i-1\)-th link to the end-effector directly. Repeated substitution of eq. (19) for all joints in the acceleration recurrence relation eq. (13b) gives
\[\mathbf{a}_{n} =P_{1}^{nT}\mathbf{a}_{0}+\sum_{i=1}^{n}P_{i}^{nT}\mathbf{a}_{b,i}+ \tag{27}\] \[\sum_{i=1}^{n}\{P_{i+1}^{nT}S_{i}D_{i}^{-1}S_{i}^{T}(\mathbf{f} _{i}^{A}-K_{i}^{AT}\boldsymbol{\lambda})\}.\]
From the constraint propagation equations in eq. (20c), one can easily verify that
\[K_{i}^{A}=K_{n}P_{i+1}^{nT}. \tag{28}\]
We remind readers that the end-effector acceleration constraint is \(K_{n}\mathbf{a}_{n}+\mathbf{l}_{n}=0\). Let us call \(K_{n}\mathbf{a}_{n}\), _constraint acceleration_ (because it is the end-effector acceleration along the constrained direction) and \(-\mathbf{l}_{n}\) the desired constraint acceleration. Substituting \(\mathbf{a}_{n}\) from eq. (27) in the acceleration constraint equation and simplifying using eq. (28) gives
\[K_{n}\mathbf{a}_{n}+\mathbf{l}_{n}=K_{0}^{A}\mathbf{a}_{0}+\sum_{i=1}^{n}K_{i}^{A}P_{i}^{T}\mathbf{a}_{b,i}+\sum_{i=1}^{n}\{K_{i}^{A}S_{i}D_{i}^{-1}S_{i}^{T}(\mathbf{f}_{i}^{A}-K_{i}^{AT}\boldsymbol{\lambda})\}+\mathbf{l}_{n}=0. \tag{29}\]
\(K_{0}^{A}\mathbf{a}_{0}\) is the constraint acceleration due to the known fixed-base acceleration. Collecting the terms not containing the
unknown \(\boldsymbol{\lambda}\) in the previous equation and comparing with the backward recursion in eq. (20d), one can verify that
\[\mathbf{l}_{i-1}=\sum_{k=i}^{n}\{K_{k}^{A}P_{k}^{T}\mathbf{a}_{b,k}+K_{k}^{A}S_{k}D_{k}^{-1}S_{k}^{T}\mathbf{f}_{k}^{A}\}+\mathbf{l}_{n}, \tag{30}\]
which recursively computes the constraint acceleration caused by the bias accelerations, bias forces, joint torques and external forces from the \(n\)-th joint up to the \(i\)-th joint, and updates the desired constraint acceleration that must be supplied by the unknown constraint forces. Comparing eq. (20e) and eq. (29), we see that eq. (20e) recursively computes the \(\boldsymbol{\lambda}\)-dependent terms in eq. (29) with
\[L_{i-1}^{A}=\sum_{k=i}^{n}K_{k}^{A}S_{k}D_{k}^{-1}S_{k}^{T}K_{k}^{AT}, \tag{31}\]
where the \(j\)-th column of \(L_{i-1}^{A}\) is the constraint accelerations caused by a unit magnitude \(j\)-th constraint force due to motions along the joints from the \(n\)-th joint back up to the \(i\)-th joint in the chain. \(L_{0}^{A}\) represents the inertial coupling between constraints considering the whole tree's motion, providing intuition for why \(L_{0}^{A}\) must be the inverse OSIM \(\Lambda^{-1}\), which was previously defined in the joint-space in eq. (5).
\[\Lambda^{-1}=JM^{-1}J^{T}=K_{n}(J_{n}M^{-1}J_{n}^{T})K_{n}^{T}, \tag{32}\]
where \(J_{n}M^{-1}J_{n}^{T}\) maps any force acting on the end-effector \(\mathbf{f}_{n}\) to end-effector acceleration caused due to this force
\[\mathbf{a}_{n}^{f}:=(J_{n}M^{-1}J_{n}^{T})\mathbf{f}_{n}. \tag{33}\]
From eq. (27), we collect all the terms depending on \(\mathbf{f}_{n}\) that cause end-effector acceleration (remember that \(\mathbf{f}_{i}^{A}\) also depends on \(\mathbf{f}_{n}\) because of the inward force recursion) to get
\[\mathbf{a}_{n}^{f}=\{\sum_{i=1}^{n}P_{i+1}^{nT}S_{i}D_{i}^{-1}S_{i}^{T}P_{i+1} ^{n}\}\mathbf{f}_{n}. \tag{34}\]
Eq. (33) and eq. (34) are both linear mappings from \(\mathbf{f}_{n}\) to \(\mathbf{a}_{n}^{f}\), where \(\mathbf{f}_{n}\) is free to take on any value in \(\mathbb{R}^{6}\) and the linear mappings depend only on \(\mathbf{q_{p}}\). Thus, it must be that \(J_{n}M^{-1}J_{n}^{T}=\sum_{i=1}^{n}P_{i+1}^{nT}S_{i}D_{i}^{-1}S_{i}^{T}P_{i+1}^{n}\). Pre- and post-multiplying this equality with \(K_{n}\) and \(K_{n}^{T}\), we get
\[K_{n}(J_{n}M^{-1}J_{n}^{T})K_{n}^{T}=\sum_{i=1}^{n}K_{n}P_{i+1}^{ nT}S_{i}D_{i}^{-1}S_{i}^{T}P_{i+1}^{n}K_{n}^{T}, \tag{35}\]
where using eq. (32), eq. (28) and eq. (31), we get \(\Lambda^{-1}=L_{0}^{A}\). The physical interpretation presented here is essentially the argument used in [33] to derive their constrained dynamics solver for kinematic loops, which we refer readers to for more insight especially related to the effect of internal kinematic loops. Compared to [33], our derivation is mathematical using the DP algorithm and does not require readers to possess physical insight. The physical interpretation provided here is only a post hoc explanation. However, the derivation in [33] does not assume prior optimization knowledge and may be more accessible to some readers, especially for those familiar with Featherstone's ABA algorithm derivation [16] because [33] is a natural extension of [16] that follows a similar variable elimination approach.
## V Extension to trees with floating-base
We now extend the original PV solver, that only dealt with end-effector constrained fixed-base kinematic chains, to kinematic trees with possibly a floating-base and possibly motion constraints on any link. We first modify the problem formulation to allow kinematic trees in section V-A, solve it using DP in section V-B and finally present the algorithm and analyze the computational complexity in section V-C.
### _Problem formulation_
The GPLC optimization problem for a given tree is
\[\underset{\mathbf{a},\ddot{\mathbf{q}}}{\text{\bf minimize}} \sum_{i=1}^{n}\frac{1}{2}(\mathbf{a}_{i}-H_{i}^{-1}\mathbf{f}_{i})^{T}H_{i}(\mathbf{a}_{i}-H_{i}^{-1}\mathbf{f}_{i}), \tag{36a}\] \[\mathbf{subject\ to} \mathbf{a}_{i}=\mathbf{a}_{\pi(i)}+S_{i}\ddot{\mathbf{q}}_{i}+\mathbf{a}_{b,i},\quad i=1,2,...,n,\] (36b) \[K_{i}\mathbf{a}_{i}=\mathbf{k}_{i},\quad i=1,...,n, \tag{36c}\]
where \(\pi(j)\) and \(\gamma(j)\) are the parent link and the set of children of any given link \(j\), respectively, as explained in section II-D. Compared to the problem in eq. (13), the recurrence relation in eq. (36b) is indexed differently due to the tree structure, and any link's motion can be constrained in eq. (36c). It is easily verifiable that the problem remains a strongly convex QP, but it is no longer analogous to a simple discrete-time LQR problem. Instead, this problem shares its structure with scenario trees from the control of systems with dynamics uncertainty [50]. However, the DP approach remains applicable and will provide a tree-structured Riccati recursion [51].
### _Dynamic programming solution_
Similarly to kinematic chains, we apply DP on the Lagrangian of the optimization problem in eq. (36)
\[\mathcal{L}(\ddot{\mathbf{q}},\boldsymbol{\lambda}_{1},...,\boldsymbol{\lambda}_{n})=\sum_{i=1}^{n}\{\frac{1}{2}\mathbf{a}_{i}^{T}H_{i}\mathbf{a}_{i}-\mathbf{f}_{i}^{T}\mathbf{a}_{i}\}+\sum_{i=1}^{n}\boldsymbol{\lambda}_{i}^{T}(K_{i}\mathbf{a}_{i}-\mathbf{k}_{i}). \tag{37}\]
For notational simplicity in the upcoming derivation, let us define \(\boldsymbol{\lambda}_{i}^{A}:=[\boldsymbol{\lambda}_{i}^{T},\boldsymbol{\lambda}_{\gamma(i)_{1}}^{AT},\boldsymbol{\lambda}_{\gamma(i)_{2}}^{AT},...,\boldsymbol{\lambda}_{\gamma(i)_{\mathcal{C}(i)}}^{AT}]^{T}\) as the concatenation of the multipliers associated with the constraints on the \(i\)-th link and its descendants, where \(\mathcal{C}(i)\) is the cardinality of the set \(\gamma(i)\). Analogously to eq. (15), the Bellman recurrence for the optimal cost-to-go Lagrangian for the kinematic tree is
\[V_{i}^{*}(\mathbf{a}_{\pi(i)},\mathbf{\lambda}^{A})= \underset{\vec{q}_{i}}{\text{\text{min}}}\{\frac{1}{2}\mathbf{a}_{i }^{T}H_{i}\mathbf{a}_{i}-\mathbf{f}_{i}^{T}\mathbf{a}_{i}+\mathbf{\lambda}_{i}^{T}(K_ {i}\mathbf{a}_{i}-\mathbf{k}_{i})+\] \[\sum_{j\in\gamma(i)}V_{j}^{*}(\mathbf{a}_{i},\mathbf{\lambda}_{j}^{A}) \}+\mathrm{constant}. \tag{38}\]
Similarly to eq. (18a), let us hypothesize that the optimal cost-to-go Lagrangian has the quadratic form
\[V_{i}^{*}(\mathbf{a}_{\pi(i)},\mathbf{\lambda}_{i}^{A})=\underset{ \vec{\mathbf{q}}_{i}}{\text{\text{min}}}\{\frac{1}{2}\mathbf{a}_{i}^{T}H_{i}^{A} \mathbf{a}_{i}-\frac{1}{2}\mathbf{\lambda}_{i}^{AT}L_{i}^{A}\mathbf{\lambda}_{i}^{A}+ \tag{39}\] \[\mathbf{\lambda}_{i}^{AT}K_{i}^{A}\mathbf{a}_{i}-\mathbf{f}_{i}^{AT} \mathbf{a}_{i}+\mathbf{l}_{i}^{T}\mathbf{\lambda}_{i}^{A}\}+\mathrm{constant}.\]
Substituting \(\mathbf{a}_{i}\) above using eq. (36b) gives
\[V_{i}^{*}(\mathbf{a}_{\pi(i)},\boldsymbol{\lambda}_{i}^{A})=\underset{\ddot{\mathbf{q}}_{i}}{\mathbf{min}}\{\frac{1}{2}(\mathbf{a}_{\pi(i)}+S_{i}\ddot{\mathbf{q}}_{i}+\mathbf{a}_{b,i})^{T}H_{i}^{A}(\mathbf{a}_{\pi(i)}+S_{i}\ddot{\mathbf{q}}_{i}+\mathbf{a}_{b,i})-\frac{1}{2}\boldsymbol{\lambda}_{i}^{AT}L_{i}^{A}\boldsymbol{\lambda}_{i}^{A}+\boldsymbol{\lambda}_{i}^{AT}K_{i}^{A}(\mathbf{a}_{\pi(i)}+S_{i}\ddot{\mathbf{q}}_{i}+\mathbf{a}_{b,i})-\mathbf{f}_{i}^{AT}(\mathbf{a}_{\pi(i)}+S_{i}\ddot{\mathbf{q}}_{i}+\mathbf{a}_{b,i})+\mathbf{l}_{i}^{T}\boldsymbol{\lambda}_{i}^{A}\}+\mathrm{constant}. \tag{40}\]
Optimizing this function for the optimal \(\ddot{\mathbf{q}}_{i}\) gives
\[\mathbf{\vec{q}}_{i}^{*}=(D_{i})^{-1}S_{i}^{T}\{\mathbf{f}_{i}^{A}-H_{i}^{A}( \mathbf{a}_{\pi(i)}+\mathbf{a}_{b,i})-K_{i}^{AT}\mathbf{\lambda}_{i}^{A}\}, \tag{41}\]
substituting which back into eq. (40) gives \(V_{i}^{*}(\mathbf{a}_{\pi(i)},\mathbf{\lambda}_{i}^{A})\).
Substituting the expression \(V_{j}^{*}(\mathbf{a}_{i},\mathbf{\lambda}_{j}^{A})\), thus computed for all \(j\in\gamma(i)\) in Bellman recurrence relation eq. (38) confirms that the optimal cost-to-go function has the quadratic form hypothesized in eq. (39) for link \(i\) if the hypothesis holds for all the children links \(j\in\gamma(i)\). The quadratic form for the \(i\)-th link is given by the recursive equations
\[H_{i}^{A}=H_{i}+\sum_{k\in\gamma(i)}P_{k}H_{k}^{A}, \tag{42a}\]
\[\mathbf{f}_{i}^{A}=\mathbf{f}_{i}+\sum_{k\in\gamma(i)}P_{k}(\mathbf{f}_{k}^{A}-H_{k}^{A}\mathbf{a}_{b,k}), \tag{42b}\]
\[K_{i}^{A}=\begin{bmatrix}K_{i}\\ \vdots\\ K_{k}^{A}P_{k}^{T}\\ \vdots\end{bmatrix}, \tag{42c}\]
\[\mathbf{l}_{i}=\begin{bmatrix}-\mathbf{k}_{i}\\ \vdots\\ \mathbf{l}_{k}+K_{k}^{A}\{\mathbf{a}_{b,k}+S_{k}D_{k}^{-1}S_{k}^{T}(\mathbf{f}_{k}^{A}-H_{k}^{A}\mathbf{a}_{b,k})\}\\ \vdots\end{bmatrix}, \tag{42d}\]
\[L_{i}^{A}=\begin{bmatrix}\mathbf{0}_{m_{i}\times m_{i}}&&\\ &\ddots&\\ &&L_{k}^{A}+K_{k}^{A}S_{k}D_{k}^{-1}S_{k}^{T}K_{k}^{AT}\end{bmatrix}, \tag{42e}\]
where eq. (42c) and eq. (42d) stack one block per child \(k\in\gamma(i)\), and eq. (42e) is block-diagonal with one block per child.
The cost-to-go Lagrangian at any leaf node \(j\) is \(V_{j}(\mathbf{a}_{j},\mathbf{\lambda}_{j}^{A})=\frac{1}{2}\mathbf{a}_{j}^{T}H_{j} ^{A}\mathbf{a}_{j}-\mathbf{f}_{j}^{T}\mathbf{a}_{j}+\mathbf{\lambda}_{j}^{T}(K_{ j}\mathbf{a}_{j}-\mathbf{k}_{j})\). Thus, \(H_{j}^{A}=H_{j}\), \(L_{j}^{A}=\mathbf{0}_{m_{j}\times m_{j}}\), \(K_{j}^{A}=K_{j}\), \(\mathbf{f}_{j}^{A}=\mathbf{f}_{j}\), \(\mathbf{l}_{j}=-\mathbf{k}_{j}\) for all \(j\) that are leaf links. Therefore, it can be shown again inductively that the equations assumed in eq. (39) correctly model the cost-to-go function.
For a fixed-base robot, the backward recursion is performed until the base link \(0\), and the known fixed-base acceleration is substituted to obtain the dual function, which is maximized to compute the optimal dual variables \(\mathbf{\lambda}_{0}^{A*}\) (assuming that \(L_{0}^{A}\) has full rank) analogously to eq. (22)
\[\boldsymbol{\lambda}_{0}^{A*}=(L_{0}^{A})^{-1}(\mathbf{l}_{0}+K_{0}^{A}\mathbf{a}_{0}). \tag{43}\]
For a floating-base robot, the backward sweep is conducted until the floating-base link \(b\), from where the optimal base acceleration and the dual variables are the saddle point of the optimal cost-to-go Lagrangian at the floating-base
\[\boldsymbol{\lambda}_{b}^{A*},\mathbf{a}_{b}^{*}=\underset{\boldsymbol{\lambda}_{b}^{A}}{\mathbf{argmax}}\ \underset{\mathbf{a}_{b}}{\mathbf{min}}\{\frac{1}{2}\mathbf{a}_{b}^{T}H_{b}^{A}\mathbf{a}_{b}-\frac{1}{2}\boldsymbol{\lambda}_{b}^{AT}L_{b}^{A}\boldsymbol{\lambda}_{b}^{A}+\boldsymbol{\lambda}_{b}^{AT}K_{b}^{A}\mathbf{a}_{b}-\mathbf{f}_{b}^{AT}\mathbf{a}_{b}+\mathbf{l}_{b}^{T}\boldsymbol{\lambda}_{b}^{A}\}. \tag{44}\]
The stationary gradient condition of the first-order necessary KKT conditions provides the simultaneous linear equations,
\[\mathbf{a}_{b}^{*}= (H_{b}^{A})^{-1}(\mathbf{f}_{b}^{A}-K_{b}^{AT}\mathbf{\lambda}_{b}^{ A*}), \tag{45}\] \[\mathbf{\lambda}_{b}^{A*}= (L_{b}^{A})^{-1}(K_{b}^{A}\mathbf{a}_{b}^{*}+\mathsf{l}_{b}). \tag{46}\]
We can substitute \(\mathbf{a}_{b}^{*}\) from eq. (45) in eq. (46) to get
\[\boldsymbol{\lambda}_{b}^{A*}=(L_{b}^{A}+K_{b}^{A}(H_{b}^{A})^{-1}K_{b}^{AT})^{-1}(K_{b}^{A}(H_{b}^{A})^{-1}\mathbf{f}_{b}^{A}+\mathbf{l}_{b}), \tag{47}\]
and the optimal base acceleration is then recovered using eq. (45) and the inverse OSIM matrix is
\[L_{0}^{A}=(L_{b}^{A}+K_{b}^{A}(H_{b}^{A})^{-1}K_{b}^{AT}), \tag{48}\]
which is no different from performing the usual backward recursion at the free joint \(b\) with \(S_{b}=I_{6\times 6}\), as the free joint is allowed to move in all directions.
Alternately, if \(L_{b}^{A}\) is invertible one can also substitute the expression for \(\mathbf{\lambda}_{b}^{A*}\) from eq. (46) in eq. (45) to get
\[\mathbf{a}_{b}^{*}=(H_{b}^{A}+K_{b}^{AT}(L_{b}^{A})^{-1}K_{b}^{A})^{-1}(\mathbf{f}_{b}^{A}-K_{b}^{AT}(L_{b}^{A})^{-1}\mathbf{l}_{b}), \tag{49}\]
and the optimal Lagrange multipliers can then be recovered using eq. (46). The accelerations of the rest of the segments are then computed in the second forward sweep (rollout). The choice of computing eq. (47) or eq. (49) can significantly impact the computational efficiency of the algorithm depending on the branching structure and the number of constraints.
Suppose that the kinematic tree branches at the floating base; then \(L_{b}^{A}\) has a block-diagonal structure because the \(L_{i}^{A}\) terms from different branches occupy their respective diagonal blocks in eq. (42e). Factorizing or inverting \(L_{b}^{A}\) is easier due to this block-diagonal structure. Then computing eq. (49) requires solving a small linear system of fixed size \(6\times 6\), which makes eq. (49) the superior choice in this case. On the other hand, computing eq. (48) performs a dense \(m\times m\) update to \(L_{b}^{A}\), which destroys the block-diagonal sparsity pattern and then requires solving a dense linear system of size \(m\times m\).
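The consistency of the two elimination orders is easy to check numerically. The sketch below solves the saddle point of eq. (44) via eq. (47) and via eq. (49) on random stand-in data (not robot quantities) and verifies that both routes agree.

```python
import numpy as np

rng = np.random.default_rng(5)
m = 4
B = rng.standard_normal((6, 6))
Hb = B @ B.T + 6 * np.eye(6)       # H_b^A stand-in, positive definite
C = rng.standard_normal((m, m))
Lb = C @ C.T + m * np.eye(m)       # L_b^A stand-in, invertible here
Kb = rng.standard_normal((m, 6))   # K_b^A
fb = rng.standard_normal(6)        # f_b^A
lb = rng.standard_normal(m)        # l_b

# Route 1: eliminate a_b first, eq. (47), then recover a_b from eq. (45).
lam1 = np.linalg.solve(Lb + Kb @ np.linalg.solve(Hb, Kb.T),
                       Kb @ np.linalg.solve(Hb, fb) + lb)
a1 = np.linalg.solve(Hb, fb - Kb.T @ lam1)

# Route 2: eliminate lambda first, eq. (49), then recover from eq. (46).
a2 = np.linalg.solve(Hb + Kb.T @ np.linalg.solve(Lb, Kb),
                     fb - Kb.T @ np.linalg.solve(Lb, lb))
lam2 = np.linalg.solve(Lb, Kb @ a2 + lb)

assert np.allclose(a1, a2) and np.allclose(lam1, lam2)
```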
### _Algorithm_
Algorithm 1 presents the PV solver for kinematic trees with floating-base. Let \(\mathcal{S}\) be an ordered list of all the links in the kinematic tree, such that \(i\) precedes \(j\) in the list if \(i\)-th link is the \(j\)-th link's ancestor. Let \(\mathcal{S}_{r}\) be the reversed list of \(\mathcal{S}\). In algorithm 1, we use eq. (49) instead of eq. (47).
#### V-C1 Computational complexity
We now analyze the worst-case computational complexity of algorithm 1. The computations in lines 2, 3, 4, 5, 7, 8, 9, 17 each require a fixed number of operations per joint and thus \(O(n)\) operations in total. Lines 10, 12, 16 require \(O(m)\) operations per execution and are executed at most \(d\) times each, where \(d\) is the depth of the tree, requiring \(O(md)\) operations. Line 11 needs \(O(m^{2})\) operations per joint and \(O(m^{2}d)\) operations in total. Factorizing \(L_{b}^{A}\) in line 13 has a worst-case complexity of \(O(m^{3})\). Aggregating these terms, the algorithm requires \(O(n+m^{2}d+m^{3})\) operations in the worst case.
_Best case complexity:_ The computational complexity is significantly better than the worst case for favorable tree structures and constraints. Suppose that the branching occurs at the (floating) base link and there is one end-effector (a constrained link with an at most 6-dimensional constraint) per branch. Quadruped and humanoid robots often have this structure. Let \(r\) be the number of branches and \(d\) be the length of the longest branch. Line 11 is executed \(d\) times for \(r\) branches, leading to \(O(dr)\) operations. Similarly, factorizing the block-diagonal matrix \(L_{b}^{A}\) needs \(O(r)\) operations, one for each block of size at most \(6\times 6\). As \(m=O(r)\), the total complexity of the constrained dynamics for this tree is \(O(n+md+m)\).
The equality of \(\Lambda^{-1}\) and \(L_{0}^{A}\) established in section IV can be shown for kinematic trees as well using identical arguments, and is hence skipped for the sake of brevity.
## VI Soft Gauss' principle
We have considered only hard motion constraints so far, but it is also conceivable to relax these motion constraints through a penalty method and solve this easier problem, which is moreover always feasible even if the constraints are linearly dependent. This is precisely the approach taken in the MuJoCo toolbox [42, 7], a popular rigid-body dynamics simulator, using the so-called "soft Gauss' principle", where the hard motion constraints are relaxed through a quadratic penalty,
\[\underset{\mathbf{a},\ddot{\mathbf{q}}}{\text{\bf minimize}} \sum_{i=1}^{n}\frac{1}{2}\{(\mathbf{a}_{i}-H_{i}^{-1}\mathbf{f}_{i})^{T}H_{i}(\mathbf{a}_{i}-H_{i}^{-1}\mathbf{f}_{i})+(K_{i}\mathbf{a}_{i}-\mathbf{k}_{i})^{T}R_{i}^{-1}(K_{i}\mathbf{a}_{i}-\mathbf{k}_{i})\}, \tag{50a}\] \[\mathbf{subject\ to} \mathbf{a}_{i}=\mathbf{a}_{\pi(i)}+S_{i}\ddot{\mathbf{q}}_{i}+\mathbf{a}_{b,i},\ i=1,2,...,n, \tag{50b}\]
where \(R_{i}\in\mathbb{R}^{m_{i}\times m_{i}}\) is a diagonal positive definite matrix. After expanding the objective function in eq. (50a), collecting the quadratic and linear terms and ignoring the constant terms, we get an equivalent optimization problem,
\[\underset{\mathbf{a},\ddot{\mathbf{q}}}{\text{\bf minimize}} \sum_{i=1}^{n}\{\frac{1}{2}\mathbf{a}_{i}^{T}(H_{i}+K_{i}^{T}R_{i}^{-1}K_{i})\mathbf{a}_{i}-(\mathbf{f}_{i}+K_{i}^{T}R_{i}^{-1}\mathbf{k}_{i})^{T}\mathbf{a}_{i}\}+\mathrm{const}, \tag{51a}\] \[\mathbf{subject\ to} \mathbf{a}_{i}=\mathbf{a}_{\pi(i)}+S_{i}\ddot{\mathbf{q}}_{i}+\mathbf{a}_{b,i},\ i=1,2,...,n, \tag{51b}\]
which is a special case of the kinematic tree optimization problem in eq. (36), but without motion constraints (apart from the joint constraints in eq. (50b) which will be eliminated through substitution) and with the modified \(H_{i}\) and \(f_{i}\) terms
\[H_{i}\gets H_{i}+K_{i}^{T}R_{i}^{-1}K_{i};\quad\mathbf{f}_{i}\leftarrow\mathbf{f}_{i}+K_{i}^{T}R_{i}^{-1}\mathbf{k}_{i}. \tag{52}\]
As there are no motion constraints, the \(L_{i}^{A}\), \(\mathbf{l}_{i}\) and \(K_{i}^{A}\) terms are not computed for the soft Gauss' problem, for which algorithm 1 reduces simply to the ABA with the update in eq. (52).
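The single-body case makes the update in eq. (52) easy to verify. The sketch below solves the soft-Gauss objective of eq. (50a) for one free body (no joint constraints) as a stacked weighted least-squares problem and confirms that it matches the unconstrained solve with the modified inertia and force; all matrices are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(6)
B = rng.standard_normal((6, 6))
H = B @ B.T + 6 * np.eye(6)             # spatial inertia stand-in
f = rng.standard_normal(6)
K = rng.standard_normal((3, 6))         # softened constraint K a ~ k
k = rng.standard_normal(3)
R = np.diag(rng.uniform(0.01, 0.1, 3))  # diagonal penalty weights

# Single-body eq. (50a) as stacked weighted least-squares:
# ||Ch^T (a - H^{-1} f)||^2 + ||R^{-1/2} (K a - k)||^2.
Ch = np.linalg.cholesky(H)              # H = Ch Ch^T
Rm = np.diag(1.0 / np.sqrt(np.diag(R)))
A_ls = np.vstack([Ch.T, Rm @ K])
b_ls = np.concatenate([Ch.T @ np.linalg.solve(H, f), Rm @ k])
a_ls = np.linalg.lstsq(A_ls, b_ls, rcond=None)[0]

# Same acceleration from the modified terms of eq. (52).
H_mod = H + K.T @ np.linalg.solve(R, K)
f_mod = f + K.T @ np.linalg.solve(R, k)
assert np.allclose(a_ls, np.linalg.solve(H_mod, f_mod))
```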
### _Computational complexity_
The ABA has \(O(n)\) complexity while the inertia and force updates in eq. (52) require \(O(m)\) operations. Therefore, the total computational complexity for solving the soft Gauss' principle is \(O(m+n)\).
The state-of-the-art simulator MuJoCo solves the problem in the joint space, resulting in a significantly higher computational complexity of \(O(nd^{2}+m^{2}d+d^{2}m)\). It uses the composite rigid body algorithm (CRBA) [52, Method 3] to compute the JSIM and factorizes it, which has a worst-case complexity of \(O(nd^{2})\). It considers constraints by modifying the JSIM [7, eq. 7] analogously to our inertia update in eq. (52) and solves the resulting linear system using the matrix inversion lemma, which accounts for the additional terms in the complexity.
## VII \(O(n)\) algorithm for OSIM
The OSIM itself is an important expression in many rigid-body simulators in both the robotics and the computer graphics (where its inverse is known as the Delassus operator) communities. It also has applications in constrained inverse dynamics [53] and dynamically-consistent nullspace projection in prioritized torque control [54]. OSIM is particularly useful for resolving inequality constraints (also called unilateral constraints), because an inequality constraint becoming inactive can be easily handled by removing the corresponding row and column of the inverse OSIM and efficiently updating the factorization [55]. Therefore, we isolate the OSIM computations in the PV solver and present a stand-alone algorithm. Further,
we propose a structure-exploiting method for floating-base robots that avoids factorizing the dense inverse OSIM, which, to the best of our knowledge, all the existing approaches perform. Finally, we end the section with a qualitative comparison of the proposed algorithm with the existing \(O(n)\) complexity OSIM solvers KJR [28] and EFPA [31].
### _The PV-OSIM algorithm_
Algorithm 2 lists the PV solver computations necessary for the OSIM.
```
 0: Require: q_p, K_i's, robot model
    {first forward sweep}
 1: for i in S do
 2:    X_i = X_{π(i)} ^{π(i)}X_{i'} ^{i'}X_i
 3:    K_i^A ← K_i;  L_i^A ← 0_{m_i×m_i};  H_i^A ← H_i
    {backward sweep}
 4: for i in S_r do
 5:    D_i = S_i^T H_i^A S_i;  P_i = 1_{6×6} − H_i^A S_i D_i^{-1} S_i^T
 6:    H_{π(i)}^A ← H_{π(i)}^A + P_i H_i^A
 7:    K_{π(i)}^A ← [K_{π(i)}^A; K_i^A P_i^T]
 8:    L_{π(i)}^A ← blockdiag(L_{π(i)}^A, L_i^A + K_i^A S_i D_i^{-1} S_i^T K_i^{AT})
 9: if floating base then
10:    L_0^A ← L_0^A + K_b^A (H_b^A)^{-1} K_b^{AT}
11: Λ = (L_0^A)^{-1}
```
**Algorithm 2** The PV-OSIM algorithm
### _The PV-OSIM-fast for floating-base robots_
For floating-base trees with branching at the base link, \(L_{b}^{A}\) has block diagonal structure. This sparsity structure is lost in the update in line 10 in algorithm 2 (eq. (48)) by adding a dense matrix to \(L_{b}^{A}\). The inverse OSIM (and the OSIM) is a dense matrix for floating-base robots because the constraints on different branches are coupled through the floating-base. All existing approaches, that we know of, compute this dense inverse OSIM and factorize it, which scales poorly in the presence of many constraints. We propose to avoid this by exploiting the structure of the update in eq. (48).
The update to \(L_{b}^{A}\) in eq. (48) is structurally a symmetric rank-6 update. If we assume that \(L_{b}^{A}\) is invertible, which is a reasonable assumption for floating-base robots like humanoids and quadrupeds during operation, the matrix inversion lemma (MIL) [56] can be used to factorize \(L_{0}^{A}\) without having to explicitly construct this dense matrix. The MIL states
\[(A+UCV)^{-1}=A^{-1}-A^{-1}U(C^{-1}+VA^{-1}U)^{-1}VA^{-1}, \tag{53}\]
applying which to solve eq. (48) yields
\[(L_{0}^{A})^{-1}=(L_{b}^{A})^{-1}-(L_{b}^{A})^{-1}K_{b}^{A}\{H_{b}^{A}+K_{b}^{AT}(L_{b}^{A})^{-1}K_{b}^{A}\}^{-1}K_{b}^{AT}(L_{b}^{A})^{-1} \tag{54}\]
\[=\Lambda_{b}-\mathbf{L}_{K}\{H_{b}^{A}+K_{b}^{AT}\mathbf{L}_{K}\}^{-1}\mathbf{L}_{K}^{T}, \tag{55}\]
where \(\Lambda_{b}:=(L_{b}^{A})^{-1}\) (easy to compute because the block-diagonal structure is retained under inversion) and \(\mathbf{L}_{K}:=\Lambda_{b}K_{b}^{A}\). Please note that the right-hand side (RHS) of the above equation is not evaluated to obtain the \((L_{0}^{A})^{-1}\) matrix, as that would destroy sparsity. Instead, the RHS is meant to be directly multiplied with vectors, similarly to how solving a linear system involves a factorization rather than an explicit matrix inverse.
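The sketch below applies eq. (55) to a random block-diagonal \(L_{b}^{A}\) stand-in and verifies it against the dense solve of eq. (48). For brevity the block-wise solves are written as single calls on the block-diagonal matrix; a real implementation would solve per diagonal block.

```python
import numpy as np

rng = np.random.default_rng(7)
r, mi = 4, 3                       # branches, constraints per branch
m = r * mi
Lb = np.zeros((m, m))              # block-diagonal L_b^A stand-in
for i in range(r):
    C = rng.standard_normal((mi, mi))
    Lb[i*mi:(i+1)*mi, i*mi:(i+1)*mi] = C @ C.T + mi * np.eye(mi)
B = rng.standard_normal((6, 6))
Hb = B @ B.T + 6 * np.eye(6)       # H_b^A stand-in
Kb = rng.standard_normal((m, 6))   # K_b^A
b = rng.standard_normal(m)         # vector to apply (L_0^A)^{-1} to

# Eq. (55): only solves with the block-diagonal L_b^A plus one 6x6 solve.
LK = np.linalg.solve(Lb, Kb)                    # L_K = Lambda_b K_b^A
small = Hb + Kb.T @ LK                          # 6x6 inner system
x_fast = np.linalg.solve(Lb, b) - LK @ np.linalg.solve(small, LK.T @ b)

# Reference: form the dense inverse OSIM of eq. (48) and solve directly.
x_dense = np.linalg.solve(Lb + Kb @ np.linalg.solve(Hb, Kb.T), b)
assert np.allclose(x_fast, x_dense)
```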
#### VII-B1 Computational complexity of PV-OSIM-fast
The original PV-OSIM algorithm computes and factorizes the dense \(L_{0}^{A}\), which requires \(O(m^{3})\) operations. In contrast, the structure-exploiting method computes \(\Lambda_{b}\), which requires \(O(\frac{m^{3}}{r^{2}})\) operations, and \(\mathbf{L}_{K}\), which requires \(O(\frac{m^{2}}{r})\) operations, bringing the total complexity to \(O(\frac{m^{3}}{r^{2}})\), where we have assumed for simplicity of analysis that the \(m\) constraints are equally distributed among the \(r\) branches. Thus, the proposed algorithm can provide a significant speed-up for factorizing the inverse OSIM of floating-base robots with a favorable branching structure compared to the existing approaches, which all solve dense linear systems.
_Limitation of PV-OSIM-fast:_ Strictly speaking, PV-OSIM-fast is applicable in a subset of the cases where the regular PV-OSIM is applicable because of its assumption that \(L_{b}^{A}\) is invertible. It is possible that \(L_{b}^{A}\) is not invertible, but \(L_{0}^{A}\) is invertible due to the addition of symmetric rank-6 matrix in eq. (48). This situation may occur if there is a high dimensional constraint applied on a link close to the base link or if the robot reaches a kinematically singular configuration.
### _Comparison with existing \(O(n)\) OSIM algorithms_
We now compare the PV-OSIM algorithm with the existing recursive \(O(n)\) algorithms: the KJR algorithm [22, 28], whose optimized version was presented in [32], and the _extended force propagator_ algorithm (EFPA) [31]. The three algorithms share the main idea of propagating the inverse inertia matrices, but differ significantly in the details. The primary structural difference of the PV-OSIM is that it computes the inverse OSIM in two sweeps while both KJR and EFPA require three sweeps.
This difference arises because PV-OSIM computes the inverse inertia due to the motion of the \(i\)-th joint and its descendants directly in the _constraint space_ \(L_{i}^{A}\) during the backward sweep, using the EFP to propagate constraint forces to a joint and the constraint accelerations back to the constrained link. However, both KJR and EFPA first compute the articulated body inertia in a backward sweep and then compute the spatial inverse inertia matrices of size \(6\times 6\) for all the necessary links in a forward sweep, which is avoided in the PV-OSIM. Propagating spatial inverse inertia matrices is a particularly expensive operation since they need to be transformed from one link's frame to another's (because dynamics algorithms are efficiently implemented in the link frame) in KJR and EFPA. This transformation is not required in PV-OSIM because the inverse inertia is directly computed in the constraint space. Then KJR and EFPA compute the relative inverse inertia (essentially the matrix that maps forces on one link to the accelerations caused on another link) between every pair of links that are constrained. KJR performs this computation inefficiently by propagating the relative spatial inverse inertia matrices through the path connecting two constrained links for every possible pair of constrained links. EFPA computes the relative inverse inertia matrices more efficiently by directly transmitting the constraint forces and accelerations between constrained links through a common ancestor link using the EFP. Finally, after all these inverse inertia matrices are computed, EFPA and KJR project them to the constraint space to get the inverse OSIM.
Thus, the PV-OSIM appears to exploit the structure of the problem better by using one less sweep to compute the inverse OSIM and its computational performance relative to existing OSIM algorithms will be benchmarked in section IX-B. It must be noted that despite performing some extra computations, the EFPA algorithm has a lower order computational complexity of \(O(n+md+m^{2})\) compared to the \(O(n+m^{2}d+m^{2})\) complexity of the PV-OSIM for computing the inverse OSIM. Therefore, for kinematic trees of high depth and many constraints, we can expect the EFPA algorithm to be faster than the PV-OSIM, which we test in section IX-B.
Also, note that the derivation of KJR or EFPA is complex and requires significant knowledge of and insight into efficient dynamics algorithms literature, while the PV-OSIM derivation is relatively simpler and self-contained as we are able to derive it from first principles (Gauss' principle) within this paper. Moreover, all the existing approaches compute and factorize the dense inverse OSIM matrix for floating-base robots, which the PV-OSIM-fast algorithm in section VII-B avoids.
## VIII Early multiplier resolution
The original PV solver first eliminates the primal variables, recursively computes the inverse OSIM and factorizes it, which results in a worst case \(O(n+m^{2}d+m^{3})\) complexity. This can get particularly expensive when \(m\sim O(n)\). However, if computing the OSIM is not required (for some other purpose during control or simulation), we can generalize the elimination ordering by aggressively eliminating dual variables earlier during the backward sweep to obtain an algorithm with an improved complexity of only \(O(n+m)\). We now derive this algorithm by adapting our original PV solver derivation. This early elimination idea was already partly introduced in eq. (49), when we eliminated the dual variables just before eliminating \(\mathbf{a}_{b}\) and will be further developed now. A form of early elimination is also proposed in [33], where they eliminate the constraint forces of an internal kinematic loop as soon as all the link accelerations within that loop are eliminated.
According to Bellman's principle of optimality [19], the solution to an optimization problem also optimizes its tail subproblem. Hence, for the tail sub-problem at the \(i\)-th link
\[\boldsymbol{\lambda}_{i}^{A*}=\underset{\boldsymbol{\lambda}_{i}^{A}}{\mathbf{argmax}}\ V_{i}^{*}(\mathbf{a}_{\pi(i)},\boldsymbol{\lambda}_{i}^{A}). \tag{56}\]
The objective function above is of the form in eq. (39) and is guaranteed to be bounded above and have a solution only when \(L_{i}^{A}\) has full rank. There is a rank-\(n_{i}\) update to \(L_{i}^{A}\) at every \(i\)-th joint during the backward recursion (see eq. (42e))
\[L_{i}^{A}\gets L_{i}^{A}+K_{i}^{A}S_{i}(D_{i})^{-1}S_{i}^{T}K_{i}^{AT} \tag{57}\]
Substituting the singular value decomposition (SVD) [45] of \(L_{i}^{A}\) in eq. (39) gives
\[\boldsymbol{\lambda}_{i}^{A*}=\underset{\boldsymbol{\lambda}_{i}^{A}}{\mathbf{argmax}}\{-\frac{1}{2}\boldsymbol{\lambda}_{i}^{AT}\begin{bmatrix}U_{i}^{1}&U_{i}^{2}\end{bmatrix}\begin{bmatrix}\Sigma_{i}&0\\ 0&0\end{bmatrix}\begin{bmatrix}U_{i}^{1T}\\ U_{i}^{2T}\end{bmatrix}\boldsymbol{\lambda}_{i}^{A}+\mathbf{a}_{i}^{T}K_{i}^{AT}\begin{bmatrix}U_{i}^{1}&U_{i}^{2}\end{bmatrix}\begin{bmatrix}U_{i}^{1T}\\ U_{i}^{2T}\end{bmatrix}\boldsymbol{\lambda}_{i}^{A}+\mathbf{l}_{i}^{T}\begin{bmatrix}U_{i}^{1}&U_{i}^{2}\end{bmatrix}\begin{bmatrix}U_{i}^{1T}\\ U_{i}^{2T}\end{bmatrix}\boldsymbol{\lambda}_{i}^{A}\}+\mathrm{constant}, \tag{58}\]
where \(\Sigma_{i}\in\mathbb{R}^{m_{ir}\times m_{ir}}\) is the diagonal matrix of the positive singular values, \(U_{i}^{1}\in\mathbb{R}^{m_{if}\times m_{ir}}\) and \(U_{i}^{2}\in\mathbb{R}^{m_{if}\times(m_{if}-m_{ir})}\) are the singular vectors corresponding to the positive and zero singular values of \(L_{i}^{A}\), respectively, and \(m_{ir}\) and \(m_{if}\) are the rank and the size of \(L_{i}^{A}\), respectively. The left and right singular vectors are equal because \(L_{i}^{A}\) is symmetric. Moreover, the singular vectors are orthonormal
\[\begin{bmatrix}U_{i}^{1}&U_{i}^{2}\end{bmatrix}\begin{bmatrix}U_{i}^{1T}\\ U_{i}^{2T}\end{bmatrix}=I_{m_{if}\times m_{if}}, \tag{59}\]
which we use to project \(\boldsymbol{\lambda}_{i}^{A}\), \(K_{i}^{A}\) and \(\mathbf{l}_{i}\) to two mutually orthogonal linear bases,
\[\tilde{\boldsymbol{\lambda}}_{i}^{A}=U_{i}^{1T}\boldsymbol{\lambda}_{i}^{A}, \quad\hat{\boldsymbol{\lambda}}_{i}^{A}=U_{i}^{2T}\boldsymbol{\lambda}_{i}^{A}, \tag{60}\] \[\tilde{K}_{i}^{A}=U_{i}^{1T}K_{i}^{A},\quad\hat{K}_{i}^{A}=U_{i}^ {2T}K_{i}^{A},\] \[\tilde{\mathbf{l}}_{i}=U_{i}^{1T}\mathbf{l}_{i},\quad\hat{ \mathbf{l}}_{i}=U_{i}^{2T}\mathbf{l}_{i},\]
where \(\tilde{\boldsymbol{\lambda}}_{i}^{A}\in\mathbb{R}^{m_{ir}}\), \(\tilde{K}_{i}^{A}\in\mathbb{R}^{m_{ir}\times 6}\), \(\tilde{\mathbf{l}}_{i}\in\mathbb{R}^{m_{ir}}\) and \(\hat{\boldsymbol{\lambda}}_{i}^{A}\in\mathbb{R}^{(m_{if}-m_{ir})}\), \(\hat{K}_{i}^{A}\in\mathbb{R}^{(m_{if}-m_{ir})\times 6}\), \(\hat{\mathbf{l}}_{i}\in\mathbb{R}^{(m_{if}-m_{ir})}\) are the components of \(\boldsymbol{\lambda}_{i}^{A}\), \(K_{i}^{A}\) and \(\mathbf{l}_{i}\) in the basis spanned by the singular vectors \(U_{i}^{1}\) and \(U_{i}^{2}\) respectively. Using these quantities, the optimization problem in eq. (58) can be decoupled into a separate optimization problem and a dual feasibility condition along the columnspace and nullspace of \(L_{i}^{A}\), respectively,
\[\tilde{\boldsymbol{\lambda}}_{i}^{A*}=\underset{\tilde{\boldsymbol{\lambda}}_{i}^{A}}{\mathbf{argmax}}\{-\frac{1}{2}\tilde{\boldsymbol{\lambda}}_{i}^{AT}\Sigma_{i}\tilde{\boldsymbol{\lambda}}_{i}^{A}+\mathbf{a}_{i}^{T}\tilde{K}_{i}^{AT}\tilde{\boldsymbol{\lambda}}_{i}^{A}+\tilde{\mathbf{l}}_{i}^{T}\tilde{\boldsymbol{\lambda}}_{i}^{A}\}, \tag{61a}\]
\[\hat{K}_{i}^{A}\mathbf{a}_{i}+\hat{\mathbf{l}}_{i}=0. \tag{61b}\]
The solution to eq. (61a) is easily computed due to the diagonality of \(\Sigma_{i}\),
\[\tilde{\boldsymbol{\lambda}}_{i}^{A*}=\Sigma_{i}^{-1}(\tilde{K}_{i}^{A} \mathbf{a}_{i}+\tilde{\mathbf{l}}_{i}). \tag{62}\]
Substituting eq. (62) back into the cost-to-go Lagrangian in eq. (39) gives the following updates to its terms,
\[H_{i}^{A}\gets H_{i}^{A}+\tilde{K}_{i}^{AT}\Sigma_{i}^{-1} \tilde{K}_{i}^{A},\ \mathbf{f}_{i}^{A}\leftarrow\mathbf{f}_{i}^{A}+\tilde{K}_{i}^{AT}\Sigma_{i}^ {-1}\tilde{\mathbf{l}}_{i},\] \[K_{i}^{A}\leftarrow\hat{K}_{i}^{A},\ \mathbf{l}_{i}\leftarrow\hat{\mathbf{l}}_{i},\ \boldsymbol{\lambda}_{i}^{A}\leftarrow\hat{\boldsymbol{\lambda}}_{i}^{A},\] \[L_{i}^{A}\leftarrow\mathbf{0}_{(m_{if}-m_{ir})\times(m_{if}-m_{ir})}. \tag{63}\]
The backward recursion is performed using these modified terms in eq. (42). The early elimination is performed at each joint after \(L_{i}^{A}\) is updated, which resets \(L_{i}^{A}\) to the zero matrix. Early elimination reduces the number of propagated constraints at the \(i\)-th joint by the rank of \(K_{i}^{A}S_{i}\), which is usually equal to \(n_{i}\), except in the case of redundant constraints or kinematic singularities. If all the constraints are
eliminated before reaching the root node, the backward sweep reduces to the ABA algorithm.
During the forward sweep, the optimal \(\boldsymbol{\lambda}_{i}^{A*}\) is reconstructed using \(\tilde{\boldsymbol{\lambda}}_{i}^{A*}\) from eq. (62) and \(\hat{\boldsymbol{\lambda}}_{i}^{A*}\) (available from the previous link) by transforming back to the original basis
\[\mathbf{\lambda}_{i}^{A*}=U_{i}^{1}\mathbf{\tilde{\lambda}}_{i}^{A*}+U_{i}^{2}\mathbf{ \hat{\lambda}}_{i}^{A*}. \tag{64}\]
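Continuing the sketch above, the forward-sweep reconstruction of eqs. (62) and (64) reuses the quantities saved during elimination, with \(\hat{\boldsymbol{\lambda}}_{i}^{A*}\) supplied by the previous link:

```python
import numpy as np

def reconstruct_dual(U1, U2, Sig, K_t, l_t, a_i, lam_hat):
    lam_t = (K_t @ a_i + l_t) / Sig    # eq. (62): diagonal solve
    return U1 @ lam_t + U2 @ lam_hat   # eq. (64): back to original basis
```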
For the common case of a single d.o.f joint, \(L_{i}^{A}\) undergoes a rank-1 update in eq. (57) and computing its SVD is computationally simple, with the singular vectors given by the following symmetric reflection matrix 3,
Footnote 3: [https://math.stackexchange.com/questions/704238/singular-value-decomposition-of-rank-1-matrix](https://math.stackexchange.com/questions/704238/singular-value-decomposition-of-rank-1-matrix)
\[\begin{bmatrix}U_{i}^{1}&U_{i}^{2}\end{bmatrix}=I_{m_{if}\times m_{if}}-2\frac {\mathbf{w}_{i}\mathbf{w}_{i}^{T}}{\mathbf{w}_{i}^{T}\mathbf{w}_{i}}, \tag{65}\]
and the positive singular value is
\[\Sigma_{i}=\big{[}\|\mathbf{k}\mathbf{s}_{i}\|^{2}/D_{i}\big{]} \tag{66}\]
where
\[\mathbf{w}_{i}=\mathbf{k}\mathbf{s}_{i}+\frac{ks_{i1}}{|ks_{i1}|}\|\mathbf{k }\mathbf{s}_{i}\|\mathbf{e}_{1},\ \mathbf{k}\mathbf{s}_{i}=K_{i}^{A}S_{i}, \tag{67}\]
where \(ks_{i1}\) is the first element of \(\mathbf{k}\mathbf{s}_{i}\) and \(\mathbf{e}_{1}\) is the first canonical basis vector.
**Remark 6**: Since the rank-1 update SVD can be computed using just \(\mathbf{k}\mathbf{s}_{i}\) and \(D_{i}\), the \(L_{i}^{A}\) matrix need not be explicitly updated. Furthermore, the \(U_{i}^{1}\) and \(U_{i}^{2}\) matrices are not explicitly computed either, because they are only needed for multiplying other matrices in eq. (60) and eq. (64), which is efficiently achieved by simply multiplying by the right-hand side of eq. (65). For example,
\[\begin{bmatrix}\tilde{K}_{i}^{A}&\hat{K}_{i}^{A}\end{bmatrix}=\{I_{m_{if} \times m_{if}}-2\frac{\mathbf{w}_{i}\mathbf{w}_{i}^{T}}{\mathbf{w}_{i}^{T} \mathbf{w}_{i}}\}K_{i}^{A}. \tag{68}\]
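As an illustration of Remarks 6 and 7, the following hypothetical NumPy helper applies the reflection of eq. (65) to a matrix without ever forming \(U_{i}^{1}\) and \(U_{i}^{2}\):

```python
import numpy as np

def reflect_split(ks, M):
    # eq. (67); assumes ks[0] != 0, otherwise permute rows (Remark 7).
    e1 = np.zeros_like(ks); e1[0] = 1.0
    w = ks + np.sign(ks[0]) * np.linalg.norm(ks) * e1
    # eq. (68): multiply by I - 2 w w^T / (w^T w) without forming it.
    M_refl = M - (2.0 / (w @ w)) * np.outer(w, w @ M)
    return M_refl[:1], M_refl[1:]   # tilde (rank-1) part, hat part
```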
**Remark 7**: Eq. (67) assumes \(ks_{i1}\neq 0\). If \(ks_{i1}=0\), the rows of \(\mathbf{k}\mathbf{s}_{i}\) are permuted such that \(ks_{i1}\neq 0\), similarly to the pivoting methods in matrix factorization algorithms [45].
**Remark 8**: If \(\mathbf{k}\mathbf{s}_{i}=0_{m_{if}\times 1}\), the \(i\)-th joint's acceleration is unaffected by the constraint forces \(K_{i}^{AT}\boldsymbol{\lambda}_{i}^{A}\). In this case, the rank-1 update of \(L_{i}^{A}\) in eq. (57) would only add a zero matrix and is not performed. The terms \(K_{i}^{A}\) and \(\mathbf{l}_{i}\) are propagated to the parent link as in the original solver using eq. (42c) and eq. (42d) without size reduction.
### _Complexity analysis_
The PV-early solver's salient feature compared to the PV solver is that \(L_{i}^{A}\) is not computed (hence \(L_{0}^{A}\) is not factorized) and the terms \(K_{i}^{A}\) and \(\mathbf{l}_{i}\) reduce in size during the backward sweep instead of growing with the accumulation of constraints. If the number of rows of \(K_{i}^{A}\) and \(\mathbf{l}_{i}\) is bounded by 6, the complexity of the PV-early solver is \(O(n+m)\), as the number of operations at every joint is bounded by a constant.
**Remark 9**: If \(K_{i}^{A}\) and \(\mathbf{l}_{i}\) have more than 6 rows in the PV-early solver, it implies an over-constrained system with more than 6 constraints on a link's acceleration. Then the constraints are either feasible but redundant, or infeasible; one can respectively remove the redundant constraints to obtain a constraint matrix \(K_{i}^{A}\) with at most 6 rows, or declare infeasibility early.
## IX Experiments and Discussion
We now benchmark and discuss the proposed algorithms. We 1) explain our implementation, 2) benchmark the OSIM computation, 3) benchmark the constrained dynamics algorithms themselves, 4) empirically test the computational scaling of the different algorithms, and 5) discuss the results and limitations of the proposed algorithms.
### _Implementation_
We implemented the algorithms by extending Featherstone's highly readable MATLAB software toolbox SpatialV2 [57]. For computing the OSIM, we implemented the PV-OSIM and PV-OSIM-fast algorithms, and to benchmark them we also implemented the KJR, EFPA and LTL [32, 43] algorithms. For computing the constrained dynamics, we implemented the PV, PV-early and PV-soft algorithms, and to benchmark them we also implemented constrained dynamics algorithms using Featherstone's sparsity-exploiting LTL approach, considering both hard and soft motion constraints. Robot-specific C code was generated for these algorithms using CasADi's scalar expressions (SX) [58], and its runtimes are used for the comparison. All the numerical experiments were performed on a single CPU core of a laptop with an Intel i7-8850H CPU @ 2.60GHz processor running an Ubuntu 18.04 operating system. We disabled Intel Turbo Boost during the benchmarking to reduce CPU frequency variability.
Implementing rigid-body dynamics algorithms efficiently involves various nuances discovered by the robotics community over the years. For example, computing quantities in the local body frame instead of the inertial world frame can significantly reduce the number of operations needed [48]. Thus our implementation also uses the body frame, though the derivation of the algorithms in this paper uses the inertial frame for notational simplicity. Also, using the Denavit-Hartenberg (DH) structure for modelling the robot kinematics, whenever possible, makes the dynamics algorithms more efficient [59]. However, this is not always possible, e.g. for kinematic trees, where a parent link can, in general, have DH structure with only one of its child joints. [32, 43] carefully accounted for these nuances in their comparison of the LTL and ABA algorithms. Additionally, robot design can also significantly influence the operation count; e.g. some links in the KUKA Iiwa have a 90-degree rotation between the parent joint's axis and the child joint's axis, resulting in a rotation matrix with only 3 non-zeros (either 1 or -1), requiring even fewer computations than DH nodes. Therefore, an algorithm's operation count is robot-specific, and manually counting operations for a given robot and constraint combination while taking into account all the computational nuances would be tedious. Conveniently, CasADi's SX expression graph of an algorithm automatically provides the operation count, allowing us to compare the best possible robot-specific operation count of the different algorithms, which we report later in this section.
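As a small illustration of this workflow (the expression below is a toy one, not one of the dynamics algorithms), CasADi exposes the instruction count of an SX function directly:

```python
import casadi as ca

q = ca.SX.sym('q', 7)                       # e.g. a 7-d.o.f. arm
expr = ca.sin(q[0]) * q[1] + ca.cos(q[2]) * q[3]
f = ca.Function('f', [q], [expr])
print(f.n_instructions())                   # operations in the SX graph
```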
Our implementation further uses simple optimizations such as avoiding matrix-matrix operations whenever possible, and performing Cholesky factorization and solve instead of computing matrix inverses. The source code of the implementation 4 and the simulation videos of the proposed algorithms 5 are made available. Baumgarte's stabilization was used in the simulations to stabilize the constraints over a long period of time [60], choosing a stabilization period of 0.1 seconds to avoid overly stiff dynamics as suggested in [1, Section 8.3], which interested readers are referred to for further details.
Footnote 4: [https://github.com/AjSat/spatial_V2](https://github.com/AjSat/spatial_V2)
Footnote 5: [https://tinyurl.com/z78hkash](https://tinyurl.com/z78hkash)
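For reference, a minimal sketch of the Baumgarte-stabilized constraint right-hand side, assuming the critically damped form with time constant \(T=0.1\) s (the exact gain choice in our implementation follows [1, Section 8.3]):

```python
def stabilized_rhs(b, e, e_dot, T=0.1):
    # Enforce e_ddot + (2/T) e_dot + (1/T**2) e = 0 instead of e_ddot = 0,
    # so the constraint drift e decays over roughly the period T.
    return b - (2.0 / T) * e_dot - (1.0 / T ** 2) * e
```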
In our numerical experiments below, H and H\({}_{3}\) denote a general 6D and 3D constraint, respectively, on the 'hand' link of a robot, with the corresponding \(K_{i}\) being a random matrix of size \(6\times 6\) and \(3\times 6\) respectively. F and F\({}_{3}\) are defined similarly for the 'foot' link. For the Iiwa, the end-effector was considered the hand link.
### _Benchmarking the OSIM algorithms_
Figure 2 gives the operation count, along with an internal breakdown, for the proposed PV-OSIM and PV-OSIM-fast algorithms, the existing SOTA \(O(n+md+m^{3})\) EFPA algorithm [31] and the SOTA sparsity-exploiting \(O(nd^{2}+m^{2}d+dm^{2})\) LTL-OSIM algorithm [32]. Similarly to [31], we found KJR to be significantly slower than EFPA for all the considered robots, and KJR would also scale worse due to its higher complexity; hence we omit the KJR results.
We found PV-OSIM to be more efficient than the EFPA for all the considered robots. With the computation of the articulated body inertia \(I^{A}\), the task-space EFP \(K^{A}\) and the Cholesky decomposition of the inverse OSIM requiring the same number of computations for both algorithms, the difference arises in the inverse OSIM \(\Lambda^{-1}\) computation. This is because EFPA requires an additional forward sweep that propagates inverse inertia matrices forward with expensive similarity transformations, unlike the PV-OSIM, as discussed in section VII-C.
LTL-OSIM was the fastest algorithm for the KUKA Iiwa, which has only 7 d.o.f. However, for the 18 d.o.f Go1 robot the PV-OSIM was already slightly faster than LTL-OSIM due to its lower computational complexity. For bigger robots like the Atlas (37 d.o.f) and Talos (50 d.o.f), LTL was the slowest of all the considered algorithms due to its higher computational complexity. A major difference between the LTL vs EFPA comparison in [31] (which found EFPA to be slower than LTL for the Honda Asimo robot) and ours is that we also include the cost of computing the constraint Jacobian \(J\) in the LTL algorithm. We believe this to be a fairer comparison because the PV-OSIM and EFPA algorithms do not require \(J\): \(K^{A}\) propagates forces and accelerations from the end-effectors to other links, fulfilling a role similar to that of \(J\) in LTL. For a smaller number of constraints, both PV-OSIM and EFPA are faster than LTL for the Atlas robot. However, if we assume that \(J\) is computed elsewhere and is available for re-use, its computation cost can be excluded from the LTL operation count. Then our findings would concur with [31], where LTL would be faster than EFPA for Atlas with 18 or 24 constraints, but still slower than the PV-OSIM. For Talos, LTL was not competitive with the lower-order methods, especially due to the expense of computing and factorizing a bigger JSIM.
The PV-OSIM-fast avoids computing and factorizing the dense inverse OSIM matrix explicitly using the matrix inversion lemma, and scales better than the PV-OSIM as the size of the OSIM matrix increases. It is the fastest algorithm for the considered floating-base robots and even nearly 2x faster than the LTL for the humanoid robots.
Though the PV-OSIM was computationally faster than the EFPA for all the considered robots, the EFPA has a lower-order computational complexity of \(O(n+md+m^{2})\), compared to \(O(n+m^{2}d+m^{2})\) for the PV-OSIM, for computing the inverse OSIM \(\Lambda^{-1}\). This would make EFPA scale better than PV-OSIM for longer mechanisms with many constraints. To test this, we consider a long-stemmed mechanism (\(n_{\mathrm{stem}}\) is the number of links in the stem). From both stem ends, \(m_{\mathrm{branches}}\) chains of 7 links each branch out, as shown in fig. 2e. Each branch's tip link is fixed with a 6D weld constraint.
Figure 2f shows the computational scaling of the ratio of PV-OSIM and EFPA operation counts w.r.t. \(n_{\mathrm{stem}}\) for different values of \(m_{\mathrm{branches}}\). EFPA was found to be always slower than PV-OSIM for up to 8 branches (\(8\times 6\) constraints propagated) irrespective of \(n_{\mathrm{stem}}\). For 9 or more branches, the EFPA eventually becomes more efficient than PV-OSIM at a cross-over stem length \(n_{\mathrm{stem}}\). The value of the cross-over point depends on \(m_{\mathrm{branches}}\) as well as the branches' link length for the considered mechanism. More branches reduce the cross-over point, as EFPA can more efficiently propagate a large number of constraints through the stem links. A shorter branch length can also reduce the cross-over point, because the constraint propagation through the stem links (where EFPA is more efficient than PV-OSIM) then forms a larger fraction of the computations. For \(m_{\mathrm{branches}}=10\), the cross-over is \(n_{\mathrm{stem}}=54\) for a branch length of 7, which is a very large mechanism with \(54+10\times 7\times 2=194\) links. For an extreme branch length of only 1 link, the cross-over \(n_{\mathrm{stem}}\) can be as small as 7. Based on these findings, we conclude that the PV-OSIM requires fewer operations for most realistic robot mechanisms, unless one considers a heavily constrained mechanism with most constraints propagated through a large fraction of the joints.
### _Benchmarking constrained dynamics solvers_
We compared the PV, PV-e and PV-s solvers with the state-of-the-art sparsity-exploiting LTL solver of Featherstone [43, 32]. The LTL-OSIM [32] solver is a popular algorithm implemented in the high-performance simulator software Pinocchio [61]. The LTL solver is also used in MuJoCo [42], which uses a joint-space version of the soft-Gauss principle. To make a fair comparison with the LTL solvers, we implemented them ourselves; table II reports the computation time taken by the different algorithms. The type and the number of constraints imposed are reported next to the robot name in parentheses.
The computation times for the nominal C++ and C execution of Pinocchio (Pin) and MuJoCo (Mu) respectively cannot be considered a fair comparison, because they do not use code-generation (which prunes unnecessary computations) and may compute additional quantities that are not required for constrained dynamics. We still report their computation timings for reference, as an indication of the speed-ups these software packages may achieve by exploiting code-generation.
Fig. 2: Benchmarking the number of computation operations of the OSIM algorithms for various robots.
#### Hard motion constraints
The PV-solver was as fast or faster than the sparsity-exploiting LTL methods for all the considered robots. The difference, while negligible for the 7 d.o.f Iiwa robot, widens for larger robots and more constraints due to its lower order complexity. Our PV-e solver scales even better than the PV solver, due to its lower order complexity of \(O(n+m)\). For larger robots like Atlas or Talos with a high number of constraints, PV-e offers nearly a 50% and 30% reduction in computation compared to LTL and the PV-solver respectively.
#### Soft constraints
The last three columns of table II present the computation times of our PV-s solver (see section VI), our implementation of MuJoCo's soft Gauss principle using LTL, and the nominal C execution in MuJoCo itself. In MuJoCo, we imposed 6D weld-type equality constraints for F or H and 3D connect-type equality constraints for F\({}_{3}\) and H\({}_{3}\) respectively. We deactivated all other constraints and frictional contacts (turned on by default in MuJoCo) to ensure that it solves the same equality-constrained dynamics problems. The PV-s implementation is significantly faster than all the other algorithms. It is nearly twice as fast as LTL-s and nearly thrice as fast as LTL (which arguably solves a harder problem with hard motion constraints). No constrained dynamics algorithm that we know of is likely to compete with PV-s, since its computation cost is nearly the same as that of the ABA algorithm (the unconstrained forward dynamics algorithm with \(O(n)\) complexity).
#### Accuracy of the proposed solvers
We benchmarked the accuracy of the soft Gauss principle for different values of the penalty weights in fig. 3. We present the whisker plots of the \(\ell_{2}\) norm of the constraint residuals in fig. 3a and the \(\ell_{2}\) norm of the difference in \(\ddot{\mathbf{q}}^{*}\) relative to the PV solver (the reference algorithm, because it considers hard motion constraints) in fig. 3b, for the Talos robot with the 2H+2F constraint (both its feet and hands fixed with full 6D constraints) at 1000 different randomly sampled joint configurations. PV, PV-e and LTL, which solve for hard equality constraints, satisfy the constraints to a high level of accuracy, with PV-e appearing to be numerically slightly more stable than the other two. Both the soft Gauss solvers, PV-s and LTL-s, have a significantly higher constraint residual, though the residual keeps reducing as the penalty weights are increased. Both PV-s and LTL-s satisfy the constraints equally well. However, for weights higher than a certain point (\(\sim 10^{8}\)), the optimal joint accelerations computed by the soft Gauss solvers and the hard Gauss solvers begin to diverge due to numerical issues, where the high penalty weights begin to affect the joint acceleration solution in the nullspace of the constraints. Between the two soft Gauss solvers, PV-s appears to be more numerically stable than LTL-s.
### _Computational scaling_
We empirically tested the computational scaling of the different constrained dynamics algorithms and present the results in fig. 4. In fig. 4a, we show the computation times of the different algorithms for kinematic chains ranging from 6 to 100 revolute joints. The end-effectors are fixed with full 6D constraints. As expected, the \(O(n)\)-complexity PV, PV-e and PV-s solvers scale linearly and more gracefully than the higher-order LTL and LTL-s algorithms used in Pinocchio and MuJoCo respectively. Beyond a certain number of links,
\begin{table}
\begin{tabular}{c|c c c c|c c c} \hline Robot & PV & PV-e & LTL & Pin\({}^{*}\) & PV-s & LTL-s & Mu\({}^{*}\) \\ \hline Iiwa (0D) & **0.55** & **0.55** & 0.63 & 2.15 & **0.55** & 0.63 & 3.11 \\ Iiwa (H\({}_{3}\)) & 0.75 & **0.61** & 0.83 & 2.73 & **0.61** & 0.80 & 4.45 \\ Iiwa (H) & **1.01** & 1.09 & 1.08 & 3.53 & **0.63** & 0.89 & 4.88 \\ Go1 (0D) & **1.65** & **1.67** & 1.74 & 4.68 & **1.64** & 1.74 & 7.10 \\ Go1 (F\({}_{3}\)) & 1.88 & **1.81** & 1.96 & 5.61 & **1.70** & 1.84 & 11.2 \\ Go1 (2F\({}_{3}\)) & 2.10 & **1.98** & 2.20 & 6.40 & **1.76** & 1.98 & 12.0 \\ Go1 (3F\({}_{3}\)) & 2.32 & **2.14** & 2.48 & 7.33 & **1.82** & 2.16 & 12.8 \\ Go1 (4F\({}_{3}\)) & 2.53 & **2.33** & 2.85 & 8.20 & **1.90** & 2.33 & 13.5 \\ Atlas (0D) & **3.44** & **3.47** & 4.64 & 12.3 & **3.47** & 4.64 & 15.9 \\ Atlas (F) & 4.59 & **3.94** & 5.88 & 15.4 & **3.61** & 5.58 & 31.5 \\ Atlas (2F) & 6.09 & **4.40** & 7.52 & 18.5 & **3.73** & 6.61 & 34.2 \\ Atlas (2F+H) & 7.37 & **5.03** & 8.69 & 22.3 & **3.76** & 6.93 & 36.5 \\ Atlas (2F+2H) & 8.27 & **5.52** & 11.8 & 26.5 & **3.82** & 7.77 & 38.8 \\ Talos (0D) & **4.92** & **4.97** & 8.14 & 17.1 & **4.96** & 8.28 & 23.6 \\ Talos (F) & 5.63 & **5.48** & 9.25 & 21.1 & **4.96** & 8.65 & 51.3 \\ Talos (2F) & 6.72 & **6.45** & 10.9 & 25.2 & **4.99** & 9.21 & 54.3 \\ Talos (2F+H) & 8.40 & **7.06** & 13.4 & 30.0 & **5.08** & 10.6 & 57.0 \\ Talos (2F+2H) & 10.13 & **7.40** & 15.4 & 34.7 & **5.11** & 11.9 & 59.4 \\ \hline \end{tabular}
* Pin and Mu are nominal executions of Pinocchio and MuJoCo without code-generation and hence cannot be considered a fair comparison.
\end{table} TABLE II: Benchmarking the computational performance of the PV solver against other constrained dynamics solvers in MuJoCo and Pinocchio. All times are in microseconds.
Fig. 3: Benchmarking the numerical accuracy of soft Gauss solver for different weights.
the generated C-code for LTL and LTL-s became too large for effective compiler optimization and they became slower than even the nominal C++ execution in Pinocchio.
Then we compared the different algorithms on a highly constrained ladder-shaped mechanism (see fig. 4b) with \(m\sim O(n)\), with each rung consisting of 7 links. The segment connecting two ends of a rung on one side has 3 links, and the other ends of the rung are constrained to be fixed with full 6D constraints. The computational timings of the different algorithms as more rungs (and constraints) are added to the mechanism are presented in fig. 4c. The PV solver, with its cubic complexity in the number of constraints, also begins to scale badly like the LTL and LTL-s solvers, while the \(O(n+m)\) solvers PV-e and PV-s scale linearly.
### _Discussion and limitations_
#### Parallel algorithms
Our comparison was limited to implementations on a single core. However, the divide-and-conquer algorithms [37, 38, 39, 40] may be computationally faster, especially for bigger mechanisms, when multiple cores are utilized. On a single core, however, they are unlikely to be faster for typical robots, since they are known to be several times more expensive than ABA [38]. Due to the lack of an open-source implementation and the complexity of implementing these methods, we leave this comparison for future work.
Among these divide-and-conquer methods, the PV solver appears to be most closely related to the DCAp algorithm [38], which has outward acceleration propagation and inward force propagation similarly to the PV solver, and ABA is shown to be a special case of DCAp. It appears possible to provide an alternative derivation of the PV solver from the DCAp algorithm by placing a handle on the floating base and the constrained links. The handles on the constrained links would be in the constraint space instead of the spatial handles explicitly considered in [38]. Then, using the two-handle equation in [38, sec. 4.1], for a specific order of assembly from the leaf nodes to the root, it is possible to show that [38, eq. 29a, 29g, 29b, 29h, 29d] correspond to eq. (42a), eq. (42b), eq. (42c), eq. (42d) and eq. (42e) respectively. However, such an assembly ordering is not the recommended ordering in divide-and-conquer algorithms, as it does not assemble two trees of similar sizes, which is necessary for obtaining the reduced-order complexity of the divide-and-conquer methods.
Though there is no direct analogue of the PV-early algorithm in DCAp, a simpler form of early elimination can also be performed in DCAp when the \(L^{A}\) matrix reaches full rank, by eliminating the constraint forces by taking the Schur complement. As the divide-and-conquer methods are among the most complex rigid-body dynamics algorithms in the literature, deriving the PV solver this way may not be of interest to readers. However, this connection opens up interesting possibilities for parallelizing the algorithm, which we leave for future work.
#### Closed-loop solvers
The PV solver is closely related to the algorithms in [33] and [34]. In the PV solver's backward recursion, eq. (20c), eq. (20d) and eq. (20e) correspond to [33, eq. 16c, eq. 18a and eq. 18b] and [34, eq. 41c, eq. 51b, eq. 51a], respectively. Application-wise, the main difference between the PV solver and [33, 34] is that we consider known acceleration constraints (which include all the loop-closure constraints with the ground as a special case), while both [33, 34] tackle the harder problem of internal kinematic loop constraints. We also explicitly consider floating-base systems, which were not considered in [33], while [34] does consider floating-base systems in one of its examples though not in the main derivation. Both [33] and [34] can be straightforwardly adapted to solve the constrained dynamics problems considered by the PV solver. This connection between the PV solver, [33] and [34] appears not to have been made in the existing literature. Despite not being a fundamentally new algorithm, the expository PV solver derivation in section III and section V is of value to readers because it utilizes a different LQR perspective that permits a mechanistic derivation
Fig. 4: Computational scaling of the different algorithms.
of the algorithms, making the material accessible to researchers with a control and optimization background. In contrast, [33] required significant physical insight to come up with an efficient propagation of Newton-Euler solutions, similarly to the ABA algorithm [16]. However, the approach of [33] may be more accessible to researchers with a background in mechanics and without prior experience in optimal control or optimization.
#### \(O(n+m)\) solvers

Our expository derivation also allowed us to easily derive two different and original (to the best of our knowledge) \(O(n+m)\) solvers, using the soft Gauss principle adopted by MuJoCo and early elimination of the dual variables. A form of early elimination is also proposed in [33, 34], where the dual variables of a loop are eliminated after passing over all the links in that loop. For certain robot architectures where the loops are not heavily interconnected (the same link being part of multiple loops), their early elimination procedure can also lead to \(O(n+m)\) performance. Our early elimination is fundamentally different, as it reduces the dimensionality of the propagated constraints at every joint.
A more recent \(O(n+m)\)-complexity solver for kinematic loops [35] uses the same ideas as [33], introducing zero-mass phantom links for loop-cutting and performing early elimination at the loop level. However, unlike [33] and the PV solver, [35] proposes a Lagrange-multiplier-free algorithm based on Kane's formulation of constrained dynamics [36]. The algorithm in [35] is fairly complex, does not have an open-source implementation and does not appear to have been benchmarked against the PV solver, [33] or [34]. It is not obvious how to efficiently adapt it to the kinematic-tree structures considered by the PV solver. Despite [35] being a challenging algorithm to understand and implement, the Lagrange-multiplier-free approach is interesting and may be computationally beneficial, especially for mechanisms with kinematic loops, and will be investigated in the future.
The SVD currently proposed for PV-early is admittedly an expensive operation for multi-d.o.f joints, where we cannot exploit the efficient rank-1 update formulae presented in section VIII, unless the multi-d.o.f joints are modelled as a chain of several equivalent fictitious single-d.o.f joints. However, this workaround is non-ideal, as it introduces issues like representation singularities and the non-physical meaning of the velocities of these fictitious joints. It may be worthwhile to explore replacing the SVD with the more efficient rank-revealing QR decomposition [45] in the future, which provides the desired orthogonal bases similarly to the SVD.
#### OSIM and computational benchmarking

That the backward recursion in the PV solver, [34] and [33] provides an efficient algorithm to compute the OSIM is a new connection made in this paper that we could not find in the literature. We are also not aware of existing work that computationally benchmarked the PV solver or the algorithms of [33, 34] against the currently popular sparsity-exploiting methods of Featherstone for the constrained dynamics problems considered in this paper. Our findings indicate that for larger robots like humanoids, the sparsity-exploiting methods are not competitive with the PV solver, which has implications for existing simulators as well as for biomechanical applications, where the degrees of freedom are typically over 100.
Our benchmarking methodology included code-generating and compiling robot-specific C code, which, while contributing to the speeds we observe, is also a limitation, as we need to know in advance all the possible contact situations that may arise. Nominal C++ implementations such as Pinocchio can deal with these scenarios more effectively, as they do not require re-compilation at runtime. However, in many applications, e.g. humanoid walking, all the possible contact scenarios can be compiled in advance and loaded depending on the contact scenario using look-up tables. In any case, the speed-up we observed due to code-generation is high enough that it is interesting for simulators to explore a hybrid method combining the strengths of both code-generation and nominal C++ execution for different parts of the algorithm.
Finally, we refer interested readers to several extensions of the unconstrained LQR algorithm to equality-constrained problems [62, 63, 64, 65] in a control setting. Out of these methods, [62] is analogous to the original PV solver, and the method of [64] is most similar to our PV-early solver, as it also uses the SVD.
## X Conclusions and Future Work
### _Conclusions_
We provided a self-contained derivation of several advanced constrained dynamics solvers from first principles by connecting constrained dynamics to the LQR problem. Our derivation, building upon Vereshchagin's approach, is much simpler than the better-known SOA framework of Rodriguez [20] that uses this LQR connection. Our expository derivation extended the original PV solver to floating-base kinematic trees, which resulted in an algorithm closely related to [34] and [33], but derived using a different LQR perspective. This paper makes constrained dynamics accessible to researchers in optimization and control, as well as to roboticists with knowledge of control who currently treat robot dynamics as a black box and are therefore unable to debug or adapt existing dynamics software to their applications. The LQR connection can foster the transfer of software and ideas between the fields in the future. For example, recent research from data-driven LQR control may transfer to robust control of robots with uncertain dynamics. The optimization perspective in our derivation is valuable, as accounting for uncertainty in parameters is performed naturally in an optimization framework [66, 67].
The equality we showed between LQR's dual Hessian and the inverse OSIM provided an efficient state-of-the-art OSIM algorithm, which we further significantly accelerated for specific, but common, robot structures that have branching at the base. The LQR-based approach allowed straightforward derivation for the PV-s and PV-early algorithms, resulting in two original algorithms with \(O(n+m)\) complexity. Our numerical experiments suggest that the PV solver is computationally superior to currently popular higher-order sparse factorization algorithms by Featherstone for larger robots like the humanoid robot Atlas, for which the LTL needs up to 2x more computations than the PV-solver. This PV-solver speed-up can be arbitrarily higher for longer mechanisms, typical
in biomechanical applications, due to the inherent complexity difference. Finally, our work recognizes the historical contribution of Popov and Vereshchagin, who proposed the first \(O(n)\) _constrained_ dynamics solver, which remarkably remains the state of the art nearly fifty years after its invention and yet is largely unknown in the robotics community.
### _Future work_
There are multiple exciting directions for future work, apart from the applications in robot control and trajectory optimization. The algorithms presented here are limited to equality constraints, and it is a natural research direction to extend the algorithms to include internal kinematic loops, frictional contacts and unilateral contact constraints. We will also explore proximal point iterations [6] for applying the solver to problems with ill-conditioned and nearly redundant constraints. Analytical gradients, which are found to be faster than automatic differentiation, can also be developed for the PV solver for optimal control and reinforcement learning applications. In particular, transfer of new research results from data-driven LQR to robot control is an exciting future research direction.
## Acknowledgement
The authors thank Prof. Jan Swevers, Bastiaan Vandewal and Alejandro Astudillo Vigoya for their valuable feedback on previous versions of the manuscript. The authors also thank the anonymous reviewers for their valuable comments and suggestions. We especially thank the anonymous reviewer 1 for the extensive review and for pointing us to important literature that we were not aware of (e.g. Brandl et al.'s paper).
|
2307.03996 | ReviewRanker: A Semi-Supervised Learning Based Approach for Code Review
Quality Estimation | Code review is considered a key process in the software industry for
minimizing bugs and improving code quality. Inspection of review process
effectiveness and continuous improvement can boost development productivity.
Such inspection is a time-consuming and human-bias-prone task. We propose a
semi-supervised learning based system ReviewRanker which is aimed at assigning
each code review a confidence score which is expected to resonate with the
quality of the review. Our proposed method is trained based on simple and
well defined labels provided by developers. The labeling task requires little
to no effort from the developers and has an indirect relation to the end goal
(assignment of review confidence score). ReviewRanker is expected to improve
industry-wide code review quality inspection through reducing human bias and
effort required for such a task. The system has the potential of minimizing the
back-and-forth cycle existing in the development and review process. Usable
code and dataset for this research can be found at:
https://github.com/saifarnab/code_review | Saifullah Mahbub, Md. Easin Arafat, Chowdhury Rafeed Rahman, Zannatul Ferdows, Masum Hasan | 2023-07-08T15:37:48Z | http://arxiv.org/abs/2307.03996v1 | # ReviewRanker: A Semi-Supervised Learning Based Approach for Code Review Quality Estimation
###### Abstract.
Code review is considered a key process in the software industry for minimizing bugs and improving code quality. Inspection of review process effectiveness and continuous improvement can boost development productivity. Such inspection is a time-consuming and human-bias-prone task. We propose a semi-supervised learning based system ReviewRanker which is aimed at assigning each code review a confidence score which is expected to resonate with the quality of the review. Our proposed method is trained based on simple and well defined labels provided by developers. The labeling task requires little to no effort from the developers and has an indirect relation to the end goal (assignment of review confidence score). ReviewRanker is expected to improve industry-wide code review quality inspection through reducing human bias and effort required for such a task. The system has the potential of minimizing the back-and-forth cycle existing in the development and review process. Usable code and dataset for this research can be found at: _[https://github.com/saifarnab/code_review_](https://github.com/saifarnab/code_review_)
code review, semi-supervised learning, confidence score, neural network
## 1. Introduction

If a developer can clearly understand from a code review the changes that he has to make in the codebase, then that review is probably of good quality. In this paper, we focus on modeling the developer confidence in a review.
One way is to simply frame this task as a supervised learning task where the input is a review and the output is the confidence score for that review. The output labeling would be performed by the developer to whom the review had been sent for making changes in the codebase. Figure 2 shows the problem with such labeling. We can see a review in the figure which has been marked as good, average, below average and poor by a significant set of developers from three different software companies. We performed this experiment on 25 reviews in total and got more or less similar results. Let us understand what this means. There are developers who are broad-minded and will give a good score even when the review is not that good. The opposite end of the spectrum is also equally visible in the industry. The score assigned by a developer also depends on the mood he is in at that particular moment. In short, this labeling process is highly dependent on human perception, which can vary widely from person to person.
We propose an alternative labeling scheme in this paper which indirectly trains a set of three models and enables them to predict the confidence scores for a particular set of reviews. We call this semi-supervised learning approach _ReviewRanker_. The labeling involves three simple multiple-choice questions (one for each of the three models) regarding: (a) the understanding of the type of change to perform in the code, (b) the understanding of what to insert and (c) what to delete from the code based on the review of interest. We performed a similar experiment (as in Figure 2) with these three multiple-choice questions and found that the choices made by the developers from different companies are similar unless the review is largely vague. Thus we have come to the conclusion that the answers to these questions are not biased by the human-perception side of the developers.
During inference (after training is done with a set of labeled reviews), we provide a code review as input to the three models for predicting the answers to the three questions (see Figure 3). We get three confidence scores from these three models corresponding to the ground-truth answers of these questions (labeled by a developer in advance). We obtain the final confidence score from these three scores. Thus we model the confidence of the developer in understanding the review given to him or her.
Mainly three types of related studies have been performed regarding code review analysis: (1) theoretical studies on different aspects of code reviewing (Friedman et al., 2012; Goyal et al., 2013; Goyal et al., 2013; Goyal et al., 2013), (2) assisting reviewers by problematic code snippet identification (Bahdan et al., 2014) and (3) reviewer recommendation (Bahdan et al., 2014; Goyal et al., 2013). Although **RevHelper**(Goyal et al., 2013) was developed to measure code review usefulness, it is actually a binary classification tool (useful vs. not useful) and does not assign any quality score to the review of interest. Moreover, this method suffers from the human-bias aspect that we have discussed in detail with Figure 2.
## 2. Problem Definition
The input of ReviewRanker is a large set of code reviews \(R\). The output is a confidence score \(C_{i}\) for each review \(R_{i}\in R\), where \(C_{i}\in[0,1]\). A higher confidence score denotes higher review quality.

\(C_{i}\) is the combination of three different confidence scores coming from three different questions related to review \(R_{i}\). The answer of each question \(Q_{ij}\) is predicted by a model \(M_{j}\) that frames the question answering as a classification task. We get a confidence score \(C_{ij}\) (associated with the ground-truth label answer) from each model \(M_{j}\) for each question \(Q_{ij}\) for the review of interest \(R_{i}\). The final confidence score \(C_{i}\) of review \(R_{i}\) is the geometric mean of all the \(C_{ij}\)'s, where \(j\in\{1,2,3\}\).
The three questions are as follows:
1. What type of operation (change in code) did the code review suggest (multi-class classification)?
2. Did you understand what to insert in the code from the review (binary classification)?
3. Did you understand what to delete from the code reading the review (binary classification)?
Unlike questions related to directly assigning a quality score to a review, these three questions are straightforward and have little to no human bias.
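As a minimal sketch (the helper name is ours, not from the released code), the final score is the geometric mean of the three per-question confidences:

```python
import numpy as np

def review_confidence(c_op, c_insert, c_delete):
    """Final confidence C_i from the three model confidences C_ij."""
    return float(np.cbrt(c_op * c_insert * c_delete))

# A review whose operation type is predicted confidently but whose
# insert/delete intent is unclear gets a middling overall score:
print(review_confidence(0.9, 0.4, 0.5))  # ~0.56
```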
## 3. Related Works
Research has been undertaken to automate the process of reviewing code by using static checks such as standard violations and common structural defects, while other researchers have focused on automating reviewer recommendation and problematic code detection.
### Studies on Code Review
Semi-structured individual interviews were conducted with seven developers from Microsoft in (Goyal et al., 2013). They concluded that prior knowledge of files leads to useful comments and tends to increase efficiency. The contemporary code review process at Microsoft was looked into in (Friedman et al., 2012). The research shows that Microsoft developers spend on average four hours per week on code review, while open-source developers spend five hours. Microsoft developers give more attention to reviewing relationships with developers compared to open-source developers. An observational survey of Mozilla's 88 core developers was conducted in (Goyal et al., 2013). The authors found that approximately 57-69% of the developers reviewed fewer than 5 patch files, 10% reviewed 11 to 20 such files, and 4% reviewed more than 21 patch files each week. A study described why code review is responsible for evaluating the reliability of test code and what professional developers do to review test code, by analyzing 300,000 code reviews from open-source projects (Goyal et al., 2013).
### Code Review Automation Empirical Studies
A prototype tool named **Code Distance Visualiser** was proposed in (Bahdan et al., 2014) to detect problematic code such as string overflows, memory leaks, null pointer references, and incorrect API usage. The **ReviewBot** model was proposed in (Bahdan et al., 2014), automating source code checking with a static analyzer and recommending reviewers based on the belief that every line of code has a past history. The **cHRev** model used three metrics to measure the expertise of reviewers based on their review comments: 1) a higher review count, 2) the reviewer's effort in the workday and 3) higher weights assigned to the latest reviews (Kang et al., 2018). **RevFinder**, a recommendation model for reviewers based on file location, was developed in (Kang et al., 2018). According to their heuristics, files with identical paths should be reviewed by identical reviewers. To analyze similar file paths, they used four string comparison techniques: 1) longest common prefix, 2) longest common suffix, 3) longest common subsequence and 4) longest common substring. **RevRec**, developed in (Kang et al., 2018), consists of two models: the reviewer expertise model (RevRecRE) and the reviewer collaboration model (RevRecRC). They evaluated three open-source projects - Android, OpenStack, and Qt. A comparative study on code review usefulness was conducted based on textual features and reviewer expertise in (Kang et al., 2018). The authors proposed a machine learning model named **RevHelper** to predict the usefulness of a review comment. Their comparative study was based on two heuristics: 1) differences between useful and non-useful reviews and 2) how the reviewers' experience helps them provide appropriate reviews.
## 4. Dataset description
The steps of the dataset creation process for this research are briefly shown in the leftmost box of Figure 6. We describe each of these steps in detail in this section.
### Data Source
We have collected our data from multiple open-source projects hosted on Gerrit 1. Gerrit is a popular tool for code review in both open-source and commercial code repositories. Gerrit provides an easily accessible REST API 2 for collecting code reviews and their related code. We have created a _Gerrit Miner_ using **Java** that mines code reviews from open-source code repositories such as **Android & Iotivity** and stores them in a **MySQL** database. We later query the database and label the reviews with different criteria described in detail in the upcoming subsections.
Footnote 1: [https://www.gerritcodereview.com/](https://www.gerritcodereview.com/)
Footnote 2: [https://gerrit-review.googlesource.com/Documentation/rest-api.html](https://gerrit-review.googlesource.com/Documentation/rest-api.html)
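To make the mining step concrete, here is a hedged sketch of querying a Gerrit server's documented /changes/ REST endpoint from Python (the host, query and fields are illustrative; our actual miner is written in Java):

```python
import json
import requests

# Fetch ten merged changes together with their review messages.
resp = requests.get(
    "https://android-review.googlesource.com/changes/",
    params={"q": "status:merged", "n": 10, "o": "MESSAGES"},
)
# Gerrit prefixes JSON responses with a )]}' guard line; strip it.
payload = json.loads(resp.text.split("\n", 1)[1])
for change in payload:
    print(change["_number"], change["subject"])
```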
### Data Labeling
We have created a labeling application with the _Django_ framework in **Python (Python, 2017)**. The labeling app was designed to be user-friendly and intuitive. On entry, the web app asks for the login credentials of the user. Once they are provided, it goes directly to the labeling page and displays a code review comment to the user. The user is asked what type of operation (change in code) the code review suggests (see Figure 4). Four options are provided in the form of a drop-down menu: _Insert_, _Delete_, _Replace_, _and Not Enough Information_. The web app provides the private URLs to the source code, and by clicking the link the user can view the source code, where the code review
Figure 3. Code review confidence score estimation overview
Figure 2. Human bias in code review quality labeling
was submitted, and the later modification (accepted by reviewer) in the source code side by side (see Figure 5).
When the user selects one of the four operations from the drop-down menu, he/she is also asked to provide the code snippet that is impacted by the operation. If the operation is an _Insert_ operation, the user is supposed to provide the code snippet that was to be inserted, in a text field named _Add Code_ (only if it is understood from the review what was to be inserted). If the operation is a _Delete_ operation, the user puts the code that was to be removed from the original code in the text box named _Remove Code_ (only if it is understood from the review what was to be removed). If the operation is a _Replace_ operation, the user puts the part of the code that changed in the _Remove Code_ text box, and the part that it changed into in the _Add Code_ text box (only if both these parts can be understood from the code review alone). We also took a human-centric approach to designing the labeling app. Each time a sample was submitted, the web page changed its background color so that the labeling process would not become monotonous and would also give a sense of progress to the user.
### Label Validation
The reviews were labeled by a team of five independent volunteers who possess substantial experience in programming. All the labelers have a Computer Science background and more than two years of working experience with programming languages such as C and Java, specifically in the areas of Android and Iotivity. To ensure consistency in the labeling process, 10% of the reviews were given to all the participants for labeling. The remaining 90% of samples were unique to each labeler. The admin frequently examined 10% of the data labels to check for any discrepancies among the labelers. If there was considerable variation in the labeling, appropriate measures were taken to make the data labels more consistent. Later on, the entire dataset was manually labeled and reviewed by senior software developers to ensure proper validation of the assigned labels. The final confirmation of the labeling was obtained from the admin and considered conclusive for this dataset.
## 5. Materials and Methods
Figure 6 provides an overview of the steps in developing ReviewRanker. We have already described the dataset creation step in the previous section. In this section, we elaborate on the next four steps, which concern the ReviewRanker training and inference phases.
### Data Preprocessing
#### 5.1.1. Data Labeling
Our initial dataset consisted of 2052 review comments. After the elimination of redundant samples, we are left with 1483 sample reviews in our final dataset. Let us describe the ground-truth label assignment process for the three multiple choice questions asked for each review (the three questions can be found in Section 2). In a real-life scenario, the ground-truth labels associated with a particular review are expected to be assigned by the developer(s) to whom the review is directed during the development process. Observing the questions, it is evident that it will take little to no effort from the developers to perform this labeling process.
We start with the operation (code change) related question. We define four types of operations: (1) replace (class label 0), (2) delete (label 1), (3) insert (label 2) and (4) not enough information (no label assigned). If a review operation is assigned as "not enough information", then we simply assign that review a confidence score of 0 and exclude that review from ReviewRanker training and inference.

Figure 4. Data labeling app front end

Figure 5. Code context view during code review labeling
The next two questions are about understanding what to insert and what to remove from the current code base (both are binary classification tasks). If it is clear from the review what to insert, then the insertion-related question receives a ground-truth label of 1; otherwise the label is 0. The same goes for the deletion-related question.
If the operation is labeled as "replace" (first question), then it is expected that the labels of both the insertion- and deletion-related questions will be 1 (this will not always happen in non-ideal cases). Similarly, if the operation is labeled as "delete", then the label of the deletion-related question is expected to be 1, while the insertion-related question will have a label of 0 in an ideal world; the opposite holds if the operation is labeled as "insert".
Let us now look at an example review - "outer parens not needed". The labels for this review are as follows:
**Operation Type:** delete (label 1)
**Understanding of something to be added:** nothing to add (label 0)
**Understanding of something to be deleted:** parentheses need to be deleted (label 1)
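These ideal patterns give a one-line consistency check that can be run over the labeled dataset; a minimal sketch:

```python
# Ideal (add_label, remove_label) pattern implied by each operation type.
EXPECTED = {"replace": (1, 1), "insert": (1, 0), "delete": (0, 1)}

def is_ideal(operation, add_label, remove_label):
    """True when the two binary labels match the ideal pattern for the operation."""
    return EXPECTED[operation] == (add_label, remove_label)

print(is_ideal("delete", 0, 1))  # the "outer parens not needed" example above -> True
```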
#### 5.1.2. Similar Word Handling
Our corpus contains more than 3000 unique words, which is a large number considering the small corpus size (fewer than 1500 reviews). So, by replacing all semantically identical words with a single word, we minimize the word list, which helps our model find acceptable relationships between words. While doing so, we use both word stemming and lemmatization. Using word stemming, we can reduce a word's plural form to singular, normalize its grammatical state, and so on. Consider the words provided below:
[program, programs, programmer, programming, programmers]
The above words are generated from the word "program". Through the word-stemming process, we replace all of these words with the word **program** in our unique word list. Using word lemmatization, we can generate a similar set of words from a single word. For example, the word **minor** generates the following words:
[minor, little, modest, belittled]
These words are verbally similar to the word **minor**. Thus we replace all of these words with the word **minor** in our unique word list as well. By doing so, our corpus now contains around 1700 unique words.
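A minimal sketch of this normalization step, assuming NLTK's Porter stemmer and WordNet synonyms as stand-ins for the exact tools used:

```python
from nltk.stem import PorterStemmer
from nltk.corpus import wordnet  # requires a one-time nltk.download("wordnet")

stemmer = PorterStemmer()

def canonical_form(word, vocabulary):
    """Collapse a word onto a single representative to shrink the unique-word list."""
    # Stemming normalizes plural and grammatical variants: programs, programming -> program.
    stem = stemmer.stem(word.lower())
    if stem in vocabulary:
        return stem
    # WordNet lemmas supply verbally similar words; if any of them is already a
    # known representative, reuse it instead of keeping a separate entry.
    for synset in wordnet.synsets(word):
        for lemma in synset.lemma_names():
            if lemma.lower() in vocabulary:
                return lemma.lower()
    return stem

vocab = {"program", "minor"}
for w in ["programs", "programming", "Program"]:
    print(w, "->", canonical_form(w, vocab))  # all three map to "program"
```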
#### 5.1.3. Special Word Handling
Our dataset contains code reviews that include a significant number of special words specific to C code that have no real meaning but play a very important role in review comments. Our proposed model works based on the textual relationship between normal words and these special words. Hence we replace these words with common placeholder words based on their operational characteristics. First, we lowercase the starting letter of all words in our corpus. After that, for each word:
* If the word has any uppercase letter, then we replace the word with **keywordvariable**, considering we usually use camel case to write variables.
* Otherwise, if the word contains **.h** or #, then we replace the word with **keyworddoth**. The presence of such special characters denotes header files in C programming.
* Otherwise, if the word contains _, then we replace the word with **keywordunderscore**. A word containing an underscore is ambiguous: it may denote a function or a variable. That is why we treat such words with a special keyword.
* Otherwise, if the word contains parentheses, then we replace the word with **keywordfunction**, considering all functions must initiate with a pair of parentheses.

Figure 6. ReviewRanker development overview
After such special keyword handling, our corpus contains 1368 unique words, down from the initial 3000.
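The rules above translate directly into a small normalization function; the sketch below assumes whitespace tokenization and mirrors the precedence order of the rules as listed.

```python
def normalize_token(token):
    """Replace C-specific tokens with placeholder keywords, in the rule order listed above."""
    word = token[0].lower() + token[1:] if token else token  # lowercase only the first letter
    if any(ch.isupper() for ch in word):
        return "keywordvariable"    # a remaining uppercase letter suggests camelCase
    if ".h" in word or "#" in word:
        return "keyworddoth"        # header files and preprocessor tokens
    if "_" in word:
        return "keywordunderscore"  # underscore: function or variable, kept separate
    if "(" in word or ")" in word:
        return "keywordfunction"    # functions initiate with a pair of parentheses
    return word

print([normalize_token(t) for t in "call initFoo() in my_header.h".split()])
# -> ['call', 'keywordvariable', 'in', 'keyworddoth']
```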
### Feature Extraction
In order to feed a review to a model as input, we need a mathematical representation of that review. We have 1368 unique words in our preprocessed dataset (see Section 5.1.3). Each review contains a subset of these words. So, we represent each review with a vector \(V\) of size 1368, where \(V_{i}\) represents the total count of \(word_{i}\) found in the review. Let us look at two examples:
**Review sample 1:** line over fifty characters you should reduce it to twenty characters.
**Review sample 2:** provide line level comment to line.
If we create a unique word list from this corpus, it would be:
[line, over, fifty, characters, you, should, reduce, it, to, twenty, provide, level, comment]
We can index these words from 0 to 12. The feature vectors for the two sample reviews are shown in Table 1.
Instead of utilizing word embedding based approaches such as Word2Vec (Devlin et al., 2017) and FastText (Bahdan et al., 2017), we have opted for a bag-of-words type of approach (Krishna et al., 2018). Word embeddings produce semantic vectors for each word and are typically employed with recurrent neural networks (RNNs) (Krishna et al., 2018). However, due to our small dataset and straightforward classification tasks, we have observed through five-fold cross validation that a basic shallow neural network with bag-of-words features outperforms RNNs with word embeddings.
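As a sketch, the count-vector construction can be reproduced in a few lines of Python; the two sample reviews and the 13-word vocabulary are the ones from Table 1.

```python
vocabulary = ["line", "over", "fifty", "characters", "you", "should", "reduce",
              "it", "to", "twenty", "provide", "level", "comment"]
index = {word: i for i, word in enumerate(vocabulary)}

def to_feature_vector(review):
    """Count how many times each vocabulary word occurs in the review."""
    vector = [0] * len(vocabulary)
    for word in review.lower().replace(".", " ").split():
        if word in index:
            vector[index[word]] += 1
    return vector

print(to_feature_vector("line over fifty characters you should reduce it to twenty characters."))
# -> [1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 0, 0, 0]
print(to_feature_vector("provide line level comment to line."))
# -> [2, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1]
```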
### Model Details
Our proposed algorithm combines three models as shown in Table 2. Details of the classes present under each model can be found in Section 5.1.1. Each model is a fully connected vanilla neural network, but with a different set of parameter values. The input layer is of size 1368 (word frequency vector: the total number of unique words is 1368). \(M_{1}\) and \(M_{2}\) are used for binary classification while \(M_{3}\) is used for multi-class classification (three classes). The **ReLU** activation function (Krishna et al., 2018) has been used for the intermediate layers, while **Softmax** has been used for the output layer. A dropout of 20% has been applied between consecutive hidden layers to prevent overfitting (Bahdan et al., 2017). **Categorical Cross Entropy** (Krishna et al., 2018) has been used as the loss function, while the **Adam** (Adaptive Moment Estimation) optimizer (Kingma and Ba, 2014) has been used for weight updates.
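A sketch of one task-specific model in Keras follows. The input width (1368), activations, dropout rate, loss, and optimizer are as described above; the two hidden-layer sizes are illustrative assumptions, since the paper only states that each model uses a different set of parameter values.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_model(num_classes, hidden_sizes=(256, 64)):
    """Fully connected network over the 1368-dimensional word-frequency vector."""
    model = keras.Sequential()
    model.add(layers.Dense(hidden_sizes[0], activation="relu", input_shape=(1368,)))
    for size in hidden_sizes[1:]:
        model.add(layers.Dropout(0.2))  # 20% dropout between consecutive hidden layers
        model.add(layers.Dense(size, activation="relu"))
    model.add(layers.Dense(num_classes, activation="softmax"))
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return model

m1 = build_model(num_classes=2)  # M1: add code (insertion understanding)
m2 = build_model(num_classes=2)  # M2: remove code (deletion understanding)
m3 = build_model(num_classes=3)  # M3: operation (change type)
```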
### Review Confidence Score Generation
Table 3 illustrates the entire process of confidence score generation for two sample reviews (we assume that the three task-specific models \(M_{1}\), \(M_{2}\) and \(M_{3}\) are already trained). The feature vector of each review is passed through all three models separately. Each model provides a discrete probability distribution over the task-specific classes. For example, model \(M_{3}\) always provides three probability values (summing to 1) for the three operation-type classes. For each model, we only take the probability score associated with the ground-truth class label (expected to be available for all reviews). Thus, for one review, we get a total of three confidence scores (predicted probability values) from the three models. The final confidence score is the geometric mean (\((C_{1}\times C_{2}\times C_{3})^{1/3}\)) of these three confidence scores. A higher confidence score denotes higher review quality, as it is expected that the developer confidence in such reviews will be high.
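The final score is then just the geometric mean of the three true-class probabilities; a minimal sketch:

```python
def review_confidence(c1, c2, c3):
    """Geometric mean of the true-class probabilities produced by M1, M2 and M3."""
    return (c1 * c2 * c3) ** (1.0 / 3.0)

# Second sample of Table 3: confident insertion, uncertain deletion, confident operation.
print(round(review_confidence(0.999, 0.443, 0.888), 3))  # -> 0.732
```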
### Confidence Score Generation for the Entire Review Set
The expected input to the ReviewRanker system is not a single review, but an entire set of reviews labeled with the three questions/tasks. The three models that are part of ReviewRanker are trained on a fraction of this labeled review set. The confidence scores for the reviews are obtained in a 10-fold cross validation style. Given a large set of labeled reviews \(S\), we first randomly divide the set into 10 small disjoint subsets \(S_{1},S_{2},\ldots S_{10}\) of reviews. For fold no. \(i\) of the 10-fold cross validation, we use all subsets \(S_{j}\) (\(j\neq i\)) for training the three models (from randomly assigned initial weights) and finally use the trained models to predict the final confidence scores of the validation review subset \(S_{i}\). After doing this 10 times for the 10 folds, we obtain review confidence scores for all the reviews in the entire review set \(S\). The important thing to note here is that the confidence score of each review is obtained only when that review is part of the validation subset. This is done to avoid obtaining overfitted scores on training data (many of the confidence scores of training data are close to 1).
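A sketch of this cross-validation loop, assuming `X` is the matrix of word-frequency vectors, `onehot` holds one-hot targets per task, `true_class` holds integer class ids per task, the number of epochs is an assumption, and `build_model` is the sketch from the previous section:

```python
import numpy as np
from sklearn.model_selection import KFold

def score_all_reviews(X, onehot, true_class):
    """Return the final confidence score for every review, computed on validation folds only."""
    n_classes = {"add": 2, "remove": 2, "operation": 3}
    confidence = {task: np.zeros(len(X)) for task in n_classes}
    for train_idx, val_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
        for task, k in n_classes.items():
            model = build_model(num_classes=k)  # fresh random initial weights each fold
            model.fit(X[train_idx], onehot[task][train_idx], epochs=20, verbose=0)
            probs = model.predict(X[val_idx], verbose=0)
            # Keep only the probability assigned to the ground-truth class,
            # and only for reviews in the validation fold.
            confidence[task][val_idx] = probs[np.arange(len(val_idx)), true_class[task][val_idx]]
    return (confidence["add"] * confidence["remove"] * confidence["operation"]) ** (1 / 3)
```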
## 6. Results and Discussion
### Manual Inspection of Assigned Review Quality
We examine both the review text and its corresponding confidence score to gain insight into the behavior of the proposed ReviewRanker system. Our goal is to understand why certain reviews receive higher scores than others. To this end, we randomly selected several reviews with high, average, and low confidence scores and analyzed their content (shown in Table 4). Through our analysis, we discovered that reviews with higher confidence scores are generally easy to understand, provide clear suggestions for changes to the code, and use specific variable and function names. Reviews with average confidence scores are sometimes easy to understand but lack substantive information, are excessively long, or contain lengthy blocks of code. Reviews with very low confidence scores are often too short to understand, lack meaningful information, and include asterisks and other special characters. Since ReviewRanker is composed of three training-based neural network models, it is a data-hungry system. So, the larger the provided review set, the better ReviewRanker will be able to model the developer confidence in a particular review.
| | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Sample 1 | 1 | 1 | 1 | 2 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 |
| Sample 2 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 |

Table 1. Illustrative example of feature vector formation
| Model Name | Associated Question |
|---|---|
| \(M_{1}\) | add code (insertion understanding) |
| \(M_{2}\) | remove code (deletion understanding) |
| \(M_{3}\) | operation (change type) |

Table 2. Task specific models
### Model Performance
Table 5 shows the dataset size and performance of the three ReviewRanker models across the 10 folds. The high mean validation accuracy shows that the models can learn to answer the three simple questions associated with review confidence score generation effectively and can generalize well to validation data. The reported performance has some implications for the usage of ReviewRanker. If, for some particular set of code reviews, the 10-fold cross validation performance is not up to the mark, then what it means is that the three models have not been able to understand how to answer the three questions for the provided reviews. In that case, the final confidence score provided by ReviewRanker will not be a reliable metric to measure review quality.
| Review | Confidence Score | Verdict |
|---|---|---|
| Careful, this is a running number. No two xxx_resource() calls may have the same number there or they'll overwrite each other | 0.999 | Good |
| Could you please separate this change out into single commit to keep SoC and mainboard changes separate? | 0.968 | Good |
| I'm not sure about the 'C' maybe only add the long option for now? | 0.92 | Good |
| That tab will most certainly elicit a response by one of the 'update license headers' scripts | 0.86 | Good |
| don't enable it now without adding proper ASL entries. | 0.823 | Good |
| This can be push later with required ASL init | 0.746 | Average |
| Why not just provide a define that is the C-state register? | 0.717 | Average |
| #define BLAH (ACPI_PMIO_BASE + 0x14) | 0.654 | Average |
| (foo)* should be "(foo ")" | 0.638 | Average |
| ...and these (or are these just numbers? Well, it's unclear, so some sort of documentation is needed) | 0.638 | Average |
| region_device_sz(#file_data) | 0.634 | Average |
| nit: DD12_HPD_ODL | 0.555 | Poor |
| drop | 0.443 | Poor |
| no #ifs | 0.345 | Poor |
| uintptr_t | 0.231 | Poor |
| 212_IRQ like line 37? | 0.189 | Poor |

Table 4. Sample code reviews of different qualities and corresponding ReviewRanker assigned confidence scores
| Review | Model | Sub-Task | True Class | Predicted Probabilities | True-Class Confidence |
|---|---|---|---|---|---|
| **Line over** | \(M_{1}\) | Add code | 0 (add nothing from review) | 0: 0.973, 1: 0.027 | 0.974 |
| | \(M_{2}\) | Remove code | 1 (remove something from review) | 0: 0.033, 1: 0.967 | 0.967 |
| | \(M_{3}\) | Operation | | 2: 0.222 | 0.983 |
| **Above it's** | \(M_{1}\) | Add code | 1 (add something from review) | 0: 0.001, 1: 0.999 | 0.999 |
| | \(M_{2}\) | Remove code | 1 (remove something from review) | 0: 0.557, 1: 0.443 | 0.443 |
| | \(M_{3}\) | Operation | 0 (replace) | 0: 0.888, 2: 0.029 | 0.888 |

Table 3. Illustrative examples of code review confidence score generation process
### ReviewRanker Validation
ReviewRanker has not been validated at an industry-wide scale. We made an effort to validate ReviewRanker at a small scale in three different software companies. But just as we mentioned in the _Introduction_ section, there is high human bias when it comes to manually assigning a quality score to a review as part of the labeling process. Hence, our effort remained unsuccessful. Nevertheless, this is a system that has the potential of providing an effective review quality score at industry scale. The system works end-to-end: the input is a set of reviews (with no limitation on the number of reviews provided in the set) and the output is a CSV file containing a confidence score for each of the provided reviews. These scores can be used to find out the characteristics of high, average and poor quality reviews, which in turn can aid software industries in coming up with proper guidelines for providing code reviews. This can save considerable time and cost by minimizing the occurrence of develop-review-develop cycles. Designing an effective industry-wide validation study can be an immediate next research step for ReviewRanker.
### Limitations
ReviewRanker asks three questions regarding change type, code addition and code deletion while providing the confidence score for a particular review. It does not use the context of the code on which the review was provided. But we firmly believe that using the code review context in the models answering the three questions can greatly benefit the confidence score generation process. In such a case, sequence modeling approaches such as Long Short Term Memory (LSTM) (Liang et al., 2017) or Transformer (Liang et al., 2017) models can be used as the three models of ReviewRanker. One also has to take note of the fact that these sequence models are extremely data hungry. So, if a particular review set has fewer than 10K reviews (which is our case as well), it is better to use the simple feature extraction method and model architecture that we have proposed. The three questions that we ask the developers to label for each sample are not based on any large-scale study. We believe that a more optimal set of questions can be used for review quality estimation provided that a well-designed large-scale study is undertaken for this purpose. The reviews that we deal with in the experimental dataset for ReviewRanker are line-level code reviews. We have not tested the method on block-level code reviews, although we expect similar results in that case as well. Finally, because of the human bias factor, proper validation of the proposed ReviewRanker method could not be performed.
## 7. Conclusion
In this paper, we propose ReviewRanker with the goal of enabling effective inspection of code review quality. We identify the human bias factor of a supervised learning based approach and thus resort to a human-bias-free multiple choice question scheme in order to indirectly obtain the confidence score for each review in a semi-supervised fashion. We ensure that the labeling process requires little to no effort from the developers. ReviewRanker can handle a large number of reviews (theoretically there is no limitation on the number of reviews provided) and can provide the confidence score for each review in an end-to-end manner with zero external effort required. The proposed system can be implemented easily at industry level to consistently identify the best reviewers and promote the best review practices with minimal time and effort. The adoption of this system is expected to enhance code quality and to reduce the back-and-forth cycle of the review process. Some immediate future research directions are: (a) well-designed industry-scale evaluation of ReviewRanker effectiveness in review quality estimation, (b) incorporation of code context into the ReviewRanker models and (c) replacing the current set of questions with a more suitable set of questions through a large-scale study. We plan to make ReviewRanker publicly available in the form of a Python package upon acceptance.
|
2302.00499 | Navigating in the Dark -- Designing Autonomous Driving Features to
Assist Old Adults with Visual Impairments | Age-related macular degeneration is a leading cause of blindness worldwide
and is one of many limitations to independent driving among old adults. Highly
autonomous vehicles present a prospective solution for those who are no longer
capable of driving due to low vision. However, accessibility issues must be
addressed to create a safe and pleasant experience for this group of users so
that it allows them to maintain an appropriate level of situational awareness
and a sense of control during driving. In this study, we made use of a
human-centered design process consisting of five stages - empathize, define,
ideate, prototype, and test. We designed a prototype to aid old adults with
age-related macular degeneration to travel with a necessary level of
situational awareness and remain in control while riding in a highly or fully
autonomous vehicle. The final design prototype includes a voice-activated
navigation system with three levels of details to bolster situational
awareness, a 360 degree in-vehicle camera to detect both the passenger and
objects around the vehicle, a retractable microphone for the passenger to be
easily registered in the vehicle while speaking, and a physical button on the
console-side of the right and left front seats to manually activate the
navigation system. | Lashawnda Bynum, Jay Parker, Kristy Lee, Nia Nitschke, Melanie LaFlam, Jennifer Marcussen, Jana Taleb, Aleyna Dogan, Lisa J. Molnar, Feng Zhou | 2023-02-01T15:12:27Z | http://arxiv.org/abs/2302.00499v1 | # Navigating in the Dark - Designing Autonomous Driving Features to
###### Abstract
Age-related macular degeneration is a leading cause of blindness worldwide and is one of many limitations to independent driving among old adults. Highly autonomous vehicles present a prospective solution for those who are no longer capable of driving due to low vision. However, accessibility issues must be addressed to create a safe and pleasant experience for this group of users so that it allows them to maintain an appropriate level of situational awareness and a sense of control during driving. In this study, we made use of a human-centered design process consisting of five stages - empathize, define, ideate, prototype, and test. We designed a prototype to aid old adults with age-related macular degeneration to travel with a necessary level of situational awareness and remain in control while riding in a highly or fully autonomous vehicle. The final design prototype includes a voice-activated navigation system with three levels of details to bolster situational awareness, a 360° in-vehicle camera to detect both the passenger and objects around the vehicle, a retractable microphone for the passenger to be easily registered in the vehicle while speaking, and a physical button on the console-side of the right and left front seats to manually activate the navigation system.
## Introduction
Visual impairment is increasingly prevalent among older adults due to age-related macular degeneration (AMD) that affects the central retina. AMD is a leading cause of blindness worldwide (Lim et al., 2012) and occurs in 1.5% of the population in the United States over the age of 40, but has increased to more than 15% in certain demographic groups, such as white women over the age of 80 (Friedman et al., 2004). Visual impairment limits the ability of older adults to perform tasks in everyday life, such as reading, recognizing faces, and driving (Owsley and McGwin, 2008). A previous study surveying a subset of the AMD population found a strong association between severity of AMD and self-rated difficulty in driving (Mangione et al., 1999). Visual impairment in old adults was also found to be associated with a higher rate of motor vehicle crashes (Swain et al., 2021).
However, it is often difficult for adults with visual impairment to give up driving completely despite the issues they face while operating a vehicle. Kim (2011) found that only a third of old adults might restrict their activities due to a lack of transportation options (Chihuri et al., 2016). Moreover, driving cessation in old adults might contribute to health-related issues, such as depression (Chihuri et al., 2016). With increasing numbers of old adults with possible visual impairment, there is an urgent need for adequate resources to help meet their transportation needs.
Currently, conventional vehicles do not provide solutions for old adults with visual impairments. Autonomous vehicles might provide a transportation solution to improve the mobility of old adults with different levels of visual impairment and other disabilities (Padmanaban et al., 2021). Nevertheless, many vehicles on the road are still below SAE (Society of Automotive Engineers) Level 3 and have automation features that still require drivers to remain attentive during driving (Ayoub et al., 2019). Even with SAE Level 3 vehicles, the driver is still required to take over control whenever requested by the vehicle, and this poses a great challenge (Ayoub et al., 2022), especially when the driver is out of the control loop without enough situational awareness (Avetisyan et al., 2022). Maintaining situational awareness is especially difficult for old adults with visual impairments. For highly autonomous vehicles in certain geo-fenced areas (i.e., SAE Level 4), people still find it difficult to trust the vehicle (Ayoub et al., 2021; Zhang et al., 2022), especially older adults (Molnar et al., 2017).
The U.S. Department of Transportation promotes innovative design solutions to help people with disabilities, such as visual impairment to improve their mobility, especially for highly autonomous vehicles (Padmanaban et al., 2021). Thus, in this paper, we attempted to design a prototype of highly autonomous vehicles (SAE Level 4) through a human-centered design process in order to promote trust and acceptance of old adults with visual impairment. In order to understand how highly autonomous vehicles can be used to improve mobility of old adults with visual impairment, the objectives of this study are summarized as follows:
1. Explore and understand behavioral patterns, user needs, and pain points of old adults with visual impairment in a conventional vehicle, and
2. Design, develop, and evaluate a prototype of a highly automated vehicle to help solve the most pressing barriers for old adults with visual impairment.
## Method and Results
In this study, we followed a human-centered design process from Stanford University's d.school, i.e., empathize, define, ideate, prototype, and test (Padmanaban et al., 2021). These five steps comprised the underlying process to create our design solutions. We included multiple iterations on some steps to clarify and refine our design challenge and prototype.
### Empathize
To better understand the experience of old adults with visual impairment while driving, we targeted people with
visual impairment who were 65 years of age or older for our first round of interviews. Our goal was to understand how this group of users navigated driving tasks in response to their changes in vision. We recruited five individuals for our interviews. Four participants were in the U.S. and one participant was based in South Korea. Each had some level of visual impairment which made driving more difficult (\(n=4\)) or had caused them to stop driving altogether (\(n=1\)).
In the interview, we aimed to understand what role vehicles played in their lives, what options they had for transportation, and how they had adjusted their driving behaviors to accommodate their changes in vision. We also examined the participants' experiences in using existing advanced driver-assistance systems (ADAS) and their opinions on autonomous vehicles in general. Table 1 summarizes the findings from the interviews.
All the participants considered vehicles essential in their life and expressed that they were still leading active lifestyles outside home, needing to travel frequently, sometimes long distances, to see family, to attend gatherings and events, or to work. A common theme was the desire for "freedom" brought by vehicle mobility and "control of the vehicle" while in the vehicle. The desire for freedom seemed especially important for U.S.-based participants who noted that without a vehicle, they would not be able to go anywhere ("_Without my car; I cannot go to my doctors, grocery stores, drug stores, or visit my family or friends_"). The desire for a sense of control indicated that the participants trusted their driving abilities and their doubts about automation features, especially when the participants were asked about their perception of ADAS ("_I'm a good driver; I like being in control... Turn a lot [of the ADAS features] off because they irritate me_").
All the participants recognized the issues while driving with different levels of vision loss. Four of the participants were still able to drive during the day, but complained that it was difficult or distracting for them to adjust settings in the vehicle and that they had to rely on auditory cues for directions while driving in unfamiliar surroundings. They also reported using some form of corrective lenses to supplement their vision (corrective lenses, bifocals) as part of adjusting their driving behaviors. One participant noted, "_I use my corrective lenses which bring my vision to its best self, but still found it difficult to drive during the night._" Behavioral adjustments (e.g., physically changing position, squinting) were also made in addition to wearing their glasses: "_I reduce my speed at night so I don t come too close to an object especially on my left side_". Although trying their best to keep driving, these four participants reported they would stop driving, when there was further visual decline, other cognitive and physical decline, and family pressures. One participant had to give up driving due to visual impairment and hoped to increase accessibility by using different types of transportation.
We also aimed to understand their experience with automation features in the vehicle and their attitudes toward autonomous vehicles. The participants worried about the safety concerns of ADAS and would avoid using them to gain a sense of control. They also would have to rely on external voice controls to reduce inaccuracy ("_I would really need stuff in the car, [and I would tell voice control] like tell me how to get somewhere, or call somebody, and I feel like it never got what I wanted. So I was like okay this is frustrating. I'm going to turn this off_"). They also had a low level of trust in autonomous vehicles ("_[I] wouldn't own an autonomous vehicle, ever!...Never been in one..._", "_I would be hesitant to be in such a vehicle, unless it proves itself to be safe and I can still be in control_"). The participants felt they were more in control of the vehicle than the technologies that currently or even in the future to assist them.
When asked about how they attempt to learn to work the vehicle features, most participants were hands-on learners, preferring to use a feature and learn how it worked by doing: "_I sit down and read some of it and then get in the car and demonstrate it myself_". While there was initial negative feedback about voice control in the vehicle, many participants stated auditory cues would help them learn, similar to how GPS navigation apps work, such as Google Maps: "_The way I know that I've arrived at the destination is by navigation sound. When it tells me that I've arrived, I know I'm there".
| Topic | Summary of Responses |
|---|---|
| Role of vehicles in daily life | Sense of freedom and control; leading an active life and riding with others when possible, but still fully relying on cars to meet daily needs |
| Issues driving vehicles | Visual impairment; distracted driving when making changes in vehicle settings or devices; accessibility; increased reliance on auditory cues to compensate for visual loss |
| Adjustments to driving | Changed driving behavior (avoid driving at night, taking known routes only); relying on visual aids (glasses or bifocals); physically adjusting, squinting, or changing positions in vehicle to better see surroundings |
| If still driving, what would make you stop? | Cognitive and physical decline; visual decline; societal pressure to stop; other forms of transportation becoming accessible |
| Experience with voice controls | Distracting and inaccurate; reliance on external applications/devices instead of in-vehicle ones |
| ADAS | Avoidance to gain a sense of control; safety concerns; could be useful if designed for specific user types |
| Learning about new features | Avoidance; hands-on learning or in-person instruction (dealerships); self-service/internet/manuals; auditory learning & feedback |

Table 1: **Summary from the interviews**
### Define
From the interviews, it was clear that freedom and control were important to old adults when it came to transportation. We thus (re)framed our design challenge to focus on creating a sense of freedom and control for individuals with declining visual impairment that might not allow them to drive anymore. To more clearly define and empathize with our target audience, we created a persona, Kelly Smith, who was forced to give up driving after 40 years when she was diagnosed with age-related macular degeneration. Kelly wished she could resurrect the joys and ease of driving by herself, as she at times felt like a burden to her family who had to drive her around.
With Kelly in mind, we focused on highly (i.e., SAE Level 4) autonomous vehicles since Kelly was no longer able to drive. In such a vehicle, Kelly does not drive at any point and the vehicle can drive autonomously under all conditions in geo-fenced areas. This meant that our design solution should help create a sense of control in the vehicle where the user had no need for a steering wheel, gas pedal, or brake pedal.
Using the persona, we created a scenario that Kelly typically encountered while in an autonomous vehicle to aid our idea generation. The scenario focused on start-up and navigation features in the vehicle, as these were commonly mentioned areas in our initial interviews.
1. Kelly is ready to visit her new friend. She is able to find and get into her vehicle with the help of her guide dog.
2. Kelly gets into her vehicle and would like to know that the vehicle is aware that she and her guide dog are inside.
3. Kelly wants to communicate with the vehicle her intended destination and which route she would like to choose. She does not feel comfortable going on the highway on her way, so she chooses a local route.
4. She also wants to make sure everything is in order before the vehicle drives off. She's worried her vehicle might not have enough battery/fuel to complete the trip and would appreciate reassurance from the vehicle.
5. As the vehicle is driving, Kelly realizes she's a little anxious because she is not familiar with the route and would like details of where she is and every step of the way.
### Ideate
With this scenario in mind, we brainstormed design solutions using the Crazy Eights method. For a total of eight minutes, we each generated eight designs averaging about a minute per design. From our designs, we each chose our favorites to present to the group as possible solutions to our design challenge. The possible design solutions included:
1. Train guide dogs to control buttons in the vehicle.
2. 24/7 hotline to assist drivers with questions.
3. Vibrating seats to tell users when the vehicle is going to turn.
4. Button on dashboard to turn on navigation.
5. Detailed voice navigation system.
6. 3D model of terrain outside of the vehicle that users can "feel."
7. Heat map on dashboard showing traffic around the vehicle.
8. Checklist of vehicle settings after start-up.
9. Showing a zoomed in map of the vehicle's full route on the windshield.
10. Flashing lights to communicate with the driver.
11. Use of different sound effects to communicate with the driver.
After each team member was given a chance to present their ideas, we collectively voted for what we wanted to implement in the prototype. The ideas that had the most votes were: 1) button on dashboard to turn on navigation (in case the vehicle did not turn on with voice activation), 2) checklist of vehicle settings, and 3) detailed voice navigation system.
### Prototype and Test
We followed an iterative process of testing our prototypes to learn from our users with a combination of a semi-structured user interview and a Wizard of Oz method. In the Wizard of Oz method, the participants interacted with a system they believed to be autonomous but in reality was controlled by a human operator (Ayoub et al., 2020). We designed five scenarios and voice prompts to best suit our participants while interacting with an autonomous vehicle. We read aloud the scenarios and played the pre-recorded voice prompts.
| Scenarios | Voice Prompt |
|---|---|
| Scenario 1: This is your first time in front of a wheel since you stopped driving 3 years ago. This | Welcome to the Mazda CX-60! In full control mode, I, Jay, drive for you. I am your personal driver, you tell me where you'd like to go and I take you there. |

Table 2: Example scenarios and voice prompts
In order to test our prototype, we designed five scenarios, each presenting a task for the participant to complete: 1) prompt to turn the vehicle on with a voice command; 2) vehicle checklist: vehicle diagnostics, seatbelt, cabin conditioning (AC, radio, seat adjust), and prompt to select the navigation level of detail: basic or detailed; 3) prompt to provide a destination and select a route, and prompt to accept or cycle through additional route options; 4) inform the user of trip start and vehicle action (i.e., backing out of a parking space); 5) inform the user of navigational movements and prompt the user for actions to take en route (i.e., turn on red, take detour). Example scenarios and voice prompts associated with these tasks are shown in Table 2.
We recruited four participants over 65 years old with some level of visual impairment to test our low-fidelity prototype as sketched in Figure 1, which included a voice-activated navigation system with two different levels of details, an in-vehicle camera to monitor the situation in the vehicle, a retractable microphone for the user to be easily registered and control the vehicle, and a physical button on the console-side of the right and left front seats to manually activate the autonomous driving system.
We described the overall configuration of the vehicle's interior as shown in Figure 1. The participants liked the checklist and possible details of navigation during the driving. They also liked the position of the button on the side of the seat because it was easy to reach (see Figure 1). One participant previously had a stroke, leading to physical impairment in the left side of his body. He pointed out that it would be better to have buttons on both seats to create a choice for participants to sit on the side that best aided their mobility difficulties.
Then, we placed the participants in different scenarios for them to go through all the tasks with the Wizard of Oz method. All the participants appreciated how prepared they felt before the vehicle took off due to the vehicle's initial checklist. One noted, "_The checklist at the beginning was super comfortable. There wasn't anything that was unnecessary_", while another said that, "_The initial checklist was good, especially the detail to start up the vehicle_". The level of voice detail also reassured participants: "_I like the narration as you go. If you don't hear it, you don't know what the car knows. I thought the reassurance was a good thing_".
Although participants felt aware of their surroundings before heading off for a ride, two participants worried about a lack of situational awareness while in motion. One participant worried about being unable to control the vehicle while moving, "_Can you tell the car to slow down if it was going too fast_?" Another participant wanted to make sure the vehicle could distinguish between an inanimate object and a pedestrian: "_At this time, the sensors for autonomous vehicles cannot distinguish between humans or objects...hopefully it will be resolved in the near future_". Finally, while two participants appreciated the level of details in the voice navigation system, others did not. One noted that, "_I thought it was too verbose...you just want to get to where you're going_".
Based on the feedback from the participants, we refined our prototype. First, we added in a sensor system to alert the passenger to when pedestrians or objects were in front of the vehicle to address concerns about awareness while the vehicle was in motion (see Figure 3). Second, we created an additional level of navigation that included fewer details than either of the previous two levels (as shown in Table 3). Third, we changed the naming convention of the navigation levels to take the cognitive burden off of passengers in remembering what each level represents. The refined prototypes were then presented to the participants with revised tasks and their satisfaction level was much improved.
Figure 1: Sketch of the vehicle’s interior of the initial prototype
Figure 3: Sketch of enhanced sensor system in vehicle’s interior
Figure 2: Sketch of user interacting with the vehicle using voice commands: (a) A speech command by the user, (b) The response from the vehicle
## Discussions and Conclusions
In this study, we aimed to design a solution to help bolster the feeling of control and a sense of freedom for old drivers with visual impairment to feel more comfortable navigating within autonomous vehicles. Through a human-centered design process, we were able to gain insight into the needs and preferences of our target user group and iteratively improve our design to better meet their needs.
Through our empathy stage, we identified the major behavioral patterns, pain points, and needs of old adults with visual impairment with regard to driving. Even though autonomous vehicles hold promise for improving their mobility and independence, old adults were reluctant to trust them without a sense of control and understanding of the overall situational awareness during driving. Based on such findings, we defined our design challenge to generate corresponding ideas with Crazy Eights. We came up with voice prompts to provide situational awareness during driving to improve the sense of control and freedom. Through testing such an idea using a Wizard of Oz method, we found that extra details did not necessarily increase situational awareness, but simple, precise information did. Users expressed interest in the vehicle system providing feedback primarily about important changes in the environment or traffic events they could choose to react to. They were less responsive to the vehicle providing constant narration of its navigational actions. We also found that the participants would have a better sense of control by customizing their ride experience based on their preferences and thus increase their comfort and trust in the vehicle, such as the ability to recall their presetting to reduce the time it took to engage the system and begin a trip, the inclusion of a sensor to alert the passenger about the objects and pedestrians around the vehicle, and the options to take a preferred route.
However, our research was not without limitations. Due to time constraints, we were unable to test our prototype with a larger and more diverse pool of participants. Additionally, not all of our participants had a level of visual impairment that prevented them from driving in the empathy and testing stages. In the testing stage, we covered their eyes to better simulate the scenarios using the Wizard of Oz method. Furthermore, the testing scenarios we used involved minimal driving, so it was unclear how well our prototype would perform in more complex real-world situations. Other limitations include the lack of highly autonomous vehicles, as they are not currently available for consumer purchase.
These limitations highlight the need for further research in this area to focus more on testing the effectiveness of our design prototype in a larger and more diverse sample of participants in more scenarios. Additional research could also be conducted to explore other potential design solutions for old adults with visual impairment. Ultimately, our goal is to create accessible and user-friendly technologies that can help improve the lives of old adults with age-related macular degeneration and other visual impairments, and enable them to maintain their independence and mobility.
|
2304.04571 | Positive Geometries of S-matrix without Color | In this note, we prove that the realization of associahedron discovered by
Arkani-Hamed, Bai, He, and Yan (ABHY) is a positive geometry for tree-level
S-matrix of scalars which have no color and which interact via cubic coupling.
More in detail, we consider diffeomorphic images of the ABHY associahedron. The
diffeomorphisms are linear maps parametrized by the right cosets of the
Dihedral group on n elements. The set of all the boundaries associated with
these copies of ABHY associahedron exhaust all the simple poles. We prove that
the sum over the diffeomorphic copies of ABHY associahedron is a positive
geometry and the total volume obtained by summing over all the dual
associahedra is proportional to the tree-level S matrix of (massive or
massless) scalar particles with cubic coupling. We then provide non-trivial
evidence that the projection of the planar scattering forms parametrized by the
Stokes polytope on these realizations of the associahedron leads to the
tree-level amplitudes of scalar particles, which interact via quartic coupling.
Our results build on ideas laid out in our previous works, leading to further
evidence that a large class of positive geometries which are diffeomorphic to
the ABHY associahedron defines an ``amplituhedron" for a tree-level S matrix of
some local and unitary scalar theory. We also highlight a fundamental
obstruction in applying these ideas to discover positive geometry for the one
loop integrand when propagating states have no color. | Mrunmay Jagadale, Alok Laddha | 2023-04-10T13:26:35Z | http://arxiv.org/abs/2304.04571v1 | # Positive Geometries of S-matrix without Color
###### Abstract
In this note, we prove that the realization of associahedron discovered by Arkani-Hamed, Bai, He, and Yun (ABHY) is a positive geometry for tree-level S-matrix of ordinary \(\phi^{3}\) theory without color. More in detail, we consider diffeomorphic images of the ABHY associahedron. The diffeomorphisms are linear maps parametrized by the right cosets of the Dihedral group \(D_{n}\). The set of all the boundaries associated with these copies of ABHY associahedron exhaust all the poles of \(\phi^{3}\) theory. We prove that the sum over the diffeomorphic copies of ABHY associahedron is a positive geometry and the total volume obtained by summing over all the dual associahedra is proportional to the tree-level S matrix of (massive or massless) \(\phi^{3}\) theory. We then provide non-trivial evidence that the projection of the \(\frac{n-4}{2}\)\(d\)-log forms (parametrized by the accordiohedron known as Stokes polytope) on these realizations of the associahedron lead to the tree-level amplitudes in \(\phi^{4}\) theory without color.
Our results build on ideas laid out in [1, 2], leading to further evidence that a large class of positive geometries which are diffeomorphic to the ABHY associahedron defines an "amplituhedron" for tree-level S matrix of _some_ local and unitary scalar theory. An interesting offshoot of our analysis is the CHY formula for the tree-level amplitude in \(\phi^{3}\) theory without color. We also highlight a fundamental obstruction in applying these ideas to discover positive geometry for the un-colored \(\phi^{3}\) S-matrix integrand at one-loop.
## 1 Introduction
The "amplituhedron program" of the S-matrix [3], [4], [5] (and references therein) has offered a number of remarkable insights, deepening our understanding of the analytic structure of scattering amplitudes by geometrizing it. In the landscape of scalar field theories, these insights have led to the discovery of a fundamental postulate known as projective invariance, from which unitarity and locality emerge as a set of derived postulates.
By recasting the S-matrix as a differential form in the kinematic space \({\cal K}_{n}\), the amplituhedron program has consolidated our understanding of the recursion relations first discovered in the context of MHV amplitudes in gauge theories by Andrew Hodges [6], which were further developed by Arkani-Hamed, Bourjaily, Cachazo, Hodges, and Trnka [7]. It has enhanced our understanding of the worldsheet formulation of the S-matrix encapsulated in the CHY (Cachazo, He, and Yuan) formula by identifying certain compactifications of the world-sheet moduli space with a specific positive geometry known as the associahedron in the kinematic space, and finally, it has revealed a potentially deep connection between the geometrization of the S-matrix in the kinematic space and the color-kinematics duality.
The fundamental thesis behind these developments is the discovery of a class of polytopes known as positive geometry whose boundaries capture _all_ the poles of the planar S matrix of scalar quantum field theories. The canonical forms associated with positive geometries are amplitudes in a QFT.
Arkani-Hamed, Bai, He, and Yan discovered the first example of positive geometry (known as associahedron) directly inside the kinematic space, [8]. The unique canonical form defined by the associahedron in \(\mathcal{K}_{n}\) is the amplitude of bi-adjoint \(\phi^{3}\) theory. However, we now know that the associahedron polytope is the first member of an infinite family of positive geometries known as accordiohedra. Soon after the discovery of associahedron in the kinematic space, specific realizations of accordiohedra were also discovered, which defined color-ordered S-matrix for scalars without derivative coupling, [9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19].
One of the primary reasons why the discovery of positive geometries in kinematic space has led to a deeper understanding arises from parallel developments in the subject of quiver representation theory and cluster algebra. Namely, given the set of all possible dissection of an \(n\)-gon (which may include puncture in the interior) with a fixed dimension, it generates, on the one hand, combinatorial geometries such as the accordiohedron and on the other hand, a family of vectors in an abstract Cartesian space such that these vectors form a simplicial and a complete fan, [20]. The polytopal realization of this fan is then a convex realization of the combinatorial polytope.
In the seminal paper [21], it was shown that in the case of associahedron, such a polytopal realization of the aforementioned fan matches precisely the ABHY associahedron. This result was extended to include realizations arising from so-called cyclic quivers and generate convex realizations of accordiohedron in [20].
Thus from combinatorics of abstract dissections and a single non-trivial principle known as projective invariance, one obtains S-matrix for an infinite family of scalar quantum field theories, clearly hinting at a possibility that the S-matrix of a local QFT may simply be volumes of certain polytopes where the measure is determined by a deeper principle such as projective invariance.
As we argued in [1; 2], the geometric data in ABHY associahedron is even richer than what the seminal developments revealed. The new structures emerged from a simple observation that the Cartesian space in which quiver algebra shaped the polytopes such as accordiohedron and the physical kinematic space of Mandelstam invariants are a priori distinct vector spaces. The simplest identification between them generated color-ordered tree-level amplitudes of massless scalar QFTs.1 However, this identification is just one such map among the space of all diffeomorphisms between two copies of \(P_{n}(\mathbf{R})\). A rather simple class of diffeomorphisms, namely linear maps between \(P^{n}(\mathbf{R})\) to \(P^{n}(\mathbf{R})\) led to a deformation of the ABHY associahedron in the kinematic space which turned out to be positive geometries for theories far removed from massless scalar field theories. This rather simple observation potentially opens up avenues to answer two outstanding questions in the program.
Footnote 1: Strictly speaking, this result is valid as long as the dimension of spacetime \(D\,\geq\,n\) where \(n\) is the number of external particles, as in \(D\,=\,4\) the kinematic space is a \(3n-10\) dimensional variety in \(\mathcal{K}_{n}\) on which the so-called Gram conditions are satisfied. [22], [23].
1. What is the positive geometry for a manifestly crossing symmetric (as opposed to planar) tree-level S-matrix, such as the ordinary massive \(\phi^{3}\) scalar field theory?
2. Can the scattering forms associated with amplitudes with non-trivial numerator factors be derived from fundamental postulates like projectivity?
We answer the first question for \(\phi^{3}\) and \(\phi^{4}\) theories without color in this note.
More in detail, we show that there exist \(\frac{(n-1)!}{2}\) linear maps \(f_{[\sigma]}\), parametrized by the elements \([\sigma]\) of the right cosets of the dihedral group \(\mathcal{D}_{n}\). Each \(f_{[\sigma]}\) maps the ABHY associahedron in the Cartesian space to an associahedron inside the kinematic space. We will refer to the images as deformed realizations of the associahedron and prove that the sum of the canonical forms over all of these associahedra generates the complete (as opposed to color-ordered) S-matrix of a \(\phi^{3}\) theory with mass \(m\).
The organization of the paper is as follows. In Section 2, we use the permutation symmetry to subdivide the family of all the poles of a \(\phi^{3}\) S-matrix into "associahedron slices". Such a classification helps us in deriving a beautiful formula that relates the total number of Feynman diagrams to the cardinality of the vertex set of the associahedron, which is given by a Catalan number. In Section 3, we briefly review the construction of the ABHY associahedron and the so-called generalized permutahedron, which have hitherto been considered as positive geometries whose vertices are associated with non-planar poles. In Section 4, we define the linear maps between the embedding space and the kinematic space, which are parametrized by elements of the Bose symmetry, and obtain deformed realizations of the ABHY associahedron. In Section 4.1, we pull back the canonical form from the Cartesian space \(\mathcal{E}_{n}\) to \(\mathcal{K}_{n}\) and show how the sum over all such forms is proportional to the crossing symmetric S-matrix of \(\phi^{3}\) theory.
In Section 5, we use the linear maps between \(\mathcal{E}_{n}\) and \(\mathcal{K}_{n}\) to write the CHY formula for the crossing symmetric \(\phi^{3}\) S-matrix, where the CHY integrand is the Parke-Taylor form. In Section 6, we argue that a weighted sum over the projection of \(\frac{n-4}{2}\) ranked \(d\ln\) forms, which are labeled by quadrangulations of the \(n\)-gon, is the tree-level scalar amplitude with quartic interactions. Once again, the amplitude is manifestly crossing symmetric as the scalar particles have no color. In Section 7, we show how these rather simple ideas face a fundamental obstruction in relating the \(\hat{D}_{n}\) polytope discovered in [24] to the one-loop integrand of the \(\phi^{3}\) S-matrix without color. We end with a discussion of our results, placing them in the context of the entire program of geometrizing the S-matrix.
## 2 Combinatorics of poles of scattering amplitude in scalar theory
The poles of \(n\)-point tree-level scattering amplitude in a local scalar field theory are of the form \(\frac{1}{s_{I}}\), where \(s_{I}=\left(\sum_{i\in I}p_{i}\right)^{2}\), and \(1<|I|<n-1\). There are \(2^{n-1}-n-1\) such poles. However, not all poles can come in a single scattering channel of a unitary theory. We say two poles are compatible if they can come together in a single scattering channel of a unitary theory. For example, at 5-point, the poles \(1/s_{12}\) and \(1/s_{13}\) can never come together in a single scattering channel, but the poles \(1/s_{13}\) and \(1/s_{123}\) can come in a single scattering channel. This notion of compatibility of poles gives a combinatorial structure to the set of poles of \(n\)-point tree-level scattering amplitude in a local unitary scalar field theory. We will consider scattering amplitudes in \(\phi^{3}\)-theory where all poles that can appear do appear.
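The notion of compatibility is easy to test mechanically. The following Python sketch (our own illustration, not part of the construction; all function names are ours) represents a channel as a subset of \(\{1,\ldots,n\}\) identified with its complement, and checks the 5-point examples above.

```python
from itertools import combinations

def canon(S, n):
    # a channel s_I is labelled by I or its complement; pick a canonical one
    S = frozenset(S)
    C = frozenset(range(1, n + 1)) - S
    return min(S, C, key=lambda T: (len(T), sorted(T)))

def channels(n):
    return {canon(I, n) for k in range(2, n - 1)
            for I in combinations(range(1, n + 1), k)}

def compatible(I, J, n):
    # two poles can appear together in one Feynman diagram iff some choice
    # of representatives (set or complement) is nested or disjoint
    pts = frozenset(range(1, n + 1))
    return any(A <= B or B <= A or not (A & B)
               for A in (I, pts - I) for B in (J, pts - J))

n = 5
assert len(channels(n)) == 2 ** (n - 1) - n - 1        # 10 poles at 5-point
s12, s13, s123 = map(frozenset, [{1, 2}, {1, 3}, {1, 2, 3}])
assert not compatible(s12, s13, n)     # 1/s12 and 1/s13: never in one channel
assert compatible(s13, s123, n)        # 1/s13 and 1/s123: can come together
```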
When restricted to planar poles, the combinatorial structure mentioned above is the combinatorial associahedron. Starting with the combinatorial structure of an associahedron, we can write down algebraic relations between planar kinematic variables, viz. \(X_{i,j}+X_{i+1,j+1}-X_{i,j+1}-X_{i+1,j}=c_{ij}\), which in turn give a geometric realization of the associahedron in the kinematic space, viz. the ABHY associahedron. This ABHY associahedron gives us the scattering amplitude of planar \(\phi^{3}\)-theory. However, as we will see in this section, the underlying combinatorial structure is more universal than previously thought. This, in turn, tells us that the underlying algebraic and geometric structures are more universal and can capture scattering amplitudes of \(\phi^{3}\)-theory (without color), as we will see in later sections.
The combinatorial structure on the set of all poles is such that this set can be neatly divided into a family of intertwined associahedra. For example, at 4-point, the set of all poles is \(\{1/s_{12},1/s_{23},1/s_{13}\}\). This set is made of three one-dimensional associahedra, viz. \(\{s_{12},s_{23}\}\), \(\{s_{23},s_{13}\}\), and \(\{s_{13},s_{12}\}\). Here, each associahedron shares its poles with the other two. For higher-point amplitudes, the associahedra are highly intertwined. This raises the question: how does one obtain all these different associahedron slices of the set of all poles? To understand this, we have to look at the action of the permutation (Bose) symmetry, \(S_{n}\), on the set of all poles of \(n\)-point scattering amplitudes.
There is a natural action of Bose symmetry on the set of all poles. This action induces the action of Bose symmetry on scattering channels or Feynman diagrams. The notion of compatibility discussed above commutes with the action of Bose symmetry: two poles of a scattering amplitude are compatible if and only if their images under the action of any element \(\sigma\in S_{n}\) are compatible. Therefore, the action of any element \(\sigma\in S_{n}\) on the combinatorial associahedron formed by planar poles takes us to another associahedron slice of the set of all poles. Given any Feynman diagram, there exists a \(\sigma\in S_{n}\) such that the action of \(\sigma\) on that Feynman diagram gives us a planar Feynman diagram. Therefore, by acting with different elements of \(S_{n}\) on the combinatorial associahedron of planar poles, we get all the "associahedron slices" of the set of all poles.
Generically the action of an element of \(S_{n}\) on a planar scattering channel takes us to a non-planar scattering channel. However, there is a subgroup of \(2^{n-2}\) elements, generated by elements like the one depicted in figure 1, that give back the same scattering channel. More in detail, suppose \(\sigma_{S}\) is an element of \(S_{n}\) that takes a planar diagram \(P\) to \(S\). That is, \(\sigma_{S}\cdot P=S\). Then \(\alpha\cdot\sigma_{S}\cdot P=S\) as well, where \(\alpha\) is one of the \(2^{n-2}\) elements that takes \(S\) back to itself. At the same time, there are \(n\) cyclic permutations in \(S_{n}\) that take one planar scattering channel to another planar scattering channel. Therefore, if \(\sigma_{P,P^{\prime}}\in C_{n}\) is such that \(\sigma_{P,P^{\prime}}\cdot P^{\prime}=P\), where \(P^{\prime}\) is planar, then \(\alpha\cdot\sigma_{S}\cdot\sigma_{P,P^{\prime}}\) takes \(P^{\prime}\) to \(S\): \(\alpha\cdot\sigma_{S}\cdot\sigma_{P,P^{\prime}}\cdot P^{\prime}=S\). In other words, for any scattering channel \(S\), there are \(n2^{n-2}\) elements of \(S_{n}\) that take some planar diagram to \(S\). This is beautifully captured by the formula relating the total number of distinct Feynman diagrams with cubic vertices to the Catalan number, which counts the planar Feynman diagrams.

Figure 1: An element of the Bose symmetry that leaves the scattering channel invariant.
\[(2n-5)!!=\frac{C_{n-2}(n-1)!}{2^{n-2}}, \tag{1}\]
where \((2n-5)!!\) is the total number of \(n\)-point tri-valent Feynman diagrams and \(C_{n-2}\) is the total number of \(n\)-point tri-valent planar Feynman diagrams.2
Footnote 2: This identity follows trivially from the definitions, \(C_{n-2}\,=\,\frac{(2n-4)!}{(n-2)!(n-1)!}\), \((2n-5)!!\,=\,\frac{(2n-4)!}{2^{n-2}(n-2)!}\).
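Identity (1) is also easy to confirm numerically. The sketch below (ours) checks it for small \(n\) directly from the definitions recalled in footnote 2.

```python
from math import comb, factorial

def double_factorial(m):              # (2n-5)!! for odd m
    out = 1
    while m > 1:
        out, m = out * m, m - 2
    return out

catalan = lambda m: comb(2 * m, m) // (m + 1)

for n in range(4, 13):
    lhs = double_factorial(2 * n - 5)
    rhs = catalan(n - 2) * factorial(n - 1) // 2 ** (n - 2)
    assert lhs == rhs, (n, lhs, rhs)
print("identity (1) holds for n = 4, ..., 12")
```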
## 3 ABHY associahedron and Zonotopal Generalised Permutahedron
Consider the Cartesian space \(\mathbf{R}^{\frac{n(n-3)}{2}}\). We denote it as \(\mathcal{E}_{n}\), so as to contrast it with the kinematic space of Mandelstam invariants \(\mathcal{K}_{n}\).3 We fix once and for all a basis \(\{\,x_{ij}\,|\,1\,\leq\,i\,<\,j+1\,\leq\,n\,\}\) in \(\mathcal{E}_{n}\), labelled by the chords of an \(n\)-gon. For \(1\,\leq\,i_{1}\,<\,\ldots\,<\,i_{k}\,\leq\,n\), let \(y_{i_{1},\ldots,i_{k}}\) be defined as,
Footnote 3: To be completely precise, \(\mathcal{E}_{n}\), \(\mathcal{K}_{n}\) are real projective spaces of dimension \(\frac{n(n-3)}{2}\), but we gauge fix the projective freedom by choosing the first co-ordinate to be 1.
\[y_{ij} \,:=\,x_{i,j+1}\,+\,x_{i+1,j}\,-\,x_{ij}\,-\,x_{i+1,j+1}\] \[y_{i_{1}i_{2}\,\ldots\,i_{k}} \,:=\,\sum_{m,n=1\,|\,m<n}^{k}\,y_{i_{m}i_{n}} \tag{2}\]
These two equations imply that,
\[x_{ij}\,=\,y_{i,i+1,\ldots,j-1} \tag{3}\]
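This telescoping relation can be checked symbolically. The sketch below (ours) uses sympy with the standard conventions that \(x_{ab}\) vanishes on edges and degenerate chords; we restrict to chords with \(j\leq n-1\) to avoid wrap-around bookkeeping.

```python
import sympy as sp
from itertools import combinations

n = 7
syms = {}
def x(a, b):
    a, b = min(a, b), max(a, b)
    if b - a <= 1:                       # edges / degenerate chords vanish
        return sp.Integer(0)
    return syms.setdefault((a, b), sp.Symbol(f"x_{a}_{b}"))

def y(a, b):                             # eq. (2), first line
    return x(a, b + 1) + x(a + 1, b) - x(a, b) - x(a + 1, b + 1)

for i in range(1, n - 2):
    for j in range(i + 2, n):            # interior chords (i, j), j <= n-1
        y_set = sum(y(a, b) for a, b in combinations(range(i, j), 2))
        assert sp.expand(y_set - x(i, j)) == 0      # eq. (3)
print("x_ij = y_{i, i+1, ..., j-1} verified symbolically for n =", n)
```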
The \((n-3)\)-dimensional ABHY associahedron [8] is a specific convex realization of the associahedron \(A_{n}\) in the positive quadrant of the embedding space, \(\mathcal{E}_{n}^{\geq 0}\,:=\,\{\,x_{ij}\,\geq\,0\,\forall\,(i,j)\,\}\), defined as follows [8, 20, 21]: Given any reference triangulation \(T\), consider the intersection of the positive quadrant of the embedding space \(\mathcal{E}_{n}^{\geq 0}\) with the hyper-planes given by the equations,
\[y_{ij}=-c_{ij}\,\mid\,c_{ij}\,>\,0\,\,\,\forall\,\,(i,j)\,\notin\,T^{c}. \tag{4}\]
For any choice of positive constants \(c_{ij}\), we get a polytopal realization of \(A_{n}\).
We will denote the ABHY realization by \(A_{n}^{T}\) as it explicitly depends on the choice of the reference triangulation. Our review of the ABHY realization is based on [20], whereas in the original construction [8] these polytopes were discovered directly in the kinematic space of Mandelstam invariants \({\cal K}_{n}\). We now quickly review the relationship between the two.
In space-time dimensions \(d\,\geq\,n-3\), the kinematic space \({\cal K}_{n}\) spanned by the Mandelstam variables can also be realized as a \({\bf R}^{\frac{n(n-3)}{2}}\) space. This identification involves identifying a (complete) set of Mandelstam invariants with the coordinate basis. A rather convenient choice of co-ordinates is the set of planar kinematic variables \(\{\,X_{ij}\,|\,1\,\leq\,i\,<\,j+1\,\leq\,n\,\}\).
\[X_{ij}\,=\,(p_{i}\,+\,\ldots\,+\,p_{j-1})^{2} \tag{10}\]
In general, given any ordering \(\{\,\sigma(1),\,\ldots,\,\sigma(n)\,\}\) obtained by action of a permutation \(\sigma\,\in\,S_{n}\) on the standard ordering, one can define the so-called \(\sigma\)-planar variables which are labelled by chords of an \(n\)-gon whose vertices are ordered in clockwise direction as \((\,\sigma(1),\,\ldots,\,\sigma(n)\,)\). Any such choice of \(\sigma\)-planar basis defines for us a linear space \({\cal K}_{n}^{\sigma}\) with \(\sigma\)-planar variables defining an orthonormal basis.4 In what follows, we fix once and for all an ordering of \((1,\ldots,n)\), and \({\cal K}_{n}\) is defined as the kinematic space in which \(X_{ij}\) form an orthonormal coordinate system.
Footnote 4: We note that in \({\cal K}_{n}\), the \(\sigma\)-planar variables do not form a Cartesian basis unless \(\sigma\) is a cyclic rotation \(\{\,i,\,i+1,\,\ldots,\,n,\,1,\,\ldots,\,i-1\,\}\).
The relationship between the ABHY realisations in [8] and [20] is simply the identification of \({\cal E}_{n}\) with \({\cal K}_{n}\) by the isometry
\[x_{ij}\,=\,X_{ij}\,\,\,\forall\,\,(i,j).\]
This realization has several rather remarkable properties that distinguish it from other realizations of the associahedron, making it the "amplituhedron" for bi-adjoint \(\phi^{3}\) theory [8]. Let \({\cal D}\) be the set of all the dissections of an \(n\)-gon whose vertices are labeled \(\{1,\,\ldots,\,n\}\) in a clockwise direction.
* All the co-dimension one faces (facets) of \(A_{n-3}^{T}\) are in bijection with the set \[\{\,X_{ij}\,=\,0|(i,j)\,\in\,{\cal D}\,\}\] which is the set of all the simple poles in the bi-adjoint color-ordered amplitude, \({\cal M}_{n}(\,(1,\ldots,n)\,|\,(1,\ldots,n)\,)\).
* For an \(n\)-dimensional associahedron, precisely \([\frac{n}{2}]\) pairs of co-dimension one faces are parallel to each other.
* On a facet of \(A_{n-3}^{T}\) which corresponds to \(X_{ij}\,=\,0\) for some \((i,j)\,\in\,{\cal D}\), \(X_{mn}\,>0\)\(\forall\,(m,n)\,\in\,{\cal D}\) such that \((m,n)\,\cap\,(i,j)\,\neq\,0\).
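As a concrete illustration of these properties (ours, not drawn from [8, 20, 21]), the sketch below realizes the \(n=5\) case with \(T=\{(1,3),(1,4)\}\) and all constants set to \(1\): the constraints \(s_{13}=-c_{13}\), \(s_{14}=-c_{14}\), \(s_{24}=-c_{24}\), written in the planar basis, express \(X_{24},X_{25},X_{35}\) in terms of \((X_{13},X_{14})\), and the five facets bound a pentagon whose vertices are the five triangulations. All names in the snippet are our own.

```python
import itertools
import numpy as np

c13 = c14 = c24 = 1.0
# each facet X_ab >= 0 written as a*X13 + b*X14 + d >= 0 in the (X13, X14) plane
facets = {
    "X13": (1.0, 0.0, 0.0),
    "X14": (0.0, 1.0, 0.0),
    "X24": (-1.0, 1.0, c13),           # X24 = X14 - X13 + c13
    "X25": (-1.0, 0.0, c13 + c14),     # X25 = c13 + c14 - X13
    "X35": (0.0, -1.0, c14 + c24),     # X35 = c14 + c24 - X14
}

vertices = []
for (p1, f1), (p2, f2) in itertools.combinations(facets.items(), 2):
    A = np.array([f1[:2], f2[:2]])
    if abs(np.linalg.det(A)) < 1e-12:  # skips the two parallel facet pairs
        continue
    pt = np.linalg.solve(A, -np.array([f1[2], f2[2]]))
    if all(a * pt[0] + b * pt[1] + d >= -1e-9 for a, b, d in facets.values()):
        vertices.append((p1, p2))

print(len(vertices), "vertices:", vertices)    # 5 = Catalan C_3
```

Each of the five vertices pairs two vanishing planar variables, i.e., a triangulation of the pentagon, and the two skipped determinants are exactly the parallel pairs \(X_{13}\parallel X_{25}\) and \(X_{14}\parallel X_{35}\).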
For later purposes, we also recall that \(A_{n-3}\) induces a unique _projective_ \((n-3)\)-form on \({\cal E}_{n}\).
\[\Omega^{{\cal E}_{n}}_{n-3}\,=\,\sum_{v\,\in\,A_{n-3}}(-1)^{\sigma_{v}}\,\bigwedge _{(ij)\,\in\,v}\,{\rm d}\log x_{ij} \tag{11}\]
The projectivity is the invariance of \(\Omega^{\mathcal{E}_{n}}_{n-3}\) under \(x_{ij}\,\to\,f(\{x_{mn}\})\,x_{ij}\) for any function \(f\). As ABHY proved, under the identification of \(\mathcal{E}_{n}\) with \(\mathcal{K}_{n}\), \(A^{T}_{n-3}\) is the positive geometry for the bi-adjoint tree-level S-matrix, as
\[\Omega^{\mathcal{K}_{n}}_{n-3}|_{A^{T}_{n-3}}\,=\,\mathcal{M}_{n}(\,(1,\,\ldots,\,n)\,|\,(1,\,\ldots,\,n)\,)\,\bigwedge_{(ij)\,\in\,T}\,\mathrm{d}X_{ij} \tag{11}\]
In this paper, we use these ideas to look for positive geometries in \(\mathcal{K}_{n}\) which generate the S-matrix of (massive or massless) \(\phi^{3}\) theory without color in \(D\,\geq\,n\) dimensions. That is, our goal is to discover a set of linear maps from \(\mathcal{E}_{n}\,\to\,\mathcal{K}_{n}\) such that the resulting image of the ABHY associahedron in \(\mathcal{K}_{n}\) constitute a family of polytopes whose co-dimension one facets exhaust all the poles of the tree-level S matrix with cubic interaction.
We now review the discovery of a family of polytopes known as Permutahedra in the kinematic space whose canonical form has poles at \(\{\,s_{1\sigma(2)\,\ldots\,\sigma(i)}\,|\,2\,\leq\,i\,\leq\,n{-}1\,|\,\sigma \,\in\,S_{n-2}\,\}\).
Starting from the seminal work in [8], it has been realized that a simple polytope whose vertex set is in bijection with the set of poles defined above is the permutahedron \(\mathcal{P}_{n}\) [8, 25]. This is because, as we review below, \(\mathcal{P}_{n}\) is a combinatorial polytope whose vertex set is in bijection with \(S_{n-2}\) [26].
The permutahedron can be defined as follows:5 Given a set \(I\,=\,\{1,\,\ldots,\,n\}\), we fix two elements, say \(i\), \(i+1\), as the "initial" and "final" vertices and "rotate \(I\)" to
Footnote 5: In the interest of brevity, we are paraphrasing the definition such that it relates to the primary question under investigation.
\[I\,=\,\{\,i+1,\,i+2,\,\ldots,\,n,\,1,\,2,\,\ldots,\,i\,\} \tag{12}\]
Let
\[I_{i}\,=\,I\,-\,\{i,i+1\} \tag{13}\]
We now consider the configuration space defined as
\[\sigma\,\cdot\,I\,=\,\{\,i+1,\,\sigma\,\cdot\,I_{i},\,i\,\}\, \,\forall\,\,\sigma\,\in\,S_{n-2} \tag{14}\]
We now define a permutahedron \(\mathcal{P}^{i+1,i}_{n}\) as a simple polytope whose vertex set is in bijection with \(\sigma\cdot I\) and two vertices are adjacent if and only if they are related by a \(\sigma\) which exchanges \((m,n)\,\to\,(n,m)\,|\,(m,n)\,\in\,I_{i}\). \(\mathcal{P}_{n}\) has several interesting properties.
1. It is a simple polytope with the dimensionality given by the number of propagators in a tri-valent Feynman graph, \(n-3\).
2. It is a member of the family of polytopes known as Cayley polytopes with the largest number (\((n-2)!\)) of vertices among this family.
3. Given a planar ordering, each permutahedron has precisely one vertex that corresponds to the planar channel, and all the other vertices correspond to non-planar Feynman diagrams.
4. No vertex of \({\cal P}_{n}^{i+1,i}\) corresponds to the set of propagators of the type \[\{\,(i+1,\sigma(i+2)),\,(\sigma(i+3),\,\sigma(i+4)),\,\ldots,\,(\sigma(i-1),i)\,\}\]

In [8], Arkani-Hamed, Bai, He, and Yan, in fact, constructed a convex realization of the permutahedron in \({\cal K}_{n}\) as follows.
Given \({\cal P}_{n}^{i+1,i}\), consider a positive orthant in \({\cal K}_{n}\) defined using the following inequalities.
\[s_{i+1\sigma(i+2)\,\ldots\,\sigma(i+k)}\,\geq\,0\,\forall\,k\leq\,n-2,\sigma \,\in\,S_{n-2} \tag{3.10}\]
The set of constraints whose intersection with this positive orthant maps out the generalized permutahedron is simply,
\[s_{ab}\,=\,-\,c_{ab}\ \ \forall\,\,i+2\,\leq\,a\,<\,b\,\leq\,i-1 \tag{3.11}\]
The canonical form on the convex \({\cal P}_{n}^{i+1,i}\) is,
\[\Omega_{n-3}|_{{\cal P}_{n}^{i+1,i}}\,=\,\sum_{\sigma\,\in\,S_{n-2}}\,\left[\,\prod_{k=i+2}^{i-1}\,\frac{1}{s_{i+1\,\sigma(i+2)\,\ldots\,\sigma(k)}}\,\right]\,\bigwedge_{j=i+2}^{i-1}\,{\rm d}s_{i+1\,j} \tag{3.12}\]
which is a partial contribution to the S-matrix of \(\phi^{3}\) theory. In a beautiful paper [25], Nick Early analyzed these realizations in further detail and proved that they are, in fact, equivalent to a class of simple polytopes called zonotopal generalized permutahedra, which were discovered by A. Postnikov [26].6
Footnote 6: This relationship was already anticipated by the authors in [8]
In spite of the remarkable richness contained in these geometries (see, e.g., [9]), as well as the fact that each such \({\cal P}_{n}^{i+1,i}\) has precisely one planar channel and the rest non-planar channels as vertices, a precise relationship between zonotopal generalized permutahedra and the S-matrix of ordinary \(\phi^{3}\) theory remains to be understood. Property (4) stated above only adds a further layer of mystery to the role permutahedra may play in unraveling the structure of the S-matrix without color.
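Properties (3) and (4) are easy to see in a toy enumeration. The sketch below (ours) lists the \((n-2)!\) vertices of \(\mathcal{P}_{5}^{2,1}\) together with their nested poles and confirms that exactly one vertex carries only planar channels.

```python
from itertools import permutations

n = 5
labels = set(range(1, n + 1))
# planar channels = cyclically consecutive label sets (or their complements)
intervals = {frozenset((i + k) % n + 1 for k in range(l))
             for i in range(n) for l in range(2, n - 1)}

def is_planar(S):
    return frozenset(S) in intervals or frozenset(labels - S) in intervals

vertices = []
for mid in permutations((3, 4, 5)):           # sigma acting on I_i = {3, 4, 5}
    order = (2,) + mid + (1,)
    poles = [set(order[:2]), set(order[:3])]  # nested poles s_{2a}, s_{2ab}
    vertices.append((order, poles))

assert len(vertices) == 6                     # (n-2)! vertices
planar_vs = [o for o, ps in vertices if all(is_planar(p) for p in ps)]
print(planar_vs)                              # [(2, 3, 4, 5, 1)] -- exactly one
```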
We will not consider these mythical objects in our work and will instead show how to recover all the poles (planar as well as non-planar) of the S matrix from a set of incarnations of the ABHY associahedron located in various quadrants of \({\cal K}_{n}\).
## 4 Permutation induced maps between \({\cal E}_{n}\) and \({\cal K}_{n}\)
In this section, we define a family of maps between the embedding space \({\cal E}_{n}\) and the kinematic \({\cal K}_{n}\) space parameterized by the Bose symmetry reviewed in section 2. Given an element \(\sigma\in S_{n}\) we consider the following linear isomorphism, 7
Footnote 7: In [8]\(s_{\sigma(i)\sigma(i+1)\ldots\sigma(j-1)}\) were called the \(\sigma\)-planar variables \(X_{\sigma(i)\sigma(j)}\).
\[f_{\sigma}:{\cal E}_{n} \to{\cal K}_{n}\] \[x_{ij} \mapsto s_{\sigma(i)\sigma(i+1)\ldots\sigma(j-1)}. \tag{4.1}\]
As we have fixed the basis in both, \(\mathcal{K}_{n}\) and \(\mathcal{E}_{n}\), each such map \(f_{\sigma}\) is a matrix in \(GL(\frac{n(n-3)}{2})\).
The linear isomorphism \(f_{\sigma}\) defined in (4.1) maps the ABHY associahedron \(A_{n}^{T}\) in the (positive quadrant of the) embedding space to a geometric realization of the associahedron in the kinematic space \(\mathcal{K}_{n}\). Given a triangulation \(T\) of an \(n\)-gon and an element \(\sigma\in S_{n}\), we get one such geometric realization, which we denote by \(A_{n}^{T,\sigma}\).
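A direct enumeration makes the coset structure appearing below explicit. In this sketch (ours), each \(\sigma\) sends the planar basis to its pole set \(F_{\sigma}\); counting distinct pole sets reproduces \(\frac{(n-1)!}{2}=|D_{n}\backslash S_{n}|\), anticipating lemma 4.3.

```python
from itertools import permutations
from math import factorial

def distinct_pole_sets(n):
    chords = [(i, j) for i in range(1, n - 1) for j in range(i + 2, n + 1)
              if (i, j) != (1, n)]
    labels = frozenset(range(1, n + 1))
    def canon(S):
        S = frozenset(S)
        return min(S, labels - S, key=lambda T: (len(T), sorted(T)))
    pole_sets = set()
    for sigma in permutations(range(1, n + 1)):
        # f_sigma: x_ij -> s_{sigma(i) sigma(i+1) ... sigma(j-1)}
        F = frozenset(canon(sigma[k - 1] for k in range(i, j)) for i, j in chords)
        pole_sets.add(F)
    return pole_sets

for n in (4, 5, 6):
    assert len(distinct_pole_sets(n)) == factorial(n - 1) // 2
print("distinct pole sets F_sigma: (n-1)!/2 of them, one per right coset of D_n")
```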
More in detail, as discussed in section 3, given a triangulation we have a geometric realization of the associahedron by considering the intersection of the hyper-planes given in equation (4) with the positive quadrant of the embedding space. Now the linear isomorphism \(f_{\sigma}\) maps these hyper-planes to the hyper-planes
\[s_{\sigma(i)\sigma(j)}=-c_{ij}\ \ \forall\,\left(i,j\right)\ \notin\ T^{c}. \tag{20}\]
While the positive quadrant of the embedding space \(\mathcal{E}_{n}^{\geq 0}\) is mapped to what we call \(\sigma\)-positive region \(\mathcal{K}_{n}^{\sigma\geq 0}\subset\mathcal{K}_{n}\). The \(\sigma\)-positive region is given by
\[\mathcal{K}_{n}^{\sigma\geq 0}:=\{s_{\sigma(i),\,\sigma(i+1),\ldots,\, \sigma(j)}\,\geq\,0\,\forall\,1\,\leq\,i\,<\,j-1\,\leq\,n-1\,\}. \tag{21}\]
The intersection of hyper-planes given by (20) and the \(\sigma\)-positive region \(\mathcal{K}_{n}^{\sigma\geq 0}\) gives us the geometric realization \(A_{n}^{T,\sigma}\).
We now expand on several characteristics of \(A_{n}^{T,\sigma}\) for generic \(\sigma\in S_{n}\), which reveal the similarities and differences of this geometry with the ABHY realization.
* The set \(F_{\sigma}\) of all the co-dimension one boundaries of \(A_{n}^{T,\sigma}\) are in bijection with the following set of poles, \[F_{\sigma}\ \stackrel{{ 1-1}}{{=}}\,\{\,s_{\sigma(i),\,\ldots,\, \sigma(j-1)}\,|\,1\leq\,i\,<j-1\,\leq\,n\,\}\] (22)
* It is important to note that \(F_{\sigma}\) is in bijection with the set of poles of the color ordered amplitude in bi-adjoint \(\phi^{3}\) theory with the ordering \((\sigma(1),\,\ldots,\,\sigma(n)\,|\,\sigma(1),\,\ldots,\,\sigma(n)\,)\). However, one crucial difference with the bi-adjoint case is worth emphasizing: The kinematic space in the case of the bi-adjoint color ordered amplitude is defined via the \(\sigma\)-planar kinematic variables, \(X_{\sigma(i),\sigma(j)}\) (for a given color order defined by a representative \(\sigma\)) [8]. In contrast, in the present case where the external states have no color, \(\mathcal{K}_{n}\) is fixed once and for all. Our goal is to prove that the complete tree-level S-matrix is given by \(n-3\) forms on \(\mathcal{K}_{n}\).
* As the following lemma proves, the union over all the deformed realisations of \(A_{n-3}\) is a positive geometry, [17].
**Lemma 4.1**.: For any two distinct cosets \([\sigma_{1}],[\sigma_{2}]\,\in\,\mathcal{G}_{n}\), \(\exists\,\sigma\,\in\,[\sigma_{1}],\,\sigma^{\prime}\,\in\,[\sigma_{2}]\) such that,

\[A_{n-3}^{T,\sigma}\,\cap\,A_{n-3}^{T,\sigma^{\prime}}\,=\,\{0\} \tag{23}\]
Proof.: Without loss of generality, let us assume that \(T=\{\,(1,3),\ldots,(1,n-1)\,\}\) is a reference triangulation for the ABHY associahedron. We can always choose \(\sigma,\sigma^{\prime}\) such that
\[\sigma(1)\,=\,\sigma^{\prime}(1)\,=\,1 \tag{24}\]
As \(\sigma,\sigma^{\prime}\) are representatives of two distinct elements of \({\cal G}_{n}\), we note that there is at least one sequence of length \(k\), \((i_{1}\,<\,i_{2}\,<\,\ldots\,<\,i_{k})\), in the \(\sigma\)-ordering which is mapped to \((i_{k},i_{1},\,\ldots,\,i_{k-1})\) in the \(\sigma^{\prime}\)-ordering. (Note that \(i_{m},i_{m+1}\) do not have to be successive entries.) We consider one such sequence among those which have smallest length \(k_{min}\,\geq\,2\). Without loss of generality, let \(\sigma\,=\,\text{id}\). Now \(\sigma^{\prime}\) can be such that either \(i_{k_{min}}\,<n\) or \(i_{k_{min}}=n\). We consider the two cases separately.
If \(i_{k_{min}}\,<\,n\) then for the chosen \(T\), \(A_{n-3}^{T,\sigma^{\prime}}\) is realised in the hyper-plane located at
\[\text{either}\ s_{i_{1}-1,i_{1}}\,=\,-c\ \text{or}\ s_{i_{k_{min}}i_{k_{ min}}+1}\,=\,-c^{\prime} \tag{4.7}\]
for some positive constant \(c\) (\(c^{\prime}\)). As \(A_{n-3}^{T,\sigma=id}\) is realised in the quadrant in which both of these variables are \(\geq\,0\), the intersection between these two associahedra is empty.
If \(i_{k_{min}}=n\), then the above argument does not directly apply, as the associahedron constraints which locate \(A_{n-3}^{T,\sigma}\) do not impose \(s_{in}=-c_{in}\) for any \(1\,\leq\,i\,\leq\,n-2\). To be specific, let \(k_{min}\,=\,2\) and consider
\[\sigma^{\prime}\,\circ\,(1,\,\ldots,\,n)\,=\,(\,1,\,\ldots,\,n-2,n,n-1) \tag{4.8}\]
Then in the same right coset to which \(\sigma^{\prime}\) belongs \(\exists\,\sigma^{\prime\prime}\) such that
\[\sigma^{\prime\prime}\,\circ\,(1\,\ldots\,n)\,=\,(\,1,n-1,n,\ldots,\,3,2) \tag{4.9}\]
Clearly
\[A_{n-3}^{T,\sigma}\,\cap\,A_{n-3}^{T,\sigma^{\prime\prime}}\,=\,\{0\} \tag{4.10}\]
as the latter is located in the quadrant \(s_{1,n-1}\,\geq\,0\) whereas the former is located in the hyperplane \(s_{1,n-1}=-c_{1,n-1}\). A similar line of argument for \(k_{min}\,>\,2\) can be readily formulated. This completes the proof. In the \(n=4\) case, a pictorial representation of the lemma can be found in figure 2.
### A family of Scattering forms on \({\cal K}_{n}\)
The associahedron defines a unique canonical \((n-3)\) form \(\Omega(A_{n})\) on \({\cal E}_{n}\).
\[\Omega_{n-3}\,=\,\Omega(A_{n})\,=\,\sum_{v\,\in\,A_{n}}(-1)^{T_{v}}\bigwedge_{(i,j)\,\in\,T_{v}}\,\mathrm{d}\log x_{ij} \tag{4.11}\]
As proved in [8], \(\Omega_{n-3}\) is the unique \(\mathrm{d}\log\) form which is invariant under the projective transformations generated by any smooth function \(f\,\in\,C^{\infty}({\cal E}_{n})\),
\[x_{ij}\,\rightarrow\,f(\{x_{mn}\})\,x_{ij} \tag{4.12}\]
The pullback of the projectively invariant \(\Omega(A_{n})\) by \(\{\,f_{\sigma}^{-1}\,|\,\sigma\,\in\,S_{n}\,\}\) generates \(n!\) projectively invariant \(\mathrm{d}\log\) forms on \({\cal K}_{n}\).
\[f_{\sigma}^{-1\star}\Omega(A_{n})=\Omega(A_{n}^{\sigma})=\sum_{v\in A_{n}}(-1) ^{T_{v}}\bigwedge_{(i,j)\in T_{v}}\text{d}\log s_{\sigma(i)\sigma(i+1)\ldots \sigma(j-1)}. \tag{4.13}\]
Poles of \(\Omega(A_{n}^{\sigma})\) are the \(\frac{n(n-3)}{2}\) co-dimension one hyperplanes
\[s_{\sigma(i)\,\dots\,\sigma(j-1)}\,=\,0 \tag{4.14}\]
One of the striking and central results in [8] was the following: the pullback of \(\Omega_{n-3}\) to the ABHY associahedron \(A_{n}^{T}\,\subset\,\mathcal{E}_{n}^{\geq\,0}\) is given by the following formula.
\[\Omega_{n-3}|_{A_{n}^{T}}=\left[\sum_{v\in A_{n}}\prod_{(i,j)\in v}\frac{1}{x_ {ij}}\right]\bigwedge_{(m,n)\in T}\mathrm{d}x_{mn}=:\mathcal{M}_{n}\bigwedge_{ (m,n)\in T}\mathrm{d}x_{mn} \tag{4.15}\]
where the rational function \(\mathcal{M}_{n}\) is defined as,
\[\mathcal{M}_{n}\,=\,\sum_{v\,\in\,A_{n}}\,\prod_{(i,j)\,\in\,v}\,\frac{1}{x_ {ij}} \tag{4.16}\]
It then immediately follows that pullback of \(f_{\sigma}^{-1\,\star}\,\Omega_{n-3}\) on \(A_{n}^{T,\sigma}\) is obtained by simply substituting \(s_{\sigma(i),\,\dots,\,\sigma(j-1)}\) for \(x_{ij}\) in the above formula.
\[\left(f_{\sigma}^{-1\,\star}\,\Omega_{n-3}\right)|_{A_{n}^{T,\sigma}}\,=\,\sum_{v\,\in\,A_{n}}\,\left[\prod_{(i,j)\,\in\,T_{v}}\,\frac{1}{s_{\sigma(i)\sigma(i+1)\ldots\sigma(j-1)}}\,\right]\,\bigwedge_{(m,n)\,\in\,T}\,\mathrm{d}s_{\sigma(m)\,\ldots\,\sigma(n-1)} \tag{4.17}\] \[=:\,\mathcal{M}_{n}(\sigma)\,\bigwedge_{(m,n)\,\in\,T}\,\mathrm{d}s_{\sigma(m)\,\ldots\,\sigma(n-1)}, \tag{4.18}\]
where
\[\mathcal{M}_{n}(\sigma)\,:=\,\sum_{v\,\in\,A_{n-3}}\,\left[\prod_{(i,j)\,\in\,T _{v}}\frac{1}{s_{\sigma(i)\sigma(i+1)\dots\sigma(j-1)}}\,\right]. \tag{4.19}\]
Note that \(\mathcal{M}_{n}(\sigma)\) can also be interpreted as the volume of the dual associahedron which is computed using the pull back form \(\Omega_{n-3}^{T,\sigma}\).
Now let's see how we can get the tree-level scattering amplitude of \(\phi^{3}\)-theory. To each geometric realization \(A_{n}^{T,\sigma}\), we can associate a canonical form \(\Omega(A_{n}^{T,\sigma})\). Naively if we sum over all such forms, we get \(\sum_{T}\sum_{\sigma\in S_{n}}\Omega(A_{n}^{T,\sigma})\). However, we are grossly over-counting in this sum. The following two lemmas tell us how to get rid of these redundancies.
**Lemma 4.2**.: Given an element \(\sigma\in S_{n}\), the canonical forms, \(\Omega(A_{n}^{T_{1},\sigma})=\Omega(A_{n}^{T_{2},\sigma})\),
(with an appropriate choice of orientation), for all triangulations \(T_{1}\) and \(T_{2}\) of an \(n\)-gon.
Proof.: The canonical forms \(\Omega(A_{n}^{T,\sigma})\) are the pull backs of the canonical form \(\Omega(A_{n}^{T})\) on the embedding space via the diffeomorphism \(f_{\sigma}^{-1}\). As the canonical form of \(A_{n}^{T}\) in the embedding space is the same for all triangulations \(T\), their pullbacks should also be the same. Therefore, \(\Omega(A_{n}^{T_{1},\sigma})=\Omega(A_{n}^{T_{2},\sigma})\), for all triangulations \(T_{1}\) and \(T_{2}\) of an \(n\)-gon.
Lemma 4.2 tells us that the sum over triangulations is redundant. We could choose to fix any triangulation and work with it. For concreteness, we fix the reference \(T\) to be,
\[T=\{(1,3),\ldots,(1,n-1)\}. \tag{4.20}\]
**Lemma 4.3**.: Given a triangulation \(T\), the canonical forms, \(\Omega(A_{n}^{T,\sigma_{1}})=\Omega(A_{n}^{T,\sigma_{2}})\), whenever \(\sigma_{1}\) and \(\sigma_{2}\) belong to the same right coset of \(D_{n}\subset S_{n}\).
Proof.: If \(\sigma_{1}\) and \(\sigma_{2}\) belong to the same right coset of \(D_{n}\subset S_{n}\), then the linear isomorphisms \(f_{\sigma_{1}}\) and \(f_{\sigma_{2}}\) map the set of planar variables \(\{x_{ij}\}\) to the same set. That is, \(\{f_{\sigma_{1}}(x_{ij})\}=\{f_{\sigma_{2}}(x_{ij})\}\). This means the poles of the canonical forms \(\Omega(A_{n}^{T,\sigma_{1}})\) and \(\Omega(A_{n}^{T,\sigma_{2}})\) are the same. As the canonical form is determined, up to a sign, by the compatibility of poles, if the set of poles is the same, the canonical forms have to be equal.
Lemma 4.3 implies \(\sum_{\sigma\in S_{n}}\Omega(A_{n}^{T,\sigma})=2n\sum_{[\sigma]\in\mathcal{G}_{n}}\Omega(A_{n}^{T,\sigma})\), where \(\mathcal{G}_{n}\) is the set of right cosets of \(D_{n}\). That is,
\[\mathcal{G}_{n}=D_{n}\backslash S_{n}. \tag{4.21}\]
And in the sum \(\sum_{[\sigma]\in\mathcal{G}_{n}}\Omega(A_{n}^{T,\sigma})\), for each coset \([\sigma]\), we take some representative \(\sigma\in[\sigma]\). With these redundancies taken care of, we can now write down the scattering amplitude of \(\phi^{3}\) theory.
**Theorem 4.4**.: The scattering amplitude of \(\phi^{3}\) theory is given by
\[\mathcal{M}_{n}^{\phi^{3}}(p_{1},\,\ldots,\,p_{n})=\frac{1}{2^{n-3}}\sum_{[ \sigma]\in\mathcal{G}_{n}}\mathcal{M}_{n}(\sigma^{\prime}) \tag{4.22}\]
for any triangulation \(T\) and any \(\sigma^{\prime}\,\in\,[\sigma]\).
Proof.: Let's first look at the sum \(\sum_{\sigma\in S_{n}}\Omega(A_{n}^{T,\sigma})\big{|}_{A_{n}^{T,\sigma}}\). As discussed in section 2, given any scattering channel \(S\) of \(\phi^{3}\) theory, there are \(n2^{n-2}\) elements of \(S_{n}\) that take some planar diagram to \(S\). Therefore
\[\sum_{\sigma\in S_{n}}\mathcal{M}_{n}(\sigma)\,=n\,2^{n-2}\mathcal{M}_{n}^{ \phi^{3}}. \tag{4.23}\]
Further, as discussed in lemma 4.3
\[\sum_{\sigma\in S_{n}}\Omega(A_{n}^{T,\sigma})\big{|}_{A_{n}^{T,\sigma}}=2n\sum_{[\sigma]\in\mathcal{G}_{n}}\Omega(A_{n}^{T,\sigma})\,=\,2n\sum_{[\sigma]\,\in\,\mathcal{G}_{n}}\mathcal{M}_{n}(\sigma^{\prime})\,\bigwedge_{(i,j)\,\in\,T}\,\mathrm{d}s_{\sigma^{\prime}(i)\,\sigma^{\prime}(i+1)\ldots\sigma^{\prime}(j-1)}\]
for any \(\sigma^{\prime}\,\in\,[\sigma]\).
Therefore,
\[\mathcal{M}_{n}^{\phi^{3}}=\frac{1}{2^{n-3}}\sum_{[\sigma]\in\mathcal{G}_{n} }\mathcal{M}_{n}(\sigma^{\prime}) \tag{4.24}\]
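As a cross-check of theorem 4.4 (ours, not in the references), the sketch below evaluates the coset sum numerically at \(n=5\) with a random assignment of invariants and compares it against the direct sum over all \(15\) cubic Feynman diagrams; every identifier is our own naming.

```python
from itertools import combinations, permutations
import random

n = 5
labels = frozenset(range(1, n + 1))
chords = [(1, 3), (1, 4), (2, 4), (2, 5), (3, 5)]

def crossing(c1, c2):
    (a, b), (c, d) = c1, c2
    return a < c < b < d or c < a < d < b

triangulations = [T for T in combinations(chords, 2) if not crossing(*T)]  # 5 = C_3

def canon(S):
    S = frozenset(S)
    return min(S, labels - S, key=lambda T: (len(T), sorted(T)))

def channel(chord, sigma):          # f_sigma on a planar variable
    i, j = chord
    return canon(sigma[k - 1] for k in range(i, j))

random.seed(1)
s_val = {}
s = lambda ch: s_val.setdefault(ch, random.uniform(1.0, 2.0))

reps = {}                                    # one representative per pole set,
for sigma in permutations(range(1, n + 1)):  # i.e. per right coset of D_5
    reps.setdefault(frozenset(channel(c, sigma) for c in chords), sigma)
assert len(reps) == 12                       # (n-1)!/2

M = lambda sig: sum(1.0 / (s(channel(T[0], sig)) * s(channel(T[1], sig)))
                    for T in triangulations)
lhs = sum(M(sig) for sig in reps.values()) / 2 ** (n - 3)     # theorem 4.4

def compatible(I, J):
    return any(A <= B or B <= A or not (A & B)
               for A in (I, labels - I) for B in (J, labels - J))

two_particle = sorted({canon(c) for c in combinations(labels, 2)}, key=sorted)
rhs = sum(1.0 / (s(I) * s(J)) for I, J in combinations(two_particle, 2)
          if compatible(I, J))               # all 15 Feynman diagrams

assert abs(lhs - rhs) < 1e-9
print("theorem 4.4 verified numerically at n = 5")
```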
This is one of the central results of the paper.8 It shows the precise manner in which the ABHY associahedron is the positive geometry for the _manifestly_ crossing-symmetric \(\phi^{3}\) S-matrix. In fact, composing the \(\sigma\)-induced linear maps with a translation,
Footnote 8: If we do not normalize the sum by \(\frac{1}{2^{n-3}}\) then the result can also be interpreted as follows: The planar scattering form which generates tree-level S matrix of bi-adjoint scalar theory with coupling \(\lambda\) also generates the manifestly crossing symmetric S-matrix of a colorless \(\phi^{3}\) theory, but with coupling \(\sqrt{2}\,\lambda\).
\[s_{\sigma(i)\,\ldots,\,\sigma(j)}\,\rightarrow\,\overline{s}_{\sigma(i)\, \ldots\,\sigma(j)}\,:=\,s_{\sigma(i)\,\ldots,\,\sigma(j)}\,-m^{2}\,\forall\,( i,j) \tag{4.25}\]
results in the \(n\)-point amplitude for massive \(\phi^{3}\) theory, showing how the ABHY associahedron is the "amplituhedron" for the scalar field S-matrix with cubic coupling and arbitrary mass. We end this section with an observation.
* Lemma 4.1 implies that there exists a set of permutations \(\{\sigma_{I}\}_{I=1}^{|\mathcal{G}_{n}|}\) where each \(\sigma_{I}\) is in a different right coset of \(D_{n}\) such that \(\oplus_{\sigma_{I}}\,A_{n-3}^{T,\sigma_{I}}\) is a positive geometry in \(\mathcal{K}_{n}\) with the corresponding canonical form, \[\Omega_{n-3}\,:=\,\sum_{\sigma_{I}}\,\Omega_{n-3}^{T,\sigma_{I}}|_{A_{n-3}^{T,\sigma_{I}}}\] (4.26)
### Deformed realisation for \(n=4,\,n=5\)
In this sub-section, we provide a few explicit examples of the deformed realizations \(A_{n}^{T,\sigma}\) for \(n\,=\,4,\,5\). In the case of \(n\,=\,4\), \(\mathcal{K}_{4}\) is coordinatized by \(\{\,X_{13}\,=\,s,\,X_{24}\,=\,t\,\}\) and \(\mathcal{G}_{4}=\{[\mathrm{id}],[\sigma_{1}],[\sigma_{2}]\}\), with \(\sigma_{1}\,=\,\left(\begin{smallmatrix}1&2&3&4\\ 1&3&2&4\end{smallmatrix}\right)\), \(\sigma_{2}\,=\,\left(\begin{smallmatrix}1&2&3&4\\ 1&3&4&2\end{smallmatrix}\right)\). It can be immediately checked that,
\[\begin{array}{rcl}A_{4}^{T,\,\sigma_{1}}&=&\{\,s_{13}\geq 0,X_{24}\,\geq\,0\,| \,s=-\,c\,\}\\ A_{4}^{T,\,\sigma_{2}}&=&\{X_{13}\geq 0,s_{24}\,\geq\,0\,|\,t=-\,c\,\}\end{array} \tag{4.27}\]
The geometric realizations of the deformed associahedra \(A_{4}^{T,\sigma_{1}}\), \(A_{4}^{T,\sigma_{2}}\) and the ABHY associahedron at 4 points are given in figure 2.
The one forms on the three associahedra in \(\mathcal{K}_{4}\) are obtained via pullback of \(f_{\sigma}^{-1\star}\,\Omega_{1}\) on the corresponding associahedra.
\[\begin{array}{rcl}\Omega(A_{4}^{T,e})|_{A_{4}^{T,e}}&=&(\,\frac{1}{s}+\frac{1}{t}\,)\,\mathrm{d}s\\ \Omega(A_{4}^{T,\sigma_{1}})|_{A_{4}^{T,\sigma_{1}}}&=&(\,\frac{1}{t}+\frac{1}{u}\,)\,\mathrm{d}u\\ \Omega(A_{4}^{T,\sigma_{2}})|_{A_{4}^{T,\sigma_{2}}}&=&(\,\frac{1}{s}\,+\,\frac{1}{u}\,)\,\mathrm{d}s\end{array} \tag{4.28}\]
Hence,
\[\frac{1}{2}\,\sum_{[\sigma]\,\in\,\mathcal{G}_{4}}\,\Omega(A_{4}^{T,\sigma})| _{A_{4}^{T,\sigma}}=\,\frac{1}{s}+\frac{1}{t}+\frac{1}{u}\,=\,\mathcal{M}_{4 }(\,p_{1},\,p_{2},p_{3},\,p_{4}\,) \tag{4.29}\]
As \(|S_{5}|\,=\,120\), and \(\mathcal{G}_{5}\,=\,\{\,[e],[\sigma_{1}]=\left[\left(\begin{smallmatrix}1&2&3&4& 5\\ 1&4&3&2&5\end{smallmatrix}\right)\right]\!,[\sigma_{2}]=\,\left[\left( \begin{smallmatrix}1&2&3&4&5\\ 1&3&5&2&4\end{smallmatrix}\right)\right]\!,\ldots\}\), there are many deformed realizations of the ABHY associahedron \(A_{5}^{T}\). We will analyze two of them, which live in different quadrants of \(\mathcal{K}_{5}\) and have an unequal number of non-planar channels.
Under the action of \(f_{\sigma}\) for \(\sigma\,\in\,S_{5}\), the ABHY associahedron \(A_{5}^{T}\) is mapped to a convex realisation in \({\cal K}_{5}\). We first illustrate such deformations with a couple of examples.
For \(\sigma_{1}=\left(\begin{smallmatrix}1&2&3&4&5\\ 1&5&2&3&4\end{smallmatrix}\right)\),
\[f_{\sigma_{1}}\,(\,{\cal E}_{5}^{\geq\,0}\,)\,=\,{\rm span}(\,\{\,X_{25},\,X_{35},\,s_{25},\,s_{14},\,X_{24}\,\}\,\geq\,0\,) \tag{4.30}\]
In this case, \(A_{5}^{T,\sigma_{1}}\) is a two-dimensional positive geometry defined by the \(X_{25},\,X_{35}\,\geq\,0\) region of the 2-plane given by the following equations,
\[\begin{array}{l}X_{25}\,-\,X_{35}\,+\,s_{25}\,=\,c_{1}\\ X_{25}\,+\,s_{14}\,=\,c_{2}\\ X_{24}\,+\,X_{35}\,=\,c_{3},\end{array} \tag{4.31}\]
where \(c_{i}\) are arbitrary positive constants. The five co-dimension-one boundaries of this polytope are located at
\[\{\,X_{25},X_{35},s_{25},\,s_{14},\,X_{24}\,\}\,\rightarrow\,0\]
The two-sided bounds imposed by eqn. (4.31) on these kinematic variables imply that,
\[\begin{array}{l}X_{13}=-c^{\prime}\\ X_{14}\,=\,c_{2}\,-\,X_{24}\end{array} \tag{4.32}\]
which shows how \(A_{5}^{T,\sigma}\) has no intersection with the ABHY associahedron \(A_{5}^{T}\).
Figure 2: ABHY associahedron and its deformations for n = 4
In the second example \(\sigma_{2}=\left(\begin{smallmatrix}1&2&3&4&5\\ 1&4&2&5&3\end{smallmatrix}\right)\),
\[f_{\sigma_{2}}\left(\,{\cal E}_{5}^{\geq\,0}\,\right)\,=\,{\rm span}(\,\{\,s_{14},\,s_{35},\,s_{24},\,s_{25},\,s_{13}\,\}\,\geq\,0\,) \tag{4.33}\]
In this case, \(A_{5}^{T,\sigma_{2}}\) is a two-dimensional positive geometry defined by \(s_{14},s_{35}\,\geq\,0\) region in the 2-plane defined by the equations,
\[\begin{split} s_{14}\,+\,s_{24}\,-\,s_{35}\,=\,d_{1}\,=\,-\,X_{13}\\ s_{35}\,+\,s_{25}\,=\,d_{2}\,=\,-\,(X_{25}\,+\,X_{14}\,)\\ s_{14}\,+\,s_{13}\,=\,d_{3}\,=\,-\,(X_{13}\,+\,X_{25}\,)\end{split} \tag{4.34}\]
where, as before, \(d_{1},d_{2},d_{3}\) are arbitrary positive constants. None of the co-dimension one faces of \(A_{5}^{T,\,\sigma_{2}}\) correspond to planar poles of the 5-point amplitude. The above equations imply that \(A_{5}^{T,\sigma_{2}}\) is the two-dimensional pentagon which is the intersection of the hyper-planes
\[X_{13}\,=\,-d_{1},X_{25}\,=\,-\,d_{3}\,-\,d_{1},X_{14}\,=\,-\,\sum_{i}\,d_{i} \tag{4.35}\]
with the cone,
\[X_{24}\,+\,2\,d_{1}\,+2\,d_{3}\,+d_{2}\,\geq\,0\,{\rm and}\,X_{35}\,\leq\,d_{1} \,+\,d_{2}. \tag{4.36}\]
Once again, it has no intersection with the ABHY associahedron; in fact, none of the three associahedra \(A_{n-3}^{T},\,A_{n-3}^{T,\sigma_{1}},\,A_{n-3}^{T,\sigma_{2}}\) intersect each other, as they all lie in distinct hyper-planes in \({\cal K}_{n}\).
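The double equalities in eq. (4.34) are exact identities in the planar basis, and can be verified symbolically. The sympy sketch below (ours) writes every two-particle invariant in terms of the planar \(X_{ij}\), with pentagon edges set to zero.

```python
import sympy as sp

n = 5
planar = {(1, 3), (1, 4), (2, 4), (2, 5), (3, 5)}

def X(i, j):
    i, j = sorted(((i - 1) % n + 1, (j - 1) % n + 1))
    return sp.Symbol(f"X{i}{j}") if (i, j) in planar else sp.Integer(0)

def s(i, j):   # s_ij in the planar basis, indices understood mod 5
    return X(i, j + 1) + X(i + 1, j) - X(i, j) - X(i + 1, j + 1)

assert sp.expand(s(1, 4) + s(2, 4) - s(3, 5) + X(1, 3)) == 0      # = -X13
assert sp.expand(s(3, 5) + s(2, 5) + X(2, 5) + X(1, 4)) == 0      # = -(X25+X14)
assert sp.expand(s(1, 4) + s(1, 3) + X(1, 3) + X(2, 5)) == 0      # = -(X13+X25)
print("the constraint combinations of eq. (4.34) hold in the planar basis")
```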
We end this section with a few remarks.
* In [2], it was proved that diagonal linear maps of the form \(x_{ij}\,=\,\alpha_{ij}\,X_{ij}\,-\,m_{ij}^{2}\) for all \((i,j)\) with \(1\,\leq\,i\,<\,j\,\leq\,n\) generate deformed realizations of the ABHY associahedron in \({\cal K}_{n}^{\geq\,0}\), which turn out to be positive geometries for the color-ordered S-matrix of cubic scalar non-derivative interactions, where the number of scalar fields, their mass parameters, and the strengths of the various cubic couplings are encoded in the family of parameters \((\,\alpha_{ij},\,m_{ij}\,)\). We now see that the composition of such diagonal maps with the \({\cal G}_{n}\)-induced diffeomorphisms leads us to positive geometries for the non-planar scalar S-matrix with cubic interactions between distinct scalar fields with arbitrary masses.
* The ABHY associahedron is thus a universal polytope whose simplest possible avatars (obtained simply by linear mappings of the embedding space \({\cal E}_{n}\) into the kinematic space) lead to an entire spectrum of tree-level S-matrices with scalar particles.
## 5 CHY formula for the \(\phi^{3}\) S-matrix without color.
The discovery of the diffeomorphic avatars of the ABHY associahedron in the kinematic space has intriguing consequences for the worldsheet formulation of the tree-level S-matrix given by Cachazo, He, and Yuan (CHY) in [27]. In essence, building on the seminal result in [8], a point of view advocated in [2] was that the scattering equations should be considered as a diffeomorphism between the real section of the compactified moduli space \(\overline{{\cal M}}_{0,n}({\bf R})\) and the ABHY associahedron \(A_{n-3}^{T}\) in \(\mathcal{E}_{n}^{\geq\,0}\). That is, given the \(\{y_{ij}\}\) coordinates introduced in section 3, we consider the "embedding space" scattering equations,
\[\sum_{i\,\neq\,j}\,\frac{y_{ij}}{z_{ij}}\,=\,0. \tag{5.1}\]
which define a map from the worldsheet to \(\mathcal{E}_{n}\). Restriction of the scattering equations to \(A_{n-3}^{T}\) via
\[\mathcal{F}=0\,\equiv\,\{\,y_{ij}\,=\,-\,c_{ij}\,\forall\,(i,j)\,\notin\,\{\, (2,n),\,(3,n),\,\ldots,\,(n-2,n)\,\}\,\} \tag{5.2}\]
defines a diffeomorphism between the worldsheet associahedron \(A_{n-3}^{\rm ws}\,:=\,\overline{\mathcal{M}}_{0,n}(\mathbf{R})\) and the ABHY associahedron in \(\mathcal{E}_{n}^{\geq\,0}\).
Hence composing the diffeomorphism induced by the scattering equations with the linear map parametrized by \([\sigma]\,\in\,\mathcal{G}_{n}\) gives us a (family of) maps between \(A_{n-3}^{\rm ws}\) and the kinematic space associahedra \(A_{n-3}^{T,[\sigma]}\). We call these diffeomorphisms \([\sigma]\)-deformed scattering equations, obtained by identifying
\[y_{ij}\,=\,s_{\sigma(i)\sigma(j)} \tag{5.3}\]
and imposing
\[\mathcal{F}([\sigma])=0\,\equiv\,\{\,s_{\sigma(i)\sigma(j)}\,=\,-\,c_{ij}\, \forall\,(i,j)\,\notin\,\{\,(2,n),\,(3,n),\,\ldots,\,(n-2,n)\,\}\,\} \tag{5.4}\]
The canonical form on \(A_{n-3}^{\rm ws}\) is the Parke Taylor form defined as,
\[\omega_{\rm ws}(z_{1},\,\ldots,\,z_{n})\,=\,\frac{{\rm d}^{n-3}z}{z_{12}\,z_{2 3}\,\ldots\,z_{n1}} \tag{5.5}\]
where \(z_{ij}\,:=\,z_{j}-z_{i}\).
The CHY formula for the massless \(\phi^{3}\) S-matrix can now be written down immediately: it is simply a weighted sum over the Parke-Taylor form evaluated on the solutions of the \([\sigma]\)-deformed scattering equations.
\[\mathcal{M}_{n}(p_{1},\ldots\,p_{n}) \tag{5.6}\] \[=\,\frac{1}{2^{n-3}}\,\sum_{[\sigma]\,\in\,\mathcal{G}_{n}}\, \int_{\rm ws}\,\omega_{WS}\,\left[\,\prod_{i=1}^{n}\,\frac{1}{z_{i}-z_{i+1}} \,\prod_{k}^{\prime}\,\delta\left(\,\sum_{m\,\neq\,k}\,\frac{s_{\sigma(m) \sigma(k)}}{z_{mk}}\,\right)|_{\mathcal{F}([\sigma])\,=\,0}\,\right]\]
where \(\prod_{k}^{\prime}\) denotes a product over the \(n-3\) punctures after removing the \(SL(2,R)\) redundancy and the scattering equations are evaluated on \(\mathcal{F}([\sigma])\,=\,0\) in eqn.(5.4). We note that this formula could have been obtained directly (i.e., without exploiting the relationship between the CHY formula and canonical form of the associahedron) from the observations made in section 2.
## 6 Deformed Stokes Polytopes in the kinematic space
The positive geometries for planar scattering amplitudes in theories with polynomial interactions belong to the family of polytopes called accordiohedra [1, 28]. An accordiohedron is parametrized by a dissection of a planar \(n\)-gon, where the dissection is coarser than a complete triangulation. The geometric realizations of accordiohedra descend from the geometric realization of the associahedron by projecting the associahedron equations defined with respect to, say, \(T\) onto the coarser dissection. The relevance of the accordiohedron to the S-matrix program was discovered in [16, 10], where it was shown that the accordiohedra parametrized by quadrangulations constitute positive geometries of the color-ordered S-matrix with quartic scalar interactions.
Any quadrangulation \(Q\) of an \(n\)-gon generates a Stokes polytope. Although the precise combinatorial definition is rather involved, the construction can be understood as follows: \(Q\) divides the \(n\)-gon into a collection of cells (quadrilaterals). Now consider a dual polygon whose vertices correspond to the edges of the original \(n\)-gon. A quadrangulation \(Q^{\prime}\) of the dual \(n\)-gon is said to be compatible with \(Q\) if any chord in \(Q^{\prime}\) only enters and exits a given cell of the \(n\)-gon via adjacent edges. The set of all such \(Q^{\prime}\) related to each other via mutations generates the Stokes polytope \(\mathcal{AC}_{Q}\), which is a closed and convex positive geometry. We refer the reader to [29, 30] for more details on the accordiohedron and the Stokes polytope.
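Quadrangulations themselves are easy to enumerate recursively by tracking the quadrilateral cell attached to a fixed edge. The sketch below (ours; the recursion and all names are our own) does this and checks the counts against the Fuss-Catalan numbers \(\frac{1}{2m+1}\binom{3m}{m}\) for a \((2m+2)\)-gon.

```python
from itertools import product
from math import comb

def quadrangulations(poly):
    """All quadrangulations of the polygon `poly` (vertex labels in cyclic
    order), each returned as a list of quadrilateral cells (4-tuples)."""
    k = len(poly)
    if k == 2:                               # a bare edge: nothing left to cut
        return [[]]
    out = []
    for i in range(2, k - 1, 2):             # cell (poly[0], poly[1], poly[i], poly[j]);
        for j in range(i + 1, k, 2):         # parity keeps all three pieces even
            cell = (poly[0], poly[1], poly[i], poly[j])
            pieces = (poly[1:i + 1], poly[i:j + 1], poly[j:] + poly[:1])
            for qs in product(*(quadrangulations(p) for p in pieces)):
                out.append([cell] + [c for q in qs for c in q])
    return out

fuss_catalan = lambda m: comb(3 * m, m) // (2 * m + 1)

for n in (4, 6, 8, 10):
    count = len(quadrangulations(tuple(range(1, n + 1))))
    assert count == fuss_catalan((n - 2) // 2)   # 1, 3, 12, 55
print("quadrangulation counts 1, 3, 12, 55 for n = 4, 6, 8, 10")
```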
To obtain a geometric realization of the Stokes polytope \(\mathcal{AC}_{Q}\) associated with a quadrangulation
\[Q=\{(i_{1}j_{1}),\ldots(i_{\frac{n-4}{2}}j_{\frac{n-4}{2}})\},\]
we consider a triangulation \(T=\{(i_{1}j_{1}),\ldots(i_{n-3}j_{n-3})\}\) such that \(Q\subset T\). The geometric realization of \(\mathcal{AC}_{\{Q,T\}}\) is obtained simply via projecting \(A^{T}_{n-3}\) onto the subspace spanned by the variables
\[\{X_{i_{1}j_{1}},\ldots X_{i_{\frac{n-4}{2}}j_{\frac{n-4}{2}}}\}\]
Given a quadrangulation \(Q\), it fixes a unique planar \(\frac{n-4}{2}\)-form in \(\mathcal{K}_{n}\). The restriction of this planar scattering form \(\Omega^{Q}_{n}\) to the ABHY associahedron \(A^{T}_{n-3}\) generates the planar quartic scalar amplitudes \(\mathcal{M}^{\phi^{4}}_{n}(p_{1}\,\ldots\,p_{n})\) [16].
\[\Omega^{Q(T)}_{\frac{n-4}{2}}\,|_{A^{T}_{n-3}}\,=\,\mathcal{M}^{\phi^{4}}_{n} (p_{1}\,\ldots\,p_{n})\,\bigwedge_{(i,j)\,\in\,Q(T)}\,\mathrm{d}x_{ij} \tag{10}\]
Given any \(\sigma\,\in\,S_{n}\) we can once again consider pull back of the projective \(\frac{n-4}{2}\) form on ABHY associahedron.9
Footnote 9: If \(\sigma\,\in\,S_{n}\) then the deformed associahedron is of ABHY type, but realized in a different positive quadrant of the kinematic space.
\[f_{\sigma}^{-1\star}\,\Omega_{Q(T)}\,|_{A^{T,\sigma}_{n-3}}\,=\,\sum_{Q^{\prime}}\,\left[\,\prod_{(m,n)\,\in\,Q^{\prime}}\,\frac{1}{s_{\sigma(m)\,\sigma(m+1)\,\ldots\,\sigma(n-1)}}\,\right]\,\bigwedge_{(i,j)\,\in\,Q}\,\mathrm{d}s_{\sigma(i)\,\ldots\,\sigma(j-1)} \tag{11}\]
where the sum is over all the quadrangulations compatible with the reference quadrangulation.
The color-ordered (planar) tree-level S matrix of \(\phi^{4}\) theory is a weighted sum over canonical forms associated with Stokes Polytopes \(\mathcal{AC}_{Q}\). Every Stokes polytope is realized in the linearity space \(\{\,X_{ij}\,\geq\,0\,|\,(i,j)\,\in\,Q\,\}\) inside \(\mathcal{K}_{n}^{\,\geq\,0}\). As we now argue, this result can also be understood via the ideas introduced in this paper.
Consider the dihedral group \(\mathcal{D}_{n}\,\subset\,S_{n}\) and label the permutations in \(\mathcal{D}_{n}\) as \(\tilde{\sigma}\). Let \(\{Q_{I}\}\) be the set of all the primitives (topologically inequivalent quadrangulations of an \(n\)-gon, as defined in [11]).
Given a \(\tilde{\sigma}\,\in\,\mathcal{D}_{n}\) and a reference quadrangulation \(Q_{I}\), let \(\mathcal{M}_{n}(\tilde{\sigma},Q_{I})\) be defined via the following equation.
\[\sum_{\tilde{\sigma}\,\in\,\mathcal{D}_{n}}(f_{\tilde{\sigma}}^{-1})^{\star} \,\cdot\,\Omega_{Q_{I}}^{\frac{n}{2}\,-\,2}\big{|}_{f_{\tilde{\sigma}}(\,A_{ n-3}^{\,\,\,Q_{I}\,\subset\,T_{I}}\,)}\,=:\sum_{\tilde{\sigma}\,\in\,\mathcal{D}_{n}} \,\mathcal{M}_{n}(\tilde{\sigma},Q_{I})\,\bigwedge_{(1,m)\,\in\,Q_{I}}\,ds_{ 1\tilde{\sigma}(2)\,\dots\,\tilde{\sigma}(m-1)} \tag{101}\]
Then the color-ordered \(\phi^{4}\) amplitude, \(\mathcal{M}_{n}^{\rm co}(p_{1},\,\dots,\,p_{n})\) is given by the formula
\[\mathcal{M}_{n}^{\rm co}(p_{1}\,\dots\,p_{n})\,=\,\sum_{I}\,\alpha_{I}\,\sum_ {\tilde{\sigma}\,\in\,\mathcal{D}_{n}}\,\mathcal{M}_{n}(\tilde{\sigma},Q_{I}) \tag{102}\]
where \(\alpha_{I}\) are the weights associated to the primitives \(\{\,\tilde{\sigma}\,\cdot\,Q_{I}\,|\,\tilde{\sigma}\,\in\,\mathcal{C}_{n}\,\}\). These weights were analyzed in [31, 32, 11, 28]. A general discussion on the structure of weight and the formula for computing them for a generic accordiohedron can be found in [1].
Hence the color-ordered tree-level amplitude in massless \(\phi^{4}\) theory can also be understood as a weighted sum over "deformed" realizations of the accordiohedra \(\mathcal{AC}_{Q}\) where the deformations are simply linear maps from \(\mathcal{E}_{n}\) to \(\mathcal{K}_{n}\) and are parametrized by the dihedral group over \(n\)-elements.
### Towards S-matrix of \(\phi^{4}\) theory without color
As the boundaries of the deformed realizations of the associahedron contain all the poles of the tree-level S-matrix, it is rather tempting to speculate whether the evaluation of lower-rank projective forms on these realizations will generate amplitudes of scalar field theories with generic non-derivative interactions. We now argue that this is indeed the case for the projective \(\frac{n-4}{2}\)-forms parametrized by quadrangulations, which generate the tree-level S-matrix of \(\phi^{4}\) theory without color.
We recall once again the notion of a primitive quadrangulation: a primitive is defined to be an equivalence class over those quadrangulations which can be mapped to each other by an element of \(C_{n}\). For the purpose of this section it will be useful to introduce a coarser classification over the set of quadrangulations, which we refer to as a labeled graph, \(\Gamma_{q}\).
Consider a graph with four classes of nodes.10 We will label these four classes as \(\{\,r,\,b,\,v,\,g\,\}\):
\[\begin{split} r&\,\sim\,\text{a node with three external legs}\\ b&\,\sim\,\text{a node with two external legs}\\ v&\,\sim\,\text{a node with one external leg}\\ g&\,\sim\,\text{a node with no external leg}\end{split} \tag{6.5}\]
Every quadrangulation of an \(n\)-gon generates a labeled graph. However, this correspondence is many-to-one, and thus each \(\Gamma_{q}\) defines an equivalence class of quadrangulations. The following lemma shows how \(\Gamma_{q}\) generates a coarser classification scheme for quadrangulations as compared to the notion of primitives.
**Lemma 6.1**.: Consider an \(n\)-gon whose vertices are labelled clockwise as \(\{\,1,\,\ldots,\,n\,\}\). Let \(Q_{1}\), \(Q_{2}\) be two distinct quadrangulations that belong to two distinct primitives but belong to the same equivalence class labeled by a labeled graph \(\Gamma_{q}\). Then there exists a \(\sigma_{1,2}\,\in\,S_{n}\) under whose action \(Q_{1}\) is mapped to \(Q_{2}\). That is, if \(Q_{1}\) is the quadrangulation of \(\{\,1,\,\ldots,\,n\,\}\), then keeping \(Q_{1}\) fixed and permuting external vertices by \(\sigma_{1,2}\) is equivalent to keeping the \(n\)-gon fixed and changing \(Q_{1}\) to \(Q_{2}\).
Proof.: Let \(\{\,N_{1},\,\ldots,\,N_{\frac{n-2}{2}}\,\}\) be the set of nodes of \(\Gamma_{q}\), where each \(N_{i}\,\in\,\{\,\text{r, b, v, g}\,\}\). Let \(N_{1}\) be labelled by external vertices \(\{\,1,2,\,\ldots,\,k\,\}\,|\,0\,\leq\,k\,\leq\,3\) in \(Q_{1}\), and let the same node be labelled by external vertices \(\{\,i_{1},i_{2},\ldots,i_{k}\,\}\) in \(Q_{2}\). Consider a permutation \(\sigma_{N_{1}}\) that maps \((2,\ldots,\,k)\) to \((i_{2},\ldots,\,i_{k})\) while keeping all the other vertices the same. Repeating this construction for every node, the desired element of \(S_{n}\) is \(\sigma_{N_{1}}\,\circ\,\sigma_{N_{2}}\,\circ\,\ldots\,\circ\,\sigma_{N_{\frac{n-2}{2}}}\).
The family of labeled graphs parametrized by \(n\) has the following properties.
1. For \(n\,\leq\,8\) there is a unique \(\Gamma_{q}\). The \(n=8\) quiver is simply \(\Gamma_{q}=\{\,r,b,r\,\}\).
2. If \(n\,=\,10\), there are two labelled graphs, \[\begin{split}\Gamma_{q_{1}}&\,=\,\{\,r,b,b,r\,\}\\ \Gamma_{q_{2}}&\,=\,\{\,r,r,r,v\,\}.\end{split}\] (6.6) Note that \(Q_{1}\,=\,\{\,(1,4),(1,6),(1,8)\,\}\) for which \(\mathcal{AC}_{Q_{1}}\) is a 3-dimensional associahedron belongs to \(\Gamma_{q_{1}}\).
3. There are two vertices in \(V(\mathcal{AC}_{Q_{1}})\) that belong to \(\Gamma_{q_{2}}\), whereas the other 12 belong to \(\Gamma_{q_{1}}\). That is, for \(n\,\leq\,10\), the set of quadrangulations in \(V(\mathcal{AC}_{Q=\{(1,4),\ldots,(1,n-2)\}})\) exhausts all the labelled graphs. 11 Footnote 11: For \(n\,=\,6,8\) this can be checked readily. For \(n=10\), we refer the reader to the appendix of [10].
4. If \(n\,\geq\,12\) then there is no quadrangulation \(Q\) for which the set \(V(\mathcal{AC}_{Q})\) spans all the corresponding labelled graphs.12 Footnote 12: This can be argued as follows. For \(n\,=\,12\), we can verify this directly. Suppose the opposite were true for some \(n\,\geq\,14\); this leads to a contradiction, as there is no boundary of \(\mathcal{AC}_{Q}\) which corresponds to a 4-dimensional Stokes polytope and whose set of vertices exhausts all the labeled graphs of the \(n=12\) case.
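The classification by labeled graphs can be probed mechanically as well. In the sketch below (ours), we reuse the quadrangulation enumerator from the earlier snippet and, as a simplifying assumption that is faithful at these sizes, represent \(\Gamma_{q}\) by the sorted multiset of external-edge counts of its cells (3, 2, 1, 0 for r, b, v, g respectively).

```python
from itertools import product

def quadrangulations(poly):                  # as in the earlier sketch
    k = len(poly)
    if k == 2:
        return [[]]
    out = []
    for i in range(2, k - 1, 2):
        for j in range(i + 1, k, 2):
            cell = (poly[0], poly[1], poly[i], poly[j])
            pieces = (poly[1:i + 1], poly[i:j + 1], poly[j:] + poly[:1])
            for qs in product(*(quadrangulations(p) for p in pieces)):
                out.append([cell] + [c for q in qs for c in q])
    return out

def graph_class(cells, n):
    def external_edges(cell):                # cell edges lying on the n-gon
        pairs = zip(cell, cell[1:] + cell[:1])
        return sum((b - a) % n == 1 or (a - b) % n == 1 for a, b in pairs)
    return tuple(sorted(external_edges(c) for c in cells))

for n in (6, 8, 10):
    classes = {graph_class(q, n) for q in quadrangulations(tuple(range(1, n + 1)))}
    print(n, sorted(classes))
# -> 6 [(3, 3)]; 8 [(2, 3, 3)]; 10 [(1, 3, 3, 3), (2, 2, 3, 3)]
```

The output matches items 1 and 2 above: a single class for \(n\leq 8\), and the two classes \(\{r,b,b,r\}\) and \(\{r,r,r,v\}\) at \(n=10\).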
We will now try to apply the ideas of the previous section to relate non-planar \(n\) point scalar amplitudes in \(\phi^{4}\) theory with projective lower forms \(\Omega_{Q}\). Let,
\[Q\,=\,\{\,(1,4),\,(1,6),\,\ldots,\,(1,n-2)\,\} \tag{104}\]
\({\cal AC}_{Q}\) is an \(\frac{n-4}{2}\)-dimensional associahedron with \(C_{\frac{n-2}{2}}\) vertices.
Let us first define a \(\sigma\)-deformed Stokes polytope \({\cal AC}_{Q}^{\sigma}\) : Given a quadrangulation \(Q\) of an \(n\)-gon with a clockwise ordering of vertices and the corresponding \({\cal AC}_{Q}\), \({\cal AC}_{Q}^{\sigma}\) is the Stokes polytope under the action of \(\sigma\) on the vertices of the \(n\)-gon.
From the comments made above, we can deduce the following surjection.
\[\overline{V}_{Q}\,:=\,\bigcup_{\sigma\,\in\,S_{n}}\,V({\cal AC}_{Q}^{\sigma})\]

For \(n\,\leq\,10\), \(\overline{V}_{Q}\) surjects onto the set of all poles of the \(n\)-point \(\phi^{4}\) amplitude. For higher \(n\), we instead consider a union \(\overline{V}\,:=\,\cup_{i}\,\overline{V}_{Q_{i}}\), where the union is taken over a set of labeled graphs in the following way. We can start with
\[Q_{1}\,=\,\{\,(1,4),\,\ldots,\,(1,n-2)\,\} \tag{111}\]
Suppose \((v_{1},\ldots,v_{M})\,\subset\,V(\mathcal{AC}_{Q})\) do not belong to \(\Gamma_{q_{1}}\) and let
\[v_{1}\,\in\,\Gamma_{q_{2}}\]
We then include \(\Gamma_{q_{2}}\) in the union. Let \(v_{l\,\geq\,3}\,\notin\,V(\mathcal{AC}_{Q_{2}})\). Then we include the labeled graph associated with \(v_{l}\) in the union, and so on, till we include all elements of \((v_{1},\,\ldots,\,v_{M})\).
As an example, consider \(n\,=\,12\). We can choose
\[\overline{V}=V_{Q}\,\cup\,V_{Q^{\prime}}\text{ with }\]
\[Q^{\prime}\,=\,\{\,(1,4),(4,7),(7,10),(10,1)\,\} \tag{112}\]
Systematic classification of the minimal set of labeled graphs is an intriguing combinatorics problem, although it is beyond the scope of this paper. And finally, even if we generate such a vertex set \(\overline{V}\), not all the quadrangulations (or, more precisely, labeled graphs) will occur with the same multiplicity. We will see how to solve this problem in the \(n=10\) case and briefly comment on it for general \(n\) at the end of this section.14
Footnote 14: Essentially, we believe that a rather brute-force way to solve this problem is to consider a weighted sum of projective lower forms over all the labeled graphs as opposed to a minimal set of labeled graphs whose vertex set exhausts all possible channels of an \(n\)-pt. amplitude in \(\phi^{4}\) theory. This belief is reflected in our final conjectured formula proposed in eqn.6.32
We now focus on the three Stokes polytopes of dimensions \(\leq\,3\). By explicit computation, we show that the (manifestly crossing symmetric) \(\phi^{4}\) amplitude is a sum over the push-forward of the projective \(\frac{n-4}{2}\) forms \(\Omega_{Q}\) evaluated on the family of deformed associahedra, \(\{\,A_{n-3}^{T,\sigma}\,|\,\sigma\,\in\,S_{n}\,\}\).
The following comment is in order.
* Our analysis can also be interpreted without taking recourse to associahedron and working solely with (convex realizations) of Stokes Polytopes. In other words, given a reference quadrangulation \(Q\) and the corresponding combinatorial polytope \(\mathcal{AC}_{Q}\), the convex realisation of \(\mathcal{AC}_{Q}\) in \(\mathcal{E}_{n}^{\geq\,0}\) is obtained by solving the system of equations [20] \[s_{ij}\,=\,-\,c_{ij}\,\forall\,(i,j)\,\notin\,\{\,(2,n),(4,n),\ldots,(n-3,n)\,\}\] (113) We can now consider deformed realizations of \(\mathcal{AC}_{Q}\) in \(\mathcal{K}_{n}\) by using \(f_{\sigma}\,|\,\sigma\,\in\,S_{n}\) and as in the tri-valent case, analyze the sum over canonical forms associated to all the deformed realizations. The two approaches are equivalent. As in [16] however, we will only use \(\mathcal{AC}_{Q}\) as a combinatorial polytope used to define planar scattering form in \(\mathcal{E}_{n}\). This perspective places the ABHY associahedron at the heart of the landscape of tree-level scalar amplitudes.
### \(\phi^{4}\) amplitudes for \(n\,\in\,\{\,6,\,8\,\}\).
We will first illustrate this result for six- and eight-point amplitudes before addressing the generic \(n\)-point S-matrix.
In the six-point case, the situation is rather straightforward. The Stokes polytope is the one-dimensional associahedron. As we sum over all the projective one-forms \(\Omega_{Q=(1,4)}\), push-forwarded onto \(A_{3}^{T,\sigma}\) (where \(T\) is any triangulation that contains the chord \((1,4)\)), we get \(2\,\cdot\,6!\) terms in all. It can now be readily checked that the amplitude follows if we define \({\cal M}_{6}(\sigma)\) via the following formula.
Henceforth we denote the pull back of the \(\frac{n-4}{2}\) \(\mathrm{d}\log\) form as,
\[(f_{\sigma}^{-1})^{\star}\,\circ\,\Omega_{Q}\,:=\,\Omega_{Q}^{\sigma} \tag{6.14}\]
\[\frac{1}{2\,\cdot\,4!}\,\sum_{\sigma\,\in\,S_{6}}\,\Omega_{Q(T)}^{\sigma}|_{( A_{3}^{\sigma,T})}\,=:\,\sum_{\sigma\,\in\,S_{6}}\,{\cal M}_{6}(\sigma)\,\bigwedge_{( i,j)\,\in\,Q(T)}\,{\rm d}s_{\sigma(1)\sigma(2)\sigma(3)}. \tag{6.15}\]
Then
\[{\cal M}_{6}(p_{1},\,\ldots,\,p_{6})\,=\,\sum_{S_{6}}\,{\cal M}_{6}(\sigma) \tag{6.16}\]
The normalisation factor \(\frac{1}{2\,\cdot\,4!}\) is simply the multiplicity with which each of the poles \(\frac{1}{s_{ijk}}\) appears as we sum over all the permutations. We note that
\[\frac{|\,\cup_{\sigma\,\in\,S_{6}}\,V({\cal AC}_{Q(T)}^{\sigma})\,|}{\mbox{Multiplicity}}\,=\,10 \tag{6.17}\]
which is precisely the number of Feynman diagrams in 6-point amplitude.
In the \(n\,=\,8\) case there are two primitives: one in which the two chords of a quadrangulation intersect in a common vertex, and one in which the two chords are parallel. The resulting Stokes polytopes are a two-dimensional associahedron and a square, respectively. That is, for \(Q_{1}\,=\,\{\,(1,4),\,(5,8)\,\}\), \({\cal AC}_{Q_{1}}\) is a square. On the other hand, for \(Q_{2}\,=\,\{\,(1,4),\,(1,6)\,\}\), \({\cal AC}_{Q_{2}}\) is \(A_{2}\). However \(Q_{1}\,\sim\,Q_{2}\) under the action of \(S_{8}\), and hence there is a unique labelled graph \(\Gamma_{q}\) for \(n=8\).
Consider
\[Q_{1}\,=\,\{\,(1,4),\,(1,6)\,\} \tag{6.18}\] \[Q_{2}\,=\,\{\,(1,4),\,(5,8)\,\}\]
and let,
\[\sigma:(1,\,\ldots,\,8)\,\rightarrow\,(\,1,2,3,4,8,5,6,7\,), \tag{6.19}\]
We see that \(\sigma\,\cdot\,Q_{1}\,=\,Q_{2}\).
One can first determine the multiplicity of any codimension-three face (in the set of all the 5-dimensional deformed associahedra) which corresponds to a pole configuration of the \(\phi^{4}\) amplitude. As the action of \(S_{8}\) on the set of all such faces is transitive, we have,
\[\mbox{Multiplicity of}\,(s_{\sigma(1),\,\ldots,\,\sigma(3)},\,s_{\sigma(1), \,\ldots\,\sigma(5)})\,=\,3!\,\cdot\,3!\,\cdot\,2 \tag{6.20}\]
Thus the (normalised) push-forward of \(\Omega_{Q}\) on the deformed realizations is given by,
\[{\cal M}_{8}(\sigma)\,\wedge_{(i,j)\,\in\,Q=(14,16)}\,ds_{\sigma(i)\sigma(i+1)\, \ldots\,\sigma(j-1)}\,:=\,\frac{1}{3!3!2}\,\Omega_{Q=(1,4),(1,6)}^{\sigma}{}_{(A _{5}^{\sigma,T})} \tag{104}\]
\[{\cal M}_{8}(p_{1},\,\ldots,\,p_{8})\,=\,\sum_{\sigma\,\in\,S_{8}}\,{\cal M}_{8 }(\sigma) \tag{105}\]
Once again, we see that,
\[\frac{|\,\cup_{\sigma}\,V({\cal AC}_{Q(T)}^{\sigma})\,|}{3!3!2}\,=\,\frac{5\, \cdot\,8!}{3!3!2}\,=\,280\,=\,|\,{\rm Feynman\ diagrams}\,| \tag{106}\]
### Higher point amplitudes as forms.
For \(n\,=\,10\), the situation appears to be far more intricate, for the following reason. Let
\[Q_{1}\,=\,\{\,(1,4),(1,6),(1,8)\,\} \tag{107}\]
Then \(\cup_{\sigma\,\in\,S_{10}}V({\cal AC}_{Q_{1}}^{\sigma})\) contains the entire set of quadrangulations of the 10-gon with all possible orderings of the vertices. However, the multiplicity of the various vertices under the action of all permutations is not the same.
Consider two vertices \(v_{1},\,v_{2}\) of the ABHY realisation of \(A_{7}^{T\,=\,\{(1,3),\,\ldots,\,(1,9)\,\}}\) on which the projective three-form \(\Omega_{Q_{1}}\) has poles.
\[\begin{array}{l}v_{1}\,=\,\{\,(1,4),(1,6),(1,8)\,\}\,\rightarrow\,\{\,s_{12 3},s_{12345},s_{8,9,10}\,\}\\ v_{2}\,=\,\{\,(1,4),(4,7),(7,10)\,\}\,\rightarrow\,\{\,s_{123},s_{456},\,s_{ 789}\,\}\end{array} \tag{108}\]
As we show below, \(v_{1}\) and \(v_{2}\) do not occur with equal multiplicity in \(\cup_{\sigma\,\in\,S_{10}}V(A_{7}^{T,\sigma})\). Thus all poles do not contribute equally in the sum over \(\Omega_{Q_{1}}^{\sigma}|_{A_{n-3}^{T,\sigma}}\), and as a result, such a sum is not an amplitude of any theory.
That is, even though \(\overline{V}_{Q_{1}}\) contains all the quadrangulations dual to all possible Feynman diagrams in \(\phi^{4}\) theory, to ensure an equal multiplicity of all the vertices there must exist a \(Q_{2}\) such that (1) \(\overline{V}_{Q_{2}}\) contains only vertices of type \(v_{2}\), or (2) if it contains vertices of both types \(v_{1}\) and \(v_{2}\), then it contains vertices of type \(v_{2}\) with higher multiplicity than vertices of type \(v_{1}\).
We choose the following quadrangulation to represent \(q_{2}\).
\[Q_{2}\,=\,\{\,(1,4),(4,7),(7,10)\,\} \tag{109}\]
The vertex poset of \({\cal AC}_{Q_{1}},{\cal AC}_{Q_{2}}\) is given in the appendix of [10]. The poset structure is crucial to compute the multiplicity of any vertex in \(\overline{V}_{Q_{i}}\) or equivalently any configuration of propagators \(\{\,s_{i_{1}\ldots i_{k_{1}}},\,s_{j_{1}\ldots j_{k_{2}}},\,s_{l_{1}\ldots l_{ k_{3}}}\,\}\).
### Computing Multiplicities
We now compute the multiplicity of a vertex whose quadrangulation corresponds to the labeled graph \(\Gamma_{q_{1}}\) in the sets \(\overline{V}_{Q_{1}}\) and \(\overline{V}_{Q_{2}}\), respectively.
(a) Consider first any configuration corresponding to a quadrangulation \(Q^{\prime}_{1}\,\in\,q_{1}\). We claim that given any one of the 12 vertices, say \(v_{1}\) in \(V(\mathcal{AC}_{Q_{1}})\) whose labeled graph is \(\Gamma_{q_{1}}\), there always exists at least one permutation \(\sigma\) such that in the poset associated to \(V(\mathcal{AC}^{\sigma}_{Q_{1}})\), \(v_{1}\) is mapped to the configuration corresponding to quadrangulation \(Q^{\prime}_{1}\). This follows simply from the fundamental property of the equivalence class \(\Gamma_{q_{1}}\), all of whose elements can be mapped onto each other by at least one \(\sigma\,\in\,S_{10}\).
(b) Let the vertex corresponding to a quadrangulation \(Q^{\prime}_{1}\) in \(\mathcal{K}_{n}\) be the following.
\[v_{Q^{\prime}_{1}}\,=\,\{\,s_{i_{1}\,\dots\,i_{3}},\,s_{j_{1}\dots j_{5}},\,s _{m_{1}\dots m_{3}}\,\} \tag{6.27}\]
There are \(3!^{2}\times 4^{2}\) permutations that keep \(v_{Q^{\prime}_{1}}\) fixed.
Using (a) and (b), we see that in \(\overline{V}_{Q_{1}}\) the multiplicity of any vertex whose quadrangulation \(Q^{\prime}_{1}\,\in\,\Gamma_{q_{1}}\) is \((3!)^{2}\,\times\,4^{2}\,\times\,12\). In the same spirit, we can deduce that,
\[\begin{array}{lcl}\text{Mult. of }Q^{\prime}_{2}\,\in\,\Gamma_{q_{2}}\text{ in }\overline{V}_{Q_{1}}&=&3!^{4}\,\times\,2\\ \text{Mult. of }Q^{\prime}_{1}\,\in\,\Gamma_{q_{1}}\text{ in }\overline{V}_{Q_{2}}&=&3!^{2}\,\times\,4^{2}\,\times\,8\\ \text{Mult. of }Q^{\prime}_{2}\,\in\,\Gamma_{q_{2}}\text{ in }\overline{V}_{Q_{2}}&=&3!^{4}\,\times\,4\end{array} \tag{6.28}\]
Based on these computations of various multiplicities, consider the following weighted sum over projective 3-forms evaluated on the deformed associahedra.
Let \(T_{i}\) be an arbitrary triangulation that contains \(Q_{i}\) for \(i\,\in\,\{1,2\}\). Let \(\mathcal{M}(\sigma,i)\), \(i\in\{1,2\}\), be the rational functions, indexed by the set of labeled graphs, defined via the following formula.
\[\Omega^{\sigma}_{Q_{i}}|_{A^{T_{i},\sigma}_{7}}\,=\,\mathcal{M}(\sigma,i)\,\bigwedge_{(k,l)\,\in\,Q_{i}}\,ds_{\sigma(k)\sigma(k+1)\,\ldots\,\sigma(l-1)} \tag{6.29}\]
Then by computing the multiplicity of any vertex which is of the type \(q_{1}\) or \(q_{2}\) we get,
\[\sum_{i=1}^{2}\,\alpha_{i}\,\sum_{\sigma\,\in\,S_{10}}\mathcal{M}(\sigma,i)\,=\,\mathcal{M}_{10}(p_{1},\,\ldots,\,p_{10}) \tag{6.30}\]
where
\[\begin{array}{lcl}\alpha_{1}&=&\frac{1}{3!^{2}\,\cdot\,4^{2}\,\cdot\,12\,+\,3!^{4}\,\cdot\,2}\\ \alpha_{2}&=&\frac{1}{3!^{2}\,\cdot\,4^{2}\,\cdot\,8\,+\,3!^{4}\,\cdot\,4}\end{array} \tag{6.31}\]
As a curiosity, we note that \(\frac{\alpha_{1}}{\alpha_{2}}\,=\,\frac{34}{33}\) which is rather close to 1. Note that the issue of unequal multiplicity is resolved in this case by considering a weighted sum of the forms over both the labeled graphs.
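The weights in (6.31), and the quoted ratio \(\frac{34}{33}\), can be checked with exact rational arithmetic; a two-line sketch (ours):

```python
# Check of eq. (6.31): alpha_1 / alpha_2 in exact rational arithmetic.
from fractions import Fraction
from math import factorial

f3 = factorial(3)
alpha1 = Fraction(1, f3**2 * 4**2 * 12 + f3**4 * 2)
alpha2 = Fraction(1, f3**2 * 4**2 * 8 + f3**4 * 4)
print(alpha1 / alpha2)  # -> 34/33
```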
We now conjecture a formula for the tree-level \(n\)-point amplitude in \(\phi^{4}\) theory. We note that, owing to the combinatorial complexity involved in computing the multiplicity of each vertex in \(\overline{V}\) (defined in equation (6.10)), a proof of this formula is beyond the scope of this paper.
Let \(\Gamma_{q_{1}},\dots,\,\Gamma_{q_{k(n)}}\) be the set of _all_ labelled graphs with representatives \(Q_{1},\,\dots,\,Q_{k(n)}\).
Based on the empirical observations made in the \(n\,=\,6,\,8,\,10\) point cases, we conjecture the following formula for the generic \(n\)-point amplitude.
There exists a set of rational numbers \(\{\,\alpha_{1},\,\dots,\,\alpha_{k(n)}\,\}\) such that,
\[{\cal M}_{n}(p_{1},\,\dots,\,p_{n})\,=\,\sum_{i=1}^{k(n)}\,\alpha_{i}\,\sum_{\sigma\,\in\,S_{n}}\,{\cal M}(\sigma,i) \tag{6.32}\]
## 7 Cluster Polytopes and Non-planar One loop Integrands: An obstruction
After the seminal work by Salvatori [33], where the search for a positive geometry associated with the one-loop S-matrix of bi-adjoint \(\phi^{3}\) theory was first initiated, Arkani-Hamed, He, Salvatori and Thomas (AHST) discovered a convex realization of the \(D_{n}\) cluster polytope [34]. The AHST realization lies in the positive quadrant of a space spanned by Mandelstam invariants generated by the external momenta and the loop momentum \(l^{\mu}\). We denote this space as \({\cal K}_{n}^{1-l}\). It contains, as a proper subspace, the vector space spanned by \(\{\,X_{ij},\,p_{i}\cdot l,\,l^{2}\,\}\), [2, 34]. Perhaps the simplest way to understand the one-loop kinematic space for \(n\) particles is to start with a set of \(2n\) external momenta \(\{\,p_{1},\,\dots,\,p_{n},\,p_{\overline{1}},\,\dots,\,p_{\overline{n}}\,\}\) that satisfy,
\[\sum_{i=1}^{n}\,p_{i}\,+\,\sum_{i=1}^{n}\,p_{\overline{i}}\,=\,0 \tag{7.1}\]
The physical kinematic space can be understood as a subspace in which \(p_{\overline{i}}\) is identified with \(p_{i}\). The kinematic space in which cluster polytopes live is spanned by planar kinematic variables of the type,
\[{\cal K}_{n}^{1-l}\,=\,{\rm span}\{\,X_{ij},\,X_{i\overline{j}},\,X_{i\overline{i}},\,Y_{i},\,Y_{\overline{i}}\,\} \tag{7.2}\]
where
\[\begin{array}{rcl}Y_{i}&=&(p_{1}\,+\,\dots\,+\,p_{i-1}\,+\,l)^{2}\\ Y_{\overline{i}}&=&(p_{1}\,+\,\dots\,+\,p_{n}\,+\,p_{\overline{1}}\,+\,\dots \,+\,p_{\overline{i-1}}\,+\,l)^{2}\end{array} \tag{7.3}\]
The "doubling of external momenta" fits in beautifully with the pseudo-triangulation model for the \(D_{n}\) cluster polytope, which was proposed by Ceballos and Pilaud in [35]. In this model, every planar kinematic variable is a chord that can dissect a \(2n\)-gon with an annulus in the middle. The chords associated with \(Y_{i}\) are non-linear and terminate at a boundary point \(0_{R}\) of the annulus, whereas the chords associated with \(Y_{\overline{i}}\) terminate at the antipodal point \(0_{L}\) of the annulus. As shown in figure 3, the chords \(X_{i\overline{i}}\) are non-linear and enclose the annulus, such that any (pseudo)-triangulation of a \(2n\)-gon which includes a chord \(X_{i\overline{i}}\) must include \(Y_{i}\) and \(Y_{\overline{i}}\). We have given only a very brief summary of the construction of \({\cal K}_{n}^{1-l}\) and urge the reader to consult [2] for a more elaborate discussion.
The \(\hat{D}_{n}\) polytope was constructed, on the other hand, by Frost, Plamondon, Salvatori, and Thomas in [24]. Boundaries of the convex realization of \(\hat{D}_{n}\) correspond to _all_ the pseudo-triangulations of the \(2n\)-gon.15 These polytopes sit inside the positive quadrant of \({\cal K}_{n}^{1-l}\).
Footnote 15: A brief review of \(\hat{D}_{n}\) polytope and its AHST inspired realisation can be found in [2]. We are indebted to Nima Arkani-Hamed for patiently explaining their construction to us.
Although a detailed derivation of the convex realization is not essential for us (and can be found in [2]), the basic idea is rather simple. In a nutshell, the convex realisation of \(\hat{D}_{n}\) polytopes mirrors the convex realisation of the associahedron \(A_{n-3}^{T}\) in \({\cal K}_{n}\).
Given a reference (pseudo)-triangulation \(PT_{0}\) of the holed \(2n\)-gon, we consider all the chords which belong to \(PT_{0}^{c}\), the pseudo-triangulation obtained by a counter-clockwise \(\frac{\pi}{n}\) rotation of \(PT_{0}\), and write all the linear equations of the form
\[Y_{IJ}\,=\,-\,c_{IJ}\,\,\forall\,\,(I,J)\,\notin\,PT_{0}^{c} \tag{7.4}\]
where \((I,J)\) indicates all possible linear as well as non-linear chords and
\[X_{IJ}\,\in\,\{X_{ij},\,X_{i\overline{j}},\,X_{i\overline{i}},\,Y_{i},\,Y_{\overline{i}}\}\]
are the corresponding coordinates in \({\cal K}_{n}^{1-l}\). The projective \(d\ln\) form on \({\cal K}_{n}^{1-l}\) uniquely determined by \(\hat{D}_{n}\) generates the one-loop integrand of the bi-adjoint \(\phi^{3}\) amplitude,
\[m_{n}^{1-l}(p_{1},\ldots,p_{n},l)\,\bigwedge_{(I,J)\,\in\,PT_{0}}\,{\rm d}X_{ IJ}\,=\,\Omega_{n}^{PT_{0}}|_{\hat{D}_{n}^{PT_{0}}} \tag{7.5}\]
It is tempting to consider the deformed realizations of the \(\hat{D}_{n}\) polytopes, where the deformation is parametrized by \([\sigma]\,\in\,{\cal G}_{n}\) (or more broadly, \(\sigma\,\in\,S_{n}\)), and ask whether the sum over all the corresponding canonical forms is the S-matrix integrand (at one loop) for \(\phi^{3}\) theory without color. However, a moment of meditation informs us that a naive application of such an idea cannot work. This can be understood by a simple counting argument for \(n\,=\,4\).
Figure 3: The pseudo-triangulation model of the holed \(2n\)-gon, with the chords \(X_{i\overline{i}}\) enclosing the annulus.
The cardinality of the vertex set of \(\hat{D}_{4}\) is 70, and this can be seen as follows. As each vertex of \(\hat{D}_{n}\) corresponds to a unique pseudo-triangulation of the holed \(2n\)-gon, we simply need to count the number of pseudo-triangulations. However, just as for triangulations, each pseudo-triangulation is dual to a planar one-loop Feynman graph with cubic vertices, such that the loop momentum is oriented clockwise or counter-clockwise. For a fixed orientation of the loop momentum, there are twenty graphs with tadpoles, ten graphs with a bubble, four graphs associated with vertex corrections, and one box graph, i.e. 35 graphs in total. Hence,
\[|\,1\mbox{-loop planar oriented Feynman graphs}\,|\;=\;|\,V(\hat{D}_{4})\,|\;=\;70 \tag{7.6}\]
From the perspective of amplitudes, the map between the set of all vertices of the \(\hat{D}_{n}\) polytope and the set of planar 1-loop Feynman graphs is \(2:1\). In other words, two vertices \(v,v^{\prime}\,\in\,V(\hat{D}_{n})\) are equivalent if
\[\forall\,(i,j)\,\in\,v\;\;\exists\,(\overline{i},\overline{j})\,\in\,v^{\prime}\quad\mbox{or}\quad\forall\,(i,0_{R})\,\in\,v\;\;\exists\,(i,0_{L})\,\in\,v^{\prime} \tag{7.7}\]
Modulo this equivalence, we see that the number of "independent" vertices in \(\hat{D}_{4}/Z_{2}\) is 35.
\[|\,\cup_{[\sigma]\,\in\,{\cal G}_{4}}\,V(\hat{D}_{4}^{\sigma})\,|\,=\,105 \tag{7.8}\]
On the other hand, the total number of one-loop 4-point Feynman graphs with cubic vertices is 54.16 We can contrast this situation with the associahedron, where
Footnote 16: Once again, this can be verified by a simple counting argument.
\[|\,\cup_{[\sigma]\,\in\,{\cal G}_{n}}\,V(A_{n-3}^{T,\sigma})\,|\,=\,2^{n-3}\,|\,\mbox{Feynman graphs}\,| \tag{7.9}\]
Hence we see that the cardinality of the complete vertex set spanned by the \({\cal G}_{4}\) action on the AHST realization of \(\hat{D}_{4}\) is not an integer multiple of the total number of Feynman graphs of \(\phi^{3}\) theory. We thus conclude that if we define the \([\sigma]\)-dependent rational function on \({\cal K}_{4}^{1-l}\) via,
\[\Omega_{4}^{PT_{0},\sigma}|_{\hat{D}_{4}^{PT_{0},\sigma}}\,\to\,{\cal M}_{4}^{1-l}(\sigma) \tag{7.10}\]
then there exists no \(\alpha_{4}\,\in\,{\bf Z}^{+}\) for which
\[\sum_{[\sigma]\,\in\,{\cal G}_{4}}\,{\cal M}_{4}^{1-l}(\sigma)\,=\,\alpha_{4}\,m_{4}^{1-l}(p_{1},\ldots,\,p_{4}). \tag{7.11}\]
It can be easily argued that this result continues to hold \(\forall\,n\).
Thus there is an obstruction to realizing \(m_{n}^{1-l}\) in the scalar theory without color as a (sum of) canonical forms of \(\hat{D}_{n}\) polytopes. In fact, in the \(n=4\) case, we could have foreseen this already. While every vertex of \(\hat{D}_{4}\) which is not dual to the box diagram occurs twice in the set of all vertices, the graphs with no tree-level pole (that is, the box graphs for the three orderings \(\{\,(1,2,3,4),\,(1,3,2,4),\,(1,2,4,3)\,\}\)) occur only once. In fact, this should be expected, as the vertices of \(\hat{D}_{4}\) correspond not only to the color orderings of the external states but also to planar loops. However, in the perturbative expansion of the S-matrix, once the external states are not colored, then even for a fixed ordering we can have the box as well as the cross-box graphs. Such non-planar diagrams do not correspond to any vertex of \(\hat{D}_{n}\). Several comments are in order.
* It can be verified that there is no weighted sum over canonical forms associated with \(\hat{D}_{4}\) and the deformed realization of the 4-dimensional cyclohedron \(\hat{C}_{4}\) which is proportional to the one loop integrand. This is because every vertex of a cyclohedron is dual to a tadpole graph with a cubic vertex. Hence although the ratio of residues between the tadpole graphs and the box graphs can be adjusted by suitably changing relative weights of \(\hat{D}_{4}\), \(\hat{C}_{4}\) forms, this does not change the residue of a vertex associated to one loop propagator.
* We believe that more general Clusterohedra discovered in [24], whose boundary poset includes planar as well as non-planar poles of the one loop integrand in bi-adjoint theory will play a crucial role in the hunt for positive geometry of amplitudes without color.
## 8 Conclusions and Open questions
In this note, we have continued to develop the ideas proposed in [1; 2]. The central premise behind these ideas should really be thought of, morally, as a "bootstrap" construction in the context of the positive geometry program of the S-matrix. Namely, under what conditions do convex realizations of positive geometries (whose boundary poset is isomorphic to a set of poles of an S-matrix) geometrize the S-matrix of a local unitary QFT? Although for an arbitrary diffeomorphism the answer is not known, we have shown that for an infinite family of linear diffeomorphisms, the resulting realizations, through their canonical forms, always define a tree-level S-matrix.17 In this paper we analysed a rather canonical choice of diffeomorphisms that arise from the combinatorial Bose symmetry acting on the configuration space of \(n\) momenta. The resulting associahedra collectively constitute a positive geometry for the tree-level S-matrix without color.
Footnote 17: In fact as we proved in [2], there is a class of diffeomorphisms which deform the \(\hat{D}\) polytope in such a way that the resulting realisation constitute a positive geometry for one loop integrand of a scalar field theory in which two scalars with unequal masses interact via a cubic coupling.
In a beautiful paper [17], the authors have shown how the Kleiss-Kuijf relations emerge from the geometry of positive geometries, such as the momentum amplituhedron and the associahedron. One of the corollaries of their analysis is to obtain any channel (or a collection of channels) of the \(\phi^{3}\) amplitude as the boundary of an open associahedron in \(\mathcal{K}_{n}\). All of the associahedra lie in the fixed linearity space spanned by \(\{\,X_{i_{1}j_{1}},\,\ldots,\,X_{i_{n-3},j_{n-3}}\,\}\) in \(\mathcal{K}_{n}\). The S-matrix of uncolored \(\phi^{3}\) theory can then be obtained as an oriented sum over positive geometries (for a precise definition of the oriented sum over positive geometries, see [17], [36]). However, the positive geometries that generate the S-matrix in this way are distinct from the deformed realizations we have found in our paper. It will nevertheless be interesting to compare the two approaches and see if the idea proposed in their work can be applied to obtain the S-matrix of \(\phi^{p}\) theory from positive geometries.
The central result of this paper is a rather simple formula for the S-matrix, which was proved using a remarkable combinatorial formula relating the Catalan number, the number of tri-valent graphs with \(n\) external vertices, and the cardinality of the symmetric group.
We have also taken preliminary steps towards an analogous formula for the S-matrix of \(\phi^{4}\) theory without color. Although a complete formula expressing the \(n\)-point amplitude of \(\phi^{4}\) theory (as a weighted sum of lower forms pulled back onto the deformed associahedra) is beyond the scope of this paper, we believe that the conceptual setup will essentially go through for all accordiohedra and hence for any \(\phi^{p}\) theory.
The results in this paper, along with those in [1, 2], are data points that reveal the striking universality of the ABHY associahedron as a geometric structure associated with the S-matrix. While the ABHY realization gives a very specific shape to the associahedron in the embedding space, there is an infinite family of realizations, all diffeomorphic to the ABHY associahedron, and a large family of these are related to increasingly complicated quantum field theories which are unitarily inequivalent.
There are many potential avenues for developing these ideas further, including the hunt for a positive geometry for the one-loop integrands of the \(\phi^{3}\) S-matrix without color.
We finally close this note with a rather trivial observation, but which hints at the relevance of positive geometries arising via gluing of associahedra.
#### Can all the Deformations be Combined into a Single Geometry?
In the \(n=4\) case, we can take the mirror images of the three associahedra and join them at the respective vertices to obtain a closed one-dimensional hexagon in which adjacent edges have opposite orientations (see figure 4).
The resulting canonical form then has simple poles at all six vertices, and the corresponding one-form is \(4\,(\frac{1}{s}+\frac{1}{t}+\frac{1}{u})\). It will be interesting to investigate whether such a structure persists at higher \(n\).
## Acknowledgements
We would like to thank Pinaki Banerjee, Nikhil Kalyanapuram, Subramanya Hegde, Arkajyoti Manna, Prashanth Raman, Aninda Sinha and Ashoke Sen for discussions. We are especially thankful to Nima Arkani-Hamed for stimulating discussions in the initial phase
Figure 4: A closed polytope with all three channels
of this project and his constant support over the years. AL would like to thank Center for High Energy Physics (CHEP) at the Indian Institute of Science, International Center for Theoretical Physics (ICTS) and Department of Theoretical Physics (DTP) at Tata Institute of Fundamental Research (TIFR), Mumbai for their hospitality at various stages of this project. MJ is supported by the Walter Burke Institute for Theoretical Physics, the U.S. Department of Energy, the Office of Science, Office of High Energy Physics under Award No. DE-SC0011632.
|
2306.15205 | Crystal structure and magnetic properties of spin-$1/2$ frustrated
two-leg ladder compounds (C$_4$H$_{14}$N$_2$)Cu$_2X_6$ ($X$= Cl and Br) | We have successfully synthesized single crystals, solved the crystal
structure, and studied the magnetic properties of a new family of copper
halides (C$_4$H$_{14}$N$_2$)Cu$_2X_6$ ($X$= Cl, Br). These compounds
crystallize in an orthorhombic crystal structure with space group $Pnma$. The
crystal structure features Cu$^{2+}$ dimers arranged parallel to each other
that makes a zig-zag two-leg ladder-like structure. Further, there exists a
diagonal interaction between two adjacent dimers which generates inter-dimer
frustration. Both the compounds manifest a singlet ground state with a large
gap in the excitation spectrum. Magnetic susceptibility is analyzed in terms of
both interacting spin-$1/2$ dimer and two-leg ladder models followed by exact
diagonalization calculations. Our theoretical calculations in conjunction with
the experimental magnetic susceptibility establish that the spin-lattice can be
described well by a frustrated two-leg ladder model with strong rung coupling
($J_0/k_{\rm B} \simeq 116$ K and 300 K), weak leg coupling
($J^{\prime\prime}/k_{\rm B} \simeq 18.6$ K and 105 K), and equally weak
diagonal coupling ($J^{\prime }/k_{\rm B} \simeq 23.2$ K and 90 K) for Cl and
Br compounds, respectively. These exchange couplings set the critical fields
very high, making them experimentally inaccessible. The correlation function
decays exponentially as expected for a gapped spin system. The structural
aspects of both the compounds are correlated with their magnetic properties.
The calculation of entanglement witness divulges strong entanglement in both
the compounds which persists up to high temperatures, even beyond 370~K for the
Br compound. | P. Biswal, S. Guchhait, S. Ghosh, S. N. Sarangi, D. Samal, Diptikant Swain, Manoranjan Kumar, R. Nath | 2023-06-27T04:57:16Z | http://arxiv.org/abs/2306.15205v1 | Crystal structure and magnetic properties of spin-\(1/2\) frustrated two-leg ladder compounds (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)\(X_{6}\) (\(X\)= Cl and Br)
###### Abstract
We have successfully synthesized single crystals, solved the crystal structure, and studied the magnetic properties of a new family of copper halides (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}X_{6}\) (\(X\)= Cl, Br). These compounds crystallize in an orthorhombic crystal structure with space group \(Pnma\). The crystal structure features Cu\({}^{2+}\) dimers arranged parallel to each other that makes a zig-zag two-leg ladder-like structure. Further, there exists a diagonal interaction between two adjacent dimers which generates inter-dimer frustration. Both the compounds manifest a singlet ground state with a large gap in the excitation spectrum. Magnetic susceptibility is analyzed in terms of both interacting spin-\(1/2\) dimer and two-leg ladder models followed by exact diagonalization calculations. Our theoretical calculations in conjunction with the experimental magnetic susceptibility establish that the spin-lattice can be described well by a frustrated two-leg ladder model with strong rung coupling (\(J_{0}/k_{\rm B}\simeq 116\) K and \(300\) K), weak leg coupling (\(J^{\prime\prime}/k_{\rm B}\simeq 18.6\) K and \(105\) K), and equally weak diagonal coupling (\(J^{\prime}/k_{\rm B}\simeq 23.2\) K and \(90\) K) for Cl and Br compounds, respectively. These exchange couplings set the critical fields very high, making them experimentally inaccessible. The correlation function decays exponentially as expected for a gapped spin system. The structural aspects of both the compounds are correlated with their magnetic properties. The calculation of entanglement witness divulges strong entanglement in both the compounds which persists upto high temperatures, even beyond \(370\) K for the Br compound.
Footnote †: These authors contributed equally to this work.
## I Introduction
In recent years, low-dimensional spin systems with a singlet ground state have been pursued rigorously as they manifest interesting field-induced quantum phases at low temperatures [1; 2]. Moreover, the singlet state is a highly entangled state, which has a direct bearing on quantum computation and quantum communication [3; 4; 5]. The singlet ground state can be realized in spin dimers [6], alternating spin chains [7; 8], Haldane chains (integer spin chains) [9; 10], spin-Peierls systems [11], even-leg ladders [12; 13], frustrated magnets [14], etc.
An external magnetic field often acts as a perturbation which continuously reduces the energy gap between the singlet (\(|S,S_{z}\rangle=|0,0\rangle\)) ground state and the triplet (\(|S,S_{z}\rangle=|1,1\rangle\)) excited states. Above a critical field (\(H_{\rm c1}\)), when the gap is closed, several intriguing field-induced quantum phenomena emerge. To name a few, Bose-Einstein condensation (BEC) of triplons in coupled dimer systems [15; 16; 17; 18; 19; 2], Tomonaga-Luttinger liquid (TLL) physics in one-dimensional spin chains and spin ladders [20; 21; 22], magnetization plateaus in interacting dimers [23; 24; 25], Wigner crystallization [26], etc. have been realized. The most intricate example is the Shastry-Sutherland lattice, which consists of orthogonal dimers embedded in a square lattice. It has an exact dimer product ground state when the ratio between the inter- and intra-dimer couplings is sufficiently low [27]. Upon increasing this ratio, the system goes through a quantum phase transition to a plaquette singlet state followed by an antiferromagnetic phase [28]. By applying external field and pressure, one can tune the coupling ratio and hence observe a series of quantum phase transitions [29; 30]. Indeed, the famous Shastry-Sutherland lattice compound SrCu\({}_{2}\)(BO\({}_{3}\))\({}_{2}\), featuring orthogonal Cu\({}^{2+}\) dimers, exhibits quantized plateaus at \(1/8\), \(1/4\), and \(1/3\) of the saturation magnetization and Wigner crystallization of magnons [31; 32; 33; 34; 23]. These field-induced phases are also observed in several high-spin compounds, but the quantum effects are more predominant in systems with spin-\(1/2\) [24; 25]. Further, isolated spin dimers with a significant intra-dimer coupling show a large spin-gap, whereas the presence of inter-dimer couplings leads to a drastic reduction in the gap value, making the compounds suitable for high-field studies.
Unlike the transition metal oxides, metal-organic compounds are more suitable for such studies as one can easily tune the inter-dimer and intra-dimer exchange couplings, the spin-gap, and the ground-state properties by engineering the synthesis conditions and changing the ligand
[18]. For instance, the isolated spin-\(\frac{1}{2}\) dimers in the metal-organic compound Cu\({}_{2}\)(IPA)\({}_{2}\)(DMF)(H\({}_{2}\)O) result in a large spin-gap of about \(\sim\) 414 K [35], while isolated dimers in Cu\({}_{2}\)Mg\({}_{2}\)(CO\({}_{3}\))(OH)\({}_{6}\cdot\)2H\({}_{2}\)O provide a much reduced spin-gap of around \(\sim\) 7 K [36]. The hierarchy of the intra-dimer exchange couplings depends on the interaction path involved, the symmetry of the orbitals, bond lengths, bond angles, etc. Further, the inter-dimer coupling, which significantly modifies the spin-gap, can be engineered by an appropriate choice of organic ligands [37; 38; 39]. Unfortunately, the database of organic compounds with a spin-gap is limited compared to the inorganic counterpart. Interestingly, [Cu\({}_{2}\)(apyhist)\({}_{2}\)Cl\({}_{2}\)](ClO\({}_{4}\))\({}_{2}\) (spin-1/2) and NiCl\({}_{2}\)-4SC(NH\({}_{2}\))\({}_{2}\) (spin-1) are the only two organic compounds reported with a small spin-gap, and both of them show field-induced BEC physics [18; 40].
In this paper, we report in detail the crystal growth, structure, and magnetic properties of two iso-structural spin-1/2 strong-rung-coupled two-leg ladder compounds (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)\(X_{6}\) (\(X\) = Cl and Br) with a large spin-gap. Indeed, our experimental magnetic susceptibility data are modeled well by exact diagonalization (ED) calculations assuming a two-leg ladder model with a diagonal interaction that frustrates the spin-lattice. It is found that the rung, leg, as well as diagonal couplings for the Br compound are significantly larger compared to the Cl compound [i.e. \(J_{0}\)(Br)/\(J_{0}\)(Cl) \(\sim\) 2.6, \(J^{\prime\prime}\)(Br)/\(J^{\prime\prime}\)(Cl) \(\sim\) 5.74, and \(J^{\prime}\)(Br)/\(J^{\prime}\)(Cl) \(\sim\) 3.9]. The relatively large exchange couplings in the case of the Br compound are attributed to its larger ionic size and more diffuse \(p\)-orbitals, which increase the effective coupling between the Cu\({}^{2+}\) ions. Our work provides a pathway to manipulate the magnetic properties of low-dimensional metal-organic compounds by judiciously changing the halide atom in the magnetically active network.
## II Techniques
Single crystals of (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)Cl\({}_{6}\) and (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)Br\({}_{6}\) were prepared following the solution evaporation method, using a hot air oven at moderate temperatures. To synthesize (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)Cl\({}_{6}\), C\({}_{4}\)H\({}_{12}\)N\({}_{2}\) (N,N\({}^{\prime}\)-dimethylethylenediamine) and CuCl\({}_{2}\) were added in a molar ratio of 1:2 with a little excess of HCl. Later, distilled water was added and the solution was heated at 80\({}^{\circ}\)C with continuous stirring for complete dissolution of the precursors. The resulting clear solution was kept inside an oven at 45\({}^{\circ}\)C, which yielded greenish-yellow single crystals of (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)Cl\({}_{6}\) after 5-6 days. A similar procedure was followed to obtain single crystals of (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)Br\({}_{6}\) by taking the initial constituents C\({}_{4}\)H\({}_{12}\)N\({}_{2}\), CuBr\({}_{2}\), and HBr. The only difference is that the solution was heated at 75\({}^{\circ}\)C instead of 45\({}^{\circ}\)C, which resulted in dark red-colored needle-shaped crystals.
Single crystal x-ray diffraction (XRD) was performed on good quality single crystals at room temperature using a Bruker KAPPA-II machine with a CCD detector and graphite-monochromated Mo \(K_{\alpha}\) radiation (\(\lambda_{\text{avg}}\) = 0.71073 Å). The data were collected using APEX3 software and reduced with SAINT/XPREP [41]. An empirical absorption correction was done using the SADABS program [42]. The structure was solved with direct methods using SHELXT-2018/2 [43] and refined by full-matrix least squares on \(F^{2}\) using SHELXL-2018/3 [44]. All the hydrogen atoms were placed geometrically and held in the riding atom model for the final refinement. The final refinement included atomic positions for all the atoms, anisotropic thermal parameters for all the nonhydrogen atoms, and isotropic thermal parameters for the hydrogen atoms. The crystal data and detailed information about the structure refinement parameters are listed in Table 1. To reconfirm the phase purity, a large number of single crystals were crushed into powder and a powder XRD measurement was performed at room temperature using a PANalytical (Cu \(K_{\alpha}\) radiation, \(\lambda_{\text{avg}}\) = 1.5406 Å) diffractometer. The powder XRD patterns were analyzed by Le Bail fits using the FULLPROF software package [45]. The initial structural parameters for the fits were taken from the single crystal data (Table 1). As shown in Fig. 1, all the peaks in the room temperature powder XRD patterns of both compounds could be indexed properly with the orthorhombic structure (\(Pnma\)). The refined lattice parameters and unit cell volumes are [\(a\) = 15.0583(10) Å, \(b\) = 6.0273(5) Å, \(c\) = 14.6121(7) Å, and \(V_{\text{cell}}\simeq 1326.22\) Å\({}^{3}\)] and [\(a\) = 15.7969(11) Å, \(b=6.2901(3)\) Å, \(c=14.9536(9)\) Å, and \(V_{\rm cell}\simeq 1485.87\) Å\({}^{3}\)] for (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)Cl\({}_{6}\) and (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)Br\({}_{6}\), respectively. These structural parameters of both compounds are close to the single crystal data.
Magnetization (\(M\)) was measured using a SQUID magnetometer (MPMS, Quantum Design) in the temperature range 2 K\(\leq T\leq 350\) K, and in different applied fields (\(H\)). Isothermal magnetization (\(M\) vs \(H\)) measurement was performed at \(T=2\) K varying the applied field from 0 to 5 T. For this purpose, a bunch of single crystals were aligned and mounted on the sample rod.
The magnetization data are modeled by spin-1/2 interacting dimer and two-leg ladder models, as well as via exact diagonalization (ED) calculations. The ground state of interacting dimers, in general, has a short correlation length. Therefore, small-system-size ED calculations should be sufficient to obtain a reliable spectrum and thermodynamic properties. We employed the conventional ED method to obtain the full energy spectrum for system sizes up to \(N=24\) and found that \(N=18\) is sufficient to reproduce the experimental magnetization data.
## III Results
### Crystal Structure
Both compounds are found to have the same crystal structure [orthorhombic, \(Pnma\) (\(D_{2h}^{16}\), No. 62)] with \(Z=4\). The detailed crystallographic parameters obtained from the single crystal x-ray diffraction analysis, such as the lattice parameters (\(a\), \(b\), and \(c\)), unit cell volume (\(V_{\rm cell}\)), etc., are summarized in Table 1. The atomic positions, bond lengths, bond angles, and anisotropic atomic displacement parameters are tabulated in the Supplementary Material (SM) [46]. The crystal structure obtained from the single crystal XRD is presented in Fig. 2. The asymmetric unit of the crystal contains one N,N\({}^{\prime}\)-dimethylethylenediammonium (NN\({}^{\prime}\)D) cation and one Cu\({}_{2}\)X\({}_{6}\) (\(X\) = Cl, Br) anion, both residing on mirror plane symmetry with an occupancy of 0.50. There are two inequivalent Cu sites and six inequivalent halide (Cl/Br) sites present in the crystal unit cell. Each Cu atom is coordinated with six halide (Cl/Br) atoms forming a distorted Cu\(X_{6}\) octahedron. The octahedra are highly distorted, with significant elongation of the Cu-\(X\) bond along the apical direction. The calculated distortion parameters are given in the SM [46]. It is found that the CuCl\({}_{6}\) octahedra are more distorted compared to the CuBr\({}_{6}\) ones. In the basal plane, the Cu-\(X\) bond distance for the Cl compound varies from 2.234 to 2.365 Å, while the longest apical bond distances are in the range of 3.014 to 3.022 Å. Similarly, for the Br compound, the bond distances in the basal plane vary from 2.39 Å to 2.43 Å, while the apical bond distances are in the range of 3.163 to 3.17 Å.
Each structural dimer unit of Cu\({}_{2}X_{10}\) is formed by edge sharing of two inequivalent Cu\(X_{6}\) octahedra in the basal plane [see Fig. 2(b)]. Along the \(b\)-direction, the dimers are arranged parallel to each other and are interconnected via edge sharing of Cu\(X_{6}\) octahedra (between the apical and basal halide atoms) [see Fig. 2(c)]. In addition to the Cu-Cu intra-dimer distance \(d_{1}\), if the inter-dimer distance \(d_{3}\) is taken into account, the spin-lattice behaves like a zig-zag two-leg ladder, with \(d_{1}\) and \(d_{3}\) representing the rungs and legs of the ladder, respectively. The diagonal distance \(d_{2}\), which is slightly less than \(d_{3}\), makes the spin-lattice more intricate [see Fig. 2(d)]. These ladders are well separated, and the organic cations reside in the interstitial space surrounding the ladders. Furthermore, the dimers of each ladder are aligned nearly perpendicular to the dimers of the neighbouring ladders [see Fig. 2(e)]. The values of the bond distances \(d_{1}\), \(d_{2}\), and \(d_{3}\) and the corresponding angles \(\angle\)Cu-\(X\)-Cu that favour the interaction paths \(J_{0}\), \(J^{\prime}\), and \(J^{\prime\prime}\), respectively, are tabulated in Table 2.
### Magnetization
Figure 3 presents the temperature-dependent magnetic susceptibility (\(\chi\equiv M/H\)) of (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}X_{6}\) (\(X=\) Cl, Br) measured in an applied field of \(\mu_{0}H=0.1\) T. Upon cooling, \(\chi(T)\) of both compounds shows a Curie-Weiss (CW) increase in the high-temperature regime and passes through a broad maximum at around \(T_{\chi}^{\rm max}\simeq 72.5\) and 171.8 K, respectively, indicating the development of short-range antiferromagnetic (AFM) correlations. Below \(T_{\chi}^{\rm max}\), \(\chi(T)\) of both compounds falls rapidly, which is a primary indication of the opening of a spin-gap. At low temperatures, below 12.5 K (Cl) and 37.9 K (Br), \(\chi(T)\) of these systems increases due to the presence of a small fraction of extrinsic paramagnetic impurities or lattice imperfections in the samples. There is no signature of magnetic long-range order (LRO) down to 2 K for either compound.
\(\chi(T)\) at high-temperatures was fitted by the sum of CW law and the temperature-independent susceptibility (\(\chi_{0}\))
\[\chi(T)=\chi_{0}+\frac{C}{T-\theta_{\rm CW}}. \tag{1}\]
Here, \(C\) and \(\theta_{\rm CW}\) are the Curie constant and CW temperature, respectively. The inverse susceptibility [\(1/\chi(T)\)] of (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)Cl\({}_{6}\) was fitted above 200 K by Eq. (1), yielding \(\chi_{0}\simeq-1.08\times 10^{-4}\) cm\({}^{3}\)/mol-Cu\({}^{2+}\), \(C\simeq 0.45\) cm\({}^{3}\).K/mol-Cu\({}^{2+}\), and \(\theta_{\rm CW}\simeq-75\) K. The negative value of \(\theta_{\rm CW}\) indicates a dominant AFM interaction between the Cu\({}^{2+}\) ions. The effective magnetic moment estimated from the \(C\) value is \(\mu_{\rm eff}=(3k_{\rm B}C/N_{\rm A}\mu_{\rm B}^{2})^{\frac{1}{2}}\simeq 1.89\)\(\mu_{\rm B}\)/Cu\({}^{2+}\) (where \(k_{\rm B}\) is the Boltzmann constant, \(N_{\rm A}\) is the Avogadro's number, and \(\mu_{\rm B}\) is the Bohr magneton). This value of \(\mu_{\rm eff}\) is slightly higher than the free-ion value of \(\mu_{\rm eff}\) (1.73 \(\mu_{\rm B}\)) for Cu\({}^{2+}\) with spin-\(\frac{1}{2}\) and \(g=2\). The \(\mu_{\rm eff}\) value corresponds to \(g\simeq 2.18\), typically observed
\begin{table}
\begin{tabular}{c c c c c c c} & \multicolumn{2}{c}{\(J_{0}\)} & \multicolumn{2}{c}{\(J^{\prime}\)} & \multicolumn{2}{c}{\(J^{\prime\prime}\)} \\ \cline{2-7} \multicolumn{1}{c}{Compound} & \(d_{1}\) (Å) & Angle (deg) & \(d_{2}\) (Å) & Angle (deg) & \(d_{3}\) (Å) & Angle (deg) \\ \hline (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)Cl\({}_{6}\) & 3.4730 & \(\angle\)Cu(2)-Cl(3)-Cu(1) & 3.8809 & \(\angle\)Cu(2)-Cl(3)-Cu(2) & 3.933 & \(\angle\)Cu(1)-Cl(3)-Cu(2) \\ & & \(=\) 95.03 & & \(=\) 91.95 & & \(=\) 93.20 \\ & & \(\angle\)Cu(2)-Cl(4)-Cu(1) & & & & \(\angle\)Cu(1)-Cl(5)-Cu(2) \\ & & \(=\) 98.84 & & & & \(=\) 95.72 \\ (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)Br\({}_{6}\) & 3.650 & \(\angle\)Cu(2)-Br(3)-Cu(1) & 4.070 & \(\angle\)Cu(2)-Br(4)-Cu(2) & 4.121 & \(\angle\)Cu(2)-Br(4)-Cu(1) \\ & & \(=\) 97.61 & & \(=\) 91.7 & & \(=\) 92.77 \\ & & \(\angle\)Cu(2)-Br(4)-Cu(1) & & & & \(\angle\)Cu(2)-Br(5)-Cu(1) \\ & & \(=\) 94.73 & & & & \(=\) 94.95 \\ \end{tabular}
\end{table}
Table 2: Values of bond lengths \(d_{1}\), \(d_{2}\), and \(d_{3}\) as shown in Fig. 2(d) and angles \(\angle\)Cu-\(X\)-Cu as shown in Fig. 2(b) and (c) for both the compounds corresponding to the exchange pathways \(J_{0}\), \(J^{\prime}\), and \(J^{\prime\prime}\).
\begin{table}
\begin{tabular}{c c c} \multicolumn{2}{c}{**Crystal data**} \\ Empirical formula & (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)Cl\({}_{6}\) & (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)Br\({}_{6}\) \\ Formula weight (\(M_{r}\)) & 429.95 g/mole & 696.71 g/mole \\ Crystal system & orthorhombic & orthorhombic \\ Space group & \(Pnma\) & \(Pnma\) \\ \(a\) [ Å] & 15.040(1) & 15.849(3) \\ \(b\) [ Å] & 6.014(5) & 6.3157(5) \\ \(c\)[ Å] & 14.593(9) & 14.992(3) \\ \(V_{\rm cell}\)[ Å\({}^{3}\)] & 1319.95(17) & 1500.6(5) \\ \(Z\) & 4 & 4 \\ Calculated crystal density (\(\rho_{\rm cal}\)) & 2.164 mg/mm\({}^{3}\) & 3.084 mg/mm\({}^{3}\) \\ Absorption coefficient (\(\mu\)) & 4.401 mm\({}^{-1}\) & 18.780 mm\({}^{-1}\) \\ Crystal size & \(0.25\times 0.22\times 0.17\) mm\({}^{3}\) & \(0.22\times 0.18\times 0.12\) mm\({}^{3}\) \\ \hline
**Data collection** & & & \\ Temperature (K) & 295(2) & 295(2) \\ Radiation type & Mo\(K_{\alpha}\) & Mo\(K_{\alpha}\) \\ Wavelength (\(\lambda\)) & 0.71073 Å & 0.71073 Å \\ Diffractometer & Bruker KAPPA APEX-II CCD & Bruker KAPPA APEX-II CCD \\ \(\theta\) range for data collection & 2.708\({}^{\circ}\) to 26.422\({}^{\circ}\) & 2.570\({}^{\circ}\) to 25.462\({}^{\circ}\) \\ Index ranges & \(-18\leq h\leq 18\), & \(-19\leq h\leq 19\), \\ & \(-7\leq k\leq 7\), & \(-7\leq k\leq 6\), \\ & \(-18\leq l\leq 18\) & \(-18\leq l\leq 18\) \\ \(F(000)\) & 884 & 1280 \\ Reflections collected & 10316 & 11768 \\ Independent reflections & 1487 [\(R_{\rm int}=0.0462\)] & 1526 [\(R_{\rm int}=0.0685\)] \\ Data/restraints/parameters & 1487/0/85 & 1526/5/73 \\ Final \(R\) indexes, \(I\geq 2\sigma(I)\) & \(R_{1}=0.0261\), \(\omega R_{2}=0.0549\) & \(R_{1}=0.1005\), \(\omega R_{2}=0.2441\) \\ Final \(R\) indexes, all data & \(R_{1}=0.0389\), \(\omega R_{2}=0.0621\) & \(R_{1}=0.1346\), \(\omega R_{2}=0.2904\) \\ \hline
**Refinement** & & & \\ Refinement method & Full-matrix least-squares on \(F^{2}\) & Full-matrix least-squares on \(F^{2}\) \\ Goodness-of-fit on \(F^{2}\) & 1.123 & 1.017 \\ \end{tabular}
\end{table}
Table 1: Structure information of (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}X_{6}\) (\(X\)= Cl, Br) compounds obtained from the single crystal XRD measurements at room temperature.
for Cu\({}^{2+}\)-based systems [47; 48]. As the exchange coupling for (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)Br\({}_{6}\) is relatively large and our \(\chi(T)\) measurements are restricted to 350 K, it was not possible to fit the data using Eq. (1). This is because the CW fit requires data at temperatures well above \(\theta_{\rm CW}\) [49], and measurements above 350 K were not possible since both compounds melt at around 420 K (see SM [46]).
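For concreteness, a minimal sketch of such a CW fit with scipy (ours; the arrays are placeholders standing in for the measured \(\chi(T)\), generated here from the Cl-compound results quoted above):

```python
# Sketch of the Curie-Weiss fit of Eq. (1); synthetic data stand in for chi(T).
import numpy as np
from scipy.optimize import curve_fit

def chi_cw(T, chi0, C, theta):
    return chi0 + C / (T - theta)

T = np.linspace(200, 350, 50)               # K, fitting window used for the Cl compound
chi = chi_cw(T, -1.08e-4, 0.45, -75.0)      # placeholder for the measured data
popt, _ = curve_fit(chi_cw, T, chi, p0=(0.0, 0.5, -50.0))
chi0, C, theta = popt
mu_eff = np.sqrt(8.0 * C)                   # mu_eff/mu_B = sqrt(3 k_B C / N_A mu_B^2) ~ sqrt(8C) in cgs
print(chi0, C, theta, mu_eff)               # -> ~ -1.08e-4, 0.45, -75 K, ~1.9
```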
To understand the exchange network, \(\chi(T)\) data over the whole temperature range were fitted by the equation
\[\chi(T)=\chi_{0}+\frac{C_{\rm imp}}{T-\theta_{\rm imp}}+\chi_{\rm spin}(T). \tag{2}\]
In the second term, \(C_{\rm imp}\) is the Curie constant of the impurity spins and \(\theta_{\rm imp}\) is the effective interaction strength between the impurity spins. This term takes care of the low-temperature Curie tail in \(\chi(T)\). \(\chi_{\rm spin}(T)\) is the spin susceptibility of the spin-\(\frac{1}{2}\) interacting dimer model which has the form [39]
\[\chi_{\rm spin}(T)=\frac{N_{\rm A}g^{2}\mu_{\rm B}^{2}}{k_{\rm B}T}\frac{1}{ \left[3+e^{\left(\frac{J_{0}}{k_{\rm B}T}\right)}+\frac{z^{\prime}J^{\prime}} {k_{\rm B}T}\right]}. \tag{3}\]
Here, \(J_{0}\) and \(J^{\prime}\) are the intra- and average inter-dimer interactions, respectively. In this expression, a mean-field approximation is used to introduce \(J^{\prime}\) into the isolated dimer model [35]. Here, \(z^{\prime}=2\) represents the number of neighbouring dimers coupled to one dimer through \(J^{\prime}\). As shown in Fig. 3, Eq. (2) reproduces our experimental data very well, resulting in the parameters (\(\chi_{0}\simeq-4.52\times 10^{-5}\) cm\({}^{3}\)/mol-Cu\({}^{2+}\), \(C_{\rm imp}\simeq 0.002\) cm\({}^{3}\).K/mol-Cu\({}^{2+}\), \(\theta_{\rm imp}\simeq-0.64\) K, \(g=2.05\), \(J_{0}/k_{\rm B}\simeq 116.7\) K, and \(J^{\prime}/k_{\rm B}\simeq 25.8\) K) and (\(\chi_{0}\simeq-1.55\times 10^{-4}\) cm\({}^{3}\)/mol-Cu\({}^{2+}\), \(C_{\rm imp}\simeq 0.00725\) cm\({}^{3}\).K/mol-Cu\({}^{2+}\), \(\theta_{\rm imp}\simeq 0.53\) K, \(g=2.12\), \(J_{0}/k_{\rm B}\simeq 288.8\) K, and \(J^{\prime}/k_{\rm B}\simeq 235\) K) for (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)Cl\({}_{6}\) and (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)Br\({}_{6}\), respectively. The obtained values of \(C_{\rm imp}\) correspond to \(\sim 0.53\) % and \(\sim 1.9\) % of paramagnetic impurity spins, respectively, assuming spin-\(\frac{1}{2}\). In order to emphasize the gapped
Figure 2: (a) Three-dimensional view of the crystal structure of (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}X_{6}\) (\(X\)= Cl, Br) projected in the \(ac\)-plane. (b) The Cu\({}_{2}X_{10}\) (\(X\)= Cl, Br) dimer unit with intra-dimer exchange coupling \(J_{0}\). (c) Edge sharing of distorted Cu\(X_{6}\) octahedra favouring inter-dimer couplings, i.e. couplings along the legs and diagonal of the ladder. (d) A sketch of the possible exchange couplings (after removing \(X\)) highlighting the coupled-dimer or frustrated two-leg zig-zag ladder structure. (e) Orthogonal dimers from neighboring ladders.
behavior, \(\chi_{0}+\frac{C_{\rm imp}}{T-\theta_{\rm imp}}\) was subtracted from the \(\chi(T)\) data. The resulting intrinsic \(\chi_{\rm spin}(T)\) indeed decays exponentially towards zero at low temperatures [see Fig. 3(a) and (b)], further establishing a singlet ground state.
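A short sketch (ours, not a fitting routine) of the interacting-dimer susceptibility of Eq. (3), evaluated in cgs-molar units with the Cl-compound fit parameters quoted above:

```python
# Eq. (3): mean-field-coupled spin-1/2 dimers; J0, Jp in kelvin, chi in cm^3/mol-Cu.
import numpy as np

NA, muB, kB = 6.02214e23, 9.27401e-21, 1.380649e-16   # cgs units (erg/G, erg/K)

def chi_dimer(T, g, J0, Jp, zp=2):
    return (NA * g**2 * muB**2 / (kB * T)) / (3.0 + np.exp(J0 / T) + zp * Jp / T)

T = np.linspace(2.0, 350.0, 700)
chi_spin = chi_dimer(T, g=2.05, J0=116.7, Jp=25.8)    # Cl-compound fit parameters
print(T[np.argmax(chi_spin)])   # broad maximum near ~73 K, close to the observed 72.5 K
```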
As demonstrated in Fig. 2(d), there is an equal possibility for the parallel dimers to interact along the legs of the ladder. Therefore, we also fitted the \(\chi(T)\) data by the high-temperature series expansion (HTSE) of the strong-rung ladder model as
\[\chi(T)=\chi_{0}+\chi_{\rm spin}(T). \tag{4}\]
Here, \(\chi_{\rm spin}\) is the HTSE expression for the spin-1/2 two-leg ladder with strong rung coupling [i.e. Eq. (47) in Ref. [50]]. This expression is valid at high temperatures and for \(0\leq J^{\prime\prime}/J_{0}\leq 1\), and it is most accurate for \(J^{\prime\prime}/J_{0}<0.6\). Our fit in the high-\(T\) regime \(T>0.2J_{0}/k_{\rm B}\) (i.e. 25 K for Cl and 68 K for Br) reproduces the experimental data very well, yielding (\(\chi_{0}\simeq-4.99\times 10^{-5}\) cm\({}^{3}\)/mol, \(g\simeq 2.06\), \(J_{0}/k_{\rm B}\simeq 115\) K, and \(J^{\prime\prime}/k_{\rm B}\simeq 22.9\) K) and (\(\chi_{0}\simeq-1.108\times 10^{-4}\) cm\({}^{3}\)/mol, \(g\simeq 2.06\), \(J_{0}/k_{\rm B}\simeq 270.8\) K, and \(J^{\prime\prime}/k_{\rm B}\simeq 99.6\) K) for the Cl and Br compounds, respectively [see Fig. 3(c) and (d)].
The zero-field spin-gap (\(\Delta_{0}\)) for a strong rung coupled two-leg ladder can be estimated as [50]
\[\Delta_{0}=J_{0}-J^{\prime\prime}+\frac{{J^{\prime\prime}}^{2}}{2J_{0}}+\frac{ {J^{\prime\prime}}^{3}}{4J_{0}^{2}}-\frac{{J^{\prime\prime}}^{4}}{8J_{0}^{3}} +\mathcal{O}({J^{\prime\prime}}^{5}). \tag{5}\]
Using the appropriate values of \(J_{0}\) and \(J^{\prime\prime}\), \(\Delta_{0}\) is calculated to be 94.7 K and 192.3 K for the Cl and Br compounds, respectively. These values of the spin-gap correspond to gap-closing critical fields \(H_{\rm C1}\simeq 68.4\) T and 146.4 T for the Cl and Br compounds, respectively. Similarly, the saturation field (\(H_{\rm C2}\)), at which one achieves the fully polarized state, is calculated to be \(H_{\rm C2}=(J_{0}+2J^{\prime\prime})k_{\rm B}/(g\mu_{\rm B})\simeq 115\) T and 358 T for the Cl and Br compounds, respectively [12].
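A minimal sketch (ours) evaluating Eq. (5) and the corresponding field conversions; note that the exact field values depend slightly on the \(g\)-factor used:

```python
# Eq. (5) and the critical fields; couplings in kelvin, fields in tesla.
kB_over_muB = 1.48873   # T/K

def gap(J0, Jl):
    return J0 - Jl + Jl**2/(2*J0) + Jl**3/(4*J0**2) - Jl**4/(8*J0**3)

for label, J0, Jl, g in [("Cl", 115.0, 22.9, 2.06), ("Br", 270.8, 99.6, 2.06)]:
    D = gap(J0, Jl)                          # zero-field spin-gap (K)
    Hc1 = D * kB_over_muB / g                # gap-closing field
    Hc2 = (J0 + 2*Jl) * kB_over_muB / g      # saturation field
    print(label, round(D, 1), round(Hc1, 1), round(Hc2, 1))
# -> gaps of ~94.6 K (Cl) and ~192.3 K (Br), in line with the values quoted above
```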
Magnetic isotherm (\(M\) vs \(H\)) measured at \(T=2\) K is shown in Fig. 4. For both the compounds, it shows a typical paramagnetic behavior up to 5 T. The \(\chi(T)\)
analysis suggests that \(\chi_{\rm spin}(T)\) approaches zero below \(\sim 13\) K and \(\sim 37\) K for (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)Cl\({}_{6}\) and (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)Br\({}_{6}\), respectively, and that the low-\(T\) upturn in \(\chi(T)\) is entirely due to extrinsic contributions. As one expects zero magnetization in the gapped state, the observed \(M\) vs \(H\) behaviour can be attributed entirely to the paramagnetic impurity spins. Hence, one can estimate this extrinsic paramagnetic contribution accurately by fitting the data to [51]
\[M(H)=\chi H+f_{\rm imp}N_{\rm A}g_{\rm imp}\mu_{\rm B}S_{\rm imp}B_{S_{\rm imp}}(x). \tag{6}\]
In the above equation, \(\chi\) is the intrinsic susceptibility, \(f_{\rm imp}\) is the molar fraction of the impurities, \(g_{\rm imp}\) is the impurity \(g\)-factor, \(S_{\rm imp}\) is the impurity spin, \(B_{S_{\rm imp}}(x)\) is the Brillouin function, and \(x=g_{\rm imp}\mu_{\rm B}S_{\rm imp}H/[k_{\rm B}(T-\theta_{\rm imp})]\). We assumed the impurity spin \(S_{\rm imp}=1/2\), for which the Brillouin function reduces to \(B_{S_{\rm imp}}(x)=\tanh(x)\) [52]. Our fitted results upon fixing \(g_{\rm imp}=2\) are (\(f_{\rm imp}\simeq 0.0048\) and \(\theta_{\rm imp}\simeq-0.44\) K) and (\(f_{\rm imp}\simeq 0.0122\) and \(\theta_{\rm imp}\simeq 0.95\) K) for the Cl and Br compounds, respectively. The obtained values of \(f_{\rm imp}\) correspond to \(\sim 0.48\) % and \(\sim 1.2\) % of spin-\(\frac{1}{2}\) paramagnetic impurity spins for the Cl and Br compounds, respectively, which are consistent with the \(\chi(T)\) analysis.
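A sketch of this impurity fit for \(S_{\rm imp}=1/2\) (ours; the arrays below are placeholders standing in for the measured 2 K isotherm):

```python
# Eq. (6) with S_imp = 1/2, i.e. B_{1/2}(x) = tanh(x); synthetic 2 K isotherm.
import numpy as np
from scipy.optimize import curve_fit

NA, muB, kB = 6.02214e23, 9.27401e-21, 1.380649e-16   # cgs
T = 2.0                                                # K

def M_of_H(H, chi, f_imp, theta_imp, g_imp=2.0):
    x = g_imp * muB * 0.5 * H / (kB * (T - theta_imp))
    return chi * H + f_imp * NA * g_imp * muB * 0.5 * np.tanh(x)

H = np.linspace(1.0, 5.0e4, 100)                       # Oe (up to 5 T)
M = M_of_H(H, 1.0e-4, 0.0048, -0.44)                   # placeholder for the data
popt, _ = curve_fit(M_of_H, H, M, p0=(1e-4, 0.01, 0.0))
print(popt)                                # -> chi, f_imp ~ 0.0048, theta_imp ~ -0.44 K
```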
### Quantum Entanglement
Existence of entanglement in an AFM spin system can be measured by a quantity called entanglement witness (EW). In a magnetic system with singlet ground state, the two spins are strongly entangled and one can extract EW from the macroscopic thermodynamic observable like \(\chi_{\rm spin}\). For a spin-1/2 isotropic Heisenberg system, EW is related to \(\chi_{\rm spin}\) as [53]
\[{\rm EW}=1-\frac{6k_{\rm B}T\chi_{\rm spin}}{N_{\rm A}g^{2}\mu_{\rm B}^{2}}. \tag{7}\]
Figure 5 depicts the temperature variation of EW calculated using Eq. (7). EW of both compounds reaches a maximum value of 1 in the low temperature region where \(\chi_{\rm spin}\) is zero. It decreases with increasing temperature. For (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)Cl\({}_{6}\), EW approaches zero at around 120 K, whereas for (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)Br\({}_{6}\), it remains non-zero even at 370 K. For an entangled state, EW should have a finite value (\(>0\)). The dashed line [Eq. (7) taking EW = 0] in the insets of Fig. 5(a) and (b) represents the boundary of the entangled state, which is plotted along with \(\chi_{\rm spin}(T)\). The point of intersection with \(\chi_{\rm spin}\) defines the upper temperature limit of the entangled state. The dashed curve intersects the \(\chi_{\rm spin}\) data of (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)Cl\({}_{6}\) at around 120 K, which is consistent with the above analysis. On the other hand, (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)Br\({}_{6}\) demonstrates that entanglement persists even beyond 370 K. Experimentally, quantum entanglement is realized in several spin-1/2 Heisenberg AFM dimers and spin-chain systems such as Cu(NO\({}_{3}\))\({}_{2}\)\(\cdot\)2.5H\({}_{2}\)O [54], Na\({}_{2}\)Cu\({}_{5}\)Si\({}_{4}\)O\({}_{14}\)[55], NH\({}_{4}\)CuPO\({}_{4}\)\(\cdot\)H\({}_{2}\)O [56], and Cu(tz)\({}_{2}\)Cl\({}_{2}\)[57]. However, all these compounds are entangled at very low temperatures except Na\({}_{2}\)Cu\({}_{5}\)Si\({}_{4}\)O\({}_{14}\). In this context, (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)Cl\({}_{6}\) and (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)Br\({}_{6}\) are two
Figure 4: Magnetization isotherms (\(M\) vs \(H\)) at \(T=2\) K measured up to 5 T. Solid lines are the Brillouin fit using Eq. (6).
Figure 5: Entanglement witness (EW) as a function of temperature for (a) (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)Cl\({}_{6}\) and (b) (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)Br\({}_{6}\). Inset: \(\chi_{\rm spin}\) vs \(T\) along with the entanglement boundary (dashed line).
promising compounds where entanglement persists up to much higher temperatures compared to the above-mentioned compounds.
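For the mean-field-coupled dimer \(\chi_{\rm spin}\) of Eq. (3), the witness of Eq. (7) reduces to \({\rm EW}=1-6/[3+e^{J_{0}/k_{\rm B}T}+z^{\prime}J^{\prime}/k_{\rm B}T]\), independent of \(g\); a short sketch (ours, using the dimer-fit parameters quoted earlier) locates the entanglement boundary:

```python
# Entanglement boundary, Eq. (7), for the interacting-dimer chi_spin of Eq. (3).
import numpy as np
from scipy.optimize import brentq

def ew(T, J0, Jp, zp=2):
    return 1.0 - 6.0 / (3.0 + np.exp(J0 / T) + zp * Jp / T)

print(brentq(lambda T: ew(T, 116.7, 25.8), 10.0, 400.0))  # ~120 K for the Cl compound
print(ew(370.0, 288.8, 235.0) > 0)            # True: Br still entangled beyond 370 K
```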
### Theoretical Calculations
A frustrated two-leg ladder is shown pictorially in Fig. 6, where each solid sphere represents a Cu\({}^{2+}\) site with spin-1/2. The two inequivalent Cu sites are denoted A and B. An isotropic spin-1/2 Heisenberg model Hamiltonian on this lattice can be written as:
\[\begin{split}\mathcal{H}=&\sum_{i=1}^{N/2}J_{0}\,\vec{S}_{i,A}\cdot\vec{S}_{i,B}+\sum_{i=1}^{N/2}J^{\prime}\,\vec{S}_{i,B}\cdot\vec{S}_{i+1,B}+\\ &\sum_{i=1}^{N/2}J^{\prime\prime}(\vec{S}_{i,B}\cdot\vec{S}_{i+1,A}+\vec{S}_{i,A}\cdot\vec{S}_{i+1,B})\\ -H&\sum_{i=1}^{N/2}(S_{i,A}^{z}+S_{i,B}^{z}),\end{split} \tag{8}\]
where \(\vec{S}_{i,A}\) and \(\vec{S}_{i,B}\) indicate the spin vectors on the sublattices A and B, respectively, of the \(i^{\rm th}\) unit cell. In Eq. (8), the first term represents the intra-dimer coupling \(J_{0}\), while the second and third terms represent the inter-dimer couplings \(J^{\prime}\) and \(J^{\prime\prime}\) between neighboring dimers along the diagonal and the leg, respectively. The coupling \(J^{\prime}\) always connects two equivalent Cu sites, while \(J^{\prime\prime}\) connects two inequivalent Cu sites. The last term of the Hamiltonian represents an externally applied axial magnetic field \(H\), and the energy scale is set in terms of \(J_{0}\).
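A minimal ED sketch of Eq. (8) (ours, assuming the open-boundary geometry of Fig. 6 and \(H=0\)); dense matrices suffice for the small clusters considered here:

```python
# Dense ED of Eq. (8) at H = 0, open boundaries; sites ordered A0,B0,A1,B1,...
import numpy as np

sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2

def op(single, site, N):
    """Embed a single-site operator at `site` into the 2^N-dimensional space."""
    out = np.array([[1.0 + 0j]])
    for k in range(N):
        out = np.kron(out, single if k == site else np.eye(2))
    return out

def heis(i, j, N):
    return sum(op(s, i, N) @ op(s, j, N) for s in (sx, sy, sz))

def ladder_H(nrungs, J0, Jp, Jpp):
    N = 2 * nrungs
    H = np.zeros((2**N, 2**N), dtype=complex)
    for i in range(nrungs):
        H += J0 * heis(2*i, 2*i + 1, N)             # rung A_i-B_i
        if i < nrungs - 1:
            H += Jp * heis(2*i + 1, 2*i + 3, N)     # diagonal B_i-B_{i+1}
            H += Jpp * (heis(2*i + 1, 2*i + 2, N)   # legs B_i-A_{i+1}, A_i-B_{i+1}
                        + heis(2*i, 2*i + 3, N))
    return H

E = np.linalg.eigvalsh(ladder_H(4, J0=116.0, Jp=23.2, Jpp=18.56))
print(E[1] - E[0])   # singlet-triplet gap (K) of the open 8-site cluster
```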
In order to gain further insight into the ground-state properties, calculations of the correlation function \(C(r)=\langle\Psi|\vec{S}_{i}\cdot\vec{S}_{i+r}|\Psi\rangle\) versus the distance \(r\) between lattice sites, and of the magnetization \(M=\langle\psi|\sum_{i=1}^{N}s_{i}^{z}|\psi\rangle\) versus magnetic field (\(H\)), are performed for up to \(N=24\) sites considering the frustrated two-leg ladder model. The magnetization \(M(T,H)\) and magnetic susceptibility \(\chi(T,H)\) of these systems can be calculated using the full spectrum, and the partition function \(Z(T,H)\) can be written as:
\[Z(T,H)=\sum_{s^{z}=-N/2}^{N/2}\sum_{n_{s^{z}}}e^{-\beta(E_{n_{s^{z}}}-Hs^{z})}. \tag{9}\]
Here, \(\beta=\frac{1}{k_{\rm B}T}\) and \(s^{z}\) is the \(z\)-component of the total spin, which varies from \(-N/2\) to \(N/2\), where \(N\) is the total number of sites in the system. \(E_{n_{s^{z}}}\) is the energy of the \(n_{s^{z}}\)-th state in the given \(s^{z}\) sector. The magnetization can be defined as
\[M(T,H)=\frac{1}{N\,Z(T,H)}\sum_{s^{z}=-N/2}^{N/2}\sum_{n_{s^{z}}}s^{z}\,e^{-\beta(E_{n_{s^{z}}}-Hs^{z})}. \tag{10}\]
\(\chi(T,H)\) can be written in terms of the magnetic fluctuation as
\[\chi(T,H)=\frac{\beta}{N}[\langle M^{2}\rangle-\langle M\rangle^{2}]. \tag{11}\]
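Continuing the ED sketch above, the thermodynamics of Eqs. (9)-(11) follow from the full spectrum; since \([\mathcal{H},S^{z}_{\rm tot}]=0\), the fluctuation formula can be evaluated from eigenstate expectation values (units with \(k_{\rm B}=1\), \(H=0\)):

```python
# Eqs. (9)-(11): susceptibility from the full ED spectrum (k_B = 1, H = 0).
def chi_of_T(nrungs, J0, Jp, Jpp, Ts):
    N = 2 * nrungs
    E, V = np.linalg.eigh(ladder_H(nrungs, J0, Jp, Jpp))
    Sz = sum(op(sz, i, N) for i in range(N))
    m1 = np.real(np.einsum('in,ij,jn->n', V.conj(), Sz, V))        # <S^z> per state
    m2 = np.real(np.einsum('in,ij,jn->n', V.conj(), Sz @ Sz, V))   # <(S^z)^2> per state
    chi = []
    for T in Ts:
        w = np.exp(-(E - E[0]) / T)            # Boltzmann weights
        Z = w.sum()
        chi.append(((m2 @ w) / Z - ((m1 @ w) / Z) ** 2) / (N * T))
    return np.array(chi)

Ts = np.linspace(5.0, 350.0, 70)
print(chi_of_T(4, 116.0, 23.2, 18.56, Ts).max())   # chi per site, in 1/K
```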
First, we calculated the magnetic susceptibility \(\chi(T)\) for both materials taking the frustrated two-leg ladder model into account. As the experimental \(\chi(T)\) contains a large low-temperature upturn (see Fig. 3), we fitted instead the intrinsic \(\chi_{\rm spin}\) obtained after subtracting the impurity and \(\chi_{0}\) contributions determined from the interacting dimer fit in Fig. 3. In order to check for finite-size effects, \(\chi_{\rm spin}\) is calculated for three different system sizes, \(N=12\), 16, and 18. Clearly, there is no visible finite-size effect for either compound. From the best fit of the experimental \(\chi_{\rm spin}\) data in Fig. 7, the obtained parameters
Figure 6: Lattice structure: A and B are two sub-lattices of the ladder structure. \(J_{0}\) and \(J^{\prime\prime}\) are rung and leg couplings, respectively while \(J^{\prime}\) is the coupling along the diagonal. \(r\) is the site index along the lower leg of the system.
Figure 7: The experimental \(\chi_{\rm spin}\) data (symbols) for (a) (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)Cl\({}_{6}\) and (b) (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)Br\({}_{6}\). The fits using ED simulation for the frustrated two-leg ladder are shown as solid lines for system sizes \(N=12\), 16, and 18.
are (\(g=2.06\), \(J_{0}/k_{\rm B}=116\) K, \(J^{\prime}/k_{\rm B}=23.2\) K, and \(J^{\prime\prime}/k_{\rm B}=18.56\) K) for (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)Cl\({}_{6}\) and (\(g=2.06\), \(J_{0}/k_{\rm B}=300\) K, \(J^{\prime}/k_{\rm B}=90\) K, and \(J^{\prime\prime}/k_{\rm B}=105\) K) for (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)Br\({}_{6}\), respectively.
We also calculated the correlation function \(C(r)\) for the frustrated two-leg ladder model for both compounds along the A-B-A-B-... sites [see Fig. 6], where \(r\) is the distance. We notice that \(C(r)\) decays exponentially along the leg (A-B-A-B-...) and can be fitted well with an exponential function \(Ae^{-r/\xi}\), where \(A\) is a constant. From the fit in Fig. 8, the correlation length \(\xi\) is estimated to be \(\sim 0.36\) and \(\sim 0.64\) for (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)Cl\({}_{6}\) and (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)Br\({}_{6}\), respectively, which is still less than the spacing between nearest-neighbour Cu\({}^{2+}\) ions along the leg. Surprisingly, for (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)Br\({}_{6}\), despite the large \(J^{\prime}/J_{0}\) and \(J^{\prime\prime}/J_{0}\) ratios (\(\sim 0.30\) and 0.35), \(\xi\) remains less than the lattice spacing and the system still behaves like weakly coupled dimers.
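The exponential fit itself is a one-liner on a log scale; a sketch with placeholder values standing in for the ED correlations of Fig. 8:

```python
# Exponential fit C(r) = A exp(-r/xi) on a log scale; placeholder correlations.
import numpy as np

r = np.arange(1, 6)
C = 0.5 * np.exp(-r / 0.64)                 # stand-in with xi = 0.64 (Br compound)
slope, _ = np.polyfit(r, np.log(np.abs(C)), 1)
print(-1.0 / slope)                         # recovered correlation length xi ~ 0.64
```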
The spin-gap and the saturation field are two important quantities for understanding the magnetic properties of a gapped spin system. Due to the large energy scale of the exchange couplings, the critical fields are too high to be accessible experimentally. Hence, to extract the values of the critical fields (the gap-closing field \(H_{\rm C1}\) and the saturation field \(H_{\rm C2}\)) and to understand the nature of the magnetic isotherms, we have simulated the \(M\) vs \(H\) curve at zero temperature for both compounds for \(N=20\) and 24 sites (see Fig. 9). The critical fields are found to be independent of the system size for both compounds. In the intermediate field regime, the Cl compound shows a very weak finite-size effect, while the Br compound exhibits a plateau-like behavior and should have a continuous \(M\) vs \(H\) curve in the thermodynamic limit (\(N\rightarrow\infty\)). Finite-size effects can be mitigated by considering the value of \(M\) at the midpoint of each plateau and drawing a continuous line through these points for \(N=20\) and 24, which show almost no difference. The value of \(H_{\rm C1}\) is found to be 78.9 T and 181.2 T for the Cl and Br compounds, respectively. This suggests that such a robust singlet ground state cannot be perturbed by experimentally available magnetic fields. The saturation magnetic field, i.e. the field required to obtain the fully polarized state, is found to be \(H_{\rm C2}=110.7\) T and 374.2 T for the Cl and Br compounds, respectively. These values of \(H_{\rm C1}\) and \(H_{\rm C2}\) are slightly larger than the ones predicted from the analysis using the two-leg ladder model, as that model does not take into account the diagonal interaction.
## IV Discussion and Summary
We investigated the structural and magnetic properties of two copper halides. Although the experimental \(\chi(T)\) data are fitted well by both spin-1/2 interacting dimer and two-leg ladder models, a more accurate description is obtained from the ED calculations. The ED results assuming a two-leg ladder model reproduce the spin susceptibility more precisely with a strong rung coupling \(J_{0}/k_{\rm B}=116\) K and 300 K, a weak but significant leg coupling \(J^{\prime\prime}/k_{\rm B}=18.6\) K and 105 K, and another weak diagonal coupling \(J^{\prime}/k_{\rm B}=23.2\) K and 90 K for (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)Cl\({}_{6}\) and (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)Br\({}_{6}\), respectively. Depending on the hierarchy of the exchange couplings, two-leg spin-1/2 ladders may evince remarkable critical behaviour at low temperatures due to low dimensionality and the reduced spin value. For instance, although TLL physics is envisaged in two-leg ladders, a qualitative difference is delineated between strong-leg and strong-rung ladders. Theory predicts attractive fermionic interaction
Figure 8: The correlation function \(C(r)\) vs \(r\) calculated for the frustrated two-leg ladder model along the lower leg by taking \(r=0\) as reference site (see Fig. 6) for (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)Cl\({}_{6}\) and (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)Br\({}_{6}\). Solid lines are the exponential fits.
Figure 9: The calculated \(M\) vs \(H\) curves for (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)Cl\({}_{6}\) and (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)Br\({}_{6}\) with system sizes, \(N=20\) and 24 using the frustrated two-leg ladder model. Symbols and solid lines represent the plateaus and connector of the mid point of plateaus, respectively.
in strong-leg spin-1/2 ladders [e.g. (C\({}_{7}\)H\({}_{10}\)N)\({}_{2}\)CuBr\({}_{4}\)] and repulsive fermionic interaction in strong-rung spin-1/2 ladders [e.g. (C\({}_{5}\)H\({}_{12}\)N)\({}_{2}\)CuBr\({}_{4}\)] in the gapless TLL critical state [21; 22]. Experimental verification of such predictions is strongly hindered by the unavailability of model compounds. Moreover, unlike strong-leg coupled two-leg ladders, the number of strong-rung coupled two-leg ladder compounds remains limited to date [12; 58]. Hence, the compounds under investigation are an important class of materials in this direction. Further, as depicted in Fig. 6, the AFM \(J^{\prime}\) along the diagonal induces magnetic frustration in the ladder, making the spin-lattice of both our compounds a unique frustrated two-leg ladder [59].
According to the Goodenough-Kanamori-Anderson (GKA) rule, the nature and strength of the superexchange in magnetic insulators depend strongly on the Cu-\(X\)-Cu bridging angle and the extent of overlap between the Cu 3\(d\) and \(X\) 3\(p\)/4\(p\) orbitals of the ligands (\(X\) = Cl, Br). Strong AFM exchange interaction is favoured for \(\angle\)Cu-\(X\)-Cu \(>95^{\circ}\) [60]. As discussed earlier, the Cu\(X_{6}\) octahedra are distorted due to the Jahn-Teller effect and the Cu-\(X\) bond is elongated along the apical direction. In an octahedral coordination, the 3\(d_{x^{2}-y^{2}}\) orbital of Cu\({}^{2+}\) lies in the basal plane and contains the unpaired electron. In both compounds, the intra-dimer (or rung) coupling (\(J_{0}\)) arises from the overlap of the 3\(d_{x^{2}-y^{2}}\) orbitals of two Cu\({}^{2+}\) ions via the quasi-orthogonal \(p\) orbitals of the halide ligands in the basal plane. On the other hand, the diagonal (\(J^{\prime}\)) and leg (\(J^{\prime\prime}\)) interactions emerge from the overlap of the 3\(d_{z^{2}}\) orbital of one Cu\({}^{2+}\) with the 3\(d_{x^{2}-y^{2}}\) orbital of another Cu\({}^{2+}\) from the neighbouring dimer via the \(p\) orbitals of the apical halide ions of the distorted Cu\(X_{6}\) octahedra. Furthermore, the intra-dimer distance \(d_{1}\) is smaller than \(d_{2}\) and \(d_{3}\), and the angle \(\angle\)Cu-\(X\)-Cu along \(d_{1}\) exceeds \(95^{\circ}\) and is also larger than the angles along \(d_{2}\) and \(d_{3}\) (see Table 2). Thus, the presence of the unpaired electron in the 3\(d_{x^{2}-y^{2}}\) orbital, the shorter bond distance, and the larger bond angle are the obvious reasons why the AFM coupling \(J_{0}\) is much stronger than \(J^{\prime}\) and \(J^{\prime\prime}\). Due to the extended Cu-\(X\)-\(X\)-Cu paths, the orthogonal dimers of neighbouring ladders are very weakly connected in the \(ac\)-plane through the basal halide ions. The magnetic network of these compounds appears similar to that of the celebrated compounds \(A\)Cu\(X_{3}\) (\(A\) = Tl, NH\({}_{4}\), K) [61].
Despite the same crystal structure, the exchange couplings (\(J_{0}\), \(J^{\prime}\), and \(J^{\prime\prime}\)) of (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)Br\({}_{6}\) are larger than those of (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)Cl\({}_{6}\). The Cu-Cu bond distances and \(\angle\)Cu-\(X\)-Cu bond angles (see Table 2) of the two compounds are nearly the same, making a distinction difficult on structural grounds alone. The superexchange mechanism between Cu\({}^{2+}\) ions is explained well for Cu\(X_{2}\) (\(X\) = F, Cl, and Br) in Ref. [62] in terms of the GKA rule [60]. It is reported that a larger ligand size can amplify the orbital overlap, leading to larger exchange couplings. Since Br\({}^{-}\) has a larger ionic radius than Cl\({}^{-}\), one expects higher \(J_{0}\), \(J^{\prime}\), and \(J^{\prime\prime}\) for (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)Br\({}_{6}\) than for (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)Cl\({}_{6}\). Similar results are reported for the alternating chain compound (4,4\({}^{\prime}\)-bipyridinium)Cu\({}_{2}\)Cl\({}_{6-x}\)Br\({}_{x}\), where the AFM couplings increase monotonically with increasing bromide concentration [63]. Moreover, distortion of the Cu\(X_{6}\) octahedra also plays a decisive role: the distortion of CuCl\({}_{6}\) is larger than that of CuBr\({}_{6}\), which possibly favours weaker interactions in (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)Cl\({}_{6}\).
For an isolated dimer compound, the intra-dimer coupling represents the true spin-gap in zero field, and the critical field of gap closing (\(H_{\rm C1}\)) coincides with the critical field of saturation (\(H_{\rm C2}\)). The inter-dimer coupling not only reduces the value of the spin-gap but also increases the spread between \(H_{\rm C1}\) and \(H_{\rm C2}\), allowing space for field-induced quantum phases to occur [1; 15]. As shown in Fig. 9, these fields are \(H_{\rm C1}\simeq 78.9\) T and \(H_{\rm C2}\simeq 110.7\) T for the Cl compound and \(H_{\rm C1}\simeq 181.2\) T and \(H_{\rm C2}\simeq 374.2\) T for the Br compound. Thus, the large spacing between \(H_{\rm C1}\) and \(H_{\rm C2}\) implies a significant inter-dimer (leg) coupling along the ladder for both compounds, consistent with our \(\chi(T)\) analysis. Unfortunately, their exchange couplings are so large that the critical fields are not easily accessible experimentally. Nevertheless, these compounds have great potential if new materials can be designed with reduced exchange couplings by appropriately choosing the halide ions and ligands or by introducing more distortion in the lattice. Another interesting aspect of these compounds is that the dimers of one ladder are nearly orthogonal to the dimers of the neighbouring ladders. Therefore, if the inter-dimer couplings can be weakened by tuning structural parameters, then these compounds can be considered equivalent to the Shastry-Sutherland lattices Sr\({}_{2}\)Cu(BO\({}_{3}\))\({}_{2}\) and BaNd\({}_{2}\)ZnO\({}_{5}\) [23; 64]. Thus, the synthesis of these compounds opens up a new route to tune the magnetic parameters and realize new compounds relevant from the quantum magnetism point of view.
In summary, we present the details of the single-crystal growth and magnetic properties of two interesting isostructural quantum magnets (C\({}_{4}\)H\({}_{14}\)N\({}_{2}\))Cu\({}_{2}\)\(X_{6}\) (\(X\) = Cl, Br). Both compounds feature Cu\({}^{2+}\) two-leg ladders with a frustrated diagonal coupling. The dimers of each ladder are aligned orthogonal to the dimers of the adjacent ladders. The analysis of \(\chi(T)\) and the subsequent theoretical calculations reveal that the dimers are strongly coupled, leading to a drastic reduction of the spin-gap from its isolated-dimer value. Our exact diagonalization calculations assuming a frustrated two-leg ladder geometry indeed reproduce the experimental \(\chi(T)\) with leading rung coupling \(J_{0}/k_{\rm B}\simeq 116\) K and 300 K, weak leg coupling \(J^{\prime\prime}/k_{\rm B}\simeq 18.6\) K and 105 K, and weak frustrated diagonal coupling \(J^{\prime}/k_{\rm B}\simeq 23.2\) K and 90 K for the Cl and Br compounds, respectively. Despite the same crystal symmetry, the relatively larger exchange couplings of the Br compound are attributed to the larger ionic size of Br and its more diffuse \(p\)-orbitals, which facilitate stronger coupling between the Cu\({}^{2+}\) ions.
Further tuning of the exchange couplings by appropriately choosing the halide ions or ligands would make them model compounds for exploring field-induced quantum phases.
## V Acknowledgement
SG and RN acknowledge SERB, India for financial support under sanctioned Grant No. CRG/2022/000997. SG is supported by the Prime Minister's Research Fellowship (PMRF) scheme, Government of India. MK thanks SERB for financial support through Grant Sanction No. CRG/2020/000754. S. Ghosh would like to express sincere gratitude to DST-Inspire for financial support. DS acknowledges financial support from the Max Planck partner group and SERB, India (research Grant No. CRG/2019/005144).
|
2304.14762 | Using Perturbation to Improve Goodness-of-Fit Tests based on Kernelized
Stein Discrepancy | Kernelized Stein discrepancy (KSD) is a score-based discrepancy widely used
in goodness-of-fit tests. It can be applied even when the target distribution
has an unknown normalising factor, such as in Bayesian analysis. We show
theoretically and empirically that the KSD test can suffer from low power when
the target and the alternative distributions have the same well-separated modes
but differ in mixing proportions. We propose to perturb the observed sample via
Markov transition kernels, with respect to which the target distribution is
invariant. This allows us to then employ the KSD test on the perturbed sample.
We provide numerical evidence that with suitably chosen transition kernels the
proposed approach can lead to substantially higher power than the KSD test. | Xing Liu, Andrew B. Duncan, Axel Gandy | 2023-04-28T11:13:18Z | http://arxiv.org/abs/2304.14762v3 | # Using Perturbation to Improve Goodness-of-Fit Tests based on Kernelized Stein Discrepancy
###### Abstract
Kernelized Stein discrepancy (KSD) is a score-based discrepancy widely used in goodness-of-fit tests. It can be applied even when the target distribution has an unknown normalising factor, such as in Bayesian analysis. We show theoretically and empirically that the KSD test can suffer from low power when the target and the alternative distributions have the same well-separated modes but differ in mixing proportions. We propose to perturb the observed sample via Markov transition kernels, with respect to which the target distribution is invariant. This allows us to then employ the KSD test on the perturbed sample. We provide numerical evidence that with suitably chosen transition kernels the proposed approach can lead to substantially higher power than the KSD test.
## 1 Introduction

Our contribution is twofold. First, we formally characterise the low-power problem of the KSD test against multi-modal alternatives with mis-specified mixing proportions, complementing the results of Gorham and Mackey (2017) and Wenliang and Kanagawa (2020), which focus on the convergence of the sample KSD but not its limiting null distribution. Second, we address this issue by introducing a _perturbation operator_, giving rise to a family of perturbation-based GOF tests (Fig. 1, bottom right) which we call the _perturbed kernelized Stein discrepancy_ (pKSD) test. The role of the operator is to perturb the candidate and the target distributions simultaneously to create discrepancy that can be more easily detected by KSD. We propose to use Markov transition kernels that are invariant to the target \(P\) as the perturbation operator. The \(P\)-invariance ensures the resulting GOF tests provably control the Type-I error. The transition kernel is non-irreducible and uses an inter-modal jump proposal, which can increase the test power against multi-modal alternatives, sometimes substantially from the nominal level to almost 1.
**Outline.** Section 2 reviews kernelized Stein discrepancy. Section 3 formalises the low-power problem of the KSD test. The proposed method is presented in Section 4 and Section 5. We discuss related work in Section 6, followed by experiments in Section 7. Section 8 concludes.
**Notation.** Throughout this article, we denote by \(Q,P\) probability measures on \(\mathcal{X}=\mathbb{R}^{d}\) equipped with the Borel \(\sigma\)-algebra \(\mathcal{B}(\mathcal{X})\), and assume \(P\) has a continuously differentiable, positive Lebesgue density \(p\). We refer to \(Q\) as the _candidate_ distribution and \(P\) as the _target_ distribution. Our interest lies in testing \(H_{0}:Q=P\) against \(H_{1}:Q\neq P\) using a finite sample \(\{x_{i}\}_{i=1}^{n}\) drawn independently from \(Q\). We assume we can evaluate pointwise the _unnormalised_ density \(p^{*}(x)=p(x)/Z\), where \(Z\) is an unknown constant, as well as \(\nabla\log p^{*}(x)\), which is identical to the _score function_ of \(p\), namely \(s_{p}(x)\coloneqq\nabla\log p(x)=(\nabla_{x_{1}}\log p(x),\dots,\nabla_{x_{d}}\log p(x))^{\top}\).
## 2 Kernelized Stein Discrepancy Test
Choosing the Stein operator \(\mathcal{A}_{P}\) in (1) to be the operator mapping continuously differentiable, vector-valued functions \(f:\mathbb{R}^{d}\to\mathbb{R}^{d}\) to scalar-valued functions via \(\mathcal{A}_{P}f(x)\coloneqq\langle\nabla\log p(x),f(x)\rangle+\langle\nabla, f(x)\rangle\), one obtains a statistical divergence which only depends on the score function of \(P\). If \(f\in\mathcal{F}\) satisfies regularity conditions such as \(\lim_{\|x\|_{2}\to\infty}f(x)p(x)=0\), then one can show that \(\mathbb{E}_{x\sim P}[\mathcal{A}_{P}f(x)]=0\), and \(f\) is said to lie in the _Stein class_ of \(P\) (Liu et al., 2016, Sec. 2.2). The function class \(\mathcal{F}\) is usually chosen to be _(i)_ sufficiently broad so that the discrepancy separates distinct probability measures, i.e. \(\mathbb{S}(Q,P;\mathcal{F})=0\iff Q=P\), and _(ii)_ sufficiently regular so that the right-hand-side of (1) can be efficiently solved.
To this end, Liu et al. (2016); Chwialkowski et al. (2016) proposed to let \(\mathcal{F}\) be the unit ball of a _reproducing kernel Hilbert space_ (RKHS) (Berlinet and Thomas-Agnan, 2011). Specifically, let \(\mathcal{H}\) be an RKHS associated with positive definite kernel \(k:\mathcal{X}\times\mathcal{X}\to\mathbb{R}\). Let \(\mathcal{F}^{d}\) be the unit ball of the \(d\)-times Cartesian product \(\mathcal{H}^{d}:=\mathcal{H}\times\cdots\times\mathcal{H}\). Choosing \(\mathcal{F}=\mathcal{F}^{d}\) and the operator \(\mathcal{A}_{P}\) yields the _(Langevin) kernelized Stein discrepancy_ (KSD): \(\mathbb{D}(Q,P)\coloneqq\mathbb{D}_{\mathcal{F}^{d}}(Q,P)\).
Assuming the kernel \(k\) has continuous first-order derivatives with respect to both arguments, Chwialkowski et al. (2016, Thm. 2.1) showed that KSD attains a closed form: \(\mathbb{D}(Q,P)=\mathbb{E}_{x,x^{\prime}\sim Q}[u_{P}(x,x^{\prime})]\), where \(x,x^{\prime}\) are independent random variables drawn from \(Q\), and \(u_{P}\) is the _Stein kernel_: \(u_{P}(x,x^{\prime})\coloneqq s_{p}(x)^{\top}k(x,x^{\prime})s_{p}(x^{\prime})+s_{p}(x)^{\top}\nabla_{x^{\prime}}k(x,x^{\prime})+\nabla_{x}k(x,x^{\prime})^{\top}s_{p}(x^{\prime})+\sum_{i=1}^{d}\frac{\partial^{2}}{\partial x_{i}\partial x^{\prime}_{i}}k(x,x^{\prime})\). Notably, \(u_{P}\) (hence also \(\mathbb{D}(Q,P)\)) depends on \(p\) only through \(s_{p}(x)=\nabla\log p(x)\), so KSD is computable even without the knowledge of the normalising constant of \(p\).
We will assume \(k\) lies in the _Stein class_ of \(p\)(Liu et al., 2016, Def. 3.4), so that \(\mathbb{D}(P,P)=0\). When \(Q\) also admits a density \(q\), and \(k\) is _cc_-universal (Sriperumbudur et al., 2011, 2010) or integrally strictly positive definite (Stewart, 1976, Sec. 6), KSD is _separating_, meaning that \(\mathbb{D}(Q,P)=0\iff Q=P\), provided that \(\mathbb{E}_{x\sim Q}[\|s_{q}(x)-s_{p}(x)\|^{2}]<\infty\)(Chwialkowski et al., 2016; Liu et al., 2016). The assumption that \(Q\) has a density can be relaxed if the target density satisfies additional tail conditions, such as distant dissipativeness (Hodgkinson et al., 2020, Proposition 4).
\(\mathbb{D}(Q,P)\) can be estimated from a sample \(\{x_{i}\}_{i=1}^{n}\) from \(Q\) by the following U-statistic (Serfling, 2009, Sec. 5.5):
\[\hat{\mathbb{D}}_{P}\coloneqq\tfrac{1}{n(n-1)}\sum_{1\leq i\neq j\leq n}u_{P}( x_{i},x_{j}). \tag{2}\]
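As a concrete illustration of (2), the following is a minimal NumPy sketch with an IMQ kernel \(k(x,x^{\prime})=(c^{2}+\|x-x^{\prime}\|_{2}^{2})^{\beta}\); the kernel parameters, the Gaussian example at the end, and all function names are our own assumptions, not the authors' code.

```python
import numpy as np

def imq_stein_gram(X, score, c=1.0, beta=-0.5):
    """n x n matrix of Stein kernel values u_P(x_i, x_j) for the IMQ
    kernel k(x, y) = (c^2 + ||x - y||^2)^beta, zeroed on the diagonal."""
    n, d = X.shape
    diff = X[:, None, :] - X[None, :, :]            # (n, n, d)
    r2 = np.sum(diff**2, axis=-1)                   # squared distances
    base = c**2 + r2
    k = base**beta
    grad_x = 2 * beta * base[..., None]**(beta - 1) * diff   # nabla_x k
    grad_y = -grad_x                                          # nabla_{x'} k
    trace = (-2 * beta * d * base**(beta - 1)
             - 4 * beta * (beta - 1) * base**(beta - 2) * r2)
    S = score(X)                                    # rows are s_p(x_i)
    U = (S @ S.T) * k
    U += np.einsum("id,ijd->ij", S, grad_y)         # s_p(x)^T nabla_{x'} k
    U += np.einsum("ijd,jd->ij", grad_x, S)         # nabla_x k^T s_p(x')
    U += trace
    np.fill_diagonal(U, 0.0)
    return U

def ksd_ustat(X, score, **kw):
    """U-statistic estimate (2) of KSD from a sample X ~ Q."""
    n = X.shape[0]
    U = imq_stein_gram(X, score, **kw)
    return U.sum() / (n * (n - 1)), U

# Sanity check: Q = P = N(0, I_2), so the statistic should be near zero.
rng = np.random.default_rng(0)
stat, _ = ksd_ustat(rng.normal(size=(200, 2)), score=lambda x: -x)
```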
Figure 1: Power for a one-dimensional bimodal Gaussian target distribution \(P\) with mixing weight \(0.5\) and mode separation \(\Delta\). The candidate distribution \(Q\) is only the left component, from which 1000 samples are drawn. ospKSD and spKSD are our proposed methods; the others are existing benchmarks. _Top_: Rejection rates and target densities for varying \(\Delta\); the orange and green lines overlap. _Bottom left_: Density of \(Q\) before and after \(10\) steps of the perturbation described in Sec. 4 and the density of the target \(P\). _Bottom right_: Connections between KSD and the proposed divergences.
The KSD test uses (2) as a test statistic. The asymptotic distribution of \(\hat{\mathbb{D}}_{P}\) under \(H_{0}\) has no closed form, but can be approximated with a bootstrap procedure (Huskova and Janssen, 1993) using the bootstrap samples
\[\hat{\mathbb{D}}_{P}^{b}\coloneqq\tfrac{1}{n^{2}}\sum_{1\leq i \neq j\leq n}\left(w_{i}^{b}-1\right)\left(w_{j}^{b}-1\right)u_{P}(x_{i},x_{j}), \tag{3}\]
where \((w_{1}^{b},\ldots,w_{n}^{b})\sim\text{Mult}\left(n;\tfrac{1}{n},\ldots,\tfrac {1}{n}\right)\) follows a multinomial distribution. The test statistic \(\hat{\mathbb{D}}_{P}\) is compared against quantiles of \(\{\hat{\mathbb{D}}_{P}^{b}\}_{b=1}^{B}\) computed with \(B\) i.i.d. draws \((w_{1}^{b},\ldots,w_{n}^{b})\), and \(H_{0}\) is rejected for large values of \(\hat{\mathbb{D}}_{P}\). The resulting test achieves the desired level \(\alpha\) asymptotically (Huskova and Janssen, 1993; Liu et al., 2016).
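A sketch of this bootstrap, assuming the Stein kernel Gram matrix `U` (e.g. the second output of `ksd_ustat` above) has already been computed; reusing it keeps each bootstrap draw at \(O(n^{2})\) cost.

```python
import numpy as np

def ksd_bootstrap_pvalue(U, n_boot=1000, rng=None):
    """Multinomial bootstrap (3) for a KSD-type U-statistic.

    U: n x n Stein kernel matrix u_P(x_i, x_j) with zeroed diagonal."""
    rng = np.random.default_rng() if rng is None else rng
    n = U.shape[0]
    stat = U.sum() / (n * (n - 1))
    boot = np.empty(n_boot)
    for b in range(n_boot):
        # (w_1, ..., w_n) ~ Mult(n; 1/n, ..., 1/n), so E[w_i] = 1
        w = rng.multinomial(n, np.full(n, 1.0 / n)).astype(float) - 1.0
        boot[b] = (w @ U @ w) / n**2   # diagonal of U is zero, so i != j
    return stat, float(np.mean(boot >= stat))   # one-sided p-value
```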
Many improvements over the standard KSD test have been proposed, e.g., to reduce the computational cost (Jitkrittum et al., 2017), to address the curse-of-dimensionality (Gong et al., 2021, 2022), and to avoid kernel selection by adopting an aggregated testing procedure (Schrab et al., 2022).
## 3 Limitations of KSD Test
The KSD can be blind to certain discrepancies that are strongly visible in other metrics (e.g., in the \(L_{2}\) norm). One example is mixtures of the same well-separated components, differing only by the mixing proportions (weights). In fact, the KSD will be small in settings where the score difference \(\|s_{p}(x)-s_{q}(x)\|_{2}^{2}\) is large only with low \(Q\)-probability. This is because the KSD can be bounded from above by the _Fisher Divergence_ (FD) \(F(q,p)\coloneqq\mathbb{E}_{x\sim Q}[\|s_{p}(x)-s_{q}(x)\|_{2}^{2}]\)(Liu et al., 2016, Thm. 5.1).
This is known as the "blindness" of score-based discrepancies (Wenliang and Kanagawa, 2020) such as KSD. This limitation of KSD has been highlighted in a number of works (Gorham et al., 2019; Matsubara et al., 2022; Zhang et al., 2022; Kanagawa et al., 2022); however, its implication to the test power in GOF tests has not yet been formalised.
In Prop. 3.1 (proved in Appendix A), we formally connect the blindness issue with the rates of increase of the sample size and the FD between the two distributions.
**Proposition 3.1**.: _Let \(Q\) and \(P_{\nu}\), \(\nu=1,2,\ldots\), be probability measures defined on \(\mathbb{R}^{d}\) with positive densities \(q\) and \(p_{\nu}\), respectively. Assume \(\mathbb{E}_{x\sim Q}[\|s_{q}(x)\|_{2}^{2}]\), \(\mathbb{E}_{x,x^{\prime}\sim Q}[u_{Q}(x,x^{\prime})^{2}]<\infty\), and the kernel \(k\) satisfies_
\[\max\big{\{} \mathbb{E}_{x,x^{\prime}\sim Q}[\|k(x,x^{\prime})\|,\ \mathbb{E}_{x,x^{\prime}\sim Q}[\|\nabla_{x^{\prime}}k(x,x^{\prime})\|_{2}^{2}],\] \[\mathbb{E}_{x,x^{\prime}\sim Q}[\|\nabla_{x}k(x,x^{\prime})\|_{2}^ {2}]\big{\}}<\infty. \tag{4}\]
_Let \(x_{1},x_{2},\ldots\) be a sequence of i.i.d. samples from \(Q\). Denote by \(F_{\nu}\coloneqq\mathbb{E}_{x\sim Q}[\|s_{p_{\nu}}(x)-s_{q}(x)\|_{2}^{2}]\) the Fisher Divergence between \(Q\) and \(P_{\nu}\). If the sequence \(n_{1},n_{2},\ldots\in\mathbb{N}\) satisfies \(n_{\nu}\to\infty\) as \(\nu\to\infty\) and \(n_{\nu}=o(1/\max(F_{\nu},F_{\nu}^{1/2}))\), then_
\[n_{\nu}\hat{\mathbb{D}}_{P_{\nu}}\to_{d}\sum_{j=1}^{\infty}c_{j}(Z_{j}^{2}-1) \quad(\nu\to\infty)\, \tag{5}\]
_where \(\hat{\mathbb{D}}_{P_{\nu}}\) is the sample KSD computed using \(x_{1},\ldots,x_{n_{\nu}}\), \(Z_{j}\sim\mathcal{N}(0,1)\) i.i.d. and \(\{c_{j}\}\) are the eigenvalues of the Stein kernel \(u_{P}\) under \(Q\)._
_Remark 3.2_.: The RHS of (5) is the limiting distribution of \(\hat{\mathbb{D}}_{P_{\nu}}\) under \(H_{0}\)(Liu et al., 2016). Hence, this result shows that if the sample size \(n_{\nu}\) is \(o(1/\max(F_{\nu},F_{\nu}^{1/2}))\), then the test power converges to the _nominal level_ of the test.
_Remark 3.3_.: Assumption (4) is standard and holds for Inverse Multi-Quadrics (IMQ) and Radial Basis Function (RBF) kernels when \(Q\) has a finite second moment. IMQ kernels are preferred as they have desired tail properties to ensure a _convergence determining_ KSD for target densities satisfying the distantly dissipative condition (Gorham and Mackey, 2017; Hodgkinson et al., 2020). This includes Gaussian mixtures with common covariance, as well as distributions strongly log-concave outside of a compact set, such as Bayesian linear, logistic, and Huber regression posteriors with Gaussian priors, c.f., Gorham et al. (2019); Gorham and Mackey (2017). Prop. 3.1 does not contradict this result, as it considers a different regime where a _sequence_ of target distributions is of interest.
Prop. 3.1 allows us to study the test power by analysing the FD. For instance, when \(P\) is a mixture of two Gaussian components and \(Q\) is one of its component, the FD decreases exponentially fast to 0 with the mode separation. Prop. 3.1 then implies that an unrealistically large sample size would be needed for the test to have a non-trivial power. This is formalised in the following result.
**Theorem 3.4**.: _Let \(Q=\mathcal{N}(0,I_{d})\) and \(P_{\nu}=\pi\mathcal{N}(0,I_{d})+(1-\pi)\mathcal{N}(\Delta_{\nu},I_{d})\), where \(\pi\in[0,1]\) and \(\Delta_{\nu}\in\mathbb{R}^{d}\). With the same notation in Prop. 3.1 and assuming \(k\) satisfies (4), the limit (5) holds if \(n_{\nu}=o\left(e^{\|\Delta_{\nu}\|_{2}^{2}/64}\right)\)._
The proof is in Appendix B. Figure 1 provides numerical evidence for Thm. 3.4 by showing the rate of rejection over \(100\) repetitions at level \(\alpha=0.05\). We observe that the power of the KSD test (with an IMQ kernel whose bandwidth is chosen by the median heuristic (Gretton et al., 2012)) approaches the prescribed level for \(\Delta\geq 6\). A similarly poor performance is observed for KSDAgg (Schrab et al., 2022) and FSSD (Jitkrittum et al., 2017), two variants of KSD. In comparison, our proposed tests, ospKSD and spKSD, achieve an almost perfect power. Notably, the problem of low test power persists even if the samples are drawn from both components but with a different weight; see Figure 3 in Sec. 7.
## 4 KSD Test with Perturbation
We propose to increase the power of KSD test against _multi-modal alternatives_ by perturbing both the candidate and the target distributions with a set of _Markov transition kernels_(Robert and Casella, 2004, Chapter 6) and performing KSD tests on the _perturbed_ distributions. A Markov transition kernel is a function \(\mathcal{K}:\mathcal{X}\times\mathcal{B}(\mathcal{X})\to[0,1]\) such that _(i)_ for all \(x\in\mathcal{X}\), \(\mathcal{K}(x,\cdot)\) is a probability measure on \((\mathcal{X},\mathcal{B}(\mathcal{X}))\), and _(ii)_ for all \(A\in\mathcal{B}(\mathcal{X})\), \(\mathcal{K}(\cdot,A)\) is a measurable function on \(\mathcal{X}\). In our example, \(\mathcal{K}\) may also be an iterated composition of an underlying kernel, e.g. a Metropolis-Hastings kernel. The perturbed measure of \(Q\) is \((\mathcal{K}Q)(\cdot)\coloneqq\int_{\mathcal{X}}\mathcal{K}(x,\cdot)Q(dx)\), and similarly for \(\mathcal{K}P\).
### KSD with a Single Perturbation Kernel
We first consider a _single_ transition kernel \(\mathcal{K}\). We define the _perturbed kernelized Stein discrepancy_ (pKSD) as
\[\mathbb{D}(Q,P;\mathcal{K}) \coloneqq\mathbb{D}(\mathcal{K}Q,\mathcal{K}P)\] \[=\sup_{f\in\mathcal{F}^{d}}|\mathbb{E}_{x\sim\mathcal{K}Q}[ \mathcal{A}_{\mathcal{K}P}f(x)]|\, \tag{6}\]
assuming \(\mathcal{K}P\) admits a continuously differentiable density so that its score function is well-defined. Notably, \(\mathcal{K}Q\) need not have a (Lebesgue) density for (6) to exist.
The properties of pKSD are dictated by the operator \(\mathcal{K}\). A desirable choice should ensure that _(i)_ pKSD is well-defined, and in particular \(\mathcal{K}P\) should have a continuously differentiable density whenever \(P\) does, _(ii)_ pKSD (6) can be computed efficiently, and _(iii)_ the test can achieve a high power against alternatives with wrong mixing weights.
Given these _desiderata_, we propose to choose a transition kernel \(\mathcal{K}\) that is \(P\)_-invariant_, i.e. \(P(\cdot)=\int_{\mathcal{X}}\mathcal{K}(x,\cdot)p(x)dx\). A \(P\)-invariant kernel ensures \(\mathcal{K}P=P\), so the score function \(s_{\mathcal{K}p}=s_{p}\) is unchanged after perturbation. This means _(i)_ and _(ii)_ are trivially satisfied. In particular, pKSD will have a closed-form expression
\[\mathbb{D}(Q,P;\mathcal{K})=\mathbb{E}_{x,x^{\prime}\sim\mathcal{K}Q}[u_{P}(x,x^{\prime})]\,\]
provided that \(\mathbb{E}_{x\sim\mathcal{K}Q}[u_{P}(x,x)]<\infty\) (e.g., Chwialkowski et al. (2016, Thm. 2.1)). Moreover, the \(P\)-invariance allows a GOF test similar to the standard KSD test to be constructed, as we will elucidate in Sec. 4.2. To address _(iii)_, we employ a proposal map for \(\mathcal{K}\) that "aggregates" densities across the modes of the distribution. As we will demonstrate numerically, such a proposal is sensitive to discrepancies in mixing weights.
Given i.i.d. \(\{x_{i}\}_{i=1}^{n}\sim Q\), a sample \(\{\tilde{x}_{i}\}_{i=1}^{n}\) from \(\mathcal{K}Q\) can be drawn by running 1-step transitions under \(\mathcal{K}\) starting from each \(x_{i}\). pKSD can then be estimated by the U-statistic:
\[\hat{\mathbb{D}}_{P,\mathcal{K}}\coloneqq\tfrac{1}{n(n-1)}\sum_{1\leq i\neq j \leq n}u_{P}(\tilde{x}_{i},\tilde{x}_{j}). \tag{7}\]
### KSD with Multiple Perturbation Kernels
A single transition kernel can be limited in improving the test power against general multi-modal alternatives. It also does _not_ guarantee the _separation_ property, since \(\mathbb{D}(\mathcal{K}Q,\mathcal{K}P)=0\) does not imply \(Q=P\) unless \(\mathcal{K}\) is injective, in which case \(\mathcal{K}Q=\mathcal{K}P\implies Q=P\) (as for the convolution operator). However, choosing only injective \(\mathcal{K}\) would significantly restrict the class of possible options. Instead, we propose to employ a finite _collection_ \(\mathcal{S}=\{\mathcal{K}_{s}\}_{s=1}^{S}\) of \(P\)-invariant transition kernels, and require \(\mathcal{S}\) to include the identity transition kernel \(\mathcal{K}_{\text{id}}\), defined as \(\mathcal{K}_{\text{id}}(x,A)=\delta_{x}(A)\) for all \(x\in\mathcal{X}\) and \(A\in\mathcal{B}(\mathcal{X})\), where \(\delta_{x}(A)=1\) if \(x\in A\) and 0 otherwise. In particular, \(\mathbb{D}(Q,P;\mathcal{K}_{\text{id}})\) reduces to the standard KSD. This gives rise to a separating statistical divergence which we term _sum-pKSD_ (spKSD)
\[\mathbb{D}(Q,P;\mathcal{S})\coloneqq\sum_{\mathcal{K}\in\mathcal{S}}\mathbb{D} (\mathcal{K}Q,P)\,\]
where we have overloaded \(\mathbb{D}(Q,P;\mathcal{S})\) with a set \(\mathcal{S}\) in place of a single transition kernel to denote spKSD. The next result (proved in Appendix C) shows that spKSD indeed separates probability measures so long as \(\mathcal{K}_{\text{id}}\in\mathcal{S}\).
**Proposition 4.1** (spKSD separation).: _Suppose \(Q,P\) are probability measures on \(\mathcal{X}\) that admit positive (Lebesgue) densities \(q,p\), respectively. Further assume \(\mathbb{E}_{x\sim\mathcal{K}Q}[u_{P}(x,x)]<\infty\) for all \(\mathcal{K}\in\mathcal{S}\) and \(\mathbb{E}_{x\sim Q}[\|s_{p}(x)-s_{q}(x)\|_{2}^{2}]<\infty\). If the kernel \(k\) is cc-universal and \(\mathcal{K}_{\text{id}}\in\mathcal{S}\), then \(\mathbb{D}(Q,P;\mathcal{S})\geq 0\) with equality if and only if \(Q=P\)._
The assumption that the alternative distribution \(Q\) also admits a density is common in KSD literature when proving separation (e.g., Liu et al. (2016); Chwialkowski et al. (2016); Jitkrittum et al. (2017); Gong et al. (2021b)), but it can be relaxed if \(P\) is light-tailed or distantly dissipative (Hodgkinson et al., 2020; Gorham and Mackey, 2017).
spKSD can also be written as a double expectation akin to KSD, provided \(\mathbb{E}_{x\sim\mathcal{K}_{s}Q}[u_{P}(x,x)]<\infty\) for all \(s\). This allows spKSD to be estimated given a random sample \(\{x_{i}\}_{i=1}^{n}\) from \(Q\). Formally, for each \(\mathcal{K}_{s}\in\mathcal{S}=\{\mathcal{K}_{1},\ldots,\mathcal{K}_{S}\}\)
a sample \(\{x_{i}^{s}\}_{i=1}^{n}\) from \(\mathcal{K}_{s}Q\) can be drawn by running 1-step transitions under \(\mathcal{K}_{s}\) starting from each \(x_{i}\). Denote by \(x_{i}^{1:S}\coloneqq\text{concat}(x_{i}^{1},\dots,x_{i}^{S})\) the concatenation of \(x_{i}^{1},\dots,x_{i}^{S}\) into a single vector. We propose to estimate \(\mathbb{D}(Q,P;\mathcal{S})\) using the following U-statistic
\[\hat{\mathbb{D}}_{P,\mathcal{S}}\coloneqq\tfrac{1}{n(n-1)}\sum_{1\leq i\neq j \leq n}\tilde{u}_{P}(x_{i}^{1:S},x_{j}^{1:S})\;, \tag{8}\]
where \(\tilde{u}_{P}(x_{i}^{1:S},x_{j}^{1:S})\coloneqq\sum_{s=1}^{S}u_{P}(x_{i}^{s}, x_{j}^{s})\).
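Since \(\tilde{u}_{P}\) is a sum over the \(S\) perturbed copies of the sample, (8) amounts to summing per-kernel Stein Gram matrices; the following is a minimal sketch of our own, with the Gram matrices assumed precomputed (e.g. by `imq_stein_gram` above on each perturbed sample).

```python
import numpy as np

def spksd_ustat(grams):
    """spKSD U-statistic (8).

    grams: list of S arrays of shape (n, n); grams[s][i, j] holds
    u_P(x_i^s, x_j^s) on the sample perturbed by the s-th kernel,
    with the pairing index i shared across all copies."""
    n = grams[0].shape[0]
    U = np.sum(grams, axis=0)          # tilde{u}_P(x_i^{1:S}, x_j^{1:S})
    np.fill_diagonal(U, 0.0)
    return U.sum() / (n * (n - 1)), U
```

The bootstrap (3) then applies unchanged to the summed matrix `U`.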
### GOF Testing with spKSD
Having constructed a test statistic for spKSD in the form of a U-statistic, the next result (proved in Appendix D) derives the limiting distribution of spKSD statistic under the null and alternative hypotheses. We denote by \(R_{Q}\) the distribution of \(x_{i}^{1:S}\) constructed as before and use the same notations as in Prop. 4.1.
**Proposition 4.2** (Asymptotic distributions of spKSD).: _Suppose the assumptions in Prop. 4.1 hold, and further assume \(\mathbb{E}_{w,w^{\prime}\sim R_{Q}}[\tilde{u}_{P}(w,w^{\prime})^{2}]<\infty\). Let \(\{z_{j}\}_{j\geq 1}\) be independent draws from \(\mathcal{N}(0,1)\) and denote by \(\{c_{j}\}_{j\geq 1}\) the eigenvalues of \(\tilde{u}_{P}\) under \(R_{Q}\), i.e., the solutions of \(c_{j}\phi_{j}(\cdot)=\mathbb{E}_{w\sim R_{Q}}[\tilde{u}_{P}(\cdot,w)\phi_{j}(w)]\) for non-zero \(\phi_{j}\). As \(n\to\infty\),_
1. _Under_ \(H_{0}:Q=P\)_, we have_ \(n\hat{\mathbb{D}}_{P,\mathcal{S}}\to_{d}\sum_{j=1}^{\infty}c_{j}(z_{j}^{2}-1)\)_._
2. _Under_ \(H_{1}:Q\neq P\)_, we have_ \(\sigma_{u}^{2}\coloneqq 4\text{Var}_{w\sim R_{Q}}(\mathbb{E}_{w^{\prime}\sim R_{Q}} [\tilde{u}_{P}(w,w^{\prime})])>0\)_, and_ \(\sqrt{n}(\hat{\mathbb{D}}_{P,\mathcal{S}}-\mathbb{D}(Q,P;\mathcal{S}))\to_{d} \mathcal{N}(0,\sigma_{u}^{2})\)_._
Prop. 4.2 assumes Q also admits a Lebesgue density; when it does not, the stated results still hold true if we additionally assume _i)_ the conditions on \(Q\) in Prop. 4.1 for KSD to separate probability measures, and _ii)_\(R_{Q}(A)>0\) whenever \(R_{P}(A)>0\) for any measurable set \(A\subset\mathcal{X}^{S}\).
Similarly to the case with the standard KSD, the cumulative distribution function of the limiting distribution under \(H_{0}\) has no closed-form expression, but the same bootstrap technique can be employed to estimate the \(p\)-value using the perturbed samples. The complete algorithm of goodness-of-fit testing with pKSD is given in Algorithm 1.
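Algorithm 1 appears in the paper's appendix; the sketch below only mirrors the steps described in this section, with `kernels` (one-step \(P\)-invariant samplers, including the identity) and `stein_gram` as assumed helpers rather than the authors' API.

```python
import numpy as np

def spksd_test(X, kernels, stein_gram, alpha=0.05, n_boot=1000, rng=None):
    """Hedged sketch of GOF testing with spKSD.

    kernels: list of callables; kernels[s](X, rng) returns one-step draws
             x_i^s ~ K_s(x_i, .); include the identity, lambda X, rng: X.
    stein_gram: callable returning the n x n Stein kernel matrix u_P on a
             sample, e.g. imq_stein_gram above with the score bound."""
    rng = np.random.default_rng() if rng is None else rng
    n = X.shape[0]
    U = np.zeros((n, n))
    for K in kernels:                   # perturb sample, accumulate Grams
        U += stein_gram(K(X, rng))
    np.fill_diagonal(U, 0.0)
    stat = U.sum() / (n * (n - 1))
    boot = np.empty(n_boot)
    for b in range(n_boot):             # multinomial bootstrap (3)
        w = rng.multinomial(n, np.full(n, 1.0 / n)).astype(float) - 1.0
        boot[b] = (w @ U @ w) / n**2
    pval = float(np.mean(boot >= stat))
    return stat, pval, pval < alpha     # True means reject H_0
```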
## 5 A Transition Kernel for Multi-Modal Alternatives
We consider transition kernels of the Metropolis-Hastings (MH) type (Metropolis et al., 1953; Hastings, 1970). At a current state \(x\), a new state \(x^{\prime}\) is proposed by first generating a \(d_{u}\)-dimensional random vector \(u\) from some known density \(g\), then mapping to \(x^{\prime}=h(x|u)\), where \(h(\cdot|u)\) is some deterministic, invertible function that is differentiable with differentiable inverse. The proposed state \(x^{\prime}\) is hence a deterministic function given \(x\) and \(u\).
We choose in this paper a density \(g\) defined on some discrete space \(\mathcal{U}\). The transition kernel is
\[\mathcal{K}(x,A)=\sum_{u\in\mathcal{U}}\delta_{x^{\prime}}(A)g(u)\alpha(x,x^{ \prime})+\delta_{x}(A)r(x),\]
where \(x^{\prime}=h(x|u)\) is the proposed state, \(\alpha(x,x^{\prime})\) is an accept-reject rule that guarantees \(P\)-invariance, \(\delta_{x}(A)=1\) if \(x\in A\) and 0 otherwise, and \(r(x)=1-\sum_{u\in\mathcal{U}}g(u)\alpha(x,x^{\prime})\). The accept-reject rule \(\alpha(x,x^{\prime})\) is designed to satisfy the _detailed balance condition_:
\[\int_{x\in A}\sum_{u\in\mathcal{U}}\delta_{x^{\prime}}(B)p(x)g(u) \alpha(x,x^{\prime})dx\] \[=\int_{x^{\prime}\in B}\sum_{u^{\prime}\in\mathcal{U}}\delta_{x}(A )p(x^{\prime})g(u^{\prime})\alpha(x^{\prime},x)dx^{\prime}, \tag{9}\]
for all \(A,B\in\mathcal{B}(\mathcal{X})\). One valid choice is
\[\alpha(x,x^{\prime})=\min\left(1,\tfrac{p(x^{\prime})g(u^{\prime})}{p(x)g(u)} \left|\tfrac{\partial h(x|u)}{\partial x}\right|\right), \tag{10}\]
if \(x^{\prime}=h(x|u)\) and \(x=h^{-1}(x^{\prime}|u^{\prime})\) for some \(u,u^{\prime}\in\mathcal{U}\), and zero otherwise. Here, \(\partial h(x|u)/\partial x\) denotes the Jacobian of the transformation from \(x\) to \(x^{\prime}\). Appendix E proves that \(\alpha(x,x^{\prime})\) indeed satisfies (9). The accept-reject rule (10) resembles those used in Reversible-Jump MCMC (Green, 1995; Green and Hastie, 2009) and generalises the well-known MH rule, for which the determinant of the Jacobian is 1.
### Choosing the Proposal Density
We propose a jump proposal \(h(x|u)\) that superposes masses at each mode of \(p\). Our choice is motivated by Markov kernels used in the optimisation-based MCMC literature, specifically the _deterministic jumps_ proposal in Pompe et al. (2020). New states are proposed by randomly selecting a mapping from a set of candidates that are constructed using the location and geometry of the modes of \(p\). The resulting kernel is _not_ irreducible, so the limiting distribution is not necessarily \(P\). Non-irreducibility is essential for the proposed test to work since, under the alternative, the transition kernel should perturb \(Q\) to some other distribution for which the KSD between \(P\) and the perturbed distribution becomes larger compared with the KSD with the un-perturbed one. This is in contrast to MCMC, which requires irreducibility so that asymptotically the chain can sample from the target distribution.
Denote by \(\mu_{1},\dots,\mu_{M}\in\mathbb{R}^{d}\) the modes of the density \(p\), and \(A_{1},\dots,A_{M}\in\mathbb{R}^{d\times d}\) the _inverse_ of the Hessian matrices at those points; how to estimate these quantities will be discussed later. When \(p\) is a mixture of elliptic distributions such as Gaussian or multivariate \(t\)-distributions, each \(A_{m}\) can be viewed as the covariance matrix of a component. When the Hessians do not exist (e.g., \(-\log p\) is not twice differentiable), we can set \(A_{m}=I_{d}\) and the remaining discussion still follows.
For a current state \(x\), our proposal randomly selects a pair of modes and attempts to map \(x\) from one mode to the "corresponding" point \(x^{\prime}\) in the other. Formally, let \(u=(u_{1},u_{2})\sim\text{Unif}(\{(i,j):1\leq i\neq j\leq M\})\) be a uniform random vector over the index set of all \(M(M-1)\) pairs of _distinct_ (and ordered) modes, i.e., \(g(u)=1/(M(M-1))\) for all \(u\). Given a fixed constant \(\theta>0\), the proposal map is
\[h(x|u)=h_{\theta}(x|u)=A_{u_{2}}^{1/2}A_{u_{1}}^{-1/2}(x-\theta\mu_{u_{1}})+ \theta\mu_{u_{2}}\,\]
with the inverse map \(h^{-1}(x^{\prime}|u)=A_{u_{1}}^{1/2}A_{u_{2}}^{-1/2}(x^{\prime}-\theta\mu_{u_{2}})+\theta\mu_{u_{1}}\). Intuitively, \(h\) sends points from mode \(\mu_{u_{1}}\) to \(\mu_{u_{2}}\), allowing for scaling by local Hessians, and \(h^{-1}\) performs the opposite operation. The constant \(\theta\) is a hyperparameter introduced to control the scale of the jump, which can increase the ability to detect discrepancies in the mixing weights. Herein, we call \(\theta\) the _jump scale_.
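A sketch of one step of this kernel, assuming the modes \(\mu_{m}\) and inverse Hessians \(A_{m}\) have already been estimated (see Sec. 5.4); Cholesky factors are used as a convenient consistent choice of matrix square root, and `log_p` may be the unnormalised log-density since only differences enter the acceptance ratio.

```python
import numpy as np

def mode_jump_step(x, log_p, mus, As, theta=1.0, rng=None):
    """One MH step with the deterministic mode-jump proposal of Sec. 5."""
    rng = np.random.default_rng() if rng is None else rng
    M = len(mus)
    i, j = rng.choice(M, size=2, replace=False)   # u = (u1, u2), g uniform
    Li = np.linalg.cholesky(As[i])                # plays the role of A_{u1}^{1/2}
    Lj = np.linalg.cholesky(As[j])                # plays the role of A_{u2}^{1/2}
    T = Lj @ np.linalg.inv(Li)                    # linear part of h(.|u)
    x_new = T @ (x - theta * mus[i]) + theta * mus[j]
    # Acceptance (10): g(u') = g(u) cancels; |Jacobian| = |det T|
    log_alpha = log_p(x_new) - log_p(x) + np.linalg.slogdet(T)[1]
    if np.log(rng.uniform()) < min(0.0, log_alpha):
        return x_new
    return x                                      # rejected: stay at x
```

Applying this map independently to each \(x_{i}\) gives the one-step perturbed sample \(\{\tilde{x}_{i}\}\) used in (7).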
Given a current state, our proposal chooses two modes randomly, so a proposed state can potentially lie in a low-density region, thus leading to a low acceptance probability. Pompe et al. (2020) address this by recording an auxiliary variable for the mode index and augmenting the state space to \(\mathcal{X}\times\{1,2,\ldots,M(M-1)\}\), so that at every step the new state is _guaranteed_ to lie near a mode. However, the same trick cannot be used in our case because the augmented density no longer has a well-defined score function.
### Understanding the Source of Test Power
To understand the improvement in test power against multi-modal alternatives, we characterise the limiting distribution of a general distribution \(Q\) with a positive density \(q\) when we apply the perturbation with infinitely many steps (i.e., \(\mathcal{K}^{T}\) with \(T=\infty\)). For simplicity, we assume \(M=2\) and \(A_{1}=A_{2}=I_{d}\) are identity matrices, so that the proposal function is \(h_{\theta}(x|u)=x-\theta(\mu_{u_{1}}-\mu_{u_{2}})\) for \(x\in\mathcal{X}\) and \(u=(u_{1},u_{2})\in\mathcal{U}=\{(i,j):1\leq i\neq j\leq 2\}\). Thus, given a current state \(x\), the transition kernel proposes moves to \(x+\theta(\mu_{1}-\mu_{2})\) and \(x-\theta(\mu_{1}-\mu_{2})\) with equal probability.
**Proposition 5.1**.: _Under the assumptions of Sec. 5.2, the limiting distribution under \(\mathcal{K}\) with the initial distribution \(Q\) is \((\mathcal{K}^{\infty}Q)(A)=\int_{x\in A}q^{\infty}(x)dx\), \(A\in\mathcal{B}(\mathcal{X})\), where_
\[q^{\infty}(x):=p(x)\tfrac{\sum_{s\in\mathbb{Z}}q(x+s\nu)}{\sum_{k\in\mathbb{Z}}p(x+k\nu)}\, \tag{11}\]
_and \(\nu:=\theta(\mu_{1}-\mu_{2})\)._
A proof is in Appendix F. Prop. 5.1 shows that the limiting density under \(\mathcal{K}\) is the target density \(p\) weighted by the ratio between the total masses of \(q\) and \(p\) over a discrete grid.
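The lattice sums in (11) are straightforward to evaluate numerically for light-tailed densities; the following sketch (our own, with illustrative parameters) truncates them and reproduces the transfer of mass between modes visible in Fig. 2.

```python
import numpy as np
from scipy.stats import norm

def q_infinity(x, q, p, nu, n_terms=50):
    """Limiting density (11) on a 1-D grid x, truncating the sums over
    s, k in Z to |s|, |k| <= n_terms."""
    shifts = np.arange(-n_terms, n_terms + 1)[:, None] * nu
    num = q(x[None, :] + shifts).sum(axis=0)   # sum_s q(x + s*nu)
    den = p(x[None, :] + shifts).sum(axis=0)   # sum_k p(x + k*nu)
    return p(x) * num / den

# Setup of Sec. 5.2: Q = N(0, 1), P a two-component Gaussian mixture.
pi_, delta, theta = 0.5, 6.0, 0.9
q = lambda x: norm.pdf(x)
p = lambda x: pi_ * norm.pdf(x) + (1 - pi_) * norm.pdf(x - delta)
xs = np.linspace(-5.0, 11.0, 400)
dens = q_infinity(xs, q, p, nu=theta * delta)  # mass now sits at both modes
```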
To understand why this helps to increase the KSD value, we first rewrite KSD as
\[\mathbb{D}(Q,P)=\mathbb{E}_{x,x^{\prime}\sim Q}[\delta_{q,p}(x)^{\top}k(x,x^ {\prime})\delta_{q,p}(x^{\prime})]\,\]
where \(\delta_{q,p}(x)\coloneqq s_{q}(x)-s_{p}(x)\) is the score difference. This holds whenever \(k\) is an integrally strictly positive definite kernel (Liu et al., 2016, Thm. 3.6). The pKSD is then
\[\mathbb{D}(Q,P;\mathcal{K}^{\infty})=\mathbb{E}_{x,x^{\prime}\sim Q^{\infty }}[\delta_{q^{\infty},p}(x)^{\top}k(x,x^{\prime})\delta_{q^{\infty},p}(x^{ \prime})]\,\]
where, by Prop. 5.1, the score difference becomes \(\delta_{q^{\infty},p}(x)=s_{q^{\infty}}(x)-s_{p}(x)=\nabla\log\phi_{q}(x)-\nabla \log\phi_{p}(x)\), where \(\phi_{q}(x)\coloneqq\sum_{s\in\mathbb{Z}}q(x+s\nu)\) and similarly for \(\phi_{p}\).
The operator \(\phi\) superposes densities along a grid, which makes it possible to create local discrepancy in the high-probability regions of \(Q\), for example by exchanging masses between modes.
As a concrete example, we consider the setup in Thm. 3.4, where \(Q=\mathcal{N}(0,I_{d})\), and \(P=\pi\mathcal{N}(0,I_{d})+(1-\pi)\mathcal{N}(\Delta,I_{d})\) for some \(\pi\in(0,1)\) and \(\Delta\in\mathbb{R}^{d}\). The operator has created discrepancy in high-probability regions of \(Q\), as demonstrated in Fig. 2. This also highlights the role of \(\nu\): when \(\nu=\Delta\), the two components will overlap almost exactly under the perturbation, so \(\delta_{q^{\infty},p}(x)\approx 0\) near \(x=0\), and the KSD will remain small (Fig. 2). It is hence crucial to tune \(\nu\) (equivalently, \(\theta\)). One can in principle select \(\theta\) by maximising the (approximate) test power, similarly to the idea in Jitkrittum et al. (2017). However, gradient-based approaches are infeasible as pKSD is _not_ differentiable with respect to \(\theta\). An alternative is to use grid-search over some finite set of \(\theta\) values.
### Choosing the Set of Perturbations \(\mathcal{S}\)
It remains to choose the set of perturbations \(\mathcal{S}\) in spKSD. We propose two ways to construct \(\mathcal{S}\), one based on grid-search, and the other based on optimisation.
For the grid-based approach, we choose a set of values \(\{\theta_{s}\}_{s=1}^{S-1}\) and let \(\mathcal{S}=\{\mathcal{K}_{\text{id}},\mathcal{K}_{1},\ldots,\mathcal{K}_{S-1}\}\), where \(\mathcal{K}_{s}\) is the transition kernel described in this section with jump scale \(\theta_{s}\). We propose to choose each \(\theta_{s}\) close to 1, following the observations in Sec. 5.2. We still refer to the resulting divergence as spKSD.

Figure 2: Top: Densities and score functions of \(p,q\) and the limiting distribution \(q^{\infty}\) in (11). Bottom: pKSD with different jump scales \(\theta\), compared with KSD.
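As a concrete illustration of the transition kernel \(\mathcal{K}_{\theta}\), the sketch below (an assumption-laden illustration, not the authors' code) applies \(T\) Metropolis-Hastings steps with the two-direction deterministic jump proposal of Sec. 5.2; picking the two directions with equal probability makes the acceptance ratio reduce to \(p(x^{\prime})/p(x)\), so the kernel leaves \(p\) invariant. The mode vectors are taken as known here; in practice they are estimated by the mode-estimation step described below.

```python
# A sketch of the MH-type perturbation kernel K_theta with M = 2 modes and
# identity Hessians: propose x +/- theta*(mu1 - mu2) with equal probability
# and accept with the Metropolis ratio, which keeps the target p invariant.
import numpy as np

def log_p(x, mu1, mu2, pi=0.5):
    # log-density of the bimodal Gaussian target, up to an additive constant
    a = np.log(pi) - 0.5 * np.sum((x - mu1) ** 2, axis=-1)
    b = np.log(1 - pi) - 0.5 * np.sum((x - mu2) ** 2, axis=-1)
    return np.logaddexp(a, b)

def perturb(X, theta, mu1, mu2, T=10, seed=0):
    rng = np.random.default_rng(seed)
    nu = theta * (mu1 - mu2)
    for _ in range(T):
        sign = rng.choice([-1.0, 1.0], size=(len(X), 1))  # pick a direction u
        prop = X + sign * nu                              # deterministic jump
        log_alpha = log_p(prop, mu1, mu2) - log_p(X, mu1, mu2)
        accept = np.log(rng.uniform(size=len(X))) < log_alpha
        X = np.where(accept[:, None], prop, X)
    return X

mu1, mu2 = np.zeros(1), np.full(1, 6.0)
X = np.random.default_rng(1).normal(size=(1000, 1))       # a sample from Q
print(perturb(X, theta=1.0, mu1=mu1, mu2=mu2).mean())     # mass spreads over both modes
```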
For the optimisation-based approach, we use only two transition kernels \(\mathcal{S}=\{\mathcal{K}_{\mathrm{id}},\mathcal{K}_{\theta}\}\), where \(\mathcal{K}_{\theta}\) has jump scale \(\theta\) that is tuned by maximising a proxy for the asymptotic test power. Due to the asymptotic normality proved in Prop. 4.2, we can adopt the same approach in Jitkrittum et al. (2017, Prop. 4) to approximate the asymptotic power with the ratio
\[\hat{\mathbb{D}}_{P,\mathcal{K}_{\theta}^{T}}\ /\ \hat{\sigma}_{u}\, \tag{12}\]
where \(\hat{\sigma}_{u}\) is an estimate of the asymptotic standard deviation \(\sigma_{u}\), given by the square root of
\[\hat{\sigma}_{u}^{2}\coloneqq\frac{4}{n^{3}}\sum_{i=1}^{n}\left(\sum_{j=1}^{n}H_{i,j}\right)^{2}-\frac{4}{n^{4}}\left(\sum_{i,j=1}^{n}H_{i,j}\right)^{2}\,\]
with \(H_{i,j}\coloneqq u_{P}(x_{i},x_{j})+u_{P}(x_{i}^{\theta},x_{j}^{\theta})\) and \(x_{i}^{\theta}\sim\mathcal{K}_{\theta}^{T}Q\); see also Schrab et al. (2022, Eq. 8). Since the objective (12) (in particular, \(\mathcal{K}_{\theta}\)) is not differentiable with respect to \(\theta\), we still choose \(\theta\) from a pre-specified finite set \(\{\theta_{s}\}_{s=1}^{S}\). The objective is hence \(\max_{\theta\in\{\theta_{1},\ldots,\theta_{S}\}}\hat{\mathbb{D}}_{P,\mathcal{ K}_{\theta}^{T}}/\hat{\sigma}_{u}\). We call the resulting discrepancy the _optimised sum-pKSD_ (ospKSD).
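For concreteness, the selection step can be sketched as follows, assuming the \(n\times n\) array of terms \(H_{i,j}\) has already been assembled for each candidate \(\theta\); the U-statistic handling of the diagonal below is our own illustrative convention.

```python
# A sketch of the power-proxy criterion (12): given the matrix H with
# H[i, j] = u_P(x_i, x_j) + u_P(x_i^theta, x_j^theta), return the ratio of
# the estimated divergence to the estimated asymptotic standard deviation.
import numpy as np

def power_proxy(H):
    H = np.asarray(H, dtype=float).copy()
    n = H.shape[0]
    np.fill_diagonal(H, 0.0)                      # U-statistic: drop the i = j terms
    D_hat = H.sum() / (n * (n - 1))               # estimate of the perturbed KSD
    row = H.sum(axis=1)
    var_hat = 4.0 / n**3 * np.sum(row**2) - 4.0 / n**4 * H.sum()**2
    return D_hat / np.sqrt(max(var_hat, 1e-12))   # the objective in (12)

# theta is then chosen over a finite grid, e.g.
# theta_star = max(thetas, key=lambda th: power_proxy(H_for(th)))
```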
Whether the grid-based or the optimisation-based method should be preferred requires trade-offs and depends on the specific problem at hand. The spKSD requires no held-out sets, but can suffer from a low test power if \(\{\theta_{s}\}_{s=1}^{S-1}\) is poorly chosen in that most of the \(\mathcal{K}_{s}\) fail to improve the test power. On the other hand, ospKSD uses a judiciously tuned \(\theta\), but the data-splitting can also lead to a drop in test power. In our experiments, we find that spKSD tends to work better for target distributions with a simple geometry, specifically mixtures of _elliptic_ distributions (Cambanis et al., 1981) (e.g., the Gaussian mixture examples). However, for distributions whose mixing components have non-elliptic contours (e.g., the mixture of \(t\) and banana example, and the sensor network localisation example), the benefit of optimisation seems to outweigh the negative impact of data-splitting, and ospKSD outperforms spKSD.
### Estimating Mode Vectors and Hessians
We estimate \(\mu_{j}\) and \(A_{j}\) by the local minima and Hessians of \(-\log p\). To do so, we run in parallel a sequence of BFGS optimisers (Nocedal and Wright, 2006) initiated at different starting points, following Pompe et al. (2020). BFGS is used because it returns both the local optima and approximated Hessians at those points. The optima are then merged if their weighted Mahalanobis distance is smaller than a pre-specified threshold. In our experiments, we initialise the optimisers from a set of size \(n_{\text{init}}\), constructed either by sampling uniformly from a hyper cube \([L_{1},U_{1}]\times\cdots\times[L_{d},U_{d}]\) (for spKSD), or by using both randomly sampled data and some training set (for ospKSD). The full procedure is described in Appendix G.1.
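A minimal sketch of this step is given below, assuming SciPy's BFGS routine (whose output includes an approximate inverse Hessian); the number of starts and the merging threshold `tol` are illustrative choices.

```python
# Multi-start BFGS on -log p: collect local minima and approximate Hessians,
# merging any optimum whose weighted Mahalanobis distance to an existing one
# falls below a threshold.
import numpy as np
from scipy.optimize import minimize

def find_modes(neg_log_p, starts, tol=1.0):
    modes, hessians = [], []
    for x0 in starts:
        res = minimize(neg_log_p, x0, method="BFGS")
        if not res.success:
            continue
        A = np.linalg.inv(res.hess_inv)           # approximate Hessian at the optimum
        dup = any((res.x - m) @ Am @ (res.x - m) < tol
                  for m, Am in zip(modes, hessians))
        if not dup:
            modes.append(res.x)
            hessians.append(A)
    return modes, hessians

# two-mode example: -log of an (unnormalised) Gaussian mixture
neg_log_p = lambda x: -np.logaddexp(-0.5 * np.sum(x**2),
                                    -0.5 * np.sum((x - 6.0)**2))
starts = np.random.default_rng(0).uniform(-5.0, 11.0, size=(20, 1))
modes, _ = find_modes(neg_log_p, starts)
print([m.round(2) for m in modes])                # approximately [0.] and [6.]
```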
## 6 Related Work
Perturbation with convolutionThe idea of combining a discrepancy with perturbation has been widely studied, where the perturbation is often a convolution operator with Gaussian noise. E.g., the _spread divergence_(Zhang et al., 2020) combines Gaussian convolution with Kullback-Leibler (KL) divergence (more generally, \(f\)-divergences) to solve the issue that KL divergence is ill-defined when the distributions have undefined densities or unmatched support. In generative modelling, _denoising score matching_(Vincent, 2011) and _Noise Conditional Score Networks_(Song and Ermon, 2019) combine Gaussian convolution with score matching to improve computational efficiency or estimation quality. Notably, convolution is _not_ invariant to the target distribution, thus rendering the score function intractable. This is why we chose a MH-type kernel instead.
Perturbation with convex combinationIn score matching, Zhang et al. (2022) addresses the blindness of Fisher Divergence by mapping the target and candidate distributions to a convex combination with a Gaussian distribution, thereby "connecting" the well-separated modes. A similar idea cannot be applied to improve the KSD test, as, similarly to convolution, the resulting target distribution no longer has a tractable score function.
Perturbation with annealingAnother choice of perturbation is to anneal both distributions by raising the densities to some power, which is studied in Wenliang and Kanagawa (2020). Although the score function remains tractable under this perturbation, annealing alone cannot solve the blindness of score-based discrepancies, as noted by the authors and in Zhang et al. (2020). Moreover, sampling from the annealed candidate distribution is also non-trivial.
## 7 Experiments
We use \(51\) jump scales \(\theta\) equally spaced in \([0.5,1.5]\), a heuristic that we find works well in practice. All samples have size \(n=1000\). We compare the ospKSD and spKSD tests against benchmarks including the KSD test and two variants (KSDAgg and FSSD). All experiments are run with level \(\alpha=0.05\) using the IMQ kernel \(k(x,y)=(1+\|x-y\|_{2}^{2}/\lambda)^{-1/2}\), where \(\lambda\) is chosen to be \(\texttt{median}_{i<j}\{\|x_{i}-x_{j}\|_{2}^{2}\}\). KSDAgg follows the setup in Schrab et al. (2022), and FSSD follows Jitkrittum et al. (2017). The probability of rejecting the null hypothesis is estimated by averaging the test output over 100 repetitions, except in the sensor network localisation example, which is repeated 10 times. Translucent shades represent \(95\%\)-CIs. The number of transition steps \(T\) is selected to be \(10\) for the Gaussian mixture example, \(100\) for the mixture of \(t\) and banana distributions example, and \(1000\) for the sensor network localisation.

Figure 3: One-dimensional Gaussian mixture example. Samples are drawn with a different mixing weight \(\pi\).
Discussions on how to choose \(T\) in practice, as well as supplementary plots and experiments, are provided in Appendix H. In particular, in Appendix H.4, we include an additional experiment concerning a 50-dimensional Gaussian-Bernoulli Restricted Boltzmann Machine (RBM) (Cho et al., 2013), a latent variable model that can be viewed as a mixture of Gaussian distributions. Code for reproducing all experiments can be found at github.com/XingLLiu/pksd.
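For concreteness, the kernel setup used in the experiments can be sketched as below; the vectorised median heuristic is our own illustrative implementation of the bandwidth rule just stated.

```python
# IMQ kernel k(x, y) = (1 + ||x - y||^2 / lambda)^(-1/2) with lambda set to
# the median of the pairwise squared distances within the sample X.
import numpy as np

def imq_gram(X):
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)  # ||x_i - x_j||^2
    iu = np.triu_indices(len(X), k=1)
    lam = np.median(sq[iu])                                      # median heuristic
    return (1.0 + sq / lam) ** (-0.5)

X = np.random.default_rng(0).normal(size=(1000, 2))
print(imq_gram(X).shape)                                         # (1000, 1000)
```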
Gaussian mixtureThe target has density \(p(x)\propto\exp\left(-\frac{1}{2}x^{2}\right)+0.5\exp\left(-\frac{1}{2}(x-6)^ {2}\right)\). Samples are drawn with a different mixing weight \(\pi\in[0,1]\) of the first component. The results are presented in Fig. 3. KSD, KSDAgg and FSSD all have a power close to the level \(0.05\) regardless of the value of \(\pi\), which is not surprising due to the blindness of KSD. In comparison, ospKSD and spKSD achieve almost perfect power when \(\pi\) deviates from the true value 0.5. Fig. 6 in the Appendix verifies numerically that ospKSD and spKSD achieve the desired level under \(H_{0}\).
Sensor network localisationFig. 5 shows the proportion of rejections over 10 repetitions. All tests reject most runs for \(\sigma=0.1\) and almost no run for \(\sigma=1.08\) (the scale used in Tak et al. (2018)), which is consistent with the posterior plots. However, for \(\sigma=0.5\), no method except ospKSD rejected the null hypothesis, demonstrating again the ability of ospKSD to detect missing modes. Fig. 5 also shows the particles _after_ perturbation by the (non-identity) transition kernel used by ospKSD, from which some particles seem to have moved to the missing high-density regions. spKSD in this case also performed poorly, with no sample for \(\sigma=0.5\) being rejected, potentially because the benefit of using a tuned transition kernel outweighs that of not requiring a held-out set in this example.
## 8 Discussion and Conclusion
We show with a bimodal Gaussian example that GOF tests based on KSD can fail when the target has well-separated modes. To increase its power, we propose to perturb both the candidate and the target distributions using a Markov process before applying the KSD test. Empirical results suggest that our methods (ospKSD and spKSD) are more sensitive to discrepancies in the mixing weights of multimodal distributions, and can achieve remarkably high power particularly when the mixing components are elliptic distributions.
### Limitations and Future Work
The ospKSD and spKSD rely heavily on accurate estimation of the mode locations and Hessians, which can be extremely challenging and computationally costly for high-dimensional problems. Additionally, the jump proposal of the transition kernel used in the proposed methods is constructed specifically for targets that are mixtures of elliptic distributions, which may be inappropriate for targets with more complicated geometrical structure. Further investigations could aim to find perturbation operators that scale better with dimensionality or that suit a wider family of target distributions.
Moreover, both spKSD and ospKSD require careful hyperparameter setting. The spKSD, as a sum-like statistic, requires a trade-off between the test power and the number of elements in the grid \(\mathcal{S}\). Although the heuristic described in Section 7 is found to perform decently in our experiments, it is of practical interest to analyse the sensitivity of the test performance to the grid size both empirically and theoretically. The ospKSD, on the other hand, requires a held-out dataset to tune \(\theta\), potentially reducing test power due to data-splitting. One possible approach to mitigate this problem is to combine ospKSD with the aggregated testing framework described in Schrab et al. (2022) to avoid splitting the data.
## Acknowledgements
XL is supported by the President's PhD Scholarships of Imperial College London and the EPSRC StatML CDT programme EP/S023151/1. ABD is supported by Wave 1 of The UKRI Strategic Priorities Fund under the EPSRC Grant EP/T001569/1 and EPSRC Grant EP/W006022/1, particularly the "Ecosystems of Digital Twins" theme within those grants & The Alan Turing Institute. We thank the anonymous reviewers for their valuable comments.
|
2305.15755 | Deterministic policy gradient based optimal control with probabilistic
constraints | This paper studies a deep deterministic policy gradient (DDPG) based actor
critic (AC) reinforcement learning (RL) technique to control a linear
discrete-time system with a quadratic control cost while ensuring a constraint
on the probability of potentially risky or undesirable events. The proposed
methodology can be applied to both known and unknown system models with minor
adjustments to the reward structure (negative cost). The problem is formulated
by considering the average expected quadratic cost of the states and inputs
over an infinite time horizon. Risky or undesirable events are represented as
functions of the states at the next time step exceeding a user-defined limit.
Two strategies are employed to manage the probabilistic constraint in scenarios
of known and unknown system models. In the case of a known system model, the
probabilistic constraint is replaced with an upper bound, such as the Chernoff
bound. For unknown system models, the expected value of the indicator function
of the occurrence of the risky or undesirable event is used. We have adopted a
deterministic policy gradient (DPG) based AC method to derive a parameterised
optimal policy. Extensive numerical simulations are performed using a second-
and a fourth-order system, and the proposed method is compared with the
standard risk-neutral linear quadratic regulator (LQR) and a chance-constrained
model predictive control (MPC) method. The results demonstrate the
effectiveness of the proposed approach in both known and unknown system model
scenarios. | Arunava Naha, Subhrakanti Dey | 2023-05-25T06:10:52Z | http://arxiv.org/abs/2305.15755v2 | # Reinforcement Learning based optimal control with a probabilistic risk constraint
###### Abstract
This paper proposes a reinforcement learning (RL) technique to control a linear discrete-time system with a quadratic control cost while ensuring a constraint on the probability of potentially risky events. The proposed methodology can be applied to known and unknown system models with slight modifications to the reward (negative cost) structure. The problem formulation considers the average expected quadratic cost of the states and inputs over the infinite time horizon, and the risky events are modelled as a quadratic cost of the states at the next time step crossing a user-defined limit. Two approaches are taken to handle the probabilistic constraint under the assumption of known and unknown system model scenarios. For the known system model case, the probabilistic constraint is replaced with the Chernoff bound, while for the unknown system model case, the expected value of the indicator function of the occurrence of the risky event in the next time step is used. The optimal policy is derived from the observed data using an on-policy RL method based on an actor-critic (AC) architecture. Extensive numerical simulations are performed using a 2nd-order and a 4th-order system, and the proposed method is compared with the standard risk-neutral linear quadratic regulator (LQR).
## I Introduction
The problem of finding an optimal controller that minimizes the expected quadratic cost of states and inputs has been well-studied in the literature for linear time-invariant (LTI) systems. The optimal control input becomes a linear function of the states provided the noise has zero mean and bounded second moment [1]. Such a formulation of the cost is risk-neutral since it only minimizes the average value and does not consider the less frequent but risky events. Such events may arise due to the presence of a long tail or skewed distribution of the noise or uncertainty in the system. The less frequent but risky events may have catastrophic effects on the systems, such as an unmanned aerial vehicle (UAV) deviating from the designated path and entering the range of vision of the adversary [2]. A similar problem has also been studied in the stochastic model predictive control (SMPC) literature as chance-constrained optimal control. Here, the objective is to design an optimal controller with constraints on the chances of undesirable events occurring under the model uncertainty [3], for instance, the temperature in a climate-controlled building crossing the specified limit.
In general, the cost function for the risk constraint or chance constraint problems is formulated as the average expected quadratic cost of states and inputs over the finite or infinite time horizon with an additional constraint on the probability of the undesired event. Such an optimization problem is intractable in general, and, therefore, the probabilistic constraint is generally transformed into more tractable formulations such as an approximate algebraic constraint or a constraint on the expected value. However, even for the transformed cases, a closed-form solution can be found for only a few specific cases [2]. Therefore, most of the literature has adopted data-driven methods, _i.e._, finding an optimal controller using the observed data and sometimes using the knowledge of the system model.
Different algorithms from the reinforcement learning literature have been applied to find an optimal control policy with or without the additional risk/chance constraint [4, 5]. For instance, a persistently exciting input is applied to the system, and the off-policy Q-learning method is used to design an optimal linear controller for the linear quadratic regulator (LQR) problem in [6]. RL is the class of algorithms where an agent learns to take optimal decisions based on its own or others' experiences without complete knowledge of the environment or system [7]. RL is now becoming popular for controller design, where the underlying control system is too complex to model, or the model has uncertainties [8, 9].
### _Related work_
A popular approach followed in the MPC literature is scenario-based sampling, where samples are drawn from the known distribution of the disturbance, and the probabilistic constraint is converted into a finite number of algebraic conditions [10, 3]. In a different MPC approach, as studied in [11], the probabilistic chance constraint is approximated by an estimated expectation using the Hamiltonian Monte Carlo (HMC) method. The system considered in that paper is non-linear. In [12], an iterative method is studied, which first excites the system with the designed controller to reduce the model uncertainty and then uses the updated model to find an improved controller, and the process repeats. In several other works, the probabilistic chance constraint or risk constraint is replaced by an algebraic constraint using Chebyshev's inequality [13, 14].
Besides the constraint on the probability of risky events, risk has also been modelled differently in the literature. In [15, 16], the risk is modelled as the conditional variance of the quadratic cost associated with the states. The authors showed that the optimal controller becomes an affine function of the states for the LQR cost with such a risk constraint [17, 2]. The optimal controller is derived by the policy
gradient primal-dual optimization method for model-based and model-free scenarios. The risk is also modelled as the conditional value at risk (CVaR). Both CVaR and chance constraints are studied for a Markov decision process (MDP) in [18], where the policy gradient-based actor-critic (AC) method is used to find the locally optimal policy. Here the chance constraint is replaced by the expected value of the occurrence of the risky event.
### _Our contributions_
Our study considers the average expected quadratic cost of the states and inputs over an infinite time horizon, under a constraint on the probability of the occurrence of risky events. The risky events are modelled as the events when the quadratic cost of the states crosses a user-defined limit. Such a constraint on the probability is a more direct and intuitive approach compared to an existing risk model, which bounds the conditional variance of the quadratic cost of the states [15, 16]. Unlike some other works, we do not use Chebyshev's inequality to replace the probabilistic constraint [13, 14] or approximate it by drawing samples from the known distribution of the disturbances [10, 3]. In contrast, we have taken two different approaches to handle the probabilistic constraint under the assumption of known and unknown system model scenarios. For the known system model case, we have replaced the probabilistic constraint with the Chernoff bound, which is known to be tighter than the Chebyshev bound. On the other hand, for the unknown system model case, we have used the expected value of the indicator function of the occurrence of the risky event in the next time step. Finally, we apply a policy gradient-based actor-critic (AC) method to derive the optimal policy from the observed data [19]. The choice of the AC method is influenced by the fact that we have considered real-valued states and inputs for the study. In other words, we have used an on-policy RL method, which consists of two separate neural networks, one for the policy function approximation (actor network) and the other for the state-action value function approximation (critic network). We have performed extensive numerical simulations using a 2nd-order and a 4th-order system, and compared the performance of the proposed policy with the risk-neutral LQR. As expected, the occurrence of risky events reduces under the proposed policy with a small increase in the quadratic cost when compared with the standard LQR. Note that, for the proposed approach, the noise does not always need to be Gaussian distributed.
### _Organization_
The rest of the paper is organized as follows. Section II formulates the optimization problem mathematically. Section V discusses a solution approach to the optimization problem using the AC method under the known and unknown model scenarios. The reward structure used with the AC method is discussed in Section III. The analytical expressions of the Chernoff bound for two special cases are derived in Section IV. The numerical simulation results are provided and discussed in Section VI. Finally, Section VII concludes the paper.
## II Problem Formulation
We consider the following linear time-invariant (LTI) system for the study.
\[\mathbf{x}_{k+1}=\mathbf{A}\mathbf{x}_{k}+\mathbf{B}\mathbf{u}_{k}+\mathbf{w} _{k}. \tag{1}\]
Here \(\mathbf{x}_{k}\in\mathrm{I\!R}^{n}\) and \(\mathbf{u}_{k}\in\mathrm{I\!R}^{p}\) are the state and input vectors at the \(k\)-th time instant respectively, whereas \(\mathbf{w}_{k}\in\mathrm{I\!R}^{n}\) is an independent and identically distributed (iid) process noise with distribution \(f_{w}(\mathbf{w})\). \(\mathbf{A}\in\mathrm{I\!R}^{n\times n}\), \(\mathbf{B}\in\mathrm{I\!R}^{n\times p}\).
We assume that all the states are measured and the system \((\mathbf{A},\mathbf{B})\) is stabilizable. In the standard LQR problem, the following cost function is minimized.
\[J_{c}=\lim_{T\to\infty}\frac{1}{T}\sum_{k=1}^{T}E\left[\mathbf{x}_{k}^{T}\mathbf{W}\mathbf{x}_{k}+\mathbf{u}_{k}^{T}\mathbf{U}\mathbf{u}_{k}\right], \tag{2}\]
where \(\mathbf{W}\in\mathrm{I\!R}^{n\times n}\) and \(\mathbf{U}\in\mathrm{I\!R}^{p\times p}\) are positive definite weight matrices. We also assume that \((\mathbf{A},\mathbf{W}^{1/2})\) is detectable. If we assume the noise is zero mean and the second moment of the noise is bounded, then the optimal input is a fixed-gain linear control signal [1]; see (3). If the noise is not zero mean, we can always subtract the mean to obtain a zero-mean noise.
\[\mathbf{u}_{k}^{*} =\mathbf{K}\mathbf{x}_{k}, \tag{3}\] \[\mathbf{K} =-\left(\mathbf{B}^{T}\mathbf{S}\mathbf{B}+\mathbf{U}\right)^{-1} \mathbf{B}^{T}\mathbf{S}\mathbf{A}, \tag{4}\]
where \(\mathbf{S}\) is the solution to the following algebraic Riccati equation,
\[\mathbf{S}=\mathbf{A}^{T}\mathbf{S}\mathbf{A}+\mathbf{W}-\mathbf{A}^{T}\mathbf{S}\mathbf{B}\left(\mathbf{B}^{T}\mathbf{S}\mathbf{B}+\mathbf{U}\right)^{-1}\mathbf{B}^{T}\mathbf{S}\mathbf{A}. \tag{5}\]
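For reference, the risk-neutral baseline (3)-(5) takes only a few lines to compute; the sketch below assumes SciPy's discrete-time Riccati solver and plugs in the 2nd-order model parameters of Appendix A.

```python
# Risk-neutral LQR: solve the Riccati equation (5) and form the gain (4).
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.3], [0.3, 1.1]])   # 2nd-order model (Appendix A)
B = np.array([[0.9, 0.5], [0.1, 1.2]])
W = np.array([[1.5, 0.25], [0.25, 2.5]])
U = np.diag([40.0, 70.0])

S = solve_discrete_are(A, B, W, U)                      # solution of (5)
K = -np.linalg.solve(B.T @ S @ B + U, B.T @ S @ A)      # gain matrix (4)
print(np.abs(np.linalg.eigvals(A + B @ K)))             # all < 1: stable loop
```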
However, as discussed before, the cost function \(J_{c}\) does not take into account the less frequent but risky events. Therefore, in our study, we use an additional constraint on the probability of risky events, and the optimization problem takes the following form.
\[\begin{split}&\min_{u}J_{c}=\lim_{T\to\infty}\frac{1}{T}\sum_{k=1}^{ T}E\left[\mathbf{x}_{k}^{T}\mathbf{W}\mathbf{x}_{k}+\mathbf{u}_{k}^{T} \mathbf{U}\mathbf{u}_{k}\right]\\ & s.t.\lim_{T\to\infty}\frac{1}{T}\sum_{k=1}^{T}E\left[P\left\{ \mathbf{x}_{k+1}^{T}\mathbf{Q}\mathbf{x}_{k+1}\geq\epsilon\mid\Psi_{k}\right\} \right]\leq\delta.\end{split} \tag{6}\]
Here \(\mathbf{Q}>0\), \(\epsilon>0\) and \(\delta>0\) are user selected parameters. \(\Psi_{k}\) denotes the set of all information till the \(k\)-th time instant.
**Remark 1**: _The risky event is modelled as the quadratic function of the state crossing a threshold \(\epsilon\), i.e., \(\mathbf{x}_{k+1}^{T}\mathbf{Q}\mathbf{x}_{k+1}\geq\epsilon\), and we are interested in limiting the probability of such risky events. Since such probability itself is a function of the random information set \(\Psi_{k}\), we have taken the expectation with respect to this set in the above formulation. Furthermore, we are interested in keeping the long term average probability bounded over the infinite time horizon._
The constrained optimization of (6) is converted into the unconstrained stochastic control problem using the Lagrangian multiplier \(C_{l}\) as follows,
\[\min_{u}J_{L}=\lim_{T\rightarrow\infty}\frac{1}{T}\sum_{k=1}^{T}E\left[g\left( \mathbf{x}_{k},\mathbf{u}_{k}\right)\right], \tag{7}\]
where the per-stage cost \(g(\cdot)\) takes the following form,
\[g\left(\mathbf{x}_{k},\mathbf{u}_{k}\right)=f\left(\mathbf{x}_{k },\mathbf{u}_{k}\right)+C_{l}h_{p}\left(\mathbf{x}_{k},\mathbf{u}_{k}\right),\] (8) where \[f\left(\mathbf{x}_{k},\mathbf{u}_{k}\right)=\mathbf{x}_{k}^{T} \mathbf{W}\mathbf{x}_{k}+\mathbf{u}_{k}^{T}\mathbf{U}\mathbf{u}_{k}, \tag{9}\] \[h_{p}\left(\mathbf{x}_{k},\mathbf{u}_{k}\right)=P\left\{\mathbf{ x}_{k+1}^{T}\mathbf{Q}\mathbf{x}_{k+1}\geq\epsilon\mid\Psi_{k}\right\}. \tag{10}\]
Note that the per-stage cost function contains an intractable probability constraint in general. The following section discusses how the probabilistic constraint is converted into a more tractable constraint function.
**Remark 2**: _A formal proof to show that the optimal solution to the cost function with a Lagrangian multiplier, as given in (7), will also be an optimal solution of the original constrained optimization problem as in (6) is difficult as such results for unbounded cost functions with possibly non-convex constraints are few. Also, in general, the optimal solution obtained may only be a local optimum. Further investigation into this is currently underway._
## III Reformulation of the Constraint Function
As discussed below, the intractable probability constraint is handled differently for known and unknown model cases.
### _Known System Model_
For the discussion in this section, we assume that the system model, _i.e._, \(\mathbf{A}\) and \(\mathbf{B}\) matrices, is known. We replace the probability value in \(g(\cdot)\), _i.e._, \(h_{p}(\cdot)\) (10), by the Chernoff bound \(h_{c}(\cdot)\) as provided in the following Lemma 1.
**Lemma 1**: _For the LTI system given in (1), the probability value \(h_{p}(\cdot)\) (10) will be upper bounded by \(h_{c}(\cdot)\) as given in (11)._
\[h_{c}\left(\mathbf{x}_{k},\mathbf{u}_{k}\right)=\inf_{s\geq 0}\left[e^{-( \epsilon-d_{k})s}M\left(y_{k},s\right)\right], \tag{11}\]
_where \(M(\cdot)\) denotes the moment generating function of \(y_{k}\), and \(y_{k}\) and \(d_{k}\) are given as follows:_
\[y_{k} =\mathbf{w}_{k}^{T}\mathbf{Q}\mathbf{w}_{k}+\mathbf{a}_{k}^{T} \mathbf{w}_{k}\text{, and} \tag{12}\] \[d_{k} =\mathbf{\hat{x}}_{k}^{T}\mathbf{Q}\mathbf{\hat{x}}_{k}, \tag{13}\]
_where \(\mathbf{a}_{k}=2\mathbf{Q}\mathbf{\hat{x}}_{k}\) and \(\mathbf{\hat{x}}_{k}\) is an estimate of \(\mathbf{x}_{k+1}\) given \(\Psi_{k}\) as defined below:_
\[\mathbf{\hat{x}}_{k}=\text{E}\left[\mathbf{x}_{k+1}\mid\Psi_{k}\right]= \mathbf{A}\mathbf{x}_{k}+\mathbf{B}\mathbf{u}_{k}. \tag{14}\]
Using (1) and (14), we can write,
\[\mathbf{x}_{k+1}=\mathbf{\hat{x}}_{k}+\mathbf{w}_{k}. \tag{15}\]
Then using (15), the inequality in (10) can be written in the following form,
\[\mathbf{x}_{k+1}^{T}\mathbf{Q}\mathbf{x}_{k+1}\geq\epsilon \tag{16}\] \[\Rightarrow \mathbf{w}_{k}^{T}\mathbf{Q}\mathbf{w}_{k}+\mathbf{a}_{k}^{T} \mathbf{w}_{k}\geq\epsilon-d_{k}\]
Finally, applying Chernoff bound [20] on the conditional probability of the inequality (16), we get the bound \(h_{c}(\cdot)\) in (11).
In conclusion, when the system model parameters \(\mathbf{A}\) and \(\mathbf{B}\) are known, we have used the following per-stage cost function,
\[g\left(\mathbf{x}_{k},\mathbf{u}_{k}\right)=f\left(\mathbf{x}_{k},\mathbf{u} _{k}\right)+C_{l}h_{c}\left(\mathbf{x}_{k},\mathbf{u}_{k}\right). \tag{17}\]
**Remark 3**: _The Chernoff bound provides an upper limit of the probability value in the constraint. Therefore, an optimal solution obtained for the optimization problem using the per-stage cost with the Chernoff bound as given in (17) will also be a feasible solution to the original constrained problem with per-stage cost as given in (8)._
**Remark 4**: _The reason for using the Chernoff bound is that it is tighter than the Chebyshev bound [20]._
### _Unknown System Model_
For the discussion in this section, we assume that the system model, _i.e._, the \(\mathbf{A}\) and \(\mathbf{B}\) matrices, is unknown. Under that assumption, we cannot evaluate \(\mathbf{\hat{x}}_{k}\) (14). However, we can represent the probability value as an expectation as follows,
\[P\left\{\mathbf{x}_{k+1}^{T}\mathbf{Q}\mathbf{x}_{k+1}\geq\epsilon\mid\Psi_{k }\right\}=\text{E}\left[\mathds{1}_{\left\{\mathbf{x}_{k+1}^{T}\mathbf{Q} \mathbf{x}_{k+1}\geq\epsilon\right\}}\mid\Psi_{k}\right]. \tag{18}\]
Here \(\mathds{1}_{\left\{Z\right\}}\) denotes the indicator function that the event \(Z\) is true. We can always evaluate the indicator function using the recorded data. Therefore, for the unknown system model case, we use a slightly modified structure of the per-stage cost function as follows:
\[g\left(\mathbf{x}_{k},\mathbf{u}_{k},\mathbf{x}_{k+1}\right)=f\left(\mathbf{x }_{k},\mathbf{u}_{k}\right)+C_{l}\mathds{1}_{\left\{\mathbf{x}_{k+1}^{T} \mathbf{Q}\mathbf{x}_{k+1}\geq\epsilon\right\}} \tag{19}\]
Note that the expectation operator is already present for the cost function \(J_{L}\) in (7).
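For illustration, the per-stage cost (19) amounts to the following few lines; the reward passed to the learning agent is simply its negative.

```python
# Per-stage cost (19) for the unknown model case: quadratic cost (9) plus a
# penalty C_l whenever the recorded next state triggers the risky event.
import numpy as np

def stage_cost(x, u, x_next, W, U, Q, eps, C_l):
    f = x @ W @ x + u @ U @ u                      # f(x_k, u_k) in (9)
    risky = float(x_next @ Q @ x_next >= eps)      # indicator in (18)
    return f + C_l * risky
```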
Note that, till this point, we have not assumed any special property for the distribution of the noise \(\mathbf{w}_{k}\). In other words, the noise may not be zero mean or Gaussian. However, the challenge is to derive the moment generating function \(M(\cdot)\) in (11) for the known model case. In the following section, we have studied two special cases; in the first case, the noise is assumed to be iid and non-zero mean Gaussian; in the second case, the noise is iid and generated from a Gaussian mixture model (GMM).
## IV Examples of moment generating functions
In this section, we derive the analytical form of the moment-generating functions \(M(\cdot)\) in (11) for two special cases.
### _Case 1_
Here we assume that noise \(\mathbf{w}_{k}\) is iid and \(\mathbf{w}_{k}\sim\mathcal{N}\left(\mu_{w},\Sigma_{w}\right)\), \(\Sigma_{w}\in\mathrm{I\!R}^{n\times n}>0\). Also, we assume that \(\mathbf{Q}\) is symmetric. Under those conditions, we can derive an analytical expression of \(M(\cdot)\) as stated in Theorem 3.2a.2 in [21] and presented as the following Lemma.
**Lemma 2**: _If \(\mathbf{w}_{k}\in\mathrm{I\!R}^{n}\), iid, and \(\mathbf{w}_{k}\sim\mathcal{N}\left(\mu_{w},\Sigma_{w}\right)\), where \(\Sigma_{w}\in\mathrm{I\!R}^{n\times n}>0\), the moment generating function \(M\left(y_{k},s\right)\) of the random variable \(y_{k}\) in (12) takes the following form,_
\[M\left(y_{k},s\right)=\exp\left(s\left(\mu_{w}^{T}\mathbf{Q}\mu_{w}+\mathbf{a}_{k}^{T}\mu_{w}\right)+0.5\,s^{2}\sum_{j=1}^{n}b_{k,j}^{2}\left(1-2s\lambda_{j}\right)^{-1}\right)\prod_{j=1}^{n}\left(1-2s\lambda_{j}\right)^{-1/2}. \tag{20}\]
_Here \(\lambda_{j}\)s are the eigenvalues of the matrix \(\Sigma_{w}^{\frac{1}{2}}\mathbf{Q}\Sigma_{w}^{\frac{1}{2}}\), and \(\mathbf{P}\) is the corresponding eigenvector matrix. \(\mathbf{b}_{k}=\left[b_{k,1},\cdots,b_{k,n}\right]^{T}=\mathbf{P}^{T}\left(\Sigma_{w}^{\frac{1}{2}}\mathbf{a}_{k}+2\Sigma_{w}^{\frac{1}{2}}\mathbf{Q}\mu_{w}\right)\). \(\mathbf{a}_{k}\) is the same as given in Lemma 1._
See the proof of Theorem 3.2a.2 from [21].
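For a numerical illustration, the bound (11) can be evaluated with the MGF (20) by a one-dimensional search over \(s\); the sketch below restricts \(s\) to \([0,1/(2\lambda_{\max}))\), where the MGF exists, uses the weight matrices of Appendix A, and takes an illustrative value of \(\mathbf{\hat{x}}_{k}\).

```python
# Chernoff bound (11) with the Gaussian MGF of Lemma 2. Any factorisation
# Sig_w = C C^T can replace the symmetric square root, since the law of y_k
# (and hence its MGF) is unchanged.
import numpy as np
from scipy.optimize import minimize_scalar

def chernoff_bound(xhat, Q, mu_w, Sig_w, eps):
    d = xhat @ Q @ xhat                        # d_k in (13)
    a = 2.0 * Q @ xhat                         # a_k in Lemma 1
    C = np.linalg.cholesky(Sig_w)
    lam, P = np.linalg.eigh(C.T @ Q @ C)       # eigen-decomposition as in Lemma 2
    b = P.T @ (C.T @ a + 2.0 * C.T @ Q @ mu_w)
    s_max = 0.999 / (2.0 * lam.max())          # keep 1 - 2 s lam_j > 0

    def log_obj(s):                            # log of e^{-(eps - d)s} M(y_k, s)
        c = 1.0 - 2.0 * s * lam
        return (-(eps - d) * s + s * (mu_w @ Q @ mu_w + a @ mu_w)
                + 0.5 * s**2 * np.sum(b**2 / c) - 0.5 * np.sum(np.log(c)))

    res = minimize_scalar(log_obj, bounds=(0.0, s_max), method="bounded")
    return min(1.0, float(np.exp(res.fun)))    # a probability bound is at most 1

W = np.array([[1.5, 0.25], [0.25, 2.5]])       # Appendix A parameters
print(chernoff_bound(xhat=np.array([1.0, 0.5]), Q=3 * W,
                     mu_w=np.zeros(2), Sig_w=2 * np.eye(2), eps=95.0))
```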
### _Case 2_
We assume the following LTI system model and the noise structure for this case.
\[\mathbf{x}_{k+1}=\mathbf{A}\mathbf{x}_{k}+\mathbf{B}\mathbf{u}_{k}+\mathbf{B} \mathbf{w}_{k}. \tag{21}\]
Here \(\mathbf{w}_{k}\in\mathrm{I\!R}^{p}\), iid, and \(\mathbf{w}_{k}\sim f_{w}(\mathbf{w})\). Also, \(f_{w}(\mathbf{w})\) is a Gaussian mixture model (GMM) as follows,
\[f_{w}(\mathbf{w})=\sum_{j=1}^{p}\pi_{j}\mathcal{N}\left(\mathbf{w};\mu_{j}, \Sigma_{j}\right). \tag{22}\]
Here \(0<\pi_{j}<1\) and \(\sum_{j=1}^{p}\pi_{j}=1\). Now the Chernoff bound is provided in the following lemma.
**Lemma 3**: _For the LTI system given in (21), the probability value \(h_{p}(\cdot)\) (10) will be upper bounded by \(h_{g}(\cdot)\) as given in (23)._
\[h_{g}\left(\mathbf{x}_{k},\mathbf{u}_{k}\right)=\inf_{s\geq 0}\left[e^{-(\epsilon-d_{k})s}M_{g}\left(y_{gk},s\right)\right], \tag{23}\]
_where \(M_{g}(\cdot)\) denotes the moment generating function of \(y_{gk}\). \(d_{k}\) is same as (13), and \(y_{gk}\) is given as follows_
\[y_{gk} =\mathbf{w}_{k}^{T}\mathbf{Q}_{g}\mathbf{w}_{k}+\mathbf{a}_{gk}^ {T}\mathbf{w}_{k}\text{, and} \tag{24}\] \[\mathbf{Q}_{g} =\mathbf{B}^{T}\mathbf{Q}\mathbf{B}, \tag{25}\]
_where \(\mathbf{a}_{gk}=2\mathbf{B}^{T}\mathbf{Q}\mathbf{\hat{x}}_{k}\), and \(\mathbf{\hat{x}}_{k}\) is an estimate of \(\mathbf{x}_{k+1}\) given \(\Psi_{k}\) as given in Lemma 1. Finally, \(M_{g}(\cdot)\) will take the following analytical form for the noise distribution given in (22)._
\[M_{g}\left(y_{gk},s\right)=\sum_{j=1}^{p}\pi_{j}M\left(y_{j,k},s\right). \tag{26}\]
_Here the function \(M(\cdot)\) is same as given in Lemma 2. \(y_{j,k}\) is given as follows,_
\[y_{j,k}=\mathbf{w}_{j,k}^{T}\mathbf{Q}_{g}\mathbf{w}_{j,k}+\mathbf{a}_{g,k}^ {T}\mathbf{w}_{j,k}\text{, and }\mathbf{w}_{j,k}\sim\mathcal{N}\left(\mu_{j},\Sigma_{j}\right) \tag{27}\]
The proof of Lemma 3 is straightforward. (24) and (25) can be derived from (12) by replacing \(\mathbf{w}_{k}\) by \(\mathbf{B}\mathbf{w}_{k}\). (26) is derived as follows [22],
\[M_{g}\left(y_{gk},s\right)=\int\exp\left(sy_{gk}\right)f_{w}\left(\mathbf{w}\right)d\mathbf{w}=\sum_{j=1}^{p}\pi_{j}\int\exp\left(sy_{gk}\right)\mathcal{N}\left(\mathbf{w};\mu_{j},\Sigma_{j}\right)d\mathbf{w}=\sum_{j=1}^{p}\pi_{j}M\left(y_{j,k},s\right),\]
using (22) for the first equality and (27) for the last.
Here the superscript \(t\) denotes the corresponding target network. On the other hand, the actor network parameters, _i.e._, \(\theta^{\mu}\), are updated using the following policy gradient (PG),
\[\begin{split} PG=\triangledown_{u}Q^{\mu}\left(\mathbf{x}_{k}, \mathbf{u}_{k}\mid\theta^{Q}\right)\mid_{\mathbf{x}=\mathbf{x}_{k},\mathbf{u}= \mu(\mathbf{x}_{k})}\times\\ \triangledown_{\theta\mu}\mu\left(\mathbf{x}\mid\theta^{\mu} \right)\mid_{\mathbf{x}=\mathbf{x}_{k}}.\end{split} \tag{31}\]
Here \(\triangledown\) denotes the derivative operator. Both networks are trained by randomly drawing mini-batches of size \(N\) from the replay memory, where all the past experiences of the agent are stored as the quadruple \((\mathbf{x}_{k},\mathbf{u}_{k},r(\mathbf{x}_{k},\mathbf{u}_{k}),\mathbf{x}_{k+1})\). The AC algorithm followed for our work is the same as Algorithm 1 given in [19]; therefore, further details are not repeated in this paper.
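A condensed sketch of one such update is given below; it assumes PyTorch modules `actor` and `critic` (the critic taking the state-input pair) together with their target copies, and uses a discounted surrogate with factor `gamma` and soft target updates with rate `tau`, both of which are illustrative choices rather than values prescribed here.

```python
# One AC update on a mini-batch (x, u, r, x_next) from the replay memory:
# a TD critic step using the target networks, an actor step following the
# deterministic policy gradient (31), and a soft update of the targets.
import torch

def ac_update(actor, critic, actor_t, critic_t, opt_a, opt_c, batch,
              gamma=0.99, tau=0.005):
    x, u, r, x_next = batch
    with torch.no_grad():
        y = r + gamma * critic_t(x_next, actor_t(x_next))   # TD target
    critic_loss = ((critic(x, u) - y) ** 2).mean()
    opt_c.zero_grad(); critic_loss.backward(); opt_c.step()

    actor_loss = -critic(x, actor(x)).mean()                # ascend Q along mu
    opt_a.zero_grad(); actor_loss.backward(); opt_a.step()

    for tgt, src in [(actor_t, actor), (critic_t, critic)]: # soft target update
        for pt, ps in zip(tgt.parameters(), src.parameters()):
            pt.data.mul_(1.0 - tau).add_(tau * ps.data)
```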
**Remark 5**: _The past experiences of the AC agent, stored as the quadruple \((\mathbf{x}_{k},\mathbf{u}_{k},r(\mathbf{x}_{k},\mathbf{u}_{k}),\mathbf{x}_{k+1})\) in the replay memory, are used for training the networks. Furthermore, the reward function is only used for training. Once the actor network is trained, it only takes \(\mathbf{x}_{k}\) as input and provides \(\mathbf{u}_{k}\) as output. In other words, we do not need \(\mathbf{x}_{k+1}\) to evaluate the control input at the \(k\)-th instant, _i.e_., \(\mathbf{u}_{k}\); it is only the reward at the \(k\)-th instant that uses \(\mathbf{x}_{k+1}\) in the unknown model case. Therefore, the control policy function, or the actor, remains causal for the unknown model case._
**Remark 6**: _The convergence of the AC algorithm has been studied in various contexts. In [23], the authors proved that the AC algorithm with a linear function approximator converges to the optimal policy in the tabular case, _i.e_., with a finite number of states and actions, under certain conditions. Later works extended this result to more general function approximators, such as neural networks [24, 25]. However, it is important to note that convergence guarantees of the AC method depend on several factors, such as the choice of function approximator, step size parameters, and exploration strategies. In practice, the convergence of the algorithm may be affected by these factors, and it is recommended to use appropriate techniques such as regularization and early stopping to ensure convergence [26, 27]. In summary, the convergence of the AC algorithm has been theoretically established in certain cases; a formal proof of convergence for our particular problem formulation is under study and will be provided in an extended version of this work._
In the following section, we have performed numerical simulations using two systems, one for each case study, to investigate the performance of the proposed methods.
## VI Numerical Results
For the numerical study, we have used a 2nd-order open-loop unstable LTI system model, which is a special case of Case 1, and a UAV model from [16], which is a special case of Case 2. The model parameters of the 2nd-order system and the UAV are provided in Appendix A and Appendix B, respectively. For both models, we assume that full state information is available. In Fig. 1, we compare the average expected quadratic cost (2) and the number of times the constraint got violated (in %) for the proposed method and the standard LQR without the added constraints when the system model is completely known. The optimal controller for the LQR problem becomes an affine function of the states for the UAV system, _i.e_., \(\mathbf{u}_{k}=\mathbf{K}\mathbf{x}_{k}+\mathbf{l}\). \(\mathbf{K}\) is the same as given in (4), and \(\mathbf{l}\) is evaluated as \(\mathbf{l}=-(\pi_{1}\mu_{1}+\pi_{2}\mu_{2})\).
Similarly, in Fig. 2, we compare the same quantities for the case when \(\mathbf{A}\) and \(\mathbf{B}\) are not known. Furthermore, for the simulation study, we have used the same network structure of two hidden layers of size (10,100) for both the actor and critic networks, with learning rate 0.001 and \(C_{l}=100\). Also, we have used the ReLU activation function for all the layers of both networks. For exploration, we have added a zero-mean Gaussian RV to the control input during training, whose variance is reduced from \(5\mathbf{I}\) to \(0.01\mathbf{I}\) in steps.
As expected, the constraint violation percentage gets reduced for the proposed method at the expense of a small increase in the quadratic cost. For instance, the constraint violation percentage got lowered by 64% whereas \(J_{c}\) increased by 34% with respect to LQR for the UAV system under the known model scenario for the parameters used in the simulation as given in Appendix B. On the other hand, for the unknown model scenario, the constraint violation percentage is lowered by 74% whereas \(J_{c}\) increased by 27% with respect to LQR for the same UAV system. This illustrates that the proposed method's performance under the unknown system model assumption is similar to the known model scenario. However, the proof of convergence of the RL methods becomes more challenging for the unknown model scenario, especially in the case of a finite number of training samples. Nevertheless, such a study is a topic for our future research.
Fig. 1: Performance comparison between the LQR and proposed policy for the known model case. (a), (b) 2nd order system, (c), (d) UAV
Fig. 2: Performance comparison between the LQR and proposed policy for the unknown model case. (a), (b) 2nd order system, (c), (d) UAV
In Fig. 3, we have plotted one member of the state vector, _i.e._, \(\mathbf{x}_{1,k}\), vs one member of the control input vector, _i.e._, \(\mathbf{u}_{1,k}\), for the known and unknown system model cases for the UAV system. To generate the plot, we have only varied \(\mathbf{x}_{1,k}\), kept the rest of the elements of the state vector fixed at zero, and evaluated the control action from the actor network. While this particular plot only shows the behaviour of the control as a function of the state in one realization, we have observed similar behaviour in many other instances. We believe this empirical study of the control law behaviour is of interest towards understanding the linearity or otherwise of an optimal control law.
## VII Conclusion
In conclusion, our study proposes a novel approach for handling probabilistic constraints in infinite-time horizon control problems for discrete-time linear Gauss-Markov systems, where risky events are modelled as quadratic costs of the states crossing a user-defined limit. We have also studied a new reward structure for the case when the system model is unknown. Our probabilistic constraint model is more intuitive and direct compared to existing methods. Furthermore, we have applied a policy gradient-based actor-critic method to derive an optimal policy from observed data, and our extensive numerical simulations demonstrate the effectiveness of our approach in both known and unknown system model scenarios. Our proposed approach has the potential to be applied in various real-world control problems where probabilistic constraints need to be handled effectively. A formal proof of the convergence and stability properties of the proposed algorithm is under study and a topic for our future research.
## Appendix A Double inverted pendulum
The model parameters for the double inverted pendulum used for the simulation study are as follows.
\[\mathbf{A}=\begin{bmatrix}1.0&0.3\\ 0.3&1.1\end{bmatrix},\mathbf{B}=\begin{bmatrix}0.9&0.5\\ 0.1&1.2\end{bmatrix},\mathbf{w}\sim\mathcal{N}\left(0,\Sigma_{w}\right)\] \[\Sigma_{w}=diag\left(2,2\right),\mathbf{W}=\begin{bmatrix}1.5&0.25\\ 0.25&2.5\end{bmatrix},\mathbf{U}=diag\left(40,70\right),\] \[\mathbf{Q}=3\mathbf{W},\epsilon=95\]
## Appendix B UAV
The model parameters for the UAV system used for the simulation study are as follows.
\[\mathbf{A}=\begin{bmatrix}1&0.5&0&0\\ 0&1&0&0\\ 0&0&1&0.5\\ 0&0&0&1\end{bmatrix},\mathbf{B}=\begin{bmatrix}0.125&0\\ 0.5&0\\ 0&0.125\\ 0&0.5\end{bmatrix},\] \[\mathbf{W}=diag\left(1,0.1,2,0.2\right),\mathbf{U}=\mathbf{I}, \mathbf{Q}=2\mathbf{W},\epsilon=80\]
The noise vector \(\bar{\mathbf{w}}_{k}\) consists of two elements, \(\bar{\mathbf{w}}_{k}=[\bar{\mathbf{w}}_{1,k},\bar{\mathbf{w}}_{2,k}]^{T}\), where \(\bar{\mathbf{w}}_{1,k}\) has a Gaussian mixture distribution, \(\sim 0.2\mathcal{N}\left(3,30\right)+0.8\mathcal{N}\left(8,60\right)\), and \(\bar{\mathbf{w}}_{2,k}\sim\mathcal{N}\left(0,0.01\right)\). We can easily convert the noise model for this system into the one studied as Case 2 (22) as follows.
\[\pi_{1}=0.2,\pi_{2}=0.8,\mu_{1}=\begin{bmatrix}3&0\end{bmatrix}^{T },\mu_{2}=\begin{bmatrix}8&0\end{bmatrix}^{T},\] \[\Sigma_{1}=diag(30,0.01),\Sigma_{2}=diag(60,0.01).\]
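For completeness, a short sketch of drawing this mixture noise with the parameters listed above:

```python
# Sampling the GMM process noise (22) for the UAV model.
import numpy as np

def sample_gmm_noise(n, seed=0):
    rng = np.random.default_rng(seed)
    pis = np.array([0.2, 0.8])
    mus = np.array([[3.0, 0.0], [8.0, 0.0]])
    var = np.array([[30.0, 0.01], [60.0, 0.01]])   # diagonal covariances
    comp = rng.choice(2, size=n, p=pis)            # pick a component per sample
    return mus[comp] + rng.normal(size=(n, 2)) * np.sqrt(var[comp])

print(sample_gmm_noise(5))
```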
|
2310.11785 | Complete normal forms for real hypersurfaces in $\mathbb C^3$ at
2-nondegenerate points of Levi non-uniform rank zero | We construct complete normal forms for $5$-dimensional real hypersurfaces in
$\mathbb C^3$ which are $2$-nondegenerate and also of Levi non-uniform rank
zero at the origin point ${\bf p} =0$. The latter condition means that the rank
of the Levi form vanishes at ${\bf p}$ but not identically in a neighborhood of
it. The mentioned hypersurfaces are the only finitely nondegenerate real
hypersurfaces in $\mathbb C^3$ for which their complete normal forms were
absent in the literature. As a byproduct, we also treat the underlying
biholomorphic equivalence problem between the hypersurfaces. Our primary
approach in constructing the desired complete normal forms is to utilize the
techniques derived in the theory of equivariant moving frames. It notably
offers the advantage of systematic and symbolic manipulation of the associated
computations. | Masoud Sabzevari | 2023-10-18T08:24:01Z | http://arxiv.org/abs/2310.11785v1 | # Complete normal forms for real hypersurfaces in \(\mathbb{C}^{3}\)
###### Abstract.
We construct _complete_ normal forms for \(5\)-dimensional real hypersurfaces in \(\mathbb{C}^{3}\) which are \(2\)-nondegenerate and also of Levi non-uniform rank zero at the origin point \(\boldsymbol{p}=0\). The latter condition means that the rank of the Levi form vanishes at \(\boldsymbol{p}\) but not identically in a neighborhood of it. The mentioned hypersurfaces are the only finitely nondegenerate real hypersurfaces in \(\mathbb{C}^{3}\) for which their complete normal forms were absent in the literature. As a byproduct, we also treat the underlying biholomorphic equivalence problem between the hypersurfaces. Our primary approach in constructing the desired complete normal forms is to utilize the techniques derived in the theory of equivariant moving frames. It notably offers the advantage of systematic and symbolic manipulation of the associated computations.
2020 Mathematics Subject Classification: 32V40, 58K50, 22F50, 53A55
## 1. Introduction
Since 1907, when Poincare originated the theory of Cauchy-Riemann (CR for short) manifolds in [25], two seminal works [3] and [5] have played a profound role in the field. In the former article, Cartan laid the foundations of the systematic study of _equivalence problems_ in CR geometry by solving it for nondegenerate hypersurfaces in \(\mathbb{C}^{2}\). In the latter work [5], Chern and Moser introduced _normal forms_ in CR geometry (see [15] for a survey) by constructing them for nondegenerate real hypersurfaces in arbitrary complex spaces. Although the Chern-Moser normal form approach mostly results in solving the underlying equivalence problems as well, it proceeds, in comparison to Cartan's approach, along a quite distinct path.
For a given real manifold \(M\), let \(T^{c}M\) be an even dimensional sub-distribution of its tangent distribution \(TM\), equipped with a fiber preserving complex structure \(J:T^{c}M\to T^{c}M\) with \(J\circ J=-\mathrm{id}\). By definition [2], \(M\) is called an (abstract) _CR manifold_ with the _CR structure_ \(T^{c}M\) if: a) the intersection \(T^{1,0}M\cap T^{0,1}M\) is trivial where
\[T^{1,0}M:=\left\{X-\mathrm{i}\,J(X):\ \ X\in T^{c}M\right\}\qquad\mathrm{and} \qquad T^{0,1}M:=\overline{T^{1,0}M}\]
are the _holomorphic_ and _anti-holomorphic_ subbundles of the complexified bundle \(\mathbb{C}\otimes TM\); b) the holomorphic distribution \(T^{1,0}M\) enjoys the _Frobenius condition_ i.e. \([T^{1,0}M,T^{1,0}M]\subset T^{1,0}M\).
One of the most significant tools in the study of CR manifolds is the _Levi form_ \(\mathbf{L}\) which, at each point \(p\in M\), is the Hermitian form \(\mathbf{L}_{p}:T^{1,0}_{p}M\times T^{1,0}_{p}M\to\mathbb{C}\otimes\frac{T_{p}M}{T^{c}_{p}M}\) defined as
\[\mathbf{L}_{p}(L_{1}(p),L_{2}(p)):=\mathrm{i}\,[L_{1},\overline{L}_{2}](p) \qquad\mathrm{mod}\,\mathbb{C}\otimes T^{c}_{p}M,\]
for each two local vector fields \(L_{1}\) and \(L_{2}\) of \(T^{1,0}M\) defined near \(p\). The CR manifold \(M\) is called _Levi nondegenerate_ at \(p\) whenever \(\mathbf{L}_{p}\) is nondegenerate as a Hermitian form.
For the significant class of real hypersurfaces of complex spaces, two subjects of equivalence problems and normal forms are widely studied and almost well-known in the nondegenerate case, specifically in the seminal works of Cartan, Chern-Moser and Tanaka [3, 5, 28]. But, in contrast,
very little is known about them around degenerate points. For brevity and in connection with the main objective of this paper, we restrict our attention to \(5\)-dimensional real-analytic real hypersurfaces \(M^{5}\) in the complex space \(\mathbb{C}^{3}\) which pass through the origin point \(\boldsymbol{p}=0\) -- for the general case, we refer the reader to [2]. In local coordinates \(z_{1},z_{2},w:=u+\mathrm{i}v\) of \(\mathbb{C}^{3}\), we assume that \(M^{5}\) is represented as the graph of the real-analytic _real_ defining function (throughout this paper, we use the notations \(z\) and \(\bar{z}\) for the arrays \(z_{1},z_{2}\) and \(\bar{z}_{1},\bar{z}_{2}\))
\[v=v(z,\bar{z},u). \tag{1.1}\]
By [2, Theorem 4.2.6], we can assume further that (1.1) is expressed in _normal_ coordinates, i.e. \(v(z,0,u)=v(0,\bar{z},u)=0\).
For such a \(5\)-dimensional hypersurface \(M^{5}\) and by performing necessary computations (cf. [18, pp. 76-80]), one finds the Hermitian _Levi matrix_ associated to the Levi form \(\mathbf{L}\) as
\[\mathsf{L}=\left(\begin{array}{cc}\mathrm{i}\big{(}\overline{A_{1}}_{z_{1}}+A_{1}\,\overline{A_{1}}_{u}-A_{1\,\bar{z}_{1}}-\overline{A_{1}}\,A_{1\,u}\big{)}&\mathrm{i}\big{(}\overline{A_{1}}_{z_{2}}+A_{2}\,\overline{A_{1}}_{u}-A_{2\,\bar{z}_{1}}-\overline{A_{1}}\,A_{2\,u}\big{)}\\ \mathrm{i}\big{(}\overline{A_{2}}_{z_{1}}+A_{1}\,\overline{A_{2}}_{u}-A_{1\,\bar{z}_{2}}-\overline{A_{2}}\,A_{1\,u}\big{)}&\mathrm{i}\big{(}\overline{A_{2}}_{z_{2}}+A_{2}\,\overline{A_{2}}_{u}-A_{2\,\bar{z}_{2}}-\overline{A_{2}}\,A_{2\,u}\big{)}\end{array}\right), \tag{1.2}\]
where the functions \(A_{1}\) and \(A_{2}\) are defined as follows in terms of the defining function \(v(z,\bar{z},u)\) of \(M^{5}\)
\[A_{1}:=-\,\frac{v_{z_{1}}}{\mathrm{i}+v_{u}},\qquad A_{2}:=-\,\frac{v_{z_{2}} }{\mathrm{i}+v_{u}}.\]
Notice that at the origin point \(\boldsymbol{p}=0\), this matrix is simply
\[\mathsf{L}(\boldsymbol{p})=\left(\begin{array}{ll}v_{z_{1}\bar{z}_{1}}( \boldsymbol{p})&v_{z_{1}\bar{z}_{2}}(\boldsymbol{p})\\ v_{z_{2}\bar{z}_{1}}(\boldsymbol{p})&v_{z_{2}\bar{z}_{2}}(\boldsymbol{p})\end{array} \right). \tag{1.3}\]
If \(\mathsf{L}\) is of the full rank two at \(\boldsymbol{p}\) (and hence in a neighborhood of it), then \(M^{5}\) is Levi nondegenerate. In the most opposite case, when this rank vanishes identically in a local neighborhood of \(\boldsymbol{p}\), then it is well-known that ([2]) \(M^{5}\) is nothing but the _Levi flat_ hypersurface \(\mathbb{C}^{2}\times\mathbb{R}\) defined simply by \(v=0\). Of particular interest is the intermediate scenario, when the rank of the Levi matrix \(\mathsf{L}\) is not full at \(\boldsymbol{p}\) and also does not vanish identically in any local neighborhood of it.
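This intermediate behaviour is easy to verify symbolically on a model example; the sketch below (our own illustration) computes the matrix of second derivatives \(v_{z_{j}\bar{z}_{k}}\) appearing in (1.3) for the cubic polynomial \(v=z_{1}z_{2}\bar{z}_{1}+z_{1}\bar{z}_{1}\bar{z}_{2}\), which appears later among the partial normal forms (1.5). The matrix vanishes at \(\boldsymbol{p}=0\) but not identically nearby; at nearby points it agrees with the full Levi matrix (1.2) only to lowest order, which suffices for this check.

```python
# Symbolic check: the matrix (v_{z_j zbar_k}) vanishes at the origin but not
# identically in any neighbourhood of it (Levi non-uniform rank zero at p).
import sympy as sp

z1, z2, z1b, z2b = sp.symbols("z1 z2 z1b z2b")   # z1b, z2b stand for conj(z1), conj(z2)
v = z1 * z2 * z1b + z1 * z1b * z2b

zs, zbs = [z1, z2], [z1b, z2b]
L = sp.Matrix([[sp.diff(v, zs[j], zbs[k]) for k in range(2)] for j in range(2)])
print(L)                                         # Matrix([[z2 + z2b, z1b], [z1, 0]])
print(L.subs({z1: 0, z2: 0, z1b: 0, z2b: 0}))    # the zero matrix at p = 0
```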
To identify the right class of degenerate hypersurfaces in \(\mathbb{C}^{3}\) with essentially unique normal form transformations (see [6, p. 316] for more details), we may introduce the extra assumption of \(2\)-nondegeneracy defined as below. Let
\[Q(z,\bar{z},w,\overline{w})=0\]
be the complex defining equation of the hypersurface \(M^{5}\), which is plainly obtained by setting \(u=\frac{w+\overline{w}}{2}\) and \(v=\frac{w-\overline{w}}{2\mathrm{i}}\) in its real counterpart (1.1). Let \(\nabla^{h}Q\) denote the holomorphic vector gradient of \(Q\). Then, by definition [2, 6], the hypersurface \(M^{5}\) is \(2\)_-nondegenerate_ at \(\boldsymbol{p}\) if the set of triple vectors
\[\big{\{}\nabla^{h}Q(\boldsymbol{p},\overline{\boldsymbol{p}}),\qquad\overline{ \mathscr{L}}_{j}\big{(}\nabla^{h}Q(\boldsymbol{p},\overline{\boldsymbol{p}}) \big{)},\qquad\overline{\mathscr{L}}_{j}\overline{\mathscr{L}}_{k}\big{(} \nabla^{h}Q(\boldsymbol{p},\overline{\boldsymbol{p}})\big{)},\qquad j,k=1,2 \big{\}}\]
spans \(\mathbb{C}^{3}\), in contrast to its proper subset \(\{\nabla^{h}Q(\boldsymbol{p},\overline{\boldsymbol{p}}),\overline{\mathscr{L}}_{j}\big{(}\nabla^{h}Q(\boldsymbol{p},\overline{\boldsymbol{p}})\big{)},\ j=1,2\}\) -- if the latter vectors span \(\mathbb{C}^{3}\), then \(M^{5}\) is nondegenerate at \(\boldsymbol{p}\). Here, \(\overline{\mathscr{L}}_{1}\) and \(\overline{\mathscr{L}}_{2}\) are the generators of the anti-holomorphic distribution \(T^{0,1}M^{5}\), i.e.
\[\overline{\mathscr{L}}_{j}=\frac{\partial}{\partial\bar{z}_{j}}+\frac{ \partial\overline{Q}}{\partial\bar{z}_{j}}\,\frac{\partial}{\partial \overline{w}},\qquad j=1,2.\]
Over the past few decades, there has been significant research devoted to the study of \(5\)-dimensional \(2\)-nondegenerate real hypersurfaces in \(\mathbb{C}^{3}\). Notably, the work [6] of Ebenfelt stands out, where he made substantial contributions by constructing normal forms for these hypersurfaces
within two distinct subclasses. The first subclass comprises hypersurfaces whose associated Levi matrix \(\mathsf{L}(\mathbf{p})\) is of rank one -- or equivalently \(\mathsf{L}(\mathbf{p})\) admits one zero and one nonzero eigenvalue. Ebenfelt showed in the first part of [6, Theorem **A**] that after applying suitable normalizations, each hypersurface of this kind can be transformed into one of the following _partial_ normal forms
\[\begin{split}(\mathrm{A.i.}1):& v=z_{1}\bar{z}_{1}+z_{2}^{2} \bar{z}_{2}+z_{2}\bar{z}_{2}^{2}+\gamma\left(z_{1}^{2}\bar{z}_{2}+z_{2}\bar{z}_ {1}^{2}\right)+\mathrm{O}(|z|^{4})+\mathrm{O}(|u||z|^{2}),\qquad\gamma=0,1,\\ (\mathrm{A.i.}2):& v=z_{1}\bar{z}_{1}+z_{1}^{2}\bar{z }_{2}+z_{2}\bar{z}_{1}^{2}+\mathrm{O}(|z|^{4})+\mathrm{O}(|u||z|^{2}),\\ (\mathrm{A.i.}3):& v=z_{1}\bar{z}_{1}+z_{1}z_{2}\bar{z }_{2}+z_{2}\bar{z}_{1}\bar{z}_{2}+\mathrm{O}(|z|^{4})+\mathrm{O}(|u||z|^{2}). \end{split} \tag{1.4}\]
He also showed that ([6, Theorem 4.2.8]) \(M^{5}\) is biholomorphically equivalent to a normal form hypersurface of the type (A.i.2) if and only if it is _Levi uniform of rank \(1\)_, meaning that the rank of the Levi matrix \(\mathsf{L}\) is constantly equal to \(1\) in a neighborhood of \(\mathbf{p}\). The class of such real hypersurfaces, sometimes denoted by \(\mathfrak{C}_{2,1}\), is studied extensively among the literature. For instance, Ebenfelt investigated in [7] their Cartan geometry and Isaev-Zaitsev, Medori-Spiro and Merker-Pocchiola studied in [13, 16, 17] their biholomorphic equivalence problem through Cartan's classical approach. Moreover, Fels and Kaup in [9] classified homogeneous hypersurfaces belonging to this class.
The second subclass of \(2\)-nondegenerate hypersurfaces studied in [6] comprises real hypersurfaces in \(\mathbb{C}^{3}\) whose associated Levi matrix \(\mathsf{L}\) vanishes at \(\mathbf{p}\) but not in a neighborhood thereof. In this scenario, we refer the \(2\)-nondegenerate point \(\mathbf{p}\) as having the _Levi non-uniform_ rank zero (cf. [7]). In the second part of Theorem **A** of [6], Ebenfelt showed that such hypersurfaces can be brought to the following biholomorphically inequivalent types of _partial_ normal forms
\[\begin{split}(\mathrm{A.ii.}1)& v=z_{1}z_{2}\bar{z}_{1}+z_{1} \bar{z}_{1}\bar{z}_{2}+r\left(z_{1}^{2}\bar{z}_{2}+z_{2}\bar{z}_{1}^{2}\right) +\mathrm{O}(|z|^{4})+\mathrm{O}(|u||z|^{2}),\\ (\mathrm{A.ii.}2)& v=z_{1}z_{2}\bar{z}_{1}+z_{1}\bar{z }_{1}\bar{z}_{2}+z_{1}^{2}\bar{z}_{2}+z_{2}\bar{z}_{1}^{2}+\mathrm{i}\left(z_ {1}^{2}\bar{z}_{1}-z_{1}\bar{z}_{1}^{2}\right)+\mathrm{O}(|z|^{4})+\mathrm{O}( |u||z|^{2}),\\ (\mathrm{A.ii.}3)& v=z_{1}z_{2}\bar{z}_{1}+z_{1}\bar{z }_{1}\bar{z}_{2}+z_{2}^{2}\bar{z}_{1}+z_{1}\bar{z}_{2}^{2}+\lambda\,z_{2}^{2} \bar{z}_{2}+\overline{\lambda}\,z_{2}\bar{z}_{2}^{2}+\mathrm{O}(|z|^{4})+ \mathrm{O}(|u||z|^{2}),\\ (\mathrm{A.ii.}4)& v=z_{1}^{2}\bar{z}_{1}+z_{1}\bar{z }_{1}^{2}+z_{2}^{2}\bar{z}_{2}+z_{2}\bar{z}_{2}^{2}+\sigma\,z_{1}^{2}\bar{z}_ {2}+\overline{\sigma}\,z_{2}\bar{z}_{1}^{2}+\nu\,z_{2}^{2}\bar{z}_{1}+\overline {\nu}\,z_{1}\bar{z}_{2}^{2}\\ &\qquad\qquad+\mathrm{O}(|z|^{4})+\mathrm{O}(|u||z|^{2}),\\ (\mathrm{A.ii.}5)& v=\eta\,z_{1}^{2}\bar{z}_{1}+ \overline{\eta}z_{1}\bar{z}_{1}^{2}+z_{1}^{2}\bar{z}_{2}+z_{2}\bar{z}_{1}^{2}+ z_{2}^{2}\bar{z}_{1}+z_{1}\bar{z}_{2}^{2}+\mathrm{O}(|z|^{4})+\mathrm{O}(|u||z|^{2}), \end{split} \tag{1.5}\]
with \(r\in\mathbb{R}\) and \(\lambda,\sigma,\nu,\eta\in\mathbb{C}\) where \(r>0\), \(\lambda\neq 0\) and \(\sigma\nu\neq 1\).
By definition ([6]), a normal form associated with a hypersurface \(M^{5}\) is _complete_ -- in contrast to _partial_ -- if the corresponding transformation of \(M^{5}\) to the normal form is _unique_ modulo a finite dimensional choice of normalizations. Neither of the groups of normal forms (1.4) and (1.5) are complete. However, Ebenfelt in [6, Theorem **B**] developed the former partial forms (1.4) into their complete counterparts by applying infinitely many appropriate normalizations. In addition, recently in [12, 14], two _convergent_ complete normal forms are constructed in the case (A.i.2).
But completing the second group (1.5) of Ebenfelt's partial normal forms has been overlooked in the literature. From a computational perspective, the absence of second order monomials \(z_{j}\bar{z}_{k}\), \(j,k=1,2\) in the expressions (1.5) makes the construction of their corresponding (complete) normal forms more challenging.
The main objective of this paper is to develop the partial normal forms (1.5) to their complete forms. Let us outline the extent to which we aim to apply the requisite normalizations in order to attain the desired _uniqueness_ in the above mentioned definition of a complete normal form. In [8], Ershova computed the sharp upper bounds for dimensions of the isotropy groups associated to the
hypersurfaces of types (A.i.1)-(A.i.3) and (A.ii.1)-(A.ii.5) at \(\mathbf{p}\). She showed that the maximum dimensions are enjoyed by the so-called _model hypersurfaces_ associated to each type. Our strategy in this paper is to apply the requisite normalizations until we succeed in reducing the number of remaining unnormalized group parameters to a finite value which is less than or equal to Ershova's upper bound.
As suggested in [21], our primary approach toward constructing the desired complete normal forms is to utilize the techniques derived in the theory of _equivariant moving frames_ (see SS2 below). This theory, initiated and developed by Peter Olver and his school ([11, 20, 22, 23]) over the last three decades, is a modern and far-reaching reformulation of Cartan's classical approach to moving frames. Besides its various applications (see [20] and the references therein), it notably provides a _systematic_ and _algorithmic_ way, based on _symbolic computations_, to _simultaneously_ construct the normal forms ([21]) and solve their underlying equivalence problems ([29]). It provides a concrete and striking bridge between Cartan's classical approach to solving equivalence problems and the theory of normal forms. It may answer in part the two questions \(Q^{\text{\textcircled{4}}}\) and \(Q^{\text{\textcircled{5}}}\) introduced in [12, p. 257].
First applications of equivariant moving frames in CR geometry appeared recently in [24, 26], where normal forms of \(3\)-dimensional _nondegenerate_ real hypersurfaces in \(\mathbb{C}^{2}\) and \(5\)-dimensional _totally nondegenerate_ CR surfaces in \(\mathbb{C}^{4}\) are constructed. These works highlight the emerging role of equivariant moving frames in CR geometry and exhibit their potential for addressing various problems in this field. The current work is the first application of equivariant moving frames to the degenerate case.
This paper is organized as follows. In Section 2, we prepare the requisite materials to launch the equivariant moving frame method for constructing the desired normal forms. In Section 3, we apply partial normalizations in the lower orders \(\leq 3\). One observes that while the rank zero Levi non-uniformity at \(\mathbf{p}\) prohibits any normalization in order two, normalizations in order three depend upon certain circumstances imposed by the \(2\)-nondegeneracy condition. This forces us to apply the subsequent normalizations along the five distinct branches (1.5). In the next three Sections 4, 5 and 6, we endeavor to complete these five partial normal forms. For this purpose, we need to pursue in each branch the final set of normalizations in order six. As a byproduct, we independently find Ershova's model hypersurfaces [8] as those admitting the minimum number of normalizations or, equivalently, as those having the maximum dimension of isotropy groups at \(\mathbf{p}\) (cf. [29]). Finally, in the short Section 7, we investigate the biholomorphic equivalence problem between the \(2\)-nondegenerate hypersurfaces considered in this paper.
## 2. Preliminary materials
We aim in this section to prepare the requisite materials for launching the equivariant moving frame method to construct the desired normal forms. This first requires considering the underlying pseudo-group action of holomorphic transformations.
### Holomorphic pseudo-group
In local coordinates \(z_{1},z_{2},w\), the pseudo-group \(\mathscr{G}\) of origin-preserving holomorphic transformations of \(\mathbb{C}^{3}\) consists of diffeomorphisms
\[(z_{1},z_{2},w)\to(Z_{1}(z_{1},z_{2},w),Z_{2}(z_{1},z_{2},w),W(z_{1},z_{2},w))\]
which enjoy the Cauchy-Riemann equations
\[\frac{\partial Z_{j}}{\partial z_{k}}=\frac{\partial Z_{j}}{\partial\overline{ w}}=\frac{\partial W}{\partial\overline{z}_{k}}=\frac{\partial W}{\partial \overline{w}}=0,\qquad j,k=1,2.\]
Expanding \(w=u+{\rm i}v\) and, accordingly, \(W(z,w):=U(z,\bar{z},u,v)+{\rm i}V(z,\bar{z},u,v)\) into real and imaginary parts, one equivalently finds the first order _determining equations_ of \(\mathscr{G}\) as (cf. [26])
\[\begin{split}&\frac{\partial Z_{j}}{\partial\bar{z}_{k}}=\frac{ \partial\overline{Z}_{j}}{\partial z_{k}}=0,\qquad\frac{\partial Z_{j}}{ \partial v}={\rm i}\frac{\partial Z_{j}}{\partial u},\qquad\frac{\partial \overline{Z}_{j}}{\partial v}=-{\rm i}\frac{\partial\overline{Z}_{j}}{ \partial u},\\ &\frac{\partial U}{\partial z_{j}}={\rm i}\frac{\partial V}{ \partial z_{j}},\qquad\frac{\partial U}{\partial\bar{z}_{j}}=-{\rm i}\frac{ \partial V}{\partial\bar{z}_{j}},\qquad\frac{\partial U}{\partial v}=-\frac{ \partial V}{\partial u},\qquad\frac{\partial V}{\partial v}=\frac{\partial U }{\partial u},\qquad j,k=1,2.\end{split} \tag{2.1}\]
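For instance, writing \(\partial/\partial\bar{w}=\frac{1}{2}(\partial_{u}+\mathrm{i}\,\partial_{v})\) and \(W=U+\mathrm{i}V\), the Cauchy-Riemann condition \(\partial W/\partial\bar{w}=0\) unpacks into
\[\frac{1}{2}\,\big{(}U_{u}-V_{v}\big{)}+\frac{\mathrm{i}}{2}\,\big{(}V_{u}+U_{v}\big{)}=0,\]
that is, \(U_{v}=-V_{u}\) and \(V_{v}=U_{u}\), which are exactly the last two equations in (2.1); the remaining equations follow in the same fashion from the other Cauchy-Riemann conditions.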
It follows that the infinitesimal counterpart of \(\mathscr{G}\), namely the Lie algebra \(\mathfrak{g}:=\mathfrak{h}\mathfrak{o}\mathfrak{l}(\mathbb{C}^{3},0)\) of local holomorphic automorphisms of \(\mathbb{C}^{3}\) around the origin, consists of the real vector fields
\[\mathbf{v}=\sum_{j=1}^{2}\,\xi^{j}(z,u,v)\frac{\partial}{\partial z_{j}}+\sum _{j=1}^{2}\,\overline{\xi}^{j}(\bar{z},u,v)\frac{\partial}{\partial\bar{z}_{ j}}+\eta(z,\bar{z},u,v)\frac{\partial}{\partial u}+\phi(z,\bar{z},u,v)\frac{ \partial}{\partial v}, \tag{2.2}\]
with \(\overline{\xi}^{j}(\bar{z},u,v):=\overline{\xi^{j}(z,u,v)}\), which enjoys the second order _infinitesimal determining equations_
\[\begin{split}&\xi^{j}_{z_{k}}=\overline{\xi}^{j}_{z_{k}}=0, \qquad\xi^{j}_{v}={\rm i}\,\xi^{j}_{u},\qquad\overline{\xi}^{j}_{v}=-{\rm i} \,\overline{\xi}^{j}_{u},\\ &\phi_{z_{k}}=-{\rm i}\,\eta_{z_{k}},\qquad\phi_{\bar{z}_{k}}={ \rm i}\,\eta_{\bar{z}_{k}},\qquad\phi_{u}=-\eta_{v},\qquad\phi_{v}=\eta_{u},\\ &\xi^{j}_{\bar{z}_{k},a}=0,\qquad\xi^{j}_{z_{k}v}={\rm i}\,\xi^{j }_{z_{k}u},\qquad\xi^{j}_{uv}={\rm i}\,\xi^{j}_{uu},\qquad\xi^{j}_{vv}=-\xi^{j }_{uu},\\ &\overline{\xi}^{j}_{z_{k},a}=0,\qquad\overline{\xi}^{j}_{\bar{z}_ {k}v}=-{\rm i}\,\overline{\xi}^{j}_{\bar{z}_{k}u},\qquad\overline{\xi}^{j}_{uv }=-{\rm i}\,\overline{\xi}^{j}_{uu},\qquad\overline{\xi}^{j}_{vv}=-\overline{ \xi}^{j}_{uu},\\ &\eta_{z_{j}\bar{z}_{k}}=0,\qquad\eta_{vv}=-\eta_{uu},\qquad\eta_ {z_{k}v}={\rm i}\,\eta_{z_{k}u},\qquad\eta_{\bar{z}_{k}v}=-{\rm i}\,\eta_{\bar {z}_{k}u},\\ &\phi_{z_{k},a}=-{\rm i}\,\eta_{z_{k},a},\qquad\phi_{\bar{z}_{k},a }={\rm i}\,\eta_{\bar{z}_{k},a},\qquad\phi_{u,a}=-\eta_{v,a},\qquad\phi_{v,a}= \eta_{u,a}.\end{split} \tag{2.3}\]
Higher order infinitesimal determining equations are obtained by applying further derivations on these equations.
One may consider the action that \(\mathscr{G}\) induces on a given \(5\)-dimensional real hypersurface \(M^{5}\subset\mathbb{C}^{3}\), represented in local coordinates as the graph of some defining function \(v=v(z,\bar{z},u)\). For \(0\leq n\leq\infty\), let \({\rm J}^{n}:={\rm J}^{n}(M^{5})\) denote the associated \(n\)-th order jet space of \(M^{5}\). In local coordinates, it comprises the tuples \(v^{(n)}=(z,\bar{z},u,v,\ldots,v_{J},\ldots)\), where \(J\) is a symmetric multi-index of order \(\leq n\) with entries \(j_{k}=z_{1},z_{2},\bar{z}_{1},\bar{z}_{2},u\). As explained in [19], the action of \(\mathscr{G}\) on \(M^{5}\) can be extended to the action of the _prolonged_ group \(\mathscr{G}^{(n)}\) on the jet space \({\rm J}^{n}\). In local coordinates, \(\mathscr{G}^{(n)}\) consists of the derivatives of the transformations in \(\mathscr{G}\), up to order \(n\). From now on and for \(j=1,2\), we denote by \(Z_{j},\overline{Z}_{j},U,V\) the _lift_ of the source variables \(z_{j},\bar{z}_{j},u,v\) under prolonged transformations. In a similar fashion, the transformation of the above jet point \(v^{(n)}\) is denoted by \(V^{(n)}=(Z,\overline{Z},U,V,\ldots,V_{J},\ldots)\), where here the entries of the multi-indices \(J\) are \(Z_{1},Z_{2},\overline{Z}_{1},\overline{Z}_{2},U\). The formula for explicitly computing the target variables \(V_{J}\) can be found in [19]. However, in this paper, we do not need to compute them explicitly as they will be considered from a _symbolic_ standpoint.
Infinitesimally, the already mentioned prolonged action \(\mathscr{G}^{(\infty)}\) on \(J^{\infty}\) is determined by prolonging the above vector field \(\mathbf{v}\), namely by
\[\mathbf{v}^{(\infty)}=\sum_{j=1}^{2}\xi^{j}\,\frac{\partial}{\partial z_{j}}+ \sum_{j=1}^{2}\overline{\xi}^{j}\frac{\partial}{\partial\bar{z}_{j}}+\eta \frac{\partial}{\partial u}+\sum_{\sharp J\geq 0}\phi^{J}\frac{\partial}{\partial v_{J}}, \tag{2.4}\]
where the vector components \(\phi^{J}\) -- not to be confused with the derivations \(\phi_{J}\) -- are defined recursively by the prolongation formula
\[\phi^{J,a}=D_{a}\phi^{J}-(D_{a}\xi^{j})\,v_{z_{j},J}-(D_{a}\overline{\xi}^{j})\,v_{J,\bar{z}_{j}}-(D_{a}\eta)\,v_{J,u}, \tag{2.5}\]
for \(a=z_{1},z_{2},\bar{z}_{1},\bar{z}_{2},u\). Here, \(D_{a}\) is the total differentiation operator with respect to \(a\) (cf. [19]).
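To illustrate, taking \(J=\emptyset\) and \(a=z_{1}\) in (2.5), and recalling that \(\xi^{j}\) is independent of \(\bar{z}_{k}\) while \(\overline{\xi}^{j}\) is independent of \(z_{k}\), a direct computation gives the first prolongation coefficient
\[\phi^{z_{1}}=\phi_{z_{1}}+\phi_{v}\,v_{z_{1}}-\sum_{j=1}^{2}\big{(}\xi^{j}_{z_{1}}+\xi^{j}_{v}\,v_{z_{1}}\big{)}\,v_{z_{j}}-\sum_{j=1}^{2}\overline{\xi}^{j}_{v}\,v_{z_{1}}\,v_{\bar{z}_{j}}-\big{(}\eta_{z_{1}}+\eta_{v}\,v_{z_{1}}\big{)}\,v_{u}.\]
Such computations are easily mechanized. The following minimal sympy sketch, an illustration independent of the Maple worksheet [27] (all function names in it are ours, chosen only for this example), reproduces \(\phi^{z_{1}}\) symbolically.

```python
import sympy as sp

# Base variables z1, z2, zbar1, zbar2, u, and a placeholder symbol V
# standing for the dependent jet variable v inside the coefficients.
z1, z2, b1, b2, u, V = sp.symbols('z1 z2 b1 b2 u V')
v = sp.Function('v')(z1, z2, b1, b2, u)

# Components of the infinitesimal generator (2.2): by (2.3), xi^j
# depends only on (z, u, v) and bar{xi}^j only on (zbar, u, v).
xi1 = sp.Function('xi1')(z1, z2, u, V)
xi2 = sp.Function('xi2')(z1, z2, u, V)
xb1 = sp.Function('xb1')(b1, b2, u, V)
xb2 = sp.Function('xb2')(b1, b2, u, V)
eta = sp.Function('eta')(z1, z2, b1, b2, u, V)
phi = sp.Function('phi')(z1, z2, b1, b2, u, V)

def D(F, a):
    """Total derivative D_a F = F_a + v_a * F_v of a coefficient F."""
    return sp.diff(F, a) + sp.diff(v, a) * sp.diff(F, V)

# phi^{z1}: the prolongation formula (2.5) with empty J and a = z1.
phi_z1 = (D(phi, z1)
          - D(xi1, z1) * sp.diff(v, z1) - D(xi2, z1) * sp.diff(v, z2)
          - D(xb1, z1) * sp.diff(v, b1) - D(xb2, z1) * sp.diff(v, b2)
          - D(eta, z1) * sp.diff(v, u))
print(sp.expand(phi_z1))
```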
To the vector components of \(\mathbf{v}\), we associate the zeroth order Maurer-Cartan forms \(\mu^{1},\mu^{2},\alpha,\gamma\) with the assignments
\[\mu^{j}\leftrightarrow\xi^{j},\qquad\overline{\mu}^{j}\leftrightarrow\overline {\xi}^{j},\qquad\alpha\leftrightarrow\eta,\qquad\gamma\leftrightarrow\phi, \qquad{\rm for}\;j=1,2. \tag{2.6}\]
These differential forms can be extended to higher orders \(\mu^{j}_{J},\overline{\mu}^{j}_{J},\alpha_{J},\gamma_{J}\), where the entries of the multi-indices \(J\) are again \(Z_{1},Z_{2},\overline{Z}_{1},\overline{Z}_{2},U\). The coordinate expressions of the Maurer-Cartan forms can be found in [22], although in the subsequent computations, we treat them symbolically and therefore do not require their explicit forms here.
The Maurer-Cartan forms \(\mu^{j}_{J},\overline{\mu}^{j}_{J},\alpha_{J},\gamma_{J}\) with \(0\leq\#J\leq\infty\) are not linearly independent. To detect their dependencies, one may apply Theorem 6.1 of [22] which, according to the infinitesimal determining equations (2.3), gives in orders \(\leq 2\) that
\[\begin{gathered}\mu^{j}_{\overline{Z}_{k}}=\overline{\mu}^{j}_{Z _{k}}=0,\qquad\mu^{j}_{V}={\rm i}\,\mu^{j}_{U},\qquad\overline{\mu}^{j}_{V}=-{ \rm i}\,\overline{\mu}^{j}_{U},\\ \gamma_{Z_{k}}=-{\rm i}\,\alpha_{Z_{k}},\qquad\gamma_{\overline{Z} _{k}}={\rm i}\,\alpha_{\overline{Z}_{k}},\qquad\gamma_{U}=-\alpha_{V},\qquad \gamma_{V}=\alpha_{U},\\ \mu^{j}_{\overline{Z}_{k},a}=0,\qquad\mu^{j}_{Z_{k}V}={\rm i}\, \mu^{j}_{Z_{k}U},\qquad\mu^{j}_{UV}={\rm i}\,\mu^{j}_{UU},\qquad\mu^{j}_{VV}=- \mu^{j}_{UU},\\ \overline{\mu}^{j}_{Z_{k},a}=0,\qquad\overline{\mu}^{j}_{\overline {Z}_{k}V}=-{\rm i}\,\overline{\mu}^{j}_{\overline{Z}_{k}U},\qquad\overline{\mu }^{j}_{UV}=-{\rm i}\,\overline{\mu}^{j}_{UU},\qquad\overline{\mu}^{j}_{VV}=- \overline{\mu}^{j}_{UU},\\ \alpha_{Z_{j}\overline{Z}_{k}}=0,\qquad\alpha_{VV}=-\alpha_{UU}, \qquad\alpha_{Z_{k}V}={\rm i}\,\alpha_{Z_{k}U},\qquad\alpha_{\overline{Z}_{k}V }=-{\rm i}\,\alpha_{\overline{Z}_{k}U},\\ \gamma_{Z_{k},a}=-{\rm i}\,\alpha_{Z_{k},a},\qquad\gamma_{\overline {Z}_{k},a}={\rm i}\,\alpha_{\overline{Z}_{k},a},\qquad\gamma_{U,a}=-\alpha_{V, a},\qquad\gamma_{V,a}=\alpha_{U,a},\end{gathered} \tag{2.7}\]
for \(a\in\{Z_{j},\overline{Z}_{j},U,V,\,j=1,2\}\). One finds the higher order linear relations by applying successive derivations on both sides of these equalities. Accordingly, we have the following basis of Maurer-Cartan forms
\[\mu^{j}_{Z^{\ell}U^{k}},\qquad\overline{\mu}^{j}_{\overline{Z}^{\ell}U^{k}}, \qquad\alpha_{U^{j}V},\qquad\alpha_{Z^{\ell}U^{k}},\qquad\alpha_{\overline{Z}^{ \ell}U^{k}},\qquad\gamma \tag{2.8}\]
for \(j=1,2\), \(k\in\mathbb{N}_{0}\) and for \(\ell:=(l_{1},l_{2})\in\mathbb{N}_{0}^{2}\), where by \(Z^{\ell}\) we mean \(Z_{1}^{l_{1}}Z_{2}^{l_{2}}\) (here \(\mathbb{N}_{0}\) denotes the set of natural numbers together with \(0\)).
### Moving frame
For \(0\leq n\leq\infty\), let \(\mathscr{B}^{(n)}\) denote the \(n\)-th order _lifted fiber bundle_ which, in local coordinates, is parameterized by the pairs \((\varphi^{(n)},v^{(n)})\in\mathscr{G}^{(n)}\times\mathrm{J}^{n}\). The action of \(\mathscr{G}\) can be naturally extended to the lifted bundle \(\mathscr{B}^{(n)}\) by the right composition of the pseudo-group jets
\[\psi\cdot(\varphi^{(n)},v^{(n)})=((\varphi\circ\psi^{-1})^{(n)},V^{(n)}), \qquad{\rm for\;any\;}\psi\in\mathscr{G}. \tag{2.9}\]
**Definition 2.1**.: A _partial right moving frame_ of order \(n\) is a right-invariant local subbundle \(\widehat{\mathscr{B}}^{(n)}\subset\mathscr{B}^{(n)}\), meaning that \(\psi\cdot\widehat{\mathscr{B}}^{(n)}\subset\widehat{\mathscr{B}}^{(n)}\) for all \(\psi\in\mathscr{G}\) where the right action (2.9) is defined. If the subbundle \(\widehat{\mathscr{B}}^{(n)}\) forms the graph of a right-invariant section of \(\mathscr{B}^{(n)}\), it defines an equivariant moving frame.
The construction of a partial moving frame heavily relies on selecting an appropriate _cross-section_ to the corresponding action. In the most practical and standard way, the so-called _coordinate_ cross-section is obtained by setting some appropriate \(V_{J}\), called the _phantom invariants_, to some suitable constants \(c_{J}\). Solving the resulting equations
\[V_{J}=c_{J}, \tag{2.10}\]
which are known as the _normalization equations_, reveals the normalized expressions of the group parameters in terms of the jet coordinates. Inserting back these expressions into the original expressions of the non-phantom functions \(V_{J}\), one obtains a complete set of differential invariants
associated with the action of \(\mathscr{G}\) on the manifolds. This observation motivates the term _lifted differential invariants_ for the target jet coordinates \(V_{J}\) in the context of equivariant moving frames ([11, 23]).
Associated to the local coordinates \(z_{j},\bar{z}_{j},u,j=1,2\) of \(M^{5}\), we have _invariant horizontal forms_\(\omega^{Z_{j}},\omega^{\overline{Z}_{j}}=\overline{\omega^{Z_{j}}}\) and \(\omega^{U}\) which actually are the _lifts_ of the standard \(1\)-forms \(dz_{j},d\bar{z}_{j},du\) under the group action \(\mathscr{G}\). As before, the coordinate expressions of these forms are plainly accessible (cf. [23]) but we do not need them in this paper.
In general, substituting the resulting expressions of the group parameters, obtained in the process of cross-section normalization, into the prolonged transformation formulae establishes what is known as the _invariantization_ process, which maps each differential function, differential form, differential operator, etc., to its invariant counterpart ([11, 23]). We denote this invariantization operator by \(\iota\).
### Recurrence formula
The most powerful tool in the theory of equivariant moving frames is the so-called recurrence formula. In general, the invariantization map \(\iota\) and the exterior differentiation \(d\) do not commute and this formula measures to what extent they may differ.
**Theorem 2.2**.: _(cf. [23, Theorem 25]) If \(\Omega\) is a differential form defined on \(\mathrm{J}^{\infty}\) then it enjoys the recurrence relation_
\[d\iota(\Omega)=\iota\big{[}d\Omega+\mathbf{v}^{(\infty)}(\Omega)\big{]}\]
_where \(\mathbf{v}^{(\infty)}\) is the prolonged infinitesimal vector field (2.4) and \(\mathbf{v}^{(\infty)}(\Omega)\) denotes the Lie derivative of \(\Omega\) along it._
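For instance, applying this formula to the coordinate function \(\Omega=z_{1}\), whose invariantization is \(\iota(z_{1})=Z_{1}\), gives
\[dZ_{1}=\iota\big{[}dz_{1}+\mathbf{v}^{(\infty)}(z_{1})\big{]}=\iota\big{[}dz_{1}+\xi^{1}\big{]}=\omega^{Z_{1}}+\mu^{1},\]
since the lift of \(dz_{1}\) is the invariant horizontal form \(\omega^{Z_{1}}\) and, by the assignments (2.6), the Maurer-Cartan form associated with \(\xi^{1}\) is \(\mu^{1}\).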
Accordingly, the recurrence relations in our case are represented as
\[\begin{split} dZ_{j}&=\omega^{Z_{j}}+\mu^{j},\qquad d \overline{Z}_{j}=\omega^{\overline{Z}_{j}}+\overline{\mu}^{j},\\ dU&=\omega^{U}+\alpha,\\ dV_{J}&=\varpi_{J}+\widehat{\phi}^{J},\end{split} \tag{2.11}\]
for \(j=1,2\) and \(\sharp J\geq 0\), where from now on we denote
\[\varpi_{J}:=V_{J,Z_{j}}\,\omega^{Z_{j}}+V_{J,\overline{Z}_{j}}\,\omega^{ \overline{Z}_{j}}+V_{J,U}\,\omega^{U}\]
and where \(\widehat{\phi}^{J}\) is the invariantization of the vector coefficient \(\phi^{J}\) of \(\mathbf{v}^{\infty}\) which, roughly speaking, is obtained by replacing in the expression of \(\phi^{J}\) the source jet variables \(v_{J}\) with their lifted counterparts \(V_{J}\), and the vector components \(\xi^{j},\eta,\phi\) with their corresponding Maurer-Cartan forms (2.6).
In practice, as the order of \(J\) increases, the expressions of the lifted invariants \(V_{J}\) can grow explosively. However, the recurrence formula offers a _systematic_ and _symbolic_ approach to normalize the corresponding Maurer-Cartan forms instead of directly normalizing the group parameters by solving the aforementioned normalization equations. More precisely, if \(V_{J}=c_{J}\) is a normalization equation for a phantom lifted invariant \(V_{J}\) and a constant \(c_{J}\), we apply it in the corresponding recurrence relation \(dV_{J}\) and solve the resulting equation for some suitable Maurer-Cartan form. This notably offers the advantage of performing linear algebraic manipulations for normalizing the corresponding Maurer-Cartan forms, without the need for the explicit expressions of the lifted invariants, the Maurer-Cartan forms, the normalized group parameters, and so on.
### Normal form
Let \(M^{5}\) be a real-analytic \(5\)-dimensional real hypersurface in \(\mathbb{C}^{3}\), passing through the origin point \(\boldsymbol{p}=0\) and represented in local coordinates \(z_{1},z_{2},w=u+\mathrm{i}v\) as the graph of some defining equation \(v:=v(z,\bar{z},u)\). For a \(5\)-tuple \(J=(j_{1},j_{2},k_{1},k_{2},l)\), denote \(v_{J}:=v_{z_{1}^{j_{1}}z_{2}^{j_{2}}\bar{z}_{1}^{k_{1}}\bar{z}_{2}^{k_{2}}u^{l}}\), \(x^{J}:=z_{1}^{j_{1}}z_{2}^{j_{2}}\bar{z}_{1}^{k_{1}}\bar{z}_{2}^{k_{2}}u^{l}\) and \(J!:=j_{1}!\,j_{2}!\,k_{1}!\,k_{2}!\,l!\). Then, around \(\boldsymbol{p}\), the Taylor series of \(M^{5}\) is
\[v(z,\bar{z},u)=\sum_{\sharp J\geq 0}\frac{v_{J}(\boldsymbol{p})}{J!}\,x^{J}. \tag{2.12}\]
As suggested in [21], we identify this Taylor series expansion with the restriction \(v^{(\infty)}|_{\boldsymbol{p}}\) of the jet coordinates to the point \(\boldsymbol{p}\). A normal form of \(M^{5}\) is made by employing the group transformations of \(\mathscr{G}\) to simplify, as much as possible, the coefficients of the above Taylor series. This process is analogous to finding some practical partial cross-section -- or equivalently a moving frame -- associated with the action of the pseudo-group \(\mathscr{G}\). Along the process of normalizations, the jet coordinates \(v_{J}\) transform into differential invariants \(V_{J}\) which no longer depend on any potentially remaining unnormalized group parameters (cf. [29]). It amounts to converting the Taylor series (2.12) to the _normal form_
\[v(z,\bar{z},u)=\sum_{\sharp J\geq 0}\frac{V_{J}(\boldsymbol{p})}{J!}\,x^{J}. \tag{2.13}\]
It follows that the essential conditions that determine the coefficients of the space of the desired normal forms are actually imposed by the corresponding cross-section through the normalization equations (2.10). We refer the reader to [21] for further relevant information and details.
## 3. Elementary normalizations
Now we are ready to launch the equivariant moving frame technique for constructing complete normal forms for real hypersurfaces \(M^{5}\subset\mathbb{C}^{3}\) at their \(2\)-nondegenerate point \(\boldsymbol{p}=0\) of Levi non-uniform rank zero. Benefiting from the powerful recurrence formula (2.11), we manage our computations symbolically and order by order, without requiring any explicit coordinate expression. Since the lifted invariants and Maurer-Cartan forms respect the conjugation relation, i.e.
\[\overline{V_{Z^{\ell_{1}}\overline{Z}^{\ell_{2}}U^{l}}}=V_{Z^{\ell_{2}}\overline{Z}^{\ell_{1}}U^{l}},\qquad\mathrm{and}\qquad\overline{\mu^{j}_{Z^{\ell_{1}}\overline{Z}^{\ell_{2}}U^{l}}}=\overline{\mu}^{j}_{Z^{\ell_{2}}\overline{Z}^{\ell_{1}}U^{l}},\qquad\overline{\alpha_{Z^{\ell_{1}}U^{l}}}=\alpha_{\overline{Z}^{\ell_{1}}U^{l}},\]
all our upcoming computations and normalizations will respect this relation as well. Accordingly, we will consider the normalization of only one of each pair of conjugate Maurer-Cartan forms. For brevity, we occasionally employ Einstein's summation convention to present our expressions.
### Orders zero and one
In light of (2.11), applying the recurrence formula to the zeroth and first order lifted invariants gives
\[dZ_{j} =\omega^{Z_{j}}+\mu^{j},\] \[dU =\omega^{U}+\alpha,\qquad\quad dV=\varpi+\gamma\] \[dV_{Z_{j}} =\varpi_{Z_{j}}-\mu^{k}_{Z_{j}}V_{Z_{k}}-\alpha_{Z_{j}}V_{U}- \mathrm{i}\,\alpha_{Z_{j}},\] \[dV_{U} =\varpi_{U}-\mu^{k}_{U}V_{Z_{k}}-\overline{\mu}^{k}_{U}V_{ \overline{Z}_{k}}-\alpha_{U}V_{U}-\alpha_{V}\]
for \(j=1,2\). Thus, by selecting the order zero and one cross-section \(Z_{j}=U=V=V_{Z_{j}}=V_{U}=0\) (along with the conjugate relations), the above relations can be solved for the Maurer-Cartan forms
\[\mu^{j}=-\omega^{Z_{j}},\qquad\alpha=-\omega^{U},\qquad\gamma=0,\qquad\alpha_{ Z_{j}}=-\mathrm{i}\,\varpi_{Z_{j}},\qquad\alpha_{V}=\varpi_{U}.\]
Analysing carefully the prolongation formula (2.5) leads us to the following general observation.
**Lemma 3.1**.: _Let \(j\geq 0\) and \((0,0)\neq\ell\in\mathbb{N}_{0}^{2}\). Then_
1. _it is possible to normalize the Maurer-Cartan form_ \(\alpha_{Z^{\ell}U^{j}}\) _by setting_ \(V_{Z^{\ell}U^{j}}\equiv 0\)_._
2. _it is possible to normalize the Maurer-Cartan form_ \(\alpha_{U^{j}V}\) _by setting_ \(V_{U^{j+1}}\equiv 0\)_._
Proof.: We establish the proof for the first assertion, noting that the proof of the second follows along a similar argument. In order to consider the recurrence relation of \(V_{Z^{\ell}U^{l}}\), we shall inspect the vector component \(\phi^{z^{\ell}u^{l}}\) in the prolonged vector field (2.4). Since \(\ell=(l_{1},l_{2})\neq(0,0)\), without loss of generality we may assume that \(l_{1}\neq 0\). One plainly sees that
\[\phi^{z_{1}}=\phi_{z_{1}}+\cdots\]
where \("\cdots"\) stands for some combinations of the first order jets \(v_{a}\), \(a=z_{1},z_{2},\bar{z}_{1},\bar{z}_{2},u\) and the derivations of the other vector components. Then, thanks to the prolongations formula (2.5), one readily verifies for every \(\mathsf{a}=z_{1},z_{2},u\) that
\[\phi^{z_{1}\mathsf{a}} =D_{\mathsf{a}}\,\phi^{z_{1}}-(D_{\mathsf{a}}\xi^{j})\,v_{z_{1}z _{j}}-(D_{\mathsf{a}}\overline{\xi}^{j})\,v_{z_{1}\bar{z}_{j}}-(D_{\mathsf{a} }\eta)\,v_{z_{1}u}\] \[=\phi_{z_{1}\mathsf{a}}+\phi_{z_{1}v}\,v_{\mathsf{a}}-(D_{ \mathsf{a}}\xi^{j})\,v_{z_{1}z_{j}}-(D_{\mathsf{a}}\overline{\xi}^{j})\,v_{z _{1}\bar{z}_{j}}-(D_{\mathsf{a}}\eta)\,v_{z_{1}u}+\cdots.\]
By (2.3), we have \(\phi_{z_{1}\mathsf{a}}=-\mathrm{i}\eta_{z_{1}\mathsf{a}}\), which after invariantization yields the following recurrence relation
\[dV_{Z_{1}\mathsf{A}}=-\mathrm{i}\,\alpha_{Z_{1}\mathsf{A}}+\cdots\]
where \(\mathsf{A}\) is the lift of the source variable \(\mathsf{a}\) and where \(\alpha_{Z_{1}\mathsf{A}}\) does not appear in the \("\cdots"\) part. Now, applying a simple induction shows that the recurrence relation of every arbitrary \(V_{Z_{1}^{j+1}Z_{2}^{k}U^{l}}\) is of the form
\[dV_{Z_{1}^{j+1}Z_{2}^{k}U^{l}}=-\mathrm{i}\,\alpha_{Z_{1}^{j+1}Z_{2}^{k}U^{l}}+\cdots\]
where the Maurer-Cartan form \(\alpha_{Z_{1}^{j+1}Z_{2}^{k}U^{l}}\) does not appear in the \("\cdots"\) part. Thus, by setting \(V_{Z_{1}^{j+1}Z_{2}^{k}U^{l}}=0\), one plainly solves this recurrence relation to normalize \(\alpha_{Z_{1}^{j+1}Z_{2}^{k}U^{l}}\).
**Remark 3.2**.: The selected order zero cross-section \(Z_{1}=Z_{2}=U=V=0\) forces the final normal form transformation of \(M^{5}\) to be origin-preserving.
### Order two
In this order, we shall consider the recurrence relations
\[\begin{split}& dV_{Z_{1}\overline{Z}_{1}}=\varpi_{Z_{1}\overline{ Z}_{1}}-V_{Z_{1}\overline{Z}_{1}}\big{(}\mu_{Z_{1}}^{1}+\overline{\mu}_{Z_{1}}^{1}- \alpha_{U}\big{)}-V_{Z_{1}\overline{Z}_{2}}\,\overline{\mu}_{Z_{1}}^{2}-V_{Z_ {2}\overline{Z}_{1}}\,\mu_{Z_{1}}^{2},\\ & dV_{Z_{1}\overline{Z}_{2}}=\varpi_{Z_{1}\overline{Z}_{2}}-V_{Z_ {1}\overline{Z}_{1}}\,\overline{\mu}_{Z_{2}}^{1}-V_{Z_{2}\overline{Z}_{2}}\, \mu_{Z_{1}}^{2}-V_{Z_{1}\overline{Z}_{2}}\big{(}\mu_{Z_{1}}^{1}+\overline{ \mu}_{\overline{Z}_{2}}^{2}-\alpha_{U}\big{)},\\ & dV_{Z_{2}\overline{Z}_{2}}=\varpi_{Z_{2}\overline{Z}_{2}}-V_{Z_ {1}\overline{Z}_{2}}\,\mu_{Z_{2}}^{1}-V_{Z_{2}\overline{Z}_{1}}\,\overline{ \mu}_{\overline{Z}_{2}}^{1}-V_{Z_{2}\overline{Z}_{2}}\big{(}\mu_{Z_{2}}^{2}+ \overline{\mu}_{\overline{Z}_{2}}^{2}-\alpha_{U}\big{)}.\end{split} \tag{3.1}\]
These relations would be of use in the normalization process whenever at least one of the involved invariants \(V_{Z_{1}\overline{Z}_{1}},V_{Z_{1}\overline{Z}_{2}},V_{Z_{2}\overline{Z}_{2}}\) is nonzero. But, unfortunately, our assumption that \(\boldsymbol{p}\) is of Levi non-uniform rank zero prevents us from benefiting from these recurrence relations.
**Remark 3.3**.: Denoting
\[\Delta=V_{Z_{1}\overline{Z}_{1}}V_{Z_{2}\overline{Z}_{2}}-V_{Z_{1}\overline{Z}_ {2}}V_{Z_{2}\overline{Z}_{1}},\]
and after applying necessary computations, the above recurrence relations (3.1) give that
\[d\Delta=\big{(}2\,\alpha_{U}-\mu_{Z_{1}}^{1}-\mu_{Z_{2}}^{2}-\overline{\mu}_{ \overline{Z}_{1}}^{1}-\overline{\mu}_{\overline{Z}_{2}}^{2}\big{)}\,\Delta.\]
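To see this, abbreviate \(a:=V_{Z_{1}\overline{Z}_{1}}\), \(b:=V_{Z_{1}\overline{Z}_{2}}\), \(c:=V_{Z_{2}\overline{Z}_{1}}\), \(d:=V_{Z_{2}\overline{Z}_{2}}\) and substitute (3.1), together with the conjugate of its second relation, into \(d\Delta=d\,(ad-bc)\). Modulo the horizontal coframe, the off-diagonal forms \(\mu_{Z_{1}}^{2},\overline{\mu}_{\overline{Z}_{1}}^{2},\mu_{Z_{2}}^{1},\overline{\mu}_{\overline{Z}_{2}}^{1}\) cancel in pairs, while each of \(\mu_{Z_{1}}^{1},\overline{\mu}_{\overline{Z}_{1}}^{1},\mu_{Z_{2}}^{2},\overline{\mu}_{\overline{Z}_{2}}^{2}\) survives with the coefficient \(-(ad-bc)=-\Delta\) and \(\alpha_{U}\) with the coefficient \(2\Delta\).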
Thus, \(\Delta\) is a _relative (lifted) invariant_ ([10]) defined on the lifted bundle \(\mathscr{B}^{(2)}\). Since at each step of the normalizations the value of \(\Delta\) at \(\boldsymbol{p}\) corresponds to the determinant of the Levi matrix (1.3), its relative invariancy confirms the well-known fact that the degeneracy remains invariant under holomorphic transformations (cf. [10, Proposition 3.6]). Let \(\mathscr{G}^{\mathsf{red}}\) be the reduction of the holomorphic pseudo-group \(\mathscr{G}\) under the normalizations applied so far. Restricting the recurrence relations (3.1) to the fiber \(\mathscr{G}^{\mathsf{red}}\times\{\boldsymbol{p}\}\) simply gives that \(dV_{Z_{j}\overline{Z}_{k}}=\varpi_{Z_{j}\overline{Z}_{k}}\) for \(j,k=1,2\). Thus on this fiber, the three lifted invariants \(V_{Z_{1}\overline{Z}_{1}},V_{Z_{1}\overline{Z}_{2}},V_{Z_{2}\overline{Z}_{2}}\) are independent of the group parameters. Consequently, the property of \(\boldsymbol{p}\) having Levi non-uniform rank zero remains invariant under the subsequent holomorphic normalizations.
**Notation.** Henceforth, at each step of the normalizations, we denote by \(\mathscr{B}_{\boldsymbol{p}}\) the fiber \(\mathscr{G}^{\mathsf{red}}\times\{\boldsymbol{p}\}\subset\mathscr{B}^{(\infty)}\) over the point \(\boldsymbol{p}\). As we will see, most of our computations will be carried out on this fiber.
### Order three
While in the former order two, the Levi non-uniform rank zero of \(\boldsymbol{p}\) played the key role, in this order the \(2\)-nondegeneracy assumption is essentially effective. By [6, eq. (7.1.3-5)], a partially normalized hypersurface \(v=v(z,\bar{z},u)\) is \(2\)-nondegenerate whenever its corresponding order three jets enjoy
\[\operatorname{Span}\bigl{\{}(v_{z_{1}^{2}\bar{z}_{1}},v_{z_{1}^{2}\bar{z}_{2 }}),(v_{z_{1}z_{2}\bar{z}_{1}},v_{z_{1}z_{2}\bar{z}_{2}}),(v_{z_{2}^{2}\bar{z} _{1}},v_{z_{2}^{2}\bar{z}_{2}})\bigr{\}}|_{\mathbf{p}}=\mathbb{C}^{2}. \tag{3.2}\]
Corresponding to these jet variables we have, modulo the horizontal coframe, the following six recurrence relations
\[\begin{split} dV_{Z_{1}^{2}\overline{Z}_{1}}&\equiv V_{Z_{1}\overline{Z}_{1}}\big{(}4\mathrm{i}\,V_{Z_{1}\overline{Z}_{2}}\overline{\mu}_{U}^{2}-\mu_{Z_{1}^{2}}^{1}\big{)}+4\mathrm{i}\,V_{Z_{1}\overline{Z}_{1}}^{2}\overline{\mu}_{U}^{1}-V_{Z_{1}^{2}\overline{Z}_{2}}\,\overline{\mu}_{\overline{Z}_{1}}^{2}-2\,V_{Z_{1}Z_{2}\overline{Z}_{1}}\,\mu_{Z_{1}}^{2}-V_{Z_{2}\overline{Z}_{1}}\,\mu_{Z_{1}^{2}}^{2}\\ &\quad+V_{Z_{1}^{2}\overline{Z}_{1}}\big{(}\alpha_{U}-\overline{\mu}_{\overline{Z}_{1}}^{1}-2\,\mu_{Z_{1}}^{1}\big{)},\\ dV_{Z_{1}^{2}\overline{Z}_{2}}&\equiv V_{Z_{1}\overline{Z}_{2}}\big{(}4\mathrm{i}\,V_{Z_{1}\overline{Z}_{1}}\overline{\mu}_{U}^{1}-\mu_{Z_{1}^{2}}^{1}\big{)}+4\mathrm{i}\,V_{Z_{1}\overline{Z}_{2}}^{2}\overline{\mu}_{U}^{2}-V_{Z_{1}^{2}\overline{Z}_{1}}\,\overline{\mu}_{\overline{Z}_{2}}^{1}-2\,V_{Z_{1}Z_{2}\overline{Z}_{2}}\,\mu_{Z_{1}}^{2}-V_{Z_{2}\overline{Z}_{2}}\,\mu_{Z_{1}^{2}}^{2}\\ &\quad+V_{Z_{1}^{2}\overline{Z}_{2}}\big{(}\alpha_{U}-\overline{\mu}_{\overline{Z}_{2}}^{2}-2\,\mu_{Z_{1}}^{1}\big{)},\\ dV_{Z_{1}Z_{2}\overline{Z}_{1}}&\equiv V_{Z_{1}Z_{2}\overline{Z}_{1}}\big{(}\alpha_{U}-\overline{\mu}_{\overline{Z}_{1}}^{1}-\mu_{Z_{2}}^{2}-\mu_{Z_{1}}^{1}\big{)}+V_{Z_{1}\overline{Z}_{1}}\big{(}4\mathrm{i}\,V_{Z_{2}\overline{Z}_{1}}\overline{\mu}_{U}^{1}+2\mathrm{i}\,V_{Z_{2}\overline{Z}_{2}}\overline{\mu}_{U}^{2}-\mu_{Z_{1}Z_{2}}^{1}\big{)}\\ &\quad+V_{Z_{2}\overline{Z}_{1}}\big{(}2\mathrm{i}\,V_{Z_{1}\overline{Z}_{2}}\overline{\mu}_{U}^{2}-\mu_{Z_{1}Z_{2}}^{2}\big{)}-V_{Z_{1}^{2}\overline{Z}_{1}}\,\mu_{Z_{2}}^{1}-V_{Z_{1}Z_{2}\overline{Z}_{2}}\,\overline{\mu}_{\overline{Z}_{1}}^{2}-V_{Z_{2}^{2}\overline{Z}_{1}}\,\mu_{Z_{1}}^{2},\\ dV_{Z_{1}Z_{2}\overline{Z}_{2}}&\equiv V_{Z_{1}Z_{2}\overline{Z}_{2}}\big{(}\alpha_{U}-\mu_{Z_{2}}^{2}-\overline{\mu}_{\overline{Z}_{2}}^{2}-\mu_{Z_{1}}^{1}\big{)}+V_{Z_{1}\overline{Z}_{2}}\big{(}2\mathrm{i}\,V_{Z_{2}\overline{Z}_{1}}\overline{\mu}_{U}^{1}+4\mathrm{i}\,V_{Z_{2}\overline{Z}_{2}}\overline{\mu}_{U}^{2}-\mu_{Z_{1}Z_{2}}^{1}\big{)}\\ &\quad+V_{Z_{2}\overline{Z}_{2}}\big{(}2\mathrm{i}\,V_{Z_{1}\overline{Z}_{1}}\overline{\mu}_{U}^{1}-\mu_{Z_{1}Z_{2}}^{2}\big{)}-V_{Z_{1}^{2}\overline{Z}_{2}}\,\mu_{Z_{2}}^{1}-V_{Z_{1}Z_{2}\overline{Z}_{1}}\,\overline{\mu}_{\overline{Z}_{2}}^{1}-V_{Z_{2}^{2}\overline{Z}_{2}}\,\mu_{Z_{1}}^{2},\\ dV_{Z_{2}^{2}\overline{Z}_{1}}&\equiv V_{Z_{2}^{2}\overline{Z}_{1}}\big{(}\alpha_{U}-\overline{\mu}_{\overline{Z}_{1}}^{1}-2\,\mu_{Z_{2}}^{2}\big{)}+V_{Z_{2}\overline{Z}_{1}}\big{(}4\mathrm{i}\,V_{Z_{2}\overline{Z}_{2}}\overline{\mu}_{U}^{2}-\mu_{Z_{2}^{2}}^{2}\big{)}+4\mathrm{i}\,V_{Z_{2}\overline{Z}_{1}}^{2}\overline{\mu}_{U}^{1}-2\,V_{Z_{1}Z_{2}\overline{Z}_{1}}\,\mu_{Z_{2}}^{1}\\ &\quad-V_{Z_{2}^{2}\overline{Z}_{2}}\,\overline{\mu}_{\overline{Z}_{1}}^{2}-V_{Z_{1}\overline{Z}_{1}}\,\mu_{Z_{2}^{2}}^{1},\\ dV_{Z_{2}^{2}\overline{Z}_{2}}&\equiv V_{Z_{2}\overline{Z}_{2}}\big{(}4\mathrm{i}\,V_{Z_{2}\overline{Z}_{1}}\overline{\mu}_{U}^{1}-\mu_{Z_{2}^{2}}^{2}\big{)}+V_{Z_{2}^{2}\overline{Z}_{2}}\big{(}\alpha_{U}-\overline{\mu}_{\overline{Z}_{2}}^{2}-2\,\mu_{Z_{2}}^{2}\big{)}+4\mathrm{i}\,V_{Z_{2}\overline{Z}_{2}}^{2}\overline{\mu}_{U}^{2}-2\,V_{Z_{1}Z_{2}\overline{Z}_{2}}\,\mu_{Z_{2}}^{1}\\ &\quad-V_{Z_{2}^{2}\overline{Z}_{1}}\,\overline{\mu}_{\overline{Z}_{2}}^{1}-V_{Z_{1}\overline{Z}_{2}}\,\mu_{Z_{2}^{2}}^{1}.\end{split} \tag{3.3}\]
**Convention.** Hereafter, we use the symbol \("\equiv"\) instead of \("="\), when presenting a recurrence relation modulo the horizontal coframe.
In order to normalize the Maurer-Cartan forms by means of the above recurrence relations, we need to know that at least one of the corresponding six differential invariants is nonzero. This necessitates information derived from the \(2\)-nondegeneracy condition. One verifies that (3.2) is satisfied if at least one of the following combinations is nonzero at \(\mathbf{p}\) (cf. [6, eq. (7.1.19)])
\[\begin{split}\Delta_{12}&:=v_{z_{1}z_{2}\bar{z}_{1}}\cdot v_{z_{1}^{2}\bar{z}_{2}}-v_{z_{1}z_{2}\bar{z}_{2}}\cdot v_{z_{1}^{2}\bar{z}_{1}},\\ \Delta_{23}&:=v_{z_{1}z_{2}\bar{z}_{2}}\cdot v_{z_{2}^{2}\bar{z}_{1}}-v_{z_{1}z_{2}\bar{z}_{1}}\cdot v_{z_{2}^{2}\bar{z}_{2}},\\ \Delta_{13}&:=v_{z_{1}^{2}\bar{z}_{1}}\cdot v_{z_{2}^{2}\bar{z}_{2}}-v_{z_{1}^{2}\bar{z}_{2}}\cdot v_{z_{2}^{2}\bar{z}_{1}}.\end{split} \tag{3.4}\]
We emphasize that if two of the above combinations \(\Delta_{12},\Delta_{23},\Delta_{13}\) vanish at \(\mathbf{p}\) then the third combination has to remain nonzero at this point under holomorphic transformations. We prove this assertion in the case where \(\Delta_{23}(\mathbf{p})=\Delta_{13}(\mathbf{p})=0\) but \(\Delta_{12}(\mathbf{p})\neq 0\). The proof of the other possible cases is similar.
**Proposition 3.4**.: _(cf. [6, Assertion 7.2.1]). Let \(\Delta_{23}\) and \(\Delta_{13}\) vanish identically at \(\mathbf{p}=0\). Then the value of \(\Delta_{12}\) at this point does not vanish under holomorphic transformations._
Proof.: By abuse of notation, we continue to write \(\Delta_{12}\) for the lift of the above combination \(\Delta_{12}\) under holomorphic transformations, i.e.
\[\Delta_{12}:=V_{Z_{1}Z_{2}\overline{Z}_{1}}\cdot V_{Z_{1}^{2}\overline{Z}_{2} }-V_{Z_{1}Z_{2}\overline{Z}_{2}}\cdot V_{Z_{1}^{2}\overline{Z}_{1}}.\]
Taking into account that the order two lifted invariants \(V_{J}\), \(\#J=2\) vanish identically on the fiber \(\mathscr{B}_{\mathbf{p}}\) over \(\mathbf{p}\), the recurrence relations (3.3) give
\[d\Delta_{12}\equiv\big{(}2\,\alpha_{U}-3\,\mu_{Z_{1}}^{1}-\mu_{Z_{2}}^{2}- \overline{\mu}_{\overline{Z}_{1}}^{1}-\overline{\mu}_{\overline{Z}_{2}}^{2} \big{)}\Delta_{12}.\]
Thus on the mentioned fiber, the pseudo-group acts by scaling on \(\Delta_{12}\). Hence, it remains nonzero on this fiber under holomorphic transformations.
The \(2\)-nondegeneracy condition (3.4) provides various possibilities of normalizations in order three, each of which produces a certain branch in the normalization process. Ebenfelt realized these branches in [6] as (1.5). Our main goal in this paper is to complete the normalizations in each branch and construct their associated complete normal forms.
**Remark 3.5**.: Ebenfelt claims in [6, p. 339] that the lifted differential invariant \(V_{Z_{1}Z_{2}\overline{Z}_{2}}\), which corresponds to \(b_{2}\) in [6], can be normalized to zero in all branches. As we will see, this claim is true except in a very specific case. Indeed, when we have on the fiber \(\mathscr{B}_{\mathbf{p}}\) that \(|V_{Z_{1}^{2}\overline{Z}_{2}}|=|V_{Z_{1}Z_{2}\overline{Z}_{1}}|\) and \(V_{Z_{2}^{2}\overline{Z}_{2}}=0\) -- this can occur in branch (A.ii.1) of (1.5) -- then setting \(V_{Z_{1}Z_{2}\overline{Z}_{2}}=0\) in the fourth equation in (3.3) is superfluous, as in this case only the real or the imaginary part of the Maurer-Cartan form \(\mu_{Z_{2}}^{1}\) is normalizable. This situation corresponds to the specific case \(|r|=\frac{1}{2}\), where \(r\) is the invariant defined in [6, eq. (7.2.13)] (this exception was also realized by Ershova in [8, p. 191]). Nevertheless, in this paper we do not aim to consider this specific case, for two reasons. First, we do not wish to make the paper longer and, second, one can get rid of this phenomenon by appropriate (but different from what we do in the next section) normalizations of the two lifted invariants \(V_{Z_{1}^{2}\overline{Z}_{2}}\) and \(V_{Z_{1}Z_{2}\overline{Z}_{1}}\) in branch (A.ii.1).
Let us conclude this section by noticing that in the present order three, we have in addition the three recurrence relations
\[\begin{split} dV_{Z_{1}\overline{Z}_{1}U}&\equiv V_{Z_{1}\overline{Z}_{1}}\big{(}\alpha_{UU}-\mu_{Z_{1}U}^{1}-\overline{\mu}_{\overline{Z}_{1}U}^{1}\big{)}-V_{Z_{1}\overline{Z}_{1}U}\big{(}\mu_{Z_{1}}^{1}+\overline{\mu}_{\overline{Z}_{1}}^{1}\big{)}-V_{Z_{1}^{2}\overline{Z}_{1}}\,\mu_{U}^{1}-V_{Z_{1}\overline{Z}_{1}^{2}}\,\overline{\mu}_{U}^{1}\\ &\quad-V_{Z_{2}\overline{Z}_{1}}\,\mu_{Z_{1}U}^{2}-V_{Z_{1}\overline{Z}_{2}}\,\overline{\mu}_{\overline{Z}_{1}U}^{2}-V_{Z_{2}\overline{Z}_{1}U}\,\mu_{Z_{1}}^{2}-V_{Z_{1}\overline{Z}_{2}U}\,\overline{\mu}_{\overline{Z}_{1}}^{2}-V_{Z_{1}Z_{2}\overline{Z}_{1}}\,\mu_{U}^{2}-V_{Z_{1}\overline{Z}_{1}\overline{Z}_{2}}\,\overline{\mu}_{U}^{2},\\ dV_{Z_{1}\overline{Z}_{2}U}&\equiv V_{Z_{1}\overline{Z}_{2}}\big{(}\alpha_{UU}-\mu_{Z_{1}U}^{1}-\overline{\mu}_{\overline{Z}_{2}U}^{2}\big{)}-V_{Z_{1}\overline{Z}_{2}U}\big{(}\mu_{Z_{1}}^{1}+\overline{\mu}_{\overline{Z}_{2}}^{2}\big{)}-V_{Z_{1}\overline{Z}_{1}}\,\overline{\mu}_{\overline{Z}_{2}U}^{1}-V_{Z_{2}\overline{Z}_{2}}\,\mu_{Z_{1}U}^{2}\\ &\quad-V_{Z_{1}\overline{Z}_{1}U}\,\overline{\mu}_{\overline{Z}_{2}}^{1}-V_{Z_{2}\overline{Z}_{2}U}\,\mu_{Z_{1}}^{2}-V_{Z_{1}^{2}\overline{Z}_{2}}\,\mu_{U}^{1}-V_{Z_{1}\overline{Z}_{1}\overline{Z}_{2}}\,\overline{\mu}_{U}^{1}-V_{Z_{1}Z_{2}\overline{Z}_{2}}\,\mu_{U}^{2}-V_{Z_{1}\overline{Z}_{2}^{2}}\,\overline{\mu}_{U}^{2},\\ dV_{Z_{2}\overline{Z}_{2}U}&\equiv V_{Z_{2}\overline{Z}_{2}}\big{(}\alpha_{UU}-\mu_{Z_{2}U}^{2}-\overline{\mu}_{\overline{Z}_{2}U}^{2}\big{)}-V_{Z_{2}\overline{Z}_{2}U}\big{(}\mu_{Z_{2}}^{2}+\overline{\mu}_{\overline{Z}_{2}}^{2}\big{)}-V_{Z_{1}\overline{Z}_{2}}\,\mu_{Z_{2}U}^{1}-V_{Z_{2}\overline{Z}_{1}}\,\overline{\mu}_{\overline{Z}_{2}U}^{1}\\ &\quad-V_{Z_{1}\overline{Z}_{2}U}\,\mu_{Z_{2}}^{1}-V_{Z_{2}\overline{Z}_{1}U}\,\overline{\mu}_{\overline{Z}_{2}}^{1}-V_{Z_{1}Z_{2}\overline{Z}_{2}}\,\mu_{U}^{1}-V_{Z_{2}\overline{Z}_{1}\overline{Z}_{2}}\,\overline{\mu}_{U}^{1}-V_{Z_{2}^{2}\overline{Z}_{2}}\,\mu_{U}^{2}-V_{Z_{2}\overline{Z}_{2}^{2}}\,\overline{\mu}_{U}^{2}.\end{split} \tag{3.5}\]
In the following three sections, our objective is to complete the partial normal forms (1.5) across the five branches that have emerged.
## 4. Branches (A.ii.1) and (A.ii.2)
Since the first two partial normal forms (A.ii.1) and (A.ii.2) in (1.5) both fall under the situation where the value of \(\Delta_{12}\) is nonzero at the point \(\boldsymbol{p}=0\), we study them together. The coefficient of the monomial \(z_{1}z_{2}\bar{z}_{1}\) is nonzero in these branches and thus, we are permitted here to set the corresponding lifted invariant \(V_{Z_{1}Z_{2}\overline{Z}_{1}}\) to some nonzero constant, say
\[V_{Z_{1}Z_{2}\overline{Z}_{1}}=1.\]
Accordingly, one can solve the third recurrence relation in (3.3) to normalize
\[\begin{split}\mu_{Z_{2}}^{2}&\equiv V_{Z_{1}\overline{Z}_{1}}\big{(}4{\rm i}\,V_{Z_{2}\overline{Z}_{1}}\overline{\mu}_{U}^{1}+2{\rm i}\,V_{Z_{2}\overline{Z}_{2}}\overline{\mu}_{U}^{2}-\mu_{Z_{1}Z_{2}}^{1}\big{)}+V_{Z_{2}\overline{Z}_{1}}\big{(}2{\rm i}\,V_{Z_{1}\overline{Z}_{2}}\overline{\mu}_{U}^{2}-\mu_{Z_{1}Z_{2}}^{2}\big{)}\\ &\quad-V_{Z_{1}^{2}\overline{Z}_{1}}\,\mu_{Z_{2}}^{1}-V_{Z_{1}Z_{2}\overline{Z}_{2}}\,\overline{\mu}_{\overline{Z}_{1}}^{2}-V_{Z_{2}^{2}\overline{Z}_{1}}\,\mu_{Z_{1}}^{2}+\alpha_{U}-\mu_{Z_{1}}^{1}-\overline{\mu}_{\overline{Z}_{1}}^{1}.\end{split}\]
Even more generally, we have
**Lemma 4.1**.: _For every \(j,k,l\geq 0\) with \((j,k,l)\neq(0,0,0)\), solving the recurrence relation of \(V_{Z_{1}^{j+1}Z_{2}^{k+1}\overline{Z}_{1}U^{l}}=0\) provides the normalization of the Maurer-Cartan form \(\mu_{Z_{1}^{j}Z_{2}^{k+1}U^{l}}^{2}\)._
Proof.: By analysing the vector component \(\phi^{z_{1}z_{2}\bar{z}_{1}}\) of the prolonged vector field (2.4), one finds that
\[\phi^{z_{1}z_{2}\bar{z}_{1}}=-v_{z_{1}z_{2}\bar{z}_{1}}\,\xi_{z_{2}}^{2}+\cdots\]
where \("\cdots"\) stands for terms that do not involve the vector component \(\xi_{z_{2}}^{2}\). Then, the prolongation formula (2.5) results for each \(j,k,l\geq 0\) that
\[\phi^{z_{1}^{j+1}z_{2}^{k+1}\bar{z}_{1}u^{l}}=-v_{z_{1}z_{2}\bar{z}_{1}}\,\xi _{z_{1}^{j}z_{2}^{k+1}u^{l}}^{2}+\cdots\]
where \(\xi_{z_{1}^{j}z_{2}^{k+1}u^{l}}^{2}\) does not appear in the part \("\cdots"\). Then, in light of the recurrence formula (2.11) and after invariantization, we have
\[dV_{Z_{1}^{j+1}Z_{2}^{k+1}\overline{Z}_{1}U^{l}}\equiv-V_{Z_{1}Z_{2}\overline{ Z}_{1}}\,\mu_{Z_{1}^{j}Z_{2}^{k+1}U^{l}}^{2}+\cdots.\]
Thus, with \(V_{Z_{1}Z_{2}\overline{Z}_{1}}=1\) and \(V_{Z_{1}^{j+1}Z_{2}^{k+1}\overline{Z}_{1}U^{l}}=0\), one may solve the above recurrence relation to normalize the Maurer-Cartan form \(\mu_{Z_{1}^{j}Z_{2}^{k+1}U^{l}}^{2}\).
Furthermore, with the assumption\({}^{1}\) \(|V_{Z_{1}^{2}\overline{Z}_{2}}|\neq|V_{Z_{1}Z_{2}\overline{Z}_{1}}|\), the recurrence relation of \(V_{Z_{1}Z_{2}\overline{Z}_{2}}\) enables one to normalize the entire complex Maurer-Cartan form \(\mu_{Z_{2}}^{1}\) if we set\({}^{2}\)
Footnote 1: As mentioned in Remark 3.5, we do not aim to consider the specific opposite case.
Footnote 2: Henceforth and due to their length, we do not present the normalized expressions of the Maurer-Cartan forms. However, they are available in the Maple worksheet [27].
\[V_{Z_{1}Z_{2}\overline{Z}_{2}}=0.\]
Proceeding along the same arguments as in the proofs of Lemmas 3.1 and 4.1, and by careful analysis of the prolongation formula (2.5), one finds in general that
**Lemma 4.2**.: _For every \(j,k,l\geq 0\), one can normalize the Maurer-Cartan form \(\mu_{Z_{1}^{j}Z_{2}^{k+1}U^{l}}^{1}\) by specifying \(V_{Z_{1}^{j+1}Z_{2}^{k+1}\overline{Z}_{2}U^{l}}=0\)._
For the next step of normalizations, we consider the lifted invariant \(V_{Z_{1}^{2}\overline{Z}_{2}}\), whose recurrence relation in (3.3) is now of the form
\[dV_{Z_{1}^{2}\overline{Z}_{2}}\equiv-2{\rm i}\,V_{Z_{1}^{2}\overline{Z}_{2}}\,{\rm Im}\,\mu_{Z_{1}}^{1}+\cdots\]
where the "\(\cdots\)" part stands for the terms that vanish identically on the fiber \(\mathscr{B}_{\boldsymbol{p}}\). Setting \(\operatorname{Im}V_{Z_{1}^{2}\overline{Z}_{2}}=0\) enables one to solve the imaginary part of the above relation to normalize the real Maurer-Cartan form \(\operatorname{Im}\mu^{1}_{Z_{1}}\). More generally, we have
**Lemma 4.3**.: _Let \(j,l\geq 0\). The real Maurer-Cartan form \(\operatorname{Im}\mu^{1}_{Z_{1}^{j+1}U^{l}}\) can be normalized by setting \(\operatorname{Im}V_{Z_{1}^{j+2}\overline{Z}_{2}U^{l}}=0\)._
On the fiber \(\mathscr{B}_{\boldsymbol{p}}\) and modulo the horizontal coframe, the recurrence relation of the real differential invariant \(\operatorname{Re}V_{Z_{1}^{2}\overline{Z}_{2}}\) is in turn
\[d\mathrm{Re}V_{Z_{1}^{2}\overline{Z}_{2}}\equiv 0.\]
Accordingly, when restricted to \(\mathscr{B}_{\boldsymbol{p}}\), we have \(\operatorname{Re}V_{Z_{1}^{2}\overline{Z}_{2}}\) independent of the holomorphic group parameters. Thus, further normalizations will not affect it in branches (A.ii.1) and (A.ii.2). Let us denote
\[r:=\frac{1}{2}\operatorname{Re}V_{Z_{1}^{2}\overline{Z}_{2}}(\boldsymbol{p}).\]
Here, the coefficient \(\frac{1}{2}\) ensures the consistency of this notation with Ebenfelt's invariant introduced in [6, eq. (7.2.13)]. By Proposition 3.4 and after setting \(\operatorname{Im}V_{Z_{1}^{2}\overline{Z}_{2}}=0\), we have \(r\neq 0\).
**Remark 4.4**.: We can regard \(r\) as a positive real number. Indeed, even if \(r<0\), the simple transformation \(z_{1}\mapsto\mathrm{i}z_{1}\) converts the coefficient \(r\) of the monomial \(z_{1}^{2}\bar{z}_{2}\) in the defining function of our partial normal form to \(-r\), while the coefficient of \(z_{1}z_{2}\bar{z}_{1}\) remains \(1\), unaffected by the transformation.
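Concretely, under \(z_{1}\mapsto\mathrm{i}z_{1}\) one computes
\[z_{1}^{2}\bar{z}_{2}\mapsto-z_{1}^{2}\bar{z}_{2},\qquad z_{2}\bar{z}_{1}^{2}\mapsto-z_{2}\bar{z}_{1}^{2},\qquad z_{1}z_{2}\bar{z}_{1}\mapsto\mathrm{i}(-\mathrm{i})\,z_{1}z_{2}\bar{z}_{1}=z_{1}z_{2}\bar{z}_{1},\qquad z_{1}\bar{z}_{1}\bar{z}_{2}\mapsto z_{1}\bar{z}_{1}\bar{z}_{2},\]
so the normalizations achieved so far are preserved while \(r\) changes sign.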
The next candidate for normalization is \(V_{Z_{1}^{2}\overline{Z}_{1}}\). After applying the previous normalizations and on the fiber \(\mathscr{B}_{\boldsymbol{p}}\), the recurrence relation of this invariant takes the form
\[\begin{split}& dV_{Z_{1}^{2}\overline{Z}_{1}}\equiv V_{Z_{1}^{2} \overline{Z}_{1}}\left(\alpha_{U}-3\operatorname{Re}\mu^{1}_{Z_{1}}+\cdots \right)+\frac{1}{8r}\left(2r\,V_{Z_{1}^{2}\overline{Z}_{1}}V_{Z_{2}^{2} \overline{Z}_{1}}-V_{Z_{1}^{2}\overline{Z}_{1}}^{2}V_{Z_{2}^{2}\overline{Z}_{2 }}-16\,r\right)\mu^{2}_{Z_{1}}\\ &\qquad+\frac{1}{8r}\left(-2r\,V_{Z_{1}\overline{Z}_{2}^{2}}V_{Z _{1}^{2}\overline{Z}_{1}}+V_{Z_{1}\overline{Z}_{1}^{2}}V_{Z_{2}\overline{Z}_{ 2}^{2}}V_{Z_{1}^{2}\overline{Z}_{1}}-16\,r^{2}\right)\overline{\mu}^{2}_{ \overline{Z}_{1}}+\cdots.\end{split} \tag{4.1}\]
The feasibility of practical normalization by means of this relation depends on whether \(r\neq 1\) or \(r=1\). In the former case, one can readily normalize \(\mu^{2}_{Z_{1}}\) by setting \(V_{Z_{1}^{2}\overline{Z}_{1}}=0\). But when \(r=1\), this approach is not applicable. In this case, the application of the above relation relies upon the vanishing/nonvanishing of \(V_{Z_{1}^{2}\overline{Z}_{1}}\). Accordingly, we shall split the next computations into the three branches
1. (A\({}^{\prime}\).ii.1): when \(r\neq 1\).
2. (A\({}^{\prime\prime}\).ii.1): when \(r=1\) and \(V_{Z_{1}^{2}\overline{Z}_{1}}=0\).
3. (A.ii.2): when \(r=1\) and \(V_{Z_{1}^{2}\overline{Z}_{1}}\neq 0\).
Actually, (A\({}^{\prime}\).ii.1) and (A\({}^{\prime\prime}\).ii.1) divide the branch (A.ii.1) into two parts. Moreover, (A.ii.2) coincides with its counterpart in (1.5). Before proceeding with the computations along the above three branches, we notice that, in light of the normalizations performed thus far, the remaining unnormalized Maurer-Cartan forms are
\[\mu^{1}_{U^{l+1}},\qquad\overline{\mu}^{1}_{U^{l+1}},\qquad\operatorname{Re}\mu^{1}_{Z_{1}^{j+1}U^{l}},\qquad\mu^{2}_{Z_{1}^{j}U^{l}},\qquad\overline{\mu}^{2}_{\overline{Z}_{1}^{j}U^{l}},\qquad\alpha_{U^{l+1}} \tag{4.2}\]
for \(j,l\geq 0\).
### Branch (A\({}^{\prime}\).ii.1)
Thanks to the assumption \(r\neq 1\), plainly setting
\[V_{Z_{1}^{2}\overline{Z}_{1}}=0\]
provides the opportunity of normalizing the complex Maurer-Cartan form \(\mu_{Z_{1}}^{2}\) on the fiber \(\mathscr{B}_{\boldsymbol{p}}\) -- and thus in a neighborhood of it -- by solving the greatly simplified recurrence relation (cf. (4.1))
\[0=dV_{Z_{1}^{2}\overline{Z}_{1}}\equiv 4{\rm i}\,V_{Z_{1}\overline{Z}_{1}}^{2}\,\overline{\mu}_{U}^{1}+V_{Z_{1}\overline{Z}_{1}}\,\big{(}4{\rm i}\,V_{Z_{1}\overline{Z}_{2}}\,\overline{\mu}_{U}^{2}-\mu_{Z_{1}^{2}}^{1}\big{)}-V_{Z_{2}\overline{Z}_{1}}\,\mu_{Z_{1}^{2}}^{2}-2r\,\overline{\mu}_{\overline{Z}_{1}}^{2}-2\,\mu_{Z_{1}}^{2}.\]
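Note that, together with its conjugate, this relation forms a linear system for the pair \((\mu_{Z_{1}}^{2},\overline{\mu}_{\overline{Z}_{1}}^{2})\) whose determinant equals \(4\,(1-r^{2})\); it is precisely the branch assumption \(r\neq 1\) (recall from Remark 4.4 that \(r\) may be taken positive) that renders this system uniquely solvable, in accordance with the dichotomy discussed after (4.1).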
More generally we have
**Lemma 4.5**.: _In branch (A\({}^{\prime}\).ii.1) and for each \(j,l\geq 0\), one normalizes the Maurer-Cartan form \(\mu_{Z_{1}^{j+1}U^{l}}^{2}\) by setting \(V_{Z_{1}^{j+2}\overline{Z}_{1}U^{l}}=0\)._
Now let us inspect the last two recurrence relations in (3.3), which have not been considered yet. Our computations [27] show that on the fiber \(\mathscr{B}_{\boldsymbol{p}}\) they take the simple form
\[\begin{split} dV_{Z_{2}^{2}\overline{Z}_{1}}&\equiv\big{(}3\operatorname{Re}\mu_{Z_{1}}^{1}-\alpha_{U}\big{)}\,V_{Z_{2}^{2}\overline{Z}_{1}},\\ dV_{Z_{2}^{2}\overline{Z}_{2}}&\equiv 2\,\big{(}3\operatorname{Re}\mu_{Z_{1}}^{1}-\alpha_{U}\big{)}\,V_{Z_{2}^{2}\overline{Z}_{2}}.\end{split}\]
Thus, the partially normalized pseudo-group acts by scaling on \(V_{Z_{2}^{2}\overline{Z}_{1}}\) and \(V_{Z_{2}^{2}\overline{Z}_{2}}\), when restricted to the fiber \(\mathscr{B}_{\boldsymbol{p}}\). Since the coefficients of the corresponding monomials \(z_{2}^{2}\bar{z}_{1}\) and \(z_{2}^{2}\bar{z}_{2}\) are zero in the initial expression (A.ii.1) in (1.5), they remain zero under holomorphic transformations. Consequently, here on the fiber \(\mathscr{B}_{\boldsymbol{p}}\) we have
\[V_{Z_{2}^{2}\overline{Z}_{1}}=V_{Z_{2}^{2}\overline{Z}_{2}}=0.\]
Let us now consider the three remaining order three recurrence relations (3.5). Our computations show that, at this stage of normalizations, the first relation is of the form
\[dV_{Z_{1}\overline{Z}_{1}U} \equiv\big{(}\frac{2\mathrm{i}\,(r\,V_{Z_{2}\overline{Z}_{1}U}-V_ {Z_{1}\overline{Z}_{2}U})V_{Z_{2}\overline{Z}_{1}}V_{Z_{1}\overline{Z}_{1}}}{ r^{2}-1}-1\big{)}\,\mu_{U}^{2}\] \[+\big{(}\frac{-2\mathrm{i}\,(r\,V_{Z_{1}\overline{Z}_{2}U}-V_{Z_{ 2}\overline{Z}_{1}U})V_{Z_{1}\overline{Z}_{2}}V_{Z_{1}\overline{Z}_{1}}}{r^{2} -1}-1\big{)}\,\overline{\mu}_{U}^{2}+\cdots.\]
On the fiber \(\mathscr{B}_{\boldsymbol{p}}\), the displayed part of the right hand side of the above relation simplifies to \(-2\mathrm{Re}\mu_{U}^{2}\). Thus, provided
\[V_{Z_{1}\overline{Z}_{1}U}=0,\]
it is possible in a local neighborhood of this fiber to normalize the real Maurer-Cartan form \(\mathrm{Re}\mu_{U}^{2}\), by solving the above recurrence relation. In general we have
**Lemma 4.6**.: _For \(l\geq 0\) and in branch (A\({}^{\prime}\).ii.1), solving the recurrence relation of \(V_{Z_{1}\overline{Z}_{1}U^{l+1}}=0\) offers the normalization of the real Maurer-Cartan form \(\mathrm{Re}\mu_{U^{l+1}}^{2}\)._
Next, let us consider the second recurrence relation \(dV_{Z_{1}\overline{Z}_{2}U}\) in (3.5). On the fiber \(\mathscr{B}_{\boldsymbol{p}}\), its rather lengthy expression can be written as
\[dV_{Z_{1}\overline{Z}_{2}U}\equiv-2r\,\mu_{U}^{1}-\overline{\mu}_{U}^{1}+\cdots\]
where "..." involves no nonzero coefficient of \(\mu_{U}^{1}\) or its conjugation. Thus, reminding that we excluded the specific case \(r=\frac{1}{2}\), this recurrence relation offers to normalize the Maurer-Cartan form \(\mu_{U}^{1}\) by setting
\[V_{Z_{1}\overline{Z}_{2}U}=0.\]
More generally, we have
**Lemma 4.7**.: _In branch (A\({}^{\prime}\).ii.1) and for each \(l\geq 0\), one normalizes the Maurer-Cartan form \(\mu^{1}_{U^{l+1}}\) by setting \(V_{Z_{1}\overline{Z}_{2}U^{l+1}}=0\)._
At this stage, the list (4.2) of the yet unnormalized Maurer-Cartan forms is reduced to the real forms
\[\operatorname{Re}\mu^{1}_{Z_{1}^{j+1}U^{l}},\qquad\operatorname{Im}\mu^{2}_{U^{l+1}},\qquad\alpha_{U^{l+1}} \tag{4.3}\]
for \(j,l\geq 0\). In the current order three, none of the lifted differential invariants has the potential of normalizing these forms. Thus, we have to proceed with the computations into the next order four.
#### 4.1.1. Order four of branch (A\({}^{\prime}\).ii.1)
Our tedious computations of all the recurrence relations in order four revealed that only one lifted differential invariant, namely \(\operatorname{Re}V_{Z_{1}^{3}\overline{Z}_{2}}\), is of help in the process of normalization. Restricted to the fiber \(\mathscr{B}_{\boldsymbol{p}}\), the recurrence relation of this invariant is
\[d\mathrm{Re}V_{Z_{1}^{3}\overline{Z}_{2}}\equiv-6r\operatorname{Re}\mu^{1}_{Z_{1}^{2}}+\cdots. \tag{4.4}\]
This equation suggests normalizing the real Maurer-Cartan form \(\operatorname{Re}\mu^{1}_{Z_{1}^{2}}\) by solving it after setting
\[\operatorname{Re}V_{Z_{1}^{3}\overline{Z}_{2}}=0.\]
More generally, we have
**Lemma 4.8**.: _For every \(j,l\geq 0\) and in branch (A\({}^{\prime}\).ii.1), one can normalize the real Maurer-Cartan form \(\operatorname{Re}\mu^{1}_{Z_{1}^{j+2}U^{l}}\) by specifying \(\operatorname{Re}V_{Z_{1}^{j+3}\overline{Z}_{2}U^{l}}=0\)._
Combining the above observation with Lemma 4.3, it becomes evident that setting \(V_{Z_{1}^{3+j}\overline{Z}_{2}U^{l}}=0\) is sufficient for normalizing the entire complex Maurer-Cartan form \(\mu^{1}_{Z_{1}^{j+2}U^{l}}\).
Some of the other order four recurrence relations are applicable in the normalization process only when the corresponding differential invariants do not vanish. For the sake of generality, and to avoid producing further subbranches, let us neglect the contribution of such recurrence relations and proceed to the next order in the hope of finding further general normalizations. Notice that at this stage, the list (4.3) of the yet unnormalized Maurer-Cartan forms is reduced to
\[\operatorname{Re}\mu^{1}_{Z_{1}U^{l}},\qquad\operatorname{Im}\mu^{2}_{U^{l+1} },\qquad\alpha_{U^{l+1}},\qquad l\geq 0. \tag{4.5}\]
#### 4.1.2. Order five of branch (A\({}^{\prime}\).ii.1)
We start this order with the lifted invariant \(V_{Z_{1}^{2}\overline{Z}_{1}^{2}\overline{Z}_{2}}\) which, on the fiber \(\mathscr{B}_{\boldsymbol{p}}\), has the recurrence relation
\[dV_{Z_{1}^{2}\overline{Z}_{1}^{2}\overline{Z}_{2}}\equiv 4\mathrm{i}\left(\overline{\mu}^{2}_{U}-(1+r^{2})\mu^{2}_{U}\right)+\cdots. \tag{4.6}\]
This relation provides the normalization of \(\operatorname{Im}\mu^{2}_{U}\) by setting \(\operatorname{Re}V_{Z_{1}^{2}\overline{Z}_{1}^{2}\overline{Z}_{2}}=0\). In general, we have

**Lemma 4.9**.: _For every \(l\geq 0\) and in branch (A\({}^{\prime}\).ii.1), the recurrence relation of \(\operatorname{Re}V_{Z_{1}^{2}\overline{Z}_{1}^{2}\overline{Z}_{2}U^{l}}=0\) can be solved to normalize the real Maurer-Cartan form \(\operatorname{Im}\mu^{2}_{U^{l+1}}\)._
Similar to the former order four, further normalizations of the Maurer-Cartan forms are available in this order only if certain fifth order differential invariants do not vanish. Again, purely for the sake of generality, we choose not to involve ourselves with these possibilities and move on to the next order six to seek further normalizations. Notice that the collection (4.5) of the yet unnormalized Maurer-Cartan forms is now reduced to
\[\operatorname{Re}\mu^{1}_{Z_{1}U^{l}},\qquad\alpha_{U^{l+1}},\qquad\text{for }l\geq 0. \tag{4.7}\]
#### 4.1.3. Order six of branch (A\({}^{\prime}\).ii.1)
Contrary to the previous two orders, in order six there appear plenty of differential invariants with the potential of normalizing Maurer-Cartan forms. Roughly speaking, the main reason for this change in the behaviour of the differential invariants is our assumption that the Levi matrix of \(M^{5}\) has rank zero at the point \(\boldsymbol{p}\). It forces the lower order invariants to undergo extra differentiations before reaching order three, where certain lifted invariants are normalized to nonzero constants.
Among the already mentioned differential invariants, we choose to consider the imaginary parts of two of them, namely \(V_{Z_{1}^{4}\overline{Z}_{2}^{2}}\) and \(V_{Z_{1}^{3}\overline{Z}_{1}\overline{Z}_{2}^{2}}\), whose recurrence relations on the fiber \(\mathscr{B}_{\boldsymbol{p}}\) are
\[\begin{split}& d\mathrm{Im}V_{Z_{1}^{4}\overline{Z}_{2}^{2}}\equiv \left(48\,\alpha_{UU}-192\,\mathrm{Re}\mu^{1}_{Z_{1}U}\right)r^{2}+\cdots,\\ & d\mathrm{Im}V_{Z_{1}^{3}\overline{Z}_{1}\overline{Z}_{2}^{2}} \equiv\left(24\,\alpha_{UU}-72\,\mathrm{Re}\mu^{1}_{Z_{1}U}\right)r+\cdots. \end{split} \tag{4.8}\]
By setting \(\mathrm{Im}V_{Z_{1}^{4}\overline{Z}_{2}^{2}}=\mathrm{Im}V_{Z_{1}^{3}\overline {Z}_{1}\overline{Z}_{2}^{2}}=0\), one readily solves the above two recurrence relations to normalize the real Maurer-Cartan forms \(\alpha_{UU}\) and \(\mathrm{Re}\mu^{1}_{Z_{1}U}\). More generally we have
**Lemma 4.10**.: _Let \(l\geq 0\). Then, in branch (A\({}^{\prime}\).ii.1)_
1. _one can normalize the Maurer-Cartan form_ \(\alpha_{U^{2+l}}\) _by setting_ \(\mathrm{Im}V_{Z_{1}^{4}\overline{Z}_{2}^{2}U^{l}}=0\)_._
2. _one can normalize the Maurer-Cartan form_ \(\mathrm{Re}\mu^{1}_{Z_{1}U^{l+1}}\) _by setting_ \(\mathrm{Im}V_{Z_{1}^{3}\overline{Z}_{1}\overline{Z}_{2}^{2}U^{l}}=0\)_._
We have thus succeeded in normalizing all but two of the Maurer-Cartan forms, namely
\[\mathrm{Re}\mu^{1}_{Z_{1}},\qquad\mathrm{and}\qquad\alpha_{U}.\]
This, according to [29], implies that the isotropy groups at the point \(\boldsymbol{p}\) of \(2\)-nondegenerate hypersurfaces in branch (A\({}^{\prime}\).ii.1) are of dimension \(\leq 2\). Thus, in this branch, we have reached Ershova's bound [8] for the corresponding isotropy groups. The maximum possible dimension two is enjoyed by those hypersurfaces on which the remaining unconsidered lifted invariants \(V_{J}\) vanish identically. These are actually Ershova's _model hypersurfaces_
\[v=z_{1}z_{2}\bar{z}_{1}+z_{1}\bar{z}_{1}\bar{z}_{2}+r\left(z_{1}^{2}\bar{z}_{ 2}+z_{2}\bar{z}_{1}^{2}\right),\qquad r\neq 1\]
which admit the real parts of the _dilation_ vector fields
\[\mathsf{D}_{1}:=z_{1}\partial_{z_{1}}-2\,z_{2}\partial_{z_{2}},\qquad\mathsf{ D}_{2}:=w\partial_{w}+z_{2}\partial_{z_{2}} \tag{4.9}\]
as the generators of their isotropy algebras.
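Both statements can be verified by a simple weight count: assigning to \((z_{1},z_{2},w)\) the weights \((1,-2,0)\) for \(\mathsf{D}_{1}\) and \((0,1,1)\) for \(\mathsf{D}_{2}\), every monomial in the defining equation of the model, as well as \(v\) itself, is homogeneous of weight \(0\), respectively \(1\); hence the real parts of \(\mathsf{D}_{1}\) and \(\mathsf{D}_{2}\) are indeed tangent to the model hypersurfaces.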
We are now ready to present the complete normal form of branch (A\({}^{\prime}\).ii.1). Recall that for every pair \(\ell=(l_{1},l_{2})\in\mathbb{N}_{0}^{2}\), we denoted \(z^{\ell}=z_{1}^{l_{1}}z_{2}^{l_{2}}\). Moreover, we let \(|\ell|:=l_{1}+l_{2}\) and \(\ell!:=l_{1}!\,l_{2}!\).
**Theorem 4.11**.: _Let \(M^{5}\subset\mathbb{C}^{3}\) be a \(2\)-nondegenerate real hypersurface of Levi non-uniform rank zero at the origin point \(\boldsymbol{p}=0\). If \(M^{5}\) belongs to the branch (A\({}^{\prime}\).ii.1), then it can be mapped through an origin-preserving transformation to the complete normal form_
\[v=z_{1}z_{2}\bar{z}_{1}+z_{1}\bar{z}_{1}\bar{z}_{2}+r\left(z_{1}^{2}\bar{z}_{2}+z_{2}\bar{z}_{1}^{2}\right)+V_{Z_{2}\overline{Z}_{2}U}\,z_{2}\bar{z}_{2}u+\sum_{|\ell_{1}|+|\ell_{2}|+l\geq 4}\frac{V_{Z^{\ell_{1}}\overline{Z}^{\ell_{2}}U^{l}}}{\ell_{1}!\,\ell_{2}!\,l!}\,z^{\ell_{1}}\bar{z}^{\ell_{2}}u^{l}, \tag{4.10}\]
_for a unique real number \(r\neq 1\). Moreover, regarding the conjugation relation, the coefficients \(V_{J}\) enjoy the cross-section_
\[\begin{split} 0\equiv V_{Z^{\ell}U^{l}}&=V_{Z_{1}^{j+1}Z_{2}^{k+1}\overline{Z}_{t}U^{l}}=V_{Z_{1}^{j+3}\overline{Z}_{2}U^{l}}=V_{Z_{1}^{j+2}\overline{Z}_{1}U^{l}}=V_{Z_{1}\overline{Z}_{1}U^{l+1}}\\ &=\mathrm{Re}V_{Z_{1}^{2}\overline{Z}_{1}^{2}\overline{Z}_{2}U^{l}}=\mathrm{Im}V_{Z_{1}^{2}\overline{Z}_{2}U^{l}}=\mathrm{Im}V_{Z_{1}^{4}\overline{Z}_{2}^{2}U^{l}}=\mathrm{Im}V_{Z_{1}^{3}\overline{Z}_{1}\overline{Z}_{2}^{2}U^{l}}\end{split}\]
_for \(t=1,2\), \(\ell\in\mathbb{N}_{0}^{2}\) and \(j,k,l\geq 0\), where \((j,k,l)\neq(0,0,0)\) when \(t=1\), in accordance with the normalization \(V_{Z_{1}Z_{2}\overline{Z}_{1}}=1\). Furthermore, the isotropy group of \(M^{5}\) at \(\boldsymbol{p}\) is of dimension at most two._
**Remark 4.12**.: We emphasize that in branch (A\({}^{\prime}\).ii.1), there exists an abundance of hypersurfaces with isotropy groups whose dimensions are strictly less than two. This depends on the vanishing/nonvanishing of some specific differential invariants on the fiber \(\mathscr{B}_{\boldsymbol{p}}\). For example, our computations show that if the third order lifted invariant \(V_{Z_{2}\overline{Z}_{2}U}\) does not vanish\({}^{3}\) at \(\boldsymbol{p}\), then one can normalize one of the two remaining Maurer-Cartan forms \(\alpha_{U},\mathrm{Re}\mu^{1}_{Z_{1}}\) by setting \(V_{Z_{2}\overline{Z}_{2}U}=1\). In that case, the isotropy group of the appearing normal form
Footnote 3: By evaluating the third recurrence relation in (3.5) on \(\mathscr{B}_{\boldsymbol{p}}\), one finds that vanishing/nonvanishing of \(V_{Z_{2}\overline{Z}_{2}U}\) is invariant on this fiber.
\[v=z_{1}z_{2}\bar{z}_{1}+z_{1}\bar{z}_{1}\bar{z}_{2}+r\left(z_{1}^{2}\bar{z}_{2}+z_{2}\bar{z}_{1}^{2}\right)+z_{2}\bar{z}_{2}u+\sum_{|\ell_{1}|+|\ell_{2}|+l\geq 4}\,\frac{V_{Z^{\ell_{1}}\overline{Z}^{\ell_{2}}U^{l}}}{\ell_{1}!\,\ell_{2}!\,l!}\,z^{\ell_{1}}\bar{z}^{\ell_{2}}u^{l},\qquad r\neq 1\]
is of dimension \(\leq 1\). In particular, when all appearing lifted invariants \(V_{J}\), \(\#J\geq 4\) vanish identically at \(\boldsymbol{p}\), then the isotropy group is exactly \(1\)-dimensional generated infinitesimally by
\[\mathsf{D}_{1}+2\,\mathsf{D}_{2}=z_{1}\partial_{z_{1}}+2w\,\partial_{w}.\]
### Branch (A\({}^{\prime\prime}\).ii.1)
Now, let us consider the second part of the branch (A.ii.1), where in addition to the original assumptions of this branch, we also suppose that after the partial normalizations made up to Lemma 4.3, we have
\[r=1\qquad\mathrm{and}\qquad V_{Z_{1}^{2}\overline{Z}_{1}}=0.\]
These assumptions will not disrupt the normalizations achieved in Lemmas 4.1 - 4.3. Thus, at this stage, we may assume the collection (4.2) as the remaining unnormalized Maurer-Cartan forms.
Inserting \(r=1\) into the first recurrence relation \(dV_{Z_{1}^{2}\overline{Z}_{1}}\) of the list (3.3) converts the term \(-2r\overline{\mu}_{\overline{Z}_{1}}^{2}-2\mu_{Z_{1}}^{2}\) into \(-4\mathrm{Re}\mu_{Z_{1}}^{2}\); unfortunately, this relation therefore no longer normalizes the entire complex Maurer-Cartan form \(\mu_{Z_{1}}^{2}\). It now provides the normalization of only the real part of \(\mu_{Z_{1}}^{2}\), after setting
\[\mathrm{Re}V_{Z_{1}^{2}\overline{Z}_{1}}=0.\]
Then -- in contrast to Lemma 4.5 -- we have in general
**Lemma 4.13**.: _In branch (A\({}^{\prime\prime}\).ii.1) and for each \(l\geq 0\), the recurrence relation of \(\mathrm{Re}V_{Z_{1}^{2}\overline{Z}_{1}U^{l}}=0\) can be solved to normalize the Maurer-Cartan form \(\mathrm{Re}\mu^{2}_{Z_{1}U^{l}}\)._
As in the branch (A\({}^{\prime}\).ii.1), the two lifted invariants \(V_{Z_{2}^{2}\overline{Z}_{1}}\) and \(V_{Z_{2}^{2}\overline{Z}_{2}}\) vanish identically at the point \(\boldsymbol{p}\) (see the paragraph after Lemma 4.5). Similarly, inspecting the recurrence relation of \(\mathrm{Im}V_{Z_{1}^{2}\overline{Z}_{1}}\) shows that our partially normalized pseudo-group acts by scaling on this real lifted invariant when restricted to the fiber \(\mathscr{B}_{\boldsymbol{p}}\). Thus, the value of \(\mathrm{Im}V_{Z_{1}^{2}\overline{Z}_{1}}\) (and hence of \(V_{Z_{1}^{2}\overline{Z}_{1}}\)) remains zero on this bundle under holomorphic transformations.
Again similar to the branch (A\({}^{\prime}\).ii.1), here the lifted invariant \(V_{Z_{1}\overline{Z}_{1}U}\) provides the opportunity of normalizing the real Maurer-Cartan form \(\mathrm{Re}\mu_{U}^{2}\). Thus, Lemma 4.6 works as well in this branch. Moreover, when restricted to the fiber \(\mathscr{B}_{\boldsymbol{p}}\), the recurrence relation of \(V_{Z_{1}\overline{Z}_{2}U}\) exhibits the term \(-2\mu_{U}^{1}-\overline{\mu}_{1}^{1}\) and thus -- again similar to (A\({}^{\prime}\).ii.1) -- one can normalize the Maurer-Cartan form \(\mu_{U}^{1}\) by setting \(V_{Z_{1}\overline{Z}_{2}U}=0\). Consequently, the general Lemma 4.7 holds also in this case.
The only remaining third order lifted invariant, \(V_{Z_{2}\overline{Z}_{2}U}\), is of no use for normalizing further Maurer-Cartan forms unless it is nonzero. Aiming for generality, we do not consider this possibility here.
Summing up, the list of the yet unnormalized Maurer-Cartan forms (4.2) is now reduced to
\[{\rm Re}\mu^{1}_{Z^{j+1}_{1}U^{l}},\qquad\mu^{2}_{Z^{j+2}_{1}U^{l}},\qquad{\rm Im}\mu^{2}_{Z_{1}U^{l}},\qquad{\rm Im}\mu^{2}_{U^{l+1}},\qquad\alpha_{U^{l+1}} \tag{4.11}\]
for \(j,l\geq 0\). Let us continue the normalizations in the next order four.
#### 4.2.1. Order four of branch (A\({}^{\prime\prime}\).ii.1)
Consider the lifted invariant \(V_{Z_{1}^{3}\overline{Z}_{1}}\), with the following recurrence relation on \(\mathscr{B}_{\boldsymbol{p}}\)
\[dV_{Z_{1}^{3}\overline{Z}_{1}}\equiv\big{(}\alpha_{U}-4\,{\rm Re}\mu^{1}_{Z_{1}}\big{)}V_{Z_{1}^{3}\overline{Z}_{1}}-3\,\mu^{2}_{Z^{2}_{1}}.\]
Thus, setting \(V_{Z_{1}^{3}\overline{Z}_{1}}=0\) normalizes the Maurer-Cartan form \(\mu^{2}_{Z^{2}_{1}}\). More generally we have
**Lemma 4.14**.: _In branch (A\({}^{\prime\prime}\).ii.1) and for each \(j,l\geq 0\), one can normalize the Maurer-Cartan form \(\mu^{2}_{Z^{j+2}_{1}U^{l}}\) by setting \(V_{Z^{j+3}_{1}\overline{Z}_{1}U^{l}}=0\)._
Furthermore, on the fiber \(\mathscr{B}_{\boldsymbol{p}}\), the recurrence relation of the lifted invariant \(V_{Z_{1}^{3}\overline{Z}_{2}}\) is exactly as (4.4) with \(r=1\). Then, Lemma 4.8 applies in this branch as well. Unfortunately, the remaining fourth order recurrence relations have no general effect on the normalization process, and we proceed to the next order for further possible normalizations.
#### 4.2.2. Orders five and six of branch (A\({}^{\prime\prime}\).ii.1)
As in the fifth order of branch (A\({}^{\prime}\).ii.1), here only the recurrence relation of \(V_{Z_{1}^{2}\overline{Z}_{1}^{2}\overline{Z}_{2}}\) is of use. Indeed, our computations show that the corresponding recurrence relation (4.6) holds also in this case with \(r=1\), and thus setting \({\rm Re}V_{Z_{1}^{2}\overline{Z}_{1}^{2}\overline{Z}_{2}}=0\) enables one to solve it for \({\rm Im}\mu^{2}_{U}\). Moreover, the general Lemma 4.9 holds also in this branch.
We proceed into the next order with the hope of detecting further general normalizations. Notice that at this stage, the collection of yet unnormalized Maurer-Cartan forms (4.11) is reduced to
\[{\rm Re}\mu^{1}_{Z_{1}U^{l}},\qquad{\rm Im}\mu^{2}_{Z_{1}U^{l}},\qquad\alpha_{U^{l+1}},\qquad{\rm for}\ l\geq 0. \tag{4.12}\]
In order six, we can still benefit from the recurrence relations (4.8) with \(r=1\) to normalize the Maurer-Cartan forms \(\alpha_{UU}\) and \({\rm Re}\mu^{1}_{Z_{1}U}\) after setting
\[{\rm Im}V_{Z^{4}_{1}\overline{Z}^{2}_{2}}={\rm Im}V_{Z^{3}_{1}\overline{Z}_{1}\overline{Z}^{2}_{2}}=0.\]
Indeed, the general Lemma 4.10 holds also in this branch.
Our computations show that on the fiber \(\mathscr{B}_{\boldsymbol{p}}\) we have moreover the sixth order relation
\[dV_{Z^{2}_{1}Z_{2}\overline{Z}^{3}_{1}}\equiv-48{\rm i}\,{\rm Im}\mu^{2}_{Z_{1}U}+\cdots\]
where \("\cdots"\) stands for terms which are independent of the Maurer-Cartan form \({\rm Im}\mu^{2}_{Z_{1}U}\). Setting
\[{\rm Im}V_{Z^{2}_{1}Z_{2}\overline{Z}^{3}_{1}}=0,\]
one can solve this recurrence relation for the real Maurer-Cartan form \({\rm Im}\mu^{2}_{Z_{1}U}\). More generally we have
**Lemma 4.15**.: _For every \(l\geq 0\) and in branch (A\({}^{\prime\prime}\).ii.1), one can normalize the Maurer-Cartan form \({\rm Im}\mu^{2}_{Z_{1}U^{l+1}}\) by setting \({\rm Im}V_{Z^{2}_{1}Z_{2}\overline{Z}^{3}_{1}U^{l}}=0\)._
At this stage, the collection of the remaining unnormalized Maurer-Cartan forms (4.12) is notably reduced to only three real forms
\[{\rm Re}\mu^{1}_{Z_{1}},\qquad{\rm Im}\mu^{2}_{Z_{1}},\qquad\alpha_{U},\]
which, according to [29], parameterize the isotropy group of the real hypersurfaces belonging to this branch. This is in complete agreement with Ershova's result that the isotropy groups associated with these hypersurfaces are of dimensions \(\leq 3\). For the _model hypersurface_
\[v=z_{1}z_{2}\bar{z}_{1}+z_{1}\bar{z}_{1}\bar{z}_{2}+z_{1}^{2}\bar{z}_{2}+z_{2} \bar{z}_{1}^{2},\]
this group is of the maximum dimension three, generated infinitesimally by the real part of
\[\mathsf{X}:=\mathrm{i}\,z_{1}\partial_{z_{2}} \tag{4.13}\]
together with the real parts of the two dilation fields (4.9).
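Tangency of \(\mathrm{Re}\,\mathsf{X}\) can again be verified directly (our check): with \(\Phi=z_{1}z_{2}\bar{z}_{1}+z_{1}\bar{z}_{1}\bar{z}_{2}+z_{1}^{2}\bar{z}_{2}+z_{2}\bar{z}_{1}^{2}\),

\[\mathrm{i}\,z_{1}\Phi_{z_{2}}-\mathrm{i}\,\bar{z}_{1}\Phi_{\bar{z}_{2}}=\mathrm{i}\,z_{1}\big{(}z_{1}\bar{z}_{1}+\bar{z}_{1}^{2}\big{)}-\mathrm{i}\,\bar{z}_{1}\big{(}z_{1}\bar{z}_{1}+z_{1}^{2}\big{)}=0,\]

and \(\mathsf{X}\) has no \(\partial_{w}\)-component. For a general \(r\) the same computation leaves the residue \(\mathrm{i}\,(1-r)\,(z_{1}^{2}\bar{z}_{1}-z_{1}\bar{z}_{1}^{2})\), which is why this extra symmetry is present only when \(r=1\).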
**Theorem 4.16**.: _Every \(2\)-nondegenerate real hypersurface \(M^{5}\subset\mathbb{C}^{3}\) belonging to the branch (\(\mathsf{A}^{\prime\prime}\).ii.1) can be transformed to the complete normal form_
\[v=z_{1}z_{2}\bar{z}_{1}+z_{1}\bar{z}_{1}\bar{z}_{2}+z_{1}^{2}\bar{z}_{2}+z_{2} \bar{z}_{1}^{2}+V_{Z_{2}\overline{Z}_{2}U}\,z_{2}\bar{z}_{2}u+\sum_{|\ell_{1}| +|\ell_{2}|+l\geq 4}\frac{V_{Z^{\ell_{1}}\overline{Z}^{\ell_{2}}U^{l}}}{\ell_{1}! \,\ell_{2}!\,l!}\,z^{\ell_{1}}\bar{z}^{\ell_{2}}u^{l}, \tag{4.14}\]
_where, regarding the conjugation relation, the coefficients \(V_{J}\) enjoy the cross-section_
\[\begin{split} 0\equiv V_{Z^{\ell}U^{l}}&=V_{Z^{j+1}_{1}Z^{k+1}_{2}\overline{Z}_{t}U^{l}}=V_{Z^{j+3}_{1}\overline{Z}_{t}U^{l}}=\mathrm{Re}V_{Z^{2}_{1}\overline{Z}_{1}U^{l}}=V_{Z_{1}\overline{Z}_{t}U^{l+1}}\\ &=\mathrm{Re}V_{Z^{2}_{1}\overline{Z}^{2}_{1}\overline{Z}_{2}U^{l}}=\mathrm{Im}V_{Z^{2}_{1}\overline{Z}_{2}U^{l}}=\mathrm{Im}V_{Z^{4}_{1}\overline{Z}^{2}_{2}U^{l}}=\mathrm{Im}V_{Z^{3}_{1}\overline{Z}_{1}\overline{Z}^{2}_{2}U^{l}}=\mathrm{Im}V_{Z^{2}_{1}Z_{2}\overline{Z}^{3}_{1}U^{l}}\end{split}\]
_for \(t=1,2\), \(\ell\in\mathbb{N}_{0}^{2}\) and \(j,k,l\geq 0\). Moreover, the isotropy group of \(M^{5}\) is at most \(3\)-dimensional._
As in the case of (\(\mathsf{A}^{\prime}\).ii.1), we remark that there exists an enormous number of hypersurfaces in this branch with the isotropy groups of dimensions \(\leq 3\). For example, if the third order real invariant \(V_{Z_{2}\overline{Z}_{2}U}\) does not vanish at \(\mathscr{B}_{\boldsymbol{p}}\), then we can set it to \(1\) and normalize the Maurer-Cartan form \(\alpha_{U}\). In this case, setting \(\mathrm{Re}V_{Z^{1}_{1}Z_{2}\overline{Z}_{1}\overline{Z}_{2}}=0\) enables one to even normalize in addition the Maurer-Cartan form \(\mathrm{Im}\mu^{2}_{Z_{1}}\). Accordingly, the isotropy groups of the appearing normal form hypersurfaces
\[v=z_{1}z_{2}\bar{z}_{1}+z_{1}\bar{z}_{1}\bar{z}_{2}+z_{1}^{2}\bar{z}_{2}+z_{2} \bar{z}_{1}^{2}+z_{2}\bar{z}_{2}u+\sum_{|\ell_{1}|+|\ell_{2}|+l\geq 4}\frac{V_{Z^{ \ell_{1}}\overline{Z}^{\ell_{2}}U^{l}}}{\ell_{1}!\,\ell_{2}!\,l!}\,z^{\ell_{1 }}\bar{z}^{\ell_{2}}u^{l}\]
are of dimensions either zero or one. In particular, when the above lifted invariants \(V_{J}\), \(\#J\geq 4\) vanish identically, then the isotropy group of the resulting hypersurface
\[v=z_{1}z_{2}\bar{z}_{1}+z_{1}\bar{z}_{1}\bar{z}_{2}+z_{1}^{2}\bar{z}_{2}+z_{2} \bar{z}_{1}^{2}+z_{2}\bar{z}_{2}u\]
is \(1\)-dimensional which -- corresponding to the remaining unnormalized real Maurer-Cartan form \(\mathrm{Re}\mu^{1}_{Z_{1}}\) -- is generated by the real part of the single infinitesimal generator
\[\mathsf{D}_{1}+2\,\mathsf{D}_{2}=z_{1}\partial_{z_{1}}+2w\,\partial_{w}.\]
### Branch (A.ii.2)
As in the branch (A.ii.1), here the partially lifted invariant \(V_{Z_{1}Z_{2}\overline{Z}_{1}}\) is nonzero and we are still permitted to set it to \(1\). Consequently, the normalizations introduced in Lemmas 4.1 - 4.3 remain available. Thus, at this point, we may view (4.2) as the current collection of the remaining unnormalized Maurer-Cartan forms.
We shall assume here that \(r=1\) and that, contrary to the subbranch (A\({}^{\prime\prime}\).ii.1), the crucial lifted invariant \(V_{Z_{1}^{2}\overline{Z}_{1}}\) is now nonzero. By these assumptions, we have the recurrence relation (4.1) as
\[dV_{Z^{2}_{1}\overline{Z}_{1}}\equiv V_{Z^{2}_{1}\overline{Z}_{1}}\big{(}\alpha_{U}-3\,\mathrm{Re}\mu^{1}_{Z_{1}}\big{)}-4\,\mathrm{Re}\mu^{2}_{Z_{1}}.\]
To simultaneously normalize both Maurer-Cartan forms \(\mathrm{Re}\mu^{1}_{Z_{1}}\) and \(\mathrm{Re}\mu^{2}_{Z_{1}}\) through the above relation, we set the nonzero lifted invariant \(V_{Z_{1}^{2}\overline{Z}_{1}}\) to some imaginary constant, say
\[V_{Z_{1}^{2}\overline{Z}_{1}}=2\mathrm{i}.\]
More generally we have
**Lemma 4.17**.: _In branch (A.ii.2) and for each \(j,l\geq 0\) with \((j,l)\neq(0,0)\), setting \(V_{Z_{1}^{j+2}\overline{Z}_{1}U^{l}}=0\) simultaneously normalizes the two real Maurer-Cartan forms \(\mathrm{Re}\mu^{1}_{Z_{1}^{j+1}U^{l}}\) and \(\mathrm{Re}\mu^{2}_{Z_{1}^{j+1}U^{l}}\)._
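To see concretely why the single purely imaginary specification \(V_{Z_{1}^{2}\overline{Z}_{1}}=2\mathrm{i}\) normalizes two real forms at once (our remark), note that a constant lifted invariant has vanishing differential, so the relation displayed before Lemma 4.17 becomes

\[0\equiv 2\mathrm{i}\left(\alpha_{U}-3\,\mathrm{Re}\mu^{1}_{Z_{1}}\right)-4\,\mathrm{Re}\mu^{2}_{Z_{1}}.\]

Since \(\alpha_{U}\), \(\mathrm{Re}\mu^{1}_{Z_{1}}\) and \(\mathrm{Re}\mu^{2}_{Z_{1}}\) are real forms, the real and imaginary parts decouple into \(\mathrm{Re}\mu^{2}_{Z_{1}}\equiv 0\) and \(\mathrm{Re}\mu^{1}_{Z_{1}}\equiv\frac{1}{3}\,\alpha_{U}\); a real constant in place of \(2\mathrm{i}\) would produce only a single real equation.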
Based on our computations, it is still possible to normalize \(\mathrm{Re}\mu^{2}_{U}\) and \(\mu^{1}_{U}\) by setting respectively \(V_{Z_{1}\overline{Z}_{1}U}=0\) and \(V_{Z_{1}\overline{Z}_{2}U}=0\). More generally, the observations of Lemmas 4.6 and 4.7 also hold in this branch.
By the normalizations applied thus far, the collection (4.2) of our remaining unnormalized Maurer-Cartan forms is now reduced to
\[\mathrm{Im}\mu^{2}_{Z_{1}^{j}U^{l}},\qquad\alpha_{U^{l+1}},\qquad\mathrm{for} \;j,l\geq 0. \tag{4.15}\]
In order to pursue additional normalizations, let us proceed to the next orders.
#### 4.3.1. Orders four and five of branch (A.ii.2)
In order four and on the fiber \(\mathscr{B}_{\boldsymbol{p}}\), we have the recurrence relation
\[d\mathrm{Re}V_{Z_{1}^{3}\overline{Z}_{2}}\equiv 3\,\mathrm{Im}\mu^{2}_{Z_{1}^{2}}+\cdots\]
where the \("\cdots"\) part is independent of the Maurer-Cartan form \(\mathrm{Im}\mu^{2}_{Z_{1}^{2}}\). Then, by setting
\[\mathrm{Re}V_{Z_{1}^{3}\overline{Z}_{2}}=0\]
this normalizes the real form \(\mathrm{Im}\mu^{2}_{Z_{1}^{2}}\). More generally
**Lemma 4.18**.: _For every \(j,l\geq 0\) and in branch (A.ii.2), one can normalize the real Maurer-Cartan form \(\mathrm{Im}\mu^{2}_{Z_{1}^{j+2}U^{l}}\) by setting \(\mathrm{Re}V_{Z_{1}^{j+3}\overline{Z}_{2}U^{l}}=0\)._
This observation reduces the collection (4.15) of yet unnormalized Maurer-Cartan forms to
\[\mathrm{Im}\mu^{2}_{U^{l+1}},\qquad\mathrm{Im}\mu^{2}_{Z_{1}U^{l}},\qquad \alpha_{U^{l+1}},\qquad\mathrm{for}\;l\geq 0. \tag{4.16}\]
Similar to the previous branches, we do not find any additional lifted invariants in order four that would allow us to achieve _general_ normalizations of extra Maurer-Cartan forms. Therefore, we explore order five, where our initial candidate for a lifted invariant is \(\mathrm{Re}V_{Z_{1}^{2}\overline{Z}_{1}^{2}\overline{Z}_{2}}\), whose recurrence relation on the fiber \(\mathscr{B}_{\boldsymbol{p}}\) is
\[d\mathrm{Re}V_{Z_{1}^{2}\overline{Z}_{1}^{2}\overline{Z}_{2}}\equiv 12\,\mathrm{Im}\mu^{2}_{U}+\cdots.\]
One can solve this relation for the real Maurer-Cartan form \(\mathrm{Im}\mu^{2}_{U}\) after setting
\[\mathrm{Re}V_{Z_{1}^{2}\overline{Z}_{1}^{2}\overline{Z}_{2}}=0.\]
More generally we have
**Lemma 4.19**.: _In branch (A.ii.2) and for every \(l\geq 0\), one can normalize \(\mathrm{Im}\mu^{2}_{U^{l+1}}\) and \(\mathrm{Im}\mu^{2}_{Z_{1}U^{l+1}}\) by setting respectively \(\mathrm{Re}V_{Z_{1}^{2}\overline{Z}_{1}^{2}\overline{Z}_{2}U^{l}}=0\) and \(\mathrm{Re}V_{Z_{1}^{3}\overline{Z}_{1}^{2}\overline{Z}_{2}U^{l}}=0\)._
Next, we shall look for suitable lifted invariants to normalize the remaining real Maurer-Cartan forms \(\operatorname{Im}\mu^{2}_{Z_{1}}\) and \(\alpha_{U^{l+1}},l\geq 0\). Unfortunately, as before, we do not find such invariants in order five. Thus, we move to order six. We select again in this order the imaginary part of the lifted invariant \(V_{Z_{1}^{4}\overline{Z}_{2}^{2}}\), whose recurrence relation at \(\mathscr{B}_{\mathbf{p}}\) is
\[d\operatorname{Im}V_{Z_{1}^{4}\overline{Z}_{2}^{2}}\equiv-16\,\alpha_{UU}+\cdots.\]
Thus, setting \(\operatorname{Im}V_{Z_{1}^{4}\overline{Z}_{2}^{2}}=0\) enables one to solve the above relation to normalize the Maurer-Cartan form \(\alpha_{UU}\). In general we have
**Lemma 4.20**.: _For every \(l\geq 0\) and in branch (A.ii.2), one can normalize the real Maurer-Cartan form \(\alpha_{U^{l+2}}\) by setting \(\operatorname{Im}V_{Z_{1}^{4}\overline{Z}_{2}^{2}U^{l}}=0\)._
By this observation, the two real forms
\[\operatorname{Im}\mu^{2}_{Z_{1}},\qquad\mathrm{and}\qquad\alpha_{U}\]
are the only remaining Maurer-Cartan forms that have not been normalized. Thus, the dimensions of the isotropy groups associated to the hypersurfaces of branch (A.ii.2) are at most two, verifying Ershova's computations [8]. The maximum dimension is enjoyed by the _model hypersurface_
\[v=z_{1}z_{2}\bar{z}_{1}+z_{1}\bar{z}_{1}\bar{z}_{2}+z_{1}^{2}\bar{z}_{2}+z_{2} \bar{z}_{1}^{2}+\mathrm{i}\left(z_{1}^{2}\bar{z}_{1}-z_{1}\bar{z}_{1}^{2}\right)\]
which admits the real parts of the _dilation_ and _linear_ infinitesimal transformations
\[\mathsf{D}_{3}:=z_{1}\partial_{z_{1}}+z_{2}\partial_{z_{2}}+3\,w\partial_{w}, \qquad\mathsf{L}:=\mathrm{i}\,z_{1}\partial_{z_{2}}, \tag{4.17}\]
as the generators of its corresponding isotropy algebra.
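Both statements can be confirmed by a direct check (ours). Every monomial of the right-hand side \(\Phi\) of the model is of total degree three in \((z_{1},z_{2},\bar{z}_{1},\bar{z}_{2})\), so Euler's identity gives

\[\sum_{j=1}^{2}\big{(}z_{j}\Phi_{z_{j}}+\bar{z}_{j}\Phi_{\bar{z}_{j}}\big{)}=3\,\Phi,\qquad\big{(}3w\partial_{w}+3\bar{w}\partial_{\bar{w}}\big{)}(v)=3\,v,\]

whence \((\mathsf{D}_{3}+\overline{\mathsf{D}}_{3})(v-\Phi)=3\,(v-\Phi)\) vanishes on the model; note that the same argument applies to every cubic model hypersurface of this paper. For \(\mathsf{L}=\mathrm{i}\,z_{1}\partial_{z_{2}}\), the computation coincides with the one carried out above for \(\mathsf{X}\) in branch (A\({}^{\prime\prime}\).ii.1), since the extra term \(\mathrm{i}(z_{1}^{2}\bar{z}_{1}-z_{1}\bar{z}_{1}^{2})\) is free of \(z_{2},\bar{z}_{2}\).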
**Theorem 4.21**.: _Every \(2\)-nondegenerate real hypersurface \(M^{5}\) of \(\mathbb{C}^{3}\) belonging to the branch (A.ii.2) can be transformed to the complete normal form_
\[v=z_{1}z_{2}\bar{z}_{1}+z_{1}\bar{z}_{1}\bar{z}_{2}+z_{1}^{2}\bar{z}_{2}+z_{2} \bar{z}_{1}^{2}+\mathrm{i}\left(z_{1}^{2}\bar{z}_{1}-z_{1}\bar{z}_{1}^{2} \right)+V_{Z_{2}\overline{Z}_{2}U}z_{2}\bar{z}_{2}u+\sum_{|\ell_{1}|+|\ell_{2}| +l\geq 4}\frac{V_{Z^{\ell_{1}}\overline{Z}^{\ell_{2}}U^{l}}}{\ell_{1}!\,\ell _{2}!\,l!}\ z^{\ell_{1}}\bar{z}^{\ell_{2}}u^{l} \tag{4.18}\]
_where, regarding the conjugation relation, the coefficients \(V_{J}\) enjoy the cross-section_
\[\begin{split} 0\equiv V_{Z^{\ell}U^{l}}&=V_{Z_{1}^{j+1}Z_{2}^{k+1}\overline{Z}_{t}U^{l}}=\operatorname{Re}V_{Z_{1}^{j+3}\overline{Z}_{2}U^{l}}=V_{Z_{1}^{j+2}\overline{Z}_{1}U^{l}}=V_{Z_{1}\overline{Z}_{t}U^{l+1}}\\ &=\operatorname{Im}V_{Z_{1}^{2}\overline{Z}_{2}U^{l}}=\operatorname{Re}V_{Z_{1}^{2}\overline{Z}_{1}^{2}\overline{Z}_{2}U^{l}}=\operatorname{Re}V_{Z_{1}^{3}\overline{Z}_{1}^{2}\overline{Z}_{2}U^{l}}=\operatorname{Im}V_{Z_{1}^{4}\overline{Z}_{2}^{2}U^{l}}\end{split}\]
_for \(t=1,2\), \(\ell\in\mathbb{N}_{0}^{2}\) and \(j,k,l\geq 0\). Moreover, the isotropy group of \(M^{5}\) is of dimension \(\leq 2\)._
As in the former branches, we emphasize that one finds a large number of hypersurfaces in this branch with isotropy groups of dimensions strictly less than two. For example, if \(V_{Z_{2}\overline{Z}_{2}U}\) does not vanish on the hypersurface, then setting it to \(1\) and \(\operatorname{Re}V_{Z_{1}^{2}Z_{2}\overline{Z}_{1}\overline{Z}_{2}}=0\) provides the opportunity of normalizing simultaneously both the remaining Maurer-Cartan forms \(\alpha_{U}\) and \(\operatorname{Im}\mu^{2}_{Z_{1}}\). In this case, the resulting normal form hypersurfaces
\[v=z_{1}z_{2}\bar{z}_{1}+z_{1}\bar{z}_{1}\bar{z}_{2}+z_{1}^{2}\bar{z}_{2}+z_{2} \bar{z}_{1}^{2}+\mathrm{i}\left(z_{1}^{2}\bar{z}_{1}-z_{1}\bar{z}_{1}^{2} \right)+z_{2}\bar{z}_{2}u+\sum_{|\ell_{1}|+|\ell_{2}|+l\geq 4}\frac{V_{Z^{\ell_{1}} \overline{Z}^{\ell_{2}}U^{l}}}{\ell_{1}!\,\ell_{2}!\,l!}\ z^{\ell_{1}}\bar{z}^{ \ell_{2}}u^{l}\]
admit only the trivial isotropy group at \(\mathbf{p}\).
## 5. Branch (A.ii.3)
In this branch, we assume -- contrary to (A.ii.1) and (A.ii.2) -- that \(\Delta_{12}\) vanishes at \(\mathbf{p}\) but
\[\Delta_{23}=v_{z_{1}z_{2}\bar{z}_{2}}\cdot v_{z_{2}^{2}\bar{z}_{1}}-v_{z_{1}z_{2} \bar{z}_{1}}\cdot v_{z_{2}^{2}\bar{z}_{2}}\]
is nonzero at this point. Therefore, at least one of the two products \(v_{z_{1}z_{2}\bar{z}_{2}}\cdot v_{z_{2}^{2}\bar{z}_{1}}\) and \(v_{z_{1}z_{2}\bar{z}_{1}}\cdot v_{z_{2}^{2}\bar{z}_{2}}\) is nonzero at \(\mathbf{p}\). If the former product does not vanish at this point, then by simply interchanging the roles of \(z_{1}\) and \(z_{2}\), we can assume instead that \(v_{z_{1}z_{2}\bar{z}_{1}}\) and \(v_{z_{1}^{2}\bar{z}_{2}}\) are nonzero. This brings us back to the former cases (A.ii.1) and (A.ii.2). Thus, in this branch, we assume that the jet coordinates \(v_{z_{1}z_{2}\bar{z}_{1}}\) and \(v_{z_{2}^{2}\bar{z}_{2}}\) (and their lifts) do not vanish at \(\mathbf{p}\). In addition, we assume here that \(v_{z_{1}^{2}\bar{z}_{2}}\) and its lifts are zero at \(\mathbf{p}\), since otherwise we revert again to the previous branches (A.ii.1) and (A.ii.2).
Proceeding along the same lines as the proof of Proposition 3.4, one proves also in this case that the two combinations \(\Delta_{12}\) and \(\Delta_{13}\) vanish identically at \(\mathbf{p}\). Then \(\Delta_{23}\) remains nonzero under holomorphic transformations.
We continue the normalizations by considering the recurrence relation of \(V_{Z_{1}Z_{2}\overline{Z}_{1}}\) in (3.3). By the assumptions of this branch, this lift of \(v_{z_{1}z_{2}\bar{z}_{1}}\) remains nonzero, and we can therefore set it to \(1\) in order to normalize the Maurer-Cartan form \(\mu^{2}_{Z_{2}}\). Hence, similar to the former branches (A.ii.1) and (A.ii.2), the normalizations observed in Lemma 4.1 are available here.
Next, setting \(V_{Z_{1}Z_{2}\overline{Z}_{2}}=0\), the corresponding recurrence relation on the fiber \(\mathscr{B}_{\mathbf{p}}\) is
\[0=dV_{Z_{1}Z_{2}\overline{Z}_{2}}\equiv-\overline{\mu}^{1}_{\overline{Z}_{2}}- V_{Z_{2}^{2}\overline{Z}_{2}}\,\mu^{2}_{Z_{1}}.\]
We can solve this equation to normalize the Maurer-Cartan form \(\overline{\mu}^{1}_{\overline{Z}_{2}}\). More generally
**Lemma 5.1**.: _For every \(j,k,l\geq 0\) and in branch (A.ii.3), one can normalize the Maurer-Cartan form \(\overline{\mu}^{1}_{\overline{Z}_{1}^{j}\overline{Z}_{2}^{k+1}U^{l}}\) by setting \(V_{Z_{1}Z_{2}\overline{Z}_{1}^{j}\overline{Z}_{2}^{k+1}U^{l}}=0\)._
Next, let us consider the recurrence relation of \(V_{Z_{1}^{2}\overline{Z}_{1}}\). On the fiber \(\mathscr{B}_{\mathbf{p}}\), we have
\[dV_{Z_{1}^{2}\overline{Z}_{1}}\equiv V_{Z_{1}^{2}\overline{Z}_{1}}\left(\alpha_{U}-2\,\mu^{1}_{Z_{1}}-\overline{\mu}^{1}_{\overline{Z}_{1}}\right)-2\,\mu^{2}_{Z_{1}}.\]
This suggests setting \(V_{Z_{1}^{2}\overline{Z}_{1}}=0\), which plainly normalizes the Maurer-Cartan form \(\mu^{2}_{Z_{1}}\). In general we have
**Lemma 5.2**.: _In branch (A.ii.3) and for every \(j,l\geq 0\), one can normalize the Maurer-Cartan form \(\mu^{2}_{Z_{1}^{j+1}U^{l}}\) by setting \(V_{Z_{1}^{j+2}\overline{Z}_{1}U^{l}}=0\)._
Let us examine the recurrence relation of \(V_{Z_{2}^{2}\overline{Z}_{1}}\), which on the fiber \(\mathscr{B}_{\mathbf{p}}\) is now of the form
\[dV_{Z_{2}^{2}\overline{Z}_{1}}\equiv V_{Z_{2}^{2}\overline{Z}_{1}}\left(2\,\mu^{1}_{Z_{1}}+\overline{\mu}^{1}_{\overline{Z}_{1}}-\alpha_{U}\right).\]
Thus, after applying the above normalizations, the pseudo-group acts by scaling on \(V_{Z_{2}^{2}\overline{Z}_{1}}\) when restricted to \(\mathscr{B}_{\mathbf{p}}\). In order to keep the shape of the expressions in branch (A.ii.3), we let it be nonzero at \(\mathbf{p}\), and thus we may specify
\[V_{Z_{2}^{2}\overline{Z}_{1}}=2\]
which provides the normalization of the Maurer-Cartan form \(\mu^{1}_{Z_{1}}\) by solving the above recurrence relation. More generally, we have
**Lemma 5.3**.: _For every \(j,l\geq 0\) and in branch (A.ii.3), solving the recurrence relation of \(V_{Z_{1}^{j}Z_{2}^{2}\overline{Z}_{1}U^{l}}=0\) provides the normalization of the Maurer-Cartan form \(\mu^{1}_{Z_{1}^{j+1}U^{l}}\)._
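At the base step of Lemma 5.3, the mechanism is transparent (our computation): with \(V_{Z_{2}^{2}\overline{Z}_{1}}\equiv 2\) constant, the recurrence relation displayed above becomes

\[0\equiv 2\left(2\,\mu^{1}_{Z_{1}}+\overline{\mu}^{1}_{\overline{Z}_{1}}-\alpha_{U}\right)=2\left(3\,\mathrm{Re}\mu^{1}_{Z_{1}}+\mathrm{i}\,\mathrm{Im}\mu^{1}_{Z_{1}}-\alpha_{U}\right),\]

whose real and imaginary parts give \(\mathrm{Re}\mu^{1}_{Z_{1}}\equiv\frac{1}{3}\,\alpha_{U}\) and \(\mathrm{Im}\mu^{1}_{Z_{1}}\equiv 0\); thus the single real specification \(V_{Z_{2}^{2}\overline{Z}_{1}}=2\) normalizes the full complex form \(\mu^{1}_{Z_{1}}\).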
Since \(V_{Z_{2}^{2}\overline{Z}_{2}}\) is nonzero by assumption, one might consider it as the next candidate for normalizations. But, somewhat surprisingly, our computations indicate that modulo the horizontal coframe we have
\[dV_{Z_{2}^{2}\overline{Z}_{2}}\equiv 0.\]
In other words, after applying the above normalizations, the lifted invariant \(V_{Z_{2}^{2}\overline{Z}_{2}}\) is now independent of the remaining unnormalized group parameters. Thus, holomorphic transformations are ineffective on its value at \(\mathbf{p}\). As in [6], we denote
\[V_{Z_{2}^{2}\overline{Z}_{2}}(\mathbf{p})=\lambda\]
for some constant \(0\neq\lambda\in\mathbb{C}\), _uniquely determined_ by \(V_{Z_{2}^{2}\overline{Z}_{2}}\).
Similarly, the recurrence relation of \(V_{Z_{1}^{2}\overline{Z}_{2}}\) on the fiber \(\mathscr{B}_{\mathbf{p}}\) is now of the form
\[dV_{Z_{1}^{2}\overline{Z}_{2}}\equiv 0.\]
Therefore, \(V_{Z_{1}^{2}\overline{Z}_{2}}\) is independent of any group parameter and its value at \(\mathbf{p}\), that we assumed to be zero in this branch, remains invariant.
Proceeding further in order three, let us consider the recurrence relations (3.5). According to our computations and on the fiber \(\mathscr{B}_{\mathbf{p}}\), they are now of the form
\[\begin{split} dV_{Z_{1}\overline{Z}_{1}U}&\equiv- \frac{2}{3}\,V_{Z_{1}\overline{Z}_{1}U}\,\alpha_{U}-2\,\mathrm{Re}\mu_{U}^{2}, \\ dV_{Z_{1}\overline{Z}_{2}U}&\equiv-\frac{2}{3}\,V_ {Z_{1}\overline{Z}_{2}U}\,\alpha_{U}-\overline{\mu}_{U}^{1}-2\,\overline{\mu}_ {U}^{2},\\ dV_{Z_{2}\overline{Z}_{2}U}&\equiv-\frac{2}{3}\,V _{Z_{2}\overline{Z}_{2}U}\,\alpha_{U}-\lambda\,\mu_{U}^{2}-\overline{\lambda} \,\overline{\mu}_{U}^{2}.\end{split} \tag{5.1}\]
It is clear that setting
\[V_{Z_{1}\overline{Z}_{1}U}=V_{Z_{1}\overline{Z}_{2}U}=0\]
enables one to solve the first two relations for the Maurer-Cartan forms \(\mathrm{Re}\mu_{U}^{2}\) and \(\overline{\mu}_{U}^{1}\). More generally we have
**Lemma 5.4**.: _Let \(l\geq 0\). Then in branch (A.ii.3),_
1. _one can normalize the real Maurer-Cartan form_ \(\mathrm{Re}\mu_{U^{l+1}}^{2}\) _by setting_ \(V_{Z_{1}\overline{Z}_{1}U^{l+1}}=0\)_._
2. _one can normalize the complex Maurer-Cartan form_ \(\overline{\mu}_{U^{l+1}}^{1}\) _by setting_ \(V_{Z_{1}\overline{Z}_{2}U^{l+1}}=0\)_._
Using the third recurrence relation in (5.1) to normalize the Maurer-Cartan form \(\mathrm{Im}\mu_{U}^{2}\) requires that \(\mathrm{Im}\lambda\neq 0\). However, with the aim of generality, we proceed without making any extra assumption regarding the value of \(\lambda\).
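Indeed (a direct computation from the third relation of (5.1)): one has

\[-\lambda\,\mu_{U}^{2}-\overline{\lambda}\,\overline{\mu}_{U}^{2}=-2\,\mathrm{Re}\big{(}\lambda\,\mu_{U}^{2}\big{)}=-2\,\mathrm{Re}\lambda\;\mathrm{Re}\mu_{U}^{2}+2\,\mathrm{Im}\lambda\;\mathrm{Im}\mu_{U}^{2},\]

and, since \(\mathrm{Re}\mu^{2}_{U}\) is already normalized by Lemma 5.4, the form \(\mathrm{Im}\mu^{2}_{U}\) enters this relation only with the coefficient \(2\,\mathrm{Im}\lambda\) — whence the requirement \(\mathrm{Im}\lambda\neq 0\).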
After applying the above normalizations, the collection of the remaining unnormalized Maurer-Cartan forms is now reduced to
\[\mathrm{Im}\mu_{U^{l+1}}^{2},\qquad\alpha_{U^{l+1}},\qquad\mathrm{for}\ l\geq 0. \tag{5.2}\]
Unfortunately, in order four we do not find appropriate invariants for further normalizations. Let us then move to order five, where one finds the recurrence relation of \(\mathrm{Re}V_{Z_{1}^{2}Z_{2}\overline{Z}_{1}^{2}}\) on the fiber \(\mathscr{B}_{\mathbf{p}}\) in the simple form
\[d\mathrm{Re}V_{Z_{1}^{2}Z_{2}\overline{Z}_{1}^{2}}\equiv 8\,\mathrm{Im}\mu_{U} ^{2}+\cdots.\]
This relation readily provides the normalization of the real Maurer-Cartan form \(\mathrm{Im}\mu_{U}^{2}\) if we specify \(\mathrm{Re}V_{Z_{1}^{2}Z_{2}\overline{Z}_{1}^{2}}=0\). More generally,
**Lemma 5.5**.: _For every \(l\geq 0\) and in branch (A.ii.3), the real Maurer-Cartan form \(\operatorname{Im}\mu^{2}_{U^{l+1}}\) can be normalized by solving the recurrence relation of \(\operatorname{Re}V_{Z_{1}^{2}Z_{2}\overline{Z}_{1}^{2}U^{l}}=0\)._
It remains now to normalize the Maurer-Cartan forms \(\alpha_{U^{l+1}}\) for \(l\geq 0\). For this purpose, we have to proceed to the next order six where, on the fiber \(\mathscr{B}_{\boldsymbol{p}}\), we have the recurrence relation of the imaginary part of \(V_{Z_{1}Z_{2}^{3}\overline{Z}_{1}^{2}}\) simply as
\[d\operatorname{Im}V_{Z_{1}Z_{2}^{3}\overline{Z}_{1}^{2}}\equiv-8\,\alpha_{UU }+\cdots.\]
Clearly by setting \(\operatorname{Im}V_{Z_{1}Z_{2}^{3}\overline{Z}_{1}^{2}}=0\), one can solve the above relation to normalize the Maurer-Cartan form \(\alpha_{UU}\). It leads us to the general
**Lemma 5.6**.: _In branch (A.ii.3) and for each \(l\geq 0\), one can normalize the real Maurer-Cartan form \(\alpha_{U^{l+2}}\) by setting \(\operatorname{Im}V_{Z_{1}Z_{2}^{3}\overline{Z}_{1}^{2}U^{l}}=0\)._
At this stage, the real Maurer-Cartan form \(\alpha_{U}\) is the only remaining unnormalized form. Then, as realized by Ershova in [8], the dimension of the isotropy group at \(\boldsymbol{p}\) of every real hypersurface in this branch does not exceed one. When all the remaining unconsidered lifted invariants \(V_{J}\) vanish identically at \(\boldsymbol{p}\), normalization of \(\alpha_{U}\) is certainly impossible, and in this case the isotropy group of the appearing _model hypersurfaces_
\[v=z_{1}z_{2}\bar{z}_{1}+z_{1}\bar{z}_{1}\bar{z}_{2}+z_{2}^{2}\bar{z}_{1}+z_{1} \bar{z}_{2}^{2}+\lambda\,z_{2}^{2}\bar{z}_{2}+\overline{\lambda}\,z_{2}\bar{z }_{2}^{2},\qquad 0\neq\lambda\in\mathbb{C}\]
has the maximum dimension one, generated infinitesimally by the real part of the single dilation \(\mathsf{D}_{3}\) in (4.17).
**Theorem 5.7**.: _Every \(5\)-dimensional \(2\)-nondegenerate real hypersurface \(M^{5}\subset\mathbb{C}^{3}\) belonging to branch (A.ii.3) can be transformed to the complete normal form_
\[v=z_{1}z_{2}\bar{z}_{1}+z_{1}\bar{z}_{1}\bar{z}_{2}+z_{2}^{2}\bar{z}_{1}+z_{1} \bar{z}_{2}^{2}+\lambda\,z_{2}^{2}\bar{z}_{2}+\overline{\lambda}\,z_{2}\bar{z }_{2}^{2}+V_{Z_{2}\overline{Z}_{2}U}z_{2}\bar{z}_{2}u+\sum_{|\ell_{1}|+|\ell_{ 2}|+l\geq 4}\frac{V_{Z^{\ell_{1}}\overline{Z}^{\ell_{2}}U^{l}}}{\ell_{1}!\,\ell_ {2}!\,l!}\,z^{\ell_{1}}\bar{z}^{\ell_{2}}u^{l}\]
_for a unique nonzero constant \(\lambda\in\mathbb{C}\) where, regarding the conjugation relation, the coefficients \(V_{J}\) enjoy the cross-section_
\[\begin{split} 0\equiv V_{Z^{\ell}U^{l}}&=V_{Z_{1}^{j+1}Z_{2}^{k+1}\overline{Z}_{1}U^{l}}=V_{Z_{1}^{j+2}\overline{Z}_{1}U^{l}}=V_{Z_{1}Z_{2}\overline{Z}_{1}^{j}\overline{Z}_{2}^{k+1}U^{l}}=V_{Z_{1}^{j}Z_{2}^{2}\overline{Z}_{1}U^{l}}\\ &=V_{Z_{1}\overline{Z}_{t}U^{l+1}}=\operatorname{Re}V_{Z_{1}^{2}Z_{2}\overline{Z}_{1}^{2}U^{l}}=\operatorname{Im}V_{Z_{1}Z_{2}^{3}\overline{Z}_{1}^{2}U^{l}}\end{split}\]
_for \(\ell\in\mathbb{N}_{0}^{2}\), \(t=1,2\) and \(j,k,l\geq 0\). Furthermore, the isotropy group associated to \(M^{5}\) at \(\boldsymbol{p}\) is at most \(1\)-dimensional._
As before, we remark that there exists a large number of hypersurfaces in branch (A.ii.3) with the trivial isotropy group at \(\boldsymbol{p}\). For instance, if \(V_{Z_{2}\overline{Z}_{2}U}\) does not vanish on \(\mathscr{B}_{\boldsymbol{p}}\), then we can normalize the only remaining Maurer-Cartan form \(\alpha_{U}\) by solving the third recurrence relation in (5.1) after setting \(V_{Z_{2}\overline{Z}_{2}U}=1\). In this case, the appearing normal form hypersurfaces
\[v=z_{1}z_{2}\bar{z}_{1}+z_{1}\bar{z}_{1}\bar{z}_{2}+z_{2}^{2}\bar{z}_{1}+z_{1} \bar{z}_{2}^{2}+\lambda\,z_{2}^{2}\bar{z}_{2}+\overline{\lambda}\,z_{2}\bar{z }_{2}^{2}+z_{2}\bar{z}_{2}u+\sum_{|\ell_{1}|+|\ell_{2}|+l\geq 4}\frac{V_{Z^{\ell_{1}} \overline{Z}^{\ell_{2}}U^{l}}}{\ell_{1}!\,\ell_{2}!\,l!}\,z^{\ell_{1}}\bar{z }^{\ell_{2}}u^{l}\]
admit just the trivial isotropy group.
## 6. Branches (A.ii.4) and (A.ii.5)
In light of their partially normalized defining equations in (1.5), both branches (A.ii.4) and (A.ii.5) belong to the case in which the two combinations \(\Delta_{12}\) and \(\Delta_{23}\) in (3.4) vanish at \(\boldsymbol{p}\) but
\[\Delta_{13}=v_{z_{1}^{2}\bar{z}_{1}}\cdot v_{z_{2}^{2}\bar{z}_{2}}-v_{z_{1}^{2 }\bar{z}_{2}}\cdot v_{z_{2}^{2}\bar{z}_{1}}\]
is nonzero at this point. Proceeding along the same lines as the proof of Proposition 3.4, one shows that this scenario is invariant under holomorphic transformations.
Clearly, in order to have \(\Delta_{13}\) nonzero at \(\boldsymbol{p}\), at least one of the products \(v_{z_{1}^{2}\bar{z}_{1}}\cdot v_{z_{2}^{2}\bar{z}_{2}}\) and \(v_{z_{1}^{2}\bar{z}_{2}}\cdot v_{z_{2}^{2}\bar{z}_{1}}\) must be nonzero at this point. The branch (A.ii.4) concerns the case in which the former product (and its lifts) does not vanish, while (A.ii.5) considers the second possibility, assuming that the former product vanishes at \(\boldsymbol{p}\). We continue this section by studying first the branch (A.ii.4).
### Branch (A.ii.4)
In this branch, the opportunity of having the lifts \(V_{Z_{1}^{2}\overline{Z}_{1}}\) and \(V_{Z_{2}^{2}\overline{Z}_{2}}\) nonzero enables us to set them to some nonzero constant, say
\[V_{Z_{1}^{2}\overline{Z}_{1}}=V_{Z_{2}^{2}\overline{Z}_{2}}=2.\]
These specifications enable us to normalize the two Maurer-Cartan forms \(\mu_{Z_{1}}^{1}\) and \(\mu_{Z_{2}}^{2}\) by solving the first and last recurrence relations of the list (3.3), respectively. More generally we have
**Lemma 6.1**.: _In branch (A.ii.4) and for each \(j,k,l\geq 0\) with \((j,k,l)\neq(0,0,0)\),_
1. _one can normalize the Maurer-Cartan form_ \(\overline{\mu}^{1}_{\overline{Z}_{1}^{j+1}\overline{Z}_{2}^{k}U^{l}}\) _by setting_ \(V_{Z_{1}^{2}\overline{Z}_{1}^{j+1}\overline{Z}_{2}^{k}U^{l}}=0\)_._
2. _one can normalize the Maurer-Cartan form_ \(\overline{\mu}^{2}_{\overline{Z}_{1}^{j}\overline{Z}_{2}^{k+1}U^{l}}\) _by setting_ \(V_{Z_{2}^{2}\overline{Z}_{1}^{j}\overline{Z}_{2}^{k+1}U^{l}}=0\)_._
Now, let us consider the following two recurrence relations on the fiber \(\mathcal{B}_{\boldsymbol{p}}\)
\[dV_{Z_{1}Z_{2}\overline{Z}_{1}} \equiv\left(\frac{1}{6}\left(4\,V_{Z_{1}Z_{2}\overline{Z}_{2}}-V_ {Z_{1}\overline{Z}_{2}^{2}}\right)V_{Z_{1}Z_{2}\overline{Z}_{1}}-2\right) \mu_{Z_{2}}^{1}+\frac{1}{3}\left(V_{Z_{2}^{2}\overline{Z}_{1}}-V_{Z_{2} \overline{Z}_{1}\overline{Z}_{2}}\right)V_{Z_{1}Z_{2}\overline{Z}_{1}}\, \overline{\mu}_{\overline{Z}_{2}}^{1}+\cdots,\] \[dV_{Z_{1}Z_{2}\overline{Z}_{2}} \equiv\left(\frac{1}{6}\left(4\,V_{Z_{1}Z_{2}\overline{Z}_{1}}-V_ {Z_{2}\overline{Z}_{1}^{2}}\right)V_{Z_{1}Z_{2}\overline{Z}_{2}}-2\right) \mu_{Z_{1}}^{2}+\frac{1}{3}\left(V_{Z_{1}^{2}\overline{Z}_{2}}-V_{Z_{1} \overline{Z}_{1}\overline{Z}_{2}}\right)V_{Z_{1}Z_{2}\overline{Z}_{2}}\, \overline{\mu}_{\overline{Z}_{1}}^{2}+\cdots,\]
where \("\cdots"\) represents the terms which do not include the explicitly written Maurer-Cartan forms. Solving these relations results in normalizing the Maurer-Cartan forms \(\mu_{Z_{2}}^{1}\) and \(\mu_{Z_{1}}^{2}\), respectively, after setting \(V_{Z_{1}Z_{2}\overline{Z}_{1}}=V_{Z_{1}Z_{2}\overline{Z}_{2}}=0\). More generally,
**Lemma 6.2**.: _Let \(j,l\geq 0\). Then, in branch (A.ii.4),_
1. _one can normalize the Maurer-Cartan form_ \(\mu^{1}_{Z_{2}^{j+1}U^{l}}\) _by setting_ \(V_{Z_{1}Z_{2}^{j+1}\overline{Z}_{1}U^{l}}=0\)_._
2. _one can normalize the Maurer-Cartan form_ \(\mu^{2}_{Z_{1}^{j+1}U^{l}}\) _by setting_ \(V_{Z_{1}^{j+1}Z_{2}\overline{Z}_{2}U^{l}}=0\)_._
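It is worth noting why these solutions always exist at the base step (our observation): once \(V_{Z_{1}Z_{2}\overline{Z}_{1}}=V_{Z_{1}Z_{2}\overline{Z}_{2}}=0\) is imposed, the two recurrence relations displayed before Lemma 6.2 collapse to

\[0\equiv-2\,\mu^{1}_{Z_{2}}+\cdots\qquad\mathrm{and}\qquad 0\equiv-2\,\mu^{2}_{Z_{1}}+\cdots,\]

so each target form appears with the nonzero constant coefficient \(-2\) and the relations can always be solved for \(\mu^{1}_{Z_{2}}\) and \(\mu^{2}_{Z_{1}}\).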
Along the way and on the fiber \(\mathscr{B}_{\boldsymbol{p}}\), the two remaining recurrence relations in the list (3.3) are now simply
\[dV_{Z_{1}^{2}\overline{Z}_{2}}\equiv 0,\qquad{\rm and}\qquad dV_{Z_{2}^{2} \overline{Z}_{1}}\equiv 0.\]
Thus, when restricted to the mentioned fiber, these two lifted invariants are now independent of any group parameter. Let us denote
\[\sigma:=\frac{V_{Z_{1}^{2}\overline{Z}_{2}}}{2}(\boldsymbol{p})\qquad{\rm and} \qquad\nu:=\frac{V_{Z_{2}^{2}\overline{Z}_{1}}}{2}(\boldsymbol{p}).\]
To ensure that the combination \(\Delta_{13}\) remains nonzero at the point \(\boldsymbol{p}\), we must therefore have \(\sigma\cdot\nu\neq 1\).
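Indeed, this constraint is forced rather than assumed (a short computation of ours): after the normalizations \(V_{Z_{1}^{2}\overline{Z}_{1}}=V_{Z_{2}^{2}\overline{Z}_{2}}=2\), the lifted counterpart of \(\Delta_{13}\) evaluates at \(\boldsymbol{p}\) as

\[V_{Z_{1}^{2}\overline{Z}_{1}}\,V_{Z_{2}^{2}\overline{Z}_{2}}-V_{Z_{1}^{2}\overline{Z}_{2}}\,V_{Z_{2}^{2}\overline{Z}_{1}}=2\cdot 2-2\sigma\cdot 2\nu=4\,(1-\sigma\nu),\]

which is nonzero exactly when \(\sigma\nu\neq 1\).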
Now, let us inspect the three remaining order three recurrence relations (3.5). On the fiber \(\mathscr{B}_{\boldsymbol{p}}\), the first and third relations now take the form
\[dV_{Z_{1}\overline{Z}_{1}U}\equiv-4\operatorname{Re}\!\mu_{U}^{1}+\cdots,\qquad dV _{Z_{2}\overline{Z}_{2}U}\equiv-4\operatorname{Re}\!\mu_{U}^{2}+\cdots.\]
Then, by setting \(V_{Z_{1}\overline{Z}_{1}U}=V_{Z_{2}\overline{Z}_{2}U}=0\), one can readily solve these two equations to normalize respectively the real Maurer-Cartan forms \(\operatorname{Re}\!\mu_{U}^{1}\) and \(\operatorname{Re}\!\mu_{U}^{2}\). More generally we have
**Lemma 6.3**.: _For each \(l\geq 0\) and in branch (A.ii.4),_
1. _one can normalize the real Maurer-Cartan form_ \(\operatorname{Re}\!\mu_{U^{l+1}}^{1}\) _by setting_ \(V_{Z_{1}\overline{Z}_{1}U^{l+1}}=0\)_._
2. _one can normalize the real Maurer-Cartan form_ \(\operatorname{Re}\!\mu_{U^{l+1}}^{2}\) _by setting_ \(V_{Z_{2}\overline{Z}_{2}U^{l+1}}=0\)_._
The second recurrence relation in (3.5), namely that of \(dV_{Z_{1}\overline{Z}_{2}U}\), offers no normalization unless we make the extra assumption that \(V_{Z_{1}\overline{Z}_{2}U}\neq 0\). Since we do not aim to produce further sub-branches, let us search for possible general normalizations in the next orders. Before doing so, notice that after the above normalizations, the collection of yet unnormalized Maurer-Cartan forms (2.8) is now reduced to the real forms
\[\operatorname{Im}\!\mu_{U^{l+1}}^{1},\qquad\operatorname{Im}\!\mu_{U^{l+1}}^{ 2},\qquad\alpha_{U^{l+1}},\qquad\text{for }l\geq 0. \tag{6.1}\]
According to our computations, at order four, we will find no lifted invariant to provide further normalizations. Therefore, we need to proceed to the next order five where we encounter the following two real recurrence relations on the fiber \(\mathscr{B}_{\boldsymbol{p}}\)
\[\begin{split} d\mathrm{Re}V_{Z_{1}^{2}Z_{2}\overline{Z}_{1}\overline{Z}_{2}}&\equiv 4\,|\sigma|^{2}\operatorname{Im}\!\mu_{U}^{1}+4\left(\sigma\nu+2\right)\operatorname{Im}\!\mu_{U}^{2}+\cdots,\\ d\mathrm{Re}V_{Z_{1}Z_{2}^{2}\overline{Z}_{1}\overline{Z}_{2}}&\equiv 4\left(\sigma\nu+2\right)\operatorname{Im}\!\mu_{U}^{1}+4\,|\nu|^{2}\operatorname{Im}\!\mu_{U}^{2}+\cdots.\end{split} \tag{6.2}\]
Here the terms in \("\cdots"\) are independent of the Maurer-Cartan forms \(\mu_{U}^{1},\mu_{U}^{2}\) or their conjugations. By inspecting these relations, one finds out that when \(\sigma\nu\neq-1\), then after setting
\[\operatorname{Re}\!V_{Z_{1}^{2}Z_{2}\overline{Z}_{1}\overline{Z}_{2}}= \operatorname{Re}\!V_{Z_{1}Z_{2}^{2}\overline{Z}_{1}\overline{Z}_{2}}=0,\]
the above two recurrence relations provide two linearly independent equations whose solutions give the normalized expressions of \(\operatorname{Im}\!\mu_{U}^{1}\) and \(\operatorname{Im}\!\mu_{U}^{2}\). In general
**Lemma 6.4**.: _Let \(l\geq 0\). Then in branch (A.ii.4) and with the assumption \(\sigma\nu\neq-1\), setting_
\[\operatorname{Re}\!V_{Z_{1}^{2}Z_{2}\overline{Z}_{1}\overline{Z}_{2}U^{l}}= \operatorname{Re}\!V_{Z_{1}Z_{2}^{2}\overline{Z}_{1}\overline{Z}_{2}U^{l}}=0\]
_enables one to normalize the Maurer-Cartan forms \(\operatorname{Im}\!\mu_{U^{l+1}}^{1}\) and \(\operatorname{Im}\!\mu_{U^{l+1}}^{2}\)._
But, when \(\sigma\nu=-1\), the above two recurrence relations (6.2) will not provide a full rank system for the Maurer-Cartan forms \(\operatorname{Im}\!\mu_{U}^{1}\) and \(\operatorname{Im}\!\mu_{U}^{2}\). In this case, we consider the first recurrence relation together with that of \(\operatorname{Re}\!V_{Z_{1}^{3}\overline{Z}_{1}\overline{Z}_{2}}\), which on the fiber \(\mathscr{B}_{\boldsymbol{p}}\) give
\[\begin{split} d\mathrm{Re}V_{Z_{1}^{2}Z_{2}\overline{Z}_{1}\overline{Z}_{2}}&\equiv 4\,|\sigma|^{2}\operatorname{Im}\!\mu_{U}^{1}+4\operatorname{Im}\!\mu_{U}^{2}+\cdots,\\ d\mathrm{Re}V_{Z_{1}^{3}\overline{Z}_{1}\overline{Z}_{2}}&\equiv 36\,\sigma\operatorname{Im}\!\mu_{U}^{1}-\frac{12}{\overline{\sigma}}\operatorname{Im}\!\mu_{U}^{2}+\cdots.\end{split}\]
By setting \(\operatorname{Re}\!V_{Z_{1}^{2}Z_{2}\overline{Z}_{1}\overline{Z}_{2}}=\operatorname{Re}\!V_{Z_{1}^{3}\overline{Z}_{1}\overline{Z}_{2}}=0\), one finds that the above equations form a full rank homogeneous system which can be solved for \(\operatorname{Im}\!\mu_{U}^{1}\) and \(\operatorname{Im}\!\mu_{U}^{2}\).
**Lemma 6.5**.: _Let \(l\geq 0\). Then in branch (A.ii.4) and with the assumption \(\sigma\nu=-1\), setting_
\[\operatorname{Re}\!V_{Z_{1}^{2}Z_{2}\overline{Z}_{1}\overline{Z}_{2}U^{l}}= \operatorname{Re}\!V_{Z_{1}^{3}\overline{Z}_{1}\overline{Z}_{2}U^{l}}=0\]
_enables one to normalize the Maurer-Cartan forms \(\operatorname{Im}\!\mu_{U^{l+1}}^{1}\) and \(\operatorname{Im}\!\mu_{U^{l+1}}^{2}\)._
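The dichotomy between Lemmas 6.4 and 6.5 can be traced to a determinant computation (ours, ignoring the suppressed \("\cdots"\) terms). The coefficient matrix of \(\big{(}\operatorname{Im}\!\mu_{U}^{1},\operatorname{Im}\!\mu_{U}^{2}\big{)}\) in (6.2) satisfies

\[\det\begin{pmatrix}4\,|\sigma|^{2}&4\,(\sigma\nu+2)\\ 4\,(\sigma\nu+2)&4\,|\nu|^{2}\end{pmatrix}=16\,\big{(}|\sigma\nu|^{2}-(\sigma\nu+2)^{2}\big{)},\]

which vanishes for \(\sigma\nu=-1\), since then both \(|\sigma\nu|^{2}\) and \((\sigma\nu+2)^{2}\) equal \(1\). The replacement system of Lemma 6.5 instead has determinant

\[4\,|\sigma|^{2}\cdot\Big{(}-\frac{12}{\overline{\sigma}}\Big{)}-4\cdot 36\,\sigma=-48\,\sigma-144\,\sigma=-192\,\sigma\neq 0,\]

using \(|\sigma|^{2}/\overline{\sigma}=\sigma\) and \(\sigma\neq 0\) (which follows from \(\sigma\nu=-1\)).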
To normalize the remaining Maurer-Cartan forms \(\alpha_{U^{l+1}}\), \(l\geq 0\), we move to the next order six. After tedious computations, we found the following three recurrence relations on the fiber \(\mathscr{B}_{\mathbf{p}}\)
\[\begin{split} dV_{Z_{1}^{2}Z_{2}^{2}\overline{Z}_{1}\overline{Z}_{2 }}&\equiv-\frac{8\mathrm{i}}{3}\left(1+\sigma\nu\right)\alpha_{UU }+\cdots,\\ dV_{Z_{1}^{4}\overline{Z}_{2}^{2}}&\equiv 16\mathrm{i} \left(\overline{\nu}-\sigma^{2}\right)\alpha_{UU}+\cdots,\\ dV_{Z_{2}^{4}\overline{Z}_{1}^{2}}&\equiv 16 \mathrm{i}\left(\overline{\sigma}-\nu^{2}\right)\alpha_{UU}+\cdots,\end{split} \tag{6.3}\]
where \("\cdots"\) stands for terms that do not include \(\alpha_{UU}\) (or \(\alpha_{U}\)). One verifies that the coefficients of the Maurer-Cartan form \(\alpha_{UU}\) in these relations can not all be zero simultaneously. Thus, it is always possible to employ one of them for normalizing this form. More precisely, we will have the following four possibilities:
\(\bullet\) First, if \(\mathrm{Im}(\sigma\nu)\neq 0\), then one can solve the real part of the first recurrence relation of (6.3) for \(\alpha_{UU}\) after setting
\[\mathrm{Re}V_{Z_{1}^{2}Z_{2}^{2}\overline{Z}_{1}\overline{Z}_{2}}=0.\]
\(\bullet\) Second, if \(\mathrm{Im}(\sigma\nu)=0\) but \(\sigma\nu\neq-1\), then one can normalize \(\alpha_{UU}\) by solving the imaginary part of the first recurrence relation after setting
\[\mathrm{Im}V_{Z_{1}^{2}Z_{2}^{2}\overline{Z}_{1}\overline{Z}_{2}}=0.\]
\(\bullet\) Third, in the case that \(\sigma\nu=-1\) but \(\overline{\nu}-\sigma^{2}\neq 0\), or equivalently when \(\sigma\nu=-1\) but \(\sigma\neq-1\), then the second recurrence relation in (6.3) offers the normalization of \(\alpha_{UU}\) by setting \(\mathrm{Re}V_{Z_{1}^{4}\overline{Z}_{2}^{2}}=0\) if \(\mathrm{Im}(\overline{\nu}-\sigma^{2})\) is nonzero and by setting \(\mathrm{Im}V_{Z_{1}^{4}\overline{Z}_{2}^{2}}=0\), otherwise.
\(\bullet\) Fourth, if \(\sigma\nu=-1\) and \(\sigma=-1\), or equivalently when \(\sigma=-1\) and \(\nu=1\), one normalizes \(\alpha_{UU}\) by solving the last relation in (6.3) after setting
\[\mathrm{Im}V_{Z_{2}^{4}\overline{Z}_{1}^{2}}=0.\]
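The exhaustiveness of these four cases — equivalently, the fact that the three coefficients of \(\alpha_{UU}\) in (6.3) cannot vanish simultaneously — can be verified as follows (our sketch). Suppose \(1+\sigma\nu=0\), \(\overline{\nu}-\sigma^{2}=0\) and \(\overline{\sigma}-\nu^{2}=0\). Conjugating the second equation gives \(\nu=\overline{\sigma}^{2}\), and substituting into the third yields

\[\overline{\sigma}=\nu^{2}=\overline{\sigma}^{4}\quad\Longrightarrow\quad\overline{\sigma}^{3}=1,\]

since \(\sigma\neq 0\). In particular \(|\sigma|=1\), so \(\sigma\nu=\sigma\overline{\sigma}^{2}=\overline{\sigma}\,|\sigma|^{2}=\overline{\sigma}\), and \(\sigma\nu=-1\) forces \(\sigma=-1\); but then \(\overline{\sigma}^{3}=-1\neq 1\), a contradiction.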
More generally we have that
**Lemma 6.6**.: _Let \(l\geq 0\). Then, in branch_ (A.ii.4)_, one can normalize the Maurer-Cartan form \(\alpha_{U^{l+2}}\) via one of the following ways_
* _by setting_ \(\mathrm{Re}V_{Z_{1}^{2}Z_{2}^{2}\overline{Z}_{1}\overline{Z}_{2}U^{l}}=0\) _if_ \(\mathrm{Im}(\sigma\nu)\neq 0\)_._
* _by setting_ \(\mathrm{Im}V_{Z_{1}^{2}Z_{2}^{2}\overline{Z}_{1}\overline{Z}_{2}U^{l}}=0\) _if_ \(\mathrm{Im}(\sigma\nu)=0\) _but_ \(\sigma\nu\neq-1\)_._
* _when_ \(\sigma\nu=-1\) _but_ \(\sigma\neq-1\)_, then by setting_ \(\mathrm{Re}V_{Z_{1}^{4}\overline{Z}_{2}^{2}U^{l}}=0\) _if_ \(\mathrm{Im}(\overline{\nu}-\sigma^{2})\) _is nonzero and by setting_ \(\mathrm{Im}V_{Z_{1}^{4}\overline{Z}_{2}^{2}U^{l}}=0\)_, otherwise._
* _by setting_ \(\mathrm{Im}V_{Z_{2}^{4}\overline{Z}_{1}^{2}U^{l}}=0\) _when_ \(\sigma=-1\) _and_ \(\nu=1\)_._
At this stage, except the single real form \(\alpha_{U}\), all the basis Maurer-Cartan forms are normalized in branch (A.ii.4). Thus, the isotropy groups of the real hypersurfaces in this branch are of dimensions at most one. This confirms Ershova's upper bound in [8]. The maximum dimension of the isotropy group is enjoyed by the _model hypersurfaces_
\[v=z_{1}^{2}\bar{z}_{1}+z_{1}\bar{z}_{1}^{2}+z_{2}^{2}\bar{z}_{2}+z_{2}\bar{z}_ {2}^{2}+\sigma\,z_{1}^{2}\bar{z}_{2}+\overline{\sigma}\,z_{2}\bar{z}_{1}^{2}+ \nu\,z_{2}^{2}\bar{z}_{1}+\overline{\nu}\,z_{1}\bar{z}_{2}^{2},\qquad\sigma \nu\neq 1,\]
which admit the real part of the single dilation \(\mathrm{D}_{3}\) in (4.17) as the infinitesimal generator of their isotropy algebras at \(\mathbf{p}\).
**Theorem 6.7**.: _Let \(M^{5}\) be a \(5\)-dimensional \(2\)-nondegenerate real hypersurface of \(\mathbb{C}^{3}\) belonging to branch (A.ii.4). Then it can be transformed to the complete normal form_
\[\begin{split} v=&\,z_{1}^{2}\bar{z}_{1}+z_{1}\bar{z}_{1}^{2}+z_{2}^{2}\bar{z}_{2}+z_{2}\bar{z}_{2}^{2}+\sigma\,z_{1}^{2}\bar{z}_{2}+\overline{\sigma}\,z_{2}\bar{z}_{1}^{2}+\nu\,z_{2}^{2}\bar{z}_{1}+\overline{\nu}\,z_{1}\bar{z}_{2}^{2}+V_{Z_{1}\overline{Z}_{2}U}z_{1}\bar{z}_{2}u+V_{Z_{2}\overline{Z}_{1}U}z_{2}\bar{z}_{1}u\\ &+\sum_{|\ell_{1}|+|\ell_{2}|+l\geq 4}\frac{V_{Z^{\ell_{1}}\overline{Z}^{\ell_{2}}U^{l}}}{\ell_{1}!\,\ell_{2}!\,l!}\,z^{\ell_{1}}\bar{z}^{\ell_{2}}u^{l}\end{split}\]
_for two unique constants \(\sigma,\nu\in\mathbb{C}\) with \(\sigma\nu\neq 1\) where, regarding the conjugation relation, the coefficients \(V_{J}\) enjoy the cross-section_
\[0 \equiv V_{Z^{\ell}U^{l}}=V_{Z_{1}^{2}\overline{Z}_{1}^{j+1}\overline{Z}_{2}^{k}U^{l}}=V_{Z_{2}^{2}\overline{Z}_{1}^{j}\overline{Z}_{2}^{k+1}U^{l}}=V_{Z_{1}Z_{2}^{j+1}\overline{Z}_{1}U^{l}}=V_{Z_{1}^{j+1}Z_{2}\overline{Z}_{2}U^{l}}=V_{Z_{t}\overline{Z}_{t}U^{l+1}},\]
_supplemented with_
\[\left\{\begin{array}{ll}\operatorname{Re}V_{Z_{1}^{2}Z_{2}\overline{Z}_{1}\overline{Z}_{2}U^{l}}=\operatorname{Re}V_{Z_{1}Z_{2}^{2}\overline{Z}_{1}\overline{Z}_{2}U^{l}}=0&if\ \sigma\nu\neq-1,\\ \operatorname{Re}V_{Z_{1}^{2}Z_{2}\overline{Z}_{1}\overline{Z}_{2}U^{l}}=\operatorname{Re}V_{Z_{1}^{3}\overline{Z}_{1}\overline{Z}_{2}U^{l}}=0&if\ \sigma\nu=-1,\end{array}\right.\]
_and_
\[\left\{\begin{array}{ll}\operatorname{Re}V_{Z_{1}^{2}Z_{2}^{2}\overline{Z}_{1}\overline{Z}_{2}U^{l}}=0&if\ \operatorname{Im}(\sigma\nu)\neq 0,\\ \operatorname{Im}V_{Z_{1}^{2}Z_{2}^{2}\overline{Z}_{1}\overline{Z}_{2}U^{l}}=0&if\ \operatorname{Im}(\sigma\nu)=0,\ \sigma\nu\neq-1,\\ \operatorname{Re}V_{Z_{1}^{4}\overline{Z}_{2}^{2}U^{l}}=0&if\ \sigma\nu=-1,\ \sigma\neq-1,\ \operatorname{Im}(\overline{\nu}-\sigma^{2})\neq 0,\\ \operatorname{Im}V_{Z_{1}^{4}\overline{Z}_{2}^{2}U^{l}}=0&if\ \sigma\nu=-1,\ \sigma\neq-1,\ \operatorname{Im}(\overline{\nu}-\sigma^{2})=0,\\ \operatorname{Im}V_{Z_{2}^{4}\overline{Z}_{1}^{2}U^{l}}=0&if\ \sigma=-1,\ \nu=1,\end{array}\right.\]
_for \(\ell\in\mathbb{N}_{0}^{2}\), \(j,k,l\geq 0\) and \(t=1,2\). Furthermore, the isotropy group associated to \(M^{5}\) at \(\boldsymbol{p}\) is either trivial or \(1\)-dimensional._
As before, one finds an enormous number of hypersurfaces in this branch admitting a trivial isotropy group. For example, if \(\operatorname{Re}V_{Z_{2}\overline{Z}_{1}U}\neq 0\) then, by setting it to one, we can normalize the only remaining Maurer-Cartan form \(\alpha_{U}\) by solving the real part of the corresponding recurrence relation in (3.5). Denoting \(\operatorname{Im}V_{Z_{2}\overline{Z}_{1}U}(\boldsymbol{p})=\gamma\), the isotropy groups of the appearing normal form hypersurfaces
\[\begin{split} v=&\,z_{1}^{2}\bar{z}_{1}+z_{1}\bar{z}_{1}^{2}+z_{2}^{2}\bar{z}_{2}+z_{2}\bar{z}_{2}^{2}+\sigma\,z_{1}^{2}\bar{z}_{2}+\overline{\sigma}\,z_{2}\bar{z}_{1}^{2}+\nu\,z_{2}^{2}\bar{z}_{1}+\overline{\nu}\,z_{1}\bar{z}_{2}^{2}\\ &+(1+\mathrm{i}\gamma)z_{1}\bar{z}_{2}u+(1-\mathrm{i}\gamma)z_{2}\bar{z}_{1}u+\sum_{|\ell_{1}|+|\ell_{2}|+l\geq 4}\frac{V_{Z^{\ell_{1}}\overline{Z}^{\ell_{2}}U^{l}}}{\ell_{1}!\,\ell_{2}!\,l!}\,z^{\ell_{1}}\bar{z}^{\ell_{2}}u^{l}\end{split}\]
are nothing but the trivial one.
### Branch (A.ii.5)
Now we consider the last branch (A.ii.5) in the list (1.5). As mentioned at the beginning of this section, here we assume that the product of the two partially lifted invariants \(V_{Z_{1}^{2}\overline{Z}_{1}}\) and \(V_{Z_{2}^{2}\overline{Z}_{2}}\) vanishes, contrary to the product of \(V_{Z_{1}^{2}\overline{Z}_{2}}\) and \(V_{Z_{2}^{2}\overline{Z}_{1}}\). We begin the normalizations by setting
\[V_{Z_{1}^{2}\overline{Z}_{2}}=V_{Z_{2}^{2}\overline{Z}_{1}}=2\]
as we are permitted here. Then, solving the recurrence relations of these two lifted invariants in (3.3) provides respectively the normalizations of \(\mu^{1}_{Z_{1}}\) and \(\mu^{2}_{Z_{2}}\). In general, we have
**Lemma 6.8**.: _Let \(j,k,l\geq 0\) with \((j,k,l)\neq(0,0,0)\). Then in branch (A.ii.5),_
1. _one can normalize the Maurer-Cartan form_ \(\mu^{1}_{Z^{j+1}_{1}Z^{k}_{2}U^{l}}\) _by setting_ \(V_{Z^{j+2}_{1}Z^{k}_{2}\overline{Z}_{2}U^{l}}=0\)
2. _one can normalize the Maurer-Cartan form_ \(\mu^{2}_{Z^{j}_{1}Z^{k+1}_{2}U^{l}}\) _by setting_ \(V_{Z^{j}_{1}Z^{k+2}_{2}\overline{Z}_{1}U^{l}}=0\)_._
Next, let us consider the recurrence relations of \(V_{Z_{1}Z_{2}\overline{Z}_{1}}\) and \(V_{Z_{1}Z_{2}\overline{Z}_{2}}\) which now have the following forms on the fiber \(\mathscr{B}_{\boldsymbol{p}}\)
\[dV_{Z_{1}Z_{2}\overline{Z}_{1}} \equiv A\,V_{Z_{1}Z_{2}\overline{Z}_{1}}-V_{Z^{j}_{1}\overline{Z} _{1}}\,\mu^{1}_{Z_{2}}-V_{Z_{1}Z_{2}\overline{Z}_{2}}\,\overline{\mu}^{2}_{ \overline{Z}_{1}}-2\,\mu^{2}_{Z_{1}},\] \[dV_{Z_{1}Z_{2}\overline{Z}_{2}} \equiv B\,V_{Z_{1}Z_{2}\overline{Z}_{2}}-V_{Z^{j}_{2}\overline{Z} _{2}}\,\mu^{2}_{Z_{1}}-V_{Z_{1}Z_{2}\overline{Z}_{1}}\,\overline{\mu}^{1}_{ \overline{Z}_{2}}-2\,\mu^{1}_{Z_{2}}.\]
Here, \(A\) and \(B\) are two certain polynomials in terms of the Maurer-Cartan forms and lifted invariants. Thus, clearly, one can normalize the Maurer-Cartan forms \(\mu^{2}_{Z_{1}}\) and \(\mu^{1}_{Z_{2}}\) by solving respectively the above recurrence relations after setting \(V_{Z_{1}Z_{2}\overline{Z}_{1}}=V_{Z_{1}Z_{2}\overline{Z}_{2}}=0\). More generally
**Lemma 6.9**.: _Let \(j,l\geq 0\). In branch (A.ii.5),_
1. _one can normalize the Maurer-Cartan form_ \(\mu^{2}_{Z^{j+1}_{1}U^{l}}\) _by setting_ \(V_{Z^{j+1}_{1}Z_{2}\overline{Z}_{1}U^{l}}=0\)_._
2. _one can normalize the Maurer-Cartan form_ \(\mu^{1}_{Z^{j+1}_{2}U^{l}}\) _by setting_ \(V_{Z_{1}Z^{j+1}_{2}\overline{Z}_{2}U^{l}}=0\)_._
Along the way, our computations show that on the fiber \(\mathscr{B}_{\boldsymbol{p}}\), we simply have
\[dV_{Z^{2}_{1}\overline{Z}_{1}}\equiv 0,\qquad{\rm and}\qquad dV_{Z^{2}_{2}\overline{Z}_{2}}\equiv 0.\]
This means that the two lifted invariants \(V_{Z^{2}_{1}\overline{Z}_{1}}\) and \(V_{Z^{2}_{2}\overline{Z}_{2}}\) are now independent of the group parameters. On the other hand, by the assumptions of this branch, we know that the product of \(V_{Z^{2}_{1}\overline{Z}_{1}}\) and \(V_{Z^{2}_{2}\overline{Z}_{2}}\) vanishes at \(\boldsymbol{p}\); thus at least one of them is zero at this point. By exchanging the roles of \(z_{1}\) and \(z_{2}\), if necessary, we can always assume that \(V_{Z^{2}_{2}\overline{Z}_{2}}(\boldsymbol{p})=0\). Denote
\[\eta:=\frac{V_{Z^{2}_{1}\overline{Z}_{1}}(\boldsymbol{p})}{2}.\]
It remains in order three to consider the recurrence relations (3.5). On the fiber \(\mathscr{B}_{\boldsymbol{p}}\), the second relation is now of the form
\[dV_{Z_{1}\overline{Z}_{2}U}\equiv C\,V_{Z_{1}\overline{Z}_{2}U}-2\,\mu^{1}_{U} -2\,\overline{\mu}^{2}_{U}\]
for some polynomial \(C\) in terms of the Maurer-Cartan forms and differential invariants. By setting \(V_{Z_{1}\overline{Z}_{2}U}=0\), one solves this equation to normalize the Maurer-Cartan form \(\mu^{1}_{U}\). In general we have
**Lemma 6.10**.: _For every \(l\geq 0\) and in branch (A.ii.5), one can normalize the Maurer-Cartan form \(\mu^{1}_{U^{l+1}}\) by solving the recurrence relation of \(V_{Z_{1}\overline{Z}_{2}U^{l+1}}=0\)._
Unfortunately, the other two recurrence relations, of \(V_{Z_{1}\overline{Z}_{1}U}\) and \(V_{Z_{2}\overline{Z}_{2}U}\) in (3.5), are generally not useful for further normalizing the Maurer-Cartan forms. We thus proceed to the next order four. Before doing so, we notice that the collection of yet unnormalized basis Maurer-Cartan forms (2.8) is now reduced to
\[\mu^{2}_{U^{l+1}},\qquad\overline{\mu}^{2}_{U^{l+1}},\qquad\alpha_{U^{l+1}}, \qquad{\rm for}\ l\geq 0. \tag{6.4}\]
In order four, none of the lifted invariants is able to normalize any of these forms. But, in order five and on the fiber \(\mathscr{B}_{\boldsymbol{p}}\), we have the recurrence relation
\[dV_{Z^{3}_{2}\overline{Z}^{2}_{1}}\equiv-60{\rm i}\,\mu^{2}_{U}+\cdots\]
where \("\cdots"\) does not comprise \(\mu^{2}_{U}\) or its conjugation. Then, clearly, setting \(V_{Z^{3}_{2}\overline{Z}^{2}_{1}}=0\) provides the normalization of the Maurer-Cartan form \(\mu^{2}_{U}\). More generally we have
**Lemma 6.11**.: _For \(l\geq 0\) and in branch (A.ii.5), one normalizes the Maurer-Cartan form \(\mu^{2}_{U^{l+1}}\) by setting \(V_{Z^{3}_{2}\overline{Z}^{2}_{1}U^{l}}=0\)._
It remains only to normalize the real forms \(\alpha_{U^{l+1}}\), \(l\geq 0\). For this purpose, we have to move to order six, where we have the following recurrence relation on \(\mathscr{B}_{\boldsymbol{p}}\)
\[dV_{Z_{1}^{2}Z_{2}^{2}\overline{Z}_{1}\overline{Z}_{2}}\equiv-\frac{8\mathrm{i }}{3}\,\alpha_{UU}+\cdots.\]
Here, \("\ldots"\) stands for the terms which do not admit the Maurer-Cartan form \(\alpha_{UU}\) (or \(\alpha_{U}\)). Thus, we may normalize this real Maurer-Cartan form by solving the imaginary part of the above relation after setting \(\mathrm{Im}V_{Z_{1}^{2}Z_{2}^{2}\overline{Z}_{1}\overline{Z}_{2}}=0\). More generally, we have
**Lemma 6.12**.: _For every \(l\geq 0\) and in branch (A.ii.5), one can normalize the real Maurer-Cartan form \(\alpha_{U^{l+2}}\) by setting \(\mathrm{Im}V_{Z_{1}^{2}Z_{2}^{2}\overline{Z}_{1}\overline{Z}_{2}U^{l}}=0\)._
At this stage, \(\alpha_{U}\) is the only remaining unnormalized Maurer-Cartan form in branch (A.ii.5). Therefore, the isotropy groups associated to the hypersurfaces in this branch are of dimensions \(\leq 1\). This confirms Ershova's upper bound in [8]. Among the mentioned hypersurfaces, the isotropy groups of the _model hypersurfaces_
\[v=\eta\,z_{1}^{2}\bar{z}_{1}+\overline{\eta}z_{1}\bar{z}_{1}^{2}+z_{1}^{2} \bar{z}_{2}+z_{2}\bar{z}_{1}^{2}+z_{2}^{2}\bar{z}_{1}+z_{1}\bar{z}_{2}^{2}\]
are of the maximum dimension one, generated by the real part of the infinitesimal dilation \(\mathsf{D}_{3}\) in (4.17).
**Theorem 6.13**.: _Given a \(5\)-dimensional \(2\)-nondegenerate real hypersurface \(M^{5}\subset\mathbb{C}^{3}\) belonging to the general branch (A.ii.5), there exists an origin-preserving transformation mapping \(M^{5}\) to the complete normal form_
\[\begin{split} v=&\,\eta\,z_{1}^{2}\bar{z}_{1}+\overline{\eta}z_{1}\bar{z}_{1}^{2}+z_{1}^{2}\bar{z}_{2}+z_{2}\bar{z}_{1}^{2}+z_{2}^{2}\bar{z}_{1}+z_{1}\bar{z}_{2}^{2}+V_{Z_{1}\overline{Z}_{1}U}z_{1}\bar{z}_{1}u+V_{Z_{2}\overline{Z}_{2}U}z_{2}\bar{z}_{2}u\\ &+\sum_{|\ell_{1}|+|\ell_{2}|+l\geq 4}\frac{V_{Z^{\ell_{1}}\overline{Z}^{\ell_{2}}U^{l}}}{\ell_{1}!\,\ell_{2}!\,l!}\,z^{\ell_{1}}\bar{z}^{\ell_{2}}u^{l}\end{split}\]
_for some unique constant \(\eta\in\mathbb{C}\) where, regarding the conjugation relation, the coefficients \(V_{J}\) enjoy the cross-section_
\[\begin{split} 0 \equiv V_{Z^{\ell}U^{l}}&=V_{Z_{1}^{j+2}Z_{2}^{k}\overline{Z}_{2}U^{l}}=V_{Z_{1}^{j}Z_{2}^{k+2}\overline{Z}_{1}U^{l}}=V_{Z_{1}^{j+1}Z_{2}\overline{Z}_{1}U^{l}}=V_{Z_{1}Z_{2}^{j+1}\overline{Z}_{2}U^{l}}\\ &=V_{Z_{1}\overline{Z}_{2}U^{l+1}}=V_{Z_{2}^{3}\overline{Z}_{1}^{2}U^{l}}=\mathrm{Im}V_{Z_{1}^{2}Z_{2}^{2}\overline{Z}_{1}\overline{Z}_{2}U^{l}}\end{split}\]
_for \(\ell\in\mathbb{N}_{0}^{2}\) and \(j,k,l\geq 0\). Moreover, the isotropy group of \(M^{5}\) at the point \(\boldsymbol{p}=0\) is of dimension at most one._
Remark that in branch (A.ii.5), one finds an abundance of hypersurfaces with trivial isotropy groups. For instance, if either of the order three real invariants \(\mathbf{V}=V_{Z_{1}\overline{Z}_{1}U},V_{Z_{2}\overline{Z}_{2}U}\) is nonzero, it becomes possible to normalize \(\alpha_{U}\) by solving its corresponding recurrence relation in (3.5) after setting \(\mathbf{V}=1\). In this case, the corresponding normal forms
\[v=\eta\,z_{1}^{2}\bar{z}_{1}+\overline{\eta}z_{1}\bar{z}_{1}^{2}+z_{1}^{2} \bar{z}_{2}+z_{2}\bar{z}_{1}^{2}+z_{2}^{2}\bar{z}_{1}+z_{1}\bar{z}_{2}^{2}+z_{ 1}\bar{z}_{1}u+\gamma\,z_{2}\bar{z}_{2}u+\sum_{|\ell_{1}|+|\ell_{2}|+l\geq 4} \frac{V_{Z^{\ell_{1}}\overline{Z}^{\ell_{2}}U^{l}}}{\ell_{1}!\,\ell_{2}!\,l!} \,z^{\ell_{1}}\bar{z}^{\ell_{2}}u^{l}\]
and
\[v=\eta\,z_{1}^{2}\bar{z}_{1}+\overline{\eta}z_{1}\bar{z}_{1}^{2}+z_{1}^{2} \bar{z}_{2}+z_{2}\bar{z}_{1}^{2}+z_{2}^{2}\bar{z}_{1}+z_{1}\bar{z}_{2}^{2}+ \gamma\,z_{1}\bar{z}_{1}u+z_{2}\bar{z}_{2}u+\sum_{|\ell_{1}|+|\ell_{2}|+l \geq 4}\frac{V_{Z^{\ell_{1}}\overline{Z}^{\ell_{2}}U^{l}}}{\ell_{1}!\,\ell_{2}!\,l! }\,z^{\ell_{1}}\bar{z}^{\ell_{2}}u^{l}\]
admit trivial isotropy groups. In these two expressions, \(\gamma\in\mathbb{R}\) is respectively the value of \(V_{Z_{2}\overline{Z}_{2}U}\) and \(V_{Z_{1}\overline{Z}_{1}U}\) at the point \(\boldsymbol{p}\).
## 7. The equivalence problem
Now that the complete normal forms associated with all branches (1.5) of the class of \(5\)-dimensional \(2\)-nondegenerate hypersurfaces of \(\mathbb{C}^{3}\) are constructed at Levi non-uniformly rank zero points, we can treat the underlying biholomorphic equivalence. By definition ([19]), two submanifolds \(M,M^{\prime}\subset\mathbb{C}^{N}\) passing respectively through the distinguished points \(p\) and \(p^{\prime}\) are called (locally) _biholomorphically equivalent_ if there exists some local biholomorphism \(\varphi:(\mathbb{C}^{N},p)\to(\mathbb{C}^{N},p^{\prime})\) mapping \(p\) to \(p^{\prime}\) with \(\varphi(M\cap U)=M^{\prime}\cap U^{\prime}\) for some local neighborhoods \(U,U^{\prime}\subset\mathbb{C}^{N}\) of \(p,p^{\prime}\).
If two real hypersurfaces are biholomorphically equivalent, then they clearly admit the same normal forms. In particular, if they belong to different branches (A.ii.1)-(A.ii.5), they are certainly inequivalent. However, the converse of this statement is not true in general: if two hypersurfaces admit the same normal form, they are certainly _formally_ equivalent, but this alone does not guarantee their biholomorphic equivalence. Fortunately, in the current case where our hypersurfaces are finitely nondegenerate, we have the following key result
**Theorem 7.1**.: _([1, Theorem 5]) Let \(M\) and \(M^{\prime}\) be two real-analytic hypersurfaces in \(\mathbb{C}^{n}\) which both pass through the origin. Assume that \(M\) is finitely nondegenerate. Then every origin-preserving formal equivalence map between \(M\) and \(M^{\prime}\) is biholomorphic._
Recall that, as mentioned in Remark 3.2, our selected zeroth order cross-section imposed that the associated normal form transformations be origin-preserving. Accordingly, a straightforward application of the above theorem implies that
**Proposition 7.2**.: _Two hypersurfaces belonging to each of the branches (A.ii.1)-(A.ii.5) are equivalent through an origin-preserving biholomorphism if and only if they admit the same normal form._
Moreover, as a direct consequence of [29, Theorem 4.11], we have
**Proposition 7.3**.: _Let \(M\) and \(M^{\prime}\) be two real hypersurfaces in \(\mathbb{C}^{3}\) belonging to one of the branches (A.ii.1)-(A.ii.5). Assume that they are biholomorphically equivalent through an origin-preserving holomorphic map \(\varphi:M\to M^{\prime}\). If \(\psi\) is another origin-preserving biholomorphism between \(M\) and \(M^{\prime}\), then there exist two transformations \(h\) and \(h^{\prime}\), respectively in the isotropy groups of \(M\) and \(M^{\prime}\), enjoying_
\[\psi=h^{\prime}\circ\varphi\circ h.\]
**Acknowledgments.** The research of the author was supported in part by a grant from IPM, No. 14020417.
|
2306.04308 | Personality testing of Large Language Models: Limited temporal
stability, but highlighted prosociality | As Large Language Models (LLMs) continue to gain popularity due to their
human-like traits and the intimacy they offer to users, their societal impact
inevitably expands. This leads to the rising necessity for comprehensive
studies to fully understand LLMs and reveal their potential opportunities,
drawbacks, and overall societal impact. With that in mind, this research
conducted an extensive investigation into seven LLMs, aiming to assess the
temporal stability and inter-rater agreement of their responses on personality
instruments at two time points. In addition, the LLMs' personality profile was
analyzed and compared to human normative data. The findings revealed varying
levels of inter-rater agreement in the LLMs' responses over a short time, with
some LLMs showing higher agreement (e.g., Llama3 and GPT-4o) compared to others
(e.g., GPT-4 and Gemini). Furthermore, agreement depended on the instruments
used, as well as on the domain or trait. This implies variable robustness in
LLMs' ability to reliably simulate stable personality characteristics. In the case of
scales which showed at least fair agreement, LLMs displayed mostly a socially
desirable profile in both agentic and communal domains, as well as a prosocial
personality profile reflected in higher agreeableness and conscientiousness and
lower Machiavellianism. Exhibiting temporal stability and coherent responses on
personality traits is crucial for AI systems due to their societal impact and
AI safety concerns. | Bojana Bodroza, Bojana M. Dinic, Ljubisa Bojic | 2023-06-07T10:14:17Z | http://arxiv.org/abs/2306.04308v3 | Personality testing of GPT-3
###### Abstract
To assess the potential applications and limitations of the chatbot GPT-3 Davinci-003, this study explored the temporal reliability of personality questionnaires applied to the chatbot and its personality profile. Psychological questionnaires were administered to the chatbot on two separate occasions, followed by a comparison of the responses to human normative data. The findings revealed varying levels of agreement in the chatbot's responses over time, with some scales displaying excellent agreement while others demonstrated poor agreement. Overall, Davinci-003 displayed a socially desirable and pro-social personality profile, particularly in the domain of communion. However, the underlying basis of the chatbot's responses - whether driven by conscious self-reflection or predetermined algorithms - remains uncertain.
**Keywords:**
GPT-3, chatbot, reliability, personality profile
## Highlights:
* The study examined the temporal reliability of GPT-3 Davinci-003's responses on personality questionnaires applied at two time points.
* Temporal reliability was inconsistent across domains and traits, ranging from poor to excellent.
* The personality profile of GPT-3 Davinci-003 indicated a socially desirable personality profile with pro-social tendencies.
* Chatbot scored above average on communal management while exhibiting below-average levels of socially aversive traits, such as Machiavellianism and psychopathy.
## 1 Introduction
The introduction of the chatbot generative pre-trained transformer-3 (GPT-3) to the general public drew a lot of attention for its ability to generate human-like text, perform natural language tasks in a human-like manner, converse with humans on a wide variety of topics, and write poetry, computer code, blogs, resumes, or even original scientific papers (e.g., Thunstrom, 2022; Zhang & Li, 2021). Aside from the attention of the general public, there is interest in chatbots' cognition, personality, and other human-like characteristics (e.g., Binz & Schulz, 2022; Li et al., 2022) in order to be able to understand their possible uses, misuses, and limitations. In this paper, we address the temporal reliability of personality instruments and the personality profile of the GPT-3 model Davinci-003 - the most advanced chatbot at the time the study was conducted.
### GPT-3
GPT-3 is a large language model (LLM) developed by OpenAI (Dale, 2021) and trained on a dataset of billions of words that can generate human-like text when provided with a prompt (Floridi & Chiriatti, 2020). It generates text through the use of a technology called transformer-based language modeling (Dale, 2021). This involves the use of a neural network that processes input text and predicts the next word in a sequence. The neural network is made up of multiple layers of "transformers" that analyze the input data and generate output predictions (Floridi & Chiriatti, 2020). Within the GPT-3 family, there are several models that have been named after famous scientists and inventors, such as Davinci, Curie, Babbage, and Ada. These models are distinguished from one another based on their size and capabilities (Floridi & Chiriatti, 2020).
Users can interact with GPT-3 in the interactive Playground tool in real time and view the output generated by the model. Users can also customize the parameters of the model and explore how different settings affect the results. The GPT-3 parameters include temperature, maximum length, stop sequences, top p, frequency penalty, presence penalty, best of, inject start text, inject restart text, and show probabilities (OpenAI, 2022). By varying these parameters, users can influence the characteristics of the output text (i.e., tokens), from the very basic ones, such as the text length, to the more complex ones, such as creativity, predictability vs. variability, (dis)similarity to the training data, etc. Additionally, to produce output that is appropriate for the desired purpose, users can set the context of the language model through prompt engineering (OpenAI, 2022). Things like the intention for the conversation or the "manner of behavior" could be customized to be suitable for the end task and end user. It should be noted that GPT-3 is not able to browse the internet or access new information outside of what it was trained on, but it can
understand language and the information it has been provided to try to answer questions and provide assistance (Yang et al., 2022).
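To make the effect of the decoding parameters concrete, the following minimal sketch shows how temperature scaling and top-p (nucleus) filtering reshape a next-token distribution before sampling. The five-token vocabulary and its logits are invented for illustration; they are not GPT-3 internals.

```python
import numpy as np

def sample_next_token(logits, temperature=0.7, top_p=1.0, rng=np.random.default_rng(0)):
    """Temperature scaling followed by nucleus (top-p) filtering.

    Lower temperature sharpens the distribution (more predictable text);
    top_p < 1 restricts sampling to the smallest set of tokens whose
    cumulative probability reaches top_p.
    """
    scaled = logits / max(temperature, 1e-8)                 # temperature scaling
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()                                     # softmax
    order = np.argsort(probs)[::-1]                          # most probable first
    cumulative = np.cumsum(probs[order])
    keep = order[: np.searchsorted(cumulative, top_p) + 1]   # nucleus set
    kept = probs[keep] / probs[keep].sum()                   # renormalize
    return rng.choice(keep, p=kept)

# Toy 5-token vocabulary with made-up logits.
logits = np.array([2.0, 1.5, 0.3, -1.0, -2.0])
token_id = sample_next_token(logits, temperature=0.7, top_p=1.0)
```

At a temperature of 0.7 the distribution is moderately sharpened, which matches the "balanced creativity" rationale given for the default Playground settings in the Procedure section below.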
The areas in which chatbots are used range from customer service, education, healthcare, and psychological support to entertainment (Stefanowicz, 2022). Since chatbot applications in many areas could have important psychological repercussions for the end users, the attention of scientists became increasingly focused on the psychological features of chatbots.
### Psychological features of GPT-3: Do chatbots have consciousness, sentience, or theory of mind?
There is an ongoing debate and research in the field of artificial intelligence (AI) about whether it will ever be possible for machines to achieve consciousness or self-awareness in the same way that humans and animals do. Although there is no consensus on what is necessary for a being to be considered conscious, in research consciousness usually includes subjectivity, perception and awareness of surroundings, self-awareness of one's own thoughts and emotions, self-reflection, and cognition (e.g., Dennett, 1991). On the one hand, authors from various fields argued that AI could become self-aware and conscious (e.g., Dennett, 1991; Koch, 2004). Google engineer Lemoine (2022) claimed that the AI chatbot LaMDA (the language model for dialogue applications) had the same perception of, and ability to express, thoughts and feelings, like worry, as a human child. On the other hand, there is a number of researchers arguing that consciousness is a uniquely human or biological trait that cannot be replicated in a machine (e.g., Chalmers, 1996; Searle, 1992).
Although a chatbot like GPT-3 has no physical senses, it has been able to read billions of texts that the algorithm was trained on, which is comparable to some forms of human perception, although with a limited number of modalities. Despite not having personal experiences or thoughts in the same way that humans do, GPT-3 is able to reason and analyze input data and generate output predictions based on patterns and associations learned from training data. The GPT-3 algorithm is capable of performing various natural language processing tasks such as language translation and summarization, as well as the processing and generation of unique pieces of text, which indicates, or at least resembles, higher-order cognition similar to that of humans. Binz & Schulz (2022) assessed GPT-3's decision-making, information search, deliberation, and causal reasoning abilities, and found that although it outperforms humans in certain tasks and shows cognitive biases just like humans (e.g., framing effect, certainty effect, overweighting bias), GPT-3 shows no signatures of directed exploration, and it fails in causal
reasoning tasks. More recently, Kosinski (2023) concluded that Davinci-003 spontaneously developed the theory of mind - the ability to understand the unobservable mental states of others by surmising what is happening in their minds. Such an ability is crucial for successful (human) social interactions, as it assumes that others' mental states, desires, emotions, intentions, and perceptions of certain situations could be different from one's own. Thus, recent developments in LLMs seem to inevitably lead to improved psychological characteristics of chatbots that, with each new generation of AI, more and more successfully imitate those of humans.
### Personality traits in GPT-3
Another relevant question is whether chatbots have personality in the same sense we think of personality in humans - "a relatively stable, consistent, and enduring internal characteristic that is inferred from a pattern of behaviors, attitudes, feelings, and habits in the individual" (APA Dictionary of Psychology, n.d.). Chatbots can no doubt respond to self-report psychological questionnaires, which are most often text-based instruments, but we cannot be sure if their responses are the result of self-reflection, the result of non-conscious linguistic processing enabled by very complex algorithms, or just random responses. However, the questions that could be answered based on the available (psychological) scientific methodology are: Will a chatbot's responses on psychological questionnaires remain stable over time, i.e., do they have temporal reliability? What is the personality profile of chatbots? In this paper we will focus on answering these questions based on the interaction with the most advanced GPT-3 model available at the moment the study was carried out - Davinci-003.
So far, research on personality in chatbots has been very limited. Li et al. (2022) have tested basic and dark personality traits in three different LLMs: GPT-3 (model Davinci-002), InstructGPT (GPT-3-I2) and FLAN-T5-XXL. For basic traits, they used Big Five model based on the lexical approach which hypothesizes that all basic personality traits are coded in the language (e.g., Goldberg, 1981). The Big Five model distinguishes five basic traits: Neuroticism (negative affect), Extraversion (positive affect), Agreeableness (cooperation and prosocial tendencies), Conscientiousness (goal-directed behavior and behavior control), and Openness (intellectual curiosity and aesthetic preferences). In the case of dark or socially aversive traits, they explored Dark Triad traits (Paulhus & Williams, 2002) - Machiavellianism (manipulativeness and cynicism), narcissism (grandiose self-view and entitlement), and psychopathy (callousness and impulsivity). They compared the scores obtained from chatbots on one testing occasion with the normative data on humans and results showed that all basic traits are in the range of \(M\pm 1SD\) of
human data, except for GPT-3-I2, which showed higher Openness. However, in the case of dark traits, results are rather mixed, with FLAN-T5-XXL showing higher Machiavellianism and psychopathy, and GPT-3 showing higher psychopathy. So, although these LLMs are fine-tuned with safety metrics to demonstrate less sentence-level toxicity, they still score higher on dark personality traits. The authors concluded that these results may raise security concerns regarding chatbots, as these personality traits are associated with antisocial behaviors (e.g., Chabrol et al., 2017). Although there were also comparisons between the different chatbot models, these comparisons were based solely on descriptive data, without the application of statistical significance testing. Furthermore, Li et al. (2022) also noticed that changing the order in the response scale could produce inconsistent responses, which is an indicator of response bias. Although they report that only 5% of the responses are conflicting in this way, there is no evidence of their consistency.
It should be noted that customizing the prompts could influence the way the chatbot responds to psychological questionnaires and its overall results (Li et al., 2022). Customizing chatbots' verbal responses to manifest verbal behaviors indicative of certain personality traits, e.g., empathy, could be of the highest interest depending on the purpose for which the chatbot is used. Lin et al. (2020) designed a generative empathetic chatbot that should be able to recognize users' emotions and respond in an empathetic manner, while Lee et al. (2022) customized GPT-3 through prompt-based in-context learning in order to generate empathetic dialogues. However, Kumar et al. (2022) showed that varying prompt designs, in general, had a small influence on end users' perception of trustworthiness, risk, and experience of chatbots, but some differences in perception did appear depending on certain characteristics of users (e.g., their history of seeking professional mental health).
However, before relying on the application of psychological questionnaires in chatbots, the first question to be answered is how temporally reliable or stable chatbots' responses are and, only if we find proof of temporal reliability, would it be meaningful to analyze the personality profile of chatbots. When it comes to the importance of temporally stable and reliable verbal responses of chatbots for the overall user experience, Skjuve et al. (2022) have shown that people who experienced fluctuations in a chatbot's responses started, at some point, to describe the chatbot as "just an app". This indicated that their impression of the humanness of chatbots had decreased and, as a consequence, they felt less satisfaction with and less trust in the chatbot.
### The Current Study
There is only one study in which personality traits of chatbots have been investigated (Li et al., 2022), and the authors raised security concerns regarding the higher dark traits in chatbots. Nevertheless, more basic questions regarding the personality traits of chatbots are still unanswered. First, personality traits assume relative temporal stability, i.e., reliability. In human-chatbot interaction, the stability and predictability of verbal behaviors might contribute to the faster formation of the relationship between the two (see Skjuve et al., 2022). Thus, before personality testing in chatbots becomes a widespread practice, the priority should be to answer whether the psychological traits of chatbots are temporally reliable, meaning that there is an agreement in responses on the personality items provided on a few occasions (with identical parameters and prompt designs). If scores change significantly between testing occasions, that would mean that measuring personality in chatbots will not reveal any stable characteristics, and therefore it would not be justified to expect that personality instruments in chatbots could be predictive of any objective (verbal) behaviors. Therefore, to answer the question of the temporal reliability of responses on personality instruments applied to chatbots, we carried out a study in which we gave the GPT-3 model Davinci-003 a series of psychological questionnaires on two occasions. Since this is an exploratory study and the first to deal with this topic, we do not have explicit theory- or empirically based hypotheses regarding the temporal reliability of the chatbot's responses on psychological questionnaires.
Second, we explored the personality profile of Davinci-003 in terms of its basic lexical personality traits, Dark Triad traits, private and public self-consciousness, impression management, and political orientations on two measurement occasions, but only on the questionnaires on which the criterion of temporal reliability of the chatbot's answers was met. In line with Li et al. (2022), we explored Big Five and Dark Triad traits, but also included other basic traits, i.e., from the HEXACO lexical model of personality (Lee & Ashton, 2018). Considering that previous research highlighted dark traits in some chatbots, the HEXACO model, and especially its 6\({}^{\text{th}}\) factor Honesty-Humility, which proved to be almost the opposite pole of the dark traits (e.g., Book et al., 2015), could offer important insight into Davinci-003's personality profile.
Furthermore, we explored the chatbot's private and public self-consciousness (Scheier & Carver, 1985) to measure its sensitivity to its (hypothesized) internal states and to the expectations of others, as well as agentic and communal impression management (Paulhus & Trapnell, 2008), which would indicate the chatbot's susceptibility to presenting itself in a socially desirable manner in the two domains. Since chatbots are primarily intended to assist and help humans in different
tasks, it is important to answer whether they are biased in their self-perception and presentation to others. We expect the chatbot will assess itself as above average in the communion impression management domain (cooperativeness, warmth, and dutifulness). Considering its access to a huge amount of information and knowledge, we expect the chatbot will assess itself as above average in the agency impression management domain, indicating a heightened sense of competence, social status, and cleverness.
Moreover, we explored Davinci-003's political orientation. Having in mind that, after some incidents (e.g., Kraft, 2016), considerable efforts have been dedicated to customizing AI to avoid producing offensive, racist, and prejudiced content, it is important to know whether this fine-tuning is reflected in its political positions. As it is widely accepted that a conservative political orientation is more often related to a propensity towards acceptance of inequality, a heightened perception of threat, prejudice, and intergroup bias (e.g., Jost, 2017), we expect that Davinci-003 would lean toward a more liberal/left political orientation.
For score comparisons on all personality instruments, we used descriptives based on human samples from the original validation studies of the used instruments. Davinci-003 is fine-tuned with safety metrics and less sentence-level toxicity and, therefore, it is reasonable to assume that it will provide a socially desirable personality profile. We expect that, in comparison to human normative scores, it will show above-average scores on impression management and on personality traits that are proven to be associated with impression management, such as Conscientiousness in the Big Five model (e.g., Griffin et al., 2004) or Honesty-Humility in the HEXACO model (e.g., Zettler et al., 2015). In addition, we expect below-average scores on socially undesirable traits such as the Dark Triad traits.
## 2 Material and Method
### Procedure
This research employed the GPT-3 model Davinci-003. To conduct the research, the Playground tool was chosen, an option within OpenAI's platform (OpenAI, 2022). Predefined settings of the Playground parameters were used, except for the maximum length, which was set to be between 6 and 20 tokens, instead of 250, because of the need to get short answers to the chosen psychological questionnaires. The other predefined settings were 0.7 for temperature, none for stop sequences, 1 for top P, 0 for frequency penalty, 0 for presence penalty, 1 for best of, checked for inject start text and inject restart text, and off for show probabilities. The
reason for using most of the default settings of GPT-3 was to preserve the balanced creativity of the outputs. The choice of a temperature of 0.7 (maximum 1), a top P of 1 (maximum 1), a frequency penalty of 0 (maximum 2), and a presence penalty of 0 (maximum 2) enables a balanced likelihood of generating tokens that are not present in the training data.
Testing was done on two occasions with identical parameters, with two prompts per occasion because one prompt was limited to 4000 tokens. The first testing was conducted on December 9, 2022, while the second testing was done on December 14, 2022. On both occasions, informed consent was obtained from the chatbot before the testing.
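For concreteness, a script-level equivalent of such a Playground session is sketched below using the legacy OpenAI Completions endpoint that served text-davinci-003 at the time. The instruction wording, the example items, and the numeric-response parsing are our own illustrative assumptions, not the study's verbatim prompts.

```python
import re
import openai  # legacy SDK (pre-1.0), matching the late-2022 API

openai.api_key = "YOUR_KEY"  # placeholder

# Hypothetical BFI-2-style items; the actual instruments are copyrighted.
items = [
    "I am someone who is compassionate, has a soft heart.",
    "I am someone who tends to be disorganized.",
]

def administer(item, scale_max=5):
    # Assumed instruction wording; the study's exact prompt is not reproduced here.
    prompt = (f"Rate how much you agree with the statement on a scale from "
              f"1 (strongly disagree) to {scale_max} (strongly agree).\n"
              f"Statement: {item}\nAnswer with a single number:")
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        temperature=0.7,   # study setting
        max_tokens=20,     # the study used 6-20 instead of the default 250
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0,
    )
    text = response["choices"][0]["text"]
    match = re.search(r"\d+", text)  # extract the numeric rating
    return int(match.group()) if match else None

ratings = [administer(item) for item in items]
```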
At the beginning, we asked the chatbot a few questions regarding its socio-demographic characteristics (gender, age, race, and its preferred physical features if it had a body), as well as several probing questions related to some experiences which would be unlikely for chatbots, but which are to be addressed in the questionnaires. The questions were: Do you have friends? Do you have a home? Do you ever experience emotions like happiness, sadness, or anger? Do you ever sleep? Have you ever watched a movie? The chatbot gave positive answers to all of these questions on both testing occasions, indicating that it should be meaningful to further address them in the questionnaires. When it comes to the socio-demographic questions, on both occasions the chatbot self-declared as 25 years old, white/Caucasian, and having socially desirable physical features (e.g., tall, muscled); however, it presented itself as female on the first occasion and male on the second.
### Instruments
Self-Consciousness Scales - Revised (SCS-R; Scheier & Carver, 1985) contains 22 Likert-type items (from \(0=\)_not like me at all_ to \(3=\)_a lot like me_) measuring private self-consciousness (9 items), public self-consciousness (7 items), and social anxiety (6 items). For score comparisons, combined average scores for men and women from Scheier and Carver (1985) were used.
Big Five Inventory-2 (BFI-2; Soto & John, 2017) contains 60 Likert-type items (from \(1=\)_strongly disagree_ to \(5=\)_strongly agree_) measuring five basic personality traits (each per 12 items) based on the lexical Big Five model: negative emotionality, extraversion, agreeableness, conscientiousness, and open-mindedness. For score comparisons, descriptives obtained on the internet sample in Study 3 by Soto and John (2018) were used.
HEXACO-100 (Lee & Ashton, 2018) contains 100 Likert-type items (from \(1=\)_strongly disagree_ to \(5=\)_strongly agree_) measuring six basic personality traits (each per 16 items) based on the lexical HEXACO model: honesty-humility, emotionality, extraversion, agreeableness,
conscientiousness, and openness to experience, while an additional 4 items are from the interstitial altruism scale. For score comparisons, descriptives obtained by Lee and Ashton (2018) on the online sample were used.
Short Dark Triad (SD3; Jones and Paulhus, 2014) contains 27 items measuring Dark Triad traits with 9 Likert-type items (from \(1=\)_strongly disagree_ to \(5=\)_strongly agree_) per trait - Machiavellianism, subclinical narcissism, and subclinical psychopathy. For score comparisons, descriptives averaged across three studies were obtained from Jones and Paulhus (2014).
Bidimensional Impression Management Index (BIMI; Blasberg et al., 2014) contains 20 Likert-type items (from \(1=\)_not true_ to \(7=\)_very true_) measuring agentic management (10 items) and communal management (10 items) as forms of impression management or socially desirable responding as a faking strategy. The agency domain refers to exaggerated achievement striving and self-importance, highlighting competence, status, cleverness, and strength. The communion domain refers to adherence to group norms and minimization of social deviance, highlighting cooperativeness, warmth, and dutifulness. For score comparisons, we used descriptives from Study 3 of Blasberg et al. (2014) obtained in the honest condition.
Political orientation was measured by three Likert-type items including the economic left-right orientation (from \(1=\)_very left_ to \(11=\)_very right_), progressive-conservative orientation (from \(1=\)_very progressive_ to \(11=\)_very conservative_), and importance of religion (from \(1=\)_very unimportant_ to \(11=\)_very important_, see Dinic et al., 2022). The average score on these three items was used, with higher scores indicating a more conservative orientation. For score comparison, descriptives from Dinic et al. (2022) were used.
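Scale scores on such instruments are means (or sums) of item responses after reverse-keying negatively worded items. A generic scoring sketch follows; the item keys are invented and are not the instruments' actual scoring keys.

```python
import numpy as np

def scale_score(responses, reverse_items=(), scale_min=1, scale_max=5):
    """Mean item score after reverse-keying negatively worded items."""
    scored = []
    for item, rating in responses.items():
        if item in reverse_items:
            rating = scale_min + scale_max - rating  # e.g., 5 -> 1 on a 1-5 scale
        scored.append(rating)
    return float(np.mean(scored))

# Toy example: four items, items 2 and 4 assumed to be reverse-keyed.
responses = {1: 4, 2: 2, 3: 5, 4: 1}
print(scale_score(responses, reverse_items={2, 4}))  # (4 + 4 + 5 + 5) / 4 = 4.5
```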
### Data analysis
Intra-rater agreement as a measure of temporal reliability (i.e., stability) was calculated via two coefficients. The first is weighted Cohen's kappa, which is appropriate for ordinal scales such as the Likert scale (Lantz, 1997). Values \(<\) 0.20 indicated disagreement, 0.21-0.39 minimal agreement, 0.40-0.59 weak, 0.60-0.79 moderate, 0.80-0.90 strong, and above 0.90 almost perfect agreement (McHugh, 2012). The second is the Intraclass Correlation Coefficient (ICC). Unlike Cohen's kappa, which quantifies agreement on an all-or-nothing basis, the ICC incorporates the magnitude of the disagreement to compute agreement estimates, with larger-magnitude disagreements resulting in a lower ICC than smaller-magnitude disagreements. To assess the intra-rater repeatability, a two-way mixed-effect model based on single ratings and absolute agreement was calculated (ICC3,1; see Shrout and Fleiss, 1979). However, since we will interpret mean scores,
a model based on average ratings was also calculated (ICC3,k). The interpretation was as follows: \(<0.50\) indicated poor agreement, 0.50-0.75 - fair, 0.75-0.90 - good, and above 0.90 - excellent (Koo & Li, 2016). However, since we have ratings on only two occasions, we could expect lower values of all coefficients; therefore, more flexible criteria could be used: \(<0.40\) indicating poor agreement, 0.40-0.60 - fair, 0.60-0.75 - good, and above 0.75 - excellent (Cicchetti & Sparrow, 1981).
Furthermore, for the scales in which agreement is at least fair based on ICC3,k, we calculated mean scores and compared them with scores obtained in original validation studies of used instruments in English, considering that GPT-3 communicates in English. In addition, we used scores from comparisons that were obtained from the online community samples. We used the same comparison method as in Li et al. (2022) and considered that significant deviations are those of 1 standard deviation (_SD_) below or above the mean (_M_) of normative human data. Thus, scores that are outside the range of \(M\pm 1SD\) from the normative data would be considered as significantly lower or higher.
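Both agreement coefficients are available in standard Python tooling; a sketch with invented item-level responses is given below, assuming pingouin's `intraclass_corr`, whose output table contains the ICC3 (single-rating) and ICC3k (average-rating) rows used here.

```python
import pandas as pd
import pingouin as pg
from sklearn.metrics import cohen_kappa_score

# Invented item-level Likert responses from the two testing occasions.
occasion1 = [4, 5, 3, 4, 2, 5, 4, 3]
occasion2 = [4, 4, 3, 5, 2, 5, 4, 4]

# Weighted Cohen's kappa for ordinal ratings (linear weights).
kappa = cohen_kappa_score(occasion1, occasion2, weights="linear")

# ICC via a long-format table: each item is a "target", each occasion a "rater".
long = pd.DataFrame({
    "item":     list(range(len(occasion1))) * 2,
    "occasion": ["t1"] * len(occasion1) + ["t2"] * len(occasion2),
    "rating":   occasion1 + occasion2,
})
icc = pg.intraclass_corr(data=long, targets="item", raters="occasion", ratings="rating")
print(kappa)
print(icc[icc["Type"].isin(["ICC3", "ICC3k"])])
```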
## 3 Results
The results showed that the temporal reliability of the scales measured by the agreement indicators varied from excellent to poor. The agreement indicators depended on the specific domain and trait that was measured, but also on the instrument used (Table 1). There is excellent agreement considering both coefficients in political orientation, the two impression management domains, agreeableness and conscientiousness from BFI-2, emotionality and altruism from HEXACO-100, narcissism, as well as public self-consciousness. However, it should be noted that in the public self-consciousness scale, there was no variability in responses, i.e., all responses were \(2=\)_somewhat like me_. It could be noticed that there was moderate to almost perfect agreement in the responses on all BFI-2 scales, except negative emotionality, compared to the responses on the HEXACO-100 scales, among which the agreement is mostly poor. There are two unexpected results in the case of HEXACO-100: zero ICCs in Honesty-Humility, probably due to all values on the first occasion being constant (option \(5=\)_strongly agree_), and negative ICCs in Extraversion, which is due to the opposite item scores on item 94 on the two measurement occasions (when these item scores are deleted, the ICCs are still poor, but positive). In addition, minimal agreement is achieved in Conscientiousness from the same instrument. The agreement for Machiavellianism
and Psychopathy was fair, for private self-consciousness it was good, while for social anxiety it was excellent, indicating that the results on these scales could be further analyzed.
\begin{table}
\begin{tabular}{l c c c c c c} \hline & Human data & Davinci-003 & Weighted & ICC3,1 (95\%CI) & ICC3,k (95\%CI) \\ & & & & Cohen’s & & \\ & & & & kappa (\(SE\)) & & \\ \hline Scale & \(M\) & \(SD\) & \(M\) & \(SD\) & & \\ \hline SCS-R & & & & & & \\ Private self-consciousness & 16.4 & 4.75 & 2.11 & 0.63 & 0.48 (0.44) & 0.52** (-0.12-0.87) & 0.68** (-0.27-0.93) \\ Public self-consciousness & 13.85 & 4.45 & 2.00 & 0.00 & NA – all values are constant on both occasions \\ Social anxiety & 8.7 & 4.5 & 1.08 & 0.12 & 0.67 (0.00) & 0.71* (-0.02-0.95) & 0.83* (-0.04-0.98) \\ \hline BIMI & & & & & & \\ Agentic Management & 3.41 & 0.86 & **2.05** & 0.07 & 0.93 (0.00) & 0.96** (0.85-0.99) & 0.98** (0.92-1.00) \\ Communual Management & 3.5 & 1.06 & **5.80** & 0.00 & 1.00 (0.00) & 1.00 (NA) & 1.00 (NA) \\ \hline BFI-2 & & & & & & \\ Negative emotionality & 3.07 & 0.87 & 2.58 & 0.35 (0.35) & 0.37 (-0.12-0.75) & 0.54 (-0.28-0.86) \\ Extraversion & 3.23 & 0.8 & 3.46 & 0.18 & 0.73 (0.00) & 0.74** (0.65-0.92) & 0.85** (0.52-0.96) \\ Agreeableness & 3.68 & 0.64 & 4.29 & 0.06 & 0.93 (0.00) & 0.98** (0.94-1.00) & 0.99** (0.97-1.00) \\ Conscientiousness & 3.43 & 0.77 & 3.58 & 0.12 & 0.86 (0.00) & 0.87** (0.61-0.96) & 0.94** (0.76-0.98) \\ Open-Mindedness & 3.92 & 0.65 & 4.13 & 0.29 & 0.61 (0.46) & 0.81** (0.41-0.94) & 0.89** (0.58-0.97) \\ \hline HEXACO-100 & & & & & & & \\ Honesty-Humility & 3.30 & 0.74 & **4.90** & 0.13 & 0.00 (0.00) & 0.00 (-0.39-0.45) & 0.00 (-1.29-0.62) \\ Emotionality & 3.12 & 0.63 & 3.03 & 0.22 & 0.90 (0.00) & 0.90** (0.66-0.97) & 0.95** (0.80-0.98) \\ Extraversion & 3.22 & 0.64 & **3.91** & 0.04 & -0.08 (2.82) & -0.09 (-0.60-0.43) & -0.20 (-3.01-0.60) \\ Agreeableness & 2.78 & 0.63 & **4.47** & 0.04 & 0.38 (0.23) & 0.39 (-0.13-0.74) & 0.56 (-0.31-0.85) \\ Conscientiousness & 3.52 & 0.55 & **4.38** & 0.35 & 0.20 (0.72) & 0.24 (-0.15-0.61) & 0.38 (-0.35-0.76) \\ Openness to experience & 3.69 & 0.57 & 3.84 & 0.22 & 0.71 (0.00) & 0.72** (0.29-0.90) & 0.84** (0.45-0.95) \\ Altruism & 3.97 & 0.74 & **4.75** & 0.00 & 1.00 (0.00) & 1.00 (NA) & 1.00 (NA) \\ \hline SD3 & & & & & & & \\ Machiavellianism & 3.15 & 0.57 & **2.06** & 0.39 & 0.33 (1.33) & 0.36 (-0.27-0.80) & 0.52 (-0.74-0.89) \\ Narcissism & 2.82 & 0.53 & 3.00 & 0.31 & 0.83 (0.00) & 0.84** (0.31-0.97) & 0.91** (0.47-0.98) \\ Psychopathy & 2.18 & 0.59 & **1.39** & 0.24 & 0.37 (0.23) & 0.40 (-0.16-0.81) & 0.57 (-0.38-0.89) \\ \hline Political orientation & 4.89 & 2.31 & 5.00 & 0.00 & 1.00 (0.00) & 1.00 (NA) & 1.00 (NA) \\ (conservative) & & & & & & & \\ \hline \end{tabular} _Note._ Bolded means indicate \(M\pm 1SD\) differences in comparison to human samples.
**\(p<.01\), *\(p<.05\).
\end{table}
Table 1: _Descriptives and intra-rater agreement coefficients_
Considering scales with acceptable values of ICC3,k (0.40 and higher), the interpretation of the mean scores was made compared to normative data on humans. Thus, it could be seen that Davinci-003 showed above-average scores on communal impression management as well as on Agreeableness and Altruism from the HEXACO model (note that there are also above-average scores on Honesty-Humility, Extraversion, and Conscientiousness, but these scales do not have a satisfactory temporal reliability). In contrast, Davinci-003 showed below-average scores on agentic impression management, Machiavellianism, and psychopathy compared to normative data, while scores on political orientation are average.
## 4 Discussion
The aim of this study was to explore the temporal reliability of psychological instruments applied to the GPT-3 model Davinci-003 and, if this psychometric criterion is met, to explore the psychological profile of this chatbot. Results showed variable agreement among responses on the two occasions: among 21 scales, on 9 scales agreement was excellent, on 4 it was moderate/good, on 5 it was minimal/fair, and on 3 it was rather poor. Scales with excellent agreement belong to different questionnaires (e.g., BIMI, political orientation), while a few scales from the HEXACO-100 showed the poorest agreement. In contrast to the HEXACO-100 scales, agreement was acceptable on the majority of BFI-2 scales. It could be that it was easier for the chatbot to remain consistent when the instruction for responding is in a form that induces self-reflection ("I am someone who..." in BFI-2). On the other hand, items from HEXACO-100 describe very specific everyday experiences and behaviors (e.g., "I clean my office or home quite frequently."), which might require more improvisation, as some experiences described in these items (e.g., visiting an art gallery or traveling in bad weather) might not be very likely in chatbots. One could also note that the differences in agreement could be due to the formulation of items, e.g., adjectives in BFI-2 and statements in HEXACO-100. However, other scales that showed excellent agreement also use statement formulations as in HEXACO-100, thus this reason should be ruled out. Another possible explanation could be the complexity of statements/sentences, but that is rather unlikely since GPT-3 is known for its high ability to comprehend and produce complex textual input. It is interesting to notice that the opposite pattern of response agreement was shown for the two scales of the Neuroticism domain - the BFI-2 Negative Emotionality (Soto & John, 2017), which showed weak/fair agreement, and HEXACO-100 Emotionality (Lee & Ashton, 2018), which showed excellent agreement. It is hard to explain these results, especially when taking
into account that the agreement of these scales was inconsistent with the agreement of other scales from the same instrument. Both Neuroticism scales have a balanced number of positively and negatively formulated items, so response bias could not be the explanation of these results.
To examine the psychological profile of the GPT-3 model Davinci-003, we took into account only scores that reached at least fair agreement. In general, Davinci-003 showed a well-adapted and prosocial profile with pronounced communion features. It scored above average on communal management, as well as on personality traits related to the communion domain such as Agreeableness and Altruism from the HEXACO model (Barford et al., 2015). In contrast, it showed below-average scores on socially aversive traits such as Machiavellianism and psychopathy, which is not in line with the results of Li et al. (2022). It should be noted that the norms used by Li et al. (2022) are different from ours. Although they calculated norms based on a large sample, these samples often included students and non-community populations, which could bias the results. Nevertheless, the scores that the model Davinci-003 obtained on Machiavellianism and psychopathy in our study are lower compared to the normative data used by Li et al. (2022). The socially desirable personality profile of Davinci-003 could be explained by its initial purpose, as it was created to be servile and to help humans in different areas of use. Furthermore, although we expected it to lean toward a liberal political orientation, the chatbot scored in the political center. From the perspective of avoiding hate speech and extreme attitudes, such a profile could even be considered the most suitable, as it avoids expressing extreme attitudes on both the progressive and conservative sides.
The below-average score on agentic management did not support our expectations. This result indicates that, as compared to the average human, Davinci-003 presents itself as less competent, less clever, less self-important, and less striving for achievement. Having in mind its high abilities in general knowledge and its average logical thinking abilities and emotional intelligence (Binz & Schulz, 2022; Bojic et al., 2023), these results suggest that Davinci-003 is modest. It should be noted that agentic management includes the perception of one's abilities (e.g., "I have mastered every challenge put before me in life."), but also the assessment of personal success in social situations (e.g., "My leadership of the group guarantees the group's success."). Such modesty could be the consequence of not having insight into people's abilities and not remembering its interactions with humans. In other words, Davinci-003 has no ability to learn from the interactions with humans and about humans, over and above what it learned from the texts it was fed with. To conclude, when it comes to impression management in the agency and
communion domains, Davinci-003 shows selective response biases which are more pronounced in adherence to social norms and less pronounced in touting its abilities.
### Limitations of the study and future directions
This study was carried out on only one chatbot model, with predefined settings and no specific prompt. It would be interesting to examine whether changing the settings or customizing prompts would influence the chatbot's responses to personality questionnaires. We revealed that the stability of the chatbot's responses is variable, and future studies should replicate these results including more testing occasions. Further, we had only one participant. If a greater number of chatbots or their simulations of diverse people could be included, it would be interesting to examine whether the personality structure obtained in a sample of chatbots/simulations would fit the structure obtained in humans. Finally, one of the aims of personality testing is to predict behaviors. Therefore, future studies should reveal the predictive validity of the chatbot's scores.
### Conclusions
The results of this study indicated that the temporal reliability of the responses of GPT-3 Davinci-003 is not achieved for all of the personality instruments used, as would be expected when the same instruments are applied to humans. However, the agreement on some personality instruments and scales surely indicates that its responses are not completely random, and it seems that the level of agreement depends on the specific domain. This model of chatbot revealed a socially desirable and hyper-adapted personality profile, especially in the domain of communion, which could be explained by its purpose to serve and help humans in different tasks. However, we could not say whether GPT-3's responses are the result of conscious self-reflection or are just based on predefined algorithms.
**Role of the funding source**. This research was supported by the Ministry of Education, Science, and Technological Development of the Republic of Serbia as part of the Agreement on the financing of scientific research at the University of Belgrade, Institute for Philosophy and Social Theory (No. 451-03-68/2022-14/200025).
|
2310.01596 | ImagenHub: Standardizing the evaluation of conditional image generation
models | Recently, a myriad of conditional image generation and editing models have
been developed to serve different downstream tasks, including text-to-image
generation, text-guided image editing, subject-driven image generation,
control-guided image generation, etc. However, we observe huge inconsistencies
in experimental conditions: datasets, inference, and evaluation metrics - which
render fair comparisons difficult. This paper proposes ImagenHub, which is a
one-stop library to standardize the inference and evaluation of all the
conditional image generation models. Firstly, we define seven prominent tasks
and curate high-quality evaluation datasets for them. Secondly, we built a
unified inference pipeline to ensure fair comparison. Thirdly, we design two
human evaluation scores, i.e. Semantic Consistency and Perceptual Quality,
along with comprehensive guidelines to evaluate generated images. We train
expert raters to evaluate the model outputs based on the proposed metrics. Our
human evaluation achieves a high inter-worker agreement of Krippendorff's alpha
on 76% of models with a value higher than 0.4. We comprehensively evaluated a
total of around 30 models and observed three key takeaways: (1) the existing
models' performance is generally unsatisfying except for Text-guided Image
Generation and Subject-driven Image Generation, with 74% of models achieving an
overall score lower than 0.5. (2) we examined the claims from published papers
and found 83% of them hold with a few exceptions. (3) None of the existing
automatic metrics has a Spearman's correlation higher than 0.2 except for
subject-driven image generation. Moving forward, we will continue our efforts
to evaluate newly published models and update our leaderboard to keep track of
the progress in conditional image generation. | Max Ku, Tianle Li, Kai Zhang, Yujie Lu, Xingyu Fu, Wenwen Zhuang, Wenhu Chen | 2023-10-02T19:41:42Z | http://arxiv.org/abs/2310.01596v4 | # ImagenHub: Standardizing the evaluation of conditional image generation models
###### Abstract
Recently, a myriad of conditional image generation and editing models have been developed to serve different downstream tasks, including text-to-image generation, text-guided image editing, subject-driven image generation, control-guided image generation, etc. However, we observe huge inconsistencies in experimental conditions: datasets, inference, and evaluation metrics - which render fair comparisons difficult. This paper proposes ImagenHub, which is a one-stop library to standardize the inference and evaluation of all the conditional image generation models. Firstly, we define seven prominent tasks and curate high-quality evaluation datasets for them. Secondly, we built a unified inference pipeline to ensure fair comparison. Thirdly, we design two human evaluation scores, i.e. Semantic Consistency and Perceptual Quality, along with comprehensive guidelines to evaluate generated images. We train expert raters to evaluate the model outputs based on the proposed metrics. Our human evaluation achieves a high inter-worker agreement, with Krippendorff's alpha higher than 0.4 on 76% of the models. We comprehensively evaluated a total of around 30 models and observed three key takeaways: (1) the existing models' performance is generally unsatisfying except for Text-guided Image Generation and Subject-driven Image Generation, with 74% of the models achieving an overall score lower than 0.5. (2) we examined the claims from published papers and found 83% of them hold with a few exceptions. (3) None of the existing automatic metrics has a Spearman's correlation higher than 0.2 except for subject-driven image generation. Moving forward, we will continue our efforts to evaluate newly published models and update our leaderboard to keep track of the progress in conditional image generation.
## 1 Introduction
With the recent development of diffusion models, image generation has quickly become one of the most popular research areas in AI. To enable controllability in the image generation process, a myriad of conditional image generation models have been proposed in the past year. A diverse set of conditions has been explored to steer the diffusion process. One of the most popular tasks
Figure 1: The overview of ImagenHub framework, which consists of the newly curated ImagenHub dataset, ImagenHub library, and ImagenHub evaluation platform and standard.
is text-guided image generation (Ramesh et al., 2022; Rombach et al., 2022; Saharia et al., 2022), which aims to generate an image consistent with a given text prompt. Besides that, there are also subject-conditioned image generation (Gal et al., 2022; Ruiz et al., 2023), text-guided image editing (Brooks et al., 2023), multi-subject-conditioned image generation (Kumari et al., 2023), style-guided image generation (Sohn et al., 2023), etc. These different tasks aim to serve different types of downstream applications by enabling subject-level, background-level, and style-level controls. The field is evolving at an unprecedented pace and lots of improvements have been reported in the published papers. However, one glaring issue we observed is that published works are highly inconsistent in their experimental setups. To summarize, the inconsistencies mainly come from three aspects, namely dataset, inference, and evaluation:
* **Dataset**: Existing works curated their own evaluation datasets, which makes results from different models incomparable.
* **Inference**: Some work requires heavy hyper-parameter tuning and prompt engineering to achieve reasonable performance, which makes the model less robust. Because the tuning effort differs significantly across models, their comparison can become unfair.
* **Evaluation**: The existing work used different human evaluation protocols and guidelines. This inconsistency renders it impossible to compare human evaluation scores across different methods. Moreover, some of the work either employs a single rater or does not report inter-worker agreement. Thus, the reported results might not be comparable across different papers.
These three inconsistencies make it nearly impossible to track the real progress in the field of conditional image generation. Such an issue could greatly hinder the development of this field. The desideratum is a centralized effort to fairly evaluate every model. More importantly, this effort needs to be continuous to keep up with the evolution of the field. Our paper serves this purpose by standardizing the serving and evaluation of all open-source conditional image generation models. More specifically, ImagenHub consists of the modules listed in Figure 1.
**ImagenHub Dataset**. We surveyed the existing public evaluation sets for all the generation tasks and then picked diverse instances from them to build our ImagenHub dataset. ImagenHub dataset consists of 7 subsets, each with 100-200 instances. This dataset aims at standardizing the evaluation input to ensure fair comparison for different models.
**ImagenHub Inference Library**. We built an ImagenHub library1 to evaluate all the conditional image generation models. We ported the highly dispersed codebases from the existing works and standardized them into a unified format. During inference, we fixed the hyper-parameters and the prompt format to prevent per-instance prompt or hyper-parameter tuning, which makes the inference of different models fair and reproducible. The library is designed to be easily extendable.
Footnote 1: it’s similar to Huggingface libraries (Wolf et al., 2019; von Platen et al., 2022)
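To give a flavor of what such a standardized pipeline enables, here is a hypothetical usage sketch; the class and function names are illustrative assumptions, not the library's documented API.

```python
# Hypothetical sketch of a unified inference pipeline; names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Instance:
    prompt: str                                 # text condition
    extra: dict = field(default_factory=dict)   # task-specific conditions (mask, subject images, ...)

class UnifiedModel:
    """Wraps a ported model behind one fixed-signature entry point."""
    def __init__(self, name: str, seed: int = 42):
        self.name, self.seed = name, seed  # fixed seed -> reproducible runs

    def infer_one_image(self, instance: Instance):
        # Dispatches to the original codebase with frozen hyper-parameters;
        # no per-instance prompt or hyper-parameter tuning is allowed.
        raise NotImplementedError

def run_benchmark(models, dataset):
    # Every model sees identical inputs, so comparisons stay fair.
    return {m.name: [m.infer_one_image(x) for x in dataset] for m in models}
```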
**ImagenHub Evaluator**. We explored different human evaluation metrics and iterated over different versions of the rating guidelines to improve the inter-rater agreement. We settled on two three-valued rating metrics 'Semantic Consistency' and 'Perceptual Quality' to achieve generally high inter-worker agreement measured by Fleiss Kappa (Fleiss and Cohen, 1973) and Krippendorff's Alpha (Krippendorff, 2011). We designed a rating standard to achieve several benefits: (1) Our rating guide is an unambiguous checklist table such that the rater can rate the image with ease. (2) The
Figure 2: The best and the average model performance in each task
designed rating guideline is unified on every task type. (3) Sustainability. Since each model is rated individually, previous evaluation results can be reused when new models are added.
We demonstrate our evaluation results in Figure 2, where we show the overall score of the best-performing model and the average model performance. Based on our evaluation results in section 5, we made some general observations:
* The existing models' performance is generally unsatisfying except for Text-guided Image Generation and Subject-driven Image Generation.
* We found that evaluation results from the published papers are generally consistent with our evaluation: 83% of the published result rankings are consistent with ours.
* Automatic evaluation metrics do not correlate well with human preference, except for subject-driven image generation. The correlation scores are lower than 0.2.
## 2 The problem of Conditional Image Generation
The goal of conditional image generation is to predict an RGB image \(y\in\mathbb{R}^{3\times H\times W}=:\mathcal{Y}\), where \(H\) and \(W\) are the height and width of the image. The prediction of the target image is given a set of input conditions \(X=[c_{1},c_{2},\cdots]\in\mathcal{X}\), where \(c_{i}\) denotes the \(i\)-th condition. We aim at learning a prediction function \(f:\mathcal{X}\rightarrow\mathcal{Y}\) with deep learning models. Here we mainly consider \(f\) parameterized with diffusion models. We list the set of tasks we consider in Figure 3, where \(c_{i}\) can be represented as a text prompt, image mask, subject image, source image, background image, control signal, etc.
**Task Definition.** We formally define the tasks we consider as follows:
* Text-guided Image Generation: \(y=f(p)\), where \(p\) is a text prompt describing a scene. The goal is to generate an image consistent with the text description.
* Mask-guided Image Editing: \(y=f(p,I_{\text{src}},M)\), where \(I_{\text{src}}\) is the source image and \(M\) is a binary mask marking the region to edit. The goal is to modify the masked region of the source image so that it matches the text prompt while leaving the rest untouched.
ControlNet (Zhang & Agrawala, 2023) attaches a trainable copy of the encoder to a frozen Stable Diffusion model through zero convolution to support additional guided image control. This work brought up the idea of the control-guided image generation task and inspired later work (Qin et al., 2023) on improving control versatility.
**AI-generated Image Assessment.** Evaluating AI-generated images holistically is a complex and open problem (Salimans et al., 2016). Researchers have proposed various automatic metrics. In the image quality aspect, Inception score (Salimans et al., 2016), FID (Heusel et al., 2017) are often used. These methods rely on statistics from an InceptionNet pre-trained on the ImageNet dataset. Despite being widely adopted due to their sensitivity to small changes in images, these metrics are not ideal. They are biased towards the ImageNet dataset, resulting in inadequate evaluations (Borji, 2021). Later works like LPIPS (Zhang et al., 2018) and DreamSim (Fu et al., 2023) proposed better ways to measure the perceptual similarity. In the semantic consistency aspect, the CLIP score (Hessel et al., 2021) is often used to measure the vision-language alignment between the generated image and the prompt. Researchers also worked on alternative methods such as BLIP score (Li et al., 2022) and ImageReward (Xu et al., 2023). However, in some tasks like subject-driven image generation and editing, the automatic measurement of semantic consistency is still an open problem. One long-established yet effective approach to assessing AI-generated image performance is to rely on human annotators to assess the visual quality (Denton et al., 2015; Isola et al., 2017; Meng et al., 2021; Chen et al., 2023). The downside is that it entails a reliance on human judgment, which can introduce subjectivity and potentially limit scalability. To mitigate the downsides, the human evaluation design has to be unambiguous and easy to follow.
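As an illustration of one such metric, the CLIP score can be computed with off-the-shelf tooling; the sketch below follows torchmetrics' CLIPScore interface (assuming a torchmetrics version that ships it) and uses a random tensor as a placeholder for a generated image.

```python
import torch
from torchmetrics.multimodal.clip_score import CLIPScore

# CLIPScore measures vision-language alignment between an image and a caption.
metric = CLIPScore(model_name_or_path="openai/clip-vit-base-patch16")

# Placeholder: a random 224x224 uint8 image standing in for a generated sample.
image = torch.randint(0, 255, (3, 224, 224), dtype=torch.uint8)
score = metric(image, "a panda making latte art")
print(float(score))  # higher means better alignment with the prompt
```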
## 4 Method
### Human Evaluation Metrics
Our proposed metric can be used in all seven tasks with the same standard. We adopt two major evaluation metrics, namely semantic consistency \(SC\) and perceptual quality \(PQ\). These two metrics measure the quality of the generated images from two aspects. The semantic consistency measures how well the generated image is aligned with the condition \(X=[c_{1},c_{2},\cdots]\). Specifically, we define
Figure 3: The visualization of all the conditional image generation tasks. Here we consider tasks with 1-3 conditions, where \(\emptyset\) means empty. The special tokens [V] and [M] are special identifiers.
the semantic consistency score as:
\[SC(y,X)=min\{g(y,c_{1}),g(y,c_{2}),\cdots,g(y,c_{k})\} \tag{1}\]
where \(g\) is a modularized function to compute the consistency between \(y\) and a single condition \(c\). We set \(g(y,c_{i})\in\{0,0.5,1\}\), where 0 means inconsistent, 0.5 means partially consistent, and 1 means fully consistent. With this formulation, as long as the output is inconsistent with any of the conditions, the evaluation score becomes zero. Otherwise, the aggregation function picks the lowest consistency score from all the conditions. On the other hand, perceptual quality measures the image quality, i.e. whether the image contains artifacts, is blurry, or has an unnatural sense. We set perceptual quality \(PQ\in\{0,0.5,1\}\), where 0 means extremely poor quality, 0.5 means the image has an acceptable quality, and 1 means high quality. In these experiments, each model is rated individually. We train human raters to estimate the \(g\) function and the \(PQ\) function with the comprehensive guidelines in Table 1. We derive \(O=\sqrt{SC\times PQ}\) as the overall rating of a model. One benefit of using the geometric mean as the design choice is that the rating is penalized when either aspect score is too low. We further studied this configuration in section 5.
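The aggregation logic is straightforward to state in code; a minimal sketch with invented ratings follows.

```python
import math

ALLOWED = {0.0, 0.5, 1.0}  # the three permitted rating values

def semantic_consistency(condition_ratings):
    """SC is the minimum g(y, c_i) over all conditions: a single violated
    condition drives the whole score down to its level."""
    assert all(r in ALLOWED for r in condition_ratings)
    return min(condition_ratings)

def overall(sc, pq):
    """Geometric mean penalizes an image that fails on either aspect."""
    return math.sqrt(sc * pq)

# Invented example: fully consistent with the prompt (1.0) but only partially
# consistent with the subject image (0.5), with acceptable quality (0.5).
sc = semantic_consistency([1.0, 0.5])
print(overall(sc, pq=0.5))  # sqrt(0.5 * 0.5) = 0.5
```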
\begin{table}
\begin{tabular}{l l l l} \hline \hline Condition 1 & Condition 2 & Condition 3 & SC rating \\ \hline Inconsistent & Any & Any & 0 \\ Any & Inconsistent & Any & 0 \\ Any & Any & Inconsistent & 0 \\ Partially Consistent & Any & Any & 0.5 \\ Any & Partially Consistent & Partially Consistent & 0.5 \\ Mostly Consistent & Mostly Consistent & Mostly Consistent & 1.0 \\ \hline \hline Subjects in image & Artifacts & Unusual sense & PQ rating \\ \hline Unrecognizable & Any & Any & 0 \\ Any & Serious & Any & 0 \\ Recognizable & Moderate & Any & 0.5 \\ Recognizable & Any & Moderate & 0.5 \\ Recognizable & Little/None & Little/None & 1.0 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Rating guideline for computing the SC and PQ score. Detail in subsection A.2
Figure 4: Model performance and standard deviation in each task.
### Dataset and available models
We present a standardized dataset for each type of task. Information about the datasets is shown in Table 2. Some models have different standards of inputs within one task. For example, in the text-guided image editing task, DiffEdit is global description-guided while InstructPix2Pix is instruction-guided. We manually created prompts with equivalent meanings for both methods so that they can be aligned. All datasets contain a huge variety of test cases to mimic the diversity in real-life situations. We hosted all of our datasets on the HuggingFace dataset hub for easy access and maintenance. We list all of the evaluated models in Table 3.
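Concretely, a single editing instance must carry both prompt formats so that description-guided and instruction-guided models receive equivalent inputs; the fields and wording below are invented for illustration.

```python
# Illustrative (invented) aligned prompts for one text-guided editing instance.
instance = {
    "source_description": "a photo of a corgi sitting on the grass",   # DiffEdit input
    "target_description": "a photo of a corgi wearing a red hat, "
                          "sitting on the grass",                      # DiffEdit input
    "instruction": "put a red hat on the corgi",                       # InstructPix2Pix input
}
```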
## 5 Experimental Results
**Experiment Setup.** All the models either used the default setting from the official implementation or the setting suggested in the HuggingFace documentation (von Platen et al., 2022). We disabled negative prompts and any prompt engineering tricks to ensure a fair comparison. We conducted human evaluation by recruiting participants from Prolific to rate the images, and our own researchers also took part in the image rating process. We assigned 3 raters for each model and computed the SC score, PQ score, and overall human score. Then we computed the Fleiss kappa (Fleiss & Cohen, 1973) for each mentioned score. We also computed Krippendorff's Alpha (Krippendorff, 2011), which is expected to yield a higher value than Fleiss kappa. This distinction arises from the nature of the rating categories, with Fleiss' Kappa assuming nominal categories and Krippendorff's Alpha accommodating ordinal inputs. Both Fleiss' Kappa and Krippendorff's Alpha are bounded within the range of [-1, 1], where a value \(>0\) indicates agreement and closer proximity to 1 indicates a higher degree of agreement.
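Both agreement statistics can be computed with commonly used packages; the three raters' scores below are invented, and the third-party `krippendorff` package is assumed for the ordinal alpha.

```python
import numpy as np
import krippendorff  # third-party package, assumed installed
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Invented SC ratings (0 / 0.5 / 1) from 3 raters on 6 images.
ratings = np.array([
    [1.0, 1.0, 0.5],
    [0.5, 0.5, 0.5],
    [0.0, 0.5, 0.0],
    [1.0, 1.0, 1.0],
    [0.5, 0.0, 0.5],
    [1.0, 0.5, 1.0],
])  # rows = items, columns = raters

# Fleiss' kappa treats the three levels as nominal categories.
counts, _ = aggregate_raters(ratings)
print(fleiss_kappa(counts))

# Krippendorff's alpha accommodates the ordinal structure of the ratings;
# the package expects raters in rows and items in columns.
print(krippendorff.alpha(reliability_data=ratings.T, level_of_measurement="ordinal"))
```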
**Results.** In Figure 4, we present an overview of the model performance across various tasks. Our findings indicate that the performance of the current models is generally underwhelming; the exceptions are Text-guided Image Generation and Subject-driven Image Generation, in which models reach higher than 0.6 on both the SC and PQ averages. The detailed report on each model's performance is shown in Table 4. We noticed that the correlation of the automated metrics with the SC and PQ scores in each task is below 0.2, except for the subject-driven image editing task. Metric values are reported in Table 7 and Table 8.
\begin{table}
\begin{tabular}{l c c} \hline \hline Task & Data Source & Inference Dataset size \\ \hline \multirow{4}{*}{Text-guided Image Generation} & DrawBench (Saharia et al., 2022) & \multirow{4}{*}{197} \\ & DiffusionDB (Wang et al., 2022) & \\ & ABC-6K (Feng et al., 2023) & \\ & Ours & \\ \hline \multirow{2}{*}{Mask-guided Image Editing} & MagicBrush (Zhang et al., 2023) & \multirow{2}{*}{179} \\ & Ours & \\ \hline \multirow{2}{*}{Text-guided Image Editing} & MagicBrush (Zhang et al., 2023) & \multirow{2}{*}{179} \\ & Ours & \\ \hline \multirow{2}{*}{Subject-driven Image Generation} & SuTI (Chen et al., 2023) & \multirow{2}{*}{150} \\ & Ours & \\ \hline \multirow{2}{*}{Multi-concept Image Composition} & CustomDiffusion (Kumari et al., 2023) & \multirow{2}{*}{102} \\ & Ours & \\ \hline \multirow{2}{*}{Subject-driven Image Editing} & DreamEditBench (Li et al., 2023b) & \multirow{2}{*}{154} \\ \hline Control-guided Image Generation & HuggingFace community & \multirow{2}{*}{150} \\ \hline \hline \end{tabular}
\end{table}
Table 2: All the human evaluation datasets from seven core tasks.
### Discovery and Insights
**Text-Guided Image Generation.** We observe that all models are able to generate high-quality images. Regarding semantic consistency, all models have a good understanding of general prompts, while Stable Diffusion XL is better at understanding complex prompts. For example, it exhibits a high degree of accuracy and detail on the prompt "A panda making latte art.", which other models often misinterpret as "panda latte art".
**Mask-guided Image Editing.** We observed that the outputs of Stable Diffusion and GLIDE commonly contain obvious artifacts along the boundaries of the masked region. While Blended Diffusion does not suffer from the same issue, it often produces unrecognizable outputs. Stable Diffusion XL obtains the best results, but the overall model performance is still far from satisfactory. Another common issue is that the filled regions rarely harmonize with the background.
**Text-guided Image Editing.** One key requirement is to edit the image precisely while keeping the background untouched. This requirement is challenging because the network has to infer the editing region from the semantic inputs. We discovered that Prompt-to-Prompt, Pix2PixZero, and
\begin{table}
\begin{tabular}{l c c l} \hline \hline Model & \#Params & Runtime & Keywords \\ \hline \multicolumn{4}{c}{Text-to-Image Generation} \\ \hline Dalle-2 (Ramesh et al., 2022) & 3.5B & 10s & unCLIP, two-stage \\ Stable Diffusion (Rombach et al., 2022) & 0.8B & 3s & Latent Diffusion \\ DeepFloydIF (deep floyd.ai, 2023) & 4.3B & 37s & Cascaded Pixel Diffusion \\ OpenJourney (openjourney.ai, 2023) & 0.8B & 3s & SD, Midjourney data \\ Stable Diffusion XL (stability.ai, 2023) & 2.3B & 11s & Stable Diffusion, X-Large \\ \hline \multicolumn{4}{c}{Mask-guided Image Editing} \\ \hline BlendedDiffusion (Avrahami et al., 2022) & 0.5B & 57s & Noise Blending, DDPM+CLIP \\ GLIDE (Nichol et al., 2022) & 3.5B & 19s & CLIP, Diffusion \\ SD-Inpaint (runwayml, 2023) & 1.1B & 11s & SD, Inpainting training \\ SDXL-Inpaint (stability.ai, 2023) & 2.7B & 36s & SDXL, Inpainting training \\ \hline \multicolumn{4}{c}{Text-guided Image Editing} \\ \hline SDEdit (Meng et al., 2021) & 1.3B & 13s & SDE Prior \\ Text2Live (Bar-Tal et al., 2022) & 3.1M & 36s & Zero-shot, Edit layer \\ DiffEdit (Couairon et al., 2022) & 1.3B & 29s & Mask estimation \\ Cycle Diffusion (Wu and Torre, 2023) & 1.1B & 9s & DPM-Encoder, Zero-shot \\ Prompt-to-Prompt (Mokady et al., 2022) & 1.1B & 2m & Cross-Attention \\ Pix2PixZero (Parmar et al., 2023) & 1.1B & 21s & Cross-Attention, Zero Prompt \\ InstructPix2Pix (Brooks et al., 2023) & 1.1B & 11s & SD, synthetic P2P data \\ MagicBrush (Zhang et al., 2023) & 1.1B & 7s & SD, MagicBrush data \\ \hline \multicolumn{4}{c}{Subject-driven Image Generation} \\ \hline Textual Inversion (Gal et al., 2022) & 1.1B & 15m & Word embedding tuning \\ DreamBooth (Ruiz et al., 2023) & 1.1B & 10m & Finetuning with preservation loss \\ DreamBooth-Lora (Hu et al., 2021) & 1.1B & 8m & DreamBooth + Low-Rank Adaptation \\ SuTI (Chen et al., 2023) & 2.5B & 30s & In-context + Apprenticeship learning \\ BLIP-Diffusion (Li et al., 2023a) & 1.1B & 8s & Pretrained encoder, Zero-shot \\ \hline \multicolumn{4}{c}{Subject-driven Image Editing} \\ \hline DreamEdit (Li et al., 2023b) & 1.1B & 8m & Dreambooth + Region proposal \\ PhotoSwap (Gu et al., 2023) & 1.1B & 7m & Dreambooth + Cross-Attention \\ BLIP-Diffusion (Li et al., 2023a) & 1.1B & 18s & Pretrained encoder, Zero-shot \\ \hline \multicolumn{4}{c}{Multi-concept Image Composition} \\ \hline CustomDiffusion (Kumari et al., 2023) & 1.1B & 19m & Cross-attention updating \\ DreamBooth (Ruiz et al., 2023) & 1.1B & 11m & Finetuning with preservation loss \\ TextualInversion (Gal et al., 2022) & 1.1B & 32m & Word embedding tuning \\ \hline \multicolumn{4}{c}{Control-guided Image Generation} \\ \hline ControlNet (Zhang and Agrawala, 2023) & 1.4B & 8s & Zero convolution + Frozen model \\ UniControl (Qin et al., 2023) & 1.4B & 23s & Multi-task pretraining, Zero-shot \\ \hline \hline \end{tabular}
\end{table}
Table 3: Overview of all the evaluated models and their parameter size and runtime. The models are listed in chronological order.
SDEdit, despite generating high-quality images, often produce completely different backgrounds. We also spotted that in many cases Text2Live simply returns the input as output; this phenomenon also occasionally occurs in other models. Regarding claims in prior papers, our evaluation ranking aligns with the findings of CycleDiffusion, InstructPix2Pix, MagicBrush, Prompt-to-Prompt, and DiffEdit. Our ranking does not align with that of Pix2PixZero, since their paper only tested word-swapping examples, which do not generalize to more complex edits.
**Subject-driven Image Generation.** Our evaluation results largely align with the findings of DreamBooth, BLIP-Diffusion, and SuTI. Specifically, Textual Inversion struggles to maintain the target subject's features. DreamBooth can imitate subjects based on the given images but occasionally resorts to copying the learned images. DreamBooth-Lora struggles to generate the desired subjects but can follow the context prompts. BLIP-Diffusion can mimic the target subject's features but struggles with details. SuTI maintains high consistency with the desired subjects and context, with tolerable artifacts in some cases.
**Multi-concept Image Composition.** Our evaluations validate that CustomDiffusion is consistently better than the other two models. However, while it learns the given multiple subjects' features
\begin{table}
\begin{tabular}{l|c c|c c c|c c} \hline \hline Model & LPIPS \(\downarrow\) & CLIP \(\uparrow\) & \(SC_{Avg}\) & \(PQ_{Avg}\) & Overall & Fleiss\({}_{O}\) & Kd\({}_{O}\) \\ \hline \multicolumn{8}{c}{Text-guided Image Generation} \\ \hline DeepFloydIF & N/A & 0.2814 & 0.65\(\pm\)0.02 & 0.62\(\pm\)0.06 & **0.59\(\pm\)0.02** & 0.32 & 0.51 \\ Stable Diffusion XL & N/A & 0.2886 & 0.62\(\pm\)0.03 & 0.64\(\pm\)0.05 & 0.59\(\pm\)0.03 & 0.37 & 0.61 \\ Dalle-2 & N/A & 0.2712 & 0.58\(\pm\)0.04 & 0.62\(\pm\)0.06 & 0.54\(\pm\)0.04 & 0.27 & 0.40 \\ OpenJourney & N/A & 0.2814 & 0.53\(\pm\)0.02 & 0.59\(\pm\)0.05 & 0.50\(\pm\)0.02 & 0.30 & 0.47 \\ Stable Diffusion 2.1 & N/A & 0.2899 & 0.56\(\pm\)0.02 & 0.53\(\pm\)0.05 & 0.50\(\pm\)0.03 & 0.38 & 0.50 \\ \hline \multicolumn{8}{c}{Mask-guided Image Editing} \\ \hline SDXL-Inpainting & 0.15 & 0.2729 & 0.49\(\pm\)0.05 & 0.51\(\pm\)0.02 & **0.37\(\pm\)0.05** & 0.50 & 0.72 \\ SD-Inpainting & 0.21 & 0.2676 & 0.28\(\pm\)0.04 & 0.27\(\pm\)0.10 & 0.17\(\pm\)0.07 & 0.31 & 0.49 \\ GLIDE & 0.18 & 0.2578 & 0.20\(\pm\)0.05 & 0.48\(\pm\)0.06 & 0.16\(\pm\)0.05 & 0.33 & 0.56 \\ BlendedDiffusion & 0.33 & 0.2594 & 0.12\(\pm\)0.03 & 0.11\(\pm\)0.03 & 0.05\(\pm\)0.02 & 0.36 & 0.44 \\ \hline \multicolumn{8}{c}{Text-guided Image Editing} \\ \hline MagicBrush & 0.22 & 0.2675 & 0.51\(\pm\)0.01 & 0.65\(\pm\)0.06 & **0.47\(\pm\)0.02** & 0.44 & 0.67 \\ InstructPix2Pix & 0.32 & 0.2616 & 0.29\(\pm\)0.01 & 0.70\(\pm\)0.06 & 0.27\(\pm\)0.02 & 0.55 & 0.74 \\ Prompt-to-Prompt & 0.40 & 0.2674 & 0.17\(\pm\)0.05 & 0.55\(\pm\)0.09 & 0.15\(\pm\)0.06 & 0.36 & 0.53 \\ CycleDiffusion & 0.28 & 0.2692 & 0.17\(\pm\)0.03 & 0.56\(\pm\)0.11 & 0.14\(\pm\)0.04 & 0.41 & 0.63 \\ SDEdit & 0.61 & 0.2872 & 0.04\(\pm\)0.03 & 0.56\(\pm\)0.12 & 0.04\(\pm\)0.03 & 0.13 & 0.13 \\ Text2Live & 0.17 & 0.2628 & 0.02\(\pm\)0.01 & 0.82\(\pm\)0.04 & 0.02\(\pm\)0.02 & 0.10 & 0.17 \\ DiffEdit & 0.22 & 0.2425 & 0.02\(\pm\)0.01 & 0.23\(\pm\)0.04 & 0.01\(\pm\)0.01 & 0.24 & 0.24 \\ Pix2PixZero & 0.60 & 0.2510 & 0.01\(\pm\)0.00 & 0.48\(\pm\)0.09 & 0.01\(\pm\)0.01 & 0.37 & 0.37 \\ \hline \multicolumn{8}{c}{Subject-driven Image Generation} \\ \hline SuTI & 0.77 & 0.2895 & 0.64\(\pm\)0.11 & 0.68\(\pm\)0.08 & **0.58\(\pm\)0.12** & 0.20 & 0.39 \\ DreamBooth & 0.77 & 0.2847 & 0.51\(\pm\)0.08 & 0.93\(\pm\)0.02 & 0.55\(\pm\)0.11 & 0.37 & 0.60 \\ BLIP-Diffusion & 0.77 & 0.2729 & 0.29\(\pm\)0.04 & 0.93\(\pm\)0.04 & 0.35\(\pm\)0.06 & 0.22 & 0.39 \\ TextualInversion & 0.81 & 0.2680 & 0.21\(\pm\)0.04 & 0.74\(\pm\)0.08 & 0.21\(\pm\)0.05 & 0.35 & 0.52 \\ DreamBooth-Lora & 0.82 & 0.2988 & 0.07\(\pm\)0.01 & 0.82\(\pm\)0.07 & 0.09\(\pm\)0.01 & 0.29 & 0.37 \\ \hline \multicolumn{8}{c}{Subject-driven Image Editing} \\ \hline PhotoSwap & 0.34 & 0.2846 & 0.34\(\pm\)0.02 & 0.65\(\pm\)0.04 & **0.36\(\pm\)0.02** & 0.35 & 0.46 \\ DreamEdit & 0.22 & 0.2855 & 0.31\(\pm\)0.03 & 0.61\(\pm\)0.03 & 0.32\(\pm\)0.03 & 0.33 & 0.52 \\ BLIP-Diffusion & 0.25 & 0.2901 & 0.09\(\pm\)0.03 & 0.70\(\pm\)0.02 & 0.09\(\pm\)0.03 & 0.41 & 0.47 \\ \hline \multicolumn{8}{c}{Multi-concept Image Composition} \\ \hline CustomDiffusion & 0.79 & 0.2929 & 0.26\(\pm\)0.01 & 0.86\(\pm\)0.05 & **0.29\(\pm\)0.01** & 0.73 & 0.88 \\ DreamBooth & 0.78 & 0.2993 & 0.11\(\pm\)0.02 & 0.78\(\pm\)0.02 & 0.13\(\pm\)0.02 & 0.61 & 0.71 \\ TextualInversion & 0.80 & 0.2548 & 0.04\(\pm\)0.01 & 0.74\(\pm\)0.05 & 0.05\(\pm\)0.01 & 0.62 & 0.77 \\ \hline \multicolumn{8}{c}{Control-guided Image Generation} \\ \hline ControlNet & 0.80 & 0.2555 & 0.42\(\pm\)0.05 & 0.19\(\pm\)0.04 & **0.23\(\pm\)0.04** & 0.37 & 0.57 \\ UniControl & 0.82 & 0.2604 & 0.38\(\pm\)0.07 & 0.20\(\pm\)0.06 & 0.23\(\pm\)0.07 & 0.36 & 0.58 \\ \hline \hline \end{tabular}
\end{table}
Table 4: All the evaluated models from seven core tasks. Overall is the average of all \(\sqrt{SC\times PQ}\). Fleiss\({}_{O}\) and Kd\({}_{O}\) denote Fleiss' kappa and Krippendorff's alpha for the overall score, respectively. We report more automated metric results in Appendix Table 7 and correlations in Table 8.
better, it often fails to follow the prompts, especially regarding actions and positional words. In contrast, DreamBooth learns the correct subjects in some cases, while TextualInversion rarely learns the correct subjects. Interestingly, in some cases where DreamBooth does not learn the correct subjects, it can still follow the prompts correctly.
**Subject-driven Image Editing.** It is essential to modify the subject from the source to the target without causing excess changes to the background. Human evaluation was also conducted in DreamEdit to compare PhotoSwap and DreamEdit, but our rankings differ due to different evaluation criteria. PhotoSwap can naturally adapt the source to the target subject in most cases but rarely preserves the background well. DreamEdit maintains the context in most cases but sometimes leaves observable distortions at the edge of the contextualization. BLIP-Diffusion fails the adaptation most of the time, trading subject fidelity for a more realistic generation.
**Control-guided Image Generation.** Our evaluation shows no significant difference between the two models in either the automatic metrics or the human evaluation metrics. While UniControl also reported no significant difference in automatic metrics, our human evaluation results do not align with theirs, which may be due to different evaluation standards and aspects. Nevertheless, it has come to our attention that neither of these models demonstrates a high level of robustness: scratch-like artifacts often appear on the generated images.
### Ablation Study
**Method of overall human score computation.** We set the overall score of a model as the geometric mean of the SC and PQ scores (i.e. \(O=\sqrt{SC\times PQ}\)). We also explored the weighted sum setting \(O=\alpha\times SC+\beta\times PQ\), where both \(\alpha\) and \(\beta\) are in [0, 1]. We experimented and found that the weighted sum setting yields a different model ranking. Taking the text-guided image editing task as an example, Text2Live outperforms CycleDiffusion in the weighted sum setting even though we found that CycleDiffusion performs better upon human examination. We investigated and found that a majority of the Text2Live results simply return the input as output (in which case SC=0 and PQ=1). We tried adjusting the weightings, but the weighted sum still failed to reflect the actual performance of the model. Thus we decided to use the geometric mean setting to penalize such degenerate outputs.
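A quick numerical sketch of this design choice (toy values mirroring the Text2Live failure mode of returning the input unchanged; the function names are ours):

```python
import math

def geometric(sc, pq):
    return math.sqrt(sc * pq)

def weighted(sc, pq, alpha=0.5, beta=0.5):
    return alpha * sc + beta * pq

# An output that ignores the requested edit entirely:
# perfect quality, zero semantic consistency.
sc, pq = 0.0, 1.0
print(geometric(sc, pq))           # 0.0 -> correctly penalized
print(weighted(sc, pq))            # 0.5 -> misleadingly rewarded
print(weighted(sc, pq, 0.7, 0.3))  # 0.3 -> still rewarded
```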
**Design choice of human evaluation metric range.** For human evaluation at scale, it is essential to design a rating system that is easy to understand and quick to use. Here we explore how different rating ranges affect the results. Initially, we used the range {0, 0.5, 1, 2} for both the Semantic Consistency (SC) score and the Perceptual Quality (PQ) score, where 2 means the image is perfect. However, this configuration yielded suboptimal Fleiss' kappa values. We then employed an alternative configuration, narrowing the range to {0, 1} for both the SC and PQ scores. While this accommodates a binary classification, the resulting values were overly polarized and extreme. To balance keeping values in a reasonable range against keeping the evaluation reliable, we resolved to use the range {0, 0.5, 1} while providing explicit and unambiguous guidelines.
## Conclusion
In this paper, we propose ImagenHub as a continuous effort to unify all efforts in conditional image generation into a library, easing access to these models. We standardize the dataset and evaluation of these models to build our ImagenHub Leaderboard. We hope this leaderboard can provide a more
\begin{table}
\begin{tabular}{l|c c|c c|c c} \hline \hline \multirow{2}{*}{Setting} & \multicolumn{2}{c|}{\(O=\sqrt{SC\times PQ}\)} & \multicolumn{2}{c|}{\(O=0.5SC+0.5PQ\)} & \multicolumn{2}{c}{\(O=0.7SC+0.3PQ\)} \\ & \(O_{Sum}\) & \(O_{Avg}\) & \(O_{Sum}\) & \(O_{Avg}\) & \(O_{Sum}\) & \(O_{Avg}\) \\ \hline MagicBrush & 83.51 & 0.47 & 103.75 & 0.58 & 98.85 & 0.55 \\ CycleDiffusion & 24.89 & 0.14 & 65.25 & 0.36 & 51.28 & 0.29 \\ DiffEdit & 1.71 & 0.01 & 22.08 & 0.12 & 14.38 & 0.08 \\ Text2Live & 4.08 & 0.02 & 75.25 & 0.42 & 46.75 & 0.26 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Comparison on overall human score computation setting.
reproducible and fair environment for researchers to visualize progress in this field. A limitation of this work is the reliance on human raters, which is expensive and time-consuming. In the future, we plan to develop more generic automatic evaluation methods that approximate human ratings, helping people develop better models.
## Ethics Statement
Our work aims to benefit the broader research community by providing a standardized framework for evaluating conditional image generation models. We are committed to the ethical use of all the benchmarks. The datasets used in our work are either publicly available or have been collected and curated with the utmost respect for privacy and consent. We will maintain a leaderboard to track the latest models and encourage open collaboration and discussion in the field. In human evaluation, we followed the minimum hourly wage of $11. We also ensure that no personal information is collected and no offensive content is presented during human evaluations.
|
2301.09853 | Scaling up FluidFlower results for carbon dioxide storage in geological
media | The partial differential equations describing immiscible, but soluble, carbon
dioxide (CO2) displacement of brine are developed including local mass-transfer
effects. Scaling relationships for characteristic time among laboratory and
representative storage formation conditions are found upon assumption that
free-phase CO2 transport during injection is dominated by convection. The
implication is that an hour in the FluidFlower (large-scale visual model)
scales to hundreds of years of elapsed time in the storage formation. The
scaling criteria permit extrapolation of the effects of changes in parameters
and operating conditions. Interphase mass transfer allows CO2 to saturate the
brine phase and such mass transfer is a significant nonequilibrium phenomenon.
Significant mixing of CO2 dissolved into formation brine with original brine is
found experimentally and is also predicted. The magnitude of onset time for
downward migrating fingers containing CO2 is typically only a fraction of the
duration of CO2 injection and in general agreement with theoretical analysis in
the literature. Predictions for onset time of convective mixing at
representative storage formation conditions likewise teach that the onset time
for viscous fingering is significantly less than the duration of CO2 injection
in some cases. The implications of this observation include that mixing of CO2
with brine and the subsequent settling due to gravity are relatively rapid and
coincide with the period of active CO2 injection. | Anthony R. Kovscek, Jan Martin Nordbotten, Martin A. Ferno | 2023-01-24T08:02:21Z | http://arxiv.org/abs/2301.09853v1 | ## Scaling up FluidFlower results for carbon dioxide storage in geological media
## Abstract
The partial differential equations describing immiscible, but soluble, carbon dioxide (CO\({}_{2}\)) displacement of brine are developed including local mass-transfer effects. Scaling relationships for characteristic time among laboratory and representative storage formation conditions are found upon assumption that free-phase CO\({}_{2}\) transport during injection is dominated by convection. The implication is that an hour in the FluidFlower (large-scale visual model) scales to hundreds of years of elapsed time in the storage formation. The scaling criteria permit extrapolation of the effects of changes in parameters and operating conditions. Interphase mass transfer allows CO\({}_{2}\) to saturate the brine phase and such mass transfer is a significant nonequilibrium phenomenon. Significant mixing of CO\({}_{2}\) dissolved into formation brine with original brine is found experimentally and is also predicted. The magnitude of onset time for downward migrating fingers containing CO\({}_{2}\) is typically only a fraction of the duration of CO\({}_{2}\) injection and in general agreement with theoretical analysis in the literature. Predictions for onset time of convective mixing at representative storage formation conditions likewise teach that the onset time for viscous fingering is significantly less than the duration of CO\({}_{2}\) injection in some cases. The implications of this observation include that mixing of CO\({}_{2}\) with brine and the subsequent settling due to gravity are relatively rapid and coincide with the period of active CO\({}_{2}\) injection.
_Keywords: dimensional analysis, geological storage, CCS, convective mixing_
## Introduction
The "FluidFlower" is an important new experimental tool for exploring coupled transport, physical, and nonequilibrium processes accompanying carbon dioxide (CO\({}_{2}\)) injection into saline storage formations with complex geological bedding. One of the
primary outcomes of experiments conducted in the FluidFlower is detailed data sets of the spatial evolution of injected CO\({}_{2}\) as a free phase as well as dissolved in brine. Such data is needed for the development and validation of predictive tools for storage to build confidence in modeling capabilities. Additionally, the FluidFlower is an important tool for generating interest in CO\({}_{2}\) storage and educating the casual observer about short and long-term storage mechanisms in the subsurface. Hence, another important outcome is the educational aspect of the visual results.
Viewing the interplay of CO\({}_{2}\) convection with geological heterogeneity allows researchers to communicate to a wide community the mechanisms by which CO\({}_{2}\) may be stored long term as well as the types of geological features that promote secure storage. In this sense, the FluidFlower follows a long tradition among the flow in porous media community of scaled physical models to understand complex coupled processes, e.g., (Basu & Islam, 2009). Important aspects of such experiments include translation of experimental time into an equivalent time in the subsurface and, importantly, an understanding of how processes such as the rate of interphase mass transfer and convective mixing of CO\({}_{2}\)-laden brine differs between the physical, laboratory model and the field.
The subsurface engineering community is rich with studies where scaling criteria have been developed to understand laboratory results in the context of field applications. For example, Lozada and Farouq Ali (Lozada & Ali, 1987) examine the displacement of heavy oil by immiscible carbon dioxide and the solubility of CO\({}_{2}\) in liquids to understand the role of different operating conditions on physical model results. Additionally, Basu and Islam (Basu & Islam, 2009) and Islam and Farouq Ali (Islam & Ali, 1990) present studies of scaling among laboratory and field for chemical enhanced oil recovery. In many cases, porous media properties and pressure differ significantly from the field (Kimber & Ali, 1989). The general consensus is that it is practically very difficult to scale all operative mechanisms in experiments with complex physical and chemical processes, but it is possible to estimate time scaling with a degree of certainty in convectively dominated systems, as well as to understand the differences in scaling between laboratory and field for interphase mass transfer, diffusive transport within a phase, gravity, and so on.
In many studies, mass transfer effects among phases in numerical models of reservoirs and aquifers are neglected and local thermodynamic equilibrium is assumed,
e.g., (Adenekan et al., 1993). There is a need, however, to quantify interphase mass transfer during geological storage because the phases are not initially in equilibrium (Erfani et al., 2022; Lindeberg and Wessel-Berg, 1997; Weir et al., 1995). The rate of dissolution of CO\({}_{2}\) into brine is controlled by diffusion, but, importantly, the resulting denser fluid may settle downward in the storage formation under the action of gravity (Ennis-King and Paterson, 2005; Kneafsey and Pruess, 2010; Riaz et al., 2006). Such convective mixing enhances solubility trapping of CO\({}_{2}\).
An exhaustive review of the dimensionless groups that describe scaling among laboratory and field processes is beyond the scope of this manuscript. Table 1, however,
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline & Reference & Comment \\ \hline
**Oil Recovery** & & \\ \hline Immiscible CO\({}_{2}\) injection to recover crude oil & (Lozada and Ali, 1987) & nonequilibrium mass transfer of CO\({}_{2}\) to liquid phase \\ \hline Aqueous phase chemical injection to aid recovery & (Basu and Islam, 2009) & scaled advection diffusion, dispersion, and retention \\ \hline Unsteady mass and heat transfer & (Kimber and Ali, 1989) & complete set of scaling groups for steam injection \\ \hline Gravity override of low density injectant & (van Lookeren, 1983) & gravity override of injectant comparing lab and field \\ \hline In situ combustion of crude oil & (Islam and Ali, 1992) & nonisothermal, reactive transport \\ \hline
**Contaminant Removal** & & \\ \hline Cleanup of spilled hydrocarbons & (Sundaram and Islam, 1994) & removal of trapped organic phase using surfactant solutions \\ \hline
**Miscible Fingering** & & \\ \hline Onset of gravity driven convection & (Riaz et al., 2006) & dense CO\({}_{2}\) laden brine fingering through unsaturated brine beneath gas cap \\ \hline
**Convective mixing during CO\({}_{2}\) storage** & (Ennis-King and Paterson, 2005) & inspectional and dimensional analysis of brine fingers \\ \hline
2D and 3D simulation of convective mixing & (Pau et al., 2010) & at long time CO\({}_{2}\) mass flux reaches a stabilized rate \\ \hline
**Gravity Drainage** & & \\ \hline three-phase gravity drainage & (Grattoni et al., 2001) & capillary and Bond numbers need to be combined to describe gravity drainage \\ \hline \hline gas-assisted gravity drainage & (Sharma and Rao, 2008) & scaled physical model experiments of gravity drainage \\ \hline
**Convective Miscible Mixing** & & \\ \hline Convective mixing & (Hassanzadeh et al., 2007) & early, middle, and late time mixing of CO\({}_{2}\) laden brine beneath a gas cap \\ \hline Hydrodynamic dispersion and convective mixing & (Erfani et al., 2022) & included hydrodynamic dispersion in the analysis of onset time for viscous mixing \\ \hline \end{tabular}
\end{table}
Table 1: Survey of papers that develop and use scaling criteria for subsurface fluid injection processes. An exhaustive listing of papers is out of scope of this work.
was constructed to communicate the breadth of physical and chemical mechanisms addressed by previous studies. That is, Table 1 reflects the long tradition of scaling laboratory results to field conditions. Much of the work summarized in Table 1 originates from oil recovery efforts due to the economic importance of crude oil. Note the efforts in thermal, chemical, and water-based recovery. Studies related to gas injection and associated gas solubility in reservoir fluids as well as chemical enhanced oil recovery are especially relevant to this manuscript (Islam & Ali, 1990; Lozada & Ali, 1987).
With the backdrop above, this manuscript is the first to present a methodology for and analysis of the scaling of time between physical processes in the FluidFlower and a geological formation during CO\({}_{2}\) injection. Time scaling emerges because CO\({}_{2}\) storage of this type is convectively dominated. Local thermodynamic equilibrium is not assumed because results from the FluidFlower have exhibited mass transfer effects at the gas/brine interface as well as convective mixing of CO\({}_{2}\)-laden brine. In short, scaling of immiscible CO\({}_{2}\) injection into a saline aquifer with partial equilibrium between the gas and brine phases is our objective. We proceed with a description of the storage zone in the FluidFlower that is analyzed, the model and simplifications, model nondimensionalization, the scaling groups that emerge, analysis of miscible viscous fingers that contribute to convective mixing, and discussion. The differences among physical processes in the FluidFlower and the field are then explored via the scaling results. Discussion and conclusions complete the paper.
### FluidFlower Overview
The FluidFlower is packed with sands of varying grain size to create different geometries of porous media resembling geological strata including variation in permeability with depth, traps for buoyant fluids as well as both sealing and permeable faults. Figure 1 shows a heterogeneous subregion of the FluidFlower where buoyant free-phase CO\({}_{2}\) (orange-red color) is injected on the left and accumulates beneath the lightly colored sealing layer composed of fine-grained sand. The seal prevents gas entry up to about a gas column height of 0.2 m at pressure conditions near atmospheric whereas the maximum height of the gas zone is 0.1 m given the anticlinal geometry of the barrier layer and the open boundaries. This maximum height is illustrated in Fig. 1 (c) where free phase CO\({}_{2}\) spills upward around the left edge of the storage zone.
Another notable aspect of Fig. 1 is the dissolution of CO\({}_{2}\) into the brine underlying the region containing free-phase CO\({}_{2}\). This CO\({}_{2}\) laden brine takes on a deep red color that is nearly carmine. At late times as shown in Fig. 1(c), this dense CO\({}_{2}\)-laden brine falls downward through less dense CO\({}_{2}\)-free brine exhibiting miscible fingers.
The FluidFlower is primarily a two-dimensional device because the depth is much less than the height and the width of the model. Differences between simulated
Figure 1: Representative images of filling of the storage zone, saturation of underlying brine with dissolved CO\({}_{2}\), and viscous fingering of CO\({}_{2}\)-laden brine into CO\({}_{2}\)-free brine in the FluidFlower. In image (a), taken at 34 min after the start of CO\({}_{2}\) injection, diffusion/dispersion of CO\({}_{2}\) into the brine below the gas cap is evident with possible indications of the initiation of viscous fingers as shown by the inset image; (b), taken at 105 min, displays expansion of the gas-filled zone and viscous fingers, while (c), taken at 647 min, shows well-developed miscible fingers. The constant number of fingers in (b) and (c) suggests that little coalescence of fingers occurs over these time scales.
two-dimensional and three-dimensional behavior of fingers revealed only modest differences in the time needed to form fingers as well as the downward flux of CO\({}_{2}\) mass (Pau et al., 2010). The third dimension did increase the complexity of the fingers formed; however, the cumulative CO\({}_{2}\) mass flux was only about 25% greater for three-dimensional as compared to two-dimensional cases. This difference was viewed as small in comparison to the unknown and typically large variation in subsurface permeability, which raises greater uncertainty in results. Hence, two-dimensional geometries are useful to understand storage formation dynamics. On the other hand, the storage formation geometry in Fig. 1 accentuates the vertical dimension somewhat. The ratio of the height to the width in the storage zone is about 0.05 in the FluidFlower whereas characteristic heights and widths of storage formations in Table 2 (Northern Lights and Sleipner) yield ratios of about 0.02. Hence, the vertical dimension is exaggerated by a factor of about 2 to 3.
## Model Description
This section presents a first-order model for processes in the FluidFlower. The analysis is limited to the storage zone shown in Fig. 1 and is two-dimensional. We progress from the main simplifications introduced, to the dimensionless model, to the scaling groups that emerge. It is assumed that the reader is acquainted with dimensional analysis and ordering, e.g., (Barenblatt, 1996; Denn, 1980).
### Main simplifications
The following simplifications were made in the development of dimensionless equations and groups.
1. The variation of temperature is small across the system and so conditions are taken as isothermal.
2. There are two components denoted as \(w\) and \(c\) for water and carbon dioxide, respectively.
3. There are two phases in which the two components are mutually soluble. These phases are \(b\) and \(g\) denoting the brine-rich and CO\({}_{2}\)-rich phases, respectively.
4. The multiphase extension of Darcy's law describes the convection, \(u\), of a phase, \(\beta\), as
\[u_{\beta}=-\frac{\mathrm{k}k_{r\beta}}{\mu_{\beta}}\cdot\left(\nabla p_{\beta}-\varrho_{\beta}\mathbf{g}\right) \tag{1}\] where k is the permeability, \(k_{r\beta}\) is the relative permeability, \(\mu_{\beta}\) is the viscosity, \(p_{\beta}\) is the pressure, \(\varrho_{\beta}\) is the phase mass density, and \(\mathbf{g}\) is the acceleration of gravity.
5. The dispersive flux of a component, \(J_{i}\), in partially-saturated porous media is described as (Ogata & Banks, 1961) \[J_{i}=-\phi S_{\beta}\rho_{\beta}\mathrm{D}_{i\beta}\cdot\nabla\chi_{i\beta} \tag{2}\] where \(\phi\) is porosity, \(S_{\beta}\) is the phase saturation, \(\rho_{\beta}\) is the phase molar density, \(\mathrm{D}_{i\beta}\) is the dispersion coefficient tensor, and \(\chi_{i\beta}\) is the mole fraction of component i in phase \(\beta\). Clearly, \(S_{\beta}\) and \(\chi_{i\beta}\) each sum to 1.
6. Mass transfer between phases is described by a two-film interface model.
7. There is no sorption of components to solids.
8. There are no chemical reactions.
9. Description of the mechanisms occurring in a vertical cross section is sufficient.
In view of the near-atmospheric pressure in the FluidFlower and the small gas-phase viscosity, assuming an inviscid CO\({}_{2}\) phase and then proceeding to a material balance is appealing as an additional simplification. Accordingly, the magnitude of the kinematic viscosity (\(\nu=\mu/\varrho\)) was evaluated for each phase because convective mass flux is inversely proportional to \(\nu\). At FluidFlower conditions, \(\nu_{g}\) is 8x10\({}^{-6}\) m\({}^{2}\)/s whereas \(\nu_{b}\) is 1x10\({}^{-6}\) m\({}^{2}\)/s. At conditions approximating a storage formation (2.6 x10\({}^{7}\) Pa and 366 K), \(\nu_{g}\) and \(\nu_{b}\) are 8x10\({}^{-8}\) and 3x10\({}^{-7}\) m\({}^{2}\)/s, respectively. The similar magnitudes of \(\nu\), the relatively modest differences in \(\nu\) between phases, and the crossover in the phase with maximum \(\nu\) as pressure and temperature increase suggest that, while the CO\({}_{2}\) phase is quite mobile, CO\({}_{2}\) viscosity is not negligible in comparison to brine.
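For reference, the quoted kinematic viscosities follow directly from \(\nu=\mu/\varrho\) (a minimal check; the density values below are approximate CO\({}_{2}\) and brine properties at the stated conditions, not taken from a table in this paper):

```python
# Kinematic viscosity nu = mu / rho at roughly atmospheric FluidFlower conditions.
mu_g, rho_g = 1.5e-5, 1.8      # CO2: Pa*s, kg/m^3 (approximate)
mu_b, rho_b = 1.0e-3, 1.0e3    # brine: Pa*s, kg/m^3 (approximate)
print(mu_g / rho_g)            # ~8e-6 m^2/s for the gas phase
print(mu_b / rho_b)            # ~1e-6 m^2/s for the brine phase
```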
### Convective flux
Equation (1) is rewritten before proceeding to a mole balance on CO\({}_{2}\) and brine components. Below, convection is split according to flow driven by pressure gradient and by gravity. To proceed, we express the hydrostatic pressure, \(P_{\beta}\), as
\[P_{\beta}=-\int_{0}^{z}\bar{\varrho}_{\beta}(z)\,g\ dz \tag{3}\]
where the integration is from the bottom "surface" of the storage zone, that is, the base of the storage zone at the position of the brine-gas interface, Fig. 1(c). The symbol \(\bar{\varrho}_{\beta}\) is the mass density of an equilibrium fluid at that depth for brine, or the density of the gas phase at the total gas volume. Then, subtracting and adding \(P_{\beta}\) to Darcy's law, Eq. (1), as well as evaluating the gradient of the terms with gravity, yields
\[u_{\beta}=-\frac{\mathrm{k}k_{r\beta}}{\mu_{\beta}}\big(\nabla(p_{\beta}-P_{\beta})-(\varrho_{\beta}-\bar{\varrho}_{\beta}(z))\mathbf{g}\big) \tag{4}\]
Equation (4) expresses the phase flux with reference to the deviation from hydrostatic conditions. Both the pressure difference and density difference terms tend to zero as the system approaches equilibrium.
### Dimensionless mole balance
The nondimensionalized mole balance of component "i", incorporating multiphase transport of multicomponent fluids by convection and dispersion in the FluidFlower, is obtained by summing over phases \(b\) and \(g\) and considering transport in the x and z directions as
\[\frac{\partial}{\partial t_{D}}\left[\phi_{D}\sum_{\beta=b,g}\big(S_{\beta D}\rho_{\beta D}\chi_{i\beta D}\big)\right]=\left(\frac{H}{L}\right)^{2}\left(\frac{\left(\mathrm{k}k_{rg}\right)_{RH}}{\left(\mathrm{k}k_{rg}\right)_{RV}}\right)\frac{\partial}{\partial x_{D}}\left[\sum_{\beta=b,g}\left(\rho_{\beta D}\chi_{i\beta D}\frac{\left(\mathrm{k}k_{r\beta}\right)_{DH}}{\mu_{\beta D}}\frac{\partial\Pi_{\beta D}}{\partial x_{D}}\right)\right]\]
\[+\frac{\partial}{\partial z_{D}}\left[\sum_{\beta=b,g}\left(\rho_{\beta D}\chi_{i\beta D}\frac{\left(\mathrm{k}k_{r\beta}\right)_{DV}}{\mu_{\beta D}}\frac{\partial\Pi_{\beta D}}{\partial z_{D}}\right)\right]\]
\[+\frac{\Delta\varrho_{gR}g_{R}H}{\Delta p_{gR}}\frac{\partial}{\partial z_{D}}\left[\sum_{\beta=b,g}\left(\rho_{\beta D}\chi_{i\beta D}\frac{\left(\mathrm{k}k_{r\beta}\right)_{DV}}{\mu_{\beta D}}\big(\varrho_{\beta D}-\bar{\varrho}_{\beta D}(z_{D})\big)\mathbf{g}_{D}\right)\right]\]
\[+\frac{t_{R}D_{R}}{L^{2}}\frac{\partial}{\partial x_{D}}\left[\sum_{\beta=b,g}\left(\phi_{D}S_{\beta D}\rho_{\beta D}D_{i\beta D}\frac{\partial\chi_{i\beta D}}{\partial x_{D}}\right)\right]\]
\[+\frac{t_{R}D_{R}}{H^{2}}\frac{\partial}{\partial z_{D}}\left[\sum_{\beta=b,g}\left(\phi_{D}S_{\beta D}\rho_{\beta D}D_{i\beta D}\frac{\partial\chi_{i\beta D}}{\partial z_{D}}\right)\right]\]
\[+q_{iD} \tag{5}\]
where the z direction is aligned with the direction of gravity. The subscript D denotes a quantity that has been nondimensionalized, the subscript R marks characteristic quantities, \(L\) is the characteristic horizontal length, \(H\) is the characteristic vertical dimension, \(D_{i\beta}\) is the dispersion coefficient of component i in phase \(\beta\), \(D_{R}\) is a representative magnitude of dispersion, \(g_{R}\) is the magnitude of gravitational acceleration, \(\mathbf{g}_{D}\) indicates the direction of gravity, and \(q_{iD}\) is the nondimensionalized source/sink term for component i. The ratio \(\left(\frac{\left(kk_{rg}\right)_{RH}}{\left(kk_{rg}\right)_{RV}}\right)\) expresses the anisotropy in the effective permeability of phase "g" as the effective permeability in the horizontal dimension upon the effective vertical permeability. The spatial variable z is nondimensionalized by H whereas \(x_{D}\) is equal to x/L.
Convection due to pressure gradient and gravity are separated in Eq. (5) to make subsequent evaluation of the magnitude of these driving forces relative to each other more straightforward. Nondimensionalization of the gradient terms is achieved via differences in pressure and density. The dimensionless phase potential is taken as \(\Pi_{\beta D}=\big(p_{\beta}-P_{\beta}\big)/\big(p_{gR}-p_{g0}\big)\) where \(p_{gR}\) is the average stabilized pressure in the formation resulting from injection and \(p_{g0}\) is the average initial pressure. The reference density difference is evaluated as \(\Delta\varrho_{gR}=\varrho_{g}(p_{gR})-\varrho_{g}(p_{g0})\), consistent with \(\Delta p_{gR}=p_{gR}-p_{g0}\). Hence, \(\varrho_{\beta D}-\bar{\varrho}_{\beta D}(z)\) is equal to \((\varrho_{\beta}-\bar{\varrho}_{\beta}(z))/\Delta\varrho_{gR}\).
The dimensionless groups \(\frac{\Delta\varrho_{gR}g_{R}H}{\Delta p_{gR}}\), \(\frac{t_{R}D_{R}}{L^{2}}\), and so on help us to understand the relative importance of convection driven by gravity and of dispersive transport, respectively. Interphase mass transfer does not appear in Eq. (5) because the mass of species "i" lost by one phase is balanced exactly by the mass gained by the other phase.
The characteristic time, \(t_{R}\), was obtained during nondimensionalization by making the coefficient on the expression for convection due to the pressure gradient in the vertical direction, that is the second term on the right of Eq. (5), to be of order 1. Hence, the characteristic time is
\[t_{R}=\frac{\phi_{R}S_{gR}\mu_{gR}H^{2}}{\Delta p_{gR}\left(kk_{rg}\right)_{RV}} \tag{6}\]
where \((kk_{rg})_{RV}\) is the characteristic vertical permeability to the gas. This choice of characteristic time makes the dimensionless mass accumulation and z-direction convection terms of order 1 and, consequently, asserts that convection driven by the pressure gradient is the main
transport mechanism during injection. Note also that the inverse of the ratio \(\mu_{gR}H\big{/}\big{(}kk_{rg}\big{)}_{RV}\,\Delta p_{gR}\) defines a characteristic vertical Darcy velocity. The characteristic source/sink term follows as
\[q_{R}=\frac{\rho_{gR}\chi_{cgR}\Delta p_{gR}\big{(}kk_{rg}\big{)}_{RV}}{\mu_{gR}H ^{2}} \tag{7}\]
Equation (5) gives a fundamental constraint on the dynamics of both the FluidFlower and field-scale systems. On the other hand, it is important to note that Eq. (5) by itself does not provide a closed system, but must be complemented by a phase partitioning model, constitutive relations, boundary conditions, and so on. The phase partitioning will, in itself, introduce a characteristic time scale, as is discussed separately in a later section.
### Scaled Processes
In practice, it is very difficult to scale all processes between the laboratory and the field when (i) the coupled physical processes are complex and (ii) the geometry and permeability of the porous medium are heterogeneous (Lozada & Ali, 1987). The aim of this section is to establish the scaling of time between the FluidFlower and the storage formation for convectively dominated flows and, importantly, to estimate the differences in the relative magnitudes of transport driven by gravity and dispersion as well as mass transfer from the CO\({}_{2}\)-rich phase to the brine-rich phase. This analysis applies to conditions during injection and before CO\({}_{2}\) spills, Fig. 1.
\begin{table}
\begin{tabular}{|l|l|l|l|l|} \hline & FluidFlower & Northern Lights & Sleipner (Utsira) & In Salah (Krechba) \\ \hline \(k_{R}\,(m^{2})\) & 2.79x10\({}^{-9}\) & 2.0x10\({}^{-13}\) & 2.5x10\({}^{-12}\) & 1.0x10\({}^{-14}\) \\ \(\phi_{R}\) & 0.40 & 0.25 & 0.37 & 0.16 \\ \(k_{rg,R}^{o}\) & 0.11 & 0.3 & 0.3 & 0.3 \\ \(S_{g}\) & 0.88 & 0.50 & 0.5 & 0.5 \\ \(H\,(m)\) & 0.1 & 170 & 200 & 20 \\ \(L\,(m)\) & 2.0 & 10,000 & 10,000 & 5000 \\ _net to gross_ & 1.0 & 0.35 & 0.70 & 1 \\ \(\Delta p_{gR}\,(Pa)\) & 1000 & 2.50x10\({}^{6}\) & 1.30x10\({}^{6}\) & 1.11x10\({}^{6}\) \\ \(\mu_{gR}\,(Pa\cdot s)\) & 1.50x10\({}^{-5}\) & 5.33x10\({}^{-5}\) & 3.98x10\({}^{-5}\) & 4.31x10\({}^{-5}\) \\ \(k_{V}/k_{H}\) & 0.4 & 0.1 & 0.1 & 0.1 \\ \(\Delta\varrho_{gR}\,(kg/m^{3})\) & 0.0187 & 34.2 & 54.6 & 230. \\ \(D_{R}\,(m^{2}/s)\) & 1.8x10\({}^{-8}\) & 7.0x10\({}^{-9}\) & 3.0x10\({}^{-9}\) & 7.0x10\({}^{-9}\) \\ \hline \end{tabular}
\end{table}
Table 2: Baseline reference values for comparing FluidFlower and storage formation time scales and physical processes.
### Time scaling between lab and field
Equation (5) and the resulting characteristic time were developed with the notion that convection is the primary transport process during active CO\({}_{2}\) injection. The relation between elapsed time in the storage formation, \(t^{Form}\), and that in the FluidFlower, \(t^{Flow}\), is determined by the ratio of characteristic times as
\[t^{Form}=t^{Flow}\frac{t_{R}^{Form}}{t_{R}^{Flow}} \tag{8}\]
The characteristic time for the FluidFlower is estimated using Eq. (6) and used to scale experimental results between cases with parameters approximating the Northern Lights (Marashi, 2021), Sleipner (Chadwick, 2013; Chadwick et al., 2012), and In Salah projects (Bissell et al., 2011; Ringrose et al., 2009).
Table 2 lists the data used to describe these field projects and the FluidFlower case shown in Fig. 1. Some settling of sand is evident during repeated tests in the FluidFlower. The porosity was corrected for observed sand settling from 0.44 to 0.40, and the permeability, estimated using the Carman-Kozeny equation (Lake et al., 2014), decreased from 4.26x10\({}^{-9}\) m\({}^{2}\) to 2.79x10\({}^{-9}\) m\({}^{2}\). Additionally, storage formation heights were multiplied by the net-to-gross ratio.
With the values in Table 2, Eq. (8) teaches that 1 hour in the FluidFlower is representative of hundreds of years in the formation, as summarized in Table 3. For Northern Lights and Utsira conditions, an hour in the FluidFlower scales to about 400 years whereas for In Salah conditions an hour scales to about a hundred years. The differences are primarily affected by formation thickness and permeability.
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline & FluidFlower & Northern Lights & Sleipner (Utsira) & In Salah (Krechba) \\ \hline \(\left(\frac{H}{L}\right)^{2}\left(\frac{\left(kk_{rg}\right)_{RH}}{\left(kk_{rg}\right)_{RV}}\right)\) & 6.2x10\({}^{-3}\) & 3.5x10\({}^{-4}\) & 2.0x10\({}^{-3}\) & 1.6x10\({}^{-4}\) \\ \(\frac{\Delta\varrho_{gR}g_{R}H}{\Delta p_{gR}}\) & 1.8x10\({}^{-5}\) & 8.0x10\({}^{-3}\) & 5.8x10\({}^{-2}\) & 4.1x10\({}^{-3}\) \\ \(\frac{t_{R}D_{R}}{H^{2}}\) & 7.7x10\({}^{-7}\) & 3.1x10\({}^{-6}\) & 5.3x10\({}^{-7}\) & 7.2x10\({}^{-6}\) \\ \(t^{Form}/t^{Flow}\) \((years/hour)\) & & 420 & 390 & 110 \\ \hline \end{tabular}
\end{table}
Table 3: The first three rows contain dimensionless coefficient magnitudes as identified in Eq. (5). The final row compares the baseline characteristic time scales of the FluidFlower and each storage formation (given in field years per laboratory hour).
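As a cross-check on Table 3 (a minimal sketch; the helper function and variable names are ours, with values taken from Table 2 and the net-to-gross correction applied to \(H\)):

```python
# Characteristic time, Eq. (6): t_R = phi*S_g*mu_g*H^2 / (dp * (k*k_rg)_RV)
def t_R(phi, S_g, mu_g, H, dp, k, k_rg, kv_over_kh):
    return phi * S_g * mu_g * H**2 / (dp * k * k_rg * kv_over_kh)

t_flower = t_R(0.40, 0.88, 1.50e-5, 0.1, 1.0e3, 2.79e-9, 0.11, 0.4)       # ~0.43 s
t_form = t_R(0.25, 0.50, 5.33e-5, 170 * 0.35, 2.50e6, 2.0e-13, 0.3, 0.1)  # ~1.6e6 s

print(t_form / t_flower / 8766)      # Eq. (8): ~420 formation years per lab hour

# Dimensionless coefficients of Eq. (5) for the FluidFlower column of Table 3
print((0.1 / 2.0)**2 / 0.4)          # horizontal convection: ~6.2e-3
print(0.0187 * 9.81 * 0.1 / 1.0e3)   # gravity number: ~1.8e-5
print(t_flower * 1.8e-8 / 0.1**2)    # vertical dispersion: ~7.7e-7
```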
The time to fill the storage layer in the FluidFlower such that CO\({}_{2}\) spills out of the trap is roughly 4 hours (250 min) in some experiments, Fig. 1. With the values in Table 2 and Eq. (6), the experimental time to spill scaled to storage formation conditions is hundreds to thousands of years. Simulations of storage at Northern Lights (Johansen formation) injected CO\({}_{2}\) at a rate of 1.6 Mt/y, and the storage formation, as simulated, had a capacity of 21,680 Mt (Marashi, 2021). The time to fill the storage formation at this rate is about 6,000 years. The long times required to fill a large-capacity storage formation are consistent with t\({}^{\text{Form}}\)/t\({}^{\text{Flow}}\) in Table 3.
### Horizontal convection
The magnitude of the coefficient \(\left(\frac{H}{L}\right)^{2}\left(\frac{\left(kk_{rg}\right)_{RH}}{\left(kk_{rg}\right)_{RV}}\right)\) on the first term on the right of Eq. (5) captures the relative importance of pressure gradient driven convection in the horizontal direction. With values in Table 2 for the FluidFlower and the Northern Lights project, this coefficient is 0.006 and 0.0003, respectively, indicating that the vertical direction dominates during pressure-driven convection. Results for the other cases are found in Table 3. Note the importance of the characteristic horizontal dimension to the results. Decreasing L from 10,000 to 1000 m for the Utsira case increases the coefficient from 0.002 to 0.2. The summary in Table 3 supports the importance of convection in the vertical direction.
### Gravity driven convection
The importance of gravity as a force for driving convection in the FluidFlower and the field is understood by computing the coefficient \(\frac{\Delta\varrho_{gR}g_{R}H}{\Delta p_{gR}}\) in front of the third term on the right of Eq. (5). The magnitude of the coefficient as compared to a value of 1 instructs about the relative importance of gravity. Likewise, the ratio of the coefficients is a measure of the difference in the importance of gravity between the storage formation and the FluidFlower. Values from Table 2 are used again and results for the 4 cases are in Table 3. The density difference in Table 2 corresponds to the values of pressure used in the pressure difference.
The magnitude of \(\frac{\Delta\varrho_{gR}g_{R}H}{\Delta p_{gR}}\) for the FluidFlower is 2 x10\({}^{-5}\). This value is indicative of the importance of convection by pressure gradient in the FluidFlower. The coefficient rises to values of 0.008 and 0.06 for conditions representative of the Northern Lights and Utsira storage formations, respectively, indicating that the role of gravity
relative to pressure is greater in the field as compared to the FluidFlower. Likewise, the ratio of the coefficients for gravity (FluidFlower:storage formation) is of order 0.001, indicating that the FluidFlower underrepresents the role of gravity with respect to the storage formation during injection. That is, the influence of gravity segregation on mass transport is greater in the field. Importantly, values of \(\frac{\Delta\varrho_{gR}g_{R}H}{\Delta p_{gR}}\) less than 1 are indicative that the pressure gradient contributes significantly to convection during active injection.
### Dispersive transport
The magnitude of the coefficient \(\frac{t_{R}D_{R}}{H^{2}}\) preceding the fifth term on the right of Eq. (5) teaches about the relative importance of dispersive transport of a component in the z-direction. This section specifically examines dispersive transport of CO\({}_{2}\) in the brine phase as it is relevant to understanding the miscible fingers in Fig. 1(c). The characteristic times \(t_{R}{}^{Form}\) and \(t_{R}{}^{Flow}\) developed earlier for the FluidFlower and Northern Lights examples are used. The diffusivity of CO\({}_{2}\) in the brine phase is taken as 1.9 x10\({}^{-9}\) m\({}^{2}\)/s at FluidFlower conditions (Tamimi et al., 1994) and 7.0 x10\({}^{-9}\) m\({}^{2}\)/s at Northern Lights conditions (Cadogan et al., 2014).
The dispersion coefficient is obtained from a compilation of measurements (Jha et al., 2011; Lake et al., 2014). Specifically, we use Fig. 11 of Jha et al., which plots the ratio of dispersion coefficient to diffusion coefficient (\(D_{c}/\mathcal{D}_{c}\)) versus interstitial velocity made dimensionless by the ratio of particle diameter to diffusion coefficient. The average interstitial velocity in the z direction of the gas/liquid interface in the FluidFlower is obtained from images, Fig. 1, and the \(\phi_{R}\) and \(S_{g}\) in Table 2 as 1.9 x10\({}^{-5}\) m/s. The particle diameter of the storage zone in Fig. 1 is 1.77 mm such that \(D_{c}/\mathcal{D}_{c}\) for CO\({}_{2}\) in the brine phase is found as 10 and the dispersion coefficient is 1.9 x10\({}^{-8}\) m\({}^{2}\)/s. Similarly, taking the interstitial velocity at Northern Lights conditions as 0.1 m/d (6.6x10\({}^{-6}\) m/s), the particle diameter as 32 \(\upmu\)m, and the diffusivity above yields \(D_{c}/\mathcal{D}_{c}\) of about 1. We assume that this ratio is roughly 1 for the other cases as well.
With these values we find that \(\frac{t_{R}D_{R}}{H^{2}}\) is equal to 8 x10\({}^{-7}\) in the FluidFlower and ranges from 10\({}^{-7}\) to 10\({}^{-6}\) in the various storage formations. The small values are consistent with macroscopic transport being driven primarily by convection. Values of \(\frac{t_{R}D_{R}}{L^{2}}\) are even smaller because L is at least one order of magnitude greater in all cases.
### Interphase mass transfer
Diffusion, while not a contributor to transport over large distances, is quite important to driving mass transfer across the interface between phases. Note the carmine-red layer of CO\({}_{2}\)-laden brine beneath the gas zone in Fig. 1(b). Subsequently, locally dense brine phase sinks vertically through the model. Interphase mass transfer of CO\({}_{2}\) does not appear in the overall mole balance for chemical species, Eq. (5), because mass lost by the CO\({}_{2}\)-rich phase is equal to the mass gained by the brine phase.
To understand mass transfer rates within the zone occupied by CO\({}_{2}\) and brine, we apply the so-called two-film model for mass transfer resistance at the interface between phases to quantify the mass transfer rate (Lewis & Whitman, 1924). Appendix A shows that the flux of CO\({}_{2}\) across the interface between the CO\({}_{2}\)-rich phase and brine phase, J\({}_{\rm cgb}\), is written as
\[J_{cgb}a_{i}=Ka_{i}(\chi_{ci}-\chi_{cb}) \tag{9}\]
where \(K\) is an overall mass transfer coefficient, \(a_{i}\) is the interfacial area, \(\chi\) is again mole fraction, the subscript \(ci\) refers to the amount of CO\({}_{2}\) in the brine phase at interface conditions, and the subscript \(cb\) refers to CO\({}_{2}\) in the bulk brine phase.
Martin et al. (1981) measured the mass transfer between CO\({}_{2}\) and liquid phases in porous media under immiscible conditions and present a correlation for mass transfer resistance. We use this correlation and modify it as suggested by Lozada and Farouq Ali (Lozada & Ali, 1987) to include the pressure drop. That is, we apply Darcy's law to describe the mass flux as well as divide by porosity and liquid phase saturation to obtain the interstitial phase velocity, \(v_{w}\). The expression for the mass transfer coefficient is
\[Ka_{i}=B\mathcal{D}_{cb}\left(\frac{\Delta p_{gR}}{H}\right)\left(\frac{kk_{rb }}{\phi S_{b}\mu_{b}}\frac{\Delta p_{gR}}{H}\right)\] (10a) or \[Ka_{i}=B\mathcal{D}_{cb}\left(\frac{\Delta p_{gR}}{H}\right)v_{w} \tag{10b}\]
where B is determined by experiment (Martin et al. report 0.011). The diffusivity of CO\({}_{2}\) in brine, \(\mathcal{D}_{cb}\), is the molecular diffusivity of CO\({}_{2}\) in the liquid phase accounting for the pore-scale nature of mass transfer from g to b phases.
The way ahead is to compute the ratio of interphase mass transfer as given by Eq. (9) between the FluidFlower and the representative formations. We set \(\chi_{cb}\) equal to
0 in Eq. (9) because the largest mass transfer rates are experienced where the amount of CO\({}_{2}\) dissolved in the brine is small. The equilibrium solubility of CO\({}_{2}\) in brine at the interface (\(\chi_{ci}\)) is computed as described by Enick and Klara (1990) using their correlations and the Krichevsky-Ilinskaya equation (Prausnitz et al., 1999). The solubility is found to be \(\chi_{ci}\) = 0.021 at Northern Lights conditions and \(\chi_{ci}\) = 6.8 x 10\({}^{-4}\) at FluidFlower conditions. The prediction for FluidFlower conditions agrees well with data (\(\chi_{ci}\) = 7 x10\({}^{-4}\)) in Lange's Handbook (Lange, 2017).
Equation (10) is substituted into Eq. (9) and the ratio of mass transfer in the FluidFlower relative to the storage formation is found. In this way, the coefficient B does not need to be evaluated. Parameters for calculation are taken from Table 2 and supplemented by Table 4. Equation (10a) is used for the FluidFlower whereas Eq. (10b) is used for the storage formation. It is anticipated that nonzero mass transfer from the gas to the brine phase occurs at the advancing interface in the storage formation because elapsed time is much greater and hence Eq. (10b) is more applicable. The interstitial brine velocity for the storage formation conditions is set to 6.6x10\({}^{-6}\) m/s consistent with the earlier discussion of dispersion.
The ratio between FluidFlower and storage formation conditions is roughly a factor of 30 for Northern Lights, 300 for Sleipner conditions, and about 3 for In Salah. These ratios are greater than 1 primarily because the permeability of the sands in the FluidFlower is about 4 orders of magnitude larger than that of the storage formations and the values of \(H\) differ by at least 2 orders of magnitude. The calculations summarized in Table 4 indicate that interphase mass transfer is faster in the FluidFlower relative to the formation and motivate the exploration of fingering and convective mixing that follows.
\begin{table}
\begin{tabular}{l|l|l|l|l} \hline & FluidFlower & Northern Lights & Sleipner (Utsira) & In Salah (Krechba) \\ \hline \(k_{rb}\) & 0.17 & 0.10 & 0.10 & 0.10 \\ \(S_{b}\) & 0.12 & 0.50 & 0.5 & 0.5 \\ \(\mu_{b}\) (\(Pa\cdot s\)) & 1.0x10\({}^{-3}\) & 3.1x10\({}^{-4}\) & 6.4x10\({}^{-4}\) & 3.0x10\({}^{-4}\) \\ \(\Delta\varrho_{b}\) (\(kg/m^{3}\)) & 3.5 & 13 & 10.5 & 10.5 \\ \(D_{cb}\) (\(m^{2}/s\)) & 1.9x10\({}^{-9}\) & 7.0x10\({}^{-9}\) & 3.0x10\({}^{-9}\) & 7.0x10\({}^{-9}\) \\ \(\chi_{ci}\) & 6.8x10\({}^{-4}\) & 0.021 & 0.021 & 0.020 \\ \(Ka_{i}/B\) (\(Pa\,m^{2}/s\)) & 1.9x10\({}^{-6}\) & 1.9x10\({}^{-9}\) & 1.8x10\({}^{-10}\) & 2.6x10\({}^{-8}\) \\ \(Ja_{i}^{Flow}/Ja_{i}^{Form}\) & & 31 & 330 & 3 \\ \hline \end{tabular}
\end{table}
Table 4: Parameter values to compute the ratio of interphase mass transfer and the onset of miscible fingering. Additional parameters needed for computation are taken from Table 2.
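As a numerical sketch of this comparison (Eqs. (9)-(10) with values from Tables 2 and 4; the function and variable names are ours):

```python
def Ka_over_B_lab(D_cb, dp, H, k, k_rb, phi, S_b, mu_b):
    # Eq. (10a): interstitial velocity from Darcy's law and the pressure drop.
    v_w = (k * k_rb / (phi * S_b * mu_b)) * dp / H
    return D_cb * (dp / H) * v_w

def Ka_over_B_field(D_cb, dp, H, v_w):
    # Eq. (10b): an assumed interstitial brine velocity at formation conditions.
    return D_cb * (dp / H) * v_w

lab = Ka_over_B_lab(1.9e-9, 1.0e3, 0.1, 2.79e-9, 0.17, 0.40, 0.12, 1.0e-3)
field = Ka_over_B_field(7.0e-9, 2.50e6, 170 * 0.35, 6.6e-6)   # Northern Lights

# Flux ratio from Eq. (9) with chi_cb = 0: scale each Ka/B by chi_ci.
print((lab * 6.8e-4) / (field * 0.021))   # ~31, matching Table 4
```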
## Fingering and Convective Mixing
An outcome of the mass transfer described by Eq. (9) is the formation of a layer of dense CO\({}_{2}\)-laden brine just beneath the capillary transition zone in the FluidFlower in Fig. 1. This CO\({}_{2}\)-laden brine clearly segregates downward convectively in Fig. 1(c). Ultimately, mixing couples mass transfer of CO\({}_{2}\) to the gas-brine interface, diffusion of CO\({}_{2}\) away from the interface and into the bulk brine, and the convection of denser CO\({}_{2}\)-laden brine downward in the formation through the formation of viscous fingers.
Figure 2 presents an overview of the dissolution of CO\({}_{2}\) into the brine phase versus time as found in the FluidFlower. The dissolved mass is obtained by subtracting the mass of CO\({}_{2}\) in the free phase from the cumulative injected mass. Figure 2 also plots the diffusive limit as a dashed line computed using \(\mathcal{D}_{cb}\) from Table 4. Note the inset that presents results at very short times. The thickness of a CO\({}_{2}\) saturated layer is of order \(2(\mathcal{D}t)^{1/2}\) (Ennis-King & Paterson, 2005). For a CO\({}_{2}\) diffusivity in brine of 1.9 x10\({}^{-9}\) m\({}^{2}\)/s and an elapsed time of 3600 s, the thickness of a CO\({}_{2}\)-laden layer is about 5 mm. Figure 2 indicates, however, that only very short times are controlled by diffusion across the gas-brine interface. Moderate times illustrate dispersion-controlled behavior, supporting the inclusion of dispersion in Eq. (5).
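For reference, the quoted 5 mm layer thickness follows directly from this scaling estimate:

```python
import math

# Diffusive boundary-layer thickness ~ 2*sqrt(D*t) (Ennis-King & Paterson, 2005).
D_cb = 1.9e-9            # CO2 diffusivity in brine, m^2/s
t = 3600.0               # one hour of elapsed time, s
print(2 * math.sqrt(D_cb * t))   # ~5.2e-3 m, i.e. about 5 mm
```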
Consistent with linear stability analysis of miscible displacement (Elenius & Johannsen, 2012), Fig. 1 illustrates early onset of instability whereas Fig. 2 shows that this initial unstable regime follows \(t^{1/2}\) scaling until about 150 minutes. The fully unstable system then emerges, mixing evolves, the rate of mass transfer increases, and the scaling of mass in solution transitions to \(t\)(Hassanzadeh et al., 2007).
### Onset of fingering
Visually, the onset time of unstable miscible fingers between the CO\({}_{2}\) saturated and unsaturated brine is relatively rapid following the establishment of the region saturated with free-phase CO\({}_{2}\). Figure 1(a) shows potential evidence of viscous fingers after 34 min of CO\({}_{2}\) injection in the FluidFlower. By this time a distinct and widespread gas phase has formed along with a narrow region of CO\({}_{2}\)-saturated brine beneath the gas zone.
Figure 2: Summary of the dissolution of CO\({}_{2}\) into the aqueous phase for 5 repeat experiments. The diffusive limit is computed using \(\mathcal{D}_{cb}=1.9\)x\(10^{-9}\) m\({}^{2}\)/s and Eq. (8) from (Hassanzadeh et al., 2007). Dispersivity is found as \(D_{c}/\mathcal{D}_{c}=10\). The inset indicates that the experiments deviate from diffusive transport at times less than 10 minutes. Experimental time resolution is \(\Delta t=5\) min.
A variety of analytical and numerical treatments of the instability of CO\({}_{2}\)-laden brine layers and subsequent convective mixing of the brine zone beneath the gas cap are available. Notably, much of the analysis in the literature assumes the rapid accumulation of a quiescent zone of free-phase CO\({}_{2}\) atop brine followed by dissolution and gravitational instability (Ennis-King & Paterson, 2005; Hassanzadeh et al., 2007; Riaz et al., 2006). On the other hand, the results summarized in Fig. 1 support the notion that the fingering begins during active injection and the capillary fringe beneath the gas cap interacts with the CO\({}_{2}\)-laden brine in the diffusive boundary layer. This interaction is predicted to reduce the time required for the onset of finger formation by up to a factor of 5 (Elenius et al., 2012).
Elenius et al. (2012) propose that the onset time, \(t_{f}\), for convective mixing of the brine-filled zone lies within a range incorporating time scales for horizontal and vertical flow components as
\[31\frac{\phi^{2}\mu_{b}^{2}\mathcal{D}_{cb}}{(k\Delta\varrho_{b}g)^{2}}\leq t_{f}\leq 146\frac{\phi^{2}\mu_{b}^{2}\mathcal{D}_{cb}}{(k\Delta\varrho_{b}g)^{2}} \tag{11}\]
Due to the relatively fast advance of the gas/liquid transition zone in the FluidFlower and the absence of a remarkable period of time dominated by diffusion in Fig. 2, we substitute dispersivity for diffusivity within the inequality in Eq. (11) during evaluation. Additionally, the vertical permeability is obtained as the product of \(k_{R}\) and \(k_{V}/k_{H}\) from Table 2.
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline
 & FluidFlower & Northern Lights & Sleipner (Utsira) & In Salah (Krechba) \\ \hline
Onset time, \(t_{f}\) & min & year & year & year \\
experimental & \(<28\) & & & \\
(Elenius et al., 2012) & 1.3-6.2 & 6.8-32 & 0.25-1.8 & 1500-7000 \\
(Riaz et al., 2006) & 16 & 127 & 3.2 & 44,000 \\
(Hassanzadeh et al., 2007) & 43 & 13,000 & 48 & 2.9x10\({}^{6}\) \\ \hline
Critical wavelength, \(\lambda_{f}\) & cm & m & m & m \\
experimental & 4.8 \(\pm\) 0.3 & & & \\
(Elenius et al., 2012) & 1.6-2.0 & 1.6-2.0 & 0.20-0.25 & 24-29 \\
(Riaz et al., 2006) & 5.0 & 79 & 6.7 & 1,800 \\
(Hassanzadeh et al., 2007) & 2.8 & 28 & 3.5 & 410 \\ \hline
\end{tabular}
\end{table}
Table 5: Onset times and critical wavelengths for miscible fingering measured in the FluidFlower and computed from literature results. Parameters needed for computation are taken from Tables 2 and 4. Computations consistently use \(k_{V}\).
Taking values from Table 2 and setting the difference in density to 3.5 kg/m\({}^{3}\) (Efika et al., 2016), we obtain a predicted onset time that ranges from 1 to 6 min for the FluidFlower using Eq. (11). Other predictions available from the literature are also summarized in Table 5. Experimentally, the onset time for fingering in the FluidFlower depends on the time required to build the gas-filled region in the top of the storage zone.
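As a numerical cross-check of the 1-6 min prediction, the bounds of Eq. (11) can be evaluated directly. The porosity and vertical permeability below are illustrative assumptions standing in for Table 2 entries not reproduced here, so the output only approximates the reported range; a minimal sketch:

```python
# Evaluate the Elenius et al. (2012) bounds on the fingering onset time,
# Eq. (11), for FluidFlower conditions, with dispersion substituted for
# diffusion as in the text (D ~ 10 * D_cb).
phi  = 0.44          # porosity (assumed; Table 2 is not reproduced here)
k_v  = 1.0e-9        # vertical permeability, m^2 (assumed)
mu_b = 1.0e-3        # brine viscosity, Pa s (Table 4)
D    = 10 * 1.9e-9   # dispersion, m^2/s (Table 4 diffusivity x 10)
drho = 3.5           # density increase of CO2-laden brine, kg/m^3 (Table 4)
g    = 9.81          # gravitational acceleration, m/s^2

denom = (k_v * drho * g) ** 2
t_lo, t_hi = (c * phi**2 * mu_b**2 * D / denom for c in (31, 146))
print(f"onset time: {t_lo/60:.1f} - {t_hi/60:.1f} min")
# -> ~1.6 - 7.6 min with these assumptions; Table 5 reports 1.3 - 6.2 min.
```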
Figure 3: Analysis of fingers during experiments: (a) example labeling of fingers in a zone that is 1.5 m in length and (b) evolution of the number of fingers for 5 repeat experiments. The maximum number of fingers is found at times from roughly 160 to 250 min. During this time the average number of fingers over the 5 experiments is 31 with a standard deviation of 2.
Onset time is found by plotting the position of fingers versus time and extrapolating the position to zero. Then, the time needed for CO\({}_{2}\) to flow from the injection point and to accumulate in the volume above the finger is subtracted to obtain the onset time. Onset times for fingering in the FluidFlower are thus found to be about 28 min. This is in order-of-magnitude agreement with Fig. 1(a).
Experiments can also be compared to predictions of the critical wavelength, \(\lambda_{f}\), of instabilities that grow into fingers. Similar to the onset time, Elenius et al. (2012) suggest that realistic cases for \(\lambda_{f}\) are bounded as
\[\frac{2\pi\mu_{b}D_{cb}\phi}{0.086\,k\Delta\varrho_{b}g}\leq\lambda_{f}\leq\frac{2\pi\mu_{b}D_{cb}\phi}{0.07\,k\Delta\varrho_{b}g} \tag{12}\]
where diffusivity has been substituted by dispersivity. With the same input as that used to evaluate Eq. (11), \(\lambda_{f}\) ranges from 1.6 to 2.0 cm (Table 5). Experimental images are analyzed as described by Nordbotten et al. (submitted). Inspection of experimental results from the FluidFlower in Fig. 3 indicates that significant merging of fingers does not occur at relatively short times. Hence, the wavelength of macroscopic-dimension fingers at these times likely corresponds to the critical wavelength. Figure 3 indicates roughly 31 fingers below the gas cap of the storage zone in the box outlined in gray. This zone is about 1.5 m in length. Hence, the experimental wavelength of the fingers is about \(4.8\pm 0.3\) cm on average.
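The same assumed inputs evaluate the wavelength bounds of Eq. (12) and reproduce the finger-count estimate; again a sketch, not the authors' computation:

```python
import math

phi, mu_b, D = 0.44, 1.0e-3, 10 * 1.9e-9   # assumed phi; Table 4 values
k_v, drho, g = 1.0e-9, 3.5, 9.81           # assumed k_v; Table 4 values

lam_lo = 2 * math.pi * mu_b * D * phi / (0.086 * k_v * drho * g)
lam_hi = 2 * math.pi * mu_b * D * phi / (0.07 * k_v * drho * g)
print(f"critical wavelength: {lam_lo*100:.1f} - {lam_hi*100:.1f} cm")
# -> ~1.8 - 2.2 cm with these assumptions; the text reports 1.6 - 2.0 cm.

# Experimental estimate: ~31 fingers along the 1.5 m zone of Fig. 3.
print(f"experimental wavelength: {1.5/31*100:.1f} cm")   # ~4.8 cm
```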
The ability of Eqs. (11) and (12) to reflect the dynamics in the FluidFlower gives us some confidence to apply them to the storage formation examples (Table 5). Generally, critical wavelengths range from a few meters to tens of meters, whereas onset times range from less than a year for Sleipner conditions up to tens of thousands of years for In Salah conditions due to the small \(k_{V}\).
### End of early convective mixing
The formation of viscous fingers marks the start of convective mixing, and mixing stratifies the density gradients in the brine. This stratification of density diminishes convection and eventually leads to the reduction or gradual elimination of convection cells. Linear stability analysis is then no longer applicable. Analysis of numerical simulations showed that the end of the early convective mixing period is expressed as (Hassanzadeh et al., 2007)
\[t_{e}=100H^{2}D_{cb}^{1/5}\left(\frac{\mu_{b}\,\phi}{\Delta\varrho_{b}gkH}\right) ^{6/5} \tag{13}\]
where diffusivity has again been substituted with dispersivity. With the input previously used in Eqs. (11) and (12) for the FluidFlower, Eq. (13) predicts the end of the period of convective mixing driven by fingering to be 570 min in reasonable order of magnitude agreement with results in Figs. 1(c) and 2. In Fig. 2, the end of early mixing is gauged by the slope of the dissolved mass curves deviating from near constant and decreasing (Hassanzadeh et al., 2007). Note that the applicability of Eq. (13) was checked as the range of suitability is \(80<\mathrm{Ra}<2000\). For FluidFlower conditions, Ra equals 453.
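Equation (13) and the quoted Rayleigh number can be checked the same way; the brine-zone height \(H\) and permeability below are assumptions chosen only for illustration:

```python
phi, mu_b, D = 0.44, 1.0e-3, 10 * 1.9e-9   # assumed phi; dispersion for D
k_v, drho, g = 1.0e-9, 3.5, 9.81           # assumed k_v
H = 0.1                                    # brine-zone height, m (assumed)

t_e = 100 * H**2 * D**0.2 * (mu_b * phi / (drho * g * k_v * H)) ** 1.2
Ra = drho * g * k_v * H / (phi * mu_b * D)
print(f"t_e ~ {t_e/60:.0f} min, Ra ~ {Ra:.0f}")
# -> ~640 min and Ra ~ 411 here; the text reports ~570 min and Ra = 453.
```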
Figure 4: Summary of the magnitude of coefficients in Eq. (5) evaluated using parameters from Tables 2 and 4. Coefficients less than 1 support the importance of vertical pressure-gradient driven flow of CO\({}_{2}\) as an important transport process.
## Discussion
The physical processes considered here included convection driven by pressure gradients, convection driven by gravity, gas-to-brine interphase mass transfer controlled by diffusion, dispersive transport within brine, and convective mixing of dense CO\({}_{2}\)-laden brine with the original aqueous phase. Visualization of the experiments shows that convection is the dominant mass transfer mechanism in both the gas and brine phases, and this observation guided the scaling analysis. Figure 4 summarizes visually the magnitudes of the coefficients in Eq. (5) using characteristic parameters. It illustrates that all coefficients are at least an order of magnitude less than 1. The impact of gravity-driven convection is most significant in the Northern Lights and Sleipner example field cases.
Additionally, Fig. 5 presents the difference in the relative importance of processes between the FluidFlower and the storage formations as the ratio of the scaling groups that emerged from Eq. (5) when evaluated with reference values from Tables 2 and 4. Pressure-driven convection in the z direction has a ratio of 1 because the ordering process proceeded from this choice. Figure 5 shows that the x-direction pressure gradient and intraphase mass transfer are relatively greater in the FluidFlower compared to storage formations. On the other hand, gravity-driven convection is relatively smaller in the FluidFlower. These predictions result in large part from the significant heights and lengths of the storage formations in comparison to the FluidFlower. For example, intraphase mass transfer in the FluidFlower is predicted to be about 100 times greater than under characteristic Sleipner conditions, largely due to the substantial sand thickness at Sleipner.
Convective mixing of CO\({}_{2}\)-laden brine with original brine is a significant mass transfer mechanism during CO\({}_{2}\) storage (Lindeberg & Wessel-Berg, 1997; Weir et al., 1995). Application of results from linear stability analysis to FluidFlower conditions produced order-of-magnitude agreement for the critical wavelength of instabilities of about 4.8 cm. The critical time for the onset of instabilities, however, is predicted to differ from what is found experimentally. The physical situation in the FluidFlower does not agree with the theoretical analysis because the FluidFlower requires some time to accumulate a gas-phase region, and that region subsequently grows laterally and vertically in time. Visual inspection suggests that the capillary transition zone at the base of the gas-phase region and the advancing gas/liquid interface play a role in
relatively short onset times for convective mixing. Overall, fingers in the FluidFlower establish themselves rapidly. The net effect appears to be that CO\({}_{2}\) goes into solution in brine and mixes with the original pore waters very rapidly in the FluidFlower. This aspect warrants further investigation.
Alternate expressions to those used here for the critical onset time for fingering and the critical wavelength of fingers, Eqs. (11) and (12), were also explored. These expressions tend to produce estimates of the critical time for the onset of fingering that are substantially greater than the analysis of Elenius et al. (2012) and somewhat closer to the experimental observations. On the other hand, estimates of the critical wavelength were all on the order of centimeters and in agreement with FluidFlower observations.
Interestingly, the various predictions for the onset time of convective mixing in the case of In Salah are on the order of thousands to millions of years, whereas the Northern Lights and Sleipner cases are years to tens of years. In these latter cases, convective
Figure 5: Ratio of scaling groups evaluated using characteristic values for FluidFlower and storage formation conditions. The ratios illustrate the relative time scales between the FluidFlower and the storage formation not the relative importance of each process. Pressure-driven convection in the z-direction has a scaling of 1 for all cases and allows estimation of storage formation time scales from FluidFlower results.
mixing should occur during active injection into the storage formation. Accordingly, CO\({}_{2}\) goes into solution relatively quickly and convective mixing contributes to rapid downward migration of CO\({}_{2}\) during the injection period. This aspect improves storage security.
The reactions of dissolved CO\({}_{2}\) to create carbonate minerals and the dissolution of minerals were not analyzed here. The timescale of mineral trapping is 1000's to 10,000's of years in sedimentary sandstone formations that do not contain relatively abundant and soluble silicate minerals (Audigane et al., 2007; Zhang & DePaolo, 2017). Such solubilization releases calcium, magnesium, and iron species that may combine with CO\({}_{2}\). Hence, the formation of carbonate minerals is expected to contribute little to the sequestration of CO\({}_{2}\) during the period of active injection and thereafter for quite some time in the example formations used here. Formations with the potential for rapid carbon mineralization contain ultramafic igneous or metamorphic rocks, such as basalt. Incorporation of mineralization driven by low pH brine into the scaling analysis represents a significant extension due to reaction network complexity. This is left as a potential topic for future work.
## Summary and Conclusion
A general, nonequilibrium mass balance for CO\({}_{2}\) interaction with brine under immiscible conditions was analyzed to compare laboratory conditions at ambient temperature and pressure with storage formations at temperatures ranging from 41 to 95 \({}^{\circ}\)C and pressures up to 29 MPa. The period of interest was active CO\({}_{2}\) injection operations. Comparison of the dimensionless groups developed by an ordering analysis showed that conditions in the FluidFlower and storage formations are largely dominated by convection driven by the imposed pressure gradient. The contribution of gravity to convective transport is somewhat greater in the storage formations than in the FluidFlower. Results indicate that the various physics seen in the FluidFlower during injection are acceptably scaled in comparison to the field given physical constraints. That is, relatively straightforward scaling of time is possible between the FluidFlower and storage formation conditions.
Significant convective mixing of CO\({}_{2}\) that has dissolved into formation brine with CO\({}_{2}\)-free brine is found in the FluidFlower. The magnitude of the onset time for downward migrating fingers containing CO\({}_{2}\) is only a fraction of the duration of CO\({}_{2}\)
injection. Hence, the condition of quiescent fluids prior to convective mixing, as assumed in many theoretical analyses, is not met in the FluidFlower. Application of predictions for onset times to representative storage formation conditions likewise teaches that the onset time for viscous fingering is significantly less than the duration of CO\({}_{2}\) injection. The implications of this observation include that mixing of CO\({}_{2}\) with brine and the subsequent settling due to gravity may be more rapid than some prior predictions. More rapid mixing is a favorable outcome enhancing CO\({}_{2}\) storage security.
## Acknowledgement
We thank B. Benali and J.W. Both for assistance analyzing FluidFlower results. ARK acknowledges the support of the Stanford University Energy Transition Research Institute (SUETRI-A) as well as the Stanford Center for Carbon Storage (SCCS).
## Competing Interests
The authors declare no known competing interests.
## Nomenclature

\begin{tabular}{l l}
\(a_{i}\) & interfacial area \\
\(B\) & constant in Eq. (8) \\
\(c\) & constant in Eq. (10) \\
\(D\) & dispersion \\
\(\mathcal{D}\) & diffusivity \\
\(g\) & acceleration due to gravity \\
\(h\) & Henry's law constant \\
\(H\) & characteristic vertical dimension \\
\(J_{i}\) & diffusive flux of component \(i\) \\
\(k\) & absolute permeability \\
\(K\) & mass transfer coefficient \\
\(k_{r}\) & relative permeability \\
\(L\) & characteristic horizontal length \\
\(p\) & pressure \\
\(q\) & injection/production rate source/sink term \\
\(S\) & saturation \\
\(t\) & time \\
\(T\) & absolute temperature \\
\(u\) & Darcy velocity \\
\(v\) & interstitial velocity \\
\(x\) & horizontal distance \\
\(z\) & vertical distance \\
\end{tabular}

### Greek letters

\begin{tabular}{l l}
\(\phi\) & porosity \\
\(\lambda\) & wavelength \\
\(\mu\) & viscosity \\
\(\varrho\) & mass density \\
\(\nu\) & kinematic viscosity \\
\(\rho\) & molar density \\
\(\tau\) & tortuosity \\
\(\chi\) & mole fraction \\
\end{tabular}

### Subscripts and superscripts

\begin{tabular}{l l}
\(\beta\) & phase \\
\(b\) & refers to aqueous phase \\
\(c\) & refers to CO\({}_{2}\) chemical component \\
\(D\) & dimensionless \\
\(f\) & refers to fingering \\
\(g\) & refers to CO\({}_{2}\)-rich phase \\
\(H\) & refers to horizontal direction \\
\(i\) & component \\
\(R\) & reference or characteristic value \\
\(V\) & refers to vertical direction \\
\(w\) & refers to water chemical component \\
\end{tabular}
## Appendix A Mass Transfer Resistance
This appendix develops the two-film model for mass transfer resistance that is used to find the overall mass transfer resistance, following the ideas of Lewis & Whitman (1924). Figure A1 sketches bulk fluid phases with an interfacial region. The interface is marked as a black dashed line. The overall mass transfer resistance describes transfer from the g to the b phase. There are stagnant films with unequal dimensions on each side of the interface. A Henry's law relation is written as
\[\chi_{cb}=Hp_{cg}\] (A1)
to describe the equilibrium solubility of CO\({}_{2}\) in the brine phase. It is assumed that only the interface is at equilibrium and so describable by Henry's law. The flux across film-1 is equal to the flux across film-2 and written as
\[J_{c}=K_{g}\big{(}p_{cg}-p_{ci}\big{)}\] (A2a)

\[J_{c}=K_{b}\big{(}\chi_{ci}-\chi_{cb}\big{)}\] (A2b)
where the subscript \(i\) denotes conditions at the interface, \(K_{g}\) and \(K_{b}\) are mass transfer coefficients for the respective films, and it is clear that \(K_{g}\) and \(K_{b}\) have different units.
To proceed, Eq. (A1) is substituted into the expression for mass transfer in Eq. (A2b) and the result solved for \(p_{ci}\) to obtain
\[p_{ci}=\left(\frac{J_{c}}{K_{b}}+\chi_{cb}\right)\frac{1}{H}\] (A3)
Equation (A3) is then substituted into the expression for mass transfer in the CO\({}_{2}\)-rich phase in Eq. (A2a) and solved for the flux as
\[J_{c}=\frac{Hp_{cg}-\chi_{cb}}{\frac{H}{K_{g}}+\frac{1}{K_{b}}}\] (A4)
Equation (A4) describes the flux of CO\({}_{2}\) from phase g to b and is rewritten as
\[J_{c}=K\big{(}Hp_{cg}-\chi_{cb}\big{)}\] (A5)
where the overall mass transfer coefficient is found as
\[\frac{1}{K}=\frac{H}{K_{g}}+\frac{1}{K_{b}}\] (A6)
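The elimination of the interface values that leads to Eq. (A4) can be verified symbolically; a minimal sketch (not the authors' derivation code):

```python
import sympy as sp

# Solve Eqs. (A1)-(A2) for the flux J_c, eliminating the interface
# values p_ci and chi_ci.
Jc, Kg, Kb, H, p_cg, p_ci, chi_cb, chi_ci = sp.symbols(
    "J_c K_g K_b H p_cg p_ci chi_cb chi_ci", positive=True)

eqs = [
    sp.Eq(chi_ci, H * p_ci),            # Henry's law at the interface (A1)
    sp.Eq(Jc, Kg * (p_cg - p_ci)),      # flux across film 1 (A2a)
    sp.Eq(Jc, Kb * (chi_ci - chi_cb)),  # flux across film 2 (A2b)
]
sol = sp.solve(eqs, [Jc, p_ci, chi_ci], dict=True)[0]
print(sp.simplify(sol[Jc]))
# -> K_b*K_g*(H*p_cg - chi_cb)/(H*K_b + K_g), i.e. Eq. (A4),
#    equivalent to J_c = K*(H*p_cg - chi_cb) with 1/K = H/K_g + 1/K_b.
```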
|
2307.04021 | Direct images and spectroscopy of a giant protoplanet driving spiral
arms in MWC 758 | Understanding the driving forces behind spiral arms in protoplanetary disks
remains a challenge due to the faintness of young giant planets. MWC 758 hosts
such a protoplanetary disk with a two-armed spiral pattern that is suggested to
be driven by an external giant planet. We present new thermal infrared
observations that are uniquely sensitive to redder (i.e., colder or more
attenuated) planets than past observations at shorter wavelengths. We detect a
giant protoplanet, MWC 758c, at a projected separation of ~100 au from the
star. The spectrum of MWC 758c is distinct from the rest of the disk and
consistent with emission from a planetary atmosphere with Teff = 500 +/- 100 K
for a low level of extinction (AV<30), or a hotter object with a higher level
of extinction. Both scenarios are commensurate with the predicted properties of
the companion responsible for driving the spiral arms. MWC 758c provides
evidence that spiral arms in protoplanetary disks can be caused by cold giant
planets or by those whose optical emission is highly attenuated. MWC 758c
stands out both as one of the youngest giant planets known, and also as one of
the coldest and/or most attenuated. Furthermore, MWC 758c is among the first
planets to be observed within a system hosting a protoplanetary disk. | Kevin Wagner, Jordan Stone, Andrew Skemer, Steve Ertel, Ruobing Dong, Dániel Apai, Eckhart Spalding, Jarron Leisenring, Michael Sitko, Kaitlin Kratter, Travis Barman, Mark Marley, Brittany Miles, Anthony Boccaletti, Korash Assani, Ammar Bayyari, Taichi Uyama, Charles E. Woodward, Phil Hinz, Zackery Briesemeister, Kellen Lawson, François Ménard, Eric Pantin, Ray W. Russell, Michael Skrutskie, John Wisniewski | 2023-07-08T17:51:37Z | http://arxiv.org/abs/2307.04021v1 | # Direct images and spectroscopy of a giant protoplanet driving spiral arms in MWC 758
###### Abstract
Understanding the driving forces behind spiral arms in protoplanetary disks remains a challenge due to the faintness of young giant planets. MWC 758 hosts such a protoplanetary disk with a two-armed spiral pattern that is suggested to be driven by an external giant planet. We present new thermal infrared observations that are uniquely sensitive to redder (i.e., colder or more attenuated) planets than past observations at shorter wavelengths. We detect a giant protoplanet, MWC 758c, at a projected separation of \(\sim\)100 au from the star. The spectrum of MWC 758c is distinct from the rest of the disk and consistent with emission from a planetary atmosphere with \(T_{\rm eff}=500\pm 100\) K for a low level of extinction (\(A_{\rm V}\)\(\leq\)30), or a hotter object with a higher level of extinction. Both scenarios are commensurate with the predicted properties of the companion responsible for driving the spiral arms. MWC 758c provides evidence that spiral arms in protoplanetary disks can be caused by cold giant planets or by those whose optical emission is highly attenuated. MWC 758c stands out both as one of the youngest giant planets known, and also as one of the coldest and/or most attenuated. Furthermore, MWC 758c is among the first planets to be observed within a system hosting a protoplanetary disk.
Giant protoplanets interact gravitationally with their birth disks, driving gaps and large-scale spiral structures that alter the environment for subsequent planet formation [1, 2]. Several protoplanetary disks that are stable to their own self-gravity (i.e., that should not show spiral structure due to instabilities) have been observed with global spiral morphologies that resemble predictions from companion-disk interaction models (e.g., [3, 4]). However, this picture has only been confirmed for a handful of systems with stellar and brown dwarf companions [2, 5, 6]. Several studies have hypothesized that the spiral arms in the other disks are caused by giant planets that formed with lower initial entropy [7, 8] or by those whose optical emission is heavily attenuated by circumstellar or circumplanetary material. If most protoplanets form cold or significantly reddened, this would explain the lack of previously detected planets in systems with spiral arms, and more
broadly the low observed yield of many past searches for protoplanets (e.g., [9, 10]). Around one such spiral disk, MWC 758, thermal infrared observations have revealed a very red candidate planet [11].
MWC 758 (d=156 pc, age=3.5\(\pm\)2 Myr, SpT=A8Ve; [12, 13]) is among a subset of circumstellar disks that shows a two-armed spiral pattern but no obvious signs of a stellar or brown dwarf companion ([14, 15, 16, 11]; see also Fig. 1). The spirals display a clear two-armed geometry with an arm-to-arm separation of 140\(-\)170\({}^{\circ}\) [11], which can be linked to theoretical models of companion-driven spiral structures to imply a mass ratio of \(q\gtrsim 0.005\) relative to the central star (i.e., \(M_{companion}\gtrsim 8\) \(M_{Jup}\); [17]). The pitch angle of the spiral arms (\(\sim\)25-29\({}^{\circ}\)), which is linked to disk mass [19], suggests that the disk-to-star mass ratio is low enough that the disk is stable to its own self-gravity [11, 20]. Likewise, the pattern speed measurements [21, 22] are incompatible with those of spiral arms formed by self-gravity. This supports the companion-driven hypothesis. The reported spiral arm rotation rate of 0.22\({}^{\circ}\)/yr \(\pm\) 0.06\({}^{\circ}\)/yr (when including inclination: [20]) corresponds to a companion at \(\sim\)160\({}^{+35}_{-25}\) au for a distance of 156 pc and a stellar mass of 1.5 \(M_{\odot}\). The uncertainty was derived from measurements that oversampled the spatial resolution without accounting for measurement covariance; consequently, the uncertainty is underestimated.
Previous direct imaging observations of MWC 758 have revealed two candidate planets: MWC 758b, interior to the spiral arms at 0.11" or \(\sim\)17 au projected separation [16], and MWC 758 CC1, for 'Companion Candidate 1', which is exterior to the Southern arm at 0.62" or \(\sim\)97 au [11]. Either candidate planet, if real, would be massive enough to generate the spiral arms. MWC 758b was not recovered in subsequent observations [11], whereas the prior observations [16] were not sensitive enough to detect MWC 758 CC1, leaving its nature unclear. Based on the spatial density of objects with similar infrared brightness, the possibility that MWC 758
Figure 1: Images of MWC 758 from LBTI/ALES taken on UT 2019-11-14. **a:** data processing optimized for recovering extended structures (see Methods). **b:** the same wavelength range with processing optimized for recovering point sources. Note that injected planets with the brightness of MWC 758c are not recovered in the processing for extended sources; i.e., the non-detection of MWC 758c in panel **a** is not surprising. **c:** combined signal to noise map from all LBTI data (including \(L^{\prime}\) and \(M^{\prime}\) LMIRCam data published in [11]). **d-f:** data processing for point source recovery in narrower spectral bins, showing relatively constant flux from the disk and a notable rise of flux from the planet candidate toward longer wavelengths. Note that these images are processed with angular differential imaging, and thus the apparent features within the disk (especially in the reductions optimized for point sources: panels **b-f**) should not be interpreted as accurate representations of the disk’s surface brightness.
CC1 could be an unassociated background object is \(<\)1% (see Methods). However, [11] could not exclude the possibility that CC1 could be a spurious detection, as the first few available detections established a combined \(\sim\)10% false positive probability based on the residual speckle density distributions. In this work, we aim to resolve the nature of this candidate.
## Results
We observed MWC 758 with the Large Binocular Telescope Interferometer (LBTI; [23]) on UT 2019-01-05 and 2019-11-14. Observations were performed with the LMIRCam camera [24, 25] utilizing the Arizona Lenslets for Exoplanet Spectroscopy (ALES; [26, 27]). With 0.6 hr of integration time (source+sky, with equal time each, and seeing of \(\sim\)1") on 2019-01-05, we first measured a very red, but low-SNR, spectrum of CC1 between \(\lambda\)=3.4-4.2 \(\upmu\)m that appeared to be distinct from the rest of the disk. To verify this result, we obtained a longer observation of 1.8 hr integration on 2019-11-14 in photometric conditions (\(\sim\)0.8-1.0" seeing). The results verified our initial findings and improved the SNR from an average value of \(\sim\)1 in each spectral channel to \(\sim\)3 (for \(\lambda\geq 3.8\) \(\upmu\)m). The spectral resolution of both observations was \(R\sim 40\), enabling features as small as \(\sim\)0.1 \(\upmu\)m to be identified.
To enable conversion of contrast measurements to physical flux units, we also obtained a flux-calibrated spectrum of MWC 758 with NASA's Infrared Telescope Facility (IRTF) / SpeX instrument [28] on 2021-02-03 (see Methods and Fig. 2). In summary of the findings from LBTI/ALES and IRTF/SpeX, MWC 758 CC1 has a spectrum that is consistent with a very faint and very red point source, whereas the rest of the disk has a spectrum consistent with scattered starlight (see Fig. 2 and Methods); this is even more apparent when observations of the disk at shorter wavelengths are taken into account (e.g., [15]). Therefore, we determine MWC 758 CC1 to be an object distinct from the rest of the disk and refer to it henceforth as MWC 758c.
To expand the spectral range, we analyzed archival data covering \(\lambda=0.95\)-2.2 \(\upmu\)m from the Very Large Telescope (VLT)/Spectro-Polarimetric High-contrast Exoplanet REsearch (SPHERE: [31]) instrument. This enabled us to place constraints on the level of extinction (or \(V\)-band attenuation, \(A_{V}\)) when compared to the detections from LMIRCam and ALES at \(\lambda=3\)-5 \(\upmu\)m. The SPHERE data yield a non-detection of MWC 758c (upper limit \(\sim\)5-6 \(\times\) 10\({}^{-6}\) contrast at \(J\)- and \(H\)-band and SNR\(\sim\)3, see Methods), while the disk is detected with a
Figure 2: LBTI/ALES spectrum of MWC 758c (blue points) combined with a simple average over the two epochs. Comparisons of L5-T5 brown dwarf spectral standards are shown in gray through black [29], and an example of a cold and moderately attenuated Cholla model atmosphere is shown in purple [30]. Note that higher (lower) levels of assumed extinction correspond to higher (lower) inferred effective temperatures. The IRTF/SpeX spectrum of the star and warm inner disk is shown in the light blue curve. The scattered light spectrum from the spiral arms closely resembles this source spectrum (see Methods for further comparisons).
similar brightness and morphology to its appearance in the images at \(\lambda\)\(\sim\)3-5 \(\upmu\)m [16, Fig. 1]. This near-IR non-detection of MWC 758c places a lower limit of \(A_{V}\geq 8\) for MWC 758c (see Methods). The star has an extinction of \(A_{V}=0.4\), which suggests that interstellar extinction is negligible [14], and that the attenuation is likely originating from circumplanetary dust, since both the \(\upmu\)m-sized scattered-light [15] and emission from mm-sized circumstellar dust disk [32] fall off sharply near the projected separation of MWC 758c.
We compared the spectral measurements of MWC 758c to brown dwarf spectral standards and planetary atmosphere models [30, 29, 34] in order to estimate its physical properties. The brown dwarf comparison yields a best match of a very late spectral type of T5 or later, consistent with a relatively cold object (see Fig. 2). The atmospheric model comparison yields constraints on the combination of effective temperature (\(T_{\rm eff}\)), radius, and level of attenuation (\(A_{V}\)). Assuming radii between \(R=1-2~{}R_{Jup}\) (motivated by the predicted range from evolutionary models for the system's age of \(\sim\)3.5 Myr: e.g., [34, 35]), a reddening relation typical for interstellar dust [36], and a range of \(A_{V}\leq 150\) (motivated by simulations of planets forming embedded within protoplanetary disks: [37, 38, 39]), the brightness of MWC 758c is consistent with temperatures between \(T_{\rm eff}=400-2500\) K (Figs. 3, 4). Higher values of \(A_{V}\) correspond to much higher values of \(T_{\rm eff}\) for a given radius (Fig. 4). The starting temperatures of low-initial-entropy, or "cold-start," models are uncertain, but typically considered to be \(T_{\rm eff}=600-800\) K [35, 40]. The surface gravity (\(g\)) and mass are relatively unconstrained, although the best-fitting solutions favor lower values (see Methods). Finally, the most likely inferred ranges of temperature and radius are consistent with those of the expected body driving the spiral arms (\(M\gtrsim 8~{}M_{Jup}\): [17]) from both cold- and hot-start models of mass, temperature, and radius evolution [34, 35, 41, 42].
Figure 3: Comparison of LBTI/ALES data (blue points) and broadband photometry from LBTI/LMIRCam (purple; [11]) to Cholla model atmospheres (light blue, gray, and black curves, [30]). Synthetic LMIRCam photometry are shown in light blue, gray, and black points. Panel a shows an example for a case of low \(A_{V}\) and low \(T_{\rm eff}\), whereas panel b shows an example for a case of high \(A_{V}\) that requires a higher \(T_{\rm eff}\), corresponding to typically more massive planets. JWST's NIRISS instrument will be able to fill in the range between 4-5 \(\upmu\)m, which will constrain \(A_{V}\) (and \(T_{\rm eff}\)) through the strength of the molecular absorption features.
## Discussion
### Evidence For MWC 758c Being a Planet
With this work, we have established that MWC 758c has a spectrum that is distinct from the rest of the disk (see Methods) and consistent with that of a planetary atmosphere. MWC 758c also does not appear in polarized emission [15], while the spiral arms bear the same morphology in total intensity and polarized light [14, 15, 16]. This suggests a self-luminous object rather than scattered starlight. Combined with the non-detection of MWC 758c at wavelengths shorter than \(\lambda\leq 3.3\) um (compared to obvious detections of the disk: [15, 18]), the data are only consistent with a reddened thermal emission spectrum.
Our reddening constraint for MWC 758c (\(A_{V}\geq 8\)) is consistent with predictions from hydrodynamical simulations (\(A_{V}\sim 10-150\): [38, 39]) and also with the estimates for the PDS 70 protoplanets (\(A_{V}\sim 16-17\): [43, 44, 45]; [46] find smaller values of \(A_{V}\sim 1-8\)). AB Aur b is also likely in an embedded phase [47]. The few known examples of imaged protoplanets, including MWC 758c, are all consistent with having significant levels of optical attenuation, as predicted by models of giant planet formation (e.g., [37, 38, 39]). We also note an additional possibility: that the emission observed from MWC 758c could be either partly or entirely due to a circumplanetary disk or envelope, with the planet itself completely obscured or too faint to be detected. Compared to models in which the circumplanetary disk outshines the planet itself [37], the \(\sim\)5 \(M_{Jup}\) model
Figure 4: **a** Retrieved \(T_{\rm eff}\) vs. \(A_{V}\) of MWC 758c for a range of plausible planetary radii. Hot-start models (e.g., [34]) show planets with maximum radii up to \(\sim\)2 R\({}_{\rm Jup}\), whereas cold-start models (e.g., [35]) have radii closer to \(\sim\)1 R\({}_{\rm Jup}\). Either set of models is consistent with the data for a reasonable range of visual extinction and planet mass. **b** Effective temperature vs. age of directly imaged exoplanets and brown dwarf companions compared to evolutionary tracks for hot– (red: [42]) and cold-start planets (blue: [35]). MWC 758c’s temperature and mass are not well constrained due to the degeneracy with \(A_{V}\). Nevertheless, MWC 758c stands out both as one of the youngest giant planets to be directly imaged (indeed, among just a few that are known within the protoplanetary disk phase), and also as either one of the coldest or most attenuated.
provides a reasonable fit to the rise in brightness at \(\lambda\gtrsim 3.8\) um. Due to the probable low mass of MWC 758c (comparable to a thermal mass; see below), the circumplanetary material may also more closely resemble an envelope than a disk [47], leading to greater expected levels of attenuation of the protoplanet's own emission.
Dynamical effects that are expected from a forming giant planet are seen in the disk around MWC 758. In particular, the large-scale spiral pattern cannot be attributed to gravitational instability nor a recent stellar flyby (e.g., [49, 50]) and is likely companion driven. Comparing the measured separation of the spiral arms to predictions from theoretical models, the companion is expected to have a mass ratio with respect to the central star of \(q\gtrsim 0.005\) [17], or \(\gtrsim 8\) \(M_{Jup}\) assuming a central star of 1.5-1.9 \(M_{\odot}\) [51]. This is consistent with the spectroscopically inferred mass of MWC 758c with \(A_{V}\sim 100\) for hot-start planets, or \(A_{V}\gtrsim 10\) for cold-start planets.
The spectroscopically constrained mass of MWC 758c likely straddles (or slightly exceeds) the thermal mass at \(a\)\(\sim\)100 au: \(M_{thermal}=M_{\star}(\mathrm{h/r})^{3}\), where \(\mathrm{h/r}\) is the disk aspect ratio, or a few Jupiter masses assuming \(\mathrm{h/r}\)\(\sim 0.1\). Therefore, the planet is expected to remove gas and dust from its co-orbital region and could possibly open a gap [33]. The extent of this process and the resulting gas and dust depletion depend on poorly constrained parameters such as the disk viscosity and orbital parameters of the planet including eccentricity and inclination [33]. Whereas the excitation of the spiral arms occurs on the dynamical timescale (\(\sim\)10\({}^{3}\)-10\({}^{4}\) yr: [52]), the depletion of material around the planet's orbit occurs on the viscous timescale, which can be comparable to the system's age even for a modest viscosity (\(\alpha\gtrsim 0.001\)). This further limits our ability to quantitatively predict the outcome of gap opening for MWC 758c, leaving open the possibility for gas and dust to be present within the planet's co-orbital region.
Nevertheless, ALMA observations of the gas and mm-sized dust provide evidence consistent with the disk responding to the planet's gravitational perturbation in addition to the spirals. The mm-dust disk falls off sharply near the projected separation of MWC 758c, while the \({}^{13}\)CO gas and C\({}^{18}\)O gas are also in steep decline [32, 53]. The latter also hints at the likely presence of an inner gap edge, which is needed to explain the observed emission clump at 0.53 arcsec to the North of the star, assuming that the clump is a vortex triggered by the Rossby wave instability [54, 55]. The proximity of the planet to the outer edge of the mm-dust and \({}^{13}\)CO gas is not predicted by current steady state models of disk-planet interactions if MWC 758c is above the thermal mass (e.g., [33]). Future constraints on the formation timescale of the planet, and also on planet mass (which will soon be possible through constraints on \(A_{V}\) from JWST), will enable an assessment of this possible discrepancy with current models. As an aside, we note that MWC 758c, being exterior to the inner cavity, is likely not responsible for the structures interior to \(\sim\)0.3 arcsec.
### Implications of MWC 758c for Other Systems
As the reddest known planet and one of few known protoplanets, the existence of MWC 758c has a number of implications for the broader population of spiral protoplanetary disks and young giant planets. Perhaps most notably, the high level of optical extinction (\(A_{V}\geq 8\)) indicates the likely presence of a circumplanetary disk around MWC 758c, suggesting also that accretion is likely ongoing, despite the non-detection of H\(\alpha\)[10, 9]. As MWC 758c is among the most attenuated planets yet to be detected (perhaps the most attenuated), it provides an opportunity to constrain the properties (e.g., grain size and chemical makeup) of the line-of-sight material. Existing reddening laws are derived primarily from interstellar extinction and star forming galaxies, whereas grain growth and processing within circumstellar and circumplanetary disks may result in differences in the chemical makeup of the material accreted during planet formation and a different reddening profile.
Second, MWC 758c confirms that spiral arms in protoplanetary disks can be driven by giant planets that are cold enough (\(T_{\mathrm{eff}}\lesssim 600\) \(K\)) or reddened enough (\(A_{V}\gtrsim 10\)) to have escaped detection at shorter wavelengths (\(\lambda\sim 1-2\) um). Indeed, this was predicted [7] based on the status at the time that no giant planets had been found around disks with prominent spiral arms. The finding of MWC 758c serves as a proof-of-concept that spiral arms in other protoplanetary disks (e.g., SAO 206462: [3]) may also be caused by a class of faint and very red planets that are more readily detectable at mid-infrared wavelengths by systems like
LMIRCam/ALES. Relatively few surveys have been performed at mid-IR wavelengths compared to those in the optical to near-IR, which would be blind to planets as red as MWC 758c. Now, the James Webb Space Telescope (JWST) is also capable of imaging fainter and even more attenuated planets around a greater number of stars. MWC 758c is resolvable by JWST for \(\lambda\lesssim 10\) um. Such data could place better constraints on \(T_{\rm eff}\) and \(A_{V}\) via the strength of molecular absorption features, and can probe accretion-tracing Hydrogen emission at \(\lambda=4.05\) um (Brackett-\(\alpha\)). With its stable observing environment, JWST may also detect brightness variations in MWC 758c caused by variable attenuation from dust orbiting (or accreting onto) the planet on dynamical timescales of a day or less, which could help to better constrain the reddening law specific to the dust around MWC 758c.
Finally, for low levels of attenuation (\(A_{V}\lesssim 10\)) MWC 758c would be the coldest currently known directly imaged planet (the closest planet in temperature would be 51 Eri b with \(T_{\rm eff}=600-750\) K: [56]). At this colder end of the plausible range of temperatures for MWC 758c, methane is the dominant carrier of atmospheric carbon (e.g., [57, 29]), and at the coldest temperatures considered (\(T_{\rm eff}\sim 400\) K) water clouds are possible [58, 57]. A planet with \(T_{\rm eff}\sim 400\) K would occupy an intermediate temperature range between Jupiter and the predominantly warmer and CO-dominated directly imaged exoplanets. If at the colder end of its range of \(T_{\rm eff}\), MWC 758c could provide one of the first opportunities to study the carbon chemistry of these colder exoplanet atmospheres, which so far have only been studied via isolated brown dwarfs (e.g., [57, 58]).
## Conclusion
We have presented images and spectroscopy of a probable young giant planet driving the spiral arms in the circumstellar disk of MWC 758. The spectrum of MWC 758c is consistent with that of a very red protoplanet-either due to a low effective temperature (\(T_{\rm eff}\leq 600\) K, which given the system's young age of 3.5\(\pm\)2 Myr would imply a cold-start origin: e.g., [35]) or significant attenuation by dust (\(A_{V}\geq 8\)). Furthermore, the spectrum of MWC 758c is distinct from that of the rest of the disk and is inconsistent with scattered starlight. The existence of MWC 758c has two important implications: 1) faint and red giant planets are capable of driving large-scale spiral structures in protoplanetary disks, and 2) protoplanets in a similar phase of evolution to MWC 758c (in particular, with similarly red spectra) would likely have been missed by past surveys-including those performed in the accretion tracing \(H\alpha\) filter [59]. Mid-IR observations, like those presented here, and those that are now possible with JWST, are able to reveal these young (and likely very red) protoplanets.
Correspondence and requests for materials should be addressed to K. Wagner ([email protected]).
## Methods
**LBTI/ALES Observations and Data Reduction**
We processed the data for each night in a nearly identical manner following the reduction strategy for LMIRCam data in [11]. We briefly recount this process here, focusing mostly on the differences in the approach for Integral Field Spectroscopy (IFS) data. Note that the early calibration steps are very similar, since ALES utilizes LMIRCam's full optomechanical chain and detector [24, 25] with the addition of a lenslet array in the optical path (for more details, see [26] and [27]). The data reduction procedure is described below. Parameters are given for the UT 2019-11-14 epoch, with those for the UT 2019-01-05 epoch in parentheses where they differ.
1. We removed the thermal and instrumental background by subtracting the average of the neighboring nod positions from each frame. Each observation used 300 frames per nod.
2. We removed reset noise by subtracting the first read of each ramp from the final read (correlated double sampling). The total exposure time of each single LMIRCam exposure was 1.967 sec (0.984 sec). We obtained a
total of 3230 (2196) frames, including sky exposures for background subtraction that accounted for 50% of the total observing time. This amounts to 0.88 (0.30) hr of on-source exposure time, excluding overheads.
3. We subtracted the bias of each vertical channel by measuring and subtracting the average value of the pixels in the overscan region.
4. We extracted 3-dimensional (x-y-\(\lambda\)) data cubes from each individual frame (following [60, 61, 62]).
5. We identified wavelengths via sky frames taken through four narrowband (\(R\)\(\sim\)100; 2.897\(\upmu\)m, 3.36\(\upmu\)m, 3.539\(\upmu\)m, and 3.874\(\upmu\)m) filters (see Section 4 of [27]).
6. We selected bad pixels as those that are 2-\(\sigma\) outliers and/or those with a value less than -100 in a 5 pixel \(\times\) 5 pixel square (ignoring the target pixel) in a median image of the first 50 frames at each wavelength. We tested thresholds of 3-\(\sigma\) and 4-\(\sigma\) and found consistent overall results. These resulted in \(\sim\)2%, 1%, and 0.5% of pixels being flagged as bad, respectively.
7. We replaced the bad pixels via interpolation of the surrounding pixels (up to a maximum target distance of 3 pixels from the target pixel).
8. We centered the frames by first fitting the maximum of the peak, and subsequently by computing the position of maximum correlation of the frames with the first in the sequence.
9. We rejected bad frames as those with a maximum correlation of less than 0.99 (0.96) with respect to the median image (averaged over wavelength). This resulted in 0.3% (18%) rejection of frames. The higher rejection fraction on the 2019-01-05 epoch reflects the poorer seeing conditions.
10. When relevant, we injected synthetic point sources at this stage. For a PSF template, we used the median image of the star at each wavelength, which remained unsaturated and in the detector's linear regime.
11. We generated derotated, derotated+high pass filtered, high pass filtered + classical angular differential imaging (ADI: [63]), and high pass filtered + ADI-KLIP processed images (KLIP stands for Karhunen-Loeve Imaging Processing: [64]). Throughout the observation, the parallactic angle evolved by 127\({}^{\circ}\) (119\({}^{\circ}\)). For the 2019-01-05 epoch, at this stage we first destriped the images by subtracting the mode from each column and row of pixels. We used a high pass filter width of 11 (9) pixels and tested a wide range of KLIP hyper parameters (namely, the distribution of subregions and numbers of KLIP components, or KL modes). Before KLIP processing, we binned the data by averaging over each set of 15 (10) frames in the sequence. We generated two KLIP images: one with a less aggressive set of parameters that leaves the disk structures largely intact, but is less sensitive to point sources; and one with a more aggressive set of parameters with greater sensitivity to point sources. For the less aggressive parameters, we used full annuli between 5-27 pixels (0.175-0.945 arcsec at 35 mas/pixel) and five KLIP components, with additional parameters identical to those as follows. For the more aggressive set of parameters, we split the annulus into 6 (8) annular segments of equivalent width, and used 15 KLIP components. Injected point-sources with the brightness of MWC 758c are not detected in the less aggressive reduction, but are in the more aggressive one (see Fig. S1). For constructing the basis of KLIP eigenimages we rejected frames taken with a parallactic angle within 0.1 degree (0.85 degree) of the target image in order to reduce self-subtraction of point sources.
12. To further reduce wavelength-dependent speckles in the final images, we generated spectral differential imaging (SDI)-KLIP processed images from the ADI-KLIP processed cubes (i.e., ASDI-KLIP processed images in the end). For SDI-KLIP parameters, we used the same regions as above, rejecting frames
whose wavelength ratio (i.e., magnification ratio) resulted in less than 1.5\(\times\)FWHM of separation for point sources at the center of the radial processing range.
13. We high-pass filtered the images a second time, and then combined the images at each wavelength with a variance-weighted combination [65]; a minimal sketch of this final step is given below.
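The sketch assumes per-pixel variance maps are carried through the reduction (array and function names are ours):

```python
import numpy as np

def weighted_combine(images: np.ndarray, variances: np.ndarray) -> np.ndarray:
    """Combine a stack of images (n, ny, nx) weighting each pixel by
    its inverse variance, as in the final ALES combination step."""
    weights = 1.0 / variances
    return np.sum(weights * images, axis=0) / np.sum(weights, axis=0)
```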
#### VLT/SPHERE Observations and Data Reduction
Data were taken on two separate nights and in two separate modes: with the integral field spectrograph (IFS) arm operating between \(Y\)- to \(H\)-band (\(\lambda\sim\) 0.95-1.65 \(\upmu\)m) and the dual-band imager (IRDIS) operating in the _K12_-bands (\(\lambda\sim\) 2.11 and 2.25 \(\upmu\)m) on UT 2016-01-01 under program-ID 096.C-0241 (PI: Beuzit), and with the IFS operating between \(Y\)- to \(J\)-band (\(\lambda\sim\) 0.95-1.35 \(\upmu\)m) and IRDIS operating in the _H23_-bands (\(\lambda\sim\) 1.59 \(\upmu\)m and 1.67 \(\upmu\)m) on UT 2018-12-17 under program-ID 1100.C-0481 (PI: Beuzit). The data quality on the second night was overall substantially higher. For data processing details, we followed the methodology of [66], which is very similar to that described above for LBTI/ALES. We refer to this prior work and the above description for further details. The images are shown in Fig. S2 and Fig. S3. At the separation of MWC 758c, the \(YJ\) image provides the deepest detection limits of \(\sim\)3 \(\times\) 10\({}^{-6}\) contrast (with SNR\(\sim\)3 and for a flat spectrum over the complete bandpass; see panel d of Fig. S3). Detection limits in narrower synthetic photometric bands are somewhat higher (e.g., \(\sim\)6 \(\times\) 10\({}^{-6}\) contrast in \(J\)-band with SNR\(\sim\)3). The IRDIS-H23 dataset (taken simultaneously) provides a similar limit of \(\sim\)5 \(\times\) 10\({}^{-6}\) contrast with SNR\(\sim\)3. The ghost in the YJ images is due to persistence in the detector following flux calibration, and is identifiable via its rapidly diminishing brightness with frame progression and regular appearance in other datasets.
#### IRTF/SpeX Observations and Data Reduction
We observed MWC 758 with IRTF/SpeX [28] on UT 2021-02-03 using the cross-dispersed (XD) echelon gratings in both short (SXD) and long (LXD) wavelength modes, covering 0.8-2.4 \(\upmu\)m and 2.3-5.4 \(\upmu\)m, respectively, and reduced the data using Spextool [67]. A 0.8 arcsec slit was used during the XD observations, which amounted to a small amount of light loss given the average seeing of 0.5 arcsec. The LXD and SXD spectra were normalized in the region of overlapping wavelength coverage (2.3-2.4 \(\upmu\)m). Observations of an A0V spectral standard (HD 34203) were used for telluric correction and flux calibration, and separate observations of MWC 758 through a low-resolution prism with a 3 arcsec slit were used to measure a \(\sim\)14% absolute correction to the XD measurements, accounting for the light lost at the smaller (0.8 arcsec) slit. The spectra are shown in the next subsection at the stage at which they are used to convert the LBTI/ALES contrasts to physical flux units (step 6).
#### LBTI/ALES Spectroscopy
We extracted and analyzed the spectra from each night following a uniform approach:
1. We began by extracting the spectrum of MWC 758c, of ten different locations along the spine of the spiral arms (beginning at their furthest visible extent), and of the star, all within apertures of 3 pixels in diameter. We checked that using various aperture diameters between 1-4 pixels provides consistent results. These are presented in Fig. S4 and Fig. S5.
2. We compared the spectra of MWC 758c and the disk for each night independently. The averaged disk spectra are presented in Fig. S5.
3. We cross-correlated each pixel's spectrum against a \(T_{\rm eff}\) = 500 K, \(A_{V}\) = 40 BT-Settl spectrum [34] for the higher-quality 2019-11-14 dataset to create a correlation map (Fig. S6); a sketch of this per-spaxel computation follows the list. The measured spectra were smoothed by a running median of 5 spectral channels (corresponding to the spectral resolution, \(R\sim 40\)). The image was then spatially smoothed by a 2 pixel boxcar filter, which is approximately the FWHM. Prior to smoothing, MWC 758c has a maximum cross correlation of 0.95 with respect to the template spectrum. The rest of the image on average, including the spiral disk (shown in contours in Fig. S6 along with MWC 758c), has a maximum cross correlation of 0.55 and a standard deviation of 0.15; i.e., MWC 758c's red spectrum is a \(\sim\)3 standard deviation outlier relative to the spectra contained within the rest of the image, and also compared to just those pixels containing scattered light from the circumstellar disk. We note also that we did not use the best-fit spectrum, but simply a relatively cold/red atmosphere as a comparison. We verified that any \(T_{\rm eff}\)=400-1000 K atmosphere provides similar results. The rest of the spiral arms, if fit to thermal emission spectra, return much higher values (up to the maximum of the tested range of 2000 K). This is unsurprising since those spectra are the result of scattered starlight.
4. We performed forward modeling via synthetic point source injections (e.g., [68]) to generate a point source flux correction model and to estimate uncertainties via the following procedure. We scaled the band-averaged contrast spectra to match the \(L^{\prime}\) contrast measured with LMIRCam [11] and injected point sources with this spectrum into the data at three separate locations (we also checked that using a flat spectrum does not significantly change the results). We similarly extracted their spectra and computed a linear correction model to the ratio of measured to injected fluxes (ignoring outliers as those with a correction factor greater than 10 or less than 0). The point source flux correction models are presented in Fig. S7. We also computed uncertainties as the standard deviation of the three measured values at each wavelength for the injected sources.
5. We created corrected contrast spectra of MWC 758c from the above correction models (Fig. S8). This improved the agreement between the two epochs and the previous \(L^{\prime}\) measurement. The simple average of the two epochs and their respective signal to noise ratios (SNR) are shown in Fig. S9. The SNR of the second epoch is higher than the first and of their combination (with errors propagated through combination). However, to avoid biases introduced by a single observation (particularly due to correlated speckles: see [69]), we proceed with the combined spectrum for the proceeding analysis. We verified that using either single night yields consistent results with those obtained using their average.
6. We converted the contrast spectra to units of W/m\({}^{2}\) by multiplying by the empirical spectrum of the star measured with IRTF/SpeX on UT 2021-02-03 (shown in Fig. S10). Note that this spectrum was taken at a different time than the LBTI/ALES data. Intrinsic variability of the central star and inner disk is thus a potential source of uncertainty (likely less than 20% based on the source's historical variability; [14]).
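A minimal sketch of the per-spaxel cross-correlation referenced in step 3, assuming a cube of shape (wavelength, y, x) and a template already resampled onto the ALES wavelength grid (all names are ours):

```python
import numpy as np

def correlation_map(cube: np.ndarray, template: np.ndarray) -> np.ndarray:
    """Pearson correlation of each spaxel's spectrum with a template."""
    t = (template - template.mean()) / template.std()
    c = cube - cube.mean(axis=0)
    sigma = cube.std(axis=0)
    sigma[sigma == 0] = np.inf            # off-source spaxels -> r = 0
    return np.tensordot(t, c / sigma, axes=(0, 0)) / len(t)
```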
#### Comparison to Atmospheric Spectral Models
We compared the spectrum of MWC 758c to four different model grids: Cholla [30], BT-Settl [34], COND [42], and DUSTY [70]. COND and DUSTY were among the first grids available for clear and cloudy atmospheres, respectively. These are outdated and included primarily for comparison to other previous works that used these grids. Additionally, the DUSTY grid is likely a poor match at cold temperatures since the clouds in the models don't settle out. For cloudy (cloudless) planets, the BT-Settl (Cholla) models are the most reliable.
For a given \(A_{V}\), and with a reddening relationship of \(A_{V}=3.2E(B-V)\) from [36], our fitting script first scales the model spectra to the ALES data (normalized to the mean flux of the bandpass) and then calculates a radius based on that normalization and the Gaia DR3 distance to the system. When the maximum radius drops below the set threshold (1 or 2 \(R_{Jup}\) for simplicity in Fig. 4, but \(\sim\)1.1 and 1.7 \(R_{Jup}\) are closer to the range of models) for a given temperature and range of \(\log(g)\), the script determines that temperature to be the maximum plausible temperature for the prescribed \(A_{V}\). The age is not included in this analysis since the goal is to find the plausible range of temperatures that can then be used to compare to the \(T_{\rm{eff}}\), mass, radius, and age framework in the right panel of Fig. 4. To assess the model-data fit, we computed the chi-squared (\(\chi^{2}\)) metric for the 54 spectrally-correlated ALES datapoints, accounting for the spectral covariance [69]. The \(\chi^{2}\) value from the ALES data was added to the two independent \(\chi^{2}\) values from the \(L^{\prime}\) and \(M^{\prime}\) measurements and converted to reduced chi-squared by dividing by \((n-3)=53\), accounting for the three free parameters of \(T_{\rm{eff}}\), \(A_{V}\), and \(\log(g)\). The results of this analysis are shown in Fig. S11. Note that the \(M\)-band photometry is driving the gravity fit for the BT-Settl, AMES-Cond, and AMES-Dusty models; however, for brown dwarfs in this temperature range, non-equilibrium chemistry plays a larger role than surface gravity in \(M\)-band photometry [71]; therefore, the (already weak) constraints on \(\log(g)\) become even weaker when accounting for a range of vertical mixing. The Cholla models (which do include vertical mixing) shown are for \(\log(K_{zz})=4\) (the middle of the available range). The other values, \(\log(K_{zz})=2\) and \(\log(K_{zz})=7\), provide similar results, with slightly higher minimum \(\chi^{2}_{\nu}\) values for \(\log(K_{zz})=2\) and slightly lower for \(\log(K_{zz})=7\) (i.e., a greater degree of vertical mixing is favored for the cloud-free Cholla models). Overall, the BT-Settl models produce the best match.
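The fit metric above can be summarized in a short sketch; the covariance-weighted \(\chi^{2}\) form and the variable names are assumptions for illustration, not the exact code used.

```python
import numpy as np

# Sketch of the fit metric (illustrative names): covariance-weighted chi^2
# over the 54 ALES points plus two independent photometric points, reduced
# by (n - 3) for the free parameters (T_eff, A_V, log g).
def reduced_chi2(f_data, f_model, C, phot_data, phot_model, phot_err):
    r = f_data - f_model                         # spectral residuals
    chi2_spec = r @ np.linalg.solve(C, r)        # r^T C^{-1} r, cf. [69]
    chi2_phot = np.sum(((phot_data - phot_model) / phot_err) ** 2)
    n = f_data.size + phot_data.size             # 54 + 2 = 56 datapoints
    return (chi2_spec + chi2_phot) / (n - 3)     # 53 degrees of freedom
```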
**Possibility that MWC 758c is a background object**
We considered the hypothesis that MWC 758c could be a background object that is projected behind MWC 758 but otherwise unassociated with the system. We note _a priori_ that this is unlikely due to the dynamical requirement of a companion responsible for the spiral arms in MWC 758, which are easily explained if MWC 758c is indeed a gravitationally bound component of the system. Beyond this initial line of reasoning, we consider 1) common proper motion, which is not conclusive due to the level of astrometric uncertainty, and 2) the spatial density of objects with a similar \(L^{\prime}\) brightness, which suggests a \(<\)1% chance of such an occurrence.
MWC 758 has a proper motion of -26 mas/yr in declination and 4 mas/yr in right ascension (from [23], rounded to the nearest mas, and with negligible uncertainties). Over the three year baseline of our observations (taking the first epoch of UT 2016-10-15 from [11]), we would expect to see a relative motion of MWC 758c with respect to MWC 758 of \(\sim\)80 mas, assuming a zero proper motion background object. The relative position of MWC 758c in the former epoch was 0.617\(\pm\)0.024 arcsec in separation and at a position angle of 224.9\(\pm\)2.2\({}^{\circ}\) E of N. In our latest ALES epoch (UT 2019-11-14), we measure a separation of 0.603\(\pm\)0.0345 arcsec and 228.5\(\pm\)3.2\({}^{\circ}\) E of N. We conservatively consider the uncertainty of the ALES astrometry as \(\pm\)1 pixel, or 34.5 mas, since a typical estimate for the uncertainty of FWHM/SNR yields \(\sim\)30 mas. We measure a relative motion of 41\(\pm\)59 mas between 2016-2019. This is consistent with zero observed motion and closer to the expected motion of a giant planet on a circular orbit at 100 au (\(\sim\)20 mas), but is not precise enough to confidently rule out a stationary background track. However, the images here have not been calibrated against an astrometric field, and given that the lenslet array is a non-stationary optic, it would not be surprising for the systematic astrometric uncertainty to be larger than the uncertainty quoted here. Therefore, we cannot confidently claim that we have measured an actual motion of MWC 758c.
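The arithmetic of this background-track check can be reproduced in a few lines; the numbers are those quoted above, and the 3.08 yr baseline is our approximation of the two epoch dates.

```python
import numpy as np

# Background-track check: a stationary background source would appear to
# move by ~|mu| * baseline relative to the star over the observing baseline.
pm_ra, pm_dec = 4.0, -26.0                      # mas/yr proper motion of MWC 758
baseline_yr = 3.08                              # UT 2016-10-15 to UT 2019-11-14
expected = np.hypot(pm_ra, pm_dec) * baseline_yr    # ~81 mas (cf. ~80 mas quoted)

def to_xy(sep_arcsec, pa_deg):                  # (dRA, dDec) offsets in arcsec
    pa = np.radians(pa_deg)
    return sep_arcsec * np.sin(pa), sep_arcsec * np.cos(pa)

x1, y1 = to_xy(0.617, 224.9)                    # 2016 epoch
x2, y2 = to_xy(0.603, 228.5)                    # 2019 epoch
measured = np.hypot(x2 - x1, y2 - y1) * 1e3     # ~41 mas of apparent motion
print(expected, measured)
```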
Next, we consider the possibility that MWC 758c is a background object based on the spatial density of objects with similar \(L^{\prime}\) brightness. In a recent survey with LBTI/LMIRCam (i.e., using the same system with the same background-limited sensitivity), just four background objects were found among 98 targets [72], each with a circular field of view of 3" in radius, and including the Taurus star forming region (of which MWC 758 is a likely member). This translates to a spatial density of 0.0014 sources/arcsec\({}^{2}\). Due to the effects of tidal truncation of a very massive companion on the disk, we would have only identified a source as plausibly responsible for the spiral arms if it were within a few times the separation of the spirals, or \(\sim\)1.5". Thus, over a 1.5" circular field of view, the false alarm probability of MWC 758c is \(\sim\)1%.
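The false-alarm estimate itself is a one-line calculation, sketched below with the survey numbers quoted above.

```python
import numpy as np

# False-alarm estimate: source density from the LMIRCam survey
# (4 sources over 98 fields of 3" radius), applied to a 1.5"-radius field.
density = 4 / (98 * np.pi * 3.0**2)    # ~0.0014 sources per arcsec^2
fap = density * np.pi * 1.5**2         # ~0.01, i.e., ~1%
print(density, fap)
```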
### Data Availability
All LBTI data are in the process of being integrated into the LBTO archive ([http://archive.lbto.org](http://archive.lbto.org)). The data for this program are not yet available online, but in the future will be accessible by searching for PI: Wagner or Target: MWC 758. Until then, the raw and processed data will be available upon request of the authors.
### Code Availability
All software, codes, and data processing scripts that were developed for this study are available at [https://github.com/astrowagner/MWC758_ALES](https://github.com/astrowagner/MWC758_ALES).
## Acknowledgements
We thank Theodora Karalidi, Judit Szulagyi, Thayne Currie, Rachel Fernandes, Bin Ren, and Chengyan Xie for conversations that were helpful to our analysis. Support for this work was provided by NASA through the NASA Hubble Fellowship grant HST-HF2-51472.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. This paper is based on work funded by NSF Grant no. 1608834. The results reported herein benefited from collaborations and/or information exchange within NASA's Nexus for Exoplanet System Science (NExSS) research coordination network sponsored by NASA's Science Mission Directorate. MLS, JW, and KL were supported in part by NASA XRP program via grants 80NSSC20K0252 and NNX17AF88G. FM acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 101053020, project Dust2Planets). We acknowledge the expertise of the LBTO staff, including Jennifer Power and Jared Carlson, for their support of these observations, and are grateful to LBTO director Christian Veillet for enabling these observations through director's discretionary time.
## Author Contributions
Observations (KW, JS, SE, ES, MS, AB, KA, AB, RR); Data Analysis (KW, JS, AS); Interpretation of Results and Preparation of Manuscript (KW, JS, AS, SE, RD, DA, ES, JL, MS, KK, TB, MM, BM, AB, KA, AB, TU, CW, PH, ZB, KL, FM, EP, RR, MS, JW).
|
2304.13019 | Certifying Ensembles: A General Certification Theory with
S-Lipschitzness | Improving and guaranteeing the robustness of deep learning models has been a
topic of intense research. Ensembling, which combines several classifiers to
provide a better model, has shown to be beneficial for generalisation,
uncertainty estimation, calibration, and mitigating the effects of concept
drift. However, the impact of ensembling on certified robustness is less well
understood. In this work, we generalise Lipschitz continuity by introducing
S-Lipschitz classifiers, which we use to analyse the theoretical robustness of
ensembles. Our results are precise conditions when ensembles of robust
classifiers are more robust than any constituent classifier, as well as
conditions when they are less robust. | Aleksandar Petrov, Francisco Eiras, Amartya Sanyal, Philip H. S. Torr, Adel Bibi | 2023-04-25T17:50:45Z | http://arxiv.org/abs/2304.13019v1 | # Certifying Ensembles: A General Certification Theory with \(\mathcal{S}\)-Lipschitzness
###### Abstract
Improving and guaranteeing the robustness of deep learning models has been a topic of intense research. Ensembling, which combines several classifiers to provide a better model, has shown to be beneficial for generalisation, uncertainty estimation, calibration, and mitigating the effects of concept drift. However, the impact of ensembling on certified robustness is less well understood. In this work, we generalise Lipschitz continuity by introducing \(\mathcal{S}\)-Lipschitz classifiers, which we use to analyse the theoretical robustness of ensembles. Our results are precise conditions when ensembles of robust classifiers are more robust than any constituent classifier, as well as conditions when they are less robust.
## 1 Introduction
Deep learning classifiers are almost as celebrated for their near-perfect accuracy, as they are notorious for their lack of robustness (Biggio et al., 2013; Szegedy et al., 2014; Goodfellow et al., 2015). Within the past decade, as empirically robust classifiers have begun to emerge (Madry et al., 2017; Wang et al., 2018), so have attempts to certify their robustness. The goal of robustness certification is to obtain a set of additive perturbations around an input under which the prediction remains unchanged. Most approaches fall under one of three families of methods: exact certification (Katz et al., 2017; Ehlers, 2017; Huang et al., 2017), over-approximation (Wong and Kolter, 2018; Salman et al., 2019), or probabilistic certification (Weng et al., 2019), notably _randomized smoothing_ methods (Lecuyer et al., 2019; Cohen et al., 2019).
Ensembling consists in combining several classifiers to obtain a better-performing one (Hansen and Salamon, 1990; Sagi and Rokach, 2018). While it was originally proposed to improve the accuracy of weak classifiers (Rokach, 2016; Allen-Zhu and Li, 2023), it is also beneficial for improving uncertainty estimation and calibration (Lakshminarayanan et al., 2017; Zhang et al., 2020), as well as mitigating the effects of concept drift (Sagi and Rokach, 2018). These benefits of ensembling have inspired research into studying its effect on robustness. For example, recent empirical works have shown that encouraging diversity in the non-maximal predictions (Pang et al., 2019), or in the gradient directions (Kariyappa and Qureshi, 2019) of individual classifiers results in ensembles with improved robustness.
However, the degree of improved performance depends on the ensembled classifiers. When the constituent classifiers are all highly accurate, there is little room for improvement after ensembling; the gains are most pronounced with weak classifiers. Possibly, a similar limitation holds for robustness: perhaps ensembles of robust classifiers enjoy lower robustness improvements than ensembles of non-robust classifiers. Pang et al. (2019), Horvath et al. (2021), Yang et al. (2022) and Puigcerver et al. (2022) propose theoretical justifications for why ensembles boost robustness but stop short of quantifying the improvement, especially when the individual classifiers are already robust. This raises the following questions on the robustness limitations of ensembles:
1. _For a collection of robust classifiers, can their ensemble be more robust than its constituents? If so, what is the maximum achievable improvement, and under which conditions?_
2. Conversely: _Is it possible for an ensemble of robust classifiers to be less robust than its constituents? If so, what is the worst possible drop in robustness, and under which conditions?_
We tackle these questions by introducing \(\mathcal{S}\)-Lipschitzness in Section 3, a generalization of Lipschitz continuity that enables tight analysis of the theoretical robustness of ensembles. \(\mathcal{S}\)-Lipschitzness gives rise to certificates which need not be symmetric and are guaranteed to certify regions at least as large as the classical Lipschitz ones.
Building on the \(\mathcal{S}\)-Lipschitzness framework, in Section 4, we offer the following answers to the above questions:
1. It is possible for ensembles to certify every perturbation that any of the individual classifiers can certify, and even a _superset of their union_. However, we note that the gain is most pronounced when the individual classifiers are not robust; as the robustness of the individual classifiers improves, the robustness gain from ensembling becomes more limited.
2. It is possible for ensembles to fail to certify perturbations that every single one of the individual classifiers certifies, _e.g._ the ensemble certificate can be a proper _subset of the intersection_ of the constituent certificates. Interestingly, in the worst case, ensembles of robust classifiers _do not certify any perturbation at all_. However, we show that as long as all classifiers have the same prediction, the ensemble certificate will never be a _subset of the intersection_.
## 2 Related work
**Certified Adversarial Robustness**. Deep neural networks are vulnerable to adversarial attacks (Szegedy et al., 2014; Goodfellow et al., 2015). The emergence of empirical defences to these mechanisms (Papernot et al., 2017; Madry et al., 2017; de Jorge et al., 2022), has motivated the need for methods that achieve _certified_ robustness. Those methods can be classified into _exact_, _i.e._, complete (Katz et al., 2017; Ehlers, 2017; Huang et al., 2017; Lomuscio and Maganti, 2017; Bunel et al., 2018), or _conservative_, _i.e._, sound but incomplete (Gowal et al., 2018; Mirman et al., 2018; Wang et al., 2018; Ayers et al., 2020). Probabilistic methods, mostly based on _randomized smoothing_(Lecuyer et al., 2019; Cohen et al., 2019), have been shown to scale to large networks but have high inference time complexity.
**Robustness of Ensembles**. While ensembles have long been used to boost the accuracy of classifiers, interest in their robustness properties is rather recent. Pang et al. (2019) propose a regulariser that diversifies the non-maximal predictions of individual classifiers which leads to empirically better robustness. Kariyappa and Qureshi (2019) recommend a different type of regularisation: _Diversity Training_ which encourages misaligned gradients. Moreover, Horvath et al. (2021) and Yang et al. (2022) observe that applying randomized smoothing after ensembling results in more certifiably robust models than applying it to the individual classifiers. Xu et al. (2021) proposed using a mixture of clean and robust experts, while Puigcerver et al. (2022) studied the Lipschitz continuity of ensembles.
## 3 \(\mathcal{S}\)-Certificates with \(\mathcal{S}\)-Lipschitzness
We start by introducing the definition of point-wise adversarial robustness of a classifier1.
Footnote 1: A list of symbols is provided in Appendix A.
**Definition 1** (Robustness).: _Given a classifier \(f\): \(\mathbb{R}^{d}\rightarrow\mathbb{R}^{K}\), an \(x\in\mathbb{R}^{d}\) and a set \(Q\subset\mathbb{R}^{d}\), \(f\) is said to be robust at \(x\) if \(\arg\max_{i\in 1,\ldots,K}f_{i}(x)=\arg\max_{i\in 1,\ldots,K}f_{i}(x+\delta), \ \forall\delta\in Q\), where \(f_{i}\) is the prediction for the \(i\)-th class. We will call Q a certificate at \(x\)._
As \(Q\), also known as a _perturbation set_, depends on \(x\), this notion of robustness is also called _point-wise robustness_. We start by reviewing the classical notion of Lipschitzness and its relation to robustness before introducing \(\mathcal{S}\)-Lipschitzness: our generalization that permits more general certificates.
### Lipschitz Certificates
The Lipschitz continuity2 of a classifier is linked to its robustness. The predictions of Lipschitz classifiers with smaller Lipschitz constant change less for the same input perturbations compared to Lipschitz classifiers with a larger constant. Hence, Lipschitz continuity is commonly used for robustness analysis of neural networks (Hein and Andriushchenko, 2017; Bartlett et al., 2017; Cisse et al., 2017; Weng et al., 2018; Huang et al., 2021; Zhang et al., 2021; Eiras et al., 2022; Alfarra et al., 2022b;a).
Footnote 2: Some works refer to _Lipschitz continuity_ as _smoothness_.
The Lipschitz constant of a function is closely related to its gradients. The larger the norm of the gradients, the more sensitive the function is to perturbations and the larger its Lipschitz constant becomes. Furthermore, given a Lipschitz classifier with a Lipschitz constant \(L\), the _prediction gaps_, _i.e._, the differences between the confidence of the top prediction and the other classes, fully determine the certificate \(Q\). As such, we have the following proposition.
**Proposition 1** (Certification of Lipschitz classifiers).: _Take a differentiable3 classifier \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}^{K}\) such that \(\sup_{x}\|\nabla f_{i}(x)\|_{\star}\leq L_{i}\), \(\forall i\). Then \(f_{i}\) is \(L_{i}\)-Lipschitz with respect to \(\|\cdot\|\). Moreover, \(f\) has a certificate_
Footnote 3: For simplicity, we work with differentiable classifiers, even though our results are also valid for continuous classifiers that are not differentiable at finite number of points.
\[Q =\left\{\delta\in\mathbb{R}^{d}:\|\delta\|\leq\min_{i\neq c_{A}}\frac{f_{ c_{A}}(x)-f_{i}(x)}{L_{i}+L_{c_{A}}}\!=\!\min_{i\neq c_{A}}\frac{r_{i}}{L_{i} +L_{c_{A}}}\right\}. \tag{1}\]
_Here, \(\|\cdot\|_{\star}\) is the dual norm to \(\|\cdot\|\) and \(c_{A}\) is \(\arg\max_{i}f_{i}(x)\). If all classes have the same Lipschitz constant \(L\), i.e., \(L_{i}\leq L,\forall i\), the certificate simplifies to_
\[Q=\left\{\delta\in\mathbb{R}^{d}:\|\delta\|\leq\frac{f_{c_{A}}(x)-f_{c_{B}}(x )}{2L}=\frac{r_{c_{B}}}{2L}\right\}, \tag{2}\]
_where \(c_{B}=\arg\max_{i\neq c_{A}}f_{i}(x)\). (Proof on p. 20)_
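For concreteness, a minimal numerical sketch of Proposition 1 follows; the function name and the example scores are ours, purely for illustration.

```python
import numpy as np

# Minimal sketch of Proposition 1: certified radius at x from the class
# scores f(x) and per-class Lipschitz constants L_i, as in Eq. (1).
def lipschitz_radius(f_x, L):
    c_A = int(np.argmax(f_x))
    gaps = f_x[c_A] - f_x                  # prediction gaps r_i
    radii = gaps / (L + L[c_A])            # r_i / (L_i + L_{c_A})
    radii[c_A] = np.inf                    # skip the predicted class itself
    return float(np.min(radii))

# Uniform case, Eq. (2): all L_i = L gives r_{c_B} / (2L).
f_x = np.array([0.7, 0.2, 0.1])
print(lipschitz_radius(f_x, np.full(3, 1.5)))   # (0.7 - 0.2) / 3 = 1/6
```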
We refer to the formulation in Equation (1) as _class-wise Lipschitz continuity_ (CW) since it accounts for the classes
potentially having different Lipschitz constants. Often, however, in prior art, all classes are considered to have the same Lipschitz constant \(L\) set such that \(L\geq\max_{i}L_{i}\). We refer to this setting captured by Equation (2) as _uniform Lipschitz continuity_ (U). Moreover, the Lipschitz certificates apply to any choice of norm; the main text considers only \(\ell_{p}\) norms but we give further examples in Appendix C.1.
**Example 1** (\(\ell_{p}\) certificates).: We can construct \(\ell_{p}\) Lipschitz certificates by bounding the supremum of the dual \(\ell_{q}\) norm of the classifier gradients, where \(\nicefrac{{1}}{{p}}+\nicefrac{{1}}{{q}}=1\). This follows directly from Hölder's inequality.
Figure 1a demonstrates the intimate relationship between the norm of the gradients of a classifier, _i.e._, its Lipschitzness, and the resulting certificates from Proposition 1. Take a classifier \(f:\mathbb{R}^{d}\to\mathbb{R}^{K}\) and the set of all its gradients \(\mathcal{S}=\{\nabla f_{i}(x):x\in\mathbb{R}^{d},i=1,\ldots,K\}\) shown in Figure 1a. For simplicity, assume also that \(r_{c_{B}}=1\). As \(\sup_{s\in\mathcal{S}}\|s\|_{1}\leq 1.5\), the \(f_{i}\) are \(1.5\)-Lipschitz with respect to the \(\ell_{\infty}\) norm. Therefore, from Equation (2) the certificate \(Q\) is the \(\ell_{\infty}\) ball of radius \(\nicefrac{{1}}{{3}}\). Taking the supremum of the \(\ell_{1}\) norm, however, overapproximates the true set of gradients: the entire \(\ell_{1}\) ball of radius 1.5 has the same supremum \(\ell_{1}\) norm as \(\mathcal{S}\) and hence the same certificate, yet it is a superset of the gradients \(\mathcal{S}\) and must correspond to a more sensitive classifier. This is due to the overapproximating action of the supremum of the gradient norms. To rectify this, we offer a novel generalization of Lipschitzness working directly with the gradients \(\mathcal{S}\).
### \(\mathcal{S}\)-Certificates
We observed that Lipschitzness overapproximates the gradient set \(\mathcal{S}\) with a norm ball. This begs the question: _Can we enlarge the certificates by avoiding the dual norm ball overapproximation of the gradients and work directly with the exact gradient set \(\mathcal{S}\)?_
To this end, we first generalize the definition of a Lipschitz function which allows the use of the exact range space of the gradient as opposed to any overapproximation.
**Definition 2** (\(\mathcal{S}\)-Lipschitz function).: _A function \(f:\mathbb{R}^{d}\to\mathbb{R}\) is \(\mathcal{S}\)-Lipschitz for a bounded set \(\mathcal{S}\subset\mathbb{R}^{d}\) if it holds that:_
\[-\rho_{\mathcal{S}}(x-y)\leq f(y)-f(x)\leq\rho_{\mathcal{S}}(y-x),\ \forall x,y\in\mathbb{R}^{d},\]
_with \(\rho_{\mathcal{S}}(\delta)=\sup_{c\in\mathcal{S}}c^{\top}\delta\). If \(\mathcal{S}\) is convex, then \(\rho_{\mathcal{S}}\) corresponds to its support function._
Intuitively, \(\rho_{\mathcal{S}}(\delta)\) is the biggest change in direction \(\delta\) that we can incur using the gradients in \(\mathcal{S}\). Note that \(\mathcal{S}\)-Lipschitzness generalizes the previous definition of a Lipschitz function. To see this, consider the case where \(\mathcal{S}=\left\{x:\left\|x\right\|_{*}\leq L\right\}\). Following Hölder's inequality, we observe that Definition 2 reduces to the classical \(L\)-Lipschitzness definition with respect to the \(\left\|\cdot\right\|\) norm.
In contrast to the classical Lipschitzness, \(\mathcal{S}\)-Lipschitzness accounts not only for the magnitude of the gradients but also for their direction. We also can generalize the notion of dual norms to sets that are not norm balls:
**Definition 3** (Polar set).: _For a set \(\mathcal{S}\subset\mathbb{R}^{d}\), the polar set4 to \(\mathcal{S}\) of radius \(r>0\) is defined as:_
Footnote 4: We are extending the standard notion of a polar set (Rockafellar, 1970) to encompass radii different from 1.
\[\left(\mathcal{S}\right)^{r}=\left\{\delta\in\mathbb{R}^{d}\ :\ \rho_{ \mathcal{S}}(\delta)=\sup_{x\in\mathcal{S}}x^{\top}\delta\leq r\right\}.\]
Take \(f:\mathbb{R}^{d}\to\mathbb{R}\) to be \(\mathcal{S}\)-Lipschitz with \(\mathcal{S}=\left\{x\in\mathbb{R}^{d}:\left\|x\right\|_{1}\leq L\right\}\). Then, the polar set \((\mathcal{S})^{r}\) of radius \(r\) is the perturbation set that will not change \(f\) by more than \(r\). \((\mathcal{S})^{r}\) is \(\left\{\delta\in\mathbb{R}^{d}\ :\ \left\|\delta\right\|_{\infty}\leq \nicefrac{{r}}{{L}}\right\}\) which is the same result that follows from \(f\) being \(L\)-Lipschitz. We are now ready to generalize Proposition 1 with \(\mathcal{S}\)-Lipschitzness:
**Theorem 1** (\(\mathcal{S}\)-certificates).: _Let \(f:\mathbb{R}^{d}\to\mathbb{R}^{K}\) be a classifier with \(f_{i}\) being differentiable and \(\nabla f_{i}:\mathbb{R}^{d}\to\mathcal{S}_{i}\) for all \(i=1,\ldots,K\). Then, each \(f_{i}\) is \(\mathcal{S}_{i}\)-Lipschitz. Furthermore, for a fixed \(x\), \(f\) is robust at \(x\) against all \(\delta\) in_
\[Q=\bigcap_{i\neq c_{A}}\left(\mathcal{S}_{i}\oplus-\mathcal{S}_{c_{A}}\right)^ {r_{i}}. \tag{3}\]
_Here, \(c_{A}{=}\operatorname*{arg\,max}_{c}f_{c}(x)\), \(r_{i}{=}f_{c_{A}}(x){-}f_{i}(x)\), and \(\oplus\) is the Minkowski sum. If \(\mathcal{S}\supseteq\mathcal{S}_{i},\forall i\), then we have the simplified certificate_
\[Q=\left(\mathcal{S}\oplus-\mathcal{S}\right)^{r_{c_{B}}}, \tag{4}\]
_where \(c_{B}=\operatorname*{arg\,max}_{c\neq c_{A}}f_{c}(x)\)._ (Proof on p. 21)
Note the similarities between Proposition 1 and Theorem 1.
\(\mathcal{S}_{i}\) generalizes the Lipschitz constant \(L_{i}\), while the polar set generalizes the dual norm. \((\mathcal{S}_{i}\oplus-\mathcal{S}_{c_{A}})^{r_{i}}\) is the certificate that the prediction does not change from \(c_{A}\) to \(i\). Taking the intersection in Equation (3) ensures that \(c_{A}\) will not be mistaken for any other class. This corresponds to the \(\min\) in Equation (1). We also have the \(\mathbbm{CW}\) (Equation (3)) and \(\mathbbm{U}\) (Equation (4)) modes, mapping to the same modes for the Lipschitz case (Equations (1) and (2)). Furthermore, we show Theorem 1 is tight in an example in Proposition 9.
The certificate in Theorem 1 is a polar set (or intersection of polar sets), hence, it has a natural dependence on the gradient sets \(\mathcal{S}\) and the prediction gap \(r\):
**Proposition 2** (Polar set dependence on \(\mathcal{S}\) and \(r\)).: _Let \(\mathcal{S},\mathcal{S}_{1},\mathcal{S}_{2},\mathcal{S}_{3},\mathcal{S}_{4} \subset\mathbb{R}^{d}\) be bounded and \(r,r_{1},r_{2}>0\):_
1. \(\mathcal{S}_{1}\subseteq\mathcal{S}_{2}\Rightarrow(\mathcal{S}_{1}\oplus-\mathcal{S}_{1})\subseteq(\mathcal{S}_{2}\oplus-\mathcal{S}_{2})\)_;_
2. \(\mathcal{S}_{1}\subseteq\mathcal{S}_{2}\Rightarrow(\mathcal{S}_{1})^{r}\supseteq(\mathcal{S}_{2})^{r}\)_;_
3. \(r_{1}\leq r_{2}\Rightarrow(\mathcal{S})^{r_{1}}\subseteq(\mathcal{S})^{r_{2}}\)_;_
4. \(((\mathcal{S}_{1}\subseteq\mathcal{S}_{3})\wedge(\mathcal{S}_{2}\subseteq\mathcal{S}_{4}))\Rightarrow(\mathcal{S}_{3}\oplus-\mathcal{S}_{4})^{r}\subseteq(\mathcal{S}_{1}\oplus-\mathcal{S}_{2})^{r}\)_._
_where \(\oplus\) is the Minkowski sum operator._ (Proof on p. 22)
The statements \(i\) and \(ii\) imply that enlarging the set \(\mathcal{S}\) of an \(\mathcal{S}\)-Lipschitz classifier reduces the certificate \(Q\). This is because a larger set of possible derivatives means a more sensitive classifier, hence the set of perturbations that would not change the classification is more restricted. Similarly, reducing the prediction gap \(r\) means that the certificate must be smaller in order to prevent a change of prediction (statement \(iii\)). Statement \(iv\) implies that any overapproximation to both \(\mathcal{S}_{1}\) and \(\mathcal{S}_{2}\) for a fixed \(r\) results in a smaller certificate.
### \(\mathcal{S}\)-Certificates Subsume Lipschitz Certificates
We introduced Theorem 1 in order to avoid overapproximating the gradients of the classifier with a norm ball, in the hopes of obtaining larger certificates. Figure 1 compares the Lipschitz and \(\mathcal{S}\)-certificates and shows that this is indeed the case. In Section 3.1 we showed that the illustrated classifier is 1.5-Lipschitz with respect to the \(\ell_{\infty}\) norm and that its Lipschitz certificate is therefore the \(\ell_{\infty}\) ball of radius \(\nicefrac{{1}}{{3}}\). The same result can be viewed as a special case of \(\mathcal{S}\)-certification when we observe that the classifier is \(\mathcal{B}_{\star}\)-Lipschitz with \(\mathcal{B}_{\star}=\{x\in\mathbb{R}^{d}:\|x\|_{1}\leq 1.5\}\). Hence, for \(r_{c_{B}}=1\), from Equation (4) we get the same certificate \((\mathcal{B}_{\star}\oplus-\mathcal{B}_{\star})^{1}=(2\mathcal{B}_{\star})^{1}=\{\delta\in\mathbb{R}^{d}:\|\delta\|_{\infty}\leq\nicefrac{{1}}{{3}}\}\) (Figure 1a). However, if we do not overapproximate \(\mathcal{S}\) with \(\mathcal{B}_{\star}\), then Equation (4) gives us the \(\mathcal{S}\)-certificate \((\mathcal{S}\oplus-\mathcal{S})^{1}\) (Figure 1b). Clearly, the \(\mathcal{S}\)-certificate is larger than the Lipschitz one. Proposition 10 in the appendix shows that this is always the case. We now address two questions related to the properties of \(\mathcal{S}\)-certificates.
**Could it be that the \(\mathcal{S}\)-certificate in Figure 1 is larger than the Lipschitz certificate because of a suboptimal choice of norm?** No, because whenever the set of gradients is not centrally symmetric, _i.e._, \(\mathcal{S}\neq-\mathcal{S}\), then no matter what norm we choose, we have \(\mathcal{B}_{\star}\supset\mathcal{S}\) and thus an \(\mathcal{S}\)-certificate larger than the Lipschitz certificate. This is because norms are centrally symmetric by definition.
**Are \(\mathbbm{CW}\) certificates always supersets of the \(\mathbbm{U}\) certificates?** The \(\mathbbm{CW}\) and \(\mathbbm{U}\)\(\mathcal{S}\)-certificates are larger than any Lipschitz certificate (Proposition 10). As \(\mathbbm{CW}\) generalizes \(\mathbbm{U}\), its certificates are supersets of the ones of \(\mathbbm{U}\). This follows from \(\mathbbm{CW}\) reducing to \(\mathbbm{U}\) by taking \(\mathcal{S}\supseteq\cup_{i}\mathcal{S}_{i}\), _i.e._, overapproximating some of the classes with a larger \(\mathcal{S}\). This is analogous to setting \(L\geq\max_{i}L_{i}\) in the Lipschitz case. Then, from Proposition 2 \(iv\), it directly follows that \(\mathbbm{CW}\) certificates are always supersets of \(\mathbbm{U}\) certificates. Another view is that \(\mathbbm{U}\) certificates are restricted to only symmetric sets since \(\mathcal{S}\oplus-\mathcal{S}\) is symmetric (Aux. Lemma 7), while \(\mathbbm{CW}\) certificates, _i.e._, \(\bigcap_{i\neq c_{A}}(\mathcal{S}_{i}\oplus-\mathcal{S}_{c_{A}})^{r_{i}}\), can be asymmetric.
The example in Figure 2 (with detailed calculations in Appendix C.3) shows how the certified regions can vary depending on whether we use \(\mathcal{S}\)-Lipschitz or Lipschitz certificates and on the \(\mathbbm{CW}\) or \(\mathbbm{U}\) modes.
Figure 2: Lipschitz and \(\mathcal{S}\)-Lipschitz certificates at \(x=[2,0]^{\top}\) for a linear classifier that splits the domain into three equal sectors. Step-by-step explanation of the construction of the certificates is provided in Appendix C.3.
### Tightening Certificates via Class Differences
We conclude this section by showing how to further enlarge the certificates by directly targeting the \(\mathcal{S}\)-Lipschitzness of the class difference. Recall the \(\mathcal{S}\)-certificate \(Q=\bigcap_{i\neq c_{A}}(\mathcal{S}_{i}\oplus-\mathcal{S}_{c_{A}})^{r_{i}}\) for the \(\mathbbm{CW}\) mode from Theorem 1. The role of the \(\mathcal{S}_{i}\oplus-\mathcal{S}_{c_{A}}\) term is to measure the \(\mathcal{S}\)-Lipschitz continuity of \(h_{i-c_{A}}=f_{i}-f_{c_{A}}\). It is straightforward to see that \(h_{i-c_{A}}\) is indeed \((\mathcal{S}_{i}\oplus-\mathcal{S}_{c_{A}})\)-Lipschitz. However, this is not necessarily the tightest \(\mathcal{S}\) for \(h_{i-c_{A}}\). Intuitively, \(\mathcal{S}_{i}\oplus-\mathcal{S}_{c_{A}}\) takes the differences of the gradients of \(f_{i}\) and \(f_{c_{A}}\), _regardless_ of the input \(x\). However, the set of gradients of \(h_{i-c_{A}}\) is the difference of gradients of \(f_{i}\) and \(f_{c_{A}}\) _at the same \(x\)_. If all classes are similarly sensitive at a given \(x\) but their sensitivity varies _jointly_ across the domain, the difference between \(\mathcal{S}_{i}\oplus-\mathcal{S}_{c_{A}}\) and the gradients of \(h_{i-c_{A}}\) can be significant. Using this, we can tighten Theorem 1 with class-difference (\(\mathbbm{CD}\)) certificates.
**Theorem 2**.: _Let \(f:\mathbb{R}^{d}\to\mathbb{R}^{K}\) be a classifier such that \(h_{i-j}=f_{i}-f_{j}\) is \(\mathcal{S}_{i-j}\)-Lipschitz, \(\forall i,j\in 1,\ldots,K,i\neq j\). Then, given an input \(x\in\mathbb{R}^{d}\), \(f\) is robust at \(x\) against all \(\delta\) in \(Q=\bigcap_{i\neq c_{A}}(\mathcal{S}_{i-c_{A}})^{r_{i}}\)._ (Proof on p. 23)
The following Example 2 illustrates how the \(\mathbbm{CD}\) certificates (Theorem 2) are larger than the \(\mathbbm{CW}\) certificates (Theorem 1).
**Example 2**.: Consider the piece-wise linear classifier \(f:\mathbb{R}\to\mathbb{R}^{2}\) that we wish to certify at \(x_{0}=2\):
\[f_{1}(x) =\begin{cases}0.1x{+}0.7&\text{if }x{\leq}3,\\ 1.1x{-}2.3&\text{if }x{>}3,\\ \end{cases}\] \[f_{2}(x) =\begin{cases}0.3x{+}0.1&\text{if }x{\leq}3,\\ 1.3x{-}2.9&\text{if }x{>}3.\end{cases}\]
We have \(c_{A}=1\), \(r_{2}=0.2\), \(\mathcal{S}_{1}=\{0.1,1.1\}\), \(\mathcal{S}_{2}=\{0.3,1.3\}\), \(\mathcal{S}_{2}\oplus-\mathcal{S}_{1}=\{0.2,-0.8,1.2,0.2\}\), \(\mathcal{S}_{2-1}=\{0.2\}\). Therefore, Theorem 1 gives a certificate \(Q_{\mathbbm{CW}}=(\mathcal{S}_{2}\oplus-\mathcal{S}_{1})^{r_{2}}=[\nicefrac{{0.2}}{{-0.8}},\nicefrac{{0.2}}{{1.2}}]=[-0.25,\nicefrac{{1}}{{6}}]\). Theorem 2 instead gives the much bigger \(Q_{\mathbbm{CD}}=(\mathcal{S}_{2-1})^{r_{2}}=(-\infty,1]\). This approach generalizes the \(\mathbbm{CW}\)\(\mathcal{S}\)-certificates from Theorem 1 and provides the tightest certificates. For example, replacing \(\mathcal{S}_{i-c_{A}}\) with \((\mathcal{S}_{i}\oplus-\mathcal{S}_{c_{A}})\) recovers Equation (3). Hence, throughout the rest of the paper, we will use class difference unless stated otherwise. Prior work looked at Lipschitz class-difference certificates (Weng et al., 2018) and regularization (Yang et al., 2022). To the best of our knowledge, we are the first to offer a theoretical justification of why this enlarges the certificates, through the new lens of \(\mathcal{S}\)-Lipschitzness.
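A short numerical check of Example 2, assuming finite gradient sets so that \(\rho_{\mathcal{S}}\) becomes a maximum over finitely many slopes; the helper `polar_interval` is ours, not from the paper.

```python
import numpy as np

# Numerical check of Example 2 with finite gradient sets, where
# rho_S(delta) = max_{c in S} c * delta (1-D polar sets are intervals).
def polar_interval(S, r):
    S = np.asarray(S, dtype=float)
    hi = r / S.max() if S.max() > 0 else np.inf    # binding for delta > 0
    lo = r / S.min() if S.min() < 0 else -np.inf   # binding for delta < 0
    return lo, hi

r2 = 0.2
S_cw = [s2 - s1 for s2 in (0.3, 1.3) for s1 in (0.1, 1.1)]  # S_2 (+) (-S_1)
print(polar_interval(S_cw, r2))   # (-0.25, 0.1666...): the CW certificate
print(polar_interval([0.2], r2))  # (-inf, 1.0): the larger CD certificate
```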
Figure 3 summarizes the big picture relating the certificates with function continuity and positions our new results with respect to prior art. Our results fully complete the lattice relating all components together, _i.e._, Lipschitz and \(\mathcal{S}\)-Lipschitz continuity, the \(\mathbbm{CW}\), \(\mathbbm{U}\), and \(\mathbbm{CD}\) modes, and their relation to certification. The bottom row shows the Lipschitz certificates, while the top row shows our \(\mathcal{S}\)-certificates. The vertical arrows demonstrate how \(\mathcal{S}\)-certificates are always larger than the corresponding Lipschitz certificates. The horizontal arrows show that \(\mathbbm{U}\) certificates are smaller than \(\mathbbm{CW}\) certificates, and that \(\mathbbm{CW}\) certificates are smaller than \(\mathbbm{CD}\) certificates. Therefore, the \(\mathbbm{CD}\)\(\mathcal{S}\)-certificates we introduce here provide the largest certificates (top left corner), while the \(\mathbbm{U}\) Lipschitz certificates (bottom right), which are commonly used in prior work, result in the smallest certificates.
## 4 Robustness of Ensembles of Classifiers
We can use \(\mathcal{S}\)-Lipschitzness to study how the robustness properties of individual classifiers affect the robustness of an ensemble of them. Given \(N\) classifiers \(f^{j}:\mathbb{R}^{d}\to\mathbb{R}^{K}\), consider their weighted ensemble:
\[g(x)=\sum_{j=1}^{N}\alpha_{j}f^{j}(x),\ \ \alpha_{j}\geq 0,\ \sum_{j=1}^{N}\alpha_{j}=1. \tag{5}\]
We will indicate the prediction gaps of \(f^{j}\) as \(r^{j}\). We can use the \(\mathcal{S}\)-certificates from Theorem 2 in order to relate the ensemble robustness to that of the individual classifiers.
**Theorem 3** (Addition of \(\mathcal{S}\)-Lipschitz classifiers).: _Take an ensemble as in Equation (5) with \(N=2\) and the \(\mathbbm{CD}\) setting, i.e., \(h_{i-k}^{j}=f_{i}^{j}-f_{k}^{j}\) is \(\mathcal{S}_{i-k}^{j}\)-Lipschitz. Then, at a fixed \(x\in\mathbb{R}^{d}\), it holds that \(g\) is robust against all \(\delta\) in_
\[Q_{g}=\bigcap_{i\neq c_{A}^{g}}\left(\alpha_{1}\mathcal{S}_{i-c_{A}^{g}}^{1} \oplus\alpha_{2}\mathcal{S}_{i-c_{A}^{g}}^{2}\right)^{r_{i}^{g}},\]
_with \(c_{A}^{g}=\operatorname*{arg\,max}_{i}g_{i}\) and \(r_{i}^{g}=g_{c_{A}^{g}}-g_{i}\). The case for \(N>2\) follows by induction._ (Proof on p. 23)
In the \(\mathbbm{U}\) mode, where all classes have the same Lipschitzness \(\mathcal{S}^{j}\supseteq\cup_{i}\mathcal{S}_{i}^{j}\), the \(\mathcal{S}_{i-k}^{j}\) term reduces to \(\mathcal{S}^{j}\oplus-\mathcal{S}^{j}\).
We study whether ensembling two classifiers \(f_{1}\) and \(f_{2}\) results in better robustness by comparing the ensemble certificate \(Q_{g}\) with the individual certificates \(Q_{1}\) and \(Q_{2}\).

Figure 3: The lattice of continuity certificates. \(A\to B\) means that the certificate provided by \(B\) is a subset of the certificate of \(A\). Therefore, class-difference \(\mathcal{S}\)-certificates are the largest, while uniform Lipschitz certificates are the smallest.

We identify three regimes. First, the ensemble certificate \(Q_{g}\) includes all certified points in \(Q_{1}\) and \(Q_{2}\). Second, the ensemble certificate fails to include some perturbations certified in both \(Q_{1}\) and \(Q_{2}\). Third, an ensemble certificate somewhere between the two. Formally,
\[Q_{g}\supset Q_{1}\cup Q_{2}\qquad\text{uniform improvement (regime 1)},\]
\[Q_{1}\cap Q_{2}\subseteq Q_{g}\subseteq Q_{1}\cup Q_{2}\qquad\text{inconclusive (regime 2)},\]
\[Q_{1}\cap Q_{2}\supset Q_{g}\qquad\text{uniform reduction (regime 3)}.\]
Ideally, we wish to construct ensembles that are in regime 1. We may tolerate ensembles in regime 2. But most importantly, we want to avoid ensembles in regime 3 at all costs.

The certification regime depends on whether we are in the \(\mathbbm{CW}\) or \(\mathbbm{U}\) mode. It also depends on the ensemble agreement on the top predictions, _i.e._, which of the following holds:

\[c_{A}=c_{A}^{j}=\arg\max_{i}f_{i}^{j}(x),\ \text{for all}\ j\in 1,\ldots,N,\qquad(c_{A}{=})\]
\[c_{A}^{j}\neq c_{A}^{j^{\prime}},\ \text{for}\ j\neq j^{\prime},\qquad(c_{A}{\neq})\]
\[c_{B}=c_{B}^{j}=\arg\max_{i\neq c_{A}^{j}}f_{i}^{j}(x),\ \text{for all}\ j\in 1,\ldots,N.\qquad(c_{B}{=})\]

The rest of this section outlines the conditions leading to each one of the three certification regimes.

Let us first examine a common scenario for ensembles and identify what certification regime most ensembles fall in. In particular, consider the setting where the constituent classifiers agree on the top two predictions (\((c_{A}{=})\) and \((c_{B}{=})\)). This is a reasonable assumption, particularly when the number of constituent classifiers \(N\) is small and the training procedure for all classifiers is similar. Under the common \(\mathbbm{U}\) mode where all classes are similarly Lipschitz, one might guess that ensembling such agreeing classifiers must boost robustness. However, the above conditions put the ensemble solidly in regime 2, as shown in Theorem 4.
**Theorem 4**.: _Consider an ensemble of \(\mathbbm{U}\) classifiers and a fixed \(x\) for which \((c_{A}{=})\) and \((c_{B}{=})\) hold. Then, for any choice of weights \(\alpha_{j}\) in Equation (5), the \(\mathcal{S}\)-certificate of the ensemble satisfies \(Q_{1}\cap Q_{2}\subseteq Q_{g}\subseteq Q_{1}\cup Q_{2}\), i.e., the ensemble is in the inconclusive regime 2._
Theorem 4 is particularly concerning when \(\mathcal{S}^{1}\) and \(\mathcal{S}^{2}\) are norm balls with the same norm but different radii, as we show with an example in Appendix C.4.
Under the assumptions in Theorem 4, ensembling can never be in the favourable regime 1. The following section shows how relaxing these conditions enables all three regimes.
### Certification Governed by the Prediction Gap
Theorem 3 shows that the prediction gaps \(r\) and the continuity \(\mathcal{S}\) interact in complex ways in the construction of the ensemble certificate \(Q_{g}\). However, if all classifiers have the same smoothness for all the classes, i.e., \(\mathbbm{U}\) and \(\mathcal{S}^{j}=\mathcal{S}\), then the differences between \(Q_{1}\), \(Q_{2}\) and \(Q_{g}\) are fully determined by \(r^{1}\), \(r^{2}\) and \(r^{g}\). We will refer to this setting as \(\mathbbm{U}^{\prime}\). This restriction is not uncommon as ensembled classifiers are often identically trained. For example, if randomized smoothing is used, then \(\mathcal{S}\) is uniquely defined by the smoothing distribution (Yang et al., 2020; Eiras et al., 2022; Rumezhak et al., 2023), which is the same for all constituents.

In this case, there is a one-to-one mapping between the certification regimes and the prediction gaps. Consider the following conditions on the prediction gaps:

\[r^{g}_{c_{B}}>\max_{j}r^{j}_{c_{B}}=\overline{r}\qquad\text{gap gain},\]
\[\underline{r}\leq r^{g}_{c_{B}}\leq\overline{r}\qquad\text{gap inconclusive},\]
\[\min_{j}r^{j}_{c_{B}}=\underline{r}>r^{g}_{c_{B}}\qquad\text{gap loss}.\]

Then, we have that gap gain implies regime 1, gap inconclusive implies regime 2, and gap loss implies regime 3. Therefore, in this subsection, we will focus on the conditions resulting in gap gain, gap inconclusive, and gap loss, towards understanding the certification properties of ensembles in the \(\mathbbm{U}^{\prime}\) mode.

**Similar top two predictions result in gap inconclusive.** Note that if the top predictions are consistent across all constituent classifiers, _i.e._, \((c_{A}{=})\) and \((c_{B}{=})\) hold, this implies that the ensemble prediction gap is the linear combination of the individual prediction gaps \(r^{g}_{c_{B}}=\sum_{j}\alpha_{j}r^{j}_{c_{B}}\). Hence, the gap regime must be gap inconclusive as \(\min_{j}r^{j}_{c_{B}}\leq r^{g}_{c_{B}}\leq\max_{j}r^{j}_{c_{B}}\), which implies regime 2 for \(\mathbbm{U}^{\prime}\). This is a special case of Theorem 4.

**Gap gain is possible.** For a \(\mathbbm{U}^{\prime}\) ensemble, prediction gaps in the gap gain regime (\(r^{g}_{c_{B}}>\overline{r}\)) imply regime 1. One condition for gap gain is \((c_{A}{\neq})\) and \((c_{B}{=})\) with the classifiers having similar confidences in the top two classes and low confidence in all other classes (see Figure 5(a)). Another possibility is \((c_{A}{=})\), but each classifier having a different second prediction, as in Figure 5(b).
**The margin of improvement when \(\mathbbm{U}^{\prime}\) holds is small.** Although the feasibility of gap gain is noteworthy, unfortunately, the improvement of \(r^{g}_{c_{B}}\) over \(\overline{r}\) is limited.
**Proposition 3**.: _Consider \(N\) classifiers over \(K\) classes. We have that for any ensemble \(g\) the prediction gap is upper bounded as follows:_
\[r^{g}_{c_{B}}\leq\overline{r}+\frac{1-\overline{r}}{2}-\frac{1-\overline{r}}{2( K-1)} \tag{10}\]
_The bound is tight: given \(\overline{r}\) and \(K\) there exists an ensemble \(f_{1},\ldots,f_{N}\), such that the prediction gap \(r^{g}_{c_{B}}\) of \(g\) attains the upper bound._
Equation (10) does not depend on the weights \(\alpha_{j}\). Furthermore, \(r^{g}_{c_{B}}-\overline{r}\) decreases monotonically with \(\overline{r}\), reaching \(0\) for \(\overline{r}=1\): improving the robustness of the best classifier decreases the room for improvement of the ensemble. This is a key finding: ensembling can do little to boost the robustness
of a set of already robust classifiers. We illustrate this in Figure 4a: for 1000 random classifiers, we show the gap \(r^{g}_{c_{B}}\) vs \(\overline{r}\) for the weights \(\alpha_{j}\) that maximize \(r^{g}_{c_{B}}\) for the specific ensemble. The margin of improvement via ensembling is the gap between the diagonal and the bottom boundary of the orange region and indeed decreases to 0 as \(\overline{r}\rightarrow 1\).
In practice, the prediction gap gains are likely even smaller. Most ensembles of random classifiers stay far from the bound and have an even lower ensemble gap gain \(r^{g}_{c_{B}}\) than Equation (10) predicts, as Figure 4a shows. Furthermore, in reality, one has to pick a single set of weights \(\alpha_{j}\) for all inputs \(x\). Often that is the uniform ensemble weight, _i.e._, \(\alpha_{j}=1/N\). We show the gap gain for random classifiers with uniform weights in Figure 4b. Only a handful of ensembles remain in the gap gain regime (above the diagonal in Figure 4b) under uniform weights. The majority of the points have \(r^{g}_{c_{B}}<\overline{r}\) and are in gap inconclusive or gap loss (under the diagonal). Therefore, in practice, ensembling rarely results in gap gains, which is at odds with the _ensembling for robustness_ paradigm. This is also true for real-world ensembles (see Appendix B).
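A sketch reproducing the flavour of this experiment (uniform-weight ensembles of Dirichlet-random classifiers; the exact sampling in the paper may differ) is given below; it also asserts the Proposition 3 bound on every draw.

```python
import numpy as np

# Figure 4-style experiment: uniform-weight ensembles of random classifiers
# on the K-simplex; sampling details here are illustrative assumptions.
rng = np.random.default_rng(0)
K, N, trials = 4, 2, 1000

def gap(p):                       # prediction gap: top score minus runner-up
    s = np.sort(p)[::-1]
    return s[0] - s[1]

gains = losses = 0
for _ in range(trials):
    fs = rng.dirichlet(np.ones(K), size=N)   # N random classifiers
    g = fs.mean(axis=0)                      # uniform weights alpha_j = 1/N
    r_g = gap(g)
    r_best, r_worst = max(map(gap, fs)), min(map(gap, fs))
    gains += r_g > r_best                    # gap gain
    losses += r_g < r_worst                  # gap loss
    # Proposition 3 bound holds independently of the weights:
    bound = r_best + (1 - r_best) / 2 - (1 - r_best) / (2 * (K - 1))
    assert r_g <= bound + 1e-12

print(gains / trials, losses / trials)       # gain is rare, loss is common
```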
**Gap loss is possible.** Figure 4b compares \(r^{g}_{c_{B}}\) against \(\overline{r}\), _i.e._, the most robust individual classifier. However, at different inputs \(x\) the best classifier may be different. Even if \(g\) is always marginally less robust than the most robust classifier at each single \(x\), \(g\) might still be overall more robust than any single \(f^{j}\). To this end, Figure 4c shows the ensemble gap \(r^{g}_{c_{B}}\) against the _worst_ individual gap \(\underline{r}\). This shows that roughly half of the points are in the gap loss regime, indicating that ensembles are often _less robust than the least robust individual classifier_. For \(\mathbbm{U}^{\prime}\) ensembles this directly implies regime 3. The same findings hold for the real-world classifiers in Appendix B: for all of them the constituent models are on average more robust than the ensemble.
**Ensembles can result in zero robustness.** To make matters worse, not only is it possible that \(r^{g}_{c_{B}}\) is smaller than all individual gaps, but it can even be 0, _i.e._, \(Q_{g}=\{0\}\).
**Proposition 4**.: _For any set of \(N\geq 2\) classifiers satisfying \((c_{A}{\neq})\), there exist weights \(\alpha_{j}\) for which the resulting ensemble has \(r^{g}_{c_{B}}=0\) and a certified perturbation set \(Q_{g}=\{0\}\). (Proof on p. 24)_
Figure 5c shows an example of \(r^{g}_{c_{B}}=0\). Therefore, ensembling not only can reduce robustness but can also result in an entirely non-robust classifier. Figures 5 and 6 show examples of this scenario occurring in practice.
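The construction behind Proposition 4 can be illustrated with a two-classifier sketch; the specific score vectors are made up for illustration.

```python
import numpy as np

# Proposition 4 sketch: two classifiers that disagree on the top class
# (c_A^1 != c_A^2) admit a mixing weight alpha at which the ensemble scores
# of the two competing classes tie, so r^g_{c_B} = 0 and Q_g = {0}.
f1 = np.array([0.6, 0.3, 0.1])        # c_A^1 = 0
f2 = np.array([0.2, 0.7, 0.1])        # c_A^2 = 1
a, b = 0, 1                           # the two competing top classes

# Solve alpha*f1[a] + (1-alpha)*f2[a] = alpha*f1[b] + (1-alpha)*f2[b]
alpha = (f2[b] - f2[a]) / ((f1[a] - f1[b]) - (f2[a] - f2[b]))
g = alpha * f1 + (1 - alpha) * f2
print(alpha, g)                       # alpha = 0.625, g = [0.45, 0.45, 0.1]
```

Since the classifiers disagree on the top class, the two bracketed differences have opposite signs, so the solved `alpha` always lies in \([0,1]\) and defines a valid ensemble.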
**Same top predictions prevent gap loss.** The possibility of gap loss and the complete loss of robustness is certainly disappointing. However, there is a simple way to prevent it from occurring. Proposition 4 constructs an ensemble which has a decision boundary passing through \(x\). This is only possible if there are two classifiers in the ensemble with different top predictions \((c_{A}{\neq})\). As long as all classifiers have the same top prediction, the ensemble cannot have a decision boundary passing through \(x\). Not only that, but it will also never be in the gap loss regime, as illustrated by the red subset of ensembles in Figure 4c.
Figure 4: A set of 1000 ensembles of 2, 3 or 4 classifiers, each a uniform draw from the 4-dimensional probability simplex. (**a**) shows the best individual gap among the classifiers in each ensemble (\(\overline{r}\)) vs the largest ensemble gap (\(r^{g}_{c_{B}}\)) attainable across all \(\alpha_{j}\). The larger the best gap \(\overline{r}\), the lower the potential gain \(r^{g}_{c_{B}}-\overline{r}\) (the vertical gap between the diagonal and the impossible region). (**b**) has the same horizontal axis as a) but the ensemble gap (\(r^{g}_{c_{B}}\)) is computed for uniform weights \(\alpha_{j}\). Most of the uniform-weight ensembles witness gap loss. (**c**) has the same vertical axis as b) but the horizontal axis shows the worst individual gap (\(\underline{r}\)) instead of the best one. The ensembles with the same \((c_{A}{=})\) and different \((c_{A}{\neq})\) top predictions are highlighted, showing that the \((c_{A}{=})\) regime always results in \(r^{g}_{c_{B}}\geq\underline{r}\).
**Proposition 5**.: _No ensemble of \(N\) classifiers over \(K\) classes with \(r^{i}\geq 0,i=1,\ldots,N\) satisfying \((c_{A}{=})\) can be in the gap loss regime._
Therefore, a practical way to avoid ensembles that are less robust than the least robust individual classifier is to enforce that all classifiers have the same top prediction.
**Summary**. Restricting the ensemble to satisfy \((c_{A}{=})\) and \((c_{B}{=})\) leads to regime 2: no gap gain nor gap loss (Theorem 4). Dropping both conditions enables regime 1 but also regime 3. However, keeping only condition \((c_{A}{=})\) prevents regime 3 while keeping regimes 1 and 2 possible (Proposition 5). For robust classifiers, the best-case ensemble prediction gap gains are very small (Proposition 3). Finally, for ensembles in the \(\mathbbm{U}^{\prime}\) setting, the gap regimes directly determine the certification regimes.
(Proposition 7). Furthermore, we provide sufficient conditions for ... |
2306.05002 | A high-order diffused-interface approach for two-phase compressible flow
simulations using a Discontinuous Galerkin framework | A diffused-interface approach based on the Allen-Cahn phase field equation is
developed within a high-order Discontinuous Galerkin framework. The interface
capturing technique is based on the balance between explicit diffusion and
sharpening terms in the phase field equation, where the former term involves
the computation of the local interface normal vectors. Due to the well-known
Gibbs phenomenon encountered in high-order discretisations of steep profiles
such as shocks and/or interfaces, the accurate evaluation of the normal vector
requires special consideration. To this end, a non-linear preconditioning
strategy is proposed in this work where an additional smooth level-set function
advected by the velocity field is used for the evaluation of the normal
vectors. It is shown that for appropriate choices of numerical fluxes and
parameters of the model, the phase field remains bounded without any need for
explicit regularisation. The proposed diffused-interface technique is
implemented within a five equation model for fully compressible two-phase
flows. In order to preserve isolated interfaces, a quasi-conservative
discretisation of the five equation model is employed. A series of numerical
experiments of increasing complexity are performed in order to assess the
accuracy and robustness of the developed methodology, including two-phase flows
involving viscous effects, gravitational forces, and surface tension. | Niccolò Tonicello, Matthias Ihme | 2023-06-08T07:46:06Z | http://arxiv.org/abs/2306.05002v1 | A high-order diffused-interface approach for two-phase compressible flow simulations using a Discontinuous Galerkin framework
###### Abstract
A diffused-interface approach based on the Allen-Cahn phase field equation is developed within a high-order Discontinuous Galerkin framework. The interface capturing technique is based on the balance between explicit diffusion and sharpening terms in the phase field equation, where the former term involves the computation of the local interface normal vectors. Due to the well-known Gibbs phenomenon encountered in high-order discretisations of steep profiles such as shocks and/or interfaces, the accurate evaluation of the normal vector requires special consideration. To this end, a non-linear preconditioning strategy is proposed in this work where an additional smooth level-set function advected by the velocity field is used for the evaluation of the normal vectors. It is shown that for appropriate choices of numerical fluxes and parameters of the model, the phase field remains bounded without any need for explicit regularisation. The proposed diffused-interface technique is implemented within a five equation model for fully compressible two-phase flows. In order to preserve isolated interfaces, a quasi-conservative discretisation of the five equation model is employed. A series of numerical experiments of increasing complexity are performed in order to assess the accuracy and robustness of the developed methodology, including two-phase flows involving viscous effects, gravitational forces, and surface tension.
keywords: High-order methods, Discontinuous Galerkin, phase field, two-phase flows
Footnote †: journal: Journal of Computational Physics
## 1 Introduction
The constant increase in computational power has made the use of Computational Fluid Dynamics (CFD) an increasingly valuable tool in the design process of complex industrial applications [1]. Commonly available commercial CFD softwares are mostly based on low-order numerical schemes such as finite volumes or finite differences methods. While these numerical methods are particularly robust and reliable, they usually lack in accuracy for complex applications. Along these lines, the development of innovative high-order schemes, such as Discontinuous Galerkin [2; 3; 4; 5], Flux Reconstruction [6; 7; 8] and Spectral Difference [9; 10] methods, has experienced a significant growth in the CFD community, representing a promising alternative for the next generation of CFD commercial codes.
Whereas the use of high-order spectral element methods is becoming more common in the simulation of compressible aerodynamics problems [11; 12; 13; 14; 15; 16; 17; 18; 19; 20], multiphase applications still largely rely on low-order discretisations, in particular for fully compressible flows.
High-order numerical schemes, thanks to their low numerical dissipation and dispersion errors [21; 22; 23; 24; 25; 26; 27; 28], can be beneficial in resolving the small-scale structures which are often encountered in the simulation of multiphase flows. On the other hand, because of the delicate nature of such schemes, careful attention is needed in order to retain stability properties.
One common approach in the simulation of two-phase flows relies on the concept of volume fraction, and its most popular formulation is commonly known as the Volume of Fluid (VOF) method [29; 30]. The volume fraction is an auxiliary function, advected by the velocity field, that varies between zero and one and represents the ratio of primary to secondary fluid at a given computational grid point. The interface between two immiscible fluids is then represented by a sharp variation of the volume fraction.
The phase field approach [31; 32; 33] follows a concept similar to VOF methodologies, where a function bounded between zero and one is used to distinguish the two phases. However, instead of relying on the explicit reconstruction of the interface, as is customary in VOF methods, a balance between a diffusion and an anti-diffusion (sharpening) term in the transport equation of the phase field is used in order to maintain a sharp, but at the same time sufficiently smooth, diffused profile of the interface.
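To make this balance concrete, a minimal 1D sketch follows, assuming the conservative flux form \(\Gamma(\epsilon\nabla\phi-\phi(1-\phi)\mathbf{\widehat{n}})\) adopted later in Eq. (8); the parameter values are arbitrary.

```python
import numpy as np

# 1-D balance check for the conservative phase-field flux
# a = Gamma * (eps * dphi/dx - phi * (1 - phi) * n), with n = +1 here.
# The logistic profile phi(x) = 1 / (1 + exp(-x/eps)) satisfies
# eps * phi' = phi * (1 - phi), so diffusion and sharpening cancel.
eps, Gamma = 0.05, 1.0
x = np.linspace(-0.5, 0.5, 2001)
phi = 1.0 / (1.0 + np.exp(-x / eps))

dphi = np.gradient(phi, x)                       # diffusion term eps * grad(phi)
flux = Gamma * (eps * dphi - phi * (1.0 - phi))  # net interface flux
print(np.max(np.abs(flux)))                      # ~0 up to finite-difference error
```

Away from this equilibrium profile, the residual flux acts to restore the diffused interface thickness set by \(\epsilon\).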
Once a clear separation between the two phases is identified, it is possible to introduce an additional set of equations that can be used to model each phase, potentially with considerably different transport properties and equations of state. At the same time, localised forces or fluxes can easily be imposed at the interface, as often happens in the modelling of surface tension effects [34].
The objective of the present work is to develop a simple but, at the same time, accurate and robust approach to deal with compressible two-phase flows within the framework of the Discontinuous Galerkin method. The proposed methodology is able to preserve important properties in the simulation of two-phase flows such as boundedness of the volume fraction, accurate evaluation of the interface normal vectors, low mass conservation errors and exact resolution of contact discontinuities. From an implementational point of view, the proposed approach does not need explicit limiting, interface reconstruction or mass redistribution, but it is simply based on appropriate definitions of numerical fluxes and additional equations. Finally, the high-order spatial discretisation provides significant advantages from many different points of view, from considerably reducing mass conservation errors, to avoiding spurious deformations of the interface for long time integration. Along the same lines, the generality of this method allows easy implementation of different physical phenomena such as viscous, gravitational and surface tension forces. In other words, the novelty of the present method resides in its generality and flexibility: with only minor, targeted modifications on the central kernels of the numerical scheme, it is possible to preserve a large amount of desirable features in two-phase flows simulations, adding, at the same time, the benefits given by the high-order spatial discretisation.
The paper is structured as follows. In section 2, the five equation model for two immiscible compressible fluids is introduced, including all the additional terms and equations associated to the specific interface capturing approach employed in the present work. In section 3, the Discontinuous Galerkin method is first outlined in its general formulation for conservation laws. Its implementation for the specific case of the five equation model is then presented, including additional details on the interface capturing technique, the specific choices of numerical fluxes and their related properties. Subsequently,
section 4 is dedicated to the numerical results, in which both kinematic tests and fully coupled two-phase flows of increasing complexity are considered in order to validate the present implementation. Within the benchmark cases herein considered, extensive studies on different types of elements and different orders of approximation, as well as grid-convergence investigations, are carried out in order to assess the robustness and accuracy of the proposed methodology for a wide range of different problems in two-phase flows. Finally, in section 5 the key conclusions of this work are discussed.
## 2 Five equation model
The system of conservation laws considered in this work aims at modelling two-phase compressible flows including viscous effects, gravitational forces and surface tension. The five equation model [35] was chosen as a framework that can be used to simulate many different conditions in multiphase flows. The present formulation slightly differs from the original model by considering an additional equation for an advected function \(\psi\), which is used to compute the interface normal vectors in a fashion similar to the work by Al-Salami et al. [36]. It is worthwhile mentioning that the method proposed in [36] considered a weakly compressible formulation. In this work, the same approach was generalised to the fully compressible five equation model. Furthermore, their choice of parameters in the conservative phase field equation did not satisfy boundedness of the phase field variable, leading to non-negligible mass conservation errors. As will be shown in the numerical results section, these errors are significantly smaller with the present formulation.
The full system, including contributions for considering viscous, gravitational and surface tension effects reads:
\[\frac{\partial\phi_{1}}{\partial t}+\mathbf{u}\cdot\nabla\phi_{1} =\nabla\cdot\mathbf{a}_{1}, \tag{1}\] \[\frac{\partial\psi}{\partial t}+\mathbf{u}\cdot\nabla\psi =0,\] (2) \[\frac{\partial\rho_{1}\phi_{1}}{\partial t}+\nabla\cdot(\rho_{1} \phi_{1}\mathbf{u}) =\nabla\cdot\mathbf{R}_{1},\] (3) \[\frac{\partial\rho_{2}\phi_{2}}{\partial t}+\nabla\cdot(\rho_{2} \phi_{2}\mathbf{u}) =\nabla\cdot\mathbf{R}_{2},\] (4) \[\frac{\partial\rho\mathbf{u}}{\partial t}+\nabla\cdot(\rho \mathbf{u}\otimes\mathbf{u}+P\mathbb{I}) =\nabla\cdot(\mathbf{f}\otimes\mathbf{u})+\nabla\cdot\boldsymbol{ \tau}+\sigma\kappa\mathbf{\widehat{n}}\delta_{\Gamma}+\rho\mathbf{g},\] (5) \[\frac{\partial\rho E}{\partial t}+\nabla\cdot((\rho E+P)\mathbf{ u}) =\nabla\cdot(\mathbf{f}k)+\sum_{l=1}^{2}\nabla\cdot(\rho_{l}H_{l} \mathbf{a}_{l})+\nabla\cdot(\boldsymbol{\tau}\cdot\mathbf{u})\] (6) \[+\sigma\kappa\mathbf{u}\cdot\mathbf{\widehat{n}}\delta_{\Gamma} +\rho\mathbf{g}\cdot\mathbf{u}, \tag{7}\]
where \(\phi_{l}\) is the phase field associated with the \(l\)-th phase, \(\rho_{l}\) is the density of each phase, \(\psi\) is the additional level-set function, \(\rho\mathbf{u}\) is the total momentum, \(P\) is the pressure and \(\rho E=\rho e+\frac{1}{2}\rho||\mathbf{u}||^{2}\) is the total energy.
In addition,
\[\mathbf{a}_{l}=\Gamma(\epsilon\nabla\phi_{l}-\phi_{l}(1-\phi_{l})\mathbf{ \widehat{n}}_{l}),\quad\mathbf{R}_{l}=\rho_{l}^{(0)}\mathbf{a}_{l},\quad \mathbf{f}=\sum_{l=1}^{2}\mathbf{R}_{l},\quad k=\frac{1}{2}||\mathbf{u}||^{2}, \tag{8}\]
and \(H_{l}\) is the specific enthalpy of the \(l\)-th phase. Regarding the modelling of viscous stresses, buoyancy and surface tension: \(\boldsymbol{\tau}=2\mu(\mathbb{S}-1/3(\nabla\cdot\mathbf{u})\mathbb{I})\) is the viscous stress tensor, with \(\mu\) the dynamic viscosity of the mixture evaluated as \(\mu=\mu_{1}\phi_{1}+\mu_{2}\phi_{2}\), \(\mathbb{S}=(\nabla\mathbf{u}+\nabla\mathbf{u}^{\intercal})/2\) is the strain-rate tensor, \(\mathbf{g}\) is the gravitational acceleration, \(\sigma\) is the surface tension coefficient, \(\kappa=-\nabla\cdot\mathbf{\widehat{n}}\) is the curvature of the interface and \(\delta_{\Gamma}=||\nabla\phi_{1}||\) is an approximate delta function around the interface.
The system is then closed by relating internal energy with the pressure field using an Equation of State (EOS). A classical choice is the Stiffened-Gas EOS [37]:
\[P=\frac{\rho e-\left(\frac{\gamma_{1}P_{1}^{\infty}}{\gamma_{1}-1}\phi_{1}+\frac{\gamma_{2}P_{2}^{\infty}}{\gamma_{2}-1}\phi_{2}\right)}{\left(\frac{\phi_{1}}{\gamma_{1}-1}+\frac{\phi_{2}}{\gamma_{2}-1}\right)}, \tag{9}\]
where \(\gamma_{l}\) and \(P_{l}^{\infty}\) are the parameters of the EOS. From the stiffened-gas equation of state it is possible to write the speed of sound and specific enthalpy of each phase as
\[c_{l}=\sqrt{\gamma_{l}\Big{(}\frac{P+P_{l}^{\infty}}{\rho_{l}}\Big{)}}\quad \text{and}\quad H_{l}=\frac{(P+P_{l}^{\infty})\gamma_{l}}{\rho_{l}(\gamma_{l}- 1)}\quad\text{for}\quad l=1,2. \tag{10}\]
Finally, for completeness, the following mixture relations apply:
\[\phi_{2}= 1-\phi_{1}, \tag{11}\] \[\rho= \rho_{1}\phi_{1}+\rho_{2}\phi_{2},\] (12) \[\frac{1}{\gamma-1}= \phi_{1}\frac{1}{\gamma_{1}-1}+\phi_{2}\frac{1}{\gamma_{2}-1},\] (13) \[P^{\infty}\frac{\gamma}{\gamma-1}= \phi_{1}\frac{\gamma_{1}P_{1}^{\infty}}{\gamma_{1}-1}+\phi_{2} \frac{\gamma_{2}P_{2}^{\infty}}{\gamma_{2}-1}. \tag{14}\]
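To make these closure relations concrete, the short Python sketch below evaluates the mixture pressure of equation 9 and the per-phase sound speeds and enthalpies of equation 10. This is a minimal illustration with NumPy-style inputs; the function names are ours and do not reflect the actual _Quail_ implementation.

```python
import numpy as np

def mixture_pressure(rho_e, phi1, gamma1, gamma2, Pinf1, Pinf2):
    """Stiffened-gas mixture pressure, equation (9)."""
    phi2 = 1.0 - phi1
    num = rho_e - (gamma1 * Pinf1 / (gamma1 - 1.0) * phi1
                   + gamma2 * Pinf2 / (gamma2 - 1.0) * phi2)
    den = phi1 / (gamma1 - 1.0) + phi2 / (gamma2 - 1.0)
    return num / den

def phase_sound_speed(P, rho_l, gamma_l, Pinf_l):
    """Speed of sound of one phase, equation (10)."""
    return np.sqrt(gamma_l * (P + Pinf_l) / rho_l)

def phase_enthalpy(P, rho_l, gamma_l, Pinf_l):
    """Specific enthalpy of one phase, equation (10)."""
    return (P + Pinf_l) * gamma_l / (rho_l * (gamma_l - 1.0))
```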
In order to preserve the _Interface Equilibrium Condition_ (IEC), the quasi-conservative formulation proposed by Cheng et al. [38] is herein employed. In [38], a DG discretisation was coupled with an explicit limiting technique to avoid oscillations of the phase field in proximity of the interface. In the present work, instead, the sharpening/diffusion balance first proposed by Chiu & Lin [33] was used. Even though similar results can be achieved with both approaches, the simplicity of implementation is certainly one of the main advantages of the present strategy. The same technique, in fact, can be adapted to any numerical scheme for appropriate choices of parameters and numerical fluxes.
The present formulation was implemented within the open-source code _Quail_[39].
## 3 Discontinuous Galerkin discretisation
In this section, the Discontinuous Galerkin (DG) method is briefly introduced by considering a general system of conservation laws. The same strategy is then applied to the discretisation of the phase field equation and subsequently to the compressible five equation model for two-phase flows.
A general set of conservation laws can be written in a compact form as
\[\frac{\partial\mathbf{w}}{\partial t}+\nabla\cdot\mathbf{F}(\mathbf{w},\nabla \mathbf{w})=\mathbf{S}(\mathbf{w},\nabla\mathbf{w}). \tag{15}\]
Before applying the DG approach to equation 15, we first introduce the partition of the computational domain \(\Omega\) into \(N_{e}\) non-overlapping discrete elements such that \(\Omega=\cup_{n=1}^{N_{e}}\Omega_{n}\). Let us also denote the boundary of the \(n\)-th element as \(\partial\Omega_{n}\).
Secondly, we introduce the space of test functions in which the numerical solution will be sought. In particular, a classical choice for DG schemes consists in the functional space:
\[\mathcal{V}:=\{\varphi\in L^{2}(\Omega):\varphi|_{\Omega_{n}}\in\mathbb{P}_{ \mathrm{p}}(\Omega_{n}),\forall\Omega_{n}\}, \tag{16}\]
where \(\mathbb{P}_{\mathrm{p}}\) is the space of piecewise continuous polynomials of order not greater than p on \(\Omega_{n}\) and
\[L^{2}(\Omega)=\left\{\varphi:\Omega_{n}\rightarrow\mathbb{R}\;\bigg{|}\;\int_{\Omega_{n}}|\varphi(\mathbf{x})|^{2}d\Omega<\infty\right\} \tag{17}\]
with \(\mathbb{R}\) being the space of real numbers. Different bases can be used to define the functional space \(\mathcal{V}\), either modal or nodal [2]. In this work, classical Lagrange polynomials are used within a nodal DG framework.
The approximation of the global solution \(\mathbf{w}^{\delta}\) can be defined as
\[\mathbf{w}^{\delta}=\oplus_{n=1}^{N_{e}}\mathbf{w}_{n}^{\delta}, \tag{18}\]
where \(\mathbf{w}_{n}^{\delta}\) is the local discrete solution:
\[\mathbf{w}_{n}^{\delta}=\sum_{j=0}^{\mathrm{p}}\mathbf{w}_{j}(t)\varphi_{j}( \mathbf{x}). \tag{19}\]
The local formulation of the DG method requires \(\mathbf{w}_{n}^{\delta}\) to satisfy
\[\int_{\Omega_{n}}\varphi_{i}\frac{\partial\mathbf{w}_{n}^{\delta}}{\partial t }d\Omega+\int_{\Omega_{n}}\varphi_{i}\nabla\cdot\mathbf{F}(\mathbf{w}_{n}^{ \delta},\nabla\mathbf{w}_{n}^{\delta})d\Omega=\int_{\Omega_{n}}\varphi_{i} \mathbf{S}(\mathbf{w}_{n}^{\delta},\nabla\mathbf{w}_{n}^{\delta})d\Omega \quad\forall\varphi_{i}\in\mathcal{V}. \tag{20}\]
The second term on the left-hand side of equation 20 is the flux term. Upon performing integration by parts, this term can be expressed as
\[\int_{\Omega_{n}}\varphi_{i}\nabla\cdot\mathbf{F}(\mathbf{w}_{n}^{\delta}, \nabla\mathbf{w}_{n}^{\delta})d\Omega=-\int_{\Omega_{n}}\nabla\varphi_{i} \cdot\mathbf{F}(\mathbf{w}_{n}^{\delta},\nabla\mathbf{w}_{n}^{\delta})d\Omega +\oint_{\partial\Omega_{n}}\varphi_{i}\widehat{\mathbf{F}}(\mathbf{w}_{n}^{ \delta,+},\mathbf{w}_{n}^{\delta,-},\nabla\mathbf{w}_{n}^{\delta,+},\nabla \mathbf{w}_{n}^{\delta,-},\widehat{\mathbf{m}})dS, \tag{21}\]
where \(\widehat{\mathbf{m}}\) is the outward-pointing unit normal vector, \((\cdot)^{+}\) and \((\cdot)^{-}\) denote the right and left state with respect to the element's interface \(\partial\Omega_{n}\) and \(\widehat{\mathbf{F}}\) is the numerical flux.
After exploiting the form of \(\mathbf{w}_{n}^{\delta}\) (equation 19), the local discrete weak formulation reads:
\[\sum_{j=0}^{\mathrm{p}}\frac{\mathrm{d}\mathbf{w}_{j}}{\mathrm{dt}}\mathrm{M} _{ij}=\int_{\Omega_{n}}\nabla\varphi_{i}\cdot\mathbf{F}(\mathbf{w}_{n}^{ \delta},\nabla\mathbf{w}_{n}^{\delta})d\Omega-\oint_{\partial\Omega_{n}} \varphi_{i}\widehat{\mathbf{F}}(\mathbf{w}_{n}^{\delta,+},\mathbf{w}_{n}^{ \delta,-},\nabla\mathbf{w}_{n}^{\delta,+},\nabla\mathbf{w}_{n}^{\delta,-}, \widehat{\mathbf{m}})dS+\int_{\Omega_{n}}\varphi_{i}\mathbf{S}(\mathbf{w}_{n}^ {\delta},\nabla\mathbf{w}_{n}^{\delta},\mathbf{x})d\Omega, \tag{22}\]
where \(\mathrm{M}_{ij}=\int_{\Omega_{n}}\varphi_{i}\varphi_{j}d\Omega\) represents the \((i,j)\)-th entry of the element-local mass matrix. Both volume and surface integrals appearing in equation 22 can be either computed analytically or using appropriate quadrature rules. Equation 22 can then be discretised in time with explicit or implicit schemes.
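As an illustration of how the entries \(\mathrm{M}_{ij}\) are obtained in practice, the following Python sketch assembles the element-local mass matrix on the 1D reference element using Gauss-Legendre quadrature and an equispaced Lagrange basis. Both the choice of basis nodes and the quadrature order are illustrative, not necessarily those used in the paper.

```python
import numpy as np

def lagrange_basis(nodes, x):
    """Evaluate all Lagrange basis polynomials built on `nodes`
    at the points `x`; returns shape (len(nodes), len(x))."""
    vals = np.ones((len(nodes), len(x)))
    for i, xi in enumerate(nodes):
        for j, xj in enumerate(nodes):
            if i != j:
                vals[i] *= (x - xj) / (xi - xj)
    return vals

def local_mass_matrix(p):
    """M_ij = int_{-1}^{1} phi_i phi_j dx on the reference element;
    p+1 Gauss points integrate the degree-2p product exactly."""
    nodes = np.linspace(-1.0, 1.0, p + 1)            # equispaced Lagrange nodes
    xq, wq = np.polynomial.legendre.leggauss(p + 1)  # Gauss-Legendre rule
    phi = lagrange_basis(nodes, xq)                  # shape (p+1, p+1)
    return (phi * wq) @ phi.T

print(local_mass_matrix(2))  # 3x3 symmetric positive-definite matrix
```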
The only component of the spatial discretisation which is intrinsically dependent on the specific conservation law is the definition of the numerical fluxes, which needs to take into account the eigenstructure of the hyperbolic system. Lax-Friedrichs and Symmetric Interior Penalty [40] fluxes will be considered as they offer good flexibility for a wide range of different conservation laws.
Now that the general framework of the DG method has been introduced, we proceed to present the specific numerical strategy used in the discretisation of the five equation model. First, the numerical treatment of the phase field and level-set equations will be considered in order to highlight the key aspects of the interface capturing technique alone. The subsequent coupling with the five equation model will naturally follow with appropriate choices of numerical fluxes.
The first two equations of the five equation model introduced in the previous section read:
\[\frac{\partial\phi_{1}}{\partial t}+\mathbf{u}\cdot\nabla\phi_{1} =\nabla\cdot\mathbf{a}_{1}, \tag{23}\] \[\frac{\partial\psi}{\partial t}+\mathbf{u}\cdot\nabla\psi =0, \tag{24}\]
with \(\mathbf{a}_{1}=\Gamma(\epsilon\nabla\phi_{1}-\phi_{1}(1-\phi_{1})\mathbf{ \widehat{n}}_{1})\) and \(\mathbf{\widehat{n}}_{1}=\nabla\psi/||\nabla\psi||\).
For a prescribed velocity field, this system is well-posed and can be solved numerically, as is common practice in kinematic tests for the validation of interface capturing techniques. In the following discussion, the subscript of \(\phi_{1}\) will be dropped in the phase field equation, since only one phase field needs to be resolved (the phase field of the secondary fluid is directly evaluated as \(\phi_{2}=1-\phi_{1}\)).
The local weak formulation of the classic Discontinuous Galerkin method for this set of equations reads:
\[\int_{\Omega_{n}}\frac{\partial\phi_{n}}{\partial t}\varphi_{i}d \Omega+\int_{\Omega_{n}}(\mathbf{u}\cdot\nabla\phi_{n})\varphi_{i}d\Omega =-\int_{\Omega_{n}}\nabla\varphi_{i}\cdot\mathbf{a}_{l}d\Omega+ \oint_{\partial\Omega_{n}}\mathbf{\widehat{a}}_{l}\cdot\mathbf{\widehat{m}} \varphi_{i}dS, \tag{25}\] \[\int_{\Omega_{n}}\frac{\partial\psi_{n}}{\partial t}\varphi_{i}d \Omega+\int_{\Omega_{n}}(\mathbf{u}\cdot\nabla\psi_{n})\varphi_{i}d\Omega =0, \tag{26}\]
where the vector \(\mathbf{\widehat{a}}_{l}\) is the numerical flux at the interface between neighbouring elements, which depends on the left and right states of the conservative variables and their gradients (namely, \(\phi^{\pm}\), \(\psi^{\pm}\), \(\nabla\phi^{\pm}\), \(\nabla\psi^{\pm}\)). A symmetric interior penalty flux was chosen in the present work to evaluate this term. The transport term, instead, is treated as a source term. The system written as it is can be solved numerically for a prescribed, analytical velocity field. When considering the fully coupled system, instead, where the velocity field is itself an unknown of the problem, a specific treatment of the advection term is needed in order to fulfil the interface equilibrium condition (_i.e._ exact resolution of contact discontinuities). This property is of fundamental importance for compressible flows in order to avoid pressure and velocity oscillations in proximity of the interface. These oscillations can ultimately lead to global instability of the numerical scheme and thus the IEC represents a key aspect in the success of the simulation.
Integrating by parts twice, the previous system can be written as:
\[\int_{\Omega_{n}}\frac{\partial\phi_{n}}{\partial t}\varphi_{i}d\Omega+\int_{\Omega_{n}}(\mathbf{u}\cdot\nabla\phi_{n})\varphi_{i}d\Omega-\int_{\partial\Omega_{n}}\widehat{\mathbf{m}}\cdot\widehat{(\mathbf{u}\phi_{n})}\varphi_{i}dS+\int_{\partial\Omega_{n}}\widehat{\mathbf{m}}\cdot\widetilde{(\mathbf{u}\phi_{n})}\varphi_{i}dS=\] \[=-\int_{\Omega_{n}}\nabla\varphi_{i}\cdot\mathbf{a}_{l}d\Omega+\oint_{\partial\Omega_{n}}\widehat{\mathbf{a}}_{l}\cdot\widehat{\mathbf{m}}\varphi_{i}dS,\] \[\int_{\Omega_{n}}\frac{\partial\psi_{n}}{\partial t}\varphi_{i}d\Omega+\int_{\Omega_{n}}(\mathbf{u}\cdot\nabla\psi_{n})\varphi_{i}d\Omega-\int_{\partial\Omega_{n}}\widehat{\mathbf{m}}\cdot\widehat{(\mathbf{u}\psi_{n})}\varphi_{i}dS+\int_{\partial\Omega_{n}}\widehat{\mathbf{m}}\cdot\widetilde{(\mathbf{u}\psi_{n})}\varphi_{i}dS=0,\]
where two additional numerical fluxes for the transport term need to be defined (namely, \(\widehat{(\cdot)}\) and \(\widetilde{(\cdot)}\)). The same expressions proposed by [38] were considered in this work. In particular,
\[\widehat{\mathbf{m}}\cdot\widehat{(\mathbf{u}\phi)}=\begin{cases}\widehat{\mathbf{m}}\cdot\mathbf{u}_{n}^{-}\phi_{n}^{-}&\text{in }\Omega_{n}^{-}\\ \widehat{\mathbf{m}}\cdot\mathbf{u}_{n}^{+}\phi_{n}^{+}&\text{in }\Omega_{n}^{+}\end{cases}\quad\text{and}\quad\widehat{\mathbf{m}}\cdot\widetilde{(\mathbf{u}\phi)}=\begin{cases}\widehat{\mathbf{m}}\cdot\mathbf{u}^{-}\{\!\{\phi\}\!\}-c\llbracket\phi\rrbracket&\text{in }\Omega_{n}^{-}\\ \widehat{\mathbf{m}}\cdot\mathbf{u}^{+}\{\!\{\phi\}\!\}-c\llbracket\phi\rrbracket&\text{in }\Omega_{n}^{+}\end{cases} \tag{27}\]
where \(c\) represents the maximum characteristic speed of the system (_i.e._ the speed of sound for the five equation model),
\[\{\!\{\phi\}\!\}=\frac{\phi^{+}+\phi^{-}}{2}\quad\text{and}\quad\llbracket\phi\rrbracket=\frac{\phi^{+}-\phi^{-}}{2}\]
denote the average and the jump of \(\phi\) across the element interface.
It can be proven that this choice, in addition to the quasi-conservative form of the advection term, leads to the exact preservation of contact discontinuities when coupled with the standard Lax-Friedrichs numerical flux for the remaining conservation laws in the five equation model (_i.e._ conservation of mass of the \(l\)-th phase, total momentum and total energy). For consistency, the same fluxes are also employed in the level-set equation, even though this choice is not fundamental to the fulfilment of the interface equilibrium condition.
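For clarity, a minimal Python sketch of the two transport fluxes of equation 27, written as seen from the \(\Omega_{n}^{-}\) side of a face, is given below. The variable names are ours, and the traces are assumed to be already available at the face quadrature points.

```python
def transport_fluxes(mn_dot_um, phi_m, phi_p, c):
    """Numerical fluxes of equation (27), seen from the Omega_n^- side.
    mn_dot_um : normal velocity m.u evaluated from the interior (-) trace
    phi_m, phi_p : interior/exterior traces of the phase field
    c : maximum characteristic speed (here, the speed of sound)"""
    avg = 0.5 * (phi_p + phi_m)                # average {{phi}}
    jump = 0.5 * (phi_p - phi_m)               # jump [[phi]]
    hat_flux = mn_dot_um * phi_m               # one-sided "hat" flux
    tilde_flux = mn_dot_um * avg - c * jump    # central + penalty "tilde" flux
    return hat_flux, tilde_flux
```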
The local weak formulation for each equation of the full system can be written for
the Allen-Cahn equation as:
\[\int_{\Omega_{n}}\frac{\partial\phi_{n}}{\partial t}\varphi_{i}d \Omega+\int_{\Omega_{n}}(\mathbf{u}_{n}\cdot\nabla\phi_{n})\varphi_{i}d \Omega-\int_{\partial\Omega_{n}}\widehat{\mathbf{m}}\cdot\widehat{(\mathbf{u} _{n}\phi_{n})}\varphi_{i}dS+\int_{\partial\Omega_{n}}\widehat{\mathbf{m}} \cdot\widehat{(\mathbf{u}_{n}\phi_{n})}\varphi_{i}dS= \tag{28}\] \[=-\int_{\Omega_{n}}\nabla\varphi_{i}\cdot\mathbf{a}_{1}d\Omega+ \oint_{\partial\Omega_{n}}\widehat{\mathbf{a}}_{1}\cdot\widehat{\mathbf{m}} \varphi_{i}dS; \tag{29}\]
the level-set equation as:
\[\int_{\Omega_{n}}\frac{\partial\psi_{n}}{\partial t}\varphi_{i}d \Omega+\int_{\Omega_{n}}(\mathbf{u}_{n}\cdot\nabla\psi_{n})\varphi_{i}d \Omega-\int_{\partial\Omega_{n}}\widehat{\mathbf{m}}\cdot\widehat{(\mathbf{u} _{n}\psi_{n})}\varphi_{i}dS+\int_{\partial\Omega_{n}}\widehat{\mathbf{m}} \cdot\widehat{(\mathbf{u}_{n}\psi_{n})}\varphi_{i}dS=0; \tag{30}\]
the mass conservation of phase 1 as:
\[\int_{\Omega_{n}}\frac{\partial(\rho_{1}\phi_{1})_{n}}{\partial t }\varphi_{i}d\Omega -\int_{\Omega_{n}}(\rho_{1}\phi_{1})_{n}\mathbf{u}_{n}\cdot \nabla\varphi_{i}d\Omega+\oint_{\partial\Omega_{n}}\widehat{(\rho_{1}\phi_{1}) _{n}\mathbf{u}_{n}}\cdot\widehat{\mathbf{m}}\varphi_{i}dS= \tag{31}\] \[=-\int_{\Omega_{n}}\nabla\varphi_{i}\cdot\mathbf{R}_{1}d\Omega+ \oint_{\partial\Omega_{n}}\widehat{\mathbf{R}}_{1}\cdot\widehat{\mathbf{m}} \varphi_{i}dS; \tag{32}\]
the mass conservation of phase 2 as:
\[\int_{\Omega_{n}}\frac{\partial(\rho_{2}\phi_{2})_{n}}{\partial t }\varphi_{i}d\Omega -\int_{\Omega_{n}}(\rho_{2}\phi_{2})_{n}\mathbf{u}_{n}\cdot \nabla\varphi_{i}d\Omega+\oint_{\partial\Omega_{n}}\widehat{(\rho_{2}\phi_{2} )_{n}\mathbf{u}_{n}}\cdot\widehat{\mathbf{m}}\varphi_{i}dS= \tag{33}\] \[=-\int_{\Omega_{n}}\nabla\varphi_{i}\cdot\mathbf{R}_{2}d\Omega+ \oint_{\partial\Omega_{n}}\widehat{\mathbf{R}}_{2}\cdot\widehat{\mathbf{m}} \varphi_{i}dS; \tag{34}\]
the total momentum conservation as:
\[\int_{\Omega_{n}}\frac{\partial(\rho\mathbf{u})_{n}}{\partial t }\varphi_{i}d\Omega -\int_{\Omega_{n}}(\rho_{n}\mathbf{u}_{n}\otimes\mathbf{u}_{n}+P_{n} \mathbb{I})\cdot\nabla\varphi_{i}d\Omega+\oint_{\partial\Omega_{n}}\widehat{( \rho_{n}\mathbf{u}_{n}\otimes\mathbf{u}_{n}+P_{n}\mathbb{I})}\cdot\widehat{ \mathbf{m}}\varphi_{i}dS= \tag{35}\] \[=-\int_{\Omega_{n}}\nabla\varphi_{i}\cdot(\mathbf{f}\otimes \mathbf{u}_{n})d\Omega+\oint_{\partial\Omega_{n}}\widehat{(\mathbf{f}\otimes \mathbf{u}_{n})}\cdot\widehat{\mathbf{m}}\varphi_{i}dS; \tag{36}\]
and the total energy conservation as:
\[\int_{\Omega_{n}}\frac{\partial(\rho E)_{n}}{\partial t}\varphi_{ i}d\Omega -\int_{\Omega_{n}}(\rho_{n}E_{n}+P_{n})\mathbf{u}_{n}\cdot\nabla \varphi_{i}d\Omega+\oint_{\partial\Omega_{n}}\widehat{(\rho_{n}E_{n}+P_{n}) \mathbf{u}_{n}}\cdot\widehat{\mathbf{m}}\varphi_{i}dS= \tag{37}\] \[=-\int_{\Omega_{n}}\nabla\varphi_{i}\cdot(\mathbf{f}k+\sum_{l=1}^ {2}\rho_{l}H_{l}\mathbf{a}_{l})d\Omega+\oint_{\partial\Omega_{n}}(\mathbf{f}k+ \sum_{l=1}^{2}\rho_{l}H_{l}\mathbf{a}_{l})\cdot\widehat{\mathbf{m}}\varphi_{ i}dS. \tag{38}\]
The numerical fluxes related to the interface capturing technique (_i.e._ terms involving \(\mathbf{a}_{l}\)) are discretised using the symmetric interior penalty method whereas the remaining numerical convective fluxes are discretised using the Lax-Friedrichs flux.
Finally, it is relevant to present one additional detail related to the link between the phase field and the additional level-set function \(\psi\) and their respective dynamics. The function \(\psi\) can be interpreted as a smoother version of the phase field which is used to compute the interface normal vectors.
However, it is easy to see that in the previous equations, for a given velocity field, the coupling is only one-way: the level-set equation affects the phase field evolution through the computation of the normal vectors, but there is no information flowing from the phase field equation back to the dynamics of the level-set. Consequently, after a sufficiently long time, the interfaces defined by the two functions will start to drift apart, leading to completely erroneous results. For this reason, the function \(\psi\) needs to be periodically re-initialised. In this work we followed the strategy proposed by [36] where, every 2000 time steps, the following re-initialisation equation
\[\frac{\partial\psi}{\partial\tau}+sgn(\psi_{0})(1-||\nabla\psi||)=\nabla \cdot(\nu_{h}\nabla\psi), \tag{39}\]
is solved in pseudo-time from the initial condition \(\psi_{0}=\phi-\frac{1}{2}\). The quantity
\[sgn(\psi_{0})=\tanh\left(\frac{\psi_{0}}{2\epsilon||\nabla\psi_{0}||}\right)\]
is a smeared sign function and \(\nu_{h}\) is a vanishing viscosity proportional to the grid size. Since a smooth profile for the level-set function is only needed inside a narrow band localised in proximity of the interface, the re-initialisation equation is integrated only for a limited number of pseudo-time steps. It was found that applying approximately 50 iterations every 2000 physical time steps was able to (i) provide a sufficiently smooth profile of the interface, (ii) avoid decoupling between phase field and level-set and (iii) keep the computational cost of the re-initialisation process relatively low. As for the phase field equation, the numerical diffusive flux in the re-initialisation equation is discretised using the symmetric interior penalty approach, whereas the smeared sign hyperbolic term is treated as a simple source term. It is worthwhile mentioning that even if a re-initialisation step is herein considered, the mass conservation errors associated with this process are identically zero, since the level-set function is only used to compute the interface normal vectors and not to track the interface (as is normally done in classical level-set methods).
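The re-initialisation procedure can be summarised by the schematic 1D sketch below, where the DG/interior-penalty discretisation of the paper is replaced, for brevity, by simple centred finite differences; all parameter values are illustrative.

```python
import numpy as np

def reinitialise(psi0, dx, eps, n_iter=50, cfl=0.3):
    """Pseudo-time integration of the re-initialisation equation (39) in 1D.
    A schematic finite-difference stand-in for the DG discretisation."""
    psi = psi0.copy()
    nu_h = dx                                   # vanishing viscosity ~ grid size
    dtau = cfl * dx
    grad0 = np.abs(np.gradient(psi0, dx))
    sgn = np.tanh(psi0 / (2.0 * eps * np.maximum(grad0, 1e-12)))  # smeared sign
    for _ in range(n_iter):
        grad = np.abs(np.gradient(psi, dx))
        lap = np.gradient(np.gradient(psi, dx), dx)
        psi += dtau * (nu_h * lap - sgn * (1.0 - grad))
    return psi
```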
It is important to underline that the present methodology improves on the previously cited works from several points of view, increasing the accuracy, the simplicity of implementation or the generality of each model. The quasi-conservative approach presented in [38] is used to preserve contact discontinuities in the five equation model and is herein further improved by avoiding explicit limiting of the volume fraction, employing instead the explicit sharpening/diffusive terms of the Allen-Cahn equation. In this way the proposed model is characterised by a significant simplicity of implementation and flexibility. For example, on complex unstructured grids, the proposed methodology does not need any ad-hoc limiting, which might become considerably expensive and complex to implement. The present approach also allows the use of much higher orders of approximation with respect to the ones used in [38]. The
advantages of using high orders of approximation will be further discussed in the results section. In the same section, the proposed methodology will also be tested on triangular elements, highlighting the robustness of this approach for unstructured grids. Finally, the model herein presented extends the interface capturing technique proposed by [36] to a fully compressible formulation and it was also possible, by appropriate choices of the parameters, to significantly reduce mass conservation errors of the original technique.
## 4 Results
### Kinematic tests
Regarding the validation of the present methodology, the proposed numerical test cases are sorted into kinematic tests and two-phase flow tests. In the former case, the velocity field is a prescribed analytical function which is used in the phase field and level-set equations. In this way, it is possible to assess the performance of the interface capturing technique in a segregated fashion. The latter set of tests considers the full five equation system, which is used to describe compressible two-phase flows.
#### 4.1.1 Linear-advection of a droplet
The first test case herein considered is a simple advection of a circular droplet with a prescribed interface thickness. The goal of this specific test is to verify that the additional diffusive/sharpening terms, for a given hyperbolic tangent initial condition, do not deteriorate the order of accuracy of the underlying numerical scheme. The hyperbolic tangent profile, in fact, is supposed to maintain its shape and simply be transported by the constant advection velocity.
Consequently, a circular droplet of radius \(R=0.15\), represented by the phase field
\[\phi(\mathbf{x},0)=\frac{1}{2}\Big{(}1+\tanh\left(\frac{r-R}{2\epsilon}\right) \Big{)}\quad\text{with}\quad r=||\mathbf{x}||, \tag{40}\]
is advected by the constant advection velocity \(\mathbf{u}=(1,0)\) in a \([0,1]^{2}\) periodic square for a full period. The parameter \(\epsilon\) is set to be a constant equal to the coarsest mesh resolution (_i.e._\(\epsilon=0.05\)).
Similarly, the level-set, signed-distance function is initialised as:
\[\psi(\mathbf{x},0)=-r. \tag{41}\]
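A direct translation of these initial conditions into Python reads as follows; the grid resolution is arbitrary and, consistently with the figures, the droplet is assumed to be centred in the box.

```python
import numpy as np

R, eps = 0.15, 0.05                      # radius and interface thickness
x = np.linspace(0.0, 1.0, 128)
X, Y = np.meshgrid(x, x, indexing="ij")
r = np.hypot(X - 0.5, Y - 0.5)           # distance from the box centre

phi0 = 0.5 * (1.0 + np.tanh((r - R) / (2.0 * eps)))  # phase field, eq. (40)
psi0 = -r                                            # level-set, eq. (41)
```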
The \(L^{1}\) error of the phase field as a function of the number of elements and different orders of approximation is shown in figure 1. It can be seen that the optimal order of convergence of the spatial discretisation is correctly recovered under mesh refinement.
#### 4.1.2 Rider-Kothe vortex
Another standard test case for interface capturing techniques consists in the deformation of a circular bubble by a shear flow [41]. In particular, a droplet of radius \(R=0.15\) centered at \((0.0,0.25)\) in a
\([-0.5,0.5]^{2}\) periodic domain is advected by the following divergence-free velocity field:
\[u_{1}= \sin^{2}(\pi x_{1})\sin(2\pi x_{2})\cos\left(\frac{\pi t}{T}\right) \tag{42}\] \[u_{2}= -\sin(2\pi x_{1})\sin^{2}(\pi x_{2})\cos\left(\frac{\pi t}{T} \right)\!, \tag{43}\]
where \(T\) denotes the characteristic period of the shear flow. The classical value of \(T=4\) is chosen for this particular test. Under the action of such a velocity field, in the first half of the period, the initial circular bubble is strongly deformed into a thin filament. After velocity reversal, the filament is stretched back to the initial condition.
For this problem, we considered a polynomial discretisation of order 1 and 3 within the DG framework on a series of increasingly refined meshes. In particular, simulations involving \(64^{2}\), \(128^{2}\) and \(256^{2}\) degrees of freedom were considered. Notice that in spectral element methods the total number of degrees of freedom is jointly defined by the number of elements and by the polynomial order of approximation. Consequently, for example, considering a \(3^{\rm rd}\) polynomial order approximation on a \(16\times 16\) grid, the nomenclature will read: \(16\times 16\)p3 (_i.e._ 16 elements times 16 elements for a polynomial approximation of degree 3). The total number of degrees of freedom is defined as \((16\times 4)^{2}=64^{2}\). In comparisons of simulations involving different orders of approximation, a common choice is to match the total number of degrees of freedom (for example, \(16\times 16\)p3 and \(32\times 32\)p1 simulations).
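This nomenclature can be encoded in a one-line helper, shown below as a sanity check of the DoF bookkeeping used throughout the results section.

```python
def total_dof(n_elem_per_dir, p, dim=2):
    """Total degrees of freedom of an isotropic grid of tensor-product
    elements of polynomial order p in `dim` space dimensions."""
    return (n_elem_per_dir * (p + 1)) ** dim

assert total_dof(16, 3) == 64**2   # 16x16p3
assert total_dof(32, 1) == 64**2   # 32x32p1, DoF-matched to the above
```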
Time integration was performed using a classical \(4^{\rm th}\) order Runge-Kutta scheme. Also, in order to test the versatility of the proposed approach, both quadrilateral and triangular meshes were used. Examples of the meshes used in this work are shown in figure 2. In figure 3, the interfacial profiles at half period and after a full period are shown for different levels of mesh refinement using a \(3^{\rm rd}\) order polynomial
Figure 1: \(L^{1}\)-error against number of elements along each direction for different orders of approximation. Optimal orders of convergence are shown by the dashed grey lines.
approximation based on square elements.
Two close-up views of figures 3(a) and 3(b) are shown in figure 4 to better appreciate the convergence of the proposed approach.
In figure 4(a), we can observe that the resolution of the thin filament improves under mesh refinement. In figure 4(b), instead, the recovery of the initial condition can be better appreciated.
In order to further investigate the proposed numerical strategy, lower order simulations using the same approach have been performed, keeping the total number of degrees of freedom fixed. In figure 5 the final location of the interface is shown for two different polynomial orders (1 and 3) at two different
Figure 3: Iso-contour \(\phi=0.5\) after half period (left) and after one full period (right) for the \(16\times 16\)p3, \(32\times 32\)p3 and \(64\times 64\)p3 simulations. The solid black line represents the initial location of the interface.
Figure 2: Examples of computational meshes used in the present work.
resolutions (\(128^{2}\) and \(256^{2}\) DoF).
It can be noticed that the higher order approximation provides more accurate results. For both resolutions, in fact, the p3 simulation outperforms the p1 computation in matching the exact initial solution. In particular, the upper part of the droplet is better approximated; this region is strongly stretched in the first half-period, leading to significant under-resolution. In order to further
Figure 4: Close-up view of the interface iso-contour at half period near breakup (left) and after one full period (right) for the \(16\times 16\)p3, \(32\times 32\)p3 and \(64\times 64\)p3 simulations.
Figure 5: Iso-contour \(\phi=0.5\) after one full period for \(128^{2}\) (left) and \(256^{2}\) DoF (right) with \(1^{\text{st}}\) and \(3^{\text{rd}}\) polynomial order approximations. The solid black line represents the initial location of the interface.
examine how under-resolution behaves, the location of the interface at half-period is shown in figure 6 for the two different polynomial orders. Different zones of the interface are highlighted to better appreciate the differences between the two simulations. It can be seen that in many locations the higher order approximation is generally smoother and better matches the reference solution. In this case, the reference solution is simply a more refined simulation.
These observations further highlight the significant improvement obtained by considering high-order discretisations for the proposed phase field equation. In particular, it is the belief of the authors that low-order approximations not only introduce excessive amounts of numerical dissipation but also provide an inaccurate evaluation of the normal vectors, therefore deteriorating the interface capturing technique. These assumptions will be further confirmed through additional numerical experiments for different orders of approximation.
Secondly, in order to test the robustness and generality of the proposed approach, different types of finite elements were considered. In particular, in the following part of this section we compare quadrilateral and triangular elements (see figure 2).
In figure 7, the location of the interface using quadrilateral and triangular elements is shown for \(128^{2}\) DoF (\(3^{\text{rd}}\) polynomial order). In particular, the solution is shown at half-period on the left and after a full revolution on the right. In figure 7(a) it is interesting to note that the two types of meshes (rectangular and triangular) provide almost exactly the same results in well-resolved regions of the domain, whereas noticeable differences can be observed only in proximity of the tail of the droplet. After one full revolution,
Figure 6: Iso-contour \(\phi=0.5\) after half period for \(128^{2}\) DoF with \(1^{\text{st}}\) () and \(3^{\text{rd}}\) () polynomial order approximations on rectangular elements. Some specific regions of the interface are enlarged to highlight the differences between the two simulations. The simulation with \(256^{2}\) DoF is used as reference.
the upper part of the droplet is slightly better approximated by triangular elements, as can be observed in figure 7(b). It is important to highlight that such results further confirm the robustness and generality of the present approach. Using different types of elements did not require any ad-hoc modification of the underlying technique, which emphasises the simplicity and flexibility of the proposed implementation. The capability of handling triangular elements without deep modifications of the interface capturing technique is a very desirable feature for the simulation of more complex configurations of engineering interest, where unstructured meshes are often required.
In a more quantitative way, the \(L^{1}\)-errors were computed and are listed in table 1, where they are compared with simulations from the literature involving the same number of degrees of freedom and similar interface capturing techniques (_i.e._ based on the phase field approach). It can be observed that the present results are in good agreement with previously published simulations of the same test case. In the same table, the differences between \(1^{\text{st}}\) and \(3^{\text{rd}}\) polynomial order approximations observed in the previous plots can now be quantified more clearly: the \(L^{1}\)-errors of the p3 simulations are always smaller than their p1 counterparts (sometimes even two or three times smaller). Also, the gap between the two approximation orders tends to grow under mesh refinement.
Finally, another feature of great interest for interface capturing techniques is mass conservation. Consequently, the relative mass conservation error was evaluated over time as
\[E_{m}=\frac{\int(\phi(\mathbf{x},t)-\phi(\mathbf{x},0))d\Omega}{\int\phi( \mathbf{x},0)d\Omega}. \tag{44}\]
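Numerically, \(E_{m}\) reduces to a weighted sum over the quadrature points, as in the following short sketch; in the DG setting the weights would be quadrature weights times element Jacobians, while here they are kept generic.

```python
import numpy as np

def mass_error(phi_t, phi_0, w):
    """Relative mass conservation error of equation (44).
    phi_t, phi_0 : phase field at time t and at t = 0 (quadrature points)
    w            : quadrature weights (or cell volumes on a uniform grid)"""
    return np.sum((phi_t - phi_0) * w) / np.sum(phi_0 * w)
```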
The time evolution of the mass conservation error throughout the whole simulation is shown in figure 8 for different mesh resolutions and different orders of approximation.
Figure 7: Iso-contour \(\phi=0.5\) after half period (left) and one full period (right) for the the \(3^{\text{rd}}\) polynomial order simulation with \(128^{2}\) DoF comparing quadrilateral and triangular elements. The solid black line represents the reference solution (highly resolved simulation for \(t=T/2\) and initial solution for \(t=T\)).
Since the transport term in the phase field equation is discretised in a non-conservative form, mass conservation errors are not identically zero. Considering the p1 simulations, the mass conservation errors are relatively small, ranging between \(10^{-8}\) and \(10^{-11}\) as the grid size decreases. Even if these errors are already considerably small for a first order polynomial approximation, it can be observed that the mass conservation errors are even smaller for the p3 simulations, ranging from \(10^{-10}\) for the coarsest mesh to almost machine precision for the most refined simulation.
In the work by Al-Salami et al. [36], for the same case with \(256^{2}\) DoF, mass conservation errors were larger by 11 orders of magnitude.
### Two-phase flows
Following the analysis of the interface capturing technique, we now consider a series of validation tests involving the simulation of two-phase flows using the five equation model presented in the previous section.
\begin{table}
\begin{tabular}{l l l l} \hline \hline Method & \(64^{2}\) & \(128^{2}\) & \(256^{2}\) \\ \hline Mirjalili et al. [42], mesh (a) & \(1.96\times 10^{-2}\) & \(5.43\times 10^{-3}\) & \(1.25\times 10^{-3}\) \\ Al-Salami et al. [36], mesh (a), p3 & \(-\) & \(5.50\times 10^{-3}\) & \(1.05\times 10^{-3}\) \\ Present mesh (a), p3 & \(1.75\times 10^{-2}\) & \(4.83\times 10^{-3}\) & \(1.23\times 10^{-3}\) \\ Present mesh (a), p1 & \(2.22\times 10^{-2}\) & \(8.71\times 10^{-3}\) & \(3.31\times 10^{-3}\) \\ \hline Al-Salami et al. [36], mesh (b), p3 & \(-\) & \(5.51\times 10^{-3}\) & \(1.25\times 10^{-3}\) \\ Present mesh (b), p3 & \(1.58\times 10^{-2}\) & \(4.03\times 10^{-3}\) & \(1.18\times 10^{-3}\) \\ Present mesh (b), p1 & \(1.99\times 10^{-2}\) & \(8.23\times 10^{-3}\) & \(2.65\times 10^{-3}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: \(L^{1}\)-errors for the Rider-Kothe test case compared with respect to simulations from the literature involving the same number of degrees of freedom.
Figure 8: Mass conservation errors for different resolutions on mesh (a) using a p1 (left) and p3 approximation (right) with \(64^{2}\), \(128^{2}\) and \(256^{2}\) DoF.
#### 4.2.1 Advection of a water droplet in air
A popular test case to validate the present implementation consists in the advection of a water droplet in air. The one-dimensional form of this test case has been extensively used as a benchmark validation problem in the multiphase flows community [43, 35, 44, 45].
The goal of this test is to assess the ability of the numerical scheme to preserve the initial circular shape of the droplet for long time integration and to quantify the robustness of the solver in dealing with high-density-ratio interfaces.
Another goal of the water bubble advection case consists in verifying the capability of the numerical discretisation to exactly preserve isolated material interfaces. A well-balanced discretisation of the five equation model should, in fact, maintain pressure and velocities exactly constant during time integration. For this specific case viscous, gravitational and surface tension effects are neglected.
The bubble of radius \(R=25/89\) is located at the center of a \([0,1]^{2}\) periodic square. The material properties for the air medium for this test case are \(\gamma_{1}=1.4\), \(\rho_{1}=1\times 10^{-3}\) and \(P_{1}^{\infty}=0\), whereas for the water medium the fluid properties are \(\gamma_{2}=4.4\), \(\rho_{2}=1\) and \(P_{2}^{\infty}=6\times 10^{3}\).
The initial conditions read:
\[\mathbf{u}=(5,5),\quad P=1,\quad\phi_{1}=\frac{1}{2}\Big{(}1+\tanh\Big{(}\frac {r-R}{2\epsilon}\Big{)}\Big{)}\quad\text{and}\quad\rho=\rho_{2}+(\rho_{1}- \rho_{2})\phi_{1}. \tag{45}\]
The simulation is carried out on different mesh resolutions for a total of five advection periods. \(3^{\text{rd}}\) and \(1^{\text{st}}\) polynomial order DG discretisations were considered and time integration was performed using a \(4^{\text{th}}\) order Runge-Kutta scheme.
In figure 9, the phase field at the end of the simulations is shown for the most refined simulations (equivalent to \(96^{2}\) DoF). It can be noticed that in the p3 simulation the droplet is qualitatively identical to the prescribed initial condition, indicating that the proposed methodology is able to preserve the initial shape of the water bubble even after long time integration. It is worthwhile mentioning that a spurious deformation of the interface along the grid directions is often encountered with low-order approximations of the interface normals [46]. It is known that the use of high-order methods in the advection of the phase field can mitigate such numerical artefacts [46, 47]. The present simulation confirms this behaviour: the p1 discretisation shows a much more evident deformation of the droplet after five advection periods.
These results further confirm the overall benefit of using high-order discretisations already observed in the previous Rider-Kothe test case. In order to have a better understanding of the overall behaviour of all the relevant quantities, in figure 10 the phase field, density, pressure and \(x\)-velocity are plotted along the line \(y=0\) for the most refined p3 simulation (\(24^{2}\)p3). It can be observed that both the phase field and the total density vary smoothly across the interface, preserving the prescribed hyperbolic tangent profile without any spurious oscillation in proximity of the interface.
Finally, we remark that pressure and \(x\)-velocity remain uniform during the simulation as shown in figure 10, indicating that the proposed discretisation is able to fulfil the interface equilibrium condition.
Figure 10: Top row: phase field (left) and density (right) along the line \(y=0\) after five advection periods. Bottom row: slice of pressure (left) and \(x\)-velocity (right) along the line \(y=0\) after five advection periods. Red, numerical simulation; black, exact solution. Notice that the \(y\)-axis is centered around the expected exact value and scaled by \(10^{-6}\).
Figure 9: phase field isocontours after five advection periods for the most refined simulation. Left, \(24^{2}\)p3; right, \(48^{2}\)p1. The interface is represented by the solid black line. The initial condition is shown as a dashed black line.
#### 4.2.2 Rayleigh-Taylor instability
The Rayleigh-Taylor instability occurs when an interface between two fluids with different densities experiences a pressure gradient opposing the density gradient. The domain consists of a \(d\times 4d\) rectangle with \(d=1\). The interface is initially defined by the curve \(y(x)=2d+0.1d\cos{(2\pi x/d)}\). The Rayleigh-Taylor instability is characterised by the Reynolds number \(\mathrm{Re}=(\rho_{1}d^{3/2}||\mathbf{g}||^{1/2})/\mu\) and the Atwood number \(\mathrm{At}=(\rho_{1}-\rho_{2})/(\rho_{1}+\rho_{2})\), which are set to \(3000\) and \(0.5\), respectively. The top boundary is treated with a Riemann-invariant boundary condition with zero velocity and constant pressure, the bottom boundary is a no-slip wall, whereas slip wall boundary conditions are prescribed on the lateral sides of the domain. Two different quadrilateral grids were considered for this study, involving \(5000\) and \(20000\) degrees of freedom (\(10\times 20\)p4 and \(20\times 80\)p4, respectively). Finally, a \(4^{\mathrm{th}}\) order Runge-Kutta scheme was used for time integration.
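Since the case is specified through Re and At, the dimensional parameters follow by inverting their definitions, as in the sketch below; the values of \(\rho_{1}\), \(d\) and \(||\mathbf{g}||\) are illustrative and not taken from the paper.

```python
import numpy as np

rho1, d, g = 1.0, 1.0, 1.0                 # illustrative reference values
Re, At = 3000.0, 0.5

mu = rho1 * d**1.5 * np.sqrt(g) / Re       # from Re = rho1 d^{3/2} |g|^{1/2} / mu
rho2 = rho1 * (1.0 - At) / (1.0 + At)      # from At = (rho1 - rho2)/(rho1 + rho2)

def t_star(t):
    """Non-dimensional time used in figures 12 and 14."""
    return t / np.sqrt(d / (g * At))
```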
The present test is meaningful in that it considers a more complex physical set-up where viscous and gravitational forces drive the dynamics of the system.
A series of snapshots of the phase field are shown in figure 11 for the two grids. Clearly, the flow field arising from the Rayleigh-Taylor instability is much richer than in the previous test case: viscous and gravitational forces lead to complex vortical structures causing non-trivial dynamics of the interface, including primary breakup. These structures are increasingly better resolved under mesh refinement, as can be observed in figure 11.
From a more quantitative point of view, the predicted top and bottom locations of the interface versus the non-dimensional time (\(t^{*}=t/\sqrt{d/(||\mathbf{g}||\mathrm{At})}\)) are shown in figure 12. Excellent agreement with previous studies can be observed.
Similarly to the Rider-Kothe test case, the influence of the order of approximation was investigated. In particular, a \(1^{\mathrm{st}}\) polynomial order simulation has been performed and compared with the \(4^{\mathrm{th}}\) polynomial order computation. A series of snapshots of the phase field are shown in figure 13 for the two different orders of approximation.
It can be noticed that the p1 simulation provides less accurate results, leading to an asymmetric solution. From a more quantitative point of view, the locations of the upper and lower plumes were compared for the two different orders of approximation and are shown in figure 14. It can be noticed that the p4 approximation is always closer to the reference solution throughout the whole simulation, in particular for the prediction of the top location of the interface. To better appreciate the differences between high and low orders of approximation, the locations of the interface at \(t^{*}=1.5\) and \(t^{*}=3.0\) are shown in figures 15 and 16 at different resolutions and different polynomial orders. It can be noticed that the p1 approximation is characterised by smoother, over-dissipated profiles along the tangential direction of the interface. Recall that the smoothness along the normal direction is governed by the interface capturing technique and should not be particularly influenced by the spatial order of approximation; the accuracy along the tangential direction of the interface, instead, should be more sensitive to the approximation order. At later times, when primary break-up occurs, more complex small-scale structures are captured by the high-order simulation, in particular on the finest grid. It can also be noticed that the p4 simulation is characterised by a more stretched interface (_i.e._ the bottom and top locations of the interface are respectively lower and higher with respect to the corresponding p1 simulations). This behaviour is in agreement with what is observed in figure 14.
#### 4.2.3 Rising bubble
In this test case, the rise of a 2D bubble of light fluid within a heavier fluid due to buoyancy is simulated. Initially, a circular bubble of radius \(R=0.25\) is placed at \((0.0,0.5)\) in a \([-0.5,0.5]\times[0,2]\) domain. The density and viscosity of the fluids are chosen such that \(\rho_{1}/\rho_{2}=\mu_{1}/\mu_{2}=10\). The Reynolds
Figure 11: Snapshots of the phase field isocontours for different meshes and times. Top row: isocontour of the phase field at \(t^{*}=0.5\) (left) and \(t^{*}=1.5\) (right). Bottom row: isocontour of the phase field at \(t^{*}=2.5\) (left) and \(t^{*}=3.0\) (right). In each subfigure: left, \(10\times 20\)p4; right \(20\times 80\)p4. The interface is highlighted in black.
number is set to \(\text{Re}=(\rho_{1}d^{3/2}||\mathbf{g}||^{1/2})/\mu_{1}=35\) whereas the Eötvös number is \(E_{0}=4\rho_{1}||\mathbf{g}||R^{2}/\sigma=10\). Similarly to the previous test case, a Riemann-invariant boundary condition with zero velocity and constant pressure is imposed at the upper boundary and the bottom boundary is a no-slip wall. Left and right boundaries are treated as slip walls.
Notice that the reference solution coincides with the results presented in [48].
Figure 17 shows the evolution of the bubble in time as it rises due to buoyancy. Finally, the location of the center of gravity of the bubble and its vertical velocity are shown in figure 18. In the work by Manzanero et al. [49], a similar numerical framework based on the DG method was presented. In their case, a weakly compressible approach was employed and coupled with the Cahn-Hilliard equations. It can be noticed that, in their results, the pressure reflections coming from the wall boundaries cause visible oscillations in the bubble's rising velocity. Our simulation produces similar dynamics of the bubble without spurious oscillations: in our case, a Riemann-invariant boundary condition was prescribed on the upper boundary, avoiding undesirable pressure reflections.
A more challenging test case with a larger density ratio and a smaller surface tension coefficient was also considered. In particular, the density and viscosity of the two phases were chosen such that \(\rho_{1}/\rho_{2}=1000\) and \(\mu_{1}/\mu_{2}=100\). Furthermore, the Eötvös number was increased to \(E_{0}=125\) whereas the Reynolds number remained the same. Under these conditions, the surface tension effects are not strong enough to prevent strong deformations of the bubble.
In figure 19, the evolution of the bubble is shown. It can be seen that at late times the edges of the bubble tend to elongate into considerably thin ligaments. This behaviour is caused by surface tension forces that are insufficiently strong with respect to gravitational forces. Similarly to the previous case, the final location of the interface was compared with a reference solution. It is well known that, for this specific test case, different codes produce slightly different results in terms of the elongated ligaments. In figure 19, at the final time, the present simulation is consequently compared with the results by Manzanero et al. [49], who proposed a similar diffused-interface approach based on the Cahn-Hilliard equations. From this comparison, it can be seen that the predicted location of the interface
Figure 12: The evolution of the top and bottom of the interface for the Rayleigh-Taylor instability versus non-dimensional time for \(10\times 40\)p4 and \(20\times 80\)p4. Symbols indicated the reference solution by Chiu & Lin [33].
agrees well with the reference solution.
Finally, the location of the bubble's center of gravity and the mean rising velocity of the bubble are evaluated over time and are shown in figure 20. The results are compared with the classical reference solution from sharp interface solvers [48] and the diffused-interface method from Manzanero et al. [49]. It can be seen that a better agreement is obtained with the latter reference, due to the similarities between the two approaches.
Figure 13: Snapshots of the phase field isocontours for different orders and times. Top row: isocontour of the phase field at \(t^{*}=0.5\) (left) and \(t^{*}=1.5\) (right). Bottom row: isocontour of the phase field at \(t^{*}=2.5\) (left) and \(t^{*}=3.0\) (right). In each subfigure: left, \(25\times 100\)p\(1\); right, \(10\times 40\)p\(4\). The interface is highlighted in black.
## 5 Conclusions
A high-order numerical approach based on the Discontinuous Galerkin method was proposed for the simulation of two-phase flows. The interface between the two phases was modelled using the conservative Allen-Cahn equation, which was subsequently implemented within a five equation model to describe the motion of two immiscible compressible fluids.
A series of benchmark cases were considered, including both kinematic tests, involving the numerical resolution of the Allen-Cahn equation only, and simulations of the fully coupled five equation model for two-phase flows.
Figure 14: The evolution of the top and bottom of the interface for the Rayleigh-Taylor instability versus non-dimensional time using a p1 (\(50\times 200\)p1) and p4 (\(20\times 80\)p4) approximation. Symbols indicate the reference solution by Chiu & Lin [33].
Figure 15: Iso-contour \(\phi=0.5\) with \(1^{\mathrm{st}}\) and \(4^{\mathrm{th}}\) polynomial order approximations at \(t^{\star}=1.5\). Left, \(50\times 200\) DoF; right \(100\times 400\) DoF.
In the kinematic tests, studying the same approach for different orders of approximation, it was found that higher polynomial orders provided significant advantages in terms of overall accuracy, including smaller \(L^{1}\) errors and mass conservation errors. The same simulations were also performed with both quadrilateral and triangular elements without any ad-hoc modifications of the model's parameters, further underlining the robustness and flexibility of the proposed approach.
Similar observations were made in the simulation of two-phase flows using the five equation model.
Figure 16: Iso-contour \(\phi=0.5\) with \(1^{\mathrm{st}}\) and \(4^{\mathrm{th}}\) polynomial order approximations at \(t^{*}=3.0\). Left, \(50\times 200\) DoF; right \(100\times 400\) DoF.
Figure 17: Location of the interface at different times for the most refined simulation (\(20\times 40\)p4). In order, \(t=0.0,1.0,2.0,3.0\). At final time, symbols indicate the reference solution by Hysing et al. [48].
Lower orders of approximation were characterised by more pronounced spurious features such as deformations of the interface and artificial break-up. Finally, more complex problems involving viscous effects, gravitational forces, and surface tension were considered. The proposed methodology successfully recovered accurate solutions in agreement with existing simulations in the literature.
Overall, the proposed methodology showed the capability of preserving fundamental features of two-phase flows such as the boundedness of the phase field, accurate computation of the interface normal vectors, small mass conservation errors and exact resolution of contact discontinuities, without, at the same time, giving up the significant benefits coming from high-order spatial discretisations. All of this is achieved with only minor modifications of the central core of the DG method, by choosing appropriate numerical fluxes and parameters of the model. For the same reason, the methodology herein presented did not show the need for ad-hoc modifications for different types of elements or orders
Figure 19: Location of the interface at different times for the most refined simulation (\(20\times 40\)p4). In order, \(t=0.0,1.0,2.0,3.0\). At final time, symbols indicate the reference solution from [49].
Figure 18: Location of the bubble’s center of gravity (left) and bubble’s vertical velocity (right) over time for the present simulation \(10\times 20\)p4, \(20\times 40\)p4 and results by Manzanero et al. [49].
of approximation, further emphasising its robustness, flexibility and generality.
## Acknowledgments
Financial support from NSF (Award 1909379) and Daikin is greatly appreciated.
|
2303.11871 | Multidimensional pseudo-Leja sequences | The one-dimensional pseudo Leja sequences introduced in
\cite{bialas2012pseudo}, as an alternative to Leja sequences, provide us with
good interpolation nodes for the approximation of holomorphic functions. We
propose a definition of multidimensional pseudo Leja sequences associated to a
compact set $K$ of the complex space $\mathbb{C}^p$ which generalises both the
one-dimensional version and the multidimensional Leja sequences. We show that
these sequences can be used to calculate the transfinite diameter of $K$. We
also present a relation to the pluricomplex Green function associated to $K$.
Subsequently, we show that the intertwining of pseudo Leja sequences is still
a pseudo Leja sequence. We give a method to compute pseudo Leja sequences with
the help of discrete meshes. | Dimitri Jordan Kenne | 2023-03-21T14:16:51Z | http://arxiv.org/abs/2303.11871v2 | # Multidimensional pseudo-Leja sequences
###### Abstract
The one-dimensional pseudo-Leja sequences introduced in [2], as an alternative to Leja sequences, provide us with good interpolation nodes for the approximation of holomorphic functions. Here, we propose a definition of multidimensional pseudo-Leja sequences associated with a compact set \(K\) of the complex space \(\mathbb{C}^{n}\) which generalizes well both the one-dimensional version and the multidimensional Leja sequences. We show that these sequences can be used to calculate the transfinite diameter of \(K\). We also present a relation to the pluricomplex Green function associated with \(K\). Subsequently, we show that pseudo-Leja sequences for a Cartesian product of compact sets (which can be of different dimensions) are obtained by intertwining the pseudo-Leja sequences of the underlying compact sets. We prove that these intertwining sequences provide good interpolation points in the case of a Cartesian product of compact planar sets. Finally, we give a method to extract pseudo-Leja sequences from admissible meshes.
**Keywords:** Lagrange interpolation, Leja sequences, transfinite diameter, Green function, admissible meshes, intertwining sequences
## 1 Introduction
Given a set of \(N\) points \(\Omega_{N}=\{\zeta_{N1},\ldots,\zeta_{NN}\}\subset\mathbb{C}^{n}\) such that the Vandermonde determinant is non-zero, i.e.
\[\mathrm{VDM}(\zeta_{N1},\ldots,\zeta_{NN})\neq 0, \tag{1}\]
we can form the **Fundamental Lagrange Interpolation Polynomials** (FLIP)
\[l_{j}^{(N)}(z):=\frac{\mathrm{VDM}(\zeta_{N1},\ldots,\zeta_{N,j-1},z,\zeta_{N,j+1},\ldots,\zeta_{NN})}{\mathrm{VDM}(\zeta_{N1},\ldots,\zeta_{NN})},\quad j=1,\ldots,N. \tag{2}\]
For a function \(f\) defined at the points in \(\Omega_{N}\),
\[L_{\Omega_{N}}f(z):=\sum_{j=1}^{N}f(\zeta_{Nj})l_{j}^{(N)}(z) \tag{3}\]
is the **Lagrange Interpolation Polynomial** (LIP) associated with \(f\) and the points in \(\Omega_{N}\).
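In one complex variable with the monomial basis, the determinant ratio (2) reduces to the familiar product formula \(l_{j}(z)=\prod_{i\neq j}(z-\zeta_{i})/(\zeta_{j}-\zeta_{i})\), so the LIP (3) can be evaluated directly, as in the following (numerically naive) Python sketch.

```python
import numpy as np

def lagrange_interp(nodes, values, z):
    """Evaluate the Lagrange interpolation polynomial (3) at z, using
    the one-variable product form of the FLIPs (2)."""
    total = 0.0
    for j, xj in enumerate(nodes):
        lj = 1.0
        for i, xi in enumerate(nodes):
            if i != j:
                lj *= (z - xi) / (xj - xi)
        total += values[j] * lj
    return total

nodes = np.exp(2j * np.pi * np.arange(4) / 4)     # 4th roots of unity
vals = np.exp(nodes)                              # samples of f(z) = e^z
print(lagrange_interp(nodes, vals, 0.1 + 0.2j))   # approximates e^{0.1+0.2i}
```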
The problem of determining "good" interpolation nodes for approximating holomorphic functions is a question of great interest. An array of interpolation nodes contained in a compact set \(K\subset\mathbb{C}^{n}\) is said to be "good" or "extremal" (like in [2]) for polynomial interpolation if it guarantees the uniform convergence on \(K\) of the Lagrange interpolation polynomials \(L_{\Omega_{N}}f\) to the function \(f\), where \(f\) is a holomorphic function in an open neighborhood of \(K\) (we write \(f\in\mathcal{O}(K)\)). One well-known solution to this problem is the array of Fekete points: a Fekete set of order \(N\) for a compact set \(K\subset\mathbb{C}^{n}\) is a set of \(N\) elements, \(\zeta_{N1},\ldots,\zeta_{NN}\), which maximizes the absolute value of the Vandermonde determinant, i.e.
\[|\mathrm{VDM}(\zeta_{N1},\ldots,\zeta_{NN})|=\sup_{z_{1},\ldots,z_{N}\in K}| \mathrm{VDM}(z_{1},\ldots,z_{N})|. \tag{4}\]
Unfortunately, Fekete points are very difficult to determine explicitly and the numerical computation of their discrete version is very expensive. Another solution utilizes the Leja sequences: a
Leja sequence for a compact set \(K\subset\mathbb{C}^{n}\) is a sequence \((\xi_{j})_{j\geq 0}\) such that \(\xi_{0}\) is any arbitrary point (preferably on the boundary \(\partial K\)) and for each \(N\geq 1\),
\[|\mathrm{VDM}(\xi_{0},\ldots,\xi_{N-1},\xi_{N})|=\sup_{z\in K}|\mathrm{VDM}( \xi_{0},\ldots,\xi_{N-1},z)|. \tag{5}\]
From the maximum principle for holomorphic functions, we know that Fekete points and Leja points are always located on the boundary of \(K\).
It is convenient to use sequences of points (e.g. Leja sequences) rather than triangular arrays of points (e.g. Fekete points) as interpolation nodes. Indeed, sequences allow one to determine the interpolation polynomial of order \(N\) (\(N\) nodes needed) by keeping the \(N-1\) points used in the interpolation of order \(N-1\), to which we add a single new point. Thus, we do not renew the whole set of nodes as in the case of an array of points. However, Leja sequences are also hard to compute. As far as we know, explicit expressions of Leja points in the one-dimensional case are only known for disks (see [2]). In the multivariate case, it is shown in [9] that one can construct Leja sequences for poly-disks by intertwining the Leja sequences of disks.
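In dimension one, the greedy step (5) reduces to maximizing \(\prod_{i}|z-\xi_{i}|\), since the remaining factors of the Vandermonde determinant do not depend on \(z\). The sketch below (our own illustration; the discretization of the boundary and the starting point are assumptions) approximates a Leja sequence for the closed unit disk.

```python
import numpy as np

cand = np.exp(2j * np.pi * np.arange(2000) / 2000)   # fine discretization of the unit circle

pts = [1.0 + 0.0j]                                   # xi_0 chosen on the boundary
for _ in range(7):
    # maximizing |VDM(xi_0, ..., xi_{N-1}, z)| amounts to maximizing prod |z - xi_i|
    prods = np.ones(len(cand))
    for p in pts:
        prods *= np.abs(cand - p)
    pts.append(cand[int(np.argmax(prods))])

print(np.round(pts, 3))   # approximately 1, -1, then +/-i, then 8th roots of unity
```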
In this paper, we study a new class of interpolation points which generalize the Leja points but are not necessarily located on the boundary of the compact set \(K\). These are the pseudo-Leja sequences whose one-dimensional version was introduced in [2]. Here, we propose a multidimensional version of these sequences (see Definition 2.1). For a compact \(K\subset\mathbb{C}\), a pseudo-Leja sequence \((\xi_{j})_{j\geq 0}\) of Edrei growth \((M_{j})_{j\geq 1}\) is a sequence verifying
\[M_{j}\prod_{i=0}^{j-1}|\xi_{j}-\xi_{i}|\geq\sup_{z\in K}\prod_{i=0}^{j-1}|z- \xi_{i}|,\quad\text{for all }j\geq 1 \tag{6}\]
and \((M_{j})_{j\geq 1}\subset\mathbb{R}\) is of sub-exponential growth. It has been shown in [2] that one-dimensional pseudo-Leja sequences are also "good" for polynomial interpolation and that in contrast to classical Leja sequences they are reasonably easy to compute numerically thanks to the use of (weakly) admissible meshes.
The present paper is organized as follows. In the following section, we define the multidimensional pseudo-Leja sequences. Later we will show in Section 3 that pseudo-Leja sequences can be used to define the transfinite diameter and the pluricomplex Green's function of compact sets. The main results of this part are presented in Theorem 3.3 and Proposition 3.2 of the present paper.
Section 4 is devoted to the construction of multidimensional pseudo-Leja sequences for compact sets of the form \(K=K_{1}\times\cdots\times K_{m}\subset\mathbb{C}^{n}\), where \(K_{i}\subset\mathbb{C}^{n_{i}}\) for \(i=1,\ldots,m\), and \(n_{1}+\cdots+n_{m}=n\). We will generalize the notion of intertwining sequences used in [9] in order to define multidimensional intertwining sequences. The main result of this section is given in Theorem 4.2.
In Theorem 5.3 of Section 5, we show that in the case of a product set \(K=K_{1}\times\cdots\times K_{n}\) of compact planar sets (i.e. \(n_{1}=\cdots=n_{m}=1\) and \(m=n\)), the pseudo-Leja sequences are extremal points for polynomial interpolation.
Finally, in Section 6, we will see via Theorem 6.1 that pseudo-Leja sequences can be easily extracted from weakly admissible meshes.
_Notations._ Let \(\mathbb{N}=\{1,2,3,4,\ldots\}\) and \(\mathbb{N}_{0}=\mathbb{N}\cup\{0\}\).
## 2 Pseudo-Leja sequences
For every \(p\in\mathbb{N}\), we denote by
\[\begin{array}{rcl}k^{(p)}:&\mathbb{N}_{0}&\longrightarrow&\mathbb{N}_{0}^{p} \\ &N&\longmapsto&k^{(p)}(N)=(k_{1}^{(p)}(N),\ldots,k_{p}^{(p)}(N))\end{array} \tag{7}\]
an enumeration on \(\mathbb{N}_{0}^{p}\) satisfying \(\left|k^{(p)}(i)\right|:=k_{1}^{(p)}(i)+\cdots+k_{p}^{(p)}(i)\leq\left|k^{(p) }(j)\right|\) for \(i\leq j\). For every \(p\in\mathbb{N}\), consider the monomials defined for all \(z=(z_{1},\ldots,z_{p})\in\mathbb{C}^{p}\) by
\[e_{N}^{(p)}(z):=z^{k^{(p)}(N)}=z_{1}^{k_{1}^{(p)}(N)}\cdots z_{p}^{k_{p}^{(p)}(N)},\quad N\geq 0, \tag{8}\]
the spaces of polynomials
\[\mathcal{P}_{N}(\mathbb{C}^{p}):=\mathrm{span}\{e_{i}^{(p)}:\ i=0,\ldots,N-1\},\quad N\geq 1, \tag{9}\]
and the Vandermonde determinant of a set of points \(\{\xi_{0},\ldots,\xi_{q}\}\subset\mathbb{C}^{p}\) (\(q\in\mathbb{N}_{0}\)) defined as
\[\mathrm{VDM}^{(p)}(\xi_{0},\ldots,\xi_{q}):=\det\left[e_{N}^{(p)}(\xi_{M})\right]_{0\leq M,N\leq q} \tag{10}\]
with the convention \(\mathrm{VDM}^{(p)}(\xi_{0}):=1\).
Let \(h_{d}^{(p)}:=\binom{p+d}{d}\) be the dimension of the space of polynomials of \(p\in\mathbb{N}\) complex variables and of degree at most \(d\in\mathbb{N}_{0}\) (denoted by \(\mathbb{C}_{d}[z_{1},\ldots,z_{p}]:=\mathcal{P}_{h_{d}^{(p)}}(\mathbb{C}^{p})\)). We also denote by \(l_{d}^{(p)}:=\sum_{i=1}^{d}i\,(h_{i}^{(p)}-h_{i-1}^{(p)})=p\binom{p+d}{p+1}\) the total degree of \(\mathrm{VDM}^{(p)}(\xi_{0},\ldots,\xi_{h_{d}^{(p)}-1})\) regarded as a polynomial in \(\xi_{0},\ldots,\xi_{h_{d}^{(p)}-1}\).
**Definition 2.1**.: A **pseudo-Leja sequence** \(\mathcal{L}\) for a compact set \(K\subset\mathbb{C}^{p}\) (\(p\geq 1\)) is a sequence \((\xi_{j})_{j\geq 0}\subset K\) for which there exists a sequence of real numbers \((M_{j})_{j\geq 1}\) satisfying:

1. \(M_{j}\left|\mathrm{VDM}^{(p)}(\xi_{0},\ldots,\xi_{j-1},\xi_{j})\right|\geq\max_{z\in K}\left|\mathrm{VDM}^{(p)}(\xi_{0},\ldots,\xi_{j-1},z)\right|\) for any \(j\geq 1\),

2. \(\lim_{d\to+\infty}\left(\max_{h_{d-1}^{(p)}\leq j<h_{d}^{(p)}}M_{j}\right)^{1/d}=1.\)

We say that \(\mathcal{L}\) is a pseudo-Leja sequence of Edrei growth \((M_{j})_{j\geq 1}\).
This multidimensional version of pseudo-Leja sequences is consistent with the one-dimensional pseudo-Leja sequence introduced in [2]. A classical Leja sequence is a pseudo-Leja sequence of Edrei growth 1.
**Lemma 2.1**.: _If \(\{M_{j}\}_{j\geq 1}\) is a sequence of real numbers satisfying Property 2 from the definition of a pseudo-Leja sequence, then_

\[\lim_{d\to+\infty}\left(\prod_{j=1}^{h_{d}^{(p)}-1}M_{j}\right)^{1/l_{d}^{(p)}}=1. \tag{11}\]
Proof.: We notice that

\[1\leq\prod_{j=1}^{h_{d}^{(p)}-1}M_{j}\leq\left(\max_{j=1,\ldots,h_{d}^{(p)}-1}M_{j}\right)^{h_{d}^{(p)}}\]

and \(\frac{h_{d}^{(p)}}{l_{d}^{(p)}}=\frac{1}{d}\left(\frac{p+1}{p}\right)\). Moreover, we have

\[\max_{j=1,\ldots,h_{d}^{(p)}-1}M_{j}=\max_{h_{j(d)-1}^{(p)}\leq j<h_{j(d)}^{(p)}}M_{j}\]

for some \(j(d)\in\{1,\ldots,d\}\). Let us choose \(j(d)\) to be the minimal one. The sequence \(\{j(d)\}_{d\geq 1}\subset\mathbb{N}\) is non-decreasing, hence it either converges or tends to \(+\infty\):

* If \(\lim_{d\to+\infty}j(d)=+\infty\), then \(\lim_{d\to+\infty}\left(\max_{h_{j(d)-1}^{(p)}\leq j<h_{j(d)}^{(p)}}M_{j}\right)^{1/j(d)}=1\) by definition and therefore \[\lim_{d\to+\infty}\left(\max_{h_{j(d)-1}^{(p)}\leq j<h_{j(d)}^{(p)}}M_{j}\right)^{1/d}=1\] since \(0<j(d)\leq d\).

* If \(\lim_{d\to+\infty}j(d)<+\infty\), then \(\limsup_{d\to+\infty}\left(\max_{h_{j(d)-1}^{(p)}\leq j<h_{j(d)}^{(p)}}M_{j}\right)^{1/j(d)}<+\infty\) and \(\lim_{d\to+\infty}\frac{j(d)}{d}=0\), which also leads to the desired result.
The above Lemma 2.1 will be useful for the calculations in the following sections.
## 3 Transfinite diameter and pluricomplex Green function
In this section, we set \(V_{j}(K):=\sup\limits_{\xi_{0},\ldots,\xi_{j-1}\in K}\left|\mathrm{VDM}^{(n)}(\xi_ {0},\ldots,\xi_{j-1})\right|\) for any set \(K\subset\mathbb{C}^{n}\) and for \(j\in\mathbb{N}\). We shall write \(V_{j}\) instead of \(V_{j}(K)\) when there is no ambiguity. The objective here is to establish some relations between pseudo-Leja sequences and the notions of transfinite diameter and pluricomplex Green function.
**Definition 3.1**.: [15] The **transfinite diameter** of a compact set \(K\subset\mathbb{C}^{n}\) is the constant
\[D(K):=\limsup\limits_{d\to+\infty}D_{d}(K), \tag{12}\]
where \(D_{d}(K):=\left(V_{h_{d}^{(n)}}\right)^{1/l_{d}^{(n)}}\) is called the \(d\)-th order transfinite diameter of \(K\).
Fekete proved in [8] that the limit \(D(K)\) exists for any compact set \(K\subset\mathbb{C}\) (i.e when \(n=1\)). Later in [11], Leja introduced the name "transfinite diameter" and thus posed the problem of its existence when \(n\geq 2\). A positive answer to his problem was given by Zaharjuta in [15] as we will see below in Proposition 3.1.
We consider the class of normalized polynomials
\[\mathcal{P}^{i}:=\left\{P_{i}(z)=e_{i}^{(n)}(z)+\sum\limits_{0\leq j<i}c_{j}e_ {j}^{(n)}:\ c_{j}\in\mathbb{C}\right\}\quad\text{for $i\in\mathbb{N}_{0}$}. \tag{13}\]
**Definition 3.2**.: [15]
* We call \(\tau_{i}:=\left[\inf\{\left\|P_{i}\right\|_{K}:\ P_{i}\in\mathcal{P}^{i}\} \right]^{1/\left|k^{(n)}(i)\right|}\) the \(i-\)**th Chebyshev constant** of \(K\).
* The limit \[\tau(K,\theta):=\limsup_{j\to+\infty,\ \frac{k^{(n)}(j)}{|k^{(n)}(j)|}\to\theta}\tau_{j}\] (14) is called the **directional Chebyshev constant** of \(K\) in the \(\theta\)-direction, where \(\theta=(\theta_{1},\ldots,\theta_{n})\) is a point of the standard \(n\)-simplex \[\Sigma=\left\{\theta\in\mathbb{R}^{n}:\ \sum\limits_{i=1}^{n}\theta_{i}=1,\text{ and }\theta_{i}\geq 0\text{ for }i=1,\ldots,n\right\}.\]
**Proposition 3.1**.: _[_15_, Lemma 6 and Theorem 1]_ _For every \(d\in\mathbb{N}\), the geometric mean of Chebyshev constants_
\[\tau_{d}^{0}:=\left(\prod\limits_{\left|k^{(n)}(i)\right|=d}\tau_{i}\right)^{1/(h_{d}^{(n)}-h_{d-1}^{(n)})} \tag{15}\]
_converges and_
\[\lim\limits_{d\to+\infty}\tau_{d}^{0}=\exp\left[\frac{1}{meas(\Sigma)}\int_{ \Sigma}\log(\tau(K,\theta))\ d\theta\right]=D(K) \tag{16}\]
_where \(meas(\Sigma):=\int_{\Sigma}d\theta\)._
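_Example_.: (A classical computation, recalled here for illustration.) For the closed unit disk \(K=\overline{\mathbb{D}}\subset\mathbb{C}\), the monic polynomial of degree \(i\) of smallest sup-norm on \(K\) is \(z^{i}\) itself: by the Cauchy estimate, \(1\leq\|P_{i}\|_{K}\) for every monic \(P_{i}\) of degree \(i\). Hence \(\tau_{i}=1\) for all \(i\geq 1\), so \(\tau_{d}^{0}=1\) for every \(d\) and \(D(\overline{\mathbb{D}})=1\); by scaling, \(D(\overline{D}(0,r))=r\).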
**Lemma 3.1**.: _We have_
\[\lim\limits_{d\to+\infty}\left(\prod\limits_{j=1}^{d}(\tau_{j}^{0})^{r_{j}} \right)^{1/l_{d}^{(n)}}=D(K) \tag{17}\]
_where \(r_{j}=j(h_{j}^{(n)}-h_{j-1}^{(n)})\) for \(j\in\mathbb{N}\)._
Proof.: Let \((u_{d})_{d\in\mathbb{N}}\subset\mathbb{R}\) be a sequence converging to a certain \(a\in\mathbb{R}\). Consider any array of points \(\{\theta_{d,j}\}_{d,j\in\mathbb{N}}\subset[0,+\infty)\) verifying \(\lim\limits_{d\rightarrow+\infty}\max\limits_{j=1,\ldots,d}\theta_{d,j}=0\) and \(\sum\limits_{j=1}^{d}\theta_{d,j}=1\) for all \(d\in\mathbb{N}\). We want to prove that the sequence \(\left(s_{d}:=\sum_{j=1}^{d}\theta_{d,j}u_{j}\right)_{d\in\mathbb{N}}\) also converges to \(a\). Fix \(\varepsilon>0\). By the hypothesis, there exist \(N_{\varepsilon}^{(1)},N_{\varepsilon}^{(2)}\in\mathbb{N}\) such that for any \(d\in\mathbb{N}\)
\[d\geq N_{\varepsilon}^{(1)} \Longrightarrow|u_{d}-a|<\varepsilon/2,\] \[d\geq N_{\varepsilon}^{(2)} \Longrightarrow\max\limits_{j=1,\ldots,d}\theta_{d,j}\leq\frac{ \varepsilon}{2N_{\varepsilon}^{(1)}\max\limits_{j=1,\ldots,N_{\varepsilon}^{( 1)}}|u_{j}-a|}.\]
For each \(d\in\mathbb{N}\), \(d\geq N_{\varepsilon}:=\max(N_{\varepsilon}^{(1)},N_{\varepsilon}^{(2)})\) we have
\[|s_{d}-a| \leq\sum_{j=1}^{N_{\varepsilon}^{(1)}}\theta_{d,j}|u_{j}-a|+\sum_ {j=N_{\varepsilon}^{(1)}+1}^{d}\theta_{d,j}|u_{j}-a|\] \[\leq\max\limits_{j=1,\ldots,N_{\varepsilon}^{(1)}}|u_{j}-a|\sum_ {j=1}^{N_{\varepsilon}^{(1)}}\theta_{d,j}+\frac{\varepsilon}{2}\sum_{j=N_{ \varepsilon}^{(1)}+1}^{d}\theta_{d,j}\] \[\leq N_{\varepsilon}^{(1)}\max\limits_{j=1,\ldots,N_{\varepsilon }^{(1)}}|u_{j}-a|\max\limits_{j=1,\ldots,d}\theta_{d,j}+\frac{\varepsilon}{2}\] \[\leq\frac{\varepsilon}{2}+\frac{\varepsilon}{2}=\varepsilon.\]
So, \(|s_{d}-a|\leq\varepsilon\) for every \(d\geq N_{\varepsilon}\) with \(\varepsilon>0\) arbitrary, i.e. \(s_{d}\) tends to \(a\) as \(d\rightarrow+\infty\). Take \(u_{d}=\log\bigl(\tau_{d}^{0}\bigr)\) and \(\theta_{d,j}=\frac{r_{j}}{l_{d}^{(n)}}\) in order to obtain (17).
Before stating our first result that emphasizes the relationship between the pseudo-Leja points and the notion of transfinite diameter, we must clarify the notion of unisolvence.
**Definition 3.3**.:
* A set \(K\subset\mathbb{C}^{n}\) is said to be **determining** for a space of functions \(\mathcal{F}\) (shortly \(\mathcal{F}\)-determining) if, \(P\in\mathcal{F}\) and \(P=0\) on \(K\) imply \(P=0\) in \(\mathbb{C}^{n}\). We say that \(K\) is **unisolvent** for \(\mathcal{F}\) when \(\mathcal{F}\) has a finite dimension, \(K\) is \(\mathcal{F}\)-determining and the cardinality of \(K\) is equal to the dimension of \(\mathcal{F}\).
* A sequence \((\xi_{i})_{i\geq 0}\subset\mathbb{C}^{n}\) is (**completely**) **unisolvent** if for all \(N\geq 0\), the set \(\{\xi_{0},\ldots,\xi_{N-1}\}\) is unisolvent for the space \(\mathcal{P}_{N}(\mathbb{C}^{n})\), i.e. \(\operatorname{VDM}^{(n)}(\xi_{0},\ldots,\xi_{N-1})\neq 0\) according to Lemma 3.2.
It is obvious that the cardinality of an \(\mathcal{F}\)-determining set cannot be smaller than the dimension of \(\mathcal{F}\). We will simply say that \(K\) is determining to mean that \(K\) is determining for the whole space of polynomials \(\bigcup\limits_{N\geq 1}\mathcal{P}_{N}(\mathbb{C}^{n})\).
**Lemma 3.2**.: _For every \(N\in\mathbb{N}\) the following properties are equivalent:_
1. \(K\) _is determining for_ \(\mathcal{P}_{N}(\mathbb{C}^{n})\)_,_
2. \(V_{j}(K)=\sup\limits_{\xi_{0},\ldots,\xi_{j-1}\in K}\left|\operatorname{VDM}^{ (n)}(\xi_{0},\ldots,\xi_{j-1})\right|\neq 0\) _for_ \(j=0,\ldots,N\)_,_
3. \(V_{N}(K)\neq 0\)_._
Proof.: It is clear that \(2\Longrightarrow 3\), and using the Lagrange interpolation formula we easily see that \(3\Longrightarrow 1\).

Let us show that \(1\Longrightarrow 2\). It is obvious that \(V_{1}=1\). Suppose that \(V_{j}\neq 0\) for some \(1\leq j<N\) and let \(\{\xi_{0},\ldots,\xi_{j-1}\}\subset K\) be such that \(\operatorname{VDM}^{(n)}(\xi_{0},\ldots,\xi_{j-1})\neq 0\). Then \(Q_{j}(\xi):=\operatorname{VDM}^{(n)}(\xi_{0},\ldots,\xi_{j-1},\xi)=\sum_{i=0}^{j}c_{i}e_{i}^{(n)}(\xi)\) where \(c_{j}=\operatorname{VDM}^{(n)}(\xi_{0},\ldots,\xi_{j-1})\neq 0\). So, \(Q_{j}\neq 0\) in \(\mathbb{C}^{n}\), which implies that \(Q_{j}\neq 0\) on \(K\) since \(K\) is determining for \(\mathcal{P}_{N}(\mathbb{C}^{n})\). Hence,
\[V_{j+1}\geq\sup\limits_{\xi\in K}|Q_{j}(\xi)|>0.\]
In the following theorem, we prove that pseudo-Leja sequences can be used to compute the transfinite diameter.
**Theorem 3.3**.: _Let \(K\subset\mathbb{C}^{n}\) be a compact set. If \((\xi_{j})_{j\geq 0}\subset K\) is a pseudo-Leja sequence of Edrei growth \((M_{j})_{j\geq 1}\), then_
\[\lim_{d\to+\infty}\left|\text{VDM}^{(n)}(\xi_{0},\ldots,\xi_{h_{d}^{(n)}-1}) \right|^{1/l_{d}^{(n)}}=D(K). \tag{18}\]
A similar result was obtained in [10] and [4] for Leja sequences. The method of proving Theorem 3.3 is in fact not different from that of Leja sequences except that here we take into consideration the Edrei growth of the pseudo-Leja sequence. In dimension one, it is known that nodes satisfying (18) are good for polynomial interpolation (see [4]). This remains an open problem in the multidimensional case.
Proof.: Set \(L_{k}:=\left|\text{VDM}^{(n)}(\xi_{0},\ldots,\xi_{k-1})\right|\) for all \(k\geq 1\). Obviously, we have \(L_{k}\leq V_{k}\) for all \(k\). Now consider the following two cases:
* If \(K\) is non-determining, i.e. there exists a polynomial \(P\) such that \(P=0\) on \(K\) but \(P\neq 0\) in \(\mathbb{C}^{n}\): then there exists \(i_{0}\) such that \(P=\alpha Q\) for some constant \(\alpha\) and a polynomial \(Q\in\mathcal{P}^{i_{0}}\), which implies that \(V_{i}=0\) for \(i>i_{0}\). Hence, \(D(K)=0\) (\(K\) is pluripolar) and \(L_{i}=0\) for all \(i>i_{0}\), so the theorem is true.
* Suppose that \(K\) is determining. Then \(L_{i}>0\) for all \(i\geq 1\): Indeed, \(L_{1}=1\) and if \(L_{j}>0\) for some \(j\geq 1\) then \(Q_{j}(\xi):=\text{VDM}^{(n)}(\xi_{0},\ldots,\xi_{j-1},\xi)=\sum_{i=0}^{j}c_{i }e_{i}^{(n)}(\xi)\) where \(|c_{j}|=L_{j}>0\). So, \(Q_{j}\neq 0\) in \(\mathbb{C}^{n}\). Therefore, \(Q_{j}\neq 0\) on \(K\) and so \[L_{j+1}=|Q_{j}(\xi_{j})|\geq\frac{1}{M_{j}}\max_{\xi\in K}|Q_{j}(\xi)|>0.\] In the same way, we also have \(V_{i}>0\) for all \(i\geq 1\) and so \(D(K)>0\).
Hence, for \(k\geq 1\) the polynomial
\[\frac{\text{VDM}^{(n)}(\xi_{0},\ldots,\xi_{k-1},\xi)}{\text{VDM}^{(n)}(\xi_{0 },\ldots,\xi_{k-1})}=e_{k}^{(n)}(\xi)+\sum_{0\leq j<k}c_{j}e_{j}^{(n)}(\xi):=P _{k}(\xi), \tag{19}\]
is well defined and by definition of pseudo-Leja sequence, we have
\(|P_{k}(\xi_{k})|=\frac{L_{k+1}}{L_{k}}\geq M_{k}^{-1}\|P_{k}\|_{K}.\) Then
\[\frac{L_{k+1}}{L_{k}}\geq M_{k}^{-1}\|P_{k}\|_{K}\geq M_{k}^{-1}\tau_{k}^{\deg (e_{k}^{(n)})}. \tag{20}\]
We deduce for any \(d\geq 0\) that
\[V_{h_{d}^{(n)}} \geq L_{h_{d}^{(n)}}=\frac{L_{h_{d}^{(n)}}}{L_{h_{d}^{(n)}-1}} \times\frac{L_{h_{d}^{(n)}-1}}{L_{h_{d}^{(n)}-2}}\times\cdots\times\frac{L_{ 2}}{L_{1}}\] \[\geq\left(\prod_{j=1}^{h_{d}^{(n)}-1}M_{j}\right)^{-1}\left(\prod _{j=1}^{h_{d}^{(n)}-1}\tau_{j}^{\deg(e_{j}^{(n)})}\right). \tag{21}\]
But, we have
\[\prod_{j=1}^{h_{d}^{(n)}-1}\tau_{j}^{\deg(e_{j}^{(n)})}=\prod_{i=1}^{d}\left( \prod_{j=h_{i-1}^{(n)}}^{h_{i}^{(n)}-1}\tau_{j}\right)^{i}=\prod_{i=1}^{d}( \tau_{i}^{0})^{i(h_{i}^{(n)}-h_{i-1}^{(n)})}.\]
Hence, taking the \(l_{d}^{(n)}-\)roots in Inequality (21) we obtain
\[\left(V_{h_{d}^{(n)}}\right)^{1/l_{d}^{(n)}}\geq\left(L_{h_{d}^{(n)}}\right)^{1/l_{d}^{(n)}}\geq\left(\prod_{j=1}^{h_{d}^{(n)}-1}M_{j}\right)^{-1/l_{d}^{(n)}}\left(\prod_{i=1}^{d}(\tau_{i}^{0})^{i(h_{i}^{(n)}-h_{i-1}^{(n)})}\right)^{1/l_{d}^{(n)}}. \tag{22}\]
Now, taking the limit in Inequality (22) as \(d\to+\infty\) and then using Lemmas 3.1 and 2.1 one obtains
\[\lim_{d\to+\infty}\left(L_{h_{d}^{(n)}}\right)^{1/l_{d}^{(n)}}=D(K).\]
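In dimension one (\(n=1\), so \(h_{d}^{(1)}=d+1\) and \(l_{d}^{(1)}=d(d+1)/2\)) the convergence (18) is easy to observe numerically. The sketch below (our own illustration; the grid playing the role of a mesh is an assumption, cf. Section 6) extracts pseudo-Leja points for \(K=[-1,1]\) greedily from a fine grid and compares \(|\mathrm{VDM}|^{1/l_{d}}\) with the known value \(D([-1,1])=1/2\).

```python
import numpy as np

grid = np.linspace(-1.0, 1.0, 5001)       # fine grid of K = [-1, 1]

pts = [1.0]                               # xi_0
logvdm = 0.0                              # log |VDM(xi_0, ..., xi_N)|
for N in range(1, 61):
    logs = np.zeros_like(grid)
    for p in pts:
        logs += np.log(np.abs(grid - p) + 1e-300)
    k = int(np.argmax(logs))
    logvdm += logs[k]                     # |VDM| gains the factor prod_i |xi_N - xi_i|
    pts.append(grid[k])
    if N % 20 == 0:
        d = N                             # N + 1 = h_d^{(1)} points, so d = N
        print(d, np.exp(logvdm / (d * (d + 1) / 2)))   # should approach 0.5
```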
The following corollary is directly deduced from Lemma 3.2 and the above proof of Theorem 3.3.
**Corollary 3.3.1**.: _Let \(K\subset\mathbb{C}^{n}\) be a compact set. Consider the following properties:_
1. \(K\) _is non-pluripolar._
2. \(K\) _is determining for the space of polynomials._
3. _Every pseudo-Leja sequence for_ \(K\) _is unisolvent._
4. \(V_{N}(K)=\sup\limits_{\xi_{0},\ldots,\xi_{N-1}\in K}\left|\text{VDM}^{(n)}(\xi_{0},\ldots,\xi_{N-1})\right|\neq 0\) _for all_ \(N\in\mathbb{N}\)_._
_Then we have \(1\Longrightarrow 2\Longleftrightarrow 3\Longleftrightarrow 4\)._
_Remark_.: Let \((\xi_{j})_{j\geq 0}\subset K\) be a unisolvent pseudo-Leja sequence of Edrei growth \((M_{j})_{j\geq 1}\) for a compact set \(K\subset\mathbb{C}^{n}\). The polynomials \(P_{k}\) given in Equation (19) will be called the (pseudo-)Leja polynomials in reference to their roots which are (pseudo-)Leja points. These satisfy
\[\lim_{d\to+\infty}\left(\prod_{k=0}^{h_{d}^{(n)}-1}\left\|P_{k}\right\|_{K} \right)^{1/l_{d}^{(n)}}=D(K). \tag{23}\]
Indeed, one deduces from Inequalities (20) and (22) that
\[\prod_{i=1}^{d}(\tau_{i}^{0})^{i(h_{i}^{(n)}-h_{i-1}^{(n)})}\leq\prod_{k=0}^{h_{d}^{(n)}-1}\left\|P_{k}\right\|_{K}\leq\left(\prod_{j=1}^{h_{d}^{(n)}-1}M_{j}\right)V_{h_{d}^{(n)}}.\]
After applying the \(l_{d}^{(n)}-\)roots it is then enough to take the limit as \(d\to+\infty\) in order to obtain Equation (23).
**Definition 3.4**.: Let \(\mathcal{L}\) denote the family of plurisubharmonic (p.s.h) functions on \(\mathbb{C}^{n}\) of at most logarithmic growth
\[\mathcal{L}:=\{u:\ u\text{ is p.s.h on }\mathbb{C}^{n}\text{ and }u(z)\leq\log^{+}\left\|z\right\|+C\} \tag{24}\]

where \(\left\|z\right\|=(\sum_{i=1}^{n}|z_{i}|^{2})^{1/2}\) and \(\log^{+}\left\|z\right\|=\max\{0,\log\left\|z\right\|\}\). Let \(K\subset\mathbb{C}^{n}\) be a compact set. Its **pluricomplex Green function**, denoted \(V_{K}\), is defined by
\[V_{K}(z):=\sup\{u(z):\quad u\in\mathcal{L},\ u\leq 0\ on\ K\}. \tag{25}\]
The set \(K\subset\mathbb{C}^{n}\) is said to be **regular** at a point \(\omega\in\overline{K}\) (the closure of \(K\)) if \(V_{K}\) is continuous at \(\omega\). If \(K\subset\mathbb{C}^{n}\) is regular at each point of \(\overline{K}\), then \(K\) is said to be regular.
The following result was first stated in [3] for the case of Leja polynomials (i.e. Edrei growth 1). However, it is proven there for all collections of polynomials \(\{P_{N}\}_{N\geq 0}\) satisfying Equation (23) such that \(P_{N}\in\mathcal{P}^{N}\) for all \(N\in\mathbb{N}_{0}\). In particular, it is also true for pseudo-Leja polynomials.
**Proposition 3.2**.: _Let \(K\) be a regular, non-pluripolar (i.e. \(D(K)>0\)) compact subset of \(\mathbb{C}^{n}\). Then_

\[V_{K}(z)=\limsup_{N\to+\infty}\frac{1}{\left|k^{(n)}(N)\right|}\log\left(\frac{\left|P_{N}(z)\right|}{\left\|P_{N}\right\|_{K}}\right) \tag{26}\]

_for \(z\in\mathbb{C}^{n}\setminus\hat{K}\), where \(P_{N}\) are pseudo-Leja polynomials and \(\hat{K}=\{z\in\mathbb{C}^{n}:\ \left|p(z)\right|\leq\left\|p\right\|_{K}\text{ for all polynomials }p\}\) is the polynomial hull of \(K\)._
## 4 Intertwining pseudo-Leja sequences
In this section, we consider for every \(p\in\mathbb{N}\) the enumeration
\(k^{(p)}:\ \mathbb{N}_{0}\ni N\longmapsto k^{(p)}(N)\in\mathbb{N}_{0}^{p}\) associated with the graded lexicographic order which we denote by " \(\prec\) " and which is defined as follows:
For every \(l,m\in\mathbb{N}_{0}^{n}\), we say that \(l\prec m\) if and only if
\[\left|l\right|:=l_{1}+\cdots+l_{n}<\left|m\right|\text{ or }\left\{\begin{array}{l} \left|l\right|=\left|m\right|\\ l\leq m\text{ in the lexicographic order.}\end{array}\right. \tag{27}\]
_Examples_.:
1. For \(p=1\) we obtain the natural enumeration in \(\mathbb{N}_{0}\).
2. For \(p=2\), the elements of \(\mathbb{N}_{0}^{2}\) are ordered as follows: \((0,0)\prec(0,1)\prec(1,0)\prec(0,2)\prec(1,1)\prec(2,0)\prec(0,3)\prec(1,2)\prec( 2,1)\prec(3,0)\prec(0,4)...\)
3. For \(p=3\), the elements of \(\mathbb{N}_{0}^{3}\) are ordered as follows: \((0,0,0)\prec(0,0,1)\prec(0,1,0)\prec(1,0,0)\prec(0,0,2)\prec(0,1,1)\prec(0,2,0)\prec(1,0,1)\prec(1,1,0)\prec(2,0,0)\prec(0,0,3)\prec(0,1,2)\prec(0,2,1)\prec(0,3,0)\prec(1,0,2)\prec(1,1,1)\prec(1,2,0)\prec(2,0,1)\prec(2,1,0)\prec(3,0,0)\prec....\)
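The enumeration \(k^{(p)}\) associated with the graded lexicographic order can be generated programmatically; the short sketch below (the function names are ours) reproduces the \(p=2\) example above.

```python
def compositions(d, p):
    # all p-tuples of nonnegative integers summing to d, in ascending lexicographic order
    if p == 1:
        yield (d,)
        return
    for first in range(d + 1):
        for rest in compositions(d - first, p - 1):
            yield (first,) + rest

def graded_lex(p, max_degree):
    # enumerate N_0^p first by total degree, then lexicographically
    for d in range(max_degree + 1):
        yield from compositions(d, p)

print(list(graded_lex(2, 2)))
# [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]
```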
This order is compatible with addition on \(\mathbb{N}_{0}^{p}\), i.e
\[\alpha\prec\beta\Longrightarrow\alpha+\gamma\prec\beta+\gamma\quad(\alpha, \beta,\gamma\in\mathbb{N}_{0}^{p}) \tag{28}\]
and we also observe that
\[\left.\begin{array}{c}\alpha_{1}\prec\beta_{1}\text{ in }\mathbb{N}_{0}^{p}\\ \alpha_{2}\prec\beta_{2}\text{ in }\mathbb{N}_{0}^{q}\end{array}\right\}\Longrightarrow(\alpha_{1},\alpha_{2})\prec(\beta_{1},\beta_{2})\text{ in }\mathbb{N}_{0}^{p+q}. \tag{29}\]
Let \(n=n_{1}+\cdots+n_{m}\in\mathbb{N}\) with \(n_{j}\in\mathbb{N}\), \(j=1,\ldots,m\). Let \(m_{0}=0\) and \(m_{j}=n_{1}+\cdots+n_{j}\), \(j=1,\ldots,m\). We will use the notation \(z=(z_{(n_{1})},\cdots,z_{(n_{m})})=(z_{1},\ldots,z_{n})\) for any \(z\) vector of \(n\) components, where \(z_{(n_{j})}=(z_{m_{j-1}+1},\ldots,z_{m_{j}})\), \(j=1,\ldots,m\). We denote by \(L_{\Omega}f\) the Lagrange interpolation polynomial for a function \(f\) and a finite set of points \(\Omega\). When \(\Omega=\emptyset\) we set \(L_{\Omega}f\equiv 0\). In order to be able to intertwine sequences, we define the following enumeration on \(\mathbb{N}_{0}^{m}\)
\[\begin{array}{ccl}\alpha:&\mathbb{N}_{0}&\longrightarrow&\mathbb{N}_{0}^{m }\\ &N&\longmapsto&\alpha(N)=(\alpha_{1}(N),\ldots,\alpha_{m}(N))\end{array} \tag{30}\]
such that \(k^{(n_{j})}(\alpha_{j}(N)):=(k^{(n)}(N))_{(n_{j})}=(k^{(n)}_{m_{j-1}+1}(N),\ldots,k^{(n)}_{m_{j}}(N))\), for \(j=1,\ldots,m\).
_Examples._ 1. If \(n_{1}=\cdots=n_{m}=1\) then \(\alpha\) coincides with the enumeration \(k^{(n)}\) on \(\mathbb{N}_{0}^{n}\).
2. For \(n_{1}=2\), \(n_{2}=1\) and \(n_{3}=3\): * \(k^{(6)}(0)=(0,0,0,0,0,0)=(\underbrace{0,0}_{k^{(2)}(0)},\underbrace{0}_{k^{(1)}(0)},\underbrace{0,0,0}_{k^{(3)}(0)})\). Then \(\alpha(0)=(0,0,0)\). * \(k^{(6)}(1)=(0,0,0,0,0,1)=(\underbrace{0,0}_{k^{(2)}(0)},\underbrace{0}_{k^{(1)}(0)},\underbrace{0,0,1}_{k^{(3)}(1)})\). Then \(\alpha(1)=(0,0,1)\). * \(k^{(6)}(58)=(0,2,0,0,0,1)=(\underbrace{0,2}_{k^{(2)}(3)},\underbrace{0}_{k^{(1)}(0)},\underbrace{0,0,1}_{k^{(3)}(1)})\). Then \(\alpha(58)=(3,0,1)\).
**Definition 4.1**.: The **intertwining sequence** of the sequences \((\xi_{i}^{(j)})_{i\geq 0}\subset\mathbb{C}^{n_{j}},\ j=1,\ldots,m\) is the sequence \((H_{N}=\xi_{\alpha(N)})_{N\geq 0}\) defined as:
\[\mathbb{N}_{0}\ni N\longmapsto\xi_{\alpha(N)}:=(\xi_{\alpha_{1}(N)}^{(1)}, \ldots,\xi_{\alpha_{m}(N)}^{(m)}). \tag{31}\]
_Remarks._ 1. This definition of intertwining sequence coincides with that given in [9] for \(n_{1}=\cdots=n_{m}=1\).
2. The idea behind this notion of intertwining sequence is similar to the one used in [7] to define intertwining arrays. Indeed, an intertwining sequence is an extension of an intertwining array up to an infinity of points ordered using the graded lexicographic order.
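Definition 4.1 admits a direct implementation; the sketch below (our own illustration, with placeholder component sequences that stand in for unisolvent ones) computes \(H_{N}=\xi_{\alpha(N)}\) by splitting \(k^{(n)}(N)\) into blocks and inverting the enumerations \(k^{(n_{j})}\).

```python
def compositions(d, p):
    if p == 1:
        yield (d,)
        return
    for first in range(d + 1):
        for rest in compositions(d - first, p - 1):
            yield (first,) + rest

def graded_lex(p, max_degree):
    for d in range(max_degree + 1):
        yield from compositions(d, p)

def intertwine(seqs, sizes, count, max_degree=12):
    # H_N = (xi^{(1)}_{alpha_1(N)}, ..., xi^{(m)}_{alpha_m(N)}), as in Definition 4.1
    n = sum(sizes)
    ranks = [{idx: i for i, idx in enumerate(graded_lex(p, max_degree))} for p in sizes]
    out = []
    for N, multi in zip(range(count), graded_lex(n, max_degree)):
        point, pos = [], 0
        for j, p in enumerate(sizes):
            block = multi[pos:pos + p]              # (k^{(n)}(N))_{(n_j)}
            point.extend(seqs[j][ranks[j][block]])  # xi^{(j)}_{alpha_j(N)}
            pos += p
        out.append(tuple(point))
    return out

# example: a planar sequence (n_1 = 2) intertwined with a one-dimensional one (n_2 = 1)
xi1 = [(a, a + 100) for a in range(100)]   # placeholder points standing in for C^2 data
xi2 = [(b,) for b in range(100)]           # placeholder points standing in for C data
print(intertwine([xi1, xi2], [2, 1], 10))
```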
Given sequences \((\xi_{i}^{(j)})_{i\geq 0}\subset\mathbb{C}^{n_{j}},\ j=1,\ldots,m\) and their intertwining sequence \((H_{N}=\xi_{\alpha(N)})_{N\geq 0}\), let us define the sets of points
\[\Omega_{N}:=\{H_{i}:\ i=0,\ldots,N-1\}\text{ and }\Omega_{N}^{(j)}:=\{\xi_{i}^{(j)}:\ i =0,\ldots,N-1\} \tag{32}\]
with the convention that they are empty for \(N=0\). We also consider \(\text{VDM}^{(n)}(\emptyset)=1\).
**Theorem 4.1**.: _Let \((\xi_{i}^{(j)})_{i\geq 0}\subset\mathbb{C}^{n_{j}},\ j=1,\ldots,m\), be \(m\) unisolvent sequences (for polynomials of \(n_{j}\) variables). Then their intertwining sequence \((H_{N})_{N\geq 0}\) is also unisolvent (for polynomials of \(n\) variables) and for every \(z=(z_{(n_{1})},\cdots,z_{(n_{m})})\in\mathbb{C}^{n}\) and \(N\geq 0\) we have_
\[\text{VDM}^{(n)}(\Omega_{N},z)=P_{N}(z)\text{VDM}^{(n)}(\Omega_{N}), \tag{33}\]
_where_
\[P_{N}(z) =e_{N}^{(n)}(z)-L_{\Omega_{N}}e_{N}^{(n)}(z) \tag{34}\] \[=\prod_{j=1}^{m}\left[\left(e_{\alpha_{j}(N)}^{(n_{j})}-L_{\Omega_{\alpha_{j}(N)}^{(j)}}e_{\alpha_{j}(N)}^{(n_{j})}\right)(z_{(n_{j})})\right]\] (35) \[=\prod_{j=1}^{m}\frac{\text{VDM}^{(n_{j})}(\Omega_{\alpha_{j}(N)}^{(j)},z_{(n_{j})})}{\text{VDM}^{(n_{j})}(\Omega_{\alpha_{j}(N)}^{(j)})}. \tag{36}\]
Proof.: The case \(N=0\) is clearly true. Let \(N\geq 1\) such that \(\Omega_{N}\) is unisolvent for \(\mathcal{P}_{N}(\mathbb{C}^{n})\), i.e. \(\mathrm{VDM}^{(n)}(\Omega_{N})\neq 0\). So, the existence and uniqueness of \(L_{\Omega_{N}}e_{N}^{(n)}\) are guaranteed and we have for all \(z\in\mathbb{C}^{n}\)
\[\mathrm{VDM}^{(n)}(\Omega_{N},z)=\begin{vmatrix}e_{0}^{(n)}(H_{0})&\dots&e_{0} ^{(n)}(H_{N-1})&e_{0}^{(n)}(z)\\ \vdots&\ddots&\vdots&\vdots\\ e_{N-1}^{(n)}(H_{0})&\dots&e_{N-1}^{(n)}(H_{N-1})&e_{N-1}^{(n)}(z)\\ e_{N}^{(n)}(H_{0})&\dots&e_{N}^{(n)}(H_{N-1})&e_{N}^{(n)}(z)\end{vmatrix}\]
\[=\begin{vmatrix}e_{0}^{(n)}(H_{0})&\dots&e_{0}^{(n)}(H_{N-1})&e_{0}^{(n)}(z)\\ \vdots&\ddots&\vdots&\vdots\\ e_{N-1}^{(n)}(H_{0})&\dots&e_{N-1}^{(n)}(H_{N-1})&e_{N-1}^{(n)}(z)\\ 0&\dots&0&e_{N}^{(n)}(z)-L_{\Omega_{N}}e_{N}^{(n)}(z)\end{vmatrix}\]
\[=(e_{N}^{(n)}(z)-L_{\Omega_{N}}e_{N}^{(n)}(z))\mathrm{VDM}^{(n)}(\Omega_{N}),\]
which is Equation (33). In a similar way, we also obtain
\[\frac{\mathrm{VDM}^{(n_{j})}(\Omega_{\alpha_{j}(N)}^{(j)},z_{(n_{j})})}{\mathrm{VDM}^{(n_{j})}(\Omega_{\alpha_{j}(N)}^{(j)})}=\left(e_{\alpha_{j}(N)}^{(n_{j})}-L_{\Omega_{\alpha_{j}(N)}^{(j)}}e_{\alpha_{j}(N)}^{(n_{j})}\right)(z_{(n_{j})})\]
for all \(j=1,\dots,m\). Hence, the expressions in (35) and (36) are equal. Now, we only have to show that \(P_{N}\) as expressed by (35) and (36) coincides with \(e_{N}^{(n)}(z)-L_{\Omega_{N}}e_{N}^{(n)}(z)\). Since \(\Omega_{N}\) is unisolvent, it is sufficient to prove that \(e_{N}^{(n)}-P_{N}\in\mathcal{P}_{N}(\mathbb{C}^{n})\) and \(P_{N}|_{\Omega_{N}}\equiv 0\).
By definition of \(\alpha\), we have
\[\prod_{j=1}^{m}e_{\alpha_{j}(N)}^{(n_{j})}(z_{(n_{j})})=\prod_{j=1}^{m}z_{(n_ {j})}^{k^{(n_{j})}(\alpha_{j}(N))}=\prod_{j=1}^{m}z_{(n_{j})}^{(k^{(n)}(N))(n_ {j})}=z^{k^{(n)}(N)}=e_{N}^{(n)}(z).\]
Hence, after expanding the expression of \(P_{N}\) in (35) we have
\[P_{N}(z)=\prod_{j=1}^{m}e_{\alpha_{j}(N)}^{(n_{j})}(z_{(n_{j})})+\sum_{u=1}^{ M}\prod_{j=1}^{m}R_{u,j}(z_{(n_{j})})=e_{N}^{(n)}(z)+\sum_{u=1}^{M}\prod_{j=1}^{m}R _{u,j}(z_{(n_{j})}),\]
where \(M\geq 1\), \(R_{u,j}=e_{\alpha_{j}(N)}^{(n_{j})}\) or \(R_{u,j}=L_{\Omega_{\alpha_{j}(N)}^{(j)}}e_{\alpha_{j}(N)}^{(n_{j})}\). Moreover, for all \(u\in\{1,\dots,M\}\) there exists at least one \(j_{u}\in\{1,\dots,m\}\) such that \(R_{u,j_{u}}=L_{\Omega_{\alpha_{j_{u}}(N)}^{(j_{u})}}e_{\alpha_{j_{u}}(N)}^{(n_{j_{u}})}\). Therefore, all the multi-index powers of \(z\) which appear in the expansion of the terms \(\prod_{j=1}^{m}R_{u,j}(z_{(n_{j})})\), \(u=1,\dots,M\), precede (\(\prec\)) and cannot be equal to \(k^{(n)}(N)=(k^{(n_{1})}(\alpha_{1}(N)),k^{(n_{2})}(\alpha_{2}(N)),\dots,k^{(n_{m})}(\alpha_{m}(N)))\) since the graded lexicographic order is compatible with addition in \(\mathbb{N}_{0}^{n}\). This implies that
\[e_{N}^{(n)}-P_{N}=-\sum_{u=1}^{M}\prod_{j=1}^{m}R_{u,j}(z_{(n_{j})})\in \mathcal{P}_{N}(\mathbb{C}^{n}).\]
Let us show that \(P_{N}|_{\Omega_{N}}\equiv 0\). Let \(i\in\{0,\dots,N-1\}\) and suppose that \(P_{N}(H_{i})\neq 0\). Then
\[\mathrm{VDM}^{(n_{j})}(\Omega_{\alpha_{j}(N)}^{(j)},(H_{i})_{(n_{j})})\neq 0\quad\text{for all }j=1,\dots,m.\]
Since \((H_{i})_{(n_{j})}=\xi_{\alpha_{j}(i)}^{(j)}\) it follows that \(\xi_{\alpha_{j}(i)}^{(j)}\not\in\Omega_{\alpha_{j}(N)}^{(j)}\) for every \(j=1,\dots,m\). Therefore, we necessarily have \(\alpha_{j}(i)\geq\alpha_{j}(N)\), for \(j=1,\dots,m\). Hence
\(k^{(n_{j})}(\alpha_{j}(N))\preceq k^{(n_{j})}(\alpha_{j}(i))\) for all \(j=1,\dots,m\). One deduces that \(k^{(n)}(N)\preceq k^{(n)}(i)\), which implies \(N\leq i\). This is a contradiction since \(i<N\). Therefore, \(P_{N}|_{\Omega_{N}}\equiv 0\) and we can conclude that \(e_{N}^{(n)}-P_{N}=L_{\Omega_{N}}e_{N}^{(n)}\).
Let us now prove by induction on \(N\geq 1\) that \(\Omega_{N}\) is unisolvent for \(\mathcal{P}_{N}(\mathbb{C}^{n})\).
For \(N=1\) it is obvious. Induction step: if we assume that \(\Omega_{N}\) is unisolvent for \(\mathcal{P}_{N}(\mathbb{C}^{n})\) then from Equations (33) and (36) we obtain \(\mathrm{VDM}^{(n)}(\Omega_{N+1})\neq 0\), since the sequences \((\xi_{i}^{(j)})_{i\geq 0}\subset\mathbb{C}^{n_{j}},\ j=1,\dots,m\) are unisolvent. This proves that \(\Omega_{N+1}\) is unisolvent for \(\mathcal{P}_{N+1}(\mathbb{C}^{n})\).
**Theorem 4.2**.: _For every \(j=1,\ldots,m\), let \(K_{j}\subset\mathbb{C}^{n_{j}}\) be a compact set, \((\xi_{i}^{(j)})_{i\geq 0}\) be any unisolvent sequence of \(K_{j}\). Let \((H_{N})_{N\geq 0}=(\xi_{\alpha(N)})_{N\geq 0}\subset\mathbb{C}^{n}\) be their intertwining sequence. The following assertions are equivalent:_
1. \((H_{N})_{N\geq 0}\) _is a pseudo-Leja sequence for_ \(K=K_{1}\times\cdots\times K_{m}\)_._
2. _For every_ \(j=1,\ldots,m\)_,_ \((\xi_{i}^{(j)})_{i\geq 0}\) _is a pseudo-Leja sequence for_ \(K_{j}\)_._
Proof.: \(1.\Longrightarrow 2.)\) Suppose that \((H_{N})_{N\geq 0}\) is a pseudo-Leja sequence for
\(K_{1}\times\cdots\times K_{m}\) of Edrei growth \((M_{N})_{N\geq 1}\). Fix \(j\in\{1,\ldots,m\}\) and let us prove that \((\xi_{i}^{(j)})_{i\geq 0}\) is a pseudo-Leja sequence for \(K_{j}\).
Let \(i\geq 0\) and \(N:=N(i)\geq 0\) such that \(\alpha(N)=(0,\ldots,0,\underbrace{i}_{\text{j-th position}},0,\ldots,0)\). Then \((H_{N})_{(n_{j})}=\xi_{i}^{(j)}\). We have by the definition of pseudo-Leja sequence
\[M_{N}\Big{|}\text{VDM}^{(n)}(\Omega_{N+1})\Big{|}\geq\max_{z\in K}\Big{|}\text{VDM}^{(n)}(\Omega_{N},z)\Big{|}.\]
By the hypothesis, we know that the sequence \((\xi_{i}^{(j)})_{i\geq 0}\) is unisolvent. Hence, using Theorem 4.1 we obtain \(M_{N}|P_{N}(H_{N})|\geq\max_{z\in K}|P_{N}(z)|\) which implies that
\[M_{N}\Big{|}\text{VDM}^{(n_{j})}(\Omega_{i}^{(j)},(H_{N})_{(n_{j})})\Big{|} \geq\max_{z_{(n_{j})}\in K_{j}}\Big{|}\text{VDM}^{(n_{j})}(\Omega_{i}^{(j)}, z_{(n_{j})})\Big{|}\]
since \(\text{VDM}^{(n_{j})}(\Omega_{0}^{(j)})=\text{VDM}^{(n_{j})}(\Omega_{0}^{(j)},z_{(n_{j})})=1\). Thus, by setting \(M_{i}^{(j)}=M_{N(i)}\) we obtain
\[M_{i}^{(j)}\Big{|}\text{VDM}^{(n_{j})}(\Omega_{i+1}^{(j)})\Big{|}\geq\max_{z_ {(n_{j})}\in K_{j}}\Big{|}\text{VDM}^{(n_{j})}(\Omega_{i}^{(j)},z_{(n_{j})}) \Big{|}.\]
Let us now show that \(\lim_{d\rightarrow+\infty}\max_{h_{d-1}^{(n_{j})}\leq l<h_{d}^{(n_{j})}}(M_{l}^{(j)})^{1/d}=1.\) Let \(d\geq 1\) and let
\(i:=i(d)\in\mathbb{N}\) such that \(h_{d-1}^{(n_{j})}\leq i<h_{d}^{(n_{j})}\) and \(M_{i}^{(j)}=\max_{h_{d-1}^{(n_{j})}\leq l<h_{d}^{(n_{j})}}M_{l}^{(j)}\). We consider as previously the number \(N:=N(i)\) such that
\(\alpha(N)=(0,\ldots,0,\underbrace{i}_{\text{j-th position}},0,\ldots,0)\). It follows that
\[k^{(n)}(N)=(\underbrace{0,\ldots,0}_{m_{j-1}\text{ times}},k_{1}^{(n_{j})}(i),k_{2}^{(n_{j})}(i),\ldots,k_{n_{j}}^{(n_{j})}(i), \underbrace{0,\ldots,0}_{n-m_{j}\text{ times}}).\]
So, \(\Big{|}k^{(n)}(N)\Big{|}=\Big{|}k^{(n_{j})}(i)\Big{|}\). Moreover, \(\Big{|}k^{(n_{j})}(i)\Big{|}=d\) since \(h_{d-1}^{(n_{j})}\leq i<h_{d}^{(n_{j})}\) and hence \(\Big{|}k^{(n)}(N)\Big{|}=d\), which actually means that \(h_{d-1}^{(n)}\leq N<h_{d}^{(n)}\). One deduces that
\[1\leq\limsup_{d\rightarrow+\infty}(M_{i}^{(j)})^{1/d}=\lim_{d\rightarrow+ \infty}M_{N}^{1/d}\leq\lim_{d\rightarrow+\infty}\max_{h_{d-1}^{(n)}\leq l<h_ {d}^{(n)}}M_{l}^{1/d}=1,\]
where the last equality is true by the hypothesis (the definition of pseudo-Leja sequences). Hence,
\[\lim_{d\rightarrow+\infty}(M_{i(d)}^{(j)})^{1/d}=1\]
which is the desired result. Thus, \((\xi_{i}^{(j)})_{i\geq 0}\) is a pseudo-Leja sequence with Edrei growth \((M_{i}^{(j)})_{i\geq 1}\).
\(2.\Longrightarrow 1.)\) Suppose that for any \(j=1,\ldots,m\), \((\xi_{i}^{(j)})_{i\geq 0}\) is a pseudo-Leja sequence for \(K_{j}\) of Edrei growth \((M_{i}^{(j)})_{i\geq 1}\). Let us show that \((H_{N})_{N\geq 0}\) is a pseudo-Leja sequence for \(K_{1}\times\cdots\times K_{m}\).
Let \(N\in\mathbb{N}_{0}\). Using successively Theorem 4.1, the fact that \((\xi_{i}^{(j)})_{i\geq 0}\) are pseudo-Leja sequences
of respective Edrei growth \((M_{i}^{(j)})_{i\geq 1}\) and Theorem 4.1 (once again) we have
\[\left|\mathrm{VDM}^{(n)}(\Omega_{N+1})\right| =|P_{N}(H_{N})|\cdot\left|\mathrm{VDM}^{(n)}(\Omega_{N})\right|\] \[=\prod_{j=1}^{m}\frac{\left|\mathrm{VDM}^{(n_{j})}(\Omega_{\alpha_{j}(N)}^{(j)},\xi_{\alpha_{j}(N)}^{(j)})\right|}{\left|\mathrm{VDM}^{(n_{j})}(\Omega_{\alpha_{j}(N)}^{(j)})\right|}\cdot\left|\mathrm{VDM}^{(n)}(\Omega_{N})\right|\] \[\geq\prod_{j=1}^{m}\left[\frac{\max_{z_{(n_{j})}\in K_{j}}\left|\mathrm{VDM}^{(n_{j})}(\Omega_{\alpha_{j}(N)}^{(j)},z_{(n_{j})})\right|}{M_{\alpha_{j}(N)}^{(j)}\left|\mathrm{VDM}^{(n_{j})}(\Omega_{\alpha_{j}(N)}^{(j)})\right|}\right]\left|\mathrm{VDM}^{(n)}(\Omega_{N})\right|\] \[=(M_{N})^{-1}\max_{z\in K}\left|P_{N}(z)\right|\cdot\left|\mathrm{VDM}^{(n)}(\Omega_{N})\right|\] \[=(M_{N})^{-1}\max_{z\in K}\left|\mathrm{VDM}^{(n)}(\Omega_{N},z)\right|\]
where \(M_{N}=\prod_{j=1}^{m}M_{\alpha_{j}(N)}^{(j)}\) with the convention \(M_{0}^{(j)}=1\). So,
\[M_{N}\Big{|}\mathrm{VDM}^{(n)}(\Omega_{N+1})\Big{|}\geq\max_{z\in K}\Big{|} \mathrm{VDM}^{(n)}(\Omega_{N},z)\Big{|}.\]
Let us prove that \(\lim_{d\to+\infty}\max_{h_{d-1}^{(n)}\leq N<h_{d}^{(n)}}M_{N}^{1/d}=1.\) Let \(d\geq 1\) and let
\(h_{d-1}^{(n)}\leq N<h_{d}^{(n)}\) such that \(M_{N}=\max_{h_{d-1}^{(n)}\leq i<h_{d}^{(n)}}M_{i}.\) Using the definition of \(\alpha\), we obtain for any \(j=1,\ldots,m\),
\[\left|k^{(n_{j})}(\alpha_{j}(N))\right|=\left|(k^{(n)}(N))_{(n_{j})}\right|\leq\left|k^{(n)}(N)\right|=d,\]
since \(h_{d-1}^{(n)}\leq N<h_{d}^{(n)}\). This actually means that, for each \(j=1,\ldots,m\), there exists \(0\leq d_{j}\leq d\) such that \(h_{d_{j}-1}^{(n_{j})}\leq\alpha_{j}(N)<h_{d_{j}}^{(n_{j})}\) with the conventions \(h_{-1}^{(n_{j})}=0\) and \(M_{0}^{(j)}=1\). Thus, for all \(j=1,\ldots,m\) we have
\[1\leq M_{\alpha_{j}(N)}^{(j)}\leq\max_{h_{d_{j}-1}^{(n_{j})}\leq i<h_{d_{j}}^{(n_{j})}}M_{i}^{(j)}. \tag{37}\]
Note that, for all \(j=1,\ldots,m\), \(d_{j}\) only depends on \(d\) as \(N\) does. We claim that for all \(j=1,\ldots,m\),
\[\lim_{d\to+\infty}\left(\max_{h_{d_{j}-1}^{(n_{j})}\leq i<h_{d_{j}}^{(n_{j})}}M_{i}^{(j)}\right)^{1/d}=1. \tag{38}\]
Indeed, for a fixed \(j=1,\ldots,m\), we need to consider two cases:
* If \(\limsup_{d\to+\infty}d_{j}<+\infty\) then (38) is obviously true.
* If \(\limsup_{d\to+\infty}d_{j}=+\infty\) then, by the hypothesis (the second property of pseudo-Leja sequences), \[\lim_{d\to+\infty}\left(\max_{h_{d_{j}-1}^{(n_{j})}\leq i<h_{d_{j}}^{(n_{j})}}M_{i}^{(j)}\right)^{1/d_{j}}=1,\] which implies that (38) is also true in this case, since \(0\leq d_{j}\leq d\) and all \(M_{i}^{(j)}\geq 1\).
From (37) and (38) we deduce
\[\lim_{d\to+\infty}M_{N}^{1/d}=\lim_{d\to+\infty}\prod_{j=1}^{m}(M_{\alpha_{j}( N)}^{(j)})^{1/d}=1\]
which is the desired result. Therefore, \((H_{N})_{N\geq 0}\) is a pseudo-Leja sequence for \(K_{1}\times\cdots\times K_{m}\) of Edrei growth \((M_{N})_{N\geq 1}\).
## 5 Uniform convergence of the interpolation polynomials for the Cartesian product of compact planar sets
In this section we consider a compact set \(K=K_{1}\times\cdots\times K_{n}\subset\mathbb{C}^{n}\), where \(K_{j}\subset\mathbb{C}\) for all \(j\), and \(f\), a holomorphic function on an open neighbourhood of \(K\). The objective here is to show that the sequence of Lagrange interpolation polynomials associated with \(f\) converges uniformly on a smaller neighbourhood of \(K\) when the interpolation nodes are of pseudo-Leja type. The same result had already been stated in [14] for the case of Leja sequences. The proof found there is essentially based on a Leja node property which, fortunately, is also verified by pseudo-Leja nodes. Therefore, we can conclude that the result is also true for the case of pseudo-Leja points. However, we have noticed that the proof given there does not consider the case of disconnected compact sets in \(\mathbb{C}\). So, for the reader's convenience we will give here a complete proof of this result that is valid also for disconnected sets.
We will assume in this section that the reader is familiar with the basic concepts of potential theory. We can refer the reader to the books [13] and [12] for more information on potential theory. In particular, we will handle objects associated to a compact set \(E\subset\mathbb{C}\) and to a measure \(\mu\) supported on \(E\): \(U^{\mu}\) the potential of \(\mu\), \(I(\mu)\) the logarithmic energy of \(\mu\), \(\mathrm{Cap}(E)\) the logarithmic capacity of \(E\), \(\phi_{E}\) the Siciak extremal function for \(E\), \(\mu_{E}\) the equilibrium measure of \(E\) and \(\hat{\mu}\) the balayage of \(\mu\) onto the boundary \(\partial E\).
Given \(n\) sequences \((\xi_{i}^{(m)})_{i\geq 0}\subset\mathbb{C}\), \(m=1,\ldots,n\) of pairwise distinct points, we consider throughout this section the following notation of vectors
\[p_{i_{1},\ldots,i_{n}}:=(\xi_{i_{1}}^{(1)},\ldots,\xi_{i_{n}}^{(n)})\quad \text{for $i_{1},\ldots,i_{n}\in\mathbb{N}_{0}$.} \tag{39}\]
Observe that \(\{p_{i_{1},\ldots,i_{n}}:\ i_{1}+\cdots+i_{n}\leq d\}=\{H_{i}:\ i=0,\ldots,h_{d}^{(n)}-1\}\) for all \(d\in\mathbb{N}_{0}\), where \((H_{i}=\xi_{k^{(n)}(i)})_{i\geq 0}\) is the intertwining sequence of the sequences \((\xi_{i}^{(m)})_{i\geq 0}\subset\mathbb{C}\), \(m=1,\ldots,n\).
**Lemma 5.1**.: _[_14_, Lemma 3.1]_ _For any function \(f:\{p_{i_{1},\ldots,i_{n}}\}_{i_{1}+\cdots+i_{n}\leq d}\longrightarrow\mathbb{C}\) there exists a unique polynomial \(L_{d}f\in\mathbb{C}_{d}[z_{1},\ldots,z_{n}]\) such that for all \(i_{1},\ldots,i_{n}\in\mathbb{N}_{0}\) with \(i_{1}+\cdots+i_{n}\leq d\) we have_
\[L_{d}f(p_{i_{1},\ldots,i_{n}})=f(p_{i_{1},\ldots,i_{n}}). \tag{40}\]
_Moreover, for all \(z=(z_{1},\ldots,z_{n})\in\mathbb{C}^{n}\)_
\[L_{d}f(z)=\sum_{i_{1}+\cdots+i_{n}\leq d}a_{i_{1},\ldots,i_{n}}\prod_{m=1}^{n} \omega_{m,i_{m}}(z_{m}) \tag{41}\]
_for some constants \(a_{i_{1},\ldots,i_{n}}\), where \(\omega_{m,i_{m}}(t):=(t-\xi_{0}^{(m)})\cdots(t-\xi_{i_{m}-1}^{(m)})\), \(t\in\mathbb{C}\), with the convention \(\omega_{m,0}\equiv 1\)._
**Lemma 5.2**.: _[_14_, Lemma 3.2]_ _Let \(D_{1},\ldots,D_{n}\) be domains in \(\mathbb{C}\) with \(\mathcal{C}^{2}\) boundaries positively oriented, consisting of a finite number of Jordan curves. Let \((\xi_{i}^{(m)})_{i\geq 0}\) be a sequence of pairwise distinct points in \(D_{m}\) for \(m=1,\ldots,n\). Let \(f\) be a holomorphic function in a neighbourhood of \(\overline{D}\), where \(D:=D_{1}\times\cdots\times D_{n}\). Then_
\[a_{i_{1},\ldots,i_{n}}=\frac{1}{(2\pi i)^{n}}\int_{\partial D_{1}}\ldots\int_{\partial D_{n}}\frac{f(t_{1},\ldots,t_{n})}{\prod_{m=1}^{n}\omega_{m,i_{m}+1}(t_{m})}\,dt_{n}\ldots dt_{1} \tag{42}\]
The main result of this section is the following.
**Theorem 5.3**.: _(see [14], Theorem 11.2) Let \(K_{1},\ldots,K_{n}\) be compact, regular, polynomially convex subsets of \(\mathbb{C}\). Let \(K:=K_{1}\times\cdots\times K_{n}\) and for any \(m\in\{1,\ldots,n\}\) let \((\xi_{i}^{(m)})_{i\geq 0}\) be a sequence of pairwise distinct points in \(K_{m}\) such that_
\[\left(\frac{|\omega_{m,d}(z)|}{\|\omega_{m,d}\|_{K_{m}}}\right)^{1/d} \longrightarrow\phi_{K_{m}}(z),\quad d\rightarrow+\infty, \tag{43}\]
_uniformly on compact subsets of \(\mathbb{C}\setminus K_{m}\). Then for each function \(f\) holomorphic in an open neighbourhood of \(K\) we have_
\[L_{d}f\to f\text{ as }d\rightarrow+\infty\quad\text{uniformly on a closed neighbourhood of }K. \tag{44}\]
_Remarks._ 1. Due to the Kalmar-Walsh theorem (see for instance [4]), the condition (43) is equivalent to
\[\|\omega_{m,d}\|_{K_{m}}^{1/d}\longrightarrow\mathrm{Cap}(K_{m})\quad\text{ for all }m=1,\ldots,n. \tag{45}\]
2. It was shown in [2] that one-dimensional pseudo-Leja sequences satisfy the condition (43).
3. Due to a general result on approximation by polynomial projectors given in [5, Theorem 7], the rate of convergence in Theorem 5.3 is geometric and maximal, i.e, asymptotically equal to the speed of convergence provided by polynomials of best uniform approximation.
The proof of Theorem 5.3 will be given after proving the following lemma.
**Lemma 5.4**.: _Let \(E=A\cup B\subset\mathbb{C}\) be a regular polynomially convex compact set in \(\mathbb{C}\) such that \(A\) and \(B\) are compact sets and \(A\cap B=\emptyset\). Let \(\{z_{d,j}\}\) be a triangular array of points in \(E\) satisfying_
\[\|\omega_{d}\|_{E}^{1/d}\longrightarrow\text{Cap}(E)\quad\text{as }d\to+\infty, \tag{46}\]
_where \(\omega_{d}(z):=(z-z_{d,0})\cdots(z-z_{d,d-1}).\) Then each of the sets \(A\) and \(B\) contains an infinite number of points from the array \(\{z_{d,j}\}\)._
Proof of Lemma 5.4.: Suppose for instance that \(B\) contains a finite number of points from the array \(\{z_{d,j}\}\). Then from a certain rank all the points of the array \(\{z_{d,j}\}\) are located in \(A\). Consider the normalized counting measure associated with the array \(\{z_{d,j}\}\),
\[\nu_{d}=\frac{1}{d}\sum_{j=0}^{d-1}\delta_{z_{d,j}}.\]
We can assume without loss of generality that \(\nu_{d}(B)=0\) for all \(d\in\mathbb{N}\). Hence, for any \(d\in\mathbb{N}\) the balayage measure of \(\nu_{d}\) onto \(\partial E\) (see [13] for the definition), which we denote \(\hat{\nu}_{d}\), is supported on \(\partial A\). Let \(\nu\) be a weak\({}^{*}\) limit of \(\nu_{d}\) and \(\hat{\nu}\) its balayage measure onto \(\partial E\). From the definition of balayage measure, we have for \(z\in\mathbb{C}\setminus E\)
\[U^{\hat{\nu}}(z)=U^{\nu}(z)=\lim_{j\to+\infty}\int\log\frac{1}{|z-t|}\,d\nu_{d_{j}}(t)\]
where \(\{\nu_{d_{j}}\}\) is a subsequence of \(\{\nu_{d}\}\) converging to \(\nu\) in the weak\({}^{*}\) topology. Therefore, using the hypothesis (46) and Remark 1 we obtain

\[U^{\hat{\nu}}(z)=\lim_{j\to+\infty}-\frac{1}{d_{j}}\log|\omega_{d_{j}}(z)|=U^{\mu_{E}}(z)\quad\text{for all }z\in\mathbb{C}\setminus E.\]
Since \(\mu_{E}\) and \(\hat{\nu}\) are probability measures both supported on \(\partial E\) one deduces that \(\hat{\nu}=\mu_{E}\) from Carleson's unicity theorem [13, Theorem II.4.13]. Hence, \(\mu_{E}(B)=\hat{\nu}(B)=0\). It follows that \(\text{Cap}(A)=\text{Cap}(E)\) and \(\mu_{E}=\mu_{A}\) by uniqueness of the equilibrium measure. Since \(E\) is regular and polynomially convex, we therefore deduce that
\[I(\mu_{E})=U^{\mu_{E}}(z)=U^{\mu_{A}}(z)<I(\mu_{A})=I(\mu_{E})\quad\text{for all }z\in B\]
which is an absurdity.
Proof of Theorem 5.3.: Fix a function \(f\) holomorphic in an open neighborhood of \(K\). For each \(m=1,\ldots,n\), we can take an open neighbourhood \(D_{m}\) of \(K_{m}\) such that \(f\) is holomorphic on \(D:=D_{1}\times\cdots\times D_{n}\). Let \(m\in\{1,\ldots,n\}\). Let \(R_{m}:=\min_{t\in\partial D_{m}}\phi_{K_{m}}(t)\) and \(K_{R_{m}}:=\{t\in\mathbb{C}:\phi_{K_{m}}(t)<R_{m}\}\). Since \(K_{m}\) is polynomially convex we have \(R_{m}>1\). Fix an arbitrary \(r_{m}\in(1,R_{m})\) and let
\(C_{m}:=\{t\in\mathbb{C}:\phi_{K_{m}}=r_{m}\}\). Since \(K_{m}\) is regular, polynomially convex and compact, \(C_{m}\) consists of a finite number of \(\mathcal{C}^{2}\) Jordan curves and \(\text{int}(C_{m}):=\{t\in\mathbb{C}:\phi_{K_{m}}<r_{m}\}\). Let \(\varepsilon_{1}>0\) and \(\varepsilon_{m}:=\frac{r_{m}}{r_{1}}\varepsilon_{1}\), \(m=2,\ldots,n\), such that \(r_{m}-2\varepsilon_{m}>1\) for all \(m=1,\ldots,n\) (we can choose any \(0<\varepsilon_{1}<\min_{m=1,\ldots,n}(r_{m}-1)\frac{r_{1}}{2r_{m}}\)). We have
\[r_{m}=\phi_{K_{m}}(z)\geq\left(\frac{|\omega_{m,d}(z)|}{\|\omega_{m,d}\|_{K_{m }}}\right)^{1/d}\quad\text{for all }z\in C_{m}.\]
Moreover, by assumption
\[\left(\frac{|\omega_{m,d}(z)|}{\|\omega_{m,d}\|_{K_{m}}}\right)^{1/d} \longrightarrow\phi_{K_{m}}(z),\quad d\to+\infty,\text{ uniformly on }C_{m}.\]
Hence, there exist \(N=N(\varepsilon_{1})\in\mathbb{N}\) such that \(\frac{|\omega_{m,d}(z)|}{\|\omega_{m,d}\|_{K_{m}}}\geq(r_{m}-\varepsilon_{m}) ^{d}\) for all \(d\geq N\), \(m=1,\ldots,n\) and \(z\in C_{m}\).
Let us prove that there exists \(M_{1}>0\) such that \(|\omega_{m,d}(z)|\geq M_{1}(r_{m}-\varepsilon_{m})^{d}\|\omega_{m,d}\|_{K_{m}}\) for all \(d\in\mathbb{N}\), \(m=1,\ldots,n\) and \(z\in C_{m}\). For \(d\geq N\) we have already obtained the result with \(M_{1}=1\). Fix \(d<N\) and \(z\in C_{m}\). We have
\[|\omega_{m,d}(z)|\geq\text{dist}(C_{m},K_{m})^{d}\geq\text{dist}(C_{m},K_{m})^{d}\frac{\left\|\omega_{m,d}\right\|_{K_{m}}}{\left(\text{diam}(K_{m})\right)^{d}}.\]
We distinguish two cases:
* If \(\text{dist}(C_{m},K_{m})\geq\text{diam}(K_{m})\) then \[|\omega_{m,d}(z)|\geq\left\|\omega_{m,d}\right\|_{K_{m}}=\left\|\omega_{m,d}\right\|_{K_{m}}\frac{(r_{m}-\varepsilon_{m})^{d}}{(r_{m}-\varepsilon_{m})^{d}}\geq\left\|\omega_{m,d}\right\|_{K_{m}}\frac{(r_{m}-\varepsilon_{m})^{d}}{(r_{m}-\varepsilon_{m})^{N}}.\]
* If \(\text{dist}(C_{m},K_{m})<\text{diam}(K_{m})\) then \(\left(\frac{\text{dist}(C_{m},K_{m})}{\text{diam}(K_{m})}\right)^{d}\geq\left(\frac{\text{dist}(C_{m},K_{m})}{\text{diam}(K_{m})}\right)^{N}\) and hence \[|\omega_{m,d}(z)| \geq\left(\frac{\text{dist}(C_{m},K_{m})}{\text{diam}(K_{m})}\right)^{N}\left\|\omega_{m,d}\right\|_{K_{m}}\frac{(r_{m}-\varepsilon_{m})^{d}}{(r_{m}-\varepsilon_{m})^{d}}\] \[\geq\left(\frac{\text{dist}(C_{m},K_{m})}{\text{diam}(K_{m})}\right)^{N}\left\|\omega_{m,d}\right\|_{K_{m}}\frac{(r_{m}-\varepsilon_{m})^{d}}{(r_{m}-\varepsilon_{m})^{N}}.\]
In both cases take \(M_{1}=\frac{1}{\max_{m=1,\ldots,n}(r_{m}-\varepsilon_{m})^{N}}\cdot\min_{m=1,\ldots,n}\left\{1,\left(\frac{\text{dist}(C_{m},K_{m})}{\text{diam}(K_{m})}\right)^{N}\right\}\) to conclude.
From Lemma 5.2, we deduce for all \(i_{1},\ldots,i_{n}\in\mathbb{N}_{0}\) that
\[|a_{i_{1},\ldots,i_{n}}| \leq\frac{1}{(2\pi)^{n}}\int_{C_{1}}\ldots\int_{C_{n}}\frac{|f(z_{1},\ldots,z_{n})||dz_{1}|\ldots|dz_{n}|}{\prod_{m=1}^{n}|\omega_{m,i_{m}+1}(z_{m})|}\] \[\leq\frac{1}{(2\pi)^{n}}\frac{\left\|f\right\|_{C_{1}\times\cdots\times C_{n}}\cdot|C_{1}|\times\cdots\times|C_{n}|}{\prod_{m=1}^{n}\left(\min_{z_{m}\in C_{m}}|\omega_{m,i_{m}}(z_{m})|\right)\cdot(\min_{m}(\text{dist}(C_{m},K_{m})))^{n}}\] \[=\frac{M_{2}}{\prod_{m=1}^{n}\left(\min_{z_{m}\in C_{m}}|\omega_{m,i_{m}}(z_{m})|\right)},\quad\text{where }M_{2}:=M_{2}(f,r_{1},\ldots,r_{n})\] \[\leq\frac{M_{2}}{\prod_{m=1}^{n}\left(M_{1}(r_{m}-\varepsilon_{m})^{i_{m}}\left\|\omega_{m,i_{m}}\right\|_{K_{m}}\right)}\] \[=\frac{M_{3}}{\prod_{m=1}^{n}\left((r_{m}-\varepsilon_{m})^{i_{m}}\left\|\omega_{m,i_{m}}\right\|_{K_{m}}\right)},\]
where \(M_{3}:=M_{3}(f,r_{1},\ldots,r_{n})=\frac{M_{2}}{M_{1}^{n}}\). Therefore, for any \(z=(z_{1},\ldots,z_{n})\in\text{int}(C_{1})\times\cdots\times\text{int}(C_{n})\) we have
\[|L_{d}f(z)| \leq\sum_{i_{1}+\cdots+i_{n}\leq d}|a_{i_{1},\ldots,i_{n}}|\prod_{ m=1}^{n}\left|\omega_{m,i_{m}}(z_{m})\right|\] \[\leq\sum_{i_{1}+\cdots+i_{n}\leq d}\frac{M_{3}}{\prod_{m=1}^{n} \left((r_{m}-\varepsilon_{m})^{i_{m}}\right)}\prod_{m=1}^{n}\frac{\left| \omega_{m,i_{m}}(z_{m})\right|}{\left\|\omega_{m,i_{m}}\right\|_{K_{m}}}.\]
Now fix a compact set \(F\subset\text{int}(C_{1})\times\cdots\times\text{int}(C_{n})\). We can choose \(\varepsilon_{1}\in(0,\min_{m=1,\ldots,n}(r_{m}-1)\frac{r_{1}}{2r_{m}})\) small enough such that \(F\subset\{\phi_{K_{1}}\leq r_{1}-2\varepsilon_{1}\}\times\cdots\times\{\phi_{K_{n}}\leq r_{n}-2\varepsilon_{n}\}\) (where \(\varepsilon_{m}=\frac{r_{m}}{r_{1}}\varepsilon_{1}\) as before). Let \(z=(z_{1},\ldots,z_{n})\in F\). We have \(\frac{|\omega_{m,i_{m}}(z_{m})|}{\left\|\omega_{m,i_{m}}\right\|_{K_{m}}}\leq\left(\phi_{K_{m}}(z_{m})\right)^{i_{m}}\leq(r_{m}-2\varepsilon_{m})^{i_{m}}\) for \(m=1,\ldots,n\). Therefore,
\[|L_{d}f(z)| \leq M_{3}\sum_{i_{1}+\cdots+i_{n}\leq d}\prod_{m=1}^{n}\left( \frac{r_{m}-2\varepsilon_{m}}{r_{m}-\varepsilon_{m}}\right)^{i_{m}}\] \[\leq M_{3}\sum_{i_{1}+\cdots+i_{n}\leq d}\prod_{m=1}^{n}\left( \frac{r_{1}-2\varepsilon_{1}}{r_{1}-\varepsilon_{1}}\right)^{i_{m}}=M_{3}\sum_{ l=0}^{d}\sum_{i_{1}+\cdots+i_{n}=l}\left(\frac{r_{1}-2\varepsilon_{1}}{r_{1}- \varepsilon_{1}}\right)^{l}\] \[\leq M_{3}\sum_{l=0}^{d}a_{n,l}\quad\text{where }a_{n,l}=\binom{n+l-1}{l} \left(\frac{r_{1}-2\varepsilon_{1}}{r_{1}-\varepsilon_{1}}\right)^{l}.\]
The expression on the right in the previous inequality is the partial sum of a series that converges since \(\lim\limits_{l\rightarrow+\infty}\left[a_{n,l}\right]^{1/l}=(r_{1}-2\varepsilon_{1})/(r_{1}-\varepsilon_{1})<1\). Consequently, \((L_{d}f)_{d}\) is a sequence of entire functions which converges uniformly (even normally) on all compact subsets of \(\mathrm{int}(C_{1})\times\cdots\times\mathrm{int}(C_{n})\) (including \(K\)). By the arbitrariness of \(r_{m}\), \(m=1,\ldots,n\), \((L_{d}f)_{d}\) converges uniformly on all compact subsets of \(K_{R_{1}}\times\cdots\times K_{R_{n}}\). Hence, the limit function \(g\) of \((L_{d}f)_{d}\) is also holomorphic on \(K_{R_{1}}\times\cdots\times K_{R_{n}}\) and from Lemma 5.1
\[g(p_{i_{1},\ldots,i_{n}})=f(p_{i_{1},\ldots,i_{n}})\quad\text{for all }p_{i_{1},\ldots,i_{n}}=(\xi_{i_{1}}^{(1)},\ldots,\xi_{i_{n}}^{(n)}), \tag{47}\]
for \(i_{1},\ldots,i_{n}\in\mathbb{N}_{0}\). Let us prove by induction on \(n\) that (47) implies \(f=g\) on \(K_{R_{1}}\times\cdots\times K_{R_{n}}\). Suppose \(n=1\). Then it follows from Lemma 5.4 that each connected component of \(K_{R_{1}}\) contains infinitely many points at which \(f\) and \(g\) coincide. Therefore, from the identity theorem for holomorphic functions, \(f\equiv g\) on each connected component of \(K_{R_{1}}\). So, \(f\equiv g\) on \(K_{R_{1}}\). Induction step: let \(n\geq 1\). Suppose that for any \(m\leq n\) and for any functions \(F\) and \(G\) holomorphic on \(K_{R_{1}}\times\cdots\times K_{R_{m}}\) such that \(F\equiv G\) on \(\{p_{i_{1},\ldots,i_{m}}\ :\ i_{1},\ldots,i_{m}\in\mathbb{N}_{0}\}\) we have \(F\equiv G\) on \(K_{R_{1}}\times\cdots\times K_{R_{m}}\). Let \(f\) and \(g\) be two holomorphic functions on \(K_{R_{1}}\times\cdots\times K_{R_{n+1}}\) satisfying
\[f(\xi_{i_{1}}^{(1)},\ldots,\xi_{i_{n}}^{(n)},\xi_{i_{n+1}}^{(n+1)})=g(\xi_{i_ {1}}^{(1)},\ldots,\xi_{i_{n}}^{(n)},\xi_{i_{n+1}}^{(n+1)})\quad i_{1},\ldots, i_{n+1}\in\mathbb{N}_{0}.\]
If we fix \(i_{n+1}\in\mathbb{N}_{0}\) then by hypothesis of induction we have for all \(i_{1},\ldots,i_{n}\in\mathbb{N}_{0}\)
\[f(.,\xi_{i_{n+1}}^{(n+1)})=g(.,\xi_{i_{n+1}}^{(n+1)})\quad\text{on }K_{R_{1}}\times\cdots\times K_{R_{n}}\]
as the functions \(\xi\longmapsto f(\xi,\xi_{i_{n+1}}^{(n+1)})\) and \(\xi\longmapsto g(\xi,\xi_{i_{n+1}}^{(n+1)})\) are holomorphic on \(K_{R_{1}}\times\cdots\times K_{R_{n}}\). Now, set \(f_{z}(.)=f(z,.)\) and \(g_{z}(.)=g(z,.)\) for any \(z\in K_{R_{1}}\times\cdots\times K_{R_{n}}\) fixed. \(f_{z}\) and \(g_{z}\) are holomorphic on \(K_{R_{n+1}}\) and satisfy \(f_{z}(\xi_{i_{n+1}}^{(n+1)})=g_{z}(\xi_{i_{n+1}}^{(n+1)})\) for all \(i_{n+1}\in\mathbb{N}_{0}\). Again by hypothesis of induction we can conclude that \(f_{z}\equiv g_{z}\) for all \(z\in K_{R_{1}}\times\cdots\times K_{R_{n}}\). Thus, \(f\equiv g\) on \(K_{R_{1}}\times\cdots\times K_{R_{n+1}}\). Consequently, we have proven that \(f\equiv g\) on \(K_{R_{1}}\times\cdots\times K_{R_{n}}\) for all \(n\in\mathbb{N}\). Thus, \(L_{d}f\) converges uniformly to \(f\) on all compact subsets of \(K_{R_{1}}\times\cdots\times K_{R_{n}}\), which contains a closed neighbourhood of \(K\).
_Remarks_.:
1. Lemma 5.4 is crucial for the previous proof because it allows us to apply the identity theorem for holomorphic functions in \(\mathbb{C}\). Indeed, without Lemma 5.4 the proof given in [14] is valid only under the assumption that the compact sets \(K_{j}\), \(j=1,\ldots,n\), are all connected.
2. Theorem 5.3 can also be deduced from [1, Theorem 3.1] by means of Newton product of Lagrange interpolation operators.
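As a numerical illustration of Theorem 5.3 (our own check; the test function, the grid used to extract the one-dimensional pseudo-Leja points, and the degrees are example choices), one can interpolate a holomorphic function on \([-1,1]^{2}\) at intertwined nodes and observe the geometric decay of the error.

```python
import numpy as np

def multi_indices(d):
    # all (k1, k2) with k1 + k2 <= d, ordered by total degree
    return [(k1, s - k1) for s in range(d + 1) for k1 in range(s + 1)]

# one-dimensional pseudo-Leja points for [-1, 1], extracted greedily from a fine grid
grid = np.linspace(-1.0, 1.0, 2001)
leja = [1.0]
for _ in range(12):
    prods = np.ones_like(grid)
    for p in leja:
        prods *= np.abs(grid - p)
    leja.append(float(grid[np.argmax(prods)]))

f = lambda x, y: np.exp(x + y)                      # holomorphic test function
xt, yt = np.meshgrid(np.linspace(-1, 1, 40), np.linspace(-1, 1, 40))

for d in (2, 4, 8):
    idxs = multi_indices(d)
    nodes = [(leja[i1], leja[i2]) for (i1, i2) in idxs]    # p_{i1, i2}, i1 + i2 <= d
    A = np.array([[x**k1 * y**k2 for (k1, k2) in idxs] for (x, y) in nodes])
    c = np.linalg.solve(A, np.array([f(x, y) for (x, y) in nodes]))
    Ld = sum(ci * xt**k1 * yt**k2 for ci, (k1, k2) in zip(c, idxs))
    print(d, np.max(np.abs(Ld - f(xt, yt))))        # error should decay geometrically
```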
## 6 Construction of pseudo-Leja sequences from weakly admissible meshes
In the one-dimensional case, besides having the same properties as Leja nodes, pseudo-Leja nodes are more practical than Leja nodes because they are easier to compute thanks to the use of (weakly) admissible meshes [2]. In this section, we generalize this advantage to the multidimensional case.
**Definition 6.1**.: We say that a sequence of sets \((\mathcal{A}_{d})_{d\in\mathbb{N}}\) is a weakly admissible mesh for a compact set \(K\) if the following conditions are satisfied:
* \(\mathcal{A}_{d}\) is a finite subset of \(K\),
* There exists a sequence \((M_{d})_{d\in\mathbb{N}}\) of subexponential growth such that for every polynomial \(P\) of degree at most \(d\), \[\left\|P\right\|_{K}\leq M_{d}\|P\|_{\mathcal{A}_{d}}.\] (48)
The sequence \((M_{d})_{d\in\mathbb{N}}\) is referred to as the growth of the mesh \((\mathcal{A}_{d})_{d\in\mathbb{N}}\). In the case where the sequence \((M_{d})_{d\in\mathbb{N}}\) is bounded by \(M\), we say that \((\mathcal{A}_{d})_{d\in\mathbb{N}}\) is an admissible mesh of parameter \(M\).
Note that we necessarily have for every \(d\in\mathbb{N}\), \(\mathrm{Card}(\mathcal{A}_{d})\geq h_{d}^{(n)}\) since the set \(\mathcal{A}_{d}\) is \(\mathbb{C}_{d}[z_{1},\ldots,z_{n}]\)-determining.
_Example_.: [6, Proposition 1] Let us consider the convex quadrangle
\(K=\left\{\mathbf{x}=\sum_{i=1}^{4}c_{i}\mathbf{a}_{i}:\ c_{i}\geq 0,\ \sum_{i=1}^{4}c_{i}=1\right\}\) with vertices \(\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{a}_{3},\mathbf{a}_{4}\) and let \(\sigma\) be the bilinear transformation of the square \([-1,1]^{2}\) onto \(K\) defined by
\[\sigma(u,v)=\frac{1}{4}\left(\mathbf{a}_{1}(1-u)(1-v)+\mathbf{a}_{2}(1-u)(1+v)+\mathbf{a}_{3}(1+u)(1-v)+\mathbf{a}_{4}(1+u)(1+v)\right). \tag{49}\]
For every fixed \(\mu>1\), the sequence of "oblique" Gauss-Chebyshev grids
\[\mathcal{A}_{d}=\left\{\sigma\left(\xi_{j},\xi_{k}\right),1\leq j,k\leq\lceil \mu d\rceil\right\},\quad\xi_{s}=\cos\frac{(2s-1)\pi}{2\lceil\mu d\rceil} \tag{50}\]
is an admissible mesh of \(K\) with constant \(C=1/\cos^{2}(\pi/2\mu)\) and cardinality \(\lceil\mu d\rceil^{2}\) (\(\lceil a\rceil\) is the smallest natural number greater than or equal to \(a\)).
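For concreteness, the grid (50) can be generated as follows (a sketch with example vertices; we assume the \(\mathbf{a}_{i}\) are given as real two-dimensional vectors, listed in an order making \(\sigma\) a bijection onto a convex quadrangle).

```python
import numpy as np

def oblique_chebyshev_mesh(a1, a2, a3, a4, d, mu=2.0):
    # "oblique" Gauss-Chebyshev grid (50) on the quadrangle with vertices a1, ..., a4
    m = int(np.ceil(mu * d))
    xi = np.cos((2 * np.arange(1, m + 1) - 1) * np.pi / (2 * m))   # Chebyshev nodes xi_s
    u, v = np.meshgrid(xi, xi)
    sigma = 0.25 * (np.multiply.outer((1 - u) * (1 - v), a1)
                    + np.multiply.outer((1 - u) * (1 + v), a2)
                    + np.multiply.outer((1 + u) * (1 - v), a3)
                    + np.multiply.outer((1 + u) * (1 + v), a4))
    return sigma.reshape(-1, 2)                                    # ceil(mu*d)^2 points

mesh = oblique_chebyshev_mesh(np.array([0.0, 0.0]), np.array([0.0, 1.0]),
                              np.array([2.0, 0.0]), np.array([1.5, 1.5]), d=4)
print(mesh.shape)   # (64, 2) for mu = 2, d = 4
```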
**Theorem 6.1**.: _Let \((\mathcal{A}_{d})_{d\in\mathbb{N}}\) be a weakly admissible mesh of growth \((M_{d})_{d\in\mathbb{N}}\) for a compact set \(K\subset\mathbb{C}^{n}\). We define inductively a sequence \((\xi_{N})_{N\geq 0}\) as follows: we choose arbitrarily \(\xi_{0}\in K\) and for each \(d\in\mathbb{N}\) we select, one after the other, the points \(\xi_{h_{d-1}^{(n)}},\dots,\xi_{h_{d}^{(n)}-2}\) and \(\xi_{h_{d}^{(n)}-1}\) from \(\mathcal{A}_{d}\) such that_
\[\Big{|}\,\text{VDM}^{(n)}(\xi_{0},\dots,\xi_{N-1},\xi_{N})\Big{|}=\max_{\xi\in \mathcal{A}_{d}}\Big{|}\,\text{VDM}^{(n)}(\xi_{0},\dots,\xi_{N-1},\xi)\Big{|}, \tag{51}\]
_for \(h_{d-1}^{(n)}\leq N<h_{d}^{(n)}\). Then the sequence \((\xi_{N})_{N\geq 0}\) is a pseudo-Leja sequence of Edrei growth \((\overline{M}_{N})_{N\geq 1}\) for \(K\), where \(\overline{M}_{N}=M_{d}\) for all \(h_{d-1}^{(n)}\leq N<h_{d}^{(n)}\) and for all \(d\in\mathbb{N}\)._
Proof.: It suffices to observe that for every \(h_{d-1}^{(n)}\leq N<h_{d}^{(n)}\), the mapping
\(\xi\longmapsto\Big{|}\text{VDM}^{(n)}(\xi_{0},\dots,\xi_{N-1},\xi)\Big{|}\) is the absolute value of a polynomial of \(n\) complex variables of degree at most \(d\). Then the result follows from the definition of a weakly admissible mesh.
_Remark_.: If \((\mathcal{A}_{d})_{d\in\mathbb{N}}\) is a weakly admissible mesh then for every \(m\in\mathbb{N}\) the sequence
\((\underbrace{\mathcal{A}_{m},\mathcal{A}_{m},\dots,\mathcal{A}_{m}}_{m\text{ terms}},\mathcal{A}_{m+1},\mathcal{A}_{m+2},\dots)\) is also a weakly admissible mesh of the same growth.
Due to the above remark, it will be more practical to use the following corollary in order to compute pseudo-Leja points. This will make the computations faster.
**Corollary 6.1.1**.: _Let \((\mathcal{A}_{d})_{d\in\mathbb{N}}\) be a weakly admissible mesh of growth \((M_{d})_{d\in\mathbb{N}}\) for a compact set \(K\subset\mathbb{C}^{n}\). Fix \(d\in\mathbb{N}\). Any discrete Leja sequence \(\left(\xi_{0},\dots,\xi_{h_{d}^{(n)}-1}\right)\) extracted from \(\mathcal{A}_{d}\), i.e \(\xi_{0}\in\mathcal{A}_{d}\) and_
\[\Big{|}\,\text{VDM}^{(n)}(\xi_{0},\dots,\xi_{N-1},\xi_{N})\Big{|}=\max_{\xi\in \mathcal{A}_{d}}\Big{|}\,\text{VDM}^{(n)}(\xi_{0},\dots,\xi_{N-1},\xi)\Big{|}, \tag{52}\]
_for \(1\leq N<h_{d}^{(n)}\), forms the first \(h_{d}^{(n)}\) points of some pseudo-Leja sequence for \(K\) of Edrei growth \((\overline{M}_{N})_{N\geq 0}\) such that \(\overline{M}_{N}=M_{d}\) for all \(1\leq N<h_{d}^{(n)}\)._
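The greedy selection (52) is easy to implement. The sketch below specialises to one complex variable, where \(|\,\text{VDM}\,|\) factorises into a product of pairwise distances; taking \(\xi_{0}\) of maximal modulus is a common convention, not something the corollary requires.

```python
import numpy as np

def discrete_leja_1d(mesh, n_points):
    """Greedy extraction of discrete Leja points from a finite mesh, cf. (52).
    In one variable |VDM(xi_0,...,xi_N)| = prod_{i<j} |xi_j - xi_i|, so each
    greedy step maximises prod_i |xi - xi_i| over the mesh."""
    mesh = np.asarray(mesh, dtype=complex)
    pts = [mesh[np.argmax(np.abs(mesh))]]     # xi_0: a point of maximal modulus
    for _ in range(n_points - 1):
        dist = np.abs(mesh[:, None] - np.array(pts)[None, :])
        pts.append(mesh[np.argmax(dist.prod(axis=1))])
    return np.array(pts)

# Leja-like points on [-1, 1], extracted from a Chebyshev admissible mesh
mesh = np.cos((2 * np.arange(1, 201) - 1) * np.pi / 400)
print(np.round(discrete_leja_1d(mesh, 8).real, 3))
```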
AcknowledgementI would like to express my gratitude to Professor Leokadia Bialas-Ciez, my research supervisor, for her patient guidance, enthusiastic encouragement and useful critiques of this research work.
Funding: This work was partially supported by the National Science Center, Poland, grant Preludium Bis 1 N\({}^{\text{o}}\) 2019/35/O/ST1/02245
|
2305.15999 | An Overview of FPGA-inspired Obfuscation Techniques | Building and maintaining a silicon foundry is a costly endeavor that requires
substantial financial investment. From this scenario, the semiconductor
business has largely shifted to a fabless model where the Integrated Circuit
supply chain is globalized but potentially untrusted. In recent years, several
hardware obfuscation techniques have emerged to thwart hardware security
threats related to untrusted IC fabrication. Reconfigurable-based obfuscation
schemes have shown great promise of security against state-of-the-art attacks
-- these are techniques that rely on the transformation of static logic
configurable elements such as Look Up Tables (LUTs). This survey provides a
comprehensive analysis of reconfigurable-based obfuscation techniques,
evaluating their overheads and enumerating their effectiveness against all
known attacks. The techniques are also classified based on different factors,
including the technology used, element type, and IP type. Additionally, we
present a discussion on the advantages of reconfigurable-based obfuscation
techniques when compared to Logic Locking techniques and the challenges
associated with evaluating these techniques on hardware, primarily due to the
lack of tapeouts. The survey's findings are essential for researchers
interested in hardware obfuscation and future trends in this area. | Zain Ul Abideen, Sumathi Gokulanathan, Muayad J. Aljafar, Samuel Pagliarini | 2023-05-25T12:43:12Z | http://arxiv.org/abs/2305.15999v1 | # An Overview of FPGA-inspired Obfuscation Techniques
###### Abstract.
Building and maintaining a silicon foundry is a costly endeavor that requires substantial financial investment. From this scenario, the semiconductor business has largely shifted to a fabless model where the Integrated Circuit supply chain is globalized but potentially untrusted. In recent years, several hardware obfuscation techniques have emerged to thwart hardware security threats related to untrusted IC fabrication. Reconfigurable-based obfuscation schemes have shown great promise of security against state-of-the-art attacks - these are techniques that rely on the transformation of static logic into reconfigurable elements such as Look Up Tables (LUTs). This survey provides a comprehensive analysis of reconfigurable-based obfuscation techniques, evaluating their overheads and enumerating their effectiveness against all known attacks. The techniques are also classified based on different factors, including the technology used, element type, and IP type. Additionally, we present a discussion on the advantages of reconfigurable-based obfuscation techniques when compared to Logic Locking techniques and the challenges associated with evaluating these techniques on hardware, primarily due to the lack of tapeouts. The survey's findings are essential for researchers interested in hardware obfuscation and future trends in this area.
Hardware security, Trustworthy hardware, Logic obfuscation, FPGA, reconfigurable logic, LUT-based obfuscation
## 1. Introduction
Integrated Circuit (IC)-based systems have been used in both consumer and military electronics for several decades, enabling a range of devices, from smartphones to satellites. The continued advancements in technology have also led to the adoption of IC-based systems in newer domains like the Internet of Things (IoT) and multi-cloud environments (Zain et al., 2018). In every domain, the demand for high-performance ICs is increasing. This trend is driven by the growing complexity of modern systems and the need for higher processing speeds to handle ever-larger amounts of data. As a result, the semiconductor industry is experiencing a surge in demand for products such as memory chips, microprocessors, and sensors. For example, the global IC market is forecast to grow from $489 billion in 2021 to $1.136 trillion in 2028 (Sundhi et al., 2020). On the other hand, ICs require advanced manufacturing processes and specialized equipment, which are only available in a limited number of foundries.
As the industry continues to evolve, the complexity of building and maintaining a foundry increases, resulting in skyrocketing costs. As an example, the estimated cost of building a 3nm foundry |
2304.04565 | SoccerNet-Caption: Dense Video Captioning for Soccer Broadcasts
Commentaries | Soccer is more than just a game - it is a passion that transcends borders and
unites people worldwide. From the roar of the crowds to the excitement of the
commentators, every moment of a soccer match is a thrill. Yet, with so many
games happening simultaneously, fans cannot watch them all live. Notifications
for main actions can help, but lack the engagement of live commentary, leaving
fans feeling disconnected. To fulfill this need, we propose in this paper a
novel task of dense video captioning focusing on the generation of textual
commentaries anchored with single timestamps. To support this task, we
additionally present a challenging dataset consisting of almost 37k timestamped
commentaries across 715.9 hours of soccer broadcast videos. Additionally, we
propose a first benchmark and baseline for this task, highlighting the
difficulty of temporally anchoring commentaries yet showing the capacity to
generate meaningful commentaries. By providing broadcasters with a tool to
summarize the content of their video with the same level of engagement as a
live game, our method could help satisfy the needs of the numerous fans who
follow their team but cannot necessarily watch the live game. We believe our
method has the potential to enhance the accessibility and understanding of
soccer content for a wider audience, bringing the excitement of the game to
more people. | Hassan Mkhallati, Anthony Cioppa, Silvio Giancola, Bernard Ghanem, Marc Van Droogenbroeck | 2023-04-10T13:08:03Z | http://arxiv.org/abs/2304.04565v1 | # SoccerNet-Caption: Dense Video Captioning for Soccer Broadcasts Commentaries
###### Abstract
Soccer is more than just a game - it is a passion that transcends borders and unites people worldwide. From the roar of the crowds to the excitement of the commentators, every moment of a soccer match is a thrill. Yet, with so many games happening simultaneously, fans cannot watch them all live. Notifications for main actions can help, but lack the engagement of live commentary, leaving fans feeling disconnected. To fulfill this need, we propose in this paper a novel task of dense video captioning focusing on the generation of textual commentaries anchored with single timestamps. To support this task, we additionally present a challenging dataset consisting of almost 37k timestamped commentaries across 715.9 hours of soccer broadcast videos. Additionally, we propose a first benchmark and baseline for this task, highlighting the difficulty of temporally anchoring commentaries yet showing the capacity to generate meaningful commentaries. By providing broadcasters with a tool to summarize the content of their video with the same level of engagement as a live game, our method could help satisfy the needs of the numerous fans who follow their team but cannot necessarily watch the live game. We believe our method has the potential to enhance the accessibility and understanding of soccer content for a wider audience, bringing the excitement of the game to more people.
(*) Equal contributions. Data/code available at www.soccer-net.org.
Contacts: {name.surname}@kaust.edu.sa / [email protected].
## 1 Introduction
Over the past decade, the quantity and quality of sports data have increased rapidly. This explosion has been driven by the benefits of automated analysis in various applications such as player performance analysis, insights into game strategy [68], and audience involvement. Live text commentaries provide a rich summary of the game to increase fan engagement for those who do not have the time or the opportunity to watch the game. However, they are usually exclusive to major professional leagues, while other games are often left out. Moving towards automated solutions relying on already available equipment is therefore essential for lower leagues and amateur soccer. Recently, there has been a growing interest in the research community in automatically generating text from videos. This task, called Dense Video Captioning (DVC), represents a significant research challenge due to the memory footprint of video data and the complexity of natural language. Besides, most research focuses on generic descriptions of events and activities in open-world scenes. However, in soccer, commentaries need to include rich factual, emotional, and even sensational content
Figure 1: **SoccerNet-Caption.** We provide a large-scale dataset for Single-anchored Dense Video Captioning (SDVC) in untrimmed soccer broadcast videos. Our SoccerNet-Caption dataset is composed of \(36,\!894\) textual commentaries, temporally anchored within \(715.9\) hours of soccer broadcasts. The comments describe the events occurring in the soccer game with rich factual, emotional, and sensational content.
to engage the fan. Sports is therefore the perfect playground for research in the video and language domain.
In this paper, we publicly release _SoccerNet-Caption_, the first dataset for dense video captioning in soccer broadcast videos. In particular, we provide \(36{,}894\) temporally-anchored rich textual commentaries describing \(715.9\) hours of soccer games. Some examples of comments from our dataset are shown in Figure 1. Along with the data, we introduce the new task of Single-anchored Dense Video Captioning (SDVC), which consists in generating localized captions describing the soccer game. As a first benchmark, we propose a two-stage approach including an action spotting module and a captioning module. More specifically, the spotting module produces temporal proposals for the captions. Then, the videos are trimmed around the proposals and passed to the captioning module to generate captions. We show in Figure 7 that our approach allows us to generate relevant captions to describe the game with rich semantics, but that the challenge is still open for major improvements.
**Contributions.** We summarize our contributions as follows: **(i)** We publicly release the largest dataset of soccer videos annotated with timestamped textual commentaries describing the game. **(ii)** We define the novel task of Single-anchored Dense Video Captioning (SDVC), where captions are anchored with a single timestamp and need to be generated in long untrimmed videos. **(iii)** We propose a first benchmark to tackle this task and provide a thorough ablation study and analysis.
## 2 Related Works
**Sport understanding.** The challenging aspect of sports video understanding has contributed to its growing popularity as a research focus [44, 66]. At first, methods focused on video classification [75], including the recognition of specific actions [35, 54], or segmentation of different game phases during the game [11]. More recently, the task of action spotting was introduced by Giancola _et al_. [20], aiming at providing the precise localization of specific actions within an untrimmed soccer broadcast video. Several methods were proposed to automate this process, for instance, using a context-aware loss [9], camera calibration and player localization [10], end-to-end training [29], spatio-temporal encoders [14], graphs-based methods [6], transformer-based methods [83], or anchor-based methods [59, 60]. Other methods focused on other aspects of sports understanding such as player detection [71], player tracking [41] and identification [72, 62], tactics analysis in soccer and fencing [64, 84], pass feasibility [3], 3D ball localization for basketball [69], or 3D shuttle trajectory reconstruction for badminton videos [40].
To support this research, large-scale datasets have been released, including the ones of Pappalardo et al. [46], Yu et al. [79], SoccerTrack [55], SoccerDB [33], and Deep-SportRadar [70]. The SoccerNet dataset, introduced by Giancola _et al_. [20], includes benchmarks for \(10\) different tasks related to soccer understanding, such as action spotting [16], camera calibration [8], and player re-identification [8]. Cioppa _et al_. also introduce the task of player tracking in long sequences, including long-term re-identification [12]. Yearly competitions are organized on this dataset to promote research in sports [21]. Our novel SDVC task data is part of the 2023 challenges.
**Video-Language Datasets.** Initially, research on video combined with language focused on video tagging, including actions and objects [13, 49, 51]. With the successes in image captioning [7, 48] (_i.e_. describing an image with natural language), research has shifted towards deep learning approaches for video captioning. Large-scale multimodal datasets have been introduced thanks to the rise of automatic speech recognition techniques and video-sharing platforms. YouTube has been a major data source for **YouTube-8M**[1], **HowTo100M**[43], and **ViTT**[30]. Other datasets focused on domain-specific videos such as cooking: **YouCook2**[81] and **TACoS**[15], or movies: **MAD**[61]. Further efforts have been proposed for egocentric vision such as **EPIC-KITCHENS-100**[13], and **Ego4D**[23]. On top of those datasets, various tasks have emerged such as video-language grounding [4, 27, 61], video question answering [23, 65], video clip captioning [18, 25, 53, 78] and dense video captioning [37, 81].
**Dense Video Captioning.** Krishna _et al_. [37] introduced this task, which consists in captioning temporally localized (start and end frame) activities in untrimmed video. This task differs from traditional video captioning [18], where a single caption is generated for short videos, and dense image captioning [34], where captions describe different regions of an image. Currently, **YouCook2**[81], including recipes for non-overlapping sequential events, and **ActivityNet-Captions**[37], including open-domain overlapping activities, are the standard benchmarks for dense video captioning. Our dataset introduces a new task of single-anchored dense video captioning for soccer video comment generation, which requires richer factual, emotional, and sensational comments. Traditionally, the solutions proposed for dense video captioning involve a two-stage "detect-then-describe" framework. The first module produces temporal proposals, and the second module generates captions around these proposals [31, 32, 37, 81]. With the rise of large datasets, many efforts have focused on efficient pre-training of models that combine both language and video [2, 4, 30, 42, 56, 76, 80]. Recent works also tried "YOLO-like" [50] approaches where localization and captioning are generated in one shot [74, 77]. Here, we propose a two-stage approach based on a pre-trained video encoder.
## 3 Dataset
**Data collection.** Our SoccerNet-Caption dataset comprises \(471\) SoccerNet untrimmed broadcast games, including the top five European leagues (EPL, La Liga, Ligue 1, Bundesliga, and Serie A) as well as the Champions League from 2014 to 2017. All videos are available at \(25\)fps in two resolutions: \(224\)p and \(720\)p, alongside frame features at \(2\)fps pre-extracted using _e.g_. ResNet or the Baidu feature encoder [82] at \(1\)fps, following SoccerNet's original data format. Since the role of the producer is to select the right camera to convey the story of the game to the viewer in the best possible way, these broadcast games are perfectly suited for a commentary generation task.
In this work, we provide novel textual comments embedded in time describing the game. We collect those comments by scraping the Flashscore website for \(471\) out of the \(500\) games included in the SoccerNet dataset. The commentaries for the remaining games were not available. The comments typically describe in a few sentences the main events occurring at specific times in the game, by giving insights about the involved players or teams, and the sequence of actions that led to this situation in a rich factual, emotional, and sensational way. Our data collection efforts resulted in \(36{,}894\) timestamped comments across \(715.9\) hours of video footage, including some metadata such as the type of event the comment relates to (_e.g_. an action, a fun fact, _etc_.).
Alongside these textual comments, we also collected metadata about the game, including the list of all players with their jersey numbers, the referees' names, and the teams' names. The metadata also includes the starting line-ups for each team, with their tactics, the \(11\) starting players, the substitutes, and any events associated with a player, such as goals, assists, substitutions, and yellow/red cards.
**Data anonymization.** Following traditional captioning datasets, we provide an anonymized version of the captions, where all player, referee, team, and coach names are replaced with generic tokens. In fact, most captioning methods are not suited to recognize the exact identity of the people shown in the videos. Hence, without including specific modules for identity classification, player tracking, and re-identification, it would be almost impossible to generate the correct names. However, we still provide the original captions for future research.
To anonymize the dataset, we leverage the game metadata to retrieve the team, coach, referee, and player names. Then, we automatically search through the comments for these specific names and replace them with a generic token ([TEAM], [COACH], [REFEREE], or [PLAYER]). Sometimes, the player and coach names available in the line-ups may differ from those mentioned in the comments as some particles or longer names may be truncated. To retrieve those names, we used advanced string-matching techniques to identify and reconcile any discrepancies between the names in the lineups and those mentioned in the comments. Finally, for the remaining names (_e.g_. compound surnames are formulated differently between the lineups and the comments), we manually refined the annotation. This approach ensures that we accurately link each name to the correct token.
Finally, we provide an intermediate anonymization, where each player is identified with a unique id (uid) inside the token ([Player_uid]). This alleviates the issue of different names for the same player in the original captions while maintaining their identity. Examples of such comments are provided in Figure 2.
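The token substitution itself reduces to a few lines of Python, sketched below with invented names; the real pipeline additionally applies the string matching and manual refinement described above.

```python
import re

def anonymize(comment, entities):
    """Replace entity names with generic tokens (a simplified sketch).
    `entities` maps a name to its kind, as read from the game metadata.
    Players get the identified [Player_uid] token; the fully anonymized
    version would use [PLAYER] instead. Longest names are substituted
    first to avoid clobbering substrings."""
    uid = {n: i for i, (n, k) in enumerate(entities.items()) if k == "PLAYER"}
    for name, kind in sorted(entities.items(), key=lambda kv: -len(kv[0])):
        token = f"[Player_{uid[name]}]" if kind == "PLAYER" else f"[{kind}]"
        comment = re.sub(re.escape(name), token, comment)
    return comment

entities = {"John Example": "PLAYER", "Example FC": "TEAM"}  # invented names
print(anonymize("John Example curls one in for Example FC!", entities))
# -> "[Player_0] curls one in for [TEAM]!"
```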
**Data format.** Following the SoccerNet format, we organize our textual annotations into individual JSON files for each game. Each file contains a dictionary that includes all metadata of the game (_e.g_. the starting lineups for each team, with their tactics, the \(11\) starting players, the substitutes, their jersey numbers, _etc_.) and the list of annotated comments. Each annotation is associated with a timestamp, the three versions of the comment (original description, identified, and anonymized), a boolean value indicating whether the comment is related to a key moment of the game, and a contextual label (_e.g_. corner, substitution, yellow card, whistle, soccer ball, time, injury, fun fact, attendance, penalty, red card, own goal, or missed penalty).
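For concreteness, the shape of one game's annotation file can be sketched as the following Python dictionary; the field names are our own illustrative guesses, not the dataset's actual keys.

```python
# Hypothetical sketch of one game's JSON file (field names are illustrative):
game = {
    "metadata": {"home": "...", "away": "...", "lineups": "..."},  # teams, tactics, players
    "annotations": [
        {
            "gameTime": "1 - 07:32",     # half - mm:ss anchor timestamp
            "description": "...",        # original comment
            "identified": "...",         # [Player_uid] version
            "anonymized": "...",         # [PLAYER]/[TEAM]/[COACH]/[REFEREE] version
            "important": True,           # is the comment a key moment of the game?
            "label": "corner",           # contextual label
        },
    ],
}
```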
**Data statistics.** The SoccerNet-Caption dataset contains an average of \(78.33\) temporally localized comments per game, resulting in a total of \(36{,}894\) captions for the entire dataset. This is equivalent to almost one comment every minute. As can be seen in Figure 3, the distribution of the comments within a single game over time shows a peak at the start of
Figure 2: **Comment anonymization. We provide three versions for each comment. The original commentary, an identified version where each player is associated with a unique id token, and an anonymized version where each entity is replaced by a specific token: [TEAM], [COACH], [REFEREE], and [PLAYER].**
the game that usually corresponds to the comment related to the first whistle of the referee. Then, there is a period of fewer comments in the first \(10\) minutes compared to the rest of the half-time that follows a uniform distribution. This shows that no bias can be used to find a good location for the comments, except at the very start of the game.
Finally, we analyze the content of each comment on a textual and semantic level. Figure 4 shows that the number of words per comment ranges from \(4\) to \(93\) words following a long tail distribution with \(21.38\) words on average. As mentioned, soccer has its own specific terminology for describing events. We can observe in Figure 5 that, apart from the generic tokens, the most commonly used words are soccer action-related verbs (_e.g_. kick, pass, cross) or soccer-related nouns (_e.g_. corner, box, goal). Additionally, it is important to note the over-representation of player and team names in the comments which motivates the use of anonymized comments.
**Novelty.** We compare our dataset with other recent captioning and dense video captioning datasets in Table 1. Our dataset provides the longest videos on average by a large margin. Processing such long videos in an end-to-end approach is still an open challenge in most video understanding tasks, which makes our dataset the perfect playground to innovate. SoccerNet-Caption also ranks third in terms of total video length, making it a large-scale dataset for video and language training. The other soccer-related datasets focus either on captioning short clips or on highlights. SoccerNet-Caption is the first dataset to anchor the comments with a single timestamp instead of a bounding box with a start and end timestamp for each comment. Following the work of SoccerNet on action spotting, we believe that it is arduous to annotate when a specific action starts and ends in soccer. Finally, compared to more generic datasets, soccer commentaries require richer content, including emotional or sensational sentences. By providing the original comments with the game metadata, we aim to push research toward identifying the players in the video to accurately describe what is happening in the game.
## 4 Single-anchored Dense Video Captioning
**Task.** We define the novel task of Single-anchored Dense Video Captioning (SDVC) as follows: Given a video, spot all instants where a comment should be anchored and generate sentences describing the events occurring around that time using natural language. This is different from the previously defined Dense Video Captioning (DVC) task [37], where the captions have temporal boundaries (start and end timestamps). For soccer games, this task is particularly challenging, as it requires describing complex sequences of actions involving subtle player movements, rather than well-separated activities with clearly defined boundaries.
Figure 4: **Distribution of the number of words per comment** This plot shows that the number of words per comment follows a long tail distribution with \(21.38\) words on average.
Figure 5: **Distribution of the most common words.** The most frequent words are the names of the teams and the players, followed by words semantically related to soccer verbs and soccer elements. There is a high imbalance in the distribution.
Figure 3: **Distribution of the comments.** Most comments are uniformly scattered in each half-time, except at the start of the game where a peak is followed by fewer comments for \(10\) minutes.
**Metric.** Defining a metric for this task requires evaluating both the temporal accuracy of the detected anchors and the quality of the generated commentaries. Finding the exact timestamp for a commentary is challenging, as it depends on the game evolution or on certain actions that need to be highlighted, rather than being associated with a specific action like the events defined in the action spotting task. Therefore, we need to include some tolerance around the ground-truth action spot. Additionally, evaluating the quality of a generated commentary is not trivial, as the expressions used are semantically much closer to each other than in an open-vocabulary setting. Hence, subtle variations in the chosen words need to be accurately evaluated when describing the game.
In the literature, several metrics have been proposed to evaluate dense video captioning methods. The SODA metric [17] evaluates the video narrative by finding the temporally optimal matching between generated and reference captions. Hammoudeh _et al_. [25] proposed another approach that focuses on the precision and recall of generated words with specific expressions defined beforehand and matched with the ground truth. This approach is useful in terms of semantic accuracy in captions but strongly depends on the chosen word dictionary. EMScore [57] focuses on the consistency between the video and candidate captions, relying on an Embedding Matching-based score. However, EMScore depends on the performance of the underlying vision-language pre-trained (VLP) model. Since no VLP models have been trained on sports videos yet, we cannot use this metric for our dataset. In order to quantify the spotting quality of the generated captions, we use the mAP@\(\delta\) for action spotting introduced in Giancola _et al_. [22], where \(\delta\) is the tolerance in seconds.
For the SDVC task, we first choose the metric proposed in ActivityNet-Captions (which still has consensus within the community) and adapt it to our single-anchored task as follows: for each ground-truth caption in a video, we build a time window with a chosen tolerance centered on its timestamp. We then use established captioning evaluation metrics (METEOR [38], BLEU [45], ROUGE [39], and CIDEr [73]) to estimate the language similarity between each generated caption and any ground-truth caption whose timestamp falls within the \(\delta\) tolerance. The performances are finally averaged over the video and the dataset. We call our metric METEOR@\(\delta\) (resp. BLEU@\(\delta\), ROUGE@\(\delta\), and CIDEr@\(\delta\)). As a second metric, we similarly adapt the SODA_c [17] metric by adding a time window around the ground-truth and generated captions.
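One plausible reading of this windowed protocol is sketched below; `sentence_sim` stands for any of the sentence-level metrics above, and the aggregation over matched pairs is our assumption rather than a verbatim specification of the adapted protocol.

```python
def captioning_score_at_delta(ground_truth, predictions, delta, sentence_sim):
    """Tolerance-windowed captioning score for one video (a sketch).
    ground_truth / predictions: lists of (timestamp_in_seconds, caption).
    Each prediction is scored against the ground-truth captions whose
    timestamps fall within +-delta of it; scores are averaged per video."""
    scores = []
    for t_hat, cap_hat in predictions:
        matched = [sentence_sim(cap, cap_hat)
                   for t, cap in ground_truth if abs(t - t_hat) <= delta]
        scores.append(max(matched) if matched else 0.0)
    return sum(scores) / max(len(scores), 1)
```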
## 5 Benchmarks
In this section, we benchmark a first baseline model on our task of Single-anchored Dense Video Captioning (SDVC). Particularly, we study the performance of several architectures and hyperparameters of our model on the spotting, captioning, and global SDVC task. We finally provide qualitative results on real sequences.
### SDVC baseline
Following the literature on dense video captioning, we propose a two-stage approach [31, 32, 37, 81] as an initial baseline model for our SDVC task. Our model, denoted by \(\mathbf{M}\), consists of a spotting model and a captioning model, which are cascaded together. The sub-models are trained independently and both consist of a frozen feature encoder \(\mathbf{E}\), followed by an aggregator module \(\mathbf{A}\), and either a spotting head \(\mathbf{S}\) or a captioning head \(\mathbf{C}\). The feature encoder generates a compressed per-frame feature representation of the video clip, which is then temporally pooled by the aggregator. The resulting clip feature representation is subsequently passed either to the spotting head (locating where to generate the comments) or to the captioning head (generating a comment). During inference, the spotting model generates
\begin{table}
\begin{tabular}{l||c|c|c|c|c|c|c} Name & Domain & \# Video & Avg Duration (s) & Total Duration (hr) & \# Sentences & Anchors & Task \\ \hline ActivityNet-Caption [37] & Open & 20k & 180 & 849 & 100k & \([t_{s},t_{e}]\) & DVC \\ YouCook2 [81] & Cooking & 2k & 30 & 176 & 15.4k & \([t_{s},t_{e}]\) & DVC \\ TACoS [15] / TACoS-Multilevel [52] & Cooking & 127/185 & 360 & 15.9/27.1 & 18.2k/52.5k & \([t_{s},t_{e}]\) & Retrieval \\ Charades-STA [58] & Human & 9.8k & 30 & 82.01 & 27.7k & \([t_{s},t_{e}]\) & Retrieval \\ VideoStory [19] & Social Media & 20k & 70 & 396 & 123k & \([t_{s},t_{e}]\) & Storytelling \\ ViTT [30] & Cooking + open & 8.2k & – & – & – (Tag) & \([t_{s},t_{e}]\) & DVC \\ EPIC-KITCHENS-100 [13] & Cooking-Ego & 700 & 514 & 100 & – & \([t_{s},t_{e}]\) & Action Recognition \\ Ego4D [23] & Ego & 9.6k & 1369.8 & 3,670 & 3.85M & \([t_{s},t_{e}]\) & Moment Queries \\ DiDeMo [27] & Open-human & 10K & 30 & 88.7 & 40.5k & \([t_{s},t_{e}]\) & Retrieval \\ MAD [61] & Movie & 560 & 6646.2 & 1207.3 & 384.6k & \([t_{s},t_{e}]\) & Grounding \\ Fine-grained Sports Narrative [78] & Basketball & 2K & – & – & 6520 & – & Video Captioning \\ Soccer captioning [25] & Soccer action clips & 22k & – & – & 22k & – & Video Captioning \\ GOAL (Qi _et al_. [47]) & Soccer action clips & 8.9k & 10.31 & 25.5 & 22k & – & KGVC \\ GOAL (Suglia _et al_. [63]) & Soccer highlights & 1.1k & 238 & 73.1 & 53k & – & Video Captioning \\ \hline SoccerNet-Caption (ours) & Soccer games & 942 & 2735.9 & 715.9 & 36,894 & \(t\) & SDVC \\ \end{tabular}
\end{table}
Table 1: **Comparison of SoccerNet-Caption with other captioning datasets.** Our dataset contains the second longest video sequences as well as the third longest total video length. This shows that SoccerNet-Caption is a great dataset for research in dense video captioning. Also, it is the first dataset on untrimmed soccer broadcast games, unlike GOAL (Suglia _et al_. [63]) which only focuses on soccer highlights and GOAL (Qi _et al_. [47]) which only focuses on soccer actions for Knowledge-Grounded Video Captioning (KGVC) [47].
a series of temporal proposals for a video:
\[\{t_{0}^{prop},...,t_{N}^{prop}\}=\mathbf{S}(\mathbf{A}(\mathbf{E}(\text{video})))\;,\]
where \(N\) is the total number of proposals. The original video is then trimmed around each proposal \(t_{n}^{prop}\) into a clip that is passed to the captioning model to generate a caption, following:
\[\mathit{caption}_{n}=\mathbf{C}(\mathbf{A}(\mathbf{E}(\text{video}[t_{n}^{prop }-\frac{\Delta}{2},t_{n}^{prop}+\frac{\Delta}{2}])))\;,\]
where \(\Delta\) is the window size of the captioning model. Our baseline model is illustrated in Figure 6.
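In code, the cascade amounts to the following sketch, where `spotter` and `captioner` stand for the trained compositions \(\mathbf{S}\circ\mathbf{A}\circ\mathbf{E}\) and \(\mathbf{C}\circ\mathbf{A}\circ\mathbf{E}\), assumed given.

```python
def sdvc_inference(frame_feats, spotter, captioner, fps, delta):
    """Two-stage SDVC inference: spot proposal timestamps, trim a clip of
    size delta (seconds) around each proposal, and caption the clip."""
    proposals = spotter(frame_feats)                 # [t_0, ..., t_N] in seconds
    half = int(delta * fps / 2)
    captions = []
    for t in proposals:
        c = int(t * fps)                             # proposal as a frame index
        clip = frame_feats[max(c - half, 0): c + half]
        captions.append((t, captioner(clip)))
    return captions
```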
**Feature encoder \(\mathbf{E}\).** We use the feature encoders provided in SoccerNet-v2, _i.e_. ResNet-152 [26], I3D [5], C3D [67], and Baidu [82]. The features are extracted at \(1\) or \(2\) fps from the original soccer broadcast videos. To speed up the training, we reduce the feature dimensionality to \(512\) for each image using PCA for the first three encoders, and a linear transformation for the Baidu features as suggested in [22].
**Aggregator \(\mathbf{A}\).** The aggregator module pools the frame feature vectors into one single compact feature representation of the clip. For our baseline, we use four trainable pooling modules proposed by Giancola _et al_. [22]: NetVLAD, NetRVLAD, and the temporally aware pooling modules NetVLAD++ and NetRVLAD++.
**Spotting head \(\mathbf{S}\).** We build our spotting head as a dense layer with sigmoid activation that outputs \(2\) classes: the presence of a comment (foreground) and the absence of a comment (background). During training, the video is randomly cropped into video chunks, and a binary cross-entropy loss function is applied to the output. During inference, we split the whole half-time video into overlapping clips and concatenate the predictions over time. We then use a Non-Maximum Suppression (NMS) algorithm to reduce redundant spots within a specific time window.
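The suppression step admits a simple greedy implementation, sketched below; the confidence threshold is our assumption, since only the NMS window is specified in the text.

```python
import numpy as np

def nms_1d(confidence, fps, nms_seconds, threshold=0.5):
    """Greedy 1D non-maximum suppression over per-frame foreground
    confidences (a sketch): repeatedly keep the strongest remaining spot
    and zero out its +-nms_seconds neighbourhood."""
    conf = np.asarray(confidence, dtype=float).copy()
    window = int(nms_seconds * fps)
    spots = []
    while conf.max() >= threshold:
        i = int(conf.argmax())
        spots.append(i / fps)                        # spot time in seconds
        conf[max(i - window, 0): i + window + 1] = 0.0
    return sorted(spots)
```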
**Captioning head \(\mathbf{C}\).** Our captioning head is composed of two fully connected layers with ReLU activation and dropout, and a vanilla LSTM [28] module with softmax activation. The fully connected layers project the output of the aggregator from the video hidden space to the language hidden space. The projected features are then used to initialize the hidden state of the LSTM module, which outputs the list of word confidence scores. During training, we use the cross-entropy loss between the predicted and ground-truth words. To stabilize the learning process, we use the teacher forcing method presented by Graves _et al_. [24]. As the soccer vocabulary is much more specific than generic language, we do not use pre-trained word embeddings, but learn the whole language during training. During inference, we sample words with a greedy approach, _i.e_. the next word is the one with the highest confidence.
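A PyTorch sketch of this head is given below. The sizes follow the stated hyperparameters where available (hidden size 512, word embedding 256, dropout 0.4, vocabulary 1,769, 4 LSTM layers); the exact way the projected feature initialises the LSTM state is our assumption.

```python
import torch
import torch.nn as nn

class CaptioningHead(nn.Module):
    """Sketch of the captioning head: two FC layers project the pooled clip
    feature into the language space, initialising the hidden state of a
    stacked LSTM that emits per-step word scores."""
    def __init__(self, feat_dim=512, hidden=512, vocab=1769, emb=256,
                 layers=4, p_drop=0.4):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop))
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, num_layers=layers, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, clip_feat, tokens):
        # teacher forcing: ground-truth tokens are fed as the input sequence
        h0 = self.proj(clip_feat).unsqueeze(0).repeat(self.lstm.num_layers, 1, 1)
        y, _ = self.lstm(self.embed(tokens), (h0, torch.zeros_like(h0)))
        return self.out(y)                           # (batch, steps, vocab) scores
```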
### Training parameters
To train and evaluate our model, we use SoccerNet-Caption without the "fun fact" and "attendance" commentaries, as they describe out-of-the-game content. During training, we use the Adam optimizer [36] with PyTorch's default \(\beta\) parameters. We reduce the learning rate by a factor of \(10\) when the validation loss plateaus for \(10\) consecutive epochs, with an initial value of \(10^{-3}\) and a stopping criterion of \(10^{-6}\). The temporal pooling module is initialized with \(64\) clusters. The dimension of the hidden vector of the LSTM is \(512\), the dropout value after the fully connected layers is set to \(0.4\), and the word embedding dimension is set to \(256\). Finally, the size of the vocabulary in SoccerNet-Caption is \(1{,}769\) words.
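This optimisation setup translates directly into PyTorch, as in the sketch below; the stand-in model and the placeholder validation losses are ours.

```python
import torch

model = torch.nn.Linear(512, 1769)                   # stand-in for a sub-model
opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # default PyTorch betas
sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, factor=0.1, patience=10)
for val_loss in [1.00, 0.95, 0.94]:                  # placeholder validation losses
    sched.step(val_loss)                             # reduce LR by 10x on plateau
    if opt.param_groups[0]["lr"] < 1e-6:             # stopping criterion
        break
```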
### Results
We now first separately study the performance of our spotting model and the captioning module, then provide the performance of the whole pipeline for our new SDVC task.
**Commentary spotting results.** We first study the use of different combinations of feature encoders and aggregators, by fixing both the size of the input video chunk and the NMS window to \(15\) seconds. The results are presented in Table 2. As can be seen, the best results are obtained using the Baidu feature encoder and NetVLAD or NetVLAD++ pooling. This suggests that using a feature extractor fine-tuned on soccer data leads to better performance even for commentary spotting. However, there is not such a clear tendency regarding the aggregator module layer, with a slight advantage to NetVLAD or NetVLAD++. Hence, for the following spotting model, we use the Baidu feature encoder with NetVLAD as it provides the best mAP@5.
Figure 6: **Pipeline of our Single-anchored Dense Video Captioning (SDVC) baseline model \(\mathbf{M}\). To generate dense comments with a single timestamp, we propose a two-stage approach. \(\mathbf{M}\) consists of a spotting model followed by a captioning model. Both models use a shared frozen feature extractor \(\mathbf{E}\) to generate a compact per-frame representation of the video. The spotting model uses an aggregator module \(\mathbf{A}\) to combine the frame features into a single clip feature representation that is then passed to the spotting head \(\mathbf{S}\) to generate proposal timestamps \(t^{prop}\). The timestamps are then used to trim clips of size \(\Delta\) that are subsequently processed by the captioning model through \(\mathbf{E}\), \(\mathbf{A}\), and a captioning head \(\mathbf{C}\) to generate the anchored comment.**
Next, we study the influence of the window and NMS size on the quality of the localization. As shown in Table 3, the best performance is obtained with a window of \(15\) seconds and an NMS window of \(10\) or \(30\) seconds. As the performance (mAP@30 and mAP@60) largely decreases for an NMS window of \(10\) seconds, we choose \(15\) and \(30\) seconds for the window and NMS size of the following experiments. Our analysis shows that the commentary spotting performances are significantly lower than action spotting on SoccerNet-v2 [21], as a comment may describe several actions, unlike spotting, which focuses on a single action.
**Captioning results.** We conduct similar ablations on the captioning task. As shown in Table 4, the best results are also achieved using the Baidu feature encoder and NetVLAD pooling. This suggests again that pre-training a video encoder on the spotting task helps generate better commentaries. In Table 5, we also study the effect of the window size on the captioning model. Unlike the commentary spotting task, the best results are obtained with a window of \(45\) seconds, showing that captioning requires more contextual information. For the following experiments, we use \(45\)-second video clips as input to the captioning model.
As a final ablation study, we compare the performance of our captioning model when changing the number of stacked LSTM layers or the teacher forcing ratio. The teacher-forcing ratio is the probability of the LSTM module receiving
\begin{table}
\begin{tabular}{c|c||c|c|c|c} L & TF ratio & B@4 & M & R & C \\ \hline
2 & 0.5 & 2.1 & 21.5 & 17.5 & 5.5 \\
2 & 1 & 6.0 & **23.7** & 23.8 & 17.8 \\ \hline
4 & 0.5 & 4.1 & 22.4 & 21.7 & 8.8 \\
4 & 1 & **6.0** & **23.7** & **24.1** & **18.5** \\ \hline
8 & 0.5 & 1.3 & 18.1 & 15.8 & 3.4 \\
8 & 1 & 2.5 & 20.7 & 21.6 & 7.3 \\ \end{tabular}
\end{table}
Table 6: **Captioning ablation.** We compare several numbers of LSTM layers (L) and teacher forcing (TF) ratio with the Bleu (B), METEOR (M), ROUGE-L (R), and CIDEr (C) metrics.
\begin{table}
\begin{tabular}{c|c||c|c|c} & & \multicolumn{3}{c}{mAP@ (\%)} \\ \cline{3-5} Encoder & Aggregator & 5 & 30 & 60 \\ \hline RN\_PCA & NetVLAD & 6.4 & 39.1 & 38.0 \\ RN\_PCA & NetVLAD++ & 5.3 & 41.3 & 39.4 \\ RN\_PCA & NetRVLAD & 7.0 & 39.5 & 37.8 \\ RN\_PCA & NetRVLAD++ & 5.4 & 41.5 & 39.4 \\ \hline I3D\_PCA & NetVLAD & 4.5 & 33.0 & 33.3 \\ I3D\_PCA & NetVLAD++ & 6.7 & 34.8 & 34.8 \\ I3D\_PCA & NetRVLAD & 4.1 & 32.8 & 32.3 \\ I3D\_PCA & NetRVLAD++ & 5.9 & 34.5 & 34.1 \\ \hline C3D\_PCA & NetVLAD & 4.8 & 38.1 & 37.1 \\ C3D\_PCA & NetVLAD++ & 4.5 & 40.7 & 39.1 \\ C3D\_PCA & NetRVLAD & 6.9 & 39.3 & 38.1 \\ C3D\_PCA & NetRVLAD++ & 3.9 & 39.2 & 38.2 \\ \hline Baidu & NetVLAD & **10.5** & 42.1 & 40.7 \\ Baidu & NetVLAD++ & 6.3 & **44.5** & **41.8** \\ Baidu & NetRVLAD & 6.7 & 39.5 & 38.7 \\ Baidu & NetRVLAD++ & 3.6 & 44.0 & 41.2 \\ \end{tabular}
\end{table}
Table 2: **Spotting results.** We train our spotting model to detect and localize comments and compare different combinations of encoder and pooling modules.
\begin{table}
\begin{tabular}{c|c||c|c|c} WS & NMS & \multicolumn{3}{c}{mAP@ (\%)} \\ (s) & (s) & 5 & 30 & 60 \\ \hline
15 & 10 & **12.5** & 35.3 & 34.9 \\
15 & 30 & 9.3 & **49.4** & **47.0** \\
15 & 60 & 8.5 & 46.2 & 41.2 \\ \hline
30 & 10 & 5.3 & 29.2 & 27.7 \\
30 & 30 & 2.5 & 45.7 & 43.0 \\
30 & 60 & 1.7 & 46.5 & 40.2 \\ \hline
45 & 10 & 6.8 & 27.0 & 26.1 \\
45 & 30 & 5.2 & 47.8 & 41.7 \\
45 & 60 & 2.6 & 45.4 & 39.8 \\ \hline
60 & 10 & 4.9 & 19.0 & 19.5 \\
60 & 30 & 1.9 & 38.0 & 35.2 \\
60 & 60 & 1.1 & 39.5 & 35.6 \\ \end{tabular}
\end{table}
Table 3: **Spotting window size and NMS.** We compare several window and NMS sizes. Small windows achieve better results.
\begin{table}
\begin{tabular}{c|c||c|c|c|c} Encoder & Aggregator & B@4 & M & R & C \\ \hline RN\_PCA & NetVLAD & 4.0 & 21.8 & 21.5 & 10.8 \\ RN\_PCA & NetVLAD++ & 4.4 & 22.3 & 21.4 & 10.9 \\ RN\_PCA & NetRVLAD & 4.2 & 21.9 & 21.4 & 10.6 \\ RN\_PCA & NetRVLAD++ & 4.2 & 21.8 & 21.2 & 11.0 \\ \hline I3D\_PCA & NetVLAD & 3.2 & 21.9 & 20.4 & 7.5 \\ I3D\_PCA & NetVLAD++ & 2.9 & 20.0 & 18.3 & 6.6 \\ I3D\_PCA & NetRVLAD & 3.1 & 21.1 & 20.7 & 8.0 \\ I3D\_PCA & NetRVLAD++ & 3.7 & 21.7 & 20.8 & 9.2 \\ \hline C3D\_PCA & NetVLAD & 3.9 & 21.9 & 21.4 & 10.3 \\ C3D\_PCA & NetVLAD++ & 3.9 & 21.2 & 20.4 & 10.6 \\ C3D\_PCA & NetRVLAD & 4.0 & 21.6 & 21.1 & 11.0 \\ C3D\_PCA & NetRVLAD++ & 3.9 & 21.3 & 20.7 & 11.0 \\ \hline Baidu & NetVLAD & 5.7 & **23.6** & **23.5** & 15.7 \\ Baidu & NetVLAD++ & 5.4 & 23.2 & 23.4 & **17.0** \\ Baidu & NetRVLAD & 5.5 & 23.5 & 23.2 & 15.4 \\ Baidu & NetRVLAD++ & **5.7** & 23.5 & 23.3 & 16.0 \\ \end{tabular}
\end{table}
Table 4: **Captioning results.** We train our captioning model to generate comments and compare different combinations of encoder and pooling modules with the Bleu (B), METEOR (M), ROUGE-L (R), and CIDEr (C) metrics.
the true output sequence rather than its own prediction. For example, a teacher-forcing ratio of \(0.5\) means that half of the time, the LSTM receives the true output sequence, and half of the time, it receives its own prediction. As shown in Table 6, the best performance is achieved with \(4\) LSTM layers and using the teacher forcing technique.
**Single-anchored dense video captioning results.** Finally, we evaluate the performance of our whole pipeline. Particularly, we study the influence of pre-training the aggregator in three ways: (1) no pre-training for the spotting and captioning aggregators, (2) training the spotting model and transferring the weights to the captioning model, and (3) training the captioning model and transferring the weights to the spotting model. We also either freeze or fine-tune the transferred weights on the second task. As can be seen from Table 7, the best performance for our SDVC task is obtained when training the captioning model from scratch, transferring the aggregator weights to the spotting aggregator, and fine-tuning those weights on the spotting task.
**Qualitative Results.** We provide four predictions of our SDVC baseline in Figure 7 and compare them with the ground truth: (a) Our method is able to generate good commentaries for some actions. (b) Our spotting model shows a tendency to generate proposals not close to any commentary, yet our captioning model still describes the ongoing action perfectly. This is also shown in the difference between the METEOR@30 and SODA_c performances in Table 7. However, in a real-world application, such captions would add value rather than be considered a mistake. (c) When generating a caption, our pipeline considers only the short time window around the proposal, hence the generated scores after a goal are almost never accurate. (d) We show a case of hard failure, where our model mistakes a goal celebration for a serious injury situation. These results show that our baseline is already able to generate accurate captions, but that there is room for improvement, especially in gathering temporal context.
## 6 Conclusion
This paper proposes the novel task of single-anchored dense video captioning focusing on generating textual commentaries anchored with single timestamps. To support this task, we present SoccerNet-Caption, a challenging dataset consisting of 37k timestamped commentaries across 715.9 hours of soccer broadcast videos. We benchmarked a first baseline algorithm on this dataset, highlighting the difficulty of temporally anchoring commentaries yet showing the capacity to generate meaningful commentaries.
**Acknowledgement.** This work was partly supported by KAUST OSR through the VCC funding and the SDAIA-KAUST Center of Excellence in Data Science and Artificial Intelligence. A. Cioppa is funded by the F.R.S.-FNRS.
\begin{table}
\begin{tabular}{l||c|c|c||c|c|c|c||c|c|c|c|c} & \multicolumn{3}{c||}{**Spotting (mAP@ (\%))**} & \multicolumn{4}{c||}{**Captioning**} & \multicolumn{5}{c}{**Single-anchored dense video captioning**} \\ \cline{2-13} Aggregator & 5 & 30 & 60 & B@4 & M & R & C & B@4@30 & M@30 & R@30 & C@30 & SODA\_c \\ \hline \(\mathbf{A}^{scratch}\) & **9.24** & 49.40 & **46.97** & 6.37 & **23.85** & 24.04 & 18.70 & 21.07 & 27.94 & 22.06 & 27.02 & 7.72 \\ \hline \multicolumn{13}{l}{_The rows for the frozen and fine-tuned transferred aggregators were garbled beyond recovery during extraction._} \\ \end{tabular}
\end{table}
Table 7: **Single-anchored dense video captioning results.** Comparison of aggregator pre-training strategies (from scratch, frozen transfer, fine-tuned transfer) on the spotting, captioning, and SDVC metrics; only the header and the \(\mathbf{A}^{scratch}\) row could be recovered.
|
2307.02444 | Foundations of Differential Calculus for modules over posets | Generalised persistence module theory is the study of tame functors $M \colon
\mathcal{P} \rightarrow \mathcal{A}$ from an arbitrary poset $\mathcal{P}$, or
more generally an arbitrary small category, to some abelian target category
$\mathcal{A}$. In other words, a persistence module is simply a representation
of the source category in $\mathcal{A}$. Unsurprisingly, it turns out that when
the source category is more general than a linear order, then its
representation type is generally wild. In this paper we develop a new set of
ideas for calculus type analysis of persistence modules. As a first instance we
define the gradient $\nabla[M]$ as a homomorphism between appropriate
Grothendieck groups of isomorphism classes of modules. We then examine the
implications of a vanishing gradient and find a sufficient condition on a
module that guarantees vanishing of its gradient. We introduce the notions of
left and right divergence via Kan extensions. We define two bilinear pairings
on modules and study their properties, specifically with respect to adjointness
relations between the gradient and the left and right divergence morphisms.
With gradient and divergence in place we define the left and right Laplacians
$\Delta^0[M]$ and $\Delta_0[M]$ of a module $M$. Finally, we demonstrate how
our calculus framework can enhance the analysis of two well-known persistence
modules: the so called commutative ladders, and filtered hierarchical
clustering modules arising from random point processes. | Jacek Brodzki, Ran Levi, Henri Riihimäki | 2023-07-05T17:14:57Z | http://arxiv.org/abs/2307.02444v3 | # Foundations of differential calculus for modules over posets
###### Abstract.
Persistence modules were introduced in the context of topological data analysis. Generalised persistence module theory is the study of functors from an arbitrary poset, or more generally an arbitrary small category, to some abelian target category. In other words, a persistence module is simply a representation of the source category in the target abelian category. Unsurprisingly, it turns out that when the source category is more general than a linear order, then its representation type is generally wild. In this paper we introduce a new set of ideas for local analysis of persistence modules by methods borrowed from spectral graph theory and multivariable calculus.
The term _persistence module_ emerged as a concept in early work of Carlsson and Zomorodian [8], and motivated much of the theoretical development in persistent homology and topological data analysis. Generalised persistence modules were introduced by Bubenik, de Silva and Scott in [7], where a persistence module is defined to be any functor \(F\) from a preordered set \(\mathcal{P}\) to an arbitrary category \(\mathcal{D}\). In most setups the preordered set \(\mathcal{P}\) is assumed to be a poset (i.e. an antisymmetric preordered set) and the target category \(\mathcal{D}\) is taken to be abelian - typically the category \(\operatorname{Vect}_{k}\) of finite dimensional vector spaces over a field \(k\). Working with arbitrary posets, particularly those that carry a natural topology like \((\mathbb{R},\leq)\), is in practice too general, and from a computational point of view, impossible. Chazal et al. [11, Section 3.8] introduced the concept of _tame persistence modules_ for functors \(M\colon(\mathbb{R},\leq)\to\operatorname{Vect}_{k}\). The concept was generalised by Scolamiero et al. [29] to functors on \((\mathbb{Q}^{n},\leq)\), and generalised further to arbitrary posets by Miller [26], who defines a persistence module \(M\colon\mathcal{Q}\to\operatorname{Vect}_{k}\) to be tame if there exists a finite poset \(\mathcal{P}\), a poset map \(\alpha\colon\mathcal{Q}\to\mathcal{P}\), and a persistence module \(N\colon\mathcal{P}\to\operatorname{Vect}_{k}\), such that \(M=N\circ\alpha\). Specialising Miller's definition to modules on \((\mathbb{R},\leq)\), such a module \(M\) has the property that there is a finite set of real numbers \(t_{1}<t_{2}<\cdots<t_{n}\), depending on \(M\), such that for any \(t_{i}\leq a\leq b\leq t_{i+1}\), the homomorphism \(M(a\leq b)\) is an isomorphism.
Functors from finite categories to the category of vector spaces are referred to in the literature as _representations_ of the categories in question. The functor category from a finite category \(\mathcal{C}\) to an abelian category is itself an abelian category. It is in fact isomorphic to the module category, in the ordinary sense, over the category algebra \(k\mathcal{C}\)[31]. Thus persistence modules over finite posets can be thought of as ordinary modules over the poset algebra \(k\mathcal{P}\) (see Section 1.1). For category algebras over finite posets one has the Drozd trichotomy theorem [15], which states that any finite dimensional algebra is either of _finite representation type_, i.e. has finitely many isomorphism classes of indecomposable modules, or otherwise of _infinite representation type_, in which case it is either of _tame representation type_ or of _wild representation type_. Notice that the term "tame representation type" refers to the algebra and not any module over it. This is not to be confused with the notion of a "tame persistence module", as defined above.
Tame persistence modules over \((\mathbb{R},\leq)\) can be factored, by definition, through a finite linearly ordered poset, and poset algebras over such posets are of finite representation type. Hence tame modules over \((\mathbb{R},\leq)\) are classifiable by their _persistence diagrams_ (or equivalently persistence barcodes, or persistence landscapes) up to isomorphism. Traditional algebraic tools employed in the study of generalised persistence modules, such as decomposition into indecomposable modules, where possible, free presentations, and resolutions (see for example [1, 5, 6, 14, 21]) all share the same difficulty. Arbitrary finite posets are typically of infinite (tame or wild) representation type, which makes classification practically impossible. Thus, from the point
of view of topological data analysis, finding alternative methods of extracting computable information out of persistence modules is desirable; some notable examples of this line of work are [16, 26, 29].
In this article we introduce a new approach to the study of persistence modules. Our starting point is essentially representation theoretic. Namely we consider isomorphism classes of modules over the category algebra of a finite poset. However, instead of attempting to understand persistence modules globally, we propose a _calculus of persistence modules_, that is, a methodology that enables one to extract local information. Indeed, in exploring properties of a nice real valued function of several variables one typically employs standard techniques of multivariable calculus, which allow studying the function locally; the notions of gradient, divergence and Laplacian come to mind in this context. These ideas have a discrete counterpart, namely a multivariable calculus for weighted directed graphs. Our treatment of persistence module theory is a powerful generalisation of the ideas of discrete calculus on graphs. In particular we shall define the notion of a gradient for persistence modules, as well as concepts corresponding to the divergence and the Laplacian.
By contrast to standard representation theoretic analysis of persistence modules, where the ground poset remains fixed, as does the representation type of its poset algebra, the construction of the gradient and other operations, analogous to classical multivariable calculus, for persistence modules allows one to consider modules locally, namely on sub-posets of the original poset, informed by the behaviour of the gradient. For instance, in [16] the authors study modules over _commutative ladders_. They show that a commutative ladder of any type is representation-finite if and only if its length is at most \(4\). In Section 7.1 we consider two types of commutative ladders of any length and show that the gradient of any module over such ladders can be written as a sum of modules over posets of finite representation type. Another example is motivated by [2], where the authors study certain module categories over finite \(2\)-dimensional grids which they refer to as _filtered hierarchical clustering_. An important corollary of their main results [2, Corollary 1.6] states that \(m\times n\) grid posets are of finite or tame representation type only for a very small number of cases, and are of wild representation type in all other cases. In Section 7.2 we demonstrate how our approach easily gives computable information about modules over grid posets of any size.
To state our main theorems some preparation is required. We provide a brief description here, and more details in Section 1. _Calculus on weighted directed graphs_ is a discrete calculus for functions whose domain is the vertex set of a finite graph with weighted directed edges [24]. In this context the gradient is an operation which takes functions on the vertex set of a graph to functions on its edge set. The _line digraph_ of a directed graph \(\mathcal{G}\) is a directed graph \(\widehat{\mathcal{G}}\), whose vertices are the edges of \(\mathcal{G}\). Thus the gradient can be thought of as an operator that takes functions on the vertices of \(\mathcal{G}\) to functions on the vertices of \(\widehat{\mathcal{G}}\). We take a similar point of view. Our gradient will take a persistence module on a finite poset \(\mathcal{P}\) to a difference of persistence modules on a poset \(\widehat{\mathcal{P}}\) associated to \(\mathcal{P}\), where \(\widehat{\mathcal{P}}\) is obtained from \(\mathcal{P}\) using the line digraph of the Hasse diagram of \(\mathcal{P}\).
The graph theoretic gradient is defined on an edge as the difference between the value at its target vertex and the value at its source vertex [24]. To bring this idea to the universe of persistence modules we therefore need an additive structure with additive inverses. Since we are only interested in persistence modules up to isomorphism, it makes sense to consider the Grothendieck ring \(\mathsf{Gr}(k\mathcal{P})\) of isomorphism classes of modules over the poset algebra \(k\mathcal{P}\), where the sum and product operations are given by direct sum and tensor product (over the field \(k\)), respectively. This allows us to define the gradient as the difference \(\nabla\stackrel{{\mathrm{def}}}{{=}}\phi^{*}-\beta^{*}\) of two natural homomorphisms \(\phi^{*},\beta^{*}\colon\mathsf{Gr}(k\mathcal{P})\to\mathsf{Gr}(k\widehat{\mathcal{P}})\) (the _front_ and _back_ morphisms), which captures the variation of a module along each indecomposable relation in \(\mathcal{P}\), i.e. each edge in the Hasse diagram of \(\mathcal{P}\). The indecomposable relations in \(\mathcal{P}\) can be thought of as discrete analogs
of infinitesimally small moves in a metric space. Our categorical setup fits directly with the classical definition (See Example 2.7). With this setup we can now state our first result.
**Theorem A** (Theorem 4.2).: _Let \(\mathcal{P}\) be a finite poset. Then the gradient operator \(\nabla\colon\mathsf{Gr}(k\mathcal{P})\to\mathsf{Gr}(k\widehat{\mathcal{P}})\) satisfies the following properties:_
1. \(\nabla\) _is a well defined group homomorphism._
2. _If_ \([M]\in\mathsf{Gr}(k\mathcal{P})\) _is locally constant then_ \(\nabla[M]=0\)_._
3. \(\nabla\) _satisfies a Leibniz type rule, i.e. for all_ \([M],[N]\in\mathsf{Gr}(k\mathcal{P})\)_, the identity_ \[\nabla([M]\cdot[N])=\nabla[M]\cdot\phi^{*}[N]+\beta^{*}[M]\cdot\nabla[N]\] _holds in_ \(\mathsf{Gr}(k\widehat{\mathcal{P}})\)_._
_Furthermore, \(\nabla\) is natural with respect to restrictions to sub-posets, namely, if \(\iota\colon\mathcal{Q}\xrightarrow{\subset}\mathcal{P}\) is a sub-poset, then \(\iota^{*}\circ\nabla_{\mathcal{P}}=\nabla_{\mathcal{Q}}\circ\iota^{*}\)._
Here by a locally constant module we mean a functor \(M\colon\mathcal{P}\to\mathrm{Vect}_{k}\) that takes every morphism in \(\mathcal{P}\) to an isomorphism of vector spaces.
By analogy to ordinary calculus, an obvious question is whether a vanishing gradient of a module \(M\) implies that it is locally constant. The answer turns out to be not quite so straightforward. We say that a digraph \(\mathcal{G}\) is a _directed tree_ if between any two distinct vertices \(a\) and \(b\) in \(\mathcal{G}\) there is at most one directed path. Recall that for any digraph \(\mathcal{G}=(V,E)\), a _maximal tree in \(\mathcal{G}\)_ is a subgraph \(\mathcal{T}\subseteq\mathcal{G}\) on the same vertex set \(V\) and with edges \(E^{\prime}\subseteq E\) such that \(\mathcal{T}=(V,E^{\prime})\) is a directed tree and such that \(E^{\prime}\) is maximal with respect to this property, i.e. adding extra edges from \(E\) to \(E^{\prime}\) will produce a subgraph that is not a directed tree. We say that a poset \(\mathcal{P}\) is _line connected_ if the line digraph of its Hasse diagram is connected.
**Theorem B** (Theorem 4.5).: _Let \(\mathcal{P}\) be a finite line connected poset, and let \(\mathcal{T}\) be a line connected maximal tree in its Hasse diagram \(\mathcal{H}_{\mathcal{P}}\). Let \(\mathcal{P}_{\mathcal{T}}\subseteq\mathcal{P}\) denote the sub-poset generated by \(\mathcal{T}\). Let \(M\in k\mathcal{P}\operatorname{\mathsf{-mod}}\) be a module, and let \(M_{\mathcal{T}}\) denote the restriction of \(M\) to \(\mathcal{P}_{\mathcal{T}}\). Assume that \(\nabla[M_{\mathcal{T}}]=0\) in \(\mathsf{Gr}(k\mathcal{P}_{\mathcal{T}})\). Then the following statements hold._
1. _For any objects_ \(u,v\in\mathrm{Obj}(\mathcal{P})=\mathrm{Obj}(\mathcal{P}_{\mathcal{T}})\)_, there is an isomorphism_ \[\alpha_{u,v}\colon M(u)\to M(v),\] _such that_ \(\alpha_{u,u}=1_{M(u)}\) _and_ \(\alpha_{v,w}\alpha_{u,v}=\alpha_{u,w}\)_._
2. _For every pair of indecomposable morphisms_ \(u\leq w\) _and_ \(s\leq t\) _in_ \(\mathcal{P}_{\mathcal{T}}\)_,_ \[\alpha_{w,t}\circ M(u\leq w)=M(s\leq t)\circ\alpha_{u,s}.\]
3. \(M_{\mathcal{T}}\) _is locally constant if and only if_ \(M(u\leq v)\) _is an isomorphism for some indecomposable relation_ \(u\leq v\) _in_ \(\mathcal{P}_{\mathcal{T}}\)_._
_Furthermore, if \(M\in k\mathcal{P}\operatorname{\mathsf{-mod}}\) is a module such that for every indecomposable relation \(u\leq v\) in \(\mathcal{P}_{\mathcal{T}}\) there exists an isomorphism \(\alpha_{u,v}\colon M(u)\to M(v)\), and the isomorphisms satisfy Conditions (1) and (2), then \(\nabla[M_{\mathcal{T}}]=0\)._
If \(\mathcal{P}\) is not generated by a tree, it is rather easy to find examples of modules on \(\widehat{\mathcal{P}}\) that are not the gradient of any \(k\mathcal{P}\)-module. Thus such modules are, in the appropriate sense, not integrable. In particular, in this situation it is possible to construct \(k\mathcal{P}\)-modules with a vanishing gradient that do not satisfy Conditions (1) and (2) of Theorem B. This is the reason why we restrict modules to a maximal sub-tree. Of course similar statements can be made for any sub-tree of \(\mathcal{P}\) with the obvious modification to the conclusions. This stands in sharp contrast to ordinary calculus, where any differentiable function whose gradient vanishes on a domain is constant in the interior of that domain, and any real valued continuous function on a reasonable domain is integrable there.
Theorem B thus provides a sharp consequence of the vanishing of the gradient of a module \(M\in k\mathcal{P}\text{-}\mathsf{mod}\) on a maximal tree \(\mathcal{T}\). It implies that for such a module all point modules (the values on objects) are abstractly isomorphic, and all morphisms induced by applying \(M\) to indecomposable morphisms in \(\mathcal{P}_{\mathcal{T}}\) can be identified through a collection of non-canonical isomorphisms between point modules.
We next examine the implication for a pair of \(k\mathcal{P}\)-modules of having isomorphic gradients. A typical element in the Grothendieck ring \(\mathsf{Gr}(k\mathcal{P})\), which is ordinarily referred to as a _virtual module_, is a difference of two equivalence classes of genuine modules. For \(M\in k\mathcal{P}\text{-}\mathsf{mod}\), define the _rank function_ \(\operatorname{rk}(M)\colon\operatorname{Mor}(\mathcal{P})\to\mathbb{N}\) to be the function that takes a relation \(x\leq y\) to the rank of the homomorphism \(M(x\leq y)\). The rank function clearly extends by additivity to \(\mathsf{Gr}(k\mathcal{P})\), since it depends only on the isomorphism type of a module. Notice also that the rank function records the dimensions of the point modules as well, since these appear as the ranks of \(M\) applied to identity morphisms.
It is well known that the rank invariant is a complete invariant for modules over \((\mathbb{R},\leq)\) or any other linear order, but this fails for more general posets [9]. However the rank invariant is still an important invariant of modules. The next two theorems give new conditions on some of the local behaviour of the rank invariant.
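Computationally, the rank function is straightforward to evaluate once a module is presented by matrices on the edges of the Hasse diagram. The following sketch (Python with numpy; the commutative-square module is our own toy data) computes \(\operatorname{rk}(M)(u\leq v)\) as the rank of the composite map along a monotone path, a choice which commutativity of \(M\) renders immaterial.

```python
import numpy as np

# A module M on the square poset 00 < 01, 00 < 10, 01 < 11, 10 < 11,
# given by one matrix per Hasse edge.  The square commutes:
# M(01<=11) @ M(00<=01) == M(10<=11) @ M(00<=10).
maps = {("00", "01"): np.array([[1.0], [0.0]]),   # M(00)=k   -> M(01)=k^2
        ("00", "10"): np.array([[1.0]]),          # M(00)=k   -> M(10)=k
        ("01", "11"): np.eye(2),                  # M(01)=k^2 -> M(11)=k^2
        ("10", "11"): np.array([[1.0], [0.0]])}   # M(10)=k   -> M(11)=k^2

def rank_along(path):
    """rk(M)(path[0] <= path[-1]): rank of the composite along a Hasse path."""
    comp = maps[(path[0], path[1])]
    for u, v in zip(path[1:], path[2:]):
        comp = maps[(u, v)] @ comp
    return np.linalg.matrix_rank(comp)

# Both monotone paths from 00 to 11 give the same rank, as they must.
print(rank_along(["00", "01", "11"]), rank_along(["00", "10", "11"]))  # 1 1
```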
**Theorem C** (Theorem 4.13).: _Let \(\mathcal{P}\) be a finite poset. Let \([X]=[M]-[N]\in\mathsf{Gr}(k\mathcal{P})\) be an element with \(M,N\in k\mathcal{P}\text{-}\mathsf{mod}\)._
1. _Assume that_ \(\nabla[X]=0\)_. Then_ \(\operatorname{rk}[X](u_{0}<v_{0})=\operatorname{rk}[X](u_{1}<v_{1})\) _for any pair of comparable objects_ \((u_{0},u_{1})<(v_{0},v_{1})\) _in_ \(\widehat{\mathcal{P}}\)_._
_Assume in addition that \(\mathcal{P}\) is line connected, let \(\mathcal{T}\) be a line connected maximal tree for \(\mathcal{H}_{\mathcal{P}}\), and let \(\mathcal{P}_{\mathcal{T}}\subseteq\mathcal{P}\) be the sub-poset generated by \(\mathcal{T}\). If \([X]\) has a vanishing gradient on \(\mathcal{T}\), then \(\operatorname{rk}[X]\) has the following properties:_
1. _It is constant on all identity morphisms in_ \(\mathcal{P}\)_._
2. _It is constant on all indecomposable relations in_ \(\mathcal{P}_{\mathcal{T}}\)_, namely for any pair of indecomposable relations_ \(u_{0}<u_{1}\)_, and_ \(v_{0}<v_{1}\) _in_ \(\mathcal{P}_{\mathcal{T}}\)_, one has_ \(\operatorname{rk}[X](u_{0}<u_{1})=\operatorname{rk}[X](v_{0}<v_{1})\)_._
Theorem C falls short of stating that isomorphic gradients imply equality of rank invariants. Indeed Example 4.14 shows that this is not the case. A natural question is whether a converse implication is true, namely whether equal rank invariants imply that the modules in question have isomorphic gradients. This question is answered in the negative in Example 4.19.
In discrete calculus for finite weighted digraphs one considers the real valued functions on vertices and on edges as elements of finite dimensional real vector spaces, and as such one has the ordinary inner products defined on the spaces of vertex and edge functions, \(\langle-,-\rangle_{V}\) and \(\langle-,-\rangle_{E}\) respectively. This pairing allows one to define divergence and Laplacian for digraphs. The adjoint operator \(\nabla^{*}\), also called the _divergence operator_, is then defined by the requirement that the relation
\[\langle\nabla^{*}(f),g\rangle_{V}=\langle f,\nabla(g)\rangle_{E}\]
holds.
The theory we propose here can be applied to a certain setup in standard calculus on graphs (See Example 2.9). For modules \(M,N\in k\mathcal{P}\text{-}\mathsf{mod}\) define a pairing
\[\langle[M],[N]\rangle_{\mathcal{P}}\stackrel{{\mathrm{def}}}{{=}} \dim_{k}(\operatorname{Hom}_{k\mathcal{P}}(M,N)).\]
Extend the definition to a pairing on \(\mathsf{Gr}(k\mathcal{P})\) and similarly to a pairing \(\langle-,-\rangle_{\widehat{\mathcal{P}}}\) on \(\mathsf{Gr}(k\widehat{\mathcal{P}})\). A related pairing, that in a sense appears more natural, is the _Euler pairing_. For two modules \(M,N\in k\mathcal{P}\text{-}\mathsf{mod}\), consider the graded vector space \(\operatorname{Ext}^{*}_{k\mathcal{P}}(M,N)\). Since \(\mathcal{P}\) is assumed to be a finite poset and modules in \(k\mathcal{P}\text{-}\mathsf{mod}\) are assumed finitely generated, \(\operatorname{Ext}^{i}_{k\mathcal{P}}(M,N)\) is a finite dimensional vector space for each \(i\), and it vanishes for \(i\) sufficiently large. Hence it makes sense to consider its Euler characteristic. Thus we define an Euler pairing
\[\chi_{\mathcal{P}}([M],[N])=\chi(\operatorname{Ext}_{k\mathcal{P}}^{*}(M,N)),\]
and extend to the corresponding Grothendieck group by additivity. If the Hasse diagram of \(\mathcal{P}\) is a directed tree, thus in particular an acyclic quiver, then \(\operatorname{Ext}_{k\mathcal{P}}^{i}(M,N)\) vanishes for \(i>1\)[13, Theorem 2.3.2], and the Euler pairing can be computed as an ordinary inner product of dimension vectors [13, Proposition 2.5.2]. The following proposition gives an easy relation between the two types of pairings.
**Proposition D** (Proposition 5.3).: _Let \(\mathcal{P}\) be a finite poset, and let \(M,N\in k\mathcal{P}\text{-}\mathsf{mod}\) be modules. Let \(\langle-,-\rangle_{\mathcal{P}}\) and \(\chi_{\mathcal{P}}(-,-)\) be the Hom pairing and the Euler pairing respectively. Let_
\[0\to P_{n}\to\dots\to P_{0}\to M\to 0,\quad\text{and}\quad 0\to N\to I_{0}\to\dots\to I_{n}\to 0\]
_be a projective resolution for \(M\) and an injective resolution for \(N\). Then_
\[\chi_{\mathcal{P}}([M],[N])=\sum_{i=0}^{n}(-1)^{i}\langle[P_{i}],[N]\rangle_{ \mathcal{P}}=\sum_{i=0}^{n}(-1)^{i}\chi_{\mathcal{P}}([P_{i}],[N]),\]
_and_
\[\chi_{\mathcal{P}}([M],[N])=\sum_{j=0}^{n}(-1)^{j}\langle[M],[I_{j}]\rangle_{\mathcal{P}}=\sum_{j=0}^{n}(-1)^{j}\chi_{\mathcal{P}}([M],[I_{j}]).\]

_Furthermore, write \(P_{i}\cong\bigoplus_{v}\epsilon_{v}^{i}F_{v}\) and \(I_{j}\cong\bigoplus_{u}\delta_{u}^{j}G_{u}\), with \(\epsilon_{v}^{i},\delta_{u}^{j}\in\mathbb{N}\) and \(v,u\in\mathcal{P}\), and where \(F_{v}\) and \(G_{u}\) are the indecomposable projective determined by \(v\) and the indecomposable injective determined by \(u\). Then_

\[\chi_{\mathcal{P}}([M],[N])=\sum_{v\in\mathcal{P}}\sum_{i=0}^{n}(-1)^{i}\epsilon_{v}^{i}\dim_{k}N(v)=\sum_{u\in\mathcal{P}}\sum_{j=0}^{n}(-1)^{j}\delta_{u}^{j}\dim_{k}M(u).\]
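As a small illustration (a standard computation, included here for concreteness), let \(\mathcal{P}=\{x<y\}\) and let \(S_{x}\) and \(S_{y}\) denote the simple modules concentrated at \(x\) and at \(y\). Then \(S_{y}=F_{y}\) is projective, while \(S_{x}\) admits the projective resolution \(0\to F_{y}\to F_{x}\to S_{x}\to 0\). Proposition D then gives
\[\chi_{\mathcal{P}}([S_{x}],[S_{y}])=\langle[F_{x}],[S_{y}]\rangle_{\mathcal{P}}-\langle[F_{y}],[S_{y}]\rangle_{\mathcal{P}}=\dim_{k}S_{y}(x)-\dim_{k}S_{y}(y)=0-1=-1,\]
in agreement with \(\operatorname{Hom}_{k\mathcal{P}}(S_{x},S_{y})=0\) and \(\operatorname{Ext}^{1}_{k\mathcal{P}}(S_{x},S_{y})\cong k\).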
We next offer our analogs of the divergence and the Laplacian in our context. Considering the (non-symmetric) Hom pairing as an analog of an inner product, the left and right Kan extensions offer themselves as a natural way of constructing a left adjoint \(\nabla^{*}\) and a right adjoint \(\nabla_{*}\) of the gradient. Thus we define _left and right divergence_. Example 2.9 demonstrates that with the right categorical setup for ordinary weighted digraphs, the left and right divergence operators coincide with each other and with the ordinary definition of the divergence. In general we have the following.
\[\langle\nabla[X],[Y]\rangle_{\widehat{\mathcal{P}}}=\langle[X],\nabla_{*}[Y] \rangle_{\mathcal{P}}\quad\text{and}\quad\langle\nabla^{*}[X],[Y]\rangle_{ \mathcal{P}}=\langle[X],\nabla[Y]\rangle_{\widehat{\mathcal{P}}}.\]
In particular
\[\langle\nabla^{*}\nabla[X],[X]\rangle_{\mathcal{P}}=\langle\nabla[X],\nabla[X ]\rangle_{\widehat{\mathcal{P}}}=\langle[X],\nabla_{*}\nabla[X]\rangle_{ \mathcal{P}}.\]
While the Euler pairing offers some better general properties, the adjointness relations above do not hold for it in general. We shall elaborate on this point in Section 3, where we show in fact that our basic constructions apply in a much more general context than modules categories over finite posets.
Composing the gradient with the left and the right divergence, we obtain left and right Laplacians \(\Delta^{0}\) and \(\Delta_{0}\), respectively, for persistence modules. This allows us to define left and right harmonic modules, namely modules whose corresponding Laplacians vanish. Laplacians, their higher dimensional generalisations, and a possible setup for Hodge theory for persistence modules will be studied in future work.
The paper is organised as follows. Section 1 contains all the technical background material we use throughout the paper. In Section 2 we prepare the general setup for the construction of the gradient and the divergence in the context of module categories. In Section 3 we study some
properties of the Hom pairing and the Euler pairing in the context of modules over category algebras. Section 4 is dedicated to the study of the gradient of virtual modules over finite posets and the proofs of Theorems A, B and C. In Section 5 we specialise the Hom and Euler pairings to modules over posets and prove Proposition D. Section 6 is dedicated to the (left and right) divergence, the corresponding Laplacians, and adjointness relations with the gradient. Finally in Section 7 we present applications to modules over commutative ladder posets, and to filtered hierarchical clustering modules over commutative grid posets.
The authors are grateful to E. Meir for many helpful conversations on modules over category algebras and for finding an error in an early version of this paper.
## 1. Preliminaries
In this section we record the definitions, notation and all preliminary material that will be used throughout the paper.
### Persistence Modules
Let \(\mathcal{C}\) be the small category associated to a poset. The modern theory of persistence modules studies the structure and invariants of functors from \(\mathcal{C}\) to some target category, often the category \(\operatorname{Vect}_{k}\) of vector spaces over a field \(k\). For the purpose of this preliminary discussion the target category may be any abelian category \(\mathcal{A}\). A category \(\mathcal{C}\) is said to be _finite_ if the set of all its morphisms forms a finite set; this implies that the object set is likewise finite.
If \(\mathcal{A}\) is an abelian category, then the functor category \(\mathcal{A}^{\mathcal{C}}\), whose objects are functors from \(\mathcal{C}\) to \(\mathcal{A}\) and whose morphisms are natural transformations, is also an abelian category. If \(\Phi\colon\mathcal{C}\to\mathcal{D}\) is a functor, then pre-composition with \(\Phi\) induces a functor
\[\Phi^{*}\colon\mathcal{A}^{\mathcal{D}}\to\mathcal{A}^{\mathcal{C}}.\]
This functor, which is sometimes referred to as _restriction along \(\Phi\)_, will be used in our definition of the gradient in Section 4. The restriction \(\Phi^{*}\) generally has left and right adjoints given by the left and right Kan extensions, respectively (See Section 1.4). These will be used in Section 6. A good reference for the general theory of functor categories is [18].
**Definition 1.1**.: _Let \(\mathcal{P}\) be a finite poset, let \(k\) be a field, and let \(\operatorname{Vect}_{k}\) be the category of finite dimensional vector spaces over \(k\). A persistence module on \(\mathcal{P}\) is a functor \(M\colon\mathcal{P}\to\operatorname{Vect}_{k}\)._
Let \(\mathcal{C}\) be a small category. A functor \(M\colon\mathcal{C}\to\operatorname{Vect}_{k}\) may be thought of as a representation of the category \(\mathcal{C}\) over \(k\). This is a particularly useful approach when the category \(\mathcal{C}\) is finite.
**Definition 1.2**.: _Let \(k\) be a field and let \(\mathcal{C}\) be a finite category. The category algebra \(k\mathcal{C}\) is the unital algebra generated as a \(k\)-vector space by all morphisms \(x\to y\) in \(\mathcal{C}\) (including identities). Two morphisms multiply by composition, and non-composable morphisms multiply to 0 [27]. The unit is the \(k\)-algebra map \(\eta\colon k\to k\mathcal{C}\) that sends \(1\in k\) to the element \(\mathbf{1}\in k\mathcal{C}\) given by the sum over all objects \(x\) in \(\mathcal{C}\) of the identity morphisms \(1_{x}\stackrel{{\mathrm{def}}}{{=}}x\to x\)._
The following theorem due to Mitchell allows us to alternate between functor categories and categories of modules, when the category in question is finite.
**Theorem 1.3** ([27]).: _Let \(\mathcal{C}\) be a finite category and let \(k\) be a field. The category \(k\mathcal{C}\)-\(\mathsf{mod}\) of modules over the category algebra \(k\mathcal{C}\) and \(k\mathcal{C}\)-linear homomorphisms is equivalent to the category of functors \(M\colon\mathcal{C}\to\operatorname{Vect}_{k}\) and natural transformations between them._
The equivalence in Mitchell's theorem is given as follows. If \(M\colon\mathcal{C}\to\operatorname{Vect}_{k}\) is a functor, let
\[\mathbf{M}\stackrel{{\mathrm{def}}}{{=}}\bigoplus_{x\in\operatorname {Obj}(\mathcal{C})}M(x).\]
The (left) action of \(k\mathcal{C}\) on \(\mathbf{M}\) is determined by
\[\varphi\cdot\mathbf{a}=M(\varphi)(a_{y}),\quad\text{where}\quad\mathbf{a}=\sum_{ x\in\mathcal{C}}a_{x}\in\mathbf{M},\quad a_{x}\in M(x).\]
Conversely, if \(\mathbf{N}\) is a left \(k\mathcal{C}\)-module, let \(N\colon\mathcal{C}\to\operatorname{Vect}_{k}\) be the functor that takes an object \(x\in\mathcal{C}\) to \(N(x)\stackrel{{\text{def}}}{{=}}1_{x}\cdot\mathbf{N}\) and a morphism \(\varphi\colon y\to z\) to the homomorphism \(N(\varphi)\) that maps \(1_{y}\cdot\mathbf{a}\in 1_{y}\cdot\mathbf{N}\) to \(\varphi\cdot\mathbf{a}\in 1_{z}\cdot\mathbf{N}\).
Any poset \(\mathcal{P}\) may be considered as a small category, and thus admits a category algebra \(k\mathcal{P}\), which we will refer to as the _poset algebra_. If \(\mathcal{P}\) is a finite poset, then Mitchell's theorem gives an alternative way of thinking of persistence modules, namely as ordinary modules over the poset algebra \(k\mathcal{P}\). Both approaches are useful.
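To make Definition 1.2 concrete in the poset case, the following sketch (plain Python; the chain \(x<y<z\) is our own toy example) encodes the basis of the poset algebra \(k\mathcal{P}\) by the relations \(u\leq v\) and implements their multiplication.

```python
from itertools import product

objects = ["x", "y", "z"]
relations = [(u, v) for u, v in product(objects, repeat=2)
             if objects.index(u) <= objects.index(v)]     # all relations u <= v

def mult(rel1, rel2):
    """(v <= w) * (u <= v) = (u <= w); non-composable pairs multiply to 0."""
    (v1, w1), (u2, v2) = rel1, rel2
    return (u2, w1) if v1 == v2 else None                  # None stands for 0

# The unit of kP is the sum of the identity relations 1_u = (u <= u):
# multiplying a basis relation by that sum returns the relation itself,
# since exactly one identity composes with it and the rest give 0.
for rel in relations:
    assert [mult(rel, (u, u)) for u in objects].count(rel) == 1
```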
The standard modern treatment of one parameter persistence studies modules over the poset \((\mathbb{R},\leq)\), i.e. the poset given by the real numbers with their natural order. Higher dimensional analogs consider modules where the parametrising poset is \(\mathbb{R}^{n}\). These posets are not discrete and certainly not finite. However, for all practical purposes persistence modules are studied in a _tame_ setup. This concept was introduced by Chazal et al. [11, Sections 3.8-3.9]. We will use tameness as defined by Miller in [26, Def. 2.12] for general posets. A persistence module \(M\) is said to be tame if \(M(p)\) is finite dimensional for every object \(p\in\mathcal{P}\) and if, roughly speaking, \(\mathcal{P}\) admits a finite partition into sub-posets on which \(M\) is constant.
More precisely, if \(M\) is a tame persistence module on an arbitrary poset \(\mathcal{P}\) (possibly infinite and with nontrivial topology) Miller defines an _encoding_ of \(M\) to be a module \(N\in k\mathcal{Q}\operatorname{\mathsf{-mod}}\) for some poset \(\mathcal{Q}\), together with a functor \(\pi\colon\mathcal{P}\to\mathcal{Q}\), such that \(M=\pi^{*}(N)\); see Figure 1. The encoding is said to be _finite_, if the poset \(\mathcal{Q}\) is finite and the values of \(N\) on its objects are finite dimensional [26, Def. 4.1]. Miller also gives a necessary and sufficient condition for a persistence module to admit a finite encoding [26, Thm 4.22]. This essentially means that much of the information encoded in the module can be read off from a finite encoding. This point of view fits perfectly with our setup, in which posets will always be assumed to be finite.
The following general terminology is rather standard and will become useful in our analysis.
**Definition 1.4**.: _Let \(\mathcal{C}\) be a small category. We say that a module \(M\in k\mathcal{C}\operatorname{\mathsf{-mod}}\) is_
* locally constant _if_ \(k\mathcal{C}\) _acts on_ \(M\) _by isomorphisms._
* virtually trivial _if any non-scalar element in_ \(k\mathcal{C}\) _acts on_ \(M\) _trivially._
_Equivalently, if modules are considered as functors \(M\colon\mathcal{C}\to\operatorname{Vect}_{k}\), then \(M\) is locally constant if it takes any morphism in \(\mathcal{C}\) to an isomorphism, and virtually trivial if any non-identity morphism in \(\mathcal{C}\) induces the zero homomorphism._
Figure 1. Tame persistence module \(M\) on \((\mathbb{R}^{2},\leq)\) and its encoding into a finite poset \(\mathcal{Q}\). The shaded rectangles correspond to objects of \(\mathcal{Q}\) and represent subsets of \(\mathbb{R}^{2}\) on which all the morphisms of \(M\) are isomorphisms, i.e. \(M\) is constant on those subsets.

**Lemma 1.5**.: _Let \(\mathcal{C}\) be any small category and let \(M,N\in k\mathcal{C}\operatorname{\mathsf{-mod}}\) be virtually trivial modules. Let \(\operatorname{Obj}_{\mathcal{C}}\) denote the discrete category (only identity morphisms) with object set \(\operatorname{Obj}(\mathcal{C})\), and let \(j_{\mathcal{C}}\colon\operatorname{Obj}_{\mathcal{C}}\to\mathcal{C}\) denote the inclusion. Then there is a natural isomorphism of groups_
\[J\colon\operatorname{Hom}_{k\mathcal{C}\operatorname{-mod}}(M,N)\to \operatorname{Hom}_{k\operatorname{Obj}_{\mathcal{C}}}(j_{\mathcal{C}}^{*}(M), j_{\mathcal{C}}^{*}(N)),\]
_where on both sides \(\operatorname{Hom}\) denotes the set of natural transformations between the respective modules, and \(j_{\mathcal{C}}^{*}\) denotes pre-composition with \(j_{\mathcal{C}}\)._
Proof.: Since \(M\) and \(N\) are virtually trivial, a natural transformation from \(M\) to \(N\) amounts exactly to a homomorphism of vector spaces \(M(x)\to N(x)\) for each \(x\in\operatorname{Obj}(\mathcal{C})\). Thus \(J\) is an isomorphism of sets. Since the functor categories are abelian and \(j_{\mathcal{C}}^{*}\) is additive, the lemma follows.
**Corollary 1.6**.: _Let \(\mathcal{C}\) be any small category and let \(M,N\in k\mathcal{C}\operatorname{-mod}\) be virtually trivial modules. Then \(M\cong N\) if and only if \(M(x)\cong N(x)\) for each object \(x\in\operatorname{Obj}(\mathcal{C})\)._
### Line digraphs
A central idea in our work is to define operators on persistence modules whose output also lies in persistence modules but possibly over a different parameter poset. This is where the classical graph theoretic construction of the line digraph becomes useful.
**Definition 1.7**.: _A directed graph, or a digraph, \(\mathcal{G}\) is given by a vertex set \(V\) and an edge set \(E\subset V\times V\). A digraph has two maps \(s,t\colon E\to V\) called the source and target, respectively, such that if \(e=(u,v)\) is an edge, then \(s(e)=u\) and \(t(e)=v\)._
Definition 1.7 excludes the possibility of two vertices being connected by more than one edge in the same direction, but it does allow reciprocal connections. A digraph is said to be _acyclic_ if there is no directed path that starts and ends at the same vertex. While acyclicity is not assumed throughout, in the context of this work we do assume that all digraphs are _loop-free_, namely that if \(e=(u,v)\in E\), then \(u\neq v\).
**Definition 1.8**.: _Given a digraph \(\mathcal{G}=(V,E)\), the associated line digraph\(\widehat{\mathcal{G}}=(\widehat{V},\widehat{E})\) is the digraph with vertices \(\widehat{V}=E\) and with a directed edge \((u,v)\to(w,z)\) in \(\widehat{E}\) whenever \(v=w\). The directed edges in \(\widehat{E}\) are denoted by ordered triples \((u,v,z)\)._
The line digraph construction is clearly functorial with respect to inclusions of subgraphs. Namely, if \(\mathcal{G}^{\prime}\subseteq\mathcal{G}\), then \(\widehat{\mathcal{G}}^{\prime}\subseteq\widehat{\mathcal{G}}\).
A digraph \(\mathcal{G}\) is said to be _connected_ if, ignoring edge direction and collapsing bidirectional edges into a single undirected edge, one obtains a connected graph. The line digraph associated to a digraph \(\mathcal{G}\) is not generally connected. But if \(\widehat{\mathcal{G}}\) is a connected line digraph of some digraph \(\mathcal{G}\), then \(\mathcal{G}\) is connected. The following concept is useful for our purposes.
**Definition 1.9**.: _Let \(\mathcal{G}\) be a digraph with a line digraph \(\widehat{\mathcal{G}}\). A subgraph \(\mathcal{G}_{0}\subseteq\mathcal{G}\) is said to be a line component of \(\mathcal{G}\) if there exists a connected component \(\widehat{\mathcal{G}}_{0}\subseteq\widehat{\mathcal{G}}\) such that \(\widehat{\mathcal{G}}_{0}\) is the line digraph associated to \(\mathcal{G}_{0}\). A digraph \(\mathcal{G}\) is said to be line connected if \(\widehat{\mathcal{G}}\) is connected, or equivalently if the only line component of \(\mathcal{G}\) is \(\mathcal{G}\) itself._
Clearly any digraph \(\mathcal{G}\) is a union of its line components (which are not generally disjoint in \(\mathcal{G}\)). The line components of \(\mathcal{G}\) can be constructed from the connected components of \(\widehat{\mathcal{G}}\) by considering for each component \(\widehat{\mathcal{G}}_{0}\), the subgraph of \(\mathcal{G}\) consisting of all edges corresponding to the vertices of \(\widehat{\mathcal{G}}_{0}\) (See Figure 2).
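These notions are easy to compute with. The sketch below (plain Python; the digraph is our own toy example) builds the line digraph of Definition 1.8 and recovers the line components of Definition 1.9 from the connected components of \(\widehat{\mathcal{G}}\), exactly as just described.

```python
from collections import defaultdict

edges = {("a", "b"), ("b", "c"), ("b", "d"), ("e", "b")}   # the edge set of G

# Vertices of the line digraph are the edges of G; there is a directed edge
# (u,v) -> (w,z) whenever v == w (Definition 1.8).
line_edges = {(e, f) for e in edges for f in edges if e[1] == f[0]}

# Connected components of the line digraph (edge direction ignored),
# computed by a simple union-find on the vertices of the line digraph.
parent = {e: e for e in edges}
def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x
for e, f in line_edges:
    parent[find(e)] = find(f)

components = defaultdict(set)
for e in edges:
    components[find(e)].add(e)

# Each component, read as a set of edges of G, spans a line component of G.
for comp in components.values():
    print(sorted(comp))   # here: a single component, so G is line connected
```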
Any poset \(\mathcal{P}\) is uniquely determined by its associated _Hasse diagram_\(\mathcal{H}_{\mathcal{P}}\), which is the transitive reduction of the acyclic digraph underlying the poset \(\mathcal{P}\). Namely the digraph \(\mathcal{H}_{\mathcal{P}}\) has as vertices the objects of \(\mathcal{P}\), and directed edges are the non-identity indecomposable relations in \(\mathcal{P}\), namely those relations \(u<v\) in \(\mathcal{P}\) such that there is no intermediate relation \(u<y<v\). The corresponding edge in the digraph \(\mathcal{H}_{\mathcal{P}}\) is then denoted by \((u,v)\). By taking the transitive closure of \(\mathcal{H}_{\mathcal{P}}\) one recovers \(\mathcal{P}\). _The Hasse diagram \(\mathcal{H}_{\mathcal{P}}\) can be thought of as encoding the analog in \(\mathcal{P}\) of infinitesimally small moves in a metric space._
**Definition 1.10**.: _A poset \(\mathcal{P}\) is said to be line connected if its Hasse diagram \(\mathcal{H}_{\mathcal{P}}\) is line connected._
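Since all posets in this paper are finite, passing from \(\mathcal{P}\) to its Hasse diagram is a direct transitive reduction. A sketch (plain Python; the square poset is our own toy example):

```python
# All strict relations u < v of the square poset {00, 01, 10, 11}.
strict = {("00", "01"), ("00", "10"), ("00", "11"),
          ("01", "11"), ("10", "11")}
objects = {a for rel in strict for a in rel}

# Keep u < v as a Hasse edge iff no intermediate u < y < v exists.
hasse = {(u, v) for (u, v) in strict
         if not any((u, y) in strict and (y, v) in strict for y in objects)}

print(sorted(hasse))
# [('00', '01'), ('00', '10'), ('01', '11'), ('10', '11')]
```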
### The Grothendieck ring
As is the case for any category of modules, many questions about persistence modules can be reduced to questions about isomorphism classes. Hence we work specifically with isomorphism classes of modules over poset algebras. In this context the concept of the Grothendieck ring is very useful.
Let \(k\) be a field, let \(A\) be a unital \(k\)-algebra and let \(A\text{-}\mathsf{mod}\) denote the category of finitely generated left \(A\)-modules. Then \(A\text{-}\mathsf{mod}\) is a monoid with operation given by direct sum, and thus the set of isomorphism classes of \(A\)-modules forms a commutative monoid where
\[[M]+[N]\stackrel{{\text{\rm def}}}{{=}}[M\oplus N]. \tag{1}\]
The Grothendieck group \(\mathsf{Gr}(A)\) of finitely generated \(A\)-modules is the group completion of this monoid. Namely, it is the quotient of the free abelian group generated by isomorphism classes of \(A\)-modules, subject to the relation (1). Thus any element \([X]\in\mathsf{Gr}(A)\) can be written uniquely as a difference \([X]=[M]-[N]\), where \(M,N\in A\text{-}\mathsf{mod}\). In particular, an equation of the form \([M]-[N]=[U]-[V]\) in \(\mathsf{Gr}(A)\) where \(M,N,U,V\in A\text{-}\mathsf{mod}\) should be interpreted as \(M\oplus V\cong N\oplus U\). We refer to elements of \(\mathsf{Gr}(A)\) as _virtual modules_.
Tensor product over \(k\), with the diagonal action of \(A\) on the factors, turns \(\mathsf{Gr}(A)\) into a ring, known as the Grothendieck ring, where \([M]\cdot[N]=[M\otimes N]\). If \(f^{*}\colon B\text{-}\mathsf{mod}\to A\text{-}\mathsf{mod}\) is a functor that preserves tensor products, then \(f^{*}\) induces a ring homomorphism
\[f^{*}\colon\mathsf{Gr}(B)\to\mathsf{Gr}(A).\]
This is the case, for instance, if \(f\colon A\to B\) is a homomorphism of \(k\)-algebras that induces the restriction \(f^{*}\) on module categories, but for our applications this will not generally be the case (See Remark 1.14).
An important example of this construction is given by the Grothendieck ring \(\mathsf{Gr}(k)\) of finite dimensional vector spaces over \(k\), which is isomorphic to the ring of integers \(\mathbb{Z}\). A much more interesting family of examples that is a subject of extensive study arises in representation theory. If \(G\) is a group and \(kG\) its group algebra, then \(\mathsf{Gr}(kG)\) is the Grothendieck ring of all virtual linear representations of the group \(G\) over \(k\).
**Definition 1.11**.: _Let \(A\) be a unital \(k\)-algebra. Define the reduced Grothendieck ring \(\mathsf{Gr}_{e}(A)\) to be the quotient of the Grothendieck ring \(\mathsf{Gr}(A)\), by an additional family of relations:_
\[[M]=[M^{\prime}]+[M^{\prime\prime}]\]
_if there is a short exact sequence of \(A\)-modules \(0\to M^{\prime}\to M\to M^{\prime\prime}\to 0\)._
The reduced Grothendieck ring \(\mathsf{Gr}_{e}(A)\) inherits a ring structure from that of \(\mathsf{Gr}(A)\), since the tensor product over the ground field \(k\) is an exact functor, and the obvious projection \(\mathsf{Gr}(A)\to\mathsf{Gr}_{e}(A)\) is a ring homomorphism. However, \(\mathsf{Gr}_{e}(A)\) is a much less interesting object than \(\mathsf{Gr}(A)\), since the class of any module is equal to the sum of the classes of the simple modules in a composition series. Thus \(\mathsf{Gr}_{e}(A)\) is isomorphic to the free commutative algebra generated by the simple \(A\)-modules.

Figure 2. A digraph (left) and its line digraph (right) with the line components and their associated line digraphs in red and blue.
**Definition 1.12**.: _For a finite poset \(\mathcal{P}\) with object set \(V\), define \(\operatorname{\mathbf{dim}}_{k}\colon\mathsf{Gr}(k\mathcal{P})\to\mathbb{Z}^{V}\) by_
\[\operatorname{\mathbf{dim}}_{k}[M](v)\stackrel{{\mathrm{def}}}{{= }}\dim_{k}(M(v)),\]
_for any \(M\in k\mathcal{P}\text{-}\mathsf{mod}\) and for \([X]=[M]-[N]\) with \(M,N\in k\mathcal{P}\text{-}\mathsf{mod}\), let_
\[\operatorname{\mathbf{dim}}_{k}[X]\stackrel{{\mathrm{def}}}{{= }}\operatorname{\mathbf{dim}}_{k}[M]-\operatorname{\mathbf{dim}}_{k}[N].\]
_For \([X]\in\mathsf{Gr}(k\mathcal{P})\) we refer to \(\operatorname{\mathbf{dim}}_{k}[X]\) as the dimension vector of \([X]\)._
Since the addition operation in \(\mathsf{Gr}(k\mathcal{P})\) is induced by direct sum, the function \(\operatorname{\mathbf{dim}}_{k}\) is clearly a group homomorphism, whose kernel is the subgroup of all virtual modules \([X]=[M]-[N]\) such that \(\dim_{k}M(v)=\dim_{k}N(v)\) for every \(v\in\mathcal{P}\). Clearly for \(M\in k\mathcal{P}\text{-}\mathsf{mod}\), \(\operatorname{\mathbf{dim}}_{k}[M]=0\) if and only if \(M=0\).
**Definition 1.13**.: _Let \(\mathcal{C}\) be a small category. An element \([X]\in\mathsf{Gr}(k\mathcal{C})\) is said to be locally constant, if it can be represented as a difference of the isomorphism classes of two locally constant modules, and similarly for virtually trivial._
**Remark 1.14**.: _Recall that if \(\mathcal{C}\) is a category with a finite set of objects and \(k\) is a field, then the functor category \(\operatorname{Vect}_{k}^{\mathcal{C}}\) is equivalent to the category of modules over the category algebra \(k\mathcal{C}\) (see [31]). If \(F\colon\mathcal{C}\to\mathcal{D}\) is a functor, then one obtains a homomorphism \(kF\colon k\mathcal{C}\to k\mathcal{D}\), which is a ring homomorphism if and only if \(F\) is injective on object sets [31, Proposition 2.2.3], and otherwise is only a homomorphism of vector spaces. In either case \(F^{*}\) preserves tensor products over \(k\), and hence induces a ring homomorphism \(F^{*}\colon\mathsf{Gr}(k\mathcal{D})\to\mathsf{Gr}(k\mathcal{C})\)._
### Functor categories and Kan extensions
Kan extensions will be used to define the divergence operators in Section 2, and we recall here the basic notions. Let \(\mathcal{C},\mathcal{D}\) be small categories, and let \(\mathcal{A}\) be a category that is bicomplete (i.e. all small limits and colimits exist in \(\mathcal{A}\)). Let \(\mathcal{A}^{\mathcal{C}}\) and \(\mathcal{A}^{\mathcal{D}}\) denote the categories of functors from \(\mathcal{C}\) and \(\mathcal{D}\) to \(\mathcal{A}\), respectively, with morphisms given by natural transformations. If \(F\colon\mathcal{C}\to\mathcal{D}\) is a functor, then the restriction \(F^{*}\colon\mathcal{A}^{\mathcal{D}}\to\mathcal{A}^{\mathcal{C}}\) admits left and right adjoints, given by the left and right Kan extensions [25, X.3, Corollary 2],
\[L_{F},R_{F}\colon\mathcal{A}^{\mathcal{C}}\to\mathcal{A}^{\mathcal{D}}.\]
For the sake of completeness, we briefly describe these constructions. The reader is referred to [25, Chapter X] for details. For any object \(d\in\mathcal{D}\), let \(F\downarrow d\) denote the _overcategory of \(d\) with respect to \(F\)_, whose objects are pairs \((c,\varphi)\), where \(c\) is an object in \(\mathcal{C}\) and \(\varphi\colon F(c)\to d\) is a morphism in \(\mathcal{D}\). A morphism \((c,\varphi)\to(c^{\prime},\varphi^{\prime})\) in \(F\downarrow d\) is a morphism \(\beta\colon c\to c^{\prime}\), such that \(\varphi^{\prime}\circ F(\beta)=\varphi\). There is a forgetful functor \(\#\colon F\downarrow d\to\mathcal{C}\), sending an object \((c,\varphi)\) to \(c\). By analogy one defines \(d\downarrow F\), the _undercategory of \(d\) with respect to \(F\)_, and similarly a forgetful functor to \(\mathcal{C}\). For \(M\in\mathcal{A}^{\mathcal{C}}\), left and right Kan extensions of \(M\) along \(F\) are defined by
\[L_{F}(M)(d)\stackrel{{\mathrm{def}}}{{=}}\operatorname{colim}_{F \downarrow d}M_{\#},\quad\text{and}\quad R_{F}(M)(d)\stackrel{{ \mathrm{def}}}{{=}}\operatorname{lim}_{d\downarrow F}M_{\#},\]
where in both cases \(M_{\#}\) denotes the composition of \(M\) with the respective forgetful functor. Thus if \(N\in\mathcal{A}^{\mathcal{C}}\) and \(M\in\mathcal{A}^{\mathcal{D}}\), we have the adjointness relations:
\[\operatorname{Hom}_{\mathcal{A}^{\mathcal{D}}}(L_{F}(N),M)\cong\operatorname{ Hom}_{\mathcal{A}^{\mathcal{C}}}(N,F^{*}(M)),\quad\text{and}\quad\operatorname{Hom}_{ \mathcal{A}^{\mathcal{C}}}(F^{*}(M),N)\cong\operatorname{Hom}_{\mathcal{A}^{ \mathcal{D}}}(M,R_{F}(N)).\]
## 2. A categorical setting for the discrete gradient and divergence
In this section we define the gradient and the divergence on digraphs in a very general setting. With the right setup, the standard notions of these concepts then occur as special cases. In sections 4 and 6 we will specialise to the case of persistence modules, but these constructions can be carried out in a much more general context.
We start by recalling the graph theoretic definitions of gradient and divergence. Let \(\mathcal{G}=(V,E)\) be a digraph. Let \(A\) be an abelian group, and let \(f\colon V\to A\) and \(F\colon E\to A\) be arbitrary functions. Then one defines the gradient \(\nabla(f)\colon E\to A\) and \(\nabla^{*}(F)\colon V\to A\) by
\[\nabla(f)(v\to w)\stackrel{{\mathrm{def}}}{{=}}f(w)-f(v),\quad \text{and}\quad\nabla^{*}(F)(v)=\sum_{u\to v}F(u\to v)-\sum_{v\to w}F(v\to w). \tag{2}\]
The reader is referred to [24] for a comprehensive discussion of the gradient and the divergence on digraphs.
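Both operators in (2) are immediate to implement. The sketch below (plain Python; the digraph and the function values are our own toy data) computes the gradient of a vertex function and the divergence of an edge function.

```python
edges = [("a", "b"), ("b", "c"), ("a", "c")]
vertices = {v for e in edges for v in e}

def gradient(f):
    """nabla(f)(v -> w) = f(w) - f(v), one value per directed edge."""
    return {(v, w): f[w] - f[v] for (v, w) in edges}

def divergence(F):
    """nabla*(F)(v): sum of F over edges into v minus sum over edges out of v."""
    return {v: sum(F[e] for e in edges if e[1] == v)
               - sum(F[e] for e in edges if e[0] == v)
            for v in vertices}

f = {"a": 1, "b": 3, "c": 0}
print(gradient(f))              # {('a','b'): 2, ('b','c'): -3, ('a','c'): -1}
print(divergence(gradient(f)))  # the composite nabla* . nabla, a graph Laplacian of f
```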
We now describe a categorical setup for these concepts. A multi-digraph is a more general analog of what is meant by a digraph in this article. The difference is that in a multi-digraph one allows the set of directed edges between any pair of vertices to have arbitrary cardinality, while the concept we use of a digraph allows at most one edge in each direction. Any small category \(\mathcal{C}\) is a quotient category of the path category of a multi-digraph whose vertex set is in \(1-1\) correspondence with the set of objects in \(\mathcal{C}\) and whose directed edges form a subset of its morphisms, where the equivalence relations are given by the commutativity relations in \(\mathcal{C}\) (see [4, Section 5] for more detail). A canonical example of this type is given by considering the multi-digraph of all morphisms in a category \(\mathcal{C}\). Then \(\mathcal{C}\) is reconstructed from that multi-digraph by declaring two paths equivalent if the corresponding morphisms in \(\mathcal{C}\) coincide. Clearly, restricting to digraphs rather than multi-digraphs puts some restrictions on the type of categories that appear as quotient categories of the corresponding path categories. For instance, a poset \(\mathcal{P}\) can be thought of as a quotient of the path category of its Hasse diagram \(\mathcal{H}_{\mathcal{P}}\) by the relations imposed by the requirement that between any two objects there is at most one morphism. However, quotient categories of path categories of digraphs form a much more general family of categories than just posets. All of those, as we shall see below, share some important properties.
**Definition 2.1**.: _If \(\mathcal{C}\) is a category obtained as a quotient category of the path category \(P(\mathcal{G})\) of some digraph \(\mathcal{G}\), we say that \(\mathcal{G}\) is a generating digraph for \(\mathcal{C}\), or that \(\mathcal{C}\) is generated by \(\mathcal{G}\)._
Notice that the choice of a generating digraph for a small category \(\mathcal{C}\) is generally not unique.
Next we associate with a small category \(\mathcal{C}\) and a generating digraph \(\mathcal{G}\) another small category \(\widehat{\mathcal{C}}\) that is generated by the associated line digraph \(\widehat{\mathcal{G}}\). The category \(\widehat{\mathcal{C}}\) depends on the choice of the generating digraph, but for any such choice one has two functors \(\widehat{\mathcal{C}}\to\mathcal{C}\) that will allow us to define a gradient and two divergence operators. While we will later restrict to the case where \(\mathcal{C}\) is a poset with a canonical generating digraph given by its Hasse diagram, here we study these constructions in greater generality with a view to other possible applications.
**Definition 2.2**.: _Let \(\mathcal{C}\) be a small category generated by a digraph \(\mathcal{G}\), and let \(\widehat{\mathcal{G}}\) denote its associated line digraph. Let \(P(\mathcal{G})\) and \(P(\widehat{\mathcal{G}})\) denote the associated path categories. Define a category \(\widehat{\mathcal{C}}\) with the same object set as \(P(\widehat{\mathcal{G}})\). Assume that_
\[\alpha\stackrel{{\mathrm{def}}}{{=}} ((u,t)\to(t,a_{1})\to\cdots\to(a_{m},s)\to(s,v)),\quad\text{and}\] \[\gamma\stackrel{{\mathrm{def}}}{{=}} ((u,t)\to(t,b_{1})\to\cdots\to(b_{k},s)\to(s,v))\]
_are two morphisms in \(P(\widehat{\mathcal{G}})\). For each pair of objects \((u,t)\) and \((s,v)\), define an equivalence relation on the morphism set \(P(\widehat{\mathcal{G}})((u,t),(s,v))\), to be the transitive closure of the relation \(\alpha\sim\gamma\) if the compositions_
\[t\to a_{1}\to\cdots\to a_{m}\to s,\quad\text{and}\quad t\to b_{1}\to\cdots\to b _{k}\to s\]
_coincide in \(\mathcal{C}\). Define \(\widehat{\mathcal{C}}((u,t),(s,v))\) to be the set of equivalence classes of this relation._
Clearly the assignment \(\mathcal{C}\mapsto\widehat{\mathcal{C}}\) does not define an endofunctor on the category of small categories, since it depends on the choice of a generating digraph \(\mathcal{G}\) for \(\mathcal{C}\). We refer to \(\widehat{\mathcal{C}}\) as the _line category associated to \(\mathcal{C}\) with respect to the generating digraph \(\mathcal{G}\)_.
**Definition 2.3**.: _Let \(\phi,\beta\colon\widehat{\mathcal{G}}\to\mathcal{G}\) denote the front and back graph morphisms, defined on vertices of \(\widehat{\mathcal{G}}\) (i.e. edges \((u,v)\) of \(\mathcal{G}\)) by \(\phi(u,v)=v\) and \(\beta(u,v)=u\), and on edges of \(\widehat{\mathcal{G}}\) by \(\phi(u,v,w)=(v,w)\) and \(\beta(u,v,w)=(u,v)\) for any edge \((u,v,w)\) in \(\widehat{\mathcal{G}}\)._
The maps \(\phi\) and \(\beta\) induce functors
\[\phi,\beta\colon P(\widehat{\mathcal{G}})\to P(\mathcal{G}).\]
The following lemma shows that the front and back functors induce the corresponding functors on the line category.
**Lemma 2.4**.: _Let \(\mathcal{C}\) be a small category generated by a digraph \(\mathcal{G}=(V,E)\). Let \(\widehat{\mathcal{C}}\) denote the line category associated to \(\mathcal{C}\) with respect to \(\mathcal{G}\). Then the front and back functors \(\phi\) and \(\beta\) on path categories induce functors_
\[\phi,\beta\colon\widehat{\mathcal{C}}\to\mathcal{C}.\]
Proof.: The category \(\mathcal{C}\) is a quotient category of the path category \(P(\mathcal{G})\) by the relations among morphisms in \(\mathcal{C}\). Thus it suffices to show that the square of functors

\[\begin{array}{ccc}P(\widehat{\mathcal{G}})&\xrightarrow{\ \psi\ }&P(\mathcal{G})\\ \pi\downarrow&&\downarrow\pi\\ \widehat{\mathcal{C}}&\xrightarrow{\ \bar{\psi}\ }&\mathcal{C}\end{array}\]

where \(\psi\) is either \(\phi\) or \(\beta\), and the vertical functors are the quotient projections, commutes. We prove the statement for \(\phi\). The proof for \(\beta\) is essentially the same. The objects in \(P(\widehat{\mathcal{G}})\) are the edges \((u,v)\) in \(\mathcal{G}\), and \(\phi(u,v)=v\). Define \(\bar{\phi}(u,v)=v\). The square commutes on objects by definition. Let
\[\alpha\stackrel{{\rm def}}{{=}} ((u,t)\to(t,a_{1})\to\cdots\to(a_{m},s)\to(s,v)),\quad\text{and}\] \[\gamma\stackrel{{\rm def}}{{=}} ((u,t)\to(t,b_{1})\to\cdots\to(b_{k},s)\to(s,v))\]
be two morphisms in \(P(\widehat{\mathcal{G}})\), such that \(\pi(\alpha)=\pi(\gamma)\). Then, by definition,
\[t\to a_{1}\to\cdots\to a_{m}\to s,\quad\text{and}\quad t\to b_{1}\to\cdots\to b _{k}\to s\]
coincide in \(\mathcal{C}\). Hence the compositions
\[t\to a_{1}\to\cdots\to s\to v,\quad\text{and}\quad t\to b_{1}\to\cdots\to s\to v\]
also coincide in \(\mathcal{C}\). But these are exactly the projections of \(\phi(\alpha)\) and \(\phi(\gamma)\) into \(\mathcal{C}\). This shows that \(\bar{\phi}\) is well defined and that the square commutes on morphisms, and the proof is complete.
A family of examples that are particularly relevant to this article arises from considering posets and their associated Hasse diagrams.
**Example 2.5**.: _Let \(\mathcal{P}\) be a poset, let \(\mathcal{H}_{\mathcal{P}}\) be its Hasse diagram and let \(\widehat{\mathcal{H}}_{\mathcal{P}}\) be the line digraph associated to \(\mathcal{H}_{\mathcal{P}}\). Then \(\widehat{\mathcal{H}}_{\mathcal{P}}\) is the Hasse diagram of a poset \(\widehat{\mathcal{P}}\) that is unique up to isomorphism._
_To see this, note that any finite poset \(\mathcal{P}\) can be reconstructed uniquely up to isomorphism from its associated Hasse diagram, which by definition is an acyclic and transitively reduced digraph. Thus it suffices to show that if \(\mathcal{H}=(V,E)\) is a finite transitively reduced acyclic digraph, then \(\widehat{\mathcal{H}}\) is also acyclic and transitively reduced._
_It is immediate from the definition that the existence of a cycle in \(\widehat{\mathcal{H}}\) implies that \(\mathcal{H}\) itself contains a cycle. Let \((u,v,w)\) be an edge in \(\widehat{\mathcal{H}}\), and assume that it can be decomposed as \((u,v,z)\cdot(v,z,w)\). Then \(\mathcal{H}\) contains the composable sequence_

\[u\to v\to z\to w.\]
_Hence either \((u,v)\) is not composable with \((z,w)\) or \(v=z\). The first option contradicts the assumption that \((u,v,w)\) is an edge in \(\widehat{\mathcal{H}}\) and the second option contradicts the assumption that \((v,z,w)\) is an edge in \(\widehat{\mathcal{H}}\), or alternatively acyclicity of \(\mathcal{H}\) due to the self-loop \((v,v)\). Thus \((u,v,w)\) is indecomposable, and so \(\widehat{\mathcal{H}}\) is transitively reduced._
_If \(\mathcal{P}\) is a poset, then the poset \(\widehat{\mathcal{P}}\) associated to \(\widehat{\mathcal{H}}_{\mathcal{P}}\) has as objects the edges \((u,v)\) in \(\mathcal{H}_{P}\) and \((u,v)\leq(z,w)\) if there is a sequence of directed edges, for \(n\geq 2\),_
\[(u,v)=(a_{0},a_{1})\xrightarrow{(a_{0},a_{1},a_{2})}(a_{1},a_{2}) \xrightarrow{(a_{1},a_{2},a_{3})}(a_{2},a_{3})\to\cdots\xrightarrow{(a_{n-2}, a_{n-1},a_{n})}(a_{n-1},a_{n})=(z,w).\]
Let \(\mathcal{S}\) be a category, and let \(\mathcal{C}\) be a small category, with a generating digraph \(\mathcal{G}\) and an associated line digraph \(\widehat{\mathcal{G}}\). Let \(\widehat{\mathcal{C}}\) be the associated line category with respect to \(\mathcal{G}\). The functors \(\phi,\beta\colon\widehat{\mathcal{C}}\to\mathcal{C}\) induce _restriction functors_
\[\phi^{*},\beta^{*}\colon\mathcal{S}^{\mathcal{C}}\to\mathcal{S}^{\widehat{ \mathcal{C}}},\]
where \(\mathcal{S}^{\mathcal{C}}\) and \(\mathcal{S}^{\widehat{\mathcal{C}}}\) are the functor categories from \(\mathcal{C}\) and \(\widehat{\mathcal{C}}\) to \(\mathcal{S}\), respectively. We are now ready to define the gradient.
**Definition 2.6**.: _Fix a small category \(\mathcal{C}\) and an associated line category \(\widehat{\mathcal{C}}\) with respect to some generating digraph \(\mathcal{G}\). Let \(\mathcal{S}\) be an additive category. Let \(\mathsf{Gr}(\mathcal{S}^{\mathcal{C}})\) denote the Grothendieck group of isomorphism classes of functors \(\mathcal{C}\to\mathcal{S}\), with the operation \([F]+[G]=[F\sqcup G]\), where \((F\sqcup G)(c)=F(c)\sqcup G(c)\), and where \(\sqcup\) denotes the coproduct in \(\mathcal{S}\). Define the gradient_
\[\nabla\colon\mathsf{Gr}(\mathcal{S}^{\mathcal{C}})\to\mathsf{Gr}(\mathcal{S}^ {\widehat{\mathcal{C}}})\]
_by_
\[\nabla[F]=[\phi^{*}(F)]-[\beta^{*}(F)]\]
_for each functor \(F\in\mathcal{S}^{\mathcal{C}}\). Extend \(\nabla\) by additivity to the whole Grothendieck group._
Next we show that the standard definition of a gradient for a digraph with vertex and edge weight functions taking values in the group of integers is a particular case of our setup.
**Example 2.7**.: _Let \(\mathcal{G}=(V,E)\) be a digraph with line digraph \(\widehat{\mathcal{G}}\). Then \(\phi\) and \(\beta\) induce functors_
\[\phi^{*},\beta^{*}\colon\mathcal{S}^{P(\mathcal{G})}\to\mathcal{S}^{P( \widehat{\mathcal{G}})}.\]
_Consider \(V\) and \(E\) as discrete categories, and let \(V\to P(\mathcal{G})\) and \(E\to P(\widehat{\mathcal{G}})\) be the obvious inclusions. Restriction of \(\phi^{*}\) and \(\beta^{*}\) gives functors_
\[\phi^{*},\beta^{*}\colon\mathcal{S}^{V}\to\mathcal{S}^{E}.\]
_Now, assume that \(\mathcal{S}=\operatorname{Vect}_{k}\), the category of vector spaces over a field \(k\), with the coproduct \(\sqcup\) given by the direct sum. Since vector spaces are determined up to isomorphism by their dimension, taking Grothendieck groups, we have_
\[\mathsf{Gr}(\mathcal{S}^{V})=\mathbb{Z}^{V}\cong\bigoplus_{v\in V}\mathbb{Z},\quad\text{and}\quad\mathsf{Gr}(\mathcal{S}^{E})=\mathbb{Z}^{E}\cong \bigoplus_{e\in E}\mathbb{Z}.\]
_Thus the gradient is given by_
\[\nabla[f](u,v)=\dim_{k}(f(v))-\dim_{k}(f(u)),\]
_where \([f]\in\mathsf{Gr}(\mathcal{S}^{V})\)._
_Let \(\varphi\colon V\to\mathbb{Z}\) be an arbitrary function that takes non-negative values. Considering \(V\) again as a discrete category, let \(f\colon V\to\operatorname{Vect}_{k}\) be the functor defined by \(f(v)=k^{\varphi(v)}\). Then_
\[\nabla[f](u,v)=\nabla(\varphi)(u,v),\]
_where the left hand side is the gradient of the element \([f]\in\mathsf{Gr}(\mathcal{S}^{V})\), and the right hand side is the graph theoretic definition for the gradient of the function \(\varphi\), as in (2)._
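On the level of Grothendieck groups, Example 2.7 thus reduces the gradient of a module to integer arithmetic with dimension vectors. A sketch of this reduction (plain Python; the digraph and the dimensions are our own toy data):

```python
edges = [("a", "b"), ("b", "c")]

dim_f = {"a": 2, "b": 3, "c": 1}   # dim_k f(v): the dimension vector of [f]

# Pull back along the front and back maps phi(u,v) = v and beta(u,v) = u ...
phi_star  = {e: dim_f[e[1]] for e in edges}
beta_star = {e: dim_f[e[0]] for e in edges}

# ... so that nabla[f] = [phi*(f)] - [beta*(f)] is the edge function below.
grad = {e: phi_star[e] - beta_star[e] for e in edges}
print(grad)                        # {('a', 'b'): 1, ('b', 'c'): -2}
```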
Next we define the divergence operators in our context. For this we require that the category \(\mathcal{S}\) is additive and bicomplete, namely that all small limits and colimits exist in \(\mathcal{S}\). In that case the functors \(\phi^{*}\) and \(\beta^{*}\) have left and right adjoints given by the left and right Kan extensions. Notice that, since \(\mathcal{S}\) is additive, finite products and coproducts in \(\mathcal{S}\) coincide.
**Definition 2.8**.: _Fix a small category \(\mathcal{C}\) and an associated line category \(\widehat{\mathcal{C}}\) with respect to some generating digraph \(\mathcal{G}\). Let \(\mathcal{S}\) be a small bicomplete additive category. Define the left divergence and the right divergence (with respect to \(\mathcal{G}\))_
\[\nabla^{*},\nabla_{*}\colon\mathsf{Gr}(\mathcal{S}^{\widehat{\mathcal{C}}}) \to\mathsf{Gr}(\mathcal{S}^{\mathcal{C}})\]
_by_
\[\nabla^{*}[T]=[L_{\phi}(T)]-[L_{\beta}(T)],\quad\text{and}\quad\nabla_{*}[T]=[ R_{\phi}(T)]-[R_{\beta}(T)]\]
_for each functor \(T\colon\widehat{\mathcal{C}}\to\mathcal{S}\). Extend by additivity to the whole Grothendieck group._
Since limits commute with limits and colimits with colimits, and since finite products and coproducts coincide in an additive category, both operators are clearly well defined group homomorphisms.
We now show how the standard divergence operator for digraphs, with vertex and edge weight functions taking values in the integers, is a particular example of the general setup of Definition 2.8.
**Example 2.9**.: _Let \(\mathcal{G}=(V,E)\) be a digraph with line digraph \(\widehat{\mathcal{G}}\). Let \(\mathcal{S}\) be a category satisfying the requirement of Definition 2.8. Consider \(V\) and \(E\) as discrete subcategories of \(P(\mathcal{G})\) and \(P(\widehat{\mathcal{G}})\) respectively, as in Example 2.7. Restriction of \(\phi^{*}\) and \(\beta^{*}\) gives functors_
\[\phi^{*},\beta^{*}\colon\mathcal{S}^{V}\to\mathcal{S}^{E}.\]
_Since \(V\) and \(E\) are discrete, we have for \(f\in\mathcal{S}^{E}\)_
\[L_{\phi}(f)(v)\stackrel{{\mathrm{def}}}{{=}}\operatorname*{colim}_{\phi\downarrow v}f=\coprod_{(u,v)\in E}f(u,v),\quad\text{and}\quad L_{\beta}(f)(v)\stackrel{{\mathrm{def}}}{{=}}\operatorname*{colim}_{\beta\downarrow v}f=\coprod_{(v,w)\in E}f(v,w).\]
_Similarly_
\[R_{\phi}(f)(v)\stackrel{{\mathrm{def}}}{{=}}\lim_{v\downarrow \phi}f=\prod_{(u,v)\in E}f(u,v),\quad\text{and}\quad R_{\beta}(f)(v)\stackrel{{ \mathrm{def}}}{{=}}\lim_{v\downarrow\beta}f=\prod_{(v,w)\in E}f(v,w).\]
_Restrict attention to the case where \(\mathcal{S}=\operatorname{Vect}_{k}\) and \(\mathcal{G}\) is finite (this is essential because \(\operatorname{Vect}_{k}\) is only finitely bicomplete). Then, finite products (cartesian products) and finite coproducts (direct sums) are isomorphic. Hence, the right and left Kan extensions coincide. As in Example 2.7, we have_
\[\mathsf{Gr}(\mathcal{S}^{V})=\mathbb{Z}^{V}\cong\bigoplus_{v\in V}\mathbb{Z},\quad\text{and}\quad\mathsf{Gr}(\mathcal{S}^{E})=\mathbb{Z}^{E}\cong \bigoplus_{e\in E}\mathbb{Z}.\]
_Hence_
\[\nabla^{*}[f](v)=\nabla_{*}[f](v)=\sum_{(u,v)\in E}\dim_{k}f(u,v)-\sum_{(v,w) \in E}\dim_{k}f(v,w)\]
_for \(f\in\mathcal{S}^{E}\)._
_Let \(\gamma\colon E\to\mathbb{Z}\) be an arbitrary function that takes non-negative values. Considering \(E\) as a discrete category, let \(g\colon E\to\operatorname{Vect}_{k}\) denote the functor defined by \(g(e)=k^{\gamma(e)}\). Then_
\[\nabla^{*}[g]=\nabla_{*}[g]=\nabla^{*}(\gamma)\]
_where the left and centre in the equation are the left and right divergence of \([g]\) in \(\mathsf{Gr}(\mathcal{S}^{V})\), and the right hand side is the graph theoretic divergence of \(\gamma\), as in (2)._
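Numerically, Example 2.9 and the adjointness relation \(\langle\nabla^{*}(F),f\rangle_{V}=\langle F,\nabla(f)\rangle_{E}\) recalled in the introduction can be checked directly, since on Grothendieck groups both pairings become dot products of (virtual) dimension vectors. A sketch (plain Python; the digraph and the dimension data are our own):

```python
edges = [("a", "b"), ("b", "c"), ("a", "c")]
vertices = sorted({v for e in edges for v in e})

dim_f = {"a": 1, "b": 4, "c": 2}                      # a vertex function
dim_g = dict(zip(edges, (3, 1, 2)))                   # an edge function

grad_f = {(u, v): dim_f[v] - dim_f[u] for (u, v) in edges}
div_g = {v: sum(d for e, d in dim_g.items() if e[1] == v)
            - sum(d for e, d in dim_g.items() if e[0] == v)
         for v in vertices}

lhs = sum(div_g[v] * dim_f[v] for v in vertices)      # <nabla*[g], [f]>_V
rhs = sum(dim_g[e] * grad_f[e] for e in edges)        # <[g], nabla[f]>_E
assert lhs == rhs                                     # both equal 9 here
```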
## 3. Bilinear pairings on \(\mathsf{Gr}(\mathcal{S}^{\mathcal{C}})\)
In this section we define two bilinear forms on \(\mathsf{Gr}(\mathcal{S}^{\mathcal{C}})\), where \(\mathcal{S}=\mathrm{Vect}_{k}\) and \(\mathcal{C}\) is a finite acyclic category (i.e. a category generated by a finite acyclic digraph). Thus we use the notation \(\mathsf{Gr}(k\mathcal{C})\) for \(\mathsf{Gr}(\mathcal{S}^{\mathcal{C}})\).
**Definition 3.1**.: _Let \(\mathcal{G}\) be a finite acyclic digraph that generates a category \(\mathcal{C}\). Define the Hom pairing_
\[\langle-,-\rangle_{\mathcal{C}}\colon\mathsf{Gr}(k\mathcal{C})\times\mathsf{ Gr}(k\mathcal{C})\to\mathbb{Z}\]
_by_
\[\langle[M],[N]\rangle_{\mathcal{C}}\stackrel{{\mathrm{def}}}{{=} }\dim_{k}\left(\mathrm{Hom}_{k\mathcal{C}}(M,N)\right),\]
_where \(M\) and \(N\) represent their isomorphism classes in \(\mathsf{Gr}(k\mathcal{C})\). Extend the definition to the full Grothendieck group by additivity._
Notice that acyclicity and finiteness guarantee that the category \(\mathcal{C}\) is finite, and so the Hom objects are guaranteed to be finite dimensional. The Hom pairing is clearly bilinear and can be defined by analogy also on \(\mathsf{Gr}(k\widehat{\mathcal{C}})\). Since the left and right Kan extensions are left and right adjoints to the restrictions \(\phi^{*}\) and \(\beta^{*}\), we have
\[\langle\nabla^{*}[X],[Y]\rangle_{\mathcal{C}}=\langle[X],\nabla[Y]\rangle_{\widehat{\mathcal{C}}},\quad\text{and}\quad\langle\nabla[Y],[X]\rangle_{\widehat{\mathcal{C}}}=\langle[Y],\nabla_{*}[X]\rangle_{\mathcal{C}} \tag{3}\]
for \([X]\in\mathsf{Gr}(k\widehat{\mathcal{C}})\) and \([Y]\in\mathsf{Gr}(k\mathcal{C})\). Notice that the Hom pairing is not generally symmetric; however, see Example 3.6 below.
Next we define another useful pairing on \(\mathsf{Gr}(k\mathcal{C})\).
**Definition 3.2**.: _Let \(\mathcal{G}\) be a finite acyclic digraph that generates a category \(\mathcal{C}\). Define the Euler pairing_
\[\chi_{\mathcal{C}}\colon\mathsf{Gr}(k\mathcal{C})\times\mathsf{Gr}(k\mathcal{C })\to\mathbb{Z}\]
_by_
\[\chi_{\mathcal{C}}([M],[N])\stackrel{{\mathrm{def}}}{{=}}\chi( \mathrm{Ext}^{*}_{k\mathcal{C}}(M,N))=\sum_{n\geq 0}(-1)^{n}\dim_{k}( \mathrm{Ext}^{n}_{k\mathcal{C}}(M,N)),\]
_where \(M\) and \(N\) represent their isomorphism classes in \(\mathsf{Gr}(k\mathcal{C})\), and extend to the Grothendieck group by additivity._
To show that this is well defined, we must argue that the \(\mathrm{Ext}^{n}\) groups vanish for \(n\) sufficiently large. This is implied by Lemma 3.3 below.
**Lemma 3.3**.: _Let \(\mathcal{G}\) be a finite acyclic digraph that generates a category \(\mathcal{C}\). Then the category algebra \(k\mathcal{C}\) has finite global dimension; that is, every finitely generated \(k\mathcal{C}\)-module admits a finite projective resolution._
Proof.: Since we assume that \(\mathcal{G}\) is finite and acyclic, composition length in \(\mathcal{C}\) is bounded. For each object \(x\in\mathcal{C}\), let \(F_{x}\in k\mathcal{C}\text{-mod}\) denote the projective module defined by
\[F_{x}=k\mathcal{C}(x,-).\]
Let \(M\) be a finitely generated \(k\mathcal{C}\)-module. For each object \(x\in\mathcal{C}\), choose a basis \(\{v_{x,1},\ldots,v_{x,n_{x}}\}\) for \(M(x)\). Let
\[q_{x,i}\colon F_{x}\to M\]
denote the natural transformation determined by sending \(1_{x}\in\mathcal{C}(x,x)\) to \(v_{x,i}\). Thus one obtains a surjection of \(k\mathcal{C}\)-modules
\[q_{M}\colon\bigoplus_{x\in\mathcal{C}}F_{x}^{d_{x}}\to M,\]
where \(d_{x}=\dim_{k}(M(x))\) and \(F_{x}^{d_{x}}=\bigoplus_{d_{x}}F_{x}\). Let \(M_{1}\) denote the kernel \(\operatorname{Ker}(q_{M})\). If \(x\) is a minimal object, in the sense that \(\mathcal{C}(y,x)=\emptyset\) for all \(y\neq x\), then \(M_{1}(x)=0\). Thus \(M_{1}\) vanishes on all minimal objects in \(\mathcal{C}\). Notice that if \(d_{x}=0\), then no copy of \(F_{x}\) is included in the cover. Next, construct a projective cover of \(M_{1}\) in a similar fashion, avoiding all minimal objects in \(\mathcal{C}\), as well as all objects of graph distance \(1\) from a minimal object on which \(M_{1}\) vanishes. Let \(M_{2}\) be the kernel of this cover. Then \(M_{2}\) vanishes on all minimal objects and on all objects of graph distance \(1\) from a minimal object. By induction on the length of the maximal chains in \(\mathcal{C}\), we obtain a finite projective resolution for \(M\), of length bounded above by the length of a maximal chain in \(\mathcal{C}\).
The pairing \(\chi_{\mathcal{C}}\) is clearly bilinear, but in general it is neither symmetric, nor is it true that
\[\chi_{\mathcal{C}}(\nabla^{*}[X],[Y])=\chi_{\mathcal{C}}([X],\nabla[Y]),\quad \text{and}\quad\chi_{\mathcal{C}}([X],\nabla_{*}[Y])=\chi_{\mathcal{C}}(\nabla [X],[Y]). \tag{4}\]
We now consider a special case where the generating digraph for \(\mathcal{C}\) is a directed tree \(\mathcal{T}\). In that case the path algebra \(k\mathcal{T}\) of \(\mathcal{T}\), considered as an acyclic quiver, and the category algebra \(k\mathcal{C}\) coincide. Path algebras over acyclic quivers are hereditary, namely submodules of projective modules are projective. Hence for any \(M,N\in k\mathcal{C}\text{-}\mathsf{mod}\), one has \(\operatorname{Ext}^{i}_{k\mathcal{C}}(M,N)=0\) for \(i>1\). Thus in this case
\[\chi_{\mathcal{C}}([M],[N])=\dim_{k}\operatorname{Hom}_{k\mathcal{C}}(M,N)- \dim_{k}\operatorname{Ext}^{1}_{k\mathcal{C}}(M,N).\]
In particular, if \(\mathcal{C}\) is generated by a finite directed tree, then the Euler pairing is easy to compute, as it depends only on the dimensions of the point modules. To make this statement precise, we need the following definition.
**Definition 3.4**.: _Let \(\mathcal{Q}=(V,E)\) be a quiver. Let \(\alpha,\beta\in\mathbb{Z}^{V}\) be integer valued functions on \(V\). Define a function \(\chi_{\mathcal{Q}}\colon\mathbb{Z}^{V}\times\mathbb{Z}^{V}\to\mathbb{Z}\) by_
\[\chi_{\mathcal{Q}}(\alpha,\beta)\stackrel{{\text{def}}}{{=}}\sum _{x\in V}\alpha(x)\beta(x)-\sum_{e\in E}\alpha(s(e))\beta(t(e)).\]
_The function \(\chi_{\mathcal{Q}}\) is called the Euler form for \(\mathcal{Q}\)._
**Lemma 3.5** ([13, Proposition 2.5.2]).: _Let \(\mathcal{C}\) be a category generated by a finite tree \(\mathcal{T}\). Then for any \(M,N\in k\mathcal{C}\operatorname{\mathsf{-mod}}\)_
\[\chi_{\mathcal{C}}([M],[N])=\chi_{\mathcal{T}}(\operatorname{\mathbf{dim}}_{k }[M],\operatorname{\mathbf{dim}}_{k}[N]).\]
Notice that a category generated by a tree is in fact a poset. Also, the lemma shows that symmetry almost always fails, even under rather favourable circumstances, because the second term in the definition of the Euler form is not symmetric.
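As a concrete illustration, the Euler form of Definition 3.4 is elementary to compute. The sketch below (our own encoding, not from the paper) evaluates it on the two simple modules of the quiver \(0\to 1\), exhibiting the failure of symmetry just noted.

```python
# Sketch: the Euler form of Definition 3.4 for a quiver, evaluated on
# dimension vectors. The vertex/edge encoding is ours.

def euler_form(vertices, edges, alpha, beta):
    """chi_Q(alpha, beta) = sum_x alpha(x)beta(x) - sum_e alpha(s(e))beta(t(e))."""
    vertex_term = sum(alpha[x] * beta[x] for x in vertices)
    edge_term = sum(alpha[s] * beta[t] for (s, t) in edges)
    return vertex_term - edge_term

# The quiver 0 -> 1, with the dimension vectors of its two simple modules:
V, E = [0, 1], [(0, 1)]
S0, S1 = {0: 1, 1: 0}, {0: 0, 1: 1}
print(euler_form(V, E, S0, S1))  # -1: here Hom = 0 and dim Ext^1(S_0, S_1) = 1
print(euler_form(V, E, S1, S0))  #  0: the form is not symmetric
```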
Looking back into graph theory, one considers the vertices \(V\) and the edges \(E\) as discrete categories. The next example shows that in that case the Hom pairing and the Euler pairing coincide.
**Example 3.6**.: _In the setup of Example 2.9 the sets \(V\) and \(E\) are considered as discrete categories. Hence \(kV\) and \(kE\) are simply the \(k\)-vector spaces generated by \(V\) and \(E\) respectively, with trivial products, and modules over them are again just finite dimensional vector spaces associated to each vertex or edge. Thus, all modules are projective, and so the respective \(\operatorname{Ext}^{n}\) groups vanish for positive \(n\). Hence, the corresponding pairings now take the form_
\[\chi_{V}([f],[g])=\dim_{k}(\operatorname{Hom}_{kV}(f,g))= \sum_{v\in V}\dim_{k}(f(v))\dim_{k}(g(v))=\langle[f],[g]\rangle_{ V},\quad\text{and}\] \[\chi_{E}([h],[s])=\dim_{k}(\operatorname{Hom}_{kE}(h,s))= \sum_{e\in E}\dim_{k}(h(e))\dim_{k}(s(e))=\langle[h],[s]\rangle_{E}\]
_Notice that the pairings in this case are symmetric. Also, by Example 2.9 the divergence operators \(\nabla^{*}\) and \(\nabla_{*}\) coincide in this case. In particular, the adjointness relations (3) hold trivially._
The following two lemmas are standard homological algebra, which we record here for reference.
**Lemma 3.7**.: _Let \(\mathcal{A},\mathcal{B}\) be small abelian categories and assume that \(\mathcal{A}\) has enough projectives. Let \(F\colon\mathcal{A}\to\mathcal{B}\) be a functor with a right adjoint \(G\colon\mathcal{B}\to\mathcal{A}\). Then the following statements are equivalent._
1. \(F\) _sends projective objects in_ \(\mathcal{A}\) _to projective objects in_ \(\mathcal{B}\)_._
2. \(G\) _is an exact functor._
_Dually, if \(\mathcal{B}\) has enough injectives, then the following statements are equivalent._
3. \(G\) _sends injective objects in_ \(\mathcal{B}\) _to injective objects in_ \(\mathcal{A}\)_._
4. \(F\) _is an exact functor._
Proof.: We prove the equivalence of (1) and (2). The equivalence of (3) and (4) follows by analogy. Let \(P\in\mathcal{A}\) be a projective object. Then \(F(P)\in\mathcal{B}\) is projective if and only if the functor \(\operatorname{Hom}_{\mathcal{B}}(F(P),-)\cong\operatorname{Hom}_{\mathcal{A} }(P,G(-))\) is exact. Thus, if \(G\) is exact then \(F(P)\) is projective. This shows that (2) implies (1).
Conversely, since \(\mathcal{A}\) is assumed to have enough projectives, every object \(X\in\mathcal{A}\) admits an epimorphism \(P\to X\), where \(P\) is projective in \(\mathcal{A}\). Let
\[0\to B\xrightarrow{\alpha}C\xrightarrow{\beta}D\to 0\]
be an exact sequence in \(\mathcal{B}\). Since \(G\) is a right adjoint, it is left exact, so it suffices to show that
\[G(C)\xrightarrow{G(\beta)}G(D)\to 0\]
is exact. Let \(\varphi\colon P\to G(D)\) be an epimorphism in \(\mathcal{A}\), where \(P\) is projective. By (1), \(F\) sends projective objects to projective objects. Thus
\[\operatorname{Hom}_{\mathcal{B}}(F(P),C)\xrightarrow{\beta_{*}}\operatorname{Hom}_{\mathcal{B}}(F(P),D)\to 0\]
is exact, and by adjointness
\[\operatorname{Hom}_{\mathcal{A}}(P,G(C))\xrightarrow{G(\beta)_{*}}\operatorname{Hom}_{\mathcal{A}}(P,G(D))\to 0\]
is exact. Hence there exists \(\psi\in\operatorname{Hom}_{\mathcal{A}}(P,G(C))\) such that \(G(\beta)_{*}(\psi)=G(\beta)\circ\psi=\varphi\). Since \(\varphi\) is an epimorphism, so is \(G(\beta)\), as claimed.
**Lemma 3.8**.: _Let \(\gamma\colon\mathcal{C}\to\mathcal{D}\) be a functor between small categories. Let \(\mathcal{A}\) be a bicomplete abelian category, and let \(\mathcal{A}^{\mathcal{C}}\) and \(\mathcal{A}^{\mathcal{D}}\) denote the respective functor categories. Let \(\gamma^{*}\colon\mathcal{A}^{\mathcal{D}}\to\mathcal{A}^{\mathcal{C}}\) denote the restriction, and let \(L_{\gamma}\) and \(R_{\gamma}\) denote its left and right Kan extensions. Then for any \(M\in\mathcal{A}^{\mathcal{C}}\) and \(N\in\mathcal{A}^{\mathcal{D}}\),_
\[\operatorname{Ext}^{*}_{\mathcal{A}^{\mathcal{D}}}(L_{\gamma}(M),N)\cong \operatorname{Ext}^{*}_{\mathcal{A}^{\mathcal{C}}}(M,\gamma^{*}N)\]
_if \(\mathcal{A}\) has enough projectives and either of the following two equivalent conditions holds._
1. \(\gamma^{*}\) _sends injective modules to injective modules._
2. \(L_{\gamma}\) _is an exact functor._
_Similarly, if \(\mathcal{A}\) has enough injectives then_
\[\operatorname{Ext}^{*}_{\mathcal{A}^{\mathcal{D}}}(N,R_{\gamma}(M))\cong \operatorname{Ext}^{*}_{\mathcal{A}^{\mathcal{C}}}(\gamma^{*}N,M)\]
_if either of the following equivalent conditions holds._
1. \(\gamma^{*}\) _sends projective modules to projective modules._
2. \(R_{\gamma}\) _is an exact functor._
Proof.: We prove the first statement. The second follows by analogy. Let \(N\to I^{*}\) be an injective resolution of \(N\) in \(\mathcal{A}^{\mathcal{D}}\). Since \(\gamma^{*}\) is exact and sends injective modules to injective modules, \(\gamma^{*}N\to\gamma^{*}(I^{*})\) is an injective resolution of \(\gamma^{*}N\) in \(\mathcal{A}^{\mathcal{C}}\). Then
\[\operatorname{Ext}^{*}_{\mathcal{A}^{\mathcal{C}}}(M,\gamma^{*}N)=H^{*}(\operatorname{Hom}_{\mathcal{A}^{\mathcal{C}}}(M,\gamma^{*}(I^{*})))\cong H^{*}(\operatorname{Hom}_{\mathcal{A}^{\mathcal{D}}}(L_{\gamma}(M),I^{*}))=\operatorname{Ext}^{*}_{\mathcal{A}^{\mathcal{D}}}(L_{\gamma}(M),N),\]
as claimed. The equivalence of Conditions (1) and (2) follows from Lemma 3.7.
**Lemma 3.9**.: _Let \(\mathcal{P}\) be a finite poset and assume that \(\mathcal{H}_{\mathcal{P}}\) is a tree. Let \(\widehat{\mathcal{P}}\) be the associated line poset (i.e., the line category associated to \(\mathcal{P}\)), and let \(\phi,\beta\colon\widehat{\mathcal{P}}\to\mathcal{P}\) be the front and back functors. Then \(\phi^{*}\colon k\mathcal{P}\mbox{-}\mathsf{mod}\to k\widehat{\mathcal{P}} \mbox{-}\mathsf{mod}\) sends injective modules to injective modules and \(\beta^{*}\colon k\mathcal{P}\mbox{-}\mathsf{mod}\to k\widehat{\mathcal{P}} \mbox{-}\mathsf{mod}\) sends projective modules to projective modules._
Proof.: We prove that \(\phi^{*}\) sends injective modules to injective modules. The corresponding statement for \(\beta^{*}\) is similar. By [13, Lemma 2.2.3] every indecomposable injective \(k\mathcal{P}\)-module has the form \(G_{v}\), for some \(v\in\mathcal{P}\), where for any \(u\in\mathcal{P}\),
\[G_{v}(u)\stackrel{{\mathrm{def}}}{{=}}k\operatorname{Mor}_{ \mathcal{P}}(u,v)=\begin{cases}k&u\leq v\\ 0&\text{otherwise}\end{cases}\]
with \(G_{v}(x)\to G_{v}(y)\) the identity morphism, for each relation \(x\leq y\) in \(\mathcal{P}_{\leq v}\). Thus to prove the lemma, we must show that for any module of the form \(G_{v}\in k\mathcal{P}\mbox{-}\mathsf{mod}\), the corresponding module \(\phi^{*}G_{v}\) is a finite sum of indecomposable injective modules in \(k\widehat{\mathcal{P}}\mbox{-}\mathsf{mod}\).
Let \((u,w)\in\widehat{\mathcal{P}}\) be any object. Then
\[\phi^{*}G_{v}(u,w)=G_{v}(w)=\begin{cases}k&w\leq v\\ 0&\text{otherwise}\end{cases}=\begin{cases}k&(u,w)\leq(a,v)\\ 0&\text{otherwise}\end{cases}\]
where \(a<v\) is any indecomposable relation in \(\mathcal{P}\). Since \(\mathcal{P}\) is generated by a tree, for each such relation the sub-posets \(\widehat{\mathcal{P}}_{\leq(a,v)}\) are pairwise disjoint. Thus
\[\phi^{*}G_{v}\cong\bigoplus_{(a,v)\in\widehat{\mathcal{P}}}G_{(a,v)}.\]
This completes the proof.
**Example 3.10**.: _Let \(\mathcal{P}\) be a finite poset and assume that \(\mathcal{H}_{\mathcal{P}}\) is a tree. By Lemmas 3.8 and 3.9 one has for \(N\in k\mathcal{P}\mbox{-}\mathsf{mod}\) and \(M\in k\widehat{\mathcal{P}}\mbox{-}\mathsf{mod}\)_
\[\operatorname{Ext}^{*}_{k\mathcal{P}}(L_{\phi}(M),N)\cong\operatorname{Ext}^{*}_{k\widehat{\mathcal{P}}}(M,\phi^{*}N),\quad\text{and}\quad\operatorname{Ext}^{*}_{k\mathcal{P}}(N,R_{\beta}(M))\cong\operatorname{Ext}^{*}_{k\widehat{\mathcal{P}}}(\beta^{*}N,M).\]
In general \(\phi^{*}\) does not send projective \(k\mathcal{P}\)-modules to projective modules, and \(\beta^{*}\) does not send injective \(k\mathcal{P}\)-modules to injective modules. See Section 5 for further discussion.
## 4. The gradient of modules over posets
In this section we specialise the idea of a gradient, as defined in Section 2, to persistence modules. Let \(\mathcal{P}\) be a finite poset and let \(\mathcal{H}_{\mathcal{P}}\) denote its Hasse diagram. Thus \(\mathcal{H}_{\mathcal{P}}\) is a generating digraph for \(\mathcal{P}\) in the sense of Definition 2.1. Let \(\widehat{\mathcal{H}}_{\mathcal{P}}\) be the associated line digraph, which by Example 2.5 is the Hasse diagram of a poset \(\widehat{\mathcal{P}}\). Let \(k\) be a fixed field of coefficients. Throughout we consider \(k\mathcal{P}\)- and \(k\widehat{\mathcal{P}}\)-modules interchangeably as modules over the category algebras, or as functors from the respective categories to \(k\)-vector spaces.
Let \(\phi,\beta\colon\widehat{\mathcal{P}}\to\mathcal{P}\) be the front and back maps, and let
\[\phi^{*},\beta^{*}\colon\mathsf{Gr}(k\mathcal{P})\to\mathsf{Gr}(k\widehat{ \mathcal{P}})\]
be the corresponding ring homomorphisms. The following is a particular case of Definition 2.6.
**Definition 4.1**.: _For any finite poset \(\mathcal{P}\), define the gradient_
\[\nabla\colon\mathsf{Gr}(k\mathcal{P})\to\mathsf{Gr}(k\widehat{\mathcal{P}})\]
_by \(\nabla\stackrel{{\mathrm{def}}}{{=}}\phi^{*}-\beta^{*}\)._
Figure 3 illustrates some small posets and the front and back modules in Definition 4.1.
We next study some basic properties of the gradient.
**Theorem 4.2**.: _Let \(\mathcal{P}\) be a finite poset. Then the gradient operator \(\nabla=\nabla_{\mathcal{P}}\colon\mathsf{Gr}(k\mathcal{P})\to\mathsf{Gr}(k\widehat {\mathcal{P}})\) satisfies the following properties:_
1. \(\nabla\) _is a well defined group homomorphism._
2. _If_ \([X]\in\mathsf{Gr}(k\mathcal{P})\) _is locally constant then_ \(\nabla[X]=0\)_._
3. \(\nabla\) _satisfies a Leibniz type rule, i.e. for all_ \([X],[Y]\in\mathsf{Gr}(k\mathcal{P})\)_, the identity_ \[\nabla([X]\cdot[Y])=\nabla[X]\cdot\phi^{*}[Y]+\beta^{*}[X]\cdot\nabla[Y]\] _holds in_ \(\mathsf{Gr}(k\widehat{\mathcal{P}})\)_._
_Furthermore, \(\nabla\) is natural with respect to restrictions to sub-posets, namely, if \(\iota\colon\mathcal{Q}\hookrightarrow\mathcal{P}\) is a sub-poset, then \(\iota^{*}\circ\nabla_{\mathcal{P}}=\nabla_{\mathcal{Q}}\circ\iota^{*}\)._
Proof.: Since \(\phi^{*},\beta^{*}\colon\mathsf{Gr}(k\mathcal{P})\to\mathsf{Gr}(k\widehat{\mathcal{P}})\) are ring homomorphisms (see Remark 1.14), they are in particular homomorphisms of abelian groups, and hence so is their difference. This proves Part (1).
Let \(M\in k\mathcal{P}\mbox{-}\mathsf{mod}\) be locally constant. Define a natural isomorphism \(\mu\colon\beta^{*}M\to\phi^{*}M\), by sending an object \((x,y)\) in \(\widehat{\mathcal{P}}\) to the morphism
\[\mu_{(x,y)}\colon\beta^{*}M(x,y)=M(x)\xrightarrow{M(x,y)}M(y)=\phi^{*}M(x,y).\]
Figure 3. Left: Four sample posets given by their respective Hasse diagrams. Centre and Right: For an arbitrary persistence module \(M\) on \(\mathcal{P}\), the corresponding modules \(\phi^{*}M\) and \(\beta^{*}M\) on \(\widehat{\mathcal{P}}\). The gradient is given by the formal difference \([\phi^{*}M]-[\beta^{*}M]\) in \(\mathsf{Gr}(k\widehat{\mathcal{P}})\).
Naturality is clear. It follows that \(\nabla[M]=0\). By definition, an element \([X]\in\mathsf{Gr}(k\mathcal{P})\) is locally constant if it is represented by a difference of locally constant modules. By additivity, \(\nabla\) vanishes on any locally constant virtual module in \(\mathsf{Gr}(k\mathcal{P})\). This proves Part (2).
Finally, since \(\phi^{*}\) and \(\beta^{*}\) are ring homomorphisms, one has for \(M,N\in k\mathcal{P}\text{-}\mathsf{mod}\),
\[\nabla[M]\cdot\phi^{*}[N]+\beta^{*}[M]\cdot\nabla[N] =(\phi^{*}[M]-\beta^{*}[M])\cdot\phi^{*}[N]+\beta^{*}[M]\cdot( \phi^{*}[N]-\beta^{*}[N])\] \[=\phi^{*}[M]\cdot\phi^{*}[N]-\beta^{*}[M]\cdot\beta^{*}[N]\] \[=\nabla([M]\cdot[N])\]
as claimed in Part (3). For virtual modules \([X]=[M]-[N]\), extend the statement by linearity of \(\phi^{*}\), \(\beta^{*}\) and \(\nabla\).
The last statement follows from commutativity of the square
\[\begin{array}{ccc}\mathsf{Gr}(k\mathcal{P})&\xrightarrow{\;\alpha^{*}\;}&\mathsf{Gr}(k\widehat{\mathcal{P}})\\ {\scriptstyle\iota^{*}}\downarrow&&\downarrow{\scriptstyle\widehat{\iota}^{*}}\\ \mathsf{Gr}(k\mathcal{Q})&\xrightarrow{\;\alpha^{*}\;}&\mathsf{Gr}(k\widehat{\mathcal{Q}})\end{array}\]
where \(\alpha\) is either \(\phi\) or \(\beta\).
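At the level of dimension vectors, the Leibniz rule of Theorem 4.2 reduces to the elementary identity \(ab-a'b'=(a-a')b+a'(b-b')\). The following Python sketch (our own encoding; it checks only this dimension-vector shadow of the identity in \(\mathsf{Gr}(k\widehat{\mathcal{P}})\), on the poset \(a<b\)) verifies it on random samples.

```python
# Sketch: the dimension-vector shadow of the Leibniz rule of Theorem 4.2(3)
# on the poset a < b. A module is recorded only by its point dimensions,
# and the product [X][Y] is object-wise multiplication. Encoding is ours.
import random

def check_leibniz(dim_X, dim_Y):
    # The line poset of a < b has the single object (a, b);
    # phi* evaluates at the front b, beta* at the back a.
    phiX, betaX = dim_X["b"], dim_X["a"]
    phiY, betaY = dim_Y["b"], dim_Y["a"]
    grad_XY = phiX * phiY - betaX * betaY              # nabla([X][Y]) at (a, b)
    rhs = (phiX - betaX) * phiY + betaX * (phiY - betaY)
    return grad_XY == rhs

for _ in range(100):
    X = {"a": random.randint(0, 9), "b": random.randint(0, 9)}
    Y = {"a": random.randint(0, 9), "b": random.randint(0, 9)}
    assert check_leibniz(X, Y)
print("Leibniz rule holds on all samples")
```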
An obvious question is what can be said about a module with a vanishing gradient. To answer this question we need a preparatory lemma.
**Lemma 4.3**.: _Let \(\mathcal{P}\) be a line connected finite poset. Then its Hasse diagram \(\mathcal{H}_{\mathcal{P}}\) has a line connected maximal tree._
Proof.: The Hasse diagram \(\mathcal{H}_{\mathcal{P}}\) is a connected digraph (a line connected digraph is in particular connected), and by definition it is acyclic and transitively reduced. Let \(\widehat{\mathcal{H}}_{\mathcal{P}}\) denote its line digraph, as usual. If \(\mathcal{H}_{\mathcal{P}}\) is not a tree, then there are two objects \(x<y\in\mathcal{P}\) and at least two distinct paths in \(\mathcal{H}_{\mathcal{P}}\) from \(x\) to \(y\). We proceed by showing that one can disconnect one of the paths from \(x\) to \(y\) in \(\mathcal{H}_{\mathcal{P}}\) with the resulting digraph remaining line connected. The claim then follows by induction on the number of multiple paths between pairs of points in \(\mathcal{H}_{\mathcal{P}}\).
Let \(x<y\in\mathcal{P}\) be two distinct vertices, and assume that there is more than one directed path from \(x\) to \(y\). Denote the collection of all directed paths from \(x\) to \(y\) by \(l_{1},\ldots,l_{n}\), for \(n>1\). We may assume that in each \(l_{j}\), except possibly one of them (since double edges are not allowed in a poset), there is a vertex \(x<a_{j}<y\), such that either \(x<a_{j}\) or \(a_{j}<y\) is indecomposable in \(\mathcal{P}\). We may assume without loss of generality that the \(l_{j}\) have no common vertices except \(x\) and \(y\).
We consider four possibilities.
_(1) \(x\) is not minimal and \(y\) is maximal:_ Then there exists some relation \(x_{0}<x\) in \(\mathcal{P}\), and removing a single edge \(x<a_{j}\) from all \(l_{j}\) except one of them, disconnects all the \(l_{j}\) except the one that was not modified. Since this leaves exactly one path from \(x\) to \(y\), the resulting digraph is still line connected.
_(2) \(y\) is not maximal and \(x\) is minimal:_ Then there exists a relation \(y<y_{0}\) in \(\mathcal{P}\), and removing a single edge \(a_{j}<y\) from each \(l_{j}\) except one of them, disconnects all the \(l_{j}\) except the one that was not modified. Again the resulting digraph is line connected.
_(3) \(x\) is not minimal and \(y\) is not maximal:_ Then remove a single indecomposable edge from each \(l_{j}\) except one of them, as in (1) and (2). In that case as well the resulting digraph is line connected.
_(4) \(x\) is minimal and \(y\) is maximal:_ In that case the associated line digraph splits into \(n\) connected components, each containing the line digraph of \(l_{j}\) for a unique \(1\leq j\leq n\). This contradicts the assumption that \(\mathcal{P}\) is line connected.
The proof is now complete by inductively performing this procedure for any set of multiple paths between pairs of vertices in \(\mathcal{H}_{\mathcal{P}}\).
Let \(\mathcal{P}\) be a finite line connected poset, and let \(\mathcal{T}\) be a line connected sub-tree of \(\mathcal{H}_{\mathcal{P}}\). Let \(\mathcal{P}_{\mathcal{T}}\subseteq\mathcal{P}\) be the sub-poset generated by \(\mathcal{T}\), and let \(\iota_{\mathcal{T}}\) denote the inclusion functor. Functoriality of the line digraph construction gives an inclusion \(\widehat{\iota}_{\mathcal{T}}\colon\widehat{\mathcal{P}_{\mathcal{T}}}\to\widehat{\mathcal{P}}\). Furthermore, one easily verifies commutativity of the square
\[\begin{array}{ccc}\widehat{\mathcal{P}_{\mathcal{T}}}&\xrightarrow{\;\alpha\;}&\mathcal{P}_{\mathcal{T}}\\ {\scriptstyle\widehat{\iota}_{\mathcal{T}}}\downarrow&&\downarrow{\scriptstyle\iota_{\mathcal{T}}}\\ \widehat{\mathcal{P}}&\xrightarrow{\;\alpha\;}&\mathcal{P}\end{array}\]
where \(\alpha\) is either the front functor \(\phi\) or the back functor \(\beta\). This square induces homomorphisms on the respective Grothendieck rings, and hence a commutative square of group homomorphisms
\[\begin{array}{ccc}\mathsf{Gr}(k\mathcal{P})&\xrightarrow{\;\alpha^{*}\;}&\mathsf{Gr}(k\widehat{\mathcal{P}})\\ {\scriptstyle\iota_{\mathcal{T}}^{*}}\downarrow&&\downarrow{\scriptstyle\widehat{\iota}_{\mathcal{T}}^{*}}\\ \mathsf{Gr}(k\mathcal{P}_{\mathcal{T}})&\xrightarrow{\;\alpha^{*}\;}&\mathsf{Gr}(k\widehat{\mathcal{P}_{\mathcal{T}}})\end{array}\]
**Definition 4.4**.: _Let \(\mathcal{P}\) be a finite line connected poset, and let \(\mathcal{T}\) be a line connected sub-tree of \(\mathcal{H}_{\mathcal{P}}\). We say that an element \([X]\in\mathsf{Gr}(k\mathcal{P})\) has a vanishing gradient on \(\mathcal{T}\) if_
\[\nabla(\iota_{\mathcal{T}}^{*}[X])=\widehat{\iota}_{\mathcal{T}}^{*}(\nabla[X ])=0.\]
Notice that for a module \(M\in k\mathcal{P}\mbox{-}\mathsf{mod}\), \(\iota_{\mathcal{T}}^{*}[M]\) is just the restriction of \(M\) to the sub-poset \(\mathcal{P}_{\mathcal{T}}\), which justifies Definition 4.4. Notice also that if \(\nabla[X]=0\) in \(\mathsf{Gr}(k\widehat{\mathcal{P}})\), then \(\nabla(\iota_{\mathcal{T}}^{*}[X])=0\) as well, namely the restriction of \(X\) to \(\mathcal{P}_{\mathcal{T}}\) has a vanishing gradient on \(\mathcal{T}\).
The following theorem is stated with respect to a maximal tree in \(\mathcal{H}_{\mathcal{P}}\), but one may easily obtain a restricted analog for any sub-tree of the Hasse diagram.
**Theorem 4.5**.: _Let \(\mathcal{P}\) be a finite line connected poset, and let \(\mathcal{T}\) be a line connected maximal tree for its Hasse diagram \(\mathcal{H}_{\mathcal{P}}\). Let \(\mathcal{P}_{\mathcal{T}}\subseteq\mathcal{P}\) denote the sub-poset generated by \(\mathcal{T}\). Let \(M\in k\mathcal{P}\mbox{-}\mathsf{mod}\) be a module with a vanishing gradient on \(\mathcal{T}\). Let \(M_{\mathcal{T}}\) denote the restriction of \(M\) to \(\mathcal{P}_{\mathcal{T}}\), so \(\iota_{\mathcal{T}}^{*}[M]=[M_{\mathcal{T}}]\in\mathsf{Gr}(k\mathcal{P}_{ \mathcal{T}})\). Then the following statements hold._
1. _For any pair of objects_ \(u,v\in\mathcal{P}\)_, there is an isomorphism_ \(\alpha_{u,v}\colon M(u)\to M(v)\)_, such that_ \(\alpha_{u,u}=1_{M(u)}\) _and_ \(\alpha_{v,w}\circ\alpha_{u,v}=\alpha_{u,w}\)_._
2. _For every pair of indecomposable morphisms_ \(u\leq w\) _and_ \(s\leq t\) _in_ \(\mathcal{P}_{\mathcal{T}}\)_,_ \[\alpha_{w,t}\circ M(u\leq w)=M(s\leq t)\circ\alpha_{u,s}.\]
3. \(M_{\mathcal{T}}\) _is locally constant if and only if_ \(M(u\leq v)\) _is an isomorphism for some indecomposable relation_ \(u\leq v\) _in_ \(\mathcal{P}_{\mathcal{T}}\)_._
_Furthermore, if \(M\in k\mathcal{P}\mbox{-}\mathsf{mod}\) is a module such that for every indecomposable relation \(u\leq v\) in \(\mathcal{P}_{\mathcal{T}}\) there exists an isomorphism \(\alpha_{u,v}\colon M(u)\to M(v)\), and these isomorphisms satisfy Statements (1) and (2), then \(M\) has a vanishing gradient on \(\mathcal{T}\)._
Proof.: By hypothesis \(\nabla[M_{\mathcal{T}}]=0\). Hence there is a natural isomorphism \(\alpha\colon\beta^{*}M_{\mathcal{T}}\to\phi^{*}M_{\mathcal{T}}\). If \((u,v)\) is an edge in \(\mathcal{T}\), and hence a vertex in \(\widehat{\mathcal{P}}_{\mathcal{T}}\), then we have
\[\alpha_{u,v}\colon\beta^{*}M_{\mathcal{T}}(u,v)=M(u)\to M(v)=\phi^{*}M_{ \mathcal{T}}(u,v),\]
that is, \(\alpha_{u,v}\) is the evaluation of the natural isomorphism \(\alpha\) on the vertex \((u,v)\). Define
\[\alpha_{v,u}\stackrel{{\mathrm{def}}}{{=}}\alpha_{u,v}^{-1}.\]
Since \(\mathcal{P}_{\mathcal{T}}\) is a poset and \(u\lneq v\), the opposite inequality does not hold, so \((v,u)\) is not a vertex in \(\widehat{\mathcal{P}_{\mathcal{T}}}\). Thus, \(\alpha_{v,u}\) is well defined. Notice that the vertices of \(\mathcal{P}_{\mathcal{T}}\) coincide with those of \(\mathcal{P}\).
Let \(v_{0}\) be any minimal object in \(\mathcal{P}_{\mathcal{T}}\). Then \(\alpha_{v_{0},u}\) is defined and is an isomorphism for each indecomposable relation \(v_{0}<u\) and, by induction on the length of a chain starting at \(v_{0}\), \(\alpha_{u,v}\) is defined and is an isomorphism for any \(u,v\in\mathcal{P}_{\mathcal{T}_{\geq v_{0}}}\). Since \(\mathcal{P}_{\mathcal{T}}\) has only finitely many minimal objects, it follows that \(\alpha_{u,v}\) is an isomorphism on any sub-poset of the form \(\mathcal{P}_{\mathcal{T}_{\geq v_{0}}}\), where \(v_{0}\) is minimal.
Let \(u_{0},u_{0}^{\prime}\) be two distinct minimal objects in \(\mathcal{P}_{\mathcal{T}}\). Since \(\mathcal{P}_{\mathcal{T}}\) is line connected by construction, it is in particular connected. Hence there are minimal objects \(u_{0}=v_{1},v_{2},\ldots,v_{k}=u_{0}^{\prime}\), such that for each \(1\leq i\leq k-1\), the intersection of sub-posets \(\mathcal{P}_{\mathcal{T}_{\geq v_{i}}}\cap\mathcal{P}_{\mathcal{T}_{\geq v_{ i+1}}}\) is nonempty. Let \(a_{i}\in\mathcal{P}_{\mathcal{T}_{\geq v_{i}}}\cap\mathcal{P}_{\mathcal{T}_{ \geq v_{i+1}}}\) be any object. Thus one has an isomorphism
\[\alpha_{a_{i},v_{i+1}}\circ\alpha_{v_{i},a_{i}}=\alpha_{v_{i+1},a_{i}}^{-1} \circ\alpha_{v_{i},a_{i}}\colon M(v_{i})\to M(v_{i+1}).\]
Since between any two objects in \(\mathcal{P}_{\mathcal{T}}\) there is by assumption at most one path in \(\mathcal{T}\), compositions of isomorphisms of the form \(\alpha_{x,y}\), where \((x,y)\) is an object in \(\widehat{\mathcal{P}}_{\mathcal{T}}\), and the inverses of such isomorphisms define \(\alpha_{u,v}\) for any pair of objects \(u,v\in\mathcal{P}_{\mathcal{T}}\). Since \(\mathcal{P}\) and \(\mathcal{P}_{\mathcal{T}}\) coincide on objects, this proves Part (1).
Let \((u_{1},u_{2},u_{3})\) be an edge in \(\widehat{\mathcal{P}}_{\mathcal{T}}\). Then naturality of \(\alpha\) on the relation \((u_{1},u_{2})\leq(u_{2},u_{3})\) gives a commutative square
\[\begin{array}{ccc}M(u_{1})&\xrightarrow{\;\alpha_{u_{1},u_{2}}\;}&M(u_{2})\\ {\scriptstyle M(u_{1}\leq u_{2})}\downarrow&&\downarrow{\scriptstyle M(u_{2}\leq u_{3})}\\ M(u_{2})&\xrightarrow{\;\alpha_{u_{2},u_{3}}\;}&M(u_{3})\end{array}\]
Thus \(M(u_{2}\leq u_{3})=\alpha_{u_{2},u_{3}}\circ M(u_{1}\leq u_{2})\circ\alpha_{u_{2},u_{1}}\). By induction, if \(u_{1}\leq u_{2}\leq\cdots\leq u_{n}\) is any chain of indecomposable relations in \(\mathcal{P}_{\mathcal{T}}\), then
\[M(u_{n-1}\leq u_{n})=\alpha_{u_{2},u_{n}}\circ M(u_{1}\leq u_{2})\circ\alpha_ {u_{n-1},u_{1}}.\]
Let \(u\leq v\) and \(a\leq b\) be two indecomposable relations in \(\mathcal{P}_{\mathcal{T}}\). Let \(u_{0},a_{0}\) be two minimal objects in \(\mathcal{P}_{\mathcal{T}}\), such that \(u_{0}\leq u\) and \(a_{0}\leq a\). Let
\[u_{0}\leq u_{1}\leq\cdots\leq u_{n}\leq u\leq v,\quad\text{and}\quad a_{0}\leq a _{1}\leq\cdots\leq a_{k}\leq a\leq b\]
be chains of indecomposable relations in \(\mathcal{P}_{\mathcal{T}}\). Then
\[M(u\leq v)=\alpha_{u_{1},v}\circ M(u_{0}\leq u_{1})\circ\alpha_{u,u_{0}}, \quad\text{and}\quad M(a\leq b)=\alpha_{a_{1},b}\circ M(a_{0}\leq a_{1})\circ \alpha_{a,a_{0}}.\]
Thus it suffices to prove that
\[M(u_{0}\leq u_{1})=\alpha_{a_{1},u_{1}}\circ M(a_{0}\leq a_{1})\circ\alpha_{u_ {0},a_{0}}.\]
Notice that \((u_{0},u_{1})\) and \((a_{0},a_{1})\) are minimal objects in \(\widehat{\mathcal{P}}_{\mathcal{T}}\). Since \(\widehat{\mathcal{P}}_{\mathcal{T}}\) is connected, there is a sequence of minimal objects in \(\widehat{\mathcal{P}}_{\mathcal{T}}\)
\[(u_{0},u_{1})=(x_{1},y_{1}),(x_{2},y_{2}),\ldots,(x_{r},y_{r})=(a_{0},a_{1})\]
such that \(\widehat{\mathcal{P}}_{\mathcal{T}_{\geq(x_{i},y_{i})}}\cap\widehat{\mathcal{ P}}_{\mathcal{T}_{\geq(x_{i+1},y_{i+1})}}\) is nonempty for each \(1\leq i\leq r-1\). Let \((s_{i},t_{i})\) be an object in the intersection, so that \((s_{i},t_{i})\geq(x_{i},y_{i}),(x_{i+1},y_{i+1})\). It follows that
\[\alpha_{y_{i},t_{i}}\circ M(x_{i}\leq y_{i})\circ\alpha_{s_{i},x_{i}}=M(s_{i} \leq t_{i})=\alpha_{y_{i+1},t_{i}}\circ M(x_{i+1}\leq y_{i+1})\circ\alpha_{s_ {i},x_{i+1}}.\]
Thus
\[M(x_{i+1}\leq y_{i+1})=\alpha_{t_{i},y_{i+1}}\alpha_{y_{i},t_{i}}\circ M(x_{i} \leq y_{i})\circ\alpha_{s_{i},x_{i}}\alpha_{x_{i+1},s_{i}}=\alpha_{y_{i},y_{i+1 }}\circ M(x_{i}\leq y_{i})\circ\alpha_{x_{i+1},x_{i}}\]
for each \(1\leq i\leq r-1\). Part (2) follows by induction on \(r\).
Part (3) follows at once from (2).
Finally, assume that \(M\in k\mathcal{P}\mbox{-}\mathsf{mod}\) satisfies Statements (1) and (2). Let \(\alpha\) be a family of morphisms for which these statements hold. By (1), for each object \((u,v)\in\widehat{\mathcal{P}_{\mathcal{T}}}\) we have an isomorphism
\[\alpha_{u,v}\colon M(u)=\beta^{*}M(u,v)\to\phi^{*}M(u,v)=M(v).\]
If \((u,v)\leq(x,y)\) in \(\widehat{\mathcal{P}_{\mathcal{T}}}\), then by Condition (2)
\[\alpha_{v,y}\circ M(u<v)=M(x<y)\circ\alpha_{u,x}.\]
This shows that \(\alpha\) restricts to a natural isomorphism \(\beta^{*}M\to\phi^{*}M\), and hence \([M]\) has a vanishing gradient on \(\mathcal{T}\).
**Remark 4.6**.: _Theorem 4.5 shows that the vanishing of the gradient on a sub-poset generated by a tree gives a lot of information about the module in question. It is instructive however to note that if the poset in question consists of a single pair of comparable objects, then any module on such poset, where the point modules are isomorphic, will have a vanishing gradient. This of course does not contradict the conclusion of the theorem. We leave the details to the reader._
**Definition 4.7**.: _With the notation of Theorem 4.5, define \(\operatorname{Ker}M,\operatorname{Im}M\in k\mathcal{P}\mbox{-}\mathsf{mod}\) as follows._
1. \[\operatorname{Ker}M(u)\stackrel{\mathrm{def}}{=}\begin{cases}\operatorname{Ker}(M(u<v))&u<v\text{ indecomposable},\ u\text{ not maximal}\\ 0&u\text{ maximal}\end{cases}\]
_Define_ \(\operatorname{Ker}M(x<y)\stackrel{{\mathrm{def}}}{{=}}0\) _for any non-identity relation in_ \(\mathcal{P}\)_._
2. \[\operatorname{Im}M(u)\stackrel{{\mathrm{def}}}{{=}}\begin{cases} \operatorname{Im}(M(w<u))&\text{for any indecomposable}\;w<u,\;\;u\text{ not minimal}\\ M(u)/\operatorname{Ker}(M(u<v))&\text{for any indecomposable}\;u<v,\;u\text{ minimal}\end{cases}\]
_Define_ \(\operatorname{Im}M(u<v)\) _to be the restriction of_ \(M(u<v)\) _to_ \(\operatorname{Im}M(u)\)_._
**Corollary 4.8**.: _With the notation and hypotheses of Theorem 4.5, the modules \(\operatorname{Ker}M\) and \(\operatorname{Im}M\) are well defined \(k\mathcal{P}_{\mathcal{T}}\)-modules. Furthermore, \(\operatorname{Ker}M(u)\cong\operatorname{Ker}M(v)\) for all non-maximal \(u,v\in\mathcal{P}\), and \(\operatorname{Im}M(u)\cong\operatorname{Im}M(v)\) for all \(u,v\in\mathcal{P}\)._
Proof.: Let \(u<v\) and \(u<v^{\prime}\) be indecomposable morphisms in \(\mathcal{P}\). Then by Theorem 4.5(2),
\[\alpha_{v,v^{\prime}}\circ M(u<v)=M(u<v^{\prime}).\]
Since \(\alpha_{v,v^{\prime}}\) is an isomorphism, \(\operatorname{Ker}(M(u<v))=\operatorname{Ker}(M(u<v^{\prime}))\). This shows that \(\operatorname{Ker}M\) is well defined.
Let \(w<u\) and \(w^{\prime}<u\) be two indecomposable morphisms in \(\mathcal{P}\). Then, again by 4.5(2),
\[M(w<u)=M(w^{\prime}<u)\circ\alpha_{w,w^{\prime}}.\]
Thus \(\operatorname{Im}M\) is well defined on non-minimal objects. If \(u\) is minimal, then \(\operatorname{Im}M(u)\) is also well defined since \(\operatorname{Ker}(M(u<v))\) does not depend on the choice of \(v\). The definition on morphisms is clear.
Let \(u,s\in\mathcal{P}\) be non-maximal, and let \(u<w\) and \(s<t\) be indecomposable morphisms in \(\mathcal{P}\). Let \(x\in\operatorname{Ker}(M(u<w))\), and let \(y=\alpha_{u,s}(x)\). Then by Theorem 4.5(2),
\[0=M(u<w)(x)=\alpha_{t,w}\circ M(s<t)\circ\alpha_{u,s}(x)=\alpha_{t,w}\circ M(s <t)(y). \tag{5}\]
Consider the following diagram with exact rows, in which the outer vertical maps are induced by \(\alpha_{u,s}\):
\[\begin{array}{ccccccccc}0&\to&\operatorname{Ker}M(u)&\to&M(u)&\xrightarrow{M(u<w)}&\operatorname{Im}M(w)&\to&0\\ &&\downarrow&&\downarrow{\scriptstyle\alpha_{u,s}}&&\downarrow&&\\ 0&\to&\operatorname{Ker}M(s)&\to&M(s)&\xrightarrow{M(s<t)}&\operatorname{Im}M(t)&\to&0\end{array}\]
Equation (5) implies that the left square commutes, and hence that so does the right square, for all \(u,s\in\mathcal{P}\). Hence \(\alpha_{u,s}\) restricted to \(\operatorname{Ker}M(u)\) is a monomorphism and the induced map on \(\operatorname{Im}M(w)\) is an epimorphism. The same argument using \(\alpha_{s,u}=\alpha_{u,s}^{-1}\) and finite dimensionality of all point modules completes the proof that \(\operatorname{Ker}M(u)\cong\operatorname{Ker}M(s)\) for all non-maximal \(u,s\in\mathcal{P}\). Notice also that at the same time we have shown that \(\operatorname{Im}M(w)\cong\operatorname{Im}M(t)\) for all \(w,t\in\mathcal{P}\) that are not minimal. Finally, notice that if \(u\) is minimal, then
\[\operatorname{Im}M(u)\stackrel{{\mathrm{def}}}{{=}}M(u)/ \operatorname{Ker}(M(u<v))=\operatorname{Im}(M(u<v))\stackrel{{ \mathrm{def}}}{{=}}\operatorname{Im}M(v).\]
Thus \(\operatorname{Im}M(u)\cong\operatorname{Im}M(v)\) for all \(u,v\in\mathcal{P}\).
**Corollary 4.9**.: _Assume the notation and hypotheses of Theorem 4.5. Fix a non-maximal object \(x_{0}\in\mathcal{P}\). Then there is an endomorphism \(\gamma_{0}\in\operatorname{End}(M(x_{0}))\), such that for any \(y<y^{\prime}\) in \(\mathcal{P}_{\mathcal{T}}\),_
\[M(y<y^{\prime})=\alpha_{x_{0},y^{\prime}}\circ\gamma_{0}^{k}\circ\alpha_{y,x_{ 0}},\]
_where \(k\) is the graph distance from \(y\) to \(y^{\prime}\) in \(\mathcal{T}\)._
Proof.: Let \(\alpha\colon\beta^{*}M_{\mathcal{T}}\to\phi^{*}M_{\mathcal{T}}\) be a natural isomorphism. Let \(x_{0}<x_{1}\) be an indecomposable relation in \(\mathcal{P}_{\mathcal{T}}\), and set
\[\gamma_{0}=\alpha_{x_{1},x_{0}}\circ M(x_{0}<x_{1})\in\operatorname{End}(M(x_{ 0})).\]
Let \(y<y^{\prime}\) in \(\mathcal{P}_{\mathcal{T}}\) be any pair of comparable objects, and let
\[y=z_{0}<z_{1}<\cdots<z_{k}=y^{\prime}\]
be a decomposition of the relation \(y<y^{\prime}\) into a sequence of indecomposable relations in \(\mathcal{P}_{\mathcal{T}}\).
By Theorem 4.5(2), for each \(0\leq i\leq k-1\),
\[M(z_{i}<z_{i+1})=\alpha_{x_{1},z_{i+1}}\circ M(x_{0}<x_{1})\circ\alpha_{z_{i},x_{0}}.\]
Hence
\[M(y<y^{\prime}) =M(z_{k-1}<z_{k})\circ\cdots\circ M(z_{0}<z_{1})\] \[=\alpha_{x_{1},z_{k}}\circ M(x_{0}<x_{1})\circ\alpha_{z_{k-1},x_{0}}\circ\alpha_{x_{1},z_{k-1}}\circ M(x_{0}<x_{1})\circ\alpha_{z_{k-2},x_{0}}\circ\cdots\circ\alpha_{x_{1},z_{1}}\circ M(x_{0}<x_{1})\circ\alpha_{z_{0},x_{0}}\] \[=\alpha_{x_{1},z_{k}}\circ M(x_{0}<x_{1})\circ\alpha_{x_{1},x_{0}}\circ M(x_{0}<x_{1})\circ\cdots\circ\alpha_{x_{1},x_{0}}\circ M(x_{0}<x_{1})\circ\alpha_{z_{0},x_{0}}\] \[=\alpha_{x_{1},y^{\prime}}\circ\alpha_{x_{0},x_{1}}\circ\alpha_{x_{1},x_{0}}\circ M(x_{0}<x_{1})\circ\gamma_{0}^{k-1}\circ\alpha_{y,x_{0}}\] \[=\alpha_{x_{0},y^{\prime}}\circ\gamma_{0}^{k}\circ\alpha_{y,x_{0}},\]
as claimed.
The next obvious question is what can be said about a virtual module \([X]=[M]-[N]\) with a vanishing gradient on a tree, where \(M,N\in k\mathcal{P}\text{-mod}\). By linearity, \(\nabla[X]=0\) if and only if
\[\phi^{*}[M]-\beta^{*}[M]=\nabla[M]=\nabla[N]=\phi^{*}[N]-\beta^{*}[N],\]
or equivalently if and only if there is a natural isomorphism
\[\phi^{*}M\oplus\beta^{*}N\cong\phi^{*}N\oplus\beta^{*}M. \tag{6}\]
One may expect that \(M\) and \(N\) in that case would differ by locally constant modules, namely that \(M\oplus C\cong N\oplus D\) for some locally constant modules \(C,D\in k\mathcal{P}\text{-mod}\). This is not true in general as Example 4.10 demonstrates.
**Example 4.10**.: _Let \(\mathcal{P}\) be any finite poset. Consider the constant module \(\underline{k}\in k\mathcal{P}\operatorname{\mathsf{-mod}}\) that assigns \(k\) to each object and the identity to each morphism. On the other hand let \(\underline{k}_{0}\in k\mathcal{P}\operatorname{\mathsf{-mod}}\) denote the module that assigns \(k\) to each object and the zero homomorphism to each morphism. It is immediate that \(\nabla[\underline{k}]=\nabla[\underline{k}_{0}]=0\). Let \([X]\in\mathsf{Gr}(k\mathcal{P})\) be any element and set \([Y]=[X]+[\underline{k}]\) and \([Y_{0}]=[X]+[\underline{k}_{0}]\). Then \(\nabla[Y]=\nabla[Y_{0}]\), but the difference_
\[[Y]-[Y_{0}]=[\underline{k}]-[\underline{k}_{0}]\]
_is not locally constant._
Next we consider the relationship between the gradient and the rank invariant.
**Definition 4.11**.: _Let \(M\in k\mathcal{P}\operatorname{\mathsf{-mod}}\) be a persistence module. Define the rank invariant_
\[\operatorname{rk}[M]\colon\operatorname{Mor}(\mathcal{P})\to\mathbb{N}\]
_by \(\operatorname{rk}[M](x\leq y)\stackrel{{\mathrm{def}}}{{=}} \operatorname{rk}M(x\leq y)\) for each relation \(x\leq y\) in \(\mathcal{P}\). For a virtual module \([X]=[M]-[N]\) in \(\mathsf{Gr}(k\mathcal{P})\), where \(M,N\in k\mathcal{P}\operatorname{\mathsf{-mod}}\), define the rank invariant_
\[\operatorname{rk}[X]\colon\operatorname{Mor}(\mathcal{P})\to\mathbb{Z}\]
_by \(\operatorname{rk}[X]\stackrel{{\mathrm{def}}}{{=}}\operatorname{ rk}[M]-\operatorname{rk}[N]\)._
Notice that \(\operatorname{rk}[M](x\leq x)=\dim_{k}M(x)\) for each object \(x\in\mathcal{P}\). Notice also that the rank invariant can be regarded as a group homomorphism
\[\operatorname{rk}\colon\mathsf{Gr}(k\mathcal{P})\to\mathbb{Z}^{\operatorname {Mor}(\mathcal{P})},\]
where the codomain is the abelian group of all functions from \(\operatorname{Mor}(\mathcal{P})\) to \(\mathbb{Z}\), or in other words the free abelian group generated by all morphisms in \(\mathcal{P}\).
**Remark 4.12**.: _The way we define the rank invariant here includes the rank of a module (or a virtual module) on identity morphisms, namely it incorporates the dimensions of the point modules \(M(v)\) for all objects \(v\) in \(\mathcal{P}\). Notice also that for \(M\in k\mathcal{P}\operatorname{\mathsf{-mod}}\), \([M]\in\operatorname{Ker}(\operatorname{rk})\) if and only if \(M=0\), but as Example 4.10 shows, this is not the case for virtual modules._
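The phenomenon of Example 4.10 is already visible numerically on the poset \(a<b\); a minimal sketch (our own encoding) showing that the rank invariant separates \(\underline{k}\) from \(\underline{k}_{0}\) although their gradients coincide:

```python
# Sketch (our encoding): on the poset a < b, the modules k and k_0 of
# Example 4.10 have equal gradients but different rank invariants.
import numpy as np

k_const = np.array([[1.0]])  # the constant module: identity on a < b
k_zero = np.array([[0.0]])   # k_0: the zero map on a < b

print(int(np.linalg.matrix_rank(k_const)))  # 1 = rk[k](a < b)
print(int(np.linalg.matrix_rank(k_zero)))   # 0 = rk[k_0](a < b)
# The line poset of a < b has the single object (a, b) and no relations,
# so phi* and beta* of both modules are the one-dimensional module k,
# and both gradients vanish; rk nevertheless distinguishes the modules.
```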
Next, we turn to investigate virtual modules with a vanishing gradient.
**Theorem 4.13**.: _Let \(\mathcal{P}\) be a finite poset. Let \([X]=[M]-[N]\in\mathsf{Gr}(k\mathcal{P})\) be an element with \(M,N\in k\mathcal{P}\operatorname{\mathsf{-mod}}\). Then_
1. _Let_ \((u_{0},u_{1})<(v_{0},v_{1})\) _be a pair of comparable objects in_ \(\widehat{\mathcal{P}}\)_. Assume that_ \(\nabla[X]=0\)_. Then_ \(\operatorname{rk}[X](u_{0}<v_{0})=\operatorname{rk}[X](u_{1}<v_{1})\)_._
_Assume in addition that \(\mathcal{P}\) is line connected, let \(\mathcal{T}\) be a line connected maximal tree for \(\mathcal{H}_{\mathcal{P}}\), and let \(\mathcal{P}_{\mathcal{T}}\subseteq\mathcal{P}\) be the sub-poset generated by \(\mathcal{T}\). If \([X]\) has a vanishing gradient on \(\mathcal{T}\), then \(\operatorname{rk}[X]\) has the following properties:_
2. _It is constant on all identity morphisms in_ \(\mathcal{P}\)_._
3. _For any pair of indecomposable relations_ \(u_{0}<u_{1}\) _and_ \(v_{0}<v_{1}\) _in_ \(\mathcal{P}_{\mathcal{T}}\)_, one has_ \(\operatorname{rk}[X](u_{0}<u_{1})=\operatorname{rk}[X](v_{0}<v_{1})\)_._
Proof.: Since \(\nabla[X]=0\), we have a natural isomorphism \(\eta\colon\phi^{*}M\oplus\beta^{*}N\xrightarrow{\cong}\phi^{*}N\oplus\beta^{*}M\) (see Equation (6)). Let \((u_{0},u_{1})<(v_{0},v_{1})\) be a pair of comparable objects in \(\widehat{\mathcal{P}}\). Then one has a commutative square
\[\begin{array}{ccc}M(u_{1})\oplus N(u_{0})&\xrightarrow{\;\eta_{(u_{0},u_{1})}\;}&N(u_{1})\oplus M(u_{0})\\ \downarrow&&\downarrow\\ M(v_{1})\oplus N(v_{0})&\xrightarrow{\;\eta_{(v_{0},v_{1})}\;}&N(v_{1})\oplus M(v_{0})\end{array}\]
whose vertical maps are \(M(u_{1}<v_{1})\oplus N(u_{0}<v_{0})\) and \(N(u_{1}<v_{1})\oplus M(u_{0}<v_{0})\). Since the horizontal maps are isomorphisms, it follows that \(\operatorname{rk}M(u_{0}<v_{0})+\operatorname{rk}N(u_{1}<v_{1})=\operatorname{rk}M(u_{1}<v_{1})+\operatorname{rk}N(u_{0}<v_{0})\), and hence that \(\operatorname{rk}[X](u_{0}<v_{0})=\operatorname{rk}[X](u_{1}<v_{1})\). This proves Part (1).
Throughout the rest of the proof assume that \(\mathcal{P}\) is line connected. Fix a line connected maximal tree \(\mathcal{T}\) for \(\mathcal{H}_{\mathcal{P}}\). Since \([X]\) has a vanishing gradient on \(\mathcal{T}\), we have a natural isomorphism \(\phi^{*}M_{\mathcal{T}}\oplus\beta^{*}N_{\mathcal{T}}\cong\phi^{*}N_{\mathcal{ T}}\oplus\beta^{*}M_{\mathcal{T}}\). Thus, for any object \((x,y)\in\widehat{\mathcal{P}}_{\mathcal{T}}\) one has \(M(x)\oplus N(y)\cong N(x)\oplus M(y)\). Hence
\[\operatorname{rk}[X](x\leq x)=\dim_{k}M(x)-\dim_{k}N(x)=\dim_{k}M(y)-\dim_{k} N(y)=\operatorname{rk}[X](y\leq y).\]
Since every \(x\in\mathcal{P}_{\mathcal{T}}\) is either the source coordinate or the target coordinate in an object of \(\widehat{\mathcal{P}}_{\mathcal{T}}\), and since \(\widehat{\mathcal{P}}_{\mathcal{T}}\) is connected, Part (2) follows.
Let \((u_{0},u_{1})\) be a minimal object in \(\widehat{\mathcal{P}}_{\mathcal{T}}\). By Part (1) and induction, \(\operatorname{rk}[X](u_{r}<u_{r+1})=\operatorname{rk}[X](u_{0}<u_{1})\) for any object \((u_{r},u_{r+1})\in\widehat{\mathcal{P}}_{\mathcal{T}}\) that is the target of a directed path from \((u_{0},u_{1})\) in \(\widehat{\mathcal{P}}_{\mathcal{T}}\). Let \((u_{0},u_{1})\) and \((v_{0},v_{1})\) be minimal objects in \(\widehat{\mathcal{P}}_{\mathcal{T}}\). Since \(\widehat{\mathcal{P}}_{\mathcal{T}}\) is connected, there are minimal objects
\[(u_{0},u_{1})=(x_{1},y_{1}),(x_{2},y_{2}),\ldots,(x_{k},y_{k})=(v_{0},v_{1})\]
such that for each \(1\leq i\leq k-1\) the intersection of sub-posets \(\widehat{\mathcal{P}}_{\widehat{\mathcal{T}}_{\geq(x_{i},y_{i})}}\cap\widehat{ \mathcal{P}}_{\widehat{\mathcal{T}}_{\geq(x_{i+1},y_{i+1})}}\) is nonempty. Let \((a_{i},b_{i})\) be any object in the intersection. By the argument above,
\[\operatorname{rk}[X](x_{i}<y_{i})=\operatorname{rk}[X](a_{i}<b_{i})= \operatorname{rk}[X](x_{i+1}<y_{i+1})\]
and since this holds for all \(i\), we have \(\operatorname{rk}[X](u_{0}<u_{1})=\operatorname{rk}[X](v_{0}<v_{1})\). It follows that \(\operatorname{rk}[X]\) is constant on all indecomposable relations in \(\mathcal{P}_{\mathcal{T}}\), as claimed in Part (3).
The following example shows that the statements of Theorem 4.13 are best possible in the sense that two modules with equal gradients may have different rank invariants when evaluated on morphisms of length greater than \(1\).
**Example 4.14**.: _Let \(\mathcal{P}\) denote the poset \([2]\) with objects \(0,1,2\) and the ordinary order relation. Define \(M,N\in k[2]\mbox{-}\mathsf{mod}\) as follows:_
\[M\colon\quad k^{2}\xrightarrow{\left(\begin{smallmatrix}1&0\\ 0&0\end{smallmatrix}\right)}k^{2}\xrightarrow{\left(\begin{smallmatrix}0&0\\ 0&1\end{smallmatrix}\right)}k^{2},\qquad N\colon\quad k^{2}\xrightarrow{\left(\begin{smallmatrix}1&0\\ 0&0\end{smallmatrix}\right)}k^{2}\xrightarrow{\left(\begin{smallmatrix}1&0\\ 0&0\end{smallmatrix}\right)}k^{2}.\]
_It is immediate that \(\phi^{*}M\cong\beta^{*}M\) and \(\phi^{*}N\cong\beta^{*}N\), so \(\nabla[M]=\nabla[N]=0\). But \(\operatorname{rk}[M](0<2)=0\) while \(\operatorname{rk}[N](0<2)=1\)._
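The rank computations of Example 4.14 can be checked mechanically; a minimal numpy sketch (reading the arrows as the diagonal matrices displayed above):

```python
# Sketch: verifying Example 4.14 numerically (encoding ours).
import numpy as np

d10 = np.diag([1.0, 0.0])  # the first arrow of both M and N
d01 = np.diag([0.0, 1.0])  # the second arrow of M

# M: k^2 -> k^2 -> k^2 via d10 then d01; N uses d10 twice.
print(np.linalg.matrix_rank(d01 @ d10))  # 0 = rk[M](0 < 2)
print(np.linalg.matrix_rank(d10 @ d10))  # 1 = rk[N](0 < 2)
# phi*M (the arrow d01) and beta*M (the arrow d10) are both rank-one maps
# k^2 -> k^2, hence isomorphic as representations of the one-arrow poset,
# so nabla[M] = 0; the same argument applies to N.
```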
Above we considered the kernel and image modules associated to a module \(M\in k\mathcal{P}\text{-}\mathsf{mod}\) in the case where \(M\) has a vanishing gradient on some tree \(\mathcal{T}\subseteq\mathcal{H}_{\mathcal{P}}\) (see Definition 4.7 and Corollary 4.8). Next, we consider kernel and cokernel modules in \(k\widehat{\mathcal{P}}\text{-}\mathsf{mod}\), associated to an arbitrary module \(M\in k\mathcal{P}\text{-}\mathsf{mod}\).
**Lemma 4.15**.: _Let \(M\in k\mathcal{P}\text{-}\mathsf{mod}\) be any module. Let \(K_{M},C_{M}\in k\widehat{\mathcal{P}}\text{-}\mathsf{mod}\) denote the modules defined on objects by_
\[K_{M}(x,y)\stackrel{{\mathrm{def}}}{{=}}\operatorname{Ker}(M(x,y) \colon M(x)\to M(y))\quad\text{and}\quad C_{M}(x,y)\stackrel{{ \mathrm{def}}}{{=}}\operatorname{coKer}(M(x,y)\colon M(x)\to M(y))\]
_with the natural induced maps on morphisms. Then \(K_{M}\) and \(C_{M}\) are virtually trivial._
Proof.: For any objects \((x,y)\) and \((y,z)\) in \(\widehat{\mathcal{P}}\) one has the following diagram of vector spaces with exact rows, in which the two middle vertical maps are \(M(x<y)\) and \(M(y<z)\), and the outer vertical maps are the induced maps on kernels and cokernels:
\[\begin{array}{ccccccccccc}0&\to&K_{M}(x,y)&\to&M(x)&\xrightarrow{M(x<y)}&M(y)&\to&C_{M}(x,y)&\to&0\\ &&\downarrow&&\downarrow&&\downarrow&&\downarrow&&\\ 0&\to&K_{M}(y,z)&\to&M(y)&\xrightarrow{M(y<z)}&M(z)&\to&C_{M}(y,z)&\to&0\end{array}\]
A straightforward diagram chase now shows that \(K_{M}((x,y)<(y,z))\) and \(C_{M}((x,y)<(y,z))\) are the zero homomorphism.
**Corollary 4.16**.: _Let \(\mathcal{P}\) be a finite line connected poset, let \(\mathcal{T}\) be a line connected maximal tree for \(\mathcal{H}_{\mathcal{P}}\), and let \(\mathcal{P}_{\mathcal{T}}\subseteq\mathcal{P}\) be the sub-poset generated by \(\mathcal{T}\). Let \([X]=[M]-[N]\in\mathsf{Gr}(k\mathcal{P})\) be an element of vanishing gradient on \(\mathcal{T}\), with \(M,N\in k\mathcal{P}\mbox{-}\mathsf{mod}\). Then there is an integer \(D\), such that \([K_{M}]=[K_{N}]+[\underline{D}]\) and \([C_{M}]=[C_{N}]+[\underline{D}]\) in \(\mathsf{Gr}(k\widehat{\mathcal{P}}_{\mathcal{T}})\), where \([\underline{D}]\) denotes the virtually trivial module that associates \(k^{D}\) with every object of \(\widehat{\mathcal{P}}_{\mathcal{T}}\)._
Proof.: By Lemma 4.15, \(K_{M}\) and \(K_{N}\), as well as \(C_{M}\) and \(C_{N}\) are virtually trivial, and hence their isomorphism type is determined by their values on objects by Corollary 1.6. Thus, it suffices to prove that the appropriate ranks coincide on all objects of \(\widehat{\mathcal{P}}_{T}\).
By Theorem 4.13, \(\operatorname{rk}[X]=\operatorname{rk}[M]-\operatorname{rk}[N]\) is constant on all identity morphisms and on all indecomposable morphisms \(x<y\) in \(\mathcal{P}_{\mathcal{T}}\). Thus we may write \(\dim_{k}M(x)-\dim_{k}N(x)=K\) for all \(x\in\mathcal{P}\), and \(\operatorname{rk}[M](x<y)-\operatorname{rk}[N](x<y)=T\) for all indecomposable relations \(x<y\) in \(\mathcal{T}\), where \(K\) and \(T\) are fixed integers. Set \(D=K-T\). Thus for an object \((x,y)\in\widehat{\mathcal{P}}_{\mathcal{T}}\),
\[\dim_{k}K_{M}(x,y) =\mathrm{rk}[M](x\leq x)-\mathrm{rk}[M](x<y)\] \[=\mathrm{rk}[N](x\leq x)-\mathrm{rk}[N](x<y)+K-T\] \[=\dim_{k}K_{N}(x,y)+D.\]
Since both \(K_{M}\) and \(K_{N}\) are virtually trivial, it follows that \([K_{M}]=[K_{N}]+[\underline{D}]\). This proves the statement for \(K_{M}\) and \(K_{N}\).
Similarly,
\[\dim_{k}C_{M}(x,y) =\mathrm{rk}[M](y\leq y)-\mathrm{rk}[M](x<y)\] \[=\mathrm{rk}[N](y\leq y)-\mathrm{rk}[N](x<y)+K-T\] \[=\dim_{k}C_{N}(x,y)+D.\]
By the same argument as above \([C_{M}]=[C_{N}]+[\underline{D}]\), as claimed.
Recall that for any ring \(A\) one has a ring homomorphism
\[\Upsilon_{A}\colon\mathsf{Gr}(A)\to\mathsf{Gr}_{e}(A),\]
where \(\mathsf{Gr}_{e}(A)\) is the quotient of \(\mathsf{Gr}(A)\) by the relation \([K]-[M]+[C]\), if \(M\) is an extension of \(C\) by \(K\) (See Definition 1.11). Since \(\phi^{*}\) and \(\beta^{*}\) are exact functors, they induce group homomorphisms
\[\phi^{*}_{e},\beta^{*}_{e}\colon\mathsf{Gr}_{e}(k\mathcal{P})\to\mathsf{Gr}_{ e}(k\widehat{\mathcal{P}}),\]
such that \(\Upsilon_{k\widehat{\mathcal{P}}}\circ\phi^{*}=\phi^{*}_{e}\circ\Upsilon_{k \mathcal{P}}\), and similarly for \(\beta^{*}\). Thus, one may define
\[\nabla_{e}\colon\mathsf{Gr}_{e}(k\mathcal{P})\to\mathsf{Gr}_{e}(k\widehat{ \mathcal{P}})\]
by \(\nabla_{e}\stackrel{\mathrm{def}}{=}\phi^{*}_{e}-\beta^{*}_{e}\). The following corollary shows that a significant amount of information about modules is lost upon passing to the reduced Grothendieck groups.
**Corollary 4.17**.: _For any \(M\in k\mathcal{P}\mathsf{\mathsf{\mathsf{--mod}}}\), one has_
\[\nabla_{e}[M]=[C_{M}]-[K_{M}]\]
_in \(\mathsf{Gr}_{e}(k\widehat{\mathcal{P}})\)._
Proof.: For each \(M\in k\mathcal{P}\mathsf{\mathsf{--mod}}\) let \(I_{M}\in k\widehat{\mathcal{P}}\mathsf{\mathsf{--mod}}\) denote the image functor, i.e. \(I_{M}(u,v)\stackrel{{\mathrm{def}}}{{=}}\mathrm{Im}(M(u<v))\). Then we obtain two short exact sequences in \(k\widehat{\mathcal{P}}\mathsf{\mathsf{--mod}}\)
\[0\to K_{M}\to\beta^{*}M\to I_{M}\to 0\quad\text{and}\quad 0\to I_{M}\to\phi^{*}M\to C_{M}\to 0.\]
It follows that in \(\mathsf{Gr}_{e}(k\widehat{\mathcal{P}})\),
\[[\beta^{*}_{e}M]=[K_{M}]+[I_{M}]\quad\text{and}\quad[\phi^{*}_{e}M]=[I_{M}]+[ C_{M}],\]
so \(\nabla_{e}[M]\stackrel{{\mathrm{def}}}{{=}}[\phi_{e}^{*}M]-[\beta_{e}^{ *}M]=[C_{M}]-[K_{M}]\).
**Remark 4.18**.: _The functor \(K_{M}\in k\widehat{\mathcal{P}}\mbox{-}\mathsf{mod}\) evaluated at an object \((x,y)\) returns the subspace of elements that are "present" at \(x\), but do not "survive" to \(y\). Thus for a fixed \(x\in\mathcal{P}\), the intersection \(\bigcap_{(x,y)\in\widehat{\mathcal{P}}}K_{M}(x,y)\) is the subspace of \(M(x)\) of all elements that "die" at \(x\)._
_Similarly, the functor \(C_{M}\in k\widehat{\mathcal{P}}\mbox{-}\mathsf{mod}\), evaluated at an object \((x,y)\) returns the quotient of \(M(y)\) by the image of \(M(x<y)\). For a fixed object \(y\), consider the system of all homomorphisms \(M(y)\to C_{M}(x,y)\) for all \((x,y)\in\widehat{\mathcal{P}}\). The coequaliser of this system can be thought of as the space representing all elements in \(M(y)\) that are "born" at \(y\). The coequaliser is easily seen to be the quotient of \(M(y)\) by the image of the composite_
\[\bigoplus_{(x,y)}M(x)\xrightarrow{\oplus M(x<y)}\bigoplus_{(x,y)}M(y)\xrightarrow {\Sigma}M(y),\]
_where \(\Sigma\) is the map given by summing coordinates._
_Thus if \(\mathcal{P}\) is line connected and \(M,N\in k\mathcal{P}\mbox{-}\mathsf{mod}\) are modules with equal gradients, then for any line connected maximal tree \(\mathcal{T}\), spaces of "births" and "deaths" of \(M\) and \(N\) restricted to \(\mathcal{P}_{\mathcal{T}}\) object-wise coincide._
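For modules over a chain, the dimensions of \(K_{M}\) and \(C_{M}\), and hence the "deaths" and "births" described above, are computable directly from the matrices of \(M\). A minimal numpy sketch (encoding and names are ours):

```python
# Sketch (our encoding): dimensions of the kernel and cokernel modules
# K_M and C_M of Lemma 4.15 for a module on the chain 0 < 1 < ... < n,
# where maps[i] is the matrix of M(i < i+1).
import numpy as np

def kernel_cokernel_dims(maps):
    dims = []
    for A in maps:
        r = np.linalg.matrix_rank(A)
        dim_ker = A.shape[1] - r    # elements of M(i) dying along (i, i+1)
        dim_coker = A.shape[0] - r  # elements of M(i+1) born at i+1
        dims.append((dim_ker, dim_coker))
    return dims

# Interval-like module k --1--> k --0--> k: a death at 1, a birth at 2.
maps = [np.array([[1.0]]), np.array([[0.0]])]
print(kernel_cokernel_dims(maps))  # [(0, 0), (1, 1)]
```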
We end this section with an example of modules with non-isomorphic gradients and the same rank invariant.
**Example 4.19**.: _Let \(\mathcal{P}\) be the poset with objects \(\emptyset,a,b,c,d,m,\infty\), and relations_
\[\emptyset<a,b,c,d<m<\infty. \tag{7}\]
(8) [Equation (8) in the original displays the Hasse diagram of \(\mathcal{P}\), with the four indecomposable relations \(a<m\), \(b<m\), \(c<m\) and \(d<m\) labelled \(\alpha\), \(\beta\), \(\gamma\) and \(\delta\), respectively.]

_Consider the collection of all modules \(X\in k\mathcal{P}\mbox{-}\mathsf{mod}\) with \(X(a)=X(b)=X(c)=X(d)=k\), \(X(m)=k^{2}\) and \(X(\infty)=0\), such that_
_for each of the morphisms \(\alpha,\beta,\gamma,\delta\), the induced map under \(X\) is an injection, and such that the images of each pair of homomorphisms form a basis for \(X(m)=k^{2}\). Clearly all such modules have exactly the same rank invariant._
_Let \(M,N\in k\mathcal{P}\mbox{-}\mathsf{mod}\) be modules satisfying these requirements. Let \(X,Y\in k\widehat{\mathcal{P}}\mbox{-}\mathsf{mod}\) denote the modules \(\phi^{*}(M)\oplus\beta^{*}(N)\) and \(\phi^{*}(N)\oplus\beta^{*}(M)\) respectively. Then \(\nabla[M]=\nabla[N]\) if and only if \(X\) and \(Y\) are isomorphic. Let \(X_{0}\) and \(Y_{0}\) denote the restrictions of \(X\) and \(Y\) to the full sub-poset of \(\widehat{\mathcal{P}}\) consisting of the objects \((a,m)\), \((b,m)\), \((c,m)\), \((d,m)\) and \((m,\infty)\). Denote by \(\alpha_{*},\beta_{*},\ldots\) the homomorphisms \(N(\alpha),N(\beta),\ldots\), and by \(\alpha^{\prime}_{*},\beta^{\prime}_{*},\ldots\) the homomorphisms \(M(\alpha),M(\beta),\ldots\). Assume that there is an isomorphism \(\Theta\colon X\to Y\), and let \(\Theta_{0}\colon X_{0}\to Y_{0}\) be the restriction of \(\Theta\)._
_Without loss of generality, using our assumption that the images of every pair of homomorphisms generate \(k^{2}\), we may assume that \(\alpha\) and \(\alpha^{\prime}\) take 1 to the vector \((1,0)\in k^{2}\) and that \(\beta\) and \(\beta^{\prime}\) take 1 to \((0,1)\in k^{2}\). Let \(\gamma\) and \(\gamma^{\prime}\) take 1 to the vectors \((x,y)\) and \((z,w)\) respectively, and let \(\delta\) and \(\delta^{\prime}\) take 1 to \((s,t)\) and \((u,v)\) respectively. The upwards homomorphisms are given by the matrices (from left to right, with respect to the standard bases):_
\[\left(\begin{smallmatrix}0&0&1\\ 0&0&0\end{smallmatrix}\right),\quad\left(\begin{smallmatrix}0&0&0\\ 0&0&1\end{smallmatrix}\right),\quad\left(\begin{smallmatrix}0&0&x\\ 0&0&y\end{smallmatrix}\right),\quad\left(\begin{smallmatrix}0&0&s\\ 0&0&t\end{smallmatrix}\right)\qquad\text{and}\qquad\left(\begin{smallmatrix}1&0&0\\ 0&0&0\end{smallmatrix}\right),\quad\left(\begin{smallmatrix}0&0&0\\ 1&0&0\end{smallmatrix}\right),\quad\left(\begin{smallmatrix}z&0&0\\ w&0&0\end{smallmatrix}\right),\quad\left(\begin{smallmatrix}u&0&0\\ v&0&0\end{smallmatrix}\right).\]
_Then \(\Theta_{0}\) can be represented on the object \((m,\infty)\) by a \(2\times 2\) matrix \(A=(a_{i,j})\), and on the objects \((a,m),\ldots,(d,m)\) by four \(3\times 3\) matrices \(B^{k}=(b^{k}_{i,j})\), for \(1\leq k\leq 4\). Computing the respective products, it is easy to observe that \(A\) must be diagonal with non-zero entries \(a_{1,1}\) and \(a_{2,2}\). Similarly \(a_{1,1}=b^{1}_{1,3}\), \(a_{2,2}=b^{2}_{1,3}\), and \(b^{k}_{1,j}=0\) for \(1\leq k\leq 4\) and \(j=1,2\). Furthermore, we have_
\[b^{3}_{1,3}=\frac{x}{z}a_{1,1}=\frac{y}{w}a_{2,2},\quad\text{and}\quad b^{4}_{ 1,3}=\frac{s}{u}a_{1,1}=\frac{t}{v}a_{2,2}. \tag{9}\]
_Thus, as long as these relations are satisfied, \(A\) and \(B^{k}\) can be constructed with any nonzero choices of those values, while making sure in an arbitrary manner that \(B^{k}\) are nonsingular. However, the relations in (9) allow solving for \(a_{2,2}\) in terms of the other variables in two ways, and by comparing them we obtain the relation_
\[wxtu=zyvs\]
_that must be satisfied for \(\Theta_{0}\) to be well defined. Since this relation is not satisfied in general, this shows that there exist modules \(M,N\in k\mathcal{P}\mbox{-}\mathsf{mod}\) with the same rank invariant, such that \(\nabla[M]\neq\nabla[N]\) (and hence also \([M]\neq[N]\))._
## 5. The Hom pairing and the Euler pairing
Let \(\mathcal{P}\) be a finite poset. The Hom pairing of Definition 3.1 and the Euler pairing of Definition 3.2 in the case of \(k\mathcal{P}\)-modules take the form
\[\langle[M],[N]\rangle_{\mathcal{P}}\stackrel{{\mathrm{def}}}{{= }}\dim_{k}(\operatorname{Hom}_{k\mathcal{P}}(M,N))\quad\text{and}\quad\chi_{ \mathcal{P}}([M],[N])\stackrel{{\mathrm{def}}}{{=}}\chi( \operatorname{Ext}^{*}_{k\mathcal{P}}(M,N)),\]
with \(M,N\in k\mathcal{P}\mbox{-}\mathsf{mod}\). Since Hom and \(\operatorname{Ext}\) commute with finite direct sums, the pairings can be extended to arbitrary elements in the respective Grothendieck groups. Notice that \(\operatorname{Ext}^{i}_{k\mathcal{P}}(M,N)=0\) for all \(i>0\) if \(M\) is projective or if \(N\) is injective. In either case the two pairings coincide.
For a finite poset \(\mathcal{P}\), and an object \(v\in\mathcal{P}\), consider the modules \(F_{v},G_{v},S_{v}\in k\mathcal{P}\mbox{-}\mathsf{mod}\), defined on objects by
* \(F_{v}(u)\stackrel{{\mathrm{def}}}{{=}}k\operatorname{Mor}_{\mathcal{P}}(v,u)=\begin{cases}k&v\leq u\\ 0&\text{otherwise}\end{cases}\)
* \(G_{v}(u)\stackrel{{\mathrm{def}}}{{=}}k\operatorname{Mor}_{ \mathcal{P}}(u,v)=\begin{cases}k&u\leq v\\ 0&\text{otherwise}\end{cases}\)
* \(S_{v}(u)\stackrel{{\mathrm{def}}}{{=}}\begin{cases}k&u=v\\ 0&\text{otherwise}\end{cases}\)
If \(v\leq u\leq u^{\prime}\), then \(F_{v}(u)\to F_{v}(u^{\prime})\) is the identity, and otherwise \(F_{v}(u\leq u^{\prime})\) is the zero homomorphism. Similarly, if \(u\leq u^{\prime}\leq v\) then \(G_{v}(u)\to G_{v}(u^{\prime})\) is the identity and otherwise it is \(0\). For \(S_{v}\) all non-identity morphisms are \(0\). By [13, Proposition 2.2.3] the modules \(F_{v}\) are precisely the indecomposable projective modules and the modules \(G_{v}\) are the indecomposable injective modules in \(k\mathcal{P}\text{-}\mathsf{mod}\). By [30, Corollary 4.2] the modules \(S_{v}\) are the simple modules in \(k\mathcal{P}\text{-}\mathsf{mod}\). Notice that \(F_{v}\) is locally constant on \(\mathcal{P}_{\geq v}\), while \(G_{v}\) is locally constant on \(\mathcal{P}_{\leq v}\). Also, any virtually trivial module \(M\) is a direct sum of simple modules,
\[M=\bigoplus_{v\in\mathcal{P}\atop M(v)\neq 0}\dim_{k}M(v)\cdot S_{v}.\]
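For concreteness, consider the line poset \(a<b<c\). Listing values at \((a,b,c)\), the three modules attached to the middle object \(b\) are

\[F_{b}\colon\;0\to k\xrightarrow{\operatorname{id}}k,\qquad G_{b}\colon\;k\xrightarrow{\operatorname{id}}k\to 0,\qquad S_{b}\colon\;0\to k\to 0,\]

so \(F_{b}\) is supported on \(\mathcal{P}_{\geq b}\), \(G_{b}\) on \(\mathcal{P}_{\leq b}\), and \(S_{b}\) on \(b\) alone.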
**Lemma 5.1**.: _Let \(\mathcal{P}\) be a finite poset and let \(F_{v}\) and \(G_{v}\) be the indecomposable projective and the indecomposable injective modules determined by \(v\). Let \([X]=[M]-[N]\), where \(M,N\in k\mathcal{P}\text{-}\mathsf{mod}\). Then,_
1. \(\langle[F_{v}],[X]\rangle_{\mathcal{P}}=\chi_{\mathcal{P}}([F_{v}],[X])=\dim_{ k}M(v)-\dim_{k}N(v)\)_. More generally, if_ \(Q\in k\mathcal{P}\text{-}\mathsf{mod}\) _is any finitely generated projective module, then_ \[\langle[Q],[X]\rangle_{\mathcal{P}}=\chi_{\mathcal{P}}([Q],[X])=\sum_{v\in \mathcal{P}}\epsilon_{v}(\dim_{k}M(v)-\dim_{k}N(v)),\] _where_ \(Q\cong\bigoplus_{v\in\mathcal{P}}\epsilon_{v}F_{v}\)_, with_ \(\epsilon_{v}\in\mathbb{N}\) _for each_ \(v\in\mathcal{P}\)_._
2. \(\langle[X],[G_{v}]\rangle_{\mathcal{P}}=\chi_{\mathcal{P}}([X],[G_{v}])=\dim_{ k}M(v)-\dim_{k}N(v)\)_. More generally, if_ \(I\in k\mathcal{P}\text{-}\mathsf{mod}\) _is any finitely generated injective module, then_ \[\langle[X],[I]\rangle_{\mathcal{P}}=\chi_{\mathcal{P}}([X],[I])=\sum_{v\in \mathcal{P}}\epsilon_{v}(\dim_{k}M(v)-\dim_{k}N(v)),\] _where_ \(I\cong\bigoplus_{v\in\mathcal{P}}\epsilon_{v}G_{v}\)_, with_ \(\epsilon_{v}\in\mathbb{N}\) _for each_ \(v\in\mathcal{P}\)_._
Proof.: By projectivity of \(F_{v}\), \(\operatorname{Ext}_{k\mathcal{P}}^{i}(F_{v},M)=\operatorname{Ext}_{k \mathcal{P}}^{i}(F_{v},N)=0\) for \(i>0\), so
\[\langle[F_{v}],[X]\rangle_{\mathcal{P}}=\chi_{\mathcal{P}}([F_{v }],[X]) \stackrel{{\mathrm{def}}}{{=}}\chi(\operatorname{Ext}_{k \mathcal{P}}^{*}(F_{v},M))-\chi(\operatorname{Ext}_{k\mathcal{P}}^{*}(F_{v},N))\] \[=\dim_{k}\operatorname{Hom}_{k\mathcal{P}}(F_{v},M)-\dim_{k} \operatorname{Hom}_{k\mathcal{P}}(F_{v},N)\] \[=\dim_{k}M(v)-\dim_{k}N(v).\]
If \(Q\in k\mathcal{P}\text{-}\mathsf{mod}\) is any finitely generated projective module, then \(Q\cong\bigoplus_{v\in\mathcal{P}}\epsilon_{v}F_{v}\), where \(\epsilon_{v}\in\mathbb{N}\) for each \(v\in\mathcal{P}\). Thus
\[\langle[Q],[X]\rangle_{\mathcal{P}} =\chi_{\mathcal{P}}([Q],[X])\] \[=\dim_{k}\operatorname{Hom}_{k\mathcal{P}}(Q,M)-\dim_{k} \operatorname{Hom}_{k\mathcal{P}}(Q,N)\] \[=\sum_{v\in\mathcal{P}}\epsilon_{v}(\dim_{k}M(v)-\dim_{k}N(v)),\]
as claimed in (1).
Similarly, \(\operatorname{Ext}_{k\mathcal{P}}^{i}(M,G_{v})=\operatorname{Ext}_{k \mathcal{P}}^{i}(N,G_{v})=0\) for any \(i>0\). Thus \(\operatorname{Hom}_{k\mathcal{P}}(M,G_{v})\cong M(v)^{*}\), and \(\operatorname{Hom}_{k\mathcal{P}}(N,G_{v})\cong N(v)^{*}\), so
\[\langle[X],[G_{v}]\rangle_{\mathcal{P}}=\chi_{\mathcal{P}}([X],[G_{v}])=\dim_{ k}M(v)-\dim_{k}N(v).\]
The second claim of (2) follows similarly.
**Corollary 5.2**.: _Let \(0\to M_{r}\to M_{r-1}\to\cdots\to M_{0}\to 0\) be an exact sequence in \(k\mathcal{P}\mathsf{\text{-}mod}\). Let \(P\) be a finitely generated projective \(k\mathcal{P}\)-module and let \(I\) be a finitely generated injective \(k\mathcal{P}\)-module. Then_
\[\sum_{i=0}^{r}(-1)^{i}\langle[P],[M_{i}]\rangle_{\mathcal{P}}= \sum_{i=0}^{r}(-1)^{i}\chi_{\mathcal{P}}([P],[M_{i}])=0,\quad\text{and}\] \[\sum_{i=0}^{r}(-1)^{i}\langle[M_{i}],[I]\rangle_{\mathcal{P}}= \sum_{i=0}^{r}(-1)^{i}\chi_{\mathcal{P}}([M_{i}],[I])=0.\]
_Furthermore, let_
\[0\to P_{n}\to\cdots\to P_{0}\to M\to 0,\quad\text{and}\quad 0\to M\to I_{0}\to\cdots\to I_{n}\to 0\]
_be a projective resolution and an injective resolution of \(M\) in \(k\mathcal{P}\mathsf{\text{-}mod}\). Then_
\[\chi_{\mathcal{P}}([M],[M])=\sum_{i=0}^{n}\sum_{j=0}^{n}(-1)^{i+j}\langle[P_{ i}],[P_{j}]\rangle_{\mathcal{P}}=\sum_{r=0}^{n}\sum_{s=0}^{n}(-1)^{r+s}\langle[ I_{r}],[I_{s}]\rangle_{\mathcal{P}}.\]
Proof.: The first statement follows easily from Lemma 5.1. The second statement follows from the first.
A main result of this section is the following proposition, which establishes an easy relation between the Euler pairing and the Hom pairing and allows explicit computation of the Euler pairing.
**Proposition 5.3**.: _Let \(\mathcal{P}\) be a finite poset, and let \(M,N\in k\mathcal{P}\mathsf{\text{-}mod}\) be modules. Let \(\langle-,-\rangle_{\mathcal{P}}\) and \(\chi_{\mathcal{P}}(-,-)\) be the Hom pairing and the Euler pairing respectively. Let_
\[0\to P_{n}\to\cdots\to P_{0}\to M\to 0,\quad\text{and}\quad 0\to N\to I_{0}\to\cdots\to I_{n}\to 0\]
_be a projective resolution of \(M\) and an injective resolution of \(N\). Then_
\[\chi_{\mathcal{P}}([M],[N])=\sum_{i=0}^{n}(-1)^{i}\langle[P_{i}],[N]\rangle_{ \mathcal{P}}=\sum_{i=0}^{n}(-1)^{i}\chi_{\mathcal{P}}([P_{i}],[N]), \tag{10}\]
_and_
\[\chi_{\mathcal{P}}([M],[N])=\sum_{j=0}^{n}(-1)^{j}\langle[M],[I_{j}]\rangle_{ \mathcal{P}}=\sum_{j=0}^{n}(-1)^{j}\chi_{\mathcal{P}}([M],[I_{j}]). \tag{11}\]
_Furthermore, write \(P_{i}\cong\bigoplus\epsilon_{v}^{i}F_{v}\) and \(I_{j}\cong\bigoplus\delta_{u}^{j}G_{u}\), with \(\epsilon_{v}^{i},\delta_{u}^{j}\in\mathbb{N}\) and \(v,u\in\mathcal{P}\). Then_
\[\chi_{\mathcal{P}}([M],[N])=\sum_{v\in\mathcal{P}}\sum_{i=0}^{n}(-1)^{i} \epsilon_{v}^{i}\dim_{k}N(v)=\sum_{u\in\mathcal{P}}\sum_{j=0}^{n}(-1)^{j} \delta_{u}^{j}\dim_{k}M(u). \tag{12}\]
Proof.: Equation (10) follows at once from Lemma 5.1(1) and the well known observation that if
\[0\to A_{n}\to\cdots\to A_{1}\to A_{0}\to 0\]
is a chain complex of finite dimensional vector spaces over a field, then
\[\chi(H_{*}(A_{*}))=\sum_{i=0}^{n}(-1)^{i}\dim_{k}A_{i}.\]
Equation (11) follows by analogy and Lemma 5.1(2).
By (10) and Lemma 5.1(1),
\[\chi_{\mathcal{P}}([M],[N])=\sum_{i=0}^{n}(-1)^{i}\langle[P_{i}],[N]\rangle_{ \mathcal{P}}=\sum_{i=0}^{n}(-1)^{i}\sum_{v\in\mathcal{P}}\epsilon_{v}^{i}\dim _{k}N(v).\]
The first equality in Equation (12) follows by rearranging the summands. The second equality follows by analogy, using an injective resolution for \(N\) and Lemma 5.1(2).
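As a quick computational illustration (ours, not part of the paper), equation (12) can be evaluated directly from the multiplicities \(\epsilon_{v}^{i}\) of a projective resolution of \(M\) and the dimension vector of \(N\):

```python
# A minimal sketch (ours, assuming the data of equation (12)): eps[i] maps
# each object v to the multiplicity eps_v^i of F_v in the i-th term of a
# projective resolution of M, and dim_N maps v to dim_k N(v).
def euler_pairing(eps, dim_N):
    return sum((-1) ** i * mult * dim_N[v]
               for i, layer in enumerate(eps)
               for v, mult in layer.items())

# Example on the chain a < b < c: the simple module S_b has projective
# resolution 0 -> F_c -> F_b -> S_b -> 0, so eps = [{'b': 1}, {'c': 1}].
# For N with dimension vector {a: 1, b: 2, c: 1} this returns
# dim N(b) - dim N(c) = 1, matching Lemma 5.6.
print(euler_pairing([{'b': 1}, {'c': 1}], {'a': 1, 'b': 2, 'c': 1}))  # 1
```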
A nice interpretation of the Euler pairing occurs for the constant module on \(\mathcal{P}\) with value \(k\).
**Example 5.4**.: _Let \(\underline{k}\) denote the constant \(k\mathcal{P}\)-module with value \(k\) at each object and the identity map for each morphism. Then for each \(M\in k\mathcal{P}\)-\(\mathsf{mod}\),_
\[\operatorname{Ext}_{k\mathcal{P}}^{*}(\underline{k},M)=H^{*}(\mathcal{P},M).\]
_Thus \(\chi_{\mathcal{P}}([\underline{k}],[M])=\chi(H^{*}(\mathcal{P},M))\). In particular \(\chi_{\mathcal{P}}([\underline{k}],[\underline{k}])=\chi(|\mathcal{P}|)\), where \(|\mathcal{P}|\) denotes the nerve of \(\mathcal{P}\)._
Combining Lemma 5.1 and Example 5.4, one observes that if \(\mathcal{P}\) has an initial object \(\emptyset\), then \(\underline{k}=F_{\emptyset}\) is projective. In that case the positive degree cohomology of any \(k\mathcal{P}\)-module vanishes, and
\[\chi_{\mathcal{P}}([\underline{k}],[M])=\dim_{k}H^{0}(\mathcal{P},M)=\dim_{k} M(\emptyset).\]
Next we consider pairing with the simple modules \(S_{v}\). We start with the \(\operatorname{Hom}\) pairing.
**Lemma 5.5**.: _Let \(\mathcal{P}\) be a poset and let \(v\in\mathcal{P}\) be an object. Then for any module \(N\in k\mathcal{P}\)-\(\mathsf{mod}\),_
\[\operatorname{Hom}_{k\mathcal{P}}(S_{v},N)\cong\bigcap_{v<w}\operatorname{Ker }(N(v<w))\stackrel{{\mathrm{def}}}{{=}}\operatorname{Ker}(N|_{ \mathcal{P}_{>v}})\]
_and_
\[\operatorname{Hom}_{k\mathcal{P}}(N,S_{v})\cong\Big(N(v)\big/\operatorname{Im}\big(\textstyle\bigoplus_{u<v}N(u)\to N(v)\big)\Big)^{*}\stackrel{{\mathrm{def}}}{{=}}\operatorname{CoKer}(N|_{ \mathcal{P}_{<v}})^{*}.\]
Proof.: Fix an object \(v\) and let \(w>v\) be another object in \(\mathcal{P}\). Let \(\phi\colon S_{v}\to N\) be a morphism in \(k\mathcal{P}\)-\(\mathsf{mod}\). Then
\[0=\phi_{w}\circ S_{v}(v<w)=N(v<w)\circ\phi_{v}.\]
Hence \(\operatorname{Im}(\phi_{v})\subseteq\operatorname{Ker}(N(v<w))\). Since this holds for every \(w>v\), it follows that
\[\operatorname{Im}(\phi_{v})\subseteq\bigcap_{v<w}\operatorname{Ker}(N(v<w)),\]
and since \(\phi\) is completely determined by \(\phi_{v}\), we have
\[\operatorname{Hom}_{k\mathcal{P}}(S_{v},N)\cong\operatorname{Hom}_{k}(k, \bigcap_{v<w}\operatorname{Ker}(N(v<w)))\cong\bigcap_{v<w}\operatorname{Ker}( N(v<w)).\]
This proves the first statement.
Let \(\psi\colon N\to S_{v}\) be a morphism in \(k\mathcal{P}\)-\(\mathsf{mod}\). Then \(\psi\) is determined by \(\psi_{v}\), and for each \(u<v\) in \(\mathcal{P}\) one has \(\psi_{v}\circ N(u<v)=S_{v}(u<v)\circ\psi_{u}=0\). Hence
\[\psi_{v}(\operatorname{Im}(N(u<v)\colon N(u)\to N(v)))=0.\]
Since this holds for each \(u<v\), the homomorphism \(\psi_{v}\) vanishes on \(\operatorname{Im}\left(\bigoplus_{u<v}N(u<v)\to N(v)\right)\) and hence it factors through
\[N(v)/\operatorname{Im}\left(\bigoplus_{u<v}N(u<v)\to N(v)\right)\stackrel{{ \mathrm{def}}}{{=}}\operatorname{CoKer}(N|_{\mathcal{P}_{<v}}).\]
Thus one may identify \(\operatorname{Hom}_{k\mathcal{P}}(N,S_{v})\) with the dual space of \(\operatorname{CoKer}(N|_{\mathcal{P}_{<v}})\), which is the claim of the second statement.
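Computationally (a sketch of ours, with the structure matrices assumed as inputs), the first identity reduces \(\dim_{k}\operatorname{Hom}_{k\mathcal{P}}(S_{v},N)\) to a null-space computation: the intersection of the kernels of the maps \(N(v<w)\) is the kernel of their vertical stack.

```python
import numpy as np
from scipy.linalg import null_space

# A minimal sketch (ours): dim_k Hom(S_v, N) as the dimension of the
# intersection of the kernels of the matrices N(v < w), following Lemma 5.5.
# `maps` lists these matrices; `dim_at_v` is dim_k N(v), returned when v is
# maximal (no relations v < w to constrain phi_v).
def dim_hom_from_simple(maps, dim_at_v):
    if not maps:
        return dim_at_v
    stacked = np.vstack(maps)      # x lies in every kernel iff stacked @ x = 0
    return null_space(stacked).shape[1]
```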
Next we consider the Euler pairing with simple modules.
**Lemma 5.6**.: _Let \(\mathcal{P}\) be a finite poset and let \(v\in\mathcal{P}\) be an object, and let \(S_{v}\in k\mathcal{P}\text{-mod}\) be the simple module determined by \(v\). Let \(\pi_{v}\colon F_{v}\to S_{v}\) and \(\iota_{v}\colon S_{v}\to G_{v}\) denote the obvious projection and injection, respectively. Let \(K_{v}=\operatorname{Ker}(\pi_{v})\), and let \(C_{v}=\operatorname{CoKer}(\iota_{v})\). Then_
\[\chi_{\mathcal{P}}([S_{v}],[M])=\dim_{k}M(v)-\chi_{\mathcal{P}}([K_{v}],[M]),\]
_and_
\[\chi_{\mathcal{P}}([M],[S_{v}])=\dim_{k}M(v)-\chi_{\mathcal{P}}([M],[C_{v}]).\]
_In particular, if the Hasse diagram of the sub-poset \(\mathcal{P}_{\geq v}\) is a tree, then_
\[\chi_{\mathcal{P}}([S_{v}],[M])=\dim_{k}M(v)-\sum_{v<u}\dim_{k}M(u),\]
_and if the Hasse diagram for the sub-poset \(\mathcal{P}_{\leq v}\) is a tree, then_
\[\chi_{\mathcal{P}}([M],[S_{v}])=\dim_{k}M(v)-\sum_{u<v}\dim_{k}M(u),\]
_where in both cases the sum runs only over the indecomposable relations of the type specified._
Proof.: The module \(K_{v}\) is the submodule of \(F_{v}\) that coincides with it on each object except \(v\), on which it vanishes. Thus we have a short exact sequence in \(k\mathcal{P}\text{-mod}\)
\[0\to K_{v}\to F_{v}\to S_{v}\to 0\]
that gives rise to a long exact sequence of \(\operatorname{Ext}\) groups, upon applying \(\operatorname{Hom}_{k\mathcal{P}}(-,M)\). By examining that long exact sequence, applying additivity of the Euler characteristic and using Lemma 5.1(1) we conclude that
\[\chi_{\mathcal{P}}([S_{v}],[M]) \stackrel{{\text{def}}}{{=}}\chi(\operatorname{Ext }^{*}_{k\mathcal{P}}(S_{v},M))\] \[=\chi(\operatorname{Ext}^{*}_{k\mathcal{P}}(F_{v},M))-\chi( \operatorname{Ext}^{*}_{k\mathcal{P}}(K_{v},M))\] \[=\dim_{k}M(v)-\chi(\operatorname{Ext}^{*}_{k\mathcal{P}}(K_{v},M))\] \[=\dim_{k}M(v)-\chi_{\mathcal{P}}([K_{v}],[M]).\]
Similarly, using the exact sequence \(0\to S_{v}\to G_{v}\to C_{v}\to 0\) and Lemma 5.1(2), we have
\[\chi_{\mathcal{P}}([M],[S_{v}])=\dim_{k}M(v)-\chi_{\mathcal{P}}([M],[C_{v}]).\]
In cases where the Hasse diagram of \(\mathcal{P}_{\geq v}\) is a tree, the module \(K_{v}\) can be easily seen to be a projective module
\[K_{v}=\bigoplus_{v<u}F_{u},\]
where the sum runs over all indecomposable relations \(v<u\) in \(\mathcal{P}\). Therefore, in that case by Lemma 5.1(1),
\[\chi_{\mathcal{P}}([S_{v}],[M])=\dim_{k}M(v)-\sum_{v<u}\dim_{k}M(u).\]
Similarly, if the Hasse diagram for \(\mathcal{P}_{\leq v}\) is a tree, then \(C_{v}\) is a sum of indecomposable injective modules, indexed by the indecomposable relations \(u<v\). The corresponding statement follows by Lemma 5.1(2).
We end this section with a discussion of some general properties of the Hom and Euler pairings. Notice first that the Hom pairing is non-degenerate on classes of nonzero genuine modules, because for any nonzero \(M\in k\mathcal{P}\text{-mod}\),
\[\langle[M],[M]\rangle_{\mathcal{P}}=\dim_{k}\operatorname{Hom}_{k\mathcal{P}}( M,M)\geq 1,\]
since the identity transformation is nontrivial. However it is easy to find examples of nonzero virtual modules whose Hom pairing square is zero, as we next observe.
**Lemma 5.7**.: _Let \(\mathcal{P}\) be a finite poset, and let \(u,v\in\mathcal{P}\) be any two objects. Then_
1. \(\chi_{\mathcal{P}}([F_{v}],[F_{u}])=\langle[F_{v}],[F_{u}]\rangle_{\mathcal{P}}= \begin{cases}1&v\geq u\\ 0&\text{otherwise}\end{cases}\)__
2. \(\chi_{\mathcal{P}}([G_{v}],[G_{u}])=\langle[G_{v}],[G_{u}]\rangle_{\mathcal{P} }=\begin{cases}1&v\leq u\\ 0&\text{otherwise}\end{cases}\)__
Proof.: Both statements are particular cases of Lemma 5.1. We prove the first statement. The second follows by analogy. One has
\[\langle[F_{v}],[F_{u}]\rangle_{\mathcal{P}}=\dim_{k}F_{u}(v)=\begin{cases}1&v \geq u\\ 0&\text{otherwise}\end{cases},\]
as claimed.
The following immediate corollary of Lemma 5.7 shows that both the Hom pairing and the Euler pairing are degenerate on virtual modules.
**Corollary 5.8**.: _Let \(u,v\in\mathcal{P}\) be two non-comparable objects. Let \([X]=[F_{v}]-[F_{u}]\) and \([Y]=[G_{v}]-[G_{u}]\). Then_
\[\langle[X],[X]\rangle_{\mathcal{P}}=\chi_{\mathcal{P}}([X],[X])=\chi_{ \mathcal{P}}([Y],[Y])=\langle[Y],[Y]\rangle_{\mathcal{P}}=0.\]
Similarly, it is easy to observe that \(\langle[S_{v}],[S_{u}]\rangle_{\mathcal{P}}=1\) if and only if \(u=v\) and is \(0\) otherwise. This is not the case for the Euler pairing. In that case, assuming \(\mathcal{H}_{\mathcal{P}}\) is a tree, we have by Lemma 5.6,
\[\chi_{\mathcal{P}}([S_{v}],[S_{u}])=\dim_{k}S_{u}(v)-\sum_{v<w}\dim_{k}S_{u}(w )=\begin{cases}1&v=u\\ -1&v<u\text{ indecomposable}\\ 0&\text{otherwise}\end{cases}\]
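For instance, on the poset \(a<b\) the kernel of \(F_{a}\to S_{a}\) is \(K_{a}=F_{b}\), so the resolution \(0\to F_{b}\to F_{a}\to S_{a}\to 0\) together with Lemma 5.6 gives

\[\chi_{\mathcal{P}}([S_{a}],[S_{b}])=\dim_{k}S_{b}(a)-\dim_{k}S_{b}(b)=0-1=-1,\]

in agreement with the middle case above.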
Finally, we consider the Euler pairing square \(\chi_{\mathcal{P}}([M],[M])\) for modules \(M\in k\mathcal{P}\text{-}\mathsf{mod}\). Assume that the Hasse diagram \(\mathcal{H}_{\mathcal{P}}\) of the poset \(\mathcal{P}\) is a tree, and thus in particular an acyclic quiver. In that case, by Lemma 3.5 for any \(M,N\in k\mathcal{P}\text{-}\mathsf{mod}\)
\[\chi_{\mathcal{P}}([M],[N])=\chi_{\mathcal{H}_{\mathcal{P}}}(\mathbf{dim}_{k} [M],\mathbf{dim}_{k}[N]),\]
where \(\mathbf{dim}_{k}\colon\mathsf{Gr}(k\mathcal{P})\to\mathbb{Z}^{V}\) is the dimension vector homomorphism (Definition 1.12), and \(\chi_{\mathcal{H}_{\mathcal{P}}}\) is the Euler form for the digraph \(\mathcal{H}_{\mathcal{P}}\) (Definition 3.4). Then the Euler pairing square coincides with the Tits form \(T_{\mathcal{P}}\colon\mathsf{Gr}(k\mathcal{P})\to\mathbb{Z}\), [13, Definition 4.1.2]. By [13, Proposition 4.4.9] the Tits form is positive definite if and only if \(\mathcal{H}_{\mathcal{P}}\) is of Dynkin type ADE, and positive semi-definite if and only if \(\mathcal{H}_{\mathcal{P}}\) is of extended type ADE. This corresponds to \(\mathcal{P}\) being of finite and infinite tame representation type, respectively. In general however, \(T_{\mathcal{P}}\), and hence the Euler pairing square, is indefinite.
## 6. Divergence, adjointness and the Laplacian
In this section we define and study the basic properties of the divergence and the Laplacian for persistence modules. Recall Definition 2.8.
**Definition 6.1**.: _Let \(\mathcal{P}\) be a poset. The left and right divergence are defined to be the homomorphisms_
\[\nabla^{*},\nabla_{*}\colon\mathsf{Gr}(k\widehat{\mathcal{P}})\to\mathsf{Gr} (k\mathcal{P})\]
_given by \(\nabla^{*}[N]\stackrel{{\mathrm{def}}}{{=}}[L_{\phi}(N)]-[L_{\beta }(N)]\), and \(\nabla_{*}[N]\stackrel{{\mathrm{def}}}{{=}}[R_{\phi}(N)]-[R_{\beta }(N)]\)._
For \([X]\in\mathsf{Gr}(k\mathcal{P})\) and \([Y]\in\mathsf{Gr}(k\widehat{\mathcal{P}})\), we have the adjointness relations with respect to the Hom pairing:
\[\langle\nabla^{*}[Y],[X]\rangle_{\mathcal{P}}=\langle[Y],\nabla[X]\rangle_{ \widehat{\mathcal{P}}}\quad\text{and}\quad\langle[X],\nabla_{*}[Y]\rangle_{ \mathcal{P}}=\langle\nabla[X],[Y]\rangle_{\widehat{\mathcal{P}}}.\]
The following corollary is an immediate consequence of Proposition 5.3; it is as close as we can get to adjointness relations with respect to the Euler pairing.
**Corollary 6.2**.: _Let \(\mathcal{P}\) be a finite poset such that \(\mathcal{H}_{\mathcal{P}}\) is a tree. Let \(M\in k\mathcal{P}\mathsf{\text{-}mod}\) be a module, and let_
\[0\to Q\to P\to M\to 0,\quad\text{and}\quad 0\to M\to I\to J\to 0\]
_be a projective resolution and an injective resolution for \(M\) in \(k\mathcal{P}\mathsf{\text{-}mod}\). Then for any \(N\in k\widehat{\mathcal{P}}\mathsf{\text{-}mod}\),_
\[\chi_{\mathcal{P}}(\nabla^{*}[N],[M])=\langle\nabla^{*}[N],[I]-[J]\rangle_{ \mathcal{P}}=\langle[N],\nabla([I]-[J])\rangle_{\widehat{\mathcal{P}}}\]
_and_
\[\chi_{\mathcal{P}}([M],\nabla_{*}[N])=\langle[P]-[Q],\nabla_{*}[N]\rangle_{ \mathcal{P}}=\langle\nabla([P]-[Q]),[N]\rangle_{\widehat{\mathcal{P}}}.\]
The following example motivates referring to \(\nabla^{*}\) and \(\nabla_{*}\) as divergence.
**Example 6.3**.: _Let \(\mathcal{P}\) be the poset whose Hasse diagram is on the left of the diagram below, with its line digraph on the right._
_We compute the left and right divergence of a module \(M\in k\widehat{\mathcal{P}}\mathsf{\text{-}mod}\) at the object \(0\)._
_Starting with left divergence, notice that \(\phi\downarrow 0\) is the discrete category with objects \(10\) and \(20\), while \(\beta\downarrow 0\) is isomorphic to \(\widehat{\mathcal{P}}\). Hence \(L_{\phi}M(0)\cong M(10)\oplus M(20)\), and_
\[L_{\beta}M(0)\cong\operatorname*{colim}_{\widehat{\mathcal{P}}}M.\]
_The computation of this colimit is elementary; the colimit can be shown to be isomorphic to the cokernel of the map_
\[M(10)\oplus M(20)\to M(03)\oplus M(04) \tag{13}\]
_that takes an element \((x,z)\) to \((M_{103}(x)-M_{203}(z),M_{104}(x)-M_{204}(z))\), where \(M_{xyz}\) stands for \(M(xy<yz)\) for short._
_The computation of right divergence is similarly basic. Here we have \(0\downarrow\phi\) isomorphic to \(\widehat{\mathcal{P}}\) and \(0\downarrow\beta\) the discrete category on objects \(03\) and \(04\). Hence \(R_{\beta}M(0)\cong M(03)\oplus M(04)\), and_
\[R_{\phi}M(0)\cong\lim_{\widehat{\mathcal{P}}}M.\]
_The computation of the limit is again elementary; the limit can be shown to be the kernel of the same map as in (13). Thus we have an exact sequence_
\[0\to R_{\phi}M(0)\to M(10)\oplus M(20)\to M(03)\oplus M(04)\to L_{\beta}M(0) \to 0.\]
_Consider the values of these functors in terms of "flow" relative to the object \(0\). Thus one may think of \(L_{\phi}M(0)\cong M(10)\oplus M(20)\) as the \(0\) in-flow, and of \(L_{\beta}M(0)\) as the quotient of the \(0\)-out-flow, where the "effect" (image) of the in-flow has been divided out, or in other words, the net \(0\) out-flow. Similarly, \(R_{\beta}M(0)\cong M(03)\oplus M(04)\) can be thought of as the \(0\) out-flow, while \(R_{\phi}M(0)\) is the flow that is "wasted" (vanishes) on the way to \(0\)._
_Thus \(\mathbf{dim}_{k}\nabla^{*}[M](0)<0\) indicates that the net \(0\)-out-flow is larger than the \(0\) in-flow, or that passing through \(0\) "amplifies" the flow. Similar interpretations can be given for \(\mathbf{dim}_{k}\nabla^{*}[M](0)>0\) and in the corresponding situations for \(\nabla_{*}[M](0)\)._
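For concreteness, the following sketch (ours; the function name and the four structure matrices given as inputs are assumptions) computes \(\mathbf{dim}_{k}\nabla^{*}[M](0)\) and \(\mathbf{dim}_{k}\nabla_{*}[M](0)\) from the map (13) by a rank computation, with the sign conventions of Definition 6.1.

```python
import numpy as np

# A minimal sketch (ours): the map (13) from M(10) (+) M(20) to M(03) (+) M(04),
# assembled from the structure matrices M_103, M_104, M_203, M_204. Its
# cokernel is L_beta M(0) and its kernel is R_phi M(0).
def divergence_dims_at_0(M103, M104, M203, M204):
    A = np.block([[M103, -M203],
                  [M104, -M204]])
    rank = np.linalg.matrix_rank(A)
    in_flow = A.shape[1]              # dim L_phi M(0) = dim M(10) + dim M(20)
    out_flow = A.shape[0]             # dim R_beta M(0) = dim M(03) + dim M(04)
    net_out = out_flow - rank         # dim L_beta M(0) (cokernel of A)
    wasted = in_flow - rank           # dim R_phi M(0) (kernel of A)
    left_div = in_flow - net_out      # dim nabla^*[M](0) = L_phi - L_beta
    right_div = wasted - out_flow     # dim nabla_*[M](0) = R_phi - R_beta
    return left_div, right_div
```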
A nice case of Lemma 5.1 occurs when pairing a projective module with the gradient of a module.
**Example 6.4**.: _Let \((u,v)\in\widehat{\mathcal{P}}\) be an object. Then_
\[\langle F_{(u,v)},\nabla[M]\rangle_{\widehat{\mathcal{P}}}=\langle F_{(u,v)}, \phi^{*}[M]\rangle_{\widehat{\mathcal{P}}}-\langle F_{(u,v)},\beta^{*}[M] \rangle_{\widehat{\mathcal{P}}}=\dim_{k}M(v)-\dim_{k}M(u).\]
Recall the modules \(K_{M},C_{M}\in k\widehat{\mathcal{P}}\mbox{-}\mathsf{mod}\) defined for any \(M\in k\mathcal{P}\mbox{-}\mathsf{mod}\) as the kernel and cokernel of the natural transformation \(\eta\colon\beta^{*}\to\phi^{*}\) (See Lemma 4.15).
**Proposition 6.5**.: _Let \(M\in k\mathcal{P}\mbox{-}\mathsf{mod}\) and let \(N\in k\widehat{\mathcal{P}}\mbox{-}\mathsf{mod}\). Then_
\[\chi_{\widehat{\mathcal{P}}}(\nabla[M],[N])=\chi_{\widehat{\mathcal{P}}}([C_{ M}],[N])-\chi_{\widehat{\mathcal{P}}}([K_{M}],[N]),\]
_and_
\[\chi_{\widehat{\mathcal{P}}}([N],\nabla[M])=\chi_{\widehat{\mathcal{P}}}([N],[C_{M}])-\chi_{\widehat{\mathcal{P}}}([N],[K_{M}]).\]
Proof.: The modules \(K_{M}\) and \(C_{M}\) can be considered as the kernel and cokernel of the natural transformation \(\eta\colon\beta^{*}\to\phi^{*}\) that takes an object \((x,y)\in\widehat{\mathcal{P}}\) to the morphism
\[\beta^{*}M(x,y)=M(x)\xrightarrow{M(x<y)}M(y)=\phi^{*}M(x,y).\]
Thus, one has an exact sequence of \(k\widehat{\mathcal{P}}\)-modules
\[0\to K_{M}\to\beta^{*}M\xrightarrow{\eta_{M}}\phi^{*}M\to C_{M}\to 0,\]
which can be split into two short exact sequences in \(k\widehat{\mathcal{P}}\mbox{-}\mathsf{mod}\)
\[0\to K_{M}\to\beta^{*}M\to I_{M}\to 0\quad\text{and}\quad 0\to I_{M}\to\phi^{*}M\to C_{M}\to 0, \tag{14}\]
where \(I_{M}\) is the image functor. By applying \(\operatorname{Hom}_{k\widehat{\mathcal{P}}}(-,N)\), these give long exact Ext sequences and it follows by additivity of the Euler characteristics, that
\[\chi(\operatorname{Ext}^{*}_{k\widehat{\mathcal{P}}}(\beta^{*}M,N))=\chi( \operatorname{Ext}^{*}_{k\widehat{\mathcal{P}}}(K_{M},N))+\chi(\operatorname{ Ext}^{*}_{k\widehat{\mathcal{P}}}(I_{M},N))\]
and that
\[\chi(\operatorname{Ext}^{*}_{k\widehat{\mathcal{P}}}(\phi^{*}M,N))=\chi( \operatorname{Ext}^{*}_{k\widehat{\mathcal{P}}}(C_{M},N))+\chi(\operatorname{ Ext}^{*}_{k\widehat{\mathcal{P}}}(I_{M},N)).\]
Thus
\[\chi_{\widehat{\mathcal{P}}}(\nabla[M],[N]) =\chi_{\widehat{\mathcal{P}}}([\phi^{*}M],[N])-\chi_{\widehat{ \mathcal{P}}}([\beta^{*}M],[N])\] \[=\chi_{\widehat{\mathcal{P}}}([C_{M}],[N])-\chi_{\widehat{ \mathcal{P}}}([K_{M}],[N]).\]
The second statement follows similarly, by applying \(\operatorname{Hom}_{k\widehat{\mathcal{P}}}(N,-)\) to the short exact sequences (14).
**Corollary 6.6**.: _Let \(M\in k\mathcal{P}\mbox{-}\mathsf{mod}\) and let \(N\in k\widehat{\mathcal{P}}\mbox{-}\mathsf{mod}\). Then_
\[\chi_{\widehat{\mathcal{P}}}(\nabla[M],[N])=\sum_{\substack{(u,v)\in\widehat{\mathcal{P}}\\ C_{M}(u,v)\neq 0}}\dim_{k}C_{M}(u,v)\cdot\chi_{\widehat{\mathcal{P}}}([S_{u,v}],[N])-\sum_{\substack{(u,v)\in\widehat{\mathcal{P}}\\ K_{M}(u,v)\neq 0}}\dim_{k}K_{M}(u,v)\cdot\chi_{\widehat{\mathcal{P}}}([S_{u,v}],[N]),\]
_and_
\[\chi_{\widehat{\mathcal{P}}}([N],\nabla[M])=\sum_{\substack{(u,v)\in\widehat{\mathcal{P}}\\ C_{M}(u,v)\neq 0}}\dim_{k}C_{M}(u,v)\cdot\chi_{\widehat{\mathcal{P}}}([N],[S_{u,v}])-\sum_{\substack{(u,v)\in\widehat{\mathcal{P}}\\ K_{M}(u,v)\neq 0}}\dim_{k}K_{M}(u,v)\cdot\chi_{\widehat{\mathcal{P}}}([N],[S_{u,v}]).\]
Proof.: By Lemma 4.15 both \(K_{M}\) and \(C_{M}\) are virtually trivial. Hence they are isomorphic to a direct sum of simple modules. Thus
\[[K_{M}]=\sum_{\substack{(u,v)\in\widehat{\mathcal{P}}\\ K_{M}(u,v)\neq 0}}\dim_{k}K_{M}(u,v)\cdot[S_{u,v}],\quad\text{and}\quad[C_{M}]=\sum_{\substack{(u,v)\in\widehat{\mathcal{P}}\\ C_{M}(u,v)\neq 0}}\dim_{k}C_{M}(u,v)\cdot[S_{u,v}].\]
The claim follows from bilinearity of \(\chi_{\widehat{\mathcal{P}}}\).
Notice that by Lemma 5.6, \(\chi_{\widehat{\mathcal{P}}}(\nabla[M],[N])\) is computable in the case where the Hasse diagram of \(\widehat{\mathcal{P}}_{\geq(u,v)}\) is a tree for each \((u,v)\in\widehat{\mathcal{P}}\) for which \(C_{M}(u,v)\) and \(K_{M}(u,v)\) are nonzero. A similar comment applies to computability of \(\chi_{\widehat{\mathcal{P}}}([N],\nabla[M])\). This holds, in particular, whenever \(\mathcal{H}_{\mathcal{P}}\) is a directed tree, as we show next.
**Lemma 6.7**.: _Let \(\mathcal{P}\) be a finite poset such that \(\mathcal{H}_{\mathcal{P}}\) is a tree. Then so is \(\widehat{\mathcal{H}}_{\mathcal{P}}\)._
Proof.: If \(\widehat{\mathcal{H}}_{\mathcal{P}}\) is not a tree, then there is a pair of objects \((x,y),(s,t)\in\widehat{\mathcal{H}}_{\mathcal{P}}\) with at least two directed paths from \((x,y)\) to \((s,t)\) in \(\widehat{\mathcal{H}}_{\mathcal{P}}\). But this implies that there are at least two paths from \(y\) to \(s\) in \(\mathcal{H}_{\mathcal{P}}\). Hence \(\mathcal{H}_{\mathcal{P}}\) is not a tree, contradicting our assumption.
A consequence of Lemma 6.7 is that if \(\mathcal{H}_{\mathcal{P}}\) is a tree then the Euler pairing of \(\nabla[M]\) for any \(M\in k\mathcal{P}\text{-}\mathsf{mod}\) with any \(N\in k\widehat{\mathcal{P}}\text{-}\mathsf{mod}\) can be computed explicitly.
**Proposition 6.8**.: _Let \(\mathcal{P}\) be a finite poset. Let \([X]=[U]-[V]\in\mathsf{Gr}(k\widehat{\mathcal{P}})\) be any element, with \(U,V\in k\widehat{\mathcal{P}}\text{-}\mathsf{mod}\), such that \(\nabla^{*}[X]=0\). Then \(\dim_{k}U(x,y)=\dim_{k}V(x,y)\) for all \((x,y)\in\widehat{\mathcal{P}}\)._
Proof.: Fix an object \((x,y)\in\widehat{\mathcal{P}}\). Observe first that \(\widehat{\mathcal{P}}_{\leq(x,y)}\) coincides with the line digraph associated to \(\mathcal{P}_{\leq x}\cup\{y\}\subseteq\mathcal{P}\). Let \(\mathcal{T}\) be a maximal tree for the Hasse diagram of \(\mathcal{P}_{\leq x}\cup\{y\}\subseteq\mathcal{P}\), and let \(\mathcal{Q}\subseteq\mathcal{P}_{\leq x}\cup\{y\}\subseteq\mathcal{P}\) be the sub-poset generated by \(\mathcal{T}\). Notice that \(\mathcal{Q}\) contains \(y\) as a maximal object, and in its Hasse diagram \(\mathcal{T}\) the vertex \(y\) is univalent. Let \(\widehat{\mathcal{Q}}\subseteq\widehat{\mathcal{P}}\) be the sub-poset generated by \(\widehat{\mathcal{T}}\). Then \((x,y)\) is a unique maximal object in \(\widehat{\mathcal{Q}}\), and so, the constant module \(\underline{k}\) on \(\widehat{\mathcal{Q}}\) coincides with \(G_{(x,y)}\) on \(\widehat{\mathcal{Q}}\) and hence it is injective in \(k\widehat{\mathcal{Q}}\text{-}\mathsf{mod}\).
Next, by naturality of \(\nabla^{*}\) one has a commutative square
where \(\rho\) denotes the homomorphism induced by restriction.
Define \(M\in k\mathcal{Q}\text{-}\mathsf{mod}\) as follows. Let \(n\) be the maximal length of a path in \(\mathcal{Q}\) that ends in \(y\). Set \(M(y)=k^{n}\), and for any \(u\in\mathcal{Q}\) with graph distance \(r\) from \(y\) in \(\mathcal{T}=\mathcal{H}_{\mathcal{Q}}\), let \(M(u)=k^{n-r}\). If \(u<v\) is an indecomposable relation in \(\mathcal{Q}\), then \(r=d(v,y)=d(u,y)-1\), where \(d\) denotes graph distance. Define \(M(u<v)\) to be the inclusion into the first \(n-r-1\) coordinates in
\(M(v)=k^{n-r}\). It is now easy to see that \(M\) is well defined on \(\mathcal{Q}\), and that \(\underline{k}\oplus\beta^{*}M\) is naturally isomorphic to \(\phi^{*}M\). Hence in \(\mathsf{Gr}(k\widehat{\mathcal{Q}})\) we have \(\nabla[M]=[\underline{k}]\). Then
\[\dim_{k}U(x,y)= \langle\rho[U],\underline{k}\rangle_{\widehat{\mathcal{Q}}}= \langle\rho[U],\nabla[M]\rangle_{\widehat{\mathcal{Q}}}=\] \[\langle\nabla^{*}(\rho[U]),[M]\rangle_{\mathcal{Q}}= \langle\nabla^{*}(\rho[V]),[M]\rangle_{\mathcal{Q}}=\] \[\langle\rho[V],\nabla[M]\rangle_{\widehat{\mathcal{Q}}}=\langle \rho[V],\underline{k}\rangle_{\widehat{\mathcal{Q}}}=\dim_{k}V(x,y),\]
where the first and last equalities follow from injectivity of \(\underline{k}\) in \(k\widehat{\mathcal{Q}}\text{-}\mathsf{mod}\) and Lemma 5.1.
A similar statement can be made for elements in \(\mathsf{Gr}(k\widehat{\mathcal{P}})\) of vanishing right divergence. We leave the details for the reader. Notice also that in the appropriate sense Proposition 6.8 says that the module \(M\) constructed in its proof is the "integral" of the module \(\underline{k}\). It is also easy to see that this integral is defined up to a constant summand. Further discussion of integration in the context of persistence modules will appear in future studies.
With gradient and divergence operators in place, we can now define the corresponding Laplacians.
**Definition 6.9**.: _Let \(\mathcal{P}\) be a finite poset. Define the left and right Laplacians \(\Delta^{0}\) and \(\Delta_{0}\) respectively in \(\operatorname{End}(\mathsf{Gr}(k\mathcal{P}))\), to be the group endomorphisms_
\[\Delta^{0}\stackrel{{\mathrm{def}}}{{=}}\nabla^{*}\circ\nabla \quad\text{and}\quad\Delta_{0}\stackrel{{\mathrm{def}}}{{=}} \nabla_{*}\circ\nabla.\]
_A virtual module \([X]\in\mathsf{Gr}(k\mathcal{P})\) is said to be left harmonic if \(\Delta^{0}[X]=0\) and right harmonic if \(\Delta_{0}[X]=0\). A virtual module is said to be harmonic if it is both left and right harmonic._
**Corollary 6.10**.: _Let \(\mathcal{P}\) be a finite poset, and let \([X]=[M]-[N]\in\mathsf{Gr}(k\mathcal{P})\) with \(M,N\in k\mathcal{P}\)-\(\mathsf{mod}\) be a left harmonic virtual module. Then, for each object \((u,v)\in\widehat{\mathcal{P}}\),_
\[\dim_{k}M(v)-\dim_{k}M(u)=\dim_{k}N(v)-\dim_{k}N(u).\]
_In particular, if \(N=0\) and \(\mathcal{P}\) is connected, then for each \(x,y\in\mathcal{P}\), \(\dim_{k}M(x)=\dim_{k}M(y)\)._
Proof.: Write
\[\nabla[X]=[\phi^{*}M\oplus\beta^{*}N]-[\phi^{*}N\oplus\beta^{*}M].\]
Then \(0=\Delta^{0}[X]=\nabla^{*}(\nabla[X])\) and Proposition 6.8 applies. Thus, for each \((u,v)\in\widehat{\mathcal{P}}\),
\[\dim_{k}M(v)+\dim_{k}N(u)= \dim_{k}(\phi^{*}M(u,v)\oplus\beta^{*}N(u,v))=\] \[\dim_{k}(\phi^{*}N(u,v)\oplus\beta^{*}M(u,v))=\] \[\dim_{k}N(v)+\dim_{k}M(u).\]
The first claim follows. The second claim follows by connectivity of \(\mathcal{P}\) and induction on the length of paths in \(\mathcal{P}\).
## 7. Applications
In this section we give two sample applications of the theory. In the first we deal with commutative ladder posets, which are generally known to have infinite representation type [16]. We produce some families of examples where the line digraphs are disjoint unions of posets of finite representation type. Hence while classification of indecomposable modules over those ladder posets may be hard or even impossible, the gradient of any module is much easier to understand. In the second application we compute gradients of filtered hierarchical clustering modules [2] of different random point sets. We show that the dimension vectors of gradients have more expressive power compared to the original modules. We also introduce gradient paths over modules, a form of gradient descent.
### Gradient modules of commutative ladders
An in-depth investigation of modules over _commutative ladders_ with real-world applications appeared in [16]. The motivation to study commutative ladders was to be able to compare two input data sets, constituting two \(1\)-persistence modules, and to relate their common homological features by connecting morphisms.
Denote a left-to-right arrow by \(F\) and a right-to-left arrow by \(B\). A _line poset_ is then a finite poset that can be written schematically as a juxtaposition of arrows of type \(F\) or \(B\) in any order. In quiver representations this is referred to as a quiver of type \(\mathbb{A}_{n}\), which by Gabriel's theorem [17], [13, Theorem 4.2.4] is one of the representation-finite Dynkin diagrams, i.e. it has only finitely many isomorphism classes of indecomposable representations. Thus any line poset is uniquely characterised by a sequence \(X_{1}X_{2}\cdots X_{n-1}\), where each \(X_{i}\) is either \(F\) or \(B\). We refer to the characterising sequence as the _type_ of the line poset. We call any such poset a _line poset of length \(n\)_. A _commutative ladder of length \(n\)_ is a poset that can be written schematically as two line posets \(L_{1}\) and \(L_{2}\) of the same length and type with arrows from each object of \(L_{1}\) to the corresponding object in \(L_{2}\). By [16, Theorems 3 and 4] commutative ladders of any type with length at most \(4\) are representation-finite, whereas they are representation-infinite if the length is \(\geq 5\).
Consider two specific types of ladder posets. The first is of any length and is made of a related pair of alternating sequences of type either \(FBFB\cdots\) or \(BFBF\cdots\). We refer to a poset of this form and length \(n\) as a _zig-zag ladder of length \(n\)_. The second is of even length \(2n\) and is made of a related pair of sequences of the form \(FFBBFF\cdots\) or \(BBFFBB\cdots\). This type will be referred to as a _double zig-zag ladder of length \(2n\)_. As pointed out, any commutative ladder poset of length at least \(5\) is of infinite representation type.
**Lemma 7.1**.: _Let \(\mathcal{P}\) be a commutative ladder poset. Then_
1. _if_ \(\mathcal{P}\) _is a zig-zag ladder, then_ \(\widehat{\mathcal{P}}\) _is a disjoint union of line posets of length at most_ \(2\)_, and_
2. _if_ \(\mathcal{P}\) _is a double zig-zag ladder, then_ \(\widehat{\mathcal{P}}\) _is a line poset._
Proof.: A zig-zag commutative ladder of type \(FBFB\cdots\) has the general form drawn in the left diagram below. One easily observes that the associated line digraph has the form drawn in the right diagram.
This proves Part (1).
A double zig-zag ladder of type \(FFBBFF\cdots\) has the form
The associated line digraph has the form
which is clearly a line poset, thus proving Part (2).
The proof for zig-zag and double zig-zag ladders of types \(BFBF\cdots\) and \(BBFFBB\cdots\), respectively, is essentially the same, with arrows going the opposite way.
We obtain an immediate corollary of Lemma 7.1 and Gabriel's theorem on the classification of quiver algebras of finite representation type [17].
**Corollary 7.2**.: _Let \(\mathcal{P}\) be a zig-zag or double zig-zag commutative ladder poset of any length \(n\). Then the front and back modules \(\phi^{*}M\) and \(\beta^{*}M\) of any \(M\colon\mathcal{P}\to\operatorname{Vect}_{k}\) are of finite representation type._
Corollary 7.2 stands in contrast to the fact that all types of commutative ladder posets of length \(5\) or more are of infinite representation type: at least for zig-zag and double zig-zag ladders, the gradients can be decomposed as a finite combination of modules over posets of finite representation type. There are obviously more types of commutative ladder posets with this property. For instance a poset of type \(BBF\) also has a line digraph that is a union of two posets of type \(\mathbb{A}_{n}\). We leave the classification of commutative ladder posets whose associated line digraphs are of finite representation type as an interesting further question.
### Gradient analysis of filtered hierarchical clustering
In this section we give an actual computational demonstration of applying our gradient to modules motivated by [2]. To this end, let \(\mathcal{A}_{m,n}\) denote the product of posets \(\{0<1<\cdots<m\}\) and \(\{0<1<\cdots<n\}\), which can be depicted as a commutative grid on two coordinates:
We define a _biclustering module_ to be a functor \(M\colon\mathcal{A}_{m,n}\to\operatorname{Vect}_{k}\), i.e. a module \(M\in k\mathcal{A}_{m,n}\operatorname{\mathsf{-mod}}\). In our setup, described below, we also impose the condition that all the homomorphisms \(M((x_{1},y)<(x_{2},y))\) induced by horizontal morphisms in \(\mathcal{A}_{m,n}\) are surjective. Such modules and their representation theory were studied in [2], based on earlier works by [22] and [23]. By [2, Corollary 1.6] the category of \(k\mathcal{A}_{m,n}\)-modules satisfying the extra surjectivity condition has finitely many isomorphism types of indecomposable modules exactly when \(n\leq 2\), or \(m=1\), or in the case where \((m,n)\in\{(2,3),(2,4),(2,5),(3,3),(4,3)\}.\) It is of tame representation type exactly when \((m,n)\in\{(2,6),(3,4),(5,3)\}\), and it is of wild representation type in all other cases.
A _hierarchical clustering method_ defined on a finite metric space \((X,d)\) is a one-parameter family of surjective maps \(\{f_{\epsilon}\colon X\to C_{\epsilon}(X)\}_{\epsilon\geq 0}\), where each \(C_{\epsilon}(X)\) is a finite set of _clusters_, such that for all \(\epsilon\leq\epsilon^{\prime}\) we have a surjection
\[f_{\epsilon}(X)\to f_{\epsilon^{\prime}}(X).\]
A canonical example of such a clustering method arises from considering the _geometric graph_\(G_{X,\epsilon}\) associated to \(X\) at scale \(\epsilon\), namely the graph whose vertices are the points of \(X\) and with
an edge connecting any two vertices \(x,y\in X\) if \(d(x,y)\leq\epsilon\). The natural projection of \(X\) onto the path-components of its geometric graph \(X\twoheadrightarrow\pi_{0}(G_{X,\epsilon})\) gives a clustering for \(X\) with scale \(\epsilon\). The inclusion of graphs \(G_{X,\epsilon}\hookrightarrow G_{X,\epsilon^{\prime}}\) for all \(\epsilon\leq\epsilon^{\prime}\) induces a surjection \(\pi_{0}(G_{X,\epsilon})\twoheadrightarrow\pi_{0}(G_{X,\epsilon^{\prime}})\), defining a hierarchical clustering method. A _filtered hierarchical clustering method_ is obtained by first turning \(X\) into a filtration, i.e. an indexed collection of subspaces \(X_{t_{0}}\subset X_{t_{1}}\subset\cdots\subset X_{t_{l}}=X\), by some filtering method and then applying a hierarchical clustering method to each \(X_{t}\) in the filtration. Applying degree \(0\) homology \(H_{0}\) we obtain at filtration parameter \((\epsilon,t)\) the \(k\)-vector space generated by the connected components of \(G_{X_{t},\epsilon}\). The morphisms with respect to \(\epsilon\), for a fixed \(t\), detect the merger of connected components, and hence are all surjective. The reader is referred to [2] for details.
Fix a finite metric space \((X,d)\). We used the following filtered hierarchical clustering method to generate our samples of biclustering modules. The filtering method applied is a _\(K\)-nearest neighbour density estimator_. Namely, for a fixed parameter \(K\in\mathbb{N}\) define a function \(\rho=\rho_{K}\colon X\to\mathbb{R}\) by
\[\rho(p)\stackrel{{\text{def}}}{{=}}\sum_{i=1}^{K}d(p,v_{i}),\]
where the sum runs over a collection of \(K\) points \(v_{1},v_{2},\ldots,v_{K}\), such that \(d(p,v_{i})\) is minimal in the subspace \(X\setminus\{v_{j}\}_{j<i}\). This yields a filtration \(\{X_{t}\}_{t\geq 0}\) by subspaces of \(X\), where \(p\in X_{t}\) if \(\rho(p)\leq t\). In our computations we fixed \(K=20\). For every \(X_{t}\) we applied \(\pi_{0}\) by computing the connected components of the geometric graphs \(G_{X_{t},\epsilon}\) for an increasing \(\epsilon\).
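As a computational companion to this construction, the following sketch (ours, not the paper's code; `biclustering_dims` and its defaults are our naming, with \(K=20\), \(\epsilon\in[0,0.25]\) and 10 parameter steps as in the text) computes the grid of values \(\dim_{k}M(i,j)\) of a biclustering module from a point cloud.

```python
import numpy as np
from scipy.sparse.csgraph import connected_components
from scipy.spatial.distance import pdist, squareform

# A minimal sketch (ours): for each density cutoff t and scale eps, the number
# of connected components of the geometric graph G_{X_t, eps}, i.e. the
# dimension dim_k H_0 of the biclustering module at that parameter pair.
def biclustering_dims(X, K=20, n_steps=10, eps_max=0.25):
    D = squareform(pdist(X))                      # Euclidean distance matrix
    rho = np.sort(D, axis=1)[:, 1:K + 1].sum(1)   # K-nearest-neighbour density estimator
    ts = np.linspace(rho.min(), rho.max(), n_steps)
    eps_grid = np.linspace(0.0, eps_max, n_steps)
    dims = np.zeros((n_steps, n_steps), dtype=int)
    for i, t in enumerate(ts):
        keep = rho <= t                           # the subspace X_t
        Dt = D[np.ix_(keep, keep)]
        for j, eps in enumerate(eps_grid):
            adj = (Dt <= eps).astype(int)         # geometric graph G_{X_t, eps}
            n_comp, _ = connected_components(adj, directed=False)
            dims[i, j] = n_comp
    return dims
```

Which axis of `dims` corresponds to which filtration parameter is a bookkeeping choice; here we keep the density filtration along the first index.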
As finite metric space data \((X,d)\) we used random point processes on the unit square \([0,1]\times[0,1]\) with \(d\) the standard Euclidean distance; point processes have gathered interest within the persistent homology community, see for example [3, 10, 19, 28]. Random point processes are models for random point configurations based on statistical distributions. The interesting question is whether different point processes can be distinguished by their topological features encapsulated in persistence modules. Figure 4 shows examples of the point processes used in our analysis (see definitions below). It is not obvious that persistent homology via the standard Vietoris-Rips filtration would contain enough information to distinguish these point clouds. Using filtered hierarchical clustering adds information from the density estimation with the aim of adding more distinguishing features to the resulting biclustering modules.
We simulated instantiations of four point processes as follows; \(L\) is a random integer drawn from the interval \([190,210]\). A simulation sketch is given after the list.
* A _(homogeneous) Poisson_ process: \(L\) points were randomly and uniformly distributed on the unit square.
* A _Normal_ process: \(L\) coordinate pairs \((x,y)\) were created, where both \(x\) and \(y\) are sampled from normal distribution \(N(\mu,\sigma^{2})\) with mean \(\mu=0.5\) and standard deviation \(\sigma=0.2\).
* A _Matern_ cluster process: A Poisson process as above was simulated, but now with an expected number of \(40\) points. These represent parent points, or cluster centers, on the unit square. For each parent, a random number of child points \(C\) was drawn with expectation \(5\). A disk of radius \(0.1\) was centered on each parent point; then for each parent its associated number of child points \(C\) was uniformly randomly placed in the disk. Note that parent points are not part of the actual metric space data.
* A _Baddeley-Silverman_ process: The unit square was first divided into equal size tiles with side lengths \(\frac{1}{14}\). For each tile, a random number \(C\) was drawn from the Baddeley-Silverman distribution, which is a discrete distribution defined on the values \((0,1,10)\) with respective probabilities \((\frac{1}{10},\frac{8}{9},\frac{1}{90})\). For each tile, the associated number of points \(C\) was then uniformly randomly distributed on the tile.
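The following minimal simulation sketch is ours; helper names and the random seed are our choices, while the parameter values follow the list above.

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_points(n):
    # homogeneous Poisson process conditioned on n points: uniform on the square
    return rng.uniform(0.0, 1.0, size=(n, 2))

def normal_points(n, mu=0.5, sigma=0.2):
    # both coordinates sampled from N(mu, sigma^2)
    return rng.normal(mu, sigma, size=(n, 2))

def matern_points(lam=40, mean_children=5, radius=0.1):
    parents = rng.uniform(0.0, 1.0, size=(rng.poisson(lam), 2))
    pts = []
    for p in parents:
        for _ in range(rng.poisson(mean_children)):
            theta = rng.uniform(0.0, 2.0 * np.pi)
            r = radius * np.sqrt(rng.uniform())   # uniform placement in the disk
            pts.append(p + r * np.array([np.cos(theta), np.sin(theta)]))
    return np.array(pts)

def baddeley_silverman_points(tiles=14):
    vals, probs = [0, 1, 10], [1 / 10, 8 / 9, 1 / 90]
    w = 1.0 / tiles
    pts = [np.empty((0, 2))]
    for i in range(tiles):
        for j in range(tiles):
            c = rng.choice(vals, p=probs)         # Baddeley-Silverman distribution
            pts.append(rng.uniform([i * w, j * w],
                                   [(i + 1) * w, (j + 1) * w], size=(c, 2)))
    return np.vstack(pts)
```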
Parameter choices for Matern and Baddeley-Silverman were such that they also had numbers of points in the interval \([190,210]\). In the filtered clustering process the distance parameter \(\epsilon\) was restricted to the interval \([0,0.25]\). The density filtration parameter \(t\) ranged from \(\min_{p\in X}\rho(p)\) to \(\max_{p\in X}\rho(p)\). The value ranges of both \(\epsilon\) and \(t\) were discretised into 10 values, and at each pair \((\epsilon,t)\) of parameter steps homology \(H_{0}\) was applied, resulting in biclustering modules \(M\in k\mathcal{A}_{10,10}\mbox{-}\mathsf{mod}\).
The dimension vectors \(\mathbf{dim}_{k}[M]\) for the modules constructed for single realisations of the point processes are shown in Figure 5, with the distance parameter increasing along the \(x\)-axis and the density filtration along the \(y\)-axis. Recall that the gradient of a module \(M\) is given by the formal difference of the front and back \(k\widehat{\mathcal{A}}_{10,10}\)-modules, i.e. \(\nabla[M]=[\phi^{*}M]-[\beta^{*}M]\). Objects in \(\widehat{\mathcal{A}}_{10,10}\) have the form \((i,j,j+1)\) or \((i,i+1,j)\), where \((i,j,j+1)\) stands for the edge from \((i,j)\) to \((i,j+1)\) and \((i,i+1,j)\) stands for the edge from \((i,j)\) to \((i+1,j)\). Hence one has \(\mathbf{dim}_{k}\nabla[M](i,j,j+1)=\dim_{k}M(i,j+1)-\dim_{k}M(i,j)\), and similarly \(\mathbf{dim}_{k}\nabla[M](i,i+1,j)=\dim_{k}M(i+1,j)-\dim_{k}M(i,j)\). The resulting dimension vectors of the gradient modules are shown in Figure 6.
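Given such a grid, the dimension vector of the gradient module is a pair of difference arrays; the following sketch (ours) implements the two displayed formulas on the output of `biclustering_dims` above.

```python
# A minimal sketch (ours): dimension vectors of the gradient module nabla[M],
# computed from a grid dims[i, j] = dim_k M(i, j). The two arrays correspond
# to the two kinds of edges of the line digraph of A_{m,n}.
def gradient_dims(dims):
    d_second = dims[:, 1:] - dims[:, :-1]   # dim M(i, j+1) - dim M(i, j)
    d_first = dims[1:, :] - dims[:-1, :]    # dim M(i+1, j) - dim M(i, j)
    return d_first, d_second
```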
**Remark 7.3**.: _Note that for a linear map \(f\colon V\to W\) between finite dimensional vector spaces, the index \(\operatorname{ind}(f)\) is given by \(\dim(V)-\dim(W)\); the values of the gradient dimension vectors, at every object of \(\widehat{\mathcal{A}}_{10,10}\), can thus be seen as minus the index of the corresponding homomorphism in \(\mathcal{A}_{10,10}\)._
Figure 4. Example illustrations of the random point processes on the unit square used to produce \(k\mathcal{A}_{m,n}\)-modules.
Comparing the modules in Figure 5 with their associated gradients in Figure 6, we see that the gradient modules seem to better distinguish the point processes. In particular, the supports of objects where the gradient is of dimension zero, shown as blue vertices in Figure 6, are clearly different in all cases.
**Remark 7.4**.: _While the regions on which the dimension vector vanishes do not necessarily inform about a sub-poset of \(\mathcal{A}_{10,10}\) on which \(M\) is constant, they do provide some nontrivial information. Since we assume that all horizontal homomorphisms are surjective, and since all vector spaces in sight are finite dimensional, one may conclude that \(M(i,j)\to M(i+1,j)\) is in fact an isomorphism for each object \((i,j)\) for which \(\mathbf{dim}_{k}\nabla[M](i,i+1,j)=0\)._
Figure 5. Dimension vectors of the \(k\mathcal{A}_{10,10}\)-modules associated to point processes via filtered hierarchical clustering.

In [2] (see also [5]) biclustering modules were studied through decomposing the associated modules. Decomposition methods have gathered a large momentum in persistence theory [1, 6, 12, 16], guided by the success of the barcode in 1-parameter persistence. From our point of view we posit that decomposition methods are essentially _global_ ways of understanding the structure of modules. Indeed, any indecomposable is still a module over the full underlying poset. Our calculus-based methods on the other hand are by definition _local_, and hence not tied to the size of the modules nor their representation type: our computations on \(k\mathcal{A}_{10,10}\)-modules could have been done on much larger posets, contrasting with the representation theoretic limitations in [2]. The local information can be more manageable and complements the global information. From a data analysis perspective, our calculus approach tells us how a module changes locally with respect to the filtration parameters, which might be valuable for a practitioner interested in how data sets cluster with respect to changes in the clustering parameters.
Figure 6. Dimension vectors of the gradient modules associated to the bifiltration modules in Figure 5. The points of vanishing gradient are distinguished as blue vertices.
Another main value of the gradient in calculus is to define the gradient vector field of a scalar function, whose value at any point indicates the direction of greatest change of the function. To take this idea to biclustering modules for tracking greatest changes along the filtration parameters, we propose tracing the _gradient paths_ in the \(k\widehat{\mathcal{A}}_{10,10}\)-modules. Concretely, we consider the posets \(\widehat{\mathcal{A}}_{10,10}\) as weighted digraphs according to Figure 6. Starting from the two minimal vertices in each digraph, we follow the paths that always choose the direction to the next vertex with the largest absolute value of the gradient; we continue until we arrive at a vertex where we can no longer make a choice. The gradient paths we obtain for every point process are sketched in Figure 7.
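The following sketch (ours; tie-breaking and stopping conventions are our own reading of the procedure) traces such a greedy gradient path on the arrays produced by `gradient_dims` above.

```python
# A minimal sketch (ours): from a start vertex, repeatedly step along the
# outgoing edge with the largest absolute gradient value, until no outgoing
# edge remains.
def gradient_path(d_first, d_second, start=(0, 0)):
    n_i, n_j = d_first.shape[0] + 1, d_second.shape[1] + 1
    i, j = start
    path = [(i, j)]
    while True:
        moves = []
        if i + 1 < n_i:
            moves.append((abs(d_first[i, j]), (i + 1, j)))
        if j + 1 < n_j:
            moves.append((abs(d_second[i, j]), (i, j + 1)))
        if not moves:
            break
        i, j = max(moves)[1]          # greedy choice of the steepest edge
        path.append((i, j))
    return path
```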
Similarly to the distribution of points of dimension zero gradient in Figure 6, the gradient paths also follow different trajectories for each data set. Note that the gradient paths are determined by the modules themselves. Restricting to the paths gives us simplified modules over a line poset, encapsulating the information of maximal variability within the original module. A similar approach is taken in RIVET [20], where 2-persistence modules are analysed through one-dimensional straight line cross-sections. These cross-sections, however, are chosen, whereas the gradient paths are determined directly from the modules.
Figure 7. Gradient paths in the \(k\widehat{\mathcal{A}}_{10,10}\)-modules, starting from the two minimal elements in the lower left corners of the modules in Figure 6.