entry_id | published | title | authors | primary_category | categories | text
---|---|---|---|---|---|---
http://arxiv.org/abs/2307.04147v1 | 20230709103519 | A Survey and Approach to Chart Classification | [
"Anurag Dhote",
"Mohammed Javed",
"David S Doermann"
] | cs.CV | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
Indian Institute of Information Technology, Allahabad, India; Department of CSE, University at Buffalo, Buffalo, NY, USA
Email:{[email protected], [email protected],
[email protected]}
A Survey and Approach to Chart Classification
Anurag Dhote10009-0000-9385-4758 Mohammed Javed1Corresponding author0000-0002-3019-7401 David S Doermann20000-0003-1639-4561
August 12, 2023
Charts represent an essential source of visual information in documents and facilitate a deep understanding and interpretation of information typically conveyed numerically. The scientific literature contains many charts, each with its own stylistic differences. Recently, the document understanding community has begun to address the problem of automatic chart understanding, which begins with chart classification. In this paper, we present a survey of the current state-of-the-art techniques for chart classification and discuss the available datasets and their supported chart types. We broadly classify these contributions as traditional ML-based, CNN-based, and transformer-based approaches.
Furthermore, we carry out an extensive comparative performance analysis of CNN-based and transformer-based approaches on the recently published CHARTINFO UB-UNITECH PMC dataset for the CHART-Infographics competition at ICPR 2022. The dataset includes 15 different chart categories, with 22,923 training images and 13,260 test images. We have implemented a vision-based transformer model that produces state-of-the-art results in chart classification.
§ INTRODUCTION
Charts provide a compact summary of important information or research findings in technical documents and are a powerful visualization tool widely used by the scientific and business communities. In the recent literature, the problem of chart mining has attracted increased attention due to numerous advantages, as suggested in the comprehensive survey published by Davila et al. in 2019 <cit.>. The term Chart mining refers to the process of extracting information represented by charts. Another motivating factor in the increased attention paid to this problem is a series of competitions held in conjunction with significant conferences to address the critical challenges in the chart mining pipeline<cit.>.
Since a variety of charts are possible, chart classification is often the first step in chart mining. The task of chart image classification can be formalized as follows: given a chart image extracted from a document, classify the image into one of N defined categories. The wide variety of chart types in the literature adds to the complexity of the task<cit.>. Additional challenges include inter-class similarity, noise in authentic chart images, and the lack of state-of-the-art datasets that cover multiple chart types and incorporate 2.5D or 3D charts and noise into the training samples<cit.>. The rise of robust deep learning models has contributed significantly to the success of chart classification. Deep learning approaches have outperformed traditional machine learning approaches in robustness and performance. Yet there is still a need for solutions that provide stable results and are robust enough to handle noise in the data. In this paper, we provide a performance comparison of several deep learning models that are state-of-the-art in the ImageNet<cit.> classification task.
In addition, we report the performances of several popular vision transformers, which, to the best of our knowledge, have yet to be used for chart classification, except for the recent ICPR 2022 CHART-Infographics competition<cit.>.
This paper is organized as follows. Section 2 summarizes the existing chart classification literature covering traditional and deep learning-based methods, including a brief discussion on transformer-based chart classification. Section 3 reports and summarizes publicly available datasets.
Section 4 briefly highlights the popular ImageNet pre-trained deep learning-based models that will be used for our comparative study. Section 5 describes the latest edition of the UB PMC dataset, the training and testing protocols, and a discussion on their performance for chart classification. Section 6 provides information on possible improvements and suggestions for future research. Finally, Section 7 concludes with a summary of the paper.
§ CHART CLASSIFICATION TECHNIQUES
Based on the type of approach used to implement the chart classification task, the contributions in the literature can be grouped into traditional ML, CNN-based deep learning, and transformer-based deep learning. Each type of approach is described briefly below.
§.§ Traditional ML approaches
Traditional approaches rely on feature extraction methods that are often manual and general-purpose. Features are extracted and then represented in mathematical form for direct processing by machine learning classifiers. Savva et al.<cit.> present a system that automatically reformats visualizations to increase visual comprehension. The authors use low-level image features for classification in conjunction with text-level features. The system uses a multiclass SVM classifier trained on a corpus containing 2,601 chart images labeled with ten categories, following Gao et al.'s manual extraction approach. In <cit.>, Gao et al. propose VIEW, a system that automatically extracts information from raster-format charts. The authors used an SVM to separate the textual and graphical components and classify the chart images based on the graphic elements extracted from the visual components. The system was evaluated on three chart categories - bar charts, pie charts, and line graphs - with 100 images for each category collected from various real-world digital resources.
Instead of taking an image as input, Karthikeyani and Nagarajan<cit.> present a system to recognize chart images from PDF documents using eleven texture features derived from a Gray Level Co-Occurrence Matrix (GLCM). A chart image is located in the PDF document database, and the features are extracted and fed to the learning model. SVM, KNN, and MLP are the classifiers used for classification. Cheng et al.<cit.> employ a multimodal approach that uses text and image features. These features are provided as input to an MLP, and the output is characterized as a fuzzy set to get the final result. The corpus contains 1,707 charts in three categories, and a 96.1% classification accuracy is reported.
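To make the GLCM-plus-classifier pipeline concrete, the sketch below shows one way such texture features could be extracted with scikit-image and fed to SVM, KNN, and MLP classifiers. It is an illustration of the general approach rather than any cited paper's exact pipeline; the random stand-in images and the chart-type labels are placeholders.

```python
# Illustrative sketch only: GLCM texture features + classical classifiers,
# in the spirit of the pipeline described above. Images and labels below are
# random placeholders, not the corpora used in the cited papers.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

def glcm_features(gray_u8):
    """Texture descriptors computed from a gray-level co-occurrence matrix."""
    glcm = graycomatrix(gray_u8, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "dissimilarity", "homogeneity",
             "energy", "correlation", "ASM"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

rng = np.random.default_rng(0)
X, y = [], []
for label in ["bar", "pie", "line"]:          # placeholder chart categories
    for _ in range(10):
        img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in image
        X.append(glcm_features(img))
        y.append(label)
X = np.vstack(X)

for clf in (SVC(), KNeighborsClassifier(n_neighbors=5), MLPClassifier(max_iter=500)):
    clf.fit(X, y)                             # train each classical classifier
    print(type(clf).__name__, "train accuracy:", clf.score(X, y))
```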
§.§ CNN-based Deep Learning Approaches
Liu et al.<cit.> used a combination of Convolutional Neural Networks (CNNs) and Deep Belief Networks (DBNs) to capture high-level information present in deep hidden layers. Fully connected layers of a deep CNN are used to extract deeply hidden features, and a DBN is then used to predict the image class from these features. The authors use transfer learning and perform fine-tuning to prevent overfitting. They use a data set that includes more than 5,000 images of charts, including pie, scatter, line, bar, and flow classes. Deep features provide better stability and scalability to the proposed framework than primitive features. The proposed method achieves an average accuracy of 75.4%, which is 2.8% more than the method that uses only deep ConvNets.
Given the results of CNNs in the classification of natural images, Siegel et al.<cit.> used two CNN-based architectures for chart classification. They evaluated AlexNet and ResNet-50, which are pre-trained on the ImageNet data set and then fine-tuned for chart classification. This transfer learning approach is prevalent in subsequent works addressing this particular problem. The proposed frameworks outperformed state-of-the-art models at the time, such as ReVision, by a significant margin. ResNet-50 achieved the best classification accuracy of 86% on a data set that contained more than 60,000 images spread over seven categories.
Amara et al.<cit.> proposed a CNN-based on LeNet to classify images from their corpus of 3377 images into 11 categories. The model comprises eight layers, one input layer, five hidden layers, one fully connected layer, and one output layer. The fully connected layer is used as a classifier, while the hidden layers are convolution and pooling layers designed to extract features automatically. A fully connected layer employs softmax activation to classify images into defined classes. For evaluation of the model's performance, an 80-20 split is performed on the data set for training and assessment. The proposed model performs better than the LeNet and pretrained LeNet architectures with an accuracy of 89.5%.
Jung et al. <cit.> present a classification method using the deep learning framework Caffe and evaluate its efficacy by comparing it with ReVision<cit.>. The authors use GoogLeNet<cit.> for classification and compare its results with shallower networks like LeNet-1 and AlexNet<cit.>. GoogLeNet outperforms LeNet-1 and AlexNet with an accuracy of 91.3%. Five-fold cross-validation is used for calculating the accuracy on an image corpus with 737 - 901 images for each chart type. The test concludes that ChartSense provides higher classification accuracy for all chart types than ReVision.
With studies adapting the deep learning approach for chart image classification, a comparative study of traditional vs. CNN architectures was required. Chagas et al.<cit.> provide a comparative analysis of conventional vs. CNN techniques. Authors evaluated CNN architectures (VGG19<cit.>, Resnet-50<cit.>, and Inception-V3<cit.>) for chart image classification for ten classes of charts. The performance is compared with conventional machine learning classifiers, Naive Bayes, HOG features combined with KNN, Support Vector Machines, and Random Forests. Pre-trained CNN models with fine-tuned last convolutional layers were used. The authors concluded that CNN models surpass traditional methods with an accuracy of 77.76% (Resnet-50) and 76.77% (Inception-V3) compared to 45.03% (HOG + SVM).
Dai et al.<cit.> employ four deep learning models on a corpus of 11,174 chart images of five categories. Of AlexNet<cit.>, VGG16<cit.>, GoogLeNet<cit.>, and ResNet<cit.>, the authors obtain the best accuracy of 99.55% with the VGG16 model, which outperforms the models used in the ChartSense paper by a large margin.
Significant roadblocks to chart mining research are caused by the fact that current chart data sets are too small and lack the diversity needed to support deep learning. To address this problem, Jobin et al.<cit.> presented DocFigure, a chart classification data set with 33,000 charts in 28 different classes. To classify charts, the authors' proposed techniques utilize deep features, deep texture features, and a combination of both. Among these baseline classification techniques, the authors observed that combining deep features and deep texture features classifies images more efficiently than individual features. The average classification accuracy improved by 3.94% and 2.10% by concatenating FC-CNN and FV-CNN over the individual use of FC-CNN and FV-CNN, respectively. The overall accuracy of the combined feature method turned out to be 92.90%.
Luo et al. proposed a unified method to handle various chart styles<cit.>, where they show that generalization can be obtained in deep learning frameworks with rule-based methods. The experiments were performed on three different datasets of over 300,000 images with three chart categories. In addition to the framework, an evaluation metric for the bar, line, and pie charts is also introduced. The authors concluded that the proposed framework performs better than traditional rules-based and pure deep learning methods.
Araújo et al.<cit.> implemented four classic CNN models that performed well on computer vision tasks, including Xception<cit.>, VGG19<cit.>, ResNet152<cit.>, and MobileNet<cit.>. The weights of these models were pre-trained on the ImageNet dataset, and the authors further performed hyperparameter tuning to obtain a stable learning rate and weight decay. These models were employed on a self-aggregated chart image corpus of 21,099 images with 13 different chart categories. Xception outperforms the other models, achieving an accuracy of 95%.
The problem of small datasets has been prevalent since the problem of chart mining was first introduced. Most work tries to increase the size of the dataset. However, Bajic and Job<cit.> use a Siamese CNN network to work with smaller datasets. The authors show that an accuracy of 100% can be achieved with 50 images per class, which is significantly better than using a vanilla CNN.
With the increase in datasets for chart images and the rise of deep learning models being employed on said datasets, an empirical study of these deep learning models was due. Thiyam et al.<cit.> compared 15 different deep learning models on a self-aggregated dataset of 110,182 images spanning 24 different chart categories. In addition, the authors tested the performance of these models on several preexisting test sets. They concluded that Xception (90.25%) and DenseNet121 (90.12%) provide the most consistent and stable performance of all the deep learning models. The authors arrived at this conclusion by employing a five-fold cross-validation technique and calculating the standard deviation for each model across all datasets.
Davila et al.<cit.> summarized the work of the participants in the first edition of the Competition on Harvesting Raw Tables from Infographics, which provided data and tools for the chart recognition community. Two data sets were provided for the classification task: a synthetically generated AdobeSynth dataset and the UB-PMC data set gathered from the PubMedCentral open-access library. The highest average F1-measure achieved for the synthetic data set was 99.81%, and the highest F1-measure achieved for the PMC data set was 88.29%. In the second edition of the competition, the PMC set was improved and included in the training phase. An ensemble of ResNet152 and DenseNet121 achieved the highest F1-score of 92.8%. The third edition of the competition was recently held at ICPR 2022, with a corpus of 36,183 real chart images. The winning team achieved an F1-score of 91% with a base Swin Transformer model and a progressive resizing technique. We summarize the competition details in Table <ref>.
§.§ Transformer-based Deep Learning Approaches
Since the inception of the Vision Transformer, there has been a lot of development in various computer vision tasks such as image classification, object detection, and image segmentation. Vision transformers have outperformed CNN-based models in these tasks on the ImageNet dataset. However, there has not been widespread application of vision transformers to chart image classification.
To our knowledge, only the Swin Transformer<cit.> has been used for chart classification, as reported in <cit.>, which won the ICPR 2022 CHART-Infographics challenge. The authors applied a Swin Transformer Base model with a progressive resizing technique: the models were initially trained at a scale (input size) of 224, followed by 384<cit.>.
The existing models in the literature are summarized in Table 2.
§ CHART CLASSIFICATION DATASETS
There has been a significant increase in the size of datasets both in terms of the number of samples and the number of chart types. The Revision dataset<cit.> had only 2,601
images and 10 chart types. The recent publicly available dataset<cit.> comprises around 33,000 chart images of 15 different categories. The details of several publicly available datasets are discussed in this section.
ChartSense <cit.>:
The ChartSense dataset was put together using the ReVision dataset, and the authors manually added some additional charts. The corpus has 5,659 chart images that cover ten chart categories.
ChartVega <cit.>:
This dataset has ten chart types and was created due to a need for a benchmark dataset for chart image classification<cit.>. The dataset contains both synthetic and real chart images. The set contains 14,471 chart images, of which 12,059 are for training and 2,412 are for testing. In addition, a validation set of 2,683 real chart images is provided. No separate annotations are provided, as chart images are separated according to their types.
DocFigure <cit.>:
This corpus consists of 28 categories of annotated figure images. There are 33,000 images that include non-chart categories like natural images, tables, 3D objects, and medical images. The train set consists of 19,797 images, and the test set contains 13,173 images. The labels are provided in a text document.
ChartOCR <cit.>: The dataset contains 386,966 chart images created by the authors by crawling public excel sheets online. The dataset contains only three classes of chart images. The dataset is divided into the train, validation, and test sets. The training corpus contains 363,078 images, the validation set contains 11,932 images, and the test set contains 11,965 images. The annotations for the chart images are provided in JSON format.
UB-PMC CHART-Infographics: This dataset was introduced in the first edition of Competition on Harvesting Raw Tables from Infographics (ICPR 2019 CHART Infographics)<cit.>. This dataset has synthetic images created using matplotlib. For the testing, a large set of synthetic data and a small set of real chart images harvested from PubMedCentral[https://www.ncbi.nlm.nih.gov/pmc/] were used. The training set has 198,010 images, whereas the synthetic test set has 4,540 images, and the real test set has 4,242 images. The dataset has ten different chart categories.
The second edition of the competition<cit.> provided a dataset containing 22,923 real chart images of 15 different chart categories in both training and testing sets. The training set has 15,636 images, while the test set has 7,287 images. The annotations for the chart image samples are provided in both JSON and XML formats.
The dataset presented as a part of the third and most recent competition comprises 36,183 images of 15 different chart categories. The training set contains 22,923 images, while the test set contains 13,260 images. Similar to the previous edition, the annotations are provided in JSON and XML formats.
To the best of our knowledge, this is the largest publicly available dataset for chart image classification.
The existing classification data sets for charts are summarized in Table <ref>, and the composition of the publicly available datasets is reported in Table <ref>.
§ DEEP LEARNING MODELS FOR COMPARATIVE ANALYSIS
In this section, we briefly discuss prominent deep learning models that have been used to study the performance of chart classification. We have selected two categories of deep learning models for the comparative study: CNN-based and transformer-based. For CNN-based models, we have considered the proven state-of-the-art models for image classification on the large-scale benchmark dataset ImageNet<cit.> over the years. For vision transformer models, we have chosen models that have been shown to outperform CNN-based models in computer vision tasks.
§.§ ResNet<cit.>
The Deep Residual Network was introduced in 2015 and was significantly deeper than previous deep learning networks. The motivation behind the model was to address the degradation problem: degrading training accuracy with increasing depth of the model. The authors added shortcut connections, also known as skip connections, that perform the proposed identity mapping and are significantly easier to optimize than unreferenced mappings. Despite being deeper than previous models, ResNet remains less complex than architectures such as VGG. It achieved a top-5 error of 3.57% and claimed the top position in the 2015 ILSVRC classification competition<cit.>. We use a 152-layer version of this Deep Residual Network, called ResNet-152, for our classification problem.
§.§ Xception<cit.>
Xception is a reinterpretation of the Inception architecture in which the Inception modules are replaced with depthwise separable convolutions. The number of parameters in Inception V3 and Xception is the same, so the slight performance improvement is due to the more efficient use of parameters. Xception shows a larger performance improvement over Inception V3 on the JFT dataset than on the ImageNet dataset, where it achieves a top-5 accuracy of 94.5%. Xception also shows promising results in the chart classification literature, as reported by <cit.> and <cit.>.
§.§ DenseNet<cit.>
The Dense Convolutional Network, introduced in 2017, connects each layer in the network architecture to all other layers. This allows for the exchange of feature maps at every level: each layer receives the feature maps of all preceding layers as input rather than only that of the immediately preceding layer. The difference between DenseNet and ResNet lies in the way they combine features: ResNet combines features through summation, whereas DenseNet combines them through concatenation. DenseNet is easier to train due to the improved flow of gradients and other information through the network. The vanilla DenseNet has fewer parameters than the vanilla ResNet network. We used DenseNet-121 for our classification task as it was one of the best models for the chart image dataset, as reported in <cit.>.
§.§ ConvNeXt<cit.>
The ConvNeXt model was introduced as a response to hierarchical transformers outperforming convnets in image classification tasks. Starting with a standard ResNet architecture, the model is carefully modified to adopt the specific characteristics of a typical hierarchical transformer. This resulted in a CNN-based model that matches the transformers in robustness and scalability across all benchmarks. ConvNeXt achieves a top-1 accuracy of 87.8% on ImageNet.
§.§ DeIT Transformer<cit.>
The authors proposed the Data-Efficient Image Transformer (DeIT) with 86M parameters to make vision transformers more widely adoptable. This convolution-free approach achieves competitive results against the existing state-of-the-art models on ImageNet; the proposed vision transformer achieved a top-1 accuracy of 85.2% on the ImageNet classification task. We use the Base DeIT transformer for the chart classification task.
§.§ Swin Transformer<cit.>
The Swin Transformer is a hierarchical transformer that employs shifted windows to obtain representations for vision tasks. The authors note that the hierarchical architecture provides linear computational complexity and scalability with respect to image size. Self-attention is computed within non-overlapping local windows, while the window shifting allows for cross-window connections. These qualities contribute to the Swin Transformer's excellent performance across computer vision tasks. It achieves 87.3% top-1 accuracy on the ImageNet-1k dataset. We perform experiments with all 13 available Swin Transformer models and report their performance in Table <ref>. Furthermore, we refer to the best-performing Swin Transformer model as Swin-Chart in Table <ref>.
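As a rough illustration of how the backbones discussed in this section could be instantiated with ImageNet-pretrained weights and a 15-way chart classification head, the snippet below uses the timm library; the model identifiers are assumptions and may differ from the exact checkpoints used in our experiments.

```python
# Hedged sketch: instantiate the compared backbones with ImageNet-pretrained
# weights and a 15-class head. The timm model names below are assumptions,
# not necessarily the exact checkpoints used in the experiments.
import timm

NUM_CLASSES = 15  # chart categories in the ICPR 2022 UB-PMC dataset
backbones = [
    "resnet152", "densenet121", "xception", "convnext_base",    # CNN-based
    "deit_base_patch16_224", "swin_large_patch4_window7_224",   # transformer-based
]
models = {
    name: timm.create_model(name, pretrained=True, num_classes=NUM_CLASSES)
    for name in backbones
}
```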
§ EXPERIMENTAL PROTOCOL
§.§ Dataset
We use the ICPR2022 CHARTINFO UB PMC<cit.> dataset to perform our comparative study of deep learning models. The dataset is divided into training and testing sets. The number of chart images in the training and test sets is 22,923 and 11,388, respectively. The ground truth values are annotated in JSON and XML formats. We further divide the provided training set into training and validation sets with an 80/20 ratio. The dataset contains charts of 15 categories: area, map, heatmap, horizontal bar, Manhattan, horizontal interval, line, pie, scatter, scatter-line, surface, Venn, vertical bar, vertical box, and vertical interval. Samples of each chart type present in the dataset are shown in Figure <ref>.
§.§ Training and Testing Setup
We choose the ResNet152, DenseNet121, Xception, and ConvNeXt CNN-based models and the DeIT and Swin transformer-based models for chart image classification. The CNN-based models were selected based on their performance in the existing literature on the ImageNet image classification task. The transformer-based models were chosen because they have been shown to outperform CNN-based models. We use the pre-trained ImageNet weights of these models and fine-tune them for our chart classification task. The models are trained on a computer with an RTX 3090 video card with 24 GB memory. PyTorch<cit.> was used as the engine for our experiments. We use a batch size of 64 for CNN-based models and a batch size of 16 for transformer-based models. A learning rate of 10^-4 is used to train each model for 100 epochs. Label smoothing cross-entropy loss is used as the loss function. The evaluation averages over all classes and reports precision, recall, and F1-score.
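A condensed sketch of this fine-tuning recipe is given below. The batch sizes, learning rate, and epoch count follow the setup described above, while the dataset directory layout, the optimizer choice (Adam), and the label-smoothing factor of 0.1 are assumptions for illustration.

```python
# Hedged sketch of the fine-tuning loop described above. Directory layout,
# optimizer (Adam), and smoothing factor 0.1 are assumptions; the batch size,
# learning rate, and epoch count follow the text.
import timm
import torch
from torch import nn
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
full_train = datasets.ImageFolder("ub_pmc_2022/train", transform=tfm)  # hypothetical path
n_train = int(0.8 * len(full_train))
train_set, val_set = random_split(full_train, [n_train, len(full_train) - n_train])

device = "cuda" if torch.cuda.is_available() else "cpu"
model = timm.create_model("swin_large_patch4_window7_224",
                          pretrained=True, num_classes=15).to(device)

loader = DataLoader(train_set, batch_size=16, shuffle=True)   # 64 for the CNN backbones
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)

for epoch in range(100):
    model.train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)   # label-smoothing cross entropy
        loss.backward()
        optimizer.step()
```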
§.§ Comparative Results
The models were trained following the steps mentioned in the previous section and were tested on the UB-PMC test data set. We calculate the average precision, recall, and F1-score of all deep learning models. Among CNN-based models, ResNet-152 and ConvNeXt provide the best results across all evaluation metrics. The ResNet-152 result is consistent with the results in <cit.> for CNN-based models. For the Swin Transformer, we perform experiments on 13 models consisting of Swin Tiny (SwinT), Swin Small (SwinS), Swin Base (SwinB), and Swin Large (SwinL) and their variants. SwinL with input image dimension 224 performs best with an F1-score of 0.932 and is further referred to as Swin-Chart. The scores of all the Swin Transformer models are summarized in Table <ref>. The best-performing CNN-based models fail to compete with Swin-Chart for the chart classification task, as it outperforms the other five models with an average F1-score of 0.932. The scores for the deep learning models are summarized in Table <ref>.
Furthermore, we compare our best-performing model (Swin-Chart) with the models reported in <cit.>. This comparison is summarized in Table <ref>. We note that Swin-Chart surpasses the winner of the ICPR 2022 CHART-Infographics competition with an average F1-score of 0.931.
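For reference, the class-averaged scores reported in the tables correspond to macro-averaged metrics, which could be computed with scikit-learn as sketched below; the label lists are placeholders.

```python
# Macro-averaged precision/recall/F1 over the chart classes, as reported in
# the tables. y_true and y_pred are placeholder label lists for illustration.
from sklearn.metrics import precision_recall_fscore_support

y_true = ["line", "pie", "vertical bar", "scatter"]   # ground-truth labels (placeholder)
y_pred = ["line", "pie", "vertical bar", "heatmap"]   # model predictions (placeholder)

precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0)
print(f"precision={precision:.3f} recall={recall:.3f} F1={f1:.3f}")
```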
§ FUTURE DIRECTIONS
Although there has been a significant increase in published articles on chart classification, several problems still need to be addressed.
§.§ Lack of Standard Benchmark Data Sets
The chart image classification problem has been extensively addressed in previous work. Efforts have been made to increase the size of chart image datasets that also cover a wide variety of charts<cit.>. With the growing literature in various domains, authors are finding creative ways to use different charts. This adds to the variety of chart types. Integrating such diverse chart types while creating chart datasets remains an open challenge. In addition, the popularity of charts such as bar, line, and scatter over others such as Venn, surface, and area adds to the problem of disparity between the number of samples in particular chart types.
§.§ Lack of Robust Models
Recent work makes some problematic assumptions in addressing this problem<cit.>. The lack of a diverse benchmark dataset adds to this problem, as model performance is not consistent across publicly available datasets. The inherent intra-class dissimilarity and inter-class similarity of several chart types also affect model performance.
§.§ Inclusion of Noise
Most of the work in the existing literature ignores the effect of noise. Different types of noise, such as background grids, low image quality, composite charts, and multiple components alongside figures, lead to poor performance for models that perform exceptionally well on noiseless data<cit.>. Providing, in addition to the noiseless chart image dataset, a small set of chart images that incorporates noisy samples would help fine-tune the models to handle such noise and become invariant to it.
§ CONCLUSION
We have provided a brief survey of existing chart classification techniques and datasets. We used a Transformer model to obtain state-of-the-art results. Although there has been a significant development both in terms of variety in models and in the size of datasets, we observe that the chart classification problem still needs to be solved, especially for noisy and low-quality charts. Our comparative study showed that Swin-Chart outperforms the other vision transformer and CNN-based models on the latest UB-PMC dataset. In the future, we plan to generalize the results of the Swin-Chart over other publicly available datasets and try to bridge the gap to a robust deep-learning model for chart image classification.
amara_17
Amara, J. et al.: Convolutional Neural Network Based Chart Image Classification. In: International Conference in Central Europe on Computer Graphics, Visualization, and Computer Vision. (2017).
araujo_20
Araújo, T. et al.: A Real-World Approach on the Problem of Chart Recognition Using Classification, Detection, and Perspective Correction. Sensors. 20, 16, 4370 (2020).
bajic_20
Bajić, F. et al.: Data Visualization Classification Using Simple Convolutional Neural Network Model. In: International Journal of Electrical and Computer Engineering Systems (IJECES). 11, 1, 43–51 (2020).
bajic_21
Bajić, F., Job, J.: Chart Classification Using Siamese CNN. Journal of Imaging. 7, 220 (2021).
balaji_18
Balaji, A. et al.: Chart-Text: A Fully Automated Chart Image Descriptor. ArXiv (2018).
chagas_18
Chagas, P. et al.: Evaluation of Convolutional Neural Network Architectures for Chart Image Classification. In:International Joint Conference on Neural Networks (IJCNN). pp. 1–8 (2018).
cheng_13
Cheng, B. et al.: Graphical chart Classification Using Data Fusion for Integrating Text and Image Features. In: Proceedings of the International Conference on Document Analysis and Recognition, ICDAR. (2013).
chollet_17_xception
Chollet, F.: Xception: Deep Learning with Depthwise Separable Convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2017).
dai_18
Dai, W. et al.: Chart decoder: Generating textual and numeric information from chart images automatically. Journal of Visual Languages & Computing. 48, 101–109 (2018).
davila_19
Davila, K. et al.: ICDAR Competition on Harvesting Raw Tables from Infographics (CHART-Infographics). In: International Conference on Document Analysis and Recognition (ICDAR). pp. 1594–1599 IEEE, Sydney, Australia (2019).
davila_19_survey
Davila, K. et al.: Chart Mining: A Survey of Methods for Automated Chart Analysis. In: IEEE Transactions on Pattern Analysis and Machine Intelligence. 43, 11, 3799–3819 (2021).
davila_20
Davila, K. et al.: ICPR 2020 - Competition on Harvesting Raw Tables from Infographics. In: Pattern Recognition. ICPR International Workshops and Challenges: Virtual Event, pp. 361-380 (2021).
davila_22
Davila, K. et al.: ICPR: Challenge on Harvesting Raw Tables from Infographics (CHART-Infographics). In: 26th International Conference on Pattern Recognition (ICPR), pp.4995-5001. (2022).
gao_12
Gao, J. et al.: View: Visual Information Extraction Widget for improving chart images accessibility. In: 19th IEEE International Conference on Image Processing. pp. 2865–2868 (2012).
he_15_resnet
He, K. et al.: Deep Residual Learning for Image Recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016).
howard_17_mobilenet
Howard, A.G. et al.: MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications, http://arxiv.org/abs/1704.04861, (2017).
huang_18_densenet
Huang, G. et al.: Densely Connected Convolutional Networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2017).
jung_17
Jung, D. et al.: ChartSense: Interactive Data Extraction from Chart Images. In: Proceedings of the 2017 CHI conference on human factors in computing systems (2017).
karthikeyani_12
Karthikeyani, V., Nagarajan, S.: Machine Learning Classification Algorithms to Recognize Chart Types in Portable Document Format (PDF) Files. IJCA. 39, 2, 1–5 (2012).
krizhevsky_12_alexnet
Krizhevsky, A. et al.: ImageNet Classification with Deep Convolutional Neural Networks. In: Advances in Neural Information Processing Systems. (2012).
kv_19
kv, J. et al.: DocFigure: A Dataset for Scientific Document Figure Classification. In: International Conference on Document Analysis and Recognition Workshops (ICDARW). (2019).
liu_15
Liu, X. et al.: Chart classification by combining deep convolutional networks and deep belief networks. In: 13th International Conference on Document Analysis and Recognition (ICDAR). pp. 801–805 (2015).
liu_19
Liu, X. et al.: Data Extraction from Charts via Single Deep Neural Network. In: arXiv preprint arXiv:1906.11906 (2019).
liu_21_swin
Liu, Z. et al.: Swin Transformer: Hierarchical Vision Transformer using Shifted Windows. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (2021).
liu_22_convnet
Liu, Z. et al.: A ConvNet for the 2020s. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (2022).
luo_21
Luo, J. et al.: ChartOCR: Data Extraction from Charts Images via a Deep Hybrid Framework. In: IEEE Winter Conference on Applications of Computer Vision (WACV). pp. 1916–1924 IEEE, Waikoloa, HI, USA (2021).
paszke_19
Paszke, A. et al.: PyTorch: an imperative style, high-performance deep learning library. In: Proceedings of the 33rd International Conference on Neural Information Processing Systems. pp. 8026–8037 Curran Associates Inc., Red Hook, NY, USA (2019).
russakovsky_15
Russakovsky, O. et al.: ImageNet Large Scale Visual Recognition Challenge. In: International journal of computer vision. pp.211-252, (2015).
savva_11
Savva, M. et al.: ReVision: automated classification, analysis and redesign of chart images. In: Proceedings of the 24th annual ACM symposium on User interface software and technology. pp. 393–402 Association for Computing Machinery, New York, NY, USA (2011).
siegel_16
Siegel, N. et al.: FigureSeer: Parsing Result-Figures in Research Papers. In: Leibe, B. et al. (eds.) Computer Vision – ECCV 2016. pp. 664–680 Springer International Publishing, Cham (2016).
simonyan_15_vgg
Simonyan, K., Zisserman, A.: Very Deep Convolutional Networks for Large-Scale Image Recognition, http://arxiv.org/abs/1409.1556, (2015).
szegedy_14_googlenet
Szegedy, C. et al.: Going Deeper with Convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2015).
szegedy_15_v3
Szegedy, C. et al.: Rethinking the Inception Architecture for Computer Vision. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016).
thiyam_21
Thiyam, J. et al.: Challenges in chart image classification: a comparative study of different deep learning methods. In: Proceedings of the 21st ACM Symposium on Document Engineering. pp. 1–4 Association for Computing Machinery, New York, NY, USA (2021).
thiyam_22
Thiyam, J. et al.: Chart classification: an empirical comparative study of different learning models. Presented at the December 19 (2021).
touvron_21_deit
Touvron, H. et al.: Training data-efficient image transformers & distillation through attention. In: Proceedings of the 38th International Conference on Machine Learning. pp. 10347–10357 PMLR (2021).
|
http://arxiv.org/abs/2307.06224v1 | 20230712151611 | Surfaces in which every point sounds the same | [
"Feng Wang",
"Emmett L. Wyman",
"Yakun Xi"
] | math.AP | [
"math.AP",
"math.CA",
"math.DG",
"math.SP"
] |
We address a maximally structured case of the question, “Can you hear your location on a manifold," posed in <cit.> for dimension 2. In short, we show that if a compact surface without boundary sounds the same at every point, then the surface has a transitive action by the isometry group. In the process, we show that you can hear your location on Klein bottles and that you can hear the lengths and multiplicities of looping geodesics on compact hyperbolic quotients.
July 12, 2023
§ INTRODUCTION
<cit.> poses the following question: If you find yourself standing at some unknown position in a familiar Riemannian manifold, is it possible to deduce one's location, up to symmetry, by clapping your hands once and listening to the reverberations?
To phrase this question precisely, we require a bit of setup. We let (M,g) be a compact Riemannian manifold. We consider the natural generalization of the Laplacian to (M,g), the Laplace-Beltrami operator, written locally by
Δ_g = |g|^-1/2∑_i,j∂_i( g^ij |g|^1/2∂_j ).
By the spectral theorem, L^2(M) admits an orthonormal basis of Laplace-Beltrami eigenfunctions, e_1, e_2, … satisfying
Δ_g e_j = -λ_j^2 e_j.
If the manifold has a boundary, we require some appropriate boundary conditions (e.g. Dirichlet or Neumann) to make the Laplace-Beltrami operator self-adjoint. The pointwise Weyl counting function is defined as
N_x(λ) = ∑_λ_j ≤λ |e_j(x)|^2
and is independent of the choice of orthonormal basis. The question above is then phrased as follows.
[<cit.>] Take (M,g) as above, and suppose x, y ∈ M. If N_x(λ) = N_y(λ) for all λ, must there exist an isometry M → M mapping x to y?
From here on out, we will say that echolocation holds for a manifold if the question above is answered in the affirmative for that manifold.
To see how the pointwise counting function N_x is related to the physical interpretation of the problem as described above, we consider the solution operator cos(t√(-Δ_g)) taking initial data f to the unique solution of the initial value problem
(Δ_g - ∂_t^2) u = 0 and
u(0) = f
∂_t u(0) = 0.
We recall (or quickly verify) the Schwartz kernel of the solution operator is given in terms of the eigenbasis by
cos(t√(-Δ_g))(x,y) = ∑_j cos(tλ_j) e_j(x) e_j(y).
Taking a formal cosine transform of the measure dN_x yields
∫_-∞^∞cos(tλ) dN_x(λ) = ∑_jcos(t λ_j) |e_j(x)|^2 = cos(t√(-Δ_g))(x,x).
We interpret the rightmost side as the distributional solution u of the wave equation above with initial data f = δ_x, and evaluated at (t,x). Here, the initial δ_x represents our sharp “snap" whose reverberations we are listening to at point x for all t ≥ 0. Furthermore, since N_x is real and supported on [0,∞), dN_x is uniquely identified by its cosine transform on [0,∞). Hence, there is no loss of information when passing between interpretations of the problem.
In <cit.>, we prove that echolocation holds for boundaryless manifolds equipped with generic metrics. Generically, the group of isometries on M is trivial, and so this can be thought of as the minimally structured case. This paper starts to address the maximally structured case. Namely, if N_x is the same function for all x in M, must the isometry group act transitively on M? In this paper, we answer this question for two-dimensional surfaces without boundary.
Let (M,g) be a compact Riemannian surface
with empty boundary. Then N_x(λ) is constant in x for each λ if and only if the isometry group acts transitively on M.
As a byproduct, we also prove the following.
Let (M,g) be a flat Klein bottle. Then echolocation holds on M despite the fact that N_x(λ) is not constant in x.
§.§ Acknowledgements
Xi was supported by the National Key Research and Development
Program of China No. 2022YFA1007200
and NSF China Grant No. 12171424. Wang was supported by NSF China Grant No. 12031017 and Zhejiang Provincial Natural Science Foundation of China under Grant No. LR23A010001. Wyman was supported by NSF grant DMS-2204397. The authors would like to thank Wenshuai Jiang for helpful conversations.
§ THE STRATEGY OF PROOF FOR THEOREM <REF>
It is clear that if there is an isometry on M mapping x to y, then N_x must equal N_y, and thus the necessity direction of Theorem <ref> is trivial. We shall prove the remaining direction. We will need to extract two types of information from N_x: local information in the form of curvature, and global information regarding the behavior of geodesics. We begin with the former. By classical pointwise asymptotics for the heat kernel <cit.>, we have for small t > 0,
∫_-∞^∞ e^-tλ^2 dN_x(λ) = ∑_j e^-tλ_j^2 |e_j(x)|^2 = e^tΔ_g(x,x)
= 1/(4π t)( 1 + t/3 K(x) + O(t^2) ),
where, since dim M = 2, K(x) is the sectional curvature at x. It follows from the hypotheses that M has constant curvature. After rescaling the metric, there are only three cases:
* K = 1, where M is the sphere or the projective sphere with the standard metric.
* K = 0, where M is a flat torus or flat Klein bottle.
* K = -1, where M is a compact quotient of the hyperbolic plane ℍ.
Note the conclusion of Theorem <ref> holds in case (1) since spheres and real projective spaces are symmetric spaces. It also holds if M is orientable in case (2). We aim to exclude Klein bottles from case (2) and all of case (3). We will do the former in Section <ref> by direct calculation and the latter in Section <ref> by leveraging the global information we can extract from N_x, namely information about geodesic loops at x.
A geodesic loop at x is a geodesic segment in M with both endpoints at x. Note, a geodesic loop need not close smoothly. The looping times at x is the set
ℒ_x := { |γ| : γ is a looping geodesic at x}
of lengths of looping geodesics.
The relationship between looping geodesics and the behavior of the pointwise counting function is very well studied <cit.>. In particular, for fixed x,
sing supp_t cos(t√(-Δ_g))(x,x) ⊂ℒ_x ∪ -ℒ_x.
One can think of the looping times as the times at which you hear an echo after your initial clap. For the sake of Question <ref>, it would be convenient to show that the inclusion above is actually an equality, but this does not seem to hold in general.
In the special case where (M,g) has constant curvature -1, however, the looping times ℒ_x and even the multiplicities of the looping geodesics are audible, as we will show in Lemma <ref>. This will be used to derive the required contradiction to exclude case (3).
§ ECHOLOCATION ON A KLEIN BOTTLE
In this section, we explicitly compute the pointwise Weyl counting function N_x(λ) on a Klein bottle and prove Theorem <ref>.
We shall follow the definition of a Klein bottle 𝕂_a,b in <cit.>. A point in 𝕂_a,b is identified with a point x=(x_1,x_2) in the rectangle [0,a/2]×[0,b], with its horizontal sides identified with the same orientation and the vertical sides identified with the opposite orientations. It is shown in <cit.> that a complete family of real eigenfunctions of 𝕂_a,b is given by the following functions.
cos(2π nx_2/b), for m=0, n∈ℕ,
cos(2π mx_1/a)cos(2π nx_2/b), sin(2π mx_1/a)cos(2π nx_2/b), for even m∈ℤ^+, n∈ℕ,
cos(2π mx_1/a)sin(2π nx_2/b), sin(2π mx_1/a)sin(2π nx_2/b), for odd m∈ℤ^+, n∈ℤ^+.
After normalization, we obtain an orthonormal basis of eigenfunctions, e_λ_m,n, with eigenvalues λ_m,n:=2π√(m^2/a^2+n^2/b^2). We record the following table for the norm squared of each eigenfunction.
First suppose 1/b< 2/a. Then we have
∑_λ_m,n=2π/b|e_λ_m,n|^2(x)=(4/ab)cos^2(2π x_2/b),
which is not constant in x_2, and thus N_x is not constant in x in this case. For the remaining case, when 1/b≥ 2/a,
it is again easy to see that N_x(λ) cannot be constant in x. In fact, since 0< 1/b<√(1/a^2+1/b^2), the multiplicity of λ_m,n=2π/b will be one, unless 1/b=2ℓ/a for some ℓ∈ℤ^+. Therefore we must have
∑_λ_m,n=2π/b|e_λ_m,n|^2(x)=(4/ab)cos^2(2π x_2/b)+(2/ab), if 1/b=2ℓ/a for some ℓ∈ℤ^+,
∑_λ_m,n=2π/b|e_λ_m,n|^2(x)=(4/ab)cos^2(2π x_2/b), otherwise.
In either case, N_x is not constant in x. Furthermore, we observe that any point x∈𝕂_a,b can be mapped, via a self-isometry of 𝕂_a,b, to a point x̃∈{0}×[0,b/4]. To see this, we first notice that any point can be associated with a point on x̃∈{0}×[0,b] via horizontal translations. Now we claim that if we divide 𝕂_a,b evenly into four horizontal strips, say A, B, C, and D, then these strips can be mapped to one another via suitable isometries.
Indeed, since our representation of 𝕂_a,b is centrosymmetric, we see that A can be mapped to D and B can be mapped to C if we turn 𝕂_a,b 180 degrees about its center. Next, we notice that we can cut 𝕂_a,b along the middle horizontal line y=b/2
and then patch A and D together again to recover 𝕂_a,b. See Figure <ref>. This shows that A can be mapped to C and B can be mapped to D, and our claim is proved. Since cos^2(2π x_2/b) is always audible, we conclude that echolocation holds on 𝕂_a,b.
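As a quick numerical illustration of the computation above (not part of the proof), one can enumerate the listed eigenfunctions, normalize them numerically over the fundamental domain, and evaluate the pointwise counting function at two points with different x_2. The parameters a, b, and the eigenvalue cutoff in the sketch below are arbitrary choices.

```python
# Numerical sanity check (illustration only): N_x(lambda) on a flat Klein
# bottle K_{a,b} depends on x_2. The eigenfunctions are the real family
# listed above; normalization is done numerically on the fundamental domain
# [0, a/2] x [0, b]. The parameters a, b, LAM are arbitrary choices.
import numpy as np

a, b, LAM = 2.0, 1.0, 30.0
x1, x2 = np.meshgrid(np.linspace(0, a / 2, 160, endpoint=False),
                     np.linspace(0, b, 320, endpoint=False), indexing="ij")
dA = (a / 2) * b / x1.size                      # grid area element

def eigenfunctions():
    M = int(LAM * a / (2 * np.pi)) + 1
    N = int(LAM * b / (2 * np.pi)) + 1
    for m in range(M + 1):
        for n in range(N + 1):
            if 2 * np.pi * np.hypot(m / a, n / b) > LAM:
                continue
            c1, s1 = np.cos(2 * np.pi * m * x1 / a), np.sin(2 * np.pi * m * x1 / a)
            c2, s2 = np.cos(2 * np.pi * n * x2 / b), np.sin(2 * np.pi * n * x2 / b)
            if m == 0:
                yield c2
            elif m % 2 == 0:
                yield c1 * c2
                yield s1 * c2
            elif n >= 1:
                yield c1 * s2
                yield s1 * s2

def N_at(i1, i2):
    total = 0.0
    for e in eigenfunctions():
        e = e / np.sqrt((e ** 2).sum() * dA)    # numerical L^2 normalization
        total += e[i1, i2] ** 2
    return total

# Same x_1, different x_2: the two values differ, so N_x is not constant.
print(N_at(0, 0), N_at(0, 80))
```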
§ EXCLUDING COMPACT HYPERBOLIC SURFACES
Consider the audible distribution
cos(t √(-Δ_g))(x,x) = ∫_-∞^∞cos(tλ) dN_x(λ)
in t as described in the introduction. In <cit.>, Bérard lifts the cosine wave kernel to the universal cover (^2, g̃) by the identity
cos(t √(-Δ_g))(x,x) = ∑_γ∈Γcos(t √(-Δ_g̃))(x̃, γ(x̃)),
where here Γ is the deck group and x̃ is a lift of x. The fact that there are no conjugate points on (ℝ^2, g̃) allows Bérard to use Hadamard's parametrix globally. We will do the same.
Before writing down the parametrix, we recall some standard notions. We say a smooth function a on ℝ^n ×ℝ^N is a symbol of order m if it satisfies, for each compact K ⊂ℝ^n and multiindices α and β,
sup_x ∈ K |∂_θ^α∂_x^β a(x,θ)| ≤ C_K,α,β (1 + |θ|)^m - |α|.
We denote the set of symbols of order m on ℝ^n ×ℝ^N as S^m(ℝ^n ×ℝ^N). For more on symbols and symbol classes, see e.g. <cit.>.
We will summarize what we need of the Hadamard parametrix from <cit.>. We write
cos(t√(-Δ_g̃))(x̃, ỹ) = K_N(t, x̃, ỹ) + R_N(t, x̃, ỹ)
where we will characterize K_N as an oscillatory integral and R_N can be made to be quite smooth. Combining Remark 1.2.5, (3.6.10), and (5.2.16) of <cit.>, we have
K_N(t,x̃, ỹ) = (2π)^-2α_0(x̃, ỹ) ∫_ℝ^2 e^i φ(x̃, ỹ, ξ) ± i tp(ỹ, ξ) a_±(t, x̃, ỹ, ξ) dξ
where
a_±(t,x̃, ỹ, ξ) - 1/2α_0(x̃, ỹ) ∈ S^-2(ℝ^1 + 2 + 2×ℝ^2)
for some appropriate phase function φ(x̃, ỹ, ξ) and an appropriate smooth function α_0(x̃, ỹ), which we will now describe.
Since M̃ is nonpositively curved, the exponential map exp_ỹ : T_ỹM̃→M̃ is a diffeomorphism. We use the logarithm log_ỹ : M̃→ T_ỹM̃ to denote its inverse. Remark 1.2.5 of <cit.> characterizes the phase function above as
φ(x̃, ỹ, ξ) = ⟨log_ỹ(x̃), ξ⟩.
The leading coefficient α_0 is characterized as
α_0(x̃, ỹ) = |g(log_ỹ(x̃))|^-1/4,
where the metric g is that of geodesic normal coordinates about ỹ. Finally, the remainder term R_N is C^N - 5 in (t,x̃, ỹ) by the discussion after (5.2.18). Furthermore, by Huygens' principle, the remainder term R_N can be made to be supported in d_g̃(x̃, ỹ) ≤ 2|t|.
We will use the parametrix above to establish the following key asymptotic quantity:
Let (M,g) be a boundaryless Riemannian surface with nonpositive sectional curvature. Let χ be a Schwartz-class function on ℝ with χ supported in a compact subset of ℝ^+.
∫_-∞^∞ e^-it λχ(t) cos(t√(-Δ_g))(x,x) dt
= (2π)^-1/2λ^1/2∑_γ∈Γ e^π i / 4 e^-i λ d_g̃(x̃, γ(x̃))α(x̃, γ(x̃))/d_g̃(x̃, γ(x̃))^1/2χ(d_g̃(x̃, γ(x̃))) + O(λ^-1/2).
By lifting to the universal cover and using Hadamard's parametrix above, we write
∫_-∞^∞ e^-it λχ(t) cos(t√(-Δ_g))(x,x) dt
= ∑_γ∈Γ∫_-∞^∞ e^-itλχ(t) cos(t√(-Δ_g̃))(x̃, γ(x̃)) dt
= I + II
where we have main term
I = (2π)^-2∑_γ∈Γ∑_±∫_-∞^∞∫_ℝ^2 e^i⟨log_ỹ(x̃), ξ⟩± it|ξ| - itλχ(t) a_±(t, x̃, γ(x̃), ξ) dξ dt
and remainder term
II = ∑_γ∈Γ∑_±∫_-∞^∞ e^-itλχ(t) R_N(t, x̃, γ(x̃)) dt.
Note, term II can be made to decay in λ of arbitrary polynomial order by taking N large enough and integrating by parts in t. Hence, it contributes nothing to the main term in the lemma, and we turn our attention to I. We first note that if the `±' sign in the exponent is negative, then integration by parts in t yields a rapidly-decaying term which we also neglect. Up to negligible terms, we have
I = (2π)^-2∑_γ∈Γ∫_-∞^∞∫_ℝ^2 e^i⟨log_x̃(γ(x̃)), ξ⟩ + it(|ξ| - λ)χ(t) a_±(t, x̃, γ(x̃), ξ) dξ dt.
We perform a change of variables ξ↦λξ and write this term as
= (2π)^-2λ^2 ∑_γ∈Γ∫_-∞^∞∫_ℝ^2 e^iλ ( ⟨log_x̃(γ(x̃)), ξ⟩ + t(|ξ| - 1))χ(t) a_±(t, x̃, γ(x̃), λξ) dξ dt.
Let β be a smooth bump function with compact support in (1/2,2) taking the value 1 on a neighborhood of 1. We cut the integral into β(|ξ|) and 1 - β(|ξ|) parts, the latter of which
decays rapidly by integrating by parts in t. We are left with
(2π)^-2λ^2 ∑_γ∈Γ∫_-∞^∞∫_ℝ^2 e^iλ ( ⟨log_x̃(γ(x̃)), ξ⟩ + t(|ξ| - 1))χ(t) β(|ξ|) a_±(t, x̃, γ(x̃), λξ) dξ dt.
Next, we write ξ = r (cosθ, sinθ) in polar form with r > 0 and rephrase the integral as
(2π)^-2λ^2 ∑_γ∈Γ∫_-∞^∞∫_-π^π∫_0^∞ e^iλ ( r (v_1 cosθ + v_2 sinθ) + t(r - 1))
χ(t) β(r) a_±(t, x̃, γ(x̃), λ r(cosθ, sinθ)) r dr dθ dt.
Note, the phase function can be written
φ = r (v_1 cosθ + v_2 sinθ) + t(r - 1)
where we take as shorthand
v = log_x̃(γ(x̃)). We now employ the method of stationary phase in the variables t, θ, and r. After a rotation, assume without loss of generality at this point that v = d_g̃(x̃, γ(x̃)) e_1. Then, we have
∇_t,θ, rφ = [ r - 1; -r d_g̃(x̃, γ(x̃)) sinθ; d_g̃(x̃, γ(x̃)) cosθ + t ]
from which we obtain a critical point at (t, cosθ, r) = (∓ d_g̃(x̃, γ(x̃)), ± 1, 1). Note, for t ∈ supp χ, we require that ± = - and ∓ = +. At this sole critical point, the Hessian of the phase reads
∇_t,θ, r^2 φ =
[ 0 0 1; 0 - d_g̃ 0; 1 0 0 ],
which has determinant and signature
|∇^2_t, θ, rφ| = d_g̃(x̃, γ(x̃)) and sig∇^2_t, θ, rφ = 1.
Hence, by the method of stationary phase, we have
I = (2π)^-1/2λ^1/2∑_γ∈Γ e^π i / 4 e^-i λ d_g̃(x̃, γ(x̃))α(x̃, γ(x̃))/d_g̃(x̃, γ(x̃))^1/2χ(d_g̃(x̃, γ(x̃))) + O(λ^-1/2).
The lemma follows.
As a corollary, we can conclude that the looping times, with multiplicity, are audible for hyperbolic surfaces. To state this precisely, fix x and a lift x̃ to the hyperbolic plane ℍ via the covering map. Then, every looping geodesic at x lifts to the unique geodesic in the universal cover ℍ with endpoints at x̃ and γ(x̃) where γ is an element of the deck group. We conclude that the looping times at x are given by
ℒ_x := { d_g̃(x̃, γ(x̃)) : γ∈Γ∖ I}.
Given any r in this set, we have a multiplicity
m_x(r) = #{γ∈Γ∖ I : d_g̃(x̃, γ(x̃)) = r}.
We now have:
If (M,g) is a compact hyperbolic surface, m_x and ℒ_x are audible.
It suffices to show m_x is audible as an integer-valued function on (0,∞). We can extract this information from the result of Lemma <ref>, but first we specify some of the constants.
Recall from the discussion at the start of the section that α(x̃, ỹ) = |g(log_x̃(ỹ))|^-1/4 in geodesic normal coordinates. In the case of constant curvature -1, we have
α(x̃, ỹ) = (sinh r/r)^-1/2
where r = d_g̃(x̃, ỹ). Hence,
∫_-∞^∞ e^-it λχ(t) cos(t√(-Δ_g))(x,x) dt
= (2π)^-1/2λ^1/2∑_γ∈Γ e^π i / 4 e^-i λ d_g̃(x̃, γ(x̃))1/√(sinh d_g̃(x̃, γ(x̃)))χ(d_g̃(x̃, γ(x̃))) + O(λ^-1/2).
Now take ρ to be supported on [-1,1] with ρ(0) = 1. Then, for r, ϵ > 0 fixed, set
χ_r,ϵ(t) = √(sinh t) ρ(ϵ^-1(t - r)).
m_x(r) is given by the output of the expression
lim_ϵ→ 0lim_λ→∞ (2π)^1/2 e^-π i / 4 + iλ rλ^-1/2∫_-∞^∞ e^-itλχ_r,ϵ(t) cos(t√(-Δ_g))(x,x) dt.
Note, no matter what r is, the inner limit converges for all sufficiently small ϵ > 0. The lemma is proved.
Now we assume that a compact hyperbolic manifold has constant pointwise counting function and derive a contradiction. For each x ∈ M, let r_x denote the length of the shortest looping geodesic, i.e. r_x = infℒ_x. This quantity is audible, and hence is constant. We claim that every point x lies in a closed geodesic in M, and we will use this claim to derive a contradiction. We recall the following standard result:
Each homotopy class of loops in a negatively curved manifold has a unique length-minimizing curve, and that curve is a closed geodesic.
Suppose we have a geodesic loop through x with length r_x. This loop is homotopic to a unique closed geodesic which also has length r_x. Since this is the length of our original loop at x, our geodesic loop must have been closed.
Absurdities abound already, but here is a straightforward one. There are only countably many closed geodesics on M—one for each element of the homotopy classes of loops—but somehow every point in M lies on one.
|
http://arxiv.org/abs/2307.04407v1 | 20230710081327 | Deep and Decentralized Multi-Agent Coverage of a Target with Unknown Distribution | [
"Hossein Rastgoftar"
] | eess.SY | [
"eess.SY",
"cs.SY"
] |
Deep and Decentralized Multi-Agent Coverage of a Target with Unknown Distribution
Hossein Rastgoftar
H. Rastgoftar is with the Department
of Aerospace and Mechanical Engineering, University of Arizona, Tucson,
AZ, 85721 USA e-mail: [email protected].
August 12, 2023
This paper proposes a new architecture for multi-agent systems to cover an unknowingly distributed target quickly, safely, and in a decentralized fashion. The inter-agent communication is organized by a directed graph with fixed topology, and we model agent coordination as a decentralized leader-follower problem with time-varying communication weights. Given this problem setting, we first present a method for converting the communication graph into a neural network, where an agent can be represented by a unique node of the communication graph but multiple neurons of the corresponding neural network. We then apply a mass-centric strategy to train the time-varying communication weights of the neural network in a decentralized fashion, which in turn implies that the observation zone of every follower agent is independently assigned by the follower based on the positions of its in-neighbors. By training the neural network, we can ensure safe and decentralized multi-agent coordination for coverage control. Although the target is unknown to the agent team, we provide a proof of convergence for the proposed multi-agent coverage method.
The functionality of the proposed method will be validated by a large-scale multi-copter team covering distributed targets on the ground.
Large-Scale Coordination, Multi-Agent Coverage, and Decentralized Control.
§ INTRODUCTION
Multi-agent coverage has received a lot of attention from the control community over recent years.
Multi-agent coverage has many applications such as wildfire management <cit.>, border security <cit.>, agriculture <cit.>, and wildlife monitoring <cit.>. A variety of coverage approaches have been proposed by researchers and are reviewed in Section <ref>.
§.§ Related Work
Sweep <cit.> and Spiral <cit.> are two available methods used for single-vehicle coverage path planning, while the Vehicle Routing Problem <cit.> is widely used for multi-agent coverage path planning. Diffusion-based multi-agent coverage convergence and stability are studied in Ref. <cit.>. Decentralized multi-agent coverage using local density feedback is achieved by applying a discrete-time mean-field model in Ref. <cit.>. Multi-agent coverage conducted by unicycle robots guided by a single leader is investigated in Ref. <cit.>, where the authors propose to decouple coordination and coverage modes. Adaptive decentralized multi-agent coverage is studied in <cit.>. Ref. <cit.> offers a multiscale analysis of multi-agent coverage control that provides convergence properties in continuous time. Human-centered active sensing of wildfire by unmanned aerial vehicles is studied in Ref. <cit.>. Ref. <cit.> suggests applying the k-means algorithm for planning of zone coverage by multiple agents. Reinforcement Learning- (RL-) based multi-agent coverage control is investigated in Refs. <cit.>. The authors in <cit.> used a Voronoi-based approach for covering a distributed target. Voronoi-based coverage in the presence of obstacles and failures is presented as a leader-follower problem in Ref. <cit.>. Ref. <cit.> experimentally evaluates the functionality of Voronoi-based and other multi-agent coverage approaches in an urban environment.
§.§ Contributions
This paper develops a method for decentralized multi-agent coverage of a distributed target with an unknown distribution. We propose to define the inter-agent communications by a deep neural network, called the coverage neural network, with time-varying weights that are obtained such that coverage convergence is ensured.
To this end, the paper establishes specific rules for structuring the coverage neural network and proposes a mass-centric approach to train the network weights that specify inter-agent communication among the agent team at any time t. Although the target is unknown to the agent team, we prove that the weights ultimately converge to the unique values that quantify the target distribution in the motion space. The functionality of the proposed coverage method will be validated by simulating aerial coverage conducted by a team of quadcopter agents.
Compared to the existing work, this paper offers the following novel contributions:
* The proposed multi-agent coverage approach learns the inter-agent communication weights in a forward manner, as opposed to the existing neural learning problem, where they are trained by combining forward and backward iterations. More specifically, the weights input to a hidden layer are assigned based on (i) the outputs of the previous layer and (ii) target data information independently measured by observing the neighboring environment. We provide the proof of convergence for the proposed learning approach.
* The paper proposes a method for converting inter-agent communication graph into a neural network that will be used for organizing the agents, structuring the inter-agent communications, and partitioning the coverage domain.
* The paper develops a method for decentralized partitioning and coverage of an unknowingly distributed target. This method is indeed more computationally efficient than the available Voronoi-based partitioning methods that require all agents' positions to determine the search subdomain allocated to each individual agent.
§.§ Outline
The remainder of the paper is organized as follows: The Problem Statement and Formulation are given in Section <ref>. The paper methodology is presented in Section <ref>. Assuming every agent is a quadcopter, the multi-agent network dynamics is obtained in Section <ref>, and followed by Simulation Results in Section <ref> and Conclusion in Section <ref>.
§ PROBLEM STATEMENT AND FORMULATION
We consider a team of N agents identified by set 𝒱={1,⋯,N} and classify them into the following three groups:
* “boundary” agents identified by 𝒱_B={1,⋯,N_B} are distributed along the boundary of the agent team configuration;
* a single “core” agent identified by singleton 𝒱_C={N_B+1} is an interior agent with the global position representing the global position of the agent configuration; and
* follower agents defined by 𝒱_I={N_B+2,⋯,N} are all located inside the agent team configuration.
Note that 𝒱_B, 𝒱_C, and 𝒱_I are disjoint subsets of 𝒱, i.e. 𝒱=𝒱_B⋃𝒱_C⋃𝒱_I.
Inter-agent communication among the agents is defined by graph 𝒢(𝒱,ℰ), where ℰ⊂𝒱×𝒱 defines the edges of graph 𝒢 and each edge represents a unique communication link (if (j,i)∈ℰ, then i accesses the position of j∈𝒱).
We define
𝒩_i={j∈𝒱:(j,i)∈ℰ}, ∀ i∈𝒱.
as the set of in-neighbors of every agent i∈𝒱.
§.§ Neural Network Representation of Inter-Agent Communication
Graph 𝒢 is defined such that it can be represented by a deep neural network with M+1 layers, where we use set ℳ={0,⋯,M} to define the layer identification numbers. Set 𝒱 can be expressed as
𝒱=⋃_l∈ℳ𝒱_l
where 𝒱_0 through 𝒱_M are disjoint subsets of 𝒱. We use 𝒲_0, 𝒲_1, ⋯, 𝒲_M to identify the neurons of layers 0 through M of the coverage neural network, where 𝒲_l and 𝒱_l are related by
𝒲_l=𝒱_l for l∈{0,M}, and 𝒲_l=𝒲_l-1⋃𝒱_l for l∈ℳ∖{0,M},
where 𝒲_0=𝒱_0=𝒱_B⋃𝒱_C defines neurons that uniquely represent boundary and core agents.
For every neuron i∈𝒲_l at layer l∈ℳ∖{0}, ℐ_i,l∈𝒲_l-1 defines those neurons of 𝒲_l-1 that are connected to i∈𝒲_l.
Assuming the agent team forms an n-dimensional configuration in a three-dimensional motion space (n=2,3), we use the following key rules to define ℐ_i,l for every i∈𝒲_l and l∈ℳ∖{0}:
|ℐ_i,l|=
1 If i∈𝒲_l-1⋂𝒲_l and l∈ℳ∖{0}
n+1 If i∈𝒲_l-𝒲_l-1 and l∈ℳ∖{0}
n+1 If i∈𝒲_M
0 If i∈𝒲_0
.
We note that 𝒩_i and ℐ_i,l can be related by
⋀_l∈ℳ∖{0}⋀_i∈𝒲_l- 𝒲_l-1(ℐ_i,l=𝒩_i).
For better clarification, we consider an agent team with N=26 agents identified by set 𝒱={1,⋯,26} forming a two-dimensional configuration (n=2) shown in Fig. <ref> (a). The inter-agent communications shown in Fig. <ref> (a) can be represented by the neural network of Fig. <ref> (b) with three layers ℳ={0,1,2}, where 𝒲_0={1,⋯,6}, defining the boundary and core leaders, has no in-neighbors, and 𝒲_2={8,9,10, 12,13,14,16,17,18,20,21,22, 24,25,26}, defining followers, each of which has three in-neighbors. Also, the neurons {7,11,15,19,23}⊂𝒲_1 each have three in-neighbors, whereas the remaining neurons {1,⋯,6} of 𝒲_1, which are repeated from layer 0, each have one in-neighbor.
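To make the layering rules above concrete, the following minimal Python sketch (not part of the original paper; the helper name and the toy graph are illustrative assumptions) assigns agents to layers from their in-neighbor sets: leaders form layer 0, and a follower joins the first layer in which all of its in-neighbors have already been placed. It returns the disjoint sets 𝒱_l; the nested layers 𝒲_l of the relation above are their cumulative unions.

```python
# Minimal sketch: build the disjoint layer sets V_0, ..., V_M of the coverage
# network from the in-neighbor sets of a directed communication graph.
def build_layers(leaders, in_neighbors):
    """leaders: set of leader ids; in_neighbors: dict follower id -> set of ids."""
    layers = [set(leaders)]
    assigned = set(leaders)
    followers = set(in_neighbors) - assigned
    while followers:
        # followers whose in-neighbors have all been assigned to earlier layers
        ready = {i for i in followers if in_neighbors[i] <= assigned}
        if not ready:
            raise ValueError("graph is not layerable (cyclic dependency)")
        layers.append(ready)
        assigned |= ready
        followers -= ready
    return layers

# toy example loosely mimicking the figure: 6 leaders, followers 7..10
leaders = {1, 2, 3, 4, 5, 6}
in_neighbors = {7: {1, 2, 6}, 8: {1, 2, 7}, 9: {2, 6, 7}, 10: {7, 8, 9}}
print(build_layers(leaders, in_neighbors))
# [{1, 2, 3, 4, 5, 6}, {7}, {8, 9}, {10}]
```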
§.§ Differential Activation Function
Unlike conventional neural networks, the activations of the coverage network's neurons are governed by differential activation functions given by the nonlinear dynamics
𝐱̇_i=𝐟_i(𝐱_i,𝐮_i)
𝐫_i=𝐡_i(𝐱_i)
, i∈𝒲_l, l∈ℳ,
that is used to model the agent i∈𝒱_l (See Fig. <ref>), where 𝐱_i∈ℝ^n_x,i and 𝐮_i∈ℝ^n_u,i denote the state vector and the control of neuron i, respectively, and 𝐡_i:ℝ^n_x,i→ℝ^3, 𝐟_i:ℝ^n_x,i→ℝ^n_x,i, and 𝐠_i:ℝ^n_x,i→ℝ^n_x,i× n_u,i are smooth functions.
The output of neuron i denoted by 𝐫_i∈ℝ^3× 1 is the position of agent i. The input of neuron i is defined by
𝐫_i,d(t) = 𝐩_i (given) i ∈𝒲_0
∑_j ∈ℐ_i,l w_ij(t)𝐫_j(t) i ∈𝒲_l-𝒲_l-1, l∈ℳ∖{0}
where 𝐩_i is a desired constant position for leader agent i ∈𝒲_0.
Also, w_i,j(t) > 0 is the time-varying communication weight between i∈𝒲_l and j ∈ℐ_i,l, and satisfies the following constraint:
⋀_l∈ℳ∖{0}⋀_i∈𝒲_l- 𝒲_l-1(∑_j∈ℐ_i,lw_i,j(t)=1), ∀ t.
§.§ Objectives
Given the above problem setting, this paper offers a neural-network-based method for optimal coverage of a target set 𝒟 with unknown distribution in a 3-dimensional motion space. To achieve this objective, we assume that the positions of the boundary leader agents, defined by 𝒲_0∖𝒱_C, are known, and solve the following two main problems:
* Problem 1–Abstract Representation of Target: We develop a mass-centric approach in Section <ref> to abstractly represent target by N-N_B+1 position vectors 𝐩_N_B+2 through 𝐩_N that are considered as followers' desired positions.
* Problem 2–Decentralized Target Acquisition: We propose a forward method to train the communication weights w_i,j(t), and assign control input 𝐮_i, for every agent i∈𝒱 and in-neighbor agent j∈ℐ_i,l, such that actual position 𝐫_i converges to the desired position 𝐩_i in a decentralized fashion, for every i∈𝒱∖𝒲_0, where i∈𝒱∖𝒲_0 does not know global position 𝐩_j(t) of any in-neighbor agent j∈𝒱.
Without loss of generality, n is either 2 or 3 because the motion space is three-dimensional. More specifically, for ground coverage n=2 and 𝒟 specifies a finite number of targets on the ground.
§ METHODOLOGY
The agent team aims to cover a zone that is specified by the target set 𝒟={1,⋯,n_d}, where 𝐝_i∈ℝ^3×1 is the position of target i∈𝒟.
We also define the intensity function 𝒯:𝒟→(0,1] to quantify the intensity of data point i∈𝒟 positioned at 𝐝_i.
For the development of the neural-network-based coverage model, we apply the following definitions and assumptions:
Boundary leader agents form an n-D polytope in ℝ^n, thus, the boundary agents' desired positions
must satisfy the following rank condition:
rank([ 𝐩_2-𝐩_1 ⋯ 𝐩_N_B-𝐩_1 ])
=n
The polytope defined by the boundary agents is called leading polytope.
The leading polytope, defined by the boundary agents, can be decomposed into N_L disjoint n-dimensional simplexes all sharing the core node N_B+1∈𝒲_0.
We let ℒ={1,⋯,N_L} define all simplex cells of the leading polytope, where 𝒮_i={h_i,1,⋯,h_i,n,N_B+1} defines the vertices of simplex cell i∈ℒ, i.e. h_i,1,⋯,h_i,n∈𝒮_i∖{N_B+1}⊂𝒲_0 are the boundary nodes of simplex i∈ℒ. Per Assumption <ref>, we can write
𝒲_0=⋃_i∈ℒ𝒮_i,
⋀_i∈ℒ(rank([ 𝐩_h_i,1-𝐩_N_B+1 ⋯ 𝐩_h_i,n-𝐩_N_B+1 ])
=n).
Every agent i∈𝒱∖𝒲_0 has n+1 in-neighbors, therefore,
⋀_l∈ℳ∖{0}⋀_i∈𝒲_l-𝒲_l-1(|ℐ_i,l|=n+1).
The in-neighbors of every agent i∈𝒱∖𝒲_0 defined by 𝒩_i={j_1,⋯,j_n+1} forms an n-D simplex. This condition can be formally specified as follows:
⋀_l∈ℳ∖{0}⋀_i∈𝒲_l-𝒲_l-1(rank([ 𝐩_j_2-𝐩_j_1 ⋯ 𝐩_j_n+1-𝐩_j_1 ])
=n).
For every agent i∈𝒱∖𝒲_0,
𝒞̅_i={∑_j∈ℐ_i,lσ_j𝐩_j:σ_j≥0 and ∑_j∈ℐ_i,lσ_j=1}, i∈𝒲_l-𝒲_l-1, l∈ℳ,
𝒞_i(t)={∑_j∈ℐ_i,lσ_j𝐫_j(t):σ_j≥0 and ∑_j∈ℐ_i,lσ_j=1}, i∈𝒲_l-𝒲_l-1, l∈ℳ,
define the convex hulls specified by “desired” and “actual” positions of agent i's in-neighbors, respectively.
We define
𝒞̅=⋃_l∈ℳ∖{0}⋃_i∈𝒲_l-𝒲_l-1𝒞̅_i and 𝒞(t)=⋃_l∈ℳ∖{0}⋃_i∈𝒲_l-𝒲_l-1𝒞_i(t)
to specify the coverage zones that enclose all data points defined by set 𝒟.
By considering Definition <ref>, we can express set 𝒟 as
𝒟=⋃_i∈𝒱∖𝒲_0𝒟̅_i or 𝒟=⋃_i∈𝒱∖𝒲_0𝒟_i(t),
where
𝒟̅_i={j∈𝒟:𝐝_j∈𝒞̅_i},
is the target set that is “desired” to be searched by follower agent i∈𝒱∖𝒱_0 whereas
𝒟_i(t)={j∈𝒟:𝐝_j∈𝒞_i(t)},
is the subset of 𝒟 that is “actually” searched by follower agent i∈𝒱∖𝒱_0 at time t. Note that 𝒟̅_i and 𝒟_i(t) are enclosed by the convex hulls 𝒞̅_i and 𝒞_i(t), respectively, that are determined by the “desired” and “actual” positions of the agent i∈𝒱∖𝒲_0, respectively.
We assume that 𝒟̅_i≠∅ and 𝒟_i(t)≠∅, at any time t, for every i∈𝒱∖𝒲_0.
In order to assure that Assumption <ref> is satisfied, we may need to regenerate target set 𝒟 when the target data set 𝒟 is sparsely distributed. When this regeneration is needed, we first convert the discrete set 𝒟 to the set
𝒟'={𝐝=∑_i=1^n_d𝒩(𝐫; 𝐝_i,Σ_i):𝐝_i∈𝒟, 𝐫∈𝒞}
where 𝒩(𝐫; 𝐝_i,Σ_i) is a multi-variate normal distribution specified by mean vector 𝐝_i and covariance matrix Σ_i. Then, we regenerate 𝒟 by uniform discretization of 𝒟'.
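As an illustration of this regeneration step, the sketch below (hypothetical choices: the grid resolution, the isotropic covariances, and the density threshold are all assumptions, not values from the paper) smears a sparse 2-D target set with Gaussians and re-discretizes the resulting density on a uniform grid.

```python
import numpy as np

# Sketch: smear the sparse targets d_i with isotropic Gaussians, evaluate the
# mixture on a uniform grid over the coverage zone, and keep grid points whose
# density is non-negligible as the regenerated target set.
rng = np.random.default_rng(0)
targets = rng.uniform(0.0, 10.0, size=(20, 2))      # sparse targets (n = 2)
sigma = 0.5                                          # assumed smearing width

xs, ys = np.meshgrid(np.linspace(0, 10, 60), np.linspace(0, 10, 60))
grid = np.column_stack([xs.ravel(), ys.ravel()])

# mixture of isotropic normals centred on the original targets
d2 = ((grid[:, None, :] - targets[None, :, :]) ** 2).sum(-1)
density = np.exp(-0.5 * d2 / sigma**2).sum(axis=1) / (2 * np.pi * sigma**2)

regenerated = grid[density > 0.05 * density.max()]  # densified target set
print(len(targets), "->", len(regenerated), "target points")
```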
§.§ Abstract Representation of Target Locations
We use the approach presented in Algorithm <ref> to abstractly represent target set 𝒟 by position vectors 𝐩_N_B+2, ⋯, 𝐩_N, given (i)
desired positions of leader agents denoted 𝐩_1 through 𝐩_N_B+1, (ii) the edge set ℰ, and (iii) target set 𝒟, as the input. Note that 𝐩_i is considered the global desired position of follower i∈𝒱_I={N_B+2,⋯,N}, but no follower i∈𝒱∖𝒱_0 knows 𝐩_i.
The desired position of every follower agent i∈𝒱_I=𝒱∖𝒲_0 is obtained by
𝐩_i=(∑_h∈𝒟̅_i𝒯(h)𝐝_h)/|𝒟̅_i|, ∀ i∈𝒱∖𝒲_0,
where 𝒟̅_i, defined by Eq. (<ref>), is a target data subset that is enclosed by 𝒞̅_i and defined by Eq. (<ref>). We notice that the desired position of every follower agent i∈𝒱∖𝒲_0 is assigned in a “forward” manner which in turn implies that 𝒲_l's desired positions are assigned after determining 𝒲_l-1's desired positions, for every l∈ℳ∖{0}.
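A minimal numerical sketch of this mass-centric assignment is given below (illustrative only: the simplex-membership test via barycentric coordinates and the toy data are assumptions, and the normalization by the count |𝒟̅_i| follows the formula as written above).

```python
import numpy as np

def in_simplex(point, vertices, tol=1e-9):
    """True if `point` lies in the simplex spanned by the n+1 `vertices` (n-D)."""
    n = vertices.shape[1]
    # barycentric coordinates: solve [V^T; 1] s = [p; 1]
    A = np.vstack([vertices.T, np.ones(n + 1)])
    b = np.append(point, 1.0)
    s = np.linalg.solve(A, b)
    return np.all(s >= -tol)

def desired_position(vertices, targets, intensity):
    """Intensity-weighted sum of enclosed targets, normalized by their count."""
    inside = np.array([in_simplex(d, vertices) for d in targets])
    w = intensity[inside]
    return (w[:, None] * targets[inside]).sum(0) / len(targets[inside])

# toy 2-D example: in-neighbors' desired positions and a few targets
verts = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
targets = np.array([[1.0, 1.0], [0.5, 2.0], [3.0, 3.0]])   # last one is outside
intensity = np.array([1.0, 0.5, 1.0])
print(desired_position(verts, targets, intensity))
```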
Given desired positions of every follower agent i∈𝒱∖𝒲_0 and every in-neighbor agent j∈𝒩_i, ϖ_i,j>0 defines the desired communication weight between i∈𝒱∖𝒲_0 and j∈𝒩_i, and is obtained by solving n+1 linear algebraic equations provided by
𝐩_i=∑_j∈ℐ_i,lϖ_i,j𝐩_j,
∑_j∈ℐ_i,lϖ_i,j=1.
Algorithm <ref> also presents our proposed hierarchical approach for assignment of followers' desired communication weights.
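For reference, the linear system above can be solved directly; the short sketch below (a hypothetical helper with 2-D toy data, not the paper's Algorithm) computes the desired weights ϖ_i,j as barycentric coordinates of 𝐩_i with respect to its in-neighbors' desired positions.

```python
import numpy as np

def desired_weights(p_i, P):
    """P: (n+1, n) array of in-neighbor desired positions; returns the n+1
    weights solving  p_i = sum_j w_j p_j  together with  sum_j w_j = 1."""
    n_plus_1, n = P.shape
    A = np.vstack([P.T, np.ones(n_plus_1)])      # (n+1) x (n+1) linear system
    b = np.append(p_i, 1.0)
    return np.linalg.solve(A, b)

# toy 2-D example (n = 2, three in-neighbors)
P = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
p_i = np.array([1.0, 1.0])
w = desired_weights(p_i, P)
print(w, P.T @ w, w.sum())      # reproduces p_i, and the weights sum to 1
```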
We define desired weight matrix 𝐋̅=[L̅_ij]∈ℝ^N× N with (i,j) entry
L̅_ij=ϖ_i,j i∈𝒱∖𝒲_0, j∈𝒩_i
-1 i=j
0 otherwise
.
§.§ Decentralized Target Acquisition
For decentralized coverage, it is necessary that every follower agent i∈𝒱_l=𝒲_l-𝒲_l-1, represented by a neuron in layer l∈ℳ∖{0}, chooses control 𝐮_i∈ℝ^n_u× 1, based on the actual positions of the in-neighbor agents ℐ_i,l, such that 𝐫_i(t) stably tracks 𝐫_i,d(t) as defined by Eq. (<ref>). Note that 𝐫_i,d(t) is a linear combination of the in-neighbors' actual positions, for i∈𝒱∖𝒲_0, with (communication) weights that are time-varying and constrained to satisfy equality constraint (<ref>).
We use forward training to learn the coverage neural network. This means that communication weights of layer l ∈ℳ∖{ 0 } neurons are assigned before communication weights of layer l+1 ∈ℳ∖{ 0,M} neurons, where communication weight of neuron i ∈𝒱_l=𝒲_l-𝒲_l-1 is learned by solving a quadratic program. Let
𝐫̅_i(t)=(∑_h∈𝒟_i(t)𝒯(h)𝐝_h)/|𝒟_i(t)|, i∈𝒱_l, l∈ℳ∖{0},
denote the centroid of the subset 𝒟_i(t)⊂𝒟, where 𝒟_i(t) is defined by Eq. (<ref>). Then, the followers' communication weights are determined by minimizing
min‖∑_h∈ℐ_i,lw_i,h(t)𝐫_h(t)-𝐫̅_i(t)‖^2
subject to equality constraint (<ref>).
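One way to realize this minimization numerically is as an equality-constrained least-squares problem; the sketch below (an assumption about the implementation, not the paper's code) solves the corresponding KKT system directly. Note that it does not enforce the positivity w_i,h(t)>0 required earlier, which would call for a proper QP solver.

```python
import numpy as np

def coverage_weights(R, c):
    """Minimize || sum_h w_h r_h - c ||^2  subject to  sum_h w_h = 1.
    R: (m, 3) actual in-neighbor positions; c: (3,) centroid of D_i(t)."""
    m = R.shape[0]
    A = R.T                                  # 3 x m, columns are r_h
    K = np.zeros((m + 1, m + 1))
    K[:m, :m] = 2.0 * A.T @ A                # Hessian block of the QP
    K[:m, m] = 1.0                           # constraint gradient
    K[m, :m] = 1.0
    rhs = np.append(2.0 * A.T @ c, 1.0)
    sol = np.linalg.solve(K, rhs)            # KKT system of the constrained QP
    return sol[:m]                           # communication weights w_{i,h}(t)

# toy example: four affinely independent in-neighbor positions in 3-D
R = np.array([[0.0, 0.0, 0.0], [4.0, 0.0, 0.0], [0.0, 4.0, 0.0], [1.0, 1.0, 3.0]])
c = np.array([1.0, 1.0, 0.5])
w = coverage_weights(R, c)
print(w, w.sum(), R.T @ w)                   # weights sum to 1 and reproduce c
```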
We define weight matrix 𝐋=[L_ij]∈ℝ^N× N with (i,j) entry
L_ij=
w_i,j i∈𝒱∖𝒲_0, j∈𝒩_i
-1 i=j
0 otherwise
.
Assume every agent i∈𝒱 chooses control input 𝐮_i such that 𝐫_i(t) asymptotically tracks 𝐫_i,d(t). Then, 𝐫_i(t) asymptotically converges to the desired position 𝐩_i for every i∈𝒱.
If every agent j∈𝒲_0 asymptotically tracks 𝐫_j,d(t), then the actual position 𝐫_j converges to 𝐩_j because 𝐫_j,d(t)=𝐩_j is constant per Eq. (<ref>). Then, for every i∈𝒲_1, the vertices of the simplex 𝒞_i(t), which belong to 𝒲_0, asymptotically converge to the vertices of 𝒞̅_i, where 𝒞̅_i and 𝒞_i(t) enclose the target data subsets 𝒟̅_i and 𝒟_i(t), respectively. This implies that 𝐫_i,d(t), defined as the centroid of 𝒟_i(t), asymptotically converges to 𝐩_i for every i∈𝒲_1. By extending this logic, the convergence is propagated through the feedforward network 𝒢(𝒱,ℰ). As a result, for every agent i∈𝒲_l and layer l∈ℳ∖{0}, the vertices of the simplex 𝒞_i(t) asymptotically converge to the vertices of 𝒞̅_i, which in turn implies that 𝐫_i,d(t) asymptotically converges to 𝐩_i. This also implies that 𝐫_i asymptotically converges to 𝐩_i per the theorem's assumption.
§ NETWORK DYNAMICS
In this section, we suppose that every agent is a quadcopter and
use the input-state feedback linearization presented in <cit.> and summarized in the Appendix to model the quadcopter motion by the fourth-order dynamics (<ref>) in the Appendix. Here, we propose to choose 𝐯_i as follows:
𝐯_i=-k_1,i⃛𝐫_i-k_2,i𝐫̈_i-k_3,i𝐫̇_i+k_4,i(𝐫_i,d(t)-𝐫_i), i∈𝒱,
where 𝐫_i,d(t) is defined by Eq. (<ref>). Then, the external dynamics of the quadcopter team is given by <cit.>
d/dt[ 𝐘; 𝐘̇; 𝐘̈; ⃛𝐘 ]
=𝐀_MQS[ 𝐘; 𝐘̇; 𝐘̈; ⃛𝐘 ]
+𝐁_MQS[ 𝐑_L; 𝐑̇_L; 𝐑̈_L; ⃛𝐑_L ],
where 𝐘=vec([ 𝐫_1 ⋯ 𝐫_N ]^T), 𝐑_L=vec([ 𝐩_1 ⋯ 𝐩_N_B+1 ]^T), 𝐋_0=[ 𝐈_N_B+1 0_(N_B+1)×(N-N_B-1) ]^T∈ℝ^N×(N_B+1),
𝐀_MQS=
[ 0 𝐈_3N 0 0; 0 0 𝐈_3N 0; 0 0 0 𝐈_3N; 𝐈_3⊗( 𝐊_4 𝐋) -𝐊_3𝐈_3N -𝐊_2𝐈_3N -𝐊_1𝐈_3N ]
,
𝐁_MQS=
[ 0 0 0 0; 0 0 0 0; 0 0 0 0; 𝐈_3⊗( 𝐊_4𝐋_0) 𝐈_3⊗(𝐊_3𝐋_0) 𝐈_3⊗( 𝐊_2𝐋_0) 𝐈_3⊗( 𝐊_1 𝐋_0) ]
,
j=1,2,3,4, 𝐊_j=diag(k_j,1,⋯,k_j,N),
𝐈_3N∈ℝ^3N× 3N is the identity matrix, and “vec” is the matrix vectorization operator.
Note that the control gains k_j,i (i∈𝒱 and j=1,2,3,4) are selected such that the roots of the characteristic equation
|s^4𝐈+s^3𝐊_1+s^2𝐊_2+s𝐊_3-𝐊_4𝐋|=0
all lie in the open left-half complex plane, so that the collective dynamics (<ref>) is stable.
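Under the additional assumption of uniform gains (k_j,i=k_j for all agents) and a diagonalizable 𝐋, the stability condition decouples into one quartic polynomial per eigenvalue of 𝐋; the sketch below (toy matrix and gain values are illustrative assumptions) checks a candidate gain set this way.

```python
import numpy as np

def gains_stable(k1, k2, k3, k4, L):
    """Check that every root of  s^4 + k1 s^3 + k2 s^2 + k3 s - k4*lam = 0
    (one polynomial per eigenvalue lam of L) lies in the open left half-plane,
    assuming uniform gains k_{j,i} = k_j for all agents."""
    for lam in np.linalg.eigvals(L):
        roots = np.roots([1.0, k1, k2, k3, -k4 * lam])
        if np.any(roots.real >= -1e-9):
            return False
    return True

# toy 3-agent example: one leader (row 0) and two followers
L = np.array([[-1.0, 0.0, 0.0],
              [0.6, -1.0, 0.4],
              [0.5, 0.5, -1.0]])
print(gains_stable(k1=8.0, k2=24.0, k3=32.0, k4=16.0, L=L))   # True
```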
§ SIMULATION RESULTS
We consider an agent team consisting of 57 quadcopters with the reference configuration shown in Fig. <ref>, where we use the model and trajectory control presented in Refs. <cit.> for multi-agent coverage simulation. Here quadcopters 1 through 4 defined by set 𝒱_B={1,2,3,4} are the boundary leader agents; agent 5 defined by singleton 𝒱_C={5} is core leader; and the remaining agents defined by 𝒱_I={6,⋯,57} are followers.
The inter-agent communications are directional and shown by blue vectors in Fig. <ref>. The communication graph is defined by 𝒢(𝒱,ℰ) and converted into the neural network shown in Fig. <ref> with four layers, thus, ℳ={0,1,2,3} (M=3), and 𝒱 can be expressed as 𝒱=𝒲_0⋃𝒲_1⋃𝒲_2⋃𝒲_3. In Fig. <ref>, the agents represented by 𝒲_0, 𝒲_1, 𝒲_2, and 𝒲_3 are colored by cyan, red, green, and black, respectively.
We apply the proposed coverage algorithm to cover elliptic, multi-circle, and triangular zones, each specified by the corresponding data set 𝒟, where 𝒟 defines 500 data points shown by green spots in Figs. <ref> (a,b,c). As shown, each target set is represented by 52 points positioned at 𝐩_6 through 𝐩_57, where they are obtained by using the approach presented in Section <ref>. These points are shown by red in Figs. <ref> (a,b,c).
Figure <ref> shows the components of the actual and desired positions of quadcopters 13, 45, and 51 plotted versus time over the time interval [0,20]s, in solid black and dashed red, respectively. As seen, the actual positions of these three agents almost reach the designated desired positions at time t=12s. Figure <ref> shows the time-varying communication weights of agent 41 with its in-neighbors defined by 𝒩_41={34,5,32}. As shown, w_41,j(t) converges to its desired value ϖ_41,j in about 12 seconds for every j∈𝒩_41.
§ CONCLUSION
We proposed a novel neural-network-based approach for multi-agent coverage of a target with unknown distribution. We developed a forward approach to train the weights of the coverage neural network such that: (i) the target is represented by a finite number of points, and (ii) the multi-agent system quickly and in a decentralized manner converges to the designated points representing the target distribution. For validation, we performed a simulation of multi-agent coverage using a team of 57 quadcopters, each of which is represented by at least one neuron of the coverage neural network. The simulation results verified the fast and decentralized convergence of the proposed multi-agent coverage, where each quadcopter reached its designated desired position in about 12 seconds.
IEEEtran
Let x_i, y_i, and z_i denote the position components of quadcopter i∈𝒱; let p_i, m_i, ϕ_i, θ_i, and ψ_i denote the thrust force magnitude, mass, roll, pitch, and yaw angles of quadcopter i∈𝒱; and let g=9.81m/s^2 be the gravity acceleration. Then, we can use the model developed in <cit.> and present the quadcopter dynamics by
𝐱̇_i=𝐟(𝐱_i,𝐮_i)
,
where 𝐟(𝐱_i,𝐮_i)=𝐅(𝐱_i)+𝐆(𝐱_i)𝐮_i
𝐱_i=[ x_i y_i z_i ẋ_i ẏ_i ż_i ϕ_i θ_i ψ_i ϕ̇_i θ̇_i ψ̇_i p_i ṗ_i ]
^T,
𝐮_i=[ u_1,i u_2,i u_3,i u_4,i ]
^T,
𝐅(𝐱_i)=[ ẋ_i; ẏ_i; ż_i; (p_i/m_i)(sinϕ_isinψ_i + cosϕ_icosψ_isinθ_i); (p_i/m_i)(cosϕ_isinψ_isinθ_i- sinϕ_icosψ_i); (p_i/m_i)cosϕ_icosθ_i-9.81; ϕ̇_i; θ̇_i; ψ̇_i; 0; 0; 0; ṗ_i; 0 ]
,
𝐆(𝐱_i)=[ 𝐠_1 𝐠_2 𝐠_3 𝐠_4 ]
=
[ 0_9× 1 0_9× 3; 0_3× 1 𝐈_3; 0 0_1× 3; 1 0_1× 3; ]
,
By defining the transformation 𝐱_i→(𝐫_i,𝐫̇_i,𝐫̈_i,⃛𝐫_i,ψ_i,ψ̇_i), we can use the input-state feedback linearization approach presented in <cit.> and convert the quadcopter dynamics to the following external dynamics:
⃜𝐫_i=𝐯_i,
ψ̈_i=u_ψ,i,
where 𝐯_i is related to the control input of quadcopter i∈𝒱, denoted by 𝐮_i, by <cit.>
𝐯_i=𝐌_1,i𝐮_i+𝐌_2,i,
with
𝐌_1,i= [ L_𝐠__1L_𝐟^3x_i L_𝐠__2L_𝐟^3x_i L_𝐠__3L_𝐟^3x_i L_𝐠__4L_𝐟^3x_i; L_𝐠__1L_𝐟^3y_i L_𝐠__2L_𝐟^3y_i L_𝐠__3L_𝐟^3y_i L_𝐠__4L_𝐟^3y_i; L_𝐠__1L_𝐟^3z_i L_𝐠__2L_𝐟^3z_i L_𝐠__3L_𝐟^3z_i L_𝐠__4L_𝐟^3z_i; L_𝐠__1L_𝐟ψ_i L_𝐠__2L_𝐟ψ_i L_𝐠__3L_𝐟ψ_i L_𝐠__4L_𝐟ψ_i; ]∈ℝ^4× 4
,
𝐌_2,i= [ L_𝐟^4x_i L_𝐟^4y_i L_𝐟^4z_i L_𝐟^2ψ_i ]
^T∈ℝ^4× 1
.
In this paper, we assume that the desired yaw angle and its time derivative are both zero at any time t, and choose
u_ψ,i=-k_5ψ̇_i-k_6ψ_i
Therefore, we can assume that ψ_i(t)=0 at any time t, as a result, the quadcopter i∈𝒱 can be modeled by Eq. (<ref>).
Hossein Rastgoftar is an Assistant Professor at the University of Arizona. Prior to this, he was an adjunct Assistant Professor at the University of Michigan from 2020 to 2021. He was also an Assistant Research Scientist (2017 to 2020) and a Postdoctoral Researcher (2015 to 2017) in the Aerospace Engineering Department at the University of Michigan Ann Arbor. He received the B.Sc. degree in mechanical engineering-thermo-fluids from Shiraz University, Shiraz, Iran, the M.S. degrees in mechanical systems and solid mechanics from Shiraz University and the University of Central Florida, Orlando, FL, USA, and the Ph.D. degree in mechanical engineering from Drexel University, Philadelphia, in 2015. His current research interests include dynamics and control, multiagent systems, cyber-physical systems, and optimization and Markov decision processes.
|
http://arxiv.org/abs/2307.07557v1 | 20230714180347 | Photon and dilepton emission anisotropy for magnetized quark-gluon plasma | [
"Xinyang Wang",
"Igor A. Shovkovy"
] | hep-ph | [
"hep-ph",
"nucl-th"
] | |
http://arxiv.org/abs/2307.04883v1 | 20230710200601 | Doping driven metal-insulator transition in disordered graphene | [
"Kaiyi Guo",
"Ying Liang",
"Tianxing Ma"
] | cond-mat.str-el | [
"cond-mat.str-el",
"cond-mat.dis-nn",
"cond-mat.mes-hall"
] |
Department of Physics, Beijing Normal University, Beijing 100875, China
[email protected]
Department of Physics, Beijing Normal University, Beijing 100875, China
Key Laboratory of Multiscale Spin Physics(Ministry of Education), Beijing Normal University, Beijing 100875, China
[email protected]
Department of Physics, Beijing Normal University, Beijing 100875, China
Key Laboratory of Multiscale Spin Physics(Ministry of Education), Beijing Normal University, Beijing 100875, China
Controlling the metal-insulator transition in graphene-based materials is a crucial topic as it directly impacts their potential applications. Inspired by recent experiments, we study the effects of doping and bond disorder on the metal-insulator transition in graphene within the Hubbard model on a honeycomb lattice. By using the determinant quantum Monte Carlo method, we first conduct tests on the value of the average sign under various parameters, such as electron density, on-site interactions, temperature, and lattice size, so as to select appropriate parameters that alleviate the impact of the sign problem. Given the knowledge that bond disorder can lead to a metal-insulator transition, our study has revealed, after ruling out the influence of size effects, that the critical strength of disorder increases as the electron density decreases while decreasing as the on-site interactions increase. Furthermore, we compared our results with experimental data and concluded that, in actual graphene materials, the localization effect induced by doping plays a dominant role, resulting in an insulating phase.
Doping driven metal-insulator transition in disordered graphene
Tianxing Ma
===========================================================================================================================================================================================================================================================================================================================================================================================
§ INTRODUCTION
Since the discovery of graphene, a honeycomb single layer of sp^2-bonded carbon atoms, it has attracted enormous attention because of its excellent electrical, structural, mechanical, and optical properties, which have always been critical and challenging aspects of the research.<cit.> Due to its unique semimetal nature, intrinsic graphene cannot provide sufficient conductivity for desired applications, and doping is considered an optimal way to tailor the electronic structure of graphene,<cit.> which allows for control of the Fermi level E_F, can even push the van Hove singularity into the vicinity of E_F, and impacts superconducting pairing.<cit.> Moreover, doping plays an extremely important role in various applications, such as photodetectors,<cit.> sensors,<cit.> field-effect transistors,<cit.> and so on. In these applications, the regulation of the metal-insulator transition (MIT) in graphene materials is crucial, as it has a direct impact on further applications of these materials.<cit.> Therefore, the doping-dependent MIT in graphene is a worthwhile problem to investigate.
In essence, the MIT can be driven by various mechanisms, resulting in different types of insulators: changing the chemical potential can produce a transition from a metal to a band insulator.<cit.> Strong correlations can drive metals into Mott insulators with an energy gap,<cit.> while Anderson insulators originate from disorder-induced localization, where no gap can be observed in the spectrum.<cit.> It is of great importance to tune and control the MIT in graphene for applications.<cit.> However, the nature of the metal-insulator transition remains elusive despite tremendous effort, due to the complex interplay of doping, chemistry, elastic strain, and other applied fields.<cit.> There have been many experimental studies on the MIT in graphene-based systems. As early as 2009, researchers found that dosing atomic hydrogen on the surface of graphene would cause the system to transition from a metallic phase to an insulating phase, and they attributed this phenomenon to a possible transition to a strongly Anderson-localized ground state.<cit.> Reports on the MIT in nitrogen-doped and oxygen-doped graphene materials in 2016 further indicated that doping would transform the material from a metallic phase into an insulating phase.<cit.> Recent reports also suggest the possibility of modulating the MIT in graphene through an externally applied electric field.<cit.>
Drawing inspiration from the aforementioned research, we conducted an investigation on the mechanical properties of graphene lattices at MIT. Due to the fact that doping leads to changes in carrier density and introduces disorder into the system at the same time,<cit.> while an applied electric field can also modulate electron density,<cit.> we took into account both disorder and electron density in the system and studied their interplay and the impact they have on the MIT. In order to investigate strongly correlated problems with both disorder and doping, the determinant quantum Monte Carlo (DQMC) method is a powerful tool<cit.>.
In the context of QMC simulations, various interesting MIT phenomena have been reported in the honeycomb lattice.<cit.>
For example, a disorder-induced nonmagnetic insulating phase is found to emerge from the zero-temperature quantum critical point, separating a semimetal from a Mott insulator at half filling.<cit.> Furthermore, recent QMC simulations on a bilayer honeycomb lattice have identified a potential deconfined quantum critical point in interacting Dirac fermions as a new area of study for investigating the MIT.<cit.> Localization due to the on-site Coulomb interaction and disorder can also induce an insulating transition.<cit.>
In this paper, we completed our simulations using the DQMC method for cases with different electron densities and bond disorder strengths to investigate the MIT in doped graphene within a disordered Hubbard model. Our main focus is on the impact of electron density, on-site Coulomb interaction, and bond disorder on the conductivity σ_dc. We analyzed the interplay between these three factors and found that doping increases conductivity, which is favorable for the formation of metallic phases, while disorder has the opposite effect. The impact of the on-site Coulomb interaction on σ_dc depends on the particle-hole symmetry: at half-filling, the on-site Coulomb interaction suppresses conductivity, while away from half-filling it can promote conductivity. Our study expands the understanding of the MIT in the honeycomb lattice through doping and disorder and may provide some inspiration for modulating the MIT in experiments.
§ MODEL AND METHODS
The Hamiltonian for disordered Hubbard model on a honeycomb lattice is defined as
Ĥ= -∑_i,j,σt_ij(ĉ_iσ^†ĉ_jσ+ĉ_jσ^†ĉ_iσ)-μ∑_iσn̂_iσ
+U∑_ in̂_ i↑n̂_ i↓
where t_ij represents the hopping amplitude between two nearest-neighbor sites i and j, ĉ_iσ^† (ĉ_jσ) is the creation (annihilation) operator of a spin-σ electron at site i (j), and n̂_iσ=ĉ_iσ^†ĉ_iσ is the number operator, which counts the number of spin-σ electrons at site i. The chemical potential μ determines the density of the system, and when μ=U/2, n=1, the system is half-filled, indicating particle-hole symmetry. Here U>0 represents the on-site repulsive interaction.
Bond disorder is induced by modifying the matrix elements t_ij of the hopping matrix, which are drawn from t_ij∈[t-Δ/2,t+Δ/2] with probability density P(t_ij)=1/Δ (and are zero for non-neighboring sites). We set t=1 as the energy scale. The strength of disorder is characterized by Δ, which represents the magnitude of the modification of the matrix elements t_ij in the hopping matrix. In the presence of disorder, reliable results are obtained by averaging over 20 disorder realizations, which has been demonstrated to effectively avoid errors introduced by randomness.<cit.>
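A minimal sketch of this disorder prescription is shown below (the bond list and seed are illustrative assumptions; an L=12 honeycomb lattice with L×L unit cells would typically have 2L^2 sites and 3L^2 nearest-neighbor bonds): each realization draws box-distributed hoppings, and observables are averaged over 20 realizations.

```python
import numpy as np

def disordered_hopping_matrix(n_sites, bonds, t=1.0, delta=1.5, rng=None):
    """One bond-disorder realization: every nearest-neighbor bond (i, j) gets
    a hopping amplitude drawn uniformly from [t - delta/2, t + delta/2],
    i.e. the box distribution P(t_ij) = 1/delta described above."""
    if rng is None:
        rng = np.random.default_rng()
    T = np.zeros((n_sites, n_sites))
    for i, j in bonds:
        T[i, j] = T[j, i] = rng.uniform(t - delta / 2.0, t + delta / 2.0)
    return T

# toy bond list (4 sites on a ring) standing in for the honeycomb bond list;
# observables are averaged over 20 independent disorder realizations
bonds = [(0, 1), (1, 2), (2, 3), (3, 0)]
rng = np.random.default_rng(1)
realizations = [disordered_hopping_matrix(4, bonds, delta=1.5, rng=rng)
                for _ in range(20)]
print(np.mean([T[0, 1] for T in realizations]))   # close to t = 1.0
```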
The DQMC method is employed to complete simulations of the disordered Hubbard model on the doped honeycomb lattice at finite temperature with periodic boundary conditions. In DQMC, the partition function Z=Tr e^-β H is represented as an integral over the configuration space of a set of interacting fermions on a lattice, and the integral is evaluated by Monte Carlo sampling. The imaginary time interval (0,β) is discretely divided into M slices of interval Δτ, which is chosen as small as 0.1 to control the “Trotter errors". The diagonalization of two-operator products can be achieved with simplicity; however, the same cannot be said for the on-site interaction involving four-operator products, as it needs to be decoupled into quadratic terms before computation using a discrete Hubbard-Stratonovich (HS) field. Then, by analytically integrating out the quadratic fermion terms, the partition function can be converted into the product of two fermion determinants, one for spin up and the other for spin down. Except in a few special cases, the value of this product of determinants is not always positive, and this causes the sign problem. We calculated the average fermion sign sign, which is the ratio of the integral of the product of up and down spin determinants to the integral of the absolute value of the product<cit.>
⟨ S ⟩ = ∑_X det M_↑(X) det M_↓(X) / ∑_X | det M_↑(X) det M_↓(X) |
to measure the severity of the sign problem. sign=1 indicates the absence of a sign problem.
To study the MIT of the system, we computed the T-dependent DC conductivity from calculating the momentum q- and imaginary time τ-dependent current-current correlation function
Λ_xx(q,τ):
σ_dc(T)=β^2/πΛ_xx(q=0,τ=β/2)
where Λ_xx(q,τ)=<ĵ_x(q,τ)ĵ_x(-q,0)>, β=1/T, ĵ_x(q,τ) is the Fourier transform of time-dependent current operator ĵ_x(r,τ) in the x direction:
ĵ_x(r,τ) = e^Hτ/hĵ_x(r)e^-Hτ/h
where ĵ_x(r) is the electronic current density operator, defined in Eq.(<ref>).
ĵ_x(r) = i∑_σt_i+x̂,i×(c_i+x̂,σ^+c_iσ-c_i σ^+c_i+x̂,σ)
Eq.(<ref>) has been used for MIT in the Hubbard model in many studies.<cit.>
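As a small illustration of this proxy (not actual DQMC data; the toy correlator below is only a placeholder shape and the parameter values are assumptions), σ_dc follows from the q=0 correlator evaluated at τ=β/2:

```python
import numpy as np

def sigma_dc(lambda_xx_q0, beta):
    """DC conductivity proxy used above: sigma_dc = beta^2/pi * Lambda_xx(q=0, tau=beta/2).
    lambda_xx_q0: Lambda_xx(q=0, tau) sampled on a uniform tau grid in [0, beta]."""
    tau = np.linspace(0.0, beta, len(lambda_xx_q0))
    lam_half = np.interp(beta / 2.0, tau, lambda_xx_q0)   # value at tau = beta/2
    return beta**2 / np.pi * lam_half

# toy correlator decaying symmetrically about beta/2 (shape only)
beta, M = 10.0, 100
tau = np.linspace(0.0, beta, M + 1)
lam = 0.05 * np.cosh(2.0 * (tau - beta / 2.0)) / np.cosh(beta)
print(sigma_dc(lam, beta))
```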
§ RESULTS AND DISCUSSION
As the system is doped away from half-filling, the particle-hole symmetry no longer exists, resulting in a sign problem. It is known that sign∼ e^-β N_sγ, where γ depends on the values of n and U. For a given fixed n, γ is a monotonic function of U, whereas for a designated U, γ is relatively small at certain specific values of n. To ensure the reliability of the data, the value of the average sign sign, given by Eq.(<ref>), was calculated and the corresponding results are presented in Fig.<ref>. We present the average sign sign as a function of the electron density n for different values of (a) disorder strength, (b) on-site interaction, (c) temperature, and (d) lattice size. Our studies were conducted in the region n≥0.85, with the dashed line indicating the case n=0.85. The sign problem becomes more severe as the inverse temperature, interaction strength, and lattice size increase, while introducing disorder can alleviate the sign problem to some extent. This is consistent with previous investigations.<cit.>
Fig.<ref>(a) shows the variation of average sign with respect to n for different disorder strengths Δ at L=12, U=3.0 and β=10. It can be observed that in the clean limit, Δ=0.0, the sign problem is severe and the calculation is almost impossible even with minor doping. However, the introduction of disorder partially alleviates the sign problem, and in the regime Δ≥1.0, which is of our primary interest, the sign problem is effectively suppressed. Fig.<ref>(b) exhibits the influence of on-site interaction on the sign problem when L=12, Δ=1.5 and β=10, implying that a larger U greatly exacerbates the sign problem. Moreover, it is observable that when U<2.5, sign∼ 1, making the impact of the sign problem almost negligible. The similar consequence is also evident in the Fig.<ref>(c): when β <6, the sign problem has a minimal impact; however, as β increases and the temperature decreases, the sign problem becomes increasingly severe. Fig.<ref>(d) displays the effect of lattice size L on the sign problem: as the lattice size increases, sign decreases and the sign problem becomes dire.
Given the significance of the sign problem, along with computational processing time considerations, we opt to utilize a lattice size of L=12 as the primary subject of inquiry in this article, building upon the conclusions presented in Fig.<ref>. In Fig.<ref>, the dc conductivity is shown as a function of the temperature T for several values of the disorder strength Δ. The values are computed on the L=12 lattice with coupling strength U=2.0. Figs.<ref>(a)-(d) represent the situations under different densities: (a) n=1.00; (b) n=0.95; (c) n=0.90; and (d) n=0.85. We know that the system behaves as a metal in the clean limit at half-filling with the coupling strength U=2.0<cit.>, which means that in the low-temperature regime, dσ_dc/dT<0 and σ_dc diverges as the temperature is further decreased to the limit T→0. Turning to the situation with bond disorder, the system transfers from the metallic to the insulating phase, indicated by dσ_dc/dT>0 at low T, with increasing value of Δ, as shown in Fig.<ref>(a). In this case, the critical disorder strength Δ_c for the MIT lies between 1.5 and 2.0. When the system deviates from half-filling, as shown in Figs.<ref>(b)-(d), distinct insulating behavior is only observed for Δ>1.5. From this, we may draw the conclusion that in disordered systems, doping increases the critical disorder strength Δ_c required for the MIT.
To exclude the possibility that apparent insulating behavior arises from the system size being smaller than the localization length, we examine the finite-size effect. Fig.<ref> exhibits the response of the conductivity σ to the lattice size L=9,12,15 for different electron densities, (a) n=0.95 and (b) n=0.85, and varying values of disorder, (a) Δ=1.5, 2.0 and (b) Δ=0.0, 2.5. Upon comparison, it is evident that the conductivity in both the metallic and insulating phases is minimally affected by the system size. Additionally, Fig.<ref>(a) illustrates that the critical disorder strength remains consistent across the lattice sizes L=9,12,15. As the computational time rapidly increases with lattice size, and a larger L leads to a more severe sign problem away from half-filling, it is reasonable that we selected L=12 as the primary focus of our study.
In Fig.<ref>, we further investigate the impact of the electron density n on the MIT. Fig.<ref>(a) and Fig.<ref>(b) respectively demonstrate the effect of n on the σ_dc-T curve for L=12, U=2.0, and disorder strengths (a) Δ=1.5 and (b) Δ=2.0. When Δ=1.5, as shown in Fig.<ref>(a), at n=1.00 the system exhibits an insulating phase due to hopping disorder, while away from half-filling the conductivity σ_dc increases with decreasing temperature, indicating metallic behavior and thus demonstrating a MIT induced by doping. When Δ=2.0, however, as shown in Fig.<ref>(b), the system always remains in an insulating phase irrespective of the variation in n. We have also included the σ_dc-T curve for n=0.7, which reveals that within our measurement range, doping does not induce a MIT when the disorder strength is Δ=2.0. A similar situation can be observed at on-site Coulomb interaction U=3.0, as shown in Fig.<ref>(c) (L=12, U=3.0, Δ=1.5) and (d) (L=12, U=3.0, Δ=2.0). Doping induces a transition from an insulating to a metallic phase at Δ=1.5, whereas no metallic phase is observed in the range n≤0.85 when Δ=2.0.
To obtain a more accurate determination of the critical disorder strength for the MIT, we plot the variation of the conductivity σ_dc with disorder strength Δ at the three lowest temperatures, β=6,8,10, in Fig.<ref>(a)-(c). When Δ<Δ_c, σ_dc increases with decreasing temperature, exhibiting metallic behavior, while for Δ>Δ_c, σ_dc decreases with decreasing temperature, exhibiting insulating behavior. The three curves in each subplot of Fig.<ref> intersect nicely at a point where the conductivity σ_dc becomes temperature-independent, marking the critical point of the MIT. Here, (a) corresponds to L=12,U=2.0,n=0.95; (b) corresponds to L=12,U=2.0,n=0.90; and (c) corresponds to L=12,U=3.0,n=0.90. We have conducted extensive calculations to obtain the values of Δ_c for different parameters and plot the variation of Δ_c with on-site Coulomb interaction U for electron densities n=1.00 and n=0.85 in Fig.<ref>(d), where the region above each curve corresponds to the insulating phase and the region below to the metallic phase. An interesting phenomenon can be observed: when n=1.00 and the system is half-filled, the critical disorder strength Δ_c of the MIT decreases with an increase in U, indicating a suppressing effect of U on the metallic state; whereas when n=0.85 and the system deviates from half-filling, Δ_c increases with an increase in U, signifying a promoting effect of U on the metallic state.
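The crossing-point construction described here can be automated; the sketch below (with invented toy curves standing in for the measured σ_dc(Δ) at two temperatures) locates Δ_c by linearly interpolating the sign change of the difference between the colder and warmer curves.

```python
import numpy as np

def critical_disorder(deltas, sigma_low_T, sigma_high_T):
    """Estimate Delta_c as the crossing of sigma_dc(Delta) measured at two
    temperatures: below Delta_c the colder curve lies above (metal), above
    Delta_c it lies below (insulator)."""
    diff = np.asarray(sigma_low_T) - np.asarray(sigma_high_T)
    k = np.where(np.diff(np.sign(diff)) != 0)[0][0]       # first sign change
    # linear interpolation between the two bracketing disorder strengths
    return deltas[k] - diff[k] * (deltas[k + 1] - deltas[k]) / (diff[k + 1] - diff[k])

# toy curves: conductivity vs disorder at beta = 10 (colder) and beta = 6 (warmer)
deltas = np.array([1.0, 1.25, 1.5, 1.75, 2.0])
sigma_beta10 = np.array([0.90, 0.70, 0.50, 0.38, 0.28])
sigma_beta6 = np.array([0.70, 0.60, 0.54, 0.45, 0.40])
print(critical_disorder(deltas, sigma_beta10, sigma_beta6))  # ~1.4
```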
Next we move on to the role of U in the MIT for the half-filled and doped cases. Fig.<ref>(d) demonstrates that at n=1.0 and Δ=1.5, an increase in U drives the system from a metallic state to an insulating state, whereas at n=0.85 and Δ=2.0, an increase in U leads the system from an insulating state to a metallic state. We set n=1.00,0.95,0.90,0.85 in Fig.<ref>(a)-(d). In order to observe the phase transition, we set the disorder strength to Δ=1.5 for half-filling and Δ=2.0 for deviations from half-filling, respectively. Furthermore, we extend the lowest temperature to β=14. Although this approach incurs a significant degree of error, it still yields valuable information. We then proceed to calculate the temperature dependence of the conductivity σ at different on-site Coulomb interactions U=1.0,2.0,3.0. Fig.<ref>(a) shows the transition of the system from a metallic state to an insulating state as the on-site Coulomb interaction U increases, while Figs.<ref>(b)-(d) show the transition in the opposite direction: at U=1.0,2.0, the system shows insulating phases, and at U=3.0, the system exhibits a metallic phase. Overall, Fig.<ref> demonstrates that in half-filled systems, U has a suppressing effect on the metallic state, while in doped systems, U has a promoting effect on the metallic state.
§ CONCLUSION
In summary, we employed the determinant quantum Monte Carlo method to investigate the regulatory effects of doping and disorder on the metal-insulator transition in graphene materials. We discussed the factors affecting the MIT, including doping, temperature, lattice size, and on-site Coulomb interactions, by calculating the variation of the DC conductivity σ_dc with temperature for different parameter values and using the sign of the low-temperature slope of σ_dc versus T to determine whether the system is in a metallic or insulating phase. Through our calculations, we have reached the conclusion that doping increases the conductivity and induces a transition from the insulating to the metallic phase, while disorder has the opposite effect.
In experiments, substitutional doping or adsorbate doping often simultaneously alters the carrier density and introduces disorder, thus making the competition between doping and disorder important in the study of MIT in graphene materials. Our calculations show that when doping and disorder coexist, a larger disorder strength may cause the system to transition from the metal phase to the insulating phase. This finding is consistent with the metal-insulator transition phenomenon observed in hydrogen, nitrogen, and oxygen substitutional doped graphene materials in experiments.<cit.> Our research contributes to a deeper understanding of the mechanisms underlying the metal-insulator transition in graphene materials, and may be helpful in the development of applications for graphene materials.
§ ACKNOWLEDGEMENTS
This work was supported by NSFC (No. 11974049).
The numerical simulations in this work were performed at HSCC of
Beijing Normal University.
|
http://arxiv.org/abs/2307.04736v1 | 20230710175021 | Atmospheric muons at PeV energies in radio neutrino detectors | [
"Lilly Pyras",
"Christian Glaser",
"Steffen Hallmann",
"Anna Nelles"
] | astro-ph.HE | [
"astro-ph.HE",
"hep-ph"
] |
§ INTRODUCTION
The existence of a high-energy astrophysical neutrino flux is by now firmly established, e.g. <cit.>. However, at energies beyond tens of PeV the flux predictions for neutrinos produced directly in sources or when ultra-high energy cosmic rays interact with cosmological photon fields vary widely e.g. <cit.>. It is however clear that detectors with larger effective detection volumes than currently exist are necessary to discover EeV neutrinos. Radio neutrino observatories offer a promising approach to this challenge by exploiting the kilometer-scale attenuation length of radio emission in ice and the relatively low cost per detection unit <cit.>. Among these observatories are the Radio Neutrino Observatory Greenland (RNO-G) <cit.>, which is currently under construction, and the planned radio array for the extension of IceCube, IceCube-Gen2 <cit.>. Both of these experiments are discovery-focused, making it essential to have a robust understanding of the signals and backgrounds involved.
The radio signal to be detected is generated by the particle cascade following a neutrino interaction in ice. The build-up of a net negative charge at the shower front leads to the emission of coherent radiation, the Askaryan emission <cit.>. Due to a Cherenkov-like effect, the emission is strongest at the Cherenkov angle (∼56° in ice). The signal amplitude at a given observer distance scales linearly with the shower energy <cit.> and is typically visible above the thermal noise at 5-10 PeV <cit.>. Due to this emission mechanism, any particle cascade induced in the ice with the necessary energy deposit creates a detectable signal, independent of its parent particle. This means that also high-energy muons stemming from air showers could act as a background in neutrino detectors whenever they initiate a shower <cit.>.
Radio detectors do not need to be installed deep in the glacial ice. The antennas are typically located within 200 m below the surface, which makes them sensitive to potential anthropogenic noise[This will not be discussed here as its mitigation is very experiment- and site-dependent.], as well as air shower induced backgrounds.
In general, three different types of air shower backgrounds are distinguished: (1) the in-air radio emission of air showers that is refracted into the ice to the antennas; (2) the core of incompletely developed air showers can penetrate into the ice, where it induces a cascade that emits radio signals; and (3) in-ice particle showers following an energy loss of an atmospheric muon. The signatures of (1) and (2) have previously been studied and quantified <cit.>. Both signals can be triangulated to close to the surface and therefore provide signatures that can be suppressed on an analysis level. Reflections in the ice may complicate the reconstruction, but this is true also for the neutrino signals itself. For both direct air shower backgrounds, a reasonable estimate of the background rate is possible, because the distribution of shower maxima as function of energy is relatively well-known.
The number of muon-induced background events, however, has been studied less. It has in principle been shown that muons are a non-negligible background to radio neutrino detectors in ice <cit.>. However, the predicted event rate depends on the muon flux, which in turn strongly depends on the hadronic interaction model and the cosmic ray composition, both of which are less well determined, in particular at the highest energies. Furthermore, instrumental parameters, foremost the triggering system, determine the observable rate. We present a comprehensive study of the muon-induced background in this article to guide future searches for neutrinos beyond PeV energies. For energies up to 10^5 GeV, the influence of hadronic interaction models and the cosmic ray spectrum on the atmospheric muon flux is studied in <cit.>.
§ PREDICTIONS OF MUONS AT PEV ENERGIES AND BEYOND
Atmospheric muons are produced in extensive air showers, which occur when high-energy cosmic rays penetrate the Earth's atmosphere. The cosmic ray nucleon interacts with an air nucleus and produces short-lived intermediate particles, mostly pions (the lightest known meson) and a few heavier particles with shorter life times, such as kaons, D-mesons, etc. Their decay gives rise to an atmospheric lepton flux, including muons. The energy range in which the atmospheric muon flux is visible to radio neutrino detectors is limited by the minimum muon energy that is required to produce an in-ice particle cascade with a visible radio signal (around 10PeV). At these energies, the flux of parent cosmic rays is low, which results in a very small muon flux. Nonetheless, this muon rate is likely comparable to the expected neutrino rate at these energies, making radio neutrino detectors the first experiments where atmospheric muons of EeV energies become relevant. While the much-discussed Muon Puzzle <cit.> describes a discrepancy between predicted and observed muon production in air showers for muons with energies around 10TeV (more muons are measured than predicted by Monte Carlo simulations), the situation for muons above PeV energy is different: These muons are usually produced within the first three interactions of an air shower, rather than continuously throughout the shower development <cit.>. The energy of a parent particle is distributed among its children, which leads to lower energy particles with each further interaction of the cascade. Consequently, one has to concentrate on the highest energy interactions to study the relevant muon background. Unfortunately, these interactions are far outside of the energy regime currently observable at accelerators, which makes far-reaching extrapolations necessary.
§.§ Muon production in air showers
Atmospheric muons are produced in the hadronic cascade of an air shower mainly through the decay of short-lived mesons, namely charged pions and kaons (conventional component) <cit.>. At very high energies the Lorentz time dilatation increases the decay length of pions and kaons to a multiple of their interaction length (ℓ_int) in air, making it more likely that they will interact and lose energy before they can decay. The contribution of particles with a shorter lifetime τ then becomes dominant, as shown in <ref>. Due to their almost immediate decay, the contribution of short-lived hadrons with cτ≪ℓ_int is called the prompt flux and dominates above 10^6 GeV <cit.>. Charmed hadrons (D^0, D^+, D^+_s, Λ^+_c, Ω^0_c and their antiparticles) have large (∼10%) branching ratios into semi-leptonic modes and a lifetime τ ∼ 10^-12 s, implying a prompt decay with a probability of order 1 up to energies around 10^7 GeV <cit.>. In principle, also bottom hadrons (B^0, B^+, B^+_s, B^+_c, Λ^0_b, Ξ^0_b, Ξ^+_b), which have similar lifetimes and semi-leptonic decays, contribute to the prompt muon flux. B-mesons are less frequently produced by cosmic rays in the atmosphere, but their decay length is smaller, yielding a contribution to the muon flux at 10^7 GeV which is 10% of the one from charm hadron decays <cit.>.
The rare prompt decays of unflavored mesons (η, η', ρ^0, ω, ϕ) <cit.> and photo-conversion into a muon pair (γ Z →μ^+ μ^- Z), e.g. the Bethe-Heitler process, Drell-Yan processes <cit.>, and photon conversion into a vector meson (including J/Ψ) decaying into muons, make significant additional contributions, which dominate the muon flux above ∼3×10^8 GeV <cit.>. A sketch of the contributions according to <cit.> is shown in <ref>. The uncertainties are a rough estimate considering experimental limitations and differences between event generators <cit.>.
Taking into account these different sources, the atmospheric muon flux can then be expressed as the sum of five components:
ϕ_μ(E, θ) = ϕ^conv_μ(E, θ) + ϕ^charm_μ(E, θ) + ϕ^unflav_μ(E, θ) + ϕ^γ_μ(E, θ) + ϕ^bottom_μ(E, θ).
The high energy muon flux is mainly driven by the outcome of the first interaction of an air shower. The relativistic hadron-ion collisions under low momentum transfer are in the non-perturbative regime of quantum chromodynamics (QCD) <cit.>, where hadron production cannot be calculated directly from first principles. Instead, effective theories and phenomenology are used; see <cit.> for a recent review. To simulate the hadron production, different hadronic interaction models are present. They are the largest source of uncertainties in air shower simulations, because the center-of-mass energy in the first interactions significantly exceeds the maximum energy studied at the LHC and interactions in the forward direction, i.e. high pseudorapidities are not well covered <cit.>. When extrapolating to higher energies, the model predictions thus diverge even further. A detailed discussion of post-LHC hadronic interaction models follows in <ref>.
Next to the particle physics processes in the air shower, the atmospheric muon flux is determined by the cosmic ray composition. The type of the primary particle entering the atmosphere and its number of nucleons has an influence on the number of muons produced. The muon number grows less than linearly with the primary energy of an air shower <cit.>. This is a consequence of the energy fraction f given to charged pions in each interaction, f∼(2/3)^n after n generations. For nuclear primaries, a nucleus with mass number A can be treated as the sum of A separate proton air showers all starting at the same point, each with 1/A of the primary energy <cit.>. The lower energy nucleons which initiate the shower generate fewer interaction generations, and so lose less energy to electromagnetic components <cit.>. Therefore the number of muons is larger for heavy primaries than for showers initiated by light nuclei of the same energy.
For very high-energy muons, which are created within the first interactions, this picture changes: since a proton carries its kinetic energy in a single nucleon, it can produce higher energy particles than an iron primary with the same energy. Therefore, a 3×10^10 GeV proton shower can produce muons up to 10^10 GeV, while an iron-induced shower with the same energy and arrival direction only produces muons up to 2×10^8 GeV <cit.>, as shown in <ref>.
§.§ Muon flux simulations
For this article, the atmospheric muon flux is calculated using Matrix Cascade Equations (MCEq) <cit.>, which describe the evolution of particle densities as they propagate through the atmosphere, using the CORSIKA parametrizations <cit.> as the atmospheric model. The 1-dimensional cascade equations neglect the lateral development of the shower relative to the longitudinal one, which is important at lower energies, where the transverse momentum of the particles may be relatively important and imply a larger lateral displacement. Since this paper focuses on energies > 1 PeV, this seems an acceptable limitation. Compared to computationally extensive Monte Carlo codes like AIRES <cit.> and CORSIKA <cit.>, MCEq provides a way to estimate the relative importance of a given parameter, for which accurate studies with full shower simulations would require very large statistics.
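A minimal example of such an MCEq calculation is sketched below. It assumes the standard MCEq/crflux Python API and uses the H3a primary model for brevity (the baseline composition used later in this work is GSF); exact option strings may differ between package versions.

```python
import numpy as np
from MCEq.core import MCEqRun
import crflux.models as crf

# set up the cascade-equation solver for one zenith angle
mceq = MCEqRun(
    interaction_model="SIBYLL23C",                 # hadronic interaction model
    primary_model=(crf.HillasGaisser2012, "H3a"),  # cosmic-ray flux model (H3a for brevity)
    theta_deg=60.0,                                # zenith angle of the shower axis
)
mceq.solve()                                       # solve the matrix cascade equations

e_grid = mceq.e_grid                               # muon energy grid in GeV
mu_flux = (mceq.get_solution("total_mu+", mag=0)
           + mceq.get_solution("total_mu-", mag=0))

# integral surface muon flux above 10 PeV (1e7 GeV), the regime relevant here,
# in MCEq's flux units (per cm^2, s, sr)
mask = e_grid > 1e7
print(np.trapz(mu_flux[mask], e_grid[mask]))
```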
§.§ Dependence on hadronic interaction models
Several theoretical approximations describing particle production are available for different energy ranges and kinematic regimes. Different approaches have to be combined to model all hadronic interactions in air showers. For this paper, the post-LHC hadronic interaction models EPOS-LHC <cit.>, QGSJet-II.04 <cit.>, and Sibyll-2.3c <cit.> are considered. While EPOS-LHC has a more general focus on minimum-bias proton-proton and heavy-ion collisions, the latter two are focused on air shower simulation.
A theoretical prediction of the muon flux above PeV energies should include at least four components (see <ref>), which are, however, not all taken into account in the same way in the hadronic interaction models. While the conventional flux is implemented in all models considered, only Sibyll-2.3c includes charm production (D^+, D^0, D_s, Λ_c) through a parametrization; forward charm production is intrinsically included in the nucleon PDF <cit.>. Sibyll-2.3c also includes muons from unflavored mesons and J/Ψ.
EPOS-LHC does not include charm; its prompt component arises from the decay of unflavored mesons. QGSJet-II.04 only considers η decay as a production channel for prompt muons <cit.>. The calculated muon fluxes therefore start to vary widely at 1 PeV, where the prompt flux dominates, see <ref> left.
EPOS-LHC and QGSJet-II.04 yield the lowest muon flux because they neglect the charm component and in the latter case most unflavored mesons. The photo-production of muon pairs that becomes relevant at PeV energies is not implemented in any model <cit.>. Given that only Sibyll-2.3c includes charm and unflavored mesons, it is currently the most complete model to predict the muon flux above PeV energies. However, it still under-predicts the flux of muons at the highest energies due to missing production channels from photo-conversion and B-mesons. The main theoretical uncertainties arise from charm cross-section calculations. The theoretical calculations are limited by the uncertainties in the scale, charm mass, and the nuclear PDFs <cit.>. A non-perturbative intrinsic charm component may also contribute <cit.>.
This short overview illustrates the difficulty of predictions beyond LHC energies. On the one hand, theoretical uncertainties are present due to the non-availability of measurements, and on the other hand, known processes have not been implemented in all codes, since the priorities have been weighted differently for existing hadronic interaction models. We therefore use the spread between the three hadronic interaction models as indication of the current uncertainties, while keeping in mind that they do not provide the full range of possible systematic uncertainties at this point.
§.§ Dependence on cosmic ray composition
The cosmic ray spectrum covers several decades of energy up to 10^11 GeV, including particles of galactic and extra-galactic origin. Just below the so-called ankle at 8×10^9 GeV, the transition region from galactic to extra-galactic cosmic rays is expected <cit.>, with detailed explanations still varying.
Measurements of the cosmic ray composition above a few 10^5 GeV suffer from the uncertainties in the hadronic interaction models, since the composition has to be inferred from shower parameters such as the position of the shower maximum X_max, which leaves composition models with much room for interpretation. Since the ultra-high energy muon flux directly depends on the cosmic ray composition, different models have been investigated to study the uncertainty stemming from this aspect. We also combine models to study the influence of galactic and extra-galactic components. This is done to show the spread in models, rather than choosing one over the other for correctness.
The well-known Hillas Gaisser models are theoretical simplifications for extreme scenarios: a heavy composition after the ankle (H3a) <cit.> and a proton-rich composition (H4a) <cit.>.
This is contrasted by the Global Spline Fit (GSF) <cit.>, a data-driven parameterization that considers measurements of more than ten experiments and provides uncertainties at each energy. GSF is agnostic to theoretical models explaining the derived composition in terms of sources and propagation.
Thoudam et al. <cit.> published different theory-driven cosmic ray spectra up to EeV energies. In the following, their prediction for cosmic rays stemming from Supernova remnants (SNR-CR) and Wolf-Rayet stars (WR-CR) is used as a galactic component, labeled T, and are combined with different extra-galactic components.
The UFA model by <cit.> predicts a strong pure-proton component concentrated only about one order of magnitude in energy below the ankle. For our combination into the T+UFA model the results are optimized for a pure nitrogen galactic composition, which matches the predicted composition for WR-CR <cit.>.
The extra-galactic component of Heinze et al. <cit.> is based on a framework in which an ensemble of generalized ultra-high energy cosmic ray accelerators is characterized by a universal spectral index (equal for all injection species), a maximal rigidity, and the normalizations for five nuclear element groups. The source evolution is included as an additional free parameter. This allows for a parameter scan with a best-fit result. The composition used in this paper is obtained by a fit to the Auger data from 2019.
The resulting muon flux is shown in <ref> for Sibyll-2.3c as hadronic interaction model.
As discussed in <ref>, the relevant quantity to produce high-energy muons from different primaries is the energy per nucleon. For hydrogen as a primary (with A = 1) the nucleon energy is equal to the primary energy, for heavier elements the energy scales with 1/A where A is the atomic number.
On the left of <ref> the hydrogen flux for the chosen models is shown, while the right of <ref> depicts the proton fraction taking into account all nuclei, relative to their total nucleon number. For a pure proton flux the fraction would be 1, given that the hydrogen nucleus consists of a single proton. For pure iron (with 26 protons and 30 neutrons), the fraction would be ∼ 0.46. The models start to deviate at 10^7 GeV, close to the transition region from galactic cosmic rays to extra-galactic cosmic rays. Here, the theory-based models (T+UFA, T+H) have a dip in proton flux. The proton fraction of the GSF flux only decreases in the transition region to a fraction of 0.9 and is significantly higher than in the theoretical models (around 0.5). The GSF model therefore also predicts the highest muon flux, with the exception of the proton-only case at energies > 3×10^17 eV. We will therefore mainly use GSF to estimate the muon numbers going forward to remain conservative, keeping in mind that it is just one realization of the uncertainty stemming from the cosmic ray composition.
§ SIGNATURES OF MUONS IN RADIO INSTRUMENTS
When a muon travels through the ice it initiates showers along its track. At PeV energies and above, the relevant shower production mechanism is catastrophic energy losses. As a rule of thumb, the energy of the parent particle inducing the cosmic ray air shower is roughly one decade higher than the subsequent muon, while the following in-ice particle cascade has a shower energy typically one decade lower than the initiating muon. The in-ice shower energy is the important quantity for the radio emission and hence the one which determines if a muon triggers the in-ice radio detector. The Monte Carlo framework NuRadioMC <cit.> with its extension to simulate secondary interactions <cit.> is used to simulate the muon interaction in-ice, the subsequent Askaryan radio emission, the propagation of the radio signal to the detector, and finally the detector response to the electric field. In order to also track secondary losses of all types of leptons, the lepton propagation code PROPOSAL <cit.> has been included in NuRadioMC and is used for our simulations.
We study the dependence on the instrument details (<ref>), on the muon flux itself (<ref>), as well as possible mitigation strategies (<ref>).
For the purpose of this work, we simulate a detector array of 35 stations, similar to the Radio Neutrino Observatory Greenland (RNO-G). Each station comprises a dipole antenna (Vpol) located at a depth of -100 m in the ice (deep component) and three log-periodic dipole antennas (LPDA) pointing straight down, located at the surface (shallow component). The stations are arranged in a square grid with a spacing of 1.25 km.
Simulations are performed for several triggers to study the dependence on instrument details. The assumed noise temperature is 300 K in both the deep and shallow components. At a depth of -100 m, signal-only triggers with simple thresholds of 1.5σ_noise and 2.5σ_noise in the band of 96–220 MHz are evaluated. At the shallow part, a high-low threshold trigger of 2.5σ_noise with a two-out-of-three coincidence in the band of 80–180 MHz is applied. The deep triggers are a simplification of the phased-array trigger that is the current state of the art in radio neutrino detection <cit.>. Simulating a true phased array using a fixed trigger rate would be the best approximation of a real instrument, as done in e.g. <cit.>. To save computing time, we chose to use the simplified trigger of a single dipole. While the 2.5σ case is likely close to the current implementation for RNO-G <cit.>, a 1.5σ trigger is used as a proxy for potential future optimizations. A true phased-array implementation will likely affect the absolute event numbers (e.g. <cit.>), but should not affect the relative scaling of different effects.
The shallow trigger represents an optimistic performance of the current RNO-G trigger.
In order to express the detector performance, the effective area is calculated. This is done by simulating muon interactions within an ice volume containing the detector array, the initial muon position is on the air-ice planar interface. Since only the projection of the detector is perpendicular to the direction of the flux, the simulated area has to be corrected with cos(θ). The effective area (A_eff) is the projected surface area multiplied with the trigger efficiency:
A_eff = A_proj·N_trig/N_sim = A_sim·cos(θ) ·N_trig/N_sim.
The expected event rate is obtained combining the effective area with an incident muon flux integrated over energy and the solid angle element of the flux, which adds a sin(θ) in spherical coordinates:
Γ_μ(E, θ) = ∫_t_1^t_2∫_E_1^E_2∫_0^2π∫_θ_1^θ_2Φ_μ(E, θ, ϕ) · A_eff(E, θ) ·cos(θ) ·sin(θ) dθ dϕ dE dt.
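To make the two formulas concrete, the following Python sketch shows how the effective area and the expected muon rate could be assembled from simulated trigger counts and a tabulated flux. All array names and numbers are placeholders (not the actual NuRadioMC output), and the integral is evaluated as the simple bin sum that the equations above suggest.

```python
import numpy as np

# Toy binning in muon energy [GeV] and zenith angle [rad]; placeholder values,
# not the actual NuRadioMC simulation output.
E_edges = np.logspace(7, 10, 7)
th_edges = np.radians(np.linspace(0, 90, 10))
E_c = np.sqrt(E_edges[:-1] * E_edges[1:])
th_c = 0.5 * (th_edges[:-1] + th_edges[1:])

A_sim = 25e6                                    # simulated surface area [m^2]
N_sim = np.full((E_c.size, th_c.size), 1e5)     # thrown muons per bin
N_trig = np.full((E_c.size, th_c.size), 50.0)   # triggered muons per bin (toy)

# Effective area: projected area times trigger efficiency (first equation)
A_eff = A_sim * np.cos(th_c)[None, :] * N_trig / N_sim          # [m^2]

# Toy muon flux Phi(E, theta) in 1/(GeV m^2 s sr): steeply falling power law
phi = 1e-18 * (E_c[:, None] / 1e7) ** (-3.7) * np.ones_like(A_eff)

# Rate integral (second equation as written, evaluated as a sum over dE, dtheta, dphi)
dE = np.diff(E_edges)[:, None]
dth = np.diff(th_edges)[None, :]
rate = 2 * np.pi * np.sum(phi * A_eff * np.cos(th_c) * np.sin(th_c) * dE * dth)
print(f"expected muons per year: {rate * 3.15e7:.3f}")
```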
§.§ Dependence on instrumental details
As shown in <ref>, the shallow antennas detect the fewest muons. This is expected, as the LPDAs also have a comparatively smaller neutrino effective volume, due to their location close to the surface. As a consequence of the ice profile, in which the index of refraction increases with depth, signals propagate less often to the surface, but are bent instead towards the denser ice. The shallow LPDAs detect mostly horizontal muons above 65° zenith angle, because of the geometry constraint by the Cherenkov cone, while the deep antennas have a broad detection range with a peak around 55° zenith angle. A lower detection threshold increases the number of muons from 0.07 per year and 35 stations to 0.16 per year and 35 stations. The higher muon yield can mostly be attributed to muons in the range of 10^7–10^8 GeV. The uncertainties shown in <ref> are statistical uncertainties only, based on the Feldman-Cousins confidence belts <cit.>, which provide upper limits for null results and two-sided confidence intervals for non-null results, which converge to a Poisson error. At lower energies, only a few geometries allow the antenna to register a signal, hence the statistics are small, and uncertainties increase due to the comparatively high muon flux at low energy. Most events (97%) are only seen in one station, regardless of the trigger configuration.
§.§ Dependence on hadronic interaction models and cosmic ray composition
The differences in the flux predictions due to the hadronic interaction models propagate almost directly into the muon rate of an in-ice radio detector. The left of <ref> shows the number of muons predicted per year and 35 stations for three different hadronic interaction models and the same cosmic ray composition. As discussed in <ref>, Sibyll-2.3c includes the most production mechanisms, which explains the larger flux. On the right of <ref>, the expected muon rate for the same hadronic interaction model but three different cosmic ray compositions is shown. The GSF model yields the highest muon rate, with a maximum between 10^7 and 10^8 GeV muon energy, which is expected due to the higher proton content.
§ RELATION TO PARENT AIR SHOWER
While the projected number of muons is relatively small, they can still pose a problem if the neutrino rate is comparatively low. One possibility to distinguish between an atmospheric muon and a neutrino is to detect the air shower from which the muon originates. This would identify the muon and provide a veto mechanism on muon events, as also discussed in <cit.>.
§.§ Detectability of the parent air shower
To calculate the veto efficiency, it is essential to have information about the energy and arrival direction of the air shower, as well as the distance to the nearest detector station. As high-energy muons are boosted along the air shower's axis, the cosmic ray arrival direction can be assumed to be the same as the muon arrival direction. The location of the air shower core can be determined by projecting the muon vertex position along the arrival direction until it intersects the boundary between the ice and air.
To establish a relationship between a muon and the corresponding cosmic ray energy, Bayes' theorem can be applied. By solving the Matrix Cascade Equation with Sibyll-2.3c as hadronic interaction model for different types of primary cosmic rays (pr) - namely proton, helium, carbon, and iron - and over a range of cosmic ray energies (10 bins between 10^6 and 10^11 GeV), the muon flux at ground level can be calculated. Once the muon flux for a specific cosmic-ray-induced shower is known, it has to be folded with the actual flux of the primary to obtain the muon flux for all cosmic rays. Here, the number of the different primaries is drawn from the GSF cosmic ray spectrum. The probability that a muon with a certain energy stems from a cosmic ray of energy E_CR, p(E_CR|E_μ), is calculated by
p(E_CR|E_μ) = ∑_pr N_μ(E_CR, E_μ, θ, pr) · N_CR(E_CR, θ, pr)/∑_E_CR∑_pr N_μ(E_CR, E_μ, θ, pr) · N_CR(E_CR, θ, pr).
The number of muons N_μ is calculated for each shower, therefore it has to be summed over all possible primaries, pr. The number of cosmic rays N_CR is calculated from the cosmic ray flux and also needs to be summed over all primaries. This sum is normalized by summing over all possible cosmic ray energies the muon can stem from.
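As a small illustration of this normalization, a vectorized NumPy sketch is given below. The yield arrays are hypothetical stand-ins for the cascade-equation output (muons per shower) and for the number of cosmic rays drawn from GSF; only the bookkeeping of the sums over primaries and cosmic-ray energies is meant to be instructive.

```python
import numpy as np

# Hypothetical yields: N_mu[i_CR, i_mu, i_prim] = muons per shower,
# N_CR[i_CR, i_prim] = number of cosmic rays per energy bin and primary (e.g. GSF).
n_ECR, n_Emu, n_prim = 10, 8, 4          # primaries: p, He, C, Fe
rng = np.random.default_rng(1)
N_mu = rng.random((n_ECR, n_Emu, n_prim))
N_CR = rng.random((n_ECR, n_prim))

# Numerator: sum over primaries of N_mu * N_CR for every (E_CR, E_mu) pair
num = np.einsum('cmp,cp->cm', N_mu, N_CR)

# Denominator: additionally sum over all cosmic-ray energies the muon can stem from
p_ECR_given_Emu = num / num.sum(axis=0, keepdims=True)

# Each column is a normalized probability distribution over E_CR
assert np.allclose(p_ECR_given_Emu.sum(axis=0), 1.0)
```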
The distribution of cosmic ray energies for different muon energies is shown in <ref> for Sibyll-2.3c and GSF. The plot shows that a muon with a given energy can stem from a variety of cosmic ray energies, most likely from a cosmic ray with an energy ∼ 10× higher than the muon energy. Since no cosmic rays have been measured above 10^11 GeV and thus there is no rate prediction, the highest-energy muon distributions show a different shape. The relation between muon and cosmic ray energy depends on the choice of hadronic interaction model and cosmic ray composition.
To calculate the veto efficiency in an RNO-G-like array, for each muon event an air shower is selected according to the muon arrival direction and placed inside the array as previously described. The resulting radio signal is simulated with CORSIKA <cit.> and the radio extension CoREAS <cit.> and then folded with the detector response using NuRadioReco <cit.>. Since the amplitude of the air shower signal scales linearly with the cosmic ray energy <cit.>, one can now calculate which air shower energy is necessary to exceed a simple 2.5σ_noise trigger threshold in an upward-pointing shallow LPDA antenna and hence veto the muon event. In the last step, the probability that a muon event stems from an air shower with an energy higher than the trigger threshold energy is calculated and assigned to that muon. Combined with the predicted muon flux, the number of muons that can be vetoed by detecting the parent cosmic ray can be calculated. <ref> shows a veto efficiency close to 100% for muon energies > 10^9 GeV. Muons originating from inclined air showers are more likely to be vetoed, since the radio signal for inclined air showers covers a larger area but becomes fainter at the same time. Therefore the veto efficiency increases with higher zenith angles only for higher energies.
§.§ Timing of air shower and muon
While the muon and the air shower stem from the same cosmic ray, the signal arrival time at the detector and subsequently at the data acquisition unit (DAQ) differs. The air shower propagates through the atmosphere with a zenith angle θ. The position where the air shower axis intersects with the ice surface is called core position, with t=0. Here, the radio emission from the air shower is assumed to be a plane wave at the shower front, traveling the distance from the axis to the shallow antenna according to its arrival direction θ and the velocity of light in air, see <ref>. The muon travels along the arrival direction of the air shower and continues into the ice until it creates a shower. From there the radio emission propagates through the ice on a bent path to the antennas. Once received by a deep antenna, the signal travels along the cable to the DAQ at the surface, see <ref>.
The time difference as registered in the DAQ of the radio signal stemming directly from the air shower and the subsequent muon is the difference of t_μ→daq and t_cr→daq with the following definitions:
t_cr→daq = d_core→shallow_ant·cos(θ) ·1/c_air + t_cable_delay_shallow
t_μ→daq = d_core→vertex·1/c_vac + t_ice_propagation_deep_ant + t_cable_delay_deep.
The cable delay for a 100 m coaxial cable is ∼500 ns, with c_coax = 2/3 c. The cable from the shallow antennas is typically 10 m, which provides a lower bound on the time difference of ∼450 ns. The full distribution is shown in <ref>. The muon can travel up to 4 km in the ice, which increases the possible travel time to several microseconds; moreover, the propagation velocity in ice is slower than in air according to the refractive index. Any air shower veto would need to take this travel time into account, by either allowing for read-out with no trigger dead-time (i.e. double-buffering) or sufficiently long record lengths. Self-triggering on the air shower is challenging due to the potentially small signals and the resulting high trigger rate. A longer record length would allow a post-processing search, which simplifies background identification; however, its implementation in a low-power DAQ system is not straightforward.
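The two arrival-time expressions can be turned into a few lines of Python; the geometry values in the example at the bottom are invented, and the in-ice propagation time would in practice come from a ray tracer.

```python
import numpy as np

C_VAC = 0.2998            # m/ns
C_AIR = C_VAC / 1.0003    # m/ns, approximate speed of light in air
C_COAX = 2.0 / 3.0 * C_VAC

def t_cr_to_daq(d_core_to_shallow_ant, zenith_rad, cable_shallow=10.0):
    """Plane-wave air-shower signal from the core to the shallow antenna plus cable delay."""
    return d_core_to_shallow_ant * np.cos(zenith_rad) / C_AIR + cable_shallow / C_COAX

def t_mu_to_daq(d_core_to_vertex, t_ice_propagation, cable_deep=100.0):
    """Muon path to its vertex, in-ice radio propagation, and deep-cable delay."""
    return d_core_to_vertex / C_VAC + t_ice_propagation + cable_deep / C_COAX

# Invented example: vertical shower, shallow antenna 200 m from the core,
# muon shower 300 m along the track, 1500 ns of in-ice radio propagation.
dt = t_mu_to_daq(300.0, 1500.0) - t_cr_to_daq(200.0, np.radians(0.0))
print(f"muon signal arrives ~{dt:.0f} ns after the air-shower signal")
```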
§ CONSEQUENCES FOR EXPERIMENTS
After having studied dependencies of the muon flux predictions on hadronic interaction models, composition, and instrumental details to set the stage of the uncertainties in the flux predictions, we now discuss the experimental consequences and mitigation strategies.
We will investigate whether neutrino and muon flux predictions can be treated as independent (<ref>), whether neutrinos and muons can be distinguished based on their experimental signature in terms of expected rates, energy or zenith distribution (<ref>), and whether radio detectors can be used to measure the prompt muon flux at energies of 100 PeV and above (<ref>).
§.§ Possible connection between muon flux and neutrino flux
In <ref>, we established that the muon flux strongly depends on the cosmic ray composition at Earth, specifically the proton fraction, which is in turn related to the cosmic ray composition at the sources. The production of cosmogenic neutrinos is also influenced by the cosmic ray composition, as ultra-high energy cosmic rays interact with the cosmic microwave background and the extra-galactic background light <cit.>. Moreover, the proton component plays a significant role in the generation of neutrinos, since protons produce more neutrinos than heavier nuclei when propagating through the Universe <cit.>. This raises the question of whether background and signal can be treated as independent of each other.
In the following analysis, we assume different cosmic ray compositions consistent with the Auger published data from 2019 and evaluate the resulting neutrino and muon events for an in-ice radio neutrino detector. We combine the galactic component by Thoudam (denoted T) with three extra-galactic components by Heinze et al. <cit.>: the best fit (H_best_fit) with a maximal rigidity R = 1.58e9, a source evolution parameter m = 4.0, and spectral index γ = -0.7; a fit with a flat source evolution (H_flat_evol: R = 2.81e9; m = 0.0; γ = 0.75) and a fit with a high maximal rigidity (H_high_Rmax: R = 4.46e9; m = -5.6; γ = 1.6). As the fits are supposed to resemble the measured cosmic ray composition on Earth, we expect the resulting muon flux to be similar, but the large measurement uncertainties still leave room to accommodate different interpretations.
The neutrino flux is calculated using the method described in <cit.>, while the muon flux is calculated using the Matrix Cascade Equations, as detailed in <ref>. The result is shown in <ref>. While the different models alter the number of detected muons by only a factor of two, the number of detected neutrinos changes by about a factor of ten. Since the galactic component stays unchanged, this means that only a small influence of the extra-galactic component is visible in the muon flux. The fluxes differ most strongly at muon energies above 10^7 GeV, which is in agreement with the expected transition region from galactic to extra-galactic cosmic rays.
In other words, most muons at the relevant energies are generated by cosmic rays of 10^8 GeV to 10^9.5 GeV, while cosmogenic neutrinos relevant for radio neutrino detectors stem from cosmic rays above 10^10 GeV. This can also be seen in the fact that the change in muon number is significantly smaller than the change in neutrino number for the same models. This means that the muon background expectation can in general be treated independently from the neutrino production models. Of course, one should keep in mind that some model-dependent cases are imaginable where background and signal need to be considered together, in particular when including new physics.
§.§ Observational signatures
We now consider the practical implications for neutrino observations and analyses with a radio neutrino telescope.
The observational signature for in-ice radio neutrino detectors is an electric field whose amplitude is proportional to the shower energy. The signal strength depends on the fractional energy which is deposited in the shower, so the shower energy rather than the muon or neutrino energy is the relevant observational quantity. The shower energy (which requires a reconstructed vertex distance and viewing angle, see e.g. <cit.> for details), together with the arrival direction are likely the only two reconstructed quantities that can be used to distinguish signal from background, unless a veto from air shower tagging or multiple station/pulses coincidences is possible.
The detected arrival directions of muons and neutrinos differ only slightly, as shown in <ref>, because they are dominated by the detector geometry, which is also illustrated by the different shapes of the distributions for the shallow and deep components. This, however, prohibits a distinction between muons and neutrinos on an event-by-event basis and complicates a separation even when using the whole distributions at low statistics. The only unique signature of neutrinos is an arrival direction beyond 90° in zenith, since muons get absorbed in the Earth. However, this covers only a very small fraction of the expected events.
To summarize, <ref> combines the most conservative and optimistic models for muon and neutrino predictions for an RNO-G like detector in terms of shower energy. In the most conservative case, an RNO-G like detector will detect 0.07 muons a year (0.16 muons with a 1.5σ trigger), and in the most optimistic case, only 0.002 muons (0.01 muons with a 1.5σ trigger).
While the muon predictions thus differ in the extreme case by a factor of 𝒪(30), current neutrino flux predictions in contrast vary by more than a factor of 𝒪(150).
The combination of Sibyll-2.3c as hadronic interaction model and the Global Spline Fit (GSF) yields the highest muon rate; the theoretically driven model T+H is approximately a factor of two lower. QGSJet-II.04 combined with T+H and the proton-poor cosmic ray composition of H3a yields almost no muons. Recall that QGSJet-II.04 does not include charm, which results in an underestimation of the muon flux above PeV energies, where the prompt muon component dominates. The differences between Sibyll-2.3c combined with GSF and with T+H, respectively, are therefore likely a better estimate for the uncertainties of the muon event rate, reducing the uncertainty budget to a factor of 2, keeping in mind that Sibyll-2.3c still does not model all components of the muon production. The neutrino flux predictions are influenced by source and propagation modeling, as well as the cosmic ray composition, as indicated by the two predictions for the composition as reported by the Pierre Auger Collaboration and the Telescope Array (TA). Without additional experimental evidence, the entire neutrino parameter space has to be considered equally likely for discovery experiments.
The maxima of the muon distributions predicted in all considered scenarios are at around 10^7 GeV and fall steeply towards higher energies.
Above 10^8 GeV all shown neutrino predictions are higher than the muon expectation, which provides an avenue towards a possible analysis cut at high energies. A recent study of the discovery potential for the diffuse flux of ultra-high energy cosmic neutrinos also showed the usefulness of using the reconstructed shower energy as a discriminator for the atmospheric muon background <cit.>.
In addition, it should be noted that all showers with an energy below 10^6 GeV have their vertex position within a 20 m radius of the deep antenna. While the community is pushing towards lowering the energy threshold of detectors to gain an overlap with existing (optical) experiments, the current simulations make a number of approximations which are no longer completely valid in these cases, e.g. observing the far field of the radio emission, the separation of emission and propagation, and a constant index of refraction in the emission zone. The predictions of event rates at low energies therefore carry additional uncertainties. However, <ref> also shows that the background problem likely becomes larger at low energies, in particular since the muon flux rises much more steeply towards lower energies than the neutrino flux predictions. This is shown in a different way in <ref>, which illustrates potential minimum energy cuts that could be imposed to gain a cleaner neutrino sample. For instance, cutting at a shower energy of 10^7.5 GeV would retain 80% or more of all expected neutrinos, but improve the signal-to-background ratio by a factor of 5-10, depending on the model. This in turn, however, raises the question of how successful an extension of the detector sensitivities to lower neutrino energies can be, given the increasing muon background.
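The effect of such a minimum shower-energy cut can be estimated with a few lines once binned expectations are available; the spectra below are toy placeholders, chosen only to mimic a steeply falling muon background and a broad neutrino expectation, not the simulated rates of this work.

```python
import numpy as np

# Toy binned expectations per year in log10(shower energy / GeV); replace with
# the simulated muon and neutrino spectra.
log10_E = np.arange(5.5, 10.5, 0.5)
mu = 0.05 * 10.0 ** (-(log10_E - 5.5))                       # falling background
nu = 0.01 * np.exp(-0.5 * ((log10_E - 9.0) / 1.0) ** 2)      # broad signal bump

def cut_performance(log10_e_min):
    sel = log10_E >= log10_e_min
    kept_signal = nu[sel].sum() / nu.sum()
    s_over_b = nu[sel].sum() / mu[sel].sum() if mu[sel].sum() > 0 else np.inf
    return kept_signal, s_over_b

for e_min in (6.0, 7.5, 8.0):
    kept, sb = cut_performance(e_min)
    print(f"cut at 10^{e_min:.1f} GeV: keep {kept:.0%} of neutrinos, S/B = {sb:.2g}")
```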
§.§ Measuring the muon flux
Finally, one can invert the approach taken above and ask whether radio detectors can be used to measure the prompt muon flux above PeV energies. As shown in <ref>, across all energies and arrival directions, roughly 50% of the detected muons can be related to an air shower that is also detected by the same instrument, meaning that a clear identification of muon events will be possible. In the case of RNO-G, ∼0.3 tagged muon events are expected in 10 years at a trigger threshold of 2.5σ, based on Sibyll-2.3c and GSF. Hence, the array will be too small to make a probable detection of a muon over the planned operation time. Even with a trigger optimized to a 1.5σ noiseless-signal equivalent, the largest flux predictions (Sibyll-2.3c and GSF) still yield <1 tagged event in 10 years of operation. In addition, all muon signals will be very close to the threshold, and thus the yet unknown analysis efficiency, as well as unstudied properties of the near-surface ice, will have to be considered to solidify this number.
However, future radio detectors are already being planned, in particular IceCube-Gen2 <cit.>. While the precise expected event numbers will depend on the details of the detector, such as the exact hardware implementation of the trigger, the bandwidth of the system, and the analysis efficiencies, an estimate is already possible at this point. Using the detector configuration and trigger settings as foreseen for IceCube-Gen2 <cit.>, which includes a full simulation of the phased-array trigger system, we simulated the atmospheric muon rates and find that IceCube-Gen2 will observe ∼1.9 tagged muon events in 10 years for the currently highest flux expectations of Sibyll-2.3c and GSF (see <ref>) and ∼0.1 for QGSJet-II.04 and H3a. With an optimized trigger, one can envision improving on these numbers to reach an expectation significantly >0. This would allow the in-ice radio array of IceCube-Gen2 to provide the first measurements of the prompt muon flux at 10 PeV. We publish the expected muon background as a function of shower energy and incident direction for all cosmic ray composition and interaction models discussed in this paper as supplemental material (see <ref>), so that this forecast can be incorporated in future analyses such as <cit.>.
§ CONCLUSION AND OUTLOOK
We presented a study of the background of atmospheric muons at PeV energies and beyond for radio neutrino detectors in ice. The ultra-high energy muon flux is highly dependent on the hadronic interaction models and the proton fraction of the cosmic ray composition. Sibyll-2.3c currently provides the most complete hadronic interaction model for these high energies, since it considers the conventional component, the contribution from charmed hadrons, and muons from unflavored mesons, neglecting only the subdominant contributions from B-mesons and photo-conversion into muon pairs. The main uncertainties arise from the unknown charm cross-section, which is not accessible in current particle colliders. Using QGSJet-II.04, which is tuned to the conventional flux, results in muon rates that are a factor of 10 lower.
The cosmic ray composition influences the muon rate mostly through the proton fraction. Changing from a proton-rich to a proton-poor model yields a difference of a factor of two in the flux prediction.
The total observed flux is very sensitive to the instrument geometry and in particular to the trigger settings. An RNO-G-like detector will, at full completion, observe about 0.07 muons per year, using the Sibyll-2.3c prediction and a 2.5σ threshold. With a 1.5σ trigger this number would rise to 0.16 muons per year. These numbers should be compared to the very uncertain flux predictions for neutrinos, which range from 2.7 to 0.01 neutrinos per year in RNO-G.
Since both the neutrino and muon fluxes depend on the proton fraction of the cosmic ray composition, we studied whether they are correlated. We showed that the muon and neutrino flux predictions mostly decouple. Most ultra-high energy muons stem from cosmic rays at energies lower than those that cause the cosmogenic neutrino flux. One can therefore not reduce uncertainties through a combined treatment of signal and background.
A possible mitigation strategy is to detect cosmic rays and thereby identify muon events: if the parent air shower of the muon can be detected, it provides a signature unique to muon events. In a detector with shallow antennas, such air shower tagging is possible directly, using the same system. The efficiency of this mechanism depends on energy and arrival direction, with good efficiency for arrival directions more inclined than 55° in zenith and muon energies above 10^9 GeV. One could consider adding a more closely spaced array of shallow-only stations for RNO-G, which would likely improve the veto efficiency for less inclined showers. However, for high efficiency, such an in-fill array would have to have a spacing of 𝒪(100) m, making it too dense to be feasibly installed.
A discrimination between muon and neutrino signals based only on the arrival direction is unlikely, as the distributions mostly follow the detector acceptance. It is, however, likely that neutrinos and muons show a different energy spectrum. The muon flux will likely not be measurable above 10^9 GeV shower energy, already being smaller than most neutrino fluxes at 10^8 GeV shower energy. The obtainable shower-energy resolution of radio neutrino detectors is expected to be better than a factor of two <cit.>, which seems sufficient to assign a significant signalness probability to high-energy events. Combined with an air shower veto, which is most efficient at high energies, this should allow for a relatively background-free neutrino shower detection above 10^8 GeV.
An RNO-G-like detector is likely too small to make a first measurement of the prompt muon flux at energies above 10PeV. This could be done by using those muons that are identified as stemming from an air shower, but the expected number of these kinds of events is <1 in 10 years. However, a much larger detector like the planned radio array of IceCube-Gen2 has the potential for the first muon measurements at these energies, thereby providing additional handles on hadronic interaction models and cosmic ray composition.
§ ACKNOWLEDGMENTS
We would like to thank Pavlo Plotko for generating specific neutrino fluxes using the PriNCE code. We acknowledge fruitful discussions with our colleagues from the RNO-G and IceCube-Gen2 collaborations on the road to taking a fresh look at the muon background. We acknowledge funding from the German Research Foundation (NE 2031-2/1) and the Initiative and Networking Fund of the Helmholtz Association (W2/W3-115). Simulations were partly enabled by resources provided by the Swedish National Infrastructure for Computing (SNIC) at UPPMAX partially funded by the Swedish Research Council through grant agreement no. 2018-05973.
§ APPENDIX
For completeness we show the expected muon flux for the radio detector of IceCube-Gen2 as published in <cit.> in <ref>. Simulations were performed for a single station of the IceCube-Gen2 array at the South Pole, which were scaled to match the full array of 164 hybrid stations and 197 shallow-only stations. Triggers stem from a phased array of four antennas at a depth of 200 m, using a trigger rate of 100 Hz, and a two-out-of-four coincidence of downward-pointing shallow log-periodic dipole antennas, also with a trigger rate of 100 Hz.
|
http://arxiv.org/abs/2307.05348v1 | 20230711153633 | Scalar NSI: A unique tool for constraining absolute neutrino masses via $ν$-oscillations | [
"Abinash Medhi",
"Arnab Sarker",
"Moon Moon Devi"
] | hep-ph | [
"hep-ph",
"hep-ex"
] |
Department of Physics, Tezpur University, Napaam, Sonitpur, Assam-784028, India
In the standard interaction scenario, a direct measurement of absolute neutrino masses via neutrino oscillations is not feasible, as the oscillations depend only on the mass-squared differences. However, the presence of scalar non-standard interactions can introduce sub-dominant terms in the oscillation Hamiltonian that can directly affect the neutrino mass matrix and thereby making scalar NSI a unique tool for neutrino mass measurements. In this work, for the first time, we constrain the absolute masses of neutrinos by probing scalar NSI. We show that a bound on the lightest neutrino mass can be induced in the presence of scalar NSI at DUNE. We find that the lightest neutrino mass can be best constrained with η_ττ and η_μμ at 2σ C.L. for normal and inverted hierarchy respectively. This study suggests that scalar NSI can serve as an interesting avenue to constrain the absolute neutrino masses in long-baseline neutrino experiments via neutrino oscillations.
Scalar NSI: A unique tool for constraining absolute neutrino masses via ν–oscillations
Moon Moon Devi
August 12, 2023
======================================================================================
Introduction.– The discovery of neutrino (ν) oscillations essentially confirms that neutrinos are massive <cit.>, which provides convincing evidence in favour of physics beyond the standard model (BSM). Nonetheless, the absolute mass of neutrinos and the mass ordering are still not precisely known. Neutrino oscillations imply that neutrino flavors mix with each other, indicating non-zero masses of neutrinos. However, they cannot provide a direct measurement of the absolute ν-masses, as the oscillations depend only on the mass-squared differences. The recent limits on the mass-squared splittings can be found in ref. <cit.>. Neutrino masses are represented by m_i, with the ordering m_3>m_2>m_1 termed normal hierarchy (NH) and m_2>m_1>m_3 termed inverted hierarchy (IH).
Experiments around the world are in constant pursuit of a stringent bound on the ν-masses <cit.>. Different sources of data, such as cosmology <cit.>, ν-oscillations <cit.> and beta decay <cit.>, will contribute towards imposing constraints on the ν-mass <cit.>. A bound on the sum of ν-masses, ∑ m_i < 0.12 eV (at 95% C.L.), is established from cosmology <cit.>. However, due to various degeneracies and cosmological assumptions, a stringent limit on the absolute mass of neutrinos is not possible via the cosmological approach alone. This can be overcome using two different approaches, viz. direct and indirect. In the direct approach, the ν-mass is estimated by measuring the endpoint of the electron energy spectrum in the β-decay process. The recent bound on the ν-mass provided by the KATRIN experiment in this approach is m_ν ≤ 0.8 eV/c^2 at 90% C.L. <cit.>. In contrast, neutrinoless double-beta decay (0νββ) is used as an indirect approach to measure the ν-masses. The mass difference between the parent and daughter nuclei in the 0νββ process is used to constrain the ν-mass. The 0νββ decay is only possible if neutrinos are Majorana particles, and it can provide a bound on the ν-mass via m_ββ = ∑ U_ei^2 m_i; its current bound is m_ββ < 75 - 180 meV <cit.>.
The mass of neutrinos is five orders of magnitude smaller than any other Standard Model fermions. The determination of ν-mass and its underlying mechanism will shed light on various fundamental unknowns of particle physics.
The challenges in measuring the absolute ν-masses arise for various reasons. The weak interaction of neutrinos with matter makes it challenging to measure the ν-mass with high precision. Background noise is generally very high in such mass-measurement experiments. Also, the complexity of neutrino experiments leads to larger systematic errors. Despite these challenges, significant development can be seen in the ν-mass bounds. In recent years, there have been several important results, such as the first direct measurement of the ν-mass-squared difference and the most stringent limits on the 0νββ decay half-life <cit.>.
The neutrino oscillation experiments can probe the mass-squared differences of the neutrinos but are not sensitive to the absolute masses. However, the presence of scalar non-standard interactions (NSIs) can bring a direct dependence of absolute ν-masses via neutrino oscillations <cit.>. Neutrinos coupling via a scalar is interesting as it affects the ν-mass term which can be further probed to constrain the absolute masses of neutrinos. In our earlier studies <cit.>, we found that the scalar NSI (SNSI) can significantly impact the physics sensitivities of long baseline experiments. Also, it can introduce various degeneracies in the measurement of oscillation parameters. The presence of SNSI can provide a unique pathway to explore the absolute ν-masses. A stringent bound on the ν-mass will not only provide a better understanding of the properties of neutrinos but also help in explaining the physics BSM.
In this work, for the first time, we obtain a constraint on the lightest ν-mass by probing neutrino oscillation experiments. We have explored the effects of SNSI in a model-independent way to constrain the ν-mass for both hierarchies. As the SNSI contribution scales linearly with the matter density, long-baseline experiments are suitable candidates to explore its effects. The impact of SNSI is probed element-wise, one at a time, at the upcoming long-baseline experiment DUNE <cit.>. We point out that the SNSI elements can constrain the ν-mass, with a marginally better constraining capability for NH as compared to IH. We also note a slight variation in the constraining capability at DUNE with respect to different choices of SNSI elements. The findings of our analysis can help in estimating the absolute ν-masses in ν-oscillation experiments and have the potential to play a key role in our quest to understand the nature of neutrinos.
Scalar NSI Formalism.–
The interaction of neutrinos with matter occurs via the mediators W^±, Z^0 and it impacts the ν-oscillations <cit.>. This matter effects appear as an additional potential term in the Hamiltonian for ν-oscillations <cit.>. The scalar interactions of neutrinos is an interesting possibility as neutrinos can couple with a scalar (Higgs boson) with non-zero vacuum expectation to generate its mass. The Lagrangian for such non-standard coupling of ν's with the environmental fermions via a scalar can be written as <cit.>,
ℒ_SNSI = y_f Y_αβ [ν̄_α(p_3) ν_β(p_2)][f̄(p_1) f(p_4)]
where, y_f and Y_αβ represent the Yukawa coupling of the scalar mediator with environmental fermions f and neutrinos respectively.
The modification in ℒ leads to a correction in the Dirac equation as seen below,
ν̄_β[i∂_μγ^μ + (M_βα + ∑_f n_f y_f Y_αβ/m_ϕ^2)]ν_α = 0
where, m_ϕ is the mass of the scalar mediator. As shown in ref. <cit.>, the scalar NSI can appear as a correction to the ν-mass matrix. The effective Hamiltonian in presence of SNSI can be framed as,
ℋ ≃ E_ν + M_eff M_eff^†/(2E_ν) ± V_SI
where, M_eff=M+M_SNSI with M_SNSI ≡∑_fn_fy_fY/m_ϕ^2. The ν-mass matrix can be diagonalized by a modified mixing matrix as 𝒰^' = P𝒰Q^†, where the Majorana rephasing matrix Q can be absorbed by QD_νQ^†=D_ν=diag(m_1,m_2,m_3). The unphysical diagonal rephasing matrix, P can be rotated away into the SNSI contribution as follows,
M_eff=𝒰D_ν𝒰^†+P^†M_SNSIP=M+ δ M
We parametrize the SNSI contribution δ M in a model-independent way as follows <cit.>,
δ M≡√(|Δ m_31^2|)([ η_ee η_eμ η_eτ; η_e μ^* η_μμ η_μτ; η_e τ^* η_μτ^* η_ττ ])
where, the dimensionless parameter η_αβ quantifies the strength of SNSI and √(|Δ m_31^2|) is used as a scaling. In the standard case, a common m_i^2 term can be subtracted to obtain a dependence of neutrino oscillations on the mass-squared splittings i.e. Δ m_21^2 and Δ m_31^2. However, in the presence of SNSI, the cross terms i.e. M δ M^† and M^†δ M bring a direct dependence on the absolute masses of neutrinos as no common term can be subtracted from the mass matrix. Hence, oscillation probabilities will also directly depend on the absolute mass of neutrinos in presence of SNSI. In this work, we utilize this direct dependence on the ν-mass to put a bound on the absolute ν-mass. We test with a non-zero diagonal η_αβ one at a time. The M_eff for such a case when η_ee≠ 0 is shown as an example below, where the explicit dependence on the absolute ν-masses is illustrated.
M_eff = 𝒰 diag(m_1, m_2, m_3) 𝒰^† + √(|Δ m^2_31|) diag(η_ee, 0, 0)
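To illustrate how the absolute mass scale enters, the snippet below builds M_eff for NH with a single non-zero η_ee and shows that the mass-matrix term of the Hamiltonian changes with m_1. This is a bare-bones sketch using the benchmark parameters quoted below; it is not the GLoBES implementation used for the sensitivity results.

```python
import numpy as np

th12, th13, th23 = np.radians([34.51, 8.44, 47.0])
dcp = -np.pi / 2
dm21, dm31 = 7.56e-5, 2.55e-3                     # eV^2, normal hierarchy

def pmns(t12, t13, t23, d):
    s12, c12, s13, c13 = np.sin(t12), np.cos(t12), np.sin(t13), np.cos(t13)
    s23, c23 = np.sin(t23), np.cos(t23)
    R23 = np.array([[1, 0, 0], [0, c23, s23], [0, -s23, c23]], complex)
    U13 = np.array([[c13, 0, s13 * np.exp(-1j * d)], [0, 1, 0],
                    [-s13 * np.exp(1j * d), 0, c13]], complex)
    R12 = np.array([[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]], complex)
    return R23 @ U13 @ R12

def m_eff(m1, eta_ee):
    m_diag = np.diag([m1, np.sqrt(m1**2 + dm21), np.sqrt(m1**2 + dm31)])
    U = pmns(th12, th13, th23, dcp)
    dM = np.sqrt(abs(dm31)) * np.diag([eta_ee, 0.0, 0.0])
    return U @ m_diag @ U.conj().T + dM

# The cross terms in M_eff M_eff^dagger retain the absolute scale m1:
for m1 in (0.0, 0.01, 0.03):                       # eV
    MM = m_eff(m1, 0.2) @ m_eff(m1, 0.2).conj().T
    print(f"m1 = {m1:.2f} eV  ->  (M_eff M_eff^dag)_ee = {MM[0, 0].real:.2e} eV^2")
```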
Methodology.– In this work, we have investigated the possibility of constraining the absolute ν-masses by probing the effects of SNSI. From the existing cosmological bound ∑ m_i<0.12 eV (95% C.L.) <cit.>, we obtain the upper limits of the lightest ν-masses for both hierarchies.
The upper limits of the lightest neutrino masses m_1 and m_3 for NH and IH are 0.03 eV and 0.015 eV respectively, according to the cosmological bound, as listed in table <ref> of the supplementary material. The benchmark values of the neutrino oscillation parameters used throughout the analysis are <cit.>: θ_12=34.51^∘, θ_13=8.44^∘, θ_23=47^∘, δ_CP=-π/2, Δ m_21^2=7.56×10^-5 eV^2, Δ m_32^2=-2.497×10^-3 eV^2 and Δ m_31^2=2.55×10^-3 eV^2.
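For reference, the quoted upper limits follow directly from the mass-squared splittings and the cosmological sum bound; a short numerical check, assuming the best-fit splittings listed above, is given below.

```python
import numpy as np
from scipy.optimize import brentq

dm21 = 7.56e-5        # eV^2
dm31 = 2.55e-3        # eV^2 (NH)
dm32 = -2.497e-3      # eV^2 (IH)

def sum_nh(m1):       # m1 is the lightest mass for NH
    return m1 + np.sqrt(m1**2 + dm21) + np.sqrt(m1**2 + dm31)

def sum_ih(m3):       # m3 is the lightest mass for IH
    m2 = np.sqrt(m3**2 - dm32)          # m2^2 = m3^2 + |dm32^2|
    return np.sqrt(m2**2 - dm21) + m2 + m3

bound = 0.12          # eV, cosmological bound on the sum of masses (95% C.L.)
m1_max = brentq(lambda m: sum_nh(m) - bound, 0.0, 0.1)
m3_max = brentq(lambda m: sum_ih(m) - bound, 0.0, 0.1)
print(f"NH: m1 < {m1_max:.3f} eV,   IH: m3 < {m3_max:.3f} eV")   # ~0.03 and ~0.015
```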
The long-baseline neutrino experiments will help to explore new-physics effects and bring stringent constraints on the ν-mixing parameters. As SNSI scales linearly with matter density, long-baseline experiments are suitable to probe its effects. This will in turn help in constraining the absolute mass of neutrinos by putting bounds on the SNSI parameters. DUNE <cit.> is a next-generation long-baseline neutrino experiment with a 40 kton LArTPC-type detector at a baseline of 1300 km. We have considered a total runtime of 7 years (3.5ν+3.5ν̅). The main focus of the experiment is the precise measurement of the neutrino oscillation parameters, probing CP-violation and identifying the true mass hierarchy of neutrinos. This experiment is a good probe for exploring the effects of SNSI, as SNSI scales linearly with the environmental matter density. We have used the GLoBES simulation package <cit.> to calculate the numerical probabilities and to develop the statistical framework for exploring the physics sensitivities.
Impact on oscillation probabilities.–
In Fig. <ref>, we explore the impact of different allowed values of the lightest ν-mass on P_μ e for η_ee (left panel), η_μμ (middle panel) and η_ττ (right panel) at DUNE. For NH (top panel), we observe a marginal deviation as the scale of the absolute ν-mass shifts to a higher value. The effects on P_μ e for different values of m_1 differ between positive and negative η_ee. In the presence of positive (negative) η_μμ, we see a suppression (enhancement) of the probabilities for higher values of the ν-mass. We also note a significant suppression (enhancement) of P_μ e for positive (negative) η_ττ, as shown in the top-right panel. For IH (bottom panel), in the presence of a positive η_ee, we observe a suppression with increasing ν-mass m_3, while for a negative η_ee an enhancement is seen. For positive (negative) η_μμ, a marginal shift of the second peak towards lower (higher) energy is noticed for increasing ν-mass. We observe an enhancement (suppression) of P_μ e with the ν-mass for a positive (negative) η_ττ. Additionally, the impact of the NSI elements as a function of δ_CP-ν mass and energy-ν mass is shown in the supplementary material in Fig. <ref> and Fig. <ref> respectively. This provides a concrete motivation to further explore the SNSI effects in constraining the ν-mass.
Statistical χ^2-framework.–
We present our results for constraining the absolute ν-masses for both hierarchies in presence of SNSI taking DUNE as a case study. We place bounds on the ν-masses by taking one diagonal NSI element (η_αβ) at a time. We probe η_αβ on the appearance channel (P_μ e) for some chosen values of the lightest ν-mass. We particularly focus only on the SNSI parameters η_ee, η_μμ and η_ττ and examine DUNE's capability towards constraining the ν-masses. We further study the correlation of the lightest ν-mass with the SNSI parameters. We use the same values of neutrino oscillation parameters mentioned above. In order to constrain the absolute mass of neutrinos, we define a statistical χ^2 which is a measure of sensitivity as,
χ_pull^2 = min_ζ_j( min_η ∑_i∑_j [N_true^i,j - N_test^i,j]^2 / N_true^i,j + ∑_i=1^k ζ_i^2/σ_ζ_i^2 ),
where, N_true^i,j and N_test^i,j represent the number of true and test events in the {i,j}-th bin, respectively. Using the pull method described in <cit.>, we incorporate the systematic errors as additional parameters known as nuisance parameters (ζ_k), constrained by their systematic uncertainties (σ_ζ_k^2).
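A compact version of this pull-term χ² can be written as follows; the two nuisance parameters (an overall normalisation and a linear tilt) and their priors are hypothetical choices for illustration, not the systematics budget assumed for DUNE.

```python
import numpy as np
from scipy.optimize import minimize

def chi2_pull(N_true, N_test_fn, sigma=(0.02, 0.05)):
    """Poissonian chi^2 with nuisance parameters zeta added via the pull method."""
    def cost(zeta):
        N_test = N_test_fn(zeta)
        stat = np.sum((N_true - N_test) ** 2 / N_true)
        pulls = np.sum((np.asarray(zeta) / np.asarray(sigma)) ** 2)
        return stat + pulls
    res = minimize(cost, x0=np.zeros(len(sigma)), method="Nelder-Mead")
    return res.fun

# Toy binned spectra: the test prediction may float by a normalisation and a tilt.
E = np.linspace(0.5, 10.0, 20)
N_true = 100.0 * np.exp(-E / 3.0)
N_base = 95.0 * np.exp(-E / 3.1)
chi2 = chi2_pull(N_true, lambda z: N_base * (1 + z[0]) * (1 + z[1] * (E - 5) / 5))
print(f"chi^2 after minimising over the pulls: {chi2:.2f}")
```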
Constraining the neutrino masses.–
In Fig. <ref>, we demonstrate the capability of DUNE towards constraining the absolute
ν-masses for both hierarchies. In eq. <ref>, we see that the SNSI parameters contribute directly to the standard ν-mass matrix, which can be probed to place a bound on the ν-masses. We define the sensitivity as Δχ^2, which can help in constraining the lightest ν-masses,
Δχ^2 =min [χ^2 (η_αβ^test≠0,m_lightest^test)-
χ^2(η_αβ^true≠0,m_lightest^true≠0) ].
We vary the test value of the lightest ν-mass in the allowed range, while fixing the true ν-mass at m_1^true= 0.02 eV and m_3^true= 0.01 eV for NH and IH respectively. We have marginalized over the oscillation parameters θ_23 and δ_CP in the fit data. The sensitivity √(Δχ^2) is plotted for an allowed range of the lightest ν-mass, where the solid (dashed) lines represent positive (negative) η_αβ. We focus on the diagonal parameters η_ee (left-panel), η_μμ (middle-panel) and η_ττ (right-panel) for NH (top-panel) and IH (bottom-panel) respectively. We have varied m_1 (m_3) for NH (IH) in the allowed region. For NH, the presence of positive (negative) η_ee indicates an increase (decrease) in the constraining capability. For η_μμ, we see a significant enhancement in the sensitivity with an increase in negative values. A similar trend is also observed for higher negative values of η_ττ. The effect of positive η_μμ and η_ττ is relatively nominal.
For IH (bottom-panel), we observe that the constraining capability gets improved for negative η_ee. The positive η_ee shows minimal changes in the sensitivities. For η_μμ, we see an enhancement for both the positive and negative values. In presence of η_ττ, a significant improvement can be seen with increase in the negative strength of the parameter though only nominal changes are seen for positive values. In table <ref>, we have summarized the constraints on the lightest ν-mass at 3σ CL for both hierarchies. We note that the mass of the lightest neutrino is better constrained in the presence of η_ττ and η_μμ for NH and IH respectively.
Correlation in (η_αβ-m) parameter space.– In Fig. <ref>, we show the allowed region for 1σ, 2σ and 3σ CL in η_αβ-m planes for DUNE. We set the true values of (η_αβ, m) at (0.2, 0.02 eV) and (0.2, 0.015 eV) for NH and IH respectively. The test values of η_αβ are varied in the range [0.1,0.3] and the lightest ν-mass is varied in the allowed range for NH and IH. We focus on the diagonal elements - η_ee (left-panel), η_μμ (middle-panel) and η_ττ (right-panel) where the top (bottom) panel represents the NH (IH) case. The blue, red and black solid lines symbolize the 1σ, 2σ and 3σ confidence regions respectively. The true point is shown by a black star. For NH, the lightest ν-mass can be constrained as m_1∈ [0.009,0.03] eV at 1σ CL for η_ee. The element η_μμ can constrain the ν-mass as m_1 ∈ [0.016,0.024] eV at 1σ CL. For η_ττ, we see a similar constrain on m_1 ∈ [0.017,0.023] eV at 1σ CL. The allowed region for all the diagonal elements in the IH (bottom-panel) is larger as compared to the NH (top-panel). The constraint on the lightest mass in presence of η_ee worsens for IH. For η_μμ and η_ττ, the allowed region at 1σ CL for the lightest mass m_3 is ∼ [0.007,0.013] eV.
Concluding Remarks.–
The neutrino oscillation parameters are being measured with unprecedented precision in current experiments. However, sub-dominant effects like scalar NSI can have a significant impact on the detector sensitivities. Scalar NSI offers an intriguing way to probe the absolute ν-masses via neutrino oscillations. The central idea of this work is to showcase the capability of constraining the lightest ν-mass using neutrino oscillation data in the presence of scalar NSI. In this letter, we have constrained the lightest ν-mass while accounting for the cosmological bound on the sum of ν-masses. We find that the scalar NSI parameters η_αβ can provide a significant bound on the absolute mass of neutrinos. In particular, η_ττ and η_μμ constrain the lightest neutrino mass marginally better than η_ee for both hierarchies. Constraining the ν-mass can be crucial to shed light on the ν-mass generation mechanisms. It may provide a major breakthrough in our understanding of the universe and its underlying physics.
Acknowledgments.–
The authors acknowledge the Science and Engineering Research Board (SERB), DST for the grant CRG/2021/002961. The authors thank Debajyoti Dutta for his help and suggestions with the GLoBES framework. AS acknowledges the fellowship received from CSIR-HRDG (09/0796(12409)/2021-EMR-I). AM acknowledges the support of the Research and Innovation Grant 2021 (DoRD/RIG/10-73/1592-A) funded by Tezpur University.
§ SUPPLEMENTARY MATERIALS
Allowed mass range for NH and IH.– Considering the best-fit values of mass-squared splittings from table <ref>, we obtain the upper limit of the lightest ν-mass by taking the cosmological bound on the sum of ν-masses. In table <ref>, we show the limit on the ν-masses m_1 and m_3 while following the cosmological bound ∑ m_i<0.12eV. We observe that m_1 and m_3 should be less than 0.03 eV and 0.015 eV for NH and IH respectively. The values of mass-squared splitting Δ m_2l^2 and Δ m_3l^2 for normal (l=1) and inverted hierarchy (l=2) with 3σ range are shown in table <ref>.
Effect of SNSI with varying δ_CP and ν-mass.–
We study the variation of ν-oscillation probabilities as a function of δ_CP and the lightest ν-mass for both hierarchies. To quantify the impact of SNSI elements on the oscillation probabilities, we define a parameter Δ P_μ e = P_μ e^NSI - P_μ e^SI with P_μ e^NSI and P_μ e^SI as the appearance probabilities with and without NSI respectively. In Fig. <ref>, we have explored the effect of SNSI for varying δ_CP and ν-mass within the allowed range. The values of the oscillation parameters used are listed in the methodology of the draft. Here, we have fixed the neutrino energy at E=2.5 GeV and SNSI parameter at η_αβ=0.2. We then vary the mass in the allowed range and δ_CP ∈ [-π,π]. We have plotted Δ P_μ e for varying δ_CP and ν-mass. We take the SNSI parameter η_ee, η_μμ and η_ττ in the left, middle and right panel respectively. We show the effects for true NH (IH) in the top (bottom) panel. We observe that,
* For the NH case, the element η_ee enhances the probabilities for all values of δ_CP. The enhancement is significant in the negative δ_CP half-plane. The elements η_μμ and η_ττ suppress the probabilities over the complete δ_CP space, with a stronger suppression for η_ττ than for η_μμ.
* For the IH case, the elements η_ee and η_μμ suppress the probabilities over the complete δ_CP space. However, the suppression is stronger for η_ee. The element η_ττ enhances the probabilities, and the enhancement is larger for δ_CP ∈ [-60^∘, 40^∘].
Effect of SNSI with varying ν-energy and ν-mass.–
In Fig. <ref>, we have explored the range of energy where the effect of SNSI is maximum for various choices of lightest ν-mass for both hierarchies within the allowed upper range. To quantify the effects, we use the quantity Δ P_μ e as defined earlier. Here, we have fixed the baseline at L=1300 km and NSI parameter at η_αβ=0.2. We then vary the value of the lightest ν-mass in the allowed range and the neutrino energy in 0.5-10 GeV. We have plotted Δ P_μ e for varying neutrino energies and masses. We consider the SNSI parameters η_ee, η_μμ and η_ττ in the left, middle and right panel respectively. The top (bottom) panel represents the normal (inverted) hierarchy case.
* In NH case (top–panel), the presence of scalar NSI elements mostly enhances the probabilities. With element η_ee (top–left), we observe a significant enhancement, roughly around an energy of 1.5 GeV with ν-mass greater than 0.02 eV. For other elements, suppression of probabilities can be seen around this energy value.
* For IH case (bottom panel), the maximal enhancement in probabilities can be seen for η_μμ (bottom-middle) and η_ττ (bottom-right). Whereas, the presence of η_ee (bottom-left) shows a significant suppression of P_μ e at lower energies.
|
http://arxiv.org/abs/2307.04014v2 | 20230708164616 | Novel Pipeline for Diagnosing Acute Lymphoblastic Leukemia Sensitive to Related Biomarkers | [
"Amirhossein Askari-Farsangi",
"Ali Sharifi-Zarchi",
"Mohammad Hossein Rohban"
] | cs.CV | [
"cs.CV"
] |
A. Askari Farsangi et al.
Sharif University of Technology, Iran
[email protected]
{asharifi,rohban}@sharif.edu
Novel Pipeline for Diagnosing Acute Lymphoblastic Leukemia Sensitive to Related Biomarkers
Amirhossein Askari Farsangi1 Ali Sharifi Zarchi1 Mohammad Hossein Rohban1
August 12, 2023
==========================================================================================
Acute Lymphoblastic Leukemia (ALL) is one of the most common types of childhood blood cancer. The quick start of the treatment process is critical to saving the patient's life, and for this reason, early diagnosis of this disease is essential. Examining the blood smear images of these patients is one of the methods used by expert doctors to diagnose this disease.
Deep learning-based methods have numerous applications in medical fields, as they have significantly advanced in recent years. ALL diagnosis is not an exception in this field, and several machine learning-based methods for this problem have been proposed.
Previous methods reported high diagnostic accuracy, but our work shows that accuracy alone is not sufficient, as models can take shortcuts and fail to make meaningful decisions. This issue arises due to the small size of medical training datasets. To address this, we constrained our model to follow a pipeline inspired by experts' work. We also demonstrated that, since a judgement based on only one image is insufficient, redefining the problem as a multiple-instance learning problem is necessary for achieving a practical result. Our model is the first to provide a solution to this problem in a multiple-instance learning setup.
We introduced a novel pipeline for diagnosing ALL that approximates the process used by hematologists, is sensitive to disease biomarkers, and achieves an accuracy of 96.15%, an F1-score of 94.24%, a sensitivity of 97.56%, and a specificity of 90.91% on ALL IDB 1. Our method was further evaluated on an out-of-distribution dataset, which posed a challenging test and had acceptable performance. Notably, our model was trained on a relatively small dataset, highlighting the potential for our approach to be applied to other medical datasets with limited data availability.
§ INTRODUCTION
Leukemia is a type of cancer that affects the body's blood-forming tissues, including the bone marrow and lymphatic system <cit.>. Based on whether the leukemia is acute or chronic and whether it is lymphoid or myeloid, four main types of leukemia can be considered: ALL, AML, CLL, and CML <cit.>.
Diagnosing leukemia through the examination of blood smear images is a common method used by hematologists <cit.>. While additional tests may be necessary for a more complete understanding of the patient's condition, experts are able to determine the presence and type of leukemia based on the number and shape of different types of white blood cells <cit.>. This suggests that deep learning has great potential for developing computer models for diagnosing leukemia from related microscopic images.
Among all the four types of leukemia, acute lymphoblastic leukemia (ALL) has special diagnostic significance because an early start to treatment can save a patient’s life. This significance grows when we consider that 75 percent of cases involve children under the age of 14 <cit.>.
The detection of blast cells in the blood and bone marrow makes ALL a suitable target for diagnosis using microscopic images. Therefore, studying the diagnosis of ALL using deep learning models is of particular importance in order to improve the accuracy and speed of diagnosis for this common and serious disease.
The performance of deep learning models is highly dependent on the size of the dataset used for training. In medical applications, obtaining large datasets can be a challenge, and many datasets are small in size. A popular ALL dataset is the ALL IDB provided by Scotti et al. <cit.>, which contains images of both ALL and normal patients. It is quite small in size. Another dataset, the Raabin dataset <cit.>, was recently introduced and contains a variety of data classes, but it has not yet been much explored.
Several classifiers have been proposed for diagnosing leukemia from related microscopic images <cit.>. These classifiers can be categorized based on their target classes. Some classifiers have been designed to classify more than two classes, and they often incorporate the ALL IDB dataset as part of their training due to its importance for ALL diagnosis and its availability <cit.>. However, it's important to consider the issue of dataset bias <cit.> when combining datasets, but some methods may not have paid enough attention to this point. In contrast, other methods have focused on the two-class problem, specifically classifying ALL from the Normal class, and the ALL IDB dataset has been widely used for this purpose <cit.>.
In the following, we focus only on the two-class classifiers that distinguish ALL from normal. We can categorize these classifiers based on their input type. Some of these classifiers perform on single-cell images such as those in the ALL IDB 2 dataset, which means they can only perform on processed data in the form of cropped images <cit.>.
On the other hand, other classifiers accept images similar to what can be seen under microscopes, such as those in ALL IDB 1 <cit.>.
To become practical, classifiers of the first type face a limitation: they judge only single-cell images, while what is actually available are microscopic images containing a large number of white blood cells. Therefore, not only are object detection methods required in these cases, but it is also necessary to aggregate the results over all cells in order to make a judgment about the patient. The second category of models is in a better position: they do not need object detection methods, but the second problem still exists for them in some form. There is a possibility that in a patient with ALL, there are no signs of the disease in a single microscopic image and that different parts of the patient's blood sample must be examined to make an accurate diagnosis. This is exactly what expert doctors do in this situation. In other words, labels are weak in this case, and a multiple-instance learning setup is needed. Therefore, a third category of models that handle multiple images of the same patient should be considered, and our model belongs to this category of models.
Although there are examples of multiple-instance learning methods for other problems related to diagnosis from blood microscopic images, only a few such methods exist in the literature for diagnosing leukemia. Our work aims to fill this gap and to explore the potential of this approach for improving leukemia diagnosis <cit.>.
Finally, one of our model's most important strengths is its reliability and its sensitivity to the relevant biomarkers, properties that we achieved by applying a special training method. We demonstrated that removing blast cells from the microscopic images of patients with ALL made it difficult for the model to diagnose the patient as having ALL, whereas removing normal cells left the model able to accurately diagnose the patient as having ALL.
Since our model evaluates patients based on multiple images, testing this property required a completely independent dataset, as ALL IDB's samples are single images. We utilized a subset of Raabin's dataset as an out-of-distribution test set. In general, testing deep learning models on out-of-distribution test sets is a challenging task, and most methods tend to fail during these tests <cit.>. However, our model achieved acceptable accuracy in this challenging setting.
§ TRAINING CONSIDERATIONS FOR SMALL MEDICAL DATASETS
In medical applications of machine learning, in addition to common evaluation metrics such as accuracy and precision, a potential qualitative criterion for assessing model reliability is the similarity between the model's decision-making process and that of a human expert. Evaluating models based on this criterion can provide insight into their performance and trustworthiness.
As an illustration of this approach, we conducted a reimplementation of the method proposed by Ahmed et al. <cit.> using their dataset and visualized the model's attention map with the GradCAM algorithm <cit.>. Through our analysis, we identified that the model makes decisions based on unexpected patterns in the input image, which we refer to as "shortcuts," and that these shortcuts do not have any significant medical meaning. This highlights a potential flaw in these models, specifically, the issue of overfitting.
Overfitting can occur for a variety of reasons, including model complexity and dataset quality. Complex models are required for extracting high-level features from image data for proper processing; however, when the dataset is small in comparison to the model's complexity, the model's variance increases, resulting in overfitting. Another factor that can contribute to overfitting is when the training data is not clean, causing the model to learn shortcuts. To avoid this issue, it is crucial to use a clean dataset that minimizes the chances of spurious correlations. To address the potential causes of overfitting, two potential solutions are to increase the dataset size through augmentation methods and to manually clean the dataset.
It should be noted that we make the implicit assumption that we aim to train a classifier that behaves like a human expert. This means that, in the expert's opinion, an applied augmentation should not change the label of the image. In our case, cell morphology is an important criterion for classifying a specific cell as normal or diseased. As a result, we cannot use augmentations like shearing that change the cell's morphology, and we have to rely on augmentations like rotation and translation instead. However, the majority of the proposed methods have not paid enough attention to this point.
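To make this constraint concrete, the following minimal sketch (illustrative only, not our exact configuration) builds a torchvision augmentation pipeline that uses only rotations, translations, and flips, and deliberately excludes shearing and other morphology-distorting transforms.

```python
import torchvision.transforms as T

# Augmentations that preserve cell morphology: rotations, translations, flips.
# Shearing and elastic deformations are deliberately excluded, because they can
# alter the shape cues a hematologist uses to recognise a blast cell.
morphology_safe_augmentation = T.Compose([
    T.RandomHorizontalFlip(p=0.5),
    T.RandomVerticalFlip(p=0.5),
    T.RandomAffine(degrees=180, translate=(0.1, 0.1)),  # rotate + translate only, no shear
    T.ToTensor(),
])
```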
Even with these considerations, we found that the shortcut issue persisted and that another solution was needed. Two of these visualizations are shown in Fig. <ref>. In the following, we present our method for overcoming this problem.
§ METHOD
§.§ Pipeline
We present a pipeline that performs the decision-making process step by step. This approach allows us to observe how the model produces its final result and to design each step based on the actual process used by hematologists.
Since the presence of blast cells plays a decisive role in the existence of this disease, specialists look for these cells among white blood cells when examining the patient's microscopic image and making a decision based on that.
The first step in the computer simulation of this process should be a cell detector that detects white blood cells in the input image. The second step is to analyze each cell image using a criterion that is sensitive to whether the cell is a blast or not. In the third step, we must summarize the previous step's results and describe the patient's condition using a set of parameters. Finally, based on the generated report, we must determine the presence of the disease. Fig. <ref> depicts the overall layout of this pipeline. The following goes over each step in detail.
§.§.§ Object Detection
For the white blood cell detector, we used a pre-trained Faster RCNN network with a ResNet50 backbone <cit.>. We fine-tuned it using the ALL IDB 1 dataset, which was manually annotated.
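A minimal sketch of how such a detector can be assembled with torchvision is shown below; the two-class setup (background vs. white blood cell) and the COCO-pretrained weights are assumptions for illustration rather than our exact configuration.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_wbc_detector(num_classes: int = 2):  # background + white blood cell
    # Faster R-CNN with a ResNet50-FPN backbone, pre-trained on COCO.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
    # Replace the box-predictor head so that it outputs the desired number of classes,
    # then fine-tune on the manually annotated ALL IDB 1 images.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model
```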
§.§.§ Feature Extraction
To extract informative features from images, large networks with numerous training parameters are often required. However, training these networks from scratch on small datasets often leads to overfitting. A common solution is to use pre-trained networks on ImageNet, which are global feature extractors that produce rich feature vectors for each input image. In this study, we utilized a pre-trained AlexNet network with fixed weights as the global feature extractor <cit.>. To customize these feature vectors for our problem, we added a trainable 256-node, fully connected layer that takes these feature vectors as input.
In our study, we refrained from fine-tuning the global feature extractor since the computationally intensive process of training the LSTM-based architectures does not permit simultaneous optimization of all weights.
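A possible implementation of this feature extractor is sketched below; taking the 4096-dimensional penultimate layer of AlexNet as the fixed global feature is an assumption made for illustration.

```python
import torch
import torch.nn as nn
import torchvision

class CellFeatureExtractor(nn.Module):
    """Frozen AlexNet backbone followed by a trainable 256-node projection."""

    def __init__(self, out_dim: int = 256):
        super().__init__()
        alexnet = torchvision.models.alexnet(pretrained=True)
        # Everything up to the 4096-d penultimate layer acts as the fixed global extractor.
        self.backbone = nn.Sequential(
            alexnet.features, alexnet.avgpool, nn.Flatten(),
            *list(alexnet.classifier.children())[:-1],
        )
        for p in self.backbone.parameters():
            p.requires_grad = False  # weights stay fixed
        self.projection = nn.Linear(4096, out_dim)  # the only trainable part

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            feats = self.backbone(x)
        return self.projection(feats)
```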
§.§.§ Profiling
There are various ways to aggregate the extracted features from the cell images of a patient. In this work, we used an LSTM-based architecture for this purpose. Since the LSTM network accepts input series of any length, the model has no problem with the different numbers of white blood cell images from different patients. In addition, it allows us to analyze a set of microscopic images for each patient, which increases the model's reliability and accuracy. This is reasonable because hematologists do not just look at one part of a patient's blood sample; instead, they move the blood slide under the microscope and make decisions based on what they see at several points. We used an LSTM network with 256 internal nodes, which produces a vector of length 64 as the patient's feature vector.
§.§.§ Final Classification
For the final classification, we used a single fully connected layer with two nodes to classify the patient's feature vector.
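The profiling and classification steps can be sketched as follows; projecting the final LSTM hidden state down to the 64-dimensional patient vector with a linear layer is an implementation assumption.

```python
import torch
import torch.nn as nn

class PatientProfiler(nn.Module):
    """Aggregates a series of 256-d cell features into a 64-d patient vector and classifies it."""

    def __init__(self, feat_dim: int = 256, hidden: int = 256, profile_dim: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=feat_dim, hidden_size=hidden, batch_first=True)
        self.to_profile = nn.Linear(hidden, profile_dim)   # 64-d patient feature vector
        self.classifier = nn.Linear(profile_dim, 2)        # single fully connected layer

    def forward(self, cell_features: torch.Tensor) -> torch.Tensor:
        # cell_features: (batch, series_length, feat_dim); the series length may vary per batch
        _, (h_n, _) = self.lstm(cell_features)
        profile = self.to_profile(h_n[-1])
        return self.classifier(profile)
```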
§.§ Dataset
Our research utilized two different datasets: the ALL IDB and the Raabin dataset. The ALL IDB dataset is the most widely used dataset in the literature, consisting of two subsets. The first subset, ALL IDB 1, contains 108 normal or diseased blood microscopic images, with blast cell centers annotated in ALL cases. The second subset, ALL IDB 2, includes 260 images of single white blood cells labeled as normal or cancerous. The Raabin dataset, on the other hand, comprises 938 single-cell images of normal white blood cells.
§.§.§ Cell Detection Dataset
To train the faster RCNN network, a dataset with blood microscopic images containing bounding boxes around white blood cells is required. Although ALL IDB 1 is a suitable option, it only provides annotations for blast cells, not normal ones. Therefore, we manually annotated the normal cells in these images to utilize this dataset for our purpose.
§.§.§ Generated Dataset for Training the LSTM
Inputs in the form of series of the same length are required for training LSTM networks. Furthermore, because LSTM networks are data-hungry models, a large dataset of such image series is required for successful convergence. Since no such dataset exists in our case, we decided to create one. We generate the dataset based on the following assumption:
Assumption: A series of white blood cell images belongs to the ALL class if and only if it contains at least one blast cell.
Based on this assumption, we generate training image series of a fixed length by randomly selecting appropriate cell images from ALL IDB 2 and single-cell images from the Raabin dataset. We vary the number of selected cells because we expect the number of cells in each real input image to vary. To make all sequences the same length, we pad the set of selected images with the required number of empty images.
Because LSTM networks are sensitive to the order of their inputs, while in our case the order of the images is irrelevant, we shuffle each image series to reduce the LSTM's sensitivity to the order.
Fig. <ref> depicts an example of one of these image series.
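The generation procedure can be sketched as follows; the pool names, image shape, and sampling proportions are illustrative placeholders rather than our exact settings.

```python
import random
import numpy as np

def make_series(blast_pool, normal_pool, series_len=15, img_shape=(3, 224, 224)):
    """Generate one fixed-length series; the label is ALL iff the series contains a blast cell."""
    label_all = random.random() < 0.5
    n_cells = random.randint(1, series_len)                    # number of visible cells varies
    n_blast = random.randint(1, n_cells) if label_all else 0
    cells = random.sample(blast_pool, n_blast) + random.sample(normal_pool, n_cells - n_blast)
    # Pad with empty (all-zero) images so that every series has the same length.
    cells += [np.zeros(img_shape, dtype=np.float32)] * (series_len - n_cells)
    random.shuffle(cells)   # the order is irrelevant, so reduce the LSTM's order sensitivity
    return np.stack(cells), int(label_all)
```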
§.§ Training
The proposed pipeline was trained in three stages. In the first stage, the Faster RCNN was trained on its dataset to become a white blood cell detector. The LSTM and the classifier were trained in the two remaining stages, which are explained below.
§.§.§ Making the model sensitive to blast cells
To train the network to pay attention to blast inputs, we first trained the LSTM and the classifier on image series of length one. Because of our dataset generation assumption, the classifier output should be the ALL class if and only if the input image is a blast cell. If this training stage is successful, we expect the LSTM and the classifier to extract, from the AlexNet features, information that correlates with whether the input cell is a blast or not.
§.§.§ Training for analyzing image series
In the final training stage, we optimize our network on the generated image series of length 15. Our goal in this stage is to teach the model to apply what it learned about blast cells in the previous stage to the analysis of a whole series of images.
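The two training stages can be sketched as below, building on the CellFeatureExtractor and PatientProfiler sketches above; the loader names, learning rate, and epoch counts are illustrative assumptions.

```python
import torch
import torch.nn as nn

def train_two_stage(profiler, extractor, stage1_loader, stage2_loader, epochs=(10, 10)):
    """Stage 1: series of length 1 (blast vs. normal); stage 2: series of length 15."""
    params = list(profiler.parameters()) + list(extractor.projection.parameters())
    opt = torch.optim.Adam(params, lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()
    for loader, n_epochs in zip((stage1_loader, stage2_loader), epochs):
        for _ in range(n_epochs):
            for series, labels in loader:                      # series: (B, L, C, H, W)
                b, l = series.shape[:2]
                feats = extractor(series.flatten(0, 1)).view(b, l, -1)
                loss = loss_fn(profiler(feats), labels)
                opt.zero_grad(); loss.backward(); opt.step()
    return profiler
```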
§ RESULTS
§.§ Classification Performance
To ensure a valid evaluation of our model on the ALL IDB 1 dataset, whose images contain the cells that were cropped to form the ALL IDB 2 dataset used in training, we first removed the corresponding cells from the evaluation set. Fig. <ref> provides a sample of this modified dataset. Our model achieved an accuracy and an F1-score of 96.15% and 94.24%, respectively. Table <ref> shows the accuracy achieved by our method in comparison to other methods.
Next, we evaluated our model on an out-of-distribution test set consisting of multiple images per patient, which requires classification in a multiple-instance setup. For this purpose, we utilized a subset of the Raabin Leukemia dataset, which contains numerous microscopic images for each patient. Since the number of patients in this dataset is small, we split the images of each patient into groups and treated each group as a separate patient. We did this in such a way that the total number of white blood cells is the same for all of these pseudo-patients, and we call this constant the partition size.
On a partition size of 50, our model attained an accuracy of 72.88% and an F1-score of 71.01%. The average accuracy and F1-score across partition sizes ranging from 20 to 100 were found to be 71.78% and 70.35%, respectively. It is important to note that testing on an out-of-distribution test set is a challenging task, and it is common for model performance to decrease significantly under such conditions.
We also report the outcome of the cell detector here: the Faster RCNN was trained on 85 percent of the ALL IDB 1 images and achieved a mean average precision (mAP) of 96.03% on the remaining 15 percent.
§.§ Sensitivity to Blast Cells
Our special training method has led us to expect that our model will be sensitive to ALL biomarkers, particularly blast cells. To put this hypothesis to the test, we designed a test that involved removing blast cells from the image series of ALL patients to see if it would make it difficult for our model to identify this new sample as belonging to the ALL class. We also expected that removing normal cells from these images would improve the model's performance. To perform this test, we required the coordinates of blast and normal cells to be annotated in each dataset. While these annotations were available for the ALL IDB 1 dataset, the Raabin leukemia dataset did not have such annotations. Therefore, we trained a Faster RCNN on ALL IDB 1 data to detect blast and normal cells. This object detector achieved a mean average precision of 93.41% on the test set split from ALL IDB 1.
Our test on the ALL IDB 1 dataset showed that the model's recall under blast removal, normal removal, and no attack conditions was 43.90%, 97.56%, and 97.56%, respectively. The observed decrease of 53.66% in recall under blast removal indicates that our hypothesis was correct and that the model was indeed sensitive to blast cells.
For this test on the Raabin dataset, we evaluated the model's performance on groups of images of varying sizes. For each patient, we selected as many images as the group size and used object detection to form three different cell series: one with only blast cells, one with only normal cells, and one with all cells. It should be noted that the length of these three cell series for each patient is not equal, and the sum of the first two is equal to the third.
We plotted the recall of the model under blast removal, normal cell removal, and no attack conditions. Our results showed that removing blast cells reduced the model's recall by an average of 18.84% across all group sizes. Conversely, removing normal cells had a large effect on model recall and, on average, increased it by 18.69%. This finding validates the proposed hypothesis even on an out-of-distribution test set and demonstrates the significance of blast cells in our model's decision-making process. Fig. <ref> provides a visual representation of our findings.
It should be noted that while we used the results of the trained detector on the ALL IDB 1 dataset to evaluate the model's performance on the Raabin leukemia dataset, there is no ground truth for the Raabin dataset. Therefore, an error in object detection may be present in the obtained results.
§ ABLATION STUDY
§.§ Cell numbers effect
Based on our understanding that blast cells are not uniformly distributed throughout a blood smear, we hypothesize that increasing the number of white blood cells per patient will improve our model's performance in detecting ALL. Fig. <ref> shows the accuracy plot for varying the number of images per patient, and as expected, we observe a positive correlation between the number of images and model performance. In all subsequent reports, the reported metrics are the averages of the results obtained for group sizes ranging from 20 to 100, since evaluations were performed on different group sizes.
§.§ Different feature extractors
Our method can employ several pre-trained feature extractors, including AlexNet, InceptionV3 <cit.>, ResNet50 <cit.>, VGG16 <cit.>, and ViT-base-patch-16 <cit.>. We conducted a comparison of these models, and the results are presented in Table <ref>. From this table, it is evident that the AlexNet feature extractor yields the best performance. We will conduct further tests using this feature extractor in the subsequent sections.
§.§ LSTM effect
The main reason for using an LSTM in our architecture was to give the model the ability to aggregate the results extracted from the individual cells. One might suspect that, since in the first training stage the model learned to identify blast cells from single-cell images, in the second stage it merely generalized this result by counting. In other words, the LSTM layer might perform no more than a linear operation. How to test this hypothesis rigorously is not obvious, but we can still define some tests.
For each group of patient images, we fed the extracted cells individually to the model for labeling as either blast or normal cells. As a result, each patient in the test set was assigned two values, representing the number of normal and blast cells, respectively. Using this data, we trained a perceptron to classify each patient. Since the training and testing sets were identical for this perceptron, the resulting accuracy can be considered ideal. The average accuracy of the ideal perceptron was found to be 8.51 percent lower than the average accuracy of the LSTM model. Therefore, it can be concluded that the LSTM layer is capable of more than just linear operations.
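This comparison can be reproduced with a few lines of scikit-learn; the variable names below are illustrative placeholders for the per-patient cell counts and labels.

```python
from sklearn.linear_model import Perceptron

# counts[i] = (number of predicted normal cells, number of predicted blast cells) for patient i,
# labels[i] in {0, 1}; trained and scored on the same set, so the accuracy is an "ideal" ceiling.
clf = Perceptron().fit(counts, labels)
ideal_accuracy = clf.score(counts, labels)
```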
§.§ Pre-training effect
It is necessary to investigate the potential benefits of pre-training, the first step of our training process. We hypothesize that pre-training can accelerate model convergence by providing a controlled environment in which the model can recognize the important biomarker for ALL, which is blast cells, and learn to differentiate them from normal cells. To evaluate this hypothesis, we trained two models, one with pre-training and one without. All other conditions were kept identical. Our results indicate that the model without pre-training has an average accuracy that is 2.05 percent lower than that of the pre-trained model. Therefore, pre-training appears to be a beneficial step in our training process, leading to improved model accuracy.
§.§ Impact of training series length
To assess the impact of training series length on model performance, we trained several classifiers with varying series lengths ranging from 1 to 32. The results of these classifiers are shown in Fig. <ref>. The plot suggests that there is an initial positive correlation between series length and model performance. However, beyond a certain threshold, the impact of series length on performance diminishes, and longer series lengths do not significantly improve the model's accuracy.
§ CONCLUSION
In this work, we aimed to develop a machine learning-based model for diagnosing acute lymphoblastic leukemia from blood smear images. Since the size of the datasets is small in this field, training networks in an end-to-end manner leads the model to find shortcuts for making decisions instead of using medically meaningful patterns. To address this issue, we introduce a pipeline inspired by the hematologists' approach, consisting of four main steps: detecting white blood cells, analyzing each cell, aggregating results, and decision-making. Compared to end-to-end training, this approach has several advantages.
First and foremost, the training process is a kind of search among all feasible classifiers, and if we want to obtain a classifier similar to a human expert, we must constrain this search space. Each data point acts as a constraint, which is why the size of the dataset becomes important. In our problem, we do not have access to such large datasets, so we have to impose the constraints in another way. We did this by constraining the classifier architecture to the described pipeline.
In addition, this approach allows us to monitor the performance of individual components and to find and fix possible faults.
Another important thing that we did for training our pipeline was to train the final classifier in two stages. The first step can be considered an auxiliary task that makes the classifier sensitive to the biomarkers of ALL, and the second step is for the model to learn to generalize the knowledge learned in the first step.
Finally, we show that our model is sensitive to ALL biomarkers. Furthermore, we analyzed the impact of our design choices, such as the use of the AlexNet feature extractor, the LSTM layer, the pre-training step, and the length of the training series.
In our work, we redefined the problem of acute lymphoblastic leukemia (ALL) diagnosis as a multiple-instance learning problem, which has not been done before. To this end, we generated a suitable training dataset and evaluated our model on an out-of-distribution test set, achieving acceptable results.
The model's sensitivity to staining appears to be its major weakness, and further work should focus on addressing this issue to improve its performance. One potential solution could be to train feature extractors that are less sensitive to staining.
|
http://arxiv.org/abs/2307.03956v1 | 20230708112126 | The annealed parabolic Anderson model on a regular tree | [
"Frank den Hollander",
"Daoyi Wang"
] | math.PR | [
"math.PR"
] |
[1]
Mathematical Institute, Leiden University, P.O. Box 9512, 2300 RA Leiden, The Netherlands
[email protected]
[2]
Mathematical Institute, Leiden University, P.O. Box 9512, 2300 RA Leiden, The Netherlands
[email protected]
The annealed parabolic Anderson model on a regular tree
F. den Hollander [1], D. Wang [2]
August 12, 2023
=========================================================
We study the total mass of the solution to the parabolic Anderson model on a regular tree with an i.i.d. random potential whose marginal distribution is double-exponential. In earlier work we identified two terms in the asymptotic expansion for large time of the total mass under the quenched law, i.e., conditional on the realisation of the random potential. In the present paper we do the same for the annealed law, i.e., averaged over the random potential. It turns out that the annealed expansion differs from the quenched expansion. The derivation of the annealed expansion is based on a new approach to control the local times of the random walk appearing in the Feynman-Kac formula for the total mass. In particular, we condition on the backbone to infinity of the random walk, truncate and periodise the infinite tree relative to the backbone to obtain a random walk on a finite subtree with a specific boundary condition, employ the large deviation principle for the empirical distribution of Markov renewal processes on finite graphs, and afterwards let the truncation level tend to infinity to obtain an asymptotically sharp asymptotic expansion.
MSC2010: 60H25, 82B44, 05C80.
Keywords: Parabolic Anderson model, Feynman-Kac formula, regular tree, double-exponential random potential, backbone of random walk, annealed Lyapunov exponent, variational formula.
Acknowledgment:
The research in this paper was supported by the Netherlands Organisation for Scientific Research through NWO Gravitation Grant NETWORKS-024.002.003.
§ INTRODUCTION AND MAIN RESULTS
Section <ref> provides background and motivation, Section <ref> lists notations, definitions and assumptions, Section <ref> states the main theorems, while Section <ref> places these theorems in their proper context.
§.§ Background and motivation
The parabolic Anderson model (PAM) is the Cauchy problem
∂_t u(x,t) = Δ_𝒳 u(x,t) + ξ(x) u(x,t), t>0, x ∈𝒳,
where t is time, 𝒳 is an ambient space, Δ_𝒳 is a Laplace operator acting on functions on 𝒳, and ξ is a random potential on 𝒳. Most of the literature considers the setting where 𝒳 is either ℤ^d or ℝ^d with d ≥ 1, starting with the foundational papers <cit.>, <cit.>, <cit.> and further developed through a long series of follow-up papers (see the monograph <cit.> and the survey paper <cit.> for an overview). More recently, other choices for 𝒳 have been considered as well:
(I)
Deterministic graphs (the complete graph <cit.>, the hypercube <cit.>).
(II)
Random graphs (the Galton-Watson tree <cit.>, <cit.>, the configuration model <cit.>).
Much remains open for the latter class.
The main target for the PAM is a description of intermittency: for large t the solution u(·,t) of (<ref>) concentrates on well-separated regions in 𝒳, called intermittent islands. Much of the literature focusses on a detailed description of the size, shape and location of these islands, and on the profiles of the potential ξ(·) and the solution u(·,t) on them. A special role is played by the case where ξ is an i.i.d. random potential with a double-exponential marginal distribution
ℙ(ξ(0) > u) = exp(-e^{u/ϱ}), u ∈ℝ,
where ϱ∈ (0,∞) is a parameter. This distribution turns out to be critical, in the sense that the intermittent islands neither grow nor shrink with time, and represents a class of its own.
In the present paper we consider the case where 𝒳 is an unrooted regular tree . Our focus will be on the asymptotics as t→∞ of the total mass
U(t) = ∑_x ∈ u(x,t).
In earlier work <cit.>, <cit.> we were concerned with the case where 𝒳 is a rooted Galton-Watson tree in the quenched setting, i.e., almost surely with respect to the random tree and the random potential. This work was restricted to the case where the random potential is given by (<ref>) and the offspring distribution of the Galton-Watson tree has support in \{1} with a sufficiently thin tail. In the present paper our focus will be on the annealed setting, i.e., averaged over the random potential. We derive two terms in the asymptotic expansion as t→∞ of the average total mass
⟨ U(t) ⟩ = ∑_x ∈⟨ u(x,t) ⟩,
where ⟨·⟩ denotes expectation with respect to the law of the random potential. It turns out that the annealed expansion differs from the quenched expansion, even though the same variational formula plays a central role in the two second terms.
The derivation in the annealed setting forces us to follow a different route than in the quenched setting, based on various approximations of that are more delicate than the standard approximation of ^d (see <cit.>). This is the reason why we consider regular trees rather than Galton-Watson trees, to which we hope to return later. A key tool in the analysis is the large deviation principle for the empirical distribution of Markov renewal processes on finite graphs derived in <cit.>, which is recalled in Appendix <ref>.
§.§ The PAM on a graph
§.§.§ Notations and definitions
Let G = (V,E) be a simple connected undirected graph, either finite or countably infinite, with a designated vertex called the root. Let Δ_G be the Laplacian on G, i.e.,
(Δ_G f)(x) = ∑_y∈ V:{x,y}∈ E [f(y) - f(x)], x ∈ V, f V→,
which acts along the edges of G. Let ξ := (ξ(x))_x ∈ V be a random potential attached to the vertices of G, taking values in . Our object of interest is the non-negative solution of the Cauchy problem with localised initial condition,
[ ∂_t u(x,t) = (Δ_G u)(x,t) + ξ(x) u(x,t), x ∈ V, t>0,; u(x,0) = δ_(x), x ∈ V. ]
The quantity u(x,t) can be interpreted as the amount of mass at time t at site x when initially there is unit mass at . The total mass at time t is U(t) = ∑_x ∈ V u(x,t). The total mass is given by the Feynman-Kac formula
U(t) = 𝔼_𝒪(e^{∫_0^t ξ(X_s) ds}),
where X=(X_t)_t ≥ 0 is the continuous-time random walk on the vertices V with jump rate 1 along the edges E, and ℙ_𝒪 denotes the law of X given X_0 = 𝒪. Let ⟨·⟩ denote expectation with respect to ξ. The quantity of interest in this paper is the average total mass at time t:
⟨ U(t) ⟩ = ⟨𝔼_𝒪(e^{∫_0^t ξ(X_s) ds})⟩.
§.§.§ Assumption on the potential
Throughout the paper we assume that the random potential ξ = (ξ(x))_x ∈ V consists of i.i.d. random variables with a marginal distribution whose cumulant generating function
H(u) = log⟨ e^{u ξ(𝒪)} ⟩
satisfies the following:
[Asymptotic double-exponential potential]
There exists a ϱ∈ (0,∞) such that
lim_u→∞ u H”(u) = ϱ.
[Double-exponential potential]
A special case of (<ref>) is when ξ() has the double-exponential distribution in (<ref>), in which case
H(u) = logΓ(ϱ u + 1)
with Γ the gamma function.
By Stirling's approximation, (<ref>) implies
H(u) = ϱ u log(ϱ u) - ϱ u + o(u), u →∞.
Assumption <ref> is more than enough to guarantee existence and uniqueness of the non-negative solution to (<ref>) on any discrete graph with at most exponential growth (as can be inferred from the proof in <cit.>, <cit.> for the case G=^d). Since ξ is assumed to be i.i.d., we have from (<ref>) that
⟨ U(t) ⟩ = 𝔼_𝒪(exp[∑_x∈ V H(ℓ_t(x))]),
where
ℓ_t(x) = ∫^t_0 1{X_s =x } s, x ∈ V, t≥ 0,
is the local time of X at vertex x up to time t.
§.§.§ Variational formula
The following characteristic variational formula is important for the description of the asymptotics of ⟨ U(t)⟩. Denote by (V) the set of probability measures on V. For p ∈(V), define
I_E(p) = ∑_{x,y}∈ E( √(p(x)) - √(p(y)) )^2,
J_V(p) = - ∑_x ∈ V p(x) log p(x),
and set
χ_G(ϱ) = inf_p ∈(V) [I_E(p) + ϱ J_V(p)], ϱ∈ (0,∞).
The first term in (<ref>) is the quadratic form associated with the Laplacian, which is the large deviation rate function for
the empirical distribution
L_t = 1/t∫_0^t δ_X_s s = 1/t∑_x ∈ Vℓ_t(x) δ_x ∈(V)
(see e.g. <cit.>). The second term in (<ref>) captures the second order asymptotics of ∑_x ∈ V H(tp(x)) as t →∞ via (<ref>) (see e.g. <cit.>).
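Purely as a numerical illustration of this variational formula (and not part of any proof), the following sketch approximates the infimum on a truncated regular tree by minimising I_E(p) + ϱ J_V(p) over probability vectors via a softmax parametrisation; the function names and optimiser settings are ours and arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

def truncated_tree_edges(d, R):
    """Edge list of the regular tree of degree d+1 truncated at radius R around the root."""
    edges, frontier, next_id = [], [0], 1
    for r in range(R):
        new_frontier = []
        for v in frontier:
            for _ in range(d + 1 if r == 0 else d):   # root has d+1 neighbours, others d children
                edges.append((v, next_id)); new_frontier.append(next_id); next_id += 1
        frontier = new_frontier
    return edges, next_id                              # next_id = number of vertices

def chi_numeric(d, R, rho, restarts=5, seed=0):
    """Crude approximation of inf_p [I_E(p) + rho * J_V(p)] on the truncated tree."""
    edges, n = truncated_tree_edges(d, R)
    rng = np.random.default_rng(seed)

    def objective(theta):
        p = np.exp(theta - theta.max()); p /= p.sum()  # softmax keeps p a probability vector
        i_e = sum((np.sqrt(p[x]) - np.sqrt(p[y])) ** 2 for x, y in edges)
        j_v = -np.sum(p * np.log(p + 1e-300))
        return i_e + rho * j_v

    best = np.inf
    for _ in range(restarts):
        res = minimize(objective, rng.normal(size=n), method="Nelder-Mead",
                       options={"maxiter": 50000, "maxfev": 50000, "fatol": 1e-12})
        best = min(best, res.fun)
    return best   # decreases towards chi(rho) as the truncation radius R grows
```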
§.§.§ Reformulation
The following lemma pulls the leading order term out of the expansion and shows that the second order term is controlled by the large deviation principle for the empirical distribution.
[Key object for the expansion]
If G=(V,E) is finite, then
⟨ U(t) ⟩ = ^H(t) + o(t) _(^-ϱ t J_V(L_t)),
t →∞.
where J_V is the functional in (<ref>) and L_t is the empirical distribution in (<ref>).
Because ∑_x ∈ Vℓ_t(x) = t, we can rewrite (<ref>) as
⟨ U(t) ⟩ = _(exp[∑_x∈ V H(ℓ_t(x))])
= ^H(t) _(exp{t ∑_x∈ V1/t[H(ℓ_t(x)tt)-ℓ_t(x)tH(t)]}).
Assumption <ref> implies that H has the following scaling property (see <cit.>):
lim_t→∞1/t [H(ct) - cH(t)] = ϱ c log c uniformly in c ∈ [0,1].
Hence the claim follows.
§.§ The PAM on an unrooted regular tree: annealed total mass for large times and key variational formula
In this section we specialise to the case where G= = (E,V), an unrooted regular tree of degree d +1 with d ≥ 2 (see Fig. <ref>). The main theorem of our paper is the following expansion.
[Growth rate of the total mass]
For any d ≥ 4, subject to Assumption <ref>,
1/tlog⟨ U(t) ⟩ = ϱlog(ϱ t) - ϱ - χ_(ϱ) + o(1), t →∞,
where χ_(ϱ) is the variational formula in (<ref>) with G=.
The proof of Theorem <ref> is given in Sections <ref>–<ref> and makes use of technical computations collected in Appendices <ref>–<ref>.
The main properties of the key quantity
χ_(ϱ) = inf_p ∈(V) [I_E(p) + ϱ J_V(p)], ϱ∈ (0,∞),
are collected in the following theorem (see Fig. <ref>).
[Properties of the variational formula]
For any d ≥ 2 the following hold:
(a) The infimum in (<ref>) may be restricted to the set
_^↓(V) = {p ∈(V) argmax p = ,
p is non-increasing in the distance to }.
(b) For every ϱ∈ (0,∞), the infimum in (<ref>) restricted to _^↓(V) is attained, every minimiser p is such that p>0 on V, and ∂ S_R = ∑_∂ B_R()p(x), R∈_0, satisfies
∑_R ∈_0∂ S_R log(R+1) ≤d+1/ϱ,
where B_R() is the ball of radius R centred at .
(c) The function ϱ↦χ_(ϱ) is strictly increasing and globally Lipschitz continuous on (0,∞), with
lim_ϱ↓ 0χ_(ϱ) = d-1, lim_ϱ→∞χ_(ϱ) = d+1.
The proof of Theorem <ref> is given in Appendix <ref> (see Fig. <ref>).
§.§ Discussion
1.
Theorem <ref> identifies the scaling of the total mass up to and including terms that are exponential in t. The first two terms in the right-hand side of (<ref>) are the same as those of 1/t H(t) (recall (<ref>)). The third term is a correction that comes from the cost for X in the Feynman-Kac formula in (<ref>) to create an optimal local time profile somewhere in , which is captured by the minimiser(s) of the variational formula in (<ref>).
2.
For the quenched model on a rooted Galton-Watson tree we found in <cit.>, <cit.> that
1/tlog U(t) = ϱlog(ϱ t ϑ/loglog t)
- ϱ - χ(ϱ) +o(1), t →∞,
×-a.s.,
where is the law of the potential, is the law of , ϑ is the logarithm of the mean of the offspring distribution, and
χ_(ϱ) = inf_⊂χ_(ϱ)
with χ_(ϱ) given by (<ref>) and the infimum running over all subtrees of . This result was shown to be valid as soon as the offspring distribution has support in \{1} (i.e., all degrees are at least 3) and has a sufficiently thin tail. The extra terms in (<ref>) come from the cost for X in the Feynman-Kac formula in (<ref>) to travel in a time of order o(t) to an optimal finite subtree with an optimal profile of the potential, referred to as intermittent islands, located at a distance of order ϱ t/loglog t from , and to subsequently spend most of its time on that subtree. In this cost the parameter ϑ appears, which is absent in (<ref>). It was shown in <cit.> that if ϱ≥ 1/log (d_min+1), with d_min the minimum of the support of the offspring distribution, then the infimum in (<ref>) is attained at the unrooted regular tree with degree d_min+1, i.e., the minimal unrooted regular tree contained in , for which ϑ = log d_min. Possibly the bound on ϱ is redundant.
3. In view of Lemma <ref> and the fact that Assumption <ref> implies (<ref>), we see that the proof of Theorem <ref> amounts to showing that, on = (V,E),
lim_t→∞1/tlog_(^-ϱ t J_V(L_t)) = - χ_(ϱ).
We achieve this by deriving asymptotically matching upper and lower bounds. These bounds are obtained by truncating outside a ball of radius R, to obtain a finite tree _R, deriving the t→∞ asymptotics for finite R, and letting R→∞ afterwards. For the lower bound we can use the standard truncation technique based on killing X when it exits _R and applying the large deviation principle for the empirical distribution of Markov processes on finite graphs derived in <cit.>. For the upper bound, however, we cannot use the standard truncation technique based on periodisation of X beyond radius R, because is an expander graph (see <cit.> for a list of known techniques on ^d and ^d). Instead, we follow a route in which is approximated in successive stages by a version of _R with a specific boundary condition, based on monitoring X relative to its backbone to infinity. This route allows us to use the large deviation principle for the empirical distribution of Markov renewal processes on finite graphs derived in <cit.>, but we need the condition d ≥ 4 to control the specific boundary condition in the limit as R →∞ (see Remark <ref> for more details). The reason why the approximation of by finite subtrees is successful is precisely because in the parabolic Anderson model the total mass tends to concentrate on intermittent islands.
4. Theorem <ref> shows that, modulo translations, the optimal strategy for L_t as t→∞ is to be close to a minimiser of the variational formula in (<ref>) restricted to _^↓(V). Any minimiser is centred at , strictly positive everywhere, non-increasing in the distance to , and rapidly tending to zero. The following questions remain open:
(1)
Is the minimiser p unique modulo translation?
(2)
Does p(x) satisfy lim_|x| →∞ |x|^-1logp̅(x) = -∞, with |x| the distance between x and ?
(3)
Is p radially symmetric?
(4)
Is ϱ↦χ_(ϱ) analytic on (0,∞)?
We expect the answer to be yes for (1) and (2), and to be no for (3) and (4).
§ PROOF OF THE MAIN THEOREM: LOWER BOUND
In this section we prove the lower bound in Theorem <ref>, which is standard and straightforward. In Section <ref> we obtain a lower bound in terms of a variational formula by killing the random walk when it exits _R. In Section <ref> we derive the lower bound of the expansion by letting R→∞ in the variational formula.
§.§ Killing and lower variational formula
Fix R∈ℕ. Let _R be the subtree of =(V,E) consisting of all the vertices that are within distance R of the root and all the edges connecting them. Put V_R=V_R(_R) and E_R = E(_R). Let τ_R = inf{t ≥ 0 X_t ∉ V_R} denote the first time that X exits _R. It follows from (<ref>) that
⟨ U(t) ⟩≥_(exp[∑_x∈ V_R
H(ℓ_t(x))]1{τ_R>t}).
Since _R is finite, Lemma <ref> gives
⟨ U(t) ⟩≥^H(t) + o(t) _[^-ϱ t J_V(L_t)1{τ_R>t}]
with J_V the functional defined in (<ref>). As shown in <cit.> (see also <cit.>), the family of sub-probability distributions _(L_t ∈· , τ_R>t), t ≥ 0, satisfies the LDP on ^R(V) = {p ∈(V) supp(p) ⊂ V_R} with rate function I_E, with I_E the functional defined in (<ref>). This is the standard LDP for the empirical distribution of Markov processes. Therefore, by Varadhan's Lemma,
lim_t→∞1/tlog_[^-ϱ t J_V(L_t)1{τ_R>t}] = - χ^-_R(ϱ)
with
χ^-_R(ϱ) = inf_p ∈^R(V) [I_E(p) +ϱ J_V(p)],
where we use that p ↦ J_V(p) is bounded and continuous (in the discrete topology) on ^R(V). Note that
lim_t →∞1/tlog_(τ_R>t) = - inf_p∈^R(V) I_E(p) < 0,
which is non-zero because any p ∈^R(V) is non-constant on V. The expression in (<ref>) is the same as (<ref>) with G=, except that p is restricted to V_R.
§.§ Limit of the lower variational formula
Clearly, R ↦χ^-_R(ϱ) is non-increasing. To complete the proof of the lower bound in Theorem <ref>, it remains to show the following.
lim sup_R→∞χ^-_R(ϱ) ≤χ_(ϱ).
Pick any p ∈(V) such that I_E(p)<∞ and J_V(p)<∞. Let p^ R be the projection of p onto V_R, i.e.,
p^ R(x) = {[ p(x), x ∈int(V_R),; ∑_y ≥ x p(y), x ∈∂ V_R, ].
where y ≥ x means that y is an element of the progeny of x in . Since p^ R∈^R(V), we have from (<ref>) that χ^-_R(ϱ) ≤ I_E(p^ R) + ϱ J_V(p^ R). Trivially, lim_R→∞ I_E(p^ R) = I_E(p) and lim_R→∞ J_V(p^ R) = J_V(p), and so we have lim sup_R→∞χ^-_R(ϱ) ≤ I_E(p) + ϱ J_V(p). Since this bound holds for arbitrary p ∈(V), the claim follows from (<ref>).
§ PROOF OF THE MAIN THEOREM: UPPER BOUND
In this section we prove the upper bound in Theorem <ref>, which is more laborious and requires a more delicate approach than the standard periodisation argument used on ^d . In Section <ref> we obtain an upper bound in terms of a variational formula on a version of _R with a specific boundary condition. The argument comes in four steps, encapsulated in Lemmas <ref>–<ref> below:
(I)
Condition on the backbone of X (Section <ref>).
(II)
Project X onto a concatenation of finite subtrees attached to this backbone that are rooted versions of _R (Section <ref>).
(III)
Periodise the projected X to obtain a Markov renewal process on a single finite subtree and show that the periodisation can be chosen such that the local times at the vertices on the boundary of the finite subtree are negligible (Section <ref>).
(IV)
Use the large deviation principle for the empirical distribution of Markov renewal processes derived in <cit.> to obtain a variational formula on a single subtree (Section <ref>).
In Section <ref> we derive the upper bound of the expansion by letting R→∞ in the variational formula.
§.§ Backbone, projection, periodisation and upper variational formula
§.§.§ Backbone
For r ∈_0, let τ_r be the last time when X visits ∂ B_r(), the boundary of the ball of radius r around . Then the sequence = (X_τ_r)_r ∈_0 forms the backbone of X, running from to infinity.
[Condition on a backbone]
For every backbone and every t ≥ 0,
𝔼_𝒪(exp[∑_x∈ V() H(ℓ_t(x))])
= 𝔼_𝒪(exp[∑_x∈ V() H(ℓ_t(x))] | = ).
By symmetry, the conditional expectation in the right-hand side does not depend on the choice of . Indeed, permutations of the edges away from the root do not affect the law of ∑_x∈ V() H(ℓ_t(x)).
Turn the one-sided backbone into a two-sided backbone by adding a second backbone from to infinity. By symmetry, the choice of this second backbone is arbitrary, say '. Redraw by representing ' ∪ as and representing the rest of as a sequence of rooted trees ^∗ = (^∗_x)_x ∈ hanging off (see Fig. <ref>). In ^∗_x, the root sits at x and has d-1 downward edges, while all lower vertices have d downward edges.
Let X^=(X^_t)_t ≥ 0 be the random walk on ^ and (ℓ^_t(x))_x ∈^ the local times of X^ at time t.
[Representation of as a backbone with rooted trees]
For every and t ≥ 0,
𝔼_𝒪(exp[∑_x∈ V() H(ℓ_t(x))] | = )
= 𝔼_𝒪(exp[∑_x∈ V(^ ) H(ℓ^_t(x))]
| X^_∞ = + ∞).
Simply redraw as ^.
Note that X^ is a Markov process whose sojourn times have distribution EXP(d+1) and whose steps are drawn uniformly at random from the d+1 edges that are incident to each vertex.
§.§.§ Projection
For R ∈\{1}, cut into slices of length R, i.e.,
= ∪_k∈ (z + (kR+I)), I={0,1,…,R-1},
where z is to be chosen later. Apply the following two maps to ^ (in the order presented):
(i)
For each k ∈, fold ^∗_z+(kR+(R-1)) onto ^∗_z+(k+1)R by folding the d-1 edges downwards from the root on top of the edge in connecting z+(kR+(R-1)) and z+(k+1)R, and putting the d infinite rooted trees hanging off each of these d-1 edges on top of the rooted tree ^*_z+(k+1)R hanging off z+(k+1)R. Note that each of the d infinite rooted trees is a copy of ^*_z+(k+1)R.
(ii)
For each k ∈ and m ∈{0,1,…,R-2}, cut off all the infinite subtrees trees in ^∗_z+(kR+m) whose roots are at depth (R-1)-m. Note that the total number of leaves after the cutting equals
(d-1) ∑_m=0^R-2 d^(R-2)-m = (d-1)d^R-2 1-d^-(R-1)/1-d^-1 = d^R-1 - 1,
which is the same as the total number of leaves of the rooted tree ^*_R of depth R-1 (i.e., with R generations) minus 1 (a fact we will need below).
By doing so we obtain a concatenation of finite units
_R=(_R[k])_k ∈
that are rooted trees of depth R-1 (see Fig. <ref>). Together with the two maps that turn ^ into _R, we apply two maps to X^:
(i)
All excursions of X^ in the infinite subtrees that are folded to the right and on top are projected accordingly.
(ii)
All excursions of X^ in the infinite subtrees that are cut off are replaced by a sojourn of X^_R in the tadpoles that replace these subtrees (see Fig. <ref>)
The resulting path, which we call X^_R = (X^_R_t)_t ≥ 0, is a Markov renewal process with the following properties:
* The sojourn times in all the vertices that are not tadpoles have distribution EXP(d+1).
* The sojourn times in all the tadpoles have distribution ψ, defined as the conditional distribution of the return time τ of the random walk on the infinite rooted tree ^* given that τ<∞ (see <cit.> for a proper definition).
* The transitions into the tadpoles have probability d/d+1, the transitions out of the tadpoles have probability 1 (because of the condition X^_∞ = + ∞).
* The transitions from z + (kR+(R-1)) to z+(k+1)R have probability d/d+1, while the reverse transitions have probability 1/d+1.
Write (ℓ^ _R_t(x))_x ∈ V__R to denote the local times of X^_R at time t.
[Projection onto a concatenation of finite subtrees]
For every R ∈\{1} and t ≥ 0,
𝔼_𝒪(exp[∑_x∈ V(^ ) H(ℓ^_t(x))]
| X^_∞ = + ∞)
≤𝔼_𝒪(exp[∑_x∈ V(_R) H(ℓ^ _R_t(x))]
| X^_R_∞ = + ∞).
The maps that are applied to turn X^ into X^_R are such that local times are stacked on top of each other. Since H defined in (<ref>) is convex and H(0)=0, we have H(ℓ) + H(ℓ') ≤ H(ℓ+ℓ') for all ℓ,ℓ' ∈_0, which implies the inequality.
§.§.§ Periodisation
Our next observation is that the condition {X^_R_∞ = + ∞} is redundant.
[Condition redundant]
For every R ∈\{1} and t ≥ 0,
𝔼_𝒪(exp[∑_x∈ V(_R) H(ℓ^ _R_t(x))] | X^_R_∞ = + ∞)
= 𝔼_𝒪(exp[∑_x∈ V(_R) H(ℓ^ _R_t(x))] ).
The event {X^_R_∞ = + ∞} has probability 1 because on the edges connecting the units of _R (see Fig. <ref>) there is a drift downwards. To see why, note that 1/(d+1) < 1/2 < d/(d+1) because d ≥ 2, and use that a one-dimensional random walk with drift is transient to the right <cit.>.
Since _R is periodic, we can fold X^_R onto a single unit _R, to obtain a Markov renewal process X^_R on _R (see Fig. <ref>) in which the transition from the top vertex to the right-most bottom vertex has probability 1/d+1, while the reverse transition has probability d/d+1. Clearly, the sojourn time distributions are not affected by the folding and therefore remain as above. Write (ℓ^ _R_t(x))_x ∈ V(_R) to denote the local times of X^_R at time t.
[Periodisation to a single finite subtree]
For every R ∈\{1} and t ≥ 0,
𝔼_𝒪(exp[∑_x∈ V(_R) H(ℓ^ _R_t(x))])
≤𝔼_𝒪(exp[∑_x∈ V(_R) H(ℓ^ _R_t(x))]).
The periodisation again stacks local time on top of each other.
Before we proceed we make a crucial observation, namely, we may still choose the shift z ∈{0,1,…,R-1} of the cuts of the two-sided backbone (recall Fig. <ref>). We will do so in such a way that the local time up to time t spent in the set ∂_ _R defined by
∂_ _R = all vertices at the top or at the bottom of a unit in _R
= all vertices marked by ∙ in Fig. <ref>
is at most t/R. After the periodisation these vertices are mapped to the set ∂_ _R defined by
∂_ _R = all vertices at the top or at the bottom of _R
= all vertices marked by ∙ in Fig. <ref>.
[Control on the time spent at the boundary]
For every R ∈\{1} and t ≥ 0,
𝔼_𝒪(exp[∑_x∈ V(_R) H(ℓ^ _R_t(x))])
≤𝔼_𝒪(exp[∑_x∈ V(_R) H(ℓ^ _R_t(x))]
1_{1/t∑_x ∈∂_ _Rℓ^_R_t(x) ≤ 1/R}).
For different z the sets of vertices making up ∂_R correspond to disjoint sets of vertices in ^ (see Fig. <ref>). Since ∑_x ∈^ℓ^_t(x) = t for all t ≥ 0, it follows that there exists a z for which ∑_x ∈∂_Rℓ^_t(x) ≤ t/R. Therefore the upper bound in Lemma <ref> can be strengthened to the one that is claimed.
§.§.§ Upper variational formula
Lemmas <ref>–<ref> provide us with an upper bound for the average total mass (recall (<ref>)) on the infinite tree in terms of the same quantity on the finite tree-like unit _R with a specific boundary condition. Along the way we have paid a price: the sojourn times in the tadpoles are no longer exponentially distributed, and the transition probabilities into and out of the tadpoles and between the top vertex and the right-most bottom vertex are biased. We therefore need the large deviation principle for the empirical distribution of Markov renewal processes derived in <cit.>, which we can now apply to the upper bound.
Since _R is finite, Lemma <ref> gives
⟨ U(t) ⟩≤^H(t) + o(t) 𝔼_𝒪(^-ϱ J_V(_R)(L^ _R_t)
1_{L^_R_t(∂_ _R) ≤ 1/R})
with J_V the functional defined in (<ref>). The following lemma controls the expectation in the right-hand side.
[Scaling of the key expectation]
For every R ∈\{1},
lim_t→∞1/tlog_(^-ϱ t J_V(_R)(L^_R_t) 1_{L^_R_t(∂_ _R) ≤ 1/R}) = - χ^+_R(ϱ),
where
χ^+_R(ϱ) = inf_p ∈(V(_R))p(∂__R) ≤ 1/R{I^†_E(_R)(p) + ϱ J_V(_R)(p)},
with
I^†_E(_R)(p) = inf_β∈ (0,∞)inf_q ∈(V(_R))[K(β q) + K(p |β q)],
where
K(β q) = sup_q∈(V(_R))∑_x ∈ V(_R)β q(x) log(q(x)∑_y ∈ V(_R)π_x,yq(y)),
K(p |β q) = ∑_x ∈ V(_R)β q(x) (λ_x)(p(x)β q(x)),
with
(λ_x)(α) = sup_θ∈ℝ [αθ - λ_x(θ)], α∈ [0,∞),
λ_x(θ) = log∫_0^∞^θτψ_x(τ), θ∈ℝ,
where ψ_x=ψ when x is a tadpole, ψ_x = EXP(d+1) when x is not a tadpole, and π_x,y is the transition kernel of the discrete-time Markov chain on V(_R) embedded in X^_R.
Apply the large deviation principle derived in <cit.>, which we recall in Proposition <ref> in Appendix <ref>.
The expression in (<ref>) is similar to (<ref>) with G=_R, except that the rate function I_E(_R) in (<ref>) is more involved than the rate function I_E in (<ref>).
§.§ Limit of the upper variational formula
The prefactor e^{H(t)+o(t)} in Lemma <ref> accounts for the terms ϱlog(ϱ t)-ϱ in the right-hand side of (<ref>) (recall <ref>). In view of Lemma <ref>, in order to complete the proof of the upper bound in Theorem <ref> it suffices to prove the following lemma.
For any d ≥ 4, lim inf_R→∞χ^+_R(ϱ) ≥χ_(ϱ).
The proof is given in Appendix <ref> and relies on two steps:
* Show that, for d ≥ 4,
I^†_E(_R)(p) ≥ I^+_E(_R)(p) + O(1/R)
with I^+_E(_R) a rate function similar to the standard rate function I_E(_R) given by (<ref>).
* Show that, for d ≥ 2,
χ^ +_R(ϱ) = inf_p ∈(V(_R))p(∂_ _R) ≤ 1/R{I^+_E(_R)(p) + ϱ J_V(_R)(p)}
satisfies
lim inf_R→∞χ^ +_R(ϱ) ≥χ_(ϱ).
§ LARGE DEVIATION PRINCIPLE FOR THE LOCAL TIMES OF MARKOV RENEWAL PROCESSES
The following LDP, which was used in the proof of Lemma <ref>, was derived in <cit.>, and generalises the LDP for the empirical distribution of a Markov proceses on a finite state space derived in <cit.>. See <cit.> for the definition of the LDP.
Let Y=(Y_t)_t ≥ 0 be the Markov renewal process on the finite graph G=(V,E) with transition kernel (π_x,y)_{x,y}∈ E and with sojourn times whose distributions (ψ_x)_x ∈ V have support (0,∞). For t > 0, let L_t^Y denote the empirical distribution of Y at time t (see (<ref>)). Then the family (ℙ(L^Y_t ∈·))_t>0 satisfies the LDP on 𝒫(V) with rate t and with rate function I^†_E given by
I^†_E(p) = inf_β∈ (0,∞)inf_q ∈(V)[K(β q) + K(p |β q)]
with
K(β q) = sup_q∈(V)∑_x ∈ Vβ q(x) log(q(x)∑_y∈ Vπ_x,yq(y)),
K(p |β q ) = ∑_x ∈ Vβ q(x) (λ_x)(p(x)β q(x)),
where
[ (λ_x)(α) = sup_θ∈ℝ [αθ - λ_x(θ)], α∈ [0,∞),; λ_x(θ) = log∫_0^∞^θτψ_x(τ), θ∈ℝ. ]
The rate function I^†_E consists of two parts: K in (<ref>) is the rate function of the LDP on (V) for the empirical distribution of the discrete-time Markov chain on V with transition kernel (π_x,y)_{x,y}∈ E (see <cit.>), while K in (<ref>) is the rate function of the LDP on (0,∞) for the empirical mean of the sojourn times, given the empirical distribution of the discrete-time Markov chain. Moreover, λ_x is the cumulant generating function associated with ψ_x, and λ_x is the Legendre transform of λ_x, playing the role of the Cramér rate function for the empirical mean of the i.i.d. sojourn times at x. The parameter β plays the role of the ratio between the continuous time scale and the discrete time scale.
§ SOJOURN TIMES: CUMULANT GENERATING FUNCTIONS AND LEGENDRE TRANFORMS
In Appendix <ref> we recall general properties of cumulant generating functions and Legendre transforms, in Appendices <ref> and <ref> we identify both for the two sojourn time distributions arising in Lemma <ref>, respectively.
§.§ General observations
Let λ be the cumulant generating function of a non-degenerate sojourn time distribution ϕ, and λ be the Legendre transform of λ (recall (<ref>)). Both λ and λ are strictly convex, are analytic in the interior of their domain, and achieve a unique zero at θ = 0, respectively, α=α_c with α_c= ∫_0^∞τϕ(τ). Furthermore, λ diverges at some θ_c ∈ (0,∞] and has slope α_c at θ=0. Moreover, if the slope of λ diverges at θ_c, then λ is finite on (0,∞).
The supremum in the Legendre transform defining (λ)(α) is uniquely taken at θ=θ(α) solving the equation
λ'(θ(α)) = α.
The tangent of λ with slope α at θ(α) intersects the vertical axis at (-λ)(α), i.e., putting
μ(α) = λ(θ(α))
we have
μ(α) = α (λ)'(α)-(λ)(α).
(See Fig. <ref>.) Note that by differentiating (<ref>) we get
μ'(α) = α(λ)”(α),
which shows that α↦μ(α) is strictly increasing and hence invertible, with inverse function μ^-1.
Note that by differentiating the relation (λ)(α) = αθ(α)-λ(θ(α)) we get
(λ)'(α) = θ(α).
A further relation that is useful reads
(λ)' ∘μ^-1 = λ^-1,
which follows because μ = λ∘θ by (<ref>) and (λ)' = θ by (<ref>).
§.§ Exponential sojourn time
If ϕ=EXP(d+1), then the cumulant generating function λ(θ) = log∫_0^∞^θτψ(τ) is given by
λ(θ) =
log((d+1)/(d+1-θ)), θ < d+1,
∞, θ≥ d+1.
To find (λ)(α), we compute
∂/∂θ[αθ - log((d+1)/(d+1 - θ))] = α - 1/(d+1-θ),
∂^2/∂θ^2[αθ - log((d+1)/(d+1-θ))] = - 1/(d+1-θ)^2 < 0.
Hence the supremum in (<ref>) is uniquely taken at
θ(α) = d+1 - 1/α, α > 0,
so that
(λ)(α) = α (d+1) -1 - log[α (d+1)], α>0.
Thus, λ and λ have the shape in Fig. <ref>, with θ_c = d+1 and α_c = 1/(d+1), and with lim_θ↑θ_cλ(θ) = ∞ and lim_θ↑θ_cλ'(θ) = ∞.
Note that μ has domain (0,∞) and range .
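As a quick consistency check with the general observations above: at the mean sojourn time α_c = 1/(d+1) we get θ(α_c) = d+1 - (d+1) = 0 and (λ)(α_c) = 1 - 1 - log 1 = 0, so the Legendre transform indeed attains its unique zero at α_c, while λ'(0) = 1/(d+1) = α_c confirms that λ has slope α_c at θ = 0.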
§.§ Non-exponential sojourn time
For ϕ=ψ the computations are more involved. Let ^*=(E,V) be the infinite rooted regular tree of degree d+1. Write for the root. Let X = (X_n)_n ∈_0 be the discrete-time simple random walk on ^*=(E,V) starting from . Write τ_ to denote the time of the first return of X to . Define r = ℙ_(τ_<∞). It is easy to compute r by projecting X on _0: r is the return probability to the origin of the random walk on _0 that jumps to the right with probability p = d/(d+1) and to the left with probability q = 1/(d+1), which equals q/p (see <cit.>). Thus, r= 1/d.
For y ∈^*, define h_y = ℙ_y(τ_ <∞). Then h_y can be explicitly calculated, namely,
h_y =
d^-|y|, y∈^*∖{},
1, y= .
Note that h is a harmonic function on ^* ∖, i.e., h_y = ∑_z∈^*π_y,z h_z, y∈^*∖. We can therefore consider the Doob-transform of X, which is the random walk with transition probabilities away from the root given by
σ̌_y,z =
d/(d+1), z=y^↑,
1/(d(d+1)), z≠ y^↑, {y,z}∈ E,
0, else,
y ∈^*∖{},
and transition probabilities from the root are given by
σ̌_,z =
1/d, {,z}∈ E,
0, else.
Thus, the Doob-transform reverses the upward and the downward drift of X.
Recall from Lemma <ref> that ψ is the distribution of τ_ conditional on {τ_<∞} and on X leaving at time 0.
Let λ(θ) = log∫_0^∞^θτψ(τ). Then
^λ(θ)
= (d+1-θ)/2 · [1- √(1- 4d/(d+1-θ)^2) ], θ∈ (-∞,θ_c],
∞, else,
with θ_c = (√(d)-1)^2. The range of exp∘λ is (0,√(d) ], with the maximal value is uniquely taken at θ=θ_c.
To compute the moment-generating function of τ_, we consider the Doob-transform of X and its projection onto ℕ_0. Let p_2k = P(τ_ = 2k). It is well-known that (see <cit.>)
G^p,q(s) = (s^τ_|τ_ <∞) = ∑_k ∈ s^2k p_2k = 1/2p[1- √(1-4pqs^2)], |s| ≤ 1.
Therefore we have
^λ(θ) = (^θτ_)
= ∑_k ∈ p_2k [(^θ EXP(d+1))]^2k-1
= ∑_k ∈ p_2k(d+1/d+1 - θ)^2k-1
= (d+1 -θ/d+1) G^p,q(s)
with
p = 1/(d+1), q = d/(d+1), s = (d+1)/(d+1-θ).
Inserting (<ref>) into (<ref>), we get the formula for λ(θ). From the term in the square root we see that λ(θ) is finite if and only if θ≤θ_c = d+1-2√(d) = (√(d)-1)^2.
There is no easy closed form expression for (λ)(α), but it is easily checked that λ and λ have the shape in Fig. <ref>, with θ_c = (√(d)-1)^2 and α_c = ∫_0^∞τψ(τ)<∞, and with λ(θ_c) = log√(d)<∞ and λ'(θ_c)=∞, i.e., there is a cusp at the threshold θ_c, implying that λ is finite on (0,∞). It follows from (<ref>) that
lim_α→∞1/α (λ)(α) = lim_α→∞θ(α) = θ_c.
The function λ^-1∘log = (exp∘λ)^-1 is given by
(exp∘λ)^-1(β) = d+1 - β -d/β, β∈ (0,√(d) ].
The range of (exp∘λ)^-1 is (-∞,θ_c], with the maximal value θ_c uniquely taken at β = √(d).
We need to invert exp∘λ in (<ref>). Abbreviate χ = (d+1-θ)/2. Then
β = χ[1-√(1-d/χ^2) ] ⟹ χ = (β^2+d)/(2β) ⟹ θ = d+1 - 2χ = d+1 - (β^2 + d)/β.
Note that (√(d),∞) is not part of the domain of (exp∘λ)^-1, even though the right-hand side of (<ref>) still makes sense (as a second branch). Note that μ has domain (0,∞) and range (-∞,√(d) ] (see Fig. <ref>).
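As a quick check, evaluating the formula for (exp∘λ)^-1 at β = √(d) gives θ = d+1 - √(d) - d/√(d) = d+1 - 2√(d) = (√(d)-1)^2 = θ_c, confirming that the maximal value θ_c is attained at β = √(d), in agreement with the lemma above.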
§ ANALYSIS OF THE VARIATIONAL PROBLEM ON THE INFINITE REGULAR TREE
In this appendix we prove Theorem <ref>. Appendix <ref> formulates two theorems that imply Theorem <ref>, Appendix <ref> provides the proof of these theorems. Recall the definition of (V), I_E(p) and J_V(p) from (<ref>). Set
χ_(ϱ) = inf_p ∈_(V) [I_E(p) + ϱ J_V(p)], ϱ∈ (0,∞),
where _(V) = {p ∈(V) argmax p = }. Since (V), I_E and J_V are invariant under translations, the centering at is harmless.
§.§ Two properties
For every ϱ∈ (0,∞) the infimum in (<ref>) is attained, and every minimiser p is strictly positive, non-increasing in the distance to the root, and such that
∑_N∈_0∂ S_R log (R+1) ≤d+1/ϱ,
∂ S_R = ∑_∂ B_R()p(x),
where B_R() is the ball of radius R around .
The function ϱ↦χ_(ϱ) is strictly increasing and globally Lipschitz continuous on (0,∞), with lim_ϱ↓ 0χ_(ϱ) = d-1 and lim_ϱ→∞χ_(ϱ) = d+1.
Theorems <ref>–<ref> settle Theorem <ref>. Their proof uses the following two lemmas.
For every ϱ∈ (0,∞), the infimum in (<ref>) may be restricted to p ∈_(V) such that J_V(p) ≤ (d+1)/ϱ.
Let δ_∈_(V) denote the point measure at . Then, for all ϱ∈ (0,∞),
χ_(ϱ) ≤ I_E(δ_) + ϱ J_V(δ_) = (d+1) + ϱ× 0 = d+1.
Since I_E ≥ 0, we may restrict the infimum in (<ref>) to p with J_V(p) ≤ (d+1)/ϱ.
For every ϱ∈ (0,∞), there exists a c(ϱ) >0 such that the infimum in (<ref>) may be restricted to p∈𝒫_(V) such that J_V(p) ≥ c(ϱ).
Since J_V(p) = 0 if and only if p = δ_ is a point measure, it suffices to show that δ_ is not a minimiser of χ_(ϱ). To that end, for y ∈ V compute
∂/∂ p(y)[I_E(p) + ϱ J_V(p)] = 1 - ∑_z∼ y√(p(z)/p(y)) - ϱlog p(y) -ϱ.
Because p()>0, it follows that the right-hand side tends to -∞ as p(y) ↓ 0 for every y ∼. Hence, no p ∈_(V) with p(y) = 0 for some y ∼ can be a minimiser of (<ref>), or be the weak limit point of a minimising sequence. In particular, δ_ cannot.
§.§ Proof of the two properties
First observe that (V) and J_V are invariant under permutations, i.e., for any p ∈(V) and any relabelling π of the vertices in V, we have π p ∈(V) and J_V(π p)=J_V(p). The same does not hold for I_E, but we can apply permutations such that I_E(π p) ≤ I_E(p).
1.
Pick any p ∈(V). Pick any backbone = {x_0, x_1,⋯} that runs from x_0 = to infinity. Consider a permutation π that reorders the vertices in such that {(π p)(x)}_x ∈ becomes non-increasing. Together with the reordering, transport all the trees that hang off as well. Since π p is non-increasing along , while all the edges that do not lie on have the same neighbouring values in p and in π p, we have
I_E(π p) ≤ I_E(p).
Indeed,
12 [I_E(p) - I_E(π p)] = ∑_k ∈_0√((π p)(x_k) (π p)(x_k+1))
- ∑_k ∈_0√(p(x_k)p(x_k+1)),
where we use that p(x_0) = (π p)(x_0) (because p(x_0) ≥ p(x_k) for all k∈) and ∑_k∈ p(x_k) = ∑_k∈ (π p)(x_k). The right-hand side of (<ref>) is ≥ 0 by the rearrangement inequality for sums of products of two sequences <cit.>. In fact, strict inequality in (<ref>) holds unless p is constant along . But this is impossible because it would imply that p() = 0 and hence p(x) = 0 for all x ∈ V. Thus, p and being arbitrary, it follows from (<ref>) that any minimiser or minimising sequence must be non-increasing in the distance to . Indeed, if it were not, then there would be a along which the reordering would lead to a lower value of I_E+ϱ J_V. Hence we may replace (<ref>) by
χ_(ϱ) = inf_p ∈_^↓(V) [I_E(p) + ϱ J_V(p)], ϱ∈ (0,∞),
with _^↓(V) defined in (<ref>).
2.
Let p ∈_^↓(V). Estimate
J_V(p) = ∑_R ∈_0∑_x ∈∂ B_R() [-p(x)log p(x)]
≥∑_R ∈_0∑_x ∈∂ B_R()[-p(x)log(1/(R+1))],
where we use that p(x) ≤1/(R+1) for all x ∈∂ B_R(). Hence
J_V(p) ≥∑_R ∈_0∂ S_R log(R+1)
with ∂ S_R = ∑_x ∈∂ B_R() p(x). By Lemma <ref>, J_V(p) ≤d+1/ϱ, and so
∑_R ∈_0∂ S_R log(R+1) ≤d+1/ϱ.
The computation in (<ref>) shows that any p for which there exist z ∼ y with p(z)>0 and p(y)=0 cannot be a minimiser nor a weak limit point of a minimising sequence. Hence all minimisers or weak limit points of minimising sequences are strictly positive everywhere.
3.
Take any minimising sequence (p_n)_n∈ of (<ref>). By (<ref>), lim_R→∞∑_x ∉ B_R() p_n(x) = 0 uniformly in n∈, and so (p_n)_n∈ is tight. By Prokhorov's theorem, tightness is equivalent to (p_n)_n∈ being relatively compact, i.e., there is a subsequence (p_n_k)_k∈ that converges weakly to a limit p∈_^↓(V). By Fatou's lemma, we have lim inf_k→∞ I_E(p_n_k) ≥ I_E(p) and lim inf_k→∞ J_V(p_n_k) ≥ J_V(p). Hence
χ_(ϱ) = lim_k →∞ [I_E(p_n_k) + ϱ J_V(p_n_k)] ≥ I_E(p) + ϱ J_V(p).
Hence p is a minimiser of (<ref>).
The proof uses approximation arguments.
1.
We first show that ϱ↦χ_(ϱ) is strictly increasing and globally Lipschitz. Pick ϱ_1 < ϱ_2. Let p̅_ϱ_1 be any minimiser of (<ref>) at ϱ_1, i.e.,
χ_(ϱ_1) = I_E(p̅_ϱ_1) + ϱ_1 J_V(p̅_ϱ_1).
Estimate
[I_E(p̅_ϱ_1) + ϱ_1 J_V(p̅_ϱ_1)]
= [I_E(p̅_ϱ_1) + ϱ_2 J_V(p̅_ϱ_1)] - (ϱ_2 - ϱ_1)J_V(p̅_ϱ_1)
≥χ_(ϱ_2) - (ϱ_2 - ϱ_1) J_V(p̅_ϱ_1)
≥χ_(ϱ_2) - (ϱ_2 - ϱ_1) d+1/ϱ_1,
where we use Lemma <ref>. Therefore
χ_(ϱ_2) - χ_(ϱ_1) ≤ (ϱ_2-ϱ_1) d+1/ϱ_1.
Similarly, let p̅_ϱ_2 be any minimiser of (<ref>) at ϱ_2, i.e.,
χ_(ϱ_2) = I_E(p̅_ϱ_2) + ϱ_2 J_V(p̅_ϱ_2).
Estimate
[I_E(p̅_ϱ_2) + ϱ_2 J_V(p̅_ϱ_2)]
= [I_E(p̅_ϱ_2) + ϱ_1 J_V(p̅_ϱ_2)] + (ϱ_2 - ϱ_1) J_V(p̅_ϱ_2)
≥χ_(ϱ_1) + (ϱ_2 - ϱ_1) J_V(p̅_ϱ_2)
≥χ_(ϱ_1) + (ϱ_2 - ϱ_1) c(ϱ_2),
where we use Lemma <ref>. Therefore
χ_(ϱ_2) - χ_(ϱ_1) ≥ c(ϱ_2)(ϱ_2 - ϱ_1).
2.
Because χ_(ϱ) ≤ d+1 for all ϱ∈ (0,∞), it follows that lim_ϱ→∞χ_(ϱ) ≤ d+1. To obtain the reverse inequality, let p_ϱ be any minimiser of (<ref>) at ϱ. By Lemma <ref>, we may assume that J_V(p_ϱ) ≤d+1/ϱ. Hence lim_ϱ→∞ J_V(p_ϱ) = 0, and consequently lim_ϱ→∞p_ϱ= δ_ weakly. Therefore, by Fatou's lemma, lim_ϱ→∞χ_(ϱ) = lim_ϱ→∞ [I_E(p_ϱ) + ϱ J_V(p_ϱ)] ≥lim inf_ϱ→∞ I_E(p_ϱ) ≥ I_E(δ_) = d+1.
3.
To prove that lim_ϱ↓ 0χ_(ϱ) ≤ d-1, estimate
χ_(ϱ) ≤inf_p ∈_^↓(V)(p) ⊆ B_R() [I_E(p)+ϱ J_V(p)],
R ∈_0.
Because
sup_p ∈_^↓(V)(p) ⊆ B_R() J_V(p) = J_V(p_R) = log |B_R()|,
R ∈_0,
with
p_R(x) =
|B_R()|^-1, x ∈ B_R(),
0, else,
it follows that
lim_ϱ↓ 0χ_(ϱ)
≤inf_p ∈_^↓(V)(p) ⊆ B_R() I_E(p)
≤ I_E(p_R), R ∈_0.
Compute (recall (<ref>)),
I_E(p_R) = |∂ B_R+1()|/|B_R()|, R ∈_0.
Inserting the relations
|∂ B_R()| = {[ 1, R=0,; (d+1)d^{R-1}, R ∈, ].
|B_R()| = ∑_R'=0^R |∂ B_R'()| = 1 + (d+1)/(d-1)(d^R-1),
R ∈_0,
we get
I_E(p_R) = (d-1) (d+1)d^R/[(d+1)d^R-2].
Hence lim_R→∞ I_E(p_R) = d-1, and so lim_ϱ↓ 0χ_(ϱ) ≤ d-1.
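Indeed, the right-hand side of the previous display can be rewritten as I_E(p_R) = (d-1) [1 + 2/((d+1)d^R-2)], which decreases to d-1 as R →∞.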
4.
To prove that lim_ϱ↓ 0χ_(ϱ) ≥ d-1, note that because J_V ≥ 0 we can estimate
lim_ϱ↓ 0χ_(ϱ) ≥inf_p ∈_^↓(V) I_E(p).
It therefore suffices to show that
inf_p ∈_^↓(V) I_E(p) ≥ d-1,
i.e., (p_R)_R ∈_0 is a minimising sequence of the infimum in the left-hand side. The proof goes as follows. Write (recall (<ref>))
I_E(p) = (1/2) ∑_x,y ∈ Vx ∼ y(√(p(x)) - √(p(y)) )^2
= (1/2) ∑_x,y ∈ Vx ∼ y[p(x) + p(y) - 2 √(p(x)p(y)) ]
= (d+1) - ∑_x,y ∈ Vx ∼ y√(p(x)p(y)).
Since is a tree, each edge can be labelled by the end-vertex that is farthest from . Hence the sum in the right-hand side can be written as
∑_x ∈ V ∖ 2√(p(x)p(x^↓)),
where x^↓ is the unique neighbour of x that is closer to than x. Since 2√(p(x)p(x^↓))≤ p(x) + p(x^↓), it follows that
∑_x ∈ V ∖ 2√(p(x)p(x^↓))≤∑_x ∈ V ∖ p(x) + ∑_x ∈ V ∖ p(x^↓)
= [1-p()] + 1.
Therefore
I_E(p) ≥ d - 1 + p(),
which settles the claim.
§ LARGE DEVIATION ESTIMATE FOR THE LOCAL TIME AWAY FROM THE BACKBONE
In this appendix we derive a large deviation principle for the total local times at successive depths of the random walk on ^ (see Fig. <ref>). This large deviation principle is not actually needed, but serves as a warm up for the more elaborate computations in Appendix <ref>.
For k∈_0, let V_k be the set of vertices in ^ that are at distance k from the backbone (see Fig. <ref>). For R ∈, define
[ ℓ^R_t(k) = ∑_x ∈ V_kℓ^_t(x), k = 0,1,…,R,; ℓ_t^R = ∑_k > R∑_x∈ V_kℓ^_t(x), k= R+1, ]
and
L_t^R = 1/t ((ℓ_t(k))_k=0^R, ℓ^R_t).
Abbreviate V^*_R = {0,1,…,R,R+1}.
For every R ∈, (L_t^R)_t ≥ 0 satisfies the large deviation principle on (V^*_R) with rate t and with rate function I^†_R given by
I^†_R(p) = [√((d-1)p(0))-√(dp(1)) ]^2 + ∑_k=1^R-1[√(p(k))-√(dp(k+1)) ]^2
+ [√(p(R)+p(R+1)) - √(dp(R+1)) ]^2.
By monitoring the random walk on the tree in Fig. <ref> and projecting its depth on the vertices 0,1,…,R, respectively, R+1, we can apply the LDP in Proposition <ref> (see Fig. <ref>).
1.
The sojourn times have distribution EXP(d+1) at vertices k=0,1,…,R and distribution ψ at vertex k=R+1. The transition probabilities are
[ π_0,0 = 2/(d+1), π_0,1 = (d-1)/(d+1),; π_k,k+1 = 1/(d+1), π_k,k-1 = d/(d+1), k = 1,…,R,; π_R+1,R = 1. ]
Proposition <ref> therefore yields that (L_t^R)_t ≥ 0 satisfies the LDP on on (V^*_R) with rate t and with rate function I^†_R given by
I^†_R(p) = (d+1) ∑_k=0^R p(k) + inf_v V^*_R → (0,∞)sup_u V^*_R → (0,∞) L(u,v)
with
L(u,v) = - A - B - C,
where
A = ∑_k=1^R v(k) {1+log(du(k-1)+u(k+1)/u(k) p(k)/v(k))},
B = v(0) {1+log(2u(0)+(d-1)u(1)/u(0) p(0)/v(0))},
C = v(R+1) {log(u(R)/u(R+1))-(λ)(p(R+1)/v(R+1))}.
Here we use (<ref>) to compute A and B, and for C we recall that (λ) is the Legendre transform of the cumulant generating function λ of ψ computed in Lemma <ref>.
2.
We compute the infimum of L(u,v) over v for fixed u.
∙ For k=1,…,R,
∂ A/∂ v(k) = log(du(k-1)+u(k+1)/u(k) p(k)/v(k)),
⟹v̅_u(k) = p(k) du(k-1)+u(k+1)/u(k).
The second derivative is 1/v(k)>0.
∙ For k=0,
∂ B/∂ v(0) = log(2u(0)+(d-1)u(1)/u(0) p(0)/v(0)),
⟹v̅_u(0) = p(0) 2u(0)+(d-1)u(1)/u(0).
The second derivative is 1/v(0)>0.
∙ For k=R+1, the computation is more delicate. Define (recall (<ref>) in Appendix <ref>)
μ(α) = α (λ)^'(α) - (λ)(α).
The function μ has range (-∞,log√(d) ], with the maximal value uniquely taken at α=∞. Therefore there are two cases.
▸ u(R+1)/u(R) ≤√(d). Compute
∂ C/∂ v(R+1) = μ(p(R+1)/v(R+1)) - log(u(R+1)/u(R)),
⟹v̅(R+1) = p(R+1)/α_u(R+1)
with α_u(R+1) solving the equation
log(u(R+1)/u(R)) = μ(α_u(R+1)).
Since μ'(α) = α(λ)”(α) and λ is strictly convex (see Fig. <ref> in Appendix <ref>), μ is strictly increasing and therefore invertible. Consequently,
α_u(R+1) = μ^-1(log(u(R+1)/u(R))).
Putting (<ref>)–(<ref>) together, we get
L(u) = inf_v V^*_R → (0,∞) L(u,v)
= - ∑_k=1^R A_u(k) - B_u + C_u
with
A_u(k) = du(k-1)+u(k+1)/u(k) p(k), k = 1,…,R,
B_u = 2u(0)+(d-1)u(1)/u(0) p(0),
and
C_u = p(R+1)/α_u(R+1)[(λ)(α_u(R+1)) - log(u(R+1)/u(R))]
= p(R+1)/α_u(R+1)[(λ)(α_u(R+1)) - μ(α_u(R+1))]
= p(R+1) (λ)^'(α_u(R+1))
= p(R+1) ((λ)^'∘μ^-1)(log(u(R+1)/u(R))).
In (<ref>) in Appendix <ref> we showed that (λ)' ∘μ^-1 = λ^-1. Moreover, in (<ref>) in Appendix <ref> we showed that (λ^-1∘log) = S with
S(β) = d+1 - β - d/β, β∈ (0,√(d) ].
Since S has domain (0,√(d) ], C_u(R+1) is only defined when u(R+1)/u(R) ≤√(d), in which case
C_u = p(R+1) S(u(R+1)/u(R)).
▸ u(R+1)/u(R) > √(d). In this case ∂ C/∂ v(R+1)>0, the infimum is taken at v̅(R+1)=0, and hence (recall (<ref>))
C_u = p(R+1) (√(d)-1)^2 = p(R+1) S(√(d)).
Note that the right-hand side does not depend on u. The expressions in (<ref>)–(<ref>) can be summarised as
C_u = p(R+1) S(√(d)∧u(R+1)/u(R)).
3.
Next we compute the supremum over u of
L(u) = L(u,v̅_u) = - A_u - B_u + C_u.
with A_u = ∑_k=1^R A_u(k). We only write down the derivatives that are non-zero.
∙ For k=2,…,R-1,
- ∂ A_u/∂ u(k) = - p(k+1) d/u(k+1) - p(k-1) 1/u(k-1) + p(k) du(k-1)+u(k+1)/u(k)^2.
∙ For k=1,
- ∂ A_u/∂ u(1) = - p(2) d/u(2) + p(1) du(0)+u(2)/u(1)^2,
- ∂ B_u/∂ u(1) = - p(0) d-1/u(0).
∙ For k=R,
- ∂ A_u/∂ u(R) = - p(R-1) 1/u(R-1) + p(R) du(R-1)+u(R+1)/u(R)^2,
∂ C_u/∂ u(R) = p(R+1) [u(R+1)/u(R)^2 - d/u(R+1)]
1_{u(R+1)/u(R)≤√(d)}.
∙ For k=0,
-∂ A_u/∂ u(0) = - p(1) d/u(1),
-∂ B_u/∂ u(0) = p(0) (d-1)u(1)/u(0)^2.
∙ For k=R+1,
-∂ A_u/∂ u(R+1) = - p(R) 1/u(R),
∂ C_u/∂ u(R+1) = p(R+1) [-1/u(R) + du(R)/u(R+1)^2]
1_{u(R+1)/u(R)≤√(d)}.
All the first derivatives of A_u+B_u+C_u are zero when we choose
u̅(0) = √((d-1)p(0)), u̅(k) = √(d^kp(k)), k = 1,…,R,
u̅(R+1) = √(d^R+1 p(R)p(R+1)/p(R)+p(R+1)).
All the second derivatives are strictly negative, and so u̅ is the unique maximiser.
4.
Inserting (<ref>) into (<ref>), we get
L(u̅) = L(u̅,v̅_u̅) = - ∑_k=2^R-1 A_u̅(k)
- [A_u̅(1) + B_u̅] - A_u̅(R) + C_u̅
= -∑_k=2^R-1√(dp(k)) [√(p(k-1)) + √(p(k+1)) ]
- [2√(d(d-1)p(0)p(1)) + 2p(0) + √(dp(1)p(2)) ]
- [√(dp(R-1)p(R)) + √(p(R)/p(R)+p(R+1)) √(dp(R)p(R+1)) ]
+ p(R+1) S(√(dp(R+1)/p(R)+p(R+1)) ).
Recalling (<ref>), (<ref>) and (<ref>), and rearranging terms, we find the expression in (<ref>).
Note that I^†_R has a unique zero at p given by
p(0) = 1/2, p(k) = (1/2) (d-1)d^-k, k = 1,…,R, p(R+1) = (1/2) d^-R.
This shows that the fraction of the local time typically spent at distance k away from the backbone decays exponentially fast in k.
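As a consistency check, this p is indeed a probability distribution: using ∑_k=1^R d^-k = (1-d^-R)/(d-1), we get ∑_k=0^R+1 p(k) = 1/2 + (1/2)(d-1)(1-d^-R)/(d-1) + (1/2) d^-R = 1/2 + (1/2)(1-d^-R) + (1/2) d^-R = 1.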
§ ANALYSIS OF THE UPPER VARIATIONAL FORMULA
In this appendix we carry out the proof of the claims in Section <ref>, namely, we settle (<ref>) in Appendix <ref> and (<ref>) in Appendix <ref>. The computations carried out in Appendix <ref> guide us along the way.
§.§ Identification of the rate function for the local times on the truncated tree
To identify the rate function I^†_E(_R) in Lemma <ref>, we need to work out the two infima between braces in (<ref>). The computation follows the same line of argument as in Appendix <ref>, but is more delicate. We will only end up with a lower bound. However, this is sufficient for the upper variational formula.
To simplify the notation we write (recall Fig. <ref>):
(V_R,E_R) = vertex and edge set of _R without the tadpoles,
= top vertex of V_R,
⋆ = right-most bottom vertex of V_R,
∂ V_R = set of vertices at the bottom of V_R,
= set of tadpoles,
_x = tadpole attached to x ∈∂ V_R\⋆.
Note that ∂ V_R consists of ⋆ and the vertices to which the tadpoles are attached. Note that int(V_R) = V_R ∖∂ V_R includes .
1.
Inserting (<ref>) in Appendix <ref> into (<ref>)–(<ref>), we get
I^†_E(_R)(p) = (d+1) ∑_x∈ V_R p(x)
+ inf_β∈ (0,∞)inf_q ∈(V_R)sup_q∈(V_R) L(β,q,q| p)
with
L(β,q,q| p) = - A - B - C - D,
where
A = ∑_x ∈int(V_R)β q(x){1+log(∑_y ∼ xq(y)/q(x)p(x)/β q(x))},
B = ∑_x ∈∂ V_R\⋆β q(x){1+log(q(x^↑)
+ d q(_x)/q(x)p(x)/β q(x))},
C = β q(⋆) {1+log(q(⋆^↑) + d q()/q(⋆)p(⋆)/β q(⋆))},
D = ∑_x ∈β q(x){log(q(x^↑)/q(x))
- (λ)(p(x)/β q(x)) },
with λ the Legendre transform of the cumulant generating function of ψ (recall (<ref>)) and x^↑ the unique vertex to which x is attached upwards. (Recall that y ∼ x means that x and y are connected by an edge in E_R.) Note that A,B,C each combine two terms, and that A,B,C,D depend on p. We suppress this dependence because p is fixed.
2.
Inserting the parametrisation q = u/u_1 and q = v/v_1 with u,v V_R → (0,∞) and putting β q = v, we may write
I^†_E(^R)(p) = (d+1) ∑_x∈ V_R p(x) + inf_v V_R → (0,∞)sup_u V_R → (0,∞) L(u,v)
with
L(u,v) = - A - B - C - D,
where
A = ∑_x ∈int(V_R) v(x){1+log(∑_y ∼ xu(y)/u(x)p(x)/v(x))},
B = ∑_x ∈∂ V_R \⋆ v(x){1+log(u(x^↑)
+ d u(_x)/u(x)p(x)/v(x))},
C = v(⋆) {1+log(u(⋆^↑) + d u()/u(⋆)p(⋆)/v(⋆))},
D = ∑_x ∈v(x){log(u(x^↑)/u(x)) - (λ)(p(x)/v(x)) }.
Our task is to carry out the supremum over u and the infimum over v in (<ref>).
3.
First, we compute the infimum over v for fixed u. (Later we will make a judicious choice for u to obtain a lower bound.) Abbreviate
A_u(x) = ∑_y ∼ xu(y)/u(x) p(x), x ∈int(V_R),
B_u(x) = u(x^↑) + d u(_x)/u(x) p(x), x∈∂ V_R\⋆,
C_u(⋆) = u(⋆^↑) + d u()/u(⋆) p(⋆).
∙
For z ∈ V_R, the first derivatives of L are
z ∈int(V_R) ∂ L(u,v)/∂ v(z) = -log(A_u(z)/v(z)),
z ∈∂ V_R\⋆ ∂ L(u,v)/∂ v(z) = -log(B_u(z)/v(z)),
z = ⋆ ∂ L(u,v)/∂ v(z) = -log(C_u(z)/v(z)),
while the second derivatives of L equal 1/v(z)>0. Hence the infimum is uniquely taken at
x ∈int(V_R) v̅(x) = A_u(x),
x ∈ V_R \⋆ v̅(x) = B_u(x),
x = ⋆ v̅(x) = C_u(x).
∙ For z ∈, the computation is more delicate. Define (see (<ref>) in Appendix <ref>)
μ(α) = α (λ)^'(α) - (λ)(α).
The function μ has range (-∞,log√(d) ], with the maximal value uniquely taken at α=∞. Therefore there are two cases.
▸ u(x)/u(x^↑) ≤√(d):
Abbreviate α_u(z) = p(z)/v(z). For z ∈,
∂ L(u,v)/∂ v(z) = log(u(z)/u(z^↑))
+ (λ)(p(z)/v(z)) - p(z)/v(z) (λ)^'(p(z)/v(z))
= log(u(z)/u(z^↑)) - μ(α_u(z)),
∂^2 L(u,v)/∂ v(z)^2 =p^2(z)/v^3(z) (λ)^”(p(z)/v(z)) >0,
where we use that λ, being a Legendre transform, is strictly convex. Hence the infimum is uniquely taken at
v̅(x) = p(x)/α_u(x), x ∈,
with α_u(x) solving the equation
log(u(x)/u(x^↑))
= μ(α_u(x)), x ∈.
Since μ'(α) = α(λ)”(α) and λ is strictly convex (see Fig. <ref> in Appendix <ref>), μ is strictly increasing and therefore invertible. Consequently,
α_u(x) = μ^-1(log(u(x)/u(x^↑))), x ∈.
Putting the above formulas together, we arrive at (recall (<ref>))
L(u) = inf_v V_R → (0,∞) L(u,v)
= - ∑_x ∈int(V_R) A_u(x) - ∑_x∈∂ V_R\⋆ B_u(x) - C_u(⋆)
+ ∑_x ∈ D_u(x)
with (recall (<ref>))
D_u(x) = - p(x)/α_u(x)[log(u(x^↑)/u(x)) - (λ)(α_u(x))]
= p(x)/α_u(x)[(λ)(α_u(x)) - μ(α_u(x))]
= p(x) (λ)^'(α_u(x))
= p(x) ((λ)^'∘μ^-1)(log(u(x)/u(x^↑))).
In (<ref>) in Appendix <ref> we show that (λ)' ∘μ^-1 = λ^-1. Moreover, in (<ref>) in Appendix <ref> we show that (λ^-1∘log) = S with
S(β) = d+1 - β - d/β, β∈ (0,√(d) ].
Since S has domain (0,√(d) ], D_u(x) is only defined when u(x)/u(x^↑) ≤√(d), in which case
D_u(x) = p(x) S(u(x)/u(x^↑)), x ∈.
▸ u(x)/u(x^↑) > √(d): In this case ∂ L(u,v)/∂ v(z) > 0, the infimum is uniquely taken at v̅(x)=0, and
D_u(x) = p(x) (√(d)-1)^2 = p(x) S(√(d)), x ∈,
where we use (<ref>). Note that the right-hand side does not depend on u.
4.
Next, we compute the supremum over u. The first derivatives of L are
z ∈int(V_R) \ ∂ L(u)/∂ u(z)
= ∑_y ∼ z u(y)/u^2(z) p(z) - ∑_y ∼ z1/u(y) p(y),
z = ∂ L(u)/∂ u()
= ∑_y ∼ u(y)/u()^2 p() -∑_y: y^↑ = 1/u(y)p(y)
- d/u(⋆) p(⋆),
z = ⋆ ∂ L(u)/∂ u(⋆)
= -1/u() p() + u(⋆^↑) + du()/u(⋆)^2 p(⋆),
z ∈∂ V_R \⋆ ∂ L(u)/∂ u(z)
= -1/u(z^↑) p(z^↑) + u(z^↑)+du(_z)/u(z)^2 p(z)
+ [u(_z)/u(z)^2 - d/u(_z)]p(_z)
1_{u(z)/u(z^↑)≤√(d)},
z ∈ ∂ L(u)/∂ u(z)
= -d/u(z^↑) p(z^↑)
+ [-1/u(z^↑) +du(z^↑)/u(z)^2] p(z)
1_{u(z)/u(z^↑)≤√(d)}.
The second derivatives of L are all <0. The first line in (<ref>) can be rewritten as
∑_y ∼ z u(y) [p(z)/u^2(z) - p(y)/u^2(y)],
which is zero when
u̅(x) = √(p(x)), x ∈ V_R.
Given the choice in (<ref>), the fifth line in (<ref>) is zero when
u̅(x) = √(dp(x^↑)p(x)/dp(x^↑)+p(x)), x ∈.
Indeed, the derivative is strictly negative when the indicator is 0 and therefore the indicator must be 1. But the latter is guaranteed by (<ref>)–(<ref>), which imply that
u̅(x)/u̅(x^↑) = √(dp(x)/dp(x^↑)+p(x))≤√(d), x ∈.
Given the choice in (<ref>)–(<ref>), also the fourth line in (<ref>) is zero. Thus, only the second and third line in (<ref>) are non-zero, but this is harmless because ,⋆ carry a negligible weight in the limit as R →∞ because of the constraint p(∂ V_R ∪) ≤ 1/R in Lemma <ref> (recall (<ref>)).
Inserting (<ref>)–(<ref>) into (<ref>) and using (<ref>), (<ref>), we get the following lower bound:
sup_u V_R → (0,∞) L(u)
≥ - ∑_x ∈int(V_R) A_u̅(x)
- ∑_x∈∂ V_R\⋆ B_u̅(x)
- C_u̅(⋆) + ∑_x ∈ D_u̅(x)
= - ∑_x ∈int(V_R)∑_y ∼ x√(p(y)p(x))
- ∑_x∈∂ V_R \⋆√(p(x))(√(p(x^↑))
+ d√(dp(x)p(_x)/dp(x)+p(_x)))
-√(p(⋆))(√(p(⋆^↑))+ d√(p()))
+ ∑_x ∈ p(x) (d+1-√(d)[√(p(x)/d p(x^↑) + p(x))
+ √(d p(x^↑) + p(x)/p(x)) ]).
5.
Using the relation (d+1) p(x) = ∑_y∼ x p(x), x∈int(V_R), we get from (<ref>) that
I^†_E(^R)(p) ≥ K^1_R(p) + K^2_R(p)
with
K^1_R(p)
= ∑_x ∈int(V_R)∑_y ∼ x[p(x) - √(p(x)p(y)) ]
= ∑_{x,y}∈E_R(√(p(x)) - √(p(y)) )^2
+ [p()-√(p()p(⋆)) ] - ∑_x∈∂ V_R[ p(x) - √(p(x)p(x^↑)) ]
and
K^2_R(p)
= ∑_x∈∂ V_R \⋆[(d+1) p(x) - √(p(x))(√(p(x^↑))
+ d√(dp(x)p(_x)/dp(x)+p(_x)))]
+ (d+1) p(⋆)-√(p(⋆))(√(p(⋆^↑)) + d√(p()))
+ ∑_x ∈ p(x) [d+1-√(d) (√(p(x)/d p(x^↑) + p(x))
+ √(d p(x^↑) + p(x)/p(x)) )].
The first sum in the right-hand side of K^1_R(p) equals the standard rate function I_E_R(p) given by (<ref>), with
E_R = E_R ∖{,⋆}
the set of edges in the unit _R without the tadpoles and without the edge {,⋆} (i.e., E_R = E(^*_R); recall Fig. <ref>). Rearranging and simplifying terms, we arrive at
I^†_E(^R)(p) ≥ I_E_R(p)+ K^3_R(p)
with
K^3_R(p) = S_∂ V_R \⋆(p) + S_,⋆(p) + S_(∂ V_R \⋆) ∪(p),
where
S_∂ V_R \⋆(p)
= d ∑_x∈∂ V_R \⋆ p(x),
S_,⋆(p)
= (√(p()) - √(p(⋆)))^2 + (d-1)[p(⋆) - √(p()p(⋆)) ],
S_(∂ V_R \⋆) ∪(p)
= - ∑_x∈∂ V_R \⋆ p(x) d√(dp(_x)/dp(x)+p(_x))
+ ∑_x∈∂ V_R \⋆ p(_x) (d+1-√(d) [√(p(_x)/d p(x) + p(_x))
+ √(d p(x) + p(_x)/p(_x)) ]).
6.
Since √(p()p(⋆))≤ (1/2)[p()+p(⋆)], the boundary constraint ∑_x∈∂ V_R ∪ p(x) ≤ 1/R implies that S_∂ V_R \⋆(p) + S_,⋆(p) = O(1/R). The same constraint implies that the first sum in S_(∂ V_R \⋆) ∪(p) is O(1/R). Hence
K^3_R(p) = O(1/R) + ∑_x∈∂ V_R \⋆ p(x) F(p(_x)p(x))
with
F(w) = w (d+1-√(d) [√(w/d+w) + √(d+w/w) ]).
The map w ↦ F(w) is continuous on (0,∞) with
F(w) = {[ √(w) + (d+1)w + O(w^3/2), w ↓ 0,; [(d+1)-2√(d) ] w + O(w^-1), w →∞. ].
From this we see that if d ≥ 4, then there exists a C ∈ (1,∞) such that
F(w)+C ≥(1-√(w) )^2, w ∈ [0,∞).
Hence we have the lower bound
K^3_R(p)
≥ O(1/R) + ∑_x∈∂ V_R \⋆
p(x) [-C + (1-√(p(_x)p(x)) )^2]
= O(1/R) + ∑_x∈∂ V_R \⋆(√(p(x))-√(p(_x)) )^2.
Via (<ref>)–(<ref>), it follows that
I^†_E(^R)(p) ≥ O(1/R) + I_E_R(p), R ∈,
with I_E_R(p) the standard rate function given by (<ref>), with
E_R = E_R ∪[∪_x ∈∂ V_R ∖⋆{x,_x}]
the set of edges in the unit _R that is obtained from the unit _R by removing the edge {,⋆} (i.e., E_R = E(_R); recall Fig. <ref>). This completes the proof of (<ref>).
The condition d ≥ 4 is needed only in (<ref>). For d=2,3 we have F(w)+C ≥θ_c(1-√(w) )^2 with θ_c = d+1-2√(d)∈ (0,1). Consequently, the edges {x,_x}, x ∈∂ V_R∖⋆, carry a weight that is smaller than that of the edges in , which may cause the optimal p to stick to the boundary as R→∞, in which case we do not have (<ref>).
§.§ Limit of the upper variational formula
Note that
_R ⊆,
with the infinite tree. Consequently,
I_E_R(p) = I_E()(p) - (d-1) ∑_x ∈∂ V_R ∖⋆ p(x),
∀ p ∈(V()) (p) = V(_R),
where the sum compensates for the contribution coming from the edges in that link the vertices in ∂ V_R ∖⋆ to the vertices one layer deeper in that are not tadpoles. Since this sum is O(1/R), we obtain (recall (<ref>))
χ^+_R(ϱ) = inf_p ∈(V(_R))p(∂__R) ≤ 1/R{I^†_E(_R)(p) + ϱ J_V(_R)(p)}
≥ O(1/R) + inf_p ∈(V())(p) = V(_R),
p(∂__R) ≤ 1/R{I_E()(p) + ϱ J_V()(p)}
≥ O(1/R) + χ_(ϱ),
where the last inequality follows after dropping the constraint under the infimum and recalling (<ref>). This completes the proof of (<ref>).
99
A2016
A. Astrauskas,
From extreme values of i.i.d. random fields to extreme eigenvalues of finite-volume Anderson Hamiltonian,
Probab. Surv. 13, 156–244, 2016.
AGH2016
L. Avena, O. Gün, M. Hesse,
The parabolic Anderson model on the hypercube,
Stoch. Proc. Appl. 130, 3369–3393, 2020.
DV75
M.D. Donsker and S.R.S. Varadhan,
Asymptotic evaluation of certain Markov process expectations for large time,
Comm. Pure Appl. Math. (I) 28, 1–47, 1975; (II) 28, 279–301, 1975; (III) 29, 389–461, 1976; (IV) 36, 183–212, 1983.
FM1990
K. Fleischmann, S.A. Molchanov,
Exact asymptotics in a mean field model with random potential,
Probab. Theory Relat. Fields 86, 239–251, 1990.
G1977
J. Gärtner,
On large deviations from the invariant measure,
Theory Probab. Appl. 22, 24–39, 1977.
GdH1999
J. Gärtner, F. den Hollander,
Correlation structure of intermittency in the parabolic Anderson model,
Probab. Theory Relat. Fields 114, 1–54, 1999.
GM1990
J. Gärtner, S.A. Molchanov,
Parabolic problems for the Anderson model I. Intermittency and related problems,
Commun. Math. Phys. 132, 613–655, 1990.
GM1998
J. Gärtner, S.A. Molchanov,
Parabolic problems for the Anderson model II. Second-order asymptotics and structure of high peaks,
Probab. Theory Relat. Fields 111, 17–55, 1998.
HLP1952
G.H. Hardy, J.E. Littlewood, G. Pólya,
Inequalities,
Cambridge Mathematical Library (2nd. ed.), Cambridge University Press, 1952.
dHLDP2000
F. den Hollander,
Large Deviations,
Fields Institute Monographs 14, Providence RI, American Mathematical Society, 2000.
dHKdS2020
F. den Hollander, W. König, R.S. dos Santos,
The parabolic Anderson model on a Galton-Watson tree,
in Out of Equilibrium 3: Celebrating Vladas Sidoravicius
(eds. M.E. Vares, R. Fernandez, L.R. Fontes, C.M. Newman),
Progress in Probability 77, Birkhäuser, 2021, pp. 591–635.
dHW2021
F. den Hollander, D. Wang,
The parabolic Anderson model on a Galton-Watson tree revisited,
J. Stat. Phys. 189, paper no. 8, 1–30, 2022.
LP2016
R. Lyons, Y. Peres,
Probability on Trees and Networks,
Cambridge Series in Statistical and Probabilistic Mathematics 42,
Cambridge University Press, New York, 2016.
K2016
W. König,
The Parabolic Anderson Model,
Pathways in Mathematics, Birkhäuser, 2016.
MZ2016
M. Mariani, L. Zambotti,
Large deviations for the empirical measure of heavy-tailed Markov renewal processes,
Adv. Appl. Probab. 48, 648–671, 2016.
S1976
F. Spitzer,
Principles of Random Walk (2nd ed.),
Graduate Texts in Mathematics, Springer, 1976.
|
http://arxiv.org/abs/2307.06095v1 | 20230712113817 | Exact Resource Allocation over Fair Wireless Relay Networks | [
"Edgar Arribas",
"Vicent Cholvi",
"Vincenzo Mancuso"
] | cs.NI | [
"cs.NI"
] |
|
http://arxiv.org/abs/2307.04493v1 | 20230710113115 | Geometric Constraints in Probabilistic Manifolds: A Bridge from Molecular Dynamics to Structured Diffusion Processes | [
"Justin Diamond",
"Markus Lill"
] | cs.LG | [
"cs.LG",
"q-bio.QM"
] |
Geometric Constraints in Probabilistic Manifolds: A Bridge from Molecular Dynamics to Structured Diffusion Processes
Justin Diamond, Markus Lill (equal contribution)
Department of Pharmaceutical Sciences, University of Basel, Basel, Switzerland
Justin [email protected]
Understanding the macroscopic characteristics of biological complexes demands precision and specificity in statistical ensemble modeling. One of the primary challenges in this domain lies in sampling from particular subsets of the state-space, driven either by existing structural knowledge or specific areas of interest within the state-space.
We propose a method that enables sampling from distributions that rigorously adhere to arbitrary sets of geometric constraints in Euclidean spaces. This is achieved by integrating a constraint projection operator within the well-regarded architecture of Denoising Diffusion Probabilistic Models, a framework founded in generative modeling and probabilistic inference.
The significance of this work becomes apparent, for instance, in the context of deep learning-based drug design, where it is imperative to maintain specific molecular profile interactions to realize the desired therapeutic outcomes and guarantee safety.
§ INTRODUCTION
Infinitesimal Dynamics in classical mechanics is commonly formalized by Lagrangians.
By solving for the trajectories that extremize the action, i.e., the time integral of the Lagrangian, one obtains the equations of motion (EOM). In molecular systems, e.g., Molecular Dynamics, the EOM are:
Md^2x/dt^2=-∇U - ∑_aλ_a∇σ_a,
where M is the diagonal mass matrix, x the cartesian coordinates, t is time,
and U is the potential energy. The σ_a are a set of holonomic constraints and λ_a are the Lagrange multiplier coefficients. To generalize from holonomic to nonholonomic constraints, one can use slack variables to transform the latter into the first.
Starting with z_x, z_h = f(x,h) = [x(0), h(0)] + ∫_0^1ϕ(x(t), h(t))dt, where z is a latent vector sampled from Gaussians, the indices x and h indicate the latent variables associated with the coordinates and the vector embedding of each particle, and ϕ is the parameterized transformation defined by an equivariant graph neural network. This defines a Neural ODE <cit.>, which generalizes to Denoising Diffusion Probabilistic Models <cit.>.
This form of transformation has the same infinitesimal nature as the EOM above, which makes it possible to apply sets of constraints via Lagrange multipliers, analogous to solving the constrained EOM; one can thus ensure the continual satisfaction of a set of constraints
using the Shake algorithm from Molecular Dynamics.
The study of constrained dynamics in Molecular Dynamics and Machine Learning has traditionally focused on mostly linear constraints: e.g., removing high-frequency oscillations by constraining bond distances in the former and in-painting in the latter by thresholding certain pixel values to predetermined values. From a high level these can be seen as linear constraint problems, as the constrained subset affects the unconstrained subset to a minimal degree. In addition, our task is more challenging, as different constraints induce different geometric topological structures, such that some sets of distance constraints can uniquely determine the solution, and small modifications in the constraints may lead to vast changes in the solution set.
The problems we hope to model are non-linear constraints where constrained subsets of atoms determine the unconstrained subset to a high degree. We argue these types of non-linear constraints are important in the field of generative drug development, where generated molecules must satisfy certain structural or analytic properties a priori. Take, for instance, the optimization of lead molecules, which is crucial at the final stages of the drug development pipeline, where off-target interactions must be minimized. Since these off-target reactions can often be described by structural or analytic properties, we can generate precisely those molecules that satisfy the constraint profile of the target of interest, while requiring the generated molecules not to lie within the subspace of off-target interaction profiles.
In the following, we will give a summary of the Shake algorithm and segments of the equivariant normalizing flow necessary to elaborate on how to combine them. Next, it will be elaborated that the spaces of latent embeddings and output samples are generally of very different nature, and constraints defined in one space will not necessarily be useful in the other. We suggest a continuous transformation of
the constraints such that they are always satisfied in the latent space, and become more restrictive throughout the integration. Lastly, we show simple examples where complex constraints are satisfied within small molecules. We leave to future work the study of this methodology to larger systems, and more application based studies. Our approach builds a fruitful junction where probabilistic inference, structured data representation, and generative modeling meet, while emphasizing the necessity to encode domain knowledge effectively in these settings, offering a way to formally verify the distributions from which samples are drawn.
§ PREVIOUS RESEARCH
Generative models of graphs have been a subject of interest in recent years. A number of different approaches have been proposed in the literature. <cit.> generates valid Euclidean distance matrices, which are then reconstructed in 3D space, ensuring the resulting molecular structures are physically realistic. In <cit.>, Boltzmann Generators sample equilibrium states of many-body systems with deep learning, useful for generating molecular configurations that obey thermodynamic distributions.
<cit.> proposed Equivariant Graph Neural Networks, which can be applied to model molecules and proteins while ensuring that their predictions are consistent under different orientations and permutations of the molecule. <cit.> further extended the concept to the diffusion process for 3D molecule generation. <cit.> applied similar methodologies to diffusion models on protein-ligand complexes, and <cit.> devise protein generation models that diffuse over harmonic potentials.
The Shake algorithm, described in a parallelized fashion by <cit.>, enforces linear constraints on molecular dynamics simulations of chemicals and biomolecules. This algorithm is conventionally used in simulations to get rid of high frequency motions, i.e. those seen in bonds between atoms.
§ CONSTRAINED GENERATIVE PROCESSES
§.§ Geometric Constraints in Shake
First, we define the constraint functions for the pairwise distance (not necessarily between bonded atoms), bond angle, and dihedral angle.
σ_d_ij = (d_ij - d_ij,0)^2 = 0
σ_θ_ijk = (θ_ijk - θ_ijk,0)^2 = 0
σ_ψ_ijkl = (ψ_ijkl - ψ_ijkl,0)^2 = 0
These constraint functions compare the current pairwise distance, bond angle, and dihedral angle with their target values, and the goal is to minimize the difference. We can additionally create nonholonomic constraints via slack variables. For example, we can add a slack variable y ≥ 0 and define d_j as the boundary of a nonholonomic constraint. Then, we can express the constraint as:
σ_a := ||x_aj - x_ak||^2_2 - d_j ≤ 0 → ||x_aj - x_ak||^2_2 - d_j + y= 0.
Next, modify the constraint matrix in the Shake algorithm to include the pairwise distance, bond angle, and dihedral angle constraints seen in equation 4, where ij, ijk, and ijkl sum over the pairwise, bond angle, and torsion constraints, the indices indicating the number of atoms in each constraint type.
The constraint matrix now accounts for the pairwise distance, bond angle, and dihedral angle constraints by including their second-order derivatives with respect to the Cartesian coordinates by including their contributions to the Lagrange multipliers. After solving for the Lagrange multipliers, update the coordinates using the adjusted coordinate set equation like before.
It is also possible to try to optimize the coordinates via other optimization algorithms like ADAM or SGD.
In this section, we discuss the methods needed to understand how constraints can be represented, and define a novel diffusion process which projects the dynamics onto the submanifold defined by arbitrary sets of geometric constraints.
§.§ Shake Algorithm
The Shake algorithm takes as input a set of coordinates x of a molecular system and a set of constraints σ. At each time step the coordinates
are updated according to the equations of motion (EOM) at hand (without constraint terms) and subsequently are corrected. In general, the EOM will lead to dynamics that do not
satisfy the constraints, and thus this correction is mandatory.
Assuming the masses of all particles and the time step are unity, we have the following equation for updating x_i iteratively until the constraints are satisfied.
x_i^(n)= x_i^(n-1) - ∑_bλ_b^(n-1)∇σ_b(x_i)
where x_i^(n) is the updated coordinate after n iterations of
satisfying constraints at each time step, x_i is the initial coordinate at each time step, and λ_b^(n-1) is the Lagrange multiplier for each
constraint σ_b. The equation to solve at each iteration of each time step is
∑_βλ_β^(n-1)A_αβ^(n-1)= σ_α(x_i^(n-1))
with
A_αβ^(n-1)= ∇σ_α(x_i^(n-1)) ∇σ_β(x_i).
The matrix A^(n-1)_αβ is a symmetric matrix that describes how changes in particle positions affect both potential energy and constraint violations. The elements of the matrix are given by:
A^(n-1)_αβ = ∂^2 U/∂ x_α∂ x_β + ∑_k=1^N_cλ^(n-1)_k∂^2 σ_k/∂ x_α∂ x_β
where N_c is the number of constraints. The matrix A^(n-1)_αβ is used to solve for the Lagrange multipliers λ^(n)_β , which are then used to adjust particle positions.
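As an illustration of the update rule above, the following is a minimal sketch (our own, not taken from the paper) of a Shake-style iteration for pairwise distance constraints with unit masses; it uses the equivalent squared form of the distance constraint, and the tolerance and iteration cap are arbitrary choices:

import numpy as np

def shake_distances(x, pairs, d0, tol=1e-8, max_iter=500):
    """Iteratively adjust coordinates x (shape (N, 3)) so that ||x_i - x_j|| = d0
    for every constrained pair (i, j), via x <- x - lambda * grad(sigma) per constraint."""
    x = x.copy()
    for _ in range(max_iter):
        converged = True
        for (i, j), d in zip(pairs, d0):
            rij = x[i] - x[j]
            sigma = rij @ rij - d ** 2             # sigma = |r_ij|^2 - d^2
            if abs(sigma) > tol:
                converged = False
                lam = sigma / (8.0 * (rij @ rij))  # first-order solve: lambda = sigma / |grad sigma|^2
                x[i] -= lam * 2.0 * rij            # grad_i sigma =  2 r_ij
                x[j] += lam * 2.0 * rij            # grad_j sigma = -2 r_ij
        if converged:
            break
    return x

# toy usage: pull atoms 0 and 1 of a random 4-atom cloud to distance 1.5
coords = shake_distances(np.random.randn(4, 3), pairs=[(0, 1)], d0=[1.5])
print(np.linalg.norm(coords[0] - coords[1]))       # ~1.5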
§.§ Constraint-Induced Diffusion Process
Suppose we want to incorporate a constraint, such as a distance constraint between two atoms. Let's denote this constraint by f(x) = 0 for simplicity. We can modify the diffusion process to satisfy this constraint by projecting the noise term onto the nullspace of the gradient of the constraint function, analagous to the A matrix in Shake. This gives us:
dx = √(2D) (I - ∇ f(x) (∇ f(x))^T) dB - D ∇log p_t(x) dt
where D is the diffusion constant, B is a standard Brownian motion, and ∇log p_t(x) is the gradient of the log-probability density, which corresponds to the negative gradient of the potential energy function of the system.
Here, I is the identity matrix, and ∇ f(x) (∇ f(x))^T is the outer product of the gradient of the constraint function, which represents the direction in which the constraint is changing. This projection ensures that the noise term does not push the system out of the constraint-satisfying space.
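As a concrete sketch (ours, not the authors' implementation) of the projected update above for a single distance constraint f(x) = ||x_i - x_j|| - d_0, where the gradient is normalised so that the outer product is an orthogonal projector, which the expression above implicitly assumes:

import numpy as np

def projected_noise_step(x, i, j, D=1.0, dt=1e-3, grad_log_p=None):
    """One Euler step of dx = sqrt(2D) (I - g g^T) dB - D grad(log p) dt, where g is the
    normalised gradient of f(x) = ||x_i - x_j|| - d0 (the gradient does not depend on d0)
    and x is a flat (3N,) coordinate vector."""
    grad_f = np.zeros_like(x)
    xi, xj = x[3 * i:3 * i + 3], x[3 * j:3 * j + 3]
    u = (xi - xj) / np.linalg.norm(xi - xj)
    grad_f[3 * i:3 * i + 3] = u                      # d f / d x_i
    grad_f[3 * j:3 * j + 3] = -u                     # d f / d x_j
    g = grad_f / np.linalg.norm(grad_f)
    P = np.eye(x.size) - np.outer(g, g)              # projector onto nullspace of grad f
    dB = np.sqrt(dt) * np.random.randn(x.size)
    drift = 0.0 if grad_log_p is None else -D * grad_log_p(x) * dt   # drift term as written above
    return x + np.sqrt(2 * D) * (P @ dB) + drift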
The covariance matrix of the perturbed Gaussian distribution of the denoising process can be understood formally using the Schur complement method, available in the Appendix. The key takeaway is the relation between constraints and correlations via projecting out the constraints in the covariance matrix of a multivariate Gaussian. This modified covariance matrix then defines the perturbed Gaussian distribution from which we can sample at each time step of the diffusion process. This is a good approximation when the constraints are nearly linear or when the changes in the variables are small. One note is that if the projection operator is non-linear then the process is no longer Gaussian, but since we deal with linearized constraints, or small changes at each time step, this is negligible as seen in the original Shake formalism. However, the Schur Complement method gives a more general formalism to ensure Gaussian-ness.
§.§ Constraints as Correlations
Consider, for instance, a scenario involving pairwise distance constraints between a set of variables denoted as d = d_ij, where d_ij signifies the distance separating variables i and j. These constraints can be mathematically expressed through the set of functions C_ij(ϵ) = ||ϵ_i - ϵ_j|| - d_ij = 0, which is applicable to all corresponding variable pairs (i, j) ∈d, influencing the samples drawn from a Multivariate Normal distribution.
The introduction of these geometric constraints essentially interrelates variables that were initially independent in the Gaussian distribution. In order to comprehend the implications of these constraints, the covariance matrix Σ' of the perturbed distribution p'(ϵ') is worth examining:
Σ' = 𝔼_ϵ' ∼ p' [ϵ' (ϵ')^T] - 𝔼_ϵ' ∼ p' [ϵ'] 𝔼_ϵ' ∼ p' [ϵ']^T,
Here, the expectations are calculated over the perturbed distribution. The covariance matrix Σ' elucidates the correlations among variables that emerge as a result of the geometric constraints.
Importantly, these correlations, which are encoded within the covariance matrix of a multivariate Gaussian distribution, represent the constraints in the distribution. This provides a way to naturally incorporate constraint-based information into the model.
§.§ Training and Sampling Algorithms
§.§.§ Training Process
During training, in Algorithm 2, we first sample a time step t and noise vector ϵ from uniform and Gaussian distributions respectively. Then subtract the center of gravity from the noise vector to ensure that it lies on a zero center of gravity subspace. Then compute the latent variable z_t by scaling and adding the input coordinates [x,h] with the noise vector. Finally, minimize the difference between the estimated noise vector and output of the neural network to optimize EDM. For each molecule between 5 and 15 constraints are sampled from x for each batch element. The constraints are uniformly sampled from the pairs, triples, and quadruplets of the atom set of each molecule. This adds an extra layer of complexity due to the constraint distribution which we need to sample from the true data distribution.
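A minimal sketch of one such training step (our paraphrase of the description above, not the released code; the denoising network `model`, the noise schedule and the constraint sampler are placeholders):

import torch

def training_step(model, x, h, alphas, sigmas, n_constraints=(5, 15)):
    # x: (N, 3) atom coordinates, h: (N, F) node features of one molecule
    t = torch.randint(0, len(alphas), (1,)).item()        # sample a time step
    eps_x = torch.randn_like(x)
    eps_x = eps_x - eps_x.mean(dim=0, keepdim=True)        # zero centre of gravity
    eps_h = torch.randn_like(h)
    z_x = alphas[t] * x + sigmas[t] * eps_x                # noised coordinates
    z_h = alphas[t] * h + sigmas[t] * eps_h
    # sample 5-15 geometric constraints from the data molecule (pairs only here);
    # these are the constraints enforced by the Shake projection during diffusion
    k = torch.randint(n_constraints[0], n_constraints[1] + 1, (1,)).item()
    idx = torch.randint(0, x.shape[0], (k, 2))
    d_target = (x[idx[:, 0]] - x[idx[:, 1]]).norm(dim=-1)  # passed to the projection in the full pipeline
    pred_x, pred_h = model(z_x, z_h, t)                    # EGNN denoiser (placeholder)
    return ((pred_x - eps_x) ** 2).mean() + ((pred_h - eps_h) ** 2).mean()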
§.§.§ Generative Process
In this generative process, we first sample a latent variable z_T from a Gaussian distribution. Then iterate backwards through time and sample noise vectors ϵ at each step. Subtract the center of gravity of the coordinates from the noise vector to ensure that it lies on a zero center of gravity subspace. Then compute the latent variable z_s by scaling and adding the input coordinates with the noise vector and previous latent variable. Finally, sample the input coordinates [x,h] from a conditional distribution given the initial latent variable z_0. The Shake algorithm enforces the constraints, as in training, at each sampling step during generation.
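Correspondingly, a hedged sketch of the generation loop (again ours, with schedule details omitted); `shake_project` stands for the iterative projection of the sampled constraints, e.g. the distance version sketched in the Shake section:

import torch

def sample(model, shake_project, constraints, n_atoms, n_feat, alphas, sigmas):
    z_x = torch.randn(n_atoms, 3)
    z_x = z_x - z_x.mean(dim=0, keepdim=True)              # zero centre of gravity
    z_h = torch.randn(n_atoms, n_feat)
    for t in reversed(range(1, len(alphas))):
        eps_x, eps_h = model(z_x, z_h, t)                  # predicted noise
        z_x = (z_x - sigmas[t] * eps_x) / alphas[t]        # one (simplified) reverse step
        z_h = (z_h - sigmas[t] * eps_h) / alphas[t]
        z_x = z_x - z_x.mean(dim=0, keepdim=True)
        z_x = shake_project(z_x, constraints)              # enforce geometry at every step
    return z_x, z_h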
§ EXPERIMENTS
In the experimental section of our study, we evaluate our proposed method by generating molecules with cyclic constraints in Figure 1. The cyclic constraints impose specific geometric relationships among atoms in a molecule, such as the bond distances, bond angles, and torsional angles, which are essential for maintaining the chemical stability and physical plausibility of the generated molecules.
During the training phase, constraints are sampled from the dataset. This approach encourages the model to learn the distribution of constraints inherent in the training data, which reduces the Kullback-Leibler (KL) divergence between the data distribution and the model distribution. Consequently, the KL divergence during training is always minimized, promoting the model to generate molecules that closely resemble those in the training set.
For the practical implementation of this training procedure, we began with a pre-trained model provided by Welling et al. Our methodology then fine-tuned this pre-existing model using our constraint projection method. Due to time considerations and simplicity, our training and experiments focused on molecules consisting of 21 atoms.
§ DISCUSSION
Our method serves as a potent tool for incorporating complex constraints in denoising diffusion processes, specifically when dealing with multi-constraint specifications. Its iterative nature allows it to address nonlinear constraint problems and extends the power of denoising diffusion probabilistic models to work with constraints. Thus allowing these models to leverage the structure inherent in many physical systems. Indeed, many of these systems come with prior structural knowledge, including geometric information like distances, torsions, bond angles, and generalizeable to other piece-wise polynomial terms. Such information can significantly enhance the training process and enable explicit sampling of subsets of the state space.
Although constraints can guide generation towards more physically plausible structures, there can be potential instability in the generation process. This instability may originate from discrepancies between constraints used during training and those applied during generation. It underlines the need for further work to establish robust training procedures that align more closely with the generation constraints, especially for application-focused studies like generating peptides or ligands with specific interaction profiles.
Though the language of our work is steeped in the semantics of Molecular Generation, the way we use geometric constraints to guide sampling mirrors a more general need of generative models in ML, which must navigate complex, structured probability spaces.
Further exploration could include adapting our methodology to discern constraints intrinsically or applying it to optimization processes like gradient-based learning and potentially lead to more efficient or robust learning algorithms.
§ APPENDIX A: GENERALIZED SCHUR COMPLEMENT FOR MULTIPLE CONSTRAINTS
To obtain a generalized Schur complement approach for multiple distance constraints, let us consider a set of M pairwise constraints between atoms. We can express each constraint as a function of the positions of the corresponding atoms:
f_m(𝐱_i, 𝐱_j) = ||𝐱_i - 𝐱_j||^2 - d_ij^2 = 0, m = 1, 2, …, M,
where d_ij is the distance constraint between atoms i and j.
To incorporate all the constraints, we can form the combined gradient and Hessian matrices by stacking the corresponding matrices for each constraint:
∇𝐟 = [ ∇ f_1 ∇ f_2 ⋮∇ f_M ],
∇^2 𝐟 = [ ∇^2 f_1 ∇^2 f_2 ⋮∇^2 f_M ].
To project the Gaussian distribution with the original covariance matrix Σ onto the space of distance constraints, we can use the following generalized Schur complement:
Σ' = Σ - Σ∇^2 𝐟^T (∇^2 𝐟Σ∇^2 𝐟^T)^-1∇^2 𝐟Σ.
While the Schur complement method can be implemented iteratively for non-linear systems, it is computationally intensive due to the inversion of the Hessian matrix. However, it serves as an excellent theoretical tool, providing a precise representation of how constraints can be formally incorporated into the diffusion process.
On the other hand, the Schur complement method provides a direct way to project the covariance matrix of the atomic positions onto the space that satisfies the distance constraints. It essentially modifies the covariance matrix in a way that embeds the constraints, without needing to adjust the atomic positions. This approach formally modifies the probability distribution of interest, and may be more useful for theoretical insight.
§ APPENDIX C: NONHOLONOMIC CONSTRAINTS
We are more interested in nonholonomic constraints where each constraint has possibly a lower and upper bound. As we mentioned earlier,
by adding a slack variable one can translate the nonholonomic constraints to holonomic ones. To formalize this, one sees that a constraint having
a lower and upper bound will either be completely satisfied or fail to satisfy a single boundary. Thus, we only have to consider
at most one holonomic constraint at each call to Shake, meaning each constraint with a lower and upper bound may be replaced by a lower, upper,
or no bound for each call.
To calculate the slack variable y from σ_jk:=‖ x^l_i-x^l_j ‖ - d_jk which is ≤ or ≥ 0, one has
y={[ max(0,||x^l_i-x^l_j||-d^u_jk), if ≤; max(0,d^l_jk-||x^l_i-x^l_j||), if ≥ ].
where d_jk is the lower or upper bound in case of nonholonomic constriants and the defined constraint
value for holonomic constraints.
In the generative process, we define the initial values of d_jk such that the constraints have little effects. The constraints are then linearly interpolated throughout the ODE until the predetermined boundary values of d_jk are reached.
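In code, the slack computation and the linear tightening of the bounds during generation might look as follows (a sketch; the loose initial bound is an arbitrary choice):

import numpy as np

def slack(x_i, x_j, d_low=None, d_high=None):
    # y = max(0, ||x_i - x_j|| - d_high) for an upper bound,
    # y = max(0, d_low - ||x_i - x_j||) for a lower bound; at most one is active
    r = np.linalg.norm(x_i - x_j)
    if d_high is not None and r > d_high:
        return r - d_high
    if d_low is not None and r < d_low:
        return d_low - r
    return 0.0

def interpolate_bound(d_target, t, d_loose=10.0):
    # linearly tighten the bound from d_loose to d_target as the ODE time t runs 0 -> 1
    return (1.0 - t) * d_loose + t * d_target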
§ APPENDIX B: INCORPORATION OF LOGICAL OPERATORS IN GEOMETRIC CONSTRAINTS
The application of logical operators such as 'AND', 'OR' and 'NOT' within geometric constraints enables a more flexible and representative modeling of physical and chemical systems. Real-world scenarios frequently require the satisfaction of multiple constraints following complex logical rules. Below, we detail the basic implementation of 'OR' and 'NOT' logical operators within the geometric constraints of our diffusion process while noting that the 'AND' operator is the basis of the formalism:
§.§ 'OR' Logic
The 'OR' condition necessitates that at least one of two (or more) constraints be met. Let's denote two constraint functions as f_1(x) and f_2(x). The 'OR' logic can be integrated by constructing a composite constraint function that is satisfied when any of its constituent constraints is met. We can express this as:
g(x) = min(f_1(x), f_2(x))
In this case, if either f_1(x) = 0 or f_2(x) = 0 (or both), g(x) = 0, thereby meeting the 'OR' condition. Alternatively, we can employ a product of the constraints:
g(x) = f_1(x) · f_2(x)
If either f_1(x) = 0 or f_2(x) = 0 (or both), g(x) = 0, again adhering to the 'OR' logic. This method requires that both f_1(x) and f_2(x) are always non-negative.
§.§ 'NOT' Logic
The "NOT" operator in the context of geometric constraints could be defined using the following equations. Let's say we have a constraint f(x) = 0. We want to define a NOT operator for this constraint. We can then define "NOT f(x)" as regions where f(x) does not equal zero, which can be represented with two inequality constraints which can be combined via the 'OR' operator to designate the 'NOT' operator.
We denote ϵ as a small positive number, then "NOT f(x)" can be represented as:
g_1(x) = f(x) + ϵ < 0
g_2(x) = f(x) - ϵ > 0
In the equations above, we have defined two regions (when f(x) is smaller than -ϵ and larger than ϵ) where "NOT f(x)" is true, thus defining a NOT operator for our constraints. Note that these regions depend on the choice of ϵ.
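A small sketch of how these composite constraints can be expressed programmatically (assuming, as in the text, that the constituent functions are non-negative where the min/product forms are used):

def g_min(f1, f2):
    return lambda x: min(f1(x), f2(x))        # 'OR': zero iff f1(x) = 0 or f2(x) = 0

def g_prod(f1, f2):
    return lambda x: f1(x) * f2(x)            # 'OR' via the product form

def not_satisfied(f, x, eps=1e-2):
    # 'NOT f(x) = 0' as in the text: f(x) + eps < 0  or  f(x) - eps > 0
    return (f(x) + eps < 0) or (f(x) - eps > 0)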
|
http://arxiv.org/abs/2307.04342v1 | 20230710045252 | Realization of an extremely anisotropic Heisenberg magnet in Rydberg atom arrays | [
"Kangheun Kim",
"Fan Yang",
"Klaus Mølmer",
"Jaewook Ahn"
] | quant-ph | [
"quant-ph",
"cond-mat.quant-gas",
"physics.atom-ph"
] |
These authors contributed equally to this work
Department of Physics, KAIST, Daejeon 34141, Republic of Korea
These authors contributed equally to this work
Center for Complex Quantum Systems, Department of Physics and Astronomy, Aarhus University, DK-8000 Aarhus C, Denmark
Niels Bohr Institute, University of Copenhagen, Blegdamsvej 17, DK-2100 Copenhagen, Denmark
[email protected]
Department of Physics, KAIST, Daejeon 34141, Republic of Korea
Strong mutual interactions correlate elementary excitations of quantum matter and plays a key role in a range of emergent phenomena <cit.>, from binding and condensation <cit.> to quantum thermalization and many-body localization <cit.>. Here, we employ a Rydberg quantum simulator to experimentally demonstrate strongly correlated spin transport in anisotropic Heisenberg magnets, where the magnon-magnon interaction can be tuned two orders of magnitude larger than the magnon hopping strength. In our approach, the motion of magnons is controlled by an induced spin-exchange interaction through Rydberg dressing <cit.>, which enables coherent transport of a single Rydberg excitation across a chain of ground-state atoms. As the most prominent signature of a giant anisotropy, we show that nearby Rydberg excitations form distinct types of magnon bound states, where a tightly bound pair exhibits frozen dynamics in a fragmented Hilbert space, while a loosely bound pair propagates and establishes correlations beyond a single lattice site. Our scheme complements studies using resonant dipole-dipole interactions between Rydberg states, and opens the door to exploring quantum thermodynamics with ultrastrong interactions and kinetic constraints <cit.>.
Realization of an extremely anisotropic Heisenberg magnet in Rydberg atom arrays
Jaewook Ahn
August 12, 2023
Quantum simulation of spin models has established a powerful tool for unraveling exotic many-body phases and dynamics <cit.>. As a pivotal process in quantum magnetism, the quasiparticle spin excitations (magnons) can propagate through the system by coherent spin exchanges that conserve the total magnetization <cit.>. The inclusion of strong magnon-magnon interaction complicates the underlying spin transport, where the motion of different magnons cannot be separated <cit.>. Similar correlated transport dynamics has been observed in various quantum systems, including ultracold atoms engineered by the superexchange mechanism <cit.>, trapped atomic ions with phonon mediated spin-spin couplings <cit.>, and Rydberg atom arrays subjected to resonant dipole-dipole interactions <cit.>. These works aim to construct a spin-1/2 Heisenberg model, where the correlations can be tuned by the anisotropy of the XXZ-type Hamiltonian, defined as the strength of the magnon-magnon interaction relative to the spin-exchange rate.
One of the biggest challenges in previous experiments was to acquire a very large anisotropy, for which the strongly correlated dynamics is constrained to flip-flops that conserve not only the total magnetization but also the number of domain-walls. This kinetic constraint is key to exotic non-ergodic dynamics, such as Hilbert space fragmentation <cit.> and quantum many-body scars <cit.>. In this work, we demonstrate an approach that can access such an extremely anisotropic regime on a neutral-atom quantum simulator, where ground-state atoms are off-resonantly dressed to a Rydberg state to induce an effective excitation exchange <cit.>. As evidence of the large anisotropy, we show that the propagation of a single Rydberg excitation significantly slows down in the presence of a nearest-neighbor Rydberg excitation, due to the formation of a tightly bound state. While similar magnon bound states have been identified in systems with short-range interactions <cit.> or moderate anisotropies <cit.>, the large long-range anisotropy in our work can further support a new type of bound states with a bond length beyond the nearest neighbor.
Effective spin exchange in a Rydberg Ising model
Our experiments are carried out in a chain of ^87 Rb atoms initially trapped in an optical tweezer array [see Fig. <ref>(a)]. We use a two-photon excitation scheme to couple the ground state |↓⟩=|5S_1/2,F=2,m_F = 2⟩ to the Rydberg state |↑⟩=|71S_1/2,m_J=1/2⟩, which maps the system onto a spin-1/2 chain described by a tilted Ising Hamiltonian (taking ħ=1, where ħ is the reduced Planck constant),
Ĥ_ Ryd = Ω/2∑_i σ̂_i^x - Δ∑_i n̂_i + 1/2∑_i≠ jV_ijn̂_i n̂_j.
Here, σ̂_i^α are Pauli matrices, n̂_i = |r_i⟩⟨ r_i|=(1+σ̂_i^z)/2 denotes the Rydberg-state projector, and Ω and Δ are the Rabi frequency and the detuning of the two-photon transition, respectively. The interaction strength V_ij between Rydberg atoms at sites i and j takes the form V_ij=C_6/r_ij^6, where r_ij is the distance between the atoms and C_6>0 is the van der Waals (vdW) coefficient.
To understand the dynamics of this Rydberg Ising model, we decompose the original Hamiltonian into Ĥ_ Ryd = Ĥ_0 + Ω̂_D, where Ĥ_0 is the diagonal part, and Ω̂_D=(Ω/2)∑_iσ̂_i^x is the off-diagonal driving term that can create or annihilate a single Rydberg excitation. If we label the eigenstates of Ĥ_0 according to the total Rydberg excitation number 𝒩̂_R=∑_i n̂_i, then Ω̂_D only couples states where 𝒩̂_R changes by one. As a result, the coupling usually admixes different 𝒩̂_R subspaces. However, if the energy difference between adjacent blocks of Ĥ_0 is much larger than the coupling strength Ω, these subspaces become dynamically decoupled, and only states of the same 𝒩̂_R are coupled with each other via a perturbation process. This perturbation effect occurs predominantly at the second order and can be described by an effective Hamiltonian Ĥ_eff (see Methods), which has a U(1) symmetry corresponding to the conserved Rydberg excitation number 𝒩̂_R. Figure <ref>(b) visualizes the perturbation process for two atoms, where states |↑↓⟩ and |↓↑⟩ are coupled by a spin-exchange interaction J(σ̂_1^+σ̂_2^-+σ̂_1^-σ̂_2^+) between the ground state and the Rydberg state, with σ̂^±_n=(σ̂^x_n ± iσ̂^y_n)/2. Crucially, the nonvanishing interaction strength J = Ω^2 V_12/4Δ(Δ-V_12) is enabled by unequal energy differences between adjacent 𝒩̂_R sectors. These nonuniform level spacings arise from the vdW interaction and can lead to complicated density-dependent spin exchanges. For example, in a three-atom chain with the central site excited to the Rydberg state [see Fig. <ref>(c)], the spin exchange between the first and the third atom is described by a three-body interaction term Q(σ̂_1^+σ̂_3^-n̂_2+σ̂_1^-σ̂_3^+n̂_2), where Q = Ω^2 V_13 /4(Δ-V_12)(Δ - V_12 - V_13) is the density-dependent coupling strength.
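As an illustration (not the authors' code) of this second-order exchange, the two-atom dynamics can be checked numerically by exact evolution under the Hamiltonian above; the interaction strength V below is an arbitrary illustrative value rather than the experimental one, frequencies are in units of 2π MHz, and time is in μs:

import numpy as np
from scipy.linalg import expm

W, D, V = 1.52, 5.0, 20.0                        # Rabi frequency, detuning, vdW shift (V is an assumed value)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
nr = np.array([[0, 0], [0, 1]], dtype=complex)   # Rydberg projector in the basis (|g>, |r>)
I2 = np.eye(2, dtype=complex)

H = 0.5 * W * (np.kron(sx, I2) + np.kron(I2, sx)) \
    - D * (np.kron(nr, I2) + np.kron(I2, nr)) \
    + V * np.kron(nr, nr)

psi0 = np.kron([1, 0], [0, 1]).astype(complex)   # |g r> = |down, up>
targ = np.kron([0, 1], [1, 0]).astype(complex)   # |r g> = |up, down>
J = W ** 2 * V / (4 * D * (D - V))               # perturbative exchange rate
print("predicted |J| (2*pi*MHz):", abs(J))
for t in np.linspace(0.0, 0.5 / abs(J), 6):      # roughly one exchange period
    psi = expm(-2j * np.pi * H * t) @ psi0
    print(round(t, 2), "P(up,down) =", round(abs(np.vdot(targ, psi)) ** 2, 3))
# the transfer probability should rise close to unity near t ~ 1/(4|J|) and return,
# i.e. an exchange oscillation at frequency ~ 2|J|, up to corrections of order (W/2D)^2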
To observe these virtual spin-exchange processes, it is preferable to work in the weak dressing regime Ω≪|Δ|, which, however, results in weaker interaction strengths. Concerning this trade-off, which could be relaxed by a larger Rabi frequency, our experiments are typically performed with |Δ/Ω|∈ [1.5,4]. In this intermediate regime, we demonstrate that the U(1) symmetry is largely preserved and the deviation from the effective theory can be suppressed by a postselection measurement. Actually, we can accurately count Rydberg excitations in each experimental run by single-site resolved fluorescence imaging, which projects the spins to an exact microstate. Therefore, when exploring the dynamics of a specific 𝒩̂_R subspace, events subject to processes breaking the U(1) symmetry can be discarded, while only states remaining in the given symmetry sector are retained <cit.>. This postselection scheme has a high success probability and shows good tolerance to imperfect state initialization.
Quantum walk of a single magnon
We first investigate the dynamics within the 𝒩̂_R=1 subspace of a single Rydberg excitation (magnon). The effective Hamiltonian for this symmetry sector is a simple XY model describing coherent hopping of a single magnon: Ĥ_ eff = ∑_i< j J_ij (σ̂_i^+ σ̂_j^- + σ̂_i^- σ̂_j^+) + ∑_i μ_i n̂_i, where J_ij= Ω^2 V_ij/4Δ(Δ-V_ij) is the rate of the effective spin exchange, and μ_i = -Δ +2δ +∑_j≠ iJ_ij is the on-site potential of the magnon with δ=Ω^2/4Δ.
As a minimal yet nontrivial example, we begin with two sites and measure the spin-exchange process |↓↑⟩↔|↑↓⟩. To this end, two atoms are loaded into the tweezers and prepared in state |↓↓⟩ via optical pumping. Then, the trap is turned off, and the first atom is addressed with a 820-nm laser, making it off-resonant with respect to the transition driven by the global Rydberg beam. The second atom is on-resonant and subsequently driven to the Rydberg state by a π-pulse, creating the desired initial state |↓↑⟩. After that, the global Rydberg beam is significantly detuned to induce the effective spin exchange. The experimental sequence is shown in Fig. <ref>(d), and more details can be found in Refs. <cit.>. Figure <ref>(e) depicts the characteristic oscillation dynamics measured with Ω = 2π× 1.52 MHz, Δ = 2π× 5 MHz, and r=4.95 μ m, where r is the interatomic distance. It is clearly seen that the oscillation is approximately U(1) symmetric, as it mainly occurs in the single-excitation subspace, while states |↓↓⟩ and |↑↑⟩ are rarely populated. The oscillation frequency ∼ 0.80 MHz drawn from the experiment agrees well with the perturbation analysis that gives |J| ≈ 0.78 MHz. Here, the damping of the coherent spin exchange is mainly caused by uncorrelated dephasings from the intermediate-state scattering, and the scheme is intrinsically robust against correlated dephasings from the laser phase noise.
We next measure the distance dependence of the interaction J_ij=J(r_ij) by varying the distance r between the two atoms. As shown in Fig. <ref>(f), the measured potential perfectly matches the theoretical prediction J_±(r) = δ/[(r/r_c)^6∓1], where ± denotes the sign of the detuning, and r_c = (C_6/|Δ|)^1/6 is a characteristic length. For a negative detuning (Δ<0), J_-(r) is a soft-core potential that plateaus at δ for r<r_c and decays with a vdW tail ∼ 1/r^6, similar to the Rydberg-dressing induced interaction between ground-state atoms <cit.>. The potential for a positive detuning (Δ>0) has a distinct behavior: while it has the same plateau value and asymptotic scaling, J_+(r) diverges at r=r_c. This singularity is caused by the facilitation dynamics, where the condition V_i,i+1=Δ makes single-magnon states resonantly coupled with the two-magnon state |↑↑⟩, leading to a breakdown of perturbation theory and the U(1) symmetry. In the facilitation regime, it has been shown previously that a small thermal fluctuation of atomic positions can lead to a strong Anderson localization, hindering the transport of the excitation <cit.>. In contrast, for the U(1) symmetric regime studied in this work, the plateau of the potential makes the dynamics insensitive to the fluctuation of interatomic distance, and a magnon is expected to be highly delocalized.
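For reference, the measured potential can be compared against the perturbative expression directly; in this sketch the C_6 coefficient is a placeholder value, not the measured coefficient of the 71S state:

import numpy as np

def dressed_exchange(r, Omega=1.52, Delta=-5.0, C6=1.0e6):
    # J(r) = Omega^2 V / [4 Delta (Delta - V)] with V = C6 / r^6
    # (frequencies in 2*pi*MHz, r in micrometres, C6 in 2*pi*MHz*um^6, assumed value)
    V = C6 / r ** 6
    return Omega ** 2 * V / (4 * Delta * (Delta - V))

r = np.linspace(2.0, 12.0, 200)
r_c = (1.0e6 / 5.0) ** (1 / 6)               # characteristic length (C6/|Delta|)^(1/6)
J_red = dressed_exchange(r, Delta=-5.0)      # soft-core branch, plateau for r < r_c
J_blue = dressed_exchange(r, Delta=+5.0)     # diverges near r = r_c (facilitation condition V = Delta)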
To demonstrate that the magnon can exhibit a robust quantum walk against atomic positional disorders, we now create a larger array containing 7 atoms with a spacing of 4.95 μ m. In order to prepare the initial state |↓↓↓↑↓↓↓⟩, we apply the individual addressing beam to shift the detuning of the central site, followed by an adiabatic ramping of the global Rydberg beam, which only drives the atom at the center to the Rydberg state [Fig. <ref>(a)]. After the initialization, the addressing beam is turned off, and a red-detuned (Δ<0) Rydberg driving field is applied to induce the effective dynamics. The propagation of the initial excitation can be traced by observing the evolution of the local Rydberg density ⟨n̂_i⟩, as shown in Fig. <ref>(b), where an approximate light-cone wavefront can be identified. The staggered pattern of ⟨n̂_i⟩ during the evolution is clear evidence of the quantum interference [Fig. <ref>(c)], as opposed to the Gaussian distribution in a classical random walk. In the current system, the existence of uncorrelated dephasings will eventually destroy the coherence of the system and lead to a uniform steady distribution. To quantify the role of the dephasing, we extract the mean square displacement ⟨ x^2 ⟩ of the magnon [Fig. <ref>(d)], and find good agreement with the simulations based on the Haken-Reineker-Strobl (HRS) model <cit.>, which includes both coherent magnon hoppings and on-site dephasings (with a rate γ=2π× 0.2 MHz). For a larger system, the HRS model predicts that the magnon will continue to spread with no steady-state distribution, but its motion has a quantum-classical crossover: while the initial propagation for t<1/γ is governed by a ballistic transport (⟨ x^2 ⟩∝ t^2), the spreading will gradually become diffusive with ⟨ x^2 ⟩∝ t. Such a scaling crossover can be identified in future experiments with increased system size.
Dynamics of magnon bound states
Having explored the single-magnon dynamics, we proceed to the observation of correlated motions of multiple magnons. In the two-excitation subspace (𝒩̂_R=2), neglecting the essentially uniform on-site potential, the effective Hamiltonian now reads
Ĥ_eff = ∑_i<j≠ kQ_ijk(σ̂_i^+ σ̂_j^-n̂_k + n̂_kσ̂_i^-σ̂_j^+) + ∑_i<j U_ijn̂_i n̂_j,
where Q_ijk = (G_ijk+G_jik)/2 is the density-dependent hopping strength with G_ijk = Ω^2 V_ij /4(Δ-V_ik)(Δ - V_ik - V_ij), and U_ij=V_ij-4J_ij+ ∑_l≠ i,j(G_lij-J_li) denotes the density interaction between magnons. Note that the density interaction U_ij∼ V_ij is mainly from the zeroth-order Hamiltonian Ĥ_0, while the exchange interaction Q_ijk is induced by the second-order perturbation. This leads to an important characteristic that |U_ij/Q_ijk|∼ (2Δ/Ω)^2≫1, which makes Eq. (<ref>) a long-ranged, highly anisotropic Heisenberg model.
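The hierarchy |U_ij/Q_ijk| ∼ (2Δ/Ω)² ≫ 1 can be checked numerically from these definitions. The sketch below evaluates the couplings on a finite chain with the dimensionless parameters Δ/Ω = -3 and V_i,i+1/Δ = -8 used in the following paragraph; the truncation of the sum in U_ij at the chain boundaries is an assumption of the sketch.

```python
# Hedged numerical evaluation of J_ij, G_ijk, Q_ijk and U_ij on a finite chain
# (central sites), in dimensionless units with Omega = 1.
import numpy as np

N, Omega, Delta = 15, 1.0, -3.0
V1 = -8.0 * Delta                                    # nearest-neighbour vdW shift

def V(i, j):
    return V1 / abs(i - j) ** 6 if i != j else 0.0

def J(i, j):
    return Omega**2 * V(i, j) / (4 * Delta * (Delta - V(i, j)))

def G(i, j, k):
    return Omega**2 * V(i, j) / (4 * (Delta - V(i, k)) * (Delta - V(i, k) - V(i, j)))

def Q(i, j, k):
    return 0.5 * (G(i, j, k) + G(j, i, k))           # density-dependent hopping

def U(i, j):                                         # density interaction (finite-chain sum)
    return V(i, j) - 4 * J(i, j) + sum(G(l, i, j) - J(l, i)
                                       for l in range(N) if l not in (i, j))

c = N // 2                                           # a site deep in the bulk
xi1 = U(c, c + 1) / Q(c - 1, c, c + 1)               # nearest-neighbour anisotropy
xi2 = U(c, c + 2) / Q(c - 1, c, c + 2)               # next-nearest-neighbour anisotropy
print(f"xi_1 ~ {xi1:.0f},  xi_2 ~ {xi2:.1f}")
```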
One direct consequence of this large anisotropy is the emergence of a family of magnon bound states. In an infinite spin chain, the two-magnon eigenstate |ψ_K⟩=∑_i≠ jψ_K(i,j)σ̂_i^+σ̂_j^+|↓↓⋯↓⟩ can be labeled by the center-of-mass momentum K, where the wavefunction can be factorized as ψ_K(i,j) = e^iKRϕ_K(r) by introducing the center-of-mass position R = (i+j)/2 and the relative distance r= i-j <cit.> . The bound state has a bounded wavefunction ϕ_K(∞)→0, whose energy is isolated from the scattering continuum. Therefore, systems initially in the bound state remain localized in the relative coordinate, in stark contrast to the scattering state, where individual excitations propagate freely. Figure <ref>(a) shows the energy spectrum and the bound-state wavefunction for a typical parameter Δ/Ω=-3 and V_i,i+1/Δ=-8. The extremely large nearest-neighbor (NN) anisotropy ξ_1=U_i,i+1/Q_i-1,i,i+1≈ 684 in this case gives rise to a high-energy bound state (red curve), where magnons are tightly bounded at a relative distance r=1 (nearest neighbors) for all momenta. The strong density interaction also has a significant long-range effect absent in a short-range interacting system <cit.>: the next-nearest-neighbor (NNN) anisotropy ξ_2=U_i,i+2/Q_i-1,i,i+2≈ 4 is also quite large, and can thus support a low-energy loosely bound state (blue curve), whose wavefunction ϕ_K(r) has a larger bond-length r>1. We will focus on these two types of bound pairs in the experiment, and expect that the same system gives rise to further varieties of bound states at larger anisotropy or in different lattice configurations.
To probe the correlated dynamics of the tightly bound Rydberg pair, we prepare an initial state |↓↓↑↑↓↓⟩ in a 6-atom chain via an adiabatic anti-blockade excitation scheme, where the detuning of the two central atoms is swept across the resonant point Δ=V_i,i+1/2. We then quench the system to a fixed detuning and measure the evolution of the two-site correlator Γ_ij=⟨σ̂_i^+ σ̂_j^+ σ̂_i^- σ̂_j^-⟩. For a positive detuning Δ=2π× 12 MHz, the observed correlation function propagates almost perfectly along the directions j=i±1 [see the upper panels of Fig. <ref>(c)], demonstrating that the two Rydberg excitations move in a correlated manner as expected [see Fig. <ref>(b)]. In fact, the large NN anisotropy ξ_1≈ -35 in our experiment makes the total number of NN-Rydberg bonds 𝒩̂_RR=∑_i n̂_in̂_i+1 another conserved charge. The tightly bound Rydberg pairs constitute the symmetry sector (𝒩̂_R=2, 𝒩̂_RR=1), whose dynamics are governed by an NNN hopping term Q∑_i (σ̂_i^+ σ̂_i+2^-n̂_i+1 +H.c.). Here, the strength Q=Q_i,i+2,i+1 corresponds to the exchange process illustrated in Fig. <ref>(c), and determines the propagation speed of the tightly bound pair. To further confirm this analysis, we switch the detuning to a negative value Δ=2π×-3.3 MHz, with which the single-magnon hopping strength J=J_i,i+1 remains unchanged, but the density-dependent hopping is significantly reduced (Q=0.13 MHz→ 0.01 MHz). Consistent with the theoretical prediction, the dynamics of the system becomes almost frozen within the time scale T∼ 2π/J [see the lower panels of Fig. <ref>(c)], over which a single Rydberg excitation would already have spread over the lattice. Note that the slight spreading of the correlator at late times is mainly caused by the imperfect state initialization rather than by excitation hopping. The frozen dynamics observed here is a clear signature of Hilbert space fragmentation: while all tightly bound states |⋯↑_i↑_i+1⋯⟩ share the local symmetry (𝒩̂_R and 𝒩̂_RR), they form dynamically disconnected Krylov subspaces of dimension 1 (frozen states). In fact, taking only NN vdW interactions into consideration (in accordance with a vanishing NNN hopping strength Q), the effective Hamiltonian can be mapped to a folded XXZ model <cit.>, where spin exchanges are constrained by the conservation of 𝒩̂_RR, leading to a strongly fragmented Hilbert space in the thermodynamic limit.
Unlike the tightly bound state, which has a nearly flat band in most parameter regimes (corresponding to the frozen dynamics), the loosely bound pair displays a finite bandwidth and is therefore more mobile [Fig. <ref>(a)]. To observe the propagation of this longer-range bound state, we prepare a 7-site chain and excite the third and the fifth atoms to the Rydberg level. We first choose a small lattice spacing of 4.95 μ m to achieve large anisotropies ξ_1=539 and ξ_2≈ 1.24, for which the produced initial state |↓↓↑↓↑↓↓⟩ has a considerable overlap (≈ 0.24) with the loosely bound state. The upper panels of Fig. <ref>(e) depict the evolution of the experimentally extracted correlation function Γ_ij. In contrast to the tightly bound pair, whose transport is determined by an NNN hopping term, the correlated motion of the loosely bound pair is mediated by two successive NN hopping processes [Fig. <ref>(d)], as evident from the predominant spreading of Γ_ij along the directions i=j±2. As a comparison, we then increase the interatomic distance to 8.5 μ m, at which the NNN anisotropy ξ_2≈ -0.52 is too small to support the long-range bound state for most values of the momenta. In this regime, the observed correlator Γ_ij rapidly spreads over the entire zone with no preferred propagation direction [see the lower panels of Fig. <ref>(e)], which suggests that the two Rydberg excitations are not bound to each other but propagate freely <cit.>.
To further confirm the existence of the bound states, we extract their participation ratios (BR) from the measured correlation map, where the ratios for the tightly bound state and the long-range bound state are defined as BR_1 = ∑_iΓ_i,i+1/Γ_tot and BR_2 = ∑_iΓ_i,i+2/Γ_tot, respectively, with Γ_tot = ∑_i<jΓ_ij. For the system size realized in our experiment, the reflection from the boundary can lead to finite BR_1 and BR_2 even in the absence of magnon interactions. To estimate this finite-size effect and obtain a lower reference value for the participation ratio, we assume a uniform thermal distribution of the magnons with Γ_ij=1/Γ_tot. As confirmed by Fig. <ref>, the measured ratio is much larger than this lower bound (dashed curves) during the free-magnon relaxation time ∼ 1/J. Here, the damping of the bound pair at late times is mainly caused by the local dephasing. It is worth pointing out here that atomic positional disorder may slow down the propagation of bound magnons more strongly than that of single magnons, because it contributes a large disordered binding interaction U_ij (especially for the tightly bound pair). To account for the decoherence, the positional disorder, as well as other imperfections, we carry out full numerical simulations based on realistic experimental conditions and the original Rydberg Ising model (see Methods). This full simulation agrees very well with the experimental data (see Fig. <ref>) and suggests that improving the coherence of the correlated spin-exchange dynamics is a key goal for future studies.
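A small sketch of how such participation ratios can be computed from a measured correlation map is given below; the uniform-reference values and the toy correlation matrix are illustrative assumptions, not the experimental data.

```python
# Hedged sketch: bound-state participation ratios from a two-site correlation map
# Gamma_ij (only the upper triangle i < j is used, as in the definitions above).
import numpy as np

def participation_ratios(Gamma):
    N = Gamma.shape[0]
    iu = np.triu_indices(N, k=1)
    total = Gamma[iu].sum()
    br1 = sum(Gamma[i, i + 1] for i in range(N - 1)) / total   # tightly bound pairs
    br2 = sum(Gamma[i, i + 2] for i in range(N - 2)) / total   # loosely bound pairs
    # uniform ("thermal") reference values used as finite-size lower bounds
    n_pairs = N * (N - 1) // 2
    return br1, br2, (N - 1) / n_pairs, (N - 2) / n_pairs

# toy correlation map concentrated on |i - j| = 1 (illustrative numbers only)
G = np.zeros((7, 7))
for i in range(6):
    G[i, i + 1] = 1.0
print(participation_ratios(G))
```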
Conclusions and outlook
In conclusion, we have demonstrated a new approach to constructing the Heisenberg-type spin model in a Rydberg atom array. Unlike previous schemes realized with dipolar exchange interactions and Floquet engineering <cit.>, our approach is based on Rydberg dressing of an Ising Hamiltonian, which can offer a large and widely tunable anisotropy. In the current experiment, we focused on the single-magnon and two-magnon sectors. By creating more excitations in a large-scale array, the system may allow exploration of emergent Hilbert space fragmentation <cit.> and the Krylov-restricted thermalization of multiple magnons <cit.>. The scheme also allows dynamical engineering of spin transport, topological pumping protocols, and programmable entanglement distribution <cit.>. Generalizations to higher dimensions could lead to richer physics. In particular, in a 2D lattice, the inclusion of a multicolor dressing field could enable the application of a synthetic gauge flux <cit.>, which can give rise to topologically protected chiral motion of the magnon bound state and hold promise for the observation of a chiral spin liquid <cit.>.
This research was supported by Samsung Science and Technology Foundation (SSTF-BA1301-52) and National Research Foundation of Korea (2017R1E1A1A01074307). F. Yang and K. Mølmer acknowledge the support from Carlsberg Foundation through the “Semper Ardens” Research Project QCooL and from the Danish National Research Foundation (DNRF) through the Center of Excellence “CCQ” (Grant No. DNRF156). We thank L. You, T. Pohl, A. E. B. Nielsen, H. Yarloo, H. Zhang, A. Cooper, and X. Wu for valuable discussions.
§ METHODS
§.§ Effective Hamiltonian of the system
The effective U(1) symmetric model can be constructed from the Schrieffer-Wolff (SW) transformation <cit.>. Up to the second-order perturbation, the effective Hamiltonian is given by Ĥ_eff = Ĥ_0 + Ĥ_eff^(2) with
Ĥ_eff^(2)=𝒫̂(1/2[𝒮̂,Ω̂_D])𝒫̂,
where 𝒮̂ is a generator satisfying [𝒮̂,Ĥ_0]+Ω̂_D=0, and 𝒫̂ projects out terms that do not conserve 𝒩̂_R. Formally, the generator can be expressed as
𝒮̂=(iΩ/2)∑_iσ̂_i^y/(Δ - ∑_j≠ iV_ijn̂_j).
It is difficult to get an explicit effective Hamiltonian using the above expression. Therefore, we expand 𝒮̂ in orders of the Rydberg excitation number that can influence the spin flip of a single atom at the i-th site, i.e.,
𝒮̂ = (2i/Ω) δ∑_i σ̂_i^y + (2i/Ω) ∑_i≠ jJ_ijσ̂_i^yn̂_j
+ (i/Ω)∑_i≠ j≠ k(G_ijk-J_ij)σ̂_i^yn̂_jn̂_k +⋯ ,
where δ = Ω^2/(4Δ),
J_ij = Ω^2V_ij/[4Δ(Δ-V_ij)], G_ijk = Ω^2V_ij/[4(Δ-V_ik)(Δ-V_ik-V_ij)].
The above expansion then leads to an effective Hamiltonian Ĥ_eff^(2) = ℋ̂_1-body+ℋ̂_2-body+ℋ̂_3-body+⋯, where
ℋ̂_1-body = δ∑_iσ̂_i^z,
ℋ̂_2-body = ∑_i≠ jJ_ij/2(σ̂_i^+σ̂_j^- + σ̂_i^-σ̂_j^+ -2σ̂_i^zn̂_j)
ℋ̂_3-body = ∑_i≠ j≠ kG_ijk-J_ij/2(σ̂_i^+ σ̂_j^- + σ̂_i^-σ̂_j^+ -σ̂_i^zn̂_j)n̂_k,
are the one-body self-energy shift, the two-body XXZ-type Hamiltonian, and the three-body XXZ term, respectively. The Hamiltonian can be further simplified by the substitution σ̂_i^z = 2n̂_i -1 in a given state sector. For the single-magnon sector (𝒩̂_R=1), the quadratic term n̂_in̂_j can be neglected, which leads to the XY model given in the main text. For the two-magnon sector (𝒩̂_R=2), the cubic term n̂_in̂_jn̂_k can be discarded, and the resulting Hamiltonian can be mapped to Eq. (<ref>). For a general multi-magnon case, the dynamics is governed by a folded XXZ model exhibiting the HSF <cit.>.
§.§ Experimental setup and procedure
The experimental setup of our system is a Rydberg quantum simulator based on a neutral-atom array of ^87 Rb atoms, similar to our previous experiments <cit.>. The atomic ensembles are cooled and gathered inside a magneto-optical trap (MOT), while the single atoms are trapped inside an 820-nm optical tweezer array of 1 mK depth and sub-Doppler cooled to ∼ 35 μ K with polarization gradient cooling. Atoms are then optically pumped to |↓⟩ = |5S_1/2,F=2,m_F=2⟩. After the ground-state preparation, the traps are turned off and the atoms are driven to the Rydberg state |↑⟩ = |71S_1/2,m_J=1/2⟩ via a two-photon transition using the two Rydberg beams at 780 nm (homemade ECDL) and 480 nm (TA-SHG Pro of Toptica), with an intermediate detuning of Δ_I = 2π× 660 MHz from the intermediate state |m⟩ =|5P_3/2,F=3,m_F=3⟩. The quantum operation is performed by a series of Rydberg and addressing laser pulses. After the quantum operation, the atoms are trapped again by turning on the optical tweezers, while atoms in the Rydberg state are anti-trapped and expelled from the tweezers. The remaining atoms are imaged with an electron-multiplying charge-coupled device (EMCCD, iXon Ultra 888 of Andor) by illuminating the imaging beam. By distinguishing the fluorescence of a trapped atom from the background, we determine the internal state of each individual atom.
The optical tweezer trap and the addressing beam for the state initialization use the same 820-nm laser, derived from a Ti:Sapphire oscillator (TiC of Avesta) pumped by a 532-nm laser (Verdi G18 of Coherent). The laser beam passes through an acousto-optic modulator (AOM) and is split into zeroth- and first-order beams. The first-order beam is sent to a spatial light modulator (SLM, ODPDM512 of Meadowlark Optics), with which the optical tweezer array of target and reservoir traps is formed and rearranged using a real-time weighted Gerchberg-Saxton (GSW) algorithm computed on a GPU (Titan-X Pascal of NVIDIA). The phase for the atom array is calculated on a four-times-larger array, zero-padded relative to the initial phase, to achieve a resolution smaller than the trap size <cit.>. The zeroth-order beam propagates along a different path, passing through an additional AOM followed by an acousto-optic deflector (AOD, DTSXY-400-820 of AA Opto-Electronic), which is used to address the target atom. This 820-nm addressing beam is off-resonant with the 5S→ 5P transition, inducing an a.c. Stark shift of the target-atom Rydberg transition.
The quantum operation is programmed using a delay generator (DG645 of Stanford Research Systems) and an arbitrary waveform generator (AWG, XRF Agile RF Synthesizer of Moglabs), controlling AOMs of both the addressing beams and the Rydberg beams. The sequence is depicted in Fig. <ref>(d) of the main text, and a more detailed one is given in Extended Data Fig. <ref>. The sequence is divided into two parts: an initialization process driving the target atoms to Rydberg states, and the spin-exchange process inducing the many-body quench dynamics. For the two-atom experiment, the initial state is prepared by addressing one of the atoms to make it off-resonant to the Rydberg beams and applying a resonant π pulse to the other atom [see Extended Data Fig. <ref>(a) and Fig. <ref>(c)]. For all other experiments, the target atoms are addressed, and the Rabi frequency Ω and the detuning Δ of the global Rydberg beams are adiabatically swept according to the following sequence: (1) 0 μ s→ 0.1 μ s, (0,Δ_i)→ (Ω_ exp,Δ_i) (2) 0.1 μ s→ 0.9 μ s, (Ω_ exp,Δ_i)→ (Ω_ exp,Δ_f), and (3) 0.9 μ s→ 1 μ s, (Ω_ exp,Δ_f)→ (0,Δ_f) as depicted in Extended Data Fig. <ref>(b), where Ω_ exp is the Rabi frequency used in the spin-exchange step. The values of these parameters are summarized in Extended Data Table <ref>. With the above initialization, the addressed target atom is adiabatically excited to the Rydberg state [see Extended Data Fig. <ref>(d)].
§.§ Experimental parameters and measured values
The experimental parameters are given in the following tables. Extended Data Table <ref> shows the parameters and measured values for the two-atom spin-exchange dynamics, where Δ is the detuning for the spin exchange, r is the distance between the two atoms, Ω is the Rabi frequency, and J is the spin-exchange frequency fitted from each experiment, e.g., from the data in Fig. <ref>(e) of the main text. The vdW interaction strength V = C_6/r^6 is determined by the distance r with C_6 = 2π× 1023 GHz·μ m^6 corresponding to the Rydberg state |71S_1/2, m_J = 1/2⟩ used in the experiment <cit.>. The values of Ω and J are fitted to the expression P=a+bcos(2π× c× t)×exp(-t/d) with unknowns a, b, c, d and the initial-state probability P, where Ω/2π and J/4π correspond to c. The error in r, plotted in Fig. <ref>(f) of the main text, has the same value of 0.3 μ m for all distances and is limited by the resolution of the image plane: the beam waist is ∼ 1.2 μ m and the resolution is ∼ 0.3 μ m = 1.2/4 μ m because of the zero-padding. Extended Data Table <ref> shows the experimental parameters for the rest of the experiments. Here, Ω_ exp is the Rabi frequency for the spin-exchange dynamics experiments and the maximum Rabi frequency for the quantum annealing in the initial-state preparation, Δ_A is the detuning applied to the target atom by the addressing beam (two values, for the left and the right atom respectively, in the two-magnon experiments), Δ_i and Δ_f are the initial and final detunings, respectively, for the detuning sweep of the state initialization, and Δ_ exp is the detuning for the spin-exchange quench dynamics.
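A hedged sketch of this fitting procedure using a generic least-squares routine is shown below; the actual fitting tool used in the analysis is not specified here, and the data points are synthetic placeholders.

```python
# Hedged sketch: fit P = a + b*cos(2*pi*c*t)*exp(-t/d) to an oscillation trace.
import numpy as np
from scipy.optimize import curve_fit

def model(t, a, b, c, d):
    """a: offset, b: amplitude, c: oscillation frequency (MHz), d: damping time (us)."""
    return a + b * np.cos(2 * np.pi * c * t) * np.exp(-t / d)

rng = np.random.default_rng(0)
t = np.linspace(0, 3, 40)                                   # time in us
P = model(t, 0.5, 0.45, 0.80, 2.5) + 0.03 * rng.normal(size=t.size)  # placeholder data

popt, pcov = curve_fit(model, t, P, p0=[0.5, 0.5, 1.0, 2.0])
c_fit, c_err = popt[2], np.sqrt(np.diag(pcov))[2]
print(f"fitted frequency c = {c_fit:.2f} +/- {c_err:.2f} MHz   (J = 4*pi*c)")
```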
§.§ Experimental imperfections and numerical simulations
Full numerical simulations in Fig. <ref> of the main text take the experimental errors into consideration. Extended Data Table <ref> lists the types of experimental imperfections and their treatment in the numerical simulations. The dominant errors in the dressing scheme are the uncorrelated individual dephasing, mainly due to spontaneous decay from the intermediate state, the vdW-interaction fluctuation due to the finite temperature of the atoms, and the state-measurement error. The collective dephasing, mainly induced by the laser phase noise, does not play a significant role in the dynamics because of the decoherence-free feature of the effective model <cit.>. Both individual and collective dephasings are treated with the Lindblad master equation dρ/dt = -i [H, ρ ] + ℒ_ ind(ρ) + ℒ_ col(ρ) <cit.>, where the superoperators ℒ_ ind and ℒ_ col denote the individual (on-site) and the collective phase noise, respectively. The individual dephasing rate γ_ ind≈ 2π× 0.2 MHz was fitted from the three-level model of |g⟩, |r⟩, and the intermediate state |m⟩. The collective phase noise was fitted from the single-atom Rabi oscillation by fixing γ_ ind, and its value is γ_ col≈ 2π× 0.4 MHz. The temperature of the atomic thermal motion, T_ atom = 34.27(5) μ K, was measured using the release-and-recapture method. From this temperature, we calculate the motional spread of each atom, with a positional standard deviation σ_i = √(k_BT/(mω_i^2)) for trap frequency ω_i. In the simulation, the average effect of such atomic positional disorder was evaluated with the Monte-Carlo method. The radial and longitudinal positional standard deviations are σ_r≈ 0.1 μ m and σ_a≈ 0.3 μ m, respectively. The detection error was treated similarly to <cit.>, where the dominant portion of the conditional error probability P(g|r) is due to the Rydberg decay and the dominant portion of P(r|g) is due to the finite temperature of the atom. The former is calculated with P(g|r) = 1-exp(-t_ trap/t_1), where t_ trap is the time during which the trap is turned off, and the Rydberg lifetime t_1=43(15) μ s is measured with an additional Ramsey experiment <cit.>. The latter probability, P(r|g)=P_ recap(t_ trap), is obtained from the release-and-recapture probability curve.
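As a quick numerical check of the quoted positional spreads, the relation σ_i = √(k_BT/(mω_i^2)) can be evaluated for ^87Rb at the measured temperature; the trap frequencies used below are assumptions chosen only for illustration, not values quoted in the text.

```python
# Hedged sketch: thermal positional spread sigma = sqrt(k_B*T / (m*omega^2)) for Rb-87.
import numpy as np

kB = 1.380649e-23                      # Boltzmann constant, J/K
m = 86.909 * 1.660539e-27              # Rb-87 mass, kg
T = 34.3e-6                            # measured atomic temperature, K

# assumed trap frequencies (rad/s) for illustration only
for label, omega in (("radial", 2 * np.pi * 100e3), ("axial", 2 * np.pi * 30e3)):
    sigma = np.sqrt(kB * T / (m * omega**2))
    print(f"{label}: sigma = {sigma * 1e6:.2f} um")
```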
|
http://arxiv.org/abs/2307.04778v2 | 20230710054331 | Formulating A Strategic Plan Based On Statistical Analyses And Applications For Financial Companies Through A Real-World Use Case | [
"Saman Sarraf"
] | cs.LG | [
"cs.LG",
"cs.CE"
] |
Business statistics play a crucial role in implementing a data-driven strategic plan at the enterprise level to employ various analytics, where the outcomes of such a plan enable an enterprise to enhance the decision-making process or to mitigate risks to the organization. In this work, a strategic plan informed by statistical analysis is introduced for a financial company called LendingClub; the plan comprises exploring the possibility of onboarding a big data platform along with advanced feature selection capacities. The main objectives of such a plan are to increase the company’s revenue while reducing the risks of granting loans to borrowers who cannot repay them. Different hypotheses formulated to address the company’s concerns are examined, and the results reveal that the loan amount profoundly impacts the number of borrowers charging off their loans. Also, the proposed strategic plan includes onboarding advanced analytics such as machine learning technologies that allow the company to build better-generalized data-driven predictive models.
§ INTRODUCTION
Formulating a strategic plan aligned with a company’s business scope allows the company to explore data-driven ways of business improvement and risk mitigation quantitively while utilizing collected data to perform statistical applications. The company’s business leadership generally organizes joint meetings with internal or external data analysis teams to design a plan for executing business-related statistical analysis. Such projects demonstrate that the company should invest in what areas and adjust the budget for business verticals with low revenue. Furthermore, statistical applications can determine the logic of how to improve staff performance in the workplace.
LendingClub, as a peer-to-peer lending company, offers loans and investment products in different sectors, including personal and business loans, automobile loans, and health-related financing loans. LendingClub’s business model comprises three primary players: borrowers, investors, and portfolios for issued loans.
LendingClub aims to expand its statistical analytics, consisting of infrastructure and software algorithm applications, to ultimately develop two meaningful solutions: a) estimating the duration in which clients will pay off their loans; and b) 30-minute loan-approval decision-making. To implement these two capabilities, the company has collected data on loans granted or rejected over 12 years, comprising 145 attributes and more than 2 million observations, of which 32 features have no missing values across the dataset.
To achieve its ultimate targets, LendingClub performs a statistical analysis of numerous steps to determine whether to accept or reject hypotheses, which enables data scientists and statisticians to select attributes for predictive modeling. LendingClub seeks patterns in the loan data to discover relationships between a loan amount and borrowers who have charged off and reported by LendingClub <cit.>. The company assumes a potential correlation between the two features, which establishes specific loan criteria for the group loan applicants who might encounter such an issue. Discovering the correlation enables LendingClub to enhance its risk management portfolio and minimize the risk of losing financial resources, aiming to mitigate the negative impacts of issuing loans to borrowers of this category. Using business statistics, the company seeks proof of concept for the mentioned ideas before recruiting a third-party software developer to implement a standalone product; therefore, the internal data scientists explore various aspects of such data, not limited to the questions listed above <cit.>.
In the first phase, demographic information is extracted from the datasets, and data preprocessing steps, such as data cleaning, are performed to remove any broken data from the database. Next, further investigation of specific data (e.g., type of loans issued, loans issued by region, and a more in-depth analysis of bad loans) is performed <cit.>. In the second phase, which oversees the business perspective, the company’s experts explore the operative side of the business (operational business aspects) and analyze applicants’ income category. The third phase refers to the risk assessment of issuing loans, which consists of four steps: a) identifying existing risks in the business; b) the importance and role of credit scores in the loan approval or denial; c) defining bad loans and risky borrowers; d) loans by default (pre-approved); and e) exploring risks by targeted criteria <cit.>. The ultimate goals of such extensive analysis are to lead LendingClub’s data scientists to explore the feasibility of answering the two questions above based on current data, provide recommendations for data collection, or modify the business scope <cit.>.
§ PROBLEM STATEMENT AND HYPOTHESIS
The problem for this work points to statistical applications in LendingClub, which establishes three hypotheses regarding the relationship between the “Loan Amount” and “Charge OFF Flag” features, where various statistical analyses, including hypothesis testing <cit.> and correlation analysis <cit.>, are employed. The hypotheses are as follows:
* Accepting or rejecting the hypothesis that any relationship exists between the loan amounts and charge-offs
* Accepting or rejecting the hypothesis that any relationship exists between the higher loan amounts and charge-offs
* Accepting or rejecting the hypothesis that any relationship exists between the lower loan amounts and charge-offs
§ STATISTICAL ANALYSIS PIPELINE DESIGN
The problem statement consists of three main components: a) data exploration, b) descriptive analysis of loan duration, and c) real-time (fast) loan approval (or denial). Data exploration includes preprocessing, data cleaning, feature engineering, and feature selection, leading to a meaningful descriptive analysis and an accurate loan-duration prediction. In the real-time step, various statistical techniques are explored, including hypothesis testing with Student's T-Test and ANOVA, as well as statistical models such as linear regression, logistic regression, cluster analysis, and correlation analysis <cit.>.
§.§ Data Exploration
Missing values are removed from the loan data, and two attributes are extracted from the preprocessed data shown in Figure <ref>: “loanAmnt”, the listed amount of the loan applied for by the borrower (if, at some point in time, the credit department reduces the loan amount, this is reflected in the value), and “debtsettlementflag”, which flags whether or not a charged-off borrower is working with a debt-settlement company. The “debtsettlementflag” – a binary feature – is considered a categorical attribute requiring conversion to numerical equivalents for statistical analysis <cit.>. Also, the histogram of loan amounts shows how borrowers are distributed regarding loan amounts.
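A possible pandas sketch of this preprocessing step is shown below; the file name is a placeholder, the column spellings follow the text (the raw LendingClub export may use different names), and the "Y"/"N" flag values are assumptions.

```python
# Hedged sketch of the data-exploration step: load, clean, and encode the two attributes.
import pandas as pd

loans = pd.read_csv("lendingclub_loans.csv", low_memory=False)   # placeholder file name

cols = ["loanAmnt", "debtsettlementflag"]
df = loans[cols].dropna()                                        # remove broken/missing rows

# binary categorical flag -> numeric (assumed values: "Y" = charged off / in settlement)
df["charged_off"] = (df["debtsettlementflag"] == "Y").astype(int)

print(df["charged_off"].value_counts())
df["loanAmnt"].hist(bins=10)                                     # loan-amount histogram
```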
§.§ Hypothesis Testing
In this experiment, the T-Test is the primary method for deciding whether to accept or reject each hypothesis. A T-Test is a hypothesis-testing method with broad applications in industry due to its simplicity and its ability to converge with a small sample of data <cit.>. Because the T-Test requires only a relatively small subset of data, the loan dataset is shuffled, and a subsample of 1000 observations is randomly selected from the charged-off samples, along with 1000 observations randomly selected from the on-time borrowers, for further analysis <cit.>. To explore the consistency of the T-Test results, analysis of variance (ANOVA) tests are applied to the same subsets as those used in the previous method. ANOVA tests demonstrate whether the groups exhibit statistically significant differences <cit.>.
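A minimal SciPy sketch of these tests is shown below (using SciPy rather than G*Power); it assumes the prepared frame df from the previous sketch.

```python
# Hedged sketch: balanced subsampling followed by a two-sample T-Test and a one-way ANOVA.
from scipy import stats

charged = df.loc[df["charged_off"] == 1, "loanAmnt"]
on_time = df.loc[df["charged_off"] == 0, "loanAmnt"]

a = charged.sample(1000, random_state=0).to_numpy()     # 1000 charged-off loan amounts
b = on_time.sample(1000, random_state=0).to_numpy()     # 1000 on-time loan amounts

t_stat, p_t = stats.ttest_ind(a, b, equal_var=False)    # Welch two-sample T-Test
f_stat, p_f = stats.f_oneway(a, b)                      # one-way ANOVA on the same subsets
print(f"T-Test p = {p_t:.4f},  ANOVA p = {p_f:.4f}")
```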
§.§ Correlation Analysis
Correlation analysis is applied to the subsets to show the dependency between two features <cit.>. This analysis can indicate whether the loan amount impacts the number of borrowers charged off. Correlation analysis provides additional exposure to the data, which might strengthen the acceptance or rejection of the three hypotheses<cit.>.
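A possible SciPy sketch of this correlation analysis is shown below; because the charge-off status is binary, a point-biserial correlation is a natural choice, and the arrays a and b are assumed to be the balanced subsamples from the previous sketch.

```python
# Hedged sketch: correlation between loan amount and the binary charge-off label.
import numpy as np
from scipy import stats

amounts = np.concatenate([a, b])                              # loan amounts from both groups
labels = np.concatenate([np.ones(a.size), np.zeros(b.size)])  # 1 = charged off, 0 = on time

r_pb, p_pb = stats.pointbiserialr(labels, amounts)
print(f"point-biserial r = {r_pb:.4f} (p = {p_pb:.4f})")
```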
§.§ Results Visualization and Interpretation
The results of statistical analysis methods are visualized and interpreted to verify whether the hypotheses are accepted. Also, the visualization of results allows the company’s data scientists to explore whether such outcomes from various techniques converge for decision-making and conclusion purposes.
§ SUMMARY OF RESULTS
To perform an accurate T-Test, several data requirements must be met: a) test variables are continuous; b) test variables (observations) are independent; c) subsets are randomly selected; d) the data distribution is approximately normal; e) the variance scores of the subsets and the population are approximately consistent; and f) there are no outliers <cit.>. In addition to these criteria, a balanced dataset design is required to conduct a meaningful ANOVA test, where the number of subjects in each group needs to be equal <cit.>. Also, an ideal correlation analysis requires data to be independently collected as paired samples, preferably continuous numeric values <cit.>.
§.§ Data Analysis
The first step of data analysis is exploring the distribution of observations regarding the number of on-time borrowers versus those who have charged off. The next step is to downsample the charged-off samples into subsets of 1000 observations. The same procedure was applied to on-time borrowers’ observations (non-charged-off), and 1000 samples were randomly selected; thus, each subset included 2000 samples with the two classes equally represented <cit.>. The mean, standard deviation, and variance of each subset were calculated. The statistical measures of the subsets are highly similar, which suggests the need for statistical testing to produce interpretable results. Figure <ref> shows a histogram of each subset, where the number of bins is automatically calculated from the data (bins=10). The histogram results indicate that most of the issued loan amounts are in the range [$5000, $20000].
§.§.§ Hypothesis 1
The G*Power statistical software application <cit.> was used to perform a T-Test on each subset of 2000 samples, with charged-off and on-time borrowers' observations equally represented. One-tailed T-Tests were conducted using an alpha error probability of 0.05 and a power of 0.95 (1 – beta error probability) to produce an actual power (the decision-making criterion) for each subset. The results demonstrated that the actual power values were greater than 0.95, suggesting that the null hypothesis can be rejected, meaning that the “Loan Amount” affects whether a borrower charges off.
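An analogous power calculation can be sketched in Python with statsmodels rather than G*Power; the arrays a and b are assumed to be the subsampled groups from the earlier sketch, and the effect size is estimated as Cohen's d.

```python
# Hedged sketch: achieved power of a one-tailed two-sample T-Test at alpha = 0.05
# with n = 1000 per group, given the observed effect size.
import numpy as np
from statsmodels.stats.power import TTestIndPower

pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
effect_size = (a.mean() - b.mean()) / pooled_sd               # Cohen's d

power = TTestIndPower().power(effect_size=effect_size, nobs1=1000,
                              alpha=0.05, ratio=1.0, alternative="larger")
print(f"Cohen's d = {effect_size:.3f},  achieved power = {power:.3f}")
```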
An ANOVA test was conducted on each subset using G*Power; the outcomes demonstrate that the actual power values are higher than 0.95, suggesting that the null hypothesis can be rejected, which means the two groups differ significantly and that the “Loan Amount” affects whether a borrower charges off.
The correlation analysis was performed against each subset and produced scores of -0.005255, 0.061228, and 0.007396 per subset, where the results indicate no strong correlation between the loan amount and the status of charged-off borrowers. The correlation results are not aligned with the T-Tests, suggesting that further analysis is needed.
§.§.§ Hypothesis 2
To explore the second hypothesis regarding a relationship between higher “Loan Amount” and “Charged-off,” each subset was sorted in descending order by loan amount, and the top 25% of observations were selected for analysis. The results revealed that all actual power values were higher than 0.95, suggesting that the null hypothesis should be rejected and indicating a strong relationship between the loan amount and charged-off borrowers.
§.§.§ Hypothesis 3
The third hypothesis is that the bottom 25% of loan amounts would also show a statistical relationship with the charged-off borrowers. Each subset was sorted in descending order regarding loan amount attributed, and the bottom 25% of observations were selected. The two-tailed T-Test (conducted by G*Power) revealed a strong relationship between the loan amount and charged-off accounts.
§ DISCUSSION
The company formulated a hypothesis to explore the impact of the “Loan Amount” attribute on the “Charge OFF Flag” attribute, which shows whether a borrower has repaid the loan or charged it off. To do so, LendingClub decided to conduct T-Test and ANOVA hypothesis tests and a correlation analysis. The hypothesis testing revealed a statistically significant difference at p-values less than .05, which is interpreted as an indication of the impact of the loan amount on loan repayment. However, the correlation analysis produced a low score, which disagreed with the results of hypothesis testing, and the company decided to perform a more in-depth analysis to locate the source of such divergence.
§.§ Steps in Statistical Analysis
Statistical analysis includes various steps, such as data exploration, hypothesis testing, and visualization, where the interpretation of results is the last step that aims to explain the results of each step (or most steps) of the analysis <cit.>. In general, an explanation of statistical results often covers four main areas: a) sample size, b) metrics of central tendency, c) distribution of data, and d) hypothesis testing <cit.>.
§.§.§ Dataset or Sample Size
The number of observations available for statistical analysis plays a crucial role in interpreting results. This number demonstrates whether the samples (observations) can be considered representative of analyzed data <cit.>. A significant difference between statistics and machine learning exists in terms of the number of samples required for experiments, where, for example, 50 observations can represent a population for statistical analysis. A significantly larger dataset is often required for developing a machine learning model.
§.§.§ Measures of Central Tendency
The mean, median, and mode of observations used for statistical analysis, along with the variance and standard deviation (i.e., measures of central tendency), reveal the central gravity of observations <cit.>. Interpreting those metrics enables practitioners to discover outliers in the observations and explore the possibility of removing them from the analysis. Unlike machine learning model development, where outliers might not impact results significantly, outliers here can affect statistical results by biasing the results towards that extreme.
§.§.§ Data Distribution
Spreading data by calculating the observation variance can show how samples are distributed among a population <cit.>. Also, exploring data distribution by calculating the histogram of data can reveal the type of data distribution (i.e., normal distribution). It also indicates whether the data are skewed towards the left or right of the histogram <cit.>. Interpreting the data distribution also reveals whether the data are multimodal, where observations come from two or more distributions. Moreover, such interoperation can be used for accurate data normalization, removing outliers, and properly formulating hypotheses for future analyses or reiterations of the current analysis <cit.>.
§.§.§ Hypothesis Testing
Interpretation of hypothesis testing comprises two steps: a) exploring the logic of formulating such a hypothesis and b) exploring the results of hypothesis testing <cit.>. In the first step, statisticians review the reasons for forming such a hypothesis by studying documents related to the business aspects of an organization. For example, statisticians can only formulate a hypothesis for analysis because they have considered the types/amounts of loans granted as dependent variables (inputs) when predicting whether borrowers could repay <cit.>. The logic behind such a hypothesis is explored and interpreted once the data are analyzed and the results produced. The second step is to interpret the hypothesis testing results, determine whether the hypothesis is accepted or rejected, and explore the confidence interval of such interpretations <cit.>. For example, the interpretation of hypothesis testing results for types of loans and successful repayment could potentially reveal a) whether types/amounts of loans are adequate metrics for predicting risks associated with a borrower; and b) how an organization can mitigate potential risks and update their criteria for granting loans <cit.>.
§.§ Limitations in Statistical Analysis
Statistical analysis encounters various limitations that make the interpretation of results challenging. As discussed earlier, the primary challenge of statistical analysis, relative to machine learning techniques, is the number of observations required to perform analysis <cit.>. A standard practice in statistical analysis is to sample a population randomly and test hypotheses against the subset of data that can raise concerns about whether the generated subset is a true representative of data <cit.>. By contrast, training machine learning algorithms require a significant amount of data, so practitioners assume that the number of samples or observations used to train the algorithms would represent the entire population <cit.>. Another limitation in interpreting the analysis results is how to relate findings to business problems and interpret the outcomes of hypothesis testing to address business problem statements <cit.>.
§.§.§ Small Dataset
The size of the dataset or sample used for statistical analysis plays a crucial role in determining the extent to which the results can be generalized <cit.>. A small sample size imposes significant limitations on statistical analysis, where a small dataset serves as a somewhat unrepresentative sample of the entire population, causing different types of bias in the analysis results <cit.>. Also, a small dataset increases the risk that outliers in each population will negatively impact measures of central tendency that have been calculated based on samples out of distribution. In addition to the problem of outliers discussed earlier, a small dataset makes splitting data into training and testing highly challenging. Although statistical analysis methods employ all samples provided to implement models based on hypothesis testing, practitioners in the field often use unseen data to validate hypothesis testing results <cit.>. Another issue caused by a small sample size is an unpredicted increase in measurement errors where the error metrics used to evaluate the models produce highly varying results. To overcome the limitations imposed by a small dataset, the primary practice is to randomly shuffle the dataset and generate several subsets of data, repeating statistical analysis to ensure the results converge <cit.>.
§.§.§ Cause and Effect
One of the challenges in interpreting statistical results relates to inconsistency between the hypotheses formulated and the outcomes of testing methods. Practitioners interpreting the statistical results might notice that the results are misaligned with the logic of hypothesis tests <cit.>. In such ambiguous circumstances, discovering the cause and effect in statistical analysis results conducted on specific business use cases is challenging since the interpretation disagrees with the predefined scenario <cit.>. This issue can arise when the hypothesis testing design does not cover the useful parameters in testing or when less powerful features and attributes in data are used for hypothesis testing <cit.>. It sometimes happens that practitioners or business teams helping design such statistical analysis misinterpret the results or overlook some findings and/or implications <cit.>. Another source of issues includes a low confidence interval level and results lacking statistical significance <cit.>.
§.§.§ Divergence of Results Obtained from Various Methods
A common challenge in interpreting statistical analysis results occurs when the results obtained from various techniques diverge <cit.>. It is a widespread practice that statisticians design a statistical analysis using multiple techniques, such as T-Test, ANOVA, or regression, to explore whether the results produced by these techniques align. An agreement between the results from different methods enables an organization to interpret analytical results clearly and make firm recommendations. However, the research shows that hypothesis testing and other methods, such as correlation analysis or machine learning, sometimes produce different results, contrasting with other methods<cit.>. Such an issue indicates that a systematic problem might exist in preparing samples or conducting hypothesis testing. The solution for this type of problem is offered case by case, where practitioners more familiar with the organization’s business scope can suggest methods that produce results closer to the problem statement.
§.§ Business Statistical Analysis and Interpretation
Business statistics, which include various types of analysis, focus on statistical methodologies aligned with an organization’s business scope to improve the decision-making process, mitigate risks to the organization, and increase revenue <cit.>. Interpretation of such analysis is crucial to the organization, and the process is expected to go beyond that of a simple report or presentation. The areas covered by business statistics include a) customer behavior prediction and trend extraction; b) data exploration, hypothesis testing, and interpretation, such as extensive visualization; c) enhancing business performance from various angles; and d) improving decision-making processes <cit.>. To achieve such targets, business data analysts understand their organization’s business objectives and explore data and results. Also, the root cause analysis is performed to extract in-depth technical insights regarding the organization’s vulnerabilities, enabling the organization to inform its decision-making process <cit.>.
§.§ Reflection on the Statistical Analysis Process
The findings from the initial statistical application enable the company to redesign the statistical analysis processes to concentrate on those attributes that more substantially impact their business. Feature engineering—a systematic methodology—is necessary to reveal the relationships between dependent attributes and target variables <cit.>. Also, the company aims to explore other features highly correlated with potential target variables from the business perspective but uncorrelated with other dependent attributes <cit.>.
§.§.§ Potential Improvement
The process of statistical analysis at LendingClub requires several changes to better serve the company’s business needs. The primary targets are to enhance the process of issuing loans, such as the duration of the loan approval process, and to mitigate financial risks to the company by offering borrowers a data-driven loan amount. LendingClub is to apply such changes to the statistical analysis and decision-making process by employing big data infrastructure for advanced multi-model data collection and analytics. In the first step, the company needs a plan demonstrating how to onboard new technology and its costs. The second step includes a broader statistical analysis, such as hypothesis testing, and uses the current data to assess whether specific statistical applications could broadly improve the company’s performance. In the third step, LendingClub conducts research and recruits a third party to develop the required infrastructure.
§.§.§ Required Infrastructure
Onboarding a large-scale system, such as an enabled big data analytics platform, is a significant change to LendingClub, where modifications have been performed to everything from databases to reporting systems. The first stage is to decide whether LendingClub would adopt a big data platform to the current system or entirely migrate to the new model. This decision allows the stakeholder to estimate the cost of a big data platform and start planning. Although the cost of system adaptation or migration to the big data platform requires detailed information, the migration to a cloud environment, for example, offering various big data services, would be a potential expansion of LendingClub’s analytics in the future. Figure <ref> illustrates the proposed steps for migrating the LendingClub data collection and analytics pipeline to a cloud-based environment that offers big data services such as Amazon Web Services (AWS) <cit.>. These steps consist of a) cloud assessment, b) proof of concept, c) data migration, d) application migration, e) leverage of the cloud, and f) optimization.
§.§ Proposed Large-Scale Plan
The large-scale plan to enhance the current statistical analysis pipeline consists of two primary phases: a) designing and implementing an end-to-end data collection and processing pipeline that offers big data analytics, and b) increasing the number and quality of features <cit.>. The current data collection pipeline collects data from various sources, and no broadly systematic methodology is employed to acquire such data. Gathering data from different providers (in-house or third-party) involves an extensive preprocessing pipeline, which might remove many observations to prepare a consistent dataset.
The proposed pipeline illustrated in Figure <ref> offers various capabilities, including big data collection and data stream processing. The first component of the architecture is a user interface that enables it to receive data from external sources where the data could either be stored in a multi-model database or be in the form of real-time messaging input into an allocated database. The collected data can be transferred between data storage and real-time messaging place holders, which offers big data capabilities to host structured and unstructured data. The next architecture layer includes enabled big data processing components for batch processing, which oversees data preparation and preprocessing for further analysis <cit.>.
A similar component—the stream processing unit—prepares and preprocesses data streams for real-time analysis and applications. The preprocessed data are sent to the next component of the architecture, which encompasses the statistical analysis and machine learning methods, where such a block is considered the brain that orchestrates the data analytics. Statistical analysis or machine learning outcomes are stored in a “results database.” The last layer of this orchestration is the user interface block, which enables practitioners in the organization to generate reports with visualizations that can be provided to leadership for decision-making purposes. An extra capability in the new architecture is scheduling automatic training machine learning models or performing statistical analysis.
The second phase of the new data analytics platform aims to enhance the quality of feature selection, which concentrates on those attributes that contribute most to target variables. Quarter-based statistical analysis and feature engineering demonstrate what features should be collected with higher resolution. The advantage of using targeted data collection through particular data attributes is to reduce the cost of on-demand infrastructure by reducing the load on the architecture servers and analytical blocks. However, the main disadvantage of employing such a step is that it decreases the amount of data that can be collected, which might harm statistical analysis or predictive model development. Therefore, the organization must weigh the cost of massive data streaming and collection against the impact of selective data collection.
§ CONCLUSIONS
Statistical applications enable enterprises to establish a data-driven business plan that provides clear objectives to enhance the enterprise’s performance, revenue, and risk management. This work summarized a strategic plan, informed by an already-performed analysis, for LendingClub, a financial company that grants various forms of loans. The statistical results showed that meaningful decision logic could be extracted from the currently collected data. Such results enabled LendingClub to improve its business scope and encouraged the company to onboard a big data platform. The plan recommended employing enhanced feature-engineering capabilities, acquiring large volumes of data per year, and developing predictive models to increase the company’s revenue and lessen potential risks. LendingClub’s plan also seeks to utilize artificial intelligence and machine learning technologies to implement robust models aligned with the company’s business scope.
|
http://arxiv.org/abs/2307.05620v1 | 20230711035604 | Latent Space Perspicacity and Interpretation Enhancement (LS-PIE) Framework | [
"Jesse Stevens",
"Daniel N. Wilke",
"Itumeleng Setshedi"
] | stat.ML | [
"stat.ML",
"cs.LG",
"stat.ME",
"62J20, 62H25, 62H30, 62H20",
"G.3; I.5.1; I.5.2; I.5.4; C.3"
] |
Department of Mechanical and Aeronautical Engineering, University of Pretoria, Lynnwood Rd, Hatfield, Pretoria, 0086
Linear latent variable models such as principal component analysis (PCA), independent component analysis (ICA),
canonical correlation analysis (CCA), and factor analysis (FA) identify latent directions (or loadings) either ordered or unordered. The data is then projected onto the latent directions to obtain their projected representations (or scores). For example, PCA solvers usually rank the principal directions by explaining the most to least variance, while
ICA solvers usually return independent directions unordered and often with single sources spread across multiple directions as multiple sub-sources, which is of severe detriment to their usability and
interpretability.
This paper proposes a general framework to enhance latent space representations for improving the interpretability of
linear latent spaces. Although the concepts in this paper are language agnostic, the framework is written in Python.
This framework automates the clustering and ranking of latent vectors to enhance
the latent information per latent vector, as well as the interpretation of latent vectors.
Several innovative enhancements are incorporated including latent ranking (LR), latent scaling (LS),
latent clustering (LC), and latent condensing (LCON).
For a specified linear latent variable model, LR ranks latent directions according to a specified metric,
LS scales latent directions according to a specified metric, LC automatically clusters latent
directions into a specified number of clusters, while LCON automatically determines
an appropriate number of clusters into which to condense the latent directions for a given metric. Additional functionality of the framework includes single-channel
and multi-channel data sources, data preprocessing strategies such as Hankelisation to seamlessly expand the applicability of linear latent variable models (LLVMs)
to a wider variety of data.
The effectiveness of LR, LS, and LCON is showcased on two crafted foundational problems with two applied latent variable models,
namely, PCA and ICA.
Latent Space Reconstruction Interpretation Scaling Ranking Clustering Condensing
§ INTRODUCTION
Latent variable models are statistical models that aim to describe the relationships between observed variables and unobserved, or latent, variables.
These models assume that the observed variables are generated by the underlying latent variables, which are not directly measured or observed but
are inferred from available data <cit.>. Practically, latent variable models (LVMs) can be classified into reconstruction- and interpretation-centered models
<cit.>.
Reconstruction-centered LVMs identify compressed latent representations that are efficient in reconstructing the variance
in the data, often optimal for the given model flexibility. In turn, interpretation-centered LVMs attempt to identify latent representations
that are interpretable, e.g. independent variance contributing sources, when explaining the variance in the data. The latter often results in lesser
compressed latent representations.
The tasks of these two approaches are distinct as reconstruction-centered models are efficient at compressing data into lower dimensional latent spaces for efficient reconstruction,
while interpretation-centered approaches aim to identify lower-dimensional latent spaces that are interpretable, where contributing factors or sources of variance in
the data are independent and untangled <cit.>.
Ironically, reconstruction-centered LVMs usually present their latent directions
ordered from explaining the most to least variance, or vice versa. These include singular value decomposition (SVD) <cit.><cit.>, principal component analysis (PCA), and conventional singular spectrum analysis (SSA),
giving clear discernability to reconstruction-focussed latent representations, while interpretation-centered LVMs usually return the latent directions unordered, with
single sources spread across multiple directions, making these latent representations less informative and more difficult to discern, interpret and manage.
These include independent component analysis (ICA) <cit.><cit.><cit.>, with a variety of underlying objective functions to be maximised, such as non-Gaussianity measures or proxies such as negentropy, skewness, or kurtosis, or minimisation of mutual information between latent variables.
Our proposed framework addresses this discrepancy by enhancing latent space representations to improve the interpretability of
linear or locally linearised latent spaces. This framework automates the clustering and ranking of latent vectors according to user-specified metrics
to improve interpretability and enhance the latent information.
Several innovative enhancements are incorporated including latent clustering (LC),
latent ranking (LR), and latent condensing (LCON) shown in Figure <ref>. Enhancements can be applied to latent spaces resulting from reconstruction-centered and interpretation-centered LVMs,
to re-rank already ordered latent variables according to an alternative metric, to order unordered latent variables, or to interrogate the influence of pre-processing
or filtering of data on latent interpretability
<cit.>, to mention some use cases. All of these cases have significant practical and research implications.
The effectiveness of LR, LS, LC, and LCON is showcased on two crafted foundational problems for two applied latent variable models,
namely, PCA and ICA, respectively representative of reconstruction-centered and interpretation-centered LVMs.
§ REQUIRED BACKGROUND
§.§ Latent Variable Models
Auto-associative <cit.>, or auto-encoding <cit.> is a fundamental concept in LVMs to make the unsupervised learning problem of finding an appropriate latent representation tractable. A conceptual outline of auto-encoding is shown in Figure <ref>, depicting encoding and decoding. Encoding transforms higher-dimensional input data into a lower-dimensional latent representation
while decoding transforms the latent representations of higher-dimensional data back to their higher-dimensional representations. Variance-driven LVMs compress input data into compact latent representations,
while source-driven latent representations aim to identify informative latent representations that indicate sources contributing to the variance in the data.
Inferencing on latent representations or reconstructed representations enables latent and reconstruction inferencing, respectively.
The aim of this framework is to enhance the latent representations of LVMs for improved and enhanced latent inferencing.
Consider a set of time series data represented by an m × n matrix X̅, where m is the number of observations and n is the number of variables or discrete times at which data are recorded. For time series data, the former relates to the number of samples, while
the latter is the time length of each sample. Latent variable models typically proceed as follows:
* Data standardisation through mean centering or whitening of the data, X.
* Compute the n × n covariance matrix C of the standardized time series data X.
* Find latent directions for the data C by maximising or minimising an objective function plus regularisation terms subject to equality[Equality constraints between latent directions are often enforced
e.g. orthogonality of the latent directions or some transformed representation of the latent directions. This can always be solved by direct optimisation but solving the first-order necessary optimality condition, or a matrix decomposition may in some cases be computationally more efficient <cit.>.] and inequality constraints <cit.>. PCA diagonalises the covariance matrix by finding its eigenvectors and eigenvalues, where the eigenvectors represent the principal components, and the eigenvalues represent the amount of variance explained by each principal component.
* Select k latent directions from a maximum of rank(C). Eigenvalues and their associated eigenvectors are automatically sorted in descending order, from which the k eigenvectors associated with the largest eigenvalues are usually selected. By choosing the eigenvectors corresponding to the largest eigenvalues, the LVM prioritizes reconstruction, as these capture the most significant variation in the time series data.
* The latent representation of a sample is obtained by projecting the standardised time series data X onto the k selected latent directions (a.k.a. loadings) to obtain a k-dimensional latent representation of the sample, often referred to as a k-dimensional score.
* Reconstructing the sample from the latent representation merely requires summing each component of the k-dimensional score multiplied by its respective latent direction.
In this paper, we consider sklearn's PCA as a representative reconstruction-centered LVM, and independent component analysis (ICA), in the form of FastICA, as an interpretation-centered LVM.
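For concreteness, the two representative LVMs can be applied to a standardised data matrix with scikit-learn as sketched below. This is an illustrative usage sketch, not the LS-PIE implementation; the function name, variable names, and the choice of k are placeholders.

import numpy as np
from sklearn.decomposition import PCA, FastICA

def fit_representative_lvms(X: np.ndarray, k: int):
    # X is an (m x n) standardised data matrix; k is the number of latent directions.
    pca = PCA(n_components=k)
    pca_scores = pca.fit_transform(X)        # (m x k) scores, ordered by explained variance
    pca_directions = pca.components_         # (k x n) latent directions

    ica = FastICA(n_components=k, random_state=0)
    ica_scores = ica.fit_transform(X)        # (m x k) scores, unordered
    ica_directions = ica.components_         # (k x n) unmixing rows (latent directions)
    return (pca_scores, pca_directions), (ica_scores, ica_directions)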
§.§ Data sources and channels
In data science, there are various sources of data that data analysts and scientists process to gain insights and make informed decisions.
Time series data refers to a sequence of observations or measurements taken at specific time intervals. Time series data is often collected
from sensors installed in various devices or environments. This can include temperature readings, air quality measurements, pressure recordings,
vibration data, and more. Industries such as manufacturing, energy, and environmental monitoring heavily rely on sensor-generated time series data.
The Internet of Things (IoT) has introduced a wide range of devices that are now equipped with sensors and connected to the internet. These devices generate time series data that can be
used for applications like smart homes, smart cities, and industrial monitoring. Medical devices, wearable devices, and health monitoring systems sense heart rates, blood pressure, glucose levels,
sleep patterns, and other physiological measurements, which aid healthcare analysis, disease detection, and personalised medicine.
Datasets can be single- or multi-channel sensor measurements of single or multiple observations. These multi-channel or multi-observation data can be homogeneous or heterogeneous.
A single observation of single-channel time series data 𝐱∈ R^m+n-1 can be transformed to enable LVMs to operate on the data. One such transformation is
Hankelisation <cit.>
H =
[ x_0 x_1 x_2 ⋯ x_n-1; x_1 x_2 x_3 ⋯ x_n; x_2 x_3 x_4 ⋯ x_n+1; ⋮ ⋮ ⋮ ⋱ ⋮; x_m-1 x_m x_m+1 ⋯ x_m+n-2 ]
while multi-observation or multi-channel sources can be considered in isolation or also transformed to enhance latent inferencing.
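As an illustration of the Hankelisation above (not the LS-PIE code itself), a single-channel series can be arranged into the m × n Hankel matrix H with H[i, j] = x[i + j] as follows; window_length plays the role of n.

import numpy as np

def hankelise(x: np.ndarray, window_length: int) -> np.ndarray:
    # Rows are lagged windows of the series, so H[i, j] = x[i + j] as in the matrix above.
    n = window_length
    m = len(x) - n + 1
    return np.array([x[i:i + n] for i in range(m)])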
§ RELATED WORK
Clustering methods have been proposed to improve the efficacy of ICA <cit.> by using Tree-dependent Component Analysis (TCA), which combines graphical models and a Gaussian stationary contrast function to derive richer dependency classes. Work combining ICA and clustering has mainly focused on the use of ICA in pattern recognition and image classification analysis, while Expectation Maximisation (EM), K-Means, and fuzzy C-Means have shown satisfactory results when applied to imaging <cit.>. In contrast, our proposed LS-PIE framework introduces a generic framework for the enhancement of LVMs through
latent ranking (LR), latent scaling (LS), latent clustering (LC), and latent condensing (LCON).
§ SOFTWARE DESCRIPTION
LS-PIE makes latent ranking (LR), latent scaling (LS), latent clustering (LC), and latent condensing (LCON) accessible for reconstruction-centered or interpretation-centered LVMs.
An outline of LR, LS, LC, and LCON is given to conceptualise the approach followed in each.
§.§ Latent ranking (LR)
Latent ranking (LR) allows the user to specify a metric and then rank the latent variables according to the selected metric. Although several metrics are
readily available, the framework also allows for a metric specified as a user-defined Python function. The LR algorithm is outlined in Algorithm <ref>.
Latent ranking allows for the exploration of latent variables that have already been identified by optimising some regularised objective function subject to constraints. It also enables unordered latent variables to be ordered, or already ordered latent variables to be re-ranked according to some other metric that enhances the interpretation of the current latent variables or explores some of their underlying characteristics.
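A minimal sketch of the idea behind LR is given below; it is not the LS-PIE API, and the default metric (the variance of each latent score) is only one possible choice of ranking metric.

import numpy as np

def latent_rank(scores: np.ndarray, directions: np.ndarray, metric=np.var):
    # scores: (m x k) latent scores; directions: (k x n) latent directions.
    values = np.array([metric(scores[:, j]) for j in range(scores.shape[1])])
    order = np.argsort(values)[::-1]          # rank in descending order of the chosen metric
    return scores[:, order], directions[order], values[order]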
§.§ Latent scaling (LS)
Latent scaling (LS) allows the user to specify a metric by which to scale the length of the latent vectors, e.g. the percentage variance explained. This enhances the
visual interpretation of the latent vectors when plotted to critically interrogate them. The framework supports a number of metrics but also allows for metrics expressed as user-defined Python functions.
The LS algorithm is outlined in Algorithm <ref>.
A practical example is to generate a scaling score,
s_j = θ_j/∑_i = 1^M θ_i,
for each j^th latent direction using some metric θ, with
associated scaling operator
L̃_j = S_j(L_j) = L_j/s_j.
This allows for the scaling of each latent direction based on the scaling score. Scaling scores include the variance explained and the kurtosis. This essentially allows us to highlight latent directions that are prominent in
reconstruction, while hiding latent directions that do not contribute significantly to reconstruction.
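The scaling operator can be sketched as follows, mirroring the two equations above; the choice of metric θ (variance here) and the division by s_j follow the notation as written, and the function is illustrative rather than the LS-PIE implementation.

import numpy as np

def latent_scale(scores: np.ndarray, directions: np.ndarray, metric=np.var):
    theta = np.array([metric(scores[:, j]) for j in range(scores.shape[1])])
    s = theta / theta.sum()                   # scaling score s_j for each latent direction
    return directions / s[:, None]            # scaled directions, as in the operator above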
§.§ Latent clustering (LC)
We also need to counter an additional problem common to some LVMs. Unlike PCA, where, as the number of components increases, the existing components remain unchanged and new components explain less and less of the variance, some LVMs, e.g. ICA, split the same information over more and more components as the number of latent directions increases. Latent clustering combines
similar latent directions, according to a user-defined metric, into a single latent direction through clustering.
For linear models, the maximum number of latent dimensions is dictated by the rank of the data matrix X after standardisation.
Latent clustering (LC) enables the user to specify the number of latent clusters to be identified from the specified number of latent variables, i.e. LC clusters latent directions into a pre-selected number of clusters.
LC can be performed by selecting one of the available similarity or
dissimilarity metrics, or a user-specified Python function, as well as the clustering approach, with BIRCH being the default for LC.
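An illustrative sketch of LC is given below, assuming BIRCH as the default clustering approach; the sign alignment and the cluster-mean condensation are assumptions made for the sketch, not a description of the LS-PIE internals.

import numpy as np
from sklearn.cluster import Birch

def latent_cluster(directions: np.ndarray, n_clusters: int) -> np.ndarray:
    # directions: (k x n) latent directions; returns one representative direction per cluster.
    signs = np.sign(directions @ directions[0])   # align signs so d and -d can cluster together
    signs[signs == 0] = 1.0
    aligned = directions * signs[:, None]
    labels = Birch(n_clusters=n_clusters).fit_predict(aligned)
    return np.array([aligned[labels == c].mean(axis=0) for c in np.unique(labels)])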
§.§ Latent CONdensing (LCON)
Latent condensing extends LC by automatically finding the optimal number of latent dimensions into which to express the latent directions, using an
appropriate clustering algorithm, with DBSCAN being the default for LCON.
§ NUMERICAL INVESTIGATION
The effectiveness of LR, LS, and LCON is showcased on two crafted foundational problems using single-channel data. In both cases, Hankelisation is employed before applying two latent variable models,
namely PCA and ICA.
§.§ Single Channel - Latent ranking (LR), latent scaling (LS), and latent condensing (LCON)
Consider the simplest example
f(t) = sin(2π t),
uniformly sampled at 4000/(12π) samples per second and Hankelised with a window length of 300. The results
of extracting 8 latent variables using PCA and ICA are shown in Figure <ref>(left). Here, we expect identical results for PCA and ICA, merely a single-frequency Fourier sine-cosine decomposition, as shown in Figures <ref>.
Note the improvement in informativeness as latent scaling (LS) is applied. In turn, note the improvement in the informativeness of the latent directions under latent ranking (LR) and the enhancement from latent condensing (LCON) on ICA. For ICA, LC combined the
second and third ranked ICs.
In turn, Figure <ref>(right)
shows a signal with decreasing frequency over time, expressed by f(t) = sin(2π t^0.85). Here, we expect to see some differentiation in the latent directions between PCA and ICA, as shown in Figures <ref>.
The improvement in interpretation and informativeness of the latent directions using LS-PIE is evident. LS-PIE isolates and enhances the essential latent directions, which frees time for a critical interpretation of
the latent directions and for the comparison between LVMs.
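For reference, the setup of the first single-channel experiment can be reproduced along the following lines; the total duration of 12π seconds and the exact sampling grid are assumptions consistent with the stated rate of 4000/(12π) samples per second, and convergence warnings from FastICA on this nearly rank-two Hankel matrix are to be expected.

import numpy as np
from sklearn.decomposition import PCA, FastICA

t = np.linspace(0.0, 12 * np.pi, 4000, endpoint=False)   # 4000 samples over an assumed 12*pi s
x = np.sin(2 * np.pi * t)
window = 300
H = np.array([x[i:i + window] for i in range(len(x) - window + 1)])  # Hankelisation

pca_directions = PCA(n_components=8).fit(H).components_
ica_directions = FastICA(n_components=8, random_state=0, max_iter=1000).fit(H).components_
# The ranking, scaling, and condensing steps sketched above can then be applied to
# compare the informativeness of the PCA and ICA latent directions.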
§ IMPACT
The role of LS-PIE in interrogating LVMs and enhancing latent directions is clearly demonstrated in
two foundational example problems. LS-PIE ensures that the user can focus their time and energy
on interpreting the latent space for latent inference, as opposed to first having to define an informative
latent space. The potential impact of LS-PIE is an improved adoption of interpretation-centered LVMs in signal processing,
vibration-based condition monitoring, actuarial sciences, finance, and the social and physical sciences, as well as the improved
interpretation of reconstruction-centered LVMs.
This initial LS-PIE framework is a latent variable ecosystem intended to enhance the
practical application of LVMs and to centre LVM research activity around latent inference for
interpretation.
§ CONCLUSIONS
LS-PIE improves the interpretability of reconstruction-centered and interpretation-centered LVMs through latent ranking and latent scaling, while enhancing the information spread over latent directions
through latent condensing. Two foundational datasets clearly highlight the benefit of utilising LS-PIE to enhance the informativeness of reconstruction-centered and interpretation-centered LVMs.
The LS-PIE framework is a first step towards an LVM ecosystem that benefits the practical application and research opportunities of LVMs. Future research will develop additional functionality
that benefits LVM research and the practical deployment of LVMs for industrial applications.
§ CONFLICT OF INTEREST
No conflict of interest exists:
We wish to confirm that there are no known conflicts of interest associated with this publication and there has been no significant financial support for this work that could have influenced its outcome.
|
http://arxiv.org/abs/2307.03966v1 | 20230708123510 | Multi-Intent Detection in User Provided Annotations for Programming by Examples Systems | [
"Nischal Ashok Kumar",
"Nitin Gupta",
"Shanmukha Guttula",
"Hima Patel"
] | cs.AI | [
"cs.AI",
"cs.SE"
] |
Both authors contributed equally to the paper
Work done during internship at IBM Research
UMass Amherst
United States
[email protected]
[1]
IBM Research
India
[email protected]
IBM Research
India
[email protected]
IBM Research
India
[email protected]
In mapping enterprise applications, data mapping remains a fundamental part of integration development, but it is time-consuming. An increasing number of applications lack naming standards, and nested field structures further add complexity for integration developers. Once the mapping is done, data transformation is the next challenge for users, since each application expects data to be in a certain format. Also, while building an integration flow, developers need to understand the format of the source and target data fields and come up with a transformation program that can change data from the source to the target format. The problem of automatically generating a transformation program from some specification through the program synthesis paradigm has been studied since the early days of Artificial Intelligence (AI). Programming by Example (PBE) is one such technique, which targets the automatic inference of a computer program to accomplish a format or string conversion task from user-provided input and output samples. To learn the correct intent, a diverse set of samples from the user is required. However, the user may fail to provide a diverse set of samples. This can lead to multiple intents or ambiguity in the input and output samples, and PBE systems can therefore get confused and generate a program with the wrong intent. In this paper, we propose a deep neural network based ambiguity prediction model, which analyzes the input-output strings and maps them to a set of properties responsible for multiple intents. Users can analyze these properties and accordingly provide new samples or modify existing samples, which can help in building a better PBE system for mapping enterprise applications.
Multi-Intent Detection in User Provided Annotations for Programming by Examples Systems
Hima Patel
August 12, 2023
=======================================================================================
§ INTRODUCTION
String transformation in mapping enterprise applications refers to a specific paradigm in the domain of Programming by Example (PBE) approaches, where a computer program learns to capture user intent, expressed through a set of input-output pairs, from a pre-defined set of specifications and constraints <cit.>. The set of specifications and constraints is expressed through a Domain-Specific Language (DSL), which consists of a finite number of atomic functions or string expressions that can be used to formally represent a program for the user to interpret. Most PBE systems <cit.> for string transformation use ranking mechanisms that are either built using heuristics or learned from historical data. These ranking systems are designed to capture two important characteristics: short length and simpler programs. Such ranking systems mostly depend on the quality and number of input and output (I/O) annotation samples to learn a better program. The quality of the I/O samples denotes how good the I/O samples are for generating a single-intent output.
The number of I/O annotation samples can vary depending on the user intent, but fewer is better for the user (as the user has to provide fewer annotations). Therein lies the challenge of learning the correct intent: if examples are too few, then many possible DSL functions can satisfy them, and picking one intent (or program) arbitrarily, or based on a ranking mechanism that favours simplicity and shorter length, might lead to a program with an undesired intent. This might yield a solution that works well only on the given I/O samples but not on unseen samples. Similarly, the quality of the I/O samples (irrespective of a high I/O sample count) plays an important role in generating the correct intent program. The above two challenges are critical for PBE-style systems that must understand the user's intent by analyzing the given I/O samples; otherwise, they can produce a sub-optimal program that works on seen data but does not give the desired outputs for unseen data. Hence, it is important to understand whether the given I/O samples capture the user's desired intent correctly or not.
For illustration, consider the example shown in Table <ref>. "Train" columns denote the I/O samples used to generate a transformation program, and "Test" columns denote the input samples that are passed to the transformation program to generate an output. The GT output column denotes the actual desired output. For each example, the user provides 3 I/O samples to generate a transformation program using <cit.>. In the first example, the user intent is to extract the substring after the "_" character, but here the PROSE system learns a program that transforms the test input "B_DS2345" into the test output "2345" (see the generated output column), which implies that the system learns to extract the last numeric substring, which differs from the user's desired intent. This happens because there can be many possible programs to transform one set of inputs into outputs. Sometimes those programs converge to the same intent, and other times they can lead to multiple intents. For example, in Table <ref>, for the first set of I/O samples, multiple programs are possible. For the test input sample "B_D2S345", the desired output value is "D2345". However, the programs in the Program(s) column generate different values for this example: the first program generates "345", the second program "D2S345", the third program "D2S345", the fourth program "345", and so on. This shows that all these programs consistent with the I/O samples can lead to multiple intents (or outputs) on unseen data. For the above use case, two clear intents are: (a) extract the numeric substring after "_", and (b) extract the substring after "_". But if we look at the second row in Table <ref>, where we replaced the third sample with a better sample "GE_D443 - D443", the first intent program is automatically eliminated from the program list. Hence, assessing the quality of annotations with respect to single or multiple intents is required for better PBE systems. If the user provides sufficient and single-intent-specific samples, the system can easily generalize to the rest of the samples. Hence, there is a need for a system that analyses the I/O samples and can help in finding multiple-intent issues in annotations.
This would help in informing the user about multi-intent issues before generating a transformation program.
Therefore, we propose a framework to assess the quality of I/O samples so that a single, confident program can be accurately predicted. To achieve this goal, we introduce a set of generic properties that help to find ambiguity/multiple intents in a given set of I/O annotation samples. These properties are generic enough for most PBE systems because they are designed by analyzing several PBE systems' DSLs. We propose a deep learning based framework to automatically identify the presence of these properties in the annotations. The proposed framework takes a set of I/O annotation pairs as input and analyzes those samples together to classify the annotations against these properties. The user can utilize this information to enhance the I/O samples and hence generate a more accurate, single-intent, simpler, and shorter program. In summary, the core contributions of our work are as follows:
* A multi-tasking attention-based deep neural network that addresses input and output annotation quality issues so that a program with the correct intent can be generated.
* A set of generic properties, defined after analyzing several PBE systems' DSLs, that help determine whether a given set of I/O samples can lead to multiple intents.
* We present an extensive quantitative analysis on a synthetically generated dataset. We also show the motivation for each module of our proposed framework through an ablation study.
* We also demonstrate the impact of detecting multiple intents and correcting them before building any PBE system.
§ OVERVIEW OF PROPOSED METHODOLOGY
In this section, we present an overview of the proposed methodology, define the set of properties used to detect multiple intents, and formally define the problem setting. For any PBE system, I/O samples play an important role in determining the correct intent program. Examples are an ambiguous form of specification: there can be different programs that are consistent with the provided examples, but these programs differ in their behavior on unseen inputs. If the user does not provide either a large set of examples or a few but good-quality samples, the PBE system may synthesize unintended programs, which can lead to undesired outputs. Hence, there is a need for a framework that can assess the quality of I/O samples with respect to multiple intents before generating the program. To assess the quality of I/O samples, the most important aspect is to understand how well the I/O patterns fit the PBE system's DSL.
The proposed framework (Figure <ref>) consists of two major modules: (a) defining, for I/O annotations, a set of properties that can cause ambiguity or multiple intents - we analyzed several string-transformation-specific DSLs and came up with a generic set of properties that helps to identify whether given I/O samples can lead to multiple intents. The proposed system is generic enough that users can always add new properties based on new DSL functions that can cause multiple intents; and (b) a multiple-intent analyzer - we designed a multi-tasking attention-based deep neural network to detect ambiguity in given I/O samples based on the identified set of properties. The system first analyzes the user's I/O annotation samples using the proposed deep learning framework to detect properties that cause multiple intents or ambiguities. In the next step, the user analyzes the detected properties and, based on them, adds or modifies samples in the I/O annotations to improve the overall annotation quality so that the correct intent program can be learned.
In the next section, we will first discuss the properties that will be helpful to decide whether given I/O samples can cause multiple intents or not. In Section 2.2, we will describe the proposed deep learning-based framework which utilizes these properties to find the presence of multiple intents in given annotations.
§.§ Properties to Detect Multiple Intent
The most important part of finding the ambiguity, or the possibility of multiple intents, in a given annotation is analyzing the I/O samples with respect to the generic characteristics of the operators present in the DSL. Most DSLs that exist in the literature for string-transformation-based PBE systems use similar kinds of operators, such as split, substring with a regex or constant value as an argument, concat, replace, extract first substring, etc. We analyzed several string-manipulation-specific DSLs and came up with five generic properties that can help in detecting multiple intents in the I/O samples. Figure <ref> shows one DSL created by combining operators commonly used in several other DSLs. There can be other string manipulation operators, such as trim, but these are high-level operators and generally do not contribute to the multi-intent scenario. In this paper, we will use the DSL shown in Figure <ref> to illustrate the importance of the defined properties.
Properties of I/O samples for detecting the presence of multiple intents should be tightly bound to the DSL used by the PBE system. At the same time, those properties should also be (1) concise enough to capture implicit or explicit multiple intents and (2) expressive enough to allow transformations to be achieved without any confusion in the ranking between programs. Below, we describe the set of 5 properties and the motivation behind their design.
§.§.§ Similar Length Ambiguity
- This kind of ambiguity can happen when the output substrings of all the I/O pairs can be extracted by applying the same DSL operator on the corresponding inputs and all have the same length. For example, in Table <ref>, example 1, the output substrings (continuous sequences) "123" and "535" are extracted from a similar continuous sequence in the input and have the same length; hence it is not clear whether the user wants to extract everything after the second "_" or just three characters. In terms of the DSL, this kind of ambiguity is mostly possible because the outputs can be generated both by constant-length-based operators, like substring with constant positions, and by pattern-based operators, like split or substring with a pattern. Formally, an example ((I_1, O_1),(I_2, O_2), ..., (I_l, O_l)) satisfies this property when a continuous sequence of characters in each output matches the same continuous sequence of characters in the corresponding input and has the same number of characters across all the output samples. I_l denotes the l^th input sample, O_l denotes the corresponding output sample, and l denotes the total number of I/O samples in one example.
§.§.§ Exact Position Placement Ambiguity
- This kind of ambiguity can happen when the output substrings of all the I/O pairs can be extracted by applying the same DSL operator on the corresponding inputs and the extracted output strings always start or end at the same position in the input string. For example, in Table <ref>, example 2, the output substrings "Kumar" and "Williams" are extracted from a similar continuous sequence in the input and also start at the same position in the input, i.e., 5; hence it is not clear whether the user always wants to extract a substring starting at position 5 of the input, or has some other desired intent (extract everything after the space character). In terms of the DSL, this kind of ambiguity is mostly possible because of operators that use constant positions to locate a substring versus operators that use regex- or split-based operations to extract a substring. Formally, an example ((I_1, O_1),(I_2, O_2), ..., (I_l, O_l)) satisfies this property when a continuous sequence of characters in each output matches the same continuous sequence of characters in the corresponding input and always starts or ends at the same position.
§.§.§ Exact Match Ambiguity
This kind of ambiguity can happen when the output substrings of all the I/O pairs can be extracted by applying the same DSL operator on the corresponding inputs and the extracted output substrings have the same string value across annotations. For example, in Table <ref>, example 3, the output substrings "11" and "11" are extracted from a similar continuous sequence in the input and also have the same string value, i.e., 11; hence it is not clear whether the user always wants the constant value 11 in the output or wants to extract this value from the input string. In terms of the DSL, this kind of ambiguity is mostly possible because of operators that use constants versus operators like split/substring that extract values from the input string itself. Formally, an example ((I_1, O_1),(I_2, O_2), ..., (I_l, O_l)) satisfies this property when a continuous sequence of characters in each output matches the same continuous sequence of characters in the corresponding input and has the same value across samples.
§.§.§ Similar in Token Type Ambiguity
This kind of ambiguity can happen when the output substrings of all the I/O pairs can be extracted by applying the same DSL operator on the corresponding inputs and the extracted output substrings are of the same token type across I/O pairs. For example, in Table <ref>, example 4, the output substrings "123" and "53" are extracted from a similar continuous sequence in the input and have the same value type; hence it is not clear whether the user always wants to extract the same data type or something else. Three types of tokens are possible: Alphabet Tokens, which consist of all uppercase and lowercase English alphabets; Numeric Tokens, which consist of digits from 0 to 9; and Special-Character Tokens, which consist of all printable special characters on the keyboard. Hence, we say that an example satisfies the similar-token-type property if all of its continuous output substrings are either all Alphabet Tokens, all Numeric Tokens, or all Special-Character Tokens. In terms of the DSL, this kind of ambiguity is mostly possible because of operators that use a specific set of regex positions to locate a substring versus operators like split/substring that extract values from the string itself. Formally, an example satisfies this property when a continuous sequence of characters in each output matches the same continuous sequence of characters in the corresponding input and has the same value type.
§.§.§ Repeating Characters Ambiguity
This kind of ambiguity can happen when the output substring of an I/O pair can be extracted by applying the same DSL operator on the corresponding input and multiple instances of that output substring are present in the input. For example, in Table <ref>, example 5, the output substrings "1" and "2" can be extracted from two similar positions in the input. Those positions can be defined by low-level operators, like constant positions or regex, or by high-level operators, like split. In this case, the common substring occurs at two constant positions in the input, i.e., positions 3 and 9; hence it is not clear whether the user wants to extract the substring from position 3 or from position 9. This is a DSL-independent ambiguity, which can happen because the user provided samples in a way that internally generates such ambiguity. Formally, an example satisfies this property when a continuous sequence of characters in an output matches multiple instances of a continuous sequence of characters in the input.
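Before describing the learned detector, the following illustrative heuristic conveys how one of these properties could be checked rule-wise for the simple case of a single output substring; it is only a sketch and is not the detection approach proposed in this paper.

def exact_match_ambiguous(example):
    # example: list of (input_str, output_str) pairs, e.g. the three I/O samples of one example.
    outputs = [o for _, o in example]
    same_value = len(set(outputs)) == 1                 # all outputs share the same string value
    contained = all(o in i for i, o in example)         # and each occurs verbatim in its input
    return same_value and contained

# For instance, the Exact Match example used later in the saliency discussion would be flagged:
# exact_match_ambiguous([("niti abc123", "abc123"), ("klop abc123", "abc123"),
#                        ("xyz abc123", "abc123")]) returns True.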
§.§ Problem Formulation
Given a set of l input-output annotations ((I_1, O_1), ..., (I_l, O_l)) and a set of p properties (P_1, P_2, ..., P_p) that can help to detect multiple intents in I/O annotations, the goal of this task is to answer the question "Is there any multi-intent or ambiguity present in the I/O samples?" and, if yes, to determine what kinds of ambiguities exist. In this paper, p is set to 5, as we designed and discussed five properties in the last section that can hinder the generalization of PBE systems. To learn to detect this set of ambiguities, we design a multi-tasking attention-based deep neural network model.
We first generate a set of I/O annotation examples corresponding to each of the five ambiguous properties. We refer to a single I/O pair (I_1, O_1) as a sample and to a group of I/O pairs used to learn a program with any PBE system as an example. Here, l denotes the total number of samples (I/O pairs) used for each example. In this work, we use l=3, which means each example contains three I/O samples. An example can also exhibit issues with multiple properties. Intuitively, the goal of our proposed task is to detect the ambiguities in the user-provided I/O annotations so that the user can resolve these ambiguities by adding new samples or modifying existing ones. This will enable PBE systems to generate a single-intent program that performs as desired on unseen samples.
In the proposed framework, we train a multi-tasking attention-based deep neural network model, as shown in Figure <ref>, to learn the ambiguities expressed in the I/O examples. We define each task as learning one type of ambiguity. Consequently, the proposed framework solves five tasks at a time, corresponding to ambiguity detection for the five different properties. Our model follows an encoder-decoder architecture where the encoder is shared among all tasks and the decoder is independent for each task. We pose this problem as a multi-class classification problem: each example is classified against the five ambiguous properties as positive or negative, where positive means that the example is ambiguous for that property and negative means that it is not.
Model Architecture - We model the proposed framework shown in Figure <ref> for detecting ambiguities through a hard-parameter sharing paradigm for multi-task learning. As shown in Figure <ref>, the proposed framework consists of three modules, Common Encoder, Task-Specific Modules, and the Loss module. We discuss each of these modules in subsequent subsections.
§.§.§ Common Encoder
This module is used for encoding the raw I/O strings (see Figure <ref>) and consists of two sub-modules:
* Character Level Embedding Layer - This layer maps each character of the I/O pairs in each example to a 128-dimensional learned embedding space. Given an input (i_1,.., i_n) and an output string (o_1,.., o_m), consisting of sequences of characters of length n and m respectively, this layer outputs a list of character embeddings. Here, n refers to the maximum input length among all the examples in the dataset. Input strings shorter than the maximum length are appended with <pad> tokens to make their length equal to n. The <pad> tokens do not represent characters of the original string; they mark its end and make all sequences the same length so that the deep learning tensor computations are easier.
A similar procedure is followed with the output strings, where the maximum length of output among all the examples in the dataset is m.
Each character i_t and o_s in the input and output sequence is mapped to the 128-dimensional raw embedding e_i_t and e_o_s respectively via a randomly initialized and trainable embedding matrix, where t ∈{1,..n} and s ∈{1,..m}.
* Input Encoder - This layer uses LSTM representations <cit.> applied on the embedding e_i_t of the inputs of each example as shown in Equation <ref>. This layer helps to learn the sequential dependencies of the characters of the inputs. It takes the input embedding of each character e_i_t and passes them through a LSTM layer consisting of n separate LSTM cells with a hidden vector size of 512 as shown in Equation <ref>.
h_i_t = LSTM(e_i_t,h_i_t-1), t ∈ (1...n)
Hence, the Common Encoder takes an I/O pair as input and produces two output representations: the raw 128-dimensional embedding of each character in the output sample, and the LSTM-encoded embedding of the input sample in the I/O pair. These embeddings are generated for each I/O pair in an example. The outputs of the Common Encoder are then utilized by the next modules.
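A hedged PyTorch sketch of the Common Encoder is given below; it is an illustrative re-implementation with the dimensions stated in the text (128-dimensional character embeddings, hidden size 512), and the use of a single shared embedding matrix for input and output characters is an assumption.

import torch
import torch.nn as nn

class CommonEncoder(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int = 128, hidden_dim: int = 512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)              # character embeddings, incl. <pad>
        self.input_lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)

    def forward(self, input_ids: torch.Tensor, output_ids: torch.Tensor):
        # input_ids: (batch, n) and output_ids: (batch, m) hold character indices.
        e_in = self.embed(input_ids)                                # (batch, n, 128)
        e_out = self.embed(output_ids)                              # (batch, m, 128) raw output embeddings
        h_in, _ = self.input_lstm(e_in)                             # (batch, n, 512) input encodings
        return h_in, e_out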
§.§.§ Task Specific Modules
These modules are designed for the detection of each ambiguity property. We have 5 such modules (one for each ambiguity) with a similar structure, and they process the inputs obtained from the Common Encoder. Each Task-Specific Module contains an Additive Attention Output Encoder, a Concatenation Layer, Convolution Neural Network & Pooling Layers, a Classification Layer, and a Softmax Layer. The weights across these 5 task-specific modules are not shared with each other.
* Attention Output Encoder -
In our architecture, we use an additive attention mechanism <cit.> to selectively impart more importance to the parts of the input that have more influence on the output characters and hence obtain a better output sample encoding. Specifically, this layer computes the additive attention a_e_o_s of a single embedded output character e_o_s with respect to the encodings of all the input characters h_i_1..n, as shown in equations <ref> and <ref>. For this, we pass the output from the Input Encoder to the Attention Output Encoder, which first computes the attention weights α_s_1..n, as shown in eq. <ref>, and the corresponding attention vector a_e_o_s, as shown in eq. <ref>, for each output character o_s with respect to all input characters in the I/O pair. Here, W_a and U_a are learnable weight matrices: W_a acts on the output embedding vector e_o_s and U_a acts on the input encodings matrix h_i_1..n. V_a is a learnable vector.
The attention output a_e_o_s is concatenated with the output embedding e_o_s to give c_o_s as shown in equation <ref> and is passed through an LSTM layer with hidden vector size 512 as shown in eq. <ref>.
α_s_1..n = V_atanh(W_ae_o_s + U_ah_i_1..n), s ∈ (1..m)
a_e_o_s = Σ_tα_s_t h_i_t, t ∈ (1..n), s ∈ (1..m)
c_o_s= [ a_e_o_s,e_o_s], s ∈ (1..m)
h_o_s = LSTM(c_o_s,h_o_s-1), s ∈ (1..m)
The Attention Output Encoder outputs m LSTM encodings h_o_1..m for each output string of length m in the l I/O pairs, which are then passed to the next layer (an illustrative code sketch of this attention step is given after this list).
* Concatenation Layer -
For this, we concatenate the l encodings corresponding to l I/O pairs for each example. Detecting ambiguity is possible only by analyzing all the I/O pairs in a given example and not just one I/O pair. These encodings are obtained from the Attention Output Encoder in a row-wise manner as shown in equation <ref>. Here, h_1_o_s refers to the attention-encoded output of the s^th character of the Output O_1 from the first I/O pair. Similarly, h_l_o_s refers to the attention-encoded output of the s^th character of the Output O_l from the l^th I/O pair.
q_s = concat(h_1_o_s, h_2_o_s, ..., h_l_o_s), s ∈ (1..m)
Q = [q_1, q_2, ...., q_m-1, q_m]
The output of the Concatenation Layer is a matrix Q as shown in eq. <ref>. There are a total of m different rows in the matrix corresponding to the m characters of the Outputs in an I/O pair. More specifically, each row of the matrix represents the character-level concatenation of the output encodings from l different examples. This matrix is then passed into the next layer.
* Convolution Neural Network and Pooling Layers -
Convolution Neural Networks (CNNs) are used for finding local dependencies in features. In our architecture, CNNs help us capture the dependencies between adjacent characters and between the corresponding encoded outputs of the I/O pairs. The input to the CNN layer is the matrix Q obtained from the Concatenation Layer for each example. In this layer, we apply 2-dimensional convolution operations with 512 output channels, where each channel contains a kernel of dimension (2, l*512), on the input from the Concatenation Layer. We then apply MaxPooling on the outputs of the CNNs across each channel to obtain a single 512-dimensional vector r for the I/O pairs in an example, as shown in eq. <ref>. This 512-dimensional vector is then passed into the next layer.
r = MaxPool2D(Conv2D(Q))
* Classification Layer - Classification Layer is a fully-connected dense layer with 2 neurons corresponding to either the positive or negative class for each ambiguous property classification to give the classification logits u. This is shown in equation <ref> where W_f and b_f are the weight matrix and the bias vector respectively.
u = W_f r + b_f
Classification logits from the Classification Layer are then passed through the Softmax Layer.
* Softmax Layer - This layer applies the softmax activation function on the classification logits to obtain a probability distribution p over the prediction classes (ambiguous properties) as shown in equation <ref>. Here, z is used for indexing a single class among the positive and the negative classes.
p = exp(u_z)/Σ_z exp(u_z), z ∈ (0, 1)
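As referenced in the Attention Output Encoder item above, a sketch of the additive attention step for a single output character is shown below. It follows the equations of this section; the softmax normalisation of the attention weights is standard in additive (Bahdanau-style) attention and is included here as an assumption, since the equations above list only the raw scores.

import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    def __init__(self, emb_dim: int = 128, hidden_dim: int = 512, attn_dim: int = 512):
        super().__init__()
        self.W_a = nn.Linear(emb_dim, attn_dim, bias=False)     # acts on the output embedding e_o_s
        self.U_a = nn.Linear(hidden_dim, attn_dim, bias=False)  # acts on the input encodings h_i
        self.V_a = nn.Linear(attn_dim, 1, bias=False)

    def forward(self, e_o_s: torch.Tensor, h_i: torch.Tensor) -> torch.Tensor:
        # e_o_s: (batch, emb_dim); h_i: (batch, n, hidden_dim)
        scores = self.V_a(torch.tanh(self.W_a(e_o_s).unsqueeze(1) + self.U_a(h_i)))  # (batch, n, 1)
        alpha = torch.softmax(scores, dim=1)                    # attention weights over input characters
        a = (alpha * h_i).sum(dim=1)                            # attention vector, (batch, hidden_dim)
        return torch.cat([a, e_o_s], dim=-1)                    # c_o_s, fed to the output LSTM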
§.§.§ Loss Calculation
The proposed multi-task learning framework uses the Cross-Entropy loss between the original and predicted labels as the objective function for all five task-specific modules. Equation <ref> denotes the loss from the k^th task-specific module, where k indexes the task-specific modules, p_k is the predicted probability distribution of the k^th task-specific module, and y_k is the corresponding ground-truth distribution.
We obtain the final loss L by taking a weighted sum of the individual losses L_k of the task-specific modules, as shown in equation <ref>, where w_k is the weight corresponding to the k^th loss L_k.
L_k = -Σ[y_klogp_k + (1-y_k)log(1-p_k)]
L = w_1*L_1 + w_2*L_2 + w_3*L_3 + w_4*L_4 + w_5*L_5
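A compact sketch of this weighted multi-task objective is shown below; the per-task weights default to 1, matching the setting used later in the experiments.

import torch.nn.functional as F

def multitask_loss(logits_per_task, labels_per_task, weights=(1.0, 1.0, 1.0, 1.0, 1.0)):
    # logits_per_task: five (batch, 2) tensors; labels_per_task: five (batch,) tensors of 0/1 class labels.
    losses = [F.cross_entropy(u, y) for u, y in zip(logits_per_task, labels_per_task)]
    return sum(w * l for w, l in zip(weights, losses))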
§ RESULTS AND DISCUSSIONS
§.§ Dataset Creation
We created a dataset corresponding to the five different ambiguous properties discussed in Section <ref>. We have written different regexes satisfying each ambiguous property based on a fixed Domain Specific Language (DSL). For each ambiguity property, the regexes generate several examples, and each example consists of 3 I/O pairs. We consider uppercase English characters, lowercase English characters, digits from 0 to 9, and all printable special characters. We generate a total of 100002 individual samples, grouped in an example of 3 samples, to finally produce 33334 examples per ambiguous property. In the next few subsections, we describe the procedure of generating the dataset for each ambiguous property. Table <ref> shows examples corresponding to each property.
§.§.§ Similar Length Ambiguity
For each output substring in an example, we choose a length from a range of 2-9 characters. We limit the output substrings to a maximum of 4 per sample. Each output substring contains a mixture of lowercase and uppercase English alphabets and digits from 0-9. We add random strings to the front and the back of each output substring to construct the input string. We do the same for the other output substrings and finally combine the I/O substrings to make a single I/O pair. We repeat the above process, keeping the output substring length fixed across the samples in a single example, and combine those I/O pairs to make a single example. In our case, we use a set of three I/O pairs per example.
We illustrate the process of creating I/O pairs through the following example. In the first step, we assume an output substring of length three: "abc" for sample 1, "klp" for sample 2, and "12j" for sample 3. In the second step, we add random strings before and after the first output substring, giving the inputs "dfg1#abc#2311" for sample 1, "era#klp#hj1" for sample 2, and "h2ral#12j#klj23jk" for sample 3. In the third step, we create a new output substring, which may or may not follow the similar-length property, and repeat the second step. For example, assume the second output substring has varied length, say "hjuk", "puefhkj", and "jf16hsk". Now, we either append it directly to the input with some delimiter or first add another random string before or after it. In this case, we append it directly using the delimiter "@", so the final input strings become "dfg1#abc#2311@hjuk", "era#klp#hj1@puefhkj", and "h2ral#12j#klj23jk@jf16hsk". We can combine the output substrings using any character or directly; in this example, we combine them directly, which leads to the output samples "abchjuk", "klppuefhkj", and "12jjf16hsk" corresponding to the input samples. We can repeat the same process to generate more output substrings for an example.
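The generation steps above can be summarised in the following simplified sketch for a single output substring per sample; the character pool, the delimiter "#", and the length ranges are illustrative choices, and the real dataset uses up to four output substrings per sample.

import random
import string

POOL = string.ascii_letters + string.digits

def similar_length_example(n_samples: int = 3, sub_len: int = 3):
    example = []
    for _ in range(n_samples):
        out_sub = "".join(random.choices(POOL, k=sub_len))            # same length across samples
        left = "".join(random.choices(POOL, k=random.randint(2, 9)))  # random prefix
        right = "".join(random.choices(POOL, k=random.randint(2, 9))) # random suffix
        example.append((f"{left}#{out_sub}#{right}", out_sub))
    return example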
§.§.§ Exact Position Placement Ambiguity
The process of example generation for this ambiguity will remain almost the same as the “Similar Length Ambiguity" property. The only change is that instead of fixing output substring length across samples, we will fix the output substring's position in the input string.
§.§.§ Exact Match Ambiguity
- In this case, the process differs with respect to output substring value. The output substring value across the I/O pairs within the same example will remain the same. This property inherently also satisfies the Similar Length Ambiguity.
§.§.§ Similar in Token Type Ambiguity
In this case, the process differs with respect to the output substring type; that is, the output substring's token type across the I/O pairs within the same example remains the same. In our work, we consider two such token types, viz. alphabets and numerals. More specifically, the two categories of similar token types are when the output strings contain only uppercase and lowercase alphabets, or only digits from 0-9.
§.§.§ Repeating Characters Ambiguity
In this case, the output substring exists (or repeats itself) at multiple positions in the input.
§.§ Ablation Studies
We compare the results of two major variations of the proposed framework: (a) two different loss functions - Cross-Entropy and Focal Loss, and (b) the importance of each layer, assessed by removing it from the framework. We consider the model in Figure <ref> as the main model, referred to as Our in the results tables. We carry out various ablation studies of the proposed model by removing various components to ascertain the role played by each component in the model. These models are discussed below.
§.§.§ Our_No_CNN:
In this setup, we remove the CNN and the MaxPool layers from the proposed model architecture and only pass the concatenated output encodings to the classification layer.
§.§.§ Our_No_AM
In this setup, we remove the Attention Mechanism from the proposed model. We retain the same output encoder but set the attention weight of each output character over every input character equal to 1 while calculating the attention vector.
§.§.§ Our_GRU:
In this, we replace all LSTM layers and cells with GRU <cit.> cells in the proposed architecture. We retain the same overall architecture and keep the GRU hidden size equal to 512.
§.§ Discussions
§.§.§ Quantitative Results
In Table <ref>, we compare the results of the proposed framework with two different loss functions, Cross-Entropy and Focal Loss. We also provide a quantitative analysis highlighting the importance of each layer: we first remove the layer from the task-specific modules and then report the performance of the resulting model. We show the property-wise performance in Table <ref>. From the results table, we can see that overall Cross-Entropy performs better than Focal Loss. The model was trained with 26,667 examples per ambiguous property for 100 epochs with a batch size of 5. We set the weights of the losses corresponding to the five different ambiguity tasks equally to 1. We report results on a test set of 6,667 examples.
The main model, denoted by Our, performs better than the other variations of the proposed framework when using the same loss metric. We can also observe that removing the attention layer from the main model decreases performance by 10-20% in most cases, which highlights the need for the attention layer. A similar pattern can be observed when we remove the CNN layer from the main model; in some cases, performance drops to around 50%. We can also observe that removing the CNN layer hurts the model more than removing the attention layer, which shows that the CNN part of the architecture plays an important role in ambiguity detection. Furthermore, we see a significant drop in performance in most cases if we replace the LSTM units with GRU units. The reason is that LSTM units are able to capture context better than GRU units when a sufficient number of samples is provided for training <cit.>. Hence, this analysis shows the importance of the different layers in our proposed framework.
Combining all these layers makes the system perform almost 100 percent accurately on the test set, which shows that these ambiguities can be easily learned if we define an architecture that captures context, interrelationships, and the attention of the outputs on the inputs. In some cases, we observe that other variations also give perfect results, which highlights that for those properties a simpler network can also generalize to unseen test data.
§.§.§ Saliency Maps
For a better understanding of the predictions of the proposed model, we use the integrated gradients <cit.> based saliency on the inputs of the examples for visualization. We use three properties (Similar Length, Exact Match, and Repeating Characters) to illustrate the predictions of the learned model, as shown in Figure <ref>. For each of these properties, we use one example (three I/O samples) to visualize the saliency maps. We also use a single substring in the output for ease of visualization, as the visualization becomes harder to interpret when the outputs contain multiple substrings.
The first row in the Figure <ref> denotes the saliency maps corresponding to the Similar Length Ambiguity property for the I/O pair - {"input": ["niti gup", "klop kio", "xyz abc"], "output": ["gup", "kio", "abc"]}. From Figure <ref> (a), we can see that in all the inputs, more importance (shown by lighter colors with high values) is given to the characters which mark the beginning and the end of the part of the string (“gup", “kio", and “abc") which belongs to the output. That is, we can see that a higher saliency score is associated with the hyphen and the @end symbol which mark the beginning and the ending of the output string. Hence, we can conclude that the model is able to learn the Similar Length Ambiguity property.
The second row illustrates the saliency maps for the Exact Match Ambiguity property for the I/O pair - {"input": ["niti abc123", "klop abc123", "xyz abc123"], "output": ["abc123", "abc123", "abc123"]}. Here, it can be seen that, on average, more importance is given to the part of the input which contains the output as compared to the one which does not. That is, the characters corresponding to abc123 have higher saliency values as compared to the other parts like niti, klop, and xyz in the three inputs respectively. Hence, we can conclude that the model is able to recognize the output strings clearly and hence correctly classifying them.
The third row shows the saliency maps for the Repeating Characters Ambiguity for the I/O pair - {"input": ["M%qSFA8qb%We %qSFA8qb%", "1bN%i6Op4%YK%i6Op4%", "Yp%83cGK3%yRv%83cGK3%"], "output": ["qSFA8qb", "i6Op4", "83cGK3"]}. It can be noticed that the characters in the output string have higher saliency values on an average in the input in their second repetition as compared to their first occurrence. This shows that the model is able to well recognize the repeated characters and hence correctly classify them. We have observed similar kinds of patterns for the other ambiguities.
§.§.§ Case Study: Impact of detecting multiple intents and correcting them before building PBE systems
In this section, we discuss how the presence of ambiguity in input and output annotations can affect the output of widely used tools like PROSE <cit.> and Microsoft Excel. Table <ref> shows the different ambiguities detected by the proposed system on 6 examples and also shows whether existing PBE systems are able to learn the correct intent using those sets of I/O pairs. For each example, the user provides three I/O samples to convey the desired intent. However, as we can see from the detected-ambiguities column, each of these examples has some kind of ambiguity or multi-intent issue. The effect of this is reflected in the mismatch between the PROSE/Excel output columns and the GT output column. This shows the need for a framework that helps to uncover multi-intent quality issues in annotations before generating a program through any PBE system.
In the first example in Table <ref>, the system detects "Similar in Token Type Ambiguity", because the (single) substring across the outputs has the same token type. This can lead to a multiple-intent issue: does the user want to extract everything after "_" irrespective of the token/data type, or only the numeric content? The same multi-intent confusion is reflected in the outputs of the two PBE systems on the input "B_DS2345": (a) the PROSE output is "2345", which means the PROSE framework learns to extract the numeric content after "_", and (b) the Excel output is "DS2345", which means Excel learns to extract all the content after "_". So, it is good if the user can first analyze the detected ambiguity and, if that ambiguity matters for the user's actual intent, provide new samples or change existing samples accordingly. For the first example, the user's intent is to extract everything after "_" and the detected ambiguity is Similar in Token Type; the user can therefore modify or add one new sample where the extracted output string also contains non-numeric characters. With this new additional I/O sample (highlighted in bold) provided by the user after analyzing the detected ambiguity, both PROSE and Excel are able to learn the correct intent. This is reflected in the output columns, whose values now match the GT column (see Table <ref>).
Similarly, if we analyze the fifth example in Table <ref>, the system detects multiple ambiguities. Exact Position, Similar Length, and Similar in Token Type ambiguities exist for both output substrings (Mohan/Abhil/Johny and Mr.), while Exact Match Ambiguity exists only for the "MR" substring in the output. For the first output substring (Mohan/Abhil/Johny), the user is fine with the Exact Position and Similar in Token Type ambiguities but wants to add a new example to remove the Similar Length Ambiguity. Similarly, for the second output substring, the user is fine with all the detected ambiguities except the Exact Position Placement Ambiguity, because the user's goal is not to extract this information from the input string but to add it as a constant string in the output. After analyzing these properties, the user can provide new samples that remove these ambiguities, allowing the correct intent to be learned. We can also see from the table that, due to these ambiguities, both PROSE and Excel learn the intent incorrectly. However, after analyzing the ambiguities, the user provided the new sample shown in Table <ref>, which helps the systems learn the correct intent, as can be seen from the correct output on the test data.
Similarly, by providing the new samples shown in Table <ref> for the other examples, the user is able to resolve the multi-intent quality issues and learn the correct intent through existing PBE frameworks. This shows the effectiveness of our proposed framework in detecting ambiguity for PBE systems, specifically in the string transformation domain.
§ RELATED WORK
Task-specific string transformation can be achieved via both program synthesis and program induction models. Induction-based approaches obviate the need for a DSL, since they are trained to generate the required output directly from the input string, and have been used in tasks like array sorting <cit.>, long binary multiplication <cit.>, etc. However, induction models are not feasible for the string transformation domain, as they need to be re-trained for each task and have lower generalization accuracy on unseen samples than synthesis models <cit.>. In the literature, both neural-guided and symbolic approaches have been widely used for program synthesis.
Several neural-guided approaches have been proposed in the last few years for program synthesis <cit.>. A sequential encoder-decoder network to infer transformation programs that are robust to noise present in input-output strings, where hand-engineered symbolic systems fail terribly, is proposed in <cit.>. A different variant of an encoder-decoder network, in which the input and output string encoders are not cascaded but work in parallel to infer program sequences, is proposed in <cit.>. In <cit.>, a novel neural architecture consisting of an R3NN module that synthesizes a program by incrementally expanding partial programs is used. These networks can be trained end-to-end and do not require any deductive algorithm for searching the hypothesis space. However, they do not guarantee that the inferred programs are consistent with the observed set of input-output pairs, and training on synthetically generated datasets results in poor generalizability on real-world tasks.
Symbolic program synthesis approaches operate by dividing the required transformation task into sub-tasks and searching the hypothesis space of regex-based string expressions to solve each of them. However, the smart search and ranking
strategies needed to efficiently navigate the huge hypothesis space require significant engineering effort and domain knowledge. One of the earliest attempts to solve the problem
of program synthesis pioneered the Flash-Fill algorithm, designed to infer a specification-satisfying string transformation program in the
form of Abstract Syntax Trees (ASTs) <cit.>. The
PROSE system from <cit.> employs several hand-crafted heuristics to design ranking functions for deductive search. Systems like PROSE perform well on tasks similar to previously encountered tasks but face generalizability issues when exposed to new, unseen tasks. This is also demonstrated in Table <ref>, where the system infers one intent that is satisfied by the seen examples but fails on new unseen test data. Since PBE systems for string transformations rely on input and output annotations, it is necessary to provide them with non-ambiguous input and output samples. No existing work in the literature addresses finding ambiguity or multiple-intent quality issues in input and output annotations and providing that information to the user so that the user can inspect the detected ambiguities and accordingly modify existing samples or provide new ones. Such a system helps to capture the user's intent more clearly and makes the PBE system generalize to unseen data. Hence, in this paper we focus on finding quality issues in input-output annotations with respect to multiple intents, in order to learn the correct intent.
§ CONCLUSION
This paper aims to solve the problem of detecting ambiguity in the user-provided I/O annotations for PBE systems, which otherwise leads to the generation of wrong-intent programs. To the best of our knowledge, our proposed framework is the first to address this issue at the input and output annotation level. To solve it, we propose an extensible multi-tasking attention-based DNN to find multiple intents in the I/O samples. We also define a set of generic properties that help in detecting multiple intents in the annotations. We have carried out a quantitative analysis of different variations of the proposed model architecture to show the impact of the proposed system's modules. We have also illustrated the effectiveness of the proposed model through saliency maps and by using the outputs of an existing PBE system. A natural extension of our work is to use the detected ambiguity properties to automatically generate new input and output samples and to improve the program search space.
|
http://arxiv.org/abs/2307.07332v1 | 20230714131832 | Nuclear Physics in the Era of Quantum Computing and Quantum Machine Learning | [
"J. E. García-Ramos",
"A. Sáiz",
"J. M. Arias",
"L. Lamata",
"P. Pérez-Fernández"
] | quant-ph | [
"quant-ph",
"nucl-ex",
"nucl-th"
] |
Nuclear Physics in the Era of Quantum Computing and Quantum Machine Learning
J. E. García-Ramos, A. Sáiz, J. M. Arias, L. Lamata, and P. Pérez-Fernández
August 12, 2023
==========================================================================================================================================================================================================================
Dr. José-Enrique García-Ramos
Departamento de Ciencias Integradas y Centro de Estudios Avanzados en Física, Matemática y Computación, Universidad de Huelva, 21071 Huelva, Spain
Instituto Carlos I de Física Teórica y Computacional, Universidad de Granada, Fuentenueva s/n, 18071 Granada, Spain
Email Address: [email protected]
Álvaro Sáiz
Departamento de Física Aplicada III, Escuela Técnica Superior de Ingeniería, Universidad de Sevilla, E-41092 Sevilla, Spain.
Email Address: [email protected]
Dr. José M. Arias
Departamento de Física Atómica, Molecular y Nuclear, Facultad de Física, Universidad de Sevilla, Apartado 1065, E-41080 Sevilla, Spain
Instituto Carlos I de Física Teórica y Computacional, Universidad de Granada, Fuentenueva s/n, 18071 Granada, Spain
Email Address: [email protected]
Dr. Lucas Lamata
Departamento de Física Atómica, Molecular y Nuclear, Facultad de Física, Universidad de Sevilla, Apartado 1065, E-41080 Sevilla, Spain
Instituto Carlos I de Física Teórica y Computacional, Universidad de Granada, Fuentenueva s/n, 18071 Granada, Spain
Email Address: [email protected]
Dr. Pedro Pérez-Fernández
Departamento de Física Aplicada III, Escuela Técnica Superior de Ingeniería, Universidad de Sevilla, E-41092 Sevilla, Spain.
Instituto Carlos I de Física Teórica y Computacional, Universidad de Granada, Fuentenueva s/n, 18071 Granada, Spain
Email Address: [email protected]
In this paper, the application of quantum simulations and quantum machine learning to solve low-energy nuclear physics problems is explored. The use of quantum computing to deal with nuclear physics problems is, in general, in its infancy and, in particular, the use of quantum machine learning in the realm of nuclear physics at low energy is almost nonexistent. We present here three specific examples where the use of quantum computing and quantum machine learning provides, or could provide in the future, a possible computational advantage: i) the determination of the phase/shape in schematic nuclear models, ii) the calculation of the ground state energy of a nuclear shell model-type Hamiltonian and iii) the identification of particles or the determination of trajectories in nuclear physics experiments.
§ INTRODUCTION
In this perspective review, we discuss the link between low-energy nuclear physics and the emerging research field of quantum computing <cit.>, which includes quantum simulations and quantum machine learning (QML) techniques. While both research fields have their own distinct problems and applications, they can be combined in a fruitful collaboration to yield new and relevant advancements. Although quantum simulations in nuclear physics have made progress in recent years, they have mainly focused on toy models or simple scenarios. However, the utilization of QML techniques in low energy nuclear physics is virtually non-existent. The objective of this perspective is to demonstrate the value of studying the combined fields of nuclear physics and quantum computing by providing a few examples and highlighting, to researchers in both domains, that this area of research holds great potential. As a matter of fact, in several long term plans and white papers these new research avenues have already been considered and promoted, namely, <cit.> and <cit.> (US Department of Energy), the white paper <cit.>, <cit.> (US Department of Energy, National Science Foundation and National Institute of Standards and Technology) or the NuPECC Long Range Plan 2024 (still under discussion).
The structure of this work is as follows: Section <ref> revises the fundamentals of nuclear physics and presents its potential connections with quantum computing and QML. Next, in Section <ref>, we provide a brief overview of quantum simulations and QML. Section <ref> explores the current connections between quantum simulations, QML, and nuclear physics, illustrating this through a few examples such as: i) determining the shape/phase of a nucleus using the time evolution of an appropriated observable, ii) calculating the ground state energy of nuclei, and iii) identifying particles and reconstructing particle trajectories. Finally, in Section <ref>, we present our conclusions and provide an outlook for future research.
§ THE NUCLEAR PHYSICS REALM
In this section, we present in an abridged way, first, the most widely used nuclear physics models intended for low energy nuclear physics, second, the main lines of research of nuclear physics in quantum computing and, third, the most up to date applications of machine learning (ML) to treat nuclear physics problems.
§.§ Nuclear Models
The study of the atomic nucleus is difficult because it is a quantum many-body system in which two types of nucleons, protons and neutrons, interact via a force that is not completely known. In addition, the number of particles is not large enough for the powerful machinery of statistical mechanics to be applicable <cit.>; therefore, a large number of degrees of freedom has to be considered explicitly. Consequently, to understand nuclear structure one has to rely on nuclear models, each of which can only partially describe the nuclear degrees of freedom. The situation is radically different from quantum chemistry, where the interaction is of Coulomb nature and therefore fully known,
although a large number of particles must still be considered. In the atomic nucleus there is no natural center for the potential, as there is in the atom: the field that the nucleons feel is created by the nucleons themselves. Hence, it is not even obvious whether a mean field can be defined <cit.>. The theoretical description of nuclear structure at low energy is based on three main approaches: i) the microscopic approach, whose basic realization is the shell model <cit.>, ii) the mean-field approach <cit.>, and iii) the macroscopic approach based on the liquid drop model <cit.>.
* The nuclear shell model. It describes the behavior of nucleons (protons and neutrons) in an atomic nucleus. It is based on the independent-particle motion of nucleons, which makes it possible to define single-particle orbits and, therefore, single-particle levels that the nucleons occupy within the nucleus, in a manner similar to electrons in an atom, i.e., in the atomic shell model. The model assumes that nucleons move in a mean field created by the whole set of nucleons and exhibit a quantum behavior. Once the nucleons are distributed over the shells, the residual interaction between them is taken into account and a full diagonalization of the Hamiltonian is needed <cit.>. The shell model successfully explains many nuclear properties, such as nuclear stability, magic numbers (nuclei with particularly stable configurations), nuclear spectra, and nuclear reactions.
* The mean field, the beyond mean field approximation and the use of energy functional theories <cit.>. These rely on the assumption that nucleons move independently in an effective average potential generated by all other nucleons. The interaction can be obtained globally in a self-consistent way for the whole mass table, fixing a set of free coefficients to reproduce ground state properties of all known nuclei. This simplification allows for the treatment of complex many-body systems by reducing the problem to an effective single-particle problem. Theoretical foundations of mean-field models include: i) the Hartree-Fock or Hartree-Fock-Bogoliubov formalism, ii) density functional theory (DFT), and iii) mean-field potentials and self-consistency. Applications of mean-field models in nuclear structure include the calculations of nuclear binding energies and masses, nuclear deformation and shape transitions, shell structure and magic numbers, and collective motion and excitations, among others.
* The collective model. Also known as the liquid drop model, it treats the nucleus as a droplet of incompressible nuclear matter. This model assumes that the nucleus behaves like a classical liquid drop, with nucleons interacting through attractive and repulsive forces. It explains nuclear phenomena by considering collective motion, such as rotation and vibration, of the nuclear surface. The model successfully describes phenomena like nuclear deformation, fission, and certain aspects of nuclear spectra. Along the same lines, the Kumar-Baranger model <cit.>, or the generalized collective model, is an extension of the collective Bohr model <cit.>. It takes into account the coupling between collective motion and single-particle excitations within the nucleus. This model considers both vibrational and rotational degrees of freedom and is especially useful for describing transitional nuclei that exhibit characteristics of both vibrational and rotational motion.
These models, and their extensions, have significantly contributed to our understanding of nuclear structure. The difficulty in solving the nuclear problem, apart from the issues related to the interaction, is the dimension of the Hilbert space, which for medium-mass and heavy nuclei around the center of a major shell, either in protons, neutrons, or both, is far beyond present and even future computational capabilities. In the case of the shell model, an explosion of the dimension of the Hilbert space occurs when a large number of nucleons has to be distributed over large major shells. Moreover, in order to explain certain phenomena it is necessary to allow multi-particle multi-hole excitations across two major shells, which further inflates the Hilbert space dimension. In the case of the energy functional approach, the major issue is connected with the generation of states with well-defined quantum numbers, i.e., with going beyond the mean field using, for instance, the generator coordinate method <cit.>, which involves the evaluation of integrals that, once more, can be computationally very costly. The use of quantum computing and, in particular, quantum machine learning can open a new avenue to deal with these nuclear problems in the near future.
Nowadays, the implementation of state-of-the-art nuclear shell model or beyond-mean-field problems on Noisy Intermediate-Scale Quantum (NISQ) computers is not yet possible, not only because of the limited number of available qubits, but also because of the need to develop new algorithms to implement the nuclear problem on the available quantum hardware. For this reason, simpler models that retain the main characteristics of the aforementioned approaches have recently been used: i) the Lipkin-Meshkov-Glick (LMG) model <cit.>, which represents particular nuclear systems, is of great interest in quantum optics, and also describes, in an approximate way, certain solid-state systems, and ii) the Agassi model <cit.>, which mimics the interplay between pairing and quadrupole interactions, as in the Kumar-Baranger model.
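As a rough illustration of the classical side of such schematic models, the following Python sketch builds an LMG-type Hamiltonian in the collective quasi-spin basis and diagonalizes it exactly with numpy; the parametrization H = ε J_z - (V/2)(J_+^2 + J_-^2) is one common convention, and the coupling values are arbitrary placeholders, not taken from any of the works cited here:

import numpy as np

def quasispin_matrices(j):
    """Angular-momentum matrices J_z, J_+, J_- for total quasi-spin j."""
    m = np.arange(j, -j - 1, -1)          # m = j, j-1, ..., -j
    dim = len(m)
    Jz = np.diag(m)
    Jp = np.zeros((dim, dim))
    for k in range(1, dim):               # J_+ |j,m> = sqrt(j(j+1) - m(m+1)) |j,m+1>
        mm = m[k]
        Jp[k - 1, k] = np.sqrt(j * (j + 1) - mm * (mm + 1))
    return Jz, Jp, Jp.T

def lmg_hamiltonian(N, eps=1.0, V=0.3):
    """LMG-type Hamiltonian H = eps*J_z - (V/2)(J_+^2 + J_-^2) in the j = N/2 block."""
    j = N / 2.0
    Jz, Jp, Jm = quasispin_matrices(j)
    return eps * Jz - 0.5 * V * (Jp @ Jp + Jm @ Jm)

H = lmg_hamiltonian(N=8)                  # placeholder couplings
print("ground-state energy:", np.linalg.eigvalsh(H)[0])

For a handful of particles this is trivial classically; the point of the quantum approaches discussed below is that the matrix dimension ceases to be manageable for realistic model spaces.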
§.§ Nuclear Physics and Quantum Computing
Nuclear physics and quantum computing are two rather distinct fields. However, there are some potential connections and applications where they intersect:
* Quantum simulations: quantum computers could have the potential to simulate quantum systems more efficiently than classical computers. This includes simulating the behavior of atomic nuclei and nuclear interactions. By harnessing the power of quantum computing, researchers may be able to gain deeper insights into nuclear physics phenomena.
* Quantum algorithms for nuclear physics: quantum algorithms, such as the Variational Quantum Eigensolver (VQE) <cit.> and Quantum Phase Estimation (QPE) <cit.>, have been developed to solve problems in quantum chemistry, which has overlap with nuclear physics. These algorithms can be applied to nuclear physics problems, such as calculating nuclear energy levels or simulating nuclear reactions.
* Data analysis and optimization: nuclear physics experiments generate vast amounts of data that need to be analyzed and optimized. Quantum computing techniques, such as QML algorithms or quantum optimization algorithms, may offer novel approaches to process and extract valuable insights from nuclear physics data.
While the full extent of the connection between nuclear physics and quantum computing is still being explored, it is an exciting area of potential collaboration and research. The development of more powerful and scalable quantum computers may provide new tools and techniques to address complex nuclear physics problems.
§.§ Nuclear Physics and Machine Learning
In the last few years, ML has been applied to nuclear physics in different tasks <cit.>:
* Data analysis: nuclear physics generates large amounts of experimental data. ML can help to process and analyze this data efficiently to extract patterns, identify particles, perform classifications, and make predictions. ML algorithms such as neural networks can uncover hidden correlations and trends in nuclear data.
* Nuclear modeling: ML can also be used to develop models and simulations in nuclear physics. Traditional nuclear physics models often involve complex calculations and approximations. ML offers alternative approaches to nuclear modeling, where ML algorithms can be used to construct empirical models from nuclear data and improve prediction accuracy.
* Particle detection: in nuclear physics experiments, it is crucial to identify and track charged particles. ML has been successfully applied to particle detection and trajectory reconstruction, which can enhance the precision and efficiency of data analysis. Algorithms such as pattern classifiers, convolutional neural networks (CNN), and particle tracking algorithms can aid in particle identification and reconstruction in nuclear detectors.
* Experimental optimization: ML can assist in the optimization of nuclear physics experiments. Optimization algorithms such as genetic algorithms or reinforcement learning can help finding optimal configurations of experimental parameters, thereby saving time and resources in data collection.
These are just a few areas where ML has been successfully applied in nuclear physics. The intersection of both disciplines offers exciting opportunities to improve data analysis, develop more accurate models, and optimize experiments. As ML continues to evolve, its application in nuclear physics is likely to expand further.
§ QUANTUM SIMULATIONS AND QUANTUM MACHINE LEARNING
Quantum simulation <cit.> is a rapidly growing area of research in which a controllable quantum system is employed to reproduce the properties (either dynamical or static) of another quantum system of interest. Several proposals and experiments in this field have been produced in the past two decades, involving, as quantum simulator platforms, trapped ions, superconducting circuits, cold atoms, quantum photons, and nuclear magnetic resonance, among others. The simulated quantum systems are diverse and can be roughly grouped into condensed matter, quantum chemistry, and high-energy physics, although this is not an exclusive list <cit.>. Inside the field of quantum simulations, and at the intersection of many-body quantum systems and high-energy physics, a fledgling field has appeared in the past few years <cit.>.
Quantum simulators belong mainly to one of three possible categories: digital quantum simulators, analog quantum simulators, and digital-analog quantum simulators <cit.>. Digital quantum simulators allow a wide variety of systems to be simulated, as they have universality properties: they decompose the simulated quantum dynamics into elementary unitary gates, which are later implemented on the quantum simulator in a successive way. This is done via a Lie-Trotter-Suzuki expansion. The main drawback of digital quantum simulators is that it is difficult to go beyond a few dozen qubits with current technology, due to the accumulated gate errors, since the single- and two-qubit gates into which the protocol is decomposed are never perfect. Moreover, there is usually a digital error as well, due to the fact that the different terms of the simulated Hamiltonian (and hence the corresponding gates) do not commute with each other.
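This digital (Trotter) error can be made concrete in a few lines of Python: for two non-commuting terms A and B of a toy two-qubit Hamiltonian (chosen arbitrarily here, not taken from any model in this paper), the first-order product (e^{-iAt/n} e^{-iBt/n})^n only approximates e^{-i(A+B)t}, with an error that shrinks as the number of Trotter steps n grows:

import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

A = np.kron(Z, Z)                                   # ZZ coupling
B = 0.7 * (np.kron(X, I2) + np.kron(I2, X))         # transverse field, [A, B] != 0

t = 1.0
exact = expm(-1j * (A + B) * t)

for n in (1, 4, 16, 64):
    step = expm(-1j * A * t / n) @ expm(-1j * B * t / n)
    trotter = np.linalg.matrix_power(step, n)
    err = np.linalg.norm(trotter - exact, 2)        # operator-norm distance
    print(f"n = {n:3d}  Trotter error = {err:.3e}")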
On the other hand, analog quantum simulators implement in the quantum platform a quantum dynamics following a Hamiltonian that is similar to the one of the simulated quantum system, by tuning some controls such as laser pulses, microwaves, etc. The advantage of analog quantum simulators is that they are more scalable than the purely digital ones as they have less accumulated gate error as well as digital error.
Finally, digital-analog quantum simulators aim at benefiting from both paradigms, digital and analog, via combining large analog blocks (which provide scalability) with digital steps (which enable to simulate a wider variety of models than the purely analog). This paradigm could be a way of achieving useful new knowledge in NISQ Computers in the near and mid term <cit.>.
The typical errors in a quantum simulator are the ones just mentioned, accumulated gate error and digital error, as well as those common to all quantum systems, such as decoherence due to an uncontrollable coupling to the environment. Therefore, it is sometimes advisable to employ a master equation to model a quantum simulation platform theoretically, as well as to interpret the experimental results <cit.>.
QML <cit.> aims at connecting the two timely fields of ML (in turn belonging to the more general artificial intelligence) and quantum computing. The goal is either to employ quantum devices to carry out more efficient ML calculations, or to use ML algorithms to better control and analyze quantum systems.
The motivation to explore this field is the fact that quantum mechanics is described by the formalism of linear algebra, a discipline that is also commonly employed in ML, e.g., for computing distances when classifying data. Thus, it is sensible to aim at using quantum devices to carry out some of the ML tasks, which suffer from the so-called curse of dimensionality, far more efficiently, namely, with a smaller expense of time and energy resources <cit.>.
The field of QML has significantly grown in the past five years, and several theory proposals as well as experimental realizations have been produced, in platforms such as superconducting circuits, quantum photonics, and trapped ions <cit.>. However, the use of QML protocols for the analysis of nuclear physics is a relatively unexplored field so far.
Some of the quantum algorithms being employed in QML are quantum versions of standard ML ones, such as quantum supervised learning, quantum unsupervised learning, and quantum reinforcement learning. Quantum supervised learning employs a quantum algorithm for solving linear systems of equations, the Harrow-Hassidim-Lloyd (HHL) algorithm <cit.>. Other kinds of algorithms, such as quantum reinforcement learning, employ in some cases the speedup of Grover search algorithm to improve the performance of standard reinforcement learning. Some more prominent QML algorithms that are emerging are the parameterized quantum circuits, also named quantum neural networks, as well as quantum kernels and quantum feature spaces <cit.>.
§ QUANTUM SIMULATIONS AND QUANTUM MACHINE LEARNING FOR LOW ENERGY NUCLEAR PHYSICS
§.§ Determination of the Shape of a Nuclear System through its Time Evolution
Quantum phase transitions (QPTs) occur in quantum systems at zero temperature when a sudden change in the ground-state structure takes place under the variation of a control parameter in the Hamiltonian <cit.>, thereby changing the shape of the system. A typical situation in which a QPT is present corresponds to Hamiltonians that can be written as a combination of two pieces with different symmetries (A and B):
Ĥ= (1-x) Ĥ_A+ xĤ_B.
This formulation allows us to investigate the interplay between the two symmetries, A and B, by adjusting the control parameter x, which determines the relative contribution of each symmetry to the overall Hamiltonian. Under the formulation (<ref>) two clear limiting situations exist: A for x=0 and B for x=1. These limits typically correspond to dynamical symmetries <cit.>. For x-values different from 0 and 1, however, the Hamiltonian has no definite symmetry and A and B compete with each other. In spite of this lack of symmetry, interestingly enough, the system remains close to A or B until the critical point, x=x_c, at which a sudden change in the structure of the system appears (QPT). The existence of a QPT also implies a sudden change in the so-called order parameter, which vanishes in one of the phases (the symmetric one) and takes a nonzero value in the other (the broken or non-symmetric phase) <cit.>. QPTs can be classified according to the Ehrenfest classification <cit.>, in a similar manner to the phase transitions that occur in macroscopic systems at non-zero temperature.
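A minimal numerical illustration of such an interpolating Hamiltonian can be put together with numpy: two non-commuting toy terms H_A and H_B (a transverse-field term and a nearest-neighbour coupling, used purely as placeholders and not tied to any nuclear model) are mixed as (1-x)H_A + xH_B, and an order-parameter-like ground-state expectation value is monitored as x is swept:

import numpy as np

def op_on_site(op, i, n):
    """Embed a single-site operator at position i of an n-site chain."""
    mats = [np.eye(2, dtype=complex)] * n
    mats[i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

n = 6
H_A = -sum(op_on_site(Z, i, n) for i in range(n))                                  # "symmetric" piece
H_B = -sum(op_on_site(X, i, n) @ op_on_site(X, (i + 1) % n, n) for i in range(n))  # "broken" piece
Mx = sum(op_on_site(X, i, n) for i in range(n)) / n                                # order-parameter proxy

for x in np.linspace(0.0, 1.0, 11):
    H = (1 - x) * H_A + x * H_B
    vals, vecs = np.linalg.eigh(H)
    gs = vecs[:, 0]
    m2 = np.real(gs.conj() @ (Mx @ Mx) @ gs)
    print(f"x = {x:.1f}   <M_x^2> = {m2:.3f}")

Even for this six-site toy chain the expectation value changes most rapidly around x ≈ 0.5, a finite-size precursor of the QPT.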
In nuclear models, the shape/phase of the system is determined by means of mean-field calculations, although it can also be explored through certain observables that serve as proxies for QPTs even in finite-size systems <cit.>. Is it possible to extract information about the phase/shape of the system using a different approach? To answer this question, in <cit.> the phase diagram <cit.> of the Agassi model <cit.> was determined by exploring the time evolution of a correlation operator on a quantum computer (quantum simulator) and feeding the result to a ML algorithm, i.e., by defining a hybrid quantum-classical algorithm. The Agassi model considers a two-level system with 2j sites in each level. For the fermion operators two indices are used: ξ labels the level (+1 for the upper level and -1 for the lower level) and m labels the site within each level. This simple Hamiltonian is interesting because it includes the competition between the monopole-monopole and the pairing interactions and mimics the Kumar-Baranger model for nuclear structure. The Hamiltonian of the extended Agassi model used in <cit.> can be written as
H = ε J^0 - g∑_ξ,ξ'=-1,1 A_ξ^† A_ξ' - V/2[(J^+)^2+(J^-)^2] - 2h A_0^† A_0,
where the operators in the Hamiltonian are all defined in terms of fermion creation and annihilation operators, c_ξ,m^† and c_ξ,m,
J^+ = ∑_m=-j^jc_1,m^†c_-1,m = (J^-)^†,
J^0 = 1/2 ∑_m=-j^j(c_1,m^†c_1,m - c_-1,m^†c_-1,m),
A_1^† = ∑_m=1^jc_1,m^†c_1,-m^† = (A_1)^†,
A_-1^† = ∑_m=1^jc_-1,m^†c_-1,-m^† = (A_-1)^†,
A_0^† = ∑_m=1^j (c_-1,m^†c_1,-m^† - c_-1,-m^†c_1,m^†) = (A_0)^†.
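A sketch of how these collective operators can be assembled numerically for the small N=8 (j=2) case discussed below: fermionic operators for the eight single-particle states (ξ, m) are represented as matrices via Jordan-Wigner strings in numpy, and J^0, J^+ and A_0^† follow directly from the definitions above. The ordering of the single-particle states into qubits and the site labels m = ±1, ±2 are choices made here only for illustration:

import numpy as np
from functools import reduce

def jw_annihilation(p, n_modes):
    """Jordan-Wigner annihilation operator for fermionic mode p out of n_modes."""
    a = np.array([[0, 1], [0, 0]], dtype=complex)      # |0><1| on a single mode
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    I2 = np.eye(2, dtype=complex)
    ops = [Z] * p + [a] + [I2] * (n_modes - p - 1)
    return reduce(np.kron, ops)

# single-particle states (xi, m): xi = +1 (upper) / -1 (lower), sites m = ±1, ±2
modes = [(xi, m) for xi in (+1, -1) for m in (-2, -1, 1, 2)]
idx = {sm: k for k, sm in enumerate(modes)}
n = len(modes)                                          # 8 modes -> 256-dimensional space

c = {sm: jw_annihilation(idx[sm], n) for sm in modes}   # annihilation operators
cd = {sm: c[sm].conj().T for sm in modes}               # creation operators

J0 = 0.5 * sum(cd[(+1, m)] @ c[(+1, m)] - cd[(-1, m)] @ c[(-1, m)] for m in (-2, -1, 1, 2))
Jp = sum(cd[(+1, m)] @ c[(-1, m)] for m in (-2, -1, 1, 2))
A0d = sum(cd[(-1, m)] @ cd[(+1, -m)] - cd[(-1, -m)] @ cd[(+1, m)] for m in (1, 2))

# sanity check of the construction: [J^0, J^+] = J^+
comm = J0 @ Jp - Jp @ J0
print("||[J0, J+] - J+|| =", np.linalg.norm(comm - Jp))

A quick consistency check such as the commutator printed above helps to validate the operator construction before any Hamiltonian is assembled from it.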
The phase diagram for the extended Agassi model is depicted in Figure <ref>a where the phase transition surfaces are clearly marked together with a pictorial representation of the phases. The phase diagram has been obtained analytically using mean-field techniques <cit.>.
The Hamiltonian (<ref>) can easily be mapped onto a spin Hamiltonian using the Jordan-Wigner (JW) mapping <cit.>, which can later be implemented experimentally in a digital quantum simulator. So far, the simulation has been performed for a system with eight sites (N=8, j=2). The system is already large from the present quantum computing point of view, but it is really small from the point of view of the QPT analysis, taking into account that the phase of a given system is expected to be properly defined only in the large-N limit. Let us define the correlation function between two sites i, j of the system as
C_z(i,j) = ⟨σ_i^z ⊗σ_j^z⟩ - ⟨σ_i^z⟩⟨σ_j^z⟩,
where σ_i^a are the Pauli matrices at site i for a=x,y,z. We consider as an ansatz that the time evolution of this function, evaluated in the state |↓_1 ↓_2 ↓_3 ↓_4 ↑_5 ↑_6 ↑_7 ↑_8 ⟩, can serve as a proxy to determine the shape of the system and eventually to locate the phase transition surfaces. Inspecting such an evolution with the naked eye provides no hint about the shape of the system (see panels (b)-(g) of Figure <ref>, where the time evolution is depicted for selected values of the Hamiltonian parameters). Therefore, the use of a ML technique is in order. In particular, we focus on the use of a CNN (in <cit.> a Multi-Layer Perceptron is also used). To train the system, a lattice of 9261 points in the control parameter space was created, reserving 10% of them for testing purposes. The analysis was performed with the exact evolution operator and also approximating it with a Trotter expansion. The obtained global accuracy was 98.7% for the exact evolution and 99.2% for the Trotter one. An appealing fact is that the accuracy of the procedure is even larger when using the Trotter approximation with a small number of steps, which has clear practical advantages. A possible explanation is that the larger oscillations obtained in the approximate evolution, compared with the exact one, give rise to exaggerated patterns that are easier to recognize. In Figure <ref>, the results of the CNN analysis are presented for both the exact evolution and the Trotter one for selected values of the Hamiltonian parameters. The reason why the time evolution of a given matrix element is able to describe the phase of the system is that it is connected with the complete spectrum of the Hamiltonian, provided the state is not an eigenstate of the Hamiltonian. For instance, a vibrational-like nuclear Hamiltonian will generate a vibrational spectrum, while a rotational Hamiltonian will produce a sequence l(l+1). In the first case the nucleus has a spherical shape, while it is well deformed in the second case.
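A minimal sketch of the type of input data described above: for a small spin chain evolving under a generic toy Hamiltonian (standing in for the JW image of the Agassi Hamiltonian, whose explicit couplings are not reproduced here), the correlator C_z(i,j;t) is sampled on a time grid starting from a four-site analogue of the product state used in the text; the resulting time series is the one-dimensional signal that would be fed to the (classical or quantum) classifier:

import numpy as np
from scipy.linalg import expm

def site_op(op, i, n):
    mats = [np.eye(2, dtype=complex)] * n
    mats[i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

n = 4                                             # small chain for illustration
H = sum(site_op(Z, i, n) @ site_op(Z, (i + 1) % n, n) for i in range(n)) \
    + 0.8 * sum(site_op(X, i, n) for i in range(n))

up, down = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)
psi0 = np.kron(np.kron(down, down), np.kron(up, up))      # product state |down down up up>

def correlator(psi, i, j, n):
    zi, zj = site_op(Z, i, n), site_op(Z, j, n)
    return np.real(psi.conj() @ (zi @ zj) @ psi
                   - (psi.conj() @ zi @ psi) * (psi.conj() @ zj @ psi))

times = np.linspace(0.0, 5.0, 50)
series = [correlator(expm(-1j * H * t) @ psi0, 0, n - 1, n) for t in times]
print(np.round(series[:5], 3))                    # start of the feature vector for the classifier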
Very recently, a similar work, but for the Ising model, has been published <cit.>. It shows that the phase diagram in the axial next-nearest-neighbor Ising (ANNNI) model can be obtained using a quantum convolutional neural network (QCNN). The considered Hamiltonian can be written as,
H= ∑^N_i=1(σ^x_i σ^x_i+1 -κσ^x_i σ^x_i+2+ h σ^z_i ),
where σ^a_i are the Pauli matrices at position i and the coefficients κ and h are taken as positive. The phase diagram of the model is quite rich, and three phases are known, separated by two second-order phase transition lines. The phase diagram of the quantum model at temperature T = 0 K has been studied mainly using renormalization-group or Monte Carlo techniques. To detect the phase in this case, a QCNN was used. The quantity used to train the network and to characterize the phase is the ground-state energy of the system. In order to obtain this energy, a VQE is used, and its output serves to feed the QCNN. The circuit that performs the process is depicted in Figure <ref>.
In order to train the QCNN and fix the variational parameters,
the cross-entropy loss function L was used <cit.>. An appealing fact of this work is that only very few points on the κ=0 and h=0 axes were used for training; nevertheless, the whole phase diagram, including the phase transition lines, was correctly reproduced. In Figure <ref> the theoretical phase diagram of the model is depicted, superimposing the training points (red dots) and the predicted phase transition lines (red lines). The ability of the QCNN to disentangle the complete phase diagram, including the point with h=0 and κ=0.5 where three phases coexist, is truly remarkable.
§.§ Shell Model Calculations: the Ground State of Nuclear Systems
The determination of the ground state of a nuclear system is one of the central problems of nuclear physics, just as the determination of the structure of a given molecule is for quantum chemistry. The exact treatment of this problem is far beyond our present capabilities and, consequently, the use of some kind of model is required (see Section <ref>). One of these models is the nuclear shell model, which provides an ideal starting framework for quantum simulations. Below, several key examples of proposed quantum simulations to calculate ground-state energies of nuclear systems are discussed. All of them correspond to pioneering implementations of the VQE technique to obtain the ground state of an atomic nucleus. The VQE is a variational procedure that depends strongly on the ansatz used, and it is hard to establish whether a given trial state is the most appropriate one. To help along this line, a work has recently been presented in which a reinforcement-learning optimization is carried out over a variational quantum circuit <cit.>. It shows a very remarkable performance in reproducing the ground-state energy of the LiH molecule. The implementation of this method in the nuclear physics cases described below is not simple, but this work clearly proves that the VQE can be optimized using QML.
One of the first works along this line was the study of the ground-state structure of the deuteron on a quantum computer discussed in <cit.>. The deuteron was treated using a Hamiltonian extracted from a pionless effective field theory such that it can be simulated on a quantum chip. The ground state is obtained using a variational wave-function ansatz based on unitary coupled-cluster (UCC) theory. In the case of the deuteron the dimension of the Hilbert space is very small and only three single-particle states have been considered. However, a kind of extrapolation can be used and, as a result, the energy of the ground state is only 0.5% away from the exact value. The expectation values of the different terms appearing in the Hamiltonian, evaluated on two different quantum computers, are presented in Figure <ref>. The variable θ is the corresponding variational parameter.
Along the same line, in <cit.> a widely used model that sketches the nuclear interaction, the LMG model <cit.>, was implemented on a quantum computer and the ground state was determined using the VQE. In this case, the design of the trial wave function is guided by symmetry considerations of the model, which makes it possible to use a single variational parameter, θ, in the wave function for four particles,
|ψ(θ)⟩ = cos^2 θ |↓↓↓↓⟩ + sin^2 θ |↑↑↑↑⟩ - 1/√(12)sin 2θ (|↑↑↓↓⟩ + |↓↓↑↑⟩ + |↓↑↓↑⟩ + |↓↑↑↓⟩ + |↑↓↓↑⟩ + |↑↓↑↓⟩).
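This single-parameter ansatz is simple enough to be checked entirely on a classical computer: the sketch below constructs |ψ(θ)⟩ as a 16-component vector, verifies that it stays normalized for every θ, and scans the energy expectation value for a generic four-spin stand-in Hamiltonian (the explicit qubit-mapped LMG couplings are not reproduced in the text, so the couplings used here are placeholders):

import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def basis_state(pattern):
    """pattern such as 'uudd' -> 16-component four-spin product state."""
    vec = np.array([1.0])
    for ch in pattern:
        vec = np.kron(vec, up if ch == "u" else down)
    return vec

def psi(theta):
    mixed = ["uudd", "dduu", "dudu", "duud", "uddu", "udud"]
    return (np.cos(theta)**2 * basis_state("dddd")
            + np.sin(theta)**2 * basis_state("uuuu")
            - np.sin(2 * theta) / np.sqrt(12.0) * sum(basis_state(p) for p in mixed))

Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
def site_op(op, i, n=4):
    mats = [np.eye(2)] * n
    mats[i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

# placeholder four-spin Hamiltonian with the same qubit count as the four-particle problem
H = sum(site_op(Z, i) for i in range(4)) - 0.4 * sum(
    site_op(X, i) @ site_op(X, j) for i in range(4) for j in range(i + 1, 4))

for th in np.linspace(0.0, np.pi / 2, 7):
    v = psi(th)
    print(f"theta = {th:.2f}  norm = {np.linalg.norm(v):.6f}  E(theta) = {v @ H @ v:+.4f}")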
Also focused on the LMG model, in <cit.> the equation-of-motion method, which is an extension of the VQE, is employed to obtain excited states of the system. In this work two levels of complexity were used, the RPA (Random Phase Approximation) and the second RPA (SRPA), and no noticeable difference between the two was found.
So far, we have seen two types of approaches for defining the variational state: either using the UCC or using the symmetry of the Hamiltonian to guide the choice of the trial wave function; in both cases they were applied to small systems. Next, we present in detail a set of works in which the considered nuclei are heavier and, therefore, their Hilbert spaces are much larger. In these cases, the authors consider as trial wave function the UCC ansatz with the Adaptive Derivative-Assembled Problem-Tailored (ADAPT) VQE <cit.>, which seems to provide a clear advantage over other VQE approaches. In general, ADAPT-VQE is superior to a random or lexical ordering of the excitation operators in terms of convergence and circuit depth. ADAPT-VQE defines the ansatz by selecting operators from a pool,
{τ̂_1, τ̂_2, …, τ̂_N }
that present the largest influence on the gradient, i.e., the largest value of
|∂ E/∂θ_i|_θ_i=0=|⟨ψ|[H, τ̂_i]|ψ⟩|.
Once a new operator from the pool has been selected, k optimization steps are carried out until convergence is reached, before moving on to a new term from the pool. It is worth mentioning that with this method the number of trainable parameters does not grow exponentially with the size of the system. The number and type of operators in the pool can be limited thanks to symmetry considerations, which can strongly reduce the complexity of the calculation. This method has been used for ^6Li <cit.>, reaching a precision of 3.81% for the ground state and 0.12% for the first excited state. The calculation was run on the 27-qubit IBM Quantum architecture using error mitigation. The authors noticed that, because the number of nuclear states grows very rapidly with the number of valence nucleons, the scaling of the VQE application becomes unfeasible, requiring the use of symmetry arguments to reduce the number of operators in the pool.
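The operator-selection step of ADAPT-VQE is easy to emulate classically for a toy problem: given a reference state and a pool of anti-Hermitian generators, one evaluates |⟨ψ|[H, τ_i]|ψ⟩| for every pool element, appends the generator with the largest gradient, and optimizes its parameter. The random Hamiltonian, random pool, and crude one-parameter line search below are placeholders meant only to show the structure of a single ADAPT iteration (a real implementation re-optimizes all parameters at every step):

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
dim = 8                                                    # toy 3-qubit problem

M = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (M + M.conj().T) / 2                                   # random Hermitian "Hamiltonian"

def random_antihermitian():
    A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    return (A - A.conj().T) / 2

pool = [random_antihermitian() for _ in range(6)]          # generators tau_i (anti-Hermitian)

psi = np.zeros(dim, dtype=complex)                         # reference state |psi>
psi[0] = 1.0

def energy(state):
    return np.real(state.conj() @ H @ state)

for iteration in range(3):
    # gradient of E w.r.t. theta_i at theta_i = 0 is <psi|[H, tau_i]|psi>
    grads = [abs(psi.conj() @ (H @ tau - tau @ H) @ psi) for tau in pool]
    best = int(np.argmax(grads))
    thetas = np.linspace(-1.0, 1.0, 201)                   # crude line search for the new parameter
    energies = [energy(expm(th * pool[best]) @ psi) for th in thetas]
    theta_opt = thetas[int(np.argmin(energies))]
    psi = expm(theta_opt * pool[best]) @ psi               # e^{theta*tau} is unitary
    print(f"iteration {iteration}: picked tau_{best}, E = {energy(psi):+.4f}")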
A step forward to improve the use of ADAPT-VQE is to start with a more correlated initial state. In general, one can start with a Hartree-Fock state but it is worth exploring the use of other states. In <cit.>, the authors use the UCC with ADAPT-VQE, but they include in the initial state two-particle two-hole excitations obtaining a rather good approximation for the ground state energy of ^6He, ^6Be, ^20O and ^22O. In Figure <ref>, the circuit of a Hartree-Fock state including two-particle two-hole excitations is shown.
The latest case to be discussed <cit.> also corresponds to the use of ADAPT-VQE to obtain the ground state within the nuclear shell model, but in this case the authors present a rather general formalism, and a large set of potential nuclei can be considered. The formalism is intended for working in the p shell (0p_1/2, 0p_3/2 orbitals), the sd shell (1s_1/2, 0d_3/2 and 0d_5/2 orbitals), or the pf shell (0f_7/2, 0f_5/2, 1p_3/2 and 1p_1/2 orbitals). The number of single-particle states, either for protons or neutrons, is 6, 12, and 20, respectively. The two-body interactions used in these shells are the Cohen-Kurath interaction in the p shell, USDB in the sd shell, and KB3G in the pf shell.
The dimension of the Hilbert space corresponds to the product of the dimensions of the proton and neutron spaces, which depend in a combinatorial way on the size of the single-particle space and the number of valence particles. This makes a direct diagonalization unfeasible when the dimension is well above several million, even using the Lanczos algorithm. Quantum computation has the capability to avoid this problem. The fermionic nuclear shell-model Hamiltonian is easily mapped onto Pauli matrices using a JW transformation. The JW mapping only requires as many qubits as single-particle states, independently of the number of valence particles, which means that the dimension of the problem remains constant for all nuclei belonging to the same major shell. The way to tackle the problem is, once more, the ADAPT-VQE approximation. In this work, the authors explore the complete quantum circuit design to estimate the resources necessary to carry out nuclear shell-model calculations in regions where the standard approaches cannot be used. Figure <ref> depicts a quantum circuit with five layers that prepares the ground state of the nucleus ^18O. The multiqubit gates in boxes are defined as U^pq_rs(θ)=e^i θ T^pq_rs, where T^pq_rs is a two-particle promotion operator present in the pool of operators of the ADAPT-VQE method. The state defined in Figure <ref> has an energy accuracy better than 10^-6.
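The combinatorial growth mentioned above is easy to quantify: treating protons and neutrons independently and ignoring angular-momentum projections (which would further reduce the counting), the number of Slater determinants is a product of binomial coefficients. The snippet below reproduces this estimate for the p, sd and pf shells; the chosen valence configurations are merely illustrative:

from math import comb

# number of single-particle states per nucleon species in each major shell (see text)
shells = {"p": 6, "sd": 12, "pf": 20}

def basis_dimension(shell, n_protons, n_neutrons):
    n_sp = shells[shell]
    return comb(n_sp, n_protons) * comb(n_sp, n_neutrons)

# illustrative cases: (shell, valence protons, valence neutrons)
for shell, zp, zn, label in [("p", 1, 1, "6Li-like"),
                             ("sd", 0, 2, "18O-like"),
                             ("pf", 0, 8, "48Ca-like"),
                             ("pf", 4, 8, "mid-pf example")]:
    print(f"{label:15s} {shell}-shell: dimension = {basis_dimension(shell, zp, zn):,}")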
An important conclusion of this work is that the accuracy obtained with this approach improves faster than the number of CNOT gates grows. In Figure <ref>, the error is depicted together with the number of CNOT gates used, and one can easily note that the errors decay exponentially while the number of CNOT gates seems to increase only polynomially. The obtained results are very encouraging, having computed the ground-state energy for ^6Be (10^-8 relative error), ^6Li, ^8,10Be (10^-7 relative error), ^13C (10^-5 relative error), ^18,19,20, 22O (10^-6 relative error), ^20,22,24Ne (10^-2 relative error), ^42Ca (10^-8 relative error) and ^44,46,48,50Ca (10^-2 relative error).
Finally, tightly connected with the use of the shell model or other simple models, there are a few other publications that deserve to be mentioned. In <cit.> the comparison between the use of qubits and qudits is explored in the Agassi model. In <cit.>, the authors studied the effect of using effective model spaces in the quantum computation. The issue of restoring symmetries, or of preparing states with a given symmetry, is analyzed in depth in <cit.>. Finally, in <cit.> a technique for preparing excited states is presented, and in <cit.> a neutrino-nucleus scattering calculation has been implemented on a quantum computer.
§.§ Particle identification and track reconstruction
The detection and identification of particles, including the measurement of their properties such as energy, charge, and linear or angular momentum, has been the cornerstone of nuclear and high-energy physics experiments since the pioneering works of E. Rutherford, who scattered α particles off a thin gold foil, up to the present day. Nuclear physics and high-energy physics experiments have many aspects in common. In particular, the need to know the precise trajectory of a particle inside the detector system, and the large number of events to be analyzed (much larger in the case of high-energy physics), are issues shared by experiments in both areas of research.
In nuclear physics, γ-ray spectroscopy is a standard technique for isotope identification and is fundamental to nuclear structure studies. It is critical for determining the energy spectrum of nuclei and for obtaining information about transition probabilities between states, which is essential to disentangle the internal structure of those states. The spectroscopy of charged particles, β or α, is also of great relevance. So far, no QML techniques have been used in nuclear spectroscopy, but classical ML methods have already been applied (see Section <ref>).
In high-energy physics, practical examples of the application of QML already exist <cit.>, which could serve as inspiration for nuclear physics. Here we describe two of them that could easily be implemented in low-energy nuclear experiments.
The first example corresponds to the determination of the precise trajectory followed by a charged particle or a photon, which is commonly known as tracking. Tracking consists of associating with a given particle the hits observed in the detector system and then reconstructing its trajectory. It is the cornerstone of particle path reconstruction, which is mandatory to identify the nature of the events of interest. In nuclear physics this is a standard technique to increase the efficiency of the detection system, which is especially relevant in experiments with low counting rates. In high-luminosity experiments, such as the LHC experiments, the number of events to be analyzed is very large and, therefore, the tracking of particles becomes a challenge. Nowadays, state-of-the-art algorithms are based on the use of Kalman filters. This approach is rather reliable and robust, providing good physics performance. Its main problem is its scalability, which is expected to be worse than quadratic in the number of simultaneous collisions. Therefore, it is of great interest to explore other approaches to speed up the process, including deep machine learning techniques. One such avenue is based on an image-like interpretation of the detector data, where the use of CNNs could provide high-accuracy results. The HEPtrkX project <cit.> followed this approach and is based on graph neural networks (GNNs) that perform hit and segment classification. The work <cit.> considers this GNN architecture from a quantum computing perspective, implementing the original networks as quantum circuits.
The authors start with the TrackML dataset, a publicly available dataset that consists of simulated measurements from many detector layers. The detector layers are arranged using a model layout that is common to most LHC experiments. These data are preprocessed prior to training. The HepTrkX team proposed a GNN to perform segment classification. The model consists of three types of networks: an input network, an edge network, and a node network. The model has an overall accuracy of 99.5%.
To transform the GNN into a quantum circuit, many modifications are needed. For simplicity, the authors only take into account the edge network and do not use the other networks. Among the hierarchical quantum classifiers, the Tree Tensor Network (TTN) is then used as the quantum circuit that replaces the neural-network layer. The input is encoded in qubits and then the TTN circuit is applied. The TTN circuit is made of R_y and CNOT gates. The R_y gates start with random parameters that are tuned later. The CNOT gates are used to introduce correlations between qubits so that their values are not independent. At the end of the circuit there is a measurement. The structure of the quantum circuit is depicted in Figure <ref>.
The quantum circuit was trained over 2 epochs. The data were divided randomly into training and test sets with a ratio of 9 to 1. The model is trained using stochastic gradient descent and a weighted binary cross-entropy loss. The training performance of the model can be seen in Figure <ref>. Note that the obtained accuracy is noticeably low, but this is mainly due to the oversimplification of the model; after all, this is still a proof-of-principle prototype of a complete quantum GNN structure.
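To make the structure of such a hierarchical classifier concrete, the following numpy sketch simulates a tiny four-qubit TTN-like circuit: input features are angle-encoded with R_y rotations, trainable R_y plus CNOT layers merge qubits pairwise, and the prediction is read out as ⟨Z⟩ on the last qubit. The gate placement, parameter count and feature values are illustrative choices and not the exact circuit of the cited work:

import numpy as np

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def kron_all(mats):
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def single(gate, i, n):
    return kron_all([gate if k == i else np.eye(2, dtype=complex) for k in range(n)])

def cnot(control, target, n):
    dim = 2**n
    U = np.zeros((dim, dim), dtype=complex)
    for b in range(dim):
        bits = [(b >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[control]:
            bits[target] ^= 1
        U[int("".join(map(str, bits)), 2), b] = 1.0
    return U

def ttn_expectation(features, params, n=4):
    """Angle-encode 4 features, apply a small TTN-like layer, return <Z> on the last qubit."""
    state = np.zeros(2**n, dtype=complex); state[0] = 1.0
    for i, x in enumerate(features):                       # data encoding
        state = single(ry(x), i, n) @ state
    for i in range(n):                                     # first trainable layer
        state = single(ry(params[i]), i, n) @ state
    state = cnot(0, 1, n) @ state                          # pairwise merges
    state = cnot(2, 3, n) @ state
    state = single(ry(params[n]), 1, n) @ state
    state = single(ry(params[n + 1]), 3, n) @ state
    state = cnot(1, 3, n) @ state                          # top node of the tree
    Zg = np.array([[1, 0], [0, -1]], dtype=complex)
    return np.real(state.conj() @ single(Zg, n - 1, n) @ state)

rng = np.random.default_rng(0)
params = rng.uniform(0, 2 * np.pi, size=6)                 # random initial parameters
features = np.array([0.1, 0.7, 0.3, 0.9])                  # one preprocessed segment
print("raw circuit output <Z> =", ttn_expectation(features, params))

In a training loop the six parameters would be updated by gradient descent on a (weighted) binary cross-entropy, exactly as described above for the full-size circuit.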
The second example that we present corresponds to the application of QML to identify whether a jet contains a hadron formed by a b (bottom) or a b̅ quark at the moment of production <cit.>. To this end, a Variational Quantum Classifier is used with simulated data of the LHCb experiment. LHCb is a single-arm spectrometer designed to study b and c (charm) hadrons in the forward region of proton-proton collisions. The goal of this work is to distinguish between jets that contain a b hadron and jets that contain a b̅ hadron just after hadronization. The analysis is therefore restricted to a sample of jets that belong to these two categories, labeled either as b jets or as b̅ jets. Hence, we have a binary classification problem.
The QML approach presented in this application belongs to the category of inclusive algorithms (upper jet of Figure <ref>). The figure of merit of this work corresponds to the tagging power, defined as,
ϵ_tag=ϵ_eff(2a-1)^2,
where ϵ_eff is the tagging efficiency and a the accuracy, i.e. the fraction of correctly tagged jets with respect to the tagged jets.
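Applied to a set of per-jet predictions, this figure of merit reduces to a few lines of Python; the labels and the toy tagger below are synthetic stand-ins, used only to show how ϵ_eff, the accuracy a, and the tagging power combine:

import numpy as np

rng = np.random.default_rng(42)
true_label = rng.integers(0, 2, size=10_000)                 # 0 = b jet, 1 = bbar jet
tagged = rng.random(10_000) < 0.70                           # toy tagger tags 70% of the jets
pred = np.where(rng.random(10_000) < 0.65, true_label, 1 - true_label)  # right 65% of the time

eff = tagged.mean()                                          # tagging efficiency
acc = (pred[tagged] == true_label[tagged]).mean()            # accuracy on the tagged jets
tagging_power = eff * (2 * acc - 1) ** 2
print(f"eff = {eff:.3f}, accuracy = {acc:.3f}, tagging power = {tagging_power:.4f}")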
The QML procedure is implemented with a Variational Quantum Classifier (VQC), a hybrid quantum-classical algorithm that performs classification tasks based on a Parametrized Quantum Circuit (PQC). The structure of the PQC consists of i) the data encoding, ii) the variational circuit, and iii) the prediction. Two different PQCs are used in this work, Amplitude Embedding and Angle Embedding. In Figure <ref>, we depict the quantum circuit for the first case. The probability of identifying a b or a b̅ is connected with the measurement of σ_z, i.e., ⟨σ_z⟩.
In this work, two different datasets are used, namely, the muon and the complete one. As usual, both are split into training and testing sub-datasets: about 60% of the samples are used in the training and the remaining 40% are used to test, evaluate and compare the classifiers.
In Figure <ref>, the tagging power as a function of the jet p_T and η is presented for the classical and quantum classifiers. The results are similar for the different classifiers, without any obvious bias. The tagging power decreases as p_T increases because the identification is more difficult for larger values of p_T. It is worth noting that the deep neural network (DNN) shows slightly better performance than the Angle Embedding algorithm, although both values are compatible within the error bars. In any case, both algorithms reach better results than the Amplitude Embedding model and the muon-tagging approach.
An additional analysis of the tagging power as a function of the number of training events shows that, for a large number of events, the performance of the quantum algorithm is similar to that of the DNN; however, when the number of training events decreases, the quantum algorithm maintains a very high performance, while the DNN is no longer able to perform a good classification. Therefore, with respect to the DNN, the QML method reaches optimal performance with a smaller number of events. This special feature of QML algorithms deserves an in-depth analysis.
§ CONCLUSIONS AND OUTLOOK
Nuclear physics, particularly in the realm of low-energy phenomena, is still in its early stages when it comes to the utilization of quantum computing. While ML techniques have found extensive application in nuclear physics, the application of QML in the context of low-energy nuclear physics is largely unexplored. Consequently, a significant gap exists in the scientific literature regarding this subject.
The objective of this perspective was to provide non-practitioners with essential elements to comprehend ongoing research in nuclear structure studies and to introduce three significant applications of quantum computing and QML in the field of nuclear physics. It is hoped that these examples will inspire future endeavors. The three applications considered were as follows: i) determining the phase and shape of a schematic nuclear system, ii) calculating the ground state of a system described by a shell model Hamiltonian, and iii) determining particle trajectories and identifying particles in nuclear experiments.
Acknowledgements This work was partially supported by the Consejería de Universidad, Investigación e Innovación de la Junta de Andalucía (Spain) under Groups FQM-160, FQM-177, and FQM-370, and under projects P20-00617, P20-00764, P20-01247, and US-1380840; by grants PID2019-104002GB-C21, PID2019-104002GB-C22, and PID2020-114687GB-I00 funded by MCIN/AEI/10.13039/50110001103 and “ERDF A way of making Europe”.
10
Niel2010
M. A. Nielsen, I. L. Chuang,
Quantum Computation and Quantum Information: 10th Anniversary
Edition,
Cambridge University Press, 2010.
Carl2018
J. Carlson, D. J. Dean, M. Hjorth-Jensen, D. Kaplan, J. Preskill, K. Roche,
M. J. Savage, M. Troyer,
Quantum Computing for Theoretical Nuclear Physics, A White Paper
prepared for the U.S. Department of Energy, Office of Science, Office of
Nuclear Physics, 2018,
<https://www.osti.gov/biblio/1631143>.
Cloet2019
I. C. Cloët, M. R. Dietrich, J. Arrington, A. Bazavov, M. Bishof, A. Freese,
A. V. Gorshkov, A. Grassellino, K. Hafidi, Z. Jacob, M. McGuigan, Y. Meurice,
Z.-E. Meziani, P. Mueller, C. Muschik, J. Osborn, M. Otten, P. Petreczky,
T. Polakovic, A. Poon, R. Pooser, A. Roggero, M. Saffman, B. VanDevender,
J. Zhang, E. Zohar,
Opportunities for nuclear physics and quantum information science,
2019.
Humb2022
T. S. Humble, A. Delgado, R. Pooser, C. Seck, R. Bennink, V. Leyton-Ortega,
C. C. J. Wang, E. Dumitrescu, T. Morris, K. Hamilton, D. Lyakh, P. Date,
Y. Wang, N. A. Peters, K. J. Evans, M. Demarteau, A. McCaskey, T. Nguyen,
S. Clark, M. Reville, A. D. Meglio, M. Grossi, S. Vallecorsa, K. Borras,
K. Jansen, D. Krücker,
Snowmass white paper: Quantum computing systems and software for
high-energy physics research, 2022.
Beck2023
D. Beck, J. Carlson, Z. Davoudi, J. Formaggio, S. Quaglioni, M. Savage,
J. Barata, T. Bhattacharya, M. Bishof, I. Cloet, A. Delgado, M. DeMarco,
C. Fink, A. Florio, M. Francois, D. Grabowska, S. Hoogerheide, M. Huang,
K. Ikeda, M. Illa, K. Joo, D. Kharzeev, K. Kowalski, W. K. Lai, K. Leach,
B. Loer, I. Low, J. Martin, D. Moore, T. Mehen, N. Mueller, J. Mulligan,
P. Mumm, F. Pederiva, R. Pisarski, M. Ploskon, S. Reddy, G. Rupak, H. Singh,
M. Singh, I. Stetcu, J. Stryker, P. Szypryt, S. Valgushev, B. VanDevender,
S. Watkins, C. Wilson, X. Yao, A. Afanasev, A. B. Balantekin, A. Baroni,
R. Bunker, B. Chakraborty, I. Chernyshev, V. Cirigliano, B. Clark, S. K.
Dhiman, W. Du, D. Dutta, R. Edwards, A. Flores, A. Galindo-Uribarri, R. F. G.
Ruiz, V. Gueorguiev, F. Guo, E. Hansen, H. Hernandez, K. Hattori, P. Hauke,
M. Hjorth-Jensen, K. Jankowski, C. Johnson, D. Lacroix, D. Lee, H.-W. Lin,
X. Liu, F. J. Llanes-Estrada, J. Looney, M. Lukin, A. Mercenne, J. Miller,
E. Mottola, B. Mueller, B. Nachman, J. Negele, J. Orrell, A. Patwardhan,
D. Phillips, S. Poole, I. Qualters, M. Rumore, T. Schaefer, J. Scott,
R. Singh, J. Vary, J.-J. Galvez-Viruet, K. Wendt, H. Xing, L. Yang, G. Young,
F. Zhao,
Quantum Information Science and Technology for Nuclear Physics.
Input into U.S. Long-Range Planning, 2023, 2023.
He2020BI
K. Heyde,
Basic Ideas and Concepts in Nuclear Physics: An Introductory
Approach,
Third Edition. Reino Unido, CRC Press, 2020.
Ta1993SM
I. Talmi,
Simple Models of Complex Nuclei,
New York, Harwood Acad. Publ., 1993.
RS2004MB
P. Ring, P. Schuck,
The Nuclear Many-Body Problem,
Berlin, Springer, 2004.
Niksic:2011sg
T. Niksic, D. Vretenar, P. Ring,
Prog. Part. Nucl. Phys. 2011, 66 519.
Grasso:2018pen
M. Grasso,
Prog. Part. Nucl. Phys. 2019, 106 256.
RRR-2018-rev
L. M. Robledo, T. R. Rodríguez, R. R. Rodríguez-Guzmán,
Journal of Physics G: Nuclear and Particle Physics
2018, 46 013001.
BM1975nuclear
A. Bohr, B. Mottelson,
Nuclear Structure, vol 2,
W.A. Benjamin, Inc., Reading, Massachusetts, London, 1975.
rowe2010nuclear
D. Rowe,
Nuclear Collective Motion: Models and Theory,
World Scientific, 2010.
KB1
K. Kumar, M. Baranger,
Nuclear Physics A 1967, 92, 3 608.
KB2
M. Baranger, K. Kumar,
Nuclear Physics A 1968, 110, 3 490.
KB3
K. Kumar, M. Baranger,
Nuclear Physics A 1968, 110, 3 529.
Bohr1998
A. Bohr, B. Mottelson,
Nuclear Structure,
World Scientific, 1998.
LMG
H. Lipkin, N. Meshkov, A. Glick,
Nuclear Physics 1965, 62, 2 188.
AGASSI196849
D. Agassi,
Nuclear Physics A 1968, 116, 1 49.
Peru014
A. Peruzzo, J. McClean, P. Shadbolt, M.-H. Yung, X.-Q. Zhou, P. J. Love,
A. Aspuru-Guzik, J. L. O'Brien,
Nature Communications 2014, 5, 1 4213.
Tilly2022
J. Tilly, H. Chen, S. Cao, D. Picozzi, K. Setia, Y. Li, E. Grant, L. Wossnig,
I. Rungger, G. H. Booth, J. Tennyson,
Physics Reports 2022, 986 1, the Variational
Quantum Eigensolver: a review of methods and best practices.
Boeh2022
A. Boehnlein, M. Diefenthaler, N. Sato, M. Schram, V. Ziegler, C. Fanelli,
M. Hjorth-Jensen, T. Horn, M. P. Kuchera, D. Lee, W. Nazarewicz,
P. Ostroumov, K. Orginos, A. Poon, X.-N. Wang, A. Scheinker, M. S. Smith,
L.-G. Pang,
Rev. Mod. Phys. 2022, 94 031003.
Geor2014
I. M. Georgescu, S. Ashhab, F. Nori,
Rev. Mod. Phys. 2014, 86 153.
Bauer2023
C. W. Bauer, Z. Davoudi, N. Klco, M. J. Savage,
Nature Reviews Physics 2023.
DAQS_Review
L. Lamata, A. Parra-Rodriguez, M. Sanz, E. Solano,
Advances in Physics: X 2018, 3, 1 1457981.
QML_Nature_Review
J. Biamonte, P. Wittek, N. Pancotti, P. Rebentrost, N. Wiebe, S. Lloyd,
Nature 2017, 549, 7671 195.
Lamata_2020
L. Lamata,
Machine Learning: Science and Technology 2020,
1, 3 033002.
Lamata_2023
L. Lamata,
Advanced Quantum Technologies 2023, 2300059.
Garc2018
J. E. García-Ramos, J. Dukelsky, P. Pérez-Fernández, J. M. Arias,
Phys. Rev. C 2018, 97 054303.
Saiz2022
A. Sáiz, J. E. García-Ramos, J. M. Arias, L. Lamata,
P. Pérez-Fernández,
Phys. Rev. C 2022, 106 064322.
Sach11
S. Sachdev,
Quantum Phase Transitions,
Cambridge University Press, Cambridge, UK, 2011.
iachello1987interacting
F. Iachello, A. Arima,
The Interacting Boson Model,
Cambridge Monographs on Mathematical Physics. Cambridge University
Press, 1987.
Land69
L. Landau, E. Lifshitz,
Statistical Physics,
Pergamon Press, Oxford, 1969.
Iach98
F. Iachello, N. V. Zamfir, R. F. Casten,
Phys. Rev. Lett. 1998, 81 1191.
perezfernandez2021quantum
P. Pérez-Fernández, J.-M. Arias, J.-E. García-Ramos, L. Lamata,
Physics Letters B 2022, 829 137133.
AgassiPhase1
E. D. Davis, W. D. Heiss,
J. Phys. G: Nucl. Phys. 1986, 12, 9 805.
Mona2023
S. Monaco, O. Kiss, A. Mandarino, S. Vallecorsa, M. Grossi,
Phys. Rev. B 2023, 107 L081105.
Batista_2001
C. D. Batista, G. Ortiz,
Physical Review Letters 2001, 86, 6
1082–1085.
JordanWigner
P. Jordan, E. Wigner,
Zeitschrift für Physik 1928, 47 631.
Osta2021
M. Ostaszewski, L. M. Trenkwalder, W. Masarczyk, E. Scerri, V. Dunjko,
In M. Ranzato, A. Beygelzimer, Y. Dauphin, P. Liang, J. W. Vaughan,
editors, Advances in Neural Information Processing Systems, volume 34.
Curran Associates, Inc., 2021 18182–18194,
<https://proceedings.neurips.cc/paper_files/paper/2021/file/9724412729185d53a2e3e7f889d9f057-Paper.pdf>.
Dumi2018
E. F. Dumitrescu, A. J. McCaskey, G. Hagen, G. R. Jansen, T. D. Morris,
T. Papenbrock, R. C. Pooser, D. J. Dean, P. Lougovski,
Phys. Rev. Lett. 2018, 120 210501.
Cerv2021
M. J. Cervia, A. B. Balantekin, S. N. Coppersmith, C. W. Johnson, P. J. Love,
C. Poole, K. Robbins, M. Saffman,
Phys. Rev. C 2021, 104 024305.
Manq2022
M. Q. Hlatshwayo, Y. Zhang, H. Wibowo, R. LaRose, D. Lacroix, E. Litvinova,
Phys. Rev. C 2022, 106 024319.
Grims2019
H. R. Grimsley, S. E. Economou, E. Barnes, N. J. Mayhall,
Nature Communications 2019, 10, 1 3007.
Kiss2022
O. Kiss, M. Grossi, P. Lougovski, F. Sanchez, S. Vallecorsa, T. Papenbrock,
Phys. Rev. C 2022, 106 034325.
Stet2022
I. Stetcu, A. Baroni, J. Carlson,
Phys. Rev. C 2022, 105 064308.
Pere2023
A. Pérez-Obiol, A. M. Romero, J. Menéndez, A. Rios, A. García-Sáez,
B. Juliá-Díaz,
Nuclear shell-model simulation in digital quantum computers,
2023.
Romero2022
A. M. Romero, J. Engel, H. L. Tang, S. E. Economou,
Phys. Rev. C 2022, 105 064317.
Illa2023
M. Illa, C. E. P. Robin, M. J. Savage,
Quantum Simulations of SO(5) Many-Fermion Systems using Qudits,
2023.
Robin2023
C. E. P. Robin, M. J. Savage,
Quantum Simulations in Effective Model Spaces (I): Hamiltonian
Learning-VQE using Digital Quantum Computers and Application to the
Lipkin-Meshkov-Glick Model, 2023.
Lacr2020
D. Lacroix,
Phys. Rev. Lett. 2020, 125 230502.
Guzm2022
E. A. Ruiz Guzman, D. Lacroix,
Phys. Rev. C 2022, 105 024324.
Guzm2023
E. A. R. Guzman, D. Lacroix,
Phys. Rev. C 2023, 107 034310.
Lacr2023
D. Lacroix, E. A. Ruiz Guzman, P. Siwach,
The European Physical Journal A 2023, 59, 1 3.
Rogg2020
A. Roggero, C. Gu, A. Baroni, T. Papenbrock,
Phys. Rev. C 2020, 102 064624.
Rogg2020b
A. Roggero, A. C. Y. Li, J. Carlson, R. Gupta, G. N. Perdue,
Phys. Rev. D 2020, 101 074038.
Guan2021
W. Guan, G. Perdue, A. Pesah, M. Schuld, K. Terashi, S. Vallecorsa, J.-R.
Vlimant,
Machine Learning: Science and Technology 2021,
2, 1 011003.
Tuys2020
Tüysüz, Cenk, Carminati, Federico, Demirköz, Bilge, Dobos,
Daniel, Fracas, Fabio, Novotny, Kristiane, Potamianos, Karolos,
Vallecorsa, Sofia, Vlimant, Jean-Roch,
EPJ Web Conf. 2020, 245 09013.
Farr2017
Farrell, Steven, Anderson, Dustin, Calafiura, Paolo, Cerati, Giuseppe,
Gray, Lindsey, Kowalkowski, Jim, Mudigonda, Mayur, Prabhat,
Spentzouris, Panagiotis, Spiropoulou, Maria, Tsaris, Aristeidis,
Vlimant, Jean-Roch, Zheng, Stephan,
EPJ Web Conf. 2017, 150 00003.
Gian2022
A. Gianelle, P. Koppenburg, D. Lucchesi, D. Nicotra, E. Rodrigues, L. Sestini,
J. de Vries, D. Zuliani,
Journal of High Energy Physics 2022, 2022, 8
14.
|
http://arxiv.org/abs/2307.05246v1 | 20230711132159 | Polytope Extensions with Linear Diameters | [
"Volker Kaibel",
"Kirill Kukharenko"
] | math.CO | [
"math.CO",
"52Bxx, 90C05"
] |
Polytope Extensions with Linear Diameters
Volker Kaibel Kirill Kukharenko
August 12, 2023
=================================================================================================
We describe constructions of extended formulations that establish a certain relaxed version of the Hirsch-conjecture and prove that if there is a pivot rule for the simplex algorithm for which one can bound the number of steps by the (monotone) diameter of the polyhedron of feasible solutions then the general linear programming problem can be solved in strongly polynomial time.
The diameter of a polytope P is the smallest number δ such that in the graph of P, formed by the vertices and the one-dimensional faces (edges) of P, every pair of vertices is connected by a path with at most δ edges. Warren M. Hirsch conjectured in 1957 (see, e.g., <cit.>) that the diameter of each d-dimensional polytope with n facets is bounded from above by n-d. Though of central interest in polytope theory, that conjecture was only disproved in 2010 by Santos, who exhibited a 43-dimensional polytope with 86 facets and diameter 44.
Today, it is known that no upper bound better than 2120(n-d) is valid in general <cit.>.
The best-known upper bounds are (n-d)^log_2 d by Todd <cit.>,
n^log_2 d + 2 by <cit.>, and
O(Δ^2n^3.5log_2(nΔ)) by Bonifas, di Summa, Eisenbrand, Hähnle, and Niemeier <cit.>, where Δ is the largest absolute value of a sub-determinant of the integral coefficient matrix of some inequality description of P.
While we do not present a new bound on the diameters of polytopes,
our first main contribution (see Theorem <ref>, and in particular its Corollary <ref>) is to prove that for each d-dimensional polytope P in ^d with n facets that satisfies a certain non-degeneracy assumption there is a non-degenerate (d+1)-dimensional polytope Q with n+1 facets and diameter at most 2(n-d) that can be mapped linearly to P (Q is an extension or extended formulation of P). We further show in Theorem <ref> that such an extension Q is even computable in strongly polynomial time, if a vertex of P is specified within the input.
Note that without requiring the number of facets and the dimension of Q to be polynomial in n and d, Q could trivially be chosen as a high-dimensional simplex (which even has diameter one) with as many facets as P has vertices (which might easily be exponential in n and d). Similarly, without the non-degeneracy requirement on Q such a construction can trivially be obtained by forming a pyramid over P (which has diameter at most two). We elaborate below why the restriction on the dimension and the non-degeneracy property of Q make the result interesting.
The motivation for the interest in the diameter of polytopes is that it necessarily is bounded by a polynomial in n (i.e., the polynomial Hirsch-conjecture must be true) if a polynomial time pivot rule for the simplex algorithm for linear programming exists. The search for such a pivot rule is considered highly relevant in the light of the question whether there is a strongly polynomial time algorithm for linear programming (i.e. an algorithm for which not only the number of bit-operations can be bounded by a polynomial in the entire input length, but also the number of its arithmetic operations can be bounded by a polynomial in the number of inequalities),
which is most prominent in Smale's list of 18 open problems for the 21st century <cit.>.
A basis of a system Ax ≤ b with an m× d-matrix A and rank(A)=d is a subset I ⊆ [m] with |I|=d such that the submatrix A_I of A formed by the rows of A indexed by I is non-singular. Such a basis defines the basis solution A_I^-1b_I of the system; it might be feasible (if it satisfies all inequalities Ax ≤ b) or not. The feasible basis solutions are exactly the vertices of the polyhedron defined by Ax ≤ b. A d-dimensional polyhedron is called simple or non-degenerate if each vertex is contained in exactly d facets, which, for a (full-dimensional) polytope defined by an irredundant system Ax ≤ b is equivalent to every vertex being defined by exactly one basis.
The bases-exchange graph of a d-dimensional polytope P⊆^d defined by an irredundant system Ax ≤ b has the feasible bases of Ax ≤ b as its nodes, where two bases are adjacent if and only if their symmetric difference consists of exactly two indices. If P is simple then the graph of P is isomorphic to the bases-exchange graph for any irredundant system defining P. The diameter of a (bases-exchange) graph is the smallest number δ for which any pair of nodes in the (bases-exchange) graph is connected by a path of length at most δ. The monotone diameter of a bases-exchange graph is the smallest number
δ⃗ such that for each linear objective function and for every node in the bases-exchange graph there is a monotone path of length at most δ⃗ to some basis defining an optimal solution, where monotone means that only edges are used that improve the objective function or that connect two bases defining the same vertex. Clearly, the diameter of the graph of a polytope is a lower bound on the diameter of any corresponding bases-exchange graph, which in turn is a lower bound for the monotone diameter of the latter. We show in Theorem <ref> that one can further lift (by spending one more dimension) the extensions described in Theorem <ref>
such that even a monotone path of length at most 2(n-d+1)+1 to some optimal vertex exists, for each linear objective function and each start vertex.
The simplex algorithm in fact proceeds along monotone paths in the bases-exchange graph.
Therefore, for each polytope the worst-case running time of the simplex algorithm (over all linear objective functions) is bounded from below by the monotone diameter of the bases-exchange graph. Consequently, a variant of the simplex algorithm that runs in polynomially (in the number of inequalities) bounded time for all linear programs can only exist if there is a polynomial (in the number of facets) upper bound on the monotone diameters of the bases-exchange graphs of polytopes, and thus on the diameters of the graphs of polytopes.
Our second main contribution is to use the extensions of small diameters that we devise in the first part in order to show that if there is a pivot rule for the simplex algorithm for which one can bound the number of steps polynomially in the diameter of the graph of the polyhedron formed by the feasible solutions (or even only in the monotone diameter of the bases-exchange graph) then the general linear programming problem can be solved in strongly polynomial time (see Theorems <ref> and <ref>). Thus, even if it turns out that the polynomial Hirsch-conjecture fails, it still might be possible to come up with a strongly polynomial time algorithm for general linear programming by devising a polynomial pivot rule for only the special class of problems exhibiting small (monotone) diameters.
The paper is organized as follows.
Section <ref> introduces a special type of extended formulations that we call rock extensions which will allow us to realize the claimed diameter bounds. Special properties of rock extensions for two- and three-dimensional polytopes are discussed in Section <ref>.
In Section <ref> we ensure that the procedure we devise in the first section for obtaining a rock extension with certain additional properties (that we need to maintain in our inductive construction) can be adjusted to produce a rational extension having its encoding size polynomially bounded in the encoding size of the input. We eventually consider computational aspects in Section <ref> and upgrade our extensions to allow for monotone short paths in Section <ref> in order to establish the results announced above.
§ ROCK EXTENSIONS
For a row-vector α ∈ ℝ^d∖{0} and a number β ∈ ℝ we call the sets H^≤(α,β) := { x ∈ ℝ^d | α x ≤ β} and H^=(α,β) := { x ∈ ℝ^d | α x = β} a halfspace and a hyperplane, respectively. Moreover, we naturally extend the above notation by H^σ(α,β) to denote the set { x ∈ ℝ^d | α x σ β} where σ ∈ {<,>}. For A ∈ ℝ^m× d and b ∈ ℝ^m we use P^≤(A,b) to denote the polyhedron {x ∈ ℝ^d | Ax ≤ b}. For A ∈ ℝ^m× d and I ⊆ [m] we use A_I to denote the submatrix of A formed by the rows of A indexed by I. Let Ax ≤ b be a system of linear inequalities with A ∈ ℝ^m× d, b ∈ ℝ^m. Then we call the family of hyperplanes H^=(A_1, b_1),…,H^=(A_m, b_m) the hyperplane arrangement associated with Ax ≤ b and denote it by ℋ(A,b). We call a d-dimensional polytope a d-polytope.
We commence by introducing two types of systems of linear inequalities which will be crucial throughout the work.
A feasible system of linear inequalities Ax ≤ b with A ∈^m× d, b ∈^m is said to be non-degenerate if each vertex of ℋ(A,b) is contained in exactly d of the m hyperplanes.
The system is called totally non-degenerate, if, for any collection of k hyperplanes of ℋ(A,b), their intersection is a (d-k)-dimensional affine subspace for 1 ≤ k ≤ d and the empty set for k>d.
Note that total non-degeneracy implies non-degeneracy. We introduce corresponding notions for polytopes in the following way.
A polytope is called strongly non-degenerate resp. totally non-degenerate if there is a non-degenerate resp. totally non-degenerate system of linear inequalities defining it.
We observe that each strongly non-degenerate polytope is full-dimensional and simple.
A non-degenerate system Ax ≤ b with A ∈^m× d, b ∈^m is said to be simplex-containing if there exists a subset I ⊆ [m] with |I|=d+1 such that P^≤(A_I,b_I) is a d-simplex.
Note that each strongly non-degenerate polytope P can be described by a simplex-containing non-degenerate system Ax ≤ b. This is due to the fact that one can add d+1 redundant inequalities defining a simplex S ⊇ P to any non-degenerate description of P while maintaining non-degeneracy (in fact, we establish later that a single auxiliary inequality is enough to ensure the simplex-containing property).
In addition, it turns out that any totally non-degenerate system defining a polytope is simplex-containing. We proceed with a proof of this fact.
Let P be a d-polytope given by a totally non-degenerate system Ax ≤ b of m linear inequalities. There exists a subset I ⊆ [m] with |I| = d+1 such that the polyhedron P^≤(A_I, b_I) is bounded.
We can assume 0 ∈ int(P), implying P^∘ = conv{A_1^T,…,A_m^T} for the polar dual of P (for the theory of polar duality, see e.g. <cit.>). Since P is bounded, we have 0 ∈ P^∘ (even 0 ∈ int(P^∘)). Hence, by Carathéodory's theorem there exists some subset I ⊆ [m] with |I| ≤ d+1 such that 0 ∈ Q := conv{A_i^T | i ∈ I}. In fact, we have 0 ∈ int(Q), since otherwise there was some proper subset J ⊊ I with 0 ∈ conv{A_i^T | i ∈ J}, implying the contradiction rank(A_J) < |J| ≤ d to the non-degeneracy of Ax ≤ b. But 0 ∈ int(Q) in turn implies that P^≤(A_I, b_I) = Q^∘ is bounded, which in particular implies |I| = d+1.
Next we introduce a special type of extensions we will be working with.
Let P be the polytope defined by a system Ax ≤ b with A ∈^m × d,b ∈^m. Any polytope Q := {(x,z) ∈^d+1| Ax + a z ≤ b, z ≥ 0} with a ∈^m_>0 will be called a rock extension of P.
Note that a rock extension Q together with the orthogonal projection onto the first d coordinates indeed provides an extended formulation of P. If P is a full-dimensional d-polytope (which we assume henceforth), then Q is a (d+1)-dimensional polytope that has at most m+1 facets, including the polytope P itself (identified with P×{0}) as the one defined by the inequality z ≥ 0. In case Ax ≤ b is an irredundant description of P, a rock extension Q has exactly m+1 facets defined by z ≥ 0 and A_i x + a_i z ≤ b_i for i ∈ [m], where the latter m inequalities are in one-to-one correspondence with the facets of P. See Figure <ref> for an illustration.
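To make the definition tangible, the following minimal sketch (ours, not from the paper; the concrete choice of the coefficient vector a is arbitrary) writes down the inequality description of a rock extension of the unit square and tests a few points against it.

```python
# Minimal illustration of the rock-extension definition: given Ax <= b describing
# the unit square P in R^2 and any strictly positive coefficient vector a, the
# system below describes Q = {(x, z) : Ax + a z <= b, z >= 0} in R^3.
import numpy as np

A = np.array([[ 1,  0],    # x1 <= 1
              [-1,  0],    # -x1 <= 0
              [ 0,  1],    # x2 <= 1
              [ 0, -1]])   # -x2 <= 0
b = np.array([1.0, 0.0, 1.0, 0.0])
a = np.array([0.5, 0.4, 0.3, 0.2])         # arbitrary positive "tilt" coefficients

A_Q = np.vstack([np.column_stack([A, a]),  # A_i x + a_i z <= b_i
                 [0.0, 0.0, -1.0]])        # z >= 0 (the base facet, a copy of P)
b_Q = np.append(b, 0.0)

def in_Q(p, tol=1e-9):
    return bool(np.all(A_Q @ p <= b_Q + tol))

print(in_Q(np.array([0.3, 0.7, 0.0])))   # point of the base P x {0}  -> True
print(in_Q(np.array([0.5, 0.5, 1.0])))   # a lifted point of Q        -> True
print(in_Q(np.array([1.0, 1.0, 1.0])))   # violates x1 + 0.5 z <= 1   -> False
```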
We call the facet P of Q the base and partition the vertices of Q into base vertices and non-base vertices accordingly. A vertex of Q with maximal z-coordinate is called a top vertex. A path in the graph of a rock extension will be called z-increasing if the sequence of z-coordinates of vertices along the path is strictly increasing. To shorten our notation, we denote a hyperplane {(x,z)∈^d+1| z = h} and a halfspace {(x,z)∈^d+1| z ≤ h} by {z = h} and {z ≤ h}, respectively. We also use the notation B_ϵ(q) for the open Euclidean ball of radius ϵ with center q.
Let ϵ>0 be a positive number.
We say that a rock extension Q of P is ϵ-concentrated around (o,h)∈^d×_>0 if (o,h) is the unique top vertex of Q, we have B_ϵ(o) ⊆ P, and all non-base vertices of Q are contained in the open ball B_ϵ((o,h)).
It turns out that maintaining ϵ-concentrated rock extensions opens the door for inductive constructions of rock extensions. More precisely, we are going to establish by induction on the number of inequalities the following result which makes up the core of our contributions.
For every d-polytope P given by a simplex-containing non-degenerate system Ax ≤ b of m linear inequalities, every ϵ >0, and every point o with
B_ϵ(o) ⊆ P, there exists a simple rock extension Q that is ϵ-concentrated around (o,1) so that for each vertex of Q there exists a z-increasing path of length at most m-d to the top vertex (o,1).
For totally non-degenerate polytopes the latter result immediately implies the following bound that is only twice as large as the bound originally conjectured by Hirsch. For a more general result for all strongly non-degenerate polytopes along with considerations of algorithmic complexity see Section <ref>.
Each totally non-degenerate d-polytope P with n facets admits a simple (d+1)-dimensional extension Q with n+1 facets and diameter at most 2(n-d).
We proceed by induction on m.
In case of m = d+1 the polytope P is a d-simplex and hence the (d+1)-dimensional pyramid Q over P with top vertex (o,1) has the required properties.
So let us consider the case m ≥ d+2. Since Ax ≤ b is simplex-containing, there exists an inequality A_ix ≤ b_i (i ∈ [m]∖ I can be chosen arbitrarily for some I as in Definition <ref>) whose deletion from Ax ≤ b results in a system defining a bounded polyhedron P̄. By the induction hypothesis and due to B_ϵ(o) ⊆ P ⊆ P̄, for every 0 < μ ≤ ϵ the polytope P̄ defined by the simplex-containing non-degenerate system A_Jx ≤ b_J with J := [m]∖{i} admits a simple rock extension Q̄ that is μ-concentrated around (o,1) with each vertex having a z-increasing path of length at most m-d-1 to the top vertex (o,1) of Q̄.
To complete the proof we will use the inductive construction of Q̄ for an appropriate choice of 0<μ<ϵ. Then we will add to its inequality description an inequality A_ix + a_iz ≤ b_i in order to obtain a simple rock extension Q of P that is ϵ-concentrated around (o,1) and show that the vertices of Q admit similar paths to the top vertex as the vertices of Q̄ do. Here we choose the coefficient a_i>0 that determines the "tilt angle" of the corresponding hyperplane
in such a way that H^=((A_i,a_i), b_i) is tangential to B_μ((o,1)) with B_μ((o,1)) ⊆ H^≤((A_i,a_i), b_i), which
indeed can be achieved since due to μ < ϵ we have B_μ(o) ⊊ B_ϵ(o) ⊆ P. Then the inequality A_ix + a_iz ≤ b_i will not cut off any non-base vertices from Q̄ (as they are all contained in B_μ((o,1))), and hence (o,1) is the unique top vertex of Q as well. Note that each "new" non-base vertex of Q is the intersection of H^=((A_i,a_i), b_i) with the relative interior of some non-base edge of Q̄ connecting a base vertex of Q̄ that is cut off by H^≤((A_i,a_i), b_i) to a non-base vertex contained in B_μ((o,1)).
We use the following statement, which will be proven separately.
There exists a number D ≥ 7, such that for every
0<μ≤1/2 with
μ < ϵ the Euclidean distance from any “new” non-base vertex of Q to (o,1) is less than μ D.
Hence by choosing any 0<μ ≤ min{1/2, ϵ/D} (in particular, μ < ϵ), we guarantee that all non-base vertices of Q (including the "new" ones) are contained in B_ϵ((o,1)).
As Q is simple, every base vertex of Q has exactly one edge not lying in the base, which will be called its increasing edge (since the z-coordinate of its non-base endvertex is greater than 0, the z-coordinate of its base endvertex). Note that a z-increasing path connecting a base vertex u to the top vertex necessarily contains the increasing edge incident to u.
Now suppose v is a (base or non-base) vertex of Q̄ that is a vertex of Q as well. Then v ∈ H^<((A_i,a_i), b_i) holds,
where this is clear for the non-base vertices, and for the base vertices this is due to
Ax ≤ b being non-degenerate. In particular, v is still contained in exactly d facets of Q. Hence v has the same z-increasing path of length at most m-d-1 to the top vertex in Q as in Q̄, since v itself and all non-base vertices of Q̄ are contained in H^<((A_i,a_i), b_i).
Finally, consider a "new" base vertex v of Q, which is the intersection of H^=((A_i,a_i), b_i) with the relative interior of some base edge e of Q̄
(again due to the non-degeneracy of Ax ≤ b). Denote the endpoint of e contained in H^>((A_i,a_i), b_i) by u. Since u is a base vertex of Q̄, it has a unique increasing edge, which we denote by g. Let us denote the other endvertex of g by w. Then, since w ∈ B_μ((o,1)), the hyperplane H^=((A_i,a_i), b_i) intersects g in a relative interior point that we denote by y. As Q̄ is simple, both v and y are contained in exactly d facets of Q̄, and there exists a 2-face f of Q̄ containing both edges e and g incident to u. Since the hyperplane H^=((A_i,a_i), b_i) intersects both edges e and g in the points v and y, respectively, it intersects f in the edge {v,y} of the rock extension Q.
Since there exists a z-increasing path of length at most m-d-1 connecting u and the top vertex (o,1) in Q̄, the same path with only the edge {w,u} replaced by the two edges {w,y}, {y,v} (which both are z-increasing since u is a base vertex and y is contained in the relative interior of the increasing edge {w,u}) connects the base vertex v to (o,1) in Q and has length at most m-d. Note that every "new" non-base vertex of Q arises as we described for y above, thus admitting a z-increasing path to the top vertex (o,1) of length at most m-d (in fact at most m-d-1). Therefore, Q is indeed a simple rock extension that is ϵ-concentrated around (o, 1) with each vertex of Q admitting a z-increasing path to the top vertex of length at most m-d. See Figure <ref> for an illustration.
We still have to prove Claim <ref>. Let us therefore first introduce some additional notations.
Let δ_1 denote the maximum Euclidean distance from any (feasible or infeasible) basis solution of the system Ax ≤ b to the point o.
And let δ_2 be the minimum Euclidean distance from any (again feasible or infeasible) basis solution u to a hyperplane H^=(A_i,b_i) not containing u with A_ix ≤ b_i being a row of Ax ≤ b.
Let U be a base vertex of Q̄ cut off by H^≤((A_i,a_i), b_i). We denote the other vertex of the increasing edge of U by W. Note that the following argumentation only relies on W ∈ B_μ((o,1)) and the fact that W doesn't lie above {z = 1}, which will be useful for the considerations in Section <ref>. Let further Y be the intersection point of H^=((A_i,a_i), b_i) with the edge UW. We aim to bound the distance from Y to (o,1). Note that Y lies below {z=1} because of W ∈ {z ≤ 1}. Furthermore, due to the choice of a_i, the hyperplane H^=((A_i,a_i), b_i) is tangential to B_μ((o,1)) at a point we denote by T. Note that T lies above {z = 1} since we have
B_μ(o) ⊊ B_ϵ(o) ⊆ P. Thus the line through T and Y intersects {z =0 } in a point R. Since both T and Y are contained in H^=((A_i,a_i), b_i), so is that line. We denote the angles
∠ RYU = ∠ TYW , ∠ WTY and ∠ YUR by α, γ and δ respectively. See Figure <ref> for an illustration.
On the one hand, applying the law of sines to the triangle RYU we obtain sin α/UR = sin δ/YR. On the other hand, for the triangle TYW the same implies sin α/TW = sin γ/WY. Solving both equations for sin α we get (UR/YR) sin δ = (TW/WY) sin γ. Then, solving the last equality for WY we obtain
WY = (TW · YR/UR) · (sin γ/sin δ) ≤ 2μ(YU+UR)YU/(UR · h_{Y,UR}) ,
where the last inequality holds since TW ≤ dist(T, (o,1)) + dist(W, (o,1)) ≤ 2μ, sin γ ≤ 1, YR ≤ YU + UR and sin δ = h_{Y,UR}/YU, where h_{Y,UR} is the height of vertex Y in the triangle RYU.
We denote the orthogonal projections of Y and W to the hyperplane {z=0} by Y' and W', respectively.
Since YY' is the distance between Y and the hyperplane {z=0} that contains both U and R, we conclude h_{Y,UR} ≥ YY'. Moreover, the triangles YUY' and WUW' are similar and therefore h_{Y,UR} ≥ YY' = (YU/(YU+WY))·WW' ≥ (YU/(YU+WY))(1-μ), where the last inequality follows from the fact that W ∈ B_μ((o,1)). Plugging that estimate into (<ref>) gives
Finally we bound the length of all the remaining line segments appearing in the right-hand side of (<ref>) to obtain an upper bound on WY. First, we observe YU ≤ YU+WY≤(U, (o,1)) + μ≤√(δ_1^2 + 1)+μ. Secondly UR≥(U, H^=(A_i, b_i)) ≥δ_2. Plugging those inequalities into (<ref>) we obtain
WY ≤2μ(√(δ_1^2 + 1)+μ)/1-μ(1+√(δ_1^2 + 1)+μ/δ_2)
≤ 4μ(δ_1 +1.5)(1+δ_1 +1.5/δ_2) ,
where for the last inequality we used μ≤ 0.5 and √(δ_1^2 + 1)≤δ_1 + 1. It follows that ((o,1), Y) < μ + WY ≤μ D, with D := 4(δ_1 +1.5)(1+δ_1 +1.5/δ_2) + 1 ≥ 7.
§ LOW DIMENSIONAL POLYTOPES
This section is dedicated to an improvement of the diameter bound from the last section for rock extensions of two- and three-dimensional polytopes.
Let us consider again the setting of the proof of Theorem <ref>. The main source of improvement for d∈{2,3} will be to apply the induction hypothesis to a polytope obtained by deleting a batch of inequalities defining pairwise disjoint facets of the original polytope. It will turn out that subsequently constructing a rock extension by adding all of the batch inequalities back one after another (with coefficients a as in the proof of Theorem <ref>) will have the effect of increasing the combinatorial distances to the top vertex by at most one overall. Next we elaborate on the latter fact.
Let Ax ≤ b be a simplex-containing non-degenerate system of m ≥ d+3 inequalities defining a polytope P = P^≤(A,b) with an interior point o, and let ϵ be a positive number such that B_ϵ(o) ⊆ P. Furthermore, let the inequalities A_ix ≤ b_i and A_jx ≤ b_j with i,j ∈ [m]∖ I (where, again, I is as in Definition <ref>) and i ≠ j define disjoint facets f_i and f_j of P, respectively. Note that each vertex of f_j is contained in H^<(A_i,b_i) and vice versa.
Consider the polytopes P_J:=P^≤(A_J,b_J) with J := [m] ∖{i} and P_K = P^≤(A_K,b_K) with K:=[m]∖{i,j}.
For the number ν := min{1/2D, ϵ/D^2} < ϵ with D as in Claim <ref> by Theorem <ref> the polytope P_K admits a simple rock extension Q_K that is ν-concentrated around (o,1) such that for every vertex of Q_K there exists a z-increasing path of length at most m-d-2 to the top vertex (o,1).
Now we argue that adding the inequality A_jx +a_jz≤ b_j to a system describing Q_K with a_j chosen as discussed in the proof of Theorem <ref>, where we use μ := min{1/2, ϵ/D} for ϵ in that theorem, and then further adding A_ix +a_iz≤ b_i (with a_i as in the proof of Theorem <ref> again)
results in a simple rock extension Q of P that is ϵ-concentrated around (o,1) and has diameter at most 2(m-d-1). More precisely, despite subsequently adding two cutting halfspaces, the length of all paths to the top has increased by at most one.
Let v be a "new" base vertex of Q_J, which is the intersection of H^=((A_j,a_j), b_j) with the relative interior of some base edge e of Q_K, admitting a z-increasing path to the top vertex of Q_J of length at most m-d-1 as in the proof of Theorem <ref>. Since v is identified with a vertex of the facet f_j of P and since f_i and f_j are disjoint, v ∈ H^<((A_i,a_i),b_i) holds and hence v is a vertex of Q as well. Moreover, recall that all non-base vertices of Q_J are vertices of Q since they are contained in B_μ((o,1)) ⊆ H^<((A_i,a_i),b_i), and hence they admit z-increasing paths of length at most m-d-2 to the top of Q. Therefore, v admits the very same z-increasing path of length at most m-d-1 to the top vertex of Q as in Q_J. On the other hand, any "old" base vertex u of Q_J (which is a base vertex of Q_K too) admits a path to the top vertex of Q_J of length at most m-d-2. Since the vertices of the latter kind are the only ones that could be cut off by A_ix+a_iz ≤ b_i when constructing Q, all the "new" base and non-base vertices of Q admit z-increasing paths of length at most m-d-1 resp. m-d-2 to the top vertex of Q.
Note that the above argumentation naturally extends to any number of inequalities defining pairwise disjoint facets of P, where the sequence μ = min{1/2, ϵ/D}, ν = μ/D is extended to μ, μ/D, μ/D^2, μ/D^3,…
We now exploit the latter consideration to improve the diameter bounds for rock extensions of two- and three-dimensional polytopes. Let us also note upfront that any non-degenerate system of m inequalities Ax ≤ b defining a d-polytope P can be augmented to a non-degenerate simplex-containing system describing P by adding a single redundant inequality to Ax ≤ b. Let v be a vertex of P. Then the redundant inequality α x ≤β can be chosen in such way that together with d inequalities defining v it forms a simplex containing P and such that the system Ax ≤ b , α x ≤β is non-degenerate. We will elaborate on how to choose α and β in Section <ref> in more detail.
The following statement for polygons holds.
Each n-gon admits a simple 3-dimensional extension with at most n+2 facets and diameter at most 2log_2 (n-2)+4.
We commence with the observation that any irredundant system of inequalities describing an n-gon P is non-degenerate, since no three distinct edge-containing lines intersect in a point. Hence, as discussed above, P can be described by a non-degenerate system Ax ≤ b of m=n+1 inequalities, consisting of n edge-defining inequalities for P and an artificially added inequality. The set I ⊆ [m] indexes two inequalities defining edges incident to a vertex of P together with the auxiliary inequality, chosen such that the three of them form a simplex containing P.
As in Theorem <ref> we prove by induction that for any interior point o of P and every ϵ >0 with B_ϵ(o) ⊆ P there exists a simple rock extension Q of P that is ϵ-concentrated around (o,1) such that for each vertex of Q there exists a z-increasing path of length at most log_2 (m-3)+2 to the top vertex. Clearly Q then has diameter at most 2log_2(n-2)+4.
It is easy to see that the claim holds for m=4,5.
Note that ⌈(n-2)/2⌉ = ⌈(m-3)/2⌉ of the facets defined by inequalities from [m] ∖ I are pairwise disjoint. For that we just pick every second edge while traversing the graph of the (not necessarily bounded) polygon P^≤(A_[m] ∖ I,b_[m] ∖ I), since the corresponding edges are pairwise disjoint in P as well. Deleting the inequalities corresponding to all those facets at once yields a polygon P̄ described by a system of m̄ := ⌊(m+3)/2⌋ ≤ (m+3)/2 inequalities. By the induction hypothesis, for μ := D^{-⌈(n-2)/2⌉} min{D/2,ϵ} with D as in Claim <ref> there exists a simple rock extension Q̄ of P̄ that is μ-concentrated around (o,1) so that for each vertex of Q̄ there exists a z-increasing path of length at most log_2 (m̄-3)+2 to the top vertex.
According to the arguments discussed above,
subsequently adding all ⌈(n-2)/2⌉ deleted inequalities back with appropriate a-coefficients, thus constructing a sequence of ⌈(n-2)/2⌉ rock extensions λ_k-concentrated around (o,1) with λ_0=μ, λ_{k+1}=Dλ_k, λ_{⌈(n-2)/2⌉} ≤ ϵ, yields a simple rock extension Q of P that is ϵ-concentrated around (o,1) such that each vertex of Q admits a z-increasing path to the top vertex of length at most log_2 (m̄-3)+2+1 ≤ log_2((m+3)/2 -3)+2+1 = log_2(m-3)+2 = log_2(n-2)+2.
A three-dimensional simple rock extension of a polygon having logarithmic diameter is depicted in Figure <ref>. Similarly we prove the following bound for three-dimensional polytopes (recall that each strongly non-degenerate polytope with n facets can be described by a non-degenerate simplex-containing system of at most m=n+1 inequalities).
Each three-dimensional polytope P described by a non-degenerate simplex-containing system with m inequalities admits a simple four-dimensional extension with at most m+1 facets and diameter at most 2log_{4/3}(m-4)+4.
Once more, the set of indices of the four inequalities defining the simplex containing P is referred to as I. To estimate the number of pairwise disjoint facets of P, consider the graph G_F whose vertices are the facets of P, where two vertices are adjacent if and only if the corresponding facets are non-disjoint. Since P is simple, two facets are non-disjoint if and only if they share an edge. Therefore G_F is the graph of the polar polytope P^∘. Since P^∘ is three-dimensional, G(P^∘) is planar, and so is the graph G_F^':= G(P^∘) ∖ V(I), where V(I) contains the vertices of G(P^∘) corresponding to the facets of P defined by the inequalities indexed by I. It is a consequence of the four-color theorem <cit.> that any planar graph G admits a stable set of cardinality at least |V(G)|/4. Let S ⊆ V(G_F^') be a stable set in G_F^' of cardinality at least |V(G_F^')|/4 = (m-4)/4. By deleting the inequalities that correspond to the vertices in S from Ax ≤ b, applying the induction hypothesis as in Theorem <ref>, and subsequently adding these deleted inequalities back with appropriate a-coefficients we again obtain a simple rock extension with diameter at most 2(log_{4/3}((3m+4)/4 - 4) +2 + 1) = 2(log_{4/3}(3(m-4)/4) +2 + 1) = 2log_{4/3}(m-4) +4.
§ RATIONAL POLYTOPES AND ENCODING SIZES
For a rational d-polytope given by a non-degenerate system Ax ≤ b with A ∈^m × d,b ∈^m we want to argue in this section that there exists a simple rational rock extension Q with diameter at most 2m such that its encoding size (w.r.t. the inequality description) is polynomially bounded in the encoding size of P, denoted by ⟨ A,b ⟩.
We can assume that A and b are integral, since one can multiply the system Ax ≤ b by the least common multiple of all denominators of entries of A and b (which has a polynomial encoding size in ⟨ A,b ⟩). We denote the maximum absolute value of a k× k sub-determinant of (A,b) by Δ_k. We now adjust the proof of Theorem <ref> so that the extension Q being constructed meets additional requirements.
For each polynomial q_1(·) there exists a polynomial q_2(·) such that for every simplex-containing non-degenerate system defining a d-polytope P = P^≤(A,b) with A∈^m× d, b∈^m, every rational ϵ>0, and every rational point o with B_ϵ(o) ⊆ P with ⟨ϵ⟩ ,⟨ o ⟩≤ q_1(⟨ A,b ⟩), there exists a simple rational rock extension Q that is ϵ-concentrated around (o,1) so that for each vertex of Q there exists a z-increasing path of length at most m-d to the unique top vertex, and ⟨ a ⟩≤ q_2(⟨ A,b ⟩) holds for the vector a of coefficients of the additional variable.
W.l.o.g. we assume o = 0 ∈ int(P), which implies b > 0.
Again, for m = d+1 the statement trivially holds true for Q = conv(P ∪ {(0,1)}) = {(x,z) ∈^{d+1} | Ax + bz ≤ b, z ≥ 0}. Let us next consider the induction step.
First, let us obtain explicit bounds on δ_1 and δ_2 (from Definition <ref>). Due to Cramer's rule and the intergrality of A and b each coordinate of any basis solution of Ax ≤ b is in absolute value at most Δ_d. Moreover, since ⟨(M) ⟩≤ 2⟨ M ⟩ holds for any rational square matrix M <cit.>, we have Δ_d ≤ 2^2⟨ A,b ⟩, and therefore
δ_1 ≤Δ_d√(d)≤ 2^2⟨ A,b ⟩d .
Now assume that a basis solution u and a hyperplane H^=(A_i,b_i) corresponding to a row of Ax ≤ b with u ∉ H^=(A_i,b_i) have Euclidean distance |A_iu - b_i|/||A_i||_2 = δ_2. Since the least common multiple of denominators of all coordinates of u is at most Δ_d (due to Cramer's rule again), |A_iu - b_i| ≠ 0 and A_i, b_i are integral, |A_iu - b_i| ≥1/Δ_d. Therefore, we obtain
δ_2 ≥1/Δ_d||A_i||_2≥ (2^2⟨ A,b ⟩dΔ_1)^-1 ,
where the last inequality follows from the aforementioned bound on Δ_d and ||A_i||_2≤Δ_1√(d)≤ dΔ_1.
Now we can adjust the choice of the constant D from Claim <ref>. Using (<ref>) and (<ref>) for bounding D as chosen at the end of the proof of Theorem <ref> we estimate:
dist((0, 1), Y) < μ + WY
≤ μ(4(2^{2⟨ A,b ⟩}d + 1.5)(1+(2^{2⟨ A,b ⟩}d + 1.5)(2^{2⟨ A,b ⟩}dΔ_1)) + 1)
≤ μ·25d^3Δ_1 2^{6⟨ A, b⟩} , and we now set D := 25d^3Δ_1 2^{6⟨ A, b⟩}.
Furthermore, later in the proof it will turn out to be useful if μ exceeds neither 1/(4d) nor 4d b_i/(||A_i||_1+b_i) for any i ∈ [m]. Therefore, we choose
μ := min{ {4d b_i/(||A_i||_1+b_i)}_{i ∈ [m]}, 1/(4d), ϵ/D } .
Note that this choice guarantees μ ≤ 1/2 and μ < ϵ, as well as that μ is rational with ⟨μ⟩ = O(⟨ A,b ⟩ + ⟨ϵ⟩).
In the proof of Theorem <ref> we chose a_i such that H^=((A_i,a_i),b_i) is tangential to B_μ((0,1)). We now want to quantify this value. Denoting the tangential point of the ball once again by T, we have (A_i,a_i) T = b_i since T ∈ H^=((A_i,a_i),b_i), and T = (0,1) + μ(A_i,a_i)^T/||(A_i,a_i)||_2 since T lies on the boundary of B_μ((0,1)) and B_μ((0,1)) ⊆ H^≤((A_i,a_i),b_i). Plugging the second equality into the first one, we obtain
(A_i,a_i)(0,1)^T + μ||(A_i,a_i)||_2^2/||(A_i,a_i)||_2 = a_i + μ√(||A_i||_2^2+a_i^2) = b_i .
Note that b_i ≥ b_i - a_i > 0 holds since (0,1) ∈ H^<((A_i,a_i),b_i). By moving a_i to the right-hand side in the last equation and squaring both sides we get
μ^2(||A_i||_2^2+a_i^2) = b_i^2 + a_i^2 - 2b_ia_i .
After rearranging the terms we obtain the quadratic equation
a_i^2(1 - μ^2) - 2a_ib_i + b_i^2 - μ^2||A_i||_2^2 = 0
with roots
a_i^{+,-} = (b_i ± √(b_i^2 - (1 - μ^2)(b_i^2 - μ^2||A_i||_2^2)))/(1-μ^2)
= (b_i ± μ√((1-μ^2)||A_i||_2^2 + b_i^2))/(1-μ^2) .
We deduce that a_i = a_i(μ) := a_i^-, since a_i^+ ≥ b_i/(1-μ^2) ≥ b_i. Unfortunately, a_i(μ) is not necessarily rational. However, we will show that the rational number
ã_i(μ) := (b_i - (μ/(2d))(||A_i||_1 + b_i))/(1-μ^2) ,
whose encoding size is polynomially bounded in ⟨ A,b ⟩ + ⟨μ⟩, satisfies
a_i(μ^') ≥ ã_i(μ) ≥ a_i(μ) ,
with μ^' := μ/(4d).
Note that due to ⟨μ^'⟩ = ⟨μ⟩ + O(⟨ d ⟩) (and the above estimate on ⟨μ⟩), throughout all of the fewer than m recursive steps the encoding length of μ^' stays polynomially bounded in m⟨ A,b ⟩ + ⟨ϵ⟩, and hence in ⟨ A,b ⟩ + ⟨ϵ⟩, with the "original" ϵ.
Then, in order to construct a rational rock extension Q of P, we use a recursively constructed rational rock extension Q̄ of P̄ that is in fact μ^'-concentrated around (0,1) and then add the inequality A_i x + ã_i(μ) z ≤ b_i.
Due to B_μ^'((0,1)) ⊆ H^≤((A_i, a_i(μ^')), b_i) and a_i(μ^') ≥ ã_i(μ), we have B_μ^'((0,1)) ⊆ H^≤((A_i, ã_i(μ)), b_i). Therefore, the argument for the existence of z-increasing paths to the top vertex of length at most m-d in Q is the same as in the proof of Theorem <ref>. On the other hand, since ã_i(μ) ≥ a_i(μ), all "new" non-base vertices of Q are contained in B_ϵ((0,1)). Let us shortly prove the latter. Consider Figure <ref> once again. The point W is now contained in the smaller ball B_μ^'((0,1)) ⊆ B_μ((0,1)) and lies in {z ≤ 1}. Since ã_i(μ) ≥ a_i(μ), the hyperplane
H^=((A_i, ã_i(μ)), b_i)
intersects the edge UW in a point Ỹ that lies on the line segment WY. Therefore WỸ ≤ WY and hence Ỹ ∈ B_ϵ((0,1)) as well. It remains to prove (<ref>).
We commence with a sequence of estimations:
(μ/(4d))√((1-(μ/(4d))^2)||A_i||_2^2 + b_i^2) ≤ (μ/(4d))√(||A_i||_2^2 + b_i^2)    (since μ ≥ 0)
≤ (μ/(4d))(||A_i||_2 + b_i)    (since 2||A_i||_2 b_i ≥ 0)
≤ (μ/(4d))(||A_i||_1 + b_i) .    (since ||·||_2 ≤ ||·||_1)
Furthermore, we have
(μ/(2d))(||A_i||_1 + b_i) ≤ (μ/(2d))(d||A_i||_2 + b_i)    (since ||·||_1 ≤ d||·||_2)
≤ (μ/2)(||A_i||_2 + b_i)    (since b_i ≥ 0)
= (μ/2)√(||A_i||_2^2 + b_i^2 + 2||A_i||_2 b_i)
≤ (μ/2)√(2(||A_i||_2^2 + b_i^2))    (since 2xy ≤ x^2+y^2)
≤ (μ/2)√(4(1-μ^2)||A_i||_2^2 + 2b_i^2)    (since 4(1-μ^2) ≥ 4·3/4 = 3 ≥ 2)
≤ μ√((1-μ^2)||A_i||_2^2 + b_i^2) ,
where 1-μ^2 ≥ 3/4 since μ ≤ 1/2. Finally, let us prove (<ref>), where we exploit the inequalities
μ ≤ 4d b_i/(||A_i||_1+b_i) for all i ∈ [m].
a_i(μ/(4d)) = (b_i - (μ/(4d))√((1-(μ/(4d))^2)||A_i||_2^2 + b_i^2))/(1-(μ/(4d))^2)
≥ (b_i - (μ/(4d))(||A_i||_1 + b_i))/(1-(μ/(4d))^2)    (by (<ref>))
= ((1-μ^2)/(1-(μ/(4d))^2)) · (b_i - (μ/(4d))(||A_i||_1 + b_i))/(1-μ^2)    (the second factor is ≥ 0)
≥ (1-μ/(4d)) · (b_i - (μ/(4d))(||A_i||_1 + b_i))/(1-μ^2)    (since 1/(4d) ≥ μ ≥ 0)
= (b_i - (μ/(4d))b_i - (1-μ/(4d))(μ/(4d))(||A_i||_1 + b_i))/(1-μ^2)
≥ (b_i - (μ/(4d))b_i - (μ/(4d))(||A_i||_1 + b_i))/(1-μ^2)    (since μ ≥ 0)
≥ (b_i - (μ/(2d))(||A_i||_1 + b_i))/(1-μ^2) = ã_i(μ)    (since μ ≥ 0)
≥ (b_i - μ√((1-μ^2)||A_i||_2^2 + b_i^2))/(1-μ^2)    (by (<ref>))
= a_i(μ) .
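As a quick illustration (our own numerical check, not part of the paper; the function names are ours), the following sketch evaluates the exact tangency coefficient a_i(μ) and the rational surrogate ã_i(μ) on random integral data and confirms the sandwich a_i(μ/(4d)) ≥ ã_i(μ) ≥ a_i(μ) for admissible μ.

```python
# Numerical sanity check of the sandwich  a_i(mu/(4d)) >= a~_i(mu) >= a_i(mu)  for
#   a_i(mu)  = (b_i - mu*sqrt((1-mu^2)*||A_i||_2^2 + b_i^2)) / (1 - mu^2)
#   a~_i(mu) = (b_i - (mu/(2d))*(||A_i||_1 + b_i)) / (1 - mu^2).
import math, random

def a_exact(A_i, b_i, mu):
    norm2 = math.sqrt(sum(x * x for x in A_i))
    return (b_i - mu * math.sqrt((1 - mu**2) * norm2**2 + b_i**2)) / (1 - mu**2)

def a_surrogate(A_i, b_i, mu, d):
    norm1 = sum(abs(x) for x in A_i)
    return (b_i - (mu / (2 * d)) * (norm1 + b_i)) / (1 - mu**2)

random.seed(0)
d = 3
for _ in range(1000):
    A_i = [random.randint(-5, 5) for _ in range(d)]
    if not any(A_i):
        continue
    b_i = random.randint(1, 10)              # b > 0 after shifting 0 into int(P)
    norm1 = sum(abs(x) for x in A_i)
    # mu below both 1/(4d) and 4d*b_i/(||A_i||_1 + b_i), as required in the proof
    mu = 0.9 * min(1.0 / (4 * d), 4 * d * b_i / (norm1 + b_i))
    lo = a_exact(A_i, b_i, mu)
    mid = a_surrogate(A_i, b_i, mu, d)
    hi = a_exact(A_i, b_i, mu / (4 * d))
    assert hi + 1e-9 >= mid >= lo - 1e-9
print("sandwich held on all random samples")
```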
§ ALGORITHMIC ASPECTS OF ROCK EXTENSIONS
In this section we address questions of how to compute rock extensions efficiently and how to utilize them in order to solve linear programming problems.
We first give an explicit algorithm for constructing a rock extension, assuming we have some prior information about the polytope. In the second part of the section we discuss a strongly polynomial time reduction of general (rational) linear programming to optimizing linear functions over rock extensions.
The proof of Corollary <ref> shows that for any rational simplex-containing non-degenerate system Ax ≤ b of m inequalities defining a (necessarily full-dimensional and simple) d-polytope P it is possible to construct a simple rational rock extension Q of P with diameter at most 2(m-d) in strongly polynomial time, if the following additional information is available: an interior point o of P (with ⟨ o ⟩ bounded polynomially in ⟨ A,b ⟩) and a subsystem A_I x ≤ b_I of d+1 inequalities defining a simplex containing P. Having that information at hand, we can shift the origin to o, scale the system to integrality, and then construct Q by choosing a-coefficients in accordance with the proof of Corollary <ref>. For that we explicitly state Algorithm <ref>. Note that it runs in strongly polynomial time.
We also need some ϵ > 0 with encoding size polynomially bounded in ⟨ A,b ⟩ and B_ϵ(0) ⊆ P. We make the following explicit choice for ϵ. For B_ϵ(0) ⊆ P to hold, ϵ should not exceed the minimum distance from 0 to a hyperplane corresponding to a facet of P. To achieve polynomial encoding size we bound this value from below and choose ϵ := min_{i∈[m]} b_i/(dΔ_1) ≤ min_{i∈[m]} b_i/||A_i||_2 = min_{i∈[m]} dist(0, H^=(A_i,b_i)).
Algorithm <ref> emulates the iterative construction of a rock extension described in the proof of Corollary <ref>, starting with a pyramid over a given simplex P^≤(A_I, b_I) and adding the inequalities indexed by [m]∖ I one by one. Note that we compute coefficients a_j in the reverse order of the iterative construction.
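Algorithm <ref> itself is displayed as a float and is not reproduced in this text; the following rough Python sketch (ours) only mimics the iterative construction it emulates. For readability it uses the exact tangency root for a_i instead of the rational surrogate ã_i(μ), leaves the shrink constant D to the caller, and omits the additional caps on μ from the rational analysis, so it should be read as an illustration of the idea rather than as the algorithm.

```python
# Sketch of the iterative rock-extension construction: start from the pyramid
# over the simplex P(A_I, b_I) and add the remaining inequalities back one by
# one, each time tilting the new hyperplane so that it touches B_mu((0, 1)).
import math

def rock_extension_coefficients(A, b, order, eps, D):
    """A, b: system with 0 in the interior of P (so b > 0) and eps such that
    B_eps(0) is contained in P; order: indices outside the simplex subsystem I,
    listed in the order in which they are added to the growing extension
    (last entry = outermost inequality of the recursion); D: shrink factor.
    Returns {index: coefficient a_i of the extra variable z}."""
    # Radii are computed in reverse order of the construction: the inequality
    # added last is tangential to the largest ball, each inner one to a ball
    # smaller by the factor D.
    radii, mu = [], eps
    for _ in order:
        mu = min(0.5, mu / D)
        radii.append(mu)
    radii.reverse()                      # first-added inequality, smallest ball

    a = {}
    for i, mu in zip(order, radii):
        norm2_sq = sum(x * x for x in A[i])
        # smaller root of  a^2 (1-mu^2) - 2 a b_i + b_i^2 - mu^2 ||A_i||_2^2 = 0,
        # i.e. the tilt for which H^=((A_i, a_i), b_i) touches B_mu((0, 1)).
        a[i] = (b[i] - mu * math.sqrt((1 - mu**2) * norm2_sq + b[i]**2)) / (1 - mu**2)
    return a
```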
What can we do if no interior point o of P is known (such that we could shift P to P-o in order to have 0 in the interior), and neither is the set I? For now let us assume we are given a vertex x^U of a strongly non-degenerate polytope P = P^≤(A,b) with integral A and b, and let U ⊆ [m] be the corresponding basis of x^U. Then the point o(λ) := x^U + (λ/||(A_U)^{-1}||_1) (A_U)^{-1}𝟙 is an interior point of P for every small enough positive λ. This is due to the fact that P is simple and hence the extreme rays of the radial cone of P at x^U are the columns of (A_U)^{-1}. Hence the sum of the extreme rays points from x^U into the interior of P and by choosing λ := (1/2)(2^{2⟨ A,b ⟩}dΔ_1)^{-1} ≤ δ_2/2 (recall δ_2 from Definition <ref>; the last inequality is due to (<ref>)), we guarantee that o(λ) ∈ int(P). Of course, before making this choice of λ one has to scale Ax ≤ b to integrality first.
The knowledge of x^U and U as above also enables us to come up with a set I as required in Algorithm <ref>. Indeed, the inequalities A_Ux ≤ b_U together with one additional redundant inequality 𝟙^T(A_U)^{-T}x ≤ 2^{2⟨ A,b ⟩} ||(A_U)^{-1}||_1 + 1, denoted by α x ≤ β, form a simplex containing P. Since the hyperplane H^=(α, β) does not contain any basis (feasible or infeasible) solution of Ax ≤ b, the system Ax ≤ b, α x ≤ β is non-degenerate as well and we can choose I as the union of U and the index of α x ≤ β. Now, after shifting the origin to o(λ) and scaling the system to integrality we can apply Algorithm <ref> to construct a rock extension of P. Thus we have established the following.
Given A ∈^m × d and b ∈^m defining a non-degenerate system of linear inequalities such that P=P^≤(A,b) is bounded and a vertex of P one can construct in strongly polynomial time a matrix A_Q ∈^(m+2) × (d+1) and a vector b_Q ∈^m+2 such that Q = P^≤(A_Q, b_Q) is a simple rational rock extension of P with at most m+2 facets and diameter at most 2(m-d+1).
Since the described construction of a rock extension works only for the case of non-degenerate systems and requires to know a vertex of the polytope, we introduce the following definition.
We call a pair (S,u) a strong input, if S is a rational non-degenerate system Ax ≤ b defining a polytope P and u is a vertex of P.
Next we show that the setting of strong input we are working with is general enough in order to solve general linear programs.
If there is a strongly polynomial time algorithm for finding optimal basis solutions for linear programs with strong inputs and rational objective functions then all rational linear programs can be solved in strongly polynomial time.
In order to prove the above theorem we first state and prove the following technical lemma.
For all A ∈^m × d with rank(A) = d, b ∈^m, c ∈^d such that P:= P^≤(A,b) is a pointed polyhedron and for every positive ϵ ≤ (3d||c||_1 2^5⟨ A,b⟩)^-1 the following holds for
P^ϵ:= P^≤(A, b+b^ϵ), where b^ϵ_i := ϵ^i , i ∈ [m].
* P ≠∅ if and only if P^ϵ≠∅. If P is non-empty, then P^ϵ is full-dimensional.
* For each feasible basis U for Ax ≤ b+ b^ϵ, the basis solution A^-1_U b_U is a vertex of P.
* For each vertex v of P there is a basis U of Ax ≤ b with v = A_U^-1b_U such that A_U^-1(b+b^ϵ)_U is a vertex of P^ϵ.
* If W is an optimal feasible basis for min{c^Tx| x ∈ P^ϵ}, then W is an optimal feasible basis for min{c^Tx| x ∈ P} as well.
* The system of linear inequalities Ax ≤ b + b^ϵ is non-degenerate.
A proof for statement <ref> can be found in <cit.>.
We commence with the simple observation that U is a (feasible or infeasible) basis for Ax ≤ b if and only if it is a (feasible or infeasible) basis for Ax ≤ b+b^ϵ since both system have the same left-hand side matrix A. We will refer to any such U as a basis of A. The following property 𝒫 turns out to be useful for the proof: if a basis (feasible or infeasible) solution x^U := A_U^-1b_U of Ax ≤ b with basis U is contained in H^<(A_i,b_i) or H^>(A_i,b_i) for some i ∈ [m], then the basis (feasible or infeasible) solution x^U, ϵ := A_U^-1(b+b^ϵ)_U of Ax ≤ b + b^ϵ is contained in H^<(A_i,(b+b^ϵ)_i) or H^>(A_i,(b+b^ϵ)_i), respectively. We later show that 𝒫 holds for all small enough positive ϵ, but before let us observe how <ref> and <ref> follow from 𝒫.
Assume x^U, ϵ is a feasible basis solution of Ax ≤ b+ b^ϵ with basis U such that x^U:=A_U^-1b_U is infeasible for Ax ≤ b, i.e. there exists some i∈ [m] with x^U ∈ H^>(A_i,b_i). If 𝒫 holds, then the latter contradicts, however, the feasibility of x^U, ϵ for Ax ≤ b+ b^ϵ. Thus 𝒫 implies <ref>.
In order to see that 𝒫 also implies <ref>, let A_E(v)x ≤ b_E(v) consist of all inequalities from Ax ≤ b that are satisfied with equality at a vertex v of P. Note that the set of feasible bases of A_E(v)x ≤ (b+b^ϵ)_E(v) is non-empty, since P^≤(A_E(v),(b+b^ϵ)_E(v)) is pointed because of rank(A_E(v))=d (as v is a vertex of P) with v ∈ P^≤(A_E(v),(b+b^ϵ)_E(v)) (due to b^ϵ ≥ 0). We now can choose U as any feasible basis of A_E(v)x ≤ (b+b^ϵ)_E(v). We clearly have v = A_U^-1b_U and A_[m]∖E(v) v < b_[m]∖E(v) by the definition of E(v). Hence the basis solution A^-1_U(b+b^ϵ)_U is feasible for Ax ≤ b+b^ϵ due to 𝒫.
The property 𝒫 holds for 0 < ϵ ≤ (3d||c||_1 2^5⟨ A,b⟩)^-1 (we clearly can assume c ≠ 0).
Let x^U=A_U^-1b_U be a basis (feasible or infeasible) solution of Ax ≤ b with a basis U ⊆ [m], and let H^=(A_i,b_i) with i ∈ [m]∖ U be a hyperplane with x^U ∉ H^=(A_i,b_i). Furthermore, let x^{U,ϵ} := A_U^-1(b+b^ϵ)_U be the corresponding basis (feasible or infeasible) solution of the perturbed system. Consider the following expression:
A_i x^{U,ϵ} - (b+b^ϵ)_i = ∑_{j=1}^d A_{i,j} x^{U,ϵ}_j - (b+b^ϵ)_i
= (∑_{j=1}^d A_{i,j} det(A_U^{j = b+b^ϵ}) - (b+b^ϵ)_i det(A_U)) / det(A_U) =: h_{U,i}(ϵ) ,
where A_U^{j=q} denotes the square d × d matrix arising from A_U by replacing the j-th column by the vector q. Note that h_{U,i}(ϵ) is a univariate polynomial in ϵ with its free coefficient α_0 = h_{U,i}(0) = A_i x^U - b_i ≠ 0 due to x^U ∉ H^=(A_i,b_i).
Let f(x) = α_n x^n + … + α_1 x + α_0 be a polynomial with real coefficients and α_0 ≠ 0. Let x̅≠ 0 be a root of f(x). Then 1/δ≤ |x̅| holds with δ = 1 + max{|α_1/α_0|, …, | α_n/α_0| }.
Hence 𝒫 holds for all 0 < ϵ < 1/δ (with δ chosen as in the lemma w.r.t. h_{U,i}) since there are no roots of h_{U,i}(ϵ) in the interval (-1/δ, +1/δ). Aiming to bound δ from above, we hence have to bound the coefficients of h_{U,i}(ϵ). Due to Cramer's rule, the integrality of A and b, and |det A_U| ≤ Δ_d, the absolute value of
each non-vanishing coefficient of h_{U,i}(ϵ) is at least 1/Δ_d. On the other hand, all coefficients are bounded in absolute value from above by ∏_{(i,j) ∈ [m]×[d]}(1+|a_ij|) ∏_{i ∈ [m]} (1+|b_i|) ≤ 2^⟨ A,b⟩, since
by the Leibniz formula each of them is 1/det(A_U) (which is at most 1 in absolute value) times a sum s_1 + … + s_q with |s_k| = |∏_{(i,j) ∈ F_k} a_ij ∏_{i ∈ F_k} b_i| for pairwise different sets F_1,…,F_q ⊆ ([m]×[d]) ∪ [m].
Therefore δ≤ 1 + Δ_d 2^⟨ A,b ⟩≤ 2·2^ 3⟨ A,b ⟩ holds, where the last inequality follows from Δ_d ≤ 2^2⟨ A,b⟩.
For 0<ϵ≤ (3d||c||_12^5⟨ A,b ⟩)^-1 we thus indeed have ϵ < 1/22^-3⟨ A,b ⟩≤1/δ (as c is integral).
Next, to show <ref> (before we establish <ref>) let us assume that Ax ≤ b + b^ϵ^' is not non-degenerate for some ϵ^'≤ (3d||c||_1 2^5⟨ A,b⟩)^-1. Hence there is a basis U ⊆ [m] of A with corresponding (feasible or infeasible) basis solutions x^U, ϵ^' and x^U of the perturbed and of the unperturbed system, respectively, such that there exists i ∈ [m]∖ U with x^U, ϵ^'∈ H^=(A_i, (b+b^ϵ^')_i), thus h_U,i(ϵ^') = 0. Due to Lemma <ref> (and the upper bound on ϵ^') this implies h_U,i(0) = 0, thus x^U ∈ H^=(A_i,b_i).
Since U is a basis of A, there exists some λ∈^d with λ^TA_U = A_i. We have b_i = A_ix^U = λ^TA_Ux^U = λ^T b_U. Hence h_U,i(ϵ) = A_ix^U, ϵ - (b+b^ϵ)_i = λ^T A_U (A_U)^-1(b+b^ϵ)_U - (b+b^ϵ)_i = λ^T b^ϵ_U - ϵ^i is not the zero polynomial because of i ∉ U.
Consequently, there exists a polynomial g_U,i(ϵ) such that h_U,i(ϵ) = ϵ^r g_U,i(ϵ) with r ≥ 1 and g_U,i(0) ≠ 0.
Applying Lemma <ref> to g_U,i(ϵ) and bounding its coefficients in exactly the same way as for h_U,i(ϵ) yields that there are no roots of g_U,i(ϵ), and therefore no roots of h_U,i(ϵ), in the interval (0, 1/22^-3⟨ A,b ⟩), thus contradicting x^U, ϵ^'∈ H^=(A_i,(b+b^ϵ^')_i).
Finally, in order to show <ref>, we first prove the following claim.
Let U ⊆ [m] be a basis of A with x^U, ϵ and x^U being the corresponding (feasible or infeasible) basis solutions of the Ax ≤ b+b^ϵ and Ax ≤ b, respectively. Then 0 <ϵ≤ (3d||c||_1 2^5⟨ A,b⟩)^-1 implies |c^Tx^U - c^Tx^U, ϵ| < 1/2Δ_d^2.
By Cramer's rule, we have
|c^T(x^U-x^{U,ϵ})| ≤ ∑_{j=1}^d |c_j| |x^U_j - x^{U,ϵ}_j| =
∑_{j=1}^d |c_j| |(det(A_U^{j=b}) - det(A_U^{j=b+b^ϵ}))/det(A_U)| , where we set f_U^j(ϵ) := (det(A_U^{j=b}) - det(A_U^{j=b+b^ϵ}))/det(A_U) .
To prove the claim it suffices to show that for all 0 < ϵ ≤ (3d||c||_1 2^5⟨ A,b⟩)^-1 we have |f_U^j(ϵ)| < 1/(2|c_j|dΔ_d^2) for each j ∈ [d] with c_j ≠ 0.
In order to establish this, let j ∈ [d] be an index with c_j ≠ 0. Due to f_U^j(0) = 0 we have f^j_U(ϵ) = α_l ϵ^l + … + α_1 ϵ with some α_1,…,α_l ∈ ℚ. For β_0 := 1/(2|c_j|dΔ_d^2) and f_U^{j±}(ϵ) := f_U^j(ϵ) ± β_0 we have f_U^{j-}(0) < 0 < f_U^{j+}(0). Due to Lemma <ref>, the polynomials f_U^{j±}(ϵ) thus have no roots in the interval (-1/δ, +1/δ), where δ = 1 + max{|α_1/β_0|, …, |α_l/β_0| }. Hence in order to establish |f_U^j(ϵ)| < β_0 it suffices to show ϵ < 1/δ.
In order to prove this we bound δ from above (thus 1/δ from below) by upper-bounding the coefficients α_k for all k ∈ [l]. From Leibniz' formula (and the integrality of A_U) once again we conclude that |α_k| ≤ ∏_{(i,j) ∈ [m]×[d]}(1+|a_ij|) ∏_{i ∈ [m]} (1+|b_i|) ≤ 2^⟨ A,b⟩. Hence 1/δ ≥ (1 + 2d|c_j|Δ_d^2 2^⟨ A, b ⟩)^-1 ≥ (3d||c||_1 2^5⟨ A,b ⟩)^-1 ≥ ϵ as required.
To complete the proof of claim <ref> of Lemma <ref>, let x^{W,ϵ} := A_W^-1(b+b^ϵ)_W be an optimal feasible basis solution for min{c^Tx | x ∈ P^ϵ} with optimal basis W. Thus, due to <ref>, x^W := A_W^-1b_W is a feasible basis solution of Ax ≤ b. Furthermore, let v be an optimal vertex of P w.r.t. minimizing c and let U be a basis of A with v = x^U = A_U^-1b_U such that x^{U,ϵ} := A_U^-1(b+b^ϵ)_U is a vertex of P^ϵ (such a basis U exists by statement <ref> of Lemma <ref>). Assume x^W is not optimal for min{c^Tx | x ∈ P}. Then we have |c^T(x^W-x^U)| ≥ 1/Δ_d^2, since c is integral and the least common denominator of all coordinates of x^W and x^U together is at most Δ_d^2 (as the least common denominator of the coordinates of x^W is at most Δ_d and so is the least common denominator of the coordinates of x^U).
But this contradicts
c^T(x^W - x^U) = c^T(x^W - x^{W,ϵ}) + c^T(x^{W,ϵ} - x^{U,ϵ}) + c^T(x^{U,ϵ} - x^U) < 1/(2Δ_d^2) + 0 + 1/(2Δ_d^2) = 1/Δ_d^2 ,
where we used Claim <ref> for bounding the first and the third term and the optimality of x^{W,ϵ} for bounding the second one.
Now we can finally return to the proof of Theorem <ref>.
Let 𝒜 be a strongly polynomial time algorithm for finding optimal basis solutions for linear programs with strong inputs and rational objective
functions. We first use 𝒜 to devise a strongly polynomial time algorithm 𝒜^⋆ for finding optimal basis solutions for arbitrary rational linear programs min{c^Tx| Ax ≤ b} if a non-degenerate vertex v of P := P^≤(A,b) is specified within the input, i.e., a vertex for which there is a unique basis U⊆ [m] with x^U=v.
In order to describe how 𝒜^⋆ works, we may assume that (after appropriate scaling) its input data A,b,c are integral. With
ϵ := (3d||c||_1 2^5⟨ A,b⟩)^-1 let P^ϵ := {x ∈^d | Ax ≤ b+b^ϵ}. Due to the uniqueness property of U and part (3) of Lemma <ref>, U is also a feasible basis of the perturbed system. We scale that perturbed system to integrality, obtaining a non-degenerate (part (5) of Lemma <ref>) system A'x≤ b' with P^ϵ := {x ∈^d | A'x ≤ b' } and a vertex v'=x^U,ϵ.
Then, as discussed in the context of Theorem <ref>, we add the inequality 𝟙^T(A^'_U)^{-T}x ≤ 2^{2⟨ A^',b^'⟩} ||(A^'_U)^{-1}||_1+1, denoted by α x ≤ β, to A'x ≤ b' and thus obtain a non-degenerate bounded system Âx ≤ b̂ with a simplex-defining subsystem of d+1 inequalities. Let us define P̂ := P^≤(Â, b̂).
Note that the problem min{c^Tx | x∈ P} is unbounded if and only if min{c^Tx | x∈ P^ϵ} is unbounded since the polyhedra P and P^ϵ have the same characteristic cone. Moreover, min{c^Tx | x∈ P^ϵ} is unbounded if and only if an optimal basis W (corresponding to any optimal vertex x^W) of min{c^Tx | x∈ P̂} contains the added inequality α x ≤ β and the unique extreme ray of the radial cone of P̂ at x^W not contained in H^=(α, 0) has positive scalar product with c (recall that the polytope P̂ is simple).
Thus, in order to solve min{c^Tx | x∈ P} in strongly polynomial time, we can apply algorithm 𝒜 to min{c^Tx | x∈ P̂} (providing the algorithm with the vertex v' of P̂), since any optimal basis of the latter problem either proves that the former problem is unbounded or is an optimal basis of the former problem due to part <ref> of Lemma <ref>.
Finally, let us assume that we are faced with an arbitrary linear program in the form min{c^Tx | Ax ≤ b, x ≥ 0} with A ∈^m × d ,b ∈^m and c ∈^d (clearly, each rational linear program can be reduced to this form, for instance by splitting the variables into x^+ and x^- and scaling the coefficients to integrality) and let P := P^≤(A,b) ∩_≥ 0^d.
Due to parts <ref> and <ref> of Lemma <ref> the perturbed system Ax ≤ b+b^ϵ, -x ≤ o^ϵ with b^ϵ_i := ϵ^i for all i ∈ [m] and o^ϵ_j := ϵ^m+j for all j ∈ [d] is non-degenerate for ϵ := (3d||c||_1 2^5(⟨ A,b ⟩ + ⟨-I_d, _d ⟩ ))^-1 with the polyhedron P^ϵ := {x ∈^d | Ax ≤ b+b^ϵ, -x ≤ o^ϵ} being non-empty (in fact: full-dimensional) if P ≠∅ and empty otherwise.
We follow a classical Phase I approach by first solving the auxiliary problem min{𝟙_m^T s | (x, s) ∈ G} with
G := { (x, s) ∈^{d+m} | Ax - s ≤ b + b^ϵ , -x ≤ o^ϵ , s ≥ 0}.
Note that (x^⋆, s^⋆)
with x^⋆_j=-ϵ^m+j for all j ∈ [d] and s^⋆_i=max{-b_i - ϵ^i - ∑_j∈[d]A_ijϵ^m+j,0} for all i ∈ [m] is a
vertex of G, which is defined by a unique basis U^⋆ as for every i ∈ [m] we have (once more employing Lemma <ref>)
-b_i - ϵ^i - ∑_j∈[d]A_ijϵ^m+j≠ 0 due to the integrality of A and b and the choice of ϵ. Hence we can apply algorithm 𝒜^⋆ in order to compute an optimal vertex (x̃,s̃) of the auxiliary problem
min{𝟙_m^T s | (x, s) ∈ G}. If 𝟙^T s̃ ≠ 0 holds, then we can conclude P^ϵ = ∅, thus P=∅. Otherwise,
x̃ is a vertex of P^ϵ that clearly is non-degenerate (in fact, P^ϵ is simple). Thus we can solve min{c^Tx | x ∈ P^ϵ} by using algorithm 𝒜^⋆ once more. If the latter problem turns out to be unbounded then so is min{c^Tx | x ∈ P} (as P and P^ϵ have the same characteristic cone). Otherwise, the optimal basis of min{c^Tx | x ∈ P^ϵ} found by 𝒜^⋆ is an optimal basis for min{c^Tx | x ∈ P} as well (due to part (4) of Lemma <ref>).
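To illustrate the symbolic perturbation b ↦ b + b^ϵ that drives Lemma <ref> and the reduction above, here is a tiny sketch (ours; the paper's explicit bound ϵ ≤ (3d||c||_1 2^{5⟨A,b⟩})^{-1} is left to the caller) that builds the perturbed right-hand side in exact rational arithmetic.

```python
# Exact construction of the perturbed right-hand side b + b^eps with
# b^eps_i = eps^i (1-based indices), so that no floating-point rounding can
# interfere with the non-degeneracy argument.
from fractions import Fraction

def perturbed_rhs(b, eps):
    eps = Fraction(eps)
    return [Fraction(bi) + eps ** (i + 1) for i, bi in enumerate(b)]

b = [1, 1, 2, 0]                                   # some integral right-hand side
print([str(v) for v in perturbed_rhs(b, Fraction(1, 1000))])
# ['1001/1000', '1000001/1000000', '2000000001/1000000000', '1/1000000000000']
```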
Suppose now we are given a system of linear inequalities defining a polyhedron P := P^≤(A,b) with A ∈^m × d , (A)=d ,b ∈^m and an objective vector c ∈^d (we can always scale rational system to integrality and achieve (A) =d by intersecting with the orthogonal complement of kernel(A)). We want ro solve min{c^Tx | Ax ≤ b}.
Due to Lemma <ref> parts <ref> and <ref>, the perturbed system Ax ≤ b+b^ϵ (recall b^ϵ_i := ϵ^i , i ∈ [m]) is non-degenerate for ϵ≤ (3d|c_j|2^5⟨ A,b ⟩)^-1, with the polyhedron P^ϵ := P^≤(A,b+b^ϵ) being full-dimensional if P ≠∅ and empty otherwise.
Assume a feasible basis solution x^U of Ax ≤ b+b^ϵ with basis U is known. This assumption will next be removed by a Phase I approach. We first scale the system Ax ≤ b+b^ϵ to integrality obtaining A^' x ≤ b^'. Then, as in the discussion preceding Theorem <ref> one can add the inequality ^T(A^'_U)^-Tx ≤ 2^2⟨ A^',b^'⟩ ||(A^'_U)^-1||_1+1, denoted by α x ≤β, to reduce to the bounded case and obtain a simplex-defining subsystem of d+1 inequalities. We denote the resulting non-degenerate system by Ax ≤b and define P := P^≤(A, b). Note that the problem min{c^Tx| x∈ P} is unbounded if and only if min{c^Tx| x∈ P^ϵ} is unbounded since the polyhedra P and P^ϵ have the same characteristic cone. Moreover, min{c^Tx| x∈ P^ϵ} is unbounded if and only if an optimal basis W (corresponding to every optimal vertex x^W) of min{c^Tx| x∈P} contains the added inequality α x ≤β and the extreme ray of the radial cone of P at x^W not contained in H^=(α, 0) has positive scalar product with c (recall that the polytope P is simple).
Thus, in order to solve min{c^Tx| x∈ P} in strongly polynomial time, one could apply algorithm 𝒜 to min{c^Tx| x∈P}, since any optimal basis of the latter problem either proves the former problem is unbounded, or delivers an optimal solution to the former due to Lemma <ref> part <ref>. Finally, in order to obtain a vertex u of P^ϵ, which is also a start vertex of P for algorithm 𝒜 to run, we make use of the Phase I idea. We first solve the auxiliary problem min{_m^T s | (x^+, x^-, s) ∈ G}, where G := { (x^+, x^-, s) ∈^2d+m| A(x^+ - x^-) - I_m s ≤ b + b^ϵ , x^+, x^-, s ≥ 0 }, in the same manner by (scaling and) perturbing and adding a bounding inequality to the corresponding system of linear inequalities to obtain a non-degenerate system defining a polytope G. Note that an optimal basis solution (x̃^+, x̃^-, s̃) of the later problem either gives a vertex x := x̃^+ - x̃^- of P^ϵ, if ^Ts̃ = 0 holds, or proves P^ϵ = ∅ otherwise (recall P=∅ iff P^ϵ = ∅). Moreover, since a vertex (x^+, x^-, s) = (_d, _d, max(-b - b^ϵ,_m) ) of G is known and is non-degenerate (note that (b + b^ϵ)_j ≠ 0 for all j ∈ [m] due to bound on ϵ), its basis corresponds to a vertex of G (due to Lemma <ref> part <ref>) thus allowing to run algorithm 𝒜 on it. Note that the reduction is strongly polynomial, since basically it only involves perturbing system of linear inequalities with values having encoding size polynomial in ⟨ A,b,c⟩, scaling them and adding inequalities with entries having again encoding size polynomial in the input length.
Suppose now we are given a system of linear inequalities defining a polyhedron P = P^≤(A,b) with A ∈^m × d,b ∈^m and an objective vector c ∈^d (we can always scale rational system to integrality).
In order to obtain a weakly non-degenerate system, we first perturb b by adding b^ϵ to it, where b^ϵ_i := ϵ^i , i ∈ [m] for some small ϵ>0.
Note that if P ≠∅, then the polyhedron P^ϵ := P^≤(A, b+ b^ϵ) is full-dimensional, moreover for every ϵ≤1/2d^-12^-2⟨ A,b ⟩ it holds that P is empty if and only if P^ϵ is empty. See <cit.> for more details.
Let us look more closely at what happens with the polyhedron P when slightly shifting outwards the halfspaces defining is. More precisely, the perturbations should be small enough to preserve the following property 𝒫: if a basis (feasible or infeasible) solution x of Ax ≤ b with basis U (i.e. u = A_U^-1b_U) is contained in H^<(A_i,b_i) or H^>(A_i,b_i) for some i ∈ [m], then the basis (feasible or infeasible) solution x̃ = A_U^-1(b+b^ϵ)_U of Ax ≤ b + b^ϵ is contained in H^<(A_i,(b+b^ϵ)_i) resp. H^>(A_i,(b+b^ϵ)_i). If 𝒫 holds, then each feasible basis solution of the perturbed system corresponds to a feasible basis solution of Ax ≤ b with the same basis.
Next we prove that each vertex v of P in a sense gives rise to at least one vertex of P^ϵ. Let A_1x ≤ b_1 contain all inequalities of Ax ≤ b, that are satisfied with equality by the vertex v. Then, due to 𝒫 all feasible bases of A_1x ≤ (b+b^ϵ)_1 are feasible bases of Ax ≤ b corresponding to vertex v. Moreover the set of feasible bases of A_1x ≤ (b+b^ϵ)_1 is non-empty, since P^≤(A_1,(b+b^ϵ)_1) is pointed because of A_1=d and v ∈ P(A_1,(b_1+b^ϵ)_1).
Now we start quantifying the the choice of ϵ, so that 𝒫 holds. Let x be a basis (feasible or infeasible) solution of Ax ≤ b with basis U ⊆ m and let H^=(A_i,b_i), with i ∈ [m]∖ U be a hyperplane corresponding to the ith row of Ax ≤ b, such that x ∉ H^=(A_i,b_i). Furthermore let x̃ be the basis (feasible or in feasible) solution of the perturbed system with basis U. Consider the following expression
A_ix̃ - (b+b^ϵ)_i = ∑_j=1^d A _i,jx̃_j - (b+b^ϵ)_i
= ∑_j=1^d A_i,j A^j = b+b^ϵ_U - (b+b^ϵ)_i A_U / A_U =: h(ϵ) ,
where A_U^j=b denotes square d × d submatrix consisting of U-rows of A with the jth column replaced by vector b. Note that h(ϵ) is a univariate polynomial in ϵ with its 0-th coeffcient α_0 ≠ 0 by the choice of x and i, since |h(0)| = |α_0| is the (scaled Euclidean) distance from x to the hyperplane H^=(A _i,b_i ). Therefore the property 𝒫 holds as long as ϵ > 0 is small enough, so that h(ϵ) has the same sign as α_0. We will need the following result on roots of univariate polynomials.
Let f(x) = α_n x^n + … + α_1 x + α_0 be a polynomial with α_n, α_0 ≠ 0. Let x̅≠ 0 be a root of f(x). Then 1/δ≤ |x̅|, where δ = 1 + max( |α_1/α_0|, …, | α_n/α_0| ).
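A quick numerical sanity check of the lemma on an arbitrarily chosen cubic (an illustration only; the helper names are ours):

```python
import numpy as np

# f(x) = 2x^3 - 3x + 1, so alpha_3 = 2, alpha_1 = -3, alpha_0 = 1 (nonzero where required).
coeffs_high_to_low = [2.0, 0.0, -3.0, 1.0]   # numpy orders coefficients from x^n down to x^0
alpha = coeffs_high_to_low[::-1]             # alpha[k] is the coefficient of x^k

delta = 1.0 + max(abs(alpha[k] / alpha[0]) for k in range(1, len(alpha)))
roots = np.roots(coeffs_high_to_low)

# Every root has absolute value at least 1/delta, as the lemma asserts.
assert all(abs(r) >= 1.0 / delta for r in roots)
print(f"1/delta = {1.0/delta:.3f}, smallest |root| = {min(abs(r) for r in roots):.3f}")
```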
Hence 𝒫 holds for all 0 < ϵ < 1/δ, since there are no roots of h(ϵ) in the interval (-1/δ, +1/δ), where δ is defined as in the lemma above. Aiming to bound δ from above, we hence have to bound all coefficients of h(ϵ). We have |α_0| ≥1/Δ_d because of the integrality of the system and since |det A_U| ≤Δ_d holds by definition. All the other coefficients are bounded in absolute value from above by ∏_(i,j) ∈ [m]×[d](1+|a_ij|) ∏_i ∈ [m] (1+|b_i|) ≤ 2^⟨ A,b⟩, since A and b are integral and, by the Leibniz formula for the determinant, each of the coefficients is a sum of products (multiplied with -1 perhaps) of at most d-1 entries (staying at pairwise distinct positions) of (A,b)_U∪{i}. Therefore δ≤ 1 + Δ_d 2^⟨ A,b ⟩≤ 2·2^ 3⟨ A,b ⟩, where the last inequality follows from Δ_k ≤ 2^2⟨ A,b⟩, and hence h(ϵ) has the same sign as its free coefficient α_0 for 0<ϵ<1/22^-3⟨ A,b ⟩.
Next, observe that the choice of ϵ∈ (0, 1/22^-3⟨ A,b ⟩) at the same time guarantees that each vertex of the hyperplane arrangement HA^=(A,b+b^ϵ) is contained in exactly d hyperplanes. For that consider again a basis (feasible or infeasible) solution x of Ax ≤ b with basis U and a hyperplane H^=(A_i,b_i), i ∈ [m] ∖ U, but this time with x ∈ H^=(A_i,b_i), and again let x̃ be the basis (feasible or infeasible) solution of the perturbed system with basis U. Then the polynomial h(ϵ) from (<ref>) now has a root at ϵ = 0. Therefore consider the polynomial g(ϵ) such that h(ϵ) = ϵ^r g(ϵ), r ≥ 1 and g(0) ≠ 0. Note that such g(ϵ) exists since h(ϵ) is not the zero polynomial. Let us quickly show the latter. Since U is a basis of Ax ≤ b, there exists λ∈ℝ^d such that λ^TA_U = A_i. Moreover, since x = (A_U)^-1b_U satisfies A_ix = b_i, it follows that b_i = A_ix = λ^TA_Ux = λ^T b_U. Hence h(ϵ) = A_ix̃ - (b+b^ϵ)_i = λ^T A_U (A_U)^-1(b+b^ϵ)_U - (b+b^ϵ)_i = λ^T b^ϵ_U - ϵ^i is not the zero polynomial, since i ∉ U.
Applying Lemma <ref> to g(ϵ) and bounding its coefficients in exactly the same way as for h(ϵ), we ensure that there are no roots of g(ϵ), and therefore no roots of h(ϵ), in the interval (0, 1/22^-3⟨ A,b ⟩), and therefore x̃∉ H^=(A_i,(b+b^ϵ)_i).
Finally, in addition to the aforementioned property 𝒫, ϵ has to be small enough so that for any optimal basis solution x̃^⋆ of min{c^Tx| x∈ P^ϵ} with basis W it holds that x^⋆ = A_W^-1b_W is an optimal basis solution of min{c^Tx| x∈ P}.
Let x resp. x^⋆ be a non-optimal resp. optimal feasible basis solution of Ax ≤ b w.r.t c. Then |c^T(x-x^⋆)| ≥1/Δ_d^2, since the least common denominators of all coordinates of x and x^⋆ are both at most Δ_d by Cramer's rule and hence the overall least common denominator is at most Δ_d^2 and A,b,c are integral.
Let now x be a feasible basis solution of Ax ≤ b and let x^1,…,x^k be feasible basis solutions of Ax ≤ b+b^ϵ, such that for any l∈[k], the basis U^l of x^l is also a basis of x in Ax ≤ b. As was shown above, k ≥ 1. Let x̃ = x^l for some l∈[k] and let U be the corresponding basis. Then by Cramer's rule
|c^T(x-x̃)| ≤∑_j=1^d|c_j||x_j - x̃_j| =
∑_j=1^d|c_j| | ( det A_U^j=b - det A_U^j = b+b^ϵ ) / det A_U | .
If we choose ϵ so that the last expression in (<ref>) is at most 0.4/Δ_d^2, x̃ cannot be an optimal vertex of P^ϵ (w.r.t c) unless x = x^⋆.
Consider the last expression in (<ref>). Choose some non-zero summand j and consider the polynomial f^j_U(ϵ) := (det A_U)^-1( det A_U^j=b - det A_U^j = b+b^ϵ) = α_nϵ^n + … + α_1ϵ. Note that f_U^j(0) = 0. Let α_0 := 0.4/d|c_j|Δ_d^2 > 0, then we introduce the polynomials f_U^j±(ϵ) := f_U^j(ϵ) ±α_0. Since f_U^j-(0) < 0 < f_U^j+(0) and since there are no roots of f_U^j±(ϵ) in the interval (-1/δ, +1/δ) due to Lemma <ref> again, it holds that -α_0 ≤ f_U^j(ϵ) ≤α_0 for 0 < ϵ≤1/δ, where δ = 1 + max( |α_1/α_0|, …, | α_n/α_0| ). In order to estimate δ from above, let us take a closer look at a coefficient α_k, k ∈ [n]. By the definition of the determinant, α_k is again a sum of products of at most d-1 entries of (A,b)_U. Therefore, since A and b are integral, |α_k| ≤∏_(i,j) ∈ [m]×[d](1+|a_ij|) ∏_i ∈ [m] (1+|b_i|) ≤ 2^⟨ A,b⟩. Hence δ≤ 1 + 2.5d|c_j|Δ_d^2 2^⟨ A, b ⟩≤ 3d|c_j|2^5⟨ A,b ⟩ and therefore, for any ϵ≤ (3d|c_j|2^5⟨ A,b ⟩)^-1, |f^j_U(ϵ)| ≤0.4/d|c_j|Δ_d^2 holds true and hence the last expression in (<ref>) does not exceed 0.4/Δ_d^2.
Hence, all the aforementioned requirements are satisfied if ϵ is chosen as ϵ^':= (3d||c||_1 2^5⟨ A,b⟩)^-1. Note that encoding size of ϵ^' is polynomial in ⟨ A, b, c⟩.
Finally, after perturbing the right side b of Ax ≤ b we are in a position to obtain a weakly non-degenerate system. Assume that a feasible basis solution u of Ax ≤ b+b^ϵ with basis U is known. This assumption will next be removed with Phase I. We first scale the system Ax ≤ b+b^ϵ to integrality, obtaining A^' x ≤ b^'. Then, as in the discussion preceding Theorem <ref>, one can add the inequality 𝟙^T(A^'_U)^-Tx ≤ 2^2⟨ A^',b^'⟩ ||(A^'_U)^-1||_1, denoted by α x ≤β, to reduce to the bounded case and obtain a simplex-defining subsystem of d+1 inequalities. We denote the resulting weakly non-degenerate system by A̅x ≤b̅ and set P̅ := P^≤(A̅, b̅). Note that the problem min{c^Tx| x∈ P} is unbounded if and only if min{c^Tx| x∈ P^ϵ} is unbounded, since the polyhedra P and P^ϵ have the same recession cone. Moreover, min{c^Tx| x∈ P^ϵ} is unbounded if and only if an optimal basis U^⋆ (corresponding to a vertex u^⋆) of min{c^Tx| x∈P̅} contains the added inequality α x ≤β and the extreme ray of the radial cone of K_P̅(u^⋆) not contained in H^=(α, 0) has negative scalar product with c (recall that the polytope P̅ is simple).
Thus, in order to solve min{c^Tx| x∈ P} in strongly polynomial time, one could apply algorithm 𝒜 to min{c^Tx| x∈P̅}, since any optimal basis of the latter problem either proves that the former problem is unbounded, or delivers an optimal basis for the former. Finally, in order to obtain a vertex u of P^ϵ, which is also a start vertex of P̅ for algorithm 𝒜 to run, we make use of the Phase I idea. We first solve the auxiliary problem min{𝟙_m^T s | (x^+, x^-, s) ∈ G}, where G := { (x^+, x^-, s) ∈ℝ^2d+m| A(x^+ - x^-) - I_m s ≤ b + b^ϵ , x^+, x^-, s ≥ 0 }, in the same manner, by (scaling and) perturbing and extending the corresponding system of linear inequalities to obtain a weakly non-degenerate system defining a polytope G̅. Note that an optimal basis solution (x̃^+, x̃^-, s̃) of the latter problem either gives a vertex x := x̃^+ - x̃^- of P^ϵ, if 𝟙^Ts̃ = 0 holds, or proves P^ϵ = ∅ otherwise (recall P=∅ iff P^ϵ = ∅). Moreover, since a vertex (x^+, x^-, s) = (0_d, 0_d, max(-b - b^ϵ, 0_m) ) of G is known and is non-degenerate (note that (b + b^ϵ)_j ≠ 0 for all j ∈ [m]), its basis gives a vertex of G̅, thus allowing to run algorithm 𝒜 on it. Note that the reduction is strongly polynomial, since it basically only involves perturbing the system of linear inequalities with values of polynomial encoding size, scaling them, and adding inequalities with entries again of polynomial encoding size.
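Purely as an illustration of the Phase I idea used above (in floating point, with an off-the-shelf solver, and without the perturbation and the bounding inequality), one can minimise 𝟙^T s over G and read off a point of P whenever the optimal value is zero. This is only a sketch of the reduction, not the strongly polynomial algorithm 𝒜; the toy data and function names are ours, and scipy is assumed to be available.

```python
import numpy as np
from scipy.optimize import linprog

def phase_one_feasible_point(A, b):
    """Sketch of the Phase I reduction: minimise sum(s) subject to
    A(x+ - x-) - s <= b, x+, x-, s >= 0.  Returns a point of P = {x : Ax <= b}
    if the optimal value is (numerically) zero, and None otherwise."""
    m, d = A.shape
    c = np.concatenate([np.zeros(2 * d), np.ones(m)])   # objective: 1^T s
    A_ub = np.hstack([A, -A, -np.eye(m)])                # A x+ - A x- - I s <= b
    res = linprog(c, A_ub=A_ub, b_ub=b, bounds=(0, None), method="highs")
    if res.status != 0 or res.fun > 1e-8:
        return None                                      # P is (numerically) empty
    x_plus, x_minus = res.x[:d], res.x[d:2 * d]
    return x_plus - x_minus

A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 0.0])                            # the standard simplex
x = phase_one_feasible_point(A, b)
print(x, np.all(A @ x <= b + 1e-7))
```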
It is well-known that any strongly polynomial time algorithm for linear programming can be used
(by appropriately perturbing the objective function) to even compute optimal basis solutions if they exist (see, e.g., <cit.> for more details). Hence
Theorem <ref> and Theorem <ref> allow us to conclude the following.
If there exists a strongly polynomial time algorithm for linear programming with rational data over all simple polytopes whose diameters are bounded linearly in the numbers of inequalities in their descriptions, then all linear programs (with rational data) can be solved in strongly polynomial time.
In fact, in order to come up with a strongly polynomial time algorithm for general linear programming problems it would be enough to devise a strongly polynomial time algorithm that optimizes linear functions over any rock extension for which a vertex is part of the input data.
§ EXTENSIONS WITH SHORT MONOTONE DIAMETERS
The results of the previous sections showed, for every d-polytope P described by a non-degenerate system of m linear inequalities, the existence of a simple (d+1)-dimensional rock extension Q with at most m+2 facets, where each vertex admits a (z-increasing) “canonical” path of length at most m-d+1 to a distinguished vertex (the top vertex) of Q. Yet, no statement has been made so far regarding the potential monotonicity of such paths w.r.t. linear objective functions. In this section we are now going to build upon a rock extension in order to allow for short monotone paths.
For an objective function c we will call an optimal vertex of a polytope P c-optimal. A path in the graph of P is said to be c-monotone if the sequence of c-values of vertices along the path is strictly increasing.
It clearly does not hold that for any linear objective function c∈ℝ^d with w being a c-optimal vertex of a non-degenerate
d-polytope P and for any other vertex v of P, both the “canonical” path from (v,0) to the top vertex t of the rock extension Q of P constructed by Algorithm <ref> and the “canonical” path (w,0)-t traversed backwards from t to (w,0) are c-monotone.
Even the path from t to (w,0) itself is not always c-monotone.
However, the latter issue can be handled by defining a new objective vector c̃:=(c, -c_z) ∈ℝ^d+1 with c_z being a big enough positive number, such that all the backwards traversals of “canonical” paths in Q, including the one for the c-optimal vertex (w,0), are c̃-monotone. Although this workaround is justified by the fact that the top vertex of the rock extension constructed by Algorithm <ref> is known (its basis is defined by the d+1 inequalities indexed by I), it does not offer a short monotone path from any vertex (v,0) to (w,0), since the “canonical” path from (v,0) to t is not c̃-monotone (the sequence of c̃-values along the path is in fact strictly decreasing). To simplify our notation we further identify a vertex u of P with the corresponding basis vertex (u,0) of Q.
In order to handle monotonicity we are going to spend one more dimension by building a crooked prism over the rock extension. Let P be a d-polytope defined by a simplex-containing non-degenerate system Ax ≤ b of m inequalities. And let Q:= {(x,z) ∈^d+1| Ax + az ≤ b , z ≥ 0} be the rock extension of P constructed by Algorithm <ref>. Consider the prism Q× [0,1]. We now tilt the facets Q ×{0} and Q×{1} towards each other such that the (euclidean) distance between two copies of a vertex of Q is reduced by some factor that is proportional to its z-coordinate. More precisely the resulting polytope is
Q̃:= {(x,z,y) ∈ℝ^d+2| Ax + az ≤ b , z ≥ 0 , y-1/3z≥ 0 , y-1/3z ≤ 1} .
See Figure <ref> for an illustration. Observe that Q̃ is simple. We will denote the two facets of Q̃ defined by the inequalities y-1/3z≥ 0 and y-1/3z ≤ 1 by Q^0 and Q^1, respectively. Note that both Q^0 and Q^1 are isomorphic to Q. Thus each vertex u of Q corresponds to two vertices of Q^0 and Q^1, denoted by u^0 and u^1, respectively.
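The passage from Q to the crooked prism Q̃ only appends the inequality z ≥ 0 and the two tilted facets to the block Ax + az ≤ b; a small sketch of this bookkeeping is given below. The helper function and the toy data are ours, and any (A, a, b) of matching shapes can be plugged in.

```python
import numpy as np

def crooked_prism(A, a, b):
    """Given the block A x + a z <= b of a rock extension Q in dimension d+1,
    assemble the inequality description of the crooked prism
    Qtilde = {(x, z, y) : A x + a z <= b, z >= 0, y - z/3 >= 0, y - z/3 <= 1}
    in dimension d+2, returned as (M, r) with Qtilde = {w : M w <= r}."""
    m, d = A.shape
    rows = [np.hstack([A, a.reshape(-1, 1), np.zeros((m, 1))]),          # A x + a z <= b
            np.hstack([np.zeros(d), [-1.0, 0.0]]).reshape(1, -1),        # -z <= 0
            np.hstack([np.zeros(d), [1.0 / 3.0, -1.0]]).reshape(1, -1),  # z/3 - y <= 0
            np.hstack([np.zeros(d), [-1.0 / 3.0, 1.0]]).reshape(1, -1)]  # y - z/3 <= 1
    M = np.vstack(rows)
    r = np.concatenate([b, [0.0, 0.0, 1.0]])
    return M, r

# Toy data only.
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
a = np.array([1.0, 1.0, 1.0, 1.0])
b = np.array([1.0, 1.0, 1.0, 1.0])
M, r = crooked_prism(A, a, b)
print(M.shape, r.shape)   # (7, 4) (7,)
```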
Let c ∈ℝ^d be a linear objective function and w be a c-optimal vertex of P, and let v be some vertex of P. Then for the “canonical” path from v to the top vertex t of Q there exists an isomorphic path from v^0 to t^0 of Q^0. Since the “canonical” v-t-path in Q is z-increasing and due to Q^0 = f^0(Q) with f^0:(x,z) ↦ (x,z,1/3z), the corresponding v^0-t^0-path in Q^0 is y-increasing. Similarly, there exists a y-increasing t^1-w^1-path in Q^1 isomorphic to the backwards traversal of the z-increasing “canonical” w-t-path in Q, since Q^1 = f^1(Q) with f^1:(x,z) ↦ (x,z,1-1/3z). Together with the edge t^0t^1 the two aforementioned paths comprise a v^0-w^1-path of length at most 2(m-d+1)+1 in Q̃ that is monotone for the objective function c̃:=(c,0, c_y) ∈ℝ^d+2 with large enough positive c_y. Note that w^1 is a c̃-optimal vertex of Q̃ and a preimage of a c-optimal vertex w of P under the affine map π_d:(x,z,y) ↦ x projecting Q̃ down to P. In fact, by exploiting Cramer's rule in a similar way as in the proofs of the previous section, it can be shown that choosing c_y as 6||c||_1 2^8⟨ A, a, b⟩ + 1 (after scaling Ax + az ≤ b to integrality) is enough to guarantee c̃-monotonicity of a v^0-t^0-t^1-u^1-path of the above mentioned type for any two vertices u,v of Q. Thus we derive the following statement, where π_k denotes the orthogonal projection on the first k coordinates.
Let A ∈^m × d and b ∈^m define a non-degenerate system of linear inequalities such that P=P^≤(A,b) is bounded. Then there exists a d+2-dimensional simple extension Q with π_d(Q) = P having at most m+4 facets such that for any linear objective function c ∈^d there is a positive number c_y such that for any vertex v of P there exists a (c,0, c_y)-monotone path from the vertex (v,0,0) to a (c,0, c_y)-optimal vertex w of Q of length at most 2(m-d+1)+1 with π_d(w) being a c-optimal vertex of P. A system of linear inequalities defining Q and the number c_y are computable in strongly polynomial time, if a vertex of P is specified within the input.
Note that since the extension Q is simple its graph is isomorphic to its bases-exchange graph. Therefore, combining the latter result with Theorem <ref> we conclude the following.
If there is a pivot rule for the simplex algorithm for which one can bound the number of iterations polynomially in the monotone diameter of the bases-exchange graph of the polytope then the general (rational) linear programming problem can be solved in strongly polynomial time.
§ ACKNOWLEDGEMENTS
We are grateful to Stefan Weltge for several helpful comments, to Robert Hildebrand for pointing us to Cauchy's lemma, and to Lisa Sauermann for discussions on the three-dimensional case. The authors would like to thank the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) for support within GRK 2297 MathCoRe.
Levi problem in context of generalised convexity

Krzysztof J. Ciosmak
We provide a generalisation of the Levi problem to the context of generalised convexity, with an elementary proof.
We show that the Cartan–Thullen theorem and its generalisation, which we prove here, can be seen as consequences of the classical theorems of functional analysis.
Furthermore, we characterise the domains of holomorphy, and their generalisations, as precisely the spaces that are complete.
§ INTRODUCTION
§.§ Generalised convexity
In <cit.> Fan introduced a notion of generalised closed and convex set with respect to a family of functions, which allowed for a generalisation of the Krein–Milman theorem.
Here we shall be concerned with sets that are convex with respect to a cone ℱ of functions on a topological space Ω. A closed set K⊂Ω shall be called ℱ-convex whenever it is equal to its closed ℱ-convex hull
clConv_ℱK={ω∈Ω| f(ω)≤sup f(K) for all f∈ℱ}.
We observe that not only does the notion of generalised convexity generalise classical convexity, when ℱ is the cone of lower semi-continuous and convex functions, but also it includes pseudoconvexity of complex analysis. We refer the reader to the book of Hörmander <cit.> for an account comprising matter on notions of convexity with respect to: subharmonic functions, <cit.>, plurisubharmonic functions, <cit.>. Note that the latter is equivalent to the notion of holomorphic convexity of complex analysis, see the book of Krantz <cit.>.
The main result of the paper is a generalisation of the Cartan–Thullen theorem, see Theorem <ref>, to the setting of generalised convexity, under mild assumptions. This includes the setting of Stein spaces.
Another achievement is a definition of a convex set with respect to a family of functions, regardless of the closedness of the set. This is achieved by means of embedding the space via the Gelfand transform, see Section <ref> and Section <ref>.
Let us mention that the notion of convexity has been developed and further generalised over the years, see <cit.>, <cit.> and <cit.>. These developments mainly were concerned with various dualities, notions of subdifferentials and Fenchel transforms.
§.§ The Levi problem and the Cartan–Thullen theorem
In the setting of complex analysis the classical result of Cartan and Thullen, see e.g. <cit.>, <cit.>, <cit.>, is concerned with holomorphically convex subsets of ℂ^n.
Let Ω⊂ℂ^n be an open set. For a set B⊂Ω its closed holomorphically convex hull is equal to
{z∈Ω| |h(z)|≤sup|h|(B) for any holomorphic h on Ω}.
Then Ω is said to be holomorphically convex whenever any of its compact subsets has relatively compact holomorphically convex hull in Ω.
The Cartan–Thullen theorems reads as follows.
Suppose that Ω⊂ℂ^n is an open set. Then the following conditions are equivalent:
* for any compact set K⊂Ω, its closed holomorphically convex hull is a compact subset of Ω,
* Ω is a domain of holomorphy, i.e., there are no open sets Ω_1,Ω_2⊂ℂ^n such that
* ∅≠Ω_1⊂Ω_2∩Ω,
* Ω_2 is connected and Ω∖Ω_2≠∅,
* for each holomorphic a on Ω there exists a holomorphic function â on Ω_2 such that â=a on Ω_1,
* Ω is a domain of existence, i.e., there exists holomorphic a on Ω such that there are no open sets Ω_1,Ω_2 satisfying <ref>, <ref> and such that a=â on Ω_1 for some holomorphic function â on Ω_2,
* there exists a continuous, proper plurisubharmonic function pΩ→ℝ.
Above, we used the following definition.
A function pΩ→ℝ on a topological space Ω is called proper, whenever preimages via p of compactae are compact.
The equivalence of <ref>, <ref> and <ref> is also known as the Levi problem. This problem, posed in <cit.>, was concerned with geometric characterisation of domains of existence in ℂ^n. It was solved independently by Oka <cit.>, Bremermann and Norguet.
In <cit.> it is claimed that the equivalence is very difficult to prove. Let us mention the existence of another approach developed by Hörmander <cit.>, <cit.>, which employs methods of partial differential equations.
We shall provide a simple and elementary proof of this equivalence in the context of generalised convexity.
We refer the reader also to a survey on the Levi problem <cit.>, where the Cartan–Thullen theorem is presented <cit.> and to the book <cit.> for a thorough discussion of the problem in the setting of complex analysis. We refer also to <cit.> for a more recent paper on the topic and to <cit.> and <cit.>.
§.§ Main results
We shall show that Theorem <ref> can be adapted to the setting of generalised convexity. Our proofs rely purely on functional analytic techniques, including the Banach–Alaoglu theorem and the Banach–Steinhaus uniform boundedness principle.
Some of the implications admit elementary proofs, but we also provide proofs of these implications that rely on the Krein–Šmulian theorem and on the Mazur theorem.
Definition <ref>, <ref>, and Definition <ref> below are analogues of domain of holomorphy and domain of existence, respectively. These definitions we find most useful in our developments.
Let Ω be a topological space and let 𝒜 be a linear space of continuous functions on Ω. We shall say that:
* a net (ω_α)_α∈ A in Ω is a Cauchy net with respect to 𝒜 whenever for each a∈𝒜, (a(ω_α))_α∈ A is a Cauchy net in ℝ,
* Ω is complete with respect to 𝒜 whenever every Cauchy net in Ω is convergent in τ(𝒜).
Let Ω be a set and let 𝒜 be a set of functions on Ω. The coarsest topology on Ω with respect to which all functions in 𝒜 are continuous we shall call the topology generated by 𝒜 and denote by τ(𝒜).
The above defined topology τ(𝒜) is generated by the basis sets of the form
{ω∈Ω| |a_i(ω)-a_i(ω_0)|≤ϵ_i for i=1,2,…,k},
where ϵ_i>0, ω_0∈Ω and a_i∈𝒜 for i=1,2,…,k.
Let Ω be a set. Let 𝒜 be a linear space of continuous functions on Ω. Suppose that Ω, equipped with the weak topology τ(𝒜) induced by 𝒜, is metrisable. Let us denote by Ω̅ the completion of Ω.
We shall say that Ω is an 𝒜-space whenever there exists a∈𝒜 that does not extend to a continuous function on Ω∪{ω} for any element ω∈Ω̅∖Ω.
If Ω is an 𝒜-space, then it is also complete with respect to 𝒜.
Suppose that Ω⊂ℂ^n is an open set and that 𝒜 is the space of real-parts of holomorphic functions on Ω. If Ω is an 𝒜-space, then it is a domain of existence.
We shall say that a family of functions is symmetric if together with any function it contains its negative.
Let ℱ be a cone of functions on a set Ω. We say that P is an ℱ-polygon whenever there exists a finite sequence (f_i)_i=1^k of functions in ℱ such that
P={ω∈Ω| f_i(ω)≤ 1 for i=1,2,…,k}.
We shall say that an ℱ-polygon is symmetric whenever the corresponding family can be taken to be symmetric.
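In computations an ℱ-polygon is handled directly through its defining family, and the symmetric case is obtained by adjoining the negatives of the chosen functions (which remain available when the family comes from a linear space); a minimal sketch, with toy affine functions of our choosing:

```python
def in_polygon(fs, omega):
    """omega lies in the F-polygon determined by the finite family fs."""
    return all(f(omega) <= 1.0 for f in fs)

def symmetric_polygon(fs):
    """Defining family of the symmetric F-polygon generated by fs."""
    return list(fs) + [lambda omega, f=f: -f(omega) for f in fs]

# Toy example with two affine functions on the plane.
fs = [lambda p: p[0] + p[1], lambda p: p[0] - p[1]]
print(in_polygon(symmetric_polygon(fs), (0.3, 0.2)))   # True
print(in_polygon(symmetric_polygon(fs), (1.5, 0.0)))   # False
```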
Let Ω be a σ-compact and locally compact Hausdorff topological space. Let 𝒜 be a linear space of continuous functions on Ω that contains constants and separates points of Ω.
Then the following conditions are equivalent:
* for any compact set K⊂Ω and for any C≥ 1 the set
{ω∈Ω| a(ω)≤ Csupa(K) for all a∈𝒜}
is a compact subset of Ω,
* there exists a non-negative, continuous and proper function p on Ω such that
p=sup{a_α|α∈ A}
for some symmetric family (a_α)_α∈ A
of functions in 𝒜,
* there exists a non-negative, proper function p on Ω, bounded on compactae, such that
p=sup{a_α|α∈ A}
for some symmetric family (a_α)_α∈ A of functions in 𝒜,
*
there exists a family (P_i)_i=1^∞ of compact, symmetric 𝒜-polygons such that
⋃_i=1^∞P_i=Ω, P_i⊂intP_i+1 for i=1,2,….
Moreover, the suprema in <ref> and in <ref> may be locally taken over finite families of functions in 𝒜.
Suppose moreover that Ω is separable and 𝒜 is closed with respect to the compact-open topology. Then the above conditions are equivalent to Ω being complete with respect to 𝒜.
If moreover 𝒜 consists of real-parts of a complex algebra, then the above conditions are equivalent to Ω being an 𝒜-space.
If Ω is locally compact, Hausdorff topological space and 𝒜 consists of continuous functions that separate points of Ω, then the topology of Ω is the weak topology induced by 𝒜, see Lemma <ref>, <ref>.
If Ω is σ-compact, locally compact, Hausdorff topological space and 𝒜 is a closed, in the compact-open topology, algebra of continuous functions that separate points of Ω, then 𝒜 is the space of all continuous functions on Ω, as follows by the Stone–Weierstrass theorem.
Let us note that if <ref> is satisfied for 𝒜 then it is also satisfied for the completion of 𝒜, with respect to the metric of locally uniform convergence.
We can think of function p of <ref> or of <ref> as an analogue of a norm on the space Ω.
The assumption that 𝒜 separates points of Ω is technical. If 𝒜 does not satisfy this assumption, then we may pass to the quotient space and infer the same equivalences as in Theorem <ref>.
Let us note that the condition <ref> of Theorem <ref> differs from the condition <ref> of Theorem <ref>, as it involves arbitrary constants. We shall show in Proposition <ref> that, when 𝒜 consists of real-parts of a complex algebra ℬ, <ref> is equivalent to the condition that for any compact K⊂Ω the set
{ω∈Ω| |b(ω)|≤sup|b|(K) for all b∈ℬ}
is a compact subset of Ω.
§.§.§ Families satisfying maximum principle
As we see, Theorem <ref> does not exactly correspond to Theorem <ref>, as <ref> of Theorem <ref> requires not only compactness of the ℱ-convex hull of sets, where ℱ is the cone generated by 𝒜, but compactness of a larger family of sets. However, under the additional assumption that the family of considered functions satisfies the maximum principle, we may show that the existence of an appropriate exhaustion function p, cf. Definition <ref>, is equivalent to ℱ-convexity of the space Ω, see Definition <ref>.
We shall say that a convex cone ℱ of functions on a topological space Ω satisfies the maximum principle whenever for any:
* function f∈ℱ that is non-constant,
* compact set K⊂Ω,
* open set U⊂Ω such that K⊂ U,
there is
sup f(K)<sup f(U).
If Ω is compact, then one may take K=Ω. Thus, if ℱ satisfies the maximum principle, then any function f∈ℱ has to be constant.
Let Ω be a σ-compact, locally compact, Hausdorff connected topological space. Let ℱ be a convex cone of continuous functions on Ω that contains constants and satisfies the maximum principle.
Then the following conditions are equivalent:
* for any compact set K⊂Ω the set
{ω∈Ω| f(ω)≤sup f(K) for all f∈ℱ}
is a compact subset of Ω,
* there exists a non-negative, continuous and proper function p on Ω
such that
p=sup{f_α|α∈ A}
for some family (f_α)_α∈ A
of functions in ℱ,
*
there exists a non-negative, proper function p on Ω, bounded on compactae, such that
p=sup{f_α|α∈ A}
for some family (f_α)_α∈ A of functions in ℱ,
*
there exists a family (P_i)_i=1^∞ of compact ℱ-polygons such that
⋃_i=1^∞P_i=Ω, P_i⊂intP_i+1 for i=1,2,….
Moreover, the suprema in <ref> and in <ref> may be locally taken over finite families of functions in ℱ.
One may take for ℱ a linear space of continuous functions on Ω. This allows for a comparison of Theorem <ref> and Theorem <ref>.
As we have already noticed in Remark <ref>, we shall show that when ℱ is an algebra, or consists of real-parts of a complex algebra, then the conditions <ref> of Theorem <ref>, where 𝒜=ℱ, and <ref> of Theorem <ref> are equivalent, see Proposition <ref>. For an equivalent characterisation of <ref> of Theorem <ref>, see Proposition <ref>. However, note that in general <ref> of Theorem <ref> and <ref> of Theorem <ref> are not equivalent as observed in Remark <ref>.
We shall say that the cone ℱ of functions on a topological space Ω is local whenever for any family (U_i)_i∈ I of distinct connected components of Ω and any family (f_i)_i∈ I⊂ℱ, there exists f∈ℱ such that
f=f_i on U_i.
Suppose that Ω is a σ-compact, locally compact, Hausdorff topological space. Let ℱ be a local convex cone of continuous functions that satisfies the maximum principle. Then the following conditions are equivalent:
* for any compact set K⊂Ω the set
{ω∈Ω| f(ω)≤sup f(K) for all f∈ℱ}
is a compact subset of Ω,
* there exists a non-negative, continuous and proper function p on Ω
such that
p=sup{f_α|α∈ A}
for some family (f_α)_α∈ A
of functions in ℱ,
*
there exists a non-negative, proper function p on Ω, bounded on compactae, such that
p=sup{f_α|α∈ A}
for some family (f_α)_α∈ A of functions in ℱ.
Moreover, the suprema in <ref> and in <ref> may be locally taken over finite families of functions in ℱ.
§.§ Examples
We shall now exhibit several examples how Theorem <ref> and Theorem <ref> applies to various settings.
§.§.§ Plurisubharmonic functions and holomorphic convexity
The example that inspired this research is concerned with plurisubharmonic functions and holomorphic convexity.
We shall see how Theorem <ref> follows from Theorem <ref>.
Let Ω⊂ℂ^n be an open set.
Let ℬ be the algebra of holomorphic functions on Ω and let 𝒜 is the set of its real-parts, i.e., the space of pluriharmonic functions, see e.g. <cit.>.
Suppose that Theorem <ref> holds true. By Remark <ref>, <ref> of Theorem <ref> is equivalent to <ref> of Theorem <ref>. Note that Ω is separable, and 𝒜 is closed in the compact-open topology, cf. Definition <ref> and Remark <ref>. Therefore Theorem <ref> shows that <ref> of Theorem <ref> is equivalent to Ω being complete with respect to 𝒜, as well as to Ω being an 𝒜-space. It suffices to observe that the condition that Ω is an 𝒜-space implies readily that it is a domain of existence, which in turn trivially implies that it is a domain of holomorphy. The latter is known to imply <ref> of Theorem <ref> by a simple argument involving power series expansion, see <cit.>.
The function p that we obtain by Theorem <ref> in <ref> is plurisubharmonic, as it is locally a maximum of a finite family of pluriharmonic functions. Therefore equivalence of <ref> and <ref> of Theorem <ref> follows from equivalence of <ref> and <ref> of Theorem <ref>.
We see that the two last assertions of Theorem <ref> show that Ω is complete with respect to 𝒜, see Definition <ref>, and that it is an 𝒜-space, provided that Ω is holomorphically convex. The completeness property of Ω implies immediately that Ω is a domain of holomorphy, which is the difficult part of the equivalence of <ref> and <ref> of Theorem <ref>, according to <cit.>. In our proof, it follows from the Banach–Alaoglu theorem.
The holomorphically convex sets are exactly sets convex with respect to the cone of plurisubharmonic functions. The way to show this is as follows.
If h is holomorphic, then the equality exp(ℜ𝔢h)=|exp(h)|, and the fact that |h| is plurisubharmonic, imply that for any set K⊂Ω, its closed holomorphically convex hull and its closed plurisubharmonically convex hull coincide, see Proposition <ref>.
When ℱ is the cone of plurisubharmonic functions, then Theorem <ref> and Proposition <ref> allow us to recover also
<cit.>.
The usual definition of plurisubharmonicity, see e.g. <cit.>, <cit.>, requires a plurisubharmonic function to be upper semi-continuous. Note however that with this definition, the space of plurisubharmonic functions is not a complete lattice. Note also that if a compact set K is ℱ-convex with respect to some family ℱ of functions, i.e.,
K={ω∈Ω f(ω)≤sup f(K) for all f∈ℱ},
then it is also convex with respect to the complete lattice cone generated by that family.
Since the plurisubharmonic functions play a vital rôle in the theory of holomorphic convexity it seems that it is natural to define the space of plurisubharmonic functions as the complete lattice cone generated by real-parts of holomorphic functions. Since we shall not need to deal with general plurisubharmonic functions in our developments, we shall not pursue this topic further.
§.§.§ Subharmonic functions
When ℱ is the cone of subharmonic functions, then Theorem <ref> allows us to prove an analogue of <cit.>, cf. <cit.>.
We shall not dwelve into details here, as they are completely analogous to the details in the previous section.
§.§.§ Convex functions
For related characterisation for convex functions, see a theorem <cit.> and its corollary <cit.>, which can be inferred from Theorem <ref>.
§.§ Gelfand transform and relation to convexity
The notion of convexity that we study here is naturally linked to the classical convexity via the Gelfand transform, see Section <ref>.
The Gelfand transform in our setting can be viewed as a linearisation, as it replaces a possibly non-linear setting with a linear setting. The linearisation technique has been successful in a great many contexts, e.g., in the context of Lipschitz-free spaces, see e.g. <cit.>.
Using the Gelfand transform, we can transfer linear notions to the setting of the space Ω, in a similar way to transference of the notion of convex hull, see Section <ref>, Definition <ref>. This allows to reduce studying Ω and 𝒜 to studying subsets of a linear space and linear, continuous functionals on that space.
§.§ Relation to theorems of functional analysis
Let us note the following analogies of Theorem <ref> and Theorem <ref> with classical results of functional analysis, see e.g. <cit.>. These results include the Krein–Šmulian theorem, <cit.>, the Banach–Alaoglu theorem and the Mazur theorem. Indeed, that completeness of Ω implies Theorem <ref>, <ref>, follows from the Banach–Alaoglu theorem, cf. Lemma <ref> and the proof of Theorem <ref>; that Theorem <ref>, <ref>, implies completeness of Ω is analogous to the Krein–Šmulian theorem, see <ref>. Moreover, that completeness of Ω implies <ref> of Theorem <ref> follows from the Mazur theorem.
§.§ Other notions of convexity
We refer the reader to <cit.> for another development concerning convexity with respect to algebras of functions rather than with respect to linear spaces. The theory in <cit.> does not however include analogues of our results and is concerned with the notion of naturality of the pair of the space and the function algebra on the space, which is close to our considerations of closedness of the embedding of Ω into 𝒜^*, cf. <cit.> and Theorem <ref>. Moreover, the generalisation of plurisubharmonicity in <cit.> is different than ours.
§.§ Applications
Let us mention that generalised convexity has been employed in the study of the monopolist's problem by Figalli, Kim and McCann <cit.> and by McCann and Zhang in <cit.>.
An application of the results obtained in this paper is concerned with a generalisation of martingale transport of measures, which we investigate in <cit.>. Theorem <ref> is used there to provide a characterisation of spaces that admit an exhaustion function. In the setting of such spaces in <cit.> we extend the results of De March and Touzi <cit.>, Obłój and Siorpaes <cit.>, Ghoussoub, Kim, Lim <cit.> to the setting of martingales with respect to a linear space of functions. Let us mention also <cit.>, where we study a related concept of ordering of measures with respect to a cone of functions.
The aforementioned extension allows for a new type of localisation-type results. Another type of localisation has been studied in <cit.>. This approach is related to optimal transport of vector measures <cit.>, <cit.> and extensions of Lipschitz maps <cit.>.
§ PRELIMINARIES
Let us recall several definitions.
We shall say that a topological space Ω is exhaustible by compactae if there exists a sequence (K_i)_i=1^∞ of compact subsets of Ω such that
* Ω=⋃_i=1^∞K_i,
* K_i⊂intK_i+1 for i=1,2,….
We shall say that Ω is σ-compact whenever there exist compact subsets (K_i)_i=1^∞ whose union is Ω.
It is readily seen that the condition that Ω is exhaustible by compactae is equivalent to Ω being σ-compact and locally compact.
Let Ω, Z be topological spaces. A map p:Ω→ Z is said to be proper whenever the preimage p^-1(C) of any compact set C⊂ Z is a compact subset of Ω.
Let 𝒦 denote the family of all compact subsets of Ω. For any compact set K∈𝒦 and a∈𝒜 we put
‖a‖_𝒞(K)=sup{|a(ω)| | ω∈ K}.
The family of semi-norms (‖·‖_𝒞(K))_K∈𝒦 on 𝒜 defines a locally convex topology on 𝒜, which we shall denote below by τ_𝒦.
Let us remark that the topology τ_𝒦 coincides with the compact-open topology on 𝒜.
If not stated otherwise, we shall denote by 𝒜^* the space of all linear functionals on 𝒜, continuous with respect to the topology generated by the above family of semi-norms.
The space 𝒜^* we shall consider equipped with the weak* topology generated by 𝒜.
We refer to <cit.> and to <cit.> for a background on Cauchy nets and completeness in relation to linear topological spaces.
If Ω is σ-compact, then 𝒜 in τ_𝒦 is metrisable, as the topology is induced by a countable separating family of semi-norms <cit.>.
If moreover Ω is separable, then for each compact set K⊂Ω the space 𝒞(K) is separable, as follows by the Stone–Weierstrass theorem, so that 𝒜 is separable. Therefore separability and σ-compactness of Ω imply that 𝒜^* in the weak* topology is metrisable.
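For a σ-compact Ω the topology τ_𝒦 is induced by the countable family of semi-norms ‖·‖_𝒞(K_i) attached to an exhaustion (K_i)_i=1^∞, and hence by the standard metric of the form d(a,b)=∑_i 2^-imin(1,‖a-b‖_𝒞(K_i)). The sketch below approximates the compacta by finite samples, so it is a numerical illustration only; the exhaustion of (0,1) and the two functions are our toy choices.

```python
import numpy as np

def sup_seminorm(a, K_sample):
    """||a||_{C(K)} approximated on a finite sample of the compact set K."""
    return max(abs(a(omega)) for omega in K_sample)

def frechet_metric(a, b, K_samples):
    """Standard metric inducing the topology of the countable family of
    seminorms ||.||_{C(K_i)}: d(a, b) = sum_i 2^{-i} min(1, ||a - b||_{C(K_i)})."""
    return sum(2.0 ** (-i) * min(1.0, sup_seminorm(lambda w: a(w) - b(w), K))
               for i, K in enumerate(K_samples, start=1))

# Exhaustion of Omega = (0, 1) by K_i = [1/(i+2), 1 - 1/(i+2)], sampled finitely.
K_samples = [np.linspace(1.0 / (i + 2), 1.0 - 1.0 / (i + 2), 200) for i in range(1, 6)]
a = lambda x: 1.0 / x
b = lambda x: 1.0 / x + 0.01 * x
print(frechet_metric(a, b, K_samples))   # small: the functions are uniformly close on each K_i
```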
Suppose that Ω is σ-compact and separable and that 𝒜 is closed under the topology τ_𝒦. Then 𝒜^* is complete with respect to 𝒜.
Since Ω is σ-compact and separable, then τ_𝒦-topology on 𝒜 is metrisable and 𝒜^* is metrisable in the weak* topology, cf. Remark <ref>
Suppose that (h_n)_n=1^∞ is a Cauchy sequence in 𝒜^*. It follows that for each a∈𝒜, (h_n(a))_n=1^∞ is convergent and we may define a linear functional h by the formula
h(a)=lim_n→∞h_n(a) for a∈𝒜.
We need to prove that h∈𝒜^*. Since 𝒜 is complete and metrisable, it follows by the Banach–Steinhaus uniform boundedness principle that there exist a compact set K∈𝒦 and C>0 such that
h_n(a)≤ Ca_𝒞(K) for all n=1,2,… and all a∈𝒜.
Thus h∈𝒜^*.
Let Ω be a set. Let 𝒜 be a set of functions on Ω with values in (-∞,∞]. A set 𝒦 of functions on Ω is said to be:
* a convex cone whenever α f+β g ∈𝒦 for any f,g∈𝒦 and any numbers α,β≥ 0;
* stable under suprema provided that the supremum sup{f_i| i ∈ I} of any family (f_i)_i∈ I⊂𝒦 belongs to 𝒦;
* a complete lattice cone whenever it is a convex cone stable under suprema;
* the complete lattice cone generated by 𝒜 whenever it is the smallest complete lattice cone of functions on Ω containing 𝒜;
* separating points of Ω whenever for any two distinct ω_1,ω_2∈Ω, there exists f∈𝒦 such that f(ω_1)≠ f(ω_2).
Let us stress that we allow the functions to take value +∞. Note however that this is not allowed for linear subspaces.
We refer to <cit.> for a study of the above notions in contexts related to this work.
Suppose that 𝒜 is a linear space of functions on Ω. Let ℱ denote the complete lattice cone generated by 𝒜. Then for any f∈ℱ there exists a family (a_i)_i∈ I⊂𝒜 such that
f(ω)=sup{a_i(ω)| i∈ I} for all ω∈Ω.
Let 𝒢 be the set of functions of the form (<ref>). We shall show that 𝒢 is a convex cone. Let α,β≥ 0, f,g,∈𝒢. Let (a_i)_i∈ I, (a_j)_j∈ J⊂𝒜 be families of functions in 𝒜 corresponding to f and g respectively.
Then
α f+β g=sup{α a_i+β a_j| j∈ J, i∈ I}∈𝒢.
Clearly, ℱ⊃𝒢. Since 𝒢 is a complete lattice cone, ℱ=𝒢.
If Y is the space all linear, continuous functionals on a linear topological space X, then the complete lattice cone generated by Y is the set of all convex, lower semi-continuous functions on X. We refer the reader to <cit.> for a proof of this fact.
Let ℱ be the complete lattice cone of functions on Ω that contains constants.
Let p∈ℱ. Let ξℝ→ℝ convex and non-decreasing. Then the composition ξ(p) belongs to ℱ.
As ξ is convex and non-decreasing, there exists a family of non-decreasing affine functions (λ_j)_j∈ J on ℝ such that for all t≥ 0 there is
ξ(t)=sup{λ_j(t)| j∈ J}.
Then
ξ(p)(ω)=sup{λ_j(p)(ω)| j∈ J}.
For any j∈ J, there exist α_j≥ 0 and β_j∈ℝ such that λ_j(t)=α_j t+β_j for all t∈ℝ, so that α_j p+β_j∈ℱ for each j∈ J.
Thus ξ(p)∈ℱ.
§ GELFAND TRANSFORM
Given a topological space Ω and a linear space 𝒜 of continuous functions on Ω, one may consider the space of all linear functionals on 𝒜 that are evaluations at points of Ω.
Let ΦΩ→𝒜^* be given by the formula Φ(ω)(a)=a(ω) for ω∈Ω and a∈𝒜. We shall call Φ the Gelfand transform, cf. <cit.>.
Let us recall, see <cit.>, that if ℬ is Banach algebra, then one considers the space Δ of multiplicative linear functionals on ℬ. It can be shown that the kernels of these functionals are precisely the maximal ideals of ℬ. To any element b∈ℬ we can assign a function Φ(b) on Δ defined by the formula
Φ(b)(ϕ)=ϕ(b) for ϕ∈Δ.
With an appropriate topology this function is continuous, so that we have defined a map Φℬ→𝒞(Δ). In this way we can treat ℬ as a linear space of functions on Δ and apply the developed theory.
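In a finite toy setting the Gelfand transform of the definition above is simply evaluation at a point, which already exhibits the linearisation discussed in the introduction; a minimal sketch, where the functions and the point are our toy choices:

```python
# The Gelfand transform in a toy setting: Phi(omega) is the evaluation
# functional a -> a(omega), an element of the (algebraic) dual of A.
def Phi(omega):
    return lambda a: a(omega)

A = [lambda p: p[0], lambda p: p[1], lambda p: p[0] ** 2 - p[1] ** 2]
omega = (0.5, -1.0)
functional = Phi(omega)
print([functional(a) for a in A])          # [0.5, -1.0, -0.75]

# Linearity of Phi(omega) on linear combinations of functions in A:
a_sum = lambda p: A[0](p) + 2.0 * A[1](p)
assert abs(functional(a_sum) - (functional(A[0]) + 2.0 * functional(A[1]))) < 1e-12
```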
Let Ω be a σ-compact and locally compact Hausdorff space and let 𝒜 be a linear space of continuous functions on Ω that separates points of Ω. Let ℱ be the complete lattice cone generated by 𝒜. We equip 𝒜 with the topology τ_𝒦 and 𝒜^* with the weak* topology induced by 𝒜.
Then:
* Φ is a homeomorphism of Ω and Φ(Ω),
* for any a∈𝒜, a(Φ^-1) extends to a weakly* continuous linear map on 𝒜^*,
* for any weakly* continuous linear map h on 𝒜^*, the function
ω↦ h(Φ(ω))
belongs to 𝒜,
* for any f∈ℱ, f(Φ^-1) extends to a map in the complete lattice cone G of functions on 𝒜^* generated by 𝒜,
* for any map g∈ G, the function
ω↦ g(Φ(ω))
belongs to ℱ.
We shall first show that Φ is a homeomorphism onto Φ(Ω). As 𝒜 separates points of Ω and consists of continuous functions, Φ is injective and continuous. Therefore, for any compact K⊂Ω, the restriction of Φ to K is a homeomorphism onto the corresponding image.
As Ω is exhaustible by compactae, cf. Remark <ref>, there exist compact sets (K_i)_i=1^∞ such that
⋃_i=1^∞K_i=Ω, K_i⊂intK_i+1 for i=1,2,….
Therefore for any open set U⊂Ω the set
Φ(U)=⋃_i=1^∞Φ(U∩intK_i)
is open. This is to say, Φ is a homeomorphism.
Let a∈𝒜. Extension in <ref> is given by the formula
𝒜^*∋ a^*↦ a^*(a)∈ℝ.
We shall now define an extension for f∈ℱ. By Lemma <ref> for f∈ℱ there exists a family (a_i)_i∈ I of elements of 𝒜 such that
f=sup{a_i| i∈ I}.
We define a map
𝒜^*∋ a^*↦sup{a^*(a_i)| i∈ I}∈ (-∞,∞].
By (<ref>), this is an extension. As functionals a^*↦ a^*(a) are weakly* continuous, this extension belongs to the complete lattice cone G generated by 𝒜. Thus <ref> is proven.
Note that any weakly* continuous linear functional h on 𝒜^* is given by some element a_0∈𝒜. Therefore for ω∈Ω,
h(Φ(ω))=a_0(ω).
This proves <ref>. Point <ref> is proven similarly.
Suppose that Ω is separable and σ-compact, locally compact Hausdorff topological space Suppose also that 𝒜 is a linear space of continuous functions on Ω that separates points of Ω and is closed under τ_𝒦. Then Ω is complete with respect to 𝒜 if and only if Φ(Ω) is closed in the weak* topology.
Note that Lemma <ref> tells us that, under current assumptions, 𝒜^* is complete with respect to the weak* topology. Therefore, thanks to Lemma <ref>, <ref>, closedness of Φ(Ω) is equivalent to completeness of Ω with respect to 𝒜.
§ ℱ-CONVEX SETS
We shall continue to study a linear space 𝒜 of continuous functions on a topological space Ω. Typically, ℱ shall denote the complete lattice cone generated by 𝒜. As announced in the introduction we shall study the following hulls of sets, which are analogues of convex hulls.
Let ℱ be a convex cone of functions on Ω. Let S⊂Ω. We define closed ℱ-convex hull of S by the formula
clConv_ℱS={ω∈Ω| f(ω)≤sup f(S) for all f∈ℱ}.
For a discussion of this notion in the setting of complex analysis we refer the reader to <cit.>. For a discussion in other settings see <cit.>.
Let S⊂Ω. Then
clConv_ℱS={ω∈Ω| a(ω)≤sup a(S) for all a∈𝒜}.
Any element of ℱ is a supremum of elements of 𝒜, by Lemma <ref>. The claim follows.
Definition <ref> generalises the closed convex hull of a set, when 𝒜 is taken to be the space of linear functions on a vector space, as follows by the Hahn–Banach theorem. When we take 𝒜 to be the space of real-parts of holomorphic functions, then, according to Proposition <ref>, it yields the holomorphically convex hull of a set.
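This remark can be checked in a discrete toy setting: taking for 𝒜 the three outer normals of a triangle, the hull of the definition above, computed over a finite grid, recovers exactly the usual convex hull. The sketch below is an illustration only; all data in it are our toy choices.

```python
import numpy as np

def closed_hull(A_funcs, S, candidates):
    """Points omega among `candidates` with a(omega) <= sup a(S) for all a in A_funcs,
    i.e. the hull condition with the supremum taken over a finite family."""
    sups = [max(a(s) for s in S) for a in A_funcs]
    return [w for w in candidates
            if all(a(w) <= sup_a + 1e-12 for a, sup_a in zip(A_funcs, sups))]

# S = vertices of the triangle conv{(0,0), (1,0), (0,1)}; A = its three outer normals.
S = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
A_funcs = [lambda p: -p[0], lambda p: -p[1], lambda p: p[0] + p[1]]

grid = [(x, y) for x in np.linspace(-0.5, 1.5, 41) for y in np.linspace(-0.5, 1.5, 41)]
hull = closed_hull(A_funcs, S, grid)

# Every recovered point satisfies x >= 0, y >= 0, x + y <= 1, i.e. lies in conv(S).
assert all(x >= -1e-9 and y >= -1e-9 and x + y <= 1 + 1e-9 for (x, y) in hull)
print(len(hull), "grid points in the hull")
```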
Let S⊂Ω. Then
clConv_ℱS=Φ^-1(clConv_GΦ(S)),
where G is the complete lattice cone of convex, lower semi-continuous functions on 𝒜^* generated by evaluations on elements of 𝒜.
By Remark <ref> and Lemma <ref>
clConv_GΦ(S)={h∈𝒜^*| h(a)≤supΦ(S)(a) for all a∈𝒜}.
Since for any a∈𝒜, Φ(S)(a)=a(S), we see that
Φ^-1(clConv_GΦ(S))={ω∈Ω| a(ω)≤sup a(S) for all a∈𝒜},
which is exactly what we were to prove.
The above proposition tells us that, up to a change of coordinates, the closed convex hull with respect to the complete lattice cone ℱ is the usual closed convex hull with respect to convex, lower semi-continuous functions.
Lemma <ref>, <ref>, and Proposition <ref> shows that the following definition of ℱ-convex hull is natural.
Let S⊂Ω. We define the ℱ-convex hull of S by the formula
Conv_ℱS= Φ^-1(Conv_GΦ(S)).
We shall say that S⊂Ω is ℱ-convex whenever
S=Conv_ℱS.
Here G is the complete lattice cone of convex, lower semi-continuous functions on 𝒜^* generated by evaluations on elements of 𝒜.
Note that above Conv_GΦ(S) denotes the usual convex hull of a set Φ(S)⊂𝒜^*. Moreover, if S⊂Ω is closed, then it is ℱ-convex if and only if
S=clConv_ℱS.
This is to say, Definition <ref> and Definition <ref> are consistent.
Suppose that f∈ℱ. Then for any t∈ℝ the sets f^-1((-∞,t)) and f^-1((-∞,t]) are ℱ-convex.
The claim follows by Definition <ref>, Lemma <ref>, <ref>, and an analogous claim for convex, lower semi-continuous functions on 𝒜^*.
Let ℱ be a cone of continuous functions on Ω that contains constants. Let K⊂Ω be compact and let C≥ 1. Let ω∈Ω. The following conditions are equivalent:
* f(ω)≤ C sup f(K) for all f∈ℱ,
* f(ω)≤ C (sup f(K))_+ for all f∈ℱ,
* ω∈clConv_ℱK.
Suppose that <ref> holds true.
Let 𝒢 denote the complete lattice cone generated by ℱ. Then also
g(ω)≤ C( sup g(K))_+ for all g∈𝒢.
Lemma <ref> implies that if ξ is a non-negative, convex and increasing function on [0,∞) then for any f∈ℱ,
ξ(f_+(ω))≤ Csupξ(f_+(K)).
For k≥ 2, the function t↦ t^k is convex and increasing on [0,∞). It follows that
f_+^k(ω)≤ C sup f_+^k(K).
Taking roots and letting k tend to infinity yields
f_+(ω)≤sup f_+(K).
Then applying the above to f+inf f(K∪{ω})∈ℱ shows that
f(ω)≤sup f(K).
That is ω belongs to clConv_ℱK.
The converse implication is trivial.
Let us note that the above does not imply that <ref> of Theorem <ref> and <ref> of Theorem <ref> are equivalent. Indeed, in general they are not. For example, one can take 𝒜 to be the linear space of affine functions on an open, convex set Ω⊊ℝ^n. Then Ω is ℱ-convex, but <ref> of Theorem <ref> is not satisfied.
In this case, the union
⋃{{ω∈Ω| a(ω)≤ Csupa(K) for all a∈𝒜}| C≥ 1}
consists of the elements of the affine hull of a compact set K.
Let 𝒜 be an algebra or a linear space consisting of the real-parts of a complex algebra ℬ of functions on Ω, closed in τ_𝒦 and containing constants. Let ℱ be the complete lattice cone generated by 𝒜. Then for any compact set K⊂Ω and any C≥ 1
clConv_ℱK={ω∈Ω| |a(ω)|≤ Csup|a|(K) for all a∈𝒜}.
Moreover, if 𝒜 consists of the real-parts of a complex algebra ℬ, then the above sets coincide with
{ω∈Ω| |b(ω)|≤ Csup|b|(K) for all b∈ℬ}.
We shall prove the proposition in the case when 𝒜 consists of the real-parts of a complex algebra ℬ. The proof of the other case is simpler.
Let
R={ω∈Ω| |a(ω)|≤ Csup|a|(K) for all a∈𝒜}
and
S= {ω∈Ω| |b(ω)|≤ Csup|b|(K) for all b∈ℬ}.
Clearly, clConv_ℱK⊂ R.
Let ω∈ R. Let b∈ℬ. Take θ∈ℂ of modulus one such that θ b(ω)=|b(ω)|. As ω∈ R and ℜ𝔢(θ b)∈𝒜, we see that
|b(ω)|≤ Csup|b|(K).
This shows that R⊂ S.
Let now ω∈ S.
As ℬ is an algebra, for any b∈ℬ and k∈ℕ, b^k belongs to ℬ. It follows that
|b(ω)|≤ C^1/ksup|b|(K).
Letting k tend to infinity we get that
|b(ω)|≤sup|b|(K).
Since ℬ is closed in τ_𝒦, for any b∈ℬ, exp(b)∈ℬ. Applying the above to exp(b) and taking logarithms, we see that
ℜ𝔢 b(ω)≤supℜ𝔢b(K).
This shows that ω∈clConv_ℱK and concludes the proof.
Suppose that ℱ is a convex cone of of continuous functions on Ω that contains constants. Then for any compact set K⊂Ω,
{ω∈Ω|f(ω)≤supf(K) for all f∈ℱ}={ω∈Ω| f(ω)≤sup f(K) for all f∈ℱ}.
Indeed, as all functions in ℱ are continuous, if f∈ℱ, then also f+inf f(K)∈ℱ. Thus if ω belongs to the set on the left-hand side of the above equality, then for all f∈ℱ,
f(ω)+inf f(K)≤sup f(K)+inf f(K),
so that ω belongs also to the set on the right-hand side of the above equality. The converse inclusion is trivial.
Suppose that Ω is a set and ℱ is a convex cone of functions on Ω. We shall say that Ω is ℱ-convex whenever for any compact set K⊂Ω
{ω∈Ω| f(ω)≤sup f(K) for all f∈ℱ}
is a compact subset of Ω.
Suppose that Ω is a set and 𝒜 is a linear space of functions on Ω. We shall say that Ω is 𝒜-complete whenever for any compact set K⊂Ω and any C≥ 1
{ω∈Ω| |a(ω)|≤ Csup|a|(K) for all a∈𝒜}
is a compact subset of Ω.
Theorem <ref>, under its assumptions, shows that Ω is 𝒜-complete if and only if it is complete with respect to 𝒜.
Under the assumptions of Theorem <ref> and the assumptions of Proposition <ref>, we see that ℱ-convexity of Ω and 𝒜-completeness of Ω are equivalent, whenever ℱ is the complete lattice cone generated by 𝒜. Note that ℱ-convexity has a different meaning when applied to the entire space Ω and when applied to a subset of Ω.
§ PROOFS
§.§ Proof of Theorem <ref>
We shall now prove Theorem <ref>. We begin with necessary lemmata.
Let Ω be a set and let ℱ be a cone of functions on Ω. We say that
pΩ→ [0,∞)
is an exhaustion function whenever it belongs to ℱ and is proper.
Let Ω be a topological space. Let 𝒜 be a linear space of continuous functions on Ω that contains constants. Let ℱ be the complete lattice cone generated by 𝒜. Suppose that Ω is exhaustible by compactae and that for any compact set K⊂Ω and C≥ 1, the set
{ω∈Ω| |a(ω)|≤ Csup|a|(K) for all a∈𝒜}
is compact.
Then there exists a continuous and non-negative exhaustion function p∈ℱ.
Moreover, one may take p such that there exists a symmetric family (a_α)_α∈ A of functions in 𝒜 such that p=sup{a_α|α∈ A} and, on any compact set K, p is a maximum of a finite number of functions in 𝒜.
We shall first choose a sequence of compactae (K_i)_i=1^∞, such that
for each i=1,2…,
{ω∈Ω| |a(ω)|≤ 4^isup|a|(K_i) for all a∈𝒜}⊂intK_i+1.
Such a sequence exists by the assumption that Ω is exhaustible by compactae and by the assumption that for each C≥ 1 and for each compact set K⊂Ω the set
{ω∈Ω|a(ω)≤ Csupa(K) for all a∈𝒜}
is compact.
We shall construct non-negative, continuous functions p_i∈ℱ such that
K_i⊂ p_i^-1((-∞, 1/2^i]) and K_i+3∖ K_i+2⊂ p_i^-1([ 2^i,∞)) for i=1,2,….
To this aim, observe that the compact set K_i+3∖intK_i+2 is covered by a family of open sets
{ω∈Ω| |a(ω)|> 4^isup|a|(K_i)} with a∈𝒜.
By compactness we may pick a finite subcover and the corresponding functions a_1,…,a_k_i∈𝒜. Let
p_i=max{ |a_l|/(2^i‖a_l‖_𝒞(K_i)) | l=1,…,k_i}∨ 0.
Then p_i∈ℱ fulfils our requirements.
Now, let
p=max{p_i| i=1,2…}.
Then p belongs to ℱ. For i=1,2,…
K_i⊂ p_j^-1((-∞, 1/2^i]) for j≥ i
and
K_i+3∖ K_i+2⊂ p^-1([2^i,∞)).
This shows that p is proper. Moreover, we see that
p=max{p_1,…,p_i} on K_i,
which immediately implies that p is continuous.
Since each p_i is a supremum of a symmetric family of functions in 𝒜, thus so is p. To see that p on any compact is a maximum of a finite family of functions in 𝒜, it suffices to notice that each p_i is such a maximum, for i=1,2,….
This completes the proof.
By Lemma <ref> and Remark <ref> it follows that <ref> implies <ref>. Clearly, <ref> follows from <ref>.
Suppose that <ref> holds true.
Let pΩ→ [0,∞) belong to ℱ. Then by Lemma <ref> for any t≥ 0 the set p^-1([0,t]) is ℱ-convex and compact. Suppose that K⊂Ω is compact. By the assumption, p is bounded on compact sets, so there is some t≥ 0 such that K⊂ p^-1([0,t]). Let C≥ 1. Consider the set
S={ω∈Ω| |a(ω)|≤ Csup|a|(K) for all a∈𝒜}.
We claim that S⊂ p^-1([0,Ct]). Indeed, by the assumption on p∈ℱ, there is a family (a_α)_α∈ A of elements of 𝒜 such that
p=sup{ a_α|α∈ A}.
Since the family (a_α)_α∈ A is symmetric, K⊂ p^-1([0,t]) implies that sup|a_α|(K)≤ t for every α∈ A.
Let ω∈ S. Then we see
p(ω)=sup{a_α(ω)|α∈ A}≤ Csup{sup|a_α|(K)|α∈ A}≤ Ct.
This proves the claim and shows, due to properness of p, that <ref> is satisfied.
Let us observe that since p is bounded on compactae and on any compact is a maximum of a finite family of functions in 𝒜, <ref> follows. Indeed, one can take
P_i=p^-1([0,i]).
These sets are compact, symmetric, 𝒜-polygons which exhaust Ω. Moreover, as p is non-negative and continuous
P_i=p^-1([0,i])⊂ p^-1((-∞, i+1))⊂intP_i+1 for i=1,2,….
This is to say, <ref> is satisfied.
Clearly, it immediately implies <ref>.
We shall now show that these conditions are equivalent to Ω being complete with respect to 𝒜, if 𝒜 is closed in τ_𝒦 and Ω is separable.
Let us assume that <ref> is satisfied.
Let us pick a Cauchy net (ω_α)_α∈ A of elements of Ω. Suppose that the net is not convergent. Since Ω is exhaustible by compactae, we may pick a subsequence (ω_n)_n=1^∞ that is not convergent.
The subsequence induces a sequence (Φ(ω_n))_n=1^∞ of linear, continuous functionals on 𝒜, which is pointwise bounded.
Note that 𝒜 is complete and metrisable in τ_𝒦, according to Remark <ref>.
Therefore, by the Banach–Steinhaus uniform boundedness principle we can find a compact set K⊂Ω and a constant C>0 such that for all a∈𝒜 and all n=1,2,3,…
|a(ω_n)|≤ C‖a‖_𝒞(K).
That is, the elements of (ω_n)_n=1^∞ belong to the set
{ω∈Ω| |a(ω)|≤ Csup|a|(K) for all a∈𝒜},
which is compact by <ref>. It follows that (ω_n)_n=1^∞ converges, contrary to the assumption. We have shown that Ω is complete with respect to 𝒜.
Suppose now conversely that Ω is complete. Equivalently, Φ(Ω) is closed, see Lemma <ref>.
Recall that by Lemma <ref>, <ref>, the map ΦΩ→𝒜^* is a homeomorphism onto its image, when 𝒜^* is equipped with the weak* topology.
Let K⊂Ω and let C∈ℝ. Then the set
R={a^*∈𝒜^*| |a^*(a)|≤ Csup|a|(K) for all a∈𝒜}
is a compact subset of 𝒜^*, by the Banach–Alaoglu theorem.
Now
{ω∈Ω| |a(ω)|≤ Csup|a|(K) for all a∈𝒜}= Φ^-1(R∩Φ(Ω))
is therefore compact, as Φ(Ω) is closed and Φ is a homeomorphism onto its image. This shows that <ref> holds true.
Let us now assume that Ω is separable, that 𝒜 is τ_𝒦 closed and is an algebra or consists of real-parts of a complex algebra. Let us assume that <ref> is satisfied. We shall show that Ω is 𝒜-space. Let K_1,K_2,… be compact subsets of Ω such that
⋃_i=1^∞K_i=Ω and K_i⊂intK_i+1 for i=1,2,….
Taking their ℱ-convex hulls, and relabelling if necessary, we may moreover assume that these sets are ℱ-convex. We may moreover assume that none of these compact sets is equal to Ω. Otherwise, the claim is trivial.
By the assumption Ω is separable and therefore it is also metrisable, see Remark <ref>. Let Ω̅ denote the completion of Ω. Then Ω̅ is separable as a metric space with a separable dense subset Ω. It is moreover metrisable, so that it admits a countable basis (U_i)_i=1^∞ of neighbourhoods of points in Ω̅∖Ω.
Let us take a function kℕ→ℕ, with the property that each element appears infinitely often in its image.
As the sets (K_j)_j=1^∞ are compact, for any j=1,2,… there exists an element ω_j∈ U_k(j)∩Ω∖ K_j. Relabelling the sets (K_j)_j=1^∞ if necessary, we may assume that ω_j∈ K_j+1.
By Proposition <ref> there exists a_j∈𝒜 such that
sup|a_j|(K_j)≤ 2^-j and |a_j(ω_j)|>∑_i=1^j-1|a_i(ω_j)|+j.
Let us set
a=∑_j=1^∞a_j.
Then the series converges in τ_𝒦, so that a∈𝒜. Moreover, for each j=1,2,…
|a(ω_j)|≥|a_j(ω_j)|-∑_i=1^j-1|a_i(ω_j)|-∑_i=j+1^∞|a_i(ω_j)| > j-2^-j.
We shall now show that a verifies the fact that Ω is an 𝒜-space. We shall show that it does not extend to a continuous function on Ω∪{ω} for any ω∈Ω̅∖Ω. Suppose on the contrary that it does extend to some ω∈Ω̅∖Ω. By (<ref>), |a| takes arbitrarily large values on any neighbourhood of ω. If a extended continuously to ω, it would have to be infinite at ω, which is impossible.
As already observed, the fact that Ω is an 𝒜-space immediately implies that Ω is complete with respect to 𝒜.
We can prove that completeness of Ω implies that Ω is ℱ-convex alternatively in the following way.
Proposition <ref> tells us that for a compact set K⊂Ω
clConv_ℱK=Φ^-1(clConv_GΦ(K)∩Φ(Ω)).
The Mazur theorem and Lemma <ref> imply that clConv_GΦ(K) is compact.
By Lemma <ref>, Φ(Ω) is closed in the weak* topology on 𝒜^* and Φ is a homeomorphism onto its image, so that clConv_ℱK is compact subset of Ω.
Let us recall the following theorem of Krein and Šmulian, <cit.>. Below we shall consider bounded weak* topology on a dual space H^* to a normed space H. A set U⊂ H^* is open in bounded weak* topology provided that the intersection of U with any closed ball in H^* is relatively weakly* open.
Let H be a Banach space. Then a convex set Z⊂ H^* is closed in the weak* topology if and only if it is closed in the bounded weak* topology.
Let us observe an analogy between the Krein–Šmulian theorem and the equivalence of <ref> of Theorem <ref> and closedness of Φ(Ω).
If 𝒜 is an algebra, or consists of real-parts of a closed complex algebra, that is closed in τ_𝒦, then there is an alternative proof of the fact that <ref> of Theorem <ref> implies completeness of Ω, that is analogous to the proof <cit.>. It relies on the following proposition.
Let Ω be a locally compact, σ-compact Hausdorff topological space. Suppose that 𝒜 is an algebra of functions, or consists of real-parts of a complex algebra, that is closed under τ_𝒦. Suppose that Ω is ℱ-convex. Then for any net (ω_α)_α∈ A with no accumulation point in Ω there exist a function a∈𝒜 and a sequence (ω_j)_j=1^∞ of elements of the net (ω_α)_α∈ A such that
lim_j→∞|a(ω_j)|=∞.
We shall assume that 𝒜 is an algebra. If 𝒜 consists of real-parts of a complex algebra, only minor modifications are needed.
Let us pick a net (ω_α)_α∈ A of elements of Ω with no accumulation point in Ω. Thus, for any compact K⊂Ω, only finitely many elements of the net belong to K.
We shall construct a sequence of compact sets (K_i)_i=1^∞, for which
⋃_i=1^∞K_i=Ω and K_i⊂intK_i+1 for i=1,2,….
Moreover, we shall find functions (a_i)_i=1^∞ in 𝒜 and a sequence (ω_i)_i=1^∞ of elements of the net, such that for all i=1,2,…
ω_i∈ K_i+1 and sup|a_i|(K_i)≤ 2^-i
and
|a_i(ω_i)|>i+∑_j=1^i-1|a_j(ω_i)|.
We shall proceed inductively.
Since Ω is exhaustible by compactae, there is a sequence of compact sets (L_i)_i=1^∞ for which
⋃_i=1^∞L_i=Ω, L_i⊂intL_i+1 for i=1,2,….
Suppose the functions a_1,…,a_k∈𝒜, elements ω_1,…,ω_k and sets K_1,…,K_k are chosen. There is an index j_k≥ k such that K_k∪{ω_1,…,ω_k}⊂intL_j_k.
By the assumption, the set
K_k+1={ω∈Ω| a(ω)≤supa(L_j_k) for all a∈𝒜}
is compact.
Since (ω_α)_α∈ A has no accumulation point, there is an element ω_k+1 of the net that does not belong to K_k+1.
Thus, there also exists a_k+1'∈𝒜 for which
a_k+1'(ω_k+1)>supa_k+1'(K_k+1).
Normalising and taking a sufficiently high power of a_k+1' yields the function a_k+1 that we were after. Since j_k≥ k, we see that
⋃_i=1^∞K_i=Ω.
We define now
a=∑_i=1^∞a_i.
Since
supa_i(K_j)≤ 2^-i for i≥ j,
we see that the series converges in τ_𝒦, and therefore, by the assumption, a∈𝒜.
However, for j=1,2,… we have
a(ω_j)≥ a_j(ω_j)-∑_i=1^j-1a_i(ω_j)-∑_i=j+1^∞a_i(ω_j)≥ j-2^-j,
as ω_j∈ K_i for i≥ j+1, so that
a_i(ω_j)≤ 2^-i.
This shows that
lim_j→∞ a(ω_j)=∞.
The proposition above immediately shows that, under its assumptions, <ref> of Theorem <ref> implies completeness of Ω. Indeed, by the proposition every Cauchy net (ω_α)_α∈ A must have an accumulation point in Ω, and any Cauchy net with an accumulation point is convergent.
§.§ Proof of Theorem <ref>
Let Ω be a topological space. Let ℱ be a cone of continuous functions on Ω that contains constants and satisfies the maximum principle. Let 𝒢 be the complete lattice cone generated by ℱ. Let K_1,K_2,K_3,K_4 be compact and ℱ-convex subsets of Ω for which
K_1⊊intK_2, K_2⊂intK_3 and K_3⊂ K_4.
Then there exist a non-negative, continuous function p∈𝒢 such that
K_1⊂ p^-1({0})
and
K_2⊂ K_4∩ p^-1((-∞,1])⊂ K_3.
As K_1,K_2 are compact and ℱ-convex we see that for i=1,2
K_i={ω∈Ω| f(ω)≤sup f(K_i) for all f∈ℱ}.
Moreover, K_4∖intK_3
is a compact set, covered by a collection of open sets
{ω∈Ω| f(ω)> sup f(K_2)}, for f∈ℱ,
so it is likewise covered by a finite subcollection of such open sets. Let f_1,…,f_k be the corresponding functions in ℱ.
Since ℱ satisfies the maximum principle and K_1⊊intK_2, we may assume that for i=1,…,k we have
sup f_i(K_1)<sup f_i(K_2).
Then
{ω∈ K_4|f_i(ω)-sup f_i(K_1)/sup f_i(K_2)-sup f_i(K_1)≤ 1 for i=1,…,k}⊂int K_3.
It is readily verifiable that
p=max{f_i-sup f_i(K_1)/sup f_i(K_2)-sup f_i(K_1)| i=1,…,k}∨ 0
satisfies our requirements.
Let Ω be a topological space. Let ℱ be a cone of continuous functions on Ω that contains constants and satisfies the maximum principle. Let 𝒢 be the complete lattice cone generated by ℱ. Let K_1,K_2,K_3,… be compact and ℱ-convex subsets of Ω for which
K_i⊊intK_i+1 for i=1,2,… and ⋃_i=1^∞K_i=Ω.
Then there exists a continuous, proper and non-negative function p∈𝒢 such that on any compact set K, p is the maximum of a finite number of functions in ℱ.
For i=1,2,… let p_i∈𝒢 be a non-negative, continuous function yielded by Lemma <ref> and corresponding to the sets K_i,K_i+1,K_i+2,K_i+3.
Let
p=sup{ip_i| i=1,2,…}.
Observe that p is non-negative and continuous. Indeed, if ω∈ K_j for some j=1,2,…, then p_i vanishes on intK_j+1 for all i≥ j+1. Thus, Ω is covered by open sets, on each of which p is a maximum of a finite number of non-negative and continuous functions.
We shall show that p is proper. By the construction of Lemma <ref>, p_i is at least one on K_i+3∖ K_i+2. Thus, for i=1,2,…
p≥ i on Ω∖ K_i+2.
This shows that for non-negative t∈ℝ we have
{ω∈Ω| p(ω)≤ t}⊂{ω∈Ω| p(ω)≤⌈ t⌉}⊂ K_⌈ t⌉+3.
By continuity of p we see that preimages of compact sets are compact, i.e., p is proper.
This completes the proof.
Similarly to the argument in the proof of Theorem <ref>, <ref> implies <ref>.
Suppose now that <ref> is satisfied. If Ω is compact there is nothing to prove. Let us suppose that Ω is not compact.
Let (K_i)_i=1^∞ be a sequence of compact, exhausting subsets of Ω, which exist by Remark <ref>. By <ref>, the sets (clConv_ℱK_i)_i=1^∞ are compact and
⋃_i=1^∞clConv_ℱK_i=Ω.
Since intK_i⊂intclConv_ℱK_i for i=1,2,…, we see that the sets (clConv_ℱK_i)_i=1^∞ have non-empty interiors that cover Ω. Relabelling the sets if necessary, we may therefore assume that
clConv_ℱK_i⊊intclConv_ℱK_i+1 for i=1,2….
Indeed, if for some non-empty compact set K we had K=intK, then, by the connectedness assumption, we would have K=Ω, contrary to the assumption that Ω is not compact.
Now, Lemma <ref> shows that <ref> holds true. Trivially, <ref> implies <ref>.
The proof that <ref> is equivalent to the other conditions follows along similar lines to the proof of the analogous equivalence of Theorem <ref>.
As Ω is σ-compact, it has at most countably many connected components. Indeed, any compact subset of Ω is covered by at most finitely many of the components of Ω.
We shall prove that <ref> implies <ref>. The other implications follow as in the proof of Theorem <ref>.
Let (U_i)_i=1^∞ be the family of connected components of Ω. For each i=1,2,…, let ℱ_i be the cone of restrictions of elements of ℱ to U_i. Thanks to Theorem <ref>, for i=1,2,… there exists p_i∈ℱ_i, a proper, non-negative continuous function which is locally a maximum of a finite family of functions in ℱ_i. Let us set
p=p_i+i on U_i, i=1,2,….
Then, by the assumption that ℱ is local, p∈ℱ. It is readily verifiable that p satisfies our requirements.
|
http://arxiv.org/abs/2307.04516v1 | 20230710122404 | An Examination of Wearable Sensors and Video Data Capture for Human Exercise Classification | [
"Ashish Singh",
"Antonio Bevilacqua",
"Timilehin B. Aderinola",
"Thach Le Nguyen",
"Darragh Whelan",
"Martin O'Reilly",
"Brian Caulfield",
"Georgiana Ifrim"
] | cs.CV | [
"cs.CV"
] |
Wearable Sensors and Video Data Capture for Human Exercise Classification
Insight Centre for Data Analytics, University College Dublin, Ireland
{ashish.singh,antonio.bevilacqua,timi.aderinola,thach.lenguyen,b.caulfield, georgiana.ifrim}@insight-centre.org
Output Sports Limited, NovaUCD, Dublin, Ireland
{darragh, martin}@ouputsports.com
An Examination of Wearable Sensors and Video Data Capture for Human Exercise Classification
Ashish Singh
Antonio Bevilacqua
Timilehin B. Aderinola
Thach Le Nguyen
Darragh Whelan
Martin O'Reilly
Brian Caulfield
Georgiana Ifrim
August 12, 2023
============================================================================================================================================
Wearable sensors such as Inertial Measurement Units (IMUs) are often used to assess the performance of human exercise. Common approaches use handcrafted features based on domain expertise or automatically extracted features using time series analysis. Multiple sensors are required to achieve high classification accuracy, which is not very practical. These sensors require calibration and synchronization and may lead to discomfort over longer time periods. Recent work utilizing computer vision techniques has shown similar performance using video, without the need for manual feature engineering, and avoiding some pitfalls such as sensor calibration and placement on the body.
In this paper, we compare the performance of IMUs to a video-based approach for human exercise classification on two real-world datasets consisting of Military Press and Rowing exercises.
We compare the performance using a single camera that captures video in the frontal view versus using 5 IMUs placed on different parts of the body. We observe that an approach based on a single camera can outperform a single IMU by 10 percentage points on average. Additionally, a minimum of 3 IMUs are required to outperform a single camera. We observe that working with the raw data using multivariate time series classifiers outperforms traditional approaches based on handcrafted or automatically extracted features. Finally, we show that an ensemble model combining the data from a single camera with a single IMU outperforms either data modality. Our work opens up new and more realistic avenues for this application, where a video captured using a readily available smartphone camera, combined with a single sensor, can be used for effective human exercise classification.
§ INTRODUCTION
Recent years have seen an accelerated use of machine learning solutions to assess the performance of athletes.
New technologies allow easier data capture and efficient machine learning techniques enable effective measurement and feedback. In this paper, we focus on the application of human exercise classification where the task is to differentiate normal and abnormal executions for strength and conditioning (S&C) exercises. S&C exercises are widely used for rehabilitation, performance assessment, injury screening and resistance training in order to improve the performance of athletes <cit.>.
Approaches to data capture are either sensor-based or video-based. For sensor-based approaches, sensors such as Inertial Measurement Units (IMUs) are worn by participants <cit.>. For video, a participant's motion is captured using 3D motion capture <cit.>, depth-capture based systems <cit.>, or 2D video recordings using cameras <cit.>. The data obtained from these sources is processed and classified using machine learning models.
Classification methods based on sensor data are popular in the literature and real-world applications, and yet, video-based approaches are gaining popularity <cit.> as they show potential for providing high classification accuracy and overcoming common issues of inertial sensors.
Sensors require fitting on different parts of the body and the number of sensors to be worn depends upon the context of the exercise. For instance, the Military Press exercise requires at least 3 IMUs for optimal performance. Despite their popularity, sensors may cause discomfort, thereby hindering the movement of participants. In addition, using multiple sensors leads to overheads such as synchronization, calibration and orientation.
Recent advances in computer vision have enabled the usage of 2D videos for human exercise classification.
Past work explored posture detection <cit.> and the application of human exercise classification using pose estimation. Our previous work <cit.> proposed a novel method named BodyMTS to classify human exercises using video, human pose estimation and multivariate time series classification. There is less work comparing sensors with video in real-world applications.
In this paper, we compare the performance of a sensor-based approach utilizing 5 IMUs with that of video from a single front-facing camera, on the same set of 54 participants, on two real-world datasets consisting of Military Press (MP) and Rowing exercises. These are important S&C exercises and are widely used for injury risk screening and rehabilitation <cit.>. Incorrect executions may lead to musculoskeletal injuries and undermine the performance of athletes <cit.>. Hence, correct detection of abnormal movements is crucial to avoid injuries and maximize performance.
The main requirements for an effective human exercise classification application are <cit.>: accurate monitoring of body parts movement, correct classification of deviations from normal movements, timely feedback to end users, simple data capture using available smartphones and coverage of a wide range of S&C exercises. Previous work <cit.> has shown that this task is difficult and has poor intra and inter-rater accuracy in user studies with domain experts, with Kappa scores for inter-rater agreement between 0.18-0.53, and intra-rater between 0.38-0.62. Through discussions with domain experts, we established that an effective application should achieve a minimum accuracy of 80% to be useful for end users.
Existing methods using IMUs involve pre-processing the raw data, creating handcrafted features <cit.>, and applying classical machine learning algorithms. Handcrafted feature extraction is often tedious and time-consuming, requires access to domain knowledge and is prone to cherry-pick features that only work for a specific set of exercises.
Deep learning methods <cit.> overcome this issue by automatically constructing features during training, but still require expertise in deep learning architectures along with hardware resources such as GPUs. Hence, we take two approaches to feature extraction: (1) using lightweight packages such as catch22 <cit.> and tsfresh <cit.> to automate the feature extraction from raw signals and (2) using the raw time series data with time series classifiers, which implicitly construct features inside the algorithm.
For videos, we first extract multivariate data using human pose estimation with OpenPose <cit.> to obtain (X,Y) location coordinates of key body parts over all the frames of a video.
Figure <ref> shows data captured with IMUs and video for the Military Press exercise. The top part shows the Y-signal for 3 body parts for a total of 10 repetitions, while the bottom part shows the X, Y, and Z signals of the magnetometer from an IMU worn on the right arm for the same set of 10 repetitions.
Our main contributions are:
* We compare 3 strategies for creating features from IMU data for human exercise classification. We observe that directly classifying the raw signals using multivariate time series classifiers outperforms the approach based on handcrafted features by a margin of 10 and 4 percentage points in accuracy for MP and Rowing respectively. Automatic feature extraction shows better performance than handcrafted features.
* We compare the performance of IMU and video for human exercise classification. We observe that a single video-based approach outperforms a single IMU-based approach by a margin of 5 percentage points accuracy for MP and 15 percentage points for Rowing.
Additionally, we observe that a minimum of 3 IMU devices are needed to outperform a single video for both exercises.
* We propose an ensemble model that combines the data modalities from IMU and video, which outperforms either approach by a minimum of 2 percentage points accuracy for both MP and Rowing. This leads to an accuracy of 93% for MP and 87% for Rowing, using only a single IMU and a reduced-size video. We discuss reasons why combining video and sensor data is beneficial, in particular, the 2D video provides positional information, while the sensor provides information on orientation and depth of movement.
* To support this paper we have made all our code and data available [<https://github.com/mlgig/Video_vs_Shimmer_ECML_2023>].
The rest of the paper is organized as follows. Section <ref> presents an overview of related work, Section <ref> describes the data collection procedure, Section <ref> describes the data analysis and methodology for classification and Section <ref> presents the classification results using IMUs and video. Section <ref> concludes and outlines directions for future work and Section <ref> discusses ethical implications of this work.
§ RELATED WORK
This section describes the purpose of S&C exercises and provides an overview of sensor-based and video-based data capture approaches.
§.§ S&C Exercise Classification
S&C exercises aim at improving the performance of human participants in terms of strength, speed and agility, and they can be captured using sensor-based or video-based techniques.
Wearable sensor-based approaches involve fitting Inertial Measurement Units (IMUs) <cit.> on different parts of the body. This is followed by creating handcrafted features which are used in conjunction with a classical machine learning model. Deep learning methods attempt to automate the process of feature extraction. CNN models work by stacking IMU signals into an image <cit.>, whereas <cit.> uses an attention mechanism to identify the important parts in a signal.
Using IMUs has its own limitations. First, the number of inertial sensors required and their positions can vary from exercise to exercise <cit.>. Furthermore, sensors require calibration and synchronization and may also hinder the movement of the body and cause discomfort when used over longer time periods <cit.>.
Video-based systems can be categorized into 3 types: 3D motion capture, depth camera-based and 2D video camera. Though they are accurate, 3D motion capture systems are expensive and require complex setups. In addition, fitting multiple markers on the body may hinder the normal movement of the body <cit.>. Microsoft Kinect is commonly used for depth camera-based systems <cit.>. These systems are less accurate and are affected by poor lighting, occlusion, and clothing, and require high maintenance <cit.>. The third subcategory uses video-based devices such as DSLR or smartphone cameras. Works based on video rely on human pose estimation to track different body parts <cit.> and have shown 2D videos to be a potential alternative to IMU sensors.
The video-based analysis also includes commercial software such as Dartfish <cit.> by providing the option to analyze motion at a very low frame rate. However, these are less accurate and require fitting body markers of a different colour to the background.
§.§ Multivariate Time Series Classification (MTSC)
In multivariate time series classification tasks, the data is ordered and each sample has more than one dimension.
We focus on recent linear classifiers and deep learning methods, which have been shown to achieve high accuracy with minimal run-time and memory requirements <cit.>.
Linear Classifiers. ROCKET <cit.> is a state-of-the-art algorithm for MTSC in terms of accuracy and scalability. Two more extensions named MiniROCKET <cit.> and MultiROCKET <cit.>, have further improved this method. These classifiers work by using a large number of random convolutional kernels which capture different characteristics of a signal and hence do not require learning the kernel weights as opposed to deep learning methods. These features are then classified using a linear classifier such as Logistic or Ridge Regression.
Deep Learning Classifiers. Deep learning architectures based on Fully Convolutional Networks (FCN) and Resnet <cit.> have shown competitive performance for MTSC, without suffering from high time and memory complexity.
§ DATA COLLECTION
Participants.
54 healthy volunteers (32 males and 22 females, age: 26 ± 5 years, height: 1.73 ± 0.09 m, body mass: 72 ± 15 kg) were recruited for the study. Participants were asked to complete multiple repetitions of the two exercises in this study: the Military Press and Rowing exercises. In each case, the exercises were performed under 'normal' and 'induced' conditions. In the 'normal' condition the exercise was performed with the correct biomechanical form and in the 'induced' condition the exercise was purposefully performed with pre-determined deviations from the normal form, assessed and confirmed in real-time by the movement scientist. Please refer to these sources <cit.> for additional information on the experiment protocol.
The data was collected using two video cameras and 5 Shimmer IMUs placed on 5 different parts of the body. Two cameras (30 frames/sec with 720p resolution) were set up in front and to the side of the participants. In this work, we only use the video recordings from the front view camera, which is a more common use case. The 5 IMUs (sampling frequency of 51.2 Hz, tri-axial accelerometer (±2 g), gyroscope (±500 ^∘/s) and magnetometer (±1.9 Ga) <cit.>) were fitted on the participants at the following five locations: Left Wrist (LW), Right Wrist (RW), Left Arm (LA), Right Arm (RA) and Back. The orientation and locations of all the IMUs were consistent for all the participants.
Exercise Technique and Deviations.
The induced forms were further sub-categorized depending on the exercise.
§.§ Exercise Classes for Military Press (MP)
Normal (N): This class refers to the correct execution, involving lifting the bar from shoulder level to above the head, fully extending the arms, and returning it back to shoulder level with no arch in the back. The bar must be stable and parallel to the ground throughout the execution.
Asymmetrical (A): The bar is lopsided and asymmetrical.
Reduced Range (R): The bar is not brought down completely to the shoulder level.
Arch (Arch): The participant arches their back during execution.
Figure <ref> shows these deviations using a single frame.
§.§ Exercise Classes for Rowing
Normal (N): This class refers to the correct execution, where the participant begins by positioning themselves correctly, bending knees and leaning forward from the waist. The execution starts by lifting the bar with fully extended arms until it touches the sternum and bringing it back to the starting position. The bar must be stable and parallel to the ground and the back should be straight.
Asymmetrical (A): The bar is lopsided and asymmetrical.
Reduced Range (R): The bar is not brought up completely until it touches the sternum.
Ext: The participant moves his/her back during execution.
RB: The participant executes with a rounded back.
Figure <ref> shows these deviations by depicting a single frame.
§ DATA ANALYSIS AND METHODS
This section presents the data pre-processing, features extraction and classification models. We present the feature extraction for IMU data, followed by feature extraction for video. We also provide a description of the train/test splits for IMUs and video data.
§.§ IMU Data
We discuss three strategies to create features from IMU data. First, we directly use the raw signal as a time series. Second, we use existing approaches to create handcrafted features. Third, we use dedicated packages to automatically extract features. Features extraction is performed after segmenting the full signal to obtain individual repetitions.
§.§.§ Raw Signal as Multivariate Time Series.
The raw signal from each IMU records data for 10 repetitions. Hence, we segment the time series to obtain signals for individual repetitions. The Y signal of the magnetometer from the IMU placed on the right arm is utilized to segment the signals. The time series obtained after this step have variable lengths, since the time taken to complete each repetition differs from participant to participant. Further, current implementations of the selected time series classifiers cannot handle variable-length time series and therefore all time series are re-sampled to a length of 161 (the length of the longest time series). This does not impact the performance, as shown in the supplementary material.
Every single repetition constitutes a single sample for train/test data. The final data D has a shape of D ∈ℝ^N × 45 × 161, where N indicates the total samples.
Each sample denoted by x_i in the data has a dimension of x_i ∈ ℝ^45 × 161, where 45 denotes the total number of time series (5 IMUs x 9 signals) and 161 is the length of each time series.
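For illustration, a minimal sketch of this segmentation and re-sampling step is given below (not the exact implementation used in this work). It assumes one 10-repetition recording is available as a NumPy array of shape (45, T), uses peak detection on the reference magnetometer Y channel (the channel index mag_y_idx is a placeholder), and the peak-detection parameters shown are purely illustrative.

import numpy as np
from scipy.signal import find_peaks

def segment_and_resample(recording, mag_y_idx, target_len=161):
    """Split a (45, T) recording into single repetitions re-sampled to a common length."""
    ref = recording[mag_y_idx]                      # reference channel used for segmentation
    peaks, _ = find_peaks(ref, distance=40)         # illustrative minimum peak distance
    reps = []
    for start, end in zip(peaks[:-1], peaks[1:]):   # one repetition between consecutive peaks
        rep = recording[:, start:end]
        old_x = np.linspace(0.0, 1.0, rep.shape[1])
        new_x = np.linspace(0.0, 1.0, target_len)
        # linearly re-sample every channel to the common length
        reps.append(np.stack([np.interp(new_x, old_x, ch) for ch in rep]))
    return np.stack(reps)                           # shape: (n_reps, 45, 161)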
§.§.§ Handcrafted Features.
Each of the 5 IMUs outputs 9 signals (X,Y,Z) for each of the accelerometer, magnetometer and gyroscope.
We follow the procedure as described in <cit.> to create handcrafted features. Additionally, 5 signals were created for each IMU: pitch, roll, yaw signal and vector magnitude of accelerometer and gyroscope,
giving a total of 70 signals (5 × (9 + 5)). For each repetition signal, 18 handcrafted features that capture time and frequency domain characteristics were created.
Hence, we obtain the final data D ∈ℝ^N × 1260, where N is the total samples and 1260 represents the features extracted from 70 signals with 18 features each for both MP and Rowing.
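The full list of 18 features follows <cit.> and is not reproduced here; the sketch below computes a few representative time- and frequency-domain descriptors for a single repetition signal to illustrate the general idea (the sampling frequency fs of 51.2 Hz corresponds to the IMU setting above).

import numpy as np
from scipy.stats import skew, kurtosis

def example_features(x, fs=51.2):
    """A few illustrative time/frequency-domain features for one repetition signal x."""
    spectrum = np.abs(np.fft.rfft(x - x.mean()))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return {
        "mean": x.mean(),
        "std": x.std(),
        "range": x.max() - x.min(),
        "skewness": skew(x),
        "kurtosis": kurtosis(x),
        "energy": np.sum(x ** 2) / len(x),
        "dominant_freq": freqs[np.argmax(spectrum[1:]) + 1],  # skip the DC bin
    }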
§.§.§ Auto Extracted Features.
We use packages catch22 <cit.> and tsfresh <cit.> to perform automatic feature extraction from a single repetition signal. These packages calculate a wide range of pre-defined metrics in order to capture the diverse characteristics of a signal. They are straightforward to use and avoid the need for domain knowledge and signal processing techniques.
Catch22 captures 22 features for each of the 45 signals (5 IMUs x 9 signals) giving a total of 990 tabular features for MP and Rowing in the final dataset D ∈ℝ^N × 990, where N indicates the total samples. Similarly, tsfresh captures a large number of time series characteristics by creating a large number of features.
The final dataset D has a shape of D ∈ℝ^N × 15000 and D ∈ℝ^N × 16000, for MP and Rowing respectively. Both manual and automatic feature extraction are performed on the normalized time series, as we observed that normalizing the time series leads to an increase in accuracy.
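A hedged usage sketch is shown below, assuming the tsfresh and pycatch22 Python packages; rep_signal and long_df are placeholders for a single repetition signal and a long-format DataFrame of the segmented data, and the exact feature-extraction settings used in the study may differ.

import pycatch22
from tsfresh import extract_features

def tabular_features(rep_signal, long_df):
    """rep_signal: one univariate repetition (1-D sequence); long_df: long-format DataFrame
    with columns "rep_id", "time" and one column per raw signal."""
    # catch22: 22 pre-defined features for a single univariate signal
    c22 = pycatch22.catch22_all(list(rep_signal))        # {"names": [...], "values": [...]}
    # tsfresh: a large battery of features per rep_id and per signal column
    ts_feats = extract_features(long_df, column_id="rep_id", column_sort="time")
    return c22, ts_feats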
§.§ Video Data
We follow the methodology presented in our previous work <cit.> to classify human exercise from videos. OpenPose is used for human pose estimation to track the key body parts, followed by a multivariate time series classifier. Each video consists of a sequence of frames where each frame is considered a time step. Each frame is fed to OpenPose which outputs coordinates (X,Y) for 25 body parts. We only use the 8 upper body parts most relevant to the target exercises but also conduct experiments with the full 25 body parts.
The time series obtained from a single body part is denoted by b^n = [(X,Y)^1, (X,Y)^2, (X,Y)^3,...(X,Y)^T] where n indicates the n^th body part and T is the length of the video clip.
§.§.§ Multivariate Time Series Data.
Since each video records 10 repetitions for each exercise execution, segmentation is necessary in order to obtain single repetitions. Each repetition forms a single time series sample for training and evaluating a classifier. We use peak detection to segment the time series as mentioned in our previous work <cit.>.
Similarly to the IMU case, every time series obtained after this step has a variable length and therefore is re-sampled to a length of 161.
The final data is denoted by D ∈ℝ^N × 16 × 161, where N indicates the total samples. Each sample denoted by x_i has a dimension of x_i∈ℝ^16 × 161, where 16 indicates X and Y coordinates for 8 body parts and 161 is the length of each time series.
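A sketch of this step is given below, assuming OpenPose was run with JSON output (one file per frame, each containing a people list with a pose_keypoints_2d field of x, y, confidence triplets) and that upper_body_ids is a placeholder holding the indices of the 8 selected body parts.

import json, glob
import numpy as np

def video_to_series(json_dir, upper_body_ids):
    """Stack per-frame OpenPose keypoints into a (2 * n_parts, n_frames) array."""
    frames = []
    for path in sorted(glob.glob(f"{json_dir}/*_keypoints.json")):
        with open(path) as f:
            people = json.load(f)["people"]
        # assumes a single person is detected per frame
        kp = np.array(people[0]["pose_keypoints_2d"]).reshape(-1, 3)   # (25, [x, y, conf])
        frames.append(kp[upper_body_ids, :2].T.ravel())                # x/y of selected parts
    return np.stack(frames, axis=1)                                    # (16, T) for 8 parts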
§.§.§ Auto Extracted Features.
We use catch22 <cit.> and tsfresh <cit.> to perform automatic feature extraction from each single repetition signal.
§.§ Train/Test Splits
We use 3 train/test splits in the ratio of 70/30 on the full data set to obtain train and test data for both IMUs and video. Each split is done based on the unique participant IDs to avoid leaking information into the test data.
Train data is further split in the ratio of 85/15 to create validation data to fine-tune the hyperparameters. The validation data is merged back into the train data before the final classification.
The data is balanced across all the classes. Table <ref> shows the number of samples across all classes for a single train/test split for MP and Rowing respectively.
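The participant-level split can be reproduced with scikit-learn's GroupShuffleSplit, as sketched below with randomly generated placeholder data standing in for the repetition samples, labels and participant identifiers.

import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)
X = rng.normal(size=(540, 45, 161))                  # placeholder repetition samples
y = rng.integers(0, 4, size=540)                     # placeholder class labels
participant_ids = rng.integers(0, 54, size=540)      # one participant id per sample

gss = GroupShuffleSplit(n_splits=3, train_size=0.7, random_state=0)
for train_idx, test_idx in gss.split(X, y, groups=participant_ids):
    X_train, X_test = X[train_idx], X[test_idx]
    y_train, y_test = y[train_idx], y[test_idx]
    # no participant appears in both train and test, which avoids information leakage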
§.§ Classification Models
We use tabular machine learning models to work with handcrafted and automated features. Informed by previous literature on feature extraction for IMU data <cit.>, we focus on Logistic Regression, Ridge Regression, Naive Bayes, Random Forest and SVM as classifiers for tabular data.
We select ROCKET, MultiROCKET and deep learning models FCN and Resnet as recent accurate and fast multivariate time series classifiers <cit.>.
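A minimal sketch of the time series pipeline is given below, assuming sktime's Rocket transform paired with a ridge classifier (a typical pairing); the data are random placeholders, the number of kernels is reduced for illustration, and module paths may differ across sktime versions.

import numpy as np
from sklearn.linear_model import RidgeClassifierCV
from sklearn.pipeline import make_pipeline
from sktime.transformations.panel.rocket import Rocket

rng = np.random.default_rng(0)
X_train = rng.normal(size=(60, 16, 161))             # placeholder multivariate time series
y_train = np.repeat(["N", "A", "R", "Arch"], 15)     # placeholder labels

rocket_clf = make_pipeline(
    Rocket(num_kernels=1_000, random_state=0),       # random convolutional kernel features
    RidgeClassifierCV(alphas=np.logspace(-3, 3, 10)),
)
rocket_clf.fit(X_train, y_train)
print(rocket_clf.predict(X_train[:5]))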
§ EMPIRICAL EVALUATION
We present results on IMU data, video data and combinations using ensembles.
We report average accuracy over 3 train/test splits for all the results. We use the sklearn library <cit.> to classify tabular data and sktime <cit.> to classify time series data. All the experiments are performed using Python on an Ubuntu 18.04 system (16GB RAM, Intel i7-4790 CPU @ 3.60GHz).
The Supplementary Material [<https://github.com/mlgig/Video_vs_Shimmer_ECML_2023/blob/master/Supplementary_material.pdf>] presents further detailed results on
leave-one-participant-out cross-validation, demographic results, execution time, as well as the impact of normalization and re-sampling length on the classification accuracy.
§.§ Accuracy using IMUs
We present the classification results using 3 different strategies for creating features from IMU data. For tabular features, we perform feature selection to reduce overfitting and execution time. We use Lasso Regression (C=0.01) with L1 penalty for feature selection, where C is the regularization parameter.
Logistic Regression achieves the best performance followed by Ridge Regression and SVM. These results suggest that linear classifiers are best suited for this problem. Hence we only present results using Logistic Regression here.
We tune hyperparameters, particularly regularization parameter C of Logistic Regression using cross validation. We observed that Logistic Regression (LR) with C=0.01 achieves the highest accuracy (Table <ref> presents results with Logistic Regression).
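A sketch of this feature-selection and classification step is shown below, assuming an L1-penalised logistic regression as the sparse selector over the tabular features; X_tab_train, y_train, X_tab_test and y_test are placeholders, and the exact selector settings used in the study may differ.

from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X_tab_train, y_train: placeholder tabular feature matrix (e.g. the 990 catch22 features) and labels
model = make_pipeline(
    StandardScaler(),
    SelectFromModel(LogisticRegression(penalty="l1", C=0.01, solver="liblinear")),
    LogisticRegression(C=0.01, max_iter=1000),
)
model.fit(X_tab_train, y_train)
print(model.score(X_tab_test, y_test))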
Table <ref> presents the results using raw data and multivariate time series classifiers. ROCKET achieves the best performance with MultiROCKET having similar accuracy for this problem. ROCKET has the added benefit that it can also work with unnormalised data and it is faster during training and prediction, so we select this classifier for the rest of the analysis.
We analyse the average accuracy using all 5 IMUs as well as combinations of IMUs using raw time series with ROCKET as classifier. The goal is to select the minimum number of IMUs needed to achieve the best performance for MP and Rowing. Table <ref> presents the average accuracy over 3 splits obtained using all IMUs whereas Table <ref> presents the average accuracy using different combinations of IMUs.
Results and Discussion:
From Table <ref> we observe that using raw data with ROCKET achieves the highest accuracy when compared to the approaches based on handcrafted and automated feature extraction. We tune hyperparameters of ROCKET using the validation data, particularly the number-of-kernels and observe no impact on the accuracy. The normalization flag is set to True here as turning it off leads to a 4 percentage points drop in the accuracy. ROCKET can easily be run on a single CPU machine without the need for much engineering effort (only 2 parameters to tune) and dedicated hardware. It is much faster than using tsfresh or catch22 for feature extraction followed by classification.
Table <ref> presents the accuracy using different combinations of IMUs placed on different parts of the body.
Accuracy is lowest when using only a single sensor. Accuracy starts to increase as more IMUs are included, for both MP and Rowing. We observe that placing 1 IMU on each wrist and 1 at the back achieved the same accuracy as using all 5 IMUs. The accuracy jumps from 0.83 to 0.88 moving from one IMU placed on the right wrist to two IMUs placed on both wrists and finally jumps to 0.91 when adding one more IMU at the back for MP. Similar behaviour is observed for Rowing. This suggests that 3 IMUs are sufficient for these exercises.
§.§ Accuracy using Video
Here we present the results of classification using video as the data source. We report the average accuracy over 3 train/test splits for MP and Rowing. We also present results using tabular classifiers with automated features for comparison with the IMU based approach. For the raw data approach, we study the accuracy when involving different body parts, e.g., all 25, the 8 upper body parts suggested by domain experts and results using automated channel selection technique <cit.>. The normalization flag is set to False here as turning it on leads to a 4 percentage points drop in accuracy. This is in contrast to the setting configured for IMUs. We tune hyperparameters of ROCKET, particularly the number-of-kernels and observe no impact on the accuracy. Table <ref> presents the average accuracy using these different approaches for classifying MP and Rowing exercises.
Results and Discussion:
From Table <ref> we observe that the average accuracy achieved using raw time series is highest when using the 8 body parts suggested by domain experts. Using automated features does not seem to work very well, in this case, achieving accuracy below 80% for both exercises. Moreover, using channel selection techniques leads to an improvement by 1 and 3 percentage points in accuracy versus using the full 25 body parts.
§.§ IMU versus Video
We compare IMU and video data for human exercise classification, using the raw data approach for both IMU and video as it achieves the best performance. We report the accuracy,
the execution time and the storage space required.
Table <ref> presents the results for both MP and Rowing exercises. We observe that a minimum of 3 IMUs are required to achieve a higher accuracy than a single video.
A single video outperforms a single IMU for both exercises by a minimum of 5 percentage points.
Table <ref> reports the real train/test time for both approaches. This time includes time taken for data pre-processing and to train/test the model. It also includes time to run pose estimation in case of video.
The IMUs approach takes the least amount of time to train/test as compared to the video-based approach. For video, OpenPose extracts the multivariate time series data. The total duration of all videos is 1h 38 minutes for MP, whereas OpenPose took 1h 12 minutes to process them; thus OpenPose can run faster than real time, which is important for getting fast predictions.
Table <ref> presents the storage consumption for both approaches. We note savings in terms of storage space: 5 IMUs require 6 times more space than the time series obtained from videos. Even after selecting the minimum number of sensors which is 3 in both exercises, the storage consumption is more than 200 MB which is also higher as compared to using time series from video.
Our previous work in <cit.> explored the impact of video quality such as resolution and bit rate on classification accuracy and demonstrated how much video quality can be degraded without having a significant impact on the accuracy, whilst saving storage space and processing power.
§.§ Combining IMU and Video
We create an ensemble model by combining individual models trained independently on IMU and Video. For IMUs, we take the 3 sensors that achieved the highest accuracy. When video is combined with just a single sensor, we take the IMU placed on the left wrist, as it had the highest accuracy among single sensors and it is the most common location for people to wear their smartwatch.
Probabilities are combined by averaging and the class with the highest average probability is predicted for a sample during test time. Table <ref> presents a comparison of different approaches, using ROCKET as a multivariate time series classifier. From Table <ref>, we observe that an ensemble model achieves the best average accuracy when compared to using any number of IMUs and a single video-based approach. The accuracy for MP jumps by 2 percentage points when transitioning from 5 IMUs to an ensemble approach, and by 5 percentage points when moving from a single video to an ensemble. Similar results are observed for Rowing. These results suggest that combining IMU and video modalities enhances the performance of exercise classification. Combining video and IMU data sources, with video providing 2D location coordinates for key anatomical landmarks and IMUs capturing acceleration and orientation of the body parts, results in improved classification accuracy, as shown in this investigation (see supplementary material). This finding is consistent with previous work in <cit.> that highlights the complementary nature of video and IMUs in enhancing human pose estimation quality, while in this work we see a similar benefit for human exercise classification.
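The late-fusion step reduces to averaging the class-probability estimates of the two independently trained models, as sketched below; video_model and imu_model are placeholders for classifiers exposing predict_proba over the same class ordering.

import numpy as np

def ensemble_predict(video_model, imu_model, X_video, X_imu):
    """Average the class probabilities of two independently trained models (soft voting)."""
    p_video = video_model.predict_proba(X_video)     # shape: (n_samples, n_classes)
    p_imu = imu_model.predict_proba(X_imu)
    p_avg = (p_video + p_imu) / 2.0
    return video_model.classes_[np.argmax(p_avg, axis=1)]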
§ CONCLUSION
We presented a comparison of IMU and video-based approaches for human exercise classification on two real-world S&C exercises (Military Press and Rowing) involving 54 participants.
We compared different feature-creation strategies for classification. The results show that an automated feature extraction approach outperforms classification that is based on manually created features. Additionally, directly using the raw time series data with multivariate time series classifiers achieves the best performance for both IMU and video. While comparing IMU and video-based approaches, we observed that using a single video significantly outperforms the accuracy obtained using a single IMU. Moreover, the minimum number of IMUs required is not known in advance, for instance, 3 IMUs are required for MP to reach a reasonable accuracy.
Next, we compared the performance of an ensemble method combining both IMU and video with the standalone approaches.
We showed that an ensemble approach outperforms either data modality deployed in isolation. The accuracy achieved was 93% and 88% for MP and Rowing respectively.
The criteria to select sensors or videos will ultimately depend on the goal of the end user. For instance: the choice between video and IMUs will depend on a combination of factors such as convenience and levels of accuracy required for the specific application context.
We acknowledge the fact that the scenario that was tested in this research does not accurately reflect real-world conditions. This does mean that we are exposed to the risk that the induced deviation performances could be exaggerated, and therefore not reflective of the often very minor deviations that can be observed in the real-world setting. However, we would argue that performing exercises under induced deviation conditions, if done appropriately, is a very necessary first step towards validating these exercise classification strategies in this field. It would not be prudent to assume that this model could be generalised to operate to the same level in real-world conditions. Having said that, the use of conditioned datasets is a necessary first step in this kind of application and provides the proof of concept evidence necessary to move onto the real-world setting.
§.§.§ Acknowledgements
This work was funded by Science Foundation Ireland through the Insight Centre for Data Analytics (12/RC/2289_P2) and VistaMilk SFI Research Centre
(SFI/16/RC/3835).
§ ETHICAL IMPLICATIONS
Using videos for human exercise classification raises ethical implications that need to be mitigated, prompting a discussion of potential ethical implications.
Data Collection.
Participants in this study provided written consent and the Human Research Ethics Committee of the university approved this study. All experiments were conducted under the supervision of an expert physiotherapist. A potential issue, in this case, can arise when the language used for the consent form is not native to all the participants. In our case, the professional carrying out the data collection made sure that all the participants fully understood the consent form and the future use of this data.
Privacy and Confidentiality. This study uses videos which record participants executing exercises. This poses obvious privacy challenges. A first step is to blur the video to protect the participant's identity. This work utilizes human pose estimation to extract time series from video, thereby avoiding the need to directly use the original video. By working with the extracted time series, it largely safeguards the privacy and confidentiality of the participants.
Diversity of Representation.
The participants considered in this study fall into the age group of 20 to 46. Hence the results presented here may not generalise for other age groups. Therefore the final use case will depend on the specific target users, such as athletes competing in the Olympic games versus individuals with less intensive training goals. While there were slightly more male participants than female participants, it does not impact the conclusions drawn in this work, as analysed in the supplementary material. However, this requires further exploration to avoid any biases in the conclusion. Future studies should aim for equal representation among participants in terms of age, sex, gender, race etc., from the start of the study.
Transparency and Feedback. The prediction of the model in this case outputs whether the execution of the exercise was correct or incorrect. Deep learning-based models and other posthoc explanation methods support saliency maps which can be used to highlight the discriminative regions of the data that can be mapped back to the original video thus providing more information about the model decision to the participant.
The above list is not exhaustive and other inherent biases may appear because of the chosen model and the way the data has been collected.
|
http://arxiv.org/abs/2307.13807v1 | 20230711155311 | Sports Betting: an application of neural networks and modern portfolio theory to the English Premier League | [
"Vélez Jiménez",
"Román Alberto",
"Lecuanda Ontiveros",
"José Manuel",
"Edgar Possani"
] | q-fin.PM | [
"q-fin.PM",
"cs.LG",
"62P05",
"G.3"
] |
Sports Betting: an application of neural networks and modern portfolio theory to the English Premier League
Vélez Jiménez, Román Alberto
Lecuanda Ontiveros, José Manuel
Edgar Possani
==================================================================================================================================================================================================================================================================================================================================================================
This paper presents a novel approach for optimizing betting strategies in sports gambling by integrating Von Neumann-Morgenstern Expected Utility Theory, deep learning techniques, and advanced formulations of the Kelly Criterion. By combining neural network models with portfolio optimization, our method achieved remarkable profits of 135.8% relative to the initial wealth during the latter half of the 20/21 season of the English Premier League. We explore complete and restricted strategies, evaluating their performance, risk management, and diversification. A deep neural network model is developed to forecast match outcomes, addressing challenges such as limited variables. Our research provides valuable insights and practical applications in the field of sports betting and predictive modeling.
§ INTRODUCTION
This study aims to address the question of which bets a rational gambler, considering their risk appetite, should select in order to maximize expected utility. The concept of optimality is defined based on the Von Neumann-Morgenstern Classical Utility Theory. We employ deep learning techniques to estimate event odds, and subsequently employ the Sharpe Ratio and Kelly's financial criteria to identify the optimal set of bets.
The primary objective of this study is to assess the performance of the Sharpe Ratio and the Kelly Criterion within the context of real-world sports betting, specifically during the second half of the 2020-2021 season of the English Premier League (EPL). Additionally, compare the impact of underlying assumptions on the aforementioned criteria, shedding light on their significance.
§.§ Sports Betting
The analysis of a bet can be approached as a decision problem defined by the tuple (D, E, C, (≽)). Within our mathematical framework, this decision tuple (D, E, C, (≽)) encompasses fundamental elements for making rational decisions within the realm of sports betting. The decision set D represents the available choices or bets that a bettor can select. The events set E defines the potential outcomes or events associated with these bets. The consequence set C encompasses the various outcomes or consequences that arise from each event, denoted as c = c(d, e). The preference relation (≽) captures the bettor's subjective preferences and enables comparisons between different bets based on personal criteria <cit.>. By defining and analyzing these elements, our approach aims to assist rational gamblers in choosing optimal bets that maximize their expected utility and align with their preferences <cit.>. Specifically, we seek the optimal bet d_*∈ D based on the bettor's preferences (≽). These preferences are contingent upon the consequences c ∈ C, which in this case include the odds and probability of the event e ∈ E <cit.>.
We denote by ℓ_i the fraction of the wealth wagered on event e_i, so that ℓ represents the wager vector. Consequently, the decision d_ℓ signifies the act of placing the bets ℓ on the events e ∈ E.
§.§.§ Odds
Within the domain of sports betting, let W_0 represent the initial wealth and let W_1 denote the net amount won by placing a bet on event e, not taking into account the initial stake W_0. If the event occurs, the bettor receives the gross profit W_1 + W_0; if it does not occur, the bet is forfeited, resulting in a loss of the stake. The probabilities associated with these outcomes are denoted by p and 1-p, respectively. It is assumed that all wagers offer a positive prize, that is, W_1 > 0.
On the other hand, the odds, denoted by o, are defined as the ratio between the gross profit and the initial wealth, i.e. o = (W_1 + W_0)/W_0 = 1 + W_1/W_0. It is assumed that the odds and probabilities of the events are fixed over time.
§.§.§ Returns
Formally, the net return ϱ from betting $1 on event e is the random variable taking the value ϱ = o - 1 with probability p and ϱ = - 1 with probability 1-p. In the case where there are m events, we define the random vector ϱ, whose i-th entry is the random variable ϱ_i.
To facilitate notation, we introduce the odds vector o := (o_1, o_2, …, o_m)', representing the individual odds per event. Similarly, we define the probability vector p. Additionally, we denote the diagonal matrix of odds as D_o := diag(o_1, o_2, …, o_m),
i.e. ϱ = D_o𝓂 - 1, where 𝓂∼multinomial(1; p) and 1 denotes the vector of ones.
Likewise, the total return R(ℓ), per unit of initial wealth, for betting a fraction ℓ_i of the wealth on the event e_i is equal to the initial wealth plus the random gains or consequences <cit.>. In mathematical notation,
R(ℓ) = 1 + ∑_i=1^mℓ_iϱ_i = 1 + ℓ' ϱ.
The analysis assumes certain conditions regarding the nature of the betting system. Specifically, it assumes the absence of short selling and borrowing, and that money is infinitely divisible with no minimum bet requirement. These assumptions can be summarized as follows: first, the wager ℓ_i on each event i is non-negative, i.e. ℓ_i≥ 0 for all i; second, the total wager across all events satisfies the constraint ∑_i^mℓ_i≤ 1. Together, these conditions imply that each wager ℓ_i is bounded within the range [0,1].
§.§.§ Betting Market
By definition, for a bet to be fair one would expect the net return to be zero, i.e. 𝔼[ϱ] = 0. The above happens if and only if o = 1/p. But the odds of a bookmaker are always of the form ∑_i^m1/o_i = 1 + tt <cit.>, where tt is known as the commission (commonly known as track take) that a casino charges. Therefore tt > 0. However, since the betting market is a non-efficient market <cit.>, one can find odds from different casinos such that tt ≤ 0. When the market commission is negative, arbitrage is obtained [By arbitrage, it is meant that there is a price advantage between bookmakers such that, regardless of the outcome, a fixed strategy always makes money.]. The strategy that exploits this phenomenon is of the form ℓ_i^(A) := o_i^-1/∑_jo_j^-1, since, fixing the event e_i, the total return R(ℓ_A) = 1 + ℓ_i^(A)o_i - ∑_j^mℓ_j^(A) = 1/(1+tt), ∀ e_i∈ E. It will be shown below that this strategy coincides with the vector of strategies found by the Sharpe Ratio Criterion for non-efficient markets.
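As a small numerical illustration with made-up odds, the commission tt and the arbitrage stakes ℓ^(A) can be computed directly from a set of quoted odds:

import numpy as np

odds = np.array([2.10, 3.60, 4.20])          # hypothetical quotes for the m outcomes of one match
tt = np.sum(1.0 / odds) - 1.0                # bookmaker commission (track take)
stakes = (1.0 / odds) / np.sum(1.0 / odds)   # arbitrage strategy ell^(A)
payout = stakes * odds                       # equals 1/(1+tt) regardless of the outcome
print(tt, stakes, payout)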
§.§ Uncertainty
This section provides a compilation of key findings from Frequentist Statistical Inference and Information Theory, which serve as justifications for the methods employed in this research.
§.§.§ Information Theory
In information theory, the concept of entropy is introduced to quantify the uncertainty associated with a phenomenon represented by the random variable X. Entropy, denoted as H(X), is defined as H(X) := - 𝔼_F[log(f(X))] = - ∫_𝒳log(f(x)) d F(x), where the random variable X follows a distribution characterized by F <cit.>. In other words, we express X as X ∼ F. It is worth noting that the density function f is related to the distribution function F through the expression f(x) = d/dxF(x). The entropy tends to approach 0^+ when uncertainty is low and approaches infinity in the presence of high uncertainty. Cross-entropy, denoted as H(F, G) := - 𝔼_F[log(g(X))], quantifies the difference between distributions F and G for the same phenomenon.
D_KL(F‖G) := 𝔼_F[ log(f(X)/g(X) ) ].
Two important properties of the KL-Divergence are its direct relationship to cross-entropy and the equivalence between maximizing the likelihood of a random sample and minimizing the KL-Divergence. This equivalence is demonstrated through max{L(θ; x)} =
max{1/n∑_j^nlog(f(x_j; θ)) } = max{𝔼_F̂_emp[log(f(X; θ))] } = min{H(F̂_emp, F)} = min{D_KL(F̂_emp‖F)} <cit.>.
§.§.§ Deep Learning
In the realm of event prediction, deep learning methods have gained prominence due to their effectiveness in nonlinear scenarios. They excel in identifying relationships between covariates and exhibit flexibility in optimizing objective functions to address diverse requirements for the same phenomenon <cit.>. For instance, optimizing the KL-Divergence for observed and predicted data is a valuable tool employed to estimate the odds of EPL matches in the study.
With this approach, it becomes possible to make more informed wagers with bookmakers. However, to complete the process, it is essential to determine the specific events to bet on and the corresponding wager allocation based not only on the probabilities estimated but also on the market odds.
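A minimal sketch of such a match-outcome model is given below, assuming PyTorch; the architecture, feature dimension and training data are illustrative placeholders rather than the model used in this study.

import torch
import torch.nn as nn

n_features, n_classes = 30, 3                         # e.g. home win / draw / away win
X_train = torch.randn(256, n_features)                # placeholder match feature vectors
y_train = torch.randint(0, n_classes, (256,))         # placeholder observed outcomes

model = nn.Sequential(
    nn.Linear(n_features, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, n_classes),                         # logits; softmax is applied inside the loss
)
loss_fn = nn.CrossEntropyLoss()                       # cross-entropy, i.e. KL-divergence up to a constant
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(100):
    optimizer.zero_grad()
    loss_fn(model(X_train), y_train).backward()
    optimizer.step()

probabilities = torch.softmax(model(X_train), dim=1)  # estimated outcome probabilities per match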
§.§ Utility Theory
In order to avoid contradictions and paradoxes associated with the moral value given to money, this study adopts the Von Neumann-Morgenstern Utility Axioms <cit.> as the basis for its methodology. Leveraging this theory offers several notable advantages, including the recognition that the nominal value of money differs from its moral value, as well as the establishment of a clear correspondence between qualitative preferences and quantitative utilities in the context of gambling.
Consider a probability space (Ω, ℱ, ℙ), where Ω represents the sample space, ℱ denotes the associated sigma algebra, and ℙ is the probability measure. Within this context, let X be a random variable that follows a distribution characterized by F, and takes on values x belonging to the set 𝒳. The probability distribution F is referred to as a "lottery" over the set 𝒳. The objective is to establish preference relations (≽) over the set of lotteries, denoted by ℒ = {F | F is a probability distribution over 𝒳} <cit.>. In essence, making a decision d ∈ D to choose a lottery F ∈ℒ is tantamount to studying D itself, given the fixed probabilities. Relaxing the assumption of fixed probabilities would transform the gambling problem into a Bayesian framework <cit.>.
As previously mentioned, the moral value of money differs from its nominal value <cit.> <cit.>. To capture this distinction, the moral value associated with a monetary outcome x is modeled using a Bernoulli Utility Function u:𝒳→ℝ. Similarly, to quantify the utility of a lottery F, a Von Neumann-Morgenstern Utility Functional U:ℒ→ℝ is employed. According to the Expected Utility Theorem <cit.>, the utility functional can be expressed as U(F) = ∫_𝒳 u dF = 𝔼_F[u(X)]. Additionally, it follows that U(F_X) ≥ U(F_Y) ⟺ F_X ≽ F_Y <cit.>. Consequently, a gambler's preferences can be quantified through the utility function u, which is determined by the individual's risk profile.
§.§.§ Utility Functions
In economic practice, the utility function u is commonly assumed to be an increasing function that exhibits decreasing marginal rates of substitution. This implies that u'(x) > 0 and u''(x) < 0 for x ∈𝒳. Such assumptions capture the notion that individuals with higher wealth tend to exhibit lower levels of risk aversion <cit.>.
Modern Portfolio Theory, pioneered by Harry Markowitz, supposes a quadratic utility function, where u(x) is a polynomial of degree two, given by u(x) = β_0 + β_1 x + β_2 x^2 <cit.>. Consequently, the utility of a lottery can be expressed as
U(F) = 𝔼_F[u(X)] = β_0 + β_1 𝔼[X] + β_2 (Var(X) + 𝔼[X]^2) = W(μ, σ).
Here, W denotes a utility function that depends on the mean μ and variance σ^2 of the random variable X. Thus, the utility of a lottery can be fully characterized by its mean and variance. In fact, F_1 ≽ F_2 ⟺ W(μ_1, σ_1) ≥ W(μ_2, σ_2). To align with the principle that greater wealth is always preferred, the function W must be monotonically increasing in μ, while also being monotonically decreasing in σ.
On another note, Daniel Bernoulli argued in his work "Exposition of a New Theory of the Measurement of Risk" that the change in utility experienced by an individual is inversely proportional to their wealth <cit.>. Consequently, Bernoulli suggested that utility functions follow a logarithmic form
u'(x) = 1/x ⟹ u(x) = log(x) + C.
Once the utility functions have been characterized and the probabilities of the events have been estimated, the objective is to identify the lottery that maximizes expected utility in order to determine the optimal bet.
§ BETTING STRATEGIES
In this section, we address the problem of determining the best strategy for a set of r random rewards within the betting system described previously.
§.§ Sharpe Ratio
Under the assumption (<ref>) that a rational gambler's utility follows a quadratic form, the objective is to identify the best strategy in the universe Ψ = {ψ = (μ, σ)| ∑_k = 1^rℓ_k = 1} <cit.>. Here, μ = ∑_k^rℓ_kμ_k represents the return of the portfolio, and σ^2 = ℓ' Σℓ denotes the portfolio's variance. The covariance matrix Σ captures the covariances between the r random rewards, while μ_k = 𝔼[ϱ_k] represents the expected value of bet k.
A rational strategy aims to minimize the portfolio variance while maintaining a specified expected return level μ_*. Such a strategy ℓ_* can be obtained by solving the following optimization problem
min_ℓ≥0{ℓ' Σℓ} subject to ℓ' μ = μ_*, ∑_k^rℓ_k = 1.
In the context of sports gambling, where simultaneous bets are placed, the vector of expected returns is given by μ = D_op - 1, and the covariance matrix is Σ = D_o(diag(p) - pp') D_o, as shown in equation (<ref>). In the case of simultaneous bets, Σ becomes a block diagonal matrix, denoted as Σ = diag(Σ_1, Σ_2, …, Σ_r).
The set of optimal portfolios with minimum variance, for all possible return levels, is referred to as the Efficient Frontier <cit.>. To identify the best portfolio within this optimal set, the Sharpe Ratio was used, which is defined as the ratio between the difference of the portfolio return and the risk-free rate R_f, and the standard deviation of the portfolio. The Sharpe Ratio is given by S(ℓ) := (ℓ' μ - R_f)/√(ℓ' Σℓ).
To facilitate the optimization process, it is beneficial to transform the Sharpe Ratio problem into a convex optimization problem by introducing an additional dimension. This transformation helps avoid issues related to non-convexity and local optima <cit.>. We introduce a change of variable y = κℓ, assuming a feasible solution exists such that ℓ' μ > R_f, and fix a scalar κ > 0 such that (μ - R_f1)'y = 1. The resulting convex optimization problem in r+1 dimensions is as follows <cit.>
min_y≥0{y' Σy} subject to (μ - R_f1)'y = 1,
∑_k^ry_k = κ,
κ > 0.
As mentioned in Section <ref>, in a sports betting market where the commission tt is negative and under the strategy ℓ_A, the gross return R(ℓ_A) > 1. The Sharpe Criterion helps identify such market inconsistencies by converging on the optimal strategy ℓ_A. This occurs because 𝔼[R(ℓ_A)] = 𝔼[1 + ℓ_A' ϱ] = 1 + ℓ_A'(D_op - 1) = 1/(1 + tt) > 1. Furthermore, Var(R(ℓ_A)) = ℓ_A' Var(D_o𝓂 - 1) ℓ_A = ℓ_A' (D_o(diag(p) - pp') D_o) ℓ_A = 0. Thus, the optimal strategy under quadratic utilities and arbitrage is ℓ_A. This is due to the fact that the Sharpe Ratio is positively infinite, as the numerator is positive and the denominator is zero, assuming the risk-free asset is zero, which is reasonable for 90-minute bets.
In summary, assuming quadratic utilities, the mean and variance of the returns R_k alone provide both necessary and sufficient information to determine the optimal market portfolio that maximizes the Sharpe Ratio Criterion, which is a crucial aspect discussed in this section. It is important to highlight that this criterion exploits arbitrage opportunities, enabling strategies with no risk at all. Furthermore, the allocation between the portfolio and the risk-free asset R_f can range from 0% to 100% of the total wealth, depending on the individual's risk tolerance. However, it is important to note that, when investing the entire budget in such a portfolio, the probability of ruin becomes positive which is a downside of this model.
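For illustration, the convex reformulation above can be solved with an off-the-shelf solver; the sketch below assumes the cvxpy package, with mu, Sigma and R_f as placeholders for the estimated mean vector, covariance matrix and risk-free rate.

import cvxpy as cp
import numpy as np

def sharpe_optimal_weights(mu, Sigma, R_f=0.0):
    """Solve the convex reformulation of the maximum Sharpe Ratio problem."""
    m = len(mu)
    y = cp.Variable(m, nonneg=True)
    kappa = cp.Variable(nonneg=True)
    constraints = [(np.asarray(mu) - R_f * np.ones(m)) @ y == 1, cp.sum(y) == kappa]
    cp.Problem(cp.Minimize(cp.quad_form(y, Sigma)), constraints).solve()
    # assumes the optimum has kappa > 0, i.e. a portfolio with positive excess return exists
    return y.value / kappa.value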
§.§ Kelly Criterion
The Kelly Criterion is a strategy aimed at maximizing long-term wealth by effectively balancing the potential for large returns with the risk of losses <cit.>. The formula for determining the optimal strategy, denoted as ℓ_*, is derived as follows.
§.§.§ Classical Bivariate Kelly Criterion
Consider a sequence of random rewards {R_j}_j^n, and let W_n denote the final wealth of an individual who reinvests their returns according to a fixed strategy ℓ. At time n, the individual's wealth is given by W_n = W_0∏_j^nR_j(ℓ), where W_0 represents the initial wealth. Defining G_n := W_n/W_0 and taking logarithms, we obtain the random walk expression G_n = ∑_j^nlog(R_j(ℓ)), whose drift term equals the expected value 𝔼[logR_j(ℓ)]. On the other hand, if S_n is the number of victories at time n, then S_n∼binomial(n, p). Hence, the following relationship holds
W_n =(1 + (o - 1) ℓ)^S_n_Winnings(1 - ℓ)^n - S_n_Losses W_0,
G_n = S_nlog(1 + (o - 1) ℓ) + (n - S_n)log(1 - ℓ),
where o denotes the decimal odds of the bet.
Since G_n is a sum of independent and identically distributed (i.i.d.) random variables log(R_j(ℓ)), according to Borel's Law of Large Numbers <cit.>,
lim_n →∞1/n G_n(ℓ) = p log(1 + (o - 1) ℓ) + (1 - p) log(1 - ℓ), with probability 1.
The expression (<ref>), denoted G(ℓ), is defined as the wealth log-growth rate by John Kelly <cit.>. Since G is a function of the strategy ℓ, taking the derivative of G with respect to ℓ leads to the optimal solution. Thus,
G'(ℓ_*) = 0, that is, ℓ_* = (po - 1)/(o - 1).
Recall that in fair gambling (<ref>), the odds of an event e ∈ E are the reciprocal of its probability. However, if 1/o = p̃≠ p, then ℓ_* = (p - p̃)/(1 - p̃). Rearranging terms, it is obtained that
G(ℓ_*) = p log(1 + (o - 1)(p - p̃)/(1 - p̃)) + (1-p) log(1 - (p - p̃)/(1 - p̃))
= p log(p/p̃) + (1-p) log((1 - p)/(1 - p̃))
= D_KL(p ∥p̃).
Thus, the maximum log-growth is equal to the KL-Divergence (<ref>). Therefore, the greater the disparity between the odds and the actual probability observed by the bettor, the greater the competitive advantage.
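A tiny numerical check of this identity, with an assumed win probability and decimal odds that are purely illustrative:

import numpy as np

p, o = 0.55, 2.00                    # assumed true probability and decimal odds
p_tilde = 1.0 / o                    # implied probability behind the quoted odds

ell_star = (p * o - 1.0) / (o - 1.0)                 # Kelly stake, here 0.10
G_star = p * np.log(1 + (o - 1) * ell_star) + (1 - p) * np.log(1 - ell_star)
kl = p * np.log(p / p_tilde) + (1 - p) * np.log((1 - p) / (1 - p_tilde))
assert np.isclose(G_star, kl)        # maximum log-growth equals the KL-divergence
print(ell_star, G_star)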
§.§.§ Properties
The Kelly Criterion possesses several noteworthy properties that contribute to its significance and effectiveness in optimizing long-term wealth accumulation.
Firstly, the log-growth rate G(ℓ) associated with the Kelly Criterion admits a unique optimal strategy, as established by Edward Thorp in his paper "Optimal gambling systems for favorable games" <cit.>. There exists a critical threshold ℓ_c > ℓ_*, where ℓ_* represents the Kelly Criterion strategy, at which G(ℓ) transitions from positive to negative, passing through zero. This property emphasizes the distinct nature of different strategies in relation to their alignment with this critical threshold ℓ_c.
Thorp's research <cit.> also underscores the significant impact of the chosen strategy on capital growth. When G(ℓ) > 0, the wealth W_n grows infinitely with probability 1, highlighting the potential for substantial wealth accumulation. Conversely, for G(ℓ) < 0, W_n converges to zero over time. In the case where G(ℓ) = 0, the wealth exhibits interesting behavior, with the upper limit limsup W_n tending to infinity and the lower limit liminf W_n approaching zero as the investment horizon extends indefinitely. These findings demonstrate the sensitivity of capital growth to the chosen strategy and its profound influence on long-term financial outcomes.
Additionally, the superiority of the Kelly Criterion strategy ℓ_* over alternative strategies ℓ is established by Breiman <cit.>. Irrespective of the specific alternative strategy employed, portfolios adhering to the Kelly Criterion consistently outperform other strategies with probability one in terms of wealth accumulation. As the investment horizon extends indefinitely, the wealth W_n(ℓ_*) of a portfolio following the Kelly Criterion experiences infinite growth relative to the wealth W_n(ℓ) of portfolios employing alternative strategies; that is to say, lim W_n(ℓ_*)/W_n(ℓ) = ∞ when n →∞, with probability 1. This highlights the remarkable advantage of the Kelly Criterion in maximizing long-term wealth accumulation.
The properties of the Kelly Criterion underscore its significance and effectiveness in optimizing long-term wealth accumulation. The distinct nature of strategies in relation to the critical threshold, the sensitivity of capital growth to the chosen strategy, and the superior performance of the Kelly Criterion strategy over alternatives all contribute to its importance. By aligning with the Kelly Criterion, investors can enhance their wealth accumulation potential, leading to more favorable financial outcomes in the long run.
§.§.§ Multivariate Kelly Criterion
Kelly's Criterion can be expanded to encompass multivariate bets, where there are m possible events associated with the same phenomenon. When event e_i occurs, the raw return is R(e_i; ℓ) = 1 + o_iℓ_i - ∑_j^mℓ_j, where o_i is the decimal odds of event i and ℓ represents the vector of stakes corresponding to each event.
Considering S_i as the number of occurrences of the i-th outcome, where ∑_j^mS_j = n, the wealth at trial n is given by W_n(ℓ) = ∏_i^m (1 + o_iℓ_i - ∑_j^mℓ_j)^S_iW_0. By taking logarithms and considering the limit as n approaches infinity, the log-growth rate is obtained as follows <cit.>
G(ℓ) = ∑_i = 1^m p_ilog(1 + o_iℓ_i - ∑_j=1^mℓ_j), with probability 1.
The formulation of the Multivariate Kelly Criterion in a matrix-based representation constitutes an original contribution. By introducing the probability vector p∈ [0,1]^m of the probabilities associated with each event and the consequences matrix W = [w_1|w_2|…|w_m], where w_j = o_j𝐞̂_j and 𝐞̂_j is the j-th canonical vector, the problem of determining the Multivariate Kelly Criterion can be cast. The objective is to maximize the expression
max_ℓ{p' log(1 + W'ℓ - ∑_i=1^mℓ_i1)} subject to ∑_i=1^mℓ_i≤ 1, ℓ≥0.
Importantly, the optimization problem associated with the Multivariate Kelly Criterion exhibits a concave structure, enabling its solution through convex optimization algorithms such as Sequential Quadratic Programming (SQP) <cit.>. While Smoczynski has proposed an algorithm for determining ℓ_* in his work <cit.>, a closed-form solution is not available. The formulation of this criterion in matrix form significantly contributes to a deeper comprehension of its characteristics. By focusing on the functional form of the log-growth and obtaining analytical gradients, the criterion function can be maximized effectively. Furthermore, this matrix-based approach facilitates practical implementation by eliminating the need for iterative loops and relying solely on linear algebraic operations. As a result, this formulation not only enhances theoretical understanding but also provides valuable insights for the efficient application of the Multivariate Kelly Criterion.
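A minimal sketch of this matrix formulation for a single hypothetical three-outcome match is given below; the probabilities, odds and the use of SciPy's SLSQP routine are our own illustrative assumptions and not the algorithm of Smoczynski:

import numpy as np
from scipy.optimize import minimize

p = np.array([0.50, 0.30, 0.20])           # assumed probabilities of the m events
o = np.array([2.15, 3.10, 4.50])           # assumed decimal odds (no arbitrage)
W = np.diag(o)                             # consequences matrix, w_j = o_j e_j

def neg_log_growth(l):
    wealth = 1.0 + W.T @ l - l.sum()       # gross return under each of the m outcomes
    return -(p @ np.log(np.maximum(wealth, 1e-12)))   # clipped to keep the logs finite

res = minimize(neg_log_growth,
               x0=np.full(3, 0.01),
               bounds=[(0.0, 1.0)] * 3,
               constraints=[{"type": "ineq", "fun": lambda l: 1.0 - l.sum()}],
               method="SLSQP")
print(res.x, -res.fun)                     # only the value bet (outcome 1) receives a stake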
§.§.§ Multivariate and Simultaneous Kelly Criterion
In the most general case, the multivariate criterion has been extended to incorporate simultaneous random rewards. This extension represents a novel development in the field. In this scenario, a set of r independent random rewards occurs simultaneously, each with m_k possible events. The probability space is defined as Ω = ⨉_k^rΩ_k, resulting in a total of M = ∑_k^r m_k events and N = ∏_k^r m_k possibilities. The strategy vector is defined as the concatenation of the betting vectors of the r random rewards, denoted as ℓ = (ℓ_1, ℓ_2, …, ℓ_r)' ∈^M. Similarly, the decimal odds vector is represented as o and the net returns vector as ϱ. The overall raw return of the strategy, denoted as R(ℓ), can be expressed as R(ℓ) = 1 + ℓ'ϱ. The matrix of consequences, denoted as W ∈^M × N, represents the profit possibilities, with each column corresponding to a joint outcome ω_j = (ω_j_1, ω_j_2, …, ω_j_r), one component from each sample space Ω_k⊆Ω,
i.e. w_j = D_o (𝐞̂_j_1, 𝐞̂_j_2, …, 𝐞̂_j_r)', with j_k∈ω_j for k = 1, 2, …, r and j = 1, 2, …, N.
Since the outcomes of each bet are assumed to be independent, the probability vector is given by
p_i = ℙ[ϱ(ω_i) ] = ℙ[ ⋂_k^rϱ_k(ω_i,k) ] = ∏_k^rℙ [ϱ_k(ω_i,k) ].
Consequently, the Multivariate and Simultaneous Kelly Criterion is formulated as the optimization problem
max_ℓ{p' log(1 + W'ℓ - ∑_i=1^Mℓ_i1)} subject to ∑_i=1^Mℓ_i≤ 1, ℓ≥0.
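The sketch below shows how W and p can be assembled for r = 2 hypothetical independent matches; the odds and probabilities are again illustrative, and once W and p are built the optimization is identical to the single-match sketch above:

import numpy as np
from itertools import product

probs = [np.array([0.50, 0.30, 0.20]), np.array([0.45, 0.28, 0.27])]
odds  = [np.array([2.15, 3.10, 4.50]), np.array([2.30, 3.40, 3.60])]
M = sum(len(o_k) for o_k in odds)          # total number of bets (here 6)
W_cols, p_joint = [], []
for combo in product(*[range(len(pk)) for pk in probs]):
    col = np.zeros(M)
    offset = 0
    for k, j in enumerate(combo):          # exactly one winning event per match
        col[offset + j] = odds[k][j]
        offset += len(odds[k])
    W_cols.append(col)
    p_joint.append(np.prod([probs[k][j] for k, j in enumerate(combo)]))
W = np.column_stack(W_cols)                # M x N consequences matrix (here 6 x 9)
p = np.array(p_joint)                      # joint probabilities, summing to one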
The extension of the Multivariate Kelly Criterion to incorporate simultaneous and multiple random rewards presents significant advantages and opportunities for optimizing decision-making in intricate betting scenarios. As previously discussed in the context of the Multiple Kelly Criterion, this expanded framework offers a distinct advantage by enabling analytical and programmatically efficient implementation. By leveraging the functional form of the criterion, decision-makers can employ a numerical approach that is both robust and stable within the realm of matrix algebra.
The aforementioned properties of the Kelly Criterion highlight its resilience and competitive edge in the pursuit of long-term wealth accumulation. The existence of a distinctive optimal strategy, coupled with the profound influence of strategy selection on capital growth, solidifies the Criterion's efficacy and prominence within the realm of financial decision-making. Building upon these foundational principles, we can now delve into a real-world application that pits the Kelly Criterion against the Sharpe Criterion: betting in the English Premier League. This practical demonstration will further illuminate the practical implications and comparative performance of these two criteria in a tangible and relevant context.
§ RESULTS
This section presents the optimal betting portfolios for the second part of the 2020/2021 season of the English Premier League. These portfolios are derived using the Sharpe Ratio Criterion and the Kelly Criterion based on odds estimated by deep learning models. For each criterion, two types of strategies were examined: unrestricted strategies and restricted strategies. The former allows for betting on all possible events of each match, while the latter imposes limitations on the number of events that can be bet upon.
It is worth noting that excessive betting can lead to a negative log-growth rate G(ℓ) < 0, resulting in the wealth tending towards zero, as mentioned in the first property of the Kelly Criterion <cit.>. However, according to the findings of E. Thorp stated in <ref>, this negative impact only occurs beyond the optimal point. Therefore, it is preferable to underestimate the bets rather than overestimate them. This rationale underscores the inclusion of fractional bets in the strategies.
§.§ Predictive Model
The predictive model was constructed based on data obtained from three distinct public sources. Firstly, EA Sports' ratings from SoFIFA (https://sofifa.com/) <cit.> were utilized to assess the overall quality, offense, midfield, and defense of teams on a weekly basis. Secondly, team statistics were collected from Understat (https://understat.com/) <cit.>. Lastly, match odds from multiple bookmakers, obtained through Football Data U.K. (https://www.football-data.co.uk/englandm.php) <cit.>, were integrated into the model.
§.§.§ Data
The dataset encompassed the period from the 2014/2015 season to the 2020/2021 season. To ensure the reliability of the analysis, data from the initial week of each season was excluded. These initial weeks were deemed less informative due to ongoing team rebuilding during the summer transfer market, introducing substantial uncertainty in team performance.
The final dataset comprised a total of 2,660 games, which were divided into three distinct groups: training (2014/2015 to 2019/2020 seasons), validation (first half of the 2020/2021 season), and test (second half of the 2020/2021 season). Temporal variables were aggregated using weighted maximum likelihood, employing an exponential decay rate of 0.1. This particular value was suggested by the Dixon-Coles analysis <cit.> and effectively captured the diminishing impact of historical data as time progressed. Moreover, the model considered variables for both home and visiting teams, encompassing various aspects of team performance (refer to the appendix for detailed variable information).
As an additional input to the model, the final odds from Pinnacle Sports <cit.>, a reputable bookmaker, were incorporated. This incorporation was motivated by the work of Surowiecki, which recognizes that market information holds predictive power in estimating potential events <cit.>. By integrating these odds, the model could effectively leverage market insights to enhance its predictive capabilities.
§.§.§ Model Selection
To determine the variables with the highest predictive power, lasso-shrinkage techniques were employed for a multinomial regression analysis <cit.>. The use of variable selection was motivated by the relatively limited size of the dataset compared to the number of parameters in the neural network, which increased the risk of overfitting.
By employing the lasso technique, a subset of variables with the most significant predictive impact was identified. The multinomial regression model trained on this reduced set of variables achieved a predictive accuracy of 41.76% and a cross-entropy value of 1.34. In contrast, when trained on the complete set of variables, the model achieved an accuracy of 51.76% and a cross-entropy value of 1.22. It is important to note that the coefficients of the multinomial regression model were not easily interpretable, and the accuracy was significantly reduced by 19.3% compared to the original model, while the entropy was 9.8% higher. Considering these findings, it was determined that utilizing all variables in the subsequent deep learning models would be more beneficial. This decision was influenced by the understanding that deep learning models have the capability to effectively capture complex relationships and patterns present in the data, even if certain variables may have less intuitive interpretations.
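For reference, a lasso-penalized multinomial regression of this kind can be sketched as follows; the synthetic feature matrix, the penalty strength C and the saga solver are stand-ins for the actual data and tuning of this study:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, log_loss

rng = np.random.default_rng(0)
X_train, X_val = rng.normal(size=(2200, 30)), rng.normal(size=(380, 30))
y_train = rng.choice(["H", "D", "A"], size=2200)   # placeholder H/D/A labels
y_val = rng.choice(["H", "D", "A"], size=380)

clf = LogisticRegression(penalty="l1", solver="saga", C=0.1, max_iter=5000)
clf.fit(X_train, y_train)
proba = clf.predict_proba(X_val)
print(accuracy_score(y_val, clf.predict(X_val)),
      log_loss(y_val, proba, labels=clf.classes_))
kept = np.any(clf.coef_ != 0, axis=0)      # variables surviving the shrinkage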
A training set consisting of 2,200 observations was obtained, while the number of parameters to estimate was 2,700, given the 30 × 30 × 3 dimensions. Consequently, regularization was necessary to address the challenge of having more variables than observations.
Furthermore, certain hyperparameters were fixed across all deep learning models with different regularizations using the framework described at <cit.> <cit.>. These fixed hyperparameters included the architecture design, where a funnel architecture was employed, progressing from a higher number of neurons to a lower number as the number of hidden layers increased. The random seed and kernel initializer were also standardized, using He-Normal for hidden layers and Glorot-Normal for the output layer <cit.>. The number of hidden layers was set to three, and for models incorporating batch normalization regularization, the number of hidden layers became a hyperparameter. Additionally, the NAdAM learning algorithm <cit.> was utilized consistently across all models.
Subsequently, several hyperparameters were learned through the optimization process. The number of neurons in the first layer was explored within a range of 70% to 200% of the number of variables, while for subsequent layers, the algorithm initiated the search with the number of neurons from the previous layer. The learning rate was selected from a list of three values: 10^-2, 10^-3, and 10^-4. The penalization magnitude and the convexity trade-off (elastic net) were determined by sampling penalty values from a uniform distribution.
To evaluate the predictive errors and information loss, cross-temporal validation techniques, specifically the Split Temporal Cross-Validation (STCV) approach, were employed.
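A rough stand-in for this temporal validation scheme, using scikit-learn's expanding-window splitter on synthetic data (the real STCV splits follow the season calendar rather than equal-sized folds):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.model_selection import TimeSeriesSplit

rng = np.random.default_rng(0)
X = rng.normal(size=(2200, 30))
y = rng.choice(["H", "D", "A"], size=2200)         # matches ordered in time

for fit_idx, val_idx in TimeSeriesSplit(n_splits=5).split(X):
    model = LogisticRegression(penalty="l1", solver="saga", C=0.1, max_iter=5000)
    model.fit(X[fit_idx], y[fit_idx])              # fit only on earlier matches
    proba = model.predict_proba(X[val_idx])
    print(log_loss(y[val_idx], proba, labels=model.classes_))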
§.§.§ Model Assessment
Subsequently, the best models selected from the training and validation sets were trained. Among the different architectures considered, the Drop Out architecture consistently demonstrated superior performance in terms of precision and information loss. Therefore, it was chosen as the best model for further analysis.
The predictions made by the model were compared to the pre-match odds provided by the bookmaker Pinnacle Sports. This comparison was motivated by the belief that the odds reflect the collective wisdom of the crowds <cit.>.
In terms of accuracy, the model achieved a 54% accuracy rate, while the crowds achieved 51.5%. In comparison, the null models of "always bet on the home team," "always bet on a draw," and "always bet on the away team" had accuracy rates of 38%, 20%, and 42% respectively.
Regarding information loss, the model achieved a value of 1.0318, while the crowds achieved 0.9966, and the historical frequencies up to the 19th matchweek of the 2020/2021 season had an information loss value of 1.0631.
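The comparison against the crowds can be reproduced mechanically by normalizing the inverse final odds into probabilities and scoring them with the cross-entropy; the odds and outcomes below are made up for illustration:

import numpy as np
from sklearn.metrics import log_loss

# Hypothetical final odds (home, draw, away) and outcomes coded 0 = H, 1 = D, 2 = A.
final_odds = np.array([[1.80, 3.60, 4.50],
                       [2.50, 3.30, 2.80],
                       [1.50, 4.20, 6.50],
                       [2.10, 3.40, 3.50]])
outcomes = np.array([0, 2, 0, 1])

implied = 1.0 / final_odds
crowd_probs = implied / implied.sum(axis=1, keepdims=True)   # remove the track take
print(log_loss(outcomes, crowd_probs, labels=[0, 1, 2]))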
§.§ Portfolios
This section presents the results of the optimal portfolios based on quadratic and logarithmic utilities. It is important to note the following assumptions and constraints throughout the analysis: no more than 100 percent of the wealth can be bet, short selling is not allowed, the risk-free asset has a zero return, money is infinitely divisible, and the portfolio is also infinitely divisible[For numerical purposes, any wager less than 0.0001 was considered negligible.]. Without loss of generality, the initial wealth is set to $1. Gains are reinvested each matchweek, i.e. W_n = ∏_J19^J38R_i(ℓ).
Four types of scenarios were considered for the portfolios. Each strategy was evaluated under two scenarios: restricted betting, where only a single event per match can be bet upon, and unrestricted betting, allowing for multiple bets per match. Furthermore, portfolios were examined for both full strategies and fractional strategies at f = 17%. The optimal fraction of 17% for this set was determined after conducting two hundred Dirichlet simulations in the validation set. In practice, it is common to use 25% of the Kelly Criterion strategy for betting purposes. The decision to split the bets is based on the insight that under-betting is preferable to over-betting, as indicated by E. Thorp <cit.>.
§.§.§ Complete Strategies
In the test set, there are 20 matchweeks, with 10 matches taking place on each match day. Each match has three possible outcomes: home win, draw, or away win. The results of one match are independent of the results of the other matches. Consequently, there are M = ∑_i^rm_i = 3 × 10 = 30 possible bets and N = ∏_i^rm_i = 3^10 = 59,049 combinations of outcomes per matchweek.
For portfolios with logarithmic utilities, the optimal bets were determined by optimizing the Multiple Simultaneous Kelly Criterion (<ref>) using the SQP algorithm. On average, this model took approximately 20.9 seconds to converge per fixture. However, it should be noted that the numerical algorithm did not converge for matchweek 23 due to gradient overflow. Despite this challenge, it was decided to use the stakes from the last iteration of the algorithm as an approximation for that matchweek. Although it is important to acknowledge this approximation, it was necessary in order to maintain the continuity of the analysis and ensure the inclusion of matchweek 23 in the overall assessment of the strategies.
In the case of quadratic utilities, the optimal portfolios were obtained by maximizing the Sharpe Ratio through the solution of the convex optimization problem (<ref>) using the SQP algorithm. On average, the algorithm took 5.3 seconds per fixture to converge. It is worth mentioning that all optimizations successfully converged for the Sharpe Ratio criterion.
Three matches with arbitrage opportunities across three different matchweeks were identified. One notable example is the match between Everton and Aston Villa on June 5, 2021, which had a commission of tt = -0.0085. The following table provides a summary of the performance of both strategies in comparison to the match featuring arbitrage on that particular match day.
§.§.§ Restricted Strategies
The restricted betting sample space consists, for each match, of the event with the highest expected value. Mathematically, it can be defined as Ω := ⨉_i^r{ω_k : k = argmax_j{𝔼[ϱ_i,j]}, j = 1, 2, …, m_i}. Consequently, the number of possible outcomes is reduced to N = 2^10, and the total number of bets is M = 3 × 10. The algorithms for both utilities exhibited convergence, with an average convergence time of approximately 1 second for all match days, and there were no instances of non-convergence.
The table below presents the portfolios derived from the Kelly Criterion and the Sharpe Ratio Criterion for the final matchweek of the English Premier League. It is noteworthy that the Kelly criterion never wagers the entire wealth, unlike the Sharpe Ratio criterion. Additionally, while the two criteria allocate similar amounts for each event, the magnitude of the largest bet differs between them.
§.§.§ Portfolios' Performance
The metrics for the eight portfolios for the 20 fixtures are summarized below, including the metrics pval bets and pval wealth. The pval bets metric represents the p-value of the hypothesis test that the average of the betting outcomes is not positive. Similarly, the pval wealth metric is the p-value for the hypothesis that the wealth obtained in the matchweeks is not positive.
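These p-values correspond to one-sided location tests; with hypothetical weekly results they can be computed as below (the returns are invented and a t-test is assumed, which may differ from the exact test used in the tables):

import numpy as np
from scipy import stats

weekly_returns = np.array([0.12, -0.30, 0.05, 0.40, -0.15, 0.08, 0.22, -0.05,
                           0.10, -0.20, 0.30, 0.02, -0.10, 0.18, 0.07, -0.25,
                           0.15, 0.05, -0.08, 0.11])     # 20 hypothetical fixtures

# H0: the mean weekly return is not positive; requires SciPy >= 1.6 for `alternative`.
t_stat, pval_wealth = stats.ttest_1samp(weekly_returns, popmean=0.0,
                                        alternative="greater")
print(t_stat, pval_wealth)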
§ CONCLUSIONS
This study aimed to address several key aspects: first, identifying the optimal betting strategy for a rational gambler seeking to maximize expected utility, taking into account logarithmic or quadratic utilities; second, analyzing the characteristics and performance of complete and restricted betting strategies to gain insights into their respective qualities; third, exploring the qualitative and quantitative differences between portfolios constructed based on the Kelly Criterion and the Sharpe Ratio Criterion in a real-life betting scenario; and fourth, developing statistical learning models to forecast outcomes in the English Premier League. Notably, this study made significant contributions by introducing a matrix-based formulation (<ref>) for the multivariate Kelly Criterion and formulating the optimization problem for the multiple and simultaneous Kelly Criterion (<ref>), enhancing the understanding and applicability of these approaches in practical settings.
The first objective of this study was addressed by developing a systematic method to identify optimal bets within the framework of (D, E, C, (≽)). The method followed a step-by-step approach: first, the set E was defined to determine the elements of C, and the set D was subsequently constructed from these two sets; second, the probabilities of the joint events were estimated; third, the optimal decision d_ℓ_* was determined by maximizing the expected utility of returns, denoted as R(ℓ), while ensuring that d_ℓ_* belongs to the set D; finally, appropriate metrics were defined to assess the performance of the strategies in terms of returns.
Regarding the second aspect investigated, it was observed that although the restricted strategies exhibited better convergence and higher returns compared to the complete strategies, they also displayed higher variance and lower diversification. Notably, in extreme cases, the Sharpe Ratio criterion led to the possibility of the player's ruin, as evidenced by the complete loss of funds two weeks prior to the conclusion of the study. Moreover, it was theoretically established that the advantages of exploiting arbitrage opportunities diminish when stakes are limited. Furthermore, the narrowing of the set of possible bets in restricted strategies resulted in a compromise of the fundamental properties of the criteria. Both the Kelly Criterion and the Sharpe Ratio Criterion exhibit similarities and differences in their portfolio outcomes. Firstly, unlike portfolios based on the Sharpe Ratio, Kelly-based portfolios offer protection against the risk of player ruin. For instance, on average, the full investment strategy and the Kelly-constrained strategy wagered 98% and 83% of the total wealth, respectively. Secondly, it is noteworthy that the logarithmic growth achieved through the Kelly Criterion is not invariant to fractional bets, which sets it apart from the Sharpe Ratio approach. Consequently, fractionalizing the Kelly strategy prior to optimization results in a distinct strategy compared to the fractional strategy fℓ derived post-optimization.
Thirdly, it is noteworthy that portfolios utilizing logarithmic utilities tend to exhibit relatively lower levels of diversification compared to portfolios employing quadratic utilities, as a proportion of the total stake invested in the portfolio. This observation was addressed by examining the maximum stake per matchweek relative to the total amount wagered in the portfolio, revealing that the Kelly Criterion maximum bet averaged 29.7% compared to the maximum Sharpe Ratio's bet average of 25.9%. Consequently, logarithmic portfolios tend to display higher volatility, resulting in more pronounced gains or losses. However, when considering the total budget of $1, the maximum Kelly Criterion stakes average 1.9% less than the Sharpe Ratio approach. This discrepancy arises due to the logarithmic nature of the growth function G(ℓ), which tends towards negative infinity as ∑_iℓ_i approaches 1, as demonstrated by Thorp <cit.>.
Fourth, while the constituent elements of the portfolios are similar for both the Kelly Criterion and the Sharpe Ratio Criterion (see Table <ref>), the strategies themselves are not proportionally aligned between the two methods. This divergence is evident even in extreme cases such as arbitrage, where notable differences are observed in the resulting portfolios (see Table <ref>).
In regards to the deep learning model, several considerations need to be addressed. Despite the advantages in terms of predictive power, flexibility, and relaxed assumptions offered by neural networks in modeling sporting events, it is important to acknowledge that the present study faced limitations due to the low volume of available data. Nevertheless, employing a deep learning approach still represents an improvement over traditional multinomial regression for sports modeling. Furthermore, the selection of variables used in the model remains an area of ongoing investigation. It is evident that there is a deficiency in the number of variables available to accurately determine match outcomes, particularly at the individual player level. Despite this limitation, Zimmerman suggests that there exists an empirical benchmark of 75% in terms of predictive accuracy for sporting events <cit.>. Moreover, Hubáček emphasizes that the selection of variables holds greater significance than the specific statistical model employed <cit.>. These perspectives underscore the importance of further refining the variable selection process to enhance the predictive capabilities of the model.
Finally, it is imperative to validate the underlying assumptions of the models. The cornerstone of the developed method relies on the premise that the actual probabilities of events are known. However, upon examining the predictions' variability in Table <ref> and the classification errors illustrated in Table <ref>, doubts arise regarding the veracity of this assumption. Notably, the model's a priori cross entropy was anticipated to be 1.0318, whereas the empirical entropy based on frequencies amounts to 1.0631. Furthermore, the maximum entropy for a three-event phenomenon is calculated to be 1.0986. Consequently, the certainty surrounding this assumption is called into question.
Furthermore, it is crucial to evaluate whether the model's performance of 35.8% can be attributed to "luck" or "skill." To address this, the model is subjected to a test against 500 Dirichlet simulations with unit parameters, conducted within the same test period but without restrictions and employing strategies bounded to the same fraction f as the model in question. The results reveal that, on average, the model outperforms 78 out of 100 simulations. This serves as compelling evidence that the model's performance extends beyond mere chance. Nevertheless, despite investigating the relationship between log-growth and performance for the simulations, no linear evidence supporting such a connection is found (p-value of 0.595).
§.§ Future Research
The findings and hypotheses explored in this study open up various avenues for future research. These avenues span both financial methodologies and predictive modeling in the context of sports betting.
From a financial perspective, it would be of great theoretical and empirical interest to examine the long-term behavior of returns when encountering arbitrage opportunities. Exploring regularization techniques, particularly in the context of L_2 regularization for betting portfolios, could also yield valuable insights. Additionally, investigating the divergences in portfolio composition and returns between restricted and full strategies, considering both quadratic and logarithmic utilities, through simulation studies would provide further understanding.
In the realm of predictive modeling, there are multiple aspects worth exploring. One avenue is adopting a Bayesian perspective to analyze the prediction problem, complementing the frequentist approach employed in this research. Furthermore, incorporating player-level data in addition to aggregate team data could enhance the predictive model's accuracy and granularity. The inclusion of data from other soccer leagues could also contribute to a more comprehensive and robust modeling approach.
It is important to acknowledge that the work presented in this study is grounded in the Von Neumann and Morgenstein's Axioms of Preference. However, it is well-known that these axioms may not always hold in reality. Exploring the disparities between theoretical preferences and empirical observations, given the same available information, would shed light on the limitations and challenges associated with relying solely on axiomatic models. Furthermore, investigating optimal policies under Reinforcement Learning models could provide valuable insights into the dynamic decision-making processes within the realm of sports betting.
Overall, the future research directions outlined above have the potential to further advance the understanding of financial strategies, predictive modeling approaches, and the complex dynamics of decision-making in sports betting beyond this paper.
§ APPENDIX
The appendix provides a list and description of the three databases utilized in this study for the estimation of probabilities and for obtaining odds from various bookmakers. The variables employed in the prediction of match outcomes are marked with a dagger (†), and the target variable is marked with an asterisk (*).
§.§ Football Data U.K.
The "Football Data U.K." database contains records representing individual matches played in the English Premier League from the 2013/2014 season through the 2020/2021 season. The database includes a range of variables that provide valuable insights into match characteristics and team performance. However, it should be noted that a subset of variables was excluded from the analysis for matches prior to 2019, as these variables were not available in the reported files during that time period.
§.§.§ Fetched Variables
* date Date in day/month/year on which the match took place.
* hometeam Home team name.
* awayteam Name of the team playing as visitor.
* fthg Goals scored at the end of the home team's game.
* ftag Goals scored at the end of the game by the away team.
* ftr Final score. The possible values are H, D, A, which represent that the home team won, the match was drawn, or the visiting team won, respectively.
* referee Name of the main referee who directed the match.
* _h, _d, _a The bookmakers' odds for the possible outcomes of the match. The information is collected on Tuesdays and Fridays. The bookmakers are:
* b365 Bet365.
* bw Bwin.
* iw Interwetten.
* ps Pinnacle Sports.
* vc Victor Chandler.
* wh William Hill.
* †psch, pscd, psca Pinnacle Sports' final odds, that is, an instant before the match starts, for the result that the home team wins, draws or that the away team wins.
§.§.§ Generated Variables
* †matchweek The day on which the respective matches are played.
* *result The variable ftr renamed.
* season Season in which they are playing. If the season is 2013/2014, 13 is captured.
* maxo_ The maximum odds for the three possible outcomes (H, D, A) among the six bookmakers mentioned at the beginning. Note: Pinnacle Sports' final odds are not considered. Note 2: These are the odds used for betting.
* market_tracktake The market commission. That is, the sum of the inverse of the maximum odds found for each outcome.
* diff_ The relative difference between the collected odds (without the respective commission) and the final Pinnacle Sports odds.
§.§ Understats
The Understats database provides match-level data for each team in the English Premier League. Each record in the table represents a match that a team has played, rather than the match itself. This means that if there are 380 matches in the Premier League in a season, there will be 760 records in this table for that season.
It is important to note that observations for teams on the first matchweek of each season were removed from the dataset. This is because the variance between the last game of the previous season and the first game of the current season tends to be very large due to changes in the player market and initial team performances, as mentioned on the present work. Removing these observations helps to avoid potential biases in the data caused by these factors.
§.§.§ Fetched Variables
* h_a Character that represents whether the team plays as home or away team, whose values are a, h, respectively.
* xG Number of goals expected in the match by the team.
* xGA Number of goals expected in the match by the opposing team.
* npxG Number of expected goals in the match by the team without taking penalties into account.
* npxGA Number of goals expected in the match by the opposing team without taking penalties into account.
* npxGD Difference between npxG and npxGA.
* deep[These statistics present great inconsistencies with respect to the official Understats page. Likewise, since they were obtained through an R and Python package, the methodology with which these variables were obtained is unknown.] Number of passes completed by the team in the last quarter of the pitch (on the opposing team's side).
* deep_allowed[1] Number of passes completed by the opposing team in the last quarter of the pitch (on the team's side).
* scored Number of goals scored by the team in the match.
* missed Number of goals conceded by the opposing team in the match.
* xpts Number of expected points. It is the expected result for the team.
* result Result of the match for the team. Possible values are w, d, l representing that the team won, drew or lost the match, respectively.
* date Date in year-month-day when the match took place.
* wins, draws, loses Dummy variables representing whether the team won, drew or lost the match, respectively.
* pts Points obtained by the result of the match for the team. Winning, drawing or losing awards 3, 1 and 0 points, respectively.
* ppda.att[1] Total passes made by the team when attacking divided by the number of defensive actions of the opposing team (interceptions + tackles + fouls). Metric suggested by Colin Trainor.
* ppda.def[1] Total passes made by the team when defending divided by the number of defensive actions by the opposing team.
* ppda_allowed.att[1] Total passes made by the opposing team when attacking divided by the number of the team's defensive actions (interceptions + tackles + fouls). Metric suggested by Colin Trainor.
* ppda_allowed.def[1] Total passes completed by the opposing team while defending divided by the number of defensive team actions.
* team_id Id with which Understats identifies the team.
* team_name Name with which Understats identifies the team.
* league_name Name by which Understats identifies the league.
* year Season number of the match. If the season is 2013/2014, 2013 is captured.
* matchweek Day of the season of the current match. There are 38 matchweeks in total.
§.§.§ Generated Variables
* †position_table The position in the table for the current matchweek, before the games.
* †total_points The total points of the team for the current day before the games.
* †promoted_team Dummy variable indicating whether the team was promoted to the EPL in the current season.
* †big_six Dummy variable that indicates whether the team is a Big Six. That is, if the team is Arsenal, Chelsea, Liverpool, Manchester City, Manchester United or Tottenham.
* season The last two digits of the variable .
* †npxGD_ma It is the weighted average, with ξ = 0.1, of the npxGD from the first record of the team until the current matchday prior to the games.
* †npxGD_var The weighted variance, with ξ = 0.1, of the npxGD from the team's first record to the present day prior to the games.
§.§ SoFIFA
The author utilized web scraping techniques to download tables from the SoFIFA page for the English Premier League. The data obtained represents the statistics of teams for each week of each season, with a one-matchday delay to reflect the team's status in the corresponding week prior to the matches to be played.
In the dataset, each record represents a team in a specific week. However, there are cases where more than one team's statistics were reported in a single week. To address this, only the last record reported on the page for each week was considered. In instances where there is no record available for a particular week, data from the previous week was utilized instead.
§.§.§ Fetched Variables
* name_team Name under which SoFIFA identifies the team.
* id Id with which SoFIFA identifies the team.
* †ova Rating from 1 to 100 of the team's overall performance up to that week.
* †att Rating from 1 to 100 of the team's attack up to that week.
* †mid Rating from 1 to 100 of the team's midfield up to that week.
* †def Rating from 1 to 100 of the team's defense up to that week.
* †transfer_budget Budget for the transfer market, in millions of euros, of the team for that season.
* speed[Despite being variables with excellent information, for the 2019 seasons onwards, SoFIFA stopped updating these values and they became constant for all teams. So these variables were no longer, in their entirety, informative.] Type of speed the team plays with.
* dribbling Type of dribbling with which the team plays[2].
* passing Type of passes with which the team plays. They can be very risky, normal or safe passes.[2].
* positioning Formation with which the team plays.
* crossing[2] Type of band changes in the passes with which the team plays.
* aggression[2] Aggressiveness with which the team defends.
* pressure[2] Pressure with which the team defends.
* team_width Width of the formation with which the team plays.
* defender_line[2] Type of mark with which the team defends.
* dp Number of the team's domestic prestige. It is rated from 1 to 20.
* †ip Number of the international prestige of the team. It is graded from 1 to 20.
* players Number of players registered by the team to play in the current EPL season.
* †saa Average age of the starting roster for that season as of that date.
* taa Average age of the team for that season at that date.
* date Date in year-month-day when SoFIFA published the EA Sports data of the teams.
* fifa Name and number of the EA Sports FIFA video game.
* year_week Date in year-week when SoFIFA published the EA Sports data of the teams. The date is in ISO 8601 format.
§.§.§ Generated Variables
No new variables were transformed or generated.
§.§ Main Database
The main database used in the machine learning models consists of various variables, each undergoing a specific transformation before being used to train the neural networks. It should be noted that the standardization or normalization transformations applied to the variables have a time window of one matchweek, meaning that the calculations are based on data from the same week. These transformations are important for ensuring that the variables are appropriately scaled and prepared for input into the neural networks.
* matchweek[Normalized. That is to say s = (x - x_(1)) / (x_(n) - x_(1)).]
* position_table_home[Inversely normalized. In other words s = (x_(n) - x) / (x_(n) - x_(1)).]
* total_pts_[Standardized, i.e. t = (x - x̅) / ŝ.]
* npxGD_ma_home
* npxGD_var_home
* big_six_home
* promoted_team_home
* position_table_away[15]
* total_pts_away[16]
* npxGD_ma_away
* npxGD_var_away
* big_six_away
* promoted_team_away
* ova_home[16]
* att_home[16]
* mid_home[16]
* def_home[16]
* transfer_budget_home[Normalized, with the maximum observation cliffed at 100.]
* ip_home[16]
* saa_home[14]
* ova_away[16]
* att_away[16]
* mid_away[16]
* def_away[16]
* transfer_budget_away[17]
* ip_away[16]
* saa_away[14]
* proba_h
* proba_d
* proba_a
|
http://arxiv.org/abs/2307.05760v1 | 20230711193009 | Line Art Colorization of Fakemon using Generative Adversarial Neural Networks | [
"Erick Oliveira Rodrigues",
"Esteban Clua",
"Giovani Bernardes Vitor"
] | cs.CV | [
"cs.CV"
] |
Line Art Colorization of Fakemon using Generative Adversarial Neural Networks
Erick Oliveira Rodrigues
Department of Academic Informatics (DAINF)
Universidade Tecnologica Federal do Parana
Pato Branco, Brazil
[email protected]
Esteban Clua
Computer Science Department
Universidade Federal Fluminense
Niteroi, Brazil
[email protected]
Giovani Bernardes Vitor
Institute of Technological Sciences
Universidade Federal de Itajuba
Itabira, Brazil
[email protected]
August 12, 2023
=======================================================================================================================================================================================================================================================================================================================================================================================================================================================
This work proposes a complete methodology to colorize images of Fakemon, anime-style monster-like creatures. In addition, we propose algorithms to extract the line art from colorized images as well as to extract color hints. Our work is the first in the literature to use automatic color hint extraction, to train the networks specifically with anime-styled creatures and to combine the Pix2Pix and CycleGAN approaches, two different generative adversarial networks that create a single final result. Visual results of the colorizations are feasible but there is still room for improvement.
line art colorization, fakemon, anime colorization, generative adversarial neural networks
§ INTRODUCTION
Colorization of images is an important stage in 2D asset production for digital games, animations and digital content production. Colorization of grey-level images can be traced back to early 2000's <cit.> where optimization methods <cit.> were used to convert the grey-level pixels to target colored pixels. Eventually, machine learning started to be used in the same context. Lipowezky <cit.> used classical classification and feature extraction <cit.>. Later, approaches shifted towards deep learning <cit.>, which includes the usage of Generative Adversarial Networks (GANs), and is now mainstream.
In 2017, Zhang et al. <cit.> proposed a robust method for image colorization, where one of their results can be seen in Figure <ref>. One major limitation is that it requires the user to pre-define a “color palette”, also called a color hint, since each color is associated with the position where it occurs in the image. Their best results cannot be obtained without the color hint.
To this date <cit.>, grey-level image colorization is still an open problem. First, we can have different styles of colorization, which leaves room for different approaches. The second issue is image resolution. GANs do not work really well with large resolutions, and the colored images are usually 256x256, occasionally 512x512. Large resolutions usually require huge networks that consume lots of memory and take a long time to train, while still not producing high-quality results.
A third issue is artefact generation. In some cases it is possible to spot artefacts such as a repeated colorization pattern that occurs over the entire image. A fourth issue is related to the image domain. Certain types of grey-level images obtain better colorization results, and works coined towards a single type of image tend to outperform generic approaches.
It is also possible to find other variations of image colorization in the literature, which includes image colorization from infrared images <cit.>. However, these are not the primary focus of this work. We are interested in line art colorization for the generation or the acceleration of asset creation for games and entertainment.
Around 2016, online approaches for line art colorization started to appear, such as PaintsChainer <cit.>. Line art colorization is a different problem when compared to image colorization. In line art, we have just two main colors as input (black and white, eventually a few shades of gray), as opposed to the image colorization case, where we have rich gray-level information. In this sense, the line art colorization problem requires filling in white spaces with varying colors and shades. In line art, the information that can be used is mostly edges, areas and the location of the pixel (x and y coordinates).
Later in 2018, Zhang et al. <cit.> also proposed an approach for line art colorization that is similar to their previous photo colorization <cit.>. Figure <ref> shows one particular example that includes color hints (squares - before the colorization) and adjustment color hints (circles - after the colorization).
Most line art works in the literature focus on “humanoid” characters, which is not directly applicable to creatures/monsters in the kinds of Pokemon and Digimon. In this work, we use the so called “Fakemons”, which are fan-art monsters inspired by the Pokemon and Digimon franchises. This is the first work to pursue this scenario. We also evaluate the usage of automatic color hints, which is also used for training. Besides, we compare and combine the performance of two different GANs, the Pix2Pix network <cit.> as well as CycleGAN <cit.>. Combining both networks is also a novel contribution that has never been explored.
The main contributions of this work are: (1) proposal of an automatic and adaptive extraction of line arts, (2) proposal of the first automatic proposal for color hint extraction, (2) first time combination of the responses of the Pix2Pix and CycleGAN and (3) first time line art colorization of monster-like creatures. In what follows, we provide a literature review, proposed methodology, obtained results and conclusion.
§ LITERATURE REVIEW
Ci et al. <cit.> use a GAN and color hints for line art colorization. Similar to <cit.>, this work also focus on “humanoid” characters. Fang et al. <cit.> also use a GAN and consider different styles of colorization. The obtained results are visually pleasing but are mainly anime faces.
The MANGAN approach <cit.> also uses a GAN, where the authors came up with their own methodology to extract the line art from colorized images in order to train the network. Their line art extraction contains a lot of noise and generated lines are thicker than usual. This extraction can influence other works to under-perform. Hence, the comparison may not be the fairest. Furthermore, the authors also work with colorized “humanoid” anime characters.
Another remarkable factor: the authors apply a Gaussian blur to the color hint, which spreads the colors throughout the image. As a matter of analysis, we also applied a Gaussian blur in our approach but it does not heavily influence the end result of Pix2Pix and CycleGAN. A line or a circular spot is sufficient for Pix2Pix to translate the image appropriately.
PaintsTorch <cit.> uses a GAN architecture that is similar to Ci et al. <cit.>. Their results are pleasing, one of the best results in the literature. However, again, the authors use “humanoid” characters and their method uses manually placed color hint. Figure <ref> shows their result. We would like to draw attention to the amount of color hints, which is fairly significant, specially for the image at the bottom.
Serpa et al. <cit.>, on the other hand, used the Pix2Pix framework to create the shading of 2D sprites in order to accelerate their production. The authors show that they can recreate shading under controlled circumstances. Strange poses increase the difficulties to recreate the shading. This work, however, is not focused on line art colorization. It bears similarities to our approach as it uses Pix2Pix.
Pix2Pix is an image-to-image translation framework based on a GAN. In contrast to Convolutional Neural Networks (CNNs), GANs learn a loss function that classifies whether the output image is real or fake, at the same time that they train a generative model that minimize the loss <cit.>. Pix2Pix uses a “U-Net” based architecture for the generator and a “PatchGAN” classifier for the discriminator. The generator creates an image and the discriminator is responsible for checking whether the generated image is fake. These two deep neural networks working together compose the GAN architecture. The idea behind Pix2Pix is to provide a general image-to-image framework that works in all sorts of problems and is not application specific.
The authors <cit.> mention that classical pixel classification treats the output as “unstructured” data, as if the information of the class of a certain pixel does not influence on the classification of neighbouring pixels. Conditional GANs (or cGANs), contrariwise, learn a “structured loss” function, where this information “propagates” over areas of pixels. Pix2Pix is actually a cGAN. Recent works <cit.> have also explored this approach coupled with classical machine learning (manually coined feature extraction + classifiers), and obtained great results, which means that classifiers other than neural networks can be used and adapted to the “structured” idea. This approach is also called “connectivity”.
Besides Pix2Pix, we also evaluated the CycleGAN framework <cit.>, proposed by the same group behind Pix2Pix. CycleGAN is not the best framework of choice for our problem, as it transfers the style of one group of images to another group of images; it does not provide a paired image-to-image translation.
CycleGAN is more recommended to cases where the pair of images (line art and colorized ones) are not aligned. However, we combine Pix2Pix and CycleGAN to produce colorized and shaded images.
§ PROPOSED METHODOLOGY
We collected a total of 880 colorized images of Fakemon characters from websites such as DeviantArt, also including a few examples made by us. We carefully collected pieces of art that use the Creative Commons license. We would like to highlight that we do not hold the copyright for the creatures shown in this work. The collected images were used to train the Pix2Pix and CycleGAN models. We set aside a few "Fakemons" that are shown as results in this work and were not included in the training dataset. At the end of the work we provide the credits for the creatures used.
The line arts and color hints were automatically extracted from the collected images, which differs from works in the literature. Some works include the color pallet later in the network architecture or treat it as a separate input image. Contrariwise, we actually paint the line art image using circular spots of color hints and do not alter the standard architecture of the network. Therefore, we actually work with a single input image that contains the line art and the color hints.
§.§ Line art extraction
First, we use an automatic methodology for the extraction of the line arts, later visiting each one of the characters to verify if the line art was extracted properly. We manually improved the lines in a few cases. However, in real situations, the line art is produced by the artist and this line art extraction step is not necessary. This extraction was required just to create data for training, as it is nearly impossible to find a dataset or an adequate number of pairs of images containing the line art and its colorized version.
To extract the line art, we use an adaptive threshold, shown in Algorithm <ref>. We set the tolerance variable according to the number of pixels that compose the character. Larger and more complex characters tend to have thinner lines, while small characters tend to have thicker lines. Thus, we increase the tolerance variable if the amount of pixels in the character is low (i.e., pixels that are not background). Increasing the tolerance variable means that the algorithm selects darker shades of gray for the threshold value. More information on classical image thresholding can be found in <cit.>.
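Since Algorithm <ref> itself is not reproduced here, the sketch below only illustrates the idea in Python; the threshold values, the pixel-count cutoff and the exact adjustment rule are our own assumptions, not the algorithm's actual constants:

import numpy as np
from PIL import Image

def extract_line_art(path, base_threshold=120, small_char_pixels=20000, tolerance=40):
    rgba = np.array(Image.open(path).convert("RGBA"))
    gray = np.array(Image.open(path).convert("L"))
    alpha = rgba[..., 3]

    threshold = base_threshold
    if (alpha > 0).sum() < small_char_pixels:
        # Small characters have thicker lines, so only darker grays are kept as line.
        threshold = max(threshold - tolerance, 1)

    line_art = np.full_like(gray, 255)
    line_art[(gray < threshold) & (alpha > 0)] = 0   # dark strokes inside the character
    return Image.fromarray(line_art)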
§.§ Color hint extraction
After the extraction of the line art, the color hint is estimated. The approach we used is shown in Algorithm <ref>. This is a k-Medoid clustering algorithm <cit.> that skips the transparent pixels and uses an unconventional distance that we adjusted empirically. The used distance is shown in Equation <ref> and r, g and b represent the color layers. p_1 and p_2 represent the pixels.
The distance in Equation <ref> does not use the x and y information, and therefore we may end up with clusters of a single color that are much wider than other colors. After that, we apply the k-Medoid algorithm again. For the first time, shown in Algorithm <ref>, we use k=35 to quantize the images in a total of at most 35 colors. This k=35 is empirical, but it depends on the objective. Lower values for k (less colors) would reduce the time spent by the artist when placing manual color hints over the line art.
d(p_1,p_2) = (getHue(p_1.r*p_1.r,p_1.g*p_1.g,p_1.b*p_1.b)
-getHue(p_2.r*p_2.r,p_2.g*p_2.g,p_2.b*p_2.b))^2
+(1.5*getSat(0.8*p_1.r*p_1.r, 0.8*p_1.g*p_1.g, p_1.b*p_1.b)
-getSat(0.8*p_2.r*p_2.r, 0.8*p_2.g*p_2.g, p_2.b*p_2.b))^2
Later, for the second time, the k-Medoid algorithm is used with a slightly different k and distance so that we can have uniformly spaced clusters (or color hints) of the same color. The distance used this second time is the Euclidean distance, with a total of 5 dimensions (r, g, b, x and y) and equal weights for all of them. For this second pass, k is set to 10. Therefore, we automatically generate a total of 10 color hints, where each color hint is represented as a circle with a radius of 15 pixels. The result of this processing can be seen in Figure <ref>. The top-left image in this figure has a total of 35 colors (due to the first k) and a total of 10 circular color spots (due to the second k). The top-right image represents the color hint extraction. The bottom row shows the Pix2Pix and CycleGAN automatic colorizations.
It is possible to increase the number of color hints. However, if we train the algorithm using more color hints, the artist would be required to insert more color hints as well, and we want the colorization to be performed in the easiest way possible. We performed some experiments using more color hints and concluded that the results improve, but we still chose to stick to k=10. This trade-off should be analysed carefully.
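A rough sketch of this two-stage extraction is given below; scikit-learn's KMeans with a Euclidean distance is used as a stand-in for the k-Medoid algorithm and the custom hue/saturation distance, so the resulting palette and spot positions only approximate the original method:

import numpy as np
from PIL import Image, ImageDraw
from sklearn.cluster import KMeans

def extract_color_hints(path, k_colors=35, k_hints=10, radius=15):
    rgba = np.array(Image.open(path).convert("RGBA"), dtype=float)
    ys, xs = np.nonzero(rgba[..., 3] > 0)            # skip transparent pixels
    colors = rgba[ys, xs, :3]

    # Stage 1: quantize the character to at most k_colors colors.
    quant = KMeans(n_clusters=k_colors, n_init=4, random_state=0).fit(colors)
    quantized = quant.cluster_centers_[quant.labels_]

    # Stage 2: cluster in (r, g, b, x, y) space to spread k_hints spots over the image.
    spots = KMeans(n_clusters=k_hints, n_init=4, random_state=0).fit(
        np.column_stack([quantized, xs, ys]))

    hints = Image.open(path).convert("RGBA")         # in practice, draw over the line art
    draw = ImageDraw.Draw(hints)
    for r, g, b, x, y in spots.cluster_centers_:
        draw.ellipse([x - radius, y - radius, x + radius, y + radius],
                     fill=(int(r), int(g), int(b), 255))
    return hints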
We adjusted the parameters of both Pix2Pix and CycleGAN empirically. The parameters do not influence that much on the final result. However, we increased the ngf and ndf parameters (ngf: number of generator filters in the last conv layer, ndf: number of discriminator filters in the first conv layer) to 150 as they provide better results.
§ RESULTS
Figure <ref> shows the colorization obtained with Pix2Pix. These results were obtained directly from the automatic color hint extraction shown in Algorithm <ref>. We did not use any specific metric to measure the accuracy of the colorization, as we adhered to the evaluation performed by other works in the literature, which is based on human visual observation. We can argue that the results are acceptable and suitable for game art and entertainment as is. The style of the results is reminiscent of watercolor. The boundaries of the images were well colored, and no case presented colorizations that exceeded the line art limits.
Figure <ref> shows some results obtained with CycleGAN. We are not sure why the colors did not vary that much here, as they varied in other experiments with CycleGAN. One clear difference is that it is capable of grasping the darker and lighter shades better than Pix2Pix, even producing a type of "dither" where the image should be lighter, such as in the heads of the first two Fakemon in Figure <ref>. Pix2Pix, on the other hand, works better with the colors.
Considering that both approaches have interesting aspects (one the color and the other the shading), we tried to combine them. To our surprise, the combination yielded interesting results. If we divide (blend mode) the Pix2Pix result by the CycleGAN result, we get the response shown in Figure <ref>, with soft colors and shading.
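For reference, the divide blend we refer to can be sketched as follows; this is our own minimal formulation (function name and clipping included), not code from the original pipeline, and it assumes both outputs are float RGB images in [0, 1].

import numpy as np

def divide_blend(base, blend, eps=1e-6):
    # per-pixel "divide" blend: base / blend, guarded against division by zero
    out = base / np.clip(blend, eps, None)
    return np.clip(out, 0.0, 1.0)

# e.g., combined = divide_blend(pix2pix_rgb, cyclegan_rgb)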
As a final experiment, we manually included several color hints and analysed how Pix2Pix performs. The result can be seen in Figure <ref>. The results do improve in some areas of the image, but overall the improvement is not substantial.
All the experiments in this work were run using an Intel i7-10700 CPU and an Nvidia RTX 3060 with 12GB of memory. Pix2Pix trained for 2 days, past epoch 1000, while CycleGAN trained for 4 days straight and reached epoch 300. We also evaluated earlier epochs to check for overfitting, but the results did not seem to improve. The parameters were mostly the standard ones; we increased the ngf and ndf parameters to the maximum the GPU memory would support (150 for Pix2Pix and 128 for CycleGAN).
§ CONCLUSION
This work performs experiments towards the automatic colorization of Fakemon, monster-like characters. We collected a total of 880 images that fit the “Fakemon” category for training. Contributions of this work include the algorithm for the extraction of the line art as well as the automatic extraction and generation of the color hints. A further contribution is the colorization of anime-styled creatures, which to our knowledge has not been addressed in the literature before. We are also the first to experiment with a small number of color hints and to combine Pix2Pix and CycleGAN in the same result.
The first major conclusion is that there is still a lot of room for improvement in automatic colorization. Even with a fairly large dataset we still had to combine the approaches to obtain arguably adequate results (Figure <ref>). The results can still be improved to look more like the original pieces. Furthermore, we also have a limitation with respect to resolution: the images were 256x256 pixels.
However, we also conclude that the results are usable: they are reminiscent of the watercolor style and can be used as is in games and entertainment. One particular advantage is that all the generated colorizations appear to share the same art style, irrespective of the author who created the original line art. These approaches can be used, for instance, to “standardize” pieces of art from different authors.
We used frameworks (Pix2Pix and CycleGAN) that are generic for this type of task (image-to-image translation and style transfer) and designed to work with all sorts of problems. Future work may include the creation of frameworks and approaches designed specifically for this task, aiming to improve the results and the similarity to the original piece.
Although the colorization of Fakemon and of usual anime art may sound like the same thing, they are actually very different tasks: the color palettes are different, the line art is different, the background information is different, and so on. This justifies the creation of a framework for this type of task as future work, as well as the use of task-specific images for training.
§ ACKNOWLEDGMENT
All the art shown in this work was extracted from DeviantArt.com. We carefully selected artworks that conform to the Creative Commons license, which can be found at their website. In this manuscript, we include pieces from the artists Dragonith (https://www.deviantart.com/dragonith), Edari (https://www.deviantart.com/edari) and Bombeetle (https://www.deviantart.com/bombeetle), which can be found on their profile pages.
|
http://arxiv.org/abs/2307.04873v2 | 20230710195036 | Gauge fixing in cosmological perturbations of Unimodular Gravity | [
"Francisco X. Linares Cedeño",
"Ulises Nucamendi"
] | gr-qc | [
"gr-qc",
"astro-ph.CO"
] |
Gauge fixing in cosmological perturbations of Unimodular Gravity
Francisco X. Linares Cedeño and Ulises Nucamendi
August 12, 2023
==============================================================================
§ INTRODUCTION
We are living in the era of modern cosmology, and accurate predictions of the dynamics of the universe are required. With the advent of more data, the information acquired from different sources in the cosmos challenges the broad variety of cosmological models in the literature. The standard Lambda Cold Dark Matter (ΛCDM) model is based on the theory of General Relativity (GR) and is in excellent concordance with several observations <cit.>. However, there are still some unsolved problems within the ΛCDM model <cit.>, which has motivated a large number of proposals considering both new particles and new theories of gravity.
The current accelerated expansion of the universe is one of the riddles that cosmologists are trying to solve, and within the framework of GR it is the Cosmological Constant Λ that plays the role of the main component responsible for such accelerated expansion, the so–called Dark Energy. The origin of this dark component is still unknown, and there are many models in the literature based on different physics: some examples are dark energy fluids <cit.>, quintessence/phantom scalar fields <cit.>, and modified gravity <cit.>, among others.
In the first formulations of GR, a choice of coordinates such that the determinant of the metric tensor is fixed was considered <cit.>, that is, the metric tensor g_μν obeys the unimodular condition √(-g) = 1. Later, the relation between a fixed metric determinant and the cosmological constant was established <cit.>, where unimodular coordinate mappings were imposed: x^μ→ x^'μ such that |∂ x^'μ/∂ x^ν| = 1. Such a consideration leads to the traceless part of the Einstein field equations, and this gravitational theory has been dubbed Unimodular Gravity (UG) <cit.>. One of the main consequences of having a four–volume preserving theory is that the energy–momentum tensor is no longer conserved (∇_μ T^μ_ ν≠ 0), and new non–gravitational interactions are allowed in the matter–energy sector. This feature is expected in theories of gravity that at a fundamental level could be more compatible with quantum mechanics <cit.>.
Cosmological models based on UG have been studied in recent years <cit.>, although most of them have focused on the background dynamics. In particular, in <cit.> the authors of the present work analyzed four phenomenological diffusion models describing interactions between the dark sector components, and it was shown that such interactions alleviate the H_0 tension. It is then natural to go further and study whether it is possible to describe an inhomogeneous universe by considering linear perturbations in UG, with the aim of reproducing observables such as the Cosmic Microwave Background (CMB) and the Large Scale Structure (LSS). This requires properly solving the Einstein–Boltzmann system describing the cosmological evolution of the initial fluctuations of all the matter components and of the metric tensor.
Some contributions in this direction have been made: in <cit.> the linear perturbations are obtained, although the authors impose by hand the conservation of the energy–momentum tensor, which is not the most general form of the UG field equations. Nonetheless, it is shown that the Sachs–Wolfe effect <cit.> in UG has a new term given by a scalar metric perturbation <cit.>. On the other hand, the second order perturbations were obtained and no major distinctions from GR were found <cit.>. A recent analysis including the non–conservation of the energy–momentum tensor has been carried out in <cit.>, where the presence of an energy–momentum current violation has been considered.
As mentioned above, whereas a first work by the present authors focused on the Hubble tension at the background level <cit.>, here we pay full attention to the details of the cosmological perturbations in UG, which is a necessary step prior to implementing this theory in numerical Boltzmann solvers and to the statistical analyses needed to constrain parameters and perform model comparison. A particular aspect that has not been addressed in the previous literature is that of gauge fixing. This is of crucial importance because one has to ensure that no spurious degrees of freedom propagate in the theory. We show that it is possible to fix both the Newtonian and Synchronous gauges: the former is fixed in the same way as in GR, whereas for the latter the unimodular constraint at first order needs to be implemented. Nonetheless, the dynamics of the linear perturbations differs from that of GR due to both the unimodular constraint at linear order and the presence of the energy–momentum current violation. Thus, with the aim of exploring possible imprints of UG dynamics at first order, we obtain the CDM density contrast in both gauges for a matter–dominated universe, as well as the Sachs–Wolfe effect.
This paper is organized as follows: Section <ref> is dedicated to reviewing in more detail the contributions of previous works on the analysis of linear perturbations in UG. In Section <ref> we obtain the equations of motion of UG by the variational method. Once the field equations are obtained, the background Friedmann–Robertson–Walker (FRW) line element is implemented and the linear perturbations are obtained in Section <ref>; in addition, the perturbations of the energy–momentum current violation are presented for the first time. Later, in Section <ref>, we present the main analysis of this work: we fix the most commonly used gauges in cosmology, the Newtonian and Synchronous gauges. Additionally, we review the gauge choice implemented in <cit.>. In Section <ref> we present the physical implications of the UG linear perturbation dynamics, paying particular attention to the evolution of the CDM density contrast as well as to the Sachs–Wolfe effect. Final remarks are given in Section <ref>.
We will follow the signature convention (- , + , + , +). On the other hand, we will use dots ( ) for derivatives with respect to cosmic time t, and primes ( ^' ) will be used to label change of coordinates when choosing gauges.
§ UNIMODULAR GRAVITY PERTURBATION THEORY: STATE OF ART
In this Section we are going to briefly review previous works that have addressed the cosmological perturbations in UG. We will highlight the main results, as well as several crucial aspects that have not been deeply analyzed yet.
The first analysis of linear perturbations within the framework of UG was done in <cit.>. In that work the unimodular constraint at the level of linear perturbations is shown for the first time: what at the background level is a fixed four–volume becomes a new relation between the scalar modes of the metric fluctuations, a relation that is not present in GR at the level of linear perturbations. Notwithstanding, the unimodular constraint leads to a gauge issue that seems to be unavoidable when obtaining the dynamics of the scalar modes of the metric fluctuations. Specifically, the authors of <cit.> explain:
"This scalar type metric perturbation cannot be removed through a gauge choice in unimodular gravity and thus leads to the possibility of observationally distinguishing unimodular gravity from GR".
In fact, such a scalar mode appears as a new term when obtaining the relation between temperature and gravitational potential in the Sachs–Wolfe effect within the framework of UG. However, the differences between GR and UG when analyzing the Sachs–Wolfe effect are suppressed on large angular scales, making both theories practically indistinguishable. It is important to mention that even though the unimodular constraint gives an additional relation between the gravitational potentials of the metric perturbations, the gauge choice made by the authors of <cit.> leaves two gravitational degrees of freedom, as in GR. This is done in order to compare with the longitudinal gauge of GR. However, this choice does not fix the gauge, as will be shown in the present work.
Later, going a step further, the authors of <cit.> obtained the second order perturbations of the theory. The gauge choice they consider is the same as that in <cit.>, and then the gauge fixing issue persists. Nonetheless, the authors of <cit.> claim that the appearance of the new term in the Sachs–Wolfe effect can be made compatible with GR with the proper gauge choice. On the other hand, the second order Mukhanov–Sasaki equation in UG is obtained, and it is shown that it depends only on the first order unimodular constraint. It is concluded that there is no significant difference between GR and UG at either first or second order in perturbations.
Thus, both works mentioned above conclude that there are no major distinctions between GR and UG. However, in both studies it is assumed that the energy–momentum tensor is conserved, which neglects one of the main features of the UG theory. Progress in this direction has been made: in <cit.> the cosmological linear perturbations in UG under the Newtonian conformal gauge are studied for scalar and tensor perturbations, and the Boltzmann equation for photons is obtained as well. In particular, they obtain the 00 component for scalar perturbations, and it is shown that it presents an extra contribution due to Λ. On the other hand, the Boltzmann equation for photons contains an additional term with third order derivatives. In this respect, the authors say:
"...the Boltzmann equation for photons is exposed because it contains the energy momentum violations that characterize the UG. Notoriously, the extra term carries higher order derivatives in the conformal time component for the scale factor and for the scalar curvature."
This is the case when considering that radiation will be coupled with non–standard terms due to the energy–momentum current violation. In our case, we will consider that the new non–gravitational interactions occur only between the dark sector components.
Another work assuming the violation of energy–momentum conservation, as well as considering linear perturbations in the longitudinal gauge in UG, is <cit.>, where the authors do not use the action principle to obtain the UG field equations; their starting point is the trace-free Einstein equations only. Thus, the authors do not assume the unimodular condition. This is different from our approach, where the UG field equations are derived from an action principle (in fact, we will consider unimodular variations), and the unimodular constraint will be considered for both the background and the linear perturbations. The study of instabilities in UG with non–gravitational interactions in the dark sector is carried out in detail, and in particular the authors report:
"...the usual instability is driven by the nonadiabatic pressure perturbation of the dark energy fluid, but for the trace–free Einstein equations and a transfer potential that depends only on the dark matter energy density there is no nonadiabatic pressure perturbation to dark energy – this is ultimately why there is no instability here."
Later, in <cit.> the contribution of an energy–momentum current violation is considered as well, and it is referred to by the authors as nonconservative unimodular gravity. Besides, the unimodular constraint is fully considered when obtaining the dynamics of the linear perturbations, which are written in terms of only one gravitational degree of freedom. In relation to the Newtonian and Synchronous gauges, the authors in <cit.> report:
"The newtonian gauge can not be used in the unimodular context unless any anisotropic contribution to the stress–tensor are considered," and
"Scalar perturbations in the nonconservative unimodular gravity are permitted in the synchronous gauge only and have a growing mode."
The former aspect is still without analysis, whereas the latter was analyzed by considering a specific background solution. Even though these results constitute a significant breakthrough in the study of cosmological perturbations in UG, a proper analysis of gauge fixing, looking for the choices that leave the theory without spurious degrees of freedom, is still missing. Moreover, it is mandatory to have the correct dynamics of the linear perturbations if one is interested in implementing UG cosmological models in Boltzmann solvers such as <cit.> and <cit.>. For instance, the former integrates transfer functions for quantities defined in the synchronous gauge, whereas the latter uses the synchronous gauge by default, although Newtonian gauge equations are implemented on top of the synchronous ones. Therefore, the gauge fixing issue in UG must be deeply understood[Since the Boltzmann solvers mentioned above are still being used by cosmologists, we are interested in the study of the Newtonian/Synchronous gauges. The gauge invariant formalism of UG is presented in detail in <cit.>, and it is not our focus to deal with such a treatment in the present work.].
From all of the above, there are some important technical details still to be addressed in order to properly study the cosmological perturbations in UG, with the aim of analyzing whether cosmological models based on UG are viable candidates to describe the universe, not only at the level of the background dynamics but also to describe the CMB and LSS.
Summarizing, we have that
* Even when UG naturally leads to non–gravitational interactions, this is ∇_μ T^μ_ ν≠ 0, the first works on UG linear perturbations assume the opposite, and energy–momentum conservation is imposed by hand <cit.>. Within these analyses, GR and UG are basically the same theory.
* With focus on the scalar modes, once the non–conservation of the energy–momentum tensor is considered, it has been possible to obtain the 00 component of the field equations for linear perturbations in the Newtonian gauge <cit.>, the perturbed field equations in the longitudinal gauge but without considering the unimodular constraint <cit.>, and solutions for linear perturbations only in the synchronous gauge <cit.>. The Newtonian gauge requires an anisotropic term in the matter sector, and solutions in this gauge are lacking.
* None of the works mentioned above analyze the gauge fixing in UG. This is crucial in order to avoid the propagation of spurious degrees of freedom that can be mistaken for physical effects on cosmological observables.
In the present work we address the missing points mentioned above, with special emphasis on the gauge fixing problem, the dynamics of the linear perturbations considering both the unimodular constraint and the energy–momentum current violation, and the physical repercussions on the growth of the CDM density contrast. In this respect, we show that it is possible to fix both the Newtonian and Synchronous gauges, although they have different consequences for the dynamics of the linear perturbations. We also review the gauge choice studied in <cit.>, and we show that it is not completely fixed due to a remaining undetermined function. We find analytical solutions in terms of the only gravitational degree of freedom in the Newtonian and Synchronous gauges. Therefore, a possibility to track signatures of UG at cosmological scales is to analyze the growth of structure, which implies obtaining the dynamics not only of CDM but of all the matter components, and setting up the proper dynamical equations for the Einstein–Boltzmann system within the framework of UG. On the other hand, we obtain the Sachs–Wolfe effect in UG and, different from what has been previously reported in the literature, the result is exactly the same as in GR, without new contributions from metric perturbations.
§ EQUATIONS OF MOTION IN UNIMODULAR GRAVITY
Different from <cit.>, where the unimodular constraint was introduced in the Einstein–Hilbert action through a Lagrange multiplier, this time we will obtain the UG equations of motion considering the following unimodular variation δ_u,
δ_ug^μν≡δ g^μν - 1/4g^μνg_αβδ g^αβ ,
and then, it follows that
g_μνδ_ug^μν = g_μν( δ g^μν - 1/4g^μνg_αβδ g^αβ) = g_μνδ g^μν - 1/4δ_λ^λg_μνδ g^μν= 0 .
The volume–preserving diffeomorphisms are satisfied under the unimodular variation δ_u, since when considering the variation of the determinant of the metric, we have
δ_u√(-g) = -1/2√(-g)g_μνδ_u g^μν = 0 ⇒ √(-g) = f ,
where f=f(x) is a nondynamical scalar density which depends on the coordinates, and it can always be redefined to the unity.
The total action is,
S = S_EH + S_M = 1/2κ^2∫ d^4x√(-g)R + S_M ,
where S_EH is the Einstein–Hilbert action, and S_M is the action for the matter fields. The Ricci scalar is defined as R=g^μνR_μν, and then, the unimodular variation of the Einstein–Hilbert action is
δ_u S_EH = 1/2κ^2[∫ d^4x(δ_u √(-g))R + ∫ d^4x √(-g)(δ_u g^μν)R_μν + ∫ d^4x √(-g)g^μν(δ_u R_μν)] .
The first term is zero due to (<ref>), and the last term is also vanishing because, as in GR, after some algebra such term is a boundary contribution at infinity which can be set to zero <cit.>. Then, the Einstein–Hilbert action gets reduced to the second term only, which using Eq. (<ref>) is written as
δ_u S_EH = 1/2κ^2∫ d^4x √(-g)(δ_u g^μν)R_μν = 1/2κ^2∫ d^4x √(-g)(δ g^μν - 1/4g^μνg_αβδ g^αβ)R_μν ,
= ∫ d^4x √(-g)[1/2κ^2(R_μν - 1/4R g_μν)]δ g^μν .
For the matter content, we have the standard energy–momentum tensor definition but considering the unimodular variation, this is,
T_μν≡ -2/√(-g)δ_u S_M/δ_u g^μν
and then, we have
δ_u S_M = -1/2√(-g)T_μνδ_u g^μν = -1/2√(-g)T_μν(δ g^μν - 1/4g^μνg_αβδ g^αβ)
= -1/2√(-g)( T_μν - 1/4T g_μν)δ g^μν .
Therefore, the previous results from Eq. (<ref>) and (<ref>) give the following variation for the total action (<ref>),
δ_u S = ∫ d^4x √(-g)/2[1/κ^2(R_μν - 1/4R g_μν) -( T_μν - 1/4T g_μν)]δ g^μν = 0 ,
and thus, we obtain the UG field equations,
R_μν - 1/4R g_μν = κ^2( T_μν - 1/4T g_μν) ,
which are the trace–free version of the Einstein field equations. We can rewrite Eq. (<ref>) as follows,
R^μ_ ν - 1/2Rδ^μ_ν + 1/4(R + κ^2T)δ^μ_ν = κ^2 T^μ_ ν ,
and applying the Bianchi identities,
∇_μ( R^μ_ ν - 1/2Rδ^μ_ ν) + 1/4∇_ν(R + κ^2T) = κ^2 ∇_μT^μ_ ν ,
we notice that whereas the first term on the l.h.s. is identically zero, the energy–momentum tensor is no longer locally conserved,
κ^2∇_μT^μ_ ν = 1/4∂_ν(R + κ^2T) ≡ J_ν ,
where J_ν is the energy–momentum current violation. Integrating the expression from above, and replacing this result into Eq. (<ref>), we have
R_μν - 1/2Rg_μν + [ Λ + ∫ dx^α J_α(x) ] g_μν = κ^2 T_μν ,
where Λ is a constant of integration. Notice that, in the particular case when the energy–momentum tensor is conserved (J_ν = 0), Eq. (<ref>) coincides with the Einstein field equations of GR, and then, Λ is identified as the cosmological constant. Thus, within the framework of UG, the cosmological constant Λ arises naturally in the equation of motion as an integration constant when considering volume–preserving diffeomorphisms. Notwithstanding, in general we will have J_ν≠ 0, and non–gravitational interactions are allowed between different matter and energy components.
In summary, the UG field equations are given by
R_μν - 1/2Rg_μν + Λ(x) g_μν = κ^2 T_μν , with Λ(x) ≡Λ + ∫ dx^α J_α(x) ,
∇_μ T^μ_ ν = 1/κ^2J_ν , with J_ν≡1/4∂_ν(R + κ^2T) ,
where Λ(x) in Eq. (<ref>) is an effective cosmological constant which in general depends on the spacetime coordinates. We will focus in non–gravitational interactions only between dark matter and the effective cosmological constant through the energy–momentum current violation J_ν according to Eq. (<ref>).
§ LINEAR COSMOLOGICAL PERTURBATIONS IN UNIMODULAR GRAVITY
Let us write the metric, the energy–momentum tensor, the effective cosmological constant, and the energy–momentum current violation in the following way
g_μν = g̅_μν + h_μν , T_μν = T̅_μν + δ T_μν , Λ = Λ̅ + δΛ , J_μ = J̅_μ + δ J_μ ,
where the bar denotes quantities from the background, and h_μν , δ T_μν , δΛ , and δ J_μ are small fluctuations with respect to their corresponding background values. In the case of the background metric, we consider the flat FRW spacetime, whose components are
g̅_00 = -1 , g̅_0i = 0 , g̅_ij = a^2(t)δ_ij ,
g̅^00 = -1 , g̅^0i = 0 , g̅^ij = a^-2(t)δ_ij ,
while for the inverse of the metric perturbation we have
h^μν = -g̅^μαg̅^νβh_αβ ,
whose components are given by
h^00 = -h_00 , h^i0=a^-2h_i0 , h^ij=-a^-4h_ij
Notice that the determinant of the metric given by Eq. (<ref>) can be written at first order as
√(-g)≃√(-g̅)[ 1 + 1/2g̅^μνh_μν + 𝒪(h^2) ] = √(-g̅)( 1 - h_00/2 + a^-2h_ii/2) ,
and, whereas at zero order we recover the unimodular constraint (<ref>), at first order we have
-h_00 + a^-2h_ii = 0 .
The last expression will be important to keep in mind in the following analysis of the dynamics of small fluctuations, since it constitutes a new relation between the components of the perturbed metric that is not present in GR.
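As a quick cross-check of the expansion above, the short sympy snippet below (our own, not part of the paper) verifies symbolically that, to first order in the perturbation, det(g) = det(g̅)[1 + g̅^μνh_μν] = -a^6(1 - h_00 + a^-2h_ii), which is equivalent to the constraint just written.

import sympy as sp

t, eps = sp.symbols('t epsilon')
a = sp.Function('a', positive=True)(t)
h = sp.Matrix(4, 4, lambda i, j: sp.Symbol(f'h_{min(i, j)}{max(i, j)}'))  # symmetric h_{mu nu}
gbar = sp.diag(-1, a**2, a**2, a**2)
g = gbar + eps * h

det_first_order = g.det().series(eps, 0, 2).removeO()
trace_term = -h[0, 0] + (h[1, 1] + h[2, 2] + h[3, 3]) / a**2
print(sp.simplify(det_first_order - gbar.det() * (1 + eps * trace_term)))  # -> 0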
The Christoffel symbols are defined as
Γ^α_μν = 1/2g^αβ( ∂_νg_βμ + ∂_μg_βν - ∂_βg_μν) ,
and then, the non–null components are
Γ^0_00 = -ḣ_00/2 ,
Γ^0_i0 = ȧ/ah_i0-1/2∂_i h_00 ,
Γ^0_ij = aȧδ_ij + 1/2( 2aȧδ_ijh_00 - ∂_j h_i0 - ∂_i h_j0 + ḣ_ij) ,
Γ^i_00 = 1/2a^2( 2ḣ_i0 - ∂_i h_00) ,
Γ^i_j0 = ȧ/aδ_ij + 1/2a^2( -2ȧ/ah_ij + ḣ_ij + ∂_j h_i0 - ∂_i h_j0) ,
Γ^i_jk = 1/2a^2( -2aȧh_i0δ_jk + ∂_k h_ij + ∂_j h_ik - ∂_i h_jk) .
The Ricci tensor is,
R_μν = ∂_αΓ^α_μν - ∂_νΓ^α_αμ + Γ^α_αβΓ^β_μν - Γ^α_νβΓ^β_αμ ,
with components given by
R_00 = -3ä/a - ∇^2h_00/2a^2 - 3/2ȧ/aḣ_00 + ∂_iḣ_i0/a^2 -
1/2a^2{ḧ_ii - 2ȧ/aḣ_ii + 2[ ( ȧ/a)^2 - ä/a]h_ii} ,
R_0i = -ȧ/a∂_i h_00 - 1/2a^2( ∇^2 h_i0 - ∂_i ∂_j h_j0) + [ ä/a + 2( ȧ/a)^2 ]h_i0 - 1/2∂_t[ 1/a^2( ∂_i h_jj - ∂_j h_ji) ] ,
R_ij = ( aä + 2ȧ^2 )δ_ij + 1/2∂_i∂_j h_00 + ( 2ȧ^2 + aä)δ_ij h_00 + 1/2aȧδ_ijḣ_00 + 1/2ḧ_ij - ȧ/aδ_ij∂_k h_k0
-1/2a^2( ∇^2h_ij - ∂_k∂_j h_ki - ∂_k∂_i h_kj + ∂_i∂_j h_kk) - 1/2ȧ/a( ḣ_ij - δ_ijḣ_kk)
+( ȧ/a)^2( 2h_ij - δ_ijh_kk) - 1/2( ∂_i ḣ_j0 + ∂_j ḣ_i0) - 1/2ȧ/a( ∂_i h_j0 + ∂_j h_i0) ,
The Ricci scalar R = R̅ + δ R = g̅^μαR̅_αμ + g̅^μαδ R_αμ + h^μαR̅_αμ , is given by
R = 6[ (ȧ/a)^2 + ä/a] + 6[(ȧ/a)^2 + ä/a]h_00 + 3ȧ/aḣ_00 + ∇^2h_00/a^2
-2/a^2( 2ȧ/a∂_ih_i0 + ∂_iḣ_i0) -2/a^2[ (ȧ/a)^2 + ä/a]h_ii + ḧ_ii/a^2
-1/a^4( ∇^2h_ii - ∂_i∂_jh_ij) .
For the matter content we are going to be interested in the energy–momentum tensor of a perfect fluid, this is
T_μν = ( ρ + p )U_μU_ν + pg_μν ,
with
ρ = ρ + δρ , p = p + δ p , U^μ = ( 1+δ U^0, v^i ) , U_μ = (-1+δ U_0,v_i) .
where ρ is the energy density, p the pressure, and U^μ the four–velocity of the fluid which at the level of the background we have chosen the system of reference of comoving observers. The term v^i=δ U^i is the peculiar velocity, which can be considered as a small quantity as δρ and δ p. Notice that, due to the condition g_μνU^μ U^ν=-1, the time component of the four–velocity perturbation is δ U^0=δ U_0 = h_00/2. Thus, the components of the energy–momentum tensor in terms of the zero and first order perturbations for a perfect fluid are given by
T_00 = ρ̅ -ρ̅ h_00 + δρ ,
T_i0 = p̅ h_i0 - (ρ̅ + p̅)v_i ,
T_ij = a^2p̅δ_ij + p̅h_ij + a^2δ pδ_ij ,
whereas the components with mixed indices are
T^0_ 0 = -ρ̅ - δρ ,
T^0_ i = -(ρ̅ + p̅)v_i = -T^i_ 0 ,
T^i_ j = p̅δ^i_j + δ p δ^i_j .
Once inserted the above expressions in the UG field equations (<ref>) for a spatially flat FRW universe, we obtain for the zeroth–order perturbations the background equations, i.e.,
H^2 - Λ̅(t)/3 = κ^2/3ρ̅ , Ḣ = -κ^2/2ρ̅( 1 + ω) ,
ρ̇̅̇ + 3Hρ̅( 1 + ω) = -J̅_0(t)/κ^2 ,
where H=ȧ/a is the Hubble parameter. Both Λ̅ and J̅_0 depend only on the cosmic time t due to homogeneity and isotropy. The energy density and the pressure for the matter fields are related by a constant equation of state ω≡p̅/ρ̅ (ω=0 for non–relativistic matter such as baryons and cold dark matter, and ω=1/3 for radiation).
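To illustrate how the background system above can be evolved numerically, the following Python sketch (our own, not from the paper) integrates a single pressureless fluid using ρ̇̅̇ = -3Hρ̅ - J̅_0/κ^2, Λ̇̅̇ = J̅_0 and H^2 = (κ^2ρ̅ + Λ̅)/3; the specific choice J̅_0 = -3ακ^2 Hρ̅ is an arbitrary placeholder for a phenomenological interaction, not one of the diffusion models discussed in <cit.>.

import numpy as np
from scipy.integrate import solve_ivp

kappa2 = 1.0   # units with kappa^2 = 8*pi*G = 1
alpha = 0.05   # illustrative strength of the current violation (assumption)

def J0_bar(H, rho):
    # placeholder interaction; replace with a concrete diffusion model
    return -3.0 * alpha * kappa2 * H * rho

def background(t, y):
    rho, lam = y
    H = np.sqrt((kappa2 * rho + lam) / 3.0)
    drho = -3.0 * H * rho - J0_bar(H, rho) / kappa2
    dlam = J0_bar(H, rho)
    return [drho, dlam]

# initial matter density and a small effective Lambda (arbitrary numbers)
sol = solve_ivp(background, (1.0, 100.0), [1.0, 1e-2], rtol=1e-8, dense_output=True)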
On the other hand, following <cit.> the linear perturbations for Eq. (<ref>) are
δ R_μν - Λ̅h_μν - g̅_μνδΛ = κ^2( δ T_μν - 1/2g̅_μνδ T - 1/2h_μνT̅) ,
where T̅ is the trace of the background energy–momentum tensor, and δ T its perturbation,
T̅ = 3p̅ - ρ̅ = -6/κ^2[ ä/a + (ȧ/a)^2 -2/3Λ̅] , δ T = 3δ p - δρ .
The components of Eq. (<ref>) are given by
κ^2/2( δρ + 3δ p ) = -∇^2h_00/2a^2 - 3/2Hḣ_00 + ∂_iḣ_i0/a^2 - 3( H^2 + Ḣ)h_00
- 1/2a^2( ḧ_ii - 2Hḣ_ii - 2 Ḣ h_ii) + δΛ ,
-κ^2( ρ̅ + p̅)v_i = -H∂_i h_00 - 1/2a^2(∇^2h_i0 - ∂_i∂_jh_j0) - 1/2∂/∂ t[ 1/a^2(∂_i h_jj - ∂_j h_ji) ] ,
a^2/2κ^2( δρ - δ p )δ_ij = 1/2∂_i∂_j h_00 + (2ȧ^2+aä)δ_ijh_00 + 1/2aȧδ_ijḣ_00 - H/2(∂_ih_j0 + ∂_jh_i0)
- 1/2a^2( ∇^2h_ij - ∂_k∂_j h_ki - ∂_k∂_i h_kj + ∂_i∂_jh_kk ) + 1/2ḧ_ij
- H/2(ḣ_ij - δ_ijḣ_kk) -( H^2 + Ḣ)h_ij - H^2δ_ijh_kk - Hδ_ij∂_k h_k0
- 1/2(∂_iḣ_j0 + ∂_jḣ_i0) - a^2 δ_ijδΛ ,
and at first order, the (non) conservation of the energy–momentum tensor (<ref>) is
∂_μδ T^μ_ ν + Γ̅^μ_μαδ T^α_ ν + δΓ^μ_μαT̅^α_ ν - Γ̅^α_μνδ T^μ_ α - δΓ^α_μνT̅^μ_ α = δ J_ν/κ^2 .
We can simplify these expressions by decomposing the perturbations into scalars, divergenceless vectors, and divergenceless traceless symmetric tensors. The perturbation of the metric h_μν can be written as
h_00 = -E ,
h_i0 = a( ∂_i F + G_i ) ,
h_ij = a^2( Aδ_ij + ∂_i∂_j B + ∂_j C_i + ∂_i C_j + D_ij) ,
where A , B , E , F are scalar perturbations, C_i and G_i are vector perturbations, and D_ij are tensor perturbations. Particularly, C_i , G_i and D_ij satisfy
∂_i C_i = ∂_i G_i = 0 , ∂_i D_ij = 0 , D_ii = 0 .
Analogously, the energy–momentum tensor can be decomposed in a similar way, this is, we can rewrite Eq.(<ref>) as
δ T_00 = -ρ̅ h_00 + δρ ,
δ T_i0 = p̅h_i0 - (ρ̅ + p̅)(∂_i v + δ v_i^V) ,
δ T_ij = p̅h_ij + a^2( δ_ijδ p + ∂_i∂_j π^S + ∂_i π_j^V + ∂_jπ_i^V + π_ij^T ) ,
where we have decomposed the spatial part of the four–velocity perturbation as v_i ≡∂_i v + δ v_i^V, with ∂_i v the gradient of a scalar velocity potential, and δ v_i^V a divergenceless vector. The terms π^S , π^V , and π^T represent dissipative corrections to the perturbation of the inertia tensor δ T_ij. These quantities satisfy conditions similar to those of Eq. (<ref>)
∂_iπ_i^V = ∂_iδ v_i^V = 0 , ∂_iπ_ij^T = 0 , π_ii^T=0 .
Besides, the mixed components of the energy–momentum tensor (<ref>) are given by
δ T^0_ 0 = -δρ ,
δ T^i_ 0 = a^-2(ρ̅ + p̅)(a∂_i F + aG_i - ∂_i v - δ v_i^V) ,
δ T^0_ i = (ρ̅ + p̅)(∂_i v + δ v_i^V) ,
δ T^i_ j = δ_ijδ p + ∂_i∂_jπ^S + ∂_iπ_j^V + ∂_jπ_i^V + π_ij^T ,
δ T = 3δ p - δρ + ∇^2π^S .
As in GR, in the linear regime of small fluctuations it is possible to separate the perturbations into three classes: scalar modes, vector modes, and tensor modes, which at linear order are completely independent from each other. We will focus on the scalar modes of the perturbations, and then Eq. (<ref>) is given by
κ^2(δρ + 3δ p + ∇^2π^S) = ∇^2E/a^2 + 3HĖ + 2/a∇^2Ḟ + 2H/a∇^2F - 3Ä - 6HȦ + 6(H^2+Ḣ)E
- 2H∇^2Ḃ - ∇^2B̈+ 2δΛ ,
while Eq. (<ref>) gives
-κ^2(ρ̅ + p̅)∂_i v = H∂_i E - ∂_i Ȧ ,
which is exactly the same as that of GR. Eq. (<ref>) can be separated in two parts: that proportional to δ_jk, and that proportional to ∂_j∂_k, which gives
κ^2(δρ - δ p - ∇^2π^S) = -HĖ - 2(3H^2 + Ḣ)E - ∇^2A/a^2 + Ä + 6HȦ + H∇^2Ḃ - 2H/a∇^2F -δΛ ,
0 = ∂_i∂_j(2κ^2a^2π^S + E + A - a^2B̈ - 3aȧḂ + 2aḞ + 4ȧF) .
On the other hand, the energy–momentum (non) conservation given by Eq. (<ref>) will be now written as
-δ J_0/κ^2 = δρ̇ + 3H(δρ + δ p) + ∇^2[ (ρ̅ + p̅)/a( v/a - F ) + Hπ^S ] + (ρ̅ + p̅)/2(3Ȧ + ∇^2Ḃ) ,
∂_iδ J^S/κ^2 = ∂_i{δ p + ∇^2π^S + ∂_t[ (ρ̅ + p̅)v ] + 3H(ρ̅ + p̅)v + (ρ̅ + p̅)/2E } ,
where we have decomposed the energy–momentum current violation perturbation in the same way as the other perturbed quantities, i.e., δ J_μ = (δ J_0 , δ J_i), and δ J_i = ∂_iδ J^S + δ J_i^V with ∂_iδ J_i^V=0. We consider only the scalar modes δ J_0 and δ J^S.
The unimodular constraint at linear regime of perturbations on the determinant of the metric given by Eq. (<ref>) can be written as
3A + ∇^2B + E = 0 ,
which coincides with that reported by <cit.> in their respective notations.
Notice that the Ricci scalar (<ref>) is then written as
R = 6[ ( ȧ/a)^2 + ä/a] - 6[ ( ȧ/a)^2 + ä/a]E - 3ȧ/aĖ - ∇^2E/a^2 - 6/aȧ/a∇^2F - 2/a∇^2Ḟ - 2/a^2∇^2A
+4ȧ/a( 3Ȧ + ∇^2Ḃ) + 3Ä + ∇^2B̈ ,
and the perturbed energy–momentum current violation δ J_μ = (1/4)∂_μ(δ R + κ^2δ T) is given by
δ J_μ = 1/4∂_μ{ - 6[ ( ȧ/a)^2 + ä/a]E - 3ȧ/aĖ - ∇^2E/a^2 - 6/aȧ/a∇^2F - 2/a∇^2Ḟ - 2/a^2∇^2A .
. + 4ȧ/a( 3Ȧ + ∇^2Ḃ)
+ 3Ä + ∇^2B̈ + κ^2( 3δ p - δρ)} .
From the above expression we notice that, besides the fact that the perturbed energy–momentum current violation δ J_μ is a function of the scalar metric perturbations A , B , E , F , as well as of the perturbed matter quantities δρ and δ p, it follows that
δ J_0 = ∂_0δ J^S .
The set of equations (<ref>)–(<ref>) constitutes the relativistic linear perturbation equations describing the evolution of small fluctuations of a perfect fluid in an expanding universe within the framework of UG.
§ FIXING THE GAUGE
The theory of General Relativity is invariant under diffeomorphisms, which means that the equations remain the same under general coordinate transformations. On the other hand, we have set in the previous Section that the geometry will be described by the sum of two metric tensors: one describing the background spacetime, g̅_μν, which we have fixed to be FRW (<ref>), and the other, h_μν, representing the small perturbations of the spacetime. Then, since the theory is invariant under diffeomorphisms and the background metric g̅_μν is fixed, the components of the metric tensor for the perturbations h_μν are not unique. In other words, we can choose how to fix the perturbations of the metric.
Consider the following coordinate transformation
x^μ→ x^'μ = x^μ + ϵ^μ(x) ,
with ϵ^μ(x) a small quantity like the other perturbations h_μν , δρ , etc., and primes ( ^' ) label the new coordinates. Whereas in GR the 4–vector ϵ^μ=(ϵ^0 , ϵ^i) is arbitrary, in the case of UG it satisfies
∇_μϵ^μ = 0 ,
which reflects the rigidity of the spacetime volume under the unimodular condition. Notice that ϵ^0=-ϵ_0 and ϵ^i = a^-2ϵ_i. Developing the above expression we have
∇_μϵ^μ = ∂_μϵ^μ + Γ̅^μ_μνϵ^ν= ϵ̇_0 - ∂_iϵ_i/a^2 + 3Hϵ_0 = 0 .
Additionally, with the coordinate transformation (<ref>) the metric will transform as
g^'_μν(x^') = g_λκ∂ x^λ/∂ x^'μ∂ x^κ/∂ x^'ν .
Since we are in a scenario in which only the perturbed metric is affected by a coordinate transformation (the unperturbed metric is given by the FRW line element), we implement gauge transformations and attribute the whole change in g_μν to a change in h_μν. Therefore, any change of coordinates Δ h_μν(x) on the perturbation of the metric of the form h_μν(x) → h_μν(x) + Δ h_μν(x) must leave the field equations invariant[This is the gravitational analogue of the electromagnetic potentials φ and A⃗: under gauge transformations, both the electric field E⃗ and the magnetic field B⃗ remain the same, leaving the Maxwell equations invariant.].
The change on the perturbation is defined as follows
Δ h_μν(x) ≡ g^'_μν(x) - g_μν(x) ,
which written in terms of its components once inserted Eq. (<ref>), and after expanding up to first order in perturbations, it can be shown that it is obtained
Δ h_00 = -2ϵ̇_0 ,
Δ h_i0 = -ϵ̇_i - ∂_i ϵ_0 + 2Hϵ_i ,
Δ h_ij = - ∂_i ϵ_j -∂_j ϵ_i + 2aȧδ_ijϵ_0 .
Analogous to Δ h_μν(x), the change on the perturbation of the energy–momentum tensor will be
Δδ T_00 = 2ρ̅ϵ̇_0 + ρ̇̅̇ϵ_0 ,
Δδ T_i0 = -p̅ϵ̇_i + ρ̅∂_i ϵ_0 + 2p̅Hϵ_i ,
Δδ T_ij = -p̅( ∂_i ϵ_j + ∂_j ϵ_i ) + ∂/∂ t(a^2p̅)δ_ijϵ_0 .
Following the same procedure, but this time applied to the energy–momentum current violation, we have
Δδ J_μ(x) = -J̅_λ(x)∂ϵ^λ/∂ x^μ - ∂J̅_μ/∂ x^λϵ^λ ,
whose components are given by
Δδ J_0 = 2J̅_0ϵ̇_0 + J̅_i/a^2( 2Hϵ_i - ϵ̇_i ) .
Δδ J_i = J̅_0∂_iϵ_0 + J̇̅̇_iϵ_0 - a^-2( J̅_j∂_iϵ_j + ϵ_j∂_j J̅_i ) .
Since we have chosen comoving observers for the background (see the four–velocity in Eq. (<ref>)), it can be shown that the energy–momentum current violation is given by
J̅_μ = 1/4∇_μ( R̅ + κ^2T̅) = 1/4∇_μ[ 4Λ̅(t) ] ⇒ J̅_μ = [ Λ̇̅̇(t) , 0 , 0 , 0 ] ,
and then, Eq. (<ref>) gets reduced in the following simpler form
Δδ J_0 = 2J̅_0ϵ̇_0 , Δδ J_i = J̅_0∂_iϵ_0 .
To be able to classify these gauge transformation into scalar, vector and tensor components, let us decompose the spatial part of ϵ^μ into the gradient of a scalar ϵ^S and a divergenceless vector ϵ_i^V as follows
ϵ_i = ∂_i ϵ^S + ϵ_i^V , with ∂_iϵ_i^V = 0 ,
and then, from Eq. (<ref>) we obtain
ϵ̇_0 - ∇^2ϵ^S/a^2 + 3Hϵ_0 = 0 .
Therefore, Eq. (<ref>) and (<ref>) give the gauge transformations of the metric components (<ref>) and energy–momentum tensor (<ref>) respectively, and the scalar modes of the coordinate transformation obey (<ref>). For the metric perturbation we have
Δ A = 2ȧ/aϵ_0 , Δ B = -2/a^2ϵ^S , Δ C_i = -1/a^2ϵ_i^V , Δ D_ij = 0 , Δ E = 2ϵ̇_0 ,
Δ F = 1/a( -ϵ_0 - ϵ̇^S + 2ȧ/aϵ^S ) , Δ G_i = 1/a( -ϵ̇_i^V + 2ȧ/aϵ_i^V ) ,
while for the energy–momentum tensor, the gauge transformations are given by
Δδρ = ρ̇̅̇ϵ_0 , Δδ p = ṗ̅̇ϵ_0 , Δ v = -ϵ_0 ,
and the other terms are gauge invariants, this is
Δπ^S = Δπ_i^V = Δ_ij^T = Δδ u_i^V = 0 .
With expressions (<ref>) and (<ref>) we can fix the gauge, that is, we can choose particular values of the components of ϵ^μ(x) to close the system of equations unambiguously. As was said before, our interest lies in the scalar perturbations, in which case the most general line element is written as
ds^2 = -(1+E)dt^2 + 2a∂_iFdtdx^i + a^2[ (1+A)δ_ij + ∂_i∂_j B ]dx^idx^j ,
and there are several choices we can consider to fix them. We want to analyze two gauges that are broadly used in the literature for cosmological perturbations: Newtonian gauge and Synchronous gauge. For a detailed study of these gauges in GR see <cit.>.
§.§ “Newtonian” gauge: B^'=0 and F^'=0
The gravitational potentials E , F , A , B in (<ref>) are general non–null solutions of the perturbed cosmological equations (<ref>),(<ref>),(<ref>), and (<ref>). In this gauge, also known as the conformal/longitudinal gauge, we choose ϵ^S such that B^'=0 and then ask for ϵ_0 such that F^'=0, where primed quantities label the gravitational potentials in the new coordinates. First, let us show that it is possible to fix this gauge unambiguously, i.e., that the scalar components ϵ_0 and ϵ^S of the vector ϵ^μ(x) are equal to zero after performing coordinate transformations once the conditions requested above are satisfied. From (<ref>) we have
Δ B = B^'-B = -2/a^2ϵ^S ,
Δ F = F^'-F = 1/a( -ϵ_0 - ϵ̇^S + 2ȧ/aϵ^S ) .
Solving for ϵ_0 and ϵ^S from (<ref>) we obtain that the conditions B^'=0 and F^'=0 are satisfied when
ϵ^S(t,x⃗) = 1/2a^2(t)B(t,x⃗) , ϵ_0(t,x⃗) = a(t)F(t,x⃗) + 1/2a^2(t)Ḃ(t,x⃗) .
Now, by performing a new coordinate transformation ϵ̃^μ(x), and requiring to remain in the Newtonian gauge, this is, choosing ϵ̃^S such that B^''=0 and ϵ̃_0 such that F^''=0, we have
Δ B = B^''-B^' = -2/a^2ϵ̃^S ,
Δ F = F^''-F^' = 1/a( -ϵ̃_0 - ϵ̇̃̇^S + 2ȧ/aϵ̃^S ) ,
where, since we already have that B^'=F^'=0, there is no other possible choice of coordinate transformation than ϵ̃_0 = 0 and ϵ̃^S = 0 and then, the remaining variables are totally determined. Therefore, from (<ref>) the scalar gravitational potentials satisfy:
Δ A = A^''-A^' = 2ȧ/aϵ̃_0 = 0 ⇒ A^'' = A^'≠ 0 ,
Δ B = B^''-B^' =-2/a^2ϵ̃^S = 0 ⇒ B^'' = B^' = 0 ,
Δ E = E^''-E^' = 2ϵ̇̃̇_0 = 0 ⇒ E^'' = E^'≠ 0 ,
Δ F = F^''-F^' = 1/a( -ϵ̃_0 - ϵ̇̃̇^S + 2ȧ/aϵ̃^S )= 0 ⇒ F^'' = F^' = 0 ,
and then the only non–null gravitational potentials in this gauge are A and E. Thus, this gauge is completely fixed, and there is no remaining freedom to make any additional transformation. It was not necessary to implement the unimodular condition (<ref>) in order to fix this gauge. However, it will be taken into account in the equations of motion.
§.§ “Synchronous” gauge: E^'=0 and F^'=0
The choice for this gauge consists in fixing ϵ_0 such that E^'=0, and then choose ϵ^S such that F^'=0. Using (<ref>) we have
Δ E = E^' - E = 2ϵ̇_0 ,
Δ F = F^'-F = 1/a( -ϵ_0 - ϵ̇^S + 2ȧ/aϵ^S ) ,
from where it is obtained
ϵ_0(t,x⃗) = f_1(x⃗) - 1/2∫ E(t,x⃗)dt , ϵ^S(t,x⃗) = a^2(t)[f_2(x⃗) - ∫a(t)F(t,x⃗)+ϵ_0(t,x⃗)/a^2(t)dt] .
Now, considering a new coordinate transformation ϵ̃^μ(x), and again requiring to remain in the synchronous gauge choosing ϵ̃_0 such that E^''=0 and ϵ̃^S such that F^''=0, we have
Δ E = E^''-E^' = 2ϵ̇̃̇_0 ,
Δ F = F^''-F^' = 1/a( -ϵ̃_0 - ϵ̇̃̇^S + 2ȧ/aϵ̃^S ) .
This time, it is found that
ϵ̃_0(x⃗) = f_3(x⃗) , ϵ̃^S(t,x⃗) = a^2(t)[f_4(x⃗) - ϵ̃_0(x⃗)∫dt/a^2(t)] .
In order to completely fix the synchronous gauge, we have to determine in some way the arbitrary scalar functions f_3 and f_4. We can perform a new coordinate transformation, but it can be proved that successive gauge transformations lead to the same mathematical structure of (ϵ̃_0, ϵ̃^S), with a new couple of spatial functions. For instance, it can be shown that for a third gauge transformation (ϵ̃̃̃_0,ϵ̃̃̃^S) it is possible to obtain
ϵ̃̃̃_0(x⃗) = f_5(x⃗) , ϵ̃̃̃^S(t,x⃗) = a^2(t)[f_6(x⃗) - ϵ̃̃̃_0(x⃗)∫dt/a^2(t)] .
Therefore, we are always left with two arbitrary spatial functions, and the synchronous gauge remains ambiguous. Notwithstanding, all the spatial functions in ϵ̃^S, ϵ̃̃̃^S and so on affect only the initial coordinate labelling, and it is only the spatial function in the time components ϵ̃_0, ϵ̃̃̃_0, etc., which remains as a spurious degree of freedom and will have repercussions on physical quantities if it is not properly determined <cit.>. We can then safely keep the coordinate transformations (<ref>) and (<ref>) as
ϵ̃_0(x⃗) = f_3(x⃗) , ϵ̃^S(t,x⃗) = -a^2(t) ϵ̃_0(x⃗)∫dt/a^2(t) ,
ϵ̃̃̃_0(x⃗) = f_5(x⃗) , ϵ̃̃̃^S(t,x⃗) = -a^2(t) ϵ̃̃̃_0(x⃗)∫dt/a^2(t) ,
respectively, and so on for successive coordinate transformations. Then, we have to deal with only one arbitrary function. Notice that if f_5=0, then both ϵ̃̃̃_0=0 and ϵ̃̃̃^S=0, and the gauge is completely fixed. The standard approach in GR to handle this coordinate ambiguity is to move to the CDM frame of reference, i.e., to choose a coordinate system comoving with the CDM fluid. From Eq. (<ref>), and for the GR case where δ J^S=0, we have
δ p^' + ∇^2π^S ' + ∂_t[ (ρ̅ + p̅)v^'] + 3H(ρ̅ + p̅)v^' + (ρ̅ + p̅)/2E^' = 0 ,
which in the synchronous gauge, where we have already asked for new coordinates with E^'=0, reduces to
δ p^' + ∇^2π^S ' + ∂_t[ (ρ̅ + p̅)v^'] + 3H(ρ̅ + p̅)v^' = 0 ,
and when considering the CDM fluid there is neither pressure nor anisotropic stress. Then, the previous equation applied to CDM is
ρ̇̅̇_CDMv^'_CDM + ρ̅_CDMv̇^̇'̇_CDM +3Hρ̅_CDMv^'_CDM = 0 .
In the case of GR, the energy–momentum conservation at background level for CDM is (see Eq. (<ref>) with J̅_0 = 0),
ρ̇̅̇_CDM = -3Hρ̅_CDM ,
where Eq. (<ref>) gets reduced to
ρ̅_CDMv̇^̇'̇_CDM = 0 ⇒ v^'_CDM = f(x⃗) ,
and then, in GR the CDM peculiar velocity is a function of spatial coordinates only. This is a crucial result in order to completely fix the synchronous gauge in GR, because now we can consider the coordinate transformation for v_CDM as follows: from (<ref>) we have
Δ v_CDM = v_CDM^' - v_CDM = -ϵ_0 ⇒ v_CDM^' = v_CDM -ϵ_0 ,
where ϵ_0 is given by (<ref>). After a new change of coordinates, we have
Δ v_CDM = v_CDM^'' - v_CDM^' = -ϵ̃_0 ⇒ v_CDM^'' = v_CDM^' -ϵ̃_0 ,
but as we have already seen from Eqs. (<ref>) and (<ref>), both v_CDM^' and ϵ̃_0 are functions of the spatial coordinates only. Thus, we can choose ϵ̃_0 = v_CDM^' in order to have v_CDM^'' = 0, and then we will be in a reference frame comoving with the CDM fluid. Now, under a new change of coordinates, and requiring to remain in the CDM fluid reference frame, which is given by the condition v_CDM^'''=0, we obtain
Δ v_CDM = v_CDM^''' - v_CDM^'' = -ϵ̃̃̃_0 ⇒ v_CDM^''' = v_CDM^'' -ϵ̃̃̃_0 ,
but since v_CDM^''' = v_CDM^'' = 0, there is no other possible choice for ϵ̃̃̃_0 than ϵ̃̃̃_0=0. This completely fixes the synchronous gauge in GR, as can be seen from Eq. (<ref>).
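As a side check, the residual synchronous-gauge transformations written above can be verified symbolically; the short sympy snippet below is our own (not part of the paper) and confirms that ϵ_0 = f(x⃗) together with ϵ^S = -a^2 f(x⃗)∫ dt/a^2 gives Δ E = Δ F = 0, so the conditions E^'=F^'=0 are indeed preserved by the residual freedom.

import sympy as sp

t, x = sp.symbols('t x')
a = sp.Function('a', positive=True)(t)
f = sp.Function('f')(x)                    # arbitrary spatial function

I = sp.Integral(1 / a**2, t)               # unevaluated time integral
eps0 = f                                   # epsilon_0(x)
epsS = -a**2 * f * I                       # epsilon^S(t, x)

delta_E = 2 * sp.diff(eps0, t)
delta_F = (-eps0 - sp.diff(epsS, t) + 2 * sp.diff(a, t) / a * epsS) / a
print(sp.simplify(delta_E), sp.simplify(delta_F))   # both vanish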
Whereas in GR this gauge is fixed as we have shown above, in UG the scenario is completely different. Basically, the issue we have to deal with is based on the fact that it is not possible to move to the CDM reference frame due to the energy–momentum current violation J_μ. From Eq. (<ref>), we have
δ p^' + ∇^2π^S ' + ∂_t[ (ρ̅ + p̅)v^'] + 3H(ρ̅ + p̅)v^' + (ρ̅ + p̅)/2E^' = δ J^S '/κ^2 .
In the synchronous gauge, and for the CDM fluid, the previous equation is written as
ρ̇̅̇_CDMv^'_CDM + ρ̅_CDMv̇^̇'̇_CDM +3Hρ̅_CDMv^'_CDM = δ J^S '/κ^2 ,
but now, in UG the energy–momentum conservation at background level for CDM is (see Eq. (<ref>)),
ρ̇̅̇_CDM = -3Hρ̅_CDM -J̅_0(t)/κ^2 ,
which leads to
v̇^̇'̇_CDM = J̅_0(t)v^'_CDM + δ J^S '/κ^2ρ̅_CDM ,
and then, in UG it is no longer true in general that v^'_CDM is a function of spatial coordinates only. In fact, by solving Eq. (<ref>) we obtain
v^'_CDM = g(x⃗)e^1/κ^2∫J̅_0(t)/ρ̅_CDM(t)dt[1 + 1/κ^2g(x⃗)∫e^-1/κ^2∫J̅_0(t)/ρ̅_CDM(t)dtδ J^S '/ρ̅_CDM(t)dt ] .
Besides, notice that it is not possible to choose a coordinate system where there are no perturbations of the energy–momentum current violation, as we can see in Eq. (<ref>) from the fact that we cannot set any of ϵ_0, ϵ̃_0, ϵ̃̃̃_0, etc., equal to zero. Moreover, we are left with an arbitrary function g(x⃗) that must be determined. Therefore, within the framework of UG, the cosmological perturbations in the synchronous gauge do not allow one to choose the CDM comoving frame, due to the presence of the energy–momentum current violation J̅_0 and its scalar perturbation δJ^S. Thus, this gauge cannot be fixed in this way as in GR.
We can instead consider using the unimodular constraint for perturbations (<ref>): in the synchronous gauge it reads
3A^''' = -∇^2B^''' .
For the gauge transformations (<ref>), it is written as
∇^2ϵ̃̃̃^S(t,x⃗) = 3a(t)ȧ(t)ϵ̃̃̃_0(x⃗) ,
which from Eq. (<ref>) we obtain,
[∫dt/a^2(t) ]∇^2f_5(x⃗) = -[3ȧ(t)/a(t)]f_5(x⃗) .
The differential equation from above can be solved by the method of variable separation, and we have
∇^2f_5(x⃗)/f_5(x⃗) = 𝒞 , -[3ȧ(t)/a(t)]/[∫ dt/a^2(t)]=𝒞 ,
with 𝒞 a constant. The second expression is an integro–differential equation for the scale factor a(t), which can be written as
da(t)/dt=-𝒞a(t)/3∫ dt/a^2(t) ⇒ Ḣ = -𝒞/3a^2(t) ,
which demands a very particular solution of the background equations (<ref>) that does not need to be a physical solution for the expansion of a universe filled with a particular matter content. Moreover, since the scale factor is a solution of the background dynamics, it will not necessarily satisfy (<ref>). In fact, such an equation is far too restrictive to govern the expansion of the universe. For instance, we can consider a general power law solution a(t) = (t/t_0)^p, where t_0 is the present day. It can be shown that this solution is valid only when p=2 and 𝒞=18/t_0^2. However, this constitutes a very particular background evolution for the dynamics of any matter content, as we mentioned above. Doing the same for a late time solution within the UG framework (see Appendix B in <cit.>), where the scale factor is given by a(t) = [Asinh^2(Bt)]^C with A,B,C constants, we have that it is not possible to satisfy (<ref>).
On the other hand, the first expression in Eq. (<ref>) is the Poisson equation with the source term given by the function itself,
∇^2f_5(x⃗) = 𝒞f_5(x⃗) .
Thus, the only way to have a scale factor driven by the background dynamics and, simultaneously, the unimodular condition satisfied at the level of perturbations in this gauge (<ref>), is through the trivial solution f_5(x⃗) = 0. But this is precisely what we need to fix the synchronous gauge, as can be seen from Eq. (<ref>). Therefore, when considering non–gravitational interactions it is not possible to consistently fix the synchronous gauge in UG by choosing the comoving frame of CDM. Instead, the unimodular condition turns out to be what completely fixes this gauge. This has serious repercussions on the implementation of cosmological models based on UG in Boltzmann solvers such as <cit.> and <cit.>, as we will discuss later.
§.§ Alternative gauge choice: B^'=0 and unimodular constraint
This is the approach implemented by <cit.> with the aim of keeping two geometric degrees of freedom, just as in GR perturbations. Moreover, in that work B^'=0 is chosen in order to compare with the Newtonian gauge in GR. Then, the line element under this choice can be written as
ds^2 = -(1-3A^')dt^2 + 2a∂_iF^'dtdx^i + a^2(1+A^')δ_ijdx^idx^j ,
where the only degrees of freedom are A^' and F^'. However, we will show that this choice does not determine these gravitational potentials unambiguously, and spurious effects due to this improperly fixed gauge will affect physical quantities such as the energy density and pressure perturbations.
Similar to the previous procedures, and as in the Newtonian gauge, we ask for ϵ^S such that B^'=0 (see Eq. (<ref>) and (<ref>)), this is
Δ B = B^'-B = -2/a^2ϵ^S ⇒ ϵ^S(t,x⃗) = a^2/2B(t,x⃗) .
Now, instead of asking for ϵ_0 such that F^'=0, we follow the approach of <cit.> by imposing the unimodular condition (<ref>), which in terms of the scalar components of the coordinate transformation ϵ^μ is written as
ϵ̇_̇0̇ + 3ȧ/aϵ_0 = 0 ⇒ ϵ_0(t,x⃗) = f_1(x⃗)/a^3(t) .
It can be seen that we are left with an arbitrary spatial function f_1(x⃗). A new coordinate transformation ϵ̃^μ where B^''=0 leads to
Δ B = B^''-B^' = -2/a^2ϵ̃^S ⇒ ϵ̃^S(t,x⃗) = 0 ,
and the remaining scalar component ϵ̃_0 must be zero to completely fix the gauge. However, once the unimodular condition is imposed one more time, we have
ϵ̇̃̇_0 + 3ȧ/aϵ̃_0 = 0 ⇒ ϵ̃_0(t,x⃗) = f_2(x⃗)/a^3(t) ,
and then we still have an arbitrary spatial function f_2(x⃗). It can be shown that successive coordinate transformations lead to the same result, i.e., ϵ̃̃̃^S = 0, but ϵ̃̃̃_0 always in terms of an arbitrary spatial function. This gauge freedom will affect not only the remaining gravitational potentials, which after such transformations are
F^''(t,x⃗) = F(t,x⃗) - a/2Ḃ(t,x⃗) - f_3(x⃗)/a^4 ,
A^''(t,x⃗) = A(t,x⃗) + 2ȧ/a^4f_3(x⃗) ,
with f_3 = f_1+f_2, but also to the energy density perturbation, which from Eq. (<ref>) is
δρ^'' = δρ + ρ̇̅̇/a^3(t)f_3(x⃗) ,
and similarly for the pressure perturbation Δδ p and the peculiar velocity Δ v transformations. Moreover, the Sachs–Wolfe effect <cit.> has been derived in UG in <cit.> in order to spot differences between GR and UG through possible signatures in the anisotropies of the CMB. In our notation, they obtain that
( -3A^''/2 + δ T/T̅ + aḞ^'') = const ,
but as shown in Eq. (<ref>), both gravitational potentials are not completely determined due to the arbitrary spatial function f_3(x⃗). Therefore, even though the differences obtained in <cit.> between GR and UG are negligible under the assumptions they considered, it is important to study physical observables such as the CMB radiation with a proper gauge choice, without spurious degrees of freedom. This will be discussed in detail in the next Section.
§ PHYSICAL IMPLICATIONS OF COSMOLOGICAL PERTURBATIONS IN UG
Now that we have fixed both the Newtonian and the Synchronous gauge in UG, we are able to write down the dynamical equations for the linear perturbations in each of these gauges. As we will see below, the unimodular condition (<ref>) reduces the degrees of freedom from two gravitational potentials to only one. It is possible to find solutions for the density contrasts in each of these gauges in terms of the corresponding metric scalar perturbation. Besides, we obtain a proper derivation of the Sachs–Wolfe effect within the UG framework in the Newtonian gauge, and we show that there is only a modification in the coefficient of the gravitational potential derivative, while the physical result is exactly the same as that obtained in GR.
§.§ Linear perturbations: Newtonian gauge
The line element (<ref>) is written as[In order to keep the notation simple, notice that we drop out the primes (^') since we already know that it is possible to find a consistent coordinates transformation where E and A are the only physical degrees of freedom for the gravitational perturbations, prior to impose the unimodular constraint.],
ds^2 = -(1+E)dt^2 + a^2 (1+A)δ_ijdx^idx^j ,
but we will use the standard notation for E and A in this gauge, which is given by E≡ 2Φ, and A≡ -2Ψ, and then, the perturbed line element (<ref>) takes the form
ds^2 = -(1+2Φ)dt^2 + a^2 (1-2Ψ)δ_ijdx^idx^j .
The evolution of perturbations given by Eq. (<ref>)–(<ref>) are then written as
κ^2/2(δρ + 3δ p + ∇^2π^S) = ∇^2Φ/a^2 + 3HΦ̇ + 3Ψ̈ + 6HΨ̇ + 6(H^2+Ḣ)Φ + δΛ ,
-κ^2/2(ρ̅ + p̅)∂_i v = H∂_iΦ + ∂_iΨ̇ ,
-κ^2/2(δρ - δ p - ∇^2π^S) = HΦ̇ + 2(3H^2 + Ḣ)Φ - ∇^2Ψ/a^2 + Ψ̈ + 6HΨ̇ + δΛ/2 ,
κ^2a^2∂_i∂_j π^S = ∂_i∂_j(Ψ - Φ) ,
and Eq. (<ref>) for the energy–momentum tensor becomes
δ̇ρ̇ + 3H(δρ + δ p) + ∇^2[ (ρ̅ + p̅)/a^2v + Hπ^S ] - 3(ρ̅ + p̅)Ψ̇ = -δ J_0/κ^2 ,
δ p + ∇^2π^S + ∂_t[ (ρ̅ + p̅)v ] + 3H(ρ̅ + p̅)v + (ρ̅ + p̅)Φ = δ J^S/κ^2 .
We have 6 equations (<ref>)–(<ref>) and 6 variables to determine: δρ , δ p , π^S , v , Φ , and Ψ. In particular, the variables Φ and Ψ differ by the scalar anisotropic term π^S, as can be seen from Eq. (<ref>). In the particular case of a perfect fluid without dissipative corrections, π^S=0, and we obtain Φ = Ψ. However, in UG the choice of coordinates such that B=F=0 reduces the unimodular constraint for perturbations (<ref>) to 3A + E = 0, which in terms of the potentials defined above reads
3Ψ = Φ ,
and the previous equations acquire the form
κ^2/6(δρ + 3δ p + ∇^2π^S) = ∇^2Ψ/a^2 + 4HΨ̇ + Ψ̈ + 6(H^2+Ḣ)Ψ + δΛ/3 ,
-κ^2/2(ρ + p)∂_i v = 3H∂_iΨ + ∂_iΨ̇ ,
-κ^2/2(δρ - δ p - ∇^2π^S) = 6(3H^2 + Ḣ)Ψ - ∇^2Ψ/a^2 + Ψ̈ + 9HΨ̇ + δΛ/2 ,
-κ^2/2a^2∂_i∂_j π^S = ∂_i∂_j Ψ ,
-δ J_0/κ^2 = δ̇ρ̇ + 3H(δρ + δ p) + ∇^2[ (ρ̅ + p̅)/a^2v + Hπ^S ] - 3(ρ̅ + p̅)Ψ̇ ,
δ J^S/κ^2 = δ p + ∇^2π^S + ∂_t[ (ρ̅ + p̅)v ] + 3H(ρ̅ + p̅)v + 3(ρ̅ + p̅)Ψ ,
and the perturbation for the energy–momentum current violation in this gauge is
δ J_μ = 1/4∂_μ{ - 36[ ( ȧ/a)^2 + ä/a]Ψ - 44ȧ/aΨ̇ - 2∇^2Ψ/a^2 - 6Ψ̈ + κ^2( 3δ p - δρ)} .
Notice that we need the presence of the scalar anisotropic stress π^S in order to have a non–null gravitational potential Ψ, as can be seen from Eq. (<ref>). In this sense, strictly speaking, the Newtonian gauge is not recovered in UG. This was already reported in previous literature, and recently by <cit.>. However, different from the approach of that work, we will keep the anisotropic stress term in order to find solutions for physical quantities such as the CDM density contrast δ_CDM≡δρ_CDM/ρ̅_CDM. We then have p̅_CDM = δ p_CDM = 0 but, unlike the standard ΛCDM model, the dark matter component will have π^S_CDM≠ 0. Combining the previous equations, it can be shown that in a matter–dominated era the CDM density contrast in UG in the Newtonian gauge, δ_CDM(new)^UG, is given by
δ_cdm(new)^UG = (4/3)(-2/HΨ̇) - 2[ 1 + (1/6)(k^2/3a^2H^2) ](3)Ψ ,
and it can be seen that it differs only by numerical factors from the GR result (see Eq.(12) in <cit.>),
δ_cdm^GR = -2/HΨ̇ - 2(1 + k^2/3a^2H^2)Ψ ,
where we have used ∇^2→ -k^2 for solutions in Fourier space. Thus, once the gravitational potential Ψ is known, it is possible to follow the cosmological evolution of the CDM fluctuations. Moreover, if a particular model for the non–gravitational interaction is considered at the background level, such information enters the Hubble parameter H through the Friedmann equation (<ref>), and constraints could be put on cosmological models of UG by studying the LSS of the universe through the Matter Power Spectrum (MPS). Also notice that, even when the energy–momentum current violation is neglected and GR is recovered at the background level, the unimodular constraint at the level of linear perturbations changes the evolution of the CDM fluctuations, as can be seen by comparing the coefficients of Eqs. (<ref>) and (<ref>).
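As a simple numerical illustration of this last point, the sketch below (our own, not from any released code) evaluates the GR and UG Newtonian-gauge expressions above for an assumed constant potential Ψ during matter domination (so Ψ̇ = 0, a GR-inspired assumption); the values of H, Ψ and the range of k are arbitrary placeholders.

import numpy as np

def delta_cdm_gr(Psi, Psi_dot, k, a, H):
    return -2.0 * Psi_dot / H - 2.0 * (1.0 + k**2 / (3.0 * a**2 * H**2)) * Psi

def delta_cdm_ug_newtonian(Psi, Psi_dot, k, a, H):
    return (4.0 / 3.0) * (-2.0 * Psi_dot / H) \
           - 2.0 * (1.0 + (1.0 / 6.0) * k**2 / (3.0 * a**2 * H**2)) * 3.0 * Psi

k = np.logspace(-3, 0, 100)             # wavenumbers (arbitrary units)
a, H, Psi, Psi_dot = 1.0, 1.0, 1e-5, 0.0
ratio = delta_cdm_ug_newtonian(Psi, Psi_dot, k, a, H) / delta_cdm_gr(Psi, Psi_dot, k, a, H)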
§.§ Linear perturbations: Synchronous gauge
In this case the perturbed line element is written as
ds^2 = -dt^2 + a^2[ (1+A)δ_ij + ∂_i∂_j B ]dx^idx^j ,
The field equations (<ref>)–(<ref>) in this gauge are given by
κ^2(δρ + 3δ p + ∇^2π^S) = -3Ä - 6HȦ - ∇^2B̈ - 2H∇^2Ḃ + 2δΛ ,
κ^2(ρ̅ + p̅)v = Ȧ ,
κ^2(δρ - δ p - ∇^2π^S) = -∇^2A/a^2 + Ä + 6HȦ + H∇^2Ḃ - δΛ ,
2κ^2a^2π^S = -A + a^2B̈ + 3aȧḂ ,
and Eq. (<ref>) for the energy–momentum tensor now takes the form
δ̇ρ̇ + 3H(δρ + δ p) + ∇^2[ (ρ̅ + p̅)/a^2v + Hπ^S ] + (ρ̅ + p̅)/2(3Ȧ + ∇^2Ḃ) = -δ J_0/κ^2 .
δ p + ∇^2π^S + ∂_t[ (ρ̅ + p̅)v ] + 3H(ρ̅ + p̅)v = δ J^S/κ^2 ,
and the energy–momentum current violation perturbation (<ref>) is given by
δ J_μ = 1/4∂_μ[ -2/a^2∇^2A + 4ȧ/a( 3Ȧ + ∇^2Ḃ) + 3Ä + ∇^2B̈ + κ^2( 3δ p - δρ) ] .
However, in UG we have that the choice of coordinates of Section <ref> leads to the following unimodular constraint (<ref>) for perturbations
3A = -∇^2B ,
and the previous equations are written as follows
κ^2(δρ + 3δ p + ∇^2π^S) = 2δΛ ,
κ^2(ρ̅ + p̅)v = Ȧ ,
κ^2(δρ - δ p - ∇^2π^S) = -∇^2A/a^2 + Ä + 3HȦ - δΛ ,
2κ^2π^S = ∇^2B/3a^2 + B̈ + 3HḂ ,
-δ J_0/κ^2 = δρ̇ + 3H(δρ + δ p) + ∇^2[ (ρ̅ + p̅)/a^2v + Hπ^S ] ,
δ J^S/κ^2 = δ p + ∇^2π^S + ∂_t[ (ρ̅ + p̅)v ] + 3H(ρ̅ + p̅)v ,
with Eq. (<ref>) now written as
δ J_μ = 1/4∂_μ[ -2/a^2∇^2A + κ^2( 3δ p - δρ) ] .
As we have done in the previous case, it is possible to show that in a matter–dominated era the CDM density contrast in UG for the Synchronous gauge δ_CDM(syn)^UG is given by
δ_cdm(syn)^UG = 2k^2/7a^2H^2A ,
where again we have used ∇^2→ -k^2 for solutions in Fourier space. Then, once the solution for the gravitational potential A is known, we have the cosmological evolution of the density contrast.
§.§ Sachs–Wolfe effect: a proper derivation in UG
The approach followed by <cit.> leads to an expression that modifies the GR result by a new term (see Eq. (<ref>)). However, we have shown in Section <ref> that such a new term is not unambiguously determined because the gauge is not properly fixed. In what follows, we present the derivation of the Sachs–Wolfe effect for UG in the Newtonian gauge.
We start by setting the line element for the Newtonian gauge (<ref>):
ds^2 = -(1+2Φ)dt^2 + a^2 (1-2Ψ)δ_ijdx^idx^j ,
where we keep both gravitational potentials in order to compare the final expression with GR, and at the end of the procedure we use the unimodular constraint (<ref>). Following <cit.>, it can be shown that the Boltzmann equation for photons at linear order in UG is given by
∂/∂ t(δ T/T̅) + p̂^i/a∂/∂ x^i(δ T/T̅) - ∂Ψ/∂ t + p̂^i/a∂Φ/∂ x^i = 0 ,
where the right hand side of the previous equation neglects the collision term since we are interested in the moment that photons are already decoupled. The mean temperature and its fluctuations are denoted respectively by T̅ and δ T, whereas p̂^i is the unitary 3–momentum. In order to apply the same differential operator to both gravitational potentials, we add new partial derivatives as follows
∂/∂ t(δ T/T̅) + p̂^i/a∂/∂ x^i(δ T/T̅) - ∂Ψ/∂ t + ∂Φ/∂ t + p̂^i/a∂Φ/∂ x^i = ∂Φ/∂ t
( ∂/∂ t + p̂^i/a∂/∂ x^i)( δ T/T̅ + Φ) = ∂Φ/∂ t + ∂Ψ/∂ t .
At this point, notice that once the gravitational potentials are equal the standard result is obtained, and the right hand side of the previous equation is 2∂Φ/∂ t (see Eq.(9.20) of <cit.>). Notwithstanding, the latter is true in GR where no anisotropic stress is present and the condition Φ = Ψ is satisfied. In our case, the anisotropic stress can not be set to zero in the Newtonian gauge, as we have discussed in previous Sections. Nonetheless, we have to impose the unimodular constraint of Eq. (<ref>) in the Newtonian gauge, which reads Ψ = Φ/3. Thus, the relation between the temperature fluctuations δ T/T̅ and the gravitational potential Φ in UG is
( ∂/∂ t + p̂^i/a∂/∂ x^i)( δ T/T̅ + Φ) = 4/3∂Φ/∂ t ,
and it is only a factor of 2/3 the difference with respect to the GR result. After recombination the universe is matter–dominated and then we can approximate ∂Φ/∂ t ≃ 0. This leads to the standard expression of the Sachs–Wolfe effect:
( δ T/T̅ + Φ) = const .
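For completeness, a short explicit check of this factor, using only the unimodular constraint Ψ = Φ/3 stated above: the right–hand side of the rearranged Boltzmann equation becomes
∂Φ/∂ t + ∂Ψ/∂ t = ∂Φ/∂ t + 1/3∂Φ/∂ t = 4/3∂Φ/∂ t ,
whereas in GR, where Ψ = Φ, the same right–hand side gives 2∂Φ/∂ t; the ratio of the two is the factor of 2/3 quoted above.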
Therefore, whereas previous literature has found modifications due to the presence of a new gravitational potential in Eq. (<ref>) (see Eq. (<ref>)), we have shown that such a modification propagates spurious degrees of freedom due to the gauge choice. Our result shows that there is effectively no distinction between GR and UG when looking at the Sachs–Wolfe effect, and that UG does not induce new terms in the relation between the temperature fluctuations δ T/T̅ and the gravitational potential Φ.
§ FINAL REMARKS
The theory of Unimodular Gravity in its original formulation brings interesting new features due to the constraint on the spacetime four–volume, reducing general coordinate transformations to volume–preserving diffeomorphisms. The natural appearance of a non–conserved energy–momentum tensor allows one to generate new non–gravitational interactions, which can be used to elucidate the behavior of the dark sector in cosmological models.
We have analyzed whether the most common gauges used in cosmology are properly fixed, since previous work on linear perturbations within the framework of UG has not discussed this crucial aspect in the study of cosmological perturbations. We have demonstrated that it is possible to fix both the Newtonian and Synchronous gauges in UG, although the consequences for the matter fields are different from those in GR: in particular, CDM must have a non–null anisotropic stress when working in the Newtonian gauge, whereas it is not possible to choose a comoving observer with the CDM fluid in the Synchronous one.
Even when the dynamics of the perturbations change with respect to GR due to the unimodular constraint (we are left with only one gravitational potential instead of two), we have shown that it is possible to obtain the fluctuations of the CDM energy density as a function of the only gravitational degree of freedom in both the Newtonian and Synchronous gauges. In fact, in the same line of ideas developed by Ma & Bertschinger <cit.>, we can obtain the equations in terms of the fluid variables: the density contrast δ and the velocity divergence θ, given by
δ≡δρ/ρ̅ , θ≡∂_i v_i/a = 1/a∂_i (∂_iv + v_i^V) = ∇^2v/a ,
where in the last expression, we only consider the scalar mode of peculiar velocity. From Eqs. (<ref>) and (<ref>), where the unimodular constraint has not been imposed yet, δ and θ are given in both gauges respectively by,
Newtonian gauge:
δ^' = -(1+ω)( θ -3Ψ^') - 3a^'/a( δ p/δρ - ω)δ + aJ̅_0 δ - ρ̅δ J_0/κ^2 ρ̅ ,
θ^' = -a^'/a(1-3ω)θ - ω^'/1+ωθ + δ p/δρ/1+ωk^2δ -k^2σ + k^2Φ +aJ̅_0/κ^2ρ̅θ - k^2δ J^S/κ^2ρ̅(1+ω) .
Synchronous gauge:
δ^' = -(1+ω)( θ + h^'/2) - 3a^'/a( δ p/δρ - ω)δ + aJ̅_0 δ - δ J_0/κ^2 ρ̅ ,
θ^' = -a^'/a(1-3ω)θ - ω^'/1+ωθ + δ p/δρ/1+ωk^2δ -k^2σ +aJ̅_0/κ^2ρ̅θ - k^2δ J^S/κ^2ρ̅(1+ω) ,
where, for the sake of comparison with <cit.>, this time the prime indicates the derivative with respect to conformal time τ, related to cosmic time t through dτ = dt/a. We also identify our anisotropic term π^S with σ through the relation[In order to coincide in notation with <cit.>, we have redefined the traceless anisotropic stress by adding the term -δ_ij∇^2π^S/3 in Eq. (<ref>).] σ≡ -2∇^2 π^S/3ρ̅(1+ω), and the trace part of the spatial metric perturbation in conformal time is related to our gravitational potentials in the synchronous gauge by h = h_ii≡ 3A + ∇^2B. The previous equations are the UG version of Eqs. (29) and (30) from <cit.>, where new terms due to the energy–momentum current violation J_μ are present. From what we have learned in previous Sections, we have to impose the unimodular constraint, taking into consideration the new features arising in UG from the analysis of gauge fixing: for instance, in the Newtonian gauge we have to keep the anisotropic term in order to have gravity perturbations (see discussion in Section <ref>). On the other hand, once the Synchronous gauge is fixed, it is not possible to have a comoving observer with the CDM fluid, and then the velocity divergence cannot be set equal to zero (see discussion in Section <ref>). Thus, considering the corresponding unimodular constraint in each gauge (3Ψ = Φ and 3A = -∇^2B for the Newtonian and Synchronous gauge respectively) for a CDM–dominated universe, the previous equations for the evolution of the density contrast δ and velocity divergence θ are written as:
Newtonian gauge:
δ^' = -θ + Φ^' + aJ̅_0 δ - ρ̅δ J_0/κ^2 ρ̅ ,
θ^' = -a^'/aθ - k^2σ + k^2Φ +aJ̅_0/κ^2ρ̅θ - k^2δ J^S/κ^2ρ̅(1+ω) .
Synchronous gauge:
δ^' = -θ + aJ̅_0 δ - δ J_0/κ^2 ρ̅ ,
θ^' = -a^'/aθ +aJ̅_0θ - k^2δ J^S/κ^2ρ̅ .
In the case of Eqs. (<ref>), the differences with respect to GR and the standard ΛCDM model are the presence of the energy–momentum current violation J_μ, the anisotropic term σ, and the fact that we have only one gravitational potential Φ. Notice that even when assuming ∇_μ T^μν = 0 , and then J̅_0 = δ J_0 = δ J^S = 0 , the perturbation equations do not recover the GR case. This is precisely due to both the unimodular constraint and the anisotropic term. Similarly, the dynamics of the perturbed equations (<ref>) for the Synchronous gauge in UG is very different from that of GR. Even when neglecting the energy–momentum current violation, we can observe that the source of the density contrast evolution is not the trace h (as is the case in ΛCDM, see Eq.(42) in <cit.>), but the velocity divergence θ. This is another way to understand why it is not possible to choose an observer comoving with CDM: there will be no growth of structures if θ = 0. Of course, in the general case we are studying, the presence of the energy–momentum current violation and its scalar perturbations will affect the dynamics of structure formation in both gauges as well.
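To illustrate the last point, the following is a minimal numerical toy (our own sketch, not part of the original analysis): it integrates the Synchronous–gauge equations above with the energy–momentum current violation switched off (J̅_0 = δ J_0 = δ J^S = 0) on an assumed matter–dominated background, where a ∝τ^2 so that a^'/a = 2/τ. With θ = 0 the density contrast stays frozen, while any non–zero initial θ makes δ evolve, which is the behavior described above.

import numpy as np
from scipy.integrate import solve_ivp

# Toy version of the UG synchronous-gauge CDM system with J = 0:
#   delta' = -theta,   theta' = -(a'/a) theta,   with a'/a = 2/tau (matter domination).
def rhs(tau, y):
    delta, theta = y
    return [-theta, -(2.0 / tau) * theta]

taus = np.linspace(1.0, 100.0, 500)
for theta0 in (0.0, 1e-3):   # comoving observer vs. a small initial velocity divergence
    sol = solve_ivp(rhs, (taus[0], taus[-1]), [1e-5, theta0], t_eval=taus, rtol=1e-8)
    print(f"theta0 = {theta0:.0e}: delta goes from {sol.y[0][0]:.2e} to {sol.y[0][-1]:.2e}")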
Therefore, UG leaves imprints on the properties of the dark sector, and the implications for the linear perturbations are different from those in GR. Specifically, the physical consequences of linear perturbations in UG due to the geometric constraint imposed by the unimodular condition are translated into non–standard features of the cold dark matter component: if working in the Newtonian gauge, CDM must be anisotropic, as this term is directly proportional to the only gravitational degree of freedom (see Eqs. (<ref>) and (<ref>)); if working in the Synchronous gauge, it is not possible to choose a comoving observer with the CDM fluid, as its velocity divergence drives the formation of structures (see Eqs. (<ref>)). Besides, we have new contributions from the energy–momentum current violation, in the form of the background term J̅_0 and the scalar modes δ J_0 , δ J^S . The background dynamics have been solved and studied under numerical and statistical analysis by considering phenomenological models of diffusion to describe the new non–gravitational interactions in the dark sector <cit.>. However, the linear perturbations in UG lead to a novel level of complexity: we have focused on a matter–dominated universe in order to extract some information about the process of structure formation through the CDM density contrast, but this is not enough if one wants to reproduce observables such as the CMB or the MPS. One of the new issues to handle is the information about the scalar perturbations of the energy–momentum current violation, δ J_0 and δ J^S. Perhaps a naive way to proceed is to directly consider fluctuations of the diffusion models, or, as was done for the background in <cit.>, to propose a phenomenological model for such perturbations.
Thus, more work has to be done in order to properly implement a cosmological model based on UG in a Boltzmann solver such as CAMB or CLASS. Even when the conservative approach of energy–momentum conservation for the ordinary matter content (photons, neutrinos, baryons) is assumed, the unimodular constraint changes the dynamics of linear perturbations for all species. In other words, while J_μ = 0 for ordinary matter, the curvature produced by the only gravitational potential in UG will change the dynamics of the matter fields. Even more, Boltzmann solvers are written for GR in the Synchronous gauge[See <https://cosmologist.info/notes/CAMB.pdf> and <http://www.class-code.net>. In particular, CLASS allows working in both the Synchronous and Newtonian gauges. In any case, the numerical implementation must be applied consistently by considering the gravitational effects of the unimodular constraint at the linear perturbations level for all matter components.], and strictly speaking such a gauge does not exist in UG, since it is not possible to consistently choose a CDM comoving observer by setting its velocity divergence θ_CDM = 0 . With this in mind, we consider that for any attempt to reproduce CMB and MPS observations for cosmological models within the framework of UG, an analysis of cosmological perturbations such as <cit.> must be carried out, in order to consistently solve the dynamics of linear perturbations for all the matter and energy content gravitating as UG dictates. This work constitutes a first step in this direction by considering only the evolution of the dark matter component at linear order in perturbations.
F.X.L.C. acknowledges Beca CONACYT. U.N. and F.X.L.C. acknowledge PROYECTO CIENCIA DE FRONTERA CF 2019/2558591 for financial support.
|
http://arxiv.org/abs/2307.04137v1 | 20230709093305 | A Novel Explainable Artificial Intelligence Model in Image Classification problem | [
"Quoc Hung Cao",
"Truong Thanh Hung Nguyen",
"Vo Thanh Khang Nguyen",
"Xuan Phong Nguyen"
] | cs.CV | [
"cs.CV",
"cs.AI"
] |
A Novel Explainable Artificial Intelligence Model in Image Classification problem
Hung Quoc Cao
FSO.QNH.QAI.AIC
FPT Software
Binh Dinh, Vietnam
[email protected]
Hung Truong Thanh Nguyen1, 2
1FSO.QNH.QAI.AIC
FPT Software
2Department of Computer Science
Frankfurt University of Applied Sciences
Frankfurt am Main, Germany
[email protected]
Khang Vo Thanh Nguyen
FSO.QNH.QAI.AIC
FPT Software
Binh Dinh, Vietnam
[email protected]
Phong Xuan Nguyen
Graduate School of Engineering Advanced Interdisciplinary Studies
The University of Tokyo
[email protected]
August 12, 2023
===============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
In recent years, artificial intelligence has been applied widely in many different fields and has a profound and direct impact on human life. Following this is the need to understand the principles behind a model's predictions. Since most of the current high-precision models are black boxes, neither the AI scientist nor the end-user deeply understands what is going on inside these models. Therefore, many algorithms have been studied for the purpose of explaining AI models, especially those for the image classification problem in the field of computer vision, such as LIME, CAM, and GradCAM. However, these algorithms still have limitations, such as LIME's long execution time and the lack of concreteness and clarity in CAM's explanations. Therefore, in this paper, we propose a new method called Segmentation - Class Activation Mapping (SeCAM) that combines the advantages of the algorithms above while overcoming their disadvantages. We tested this algorithm with various models, including ResNet50, Inception-v3, and VGG16, on the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) data set. The results are outstanding: the algorithm meets all the requirements for a specific explanation in a remarkably short time.
Explainable Artificial Intelligence (XAI), machine learning, explanation, transparency, interpretability
§ INTRODUCTION
In recent years, along with the rapid development of deep learning, more and more new models have been created with outstanding accuracy in the field of computer vision. However, those models have complex network structures whose black boxes users and scientists still cannot fully interpret and understand <cit.>. Arising from the need to explain to users and experts the reasons behind a model's decisions or predictions, Explainable Artificial Intelligence (XAI) was born and is drawing more and more attention in AI. Various XAI methods have been introduced with different approaches. Adadi and Berrada <cit.> presented several ways to classify XAI algorithms, based on scope, time of information extraction, or the AI model. In scope-based classification, Global and Local are the two variations according to the scope of interpretability. Global XAI methods try to understand the entire model behavior, while Local XAI methods want to understand a single prediction. Global interpretability techniques lead users to trust a model; in reverse, local techniques lead users to believe in a prediction. Also, they try to identify each feature's contributions in the input towards a particular output <cit.>. Based on the time of information extraction, we have two classes: Post-hoc and Intrinsic. Post-hoc methods explain the model after it has been trained; these methods use model predictions and parameters to explain. Post-hoc methods can be applied to models without changing the model architecture, and therefore they can be applied to pretrained models. Intrinsic methods modify the original model architecture: the model is adjusted to have a new layer with interpretable constraints. Finally, another important way to classify XAI methods is by whether they are model-agnostic or model-specific. If an XAI method can be used for any type of model, it is classified as Agnostic. If an XAI method can be applied to only a single type or a number of classes of models, it is classified as Specific.
Although these methods can give satisfactory explanations, they still have many limitations and need to be improved. For the image classification problem, many XAI methods have been proposed; each method has a different approach, and the outputs are therefore also different. For example, the output of LIME <cit.> is the set of superpixels that most affect the model's prediction. SHAP <cit.> shows the impact on the prediction, either positive or negative, of all superpixels; those superpixels come from a previous perturbation step. With visualization algorithms like CAM <cit.>, SISE <cit.>, and Saliency Maps <cit.>, the output shows the user a heatmap on the original image.
From the knowledge we have gained through researching and comparing the three algorithms LIME, SHAP, and CAM, we recognize that LIME's explanation, the regions with the most influence, is the most intuitive and accurate for the image classification problem. The regions explained by LIME are approximately equivalent to how humans perceive the object. Nevertheless, the calculation time of LIME is too high: when we explain an image, the average time of LIME is about 200 seconds greater than that of CAM. However, the choice of the number of most influential regions still depends on the person and the specific image <cit.>. Recent work has proposed a method to improve the computation speed of LIME, namely Modified Perturbed Sampling for LIME (MPS-LIME) <cit.>. In their experimental results with Google's pre-trained Inception neural network on the ImageNet database, the runtime of MPS-LIME is nearly half the runtime of LIME, but the calculation time is still incredibly long. In contrast, CAM does not suffer from these limitations, but the high-impact areas of CAM are far broader than the human-defined bounding box. Moreover, CAM must modify the original model's layers to work. In this work, we propose a new local post-hoc method of XAI for the image classification problem, called Segmentation - Class Activation Mapping (SeCAM). This method selects the regions that affect the model's prediction as LIME does, but in a much shorter time (comparable to CAM). We believe that this concept of segmentation is also applicable to the class of CAM-based XAI methods <cit.>.
Our main contributions are:
* Propose SeCAM as a new local post-hoc agnostic method of XAI for the image classification problem, which combines the advantages of the above two algorithms (LIME and CAM) while overcoming their inherent weaknesses. Specifically, it can provide friendly explanation images, close to human explanations like LIME, while ensuring computation speed as fast as CAM, averaging 2 seconds per explanation; moreover, it also overcomes the weakness of having to modify the original model in the CAM method for some specific models.
* In addition to applying SeCAM to explain AI models in image classification problems, this approach’s main idea has much potential to improve other related XAI algorithms, especially in the computer vision field.
* We have experimented with datasets, models, ... and a number of qualitative as well as quantitative evaluation methods.
* We discuss, through a user study, what it means to a user to view an image as superpixels. We believe that, with the representation of an image in the form of superpixels, each superpixel will have some meaning, for example the head area or the body area of the object. Therefore, the user will be able to learn how the parts of the object affect the prediction of the model and what they mean.
* We also experiment with many segmentation algorithms to see how much impact they have on XAI algorithms such as LIME, SHAP, and SeCAM - the algorithms that use segmentation as part of their perturbation step.
* We had a survey with real users to see what the given explanations mean to them.
* Finally, we believe that applying segmentation to XAI methods can make the results consistent and easy to compare between XAI methods.
The remainder of this paper is arranged in the following order: related work, our proposed method, experiments, and, ultimately, conclusions along with our future research directions.
§ RELATED WORK
XAI methods can be categorized based on two factors. Firstly, a method can be intrinsic or post-hoc based on when the information is extracted. Secondly, a method can be either global or local based on the explanation's scope. Global models explain the complete, general model behavior and attempt to explain the whole logic of a model by inspecting the model's structures. Local models give explanations for a specific decision, for example, "Why did the model make this prediction?". Global interpretability techniques lead users to trust a model. In reverse, local techniques lead users to believe in a prediction. Also, they try to identify each feature's contributions in the input towards a particular output <cit.>. Post-hoc interpretation models can be applied to intrinsic models, but not fundamentally vice versa. LIME represents the Local Post-hoc approach <cit.>, which is model-agnostic. In contrast, CAM represents the Local Intrinsic approach, which is model-specific <cit.>.
We also introduce the superpixel-based image segmentation method that we chose to use in this article.
§.§ Segmentation Algorithms
In the problem of image classification, the input image is of course a very important part. However, not every pixel in an image is meaningful. It seems more intuitive to evaluate not only the perceptual but also the semantic meaning of an image created by locally grouping pixels. We get superpixels when we do this kind of local grouping of pixels on our pixel grid. At the same time, it brings computational efficiency benefits: it allows us to reduce the complexity of the image itself from hundreds of thousands of pixels down to just a few hundred superpixels. Each of these superpixels would then contain some sort of perceptual value and, ideally, semantics.
So, superpixels are becoming increasingly popular in computer vision applications. Superpixels provide a convenient primitive for calculating local image features <cit.>. The XAI algorithms in this field segment the image into superpixels and use the presence or absence of these superpixels as the interpretable representation <cit.>. With this representation, the image is divided into regions; each region consists of several superpixels and has certain meanings <cit.>. For example, with the segmented image below, humans can easily see that the region consisting of superpixels number 11, 12 and 20 represents the hummingbird, where superpixel number 11 represents the bird's beak, the bird's head is superpixel number 12, and so on.
The goal of the XAI methods in this case is to figure out which superpixels have the most influence on the model’s prediction.
Some of the most commonly used superpixels methods are ETPS (Extended Topology Preserving Superpixels)<cit.>, SEEDS (Superpixels Extracted via Energy-Driven Sampling)<cit.>, SLIC (Simple Linear Iterative Clustering)<cit.>, Quickshift <cit.>,...
For superpixels to be useful they must be fast, easy to use, and produce high-quality segmentations. It is difficult to determine if segmentation is good or not because the definition of “good” often depends on the application. In this work, we experiment with the SLIC algorithm first and then with other algorithms. We will discuss the influence of segmentation algorithms later.
§.§ Local Interpretable Model Agnostic Explanations
Local Interpretable Model Agnostic Explanations (LIME) is an XAI method that can faithfully explain the predictions of any classifier or regressor by approximating it locally with an interpretable model <cit.>. LIME intends to provide an easy-to-interpret method with local fidelity. Local fidelity means that the explanation for individual predictions should at least be locally faithful; in other words, it must correspond to how the model behaves in the vicinity of the individual observation being predicted. Local fidelity does not imply global fidelity: the local context may not require globally essential features and vice versa. Due to this, even if a model has hundreds of variables globally, it could be the case that only a handful of variables directly relate to a local or individual prediction. LIME performs the steps below:
* Generating new samples then gets their predictions using the original model.
* Weighing these new samples by the proximity to the instance being explained.
Using the output probabilities from a given collection of samples that cover part of the input to be explained, it then builds a linear model. The surrogate model's weights are then used to measure the importance of the input features. Moreover, LIME is model-agnostic, so it can be applied to any machine learning model <cit.>.
Figure <ref> shows an example of a LIME explanation for the input image in Figure <ref> with the ResNet50 model.
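To make the procedure concrete, the following is a minimal from-scratch sketch of the steps above for an image classifier. It is our own illustration rather than the official LIME implementation; model_predict, image, and all parameter values are assumed placeholders, and the real package adds feature selection and further refinements.

import numpy as np
from skimage.segmentation import slic
from sklearn.linear_model import Ridge

def lime_image_sketch(image, model_predict, class_idx, n_segments=49, n_samples=1000):
    # Step 1: interpretable representation -- superpixels from a segmentation algorithm.
    segments = slic(image, n_segments=n_segments, compactness=10, start_label=0)
    n_regions = int(segments.max()) + 1
    baseline = image.mean(axis=(0, 1))          # color used to "turn off" a superpixel

    # Step 2: generate perturbed samples and query the original model.
    masks = np.random.randint(0, 2, size=(n_samples, n_regions))
    preds = np.empty(n_samples)
    for i, m in enumerate(masks):
        perturbed = image.copy()
        perturbed[m[segments] == 0] = baseline
        preds[i] = model_predict(perturbed[None])[0, class_idx]

    # Step 3: weigh samples by proximity to the original image and fit a linear surrogate.
    distances = 1.0 - masks.mean(axis=1)
    weights = np.exp(-(distances ** 2) / 0.25)
    surrogate = Ridge(alpha=1.0).fit(masks, preds, sample_weight=weights)
    return segments, surrogate.coef_            # one importance score per superpixel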
§.§ Class Activation Mapping
Class Activation Mapping (CAM) produces a weighted activation map for each input image <cit.>. It utilizes global average pooling (GAP) in CNNs. A class activation map for a particular category indicates the discriminative image regions used by the CNN to identify that category. It is a locally intrinsic interpretable method, achieved by designing more justified model architectures <cit.>. It explicitly allows CNNs to have exceptional localization ability despite being trained on image-level labels, enabling classification-trained CNNs to learn to produce object localization without using any bounding box annotations. CAM permits us to visualize the predicted class scores on any given image, highlighting the CNN's discriminative object parts. The CAM result shows a heatmap on the input image; this heatmap presents the impactful area for a given prediction <cit.>.
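As an illustration of the computation, a minimal sketch is given below; it assumes the last convolutional feature maps (shape H x W x K) and the fully connected weights after GAP (shape n_classes x K) have already been extracted from the model, and the ReLU and normalization are common visualization choices rather than part of the original definition.

import numpy as np

def class_activation_map(features, fc_weights, class_idx):
    # features: (H, W, K) activations of the last conv layer
    # fc_weights: (n_classes, K) weights of the fully connected layer after GAP
    cam = np.tensordot(features, fc_weights[class_idx], axes=([2], [0]))    # (H, W)
    cam = np.maximum(cam, 0)                                                # keep positive evidence
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)                # normalize to [0, 1]
    return cam      # upsample to the input size before overlaying as a heatmap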
§.§ Gradient-weighted Class Activation Mapping
Gradient-weighted Class Activation Mapping (Grad-CAM) <cit.> is a generalized version of CAM. Grad-CAM uses the gradient information flowing into the last convolutional layer of the CNN to understand the importance of each neuron for the decision of interest. Note that, to use CAM, the model must use a GAP layer followed by a fully connected softmax layer. This model architecture modification forces us to retrain the model. With a gradient approach, Grad-CAM can produce the visualizations without changing the base model or retraining.
The final convolutional feature map of the input image is activated for different class channels. In detail, every channel in the feature map is weighted with the class gradient for that channel: the global average pooling over the two spatial dimensions (i,j) of the gradient of the respective class output with respect to feature map k gives the importance weight of that channel for the specific class. Then, each feature map is multiplied by its weight along the channel axis k, and the result is summed over the channel dimension. Hence, the spatial score map is of size i*j, which is restricted to positive contributions using the nonlinear ReLU transformation. The resulting class-specific saliency map indicates which spatial locations are important for the final prediction output.
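A corresponding minimal sketch of Grad-CAM, again assuming the feature maps and their gradients with respect to the class score have already been obtained from the network (for example via framework hooks); array names are illustrative.

import numpy as np

def grad_cam(features, gradients):
    # features, gradients: (H, W, K) -- activations of the last conv layer and
    # d(score of the explained class)/d(activations), respectively.
    alpha = gradients.mean(axis=(0, 1))                                     # (K,) pooled gradients
    cam = np.maximum(np.tensordot(features, alpha, axes=([2], [0])), 0)     # ReLU of weighted sum
    return cam / (cam.max() + 1e-8)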
§ PROPOSED METHODS
§.§ Motivation
In this section, we present the reason why we came up with the idea of SeCAM. In our previous work, we applied LIME and CAM to explain the ResNet50 model. With the LIME method, we divided the image into 49 regions using the K-Means algorithm and used 1000 samples, which is the most appropriate number in this case. The results are shown in Figure <ref>.
The computation time of LIME and CAM are presented in Table <ref>.
The result in Table <ref> reveals that both LIME and CAM can yield the regions of the original image that most affect the prediction. However, we find that LIME's explanation resembles human explanations more closely. The CAM heatmap area is too large, thus containing additional areas that do not have a decisive effect on the model's prediction, thereby reducing the explanation's reliability. As introduced in the Segmentation Algorithms section, with the results of LIME we can see that the ResNet50 model rated the head and tail as having the most impact, more than the beak, while the InceptionV3 model shows that the head and the beak are the more important parts. Thus, by using superpixels, humans can see which regions are more meaningful for the model's prediction, instead of individual pixels as with CAM. Nevertheless, LIME's computation time is too large, while CAM's computation time is completely superior, with a nearly 20000 times faster speed. The calculation time here is solely the time given for explaining. One of the prerequisites for using the CAM method is that at least one Global Average Pooling (GAP) layer exists in the model architecture <cit.>. If a GAP layer is not already available in the model, then a GAP layer needs to be added to the model, which must then be retrained with all the data.
To overcome the above problems, we propose a new method called Segmentation - Class Activation Mapping (SeCAM) to improve on LIME's and CAM's disadvantages while preserving their advantages. Our proposed method produces a precise explanation of a predicted object like LIME but has the quick computation time of CAM. Furthermore, this method can be directly applied to any model with only an individual layer followed by the last fully-connected softmax output layer. Thus, we do not have to add a GAP layer to the model or retrain the model in the situation mentioned above.
§.§ Segmentation - Class Activation Mapping (SeCAM)
In this section, we describe our novel method, SeCAM, in detail. As sketched in Figure 2, SeCAM consists of three blocks. In the first block, we initially apply the same procedure as the original CAM method, which identifies the importance of image regions by projecting the output layer's weights onto the convolutional feature maps <cit.>. For models without a GAP layer followed by a fully connected layer, we use the gradient information flowing into the last convolutional layer of the CNN (an idea from Grad-CAM) and make some adjustments. So, our method SeCAM does not require models to have a GAP layer, because it allows using any other layer, such as a flatten layer, to replace the GAP layer in the original CAM method, as shown in Figure 2. In block 2, we use a segmentation algorithm to segment the input image into superpixels. The results obtained from the previous two blocks are combined to compute the effect of each superpixel on the model's prediction. In the following sections, we discuss each block more carefully.
§.§.§ Block 1: Class Activation Mapping
For an input image, in the last convolutional layer, we get n feature maps. Let f_k(x, y) denote the activation for unit k in the feature maps at spatial location (x, y). After a flatten layer, each point represents a value of f_k(x, y). In the case of a pooling layer, such as max or average pooling, that turns multiple values f_k(x, y) (where (x, y) belongs to a spatial location set A) into a point, that point will represent multiple corresponding values for all spatial locations in the set A.
Therefore, for a class c, we call the weights corresponding to the input to the softmax layer S_c = ∑_k∑_x, yw_k^c(x, y)f_k(x, y) where w_k^c(x, y) is the weight corresponding to class c for unit k at (x, y) location. In case the model already has a GAP layer followed by a fully connected softmax layer (Resnet50, InceptionV3,...), w_k^c(x,y)is taken directly from weight of (x, y) in unit k corresponding to class c. In the other cases,w_k^c(x,y) get the value of gradient via backpropagation:
w_k^c(x,y) = ∂ y^c/∂ A^k (x, y)
where A^k is the k^th feature map.
In other words, w_k^c(x, y) represents the importance of the spatial element (x, y) in unit k to class c. After a softmax layer, the output for class c, P_c, is determined by Equation <ref>.
expS_c/∑_cexpS_c
.
Let M_c be the class activation map for class c, where each spatial element is given by Equation <ref>.
M_c (x,y)=∑_k w_k^c(x,y) f_k (x,y)
Therefore,
S_c =∑_k ∑_x,y w_k^c (x,y) f_k (x,y)
= ∑_x,y∑_k w_k^c(x,y)f_k (x,y)
=∑_x,y M_c(x,y)
Thus, M_c (x, y) indicates the importance of the activation at the spatial grid (x, y), leading to the classification of input image to class c.
§.§.§ Block 2: Image Segmentation
The Image Segmentation block runs parallel to the calculation of CAM values in Block 1. In this block, we split the input image into separate regions of similarly colored pixels. Hence, each region is more meaningful and interpretable, and also carries more information than individual pixels.
Currently, there are many image segmentation algorithms. In the scope of this article, we use the K-Means algorithm to perform this division of images. More specifically, we use the simple linear iterative clustering (SLIC) algorithm, which is a particular case of K-means adapted to the task of generating superpixels. SLIC performs a local clustering of pixels in the 5-dimensional space defined by the L, a, b values of the CIELAB color space and the x, y pixel coordinates. With the SLIC algorithm we get high-quality segmentations at a low computational cost (SLIC achieves O(N) complexity) <cit.>. With the SLIC algorithm, we can adjust the number of regions to be divided.
The SLIC algorithm includes the following steps:
* Firstly, we initialize K cluster centers by sampling pixels at every grid interval S= √(N/K), where N is the number pixels of the input image.
* We move the centers to the new locations corresponding to the lowest gradient position in a 3 x 3 neighborhood. Image gradients are calculated as follows:
G(x,y)= ‖ I(x+1,y)-I(x-1,y) ‖^2
+ ‖ I(x,y+1)-I(x,y-1) ‖^2
where I(x,y) is the color vector in CIELAB color space corresponding to the pixel at position (x,y), and ‖·‖ is the L2 norm.
* Similar to the K-means clustering <cit.>, we have a loop to update cluster centers and labels for all pixels. We repeat the following loop until convergence:
* Update the label for each pixel based on the nearest cluster center according to the distance measure D_s between a cluster center C_k = [l_k, a_k, b_k, x_k, y_k]^T and a pixel P_i=[l_i, a_i, b_i, x_i, y_i]^T, which is the sum of the lab distance and the xy plane distance normalized by the grid interval S:
D_s = d_lab + m/Sd_xy
where:
* m is a parameter allowing us to control the density of a superpixel. The value of m can be in range [1, 20]. We choose 10 as the default value.
* d_lab and d_xy are respectively the lab and the xy plane distances, defined as follow:
d_lab=√((l_k - l_i)^2 + (a_k - a_i)^2 + (b_k - b_i)^2)
d_xy= √((x_k - x_i)^2 + (y_k - y_i)^2)
* Compute a new center as the average labxyvector of all the pixels belonging to the cluster.
* In the last step, if a few stray labels remain, SLIC enforces connectivity by relabeling disjoint segments with the labels of the largest neighboring cluster.
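In practice we rely on an existing implementation of these steps; a minimal sketch with scikit-image is shown below, where the file name and parameter values are illustrative and the compactness parameter plays the role of m above.

from skimage import io
from skimage.segmentation import slic, mark_boundaries

image = io.imread("hummingbird.jpg")                     # illustrative input image
segments = slic(image, n_segments=49, compactness=10)    # about 49 superpixels, m = 10
overlay = mark_boundaries(image, segments)               # superpixel boundaries for visualization
print("number of superpixels:", segments.max() + 1)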
§.§.§ Block 3: CAM Averaging in Segmented Image
Deriving the CAM values from Block 1 and the segmented image from Block 2, we average the values of the heatmap obtained in Block 1 over each region; this average is called the Segmentation - Class Activation Mapping (SeCAM) value corresponding to that region. The SeCAM value for class c of region s is denoted M_c^s and is calculated by Equation <ref>.
M_c^s = 1/|s|∑_(x, y) ϵ sM_c (x, y)
In which, |s| is the number of pixels in region s. Thus, the M_c^s value represents each region's importance to the given prediction. The averaging ensures fairness between regions with different areas and bypasses the requirement of adding a GAP layer to the original model's architecture. When we take the average, each point in the region influences the M_c^s value; if the maximum were taken instead, SeCAM would ignore the effects of points with smaller CAM values in a region.
Finally, we select the regions that have the most significant impact on the prediction of the model, which are the areas with the highest SeCAM values. There are two approaches to extracting SeCAM's explanation. The first is to choose the number of most influential regions: the greater the number of regions, the broader the scope of the explanation. The second is to select the regions whose value is above a given threshold of the maximum SeCAM value: the larger the threshold, the fewer regions are selected, and the smaller the explanation's scope. We will discuss strategies for selecting regions later.
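A minimal sketch of this block, combining the CAM map from Block 1 with the segmentation from Block 2 (our own illustration; it assumes the CAM values have already been resized to the input image resolution):

import numpy as np

def secam(cam, segments, n_regions=None, threshold=None):
    # Average the CAM values inside each superpixel: M_c^s in the equation above.
    labels = np.unique(segments)
    values = np.array([cam[segments == s].mean() for s in labels])

    if n_regions is not None:        # strategy 1: keep the k most influential regions
        keep = labels[np.argsort(values)[::-1][:n_regions]]
    else:                            # strategy 2: keep regions above a fraction of the maximum
        keep = labels[values >= threshold * values.max()]

    explanation_mask = np.isin(segments, keep)   # binary mask of the selected regions
    return values, explanation_mask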
§ EXPERIMENTAL DETAILS
Experimental Setup
We conduct experiments with our SeCAM method, CAM, GradCAM, and LIME on images from the ILSVRC dataset. We use the ResNet50, InceptionV3, and VGG16 models. Among those models, ResNet50 and InceptionV3 have a GAP (Global Average Pooling) layer followed by a fully connected softmax layer, so we can use CAM with those two models easily. The VGG16 model is more complicated, so CAM cannot be applied directly, and we use GradCAM instead.
Qualitative Results
We compare the explanation quality of LIME, CAM (GradCAM) and SeCAM on sample images from the ILSVRC dataset. The qualitative results for some images are shown in table II. The running time for each explanation is also included.
Quantitative Results
Since currently, to the extent of our knowledge, there are no accurate and recognized methods for the comparative evaluation of XAI algorithms, we compare the precision of the results of SeCAM, LIME, and CAM (GradCAM) on a human-grounded basis. Denote by G the human-grounded bounding box and by S the bounding box of the explanation. We use the following evaluation metrics:
* Intersection Over Union (IOU) is the ratio between the overlapped area of two bounding boxes and their union area. IOU compares each bounding box produced by XAI methods to the ground truth.
IOU=S_intersection/S_union=S_A∩ B/S_A∪ B
The IOU value varies from 0 to 1. The XAI method with the highest IOU value is the most accurate.
* Energy-Based Pointing Game (EBPG) evaluates the “accuracy” and variability of the XAI algorithms <cit.>. Extending the traditional pointing game, EBPG measures the fraction of its energy captured in the corresponding ground truth G.
EBPG_S = ∑1_(S∩ G){x}/∑1_S{x}
In which, ∑1_(S∩ G){x} is the number of points in both regions S and G, and ∑1_S{x} is the number of points in S. So EBPG tells us what percentage of the explanation's box S is in the human-grounded box G.
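Both metrics can be computed directly from binary masks of the explanation and of the human-grounded bounding box; a minimal sketch:

import numpy as np

def iou(explanation_mask, ground_truth_mask):
    inter = np.logical_and(explanation_mask, ground_truth_mask).sum()
    union = np.logical_or(explanation_mask, ground_truth_mask).sum()
    return inter / union

def ebpg(explanation_mask, ground_truth_mask):
    # fraction of the explanation S that falls inside the human-grounded box G
    inter = np.logical_and(explanation_mask, ground_truth_mask).sum()
    return inter / explanation_mask.sum()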
§ DISCUSSION
§.§ Computational resources
We used 6 x RTX 2080 Ti GPUs with a dual Xeon E5-2673 v3 CPU and 128 GB of memory for the experiments. SeCAM is always the fastest, and the slowest algorithm is LIME.
§.§ SeCAM vs CAM
Through the qualitative results in Table II, we find that the SeCAM results are not only closer to human explanations, but also help us learn some insights about the models. For example, with the hummingbird image, the results of CAM (GradCAM) are heatmap regions related to the hummingbird, and it is difficult to know which region has the most influence on the model's prediction when we look at the explanation of the ResNet50 model. Meanwhile, the results of SeCAM can show the user the influence of each part of the hummingbird on the prediction results. Specifically, the hummingbird is divided into 3 main parts: the beak, the head, and the body. The ResNet50 model rated the body and head as the most important, while InceptionV3 took the beak and head. More specifically, the VGG16 model evaluates the beak as particularly important. However, with the GradCAM results, it is difficult to see that the head also has a great influence on the prediction. This can be seen more easily in the SeCAM results with 4 segments.
CAM obscures some important color parts
During the experiments, we also found that, because the result of CAM or GradCAM is a heatmap, overlaying the heatmap on the original image sometimes obscures some important parts. For example, in Figure 3, the prediction of the InceptionV3 model is Indigo bunting instead of coucal. We use CAM and SeCAM to explain why the model's prediction is Indigo bunting. We found that the heatmap results of CAM obscured the characteristic blue color of the Indigo bunting species, while the SeCAM explanation showed the characteristic blue color area more clearly. With SeCAM we can see that the reason InceptionV3 predicts Indigo bunting is the similarity in color.
CAM cannot provide a clear distinction between parts
Also during the experiments, we found that in some cases the heatmap of CAM does not show which region affects the prediction the most; instead, humans can only see that the heatmap contains the object, and it is difficult to understand why the model makes that prediction. For example, in Figures 4 and 5, the InceptionV3 model predicts that the input images 4.a and 5.a are Kite and Vulture instead of Coucal. If humans look at the explanation of CAM (Figures 4.b and 5.b), it is hard to understand why the model predicted wrongly. Meanwhile, based on SeCAM's explanation, we can see that the model evaluates the impact of the head as not high, so it is easier to understand why the model predicts that Figure 4.a is a kite when we look at the SeCAM explanation. Especially for image 5.a, the background with the surrounding dry branches also greatly affects the prediction of Vulture.
§.§ SeCAM vs LIME
When we compare SeCAM and LIME, the most obvious observation is that SeCAM's and LIME's results have a similar form, indicating which superpixels have the most influence on the model's prediction results. However, the computation time of SeCAM is enormously faster, with a difference of more than 100 seconds. Also, SeCAM ensures safety and stability: for any image it gives explanations that are appropriate to a standard human, while LIME is skewed in some instances, as shown in Table II, Figure 4 and Figure 5. So, we find that the quantitative results of SeCAM equal or surpass those of LIME.
§.§ Effect of Segmentation Algorithms
We also tried other segmentation algorithms in addition to SLIC. Of course, with different algorithms, the way they segment the image is also different. However, the XAI algorithms can still determine the affected area accurately compared to human judgment. For example, in Table III below, we have used the two algorithms Quickshift and SLIC; they produce different segmentations even though they have the same number of regions, but SeCAM can still select the correct areas of the image containing the object in both cases.
Therefore, the choice of different segmentation algorithms has no effect on the model interpretation. We chose the SLIC algorithm because it is the easiest to understand, uses k-means clustering, and is the fastest to implement. At the same time, showing the segmentation regions of this algorithm is also friendly to the end-user, especially users who do not know much about technology.
§.§ Comparison of XAI methods
As introduced, comparing and evaluating XAI methods against each other is a big challenge. One of the reasons is that the results are not of the same form: LIME and SHAP produce perturbed images based on a segmentation algorithm, while CAM and GradCAM produce heatmaps. We believe that the idea of integrating segmentation into heatmaps will not only improve the interpretation results, but also make comparisons between XAI methods easier. Because we can consider the explanation based on the influence of each part of the image instead of each pixel, comparing XAI methods and understanding the model behavior will be easier and save more time.
§ CONCLUSIONS AND FUTURE WORK
In this paper, we have introduced a novel method that explains a model's prediction in the image classification problem, called Segmentation - Class Activation Mapping (SeCAM). Our method is developed based on the algorithms LIME, CAM, and GradCAM, the improved version of CAM. It combines the advantages of LIME's explanation accuracy with CAM's calculation speed. Besides, SeCAM can represent the regions that most significantly influence the prediction together with their impact values. These values can be used to evaluate SeCAM's accuracy against human grounding. We held a survey to see if the explanation of the new approach really got better, and the results are extremely satisfactory. We experimented with different image classification models on the ILSVRC dataset. In some cases, SeCAM gave the correct explanation while LIME gave rather absurd results. In explaining widely used image classification models, our method SeCAM's results are more outstanding than those of other XAI methods such as LIME and CAM. There are a number of pathways of future work for us to explore with SeCAM. We recognize that our proposed method has limitations, and future academics and researchers should be aware of these and interpret the material presented in this research within the context of these limitations. Firstly, there is the fairly obvious limitation that the accuracy of SeCAM's explanation depends too much on selecting the parameter for the number of selected segments or the exact threshold level. Secondly, choosing the right variant of the algorithm for each model still has to be done manually. Currently, we classify models into two main categories: those with multiple fully connected layers, such as VGG16, and those with only one fully connected layer, for example ResNet50. The user has to specify the model's type in order to apply the algorithm correctly. We will try to find ways to automatically identify the model type in updated versions of the algorithm. Besides, we also note the inconvenience of the lack of a standard evaluation method for existing XAI methods; therefore, we will also study a general method for evaluating the accuracy of different XAI algorithms, in parallel with the development and improvement of the XAI algorithm.
§ ACKNOWLEDGMENT
We are grateful for the collaborative research environment provided by FPT Software Quy Nhon. We would like to express our special thanks of gratitude to Phong Nguyen for his sponsor and Prof. Takehisa Yairi from The University of Tokyo for his helpful support and discussions; Dr. Vinh Nguyen for his careful review. Finally, we would also like to acknowledge FSOFT AI Laboratory for providing us opportunities of incubating ideas in this project.
§ APPENDIX
§.§ Result images in experiment
In the experiment, we have applied SeCAM, CAM and LIME on various images from the ILSVRC dataset. In the Table <ref>, we present three examples of results with different XAI methods' parameters.
|
http://arxiv.org/abs/2307.04353v1 | 20230710053014 | On Sufficient Graphical Models | [
"Bing Li",
"Kyongwon Kim"
] | stat.ML | [
"stat.ML",
"cs.LG"
] |
On Sufficient Graphical Models
Bing Li [email protected]
Department of Statistics, Pennsylvania State University
326 Thomas Building, University Park, PA 16802
Kyongwon Kim [email protected]
Department of Statistics, Ewha Womans University
52 Ewhayeodae-gil, Seodaemun-gu, Seoul, Republic of Korea, 03760
August 12, 2023
=======================================================================================================================================================================================================================================================================================================================
We introduce a sufficient graphical model by applying the recently developed nonlinear sufficient dimension reduction techniques to the evaluation of conditional independence. The graphical model is nonparametric in nature, as it does not make distributional assumptions such as the Gaussian or copula Gaussian assumptions. However, unlike a fully nonparametric graphical model, which relies on the high-dimensional kernel to characterize conditional independence, our graphical model is based on conditional independence given a set of sufficient predictors with a substantially reduced dimension. In this way we avoid the curse of dimensionality that comes with a high-dimensional kernel. We develop the population-level properties, convergence rate, and variable selection consistency of our estimate.
By simulation comparisons and an analysis of the DREAM 4 Challenge data set, we demonstrate that our method outperforms the existing methods when the Gaussian or copula Gaussian assumptions are violated, and its performance remains excellent in the high-dimensional setting.
conjoined conditional covariance operator, generalized sliced inverse regression, nonlinear sufficient dimension reduction, reproducing kernel Hilbert space
§ INTRODUCTION
In this paper we propose a new nonparametric statistical graphical model, which we call the sufficient graphical model, by incorporating the recently developed nonlinear sufficient dimension reduction techniques to the construction of the distribution-free graphical models.
Let G = ( Γ, E) be an undirected graph consisting of a finite set of nodes Γ={1, …, p} and set of edges
ℰ⊆{(i,j)∈Γ×Γ : i ≠ j }.
Since (i,j) and (j,i) represent the same edge in an undirected graph, we can assume without loss of generality that i>j.
A statistical graphical model links G with a random vector X=(X1, …, X p) by the conditional independence:
(i,j) ∉ℰ⇔ X i X j | X-(i,j),
where
X -(i,j)= {X 1, …, X p }∖{X i, X j}, and
A B | C means conditional independence. Thus, nodes i and j are connected if and only if X i and X j are dependent given X -(i,j).
Our goal is to estimate the set E based on a sample X 1, …, X n of X.
See <cit.>.
One of the most popular statistical graphical models is the Gaussian graphical model, which assumes that X ∼ N(μ, Σ). Under the Gaussian assumption, conditional independence in (<ref>) is encoded in the precision matrix Θ = Σ in the following sense
X i X j |X-(i,j)⇔θij =0,
where θij is the (i,j)th entry of the precision matrix Θ. By this equivalence, estimating E amounts to identifying the positions of the zero entries of the precision matrix, which can be achieved by sparse estimation methods
such as the <cit.>, <cit.>, and <cit.>. A variety of methods have been developed for estimating the Gaussian graphical model, which include, for example, <cit.>, <cit.>, <cit.>, and <cit.>. See also <cit.>, <cit.>, and <cit.>.
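For reference, a minimal sketch of this Gaussian baseline using the graphical lasso implementation in scikit-learn, reading the edge set off the estimated precision matrix (the regularization level and threshold are illustrative choices, not values prescribed here):

import numpy as np
from sklearn.covariance import GraphicalLasso

def gaussian_graph(X, alpha=0.1, tol=1e-4):
    # X: n x p data matrix assumed (approximately) Gaussian
    theta = GraphicalLasso(alpha=alpha).fit(X).precision_   # sparse estimate of the precision matrix
    p = theta.shape[0]
    # edge (i, j) with i > j is present when the corresponding precision entry is nonzero
    return [(i, j) for i in range(p) for j in range(i) if abs(theta[i, j]) > tol]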
Since the Gaussian distribution assumption is restrictive, many recent advances have focused on relaxing this assumption. A main challenge in doing so is to avoid the curse of dimensionality <cit.>: a straightforward nonparametric extension would resort to a high-dimensional kernel, which are known to be ineffective.
One way to relax the Gaussian assumption without evoking a high dimensional kernel is to use the copula Gaussian distribution, which is the approach taken by <cit.>, <cit.>, and <cit.>, and is further extended to the transelliptical model by <cit.>.
However, the copula Gaussian assumption could still be restrictive: for example, if A and B are random variables satisfying B=A2+ϵ, where A and ϵ are i.i.d. N(0,1), then (A,B) does not satisfy the copula Gaussian assumption. To further relax the distributional assumption, <cit.> proposed a new statistical relation called the additive conditional independence as an alternative criterion for constructing the graphical model. This relation has the advantage of achieving nonparametric model flexibility without using a high-dimensional kernel, while obeying the same set of semi-graphoid axioms that govern the conditional independence <cit.>. See also <cit.> and <cit.>. Other approaches to nonparametric graphical models include <cit.> and <cit.>.
In this paper, instead of relying on additivity to avoid the curse of dimensionality, we apply the recently developed nonparametric sufficient dimension reduction <cit.> to achieve this goal. The estimation proceeds in two steps: first, we use nonlinear sufficient dimension reduction to reduce X -(i,j) to a low-dimensional random vector U ij; second, we use the kernel method to construct a nonparametric graphical model based on (X i, X j) and the dimension-reduced random vectors U ij. The main differences between this approach and <cit.> are, first, we are able to retain conditional independence as the criterion for constructing the network, which is a widely accepted criterion with a more direct interpretation, and second, we are no longer restricted by the additive structure in the graphical model. Another attractive feature of our method is due to the “kernel trick”, which means its computational complexity depends on the sample size rather than the size of the networks.
The rest of the paper is organized as follows. In Sections <ref> and <ref>, we introduce the sufficient graphical model and describe its estimation method at the population level. In Section <ref> we lay out the detailed algorithms to implement the method. In Section <ref> we develop the asymptotic properties such as estimation consistency, variable selection consistency, and convergence rates. In Section <ref>, we conduct simulation studies to compare of our method with the existing methods. In Section <ref>, we apply our method to the DREAM 4 Challenge gene network data set. Section <ref> concludes the paper with some further discussions. Due to limited space we put all proofs and some additional results in the Supplementary Material.
§ SUFFICIENT GRAPHICAL MODEL
In classical sufficient dimension reduction, we seek the lowest dimensional subspace S of p, such that, after projecting X ∈ p on to S, the information about the response Y is preserved; that is, Y X | P S X, where P S is the projection onto S. This subspace is called the central subspace, written as S Y|X. See, for example, <cit.>, <cit.>, and <cit.>. <cit.> and <cit.> extended this framework to the nonlinear setting by considering the more general problem: Y X | G, where G a sub-σ field of the σ-field generated by X. The class of functions in a Hilbert space that are measurable with respect to G is called the central class, written as S Y|X. <cit.> introduced the Principal Support Vector Machine, and <cit.> generalized the Sliced Inverse Regression <cit.> and the Sliced Average Variance Estimate <cit.> to estimate the central class. Precursors of this theory include <cit.>, <cit.>, and <cit.>.
To link this up with the statistical graphical model, let (Ω, F, P) be a probability space, (Ω X, F X) a Borel measurable space with Ω X ⊆ p, and X: Ω→Ω X a random vector with distribution P X.
The ith component of X is denoted by X i and its range denoted by ΩX i. We assume Ω X = ΩX 1×⋯×ΩX p. Let X (i,j)=(X i, X j) and X -(i,j) be as defined in the Introduction. Let σ (X - (i,j)) be the σ-field generated by X -(i,j).
We assume, for each (i,j) ∈Γ×Γ, there is a proper sub σ-field G -(i,j) of σ (X -(i,j)) such that
X (i,j) X -(i,j) | G -(i,j).
Without loss of generality, we assume G -(i,j) is the smallest sub σ-field of σ ( X -(i,j) ) that satisfies the above relation; that is, G -(i,j) is the central σ-field for X (i,j) versus X -(i,j). There are plenty of examples of joint distributions of X for which the condition (<ref>) holds for every pair (i,j): see Section S10 of the Supplementary Material.
Using the properties of conditional independence developed in <cit.> (with a detailed proof given in <cit.>), we can show that (<ref>) implies the following equivalence.
If X (i,j) X -(i,j) | G -(i,j), then
X i X j | X -(i,j) ⇔X i X j | G -(i,j).
This equivalence motivates us to use X i X j | G -(i,j) as the criterion to construct the graph G after performing nonlinear sufficient dimension reduction of X (i,j) versus X -(i,j) for each (i,j) ∈Γ×Γ, i > j.
Under condition (<ref>), the graph defined by
(i,j) ∉ E ⇔ X i X j | G -(i,j)
is called the sufficient graphical model.
§ ESTIMATION: POPULATION-LEVEL DEVELOPMENT
The estimation of the sufficient graphical model involves two steps: the first step is to use nonlinear sufficient dimension reduction to estimate G -(i,j); the second is to construct a graph G based on reduced data
{ (X (i,j), G -(i,j)): (i,j) ∈Γ×Γ, i > j }.
In this section we describe the two steps at the population level. To do so, we need some preliminary concepts such as the covariance operator between two reproducing kernel Hilbert spaces, the mean element in an reproducing kernel Hilbert spaces, the inverse of an operator, as well as the centered reproducing kernel Hilbert spaces. These concepts are defined in the Supplementary Material, Section S1.2. A fuller development of the related theory can be found in <cit.>. The symbols (·) and (·) will be used to denote the range and the closure of the range of a linear operator.
§.§ Step 1: Nonlinear dimension reduction
We use the generalized sliced inverse regression <cit.>, <cit.> to perform the nonlinear dimension reduction. For each pair (i,j) ∈Γ×Γ, i > j, let ΩX -(i,j) be the range of X -(i,j), which is the Cartesian product of ΩX 1, …, ΩX p with ΩX i and ΩX j removed. Let
X -(i,j): ΩX -(i,j)×ΩX -(i,j)→
be a positive semidefinite kernel.
Let H X -(i,j) be the centered reproducing kernel Hilbert space generated by X -(i,j). Let ΩX (i,j), X (i,j), and H X (i,j) be the similar objects defined for X (i,j).
E[ X -(i,j) ( X -(i,j), X -(i,j) ) ]< ∞, E[ X (i,j) ( X (i,j), X (i,j) )] < ∞.
This is a very mild assumption that is satisfied by most kernels.
Under this assumption, the following covariance operators are well defined:
ΣX -(i,j) X (i,j): H X (i,j)→ H X -(i,j), ΣX -(i,j) X -(i,j): H X -(i,j)→ H X -(i,j).
For the formal definition of the covariance operator, see S1.2. Next, we introduce the regression operator from H X (i,j) to H X -(i,j). For this purpose we need to make the following assumption.
( ΣX -(i,j) X (i,j) ) ⊆ ( ΣX -(i,j) X -(i,j) ).
As argued in <cit.>, this assumption can be interpreted as a type of collective smoothness in the relation between X (i,j) and X -(i,j): intuitively, it requires the operator ΣX -(i,j) X (i,j) sends all the input functions to the low-frequency domain of the operator ΣX -(i,j) X -(i,j). Under Assumption <ref>, the linear operator
R_{X^{-(i,j)} X^{(i,j)}} = Σ_{X^{-(i,j)} X^{-(i,j)}}^{-1} Σ_{X^{-(i,j)} X^{(i,j)}}
is defined, and we call it the regression operator from H_{X^{(i,j)}} to H_{X^{-(i,j)}}. The meaning of the inverse
Σ_{X^{-(i,j)} X^{-(i,j)}}^{-1} is defined in Section S1.2 in the Supplementary Material.
The regression operator in this form was formally defined in <cit.>, but earlier forms existed in <cit.>; see also <cit.>.
R X -(i,j) X (i,j) is a finite-rank operator, with rank d ij.
Intuitively, this assumption means that R X -(i,j)X (i,j) filters out the high frequency functions of X (i,j), so that, for any f ∈ H (i,j), R X -(i,j)X (i,j) f is relatively smooth. It will be violated, for example, if one can find an f ∈ H (i,j) that makes R X -(i,j)X (i,j) f arbitrarily choppy.
The regression operator plays a crucial role in nonlinear sufficient dimension reduction. Let L 2 ( P X -(i,j) ) be the L 2-space with respect to the distribution P X -(i,j) of X -(i,j). As shown in <cit.>, the closure of the range of the regression operator is equal to the central subspace; that is,
( R X -(i,j) X (i,j) ) = 𝔖X (i,j) | X -(i,j)
under the following assumption.
* H X -(i,j) is dense in L 2 (P X -(i,j) ) modulo constants; that is, for any f ∈ L 2 (P X -(i,j) ) and any ϵ > 0, there is a g ∈ H X -(i,j) such that [ f( X -(i,j) ) - g( X -(i,j) ) ] < ϵ;
* 𝔖X (i,j) | X -(i,j) is sufficient and complete.
The first condition essentially requires the kernel X -(i,j) to be a universal kernel with respect to the L 2(P X -(i,j))-norm. It means H -(i,j) is rich enough to approximate any L 2(P X -(i,j))-function arbitrarily closely. For example, it is satisfied by the Gaussian radial basis function kernel, but not by the polynomial kernel. For more information on universal kernels, see <cit.>. The completeness in the second condition means
E[ g (X -(i,j)) | X (i,j)] = 0 ⇒ g (X -(i,j)) = 0 .
This concept is defined in <cit.>,
and is similar to the classical definition of completeness treating X -(i,j) as the parameter. <cit.> showed that completeness is a mild condition, and is satisfied by most nonparametric models.
A basis of the central class 𝔖X (i,j) | X -(i,j) can be found by
solving the generalized eigenvalue problem: for k = 1, …, d ij,
maximize ⟨ f_k, Σ_{X^{-(i,j)} X^{(i,j)}} A Σ_{X^{(i,j)} X^{-(i,j)}} f_k ⟩_{-(i,j)}
subject to ⟨ f_k, Σ_{X^{-(i,j)} X^{-(i,j)}} f_k ⟩_{-(i,j)} = 1,
⟨ f_k, Σ_{X^{-(i,j)} X^{-(i,j)}} f_ℓ ⟩_{-(i,j)} = 0, ℓ = 1, …, k-1,
where A: H X (i,j)→ H X (i,j) is any nonsingular and self adjoint operator, and ⟨·, ·⟩-(i,j) is the inner product in H X -(i,j). That is, if f ij 1, … f ijd ij are the first d ij eigenfunctions of this eigenvalue problem, then they span the central class. This type of estimate of the central class is called generalized sliced inverse regression.
Convenient choices of A are the identity mapping I or the operator ΣX (i,j) X (i,j). If we use the latter, then we need the following assumption.
( ΣX (i,j) X -(i,j) ) ⊆ ( ΣX (i,j) X (i,j) ).
This assumption has the similar interpretation as Assumption <ref>; see Section S11 in the Supplementary Material.
At the population level, choosing A to be ΣX -(i,j) X -(i,j) achieves better scaling because it down weights those components of the output of ΣX -(i,j)X (i,j) with larger variances. However, if the sample size is not sufficiently large, involving an estimate of ΣX -(i,j)X (i,j) in the procedure could incur extra variations that overwhelm the benefit brought by ΣX -(i,j)X (i,j). In this case, a nonrandom operator such as A=I is preferable.
In this paper we use A = Σ X (i,j) X (i,j). Let U ij denote the random vector
( f ij 1 (X -(i,j)) , … f ijd ij(X -(i,j)) ).
The set of random vectors { U ij: (i,j) ∈Γ×Γ, i > j } is the output for the nonlinear sufficient dimension reduction step.
§.§ Step 2: Estimation of sufficient graphical model
To estimate the edge set of the sufficient graphical model
we need to find a way to determine whether X i X j | U ij is true. We use a linear operator introduced by <cit.> to perform this task, which is briefly described as follows.
Let U, V, W be random vectors taking values in measurable spaces (Ω U, F U), (Ω V, F V), and (Ω W, F W).
Let ΩUW = Ω U ×Ω W, ΩVW = Ω V ×Ω W, F UW= F U × F V, and F VW = F V × F W.
Let
UW: ΩUW×ΩUW→, VW: ΩVW×ΩVW→, W: Ω W ×Ω W →
be positive kernels. For example, for (u 1, w 1), (u 2, w 2) ∈ΩUW×ΩUW, UW returns a real number denoted by UW[(u 1, w 1), (u 2, w 2)]. Let H UW, H VW, and H W be the centered reproducing kernel Hilbert space's generated by the kernels UW, VW, and W.
Define the covariance operators
Σ(UW)(VW): H VW→ H UW, Σ(UW)W: H W → H UW,
Σ(VW)W: H W → H VW, ΣWW: H W → H W
as before.
The following definition is due to <cit.>. Since it plays a special role in this paper, we give it a name – “conjoined conditional covariance operator” that figuratively depicts its form.
Suppose
* If S is W, or (U,W), or (V, W), then E [ S (S, S) ] < ∞;
* (ΣW (VW) ) ⊆ (ΣWW), (ΣW (UW) ) ⊆ (ΣWW).
Then the operator
Σ_{ÜV̈|W} = Σ_{(UW)(VW)} - Σ_{(UW)W} Σ_{WW}^{-1} Σ_{W(VW)}
is called the conjoined conditional covariance operator between U and V given W.
The word “conjoined” describes the peculiar way in which W appears in Σ(UW)W and ΣW(VW), which differs from an ordinary conditional covariance operator, where these operators are replaced by ΣUW and ΣWV. The following proposition is due to <cit.>, a proof of a special case of which is given in <cit.>.
Suppose
* H UW⊗ H VW is probability determining;
* for each f ∈ H UW, the function E[ f(U, W) | W=·] belongs to H W;
* for each g ∈ H VW, the function E[ g(V, W) | W =· ] belongs to H W;
Then ΣÜV̈|W = 0 if and only if U V | W.
The notion of probability determining in the context of reproducing kernel Hilbert space was defined in <cit.>. For a generic random vector X, an reproducing kernel Hilbert space H X based on a kernel X is probability determining if and only if the mapping
P ↦ E P [ X(·, X)]
is injective.
Intuitively, this requires the family of expectations { E P f(X): f ∈ H X } to be rich enough to identify P. For example, the Gaussian radial basis function is probability determining, but a polynomial kernel is not. We apply the above proposition to X i, X j, U ij for each (i,j) ∈Γ×Γ, i > j. Let
XUi,ij: (ΩX i×ΩU ij ) × (ΩX i×ΩU ij ) →
be a positive definite kernel, and H XUi,ij the centered reproducing kernel Hilbert space generated by XUi,ij. Similarly, let
Uij: ΩU ij×ΩU ij→
be a positive kernel, and H Uij the centered reproducing kernel Hilbert space generated by Uij.
Conditions (1) and (2) of Definition <ref> and conditions (1), (2), and (3) of Proposition <ref> are satisfied with U, V, and W therein replaced by
X i, X j, and U ij, respectively, for each (i,j) ∈Γ×Γ and i > j.
Under this assumption, the conjoined conditional covariance operator ΣẌ i Ẍ j | U ij is well defined and has the following property.
Under Assumption <ref>, we have
(i,j) ∉ℰ⇔ΣẌ i Ẍ j | U ij = 0.
This corollary motivates us to estimate the graph by thresholding the norm of the estimated conjoined conditional covariance operator.
§ ESTIMATION: SAMPLE-LEVEL IMPLEMENTATION
§.§ Implementation of step 1
Let (X 1, Y 1), …, (X n, Y n) be an i.i.d. sample of (X,Y). At the sample level, the centered reproducing kernel Hilbert space H X -(i,j) is spanned by the functions
{ X -(i,j) ( ·, X -(i,j) a ) - E n [ X -(i,j) ( ·, X -(i,j))]: a = 1, …, n },
where X -(i,j) (·, X -(i,j) ) stands for the function u ↦ X -(i,j) (u, X -(i,j) ), and
E n [ X -(i,j) (·, X -(i,j) )] the function u ↦ E n [ X -(i,j) (u, X -(i,j) )].
We estimate the covariance operators ΣX -(i,j) X (i,j) and Σ X -(i,j) X -(i,j) by
Σ̂X -(i,j) X (i,j) =
E n {[ X -(i,j) ( ·, X -(i,j) )
-E n X -(i,j) ( ·, X -(i,j) )]
⊗
[ X (i,j) ( ·, X (i,j) )
-E n X (i,j) ( ·, X (i,j) )] }
Σ̂ X -(i,j) X -(i,j) =
E n { [ X -(i,j) ( ·, X -(i,j) )
-E n X -(i,j) ( ·, X -(i,j) )]
⊗
[ X -(i,j) ( ·, X -(i,j) )
-E n X -(i,j) ( ·, X -(i,j) )] },
respectively. We estimate Σ_{X^{(i,j)} X^{(i,j)}}^{-1} by the Tychonoff-regularized inverse
( Σ̂_{X^{(i,j)} X^{(i,j)}} + ϵ_{X^{(i,j)}} I )^{-1},
where I: H_{X^{(i,j)}} → H_{X^{(i,j)}} is the identity operator.
The regularized inverse is used to avoid over fitting. It plays the same role as ridge regression <cit.> that alleviates over fitting by adding a multiple of the identity matrix to the sample covariance matrix before inverting it.
At the sample level, the generalized eigenvalue problem (<ref>) takes the following form: at the kth iteration,
maximize ⟨ f, Σ̂_{X^{-(i,j)} X^{(i,j)}} ( Σ̂_{X^{(i,j)} X^{(i,j)}} + ϵ_{X^{(i,j)}} I )^{-1} Σ̂_{X^{(i,j)} X^{-(i,j)}} f ⟩_{-(i,j)}
subject to ⟨ f, Σ̂_{X^{-(i,j)} X^{-(i,j)}} f ⟩_{-(i,j)} = 1,
⟨ f, Σ̂_{X^{-(i,j)} X^{-(i,j)}} f_ℓ ⟩_{-(i,j)} = 0, ℓ = 1, …, k-1,
where f 1, …, f k-1 are the maximizers in the previous steps. The first d ij eigenfunctions are an estimate of a basis in the central class S X (i,j) | X -(i,j).
Let K X -(i,j) be the n × n matrix whose (a,b)th entry is X -(i,j) (X a -(i,j), X b -(i,j)), Q = I n - 1 n 1 n / n, and
G X -(i,j) = Q K X -(i,j) Q.
Let a 1, …, a d ij be the first d ij eigenvectors of the matrix
( G_{X^{-(i,j)}} + ϵ_{X^{-(i,j)}} I_n )^{-1} G_{X^{-(i,j)}} G_{X^{(i,j)}} ( G_{X^{(i,j)}} + ϵ_{X^{(i,j)}} I_n )^{-1} G_{X^{-(i,j)}} ( G_{X^{-(i,j)}} + ϵ_{X^{-(i,j)}} I_n )^{-1}.
Let
b_r = ( G_{X^{-(i,j)}} + ϵ_{X^{-(i,j)}} I_n )^{-1} a_r for r = 1, …, d_{ij}.
As shown in Section S12.2, the eigenfunctions f 1 ij, …, f d ijij are calculated by
f r ij = ∑a=1 n b r a { X -(i,j) ( ·, X -(i,j) a ) - E n [ X -(i,j) ( ·, X -(i,j))]}.
The statistics Ûij a = ( f 1 ij (X a -(i,j)) , …, f d ijij (X a -(i,j))), a = 1, …, n, will be used as the input for the second step.
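At the sample level, everything in this step reduces to ordinary matrix algebra on the centered Gram matrices. The following is a minimal Python/NumPy sketch of Step 1 for a single pair (i,j), written directly from the matrix expressions above; the function names, the symmetrization safeguard, and the use of the centered Gram matrix to evaluate the eigenfunctions at the sample points are our own implementation choices, and the regularization constants are assumed to be supplied by the tuning procedure described below.
```python
import numpy as np

def rbf_gram(S, gamma):
    # Gaussian radial basis function Gram matrix for the sample S (n x d)
    sq = np.sum(S * S, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * S @ S.T
    return np.exp(-gamma * d2)

def center(K):
    # G = Q K Q with Q = I_n - 1_n 1_n'/n
    n = K.shape[0]
    Q = np.eye(n) - np.ones((n, n)) / n
    return Q @ K @ Q

def gsir_step1(X_pair, X_rest, gam_pair, gam_rest, eps_pair, eps_rest, d_ij):
    """Sample-level Step 1 for one pair (i,j): returns the n x d_ij matrix whose
    a-th row is U^{ij}_a = (f_1^{ij}(X_a^{-(i,j)}), ..., f_{d_ij}^{ij}(X_a^{-(i,j)}))."""
    n = X_pair.shape[0]
    G_pair = center(rbf_gram(X_pair, gam_pair))   # G_{X^{(i,j)}}
    G_rest = center(rbf_gram(X_rest, gam_rest))   # G_{X^{-(i,j)}}
    R_rest = np.linalg.inv(G_rest + eps_rest * np.eye(n))
    R_pair = np.linalg.inv(G_pair + eps_pair * np.eye(n))
    # matrix whose leading eigenvectors are a_1, ..., a_{d_ij}
    M = R_rest @ G_rest @ G_pair @ R_pair @ G_rest @ R_rest
    M = (M + M.T) / 2.0                           # guard against round-off asymmetry
    w, V = np.linalg.eigh(M)
    A = V[:, np.argsort(w)[::-1][:d_ij]]          # a_1, ..., a_{d_ij}
    B = R_rest @ A                                # b_r = (G_{X^{-(i,j)}} + eps I_n)^{-1} a_r
    return G_rest @ B                             # f_r evaluated (and centered) at the sample points
```
A full implementation simply loops this routine over all pairs (i,j) with i > j.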
§.§ Implementation of step 2
This step consists of estimating the conjoined conditional covariance operator's for each (i,j) and thresholding their norms. At the sample level, the centered reproducing kernel Hilbert space's generated by the kernels XUi,ij, XUj,ij, and U ij are
H XU i,ij= {XUi,ij ( ·, (X a i, U a ij)) - E n [ XUi,ij ( ·, (X i, U ij)) ]: a = 1, …, n },
H XU j,ij= {XUj,ij ( ·, (X a j, U a ij)) - E n [ XUj,ij ( ·, (X j, U ij)) ]: a = 1, …, n },
H U ij= {Uij ( ·, U a ij) - E n [ Uij ( ·, U ij) ]: a = 1, …, n },
where, for example, XUi,ij ( ·, (X a i, U a ij)) denotes the function
ΩX i×ΩU ij→, (x i, u ij ) ↦XUi,ij ( (x i, u ij ), (X a i, U a ij))
and E n [ XUi,ij ( ·, (X i, U ij)) ] denotes the function
ΩX i×ΩU ij→, (x i, u ij ) ↦ E n [ XUi,ij ( (x i, u ij ), (X i, U ij))].
We estimate the covariance operators
Σ(X i U ij)( X i U ij), Σ(X i U ij)U ij, ΣX j (X jU ij), and ΣU ij U ij by
Σ̂(X i U ij) (X j U ij) = E n { [ XUi,ij ( ·, ( X i, U ij))- E n XUi,ij ( ·, ( X i, U ij)) ]
⊗ [ XUj,ij ( ·, ( X j, U ij))- E n XUj,ij ( ·, ( X j, U ij)) ] }
Σ̂(X i U ij) U ij = E n { [ XUi,ij ( ·, ( X i, U ij))- E n XUi,ij ( ·, ( X i, U ij)) ]
⊗ [ Uij ( ·, U ij)- E n Uij ( ·, U ij) ] }
Σ̂U ij(X j U ij) = E n { [ Uij ( ·, U ij)- E n Uij ( ·, U ij) ]
⊗ [ XUj,ij ( ·, ( X j, U ij))- E n XUj,ij ( ·, ( X j, U ij)) ] }
Σ̂U ij U ij = E n { [ Uij ( ·, U ij)- E n Uij ( ·, U ij) ]
⊗ [ Uij ( ·, U ij)- E n Uij ( ·, U ij) ] },
respectively. We then estimate the conjoined conditional covariance operator by
Σ̂_{Ẍ_i Ẍ_j | U^{ij}} =
Σ̂_{(X_i U^{ij}) (X_j U^{ij})} -
Σ̂_{(X_i U^{ij}) U^{ij}}
( Σ̂_{U^{ij} U^{ij}} + ϵ_{U^{(i,j)}} I )^{-1} Σ̂_{U^{ij} (X_j U^{ij})} ,
where, again, we have used Tychonoff regularization to estimate the inverted covariance operator ΣU ij U ij.
Let K U ij, K X i U ij, and K X j U ij be the Gram matrices
K U ij= { U ij (U a ij, U b ij) }a, b = 1 n,
K X i U ij= {XUi, ij ((X i a, U a ij), (X i b, U b ij)) }a, b =1 n,
K X j U ij = {XUj, ij ((X j a, U a ij), (X j b, U b ij)) }a,b=1 n,
and G X i U ij,
G X j Uij, and
G U ij their centered versions
G X i U ij = Q K X i U ij Q,
G X j Uij = Q K X jU ij Q,
G U ij = Q K U ij Q.
As shown in Section S12 in the Supplementary Material,
‖ Σ̂_{Ẍ_i Ẍ_j | U^{ij}} ‖_{HS}
= ‖ G_{X_i U^{ij}}^{1/2} G_{X_j U^{ij}}^{1/2} - G_{X_i U^{ij}}^{1/2} G_{U^{ij}} ( G_{U^{ij}} + ϵ_{U^{(i,j)}} Q )^† G_{X_j U^{ij}}^{1/2} ‖_F,
where ‖·‖_F is the Frobenius norm.
Estimation of the edge set is then based on thresholding this norm; that is,
Ê = { (i,j) ∈Γ×Γ: i > j, Σ̂Ẍ i Ẍ j| U ijhs > ρ n }
for some chosen ρ n > 0.
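For concreteness, here is a small Python sketch of this second step for one pair (i,j), written from the Gram-matrix expression displayed above; the matrix square roots and the Moore–Penrose inverse are computed with SciPy, and the function names are our own.
```python
import numpy as np
from scipy.linalg import sqrtm, pinv

def cccov_stat(G_xi_u, G_xj_u, G_u, eps_u):
    """Frobenius-norm statistic for the estimated conjoined conditional covariance
    operator of one pair (i,j); all arguments are centered Gram matrices G = Q K Q."""
    n = G_u.shape[0]
    Q = np.eye(n) - np.ones((n, n)) / n
    S_i = np.real(sqrtm(G_xi_u))          # G_{X_i U^{ij}}^{1/2}
    S_j = np.real(sqrtm(G_xj_u))          # G_{X_j U^{ij}}^{1/2}
    M = S_i @ S_j - S_i @ G_u @ pinv(G_u + eps_u * Q) @ S_j
    return np.linalg.norm(M, 'fro')

def edge_set(stats, rho):
    # stats maps each pair (i, j) with i > j to its norm statistic
    return {pair for pair, value in stats.items() if value > rho}
```
Thresholding the collected statistics at a value ρ_n chosen as in the next subsection yields the estimated graph.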
§.§ Tuning
We have three types of tuning constants: those for the kernels, those for Tychonoff regularization, and the threshold ρ n. For the Tychonoff regularization, we have ϵ X (i,j) and ϵ X -(i,j) for step 1, and ϵ U (i,j) for step 2. In this paper we use the Gaussian radial basis function as the kernel:
(u,v) = exp ( - γ u - v 2 ).
For each (i,j), we have five γ's to determine: γ X (i,j) for the kernel X (i,j), γ X -(i,j) for X -(i,j), γXUi,ij for XUi,ij, γXUj,ij for XUj,ij, and γ U ij for U ij, which are chosen by the following formula (see, for example, <cit.>)
1/√(γ) = \binom{n}{2}^{-1} ∑_{a < b} ‖ s_a - s_b ‖ ,
where s 1, …, s n are the sample of random vectors corresponding to the mentioned five kernels. For example, for the kernel XUj, ij, s a = (X a j, U a ij).
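This rule is a one-liner in practice; the helper below (the name rbf_gamma is ours) simply equates 1/√γ with the average pairwise distance among s_1, …, s_n.
```python
import numpy as np
from itertools import combinations

def rbf_gamma(S):
    """Kernel parameter gamma such that 1/sqrt(gamma) equals the average
    pairwise distance among the sample points s_1, ..., s_n."""
    S = np.asarray(S, dtype=float).reshape(len(S), -1)
    avg = np.mean([np.linalg.norm(S[a] - S[b])
                   for a, b in combinations(range(len(S)), 2)])
    return 1.0 / avg ** 2
```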
For the tuning parameters in Tychonoff regularization, we use the following generalized cross validation scheme (GCV; see <cit.>):
GCV(ϵ) = ∑_{i<j} \frac{ ‖ G_1 - G_2 [ G_2 + ϵ λ_max(G_2) I_n ]^{-1} G_1 ‖_F }{ n^{-1} tr\{ I_n - G_2 [ G_2 + ϵ λ_max(G_2) I_n ]^{-1} \} },
where G 1, G 2 ∈n × n are positive semidefinite matrices, and λmax (G 2) is the largest eigenvalue of G 2. The matrices G 1 and G 2 are the following matrices for the three tuning parameters:
* G 1 = G X -(i,j), G 2 = G X (i,j) for ϵ X (i,j),
* G 1 = G X (i,j), G 2 = G X -(i,j) for ϵ X -(i,j),
* G 1 = G X (i,j), G 2 = G U ij for ϵ U (i,j),
We minimize (<ref>) over a grid to choose ϵ, as detailed in Section <ref>.
We also use
generalized cross validation to determine the thresholding parameter ρ n. Let Ê(ρ) be the estimated edge set using a threshold ρ, and, for each i ∈Γ, let C i (ρ)={ X j: j ∈Γ, (i,j) ∈Ê(ρ) } be the subset of components of X at the neighborhood of the node i in the graph (Γ, Ê ( ρ)). The basic idea is to apply the generalized cross validation to the regression of the feature of X i on the feature of C i (ρ). The generalized cross validation for this regression takes the form
GCV(ρ) = ∑_{i=1}^{p} \frac{ ‖ G_{X_i} - G_{C_i(ρ)} [ G_{C_i(ρ)} + ϵ λ_max(G_{C_i(ρ)}) I_n ]^{-1} G_{X_i} ‖_F }{ n^{-1} tr\{ I_n - G_{C_i(ρ)} [ G_{C_i(ρ)} + ϵ λ_max(G_{C_i(ρ)}) I_n ]^{-1} \} },
where G C i (ρ)= Q K C i (ρ) Q, and K C i (ρ) is the n × n kernel matrix for the sample of C i (ρ).
We minimize GCV(ρ) over the grid ρ∈{ℓ× 10-2: ℓ=2, …, 7} to determine the optimal threshold ρ n.
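Both criteria have the same algebraic form, so a single helper suffices. The sketch below (our own naming) evaluates one term of the criterion for a pair of matrices (G_1, G_2) and scans the grid; summing the first function over the appropriate index set, with (G_1, G_2) chosen as listed above for ϵ or as (G_{X_i}, G_{C_i(ρ)}) for ρ, reproduces the two criteria.
```python
import numpy as np

def gcv_term(G1, G2, eps):
    """One term of the generalized cross validation criterion:
    ||G1 - G2 [G2 + eps*lmax(G2) I_n]^{-1} G1||_F divided by
    (1/n) tr{ I_n - G2 [G2 + eps*lmax(G2) I_n]^{-1} }."""
    n = G1.shape[0]
    lmax = np.linalg.eigvalsh((G2 + G2.T) / 2.0).max()
    H = G2 @ np.linalg.inv(G2 + eps * lmax * np.eye(n))
    num = np.linalg.norm(G1 - H @ G1, 'fro')
    den = np.trace(np.eye(n) - H) / n
    return num / den

def choose_tuning(G1, G2, grid=(10.0, 1.0, 1e-1, 1e-2, 1e-3, 1e-4)):
    # the grid {10^{-l} : l = -1, 0, 1, 2, 3, 4} used in the text
    return min(grid, key=lambda e: gcv_term(G1, G2, e))
```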
Regarding the selection of the dimension of U ij, to our knowledge there has been no systematic procedure available to determine the dimension of the central class for nonlinear sufficient dimension reduction. While some recently developed methods for order determination for linear sufficient dimension reduction, such as the ladle estimate and predictor augmentation estimator <cit.>, may be generalizable to the nonlinear sufficient dimension reduction setting, we will leave this topic to future research. Our experiences and intuitions indicate that a small dimension, such as 1 or 2, for the central class would be sufficient in most cases. For example, in the classical nonparametric regression problems Y = f(X) + ϵ with X ϵ, the dimension of the central class is by definition equal to 1.
§ ASYMPTOTIC THEORY
In this section we develop the consistency and convergence rates of our estimate and related operators. The challenge of this analysis is that our procedure involves two steps: we first extract the sufficient predictor using one set of kernels, and then substitute it into another set of kernels to get the final result. Thus we need to understand how the error propagates from the first step to the second. We also develop the asymptotic theory allowing p to go to infinity with n, which is presented in the Supplementary Material.
§.§ Overview
Our goal is to derive the convergence rate of
| Σ̂Ẍ i Ẍ j | Ûijhs - ΣẌ i Ẍ j | U ijhs|,
as Σ̂Ẍ i Ẍ j | Ûijhs is the quantity we threshold to determine the edge set.
By the triangular inequality,
| Σ̂Ẍ i Ẍ j | Ûijhs - ΣẌ i Ẍ j | U ijhs|
≤Σ̂Ẍ i Ẍ j | Ûij - ΣẌ i Ẍ j | U ijhs
≤Σ̂Ẍ i Ẍ j | Ûij- Σ̂Ẍ i Ẍ j | U ijhs + Σ̂Ẍ i Ẍ j | U ij - ΣẌ i Ẍ j | U ijhs.
So we need to derive the convergence rates of the following quantities:
Ûij - U ij[ H -(i,j) (X)] d ij,
Σ̂Ẍ i Ẍ j | Ûij- Σ̂Ẍ i Ẍ j | U ijhs,
Σ̂Ẍ i Ẍ j | U ij - ΣẌ i Ẍ j | U ijhs,
where, to avoid overly crowded subscripts, we have used H -(i,j) (X) to denote H -(i,j) X when it occurs as a subscript.
The first and third convergence rates can be derived using the asymptotic tools for linear operators developed in <cit.>, <cit.>, <cit.>, and <cit.>. The second convergence rate is, however, a new problem, and it will also be useful in similar settings that require constructing estimators based on predictors extracted by sufficient dimension reduction. In some sense, this is akin to the post dimension reduction problem considered in <cit.>.
In the following, if {a n } and { b n } are sequences of positive numbers, then we write a n ≺ b n if a n / b n → 0. We write a n ≍ b n if 0< lim inf n (b n / a n) ≤lim sup n (b n / a n) < ∞. We write b n ≼ a n if either b n ≺ a n or b n ≍ a n. Because (i,j) is fixed in the asymptotic development, and also to emphasize the dependence on n, in the rest of this section we denote ϵ X (i,j), ϵ X -(i,j), and ϵ U (i,j) by ϵ n, η n, and δ n, respectively.
§.§ Transparent kernel
We first develop what we call the “transparent kernel” that passes information from step 1 to step 2 efficiently. Let Ω be a nonempty set, and : Ω×Ω→ a positive kernel.
We say that is a transparent kernel if, for each t ∈Ω, the function s ↦ (s,t) is twice differentiable and
* ∂ (s,t)/ ∂ s | s=t = 0;
* the matrix H(s,t) = ∂ 2 (s,t) / ∂ s ∂ s has a bounded operator norm; that is, there exist -∞ < C 1 ≤ C 2 < ∞ such that
C 1 ≤λmin (H(s,t)) ≤λmax (H(s,t)) < C 2
for all (s,t) ∈Ω×Ω, where λmin(·) and λmax (·) indicate the largest and smallest eigenvalues.
For example, the Gaussian radial basis function kernel is transparent, but the exponential kernel
(u,v) = τ 2 exp(-γ‖ u-v ‖ ) is not.
This condition implies a type of Lipschitz continuity in a setting that involves two reproducing kernels 0 and 1, where the argument of 1 is the evaluation of a member of the reproducing kernel Hilbert space generated by 0.
Suppose H 0 is the reproducing kernel Hilbert space generated by 0, H 0 d is the d-fold Cartesian product of H 0 with inner product defined by
⟨ U, V ⟩ H 0 d = ⟨ u 1, v 1 ⟩ H 0 + ⋯ + ⟨ u d, v d ⟩ H 0
where U = (u 1, …, u d) and V = (v 1, …, v d) are members of H 0 d,
H 1 is the reproducing kernel Hilbert space generated by 1. Then:
(i) for any U, V ∈ H 0 d, a ∈Ω, we have
U (a)- V(a) d≤ [ 0(a, a) ] 1/2 U- V H 0 d;
(ii)
if 1(s,t) is a transparent kernel,
then there exists a C> 0 such that, for each U, V ∈ H 0 d and a ∈Ω,
1 ( ·, U ( a) ) - 1 ( ·, V ( a) ) H 1≤ C [ 0 (a , a)]1/2 U - V H 0 d.
A direct consequence of this theorem is that, if Û is an estimate of some U, a member of H 0 d, with Û - U H 0 d = O P ( b n) for some 0 < b n → 0, Σ̂(Û) is a linear operator estimated from the sample Û 1, …, Û n (and perhaps some other random vectors), and Σ̂(U) is a linear operator estimated from the sample U 1, …, U n, then,
Σ̂( Û) - Σ̂(U) hs = O P ( b n).
This result is somewhat surprising, because sample estimates such as Σ̂( Û) can be viewed as E n 𝔾 ( X, Û ), where Û is an estimate of a function U in a functional space with norm · and 𝔾 is an operator-valued function. If Û - U = O P (b n) for some b n → 0, then it is not necessarily true that
E n 𝔾 ( X, Û) - E n 𝔾 ( X, U) = O P (b n),
particularly when U is an infinite dimensional object. Yet relation (<ref>) states exactly this. The reason behind this is that the reproducing kernel property separates the function Û and its argument X a (i.e. Û (x) = ⟨Û, (·, x) ⟩), which implies a type of uniformity among Û (X 1), …, Û (X n). This point will be made clear in the proof in the Supplementary Material.
Statement (<ref>) is made precise by the next theorem.
Suppose conditions (1) and (2) of Definition <ref> are satisfied with U, V, W therein replaced by X i, X j, and U ij. Suppose, furthermore:
(a) U ij, XUi,ij, and XUj,ij are transparent kernels;
(b) Ûij - U ij[ H -(i,j) (X) ] d ij = O P ( b n ) for some 0 < b n → 0.
Then
(i) Σ̂ÛijÛij - Σ̂ U ij U ijhs=O P ( b n );
(ii) Σ̂ (X iÛij) Ûij - Σ̂ (X i U ij) U ijhs=O P ( b n );
(iii) Σ̂ (X iÛij) (X jÛij) - Σ̂ (X i U ij) (X j U ij) hs=O P ( b n ).
Using Theorem <ref> we can derive the convergence rate of Σ̂Ẍ i Ẍ j | Ûij- Σ̂Ẍ i Ẍ j | U ijhs.
Suppose conditions in Theorem <ref> are satisfied and, furthermore,
(a)
ΣU ijU ijΣU ij(X i U ij) and ΣU ijU ijΣU ij(X j U ij)
are bounded linear operators;
(b) b n ≼δ n ≺ 1.
Then
Σ̂Ẍ i Ẍ j | Ûij- Σ̂Ẍ i Ẍ j | U ijhs = O P ( b n ).
Note that, unlike in Theorem <ref>, where our assumptions imply
ΣX -(i,j) X -(i,j)ΣX -(i,j) X (i,j)
is a finite-rank operator, here, we do not assume
ΣU ij(U ij)ΣU ij(X j U ij) to be a finite-rank (or even Hilbert-Schmidt) operator; instead, we assume it to be a bounded operator.
This is because (X j, U ij) contains U ij, which makes it unreasonable to assume ΣU ijU ijΣU ij(X j U ij) to be finite-rank or Hilbert Schmidt. For example, when X j is a constant, ΣU ij(X j U ij) is the same as ΣU ij U ij and ΣU ij U ijΣU ij U ij is not a Hilbert Schmidt operator, though it is bounded.
Theorem <ref> shows that convergence rate of (ii) in (<ref>) is the same as the convergence rate of (i) in (<ref>); it now remains to derive the convergence rate of (i) and (iii).
§.§ Convergence rates of (i) and (iii) in (<ref>)
We first present the convergence rate of Ûij to U ij. The proof is similar to that of Theorem 5 of <cit.> but with two differences. First, <cit.> took A in (<ref>) to be I, whereas we take it to be ΣYY. In particular, the generalized sliced inverse regression in <cit.> only has one tuning parameter η n, but we have two tuning parameters η n and ϵ n. Second, <cit.> defined (in the current notation) f r ij to be the eigenfunctions of
ΣX -(i,j)X -(i,j)ΣX -(i,j)X (i,j)ΣX (i,j)X (i,j)ΣX (i,j)X -(i,j)ΣX -(i,j)X -(i,j),
which is different from the generalized eigenvalue problem (<ref>).
For these reasons we need to re-derive the convergence rate of Ûij.
Suppose
(a) Assumption <ref> is satisfied;
(b) ΣX -(i,j) X (i,j) is a finite-rank operator with
( ΣX -(i,j) X (i,j) ) ⊆ ( ΣX -(i,j) X -(i,j)2),
( ΣX (i,j) X -(i,j) ) ⊆ ( ΣX (i,j) X (i,j));
(c) n -1/2≺η n ≺ 1, n -1/2≺ϵ n ≺ 1;
(d) for each r = 1, …, d ij, λij 1 > ⋯ > λijd ij.
Then,
Ûij - U ij[ H -(i,j) (X) ] d ij= O P (
η n -3/2ϵ n -1 n -1 + η n -1 n -1/2 + η n + ϵ n ).
An immediate consequence is that, under the transparent kernel assumption, the b n in Theorem <ref> is the same as this rate. We next derive the convergence rate in (iii) of (<ref>). This rate depends on the tuning parameter δ n in the estimate of conjoined conditional covariance operator, and it reaches b n for the optimal choice of δ n.
Suppose conditions (1) and (2) of Definition <ref> are satisfied with U, V, W therein replaced by X i, X j, and U ij. Suppose, furthermore,
(a)
ΣU ijU ijΣU ij(X i U ij) and ΣU ijU ijΣU ij(X j U ij)
are bounded linear operators;
(b) b n ≼δ n ≺ 1.
Then
Σ̂Ẍ i Ẍ j | U ij- ΣẌ i Ẍ j | U ijhs = O P (δ n). Consequently, if δ n ≍ b n, then
Σ̂Ẍ i Ẍ j | U ij- ΣẌ i Ẍ j | U ijhs = O P (b n).
Finally, we combine Theorem <ref> through Theorem <ref> to come up with the convergence rate of Σ̂Ẍ i Ẍ j | Ûij. Since there are numerous cross references among the conditions in these theorems, to make a clear presentation we list all the original conditions in the next theorem, even if they already appeared. These conditions are of two categories: those for the step 1 that involves sufficient dimension reduction of X (i,j) versus X -(i,j), and those for the step 2 that involves the estimation of the conjoined conditional covariance operator. We refer to them as the first-level and second-level conditions, respectively.
Suppose the following conditions hold:
(a) (First-level kernel) E [ (S, S)] < ∞ for = X (i,j) and = X -(i,j);
(b) (First-level operator) ΣX -(i,j) X (i,j) is a finite-rank operator with rank d ij and
( ΣX -(i,j) X (i,j) ) ⊆ ( ΣX -(i,j) X -(i,j)2),
( ΣX (i,j) X -(i,j) ) ⊆ ( ΣX (i,j) X (i,j));
all the nonzero eigenvalues of ΣX (i,j) X -(i,j)ΣX -(i,j) X -(i,j)ΣX -(i,j) X (i,j) are distinct;
(c) (First-level tuning parameters) n -1/2≺η n ≺ 1, n -1/2≺ϵ n ≺ 1, η n -3/2ϵ n -1 n -1 + η n -1 n -1/2 + η n 1/2 + ϵ n ≺ 1;
(d) (Second-level kernel) E [ (S, S)] < ∞ is satisfied for = U ij, XUi,ij, and XUj,ij; furthermore, they are transparent kernels;
(e) (Second-level operators) ΣU ijU ijΣU ij(X i U ij) and ΣU ijU ijΣU ij(X j U ij)
are bounded linear operators;
(f) (Second-level tuning parameter) δ n ≍η n -3/2ϵ n -1 n -1 + η n -1 n -1/2 + η n + ϵ n.
Then
Σ̂Ẍ i Ẍ j | Ûij- ΣẌ i Ẍ j | U ijhs = O P (η n -3/2ϵ n -1 n -1 + η n -1 n -1/2 + η n + ϵ n).
Using this result we immediately arrive at the variable selection consistency of the Sufficient Graphical Model.
Under the conditions in Theorem <ref>, if
η n -3/2ϵ n -1 n -1 + η n -1 n -1/2 + η n + ϵ n ≺ρ n ≺ 1,
Ê = { (i,j) ∈Γ×Γ: i > j, Σ̂Ẍ i Ẍ j | Ûijhs < ρ n }
then limn →∞ P ( Ê = E ) → 1.
§.§ Optimal rates of tuning parameters
The convergence rate in Theorem <ref> depends on ϵ n and η n explicitly, and δ n implicitly (in the sense that δ n ≍η n -3/2ϵ n -1 n -1 + η n -1 n -1/2 + η n + ϵ n is optimal for fixed ϵ n and η n). Intuitively, when ϵ n, η n, and δ n increase, the biases increase and variances decrease; when they decrease, the biases decrease and the variances increase. Thus there should be critical rates for them that balance the bias and variance, which are the optimal rates.
Under the conditions in Theorem <ref>, if ϵ n, η n, and δ n are of the form n a, n b, and n c for some a > 0, b > 0, and c > 0, then
(i) the optimal rates the tuning parameters are
n -3/8≼ϵ n ≼ n -1/4, η n ≍ n -1/4, δ n ≍ n -1/4;
(ii) the optimal convergence rate of the estimated conjoined conditional covariance operator is
Σ̂Ẍ i Ẍ j | Ûij- ΣẌ i Ẍ j | U ijhs = O P (n -1/4).
Note that a whole range of ϵ n is optimal; this is because the convergence rate does not have a unique minimizer. This also means the result is not very sensitive to this tuning parameter.
In the above asymptotic analysis, we have treated p as fixed when n →∞. We have also developed the consistency and convergence rate in the scenario where the dimension of p n of X goes to infinity with n, which is placed in the Supplementary Material (Section S9) due to limited space.
§ SIMULATION
In this section we compare the performance of our sufficient graphical model with previous methods such as <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, and a Naïve method which is based on the conjoined conditional covariance operator without the dimension reduction step.
By design, the sufficient graphical model has advantages over these existing methods under the following circumstances. First, since the sufficient graphical model does not make any distributional assumption, it should outperform <cit.> and <cit.> when the Gaussian or copula Gaussian assumptions are violated; second, due to the sufficient dimension reduction in sufficient graphical model, it avoids the curse of dimensionality and should outperform <cit.>, <cit.>, and a Naïve method in the high-dimensional setting; third, since sufficient graphical model does not require additive structure, it should outperform <cit.> when there is severe nonadditivity in the model. Our simulation comparisons will reflect these aspects.
For the sufficient graphical model, <cit.>, and the Naïve method, we use the Gaussian radial basis function as the kernel. The regularization constants ϵ X(i,j), ϵ X-(i,j), and ϵ U(i,j) are chosen by the generalized cross validation criterion described in Section <ref> with the grid {10-ℓ: ℓ=-1,0,1,2,3,4}. The kernel parameters γ X (i,j), γ X -(i,j), γXUi,ij, γXUj,ij, and γ U ij are chosen according to (<ref>). Because the outcomes of tuning parameters are stable, for each model, we compute the generalized cross validation for the first five samples and use their average value for the rest of the simulation.
The performance of each estimate is assessed using the averaged receiver operating characteristic curve as a function of the threshold ρ.
The accuracy of a method across all ρ is measured by the area under the receiver operating characteristic curve.
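For a single simulated sample this evaluation can be coded in a few lines; the sketch below (our own implementation) sweeps the threshold ρ over the observed statistic values and integrates the resulting curve with the trapezoidal rule. The curves reported in the paper are averages of this output over the 50 replicated samples.
```python
import numpy as np

def roc_auc(stats, true_edges):
    """ROC curve and its area for one sample; stats maps each pair (i, j),
    i > j, to its norm statistic, and true_edges is the model's edge set."""
    pairs = sorted(stats)
    scores = np.array([stats[p] for p in pairs])
    labels = np.array([(p in true_edges) or (p[::-1] in true_edges) for p in pairs])
    thresholds = np.r_[np.inf, np.sort(scores)[::-1], -np.inf]
    tpr = np.array([(scores[labels] > t).mean() for t in thresholds])
    fpr = np.array([(scores[~labels] > t).mean() for t in thresholds])
    return fpr, tpr, np.trapz(tpr, fpr)
```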
To isolate the factors that affect accuracy, we first consider two models with relatively small dimensions and large sample sizes, which are
Model 1: X^1 = ϵ^1, X^2 = ϵ^2, X^3 = sin(2X^1) + ϵ^3,
X^4 = (X^1)^2 + (X^2)^2 + ϵ^4, X^5 = ϵ^5,
Model 2: X^1 = ϵ^1, X^2 = X^1 + ϵ^2, X^3 = ϵ^3, X^4 = (X^1 + X^3)^2 + ϵ^4,
X^5 = cos(2X^2X^3) + ϵ^5, X^6 = X^4 + ϵ^6,
where the ϵ^i, i = 1, …, p, are independent and identically distributed standard normal random variables. The edge sets of the two models are
Model 1: E = { (1,3), (1,4), (2,4), (1,2) },
Model 2: E = { (1,2), (1,4), (3,4), (1,3), (2,5), (3,5), (2,3), (4,6) }.
We use n = 100, 1000 for each model, and for each n, we generate 50 samples to compute the averaged receiver operating characteristic curves. The dimension d ij for sufficient graphical model is taken to be 2 for all cases (we have also used d ij = 1 and the results are very similar to those presented here).
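For readers who wish to reproduce this setting, the two models can be generated as follows; this is a minimal sketch with our own function names, and the edge sets (in 1-based labels) are copied from the display above.
```python
import numpy as np

def model_1(n, rng):
    e = rng.standard_normal((n, 5))
    X = e.copy()                                       # X^1, X^2, X^5 are pure noise
    X[:, 2] = np.sin(2 * X[:, 0]) + e[:, 2]            # X^3
    X[:, 3] = X[:, 0] ** 2 + X[:, 1] ** 2 + e[:, 3]    # X^4
    return X, {(1, 3), (1, 4), (2, 4), (1, 2)}

def model_2(n, rng):
    e = rng.standard_normal((n, 6))
    X = e.copy()                                       # X^1, X^3 are pure noise
    X[:, 1] = X[:, 0] + e[:, 1]                        # X^2
    X[:, 3] = (X[:, 0] + X[:, 2]) ** 2 + e[:, 3]       # X^4
    X[:, 4] = np.cos(2 * X[:, 1] * X[:, 2]) + e[:, 4]  # X^5
    X[:, 5] = X[:, 3] + e[:, 5]                        # X^6
    return X, {(1, 2), (1, 4), (3, 4), (1, 3), (2, 5), (3, 5), (2, 3), (4, 6)}

rng = np.random.default_rng(1)
X, E = model_1(1000, rng)                              # n = 100 or 1000 in the experiments
```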
The plots in the first row of Figure <ref> show the averaged receiver operating characteristic curves for the seven methods, with the following plotting symbol assignment:
Sufficient graphical model: red solid line; <cit.>: red dotted line;
<cit.>: black solid line; <cit.>: black dotted line;
<cit.>: red dashed line; Naïve: blue dotted line;
<cit.>: black dashed line.
From these figures we see that
the two top performers are clearly sufficient graphical model and <cit.>, and their performances are very similar. Note that none of the two models satisfies the Gaussian or copula Gaussian assumption, which explains why sufficient graphical model and <cit.> outperform <cit.> and <cit.>. Sufficient graphical model and <cit.> also outperform <cit.>, <cit.>, and Naïve method, indicating that curse of dimensionality already takes effect on the fully nonparametric methods. The three nonparametric estimators have similar performances. Also note that Model I has an additive structure, which explains the slight advantage of <cit.> over sufficient graphical model in subfigure (a) of Figure <ref>; Model II is not additive, and the advantage of <cit.> disappears in subfigure (b) of Figure <ref>.
We next consider two models with relatively high dimensions and small sample sizes. A convenient systematic way to generate larger networks is via the hub structure. We choose p = 200, and randomly generate ten hubs h 1, …, h 10 from the 200 vertices. For each h k, we randomly select a set H k of 19 vertices to form the neighborhood of h k. With the network structures thus specified, our two probabilistic models are
Model 3: X^i = 1 + |X^{h_k}|^2 + ϵ^i, where i ∈ H_k ∖ h_k,
Model 4: X^i = sin((X^{h_k})^3) ϵ^i, where i ∈ H_k ∖ h_k,
and ϵ i's are the same as in Models 1 and 2. Note that, in Model III, the dependence of X i on X h k is through the conditional mean E ( X i | X h k), whereas in Model IV, the dependence is through the conditional variance ( X i | X h k ).
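A sketch of the hub-based data generation is given below. The text does not say whether the ten neighbourhoods may overlap; since 10 hubs plus 10 × 19 neighbours account for exactly 200 vertices, we assume here that they are disjoint, and that assumption is the only detail added by the code.
```python
import numpy as np

def hub_model(n, variant=3, p=200, n_hubs=10, hub_size=19, seed=0):
    """Models 3 and 4: ten hubs, each with 19 neighbours (assumed disjoint)."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(p)
    hubs = perm[:n_hubs]
    nbr_blocks = perm[n_hubs:].reshape(n_hubs, hub_size)
    X = rng.standard_normal((n, p))               # epsilon^i; hub columns stay as noise
    edges = set()
    for h, nbrs in zip(hubs, nbr_blocks):
        for i in nbrs:
            e = rng.standard_normal(n)
            if variant == 3:                      # dependence through the conditional mean
                X[:, i] = 1.0 + np.abs(X[:, h]) ** 2 + e
            else:                                 # variant 4: through the conditional variance
                X[:, i] = np.sin(X[:, h] ** 3) * e
            edges.add((int(max(i, h)) + 1, int(min(i, h)) + 1))  # 1-based labels
    return X, edges
```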
For each model, we choose two sample sizes n=50 and n=100. The averaged receiver operating characteristic curves (again averaged over 50 samples) are presented in the second row in Figure <ref>. From the figures we see that, in the high-dimensional setting with p > n, sufficient graphical model substantially outperforms all the other methods, which clearly indicates the benefit of dimension reduction in constructing graphical models.
We now consider a Gaussian graphical model to investigate any efficiency loss incurred by sufficient graphical model. Following the similar structure used in <cit.>, we choose p=20, n=100, 200, and the model
Model 5: X ∼ N(0, Θ-1),
where Θ is 20 × 20 precision matrix with diagonal entries 1, 1, 1, 1.333, 3.010, 3.203, 1.543, 1.270, 1.544, 3, 1, 1, 1.2, 1, 1, 1, 1, 3, 2, 1, and nonzero off-diagonal entries θ3,5=1.418, θ4,10=-0.744, θ5,9=0.519, θ5,10=-0.577, θ13,17=0.287, θ17,20=0.542, θ14,15=0.998. As expected, Figure <ref> shows that <cit.>, <cit.>, and <cit.> perform better than sufficient graphical model in this case. However, sufficient graphical model still performs reasonably well and significantly outperforms the fully nonparametric methods.
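The Gaussian model can be simulated directly from the precision matrix with the entries listed above; the following sketch (our own code) assembles Θ, fills in the off-diagonal entries symmetrically, draws the sample, and reads the edge set off the nonzero off-diagonal positions.
```python
import numpy as np

diag = [1, 1, 1, 1.333, 3.010, 3.203, 1.543, 1.270, 1.544, 3,
        1, 1, 1.2, 1, 1, 1, 1, 3, 2, 1]
off = {(3, 5): 1.418, (4, 10): -0.744, (5, 9): 0.519, (5, 10): -0.577,
       (13, 17): 0.287, (17, 20): 0.542, (14, 15): 0.998}
Theta = np.diag(np.asarray(diag, dtype=float))
for (i, j), v in off.items():                   # symmetric precision matrix
    Theta[i - 1, j - 1] = Theta[j - 1, i - 1] = v
rng = np.random.default_rng(0)
X = rng.multivariate_normal(np.zeros(20), np.linalg.inv(Theta), size=200)  # n = 100 or 200
true_edges = {(max(i, j), min(i, j)) for (i, j) in off}
```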
Finally, we conducted some simulation on the generalized cross validation criterion (<ref>) for determining the threshold ρ n. We generated samples from Models I through V as described above, produced the receiver operating characteristic curves using sufficient graphical model, and determined the threshold ρ n by (<ref>). The results are presented in Figure S1 in the Supplementary Material. In each penal, the generalized cross validation-determined threshold ρ n are represented by the black dots on the red receiver operating characteristic curves.
§ APPLICATION
We now apply sufficient graphical model to a data set from the DREAM 4 Challenge project and compare it with other methods.
The goal of this Challenge is to recover gene regulation networks from simulated steady-state data.
A description of this data set can be found in <cit.>.
Since <cit.> already compared their method with <cit.>, <cit.>, <cit.>, <cit.>, and Naïve method for this dataset and demonstrated the superiority of <cit.> among these estimators, here we will focus on the comparison of the sufficient graphical model with <cit.> and the champion method for the DREAM 4 Challenge.
The data set contains data from five networks, each of dimension 100 and sample size 201. We use the Gaussian radial basis function kernel for sufficient graphical model and <cit.> and the tuning methods described in Section <ref>. For sufficient graphical model, the dimensions d ij are taken to be 1. We have also experimented with d ij = 2 but the results (not presented here) show no significant difference. Because the true networks are available, we can compare the receiver operating characteristic curves and their areas under the curve, which are shown in Table <ref>.
As we can see from Table <ref>, sufficient graphical model has the same areas under the receiver operating characteristic curve values as <cit.> for Networks 2, 3, and 4, performs better than <cit.> for Network 5, but trails slightly behind <cit.> for Network 1; sufficient graphical model has the same areas under the curve as the champion method, performs better for Network 5 and worse for Network 1. Overall, sufficient graphical model and <cit.> perform similarly in this dataset, and they are on a par with the champion method. We should point out that sufficient graphical model and <cit.> are purely empirical; they employ no knowledge about the underlying physical mechanism generating the gene expression data. However, according to <cit.>, the champion method did use a differential equation that reflects the underlying physical mechanism.
The results for threshold determination are presented in Figure S2 in the Supplementary Material.
§ DISCUSSION
This paper is a first attempt to take advantage of the recently developed nonlinear sufficient dimension reduction method to nonparametrically estimate the statistical graphical model while avoiding the curse of dimensionality. Nonlinear sufficient dimension reduction is used as a module and applied repeatedly to evaluate conditional independence, which leads to a substantial gain in accuracy in the high-dimensional setting.
Compared with the Gaussian and copula Gaussian methods, our method is not affected by the violation of the Gaussian and copula Gaussian assumptions. Compared with the additive method <cit.>, our method does not require an additive structure and retains the conditional independence as the criterion to determine the edges, which is a commonly accepted criterion. Compared with fully nonparametric methods, sufficient graphical model avoids the curse of dimensionality and significantly enhances the performance.
The present framework opens up several directions for further research. First, the current model assumes that the central class S X (i,j) | X -(i,j) is complete, so that generalized sliced inverse regression is the exhaustive nonlinear sufficient dimension reduction estimate. When this condition is violated, generalized sliced inverse regression is no longer exhaustive and we can employ other nonlinear sufficient dimension reduction methods such as the generalized sliced averaged variance estimation <cit.> to recover the part of the central class that generalized sliced inverse regression misses. Second, though we have assumed that there is a proper sufficient sub-σ-field G -(i,j) for each (i,j), the proposed estimation procedure is still justifiable when no such sub-σ-field exists. In this case, U ij is still the most important set of functions that characterize the statistical dependence of X (i,j) on X -(i,j) – even though it is not sufficient. Without sufficiency, our method may be more appropriately called the Principal Graphical Model than the sufficient graphical model. Third, the current method can be extended to functional graphical model, which are common in medical applications such as EEG and fMRI. Several functional graphical models have been proposed recently, by
<cit.>, <cit.>, <cit.>, and <cit.>. The idea of a sufficient graph can be applied to this setting to improve efficiency.
This paper also contains some theoretical advances that are novel to nonlinear sufficient dimension reduction. For example, it introduces a general framework to characterize how the error of nonlinear sufficient dimension reduction propagates to the downstream analysis in terms of convergence rates. Furthermore, the results for convergence rates of various linear operators allowing the dimension of the predictor to go to infinity are the first of its kind in nonlinear sufficient dimension reduction. These advances will benefit the future development of sufficient dimension reduction in general, beyond the current context of estimating graphical models.
Bing Li's research on this work was supported in part by the NSF Grant DMS-1713078. Kyongwon Kim's work was supported by the National Research Foundation of Korea(NRF) grant funded by the Korea government(MSIT) (No.2021R1F1A1046976, RS-2023-00219212), basic Science Research Program through the National Research Foundation of Korea(NRF) funded by the Ministry of Education (2021R1A6A1A10039823).
§ SUPPLEMENTARY MATERIAL
Supplementary material includes proofs of all theorems, lemmas, corollaries, and propositions in the paper, asymptotic development for the high-dimensional setting, some additional simulation plots for threshold determination.
|
http://arxiv.org/abs/2307.04299v1 | 20230710013448 | Schiff moments of deformed nuclei | [
"Oleg P. Sushkov"
] | nucl-th | [
"nucl-th",
"physics.atom-ph"
] |
School of Physics, The University of New South Wales, Sydney, New South Wales
2052, Australia
Stimulated by the recent suggestion of a Cosmic Axion Spin Precession Experiment
with a Eu-containing compound, we develop a new method for the accurate calculation
of Schiff moments of even-odd deformed nuclei.
The method is essentially based on experimental data on magnetic moments and
E1,E3-amplitudes in the given even-odd nucleus and in adjacent even-even
nuclei. Unfortunately, such sets of data are not yet known for most nuclei
of interest. Fortunately, the full set of data is available for ^153Eu.
Hence, we perform the calculation for ^153Eu and find the value of the
Schiff moment.
The value is about 30 times larger than a typical Schiff moment
of a spherical heavy nucleus. The enhancement of the Schiff moment
in ^153Eu is related to the low energy octupole mode.
On the other hand, the value of the Schiff moment we find is 30 times
smaller than that obtained under the assumption of static octupole
deformation.
Schiff moments of deformed nuclei.
O. P. Sushkov
August 12, 2023
====================================
§ INTRODUCTION
Electric dipole moment (EDM) of an isolated quantum object in a nondegenerate
quantum state is a manifestation of violation of time reversal (T) and
parity (P) fundamental symmetries. Searches of EDM
of neutron is a long quest for fundamental
P,T-violation <cit.>.
EDM of a nucleus can be significantly larger than that of a
neutron <cit.>.
However a nucleus has nonzero electric charge and therefore in a charge neutral
system (atom, molecule, solid) EDM of nucleus cannot be measured <cit.>. The quantity that can be measured is the so called Schiff Moment (SM)
which is nonzero due to the finite nuclear size <cit.>.
Like EDM the SM is a vector directed along the angular momentum.
Renewal of my interest to this problem is related to Cosmic Axion Spin
Precession Experiment (CASPEr) on searches of the QCD axion dark matter.
The current CASPEr experiment is based on Lead Titanate
ferroelectric <cit.>, see also Ref. <cit.>.
The experiment is probing the Schiff moment of ^207Pb nucleus.
There is a recent suggestion <cit.> to use for CASPEr
experiment the crystal of EuCl_3· 6H_2O instead of Lead Titanate.
The major advantage is experimental: a possibility to polarise Eu nuclei via
optical pumping in this crystal
allows to improve sensitivity by orders of magnitude.
Expected effect in EuCl_3· 6H_2O has been calculated in
Ref. <cit.>.
The observable effect in a solid is built like a Russian doll Matreshka,
it has four different spatial and energy scales inside each other.
(i) Quark-gluon scale, r < 1fm,
(ii) Nuclear scale, 1fm ≲ r ≲ 10fm,
(iii) Atomic scale, 10fm < r ≲ 1Å,
(iv) Solid state scale, r > 1Å.
The calculation <cit.> is pretty accurate at the scale (iii),
it has an uncertainty at most by factor 2 at the scales (i) and (iv).
However, the uncertainty at the scale (ii), the nuclear scale, is two
orders of magnitude, this is the uncertainty in ^153Eu Schiff moment.
Such an uncertainty is more or less typical for deformed even-odd nuclei.
The aim of the present work is twofold: (i) development of an accurate method
for SM calculation, (ii) performance of the calculation for ^153Eu.
A reliable purely theoretical calculation is hardly possible.
Therefore, our approach is to use available experimental
data as much as possible.
^153Eu is a deformed nucleus. A simplistic estimate of SM of a
nucleus with quadrupolar deformation based on Nilsson model
performed in Ref. <cit.>
gave a result by an order of magnitude larger than SM of a spherical heavy
nucleus, say SM of ^207Pb.
It has been found later in Ref. <cit.> that if the nucleus
has a static octupolar deformation the SM is dramatically enhanced.
Based on analysis of rotational spectra of ^153Eu authors of
Ref. <cit.> argued that ^153Eu has a static octupolar
deformation and hence, using the idea <cit.>
arrived at an estimate of the SM that is 10^3 times larger than that of a
heavy spherical nucleus.
To elucidate structure of wave functions of ^153Eu in the
present work we analyse available experimental data on magnetic moments and
amplitudes of E1,E3-transitions. In the result of this
analysis we confidently claim that the model of static octupolar deformation
is too simplistic. Nilsson wave functions of quadrupolar deformed nucleus are
almost correct. However, this does not imply that the octupolar mode is
irrelevant.
There is an admixture of the octupole vibration to the Nilsson states
and we determine the amplitude of the admixture. All in all this allows us
to perform a pretty reliable and accurate calculation of SM.
To avoid misunderstanding, our statement about the magnitude of the SM
is based on analysis of a broad set of data, therefore, the statement is
nuclear
specific, it is valid for ^153Eu and it is valid for ^237Np.
Unfortunately such sets of data are not known yet for many interesting
nuclei.
Structure of the paper is the following.
In Section II we analyse lifetimes of relevant levels in ^152Sm and
^153Eu and hence find the relevant E1-amplitudes.
The Section III is the central one, here we discuss the structure of wave
functions of the parity doublet |5/2^±⟩ in ^153Eu.
Section IV determines the quadrupolar deformation of ^153Eu.
In Section V we explain the parametrisation we use for the octupolar
deformation.
Section VI describes the structure of octupole excitations.
Section VII extracts the value of octupole deformation from experimental data.
In section VIII we calculate the T- and P-odd mixing of 5/2^+ and 5/2^-
states in ^153Eu.
EDM of ^153Eu nucleus is calculated in Section IX and SM of ^153Eu
nucleus is calculated in Section X.
Section XI presents our conclusions.
§ EXPERIMENTAL E1-AMPLITUDES IN ^152SM AND ^153EU
All data in this Section are taken from Ref. <cit.>.
Even-even nuclei in vicinity of ^153Eu have low energy ≈ 1MeV
collective octupole excitation.
There is the quadrupolar ground state rotational band and the octupolar
rotational band starting at energy of the octupole excitation.
As a reference even-even nucleus we take ^152Sm. In principle ^154Sm
also would do the job, but the data for ^154Sm are much less detailed,
especially on electron scattering that we discuss in Section VII.
Energies of the relevant states of the octupolar band in ^152Sm
are: E(1^-)=963keV, E(3^-)=1041keV.
The halftime of the 1^- state is t_1/2=28.2fs, hence the
lifetime is τ=28.2/ln(2)=40.7fs.
The state decays via the E1-transition to the ground state, 0^+, and to the
2^+ state of the ground state rotational band.
The decay branching ratio is W(0^+)/W(2^+)=0.823.
Therefore, the partial lifetime for 1^- → 0^+ transition is
τ_partial=90fs.
The 1^- → 0^+ E1-transition decay rate is <cit.>
1/τ_partial=4ω^3/3(2j+1)
|⟨ j^'||d||j⟩|^2 ,
For 1^- → 0^+ transition j=1 and j^'=0.
The reduced matrix element of the dipole moment can be expressed in terms
of d_z in the proper reference frame of the deformed nucleus <cit.>
|⟨ j^'||d|| j⟩|^2 = |√((2j+1)(2j^'+1))(
[ j^' 1 j; -m 0 m ])|^2
× |⟨ 0| d_z|1⟩|^2
For 1^- → 0^+ transition j=1, j^'=0, m=0. Hence
⟨ 0| d_z|1⟩=+ e× 0.31fm
Here e=|e| is the elementary charge.
^153Eu is a deformed nucleus with the ground state |5/2^+⟩.
The nearest opposite parity state |5/2^-⟩ has energy E=97.4keV.
The halftime of the |5/2^-⟩ state is t_1/2=0.20ns, hence the
lifetime is τ=0.29ns. The lifetime is due to the E1-decay
|5/2^-⟩→ |5/2^+⟩.
Using Eqs.(<ref>),(<ref>) with j=j^'=m=5/2 and comparing with
experiments we find the corresponding d_z in the proper reference frame.
⟨ 5/2^+ |d_z|5/2^-⟩= - e× 0.12fm
Of course lifetimes do not allow to determine signs in Eqs. (<ref>) and
(<ref>). We explain in Section VI how the signs are determined.
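As a numerical cross-check, the two values of d_z quoted in this section follow directly from the measured partial lifetimes through Eqs. (<ref>) and (<ref>). The short script below is our own; it works in natural units with e^2 = αħc and uses the closed-form values of the relevant 3j-symbols, reproducing |d_z| ≈ 0.31 e·fm for ^152Sm and ≈ 0.12 e·fm for ^153Eu.
```python
import numpy as np

hbar = 6.582e-22        # MeV s
hbarc = 197.327         # MeV fm
alpha = 1.0 / 137.036   # e^2 = alpha * hbar * c

def dz_from_lifetime(tau, omega, j, jp, threej):
    """|d_z| in e*fm in the proper frame from the E1 partial lifetime tau (s),
    transition energy omega (MeV), angular momenta j, j', and the 3j-symbol
    (j' 1 j; -m 0 m), using the two equations quoted in this section."""
    width = hbar / tau                                     # MeV
    reduced2 = width * 3 * (2 * j + 1) / (4 * omega ** 3)  # |<j'||d||j>|^2 in natural units
    dz2 = reduced2 / ((2 * j + 1) * (2 * jp + 1) * threej ** 2)
    return np.sqrt(dz2 * hbarc ** 2 / alpha)               # convert e^2 = alpha*hbarc to e*fm

# 152Sm, 1- -> 0+ (m = 0): |3j| = 1/sqrt(3)
print(dz_from_lifetime(90e-15, 0.963, 1, 0, 1 / np.sqrt(3)))   # ~0.31
# 153Eu, 5/2- -> 5/2+ (m = 5/2): |3j| = m / sqrt(j(j+1)(2j+1))
j = 2.5
print(dz_from_lifetime(0.29e-9, 0.0974, j, j,
                       j / np.sqrt(j * (j + 1) * (2 * j + 1))))  # ~0.12
```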
§ WAVE FUNCTIONS OF THE GROUND STATE PARITY DOUBLET |5/2^±⟩
IN ^153EU.
The standard theoretical description of low energy states in ^153Eu
is based on the Nilsson
model of a quadrupolar-deformed nucleus. In agreement with experimental data,
the model predicts the spin and parity of the ground state, 5/2^+. It also
predicts the existence of the low-energy excited state with opposite parity,
5/2^-. The wave functions of the odd proton in the Nilsson scheme are
|5/2^+⟩=|4135/2⟩, |5/2^-⟩=|5325/2⟩.
Explicit form of these wave functions is presented in Appendix.
There are two rotational towers built on these states.
An alternative to Nilsson approach is the model of static collective octupolar
deformation <cit.>.
In this model the odd proton moves in the pear shape potential forming
the Ω=5/2 single particle state.
A single rotational tower built on this odd proton state is consistent with
observed spectra and this is why the paper <cit.> argues in favour
of static octupole deformation. However, two different parity rotational
towers in Nilsson scheme are equally consistent with observed spectra.
Therefore, based on spectra one can conclude only that both the Nilsson model
and the static octupolar deformation model are consistent with spectra. One
needs additional data to distinguish these two models.
The Nilsson model explains the value Ω=5/2 while in the
“static octupole” model this value pops up from nowhere. However, in
principle it is possible that accidentally
the single particle state in the pear shape potential has
Ω=5/2.
To resolve the issue “Nilsson vs octupole” we look at magnetic moments.
The magnetic moment of the ground state is μ_5/2^+=1.53μ_N,
see Ref. <cit.>. This value is consistent with prediction
on the Nilsson model <cit.>.
The magnetic moment of the 5/2^- state has some ambiguity,
the measurement led to two possible interpretations,
“the recommended value” μ_5/2^-=3.22μ_N, and another value
consistent with measurement μ_5/2^-=-0.52μ_N, see Ref. <cit.>. The recommended value is consistent with the prediction of the
Nilsson model <cit.>.
Thus the magnetic moments are consistent with the Nilsson model and
inconsistent with the octupole model which implies
μ_5/2^-≈μ_5/2^+.
While the arguments presented above rule out the static octupole model,
they do not imply that the octupole is irrelevant, actually it is relevant.
We will show now that while the Nilsson model explains magnetic moments
it cannot explain E1-amplitudes.
Within the Nilsson model one can calculate the E1 matrix
element ⟨ 5/2^+|d_z|5/2^-⟩.
A straightforward calculation with wave functions (<ref>) gives the
dipole matrix element
d_z = e (1-Z/A)⟨ 5325/2|z|4135/2⟩
= e (1-Z/A)z_0/√(2)(0.527-0.510+0.017)
= e× 0.036fm .
Here we account the effective proton charge (1-Z/A)=0.59.
The calculated matrix element (<ref>)
is 3 times smaller than the experimental one (<ref>).
The first impression is that the disagreement is not bad
having in mind the dramatic compensations in Eq.(<ref>).
However, there are two following observations.
(i) It has been pointed out in Ref.<cit.> that the compensation
in (<ref>) is not accidental: the compensation is due to the structure
of Nilsson states, and the matrix
element ⟨ 5325/2|z|4135/2⟩ is proportional
to the energy
splitting Δ E = E_5/2^--E_5/2^+. The matrix element is small
because Δ E is small compared to the shell model energy
ω_0≈ 7.7MeV.
The value (<ref>) is calculated with wave functions from
Ref. <cit.> that correspond to Δ E ≈ 450keV.
On the other hand in reality Δ E ≈ 97keV.
Therefore, the true matrix element must be even smaller than the value
(<ref>).
(ii) The electric dipole operator is T-even. Therefore, there is a suppression of the matrix element due to pairing of protons, d_z → d_z (u_1u_2-v_1v_2),
where u and v are pairing BCS factors. This further reduces the matrix
element, see Ref.<cit.>.
The arguments in the previous paragraph lead to the conclusion that while the
Nilsson model correctly predicts quantum numbers and explains magnetic
moments, the model does not explain the electric dipole
transition amplitude.
The experimental amplitude is by an order of magnitude larger than the Nilsson one. This observation has been made already in Ref.<cit.>.
Admixture of the collective octupole to Nilsson states resolves the dipole moment issue.
So, we take the wave functions as
|+⟩ = |5/2^+⟩ = √(1-α^2)|4135/2⟩|0⟩-α| 5325/2⟩|1⟩
|-⟩ = |5/2^-⟩ = √(1-α^2)| 5325/2⟩|0⟩
-α|4135/2⟩|1⟩
Here the states |0⟩ and |1⟩ describe collective octupole mode,
|0⟩ is the symmetric octupole vibration and
|1⟩ is antisymmetric octupole vibration. For intuition:
|0⟩ corresponds to the
ground state of ^152Sm and |1⟩ corresponds to the octupole
excitation at energy ≈ 1MeV.
We will discuss in Section VI the specific structure of the states
|0⟩, |1⟩, explain why the mixing coefficient in both states
in (<ref>) is the same, and explain why α >0.
Using (<ref>) and
neglecting the small single particle contribution the transition electric dipole moment is
⟨5/2^+|d_z|5/2^-⟩=-2α√(1-α^2)⟨ 0 |d_z|1⟩
Hence, using the experimental values (<ref>) and (<ref>) we find
α ≈ 0.12/(2 × 0.31) ≈ 0.20
Thus, the weight of the admixture of the collective vibration to the simple Nilsson state is just α^2= 4%.
This weight is sufficiently small to make the Nilsson scheme calculation of
magnetic moments correct. On the other hand the weight is sufficiently large
to influence electric properties.
Note that the octupole vibration itself does not
have an electric dipole transition matrix element. The E1 matrix element
is zero due to elimination of zero mode, ⟨ 1|d_z|0⟩=0.
The nonzero value of the dipole matrix element, ⟨ 1|d_z|0⟩ 0,
arises due to a small shift of the neutron
distribution with respect to the proton distribution in combination
with the octupole deformation, see e.g.
Refs. <cit.>.
While this issue is important theoretically, pragmatically it is not
important to us since we take both values of matrix elements (<ref>)
and (<ref>) from experiment.
It is worth noting also that in the static octupole model one expects
⟨ 5/2^+ |d_z|5/2^-⟩= ⟨ 0| d_z|1⟩=+ e× 0.31fm
which, like the magnetic moments, is inconsistent with the data.
§ QUADRUPOLAR DEFORMATION OF ^153EU.
The standard way to describe nuclear deformation is to use parameters
β_l.
In the co-rotating reference frame for the quadrupolar deformation
the surface of the nucleus is given by equation (we neglect β_2^2
compared to 1)
R(θ)=R_0(1+β_2Y_2,0)
R_0=r_0A^1/3
r_0≈1.2fm
Here A is the number of nucleons.
Let us determine β_2 using the known electric quadrupole moment Q
in the ground state of ^153Eu. There are two contributions in Q,
(i) collective contribution due to the collective deformation,
(ii) single particle contribution of the odd proton. Using Nilsson
wave functions it is easy to check that the single particle contribution is
about 3-4% of the experimental one, so it can be neglected.
Collective electric quadrupole moment is given by density of protons ρ_p,
Q_0=Q_zz = ∫ρ_p(3z^2-r^2)dV=4√(π/5)∫ρ_pr^2Y_20dV
= 3ZR_0^2/√(5π)β_2
[1+2√(5)/7√(π)β_2
+12/7√(π)β_4]
Here we also account β_4. Z is the nuclear charge.
Eq.(<ref>) gives the quadrupole moment in the proper reference frame.
In the laboratory frame for the ground state, J=Ω=5/2, the quadrupole
moment is Q=5/14Q_0, see problem to 119 in Ref.<cit.>.
The ground state quadrupole moment of ^153Eu is
Q=2.412 barn <cit.>. From here, assuming β_4=0.07,
we find the quadrupole deformation of ^153Eu nucleus in the ground state,
β_2≈ 0.29 .
The values β_2≈ 0.29, β_4=0.07 perfectly agree with that
in ^152Sm determined from electron scattering <cit.>.
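The numerical inversion of the relation between Q and β_2 is elementary; the short script below (our own, using r_0 = 1.2 fm, β_4 = 0.07, and the correction terms exactly as written in Eq. (<ref>)) returns β_2 ≈ 0.29.
```python
import numpy as np
from scipy.optimize import brentq

Z, A = 63, 153
R0 = 1.2 * A ** (1 / 3)          # fm
Q_lab = 241.2                    # fm^2 (2.412 barn)
Q0 = Q_lab * 14 / 5              # proper-frame moment, Q = (5/14) Q0 for J = Omega = 5/2
beta4 = 0.07

def residual(b2):
    pref = 3 * Z * R0 ** 2 / np.sqrt(5 * np.pi)
    corr = (1 + 2 * np.sqrt(5) / (7 * np.sqrt(np.pi)) * b2
            + 12 / (7 * np.sqrt(np.pi)) * beta4)
    return pref * b2 * corr - Q0

print(brentq(residual, 0.0, 1.0))   # ~0.29
```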
The electric quadrupole moment of ^151Eu nucleus in the ground state is
Q=0.903barn <cit.>.
Therefore, in ^151Eu the quadrupolar deformation, β_2≈ 0.12,
is significantly smaller than that in ^153Eu.
§ NUCLEAR DENSITY VARIATION DUE TO THE OCTUPOLE DEFORMATION
The standard way to describe the static octupole deformation β_3 is to
use parametrisation (<ref>)
R(θ)=R_0(1+β_1Y_10+β_2Y_2,0+β_3Y_3,0+...)
This Eq. describes the surface of nucleus in the proper reference frame.
The dipole harmonic Y_10 is necessary to eliminate the zero mode, i.e.
to satisfy the condition
⟨ z⟩= ∫ρ(r)rY_10dV=0
where ρ(r) is the number density of nucleons.
From (<ref>) we find
β_1 = -x β_2 β_3 , x = √(243/(140π)) ≈ 0.743 .
For our purposes it is more convenient to use parametrisation different from
(<ref>), the parametrisation we use is
δρ= β_3
3A/4π R_0^2δ[r-R_0(1+β_2Y_20)](Y_30-xβ_2Y_10) .
Here δρ is the octupolar component of the nuclear density.
Due to the δ-function, δ[...], the component is nonzero only
at the surface of the nucleus. Parametrisations (<ref>) and (<ref>)
are equivalent, both satisfy the constraint (<ref>) and
both give the same octupole moment
Q_30=√(4π/7)∫ρ r^3 Y_30dV= β_3 3A/√(28π) R_0^3 .
§ STRUCTURE OF THE VIBRATIONAL STATES |0⟩, |1⟩
The deformation picture described in the previous section is purely classical.
Quantum mechanics makes some difference.
We work in the proper reference frame where the nuclear axis direction, the
z-axis, is fixed.
Hence, there are two possible orientations of the pear, as shown in
Fig.<ref>. There is tunnelling between these two orientations; the tunnelling
leads to the energy splitting and to the formation of the symmetric and antisymmetric states
|0⟩, |1⟩. This picture is valid when the tunnelling energy
splitting, Δ E_tun, is larger than the rotational energy
splitting,
Δ E_rot. Experimentally Δ E_tun∼ 1MeV,
Δ E_rot≈ 20keV, so the description is well justified.
The description of Fig. <ref> implies that
the octupole deformation is quasistatic. The quasistatic description is
justified by the existence of well defined rotational towers in ^152Sm
built on |0⟩ and |1⟩ states, see Ref. <cit.>.
Note that even if the pear tunnelling amplitude is comparable with the
rotational energy splitting, Δ E_tun∼Δ E_rot, the octupole
deformation is not static. To have a truly static octupole one needs
Δ E_tun≪Δ E_rot.
The Hamiltonian for the odd proton reads
H=p^2/2m+U(r) .
Here U(r) is the self-consistent potential of the even-even core.
It is well known that the nuclear density ρ(r) has approximately the
same shape as the potential
U(r) ≈U_0/ρ(0)ρ(r) ,
where U_0≈ -50MeV and ρ(0)=3/(4π r_0^3).
Hence the variation of the potential related to the octupole deformation is
δ U = U_0/ρ(0)δρ
= β_3 U_0 R_0 δ[r-R_0(1+β_2Y_20)](Y_30-xβ_2Y_10) .
This is the perturbation that mixes the single particle Nilsson states
with simultaneous mixing of |0⟩ and |1⟩. The mixing matrix
element is
M=⟨ 1|⟨ 5325/2|δ U|4135/2⟩|0⟩ = ∫ρ_sp(r)δ U(r) dV ,
ρ_sp(r)= ⟨ψ^*_532(r)ψ_413(r)⟩ .
Here ρ_sp is the off-diagonal single particle density
of the Nilsson wave functions (<ref>);
the density depends on r, and the brackets ⟨ ..⟩ in
ρ_sp denote averaging over spin only.
Numerical evaluation of the mixing matrix element is straightforward;
the answer at β_2=0.29 is M≈ 5β_3MeV. The value depends slightly
on β_2: at β_2=0 the value of M is 10% smaller.
The coefficient α in Eqs.(<ref>) is
α=M/Δ E_tun ,
where Δ E_tun≈ 1MeV.
Eqs.(<ref>),(<ref>) together with the positive value of M explain why the
coefficient α is the same in both Eqs.(<ref>) and why
α > 0.
Moreover, comparing (<ref>) with the value of α extracted from the
experimental data, Eq.(<ref>), we determine the octupole deformation,
β_3 =0.04. While the value is reasonable, unfortunately one cannot
say that this is an accurate value.
The shape approximation (<ref>) is not very accurate.
Even more importantly, it is not clear how the BCS factor influences ρ_sp. The BCS factor can easily
reduce ρ_sp by a factor ∼ 2-3, hence increasing β_3 by the
same factor.
Theoretical calculations of β_3 give
values from 0.05 <cit.>, to 0.075 <cit.>, and
even 0.15 <cit.>.
§ THE VALUE OF THE OCTUPOLE DEFORMATION PARAMETER Β_3
With wave functions shown in Fig.<ref> one immediately finds
the electric octupole matrix element between states |0⟩ and
|1⟩
⟨ 1|Q_30^(e)|0⟩=eZ/AQ_30 ,
where Q_30 is given by Eq.(<ref>).
We are not aware of direct measurements of Q_30^(e) in ^152Sm.
The book <cit.> presents the “oscillator strengths” for
corresponding E3 transitions in ^152Sm and ^238U,
^152Sm: B_3=1.2× 10^5e^2fm^6, ^238U: B_3=5× 10^5e^2fm^6
(table 6.14 in the book).
However, these values were not determined from direct electromagnetic
measurements; the “oscillator strengths” were extracted indirectly from
deuteron scattering off the nuclei.
Fortunately, for ^238U there is a more recent value determined from
electron scattering <cit.>: B_3=(6.4± 0.6)× 10^5e^2fm^6.
All in all, these data give β_3 ≈ 0.08 for both ^152Sm and
^238U.
Fortunately, the electron scattering data <cit.>
allow one to determine β_3 in ^152Sm fairly accurately.
Ref. <cit.> was aimed at determining β_2 and β_4;
their results, β_2=0.287± 0.003, β_4=0.070± 0.003, are remarkably
close to those we obtain for ^153Eu in Section IV.
The inelastic scattering spectrum copied from Ref. <cit.> is
shown in Fig.<ref>.
Here we reanalyse the spectrum.
The first inelastic peak at E=122keV (≈ channel 73) corresponds to
the 2^+
excitation of the rotational ground state band. The peak after subtraction
of the background is shown in panel a of Fig.<ref>.
Red dots are experimental points and the solid curve is the Gaussian fit
I=Ae^-(x-x_0)^2/σ^2
A=7.23, x_0=72.9, σ=5.21 .
Hence, the halfwidth is
Γ =2ln(2)σ=49.3keV. Here we take into account that one channel step is
6.82keV.
This energy resolution is 0.065% of the electron energy of 76MeV.
This is slightly smaller than the “typical value” of 0.08% mentioned
in Ref. <cit.>.
The peak in Fig.<ref> near channel 210 is a combination of the
3^- octupole state (E=1041keV) and the γ 2^+ state of the
γ-band (E=1086keV).
The peak after subtraction of the background is shown in panel b of
Fig.<ref>. We fit the double peak by the double Gaussian
I=B[e^-(x-x_1)^2/σ^2+e^-(x-x_2)^2/σ^2]
B=0.670, x_1=207.6, x_2=214.2, σ=5.21.
The value of x_1 corresponds to E=1041keV, the value of x_2 corresponds
to E=1086keV, and σ is known from (<ref>).
The fitting shows that the intensities of the 3^- and γ 2^+ lines cannot
differ by more than 5%, so we take them to be equal. Therefore, in the end
there is only one fitting parameter, B.
Based on Eqs.(<ref>) and (<ref>) we find the ratio of the spectral weights
S(3^-)/S(2^+)=B/A=0.093 .
Here 2^+ is the rotational state of the ground-state band.
Interestingly, the analysis also gives the spectral weight of the
γ 2^+ state. This allows one to determine the magnitude of the
γ-deformation. However, this issue is irrelevant to the Schiff
moment and therefore we do not analyse it further.
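The fit parameters quoted above can be checked for internal consistency with simple arithmetic. The sketch below is purely illustrative; it only reuses the quoted values A, x_0, B, x_1, x_2 and the 6.82 keV channel step:

```python
import numpy as np

keV_per_channel = 6.82
A, x0 = 7.23, 72.9                 # 2+ peak of the ground-state band (E = 122 keV)
B, x1, x2 = 0.670, 207.6, 214.2    # 3- (1041 keV) and gamma 2+ (1086 keV) peaks

# Energy calibration check: channel -> energy, anchored at the 122 keV peak
E = lambda x: 122.0 + (x - x0) * keV_per_channel
print(E(x1), E(x2))                # ~1041 and ~1086 keV, matching the level energies

# Splitting of the double peak in energy
print((x2 - x1) * keV_per_channel)  # ~45 keV = 1086 keV - 1041 keV

# Ratio of the spectral weights
print(B / A)                        # ~0.093
```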
The Coulomb potential of the Eu nucleus at r ≈ R_0 is 15MeV. This is
significantly smaller than the electron energy of 76MeV. Therefore, the electron
wavefunction can be considered as a plane wave. The momentum transfer is
q=2psin(93.5^o/2)≈ 111MeV≈ 0.562fm^-1 .
Using the expansion of the plane wave in spherical harmonics together with the
Wigner-Eckart theorem, the spectral weights can be expressed as integrals
in the co-rotating reference frame
S(2^+) ∝ |∫ Y_20j_2(qr)ρ(r) dV|^2
S(3^-) ∝ |∫ Y_30j_3(qr)δρ(r) dV|^2
Here j_l(qr) is the spherical Bessel function <cit.>,
ρ(r) is the density with quadrupolar deformation, and
δρ is given by Eq.(<ref>).
The coefficient of proportionality in both equations (<ref>) is the same
and therefore we skip it. Evaluation of integrals in (<ref>)
is straightforward, it gives
∫ Y_20j_2(qr)ρ(r) dV ∝β_2j_2(qR_0)=0.302β_2
∫ Y_30j_3(qr)δρ(r) dV ∝β_3j_3(qR_0)=0.205β_3
Comparing the theoretical ratio with its experimental value (<ref>) and
using the known quadrupolar deformation, we find the octupolar deformation
β_3=0.45β_2=0.130.
In the previous paragraph we used the plane wave approximation
for the electron wave function, neglecting the Coulomb potential ≈ 15MeV
compared to the electron energy 76MeV. A simple way to estimate the
Coulomb correction is to change q→ q^'≈ q(1+15/76)=0.673fm^-1.
This results in β_3=0.090. Probably this simplistic approach overestimates the
effect of the Coulomb potential. An accurate calculation with distorted electron
wave functions would allow one to determine β_3 very accurately.
For now we take
β_3= 0.10
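The chain of numbers leading to β_3 is easy to reproduce. The following sketch is a standalone check (not the original analysis code); it assumes ħc = 197.327 MeV fm and r_0 = 1.2 fm:

```python
import numpy as np
from scipy.special import spherical_jn

hbar_c = 197.327                        # MeV fm
p = 76.0                                # electron energy/momentum, MeV
theta = np.radians(93.5)                # scattering angle
R0 = 1.2 * 152.0**(1.0/3.0)             # 152Sm radius, fm
beta2 = 0.29
ratio_exp = 0.093                       # S(3-)/S(2+) from the fit

q = 2.0 * p * np.sin(theta/2.0) / hbar_c     # ~0.56 fm^-1

def beta3(q_eff):
    # S(3-)/S(2+) = (beta3 j3(qR0))^2 / (beta2 j2(qR0))^2
    j2 = spherical_jn(2, q_eff*R0)
    j3 = spherical_jn(3, q_eff*R0)
    return beta2 * (j2/j3) * np.sqrt(ratio_exp)

print(q, beta3(q))                      # plane-wave value, beta_3 ~ 0.13
print(beta3(q*(1.0 + 15.0/76.0)))       # crude Coulomb correction, beta_3 ~ 0.09
```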
§ T- AND P-ODD MIXING OF 5/2^+ AND 5/2^- STATES IN ^153EU
The operator of the T, P-odd interaction reads <cit.>
H_TP=ηG/2√(2)mσ⃗·∇⃗ρ
Here G≈ 1.03× 10^-5/m^2 is the Fermi constant, η is a dimensionless
constant characterising the interaction, σ⃗ is the Pauli matrix
corresponding to the spin of the unpaired nucleon, and ρ is the nuclear
number density.
The single particle matrix element of H_TP between the Nilsson states can be
estimated as, see Ref. <cit.>,
⟨ 532|H_TP|413⟩∝⟨ 532|∇ρ|413⟩∝⟨ 532|∇ U|413⟩∝⟨ 532|[p, H]|413⟩
∝ (E_532-E_413) ⟨ 532|p|413⟩∝ (E_532-E_413) ⟨ 532|[r,H]|413⟩
∝ (E_532-E_413)^2 ⟨ 532|r|413⟩
Thus, the matrix element is suppressed by the small parameter (Δ E/ω_0)^2,
with Δ E ≈ 100keV and ω_0 ≈ 8MeV. Hence,
the single particle matrix element can be neglected.
The matrix element between the physical states (<ref>) also contains the
collective octupole contribution
⟨ - |H_TP|+⟩= -α⟨ 5325/2|⟨ 1|H_TP|0⟩|5325/2⟩
-α⟨ 4135/2|⟨ 0|H_TP|1⟩|4135/2⟩
Integrating by parts, we transform this to
⟨ - |H_TP|+⟩ = αη G/2√(2)m∫[ρ_532(r)+ρ_413(r)]δρ (r) dV ,
ρ_532(r) = ∂_z⟨ 532|σ_z|532⟩ ,
ρ_413(r) = ∂_z⟨ 413|σ_z|413⟩ .
Here δρ is the octupole density (<ref>). Note that the “spin densities”
ρ_532 and ρ_413 depend on r; the brackets ⟨ ..⟩ in the
definition of the densities in (<ref>) denote averaging over spin only.
Note also that the “spin densities” are T-odd. Therefore, the BCS factor
practically does not influence them.
Numerical evaluation of the integrals in (<ref>) with the Nilsson wave functions (<ref>) is straightforward; the result is
⟨ - |H_TP|+⟩ = αηβ_3 G/2√(2)m 3A/4π R_0^4 [ I_413+I_532]
The dimensionless integrals I_413 and I_532 are plotted in Fig.<ref> versus β_2.
At the physical deformation, β_2=0.29, Eq.(<ref>), the values are
I_413=-0.66 and I_532=-1.05. Hence we arrive at the following mixing
matrix element
⟨ - |H_TP|+⟩=-0.24 αηβ_3 eV .
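The coefficient in Eq. (<ref>) follows from the quoted values of I_413 and I_532 and the constants defined above. A minimal numerical check (assuming m ≈ 940 MeV, G = 1.03×10^-5/m^2, r_0 = 1.2 fm, and using ħc = 197.327 MeV fm for unit conversion):

```python
import numpy as np

hbar_c = 197.327              # MeV fm
m = 940.0                     # nucleon mass, MeV
A = 153
R0 = 1.2 * A**(1.0/3.0)       # fm
I413, I532 = -0.66, -1.05     # dimensionless integrals at beta_2 = 0.29

# G/(2 sqrt(2) m) in fm^3, with G = 1.03e-5/m^2 converted via (hbar c)^3
G_factor = 1.03e-5 / (2.0*np.sqrt(2.0) * m**3) * hbar_c**3

# <-|H_TP|+> per unit (alpha * eta * beta_3), converted from fm^-1 to eV
ME = G_factor * 3.0*A/(4.0*np.pi*R0**4) * (I413 + I532) * hbar_c * 1.0e6
print(ME)                     # ~ -0.24 eV, reproducing the coefficient above
```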
§ ELECTRIC DIPOLE MOMENT OF ^153EU NUCLEUS
We need to determine the signs in Eq.(<ref>) and Eq.(<ref>).
In our notation β_3 > 0 corresponds to the pear orientation with
respect to the z-axis shown in Fig.<ref>.
According to Refs. <cit.>, the protons are
shifted in the positive z-direction. Hence, d_z in Eq.(<ref>) is
positive. Therefore, using Eqs.(<ref>), we conclude that the sign in
Eq.(<ref>) is negative.
With Eqs.(<ref>) and (<ref>) we find the T,P-odd electric dipole moment
in the ground state,
d^TP_z = 2⟨+|d_z|-⟩⟨-|H_TP|+⟩/E_+-E_- = -0.59× 10^-6αβ_3η [e · fm]
= -1.18× 10^-8η [e· fm] .
For the numerical value we take α=0.20, see Eq.(<ref>), and
β_3=0.10, see Eq.(<ref>).
Eq. (<ref>) gives the EDM in the co-rotating reference frame. The EDM in
the laboratory reference frame is
d^TP=5/7d^TP_z = -0.84× 10^-8η [e · fm]
This EDM is comparable with that of a heavy spherical nucleus, see
Ref. <cit.>
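The numbers in the two equations above can be reproduced from the experimental E1 amplitude ≈ 0.12 e fm, the mixing matrix element of Eq. (<ref>), and the 5/2^- – 5/2^+ splitting. The sketch below takes the splitting to be 97.4 keV (the text quotes ≈ 100 keV; the precise value is an assumption of this check):

```python
d_exp = -0.12          # <+|d_z|->, e fm (sign fixed by the argument above)
h_tp = -0.24           # <-|H_TP|+> per unit (alpha * eta * beta_3), eV
dE = -97.4e3           # E_+ - E_-, eV (5/2- assumed ~97 keV above the 5/2+ ground state)
alpha, beta3 = 0.20, 0.10

coef = 2.0 * d_exp * h_tp / dE           # d_z^TP per unit (alpha beta_3 eta), e fm
print(coef)                              # ~ -0.59e-6
print(coef * alpha * beta3)              # ~ -1.18e-8, co-rotating frame
print(coef * alpha * beta3 * 5.0/7.0)    # ~ -0.84e-8, laboratory frame
```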
§ SCHIFF MOMENT OF ^153EU NUCLEUS
The operator of the Schiff moment (SM) reads <cit.>
Ŝ_z=1/10[∫ρ_q r^2 z dV -5/3r_q^2d_z]
It is a vector. Here ρ_q is the charge density and
r_q^2 ≈3/5R_0^2
is the rms electric charge radius squared.
With the static octupole deformation (<ref>) the 1st
term in (<ref>) is
S_intr=1/10∫ρ_q r^2 z dV =9/20√(35)πeZR_0^3β_2β_3
Here we use the same notation S_intr as that in
Refs.<cit.>.
The matrix element of the first term in (<ref>) between the states
(<ref>) is
⟨ +|Ŝ_1z|-⟩ = -2α S_intr = -α 9/10π√(35) eZR_0^3β_2β_3
Combining this with Eq.(<ref>) we find the expectation value over the
ground state
⟨ +|Ŝ_1z|+⟩ = 2⟨+|Ŝ_1z|-⟩⟨-|H_TP|+⟩/E_+-E_- = -0.24× 10^-6e Z R_0^3α^2β_2β_3^2η
Hence, the Schiff moment is
S_z = ⟨ +|Ŝ_z|+⟩ =⟨ +|Ŝ_1z|+⟩- 1/10R_0^2d_z^TP
= [-4.0× 10^-3α^2β_2β_3^2 +2.4× 10^-6αβ_3]η [e · fm^3]
= -4.16×10^-7η [e · fm^3]
For the final numerical value we take α=0.20, see Eq.(<ref>),
β_2=0.29, see Eq.(<ref>) and β_3=0.10, see Eq.(<ref>).
Note that the first term in the middle line of Eq.(<ref>) is proportional
to α^2β_3^2 and at the same time the second term is
proportional to αβ_3. This is because one power of αβ_3
is “hidden” in the experimental dipole matrix element (<ref>).
The second term is just about 10% of the first one.
Eq. (<ref>) gives the Schiff moment in the co-rotating reference frame.
The Schiff moment in the laboratory reference frame is
S=5/7S_z = -2.97× 10^-7η [e · fm^3]
This result is fairly reliable; the major uncertainty, about a factor of 2,
is due to the uncertainty in the value of β_3.
A more accurate analysis of inelastic electron scattering
data <cit.>, see Section V, can reduce the uncertainty.
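The Schiff-moment estimate can also be verified numerically from the same ingredients. A minimal sketch (same assumptions as in the checks above: r_0 = 1.2 fm, splitting ≈ 97.4 keV, mixing matrix element −0.24αβ_3η eV; all results are per unit η, and small differences from the quoted values come from rounding of intermediate coefficients):

```python
import numpy as np

Z, A = 63, 153
R0 = 1.2 * A**(1.0/3.0)                 # fm
alpha, beta2, beta3 = 0.20, 0.29, 0.10
h_tp = -0.24                            # <-|H_TP|+> per (alpha eta beta_3), eV
dE = -97.4e3                            # E_+ - E_-, eV (assumed level energy)
d_coef = -0.59e-6                       # d_z^TP per (alpha beta_3 eta), e fm

# First term: collective contribution via S_intr
S1_mix = -alpha * 9.0/(10.0*np.sqrt(35.0)*np.pi) * Z * R0**3 * beta2 * beta3  # <+|S_1z|->, e fm^3
S1 = 2.0 * S1_mix * (h_tp*alpha*beta3) / dE        # <+|S_1z|+>, e fm^3 per eta

# Second term: -(1/10) R0^2 d_z^TP
S2 = -0.1 * R0**2 * d_coef * alpha * beta3

print(S1, S2)               # ~ -4.6e-7 and ~ +4.9e-8 (opposite sign, ~10% in magnitude)
S_z = S1 + S2
print(S_z)                  # ~ -4.1e-7, close to the quoted -4.16e-7 (co-rotating frame)
print(5.0/7.0 * S_z)        # ~ -3.0e-7, close to the quoted -2.97e-7 (laboratory frame)
```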
In ^151Eu the energy splitting E_- - E_+ is 3.5 times larger than that
in ^153Eu, and the quadrupolar deformation is 2.5 times smaller.
Therefore, the Schiff moment is at least an order of magnitude smaller
than that of ^153Eu.
Unfortunately, there is not enough data for an accurate calculation for
^151Eu.
Another interesting deformed nucleus is ^237Np.
Performing a simple rescaling from our result for ^153Eu we get the
following estimate of the ^237Np Schiff moment,
S ∼-1.5× 10^-6η [e · fm^3]. This is 40 times larger than
the single particle estimate <cit.>.
Of course, following our method and using ^238U as a reference nucleus
(like the pair ^153Eu, ^152Sm in the present work) one can
perform an accurate calculation of the ^237Np Schiff moment.
Data for ^238U are available in Ref. <cit.>.
§ CONCLUSIONS
The Hamiltonian of the nuclear time- and parity-violating interaction is
defined by Eq.(<ref>). For the connection of the dimensionless
interaction constant η with the QCD axion θ-parameter see
Ref. <cit.>. The Hamiltonian (<ref>) leads to the Schiff moment
of a nucleus.
In the present work we have developed a new method to calculate the Schiff moment
of an even-odd deformed nucleus.
The method is essentially based on experimental data on magnetic moments and
E1,E3-amplitudes in the given even-odd nucleus and in adjacent even-even
nuclei. Unfortunately, such sets of data are not yet known for most of the
interesting nuclei.
Fortunately the full set of necessary data exists for ^153Eu.
Hence, using the new method, we perform the calculation for ^153Eu.
The result is given by Eq.(<ref>).
The theoretical uncertainty of this result, about a factor of 2, is mainly due to
the uncertainty in the value of the octupole deformation.
A more sophisticated analysis of available electron scattering
data can further reduce the uncertainty.
The Schiff moment (<ref>) is about 20-50 times larger than that in
heavy spherical nuclei <cit.> and it is
3 times larger than what Ref. <cit.>
calls a “conservative estimate”.
On the other hand, it is a factor of 30 smaller than the result of
Ref. <cit.> based on the model of static octupole deformation.
Using the calculated value of the Schiff moment we rescale the results of
Ref. <cit.> for the energy shift of the ^153Eu nuclear spin
and for the effective electric field in the EuCl_3· 6H_2O compound.
The result of the rescaling is
δ E_o= 0.9× 10^-9θ [eV]
E_o^*=0.3 MV/cm
These are figures of merit for the proposed <cit.>
Cosmic Axion Spin Precession Experiment with EuCl_3· 6H_2O.
§ ACKNOWLEDGEMENT
I am grateful to A. O. Sushkov for stimulating discussions and interest in this
work. This work has been supported by the Australian Research Council Centre
of Excellence in Future Low-Energy Electronics Technology (FLEET)
(CE170100039).
§ NILSSON WAVE FUNCTIONS
Parameters of the deformed oscillator potential used in Nilsson model are
ω_z=ω_0(1-2/3δ) , z_0=1/√(mω_z)
ω_ρ=ω_0(1+1/3δ) , ρ_0=1/√(mω_ρ)
ω_0=41MeV/A^1/3
where m≈ 940MeV is the nucleon mass.
The parameter δ is related to β_2 used in the main text
as
δ=3√(5)/4√(π)β_2≈ 0.946β_2 .
The oscillator wave functions defined in Ref. <cit.> are
z = z/z_0
|0⟩_z = 1/(√(π)z_0)^1/2 e^-z^2/2
|1⟩_z = √(2)/(√(π)z_0)^1/2 z e^-z^2/2
|2⟩_z = 1/(2√(π)z_0)^1/2 [2z^2-1] e^-z^2/2
|3⟩_z = 1/(3√(π)z_0)^1/2 z[2z^2-3] e^-z^2/2
ρ = ρ/ρ_0
|2,2⟩_ρ = 1/√(2π)ρ_0 ρ^2 e^-ρ^2/2 e^2iφ
|3,3⟩_ρ = 1/√(6π)ρ_0 ρ^3 e^-ρ^2/2 e^3iφ
|4,2⟩_ρ = 1/√(6π)ρ_0 ρ^2(ρ^2-3) e^-ρ^2/2 e^2iφ
|5,3⟩_ρ = 1/√(24π)ρ_0 ρ^3(ρ^2-4) e^-ρ^2/2 e^3iφ
The Nilsson wave functions for the quadrupolar deformation δ=0.3
written in the oscillator basis (<ref>) are <cit.>
|4135/2⟩ = 0.938|1⟩_z|3,3⟩_ρ|↓⟩ -0.342|2⟩_z|2,2⟩_ρ|↑⟩ + 0.054|0⟩_z|4,2⟩_ρ|↑⟩
|5325/2⟩ = 0.861|3⟩_z|2,2⟩_ρ|↑⟩ +0.397|2⟩_z|3,3⟩_ρ|↓⟩ + 0.310|1⟩_z|4,2⟩_ρ|↑⟩ +0.075|0⟩_z|5,3⟩_ρ|↓⟩
99
Ramsey1982
N. F. Ramsey, Ann. Rev. Nucl. Part. Sci. 32, 211 (1982).
Serebrov2015
A. P. Serebrov et al,
Physics of Particles and Nuclei Letters 12, 286 (2015).
Abel2020
C. Abel et al., Physical Review Letters 124, 081803 (2020), arXiv:2001.11966.
Sushkov1984
O. P. Sushkov, V. V. Flambaum, and I. B. Khriplovich, Sov. Phys. JETP 60, 873 (1984).
Schiff1963 L. I. Schiff, Physical Review 132, 2194 (1963).
Budker2014 D. Budker, P. W. Graham, M. Ledbetter, S. Rajendran,
and A. O. Sushkov, Phys. Rev. X 4, 21030 (2014).
Mukhamedjanov2005 T. N. Mukhamedjanov and O. P. Sushkov,
Physical Review A 72, 34501 (2005).
ASushkov2023a A. O. Sushkov, arXiv:2304.12105.
ASushkov2023
A. O. Sushkov, O. P. Sushkov, A. Yaresko,
Phys. Rev. A 107, 062823 (2023); arXiv:2304.08461.
Auerbach1996 N. Auerbach, V. V. Flambaum, and V. Spevak,
Physical Review Letters, 76, 4316 (1996).
Flambaum2020 V. V. Flambaum and H. Feldmeier.
Phys. Rev. C 101, 015502 (2020).
Firestone1999
Richard B Firestone. Table of Isotopes. Ed. by S Y Frank Chu and Coral M Baglin. 1999.
LL4
Quantum Electrodynamics: Volume 4 (Course of Theoretical Physics) 2nd Edition
by V B Berestetskii, E.M. Lifshitz, and L. P. Pitaevskii.
LL3
Quantum Mechanics: Non-Relativistic Theory 3rd Edition
by L. D. Landau, E. M. Lifshitz.
Lamm1969
I. L. Lamm, Nuclear Physics A 125, 504 (1969).
Kemah2022 E. Kemah, E. Tabar, H. Yakut, G. Hosgor
https://dergipark.org.tr/en/pub/saufenbilder/article/1123474
BohrMottelson
Aage Bohr and Ben R. Mottelson. Nuclear Structure. World Scientific, 1998
Sushkov1993 O. P. Sushkov and V. B. Telitsin,
Phys. Rev. C 48, 1069 (1993).
Leander1986
G. A. Leander, W. Nazarewicz, G. F. Bertsch, and J.
Dudek, Nucl. Phys. A 453, 58 (1986).
Dorso1986
C. O. Dorso, W. D. Myers, and W. J. Swiatecki, Nucl.
Phys. A 451, 189 (1986).
Butler1991 P. A. Butler and W. Nazarewicz, Nucl. Phys. A 533,
249 (1991).
Bertozzi1972
W. Bertozzi, T. Cooper, N. Ensslin, J. Heisenberg, S. Kowalski, M. Mills,
W. Turchinetz, C. Williamson, S. P. Fivozinsky, J. W. Lightbody, Jr., and S.
Penner, Phys. Rev. Lett. 28, 1711 (1972).
Hirsch1978
A. Hirsch, C. Creswell, W. Bertozzi, J. Heisenberg, M. V. Hynes, S. Kowalski,
H. Miska, B. Norum, F. N. Had, C. P. Sargent, T. Sasanuma, and W. Turchinetz,
Phys. Rev. Lett. 40, 632 (1978).
Ebata2017 S. Ebata and T. Nakatsukasa
Physica Scripta 92, 064005 (2017).
Zhang2010 W. Zhang, Z. P. Li, S. Q. Zhang, and J. Meng,
Phys. Rev. C 81, 034302 (2010).
Spevak1997
V. Spevak, N. Auerbach, and V. V. Flambaum. Phys. Rev. C 56, 1357 (1997).
|
http://arxiv.org/abs/2307.04622v1 | 20230710150832 | Correlations between QPO frequencies and spectral parameters of GRS 1915+105 using AstroSat observations | [
"Ruchika Dhaka",
"Ranjeev Misra",
"JS Yadav",
"Pankaj Jain"
] | astro-ph.HE | [
"astro-ph.HE"
] |
Correlations between QPO frequencies and spectral parameters of GRS 1915+105 using AstroSat observations
Ruchika Dhaka, Ranjeev Misra, JS Yadav, Pankaj Jain
=========================================================================================================
In this work, we study the correlation between the Quasi-periodic Oscillation (QPO) frequency and the spectral parameters during various X-ray states in the black hole binary GRS 1915+105, which matches well with the predicted relativistic dynamical frequency (i.e. the inverse of the sound crossing time) at the truncated radii. We have used broadband data of the LAXPC and SXT instruments onboard AstroSat. Spectral fitting shows that the accretion rate varies from ∼ 0.1 to ∼ 5.0 × 10^18 gm/s and the truncated radius changes from the last stable orbit of an almost maximally spinning black hole, ∼ 1.2, to ∼ 19 gravitational radii. For this wide range, the frequencies of the C-type QPOs (2 - 6 Hz) follow the trend predicted by the relativistic dynamical frequency model and, interestingly, the high-frequency QPO at ∼ 70 Hz also follows the same trend, suggesting that it originates from the innermost stable circular orbit through the same mechanism as the more commonly observed C-type QPOs. While the qualitative trend is as predicted, there are quantitative deviations between the data and the theory, and the possible reasons for these deviations are discussed.
accretion, accretion discs - black hole physics - stars: black holes - X-rays: binaries - relativistic processes
§ INTRODUCTION
The Black Hole X-ray Binary (BHXB) GRS 1915+105 was discovered on August 15, 1992, as a transient by the WATCH All-sky monitor onboard the Granat observatory. It was the first galactic object to show a superluminal jet <cit.>. The binary system contains a black hole of 12.4 solar masses <cit.>. This source is located at a distance D = 8.6 kpc <cit.> and its relativistic jets are directed at an angle i=70^∘ from the line of sight <cit.>. It is an outstanding source because of its huge variability <cit.>. This source has been observed in 14 different X-ray classes, based on its X-ray flux, Color-Color Diagram (CCD) and hardness ratio <cit.>. Some of these classes are named ϕ, χ, θ, λ, ρ, etc. Among all 14 classes, the most frequently observed class is χ. The χ class is the least variable class, and no large amplitude or long-term X-ray flux variability is observed in it. Most of the time since its discovery in 1992, GRS 1915+105 has been seen in bright X-ray states like the High Soft (HS) state and the High HIMS state (also called the Steep Power Law (SPL) state). This source has been in a decline phase since 2018 (the lower branch of the HIMS and the Low Hard State (LS)).
X-ray binaries exhibit variability on rapid time scales. Fourier analysis is often used to study fast variability and quasi-periodic oscillations (QPOs) by computing power-density spectra (PDS)<cit.>.
Numerous patterns have been observed in the PDS <cit.>, ranging from various types of broad-band noise to much narrower structures known as QPOs. These appear as sharp peaks in the power spectrum. QPOs with frequencies ranging from a few mHz to ∼70 Hz have been observed for the source GRS 1915+105 <cit.>.
The centroid frequencies of these QPOs during specific spectral states and transitions can be associated with physical processes occurring in these systems.
Typically, there are two types of QPOs. Low-frequency QPOs have a centroid frequency ≲ 30 Hz, whereas high-frequency QPOs have a centroid frequency ≳ 60 Hz (up to a few hundred hertz) <cit.>. Low-frequency QPOs are further subdivided into A, B, and C-type QPOs based on differences in power spectral properties and phase lag behavior, and they occur in various spectral states <cit.>.
However, the precise physical origin of QPOs in BHXBs is so far not well understood.
<cit.> have studied the dependence of QPO frequency f on the inner radius r of the truncated accretion disk. They found that f/Ṁ is well correlated with r, where Ṁ is the accretion rate. Remarkably, the relationship between the two is well described in terms of dynamical frequency arising due to normal modes of disk oscillations <cit.>.
The dynamical frequency is defined as the inverse of the sound crossing time (f_dyn∼ c_s(r)/r). The sound crossing time is the ratio of the truncation radius and the sound speed at the inner disc. According to the standard relativistic disc model proposed by <cit.>, the sound speed is dependent on several factors, including the mass accretion rate (Ṁ), spin, and inner radius (r) of the disc. This leads to the following formula for the dynamical frequency <cit.>:
f_dyn/Ṁ = N 8979 Hz (r/r_g)^-2.5(M/12.4 M_⊙)^-2× A^1 B^-2 D^-0.5 E^-0.5 L
where r_g = GM/c^2 is the gravitational radius, and r is the inner disc radii, N is a normalisation factor to take into account the assumptions made in the standard accretion disc theory.
The parameters A, B, D, E, and L are functions of the inner disc radii and the spin parameter described in <cit.> and <cit.>. All these parameters are important for small radii, r < 10 r_g. As a result, in this regime, the functional form of f_dyn considerably differs from its Newtonian dependence. Using spectral and timing analysis, one can determine the mass accretion rate, inner disc radii, and QPO frequency. Thus, the interpretation, and in particular Eqn <ref> can be verified with such an analysis. <cit.> did such an analysis using AstroSat observation data collected on 28 March 2016 and 1 April 2017 when GRS 1915+105 was in the low HIMS state (i.e., the lower horizontal track of HIMS). The source showed C-type QPOs in the frequency range of 3.5–5.4 Hz during the observation. A similar analysis was undertaken for Insight-HXMT observations of GRS 1915+105 when it exhibited low-frequency C-type QPOs <cit.>. For a wider range of QPO frequency, 2.6-4.3 Hz, and inferred accretion rate of 0.2-1.2× 10^18gm/s, they confirmed the results obtained by <cit.>.
Apart from these C-type QPOs, GRS 1915+105 also shows a QPO at ∼ 69 Hz, which is remarkable in having a nearly constant frequency <cit.>. This QPO has also been reported for AstroSat data, where it varied slightly from 67.4 to 72.3 Hz <cit.>.
In this paper, we perform an extensive spectro-temporal analysis of various X-ray states observed in GRS 1915+105 using AstroSat data. In GRS 1915+105, only one outburst (which started in 1992 and is still continuing) has been observed so far. GRS 1915+105 has never been seen in the rising phase of an outburst. Our data include a low hard state (Obs. 7), which has never been reported before. The motivation here is to study the QPO frequency dependence on spectral parameters covering a wider range of inner disc radii, accretion rates and QPO frequencies.
In Section <ref> of this work, we describe observations and data reduction techniques using the LAXPC and SXT pipeline software. In Section <ref>, we explain the various analytical and modelling techniques used to analyse the temporal and spectral features of GRS 1915+105. In Section <ref> of the paper, we describe the outcomes of the study and draw conclusions based on those results.
§ OBSERVATION AND DATA REDUCTION
AstroSat is a multi-wavelength observatory launched for astronomical studies of various celestial objects in near and far UV, soft (0.3-80 keV) and hard (3-100 keV) X-rays <cit.>. It has four science payloads: 1) Soft X-ray Telescope (SXT) <cit.>, 2) Ultra-Violet Imaging Telescope (UVIT) <cit.>, 3) Cadmium Zinc Telluride Imager (CZTI) <cit.> and 4) the Large Area X-ray Proportional Counter (LAXPC) <cit.>. Large Area X-ray Proportional Counters (LAXPC) consist of three identical but independent PCUs (LAXPC 10, LAXPC20 and LAXPC30) with an effective area of 6000 cm^2 at 15 keV and has a time resolution of 10μs in the energy range 3.0-80.0 keV with the dead-time of about 42 μs <cit.>.
A simultaneous fit of SXT data along with LAXPC data provides a broadband spectrum of the source. We have analysed various observations with simultaneous data from SXT and LAXPC spanning over 1094 days starting from 3 March 2016. Out of all the AstroSat observations that we looked into, we picked out the ones that showed the presence of QPOs in their power density spectrum.
In our study, we have included only those observations when the source flux is more or less steady. GRS 1915+105 often shows strong flares when the flux can change by a factor of a few <cit.>. Such flaring situations are not included in this study.
All transient black hole binary outbursts should follow a q-diagram. GRS 1915+105 has shown only one outburst so far, starting with its discovery on 15th August 1992 and not yet over, i.e. lasting for approximately 31 years. The rising phase of the outburst in GRS 1915+105 was never observed. Our observations cover the period from 2016 to 2019 when the source remained mostly in luminous X-ray states. Thus our observations trace only part of the q-diagram, mostly the vertical left and bottom horizontal branches, and only partly when QPOs are present. Its variability is complex as the source stays in highly luminous X-ray states most of the time. We selected seven observations of four distinct states: the High Soft (HS) state, the Low HIMS state, the High HIMS state, and the Low Hard (LS) state.
The data used in this work consists of 7 different observations made on 3 March 2016 (Obs. 1), 25 April 2016 (Obs. 2), 27 April 2016 (Obs. 3), 28 March 2017 (Obs. 4), 1 April 2017 (Obs. 5), 15 April 2017 (Obs. 6), and 21 March 2019 (Obs. 7). Table <ref> presents the effective exposure time of LAXPC and SXT of the observations used in this study. The Burst Alert Telescope (SWIFT/BAT) Hard X-ray Transient Monitor and the Monitor of All-sky X-ray Imaging (MAXI) provide continuous coverage of GRS 1915+105 in soft and hard X-rays. To see the evolution of the source, we extract the MAXI flux in the energy range of 2–20 keV and the SWIFT/BAT flux in the energy range of 15–50 keV, as shown in Fig. <ref>. The SWIFT/BAT flux is scaled by 30 so that both X-ray band light curves of GRS 1915+105 starting from 13 January 2016 to 27 April 2019 can be seen clearly. The vertical lines in the figure represent AstroSat observations of the GRS 1915+105 source used for this study. The sequence of vertical lines in the light curve shown in Fig. <ref>
is identical to that presented in Table <ref>. Each observation was further divided into segments such that each segment was continuous without gaps.
The HID of GRS 1915+105, covering the period from 13 January 2016 (MJD 57400) to 27 April 2019 (MJD 58600), is illustrated in Fig.
<ref>, where the 2–20 keV MAXI flux is plotted against the X-ray colour (HR). The location of the source in the HID diagram broadly reflects the state of the system. Also marked in Fig. <ref> are the locations of the AstroSat observations. Obs. 2 and 3 correspond to the soft state, while the high flux of Obs. 1 shows that it is in the Hard Intermediate state (High HIMS). On the other hand, Obs. 4, 5 and 6 correspond to
the Low HIMS state. The data from Obs. 7 represents the Low Hard
state of the source.
§.§ SXT Data Reduction
Level 1 photon counting mode data of the SXT instrument were processed through the official
SXT pipeline AS1SXTLevel2 - 1.4b[https://www.tifr.res.in/ astrosat_sxt/sxtpipeline.html] to produce Level 2 mode data. The Photon
Counting mode (PC mode) data were chosen for the analysis of all sets of observations
listed in Table <ref>.
Using the Julia-based SXTevtmerger script[https://www.tifr.res.in/ astrosat_sxt/dataanalysis.html], we merged all the events belonging to
one set of observations into a single event file. The HEASoft (version 6.29) tool XSELECT was used to generate the spectrum, light curves and images. The response matrix file (RMF) “sxt_pc_mat_g0to12_RM.rmf”, standard background spectrum “SkyBkg_comb_EL3p5_Cl_Rd16p0_v01.pha” and ancillary response file (ARF) "sxt_pc_excl00_v04_20190608_mod_16oct21.arf" were used for the analysis. The sxtARFmodule[https://www.tifr.res.in/ astrosat_sxt/sxtpipeline.html] provided by the SXT
instrument team was used to apply a correction for offset pointing. In order to implement simultaneous analysis, we ensured that the LAXPC 20 observations were available at the same Good Time Interval (GTI) as the SXT observation. Therefore, we
used the simultaneous data segments to generate light curves, images and spectrum of
GRS1915+105.
For the Obs. 4, Obs. 5, Obs. 6 and Obs. 7 observations (low X-ray flux states), there was no pile-up near the centre of the image due to the low flux (<40 counts per second, as mentioned in the AstroSat Handbook[https://www.iucaa.in/ astrosat/AstroSat_handbook.pdf]). The average count rate in Obs. 1, Obs. 2 and Obs. 3 was 91.33 counts/sec, 84.25 counts/sec, and 90.00 counts/sec, respectively. Therefore, to account for the pile-up effect at the centre of the image caused by the high flux rate (∼ 1 Crab) of the source in the charge-coupled device (CCD), the inner radius of the circular annulus region was set to 2 arcmin.
§.§ LAXPC Data Reduction
Level 2 event files were extracted from Level 1 event mode data utilising the official LAXPC software version released on 04 Aug 2020[http://astrosat-ssc.iucaa.in/laxpcData].
LAXPC data were extracted to obtain the light curve and spectrum of the source <cit.>.
Details of the response matrix (RMF) and background spectrum generation for
proportional counters 10, 20, and 30, respectively, can be found in <cit.>.
Out of the three LAXPC detectors (LAXPC 10, LAXPC 20 and LAXPC 30), we used only LAXPC 20
data for the energy spectral studies for all of the observations given in Table <ref>.
§ DATA ANALYSIS
§.§ X-ray lighcurve and Timing Analysis
We have produced Background-subtracted light curves for four distinct observation types in the 4.0-50 keV energy range using LAXPC 20 data for the minimum time resolution of the SXT, which is 2.378 seconds. The left panel of Fig. <ref> shows 800 sec long Background-Subtracted light curve for HS state (Obs. 3), SPL state (Obs. 1), low HIMS (Obs. 4), and LH state (Obs 7).
The right panel of Fig.<ref> shows 800 sec SXT light curves in the 0.3-8 keV energy range for the identical segments used to generate the LAXPC 20 Background-Subtracted lightcurves in the left panel.
In order to study the properties of QPOs, we analyse the data in the frequency regime by generating a Power Density Spectrum (PDS). The PDS were generated by dividing the lightcurve of each segment into parts and averaging the power spectra of each part. We used all three LAXPC detector units (LAXPC 10, LAXPC 20, and LAXPC 30) to plot the PDS for the HS state (Obs. 3, Seg. 2). To plot the PDS for the rest of the observations, we used the LAXPC 20 unit. The PDS for the HS state is shown in the upper left panel of Fig. <ref> in the frequency range 10-110 Hz and is modelled using several Lorentzian functions <cit.> and a
power-law component in order to account for the very low frequency noise (VLFN). It shows an HFQPO at ∼ 70 Hz, while no QPO is seen in the lower frequency region.
Fig. <ref>, the upper right panel, shows the PDS of the low HIMS state (Obs. 4, Seg. 6) in the frequency range 0.1-20 Hz. The lower panels of Fig. <ref> show the PDS for the SPL state (left panel) and the LH state (right panel) for the Obs. 1 (Seg. 5) and Obs. 7 (Seg. 7), respectively.
The component of broad-band noise related to these three PDS (Obs. 4, Obs. 1, and Obs. 7) was modelled using only a few Lorentzians. The frequencies of the QPOs, along with errors, have been estimated and are tabulated in the third column of Table <ref>. All three panels show LFQPOs along with their harmonics.
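For reference, an averaged power density spectrum of the type described above can be built from an evenly sampled light curve with a few lines of numpy. The sketch below is generic and illustrative (the actual analysis used the LAXPC software tools; the Leahy-like normalisation and the fake 5 Hz signal are assumptions of this example):

```python
import numpy as np

def average_pds(counts, dt, seg_len):
    """Average periodogram over segments of seg_len bins (Leahy-like normalisation)."""
    n_seg = len(counts) // seg_len
    powers = []
    for i in range(n_seg):
        seg = counts[i*seg_len:(i+1)*seg_len]
        ft = np.fft.rfft(seg)
        powers.append(2.0 * np.abs(ft)**2 / seg.sum())   # P = 2|FT|^2 / N_photons
    freq = np.fft.rfftfreq(seg_len, d=dt)
    return freq[1:], np.mean(powers, axis=0)[1:]          # drop the zero-frequency bin

# Toy light curve: Poisson counts with a 10% rms sinusoidal 5 Hz modulation
dt, n = 0.01, 2**16
t = np.arange(n) * dt
rate = 500.0 * (1.0 + 0.1*np.sin(2*np.pi*5.0*t))          # counts/s
counts = np.random.poisson(rate * dt)
f, p = average_pds(counts, dt, 4096)
print(f[np.argmax(p)])    # the strongest power appears near 5 Hz
```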
§.§ Spectral Analysis
We have performed a simultaneous spectral fitting of SXT and LAXPC20 spectra using
XSPEC 12.12.0 in the broad energy range 1–50 keV (SXT: 1-5 keV and LAXPC20: 4-50
keV) for observation sets 4, 6 and 7 listed in Table <ref>. The high energy range above 50.0 keV has been ignored because of the low S/N (signal-to-noise) ratio. For the rest of the observation sets, we have used the combined SXT and LAXPC
energy range 1.0-20.0 keV; during these observations the source spectrum is soft and the signal-to-noise ratio deteriorates rapidly above 20 keV. Lower energies below 1 keV were not considered in all the observations due to uncertainties in the effective area and response of the SXT. The left panels of Fig. <ref> display the energy spectra of the HS state (Obs. 3, Seg. 2) in the top panel and the SPL state (Obs. 1) in the bottom panel, respectively, covering an energy range of 1-20 keV. The low HIMS and LH state spectra for Obs. 4 (Seg. 6) and Obs. 7 (Seg. 6), respectively, are shown in the top right and bottom right panels of Fig. <ref> in the energy range of 1-50 keV. A relative normalisation constant was used for the simultaneous fitting of LAXPC and SXT data.
As recommended by the LAXPC team, the 3% systematic error was incorporated for uncertainties in background estimation when fitting LAXPC and SXT data together <cit.>.
A gain correction was applied to the SXT data using the gain fit in XSPEC with slope fixed to 1, and the best-fit offset value was found to range from 0 to 35.68 eV.
SXT data were grouped with the ftgrouppha[https://heasarc.gsfc.nasa.gov/lheasoft/ftools/headas/ftgrouppha.html] tool of Ftools[https://heasarc.gsfc.nasa.gov/ftools/]. There are several
ways of binning the input pha file data; we have performed the optimal binning
using the ftgrouppha tool. The spectrum was fitted using a combination of models,
Constant*tbabs (kerrdisk+simpl*kerrd). The absorption by the Inter-Stellar Medium (ISM)
was taken into account with the TBabs model <cit.> implemented with the galactic absorption abundance. The hydrogen column density was kept fixed at 4 × 10^22cm^-2 for data sets of HIMS, SPL and LH states listed in Table <ref>, as there was no significant difference in the best-fit while keeping this parameter free <cit.>. N_h was
kept free for HS state data set and was found to vary from 4.47 × 10^22cm^-2 to 4.65 × 10^22cm^-2. The convolution model of comptonization “simpl” <cit.> was used to take into account the Comptonization of the disk photons in the inner flow. The simpl model processes any input spectrum and transforms a fraction f_sc of the source photons into a power-law distribution.
The inner radius of the disk and the mass accretion rate were estimated from the best-fit values obtained from the relativistic disk model “kerrd” <cit.>. The black hole mass, disk inclination angle, and distance to the source were fixed to 12.4M_⊙, 60^∘, and 8.6 kpc, respectively <cit.>. The spectral hardening factor of kerrd was fixed to
1.7 <cit.>. For the kerrdisk model, the emissivity index for both the inner and the outer portions of the disk was fixed at 1.8 <cit.>. The rest-frame energy of the iron line was set at 6.4 keV <cit.>. As GRS 1915+105 is a highly spinning
black hole, we set the spin parameter for “kerrdisk” at 0.98 <cit.>. Keeping these parameters free does not significantly affect the best-fit values of the other parameters. The break radius separating the inner and outer parts of the disk was fixed at 6 r_g (gravitational radii). The radius parameter in kerrd is measured in units of the gravitational radius (r_g), while for kerrdisk it is in units of the radius of marginal stability or innermost stable circular orbit (ISCO). Therefore, the inner radius in kerrdisk was normalised to that used for kerrd by dividing by a factor of 1.235. The fraction scatter parameter in the data from 3 March 2016 was not constrained; therefore, we set it to 0.6. For the HS state observation, the gamma and iron-line flux parameters were not constrained; thus, we set them to 4.5 and 1 × 10^-2 photons cm^-2 s^-1, respectively. Table <ref> presents the best-fit values of the spectral parameters, including the absorption column density, inner disk radius, accretion rate, scattered fraction, photon index (gamma), and flux in the iron emission line.
§ RESULTS
An overview of the observations used in this work which includes the date of observation, X-ray flux, hardness ratio, X-ray state, QPO frequency, accretion rate, and the inner disk radius, is given in Table <ref>.
The X-ray flux observed in the LAXPC20 detector is presented in Column 2 of Table <ref>. The value of HR2 is shown in column 3, where HR2 is defined as the ratio of X-ray flux in the 13–60 keV to the 3-5 keV energy range. We observe that the hardness ratio continuously decreases as the source moves from the Low Hard (LH) state to the HS state via the SPL state and the low HIMS state. The accretion rate, shown in column 6 of Table <ref>, generally increases as energy spectra become softer. The accretion rate is highest during the SPL state and lowest during the LH state.
Columns 5 and 7 of Table <ref> list the range of QPO frequencies and the inner radii of the truncated disc for different observations.
Fig. <ref> shows the variation of QPO frequency with accretion rate (top left panel), with inner disc radius (top right panel), and the variation of accretion rate with the inner disc radius (bottom panel). While for some of the individual data sets (i.e. for observations taken during a particular spectral state, such as Obs. 4 and Obs. 1), correlations between these parameters are evident, there is, in general, no correlation seen when all the observations are considered.
Next, we consider the possibility that the QPO frequency may depend both on the accretion rate and on the inner disc radius, in particular in the form suggested by Equation <ref>, i.e. the QPO frequency divided by the accretion rate depends on the inner disc radius, as was suggested by <cit.>. This is illustrated in Fig. <ref>, where the QPO frequency divided by the mass accretion rate is plotted against the inner radius of the accretion disc. In this case, a clear trend is visible for all the observations. The solid violet line in Fig. <ref> represents the best-fitted standard accretion disc model for Low Frequency QPOs (LFQPOs) with spin parameter 0.97 and normalisation constant 0.01 (earlier work; <cit.>, who used only low HIMS data). For all the data sets, we find that the relationship is consistent with that predicted by the dynamical frequency model (given in Equation 1 with a=0.999 and N=0.1). This is shown by the solid black line in Fig. 8. Note that the high spin value is already implied by the small inner radii of ∼ 1.2 R_g obtained from the spectral fitting. This work extends the earlier results to different spectral states and covers a large variation in accretion rate from 0.1 × 10^18gm/s to 5.0 × 10^18gm/s, with the truncated radius changing from the last stable orbit of a maximally spinning black hole, ∼ 1.2, to ∼ 19 gravitational radii. For this wide range, the frequencies of the C-type QPOs follow the trend predicted by the relativistic model and, interestingly, the high frequency QPO at ∼ 70 Hz (which is an obvious outlier in the top panels of Fig. <ref>) also follows the same trend, suggesting a common origin. While the qualitative trend is as predicted, there are quantitative deviations, which we discuss in the next section.
We have so far studied the QPO frequency divided by Ṁ as a function of the inner disc radius, based on the interpretation that the QPO frequency is the dynamical one given by Equation <ref>. To generalise, we define a variable Y = QPO freq./(Ṁ^p) and check whether values of p other than unity would also represent the data by checking if Y is correlated with the inner disc radius. The absolute magnitude of the Spearman rank correlation has a maximum of 0.99 for p ranging between 0.8 and 1.2. The variation of the Spearman rank correlation with p is plotted in Fig. <ref>. This figure shows that the correlation does not change significantly for p values within 0.8 to 1.2.
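The scan over p can be reproduced with scipy. The sketch below uses synthetic (f_QPO, Ṁ, R_in) triplets drawn to mimic the observed ranges rather than the actual values in Table <ref>; it only illustrates the procedure:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

# Synthetic stand-ins for the measured quantities (illustrative only):
# inner radii ~1.2-19 R_g, accretion rates ~0.1-5 x 10^18 g/s, and a QPO
# frequency roughly proportional to Mdot times a decreasing function of R.
R_in = rng.uniform(1.2, 19.0, 25)                              # R_g
Mdot = rng.uniform(0.1, 5.0, 25)                               # 10^18 g/s
f_qpo = 30.0 * Mdot * R_in**-2.5 * rng.normal(1.0, 0.1, 25)    # Hz

p_grid = np.arange(0.5, 1.55, 0.05)
rho = [abs(spearmanr(f_qpo / Mdot**p, R_in)[0]) for p in p_grid]
print(p_grid[int(np.argmax(rho))])   # the best p comes out close to 1 for this toy data
```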
§ DISCUSSION
In order to put the results of this work into perspective, it is necessary first to enumerate the various possible reasons why the data points in Fig. <ref> show some deviations from the predicted values. It has been assumed that the colour factor f is a constant, =1.7. The colour factor depends on the local vertical radiative transfer in the disc and has been numerically obtained to be approximately 1.7 by <cit.> for black hole binaries. The radiative transfer depends on the vertical structure of the disc and on the fairly uncertain viscous energy dissipation as a function of height. Moreover, a corona on top of the disc and irradiation will also affect the colour factor. The effect of changing the colour factor is more prominent for observations with a larger inner truncated disc radius. For example, if the colour factor is increased to 2, the mass accretion rates and the inner radii of the accretion disk change only slightly for the soft state data collected on 25 April 2016 and 27 April 2016, i.e. the mass accretion rate changes from 1.95 ^+0.06_-0.02 to 1.93 ^+0.10_-0.048× 10^18 g/sec and the inner radius changes from 1.40 ^+0.42_-0.15 to 1.32 ^+0.62_-0.08 R_g. On the other hand, for the Low HIMS (15 Apr 2017), the accretion rate changes from 0.74 ^+0.07_-0.06 to 2.4 ^+0.3_-0.2× 10^18 g/sec while the inner radius changes from 4.6^+0.3_-0.3 to 9.6^+1.0_-0.3 R_g. An increase in the colour factor results in an increase in the accretion rate and inner radii, making the HIMS points (Obs. 4, 5, 6) in Fig. <ref> move to the right and downwards. We have verified that if the colour factor is changed to 2, the predicted curve still matches the data points, but
the normalisation factor increases from 0.1 to 0.15. Note that we have also assumed that the colour factor is independent of the accretion rate and radii, which may not be the case. Some of the deviations of the data points from the predicted values could be due to such a dependence.
It should be emphasised that the theoretical formula for the dynamical frequency (Equation <ref>) is an order of magnitude estimate, the uncertainty of which is parameterised by the normalisation factor N. Thus, one may expect N to vary not only for different observations (with different accretion rates and inner disc radii) but also to vary with radius, leading to deviations when the data is compared with a constant N prediction. The theoretical prediction is based on the standard accretion disc, where the disc extends to the last stable orbit and is not truncated. The sound speed at a radius may differ when the disc is truncated at that radius compared to when it is not, and this difference may be a function of the accretion rate and radius. A related issue is the assumption of standard accretion disc theory that the viscous dissipation goes to zero at the last stable orbit, which is incorporated both in the form of Equation <ref> and in the spectral model kerrbb used in this work. This assumption forces the temperature (and hence the sound speed) to go to zero at the last stable orbit. However, this assumption may not correctly describe the system, and instead, the accretion flow should necessarily pass through a sonic point, which leads to deviations from the standard theory near the last stable orbit <cit.>. Apart from these theoretical considerations, another potential reason for the deviation between the data and the predicted values is that the source may not be in the steady state and may be in a variable state. Out of seven observations used in this work, the source
shows significant short-time variability (on the hour/orbital time scale) during three observations (3rd March 2016, 28th March 2017 and 1st April 2017; Obs. 1, 4 & 5) <cit.>, as reflected in Table 2. During these observations, the values of the QPO frequency, inner disk radius and photon index clearly show a trend with time (for different orbits). Thus, the spectra averaged over the whole observation may not provide accurate values of the accretion rate and inner disc radius. Moreover, when the system is dynamic, it may not be correct to model the time-averaged spectra with a steady-state one, as assumed when we have used a disc model like kerrbb. These three data sets show the largest deviations from the theory, as seen in Figure <ref>, since the disk was not in a steady state. The 15th April 2017 (Obs. 6) data support this argument. These observation data do not show any trend with time/orbit and fall in the middle of the points of Obs. 4 & 5 in Figure <ref> with little deviation (also see Table <ref>).
Given all the above-listed possibilities, which may cause the data points not to follow the theoretical predictions accurately, it is quite remarkable that the overall predicted trend is seen, for such a wide range of accretion rates, inner disc radii and QPO frequency. Indeed, as mentioned earlier, the general trend that for an empirical form of Y = f_QPO/Ṁ^p, the best anti-correlation with radii is obtained for p ∼ 1, indicates that the QPO frequency can be identified with dynamical one. It is also remarkable that the high frequency QPO at ∼ 70 Hz also follows the trend of the low frequency ones and the explanation for the observed high frequency is that for the high frequency QPO, the accretion rate is significantly higher and the inner radius close to the last stable orbit.
Interpreting the QPO frequency as the dynamical one is an alternative explanation to the model where the QPO is due to the precession of the inner flow at the Lense-Thirring frequency <cit.>. In that interpretation, the QPO frequency is expected to be a function only of the truncation radius and not of the accretion rate. Moreover, there is some evidence that the energy dependent properties of some of the QPOs vary with the inclination angle of the binary <cit.>, which would be more naturally explained by a precessing inner flow. At present, this evidence is limited to a few sources due to the difficulty in estimating the inclination angle and the energy dependent QPO properties. A more detailed theoretical analysis of the predicted inclination dependence of these two interpretations, along with better data, would be able to differentiate between them. Note that in the interpretation used in this work, the QPO frequency is not expected to depend on the inclination angle of the disc.
The wide band spectral and rapid temporal capabilities of AstroSat and Insight-HXMT have shown that the frequencies of the C-type QPOs of GRS 1915+105 can be identified with general relativistic dynamical ones. In this work, we extend the results using AstroSat to a broader range of accretion rates and inner radii and have shown that the high frequency QPO may also be of a similar origin. The work needs to be extended to other observations of GRS 1915+105 and other black hole systems. Apart from AstroSat and Insight-HXMT observations, such work can also be done with NICER, with perhaps high energy spectral coverage from simultaneous NuSTAR data. Such a systematic and multi-observatory study will give a clearer picture of the origin of the QPO phenomenon in black hole systems.
§ ACKNOWLEDGEMENTS
The authors would like to thank the anonymous reviewer for his or her insightful remarks and suggestions that considerably enhanced the quality of the manuscript.
This work has used data from the Soft X-ray Telescope (SXT) developed at TIFR, Mumbai. The SXT POC at TIFR is acknowledged for verifying and releasing the data through the Indian Space Science Data Centre (ISSDC) and providing the required software tools.
We would also like to thank the LAXPC POC and SXT POC teams for their support. In addition, this study utilised the Monitor of All-sky X-ray Image (MAXI) and SWIFT/BAT data provided by the MAXI and BAT teams.
This research has used the software provided by the High Energy Astrophysics Science Archive Research Center (HEASARC), a service of the Astrophysics Science Division at NASA.
§ DATA AVAILABILITY
The software and packages utilised for data analysis are available at NASA’s HEASARC website (<https://heasarc.gsfc.nasa.gov/docs/software/heasoft/patch.html>). The data used in this article are available at the AstroSat-ISSDC website
(<https://astrobrowse.issdc.gov.in/astro_archive/archive/Home.jsp>), the MAXI website (<http://maxi.riken.jp/top/index.html>) and, for the Swift/BAT observations, NASA’s SWIFT website
(<https://swift.gsfc.nasa.gov/results/transients/>).
§ APPENDIX
The relativistic correction parameters A, B, D, E, and L <cit.> which have been used to derive equation 1 are as follows.
A=1+a_*^2x^-4+2a_*^2x^-6
B=1+a_*x^-3
D=1-2x^-2+a_*^2x^-4
E=1+4a_*^2x^-4-4a_*^2x^-6+3a_*^4x^-8
L=3/2M 1/x^2(x^3-3x+2a_*)[x-x_0-3/2a_*ln(x/x_0)
-3(x_1-a_*)^2/x_1(x_1-x_2)(x_1-x_3)ln(x-x_1/x_0-x_1)
-3(x_2-a_*)^2/x_2(x_2-x_1)(x_2-x_3)ln(x-x_2/x_0-x_2)
-3(x_3-a_*)^2/x_3(x_3-x_1)(x_3-x_2)ln(x-x_3/x_0-x_3)]
where x=√(r/M) and a_* is the spin parameter in parameters A, B, D, E, L. Here,
x_1=2cos(1/3cos^-1a_*-π/3)
x_2=2cos(1/3cos^-1a_*+π/3)
x_3=-2cos(1/3cos^-1a_*)
x_0={3+Z_2-sgn(a_*)[(3-Z_1)(3+Z_1+2Z_2)]^1/2}^1/2
where Z_1=1+(1-a_*^2)^1/3[(1+a_*)^1/3+(1-a_*)^1/3] and Z_2=(3a_*^2+Z_1^2)^1/2
f=3/2M 1/x^2(x^3-3x+2a_*)[x-x_0-3/2a_*ln(x/x_0)
-3(x_1-a_*)^2/x_1(x_1-x_2)(x_1-x_3)ln(x-x_1/x_0-x_1) -3(x_2-a_*)^2/x_2(x_2-x_1)(x_2-x_3)ln(x-x_2/x_0-x_2)
-3(x_3-a_*)^2/x_3(x_3-x_1)(x_3-x_2)ln(x-x_3/x_0-x_3)]
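For reference, the relativistic factors defined in this appendix can be evaluated directly. The sketch below is a minimal implementation with r in units of GM/c^2 and the 3/(2M) prefactor of L taken in geometrized units (M = 1); it is illustrative only and is not the code used for the figures:

```python
import numpy as np

def isco_x0(a):
    """x0 = sqrt(r_isco/M) from the Z1, Z2 expressions above."""
    Z1 = 1 + (1 - a**2)**(1/3) * ((1 + a)**(1/3) + (1 - a)**(1/3))
    Z2 = np.sqrt(3*a**2 + Z1**2)
    r_isco = 3 + Z2 - np.sign(a) * np.sqrt((3 - Z1) * (3 + Z1 + 2*Z2))
    return np.sqrt(r_isco)

def relativistic_factors(r, a):
    """A, B, D, E, L at radius r (in units of GM/c^2) for spin a, with M = 1."""
    x = np.sqrt(r)
    A = 1 + a**2/x**4 + 2*a**2/x**6
    B = 1 + a/x**3
    D = 1 - 2/x**2 + a**2/x**4
    E = 1 + 4*a**2/x**4 - 4*a**2/x**6 + 3*a**4/x**8
    x0 = isco_x0(a)
    x1 = 2*np.cos(np.arccos(a)/3 - np.pi/3)
    x2 = 2*np.cos(np.arccos(a)/3 + np.pi/3)
    x3 = -2*np.cos(np.arccos(a)/3)
    bracket = x - x0 - 1.5*a*np.log(x/x0)
    for xi, xj, xk in [(x1, x2, x3), (x2, x1, x3), (x3, x1, x2)]:
        bracket -= 3*(xi - a)**2/(xi*(xi - xj)*(xi - xk)) * np.log((x - xi)/(x0 - xi))
    L = 1.5/(x**2*(x**3 - 3*x + 2*a)) * bracket
    return A, B, D, E, L

a = 0.999
print(isco_x0(0.0)**2, isco_x0(a)**2)      # r_isco/M: 6 for a=0, ~1.18 for a=0.999
A, B, D, E, L = relativistic_factors(6.0, a)
print(A * B**-2 * D**-0.5 * E**-0.5 * L)   # combination A B^-2 D^-1/2 E^-1/2 L in Eq. (1)
```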
|
http://arxiv.org/abs/2307.04459v1 | 20230710101228 | Thermal fluctuation, deflection angle and greybody factor of a high-dimensional Schwarzschild black hole in STVG | [
"Qian Li",
"Yu Zhang",
"Qi-Quan Li",
"Qi Sun"
] | gr-qc | [
"gr-qc"
] |
Faculty of Science, Kunming University of Science and Technology, Kunming, Yunnan 650500, China.
[email protected] (Corresponding author) Faculty of Science, Kunming University of Science and Technology, Kunming, Yunnan 650500, China.
Faculty of Science, Kunming University of Science and Technology, Kunming, Yunnan 650500, China.
Faculty of Science, Kunming University of Science and Technology, Kunming, Yunnan 650500, China.
In this work, we study the thermal fluctuation, deflection angle and greybody factor of the high-dimensional Schwarzschild black hole in scalar-tensor-vector gravity (STVG). Based on the correction to the black hole entropy due to thermal fluctuations, we calculate some thermodynamic quantities associated with the corrected black hole entropy. The influence of the first-order and second-order corrections, the spacetime dimensionality and the STVG parameter on these thermodynamic quantities is discussed in detail. Additionally, by utilizing the Gauss-Bonnet theorem, the deflection angle is obtained in the weak field limit and the effect of the two parameters on the results is visualized. Finally, we calculate the bounds on the greybody factors of a massless scalar field.
Thermal fluctuation, deflection angle and greybody factor of a high-dimensional Schwarzschild black hole in STVG
Qi Sun
August 12, 2023
================================================================================================================
§ INTRODUCTION
Although Einstein's general relativity is one of the most successful and well-established gravitational theories in modern physics, it fails to explain many observational results, such as the present stage of cosmic acceleration <cit.>, the rotation curves of galaxies <cit.> and some cosmological data <cit.>. Moreover, general relativity has inherent deficiencies in the theory, such as the presence of spacetime singularities. Therefore, the problems of general relativity motivate us to investigate alternative gravity theories. One of the modified gravity theories is the scalar-tensor-vector gravity (STVG) proposed by Moffat <cit.>, which is based on the action principle and is formulated in terms of the metric tensor, three scalar fields and a massive vector field. Moffat gave the black hole solution in STVG in another paper <cit.>. What's more, this modified gravity (MOG), i.e., STVG, may be considered an alternative to the dark matter problem, which can be addressed by changes in the gravity sector. STVG is able to fit the rotation curves of galaxies <cit.> without invoking dark matter, while showing no deviations from Solar System observational tests. However, Jamali and his colleagues <cit.> found that a modified version of STVG, known as mMOG, cannot be deemed an alternative to the dark matter problem when new constants are introduced in the kinetic term of the scalar field as its coefficients.
Interest in the physical properties of high-dimensional black holes has significantly increased, even though high-dimensional black holes have not been directly observed or experimentally supported, in contrast to the four-dimensional black hole. This has a lot to do with the development of string theory. In addition, the theoretical importance of higher dimensional black hole solutions was introduced by Emparan and Reall <cit.>. Tangherlini <cit.> first proposed the solutions of the Schwarzschild and Reissner-Nordström black holes in D-dimensional spacetime. Later, Myers et al. obtained the Kerr black hole solution in high dimensional spacetime in Ref. <cit.>. Recently, Cai et al. <cit.> derived a high-dimensional static spherically symmetric Schwarzschild black hole in STVG, which is a high dimensional extension of the STVG theory, and studied its quasinormal modes for a massless scalar field as well as its black hole shadow. This black hole solution is a link between Einstein's theory and the STVG theory. Specifically, this black hole degenerates to the Schwarzschild-Tangherlini black hole of Einstein's theory when the coupling constant α is zero.
The black hole entropy is proportional to the area of the event horizon of the black hole, known as the Bekenstein-Hawking formula <cit.>. The black hole entropy is the maximum compared with objects of the same volume, in order to avoid a violation of the second law of black hole thermodynamics. However, due to thermal fluctuations, which lead to the concept of the holographic principle <cit.>, the maximum entropy of black holes may be corrected. The correction term for the maximum entropy is generated by the quantum fluctuations in the spacetime geometry rather than by the matter field in the spacetime. For large black holes, quantum fluctuations are negligible. However, when the size of the black hole decreases due to Hawking radiation, the quantum fluctuations in the spacetime geometry will increase. Thus, there is a logarithmic correction at leading order in the black hole entropy <cit.>. Upadhyay investigated the effect of thermal fluctuations on a quasitopological black hole and found that the negative correction term leads to a local instability of black holes <cit.>. The influence of logarithmic corrections on the thermodynamics due to thermal fluctuations for dilaton black holes in gravity's rainbow has been studied in Ref. <cit.>. Several works are devoted to studying the effects of thermal fluctuations on black hole thermodynamics <cit.>.
Hawking showed that black holes are not completely black objects and can emit radiation, known as Hawking radiation <cit.>. This laid an important foundation for understanding the thermodynamics of black holes. The Hawking radiation detected at infinity differs from the radiation emitted at the black hole horizon by a frequency-dependent factor, called the greybody factor. The greybody factor, which derives from the transmission amplitude, can provide information on the quantum nature of the black hole <cit.>. There are several methods to calculate the greybody factor, such as the bounds on greybody factors <cit.>, the WKB method <cit.> and exact numerical approaches <cit.>. In this paper, we choose the bounds on the greybody factor because they provide analytical results valid for intermediate frequencies and for all angular momenta.
When a light ray passes by a dense compact object on its way to a distant observer, the observer finds that the ray has been deflected. In other words, the compact object bends the light ray, which gives rise to gravitational lensing. Gravitational lensing, which can be classified into strong, weak and micro lensing, is therefore used as a special astronomical tool to test whether general relativity is correct. Concretely, strong gravitational lensing is used to calculate the magnification and positions of the images produced by a black hole. Weak gravitational lensing can help us to measure the masses of different objects or to constrain cosmological parameters. In addition, weak gravitational lensing also has an important effect on the cosmic microwave background <cit.>. At present, strong or weak gravitational lensing by compact objects, such as wormholes, black holes and cosmic strings, has been widely considered <cit.>. Part of the work in the above literature uses the Gauss-Bonnet theorem to calculate the deflection angle in weak gravitational lensing. This method, proposed by Gibbons and Werner <cit.> in 2008, derived the deflection angle for the first time in the context of optical geometry. Since then, it has been applied to the weak deflection angles of various black holes <cit.>. We will also study the weak gravitational lensing of a high-dimensional Schwarzschild spacetime in STVG by using the Gauss-Bonnet theorem.
Motivated by the above, the purpose of this paper is to study the thermal fluctuations, weak deflection angle and greybody factor of the high-dimensional Schwarzschild black hole in STVG. The present paper is structured as follows. In section <ref>, we briefly introduce the high-dimensional Schwarzschild black hole solution in STVG and review its physical features. In section <ref>, we study the corrected thermodynamic quantities due to thermal fluctuations. Section <ref> is devoted to calculating the weak deflection angle using the Gauss-Bonnet theorem. We discuss the bounds on greybody factors in section <ref>. In the last section, our conclusions are summarized.
Throughout this paper, the natural system of units (G_N=ħ=c=1) is adopted.
§ FUNDAMENTAL SPACETIME
In this section, we introduce the high-dimensional Schwarzschild spacetime in
scalar-tensor-vector gravity (STVG) and briefly review some of its thermodynamical properties. The general action of the STVG theory in D-dimensional spacetime takes the form <cit.>
S_L=S_GR+S_ϕ+S_S+S_M,
where
S_ GR=1/16π∫ d^Dx√(-g)1/GR,
S_ϕ=-1/4π∫ d^Dx√(-g)(K-1/2μ̃^2ϕ ^μϕ _μ),
S_S =∫ d^D x √(-g)[1/G^3(1/2 g^μν∇_μ G ∇_ν G-V_G(G)) +1/μ̃^2 G(1/2 g^μν∇_μμ̃∇_νμ̃-V_μ̃(μ̃))],
here S_GR is the Einstein-Hilbert action, S_ϕ stands for the action of a massive vector field ϕ^μ, S_S denotes the action of the scalar field and S_M represents the matter action. The black hole metric in the D-dimensional spacetime has the following form
ds^2=-f(r)dt^2+dr^2/f(r)+r^2dΩ ^2_D-2,
with the metric function f(r) given by <cit.>
f(r)=1-m/r^D-3+Gq^2/r^2(D-3),
where G=G_N(1+a) is the enhanced gravitational constant, with G_N Newton's constant. The parameters m and q are defined by
m≡16π GM/(D-2)Ω _D-2, q≡8π√(a G_N)M/√(2(D-2)(D-3))Ω _D-2,
where the dimensionless parameter a quantifies the deviation of the STVG theory from standard general relativity and M is the black hole mass. Moreover, Ω_D-2, the volume of the unit (D-2)-dimensional sphere, has the form
Ω_D-2=2π^(D-1)/2/Γ((D-1)/2).
When the dimensionless parameter a vanishes, we recover the Schwarzschild-Tangherlini black hole of Einstein's gravity. Moffat gave the Schwarzschild black hole in STVG for the case D=4 <cit.>. Moreover, one can see from the metric that there is a similarity between a high-dimensional Schwarzschild black hole in STVG and a high-dimensional Reissner-Nordström black hole in Einstein gravity <cit.>. The high-dimensional Schwarzschild STVG black hole possesses up to two horizons
r_±=(m/2±√(m^2-4Gq^2)/2)^{1/(D-3)},
where r_- and r_+ represent the Cauchy horizon and the event horizon, respectively. Mureika et al. <cit.> pointed out that the Schwarzschild black hole in STVG, i.e., the MOG black hole, depends only on the mass M and the dimensionless parameter a; therefore q is called the gravitational charge rather than an electric charge.
The black hole mass in terms of r_+ has the form
M=r_+^D-3(A-√(A^2-4 G B^2))/2 G B^2,
where the coefficients A and B are expressed as
A≡16 π G/(D-2) Ω_D-2, B≡8 π√(a G_ N)/√(2(D-2) (D-3))Ω_D-2.
The Hawking temperature is given by
T_ H=1/4πdf(r)/dr|_r=r_+ =(D-3)(A √(A^2-4 G B^2 )-A^2+4 G B^2 )/8 π G B^2r_ +.
Also, the Bekenstein-Hawking entropy of this high-dimensional black hole, S_0, is given by
S_0= Ω_D-2 r_+^D-2/4.
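For concreteness, the quantities introduced above can be evaluated numerically. The following Python sketch is an illustration only and is not part of the original analysis; the values D=5, a=0.5 and r_+=1 are arbitrary choices. It computes Ω_D-2, A, B, the mass M, the Hawking temperature T_H and the entropy S_0, and checks that the metric function vanishes at the horizon.

```python
import math

# Illustrative parameters (arbitrary choices): dimension D, STVG parameter a, horizon radius r_+
D, a, rp = 5, 0.5, 1.0
GN = 1.0                      # Newton's constant in natural units
G = GN * (1.0 + a)            # enhanced gravitational constant G = G_N(1+a)

Omega = 2.0 * math.pi ** ((D - 1) / 2) / math.gamma((D - 1) / 2)   # volume of the unit (D-2)-sphere
A = 16.0 * math.pi * G / ((D - 2) * Omega)
B = 8.0 * math.pi * math.sqrt(a * GN) / (math.sqrt(2.0 * (D - 2) * (D - 3)) * Omega)

# Mass expressed through the event horizon radius, and the metric parameters m, q
M = rp ** (D - 3) * (A - math.sqrt(A ** 2 - 4.0 * G * B ** 2)) / (2.0 * G * B ** 2)
m, q = A * M, B * M

def f(r):
    """Metric function f(r) = 1 - m/r^(D-3) + G q^2 / r^(2(D-3))."""
    return 1.0 - m / r ** (D - 3) + G * q ** 2 / r ** (2 * (D - 3))

# Hawking temperature and Bekenstein-Hawking entropy
TH = (D - 3) * (A * math.sqrt(A ** 2 - 4 * G * B ** 2) - A ** 2 + 4 * G * B ** 2) \
     / (8.0 * math.pi * G * B ** 2 * rp)
S0 = Omega * rp ** (D - 2) / 4.0

print(f"f(r_+) = {f(rp):.2e} (should vanish)")
print(f"M = {M:.4f}, T_H = {TH:.4f}, S_0 = {S0:.4f}")
```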
§ THERMAL FLUCTUATIONS
In this section, we investigate the influence of thermal fluctuations on the thermodynamic potentials of a high-dimensional Schwarzschild black hole in STVG. First, we briefly introduce the thermal fluctuations and then calculate the corresponding corrected thermodynamic quantities.
The influence of thermal fluctuations on black hole thermodynamics cannot be neglected when the radius of the black hole decreases and its temperature becomes large. The thermal fluctuation is regarded as a perturbation around the equilibrium state as long as it is small enough. Using the partition function approach, a general expression for the corrected entropy-area relation is written as <cit.>
S=S_0 -αln(S_0T^2)+ λ/S_0,
where α is the leading-order correction parameter and λ is the second-order correction parameter. The leading-order correction is a logarithmic term caused by the thermal fluctuations, and the second-order correction, proportional to the inverse of the uncorrected entropy, is produced by expanding the entropy function around the equilibrium.
Using Eqs. (<ref>) and (<ref>), the corrected entropy of this high-dimensional black hole is given as
S =1/4r_+^D-2Ω_D-2 + 4λ/(Ω_D-2r_+^D-2)-αln[(D-3)^2(A^2-4GB^2)(A-√(A^2-4GB^2))^2 r_+^D-4Ω_D-2/(256G^2B^4π^2)].
We draw the corrected entropy versus the event horizon radius for different parameters in Figs. <ref> and <ref>. As shown in Fig. <ref>, the presence of the leading-order correction leads to an increase in entropy for small values of the event horizon radius. However, the corrected entropy gradually decreases and recovers the original entropy as the event horizon radius increases. This means that the equilibrium of the small black hole is unstable, since Δ S >0 when the black hole is regarded as an isolated system. The right figure in Fig. <ref> shows that the inverse correction term has a significant influence on the entropy for a small black hole. In fact, compared to a large black hole, the thermal fluctuation has a greater impact on a small black hole. We also show the effect of the spacetime dimensionality on the corrected entropy in the left figure of Fig. <ref>. We find that the corrected entropy changes both faster and more strongly in higher-dimensional spacetime. One can thus see that for a small or a large black hole, the higher the dimension, the larger the corrected entropy, whereas this is not the case for intermediate-size black holes. We also observe from the left figure in Fig. <ref> that the STVG parameter a leads to a slight increase in the corrected entropy.
We can calculate the Helmholtz free energy using the corrected entropy and temperature as
F =-∫ S d T = (D-3)√(A^2-4 G B^2)(A-√(A^2-4 G B^2))/(8π G B^2)
×(-4λ/((D-1)Ω_D-2r_+^D-1) + r_+^D-3Ω_D-2/(4(D-3))+ α/r_+( D-4 + ln[ (D-3)^2(A^2-4 G B^2)(A-√(A^2-4 G B^2))^2 r_+^D-4Ω_D-2/(256 G^2 B^4π^2)])).
In order to better understand the corrected Helmholtz free energy, we plot it in terms of the event horizon radius for the different parameters α, λ, D, a in Figs. <ref> and <ref>. In Fig. <ref>, we can see that the Helmholtz free energy without any correction is a monotonically increasing and positive function. It is worth noting that the Helmholtz free energy becomes negative for a small black hole under the thermal fluctuation but returns to positive values as the event horizon radius increases. In contrast to the case of the small black hole, the presence of the logarithmic correction term increases the Helmholtz free energy for larger black holes. We can conclude that thermal fluctuations make small black holes more stable. In addition, we also see from the left panel of Fig. <ref> that the impact of the spacetime dimension on the corrected Helmholtz free energy is similar to that of the logarithmic correction. The effect of the parameter a on the corrected Helmholtz free energy is shown in the right figure of Fig. <ref>; it is clear that the parameter a decreases the corrected Helmholtz free energy.
The internal energy is obtained from the thermodynamic relation E=F+TS, i.e.,
E =1/(32 π (D-1) G B^2Ω_D-2)(4GB^2+A(√(A^2-4GB^2)-A)) r_+^-D-3
×(16(D-3)(D-2)r_+^4λ+(D-1)r_+^DΩ_D-2(4(D-4)(D-3)r_+^2α+(D-2)r_+^DΩ_D-2)).
Figs. <ref> and <ref> present the behavior of the corrected internal energy with increasing event horizon radius for the different parameters α, λ, D, a. As shown in Fig. <ref>, the internal energy has a positive asymptotic value under thermal fluctuations for a small black hole, whereas the effect of the thermal fluctuations can be neglected as the event horizon radius increases. We can see clearly that the higher the dimensionality of the black hole, the larger the corrected internal energy. However, the corrected internal energy decreases as the STVG parameter increases.
Next, we investigate the heat capacity of the black hole, which can be written as C=(dE/ dT)_V=(d E/ dr_+)/ (dT/dr_+); using Eqs. (<ref>) and (<ref>), we obtain
C=(D-4)α+4(D-2)λ/(Ω_D-2r_+^D-2)-1/4(D-2)r_+^D-2Ω_D-2.
The behavior of the heat capacity is shown in Figs.<ref> and <ref>. In Fig.<ref>, we observe that without any thermal fluctuation the heat capacity is negative, and the black hole is thus thermodynamically unstable. The presence of thermal fluctuations gives small black holes a positive heat capacity, so that there is a phase transition at which the system passes from stable to unstable. Moreover, the critical point gradually moves to the right as we increase the correction coefficients α, λ. From Fig.<ref>, we can see that the phase transition occurs at a larger event horizon radius if the spacetime dimensionality D increases. It is worth mentioning that the heat capacity of a high-dimensional Schwarzschild black hole in STVG reduces to that of the Schwarzschild-Tangherlini black hole; that is to say, the STVG parameter does not affect the stability conditions of the black hole.
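A minimal numerical sketch of this stability discussion is given below; it evaluates the corrected heat capacity quoted above for arbitrary illustrative values of D, α and λ, and locates the radius at which C changes sign (the phase transition point). Since C does not involve A, B or a, the STVG parameter indeed plays no role here.

```python
import math
import numpy as np

D, alpha, lam = 5, 1.0, 1.0                    # illustrative dimension and correction parameters alpha, lambda
Omega = 2.0 * math.pi ** ((D - 1) / 2) / math.gamma((D - 1) / 2)

def heat_capacity(rp):
    """Corrected heat capacity C(r_+) quoted above; alpha = lam = 0 gives the uncorrected (negative) value."""
    return (D - 4) * alpha + 4 * (D - 2) * lam / (Omega * rp ** (D - 2)) \
           - (D - 2) * Omega * rp ** (D - 2) / 4.0

rps = np.linspace(0.2, 2.0, 2000)
C = np.array([heat_capacity(r) for r in rps])
sign_change = rps[np.where(np.diff(np.sign(C)) != 0)[0]]
print("C changes sign near r_+ =", np.round(sign_change, 3))   # location of the phase transition
```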
§ WEAK DEFLECTION ANGLE
In this section, we derive the deflection angle in the weak field limit using the Gauss-Bonnet theorem. For the equatorial plane θ =π/2 and null geodesics ds^2=0, the corresponding optical metric of a high-dimensional Schwarzschild black hole in STVG takes the form
dt^2=1/f^2(r)dr^2+r^2/f(r)dφ^2.
Afterwards, we can rewrite the optical metric using the coordinate transformation dr_*=1/f(r)dr as
dt^2= dr_*^2+ f̃^2(r_*)dφ^2,
where f̃(r_*)≡√(r^2/f(r)).
We obtain the Gaussian optical curvature as follows <cit.>
K =RicciScalar/2 =1/4(D-3)r^1-4D(4(D-2)G^2q^4r^9-2(D-2)Mr^3D
-6(D-2)Gq^2r^6+D+((D-1)M^2+4(2D-5)Gq^2)r^3+2D).
Now, we can calculate the deflection angle by utilizing the Gauss-Bonnet theorem <cit.>. The domain D is taken to be a subset of a compact, oriented surface, with Gaussian optical curvature K and Euler characteristic number χ(D), and ∂D is the piecewise smooth boundary of the domain D with geodesic curvature κ. We denote by α_i the i^th exterior angle. The Gauss-Bonnet theorem states that
∫∫_DKdS+∫_∂Dκd t+∑_iα_i=2πχ(D),
where dS stands for the surface element. In addition, the geodesic curvature κ along a smooth curve γ is written as κ=g(Δ_γ̇γ̇,γ̈), where γ̈ denotes the unit acceleration vector. We consider D to be bounded by the geodesic γ_c and by the curve γ_R, where γ_R is perpendicular to γ_c at the source S and at the observer O; since γ_c is a geodesic, κ (γ_c)=0 by definition. Then ∑_iα_i=α_S+α_O and χ(D)=1, so that Eq.(<ref>) reduces to
∫∫_DKdS+∫_γ_Rκ (γ_R)d t =π.
Utilizing the definition of the geodesic curvature, the radial part of κ (γ_R) can be expressed as
κ (γ_R)= (Δ_γ̇_Rγ̇_R)^r=γ̇_R^ϕ(∂_ϕγ̇_R^r)+Γ_ϕϕ^r(γ̇_R^ϕ)^2,
where γ̇_R represents the tangent vector of the curve γ_R and Γ_ϕϕ^r is the Christoffel symbol. For γ_R:=R=const, the first term on the right-hand side of the above equation vanishes and the second term equals 1/R, so κ (γ_R) reduces to 1/R.
Using the optical metric (<ref>), the line element along γ_R can be rewritten as dt=R dφ.
Eq.(<ref>) becomes
∫∫_DKdS+∫_0^π+αdφ =π.
Finally, we obtain the deflection angle <cit.>
α̂=-∫_0^π∫_b/ sinϕ^∞K dS.
Now, we can calculate the deflection angle of a high-dimensional Schwarzschild black hole in STVG for different spacetime dimensionalities. As examples, we give the deflection angle for D=4,5,6,7:
α̂_D=4 = 2m/b-3 m^2π/16b^2-3Gπ q^2/4b^2+4Gmq^2/3b^3 +O(q^4/b^4),
α̂_D=5 =3mπ/4b^2-3m^2π/16b^4-15Gπ q^2/16b^4+15Gmπ q^2/32b^6+O(q^4/b^8),
α̂_D=6 =8m/3b^3-25m^2π/128b^6-35Gπ q^2/32b^6+512Gmq^2/315b^9
+O(q^4/b^12),
α̂_D=7 =15π m/16b^4-105m^2π/512b^8-315Gπ q^2/256b^8+1155Gmπ q^2/2048b^12
+O(q^4/b^16).
The behavior of the deflection angle with respect to the impact parameter is shown in Fig.<ref> for different values of D and a. It is clear that the higher the black hole dimension, the smaller the deflection angle. The STVG parameter, however, increases the deflection angle; i.e., a high-dimensional Schwarzschild black hole in STVG leads to a larger deflection angle than a Schwarzschild-Tangherlini black hole.
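The truncated expansions above are straightforward to evaluate. The short Python sketch below is illustrative only; the values of m, q and G are arbitrary and not tied to a specific black hole. It computes the D=4 and D=5 weak deflection angles for several impact parameters, illustrating that the deflection angle decreases with b and with the dimension.

```python
import math

m, q, G = 1.0, 0.3, 1.5          # arbitrary illustrative values of the metric parameters

def alpha_D4(b):
    # truncated D = 4 expansion quoted above
    return 2*m/b - 3*m**2*math.pi/(16*b**2) - 3*G*math.pi*q**2/(4*b**2) + 4*G*m*q**2/(3*b**3)

def alpha_D5(b):
    # truncated D = 5 expansion quoted above
    return 3*m*math.pi/(4*b**2) - 3*m**2*math.pi/(16*b**4) \
           - 15*G*math.pi*q**2/(16*b**4) + 15*G*m*math.pi*q**2/(32*b**6)

for b in (5.0, 10.0, 20.0):      # weak-field regime: impact parameter much larger than the horizon scale
    print(f"b = {b:5.1f}   alpha(D=4) = {alpha_D4(b):.5f}   alpha(D=5) = {alpha_D5(b):.5f}")
```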
§ GREYBODY FACTOR
In this section, we study the bounds on the greybody factor for a massless scalar field. The massless scalar field Φ is governed by the Klein-Gordon equation <cit.>
1/√(-g)∂_μ(√(-g)g^μν∂_ν)Φ=0,
where g is the determinant of the metric tensor. In order to separate the radial and angular variables, we use the ansatz Φ=e^-iω t Y_lm(Ω)r^-(D-2)/2Ψ(r) and the tortoise coordinate dr_*=dr/f(r). Substituting these definitions and the metric function Eq. (<ref>) into Eq. (<ref>), we obtain a Schrödinger-like wave equation
d^2Ψ(r)/dr_*^2+[ω^2-V_eff(r)]Ψ(r)=0,
in which ω denotes the frequency, while l and m are the multipole and azimuthal numbers of the spherical harmonics Y_lm, respectively.
The effective potential V_eff(r) can be written as
V_eff(r)=f(r)[l(D+l-3)/r^2+(D-2)(D-4)f(r)/4r^2+(D-2)f'(r)/2r].
To better understand the effects of the spacetime dimensionality and of the STVG parameter on the effective potential, we plot the effective potential as a function of the radial coordinate for different values of D and a in Fig.<ref>. Obviously, the dimensionality of the spacetime increases the effective potential, whereas the STVG parameter has the opposite effect. The behavior of the greybody factors can be anticipated from the effective potential.
The bounds on greybody factors can be expressed as <cit.>
T≥sech^2[∫_-∞^∞√((h')^2+(ω^2-V_eff-h^2)^2)/ 2hdr_*],
where h≡ h(r_*)>0 is an arbitrary function satisfying h(-∞)=h(∞)=ω; two particular functional forms of h are considered in Ref.<cit.>. Here we only consider the case h=ω, for which Eq.(<ref>) becomes
T≥sech^2[1/2ω∫_r_+^∞V_eff/f(r)dr].
After expanding the integral, we obtain the lower bound on the greybody factors
T ≥sech^2[1/2ω((4l(l+D-3)+(D-2)(D-4))/(4r_+)
+(D-2)m/(4r_+^D-2)-(D-2)(3D-8)Gq^2/(4(2D-5)r_+^2D-5))
].
Fig.<ref> demonstrates the behavior of the greybody factor for the high-dimensional Schwarzschild black hole in STVG. From the left panel, we observe that the greybody factor is reduced as the dimension increases; that is to say, the greybody factor is suppressed in higher-dimensional spacetime. This indicates that fewer massless scalar particles pass through the potential barrier and reach spatial infinity for a higher-dimensional black hole. Additionally, we observe that as the STVG parameter a increases, the greybody factor increases; that is, the STVG parameter makes the gravitational potential more transparent.
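The lower bound above can also be obtained by direct numerical quadrature of the integral of V_eff/f, without relying on the expanded closed form; note that V_eff/f is finite at r_+ because V_eff carries an overall factor f(r). A Python sketch with arbitrary illustrative parameter values:

```python
import math
import numpy as np
from scipy.integrate import quad

D, a, rp, l = 5, 0.5, 1.0, 0                  # illustrative: dimension, STVG parameter a, horizon r_+, multipole l
GN = 1.0
G = GN * (1.0 + a)
Omega = 2.0 * math.pi ** ((D - 1) / 2) / math.gamma((D - 1) / 2)
A = 16.0 * math.pi * G / ((D - 2) * Omega)
B = 8.0 * math.pi * math.sqrt(a * GN) / (math.sqrt(2.0 * (D - 2) * (D - 3)) * Omega)
M = rp ** (D - 3) * (A - math.sqrt(A ** 2 - 4 * G * B ** 2)) / (2 * G * B ** 2)
m, q2 = A * M, (B * M) ** 2

f  = lambda r: 1 - m / r ** (D - 3) + G * q2 / r ** (2 * (D - 3))
df = lambda r: (D - 3) * m / r ** (D - 2) - 2 * (D - 3) * G * q2 / r ** (2 * D - 5)

def integrand(r):
    # V_eff / f, finite at r = r_+ since V_eff carries an overall factor f(r)
    return (l * (D + l - 3) / r ** 2
            + (D - 2) * (D - 4) * f(r) / (4 * r ** 2)
            + (D - 2) * df(r) / (2 * r))

I, _ = quad(integrand, rp, np.inf)
for omega in (0.5, 1.0, 2.0):
    Tbound = 1.0 / np.cosh(I / (2 * omega)) ** 2   # sech^2 lower bound on the greybody factor
    print(f"omega = {omega:3.1f}   T >= {Tbound:.4f}")
```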
§ CONCLUSION
In this paper, we analyzed the thermal fluctuations, weak deflection angle and greybody factor of a high-dimensional Schwarzschild black hole in STVG.
First, we evaluated the influence of the logarithmic and higher-order corrections to the entropy on the Helmholtz free energy, internal energy and heat capacity, and compared the corrected and uncorrected thermodynamic quantities. Overall, the corrected entropy, as a consequence of thermal fluctuations, first decreases and then increases, and the impact of thermal fluctuations is significant for a small black hole. Due to the effect of the spacetime dimensionality, the curves of the modified entropy intersect; as a result, for a small or a large black hole the corrected entropy increases with the spacetime dimensionality, whereas this is not the case for intermediate-size black holes. The presence of the STVG parameter leads to a slight increase in the corrected entropy. A black hole with a small event horizon radius possesses a negative Helmholtz free energy because of the thermal fluctuations. The Helmholtz free energy increases monotonically with increasing values of the parameters D and a for a small black hole, whereas for a larger black hole the parameters D and a have the opposite effects on the Helmholtz free energy. The internal energy remains positive and behaves similarly to the corrected entropy; it increases with the number of dimensions, while it decreases as the STVG parameter increases. In addition, from the analysis of the Helmholtz free energy and the heat capacity in all dimensional cases, we found that thermal fluctuations make small black holes more stable, and that the heat capacity is independent of the STVG parameter.
Second, we calculated the weak deflection angle with the Gauss-Bonnet theorem and gave its expression for D=4,5,6,7. We pointed out that the weak deflection angle becomes smaller in higher-dimensional spacetime, whereas the presence of the STVG parameter increases the deflection angle.
Finally, we computed the greybody factors of the massless scalar field and analyzed the effects of the spacetime dimensionality and of the STVG parameter on them. We found that the 4-dimensional black hole has the largest greybody factor whereas the 7-dimensional black hole possesses the smallest. Moreover, the greybody factor increases when the STVG parameter increases. Thus, more radiation can reach spatial infinity for the 4-dimensional black hole and for larger values of the STVG parameter.
§ DECLARATION OF COMPETING INTEREST
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Astier:2012ba
P. Astier and R. Pain,
Observartional Evidence of the Accelerated Expansion of the Universe.
Comptes Rendus Physique 13 (2012), 521-538.
doi:10.1016/j.crhy.2012.04.009
Moffat:2013sja
J. W. Moffat and S. Rahvar,
The MOG weak field approximation and observational test of galaxy rotation curves.
Mon. Not. Roy. Astron. Soc. 436 (2013), 1439-1451.
doi:10.1093/mnras/stt1670
Planck:2015fie
P. A. R. Ade et al. [Planck],
Planck 2015 results. XIII. Cosmological parameters.
Astron. Astrophys. 594 (2016), A13.
doi:10.1051/0004-6361/201525830
Moffat:2005si
J. W. Moffat,
Scalar-tensor-vector gravity theory.
JCAP 03 (2006), 004.
doi:10.1088/1475-7516/2006/03/004
Moffat:2014aja
J. W. Moffat,
Black Holes in Modified Gravity (MOG).
Eur. Phys. J. C 75 (2015), 175.
doi:10.1140/epjc/s10052-015-3405-x
Brownstein:2005zz
J. R. Brownstein and J. W. Moffat,
Galaxy rotation curves without non-baryonic dark matter.
Astrophys. J. 636 (2006), 721-741.
doi:10.1086/498208
Jamali:2017zrh
S. Jamali, M. Roshan and L. Amendola,
On the cosmology of scalar-tensor-vector gravity theory,
JCAP 01 (2018), 048.
doi:10.1088/1475-7516/2018/01/048
Emparan:2008eg
R. Emparan and H. S. Reall,
Black Holes in Higher Dimensions.
Living Rev. Rel. 11 (2008), 6.
doi:10.12942/lrr-2008-6
Tangherlini:1963bw
F. R. Tangherlini,
Schwarzschild field in n dimensions and the dimensionality of space problem.
Nuovo Cim. 27 (1963), 636-651.
doi:10.1007/BF02784569
Myers:1986un
R. C. Myers and M. J. Perry,
Black Holes in Higher Dimensional Space-Times.
Annals Phys. 172 (1986), 304.
doi:10.1016/0003-4916(86)90186-7
Cai:2020igv
X. C. Cai and Y. G. Miao,
High-dimensional Schwarzschild black holes in scalar–tensor–vector gravity theory.
Eur. Phys. J. C 81 (2021), 559.
doi:10.1140/epjc/s10052-021-09351-x
Bekenstein:1973ur
J. D. Bekenstein,
Black holes and entropy.
Phys. Rev. D 7 (1973), 2333-2346.
doi:10.1103/PhysRevD.7.2333
Easther:1999gk
R. Easther and D. A. Lowe,
Holography, cosmology and the second law of thermodynamics.
Phys. Rev. Lett. 82 (1999), 4967-4970.
doi:10.1103/PhysRevLett.82.4967
Das:2001ic
S. Das, P. Majumdar and R. K. Bhaduri,
General logarithmic corrections to black hole entropy.
Class. Quant. Grav. 19 (2002), 2355-2368.
doi:10.1088/0264-9381/19/9/302
Upadhyay:2017qmv
S. Upadhyay,
Quantum corrections to thermodynamics of quasitopological black holes.
Phys. Lett. B 775 (2017), 130-139.
doi:10.1016/j.physletb.2017.10.059
Dehghani:2018qvn
M. Dehghani,
Thermodynamics of charged dilatonic BTZ black holes in rainbow gravity.
Phys. Lett. B 777 (2018), 351-360.
doi:10.1016/j.physletb.2017.12.048
Jawad:2017mwt
A. Jawad and M. U. Shahzad,
Effects of Thermal Fluctuations on Non-minimal Regular Magnetic Black Hole.
Eur. Phys. J. C 77 (2017), 349.
doi:10.1140/epjc/s10052-017-4914-6
Shahzad:2018znu
M. U. Shahzad and A. Jawad,
Thermodynamics of Black holes With Higher Order Corrected Entropy.
Can. J. Phys. 97 (2019), 742-751.
doi:10.1139/cjp-2018-0091
Sharif:2021vex
M. Sharif and Z. Akhtar,
Study of thermal fluctuations in five-dimensional rotating regular black hole.
Chin. J. Phys. 71 (2021), 669-682.
doi:10.1016/j.cjph.2021.04.005
Khan:2022zcf
Y. H. Khan and P. A. Ganai,
Remnants and thermal corrections in Horndeski black holes with non-minimal kinetic coupling.
Eur. Phys. J. Plus 137 (2022), 827.
doi:10.1140/epjp/s13360-022-03036-4
Ama-Tul-Mughani:2022wtg
Q. Ama-Tul-Mughani, A. Waseem, W. u. Salam and A. Jawad,
Greybody factor and thermal fluctuations of rotating regular black hole bounded by PFDM.
Chin. J. Phys. 77 (2022), 2213-2227.
doi:10.1016/j.cjph.2021.11.024
Chen:2021czh
X. Chen, X. Huang, J. Chen and Y. Wang,
Effect of thermal fluctuation on the thermodynamics of GMGHS black hole.
Gen. Rel. Grav. 53 (2021), 9.
doi:10.1007/s10714-020-02780-1
Upadhyay:2019hyw
S. Upadhyay, Nadeem-ul-islam and P. A. Ganai,
A modified thermodynamics of rotating and charged BTZ black hole.
JHAP 2 (2022), 25-48.
doi:10.22128/jhap.2021.454.1004
Khan:2021tzv
Y. H. Khan, S. Upadhyay and P. A. Ganai,
Stability of remnants of Bardeen regular black holes in presence of thermal fluctuations.
Mod. Phys. Lett. A 36 (2021), 2130023.
doi:10.1142/S0217732321300238
Hawking:1974rv
S. W. Hawking,
Black hole explosions.
Nature 248 (1974), 30-31.
doi:10.1038/248030a0
Hawking:1975vcx
S. W. Hawking,
Particle Creation by Black Holes.
Commun. Math. Phys. 43 (1975), 199-220
[erratum: Commun. Math. Phys. 46 (1976), 206].
doi:10.1007/BF02345020
Barman:2019vst
S. Barman,
The Hawking effect and the bounds on greybody factor for higher dimensional Schwarzschild black holes.
Eur. Phys. J. C 80 (2020), 50.
doi:10.1140/epjc/s10052-020-7613-7
Boonserm:2008zg
P. Boonserm and M. Visser,
Bounding the greybody factors for Schwarzschild black holes.
Phys. Rev. D 78 (2008), 101502.
doi:10.1103/PhysRevD.78.101502
Boonserm:2014fja
P. Boonserm, A. Chatrabhuti, T. Ngampitipan and M. Visser,
Greybody factors for Myers-Perry black holes.
J. Math. Phys. 55 (2014), 112502.
doi:10.1063/1.4901127
Boonserm:2017qcq
P. Boonserm, T. Ngampitipan and P. Wongjun,
Greybody factor for black holes in dRGT massive gravity.
Eur. Phys. J. C 78 (2018), 492.
doi:10.1140/epjc/s10052-018-5975-x
Okyay:2021nnh
M. Okyay and A. Övgün,
Nonlinear electrodynamics effects on the black hole shadow, deflection angle, quasinormal modes and greybody factors.
JCAP 01 (2022), 009.
doi:10.1088/1475-7516/2022/01/009
Kokkotas:2010zd
K. D. Kokkotas, R. A. Konoplya and A. Zhidenko,
Quasinormal modes, scattering and Hawking radiation of Kerr-Newman black holes in a magnetic field.
Phys. Rev. D 83 (2011), 024031.
doi:10.1103/PhysRevD.83.024031
Konoplya:2020jgt
R. A. Konoplya, A. F. Zinhailo and Z. Stuchlik,
Quasinormal modes and Hawking radiation of black holes in cubic gravity.
Phys. Rev. D 102 (2020), 044023.
doi:10.1103/PhysRevD.102.044023
Li:2022jda
Q. Li, C. Ma, Y. Zhang, Z. W. Lin and P. F. Duan,
Gray-body factor and absorption of the Dirac field in ESTGB gravity.
Chin. J. Phys. 77 (2022), 1269-1277.
doi:10.1016/j.cjph.2022.03.027
Harris:2003eg
C. M. Harris and P. Kanti,
Hawking radiation from a (4+n)-dimensional black hole: Exact results for the Schwarzschild phase.
JHEP 10 (2003), 014.
doi:10.1088/1126-6708/2003/10/014
Catalan:2014ama
M. Catalán, E. Cisternas, P. A. González and Y. Vásquez,
Quasinormal modes and greybody factors of a four-dimensional Lifshitz black hole with z=0.
Astrophys. Space Sci. 361 (2016), 189.
doi:10.1007/s10509-016-2764-6
Abedi:2013xua
J. Abedi and H. Arfaei,
Fermionic greybody factors in dilaton black holes.
Class. Quant. Grav. 31 (2014), 195005.
doi:10.1088/0264-9381/31/19/195005
Lewis:2006fu
A. Lewis and A. Challinor,
Weak gravitational lensing of the CMB.
Phys. Rept. 429 (2006), 1-65.
doi:10.1016/j.physrep.2006.03.002
Peloton:2016kbw
J. Peloton, M. Schmittfull, A. Lewis, J. Carron and O. Zahn,
Full covariance of CMB and lensing reconstruction power spectra.
Phys. Rev. D 95 (2017), 043508.
doi:10.1103/PhysRevD.95.043508
Pratten:2016dsm
G. Pratten and A. Lewis,
Impact of post-Born lensing on the CMB.
JCAP 08 (2016), 047.
doi:10.1088/1475-7516/2016/08/047
Chen:2015cpa
S. Chen and J. Jing,
Strong gravitational lensing for the photons coupled to Weyl tensor in a Schwarzschild black hole spacetime.
JCAP 10, 002 (2015).
doi:10.1088/1475-7516/2015/10/002
Chen:2016hil
S. Chen, S. Wang, Y. Huang, J. Jing and S. Wang,
Strong gravitational lensing for the photons coupled to a Weyl tensor in a Kerr black hole spacetime.
Phys. Rev. D 95, 104017 (2017).
doi:10.1103/PhysRevD.95.104017
Wang:2016paq
S. Wang, S. Chen and J. Jing,
Strong gravitational lensing by a Konoplya-Zhidenko rotating non-Kerr compact object.
JCAP 11, 020 (2016).
doi:10.1088/1475-7516/2016/11/020
Lu:2016gsf
X. Lu, F. W. Yang and Y. Xie,
Strong gravitational field time delay for photons coupled to Weyl tensor in a Schwarzschild black hole.
Eur. Phys. J. C 76, 357 (2016).
doi:10.1140/epjc/s10052-016-4218-2
Zhao:2016kft
S. S. Zhao and Y. Xie,
Strong field gravitational lensing by a charged Galileon black hole.
JCAP 07, 007 (2016).
doi:10.1088/1475-7516/2016/07/007
Zhao:2017cwk
S. S. Zhao and Y. Xie,
Strong deflection gravitational lensing by a modified Hayward black hole.
Eur. Phys. J. C 77, 272 (2017).
doi:10.1140/epjc/s10052-017-4850-5
Zhang:2017vap
R. Zhang, J. Jing and S. Chen,
Strong gravitational lensing for black holes with scalar charge in massive gravity.
Phys. Rev. D 95, no.6, 064054 (2017).
doi:10.1103/PhysRevD.95.064054
Abbas:2019olp
G. Abbas, A. Mahmood and M. Zubair,
Strong Gravitational Lensing for Photon Coupled to Weyl Tensor in Kiselev Black Hole.
Chin. Phys. C 44, 095105 (2020).
doi:10.1088/1674-1137/44/9/095105
Bergliaffa:2020ivp
S. E. P. Bergliaffa, E. E. d. Filho and R. Maier,
Strong Lensing and Nonminimally Coupled Electromagnetism.
Phys. Rev. D 101, 124038 (2020).
doi:10.1103/PhysRevD.101.124038
Wang:2019cuf
C. Y. Wang, Y. F. Shen and Y. Xie,
Weak and strong deflection gravitational lensings by a charged Horndeski black hole.
JCAP 04, 022 (2019).
doi:10.1088/1475-7516/2019/04/022
Kumaran:2019qqp
Y. Kumaran and A. Övgün,
Weak Deflection Angle of Extended Uncertainty Principle Black Holes.
Chin. Phys. C 44, 025101 (2020).
doi:10.1088/1674-1137/44/2/025101
Javed:2020frq
W. Javed, M. B. Khadim and A. Övgün,
Weak gravitational lensing by Bocharova–Bronnikov–Melnikov–Bekenstein black holes using Gauss–Bonnet theorem.
Eur. Phys. J. Plus 135, 595 (2020).
doi:10.1140/epjp/s13360-020-00619-x
Kumar:2020sag
R. Kumar, S. U. Islam and S. G. Ghosh,
Gravitational lensing by charged black hole in regularized 4D Einstein–Gauss–Bonnet gravity.
Eur. Phys. J. C 80, 1128 (2020).
doi:10.1140/epjc/s10052-020-08606-3
ElMoumni:2020wrf
H. El Moumni, K. Masmar and A. Övgün,
Weak deflection angle of light in two classes of black holes in nonlinear electrodynamics via Gauss–Bonnet theorem.
Int. J. Geom. Meth. Mod. Phys. 19, 2250094 (2022).
doi:10.1142/S0219887822500943
Javed:2020pyz
W. Javed, J. Abbas, Y. Kumaran and A. Övgün,
Weak deflection angle by asymptotically flat black holes in Horndeski theory using Gauss-Bonnet theorem.
Int. J. Geom. Meth. Mod. Phys. 18, 2150003 (2021).
doi:10.1142/S0219887821500031
Xu:2021rld
X. Xu, T. Jiang and J. Jia,
Deflection angle with electromagnetic interaction and gravitational-electromagnetic dual lensing.
JCAP 08, 022 (2021).
doi:10.1088/1475-7516/2021/08/022
Javed:2021arr
W. Javed, A. Hamza and A. Övgün,
Weak Deflection Angle and Shadow by Tidal Charged Black Hole.
Universe 7, 385 (2021).
doi:10.3390/universe7100385
Gao:2021luq
Y. X. Gao and Y. Xie,
Gravitational lensing by hairy black holes in Einstein-scalar-Gauss-Bonnet theories.
Phys. Rev. D 103, no.4, 043008 (2021).
doi:10.1103/PhysRevD.103.043008
Javed:2020lsg
W. Javed, A. Hamza and A. Övgün,
Effect of nonlinear electrodynamics on the weak field deflection angle by a black hole.
Phys. Rev. D 101 (2020), 103521.
doi:10.20944/preprints201911.0142.v1
Gibbons:2008rj
G. W. Gibbons and M. C. Werner,
Applications of the Gauss-Bonnet theorem to gravitational lensing.
Class. Quant. Grav. 25 (2008), 235009.
doi:10.1088/0264-9381/25/23/235009
Ishihara:2016vdc
A. Ishihara, Y. Suzuki, T. Ono, T. Kitamura and H. Asada,
Gravitational bending angle of light for finite distance and the Gauss-Bonnet theorem.
Phys. Rev. D 94 (2016), 084015.
doi:10.1103/PhysRevD.94.084015
Islam:2020xmy
S. U. Islam, R. Kumar and S. G. Ghosh,
Gravitational lensing by black holes in the 4D Einstein-Gauss-Bonnet gravity.
JCAP 09 (2020), 030.
doi:10.1088/1475-7516/2020/09/030
Zhu:2019ura
T. Zhu, Q. Wu, M. Jamil and K. Jusufi,
Shadows and deflection angle of charged and slowly rotating black holes in Einstein-Æther theory.
Phys. Rev. D 100 (2019), 044055.
doi:10.1103/PhysRevD.100.044055
Sakalli:2017ewb
I. Sakalli and A. Ovgun,
Hawking Radiation and Deflection of Light from Rindler Modified Schwarzschild Black Hole.
EPL 118 (2017), 60006.
doi:10.1209/0295-5075/118/60006
Jusufi:2018jof
K. Jusufi, A. Övgün, J. Saavedra, Y. Vásquez and P. A. González,
Deflection of light by rotating regular black holes using the Gauss-Bonnet theorem.
Phys. Rev. D 97 (2018), 124024.
doi:10.1103/PhysRevD.97.124024
Ovgun:2018fte
A. Övgün, İ. Sakallı and J. Saavedra,
Weak gravitational lensing by Kerr-MOG black hole and Gauss–Bonnet theorem.
Annals Phys. 411 (2019), 167978.
doi:10.1016/j.aop.2019.167978
Li:2020wvn
Z. Li, G. Zhang and A. Övgün,
Circular Orbit of a Particle and Weak Gravitational Lensing.
Phys. Rev. D 101 (2020), 124058.
doi:10.1103/PhysRevD.101.124058
Javed:2020fli
W. Javed, M. B. Khadim, A. Övgün and J. Abbas,
Weak gravitational lensing by stringy black holes.
Eur. Phys. J. Plus 135 (2020), 314.
doi:10.1140/epjp/s13360-020-00322-x
Belhaj:2020rdb
A. Belhaj, M. Benali, A. El Balali, H. El Moumni and S. E. Ennadifi,
Deflection angle and shadow behaviors of quintessential black holes in arbitrary dimensions.
Class. Quant. Grav. 37 (2020), 215004.
doi:10.1088/1361-6382/abbaa9
Pourhassan:2017kmm
B. Pourhassan, K. Kokabi and S. Rangyan,
Thermodynamics of higher dimensional black holes with higher order thermal fluctuations.
Gen. Rel. Grav. 49 (2017), 144.
doi:10.1007/s10714-017-2315-7
Mureika:2015sda
J. R. Mureika, J. W. Moffat and M. Faizal,
Black hole thermodynamics in MOdified Gravity (MOG).
Phys. Lett. B 757 (2016), 528-536.
doi:10.1016/j.physletb.2016.04.041
Pourhassan:2016zzc
B. Pourhassan and M. Faizal,
Thermodynamics of a sufficient small singly spinning Kerr-AdS black hole.
Nucl. Phys. B 913 (2016), 834-851.
doi:10.1016/j.nuclphysb.2016.10.013
Pourhassan:2017rie
B. Pourhassan, H. Farahani and S. Upadhyay,
Thermodynamics of higher-order entropy corrected Schwarzschild–Beltrami–de Sitter black hole.
Int. J. Mod. Phys. A 34 (2019), 1950158.
doi:10.1142/S0217751X19501586
Pourhassan:2018wjg
B. Pourhassan, M. Faizal and S. A. Ketabi,
Logarithmic correction of the BTZ black hole and adaptive model of Graphene.
Int. J. Mod. Phys. D 27 (2018), 1850118.
doi:10.1142/S0218271818501183
Bubuianu:2018qsq
L. Bubuianu and S. I. Vacaru,
Black holes with MDRs and Bekenstein–Hawking and Perelman entropies for Finsler–Lagrange–Hamilton Spaces.
Annals Phys. 404 (2019), 10-38.
doi:10.1016/j.aop.2019.02.013
Sharif:2022ccc
M. Sharif and A. Khan,
Thermal fluctuations, quasi-normal modes and phase transitions of regular black hole.
Chin. J. Phys. 77 (2022), 1885-1902.
doi:10.1016/j.cjph.2022.01.002
Sharif:2020hid
M. Sharif and Q. Ama-Tul-Mughani,
Phase transition and thermal fluctuations of quintessential Kerr–Newman-AdS black hole.
Phys. Dark Univ. 30 (2020), 100723.
doi:10.1016/j.dark.2020.100723
Berti:2009kk
E. Berti, V. Cardoso and A. O. Starinets,
Quasinormal modes of black holes and black branes.
Class. Quant. Grav. 26 (2009), 163001.
doi:10.1088/0264-9381/26/16/163001
|
http://arxiv.org/abs/2307.03960v2 | 20230708115812 | Nonparametric estimation of the diffusion coefficient from S.D.E. paths | [
"Eddy Ella-Mintsa"
] | math.ST | [
"math.ST",
"stat.TH"
] |
Nonparametric estimation of the diffusion coefficient from S.D.E. paths
Eddy Ella-Mintsa
==============================================================================================
Consider a diffusion process X=(X_t)_t∈[0,1] observed at discrete times and at high frequency, solution of a stochastic differential equation whose drift and diffusion coefficients are assumed to be unknown. In this article, we focus on the nonparametric estimation of the diffusion coefficient. We propose ridge estimators of the square of the diffusion coefficient from discrete observations of X, obtained by minimization of the least squares contrast. We prove that the estimators are consistent and derive rates of convergence as the size of the sample of paths tends to infinity and the discretization step of the time interval [0,1] tends to zero. The theoretical results are completed with a numerical study over synthetic data.
Keywords. Nonparametric estimation, diffusion process, diffusion coefficient, least squares contrast, repeated observations.
MSC: 62G05; 62M05; 60J60
§ INTRODUCTION
Let X=(X_t)_t∈[0,1] be a one-dimensional diffusion process with finite time horizon, solution of the following stochastic differential equation:
dX_t=b(X_t)dt+σ(X_t)dW_t, X_0=0
where (W_t)_t≥ 0 is a standard Brownian motion. The drift function b and the diffusion coefficient σ are assumed to be unknown Lipschitz functions. We denote by (ℱ_t)_t∈ [0,1] the natural filtration of the diffusion process X. The goal of the article is to construct, from N discrete observations X̅^j=(X^j_kΔ_n)_0≤ k≤ n, 1 ≤ j ≤ N with time step Δ_n = 1/n, a nonparametric estimator of the square of the diffusion coefficient σ^2(.). We are in the framework of high-frequency data since the time step Δ_n tends to zero as n tends to infinity. Furthermore, we consider estimators of σ^2(.) built from a single diffusion path (N = 1), and those built on N paths when N →∞. In this paper, we first propose a ridge estimator of σ^2(.) on a compact interval. Secondly, we focus on a nonparametric estimation of σ^2(.) on the real line ℝ. We measure the risk of any estimator σ̂^2 of the square of the diffusion coefficient σ^2 by 𝔼[σ̂^2 - σ^2^2_n,N], where σ̂^2 - σ^2^2_n,N := (Nn)^-1∑_j=1^N∑_k=0^n-1(σ̂^2(X^j_kΔ) - σ^2(X^j_kΔ))^2 is an empirical norm defined from the sample paths.
Related works.
There is a large literature on the estimation of coefficients of diffusion processes, and we focus on the papers studying the estimation of σ^2.
Estimation of the diffusion coefficient has been considered in the parametric case (see e.g. <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>). In the nonparametric case, estimators of the diffusion coefficient from discrete observations are proposed under various frameworks.
First, the diffusion coefficient is constructed from one discrete observation of the diffusion process (N = 1) in long time (T →∞) (see e.g. <cit.>, <cit.>, <cit.>, <cit.>), or in short time (T = 1) (see e.g. <cit.>, <cit.>, <cit.>, <cit.>, <cit.>). Note that in short time (T<∞), only the diffusion coefficient can be estimated consistently from a single discrete path, contrary to the drift function, whose consistent estimation relies on repeated discrete observations of the diffusion process (see e.g. <cit.>, <cit.>). For the case of short time diffusion processes (for instance T = 1), estimators of a time-dependent diffusion coefficient t ↦σ^2(t) have been proposed. In this context, <cit.> built a nonparametric estimator of t↦σ^2(t) and studied its L_2 risk using wavelet methods, <cit.> studies the L_p risk of a kernel estimator of σ^2(t), and <cit.> derived a minimax rate of convergence of order n^-ps/(1+2s) where s>1 is the smoothness parameter of the Besov space ℬ^s_p,∞([0,1]). For the space-dependent diffusion coefficient x ↦σ^2(x), a first estimator based on kernels and built from a single discrete observation of the diffusion process with T = 1 is proposed in <cit.>. The estimator has been proved to be consistent under a condition on the bandwidth, but a rate of convergence of its risk of estimation has not been established.
Secondly, the diffusion coefficient is built in short time (T < ∞) from N repeated discrete observations with N →∞. In <cit.>, a nonparametric estimator of σ^2 is proposed from repeated discrete observations on the real line when the time horizon T = 1. The estimator has been proved to be consistent with a rate of order N^-1/5 over the space of Lipschitz functions.
Two main methods are used to build consistent nonparametric estimators of x ↦σ^2(x). The first method is the one using kernels (see e.g. <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>), the other method consists in estimating σ^2 as the solution of a nonparametric regression model using the least squares approach. Since the diffusion coefficient is assumed to belong to an infinite-dimensional space, the method consists in projecting σ^2 onto a finite-dimensional subspace, estimating the projection and making a data-driven selection of the dimension by minimizing a penalized least squares contrast (see e.g. <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>).
Main contribution.
In this article, we assume that we have at our disposal N i.i.d. discrete observations of length n of the diffusion process X. The main objectives of this paper are the following.
* Construct a consistent and implementable ridge estimator of σ^2 from a single diffusion path (N=1) using the least squares approach. We derive rates of convergence of the risk of estimation of the ridge estimators built on a compact interval and on the real line over a Hölder space, taking advantage of the properties of the local time of the diffusion process, and its link with the transition density.
* We extend the result to the estimation of σ^2 from repeated observations of the diffusion process (N →∞). We prove that the estimators built on a compact interval and on the real line ℝ are more efficient, in terms of their respective rates, than nonparametric estimators built from a single diffusion path.
* Focusing on the support of the diffusion coefficient, we consider an intermediate case between a compact interval and ℝ by proposing a ridge estimator of σ^2 restricted to the compact interval [-A_N,A_N] where A_N→∞ as N→∞. The benefit of this approach is that the resulting projection estimator can reach a faster rate of convergence compared to the rate obtained on the real line ℝ.
* Finally, we propose adaptive estimators of σ^2 based on a data-driven selection of the dimension through the minimization of the penalized least squares contrast in different settings.
We sum up below the rates of convergence (up to a log factor) of the ridge estimators of σ^2_|I with I⊆ℝ over a Hölder space, defined in the next section, with a smoothness parameter β≥ 1.
Outline of the paper.
In Section <ref>, we define our framework with the key assumptions on the coefficients of the diffusion process, ensuring for instance that Equation (<ref>) admits a unique strong solution. Section <ref> is devoted to the non-adaptive estimation of the diffusion coefficient from one diffusion path, both on a compact interval and on the real line ℝ. In Section <ref>, we extend the study to the non-adaptive estimation of the diffusion coefficient from repeated observations of the diffusion process. We propose in Section <ref> adaptive estimators of the diffusion coefficient, and Section <ref> completes the study with a numerical evaluation of the performance of the estimators. We prove our theoretical results in Section <ref>.
§ FRAMEWORK AND ASSUMPTIONS
Consider a diffusion process X=(X_t)_t∈[0,1], solution of Equation (<ref>) whose drift and diffusion coefficient satisfy the following assumption.
* There exists a constant L_0>0 such that b and σ are L_0-Lipschitz functions on ℝ.
* There exist constants σ_0,σ_1>0 such that : σ_0≤σ(x)≤σ_1, ∀ x∈ℝ.
* σ∈𝒞^2(ℝ) and there exist C >0 and α≥ 0 such that:
|σ^'(x)|+|σ^''(x)|≤ C(1+|x|^α), ∀ x∈ℝ.
Under Assumption <ref>, X=(X_t)_t∈[0,1] is the unique strong solution of Equation (<ref>), and this unique solution admits a transition density (t,x)↦ p_X(t,x). Besides, we draw from Assumption <ref> that
∀ q≥ 1, 𝔼[sup_t∈[0,1]|X_t|^q]<∞.
§.§ Definitions and notations
We suppose that we have at our disposal a sample D_N,n={X̅^j, j=1,⋯,N} constituted of N independent copies of the discrete observation X̅ = (X_kΔ_n)_0≤ k≤ n of the diffusion process X, where Δ_n = 1/n is the time step. The objective is to construct, from the sample D_N,n, a nonparametric estimator of the square σ^2 of the diffusion coefficient on an interval I ⊆ℝ. In the sequel, we consider two main cases, the first one being the estimation of σ^2 on the interval I from a single path (N=1 and n→∞). For the second case, we assume that both N and n tend to infinity.
For each measurable function h, such that 𝔼[h^2(X_t)]<∞ for all t∈[0,1], we define the following empirical norms:
h^2_n:=𝔼_X[1/n∑_k=0^n-1h^2(X_kΔ_n)], h^2_n,N:=1/Nn∑_j=1^N∑_k=0^n-1h^2(X^j_kΔ_n).
For all h ∈𝕃^2(I), we have
h^2_n=∫_Ih^2(x)1/n∑_k=0^n-1p_X(kΔ_n,x)dx=∫_Ih^2(x)f_n(x)dx,
where f_n: x↦1/n∑_k=0^n-1p_X(kΔ_n,x) is a density function. For the case of non-adaptive estimators of σ^2, we also establish bounds of the risks of the estimators based on the empirical norm ._n or the 𝕃^2-norm . when the estimation interval I is compact.
For any integers p,q ≥ 2 and any matrix M ∈ℝ^p × q, we denote by ^tM the transpose of M.
§.§ Spaces of approximation
We propose projection estimators of σ^2 on a finite-dimensional subspace. To this end, we consider for each m ≥ 1, a m-dimensional subspace 𝒮_m given as follows:
𝒮_m:=Span(ϕ_ℓ, ℓ=0,⋯,m-1), m≥ 1
where the functions (ϕ_ℓ, ℓ∈ℕ) are continuous, linearly independent and bounded on I. Furthermore, we need to control the ℓ^2-norm of the coordinate vectors of elements of 𝒮_m, which leads to the following constrained subspace,
𝒮_m,L:={h=∑_ℓ=0^m-1a_ℓϕ_ℓ, ∑_ℓ=0^m-1a^2_ℓ=𝐚^2_2≤ mL, 𝐚=(a_0,⋯,a_m-1), L>0}.
Note that 𝒮_m,L⊂𝒮_m and 𝒮_m,L is no longer a vector space. The control of the coordinate vectors allows to establish an upper bound of the estimation error that tends to zero as n→∞ or N,n→∞. In fact, we prove in the next sections that the construction of consistent estimators of σ^2 requires the functions h=∑_ℓ=0^m-1a_ℓϕ_ℓ to be bounded, such that
h_∞≤max_ℓ=0,…,m-1ϕ_ℓ_∞ 𝐚_2.
This condition is satisfied for the functions of the constrained subspaces 𝒮_m,L with m ≥ 1. In this article, we work with the following bases.
[B] The B-spline basis
This is an example of a non-orthonormal basis defined on a compact interval.
u_i = -A+i2A/K.
One calls B-spline functions, the piecewise polynomial functions (B_ℓ)_ℓ=-M,⋯,K-1 of degree M, associated with the knots vector 𝐮 (see <cit.>, Chapter 14). The B-spline functions are linearly independent smooths functions returning zero for all x∉[-A,A], and satisfying some smoothness conditions established in <cit.>. Thus, we consider approximation subspaces 𝒮_K+M defined by
𝒮_K+M=Span{B_ℓ, ℓ=-M,⋯,K-1}
of dimension (𝒮_K+M)=K+M, and in which, each function h=∑_ℓ=-M^K-1a_ℓB_ℓ is M-1 times continuously differentiable thanks to the properties of the spline functions (see <cit.>). Besides, the spline basis is included in the definition of both the subspace 𝒮_m and the constrained subspace 𝒮_m,L (see Equations (<ref>) and (<ref>)) with m = K + M and for any coordinates vector (a_-M,…,a_K-1) ∈^K+M,
∑_ℓ=-M^K-1a_ℓB_ℓ = ∑_ℓ=0^m-1a_ℓ-MB_ℓ-M.
The integer M ∈ℕ^* is fixed, while K varies in the set of integers ℕ^*. If we assume that σ^2 belongs to the Hölder space Σ_I(β,R) given as follows:
Σ_I(β,R):={h∈𝒞^⌊β⌋+1(I), |h^(ℓ)(x)-h^(ℓ)(y)|≤ R|x-y|^β-l, x,y∈ I},
where β≥ 1, ℓ=⌊β⌋ and R>0, then the unknown function σ^2_|I restricted to the compact interval I can be approximated in the constrained subspace 𝒮_K+M,L spanned by the spline basis. This approximation results to the following bias term:
inf_h ∈𝒮_K+M,Lh - σ^2_|I^2_n≤ C|I|^2βK^-2β
where the constant C > 0 depends on β, R and M, and |I| = sup I - inf I. The above result is a modification of Lemma D.2 in <cit.>.
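The knot vector 𝐮 and the K+M spline functions can be constructed explicitly; the following Python sketch (a minimal implementation of the Cox–de Boor recursion, with the arbitrary choices A=1, K=4, M=3) builds the basis of 𝒮_K+M and checks the partition-of-unity property on [-A,A].

```python
import numpy as np

A, K, M = 1.0, 4, 3                      # illustrative values: interval [-A, A], K cells, degree M
# Knot vector u_{-M}, ..., u_{K+M} with (M+1)-fold knots at the two endpoints
interior = [-A + i * 2 * A / K for i in range(K + 1)]
knots = np.array([-A] * M + interior + [A] * M)   # length K + 2M + 1

def bspline(i, p, x, t):
    """Cox-de Boor recursion: value at x of the i-th B-spline of degree p on knots t (0/0 := 0)."""
    if p == 0:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = 0.0 if t[i + p] == t[i] else (x - t[i]) / (t[i + p] - t[i]) * bspline(i, p - 1, x, t)
    right = 0.0 if t[i + p + 1] == t[i + 1] else \
        (t[i + p + 1] - x) / (t[i + p + 1] - t[i + 1]) * bspline(i + 1, p - 1, x, t)
    return left + right

dim = K + M                              # number of basis functions spanning S_{K+M}
xs = np.linspace(-A, A, 101)[:-1]        # avoid the right endpoint where the half-open convention bites
design = np.array([[bspline(i, M, x, knots) for i in range(dim)] for x in xs])
print("dimension:", dim, "| partition of unity:", np.allclose(design.sum(axis=1), 1.0))
```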
[F] The Fourier basis
The subspace 𝒮_m can be spanned by the Fourier basis
{f_ℓ, ℓ = 0, …, m-1} = {1,√(2)cos(2π jx), √(2)sin(2π jx), j=1,...,d} with m=2d+1.
The above Fourier basis is defined on the compact interval [0,1]. The definition can be extended to any compact interval I by replacing the basis functions x ↦ f_ℓ(x) by x ↦ 1/(max I - min I)f_ℓ((x-min I)/(max I - min I)). We use this basis to build the estimators of σ^2 on a compact interval I ⊂ℝ.
Define for all s ≥ 1 and for any compact interval I ⊂ℝ, the Besov space ℬ^s_2,∞(I), which is the space of functions f ∈ L^2(I) such that the ⌊ s⌋^th derivative f^(⌊ s ⌋) belongs to the space ℬ^s-⌊ s ⌋_2,∞(I) given by
ℬ^s - ⌊ s ⌋_2,∞(I) = {f ∈ L^2(I) and w_2,f(t)/t^s - ⌊ s ⌋∈ L^∞(I∩ℝ^+)}
where for s-⌊ s⌋∈ (0,1), w_2,f(t)=sup_|h|≤ tτ_hf - f_2 with τ_hf(x) = f(x-h), and for s-⌊ s⌋ = 1, w_2,f(t)=sup_|h|≤ tτ_hf + τ_-hf - 2f_2. Thus, if we assume that the function σ^2_|I belongs to the Besov space ℬ^s_2,∞(I), then it can be approximated in a constrained subspace 𝒮_m,L spanned by the Fourier basis. Moreover, under Assumption <ref> and from Lemma 12 in <cit.>, there exists a constant C>0, depending on the constant τ_1 of Equation (<ref>) and on the smoothness parameter s of the Besov space, such that
inf_h∈𝒮_m,Lh-σ^2_|I^2_n≤τ_1inf_h∈𝒮_m,Lh-σ^2_|I^2≤ C|σ^2_|I|^2_s m^-2s
where |σ^2_|I|_s is the semi-norm of σ^2_|I in the Besov space ℬ^s_2,∞(I).
Note that for all β≥ 1, the Hölder space Σ_I(β,R) and the Besov space ℬ^β_2,∞ satisfy:
L^∞(ℝ) ∩Σ_I(β,R) ⊂ℬ^β_∞,∞(I) ⊂ℬ^β_2,∞(I)
(see <cit.>, Chap. 2 page 16). As a result, we rather consider in the sequel the Hölder space Σ_I(β,R) which can also be approximated by the Fourier basis.
[H] The Hermite basis
The basis is defined from the Hermite functions (h_j,j≥ 0) defined on ℝ and given for all j≥ 0 and for all x∈ℝ by:
h_j(x)=c_jH_j(x)e^-x^2/2, where H_j(x)=(-1)^jexp(x^2)d^j/dx^j(e^-x^2) and c_j=(2^jj!√(π))^-1/2.
The polynomials H_j(x), j≥ 0, are the Hermite polynomials, and (h_j,j≥ 0) is an orthonormal basis of L^2(ℝ). Furthermore, for all j≥ 1 and x∈ℝ, |h_j(x)|≤ c|x|exp(-c_0x^2) for x^2≥(3/2)(4j+3), where c,c_0>0 are constants independent of j (see <cit.>, Proof of Proposition 3.5). We use the Hermite basis in the sequel for the estimation of σ^2 on the real line ℝ.
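For numerical purposes, the Hermite functions can be evaluated, for instance, from NumPy's physicists' Hermite polynomials; the sketch below is an illustration, not part of the estimation procedure, and also checks numerically the orthonormality of the first few h_j on a grid.

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

def hermite_function(j, x):
    """h_j(x) = (2^j j! sqrt(pi))^{-1/2} H_j(x) exp(-x^2/2), with H_j the physicists' Hermite polynomial."""
    c = np.zeros(j + 1)
    c[j] = 1.0                                   # coefficient vector selecting H_j in hermval
    return hermval(x, c) * np.exp(-x ** 2 / 2) / sqrt(2.0 ** j * factorial(j) * sqrt(pi))

x = np.linspace(-20, 20, 200001)
dx = x[1] - x[0]
gram = np.array([[np.sum(hermite_function(i, x) * hermite_function(j, x)) * dx
                  for j in range(5)] for i in range(5)])
print(np.allclose(gram, np.eye(5), atol=1e-6))   # numerical check of the orthonormality of (h_j)
```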
If one assumes that σ^2 belongs to the Sobolev space W^s_f_n(ℝ,R) given for all s ≥ 1 by
W^s_f_n(ℝ,R) := {g ∈ L^2(ℝ, f_n(x)dx), ∀ ℓ≥ 1, g - g_ℓ^2_n≤ Rℓ^-s}
where for each ℓ≥ 1, g_ℓ is the L^2(ℝ, f_n(x)dx)-orthogonal projection of g on the ℓ-dimensional vector space 𝒮_ℓ spanned by the Hermite basis. Consider a compact interval I ⊂ℝ and the following spaces:
W^s(I,R) := {g ∈ L^2(I), ∑_j=0^∞j^s<g,ϕ_j>^2≤ R},
W^s_f_n(I,R) := {g ∈ L^2(I, f_n(x)dx), ∀ ℓ≥ 1, g - g_ℓ^2_n≤ Rℓ^-s}
where (ϕ_j)_j≥ 0 is an orthonormal basis defined on I and, for all ℓ≥ 1, g_ℓ is the orthogonal projection of g onto 𝒮_ℓ = Span(ϕ_j, j≤ℓ) of dimension ℓ≥ 1 (see e.g. <cit.>). Then, for all g ∈ W^s(I,R), we have
g=∑_j=0^∞<g,ϕ_j>ϕ_j and g-g_ℓ^2 = ∑_j=ℓ+1^∞<g,ϕ_j>^2≤ℓ^-s∑_j=ℓ+1^∞j^s<g,ϕ_j>^2≤ Rℓ^-s.
We have W^s_f_n(I,R) = W^s(I,R) since the empirical norm ._n and the L^2-norm . are equivalent. The space W^s_f_n(ℝ,R) extends the space W^s_f_n(I,R) to the case I = ℝ, with (ϕ_j)_j ≥ 0 the Hermite basis.
The B-spline basis is used for the estimation of σ^2 on a compact interval on one side (N = 1 and N>1), and on the real line ℝ on the other side, restricting σ^2 to the compact interval [-log(n), log(n)] for N = 1, or [-log(N), log(N)] for N > 1, and bounding the exit probability of the process X from the interval [-log(N), log(N)] (or [-log(n), log(n)]) by a term that is negligible with respect to the estimation error. In a similar context, the Fourier basis is used as an orthonormal basis to build nonparametric estimators of σ^2 on a compact interval and on ℝ, both for N = 1 and for N > 1. The main goal is to show that, in addition to the spline basis, which is not orthogonal, we can build consistent projection estimators of σ^2 on orthonormal bases. The advantage of the Hermite basis compared to the Fourier basis is its definition on the whole real line ℝ. As a result, we use the Hermite basis to propose, for N > 1, a projection estimator of σ^2 whose support is the real line ℝ.
Denote by ℳ, the set of possible values of the dimension m ≥ 1 of the approximation subspace 𝒮_m. If (ϕ_0,⋯,ϕ_m-1) is an orthonormal basis, then for all m,m^'∈ℳ such that m < m^', we have 𝒮_m⊂𝒮_m^'. For the case of the B-spline basis, one can find a subset 𝒦⊂ℳ of the form
𝒦={2^q, q=0,⋯,q_max}
such that for all K,K^'∈𝒦, K < K^' implies 𝒮_K + M⊂𝒮_K^' + M (see for example <cit.>). The nesting of subspaces 𝒮_m, m∈ℳ is of great importance in the context of adaptive estimation of the diffusion coefficient and the establishment of upper-bounds for the risk of adaptive estimators.
In the sequel, we denote by [𝐅], [𝐇] and [𝐁] the respective collection of subspaces spanned by the Fourier basis, the Hermite basis and the B-spline basis.
§.§ Ridge estimators of the square of the diffusion coefficient
We establish from Equation (<ref>) and the sample D_N,n the regression model for the estimation of σ^2. For all j ∈ [[1,N]] and k ∈ [[0,n-1]], define
U^j_kΔ_n := (X^j_(k+1)Δ_n - X^j_kΔ_n)^2/Δ_n.
The increments U^j_kΔ_n are approximations in discrete times of d<X,X>_t/dt since, from Equation (<ref>), one has d<X,X>_t = σ^2(X_t)dt. From Equation (<ref>), we obtain the following regression model,
U^j_kΔ_n=σ^2(X^j_kΔ_n)+ζ^j_kΔ_n+R^j_kΔ_n, ∀ (j,k)∈[[1,N]]×[[0,n-1]]
where U^j_kΔ_n is the response variable, ζ^j_kΔ_n and R^j_kΔ_n are respectively the error term and a negligible residual whose explicit formulas are given in Section <ref>.
We consider the least squares contrast γ_n,N defined for all m ∈ℳ and for all function h∈𝒮_m,L by
γ_n,N(h):=1/Nn∑_j=1^N∑_k=0^n-1(U^j_kΔ-h(X^j_kΔ_n))^2.
For each dimension m ∈ℳ, the projection estimator σ^2_m of σ^2 over the subspace 𝒮_m,L satisfies:
σ^2_m∈arg min_h∈𝒮_m,L γ_n,N(h).
Indeed, for each dimension m ∈ℳ, the estimator σ^2_m of σ^2 given in Equation (<ref>) satisfies σ^2_m=∑_ℓ=0^m-1a_ℓϕ_ℓ, where
𝐚=(a_0,⋯,a_m-1):=arg min_𝐚^2_2≤ mL𝐔-𝐅_m𝐚^2_2
with ^tU = (U^1_0,…,U^1_(n-1)Δ_n, …, U^N_0,…,U^N_(n-1)Δ_n) and the matrix 𝐅_m is defined as follows
F_m := ( ^t(ϕ_ℓ(X^j_0),…,ϕ_ℓ(X^j_(n-1)Δ_n)))_1 ≤ j ≤ N0 ≤ℓ≤ m-1∈ℝ^Nn × m.
The vector of coefficients 𝐚 is unique; it is called the ridge estimator because of the ℓ^2 constraint on the coordinate vectors (see <cit.> Chap. 3 page 61).
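A minimal sketch of the computation of the ridge coefficients is given below. It assumes that the design matrix 𝐅_m and the response vector 𝐔 have already been formed (for instance from the simulation sketch above); the bisection on the Lagrange multiplier is one standard way of enforcing the constraint 𝐚^2_2≤ mL and is not taken from the paper.

```python
import numpy as np

def ridge_constrained(F, U, radius2):
    """Minimize ||U - F a||_2^2 subject to ||a||_2^2 <= radius2.

    If the ordinary least squares solution already satisfies the constraint it is
    returned; otherwise the Lagrange multiplier lam of the ridge system
    (F^T F + lam I) a = F^T U is found by bisection so that ||a||_2^2 = radius2.
    """
    G, c = F.T @ F, F.T @ U
    a_ols = np.linalg.lstsq(F, U, rcond=None)[0]
    if a_ols @ a_ols <= radius2:
        return a_ols
    lo, hi = 0.0, 1.0
    while True:                       # find an upper bracket for the multiplier
        a = np.linalg.solve(G + hi * np.eye(G.shape[0]), c)
        if a @ a <= radius2:
            break
        hi *= 2.0
    for _ in range(100):              # bisection on lam
        mid = 0.5 * (lo + hi)
        a = np.linalg.solve(G + mid * np.eye(G.shape[0]), c)
        if a @ a > radius2:
            lo = mid
        else:
            hi = mid
    return np.linalg.solve(G + hi * np.eye(G.shape[0]), c)

# Hypothetical usage, with F_m built from a chosen basis evaluated at the design points
# and U the vector of squared scaled increments:
# a_hat = ridge_constrained(F_m, U, radius2=m * L)
```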
§ ESTIMATION OF THE DIFFUSION COEFFICIENT FROM A SINGLE DIFFUSION PATH
This section focuses on the nonparametric estimation of the square of the diffusion coefficient σ^2 on an interval I ⊆ℝ when only a single diffusion path is observed at discrete times (N=1). It is proved in the literature that one can construct consistent estimators of the diffusion coefficient from one path when the time horizon T is finite (see e.g. <cit.>). Two cases are considered. First, we propose a ridge estimator of σ^2 on a compact interval I ⊂ℝ, say for example I = [-1,1]. Secondly, we extend the study to the estimation of σ^2 on the real line I =ℝ.
§.§ Non-adaptive estimation of the diffusion coefficient on a compact interval
In this section, we consider the estimator σ^2_m of the compactly supported square of the diffusion coefficient σ^2_|I on the constrained subspaces 𝒮_m,L from the observation of a single diffusion path.
Since the interval I⊂ is compact, the immediate benefit is that the density function f_n defined from the transition density of the diffusion process X̅ = (X_kΔ) is bounded from below. In fact, there exist constants τ_0,τ_1∈(0,1] such that
∀ x∈ I, τ_0≤ f_n(x)≤τ_1,
(see <cit.>). Thus, for each function h∈𝕃^2(I),
τ_0h^2≤h^2_n≤τ_1h^2
where . is the 𝕃^2-norm. Equation (<ref>) allows to establish global rates of convergence of the risk of the ridge estimators σ^2_m of σ^2_|I with m∈ℳ using the L^2-norm . which is, in this case, equivalent with the empirical norm ._n.
To establish an upper-bound of the risk of estimation that tends to zero as n tends to infinity, we need to establish equivalence relations between the pseudo-norms ._n,1 (N=1) and ._X on one side, and ._X and the L^2-norm . on the other side, where the random pseudo-norm ._X is defined for each function h∈𝕃^2(I) by
h^2_X := ∫_0^1h^2(X_s)ds.
Define for x∈, the local time ℒ^x of the diffusion process X = (X_t)_t∈[0,1] by
ℒ^x = ε→ 0lim1/2ε∫_0^1_(x-ε,x+ε)(X_s)ds.
In general, the local time of a continuous semimartingale is a.s. càdlàg (see e.g. <cit.>). But, for diffusion processes and under Assumption <ref>, the local time ℒ^x is bicontinuous at any point x∈ (see Lemma <ref> in Section <ref>). Furthermore, we obtain the following result.
Under Assumption <ref>, and for any continuous and integrable function h, it yields,
* ∫_0^1h(X_s)ds = ∫_ℝh(x)ℒ^xdx.
* For all x∈ℝ, 𝔼(ℒ^x) = ∫_0^1p_X(s,x)ds.
In Lemma <ref>, we remark that there is a link between the local time and the transition density of the diffusion process. Thus, if we consider the pseudo-norm ._X depending on the process X = (X_t)_t∈[0,1] and given in Equation (<ref>), and
using Lemma <ref>, we obtain that,
𝔼[h_X^2] = ∫_ℝh^2(x)𝔼[ℒ^x]dx = ∫_ℝh^2(x)∫_0^1p_X(s,x)ds dx≥τ_0h^2,
where ∫_0^1p_X(s,x)ds≥τ_0 >0 for x∈ I (see <cit.>, Lemma 4.3), and h^2 is the 𝕃^2-norm of h.
Set L = log(n). Suppose that σ^2 is approximated in one of the collections [𝐁] and [𝐅]. Under Assumption <ref>, it yields
𝔼[σ^2_m - σ^2_|I^2_n,1] ≤ 3h∈𝒮_m,Linfh-σ^2_|I^2_n+C(m/n + m^2γ+1log(n)/n^γ/2 + Δ^2_n)
𝔼[σ^2_m - σ^2_|I^2_n] ≤ 34τ_1/τ_0h∈𝒮_m,Linfh-σ^2_|I^2_n+C^'(m/n + m^2γ+1log(n)/n^γ/2 + Δ^2_n)
where the number γ > 1 comes from the use of the Hölder inequality. The constant C>0 depends on σ_1 and the constant C^'>0 depends on σ_1, τ_0 and τ_1.
We observe that the upper bound on the risk of σ^2_m is composed of the bias term, which quantifies the cost of approximating σ^2_|I in the constrained space 𝒮_m,L, the estimation error O(m/n) and the cost of the time discretization O(Δ^2_n), both established on a random event in which the pseudo-norms ._n,1 and ._X are equivalent; the probability of the complementary event, multiplied by σ^2_m - σ^2_|I^2_∞, is bounded by the term O(m^2γ+1log(n)/n^γ/2) (see Lemma <ref> and the proof of Theorem <ref>).
The next result proves that the risk of estimation can reach a rate of convergence of the same order as the rate established in <cit.>, provided the parameter γ > 1 is chosen so that the term O(m^2γ+1log(n)/n^γ/2) is of the same order as the estimation error m/n. Note that the risk σ^2_m - σ^2_|I^2_n is random since
σ^2_m - σ^2_|I^2_n = 𝔼_X[1/n∑_k=0^n-1(σ^2_m - σ^2_|I)^2(X_kΔ)]
and the estimator σ^2_m is built from an independent copy X̅^1 of the discrete times process X̅. Thus, the expectation relates to the estimator σ^2_m.
Suppose that σ^2∈Σ_I(β,R) with β > 3/2, and γ = 2(2β+1)/(2β-3). Assume that K_opt∝ n^1/(2β+1) for [𝐁] (m_opt = K_opt + M), and m_opt∝ n^1/(2β+1) for [𝐅]. Under Assumptions <ref>, it yields,
𝔼[σ^2_m_opt - σ^2_|I^2_n,1] = O(log(n)n^-2β/(2β+1))
𝔼[σ^2_m_opt - σ^2_|I^2_n] = O(log(n)n^-2β/(2β+1)).
Note that we obtain the exact same rates when considering the risk of σ^2_m_opt defined with the 𝕃^2-norm equivalent to the empirical norm ._n. Moreover, these rates of convergence are of the same order than the optimal rate n^-s/(2s+1) established in <cit.> over a Besov ball.
§.§ Non-adaptive estimation of the diffusion coefficient on the real line
In this section, we propose a ridge estimator of σ^2 on the real line ℝ, built from one diffusion path. In this context, the main drawback is that the density function f_n:x↦1/n∑_k=0^n-1p_X(kΔ,x) is no longer bounded from below. Consequently, the empirical norm ._n is no longer equivalent to the 𝕃^2-norm . and the consistency of the estimation error is no longer ensured under the assumptions made in the previous sections alone. Consider the truncated estimator σ^2_m,L of σ^2 given by
σ^2_m,L(x) = σ^2_m(x)_σ^2_m(x) ≤√(L) + √(L)_σ^2_m(x) > √(L).
Thus, the risk of the ridge estimator σ^2_m,L is upper-bounded as follows:
𝔼[σ^2_m,L - σ^2^2_n,1] ≤ 𝔼[(σ^2_m,L - σ^2)_[-log(n),log(n)]^2_n,1] + 𝔼[(σ^2_m,L - σ^2)_[-log(n),log(n)]^c^2_n,1]
≤ 𝔼[(σ^2_m,L - σ^2)_[-log(n),log(n)]^2_n,1] + 4log^2(n)t∈[0,1]supℙ(|X_t|>log(n)).
The first term on the r.h.s. is equivalent to the risk of a ridge estimator of σ^2 on the compact interval [-log(n),log(n)]. The second term on the r.h.s. is upper-bounded using Lemma <ref>. We derive below, an upper-bound of the risk of estimation of σ^2_m.
Suppose that L = log^2(n). Under Assumption <ref>, it yields,
𝔼[σ^2_m,L - σ^2^2_n,1] ≤h∈𝒮_m,Linfh-σ^2^2_n+C√(m^qlog^2(n)/n)
where C>0 is a constant, q = 1 for the collection [𝐁], and q = 2 for the collection [𝐅].
We first remark that the upper-bound of the risk of the truncated estimator of σ^2 differs with respect to each of the chosen bases. This contrast comes from the fact that the Fourier basis {f_ℓ, ℓ = 0, …, m-1} and the spline basis {B_ℓ-M, ℓ = 0, …, m-1} satisfy
∑_ℓ = 0^m - 1f_ℓ(x)≤ C_fm, and ∑_ℓ=0^m-1B_ℓ-M(x) = 1.
Secondly, the estimation error is not as fine as the one established in Theorem <ref>, where σ^2 is estimated on a compact interval. In fact, on the real line ℝ, the pseudo-norm ._X can no longer be equivalent to the 𝕃^2-norm since the transition density is not bounded from below on ℝ. Consequently, we cannot take advantage of the method used to establish the risk bound of Theorem <ref>, which relies on the equivalence relation between the pseudo-norms ._n,1 and ._X on one side, and ._X and the 𝕃^2-norm . on the other side. Moreover, the term of order 1/n^2 does not appear since it is dominated by the estimation error.
We obtain below rates of convergence of the ridge estimator of σ^2 for each of the collections [𝐁] and [𝐅].
Suppose that σ^2∈Σ_I(β,R) with β≥ 1
For [B].
Assume that K ∝ n^1/(4β+1). Under Assumptions <ref>, there exists a constant C>0 depending on β and σ_1 such that
𝔼[σ^2_m,L - σ^2^2_n,1] ≤ Clog^2β(n)n^-2β/(4β+1).
For [F].
Assume that m ∝ n^1/2(2β+1). Under Assumptions <ref>, it yields,
𝔼[σ^2_m,L - σ^2^2_n,1] ≤ Clog(n)n^-β/(2β+1)
where the constant C>0 depends on β and σ_1.
As we can remark, the obtained rates are slower than the ones established in Section <ref> where σ^2 is estimated on a compact interval. This result is the immediate consequence of the result of Theorem <ref>.
§ ESTIMATION OF THE DIFFUSION COEFFICIENT FROM REPEATED DIFFUSION PATHS
We now focus on the estimation of the square of the diffusion coefficient from i.i.d. discrete observations of the diffusion process (N →∞).
§.§ Non-adaptive estimation of the diffusion coefficient on a compact interval
We study the rate of convergence of the ridge estimators σ^2_m of σ^2_|I from D_N,n when I is a compact interval. The next theorem gives an upper-bound of the risk of our estimators σ^2_m, m∈ℳ.
Suppose that L = log(Nn) and ℳ = {1,…,√(min(n,N))/log(Nn)}. Under Assumption <ref> and for all m ∈ℳ, there exist constants C>0 and C^'>0 depending on σ_1 such that,
𝔼[σ^2_m-σ^2_|I^2_n,N]≤ 3h∈𝒮_m,Linfh-σ^2_|I^2_n+C(m/Nn+mlog(Nn)exp(-C√(min(n,N)))+Δ^2_n)
𝔼[σ^2_m-σ^2_|I^2_n]≤ 34h∈𝒮_m,Linfh-σ^2_|I^2_n + C^'(m/Nn+mlog(Nn)exp(-C√(min(n,N)))+Δ^2_n).
Note that the result of Theorem <ref> is independent of the choice of the basis that generate the approximation space 𝒮_m. The first term on the right-hand side represents the approximation error of the initial space, the second term O(m/(Nn)) is the estimation error, and the last term characterizes the cost of the time discretization. The next result is derived from Theorem <ref>.
Suppose that σ^2∈Σ_I(β,R) with β > 3/2. Moreover, assume that K_opt∝ (Nn)^1/(2β+1) for [𝐁] (m_opt = K_opt + M), and m_opt∝ (Nn)^1/(2β+1) for [𝐅]. Under Assumptions <ref>, it yields,
𝔼[σ^2_m_opt-σ^2_|I^2_n,N] = O((Nn)^-2β/(2β+1))
𝔼[σ^2_m_opt-σ^2_|I^2_n] = O((Nn)^-2β/(2β+1)).
The obtained result shows that the nonparametric estimators of σ^2_|I based on repeated observations of the diffusion process are more efficient when N,n→∞. Note that the same rate is obtained if the risk of σ^2_m_opt is defined with the 𝕃^2-norm . equivalent to the empirical norm ._n.
The rate obtained in Corollary <ref> is established for β > 3/2. If we consider for example the collection [B] and assume that β∈ [1, 3/2], then K_opt∝ (Nn)^1/(2β+1) belongs to ℳ for n ∝√(N)/log^4(N) and we have
𝔼[σ^2_m_opt-σ^2_|I^2_n,N]≤ C(Nn)^-2β/(2β+1).
Under the condition n ∝√(N)/log^4(N) imposed on the length of diffusion paths, the obtained rate is of order n^-3β/(2β+1) (up to a log-factor) which is equivalent to N^-3β/2(2β+1) (up to a log-factor).
§.§ Non-adaptive estimation of the diffusion coefficient on the real line
Consider a ridge estimator of σ^2 on ℝ, built from N independent copies of the diffusion process X observed in discrete times, where both N and n tend to infinity.
For each m ∈ℳ, we still denote by σ^2_m the ridge estimators of σ^2 and σ^2_m,L the truncated estimators of σ^2 given in Equation (<ref>). We establish, through the following theorem, the first risk bound that highlights the main error terms.
Suppose that L=log^2(N). Under Assumptions <ref> and for any dimension m∈ℳ, the following holds:
𝔼[σ^2_m,L-σ^2^2_n,N] ≤ 2h∈𝒮_m,Linfh-σ^2^2_n+C(√(m^qlog^2(N)/Nn) + Δ^2_n)
where C>0 is a constant depending on the upper bound σ_1 of the diffusion coefficient. Moreover, q = 1 for the collection [𝐁] and q = 2 for the collection [𝐇].
If we consider the risk of σ^2_m,L using the empirical norm ._n, then we obtain
𝔼[σ^2_m,L-σ^2^2_n] ≤ 2h∈𝒮_m,Linfh-σ^2^2_n+C(√(m^qlog^2(N)/Nn) + m^2log^3(N)/N+Δ^2_n)
The risk bound given in Equation (<ref>) is a sum of four error terms. The first term is the approximation error linked to the choice of the basis, the second term is the estimation error given in Theorem <ref>, the third term m^2log^3(N)/N comes from the relation linking the empirical norm ._n to the pseudo-norm ._n,N (see Lemma <ref>), and the last term is the cost of the time-discretization.
We derive, in the next result, rates of convergence of the risk bound of the truncated ridge estimators σ^2_m,L based on the collections [𝐁] and [𝐇] respectively.
Suppose that σ^2∈Σ_I(β,R) with β≥ 1, I = [-log(N),log(N)], and K ∝ (Nn)^1/(4β+1) for [𝐁], and σ^2∈ W^s_f_n(,R) with s ≥ 1 and m ∝ (Nn)^1/2(2s+1) for [𝐇]. Under Assumption <ref>, the following holds:
For [𝐁] 𝔼[σ^2_m,L-σ^2^2_n,N] ≤ C(log^2β(N)(Nn)^-2β/(4β+1) + 1/n^2),
For [𝐇] 𝔼[σ^2_m,L-σ^2^2_n,N] ≤ C(log^3(N)(Nn)^-s/(2s+1) + 1/n^2).
where C>0 is a constant depending on β and σ_1 for [𝐁], or s and σ_1 for [𝐇].
The obtained rates are slower compared to the rates established in Section <ref> for the estimation of σ^2_|I where the interval I⊂ is compact. In fact, the method used to establish the rates of Theorem <ref> from which the rates of Corollary <ref> are obtained, does not allow us to derive rates of order (Nn)^-α/(2α+1) (up to a log-factor) with α≥ 1 (e.g. α = β, s). Finally, if we consider the risk defined with the empirical norm ._n, then from Equation (<ref>) with n ∝ N and assuming that m ∝ N^1/4(s+1) for [𝐇] or K ∝ N^1/4(β+1) for [𝐁], we obtain
[𝐁]: 𝔼[σ^2_m,L-σ^2^2_n] ≤ Clog^2β(N)(Nn)^-β/2(β+1),
[𝐇]: 𝔼[σ^2_m,L-σ^2^2_n] ≤ Clog^3(N)(Nn)^-s/2(s+1),
where C>0 is a constant depending on σ_1 and on the smoothness parameter. We can see that the obtained rates are slower compared to the results of Corollary <ref> for n ∝ N. The deterioration of the rates comes from the additional term of order m^2log^3(N)/N which is now regarded as the new estimation error since it dominates the other term in each case as N→∞.
§.§ Non-adaptive estimation of the diffusion coefficient on a compact interval depending on the sample size
This section combines Sections <ref> and <ref>, focusing on the estimation of σ^2 on the compact interval [-A_N,A_N], where (A_N) is a strictly positive sequence such that A_N →∞ as N→∞. Consequently, the estimation interval tends to ℝ as the sample size N tends to infinity.
Define from the observations and for each dimension m∈ℳ, the following matrices:
Ψ_m:=(1/Nn∑_j=1^N∑_k=0^n-1ϕ_ℓ(X^j_kΔ)ϕ_ℓ^'(X^j_kΔ))_0≤ℓ,ℓ^'≤ m-1,
Ψ_m:=𝔼(Ψ_m)=([1/n∑_k=0^n-1ϕ_ℓ(X_kΔ)ϕ_ℓ^'(X_kΔ)])_0≤ℓ,ℓ^'≤ m-1.
These two matrices play an essential role in the construction of a consistent projection estimator of σ^2 over any approximation subspace 𝒮_m spanned by the basis (ϕ_0,⋯,ϕ_m-1). Furthermore, for all h=∑_ℓ=0^m-1a_ℓϕ_ℓ∈𝒮_m, we have:
h^2_n,N = ^t𝐚Ψ_m𝐚, h^2_n = 𝔼(h^2_n,N) = ^t𝐚Ψ_m𝐚,
where 𝐚=(a_0,⋯,a_m-1). The Gram matrix Ψ_m is invertible under the spline basis (see <cit.>) and the Hermite basis (see <cit.>). We define for any invertible matrix M, the operator norm M^-1_op of M^-1 given by M^-1_op=1/inf{λ_j} where the λ_j are eigenvalues of M.
For all dimension m∈ℳ, the matrices Ψ_m and 𝐅_m satisfy:
Ψ_m= 1/Nn ^t𝐅_m𝐅_m.
Consider the ridge estimator σ^2_m of σ^2_A_N = σ^2_[-A_N,A_N], with m∈ and A_N →∞ as N →∞. The estimator σ^2_m can reach a faster rate of convergence if the Gram matrix Ψ_m given in Equation (<ref>) satisfies the following condition,
ℒ(m)(Ψ^-1_m_op∨ 1)≤ CN/log^2(N), where ℒ(m):=x∈ℝsup∑_ℓ=0^m-1ϕ^2_ℓ(x)<∞
where C>0 is a constant. In fact, the optimal rate of convergence is achieved on a random event Ω_n,N,m in which the two empirical norms ._n,N and ._n are equivalent (see <cit.>, <cit.>). Then, Condition (<ref>) is used to upper-bound ℙ(Ω^c_n,N,m) by a negligible term with respect to the considered rate (see <cit.>). Note that in Equation (<ref>), the square on log(N) is justified by the fact that the value of the constant C>0 is unknown, and that the spline basis is not orthonormal (see <cit.>, proof of Lemma 7.8). The assumption of Equation (<ref>) is also made in <cit.> on the operator norm of Ψ^-1_m based on an orthonormal basis with the bound 𝐜N/log(N), where the value of 𝐜 is known and chosen such that the upper bound on ℙ(Ω^c_n,N,m) is negligible with respect to the estimation error. In our framework, since the transition density is approximated by Gaussian densities, we derive the following result.
Suppose that n ∝ N and that the spline basis is constructed on the interval [-A_N,A_N] with A_N > 0. Under Assumption <ref> , for all m∈ and for all w∈^m such that w_2,m=1, there exists a constant C>0 such that
For [𝐇]: w^'Ψ_mw ≥C/log(N)exp(-3c_σ(4m+3)/2(1-log^-1(N))),
For [𝐁]: w^'Ψ_mw ≥CA_N/mlog(N)exp(-c_σA^2_N),
where the constant c_σ>1 that comes from the approximation of the transition density, depends on the diffusion coefficient σ.
The result of Lemma <ref> implies for the Hermite basis that
(Ψ^-1_m_op∨ 1)≤log(N)/Cexp(3c_σ(4m+3)/2(1-log^-1(N)))
where the upper-bound is an exponentially increasing sequence of N since the dimension m∈ has a polynomial growth with respect to N. Thus, Condition (<ref>) cannot be satisfied for the Hermite basis in our framework. Considering the spline basis, one has ℒ(m)=ℒ(K+M)≤ 1 and there exists a constant C>0 such that
Ψ^-1_m_op≤ Cmlog(N)/A_Nexp(c_σA^2_N).
For K ∝(N^2/(2β+1)A_N), Condition (<ref>) is satisfied if the estimation interval [-A_N,A_N] is chosen such that A_N = o (√(log(N))). In the next theorem, we prove that the spline-based ridge estimator of σ^2_A_N reaches a faster rate of convergence compared to the result of Corollary <ref> for the collection [𝐁].
Suppose that N ∝ n and consider the ridge estimator σ^2_A_N,m of σ^2_A_N based on the spline basis. Furthermore, suppose that L = log(N), A_N = o(√(log(N))) and K ∝ (Nn)^1/(2β+1)A_N (m = K + M). Under Assumptions <ref> and for σ^2∈Σ_I(β,R) with I = [-A_N, A_N], the following holds:
𝔼[σ^2_A_N,m-σ^2_A_N^2_n,N] ≤ Clog^β(N)(Nn)^-2β/(2β+1)
where C>0 is a constant depending on β.
The above result shows that the risk of the ridge estimator of σ^2_A_N on [-A_N,A_N] reaches a rate of order (Nn)^-β/(2β+1) (up to a log-factor) thanks to Condition (<ref>), which allows us to take advantage of the equivalence relation between the empirical norms ._n and ._n,N given in Equation (<ref>) to derive a finer estimation error (see proof of Theorem <ref>). Note that the obtained result depends on an appropriate choice of the estimation interval [-A_N,A_N], which tends to ℝ as N tends to infinity. Therefore, any choice of A_N such that A_N/√(log(N))⟶ +∞ cannot lead to a consistent estimation error, since Equation (<ref>) is no longer satisfied for the upper-bounding of ℙ(Ω^c_n,N,m) by a term that tends to zero as N →∞. Thus, the assumption A_N = o(√(log(N))) is a necessary and sufficient condition for Condition (<ref>) to hold, which leads, together with Assumption <ref>, to the result of Theorem <ref>. Finally, under the assumptions of Theorem <ref> and considering the risk of σ^2_A_N,m based on the empirical norm ._n, we also obtain
𝔼[σ^2_A_N,m-σ^2_A_N^2_n] = O(log^β(N)(Nn)^-2β/(2β+1)).
In fact, under Condition (<ref>), the estimator σ^2_A_N,m satisfies the results of Theorem <ref> with I = [-A_N,A_N] and A_N = o(√(log(N))), which implies rates of the same order for the two empirical norms.
§ ADAPTIVE ESTIMATION OF THE DIFFUSION COEFFICIENT FROM REPEATED OBSERVATIONS
In this section, we suppose that n ∝ N and we propose an adaptive ridge estimator of σ^2 by selecting an optimal dimension from the sample D_N,n. Indeed, consider the estimator σ^2_K,L where K satisfies:
K:=K∈𝒦min{γ_n,N(σ^2_K)+pen(K)}
and the penalty function pen : K↦pen(K) is established using the chaining technique of
<cit.>. We derive below the risk of the adaptive estimator of σ^2_|I when the interval I⊂ is compact and the sample size N →∞.
Suppose that N ∝ n, L=log(N) and consider the collection [B] with
K ∈𝒦 = {2^q, q=0,1,…,q_max}⊂ℳ = {1,…,√(N)/log(N)}.
Under Assumption <ref> , there exists a constant C>0 such that,
𝔼[σ^2_K,L-σ^2_|I^2_n,N]≤ 34K∈𝒦inf{h∈𝒮_K+M,Linfh-σ^2_|I^2_n+pen(K)}+C/Nn
where pen(K) = κ(K+M)log(N)/Nn with κ > 0 a numerical constant.
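As an illustration of the selection rule (and not of the exact code used for the simulations below), the following sketch evaluates, for each candidate dimension K, the least squares contrast γ_n,N of the corresponding spline ridge estimator plus the penalty pen(K) = κ(K+M)log(N)/(Nn), and returns the minimizer; the in-sample fitted values are assumed to be precomputed, e.g. with a routine such as the least squares sketch given earlier.

import numpy as np

def select_dimension(U, preds, M=3, kappa=4.0):
    # U: array of shape (N, n) with the increments U^j_{k Delta_n};
    # preds: dict {K: array of shape (N, n)} holding the in-sample values of the
    # spline ridge estimator of dimension m = K + M evaluated at the X^j_{k Delta_n}.
    N, n = U.shape
    crit = {}
    for K, fitted in preds.items():
        gamma = np.mean((U - fitted) ** 2)                       # contrast gamma_{n,N}
        crit[K] = gamma + kappa * (K + M) * np.log(N) / (N * n)  # penalized criterion
    return min(crit, key=crit.get)                               # selected dimension K_hat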
We deduce from Corollary <ref> and its assumptions that the adaptive estimator σ^2_K,L satisfies:
𝔼[σ^2_K,L-σ^2_|I^2_n] = O((Nn)^-2β/(2β+1)).
This result is justified since the penalty term is of the same order (up to a log-factor) than the estimation error established in Theorem <ref>.
Considering the adaptive estimator of σ^2 on the real line I= when the sample size N →∞, we obtain the following result.
Suppose that N ∝ n and L = log(N), and consider the collection [𝐁] with
K ∈𝒦 = {2^q, q=0,1,…,q_max}⊂ℳ = {1,…,√(N)/log(N)}.
Under Assumption <ref> and for N large enough, there exists a constant C>0 such that,
[σ^2_K,L - σ^2^2_n,N] ≤ 3K∈𝒦inf{h∈𝒮_K+M,Linfh-σ^2^2_n + pen(K)} + C/Nn.
where pen(K) = κ^'(K+M)log(N)/Nn with κ^'>0 a numerical constant.
The penalty term is of the same order as the one obtained in Theorem <ref>, where σ^2 is estimated on a compact interval. One can deduce that the adaptive estimator reaches a rate of the same order as that of the non-adaptive estimator given in Corollary <ref> for the collection [𝐁].
If we consider the adaptive estimator of the compactly supported diffusion coefficient built from a single diffusion path, we obtain below an upper-bound of its risk of estimation.
Suppose that N = 1, L = √(log(n)) and consider the collection [𝐁] with
K ∈𝒦 = {2^q, q=0,…,q_max}⊂ℳ = {1,…,√(n)/log(n)}.
Under Assumption <ref>, it yields
𝔼[σ^2_K,L-σ^2_|I^2_n,1] ≤ 3K∈𝒦inf{h∈𝒮_K+M,Linfh-σ^2_|I^2_n+pen(K)} + C/n.
where C>0 is a constant depending on τ_0, and pen(K) = κ(K+M)log(n)/n with κ>0 a numerical constant.
We deduce from Theorem <ref> that if σ^2 ∈Σ_I(β,R), then the adaptive estimator σ^2_K,L reaches a rate of order n^-β/(2β+1) (up to a log-factor). The result of this theorem essentially follows from that of Theorem <ref>; the slight difference lies in the use, in the proofs, of the local time of the process and of the equivalence relation between the pseudo-norm ._n,1 and the pseudo-norm ._X instead of the empirical norm ._n considered in the proof of Theorem <ref>.
§ NUMERICAL STUDY
This section is devoted to a numerical study based on simulations. Section <ref> presents the chosen diffusion models. In Section <ref>, we describe the scheme for the implementation of the ridge estimators. We mainly focus on the B-spline basis for the numerical study, and in Section <ref> we add a numerical study of the performance of the Hermite-based ridge estimator of σ^2 on ℝ. Finally, we compare the efficiency of our estimator built on the real line from a single path with that of the Nadaraya-Watson estimator proposed in <cit.>.
§.§ Models and simulations
Recall that the time horizon is T=1 and X_0 = 0. Consider the following diffusion models:
Model 1 Ornstein-Uhlenbeck: b(x) = 1-x, σ(x)= 1
Model 2: b(x) = 1-x, σ(x) = 1-x^2
Model 3: b(x) = 1-x, σ(x) = 1/3+sin(2π x)+cos^2(π/2x)
Model 1 is the commonly used Ornstein-Uhlenbeck model, known to be a simple diffusion model satisfying Assumption <ref>. Model 2 does not satisfy Assumption <ref>. Model 3 satisfies Assumption <ref> with a multimodal diffusion coefficient.
The size N of the sample D_N,n takes values in the set {1,10,100,1000} and the length n of the paths varies in the set {100,250,500,1000}. As we work with the spline basis, the dimension m=K+M of the approximation space is chosen such that M=3 and K takes values in 𝒦={2^p, p=0,⋯,5}, so that the subspaces are nested inside each other. Diffusion paths are simulated with the simulation function of the package (see <cit.> for more details on the simulation of SDEs); a plain Euler–Maruyama stand-in is sketched below for illustration.
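The following routine is only an illustrative stand-in for the dedicated R simulation function used in our experiments; it reproduces the drift b(x) = 1-x and the three diffusion coefficients of Models 1–3 with a standard Euler–Maruyama scheme.

import numpy as np

def simulate_paths(model, N, n, T=1.0, x0=0.0, seed=None):
    # Euler-Maruyama discretization of dX_t = b(X_t)dt + sigma(X_t)dW_t on [0, T],
    # returning an array of shape (N, n + 1) of paths observed at times k * T / n.
    rng = np.random.default_rng(seed)
    dt = T / n
    b = lambda x: 1.0 - x                                     # common drift of Models 1-3
    sigma = {1: lambda x: np.ones_like(x),
             2: lambda x: 1.0 - x ** 2,
             3: lambda x: 1.0 / 3.0 + np.sin(2.0 * np.pi * x)
                          + np.cos(np.pi * x / 2.0) ** 2}[model]
    X = np.empty((N, n + 1))
    X[:, 0] = x0
    for k in range(n):
        dW = rng.normal(0.0, np.sqrt(dt), size=N)             # Brownian increments
        X[:, k + 1] = X[:, k] + b(X[:, k]) * dt + sigma(X[:, k]) * dW
    return X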
§.§ Implementation of the ridge estimators
In this section, we assess the quality of estimation of the adaptive estimator σ^2_m in each of the 3 models through the computation of its risk of estimation. We compare the performance of the adaptive estimator with that of the oracle estimator σ^2_m^* where m^* is given by:
m^*:=m∈ℳmin σ^2_m-σ^2^2_n,N.
For the spline basis, we have m^* = K^* + M with M=3. Finally, we complete the numerical study with a representation of a set of 10 estimators of σ^2 for each of the 3 models.
We evaluate the MISE of the spline-based adaptive estimators σ^2_K by repeating 100 times the following steps:
* Simulate samples D_N,n and D_N^',n with N∈{1,10,100,1000}, N^'=100 and n ∈{100, 250,1000}.
* For each K∈𝒦, and from D_N,n, compute estimators σ^2_K given in Equations (<ref>) and (<ref>).
* Select the optimal dimension K∈𝒦 using Equation (<ref>) and compute K^* from Equation (<ref>)
* Using D_N^',n, evaluate σ^2_K-σ^2^2_n,N^' and σ^2_K^*-σ^2^2_n,N^'.
We deduce the risks of estimation by averaging the values of σ^2_m-σ^2^2_n,N^' and σ^2_m^*-σ^2^2_n,N^' over the 100 repetitions. Note that in this section we consider the estimation of σ^2 on the compact interval I = [-1,1] and on the real line ℝ. The unknown parameters κ and κ^' in the penalty functions given in Theorem <ref> and Theorem <ref> respectively are numerically calibrated (details are given in Appendix <ref>), and we choose κ = 4 and κ^' = 5 as their respective values.
§.§ Numerical results
We present in this section the numerical results of the performance of the spline-based adaptive estimators of σ^2_|I with I ⊆ together with the performance of the oracle estimators. We consider the case I=[-1,1] for the compactly supported diffusion coefficient, and the case I=.
Tables <ref> and <ref> present the numerical results of estimation of σ^2_|I from simulated data following the steps given in Section <ref>.
The results of Table <ref> and Table <ref> show that the adaptive estimator σ^2_K is consistent, since its MISE tends to zero as both the size N of the sample D_N,n and the length n of the paths increase. Moreover, note that in most cases, the ridge estimators of the compactly supported diffusion coefficients perform better than those of the non-compactly supported ones. As expected, the oracle estimator generally performs better than the adaptive estimator. Nonetheless, the performances are very close in several cases, highlighting the efficiency of the data-driven selection of the dimension.
Another important observation is the significant influence of the length n of the paths on the performance of σ^2_K and σ^2_K^*,L (compare Table <ref> with Table <ref>), which means that estimators built from higher-frequency data are more efficient. A similar remark holds for the theoretical results obtained in Sections <ref> and <ref>.
Performance of the Hermite-based estimator of the diffusion coefficient
We focus on the estimation of σ^2 on ℝ and assess the performance of its Hermite-based estimator (see Section <ref>). We present in Table <ref> the performance of the oracle estimator σ^2_m^*,L.
From the numerical results of Table <ref>, we observe that the Hermite-based estimator of σ^2 is consistent as the sample size N and the length n of the paths take larger values.
Estimation of the diffusion coefficient from one path
Consider ridge estimators of σ^2_|I with I=[-1,1]. For the case of the adaptive estimators of σ^2_|I, the dimension K is selected such that
K = K∈𝒦minγ_n(σ^2_K) + pen(K)
where pen(K) = κ(K+M)log(n)/n with κ >0. We choose the numerical constant κ = 4 and we derive the numerical performance of the adaptive estimator of σ^2_|I.
Table <ref> gives the numerical performances of both the adaptive estimator and the oracle estimator of σ^2_|I on the compact interval I=[-1,1] and from a single diffusion path. From the obtained results, we see that the estimators are numerically consistent. However, we note that the convergence is slow (increasing n from 100 to 1000), which highlights the significant impact of the number N of paths on the efficiency of the ridge estimator.
Comparison of the efficiency of the ridge estimator of the diffusion coefficient with its Nadaraya-Watson estimator.
Consider the adaptive estimator σ^2_K of the square of the diffusion coefficient built on the real line ℝ from a single diffusion path (N=1), where the dimension K is selected using Equation (<ref>). For the numerical assessment, we use the interval I = [-10^6, 10^6] to approximate the real line ℝ, and then use Equation (<ref>) for the data-driven selection of the dimension.
We want to compare the efficiency of σ^2_K with that of the Nadaraya-Watson estimator of σ^2, computed from a diffusion path X̅ = (X_k/n)_1≤ k≤ n and defined for all x ∈ℝ by
S_n(x) = ∑_k=1^n-1K(X_k/n - x/h_n)[X_(k+1)/n - X_k/n]^2/n/∑_k=1^nK(X_k/n - x/h_n)
where K is a positive kernel function and h_n is the bandwidth. The estimator S_n(x) is consistent under the condition nh^4_n→ 0 as n tends to infinity (see <cit.>). We use the dedicated function of the R-package to compute the Nadaraya-Watson estimator S_n; an illustrative sketch is given below.
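For illustration only (the benchmark reported in the table is computed with the R routine mentioned above), a Gaussian-kernel version of S_n with Scott's rule-of-thumb bandwidth can be sketched as follows; the squared increments are rescaled by n = 1/Δ_n, in line with the increments U_kΔ_n used throughout the paper.

import numpy as np

def nadaraya_watson_sigma2(X, x_grid):
    # X: a single path observed at times k/n, k = 0,...,n; x_grid: evaluation points.
    n = len(X) - 1
    h = np.std(X[:-1]) * n ** (-1.0 / 5.0)                   # Scott's rule with d = 1
    resp = n * np.diff(X) ** 2                               # n (X_{(k+1)/n} - X_{k/n})^2
    K = np.exp(-0.5 * ((np.asarray(x_grid)[:, None] - X[None, :-1]) / h) ** 2)
    return (K @ resp) / np.maximum(K.sum(axis=1), 1e-300)    # S_n evaluated on x_grid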
We remark from the results of Table <ref> that our ridge estimator is more efficient. Note that for the kernel estimator S_n, the bandwidth is computed using the rule of thumb of Scott (see <cit.>). The bandwidth is proportional to n^-1/(d+4) where n is the number of points, and d is the number of spatial dimensions.
§.§ Concluding remarks
The results of our numerical study show that our ridge estimators, built both on a compact interval and on the real line ℝ, are consistent as N and n take larger values, or as n alone takes larger values when the estimators are built from a single path. These results are in accordance with the theoretical results established in the previous sections. Moreover, as expected, we obtain the consistency of the Hermite-based estimators of σ^2 on the real line ℝ. Nonetheless, we only focus on the Hermite-based oracle estimator since we did not establish a risk bound for the corresponding adaptive estimator. Finally, we remark that the ridge estimator of σ^2 built from a single path performs better than the Nadaraya-Watson kernel estimator proposed in <cit.> and implemented in the R-package .
§ CONCLUSION
In this article, we have proposed ridge-type estimators of the diffusion coefficient on a compact interval from a single diffusion path. We took advantage of the local time of the diffusion process to prove the consistency of non-adaptive estimators of σ^2 and derived a rate of convergence of the same order as the optimal rate established in <cit.>. We also proposed an estimator of σ^2 on the real line ℝ from a single path; we proved its consistency using the method described in Section <ref> and derived a rate of convergence of order n^-β/(4β+1) over a Hölder space for the collection [𝐁]. Then, we extended the study to the estimation of σ^2 from repeated discrete observations of the diffusion process, and established rates of convergence of the ridge estimators both on a compact interval and on ℝ. We completed the study by proposing adaptive estimators of σ^2 on a compact interval for N=1 and N→∞, and on the real line for N→∞.
A natural perspective on the estimation of the diffusion coefficient is the establishment of a minimax rate of convergence for the compactly supported (square of the) diffusion coefficient from repeated discrete observations of the diffusion process. The case of the non-compactly supported diffusion coefficient may be considerably more challenging, since the transition density of the diffusion process is no longer lower-bounded; this can lead to different rates of convergence depending on the considered method (see Section <ref>).
§ ACKNOWLEDGEMENTS
I would like to thank my supervisors, Christophe Denis, Charlotte Dion-Blanc, and Viet-Chi Tran, for their sound advice, guidance and support throughout this research project.
Their experience in scientific research and their expertise in stochastic calculus and process statistics were decisive in providing precise and relevant answers to the issues raised in this paper, taking into account what has already been done in the literature.
I am particularly grateful for their precise and constant help throughout the writing of this article, from editorial advice to proofreading the introduction, the proofs and all other sections of the paper.
§ PROOFS
In this section, we prove our main results of Sections <ref>, <ref> and <ref>. To simplify our notations, we set Δ_n = Δ(=1/n) and constants are generally denoted by C>0 or c>0 whose values can change from a line to another. Moreover, we use the notation C_α in case we need to specify the dependency of the constant C on a parameter α.
§.§ Technical results
Recall first some useful results on the local time and estimates of the transition density of diffusion processes.
For all integer q≥ 1, there exists C^*>0 depending on q such that for all 0≤ s<t≤ 1,
𝔼[|X_t-X_s|^2q]≤ C^*(t-s)^q.
The proof of Lemma <ref> is provided in <cit.>.
Under Assumptions <ref>, there exist constants c_σ >1, C > 1 such that for all t ∈ (0,1], x ∈ℝ,
1/C√(t)exp(-c_σx^2/t) ≤ p_X(t,x) ≤C√(t)exp(-x^2/c_σt).
The proof of Proposition <ref> is provided in <cit.>, Proposition 1.2.
Let h be a L_0-lipschitz function. Then there exists h̃∈𝒮_K_N,M, such that
|h̃(x)-h(x)| ≤ C log(N)/K_N, ∀ x ∈ (-log(N),log(N)),
where C >0 depends on L_0, and M.
The proof of Proposition <ref> is provided in <cit.>. The finite-dimensional vector space 𝒮_K_N,M = 𝒮_K_N+M is introduced in Section <ref>.
Under Assumption <ref>, there exist C_1,C_2 >0 such that for all A >0,
t ∈ [0,1]supℙ(|X_t|≥ A) ≤C_1/Aexp(-C_2A^2).
The proof of Lemma <ref> is provided in <cit.>, Lemma 7.3.
Under Assumption <ref>, the following holds:
∀ x∈, ℒ^x = ℒ^x_- a.s.
where ℒ^x_- = ε→ 0limℒ^x-ε.
The result of Lemma <ref> justifies the definition of the local time ℒ^x, for x∈, given in Equation (<ref>).
From <cit.>, Theorem 1.7, we have
∀ x∈, ℒ^x - ℒ^x_- = 2∫_0^1_X_s = xdX_s = 2∫_0^1_X_s = xb(X_s)ds + 2∫_0^1_X_s = xσ(X_s)dW_s.
For all x∈ℝ and for all s∈[0,1], we have for all ε>0,
ℙ(X_s = x) = ε→ 0lim ℙ(X_s≤ x + ε) - ε→ 0lim ℙ(X_s≤ x - ε) = ε→ 0lim F_s(x + ε) - ε→ 0lim F_s(x - ε)
= F_s(x) - F_s(x^-)
= 0
Thus, for all x∈,
𝔼[|ℒ^x - ℒ^x_-|] ≤ 2∫_0^1|b(x)|ℙ(X_s = x)ds + 2𝔼[|∫_0^1_X_s=xσ(X_s)dW_s|]
= 2𝔼[|∫_0^1_X_s=xσ(X_s)dW_s|].
Using the Cauchy–Schwarz inequality, we conclude that
𝔼[|ℒ^x - ℒ^x_-|] ≤ 2√(𝔼(∫_0^1_X_s=xσ^2(X_s)ds)) = 2σ(x)√(∫_0^1ℙ(X_s = x)ds) = 0.
Using the Markov inequality, we have
∀ ε>0, ℙ(|ℒ^x - ℒ^x_-|>ε) ≤1/ε𝔼[|ℒ^x - ℒ^x_-|] = 0.
We finally conclude that for all x ∈,
ℙ(ℒ^x≠ℒ^x_-) = ℙ(|ℒ^x - ℒ^x_-|>0) = 0.
§.§ Proofs of Section <ref>
§.§.§ Proof of Lemma <ref>
The proof is divided into two parts for each of the two results to be proven.
First result.
Since the function h is continuous on , let H be a primitive of h on . We deduce that for all s ∈ [0,1],
h(X_s) = ε→ 0limH(X_s + ε) - H(X_s - ε)/2ε = ε→ 0lim1/2ε∫_X_s - ε^X_s + εh(x)dx = ε→ 0lim1/2ε∫_-∞^+∞h(x)_(x-ε,x+ε)(X_s)dx.
Finally, since h is integrable on and using the theorem of dominated convergence, we obtain
∫_0^1h(X_s)ds = ∫_-∞^+∞h(x)ε→ 0lim1/2ε∫_0^1_(x-ε,x+ε)(X_s)dsdx = ∫_-∞^+∞h(x)ℒ^xdx.
Second result.
Fix t∈(0,1] and consider P_X : (t,x) ↦∫_-∞^xp_X(t,y)dy the cumulative density function of the random variable X_t of the density function x ↦ p_X(t,x). We have:
∀ x∈ℝ, 𝔼(ℒ^x) = ε→ 0lim1/2ε∫_0^1𝔼[_(x-ε,x+ε)(X_s)]ds = ε→ 0lim1/2ε∫_0^1ℙ(x - ε≤ X_s≤ x + ε)ds
= ∫_0^1ε→ 0limP_X(s,x+ε) - P_X(s,x-ε)/2εds
= ∫_0^1p_X(s,x)ds.
§.§.§ Proof of Theorem <ref>
Let Ω_n,m be the random event in which the two pseudo-norms ._n,1 and ._X are equivalent and given by
Ω_n,m := g∈𝒮_m∖{0}⋂{|g^2_n,1/g^2_X-1| ≤1/2}.
The proof of Theorem <ref> relies on the following lemma.
Let γ > 1 be a real number. Under Assumption <ref>, the following holds
ℙ(Ω^c_n,m) ≤ Cm^2γ/n^γ/2,
where C>0 is a constant depending on γ.
The parameter γ > 1 has to be chosen appropriately (i.e. such that m^2γ/n^γ/2 = o(1/n)) so that we obtain a variance term of the risk of the estimator σ^2_m of order mlog(n)/n (see Theorem <ref> and Corollary <ref>).
Recall that since N = 1,
ζ^1_kΔ=ζ^1,1_kΔ+ζ^1,2_kΔ+ζ^1,3_kΔ is the error term of the regression model, with:
ζ^1,1_kΔ=1/Δ[(∫_kΔ^(k+1)Δσ(X^1_s)dW^1_s)^2-∫_kΔ^(k+1)Δσ^2(X^1_s)ds],
ζ^1,2_kΔ=2/Δ∫_kΔ^(k+1)Δ((k+1)Δ-s)σ^'(X^1_s)σ^2(X^1_s)dW^1_s,
ζ^1,3_kΔ=2b(X^1_kΔ)∫_kΔ^(k+1)Δσ(X^1_s)dW^1_s.
Besides,
R^1_kΔ
=R^1,1_kΔ+R^1,2_kΔ, with:
R^1,1_kΔ=1/Δ(∫_kΔ^(k+1)Δb(X^1_s)ds)^2+1/Δ∫_kΔ^(k+1)Δ((k+1)Δ-s)Φ(X^1_s)ds
R^1,2_kΔ=2/Δ(∫_kΔ^(k+1)Δ(b(X^1_s)-b(X^1_kΔ))ds)(∫_kΔ^(k+1)Δσ(X^1_s)dW^1_s)
where
Φ:=2bσ^'σ+[σ^''σ+(σ^')^2]σ^2.
By definition of the projection estimator σ^2_m for each m∈ℳ (see Equation (<ref>)), for all h∈𝒮_m,L, we have:
γ_n,1(σ^2_m)-γ_n,1(σ^2_|I)≤γ_n,1(h)-γ_n,1(σ^2_|I).
Furthermore, for all h∈𝒮_m,L,
γ_n,1(h)-γ_n,1(σ^2_|I)=σ^2_|I-h^2_n,1+2ν_1(σ^2_|I-h)+2ν_2(σ^2_|I-h)+2ν_3(σ^2_|I-h)+2μ(σ^2_|I-h),
where,
ν_i(h) = 1/n∑_k=0^n-1h(X^1_kΔ)ζ^1,i_kΔ, i∈{1,2,3}, μ(h)=1/n∑_k=0^n-1h(X^1_kΔ)R^1_kΔ,
and ζ^1,1_kΔ, ζ^1,2_kΔ, ζ^1,3_kΔ are given in Equations (<ref>), (<ref>), (<ref>), and finally, R^1_kΔ = R^1,1_kΔ+R^1,2_kΔ given in Equations (<ref>) and (<ref>). Then, for all m ∈ℳ, and for all h ∈𝒮_m,L, we obtain from Equation (<ref>) that
σ^2_m-σ^2_|I^2_n,1≤h-σ^2_|I^2_n,1+2ν(σ^2_m-h)+2μ(σ^2_m-h), with ν=ν_1+ν_2+ν_3.
Then, it comes,
𝔼[σ^2_m-σ^2_|I^2_n,1] ≤h∈𝒮_m,Linfh-σ^2_|I^2_n+2𝔼[ν(σ^2_m-h)]+2𝔼[μ(σ^2_m-h)].
Besides, for any a,d>0, using the inequality xy ≤η x^2 + y^2/η with η = a, d, we have,
2ν(σ^2_m-h) ≤2/aσ^2_m-σ^2_|I^2_X+2/ah-σ^2_|I^2_X+ah∈𝒮_m, h_X=1supν^2(h),
2μ(σ^2_m-h) ≤2/dσ^2_m-σ^2_|I^2_n,1+2/dh-σ^2_|I^2_n,1+d/n∑_k=1^n(R^1_kΔ)^2.
§.§.§ Upper bound of 1/n∑_k=1^n(R^1_kΔ)^2
We have:
∀ k∈[[1,n]], R^1_kΔ=R^1,1_kΔ+R^1,2_kΔ+R^1,3_kΔ with,
R^1,1_kΔ=1/Δ(∫_kΔ^(k+1)Δb(X^1_s)ds)^2, R^1,2_kΔ=1/Δ∫_kΔ^(k+1)Δ((k+1)Δ-s)Φ(X^1_s)ds
R^1,3_kΔ=2/Δ(∫_kΔ^(k+1)Δ(b(X^1_s)-b(X^1_kΔ))ds)(∫_kΔ^(k+1)Δσ(X^1_s)dW^1_s).
For all k∈[[1,n]], using the Cauchy-Schwarz inequality and Equation (<ref>),
𝔼[|R^1,1_kΔ|^2] ≤𝔼[(∫_kΔ^(k+1)Δb^2(X^1_kΔ)ds)^2]≤Δ𝔼[∫_kΔ^(k+1)Δb^4(X^1_kΔ)ds]≤ CΔ^2.
Consider now the term R^1,2_kΔ. From Equation (<ref>), we have Φ=2bσ^'σ+[σ^''σ+(σ^')^2]σ^2 and according to Assumption <ref>, there exists a constant C>0 depending on σ_1 and α such that
|Φ(X^1_s)| ≤ C[(2+|X^1_s|)(1+|X^1_s|^α) + (1+|X^1_s|^α)^2].
Then, from Equation (<ref>) and for all s∈(0,1],
𝔼[Φ^2(X^1_s)] ≤ Cs∈(0,1]sup𝔼[(2+|X^1_s|)^2(1+|X^1_s|^α)^2 + (1+|X^1_s|^α)^4] < ∞
and
𝔼[|R^1,2_kΔ|^2] ≤1/Δ^2∫_kΔ^(k+1)Δ((k+1)Δ-s)^2ds∫_kΔ^(k+1)Δ𝔼[Φ^2(X^1_s)]ds≤ CΔ^2
Finally, under Assumption <ref>, from Equation (<ref>) and using the Cauchy-Schwarz inequality, we have
𝔼[|R^1,3_kΔ|^2] ≤4/Δ^2𝔼[Δ∫_kΔ^(k+1)ΔL^2_0|X^1_s-X^1_kΔ|^2ds(∫_kΔ^(k+1)Δσ(X^1_s)dW_s)^2]
≤4/Δ√(𝔼[L^4_0Δ∫_kΔ^(k+1)Δ|X^1_s-X^1_kΔ|^4ds]𝔼[(∫_kΔ^(k+1)Δσ(X^1_s)dW_s)^4])
≤ CΔ^2.
As a result, there exists a constant C>0 such that,
𝔼[1/n∑_k=1^n(R^1_kΔ)^2]≤ CΔ^2.
We set a = d = 8 and considering the event Ω_n,m on which the empirical norms ._X and ._n,1 are equivalent, we deduce from Equations (<ref>), (<ref>) and (<ref>) that,
𝔼[σ^2_m-σ^2_|I^2_n,1_Ω_n,m]≤ 3h∈𝒮_minfh-σ^2_|I^2_n+C𝔼(h∈𝒮_m, h_X=1supν^2(h))+CΔ^2
where C>0 is a constant depending on σ_1.
§.§ Upper bound of 𝔼(h∈𝒮_m, h_X=1supν^2(h))
For all h=∑_ℓ=0^m-1a_ℓϕ_ℓ∈𝒮_m such that h^2_X=1, we have h^2≤1/τ_0 (see Equation (<ref>)) and the coordinate vector 𝐚 = (a_-M,⋯,a_K-1) satisfies:
* 𝐚^2_2≤ Cm (m = K+M) for the spline basis (see <cit.>, Lemma 2.6)
* 𝐚^2_2≤ 1/τ_0 for an orthonormal basis since h^2 = 𝐚^2_2.
Furthermore, using the Cauchy-Schwarz inequality, we have:
ν^2(h)=(∑_ℓ=0^m-1a_ℓν(ϕ_ℓ))^2≤𝐚^2_2∑_ℓ=0^m-1ν^2(ϕ_ℓ).
Thus, since ν=ν_1+ν_2+ν_3, for all ℓ∈[[-M,K-1]] and for all i∈{1,2,3},
𝔼[ν^2_i(ϕ_ℓ)]= 1/n^2𝔼[(∑_k=0^n-1ϕ_ℓ(X^1_kΔ)ζ^1,i_kΔ)^2].
* Case i=1
Recall that ζ^1,1_kΔ=1/Δ[(∫_kΔ^(k+1)Δσ(X^1_s)dW_s)^2-∫_kΔ^(k+1)Δσ^2(X^1_s)ds] where W=W^1.
We fix a initial time s∈[0,1) and set M^s_t=∫_s^tσ(X^1_u)dW_u, ∀ t≥ s. (M^s_t)_t≥ s is a martingale and for all t∈[s,1], we have:
<M^s,M^s>_t=∫_s^tσ^2(X^1_u)du.
Then, ζ^1,1_kΔ=1/Δ(M^kΔ_(k+1)Δ)^2-<M^kΔ,M^kΔ>_(k+1)Δ is also a ℱ_kΔ-martingale, and, using the Burkholder-Davis-Gundy inequality, we obtain for all k∈[[0,n-1]],
𝔼[ζ^1,1_kΔ|ℱ_kΔ]=0, 𝔼[(ζ^1,1_kΔ)^2|ℱ_kΔ]≤C/Δ^2𝔼[(∫_kΔ^(k+1)Δσ^2(X^1_u)du)^2]≤ Cσ^4_1.
Then, using Equation (<ref>) we have:
𝔼[ν^2_1(ϕ_ℓ)] = 1/n^2𝔼[∑_k=0^n-1ϕ^2_ℓ(X^1_kΔ)(ζ^1,1_kΔ)^2]=1/n^2𝔼[∑_k=0^n-1ϕ^2_ℓ(X^1_kΔ)𝔼[(ζ^1,1_kΔ)^2|ℱ_kΔ]]
≤ Cσ^4_1/n^2𝔼[∑_k=0^n-1ϕ^2_ℓ(X^1_kΔ)]
and,
∑_ℓ=0^m-1𝔼[ν^2_1(ϕ_ℓ)]≤Cσ^4_1/n^2𝔼[∑_k=0^n-1∑_ℓ=0^m-1ϕ^2_ℓ(X^1_kΔ)].
One has:
∑_ℓ=-M^K-1B^2_ℓ(X^1_η(s))≤ 1 for the Spline basis (m = K + M),
∑_ℓ = 0^m-1ϕ^2_ℓ(X^1_η(s))≤ Cm for an orthonormal basis with C = 0 ≤ℓ≤ m-1maxϕ_ℓ^2_∞.
Thus, it comes that
* ∑_ℓ=-M^K-1𝔼[ν^2_1(B_ℓ)]≤ C/n for the Spline basis,
* ∑_ℓ=0^m-1𝔼[ν^2_1(ϕ_ℓ)]≤ Cm/n for an orthonormal basis,
and,
𝔼(h∈𝒮_m, h^2_X=1supν^2_1(h))≤ Cm/n
where C>0 is a constant depending on σ_1 and the basis.
* Case i=2
Wa have ζ^1,2_kΔ=2/Δ∫_kΔ^(k+1)Δ((k+1)Δ-s)σ^'(X^1_s)σ^2(X^1_s)dW_s and,
𝔼[ν^2_2(ϕ_ℓ)] = 4𝔼[(∑_k=0^n-1ϕ_ℓ(X^1_kΔ)∫_kΔ^(k+1)Δ(k+1)Δ-s)σ^'(X^1_s)σ^2(X^1_s)dW_s)^2]
= 4𝔼[(∫_0^1ϕ_ℓ(X^1_η(s))(η(s)+Δ-s)σ^'(X^1_s)σ^2(X^1_s)dW_s)^2]
≤ Cσ^4_1Δ^2𝔼[∫_0^1ϕ^2_ℓ(X^1_η(s))ds]
where C>0 is a constant. We deduce for both the spline basis and any orthonormal basis that there exists a constant C>0 depending on σ_1 such that:
𝔼(h∈𝒮_m, h^2_X=1supν^2_2(h))≤ Cm/n^2.
* Case i=3
We have ζ^1,3_kΔ=2b(X^1_kΔ)∫_kΔ^(k+1)Δσ(X^1_s)dW_s and,
𝔼[ν^2_3(ϕ_ℓ)] = 4/n^2𝔼[(∫_0^1ϕ_ℓ(X^1_η(s))b(X^1_η(s))σ(X^1_s)dW_s)^2]
≤ 4σ^2_1/n^2𝔼[∫_0^1ϕ^2_ℓ(X^1_η(s))b^2(X^1_η(s))ds]
Since for all x∈ℝ, b^2(x)≤ C_0(1+x^2) and t∈[0,1]sup𝔼(|X_t|^2)<∞, there exists a constant C>0 depending on σ_1 such that:
𝔼(h∈𝒮_m, h^2_X=1supν^2_3(h))≤ Cm/n^2.
We finally obtain from Equations (<ref>), (<ref>) and (<ref>) that there exists a constant C>0 depending on σ_1 such that:
𝔼(h∈𝒮_m, h^2_X=1supν^2(h))≤ Cm/n.
We deduce from Equations (<ref>) and (<ref>) that there exists a constant C>0 depending on σ_1 such that,
𝔼[σ^2_m - σ^2_|I^2_n,1_Ω_n,m] ≤ 3h∈𝒮_m,Linfσ^2_|I - h^2_n + C(m/n + Δ^2).
For n large enough, we have σ^2_m - σ^2_|I^2_∞≤ 2mL since σ^2_m_∞≤√(mL). Then, from Lemma <ref> and for all m∈ℳ, there exists a constant C>0 depending on σ_1 such that
𝔼[σ^2_m-σ^2_|I^2_n,1] =𝔼[σ^2_m-σ^2_|I^2_n,1_Ω_n,m]+𝔼[σ^2_m-σ^2_|I^2_n,1_Ω^c_n,m]
≤𝔼[σ^2_m-σ^2_|I^2_n,1_Ω_n,m]+2mLℙ(Ω^c_n,m)
≤ 3h∈𝒮_m,Linfσ^2_|I - h^2_n + C(m/n + m^2γ+1L/n^γ/2 + Δ^2).
Since the pseudo-norms ._n,1 and ._X are equivalent on the event Ω_n,m, then, using Lemma <ref>, there exists a constant C>0 depending on σ_1 such that
𝔼[σ^2_m-σ^2_|I^2_X] = 𝔼[σ^2_m-σ^2_|I^2_X_Ω_n,m] + 𝔼[σ^2_m-σ^2_|I^2_X_Ω^c_n,m]
≤ 8𝔼[σ^2_m-σ^2_|I^2_n,1] + 10h∈𝒮_minfσ^2_|I-h^2_n + 2mL(Ω^c_n,m)
≤ 34h∈𝒮_m,Linfh-σ^2_|I^2_n+C(m/n+m^2γ+1L/n^γ/2+Δ^2).
Finally, since the estimator σ^2_m is built from a diffusion path X̅^1 independent of the diffusion process X, and from Equations (<ref>) and (<ref>), the pseudo-norm ._X depending on the process X and the empirical norm ._n are equivalent (∀ h∈𝕃^2(I), h^2_n≤ (τ_1/τ_0)[h^2_X]), there exists a constant C>0 depending on σ_1, τ_0 and τ_1 such that
𝔼[σ^2_m-σ^2_|I^2_n] ≤ 34τ_1/τ_0h∈𝒮_m,Linfh-σ^2_|I^2_n+C(m/n+m^2γ+1L/n^γ/2+Δ^2).
The proof of this Lemma mainly focus on the spline basis and the Fourier basis based on functions cos and sin which are Lipschitz functions. Thus, for all g = ∑_ℓ = 0^m-1a_ℓϕ_ℓ∈𝒮_m,
|g^2_n,1 - g^2_X| ≤∫_0^1|g^2(X_η(s)) - g^2(X_s)|ds≤ 2g_∞∫_0^1|g(X_η(s)) - g(X_s)|ds.
From Equation (<ref>), one has [g^2_X] ≥τ_0 g^2. Thus, if g^2_X = 1, then g^2≤ 1/τ_0, and we deduce for all g = ∑_ℓ=0^m-1a_ℓϕ_ℓ that there exists a constant C>0 such that
* Spline basis: g_∞≤a_2 ≤ C√(m) (see <cit.>)
* Fourier basis: g_∞≤ C√(m) since g = a_2 and ∑_ℓ=0^m-1ϕ^2_ℓ = O(m).
Moreover, each g∈𝒮_m such that g^2_X = 1 is the Lipschitz function with a Lipschitz coefficient L_g = O(m^3/2). For the spline basis, this result is obtained in <cit.>, proof of Lemma C.1 combined with Lemma 2.6. For the Fourier basis, for all x,y∈ I and using the Cauchy Schwarz inequality, we obtain
|g(x) - g(y)| ≤ ∑_ℓ = 0^m - 1|a_ℓ|.|ϕ_ℓ(x) - ϕ_ℓ(y)|
≤ 2π m√(m)𝐚_2|x-y|
≤ 2π/τ_0m√(m)|x-y|.
Back to Equation (<ref>), there exists a constant C>0 such that
|g^2_n,1 - g^2_X| ≤ Cm^2∫_0^1|X_η(s) - X_s|ds
We have:
Ω^c_n,m = {ω∈Ω, ∃ g∈𝒮_m∖{0}, |g^2_n,1/g^2_X-1| > 1/2},
and, using Equation (<ref>), we obtain
g∈𝒮_m∖{0}sup|g^2_n,1/g^2_X-1| = g∈𝒮_m, g^2_X = 1sup|g^2_n,1-g^2_X|≤ Cm^2∫_0^1|X_η(s) - X_s|ds.
Finally, using the Markov inequality, the Hölder inequality, Equation (<ref>), and Lemma <ref>, we conclude that
ℙ(Ω^c_n,m) ≤ ℙ(Cm^2∫_0^1|X_η(s) - X_s|ds≥1/2)
≤ Cm^2γ∫_0^1𝔼[|X_η(s) - X_s|^γ]ds
≤ Cm^2γ/n^γ/2
with γ∈ (1,+∞).
§.§.§ Proof of Theorem <ref>
Since L=log^2(n), we have
𝔼[σ^2_m,L-σ^2^2_n,1] = 𝔼[(σ^2_m,L-σ^2)_[-log(n),log(n)]^2_n,1] + 𝔼[(σ^2_m,L-σ^2)_[-log(n),log(n)]^c^2_n,1]
≤ 𝔼[(σ^2_m,L-σ^2)_[-log(n),log(n)]^2_n,1] + 2log^2(n)t∈(0,1]supℙ(|X_t|>log(n)).
From Equation (<ref>) (Proof of Theorem <ref>), for all h∈𝒮_m,L,
𝔼[(σ^2_m,L-σ^2)_[-log(n),log(n)]^2_n,1] ≤h∈𝒮_m,Linfh-σ^2^2_n+2∑_i=1^3𝔼[ν_i(σ^2_m-h)]+2𝔼[μ(σ^2_m-h)]
where ν_i, i=1,2,3 and μ are given in Equation (<ref>). For all i∈{1,2,3} and for all h∈𝒮_m,L, one has
𝔼[ν_i(σ^2_m,L-h)]≤√(2mlog^2(n))√(∑_ℓ=0^m-1𝔼[ν^2_i(ϕ_ℓ)]).
* Upper bound of ∑_ℓ=0^m-1𝔼[ν^2_1(ϕ_ℓ)]
According to Equation (<ref>), we have
∀ℓ∈[[0,m-1]], ν_1(ϕ_ℓ)=1/n∑_k=0^n-1ϕ_ℓ(X^1_kΔ)ζ^1,1_kΔ
where ζ^1,1_kΔ=1/Δ[(∫_kΔ^(k+1)Δσ(X^1_s)dW_s)^2-∫_kΔ^(k+1)Δσ^2(X^1_s)ds] is a martingale satisfying
𝔼[ζ^1,1_kΔ|ℱ_kΔ]=0 and 𝔼[(ζ^1,1_kΔ)^2|ℱ_kΔ]≤1/Δ^2𝔼[(∫_kΔ^(k+1)Δσ^2(X^1_s)ds)^2]≤ Cσ^4_1
with C>0 a constant, W=W^1 and (ℱ_t)_t≥ 0 the natural filtration of the martingale (M_t)_t∈[0,1] given for all t∈[0,1] by M_t=∫_0^tσ(X^1_s)dW_s. We derive that
∑_ℓ=0^m-1𝔼[ν^2_1(ϕ_ℓ)]= 1/n^2∑_ℓ=0^m-1𝔼[(∑_k=0^n-1ϕ_ℓ(X^1_kΔ)ζ^1,1_kΔ)^2]=1/n^2𝔼[∑_k=0^n-1∑_ℓ=0^m-1ϕ^2_ℓ(X^1_kΔ)(ζ^1,1_kΔ)^2]
since for all integers k, k^' such that k > k^'≥ 0, we have
𝔼[ϕ_ℓ(X^1_kΔ)ζ^1,1_kΔϕ_ℓ(X^1_k^'Δ)ζ^1,1_k^'Δ|ℱ_kΔ] = ϕ_ℓ(X^1_kΔ)ζ^1,1_k^'Δϕ_ℓ(X^1_k^'Δ)𝔼[ζ^1,1_kΔ|ℱ_kΔ] = 0.
For each k∈[[0,n-1]], we have
∑_ℓ=0^m-1ϕ^2_ℓ(X^1_kΔ) = ∑_ℓ=-M^K-1B^2_ℓ(X^1_kΔ) ≤1 for the spline basis
∑_ℓ=0^m-1ϕ^2_ℓ(X^1_kΔ)≤ Cm For an orthonormal basis with C=0 ≤ℓ≤ m-1maxϕ_ℓ^2_∞.
Finally, there exists a constant C>0 such that
∑_ℓ=0^m-1𝔼[ν^2_1(ϕ_ℓ)]≤C/n for the spline basis
Cm/n for an orthonormal basis.
* Upper bound of ∑_ℓ=0^m-1𝔼[ν^2_2(ϕ_ℓ)]
For all k∈[[0,n-1]] and for all s∈[0,1], set η(s)=kΔ if s∈[kΔ,(k+1)Δ). We have:
∑_ℓ=0^m-1𝔼[ν^2_2(ϕ_ℓ)] =4∑_ℓ=0^m-1𝔼[(∑_k=0^n-1∫_kΔ^(k+1)Δϕ_ℓ(X^1_kΔ)((k+1)Δ-s)σ^'(X^1_s)σ^2(X^1_s)dW_s)^2]
=4∑_ℓ=0^m-1𝔼[(∫_0^1ϕ_ℓ(X^1_η(s))(η(s)+Δ-s)σ^'(X^1_s)σ^2(X^1_s)dW_s)^2].
We conclude that
∑_ℓ=0^m-1𝔼[ν^2_2(ϕ_ℓ)]≤C/n^2 for the spline basis
Cm/n^2 for an orthonormal basis.
where the constant C>0 depends on the diffusion coefficient and the upper bound of the basis functions.
* Upper bound of ∑_ℓ=0^m-1𝔼[ν^3_2(ϕ_ℓ)]
We have:
∑_ℓ=0^m-1𝔼[ν^2_3(ϕ_ℓ)] =4/n^2∑_ℓ=0^m-1𝔼[(∑_k=0^n-1∫_kΔ^(k+1)Δϕ_ℓ(X^1_kΔ)b(X^1_kΔ)σ(X^1_s)dW_s)^2]
=4/n^2∑_ℓ=0^m-1𝔼[(∫_0^1ϕ_ℓ(X^1_η(s))b(X^1_η(s))σ(X^1_s)dW_s)^2]
≤4/n^2𝔼[∫_0^1∑_ℓ=0^m-1ϕ^2_ℓ(X^1_η(s))b^2(X^1_η(s))σ^2(X^1_s)ds].
Since for all x∈ℝ, b(x)≤ C_0(1+x^2) and t∈[0,1]sup𝔼(|X_t|^4)<∞, there exists a constant C>0 depending on the diffusion coefficient such that
∑_ℓ=0^m-1𝔼[ν^2_3(ϕ_ℓ)]≤C/n^2 for the spline basis
Cm/n^2 for an orthonormal basis.
We finally deduce that from Equations (<ref>) and (<ref>) that for all h∈𝒮_m,L,
𝔼[(σ^2_m,L-σ^2)_[-log(n),log(n)]^2_n,1]≤h∈𝒮_minfh-σ^2^2_n+C√(mlog^2(n)/n)+2𝔼[μ(σ^2_m,L-h)] [B]
𝔼[(σ^2_m,L-σ^2)_[-log(n),log(n)]^2_n,1]≤h∈𝒮_minfh-σ^2^2_n+C√(m^2log^2(n)/n)+2𝔼[μ(σ^2_m,L-h)] [F]
where C>0 is a constant. It remains to obtain an upper bound of the term μ(σ^2_m,L-h). For all a>0 and for all h∈𝒮_m,L,
2μ(σ^2_m,L-h) ≤ 2/aσ^2_m,L-σ^2^2_n,1+2/ah-σ^2^2_n,1+a/n∑_k=0^n-1(R^1_kΔ)^2
2𝔼[μ(σ^2_m,L-h)] ≤ 2/a𝔼σ^2_m,L-σ^2^2_n,1+2/ah∈𝒮_minfh-σ^2^2_n +a/n∑_k=0^n-1𝔼[(R^1_kΔ)^2].
Using Equations (<ref>), (<ref>) and setting a=4, we deduce that there exists constant C>0 depending on σ_1 such that,
𝔼[σ^2_m,L-σ^2^2_n,1]≤h∈𝒮_minfh-σ^2^2_n+C√(mlog^2(n)/n) + 2log^2(n)t∈(0,1]sup(|X_t|>A_n) [B]
𝔼[σ^2_m,L-σ^2^2_n,1]≤h∈𝒮_minfh-σ^2^2_n+C√(m^2log^2(n)/n) + 2log^2(n)t∈(0,1]sup(|X_t|>A_n) [F].
From Proposition <ref>, t∈(0,1]supℙ(|X_t|>log(n))≤log^-1(n)exp(-clog^2(n)) with c>0 a constant. Then, we obtain from Equation (<ref>) that
𝔼[σ^2_m,L-σ^2^2_n,1]≤h∈𝒮_minfh-σ^2^2_n+C√(mlog^2(n)/n) [B]
𝔼[σ^2_m,L-σ^2^2_n,1]≤h∈𝒮_minfh-σ^2^2_n+C√(m^2log^2(n)/n) [F].
§.§.§ Proof of Corollary <ref>
We have under Assumption <ref> from Theorem <ref> that
𝔼[σ^2_m,L-σ^2^2_n,1]≤h∈𝒮_minfh-σ^2^2_n+C√(mlog^2(n)/n) [B]
𝔼[σ^2_m,L-σ^2^2_n,1]≤h∈𝒮_minfh-σ^2^2_n+C√(m^2log^2(n)/n) [F].
For [B].
We have m=K+M with M∈ℕ^* fixed. From Proposition <ref> and under Assumption <ref>, there exists a constant C>0 depending on β such that
h∈𝒮_K+M,Linfh-σ^2^2_n≤ Clog^2β(n)K^-2β.
Since K ∝ n^1/(4β+1), we obtain that
𝔼[σ^2_K - σ^2^2_n,1] = O(log^2β(n)n^-2β/(4β+1)).
For [F].
Under Assumptions <ref> and <ref> and From Lemma 12 in <cit.>, there exists a constant C>0 depending on τ_1 of Equation (<ref>) and the smoothness parameter β of the Besov space 𝐁^β_2,∞ such that
h∈𝒮_m,Linfh-σ^2^2_n≤τ_1h∈𝒮_m,Linfh-σ^2^2≤ C|σ^2|^2_β m^-2β
where |σ^2|_β is the semi-norm of σ^2 in the Besov space ℬ^β_2,∞([-log(n),log(n)]). Under Assumption <ref>, |σ^2|_β < ∞. Then, for m ∝ n^1/2(2β+1), the exists a constant C>0 depending on β, σ_1 and τ_1 such that
𝔼[σ^2_m - σ^2^2_n,1] ≤ Clog(n)n^-β/(2β+1).
§.§ Proof of Section <ref>
The following lemma allows us to obtain a risk bound of σ^2_m,L defined with the empirical norm ._n from the risk bound defined from the pseudo norm ._n,N.
Let σ^2_m,L be the truncated projection estimator on of σ^2 over the subspace 𝒮_m,L. Suppose that L = log^2(N), N>1. Under Assumption <ref>, there exists a constant C>0 independent of m and N such that
𝔼[σ^2_m,L - σ^2^2_n,N] - 2𝔼[σ^2_m,L - σ^2^2_n] ≤ C m^2log^3(N)/N.
The proof of Lemma <ref> is provided in <cit.>, Theorem 3.3. The proof uses the independence of the copies X̅^1,…,X̅^N of the process X at discrete times, and the Bernstein inequality.
§.§.§ Proof of Theorem <ref>
For fixed n and N in ℕ^*, we set for all m∈ℳ,
Ω_n,N,m:=h∈𝒮_m∖{0}⋂{|h^2_n,N/h^2_n-1|≤1/2}.
As we can see, the empirical norms h_n,N and h_n of any function h∈𝒮_m∖{0} are equivalent on Ω_n,N,m. More precisely, on the set Ω_n,N,m, for all h∈𝒮_m∖{0}, we have : 1/2h^2_n≤h^2_n,N≤3/2h^2_n. We have the following result:
Under Assumption <ref>, the following holds:
* If n ≥ N or n ∝ N, then m ∈ℳ = {1,…,√(N)/log(Nn)} and,
ℙ(Ω^c_n,N,m)≤ 2exp(-C√(N)).
* If n ≤ N, then m ∈ℳ= {1,…,√(n)/log(Nn)} and
ℙ(Ω^c_n,N,m) ≤ 2exp(-C√(n))
where C>0 is a constant.
We have:
Ω^c_n,N,m={ω∈Ω, ∃ h_0∈𝒮_m, |h^2_n,N/h^2_n-1|>1/2},
Denote by ℋ_m = {h∈𝒮_m, h_n = 1}
and ℋ^ε_m the ε-net of ℋ_m for any ε >0.
We have
h∈ℋ_msup|h^2_n,N/h^2_n-1| = h∈ℋ_msup|h^2_n,N-1|.
Let ε > 0 and let ℋ^ε_m be the ε-net of ℋ_m w.r.t. the supremum norm ._∞. Then, for each h∈ℋ_m, there exists h_ε∈ℋ^ε_m such that h-h_ε_∞≤ε. Then
|h^2_n,N - 1| ≤|h^2_n,N - h_ε^2_n,N| + |h_ε^2_n,N - 1|
and,
|h^2_n,N - h_ε^2_n,N| ≤ 1/Nn∑_j=1^N∑_k=0^n-1|h(X^j_kΔ) - h_ε(X^j_kΔ)|(h_∞ + h_ε_∞)≤(h_∞ + h_ε_∞)ε.
Moreover, we have h^2, h_ε^2≤ 1/τ_0. Then, there exists a constant 𝐜 > 0 such that
|h^2_n,N - h_ε^2_n,N| ≤ 2√( cm/τ_0)ε for the spline basis (see Lemma 2.6 in Denis et al.(2021))
|h^2_n,N - h_ε^2_n,N| ≤ 2√( cm/τ_0)ε for an orthonormal basis (h^2_∞≤ (0≤ℓ≤ m-1maxϕ_ℓ^2_∞)mh^2).
Therefore, for all δ > 0 and for both the spline basis and any orthonormal basis,
ℙ(h∈ℋ_msup|h^2_n,N-1|≥δ) ≤ℙ(h∈ℋ^ε_msup|h^2_n,N-1|≥δ/2) + _4ε√( cm/τ_0)≥δ.
We set δ = 1/2 and we choose ε > 0 such that 4ε√( cm/τ_0) < 1/2. Then, using the Hoeffding inequality, there exists a constant c>0 depending on c and τ_0 such that
ℙ(Ω^c_n,N,m)≤ 2𝒩_∞(ε,ℋ_m)exp(-cN/m)
where 𝒩_∞(ε,ℋ_m) is the covering number of ℋ_m satisfying:
𝒩_∞(ε,ℋ_m) ≤(κ√(m)/ε)^m
where the constant κ>0 depends on c>0 (see <cit.>, Proof of Lemma D.1). We set ε = κ√(m^*)/N with m^* = maxℳ and we derive from Equations (<ref>) and (<ref>) that
ℙ(Ω^c_n,N,m) ≤ 2N^m^*exp(-cN/m^*) = 2exp(-cN/m^*(1-m^*2log(N)/cN)).
* If n ≥ N, then m ∈ℳ = {1,…,√(N)/log(Nn)}. Since m^*2log(N)/N → 0 as N→ +∞, there exists a constant C>0 such that
ℙ(Ω^c_n,N,m)≤ 2exp(-C√(N)).
* If n ≤ N, then m ∈ℳ = {1,…, √(n)/log(Nn)}, m^*2log(N)/N ≤log(N)/log^2(Nn) → 0 as N,n →∞, and
ℙ(Ω^c_n,N,m)≤ 2exp(-C√(n)).
* If n ∝ N, then m ∈ℳ = {1,…,√(N)/log(Nn)}. Since m^*2log(N)/N → 0 as N→ +∞, there exists a constant C>0 such that
ℙ(Ω^c_n,N,m)≤ 2exp(-C√(N)).
The proof of Theorem <ref> extends the proof of Theorem <ref> when N tends to infinity. Then, we deduce from Equation (<ref>) that
𝔼[σ^2_m-σ^2_|I^2_n,N_Ω_n,N,m]≤ 3h∈𝒮_m,Linfh-σ^2_|I^2_n+C𝔼(h∈𝒮_m, h_n=1supν^2(h))+CΔ^2
where C>0 is a constant depending on σ_1, and ν = ν_1+ν_2+ν_3 with
ν_i(h) = 1/Nn∑_j=1^N∑_k=0^n-1h(X^j_kΔ)ζ^j,i_kΔ, i=1,2,3
and the ζ^j,i_kΔ's are the error terms depending on each path X^j, j=1,…,N.
§.§ Upper bound of 𝔼(h∈𝒮_m, h_n=1supν^2(h))
For all h=∑_ℓ=0^m-1a_ℓϕ_ℓ∈𝒮_m such that h_n=1, we have h^2≤1/τ_0 and the coordinate vector a=(a_0,⋯,a_m-1) satisfies:
* a^2_2≤ CK ≤ Cm for the spline basis (see <cit.>, Lemma 2.6)
* a^2_2≤ 1/τ_0 for an orthonormal basis since h^2 = a^2_2.
Furthermore, using the Cauchy Schwartz inequality, we have:
ν^2(h)=(∑_ℓ=0^m-1a_ℓν(ϕ_ℓ))^2≤a^2_2∑_ℓ=0^m-1ν^2(ϕ_ℓ).
Thus, for all ℓ∈[[0,m-1]], ν=ν_1+ν_2+ν_3 and for all i∈{1,2,3}
𝔼[ν^2_i(ϕ_ℓ)]= 1/Nn^2𝔼[(∑_k=0^n-1ϕ_ℓ(X^1_kΔ)ζ^1,i_kΔ)^2].
We finally deduce from (<ref>), (<ref>) and (<ref>) that there exists a constant C>0 depending on σ_1 such that:
𝔼(h∈𝒮_m, h_n=1supν^2(h))≤ Cm/Nn.
We deduce from (<ref>) and (<ref>) that there exists a constant C>0 such that,
𝔼[σ^2_m-σ^2_|I^2_n,N_Ω_n,N,m]≤ 3h∈𝒮_m,Linfσ^2_|I-h^2_n+C(m/Nn+Δ^2).
Since we have σ^2_m_∞≤√(mL), then for m and L large enough, σ^2_m-σ^2_|I^2_∞≤ 2mL. There exists a constant C>0 such that for all m∈ℳ and for m and L large enough,
𝔼[σ^2_m-σ^2_|I^2_n,N] =𝔼[σ^2_m-σ^2_|I^2_n,N_Ω_n,N,m]+𝔼[σ^2_m-σ^2_|I^2_n,N_Ω^c_n,N,m]
≤𝔼[σ^2_m-σ^2_|I^2_n,N_Ω_n,N,m]+2mL(Ω^c_n,N,m).
Then, from Equation (<ref>), Lemma <ref> and for m ∈ℳ = {1,…,√(min(n,N))/√(log(Nn))}, we have:
𝔼[σ^2_m-σ^2_|I^2_n,N]≤ 3h∈𝒮_m,Linfh - σ^2_|I^2_n+C(m/Nn+mLexp(-C√(min(n,N)))+Δ^2)
where C>0 is a constant. Recall that the empirical norms ._n,N and ._n are equivalent on Ω_n,N,m, that is for all h∈𝒮_m, h^2_n≤ 2h^2_n,N. Thus, we have
𝔼[σ^2_m-σ^2_|I^2_n] = 𝔼[σ^2_m-σ^2_|I^2_n_Ω_n,N,m] + 𝔼[σ^2_m-σ^2_|I^2_n_Ω^c_n,N,m]
≤ 𝔼[σ^2_m-σ^2_|I^2_n_Ω_n,N,m] + 2mL(Ω^c_n,N,m).
For all h ∈𝒮_m,L⊂𝒮_m, we have:
𝔼[σ^2_m-σ^2_|I^2_n_Ω_n,N,m] ≤ 2𝔼[σ^2_m-h^2_n_Ω_n,N,m] + 2h-σ^2_|I^2_n
≤ 4𝔼[σ^2_m-h^2_n,N_Ω_n,N,m] + 2h-σ^2_|I^2_n
≤ 8𝔼[σ^2_m-σ^2_|I^2_n,N] + 10h-σ^2_|I^2_n.
We finally conclude that
𝔼[σ^2_m-σ^2_|I^2_n] ≤ 34h∈𝒮_m,Linfh-σ^2_|I^2_n+C(m/Nn+mLexp(-C√(min(n,N)))+Δ^2).
§.§.§ Proof of Corollary <ref>
Under Assumption <ref> and from Theorem <ref> and Lemma (<ref>), there exists a constant C>0 such that
𝔼[σ^2_m-σ^2_|I^2]≤ C(h∈𝒮_m,Linfh-σ^2_|I^2_n+m/Nn+L/min(N^4,n^4)+1/n^2).
For [B].
We have m=K+M where M is fixed. From Lemma (<ref>), under Assumption <ref>, we have h∈𝒮_m,Linfh-σ^2_|I^2_n = O(K^-2β). Thus, for K ∝ (Nn)^1/(2β+1) and L = log(Nn),
𝔼[σ^2_m-σ^2_|I^2]≤ C(Nn)^-2β/(2β+1)+Clog(Nn)min(N^-4,n^-4).
where C>0 is a constant depending on β.
For [F].
From Equation (<ref>) and the proof of Corollary <ref>, we have
h∈𝒮_minfh - σ^2_|I^2_n = O(m^-2s).
Then, for m = (Nn)^1/(2s+1) and L = log(Nn), we obtain
𝔼[σ^2_m-σ^2_|I^2]≤ C(Nn)^-2s/(2s+1)+Clog(Nn)min(N^-4,n^-4).
§.§.§ Proof of Theorem <ref>
We consider the restriction σ^2_[-log(N),log(N)] of σ^2 on the compact interval [-log(N),log(N)] on which the spline basis is built. Then we have:
𝔼[σ^2_m,L-σ^2^2_n] = 𝔼[(σ^2_m,L-σ^2)_[-log(N),log(N)]^2_n] + 𝔼[(σ^2_m,L-σ^2)_[-log(N),log(N)]^c^2_n]
and from Proposition <ref>, Lemma <ref> and for N large enough, there exists constants c,C>0 such that
𝔼[(σ^2_m,L-σ^2)_[-log(N),log(N)]^c^2_n] ≤ 2L/n∑_k=0^n-1ℙ(|X_kΔ| > log(N))≤ 2Lt∈[0,1]supℙ(|X_t|≥log(N))
≤ C/log(N)exp(-clog^2(N)).
We deduce that
𝔼[σ^2_m,L-σ^2^2_n] = 𝔼[(σ^2_m,L-σ^2)_[-log(N),log(N)]^2_n] + C/log(N)exp(-clog^2(N)).
It remains to upper-bound the first term on the right hand side of Equation (<ref>).
Upper bound of 𝔼[σ^2_m,L-σ^2^2_n_[-log(N),log(N)]].
For all h∈𝒮_m,L, we obtain from Equation (<ref>),
γ_n,N(σ^2_m,L)-γ_n,N(σ^2)≤γ_n,N(h)-γ_n,N(σ^2).
For all h∈𝒮_m,L,
γ_n,N(h)-γ_n,N(σ^2)=h-σ^2^2_n,N+2ν_1(σ^2-h)+2ν_2(σ^2-h)+2ν_3(σ^2-h)+2μ(σ^2-h)
where
ν_i(h)=1/nN∑_j=1^N∑_k=0^n-1h(X^j_kΔ)ζ^j,i_kΔ, i∈{1,2,3}, μ(h)=1/nN∑_j=1^N∑_k=0^n-1h(X^j_kΔ)R^j_kΔ,
we deduce from Equation (<ref>) that for all h∈𝒮_m,L,
𝔼[σ^2_m,L-σ^2^2_n,N_[-log(N),log(N)]] ≤h∈𝒮_m,Linfh-σ^2^2_n+2∑_i=1^3𝔼[ν_i(σ^2_m,L-h)]+2𝔼[μ(σ^2_m,L-h)].
For all i∈{1,2,3} and for all h∈𝒮_m,L, one has
𝔼[ν_i(σ^2_m,L-h)]≤√(2mL)√(∑_ℓ=0^m-1𝔼[ν^2_i(ϕ_ℓ)]).
* Upper bound of ∑_ℓ=0^m-1𝔼[ν^2_1(ϕ_ℓ)]
According to Equation (<ref>), we have
∀ℓ∈[[0,m-1]], ν_1(ϕ_ℓ)=1/Nn∑_j=1^N∑_k=0^n-1ϕ_ℓ(X^j_kΔ)ζ^j,1_kΔ
where ζ^j,1_kΔ=1/Δ[(∫_kΔ^(k+1)Δσ(X^j_s)dW^j_s)^2-∫_kΔ^(k+1)Δσ^2(X^j_s)ds] is a martingale satisfying
𝔼[ζ^1,1_kΔ|ℱ_kΔ]=0 and 𝔼[(ζ^1,1_kΔ)^2|ℱ_kΔ]≤1/Δ^2𝔼[(∫_kΔ^(k+1)Δσ^2(X^1_s)ds)^2]≤ Cσ^4_1
with C>0 a constant, W=W^1 and (ℱ_t)_t≥ 0 the natural filtration of the martingale (M_t)_t∈[0,1] given for all t∈[0,1] by M_t=∫_0^tσ(X^1_s)dW_s. We derive that
∑_ℓ=0^m-1𝔼[ν^2_1(ϕ_ℓ)]= 1/Nn^2∑_ℓ=0^m-1𝔼[(∑_k=0^n-1ϕ_ℓ(X^j_kΔ)ζ^1,1_kΔ)^2]=1/Nn^2𝔼[∑_k=0^n-1∑_ℓ=0^m-1ϕ^2_ℓ(X^1_kΔ)(ζ^1,1_kΔ)^2].
For each k∈[[0,n-1]], we have
∑_ℓ=0^m-1ϕ^2_ℓ(X^1_kΔ) = ∑_ℓ=-M^K-1B^2_ℓ(X^1_kΔ) =1 for the spline basis
∑_ℓ=0^m-1ϕ^2_ℓ(X^1_kΔ)≤ Cm For an orthonormal basis with C=0 ≤ℓ≤ m-1maxϕ_ℓ^2_∞.
Finally, there exists a constant C>0 such that
∑_ℓ=0^m-1𝔼[ν^2_1(ϕ_ℓ)]≤C/Nn for the spline basis
Cm/Nn for an orthonormal basis.
* Upper bound of ∑_ℓ=0^m-1𝔼[ν^2_2(ϕ_ℓ)]
For all k∈[[0,n-1]] and for all s∈[0,1], set η(s)=kΔ if s∈[kΔ,(k+1)Δ). We have:
∑_ℓ=0^m-1𝔼[ν^2_2(ϕ_ℓ)] =4/N∑_ℓ=0^m-1𝔼[(∑_k=0^n-1∫_kΔ^(k+1)Δϕ_ℓ(X^1_kΔ)((k+1)Δ-s)σ^'(X^1_s)σ^2(X^1_s)dW_s)^2]
=4/N∑_ℓ=0^m-1𝔼[(∫_0^1ϕ_ℓ(X^1_η(s))(η(s)+Δ-s)σ^'(X^1_s)σ^2(X^1_s)dW_s)^2].
We conclude that
∑_ℓ=0^m-1𝔼[ν^2_2(ϕ_ℓ)]≤C/Nn^2 for the spline basis
Cm/Nn^2 for an orthonormal basis.
where the constant C>0 depends on the diffusion coefficient and the upper bound of the basis functions.
* Upper bound of ∑_ℓ=0^m-1𝔼[ν^3_2(ϕ_ℓ)]
We have:
∑_ℓ=0^m-1𝔼[ν^2_3(ϕ_ℓ)] =4/Nn^2∑_ℓ=0^m-1𝔼[(∑_k=0^n-1∫_kΔ^(k+1)Δϕ_ℓ(X^1_kΔ)b(X^1_kΔ)σ(X^1_s)dW_s)^2]
=4/Nn^2∑_ℓ=0^m-1𝔼[(∫_0^1ϕ_ℓ(X^1_η(s))b(X^1_η(s))σ(X^1_s)dW_s)^2]
≤4/Nn^2𝔼[∫_0^1∑_ℓ=0^m-1ϕ^2_ℓ(X^1_η(s))b(X^1_η(s))σ^2(X^1_s)ds].
Since for all x∈ℝ, b(x)≤ C_0(1+x^2) and t∈[0,1]sup𝔼(|X_t|^2)<∞, there exists a constant C>0 depending on the diffusion coefficient such that
∑_ℓ=0^m-1𝔼[ν^2_3(ϕ_ℓ)]≤C/Nn^2 for the spline basis
Cm/Nn^2 for an orthonormal basis.
We finally deduce that from Equations (<ref>) and (<ref>) that for all h∈𝒮_m,L,
𝔼[σ^2_m,L-σ^2^2_n,N_[-log(N),log(N)]] ≤h∈𝒮_m,Linfh-σ^2^2_n+C√(mL/Nn)+2𝔼[μ(σ^2_m,L-h)] (1)
𝔼[σ^2_m,L-σ^2^2_n,N_[-log(N),log(N)]] ≤h∈𝒮_m,Linfh-σ^2^2_n+C√(m^2L/Nn)+2𝔼[μ(σ^2_m,L-h)] (2)
where C>0 is a constant, the result (1) corresponds to the spline basis, and the result (2) corresponds to any orthonormal basis. It remains to obtain an upper bound of the term μ(σ^2_m,L-h). For all a>0 and for all h∈𝒮_m,L,
2μ(σ^2_m,L-h) ≤ 2/aσ^2_m,L-σ^2^2_n,N+2/ah-σ^2^2_n,N+a/Nn∑_j=1^N∑_k=0^n-1(R^j_kΔ)^2
2𝔼[μ(σ^2_m,L-h)] ≤ 2/a𝔼σ^2_m,L-σ^2^2_n,N+2/ah∈𝒮_m,Linfh-σ^2^2_n +a/Nn∑_j=1^N∑_k=0^n-1𝔼[(R^j_kΔ)^2].
Using Equations (<ref>), (<ref>) and setting a=4, we deduce that there exists constant C>0 depending on σ_1 such that,
𝔼[σ^2_m,L-σ^2^2_n,N_[-log(N),log(N)]]≤h∈𝒮_m,Linfh-σ^2^2_n+C(√(mL/Nn)+Δ^2) [𝐁]
𝔼[σ^2_m,L-σ^2^2_n,N_[-log(N),log(N)]] ≤h∈𝒮_m,Linfh-σ^2^2_n+C(√(m^2L/Nn)+Δ^2) [𝐇].
The final result is obtained from Equations (<ref>) and (<ref>).
§.§.§ Proof of Lemma <ref>
It is proven in <cit.> that for each dimension m∈ℳ, the Gram matrix Ψ_m built from the Hermite basis is invertible. For the case of the B-spline basis, let us consider a vector (x_-M,⋯,x_K-1)∈ℝ^m such that x_j∈[u_j+M,u_j+M+1) and B_j(x_j)≠ 0. Since [u_j+M,u_j+M+1)∩[u_j^'+M,u_j^'+M+1)=∅ for all j,j^'∈{-M,⋯,K-1} such that j≠ j^', then for all j,j^'∈{-M,⋯,K-1} such that j≠ j^', B_j(x_j^')=0. Consequently, we obtain:
det((B_ℓ(x_ℓ^'))_-M≤ℓ,ℓ^'≤ K-1) =det(diag(B_-M(x_-M),⋯,B_K-1(x_K-1)))
=∏_ℓ=-M^K-1B_ℓ(x_ℓ)≠ 0.
Then, we deduce from <cit.>, Lemma 1 that the matrix Ψ_m is invertible for all m∈ℳ, where the function f_T are replaced by
f_n : x↦1/n∑_k=0^n-1p_X(kΔ,x)
with λ([-A_N,A_N]∩supp(f_n))>0, λ being the Lebesgue measure.
Case of the B-spline basis.
For all w∈ℝ^m such that w_2,m=1, we have:
w^'Ψ_mw = t_w^2_n=∫_-A_N^A_Nt^2_w(x)f_n(x)dx+t^2_w(x_0)/n with t_w=∑_ℓ=-M^K-1w_ℓB_ℓ.
Under Assumption <ref>, the transition density (t,x)↦ p_X(t,x) is approximated as follows
∀ (t,x)∈(0,1]×ℝ, 1/K_*√(t)exp(-c_σx^2/t)≤ p_X(t,x)≤K_*/√(t)exp(-x^2/c_σt)
where K_*>1 and c_σ>1. Since s↦exp(-c_σx^2/s) is an increasing function, then for n large enough and for all x∈[-A_N,A_N],
f_n(x) ≥1/K_*n∑_k=1^n-1exp(-cx^2/kΔ)≥1/K_*∫_0^1-Δexp(-c_σx^2/s)ds
≥1/K_*∫_1-(log(N))^-1^1-(2log(N))^-1exp(-c_σx^2/s)ds
≥1/2K_*log(N)exp(-c_σx^2/1-log^-1(N)).
Thus, the density function satisfies
∀ x∈[-A_N,A_N], f_n(x)≥1/2K_*log(N)exp(-c_σA^2_N/1-log^-1(N))≥1/2K_*log(N)exp(-c_σA^2_N).
Finally, since there exists a constant C_1>0 such that t_w^2≥ C_1A_NK^-1_N (see <cit.>, Lemma 2.6), for all w∈ℝ^m (m = K_N+M) such that w_2,m=1, there exists a constant C>0 such that,
w^'Ψ_mw≥CA_N/mlog(N)exp(-c_σA^2_N).
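As a purely numerical illustration of this lower bound (not part of the proof), one can assemble the empirical Gram matrix of a clamped B-spline basis from simulated paths and inspect its smallest eigenvalue. The knot layout, the toy Ornstein-Uhlenbeck dynamics with drift b(x)=1-x, and all names and constants in the sketch below are our own choices.

```python
# Numerical illustration: empirical Gram matrix of a B-spline basis and its
# smallest eigenvalue. Grid, model and constants are illustrative only.
import numpy as np
from scipy.interpolate import BSpline

M = 3                      # spline degree
K = 8                      # interior pieces; basis dimension is K + M
A = 2.0                    # estimation interval [-A, A]
knots = np.concatenate(([-A] * M, np.linspace(-A, A, K + 1), [A] * M))
dim = K + M

def basis_matrix(x):
    """Evaluate the K + M B-spline basis functions at the points x."""
    B = np.empty((len(x), dim))
    for l in range(dim):
        coef = np.zeros(dim)
        coef[l] = 1.0
        B[:, l] = BSpline(knots, coef, M, extrapolate=False)(x)
    return np.nan_to_num(B)   # points outside [-A, A] contribute zero

# Toy data standing in for the observations X^j_{k*Delta}: Euler scheme for dX = (1 - X)dt + dW.
rng = np.random.default_rng(0)
N, n, dt = 50, 200, 1.0 / 200
X = np.zeros((N, n))
for k in range(1, n):
    X[:, k] = X[:, k - 1] + (1.0 - X[:, k - 1]) * dt + np.sqrt(dt) * rng.standard_normal(N)

B = basis_matrix(X.ravel())
Psi_hat = B.T @ B / (N * n)          # empirical Gram matrix
print("smallest eigenvalue of Psi_hat:", np.linalg.eigvalsh(Psi_hat).min())
```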
Case of the Hermite basis.
For all w∈^m such that w_2,m=1, we have
w^'Ψ_mw=t_w^2_n=∫_-∞^+∞t^2_w(x)f_n(x)dx+t^2_w(x_0)/n with t_w=∑_ℓ=0^m-1w_ℓh_ℓ.
Recall that for all x∈ such that |x|≥√((3/2)(4m+3)), |h_ℓ(x)|≤ c|x|exp(-c_0x^2) for all ℓ≥ 0. Then we have
w^'Ψ_mw ≥ ∫_|x|≤√((3/2)(4m+3))(∑_ℓ=0^m-1w_ℓh_ℓ(x))^2f_n(x)dx
≥ x∈[-√((3/2)(4m+3)),√((3/2)(4m+3))]inff_n(x)∫_|x|≤√((3/2)(4m+3))(∑_ℓ=0^m-1w_ℓh_ℓ(x))^2dx
≥ 1/2K_*log(N)exp(-3c_σ(4m+3)/2(1-log^-1(N)))∫_|x|≤√((3/2)(4m+3))(∑_ℓ=0^m-1w_ℓh_ℓ(x))^2dx
since for all x∈ℝ, f_n(x)≥(1/2K_*log(N))exp(-c_σx^2/1-log^-1(N)). Set a_N=√((3/2)(4m+3)), then we obtain
w^'Ψ_mw≥exp(-3c_σ(4m+3)/2(1-log^-1(N)))/2K_*log(N)(∫_-∞^+∞(∑_ℓ=0^m-1w_ℓh_ℓ(x))^2dx-∫_|x|>a_N(∑_ℓ=0^m-1w_ℓh_ℓ(x))^2dx)
≥exp(-3c_σ(4m+3)/2(1-log^-1(N)))/2K_*log(N)(1-2c^2m∫_a_N^+∞x^2exp(-8c_0x^2)dx)
≥exp(-3c_σ(4m+3)/2(1-log^-1(N)))/2K_*log(N)(1-c^2m/8c_0√(3/2(4m+3))exp(-12c_0(4m+3)))
where c,c_0>0 are constants depending on the Hermite basis. Finally, for N large enough,
1-c^2m/8c_0√(3/2(4m+3))exp(-12c_0(4m+3))≥1/2.
Finally, there exists a constant C>0 such that for all w∈^m with w_2,m=1,
w^'Ψ_mw ≥C/log(N)exp(-3c_σ(4m+3)/2(1-log^-1(N))).
§.§.§ Proof of Theorem <ref>
The proof of Theorem <ref> relies on the following lemma:
Under Assumptions <ref> and for σ^2∈Σ_I(β,R) with β≥ 1, I = [-A_N,A_N] and
N ∝ n, A_N = o (√(log(N))), K ∝((Nn)^1/(2β+1)A_N) (m = K+M),
the following holds:
ℙ(Ω^c_n,N,m) ≤ Cexp(- c log^3/2(N))
where c,C>0 are constants independent of N.
According to Equations (<ref>) in the proof of Theorem <ref>, for all dimension m=K+M, with K∈, and for all h∈𝒮_K+M, there exists a constant C>0 such that
[σ^2_A_N,m-σ^2_A_N^2_n,N_Ω_n,N,m]≤ C[h∈𝒮_K+M,Linfh-σ^2_A_N^2_n+(h∈𝒮_K+M,h_n=1supν^2(h))+Δ^2]
where Ω_n,N,m is given in Equation (<ref>) and ν=ν_1+ν_2+ν_3 with the ν_i given in Equation (<ref>) .
For all h=∑_ℓ=-M^K-1a_ℓB_ℓ∈𝒮_K+M,L_N,
h^2_n=[1/n∑_k=0^n-1h^2(X_kΔ)]=∑_ℓ=-M^K-1∑_ℓ=-M^K-1a_ℓa_ℓ^'[1/n∑_k=0^n-1B_ℓ(X_kΔ)B_ℓ^'(X_kΔ)]=a^'Ψ_ma.
The Gram matrix Ψ_m is invertible for each K∈ℳ (see proof of Lemma <ref>). It follows that for all h=∑_ℓ=-M^K-1a_ℓB_ℓ such that h^2_n=a^'Ψ_ma=1, one has a=Ψ^-1/2_mu where u∈ℝ^m and u_2,m=1. Furthermore, we have:
h=∑_ℓ=-M^K-1a_ℓB_ℓ=∑_ℓ=-M^K-1u_ℓ∑_ℓ^'^K-1[Ψ^-1/2_m]_ℓ^',ℓB_ℓ^'.
Then for all h∈𝒮_K+M, we have ν^2(h)≤ 3(ν^2_1(h)+ν^2_2(h)+ν^2_3(h)) where,
∀ i∈{1,2,3}, ν^2_i(h)≤∑_ℓ=-M^K-1(1/Nn∑_j=1^N∑_k=0^n-1∑_ℓ^'=-M^K-1[Ψ^-1/2_m]_ℓ^',ℓB_ℓ^'(X^j_kΔ)ζ^j,i_kΔ)^2.
So we obtain,
∀ i∈{1,2,3}, [h∈𝒮_K+M,h_n=1supν^2_i(h)]≤1/Nn^2∑_ℓ=-M^K-1[(∑_k=0^n-1∑_ℓ^'=-M^K-1[Ψ^-1/2_m]_ℓ^',ℓB_ℓ^'(X^j_kΔ)ζ^1,i_kΔ)^2]
For i=1, we have ζ^1,1_kΔ=1/Δ[(∫_kΔ^(k+1)Δσ(X^1_s)dW_s)^2-∫_kΔ^(k+1)Δσ^2(X^1_s)ds] and we obtained in the proof of Theorem <ref> that there exists a constant C>0 such that for all k∈[[0,n-1]],
𝔼[ζ^1,1_kΔ|ℱ_kΔ]=0, 𝔼[(ζ^1,1_kΔ)^2|ℱ_kΔ]≤ C𝔼[(∫_kΔ^(k+1)Δσ^2(X_u)du)^2]≤ Cσ^4_1Δ^2.
We deduce that
[h∈𝒮_K+M,h_n=1supν^2_1(h)] =1/Nn^2Δ^2∑_ℓ=0^K-1[(∑_k=0^n-1∑_ℓ^'=-M^K-1[Ψ^-1/2_m]_ℓ^',ℓB_ℓ^'(X^1_kΔ)ζ^1,1_kΔ)^2]
≤1/N∑_ℓ=-M^K-1∑_k=0^n-1{(∑_ℓ^'=-M^K-1[Ψ^-1/2_m]_ℓ^',ℓB_ℓ^'(X^1_kΔ))^2(ζ^1,1_kΔ)^2}
≤4σ^2_1/Nn∑_ℓ=-M^K-1∑_ℓ^'=-M^K-1∑_ℓ^''=-M^K-1[Ψ^-1/2_m]_ℓ^',ℓ[Ψ^-1/2_m]_ℓ^'',ℓ[Ψ^-1/2_m]_ℓ^',ℓ^''.
We have:
∑_ℓ=-M^K-1∑_ℓ^'=-M^K-1∑_ℓ^''=-M^K-1[Ψ^-1/2_m]_ℓ^',ℓ[Ψ^-1/2_m]_ℓ^'',ℓ[Ψ^-1/2_m]_ℓ^',ℓ^''=Tr(Ψ^-1_mΨ_m)=Tr(I_m)=m.
So we obtain
[h∈𝒮_K+Msupν^2_1(h)]≤4σ^2_1m/Nn.
For i=2, we have ζ^1,2_kΔ=2/Δ∫_kΔ^(k+1)Δ((k+1)Δ-s)σ^'(X^1_s)σ^2(X^1_s)dW_s and
[h∈𝒮_K+M,h_n=1supν^2_2(h)] ≤1/Nn^2∑_ℓ=-M^K-1[(∑_k=0^n-1∑_ℓ^'=-M^K-1[Ψ^-1/2_m]_ℓ^',ℓB_ℓ^'(X^j_kΔ)ζ^1,2_kΔ)^2]
≤4σ^4_1σ^'^2_∞Δ/Nn^2∑_ℓ=-M^K-1∑_k=0^n-1[(∑_ℓ^'=-M^K-1[Ψ^-1/2_m]_ℓ^',ℓB_ℓ^'(X^1_kΔ))^2]
≤4σ^2_1σ^'^2_∞m/Nn^2.
For i=3, we have ζ^1,3_kΔ=2b(X^1_kΔ)∫_kΔ^(k+1)Δσ(X^1_s)dW_s and there exists constants C_1,C_2>0 such that
[h∈𝒮_K+M,h_n=1supν^2_3(h)] ≤1/Nn^2∑_ℓ=-M^K-1[(∑_k=0^n-1∑_ℓ^'=-M^K-1[Ψ^-1/2_m]_ℓ^',ℓB_ℓ^'(X^j_kΔ)ζ^1,3_kΔ)^2]
≤ C_1σ^2_1Δ/Nn^2∑_ℓ=-M^K-1∑_k=0^n-1[(∑_ℓ^'=-M^K-1[Ψ^-1/2_m]_ℓ^',ℓB_ℓ^'(X^1_kΔ))^2]
≤ C_2σ^2_1m/Nn^2.
Finally, there exists a constant C>0 depending on σ_1 and M such that
[h∈𝒮_K+M,h_n=1supν^2(h)]≤ Cm/Nn.
From Equations (<ref>) and (<ref>) , we deduce that
[σ^2_A_N,m-σ^2_A_N^2_n,N_Ω_n,N,m]≤ C(h∈𝒮_K+M,Linfh-σ^2_A_N^2_n+m/Nn+Δ^2)
where C>0 is a constant depending on σ_1 and M. We obtain
[σ^2_A_N,m-σ^2_A_N^2_n,N]≤ C(h∈𝒮_K+M,Linfh-σ^2_A_N^2_n+m/Nn+Δ^2)+[σ^2_A_N,m-σ^2_A_N^2_n,N_Ω^c_n,N,m]
and for N large enough, σ^2_A_N,m-σ^2_A_N^2_n,N≤ 4mL, and according to Lemma <ref> ,
[σ^2_A_N,m-σ^2_A_N^2_n,N_Ω^c_n,N,m]≤ 4mLℙ(Ω^c_n,N,m)≤ CmLexp(-clog^3/2(N))
where c>0 is a constant. Thus, there exists a constant C>0 such that
[σ^2_A_N,m-σ^2_A_N^2_n,N] ≤ [σ^2_A_N,m-σ^2_A_N^2_n,N_Ω_n,N,m]+[σ^2_A_N,m-σ^2_A_N^2_n,N_Ω^c_n,N,m]
≤ C(h∈𝒮_K+M,Linfh-σ^2_A_N^2_n+m/Nn+mLexp(-clog^3/2(N))+Δ^2).
Then, as n ∝ N and L = log(N), there exists a constant C>0 such that
[σ^2_A_N,m-σ^2_A_N^2_n,N] ≤ C(h∈𝒮_K+M,Linfh-σ^2_A_N^2_n+m/Nn).
Finally, since σ^2∈Σ_I(β,R) with β≥ 1 and I = [-A_N, A_N], one has
h∈𝒮_K+M,Linfh-σ^2_A_N^2_n≤ CA^2β_NK^-2β
where the constant C>0 depends on β, R and M. Furthermore, as we chose the inverval [-A_N,A_N] such that A_N = o (√(log(N))) and for K ∝((Nn)^1/(2β+1)A_N), we obtain
[σ^2_A_N,m-σ^2_A_N^2_n,N] ≤ Clog^β(N)(Nn)^-2β/(2β+1).
§.§ Proof of Section <ref>
§.§.§ Proof of Theorem <ref>
Set for all K, K^'∈𝒦 = {2^q, q=0,…, q_max, 2^q_max≤√(N)/log(N)}⊂ℳ,
𝒯_K,K^' = {g∈𝒮_K+M+𝒮_K^'+M, g_n=1, g_∞≤√(L)}.
Recall that for all j ∈ [[1,N]] and for all k ∈ [[0,n]],
ζ^j,1_kΔ = 1/Δ[(∫_kΔ^(k+1)Δσ(X^j_s)dW^j_s)^2-∫_kΔ^(k+1)Δσ^2(X^j_s)ds].
The proof of Theorem <ref> relies on the following lemma whose proof is in Appendix.
Under Assumption <ref>, for all ε, v>0 and g∈𝒯_K,K^', there exists a real constant C>0 such that,
ℙ(1/Nn∑_j=1^N∑_k=0^n-1g(X^j_kΔ)ζ^j,1_kΔ≥ε, g^2_n,N≤ v^2)≤exp(-CNnε^2/σ^2_1(εg_∞+4σ^2_1v^2))
and for all x>0 such that x≤ε^2/σ^2_1(εg_∞+4σ^2_1v^2),
ℙ(1/Nn∑_j=1^N∑_k=0^n-1g(X^j_kΔ)ζ^j,1_kΔ≥ 2σ^2_1v√(x)+σ^2_1g_∞x, g^2_n,N≤ v^2)≤exp(-CNnx).
From Equation (<ref>), we have
K:=K∈𝒦min{γ_n,N(σ^2_K)+pen(K)}.
For all K∈𝒦 and h∈𝒮_K+M,L,
γ_n,N(σ^2_K)+pen(K)≤γ_n,N(h)+pen(K),
then, for all K∈𝒦 and for all h∈𝒮_K+M,L,
γ_n,N(σ^2_K)-γ_n,N(σ^2_|I)≤ γ_n,N(h)-γ_n,N(σ^2_|I)+pen(K)-pen(K)
σ^2_K-σ^2_|I^2_n,N≤ h-σ^2_|I^2_n,N+2ν(σ^2_K-h)+2μ(σ^2_K - h)+pen(K)-pen(K)
≤ h-σ^2_|I^2_n,N+1/dσ^2_K-t^2_n+dg∈𝒯_K,Ksupν^2(g)+1/dσ^2_K-h^2_n,N
+d/Nn∑_j=1^N∑_k=0^n-1(R^j_kΔ)^2+pen(K)-pen(K)
where d>1 and the space 𝒯_K,K is given in Equation (<ref>). On the set Ω_n,N,K_max (given in Equation (<ref>)): ∀ h∈𝒮_K+M, 1/2h^2_n≤h^2_n,N≤3/2h^2_n. Then on Ω_n,N,K_max, for all d>1 and for all h∈𝒮_K+M with K∈𝒦,
(1-10/d)σ^2_K-σ^2_|I^2_n,N≤ (1+10/d)h-σ^2_|I^2_n,N+dh∈𝒯_K,Ksupν^2(h)+d/Nn∑_j=1^N∑_k=0^n-1(R^j_kΔ)^2
+pen(K)-pen(K).
We set d=20. Then, on Ω_n,N,K_max and for all h∈𝒮_K+M,L,
σ^2_K-σ^2_|I^2_n,N≤ 3h - σ^2_|I^2_n,N+20h∈𝒯_K,Ksupν^2(h)+20/Nn∑_j=1^N∑_k=0^n-1(R^j_kΔ)^2+2(pen(K)-pen(K)).
Let q : 𝒦^2⟶ℝ_+ such that 160 q(K,K^')≤ 18 pen(K)+16 pen(K^'). Thus, on the set Ω_n,N,K_max, there exists a constant C>0 such that for all h∈𝒮_K+M
𝔼[σ^2_K-σ^2_|I^2_n,N_Ω_n,N,K_max]≤ 34(h∈𝒮_K+M,Linfh-σ^2_|I^2_n+pen(K))
+160(h∈𝒯_K,Ksupν^2_1(h)-q(K,K))+CΔ^2
where
ν_1(h):=1/Nn∑_j=1^N∑_k=0^n-1h(X^j_kΔ)ζ^j,1_kΔ
with ζ^j,1_kΔ the error term. We set for all K,K^'∈𝒦,
G_K(K^'):=h∈𝒯_K,K^'supν^2_1(h)
and for N and n large enough, σ^2_K-σ^2_|I^2_n,N≤ 4(K+M)L. We deduce that,
𝔼[σ^2_K-σ^2_|I^2_n,N] ≤ 𝔼[σ^2_K-σ^2_|I^2_n,N_Ω_n,N,K_max]+𝔼[σ^2_K-σ^2_|I^2_n,N_Ω^c_n,N,K_max]
≤ 34K∈𝒦inf(h∈𝒮_K+Minfh-σ^2_|I^2_n+pen(K))
+CΔ^2+4(K+M)Lℙ(Ω^c_n,N,K_max)
+160𝔼[(G_K(K)-q(K,K))_+_Ω_n,N,K_max].
In the sequel, we refer to the proof of Proposition 6.1 in <cit.>. We know from Lorentz et al. (see <cit.>) that, given the unit ball B_._n(0,1) of the approximation subspace 𝒮_K+M with respect to the norm ._n defined as follows:
B_._n(0,1)={h∈𝒮_K+M : h_n≤ 1}={h∈𝒮_K+M : h≤1/τ_0}=B_2(0,1/τ_0),
we can find a ε-net E_ε such that for each ε∈(0,1], |E_ε|≤(3/ετ_0)^K+M.
Recall that 𝒯_K,K^'={g∈𝒮_K+M+𝒮_K^'+M, g_n=1, g_∞≤√(L)} and consider the sequence
(E_ε_k)_k≥ 1 of ε-net with ε_k=ε_0 2^-k and ε_0∈(0,1]. Moreover, set N_k = log(|E_ε_k|) for each k≥ 0. Then for each g∈𝒮_K+M+𝒮_K^'+M such that g_∞≤√(L), there exists a sequence (g_k)_k≥ 0 with g_k∈ E_ε_k such that g=g_0+∑_k=1^∞g_k-g_k-1. Set ℙ:=ℙ(.∩Ω_n,N,K_max) and
τ:=σ_1^2√(6x^n,N_0)+σ^2_1√(L)x^n,N_0+∑_k≥ 1ε_k-1{σ_1^2√(6x^n,N_k)+2σ^2_1√(L)x^n,N_k}=y^n,N_0+∑_k≥ 0y^n,N_k.
For all h∈𝒯_K,K^' and on the event Ω_n,N,K_max, one has h^2_n,N≤3/2h^2_n=3/2. Then, using the chaining technique of <cit.>, we have
ℙ(h∈𝒯_K,K^'supν_1(h)>τ) =ℙ(∃ (h_k)_k≥ 0∈∏_k≥ 0E_ε_k/ ν_1(h)=ν_1(h_0)+∑_k=1^∞ν_1(h_k-h_k-1)>τ)
≤∑_h_0∈ E_0ℙ(ν_1(h_0)>y^n,N_0)+∑_k=1^∞∑_h_k-1∈ E_ε_k-1h_k∈ E_ε_kℙ(ν_1(h_k-h_k-1)>y^n,N_k).
According to Equation (<ref>) and Lemma <ref>, there exists a constant C>0 such that
(ν_1(h_0) > y^n,N_0) ≤ (ν_1(h_0) > σ_1√(6x^n,N_0)+σ^2_1h_0_∞x^n,N_0)
≤ exp(-CNnx^n,N_0),
∀ k≥ 1, (ν_1(h_k - h_k-1) > y^n,N_k) ≤ (ν_1(h_k - h_k-1) > σ_1√(6x^n,N_k)+σ^2_1h_k - h_k-1_∞x^n,N_k)
≤ exp(-CNnx^n,N_k).
Finally, since N_k = log(|E_ε_k|) for all k≥ 0, we deduce that
ℙ(h∈𝒯_K,K^'supν_1(h)>τ) ≤|E_ε_0|exp(-CNnx^n,N_0) + ∑_k=1^∞(|E_ε_k|+|E_ε_k-1|)exp(-CNnx^n,N_k)
≤exp(N_0-CNnx^n,N_0)+∑_k=1^∞exp(N_k+N_k-1-CNnx^n,N_k).
We choose x^n,N_0 and x^n,N_k, k≥ 1 such that,
N_0 - CNnx^n,N_0 = -a(K+K^' + 2M)-b
N_k + N_k-1 - CNnx^n,N_k = -k(K + K^'+2M) - a(K + K^' + 2M) - b
where a and b are two positive real numbers. We deduce that
x^n,N_k≤ C_0(1+k)K + K^'+2M/Nn and τ≤ C_1σ^2_1√(√(L)K + K^'+2M/Nn)
with C_0>0 and C_1 two constants depending on a and b. It comes that
ℙ(h∈𝒯_K,K^'supν_1(h)>τ) ≤e/(e-1)e^-bexp{-a(K + K^' + 2M)}.
From Equation (<ref>), we set
q(K,K^')=κ^*σ^2_1√(L)K + K^' + 2M/Nn
where κ^*>0 depends on C_1>0.
Thus, for all K,K^'∈𝒦,
ℙ({h∈𝒯_K,K^'supν^2(h)>q(K,K^')}∩Ω_n,N,K_max)≤e^-b+1/e+1exp{-a(K + K^' + 2M)}
and there exists constants c,C>0 such that
𝔼[(G_K(K^')-q(K,K^'))_+_Ω_n,N,K_max]
≤c(K+K^')/Nnℙ({t∈𝒯_K,K^'supν^2(t)>q(K,K^')}∩Ω_n,N,K_max)
≤C/Nnexp{-a/2(K+K^')}.
Finally, there exists a real constant C>0 such that,
𝔼[(G_K(K)-q(K,K))_+_Ω_n,N,K_max]≤∑_K^'∈𝒦𝔼[(G_K(K^')-q(K,K^'))_+_Ω_n,N,K_max]≤C/Nn.
We choose the penalty function pen such that for each K∈𝒦,
pen(K)≥κσ^2_1√(L)K+M/Nn.
For N large enough, one has σ^2_1≤√(L). Thus, we finally set pen(K)=κ(K+M)log(N)/Nn with L = log(N). Then, there exists a constant C>0 such that,
𝔼[σ^2_K-σ^2_|I^2_n,N] ≤ 34K∈𝒦inf{h∈𝒮_K+Minfh-σ^2_|I^2_n+pen(K)}+C/Nn.
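Schematically, the data-driven selection defined by this penalized criterion can be written as follows. This is only an illustrative sketch: fit_sigma2 is a placeholder for a routine returning the empirical contrast γ_n,N(σ̂^2_K) of the projection estimator on 𝒮_K+M, and the grid, κ and M below are arbitrary.

```python
# Schematic penalized dimension selection: K_hat = argmin_K { gamma(K) + pen(K) },
# with pen(K) = kappa * (K + M) * log(N) / (N * n). `fit_sigma2` is an assumed routine.
import numpy as np

def select_dimension(fit_sigma2, N, n, M=3, kappa=5.0, K_grid=(1, 2, 4, 8, 16, 32)):
    """Return the dimension K minimizing the penalized least-squares contrast."""
    def pen(K):
        return kappa * (K + M) * np.log(N) / (N * n)
    crits = {K: fit_sigma2(K) + pen(K) for K in K_grid}
    return min(crits, key=crits.get)

# Example with a fabricated contrast curve (for illustration only):
K_hat = select_dimension(lambda K: 1.0 / (K + 3) + 0.001 * K, N=100, n=100)
print("selected K:", K_hat)
```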
§.§.§ Proof of Theorem <ref>
From Equation (<ref>), we have
K := K∈𝒦minγ_n,N(σ^2_K+M,L) + pen(K).
Then, for all K∈𝒦 and for all h ∈𝒮_K+M,L, we have
γ_n,N(σ^2_K,L) + pen(K) ≤γ_n,N(h) + pen(K).
Then, for all K∈𝒦 and for all h∈𝒮_K+M,L,
γ_n,N(σ^2_K,L) - γ_n,N(σ^2) ≤ γ_n,N(h) - γ_n,N(σ^2) + pen(K) - pen(K)
σ^2_K,L - σ^2^2_n,N≤ h - σ^2^2_n,N + 2ν(σ^2_K,L - h) + 2μ(σ^2_K,L - h) + pen(K) - pen(K).
We have for all a>0,
2𝔼[μ(σ^2_K,L-h)] ≤ 2/a𝔼σ^2_K,L-σ^2^2_n,N+2/ah∈𝒮_K+M,Linfh-σ^2^2_n +a/Nn∑_j=1^N∑_k=0^n-1𝔼[(R^j_kΔ)^2]
and since ν = ν_1 + ν_2 + ν_3, according to the proof of Theorem <ref>, there exists a constant c>0 such that
[ν(σ^2_K,L - h)] ≤ c[ν_1(σ^2_K,L - h)]
where, for i∈{1,2,3} and for all h ∈𝒮_K+M,L, K∈𝒦,
ν_i(h) = 1/Nn∑_j=1^N∑_k=0^n-1h(X^j_kΔ)ζ^j_kΔ,
and the ζ^j_kΔ are as defined previously. Then,
(1-2/a)[σ^2_K,L - σ^2^2_n,N] ≤ (1+2/a)h∈𝒮_K+M,Linfh-σ^2^2_n + 2c[ν_1(σ^2_K,L - h)]
+ pen(K)-pen(K) + a/Nn∑_j=1^N∑_k=0^n-1[(R^j_kΔ)^2]
From Equation (<ref>) and for a = 4, there exists a constant C>0 such that
[σ^2_K,L - σ^2^2_n,N] ≤ 3h∈𝒮_K+M,Linfh-σ^2^2_n + 4c[ν_1(σ^2_K,L - h)] + 2(pen(K)-pen(K)) + CΔ^2.
Since for all K∈𝒦, pen(K) ≥ 2κ^*σ^2_1(K+M)√(2L)/(Nn), define the function q: (K,K^') ↦ q(K,K^') such that
q(K,K^') = 2C^*σ^2_1(K+K^'+2M)√(2L)/Nn≥ 2σ^2_1v√(x^n,N) + σ^2_1vx^n,N
where
x^n,N∝(K+K^'+2M/Nn)^2 and v = √(2L).
The constant C^*>0 depends on constants κ^*>0 and c>0 of Equation (<ref>) such that
4cq(K,K^') ≤pen(K) + 2pen(K^').
Then for all K ∈𝒦 and for all h∈𝒮_K+M,L,
[σ^2_K,L - σ^2^2_n,N] ≤ 3(h∈𝒮_K+M,Linfh-σ^2^2_n + pen(K)) + 4c[(ν_1(σ^2_m,L - h) - q(K,K))_+] + CΔ^2.
For all K∈𝒦 and for all h∈𝒮_K+M,L such that h_∞≤√(L), we have ,
σ^2_K,L - h^2_n,N≤σ^2_K,L - h^2_∞≤ 2L =: v^2.
Then, using Equation (<ref>) and Lemma <ref>, there exists a constant C>0 such that for all K,K^'∈𝒦 and for all h∈𝒮_K+M,L,
(ν_1(σ^2_K^',L - h) ≥ q(K,K^'), σ^2_K,L - h^2_n,N≤ v^2) ≤exp(-CNnx^n,N).
Since L = log(N), then for N large enough, σ^2_1≤√(log(N)), we finally choose
pen(K) = κ(K+M)log(N)/Nn
where κ>0 is a new constant. Since [ν_1(σ^2_K,L - h)] ≤O(√((K_max+M)log^2(N)/Nn)) (see proof of Theorem <ref>), for all K ∈𝒦 and h ∈𝒮_K+M,L, there exists a constant c>0 such that
[(ν_1(σ^2_K,L - h) - q(K,K))_+] ≤ K^'∈𝒦max{[(ν_1(σ^2_K^',L - h) - q(K,K^'))_+]}
≤ cq(K,K_max)K^'∈𝒦max{(ν_1(σ^2_K^',L - h) ≥ q(K,K^'))}.
From Equation (<ref>), we obtain that
[(ν_1(σ^2_K,L - h) - q(K,K))_+] ≤ cq(K,K_max)exp(-CNn)≤C/Nn
since K and K_max increase with the size N of the sample paths D_N,n, and
cNnq(K,K_max)exp(-CNn) → 0 as N →∞.
Then, from Equations (<ref>) and (<ref>), there exists a constant C>0 such that
[σ^2_K,L - σ^2^2_n,N] ≤ 3K∈𝒦inf{h∈𝒮_K+M,Linfh-σ^2^2_n + pen(K)} + C/Nn.
§.§.§ Proof of Theorem <ref>
The proof of Theorem <ref> is similar to the proof of Theorem <ref>. Then, from Equation (<ref>), for all h∈𝒮_K+M,
σ^2_K,L-σ^2_|I^2_n,1≤ 3h - σ^2_|I^2_n,1+20h∈𝒯_K,Ksupν^2(h)+20/n∑_k=0^n-1(R^1_kΔ)^2+2(pen(K)-pen(K)),
where 𝒯_K,K^' = {h ∈𝒮_K+M+𝒮_K^'+M, h_X = 1, h_∞≤√(L)}. Let q: 𝒦^2⟶_+ such that 160q(K,K^') ≤ 18pen(K) + 16pen(K^').
Recall that the 𝕃^2-norm ., the norm [._X] and the empirical norm ._n are equivalent on 𝕃^2(I) since the transition density is bounded on the compact interval I. Then, for all K ∈𝒦 and h ∈𝒮_K+M,L, we have
[σ^2_K,L-σ^2_|I^2_n,1] ≤ 3(h∈𝒮_K+M,Linfh-σ^2_|I^2_n+pen(K)) + 20(h∈𝒯_K,Ksupν^2_1(h)-q(K,K)) + CΔ^2
where
ν_1(h):=1/n∑_k=0^n-1h(X^1_kΔ)ζ^1,1_kΔ
with ζ^1,1_kΔ the error term. We set for all K,K^'∈𝒦, G_K(K^'):=h∈𝒯_K,K^'supν^2_1(h). Then, there exists C>0 such that
[σ^2_K,L-σ^2_|I^2_n,1] ≤ 3K∈𝒦inf{h∈𝒮_K+M,Linfh-σ^2_|I^2_n+pen(K)} + 20∑_K^'∈𝒦[(G_K(K^')-q(K,K^'))_+]
+ CΔ^2.
Considering the unit ball B_._X(0,1) of the approximation subspace given by
B_._X(0,1) = {h∈𝒮_K+M, h^2_X≤ 1} = {h∈𝒮_K+M, h^2≤1/τ_0}.
We obtain from the proof of Theorem <ref> with N=1 that,
∑_K^'∈𝒦[(G_K(K^')-q(K,K^'))_+]≤C/n,
where C>0 is a constant, q(K,K^') ∝σ^4_1(K+K^'+2M)√(log(n))/n and pen(K) ∝(K+M)log(n)/n. Then we obtain
𝔼[σ^2_K,L-σ^2_|I^2_n,1] ≤3/τ_0K∈𝒦inf{h∈𝒮_K+M,Linfh-σ^2_|I^2_n+pen(K)} + C/n.
Appendix
§.§ Calibration
Fix the drift function b(x) = 1-x, the time-horizon T=1 and at time t=0, x_0=0. Consider the following three models:
Model 1: σ(x)=1
Model 2: σ(x)=0.1+0.9/√(1+x^2)
Model 3: σ(x) = 1/3+sin^2(2π x)/π + 1/(π+x^2).
The three diffusion models satisfy Assumption <ref> and are used to calibrate the numerical constant κ of the penalty function given in Theorem <ref>.
As we already know, the adaptive estimator of σ^2 on the interval [-√(log(N)), √(log(N))] necessitates a data-driven selection of an optimal dimension through the minimization of the penalized least squares contrast given in Equation (<ref>). Since the penalty function pen(d_N)=κ (K_N+M)log^2(N)/N^2 depends on the unknown numerical constant κ>0, the goal is to select an optimal value of κ in the set 𝒱={0.1,0.5,1,2,4,5,7,10} of its possible values. To this end, we repeat 100 times the following steps:
* Simulate learning samples D_N and D_N^' with N∈{50,100}, N^'=100 and n ∈{100, 250}
* For each κ∈𝒱:
* For each K_N∈𝒦 and from D_N, compute σ^2_d_N,L_N given in Equations (<ref>) and (<ref>).
* Select the optimal dimension K_N∈𝒦 using Equation (<ref>)
* Using the learning sample D_N^', evaluate σ^2_d_N,L_N-σ^2_A^2_n,N^' where d_N=K_N+M.
Then, we calculate average values of σ^2_d_N,L_N-σ^2_A^2_n,N^' for each κ∈𝒱 and obtain the following results:
We finally choose κ=5∈𝒱 as the optimal value, based on the results reported in Figure <ref>.
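For completeness, the calibration protocol above can be summarized in the following schematic loop. The routines simulate_paths, estimate_sigma2 and empirical_risk are placeholders for the simulation of Models 1-3, the adaptive estimator computed with a given κ, and the empirical validation risk, respectively; they are assumptions of this sketch, not implementations from the paper.

```python
# Schematic version of the kappa-calibration loop described above.
import numpy as np

def calibrate_kappa(simulate_paths, estimate_sigma2, empirical_risk,
                    grid=(0.1, 0.5, 1, 2, 4, 5, 7, 10), repetitions=100,
                    N=50, N_prime=100, n=100):
    risks = {kappa: [] for kappa in grid}
    for _ in range(repetitions):
        D_N = simulate_paths(N, n)            # learning sample
        D_Np = simulate_paths(N_prime, n)     # validation sample
        for kappa in grid:
            sigma2_hat = estimate_sigma2(D_N, kappa)   # adaptive estimator for this kappa
            risks[kappa].append(empirical_risk(sigma2_hat, D_Np))
    avg = {kappa: float(np.mean(v)) for kappa, v in risks.items()}
    return min(avg, key=avg.get), avg          # selected kappa and the averaged risks
```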
§.§ Proof of Lemma <ref>
We obtain from the proof of Lemma 3 in Comte, Genon-Catalot and Rozenholc (2007)
that for each j∈[[1,N]], k∈[[0,n-1]] and p∈ℕ∖{0,1},
𝔼[exp(ug(X^j_kΔ)ξ^j,1_kΔ-au^2g^2(X^j_kΔ)/1-bu)|ℱ_kΔ]≤ 1
with a=e(4σ^2_1c^2)^2, b=4σ^2_1c^2eg_∞, u∈ℝ such that bu<1 and c>0 a real constant. Thus,
ℙ(1/Nn∑_j=1^N∑_k=0^n-1g(X^j_kΔ)ξ^j,1_kΔ≥ε, g^2_N,n≤ v^2)=𝔼(1_{∑_j=1^N∑_k=0^n-1ug(X^j_kΔ)ξ^(j,1)_kΔ≥ Nnuε}1_g^2_n,N≤ v^2)
=𝔼(1_{exp(∑_j=1^N∑_k=0^n-1ug(X^j_kΔ)ξ^(j,1)_kΔ)e^-Nnuε≥ 1}_g^2_N,n≤ v^2)
≤e^-Nnuε𝔼[_g^2_n,N≤ v^2exp{∑_j=1^N∑_k=0^n-1ug(X^j_kΔ)ξ^j,1_kΔ}].
It follows that,
ℙ(1/Nn∑_j=1^N∑_k=0^n-1g(X^j_kΔ)ξ^j,1_kΔ≥ε, g^2_n,N≤ v^2)
≤ exp{-Nnuε+Nnau^2v^2/1-bu}.
We set u=ε/(ε b+2av^2). Then, we have -Nnuε+Nnav^2u^2/(1-bu)=-Nnε^2/(2(ε b+2av^2)) and,
ℙ(1/Nn∑_j=1^N∑_k=0^n-1g(X^j_kΔ)ξ^j,1_kΔ≥ε, g^2_n,N≤ v^2) ≤exp(-Nnε^2/2(ε b+av^2))
≤exp(-CNnε^2/σ^2_1(εg_∞+4σ^2_1v^2))
where C>0 is a constant depending on c>0.
§.§ Proof of Lemma <ref>
Set K_n,N = K_N since N ∝ n. Let us remind the reader of the Gram matrix Ψ_K_N given in Equation (<ref>),
Ψ_K_N=[1/Nn𝐅^'_K_N𝐅_K_N]=(Ψ_K_N)
where,
𝐅_K_N:= ((B_ℓ(X^j_0),…,B_ℓ(X^j_(n-1)Δ)))_1 ≤ j ≤ N, -M ≤ℓ≤ K_N-1∈ℝ^Nn× (K_N+M)
The empirical counterpart Ψ_K_N is the random matrix of size (K_N+M) × (K_N+M) given by
Ψ_K_N:=1/Nn𝐅^'_K_N𝐅_K_N=(1/Nn∑_j=1^N∑_k=0^n-1f_ℓ(X^j_kΔ)f_ℓ^'(X^j_kΔ))_ℓ,ℓ^'∈[-M,K_N-1].
For all t=∑_ℓ=-M^K_N-1 a_ℓ B_ℓ,M∈ S_K_N,M, one has
t_n,N^2 = a^'Ψ_K_N a and t_n^2 = a^'Ψ_K_N a, with a=(a_-M,⋯,a_K_N-1)^'.
Under Assumption <ref>, we follow the lines of <cit.> Proposition 2.3 and Lemma 6.2. Then,
sup _t ∈ S_K_N,M,t_n=1|t_n,N^2-t_n^2| = sup _w ∈^K_N+M,Φ_K_N^1 / 2 w_2, K_N+M=1|w^'(Ψ_K_N-Ψ_K_N) w|
= sup _u ∈ℝ^K_N+M,u_2, K_N+M=1|u^'Ψ_K_N^-1 / 2(Ψ_K_N-Ψ_K_N) Ψ_K_N^-1 / 2 u|
= Ψ_K_N^-1 / 2Ψ_K_NΨ_K_N^-1 / 2-Id_K_N+M_op.
Therefore,
Ω_n, N, K_N^c={Ψ_K_N^-1 / 2Ψ_K_NΨ_K_N^-1 / 2-Id_K_N+M_op > 1 / 2}.
Since A_N = o(√(log(N))), we obtain from <cit.>, proof of Lemma 7.8, there exists a constant C>0 such that
(Ω^c_n,N,K_N)≤ 2(K_N+M)exp(-C log^3/2(N)).
Finally, since 2(K_N+M)exp(- (C/2) log^3/2(N)) ⟶ 0 as N ⟶ +∞, one concludes from Equation (<ref>) and for N large enough,
(Ω^c_n,N,K_N)≤ Cexp(- c log^3/2(N))
where c >0 and C>0 are new constants.
|
http://arxiv.org/abs/2307.04850v1 | 20230710184245 | SHAP@k:Efficient and Probably Approximately Correct (PAC) Identification of Top-k Features | [
"Sanjay Kariyappa",
"Leonidas Tsepenekas",
"Freddy Lécué",
"Daniele Magazzeni"
] | cs.LG | [
"cs.LG",
"cs.AI"
] |
Physics-Based Modeling and Validation of 2D Schottky Barrier Field-Effect Transistors
Ashwin Tunga^a,
Zijing Zhao,
Ankit Shukla,
Wenjuan Zhu,
and Shaloo Rakheja
Holonyak Micro and Nanotechnology Laboratory, University of Illinois at Urbana-Champaign, Urbana, IL USA
^[email protected]
August 12, 2023
===============================================================================================================================================================================================================================================================
The SHAP framework provides a principled method to explain the predictions of a model by computing feature importance. Motivated by applications in finance, we introduce the Top-k Identification Problem (TkIP), where the objective is to identify the k features with the highest SHAP values. While any method to compute SHAP values with uncertainty estimates (such as KernelSHAP and SamplingSHAP) can be trivially adapted to solve TkIP, doing so is highly sample inefficient. The goal of our work is to improve the sample efficiency of existing methods in the context of solving TkIP. Our key insight is that TkIP can be framed as an Explore-m problem <cit.>–a well-studied problem related to multi-armed bandits (MAB). This connection enables us to improve sample efficiency by leveraging two techniques from the MAB literature: (1) a better stopping-condition (to stop sampling) that identifies when PAC (Probably Approximately Correct) guarantees have been met and (2) a greedy sampling scheme that judiciously allocates samples between different features. By adopting these methods we develop KernelSHAP@k and SamplingSHAP@k to efficiently solve TkIP, offering an average improvement of 5× in sample-efficiency and runtime across most common credit related datasets.
§ INTRODUCTION
The ability to explain the predictions of ML models is of critical importance in highly regulated industries, where laws provide right to explanations for people who are adversely impacted by algorithmic decision making.
Specifically in finance, regulations like Fair Credit Reporting Act <cit.> and Equal Credit Opportunity Act <cit.> require a rejected loan/credit application (i.e. adverse action) to be explained to the borrower, by providing reasons for why the application was rejected (e.g., low credit score, high debit-to-income ratio, recent delinquencies, etc.).
Owing to its principled formulation, the SHAP framework <cit.> is the de facto choice for explaining model predictions in credit-risk assessment models <cit.>. While exact computation of SHAP values is computationally intractable,
sampling-based techniques like KernelSHAP <cit.> and SamplingSHAP provide a practical alternative to compute approximate SHAP values. Additionally, recent works have developed methods to quantify the approximation error of such sampling-based techniques, by providing confidence intervals (CIs) for the estimated SHAP values <cit.>.
In this paper, we introduce the Top-k Identification Problem (TkIP), where the objective is to identify the k most important features, i.e., those with the k highest SHAP values (referred to as the Top-k features). TkIP is motivated by an important real-world use-case of processing credit/loan applications, where the lender is required to provide the top features that contributed negatively to the model's prediction (i.e. explanations) in the event of a rejection; this is standard practice by credit/loan issuers in order to comply with the Equal Credit Opportunity Act <cit.>. Existing methods like KernelSHAP and SamplingSHAP can be straightforwardly adapted to identify Top-k features with PAC guarantees, by evaluating enough samples to sufficiently reduce the the CIs of the SHAP estimates. However, doing so can be computationally expensive as it often requires a very large number of samples.
Motivated by this problem, our paper investigates methods to improve the sample efficiency of KernelSHAP and SamplingSHAP, specifically to solve TkIP.
Our key insight is that TkIP can be framed as an Explore-m problem <cit.> – a well-studied problem related to multi-arm bandits (MAB), where the goal is to identify a subset of arms with the highest expected payoffs. By leveraging this connection, we make the following key changes to the SHAP estimation algorithms based on ideas that have been developed in the MAB literature:
* Overlap-based stopping condition <cit.>: Sampling for KernelSHAP and SamplingSHAP is usually done until the CI widths of SHAP values associated with all the features falls below a threshold. This naive stopping condition is unnecessarily conservative for solving TkIP; so instead, we use a stopping condition that is based on the overlap in CIs between different features (instead of the absolute CI width of each feature). This allows for early-stopping once a PAC solution for TkIP has been identified.
* Greedy sampling scheme <cit.>: For SamplingSHAP, the default sampling scheme of allocating samples according to the variance of each feature is ill-suited for solving TkIP. Instead, we leverage a greedy sampling scheme that is designed to efficiently solve the Explore-m problem by allocating a higher number of samples to features that are likely to change the Top-k subset. This enables a significant reduction in sample-costs compared to the variance-based sample allocation. Note that (C2) requires the ability to allocate samples to evaluate the SHAP values on a per-feature basis, so it cannot be applied to KernelSHAP.
We use the above techniques to develop KernelSHAP@k (KernelSHAP + C1) and SamplingSHAP@k (SamplingSHAP + C1 + C2). We evaluate these methods with the most common credit related datasets and show that they offer significant improvements in sample efficiency and runtime, compared to their respective baselines.
The rest of this paper is structured as follows:
* In Section <ref>, we provide background on sampling-based methods that can be used to estimate SHAP values and related work on variance reduction and uncertainty estimation.
* In Section <ref>, we formally define the Top-k Identification problem and develop a naive stopping condition that can be used with Kernel/Sampling SHAP to correctly identify Top-k features. Nonetheless, this condition is sample-inefficient.
* In Section <ref>, we develop KernelSHAP@k and SamplingSHAP@k to efficiently solve TkIP with PAC guarantees. The key insight here is framing TkIP as an Explore-m problem.
* In Section <ref>, we evaluate Kernel/Sampling-SHAP@k on a suite of credit related datasets and demonstrate significant improvements in sample-costs and runtime.
* We discuss limitations and future directions in Section <ref>, and conclude in Section <ref>.
§ BACKGROUND AND RELATED WORK
The goal of our work is to modify existing algorithms to efficiently identify Top-k features with PAC guarantees. In this section, we provide background on the SHAP framework and discuss existing sampling-based techniques (SamplingSHAP and KernelSHAP) that estimate SHAP values. Additionally, we discuss related works that extend these method by reducing the variance of the estimates and quantify uncertainty in the form of confidence intervals.
§.§ SHAP
SHAP (SHapley Additive exPlanations) is based on a game-theoretic concept called Shapley values <cit.>, which is a method to fairly distribute the payoffs of a cooperative game among the players. This is done by measuring the average marginal contribution of a single player computed across all possible coalitions of players. Such a formulation of assigning credit has been shown to uniquely satisfy a set of fairness axioms such as local accuracy, missingness and consistency <cit.>. SHAP applies this concept to explaining the predictions of the model by treating individual features as players and the output of the model as the payoff. By measuring the marginal contributions of features across different coalitions, SHAP assigns a score to each feature that reflects its contribution to the final prediction of the model. Given a set of features D ={1,2,..,d}, the SHAP value ϕ_i for the i^th feature of an input x with a model f is computed by taking the weighted average of the change in predictions of f when feature i is added to a subset of features S as shown in Eqn.<ref>.
ϕ_i(x,f) = ∑_S ⊆ D ∖{i}|S|!(d-|S|-1)!/d![f(x_S ∪{i}) - f(x_S)]
Here x_S is the feature vector restricted to S. To evaluate the model function with missing features in the above expression, we use the interventional SHAP formulation <cit.>, where missing feature values are set to a default baseline. Note that computing SHAP values exactly has a computational complexity of Θ(2^d). While there are efficient methods to compute exact SHAP values for specific models such as decision trees <cit.>, in general, the exponential complexity makes it computationally intractable to evaluate exact SHAP values when the number of features is large. To reduce computational costs, sampling-based approximation techniques have been proposed. We explain two such methods in the remainder of this section.
§.§ SamplingSHAP
SamplingSHAP estimates SHAP values by only evaluating a subset of terms in Eqn.<ref> and then averaging over the resulting marginals. Štrumbelj et al. <cit.> provide an efficient algorithm to perform Monte Carlo sampling according to the probability distribution induced by the weights in Eqn.<ref>. To quantify the uncertainty in the SHAP estimate based on the number of samples, Merrick et al. <cit.> proposed the use of Standard Error of Means (SEM) to derive confidence intervals through the Central Limit Theorem (CLT). Specifically, the Monte Carlo simulation is run T_i times for each feature i, thus giving a set of SHAP estimates {ϕ̂_i^j}_j=1^T_i. Finally, the SHAP value for i is set to be ϕ̂_i = ∑^T_i_j = 1ϕ̂_i^j / T_i. Eqn.<ref> shows how the 95% CI for the i^th feature (there's a 0.95 probability of ϕ_i being in CI_i):
CI_i = [ϕ̂_i ± 1.96σ_i/√(T_i)].
Here σ_i denotes the standard deviation of the set of SHAP estimates {ϕ̂_i^j}_j=1^T_i. Note that we can achieve any confidence that we want, by tweaking the parameter 1.96 accordingly.
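One common way to produce the per-feature estimates {ϕ̂_i^j} and the SEM-based interval above is permutation sampling: each estimate is the marginal contribution of feature i given the features preceding it in a uniformly random permutation, with missing features set to a baseline. The sketch below is a minimal illustration with our own names and a toy linear model, not the exact algorithm of the cited works.

```python
# Minimal per-feature Monte Carlo SHAP estimate with a CLT-based confidence interval.
import numpy as np

def one_shap_draw(f, x, baseline, i, rng):
    d = len(x)
    perm = rng.permutation(d)
    pos = int(np.where(perm == i)[0][0])
    before = perm[:pos]                       # features preceding i in the permutation
    z = np.array(baseline, dtype=float)
    z[before] = np.asarray(x, dtype=float)[before]
    f_without = f(z)                          # value before adding feature i
    z[i] = x[i]
    return f(z) - f_without                   # marginal contribution of feature i

def sampling_shap_ci(f, x, baseline, i, T=500, z_crit=1.96, seed=0):
    rng = np.random.default_rng(seed)
    draws = np.array([one_shap_draw(f, x, baseline, i, rng) for _ in range(T)])
    mean = draws.mean()
    half = z_crit * draws.std(ddof=1) / np.sqrt(T)   # SEM-based half-width
    return mean, (mean - half, mean + half)

w = np.array([1.0, -2.0, 0.5])
print(sampling_shap_ci(lambda z: float(w @ z), [1.0, 1.0, 1.0], [0.0, 0.0, 0.0], i=1))
```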
Additionally, prior works have also tried to reduce the length of the CIs through variance reduction techniques. For instance, Mitchell et al. <cit.> propose to evaluate negatively correlated pairs of samples in SamplingSHAP to reduce the variance σ_i of SHAP estimates. Sampling techniques have also been used in the context of Game Theory for computing Shapley values <cit.>.
§.§ KernelSHAP
KernelSHAP <cit.> is another sampling-based method that views SHAP values as the solution to a weighted regression problem. Specifically, consider a linear model of the form g(S) = ϕ_0 + ∑_i∈ Sϕ_i, where ϕ_i denote the SHAP values. KernelSHAP proposes to estimate these values by solving the following optimization problem:
{ϕ_i} = _ϕ_1,..ϕ_d∑_S⊆ Dw(S)(f(S)-g(S))^2.
Here, w(S) is a weighting function that is chosen in a way that makes solving Eqn.<ref> equivalent to finding SHAP values. Note that evaluating Eqn.<ref> requires evaluating an exponential number of terms in the summation, making the computation of exact SHAP values intractable. Fortunately, an approximation of Eqn.<ref> that evaluates only a small subset of terms is sufficient in practice to estimate SHAP values. Furthermore, a recent work <cit.> has shown that the variance of SHAP values, computed by using KernelSHAP, can be used to derive confidence intervals, providing a means of detecting convergence in the SHAP estimates; this leads to CIs identical to those of Eqn.<ref>. Additionally, this work also uses paired-sampling (similar to <cit.>) with KernelSHAP to reduce computational costs, by reducing the variance of the SHAP estimates.
§ PROBLEM SETTING
In this section, we formally define the Top-k identification problem (TkIP), the goal of which is to identify the features with the highest SHAP values. To apply sampling based techniques to solve TkIP, we define an (ϵ, δ)-PAC solution for it, which allows for an ϵ-approximate version of the solution with a low probability of failure (δ). Finally, we describe a naive stopping condition that can be used with Kernel/Sampling-SHAP to derive a (ϵ, δ)-PAC solution. We demonstrate that this naive solution is sample-inefficient, motivating the need for our proposed solutions that improve sample-efficiency.
§.§ Top-k identification problem
Consider a model f:ℐ→ℝ, which acts on a d-dimensional input x∈ℐ to produce a prediction p=f(x). For an input x∈ℐ, let {ϕ_1, ϕ_2, ..., ϕ_d} denote the set of SHAP values corresponding to the input features D={1, 2,.., d} respectively. To simplify notation, let us assume that the features are indexed such that:
ϕ_1 ≥ϕ_2 ≥ϕ_3.. ≥ϕ_d.
The goal of TkIP is to identify the k features: Topk= {1,2,..,k} corresponding to the k highest SHAP values 𝒮={ϕ_1, ϕ_2, .. ϕ_k}. Note that the ordering of features in Topk does not matter. Solving TkIP exactly requires us to precisely evaluate all the SHAP values, which is computationally intractable. Instead, we define ϵ-approximate and (ϵ, δ)-PAC solutions for TkIP that are more useful in the context of sampling-based PAC methods.
* ϵ-approximate solution: For a given accuracy parameter ϵ∈ (0,1), consider a subset of features D^*⊂ D such that |D^*| = k. D^* is an ϵ-approximate solution to TkIP if it satisfies the following:
ϕ_i ≥ϕ_k-ϵ, ∀ i ∈ D^*.
* (ϵ, δ)-PAC solution: For given accuracy and confidence parameters ϵ, δ∈ (0,1), D^* is said to be an (ϵ, δ) solution for TkIP if it is an ϵ-approximate solution with a probability at least 1-δ:
[ϕ_i ≥ϕ_k-ϵ, ∀ i ∈ D^*] ≥ 1-δ.
In other words, here we allow for randomized algorithms that should compute D^* with controllable (low) probability of failure.
This relaxed notion of the solution allows for a feature i to be returned as part of the solution even if i ∉ Topk, as long as the corresponding SHAP value ϕ_i is ϵ-close to ϕ_k (i.e. the k^th SHAP value).
§.§ PAC solution for TkIP with naive stopping condition
In both KernelSHAP and SamplingSHAP, we can use the CLT-based approaches mentioned in Sections <ref>, <ref> to obtain confidence intervals of the following form. Let ϕ_i be the true SHAP value for feature i, and let ϕ̂_i be our approximation for it. Then, if we repeat the corresponding algorithm T_i times, with probability at least 1-δ/d we have:
|CI_i| = 2 · Z(δ / d) σ_i/√(T_i)
In the above, Z(δ / d) is the critical value from the standard normal distribution for the desired level of confidence; note that this value is a small constant. It is clear from Eqn. <ref>, that the larger T_i is, the closer our approximation is to the true value. One way to identify the Topk features is by running the SHAP estimation algorithm (i.e. adding more samples) until the CIs for all the features are small enough to meet the following stopping condition:
|CI_i| = 2 · Z(δ / d) σ_i/√(T_i)≤ϵ, ∀ i∈ D.
We call this the naive stopping condition, and in Theorem <ref> we show that it indeed leads to an (ϵ, δ)-PAC solution for TkIP. Thus, Kernel/Sampling-SHAP can be straightforwardly adapted to solve TkIP by using enough samples to meet this stopping condition. In the following subsection, we will explain why this naive approach is sample-inefficient with the aid of an example, motivating the need for a better stopping condition and sampling technique.
Let 𝒮={ϕ̂_1, ϕ̂_2,.., ϕ̂_d} denote the SHAP estimates of input features D={1,2,..,d}, such that the intervals CI_i defined using a confidence of δ/d satisfy |CI_i| ≤ϵ, ∀ i∈ D. Then, the set D^*⊂ D consisting of the k features with the largest ϕ̂_i is an (ϵ, δ)-PAC solution for TkIP.
We show that when ϕ_i ∈ CI_i for every i, the solution is ϵ-approximate. Using a union bound over all features we have:
[ϕ_i ∈ CI_i, ∀ i] = 1 - [∃ i: ϕ_i ∉ CI_i] ≥ 1 - ∑^d_i = 1δ/d = 1 - δ
For the inequality above we used the definition of CI_i, which states that [ϕ_i ∉ CI_i] ≤δ / d.
Clearly, if we prove that ϕ_i ∈ CI_i, ∀ i implies an ϵ-approximate solution we are done. Therefore, for the sake of contradiction, assume that the resulting solution is not ϵ-approximate. This means that there exists feature ĩ with ϕ_ĩ < ϕ_k - ϵ, which still made it in our top-k solution. By definition of Topk and CI_i, we have that for all i ∈Topk ϕ̂_i ≥ϕ_k - ϵ/2. By definition of CI_ĩ, we have ϕ̂_ĩ≤ϕ_ĩ + ϵ/2. Combining this with ϕ_ĩ < ϕ_k - ϵ, gives ϕ̂_ĩ < ϕ_k - ϵ/2. Hence, ĩ could never be chosen instead of any i ∈Topk in the returned solution.
§.§ Understanding the inefficiencies of the naive stopping condition
The naive stopping condition requires the CIs of all the features to be of width at most ϵ. For a feature i, the number of samples N_i necessary to achieve this is proportional to the variance of the feature's SHAP estimate (N_i∝σ^2_i), resulting in high-variance features incurring a higher sample-cost. To illustrate, we apply SamplingSHAP to explain the prediction of an MLP model on a single example from the UCI Credit dataset. To identify the Top-k features (with k=4), we obtain CIs by running SamplingSHAP multiple times for each feature, until the stopping condition in Eqn. <ref> is met. We visualize the CIs of the SHAP estimates of the individual features in Fig.<ref>a, where the Top-4 features are marked in green. To understand the cost of this stopping condition, we plot the number of function evaluations consumed by the algorithm in Fig.<ref>d and the variance of the SHAP estimate for each feature in Fig.<ref>c. As expected, we find that the cost is proportional to the variance of the per-feature SHAP estimate, resulting in a high sample-cost for high-variance features.
A key drawback of the naive sampling scheme is that it requires |CI_i| ≤ϵ for all features, regardless of the uncertainty that the feature belongs in Topk. This results in a lot of wasted samples. For instance, in the example in Fig.<ref>, ϕ̂_3 (SHAP estimate for feature-3) is much higher compared to the other features, allowing us to conclude with high confidence that 3∈Topk early on in the sampling process and avoid sampling feature-3 further. However the naive sampling scheme lacks such adaptivity and forces this high-variance feature to continue sampling until |CI_3| ≤ϵ, thus leading to a lot of wasted samples and contributing significantly to the sample cost of SamplingSHAP. In the next section we develop SamplingSHAP@k and KernelSHAP@k to avoid such wasted samples by using a modified stopping condition and sampling scheme.
§ SHAP@K: FRAMING TKIP AS AN EXPLORE-M PROBLEM
The key insight of our work is that TkIP can be framed as an Explore-m problem–a well-studied problem in multi-armed bandits (MAB), where the goal is to identify the arms with the highest expected payoffs in a sample-efficient way <cit.>. Formally, given N arms, each with some unknown distribution of payoffs, the objective is to identify (with PAC guarantees) the subset of m arms with the highest expected payoff. Note that TkIP has a 1-1 correspondence with the Explore-m problem. The arms in MAB are equivalent to the features in the context of SHAP, and the reward obtained by pulling an arm is equivalent to the SHAP estimate of a specific feature obtained through a single sample. The goal is to identify the subset of m arms/k features with the highest expected rewards/SHAP values. This connection allows us to leverage methods from the MAB literature to efficiently solve TkIP. Hence, we propose changes to the earlier sampling scheme and stopping condition, to develop sample efficient variants of Kernel and Sampling SHAP.
§.§ Overlap-based stopping condition (C1)
Inspired by Kalyanakrishnan et al. <cit.>, we use the stopping condition in Theorem <ref> that considers the overlap in CIs between the SHAP estimates of different features. By only considering the overlap between the CIs, the improved stopping condition avoids the need to reduce all the CI widths to below ϵ, as shown in Fig. <ref>b. Through experimental evaluations, we show that compared to the naive stopping condition, this results in a significant reduction in the number of samples necessary to identify the Topk features (Fig. <ref>d).
We now introduce some notation. Let T_i be the number of SHAP estimates that we have collected so far for feature i. For the desired confidence δ, we define a δ / d confidence interval CI_i = [α_i, β_i] as before, where ϕ̂_i is the current SHAP estimate, α_i = ϕ̂_i - Z(δ / d)σ_i/√(T_i) and β_i = ϕ̂_i + Z(δ / d)σ_i/√(T_i).
Let High denote the set of k features with the highest SHAP estimates ϕ̂_i and Low denote the remaining set of d-k features. Let h be the feature in High with the lowest lower confidence bound, i.e. h = argmin_i∈ High{α_i}, and let ℓ be the feature in Low with the highest upper confidence bound, i.e. ℓ=argmax_i∈ Low{β_i}. Then, High is an (ϵ, δ)-PAC solution for TkIP if the following condition is satisfied:
β_ℓ - α_h ≤ϵ.
This proof is identical to the proof of Theorem 1 from <cit.> with one minor difference. The authors in <cit.> use Hoeffding's inequality prior to taking a union bound to show that the failure probability is at most δ. Here, we do not need the application of Hoeffding's inequality, since we already have the CLT guarantees for the CIs.
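The stopping condition of Theorem <ref> only requires the current means and CI half-widths. A small self-contained check (illustrative, with our own variable names) is:

```python
# Overlap-based stopping check: split features into High (top-k estimates) and Low,
# find h and l as in the theorem, and test whether beta_l - alpha_h <= eps.
import numpy as np

def overlap_condition_met(means, half_widths, k, eps):
    means = np.asarray(means)
    half = np.asarray(half_widths)
    order = np.argsort(means)[::-1]
    high, low = order[:k], order[k:]
    h = high[np.argmin(means[high] - half[high])]     # lowest lower bound within High
    l = low[np.argmax(means[low] + half[low])]        # highest upper bound within Low
    return (means[l] + half[l]) - (means[h] - half[h]) <= eps, h, l

print(overlap_condition_met([0.9, 0.5, 0.2, 0.1], [0.05] * 4, k=2, eps=0.01))
```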
§.§ Greedy sampling scheme (C2)
The default variance-based sampling scheme used by SamplingSHAP minimizes the CIs for all features. Such sampling schemes are inefficient for the stopping condition in Theorem <ref>, which only depends on two features (h and ℓ) at any given point in the sampling process. To improve the sample efficiency, we consider a greedy sampling strategy <cit.> as described in Algorithm <ref>. The algorithm starts by using any feature-wise SHAP estimation algorithm (e.g., SamplingSHAP) to find an initial set of SHAP estimates {ϕ̂_i^j} for each input feature i; a feature-wise SHAP estimator computes the SHAP values independently for each feature. The mean SHAP estimates are used to categorize the features into the two groups High, Low. Then, the algorithm identifies h and ℓ as defined in Theorem <ref>, and evaluates additional SHAP estimates for these two features. These steps are repeated until the stopping condition is met. At this point, High will be a valid (ϵ, δ)-PAC solution for TkIP. This scheme improves sample efficiency by allocating more samples to (h,ℓ), which are exactly the features that can potentially affect what is inside Topk. To see why this algorithm terminates, notice that in each iteration exactly 2 CIs shrink. Therefore, in the worst case, there will come a point where all CIs will be of length at most ϵ, and thus the stopping condition will trivially be true.
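Putting the pieces together, a schematic version of the resulting greedy loop reads as follows. This is our own sketch, not the exact Algorithm <ref>: draw(i) is again an assumed per-feature SHAP estimator, and the batch sizes are arbitrary.

```python
# Greedy allocation: after an initial batch of per-feature estimates, repeatedly draw
# additional estimates only for the boundary pair (h, l) until the overlap condition holds.
import numpy as np
from scipy.stats import norm

def sampling_shap_at_k(draw, d, k, eps, delta, init=100, batch=20, max_iter=10**5):
    z_crit = norm.ppf(1 - delta / (2 * d))
    samples = [[draw(i) for _ in range(init)] for i in range(d)]
    for _ in range(max_iter):
        means = np.array([np.mean(s) for s in samples])
        half = np.array([z_crit * np.std(s, ddof=1) / np.sqrt(len(s)) for s in samples])
        order = np.argsort(means)[::-1]
        high, low = order[:k], order[k:]
        h = high[np.argmin(means[high] - half[high])]
        l = low[np.argmax(means[low] + half[low])]
        if (means[l] + half[l]) - (means[h] - half[h]) <= eps:
            return set(high.tolist())                 # (eps, delta)-PAC Top-k set
        for i in (h, l):                              # sample only the two boundary features
            samples[i].extend(draw(i) for _ in range(batch))
    return set(order[:k].tolist())
```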
§.§ KernelSHAP@k and SamplingSHAP@k
We apply the above changes to existing algorithms to propose KernelSHAP@k (KernelSHAP + C1) and SamplingSHAP@k (SamplingSHAP + C1 + C2). In both cases, we incrementally add SHAP estimates ϕ̂^j_i until the stopping condition (C1) is met and the Topk features are identified. Additionally, for SamplingSHAP@k, we use the more efficient greedy sampling scheme (C2) that allocates samples only to features that influence the stopping condition. Note that the greedy sampling scheme (C2) requires the ability to compute the SHAP values of features individually. Thus, we cannot apply C2 to KernelSHAP as it estimates the SHAP values of all features together. In contrast, SamplingSHAP estimates SHAP values per-feature, which makes it compatible with C2.
§ EXPERIMENTS
To quantify the improvements in sample efficiency of our proposed methods, we compare the sample cost (i.e. number of function evaluations) of Kernel/SamplingSHAP@k with that of Kernel/SamplingSHAP (with naive stopping condition) using various credit-related datasets. We present the experimental setup, followed by the results comparing sample costs and sensitivity studies that quantify how these costs change with the accuracy parameter ϵ.
§.§ Experimental setup
Table <ref> lists the datasets used in our experiments, along with a brief description of the prediction task, number of features, and train/test split. In each case, we train a 5-layer MLP model on the binary classification task using the training set for 100 epochs, and use this model to make predictions on the test set. For the negatively classified examples in the test set (indicating a high likelihood of the credit application being rejected), we use different methods to compute the Top-4 features that contributed the most to the negative prediction in terms of their SHAP values[Our methodology of only evaluating explanations for negative outcomes is motivated by regulations that require explanations to be provided in case of adverse actions (e.g., credit application being rejected).]. We use interventional SHAP for our experiments and use a positively classified example from the training set as our baseline. We compare the sample-efficiency of various methods in terms of the number of function (f) evaluations and runtime required to identify the Top-4 features with PAC guarantees[Runtime measured on a machine with 32-core AMD CPU and 128GB of memory. Code to reproduce results is included in the supplementary material.].
§.§ Results
Table<ref> compares the average sample cost (i.e. number of function evaluations) and average runtime required by different methods to identify Top-4 features with a (ϵ=0.005, δ=10^-6)-PAC guarantee across different datasets. Our evaluations show that Kernel/SamplingSHAP@k significantly outperform their baseline counterparts Kernel/SamplingSHAP, offering between 1.2×-14.2× improvement in sample efficiency and between 1.2× -14.7× improvement in runtime. Between SamplingSHAP@k and KernelSHAP@k, we find that the method with the better sample-cost depends on the dataset in question. However, SamplingSHAP@k has a consistently lower runtime compared to kernelSHAP@k, even in cases when it has a higher sample cost. For instance, for the UCI credit dataset, we find that SamplingSHAP@k has roughly twice the sample cost compared to KernelSHAP@k, but it is 10× faster in terms of runtime. The reason for this is that each KernelSHAP estimate is more expensive to compute as it requires solving a weighted regression problem using the outputs of the model. In contrast, SamplingSHAP works by just computing a simple average on the outputs of the model, which requires much less compute, resulting in a faster runtime.
§.§ Sensitivity studies
To understand how the accuracy parameter ϵ influences the sample-efficiency of various methods, we perform sensitivity studies by varying ϵ between [0.005, 0.01]. For different values of ϵ, we plot the sample-cost (i.e. number of function evaluations) and runtime of different methods across the four datasets considered in our experiments. Note that a lower value of ϵ implies a lower margin of error in identifying the Top-4 features and requires estimating SHAP values with greater precision (narrower CIs). As ϵ is reduced from 0.01 to 0.005, we find that the sample-costs and runtimes of all methods increase. Notably, the rate of this increase is much higher for Sampling/KernelSHAP, compared to Sampling/KernelSHAP@k. This is because the naive stopping condition used by Sampling/KernelSHAP requires the CI widths of the SHAP estimates of all features to be lower than ϵ, which drives up the samples required. In contrast, the stopping condition used by Sampling/KernelSHAP@k, allows for the CI widths of the features that don't influence the stopping condition to be much higher than ϵ and thus requires fewer samples.
§ LIMITATIONS AND FUTURE WORK
We discuss the limitations of our work and future directions of research in this section.
Feature dependence: Since our work builds on the SHAP framework, it shares the limitations of SHAP. Importantly, SHAP assumes that the features of the input are not correlated. This assumption is typically not true in most practical settings. To address this issue, methods like GroupSHAP <cit.> have been developed, which groups features that are highly correlated and assigns attributions to groups of features instead of individual features. We leave the evaluation of our methods to the GroupSHAP setting as part of future work.
Ordering of Top-k features: Our proposed methods only solve the problem of identifying the Topk features. The features returned by our methods may not be in the right order. Thus, our methods may not be well suited for applications where the order of reporting the top-k features is important. One way in which our methods can be adapted to such a setting is by the repeated application of Kernel/SamplingSHAP@k with values of k ranging over 1,2,…,k. This would result in the Topk features being identified in the right rank order. We leave the evaluation of this method as part of future studies.
§ CONCLUSION
This paper studies the Top-k Identification problem (TkIP)– a novel problem setting, where the goal is to identify the k features with the highest SHAP values. TkIP is motivated by applications in finance, where explanations for adverse actions are typically provided by listing the top-k features that led to a negative outcome. We find that while existing black-box techniques like KernelSHAP and SamplingSHAP can be trivially adapted to solve TkIP, doing so is highly sample inefficient. To address this issue, we develop sample efficient variants of these methods that are designed specifically for solving TkIP. Our key insight is that TkIP can be viewed as an Explore-m problem – a well-studied problem related to multi-armed bandits (MAB). This connection allows us to improve sample efficiency by using (1) an overlap-based stopping-condition and (2) a greedy sampling scheme that efficiently allocates samples between different features. We leverage these techniques to develop Kernel/SamplingSHAP@k, which can efficiently identify the Topk features with (ϵ, δ)-PAC guarantees . Our experiments on several credit-related datasets show that Kernel/SamplingSHAP@k significantly outperform their corresponding baselines: Kernel/SamplingSHAP , offering an average improvement of 5× in sample-efficiency and runtime. We also characterize the sample-costs and runtime of our proposed methods across different levels of accuracy (ϵ). Our paper provides efficient solutions to a previously unstudied problem that has important practical applications in finance.
§ ACKNOWLEDGEMENTS
This paper was prepared for informational purposes by the Artificial Intelligence Research group of JPMorgan Chase & Co and its affiliates (“J.P. Morgan”) and is not a product of the Research Department of J.P. Morgan. J.P. Morgan makes no representation and warranty whatsoever and disclaims all liability, for the completeness, accuracy or reliability of the information contained herein. This document is not intended as investment research or investment advice, or a recommendation, offer or solicitation for the purchase or sale of any security, financial instrument, financial product or service, or to be used in any way for evaluating the merits of participating in any transaction, and shall not constitute a solicitation under any jurisdiction or to any person, if such solicitation under such jurisdiction or to such person would be unlawful.
|
http://arxiv.org/abs/2307.05320v1 | 20230711150440 | The thermoelectric effect on diffusion in the two-dimensional Hubbard model | [
"Martin Ulaga",
"Jernej Mravlje",
"Jure Kokalj"
] | cond-mat.str-el | [
"cond-mat.str-el"
] |
Jožef Stefan Institute, Jamova 39, 1000 Ljubljana, Slovenia
Jožef Stefan Institute, Jamova 39, 1000 Ljubljana, Slovenia
University of Ljubljana, Faculty of Civil and Geodetic
Engineering, Jamova 2, 1000 Ljubljana, Slovenia
Jožef Stefan Institute, Jamova 39, 1000 Ljubljana, Slovenia
We study charge and heat transport in the square lattice Hubbard model at strong coupling using the finite-temperature Lanczos method. We construct the diffusion matrix and estimate the effect of thermoelectric terms on the diffusive time evolution. The thermoelectric terms prevent the interpretation of the diffusion in terms of a single time scale. We discuss our results in relation to cold-atom experiments and measurements of heat conductivity based on the measurements of heat diffusion.
The thermoelectric effect on diffusion in the two-dimensional Hubbard model
Jure Kokalj
August 12, 2023
===========================================================================
§ INTRODUCTION
Strong correlations lead to unusual phenomena such as unconventional superconductivity <cit.>, non-Fermi liquid behavior <cit.>, strange metallicity <cit.>, transport without quasiparticles <cit.>, to name a few. Solutions of microscopic Hamiltonians provide crucial insights to aid the interpretation of experiments and guide phenomenological theory approaches <cit.>. Recently, numerical simulations of the Hubbard model successfully described the high-temperature “bad-metal” regime <cit.> and also reached the strange metal regime <cit.>.
Parallel efforts of simulating model Hamiltonians in cold atoms have led to a remarkable advance <cit.> as well. Recent highlights include the simulation of charge <cit.> and spin <cit.> dynamics in the square lattice Hubbard model and the observation of thermalization and a crossover from diffusive to sub-diffusive dynamics at infinite temperature (T) <cit.>. In these setups, the transport properties are usually determined indirectly <cit.> from observing the time evolution of a chosen initial state (e.g. a density wave) without reaching a steady state with a fixed current.
A crucial aspect that can affect the interpretation of such time evolution is the fact that the dynamics is coupled, with diffusion involving several quantities, such as charge and heat, due to the finite thermoelectric effect away from particle-hole symmetry. Therefore, the discussion should account for the associated mixed dynamics <cit.>. With cold atoms, the thermoelectric effect has been investigated for a gas in the bottleneck geometry <cit.> but has not been explored in optical lattices and was assumed to be negligible in the interpretation of existing lattice results.
In this paper, we address the issue of mixed diffusion by considering the matrix diffusion equation. We calculate all needed quantities, including the ones related to the thermoelectric effect, in the square lattice Hubbard model using the finite-temperature Lanczos method (FTLM). We further use numerical results to obtain the hydrodynamic solution to the time evolution including current relaxation rates. As an example, in Fig. <ref>, we show the solution of the coupled density-heat diffusion problem with diffusion matrix and current relaxation rates obtained from the numerical solution of the doped Hubbard model at a particular T. Due to the thermoelectric effect, the initial pure density profile additionally results in a T profile as time evolves.
The obtained time dependence differs from that when the thermoelectric mixing is neglected. In the Hubbard model at high T, accessible to our numerics, quantitatively the effect is moderate. The density profile is seen to be close to the one obtained if the thermoelectric effects are neglected. We discuss why this is so and under what circumstances the effect can become larger. On the other hand, the emerging T modulation is completely absent if thermoelectric effects are neglected.
The qualitative aspects of our results apply not only to cold-atom experiments but also to measurements of diffusivity in general. One important example is a “flash” method, which determines the heat conductivity from the propagation of the T modulation <cit.>. More recent extensions of such a method, where the decay of a thermal wave introduced by a periodic laser heating is studied, are also potentially affected by our considerations <cit.>.
Very recently, related calculations of the thermoelectric effect were reported in Ref. <cit.> that used quantum Monte Carlo method on a related model. Whereas these remarkable state-of-the-art calculations reach large system sizes, they rely on analytical continuation. It is important to cross-verify those results by a method that does not include the same systematic uncertainties (difficult to precisely quantify) and to estimate qualitatively and quantitatively the effect of thermoelectric coupling on the time evolution for some typical experimental setups.
This paper is structured as follows. We review the model and method, the hydrodynamic equations, and the diffusion matrix in Section <ref>. We present the impact of the thermoelectric effect on hydrodynamics in Section <ref> and discuss the implications for experiments in Section <ref>. Appendix <ref> contains details on the diffusion matrix, Appendix <ref> contains details on the FTLM calculations, and Appendix <ref> contains further information on the extraction of lifetimes from correlation functions.
§ METHODS
§.§ The Hubbard model
The Hamiltonian of the Hubbard model is given by
H=-∑_⟨ ij⟩σt_ijc^†_iσc_jσ+U∑_in_i↑n_i↓,
where t_ij is the hopping integral between nearest neighbours on a square lattice (we set t_ij=t_0), and U is the local Hubbard interaction. We treat the model on finite clusters using the FTLM <cit.> and avoid low T where results are affected by finite-size effects. We use ħ=k_B=e_0=1. When not written out explicitly, we use t_0 as the unit of energy and the lattice spacing a as the unit of distance.
§.§ Transport coefficients
Gradients of T and chemical potential μ induce currents as given by the transport coefficients L_ij.
j =-L_11∇μ-L_12∇ T/T,
j_q =-L_12∇μ-L_22∇ T/T.
The transport coefficients are related to charge and heat conductivities as
σ_c=L_11, κ=1/T(L_22-L_12^2/L_11).
The Seebeck coefficient S is the ratio between the gradient of voltage and the temperature gradient
S=∇μ/∇ T=-L_12/(L_11T).
We compute L_ij from current-current correlation functions as described in Appendix <ref>.
§.§ Diffusion matrix
Gradients of chemical potential μ and temperature T induce gradients of density and entropy (assuming local equilibrium)
∇ n =χ_c ∇μ+ζ∇ T,
T ∇ s =Tζ∇μ+c_μ∇ T.
Using these relations together with continuity equations, we can write the diffusion equation for quantities n and T as (see Appendix <ref>)
∂_t[ n; T ]=𝐃∇^2[ n; T ]
where the diffusion matrix (in the (n,T) basis) reads
𝐃 =([ L_11/χ_c      L_12/T-ζ L_11/χ_c;   (L_12χ_c-ζ L_11 T)/(c_nχ_c^2)      (ζ^2 L_11 T^2-2ζ L_12 Tχ_c + L_22χ_c^2)/(c_n Tχ_c^2) ])
=([ D_c      ±√(c_nχ_c D_corr D_c/T);   ±√(D_corr D_c T/(c_nχ_c))      D̃_Q ]).
Here, c_n=c_μ-ζ^2T/χ_c is the specific heat at fixed density. The off-diagonals are chosen to be positive when S^K-S>0. D_c and D̃_Q=D_Q+D_corr are the charge and heat diffusion constants for cases with no temperature or density modulations, respectively. D_c and D_Q are the standard diffusion constants, related to the corresponding conductivities by the Nernst-Einstein equations σ_c=D_cχ_c and κ=D_Qc_n. Note that the standard heat diffusion constant D_Q differs from the diagonal element D̃_Q.
The form of diffusion matrix depends on the chosen basis, e.g. the occurrence of D_c in the element D_nn of 𝐃 is characteristic of (n,T) basis. This simple expression is associated with the fact that if ∇ T=0, the charge current is given by j_c=σ_c (-∇μ) =σ_c/χ_c (-∇n), i.e. the standard Fick's law. Analogously, if one chooses (μ, Q) as the basis, one finds a simple form for the heat-heat element of the diffusion matrix D_QQ= L_22/(c_μ T). When not written otherwise, we refer to 𝐃 and its elements in the basis of (n,T). See Appendix <ref> for details.
We expressed off-diagonal entries of 𝐃 using the parameter
D_corr=D_cW(S^K-S)^2,
given in terms of the difference of the Seebeck coefficient from its thermodynamic Kelvin approximation <cit.> S^K=-∂_T μ|_n and
a modified “Wilson ratio” W=Tχ_c/c_n with charge susceptibility χ_c in the place of the more standard spin susceptibility.
The diffusion matrix eigenvalues are given by
D_±=(D_c + D̃_Q)/2 ± √(((D_c - D̃_Q)/2)^2 + D_c D_corr).
D_corr is the key quantity that controls the effect of thermoelectric mixing. It is important to notice that D_corr not only occurs in the off-diagonal matrix elements but also in the diagonal element D_TT. The diffusion matrix was recently also discussed in related models for bad metals <cit.>.
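As an illustration of how the matrix 𝐃 and the derived quantities D_c, D̃_Q, D_corr and D_± can be assembled from the transport coefficients and thermodynamic susceptibilities, a minimal Python/NumPy sketch is given below. The numerical values of L_ij, χ_c, ζ and c_μ are placeholders for illustration only and do not correspond to our FTLM results.

import numpy as np

# Placeholder inputs (in units of t_0 and the lattice spacing); in the paper the
# transport coefficients L_ij and the thermodynamic quantities chi_c, zeta, c_mu
# come from FTLM.  These numbers are for illustration only.
T = 1.5
L11, L12, L22 = 0.30, 0.10, 0.90
chi_c, zeta, c_mu = 0.25, -0.05, 0.60

c_n = c_mu - zeta**2 * T / chi_c        # specific heat at fixed density

# Diffusion matrix in the (n, T) basis, element by element
D_nn = L11 / chi_c
D_nT = L12 / T - zeta * L11 / chi_c
D_Tn = (L12 * chi_c - zeta * L11 * T) / (c_n * chi_c**2)
D_TT = (zeta**2 * L11 * T**2 - 2 * zeta * L12 * T * chi_c
        + L22 * chi_c**2) / (c_n * T * chi_c**2)
D = np.array([[D_nn, D_nT], [D_Tn, D_TT]])

# Standard diffusion constants and the mixing parameter
D_c = L11 / chi_c                       # sigma_c / chi_c (Nernst-Einstein)
kappa = (L22 - L12**2 / L11) / T
D_Q = kappa / c_n                       # kappa / c_n (Nernst-Einstein)
D_corr = D_TT - D_Q                     # equals D_c * W * (S^K - S)^2

# Eigenvalues D_+- ("level repulsion" between D_c and the diagonal element D_TT)
D_plus, D_minus = np.sort(np.linalg.eigvals(D))[::-1]
print(D_c, D_Q, D_corr, D_plus, D_minus)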
§.§ Hydrodynamics of charge
Let us first discuss a typical measurement of diffusion in, e.g., cold atom experiments <cit.>. One prepares an initial state with some charge modulation via a spatially modulated external potential. Such a state is initially in equilibrium and has no temperature modulation or currents. Next, the external potential is switched off and the system is left to evolve freely, during which time the charge modulation starts to decay. In the case of negligible thermoelectric coupling, the charge modulation decays according to the diffusion equation ∂_t n=D_c ∇^2 n and the current flows according to Fick's law
j+D_c∇ n=0.
However, Fick's law dictates that the current appears instantly after the external potential is switched off and is instantly proportional to the density gradient, while in reality the current needs some time to develop. For this reason, one introduces the current relaxation rate Γ_c and uses the improved hydrodynamic description <cit.>,
∂_tj+Γ_c(j+D_c∇ n)=0.
This description has been previously discussed in the context of the Hubbard model at various values of U <cit.>.
Together with the continuity equation and a spatial Fourier transform for a wave-vector k one obtains the second-order differential equation
∂_t^2 n + Γ_c (∂_t n + D_ck^2 n)=0.
This is the ordinary damped harmonic oscillator equation, and its solution is
n(t) =aRe[cos(ω̃ t+ϕ)]e^-Γ_c t/2,
ω̃ =√(Γ_c D_c k^2-Γ_c^2/4).
Throughout this work we set the phase ϕ (for finite Γ cases) such that initially no current is flowing, ∂_t n|_t=0=0, explicitly ϕ=arctan(-Γ_c/(2ω̃)). The prefactor a determines the initial amplitude of modulation and we plot the modulations relative to this initial amplitude. The resulting n(t) is shown on Fig. <ref> (dashed lines) with parameters corresponding to the Hubbard model at 15% doping. Similar to the damped oscillator, the time dependence of the density modulation amplitude exhibits an underdamped regime with oscillations for D_c k^2 > Γ_c/4 (e.g., for larger values of k), and an overdamped regime without oscillations for small k. One recovers purely diffusive behavior with e^-D_c k^2 t for D_ck^2≪Γ_c, realized, e.g., in the k→ 0 limit.
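A minimal numerical illustration of this solution is sketched below (Python/NumPy); it is written in an equivalent normalized form with n(0)=a and ∂_t n|_t=0=0, and the parameter values are placeholders rather than fitted ones.

import numpy as np

def density_modulation(t, D_c, Gamma_c, k, a=1.0):
    # Solution of  d2n/dt2 + Gamma_c*(dn/dt + D_c*k^2*n) = 0  with n(0) = a and
    # dn/dt(0) = 0 (no initial current); complex arithmetic covers both the
    # underdamped (D_c*k^2 > Gamma_c/4) and overdamped regimes.
    omega = np.sqrt(complex(Gamma_c * D_c * k**2 - Gamma_c**2 / 4.0))
    f = np.cos(omega * t) + Gamma_c / (2.0 * omega) * np.sin(omega * t)
    return a * (f * np.exp(-Gamma_c * t / 2.0)).real

# Placeholder parameters, for illustration only
t = np.linspace(0.0, 20.0, 400)
n_under = density_modulation(t, D_c=1.0, Gamma_c=1.0, k=1.5)  # oscillatory decay
n_over = density_modulation(t, D_c=1.0, Gamma_c=1.0, k=0.2)   # ~ exp(-D_c*k^2*t)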
§.§ Matrix formulation of mixed diffusion
When thermoelectric effects are finite, density and heat diffusion are not independent and one has to extend the hydrodynamic treatment in a matrix formulation. We define the density and temperature modulation vector v⃗=[n(x,t),T(x,t)] and generalize Eq. (<ref>) to matrix form:
∂_t^2 v⃗ + Γ (∂_tv⃗ + 𝐃k^2 v⃗)=0.
Here, 𝐃 is the diffusion matrix and Γ is a matrix of relaxation rates. These are phenomenological parameters but can be related to the microscopic theory. To achieve this we introduce 𝐃(ω) using L_ij(ω) in Eq. <ref>, then diagonalize 𝐃 for each ω and extract the relaxation rates Γ_± of the corresponding eigenmodes as the width (half-width at half-maximum) of D_±(ω). See also Appendix <ref>. The solution of Eq. (<ref>) can then be expressed as
v(t)=a_+v_+f_+(t)+a_-v_-f_-(t),
Here, v_± are the corresponding eigenvectors with diffusion constants D_± and relaxation rates Γ_±. The form of f(t) again corresponds to the solution of the damped harmonic oscillator and is that of Eq. (<ref>), but with D_c and Γ_c replaced with D_± and Γ_±, respectively. Prefactors a_+ and a_- depend on initial conditions.
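The eigenmode construction of Eq. (<ref>) can be sketched numerically as follows; the diffusion matrix, relaxation rates and wave vector below are placeholder values used only to illustrate the procedure.

import numpy as np

def mode_profile(t, D, Gamma, k):
    # damped-oscillator profile f(t) of one eigenmode, normalized to f(0)=1, f'(0)=0
    omega = np.sqrt(complex(Gamma * D * k**2 - Gamma**2 / 4.0))
    f = np.cos(omega * t) + Gamma / (2.0 * omega) * np.sin(omega * t)
    return (f * np.exp(-Gamma * t / 2.0)).real

def mixed_evolution(t, Dmat, Gamma_plus, Gamma_minus, k, v0):
    # Solve  d2v/dt2 + Gamma*(dv/dt + D*k^2*v) = 0  mode by mode: decompose the
    # initial modulation v0 = (n(0), T(0)) into the eigenvectors of the diffusion
    # matrix and evolve each component with its own (D, Gamma).
    evals, evecs = np.linalg.eig(np.asarray(Dmat, dtype=float))
    order = np.argsort(evals)[::-1]                            # D_+ >= D_-
    evals, evecs = evals[order], evecs[:, order]
    a = np.linalg.solve(evecs, np.asarray(v0, dtype=float))    # v0 = a_+ v_+ + a_- v_-
    f_p = mode_profile(t, evals[0], Gamma_plus, k)
    f_m = mode_profile(t, evals[1], Gamma_minus, k)
    return a[0] * np.outer(evecs[:, 0], f_p) + a[1] * np.outer(evecs[:, 1], f_m)

# Placeholder diffusion matrix and rates; initial state: pure density modulation
D = [[1.20, 0.35], [0.30, 0.95]]
t = np.linspace(0.0, 20.0, 400)
n_t, T_t = mixed_evolution(t, D, Gamma_plus=1.0, Gamma_minus=0.8, k=0.7, v0=[1.0, 0.0])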
§ RESULTS
§.§ Hubbard model results
Let us start the discussion of the extent of the thermoelectric mixing, which is determined by D_corr and, via Eq. (<ref>), by the deviation of the Seebeck coefficient S from its Kelvin estimate S^K and the modified Wilson ratio W.
In the top panels of Fig. <ref> we show the temperature dependence of S (full lines) and compare it to S^K (dashed). In the considered regimes one expects S to show a crossover from a high-temperature charge-fluctuating regime with the Heikes <cit.> value -log[(2-n)/n] ≈ -2 p (negative for hole doping p) to the regime with suppressed double occupancy (at large U and small T) with the Heikes value -log[2(1-n)/n] (positive for the considered p). One sees that these considerations indeed roughly describe the data. With increasing p, the maximum in S moves to higher T. S increases moderately with increasing U/t in a wide T range.
The Kelvin result suggests that S changes sign as a function of doping at p∼ 0.15 in the regime of lowest calculated T. Due to finite-size effects in the FTLM calculations at low T, we cannot observe this in the full Kubo calculation.
The key result for our discussion is that, despite considering a high-temperature regime (T∼ 1), we find that the difference S^K-S is not small (one expects S^K-S to drop as 1/T for T →∞) and approaches k_B/e_0 in the U=10 results.
In the middle panels of Fig. <ref> we show the “Wilson ratio”. At high T, W=Tχ_c/c_n∼ T^2 since χ_c∼ T^-1 and c_n ∼ T^-2. On lowering T, W drops and at larger interactions develops a plateau. At the lowest T, W grows again. In the metallic Fermi-liquid regime at low T, one expects W → const. It is obvious from these results that neither S^K-S nor W is particularly small, hence one does not expect D_corr to be negligible either.
In the bottom panels of Fig. <ref> we show D_corr. We see that this takes overall moderate values in our calculations (notice that charge and heat diffusion constants are typically of order 1 at high T <cit.>). At highest T, D_corr tends to a constant because (S^K-S)^2 W and D_c both become temperature independent there. At the lowest T (not accessible in our calculations) in the Fermi-liquid regime one again expects a T-independent value of D_corr as W →const., S^K-S ∝ T, D_c ∝ 1/T^2 there. We notice that D_corr/D_c∝ T^2 in the Fermi liquid and thermoelectric mixing has a limited effect at low T.
We now consider a particular case of intermediate doping p=0.15 and interaction U=7.5. On Fig. <ref>a we show the bare diffusion constants D_c, D_Q and the mixing element D_corr, together with the diffusion eigenvalues D_±. One sees a growth of charge diffusion constant on lowering T and remarkably a much weaker temperature dependence of the heat diffusion constant D_Q leading to a crossing of the two quantities at T≈ 3.
The weaker temperature dependence and a shallow minimum of D_Q are discussed in more detail in Ref. ulaga22, while for the case of spin and heat diffusion, no such crossing was observed <cit.>.
The magnitude of D_corr is ∼10% of the bare diffusion constants, leading to important effects of mixing when the two bare values are close. This is seen (Fig. <ref>a) from the temperature dependence of the two eigenvalues D_± that follow a level-repulsion mechanism and hence differ significantly from the bare values.
On Fig. <ref>b we show also the corresponding components of the eigenvectors. Looking at the n components of the eigenvectors, one sees that at low T, v⃗_+ has a larger n component (v_+n).
At higher T, the larger n component is in v⃗_-.
This is consistent also with the crossing of the bare diffusion constants. Further, Fig. <ref>b shows that the n and T components are in counter-phase for v⃗_+, while they are in phase for v⃗_-. Therefore, when the main component is v⃗_+ the n and T modulations are in counter-phase, as, e.g., in Fig. <ref>, while they are in phase when the v⃗_- component is the dominant one. Which component dominates is determined by the initial condition via a_± and the decay rate of each of the components.
§.§ Time-evolution for mixed diffusion
How important are the effects of mixing for the determination of diffusion constants from the time evolution, such as done in cold atom experiments? We start the discussion assuming the fast-relaxation limit Dk^2≪Γ/4, e.g., due to the long-wavelength limit k→ 0. The time evolution in this limit is purely diffusive and is given by the matrix form of the diffusion equation and its solution
v(t)= exp(- 𝐃 k^2 t) v (0).
It can be expressed also in terms of the eigenmodes
v(t)=a_+ v_+ e^-D_+k^2t+ a_- v_- e^-D_-k^2t,
where a_± are coefficients set by the initial condition. Except in a special case where one of a_± vanishes, the time evolution involves two time scales.
Let us consider the initial state v(0)=(1,0) (pure density modulation) and ask about the density modulation at later times. At short times, before appreciable temperature modulation develops, n(t) falls as dictated by diagonal entry D_c of the diffusion matrix (Eq. <ref>), or from the perspective of Fick's law, a pure density modulation drives the charge current given by D_c. At long times, only the slower decaying eigenmode v_- survives and the long-time dynamics are given by the corresponding eigenvalue D_-.
This behavior is illustrated on Fig. <ref> which shows n(t) for U=7.5 at T=1.5. There one sees that the solution begins to drop according to exp(-D_c k^2 t) (initial short time dependence ∝ 1-D_c k^2 t holds strictly) while at long times one sees exponential decay with time constant (D_- k^2)^-1. The full result is the sum of two exponentials.
In experiments, one often assumes a simple single exponential decay and fits the observed time dependence with n(t)=n_0exp(-D^ext k^2 t ). It is now clear that the extracted diffusion constant D^ext depends on the fitting range. We illustrate this by showing D^ext/D_c for several values of D_corr as a function of the fitting range on Fig. <ref>, taking D_Q/D_c =0.5. One obtains sizable deviations of D^ext/D_c from 1 only for large values of D_corr and for longer fitting times. If the fitting range is very long, one approaches D^ext∼ D_-. One reaches D^ext = D_- when only the long-time regime is fitted and the short-time regime is left out.
Since smaller D_Q lowers D_- to which D^ext tends at longer fitting times, a smaller D_Q also leads to a bigger mismatch and lower values of D^ext/D_c. Similarly, increasing D_corr decreases D_- via the level repulsion scenario and again leads to decreasing D^ext/D_c. These findings are summarized on Fig. <ref> and we note that the effect of D_corr is already significant at D_corr/D_c∼ 0.2.
All this illustrates that in principle the effects of thermoelectric coupling can be large and a naïve application of bare diffusion with neglected thermoelectric effects can lead to a significant error in the estimate of the diffusion constant. On the other hand, it is reassuring that, at least at very short times, the decay rate is indeed governed by D_c. However, at such times the current relaxation time can become important, as discussed further in Sec. <ref>.
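The fitting procedure described above can be sketched as follows (Python with SciPy). Since only the diagonal entries and the product of the off-diagonal entries of 𝐃 enter n(t) for a pure initial density modulation, a symmetrized placeholder matrix is used; all numbers are illustrative and not FTLM values.

import numpy as np
from scipy.optimize import curve_fit

# Two-mode diffusive decay of a pure density modulation (fast-relaxation limit).
# Placeholder values; D_Q/D_c = 0.5 mirrors the choice quoted in the text.
D_c, D_Q, D_corr, k = 1.0, 0.5, 0.2, 0.5
# Only the diagonals and the product of the off-diagonals enter n(t), so a
# symmetrized matrix with D_12*D_21 = D_corr*D_c is sufficient here.
off = np.sqrt(D_corr * D_c)
Dmat = np.array([[D_c, off], [off, D_Q + D_corr]])
evals, evecs = np.linalg.eig(Dmat)
a = np.linalg.solve(evecs, [1.0, 0.0])               # initial state (n, T) = (1, 0)

def n_exact(t):
    return sum(a[i] * evecs[0, i] * np.exp(-evals[i] * k**2 * t) for i in range(2))

def n_single(t, n0, D_ext):
    return n0 * np.exp(-D_ext * k**2 * t)            # single-exponential ansatz

for t_max in (1.0, 5.0, 20.0):                       # fitting range (arbitrary units)
    t = np.linspace(0.0, t_max, 200)
    (n0, D_ext), _ = curve_fit(n_single, t, n_exact(t), p0=(1.0, D_c))
    print(t_max, D_ext / D_c)                        # drifts from 1 toward D_-/D_c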
§.§ Finite Γ case and application to cold atom experiments
The measurements on optical lattices are done with modulations with sizable momenta k and hence one needs to take into account the current relaxation and keep Γ in Eq. (<ref>) finite.
The relaxation is estimated as explained in Appendix <ref>.
Snapshots of the resulting time evolutions are plotted on Fig. <ref> (full lines). On Fig. <ref>, these are compared to diffusive solutions without current relaxation rates.
The finite relaxation times lead to a slower decay at short times due to a slower initial buildup of currents, and to the oscillatory behavior as currents have some persistence and continue to flow even if the modulation becomes zero at a certain time.
It is worth mentioning that each eigenmode decay is determined by both D_± and Γ_± (Eq. <ref>). Furthermore, the eigenmode tends to an exponential decay given by e^-D_± k^2 t in the overdamped limit (D_± k^2≪Γ_±/2), while in the underdamped regime (D_± k^2≫Γ_±/2) it tends to oscillations suppressed with e^-Γ_± t/2. The long-lived mode is therefore given by the smaller value of D_± in the overdamped (diffusive) regime, namely D_- (as discussed above), while in the underdamped regime it is given by the smaller value of Γ_- or Γ_+. It is possible that Γ_+<Γ_-, as in our case shown in Fig. <ref>, making v⃗_+ the longer-lived mode in the underdamped regime, with the corresponding out-of-phase modulation of n and T (see components in Fig. <ref>).
To estimate the impact of the thermoelectric effect in optical lattice measurements, we mimicked the analysis done there. Namely, we obtain the solutions of the matrix hydrodynamic equations (<ref>), which we fit with a simpler ansatz describing charge hydrodynamics only (Eq. (<ref>)). We compared the results of this procedure to D_c and Γ_c obtained through FTLM calculations.
This analysis is summarized on Fig. <ref>a. One sees that the extracted D_c^ext is actually quite close to D_c=σ_c/χ_c in the entire temperature range. At low T one could attribute this to a relatively large component |v_+n| and D_+∼ D_c. At T where D_c and D_Q cross, a_+v_+n≈ a_-v_-n, i.e. both eigenmodes are present in the initial state with similar weight and the mixing is close to maximal. Despite the fact that D_+ and D_- are far from D_c, the initial time dependence is given by D_c and extending the fitting time beyond t_max=6 t_0^-1 (with moderate D_corr∼ 0.15) results in D^ext only slightly deviating from D_c.
If one uses the value D_c^ext to calculate the resistivity via ρ=(D_c^extχ_c)^-1, the estimation exceeds the value ρ=(D_cχ_c)^-1 by ∼10%. Figure <ref>b shows that, conversely, Γ_c^ext is not close to Γ_c and is systematically overestimated.
§.§ Effects of mixing on the thermal diffusion
The above considerations apply also to estimates of thermal transport based on measurements of thermal diffusion. The standard “flash” method estimates the thermal diffusion constant from the time it takes for the temperature on the back side of the sample to reach half of its equilibrium value after the front side has been illuminated by a laser pulse. It seems reasonable to assume that the initial state is described in terms of modulated temperature but that charge density is unaffected by the pulse, hence the diffusion matrix in the basis (n,T) is appropriate to consider also in this case. Because the experimental procedure is sensitive to the initial time evolution before appreciable charge density gradients appear, the effects of the mixing with charge diffusion are expected to be limited.
On the other hand, it is important to recognize that the quantity obtained from such measurements is D̃_Q= D_Q + D_corr, hence if this quantity is used to estimate the thermal conductivity using the Nernst-Einstein relation one obtains a diffusion estimate
κ_diff= κ (1 + D_corr/D_Q)
that is systematically larger than what one obtains from the direct transport determination of κ.
As a concrete example, we calculated D_Q and Γ_Q from a time evolution starting with a state containing a temperature modulation only. The results are shown on Fig. <ref> with dashed lines. Both D_Q and Γ_Q show deviations from D_Q^ext and Γ_Q^ext. In particular, Γ_Q is seen to be underestimated at low T and overestimated the most at T∼ 2t with the difference decreasing at higher T. D_Q estimation is impacted differently than D_c because of the occurrence of D_corr on the diagonal.
We notice that if one assumes a different initial state, for instance with a constant chemical potential, the initial diffusion of temperature is given by the diagonal element in the (μ,T) basis,
D_TT^(μ,T) =(L_22χ_c - ζ L_12 T)/(c_n Tχ_c)
=D_Q+D_n S(S-S^K)Tχ_c/c_n
i.e., a value again distinct from standard diffusion D_Q=κ/c_n. Interestingly also here the deviations from D_Q are given in terms of S-S^K.
§ CONCLUSIONS
In conclusion, we investigated the mixed particle-heat diffusion in the doped Hubbard model. The thermoelectric effect causes the appearance of mixed particle-heat diffusion modes and introduces new timescales that can alter the time dependence from that of a simple exponential decay. This should be taken into account in measurements in cold-atom systems. We pointed out that the standard “flash” methods systematically give a higher value of the thermal conductivity than what is obtained from transport measurements (at least when the thermal conductivity is dominated by the electronic contribution).
It would be interesting to directly measure the mixed diffusion, for instance by introducing a temperature modulation into the system and studying the amplitude of the induced charge density wave. Because the dynamics are that of coupled damped oscillators, one for density modulation and one for T modulation, one could also explore the resonating behavior as a function of driving frequency with possibly enhanced dynamic thermoelectric effect.
The effects of thermoelectric mixing are given by D_corr/D_c. This quantity was found to be moderate (<0.2) in our calculations but can become large in regimes where the charge susceptibility is large. For example, the divergence of χ_c in the vicinity of phase separation, such as in doped antiferromagnets <cit.> or Hund's metals <cit.>, enhances W and hence D_corr. Such systems are good candidates to observe the predicted effects.
§ ACKNOWLEDGMENTS
We acknowledge support by Slovenian Research Agency (ARRS) under Grant no. P1-0044 and J1-2458.
§ THE DIFFUSION MATRIX IN THE PRESENCE OF SPIN FLUCTUATIONS
Here we give an overview of the diffusion matrix, which in general also includes spin properties, i.e., it is a 3× 3 matrix, even though we focus on a 2× 2 sub-block in the main text. The grand potential is given by
Ω=E-ST-μ N-BM,
where S is the entropy, N=N_↑+N_↓ is particle number and M=(N_↑-N_↓)/2 is magnetization. The entropy per site is given by
s=1/N_0(log e^-βΩ+β⟨K⟩),
where K̂=Ĥ-μN̂-BM̂ is the grand Hamiltonian, β is the inverse temperature, and N_0 is the number of sites. Changes in density are described by
dn =-1/N_0∂^2Ω/∂μ^2dμ-1/N_0∂^2Ω/∂μ∂ TdT-1/N_0∂^2Ω/∂μ∂ BdB
≡χ_c dμ+ζ dT +ω dB.
Similarly, we have
χ_s=-1/N_0∂^2Ω/∂ B^2, ξ=-1/N_0∂^2Ω/∂ B∂ T, c_μ,B=-T/N_0∂^2Ω/∂ T^2,
and use these to express changes in entropy and magnetization. Together, these can be cast as a matrix equation <cit.>,
[ dn; Tds; dm ]=
[ χ_c ζ ω; ζ T c_μ,B ξ T; ω ξ χ_s ][ dμ; dT; dB ]≡𝐀[ dμ; dT; dB ].
We use the Kubo formalism to obtain transport coefficients <cit.> from transport equations
[ j; j_q; j_s ]=[ -L_11 -L_12/T -L_13; -L_21 -L_22/T -L_23; -L_31 -L_32/T -L_33; ][ ∇μ; ∇ T; ∇ B ]≡𝐋[ ∇μ; ∇ T; ∇ B ]
where L_ij=L_ji by Onsager reciprocity and j_q is the heat current,
j_q=j_ε-μ j-Bj_s.
Combining these with continuity equations
∂_t n + ∇· j=0,
∂_tε + ∇· j_ε=0,
∂_t m + ∇· j_s=0,
one obtains a matrix-form diffusion equation
[ ∂_t n; T∂_t s; ∂_t m ]=𝐃_0
[ ∇^2n; T∇^2 s; ∇^2m ],
where 𝐃_0=-𝐋𝐀^-1 is the diffusion matrix.
It is sometimes useful to use the energy density dε instead of the entropy density. In this case, the susceptibility matrix that enters is 𝐀̃=𝐏_με𝐀, where
𝐏_με=[ 1 0 0; μ 1 B; 0 0 1 ].
To get the energy current, one multiplies Eq. <ref> from the left with 𝐏_με once more, arriving at
[ ∂_t n; ∂_tε; ∂_t m ]=-𝐏_με𝐋𝐀^-1𝐏_με^-1[ ∇^2n; ∇^2 ε; ∇^2m ].
To obtain the form of the diffusion matrix in the basis (n,T) given in the main text, one uses
𝐏_nT^-1=[ 1 0; Tζ/χ_c c_n ]
with the upper left 2× 2 block of 𝐃_0. Then, 𝐏_nT𝐃_0𝐏_nT^-1 gives the diffusion matrix given in the main text under Eq. (<ref>). Here the specific heat at constant density c_n=c_μ-Tζ^2/χ_c is used. The transformation into the basis of (μ,Q) taking dQ=TdS is achieved as 𝐏_μ Q𝐃_0𝐏_μ Q^-1, where
𝐏_μ Q^-1=[ χ_c-Tζ^2/c_μ ζ/c_μ; 0 1 ]
Finally, the transformation into (μ,T) basis is the combination of the previous two,
𝐏_μ T^-1=[ χ_c ζ; Tζ c_μ ].
Notice that the physics is contained in the eigenvalues of 𝐃, which do not depend on the “basis” of 𝐃. We use T and n as they are commonly used and experimentally monitored quantities.
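A numerical sketch of this construction for the charge-heat sub-block (B=0, spin sector dropped) is given below; the input numbers are placeholders, and the checks merely confirm the Fick's-law form of the D_nn element and the basis independence of the eigenvalues.

import numpy as np

# Placeholder transport coefficients and susceptibilities for the charge-heat
# sub-block (B = 0, spin sector dropped); illustrative numbers only.
T = 1.5
L11, L12, L22 = 0.30, 0.10, 0.90
chi_c, zeta, c_mu = 0.25, -0.05, 0.60
c_n = c_mu - T * zeta**2 / chi_c

# Susceptibility matrix:  (dn, T ds) = A (dmu, dT);  transport matrix with the
# 1/T convention of the text:  (j, j_q) = -Lmat (grad mu, grad T).
A = np.array([[chi_c, zeta],
              [zeta * T, c_mu]])
Lmat = np.array([[L11, L12 / T],
                 [L12, L22 / T]])

D0 = Lmat @ np.linalg.inv(A)            # diffusion matrix acting on (n, T s)

# Similarity transformation to the (n, T) basis used in the main text
P_nT_inv = np.array([[1.0, 0.0],
                     [T * zeta / chi_c, c_n]])
D_nT = np.linalg.inv(P_nT_inv) @ D0 @ P_nT_inv

assert np.isclose(D_nT[0, 0], L11 / chi_c)            # Fick's-law element D_c
assert np.allclose(np.sort(np.linalg.eigvals(D0)),
                   np.sort(np.linalg.eigvals(D_nT)))  # eigenvalues are basis independent
print(D_nT)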
§ DETAILS OF THE FTLM CALCULATION
The transport coefficients L_ij within Kubo formalism are given by the ω→ 0 limit of current-current correlation functions L_ij(ω), namely
L_ij(ω)=1/(ω N_0 V_u.c.) Re∫_0^∞dte^iω t⟨ [Ĵ_i(t),Ĵ_j(0)]⟩.
We consider the particle, spin, and heat currents only in the x direction. We have
Ĵ_n= -it∑_j,σ,δ R_δ^x c^†_j+δ,σc_j,σ,
Ĵ_s= -it∑_j,σ,δ R_δ^x σ c^†_j+δ,σc_j,σ,
Ĵ_E= -it^2/2∑_j,σ,δ,δ ' R_δδ'^x c^†_j+δ+δ',σc_j,σ
+itU/2∑_j,σ,δR_δ^x c^†_j+δ,σc_j,σ(n_j+δ,σ̅+n_j,σ̅),
Ĵ_Q= Ĵ_E-μĴ_n-BĴ_s,
where R_δ^x=x_j+δ-x_j and R_δδ'^x=x_j+δ+δ'-x_j (δ points to the nearest neighbours of site j). We evaluate Eq. (<ref>) and thermodynamic quantities on a 4× 4 cluster with twisted periodic boundary conditions using FTLM. At low T, finite-size effects appear and we do not show results where the contribution to ∫ L_ij(ω)dω from the diagonal elements (stiffness) exceeds 0.1%.
§ DETAILS ON EXTRACTING Γ_±
Assuming a Drude form for the low-frequency part of the dynamical conductivity, one can extract a scattering rate Γ^0_c as the half-width of σ_c(ω)=L_11(ω) (similarly, one can define Γ_ε and Γ_cε from σ_εε(ω) and σ_cε(ω)).
In the matrix generalization of Γ, one has to account for additional relaxation rates. Just like Γ_c can be obtained from the width of Drude peak of L_11(ω), one could determine elements of Γ from the ω-widths of low-ω parts of L_ij(ω). We however use a slightly different approach and assume the Drude-like form
σ_c(ω) = σ_c(0)/(1-iω/Γ_c), κ(ω)=κ(0)/(1-iω/Γ_Q)
is also applicable to D_±(ω) (i.e., generalized to finite ω) for small ω. We therefore first calculate 𝐃(ω) and obtain Γ_± as the width of its eigenvalues D_±(ω),
D_±(ω)=D_±(0)/(1-iω/Γ_±).
We find that the eigenvectors v_± show weak enough frequency dependence in the considered regime at small ω that the obtained Γ_± correspond to the ω=0 diffusion matrix eigenvectors. At half filling, Γ_± coincide with Γ_c and Γ_Q due to the vanishing thermoelectric effect.
We show D_±(ω) on Fig. <ref>a where one sees they indeed inherit the shape of the conductivities, justifying Eq. (<ref>) for small ω, and that the general picture of “level repulsion” applies in the whole frequency range. Note that in the case of a 2× 2 diffusion matrix, D_±(ω) are guaranteed to be smooth functions, as evident from their closed-form expressions (Eq. <ref>).
The frequency dependence of D_±(ω) reveals a feature at ω≈ 2.5 t where the various diffusion constants touch because D_corr(ω)=0 and S^K-S(ω) changes sign. This occurs at ω exceeding Γ_± and thus does not impact our estimates for Γ_±. Figure <ref>b shows how Γ_± differ from the bare values as obtained from σ_c(ω) and κ(ω) and one sees they generically reinforce the “level repulsion” picture at least at high T where Γ_+ is about half of Γ_-. Γ_± are decreasing in magnitude at low T, similarly to how Γ_c,Q are expected to decrease as one approaches the coherent regime.
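A minimal sketch of this extraction is given below: for the Drude-like form of Eq. (<ref>), the half-width at half-maximum of Re D_±(ω) equals Γ_±. The synthetic curve stands in for the FTLM D_±(ω), and its parameters are placeholders.

import numpy as np

def hwhm(omega, response):
    # half-width at half-maximum of a response peaked at omega = 0
    half = response[0] / 2.0
    return omega[response >= half].max()

# Drude-like eigenvalue D(omega) = D(0) / (1 - i*omega/Gamma); its real part is
# D(0) / (1 + omega^2/Gamma^2), so the HWHM equals Gamma.
Gamma_true, D0 = 0.8, 1.3
omega = np.linspace(0.0, 5.0, 2001)
ReD = (D0 / (1.0 - 1j * omega / Gamma_true)).real

print(hwhm(omega, ReD), Gamma_true)     # agree up to the omega grid spacing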
[nguyen2021superconductivity] D. H. Nguyen, A. Sidorenko, M. Taupin, G. Knebel, G. Lapertot, E. Schuberth, and S. Paschen, Nat. Commun. 12, 4341 (2021), doi:10.1038/s41467-021-24670-z.
[hayes2021superconductivity] I. M. Hayes, N. Maksimovic, G. N. Lopez, M. K. Chan, B. Ramshaw, R. D. McDonald, and J. G. Analytis, Nat. Phys. 17, 58 (2021), https://www.nature.com/articles/s41567-020-0982-x.
[stewart2001] G. R. Stewart, Rev. Mod. Phys. 73, 797 (2001), doi:10.1103/RevModPhys.73.797.
[hill01] R. W. Hill, C. Proust, L. Taillefer, P. Fournier, and R. L. Greene, Nature 414, 711 (2001), doi:10.1038/414711a.
[legros2019universal] A. Legros, S. Benhabib, W. Tabis, F. Laliberté, M. Dion, M. Lizaire, B. Vignolle, D. Vignolles, H. Raffy, Z. Li, et al., Nat. Phys. 15, 142 (2019), doi:10.1038/s41567-018-0334-2.
[pustogow2021rise] A. Pustogow, Y. Saito, A. Löhle, M. Sanz Alonso, A. Kawamoto, V. Dobrosavljević, M. Dressel, and S. Fratini, Nat. Commun. 12, 1571 (2021), doi:10.1038/s41467-021-21741-z.
[chen2022shot] L. Chen, D. T. Lowder, E. Bakali, A. Andrews, W. Schrenk, M. Waas, R. Svagera, G. Eguchi, L. Prochaska, Q. Si, et al., arXiv:2206.00673 (2022), doi:10.48550/arXiv.2206.00673.
[hartnoll2022rmp] S. A. Hartnoll and A. P. Mackenzie, Rev. Mod. Phys. 94, 041002 (2022), doi:10.1103/RevModPhys.94.041002.
[chowdhury2022rmp] D. Chowdhury, A. Georges, O. Parcollet, and S. Sachdev, Rev. Mod. Phys. 94, 035004 (2022), doi:10.1103/RevModPhys.94.035004.
[kokalj17] J. Kokalj, Phys. Rev. B 95, 041110 (2017), doi:10.1103/PhysRevB.95.041110.
[huang19] E. W. Huang, R. Sheppard, B. Moritz, and T. P. Devereaux, Science 366, 987 (2019), doi:10.1126/science.aau7063.
[bloch2008many] I. Bloch, J. Dalibard, and W. Zwerger, Rev. Mod. Phys. 80, 885 (2008), doi:10.1103/RevModPhys.80.885.
[altman2021quantum] E. Altman, K. R. Brown, G. Carleo, L. D. Carr, E. Demler, C. Chin, B. DeMarco, S. E. Economou, M. A. Eriksson, K.-M. C. Fu, et al., PRX Quantum 2, 017003 (2021), doi:10.1103/PRXQuantum.2.017003.
[brown19] P. T. Brown, D. Mitra, E. Guardado-Sanchez, R. Nourafkan, A. Reymbaut, C.-D. Hébert, S. Bergeron, A.-M. S. Tremblay, J. Kokalj, D. A. Huse, P. Schauß, and W. S. Bakr, Science 363, 379 (2019), doi:10.1126/science.aat4134.
[nichols19] M. A. Nichols, L. W. Cheuk, M. Okan, T. R. Hartke, E. Mendez, T. Senthil, E. Khatami, H. Zhang, and M. W. Zwierlein, Science 363, 383 (2019), doi:10.1126/science.aat4387.
[gardadosanchez19] E. Guardado-Sanchez, A. Morningstar, B. M. Spar, P. T. Brown, D. A. Huse, and W. S. Bakr, Phys. Rev. X 10, 011042 (2020), doi:10.1103/PhysRevX.10.011042.
[borup2015measuring] K. A. Borup, J. De Boor, H. Wang, F. Drymiotis, F. Gascoin, X. Shi, L. Chen, M. I. Fedorov, E. Müller, B. B. Iversen, et al., Energy Environ. Sci. 8, 423 (2015), doi:10.1039/C4EE01320D.
[mravlje2022spin] J. Mravlje, M. Ulaga, and J. Kokalj, Phys. Rev. Res. 4, 023197 (2022), doi:10.1103/PhysRevResearch.4.023197.
[brantut2013] J.-P. Brantut, C. Grenier, J. Meineke, D. Stadler, S. Krinner, C. Kollath, T. Esslinger, and A. Georges, Science 342, 713 (2013), doi:10.1126/science.1242308.
[krinner2017two] S. Krinner, T. Esslinger, and J.-P. Brantut, J. Phys.: Condens. Matter 29, 343003 (2017), doi:10.1088/1361-648X/aa74a1.
[hausler2021interaction] S. Häusler, P. Fabritius, J. Mohan, M. Lebrat, L. Corman, and T. Esslinger, Phys. Rev. X 11, 021034 (2021), doi:10.1103/PhysRevX.11.021034.
[Parker1961] W. J. Parker, R. J. Jenkins, C. P. Butler, and G. L. Abbott, J. Appl. Phys. 32, 1679 (1961), doi:10.1063/1.1728417.
[zhang17] J. Zhang, E. M. Levenson-Falk, B. J. Ramshaw, D. A. Bonn, R. Liang, W. N. Hardy, S. A. Hartnoll, and A. Kapitulnik, Proc. Natl. Acad. Sci. 114, 5378 (2017), doi:10.1073/pnas.1703416114.
[sun2023spatially] F. Sun, S. Mishra, P. McGuinness, Z. Filipiak, I. Markovic, D. Sokolov, N. Kikugawa, J. Orenstein, S. Hartnoll, A. Mackenzie, et al., arXiv:2303.02017 (2023), doi:10.48550/arXiv.2303.02017.
[wang23] W. O. Wang, J. K. Ding, E. W. Huang, B. Moritz, and T. P. Devereaux, arXiv:2302.13169 (2023), doi:10.48550/arXiv.2302.13169.
[jaklic00] J. Jaklič and P. Prelovšek, Adv. Phys. 49, 1 (2000), doi:10.1080/000187300243381.
[peterson2010] M. R. Peterson and B. S. Shastry, Phys. Rev. B 82, 195105 (2010), doi:10.1103/PhysRevB.82.195105.
[mendezvalderrama2021] J. F. Mendez-Valderrama and D. Chowdhury, Phys. Rev. B 103, 195111 (2021), doi:10.1103/PhysRevB.103.195111.
[kadanoff63] L. P. Kadanoff and P. C. Martin, Annals of Physics 24, 419 (1963), doi:10.1016/0003-4916(63)90078-2.
[vucicevic2022charge] J. Vučičević, S. Predin, and M. Ferrero, arXiv:2208.04047 (2022), https://arxiv.org/abs/2208.04047.
[chaikin76] P. M. Chaikin and G. Beni, Phys. Rev. B 13, 647 (1976), doi:10.1103/PhysRevB.13.647.
[ulaga22] M. Ulaga, J. Mravlje, P. Prelovšek, and J. Kokalj, Phys. Rev. B 106, 245123 (2022), doi:10.1103/PhysRevB.106.245123.
[emery1993frustrated] V. J. Emery and S. Kivelson, Physica C: Superconductivity 209, 597 (1993), doi:10.1016/0921-4534(93)90581-A.
[demedici2017hund] L. de' Medici, Phys. Rev. Lett. 118, 167003 (2017), doi:10.1103/PhysRevLett.118.167003.
[hartnoll15] S. A. Hartnoll, Nat. Phys. 11, 54 (2015), doi:10.1038/nphys3174.
[shastry09] B. S. Shastry, Rep. Prog. Phys. 72, 016501 (2009), doi:10.1088/0034-4885/72/1/016501.
[Note1] Similarly, one can define Γ_ε and Γ_cε from σ_εε(ω) and σ_cε(ω).
|
http://arxiv.org/abs/2307.07235v1 | 20230714091745 | A Monte Carlo study of multiplicity fluctuations in proton-proton collisions at $\sqrt{s}=$~7~TeV | [
"Valeria Zelina Reyna Ortiz",
"Maciej Rybczynski",
"Zbigniew Wlodarczyk"
] | hep-ph | [
"hep-ph"
] |
[email protected]
Institute of Physics, Jan Kochanowski University, 25-406 Kielce, Poland
With the large volumes of data available at the LHC, it has become possible to study multiplicity distributions in detail.
It is also interesting to check how well event generators can describe the properties and the behavior of
multi-particle production processes. In this paper, we analyse the oscillatory behavior of modified combinants
in proton-proton collisions at a centre-of-mass energy of 7 TeV.
A Monte Carlo study of multiplicity fluctuations
in proton-proton collisions at √(s)= 7 TeV
Valeria Zelina Reyna Ortiz, Maciej Rybczyński, Zbigniew Włodarczyk
August 12, 2023
============================================================================================
§ INTRODUCTION
Multiplicity distributions (MDs) of charged particles produced in high-energy nuclear collisions have been extensively studied in the field of multi-particle production. The determination of multiplicity distribution is among the initial observations in new high-energy experiments, primarily because it is relatively easy to obtain such information. Furthermore, MDs provide valuable insights into the underlying production processes. Since perturbative QCD fails to fully explain the observed MDs, a range of phenomenological approaches have been employed. These approaches include dynamical methods like colored string interactions <cit.> and the dual-parton model <cit.>, as well as geometrical approaches <cit.> leading to the fireball model <cit.>,
and stochastic approaches <cit.> that model high-energy collisions
as branchings <cit.> or clans <cit.>.
The charged-particle multiplicity distribution, P(N), is usually fitted with a single negative binomial distribution (NBD) <cit.>:
P_NBD(N) = Γ(N+k)/[Γ(N+1)Γ(k)] p^N (1 - p)^k.
The NBD has two free parameters: p, describing the probability of particle emission, and the parameter k≥ 1, influencing the shape of the distribution.
Nevertheless, as the energy and the number of charged secondaries, denoted as N, increase, the negative binomial distribution tends to deviate from the observed data for large values of N, as discussed in <cit.>. In these cases, alternative approaches are adopted, including combinations of two <cit.>, three <cit.>, or multi-component NBDs <cit.>, or even different forms of P(N)
distributions <cit.>. However, it should be noted that such adjustments primarily improve the agreement for large N, while the ratio R = data/fit exhibits significant deviations from unity at small N across all fitting scenarios <cit.>.
Such an observation suggests that there is additional information in the measured multiplicity distribution that is not covered by the following recurrence relation:
(N+1)P(N+1) = γ(N)P(N), γ(N) = α + β N.
Three commonly encountered forms of P(N) resulting from the recurrence relation (<ref>) are as follows: the binomial distribution, where α = Kp/(1-p) and β = -α/K; the Poisson distribution, where α = λ and β = 0; and the negative binomial distribution, where α = kp and β = α/k. Here, the parameter p again represents the probability of particle emission. In our previous work <cit.>, we introduced a more generalized form of the recurrence relation that is applicable in counting statistics when considering multiplication effects in point processes <cit.>. Unlike Eq. (<ref>), this new relation connects all multiplicities using coefficients C_j, which determine the corresponding P(N) in the following manner:
(N + 1)P(N + 1) = ⟨ N⟩∑^N_j=0 C_j P(N - j).
The modified combinants, C_j, can be obtained from experimental data
⟨ N⟩ C_j = (j+1)[ P(j+1)/P(0)] - ⟨ N⟩∑^j-1_i=0C_i [ P(j-i)/P(0)]
and exhibit a pronounced oscillatory pattern. This behavior is not only observed in proton-proton collisions, as discussed in previous works such as <cit.>, but has also been recently demonstrated in e^+e^- annihilation processes, as shown in <cit.>. These oscillations suggest the presence of additional information regarding the multi-particle production process that remains undisclosed. The periodic nature of the oscillations observed in the modified combinants derived from experimental data is particularly indicative in this regard.
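The recurrence of Eq. (<ref>) translates directly into a short numerical routine. The sketch below (Python with NumPy/SciPy) uses a single NBD as input, with placeholder parameter values; note that scipy's nbinom convention is such that its p corresponds to 1-p of Eq. (<ref>).

import numpy as np
from scipy.stats import nbinom

def modified_combinants(P, j_max):
    # <N>C_j from the recurrence
    # <N>C_j = (j+1) P(j+1)/P(0) - <N> sum_{i<j} C_i P(j-i)/P(0)
    NC = np.zeros(j_max + 1)                       # stores <N> C_j
    for j in range(j_max + 1):
        s = sum(NC[i] * P[j - i] for i in range(j))
        NC[j] = (j + 1) * P[j + 1] / P[0] - s / P[0]
    return NC

# Single NBD as input (placeholder k, p); scipy's nbinom(k, p) has
# pmf ~ C(N+k-1, N) p^k (1-p)^N, i.e. its p plays the role of 1-p in the text.
k, p = 2.0, 0.05
P = nbinom.pmf(np.arange(200), k, p)
NC = modified_combinants(P, j_max=50)
print(NC[:5])   # for a single NBD all <N>C_j are positive: no oscillations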
Nevertheless, although the probability that such oscillations are statistically insignificant is very small (∼10^-16, see Ref. <cit.> for more details), their sensitivity to experimental procedures is still under debate.
The aim of this paper is to show that the observed oscillations have a physical origin and are not the result of experimental procedures. We focus on the analysis of Monte Carlo simulated events and a comparison with existing experimental data. The paper is organized as follows. In Sec. <ref> we discuss the methodology of event generation and the analysis of model data. In Sec. <ref> we provide a concise description of the results we obtained for proton-proton interactions. Finally, in Sec. <ref> we make several comments referring to the oscillatory behavior of the higher-order moments of multiplicity distributions observed both in experimental data and in the models.
§ EVENT GENERATION AND ANALYSIS METHODOLOGY
PYTHIA <cit.> is a widely used Monte Carlo event generator program designed to generate events in high-energy physics. It serves as a tool for simulating collisions at high-energies involving elementary particles like e^+, e^-, protons, anti-protons, as well as heavy-ions, and various combinations thereof. The program encompasses a wide range of physics aspects, such as total and partial cross sections, interactions at both hard and soft scales, parton distributions, initial- and final-state parton showers, matching and merging of matrix elements with showers, multi-parton interactions, as well as processes related to hadronization/fragmentation and particle decays.
The Energy Parton Off-shell Splitting (EPOS) transport model <cit.> is a Monte Carlo event generator program designed for simulation high-energy particle collisions. It provides a framework to study various aspects of particle interactions and the resulting hadron production in both nucleus-nucleus and proton-proton collisions. EPOS considers each nucleus-nucleus or proton-proton collision as a collection of many elementary collisions happening simultaneously. These collisions involve the exchange of ”parton ladders“, which represent the evolution of partons from the projectile and target sides towards the central region (small x). The evolution of partons in EPOS is governed by an evolution equation, typically based
on the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) formalism. The intermediate gluons in the parton ladder are treated as kink singularities within the framework of relativistic strings. These strings represent flux tubes that connect the interacting partons. These flux tubes eventually decays by producing
quark-antiquark pairs, giving rise to fragments that are identified as hadrons <cit.>.
The Ultra-relativistic Quantum Molecular Dynamics (UrQMD) model, as outlined in <cit.>, is a microscopic transport model that employs the covariant propagation of hadrons along classical trajectories, coupled with stochastic binary scatterings, color string formation, and resonance decay. It provides a comprehensive framework to study the dynamics and interactions of particles across a wide energy range. It operates as a Monte Carlo solution to a complex system of coupled partial integro-differential equations, which describe the time evolution of phase space densities for various particle species. In the UrQMD model, baryon-baryon collisions at lower energies consider the exchange of electric and baryonic charge, strangeness, and four-momentum in the t-channel. On the other hand, meson-baryon and meson-meson interactions are treated through the formation and subsequent decay of resonances, following the s-channel reaction mechanism. At higher energies, a vast array of particle species can be generated, and the model accounts for subsequent re-scatterings. The UrQMD model can generate all types of particles in hadron-hadron collisions and allows them to interact further with one another <cit.>. For the analysis described in this paper, we used UrQMD set to LHC mode, which means that no hydrodynamic functions were activated. In short, this compilation mode of the UrQMD model is prepared for calculations at high multiplicities. No essential changes to the model parameters were introduced.
In this study we have used PYTHIA 8.308 <cit.>, EPOS LHC <cit.> and UrQMD 3.4 set to LHC mode <cit.> to generate proton-proton interactions at √(s)=7 TeV, in accordance with the data on charged-particle multiplicity distributions obtained by the ALICE experiment at the CERN LHC <cit.>. In the PYTHIA simulation we have enabled the inelastic component of the total cross-section for soft-QCD processes with the parameter SoftQCD:inelastic=on; the remaining PYTHIA parameters were left at their default values. In the case of EPOS LHC and UrQMD we used the default values of the parameters. To match the experimental conditions, charged-particle multiplicities have been selected according to the trigger conditions and acceptance of the ALICE detector, defined in <cit.>. Namely, the generated events of collisions (EOCs) were divided into two classes: the inelastic (INEL) class and the non-single-diffractive (NSD) class. A generated EOC belongs to the INEL class if there is at least one charged particle in either the -3.7<η<-1.7, |η|<2.98, or 2.8<η<5.1 pseudorapidity interval, corresponding to the acceptances of the V0-C, SPD, and V0-A ALICE sub-detectors, respectively. The NSD class requires charged particles to be detected in both the -3.7<η<-1.7 and 2.8<η<5.1 pseudorapidity intervals <cit.>.
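The event-class assignment described above reduces to a simple acceptance test on the charged-particle pseudorapidities of each generated event; a minimal sketch (Python/NumPy), using the windows quoted above, is given below.

import numpy as np

V0C, SPD, V0A = (-3.7, -1.7), (-2.98, 2.98), (2.8, 5.1)   # windows quoted above

def hit(eta, window):
    lo, hi = window
    return bool(np.any((eta > lo) & (eta < hi)))

def event_classes(eta):
    # INEL / NSD flags for one generated event, from its charged-particle
    # pseudorapidities
    eta = np.asarray(eta, dtype=float)
    inel = hit(eta, V0C) or hit(eta, SPD) or hit(eta, V0A)
    nsd = hit(eta, V0C) and hit(eta, V0A)
    return inel, nsd

print(event_classes([-2.5, 0.3, 4.0]))   # toy event -> (True, True)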
§ RESULTS
Multiplicity distributions P(N) of charged particles in simulated events and the modified combinants C_j that result from them are shown in Fig. <ref> for the INEL class and in Fig. <ref> for the NSD class. The modified combinants in all models exhibit oscillating behavior; however, their amplitudes and periods of oscillation are different. The growth of the amplitudes with rank j can be described as ⟨ N⟩ C_j∼ a^j with a= 1.042, 0.985 and 1.185 (INEL class), and a= 1.23, 1.18 and 1.22 (NSD class) for the PYTHIA, EPOS and UrQMD models, respectively. The periods of oscillations (INEL class) are 22, 28 and 7 for the PYTHIA, EPOS and UrQMD models, respectively. For the NSD class, the periods of oscillations are smaller and equal to 16, 4 and 3, respectively.
The remarkable oscillations of the modified combinants C_j and the multiplicity distribution P(N) given by the PYTHIA model are compared with experimental data in Fig. <ref>. The model and the ALICE data show substantial discrepancies at small multiplicities; the experimental results for P(N) cannot be described exactly. In particular, for the void probability P(0), we observe a large difference between the ALICE data and the PYTHIA prediction. Since
P(0) = exp(-∑_j=0^∞⟨ N⟩ C_j/(j+1)),
this is reflected in the behavior of the modified combinants. Compared with the ALICE data, the period of oscillations in the PYTHIA model is 1.43 times larger and the ratio of amplitudes (model/data) increases as 1.24^j.
The most commonly used form of P(N), the NBD form given by Eq. (<ref>), describes neither the experimental data nor the multiplicity distributions given by the models. To describe P(N) using an NBD, the negative binomial distribution parameter p must depend on N <cit.>. The probability of particle emission, p, which is constant in the standard NBD, depends on the multiplicity N in the way presented in Fig. <ref>. Both in the experimental data and in the model the non-monotonic form of p is clearly visible and can be described by
p(N) = A(1-B· N)·(1-(C/N)exp(-(ln(N/D)/E)^2))
with the parameters listed in Table <ref>. Although the behaviors are roughly similar, we observe a significant difference in the position N=D exp(-E^2/2) of the minimum of p. The probability of particle emission p is minimal for multiplicity N ≃ 23 in the PYTHIA model and N ≃ 14 for the ALICE experimental data.
Multiplicity distributions measured by ALICE can be successfully described by a two-component compound binomial distribution
P(N)=∑_i=1^2 w_i h(N;p_i,K_i,k_i,m_i),
where h(N) is the compound binomial distribution (BD + NBD) given by the generating function
H(z)= [ p ( k/(k-m(z-1)) )^k +1-p ]^K.
Comparison with PYTHIA simulation is shown in Fig. <ref> for parameters given in Table <ref>.
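One way to evaluate P(N) from the generating function of Eq. (<ref>) is to sample H(z) on the unit circle and read off its coefficients by a discrete Fourier transform; a minimal sketch is given below, with placeholder parameters that are not the fitted values of the Table.

import numpy as np

def pn_from_generating_function(w, components, n_max=512):
    # P(N) = (1/M) sum_j H(z_j) z_j^(-N) with z_j on the unit circle, i.e. the
    # coefficients of H(z); valid as long as P(N) is negligible for N >= n_max.
    z = np.exp(2j * np.pi * np.arange(n_max) / n_max)
    H_total = np.zeros(n_max, dtype=complex)
    for w_i, (p, K, k, m) in zip(w, components):
        H_total += w_i * (p * (k / (k - m * (z - 1.0)))**k + 1.0 - p)**K
    P = np.fft.fft(H_total).real / n_max
    return np.clip(P, 0.0, None)       # clip tiny negative round-off

# Placeholder (p_i, K_i, k_i, m_i) and weights w_i; the fitted values are in the Table.
P = pn_from_generating_function(w=[0.6, 0.4],
                                components=[(0.9, 4, 2.0, 5.0), (0.8, 6, 2.5, 12.0)])
N = np.arange(len(P))
print(P.sum(), (N * P).sum())          # normalization and mean multiplicity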
§ DISCUSSION
It is worth pointing out that modified combinants evaluated from models exhibit oscillatory behavior, though the oscillation period differs from experimental data. Modified combinants C_j can be expressed by the generating function:
G(z) = ∑_N=0^∞ P(N) z^N
of the count probability P(N) as:
⟨ N⟩ C_j = (1/j!) d^j+1ln G(z)/dz^j+1|_z=0.
The generating function can be shown to be a sum over the “averaged” connected correlation function g_(n) of all orders, (n):
ln G(z) = ∑_n=1^∞(z-1)^n/n! m^ng_(n),
where m is the number of particles in a cell of the phase-space volume <cit.>. The modified combinants C_j can be expressed as an infinite series of the g_(n):
⟨ N⟩ C_j = ∑_n=j+1^∞(-1)^n-j-1 m^n/[j!(n-j-1)!] g_(n).
Note that the correlation functions g_(n) are associated with widely used cumulant factorial moments (see Appendix for more details).
Higher-order correlations, characterized by an n-body correlation function g_(n), are of general interest and have been investigated in many fields of physics, including astronomy, particle physics and quantum optics. In 1963, Glauber predicted that the maximal value of the same-point normalized n-body correlation function g_(n) calculated for thermal light is directly related to the order of the function by a simple relationship, n! <cit.>. This n! dependence is a consequence of Wick's theorem <cit.>, which enables higher-order correlations to be expressed using products of one-body correlation functions. The applicability of Wick's theorem is not limited to correlation functions for light; it has also been applied in many other fields; for example, it is commonly used in radio-astronomy, nuclear physics, and generally in quantum field theory. The validity of Wick's theorem was first demonstrated with thermal photons, and has recently been proved for higher-order correlations of massive particles <cit.>.
For correlation function
g_(n) = (n-1)! k^-(n-1)
with real positive parameter k we have:
⟨ N⟩ C_j = k(m/(m+k))^j+1
as for NBD, where m is the average multiplicity and the two-body correlation function determines the value of NBD shape parameter, 1/k=g_(2).
To assure an oscillating behavior of the modified combinants we choose the correlation function in the form:
g_(n) = (n-1)!cos(n/k),
which leads to the following formula for the modified combinants:
⟨ N⟩ C_j = (1/2) m^j+1[(e^i/k+m)^-j-1+(e^-i/k+m)^-j-1]
with i denoting the imaginary unit.
It is remarkable that the k parameter affects only the oscillation amplitude, see Fig. <ref>. The period of oscillations (∼ 2π m) is determined by the value of the m parameter, see Fig. <ref>.
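A direct evaluation of the above formula is sketched below; the (m,k) values are placeholders, and the spacing between sign changes of ⟨N⟩C_j is printed to show that it grows proportionally to m, in line with the ∼2πm estimate quoted above.

import numpy as np

def NC_cos(j, m, k):
    # <N>C_j = (1/2) m^(j+1) [(e^{i/k}+m)^(-j-1) + (e^{-i/k}+m)^(-j-1)],
    # evaluated as (m/(e^{+-i/k}+m))^(j+1) to avoid overflow at large j
    j = np.asarray(j, dtype=float)
    t1 = (m / (np.exp(+1j / k) + m))**(j + 1.0)
    t2 = (m / (np.exp(-1j / k) + m))**(j + 1.0)
    return (0.5 * (t1 + t2)).real

j = np.arange(0, 400)
for m in (5.0, 10.0, 20.0):                          # placeholder parameter values
    NC = NC_cos(j, m, k=0.7)
    crossings = j[:-1][np.diff(np.sign(NC)) != 0]    # sign changes of <N>C_j
    print(m, np.diff(crossings).mean())              # spacing ~ half period, grows with m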
The average number of particles in a cell given by the value of m parameter is not derived from any first principle, although some suggestions have been made to equate it with the average number of partons in the QCD cascade. Considering hadrons as a dense system of partons we expect that m∼ Q_S^2, where Q_S denotes the saturation scale <cit.>.
In the PYTHIA model, the saturation scale Q_S can be connected with the p_T0 parameter <cit.>. The period of oscillations depends on p_T0, but the ∼ p_T0^1/3 dependence is very weak. By setting MultipartonInteractions:pT0Ref=1.56 instead of its default value 2.28 we found the period of oscillations decreasing by 15% (see Fig. <ref>), but a more thorough re-tune is necessary in order to simultaneously obtain the correct multiplicity distribution.
Our results can, hopefully, lead to wider theoretical investigations and provide a better understanding of multi-particle production processes in hadronic collisions.
§ RELATIONSHIP OF CORRELATION FUNCTIONS WITH CUMULANTS
Usually, the information contained in P(N) is obtained by examining the corresponding cumulant factorial moments <cit.>
K_q=F_q-∑_i=1^q-1\binom{q-1}{i-1}K_q-iF_i,
where
F_q=∑_N=q^∞N(N-1)(N-2)...(N-q+1)P(N)
are the factorial moments. Modified combinants C_j can be expressed as an infinite series of the K_q <cit.>
⟨ N⟩ C_j=(1/j!)∑_p=0^∞(-1)^p/p! K_p+j.
When comparing Eq. (<ref>) with Eq. (<ref>) we have
K_q=m^q+1g_(q+1).
Note that the moments K_q require knowledge of all P(N) while, according to Eq. (<ref>), calculation of C_j (and corresponding correlation function g_(n)) requires only a finite number of probabilities P(N<j) which may be advantageous in applications.
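For completeness, the chain P(N) → F_q → K_q can be sketched numerically as follows; the NBD input parameters are placeholders (scipy's p corresponds to 1-p of the NBD formula in the main text).

import numpy as np
from math import comb
from scipy.stats import nbinom

def factorial_moments(P, q_max):
    # F_q = sum_N N(N-1)...(N-q+1) P(N)
    N = np.arange(len(P), dtype=float)
    F, falling = np.zeros(q_max + 1), np.ones_like(N)
    F[0] = P.sum()
    for q in range(1, q_max + 1):
        falling = falling * (N - q + 1)
        F[q] = np.sum(falling * P)
    return F

def cumulant_factorial_moments(F, q_max):
    # K_q = F_q - sum_{i=1}^{q-1} binom(q-1, i-1) K_{q-i} F_i
    K = np.zeros(q_max + 1)
    for q in range(1, q_max + 1):
        K[q] = F[q] - sum(comb(q - 1, i - 1) * K[q - i] * F[i] for i in range(1, q))
    return K

P = nbinom.pmf(np.arange(400), 2.0, 0.08)   # placeholder NBD input
F = factorial_moments(P, 6)
K = cumulant_factorial_moments(F, 6)
print(F[1], K[2] / F[1]**2)                 # <N> and K_2/F_1^2 (= 1/k for this input)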
§ ACKNOWLEDGEMENTS
This research was supported by the Polish National Science Centre (NCN) Grant No. 2020/39/O/ST2/00277. In preparation of this work we used the resources of the Center for Computation and Computational Modeling of the Faculty of Exact and Natural Sciences of the Jan Kochanowski University of Kielce.
Andersson:1983ia
B. Andersson, G. Gustafson, G. Ingelman and T. Sjostrand,
Phys. Rept. 97, 31-145 (1983)
doi:10.1016/0370-1573(83)90080-7
Capella:1992yb
A. Capella, U. Sukhatme, C. I. Tan and J. Tran Thanh Van,
Phys. Rept. 236, 225-329 (1994)
doi:10.1016/0370-1573(94)90064-7
Chen:1986ns
W. R. Chen and R. C. Hwa,
Phys. Rev. D 36, 760 (1987)
doi:10.1103/PhysRevD.36.760
Hwa:1987mm
R. C. Hwa,
Phys. Rev. D 37, 1830 (1988)
doi:10.1103/PhysRevD.37.1830
Chou:1983xg
K. c. Chou, L. s. Liu and T. c. Meng,
Phys. Rev. D 28, 1080 (1983)
doi:10.1103/PhysRevD.28.1080
Dewanto:2008zz
A. Dewanto, A. H. Chan, C. H. Oh, R. Chen and K. Sitaram,
Eur. Phys. J. C 57, 515-523 (2008)
doi:10.1140/epjc/s10052-008-0750-z
Chew:1986qv
C. K. Chew, D. Kiang and H. Zhou,
Phys. Lett. B 186, 411-415 (1987)
doi:10.1016/0370-2693(87)90318-2
Chan:1990hs
A. H. Chan and C. K. Chew,
Phys. Rev. D 41, 851-862 (1990)
doi:10.1103/PhysRevD.41.851
Brambilla:2006zt
M. Brambilla, A. Giovannini and R. Ugoccioni,
Physica A 387, 1110-1122 (2008)
doi:10.1016/j.physa.2007.10.047
[arXiv:hep-ph/0605269 [hep-ph]].
Grosse-Oetringhaus:2009eis
J. F. Grosse-Oetringhaus and K. Reygers,
J. Phys. G 37, 083001 (2010)
doi:10.1088/0954-3899/37/8/083001
[arXiv:0912.0023 [hep-ex]].
Wilk:2016dcn
G. Wilk and Z. Włodarczyk,
J. Phys. G 44, no.1, 015002 (2017)
doi:10.1088/0954-3899/44/1/015002
[arXiv:1601.03883 [hep-ph]].
Ghosh:2012xh
P. Ghosh,
Phys. Rev. D 85, 054017 (2012)
doi:10.1103/PhysRevD.85.054017
[arXiv:1202.4221 [hep-ph]].
Giovannini:2003ft
A. Giovannini and R. Ugoccioni,
Phys. Rev. D 68, 034009 (2003)
doi:10.1103/PhysRevD.68.034009
[arXiv:hep-ph/0304128 [hep-ph]].
Zborovsky:2013tla
I. Zborovský,
J. Phys. G 40, 055005 (2013)
doi:10.1088/0954-3899/40/5/055005
[arXiv:1303.7388 [hep-ph]].
Dremin:2004ts
I. M. Dremin and V. A. Nechitailo,
Phys. Rev. D 70, 034005 (2004)
doi:10.1103/PhysRevD.70.034005
[arXiv:hep-ph/0402286 [hep-ph]].
Dremin:2000ep
I. M. Dremin and J. W. Gary,
Phys. Rept. 349, 301-393 (2001)
doi:10.1016/S0370-1573(00)00117-4
[arXiv:hep-ph/0004215 [hep-ph]].
Chekanov:1996ah
S. V. Chekanov and V. I. Kuvshinov,
J. Phys. G 22, 601-610 (1996)
doi:10.1088/0954-3899/22/5/007
[arXiv:hep-ph/9606202 [hep-ph]].
Hoang:1987tt
T. F. Hoang and B. Cork,
Z. Phys. C 36, 323 (1987)
doi:10.1007/BF01579149
Wilk:2018kvg
G. Wilk and Z. Włodarczyk,
Int. J. Mod. Phys. A 33, no.10, 1830008 (2018)
doi:10.1142/S0217751X18300089
[arXiv:1803.07832 [hep-ph]].
ST
B.E.A. Saleh and M.K. Teich,
Proc. IEEE 70, 229 (1982).
Rybczynski:2018bwk
M. Rybczynski, G. Wilk and Z. Włodarczyk,
Phys. Rev. D 99, no.9, 094045 (2019)
doi:10.1103/PhysRevD.99.094045
[arXiv:1811.07197 [hep-ph]].
Rybczynski:2019dwa
M. Rybczyński, G. Wilk and Z. Włodarczyk,
Ukr. J. Phys. 64, no.8, 738-744 (2019)
doi:10.15407/ujpe64.8.738
[arXiv:1906.11531 [hep-ph]].
Zborovsky:2018vyh
I. Zborovský,
Eur. Phys. J. C 78, no.10, 816 (2018)
doi:10.1140/epjc/s10052-018-6287-x
[arXiv:1811.11230 [hep-ph]].
Ang:2018zjy
H. W. Ang, M. Ghaffar, A. H. Chan, M. Rybczyński, Z. Włodarczyk and G. Wilk,
Mod. Phys. Lett. A 34, no.39, 1950324 (2019)
doi:10.1142/S0217732319503243
[arXiv:1812.08840 [hep-ph]].
Bierlich:2022pfr
C. Bierlich, S. Chakraborty, N. Desai, L. Gellersen, I. Helenius, P. Ilten, L. Lönnblad, S. Mrenna, S. Prestel and C. T. Preuss, et al.
SciPost Phys. Codebases 8 (2022)
doi:10.21468/SciPostPhysCodeb.8
[arXiv:2203.11601 [hep-ph]].
Pierog:2013ria
T. Pierog, I. Karpenko, J. M. Katzy, E. Yatsenko and K. Werner,
Phys. Rev. C 92, no.3, 034906 (2015)
doi:10.1103/PhysRevC.92.034906
[arXiv:1306.0121 [hep-ph]].
Motornenko:2017klp
A. Motornenko, K. Grebieszkow, E. Bratkovskaya, M. I. Gorenstein, M. Bleicher and K. Werner,
J. Phys. G 45, no.11, 115104 (2018)
doi:10.1088/1361-6471/aae149
[arXiv:1711.07789 [nucl-th]].
Bass:1998ca
S. A. Bass, M. Belkacem, M. Bleicher, M. Brandstetter, L. Bravina, C. Ernst, L. Gerland, M. Hofmann, S. Hofmann and J. Konopka, et al.
Prog. Part. Nucl. Phys. 41, 255-369 (1998)
doi:10.1016/S0146-6410(98)00058-1
[arXiv:nucl-th/9803035 [nucl-th]].
Bleicher:1999xi
M. Bleicher, E. Zabrodin, C. Spieles, S. A. Bass, C. Ernst, S. Soff, L. Bravina, M. Belkacem, H. Weber and H. Stoecker, et al.
J. Phys. G 25, 1859-1896 (1999)
doi:10.1088/0954-3899/25/9/308
[arXiv:hep-ph/9909407 [hep-ph]].
ALICE:2017pcy
S. Acharya et al. [ALICE],
Eur. Phys. J. C 77, no.12, 852 (2017)
doi:10.1140/epjc/s10052-017-5412-6
[arXiv:1708.01435 [hep-ex]].
White:1979kp
S. D. M. White,
Mon. Not. Roy. Astron. Soc. 186, 145 (1979)
Glauber:1963fi
R. J. Glauber,
Phys. Rev. 130, 2529-2539 (1963)
doi:10.1103/PhysRev.130.2529
Wick:1950ee
G. C. Wick,
Phys. Rev. 80, 268-272 (1950)
doi:10.1103/PhysRev.80.268
Dall:2013
R. Dall et al.,
Nature Phys. 9, 341-344 (2013)
doi: 10.1038/nphys2632
Gotsman:2020bjc
E. Gotsman and E. Levin,
Phys. Rev. D 102, no.7, 074008 (2020)
doi:10.1103/PhysRevD.102.074008
[arXiv:2006.11793 [hep-ph]].
Kharzeev:2017qzs
D. E. Kharzeev and E. M. Levin,
Phys. Rev. D 95, no.11, 114008 (2017)
doi:10.1103/PhysRevD.95.114008
[arXiv:1702.03489 [hep-ph]].
Bartel
R. Bartel and M. Płoszajczak,
Universal Fluctuations, The Phenomenology of Hadronic Matter
(World Scientific, Singapore, 2002).
|
http://arxiv.org/abs/2307.04037v2 | 20230708195151 | Employing Drones in Agriculture: An Exploration of Various Drone Types and Key Advantages | [
"E. C. Nunes"
] | cs.RO | [
"cs.RO"
] |
Employing Drones in Agriculture: An Exploration of Various Drone Types and Key Advantages
1st Eduardo Carvalho Nunes
Department of Engineering
University of Trás-os-Montes and Alto Douro
5000-801, Vila Real, Portugal
ORCID: 0000-0002-5345-8854
=====================================================================================================================================================================
This article explores the use of drones in agriculture and discusses the various types of drones employed for different agricultural applications. Drones, also known as unmanned aerial vehicles (UAVs), offer numerous advantages in farming practices. They provide real-time and high-resolution data collection, enabling farmers to make informed irrigation, fertilization, and pest management decisions. Drones assist in precision spraying and application of agricultural inputs, minimizing chemical wastage and optimizing resource utilization. They offer accessibility to inaccessible areas, reduce manual labor, and provide cost savings and increased operational efficiency. Drones also play a crucial role in mapping and surveying agricultural fields, aiding crop planning and resource allocation. However, challenges such as regulations and limited flight time need to be addressed. The advantages of using drones in agriculture include precision agriculture, cost and time savings, improved data collection and analysis, enhanced crop management, accessibility and flexibility, environmental sustainability, and increased safety for farmers. Overall, drones have the potential to revolutionize farming practices, leading to increased efficiency, productivity, and sustainability in agriculture.
Drone, Agriculture, UAV
§ INTRODUCTION
The use of drones in agriculture has gained significant attention in recent years due to their potential to revolutionize farming practices. Drones, also known as unmanned aerial vehicles (UAVs), offer a range of applications that can enhance efficiency, productivity, and sustainability in agriculture.
One of the key advantages of using drones in agriculture is their ability to provide real-time and high-resolution data collection <cit.>. Drones equipped with cameras, sensors, and imaging technologies can capture detailed imagery of crops, soil conditions, and field topography <cit.>. This data can be used for crop monitoring, assessment, and precision agriculture practices <cit.>. By analyzing this data, farmers can make informed decisions regarding irrigation, fertilization, and pest management, leading to optimized resource utilization and improved crop yields <cit.>.
Drones also play a crucial role in precision spraying and application of agricultural inputs <cit.>. With their ability to navigate through fields and deliver targeted treatments, drones can reduce chemical wastage, minimize environmental impact, and improve the efficiency of pesticide and fertilizer application <cit.>. This targeted approach helps protect beneficial insects, reduce water pollution, and optimize resource utilization <cit.>.
Furthermore, drones offer access to areas that are difficult or impossible to reach by traditional means <cit.>. They can fly at low altitudes and capture data from different angles and perspectives, providing a comprehensive view of the field <cit.>. This enables farmers to monitor large farmland areas quickly and efficiently, reducing the time and labor required for manual inspections <cit.>. Drones can cover large farmland areas in a fraction of the time it would take using traditional methods, leading to cost savings and increased operational efficiency <cit.>.
In addition to data collection and monitoring, drones can assist in mapping and surveying agricultural fields. They can create high-resolution maps and 3D models, providing valuable information for crop planning, land management, and resource allocation. Drones equipped with advanced sensors, such as LiDAR or hyperspectral cameras, can capture detailed data for precise analysis and decision-making <cit.>. This enables farmers to identify areas of nutrient deficiency, optimize irrigation practices, and implement site-specific management strategies. Nevertheless, the use of drones in agriculture is not without challenges. Regulations and licensing requirements for drone operation vary across countries and regions, and compliance with these regulations is essential to ensure safe and responsible drone use <cit.>. Additionally, drones' limited flight time and battery capacity can pose challenges in large-scale farming operations <cit.>. However, advancements in drone technology, such as improved battery life and payload capacity, are addressing these limitations and expanding the possibilities for drone applications in agriculture.
§ DIFFERENT TYPES OF DRONES USED IN AGRICULTURE
In agriculture, different types of drones are used for various applications. These drones offer unique capabilities and functionalities that cater to specific agricultural needs. Some of the commonly used types of drones in agriculture include:
* Multi-Rotor Drones: Multi-rotor drones (Figure <ref>), such as quadcopters and hexacopters, are popular in agriculture due to their maneuverability and stability <cit.>. They are equipped with multiple rotors that allow them to hover in place, fly at low altitudes, and capture high-resolution imagery. Multi-rotor drones are suitable for tasks that require close and contained object capture, such as monitoring crop health, detecting pests and diseases, and applying targeted treatments <cit.>.
* Fixed-Wing Drones: Fixed-wing drones (Figure <ref>) have a wing-like structure and are designed to fly like airplanes <cit.>. They are known for their long-flight endurance and ability to cover large areas. Fixed-wing drones are commonly used for mapping and surveying agricultural fields, as they can fly faster and cover more considerable distances. However, they require a runway for takeoff and landing, which can be a limitation in specific agricultural settings.
* Hybrid Drones: Hybrid drones (Figure <ref>) combine the features of multi-rotor and fixed-wing drones <cit.>. They can take off and land vertically like multi-rotor drones and then transition to fixed-wing flight for longer endurance and coverage <cit.>. Hybrid drones are suitable for applications that require both close-range imaging and large-scale mapping, providing flexibility and versatility in agricultural operations.
* Thermal Imaging Drones: Thermal imaging drones (Figure <ref>) are equipped with thermal cameras that capture infrared radiation emitted by objects <cit.>. These drones are used in agriculture to monitor crop health, detect irrigation issues, and identify areas of heat stress or pest infestation <cit.>. Thermal imaging drones can provide valuable insights into the temperature distribution and thermal patterns in agricultural fields, aiding precision agriculture practices.
* Spraying Drones: Spraying drones (Figure <ref>), also known as agricultural drones or crop dusting drones, are specifically designed for the targeted application of pesticides, fertilizers, and other agricultural inputs <cit.>. These drones are equipped with spraying systems that can accurately and efficiently deliver chemicals to crops, reducing the need for manual labor and minimizing chemical wastage <cit.>. Spraying drones offer precise and controlled applications, reducing environmental impact and optimizing resource utilization.
* Surveillance Drones: Surveillance drones (Figure <ref>) are used in agriculture for monitoring and security purposes <cit.>. These drones are equipped with cameras and sensors that capture real-time video footage and imagery, allowing farmers to monitor their fields, livestock, and infrastructure remotely <cit.>. Surveillance drones can help detect unauthorized activities, track animal movements, and identify potential threats or risks in agricultural operations.
* Mapping and Surveying Drones: Mapping and surveying drones (Figure <ref>) are used to create high-resolution maps and 3D models of agricultural fields <cit.>. These drones have advanced sensors, such as LiDAR (Light Detection and Ranging) or photogrammetry cameras, to capture detailed and accurate data <cit.>. Mapping and surveying drones are valuable tools for precision agriculture, enabling farmers to analyze topography, monitor soil conditions, and plan efficient land management strategies.
* Payload-Specific Drones: Besides the types above, there are drones designed for specific agricultural applications. For example, there are drones equipped with hyperspectral sensors for detailed analysis of crop health and nutrient content <cit.>. There are also drones with specialized sensors for monitoring soil moisture levels, detecting weed infestations, or assessing plant growth parameters <cit.>. These payload-specific drones (Figure <ref>) cater to specific data collection needs in agriculture.
§ ADVANTAGES OF USING DRONES IN AGRICULTURE
Using drones in agriculture offers several advantages contributing to improved efficiency, productivity, and sustainability in agricultural practices. The advantages of using drones in farming are:
* Precision Agriculture: Drones enable precision agriculture practices by providing high-resolution imagery and data collection capabilities <cit.>. They can capture detailed information about crop health, soil conditions, and pest infestations, allowing farmers to make informed decisions and apply targeted treatments <cit.>. This precision approach helps optimize resource utilization, reduce input wastage, and increase crop yields <cit.>.
* Cost and Time Savings: Drones can cover large areas of farmland quickly and efficiently, reducing the time and labor required for manual inspections and data collection <cit.>. They can perform tasks such as crop monitoring, mapping, and spraying in a fraction of the time it would take using traditional methods <cit.>. This leads to cost savings by minimizing the need for manual labor and reducing the use of resources such as water, fertilizers, and pesticides <cit.>.
* Improved Data Collection and Analysis: Drones equipped with various sensors, such as cameras, thermal imaging, and multispectral sensors, can collect a wide range of data about crops, soil, and environmental conditions <cit.>. This data can be used for detailed analysis and monitoring, enabling farmers to detect early signs of crop stress, nutrient deficiencies, or disease outbreaks <cit.>. The data collected by drones can be processed using advanced analytics and machine learning algorithms to generate actionable insights for better decision-making <cit.>.
* Enhanced Crop Management: Drones provide real-time and up-to-date information about crop health, allowing farmers to implement timely interventions and optimize crop management practices <cit.>. For example, drones can help identify areas of the field that require additional irrigation or fertilization, enabling precise application and reducing waste <cit.>. They can also assist in monitoring crop growth, estimating yield potential, and predicting harvest times <cit.>.
* Accessibility and Flexibility: Drones offer accessibility to areas that are difficult to reach or inaccessible by traditional means, such as steep slopes or dense vegetation <cit.>. They can fly at low altitudes and capture data from different angles and perspectives, providing a comprehensive view of the field <cit.>. Drones can be deployed quickly and easily, allowing farmers to respond rapidly to changing conditions or emergencies <cit.>.
* Environmental Sustainability: Using drones in farming can contribute to environmental sustainability by reducing the use of chemicals and minimizing the environmental impact of agricultural practices <cit.>. Drones enable targeted spraying of pesticides and fertilizers, reducing the amount of chemicals applied and minimizing their dispersion into the environment <cit.>. This targeted approach helps protect beneficial insects, reduce water pollution, and promote ecological balance <cit.>.
* Safety: Drones eliminate or reduce the need for farmers to physically access hazardous or difficult-to-reach areas, such as tall crops, steep terrains, or areas with potential safety risks <cit.>. This improves the safety of farmers and reduces the risk of accidents or injuries associated with manual labor <cit.>.
§ CONCLUSION
Using drones in agriculture holds immense promise for revolutionizing farming practices and improving efficiency, productivity, and sustainability. The various types of drones available cater to specific agricultural needs, ranging from crop monitoring and assessment to precision spraying, mapping, and surveying. Drones provide real-time and high-resolution data collection, enabling farmers to make informed decisions regarding resource allocation and optimize crop management practices. They offer cost and time savings by reducing manual labor and minimizing the use of resources. The ability of drones to access inaccessible areas and provide comprehensive views of the fields enhances their usability and efficiency in large-scale farming operations.
Furthermore, drones contribute to environmental sustainability by enabling targeted spraying, reducing chemical wastage, and minimizing the environmental impact of agricultural practices. The safety aspect of using drones must be considered, as they eliminate or reduce the need for farmers to access hazardous areas physically. Despite challenges such as regulations and limited flight time, advancements in drone technology are continually addressing these limitations. Overall, the advantages of using drones in agriculture are significant, and their integration into farming practices has the potential to transform the industry, leading to optimized resource utilization, improved crop yields, and sustainable agricultural practices.
10.1002/net.21818Otto, A., Agatz, N., Campbell, J., Golden, B. & Pesch, E. Optimization Approaches for Civil Applications of Unmanned Aerial Vehicles (UAVs) or Aerial Drones: A Survey. Networks. (2018)
10.1007/s41666-020-00080-6Nasajpour, M., Pouriyeh, S., Parizi, R., Dorodchi, M., Valero, M. & Arabnia, H. Internet of Things for Current COVID-19 and Future Pandemics: An Exploratory Study. Journal Of Healthcare Informatics Research. (2020)
10.3390/rs9010088Jakob, S., Zimmermann, R. & Gloaguen, R. The Need for Accurate Geometric and Radiometric Corrections of Drone-Borne Hyperspectral Data for Mineral Exploration: MEPHySTo—A Toolbox for Pre-Processing Drone-Borne Hyperspectral Data. Remote Sensing. (2017)
10.3390/s20051487Gao, D., Sun, Q., Hu, B. & Zhang, S. A Framework for Agricultural Pest and Disease Monitoring Based on Internet-of-Things and Unmanned Aerial Vehicles. Sensors. (2020)
10.1109/access.2020.2982086Castellanos, G., Deruyck, M., Martens, L. & Joseph, W. System Assessment of WUSN Using NB-IoT UAV-Aided Networks in Potato Crops. Ieee Access. (2020)
10.1038/s41598-020-67898-3Santangeli, A., Chen, Y., Kluen, E., Chirumamilla, R., Tiainen, J. & Loehr, J. Integrating Drone-Borne Thermal Imaging With Artificial Intelligence to Locate Bird Nests on Agricultural Land. Scientific Reports. (2020)
10.3390/land10020164Ayamga, M., Tekinerdogan, B. & Kassahun, A. Exploring the Challenges Posed by Regulations for the Use of Drones in Agriculture in the African Context. Land. (2021)
10.3390/drones6070160Javan, F., Samadzadegan, F., Gholamshahi, M. & Mahini, F. A Modified YOLOv4 Deep Learning Network for Vision-Based UAV Recognition. Drones. (2022)
10.1109/access.2021.3130900Dutta, A., Roy, S., Kreidl, O. & Bölöni, L. Multi-Robot Information Gathering for Precision Agriculture: Current State, Scope, and Challenges. Ieee Access. (2021)
10.5937/ekonomika1804091sSpalević, Ž., Ilic, M. & Savija, V. The Use of Drones in Agriculture: ICT Policy, Legal and Economical Aspects. Ekonomika. (2018)
10.3390/app11052138Kim, S., Ahmad, H., Moon, J. & Jung, S. Nozzle With a Feedback Channel for Agricultural Drones. Applied Sciences. (2021)
10.5194/isprs-archives-xlii-2-789-2018Oliveira, R., Khoramshahi, E., Suomalainen, J., Hakala, T., Viljanen, N. & Honkavaara, E. Real-Time and Post-Processed Georeferencing for Hyperpspectral Drone Remote Sensing. The International Archives Of The Photogrammetry Remote Sensing And Spatial Information Sciences. (2018)
10.1111/sum.12771Chen, Q., Li, L., Chong, C. & Wang, X. AI‐enhanced Soil Management and Smart Farming. Soil Use And Management. (2021)
10.1088/1757-899x/1259/1/012015Borikar, G., Gharat, C. & Deshmukh, S. Application of Drone Systems for Spraying Pesticides in Advanced Agriculture: A Review. Iop Conference Series Materials Science And Engineering. (2022)
10.1016/j.jairtraman.2020.101929Merkert, R. & Bushell, J. Managing the Drone Revolution: A Systematic Literature Review Into the Current Use of Airborne Drones and Future Strategic Directions for Their Effective Control. Journal Of Air Transport Management. (2020)
10.1371/journal.pone.0141006Lisein, J., Michez, A., Claessens, H. & Lejeune, P. Discrimination of Deciduous Tree Species From Time Series of Unmanned Aerial System Imagery. Plos One. (2015)
10.3390/drones5020041Krul, S., Pantos, C., Frangulea, M. & Valente, J. Visual SLAM for Indoor Livestock and Farming Using a Small Drone With a Monocular Camera: A Feasibility Study. Drones. (2021)
10.3390/agronomy11091809Huzaifah, M., Juraimi, A., Che'ya, N., Sulaiman, N., Manaf, M., Ramli, Z. & Motmainna, M. Using Remote Sensing and an Unmanned Aerial System for Weed Management in Agricultural Crops: A Review. Agronomy. (2021)
10.30657/pea.2021.27.10Dadi, V., Nikhil, S., Mor, R., Agarwal, T. & Arora, S. Agri-Food 4.0 and Innovations: Revamping the Supply Chain Operations. Production Engineering Archives. (2021)
10.22438/jeb/43/1/mrn-1912Verma, A., Singh, M., Parmar, R. & Bhullar, K. Feasibility Study on Hexacopter UAV Based Sprayer for Application of Environment-Friendly Biopesticide in Guava Orchard. Journal Of Environmental Biology. (2022)
10.1007/978-981-16-4369-9_25Kumaar, A. & Kumaar, A. GPS-Based Path Planning Algorithm for Agriculture Drones. (2021)
10.3390/agriculture13051075McCarthy, C., Nyoni, Y., Kachamba, D., Banda, L., Moyo, B., Chisambi, C., Banfill, J. & Hoshino, B. Can Drones Help Smallholder Farmers Improve Agriculture Efficiencies and Reduce Food Insecurity in Sub-Saharan Africa? Local Perceptions From Malawi. Agriculture. (2023)
10.1051/matecconf/202133502002Lee, C., Phang, S. & Mun, H. Design and Implementation of an Agricultural UAV With Optimized Spraying Mechanism. Matec Web Of Conferences. (2021)
10.1051/e3sconf/202338101048Zhichkin, K., Nosov, V., Zhichkina, L., Anichkina, O., Borodina, I. & Beketov, A. Efficiency of Using Drones in Agricultural Production. E3s Web Of Conferences. (2023)
10.1109/access.2019.2949703Farooq, M., Riaz, S., Abid, A., Abid, K. & Naeem, M. A Survey on the Role of IoT in Agriculture for the Implementation of Smart Farming. Ieee Access. (2019)
|
http://arxiv.org/abs/2307.05277v1 | 20230711141219 | Positive mass theorems for spin initial data sets with arbitrary ends and dominant energy shields | [
"Simone Cecchini",
"Martin Lesourd",
"Rudolf Zeidler"
] | math.DG | [
"math.DG",
"gr-qc",
"53C21 (Primary) 53C24, 53C27 (Secondary)"
] |
We prove a positive mass theorem for spin initial data sets (M,g,k) that contain an asymptotically flat end and a shield of dominant energy (a subset of M on which the dominant energy scalar μ-|J| has a positive lower bound).
In a similar vein, we show that for an asymptotically flat end ℰ that violates the positive mass theorem (i.e. E < |P|), there exists a constant R>0, depending only on ℰ, such that any initial data set containing ℰ must violate the hypotheses of Witten's proof of the positive mass theorem in an R-neighborhood of ℰ.
This implies the positive mass theorem for spin initial data sets with arbitrary ends, and we also prove a rigidity statement.
Our proofs are based on a modification of Witten's approach to the positive mass theorem involving an additional independent timelike direction in the spinor bundle.
Positive mass theorems for spin initial data sets with arbitrary ends and dominant energy shields
Simone Cecchini
Martin Lesourd
Rudolf Zeidler
August 12, 2023
======================================================================================================================================
§ INTRODUCTION
The positive mass theorem is a fundamental result in differential geometry and the geometry of scalar curvature that arose as a conjecture in general relativity, where it was formulated for asymptotically flat manifolds (M^n,g) (complete manifolds whose ends are asymptotic to Euclidean space ℝ^n). The theorem gives an inequality ≥ 0 where is a quantity computed at infinity in an asymptotically flat end, and a rigidity statement stating that if =0 then (M^n,g) is Euclidean space ℝ^n.
There are two main settings for the positive mass theorem, one that applies to Riemannian manifolds (M,g) where the metric g is assumed to have nonnegative scalar curvature ≥ 0 (the Riemannian case), and one which applies to initial data sets (M^n,g,k) satisfying the dominant energy condition (DEC) μ≥ |J|.
An initial data set (M^n,g,k) is a Riemannian manifold (M^n,g) endowed with a symmetric two tensor k, and the dominant energy condition is
μ≥ |J|,
where
μ =1/2(_g+(_g(k))^2-|k|_g^2), J=div (k)- _g(k).
The initial data set version of the positive mass theorem involves significant additional technicalities over the Riemannian case, and the appropriate rigidity statement for initial data sets also differs in that it involves the existence of an embedding into Minkowski spacetime (ℝ^1, n,η).
In 1988, Schoen–Yau <cit.> conjectured, in the Riemannian case, that the positive mass theorem holds for manifolds that need only have one asymptotically flat end (i.e., there may be other complete ends but nothing about them is assumed other than the curvature condition). A number of more recent works on the subject have now settled the following <cit.>.
Let (M^n ≥ 3,g) be a complete manifold that contains a distinguished asymptotically flat end ℰ and nonnegative scalar curvature ≥ 0. Assume either that n≤ 7 or that M^n is spin. Then the ADM mass of ℰ satisfies (ℰ)≥ 0, with equality if and only if (M^n,g) is Euclidean space.
In dimensions 3≤ n≤ 7, the nonnegativity `(ℰ)≥ 0' in Theorem <ref> was first proved by Lesourd–Unger–Yau <cit.>. Later, J. Zhu <cit.> gave a different proof that also included the rigidity statement. Yet another proof of nonnegativity and rigidity was given by Lee–Lesourd–Unger <cit.>, building on their own earlier work <cit.>.
Under a spin assumption, the nonnegativity in Theorem <ref> was proved by Bartnik-Chruściel <cit.>; in fact, their argument is more general and applies to initial data sets satisfying the DEC. More recently, in the Riemannian case, a different proof of both nonnegativity and rigidity was given by Cecchini–Zeidler <cit.>, whose methods are also able to cover cases of manifolds that need not be complete.
Subsequently, these techniques have been extended to the case of asymptotically hyperbolic ends by Chai–Wan <cit.>.
In this paper, by building on <cit.>, we give a different proof of the inequality `≥ 0' for spin initial data sets satisfying the DEC, and we prove the appropriate rigidity statement in all dimensions for spin initial data sets.
Our proof of the inequality differs significantly from that of Bartnik–Chruściel <cit.>, and our method yields a new quantitative shielding theorem for the DEC that is able to deal with incomplete manifolds. The main difficulty we have to overcome is that, whilst the method in <cit.> allows for incomplete manifolds and arbitrary ends, we now face the fact that g is nonlinearly coupled to k. To deal with this, we introduce another independent `time' direction in the spin bundle and we consider a modified connection. As a result, we find that the associated Dirac operator satisfies a Schrödinger–Lichnerowicz formula which allows us to use some of the ideas in <cit.> that enabled dealing with incomplete manifolds and arbitrary ends. We expect that similar tricks may be useful in other contexts.
For initial data sets, the definition of asymptotic flatness involves decay assumptions on both g and k (see <ref>), and moreover there are two asymptotic quantities of interest, the ADM energy and ADM linear momentum (see <cit.> definitions and context). In this setting, the positive mass theorem is as follows.
Let (M^n ≥ 3,g,k) be an asymptotically flat initial data set that satisfies the dominant energy condition μ-|J|≥ 0. If either n≤ 7 or M^n is spin, then ≥ ||.
Theorem <ref> was first proved for n=3 by Schoen–Yau <cit.>. Later, Witten <cit.> gave a spinor argument for n=3 that was subsequently made rigorous by Parker–Taubes <cit.> and was shown to hold for all n (under a spin assumption). More recently, the theorem was proved for manifolds in the low dimensional range 3≤ n≤ 7 by Eichmair–Huang–Lee–Schoen <cit.>.
The rigidity associated with Theorem <ref> is more subtle.
Let (M^n ≥ 3,g,k) be an asymptotically flat initial data set satisfying μ-|J|≥ 0. Then the following holds:
for 3≤ n≤ 7, if =0, then (M^n,g,k) embeds into Minkowski spacetime as a spacelike hypersurface,
for any n, if (g,k) are assumed to satisfy certain extra decay assumptions on a given end and =|| on that end, then in fact =0.
The proof of (1) was given by Eichmair <cit.> building on the n=3 case that had been done by Schoen–Yau <cit.>. The proof of (1) does not require any extra decay assumptions on g and k.[Eichmair used an extra decay condition on _g(k) for n=3, but this extra condition on _g(k) when n=3 can be lifted <cit.>.]
More recently, we note that Huang–Lee <cit.> have given an elegant argument for (1) which uses slightly stronger assumptions but which holds for all n.
Under a spin assumption, (2) was proved by Beig–Chruściel <cit.> for n=3, and by Chruściel–Maerten <cit.> for all n.
Later, using ideas from <cit.>, Huang–Lee <cit.> gave an elegant proof of (2) that did not involve a spin assumption.
In another paper, Huang–Lee <cit.> constructed 9-dimensional asymptotically flat initial data sets satisfying μ-|J|≥0, =||, ≠ 0 but which do not satisfy the `extra decay' assumption, which shows that some form of extra decay assumption is necessary for (2). It is not known to us whether 9 is a critical dimension for constructing such examples.
In view of Theorems <ref> and <ref>, it is natural to expect the following.
Let (M^n≥ 3,g,k) be a complete initial data set that contains at least one asymptotically flat end ℰ and that satisfies μ-|J|≥ 0. Then _ℰ≥ |_ℰ|.
If M^n is spin, this was shown by Bartnik–Chruściel <cit.>.
In view of the rigidity in Theorems <ref> and <ref>, the following is also natural.
Let (M^n ≥ 3,g,k) be an initial data set that contains at least one asymptotically flat end ℰ and that satisfies μ-|J|≥ 0. Then the following holds:
(1) if _ℰ=0 then (M^n,g,k) embeds as a spacelike hypersurface in Minkowski space,
(2) if (g,k) on ℰ has an extra decay and _ℰ=|_ℰ|, then _ℰ=0.
In view of the so-called quantitative shielding theorem in <cit.>, it is also natural to pose the following.
Let (M^n,g,k) be a complete initial data set that contains at least one asymptotically flat end ℰ and a dominant energy shield as in Definition <ref>. Then _ℰ>|_ℰ|.
Let (M^n,g,k) be a Riemannian manifold, not assumed to be complete. We say that (M,g,k) contains a dominant energy shield U_0 ⊃ U_1 ⊃ U_2 if U_0, U_1, and U_2 are open subsets of M such that
U_0 ⊃U_1, U_1⊃U_2,
the closure of U_0 in (M,g) is a complete manifold with compact boundary,
and we have the following:
* μ-|J|≥ 0 on U_0,
* μ-|J| ≥σ n(n-1) on U_1∖ U_2,
* the mean curvature _∂U̅_0 on ∂U̅_0 and the symmetric two tensor k satisfy
_∂U̅_0 - (2/(n-1)) | k(ν, )|_∂U̅_0| > -Ψ(d,l).
Here, Ψ(d,l) is the constant defined as
Ψ(d,l) (2/n) ·λ(d)/(1-l λ(d)) if d < π/(√(σ) n) and l < 1/λ(d),
∞ otherwise,
where d _g(∂ U_2, ∂ U_1), l _g(∂ U_1, ∂ U_0), and
λ(d) (√(σ) n/2) tan(√(σ) n d/2).
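For orientation, here is a sample evaluation of these constants (the numerical values are purely illustrative): for n = 3, σ = 1, d = π/6 and l = 1/3, we have d < π/(√(σ) n) = π/3 and λ(d) = (3/2)tan(π/4) = 3/2, hence l < 1/λ(d) = 2/3 and
Ψ(d,l) = (2/3)·(3/2)/(1-(1/3)·(3/2)) = 2,
so that, for these sample values, condition (3) only requires _∂U̅_0 - | k(ν, )|_∂U̅_0| > -2 on ∂U̅_0.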
Observe that in (3) of Definition <ref>, the assumption on k involves |k(ν,-)|_∂ X, and not the trace tr_∂ Xk which appears in the formula for the null mean curvature θ^+= + _∂ Xk.
Naively, one might have expected (3) to involve θ^+, since θ^+ is the natural generalization of H.
We impose this somewhat mysterious boundary condition because the term |k(ν,-)|_∂ X occurs as a natural bound for the boundary term that appears in our spinorial approach to the positive mass theorem with shields, see <ref>.
This may be a technical artifact of our method.
However, it turns out that |k(ν,-)|_∂ X also figures in other treatments of initial data sets with boundary (see, e.g., Definition 2.5 of <cit.> and Definition 2.3 of <cit.>), and, moreover, there is an indication that this term is to be expected from the perspective of the boundary Hamiltonian in general relativity (see the discussion in <cit.>).
Here, using the aforementioned modification of the Callias operator method in <cit.>, we prove Conjecture <ref> under a spin and compactness assumption.
Let (M^n≥ 3,g,k) be an initial data set that contains an asymptotically flat end ℰ and a dominant energy shield as in <ref>. Assume that U_0 is spin and that ℰ\ U_0 is compact. Then _ℰ> |_ℰ|.
The proof of Theorem <ref> also leads to the following.
Let (ℰ,g,k) be an asymptotically flat initial data end of dimension n ≥ 3 such that _ℰ < |_ℰ|.
Then there exists a constant R = R(ℰ,g,k) such that the following holds: If (M,g,k) is an n-dimensional initial data set without boundary that contains (ℰ, g, k) as an open subset and 𝒰⊆ M denotes the open neighborhood of radius R around ℰ in M, then at least one of the following conditions must be violated:
* 𝒰 is metrically complete,
* μ-|J|≥ 0 on 𝒰,
* 𝒰 is spin.
Theorem <ref> yields a new proof that <ref> holds in the spin setting:
Let (M^n ≥ 3,g,k) be a complete initial data set that satisfies μ-|J|≥ 0 and contains at least one distinguished asymptotically flat end ℰ. Assume that M^n is spin. Then _ℰ≥|_ℰ|.
It is worth noting that <ref> can accommodate lower regularity. As in the recent work of Lee–Lesourd–Unger <cit.>, we can allow for the presence of corners (reminiscent of Miao <cit.> and Shi–Tam <cit.>), and we can also allow for distributional curvature as in Lee–LeFloch <cit.>. The required modifications are fairly standard so we do not pursue this here.
Finally, we show the following rigidity statement.
Let (M^n ≥ 3,g,k) be a complete spin initial data set that satisfies μ-|J|≥ 0 and that contains at least one distinguished asymptotically flat end ℰ. Then the following holds:
* if _ℰ=0, then (M^n,g,k) embeds as a spacelike hypersurface in Minkowski space,
* if (g,k) on ℰ has extra decay as in <ref> and _ℰ=|_ℰ|, then _ℰ=0.
The paper is organized as follows. In <ref> we introduce the basic definitions and notation. In Section <ref> we introduce the relevant spin bundle and compute some of the key formulae that underlie the remaining arguments. In Section <ref> we prove Theorems <ref> and <ref>. Finally in Section <ref> we prove the rigidity Theorem <ref>. Useful weighted Poincaré inequalities for manifolds with boundary are included in Appendix <ref>.
§.§ Acknowledgements
The authors thank Carla Cederbaum, Demetre Kazaras, Dan Lee, and Ryan Unger for insightful discussions.
§ NOTATIONS AND DEFINITIONS
Let a nonnegative integer k and a constant τ∈ be fixed.
Let _d(0)⊂^n be the closed disc of radius d around the origin.
For a function f on ^n∖_d(0), we write f=_k(|x|^-τ) if the function |x|^τ+|β||∂^β f(x)| is bounded for each multi-index 0 ≤|β|≤ k.
Moreover, for λ∈ (0,1), we write f=_k+λ(|x|^-τ) if f=_k(|x|^-τ) and the function
|x|^λ+|β|+τ|∂^β f(x)-∂^β f(y)|/|x-y|^λ, x,y∈^n∖_d(0), 0<|x-y|≤|x|/2
is bounded for every multi-index |β|=k.
Let M^n be a smooth n-dimensional manifold, and fix a smooth background metric g which is identically Euclidean on a distinguished end ℰ⊂ M, where ℰ is diffeomorphic to ℝ^n \_d(0), and where _d(0) is the closed disc of radius d around the origin.
For a function f defined on ℰ, we write f=_k(|x|^-τ) if there is a constant C such that we have
|∂^i f|≤ C|x|^-τ-i
for 0≤ i≤ k. Moreover, for λ∈ (0,1) we write f=_k+λ(|x|^-τ) if f=_k(|x|^-τ) and if there exists a constant C such that
|z-y|≤ |x(y)|/2⇒ |∂^kf(z)-∂^kf(y)|≤ C|z-y|^λ |x|^-τ-k-λ.
We will make the following standard decay assumption on ℰ.
We say that an initial data set (M^n,g,k) has an asymptotically flat end ℰ if (g,k) is locally ^2,α×^1,α for some 0<α<1, and there exists an asymptotically Euclidean coordinate chart for the subset ℰ⊂ M such that (g,k) on M∩ℰ satisfies
g_ij =δ_ij+_2(|x|^-τ)
k_ij = _1(|x|^-τ-1)
for some τ>n-2/2, and also (μ,J)∈^1(M∩ℰ).
We refer to this τ as the asymptotic decay rate of (g,k).
In this case, the ADM energy-momentum (,) is well-defined. We refer the reader to <cit.> for details and references.
Let (M^n,g,k) be an initial data set.
We say that an open subset ℰ⊂ M is an asymptotically flat end of order τ>n-2/2 if (g,k) is locally ^2,α×^1,α for some 0<α<1, μ and J are integrable in ℰ, and there exists a diffeomorphism Φℰ^n ∖_d(0) for some d > 0 such that, with respect to the induced coordinate chart x=(x^1,…,x^n), we have
g_ij-δ_ij=_2(|x|^-τ) ,
k_ij=_1(|x|^-τ-1)
for all 1 ≤ i,j ≤ n.
We say that an initial data set (M,g,k) is asymptotically flat if there exists a bounded set K⊂ M such that M∖ K is a non-empty disjoint union of finitely many ends.
Following the convention of <cit.>, we define the ADM energy-momentum (_ℰ,_ℰ) of an asymptotically flat end ℰ as
_ℰ lim_r→∞1/2(n-1)ω_n-1∫_S_r^n-1∑_i,j=1^n(∂_ig_ij-∂_jg_ii)ν^j S̅
(_ℰ)_i lim_r→∞1/(n-1)ω_n-1∫_S_r^n-1∑_j=1^n(k_ij-(_gk)g_ij)ν^j S̅
for i=1,…,n, where ω_n-1 is the volume of the (n-1)-dimensional unit sphere, _r^n-1 is the (n-1)-sphere of radius r with respect to the chosen asymptotically flat coordinates x=(x^1,…,x^n), ν^j=x^j/|x|, and S̅ is the volume element on _r^n-1 with respect to the background Euclidean metric.
The numbers (_ℰ)_i are understood to represent the covector at infinity _ℰ = ∑_i=1^n (_ℰ)_i x^i in ℰ in the coordinate chart (x^1, …, x^n).
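For orientation, we recall a classical example (not needed for the arguments below): the time-symmetric Schwarzschild data on ℝ^3∖{0}, given in isotropic coordinates by g_ij = (1+m/(2|x|))^4 δ_ij and k_ij = 0, form an asymptotically flat end of order τ = 1, and the formulas above yield ADM energy equal to m and vanishing ADM momentum.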
For Part <ref> of <ref>, we will use the following extra decay assumption.
Let R>0 and let (g,k) be initial data on ℰ≃ℝ^n\_R(0).
We say that (g,k) has extra decay if
g_ij-δ_ij=_3+λ(|x|^-α) , k_ij=_2+λ(|x|^-1-α) ,
J^i=_1+λ(|x|^-n-ϵ) , μ=_1+λ(|x|^-n-ϵ)
for some ϵ>0 and 0<λ<1, where α > 1/2 if n=3 and α > n-3 if n≥ 4.
Let (M,g,k) be an n-dimensional asymptotically flat initial data set with compact boundary.
Let ρ be a smooth positive function such that ρ=|x| outside a disk in each asymptotically flat end with respect to some asymptotically flat coordinate chart.
Moreover, we assume that ρ remains uniformly bounded away from 0 and ∞ outside of the asymptotically flat ends.
Let (E,∇) be a Hermitian vector bundle with metric connection on M.
For p≥ 1 and δ∈, we define the weighted Lebesgue space ^p_δ(M,E) as the space of sections u∈^p_(M,E) such that the weighted norm
u_^p_δ(M,E)(∫_M|u|^pρ^-δ p-n)^1/p
is finite.
For a nonnegative integer k, we define the weighted Sobolev space ^k,p_δ(M,E) as the space of sections u∈^k,p_(M,E) such that the weighted Sobolev norm
u_^k,p_δ(M,E)∑_i=0^k∇^iu_^p_δ-i(M,E)
is finite.
When p = 2, we use the usual notation ^k_δ(M,E) ^2,k_δ(M,E).
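To get a feeling for this weighting convention, note (a standard observation, recorded only for orientation) that on an asymptotically flat end a section whose pointwise norm behaves like |x|^-a near infinity lies in ^p_δ if and only if a > -δ; in particular, nonzero constant functions on the end lie in ^p_δ exactly when δ > 0, and the decay rate -q = -(n-2)/2 used later corresponds to decay strictly faster than |x|^-(n-2)/2.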
§ DIRAC–WITTEN OPERATORS WITH POTENTIAL
We consider the following setup: Let (M,g,k) be an initial data set possibly with boundary, and let ψ M → be a potential.
We consider the vector bundle ^∗ M ⊕^2, where we endow ^2 = ⟨ y_0, y_1⟩ with the negative definite bundle metric - y_0^2 - y_1^2 so that ^∗ M ⊕^2 is endowed with a bundle metric of signature (n,2).
We assume furthermore that M is a spin manifold and let S → M denote the spinor bundle associated to ^∗ M ⊕^2 with respect to some faithful representation _n,2↪(Σ).
Let ϵ_i = (y_i) S → S be the Clifford action of y_i.
We define modified connections on S by the following formulas
_ξ ∇_ξ + 1/2(k(ξ, )) ϵ_0,
_ξ^ψ ∇_ξ + 1/2(k(ξ, )) ϵ_0 - ψ/n(ξ) ϵ_1 = _ξ - ψ/n(ξ) ϵ_1,
and consider the associated Dirac operators
∑_i=1^n (e^i) _e_i = - 1/2(k) ϵ_0,
^ψ ∑_i=1^n (e^i) _e_i^ψ = - 1/2(k) ϵ_0 + ψϵ_1 = + ψϵ_1,
where = ∑_i=1^n (e^i) ∇_e_i is the unmodified Dirac operator.
The operator is the usual Dirac–Witten <cit.> operator of the initial data set (M,g,k) and ^ψ is the Dirac–Witten operator augmented with a Callias potential (the ψϵ_1 part).
The modified objects and behave as the restrictions of the corresponding data from a hypothetical spacetime in which the initial data set is embedded.
In a similar vein, the connection ^ψ and the corresponding Dirac operator ^ψ may be thought of as arising from M being embedded as a spacelike submanifold of codimension two with trivial normal bundle and second fundamental form k ⊕ -2 ψ/n g inside a pseudo-Riemannian manifold of signature (n,2).
However, from the point of view of our applications only the first timelike normal dimension is of physical and geometric relevance, whereas the second additional one involving the potential ψ is only used as a technical tool.
Direct calculation and the Schrödinger–Lichnerowicz formula for 𝒟 (see e.g. <cit.>) yield the following formula:
(^ψ)^2 = (^ψ)^∗^ψ
+ 1/4( + |(k)|^2 - | k |^2)
+ 1/2((k) - (k))ϵ_0
+ (n-1)/n(ψ^2 + (ψ) ϵ_1).
The energy density μ = 1/2( + (k)^2 - | k |^2) and momentum density J = (k) - (k), see <cit.>, both appear in the formula above.
It follows that, on sections compactly supported in the interior, we obtain the estimate
(^ψ)^2 ≥1/2(μ - | J |) + (n-1)/n(ψ^2 - |ψ|).
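For the reader's convenience, we spell out the elementary pointwise bounds behind this step (this is the standard mechanism and uses only that Clifford multiplication by a unit covector, as well as ϵ_0 and ϵ_1, acts isometrically on S): for every spinor u one has
|⟨ u, (J)ϵ_0 u ⟩| ≤ | J | | u |^2 and |⟨ u, (ψ)ϵ_1 u ⟩| ≤ |ψ| | u |^2,
so that, after discarding the nonnegative term (^ψ)^∗^ψ, the momentum term is absorbed into μ - |J| and the potential term into ψ^2 - |ψ|.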
Note that, by definition, the dominant energy condition (DEC) says that μ - |J| ≥ 0.
This already includes the right normalization (n-1)/n for the terms corresponding to the potential and no use of the Friedrich inequality and Penrose operator as in <cit.> is required.
This is because we write the formula using the modified connection ^ψ.
Integrating <ref> and partial integration yields:
For every compactly supported smooth section u,
∫_M |^ψ u |^2 = ∫_M ( |^ψ u |^2 + 1/4( + |(k)|^2 - | k |^2) | u |^2 )
+ ∫_M ( 1/2⟨ u, ((k) - (k))ϵ_0 u ⟩ + (n-1)/n⟨ u, (ψ^2 + (ψ) ϵ_1) u ⟩)
+ ∫_∂ M( (n-1)/2| u |^2 - ⟨ u, 𝒟^∂ u ⟩)
+ ∫_∂ M( 1/2⟨ u, ((k(ν, )) - (k) (ν^♭)) ϵ_0 u ⟩ + (n-1)/n⟨ u, ψ(ν^♭) ϵ_1 u ⟩) .
Here denotes the mean curvature of the boundary, ^∂ the canonical boundary Dirac operator, and ν the interior normal field; compare <cit.> for the conventions.
Assume that u satisfies the boundary condition u = χ_0 u = (ν^♭) ϵ_0 u.
Then ⟨ u, ^∂ u ⟩ = 0 and ⟨ u, (ν^♭) ϵ_1 u ⟩ = ⟨ u, χ_1 u ⟩ = 0 because χ_0 anti-commutes with ^∂ and χ_1.
Thus the boundary term appearing in <ref> is given by
∫_∂ M1/2( (n-1) - _∂ M(k) ) | u |^2 ,
compare <cit.>.
Notably, the potential ψ completely drops out on the boundary, and thus cannot be used to dominate the other terms.
However, (n-1) - _∂ M(k) ≥ 0 corresponds to the boundary being weakly inner trapped in the sense of <cit.>.
Assume that u satisfies the boundary condition u = χ_1 u = (ν^♭) ϵ_1 u (this corresponds to the correct boundary condition for the potential term).
Then ⟨ u, ^∂ u ⟩ = 0 and ⟨ u, (ν^♭) ϵ_0 u ⟩ = ⟨ u, χ_0 u ⟩ = 0 because χ_1 anti-commutes with ^∂ and χ_0.
Thus the boundary term appearing in <ref> is given by
∫_∂ M( (n-1)/2( + (2/n)ψ) | u |^2 + 1/2⟨ u, (k(ν, )) ϵ_0 u ⟩) .
Moreover, letting e_1, …, e_n-1 be a local orthonormal frame of ∂ M, we obtain
⟨ u, (k(ν, )) ϵ_0 u ⟩ = k(ν, ν) ⟨ u, χ_0 u ⟩_=0 + ⟨ u, (∑_i=1^n-1 k(ν, e_i) e^i) ϵ_0 u ⟩
≥ - (∑_i=1^n-1 k(ν, e_i)^2)^1/2| u |^2= -| k(ν, )|_∂ M|| u |^2.
Thus (<ref>) can be bounded below by
∫_∂ M( n-12( + 2nψ) - | k(ν, )|_∂ M|) | u |^2 .
§ PROOF OF THEOREMS <REF> AND <REF>
We use the setup and notation of <ref>.
In order to estimate the quantity _ℰ - |_ℰ|, we make use of asymptotically constant spinors, which we will now define.
Let (e_1,…,e_n) be a tangent orthonormal frame in ℰ.
Note that it lifts to a section of the principal (n)-bundle over ℰ and thus induces a trivialization of S over ℰ.
We say that a spinor u_0∈ C^∞(ℰ,S|_ℰ) is constant with respect to the orthonormal frame (e_1,…,e_n) if it is constant with respect to this induced trivialization.
Moreover, we say that an orthonormal frame (e_1,…,e_n) is asymptotically constant if there exist asymptotically flat coordinates x^1, …, x^n such that e_i=∑_j=1^ne_i^j∂/∂ x^j satisfies e^j_i - δ_ij∈^2_-τ, where τ is the order of the end ℰ.
We say that a spinor u∈^1_(X,S) is asymptotically constant in ℰ if there exists a spinor u_0∈ C^∞(ℰ,S|_ℰ) which is constant with respect to an asymptotically constant orthonormal frame and such that u|_ℰ - u_0 ∈^1_-q(ℰ,S), where q (n-2)/2.
In this case, the norm at infinity of u in ℰ is defined as |u|_ℰ_∞ |u_0| ∈ [0,∞).
Moreover, for a co-vector P = ∑_i=1^n P_i e^i at infinity in ℰ, where P_i ∈ℝ, we define ⟨ u,(P)ϵ_0 u⟩_ℰ_∞⟨ u_0,(P)ϵ_0 u_0⟩∈.
Note that an asymptotically flat orthonormal frame in ℰ can be obtained by orthonormalizing the coordinate frame of an asymptotically flat coordinate chart.
Note also that, for an orthonormal frame (e_1,…,e_n) in ℰ and a vector P=(P_1,…,P_n), we can choose a spinor u_0 which is constant which respect to this orthonormal frame and satisfies |u_0|=1 and ⟨ u_0,(P)ϵ_0u_0⟩=-|P|.
Let (X,g,k) be a complete connected asymptotically flat initial data set with compact boundary and let ψ∈^∞(X, ).
Let u ∈^1_(X,S; χ_1) be a section that is asymptotically constant in each end ℰ.
Then
1/2(n-1)ω_n-1∑_ℰ( _ℰ| u |_ℰ_∞^2 + ⟨ u, (_ℰ) ϵ_0 u ⟩_ℰ_∞)
+ ^ψ u_^2(X)^2
≥^ψ u_^2(X)^2 + ∫_X ζ^ψ| u |^2 + ∫_∂ Xη^ψ| u |^2,
where (_ℰ, _ℰ) is the ADM energy-momentum of the end ℰ and
ζ^ψ = 1/2(μ - | J |) + (n-1)/n(ψ^2 - |ψ|) and η^ψ = (n-1)/2( + (2/n)ψ) - | k(ν, )|_∂ X|.
Combine <ref> and the proof of <cit.>.
We set q (n-2)/2 > 0 and will use -q < 0 as the decay rate for the weighted Sobolev spaces we consider in the following proposition and for the remainder of this section.
Let (X,g,k) be a complete connected asymptotically flat initial data set with compact boundary ∂ X.
Fix a connected codimension zero submanifold X_0 ⊆ X with compact boundary ∂ X_0 which contains at least one asymptotically flat end of X.
Then there exists a constant c = c(X_0,g,k) > 0, depending only on the data on X_0, such that the following holds:
Let ψ∈^∞(X,) be a potential such that η^ψ≥ 0 on ∂ X and write ζ^ψ = ζ_+^ψ - ζ_-^ψ with ζ_±^ψ≥ 0, where ζ^ψ and η^ψ are defined as in eq:theta_eta.
Suppose that (ζ^ψ_-) ⊆ X_0 and that the following holds:
* ζ^ψ_- _^∞_-2(X_0)≤ c/2,
* ψ^2 / n^2 _^∞_-2(X_0)≤ c/2.
In this case
^ψ^1_-q(X,S;χ_1) →^2(X,S)
is an isomorphism and
^ψ u_^2(X)^2 ≥ c u ^2_^2_-q(X_0)
for all u ∈^1_-q(X,S; χ_1).
By <ref>, there is a constant constant c_0 = c_0(X_0,g,k) > 0 such that c_0 v_^2_-q(X_0)^2 ≤ v ^2_^2(X_0) for all v ∈_-q^1(X_0,S).
We now set c = c_0 / 2.
Then for all u ∈^1_-q(X,S;χ_1) <ref> implies
^ψ u ^2_^2(X) ≥^ψ u ^2_^2(X) + ∫_Xζ^ψ| u |^2 + ∫_∂ Xη^ψ| u |^2_≥ 0
≥^ψ u ^2_^2(X) - ∫_X_0ζ^ψ_- | u |^2
≥^ψ u ^2_^2(X_0) - ∫_X_0ζ^ψ_- | u |^2
≥ u ^2_^2(X_0) - ψn u_^2(X_0)^2 - ∫_X_0ζ^ψ_- | u |^2
≥ c_0 u ^2_^2_-q(X_0) - ψ^2n^2_^∞_-2(X_0) u _^2_-q(X_0)^2 - ζ^ψ_-_^∞_-2(X_0)u _^2_-q(X_0)^2
≥ (c_0 - c/2 - c/2) u ^2_^2_-q(X_0) = c u ^2_^2_-q(X_0).
So we have established eq:partial_injectivity_estimate.
A similar argument as in the proof of <cit.> shows that ^ψ is a Fredholm operator with (^ψ)≤(^ψ).
Thus, to see that ^ψ is an isomorphism, it suffices to show that it has trivial kernel.
Indeed, if u ∈^1_-q(X,S;χ_1) with ^ψ u = 0, then eq:partial_injectivity_estimate implies that u_^2_-q(X_0) = 0. In particular,
u vanishes on X_0.
In this case, the first two lines in the previous estimate imply that ^ψ u _^2(X) = 0 and hence ^ψ u = 0 on all of X.
Finally, <ref> implies that u = 0.
We are now ready to prove <ref>.
Let (M^n,g,k) be an initial data set of dimension n ≥ 3 containing an asymptotically flat end ℰ and a dominant energy shield U_2⊂ U_1⊂ U_0 as in <ref>.
Moreover, assume that U_0 is spin and that ℰ\ U_0 is compact so that ℰ is the only asymptotically flat end contained in U_0.
Recall that U̅_0 is a complete manifold with compact boundary.
Let ψ∈^∞_(U̅_0,) be a potential such that ψ|_U_2=0, ζ^ψ≥0 in U̅_0, and η^ψ>0 in ∂U̅_0, where ζ^ψ and η^ψ are defined by (<ref>).
Note that the existence of a potential ψ satisfying such properties follows from <cit.> and Conditions (1)–(3) of <ref>.
Let u_00∈^∞(U̅_0,S) be a spinor which is asymptotically constant in ℰ and such that
| u_00|_ℰ_∞ = 1, ⟨ u_00, (_ℰ) ϵ_0 u_00⟩_ℰ_∞ = - |_ℰ|.
Using <ref>, pick a spinor v∈^1_-q(U̅_0,S;χ_1) such that
^ψv=-(u_00)=-^ψ(u_00).
Then u u_00+v is an asymptotically constant spinor which is asymptotic to u_00 in ℰ and satisfies ^ψu=0.
Since ζ^ψ≥0 in U̅_0, and η^ψ>0 in ∂U̅_0, by <ref>
1/2(n-1)ω_n-1( _ℰ - |_ℰ|)
≥^ψ u_^2(U̅_0)^2 + ∫_U̅_0ζ^ψ| u |^2
+∫_∂U̅_0η^ψ| u |^2>0
which concludes the proof.
We now turn to the proof of <ref>, the main steps of which are contained in the following lemmas.
We work in the following setup which is designed to prove <ref> by contrapositive.
Let (M,g,k) be a connected initial data set and ℰ⊆ M a fixed asymptotically flat end.
* Fix a connected codimension zero submanifold X_0 ⊆ M such that (X_0, g) is complete and has a single asymptotically flat end coinciding with ℰ at infinity.
We also fix a collar neighborhood K ≅∂ X_0 × [-1,0] of ∂ X_0 inside X_0 and a smooth cut-off function χ X_0 → [0,1] such that χ = 0 on X_0 ∖ K and χ = 1 near ∂ X_0.
* Now given a distance r > 0, we assume that there exists another connected codimension zero submanifold X_0 ⊂ X_r ⊆ M such that _g(∂ X_0, ∂ X_r) > r, (X_r, g) is a complete spin manifold such that μ - | J |≥ 0 holds on X_r, and X_r ∖ X_0 is relatively compact.
Given a distance r > 0 and assuming we are in <ref>, we can find a potential ψ_r ∈^∞(X_r,[0,∞)) such that the following holds:
ψ_r = χ/r on X_0,
ψ_r(x) < 4/r if _g(x, X_0)< r/2,
ψ_r^2 - |ψ_r| ≥0 on X_r ∖ X_0,
η^ψ_r ≥0 on ∂ X_r,
where η^ψ_r is defined as in eq:theta_eta.
Since the function ψ_r is already determined on X_0 by eq:potential_on_end, we only need to extend it to X_r while satisfying eq:potential_upper_bound,eq:riccati_nonneg,eq:boundary_nonneg.
We can achieve this by taking a smooth and appropriately cut-off version of the function x ↦ 1/(r-t) with t = _g(∂ X_0, x) for x ∈ X_r ∖ X_0.
The details of this construction were essentially worked out in <cit.>.
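As a sanity check, here is the elementary computation behind eq:riccati_nonneg for this model function (the actual construction additionally requires smoothing and cutting off near ∂ X_0 and ∂ X_r): for ψ = 1/(r-t) with t = _g(∂ X_0, x), the derivative with respect to t equals 1/(r-t)^2 = ψ^2, and since the distance function t is 1-Lipschitz, the differential of ψ has pointwise norm at most ψ^2 wherever it exists; this is exactly the inequality ψ^2 - |ψ| ≥ 0 demanded on X_r ∖ X_0.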
Given <ref>, there exist constants c = c(X_0,g,k) and r_0 = r_0(X_0,g,k,χ) > 0, depending only on the data on X_0, such that if r > r_0 and ψ_r ∈^∞(X_r, [0,∞)) satisfies eq:potential_on_end,eq:riccati_nonneg,eq:boundary_nonneg, then the following holds:
The operator ^ψ_r^1_-q(X_r, S; χ_1) →^2(X_r, S) is an isomorphism and
∀ v ∈^1_-q(X_r, S; χ_1) ^ψ_r v _^2(X_r)^2 ≥ c v ^2_^2_-q(X_0).
Let c = c(X_0,g,k) > 0 be the constant from <ref>.
Since μ - | J |≥ 0 on X_r and using ψ_r = 0 on X_0 ∖ K together with eq:riccati_nonneg, we obtain that
ζ^ψ_r1/2(μ - | J |) + n-1/n(ψ_r^2 - |ψ_r|) ≥
0 on X_r ∖ K,
- 1/r|χ| on K.
Together with eq:potential_on_end and since χ is supported in the compact set K, we can choose a constant r_0 = r_0(X_0, g, k, K, χ) > 0 such that the conditions
* ζ^ψ_r_- _^∞_-2(X_0)≤1/rρ^2 χ_^∞(K)≤ c/2,
* ψ_r^2 / n^2 _^∞_-2(X_0)≤1/n^2 r^2ρ^2 _^∞(K)≤ c/2,
hold for all r > r_0.
Thus <ref> implies that for all r > r_0 that ^ψ_r^1_-q(X_r, S; χ_1) →^2(X_r, S) is an isomorphism and that eq:applied_partial_injectivity_bound holds.
Given <ref>, let c = c(X_0,g,k) and r_0 = r_0(X_0,g,k,χ) > 0 be the constants obtained from <ref> and let ψ_r ∈^∞(X_r, [0,∞)) satisfy eq:potential_on_end,eq:riccati_nonneg,eq:boundary_nonneg.
Then there exists a further constant C = C(X_0,g,k,χ) > 0, depending only on the data on X_0, such that the following holds:
For each asymptotically constant section u_r of S over X_r such that ^ψ_r u_r =0 and
| u_r |_ℰ_∞ = 1, ⟨ u_r, (_ℰ) ϵ_0 u_r ⟩_ℰ_∞ = - |_ℰ|,
we have the following estimate:
1/2(n-1)ω_n-1( _ℰ - |_ℰ|) ≥^ψ_r u_r_^2(X_r)^2 - C/ru_r^2_^2_-q(K).
We use <ref> and estimate
1/2(n-1)ω_n-1( _ℰ - |_ℰ|) ≥^ψ_r u_r_^2(X_r)^2 + ∫_X_rζ^ψ_r| u_r |^2 + ∫_∂ X_rη^ψ_r| u_r |^2,
≥^ψ_r u_r_^2(X_r)^2 - ∫_Kζ^ψ_r_- |u_r|^2 , (using eq:applied_zeta,eq:boundary_nonneg)
≥^ψ_r u_r_^2(X_r)^2 - ζ^ψ_r_- _^∞_-2(K)u_r^2_^2_-q(K),
≥^ψ_r u_r_^2(X_r)^2 - 1/rχ_^∞_-2(K) u_r^2_^2_-q(K). (using eq:applied_zeta)
Thus we can choose C χ_^∞_-2(K).
Let (ℰ,g,k) be an n-dimensional asymptotically flat initial data end.
We fix a connected codimension zero submanifold X_0 ⊆ℰ and a smooth cut-off function χ X_0 → [0,1] as in <ref> item:setup_X0.
In addition, we fix a section u_00∈^∞(X_0,S) which is asymptotically constant in ℰ and such that
| u_00|_ℰ_∞ = 1, ⟨ u_00, (_ℰ) ϵ_0 u_00⟩_ℰ_∞ = - |_ℰ|.
We now assume that <ref> item:setup_Xr can be realized for all r > 0 and will prove that _ℰ≥|_ℰ|; this is precisely the contrapositive statement of <ref>.
For each r > 0, choose ψ_r X_r → [0, ∞) as in <ref>.
Let c = c(X_0,g,k) and r_0 = r_0(X_0,g,k,χ) > 0 be the constants obtained from <ref>.
Now for each r > r_0, choose a section v_r ∈^1_-q(X_r, S; χ_1) such that
^ψ_r v_r = - (u_00) = - ^ψ_r(u_00).
Then u_r u_00 + v_r is an asymptotically constant section which is asymptotic to u_00 in ℰ and satisfies ^ψ_r u_r = 0.
Then we may estimate
1/2(n-1)ω_n-1 ( _ℰ - |_ℰ |) ≥^ψ_r u_r_^2(X_r)^2 - C/r u_r^2_^2_-q(K) (using eq:applied_energy_momentum_bound)
≥^ψ_r u_r_^2(X_r)^2 - C/r v_r^2_^2_-q(X_0) (u_r = v_r on K)
≥^ψ_r u_r_^2(X_r)^2 - C/r c ^ψ_r v_r^2_^2_-q(X_0) (using eq:applied_partial_injectivity_bound)
= ^ψ_r u_r_^2(X_r)^2 - C/r c u_00^2_^2_-q(X_0). (using eq:solution_condition)
In sum, we obtain
1/2(n-1)ω_n-1( _ℰ - |_ℰ|) ≥^ψ_r u_r_^2(X_r)^2 - C'/r≥ - C'/r,
where C' C/c u_00^2_^2_-q(X_0) is a constant that only depends on objects chosen in advance on X_0 independently of r.
Thus letting r →∞ in eq:final_energy_momentum_estimate proves that _ℰ - |_ℰ|≥ 0, as desired.
Let (M,g,k) be a complete connected n-dimensional spin initial data set which contains a distinguished asymptotically flat end ℰ.
* If _ℰ = |_ℰ|, then there exists a section u of S that satisfies u = 0 and is asymptotically constant in ℰ satisfying
| u |_ℰ_∞ = 1, ⟨ u, (_ℰ) ϵ_0 u ⟩_ℰ_∞ = - |_ℰ| = -_ℰ.
* If _ℰ = 0, then for any asymptotically constant section u_00 on ℰ, one can find a section u with u = 0 that is asymptotic to u_00.
item:parallel_spinor_E_equals_P As in the proof of <ref> above, we fix a connected codimension zero submanifold X_0 ⊆ℰ and a smooth cut-off function χ X_0 → [0,1] as in <ref> item:setup_X0.
In addition, we fix a section u_00∈^∞(X_0,S) which is asymptotically constant in ℰ and such that | u_00|_ℰ_∞ = 1, ⟨ u_00, (_ℰ) ϵ_0 u_00⟩_ℰ_∞ = - |_ℰ| = -_ℰ.
Since M is complete, spin and connected, we can find for each r > r_0, a connected codimension zero submanifold X_0 ⊂ X_r ⊆ M which realizes <ref> item:setup_Xr.
At this point the same argument as in the proof of <ref> above applies and we obtain for each r > 0 potentials ψ_r satisfying eq:potential_on_end,eq:riccati_nonneg,eq:boundary_nonneg,eq:potential_upper_bound and sections u_r = u_00 + v_r ∈^∞(X_r, S; χ_1) (with v_r ∈^1_-q(X_r, S; χ_1)) that satisfy ^ψ_r u_r = 0 and eq:final_energy_momentum_estimate.
But since _ℰ = |_ℰ|, the estimate eq:final_energy_momentum_estimate just says that
^ψ_r u_r_^2(X_r)^2 ≤C'/r,
where C' is a constant independent of r.
Now fix an arbitrary connected codimension zero submanifold X_0 ⊂ X ⊆ M.
Then there exists an r_1 > r_0 such that for each r > r_1, we have X ⊆ X_r and ψ_r_^∞_-1(X)≤ C_X/r, where C_X is a constant depending on the data on X.
Here we have used the fact that ψ_r is compactly supported on X and eq:potential_upper_bound to establish the latter estimate.
Thus <ref> implies that there exist an r_2 > r_1 and a constant c_X > 0, depending on the data on X, such that
∀ r > r_2 ∀ v ∈^1_-q(X_r, S; χ_1) ^ψ_r v _^2(X_r)≥ c_X v _^2_-q(X)
Thus we obtain for r,s > r_1:
v_r - v_s_^2(X) = u_r - u_s_^2(X)
≤^ψ_r u_r - ^ψ_s u_s _^2(X) + 1/n(ψ_r v_r_^2(X) + ψ_s v_s_^2(X))
≤^ψ_r u_r_^2(X) + ^ψ_s u_s _^2(X) + C_X/n( 1/rv_r_^2_-q(X) + 1/sv_s_^2_-q(X))
≤^ψ_r u_r_^2(X) + ^ψ_s u_s _^2(X) + 2 C_X/c_X n u_00_^2_-q(X_0)(1/r + 1/s),
where in the last inequality we have used eq:partial_injectivity_once_again and ^ψ_r v_r = -(u_00).
Together with eq:connection_with_potential_to_zero this proves that v_r - v_s_^2(X)→ 0 as r,s →∞.
Finally, using the weighted Poincaré inequality for on X, this shows that v lim_r →∞ v_r exists in ^1_-q(X,S) and u u_00 + v satisfies u = 0.
Since X was arbitrary, this already shows the existence of the desired section u on all of M. This finishes the proof of item:parallel_spinor_E_equals_P.
Moreover, if _ℰ = 0, then _ℰ = 0 by <ref> and so the same argument as above can be run starting with every asymptotically constant section u_00 supported near the end ℰ because in this case the condition ⟨ u_00, (_ℰ) ϵ_0 u_00⟩_ℰ_∞ = - |_ℰ| is always trivially satisfied.
Thus the statement item:parallel_spinor_E_equals_0 also holds.
§ PROOF OF <REF>
The proof of <ref> is essentially based on fundamental ideas from <cit.> and <cit.>.
The key ingredient is the following construction, the so-called Killing development, which enables us to find an ambient Lorentzian manifold that we can eventually identify with Minkowski space.
Given a Riemannian manifold (M,g) and a pair (N,Y), where N M → is a smooth function and Y ∈Ω^1(M) a 1-form, we may define
M̅ = ℝ× M, g̅ = _2^∗(|Y|^2_g - N^2) dx_0^2 + _2^∗ Y ⊗ dx_0 + dx_0⊗_2^∗ Y + _2^∗ g.
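A simple special case may help to fix ideas (it is only an illustration): for N ≡ 1 and Y ≡ 0, the Killing development is the product spacetime ℝ× M with metric -dx_0^2 + g; in particular, if (M,g) is the flat ℝ^n, one recovers the (n+1)-dimensional Minkowski spacetime. In general, ∂_x_0 is the timelike Killing vector field exhibited in the proof below.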
We first observe that under a suitable uniformity assumption and completeness of the Riemannian manifold M, the Killing development is a geodesically complete Lorentzian manifold.
Let (N,Y) be a scalar, 1-form pair on a complete Riemannian manifold (M,g) such that N^2 - |Y|^2_g ≥ c for some c > 0.
Then the Killing development eq:Killing_development is a geodesically complete Lorentzian manifold.
We first show that g̅ is a Lorentzian metric.
To this end, let ν ≔ ∂_x_0 - Y^♯, where Y^♯ is the vector field on M satisfying g(Y^♯, ξ) = Y(ξ) for all vector fields ξ on M.
Then g̅(ν, ξ) = 0 for all vector fields ξ on M and g̅(ν,ν) = g̅(∂_x_0, ∂_x_0 - Y^♯) = |Y|^2 - N^2 - |Y|^2 = -N^2 ≤ -c < 0.
Since g̅ restricts to the Riemannian metric g on M, this shows that g̅ is Lorentzian.
Moreover, by construction, translation along the u direction is an isometry of g̅.
Thus the vector field ∂_x_0 is a Killing vector field.
To prove geodesic completeness, it suffices to show that, for every g̅-geodesic c [0,r) →ℳ defined on a bounded half-open interval, there exists a compact subset K ⊆ℳ of the tangent bundle such that c'(t) ∈ K for all t ∈ [0,r).
Because then, by standard ODE theory, the geodesic c can be extended beyond the interval [0,r) and so there exists no geodesic which stops in finite time.
To this end, if we are given such a geodesic c : [0,r) →ℳ, we decompose it as c(t) = (α(t), γ(t)) with α(t) ∈ℝ and γ(t) ∈ M.
Then we can write c'(t) = α'(t) ⊕γ'(t) with α'(t) ∈ T_α(t)ℝ = ℝ and γ'(t) ∈ T_γ(t) M.
Since c is a geodesic, there exists a constant k_1 ∈ℝ such that
k_1 = g̅(c'(t), c'(t)) = (|Y|^2_g - N^2) α'(t)^2 + 2 α'(t) Y(γ'(t)) + g(γ'(t), γ'(t))
for all t ∈ [0, r).
Moreover, since ∂_x_0 is a Killing field, there exists a constant k_2 ∈ℝ such that
k_2 = g̅(∂_x_0, c'(t)) = (|Y|^2_g - N^2) α'(t) + Y(γ'(t))
for all t ∈ [0,r).
Combining eq:geodesic_constant_speed,eq:killing_component_constant, we derive that
0 ≤ g(γ'(t), γ'(t)) = k_1 - (N^2 - |Y|^2_g) α'(t)^2 - 2 k_2 α'(t) ≤ k_1 -c α'(t)^2 - 2 k_2 α'(t)
for all t ∈ [0,r).
Since the right-hand side tends to -∞ as |α'(t)| →∞, it follows that a = sup_t ∈ [0,r) |α'(t)| < ∞.
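For convenience we record the elementary step behind this bound: the preceding display says that c α'(t)^2 + 2 k_2 α'(t) - k_1 ≤ 0 for all t ∈ [0,r), and solving this quadratic inequality (whose discriminant k_2^2 + c k_1 is automatically non-negative because a real solution exists) gives the explicit estimate
|α'(t)| ≤ ( |k_2| + √(k_2^2 + c k_1) )/c ,
so that a is controlled by a constant depending only on k_1, k_2 and c.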
But then b = sup_t ∈ [0,r) |γ'(t)|_g ≤√(|k_1| + 2 a |k_2|) < ∞.
It follows that α(t) ∈ K_1 ≔ [α(0) - ar, α(0) + ar] and γ(t) ∈ K_2 ≔ B̄_br(γ(0)) for all t ∈ [0,r).
Since (M,g) is a complete Riemannian manifold, the set K_2 is compact.
Thus we conclude that c'(t) ∈ K for all t ∈ [0,r), where K is the compact subset of the tangent bundle defined as
K ≔ { (η_s, ξ_x) ∈ Tℝ× TM | s ∈ K_1, x ∈ K_2, |η_s| ≤ a, |ξ_x|_g ≤ b}.
Given a suitable section u of a spinor bundle S → M, where (M,g) is a spin manifold and S is as in <ref>, we will define
N= ⟨ u,u ⟩ and Y(ξ)=⟨(ξ^♭) ϵ_0 u,u ⟩
and consider the corresponding Killing development eq:Killing_development.
It turns out that if the spinor u is -parallel, then the initial data set (M,g,k) embeds into the spacetime (M̅ = ℝ× M, g̅) as {x_0}× M for arbitrary x_0 ∈ℝ (in particular, this means that the second fundamental form of {x_0}× M inside M̅ with respect to g̅ agrees with k); see <cit.>.
Suppose that u = 0 and let (N,Y) be as in defining.pair.
Then ∇(N^2 - |Y|^2) = 0, that is, N^2 - |Y|^2 is constant.
Note that u = 0 means that ∇_ξ u = - 1/2(k(ξ,)) ϵ_0 u for all vector fields ξ by definition of .
Then
(∇_ξ Y)(η) = ∇_ξ (Y(η)) - Y(∇_ξη)
= ∇_ξ⟨(η^♭) ϵ_0 u, u ⟩ - ⟨((∇_ξη)^♭) ϵ_0 u, u ⟩
= -1/2( ⟨(η^♭) ϵ_0 (k(ξ, )) ϵ_0 u, u⟩ + ⟨(η^♭) ϵ_0 u, (k(ξ, )) ϵ_0 u ⟩)
= 1/2⟨ ((η^♭) (k(ξ, )) ϵ_0 + (k(ξ, )) (η^♭) )u, u⟩
= - k(ξ, η) ⟨ u, u ⟩.
Thus ∇_ξ Y = -k(ξ, ) N and so
∇_ξ (N^2 - |Y|^2) = 4 N ⟨∇_ξ u, u ⟩ - 2 ⟨∇_ξ Y, Y ⟩
= -2N ⟨(k(ξ, )) ϵ_0 u, u ⟩ + 2 N ⟨ k(ξ, ), Y⟩
= -2N ⟨(k(ξ, )) ϵ_0 u, u ⟩ + 2 N ⟨(k(ξ, ))ϵ_0 u, u ⟩ = 0.
We may now start the proof of rigidity.arb.end:item1 in <ref>, that is, the implication that E=0 implies the existence of an embedding of (M,g,k) into Minkowski space.
Let u_00∈^∞(ℰ, S) be an asymptotically constant section with
| u_00|_ℰ_∞ = 1, ∀ξ∈ℝ^n ⟨(ξ^♭) ϵ_0 u_00, u_00⟩_ℰ_∞ = 0.
By <ref> item:parallel_spinor_E_equals_0, there exists a -parallel spinor u asymptotic to u_00 on ℰ in the sense that u - u_00∈^1_-q on the end ℰ.
We then define (N,Y) by defining.pair and consider the corresponding Killing development eq:Killing_development.
Then eq:rigidity_spinor_asymptotic_conditions implies that N → 1 and Y → 0 as we go to infinity in the end ℰ.
By <ref>, this means that N^2 - |Y|^2 = 1 everywhere and <ref> implies that the Killing development M̅ is a geodesically complete Lorentzian manifold.
Furthermore, we may directly apply <cit.> to see that the Lorentzian manifold M̅ contains the initial data set (M,g,k) as {x_0}× M for each x_0 ∈ℝ and that the Killing vector field ∂_x_0 is parallel on M̅.
Next we will prove that M̅ is already flat.
Indeed, by <ref> item:parallel_spinor_E_equals_0, we actually obtain that there exists a family (u_α) of -parallel sections which forms a global frame of the bundle S.
Then, as is described in the proof of <cit.>, the spinor bundle S can be canonically extended to a spinor bundle S̅→M̅ corresponding to the Lorentzian metric g̅.
Moreover, restriction of sections to M ×{0} yields an isomorphism,
{u̅∈^∞(M̅, S̅) |∇̅_∂_x_0u̅ = 0 }^∞(M, S).
In particular, we find unique extensions u̅_α∈^∞(M̅, S̅) of u_α∈^∞(M, S) such that ∇̅_∂_x_0u̅_α = 0.
By construction, the Lorentzian spinor connection on S̅ restricts to on S, and hence (u̅_α)_α actually form a basis of parallel sections of S̅.
This means that S̅ is flat and so the underlying Lorentzian manifold (M̅, g̅) must already be flat.
Thus a standard classification result <cit.> implies that the universal cover of (M̅, g̅) is isometric to Minkowski space.
But then since the universal covering only has one end (being diffeomorphic to ℝ^n+1) and M̅ already has (topologically) at least one end of the form ℰ×ℝ, it follows that M̅ must be simply-connected.
Thus (M,g,k) embeds as desired in Minkowski space (M̅,g̅).
We now turn to part rigidity.arb.end:item2 of <ref> (E_ℰ=|P_ℰ|⇒ E_ℰ=0), which is essentially a repeat of <cit.>.
The key part is the following computational lemma we take from <cit.>.
Let R>0 and (g,k) be initial data on ℝ^n∖ B_R(0) satisfying
g_ij-δ_ij = O_3+λ(|x|^-α) ,  k_ij = O_2+λ(|x|^-1-α),
where
α > 1/2 if n=3,  α > n-3 if n≥ 4,  ϵ>0 ,  0<λ<1,
and
J^i = O_1+λ(|x|^-n-ϵ) ,  μ = O_1+λ(|x|^-n-ϵ).
Let N be a scalar field and Y^i a vector field on ℝ^n\ B(R) such that
N→_r→∞ A^0 , Y^i→_r→∞ A^i , (A^0)^2=|A|^2
for some constants A^μ≠ 0. Suppose moreover that
2Nk_ij+ℒ_Yg_ij = O_3+λ(|x|^-(n-1)-ϵ) ,
τ_ij = O_1+λ(|x|^-n-ϵ).
Then (E,P)=(0,0).
ℒ denotes the Lie derivative and τ_ij is defined as in Proposition 2.1 of <cit.>
τ_ij-1/2g^klτ_klg_ij = R_ij+g^mlk_lmk_ij -2k_ikk_ljg^lk - N^-1(ℒ_Yk_ij+∇_i∇_j N)-μ/2g_ij.
The proof of Lemma <ref>, which is contained in <cit.>, solely involves asymptotic analysis on ℰ and applies unchanged in our setting, where possibly other arbitrary but unrelated ends are present.
Assume by contradiction that E_ℰ≠ 0.
By <ref>, we have a section u of S satisfying
|u|_ℰ_∞ = 1, ⟨ u, (P_ℰ) ϵ_0 u ⟩_ℰ_∞ = - |P_ℰ| = -E_ℰ,
and u=0.
Again, we use it to define a scalar, 1-form pair (N,Y) as in (<ref>).
By eq:rigidity_spinor_extra_decay_asymptotic_conditions and the fact that we have assumed E_ℰ≠ 0, we obtain that both N → 1 and |Y|^2 → 1 as we go to infinity in ℰ.
Using <ref>, we thus obtain N^2 = |Y|^2.
Then, by the computation from line (3.18) to (3.28) in <cit.>, we arrive at
N^2τ_ij= μ Y_i Y_j
where τ_ij is as above.
This can be taken verbatim from <cit.> because it only involves .
Given the decay assumption on μ in (<ref>) of <ref>, the second equality in (<ref>) implies that τ_ij satisfies the decay assumption in (<ref>).
Moreover, the computation from line (3.15) to (3.27) in <cit.> shows that the assumption on J^i in (<ref>) is satisfied. Finally, the computation between line (3.27) and (3.28) in <cit.> shows that (<ref>) in Lemma <ref> is satisfied. Thus we can apply Lemma <ref> to obtain that (E_ℰ,P_ℰ)=(0,0), a contradiction.
§ WEIGHTED POINCARÉ INEQUALITIES IN THE PRESENCE OF BOUNDARY
In this appendix we discuss weighted Poincaré inequalities adapted to our setting, that is, on asymptotically flat manifolds with compact boundary for sections of Hermitian vector bundles with respect to connections that are not necessarily metric.
This is essentially folklore (compare <cit.>).
Let (^n ∖_d(0),g) be an asymptotically flat end of some decay rate τ > (n-2)/2.
Then for each δ < 0, there exists a constant C = C(g,d, δ) > 0 and r_0 = r_0(g,d,δ) > d such that the following holds: For each r ≥ r_0 and u ∈^1_δ(^n ∖_r(0)), we have
u _^2_δ(^n ∖_r(0))≤ C |∇ u|_g _^2_δ-1(^n ∖_r(0)),
where the weighted ^2-norms are calculated with respect to the weight function ρ = |x| and the volume measure induced by the Riemannian metric g.
It is enough to consider compactly supported smooth functions u ∈^∞(^n ∖_r(0)).
We essentially follow the proof given by Lee in <cit.> (which, in turn, follows Bartnik <cit.>) except that we need to treat an additional boundary term in our setting.
The first part of the proof of <cit.> is unaffected by the presence of a boundary and we obtain for each r > d the estimate
∫_^n∖_r(0)⟨∇^g(ρ^2-n), ∇^g (ρ^-2 δ |u|^2) ⟩_g
≤∫_^n ∖_r(0) - 2 δ (2-n) |∇^g ρ|^2 ρ^-2δ -n |u|^2 _g + C' u_^2_δ·∇^g u _^2_δ-1,
where C' is a constant that only depends on the metric g.
Using partial integration, we also obtain the identity
∫_^n∖_r(0)⟨∇^g(ρ^2-n), ∇^g (ρ^-2 δ |u|^2) ⟩_g
= ∫_^n ∖_r(0)∇^∗∇^g(ρ^2-n) ρ^-2δ |u|^2 _g + ∫_^n-1_r∇^g_ν(ρ^2-n) ρ^-2 δ |u|^2 ,
where ν denotes outer unit normal with respect to the metric g along ∂ (^n ∖_r(0)), that is, pointing towards the deleted interior disk _r(0).
Since 2-n < 0, we have ∇^g_ν (ρ^2-n) ≥ 0 along _r^n-1, and thus we deduce
∫_^n∖_r(0)⟨∇^g(ρ^2-n), ∇^g (ρ^-2 δ |u|^2) ⟩_g ≥∫_^n ∖_r(0)∇^∗∇^g(ρ^2-n) ρ^-2δ |u|^2 _g
Chaining together the estimates eq:weighted_poincare_estimate1,eq:weighted_poincare_estimate2 and rearranging terms, we arrive at the estimate
∫_^n ∖_r(0)( ∇^∗∇^g(ρ^2-n) ρ^-2δ |u|^2 + 2 δ (2-n) |∇^g ρ|^2 ρ^-2 δ -n |u|^2 ) _g
≤ C' u_^2_δ·∇^g u _^2_δ-1
Since ρ^2-n is harmonic with respect to the Euclidean background metric, we obtain that
∇^∗∇^g (ρ^2-n) = -Δ_g(ρ^2-n) = Δ_g_eukl(ρ^2-n) - Δ_g(ρ^2-n) ∈ O(|x|^-τ-n).
In particular, if r ≥ r_0 for r_0 = r_0(g, d, δ) sufficiently large, then we can ensure that
∇^∗∇^g(ρ^2-n) ρ^-2δ |u|^2 + 2 δ (2-n) |∇^g ρ|^2 ρ^-2 δ -n |u|^2 ≥ C” |u|^2 ρ^-2 δ -n
on all of ^n ∖_r(0) for a constant C” = C”(g, δ, r_0) >0.
Here we have used that δ < 0.
Then eq:poincare_penultimate_step,eq:use_asymptoticaly_harmonic show that for all r ≥ r_0, we have
C”u_^2_δ(^n ∖_r(0))^2 ≤ C' u_^2_δ(^n ∖_r(0))·∇^g u _^2_δ-1(^n ∖_r(0)).
This proves the desired estimate with C = C' / C”.
Let (X,g) be a complete connected asymptotically flat manifold with compact interior boundary ∂ X.
Let E → X be a Hermitian vector bundle endowed with a metric connection ∇.
Let δ < 0 and A be a smooth 1-form on X with values in the endomorphisms of E such that | A |∈ O(ρ^δ-1) on each asymptotically flat end of X.
Define a new connection ∇^A = ∇ + A on E.
Then each u ∈^1_δ(X,E) which satisfies ∇^A u = 0 must vanish.
Assume to the contrary that u ≠ 0 and ∇^A u = 0.
Then u is smooth and it must be non-zero at each point because X is connected and u satisfies a linear ODE along each smooth curve in X.
Now select an asymptotically flat end ^n ∖_d(0) ≅ℰ⊆ X and choose r_0 and C as in <ref> above.
Note that since ∇ is a metric connection on E, we have Kato's inequality |∇| u ||≤|∇ u |.
We thus find for each r ≥ r_0 that
1/Cu_^2_δ(^n ∖_r(0)) ≤∇ |u|_^2(^n ∖_r(0))
≤∇ u_^2(^n ∖_r(0)) = A u _^2(^n ∖_r(0))≤ A __-1^∞(^n ∖_r(0)) u _^2_δ(^n ∖_r(0)).
Since u vanishes nowhere, this implies that 0 < 1/C ≤ A __-1^∞(^n ∖_r(0)).
But since |A| ∈ O(ρ^δ-1) and δ - 1 < -1, we have that A __-1^∞(^n ∖_r(0))→ 0 as r →∞, a contradiction.
Let (X,g) be a complete connected asymptotically flat manifold with compact interior boundary ∂ X.
Let E → X be a Hermitian vector bundle endowed with a metric connection ∇.
Let δ < 0 and A be a smooth 1-form on X with values in the endomorphisms of E such that | A |∈ O(ρ^δ-1) on each asymptotically flat end of X.
Then there exists a constant C = C(X,g,E,δ,A) such that for all u ∈^1_δ(X,E) we have
u _^2_δ(X)≤ C ∇^A u _^2_δ - 1(X).
The estimate
∇ u _^2_δ-1(X)≤∇^A u _^2_δ-1(X) + A u _^2_δ-1(X)≤∇^A u _^2_δ-1(X) + A_^∞_δ-1(X) u_^2_0(X)
holds for all u ∈^1_δ(X,E).
Assume to the contrary that such a constant C > 0 does not exist.
Then there exists a sequence u_i ∈^1_δ(X,E) such that u_i_^2_δ(X) = 1 and ∇^A u_i _^2_δ-1(X)→ 0.
Then because of eq:estimate_nabla_by_modnabla the sequence (u_i) is uniformly bounded in ^1_δ(X,E).
By the weighted Rellich theorem[see e.g. <cit.> which immediately extends to our setting with boundary.], we can pass to a subsequence and assume that u_i → u in ^2_0(X,E).
But then eq:estimate_nabla_by_modnabla implies that ∇ (u_i - u_j)_^2_δ-1(X)→ 0 as i,j →∞.
This, in turn, via <ref> and using the fact that the sequence (u_i) converges locally in ^2 (as a consequence of ^2_0-convergence) implies that u_i - u_j_^2_δ(X)→ 0 as i,j →∞.
We thus conclude that the sequence (u_i) converges in ^1_δ(X,E) and so u_i → u in ^1_δ(X,E).
Hence u_^2_δ(X) = 1 and ∇^A u = 0, a contradiction to <ref>.
In this paper, we use the weighted Poincaré inequality for the case δ = -q = -(n-2)/2, in which case it simplifies to u _^2_-q≤ C ∇ u _^2.
Moreover, we are primarily interested in applying it to the following setup:
Let (X,g,k) be a complete asymptotically flat initial data set, where (X,g) is endowed with a spin structure, and let S → X be the variant of the spinor bundle as introduced in <ref>.
Then both the modified connection _ξ = ∇_ξ + 1/2 (k(ξ, )) ϵ_0 and ^ψ_ξ = _ξ - ψ/n(ξ) ϵ_1 with ψ∈^∞(X) can be written in the form ∇ + A for suitable 1-forms A satisfying |A| ∈ O(ρ^-q-1).
Thus we obtain weighted Poincaré inequalities for both of these modified connections from <ref>.
|
http://arxiv.org/abs/2307.05122v1 | 20230711090019 | Synthetic Decomposition for Counterfactual Predictions | [
"Nathan Canen",
"Kyungchul Song"
] | econ.EM | [
"econ.EM"
] |
Synthetic Decomposition for Counterfactual Predictions
Nathan Canen and Kyungchul Song
University of Houston, University of Warwick & NBER, and University of British Columbia
We thank Victor Aguirregabiria, Tim Armstrong, Xu Cheng, EunYi Chung, Wayne Gao, Yu-Chin Hsu, Hiro Kasahara, Vadim Marmer, Ismael Mourifie, Vitor Possebom, Frank Schorfheide and Paul Schrimpf, and participants in the seminars in Seoul National University, University of Calgary, University of Illinois Urbana-Champaign, University of Notre Dame, University of Pennsylvania, University of Toronto, University of Victoria, and in CIREQ Montreal Econometrics Conference, Conference on Econometrics for Modern Data Structures, and SETA 2022 for valuable comments. All errors are ours. We also thank Ratzanyel Rincón for his excellent research assistance. Song acknowledges that this research was supported by Social Sciences and Humanities Research Council of Canada. Corresponding Address: Kyungchul Song, Vancouver School of Economics, University of British Columbia, 6000 Iona Drive, Vancouver, BC, V6T 1L4, Canada, [email protected]
Department of Economics, University of Warwick, Coventry, CV4 7AL, United Kingdom
[email protected]
Vancouver School of Economics, University of British Columbia, 6000 Iona Drive, Vancouver, BC, V6T 1L4, Canada
[email protected]
1214 [econometrica]
Counterfactual predictions are challenging when the policy variable goes beyond its pre-policy support. However, in many cases, information about the policy of interest is available from different (“source”) regions where a similar policy has already been implemented. In this paper, we propose a novel method of using such data from source regions to predict a new policy in a target region. Instead of relying on extrapolation of a structural relationship using a parametric specification, we formulate a transferability condition and construct a synthetic outcome-policy relationship such that it is as close as possible to meeting the condition. The synthetic relationship weighs both the similarity in distributions of observables and in structural relationships. We develop a general procedure to construct asymptotic confidence intervals for counterfactual predictions and prove its asymptotic validity. We then apply our proposal to predict average teenage employment in Texas following a counterfactual increase in the minimum wage.
Key words: Counterfactual Predictions, Decomposition Analysis, Ex Ante Policy Evaluation, Synthetic Decomposition, Uniform Asymptotic Validity
JEL Classification: C30, C54
August 12, 2023
=============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ INTRODUCTION
Policymakers' questions are often centered around the prediction of a new policy's outcome, such as predicting the effect of a new job training program, the welfare implication of a proposed merger of firms, or the employment effect of a minimum wage increase.[See <cit.>, <cit.>, and <cit.> for discussions on the importance and challenges of ex ante policy evaluations. See <cit.> for related issues in a program evaluation setting.] Such questions are hard to answer because the new policy's outcome is not observed. For example, when a state in the U.S. considers increasing its minimum wage to a level never seen before within that state, this will imply that the policy is beyond its historical variations. In this situation, the researcher may consider using a parametric specification of the outcome-policy relationship and extrapolate it to a post-policy setting. However, when the new policy goes beyond the support of its historical variations, the counterfactual prediction may fail to be nonparametrically identified and the prediction inevitably relies on the particular parametrization that is chosen.
Alternatively, the researcher may use data from another region that has experienced a similar policy in the past. Transferring empirical features from one source to another has long been used in economics. In macroeconomics, it is common to calibrate a structural model by using estimates from micro-studies (see <cit.> for a review). In a different context, the decomposition method in labor economics can also be viewed as following a similar idea: the researcher “transfers” information from a population in one period (before the policy) to the same population in another period (after the policy) (see <cit.> for a review of this vast literature).[Since the population may have changed between the two periods for reasons other than the policy, we can view the decomposition method as a special form of a transfer between different populations.] The program evaluation literature also explores various methods of transferring causal inference results from one experiment setting to another experiment or non-experiment setting (see <cit.>, <cit.>, <cit.>, <cit.>, to name but a few). However, to the best of our knowledge, much less attention has been paid to the transfer problem when predicting counterfactual policies using structural equation models.
In this paper, we consider the problem of generating counterfactual predictions from a new policy, using data from other regions that have experienced a similar policy in the past. In order to transfer empirical features across regions in structural models, the outcome-policy relationship needs to be “transferable” from one region to another. For example, we may want to predict the average teenage employment after a minimum wage increase in Texas in the U.S., and consider using data from California, assuming that its structural relationship between the teenage employment outcome and the minimum wage (after controlling for some observed characteristics) is identical to that in Texas.[As we explain later, the transferability condition is closely related to conditional external validity or the external unconfoundedness condition in the literature of causal inference and external validity (see <cit.>, <cit.>, and <cit.>). The synthetic transferability condition has a testable implication in a spirit similar to checking the pre-treatment fit in synthetic control methods. Our proposal entails a formal test of this implication with uniform asymptotic size control.] However, the transferability between two regions can be strong in practice, especially when the market environments in the two regions exhibit salient differences.
As we show in this paper, the transferability issue can be alleviated when we have multiple “source” regions in which similar policies have been implemented in the past. In the minimum wage example with Texas as the target region, there may be several other states such as California, Oregon, and Connecticut which experienced similar minimum wage increases. However, aggregating data from these source regions is not immediately obvious. Ideally, it would be desirable to choose a source region that is most “similar” to the target region, but it is not clear which dimensions of the characteristics between the two regions would be most relevant for a given policy prediction problem.
To solve this issue, we develop a method of aggregating information from multiple source regions to generate counterfactual predictions in the target region by constructing a synthetic structural relationship from multiple source regions. First, as noted by <cit.>, structural equation models often involve the policy variable in an index (called the policy component here) which exhibits variations at the individual level. We can classify each person in the target population into the matched group and unmatched group depending on whether the person's post-policy value of the policy component can be matched with another person's pre-policy value of the policy component in the same population. As proposed by <cit.>, we can use the pre-policy data from the target population to nonparametrically identify the counterfactual predictions for the matched group even under new policies never implemented before (see <cit.> for an overview of this approach).
To generate counterfactual predictions for the unmatched group, we introduce what we call the synthetic transferability condition which requires that there exist a weight vector such that the weighted average of the outcome-policy relationships in the source regions coincides with that of the target region. This condition is weaker than the transferability condition with a single source region, described above, as it does not require the source regions to have the same structural relationship as the target region. Then, under a non-redundancy condition for the source regions, we can identify this weight as a minimizer of an L^2 distance between the outcome-policy relationship in the target region and the weighted average of the outcome-policy relationships in the source regions, where the L^2 distance is restricted to the matched group in the target region. Thus, we find a weighted average of the outcome-policy relationships in the source regions that is most similar to that in the target region on the matched group. This weighted average can be viewed as a synthetic structural relationship that can be used to generate counterfactual predictions. As our proposal essentially replaces the structural relationship involved in the decomposition method with a synthetic one, we call this method a synthetic decomposition method.
Our method is quite general and can be applied to a wide range of counterfactual prediction settings. In particular, we consider a generic nonparametric form of an outcome-policy relationship that is nonseparable in the (potentially multi-dimensional) unobserved heterogeneity. This flexibility allows the researcher to derive a nonparametric outcome-policy relationship from a structural model that specifies peoples' incentives and choices differently across the populations. Furthermore, the type of a policy can vary, including policies that transform a certain individual-level exogenous variable (e.g., demographic-dependent tax subsidies) or an aggregate-level exogenous variable (e.g., minimum wages). The policy can be one that changes a structural parameter or a coefficient of a certain variable, or a change in the distribution of an exogenous observed variable.
We then develop inference on the counterfactual prediction from the synthetic decomposition method. In this paper, we pursue a general approach that does not require the researcher's knowledge of the details of the asymptotic properties of estimators for each source region, because such properties may vary depending on the particular model specified for each region (e.g., the specification of the structural relationship between outcomes, a policy, and observed or unobserved characteristics). More specifically, we develop an inference method for the policy predictions inspired by <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>. The non-standard aspect of inference in our context is that the estimated weight is chosen from a simplex, and hence, the limiting distribution of the estimated weight depends on how close the population weight is to a vertex or an edge of the simplex. While the situation is analogous to a setting with the parameter on the boundary studied by <cit.> and <cit.>, their approach of quadratic approximation does not apply here.[Asymptotic or bootstrap inference for constrained estimators has received a considerable attention in the econometrics literature. More recent examples include <cit.>, <cit.>, <cit.>, <cit.> and <cit.> to name but a few. See those papers for more references in this literature.] By adapting the proposal of <cit.> and <cit.> to our setting, we develop an asymptotic inference method that is valid uniformly over the behavior of the population weight. Monte Carlo simulations suggest that our procedure works well in finite samples.
We illustrate our procedure with an empirical application studying the effects of a counterfactual increase in minimum wages in Texas to US$9. (The prevailing minimum wage is the federal level of US$7.25, set in 2009.) Such increases are subject to extensive policy and academic interest, as shown by it being a central policy proposal in the 2022 Texas gubernatorial elections.[See <tinyurl.com/2cmv7fhz> for its presence and analysis within one of the candidate's policy platforms.] However, the extensive minimum wage literature in labor economics predominantly focuses on ex post analyses of minimum wage increases (<cit.>). We implement our proposed method using Current Population Survey (CPS) data and estimate that an increase in minimum wages would decrease average (teenage) employment by 9.5-11 percentage points on a baseline of approximately 29% if minimum wages in Texas were US$9. In doing so, our synthetic comparison for Texas (i) accounts for the heterogeneous skill distributions and demand conditions across states (<cit.>), (ii) does not require the researcher to choose the comparison unit (e.g., whether to focus on geographically close or distant states - see <cit.> and <cit.> for a discussion), (iii) accounts for the difference in causal relationships between minimum wages and employments across states (<cit.>), (iv) does not rely on parametric extrapolation, which is a concern in this literature - see <cit.>, for example.
Related Literature
The importance of ex ante policy evaluation in economics has been emphasized in the literature. See, for example, <cit.>, <cit.> and <cit.>. See also the review by <cit.> and the references therein. The evaluation usually builds on an invariance condition that requires certain structural relationships to remain unchanged after the policy. While most literature on program evaluations focuses on measuring the impact of a policy, the invariance of structural relationships that underlies the measured impact is crucial for understanding the reasons for the effect of the policy and predicting the effect of a new policy.
The literature of counterfactual predictions using structural models has often been motivated by the ex ante policy evaluation settings in practice. <cit.> studied the problem of counterfactual predictions using a structural equation model, when a policy changes the distribution of an exogenous variable. A recent stream of literature studies the nonparametric identification of counterfactual predictions in structural models (see <cit.>, <cit.>, and <cit.>, to name but a few.) A consistent theme in this literature is that certain objects of counterfactual prediction are nonparametrically identified even when the structural model is not fully identified. Examples include <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>. However, this literature usually considers identification using data from the target population only.
Our proposal is closely related to the decomposition method in labor economics. After the seminal papers of <cit.> and <cit.>, the decomposition method has been extended and widely used in labor economics.[Vitor Possebom kindly let us know that there was an early appearance of a similar idea in <cit.>.] See <cit.> and <cit.>. The causal interpretation of the decomposition method was studied by <cit.>. See also <cit.> for an extensive review of this literature. There is a growing literature of nonparametric counterfactual prediction inspired by the decomposition method. See <cit.>, <cit.>, and <cit.>. The transfer of results from a source population to a target population requires various forms of invariance conditions. (See <cit.> for a detailed discussion on these conditions.) See also <cit.> for an invariance condition invoked in the problem of counterfactual predictions from one study context to another. In the literature of statistics, a similar form of an invariance condition is proposed by <cit.>.
A growing attention has been paid to the issue of external validity in field experiments, such as when the results obtained from experiments are not replicated in their scaled-up implementation. (See <cit.>, <cit.>, and <cit.> and references therein. See also <cit.> and <cit.> for the review of these issues and the literature.) One of the earliest approaches to detect or address the issue of external validity in program evaluations is found in <cit.>. They consider the problem of using past experiment results to predict its outcome for a different population. They introduce what they call the assumption of unconfounded location and show the external validity of the past experiments under this assumption. Variants of this assumption have been used in the study of external validity in the literature. Examples include <cit.>, <cit.>, and <cit.>. Another strand of related literature considers combining experimental and observational data (see, e.g., <cit.> and <cit.>), or performs meta analysis by aggregating experiment results across studies using a Bayesian hierarchical model (BHM) or pursuing minimax regret optimality (see, e.g., <cit.>, <cit.>, <cit.>, and <cit.>.)
Our paper's method of constructing a counterfactual prediction is closely related to <cit.>. They estimate a dynamic structural model of the labor market using pre-treatment data from a randomized experiment in Mexico and validate the model by comparing the predictions from the structural model with the randomized experiment results. Then, they use the model to generate counterfactual predictions from different policies. Instead of building up a full structural model, we follow <cit.> and focus on the nonparametric outcome-policy relationship that is relevant to the policy prediction problem. To deal with a setting where the policy variable goes out of the support, we develop a new method that uses data from multiple source populations.
Our synthetic decomposition method is inspired by the method of synthetic control which currently attracts a wide attention in applied research and the literature of econometrics (see <cit.> for an overview and the related literature on the method). Both methods are similar in the sense that they aim to construct a synthetic comparison group using data from multiple populations, instead of relying on an ad-hoc comparison of various characteristics of the regions. However, the way the comparison group is constructed is fundamentally different. The synthetic control method compares the pre-treatment outcomes between the target population and the source populations, whereas the synthetic decomposition method compares the pre-treatment outcome-policy relationships between the target population and the source populations on the matched group. The synthetic decomposition method does not require observations over multiple time periods, but requires individual-level data for each population.
The rest of the paper proceeds as follows. In Section 2, we present our main proposal of the synthetic decomposition method and discuss conditions for the method to work. In Section 3, we provide procedures of estimation and construction of confidence intervals, assuming that we observe a random sample of data from each population. In Section 4, we present an empirical application that studies the prediction problem of average employment when the minimum wage increases in Texas. In Section 5, we conclude. In the Supplemental Note, we present general conditions for the proposed confidence intervals to be uniformly asymptotically valid, as well as formal results and proofs. The Supplemental Note also contains some more details on the Monte Carlo simulations and the empirical application.
§ SYNTHETIC DECOMPOSITION FOR COUNTERFACTUAL PREDICTIONS
§.§ The Target Population and Counterfactual Predictions
Suppose that there is a region where we are interested in predicting the outcome of a new policy. The outcome variable is denoted by Y_i and the observed random vector of exogenous variables by X_i. We assume that they are related as follows:
Y_i = g_0(μ_0(X_i), U_i),
where U_i is an unobserved random vector, μ_0 is a map subject to a change depending on a policy, and g_0 is a map that is invariant to the policy. We call the population in the region the target population.[It is important to note that the notion of “population” here not only refers to the joint distribution of random variables, but also depends on the causal model of how the random variables are generated. Hence, two identical joint distributions that are generated according to different causal models are treated as drawn from different populations.] In our paper, we focus on settings where we do not have long time series data, and hence, the randomness of variables arises only from their within-population cross-sectional variations. This means that, in our paper, the aggregate variables behave like nonstochastic quantities. Throughout this paper, we assume that X_i does not include any aggregate variables, and treat observed aggregate variables as “observed parameters.”
A policy is a transform of the map μ_0 into μ_0^Γ. Thus, after the policy, the relation between X_i and Y_i changes as follows:
Y_i = g_0(μ_0^Γ(X_i), U_i).
We call μ_0(X_i) and μ_0^Γ(X_i) the policy components. The main requirement for the policy components is that they exhibit cross-sectional variations before and after the policy.
The target parameter is the predicted average outcome after the policy and is written as
θ_0 = 𝐄_0[ g_0(μ_0^Γ(X_i), U_i ) ],
where 𝐄_0 denotes the expectation with respect to the target population P_0. Our framework accommodates various forms of policies. We discuss some examples of policy components below.
Example 1 (Transforming an Individual Covariate): μ_0(x) = x and μ_0^Γ(x) = f(x) for some function f. For example, suppose that X_i = (X_1,i, X_2,i) where X_1,i represents an individual's income and X_2,i represents other demographic characteristics. Now the policy of interest is an income subsidy by an amount, say, δ>0, for each individual with X_i in a set A. Then, we can take
f(x) = (x_1 + δ,x_2) 1{x ∈ A} + (x_1, x_2) 1{x ∉ A}.
Even if the amount δ is the same across individuals, the policy components μ_0 and μ_0^Γ generally exhibit variations at the individual level.[It is important to note that this simple setting of counterfactual prediction is already different from the standard program evaluation setting. Here, the potential outcomes are given as follows:
Y_i(0) = g_0(μ_0(X_i),U_i) and Y_i(1) = g_0(μ_0^Γ(X_i),U_i).
However, unlike the standard program evaluation setting, everybody is treated here. Furthermore, we focus on an ex ante policy evaluation where we do not observe the outcome of the policy for the target population yet. (See <cit.> and <cit.> for the problem of policy analysis in such a setting.)]
Example 2 (Changing a Structural Parameter or an Aggregate Variable): μ_0(x) = q(x;v_0) for a parametric function q(·;v_0) with parameter v_0, and μ_0^Γ(x) = q(x;ṽ_0) for a different parameter ṽ_0. For example, the parameter can represent certain structural parameters such as parameters of the matching function in search and matching models. Alternatively, we may consider a setting where the policy affects some aggregate state variable, such as the minimum wage, the level of a sales tax or the population size through immigration policies. In such cases, we can view v_0 as the aggregate variable that the policy targets, and ṽ_0 as its post-policy value. Note that the aggregate policy variable v_0 does not vary at the individual level, but the policy components q(x;v_0) and q(x;ṽ_0) can.
It is convenient to introduce what we call the Average Response Function (ARF) as follows:[The ARF is closely related to the Local Average Response function (LARF) proposed by <cit.>. In fact, if we take μ_0(x) = x, and take the derivative of the ARF with respect to the first argument and evaluate it at the same value as the second argument, the derivative becomes the LARF. The ARF is generally different from the ASF (Average Structural Function) introduced by <cit.>, unless X_i and U_i are independent.]
m_0(μ,x ) = ∫ g_0(μ,u)dP_0,U | X(u | x).
where P_0,U | X denotes the conditional distribution of U_i given X_i in the target population (before the policy). The ARF summarizes the structural relationship between the outcome and the policy component in the model. Note that the dependence of the ARF m_0(μ,x) on the first argument μ is causal so that we can use this dependence for counterfactual analysis when we change the value of μ. However, the dependence on the second argument x is not, because in this model the causal relation between U_i and X_i is left ambiguous. The target parameter is written as follows:
θ_0 = ∫ m_0(μ_0^Γ(x),x) dP_0(x).
From here on, we call the map m_0(μ_0^Γ(·),·) the post-policy ARF in the target population.[Alternatively, we might be interested in a post-policy Distributional Response Function (DRF): for some set A,
p_0(A; μ_0^Γ(x),x) = ∫ 1{g_0(μ_0^Γ(x),u) ∈ A }dP_0,U | X(u | x).
The quantity represents the conditional probability of Y_i taking values from a set A given X_i = x, when μ_0(X_i) is fixed to be μ_0(x). Once we replace m_0(μ_0^Γ(x),x) by p_0(A; μ_0^Γ(x),x), the main proposal of this paper carries over to this alternative.] As noted by <cit.> for the case of ASF (Average Structural Function), it is not always necessary to recover the reduced form g_0 from data to obtain the ARF in many settings. See <cit.> for a similar observation in game-theoretic models.
We assume that the target population has not experienced the policy yet. The average counterfactual outcome θ_0 is not nonparametrically identified when the policy sends the policy component outside of its pre-policy support. To address this issue, we may choose a parametric specification of the map g_0 and extrapolate it beyond the support of the pre-policy data. However, since the counterfactual prediction is not nonparametrically identified, it unavoidably relies on the choice of a parametric specification. To address this challenge, we consider using information from other populations which have already implemented a similar policy. We will explain this idea later.
Let 𝒳_0 be the support of X_i in the target population. Define
𝒳_0^Γ = {x ∈𝒳_0: μ_0^Γ(x) = μ_0(x̃), for some x̃∈𝒳_0 }.
Roughly speaking, the set 𝒳_0^Γ is the set of values x such that the post-policy value μ_0^Γ(x) matches up with the pre-policy value μ_0(x̃) for some x̃∈𝒳_0. We follow <cit.> and call the set 𝒳_0^Γ a matched group.
Consider the following simple model for Y_i for the target population:
Y_i = μ_0(X_i) + U_i,
and assume that X_i and U_i are independent with U_i having mean zero, and the policy changes X_i to X_i + Δ for some vector Δ. Then, μ_0^Γ(x) = μ_0(x + Δ), and the ARF is given by the identity map μ↦μ, and the matched group is given by
𝒳_0^Γ = {x ∈𝒳_0: μ_0(x + Δ) = μ_0(x̃) for some x̃∈𝒳_0}.
See Figure <ref> for an illustration. ▪
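For illustration only, the matched group in this example has a straightforward sample analogue; in the following sketch the array names are hypothetical and the empirical range of μ_0(X_i) is used as a stand-in for the support 𝒳_0 of a scalar, continuously distributed covariate.

import numpy as np

def matched_group_flags(mu_pre, mu_post, tol=0.0):
    # flag observations whose post-policy component mu_0^Gamma(X_i) lies in the
    # (empirical) range of the pre-policy component mu_0(X_i); for a scalar,
    # continuously distributed component this approximates membership in X_0^Gamma
    lo, hi = mu_pre.min(), mu_pre.max()
    return (mu_post >= lo - tol) & (mu_post <= hi + tol)

# illustration with Y_i = mu_0(X_i) + U_i and the policy X_i -> X_i + Delta
rng = np.random.default_rng(0)
X = rng.normal(size=1_000)
mu0 = lambda x: 1.0 + 0.5 * x          # hypothetical policy component
Delta = 0.8
in_matched_group = matched_group_flags(mu0(X), mu0(X + Delta))
print("share of the target sample in the matched group:", in_matched_group.mean())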
We require the identification of the post-policy ARF for the target population only for x in the matched group.
[Identification of the Post-Policy ARF on a Matched Group]
The set 𝒳_0^Γ⊂𝒳_0 is non-empty and m_0(μ_0^Γ(x),x) is identified for all x ∈𝒳_0^Γ.
Essentially, Assumption <ref> requires that a portion of the target population before the policy can be used to predict outcomes after the policy. If μ_0(X_i) = X_i and X_i and U_i are independent, the ARF is immediately identified as the conditional expectation:
m_0(μ,x) = 𝐄_0[Y_i | X_i = μ].
When X_i and U_i are not independent, one can still identify the ARF when there is an appropriate control function, as shown by <cit.> for the case of ASF.
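In the independent case, the conditional-expectation formula above can be estimated by any standard nonparametric regression of Y_i on the policy component; the sketch below is illustrative only (scalar component, Gaussian kernel, fixed bandwidth, no attempt at bandwidth selection).

import numpy as np

def arf_hat(mu_eval, mu_sample, y_sample, h=0.3):
    # Nadaraya-Watson estimate of m_0(mu) = E[Y_i | mu_0(X_i) = mu]
    out = np.empty(len(mu_eval))
    for j, m in enumerate(mu_eval):
        w = np.exp(-0.5 * ((mu_sample - m) / h) ** 2)
        out[j] = np.sum(w * y_sample) / np.sum(w)
    return out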
§.§ Transfer from a Source Population
Suppose that there is a region 1 which has already experienced the policy. It seems reasonable to assume that the data from this region should be useful for policy prediction in the target region in some way or another. We call the population in this region the source population. Suppose that Y_i and X_i in the source region are related by the following reduced form:
Y_i = g_1(μ_1^Γ (X_i),U_i),
where the source post-policy ARF, m_1(μ_1^Γ(x),x), is identified for each x ∈𝒳_0. The joint distribution of (Y_i,X_i) can differ between the source and target regions, but we assume the following transferability condition: for all x ∈𝒳_0,
m_1(μ_1^Γ(x),x) = m_0(μ_0^Γ(x),x).
The condition (<ref>) says that the average outcome-policy relationship in the source population is transferable to that in the target population. In this setting, we can identify θ_0 as follows:
θ_0 = ∫_𝒳_0^Γ m_0(μ_0^Γ(x),x) dP_0(x) + ∫_𝒳_0 ∖𝒳_0^Γ m_1(μ_1^Γ(x),x) dP_0(x).
This identification strategy can be viewed as originating from the Oaxaca-Blinder decomposition method in labor economics. To see the connection with the decomposition method, consider a setting where at time t_0, the policy is not implemented and Y_i is generated as follows:
Y_i = g_0(μ_0(X_i),U_i), with X_i ∼ P_0,X,
and at time t_1 > t_0, the policy is implemented and Y_i is generated as follows:
Y_i = g_1(μ_1^Γ(X_i),U_i), with X_i ∼ P_1,X.
Here P_0,X and P_1,X denote the distribution of X_i before and after the policy. Then, the difference between the expected outcomes at times t_1 and t_0 is given by
𝐄_1[Y_i] - 𝐄_0 [Y_i] = ∫ m_1(μ_1^Γ(x),x ) (dP_1,X(x) - dP_0,X(x))
+ ∫(m_1(μ_1^Γ(x),x) - m_0(μ_0(x),x)) dP_0,X(x).
Thus, the change in the mean of Y_i before and after the policy is decomposed into the component due to the change in the distribution of X_i and the component due to the change in the average outcome-policy relationship. The transferability condition (<ref>) excludes the possibility that the change of the conditional average outcome given X_i between times t_0 and t_1 is due to events other than the policy. Under the transferability condition, the second term on the right hand side of (<ref>) can be interpreted as an average causal effect of the policy in the target population.
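In sample form, the two components can be computed directly from estimated ARFs. The sketch below is illustrative only; its inputs are the fitted values of m_1(μ_1^Γ(x),x) evaluated on the time-t_1 and time-t_0 samples and of m_0(μ_0(x),x) on the time-t_0 sample.

import numpy as np

def oaxaca_blinder_terms(m1_on_post_sample, m1_on_pre_sample, m0_on_pre_sample):
    # composition effect: change in the distribution of X_i, holding the post-policy ARF fixed
    composition = np.mean(m1_on_post_sample) - np.mean(m1_on_pre_sample)
    # structure effect: change in the outcome-policy relationship, holding P_0,X fixed
    structure = np.mean(m1_on_pre_sample) - np.mean(m0_on_pre_sample)
    return composition, structure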
The transferability condition (<ref>) is related to the conditional external validity conditions used in program evaluations (see <cit.>, <cit.>, <cit.>, and <cit.>). For example, if we take the potential outcomes
Y_i(0) = g_k(μ_k(X_i),U_i) and Y_i(1) = g_k(μ_k^Γ(X_i),U_i), k=0,1,
with k depending on the population the individual i belongs to, then the transferability condition (<ref>) holds if for each x and d=0,1, the conditional distribution of Y_i(d) given X_i = x remains the same across the two populations. This latter condition follows from the location unconfoundedness condition of <cit.>.
However, the transferability condition in (<ref>) can be strong in practice. We can relax this transferability condition when we have at least two source populations that satisfy certain conditions.
§.§ Multiple Source Populations and Synthetic Decomposition
§.§.§ Multiple Source Populations
We assume that there are K regions (e.g., countries, states, markets, etc.), and each region k=1,...,K has a random vector (Y_i,X_i,U_i) with a distribution P_k, where for each k=1,...,K, we assume that Y_i is generated according to the following reduced form:
Y_i = g_k(μ_k(X_i), U_i),
where the map g_k governs the causal relationship between the outcome variable Y_i and the exogenous variables (μ_k(X_i),U_i) in region k. Note that the causal map, g_k, varies per region, reflecting different structural relationships (e.g., laws, structural parameters, regulations, equilibria, etc.) We call P_k's the source populations. As in the target population, the map μ_k is subject to a change by a policy. Each source population k has experienced a policy that changes μ_k into μ_k^Γ. After the policy, Y_i is generated as follows:
Y_i = g_k(μ_k^Γ(X_i), U_i).
Similarly as for the source population, we define the ARF:
m_k (μ,x) =∫ g_k(μ, u)dP_k,U | X(u | x),
where P_k,U | X denotes the conditional distribution of U_i given X_i in population k. We let 𝒳_k denote the support of X_i in the source population P_k.
We introduce the following transferability condition for the source populations. Let Δ_K-1 denote the (K-1)-simplex, i.e., Δ_K-1 = {𝐰∈𝐑^K: ∑_k w_k = 1, w_k ≥ 0, k=1,...,K}.
[Synthetic Transferability]
There exists 𝐰^* = (w_1^*,...,w_K^*) ∈Δ_K-1 such that
m_0(μ_0^Γ(x),x) = ∑_k=1^K m_k (μ_k^Γ(x),x )w_k^*
for all x ∈𝒳_0.
The synthetic transferability condition is weaker than the condition in (<ref>) in the sense that none of the source post-policy ARFs is required to be identical to that in the target population.
The major distinction between the target population and the source population is that, unlike the target population, each source population has experienced a policy Γ_k such that m_k(μ_k^Γ(x),x) is identified for all x ∈𝒳_0. We state this condition formally below.
[Rich Support]
For all k=1,...,K and all x ∈𝒳_0, m_k(μ_k^Γ(x),x) is identified.
Assumption <ref> requires that the post-policy ARF m_k(μ_k^Γ(x),x) is identified on the support of X_i in the target population. One can view Assumption <ref> as an “eligibility condition” for any population to serve as a source population for the prediction problem.
Let us introduce the last condition for the source populations. It requires that these populations are not redundant in an appropriate sense. Define
𝐦(x) = [m_1(μ_1^Γ(x),x),...,m_K(μ_K^Γ(x),x)]^⊤,
and let
H = ∫_𝒳_0^Γ𝐦(x)𝐦(x)^⊤ d P_0(x).
H is invertible.
Assumption <ref> requires that the post-policy ARFs, m_k(μ_k^Γ(·),·), k=1,...,K, be linearly independent on 𝒳_0^Γ. This assumption is used to point-identify the weight 𝐰^* in the Synthetic Transferability assumption. This assumption can be removed with a minor modification of our proposal. Thus, in the Supplemental Note, we present a modified inference procedure which does not require the point-identification of 𝐰^* and thus Assumption <ref>.
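In practice, the non-redundancy of the source regions can be probed with the sample analogue of H on the matched group: a very large condition number indicates that some source ARFs are close to linearly dependent on 𝒳_0^Γ. The sketch below is illustrative only; M_sources denotes an n × K array of estimated source post-policy ARFs evaluated at the target sample points and matched a boolean indicator of the matched group.

import numpy as np

def source_gram_matrix(M_sources, matched):
    # sample analogue (up to normalization) of H: Gram matrix of the source
    # post-policy ARFs over the matched group of the target sample
    A = M_sources[matched]
    H_hat = A.T @ A / A.shape[0]
    return H_hat, np.linalg.cond(H_hat)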
§.§.§ Synthetic Decomposition
Here, we propose our main method of synthetic decomposition. For a given weight 𝐰 = (w_1,...,w_K)^⊤∈Δ_K-1, we define
θ(𝐰) = ∫_𝒳_0^Γ m_0(μ_0^Γ(x),x) dP_0(x) + ∫_𝒳_0 ∖𝒳_0^Γ m^𝗌𝗒𝗇(x;𝐰) dP_0(x),
where
m^𝗌𝗒𝗇(x;𝐰) = ∑_k=1^K m_k(μ_k^Γ(x),x) w_k.
The relevance of θ(𝐰) to the original problem of prediction in the target population depends on the choice of the weight 𝐰. Whenever 𝐰^* is the weight vector satisfying the synthetic transferability condition, we have
θ_0 = θ(𝐰^*).
In fact, under Assumptions <ref>-<ref>, we can identify 𝐰^* = 𝐰_0, where
𝐰_0 = min_𝐰∈Δ_K-1ρ^2(𝐰),
and
ρ^2(𝐰) = ∫_𝒳_0^Γ( m^𝗌𝗒𝗇(x; 𝐰) - m_0(μ_0^Γ(x),x) )^2 dP_0(x).
Hence, the weight 𝐰_0 brings the synthetic ARF m^𝗌𝗒𝗇(x;𝐰) as close as possible to the target ARF, m_0(μ_0^Γ(x),x), for x ∈𝒳_0^Γ. The integral in the pseudo-distance ρ is taken only on the matched group. Therefore, both m^𝗌𝗒𝗇(x; 𝐰) and m_0(μ_0^Γ(x),x) are identified on the matched group.
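A sample analogue of this minimization is a least squares problem over the simplex, followed by the plug-in computation of θ(𝐰). The sketch below is illustrative only: it assumes that the target post-policy ARF has already been estimated on the matched group and the source post-policy ARFs at all target sample points, and the array names and the off-the-shelf SLSQP solver are not part of the formal procedure.

import numpy as np
from scipy.optimize import minimize

def estimate_weights(m0_post, M_sources, matched):
    # sample analogue of w_0: regress the target post-policy ARF on the source
    # post-policy ARFs over the matched group, with weights restricted to the simplex
    A = M_sources[matched]                      # n_matched x K
    b = m0_post[matched]
    K = A.shape[1]
    objective = lambda w: np.mean((A @ w - b) ** 2)
    constraints = ({'type': 'eq', 'fun': lambda w: np.sum(w) - 1.0},)
    res = minimize(objective, np.full(K, 1.0 / K), method='SLSQP',
                   bounds=[(0.0, 1.0)] * K, constraints=constraints)
    return res.x

def synthetic_prediction(m0_post, M_sources, matched, w):
    # sample analogue of theta(w): target ARF on the matched group,
    # synthetic (weighted source) ARF on the unmatched group;
    # m0_post is only used where matched is True
    m_syn = M_sources @ w
    return np.mean(np.where(matched, m0_post, m_syn))

Inference on the resulting prediction is more delicate because the estimated weight lies on a simplex; this is taken up in Section 3 and is not captured by the sketch.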
From this 𝐰_0, we obtain the identification of θ_0 as follows.
Suppose that Assumptions <ref>-<ref> hold. Then, θ_0 is identified as θ(𝐰_0).
When the synthetic transferability condition fails, the prediction θ(𝐰_0) is still derived from the weighted average of the outcome-policy relationships which is chosen to be as close as possible to meeting the synthetic transferability condition, based on their predictive performance on the pre-policy support of X_i in the target population. Naturally, those source populations with the outcome-policy relationships most similar to that of the target population on the matched group receive a highest weight by design.
To see the role of the rich support condition in Assumption <ref>, we decompose it into the following two conditions:
(a) For all k=1,...,K and all x ∈𝒳_0^Γ, m_k(μ_k^Γ(x),x) is identified.
(b) For all k=1,...,K and all x ∈𝒳_0 ∖𝒳_0^Γ, m_k(μ_k^Γ(x),x) is identified.
Condition (a) is used to identify the objective function ρ(𝐰) in (<ref>), so that we obtain 𝐰_0. Condition (b) is used to identify θ(𝐰) defined in (<ref>).
To see the role of the invertibility condition in Assumption <ref>, first define
𝐡 = ∫_𝒳_0^Γ𝐦(x) m_0(μ_0^Γ(x),x) d P_0(x).
Then, it is not hard to see that we can rewrite
𝐰_0 = min_𝐰∈Δ_K-1( 𝐰 - H^-1𝐡)^⊤ H ( 𝐰 - H^-1𝐡).
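Indeed, expanding the square in the definition of ρ^2(𝐰) gives (a routine completion of the square, recorded here for convenience)
ρ^2(𝐰) = 𝐰^⊤ H 𝐰 - 2 𝐰^⊤𝐡 + ∫_𝒳_0^Γ m_0(μ_0^Γ(x),x)^2 dP_0(x) = ( 𝐰 - H^-1𝐡)^⊤ H ( 𝐰 - H^-1𝐡) + const,
where the constant does not depend on 𝐰. Since H is positive semi-definite by construction and invertible under Assumption <ref>, it is positive definite, so the objective is strictly convex in 𝐰.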
We can see that the solution 𝐰_0 defined in (<ref>) is unique.
The crucial identifying restriction for the synthetic decomposition method is that the weight 𝐰^* in the synthetic transferability hypothesis remains the same regardless of whether x ∈𝒳_0^Γ and x ∈𝒳_0 ∖𝒳_0^Γ. Later, we relax the synthetic transferability assumption so that 𝐰^* is allowed to depend on a subvector of X_i that is not part of the policy variable.[This identifying strategy is precisely the way synthetic control methods identify the counterfactual non-treatment outcome of the treated unit as a weighted average of the outcomes in the donor pool of control units. The underlying assumption here is that the weights obtained from the pre-treatment outcomes remain valid after the treatment.]
§.§.§ Synthetic Decomposition and Linear Extrapolation
We can view the synthetic decomposition as a form of extrapolation from multiple source populations to a target population. In particular, when (i) the outcome-policy relationships follow a linear regression model and (ii) the synthetic transferability condition holds, the synthetic decomposition coincides with parametric extrapolation.
To see this, consider the simple model in Example <ref> and suppose that the reduced forms follow the same linear regression specification,
Y_i = X_i^⊤β_0 + U_i and Y_i = X_i^⊤β_k + U_i, k=1,...,K,
where β_0 belongs to the convex hull of β_k's. This latter condition implies the synthetic transferability condition as we have
∑_k=1^K w_k^* β_k = β_0,
for some weight vector 𝐰^* = (w_1^*,...,w_K^*). Unsurprisingly, in this case the synthetic decomposition method chooses a weighted average of the outcome-policy relationships and yields a prediction that coincides with the one from a linear extrapolation in the target population.
§.§ Examples
We now provide two applied examples that fit our framework.
§.§.§ Minimum Wages and Labor Supply
Minimum wages are one of the most prevalent and widely debated policies for the labor market. When studying the effects of counterfactual raises in minimum wages on employment, rather than past raises, the literature often uses search-and-bargaining models (e.g., <cit.>). In Section <ref>, we carefully rewrite the model of <cit.> to fit our framework. We give a brief overview here.
Let Y_i,j∈{0,1} denote the employment status of worker i after a match with firm j, X_i as worker i's observable characteristics (age, race, etc.), W_k as the prevailing minimum wage in region k that i is subject to, and U_i,j as a match-specific unobservable (shocks, unobserved types) drawn from a CDF F_k. As we explain in Section <ref>, the wage generation in <cit.> can be written as:
W_i,j = max{β_k M_i,j, W_k},
where β_k ∈ (0,1) is a parameter that represents the worker i's bargaining strength in region k, M_ij is the match productivity drawn for worker i with firm j. We parameterize the generation of M_i,j as follows:
log M_i,j = X_i^⊤γ_k + U_i,j.
The employment indicator, Y_i,j, equals one if the match surplus is higher than the wage:
Y_i,j = 1{M_i,j≥ W_i,j} = 1{M_i,j≥W_k} = 1{X_i^⊤γ_k + U_i,j≥log(W_k)}
where the first equality follows from (<ref>) and β_k ∈ (0,1) and the second one after applying logs.
Now, suppose that minimum wages in Texas increase to US$9 and we want to predict its effects on employment. Then, Texas would be the target region 0. In order to apply the synthetic decomposition method, we first set the policy component for region k as
μ_k(X_i) = X_i^⊤γ_k - log(W_k).
The counterfactual policy sets μ_0^Γ(X_i) = X_i^⊤γ_k - log(9). States that have had the policy variable μ_k(X_i) overlapping with μ_0^Γ(X_i) (e.g., California, Washington) would be source regions. Hence, it follows from (<ref>) that:
g_k(μ,u) = 1{μ+u ≥ 0}.
The ARF for state k is identified as
m_k(μ,x) = 1 - F_k(-μ).
The ARF is identified as the share of workers in state k whose productivity is higher than the minimum wage in state k. As we show in Section <ref>, the policy component parameter γ_k is identified from a semiparametric censored regression of log wages:
log W_i,j = max{logβ_k + X_i^⊤γ_k + U_i,j, log(W_k)}.
Then, we can identify the ARF m_k using (<ref>) as m_k(μ,x) = 𝐄[Y_i,k|μ_k(X_i) = μ]. Note that we do not need to parametrize the distribution of U_i,j.
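For illustration only, given an estimate of γ_k from the censored log-wage regression, the last display suggests a simple plug-in estimate of the employment ARF as the employment share conditional on the estimated productivity index; the kernel, bandwidth and array names below are hypothetical.

import numpy as np

def employment_arf(mu_eval, mu_index, employed, h=0.25):
    # kernel estimate of m_k(mu) = E[Y | mu_k(X) = mu]: the share of workers in
    # state k whose estimated productivity index clears the log minimum wage
    w = np.exp(-0.5 * ((mu_index[:, None] - mu_eval[None, :]) / h) ** 2)
    return (w * employed[:, None]).sum(axis=0) / w.sum(axis=0)

# counterfactual policy component for the target state under a US$9 minimum wage
# (gamma_hat_0 and the covariate matrix X0 are assumed to be available):
# mu_post = X0 @ gamma_hat_0 - np.log(9.0)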
§.§.§ Tax Policy and Immigration
Changes to income tax rates may immediately affect tax revenue, but they may also change the composition of the population. For instance, high earners may choose to emigrate when facing higher taxes. This matters for welfare, as such high earners are highly mobile and pay a large share of taxes.[In 2016, the top 1% of households in the U.S. earned 16% of the total income while paying 25% of all federal taxes. However, their income, accumulated wealth and favorable immigration policies permit straightforward changes to residence status, making them very responsive to tax policy. See <https://doc-research.org/2019/01/global-mobility-wealthy-push-pull-factors/> for a policy overview.] (See <cit.> for a theoretical investigation and <cit.>, <cit.>; <cit.> and <cit.> for evidence on the effects of past changes to tax policies, including Danish and Spanish reforms).
To evaluate the effects of a decrease in tax rates in country 0 (e.g., U.K.) on high earners' immigration, we could follow <cit.> and model this as a discrete choice problem. A high earner i's preference, V_i,k, for living in country k depends on the average tax rate τ_k on the wage W_i the individual would face, and is specified as follows:
V_i,k = αlog(1-τ_k) + αlog(W_i) + Z_i^⊤β_k + U_i,k.
The first two terms represent the (concave) preferences over net-of-tax wages, Z_i'β_k captures heterogeneity of worker preferences for each country (which may depend on age, nationality, etc.), with Z_i denoting the observed characteristics of the individual i, and U_i,k represents an idiosyncratic Extreme Value Type 1 shock which is i.i.d. across individuals and regions. (Note that for high earners, the average tax rate is approximately equal to the marginal tax rate which is the same across all the high earners.) Then, individual i's decision to live in region k is represented by a binary indicator Y_i as follows:
Y_i = 1{V_i,k > max_j ≠ k V_i,j}.
To apply the synthetic decomposition method, we take the policy component for the individual as: for each k=0,1,...,K, and for each individual i in region k with X_i = (W_i,Z_i),
μ_k(X_i) = αlog(1-τ_k) + αlog(W_i) + Z_i^⊤β_k.
The target country is the U.K., and the policy of interest is lowering tax rates in the U.K. so that
μ^Γ_0(X_i) = αlog(1-τ_0^Γ) + αlog(W_i) + Z_i^⊤β_0,
for τ_0^Γ < τ_0. Therefore, the ARF for country k is identified as
m_k(μ, x) = exp(μ)/(exp(μ) + ∑_j ≠ kexp(μ_j(x))).
The matched group is constructed as in (<ref>) using the definitions of μ_0(x) and μ_0^Γ(x) above.
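As a concrete illustration of these formulas, the following sketch computes the logit ARF and the matched group for a hypothetical tax cut; the tax rates, preference parameters, and country set are invented for illustration only.

```python
# A small numerical sketch of the logit ARF above; all parameter values and
# country labels are hypothetical, chosen only to illustrate the formulas.
import numpy as np

rng = np.random.default_rng(1)
alpha, tau = 1.5, {"UK": 0.45, "FR": 0.50, "CH": 0.30}   # average tax rates
beta = {"UK": 0.2, "FR": 0.0, "CH": -0.3}                 # taste for each country
n = 5_000
logW = rng.normal(np.log(500_000), 0.3, size=n)           # high earners' wages
Z = rng.normal(size=n)                                     # observed tastes

def mu(country, tau_c):
    # Policy component mu_k(X_i) = alpha*log(1-tau_k) + alpha*log(W_i) + Z_i'beta_k
    return alpha * np.log(1 - tau_c) + alpha * logW + Z * beta[country]

def arf_uk(mu_uk):
    # m_UK(mu, x): logit probability of choosing the U.K. given the other countries
    denom = np.exp(mu_uk) + np.exp(mu("FR", tau["FR"])) + np.exp(mu("CH", tau["CH"]))
    return np.exp(mu_uk) / denom

mu_pre = mu("UK", tau["UK"])
mu_post = mu("UK", 0.40)                 # counterfactual: lower U.K. tax rate
# Matched group: individuals whose post-policy mu lies in the pre-policy support.
matched = (mu_post >= mu_pre.min()) & (mu_post <= mu_pre.max())
print("share choosing UK, pre-policy :", arf_uk(mu_pre).mean().round(3))
print("share choosing UK, post-policy:", arf_uk(mu_post)[matched].mean().round(3))
print("share of target population matched:", matched.mean().round(3))
```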
§.§ Extensions
§.§.§ Policy Prediction with Observed Time-Varying Aggregated Variables
In many empirical applications, we observe individuals over multiple time periods (either in panel data or in rotational cross-sectional data) and aggregate variables that affect individual outcomes. The aggregate variables may represent regional economic states and often contain a policy variable such as a change in the minimum wages or taxes. Here we show how our method applies to this case.
First, consider the generation of outcomes for populations k=0,1,...,K in periods t=1,...,T:
Y_it = g_k(μ_k(X_it;z_k,t,v_k,t),z_k,t,U_it), i ∈ N_k.
Here X_it denotes a vector of individual covariates, and v_k,t and z_k,t denote vectors of time-varying observed aggregate variables for population k. The policy sets the vector v_0,t to v_0^*. The policy component μ_k(X_it;z_k,t,v_k,t) is allowed to depend on the observed aggregate characteristics of region k, but it is required to exhibit individual-level variation through X_it.
For each region k=0,1,...,K, the ARF is written as
m_k,t(μ,x,z_k,t) = ∫ g_k(μ, z_k,t,u)dP_k,t(u | x,z_k,t),
for each (μ, x) on the support of (μ_k(X_it;z_k,t,v_k,t),X_it), where P_k,t(·| x,z) denotes the conditional distribution of U_it given X_it = x and z_k,t = z, for i ∈ N_k. We assume that there are not many time periods in the sample, and hence, any aggregate observed variables are regarded as “observed constants.”
Our main interest is in predicting the average outcome of Y_it for population 0, when the policy changes v_0,t into v_0^*. Then, our target parameter is defined as
θ_0,t(v_0^*) = ∫ m_0,t(μ_0(x;z_0,t,v_0^*),x,z_0,t) d P_0,t(x),
where P_0,t denotes the distribution of X_it in the target population. The quantity θ_0,t(v_0^*) represents the average outcome when the distribution of X_i,t and the value of the aggregate variables z_0,t are fixed at those at time t, and the policy changes the variable v_0,t into the counterfactual one v_0^*.
Let us see how our method applies in this setting. First, we take the matched group as
𝒳_0,t^Γ = { x ∈𝒳_0,t: μ_0(x;z_0,t,v_0^*) = μ_0(x̃;z_0,t,v_0,t), for some x̃∈𝒳_0,t},
where 𝒳_0,t denotes the support of X_it in the target population. The synthetic transferability condition is given as follows: there exists a weight vector 𝐰^*(v_0^*) such that for each x ∈𝒳_0,t, we have
m_0,t(μ_0(x;z_0,t,v_0^*),x,z_0,t)
= ∑_k=1^K m_k,t(μ_k,t(x;z_k,t,v_k,t),x,z_k,t) w_k^*(v_0^*).
(We make it explicit that the weight depends on the value of v_0^*.) We can identify the weights by minimizing ρ_t(𝐰) over 𝐰, where
ρ_t^2(𝐰) = ∫( M_0,t(x;z_0,t,v_0^*) - ∑_k=1^K M_k,t(x;z_k,t,v_k,t) w_k )^2 dP_0,t(x),
where for k=0,1,...,K and t=1,...,T, we define
M_k,t(x;z,v) = m_k,t(μ_k(x;z,v),x,z) 1{x ∈𝒳_k,t},
with 𝒳_k,t denoting the support of X_it for i ∈ N_k. Let 𝐰_0(v_0^*) be the minimizer of ρ_t(𝐰) over 𝐰∈Δ_K-1. Using the weight 𝐰_0(v_0^*), we can identify θ_0 as follows:
θ_0,t(v_0^*) = ∫_𝒳_0,t^Γ m_0,t(μ_0(x;z_0,t,v_0^*),x,z_0,t) dP_0,t(x)
+ ∑_k=1^K( ∫_𝒳_0,t∖𝒳_0,t^Γ m_k,t(μ_k(x;z_k,t,v_k,t),x,z_k,t) d P_0,t(x)) w_0,k(v_0^*).
The average effect of changing v_0 from v_0' to v_0^* is given by
θ_0,t(v_0^*) - θ_0,t(v_0').
When v_0' is chosen to be v_0,t in the target region, we can obtain θ_0,t(v_0,t) in two ways: first, by replacing v_0^* with v_0,t in (<ref>); second, by computing
θ_0,t(v_0,t) = ∫_𝒳_0,t m_0,t(μ_0(x;z_0,t,v_0'),x,z_0,t) dP_0,t(x).
If the estimates of θ_0,t(v_0,t) obtained in these two ways are close to each other, this suggests that the synthetic transferability condition is supported by the data.
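One possible implementation of this diagnostic, stated purely as a sketch with hypothetical inputs (arrays of ARF values evaluated on the target sample and a previously estimated weight vector), is the following.

```python
# A sketch of the diagnostic suggested above: compute the status-quo average
# outcome both through the decomposition (target ARF on the matched set,
# weighted source ARFs off it) and directly from the target-region ARF, then
# compare. All inputs are hypothetical arrays/vectors.
import numpy as np

def transferability_check(m0_status_quo, M_sources_status_quo, in_set, w_hat):
    via_decomposition = np.mean(m0_status_quo * in_set
                                + (M_sources_status_quo @ w_hat) * (1 - in_set))
    direct = np.mean(m0_status_quo)      # integrate m_0 over the full support
    return via_decomposition, direct, abs(via_decomposition - direct)
```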
§.§.§ Spillover of Policy Effects Across Regions
A policy in one region can often have a spillover effect on other regions. For example, the immigration of high-skilled workers in response to a change in tax policy in a target region (as in <cit.>) would affect the number of immigrants in source regions. We show that the situation with spillover effects can be accommodated in our proposal.
We consider two types of spillover effects. The first is the spillover effect of policies from the source regions on the target region. The spillover effect is something that has already happened and is reflected in the data at the time when the policymaker considers implementing a new policy on the target population. For instance, source countries with lower taxes have already received high-skilled immigrants from the target region. The spillover effect of source regions' policies on the target region's outcomes realizes through its impact on the exogenous variables X_i and U_i, as prescribed in the reduced form in (<ref>). As such, the spillover effect is entirely mediated through the variations in X_i, and its presence does not alter anything in our proposal, because it is among the many sources of exogenous variations in X_i which we can use to identify the ARF. On the other hand, if the spillover effect is an additional source of endogeneity (i.e., the correlation between X_i and U_i), we need to carefully search for an identification method using instrumental variables or resorting to a control function approach (e.g., <cit.>). Once the ARF is identified, this paper's proposal can be applied.
The second spillover effect is from the policy of the target population to other regions. This is a spillover effect that is not yet reflected in the data, and hence part of our counterfactual analysis of policy in the target population. For example, a decrease in tax rates in the target region (say, the U.K.) would induce immigration away from source regions (e.g., Spain). Our definition of the pre-policy population will then be the population that consists of people before the migration induced by the policy, and likewise the post-policy population will be the population that consists of people after the migration. Therefore, the policy effect, according to our definition, includes both the effect on the people who do not migrate as a consequence and the composition effect that arises due to the migration.
For example, suppose that the policy not only changes μ_0 into μ_0^Γ, but also alters the distribution P_0 into P_0 ∘ f^-1 for some map f. The latter change corresponds to changing X_i into f(X_i). Now, the post-policy prediction includes both the effects, so that we can take
θ_0 = ∫_f(𝒳_0) m_0(μ_0^Γ(x),x) d(P_0 ∘ f^-1)(x)
= ∫_𝒳_0 m_0(μ_0^Γ(f(x)), f(x)) dP_0(x).
Hence, by redefining the policy operator, Γ, we can study the effect of a policy that has a spillover effect through migration.[However, in contrast to the previous situations, we may need to estimate the “policy” Γ as it includes its composition effect through migration from the target region.]
§.§.§ Covariate-Dependent Weights
The synthetic transferability condition assumes that the weights are the same across different demographic groups, and this may be restrictive in some applications. For example, suppose that we have two source regions 1 and 2, where a high education group in region 1 is matched better with a high education group in the target region than region 2, whereas a low education group in region 2 is matched better with a low education group in the target region than in region 1. By allowing the weight to depend on the education indicator, we can accommodate such a situation flexibly.
Suppose that X_i = (X_i,1,X_i,2), where we denote the supports of X_i,1 and X_i,2 in regions k=0,1,...,K by 𝒳_k,1 and 𝒳_k,2 respectively. For each individual i who belongs to population k=0,1,...,K, we have
Y_i = g_k(μ_k(X_i,1),X_i,2, U_i), before the policy
Y_i = g_k(μ_k^Γ(X_i,1),X_i,2, U_i), after the policy.
We define a generalized version of the synthetic ARF: for a subvector x̃_2 of x_2, with x = (x_1,x_2),
m^𝗌𝗒𝗇(x; 𝐰) = ∑_k=1^K m_k (μ_k^Γ(x_1),x_2 )w_k(x̃_2),
where w_k(x̃_2) is a nonnegative function such that ∑_k=1^K w_k(x̃_2) = 1, and w_k(x̃_2) is the k-th entry of 𝐰(x̃_2). We denote by 𝒳̃_0,2 the support of the corresponding subvector X̃_i,2 of X_i,2 in the target population. In contrast to the previous case, the weight given to a source region k can vary across different people in the target region depending on the value of their covariate x̃_2. Hence, 𝐰 is a map from 𝒳̃_0,2 to Δ_K-1. Similarly as above, we obtain a counterfactual prediction for policy Γ_0 for the target region 0 as θ(𝐰) in (<ref>) with this redefined m^𝗌𝗒𝗇(x; 𝐰).
To motivate the problem of selecting the weight vector 𝐰, we introduce a generalized version of the transferability condition.
[Generalized Synthetic Transferability]
For some map 𝐰^*:𝒳̃_0,2→Δ_K-1,
m^𝗌𝗒𝗇(x; 𝐰^*) = m_0(μ^Γ(x_1),x_2), for all x = (x_1,x_2)∈𝒳_0.
The previous synthetic transferability condition is a special case of this condition. We now construct the optimal weight as follows. First, we can define
𝐰_0 = arg min_𝐰:𝒳̃_0,2→Δ_K-1ρ^2(𝐰),
and
ρ^2(𝐰) = ∫_𝒳_0^Γ( 𝐦(x)^⊤𝐰(x̃_2) - m_0(μ_0^Γ(x_1),x_2) )^2 dP_0(x),
where 𝐦(x) = [m_1(μ_1^Γ(x_1),x_2),...,m_K(μ_K^Γ(x_1),x_2)]^⊤. The quantity ρ(𝐰_0) is smaller than that in (<ref>), because the domain of the minimizers 𝐰 is larger now. This means that the covariate-dependent weight will exhibit a better fit than the previous weights.
One can obtain a prediction for the target population as θ(𝐰_0) using this weight, 𝐰_0(x̃_2). To obtain a characterization of this generalized weight, we first define
H(x̃_2) = ∫_𝒳_0^Γ𝐦(x)𝐦(x)^⊤ d P_0(x |x̃_2), and
𝐡(x̃_2) = ∫_𝒳_0^Γ𝐦(x) m_0(μ_0^Γ(x_1),x_2) dP_0(x |x̃_2),
where P_0(·|x̃_2) denotes the conditional distribution of X_i given X̃_i,2 = x̃_2 in the target population. We replace Assumption <ref> by the following assumption.
For each x̃_2 ∈𝒳̃_0,2, H(x̃_2) is invertible.
This assumption excludes the situation where X_i,1 = X_i. In other words, there must be variables in X_i that are excluded from the covariate X_i,1 that the weight is allowed to depend on. Then, it is not hard to see that
𝐰_0(x̃_2) = arg min_𝐰∈Δ_K-1( 𝐰 - H^-1(x̃_2) 𝐡(x̃_2) )^⊤ H(x̃_2) ( 𝐰 - H^-1(x̃_2) 𝐡(x̃_2) ).
Again, under Assumption <ref>, 𝐰_0 is uniquely determined. Hence under the generalized synthetic transferability condition, we have θ_0 = θ(𝐰_0). This approach is not very computationally costly when X̃_i,2 is a discrete random variable with its support having only a few points.
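The following sketch illustrates this construction when x̃_2 is a single binary indicator: the simplex-constrained quadratic problem is simply solved group by group. The data-generating process and array names are hypothetical.

```python
# Covariate-dependent weights with a binary x~_2 (say, an education dummy):
# solve the simplex-constrained quadratic problem separately within each group.
# The arrays m0 and M are hypothetical evaluations of the post-policy ARFs.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n, K = 1_000, 3
educ = rng.integers(0, 2, size=n)                  # x~_2: 0 = low, 1 = high education
M = rng.normal(size=(n, K))                        # m_k(mu_k^G(x_1), x_2), k = 1..K
m0 = M @ np.array([0.7, 0.3, 0.0]) * (educ == 1) \
   + M @ np.array([0.1, 0.2, 0.7]) * (educ == 0)   # target ARF differs by group

def solve_weights(rows):
    # argmin_w (w - H^-1 h)' H (w - H^-1 h) over the simplex, within one group
    H = M[rows].T @ M[rows] / rows.sum()
    h = M[rows].T @ m0[rows] / rows.sum()
    center = np.linalg.solve(H, h)
    obj = lambda w: (w - center) @ H @ (w - center)
    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1},)
    return minimize(obj, np.full(K, 1 / K), bounds=[(0, 1)] * K,
                    constraints=cons).x

for g in (0, 1):
    print(f"w_0(educ={g}) =", solve_weights(educ == g).round(3))
```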
§ ESTIMATION AND CONFIDENCE INTERVALS
§.§ Estimation
Let us first consider the estimation of 𝐰_0 and θ(𝐰_0). For 𝐰_0, we use the characterization (<ref>) and consider its sample counterpart. For each k=1,...,K, we first estimate the post-policy ARFs to obtain m̂_k (μ̂_k^Γ(x),x ) using the sample from the source population P_k. We let 𝒳̂_0^Γ be an estimate of the set 𝒳_0^Γ based on the sample from the target population, and construct the sample versions of H and 𝐡 as follows:
Ĥ = 1/n_0∑_i ∈ N_0𝐦̂(X_i)𝐦̂(X_i)^⊤ 1{X_i ∈𝒳̂_0^Γ}, and
ĥ = 1/n_0∑_i ∈ N_0𝐦̂(X_i) m̂_0(μ̂_0^Γ(X_i),X_i ) 1{X_i ∈𝒳̂_0^Γ},
where 𝐦̂(x) = [m̂_1(μ̂_1^Γ(x),x),...,m̂_K(μ̂_K^Γ(x),x)]^⊤. Note that each ARF, m̂_k(μ̂_k^Γ(·),·), is constructed using the sample from the source region k, whereas in constructing Ĥ and ĥ, it is evaluated at a data point X_i of the sample from the target region. Using these, we obtain
ŵ = arg min_𝐰∈Δ_K-1( 𝐰 - Ĥ^-1ĥ)^⊤Ĥ( 𝐰 - Ĥ^-1ĥ).
In the Supplemental Note, we show that ŵ is √(n_0)-consistent for 𝐰_0.[As we formally state later, we assume that the size of a random sample from each population is asymptotically comparable across the populations, i.e., there exists r_k > 0 such that n_k/n_0 → r_k as n_0,n_k →∞ for each k=1,...,K.]
Using the estimated weight, ŵ, we obtain the prediction for the target region as follows:
θ̂(ŵ) = 1/n_0∑_i ∈ N_0m̂_0(μ̂_0^Γ(X_i),X_i ) 1{X_i ∈𝒳̂_0^Γ} + 1/n_0∑_i ∈ N_0m̂^𝗌𝗒𝗇(X_i; ŵ) 1{X_i ∈𝒳_0 ∖𝒳̂_0^Γ},
where
m̂^𝗌𝗒𝗇(x; ŵ) = ∑_k=1^K m̂_k( μ̂_k^Γ(x),x) ŵ_k.
We will show below that, under regularity conditions, the estimator θ̂(ŵ) is √(n_0)-consistent.
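For concreteness, a minimal sketch of this two-step estimator is given below. It takes the first-step ARF estimates as given (they are represented by hypothetical input arrays) and solves the simplex-constrained quadratic problem with an off-the-shelf optimizer rather than a dedicated quadratic-programming routine.

```python
# Minimal sketch of the estimator above, assuming the first-step ARF estimates
# have already been evaluated on the target sample. Hypothetical inputs:
# m0_post[i] = m_0_hat(mu_0^G_hat(X_i), X_i), M_post[i, k] = m_k_hat(...),
# and in_set[i] = 1{X_i in the estimated set X_0^G}.
import numpy as np
from scipy.optimize import minimize

def estimate(m0_post, M_post, in_set):
    n0, K = M_post.shape
    q0 = m0_post * in_set                    # q_hat_{0,0}(W_i)
    Q = M_post * in_set[:, None]             # q_hat_{k,0}(W_i), k = 1..K
    H, h = Q.T @ Q / n0, Q.T @ q0 / n0       # H_hat and h_hat

    center = np.linalg.solve(H, h)
    obj = lambda w: (w - center) @ H @ (w - center)
    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1},)
    w_hat = minimize(obj, np.full(K, 1 / K), bounds=[(0, 1)] * K,
                     constraints=cons).x    # w_hat on the simplex

    # theta_hat(w_hat): target ARF on the matched set, synthetic ARF off it.
    theta_hat = np.mean(m0_post * in_set + (M_post @ w_hat) * (1 - in_set))
    return w_hat, theta_hat
```

In practice one may prefer a dedicated quadratic-programming solver for the simplex constraint, but the objective being minimized is the same.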
§.§ Confidence Set for 𝐰_0
Let us consider constructing confidence intervals for θ(𝐰_0). Since 𝐰_0 can take a value arbitrarily close to the boundary of the simplex Δ_K-1, it turns out that
√(n_0)(θ̂(ŵ) - θ(𝐰_0)) →_d ζ,
where ζ is a complicated non-Gaussian distribution that depends on whether 𝐰_0 lies in the interior of Δ_K-1 or on its boundary and, if on the boundary, on which part of the boundary 𝐰_0 is located. One might consider a naive bootstrap that first constructs a bootstrap counterpart of the quantity √(n_0)(θ̂(ŵ) - θ(𝐰_0)) and uses its bootstrap distribution in place of the asymptotic distribution, but this approach does not work. Such a failure of the bootstrap when the parameter is on the boundary was shown by <cit.>.
In this paper, we pursue an approach that does not require the researcher to find the details of the limiting distribution ζ, which may change depending on the specifications of the models and estimation methods. To construct the confidence set, we first formulate the identification of 𝐰_0 as a solution to a constrained optimization, and using a Kuhn-Tucker condition, formulate the identification in terms of a set of equality restrictions with a nuisance parameter that is constrained to a convex cone. The problem of asymptotic inference in such a setting has been studied in the literature (see, for example, <cit.>, <cit.>, <cit.>). Here, we follow the approach of <cit.> and <cit.> to construct a test statistic for the restrictions and invert it to form a confidence set for 𝐰_0. This approach is simple and does not involve tuning parameters often required in the problem of testing for inequality restrictions. Later, we show that this confidence set is uniformly asymptotically valid. This approach is generally applicable whenever we have √(n_0)-consistent estimators of H and 𝐡. This often follows from a wide range of estimators of the ARFs.
First, let us construct a confidence set for 𝐰_0. For this, we form an equality restriction using the Lagrangian of the constrained optimization in (<ref>):
ℒ(𝐰,λ̃, λ) = (𝐰 - H^-1𝐡)^⊤ H (𝐰 - H^-1𝐡) + λ̃(1- 𝐰^⊤1) - λ^⊤𝐰,
where λ̃ and λ are Lagrange multipliers.
By the Kuhn-Tucker condition and the strict convexity of the objective function, the necessary and sufficient conditions for 𝐰_0 ∈Δ_K-1 to be the unique minimizer of ρ(𝐰) are that for some λ̃∈𝐑 and λ∈Λ(𝐰_0),
H 𝐰_0 - 𝐡 + λ̃1 - λ = 0,
where 1 is the K × 1 vector of ones, 0 is the K × 1 vector of zeros, and
Λ(𝐰_0) = {λ∈𝐑^K: λ^⊤𝐰_0 = 0 and λ≤0}.
If we concentrate out λ̃ using the restrictions 𝐰_0^⊤1 = 1 and λ^⊤𝐰_0 = 0, we obtain that
𝐟(𝐰_0) - λ = 0, for some λ∈Λ(𝐰_0),
where
𝐟(𝐰_0) = H 𝐰_0 - 𝐡 - 𝐰_0^⊤(H 𝐰_0 - 𝐡) 1.
We form a test statistic that tests the restriction in (<ref>) as follows:
T(𝐰_0) = n_0 inf_λ∈Λ(𝐰_0)(𝐟̂(𝐰_0) - λ)^⊤Ω̂^-1( 𝐟̂(𝐰_0) - λ),
where 𝐟̂(𝐰_0) is the same as 𝐟(𝐰_0) except that H and 𝐡 are replaced by Ĥ and ĥ, and Ω̂ is a scale normalizer which we explain later.[The method of constructing a test statistic from a constrained optimization over Lagrangian multipliers appeared in <cit.>. The main difference here is that in our case, the inequality restrictions are crucial for the point-identification of 𝐰_0, whereas, in their case, the parameters are point-identified using only equality restrictions, and hence their use of quadratic approximation for constructing a critical value does not apply in our setting.]
As for critical values, we follow the approach of <cit.> and <cit.>. First, for each 𝐰∈Δ_K-1, we let λ̂(𝐰) be the solution λ in the minimization problem in (<ref>) with 𝐰_0 replaced by the generic 𝐰∈Δ_K-1. Then the confidence set for 𝐰_0 is given by
C̃_1-κ = {𝐰∈Δ_K-1: T(𝐰) ≤ĉ_1 - κ(𝐰)},
where ĉ_1-κ(𝐰) denotes the 1 - κ percentile of the χ^2 distribution with the degree of freedom equal to the number of zero entries in λ̂(𝐰). The test 1{T(𝐰) ≤ĉ_1 - κ(𝐰)} is essentially what <cit.> called the CC test in their paper. The main difference is that 𝐟(𝐰_0) is not necessarily the expectation of a random vector in our setting. Otherwise, our setting is much simpler than <cit.> because the inequality restrictions (as represented by the constraint λ∈Λ(𝐰_0)) do not involve any unknowns.
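To make the construction of C̃_1-κ concrete, the sketch below evaluates T(𝐰) and the associated critical value for a single candidate 𝐰, taking Ĥ, ĥ and Ω̂ as given. It exploits the fact that, on the cone Λ(𝐰), the nonzero entries of λ are confined to the coordinates where 𝐰 has zero entries; the function names and inputs are hypothetical.

```python
# Sketch: T(w) and its chi-square critical value with degrees of freedom equal
# to the number of zero entries of lambda_hat(w). Inputs are hypothetical.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

def cc_test(w, H_hat, h_hat, Omega_hat, n0, alpha=0.05, tol=1e-8):
    g = H_hat @ w - h_hat
    f = g - (w @ g) * np.ones_like(w)            # f_hat(w)
    Oinv = np.linalg.inv(Omega_hat)
    free = w <= tol                              # lambda_j can be nonzero only if w_j = 0

    if free.sum() == 0:
        lam_hat, T = np.zeros_like(w), float(n0 * f @ Oinv @ f)
    else:
        def obj(lam_free):
            lam = np.zeros_like(w)
            lam[free] = lam_free
            d = f - lam
            return n0 * d @ Oinv @ d
        res = minimize(obj, np.zeros(int(free.sum())),
                       bounds=[(None, 0)] * int(free.sum()))   # lambda <= 0
        lam_hat = np.zeros_like(w); lam_hat[free] = res.x
        T = float(res.fun)

    df = int(np.sum(np.abs(lam_hat) <= tol))     # zero entries of lambda_hat(w)
    crit = chi2.ppf(1 - alpha, df)
    return T, crit, T <= crit                    # True: w is kept in the confidence set
```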
We may be interested in checking whether data support the synthetic transferability condition in (<ref>). The confidence set C̃_1-κ for 𝐰_0 defined in (<ref>) can be used to test an implication from the condition. Consider testing the following implication from the synthetic transferability condition:
H_0 : There exists 𝐰∈Δ_K-1 such that m_0( μ_0^Γ(x),x) = m^𝗌𝗒𝗇(x; 𝐰) for all x ∈𝒳_0^Γ.
H_1 : H_0 is false.
We set κ in C̃_1-κ equal to the level α of the test and perform the following procedure: if C̃_1-α = ∅, we reject H_0 at level α; otherwise, we do not reject H_0 at level α.
Now, let us discuss the bootstrap construction of Ω̂. First, we construct a bootstrap sample as follows. Since each population has a different distribution, we need to resample (with replacement) from each region. For each region k=0,1,...,K, let {W_i^*: i ∈ N_k} be the bootstrap sample from the sample {W_i: i ∈ N_k}, where W_i = (Y_i, X_i^⊤)^⊤, i ∈ N, and
N = ⋃_k=0^K N_k.
Then for each k=0,1,...,K, we construct the bootstrap version of the extended post-policy ARF, m̂_k^*(μ̂_k^Γ *(·),·), using the bootstrap sample from the region k, and define
𝐦̂^*(·) = [m̂_1^*(μ̂_1^Γ *(·),·),...,m̂_K^*(μ̂_K^Γ *(·),·) ]^⊤,
and let
Ĥ^* = 1/n_0∑_i ∈ N_0𝐦̂^*(X_i^*)𝐦̂^*(X_i^*)^⊤1{X_i^* ∈𝒳̂_0^Γ *}, and
ĥ^* = 1/n_0∑_i ∈ N_0𝐦̂^*(X_i^*) m̂_0^*(μ̂_0^Γ *(X_i^*),X_i^* ) 1{X_i^* ∈𝒳̂_0^Γ *}.
Then, we define
γ̂^* = √(n_0)(Ĥ^* - Ĥ) ŵ - √(n_0)( ĥ^* - ĥ)
-√(n_0)ŵ^⊤(Ĥ^* - Ĥ) 1 + √(n_0)ŵ^⊤( ĥ^* - ĥ) 1.
To construct a scale normalizer Ω̂, we apply the truncation method of <cit.> as follows. For each k=1,...,K, we define
τ̂_k = √(n_0)max{|[Ĥŵ - ĥ]_k |, c_0 },
for some constant c_0>0 such as c_0 = 0.05, and construct a truncated version of γ̂^* as γ̃^* = [γ̃_k^*]_k=1^K, where
γ̃_k^* = τ̂_k if γ̂_k^* ≥τ̂_k; γ̂_k^* if -τ̂_k ≤γ̂_k^* ≤τ̂_k; and -τ̂_k if γ̂_k^* ≤ - τ̂_k,
and γ̂_k^* denotes the k-th entry of γ̂^*. Thus, we construct γ̃^* for each bootstrap sample b=1,...,B. Let us denote it by γ̃_b^*. Then we construct[As pointed out by <cit.>, without using the truncation, the confidence set C̃_1 - κ is still asymptotically valid, although it is conservative. In our simulations, the truncation does not make any meaningful difference in the results.]
Ω̂= 1/B∑_b=1^B γ̃_b^* γ̃_b^*⊤ - (1/B∑_b=1^B γ̃_b^*)(1/B∑_b=1^B γ̃_b^*)^⊤.
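A compact sketch of this truncation step, taking the bootstrap draws of γ̂^* as a given array, is as follows; the inputs are hypothetical placeholders.

```python
# Sketch of the truncation and the bootstrap scale matrix Omega_hat, given a
# B x K array gamma_star of bootstrap draws of gamma^* and the point estimates
# H_hat, h_hat, w_hat (all hypothetical inputs).
import numpy as np

def omega_hat(gamma_star, H_hat, h_hat, w_hat, n0, c0=0.05):
    tau = np.sqrt(n0) * np.maximum(np.abs(H_hat @ w_hat - h_hat), c0)  # tau_hat_k
    gamma_trunc = np.clip(gamma_star, -tau, tau)                        # truncate each draw
    centered = gamma_trunc - gamma_trunc.mean(axis=0)
    return centered.T @ centered / gamma_star.shape[0]                  # Omega_hat
```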
§.§ Confidence Intervals for θ(𝐰_0)
Now, let us construct the confidence interval for θ(𝐰_0). First, we can show that
n_0 (θ̂(𝐰_0) - θ(𝐰_0))^2/σ̂^2→_d χ^2(1),
for an appropriate scale normalizer. To construct σ̂, we use a bootstrap interquartile range as proposed by <cit.> in a different context. More specifically, we define
θ̂^*(𝐰) = 1/n_0∑_i ∈ N_0m̂_0^*(μ̂_0^Γ*(X_i^*),X_i^*) 1{X_i^* ∈𝒳̂_0^Γ *} +1/n_0∑_i ∈ N_0m̂^𝗌𝗒𝗇*(X_i^*; 𝐰) 1{X_i^* ∉𝒳̂_0^Γ *},
where
m̂^𝗌𝗒𝗇*(x; 𝐰) = ∑_k=1^K m̂_k^*(μ̂_k^Γ *(x),x ) w_k.
We let
T^* = √(n_0)(θ̂^*(ŵ) - θ̂(ŵ)).
We read the 0.75 and 0.25 quantiles of the bootstrap distribution of T^* across the bootstrap samples b = 1,...,B, and denote them by q̂_0.75 and q̂_0.25, respectively. Define
σ̂= q̂_0.75 - q̂_0.25/z_0.75 - z_0.25,
where z_0.75 and z_0.25 are the 0.75- and 0.25-quantiles of N(0,1).
Define
T̂(𝐰, θ) = √(n_0) (θ̂(𝐰) - θ)/σ̂.
We construct the (1-α)-level confidence interval using the Bonferroni approach as follows:
C_1- α = {θ∈Θ: inf_𝐰∈C̃_1- κT̂^2(𝐰, θ) ≤ c_1 - α + κ(1) },
where κ>0 is a small constant, such as κ = 0.005, and c_1 - α + κ(1) denotes the (1 - α + κ)-quantile of the χ^2(1) distribution.
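The following sketch assembles the interval from these ingredients: the bootstrap interquartile-range scale σ̂ and the Bonferroni step over the weights retained in C̃_1-κ. It reports the convex hull of the union of the per-𝐰 intervals, and all inputs are hypothetical placeholders.

```python
# Sketch of the Bonferroni confidence interval, assuming we already have
# theta_hat(w) as a callable, bootstrap draws theta_star (evaluated at w_hat),
# the point value theta_at_w_hat, and the grid of w's surviving C~_{1-kappa}.
import numpy as np
from scipy.stats import norm, chi2

def conf_interval(theta_hat_fn, w_grid_kept, theta_star, theta_at_w_hat,
                  n0, alpha=0.05, kappa=0.005):
    # sigma_hat from the bootstrap IQR of T* = sqrt(n0)*(theta* - theta_hat)
    T_star = np.sqrt(n0) * (np.asarray(theta_star) - theta_at_w_hat)
    q75, q25 = np.quantile(T_star, [0.75, 0.25])
    sigma_hat = (q75 - q25) / (norm.ppf(0.75) - norm.ppf(0.25))

    half = np.sqrt(chi2.ppf(1 - alpha + kappa, df=1)) * sigma_hat / np.sqrt(n0)
    lo, hi = np.inf, -np.inf
    for w in w_grid_kept:                    # union of intervals theta_hat(w) +/- half
        th = theta_hat_fn(w)
        lo, hi = min(lo, th - half), max(hi, th + half)
    return lo, hi
```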
§.§ Uniform Asymptotic Validity
We summarize the conditions that we use to establish the uniform asymptotic validity of the confidence set C_1- α. Here, we state the conditions verbally. The formal statements and the proof are found in the Supplemental Note.
(i) The post-policy ARFs in the target and source populations have finite (4+δ)-th moments uniformly over P.
(ii) The estimated post-policy ARFs and their bootstrap versions admit an asymptotic linear representation uniformly over P, with the influence functions having finite (4+δ)-th moments uniformly over P.
The moment condition is a technical condition that is often used in asymptotic inference. The asymptotic linear representation is often part of the proofs that show asymptotic normality of an estimator. Its derivation is standard in many examples.
The matrix H and the population version of Ω̂ have minimum eigenvalues bounded from below uniformly over n and P.
This assumption requires that the post-policy ARFs are not redundant. As mentioned before, we can relax this assumption once we modify the procedure. Details are found in the Supplemental Note.
For each k=0,1,...,K, there exists a constant r_k >0 such that n_k/n_0 → r_k, as n_0, n_k →∞.
Under these conditions, we can show that the confidence interval C_1- α is asymptotically valid uniformly over P.
Suppose that Assumptions <ref>-<ref> hold. Then, for each α∈ (0,1), the confidence interval C_1- α is asymptotically valid uniformly over P.
The proof of the theorem is found in the Supplemental Note.
§.§ Monte Carlo Simulations
In the Supplemental Note, we present Monte Carlo simulations exploring the finite sample properties of our inference procedure.
We consider a total of eight exercises: two specifications for the ARF, two amounts of overlap in the support of the policy variable between the target and source regions (small, 50%, or large, 90%), and two sample sizes (n_0 = 500, 1000). The ARF specifications differ in: (i) whether 𝐰_0 lies in the interior or on the boundary of the simplex, and (ii) the functional form of the relationship between the outcome and the policy variable.
The results for the coverage probability and the average length of the confidence interval are shown in Table <ref>, while the finite sample properties of the estimators ŵ and θ̂_0(𝐰̂) are shown in Table <ref>. Across specifications, inference for the target parameter θ_0 is typically conservative, as seen in Table <ref>: the empirical coverage probabilities are above the 95% nominal level for all sample sizes and specifications, and close to 100% in most cases. Consistent with our asymptotic results, the average length of the confidence interval decreases as the sample size grows across specifications and inference approaches. The average length of the confidence interval is smaller when there is a larger overlap between the support of the policy variable in the target region and the source regions. In this case, there is more information from the target region that can be used for the identification and estimation of our target parameter.
Finally, our estimators for θ_0 and 𝐰_0 perform very well pointwise: the root mean square errors (RMSE) of θ̂ and ŵ are small. Furthermore, the average bias and variance of θ̂(ŵ) across simulations are close to 0, suggesting that our estimator is close to the true values even with moderate sample sizes.
§ EMPIRICAL APPLICATION: MINIMUM WAGES AND LABOR SUPPLY
§.§ Background
Minimum wages have been among the most studied and debated policies for the labor market, spurring an immense literature in economics. The predominant paradigm in empirical work is to study their effects on employment or other outcomes by leveraging their state-level variation. This includes difference-in-difference designs with Two-Way Fixed Effects models (which <cit.> summarizes as the workhorse approach), synthetic control (see <cit.> for extensive discussions), decomposition methods (<cit.>), cross-border comparisons (<cit.>), among others.
While this literature can evaluate minimum wage increases that have already been implemented, it is by and large inappropriate for predicting the effects of policies yet to occur, including increases in minimum wages beyond the support of historical variation. Indeed, even simple theoretical models predict highly nonlinear effects of minimum wages (e.g., <cit.>).[This is best summarized by <cit.> who writes in a recent review that, “even if one has a strong view of what the U.S. literature says about the employment effects of past minimum wage increases, this may provide much less guidance in projecting the consequences of much larger minimum wage increases than those studied in the prior literature. Predicting the effects of minimum wage increases of many dollars, based on research studying much smaller increases, is inherently risky for the usual statistical reasons. But the problem is potentially exacerbated because the reduced form estimates on which the prior literature is based may fail to capture changes in underlying behavior as high minimum wages affect a far greater share of workers.” (p.294)] The synthetic decomposition method presented in this paper is able to address such policy questions.
As foreshadowed in Section <ref>, our empirical illustration studies a (counterfactual) increase in the minimum wage in Texas beyond federally mandated levels and how it affects teenage employment. The focus on Texas, while an illustration, is of both academic and policy interest. Texas is the largest state in the U.S. with minimum wages set at the federal level (constant since 2009), and raising the minimum wage has also been a policy proposal of the 2022 Democratic gubernatorial candidate. We illustrate our method by investigating the effects of an increase in the minimum wage in Texas from US$7.25 to US$9.00 on teenage employment. We follow the structural labor economics literature in basing such predictions on an equilibrium search and matching model of labor markets (in particular, <cit.> and <cit.>). However, in contrast to such papers, we construct a synthetic comparison using other states beyond Texas where the policy has been observed (e.g., Oregon, Washington, etc.).
Our empirical application suits the synthetic decomposition method very well. There are two main sources of heterogeneity across regions. First, the population characteristics differ. For example, states are heterogeneous in workers' education, age and skill, among others, all of which may matter for the effects of minimum wages (<cit.>, and seen in the data below). More importantly, the causal structure g_0 for the target region could be very different from those of other states, g_k, even neighboring states. Intuitively, even if California and other states had similar characteristics to Texas, they may have very different labor market environments (e.g., state income taxation, different labor laws, etc.). In fact, <cit.> argues that structural parameters are estimated to be very different across submarkets. The synthetic decomposition method respects such heterogeneity across regions. It assigns weights to those source states to form the best comparison units in terms of their causal structures.
§.§ A Two-Sided Search Model of Labor Markets with Minimum Wages
We follow <cit.> and consider the following static model of two-sided matching between firms and workers. For each population k=0,1,...,K, we let N_k be the total measure of workers and J_k the total measure of firms in population k. Each worker-firm pair (i,j) is drawn, and then for each worker i, (R_i,K_i) is drawn, where R_i is worker i's reservation wage and K_i is worker i's search cost. Each worker-firm pair receives a match offer at contact rate λ_k >0. The timing of the events for a worker-firm pair given the offer of a match proceeds as follows.
* The worker decides to search for a match with a firm. Once the worker decides to search, the worker pays the search cost K_i and receives an offer of match with a firm j with probability λ_k>0. If the worker decides not to search for a firm, the worker receives zero payoff.
* The worker decides whether to accept the offer of the match or not. If the worker rejects the offer of the match, the worker receives a reservation wage R_i. If the worker accepts the offer, the worker-firm pair (i,j) jointly produces output M_i,j.
* Once the output M_i,j is realized the firm and the worker enter a Nash bargaining to determine the wage, W_i,j, under the minimum wage constraint.
* After the wage W_i,j is determined, the firm decides whether to retain the worker or not. If the firm retains the worker, the firm obtains the profit M_i,j - W_i,j and the worker receives the wage W_i,j. If the firm does not retain the worker, the firm and the worker receive the zero payoff.
* After these events are completed, the econometrician observes a random sample of the workers, their employment status and wages, and their observed characteristics.
To close the model, we need to state the equilibrium constraints. First, it is profitable for worker i to accept the offer from the match with firm j if and only if
𝐄_k[1{M_i,j≥ W_i,j} W_i,j| R_i, K_i] ≥ R_i,
where the conditional expectation 𝐄_k is with respect to the distribution in population k. Then, it is profitable for the worker to search for a job if and only if
λ_k 𝐄_k[max{1{M_i,j≥ W_i,j} W_i,j,R_i}| R_i,K_i] ≥ K_i.
For the firm, it is profitable to retain the worker if and only if M_i,j≥ W_i,j. Finally, we assume that the contact rate λ_k is endogenously determined as a fixed point as follows:
λ_k = ℳ_k(λ_k, J_k, N_k)/(ζ_k(λ_k) N_k),
where ℳ_k(λ_k, J_k, N_k) denotes the matching technology, representing the total measure of matched workers, and
ζ_k(λ_k) = P{λ_k 𝐄_k[max{1{M_i,j≥ W_i,j} W_i,j,R_i}| R_i,K_i] ≥ K_i },
i.e., the probability of the worker deciding to search for a firm. Hence, ζ_k(λ_k) N_k represents the total measure of workers searching for a match with a firm.
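For intuition, the fixed point in λ_k can be computed by simple iteration once the matching technology and the participation probability are specified; the Cobb-Douglas matching function and the participation function used in the sketch below are hypothetical choices made only for illustration.

```python
# Sketch: solve the equilibrium contact rate as a fixed point. The matching
# technology and the participation probability zeta_k are hypothetical
# parametric choices used purely for illustration.
import numpy as np

def solve_contact_rate(J_k, N_k, zeta_k, A=1.0, eta=0.5, tol=1e-10, max_iter=1_000):
    # Iterate lambda <- M_k(lambda, J_k, N_k) / (zeta_k(lambda) * N_k)
    matching = lambda lam, J, N: A * (J ** eta) * ((zeta_k(lam) * N) ** (1 - eta))
    lam = 0.5
    for _ in range(max_iter):
        lam_new = matching(lam, J_k, N_k) / (zeta_k(lam) * N_k)
        if abs(lam_new - lam) < tol:
            break
        lam = min(lam_new, 1.0)          # keep the contact rate in (0, 1]
    return lam

# Example: participation increases with the contact rate.
print(solve_contact_rate(J_k=0.8, N_k=1.0, zeta_k=lambda lam: 0.4 + 0.5 * lam))
```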
As for the wage determination through Nash bargaining, we follow <cit.> and obtain the following wage generation: for M_i,j≥W_k,
W_i,j = max{β_k M_i,j,R_i,W_k},
where β_k ∈ (0,1) is a parameter that represents worker i's bargaining strength. We also follow <cit.> in simplifying the procedure by assuming that (<ref>) is satisfied for all the workers such that R_i ≤W_k. Then the wage is generated only for those workers with R_i ≤W_k, and hence, the wage generation is simplified as follows: for M_i,j≥W_k,
W_i,j = max{β_k M_i,j,W_k}.
The employment indicator Y_i,j∈{0,1} is also given as follows:
Y_i,j = 1{M_i,j≥ W_i,j} = 1{M_i,j≥W_k},
where the last equality follows from (<ref>) and β_k ∈ (0,1).
Our counterfactual policy is to set the minimum wage W_k to W_k^Γ. We aim to predict the employment rate for population 0 (Texas) after its minimum wage changes to W_0^Γ (US$9).
To build up an empirical model, for population k, we specify the match output M_i,j as follows:
log M_i,j = X_i^⊤γ_k + U_i,j.
where X_i denotes the observed characteristics of worker i, U_i,j represents a match component that is not observed by the econometrician, and γ_k is a parameter vector. We assume that the U_i,j's are i.i.d. across workers i ∈ N_k and firms j, independent of (X_i, W_k), and follow the distribution with the CDF F_k. Unlike <cit.>, we leave F_k as nonparametrically specified. Since we do not restrict U_i,j to have mean zero, we lose no generality by assuming that the vector X_i does not include an intercept term.
It follows from this parametrization and (<ref>) that:
Y_i,j = 1{M_i,j≥W_k} = 1{X_i^⊤γ_k + U_i,j≥logW_k}.
In order to check the applicability of the synthetic decomposition method, we consider the support conditions required in this setting. First, we define our policy components
μ_k(X_i,W_k) = X_i^⊤γ_k - logW_k, and
μ_k^Γ(X_i,W_k) = X_i^⊤γ_k - logW_k^Γ.
We take
𝒳_0^Γ = {x ∈𝒳_0: x^⊤γ_0 - logW_0^Γ = x̃^⊤γ_0 -logW_0, for some x̃∈𝒳_0},
where we denote the support of X_i in the target population by 𝒳_0. The set 𝒳_0^Γ represents the set of characteristics for people who have a match for comparison after the policy. The support conditions required can be summarized as follows.
(a) The support of X_i^⊤γ_0 - logW_0 and that of X_i^⊤γ_0 - logW_0^Γ overlap in the target population (Assumption <ref>), so that the set 𝒳_0^Γ is not empty.
(b) The support of X_i in each source population k=1,...,K contains the set 𝒳_0.
First, note that due to the independence between U_i,j's and X_i's, the ARF m_k(μ,x) does not depend on the second argument, and we simply write m_k(μ). In our case, the pre-policy and post-policy ARFs take the following form:
m_k(μ_k(X_i)) = ∫ g_k(μ_k(X_i),u) dF_k(u), and
m_k(μ_k^Γ(X_i)) = ∫ g_k(μ_k^Γ(X_i),u) dF_k(u),
where
g_k(μ,u) = 1{μ+u ≥ 0}.
For each worker i ∈ N_k, we let j(i) denote the firm matched with this worker, and simply write Y_i = Y_i,j(i) and W_i = W_i,j(i). Since (X_i,W_k) and U_i,j are independent, we have
m_k(μ) = 𝐄_k[ Y_i |μ_k(X_i) = μ].
Then, the synthetic prediction is obtained by taking the weights w_k's which minimize the L^2-distance between
m_0(μ_0^Γ(x)) and ∑_k=1^K m_k(μ_k^Γ(x))w_k,
over the set 𝒳^Γ_0, on which the support of X_i^⊤γ_0 - logW_0 and that of X_i^⊤γ_0 - logW_0^Γ overlap.
§.§ Empirical Implementation
We use the dataset from <cit.> for our exercises, which is drawn from the Current Population Survey (CPS), a repeated cross-section. Following the authors, among many others, we focus on teenagers and use their individual-level employment status as the outcome, Y_i,j∈{0,1}, and individual-level characteristics as X_i (age, sex, marital status, whether they are Hispanic, and whether they are black or another non-white race). We further observe wages for the employed sample, W_i,j, and each state's minimum wage. Our sample is restricted to 2002 to 2014, so that it does not start during the 2001 recession (see <cit.> for a discussion).
The counterfactual sets the minimum wage in Texas (US$7.25 in 2014) to US$9 in 2014 (i.e., US$11.42 in 2022 dollars). Our parameter of interest, θ_0, is the average teenage employment in Texas in 2014 (for Texas' 2014 population) had the minimum wage been US$9. We compare this to teenage employment in Texas in 2014 under the prevailing minimum wage.
To make this comparison, we consider two sets of source regions. First, we use the states with the highest prevailing minimum wages within our sample, which are California, Connecticut, D.C. and Washington.[Vermont also satisfies this restriction, but we drop it as its sample is too small to provide meaningful variation for estimation of Vermont-specific parameters.] We note that the support conditions in (<ref>) can accommodate more states, because they are conditions on the support of the ARF and not on the policy itself. Hence, in a second exercise, we further include Florida (a large state close to Texas), Louisiana (a neighboring state) and Oregon (another state satisfying the first restriction). For illustration purposes, we use a 10% random sample of the data for each region. This shows the performance of our estimator with reasonably standard sample sizes.
Summary statistics are provided in Table <ref>, while the variation in minimum wages across all source and target regions is shown in Figure <ref>. In terms of demographics (e.g., the share of teenage Hispanics and African-Americans), Texas most resembles California. However, it is more similar to Florida and Louisiana in terms of average teenage employment and wages. On the other hand, Louisiana's minimum wage policies are very similar to Texas', which may provide less information on such changes.
We estimate the ARFs for the model in two steps. First, we estimate γ_k using the pairwise differencing method of <cit.>. Then, we plug the estimates into μ_k(X_i) and estimate m_k nonparametrically using kernel regression with a cross-validated bandwidth. Details are provided in the Supplemental Note. We use B=200 bootstrap draws, set κ = 0.005 and α = 0.05. We draw a fine grid of 𝐰 uniformly over the simplex, using a procedure based on <cit.>.[To construct each gridpoint, we first draw a vector of dimension K-1, where each element is drawn i.i.d. from the uniform distribution with support [0,1]. Then, we include 0 and 1 in that drawn vector, which is then sorted. The grid point 𝐰 is the vector of differences across adjacent elements of the sorted vector (these differences are all nonnegative and sum to 1 by construction).]
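The grid over the simplex described in the footnote can be generated as follows; this sketch implements exactly that uniform-spacings construction.

```python
# Uniform draws over the simplex via spacings: draw K-1 iid U[0,1] variables,
# append 0 and 1, sort, and take adjacent differences.
import numpy as np

def draw_simplex_grid(K, n_grid, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=(n_grid, K - 1))
    u = np.sort(np.concatenate([np.zeros((n_grid, 1)), u,
                                np.ones((n_grid, 1))], axis=1), axis=1)
    return np.diff(u, axis=1)            # each row is nonnegative and sums to one

w_grid = draw_simplex_grid(K=4, n_grid=5)
print(w_grid.round(3), w_grid.sum(axis=1))
```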
§.§ Results
Table <ref> presents the results of the estimation. We present two specifications per exercise, which only differ in whether they accommodate aggregate variables: the share of teenagers in the state population and the average unemployment in the state.
Our estimates suggest that an increase in the minimum wage decreases predicted average (teenage) employment: our estimates of θ_0 and the upper bounds of their associated confidence intervals are all below the observed employment rate of 0.292. In particular, counterfactual employment is estimated to be between 0.186 and 0.195, implying a decrease in average (teenage) employment of 9 to 11 percentage points. This is robust across specifications and consistent with the labor economics literature finding such negative effects (see <cit.> for a review). In terms of magnitudes, it is also very similar to the findings in <cit.> for a similar proportional increase in minimum wages from US$5 to US$6 – see his Figure 4.
Our synthetic comparison is predominantly based on California, D.C., Florida and Washington. This seems intuitive, as California best approximates the demographics of Texas. However, our estimates also suggest that accounting for common shocks/aggregate variables is important. Absent state-specific economic trends (here, the average teenage share in the population and the average unemployment rate), we would have estimated the effects of minimum wages on employment to be about 1 percentage point lower, thereby underestimating their negative effects. The aggregate variables also matter for the weights given to source regions: because state-level variables change the model's causal structure, as well as the characteristics of those states, there is no reason why each region would remain equally comparable to Texas with and without them. In fact, we find that California receives a lower weight when such variables are included. This is because its state unemployment levels are much higher than Texas's, which, in turn, are more similar to Washington's.
§ CONCLUSION
In this paper, we propose a novel way to utilize data from other populations to generate counterfactual predictions for a target population, when we do not have enough data for the latter. We explore ways to utilize data from other populations (“source populations”), motivated by a synthetic transferability condition. This hypothesis generalizes existing invariance conditions for extrapolation of causal effects and allows us to build predictions based on a synthetic causal structure, chosen to be as close as possible to the target ARF under a certain metric. Our approach is quite general and applies to various policy settings where the researcher may have multiple source populations, regardless of how the reduced forms are originated structurally.
There are further extensions that one can explore from this research. First, it is possible that, just as in synthetic control methods, using many source populations may cause overfitting. As in synthetic control, a judicious selection of source populations based on domain knowledge of the application context is important in practice. Decision-theoretic guidance for this selection would be helpful, although, to the best of our knowledge, most of the literature focuses on decision settings with a single population. Second, it would be useful to statistically gauge the plausibility of the synthetic transferability condition. For this, we may need to sacrifice the generality of this paper's setting and make use of further restrictions on the ARFs, such as continuity or shape constraints, depending on the application of focus. Finally, the current paper has assumed that the policy is known to the researcher. In practice, however, the precise form of the policy may be unknown: the researcher may face a range of policies under consideration, or may not have precise knowledge of how the policy alters the reduced form, and may need to estimate it using additional data. This question seems relevant in practice.
Supplemental Note to “Synthetic Decomposition for Counterfactual Predictions”
Nathan Canen and Kyungchul Song
University of Houston and University of British Columbia
This supplemental note provides the proof of the asymptotic validity of the inference procedure proposed in <cit.>, as well as details on the Monte Carlo simulations and the empirical application.
§ UNIFORM ASYMPTOTIC VALIDITY
§.§ Assumptions and Results
Let us first introduce conditions that ensure uniform asymptotic validity of the confidence intervals, C_1 - α, defined in (<ref>). Let 𝒫 be the space of probability distributions that satisfy Assumptions <ref>-<ref> below. From here on, we make explicit the dependence of 𝐰_0, θ(𝐰_0), and Ω on P ∈𝒫 by rewriting them as 𝐰_P, θ_P(𝐰_P) and Ω_P. Similarly we write H_P and 𝐡_P instead of H and 𝐡, and write μ_k,P, μ_k,P^Γ and m_k,P instead of μ_k, μ_k^Γ, and m_k.
The nonstandard aspect of uniform asymptotic validity in our setting comes from the fact that √(n_0)(ŵ - 𝐰_P) exhibits discontinuity in its pointwise asymptotic distribution. Hence, our proof focuses on dealing with this aspect, using high level conditions for other aspects that can be handled using standard arguments.
For each k=0,1,...,K, there exists a constant r_k >0 such that n_k/n_0 → r_k, as n_k, n_0 →∞.
Assumption <ref> says that the sample size from each source population is not asymptotically negligible relative to the sample size from the target population.
For each k=0,1,...,K, there exists δ>0 such that
sup_P ∈𝒫𝐄_P[ |m_k,P(μ_k,P(X_i), X_i)|^4+ δ] < ∞ and sup_P ∈𝒫𝐄_P[ |m_k,P(μ_k,P^Γ(X_i), X_i)|^4+ δ] < ∞.
Assumption <ref> requires that the ARFs have (4+δ)-th moments bounded uniformly over P ∈𝒫.
There exists η>0 such that for all n ≥ 1,
inf_P ∈𝒫λ_min(H_P) > η,
where λ_min(H_P) denotes the smallest eigenvalue of H_P.
Assumption <ref> requires that the matrix H_P has eigenvalues bounded away from zero uniformly over P ∈𝒫 and over n ≥ 1. The assumption excludes a setting where 𝐰_P is weakly identified. Later we will discuss how this assumption can be relaxed.
Recall the definition W_i = (Y_i, X_i^⊤)^⊤. For each k = 0,1,...,K, let us define
q_k,0,P(W_i) = m_k,P(μ_k,P^Γ(X_i),X_i ) 1{X_i ∈𝒳_0^Γ} and
q̂_k,0(W_i) = m̂_k(μ̂_k^Γ(X_i),X_i ) 1{X_i ∈𝒳̂_0^Γ}.
Similarly, we define q_k,1,P(W_i) and q̂_k,1(W_i) to be the same as q_k,0,P(W_i) and q̂_k,0(W_i) except that 1{X_i ∈𝒳_0^Γ} and 1{X_i ∈𝒳̂_0^Γ} are replaced by 1{X_i ∉𝒳_0^Γ} and 1{X_i ∉𝒳̂_0^Γ} respectively. The following assumption requires the asymptotic linear representation of the estimated ARFs.
For any sequence of random vectors Z_n_0 and W_n_0 in 𝐑^d, n_0 ≥ 1, we denote
Z_n_0 = W_n_0 + o_𝒫(1),
if for each ϵ>0,
lim sup_n_0 →∞sup_P ∈𝒫 P{ Z_n_0 - W_n_0 > ϵ} = 0.
Suppose that for each k=0,1,...,K, ℓ=0,1, φ_k,ℓ,P(·) is equal to q_k,ℓ,P(·) or a constant function at one. Then, for each j,k=0,1,...,K, ℓ=0,1, as n_0 →∞,
1/√(n_0)∑_i ∈ N_0(q̂_j,ℓ(W_i) - q_j,ℓ,P(W_i))φ_k,ℓ,P(W_i) = 1/√(n_j)∑_i ∈ N_jψ_j,ℓ,P(W_i;φ_k,ℓ,P)+ o_𝒫(1),
1/√(n_0)∑_i ∈ N_0(q̂_j,ℓ(W_i) - q_j,ℓ,P(W_i))(q̂_k,ℓ(W_i) - q_k,ℓ,P(W_i)) = o_𝒫(1),
where ψ_j,ℓ,P(W_i;φ_k,ℓ,P) is a mean zero random variable such that for some δ>0,
sup_P ∈𝒫𝐄_P[|ψ_j,ℓ,P(W_i;φ_k,ℓ,P)|^4 + δ] < ∞,
for all k=0,1,...,K and ℓ = 0,1.
Note that the estimation error in q̂_j,ℓ(·) comes from the sample in region j, whereas the summation is over the sample in region 0. The influence function is driven by the randomness in the estimation error q̂_j,ℓ(·) - q_j,ℓ,P(·). Similarly, we make the following assumption for the bootstrap version of the estimators.
Suppose that for each k=0,1,...,K, ℓ=0,1, (φ̂_k,ℓ(·),φ_k,ℓ,P(·)) is equal to (q̂_k,ℓ(·),q_k,ℓ,P(·)) or a pair of constant functions at one. Then, for each j,k=0,1,...,K, ℓ=0,1, the following statements hold.
(i) As n_0 →∞,
1/√(n_0)∑_i ∈ N_0(q̂_j,ℓ^*(W_i^*) - q̂_j,ℓ(W_i^*))φ̂_k,ℓ(W_i^*) = 1/√(n_j)∑_i ∈ N_jψ̂_j,ℓ,P(W_i^*;φ_k,ℓ,P) + o_𝒫(1),
1/√(n_0)∑_i ∈ N_0(q̂_j,ℓ^*(W_i^*) - q̂_j,ℓ(W_i^*))(q̂_k,ℓ^*(W_i^*) - q̂_k,ℓ(W_i^*)) = o_𝒫(1),
where
ψ̂_j,ℓ,P(W_i^*;φ_k,ℓ,P) = ψ_j,ℓ,P(W_i^*;φ_k,ℓ,P) - 1/n_j∑_i ∈ N_jψ_j,ℓ,P(W_i;φ_k,ℓ,P),
and ψ_j,ℓ,P(·;φ_k,ℓ,P) is the influence function in Assumption <ref>.
(ii) As n_0 →∞,
1/√(n_0)∑_i ∈ N_0(q̂_j,ℓ(W_i^*) - q̂_j,ℓ(W_i))(q̂_k,ℓ(W_i^*) - q_k,ℓ,P(W_i^*)) = o_𝒫(1),
1/√(n_0)∑_i ∈ N_0(q̂_j,ℓ(W_i^*) - q̂_j,ℓ(W_i))(q̂_k,ℓ(W_i) - q_k,ℓ,P(W_i)) = o_𝒫(1), and
1/√(n_0)∑_i ∈ N_0(q̂_j,ℓ^*(W_i^*) q̂_k,ℓ^*(W_i^*) - q_j,ℓ,P(W_i) q_k,ℓ,P(W_i)) = o_𝒫(1).
Define
Ω_n,P = 1/n_0∑_i ∈ N𝐄_P[ ψ̃_i,Pψ̃_i,P^⊤],
where ψ̃_i,P = Ψ_i,P𝐰_P - ψ_i,P and Ψ_i,P and ψ_i,P are defined in Lemma <ref> below. Inspection of Ω_n,P shows that it depends on n only through n_k/n_0, k=1,...,K, and depends on the ratios continuously. Let Ω_P be the same as Ω_n,P with n_k/n_0 replaced by r_k, for k=1,...,K, where r_k's are positive constants in Assumption <ref>. Then, it is not hard to see that from Assumption <ref>,
sup_P ∈𝒫Ω_n,P - Ω_P→ 0,
as n_0 →∞.
The following theorem shows that the estimators ŵ and θ̂(ŵ) are √(n_0)-consistent for 𝐰_P and θ_P(𝐰_P) uniformly over P ∈𝒫.
Suppose that Assumptions <ref>-<ref> hold and that inf_P ∈𝒫λ_min(Ω_P) > 0. Then,
lim_M ↑∞lim sup_n_0 →∞sup_P ∈𝒫 P{√(n_0)‖ŵ - 𝐰_P‖ > M } = 0.
However, as noted earlier, depending on the sequence of probabilities in 𝒫, √(n_0)(ŵ - 𝐰_P) can be asymptotically non-normal, and so can √(n_0)(θ̂(ŵ) - θ_P(𝐰_P)) as a consequence. Nevertheless, the confidence interval C_1-α we propose in the main text turns out to be uniformly asymptotically valid as the following theorem shows.
Suppose that Assumptions <ref>-<ref> hold. Then, for each α∈ (0,1),
lim inf_n_0 →∞inf_P ∈𝒫 P{θ_P(𝐰_P) ∈ C_1-α}≥ 1 - α.
The proofs of these results are presented in the next section.
§.§ Proofs
§.§.§ Preliminary Results
We begin with auxiliary results on the rejection probability of a test involving the squared residual from projecting an asymptotically normal random vector onto a polyhedral cone. The main preliminary result is Lemma <ref>. This is the result we use later to establish the uniform asymptotic validity of the confidence set for 𝐰_P.
First, for any matrix m × K matrix A, we consider a polyhedral cone of the following type:
Λ(A) = {𝐱∈𝐑^K: A 𝐱≤0}.
When A is replaced by
A(𝐰) = [I_K, 𝐰, -𝐰]^⊤,
with 𝐰∈Δ_K-1, where I_K is the K-dimensional identity matrix, we write Λ(𝐰) simply, instead of Λ(A(𝐰)). Let K = {1,...,K}. For each J ⊂K, let
Λ_J(𝐰) = {𝐱∈𝐑^K : 𝐱_J = 0, 𝐱_-J≤0, and 𝐰^⊤𝐱 = 0 } and
L_J(𝐰) = {𝐱∈𝐑^K : 𝐱_J = 0 and 𝐰^⊤𝐱 = 0 }.
(An inequality between vectors is understood as holding element-wise. We also assume that any inequality or equality that involves a vector 𝐱_J with J = ∅ is vacuously true.)
ri(Λ_J(𝐰)) = {𝐱∈𝐑^K: 𝐱_J = 0, 𝐱_-J < 0, and 𝐰^⊤𝐱 = 0 }.
For any vector 𝐱, we denote [𝐱]_j to mean its j-th entry. For any vector 𝐱∈𝐑^K, we let
J_0(𝐱) = {j ∈K: [𝐱]_j = 0}.
Given a symmetric positive definite matrix Ω, we define the norm ‖·‖_Ω by ‖𝐱‖_Ω = √(𝐱^⊤Ω^-1𝐱), and the projection Π_Ω(𝐲|Λ(𝐰)) (along the norm ‖·‖_Ω) to be the solution to the following minimization problem:
inf_𝐱∈Λ(𝐰)‖𝐲 - 𝐱‖_Ω^2.
Since Ω is positive definite and Λ(𝐰) is closed and convex, the projection Π_Ω(𝐲|Λ(𝐰)) exists and is unique. The following lemma shows how the projection along ·_Ω is translated into that along ·.
For any J ⊂K and 𝐱∈Λ(𝐰) with any 𝐰∈Δ_K-1, the following holds.
(i) 𝐱∈ri(Λ_J(𝐰)) if and only if J_0(𝐱) = J.
(ii) If ri(Λ_J(𝐰)) ≠ ∅ for some J ⊂K, then J ≠ ∅ and K∖ J ⊂ J_0(𝐰), and
ri(Λ_J(𝐰)) = {𝐱∈𝐑^K: 𝐱_J = 0, 𝐱_-J < 0}.
(iii) For any 𝐲∈𝐑^K, J_0(Π_Ω(𝐲|Λ(𝐰))) ≠ ∅.
Proof: (i) The result follows from the fact that ri(Λ_J(𝐰)), J ⊂K, partition Λ(𝐰).
(ii) For the first statement, suppose to the contrary that J = ∅. Then, since 𝐰∈Δ_K-1, ri(Λ_J(𝐰)) = ∅. Hence we must have J ≠ ∅. As for the second statement, suppose that K∖ J ⊄J_0(𝐰) so that there exists j ∈K∖ J, with [𝐰]_j > 0. Then, for such J, none of 𝐱∈Λ_J(𝐰) satisfies both 𝐱_-J < 0, and 𝐰^⊤𝐱 = 0, because 𝐰≥0, and hence, ri(Λ_J(𝐰)) = ∅. Hence the second statement holds.
Now, we turn to the third statement. Suppose that J = K. Then, ri(Λ_J(𝐰)) = Λ_J(𝐰) = {0}. Hence (<ref>) follows. Suppose that K∖ J ≠ ∅. Since K∖ J ⊂ J_0(𝐰) by the previous result, K∖ J_0(𝐰) ⊂ J. Then, for any 𝐱∈𝐑^K such that 𝐱_J = 0, the condition 𝐰^⊤𝐱 = 0 in (<ref>) holds. Again, (<ref>) follows.
(iii) Since Λ(𝐰) is closed and convex, Π_Ω( 𝐲|Λ(𝐰)) exists in Λ(𝐰) and is unique. Since ri(Λ_J(𝐰))'s partition Λ(𝐰), there exists a unique J^* ⊂K such that Π_Ω( 𝐲|Λ(𝐰)) ∈ri(Λ_J^*(𝐰)). Hence, ri(Λ_J^*(𝐰)) ≠ ∅. By the previous results (i) and (ii), we must have J^* = J_0(Π_Ω(𝐲|Λ(𝐰))), and J^* ≠ ∅.
▪
For any m × K matrix A, and any symmetric positive definite K × K matrix Ω,
Π_Ω(𝐲|Λ(A)) = Ω^1/2Π_I(Ω^-1/2𝐲|Ω^-1/2Λ(A)).
Proof: Note that
Π_Ω(𝐲|Λ(A)) = min_𝐱: A 𝐱≤0 (𝐲 - 𝐱)^⊤Ω^-1(𝐲 - 𝐱)
=Ω^1/2min_𝐱: A Ω^1/2𝐱≤0 (Ω^-1/2𝐲 - 𝐱)^⊤(Ω^-1/2𝐲 - 𝐱).
The last term is equal to Ω^1/2Π_I(Ω^-1/2𝐲|Λ(AΩ^1/2)) = Ω^1/2Π_I(Ω^-1/2𝐲|Ω^-1/2Λ(A)). ▪
Suppose that Y ∈𝐑^K is a random vector following N(0,Ω), with a symmetric positive definite matrix Ω. Then, for any α∈ (0,1) and 𝐰∈Δ_K-1,
P{‖Y - Π_Ω(Y|Λ(𝐰))‖_Ω^2 > c_1-α(Y;𝐰, Ω)}≤α,
where c_1-α(Y;𝐰, Ω) = G^-1(1-α; |J_0( Π_Ω(Y|Λ(𝐰)))| ), and G(·; k) is the CDF of the χ^2-distribution with degree of freedom equal to k.
Proof: Let F_ℓ, ℓ=1,...,L, be the faces of the polyhedral cone Λ(𝐰), and let ri(F_ℓ) be the relative interior of F_ℓ. Then, by Theorem 1 of <cit.>,[They apply Lemma 3.13.2 of <cit.>, p.125, which uses the orthogonal decomposition of 𝐑^K equipped with the inner product ⟨𝐚,𝐛⟩ = 𝐚'𝐛. However, the lemma continues to hold with any other inner product, with the definition of projections and orthogonal complements appropriately redefined.] we have
P{‖Y - Π_Ω(Y|Λ(𝐰))‖_Ω^2 > q_1-α(Y;𝐰, Ω)}≤α,
where
q_1-α(Y;𝐰, Ω) = ∑_ℓ =1^L 1{Π_Ω(Y|Λ(𝐰)) ∈ri(F_ℓ)} G^-1(1-α; K - rk(P_ℓ)),
and F_ℓ's are faces of Λ(𝐰), P_ℓ denotes the projection matrix (along ·_Ω) onto the linear span of F_ℓ, and rk(P_ℓ) denotes the rank of P_ℓ. It suffices to show that
q_1-α(Y;𝐰, Ω) = c_1-α(Y;𝐰, Ω).
In our case with the polyhedral cone Λ(𝐰), the faces and their relative interiors are given by Λ_J(𝐰) and ri(Λ_J(𝐰)), J ⊂K (see, e.g., the proof of Lemma 3.13.5 of <cit.>). Furthermore, by Lemma <ref>(ii), Π_Ω(Y|Λ(𝐰)) ∈ri(Λ_J(𝐰)) implies that J ≠ ∅ and K∖ J ⊂ J_0(𝐰). Hence, we can rewrite q_1-α(Y;𝐰, Ω) as
∑_J ⊂K: J ≠ ∅, K∖ J ≠ ∅ 1{K∖ J ⊂ J_0(𝐰)}1{Π_Ω(Y|Λ(𝐰)) ∈ri(Λ_J(𝐰))} G^-1(1-α; K - rk(P_J))
+ 1{Π_Ω(Y|Λ(𝐰)) ∈ri(Λ_K(𝐰))} G^-1(1-α; K - rk(P_K)),
where P_J is the projection matrix onto the linear span of Λ_J(𝐰).
Since ri(Λ_K(𝐰)) = {0}, and P_K is a zero matrix, the last term in (<ref>) is equal to
1{Π_Ω(Y|Λ(𝐰)) = 0} G^-1(1-α; K).
We focus on the first sum in (<ref>). The linear span of Λ_J(𝐰) is given by L_J(𝐰). However, for any nonempty J such that K∖ J ⊂ J_0(𝐰), the span is reduced to {𝐱∈𝐑^K: 𝐱_J = 0}, and rk(P_J) = K-|J|. Therefore, K-rk(P_J) = |J| in (<ref>). As seen in the proof of Lemma <ref>(iii), there exists a unique J^* ⊂K such that Π_Ω(Y|Λ(𝐰)) ∈ri(Λ_J^*(𝐰)) and J^* = J_0(Π_Ω(Y|Λ(𝐰))), which implies that K∖ J^* ⊂ J_0(𝐰). Hence, from (<ref>),
q_1-α(Y;𝐰,Ω) = 1{K∖ J_0( Π_Ω(Y|Λ(𝐰))) ≠ ∅}G^-1(1-α; |J_0( Π_Ω(Y|Λ(𝐰)))|)
+ 1{J_0( Π_Ω(Y|Λ(𝐰))) = K} G^-1(1-α; K )
= G^-1(1-α; |J_0( Π_Ω(Y|Λ(𝐰)))|) = c_1-α(Y;𝐰,Ω).
This gives the desired result. ▪
Let Y ∈𝐑^K be a random vector following N(0,Ω) for a symmetric positive definite matrix Ω. Then, for any α∈ (0,1) and 𝐰∈Δ_K-1,
P{‖Y - Π_Ω(Y|Λ(𝐰))‖_Ω^2 ≥ c_1-α(Y;𝐰,Ω) }≤α,
where c_1-α(Y;𝐰,Ω) is as defined in Lemma <ref>.
Proof: In light of Lemma <ref>, it suffices to show that
P{‖Y - Π_Ω(Y|Λ(𝐰))‖_Ω^2 = c_1-α(Y;𝐰,Ω) } =0.
The probability on the left hand side is equal to
∑_k=1^K P {‖Y - Π_Ω(Y|Λ(𝐰))‖_Ω^2 = G^-1(1-α; k) and |J_0( Π_Ω(Y|Λ(𝐰)))| = k }.
Note that the summation excludes k=0 by Lemma <ref>(iii). It suffices to show that
P{‖Y - Π_Ω(Y|Λ(𝐰))‖_Ω^2 = c} = 0,
for any constant c ≥ 0.
First, we write
‖Y - Π_Ω(Y|Λ(𝐰))‖_Ω^2 =∑_J ⊂K‖Y - Π_Ω(Y|Λ(𝐰))‖_Ω^2 1{Π_Ω(Y|Λ(𝐰)) ∈ri(Λ_J(𝐰))}
=∑_J ⊂K‖Y - Π_Ω(Y | L_J(𝐰))‖_Ω^2 1{Π_Ω(Y|Λ(𝐰)) ∈ri(Λ_J(𝐰))}
=∑_J ⊂K‖Y - Π_Ω(Y | L_J(𝐰))‖_Ω^2 1{ J_0(Π_Ω(Y|Λ(𝐰))) = J }1{K∖ J ⊂ J_0(𝐰)}.
The second equality follows by Lemma 3.13.2 of <cit.>. The last equality follows by Lemma <ref>. By (iii) of Lemma <ref>, the last sum is equal to
∑_J ⊂K: J ≠ ∅‖Y - Π_Ω(Y | L_J(𝐰))‖_Ω^2 1{ J_0(Π_Ω(Y|Λ(𝐰))) = J }1{K∖ J ⊂ J_0(𝐰)}.
Since the events {J_0(Π_Ω(Y|Λ(𝐰))) = J}, J ⊂K, are disjoint across J ⊂K, it suffices to show that ‖Y - Π_Ω(Y | L_J(𝐰))‖_Ω^2 is a continuous random variable for all nonempty J ⊂K. We let
Q_J(𝐰) = [[Ω^1/2]_J^⊤, Ω^1/2𝐰]^⊤,
where [Ω^1/2]_J denotes the J × K matrix of which each row corresponds to the j-th row vector of Ω^1/2, j ∈ J. Then, by Lemma <ref>,
Π_Ω(Y | L_J(𝐰)) = Ω^1/2Π_I(Ω^-1/2 Y | L_J^Ω(𝐰)),
where L_J^Ω(𝐰) = {𝐱: Q_J(𝐰) 𝐱 = 0}, which is the linear span of {𝐱: A(𝐰) Ω^1/2𝐱≤0}, with A(𝐰) defined in (<ref>).
Now,
‖Y - Π_Ω(Y | L_J(𝐰))‖_Ω^2 = (Ω^-1/2 Y - Ω^-1/2Π_Ω(Y | L_J(𝐰)))^⊤(Ω^-1/2 Y - Ω^-1/2Π_Ω(Y | L_J(𝐰)))
=(Ω^-1/2 Y - Π_I(Ω^-1/2 Y | L_J^Ω(𝐰)))^⊤(Ω^-1/2 Y - Π_I(Ω^-1/2 Y | L_J^Ω(𝐰)))
=(Ω^-1/2 Y)^⊤ M_J^Ω(𝐰)(Ω^-1/2 Y),
where M_J^Ω(𝐰) is a K × K symmetric idempotent matrix of rank equal to K - dim(L_J^Ω(𝐰)). When J = K, L_J^Ω(𝐰) = {0}, and hence dim(L_J^Ω(𝐰)) = 0. Suppose that J is such that |J| = K-1. Then, all but one of the entries of Ω^1/2𝐱 in L_J^Ω(𝐰) are zero. If this nonzero entry appears in the j-th entry of Ω^1/2𝐱, the requirement K∖ J ⊂ J_0(𝐰) yields that w_j = 0. Therefore, dim(L_J^Ω(𝐰)) = 1. Similarly, if J is such that 0<|J|<K-1, dim(L_J^Ω(𝐰)) = K-|J| (under the condition that K∖ J ⊂ J_0(𝐰)). Hence, for any nonempty J ⊂K such that K∖ J ⊂ J_0(𝐰), we have
K - dim(L_J^Ω(𝐰)) = |J|.
This means that ‖Y - Π_Ω(Y | L_J(𝐰))‖_Ω^2 follows the χ^2-distribution with degree of freedom equal to |J|. Since J ≠ ∅, ‖Y - Π_Ω(Y | L_J(𝐰))‖_Ω^2 is a continuous random variable. ▪
Suppose that Ω_n is a sequence of symmetric positive definite K × K matrices such that Ω_n →Ω_0, as n →∞, for a symmetric positive definite matrix Ω_0. Suppose also that 𝐲_n ∈𝐑^K and 𝐰_n ∈Δ_K-1 are sequences of vectors such that 𝐲_n →𝐲_0, as n →∞, for some 𝐲_0 ∈𝐑^K. Then, the following statements hold.
(i) lim_n →∞‖Π_Ω_n(𝐲_n|Λ(𝐰_n)) - Π_Ω_0(𝐲_0 |Λ(𝐰_n))‖ = 0.
(ii) lim_n →∞||J_0(Π_Ω_n(𝐲_n|Λ(𝐰_n)))| - |J_0(Π_Ω_0(𝐲_0|Λ(𝐰_n)))| | = 0.
Proof: (i) Note that
Π_Ω_n(𝐲_n|Λ(𝐰_n)) - Π_Ω_0(𝐲_0 |Λ(𝐰_n))≤ A_n,1 + A_n,2,
where
A_n,1 = Π_Ω_n(𝐲_n|Λ(𝐰_n)) - Π_Ω_n(𝐲_0 |Λ(𝐰_n)), and
A_n,2 = Π_Ω_n(𝐲_0|Λ(𝐰_n)) - Π_Ω_0(𝐲_0 |Λ(𝐰_n)).
Since a projection map in a Hilbert space on a closed convex set is a contraction map (see, e.g. Theorem 3 of <cit.>),
A_n,1≤𝐲_n - 𝐲_0 _Ω_n.
Since Ω_n→Ω_0 and Ω_0 is positive definite, the above bound vanishes as n →∞.
Let us turn to A_n,2. Due to the contractive property of the projection map, and since 0∈Λ(𝐰_n), and Ω_n →Ω_0, we have
Π_Ω_n(𝐲_0|Λ(𝐰_n)) _Ω_n = Π_Ω_n(𝐲_0|Λ(𝐰_n)) - Π_Ω_n(0|Λ(𝐰_n)) _Ω_n
≤𝐲_0 _Ω_n→𝐲_0 _Ω_0,
as n →∞. Hence, there exists a fixed bounded, closed, convex set B ⊂𝐑^K which depends only on 𝐲_0 and Ω_0 such that for all n ≥ 1, Λ(𝐰_n) ∩ B ∅ and
inf_𝐱∈Λ(𝐰_n)𝐲_0 - 𝐱_Ω_n^2 = inf_𝐱∈Λ(𝐰_n) ∩ B𝐲_0 - 𝐱_Ω_n^2 and
inf_𝐱∈Λ(𝐰_n)𝐲_0 - 𝐱_Ω_0^2 = inf_𝐱∈Λ(𝐰_n) ∩ B𝐲_0 - 𝐱_Ω_0^2.
Furthermore, note that
| inf_𝐱∈Λ(𝐰_n) ∩ B𝐲_0 - 𝐱_Ω_n^2 - inf_𝐱∈Λ(𝐰_n) ∩ B𝐲_0 - 𝐱_Ω_0^2|
≤| inf_𝐱∈Λ(𝐰_n) ∩ B(𝐲_0 - 𝐱_Ω_n^2 - 𝐲_0 - 𝐱_Ω_0^2 + 𝐲_0 - 𝐱_Ω_0^2 - inf_𝐱̃∈Λ(𝐰_n) ∩ B𝐲_0 - 𝐱̃_Ω_0^2) |.
The last term is bounded by
sup_𝐱∈Λ(𝐰_n) ∩ B| (𝐲_0 - 𝐱)^⊤ (Ω_n^-1 - Ω_0^-1) (𝐲_0 - 𝐱) |→ 0,
as n →∞. Since Λ(𝐰_n) ∩ B is a closed convex set, the projection of 𝐲_0 onto Λ(𝐰_n) ∩ B along ·_Ω_n exists and is unique. Hence, we find that
lim_n →∞ A_n,2 = 0.
(ii) By the result of (i), as n →∞,
Π_Ω_n(𝐲_n|Λ(𝐰_n)) - Π_Ω_0(𝐲_0|Λ(𝐰_n)) → 0.
Recall that the relative interiors ri(Λ_J(𝐰)), J ⊂K, partition Λ(𝐰). For any subsequence of {n}, we choose a further subsequence {n'} and J,J' ⊂K such that for all n' in the subsequence, we have
Π_Ω_n'(𝐲_n'|Λ(𝐰_n')) ∈ri(Λ_J(𝐰_n')), and
Π_Ω_0(𝐲_0|Λ(𝐰_n')) ∈ri(Λ_J'(𝐰_n')).
Since ri(Λ_J(𝐰_n')) and ri(Λ_J'(𝐰_n')) are nonempty, by Lemma <ref>, we have
ri(Λ_J(𝐰_n')) = {𝐱∈𝐑^K: 𝐱_J = 0, 𝐱_-J < 0},
and similarly with ri(Λ_J'(𝐰_n')). Hence neither relative interior depends on 𝐰_n' or n'. Furthermore, from this, we have
J_0(Π_Ω_n'(𝐲_n'|Λ(𝐰_n'))) = J and J_0(Π_Ω_0(𝐲_0|Λ(𝐰_n'))) = J'.
Now, by (<ref>) and (<ref>), we find that from large n' on, we have
Π_Ω_n'(𝐲_n'|Λ(𝐰_n')) ∈ri(Λ_J'(𝐰_n')).
Since the relative interiors ri(Λ_J(𝐰_n')), J ⊂K, partition Λ(𝐰_n'), we find that J = J' from some large n' on. ▪
Suppose that Y_n ∈𝐑^K, n ≥ 1, is a sequence of random vectors, and 𝐰_n ∈Δ_K-1 is a sequence of nonstochastic vectors, such that Y_n →_d Y, where Y follows N(0,Ω_0) for some symmetric positive definite matrix Ω_0. Furthermore, let Ω_n be a sequence of symmetric positive definite random matrices such that Ω_n →_P Ω_0, as n →∞.
Then, for any α∈ (0,1),
lim sup_n →∞ P{ Y_n - Π_Ω_n(Y_n|Λ(𝐰_n)) _Ω_n^2 > c_1-α(Y_n;𝐰_n,Ω_n)}≤α,
where c_1-α(Y_n;𝐰_n,Ω_n) = G^-1(1-α; |J_0(Π_Ω_n(Y_n |Λ(𝐰_n))) |).
Proof: Due to the almost sure representation theorem (cf. Theorem 6.7 of <cit.>, p.70), there is a common probability space on which we have a sequence of random vectors Ỹ_n and random matrices Ω̃_n such that
[Ỹ_n^⊤, vec(Ω̃_n)^⊤]^⊤→_a.s. [Ỹ^⊤, vec(Ω̃_0)^⊤]^⊤,
as n →∞, where Ỹ_n and Ω̃_n have the same distribution as Y_n and Ω_n, and Ỹ and Ω̃_0 have the same distribution as Y and Ω_0.
By Lemma <ref>(i), we have
Ỹ_n - Π_Ω̃_n(Ỹ_n|Λ(𝐰_n)) _Ω̃_n^2 - Ỹ - Π_Ω̃_0(Ỹ|Λ(𝐰_n)) _Ω̃_0^2 →_a.s 0,
as n →∞.
Let us turn to the critical values. By Lemma <ref>(ii), we have
lim_n →∞| |J_0(Π_Ω̃_n(Ỹ_n |Λ(𝐰_n)))| - |J_0(Π_Ω̃_0(Ỹ|Λ(𝐰_n)))| | = 0.
Hence,
c_1-α(Ỹ_n;𝐰_n,Ω̃_n) = G^-1(1-α; |J_0(Π_Ω̃_n(Ỹ_n |Λ(𝐰_n))) |)
= G^-1(1-α; |J_0(Π_Ω̃_0(Ỹ|Λ(𝐰_n))) |) + o_a.s.(1)≡ c_1-α(Ỹ;𝐰_n,Ω̃_0) + o_a.s.(1),
as n →∞. Thus, we find that
Ỹ_n - Π_Ω̃_n(Ỹ_n|Λ(𝐰_n))_Ω̃_n^2 - c_1-α(Ỹ_n;𝐰_n,Ω̃_n)
- (Ỹ - Π_Ω̃_0(Ỹ|Λ(𝐰_n))_Ω̃_0^2- c_1-α(Ỹ;𝐰_n,Ω̃_0)) →_a.s. 0,
as n →∞. Now, observe that
P{Ỹ_n - Π_Ω̃_n(Ỹ_n|Λ(𝐰_n))_Ω̃_n^2 - c_1-α(Ỹ_n;𝐰_n,Ω̃_n) > 0 }
≤ P{Ỹ_n - Π_Ω̃_n(Ỹ_n|Λ(𝐰_n))_Ω̃_n^2 - c_1-α(Ỹ_n;𝐰_n,Ω̃_n) ≥ 0 }
= P{Ỹ - Π_Ω̃_0(Ỹ|Λ(𝐰_n)) _Ω̃_0^2 - c_1-α(Ỹ;𝐰_n,Ω̃_0) + o_a.s.(1) ≥ 0 }
≤ P{Ỹ - Π_Ω̃_0(Ỹ|Λ(𝐰_n)) _Ω̃_0^2 - c_1-α(Ỹ;𝐰_n,Ω̃_0) ≥ 0 } + o(1) ≤α + o(1),
as n →∞. The second inequality follows by reversed Fatou's Lemma and from the fact that the map 1{·≥0} is upper semicontinuous. The last inequality follows by Lemma <ref>. ▪
§.§.§ The Proof of the Main Results
Throughout the proofs below, we assume that Assumptions <ref>-<ref> are satisfied. Define
Ĝ_P = √(n_0)( Ĥ - H_P) and ĝ_P = √(n_0)(ĥ - 𝐡_P).
The following lemma gives an asymptotically linear representation for Ĝ_P and ĝ_P.
As n_0 →∞,
Ĝ_P = 1/√(n_0)∑_i ∈ NΨ_i,P + o_𝒫(1), and ĝ_P = 1/√(n_0)∑_i ∈ Nψ_i,P + o_𝒫(1),
where Ψ_i,P is the K × K matrix whose (j,k)-entry is given by
ψ_i,P, jk = √(n_0/n_j)ψ_j,0,P(W_i;q_k,0,P) 1{i ∈ N_j} + √(n_0/n_k)ψ_k,0,P(W_i;q_j,0,P) 1{i ∈ N_k}
+ {q_j,0,P(W_i)q_k,0,P(W_i) - 𝐄_P[q_j,0,P(W_i)q_k,0,P(W_i)]} 1{i ∈ N_0},
and ψ_i,P is the K × 1 vector whose k-th entry is given by
ψ_i,P, k = √(n_0/n_k)ψ_k,0,P(W_i;q_0,0,P) 1{i ∈ N_k} + ψ_0,0,P(W_i;q_0,0,P) 1{i ∈ N_0}
+ {q_k,0,P(W_i)q_0,0,P(W_i) - 𝐄_P[q_k,0,P(W_i)q_0,0,P(W_i)]}1{i ∈ N_0}.
Proof: For j,k=1,...,K, let Ĥ_jk be the (j,k)-th entry of Ĥ and H_P,jk the (j,k)-th entry of H_P. As for the first statement, for each j,k=1,...,K, we write
√(n_0)(Ĥ_jk - H_P,jk) = 1/√(n_0)∑_i ∈ N_0 (q̂_j,0(W_i) - q_j,0,P(W_i)) q̂_k,0(W_i)
+ 1/√(n_0)∑_i ∈ N_0 (q̂_k,0(W_i) - q_k,0,P(W_i)) q_j,0,P(W_i)
+ 1/√(n_0)∑_i ∈ N_0{q_j,0,P(W_i)q_k,0,P(W_i) - 𝐄_P[q_j,0,P(W_i)q_k,0,P(W_i)]}.
By Assumption <ref>, we find that
√(n_0)(Ĥ_jk - H_P,jk) = √(n_0/n_j)∑_i ∈ N_jψ_j,0,P(W_i;q_k,0,P) + √(n_0/n_k)∑_i ∈ N_kψ_k,0,P(W_i;q_j,0,P)
+ 1/√(n_0)∑_i ∈ N_0{q_j,0,P(W_i)q_k,0,P(W_i) - 𝐄_P[q_j,0,P(W_i)q_k,0,P(W_i)]} + o_𝒫(1).
The proof for the second statement is similar and is omitted. ▪
As n_0 →∞, Ĥ = H_P + o_𝒫(1) and ĥ = 𝐡_P + o_𝒫(1).
Proof: Since sup_P ∈𝒫𝐄_P[Ψ_i,P^2] < ∞ and sup_P ∈𝒫𝐄_P[ψ_i,P^2] < ∞, the result is immediate from Lemma <ref>. ▪
For each 𝐰∈𝐑^K, we define
ℳ̂(𝐰) = ( 𝐰 - Ĥ^-1ĥ)^⊤Ĥ( 𝐰 - Ĥ^-1ĥ), and
ℳ_P(𝐰) = ( 𝐰 - H_P^-1𝐡_P )^⊤ H_P ( 𝐰 - H_P^-1𝐡_P ).
As n_0 →∞, ŵ = 𝐰_P + o_𝒫(1).
Proof: First, we prove the following two claims.
(i) For each ϵ >0,
lim_n_0 →∞sup_P ∈𝒫P{sup_𝐰∈Δ_K-1 |ℳ̂(𝐰) - ℳ_P(𝐰) | > ϵ} =0.
(ii) For each ϵ >0,
lim inf_n_0 →∞inf_P ∈𝒫inf_𝐰∈Δ_K-1∖ B(𝐰_P: ϵ){ℳ_P(𝐰) - ℳ_P(𝐰_P) } > 0,
where B(𝐰_P; ϵ) = {𝐰∈Δ_K-1: 𝐰 - 𝐰_P < ϵ}.
Once we have (i) and (ii), we follow the arguments in the proof of Theorem 2.1 of <cit.> to complete the proof. More specifically, we invoke (ii) and take ϵ>0, η_ϵ>0 and n_ϵ such that for all n ≥ n_ϵ,
inf_P ∈𝒫inf_𝐰∈Δ_K-1∖ B(𝐰_P: ϵ){ℳ_P(𝐰) - ℳ_P(𝐰_P) } > η_ϵ.
The event of ŵ - 𝐰_P > ϵ implies ℳ_P(ŵ) - ℳ_P(𝐰_P) > η_ϵ, or
ℳ̂(𝐰_P) - ℳ_P(𝐰_P) > ℳ̂(ŵ) - ℳ_P(ŵ) + η_ϵ,
where we use that ℳ̂(ŵ) ≤ℳ̂(𝐰_P). The probability of this event is bounded by
sup_P ∈𝒫 P{ 2 sup_𝐰∈Δ_K-1 |ℳ̂(𝐰) - ℳ_P(𝐰) | > η_ϵ}→ 0,
as n →∞, by (i). Since the last convergence is uniform in P ∈𝒫, we obtain the desired result of the lemma.
Let us prove (i) first. For each 𝐰∈Δ_K-1, we write
ℳ̂(𝐰) - ℳ_P(𝐰) = 𝐰^⊤ (Ĥ - H_P)𝐰 - 2 (ĥ - 𝐡_P)^⊤𝐰 + ĥ^⊤Ĥ^-1ĥ - 𝐡_P^⊤ H_P^-1𝐡_P.
The desired result of (i) follows by Lemma <ref> and Assumption <ref>.
Let us turn to (ii). Note that
ℳ_P(𝐰) - ℳ_P(𝐰_P) = (𝐰 - 𝐰_P)^⊤ H_P(𝐰 - 𝐰_P) + 2(𝐰 - 𝐰_P)^⊤ H_P (𝐰_P - H_P^-1𝐡_P)
≥inf_P ∈𝒫λ_min(H_P) 𝐰 - 𝐰_P^2,
because (𝐰 - 𝐰_P)^⊤ H_P (𝐰_P - H_P^-1𝐡_P) ≥ 0 for all 𝐰∈Δ_K-1 by the definition of 𝐰_P. (See, e.g., Propositions 2.1.5 and 2.3.2 of <cit.>.) The desired result follows from Assumption <ref>. ▪
Define
Ĝ^* = √(n_0)( Ĥ^* - Ĥ) and ĝ^* = √(n_0)(ĥ^* - ĥ).
As n_0 →∞,
Ĝ^* = 1/√(n_0)∑_i ∈ NΨ_i,P^* + o_𝒫(1) and ĝ^* = 1/√(n_0)∑_i ∈ Nψ_i,P^* + o_𝒫(1),
where Ψ_i,P^* is the K × K matrix whose (j,k)-entry is given by
ψ_i,P, jk^* = √(n_0/n_j)ψ̂_j,0,P(W_i^*; q_k,0,P)1{i ∈ N_j} + √(n_0/n_k)ψ̂_k,0,P(W_i^*; q_j,0,P)1{i ∈ N_k}
+ {q_j,0,P(W_i^*)q_k,0,P(W_i^*) - 1/n_0∑_i ∈ N_0 q_j,0,P(W_i)q_k,0,P(W_i) } 1{i ∈ N_0},
and ψ_i,P^* is the K × 1 vector whose k-th entry is given by
ψ_i,P, k^* = √(n_0/n_k)ψ̂_k,0,P(W_i^*;q_0,0,P) 1{i ∈ N_k} + ψ̂_0,0,P(W_i^*;q_0,0,P) 1{i ∈ N_0}
+ {q_k,0,P(W_i^*)q_0,0,P(W_i^*) - 1/n_0∑_i ∈ N_0 q_k,0,P(W_i)q_0,0,P(W_i) }1{i ∈ N_0}.
Proof: The proof is similar to that of Lemma <ref>. Since the arguments are standard, we provide a sketch of the proof of the first statement only for brevity. Let Ĥ_jk^* be the (j,k)-th entry of Ĥ^*. We write
√(n_0)( Ĥ_jk^* - Ĥ_jk) = A_n,1 + A_n,2,
where
A_n,1 = 1/√(n_0)∑_i ∈ N_0 (q̂_j,0^*(W_i^*) - q̂_j,0(W_i^*)) q̂_k,0^*(W_i^*) + 1/√(n_0)∑_i ∈ N_0 (q̂_k,0^*(W_i^*) - q̂_k,0(W_i^*)) q̂_j,0(W_i^*), and
A_n,2 = 1/√(n_0)∑_i ∈ N_0{q̂_j,0(W_i^*)q̂_k,0(W_i^*) - 1/n_0∑_i ∈ N_0q̂_j,0(W_i)q̂_k,0(W_i) }.
From Assumptions <ref>-<ref>, we can show that
A_n,1 = √(n_0/n_j)∑_i ∈ Nψ̂_j,0,P(W_i^*;q_k,0,P) 1{i ∈ N_j}
+ √(n_0/n_k)∑_i ∈ Nψ̂_k,0,P(W_i^*;q_j,0,P)1{i ∈ N_k} + o_𝒫(1).
By Assumption <ref>(ii),
A_n,2 = 1/√(n_0)∑_i ∈ N_0{q_j,0,P(W_i^*)q_k,0,P(W_i^*) - 1/n_0∑_i ∈ N_0 q_j,0,P(W_i)q_k,0,P(W_i) } + o_𝒫(1).
Thus, we obtain the desired result. ▪
Recall the definition of Ω_n,P in (<ref>). We construct its bootstrap version. Define
ψ̃_i,P^* = Ψ_i,P^* 𝐰_P - ψ_i,P^*,
where Ψ_i,P^* and ψ_i,P^* are defined in Lemma <ref>. We let
Ω̃_n,P = 1/n_0∑_i ∈ N𝐄[ψ̃_i,P^* ψ̃_i,P^* ⊤|ℱ_n ],
where ℱ_n denotes the σ-field generated by (W_i)_i ∈ N.
Suppose that N_n = {1,...,n} is partitioned as
N_n = ⋃_k=0^K N_n,k,
where we denote n_k,n = |N_n,k|, k=0,1,...,K, so that n_k,n is the sample size in region k, and n_k,n/n_0,n→ r_k for the constant r_k>0 in Assumption <ref>. Then, the following statements hold for any sequence of probabilities P_n ∈𝒫.
(i) As n →∞,
sup_t ∈𝐑| P_n{Ω_n,P_n^-1/21/√(n_0,n)∑_i ∈ N_nψ̃_i,P_n≤ t } - Φ(t) | → 0,
where Φ is the CDF of N(0,1).
(ii) For any ϵ>0, as n →∞,
P_n{sup_t ∈𝐑| P_n {Ω̃_n,P_n^-1/21/√(n_0,n)∑_i ∈ N_nψ̃_i,P_n^* ≤ t |ℱ_n } - Φ(t) | > ϵ}→ 0.
Proof: Both results follow from standard arguments involving the Central Limit Theorem and its bootstrap version for a sum of independent random variables. (See Chapter 3 of <cit.>.) ▪
As n_0 →∞, Ω̂= Ω_n,P + o_𝒫(1).
Proof: It suffices to show that as n_0 →∞,
Ω_n,P = Ω̃_n,P + o_𝒫(1) and Ω̂= Ω̃_n,P + o_𝒫(1).
The first statement is easy to show. For brevity, we focus on showing the second statement. We write Ω̂_n instead of Ω̂ to make the sample size explicit. We choose a subsequence {n'} of {n} such that for P_n'∈𝒫,
lim inf_n_0 →∞sup_P ∈𝒫 P{Ω̂_n - Ω̃_n,P > ϵ}
= lim inf_n' →∞ P_n'{Ω̂_n' - Ω̃_n',P > ϵ}.
Then, there exists a further subsequence {n”} of {n'} such that
lim inf_n' →∞ P_n'{Ω̂_n' - Ω̃_n',P > ϵ} = lim_n”→∞ P_n”{Ω̂_n” - Ω̃_n”,P > ϵ}.
Recall that τ̂_k = √(n_0)max{|[Ĥŵ - ĥ]_k |, c_0 }. Thus, from Lemma <ref>, we have
lim_M →∞lim inf_n”→∞ P_n”{τ̂ > M } = 0.
Since 𝐄[ |γ̃_k^*|^2+ δ|ℱ_n] ≤τ̂_k^2+δ for any k=1,...,K and for any δ>0,
lim_M →∞lim inf_n”→∞ P_n”{𝐄[ γ̃^*^2 + δ|ℱ_n”] > M } = 0.
Hence, for each k,ℓ = 1,...,K, and ϵ>0,
lim_M →∞lim inf_n”→∞ P_n”{𝐄[ |γ̃_k^*γ̃_ℓ^*| 1{ |γ̃_k^*γ̃_ℓ^*| > M}|ℱ_n”] > ϵ}
≤lim_M →∞lim inf_n”→∞ P_n”{𝐄[ |γ̃_k^*γ̃_ℓ^*|^1+ δ|ℱ_n”] > ϵ M^δ} = 0.
Therefore, 𝐄[ γ̃^*^2 + δ|ℱ_n”] is asymptotically uniformly integrable uniformly over P ∈𝒫. From Lemma <ref>, we have along the subsequence P_n”,
Ω̃_n”,P_n”^-1/2(1/B∑_b=1^B γ̃_b^* γ̃_b^* ⊤) Ω̃_n”,P_n”^-1/2 = Ω̃_n”,P_n”^-1/2𝐄[ γ̃_b^* γ̃_b^* ⊤|ℱ_n”] Ω̃_n”,P_n”^-1/2 + o_P(1),
as B →∞ and then n”→∞. Furthermore, from (<ref>),
Ω̃_n”,P_n”^-1/2𝐄[ γ̃_b^* γ̃_b^* ⊤|ℱ_n”] Ω̃_n”,P_n”^-1/2 = Ω̃_n”,P_n”^-1/2𝐄[ ψ̃_i,P_n”^*ψ̃_i,P_n”^* ⊤|ℱ_n”] Ω̃_n”,P_n”^-1/2 + o_P(1).
From the arguments in the proof of Theorem 2.20 of <cit.>, we find that
Ω̃_n”,P_n”^-1/2(1/B∑_b=1^B γ̃_b^* γ̃_b^* ⊤) Ω̃_n”,P_n”^-1/2→_P I_K,
as n”→∞. ▪
For any κ∈ (0,1), we have
lim inf_n_0 →∞inf_P ∈𝒫P{𝐰_P ∈C̃_1- κ}≥ 1 - κ.
Proof: For any subsequence of {n_0}, we choose a further subsequence in Lemma <ref>. Then, we apply Lemma <ref> on the subsequence, with
Y_n = Ĝ_P_n𝐰_P_n - ĝ_P_n,
and Y = Ω_P^1/2ℤ, and with Ω_n and Ω in the lemma replaced by Ω̂_n and Ω_P. Then, since Y_n →_d Y and Ω̂_n →_P Ω_P by Lemmas <ref> and <ref> and (<ref>), we obtain the desired result. ▪
As n_0 →∞,
sup_𝐰∈Δ_K-1|√(n_0)(θ̂(𝐰) - θ_P(𝐰)) -∑_k=1^K w_k 1/√(n_0)∑_i ∈ Nψ_k,P^θ(W_i)| = o_𝒫(1),
where
ψ_k,P^θ(W_i) = (ψ_0,0,P(W_i;1) + q_0,0,P(W_i) - 𝐄_P[q_0,0,P(W_i)]) 1{i ∈ N_0}
+ (q_k,1,P(W_i) - 𝐄_P[q_k,1,P(W_i)]) 1{i ∈ N_0} + √(n_0/n_k)ψ_k,1,P(W_i;1) 1{i ∈ N_k}.
Proof: We write
√(n_0)(θ̂(𝐰) - θ_P(𝐰)) = 1/√(n_0)∑_i ∈ N_0(q̂_0,0(W_i) - q_0,0,P(W_i)) + 1/√(n_0)∑_i ∈ N_0(q_0,0,P(W_i) - 𝐄_P[q_0,0,P(W_i)])
+ 1/√(n_0)∑_k=1^K w_k ∑_i ∈ N_0{q̂_k,1(W_i) - q_k,1,P(W_i) + q_k,1,P(W_i) - 𝐄_P[q_k,1,P(W_i)]}.
By Assumption <ref> and by the fact that ∑_k=1^K w_k = 1, we find
√(n_0)(θ̂(𝐰) - θ_P(𝐰)) = 1/√(n_0)∑_i ∈ N_0ψ_0,0,P(W_i;1) + 1/√(n_0)∑_i ∈ N_0(q_0,0,P(W_i) - 𝐄_P[q_0,0,P(W_i)])
+ ∑_k=1^K w_k 1/√(n_0)∑_i ∈ N_0(q_k,1,P(W_i) - 𝐄_P[q_k,1,P(W_i)])
+ ∑_k=1^K w_k ∑_i ∈ N_k√(n_0/n_k){ψ_k,1,P(W_i;1) + q_k,1,P(W_i) - 𝐄_P[q_k,1,P(W_i)]} + o_𝒫(1)
= ∑_k=1^K w_k 1/√(n_0)∑_i ∈ Nψ_k,P^θ(W_i) + o_𝒫(1).
We obtain the desired result. ▪
Define
σ^2 = ∑_k=1^K w_k^2 1/n_0∑_i ∈ N𝐄[ (ψ_k,P^θ(W_i) )^2 ].
As n_0 →∞,
sup_P ∈𝒫sup_t ∈𝐑|P{√(n_0)(θ̂(𝐰_P) - θ_P(𝐰_P) )/σ≤ t } - Φ(t)| → 0,
where Φ is the CDF of N(0,1).
Proof: The result follows from Lemma <ref> and the Central Limit Theorem for independent random variables. ▪
As n_0 →∞, σ̂= σ + o_𝒫(1).
Proof: The result follows from the uniform asymptotic normality of the bootstrap version √(n_0)(θ̂^*(ŵ) - θ̂(ŵ)). The arguments are standard and omitted. ▪
(i) For any ϵ>0, there exists M>0 such that
lim sup_n_0 →∞sup_P ∈𝒫 P{sup_𝐰∈Δ_K-1| ℳ̂(𝐰) - ℳ_P(𝐰) | > M n_0^-1/2} < ϵ.
(ii) For any ϵ>0, there exists M>0 such that for any sequence δ_ n → 0 as n →∞,
lim sup_n_0 →∞sup_P ∈𝒫 P{sup_𝐰∈Δ_K-1: 𝐰 - 𝐰_P ≤δ_n| ℳ̂^Δ(𝐰) - ℳ̂^Δ(𝐰_P) | > M δ_n n_0^-1/2} < ϵ.
Proof: (i) First, we write
ℳ̂(𝐰) - ℳ_P(𝐰) = 𝐰^⊤ (Ĥ - H_P)𝐰 - 2 (ĥ - 𝐡_P)^⊤𝐰 + ĥ^⊤Ĥ^-1ĥ - 𝐡_P^⊤ H_P^-1𝐡_P.
Since the weights lie in the simplex Δ_K-1, which is a bounded set, the desired result follows from Lemma <ref> and the Central Limit Theorem.
(ii) We write
ℳ̂^Δ(𝐰) - ℳ̂^Δ(𝐰_P) = (𝐰 - 𝐰_P)^⊤ (Ĥ - H_P) (𝐰- 𝐰_P) + 2 𝐰_P^⊤ (Ĥ - H_P) (𝐰 - 𝐰_P)
- 2 (ĥ - 𝐡_P)^⊤ (𝐰- 𝐰_P),
where ℳ̂^Δ(𝐰) = ℳ̂(𝐰) - ℳ_P(𝐰). As before, the desired result follows immediately from Lemma <ref>. ▪
Suppose that for some positive sequence δ_n,1 such that lim_n →∞δ_n,1 = 0, we have
lim_M ↑∞lim sup_n_0 →∞sup_P ∈𝒫 P{ŵ - 𝐰_P > M δ_n,1} = 0.
Then,
lim_M ↑∞lim sup_n_0 →∞sup_P ∈𝒫 P{ŵ - 𝐰_P ^2 > M n_0^-1/2δ_n,1} = 0.
Proof: We take an arbitrary ϵ>0 and a large M_ϵ>0 such that
lim sup_n_0 →∞sup_P ∈𝒫 P{ŵ -𝐰_P > M_ϵδ_n,1}≤ϵ.
Recall the definition ℳ̂^Δ(𝐰) = ℳ̂(𝐰) - ℳ_P(𝐰). Since ℳ̂(𝐰_P) ≥ℳ̂(ŵ), we have
ℳ̂^Δ(𝐰_P) - ℳ̂^Δ(ŵ) ≥ℳ_P(ŵ) - ℳ_P(𝐰_P)
≥inf_P ∈𝒫λ_min(H_P) ŵ - 𝐰_P^2 ≥ηŵ - 𝐰_P^2,
from (<ref>), where η>0 is the constant in Assumption <ref>. Define the event
E_n(ϵ) = {ŵ -𝐰_P > M_ϵδ_n,1}.
By Lemma <ref>(ii), for any ϵ_1>0, there exists M >0 such that
lim sup_n_0 →∞sup_P ∈𝒫 P({|ℳ̂^Δ(𝐰_P) - ℳ̂^Δ(ŵ)| > M n_0^-1/2M_ϵδ_n,1}∩ E_n^c(ϵ)) ≤ϵ_1.
Therefore, from (<ref>),
lim inf_n_0 →∞inf_P ∈𝒫 P{ηŵ - 𝐰_P^2 ≤ M n_0^-1/2M_ϵδ_n,1}≥ 1 - ϵ_1 - ϵ.
Since the choice of ϵ_1 and ϵ is arbitrary, the desired result follows. ▪
Proof of Theorem <ref>: By Lemma <ref>, there exists a sequence δ_n,1→ 0 such that
lim_M ↑∞lim sup_n_0 →∞sup_P ∈𝒫 P{ŵ - 𝐰_P > M δ_n,1} = 0.
By Lemma <ref>, we find that the above result holds for δ_n,1 = n_0^-1/4. Now, we use mathematical induction. Suppose that (<ref>) holds with δ_n,1 such that
log(δ_n,1) = log (n_0) ( - 1/4 - 1/8 - ... - 1/2^m),
for some m ≥ 2. Then, with this choice of δ_n,1, we apply Lemma <ref> again to find that (<ref>) holds with δ_n,1 such that
log(δ_n,1) = log (n_0) ( - 1/4 - 1/8 - ... - 1/2^m+1).
Hence, we find that (<ref>) holds with δ_n,1 such that
log(δ_n,1) = log (n_0) ( - ∑_m=2^∞1/2^m) = - 1/2log (n_0).
This gives the desired result. ▪
Proof of Theorem <ref> : Note that
P{θ_P(𝐰_P) ∉ C_1- α} = P{inf_𝐰∈C̃_1 - κ( √(n_0)(θ̂(𝐰) - θ_P(𝐰_P))/σ̂)^2 > c_1 - α + κ(1) }
≤ P{( √(n_0)(θ̂(𝐰_P) - θ_P(𝐰_P))/σ̂)^2 > c_1 - α + κ(1) } + P{𝐰_P ∉C̃_1 - κ}.
The desired result follows by Lemmas <ref>, <ref> and <ref>. ▪
§.§ When H is Not Necessarily Invertible
Let us discuss the case where H is not necessarily invertible. In this case, we show how we can still obtain uniformly valid confidence intervals for θ_0. First, we provide a modification of the method to accommodate this setting, and then present the uniform validity result.
We begin by noting that we can rewrite
ρ_P^2(𝐰) = 𝐰^⊤ H_P 𝐰 + 2 𝐰^⊤𝐡_P.
We define
𝐖_P = argmin_𝐰∈Δ_K-1ρ_P^2(𝐰).
Let us explain how we construct the confidence interval for θ_0(𝐰_0) for a fixed 𝐰_0 ∈Δ_K-1. We first define
θ̂(𝐰) = 1/n_0∑_i ∈ N_0m̂_0(μ̂_0^Γ(X_i),X_i ) 1{X_i ∈𝒳̂_0^Γ} + 1/n_0∑_i ∈ N_0m̂^𝗌𝗒𝗇(X_i; 𝐰) 1{X_i ∈𝒳_0 ∖𝒳̂_0^Γ},
where
m̂^𝗌𝗒𝗇(x; 𝐰) = ∑_k=1^K m_k( μ̂_k^Γ(x),x) w_k.
As in (<ref>), we construct
T'(𝐰) = n_0 inf_λ∈Λ(𝐰)(𝐟̂(𝐰) - λ)^⊤Ω̂^-1(𝐰) ( 𝐟̂(𝐰) - λ),
where Ω̂(𝐰) is constructed as in (<ref>) with 𝐰 replacing ŵ. Then, the confidence set for 𝐰_0 is given by
C̃_1-κ' = {𝐰∈Δ_K-1: T'(𝐰) ≤ĉ_1 - κ(𝐰)},
where ĉ_1-κ(𝐰) denotes the 1 - κ percentile of the χ^2 distribution with degree of freedom equal to the number of zero entries in λ̂(𝐰).
We let
T^*(𝐰) = √(n_0)(θ̂^*(𝐰) - θ̂(𝐰)).
We read the 0.75 and 0.25 quantiles of the bootstrap distribution of {T_b^*(𝐰): b = 1,...,B} and denote them by q̂_0.75(𝐰) and q̂_0.25(𝐰), respectively. Define
σ̂(𝐰) = q̂_0.75(𝐰) - q̂_0.25(𝐰)/z_0.75 - z_0.25,
where z_0.75 and z_0.25 are the 0.75- and 0.25-quantiles of N(0,1).
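For concreteness, this scale estimate can be computed from the stored bootstrap draws in a few lines; in the minimal Python sketch below, the array of draws T_b^*(𝐰) is assumed to be available and the placeholder draws are only for illustration:

import numpy as np
from scipy.stats import norm

def sigma_hat(T_star):
    """Robust scale from bootstrap draws: interquartile range rescaled by the normal IQR."""
    q75, q25 = np.quantile(T_star, [0.75, 0.25])
    return (q75 - q25) / (norm.ppf(0.75) - norm.ppf(0.25))

# illustration with placeholder bootstrap draws T_b^*(w), b = 1, ..., B
T_star = np.random.default_rng(1).normal(scale=2.0, size=999)
print(sigma_hat(T_star))                             # close to 2.0 for Gaussian draws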
Define
T̂'(𝐰, θ) = √(n_0) (θ̂(𝐰) - θ)/σ̂(𝐰).
We construct the (1-α)-level confidence interval using the Bonferroni approach as follows:
C_1- α' = {θ∈Θ: inf_𝐰∈C̃_1- κ(T̂'(𝐰, θ))^2 ≤ c_1 - α + κ(1) },
where κ>0 is a small constant, such as κ = 0.005, and c_1 - α + κ(1) denotes the (1 - α + κ)-quantile of the χ^2(1) distribution. By modifying the arguments in the proof of Theorem <ref>, we can show that
lim inf_n_0 →∞inf_P ∈𝒫inf_𝐰_0 ∈𝐖_P P{θ_P(𝐰_0) ∈ C_1-α'}≥ 1 - α.
Equipped with Lemma <ref>, we can show this using standard arguments. We omit the details.
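A schematic implementation of the two-step (Bonferroni) construction could look as follows. The sketch assumes that, for each candidate 𝐰 on a grid over the simplex, the quantities θ̂(𝐰), σ̂(𝐰), T'(𝐰), and the number of zero entries of λ̂(𝐰) have already been computed, and it reports the convex hull of the resulting union of intervals for simplicity; all names are ours and only illustrative.

import numpy as np
from scipy.stats import chi2

def bonferroni_ci(theta_hat, sigma_hat, T_prime, n_zero, n0, alpha=0.05, kappa=0.005):
    """Two-step (Bonferroni) confidence interval, returned as the convex hull of the union.

    theta_hat, sigma_hat, T_prime, n_zero are arrays indexed by the grid of candidate weights.
    """
    # first stage: chi^2 critical value with data-dependent degrees of freedom
    crit = np.where(n_zero > 0, chi2.ppf(1 - kappa, df=np.maximum(n_zero, 1)), 0.0)
    keep = T_prime <= crit                           # candidate weights retained in the confidence set
    # second stage: invert the squared t-statistic at level 1 - alpha + kappa
    half = np.sqrt(chi2.ppf(1 - alpha + kappa, df=1)) * sigma_hat[keep] / np.sqrt(n0)
    # (the retained set is nonempty with high probability since it contains the pseudo-true weights)
    return float(np.min(theta_hat[keep] - half)), float(np.max(theta_hat[keep] + half))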
§ FURTHER DETAILS ON MONTE CARLO SIMULATIONS AND EMPIRICAL APPLICATIONS
§.§ Details on the Monte Carlo Simulations
In our simulations, there is an outcome Y_i which is a function of a single-dimensional policy variable X_i and an unobserved random variable U_i. We draw U_i ∼ N(0, σ^2), i.i.d. and independently of X_i. We allow for one target region (region 0) and three source regions (k=1,2,3), with all regions having the same sample size n_0 ∈{500,1000}.
For the policy experiment, we draw μ_k^Γ(X_i) i.i.d. Uniform[0,1] for all source regions (i.e., the post-policy variable). However, for the target region, we assume that we only observe pre-policy X_i drawn i.i.d. Uniform[1-s,1]. The policy of interest is the map
μ_0^Γ(X_i) = (X_i - (1-s))/s.
Hence, the post-policy distribution of μ_0^Γ(X_i) in the target region is Uniform[0,1] and s measures the overlap of the support of X_i in the target region relative to the source regions. When s=1, no information from source regions is necessary: the target parameter θ_0 is fully identified and estimable from the target region alone. When s is very close to 0, then the post-policy ARF for the target region is not identified for X_i>0 and identification of θ_0 almost solely relies on information from source regions.
We consider two separate specifications for Average Response Functions, which we refer to as (i) “Linear” and (ii) “Non-Linear” (in X_i). They differ in specifying linear versus non-linear specifications for the ARF's, as well as in having boundary versus interior values for 𝐰^* (the weights satisfying the synthetic transferability condition). This allows us to verify our inference in all of these theoretically and empirically relevant cases.
The Linear Specification specifies the following causal structures for the outcome, Y_i:
Y_i =
X_i + U_i, if k=1
0.5 X_i - 1+U_i, if k=2
0.3X_i + 1 + U_i, if k=3
0.4 X_i + U_i, if k=0,
while the Non-Linear Specification is:
Y_i =
X_i + U_i, if k=1
X_i^2 - 1 + U_i, if k=2
X_i^3 - 3X_i + U_i, if k=3
0.2 X_i^3 + 0.4X_i^2 -0.2 X_i -0.4 + U_i, if k=0.
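As an illustration, a minimal Python sketch of the two data-generating processes above; the function names are ours, and drawing the source regions' policy variable directly as Uniform[0,1] reflects our reading of the design:

import numpy as np

def simulate_region(k, n, s, sigma, design, rng):
    """Draw (Y, X) for one region under the Linear or Non-Linear specification."""
    if k == 0:
        x = rng.uniform(1 - s, 1, size=n)            # target region: pre-policy X on [1-s, 1]
    else:
        x = rng.uniform(0, 1, size=n)                # source regions: policy variable on [0, 1]
    u = rng.normal(0.0, sigma, size=n)
    if design == "linear":
        arf = {1: x, 2: 0.5 * x - 1, 3: 0.3 * x + 1, 0: 0.4 * x}[k]
    else:
        arf = {1: x, 2: x**2 - 1, 3: x**3 - 3 * x,
               0: 0.2 * x**3 + 0.4 * x**2 - 0.2 * x - 0.4}[k]
    return arf + u, x

rng = np.random.default_rng(0)
data = {k: simulate_region(k, n=500, s=0.5, sigma=0.5, design="linear", rng=rng) for k in range(4)}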
Our target parameter for the Linear Specification is given by θ_0 = 0.2, while it is θ_0 = -0.317 for the Non-Linear Specification. It follows that, for the Linear Specification, 𝐰_0= (0, 0.5, 0.5), while for the Non-Linear case, 𝐰_0 = (0.4, 0.4, 0.2).
We estimate the ARFs for each region by Ordinary Least Squares with covariates in region k given by
X̃_i^k = [1, X_i, X_i^2, X_i^3]^⊤.
Estimation of Ĥ, ĥ, and ŵ follows equations (<ref>) above. We then estimate θ̂(ŵ) as in equation (<ref>), which, in our set-up, is given by:
θ̂(ŵ) = 1/n_0∑_i ∈ N_0m̂_0(μ̂^Γ(X_i)) 1{X_i ∈ [1-s,1]}
+ 1/n_0∑_i ∈ N_0m̂^𝗌𝗒𝗇(X_i; ŵ) 1{X_i ∈ [0,1-s]},
where
m̂^𝗌𝗒𝗇(x; ŵ) = ∑_k=1^K m̃_k( μ̂_k^Γ(x)) ŵ_k.
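The estimation step can be sketched as follows. Since the defining equations for Ĥ, ĥ, and ŵ are referenced above rather than restated, the sketch takes the equivalent route of choosing ŵ to minimize the sample mean-squared distance between the fitted target ARF and the synthetic combination on the target sample, and it splits θ̂(ŵ) according to whether the post-policy value lies in the estimable range [1-s, 1]; both choices are our reading of the construction, and the code reuses the data dictionary from the simulation sketch above:

import numpy as np
from scipy.optimize import minimize

def poly_features(x):
    x = np.atleast_1d(x)
    return np.column_stack([np.ones_like(x), x, x**2, x**3])

def fit_arf(y, x):
    """OLS with the cubic covariates X-tilde = (1, X, X^2, X^3); returns a prediction function."""
    beta, *_ = np.linalg.lstsq(poly_features(x), y, rcond=None)
    return lambda x_new: poly_features(x_new) @ beta

s = 0.5
arf = {k: fit_arf(*data[k]) for k in range(4)}       # data[k] = (y, x) as in the simulation sketch
y_tgt, x_tgt = data[0]
mu_post = (x_tgt - (1 - s)) / s                      # post-policy values in the target region

# choose w on the simplex to match the target ARF with the synthetic combination on the overlap
M = np.column_stack([arf[k](x_tgt) for k in (1, 2, 3)])
m0 = arf[0](x_tgt)

def rho2(w):
    return np.mean((m0 - M @ w) ** 2)

res = minimize(rho2, x0=np.full(3, 1 / 3), bounds=[(0, 1)] * 3,
               constraints=({"type": "eq", "fun": lambda w: np.sum(w) - 1},))
w_hat = res.x

# plug-in estimate: use the target ARF where it is estimable, the synthetic ARF elsewhere
inside = (mu_post >= 1 - s)
synth = np.column_stack([arf[k](mu_post) for k in (1, 2, 3)]) @ w_hat
theta_hat = np.mean(np.where(inside, arf[0](mu_post), synth))
print(w_hat, theta_hat)                              # roughly (0, 0.5, 0.5) and 0.2 in the linear design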
We consider the Bonferroni-based Confidence Interval (CI), denoted C_1- α in equation (<ref>) in the main text. Finally, we set α = 0.05 to be the significance level, σ = 0.5, κ = 0.005, R=1000 simulations and B=999 bootstrap draws. The results are shown below and described in the main text.
§.§ Details on Empirical Application: Estimation for the ARFs
In this section, we explain the estimation of m_k(μ_k(X_i)) and m_k(μ_k^Γ(X_i)) used in our empirical application. For estimation, we use the equilibrium wage-generation process in (<ref>), so that for each i ∈ N_k, we have
log W_i = max{logβ_k + X_i^⊤γ_k + U_i,k, logW_k }.
We estimate γ_k using the pairwise difference estimation method of <cit.>. (Note that β_k is not identified in this semiparametric setting.) More specifically, we first define
s(y_1,y_2,δ) = {[ y_1^2 - 2(y_2 + δ) y_1, if δ≤ - y_2; (y_1 - y_2 - δ)^2, if - y_2 < δ < y_1; (-y_2)^2 - 2(δ - y_1)(-y_2), if δ≥ y_1. ].
For each k=0,1,...,K, we let γ̂_k be the estimator obtained as a solution to the following optimization problem:
min_γ1/(n_k (n_k-1))∑_i ∈ N_k∑_j ∈ N_k: j > i s(log W_i - logW_k, log W_j - logW_k, (X_i - X_j)^⊤γ).
From this, we obtain γ̂_k.
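As an illustration, a minimal Python sketch of this pairwise-difference step; the function and variable names are ours, a generic quasi-Newton routine stands in for whatever algorithm the cited method prescribes, and log_w_min plays the role of the region-specific log minimum wage:

import numpy as np
from scipy.optimize import minimize

def s_loss(y1, y2, delta):
    """Pairwise loss s(y1, y2, delta) from the display above, vectorized over pairs."""
    return np.where(delta <= -y2,
                    y1**2 - 2 * (y2 + delta) * y1,
                    np.where(delta >= y1,
                             (-y2) ** 2 - 2 * (delta - y1) * (-y2),
                             (y1 - y2 - delta) ** 2))

def pairwise_difference_fit(log_w, x, log_w_min):
    """Estimate gamma_k for one region from (log wages, covariates)."""
    n, p = x.shape
    i_idx, j_idx = np.triu_indices(n, k=1)           # all pairs with j > i
    y1 = log_w[i_idx] - log_w_min
    y2 = log_w[j_idx] - log_w_min
    dx = x[i_idx] - x[j_idx]

    def objective(gamma):
        return np.mean(s_loss(y1, y2, dx @ gamma))

    return minimize(objective, x0=np.zeros(p), method="BFGS").x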
Then we define
μ̂_k(X_i) = X_i^⊤γ̂_k - logW_k and μ̂_k^Γ(X_i) = X_i^⊤γ̂_k - logW_k^Γ,
and construct
m̂_k(μ) = ∑_ℓ∈ N_k, ℓ≠ i K_h(μ- μ̂_k(X_ℓ)) Y_ℓ /∑_ℓ∈ N_k, ℓ≠ i K_h(μ- μ̂_k(X_ℓ)),
where K_h(x) = K(x/h)/h and K is a univariate kernel. In particular, we use a quartic kernel and choose h by cross-validation. We obtain the estimators of m_k(μ_k(X_i)) and m_k(μ_k^Γ(X_i)) as follows:
m̂_k(μ̂_k(X_i)) and m̂_k(μ̂_k^Γ(X_i)).
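The kernel-regression step can be sketched as follows (quartic kernel, leave-one-out evaluation); the cross-validation choice of h is omitted and the inputs shown are placeholders:

import numpy as np

def quartic_kernel(u):
    """Quartic (biweight) kernel supported on [-1, 1]."""
    return np.where(np.abs(u) <= 1, (15.0 / 16.0) * (1 - u**2) ** 2, 0.0)

def nw_estimate(mu_eval, mu_obs, y_obs, h, leave_out=None):
    """Leave-one-out Nadaraya-Watson estimate of the ARF at mu_eval."""
    w = quartic_kernel((mu_eval - mu_obs) / h) / h
    if leave_out is not None:
        w = w.copy()
        w[leave_out] = 0.0                            # exclude observation i at its own evaluation point
    return np.sum(w * y_obs) / np.sum(w)

# illustration with placeholder indices mu_hat (= X_i' gamma_hat_k - log minimum wage) and outcomes y
rng = np.random.default_rng(0)
mu_hat = rng.uniform(0, 1, size=200)
y = np.sin(3 * mu_hat) + rng.normal(scale=0.1, size=200)
print(nw_estimate(mu_hat[0], mu_hat, y, h=0.1, leave_out=0))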
Once the ARFs are estimated, we can proceed to construct a synthetic prediction after the minimum wage changes as described in the main text.
|
http://arxiv.org/abs/2307.07582v1 | 20230714190728 | A novel mesh regularization approach based on finite element distortion potentials: Application to material expansion processes with extreme volume change | [
"Abhiroop Satheesh",
"Christoph P. Schmidt",
"Wolfgang A. Wall",
"Christoph Meier"
] | cs.CE | [
"cs.CE"
] |
§ ACKNOWLEDGMENTS
The authors acknowledge the financial support from the European Union's Horizon 2020 research and
innovation program under the Marie Skłodowska-Curie grant agreement No 764636.
|
http://arxiv.org/abs/2307.05471v1 | 20230711175622 | Scale Alone Does not Improve Mechanistic Interpretability in Vision Models | [
"Roland S. Zimmermann",
"Thomas Klein",
"Wieland Brendel"
] | cs.CV | [
"cs.CV"
] |
In light of the recent widespread adoption of AI systems, understanding the internal information processing of neural networks has become increasingly critical.
Most recently, machine vision has seen remarkable progress by scaling neural networks to unprecedented levels in dataset and model size.
We here ask whether this extraordinary increase in scale also positively impacts the field of mechanistic interpretability. In other words, has our understanding of the inner workings of scaled neural networks improved as well?
We here use a psychophysical paradigm to quantify mechanistic interpretability for a diverse suite of models and find no scaling effect for interpretability — neither for model nor dataset size.
Specifically, none of the nine investigated state-of-the-art models are easier to interpret than the GoogLeNet model from almost a decade ago. Latest-generation vision models appear even less interpretable than older architectures, hinting at a regression rather than improvement, with modern models sacrificing interpretability for accuracy.
These results highlight the need for models explicitly designed to be mechanistically interpretable and the need for more helpful interpretability methods to increase our understanding of networks at an atomic level.
We release a dataset containing more than 120'000 human responses from our psychophysical evaluation of 767 units across nine models.
This dataset is meant to facilitate research on automated instead of human-based interpretability evaluations that can ultimately be leveraged to directly optimize the mechanistic interpretability of models.
§ INTRODUCTION
Since the early days of deep learning, artificial neural networks have been referred to as black boxes: opaque systems that learn complex functions which cannot be understood, not even by the people who build and train them. Mechanistic interpretability <cit.> is an emerging branch of explainable AI (XAI) focused on understanding the internal information processing of deep neural networks by attempting to reverse engineer them — possibly by focusing on their atomic building blocks.
Designing interpretable neural networks and aligning their information processing with that of humans would not only satisfy academic curiosity but also constitute a major step toward trustworthy AI that can be employed in high-stakes scenarios.
A natural starting point for mechanistic interpretability research is to investigate the individual units of a neural network. For convolutional neural networks (CNNs), the individual output channels of a layer, called activation maps, are often treated as separate units <cit.>. A common hypothesis is that channel activations correspond to the presence of features of the input <cit.>. There is hope that by understanding which feature(s) a unit is sensitive to, one could build a fine-grained understanding of a model by identifying complex circuits within the network <cit.>. To learn about a unit's sensitivity, researchers typically focus on inputs that cause strong activations at the target unit, either by obtaining highly activating images from the training set (natural exemplars), or by generating synthetic images that highly activate the unit. The well-known method of feature visualization <cit.> achieves this through gradient ascent in input space (see <ref>). However, in practice, identifying a unit's sensitivity is far from trivial <cit.>. Historically, work on feature visualization has focused on the Inception architecture <cit.>, in particular, GoogLeNet, but in principle, both of these methods should work on arbitrary network architectures and models.
The starting hypothesis of this work is that the dramatic increase in both the scale of the datasets and the size of models <cit.> might benefit mechanistic interpretability. Evidence for this hypothesis comes from recent work showing that
models trained on larger datasets become more similar in their decisions to human judgments as measured by error consistency <cit.>. If models make more human-like decisions, this might hint at a closer alignment between the extracted features and human perception. In addition, as models get larger, they can dedicate more units to representing learned features without having to encode features in superposition <cit.>.
We conduct a large-scale psychophysical study (see <ref>) to investigate the effects of scale and other design choices and find no practically relevant differences between any of the investigated models. While scaling models and datasets has fuelled the progress made on many research frontiers <cit.>, it does not improve mechanistic interpretability. Neither scale nor the other design choices make individual units more interpretable on their own.
As our study shows, new model design choices or training objectives are needed to explicitly improve the mechanistic interpretability of vision models.
We expect the data collected in our study to serve as a starting point and test bed to develop cheap automated interpretability measures that do not require collecting human responses. These automated measures could pave the way for new ways to directly optimize model interpretability.
Therefore, we release the study's results as a new dataset (called ImageNet Mechanistic Interpretability) to foster new developments in this line of research.
§ RELATED WORK
The idea of investigating the information processing on the level of individual units in neural networks has a long history <cit.>, possibly inspired by work in the neuroscience community that investigates receptive fields of individual neurons <cit.>. The same holds for the technique of feature visualization, first proposed by <cit.>, developed further by <cit.>, and popularized by <cit.>. <cit.> present work on extending feature visualizations to ViTs. <cit.> experimented with imposing priors on feature visualizations to make them more similar to natural images.
Only years after the work on improving feature visualizations matured, their usefulness for understanding units was experimentally quantified by <cit.> and <cit.>, who found that feature visualizations are helpful but not more so than highly activating natural exemplars.
Much work on interpretability has focused on so-called post-hoc explanations, that is, explaining specific model decisions to end users <cit.>. In contrast, mechanistic interpretability <cit.>, the branch of XAI that we focus on here, is concerned with understanding the internal information processing of a model. See the review by <cit.> for a distinction and a broader overview of the field of XAI.
As <cit.> point out, it is vitally important to not only generate explanations that look convincing but also to conduct falsifiable hypothesis testing in interpretability research, which is what we attempt here. Furthermore, as <cit.> emphasize, interpretability should be evaluated in a human-centric way, a stance that motivates employing a psychophysical experiment with humans in the loop to measure interpretability.
The field of interpretability has always struggled with a lack of consensus about definitions and suitable measurement scales <cit.>. Several previous works <cit.> focus on measuring the utility of post-hoc explanations.
In contrast, we here are not primarily concerned with methods that explain model decisions to end-users, but instead focus on introspective methods that shed light on the internal information processing of neural networks.
Our psychophysical experiment builds on <cit.> and <cit.>, whose psychophysical task we expand and adapt for arbitrary models as outlined in <ref>.
§ METHODS
§.§ Measuring the Mechanistic Interpretability of Many Models
Selecting Models. We investigate nine computer vision models compatible with ImageNet classification <cit.>. These models span four different design axes, allowing us to analyze the influence of an increasing model scale on their interpretability. First, we look at the influence of model size in terms of parameter count, starting with GoogLeNet <cit.> at 6.8 million parameters and culminating in ConvNeXt-B <cit.> at 89 million parameters. Next, we look at various model design choices, such as increasing the width or depth of models (GoogLeNet vs. ResNet-50 <cit.> vs. WideResNet-50 <cit.> vs. DenseNet-201 <cit.>) and using different computational blocks (ViT-B <cit.> vs. ConvNeXt). Third, we scale training datasets up and compare the influence of training on 1 million ImageNet samples to pre-training on 400 million LAION <cit.> samples (ResNet-50 vs. Clip ResNet-50 <cit.> and ViT-B vs. Clip ViT-B <cit.>). Last, we test the relation between adversarial robustness and interpretability (ResNet-50 vs. Robust ResNet-50 <cit.>) as previous work <cit.> found adversarial robustness to be beneficial for feature visualizations.
Selecting Units. For each of the investigated models, we randomly select 84 units (see <ref>) by first drawing a network layer from a uniform distribution over the layers of interest and then selecting a unit, again at random, from the chosen layer. This scheme is used instead of randomly drawing units from a uniform distribution over all units since CNNs typically have a higher density of units in later layers. The layers of interest are convolution and normalization layers, as well as the outputs of skip connection blocks. We avoid the very first convolution layers since they can be interpreted more directly by inspecting their filters <cit.>. For GoogLeNet, we select only from the last layers of each inception block in line with earlier work <cit.>. For the ViT models, we adhere to the insights by <cit.> and only inspect the position-wise feedforward layers.
Performing & Designing the Psychophysics Experiment. As interpretability is a human-centric model attribute, we perform a large-scale psychophysical experiment to measure the interpretability of models and individual units. For this, we use the experimental paradigm proposed by <cit.> and <cit.>: Here, the ability of humans to predict the sensitivity of units is used to measure interpretability. Specifically, crowd workers on Amazon Mechanical Turk complete a series of 2-Alternative-Forced-Choice (2-AFC) tasks (see <ref> for an illustration). In each task, they are presented with a pair of strongly and weakly activating (query) images for a specific unit and are asked to identify the strongly activating one. During this task, they are supported by 18 explanatory (reference) images that strongly activate the unit, either natural dataset exemplars or synthetic feature visualizations. We begin by making the task as easy as possible by choosing the query images as the most/least activating samples from the ImageNet dataset. By choosing query images that cause less extreme activations, the task's difficulty can be increased and allows us to probe a more general understanding of the unit's behavior by participants. For details refer to <ref>.
Figure: Illustration of task design. Users see a set of nine maximally/minimally activating reference images (i.e., synthetic feature visualizations or natural exemplars) on the right/left side of the screen. In the center, two extremely activating natural query images are shown. Users need to pick the more positively activating query image (here, the bottom one) by pressing on a number indicating their confidence in their choice.
While we explain the task to the participants, we do not instruct them to use specific strategies to make their decisions to avoid biasing results. For example, we do not explicitly prompt them to pay attention to the colors or shapes in the images. Instead, participants complete at least five hand-picked practice trials to learn the task and receive feedback in all trials. Once they have successfully solved the practice trials, they are admitted to the main experiment, in which they see 40 real trials interspersed with five fairly obvious catch-trials. See <ref> for details on how trials are created. In all trials, subjects give a binary response and rate their confidence in their decisions on a three-point Likert scale. For each investigated model, we recruit at least 63 unique participants who complete trials for 84 randomly selected units of each model (see <ref>). This means every unit is seen by 30 different participants. Within each task, no unit is shown more than once.
We ascertain high data quality through two measures: First, by restricting the worker pool to experienced and reliable workers. Second, by performing quality checks and excluding participants who showed signs of not paying attention, such as failing to get all practice trials correct by the second attempt, failing to pass catch trials, taking too long, or being unreasonably quick. We also forbid workers to participate multiple times in our experiments to avoid biases introduced through learning effects. We keep recruiting new participants until 63 workers passed our quality checks per model. See <ref> for details.
We finally report the ratio of correct answers as a measure of a unit's interpretability.
As there are two options participants have to choose from, random guessing amounts to a baseline performance of 0.5.
We record >120'000 responses from >1'800 participants recruited over Amazon Mechanical Turk for 760 units spread across 9 models.
For more details, refer to <ref>.
§.§ Scaling Feature Visualization to Many Models
Feature visualization describes the process of synthesizing maximally activating images through gradient ascent on a unit's activation. While simple in principle, this process was refined to produce the best-looking visualizations (see <ref>). However, these algorithmic design choices and the required hyperparameters have predominantly been optimized for a single model — the original GoogLeNet. This poses a challenge when creating synthetic feature visualizations for different models, as required for a large-scale comparison of models such as ours: How should these hyperparameters be chosen for each model individually without introducing any biases to the comparison? While we cannot revisit all algorithmic choices, we develop an optimization procedure for setting the most crucial parameters, i.e., the number of optimization steps and the strength of the regularizer responsible for creating visually diverse images. In a nutshell, we stop optimization based on the achieved relative activation value and perform a binary search over the latter hyperparameter, to obtain feature visualizations that are comparable in terms of how well they activate a unit. For details, see <ref>. Unfortunately, there is no generally accepted method for generating feature visualizations for ViT models yet: While <cit.> present a method to generate visualizations for ViTs, we refrain from using it because one of the steps of their procedure seems hard to justify (see <ref>).
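As a rough sketch of this procedure, assuming a PyTorch model with a hook that returns the target unit's scalar activation; the transformation robustness, image parameterization, and the exact form of the diversity regularizer used in practice are omitted here, and the acceptance rule in the binary search is our illustrative choice rather than the authors' exact criterion:

import torch

def feature_visualization(unit_activation, image_shape=(1, 3, 224, 224), steps_max=512,
                          rel_target=0.95, lr=0.05, reference_act=None):
    """Gradient ascent on a unit's activation with a relative-activation stopping rule.

    unit_activation: callable mapping an image tensor to the target unit's scalar activation
                     (a placeholder for the hooked model); reference_act: activation level used
                     for the relative stopping criterion, e.g. that of the most activating exemplar.
    """
    img = torch.randn(image_shape, requires_grad=True)
    opt = torch.optim.Adam([img], lr=lr)
    for _ in range(steps_max):
        opt.zero_grad()
        act = unit_activation(img)
        (-act).backward()                             # ascend on the activation
        opt.step()
        if reference_act is not None and act.item() >= rel_target * reference_act:
            break                                     # stop once a target fraction of the reference is reached
    return img.detach()

def tune_regularizer(run_with_weight, target=0.9, tol=0.05, lo=0.0, hi=10.0, iters=10):
    """Binary search over the diversity-regularizer strength (schematic acceptance rule).

    run_with_weight(w) is assumed to run the visualization with regularizer weight w and to
    return the achieved activation relative to the reference.
    """
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        rel = run_with_weight(mid)
        if abs(rel - target) < tol:
            break
        if rel < target:
            hi = mid                                  # too much regularization: activation too weak
        else:
            lo = mid                                  # room to regularize more strongly
    return mid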
§ RESULTS
We now present and analyze the data we obtained through our psychophysical experiment. We look at how scaling models affects mechanistic interpretability (<ref>), compare feature visualizations and exemplars (<ref>), investigate systematic layer-dependence of interpretability (<ref>), and investigate the dependence of our results on task difficulty (<ref>).
Lastly, we introduce a dataset bundling the experimental data that we hope can lead to a paradigm shift in mechanistic interpretability research (<ref>).
Unless noted otherwise, error bars correspond to the 95th percentile confidence intervals of the mean of the unit average estimated through bootstrap sampling.
§.§ Scaling Models Does not Coincide with Improving Interpretability
We begin by visualizing the interpretability of the nine networks investigated in <ref> for both the natural and the synthetic conditions. We sampled models with different levels of scale (in terms of their model size, datasets, or training paradigms), but find little to no differences in their interpretability. Strikingly, the latest generation of vision models (i.e., ConvNeXT and ViT) performs worse than even the oldest model in this comparison (GoogLeNet).
We similarly see no improvements if we plot a model's interpretability against its classification performance on ImageNet.
In <ref>, we plot this relationship for both feature visualizations and natural exemplars. While models vary widely in terms of their classification performance (∼ 60 % to ∼ 85 %), their interpretability varies in a much narrower range. For feature visualizations, we see a decline in interpretability as a function of classification performance. For natural exemplars, we do not find any dependency between interpretability and classification performance.
These results highlight that mechanistic interpretability does not directly benefit from scaling effects, neither in model nor dataset size.
Figure: Higher classification performance does not come with higher interpretability. While the investigated models have strongly varying classification performance, as measured by the ImageNet validation accuracy, their interpretability shows less variation for both natural exemplars (blue) and synthetic feature visualizations (orange). More accurate classifiers are not necessarily more interpretable. For synthetic feature visualizations, there might even be a regression of interpretability with increasing accuracy.
§.§ Feature Visualizations are Less Helpful than Exemplars for all Models
The data in <ref> clearly shows that the findings by <cit.> generalize to models other than GoogLeNet: Feature visualizations do not explain unit activations better than natural exemplars, regardless of the underlying model.
This includes adversarially robust models, which have previously been argued to increase the quality of feature visualizations <cit.>. The idea was that for non-robust models, naive gradient ascent in pixel space leads to adversarial patterns. To overcome this problem, various image transformations, e.g., random jitter and rotations, are applied to the image over the course of feature visualization. As adversarially more robust models have less adversarial directions, one can hope to obtain visualizations that are visually more coherent and less noisy.
There is indeed a substantial and significant increase in performance in the synthetic condition for the robust ResNet-50 over the normal ResNet-50. In fact, this model significantly outperforms all models except GoogLeNet (see <ref>). Nevertheless, it remains true that natural exemplars are still far more helpful.
To see whether well-interpretable units for one interpretability method are also well-interpretable for the other, we visualize them jointly in <ref>. Here, we find a moderate correlation between the two for a few models but no general trend.
§.§ Which Layers are More Interpretable?
In light of the small differences between models regarding the average per-unit interpretability, we now zoom in and ask whether there are rules to identify well-interpretable units within a model.
A unit’s interpretability is not well predicted by its layer’s position relative to the network depth (i.e., early vs. late layers).
In <ref>, we visualize the recorded interpretability scores for all investigated layers as a function of their relative position.
[Note that the layer position is not precisely defined for layers computed in parallel, e.g., in the Inception blocks of the GoogLeNet architecture.]
We average the interpretability over all investigated units from a layer to obtain a single score per layer.
To check for correlations between layer position and interpretability, we compute Spearman's rank correlation for the data of each model.
For most models, we do not see a substantial correlation. However, two notable outliers exist: the Clip ResNet and Clip ViT. A strong and highly significant correlation can be found for both of them.
We find much smaller correlations for the same architectures trained on smaller datasets (i.e., ResNet and ViT, trained on ImageNet-2012).
We thus conclude that (pre-)training on large-scale datasets might benefit the interpretability of later layers while sacrificing that of early layers.
§.§ Do our Findings Depend on the Difficulty of the Task?
As outlined in <ref>, the difficulty of the task used to quantify interpretability depends on how the query images (i.e., the images users need to identify as the more/less strongly activating image) are sampled. So far, we have made the task as easy as possible: the query images were chosen as the most/least strongly activating samples from the entire ImageNet dataset. In this easy scenario, the models were all substantially more interpretable than a random black box (for which we would expect a proportion correct of 0.5). We now ask: Are these models still interpretable in a (slightly) stronger sense, or do their decisions become incomprehensible to humans when increasing the task's difficulty ever so slightly? For this, we repeat our experiment for two models (ResNet-50 and Clip ResNet-50) with query images that are now sampled from the 95th (medium difficulty) or 85th (hard difficulty) percentile of the unit's activations. As the interpretability score for synthetic feature visualizations is already fairly low in the previously tested easy condition (cf. <ref>), we do not test them in the hard condition. Note that the reference images serving as explanations are always chosen from the very end of the distribution of activations, i.e., they are the same for all three difficulties.
Figure (two panels): Human performance decreases with increasing task difficulty. We increase the task difficulty by not using the most strongly/weakly activating images as the query images (easy) but instead sampling them from the 95th (medium) or 85th (hard) percentile. We see a decrease in human performance with increasing difficulty. Strikingly, even a small change in the sampling (easy vs. medium) leads to stark performance decreases when using natural exemplars (left), showing that human understanding of a unit's overall behavior is relatively limited. For the synthetic feature visualizations, the performance is reduced close to chance level by this small change (right).
The results in <ref> show a drastic drop in performance when making the task only slightly more difficult (medium). For the synthetic feature visualizations, performance is reduced close to chance level. When looking at how the performance changes per unit (see <ref>), we see that for almost all units, the measured interpretability scores do indeed follow the defined difficulty levels, meaning that humans perform best in the easy and worst in the hard task.
But is this a fair modification of the task or does it make the task unreasonably difficult? If the distribution of activations for a unit across the entire dataset was multimodal with small but pronounced peaks at the end for strongly activating images and if we assume each of these modes corresponds to different behavior, making the task harder as described above would be unfair: When the query images are sampled from the 95th percentile while the reference images are still sampled from the distribution's tail, these two sets of images could come from different modes, which might correspond to different types of behavior, making the task posed to users less meaningful. However, we find a unimodal distribution of activations that smoothly tapers out (see <ref>). In other words, the query images used in the harder conditions are in the same mode of unit activation as the ones from the easy condition, and we would, therefore, expect them to also be in a similar behavioural regime.
§.§ IMI - A Dataset to Learn Automated Interpretability Measures
The results above paint a rather disappointing picture of the state of mechanistic interpretability of computer vision models:
Just by scaling up models and datasets, we do not get increased interpretability for free, suggesting that if we want this property, we need to explicitly optimize for it.
One hurdle for research in this direction is that experiments are costly due to the requirement of human psychophysical evaluations. While those can be afforded for some units of a few models (as done in this work), it is infeasible to evaluate an entire model or even multiple models fully. However, this might be required for developing new models that are more interpretable. For example, applying the experimental paradigm used in this work to each of the roughly seven thousand units in GoogLeNet would amount to obtaining more than 200 thousand responses costing around 25 thousand USD.
One conceivable way around this limitation is to remove the need for human evaluations by developing automated interpretability evaluations aligned with human judgments. Put differently, if one had access to a model that can estimate the interpretability of a unit (as perceived by humans), we could potentially leverage this model to directly optimize for more interpretable models.
To enable research on such automated evaluations, we release our experimental results as a new dataset called ImageNet Mechanistic Interpretability (IMI). Note that this is the first dataset containing interpretability measurements obtained through psychophysical experiments for multiple explanation methods and models. The dataset contains >120'000 anonymized human responses, each consisting of the final choice, a confidence score, and a reaction time. Out of these >120'000 responses, > 69'000 passed all our quality assertions while the rest failed (some of) them. We consider the former the main dataset and provide the latter as data for development/debugging purposes. Furthermore, the dataset contains the used query images as well as the generated explanations for >760 units across nine models.
The dataset itself should be seen as a collection of labels and meta information without the presence of fixed features that should be predictive of a unit's interpretability. Moreover, finding and constructing features that are predictive of the recorded labels will be one of the open challenges posed by this line of research.
We illustrate how this dataset could be used by trying to predict a unit's interpretability from the pattern of its activations in <ref> in two examples: First, we test the hypothesis that easier units are characterized by a clearly localized peak of activation within the activation map, while for harder units, the activation is more distributed, making it harder for humans to detect the unit's sensitivity.
However, we do not find a reliable relationship between measures for the centrality of activations, e.g. the local contrast of activation maps, and the unit's interpretability.
Second, we analyze whether more sparsely activated units, i.e., units sensitive to a very particular image feature, are easier to interpret as the unit's driving feature might be easier to detect and understand by humans. Similar to the other hypothesis, we also do not find a meaningful relation between the sparseness of activations and a unit's interpretability.
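The two unit-level statistics referred to above can be computed along the following lines; the concrete definitions used here (peak-to-mean contrast of a spatial activation map, and the fraction of near-zero activations across images) are our illustrative choices rather than the exact measures from the analysis:

import numpy as np

def activation_map_contrast(act_map):
    """Peak-to-mean contrast of one spatial activation map (higher = more localized)."""
    act_map = np.asarray(act_map, dtype=float)
    return act_map.max() / (np.abs(act_map).mean() + 1e-8)

def activation_sparsity(unit_activations, eps=1e-3):
    """Fraction of images on which the unit is essentially inactive."""
    unit_activations = np.asarray(unit_activations, dtype=float)
    return float(np.mean(np.abs(unit_activations) < eps))

# illustration with placeholder data: one 14x14 activation map, and per-image activations of a unit
rng = np.random.default_rng(0)
print(activation_map_contrast(rng.normal(size=(14, 14))))
print(activation_sparsity(rng.normal(size=1000)))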
We deliberately do not suggest a fixed cross-validation split: Depending on the intended use case of models fit on the data, different aspects must be considered resulting in other splits. For example, when building a metric that has to generalize to different models, another split might be used than when building a measure meant to work for a single model only. For that reason, we recommend researchers to follow best practices when training models on our dataset.
§ DISCUSSION & CONCLUSION
Discussion
Due to the costly nature of psychophysical experiments involving humans, we cannot test every vision model but had to make a selection. To perform the most meaningful comparisons and obtain as informative results as possible, we chose the four design axes outlined above and models representing different points along each axis. For some axes, we did not test all conceivable models, such as the largest vision model presented so far <cit.> as the weights have not been released yet. However, based on the trends in the current results, it is unlikely that the picture would drastically change when considering more models.
An explicit assumption of the approach to mechanistic interpretability investigated here is that feature representations are axis aligned, i.e., features are encoded as the activations of individual units instead of being encoded using a population code. This can be motivated by the fact that human participants do not fail in our experiments completely — they achieve better than chance-level performance. Therefore, this approach of investigating a network does not seem to be entirely misguided, but that alone does not exclude other coding schemes.
[See work by <cit.> for further arguments.]
Furthermore, <ref> reveals that the two interpretability methods we investigated here are only partially correlated, so other explanation methods might come to different conclusions.
Assessing the interpretability of neural networks remains an ongoing field of research, with no clear gold standard yet. This work utilizes an established experimental paradigm to quantify human understanding of individual units within a neural network. While it is possible that construction of a new paradigm may alter the results, we contend that the employed experimental paradigm closely mirrors how mechanistic interpretability is applied in practice.
Additionally, one could argue that the models analyzed in this work are already interpretable — we just have not discovered the most effective explanation method yet. Although this is theoretically possible, it is important to note that we employed the two best and most widely-used explanation methods currently available, and we were unable to detect any increase in interpretability when scaling models up. We encourage further research on interpretability methods.
Conclusion In this paper, we set out to answer the question: Does scale improve the mechanistic interpretability of vision models? By running extensive psychophysical experiments and comparing various models, we come to the conclusion that none of the investigated axes seem to positively affect model interpretability: Neither the size of the model, nor that of the dataset, nor model architecture or training scheme improve interpretability. This result highlights the importance of building more interpretable models: unless we explicitly optimize for this property, we do not get it for free by just increasing downstream task performance. We believe that the benchmark dataset we released can play an important enabling role in this line of research.
§ ACKNOWLEDGEMENTS
We thank Felix Wichmann, Evgenia Rusak, Robert-Jan Bruintjes, Robert Geirhos, Matthias Kümmerer, and Matthias Tangemann for their valuable feedback.
This work was supported by the German Federal Ministry of Education and Research (BMBF): Tübingen AI Center, FKZ: 01IS18039A. WB acknowledges financial support via an Emmy Noether Grant funded by the German Research Foundation (DFG) under grant no. BR 6382/1-1 and via the Open Philantropy Foundation funded by the Good Ventures Foundation. WB is a member of the Machine Learning Cluster of Excellence, EXC number 2064/1 – Project number 390727645. This research utilized compute resources at the Tübingen Machine Learning Cloud, DFG FKZ INST 37/1057-1 FUGG.
The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting RSZ and TK.
§ METHODOLOGICAL DETAILS
§.§ Measuring the Mechanistic Interpretability of Many Models
To measure the interpretability afforded by a model, we extend the paradigm established by <cit.>. Participants in our study complete a sequence of 2-Alternative-Forced-Choice (2-AFC) trials, where each trial measures the interpretability of one unit of a network. In each trial, participants are presented with two so-called query images, sourced from the training set of ImageNet. One query image is highly positively activating for the investigated unit, i.e., feeding this image through the network would cause a large positive activation at the target unit. In contrast, the other query image is highly negatively activating. Participants are tasked with determining which of the two query images is the positive one. To do so, they are presented with two sets of nine reference images which characterize the unit. One set contains highly positively activating images, while the other contains highly negatively activating images. In the natural condition, these reference images are other natural images, whereas, in the synthetic condition, the reference images are synthetic images generated by Feature Visualization. See <ref> for an example of one trial in the natural condition. We phrase the task by asking which set of reference images fits the positive query image better so that participants can be completely agnostic with respect to the true semantics of the task. We also do not give overly specific instructions to avoid biasing the participants' behavior. Instead, participants learn the task by completing at least five hand-picked practice trials at the beginning of the experiment. Participants give a binary response and rate their confidence in their decision on a three-point Likert scale.
§.§ Sampling Images for the Psychophysical Tasks
The difficulty of an individual trial depends to a certain degree on the specific images that are shown in the trial. To avoid biasing the results for an individual unit, we do not select only the single highest/lowest activating image as a query image; instead, we create t=10 different trials for each unit. For each of these, we collect responses from crowd workers thrice.
In the following, we describe the stimuli selection process for positively activating images, with negatively activating images being selected analogously. This procedure follows <cit.> who also illustrate the approach.
First, we select the top t images for each unit, to be used as query images, where t is the number of unique trials to be generated. Second, we select the next 9· t top activating images as candidates for reference images. The set of reference images for a trial should not simply be chosen from the topmost candidates following the query images, since that would likely create trials of decreasing difficulty. Instead, we divide the range of candidate images into 9 groups of t images each and create a set of reference images by sampling one image from each of the 9 groups without replacement. We initially create t=20 trials but use only 10 of those, keeping the rest for an anticipated later experiment.
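The grouping scheme above can be condensed into a few lines. The following sketch only illustrates the sampling logic described here; the function and argument names are ours and not taken from the released benchmark code, and the negatively activating side is handled analogously.

import random

def build_trials(ranked_images, t=10, n_refs=9, seed=0):
    # ranked_images: image ids sorted from most to least (positively) activating
    rng = random.Random(seed)
    queries = ranked_images[:t]                    # top-t images serve as query images
    candidates = ranked_images[t:t + n_refs * t]   # next 9*t images are reference candidates
    # split the candidate range into n_refs groups of t images each
    groups = [candidates[i * t:(i + 1) * t] for i in range(n_refs)]
    trials = []
    for q in queries:
        # one reference per group, sampled without replacement across trials
        refs = [g.pop(rng.randrange(len(g))) for g in groups]
        trials.append({"query": q, "references": refs})
    return trials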
§.§ Amazon Mechanical Turk
Our psychophysical study is conducted on Amazon Mechanical Turk to meet the requirement of scale. For each investigated model, we recruit at least 63 unique participants, each of which completes 40 real trials and five catch-trials with obvious, hand-picked stimuli, resulting in 2'520 usable trials per model and condition, for 40'320 trials in total.
To maintain high data quality, we exclude participants who do not fulfill certain criteria. First of all, we restrict participation in our experiment to countries in which workers can be expected to be adequately proficient in English and in which completion of our click-work at the expected hourly wage is not unreasonably more profitable than other work, which we deemed unethical. Specifically, we restrict participation to the USA, Canada, Great Britain, Australia, New Zealand, and Ireland. As a second barrier, we only offer our Human Intelligence Task (HIT) to experienced workers who have submitted at least 2'000 HITs for which the response was approved. To ascertain high reliability, we further restrict the pool to workers whose approval rate is at least 99%. Of course, we also prevent workers from participating in our experiments more than once[Due to technical issues some workers participated more than once. However, we exclude their data in our analysis and recollect the missing data.].
Even if workers meet the aforementioned requirements, they might still be distracted during the experiment or give random answers to quickly finish the experiment (e.g., if they are unmotivated or frustrated due to the task difficulty). Therefore, we filter our data further. To use only data from workers who understand the task, we only accept HITs that require no more than three attempts at solving the demo trials and reject workers who spend less than 15 seconds reading the instructions.
To catch workers who just click mindlessly, we exclude responses in which fewer than four of our five catch-trials were answered correctly and responses that take the worker less than 135 seconds overall. On the other hand, we also reject responses that take them longer than 2'500 seconds since it can be assumed that these workers interrupted their work.
We also reject responses in which participants select the same query image (as in, the upper / lower one) in more than 90% of trials.
Due to our exclusion criteria, we recruit more than 63 participants per model since we keep recruiting until 63 workers pass the criteria. The responses of the workers who have not passed these checks are not used in our analysis but are included in our IMI dataset.
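Taken together, the checks above amount to a simple per-response filter. The sketch below paraphrases these criteria; the field names of the response record are assumptions, not the actual data schema.

def passes_quality_checks(resp):
    same_side_rate = max(resp["upper_clicks"], resp["lower_clicks"]) / resp["n_trials"]
    return (
        resp["demo_attempts"] <= 3                 # solved the practice trials quickly
        and resp["instruction_seconds"] >= 15      # actually read the instructions
        and resp["catch_correct"] >= 4             # at least 4 of 5 catch trials correct
        and 135 <= resp["total_seconds"] <= 2500   # neither rushed nor interrupted
        and same_side_rate <= 0.9                  # not always clicking the same query position
    )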
We select 84 units of each model so that every unit is seen by 30 different participants since, within each task, no unit is shown more than once.
All procedures conform to Standard 8 of the American Psychological Association’s “Ethical Principles of Psychologists and Code of Conduct” (2016).
Participants are compensated at a targeted hourly rate of 15 USD, which amounts to 2.79 USD per task.
§.§ Scaling Feature Visualization to Many Models
A fundamental problem with using natural images to characterize the receptive field of individual units (apart from idiosyncrasies of the used dataset) is that visual features do not usually appear in isolation, resulting in ambiguity. For example, highly activating ImageNet-exemplars for a unit that is sensitive to feathers would probably depict birds, which makes it hard to isolate feathers as the crucial visual feature, instead of beaks, claws, or a background of greenery or blue sky.
The promise of Feature Visualization is to circumvent these limitations by synthetically generating images that only contain visual features contributing to high unit activation. The procedure starts with an initial random noise image and performs gradient ascent on the activation achieved by this image at the unit of interest. Following established work <cit.>, a unit is defined as one feature map of a convolutional layer, where the activation across the feature map is aggregated by calculating the mean, just like for natural stimuli. To prevent mode collapse of the generated batch of feature visualizations, i.e. to truthfully capture the receptive field of so-called polysemantic units that show sensitivity to multiple different concepts, a regularization term is added to the loss to diversify the images.
We build on an existing implementation <cit.> and extend it to support various models flexibly. Previous implementations had two critical hyperparameters: the number of gradient ascent steps to be performed and the weight used for the diversity term. As earlier work mainly focused on the GoogLeNet model, hyperparameters were tuned for it. We find, however, that these fixed values do not generalize well to other models, but their optimal[Judged by the first authors.] values heavily depend, among other factors, on the model and location of the unit within the network - in extreme cases, the ideal value can even be different for two units of the same layer in the same network. Therefore, using any fixed value would introduce an unfair bias for or against some models. Furthermore, since a larger weight for the diversity term hinders the optimization, the number of necessary gradient ascent steps depends partially on the diversity weight, meaning these parameters cannot be set independently.
To overcome the latter problem of choosing an appropriate number of optimization steps, we implement an adaptive procedure that interrupts the optimization when the gradients become small. The procedure performs at least 2'500 steps of gradient ascent and records a trajectory of the observed gradient magnitude. We smooth these trajectories with a large sliding window and halt optimization once the average gradient magnitude in the last window is larger than in the second-to-last window.
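A minimal sketch of this stopping rule is given below; the window length is an assumption, since the text only specifies that a large sliding window is used.

import numpy as np

def should_stop(grad_norms, min_steps=2500, window=500):
    # run at least min_steps of gradient ascent, then halt once the smoothed
    # gradient magnitude stops decreasing (last window larger than the previous one)
    if len(grad_norms) < max(min_steps, 2 * window):
        return False
    last = np.mean(grad_norms[-window:])
    prev = np.mean(grad_norms[-2 * window:-window])
    return last > prev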
To solve the first problem, we determine the diversity weight for each unit individually as follows. We first record the maximum and minimum activation achieved by natural dataset samples for the unit. Then, we generate feature visualizations without diversity and assert that they achieved a stronger activation. We then try to find the largest possible diversity value that still produces images that achieve at least as strong activations as all dataset samples. To do so, we first perform an exponential search starting at a diversity of 1, increasing by a factor of 10 in each step. Once the value becomes too large, we perform 6 steps of binary search between the largest diversity value still known to work and the final value tested in the exponential search. If no value tested during the binary search worked, we return the lower bound of the search range, i.e. the images generated in the end are always guaranteed to be at least as activating as the strongest natural images. Generating one batch of Feature Visualizations, i.e., one step of the procedure, takes between two and 90 minutes on an Nvidia 2080Ti GPU, depending mostly on the width of the layer of the unit, since the diversity term scales quadratically.
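In pseudocode-like form, the search for the diversity weight reads as follows. The callable make_visualizations is an assumption standing in for one full feature-visualization run; it is expected to return the weakest activation achieved by the generated batch at the given diversity weight.

def find_diversity_weight(make_visualizations, strongest_natural_activation, n_binary_steps=6):
    def works(weight):
        return make_visualizations(weight) >= strongest_natural_activation

    lo, hi = 0.0, 1.0
    # exponential search: increase the weight tenfold until it fails
    while works(hi):
        lo, hi = hi, hi * 10.0
    # binary search between the largest working weight and the first failing one
    best = lo
    for _ in range(n_binary_steps):
        mid = (lo + hi) / 2.0
        if works(mid):
            lo = best = mid
        else:
            hi = mid
    return best  # falls back to the lower bound if no midpoint worked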
For ViTs, feature visualization could theoretically be performed using the same method by maximizing the activation at the position-wise feedforward layers. However, just applying the existing methodology does not lead to visually coherent images. <cit.> present a method for adapting the procedure to ViTs that seems to produce intelligible images, but one step of their algorithm just adds large-scale noise to the visualizations, effectively performing a random search in image space to find activating images. Removing this augmentation or reducing the scale of the noise leads to unintelligible images again. In light of these issues, we chose not to evaluate ViTs in the synthetic condition.
§.§ A Priori Power Analysis
A central question for the experimental design of this study is how many units need to be sampled per model to obtain a result representative of the entire model. Answering this question is non-trivial as there might be large inter-unit interpretability differences within one model. Indeed, this is what we observe, as displayed in <ref>. While the most naive approach would be to test all units, this is infeasible due to the associated financial costs. Therefore, we need to find a trade-off between these considerations and keep the number of sampled units as low as possible while still getting representative results. Put differently: What is the lowest number of units one can select while still being reasonably sure that the found effect is statistically significant?
To answer this question, we first ran a pilot study where we controlled for inter-participant differences by showing stimuli from two models (GoogLeNet and Robust ResNet-50) to the same subjects. Participants in this pilot were the study's first authors and other lab members. This means that the obtained data is of high quality, and we can be confident that all participants understood the task. The mean difference in the proportion of correctly completed trials came out to be 0.1, with standard deviations of 0.15 for both interpretability methods, resulting in a relatively large effect size, with Cohen's d of 0.67. Irrespective of concerns of statistical significance, we deem an effect of this size to be practically relevant; in other words, if the difference in interpretability between two models would be at least 10 percentage points, we would consider this practically relevant. To determine the required number of sampled units at these effect sizes, we then performed an a-priori power analysis using the software G*Power <cit.> — a standard tool widely used in psychology and the social sciences. To avoid unrealistic assumptions about the shape of the distribution of measurements (the normality-assumption of the t-test will almost certainly not be met because the data points are proportions expected to lie between 0.5 and 1.0), we opted for the non-parametric Mann-Whitney-U test. We assumed an α-level of 0.01 (subject to Bonferroni-correction to safely conduct up to five significance tests on the same data) and a β-level of 0.95. This analysis yields that at least 86 units are required.
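The reported figure can be reproduced approximately with standard power-analysis tools. The snippet below uses the usual approximation of the Mann-Whitney-U sample size via the t-test sample size divided by the asymptotic relative efficiency 3/π (the same convention G*Power follows), so small deviations from the exact G*Power output are expected.

from math import pi
from statsmodels.stats.power import TTestIndPower

d = 0.1 / 0.15                     # Cohen's d from the pilot study (~0.67)
n_t = TTestIndPower().solve_power(effect_size=d, alpha=0.01, power=0.95, alternative="two-sided")
n_mw = n_t / (3 / pi)              # A.R.E. correction for the Mann-Whitney-U test
print(round(n_t), round(n_mw))     # the second value comes out close to the 86 units quoted above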
However, the situation is further complicated by the fact that we are comparing values of which we cannot actually take a continuous measurement since we aggregate binary trials to estimate the proportion of correctly completed trials for each unit, i.e. there is measurement noise. This can be modeled as a Binomial distribution, characterized by the parameter p, the probability of answering correctly in any given trial for units of this model. This gives rise to the question of how many measurements we should take per unit to be able to assess an individual unit’s interpretability with any confidence. Accepting a standard deviation of 0.1 in the estimate of each unit’s p results in 30 independent trials per unit.
Another consideration is how many trials one participant can be asked to complete. Earlier work presented up to 24 trials to each participant under similar conditions <cit.>. Still, again we might be interested in accurately estimating the participant’s performance, and each participant incurs some fixed cost for the time spent instructing them and completing the practice trials. On the other hand, MTurk HITs are typically very short. Constructing long tasks, e.g. of 100 trials or more, would increase the risk of participants losing focus or becoming frustrated and just answering randomly. We deemed 55 trials per participant (40 real trials, 10 instruction trials, and 5 catch trials) a suitable balance of these concerns.
Finally, the required number of participants is the total number of trials divided by the number of trials per participant. The total number of trials is, of course, the number of units times the number of necessary measurements per unit, so that 86 · 30 / 40 participants would be needed. As this is not an integer, we opt for using 84 units instead, which brings the number of needed participants to 63.
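The head count follows from elementary arithmetic, which can be checked directly (a sanity check, not part of the original analysis code):

from math import sqrt

units, trials_per_unit, trials_per_worker = 84, 30, 40
participants = units * trials_per_unit / trials_per_worker   # = 63
sd_per_unit = sqrt(0.5 * 0.5 / trials_per_unit)              # ~0.09, i.e. roughly the accepted 0.1
print(participants, round(sd_per_unit, 3))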
§ FURTHER EXPERIMENTAL RESULTS
§.§ Extended Visualizations of Results in <ref>
§.§ Analysis of Quality Checks
§.§ Distribution of Activations
§.§ Are Activation Patterns in Feature Maps Predictive of a Unit's Interpretability?
Since we observe large differences in unit-wise interpretability across all networks, a logical research direction is to find out what drives these differences. As an example, we investigate two hypotheses here.
Contrast. First, we investigate whether there is a relationship between a unit's interpretability and the local contrast in the activation maps caused by its highly activating images. This can be motivated by the idea that if a feature is concentrated at one location in the image, it might be easier to be detected by human observers than if the activation is distributed across the image.
We visualize the relationship between a unit's interpretability and the computed contrast in its activation maps in <ref>. There does not appear to be a strong relationship between the two, as supported by a low Spearman's rank correlation (r = -0.042, p = 0.714).
Sparseness. Second, we analyze whether the sparseness of activations in a feature map is predictive of a unit's interpretability. This is motivated by the argument that units that sparsely fire over a large dataset are sensitive to a particular image feature that might be easier for humans to detect and understand.
To test this, we compute the fraction of non-positive values (i.e., zeros after ReLU activation) in a unit's feature map averaged over the ImageNet validation set. The resulting data and the units' interpretability scores are shown in <ref>. As before, we see only a weak, non-significant relation between the two (r = 0.10, p = 0.39).
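Both analyses boil down to computing one scalar statistic per unit and rank-correlating it with the measured interpretability scores. A sketch of the sparseness variant is given below; the tensor layout is an assumption.

import numpy as np
from scipy.stats import spearmanr

def sparseness(feature_maps):
    # fraction of non-positive entries (zeros after ReLU) in one unit's feature
    # maps, averaged over a set of images; feature_maps has shape (n_images, h, w)
    return float(np.mean(feature_maps <= 0))

def correlate_with_interpretability(per_unit_maps, interpretability_scores):
    stats = [sparseness(m) for m in per_unit_maps]
    return spearmanr(stats, interpretability_scores)   # (r, p) as reported above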
§ BROADER IMPACTS
We expect the broader impacts of our work to be positive since advancements made with respect to the interpretability of AI systems should increase their transparency and fairness. However, as is always the case for interpretability work, explanations can also give users a false sense of trust in the explained model. This can lead to the deployment of models that, under real-world conditions, give incorrect or undesired results. Too much trust in AI systems can also lead to their deployment in areas that are better left in human hands for ethical reasons, such as policing or the justice system. Apart from these general and high-level concerns, we see no direct way in which someone could use the findings and data presented here to cause harm, especially since we do not build an interpretability method but investigate whether models are interpretable.
§ COMPUTATIONAL AND FINANCIAL COST
The by far most computationally intensive aspect of this work is the creation of stimuli for the experiments, which can be further subdivided into collecting natural exemplars and producing feature visualizations. The former point is negligible since all that is required is one forward pass over the ImageNet training set for each model. We record the activations on Nvidia 2080Ti GPUs and perform multiple forward passes due to memory constraints, but even if we assume a pessimistic 4 hours of GPU time and full utilization of the GPU at 250 W, this results in 9 kWh power consumption for all models in total. Creating feature visualizations for 100 randomly selected units — we later randomly sample 84 units for each model and kept some stimuli for anticipated later experiments — requires the parallel use of 25 2080Ti GPUs for about 12 hours for all models except ConvNeXt, which takes about 24 hours on average. Since this is done for only seven models because we do not generate feature visualizations for the ViTs, the required electricity amounts to 600 kWh.
Assuming our country's consumer electricity price of 0.4812 € / kWh and the country's typical CO2 emissions per kWh of 428 g CO2e / kWh, both of which are pessimistic estimates given that the experiments ran in a local academic datacenter, these requirements translate to about 300 USD and 256 kg of CO2 equivalent emissions.
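These figures follow from the stated assumptions by straightforward arithmetic; a quick check (rounding differences against the quoted numbers are to be expected):

gpu_kwh = 9 + 600                       # exemplar collection plus feature visualization
eur_per_kwh, co2_kg_per_kwh = 0.4812, 0.428
print(gpu_kwh * eur_per_kwh)            # ~293 EUR, i.e. roughly 300 USD
print(gpu_kwh * co2_kg_per_kwh)         # ~261 kg CO2e, close to the quoted figure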
The financial cost of this work is dominated by crowdworker compensations. As outlined in <ref>, workers are compensated at an hourly wage of 15 USD, or 2.79 USD / HIT. Since all workers are compensated, even if the results of their HIT do not pass our quality checks, the total cost incurred by the experiment (including the fees paid to MTurk) amounts to around 12'000 USD.
§ FURTHER SCREENSHOTS OF PSYCHOPHYSICS TRIALS
|
http://arxiv.org/abs/2307.05571v1 | 20230710071220 | Average of Central L-values for GL(2)$\times$GL(1), Hybrid Subconvexity, and Simultaneous Nonvanishing | [
"Liyang Yang"
] | math.NT | [
"math.NT"
] |
Average of Central L-values for GL(2)×GL(1), Hybrid Subconvexity, and Simultaneous Nonvanishing
Liyang Yang
August 12, 2023
===================
We employ a regularized relative trace formula to establish a second moment estimate for twisted L-functions across all aspects over a number field. Our results yield hybrid subconvex bounds for both Hecke L-functions and twisted L-functions, comparable to the Weyl bound in suitable ranges. Moreover, we present an application of our results to address the simultaneous nonvanishing problem.
§ INTRODUCTION
Central L-values of modular forms play important roles in number theory and arithmetic geometry. The relative trace formula, introduced in <cit.>, has emerged as a powerful analytic tool for studying the average behavior of central L-values for holomorphic cusp forms. Building upon this, <cit.> extended the analysis to include Hilbert modular forms over totally real fields. In this article, we employ a regularized relative trace formula to investigate central values of general automorphic L-functions for GL(2)×GL(1) over a number field. Our approach yields several new results, including a second moment estimate that encompasses all aspects and incorporates stability concepts from <cit.>, hybrid-type subconvexity bounds for both Hecke L-functions and twisted L-functions that can rival the strength of the Weyl bound in the appropriate range, and an improved bound on simultaneous nonvanishing in the level aspect.
§.§ Hybrid Second Moment Involving Stability
Our first result is the following bound towards the second moment of twisted L-functions.
Let F be a number field with ring of adeles 𝔸_F. Let χ be a Hecke character of 𝔸_F^×/F^× with arithmetic conductor Q=C_(χ). Let 𝔐 be an integral ideal of norm M. For v|∞, let c_v, C_v, T_v>0. Set T=∏_v|∞T_v. Let Π_∞=⊗_v|∞Π_v be an irreducible admissible generic representation of GL(2)/F_∞. Let 𝒜_0(Π_∞,𝔐;χ_∞,ω) be the set of cuspidal automorphic representations π=⊗_vπ_v of GL(2)/F with central character ω such that π_=⊗_v<∞π_v has arithmetic conductor dividing 𝔐, and π_v⊗χ_v≃Π_v has uniform parameter growth of size (T_v;c_v,C_v), for all v|∞, cf. §<ref>. Then
∑_π∈𝒜_0(Π_∞,𝔐;χ_∞,ω)|L(1/2,π×χ)|^2≪ (TMQ)^ε(TM+T^1/2Q·1_M≪ Q^2(M,Q)),
where the implied constants depend on ε, F, c_v, and C_v, v|∞.
Theorem <ref> is an analog and extension of the results in <cit.> from Hilbert modular forms on an anisotropic quaternion algebra to cuspidal automorphic representations of GL(2) over general number fields. The estimate (<ref>) incorporates the explicit dependence on the spectral parameter T by utilizing Nelson's test function at the archimedean places. Notably, there are no restrictions on the arithmetic conductors, allowing M and Q to be arbitrary.
The condition 1_M≪ Q^2(M,Q) in (<ref>) captures the stability of regular orbital integrals, akin to the treatment in <cit.>, although the specific regular orbital integrals under consideration differ significantly.
For F=ℚ, with Π_∞ being a holomorphic discrete series of SL(2), and χ as a Dirichlet character, Theorem <ref> implies the following.
Let k≥ 2 and N≥ 1. Let χ be a primitive Dirichlet character modulo q. Then
∑_f∈ℱ_k^new(N)|L(1/2,f×χ)|^2≪ (kNq)^ε(kN+k^1/2q·1_N≪ q^2(N,q)),
where the implied constant depends only on ε. Here ℱ_k^new(N) is an orthogonal basis of normalized new forms that are holomorphic Hecke eigenforms with weight k and level N, and have trivial nebentypus.
Note that (<ref>) improves <cit.> by explicating the dependence on k, and allowing for arbitrary values of N and q.
§.§ Hybrid Weyl Subconvex Bounds
Dropping all but one terms on the left hand side of (<ref>) we then obtain the following hybrid bound for twisted L-functions.
Let F be a number field with ring of adeles 𝔸_F. Let π be either a unitary cuspidal automorphic representation of GL(2)/F or a unitary Eisenstein series. Let χ be a Hecke character of 𝔸_F^×/F^×. Suppose that π_v⊗χ_v has uniform parameter growth of size (T_v;c_v,C_v), for all v|∞, cf. §<ref>. Then
L(1/2,π×χ)≪ C(π⊗χ)^ε[T^1/2C_(π)^1/2+T^1/4C_(χ)^1/2],
where the implied constant depends on ε, F, c_v, and C_v, v|∞. In particular,
L(1/2,π×χ)≪_π_∞,χ_∞,F,ε C_(π×χ)^1/6+ε
if (C_(π),C_(χ))=1 and C_(π)^1-ε≪ C_(χ)≪ C_(π)^1+ε.
When considering a CM extension E/F, where π corresponds to a Hilbert modular form over F and σ_Ω represents the theta series associated with an ideal class group character Ω of E, a hybrid variant of (<ref>) for L(1/2, π×σ_Ω) has been established in <cit.> through the utilization of a relative trace formula on a quaternion algebra. This relative trace formula, together with a selection of local test function, has further been employed in <cit.> to derive a hybrid subconvexity outcome in a similar fashion.
In the case of GL(2)×GL(1) over F=ℚ, the Weyl bound L(1/2,π×χ)≪ C_(χ)^1/3+ε was established by <cit.> for a fixed cusp form π of PGL(2) and a quadratic Dirichlet character χ. This result was further generalized by <cit.>, where the Weyl bound L(1/2,π×χ)≪ C_(π×χ)^1/6+ε is proven under the conditions χ^2≠ 1, π has a level dividing C_(χ), and π has a central character χ^2. In particular, C_(π) is not coprime to C_(χ). Consequently, (<ref>) addresses a complementary case to <cit.>.
By taking ω=η^2 for some Hecke character η and π=η⊞η, we obtain the following bound for Hecke L-functions.
Let F be a number field with ring of adeles 𝔸_F. Let η and χ be Hecke character of 𝔸_F^×/F^× with coprime arithmetic conductors. Then
L(1/2,ηχ)≪min{C_(η)^1/2+ε+C_(χ)^1/4+ε,C_(η)^1/4+ε+C_(χ)^1/2+ε},
where the implied constant depends on F, ε, η_∞, and χ_∞. In particular,
L(1/2,χ)≪_F,χ_∞,εC_(χ)^1/6+ε
if χ=χ_1χ_2 with (C_(χ_1),C_(χ_2))=1 and C_(χ_1)^2-ε≪ C_(χ_2)≪ C_(χ_1)^2+ε.
§.§ Applications to Simultaneous Nonvanishing
Corollary <ref> serves as a versatile alternative to multiple third moment estimates in certain applications. It replaces Young's third moment bound (cf. <cit.>) in <cit.> and provides a substantial improvement to the level aspect simultaneous nonvanishing result (cf. <cit.>), replacing Petrow-Young's third moment estimate <cit.> with the use of Corollary <ref>.
Let k∈{2,3,4,5,7}. Let N≥ 2 be a prime. Denote by ℱ_2k^new(N) an orthogonal basis of normalized new forms that are holomorphic Hecke eigenforms with weight 2k and level N, and have trivial nebentypus. Let f∈ℱ_2k^new(N). Then there exists a nontrivial primitive quadratic character χ such that
#{g∈ℱ_2k^new(N): L(1/2,f×χ)L(1/2,g×χ)≠ 0}≫_εN^1-ε,
where the implied constant depends on ε.
The lower bound N^1-ε in Corollary <ref> significantly improves the main result in <cit.>, where the lower bound achieved was N^1/2-ε.
From (<ref>) we obtain subconvex bounds for L(1/2,f×χ)
in the range q^δk^δ-1≪ N≪ q^2-δk^-δ, δ>0. This generalizes <cit.>.
§.§ Discussion of the Proofs
Let A=(GL(1), 1), G=GL(2), and G=PGL(2). Let f be a nice function on G(𝔸_F). Denote by
(g_1,g_2)=∑_γ∈G(F)f(g_1^-1γ g_2), g_1, g_2∈ G(𝔸_F)
the associated kernel function, which also admits a spectral expansion. By substituting these expansions of (x,y) into the integral
∫_A(F)\ A(𝔸_F)∫_A(F)\ A(𝔸_F)(x,y)χ(x)χ(y)d^×xd^×y,
we obtain a formal equality between two divergent expressions. To regularize it, we establish an identity between two holomorphic functions on ℂ^2 in the form of
J_^,(f,s,χ)=J_^,(f,s,χ), s∈ℂ^2,
where evaluating this identity at 𝐬=(0,0) provides a regularization of (<ref>).
§.§.§ The spectral side: a lower bound
We will prove a lower bound
J_^,(f,0,χ)≫ T^-1/2-ε(MQ)^-ε∑_π∈𝒜_0(Π_∞,𝔐;χ_∞,ω)|L(1/2,π×χ)|^2.
A more comprehensive version that includes the continuous spectrum is given by Theorem <ref> in §<ref>.
§.§.§ The geometric side: an upper bound
According to types of orbital integrals, we decompose the geometric side into three integrals
J_^,(f,0,χ)=J^_,(f,χ)+J^_,(f,χ)+J^,2_,(f,0,χ).
* The terms J^_,(f,χ) and J^_,(f,χ) correspond to irregular orbital integrals, exhibiting an asymptotic magnitude of T^1/2+o(1)M^1+o(1).
* The term J^,2_,(f,0,χ) represents the contribution from regular orbital integrals, which constitutes the main focus of this paper. We establish that it is bounded by ≪ T^εM^εQ^1+ε·1_M≪ Q^2(M,Q).
Based on the above estimates, we obtain an upper bound for the geometric side
J_^,(f,0,χ)≪ T^1/2+εM^1+ε+T^εM^εQ^1+ε·1_M≪ Q^2(M,Q),
Using equations (<ref>), (<ref>), and (<ref>), we establish Theorem <ref> for the case where π is cuspidal.
§.§.§ Some Remarks
The approach utilized in this work exhibits similarities to that of <cit.>, albeit with notable distinctions in the treatment of test functions at ramified places. In <cit.>, the focus is primarily on the case of joint ramification, where Q| M, resulting in relatively simpler regular orbital integrals that can be further improved through nontrivial bounds on specific character sums. However, in the case of totally disjoint ramification, where (M,Q)=1, the regular orbital integrals do not exhibit any oscillatory behavior, and the trivial bound becomes optimal. This paper addresses the most general situation, allowing M and Q to take arbitrary values. Another difference from the aforementioned work is that we evaluate the expressions at s=(0,0) (instead of some s_0=(s_0,s_0) with s_0>0) in order to compute the second moment over the family. This necessitates careful consideration of singularity matching when computing the main term J^_,(f,χ)+J^_,(f,χ).
By employing a straightforward `trivial' estimate of the regular orbital integrals, we establish convexity in the χ-aspect and achieve strong hybrid subconvexity. This represents one of the key advantages of the relative trace formula. The robust nature of this approach holds promise for deriving bounds for higher rank Rankin-Selberg L-functions in the level aspect. In future work, we intend to extend the techniques presented in this paper to higher ranks, building upon the general regularized relative trace formula introduced in <cit.>.
§.§ Outline of the Paper
§.§.§ The Regularized Relative Trace Formula
In §<ref>, we introduce the notations that will be consistently used throughout the paper, along with setting up the local and global data. Additionally, we define the test functions that will play a crucial role in the relative trace formula.
Moving to §<ref>, we derive the regularized relative trace formula summarized in Theorem <ref> and Corollary <ref> in §<ref>.
§.§.§ The Spectral Side
In §<ref>, we explore the spectral side J_^,(f,0,χ). Its meromorphic continuation is obtained in §<ref>. By combining this with the local estimates developed in §<ref>–§<ref>, we establish a lower bound for the spectral side (cf. Theorem <ref>) in terms of the second moment of central L-values.
§.§.§ The Geometric Side
In §<ref>–§<ref> we handle the geometric side
J_^,(f,0,χ)=J^_,(f,χ)+J^_,(f,χ)+J^,2_,(f,0,χ).
* The small cell orbital integral J^_,(f,χ), one of the main terms, is addressed in Proposition <ref> in §<ref>, utilizing local estimates from §<ref>–§<ref>.
* The dual orbital integral J_,^(f,χ) is bounded by Proposition <ref> in §<ref>. This integral is considered `dual' to J_,^(f,χ) through Poisson summation and contributes as the other main term.
* The regular orbital integrals J^,2_,(f,0,χ) present the most challenging aspect of the geometric side J_^,(f,0,χ). Their behaviors are outlined in Theorem <ref> in §<ref>.
§.§.§ Proof of Main Results
With the aforementioned preparations, we are able to prove the main results in §<ref>. In §<ref>–§<ref> we put estimates from the spectral and geometric side all together, obtaining Theorem <ref>, which yields Theorem <ref>.
§.§ Notation
§.§.§ Number Fields and Measures
Let F be a number field with ring of integers 𝒪_F. Let N_F be the absolute norm. Let 𝔒_F be the different of F. Let 𝔸_F be the adele group of F. Let Σ_F be the set of places of F. Denote by Σ_F, (resp. Σ_F,∞) the set of nonarchimedean (resp. archimedean) places. For v∈Σ_F, we denote by F_v the corresponding local field and 𝒪_v its ring of integers. For a nonarchimedean place v, let 𝔭_v be the maximal prime ideal in 𝒪_v. Given an integral ideal ℐ, we say v|ℐ if ℐ⊆𝔭_v. Fix a uniformizer ϖ_v∈𝔭_v. Denote by e_v(·) the evaluation relative to ϖ_v normalized as e_v(ϖ_v)=1. Let q_v be the cardinality of 𝒪_v/𝔭_v. We use v|∞ to indicate an archimedean place v and write v<∞ if v is nonarchimedean. Let |·|_v be the norm in F_v. Put |·|_∞=∏_v|∞|·|_v and |·|_=∏_v<∞|·|_v. Let |·|_𝔸_F=|·|_∞⊗|·|_. We will simply write |·| for |·|_𝔸_F in calculation over 𝔸_F^× or its quotient by F^×.
Let ψ_ℚ be the additive character on ℚ\𝔸_ℚ such that ψ_ℚ(t_∞)=exp(2π it_∞), for t_∞∈ℝ↪𝔸_ℚ. Let ψ_F=ψ_ℚ∘_F, where _F is the trace map. Then ψ_F(t)=∏_v∈Σ_Fψ_v(t_v) for t=(t_v)_v∈𝔸_F. For v∈Σ_F, let dt_v be the additive Haar measure on F_v, self-dual relative to ψ_v. Then dt=∏_v∈Σ_Fdt_v is the standard Tamagawa measure on 𝔸_F. Let d^×t_v=ζ_F_v(1)dt_v/|t_v|_v, where ζ_F_v(·) is the local Dedekind zeta factor. In particular, (𝒪_v^×,d^×t_v)=(𝒪_v,dt_v)=N_F_v(𝔇_F_v)^-1/2 for all finite place v. Moreover, (F\𝔸_F; dt_v)=1 and (F\𝔸_F^(1),d^×t)=s=1 ζ_F(s), where 𝔸_F^(1) is the subgroup of ideles 𝔸_F^× with norm 1, and ζ_F(s)=∏_v<∞ζ_F_v(s) is the finite Dedekind zeta function. Denote by F^×\𝔸_F^(1) the Pontryagin dual of F^×\𝔸_F^(1).
Note that at a ramified place v|𝔇_F_v, the conductor of ψ_v is precisely the inverse different 𝔒_F_v^-1. Write 𝔒_F_v^-1=ϖ_v^-d_v𝒪_v for some integer d_v≥ 1. Set ψ=⊗_v∈Σ_Fψ_v, where ψ_v is the additive character of F\𝔸_F defined by
* at v|𝔇_v, ψ_v(x):=ψ_v(ϖ_v^-d_vx), where x∈ F_v;
* at v|∞ or v∤𝔇_v, ψ_v(x):=ψ_v(x), where x∈ F_v.
Then ψ is unramified everywhere. Let D=N_F/ℚ(𝔇_F) be the absolute discriminant.
§.§.§ Reductive Groups
For an algebraic group H over F, we will denote by [H]:=H(F)\ H(𝔸_F). We equip measures on H(𝔸_F) as follows: for each unipotent group U of H, we equip U(𝔸_F) with the Haar measure such that, U(F) being equipped with the counting measure and the measure of [U] is 1. We equip the maximal compact subgroup K of H(𝔸_F) with the Haar measure such that K has total mass 1. When H is split, we also equip the maximal split torus of H with Tamagawa measure induced from that of 𝔸_F^×.
In this paper we set A=(GL(1),1), and G=GL(2). Let B be the group of upper triangular matrices in G.
Let G=Z\ G and B_0=Z\ B, where Z is the center of G. Let T_B be the diagonal subgroup of B. Then A≃ Z\ T_B. Let N be the unipotent radical of B. Let K=⊗_vK_v be a maximal compact subgroup of G(𝔸_F), where K_v=U_2(ℂ) is v is complex, K_v=O_2(ℝ) if v is real, and K_v=G(𝒪_v) if v<∞. For v∈Σ_F,, m∈ℤ_≥ 0, define
K_v[m]:={[ a b; c d ]∈ G(𝒪_v): c∈𝔭_v^m}.
§.§.§ Automorphic Data
Let s=(s_1, s_2)∈ℂ^2.
Let ω∈F^×\𝔸_F^(1). Denote by 𝒜_0([G],ω) the set of cuspidal representations on G(𝔸_F) with central character ω.
For η_1, η_2∈F^×\𝔸_F^(1), let (η_1⊗η_2) be the unitary parabolic induction from B(𝔸_F) to G(𝔸_F) associated with η_1⊗η_2, and let η_1⊞η_2 be Langlands sum.
Let Φ∈𝒮(𝔸_F) with Fourier transform Φ and let ω”=|·|^inα be a unitary character of 𝔸_F^×. Define an Eisenstein series
E(s,x;Φ,ω”)=∑_δ∈ B_0(F)\G'(F)∫_𝔸_F^×Φ(zηδ x)| zx|^sω”(z)d^×z
on [G']. Then E(s,x;Φ,ω”) converges absolutely in (s)>1 and admits a meromorphic continuation to ℂ, given by
E(s,x;Φ,ω”)=E_+(s,x;Φ,ω”)+E_+^∧(s,x;Φ,ω”)+E_(s,x;Φ,ω”),
where
E_(s,x;Φ,ω”):=-Φ(0)| x|^s/(s+iα)+Φ̂(0)| x|^s-1/(s-1+iα)
E_+(s,x;Φ,ω”):=∑_δ∈ B_0(F)\G'(F)∫_|z|≥ 1Φ(zηδ x)| zx|^sω”(z)d^×z,
E_+^∧(s,x;Φ,ω”):=∑_δ∈ B_0(F)\G'(F)∫_|z|≥ 1Φ̂(zηδ x)| zx|^1-sω”^-1(z)d^×z.
Moreover, E_+(s,x;Φ,ω”) and E_+^∧(s,x;Φ,ω”) converge absolutely for all s.
§.§.§ Other Conventions
For a function h on G(𝔸_F), we define h^* by assigning h^*(g)=h(g^-1), g∈ G(𝔸_F). Let F_1(s), F_2(s) be two meromorphic functions. Write F_1(s)∼ F_2(s) if there exists an entire function E(s) such that F_1(s)=E(s)F_2(s). Denote by α≍β for α, β∈ℝ if there are absolute constants c and C such that cβ≤α≤ Cβ.
Throughout the paper, we adhere to the ε-convention, wherein ε denotes a positive number that can be chosen arbitrarily small, though it may vary between different instances.
Acknowledgements
I am grateful to Dinakar Ramakrishnan for his helpful discussions. I would also like to extend my thanks to Caltech for their warm hospitality during my visit, where this paper was written.
§ CHOICE OF THE TEST FUNCTION
The notations introduced in this section will be extensively utilized throughout the remainder of this paper.
§.§ Intrinsic Data
Let F be a number field. Let χ=⊗_vχ_v and ω=⊗_vω_v be primitive unitary Hecke characters of F^×\𝔸_F^×. Let 𝔐 be an integral ideal of norm |𝔐|:=N_F(𝔐).
§.§.§ Analytic Conductor of Hecke Characters
Let C(χ):=⊗_v∈Σ_FC_v(χ) be the analytic conductor of χ, where each local conductor C_v(χ) is defined as follows.
* For F_v≃ℝ, χ_v=sgn^n_v'|·|^iκ_v, n_v'∈{0,1}, we define
C_v(χ)=1+|n_v'+iκ_v/2|.
* For F_v≃ℂ, and χ_v(a)=(a/|a|)^n_v'|a|^2iκ_v, a∈ F_v^×, we define
C_v(χ):=(1+|iκ_v+|n_v'|/2|)^2.
* For v<∞, let r_χ_v be the exponent of χ_v, namely, the smallest nonnegative integer such that χ_v is trivial on 1+ϖ_v^r_χ_v𝒪_v but not on 1+ϖ_v^r_χ_v-1𝒪_v. Let C_v(χ)=q_v^r_χ_v.
Denote by C_∞(χ):=⊗_v|∞C_v(χ) and C_(χ):=⊗_v<∞C_v(χ).
§.§.§ Analytic Conductor of Automorphic Representations of GL(2)/F
Let π=⊗_vπ_v be an automorphic representation of G(𝔸_F) with central character ω_π=ω=⊗_vω_v. Let C(π):=⊗_vC_v(π) be the analytic conductor of π, where each local conductor C_v(π) is defined as follows.
* Let v<∞. We denote by r_π_v≥ 0 the exponent of π_v, which is the least integer such that π_v has a vector that is K_v[r_π_v]-invariant (as defined in (<ref>)). The local conductor of π_v is defined as C_v(π):=q_v^r_π_v.
* For v|∞, the local L-function of π_v can be expressed as a product of shifted Gamma factors, given by L_v(s,π_v)=Γ_v(s+β_1,v)Γ_v(s+β_2,v), where β_1,v, β_2,v∈ℂ, and Γ_v represents the Gamma function over F_v. Let
C_v(π):=[(1+|β_1,v|)(1+|β_2,v|)]^[F_v:ℝ].
Let C_(π)=∏_v<∞C_v(π) be the arithmetic conductor of π and let C_∞(π)=∏_v|∞C_v(π) be the archimedean conductor of π.
§.§.§ Uniform Parameter Growth
Let Π_∞=⊗_v|∞Π_v be an irreducible admissible generic representation of GL(2)/F_∞. For v|∞, let L_v(s,Π_v)=Γ_v(s+γ_1,v)Γ_v(s+γ_2,v) be the associated L-factor of Π_v.
For v|∞, we say that Π_v has uniform parameter growth of size (T_v;c_v,C_v) for some constants c_v and C_v, and parameters T_v, if c_vT_v≤ |γ_j,v|≤ C_vT_v.
§.§.§ Ramification Parameters
For v∈Σ_F,, let e_v(·) be the normalized evaluation of F_v such that e_v(ϖ_v)=1. Following the notation in §<ref>, let r_χ_v (resp. r_ω_v) be the exponent of χ_v (resp. ω_v). We set m_v:=e_v(𝔐) and n_v:=r_χ_v. Let Σ_^+:={v∈Σ_F_: m_v≥ n_v≥ 1}, and Σ_^-:={v∈Σ_F_: m_v< n_v, n_v≥ 1}. Let K_v[m_v] and K_v[n_v] be defined by (<ref>).
Denote by 𝔔=∏_v<∞𝔭_v^n_v. For simplicity we write Q=C_(χ), M=|𝔐|:=N_F(𝔐), and M'=C(ω_). Suppose that Q>1. Note that M'| M.
§.§.§ The Family of Automorphic Forms
Let c_v and C_v be positive constants for each v|∞, and let T_v>0. In this paper, we will vary T_v as needed, while keeping c_v and C_v fixed. Let T=∏_v|∞ T_v.
For v|∞, let Π_v be an irreducible admissible generic representation of GL(2)/F_v, which uniform parameter growth of size (T_v;c_v,C_v), cf. §<ref>.
* Let 𝒜_0(Π_∞,𝔐;χ_∞,ω) be the set of cuspidal automorphic representations π=⊗_vπ_v of GL(2)/F such that
* π has central character ω,
* for all v<∞, π_v has a K_v[m_v]-invariant vector, i.e., r_π_v≤ e_v(𝔐).
* π_v⊗χ_v≃Π_v at each v|∞.
Note that Weyl law yields #𝒜_0(Π_∞,𝔐;χ_∞,ω)=(T|𝔐|)^1+o(1).
* Let 𝒳_0(Π_∞,𝔐;χ_∞,ω) be the set of Hecke characters η=⊗_vη_v∈F^×\𝔸_F^(1) such that
* for all v<∞, the representation η_v⊞ω_vη_v has a K_v[m_v]-invariant vector, i.e., r_η_v+r_ω_vη_v≤ m_v,
* η_vχ_v⊞ω_vη_vχ_v≃Π_v at each v|∞.
By <cit.> there exists some
d'∈ [10^-1exp(-3√(log T^2MQ^2)),exp(-3√(log T^2MQ^2))],
which may be determined by π and χ,
such that for all s with |s-1/2|=d',
|L(1/2,π×χ)|≪exp(log^3/4C(π×χ))· |L(s,π×χ)|.
Here the implied constant depends only on F.
§.§.§ Other Notations
For a function h on G(𝔸_F) or G(F_v), v∈Σ_F, define h^*(g)=h(g^-1) and
(h*h^*)(g)=∫ h(gg'^-1)h^*(g')dg'=∫ h(gg')h(g')dg'.
§.§ Construction of Test Functions
We construct a test function f on G(𝔸_F) using the following procedure:
* For the archimedean places (cf. §<ref>), we rely on Nelson's work <cit.> (cf. §1.5.2 and §14 on p.80) and follow the approach described in <cit.>, §1.10. Additional information can be found in <cit.>, Part 2.
* For the finite places, we employ the test function constructed in <cit.>, which involves a double average over unipotent translations weighted by characters (cf. §<ref>).
§.§.§ Construction of f_∞
Let v|∞. Recall that Π_v has uniform parameter growth of size (T_v;c_v,C_v) (cf. Definition <ref> in §<ref>). Then Π_v has uniform parameter growth of size (T_v;c_v/2,2C_v), where s_0 is the parameter defined by (<ref>) in §<ref>.
Let 𝔤 (resp. 𝔤') be the Lie algebras of G(F_v) (resp. A(F_v)), with imaginal dual 𝔤̂ (resp. 𝔤̂'). One can choose an element τ∈𝔤̂ with the restriction τ'=τ|_A∈𝔤̂', so that τ (resp. τ') lies in the coadjoint orbit 𝒪_Π_v of Π_v (resp. 𝒪_1_v of 1_v the trivial representation of A(F_v)). Let f̃^∧_v: 𝔤̂→ℂ be a smooth bump function concentrated on {τ+(ξ,ξ^): ξ≪ T_v^1/2+ε, ξ^≪ T_v^ε}, where ξ lies in the tangent space of 𝒪_Π_v at τ, and ξ^ has the normal direction. Let f̃_v∈ C_c^∞(G(F_v)) be the pushforward of the Fourier transform of f̃_v^∧ truncated at the essentially support, namely,
f̃_v⊆{g∈ G(F_v): g=I_n+1+O(T_v^-ε), ^*(g)τ=τ+O(T_v^-1/2+ε)},
where the implied constants rely on c_v and C_v.
Then, in the sense of <cit.>, §2.5, the operator π_v(f̃_v) is approximately a rank one projector with range spanned by a unit vector microlocalized at τ. Let
f_v(g):=f_v(g,χ_v)*f_v(g,χ_v)^*,
where v|∞, g∈ G(F_v), and
f_v(g,χ_v):=χ_v( g)∫_Z(F_v)f̃_v(zg)ω_v(z)d^×z.
Due to the support of f̃, the function f_v(g) is non-zero only if | g|_v > 0. Therefore, (| g|_v) = 1. As a result, the function f_v is smooth on G(F_v).
§.§.§ Application of Transversality
By definition, one has (cf. (14.13) in <cit.>)
‖f̃_v‖_∞≪_ε T_v^1+ε, v|∞,
where ‖·‖_∞ is the sup-norm. For g∈G(F_v), we may write
g=[ a b; c d ]∈ G(F_v), g^-1=[ a' b'; c' d' ]∈ G(F_v).
Define
d_v(g):=min{1, |d^-1b|_v+ |d^-1c|_v+ |d'^-1b'|_v+|d'^-1c'|_v }, if dd'≠ 0,
1, if dd'=0.
Let notation be as above. Then there is a fixed neighborhood 𝒵 of the identity in A(F_v) with the following property. Let g be in a small neighborhood of I_n+1 in G(F_v). Let δ_v>0 be small. Then
vol({z∈𝒵: dist(gzτ, A(F_v)τ)≤δ_v})≪δ_v/d_v(g).
Here dist(⋯) denotes the infimum over g'∈ A(F_v) of ‖gzτ-g'τ‖, where ‖·‖ is a fixed norm on 𝔤̂.
Proposition <ref> (with δ_v=T_v^-1/2+ε) will be used to detect the restriction ^*(g)τ=τ+O(T_v^-1/2+ε) in the support of f̃_v. By (<ref>), (<ref>), and (<ref>),
|f̃_v(g)|≪ T_v^1+ε·1_|Ad^*(g)τ-τ|≪ T_v^-1/2+ε·1_|g-I_2|≪ T_v^-ε·min{1,T_v^-1/2+ε/d_v(g)}.
§.§.§ Finite Places
For v∈Σ_F,, we define a function on G(F_v), supported on Z(F_v)\ K_v[m_v], by
f_v(z_vk_v;ω_v)=(K_v[m_v])^-1ω_v(z_v)^-1ω_v(E_2,2(k_v))^-1,
where K_v[m_v] is the image of K_v[m_v] in G(F_v), and E_2,2(k_v) is the (2,2)-th entry of k_v∈ K_v[m_v]. For g_v∈ G(F_v), define by
f_v(g_v)=1/|τ(χ_v)|^2∑_α∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×∑_β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×χ_v(α)χ_v(β)f_v(g_α,β,v;ω_v),
where
τ(χ_v)=∑_α∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×ψ_v(αϖ_v^-n_v)χ_v(α)
is the Gauss sum relative to the additive character ψ_v, and
g_α,β,v:=[ 1 αϖ_v^-n_v; 1 ]g_v[ 1 βϖ_v^-n_v; 1 ].
Note that n_v=0 for almost all v∈Σ_F,. Hence, for all but finitely many v∈Σ_F,, the test function f_v(·)=f_v(·;ω_v) (cf. (<ref>)) is supported in Z(F_v)\ K_v[m_v].
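For F=ℚ and a finite place v=p with n_v=n≥ 1, τ(χ_v) is, up to the choice of additive character, the classical Gauss sum of a primitive Dirichlet character modulo p^n, and the normalization by |τ(χ_v)|^2 above uses the standard fact |τ(χ_v)|^2=q_v^n_v for a character of conductor exponent n_v and unramified ψ_v. A quick numerical illustration of this fact (a sanity check with a hard-coded character, not part of the construction):

from cmath import exp, pi
from math import gcd

def gauss_sum(chi, q):
    # tau(chi) = sum over a mod q with gcd(a,q)=1 of chi(a) * exp(2*pi*i*a/q)
    return sum(chi(a) * exp(2j * pi * a / q) for a in range(1, q) if gcd(a, q) == 1)

# a primitive character mod 5, defined on powers of the primitive root 2 by chi(2^k) = i^k
table = {1: 1, 2: 1j, 4: -1, 3: -1j}
tau = gauss_sum(lambda a: table[a % 5], 5)
print(abs(tau) ** 2)   # ~5.0: |tau(chi)|^2 equals the conductor for a primitive character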
§.§.§ Construction of the Test Function
Let f=⊗_v∈Σ_Ff_v, where f_v is constructed in §<ref> and §<ref>. Note that f_∞ is determined by Π_∞.
§ THE REGULARIZED RELATIVE TRACE FORMULA
§.§ Fourier Expansion of the Kernel Function
Let f=⊗_vf_v be defined in §<ref>. Then f defines an integral operator
R(f)ϕ(g)=∫_G(𝔸_F)f(g')ϕ(gg')dg'
on the space L^2([G],ω) of functions on [G] which transform under Z(𝔸_F) by ω and are square integrable on [G]. This operator is represented by the kernel function
(g_1,g_2)=∑_γ∈G(F)f(g_1^-1γ g_2), g_1, g_2∈ G(𝔸_F).
It is well known that L^2([G],ω) decomposes into the direct sum of the space L_0^2([G],ω) of cusp forms and spaces L_^2([G],ω) and L_^2([G],ω) defined using Eisenstein series and residues of Eisenstein series respectively. Then
_0(g_1,g_2)+_(g_1,g_2)=(g_1,g_2)=∑_γ∈G(F)f(g_1^-1γ g_2),
where _0(g_1,g_2) (resp. _(g_1,g_2)) is the contribution from the cuspidal (resp. non-cuspidal) spectrum. Explicit expansions of _0(g_1,g_2) and _(g_1,g_2) will be given in §<ref>.
By Bruhat decomposition (g_1,g_2)=_(g_1,g_2)+_(g_1, g_2), where
_(g_1,g_2)=∑_γ∈ B_0(F)f(g_1^-1γ g_2), _(g_1,g_2)=∑_γ∈ B_0(F)wN(F)f(g_1^-1γ g_2).
Let 𝒦(·,·)∈{(·,·), _0(·,·), _(·,·), _(·,·),_(·,·)}. Define
ℱ_0ℱ_1𝒦(g_1,g_2):= ∫_[N]𝒦(g_1,u_2g_2)du_2, ℱ_1ℱ_0𝒦(g_1,g_2):=∫_[N]𝒦(u_1g_1,g_2)du_1,
ℱ_1ℱ_1𝒦(g_1,g_2):= ∫_[N]∫_[N]𝒦(u_1g_1,u_2g_2)du_2du_1,
ℱ_2ℱ_2(g_1,g_2):= ∑_α∈ A(F)∑_β∈ A(F)∫_[N]∫_[N](u_1α g_1,u_2β g_2)θ(u_1)θ(u_2)du_2du_1.
Using Poisson summation twice the integral ℱ_2ℱ_2𝒦(g_1,g_2) is equal to
𝒦(g_1,g_2)-ℱ_0ℱ_1𝒦(g_1,g_2)-ℱ_1ℱ_0𝒦(g_1,g_2)+ℱ_1ℱ_1𝒦(g_1,g_2).
By <cit.> we have, for x, y∈ A(𝔸_F), that
ℱ_0ℱ_1_(x,y)=ℱ_1ℱ_0_(x,y)=ℱ_1ℱ_1_(x,y)≡ 0.
Along with (<ref>) we then obtain that
ℱ_2ℱ_2_(x,y)=_(x,y).
Note that (<ref>) only holds over (x,y)∈ A(𝔸_F)× A(𝔸_F).
§.§ The Relative Trace Formula
§.§.§ The Spectral Side
Let (s_1)≫ 1 and (s_2)≫ 1. Define
J_^(f,s,χ):=J_0^(f,s,χ)+J_^(f,s,χ),
the spectral side, where s=(s_1, s_2)∈ℂ^2, and
J_0^(f,s,χ):= ∫_[A]∫_[A]_0(x,y)| x|^s_1| y|^s_2χ(x)χ(y)d^×xd^×y,
J_^(f,s,χ):= ∫_[A]∫_[A]ℱ_2ℱ_2_(x,y)| x|^s_1| y|^s_2χ(x)χ(y)d^×xd^×y.
By Proposition 6.4 in <cit.> (cf. §6.2), the integral J_^(f,s,χ) converges absolutely in (s_1), (s_2)≫ 1. In addition, J_^(f,s,χ) admits a holomorphic continuation J_^,(f,s,χ) to 𝐬∈ℂ^2. We will see in §<ref> that J_^,(f,s,χ) is roughly an average of L(1/2+s_1,π×χ)L(1/2+s_2,π×χ) as π varies over families of unitary automorphic representations of GL(2)/F.
§.§.§ The Geometric Side
By (<ref>) and the decomposition (x,y)=_(x,y)+_(x,y), the geometric side is
J_^(f,s,χ):=J^_,(f,s,χ)+J^_,(f,s,χ),
where (s_1)≫ 1, (s_2)≫ 1, and
J^_,(f,s,χ):= ∫_[A]∫_[A]ℱ_2ℱ_2_(x,y)| x|^s_1| y|^s_2χ(x)χ(y)d^×xd^×y
,
J^_,(f,s,χ):= ∫_[A]∫_[A]_(x,y)| x|^s_1| y|^s_2χ(x)χ(y)d^×xd^×y.
As in <cit.>, we have
J^_,(f,s,χ)=J^_,(f,s,χ)+J^,2_,(f,s,χ),
where J_,^(f,s,χ) is defined by
∫_𝔸_F^×∫_𝔸_F^×f([ 1; x 1 ][ y; 1 ])|x|^s_1+s_2|y|^s_2χ(y)d^×yd^×x,
and the regular orbital J^,2_,(f,s,χ) is defined by
∑_t∈ F-{0,1}∫_𝔸_F^×∫_𝔸_F^×f([ y x^-1t; xy 1 ])|x|^s_1+s_2|y|^s_2χ(y)d^×yd^×x.
Note that (<ref>) converges absolutely in (s_1+s_2)>1, and by <cit.> the integral J^,2_,(f,s,χ) converges absolutely in 𝐬∈ℂ^2, and in particular, the sum over t∈ F-{0,1} is finite, which is called stability of the regular orbital integrals (cf. <cit.>, <cit.>, <cit.>). Therefore, the geometric side J_^(f,s,χ) admits a holomorphic continuation J_^,(f,s,χ) to 𝐬∈ℂ^2. We shall investigate it in §<ref>-§<ref>.
§.§.§ The Regularized Relative Trace Formula
Note that _0(x,y)=ℱ_2ℱ_2_0(x,y), _0(x,y)+_(x,y)=(x,y)=_(x,y)+_(x,y). Then by (<ref>),
_0(x,y)+ℱ_2ℱ_2_(x,y)=ℱ_2ℱ_2(x,y)=ℱ_2ℱ_2_(x,y)+_(x,y).
As a consequence, when (s_1)≫ 1 and (s_2)≫ 1,
J_^(f,s,χ)=J_^(f,s,χ).
By applying the singularity matching process described in <cit.>, the equality (<ref>) extends to its holomorphic continuation, leading to the following equality between two holomorphic functions:
Let notation be as before. Then
J_^,(f,s,χ)=J_^,(f,s,χ)<ref>.
In this paper, our focus is on evaluating the above regularized RTF at 𝐬 = 0 = (0,0). Write 𝐬'=(s,0). Define the following normalized integrals
J^_,(f,χ):= [J^_,(f,s',χ)-s^-1Res_s=0 J^_,(f,s,χ)]_s=0,
J^_,(f,χ):= [J^_,(f,s',χ)-s^-1Res_s=0 J^_,(f,s,χ)]_s=0.
Notice that Res_s=0 J^_,(f,s,χ)+Res_s=0 J^_,(f,s,χ)≡ 0. Therefore,
J_^,(f,0,χ)=J^_,(f,χ)+J^_,(f,χ)+J^,2_,(f,0,χ)<ref>.
Let notation be as before. Then
J_^,(f,0,χ)=J^_,(f,χ)+J^_,(f,χ)+J^,2_,(f,0,χ).
§ THE SPECTRAL SIDE: MEROMORPHIC CONTINUATION AND BOUNDS
In this section we shall show that J_^,(f,s,χ) admits a holomorphic continuation to 𝐬∈ℂ^2. Moreover, we derive a lower bound of it as follows.
[]thmthmf
Let notation be as in §<ref>. Then
J_^,(f,0,χ)≫ T^-1/2-ε(MQ)^-ε∑_π∈𝒜_0(Π_∞,𝔐;χ_∞,ω)|L(1/2,π×χ)|^2
+T^-1/2-ε(MQ)^-ε∑_η∫_ℝ|L(1/2+it,ηχ)L(1/2+it,ωηχ)|^2/|L(1+2it,ωη^2)|^2dt,
where η∈𝒳_0(Π_∞,𝔐;χ_∞,ω), and the implied constant depends only on F, ε, c_v and C_v at v|∞.
§.§ Spectral Side: Meromorphic Continuation
§.§.§ Spectral Expansion of the kernel functions
Let notation be as in §<ref>. Let f=⊗_vf_v be defined in §<ref>. Let _0(x,y) and _(x,y) be defined by (<ref>) in §<ref>. Then by the spectral decomposition we have (e.g., cf. <cit.>)
_0(x,y)=∑_σ∈𝒜_0([G],ω)∑_ϕ∈𝔅_σσ(f)ϕ(x)ϕ(y),
_(x,y)=1/4π∑_η∈F^×\𝔸_F^(1)∫_iℝ∑_ϕ∈𝔅_σ_0,ηE(x,ℐ(λ,f)ϕ,λ)E(y,ϕ,λ)dλ.
Here, 𝔅_σ denotes an orthonormal basis of the cuspidal representation σ, and σ_0,η is given by σ_0,η=(η,η^-1ω).
§.§.§ Rankin-Selberg Periods
Let θ=⊗_vθ_v be the generic induced by the fixed additive character ψ (cf. §<ref>). For a generic automorphic form φ on G(𝔸_F), define the associated Whittaker function by
W_φ(g):=∫_[N]φ(ug)θ(u)du, g∈ G(𝔸_F),
Using the multiplicity one property, we can express W_φ(g) as a product over all places v∈Σ_F as W_φ(g)=∏_v∈Σ_FW_φ,v(g_v), where g=⊗_vg_v∈ G(𝔸_F). The local Whittaker function W_φ,v is spherical for all but finitely many places v∈Σ_F. Define
Ψ(s,φ,χ):=∫_𝔸_F^×W_φ([ x; 1 ])|x|^sχ(x)d^×x=∏_v∈Σ_FΨ_v(s,φ,χ),
where the local integral is defined by
Ψ_v(s,φ,χ)=∫_F_v^×W_φ,v([ x_v; 1 ])|x_v|_v^sχ_v(x_v)d^×x_v.
The integral Ψ(s,φ,χ) converges absolutely in (s)>1. Furthermore, it is related to L-functions as follows.
* If φ∈𝔅_σ, where σ∈𝒜_0([G],ω), then Ψ(s,φ,χ) converges absolutely for all s∈ℂ, making it an entire function. By Hecke's theory, Ψ(s,φ,χ) serves as the integral representation for the complete L-function Λ(s+1/2,σ×χ).
* If φ∈𝔅_0,η associated with some η∈F^×\𝔸_F^(1), then as established in <cit.>, the function Ψ(s,φ,χ) converges absolutely in the region (s)≫ 1, and it has a meromorphic continuation to s∈ℂ, representing the complete L-function Λ(s+1/2,ηχ)Λ(s+1/2,η^-1ωχ).
Let v∈Σ_F be a place. Let (s)>1. We denote by
R_v,λ(s,ϕ,χ):=Ψ_v(s,ϕ,χ)L_v(s+1/2,σ_v×χ_v)^-1, if ϕ∈𝔅_σ,σ∈𝒜_0([G],ω),
Ψ_v(s,ϕ,χ)/L_v(s+1/2,η_vχ_v)L_v(s+1/2,η_v^-1χ_vω_v), if ϕ∈𝔅_0,η,η∈F^×\𝔸_F^(1).
Let R_λ(s,ϕ,χ)=∏_v∈Σ_FR_v,λ(s,ϕ,χ). Then R_λ(s,ϕ,χ) turns out to be an entire function of s∈ℂ. Denote by
R_,λ(s,ϕ,χ)=∏_v∈Σ_F,R_v,λ(s,ϕ,χ), Ψ_∞(s,ϕ,χ):=∏_v∈Σ_F,∞Ψ_v(s,ϕ,χ).
§.§.§ Meromorphic Continuation
According to the construction of the test function f, the Eisenstein series E(x,ℐ(λ,f)ϕ,λ), ϕ∈𝔅_0,η, vanishes unless ϕ is right invariant under K_v[m_v], where m_v=e_v(𝔐), cf. §<ref>.
Substituting the Rankin-Selberg periods (cf. §<ref>) into the decomposition (<ref>) we then obtain J_^(f,s,χ)=J_0^(f,s,χ)+J_^(f,s,χ), where
J_0^(f,s,χ)= ∑_σ∈𝒜_0(Π_∞,𝔐;χ_∞,ω)∑_ϕ∈𝔅_σΨ(s_1,σ(f)ϕ)Ψ(s_2,ϕ,χ),
J_^(f,s,χ)= 1/4π∑_η∫_iℝ∑_ϕ∈𝔅_σ_0,ηΨ(s_1,σ_0,η(f)E(·,ϕ,λ),χ)Ψ(s_2,E(·, ϕ,λ),χ)dλ,
where η ranges through ∈F^×\𝔸_F^(1), (s_1)≫ 1 and (s_2)≫ 1.
The function J_0^(f,s,χ) continues to a holomorphic function J_0^,(f,s,χ) in ℂ^2. It is proved in <cit.> that J_^(f,s,χ) extends to a holomorphic function J_^,(f,s,χ) in -1/4<(s_1), (s_2)<1/4 with
J_^,(f,s,χ)=1/4π∑_η∫_iℝ∑_ϕ∈𝔅_σ_0,ηΨ(s_1,σ_0,η(f)E(·,ϕ,λ),χ)Ψ(s_2,E(·, ϕ,λ),χ)dλ,
where η∈F^×\𝔸_F^(1), and the integrand Ψ(s_1,σ_0,η(f)E(·,ϕ,λ),χ)Ψ(s_2,E(·, ϕ,λ),χ) is identified with its meromorphic continuation. In particular, J_^(f,s,χ) is holomorphic in the region -1/4<(s_1), (s_2)<1/4.
§.§ Spectral Side: the Second Moment
Let notation be as in §<ref>. Denote by f^(g)=⊗_v|∞f_v(g_v,χ_v)⊗⊗_v∈Σ_F, f_v(g_v;ω_v), where f_v(·;ω_v) is defined by (<ref>), i.e., f_v(z_vk_v;ω_v)=(K_v[m_v])^-1ω_v(z_v)^-1ω_v(E_2,2(k_v))^-1. Define
φ^(x):=∫_G(𝔸)f^(g)∏_v|𝔔[1/τ(χ_v)∑_βχ_v(β)σ_v([ 1 βϖ_v^-n_v; 1 ])]σ(g)φ(x)dg,
where β ranges over (𝒪_v/ϖ_v^n_v𝒪_v)^×.
To simplify notations, we shall still write Ψ(s,ϕ^,χ) and Ψ(s,E(·,ϕ,λ)^,χ) for their holomorphic continuations, respectively. It follows from the construction of f that J_0^,(f,0,χ) and J_^,(f,0,χ) can be written as follows.
Let notation be as before. Then
J_0^,(f,0,χ)= ∑_σ∈𝒜_0([G],ω)∑_ϕ∈𝔅_σ|Ψ(0,ϕ^,χ)|^2,
J_^,(f,0,χ)= 1/4π∑_η∈F^×\𝔸_F^(1)∫_iℝ∑_ϕ∈𝔅_σ_0,η|Ψ(0,E(·,ϕ,λ)^,χ)|^2dλ.
§.§.§ Local calculations
The non-archimedean calculation presented in <cit.> is as follows:
Let notation be as before. Let σ∈𝒜_0([G],ω). Let ϕ∈σ be a pure tensor. Suppose that ϕ^≠ 0. Then for v∈Σ_F,, we have
Ψ_v(s,ϕ^,χ)=W_ϕ,v(I_2)L_v(s+1/2,σ_v×χ_v), (s)≥ 0.
Let notation be as before. Let ϕ∈σ_λ,η be a pure tensor. Let φ=E(·,ϕ,λ). Suppose that φ^≠ 0. Then for v∈Σ_F,, (s)≥ 0, we have
Ψ_v(s,φ^,χ)=W_φ,v(I_2)L_v(s+1/2+λ,η_vχ_v)L_v(s+1/2-λ,η_v^-1χ_vω_v).
§.§ Spectral Side: the lower bound
In this section we prove Theorem <ref>.
Denote by f_v^∘:=∫_Z(F_v)f̃_v(zg)ω_v(z)d^×z. Let π=π_∞⊗π_ be a unitary automorphic representation of GL(2)/F with π_∞⊗χ_∞≃Π_∞. Let v|∞. By the properties of f_v (cf., e.g., <cit.>),
T_v^-1/4-ε≪_ε∫_F_v^×(π_v(f_v^∘)(W_v⊗χ_v))([ x_v; 1 ])d^×x_v≪_ε T_v^-1/4+ε
for some W_v in the Kirillov model of π_v. By definition (<ref>) in §<ref>, we have
π_v(f_v(·,χ_v))W_v([ x_v; 1 ])χ_v(x_v)=(π_v(f_v^∘)(W_v⊗χ_v))([ x_v; 1 ]).
Hence, Ψ_v(s_0,π_v(f_v(·,χ_v))W_v,χ_v)≫_ε T_v^-1/4-ε for some W_v in the Kirillov model of π_v. Let ϕ∈π be a cusp form with Petersson norm ⟨ϕ,ϕ⟩=1, and Whittaker function W_ϕ=⊗_vW_ϕ,v (defined by (<ref>)), such that W_ϕ,v=W_v, for all v|∞, and W_ϕ,v is ∏_v<∞K_v[n_v]-invariant. Then
Ψ_v(0,ϕ^,χ)=Ψ_v(0,π_v(f_v(·,χ_v))W_v,χ_v)≫_ε T_v^-1/4-ε,
where the implied constant depends on ε, c_v and C_v at v|∞.
Together with Lemmas <ref>, <ref>, and the bound |W_(I_2)|≫ (TM)^-ε (cf. <cit.>),
J_0^,(f,0,χ)≫ T^-1/2-ε(MQ)^-ε∑_π∈𝒜_0(Π_∞,𝔐;χ_∞,ω)|L(1/2,π×χ)|^2,
and J_^,(f,0,χ) is
≫ T^-1/2-ε(MQ)^-ε∑_η∫_t∈ℝ|L(1/2+it,ηχ)L(1/2+it,ωηχ)|^2/|L(1+2it,ωη^2)|^2dt,
where η∈𝒳_0(Π_∞,𝔐;χ_∞,ω). Therefore, Theorem <ref> follows.
§ THE GEOMETRIC SIDE: THE ORBITAL INTEGRAL J^_,(F,Χ)
Let f=⊗_vf_v be the test function constructed in §<ref>. Let 𝐬=(s,0) with s∈ℂ. Recall the definition in §<ref>:
J^_,(f,χ):=[J^_,(f,s,χ)-s^-1Res_s=0 J^_,(f,s,χ)]_s=0,
where, for (s)>1, the small cell orbital integral is defined by (cf. §<ref>):
J^_,(f,s,χ):=∫_𝔸_F^×∫_𝔸_F^×∫_𝔸_Ff([ y b; 1 ])ψ(xb)|x|^1+sχ(y)dbd^×xd^×y,
which is a Tate integral representing Λ(1+s,1_F).
Let notation be as before. Then
J^_,(f,χ)≪_ε M^1+εT^1/2+ε,
where the implied constant depends only on F, ε, and c_v, C_v at v|∞, cf. §<ref>.
§.§ Local calculations at nonarchimedean places
Let (s)>0. Let
J_,v(s):=∫_F_v^×|x_v|_v^1+2s∫_F_v^×∫_F_vf_v([ y_v b_v; 1 ])ψ_v(x_vb_v)χ_v(y_v)db_vd^×y_vd^×x_v.
Take 𝐬=(s,0). By definition, we have
J^_,(f,s,χ):= ∏_v∈Σ_FJ_,v(s), (s)>0.
By <cit.> we have, for v∈Σ_F,, that
J_,v(s)=
N_F_v(𝔇_F_v)^-1/2(K_v[m_v])^-11_(𝔇_F_v^-1)^×(x_v), if v|𝔔,
|x_v|_v^1+2s_0N_F_v(𝔇_F_v)^-1/2(K_v[m_v])^-11_𝔇_F_v^-1(x_v), if v∤𝔔,
where m_v=e_v(𝔐) is defined in §<ref>. Hence,
J^_,(f,s,χ)=V_F· N_F(𝔇_F)^1/2+sζ_F(1+2s)∏_v|∞J_,v(s),
where V_F:=∏_v<∞(K_v[m_v])^-1≍ |𝔐|^1+o(1).
§.§ Local estimates at archimedean places
Let notation be as before. Let ε>0 be a constant. Let 𝒞:={s∈ℂ: |s|=ε} be the circle of radius ε. Then for s∈𝒞, we have
∏_v|∞J_,v(s)≪ T^1/2+ε,
where the implied constant depends on F, ε, c_v and C_v, v|∞.
Let s∈𝒞. Denote by
ℐ_v(x_v,s):=|x_v|_v^1+2s∫_F_v^×∫_F_vf_v([ y_v b_v; 1 ])ψ_v(x_vb_v)χ_v(y_v)db_vd^×y_v.
By the construction of f we have ℐ_v(x_v,s)≠ 0 unless x_vT_v^-1-γ_v≪_ε T_v^-1/2+ε, where γ_v is determined by τ∈𝔤̂, cf. §<ref>. Moreover, by the decay of the Fourier transform of f_v, f_v([ y_v b_v; 1 ])≪ T_v^-∞ if |b_v|_v≫ T_v^-1/2+ε. Together with (<ref>),
ℐ_v(x_v,s)≪ |x_v|_v^1+2ε1_x_vT_v^-1-γ_v≪_ε T_v^-1/2+ε· T_v^1+ε· T_v^-1/2+ε· T_v^-1/2+ε+O(T_v^-∞),
where the factor T_v^1+ε comes from the sup-norm estimate (cf. (<ref>)), the first factor T_v^-1/2+ε comes from the range of y_v according to the support of f_v (cf. (<ref>)), and the second T_v^-1/2+ε comes from the essential range of b_v, i.e., |b_v|_v≪ T_v^-1/2+ε. In particular, the implied constant in (<ref>) depends only on F_v, ε, and c_v, C_v at v|∞.
As a consequence, we have
J_,v(s)=∫_F_v^×ℐ_v(x_v,s)d^×x_v≪ T_v^ε∫_F_v|x_v|_v^2ε1_x_vT_v^-1-γ_v≪_ε T_v^-1/2+εdx_v+O(T_v^-∞),
which is ≪ T_v^1/2+ε. Then (<ref>) follows.
§.§ Proof of Proposition <ref>
Let ε>0 be a constant. Let 𝒞:={s∈ℂ: |s|=ε} be the circle of radius ε (cf. Lemma <ref>). By Cauchy's integral formula,
J^_,(f,χ)=1/2π i∫_𝒞J^_,(f,s,χ)/sds
Substituting (<ref>) into the above integral,
J^_,(f,χ)=V_F/2π i∫_𝒞N_F(𝔇_F)^1/2+sζ_F(1+2s)∏_v|∞J_,v(s)/sds.
By Lemma <ref> we have
J^_,(f,χ)≪ V_F∫_𝒞N_F(𝔇_F)^1/2+ε T^1/2+ε/|ε|·max_s∈𝒞|ζ_F(1+2s)|ds≪ M^1+εT^1/2+ε.
Hence, Proposition <ref> follows.
§ THE GEOMETRIC SIDE: THE ORBITAL INTEGRAL J_,^(F,Χ)
Let f=⊗_vf_v be the test function constructed in §<ref>. Let 𝐬=(s,0) with s∈ℂ. Recall the definition in §<ref>:
J^_,(f,χ):=[J^_,(f,s,χ)-s^-1Res_s=0 J^_,(f,s,χ)]_s=0,
where, for (s)>0, the dual orbital integral is defined by (cf. §<ref>):
J_,^(f,s,χ):=∫_𝔸_F^×∫_𝔸_F^×f([ 1; x 1 ][ y; 1 ])|x|^sχ(y)d^×yd^×x,
which is a Tate integral representing Λ(2s,1_F). Let s<0. By Poisson summation (or equivalently, the functional equation),
J^_,(f,s,χ) becomes
∫_𝔸_F^×∫_𝔸_F^×∫_𝔸_Ff([ 1; b 1 ][ y; 1 ])ψ(bx)|x|^1-sχ(y)dbd^×yd^×x.
Let notation be as before. Then
J^_,(f,χ)≪ M^1+εT^1/2+ε,
where the implied constant depends only on F, ε, and c_v, C_v at v|∞, cf. §<ref>.
Write
J^_,(f,s,χ)=∏_v∈Σ_FJ_,v(s),
where (s)<0, and each J_,v(s) is defined by
∫_F_v^×∫_F_v^×∫_F_vf_v([ 1; b_v 1 ][ y_v; 1 ])ψ_v(b_vx_v)|x_v|_v^1-sχ_v(y_v)db_vd^×y_vd^×x_v.
Similar to (<ref>) and (<ref>) we have
J^_,(f,s,χ)=V_F· N_F^(𝔔)(𝔇_F)^1/2+sζ_F^(𝔔)(1+2s)∏_v|𝔔J_,v(s)∏_v|∞J_,v(s),
where V_F:=∏_v<∞(K_v[m_v])^-1≍ |𝔐|^1+o(1),
N_F^(𝔔)(𝔇_F)=∏_𝔭|𝔇_F, 𝔭+𝔔=𝒪_F N_F(𝔭), ζ_F^(𝔔)(1+2s):=∏_𝔭 prime, 𝔭+𝔔=𝒪_F(1-N_F(𝔭)^-1-2s)^-1.
§.§ Local estimates at ramified places
At v|𝔔, by definition
f_v([ 1; b_v 1 ][ y_v; 1 ])=0
unless
z_v[ 1 αϖ_v^-n_v; 1 ][ 1; b_v 1 ][ y_v; 1 ][ 1 βϖ_v^-n_v; 1 ]∈ K_v[m_v]
for some α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×, and z_v∈ F_v^×, i.e.,
z_v[ y_v+α b_vy_vϖ_v^-n_v (y_v+α b_vy_vϖ_v^-n_v)βϖ_v^-n_v+αϖ_v^-n_v; b_vy_v 1+β b_vy_vϖ_v^-n_v ]∈ K_v[m_v].
Analyzing the (2,1)-th entry of the matrix on the LHS of (<ref>) yields that e_v(z_v)+e_v(b_v)+e_v(y_v)≥ m_v≥ n_v. Hence an investigation of the (1,1)-th and (2,2)-th entry leads to
e_v(y_v)+2e_v(z_v)=0
e_v(z_v)+e_v(y_v)≥ 0, e_v(z_v)≥ 0.
As a consequence, e_v(z_v)=0, i.e., z_v∈𝒪_v^×. So e_v(y_v)=0, e_v(b_v)≥ m_v.
Hence we have f_v([ 1; b_v 1 ][ y_v; 1 ])=1_𝒪_v^×(y_v)1_ϖ_v^m_v𝒪_v(b_v). After a change of variable (i.e., β↦ y_v^-1β),
𝒥_v(x_v)=|τ(χ_v)|^-2/(K_v[m_v])∑_α,β∫_ϖ_v^m_v𝒪_v1_K_v[m_v](X_v)ψ_v(b_vx_v)χ_v(α)χ_v(β)db_v,
where X_v denotes the matrix
[ 1+α b_vϖ_v^-n_v (1+α b_vϖ_v^-n_v)βϖ_v^-n_v+αϖ_v^-n_v; b_v 1+β b_vϖ_v^-n_v ].
Note that 1_K_v[m_v](X_v)≠ 0 unless (1+α b_vϖ_v^-n_v)β +α∈ϖ_v^n_v𝒪_v. Hence,
𝒥_v(x_v)=|τ(χ_v)|^-2χ_v(-1)/(K_v[m_v])∑_α∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×∫_ϖ_v^m_v𝒪_vψ_v(b_vx_v)χ_v(1+α b_vϖ_v^-n_v)db_v.
Write b_v=ϖ_v^mγ_v, γ_v∈𝒪_v^×. Changing the variable α↦γ_v^-1α,
𝒥_v(x_v)=|τ(χ_v)|^-2χ_v(-1)/(K_v[m_v])ζ_F_v(1)∑_m≥ m_vq_v^-mG(m)R(m,x_v).
where G is the character sum
G(m):=∑_α∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×χ_v(1+αϖ_v^m-n_v),
and R(m,x_v) is the Ramanujan sum
R(m,x_v):=∫_𝒪_v^×ψ_v(γ_vϖ_v^mx_v)d^×γ_v.
Applying the trivial bound G(m)≪ q_v^n_v, and R(m,x_v)=0 if m<-e_v(x_v)-1, R(m,x_v)≪ q_v^-1 if m=-e_v(x_v)-1, and R(m,x_v)=1 if m≥ -e_v(x_v), we then deduce that
J_,v(s)≪ (m_v+2n_v)(K_v[m_v])^-1.
§.§ Local estimates at archimedean places
Similar to Lemma <ref> we have
Let notation be as before. Let ε>0 be a constant. Let 𝒞:={s∈ℂ: |s|=ε} be the circle of radius ε. Then for s∈𝒞, we have
∏_v|∞J_,v(s)≪ T^1/2+ε,
where the implied constant depends on F, ε, c_v and C_v, v|∞.
§.§ Proof of Proposition <ref>
Let ε>0 be a constant. Let 𝒞:={s∈ℂ: |s|=ε} be the circle of radius ε (cf. Lemma <ref>). By Cauchy formula,
J^_,(f,χ)=1/2π i∫_𝒞J^_,(f,s,χ)/sds
Plugging the expression of J^_,(f,s,χ) into the above integral,
J^_,(f,χ)=V_F/2π i∫_𝒞N_F^(𝔔)(𝔇_F)^1/2+sζ_F^(𝔔)(1+2s)∏_v|𝔔J_,v(s)∏_v|∞J_,v(s)/sds.
By the estimate (<ref>) and Lemma <ref> we have
J^_,(f,χ)≪ V_F^1+ε∫_𝒞N_F(𝔇_F)^1/2+ε T^1/2+ε/|ε|·max_s∈𝒞|ζ_F^(𝔔)(1+2s)|ds≪ M^1+εT^1/2+ε.
Hence, Proposition <ref> follows.
§ THE GEOMETRIC SIDE: REGULAR ORBITAL INTEGRALS
Recall the definition (<ref>) in §<ref>:
J^,2_,(f,0,χ):=∑_t∈ F-{0,1}∏_v∈Σ_Fℰ_v(t),
where for v∈Σ_F,
ℰ_v(t):=∫_F_v^×∫_F_v^×f_v([ y_v x_v^-1t; x_vy_v 1 ])χ_v(y_v)d^×y_vd^×x_v.
By Theorem 5.6 in <cit.> (or <cit.>) the orbital integral J^,2_,(f,0,χ) converges absolutely. We shall establish an upper bound for it as follows.
Let notation be as before. Then
J^,2_,(f,0,χ)≪ T^εM^εQ^1+ε·1_M≪ Q^2(M,Q),
where the implied constant depends on ε, F, c_v, and C_v, v|∞. Here T, M, and Q are defined in §<ref>. In particular, J^,2_,(f,0,χ)=0 if M is large enough.
The observation that J^,2_,(f,0,χ)=0 for large M aligns with the calculation in <cit.>, despite the distinct nature of the regular orbital integrals involved.
§.§ Local Estimates: unramified nonarchimedean places
The following straightforward calculation can be found in <cit.>.
Let v∈Σ_F, be such that v∤𝔔. Then
ℰ_v(t)≪(1-e_v(1-t))(1+e_v(t)-2e_v(1-t))/(K_v[m_v])·1_e_v(t-1)≤ 0, e_v(t)-e_v(1-t)≥ m_v.
Moreover, ℰ_v(t)=1 if e_v(t)=e_v(1-t)=0, m_v=0, and v∤𝔇_F. In particular, ℰ_v(t)=1 for all but finitely many v's.
§.§ Local Estimates at Ramified Places Σ_^-
In this section, we consider the case where v∈Σ_^-, namely v|𝔔 and m_v < n_v. The local integrals ℰ_v(t) exhibit characteristics that distinguish them from those treated in <cit.>.
Let v∈Σ_^-. Then
ℰ_v(t)≪
q_v^m_v+k if e_v(1-t)=-2k for m_v-n_v≤ k≤ -1
(e_v(t)-e_v(1-t)+1)q_v^m_v if e_v(t)-e_v(1-t)≥ 0
(1-e_v(t))^2q_v^m_v if e_v(t)≤ -1
0 otherwise,
where the implied constant is absolute.
By definition, f_v([ y_v x_v^-1t; x_vy_v 1 ])=0 unless
ϖ_v^k[ 1 αϖ_v^-n_v; 1 ][ y_v x_v^-1t; x_vy_v 1 ][ 1 βϖ_v^-n_v; 1 ]∈ K_v[m_v]
for some k∈ℤ. Write x_v=ϖ_v^r_1γ_1, y_v=ϖ_v^r_2γ_2, where r_1, r_2∈ℤ and γ_1, γ_2∈𝒪_v^×. Then (<ref>) becomes
ϖ_v^k[ 1 γ_1αϖ_v^-n_v; 1 ][ ϖ_v^r_2 ϖ_v^-r_1t; ϖ_v^r_1+r_2 1 ][ 1 γ_1γ_2βϖ_v^-n_v; 1 ]∈ K_v[m_v].
Changing variables α↦γ_1^-1α, β↦γ_1^-1γ_2^-1β, the above constraint becomes
ϖ_v^kY_α,β,r_1,r_2,t∈ K_v[m_v]
for some k∈ℤ, where Y_α,β,r_1,r_2,t is defined by
[ ϖ_v^r_2+αϖ_v^r_1+r_2-n_v (ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v; ϖ_v^r_1+r_2 1+βϖ_v^r_1+r_2-n_v ].
By definition the local integral ℰ_v(t) becomes
1/|τ(χ_v)|^2∑_α,βχ(α)χ(β)∑_r_1, r_2∈ℤ f_v(Y_α,β,r_1,r_2,t;ω_v),
where f_v(·;ω_v) is defined by (<ref>) in §<ref>. Note that (<ref>) amounts to
2k+r_2+e_v(1-t)=0
k+r_1+r_2≥ m_v
ϖ_v^k(ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)∈𝒪_v
ϖ_v^k(1+βϖ_v^r_1+r_2-n_v)∈𝒪_v
ϖ_v^k[(ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v]∈𝒪_v.
We categorize the discussion into three cases according to the value of k: the case k≤ -1 is addressed in §<ref>, the case k=0 in §<ref>, and the case k≥ 1 in §<ref> below. Proposition <ref> then follows readily from these discussions.
§.§.§ The case that k≤ -1
Suppose k≤ -1. Then (<ref>) simplifies to
2k+e_v(1-t)=0
m_v-n_v≤ k≤ -1
r_2=0, r_1=n_v
1+β∈ϖ_v^-k𝒪_v
1+α∈ϖ_v^-k𝒪_v
ϖ_v^k[(1+α )(1+β)ϖ_v^-n_v+ϖ_v^-n_v(t-1)]∈𝒪_v.
* Suppose that k=-n_v. Then m_v=0, e_v(1-t)=-2n_v, and α≡β≡ -1 (mod ϖ_v^n_v). So the contribution from this case is
1/|τ(χ_v)|^21_e_v(1-t)=-2n_v1_m_v=0=q_v^-n_v1_e_v(1-t)=-2n_v1_m_v=0.
* Suppose that k>-n_v. Write α=-1+ϖ_v^-kα', and β=-1+ϖ_v^-kβ', where α', β' run mod ϖ_v^n_v+k. Then
ϖ_v^k[(1+α )(1+β)ϖ_v^-n_v+ϖ_v^-n_v(t-1)]∈𝒪_v
becomes
α'β'+(t-1)ϖ_v^2k∈ϖ_v^n_v+k𝒪_v.
So the contribution from this case is
|τ(χ_v)|^-2/(K_v[m_v])∑_max{m_v-n_v,1-n_v}≤ k≤ -1𝒮(k),
where
𝒮(k):=∑_α',β' mod ϖ_v^n_v+k, α'β'≡ -(t-1)ϖ_v^2k (mod ϖ_v^n_v+k)χ(1-ϖ_v^-kα')χ(1-ϖ_v^-kβ')ω_v(ϖ_v^-kβ').
Employing the trivial bound to 𝒮(k), we see that the corresponding contribution to ℰ_v(t) in this case (i.e., k>-n_v) is
≪(K_v[m_v])^-1∑_max{m_v-n_v,1-n_v}≤ k≤ -1q_v^k·1_e_v(1-t)=-2k.
Therefore, the contribution to ℰ_v(t) in the case that k≤ -1 is
≪q_v^k/(K_v[m_v])∑_m_v-n_v≤ k≤ -11_e_v(1-t)=-2k≪ q_v^m_v+k∑_m_v-n_v≤ k≤ -11_e_v(1-t)=-2k.
§.§.§ The case that k=0
Suppose that k=0 in (<ref>), which implies that
r_2+e_v(1-t)=0
r_1+r_2≥ m_v
min{r_2, r_1+r_2-n_v}≥ 0
(ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v∈𝒪_v.
Then r_1+r_2≥max{n_v,m_v}=n_v, e_v(t)-r_1≥ -n_v, and e_v(1-t)=-r_2≤ 0. So 0≤ r_2=-e_v(t-1), and n_v+e_v(1-t)≤ r_1≤ e_v(t)+n_v. Therefore, the contribution to ℰ_v(t) from this case is
1_e_v(t)-e_v(1-t)≥ 0/|τ(χ_v)|^2(K_v[m_v])∑_n_v+e_v(1-t)≤ r_1≤ e_v(t)+n_v𝒥_1(r_1,t),
where 𝒥_1(r_1,t) is defined by
∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×
(ϖ_v^r_2+αϖ_v^r_1-e_v(t-1)-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v∈𝒪_vχ(α)χ(β)ω_v(1+βϖ_v^r_1-e_v(t-1)-n_v).
* Suppose r_2≥ 1. Then e_v(1-t)≤ -1, implying that e_v(t)=e_v(1-t)=-r_2≤ -1. Hence, -r_1+e_v(t)=-r_1-r_2≤ -n_v (from the third constraint in (<ref>)). Along with the last condition in (<ref>) we have -r_1+e_v(t)≥ -n_v. So -r_1+e_v(t)=-n_v, i.e., r_1+r_2=n_v. Consequently,
𝒥_1(r_1,t)=∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×
(ϖ_v^-e_v(t)+α )β +ϖ_v^n_v-r_1t+α≡ 0 (mod ϖ_v^n_v) χ(α)χ(β)ω_v(1+β).
Write t=ϖ_v^e_v(t)γ under the embedding F^×↪ F_v^×, where γ∈𝒪_v^×. Then
𝒥_1(r_1,t)=∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×
(ϖ_v^-e_v(t)+α )(1+β)≡ -γ+ϖ_v^-e_v(t) (mod ϖ_v^n_v) χ(α)χ(β)ω_v(1+β),
which, after a change of variables, is equal to
𝒥_1(r_1,t)=∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×
αβ≡ -γ+ϖ_v^-e_v(t) (mod ϖ_v^n_v) χ(α-ϖ_v^-e_v(t))χ(β-1)ω_v(β).
Since -γ+ϖ_v^-e_v(t)∈𝒪_v^×, by the trivial bound, we have |𝒥_1(r_1,t)|≤ q_v^n_v.
* Suppose r_2=0. Then e_v(1-t)=0. Therefore, 𝒥_1(r_1,t) is equal to
∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×
(1+αϖ_v^r_1+r_2-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v∈𝒪_vχ(α)χ(β)ω_v(1+βϖ_v^r_1-n_v).
Changing the variable α↦α^-1, the sum 𝒥_1(r_1,t) becomes
∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×
(α+ϖ_v^r_1-n_v)(β +ϖ_v^n_v-r_1t) ≡ t-1 (mod ϖ_v^n_v) χ(α)χ(β)ω_v(1+βϖ_v^r_1-n_v).
Changing variables α↦α-ϖ_v^r_1-n_v and β↦β-ϖ_v^n_v-r_1t, 𝒥_1(r_1,t) can be rewritten as
∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×
αβ≡ t-1 (mod ϖ_v^n_v) χ(α-ϖ_v^r_1-n_v)χ(β-ϖ_v^n_v-r_1t)ω_v(1-t+βϖ_v^r_1-n_v).
Since e_v(t-1)=0, then β is uniquely determined by α. Hence the trivial bound yields |𝒥_1(r_1,t)|≤ q_v^n_v.
Consequently, substituting the above discussions into (<ref>) we then see that the contribution from this case is
≪1_e_v(t)-e_v(1-t)≥ 0/|τ(χ_v)|^2(K_v[m_v])∑_r_1q_v^m_v≪ (e_v(t)-e_v(1-t)+1)q_v^m_v1_e_v(t)-e_v(1-t)≥ m_v-n_v,
where n_v+e_v(1-t)≤ r_1≤ e_v(t)+n_v.
§.§.§ The case that k≥ 1
Suppose that k≥ 1 in (<ref>), which implies that
2k+r_2+e_v(1-t)=0
k+r_1+r_2≥max{n_v, m_v}=n_v
k+r_2≥ 0
ϖ_v^k[(ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v]∈𝒪_v.
From the last constraint we conclude that k-r_1+e_v(t)≥ -n_v. Hence
e_v(1-t)=e_v(t)≤ -k≤ -1
e_v(t)≤ r_2≤ -e_v(t)-2
n_v+e_v(t)+1≤ r_1≤ n_v
k≥ n_v-r_1-r_2≥ 1
[(ϖ_v^k+r_2+α )βϖ_v^-n_v+ϖ_v^k-r_1t+αϖ_v^k-n_v]∈𝒪_v.
Therefore, the contribution to ℰ_v(t) from this case is
1_e_v(t)≤ -1/|τ(χ_v)|^2(K_v[m_v])∑_r_1∑_e_v(t)≤ r_2≤ -e_v(t)-2∑_1≤ k≤ -e_v(t)𝒥_2(r_1,r_2,k,t),
where n_v+e_v(t)+1≤ r_1≤ n_v, and 𝒥_2(r_1,r_2,k,t) is defined by
∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×
(ϖ_v^k+r_2+α)(β +ϖ_v^k)≡ϖ_v^2k+r_2-tϖ_v^n_v+k-r_1 (mod ϖ_v^n_v) χ(α)χ(β)
ω_v(1+βϖ_v^r_1+r_2-n_v).
Note that k≥ 1, β +ϖ_v^k∈(𝒪_v/ϖ_v^n_v𝒪_v)^×. So α is uniquely determined by β. Therefore, by the trivial bound, |𝒥_2(r_1,r_2,k,t)|≤ q_v^n_v. Along with (<ref>), the contribution to ℰ_v(t) from this case is
∑_n_v+e_v(t)+1≤ r_1≤ n_v∑_e_v(t)≤ r_2≤ -e_v(t)-2q_v^m_v1_e_v(t)≤ -1≪ (1-e_v(t))^2q_v^m_v1_e_v(t)≤ -1.
A more refined bound can be derived in the case where k≥ 0 by estimating the character sums nontrivially. However, it becomes apparent that the contribution from the k≥ 0 case is overshadowed by the contribution from k≤ -1. Therefore, there is no necessity to further reduce the error term.
§.§ Local Estimates at Ramified Places Σ_^+
Consider v∈Σ_^+, which means v|𝔔 and m_v≥ n_v, where m_v=e_v(𝔐) and n_v=r_χ_v (cf. §<ref>).
Let v∈Σ_^+. Then
ℰ_v(t)≪
(1-e_v(t))^2q_v^m_v if e_v(t)≤ -1,m_v=n_v,
(e_v(t)-e_v(1-t)+1+m_v-n_v)q_v^m_v if e_v(t)≥ m_v-n_v,
0 otherwise,
where the implied constant depends at most on F_v.
Consider the notation used in the proof of Proposition <ref> in §<ref> (or in <cit.>). Since m_v≥ n_v≥ 1, the constraints (<ref>) can be simplified as follows:
2k+r_2+e_v(1-t)=0
k+r_1+r_2≥ m_v
ϖ_v^k(ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)∈𝒪_v^×
ϖ_v^k(1+βϖ_v^r_1+r_2-n_v)∈𝒪_v^×
ϖ_v^k[(ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v]∈𝒪_v.
By considering the second and fourth constraints in (<ref>), we deduce that k≥ 0. We can now proceed to examine the following two cases.
§.§.§ The case that k=0
Suppose that k=0 in (<ref>), which implies that
r_2+e_v(1-t)=0
r_1+r_2≥ m_v
min{r_2, r_1+r_2-n_v}=0
(ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v∈𝒪_v.
Then r_1+r_2≥max{n_v,m_v}=m_v, e_v(t)-r_1≥ -n_v, and e_v(1-t)=-r_2≤ 0. So 0≤ r_2=-e_v(t-1), and m_v+e_v(1-t)≤ r_1≤ e_v(t)+n_v. Therefore, the contribution to ℰ_v(t) from this case is
1_e_v(t)-e_v(1-t)≥ m_v-n_v/|τ(χ_v)|^2(K_v[m_v])∑_m_v+e_v(1-t)≤ r_1≤ e_v(t)+n_v𝒥_1(r_1,t),
where 𝒥_1(r_1,t) is defined by
∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×
(ϖ_v^r_2+αϖ_v^r_1-e_v(t-1)-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v∈𝒪_vχ(α)χ(β)ω_v(1+βϖ_v^r_1-e_v(t-1)-n_v).
By the trivial bound (as in §<ref>) the sum in (<ref>) is
≪ (e_v(t)-e_v(1-t)+1+m_v-n_v)q_v^m_v1_e_v(t)-e_v(1-t)≥ m_v-n_v.
§.§.§ The case that k≥ 1
Suppose that k≥ 1 in (<ref>), which implies that
2k+r_2+e_v(1-t)=0
k+r_1+r_2≥ n_v
k+r_1+r_2-m_v=0
k+r_2≥ 0
ϖ_v^k[(ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v]∈𝒪_v.
Since m_v≥ n_v, then by the second and the third constraints in (<ref>) we have m_v=n_v. From the last constraint we conclude that k-r_1+e_v(t)≥ -n_v. Hence
e_v(1-t)=e_v(t)≤ -k≤ -1, m_v=n_v
e_v(t)≤ r_2≤ -e_v(t)-2
n_v+e_v(t)+1≤ r_1≤ n_v
k= n_v-r_1-r_2≥ 1
[(ϖ_v^k+r_2+α )βϖ_v^-n_v+ϖ_v^k-r_1t+αϖ_v^k-n_v]∈𝒪_v.
As in §<ref>, the contribution to ℰ_v(t) from this case is
1_e_v(t)≤ -1·1_m_v=n_v/|τ(χ_v)|^2(K_v[m_v])∑_n_v+e_v(t)+1≤ r_1≤ n_v∑_e_v(t)≤ r_2≤ -e_v(t)-2𝒥_2(r_1,r_2,k,t),
where 𝒥_2(r_1,r_2,k,t) is defined by
∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×
(ϖ_v^k+r_2+α)(β +ϖ_v^k)≡ϖ_v^2k+r_2-tϖ_v^n_v+k-r_1 (mod ϖ_v^n_v) χ(α)χ(β)
ω_v(1+βϖ_v^r_1+r_2-n_v).
By trivial bound the contribution to ℰ_v(t) in this case is
≪ (1-e_v(t))^2q_v^m_v1_e_v(t)≤ -11_m_v=n_v.
Therefore, Proposition <ref> follows.
Let notation be as before. Then
𝒥_v^(2)(r_1,t)≪ n_vq_v^r_ω_v+n_v/2+n_v-r_1+e_v(t)/2,
where the implied constant is absolute.
Note that (1+αϖ_v^r_1-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v∈𝒪_v amounts to
(α^-1+ϖ_v^r_1-n_v)β +α^-1ϖ_v^n_v-r_1t+1∈ϖ_v^n_v𝒪_v.
Changing the variable α↦α^-1, we have
𝒥_v^(2)(r_1,t)=∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×
(α+ϖ_v^r_1-n_v)β +αϖ_v^n_v-r_1t+1∈ϖ_v^n_v𝒪_vχ_v(α)χ_v(β)ω_v(1+βϖ_v^r_1-n_v).
Changing variables α↦α-ϖ_v^r_1-n_v and β↦β-ϖ_v^n_v-r_1t, 𝒥_v^(2)(r_1,t) becomes
∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×
αβ≡ t-1 (mod ϖ_v^n_v) χ_v(α-ϖ_v^r_1-n_v)χ_v(β-ϖ_v^n_v-r_1t)ω_v(1-t+βϖ_v^r_1-n_v).
Let h∈𝒪_v^×. Let 𝒥_v^(2)(r_1,t,ψ_v,h) be defined by
∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×
αβ≡ t-1 (mod ϖ_v^n_v) χ_v(α-ϖ_v^r_1-n_v)χ_v(β-ϖ_v^n_v-r_1t)ψ_v(hβ q_v^-r_ω_v).
Here we recall that ψ_v is a fixed unramified additive character of F_v. By definition, we have 𝒥_v^(2)(r_1,t,ψ_v,h)=0 if r_ω_v>r_1.
Notice that χ is primitive. By Theorem 2G of <cit.> (cf. p.45) or Deligne's quasi-orthogonality of trace functions (cf. <cit.>) and Lemmas 12.2 and 12.3 in <cit.>, following the proof of Proposition 2 in <cit.>, we have
𝒥_v^(2)(r_1,t,ψ_v,h) ≪ n_v(q_v^n_v-r_1+e_v(t),q_v^r_1-n_v,q_v^n_v)^1/2q_v^n_v/2·1_r_ω_v≤ r_1,
where the implied constant is absolute. In particular, (<ref>) yields that
𝒥_v^(2)(r_1,t,ψ_v,h) ≪ n_vq_v^n_v-r_1+e_v(t)/2· q_v^n_v/2·1_r_ω_v≤ r_1.
Since ω_v is primitive, we have the Gauss sum expansion
ω_v(γ)=1/τ(ω_v)∑_h∈ (𝒪_v/ϖ_v^r_ω_v𝒪_v)^×ω_v(h)ψ_v(hγ q_v^-r_ω_v),
where q_v^r_ω_v is the conductor of ω_v.
Hence, (<ref>) follows from (<ref>), (<ref>), triangle inequality, and the fact that |τ(ω_v)|=q_v^r_ω_v/2.
* Suppose that k+r_1+r_2=n_v. Then (<ref>) amounts to
2k+r_2+e_v(1-t)=0
k+r_1+r_2=n_v≥ m_v
k+r_2≥ 0
ϖ_v^k[(ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v]∈𝒪_v.
As a consequence, (<ref>) yields
e_v(1-t)=e_v(t)≤ -k≤ -1
e_v(t)≤ r_2≤ -e_v(t)-2
n_v+e_v(t)+1≤ r_1≤ n_v
k=n_v-r_1-r_2≥ 1
[(ϖ_v^k+r_2+α )βϖ_v^-n_v+ϖ_v^k-r_1t+αϖ_v^k-n_v]∈𝒪_v.
From the last constraint we conclude that k-r_1+e_v(t)=-n_v. Hence
∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×
(ϖ_v^k+r_2+α)(β +ϖ_v^k)≡ϖ_v^2k+r_2-tϖ_v^-e_v(t) (mod ϖ_v^n_v) χ(α)χ(β)
ω_v(1+βϖ_v^-k)
Note that 2r_2+k=m_2-r_1+k=-e_v(t)≥ 1. Then γ:=ϖ_v^2k+r_2-tϖ_v^-e_v(t)∈𝒪_v^×. After a change of variables, we obtain
𝒥_v^(3)(r_1,r_2,t)=∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×
αβ≡γ (mod ϖ_v^n_v) χ(α-ϖ_v^k+r_2)χ(β-ϖ_v^k)ω_v(βϖ_v^-k).
By <cit.> and the fact that k≤ -e_v(t),
𝒥_v^(3)(r_1,r_2,t)≪ n_vq_v^n_v+r_ω_v/2· q_v^min{k,k+r_2,n_v}/21_r_ω_v≤ n_v-k≤ n_vq_v^n_v+r_ω_v-e_v(t)/21_r_ω_v≤ n_v.
2k+r_2+e_v(1-t)=0
k+r_1+r_2≥ m_v
ϖ_v^k(ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)∈𝒪_v
ϖ_v^k(1+βϖ_v^r_1+r_2-n_v)∈𝒪_v
ϖ_v^k[(ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v]∈𝒪_v.
* Suppose that k+r_1+r_2≥ n_v+1. Then m_v=0, which forces that r_ω_v=0. In this case (<ref>) amounts to
2k+r_2+e_v(1-t)=0
k+r_1+r_2≥ n_v
k+r_1+r_2≥ m_χ_v
k+r_2≥ 0
ϖ_v^k[(ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v]∈𝒪_v^×.
e_v(1-t)=e_v(t)≤ -k≤ -1
e_v(t)≤ r_2≤ -e_v(t)-2
n_v+e_v(t)+1≤ r_1≤ n_v
k+r_1+r_2≥ n_v+1
[(ϖ_v^k+r_2+α )βϖ_v^-n_v+ϖ_v^k-r_1t+αϖ_v^k-n_v]∈𝒪_v.
Since n_v=m_v, then r_ω_v=0, i.e., ω_v is trivial. Hence the contribution from this case to ℰ_v(t) is
ℰ^(3)_v(t):=∑_r_1=n_v+e_v(t)+1^n_v∑_r_2=e_v(t)^-e_v(t)-2q_v^-2r_1s_0-r_2s_01_r_1+r_2≤ n_v-1/|τ(χ_v)|^2(K_v[n_v])·𝒥_v^(3)(r_1,r_2,t),
where we set k=n_v-r_1-r_2 and
𝒥_v^(3)(r_1,r_2,t):=∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×
(ϖ_v^k+r_2+α)(β +ϖ_v^k)+tϖ_v^-e_v(t)-ϖ_v^2k+r_2∈ϖ_v^n_v𝒪_vχ(α)χ(β).
Therefore, we have
ℰ^(3)_v(t)≪ n_v(1-e_v(t))^2q_v^-(2n_v+2s_0+3e_v(t))s_0·1_e_v(t)≤ -1· q_v^n_v-e_v(t)/2,
where the implied constant is absolute.
Then Proposition <ref> follows from (<ref>), (<ref>) and (<ref>).
§.§ Local Estimates: archimedean
Let v|∞. Define
ℰ_v^†:=∫_F_v^×∫_F_vmax_t∈ F-{0,1}|f_v([ y_v x_v^-1t; x_vy_v 1 ])|dx_vd^×y_v.
By <cit.> we have the following estimate.
Let notation be as before. Let v|∞. Then
ℰ_v^†≪ T_v^ε,
where the implied constant depends on ε, F, c_v, and C_v defined in §<ref>.
§.§ Bounding Regular Orbital Integrals: Proof of Theorem <ref>
§.§.§ The support of the rationals t∈ F-{0,1}
Let notation be as before. Suppose t∈ F-{0,1}. Let f be the test function defined in §<ref>. Let
𝔛(𝔔,f):={ξ∈ F^×∩∏_v∈Σ_^-𝔭_v^-2(n_v-m_v)∏_v∤𝔔𝔭_v^m_v𝒪_F: |ξ|_v≪ 1, v|∞},
where the implied constant depends only on f_∞. Then the integral ∏_v∈Σ_Fℰ_v(t) converges absolutely and it vanishes unless t/t-1∈𝔛(𝔔,f).
Recall the definition (<ref>): for v∈Σ_F,
ℰ_v(t):=∫_F_v^×∫_F_v^×f_v([ y_v x_v^-1t; x_vy_v 1 ])χ_v(y_v)d^×y_vd^×x_v.
By Lemma <ref> the integral ℰ_v(t)=1 for all but finitely many v's. It then follows from Propositions <ref> and <ref>, and Lemma <ref>, that ∏_v∈Σ_Fℰ_v(t) converges absolutely and vanishes unless
e_v(t)-e_v(t-1)≥ m_v, if v∤𝔔,
e_v(t)-e_v(t-1)≥ 0, if v∈Σ_^+.
e_v(t)-e_v(t-1)≥ -2(n_v-m_v), if v∈Σ_^-.
Since t/(t-1)∈ F-{0,1}, then (<ref>) follows from (<ref>).
§.§.§ Estimate of nonarchimedean integrals
Fix an ideal ℜ⊂𝒪_F with the property that e_v(ℜ)=m_v for v∤𝔔, and e_v(ℜ)=0 for all v<∞ and v|𝔔.
Fix an ideal 𝔑⊂𝒪_F with the property that e_v(𝔑)=n_v-m_v for v∈Σ_^-, and e_v(𝔑)=0 for all v<∞ and v∉Σ_^-.
For t∈ F-{0,1} with t/(t-1)∈𝔛(𝔔,f) (cf. (<ref>)), we may write
t/(t-1)=u, u∈ℜ𝔑^-2𝒪_F.
Then 1/(t-1)=u-1.
Let notation be as above. Let ℰ_v(t) be defined by (<ref>). Set ℰ_(t):=∏_v<∞|ℰ_v(t)|. Let t/(t-1)=u∈ℜ𝔑^-2𝒪_F be as in (<ref>). Then ℰ_(t) is
≪ (MQN_F(u(u-1)))^εM∏_v∈Σ_F,
v∤𝔔1_e_v(u)≥ m_v∏_v∈Σ_^-𝒥_v^-(u)∏_v∈Σ_^+𝒥_v^+(u),
where M=N_F(𝔐) (cf. (<ref>)), and
𝒥_v^-(u):= 1_e_v(u)≥ 0+∑_m_v-n_v≤ k≤ -1q_v^k1_e_v(u-1)=2k,
𝒥_v^+(u):= 1_e_v(u-1)≥ 11_m_v=n_v+1_e_v(u)≥ m_v-n_v.
Here the implied constant in (<ref>) depends on F and ε.
By Lemma <ref> we have ℰ_v(t)=1 if e_v(t)=e_v(1-t)=0, m_v=0, n_v=0, and v∤𝔇_F. There are finitely many remaining places
v∈𝒱:={v∈Σ_F,:v|𝔐𝔑 or e_v(t)≠ 0 or e_v(t-1)≠ 0}.
Let us denote the expression α as follows:
∏_v∈𝒱∩Σ_^+n_v^2(|e_v(t)-e_v(t-1)|+1)^2∏_v∈𝒱-Σ_^+(1+|e_v(t)|+2|e_v(t-1)|)^2,
where the terms in the product dominate coefficients in Lemma <ref>, Propositions <ref> and <ref>. Using (<ref>), we observe that e_v(u)≥ -e_v(𝔔). Consequently, we have the estimate:
α≪ (MQ)^2ε· (N_F(u)N_F(u-1))^ε,
where the implied constant depends on ε. As a consequence, (<ref>) follows from Lemma <ref>, Propositions <ref> and <ref>.
Let x_∞=⊗_v|∞x_v∈ F_∞. For t∈ F-{0,1} with t/(t-1)∈𝔛(𝔔,f), parametrize t/(t-1) via (<ref>). Let
𝒞(x_∞):=∑_t∈ F-{0,1}, t/t-1=u∈𝔛(𝔔,f)
|t/t-1|_v≪ |x_v|_v, v|∞ℰ_(t).
Let notation be as before. Let x_∞∈ F_∞^×. Let 𝒞(x_∞) be defined by (<ref>). Then
𝒞(x_∞)≪_ε,F(MQ(1+|x_∞|_∞))^ε· |x_∞|_∞· Q·1_M≪ Q^2(M,Q) |x_∞|_∞,
where the implied constant depends on ε and F.
Note that u∈ℜ𝔑^-2𝒪_F-{0,1} and N_F(u)≪ |x_∞|_∞. Hence, N_F(ℜ𝔑^-2)≪ |x_∞|_∞, i.e., M/(M,Q)≪ Q^2|x_∞|_∞. By Lemma <ref>, we have
𝒞(x_∞)≪ (MQ(1+|x_∞|_∞))^ε𝒮(x_∞)·∏_v∈Σ_F,q_v^m_v.
where the auxiliary sum 𝒮(x_∞) is defined by
𝒮(x_∞):=∑_u∈ℜ𝔑^-2𝒪_F∩ F^×
|u|_v≪ |x_v|_v, v|∞
e_v(u)≥ m_v, v<∞, v∤𝔔𝒮^+(u)𝒮^-(u).
Here the integral ideals ℜ and 𝔑 are defined in §<ref>, and
𝒮^+(u):= ∏_v∈Σ_^+[1_e_v(u-1)≥ 1·1_m_v=n_v+1_e_v(u)≥ m_v-n_v],
𝒮^-(u):= ∏_v∈Σ_^-[1_e_v(u)≥ 0+∑_m_v-n_v≤ k≤ -1q_v^k1_e_v(u-1)=2k].
We proceed to deal with 𝒮(x_∞). Let 𝔔^+:=∏_v∈Σ_^+𝔭_v and 𝔔^-:=∏_v∈Σ_^-𝔭_v. Expanding the products 𝒮^+(u)𝒮^-(u) we obtain
𝒮(x_∞)=∑_𝔞_1𝔞_2=𝔔^+
m_v=n_v, ∀ v|𝔞_1∑_𝔟_3𝔟_4=𝔔^-𝒮(𝔞_1,𝔟_3),
where
𝒮(𝔞_1,𝔟_3):=∑_u∈ℜ𝔑^-2𝒪_F∩ F^×
|u|_v≪ |x_v|_v, v|∞
e_v(u)≥ m_v, v<∞, v∤𝔔
e_v_1(u-1)≥ 1, v_1|𝔞_1
e_v_2(u)≥ m_v_2-n_v_2, v_2|𝔞_2
e_v_3(u)≥ 0, v_3|𝔟_3
∏_v_4|𝔟_4∑_m_v_4-n_v_4≤ k≤ -1q_v_4^k1_e_v_4(u-1)=2k.
Write 𝔟_4=𝔔^-𝔟_3^-1=∏_v∈𝒱_4𝔭_v, where 𝒱_4={v_1',⋯,v_l'} is a subset of Σ_^-. Denote by k=(k_1,⋯,k_l)∈ℤ^l. Then
∏_v_4|𝔟_4∑_m_v_4-n_v_4≤ k≤ -1q_v_4^k1_e_v_4(u-1)=2k=∑_k=(k_1,⋯,k_l)
m_v_i'-n_v_i'≤ k_i≤ -1, 1≤ i≤ l∏_j=1^lq_v_j'^k_j1_e_v_j'(u-1)=2k_j.
Therefore,
𝒮(x_∞)=∑_𝔞_1𝔞_2=𝔔^+
m_v=n_v, ∀ v|𝔞_1∑_𝔟_3𝔟_4=𝔔^-∑_k=(k_1,⋯,k_l)
m_v_i'-n_v_i'≤ k_i≤ -1, 1≤ i≤ l∏_j=1^lq_v_j'^k_j·𝒮^†(x_∞),
where
𝒮^†(x_∞):=∑_u∈ℜ𝔑^-2𝒪_F∩ F^×
|u|_v≪ |x_v|_v, v|∞
e_v(u)≥ m_v, v<∞, v∤𝔔
e_v_1(u-1)≥ 1, v_1|𝔞_1
e_v_2(u)≥ m_v_2-n_v_2, v_2|𝔞_2
e_v_3(u)≥ 0, v_3|𝔟_3
e_v_j'(u-1)=2k_j, 1≤ j≤ l
1.
By counting rational lattice points in a bounded region, we have
𝒮^†(x_∞)≪∑_u∈ℜ𝔑^-2𝒪_F∩ F^×
|u|_v≪ |x_v|_v, v|∞
e_v(u)≥ m_v, v<∞, v∤𝔔
e_v_2(u)≥ m_v_2-n_v_2, v_2|𝔞_2
e_v_3(u)≥ 0, v_3|𝔟_3
e_v_j'(u)=2k_j
1≪ |x_∞|_∞∏_v∈Σ_F,
v∤𝔔q_v^-m_v∏_v_2|𝔞_2q_v_2^n_v_2-m_v_2∏_j=1^lq_v_j'^-2k_j.
Therefore, 𝒮(x_∞) is majorized by
|x_∞|_∞∏_v∈Σ_F,
v∤𝔔1/q_v^m_v∑_𝔞_1𝔞_2=𝔔^+
m_v=n_v, ∀ v|𝔞_1∏_v_2|𝔞_2q_v_2^n_v_2-m_v_2∑_𝔟_3𝔟_4=𝔔^-∑_k=(k_1,⋯,k_l)
m_v_i'-n_v_i'≤ k_i≤ -1, 1≤ i≤ l∏_j=1^l1/q_v_j'^k_j.
Notice that
∑_𝔞_1𝔞_2=𝔔^+
m_v=n_v, ∀ v|𝔞_1∏_v_2|𝔞_2q_v_2^n_v_2-m_v_2=∑_v|𝔔^+q_v_2^n_v_2-m_v_2∑_𝔞_1𝔞_2=𝔔^+
m_v=n_v, ∀ v|𝔞_11≪ Q^ε∑_v|𝔔^+q_v_2^n_v_2-m_v_2,
and
∑_𝔟_3𝔟_4=𝔔^-∑_k=(k_1,⋯,k_l)
m_v_i'-n_v_i'≤ k_i≤ -1, 1≤ i≤ l∏_j=1^lq_v_j'^-k_j≪∏_v|𝔔^-q_v^n_v-m_v∑_𝔟_3𝔟_4=𝔔^-∑_k=(k_1,⋯,k_l)
m_v_i'-n_v_i'≤ k_i≤ -1, 1≤ i≤ l1,
which is ≪ Q^ε∏_v|𝔔^-q_v^n_v-m_v. Therefore,
𝒮(x_∞)≪ |x_∞|_∞Q^ε∏_v∈Σ_F,
v∤𝔔q_v^-m_v∏_v|𝔔q_v^n_v-m_v.
Then (<ref>) follows from substituting (<ref>) into (<ref>).
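Explicitly (using that ∏_v|𝔔q_v^n_v=Q, which is how the conductor of χ enters here), substituting (<ref>) into (<ref>) yields
𝒞(x_∞)≪ (MQ(1+|x_∞|_∞))^ε· Q^ε|x_∞|_∞∏_v∈Σ_F,, v∤𝔔q_v^-m_v∏_v|𝔔q_v^n_v-m_v·∏_v∈Σ_F,q_v^m_v≪ (MQ(1+|x_∞|_∞))^2ε· |x_∞|_∞· Q,
while the relation N_F(ℜ𝔑^-2)≪ |x_∞|_∞ recorded at the beginning of the proof supplies the indicator 1_M≪ Q^2(M,Q)|x_∞|_∞.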
§.§.§ Proof of Theorem <ref>
Recall the definition (<ref>) in §<ref>:
J^,2_,(f,0,χ)=∑_t∈ F-{0,1}∫_𝔸_F^×∫_𝔸_F^×f([ y x^-1t; xy 1 ])χ(y)d^×yd^×x.
So the regular orbital integrals J^,2_,(f,0,χ) is
≪∫_F_∞^×∫_F_∞^×∑_t∈ F-{0,1}
t/t-1∈𝔛(𝔔,f)ℰ_(t)|f_∞([ y_∞ x_∞^-1t; x_∞y_∞ 1 ])|d^×y_∞d^×x_∞,
where ℰ_(t):=∏_v<∞|ℰ_v(t)|.
By the support of f_∞ (cf. (<ref>) in §<ref>), we have
f_∞([ y_∞ x_∞^-1t; x_∞y_∞ 1 ])=0
unless y_∞≍ 1, |x_v|_v≪ 1, and |t/t-1|_v≪ |x_v|_v, for all v|∞. Write
t/(t-1)=u𝔑^-2ℜ
with u∈𝒪_F as in (<ref>). Then
J^,2_,(f,0,χ) is
≪∫_F_∞^×∫_1+o(1)1_|x_v|_v≪ 1
v|∞·𝒞(x_∞)·max_t∈𝔛(𝔔,f)|f_∞([ y_∞ x_∞^-1t; x_∞y_∞ 1 ])|d^×y_∞d^×x_∞,
where 𝒞(x_∞) is defined by (<ref>). Note that |x_v|_v≪ 1 for v|∞, yielding that |x_∞|_∞≪ 1. Hence, we may replace 1_M≪ Q^2(M,Q) |x_∞|_∞ with 1_M≪ Q^2(M,Q) in Lemma <ref>. As a consequence, we have
J^,2_,(f,s_0,χ)≪_ε(MQ)^ε· Q·1_M≪ Q^2(M,Q)·∏_v|∞ℰ_v^†,
where ℰ_v^† is defined by (<ref>). By Lemma <ref>, the above bound becomes
J^,2_,(f,s_0,χ)≪ T^εM^εQ^1+ε·1_M≪ Q^2(M,Q),
where the implied constant depends on ε, F, c_v, and C_v, v|∞.
§ PROOF OF MAIN RESULTS
Recall the intrinsic data in §<ref>. Let F be a number field. Let χ=⊗_vχ_v be a primitive unitary Hecke character of F^×\𝔸_F^×.
§.§ The Spectral Side
Recall the lower bound of J_^,(f,0,χ) in §<ref>.
§.§ The Geometric Side
Recall the geometric side (<ref>) in §<ref>:
J_^,(f,0,χ)=J^_,(f,χ)+J^_,(f,χ)+J^,2_,(f,0,χ).
Let notation be as before. Then
J_^,(f,0,χ)≪ T^1/2+εM^1+ε+T^εM^εQ^1+ε·1_M≪ Q^2(M,Q),
where the implied constant depends on ε, F, c_v, and C_v, v|∞ (cf. §<ref>).
By Propositions <ref> and <ref>, we have
J^_,(f,χ)+J^_,(f,χ)≪_ε M^1+εT^1/2+ε,
where the implied constant depends only on F, ε, and c_v, C_v at v|∞, cf. §<ref>. Moreover, by Theorem <ref> we have
J^,2_,(f,0,χ)≪ T^εM^εQ^1+ε·1_M≪ Q^2(M,Q).
The estimate (<ref>) follows from the above inequalities.
§.§ Put It All Together: Proof of Main Results
Substituting Theorem <ref> and Proposition <ref> into the regularized relative trace formula J_^,(f,0,χ)=J_^,(f,0,χ) (cf. Corollary <ref> in §<ref>), we obtain the following.
Let the notation be as before. Denote by 𝒜_0(Π_∞,𝔐;χ_∞,ω) the set of cuspidal representations and 𝒳_0(Π_∞,𝔐;χ_∞,ω) the set of Hecke characters, as defined in §<ref>. Then
∑_π|L(1/2,π×χ)|^2≪ T^1+εM^1+εQ^ε+T^1/2+εM^εQ^1+ε·1_M≪ Q^2(M,Q),
where π∈𝒜_0(Π_∞,𝔐;χ_∞,ω), and
∑_η∫_ℝ|L(1/2+it,ηχ)L(1/2+it,ωηχ)|^2/|L(1+2it,ωη^2)|^2dt ≪ T^1+εM^1+εQ^ε
+T^1/2+εM^εQ^1+ε·1_M≪ Q^2(M,Q),
where η∈𝒳_0(Π_∞,𝔐;χ_∞,ω), and the implied constant depends only on F, ε, and c_v, C_v at v|∞, cf. §<ref>.
Let notation be as before. Then
∑_π∈𝒜_0(M;χ_∞)
σ_π(𝔭)≥σ|L(1/2,π×χ)|^2≪ T^1-σ/2+ε· M^1-2σ+ε· Q^1-σ+ε.
If C_(χ)>1, then Theorem <ref> follows from (<ref>). In the case where C_(χ)=1, we replace χ with χχ_0, where χ_0 is a fixed Hecke character induced from a Dirichlet character with a fixed modulus, such as 3. Similarly, we replace π with π⊗χ_0. By applying Theorem <ref> to π⊗χ_0 and χχ_0, we obtain the same bound (with a different implied constant dependent on the modulus of χ_0) for the second term L(1/2,π×χ). Consequently, Theorem <ref> follows.
Let π=η⊞η. Then ω=η^2. By <cit.> there exists t_0∈ [2^-1exp(-3√(log C(ηχ))), exp(-3√(log C(ηχ)))] (which might depend on the character ηχ) such that
|L(1/2,ηχ)|≪exp(log^3/4C(ηχ))|L(1/2+it_0,ηχ)|,
where the implied constant depends only on F. Here C(χη)≪ M^1/2Q is the analytic conductor of ηχ. By <cit.> we have
∫_ℝ|L(1/2+it,ηχ)L(1/2+it,ωηχ)|^2/|L(1+2it,ωη^2)|^2dt≫|L(1/2+it_0,ηχ)L(1/2+it_0,ωηχ)|^2/C(ηχ)^ε.
Since ω=η^2, then |L(1/2+it_0,ωηχ)|=|L(1/2+it_0,ηχ)|. So it follows from Theorem <ref> that, for χ∈𝒳_0(Π_∞,𝔐;χ_∞,ω),
|L(1/2+it_0,ηχ)|^4≪ T^1+εM^1+εQ^ε+T^1/2+εM^εQ^1+ε·1_M≪ Q^2(M,Q).
Suppose η is primitive. Then C_(η)=M^1/2. It then follows from (<ref>) and (<ref>) that
L(1/2,ηχ)≪_η_∞,χ_∞ C_(η)^1/2+ε+C_(χ)^1/4+ε.
By symmetry we also have
L(1/2,ηχ)≪_η_∞,χ_∞ C_(η)^1/4+ε+C_(χ)^1/2+ε.
Hence the estimate (<ref>) holds.
§.§ Proof of Corollary <ref>
Let f∈ℱ_2k^new(N). By Hecke's theorem there exists a primitive quadratic character χ of conductor q ≪ kN^1+ε such that L(1/2,f×χ)≠ 0. Here the implied constant is absolute.
Let k∈{2,3,4,5,7}. Denote by 𝒩:=#{g∈ℱ_2k^(N): L(1/2,f×χ)L(1/2,g×χ)≠ 0}. Recall that (e.g., cf. <cit.>)
∑_g∈ℱ_2k^(N)L(1/2,g×χ)≫ N^1-ε.
Then by Cauchy-Schwarz inequality and Corollary <ref> we obtain
N^1-ε≪𝒩^1/2·[∑_g∈ℱ_2k^(N)|L(1/2,g×χ)|^2]^1/2≪𝒩^1/2· N^1/2+ε,
leading to (<ref>). Here the implied constant depends only on ε.
In the above proof, a crucial new ingredient is our Corollary <ref>, which effectively replaces the third moment estimate employed in <cit.>:
∑_g∈ℱ_2k^(N)L(1/2,g×χ)^3≪_k,ε(Nq)^1+ε.
It is worth noting that Corollary <ref>, given by
∑_g∈ℱ_k^(N)|L(1/2,g×χ)|^2≪ (kNq)^ε(kN+k^1/2q·1_N≪ q^2(N,q)),
provides the average Lindelöf estimate in the N-aspect when q≪_k N^1+ε. However, (<ref>) does not yield this bound when q is large.
§ HYBRID SUBCONVEXITY: PROOF OF THEOREM <REF>
In this section, we will establish the validity of Theorem <ref> by presenting a proof that draws upon similar techniques to those used in the proof of Theorem <ref> (cf. §<ref>). However, instead of relying on Theorem <ref> in §<ref>, we will utilize the relative trace formula (i.e. Theorem <ref>) from §<ref>. Notably, the proof is simplified by not requiring amplification, although the overall methodology remains similar.
§.§ Notation
Recall the data in Theorem <ref>: we let
* χ=⊗_vχ_v be a Hecke character of 𝔸_F^×/F^×, and Q:=C_(χ) ;
* 𝔐 be an integral ideal of norm |𝔐|:=N_F(𝔐);
* 𝒜_0^χ_∞(T;𝔐) be the set of cuspidal automorphic representations π=⊗_vπ_v of PGL(2)/F such that π_=⊗_v<∞π_v has arithmetic conductor dividing 𝔐, and π_v⊗χ_v has uniform parameter growth of size (T_v;c_v,C_v), for all v|∞, cf. §<ref>, where T=∏_v|∞ T_v.
Note that the Weyl law yields #𝒜_0^χ_∞(T;𝔐)=(T|𝔐|)^1+o(1).
§.§.§ Choice of Test Functions
Despite potential ambiguity, we will continue to use the notation f=⊗_vf_v to refer to the test function, which is defined as follows.
* Let f_∞ be defined as in §<ref>.
* For v∈Σ_F,, let m_v'=e_v(𝔐), and n_v=r_χ_v, the local exponent of χ_v (cf. §<ref>). Define a function on G(F_v), supported on Z(F_v)\ K_v[m_v'], by
f_v(z_vk_v;1)=(K_v[m_v'])^-1,
where K_v[m_v'] is the image of K_v[m_v'] in G(F_v). For g_v∈ G(F_v), define by
f_v(g_v)=1/|τ(χ_v)|^2∑_α∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×∑_β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×χ_v(α)χ_v(β)f_v(g_α,β,v;1),
where
τ(χ_v)=∑_α∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×ψ_v(αϖ_v^-n_v)χ_v(α)
is the Gauss sum relative to the additive character ψ_v, and
g_α,β,v:=[ 1 αϖ_v^-n_v; 1 ]g_v[ 1 βϖ_v^-n_v; 1 ].
Note that n_v=0 for almost all v∈Σ_F,. Hence, for all but finitely many v∈Σ_F,, the test function f_v(·)=f_v(·;1) (cf. (<ref>)) is supported in Z(F_v)\ K_v[m_v].
* Take f=⊗_v≤∞f_v as our test function into Theorem <ref>:
J_^,(f,0,χ)=J_^,(f,0,χ),
where 0=(s_0,s_0), with s_0:=2^-1exp(-2√(log C(π×χ))) (cf. §<ref>).
§.§ The Spectral Side
Similar to Theorem <ref>, we have
Let notation be as before. Then
𝒥_^(α,χ)≫_εT^-1/2-ε(|𝔐|Q)^-ε∑_π∈𝒜_0^χ_∞(T;𝔐)|L(1/2,π×χ)|^2,
where the implied constant depends only on F, ε, and c_v, C_v at v|∞, cf. §<ref>.
§.§ The Geometric Side
In this section we handle the geometric side
J_^,(f,0,χ)= J^_,(f,0,χ)+J^,+_,(f,0,χ)+J^,∧_,(f,0,χ)
+J^,_,(f,0,χ)+J^,2_,(f,0,χ).
§.§.§ Bounds of Irregular Orbital Integrals
The estimates from §<ref> and §<ref> remain valid with T≍ T, 𝒩_f replaced by 1, and [M,M'Q] replaced with |𝔐|. Specifically, Propositions <ref> and <ref>, and Lemmas <ref>, <ref> and <ref> become:
Let notation be as before. Then
J^_,(f,0,χ)≪ |𝔐|^1+εT^1/2+ε,
J^,∧_,(f,0,χ)≪ s_0^-1|𝔐|^1+εT^1/2+ε,
J^,+_,(f,0,χ)+J^,,1_,(f,0,χ)≪ T^ε|𝔐|^ε,
J^,,2_,(f,0,χ)≪ s_0^-1|𝔐|^1+εT^1/4+ε,
where the implied constant depends only on F, ε, and c_v, C_v at v|∞, cf. §<ref>.
§.§.§ Bounds of Regular Orbital Integrals
We need the following analogue of Theorem <ref> in §<ref>.
Let notation be as before. Then
J^,2_,(f,0,χ)≪ T^ε|𝔐|^εQ^1+ε,
where the implied constant depends on ε, F, c_v, and C_v, v|∞.
We also need the following analogue of Proposition <ref> in §<ref>.
Let v|𝔔. Then
ℰ_v(t)≪
n_v q_v^n_v/2-e_v(t)s_0 if e_v(t)≤ -1,
κ_vq_v^m_v'+e_v(t)/2 if e_v(t)≥ m_v'-n_v,e_v(t-1)=0,
0 otherwise,
where κ_v=n_v(e_v(t)+n_v-m_v'+1), and the implied constant is absolute.
Following the proof of Proposition <ref>, the local integral ℰ_v(t) becomes
1/|τ(χ_v)|^2∑_α,βχ(α)χ(β)∑_r_1, r_2∈ℤ q_v^-2r_1s_0-r_2s_01_Y_α,β,r_1,r_2,t∈ Z(F_v)K_v[m_v]f_v(Y_α,β,r_1,r_2,t;1),
where f_v(·;1) is defined by (<ref>), and Y_α,β,r_1,r_2,t is defined by
[ ϖ_v^r_2+αϖ_v^r_1+r_2-n_v (ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v; ϖ_v^r_1+r_2 1+βϖ_v^r_1+r_2-n_v ].
Note that 1_Y_α,β,r_1,r_2,t∈ Z(F_v)K_v[m_v]≠ 0 unless
ϖ_v^kY_α,β,r_1,r_2,t∈ K_v[m_v']
for some k∈ℤ. Similar to (<ref>), the constraint (<ref>) amounts to
2k+r_2+e_v(1-t)=0
k+r_1+r_2≥ m_v'
ϖ_v^k(ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)∈𝒪_v^×
ϖ_v^k(1+βϖ_v^r_1+r_2-n_v)∈𝒪_v^×
ϖ_v^k[(ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v]∈𝒪_v.
Then the estimate of ℰ_v(t) boils down to Proposition <ref> (with m_v replaced by m_v') if m_v'≥ n_v.
Now we assume that m_v'<n_v.
* Suppose k≥ 1. Then it follows from ϖ_v^k(1+βϖ_v^r_1+r_2-n_v)∈𝒪_v^× that k+r_1+r_2=n_v, which yields that r_1=n_v+k+e_v(1-t). From ϖ_v^k(ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)∈𝒪_v^× we have r_1+k≥ 0, which, in conjunction with the first constraint, leads to k+r_2≥ 0. Therefore,
1≤ k≤ -e_v(t-1)
r_1=n_v+k+e_v(1-t)
r_2=-2k-e_v(1-t).
* Suppose k≤ -1. Then it follows from ϖ_v^k(1+βϖ_v^r_1+r_2-n_v)∈𝒪_v^× that r_1+r_2=n_v, which contradicts k+r_1+r_2≥ m_v'. Hence, we must have k≤ 0.
r_1=n_v, r_2=0,
m_v'-n_v≤ k≤ -1,
e_v(1-t)=-2k,
α≡ -1 (mod ϖ_v^-k)
β≡ -1 (mod ϖ_v^-k)
Moreover, β is uniquely determined by αϖ_v^n_v. So
* Suppose k=0 in (<ref>), which implies that
r_2+e_v(1-t)=0
r_1+r_2≥ m_v'
min{r_2, r_1+r_2-n_v}=0
(ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v∈𝒪_v.
* Suppose r_2≥ 1. Then r_1+r_2=n_v. Since e_v(t)-r_1≥ -n_v, then e_v(t)+r_2≥ 0. So e_v(t)-e_v(1-t)≥ 0. In this case we have r_2=-e_v(t). Therefore, the contribution to ℰ_v(t) from this case is
ℰ^(1)_v(t):=q_v^-2n_vs_0-e_v(t)s_01_e_v(t)≤ -1/|τ(χ_v)|^2(K_v[m_v'])∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×
(ϖ_v^-e_v(t)+α)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v∈𝒪_vχ(α)χ(β).
Write t=ϖ_v^e_v(t)γ under the embedding F^×↪ F_v^×, where γ∈𝒪_v^×. Then the last sum over α and β becomes
∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×
(ϖ_v^-e_v(t)+α )(β+1)≡ -γ+ϖ_v^-e_v(t) (mod ϖ_v^n_v) χ(α)χ(β),
which, after a change of variables, is equal to
𝒥_v^(1)(t):=∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×
αβ≡ -γ+ϖ_v^-e_v(t) (mod ϖ_v^n_v) χ(α-ϖ_v^-e_v(t))χ(β-1).
As a special case of Proposition 2 on p.71 of <cit.> we have 𝒥_v^(1)(t)≪ n_vq_v^n_v/2, where the implied constant is absolute. Hence,
ℰ^(1)_v(t)≪ n_vq_v^m_v'-n_v/2-2n_vs_0-e_v(t)s_01_m_v'≤ n_v1_e_v(t)≤ -1.
* Suppose r_2=0. Then e_v(1-t)=0, r_1≥ n_v, and e_v(t)≥ r_1-n_v.
The contribution to ℰ_v(t) from this case is
ℰ^(2)_v(t):=∑_r_1=n_v^e_v(t)+n_vq_v^-2r_1s_01_e_v(t)≥ r_1-n_v/|τ(χ_v)|^2(K_v[m_v'])·𝒥_v^(2)(r_1,t),
where
𝒥_v^(2)(r_1,t):=∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×
(1+αϖ_v^r_1-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v∈𝒪_vχ(α)χ(β).
By Lemma <ref> below (in conjunction with r_1≥ n_v) the sum ℰ^(2)_v(t) is
≪ n_v(e_v(t)+1)1_e_v(t)≥ e_v(1-t)=0/|τ(χ_v)|^2(K_v[m_v'])· q_v^n_v+e_v(t)/2.
Since |τ(χ_v)|^2(K_v[m_v])≫ q_v^n_v-m_v', then
ℰ^(2)_v(t)≪ n_v^2(e_v(t)+1)1_e_v(t)≥ e_v(1-t)=0q_v^m_v'-n_v/2+e_v(t)/2.
Then Proposition <ref> follows from (<ref>) and (<ref>).
| http://arxiv.org/abs/2307.05934v1 | 20230712055942 | Sem-CS: Semantic CLIPStyler for Text-Based Image Style Transfer | ["Chanda Grover Kamra", "Indra Deep Mastan", "Debayan Gupta"] | cs.CV | ["cs.CV"] |
Chanda Grover Kamra, Indra Deep Mastan, Debayan Gupta
August 12, 2023
CLIPStyler demonstrated image style transfer with realistic textures using only a style text description (instead of requiring a reference style image). However, the ground semantics of objects in the style transfer output is lost due to style spill-over on salient and background objects (content mismatch) or over-stylization. To solve this, we propose Semantic CLIPStyler (Sem-CS), that performs semantic style transfer.
Sem-CS first segments the content image into salient and non-salient objects and then transfers artistic style based on a given style text description. The semantic style transfer is achieved using global foreground loss (for salient objects) and global background loss (for non-salient objects).
Our empirical results, including DISTS, NIMA and user study scores, show that our proposed framework yields superior qualitative and quantitative performance. Our code is available at https://github.com/chandagrover/sem-csgithub.com/chandagrover/sem-cs.
Object detection, Salient, CLIP, Style Transfer, Semantics
§ INTRODUCTION
Image style transfer <cit.> aims to synthesize new images by transferring style features such as colour and texture patterns to the content image. Image style transfer can be classified into photo-realistic style transfer <cit.> and artistic style transfer <cit.> based on the input content image and style image. One problem in image style transfer is the fact that a user needs to find a good reference image with the desired style.
Recently, CLIPStyler <cit.> proposed a novel artistic style transfer approach that uses a text condition to perform style transfer without a reference style image. However, it suffers from the over-styling problem, which results in the distortion of content features in the output image (Fig. <ref>-first row).
Another challenge in style transfer is when style spillover between dissimilar objects occurs, also known as the content mismatch problem <cit.> (Fig. <ref>-first-second row). Content mismatch reduces the visual quality of the style transfer output, and it is hard to avoid when the semantic objects in the style and the content features are of different types and numbers <cit.>. A good style transfer approach minimizes both content mismatch and over-styling.
Generative Artisan (Gen-Art) <cit.> addresses the over-styling problem of CLIPStyler <cit.> through an FCN semantic segmentation network <cit.>. They control the degree of image style transfer in different semantic chunks. However, their supervised approach to extracting the semantic parts of the content image lacks generalizability. E.g., their FCN semantic segmentation network only considers 21 classes, which is insufficient to represent real-world images. Also, they do not address content mismatch (see Fig. <ref>).
In this paper, we propose Semantic CLIPStyler (Sem-CS), which addresses the content mismatch and over-styling problems of text-condition based style transfer. We use the deep spectral segmentation network <cit.>, which extracts salient and non-salient objects of the content image in an unsupervised manner. As such, our method generalizes well to real-world images.
Sem-CS applies styles on salient or non-salient objects based on the text conditions. The key idea is to perform semantic style transfer using the proposed global foreground and background loss. Sem-CS also achieves controllable generation in transferring texture information for multiple text conditions. Our major contributions are as follows:
* We propose a novel framework (Sem-CS) to perform style transfer with a text condition (Algorithm <ref>).
* We propose global foreground and global background loss to supervise style features semantically on the output (Sec. <ref>).
* We provide a reference-based quality assessment using DISTS <cit.> as well as a no-reference quality assessment using NIMA <cit.> to show Sem-CS outperforms baselines (Table <ref>).
§ OUR METHOD
This section describes our framework. It has two major phases: Salient Object Detection and Semantic Style Transfer. We illustrate Sem-CS in Fig. <ref> and Algorithm <ref> formally describes the proposed framework. The two phases of Sem-CS are described as follows.
Salient Object Detection: In the first phase, we compute the masks for salient objects in the content image; see Algorithm <ref>, lines 2-4. (The mask for salient objects is computed in an unsupervised setting.) First, we compute the affinity matrix (W) of the content image I_C from the attention block of the last layers of the feature extractor ϕ. Second, we find the eigenvectors of the Laplacian of the affinity matrix. Finally, we extract the mask from the eigenvector y_1.
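For illustration, a minimal sketch of this phase is given below; the feature extractor, the affinity construction from its attention block, and the thresholding rule are stated here as simplifying assumptions, and the actual implementation in our repository may differ in these details.

import numpy as np
from scipy.linalg import eigh

def salient_mask(features):
    """Unsupervised salient-object mask from patch features of shape (H*W, D).

    Steps: build an affinity matrix W from feature similarities, form the
    normalized graph Laplacian, take its Fiedler eigenvector y_1, and
    threshold y_1 to obtain a binary foreground mask.
    """
    # Affinity from non-negative cosine similarities between patch features
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-8)
    W = np.clip(f @ f.T, 0.0, None)

    # Normalized Laplacian L = I - D^{-1/2} W D^{-1/2}
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1) + 1e-8)
    L = np.eye(W.shape[0]) - (d_inv_sqrt[:, None] * W) * d_inv_sqrt[None, :]

    # Eigenvector y_1 associated with the second-smallest eigenvalue
    _, vecs = eigh(L)
    y1 = vecs[:, 1]

    # Threshold the eigenvector to split salient (foreground) from background
    return (y1 > y1.mean()).astype(np.float32)

The resulting mask is reshaped to the image resolution and used as Mask in the losses defined below.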
Semantic Style Transfer: In the second phase, we train Sem-StyleNet S to transfer style features to the salient objects and background objects based on the text conditions (Fig. <ref>). We use ResNet50 with softmax3d <cit.> for the image encoder to make the stylized output more robust. We propose global foreground loss and global background loss for style supervision on salient objects and the background of the output image, respectively. These are:
Global Foreground Loss. This ensures that relevant style text applies to the salient objects present in the output. To maintain the diversity of generated stylized outputs, directional CLIP loss <cit.> is computed instead of global CLIP loss <cit.> by aligning the CLIP-space direction between the text-image pairs of input and output. Foreground text directional loss (Δ fg_T) is defined to be the difference between source text embedding (t_src) and foreground style text embedding (t_fg) as described in Eq. <ref>.
Δ fg_T = E_T(t_fg) - E_T(t_src)
Here, E_T is the CLIP text-encoder and t_src is set to "Photo". Foreground image directional loss (Δ fg_I) is computed between embeddings of salient objects and style transfer output. Given the content image I_C and Mask, Hadamard product ⊙ is computed between Mask and S(I_C) to extract features for salient objects as I_fg = Mask ⊙ S(I_C). Next, Δ fg_I is computed as described in Eq. <ref>.
Δ fg_I = E_I(I_fg) - E_I(I_C)
E_I is the CLIP image encoder. Finally, Global foreground loss (ℒ_FGlob) is computed by taking cosine similarity between CLIP-Space direction of the foreground of image and style texts (Eq. <ref>).
ℒ_FGlob = 1- Δ fg_I . Δ fg_T/|Δ fg_I||Δ fg_T|
Here, one minus the cosine similarity measures the distance between the CLIP-space image and text directions. In other words, the global foreground loss minimizes the distance between the image direction and the text direction for the salient objects.
Global Background Loss. This is computed for style feature supervision of the output image background. Similar to global foreground loss, we compute background text directional loss (Δ bg_T) for style background as given in Eq. <ref>.
Δ bg_T = E_T(t_bg) - E_T(t_src)
Here, t_bg is the style text condition for the background. Also, background image directional loss Δ bg_I is computed as shown in Eq. <ref>. We take Hadamard product between the background mask and generated image I_bg = (1-Mask) ⊙ I_O to extract background features. Next, Δ bg_I is computed as below in Eq. <ref>
Δ bg_I = E_I(I_bg) - E_I(I_C)
Finally, global background loss ℒ_BGlob is computed to minimize the distance between image and text directional losses for background objects as described in Eq. <ref>.
ℒ_BGlob = 1- Δ bg_I . Δ bg_T/|Δ bg_I||Δ bg_T|
Here, global background loss ℒ_BGlob helps to perform controllable style transfer for background objects in the style transfer outputs.
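As a schematic sketch of how these two losses can be computed (the encoder calls follow the public CLIP interface, while the mask handling, normalization, and loss combination shown here are simplifying assumptions rather than the exact training code):

import torch.nn.functional as F

def directional_loss(clip_model, img_src, img_out, tok_src, tok_style, mask=None):
    """1 - cosine similarity between CLIP-space image and text directions.

    A mask selects the region (salient objects or background) whose pixels
    contribute to the output embedding, mimicking the Hadamard products
    I_fg = Mask * S(I_C) and I_bg = (1 - Mask) * S(I_C).
    """
    if mask is not None:
        img_out = img_out * mask
    delta_t = clip_model.encode_text(tok_style) - clip_model.encode_text(tok_src)
    delta_i = clip_model.encode_image(img_out) - clip_model.encode_image(img_src)
    return (1.0 - F.cosine_similarity(delta_i, delta_t, dim=-1)).mean()

# Semantic style objective: foreground text on salient objects, background text elsewhere
# loss_sem = directional_loss(clip, I_C, S_out, tok_src, tok_fg, mask) + \
#            directional_loss(clip, I_C, S_out, tok_src, tok_bg, 1.0 - mask)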
Other Loss. We also add content loss and a total variation regularization loss to our proposed loss for style transfer <cit.>.
§ EXPERIMENTAL RESULTS
Fig. <ref> shows that Sem-CS preserves the semantics of objects in output images while minimizing over-stylization and content mismatch. For example, consider the first row on the left side of Fig. <ref>. It can be observed that the CLIPStyler <cit.> and Generative Artisan <cit.> outputs are over-stylized (the "Acrylic" style spills both below the bridge and onto the sky), and the content features of the water are lost. Sem-CS (ours) preserves the semantics of the bridge. Similarly, in the first row on the right side, the CLIPStyler <cit.> and Generative Artisan <cit.> outputs suffer from content mismatch as the "Snowy" style is applied to both the bicycle and the background. Sem-CS performs style transfer while minimizing the content mismatch effects of the "Snowy" style feature.
We evaluated Sem-CS framework with DISTS <cit.>, NIMA <cit.>, and a User Study (Table <ref>). We describe the quantitative results as follows.
DISTS <cit.> Scores. DISTS <cit.> is a reference-based image quality assessment that measures how well object structure is preserved in the presence of texture transfer in the stylized output. Since DISTS <cit.> may not capture all aspects of style transfer quality, such as semantic coherence, we complement it with NIMA <cit.> scores and also conduct a user study (Table <ref>).
NIMA <cit.> Scores. NIMA <cit.> is a no-reference-based image quality metric that predicts the quality of distribution ratings with a significant correlation to ground truth ratings.
Table <ref> reports the average scores of top-100 output images.
User Study.
We conducted a user study to validate that the semantics of objects are preserved while transferring the style texts onto the content image (Table <ref>). We randomly sampled 5 groups of 15 images from the outputs produced above, with ten images from single-style-text and five images from double-style-text stylized outputs. All 5 × 15 stylized outputs were distributed anonymously and randomly to 40 participants. They were asked to observe the stylized results from the different methods and vote for the image that looks better in quality and matches the style text.
Table <ref> shows the percentage vote for each method. Sem-CS outperforms baseline methods.
Overall, we find that the Sem-CS scores are higher than those of the baseline methods CLIPStyler <cit.> and Generative Artisan <cit.>. This indicates that adding the global foreground and background losses improves the image quality of the stylized output. Sem-CS minimizes content mismatch and prevents distortion of the objects present in the output image when supervising style features.
Ablation Studies. Fig. <ref> illustrates ablation studies for style transfer using the double-style-text condition. Double style texts are challenging because style supervision is required for both the salient objects and the background of the image. Therefore, double style texts require more controllable generation capabilities for style transfer. We evaluated the Sem-CS framework for double style texts with NIMA <cit.> and DISTS <cit.> scores on 100 stylized outputs. Table <ref> shows that Sem-CS outperforms Generative Artisan <cit.>. Also, note that the user study scores of the style transfer outputs for the double-text condition are higher for Sem-CS.
§ CONCLUSION
We proposed Semantic CLIPStyler (Sem-CS) to preserve the semantics of objects and prevent over-stylization when performing text-based image style transfer. We showed that style transfer can be done semantically by training the StyleNet with the proposed global background and foreground losses. Our quantitative and qualitative experimental results showed that Sem-CS achieves superior stylized output with text descriptions. The scope of future work extends to applying different text conditions to more than one object present in the content image. For this, we aim to improve the segmentation mask of the content image.
| http://arxiv.org/abs/2307.04432v1 | 20230710091220 | Density-dependent relativistic mean field approach and its application to single-$Λ$ hypernuclei in Oxygen isotopes | ["Shi Yuan Ding", "Wei Yang", "Bao Yuan Sun"] | nucl-th | ["nucl-th"] |
Density-dependent relativistic mean field approach and its application to single-Λ hypernuclei in Oxygen isotopes
This work was partly supported by the Fundamental Research Funds for the Central Universities, Lanzhou University under Grant No. lzujbky-2022-sp02 and lzujbky-2023-stlt01, the National Natural Science Foundation of China under Grant No. 11875152 and No. 12275111, and the Strategic Priority Research Program of Chinese Academy of Sciences under Grant No. XDB34000000. The authors also want to thank the computation resources provided by the Supercomputing Center of Lanzhou University.
Shi-Yuan Ding^1,2, Wei Yang^1,2, Bao-Yuan Sun^1,2 (corresponding author: [email protected])
August 12, 2023
^1MOE Frontiers Science Center for Rare Isotopes, Lanzhou University, Lanzhou 730000, China
^2School of Nuclear Science and Technology, Lanzhou University, Lanzhou 730000, China
The in-medium features of the nuclear force, which includes both nucleon-nucleon (NN) and hyperon-nucleon (Λ N) interactions, impact the description of single-Λ hypernuclei. As the mass number or isospin of hypernuclei varies, such effects can be unveiled by analyzing the systematic evolution of the bulk and single-particle properties. From a density-dependent meson-nucleon/hyperon coupling perspective, a new Λ N effective interaction in the covariant density functional (CDF) theory, namely DD-LZ1-Λ1, is obtained by fitting the experimental data of Λ separation energies for several single-Λ hypernuclei. It is then adopted to study the structure and transition properties of single-Λ hypernuclei in Oxygen isotopes, in comparison with several selected CDF Lagrangians. A discrepancy is observed explicitly in the isospin evolution of the Λ1p spin-orbit splitting with various effective interactions, ascribed to the divergence of their meson-hyperon coupling strengths with increasing density. In particular, the density-dependent CDFs introduce an extra contribution that enhances the isospin dependence of the splitting, which originates from the rearrangement terms of the Λ self-energies. In addition, the characteristics of hypernuclear radii are studied along the isotopic chain. Owing to the impurity effect of the Λ hyperon, a size shrinkage is observed in the matter radii of hypernuclei as compared to their cores of normal nuclei, and its magnitude is further shown to correlate with the incompressibility of nuclear matter. Besides, there exists a sizable model-dependent trend in how the Λ hyperon radii evolve with the neutron number, which is determined partly by the in-medium NN interactions as well as the core polarization effects.
21.80.+a,
13.75.Ev,
21.30.Fe,
21.60.Jz
§ INTRODUCTION
The discovery of hyperon, particles containing strange quarks, in 1953 sparked strong interest among experimental and theoretical physicists <cit.>. The ability of hyperons to enter the nucleus and form a system of hypernuclei makes them sensitive probes for studying the structure and specific nuclear features. The studies on hyperon behavior in the nucleus help us to understand the baryon-baryon interaction in nuclear medium and its effects on nuclear properties <cit.>. In addition, hyperons are thought to be produced inside neutron stars <cit.>. The link between hypernucleus and neutron star properties benefits our comprehension of the state of matter in extreme environments, as well as strangeness-bearing nuclear force at high densities. In recent decades, a wealth of hypernuclear data has been generated through induced reactions of meson and electron beams at various radioactive beam facilities, including the Japan Proton Accelerator Research Complex (J-PARC) <cit.>, the Thomas Jefferson National Accelerator Facility (JLab) <cit.>, and the Facility for Antiproton and Ion Research (FAIR) <cit.>. These advanced facilities have played a pivotal role in advancing our understanding of strangeness in nuclear physics. Notably, single-Λ hypernuclei have been the most extensively studied, with experimental data covering hypernuclei from ^3_ΛH to ^208_ΛPb in various laboratories <cit.>.
When a Λ hyperon enters a nucleus, various phenomena can be observed. For instance, in ^7_ΛLi, it has been found that the size of the ^6Li core is smaller than that of the free-space ^6Li nucleus, as suggested by the measurement of the γ-ray transition probability E2(5/2^+→1/2^+) in ^7_ΛLi <cit.>. In addition, in ^13_ΛC, it is hinted that the Λ spin-orbit splitting is much smaller than the nucleon's <cit.>. Recently, the potential for producing neutron-rich hyperfragments at high-intensity heavy-ion accelerator facilities has been discussed <cit.>. The directed flow of hypernuclei (^3_ΛH and ^4_ΛH) has just been observed at RHIC for the first time in heavy-ion collisions, providing insights into hyperon-nucleon interactions under finite pressure <cit.>. These advances highlight the promising prospects for investigating hypernuclear structures using the forthcoming high-intensity heavy-ion accelerator facility HIAF <cit.>. To provide accurate predictions for these experiments, researchers have performed detailed theoretical work on observables such as the hypernuclear binding energy <cit.>, spin-orbit splitting <cit.>, and hyperon and hypernuclear matter radii <cit.>. Overall, these efforts aim to provide valuable insights into the behavior of hypernuclei and to deepen our understanding of in-medium baryon interactions.
Due to their ability to provide a self-consistent and unified description of almost all nuclei on the nuclear chart, both non-relativistic and relativistic mean-field theories are widely used in the calculation of finite nuclei and nuclear matter, and have been extended to describe hypernuclear systems with strange degrees of freedom during the development of theoretical models <cit.>. As a key model utilized in this work, the relativistic mean-field theory has been extensively developed to study hypernuclear properties such as the hyperon separation energy <cit.>, spin-orbit splitting <cit.>, hyperon halos <cit.>, hypernuclear deformation <cit.>, cluster structure <cit.> and drip lines <cit.>. While most theoretical models have primarily emphasized nonlinear self-coupling interactions for studying hypernuclei, there has been a recent study that explores the effective interactions for single-Λ hypernuclei within the density-dependent relativistic mean-field (DDRMF) model <cit.>. With three distinct fitting approaches, they propose six new sets of effective Λ N interactions and uncover a significant linear correlation between the ratios R_σ and R_ω, representing scalar and vector coupling strengths, respectively, between these effective Λ N and NN interactions.
Recently, a new type of density-dependent relativistic mean-field Lagrangian, DD-LZ1, has been proposed, inspired by the restoration of pseudo-spin symmetry (PSS) and nuclear medium effects <cit.>. This new effective Lagrangian has produced satisfactory results in describing the properties of nuclear matter and finite nuclei. With unique density-dependent form, DD-LZ1 eliminates the spurious shell closures that appeared in previous RMF calculations, and reasonably restores the PSS of high orbital angular momentum near the Fermi energy <cit.>. Applications with this new RMF Lagrangian has been performed for several nuclear many-body characteristics, in both finite nuclei with mass ranging from light to superheavy, and neutron star properties with density ranging from low to high. For instance, a comprehensive macroscopic-microscopic model was developed to evaluate the total energies for even-even nuclei with proton numbers ranging from 8 to 110 <cit.>. Even with the appearance of hyperon <cit.>, larger maximum masses of neutron stars could be obtained with DD-LZ1 than with several other RMF parameter sets, providing the possibility that the secondary object observed in GW190814 is a neutron star <cit.>. Utilizing the Thomas-Fermi approximation, different microscopic structures of nonuniform nuclear matter were calculated for the crust of neutron stars and a unified equation of state was established in a vast density range <cit.>. The different density-dependent behaviors of meson-nucleon couplings impact the microscopic structures of neutron star matter with DD-LZ1, affect correspondingly the description on various physical processes and evolutions of neutron stars.
Apart from dealing with the different nuclear medium effects caused by the interactions themselves, the evolution of isospin also leads to significant changes in the in-medium effects of hypernuclei, thereby affecting the description of their structural properties. In recent years, a series of refined theoretical studies have been conducted on hypernuclei in different isotopic chains using various interaction models. For instance, the no-core shell model has been employed to investigate the systematic evolution of the ground and excited state energies in the Helium and Lithium hyperisotopes <cit.>. The antisymmetrized molecular dynamics method has been applied to explore the low-lying level structure of hypernuclei in the Beryllium hyperisotopes <cit.>. The multidimensionally constrained RMF model has been used to study the shape evolution of hypernuclei in the Argon hyperisotopes <cit.>. The beyond mean-field approach has been utilized to discuss the evolution of p-state energies and composition in the Carbon hyperisotopes <cit.>, as well as the hyperon halo structures in the Boron and Carbon hyperisotopes <cit.>. These studies exhibit the significant role of isospin in the description of hypernuclear structure. In fact, with the development of hypernuclear spectroscopy, new experiments related to hypernuclei have been initiated, such as the planned measurements in the J-PARC project, aiming to study the Λ hyperon binding energies in neutron-rich hyperisotopes of ^124-136_ΛSn <cit.>. These experiments will provide crucial information about the properties of hypernuclei associated with various isospin circumstances.
In view of the essential role of nuclear in-medium effects on hypernuclear structure and their relevance to the isotopic evolution, we aim to further expand the density-dependent RMF model to investigate the structure of single-Λ hypernuclei in Oxygen hyperisotopes. First, we will introduce the theoretical framework of the hypernuclear RMF approach in Sec. <ref>. Then, the induced Λ-nucleon (Λ N) effective interactions will be determined by fitting Λ separation energies to the experimental data for DD-LZ1 Lagrangian. To give the results and discussion, the influence of nuclear in-medium effects will be studied in Sec. <ref>, on the isospin dependence of hypernuclear bulk properties, hyperon spin-orbit splitting and matter/hyperon radius. Finally, a summary will be given in Sec. <ref>.
§ DDRMF APPROACH FOR SPHERICAL SINGLE-Λ HYPERNUCLEI
To describe single-Λ hypernuclei within the meson-exchange type of relativistic mean-field theory, the covariant Lagrangian density serves as the foundation, which is
ℒ = ℒ_B + ℒ_φ + ℒ_I,
where the terms of free fields read as
ℒ_B= ∑_Bψ̅_B(iγ^μ∂_μ-M_B)ψ_B,
ℒ_φ= +1/2∂^μσ∂_μσ-1/2m_σ^2σ^2-1/4Ω^μνΩ_μν+1/2m_ω^2ω^μω_μ
-1/4R⃗^μν·R⃗_μν+1/2m_ρ^2ρ⃗^μ·ρ⃗_μ-1/4F^μνF_μν,
where the index B (B') represents nucleon N or hyperon Λ, with its sum ∑_B over nucleon N and hyperon Λ. The masses of the baryon and mesons are given by M_B and m_ϕ (ϕ=σ, ω^μ, ρ⃗^μ), while Ω^μν, R⃗^μν and F^μν are the field tensors of vector mesons ω^μ, ρ⃗^μ and photon A^μ, respectively. The interaction between nucleon (hyperon) and mesons (photon) is involved by the Lagrangian ℒ_I,
ℒ_I=∑_Bψ̅_B (-g_σ Bσ-g_ω Bγ^μω_μ)ψ_B
+ψ̅_N (-g_ρ Nγ^μτ⃗·ρ⃗_μ-eγ^μ1-τ_3/2A_μ)ψ_N.
Here the Λ hyperon (namely ψ_B taken as ψ_Λ), which is charge neutral with isospin zero, only takes part in interactions mediated by the isoscalar mesons. The nuclear in-medium effects are introduced phenomenologically via the coupling strengths g_ϕ B (g_ϕ N), which are taken as baryon-density-dependent functions in the density-dependent RMF (DDRMF) approach to define the strengths of the different meson-baryon (meson-nucleon) couplings <cit.>.
The effective Hamiltonian operator for Λ hypernuclei can be obtained by performing the general Legendre transformation on the Lagrangian density ℒ in Eq. (<ref>), and it can be written as the sum of the kinetic energy operator T̂ and the potential energy operator V̂_φ,
Ĥ≡ T̂+∑_φV̂_φ
= ∫ dx ∑_Bψ̅_B(x)(-iγ·∇+M_B) ψ_B(x)
+ 1/2∫ dx ∑_B∑_φ[ψ̅_B𝒢_φ Bψ_B]_x D_φ(x,x') [ψ̅_B'𝒢_φ B'ψ_B']_x',
here x is the four-vector (t,x). Correspondingly, we define the interaction vertices 𝒢_φ B(x) for the various meson (photon)-nucleon (hyperon) coupling channels, which for the isoscalar σ and ω mesons are represented as
𝒢_σ B(x) = +g_σ B(x),
𝒢_ω B^μ(x) = +g_ω B(x)γ^μ.
Notably, both the nucleons and the Λ hyperon can contribute to the isoscalar meson fields. However, for the remaining isovector meson and photon fields, the interaction vertices are expected to connect solely to the nucleons, owing to the isoscalar and charge-neutral nature of the Λ hyperon,
𝒢_ρ N^μ(x) = +g_ρ N(x) γ^μτ⃗,
𝒢_A N^μ(x) = +eγ^μ1-τ_3/2.
As the retardation effects could be neglected in the majority of RMF models, the meson (photon) propagators D_ϕ (D_A) read as
D_ϕ(x,x')=1/4πe^-m_ϕ|x-x'|/|x-x'|,
D_A(x,x')=1/4π1/|x-x'|.
The baryon field operator ψ_B in the Hamiltonian (<ref>) can be second quantized in the positive-energy space under the no-sea approximation as
ψ_B(x) =∑_if_i(x)e^-iϵ_i tc_i.
Here, f_i represents the Dirac spinor and c_i denotes the annihilation operator for the state i. Accordingly, the energy functional E is determined by evaluating the expectation value of the Hamiltonian with respect to a trial Hartree-Fock ground state |Φ_0⟩,
E = ⟨Φ_0|Ĥ| Φ_0⟩ = ⟨Φ_0|T̂| Φ_0⟩+∑_φ⟨Φ_0|V̂_φ| Φ_0⟩.
Then the binding energy of a Λ hypernucleus is written as
E= ∑_B(E_kin,B + E_σ,B + E_ω,B) + E_ρ,N+E_e.m. + E_c.m. + E_pair,
where the kinetic energy functional of baryons is shown by E_kin,B. The contributions of the potential energy functional from σ and ω are denoted by the variables E_σ,B and E_ω,B. Additionally, E_ρ,N and E_e.m. are used to represent the contributions from ρ and A, respectively. The center-of-mass adjustment to the mean-field is represented by the term E_c.m., while E_pair takes into account the contribution from nucleon pairing correlations <cit.>.
The role of deformation in single-Λ hypernuclei has been discussed in various density functional models <cit.>, and it may generate non-negligible effects on the single-particle energies, as in the Carbon hyperisotopes <cit.>. To describe single-Λ hypernuclei, in particular the Oxygen hyperisotopes discussed hereafter, we restrict the RMF approach to spherical symmetry. Correspondingly, the Dirac spinor f_i(x) of the nucleon or hyperon in Eq. (<ref>) has the following form:
f_nκ m(x) = 1/r([ iG_a(r)Ω_κ m(ϑ,φ); F_a(r)Ω_-κ m(ϑ,φ) ]),
where the index a consists of the set of quantum numbers (nκ) = (njl), and Ω_κ m is the spherical spinor. Meanwhile, the propagators can be expanded in terms of spherical Bessel and spherical harmonic functions as
D_ϕ(x,x^') = ∑_L=0^∞∑_M=-L^L(-1)^MR^ϕ_LL( r, r^') Y_LM(Ω)Y_L-M(Ω^'),
where Ω=(ϑ,φ), and R_LL contains the modified Bessel functions I and K as
R_L L^ϕ(r, r^') =√(1/rr^') I_L+1/2(m_ϕr_<) K_L+1/2(m_ϕr_>),
R_L L^A(r, r^') =1/2L+1r_<^L/r_>^L+1.
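As a concrete numerical illustration (not taken from the paper's code), the radial kernel R^ϕ_LL above can be evaluated directly with the modified Bessel functions available in SciPy; the meson mass below is only an indicative σ-meson value.

```python
# Sketch: evaluate R^phi_{LL}(r, r') = sqrt(1/(r r')) I_{L+1/2}(m r_<) K_{L+1/2}(m r_>),
# the multipole-expanded Yukawa propagator kernel entering the self-energies.
import numpy as np
from scipy.special import iv, kv  # modified Bessel functions I_v, K_v

def radial_kernel(r, rp, m_phi, L):
    """Meson propagator kernel for multipole L; r, rp in fm, m_phi in fm^-1."""
    r_less, r_greater = min(r, rp), max(r, rp)
    return np.sqrt(1.0 / (r * rp)) * iv(L + 0.5, m_phi * r_less) * kv(L + 0.5, m_phi * r_greater)

# Example: an indicative sigma-meson mass of ~550 MeV, i.e. m/(hbar c) ~ 2.79 fm^-1.
hbar_c = 197.327  # MeV fm
m_sigma = 550.0 / hbar_c
print(radial_kernel(1.0, 2.0, m_sigma, L=0))  # only the L=0 term enters Eq. above
```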
In the DDRMF approach, the meson-baryon coupling strengths are taken as functions of the baryon density ρ_b, written as
g_ϕ B(ρ_b)=g_ϕ B(0) f_ϕ B(ξ) or
g_ϕ B(ρ_b)=g_ϕ B(0) e^-a_ϕ Bξ,
where ξ=ρ_b/ρ_0 with ρ_0 the saturation density of nuclear matter, and
f_ϕ B(ξ)=a_ϕ B1+b_ϕ B(ξ+d_ϕ B)^2/1+c_ϕ B(ξ+d_ϕ B)^2.
The free coupling strength at ρ_b=0 is represented by g_ϕ B(0) in the expressions above. To keep the variational self-consistency between the energy density functional and the single-particle properties, extra terms in the baryon self-energies, namely the rearrangement terms, occur due to the density dependence of the coupling strengths. The single-particle (nucleon or hyperon) properties can then be determined by solving the Dirac equation,
ε_a,B[ G_a,B(r); F_a,B(r) ] = [ Σ_+^B(r) -d/dr+κ_a,B/r; d/dr+κ_a,B/r -[2M_B-Σ_-^B(r)] ][ G_a,B(r); F_a,B(r) ].
Here the self-energies Σ_±^B=Σ_0,B±Σ_S,B are composed of the vector and scalar terms. The scalar self-energy is Σ_S,B = Σ_S,B^σ, and the time component of the vector one reads
Σ_0,B(r) = ∑_ϕΣ_0,B^ϕ(r)+Σ_R(r),
where ϕ=ω, ρ for nucleons and ϕ=ω for the Λ hyperon. The self-energies of the nucleon or hyperon thus include a scalar part Σ_S,B and a vector part Σ_0,B, to which the coupling of the isoscalar mesons contributes as follows,
Σ_S,B^σ(r) =-g_σ B(r)∑_B^'∫ r^'2dr^' g_σ B^'(r^')ρ_s,B^'(r^')R^σ_00(r,r^'),
Σ_0,B^ω(r) =+g_ω B(r)∑_B^'∫ r^'2dr^' g_ω B^'(r^')ρ_b,B^'(r^')R^ω_00(r,r^').
Here, ρ_s,B and ρ_b,B represent the scalar and baryon densities, respectively <cit.>. Additionally, the rearrangement term Σ_R appears in the DDRMF approach; it contains a summation over all baryons for the isoscalar channels ϕ=σ,ω, but only over the nucleons for the isovector one. For example, the contribution from the σ-S coupling reads
Σ_R,σ(r)=∑_B1/g_σ B∂ g_σ B/∂ρ_b ρ_s,BΣ_S,B^σ(r).
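To make the structure of the radial Dirac equation above more tangible, the following is a deliberately crude Python sketch that diagonalizes it on a box mesh, with illustrative Woods-Saxon-shaped self-energies standing in for the self-consistent meson fields. All numerical values are placeholders, and the naive central difference can introduce spurious states, so this is only a structural sketch and not the DDRMF solver used in the paper.

```python
# Schematic sketch (not the self-consistent DDRMF code): diagonalize the radial
# Dirac equation for (G, F) on a box mesh with illustrative self-energies.
import numpy as np

hbar_c = 197.327           # MeV fm
M = 939.0                  # nucleon mass (MeV); use ~1115.7 MeV for the Lambda
kappa = -1                 # s_1/2 channel (kappa = -(j+1/2) for j = l+1/2)
N, R_box = 400, 20.0       # mesh points and box radius (fm), as quoted in the text
r = np.linspace(R_box / N, R_box, N)
dr = r[1] - r[0]

def woods_saxon(depth, R0=3.0, a=0.6):
    # Placeholder radial shape; a real calculation uses the self-consistent fields.
    return depth / (1.0 + np.exp((r - R0) / a))

Sigma_plus = woods_saxon(-60.0)    # Sigma_0 + Sigma_S (MeV), illustrative only
Sigma_minus = woods_saxon(700.0)   # Sigma_0 - Sigma_S (MeV), large and positive inside

# Antisymmetric central-difference derivative with box (Dirichlet) boundaries.
D = np.zeros((N, N))
for i in range(N):
    if i > 0:
        D[i, i - 1] = -1.0
    if i < N - 1:
        D[i, i + 1] = 1.0
D *= hbar_c / (2.0 * dr)
K = np.diag(hbar_c * kappa / r)

# Block matrix of the radial Dirac operator; symmetric by construction.
H = np.block([
    [np.diag(Sigma_plus), -D + K],
    [D + K, -np.diag(2.0 * M - Sigma_minus)],
])
eps = np.linalg.eigvalsh(H)
# Candidate bound states lie between the potential minimum and zero; spurious
# states from the crude discretization may also appear in this window.
bound = eps[(eps > Sigma_plus.min()) & (eps < 0.0)]
print("candidate bound single-particle energies (MeV):", np.round(bound[:5], 2))
```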
§ RESULTS AND DISCUSSION
In recent years, there has been extensive theoretical research on hypernuclei, particularly focusing on the simplest single-Λ hypernuclei, using RMF and RHF theories. In this section, we aim to extend the effective interaction DD-LZ1 <cit.>, which has been proven successful and promising in determining nuclear structure properties in both bulk and single-particle aspects, to incorporate the Λ hyperon within the framework of the RMF model. To give a comparative study and illustrate the role of nuclear in-medium effects, the calculations with DD-LZ1 are accompanied by several existing effective Λ N interactions within CDF models. These interactions have been extended to incorporate the degrees of freedom of the Λ hyperon and have yielded many successful findings in the study of hypernuclear structure and the properties of dense stars. In detail, the density-dependent RMF effective interactions DD-LZ1 <cit.>, PKDD <cit.>, DD-ME2, TW99, DDV <cit.>, the density-dependent RHF (DDRHF) effective interactions PKO1, PKO2, PKO3 <cit.>, and the nonlinear RMF (NLRMF) effective interactions NL-SH <cit.> and PK1 <cit.> were selected. In these CDF functionals, the ω-tensor coupling, which has been proved essential in reducing the Λ spin-orbit splitting in hypernuclei <cit.>, is ignored. The Dirac equation is solved in a radial box of size R=20 fm with a step of 0.1 fm. For open-shell hypernuclei, we employ the BCS method to account for pairing correlations. As the strength of hyperon pairing correlations remains uncertain and may become essential in multi-Λ hypernuclei, our current work solely considers pairing correlations between nn and pp pairs by using the finite-range Gogny force D1S <cit.>; see Refs. <cit.> for details. In addition, the blocking effect is taken into account for the last valence nucleon or hyperon, with a detailed description given in Ref. <cit.>.
§.§ Density dependence of Λ N effective interaction
For the theoretical study of hypernuclear structure, the Λ N interaction must be determined first. Since the Λ hyperon is an electrically neutral particle with isospin zero, our focus lies on the coupling strengths of the isoscalar-scalar σ meson and the isoscalar-vector ω meson with the Λ hyperon. For convenience, we introduce the ratios of the coupling strengths between the meson-hyperon and meson-nucleon channels, g_ϕΛ/g_ϕ N. According to the naïve quark model <cit.>, we fix the ratio of the isoscalar-vector meson coupling strengths g_ωΛ/g_ω N to 0.666, while the ratio of the isoscalar-scalar ones g_σΛ/g_σ N is obtained by reproducing the experimental Λ hyperon separation energies B_Λ of ^16_ΛO, ^40_ΛCa, and ^208_ΛPb <cit.>. In the fitting process, the hyperon is placed in the 1s_1/2 ground state, and B_Λ is defined as follows:
B_Λ(^A_Λ Z) = E(^A-1Z) - E(^A_Λ Z).
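The calibration described above reduces to a one-parameter least-squares problem. A hedged Python sketch of such a loop is given below; the function b_lambda is a hypothetical placeholder for the self-consistent RMF solver (not provided here), and the quoted experimental B_Λ values are approximate and used only for illustration.

```python
# Schematic calibration loop (placeholder code, not the authors' implementation):
# adjust the single ratio g_{sigma Lambda}/g_{sigma N} so that the computed
# Lambda separation energies reproduce the data for 16O, 40Ca and 208Pb.
from scipy.optimize import least_squares

# Approximate experimental B_Lambda values in MeV (illustrative only).
data = {"O16L": 13.0, "Ca40L": 18.7, "Pb208L": 26.3}

def b_lambda(nucleus, r_sigma, r_omega=0.666):
    """Hypothetical placeholder for the self-consistent RMF solver: it should
    return B_Lambda = E(core) - E(hypernucleus) for the given coupling ratios."""
    raise NotImplementedError("hook up the DDRMF solver here")

def residuals(params):
    r_sigma = params[0]
    return [b_lambda(n, r_sigma) - b_exp for n, b_exp in data.items()]

# Levenberg-Marquardt minimization of the residuals, starting from a rough guess.
# fit = least_squares(residuals, x0=[0.62], method="lm")
# print("fitted g_sigmaLambda / g_sigmaN =", fit.x[0])
```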
Based on the effective interaction DD-LZ1, we finally obtained a new set of Λ N interaction parameters, namely DD-LZ1-Λ1, after a fitting procedure based on Levenberg-Marquardt minimization. We then calculated the Λ separation energy B_Λ as well as the single-Λ energies, with the hyperon occupying the ground state 1s_1/2 or possible excited states with higher angular momentum l_Λ. For B_Λ of DD-LZ1-Λ1, a remarkable agreement with experimental data is found for most hypernuclei, except for ^28_ΛSi with significant deformation and the light-mass Carbon hyperisotopes, as shown in Fig. <ref>. In fact, a more accurate description of the light-mass Carbon hyperisotopes could be obtained by limiting the mass region of the fit and taking into account deformation effects <cit.>. To investigate the deviations in describing the structural properties of single-Λ hypernuclei with different CDF effective interactions, the coupling strengths of DD-LZ1-Λ1 are listed in Table <ref> in comparison with the other selected CDF functionals. One can check the root-mean-square deviation Δ of B_Λ between the theoretical calculations and the experimental values, which is defined by
Δ≡√(1/N∑_i=1^N(B_Λ,i^exp.-B_Λ,i^cal.)^2).
To reveal the systematics, we define Δ_1 as the deviation computed only for ^16_ΛO, ^40_ΛCa, and ^208_ΛPb, and Δ_2 as the deviation computed over all hypernuclei.
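For concreteness, Δ is a plain root-mean-square error and can be computed directly; the numbers below are made up purely to show the arithmetic.

```python
# Direct transcription of the rms deviation Delta above (illustrative numbers).
import numpy as np

b_exp = np.array([13.0, 18.7, 26.3])   # illustrative "experimental" B_Lambda (MeV)
b_cal = np.array([13.2, 18.5, 26.6])   # illustrative "calculated" values (MeV)
delta = np.sqrt(np.mean((b_exp - b_cal) ** 2))
print(f"Delta = {delta:.3f} MeV")
```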
From Table <ref>, it can be seen that the different CDF theoretical models all describe ^16_ΛO, ^40_ΛCa and ^208_ΛPb well, and most parameter sets show good consistency between the hypernuclear theoretical calculations and the experimental data over a large mass range from ^12_ΛC to ^208_ΛPb. In addition, by comparing the three different types of CDF effective interactions, we find that when the ratio of the isoscalar-vector meson coupling strengths is fixed to the same value, the ratio of the isoscalar-scalar meson coupling strengths g_σΛ/g_σ N may satisfy a certain linear correlation with the isoscalar-vector one, which has been systematically explored in some works <cit.>. It should be pointed out that the linear correlation of the meson-hyperon coupling strength ratios obtained in the RMF framework is obviously not suitable for density-dependent RHF models <cit.>.
In the DDRMF approach, the in-medium effects of the nuclear force are effectively embedded in the density-dependent shape of the meson-baryon coupling strengths, and they play their role in nuclear structure via the equilibrium of the nuclear dynamics from the various coupling channels. In recent years, analyses based on the equilibrium of nuclear in-medium dynamics have been applied to clarify the mechanism of the pseudospin symmetry, the shell evolution, the liquid-gas phase transition, and the hyperon spin-orbit splitting in CDF models <cit.>. The delicate in-medium balance between the nuclear attractive and repulsive interactions may be significantly altered by treating the density dependence of the coupling strengths differently, impacting the description of the properties of nuclear matter and finite nuclei with different CDF effective interactions.
To provide a comprehensive understanding of the in-medium equilibrium in hypernuclei, we present the density dependence of the coupling strengths for the selected CDF effective interactions in Fig. <ref>(a) and Fig. <ref>(b), corresponding to the isoscalar-scalar channel g_σΛ and the isoscalar-vector one g_ωΛ. There are systematic divergences of the meson-hyperon coupling strengths with increasing density among the density-dependent RMF, density-dependent RHF, and nonlinear RMF effective interactions. Notably, the density dependence of g_σΛ and g_ωΛ is significantly reduced in the DDRHF effective interactions compared to the DDRMF ones. This pronounced reduction in density dependence also influences the description of single-particle properties in hypernuclei, such as the Λ hyperon spin-orbit splitting <cit.>. Furthermore, in contrast to the density-dependent interactions, the NLRMF effective interactions exhibit density-independent g_σΛ and g_ωΛ. Consequently, when applying these three types of CDF effective interactions to single-Λ hypernuclei, systematic deviations could take place in describing the isospin dependence of the hypernuclear structure.
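A minimal way to see why these families behave differently is to compare the generic density-dependent form f_ϕB(ξ) introduced above with a constant (NLRMF-like) coupling; the coefficients in the sketch below are placeholders rather than the published DD-LZ1 values.

```python
# Sketch: density-dependent versus constant meson-baryon coupling strengths.
# f(xi) = a (1 + b (xi+d)^2) / (1 + c (xi+d)^2), with xi = rho_b / rho_0.
import numpy as np

def f_dd(xi, a, b, c, d):
    return a * (1.0 + b * (xi + d) ** 2) / (1.0 + c * (xi + d) ** 2)

xi = np.linspace(0.0, 3.0, 301)
g0 = 10.0                                            # free coupling g(0), placeholder
g_dd = g0 * f_dd(xi, a=1.0, b=0.1, c=0.2, d=0.4)     # decreases with density
g_nl = np.full_like(xi, g0)                          # NLRMF-like constant coupling

i1 = np.searchsorted(xi, 1.0)                        # index of saturation density
dg_dxi = np.gradient(g_dd, xi)                       # slope feeding the rearrangement term
print("density-dependent g(xi=1)/g(0):", g_dd[i1] / g0)
print("constant (NLRMF-like) g(xi=1)/g(0):", g_nl[i1] / g0)
print("density slope dg/dxi at xi=1:", dg_dxi[i1])
```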
§.§ Bulk properties of single-Λ hypernuclei in Oxygen hyperisotopes
To focus on the isospin dependence of single-particle properties, we choose the Λ hypernuclei and their nucleonic counterparts in the Oxygen (hyper)isotopes as examples, since they usually take spherical shapes. To check the accuracy of the chosen interactions in describing the properties of finite nuclei, we first calculated the binding energies E_B, charge radii R_c, and matter radii R_m of the Oxygen isotopes using the DD-LZ1 effective interaction, and compared the theoretical calculations with experimental measurements taken from Refs. <cit.>. From the results in Table <ref>, we can see that for DD-LZ1 the theoretical calculations and experimental measurements are in good agreement for both the binding energies E_B and the charge radii R_c. It is worth noting that the total matter radius R_m of finite nuclei, unlike the charge radius, still carries significant uncertainties from heavy-ion reaction experiments. The theoretical calculations of R_m are consistent with the experimental measurements within the error bars.
Furthermore, we summarize in Table <ref> the systematics of the occupied energy level of the Λ hyperon, the single-particle energies of the Λ hyperon, the total binding energies, the charge radii, and the matter radii of the hypernuclei in the Oxygen hyperisotopes. In order to provide a possible reference for hypernuclear experiments, we also calculated the strength of the electric dipole transition B(E1) between the Λ1p and Λ1s occupation states. The transition strength is expressed as
B(E1;J_i⟶ J_f)=3e^2_Λ/4π⟨ f|r|i⟩^2(2j_f+1) [ j_f 1 j_i; -1/2 0 1/2 ]^2,
where e_Λ represents the effective charge of the Λ hyperon. The integral ⟨ f|r|i⟩ can be computed using the radial wave functions of the initial and final single-Λ states; see Ref. <cit.> for details.
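The angular part of this B(E1) expression is just a Wigner 3j symbol, which can be evaluated symbolically; in the sketch below the radial matrix element and the effective charge are made-up placeholders.

```python
# Sketch of the angular factor in B(E1) above, using SymPy's Wigner 3j symbol.
from sympy import Rational, pi
from sympy.physics.wigner import wigner_3j

def b_e1(j_i, j_f, radial_me, e_lambda):
    # B(E1) = 3 e_Lambda^2 / (4 pi) * <f|r|i>^2 * (2 j_f + 1) * (3j symbol)^2
    three_j = wigner_3j(j_f, 1, j_i, Rational(-1, 2), 0, Rational(1, 2))
    return 3 * e_lambda**2 / (4 * pi) * radial_me**2 * (2 * j_f + 1) * three_j**2

# Lambda 1p_{3/2} -> 1s_{1/2}, with placeholder radial matrix element (fm) and charge.
value = b_e1(j_i=Rational(3, 2), j_f=Rational(1, 2), radial_me=2.5, e_lambda=0.1)
print(float(value))
```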
In the framework of relativistic models, Dirac spinors with both upper and lower components could contribute to the value of B(E1). However, it is checked that the contribution from the lower component is negligible, especially for the non-charge-exchange channel. Therefore, only the contribution from the upper component is retained in the current calculations as a simplification. The inclusion of the Λ hyperon causes the so-called impurity effect inside hypernuclei <cit.>. When the Λ hyperon fills the 1s_1/2 state, we can see from the comparison of the total matter radii in Table <ref> and Table <ref> that the introduction of the hyperon causes a shrinkage of the hypernuclei of approximately 0.06-0.13 fm. Compared with the ground-state results, we observe a significant enhancement of the Λ root-mean-square radius when the hyperon fills the higher-lying 1p state. This change in the density distribution of the hyperon due to the different level occupations leads to an overall expansion of the hypernuclear matter radii, different from the Λ1s case. Additionally, with increasing neutron filling, the hyperon radii, the matter radii, and B(E1) all show a significant isospin dependence, which can be qualitatively explained by the density dependence of the coupling strengths. As indicated in Table <ref>, when the Λ hyperon occupies the 1p state, its density distribution spreads farther outward than that of the nucleonic core. As isospin evolves, more neutrons are filled and their attraction to the hyperon increases, correspondingly leading to a significant reduction of the hyperon radius. For B(E1), its value is determined not only by the overlap between the initial and final states, which is sensitive to the neutron number, but also by the effective charge. As a result, the B(E1) values increase slightly from ^15_ΛO to ^17_ΛO and then decrease gradually as isospin evolves after N=8.
§.§ Isospin dependence of Λ spin-orbit splitting
Motivated by the connection between the density-dependent effective interactions of theoretical models and the isospin-dependent properties of nuclear structure, the spin-orbit splitting of the Λ hyperon in hypernuclei, a promising observable in current hypernuclear spectroscopy, is discussed in this subsection with the newly developed DD-LZ1-Λ1 and the other selected CDF functionals. The Λ spin-orbit splitting is defined as the difference of the Λ single-particle energies between a pair of spin partner states,
Δ E_SO^Λ≡ε_j_Λ=l_Λ-1/2 - ε_j_Λ=l_Λ+1/2.
As shown in Fig. <ref>, the analysis is carried out for Λ spin partner states 1p in Oxygen hyperisotopes, with the Λ hyperon occupying its ground state.
In Fig. <ref>(a), it is seen that the isospin dependence of Δ E_SO^Λ clearly differs among the chosen CDF functionals. The curves from the NLRMF models tend to be stable with increasing neutron number, while for the density-dependent RMF or RHF functionals the splitting generally enlarges with isospin. Among them, DD-LZ1-Λ1 exhibits the most significant isospin dependence. Besides, it is clear that a smaller Λ spin-orbit splitting is predicted by DDRHF compared to RMF, which has been illustrated as a consequence for the single-particle properties of the fact that the dynamical equilibrium between nuclear attraction and repulsion is dramatically changed by the appearance of the Fock terms <cit.>.
To better understand the evolution of the Λ spin-orbit splitting with isospin, we decompose Δ E_SO^Λ into various parts according to their kinetic or potential energy origin. The values are obtained by left-multiplying the Dirac equation Eq. (<ref>) by the corresponding Dirac spinor and separating the integrated contributions from the different self-energy terms. For instance, Δ E_rea comes from the contribution of the rearrangement term Σ_R to the Λ self-energy Σ_0,Λ, as seen in Eq. (<ref>), due to the density dependence of the meson-hyperon couplings. The remaining contributions, from the kinetic energy and the density-independent potential energies, can then be summed up, namely Δ E_kin+σ+ω≡Δ E_SO^Λ-Δ E_rea, as discussed in Fig. <ref>(b).
It is observed that the values of the Λ spin-orbit splitting are primarily determined by Δ E_kin+σ+ω. However, the isospin dependence of the splitting is only weakly controlled by Δ E_kin+σ+ω, except for ^15_ΛO. Owing to the occupation of the ν 1p_1/2 orbit, the Λ spin-orbit splitting predicted by the various CDF functionals systematically reduces from ^15_ΛO to ^17_ΛO. As illustrated in Ref. <cit.>, the spin-orbit coupling potential of the hyperon is determined mainly by the radial derivative of the self-energy Σ_-^Λ. In general, the more neutrons are filled into the hypernucleus, the higher the density environment in which the Λ hyperon resides. Thus, if the model is density dependent, like the DDRMFs and DDRHFs given in Fig. <ref>, the meson-hyperon coupling strength weakens and Δ E_SO^Λ should correspondingly become smaller as the neutron number increases. As seen in Fig. <ref>(b), such a reduction in Δ E_kin+σ+ω is remarkable from ^15_ΛO to ^17_ΛO, and relatively less significant at larger neutron numbers.
Different from the NLRMF case, the density-dependent CDFs introduce an extra contribution that reinforces the isospin dependence of the splitting, as demonstrated in Fig. <ref>(c); it overwhelmingly cancels the reduction trend in Δ E_kin+σ+ω and finally leads to the enhancement of Δ E_SO^Λ with increasing neutron number in Fig. <ref>(a). In fact, the contribution Δ E_rea to the Λ spin-orbit splitting originates from the rearrangement terms of the Λ self-energy Σ_0,Λ, which according to Eq. (<ref>) depend on the density slope of the meson-hyperon coupling strength. As the neutron number increases, the density environment in which the Λ resides becomes denser, leading consequently to a weaker density dependence of the meson-hyperon coupling strength, a smaller density slope, and a suppressed value of Δ E_rea. Therefore, the link between the isospin evolution of the Λ spin-orbit splitting and the in-medium behavior of the Λ N interaction with baryon density is elucidated by the discussion of the Oxygen hyperisotopes. In consequence, possible experimental constraints on Δ E_SO^Λ along the hyperisotopes could further assist us in understanding the in-medium effects of the nuclear force.
§.§ Isospin dependence of matter and hyperon radii
Among the properties of hypernuclear structure, not only the Λ spin-orbit splitting but also the Λ impurity effect could carry information on the in-medium nuclear interactions. In Fig. <ref>(a), we selected the DDRMF functionals DD-LZ1-Λ1 and DD-ME2, the DDRHF functional PKO1-Λ1, and the NLRMF functional PK1 to illustrate this influence on the matter radii of the Oxygen (hyper)isotopes, where the solid and dash-dotted lines correspond to the calculated results for the single-Λ hypernuclei and their nucleonic counterparts, respectively. The matter radius R_m of the hypernuclei goes up monotonically as the neutron number increases, regardless of the specific model used, where the steep leap from ^23_ΛO to ^25_ΛO corresponds to the effect of the new occupation of ν 2s_1/2.
Although divergent values are given for the Oxygen isotopes without the hyperon, all of the selected models come closer in the size of the matter radii for the hypernuclei, implying that R_m of hypernuclei could be a model-independent observable. It is evident that the matter radii of the Oxygen hyperisotopes contract as compared to their nucleonic counterparts, namely the size shrinkage due to the impurity effect of the Λ hyperon. However, the shrinkage magnitude appears to be strongly model dependent. Among them, the DDRMF effective Lagrangian DD-LZ1-Λ1 yields the largest difference between the solid and dash-dotted lines, whereas the NLRMF one PK1 shows the smallest disparity. By checking the bulk properties of nuclear matter within these CDFs, it is verified that the shrinkage magnitude correlates well with the incompressibility, which is 230.7 MeV for DD-LZ1, 250.8 MeV for DD-ME2, 250.2 MeV for PKO1, and 282.7 MeV for PK1, respectively <cit.>. In fact, the larger the incompressibility K, the harder it is for the nucleus to be contracted by the attraction exerted by the hyperon filled inside, and consequently the weaker the size shrinkage effect in the calculated matter radii. A similar relation can be found in Table II of a work on the isoscalar giant monopole resonance of hypernuclei, where the effective nuclear incompressibility modulus was extracted <cit.>.
To further distinguish the effects of different interactions on the description of hypernuclear structure, we investigate the isospin evolution of the Λ hyperon radius R_Λ in the Oxygen hyperisotopes using all selected CDF effective interactions, as shown in Fig. <ref>. It is clearly seen that R_Λ evolves differently along the Oxygen hyperisotopes for the different CDF effective interactions. Some effective interactions, like PKO3-Λ1, DD-ME2, DDV, and DD-LZ1-Λ1, exhibit a reduced R_Λ with increasing neutron number. In particular, DD-LZ1-Λ1 gives the smallest hyperon radii among all chosen CDFs and a strong declining trend. In fact, the core polarization effect due to the Λ hyperon plays a significant role in this evolution. When the Λ occupies the 1s_1/2 state, its density distribution is concentrated inside the hypernucleus. As a result, its coupling or attraction with the nucleons in the core (here corresponding to ^16O) appears relatively stronger than that with the valence nucleons. Hence, the evolution of the hyperon radius can be understood, more or less, from the size change of the core with respect to the neutron number.
The variation of the matter radii of the ^16O core in the Oxygen (hyper)isotopes is plotted in Fig. <ref>(b) with respect to the neutron number. From N=8 to 14, in contrast to the situation of the total matter radii R_m, there is no consistent isospin dependence of the core radius R_m^core with increasing neutron number among the selected CDFs. The nonlinear RMF functional PK1 exhibits a significant increasing trend with isospin, while the density-dependent RMF one DD-LZ1-Λ1 shows a noticeable decrease. Consequently, the hyperon radius R_Λ exhibits a similar isospin dependence resulting from the core polarization effect, determined mainly by the different isospin properties of the CDF functionals in the nucleon-nucleon channels. From such an analysis, the importance of nuclear in-medium effects in affecting the hyperon radii is unveiled. Thus the divergent isospin evolution of R_Λ given by the CDFs with different density-dependent meson-baryon couplings makes it a valuable tool to elucidate the in-medium behavior of the nuclear force.
§ SUMMARY
In summary, considering the significance of nuclear in-medium effects in nuclear many-body problems, such as the elimination of the spurious shell closures, we extended the newly developed DDRMF Lagrangian DD-LZ1 to incorporate the Λ hyperon degree of freedom and determined the Λ N effective interaction by fitting the experimental Λ separation energies of several single-Λ hypernuclei. Subsequently, together with several other CDF functionals, the Λ separation energies, the B(E1) transitions, the evolution of the spin-orbit splitting, and the characteristic radii were analyzed in detail along the Oxygen (hyper)isotopes.
By comparing the results obtained from different CDF models, we further investigated the crucial impact of nuclear in-medium effects on an accurate description of the hyperon, in terms of both its bulk and single-particle properties. For the 1p spin-orbit splitting of the Λ hyperon, significant differences in the isospin dependence are observed among the selected CDF effective interactions in the Oxygen hyperisotopes. As the neutron number increases, the density environment in which the hyperon resides gradually becomes denser, which causes the meson-hyperon coupling strengths that determine the hypernuclear properties to change as well. In particular, the density-dependent CDF effective interactions introduce additional rearrangement terms that significantly enhance the isospin dependence of the Λ spin-orbit splitting, leading to a more distinct variation of Δ E_SO^Λ with neutron number in the DDRMF and DDRHF models.
The evolution of the hypernuclear matter radius with isospin was further investigated. A significant model dependence of the magnitude of the size shrinkage due to the inclusion of the Λ hyperon is observed, with the DDRMF functional DD-LZ1-Λ1 displaying the largest shrinkage effect. This result is explained by an anticorrelation between the incompressibility coefficient K of nuclear matter and the hyperon radii R_Λ, providing a possible way to constrain the hyperon distribution inside a hypernucleus from the better-determined bulk properties of nuclear matter. Additionally, it is found that the isospin evolution of the hyperon radius is primarily influenced by the density-dependent behavior of the chosen CDF functional in the NN interaction channel via the core polarization. Thus, the sensitivity of these hyperon-relevant properties to the various meson-baryon couplings in CDF models holds great potential for elucidating the nuclear in-medium nature in both the Λ N and NN channels.
§ REFERENCES

[1] M. Danysz and J. Pniewski, Philos. Mag. 44, 348 (1953).
[2] O. Hashimoto and H. Tamura, Prog. Part. Nucl. Phys. 57, 564 (2006).
[3] A. Gal, E. V. Hungerford, and D. J. Millener, Rev. Mod. Phys. 88, 035004 (2016).
[4] M. Prakash, I. Bombaci, M. Prakash, P. J. Ellis, J. M. Lattimer, and R. Knorren, Phys. Rep. 280, 1 (1997).
[5] L. Tolos and L. Fabbietti, Prog. Part. Nucl. Phys. 112, 103770 (2020).
[6] G. Burgio, H.-J. Schulze, I. Vidaña, and J.-B. Wei, Prog. Part. Nucl. Phys. 120, 103879 (2021).
[7] S. Sawada, Nucl. Phys. A 782, 434 (2007).
[8] S. N. Nakamura et al., Nucl. Phys. A 754, 421 (2005).
[9] W. Henning, Nucl. Phys. A 734, 654 (2004).
[10] P. H. Pile et al., Phys. Rev. Lett. 66, 2585 (1991).
[11] A. Feliciello and T. Nagae, Rep. Prog. Phys. 78, 096301 (2015).
[12] K. Tanida et al., Phys. Rev. Lett. 86, 1982 (2001).
[13] H. Kohri et al., Phys. Rev. C 65, 034607 (2002).
[14] Z.-Q. Feng, Phys. Rev. C 102, 044604 (2020).
[15] T. R. Saito et al., Nat. Rev. Phys. 3, 803 (2021).
[16] B. E. Aboona et al. (STAR Collaboration), Phys. Rev. Lett. 130, 212301 (2023).
[17] J. C. Yang et al., Nucl. Instrum. Methods Phys. Res. B 317, 263 (2013).
[18] X. Zhou, J. Yang, and the HIAF project team, AAPPS Bull. 32, 35 (2022).
[19] J. Mareš and B. K. Jennings, Phys. Rev. C 49, 2472 (1994).
[20] R. Wirth and R. Roth, Phys. Lett. B 779, 336 (2018).
[21] D. Vretenar, W. Pöschl, G. A. Lalazissis, and P. Ring, Phys. Rev. C 57, R1060 (1998).
[22] A. Umeya and T. Harada, Phys. Rev. C 83, 034310 (2011).
[23] H. J. Xia, H. Mei, and J. M. Yao, Sci. China Phys. Mech. Astron. 60, 102021 (2017).
[24] W. Ning, Z. Xian-Rong, and C. Fang-Qi, Chin. Phys. C 33, 116 (2009).
[25] B. N. Lu, E. G. Zhao, and S. G. Zhou, Phys. Rev. C 84, 014328 (2011).
[26] Y. Zhang, H. Sagawa, and E. Hiyama, Phys. Rev. C 103, 034321 (2021).
[27] Y. Zhang, H. Sagawa, and E. Hiyama, Prog. Theor. Exp. Phys. 2022, 023D01 (2022).
[28] H.-T. Xue, Q. B. Chen, X.-R. Zhou, Y. Y. Cheng, and H.-J. Schulze, Phys. Rev. C 106, 044306 (2022).
[29] P. G. Reinhard, Rep. Prog. Phys. 52, 439 (1989).
[30] P. Ring, Prog. Part. Nucl. Phys. 37, 193 (1996).
[31] M. Bender, P. H. Heenen, and P. G. Reinhard, Rev. Mod. Phys. 75, 121 (2003).
[32] D. Vretenar, A. V. Afanasjev, G. A. Lalazissis, and P. Ring, Phys. Rep. 409, 101 (2005).
[33] J. Meng, H. Toki, S. G. Zhou, S. Q. Zhang, W. H. Long, and L. S. Geng, Prog. Part. Nucl. Phys. 57, 470 (2006).
[34] T. Nikšić, D. Vretenar, and P. Ring, Prog. Part. Nucl. Phys. 66, 519 (2011).
[35] J. Meng and S. G. Zhou, J. Phys. G: Nucl. Part. Phys. 42, 093101 (2015).
[36] J. Meng, Relativistic Density Functional for Nuclear Structure (World Scientific, 2016).
[37] M. Rayet, Ann. Phys. 102, 226 (1976).
[38] D. E. Lanskoy and Y. Yamamoto, Phys. Rev. C 55, 2330 (1997).
[39] R. Brockmann and W. Weise, Phys. Lett. B 69, 167 (1977).
[40] A. Bouyssy, Phys. Lett. B 99, 305 (1981).
[41] N. K. Glendenning and S. A. Moszkowski, Phys. Rev. Lett. 67, 2414 (1991).
[42] Y. Sugahara and H. Toki, Prog. Theor. Phys. 92, 803 (1994).
[43] X.-R. Zhou, A. Polls, H.-J. Schulze, and I. Vidaña, Phys. Rev. C 78, 054306 (2008).
[44] J. N. Hu, E. Hiyama, and H. Toki, Phys. Rev. C 90, 014309 (2014).
[45] J. J. Li, W. H. Long, and A. Sedrakian, Eur. Phys. J. A 54, 133 (2018).
[46] Y. T. Rong, P. W. Zhao, and S. G. Zhou, Phys. Lett. B 807, 135533 (2020).
[47] X. Y. Wu, H. Mei, J. M. Yao, and X. R. Zhou, Phys. Rev. C 95, 034309 (2017).
[48] Y. Tanimura and K. Hagino, Phys. Rev. C 85, 014306 (2012).
[49] L. Hong-Feng and M. Jie, Chin. Phys. Lett. 19, 1775 (2002).
[50] M. T. Win and K. Hagino, Phys. Rev. C 78, 054311 (2008).
[51] B. N. Lu, E. Hiyama, H. Sagawa, and S. G. Zhou, Phys. Rev. C 89, 044307 (2014).
[52] C. F. Chen, Q. B. Chen, X.-R. Zhou, Y. Y. Cheng, J.-W. Cui, and H.-J. Schulze, Chin. Phys. C 46, 064109 (2022).
[53] Y. Tanimura, Phys. Rev. C 99, 034324 (2019).
[54] J. Meng, H. Lü, S. Zhang, and S.-G. Zhou, Nucl. Phys. A 722, C366 (2003).
[55] Y.-T. Rong, Z.-H. Tu, and S.-G. Zhou, Phys. Rev. C 104, 054321 (2021).
[56] B. Wei, Q. Zhao, Z. H. Wang, J. Geng, B. Y. Sun, Y. F. Niu, and W. H. Long, Chin. Phys. C 44, 074107 (2020).
[57] W. Zhang, Z. Y. Li, W. Gao, and T. T. Sun, Chin. Phys. C 46, 104105 (2022).
[58] I. A. Rather, U. Rahaman, V. Dexheimer, A. A. Usmani, and S. K. Patra, Astrophys. J. 917, 46 (2021).
[59] X. Sun, Z. Miao, B. Sun, and A. Li, Astrophys. J. 942, 55 (2023).
[60] I. A. Rather, U. Rahaman, M. Imran, H. C. Das, A. A. Usmani, and S. K. Patra, Phys. Rev. C 103, 055814 (2021).
[61] T. Malik, M. Ferreira, B. K. Agrawal, and C. Providência, Astrophys. J. 930, 17 (2022).
[62] S. Yang, D. Wen, J. Wang, and J. Zhang, Phys. Rev. D 105, 063023 (2022).
[63] C.-J. Xia, B. Y. Sun, T. Maruyama, W.-H. Long, and A. Li, Phys. Rev. C 105, 045803 (2022).
[64] C.-J. Xia, T. Maruyama, A. Li, B. Y. Sun, W.-H. Long, and Y.-X. Zhang, Commun. Theor. Phys. 74, 095303 (2022).
[65] M. Isaka, H. Homma, M. Kimura, A. Dote, and A. Ohnishi, Few-Body Syst. 54, 1219 (2013).
[66] S. Choi, E. Hiyama, C. H. Hyun, and M.-K. Cheoun, Eur. Phys. J. A 58, 161 (2022).
[67] K. Aoki et al., Extension of the J-PARC hadron experimental facility: Third white paper, arXiv:2110.04462 (2021).
[68] W. H. Long, N. Van Giai, and J. Meng, Phys. Lett. B 640, 150 (2006).
[69] S. Y. Ding, Z. Qian, B. Y. Sun, and W. H. Long, Phys. Rev. C 106, 054311 (2022).
[70] H. Xia, X. Wu, H. Mei, and J. Yao, Sci. China Phys. Mech. Astron. 66, 252011 (2023).
[71] H.-T. Xue, Y.-F. Chen, Q. B. Chen, Y. A. Luo, H.-J. Schulze, and X.-R. Zhou, Phys. Rev. C 107, 044317 (2023).
[72] Z.-H. Tu and S.-G. Zhou, Astrophys. J. 925, 16 (2022).
[73] S.-H. Ren, T.-T. Sun, and W. Zhang, Phys. Rev. C 95, 054318 (2017).
[74] B. Jennings, Phys. Lett. B 246, 325 (1990).
[75] J. F. Berger, M. Girod, and D. Gogny, Nucl. Phys. A 428, 23 (1984).
[76] J. Meng, Nucl. Phys. A 635, 3 (1998).
[77] W. H. Long, P. Ring, N. V. Giai, and J. Meng, Phys. Rev. C 81, 024308 (2010).
[78] J. Geng, J. Xiang, B. Y. Sun, and W. H. Long, Phys. Rev. C 101, 064302 (2020).
[79] J. Geng and W. H. Long, Phys. Rev. C 105, 034329 (2022).
[80] C. B. Dover and A. Gal, Prog. Part. Nucl. Phys. 12, 171 (1984).
[81] Commun. Theor. Phys. 60, 479 (2013).
[82] J. Liu, Y. F. Niu, and W. H. Long, Phys. Lett. B 806, 135524 (2020).
[83] S. Yang, X. D. Sun, J. Geng, B. Y. Sun, and W. H. Long, Phys. Rev. C 103, 014304 (2021).
[84] M. Wang, W. Huang, F. Kondev, G. Audi, and S. Naimi, Chin. Phys. C 45, 030003 (2021).
[85] K. Zhang et al., At. Data Nucl. Data Tables 144, 101488 (2022).
[86] S. Kaur et al., Phys. Rev. Lett. 129, 142502 (2022).
[87] I. Angeli and K. Marinova, At. Data Nucl. Data Tables 99, 69 (2013).
[88] T. Li, Y. Luo, and N. Wang, At. Data Nucl. Data Tables 140, 101440 (2021).
[89] B. Y. Sun, W. H. Long, J. Meng, and U. Lombardo, Phys. Rev. C 78, 065805 (2008).
[90] W. H. Long, B. Y. Sun, K. Hagino, and H. Sagawa, Phys. Rev. C 85, 025806 (2012).
[91] H. Lv, S.-S. Zhang, Z.-H. Zhang, Y.-Q. Wu, J. Liu, and L.-G. Cao, Chin. Phys. Lett. 35, 062102 (2018).
|
http://arxiv.org/abs/2307.04188v1 | 20230709144204 | Wasserstein-p Bounds in the Central Limit Theorem Under Local Dependence | [
"Tianle Liu",
"Morgane Austern"
] | math.PR | [
"math.PR",
"math.ST",
"stat.TH",
"60F05"
] |
Wasserstein-p Bounds in the Central Limit Theorem Under Local Dependence
[email protected]
[email protected]
Department of Statistics, Harvard University, Cambridge, MA 02138
2020 Mathematics Subject Classification: 60F05.
The central limit theorem (CLT) is one of the most fundamental results in probability, and establishing its rate of convergence has been a key question since the 1940s. For independent random variables, a series of recent works established optimal error bounds under the Wasserstein-p distance (with p≥ 1). In this paper, we extend those results to locally dependent random variables, which include m-dependent random fields and U-statistics. Under conditions on the moments and the dependency neighborhoods, we derive optimal rates in the CLT for the Wasserstein-p distance. Our proofs rely on approximating the empirical average of dependent observations by the empirical average of i.i.d. random variables. To do so, we expand the Stein equation to arbitrary orders by adapting Stein's dependency neighborhood method. Finally, we illustrate the applicability of our results by obtaining efficient tail bounds.
Tianle Liu and Morgane Austern
===================================
§ INTRODUCTION
The central limit theorem (CLT) is one of the most fundamental theorems in probability theory. Initially formulated for independent and identically distributed random variables, it has since then been generalized to triangular arrays <cit.>, martingales <cit.>, U-statistics <cit.>, locally dependent random variables <cit.>, and mixing random fields <cit.>. Let (I_n) be an increasing sequence of subsets I_1⊆ I_2⊆⋯⊆ I, whose sizes increase to infinity |I_n|→∞. Set (X_i)_i∈ I to be (dependent) centered random variables. Under certain conditions on the moments of (X_i) and on its dependence structure, the CLT states that the scaled sum is asymptotically normal, i.e.,
W_n:=σ_n^-1∑_i∈ I_n X_i →^d 𝒩(0,1),
where we write σ_n^2=Var(∑_i∈ I_nX_i). Starting with the work of Berry and Esseen in 1940s, there is a long history of quantifying how far W_n is from being normally distributed. One of the most important metrics to do so is the Wasserstein-p distance originated in optimal transport theory <cit.>. For two probability measures ν and μ over the real line ℝ, we denote by Γ(ν,μ) the set of all couplings of ν and μ, and the Wasserstein-p distance between ν and μ is defined as
𝒲_p(ν,μ):=inf_γ∈Γ(ν,μ)(𝔼_(X,Y)∼γ[| X-Y |^p])^1/p.
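For one-dimensional empirical measures supported on the same number of atoms, the infimum in this definition is attained by the monotone (quantile) coupling, so 𝒲_p can be computed by matching order statistics; a minimal sketch:

```python
# W_p between the empirical laws of two equal-size samples on the real line:
# sort both samples and match order statistics (the monotone coupling is optimal).
import numpy as np

def wasserstein_p(x, y, p):
    x, y = np.sort(np.asarray(x)), np.sort(np.asarray(y))
    return np.mean(np.abs(x - y) ** p) ** (1.0 / p)

rng = np.random.default_rng(0)
a = rng.standard_normal(10_000)
b = rng.standard_normal(10_000) * 1.1 + 0.2
print(wasserstein_p(a, b, p=3))
```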
When the observations (X_i) are independent, <cit.> established that for p=1 the convergence rate in the CLT is 𝒪(| I_n|^-1/2). Extending such results to p>1 remained for a while an open question. The first bounds for p> 1, obtained by <cit.> and dating back to the 1970s, were sub-optimal in terms of the sample size |I_n|, as they decrease at the slower rate 𝒪(|I_n|^-1/2+1/p). <cit.> obtained that, for 1≤ p≤ 2, the Wasserstein distance converges at the optimal rate 𝒪(| I_n|^-1/2) under some additional necessary moment conditions, and they conjectured that such a rate would be extendable to arbitrary p≥ 1. This was recently proven to be true by <cit.> using a series of methods including the Edgeworth expansion and the exchangeable pair method. They showed that if max_i‖ X_i‖_p+2<∞ and if Var(X_i)=1 for all i, then there is a constant K_p<∞ such that
𝒲_p(ℒ(W_n),𝒩(0,1))≤K_p‖ X_1‖_p+2^1+2/p/√(| I_n|),
where ℒ( · ) designates the distribution of the given random variable. It is however crucial to note that these rates were obtained under the key assumption of independence of the (X_i). In this paper, we aim to generalize this beyond the assumption of independence which is restrictive for many applications.
An important class of dependent observations (X_i) are locally dependent random variables. Intuitively, we say that (X_i) are locally dependent if for every finite group of random variables (X_i)_i∈ J, where J⊂ I, there exists a subset N(J)⊂ I such that (X_i)_i∈ J is independent of (X_i)_i∈ I∖ N(J). The subset N(J) is often called the dependency neighborhood of J. Examples of such random variables include m-dependent random fields, U-statistics, and subgraph count statistics in the Erdős–Rényi random graphs. Under general conditions on the sizes of the dependency neighborhoods the central limit theorem is known to hold, and its rate of convergence in the Wasserstein-1 distance was established by <cit.>. This was extended to Wasserstein-2 bounds by <cit.> by relating it to Zolotarev's metrics and cleverly exploiting Stein's method. Drawing inspiration from <cit.>, sub-optimal rates were also achieved in <cit.> for arbitrary p≥ 1 under more technical conditions. Nevertheless, an optimal rate bound for general Wasserstein-p distances (p≥ 1) remains unknown. This is the gap that we fill in this paper. We consider locally dependent (not necessarily identically distributed) random variables (X_i), and consider the empirical average W_n:=σ_n^-1∑_i∈ I_nX_i, where σ_n^2:=Var(∑_i∈ I_nX_i). For all p≥ 1 we obtain bounds for the distance 𝒲_p(ℒ(W_n),𝒩(0,1)). We do so under the assumption that the variances (σ_n) are nondegenerate, and under conditions on the moments and on the sizes of the dependency neighborhoods. Notably, if the size of the dependency neighborhoods is uniformly bounded we obtain bounds that decrease at the optimal rate (see <ref>)
𝒲_p(ℒ(W_n),𝒩(0,1))=𝒪(1/√(|I_n|)).
We further generalize our results to triangular arrays where the random variables (X^0.6(n)_i) are allowed to change with n. Finally, we demonstrate how those bounds can be exploited to obtain non-uniform Berry–Esseen type bounds that have polynomial decay.
The key idea of our proofs is to approximate the empirical average W_n by an empirical average V_n of i.i.d. random variables for which Wasserstein bounds are already known. To do this we establish an Edgeworth-type expansion of the Stein equation in terms of the cumulants of W_n. Indeed, in <ref> we prove that if h is a sufficiently smooth function (made precise later) and Z∼𝒩(0,1) is a standard normal random variable, then
𝔼[ h(W_n)]-𝔼[h(Z)]=𝔼 [f_h'(W_n)-W_nf_h(W_n)]
= ∑_(r,s_1:r)∈Γ (⌈ p⌉-1)(-1)^r∏_j=1^rκ _s _j+2(W_n)/(s _j+1)!𝒩 [∏_j=1^r(∂ ^s _j+1Θ) h] + Remainders,
where f_h is the solution of the Stein equation <ref> and where (κ _j(W_n)) designates the cumulants of W_n (the other notations will be made explicit in the next few sections). This generalizes a similar well-known result for i.i.d. observations established in <cit.>. To guarantee that our choice of V_n is a good approximation of W_n we utilize this expansion and exploit the Hamburger moment problem to choose V_n to be such that its first ⌈ p ⌉+1 cumulants match the ones of W_n.
§.§ Related Literature
<cit.> established that the convergence rate in the central limit theorem is 𝒪(| I_n|^-1/2) in terms of the Wasserstein-1 distance. Since then, this result has been tightened and generalized to dependent observations.
Notably, Stein's method offers a series of powerful techniques for obtaining Wasserstein-1 bounds in the dependence setting. See <cit.> for a survey of those methods. <cit.> obtained Wasserstein-1 bounds under local dependence conditions.
<cit.> proposed a rate of 𝒪(|I_n|^-1/2+1/p) for the Wasserstein-p distance under the hypothesis that the random variables have finite exponential moments. <cit.> obtained a similar rate but only required the existence of p-th moments. <cit.> showed that in order to obtain a convergence rate of 𝒪(| I_n|^-1/2), it is necessary to require finite (p+2)-th moments of the random variables. They also obtained the optimal rate for 1≤ p≤ 2 and conjectured that a similar rate should be valid for arbitrary p> 2. This conjecture was demonstrated to be true by <cit.>. Those two papers took different approaches. <cit.> used an Edgeworth expansion argument. <cit.>, on the other hand, used the Ornstein–Uhlenbeck interpolation combined with a Stein exchangeable pair argument, and their methods further apply to multivariate settings. Prior to that, <cit.> had already obtained the optimal rate for the Wasserstein-p distance using the Ornstein–Uhlenbeck interpolation, but needed significantly stronger assumptions on the distribution of the random variables by requiring the existence of a Stein kernel. Moreover, for the special case p=2, the celebrated HWI inequality <cit.> and Talagrand's quadratic transport inequality <cit.> can help obtain Wasserstein-2 bounds by relating it to the Kullback–Leibler divergence.
Contrary to the independent case, much less is known about the general Wasserstein-p distance for dependent data. <cit.> adapted Stein's method to obtain Wasserstein-2 bounds for locally dependent variables. <cit.> modified the approach of <cit.> and obtained a sub-optimal rate 𝒪(| I_n|^-1/2log | I_n|) for the Wasserstein-p distance under local dependence. Our results significantly extend both of those works by establishing the optimal rate for arbitrary p≥ 1.
Our proofs also rely on Stein's method and on a result of <cit.> that allows us to upper-bound the Wasserstein-p distance by an integral probability metric <cit.>. As those metrics are defined as the supremum of expected differences over a certain class of functions, Stein's method lends itself nicely to this problem.
Stein's method was first introduced in <cit.> as a new method to obtain a Berry–Esseen bound and prove the central limit theorem for weakly dependent data. It has since become one of the most popular and powerful tools to prove asymptotic normality for dependent data, and different adaptations of it have been proposed, notably dependency neighborhoods, exchangeable pairs, zero-bias coupling, and size-bias coupling <cit.>. In addition to being used to prove the central limit theorem, it has also been adapted to obtain limit theorems for the Poisson distribution <cit.> and the exponential distribution <cit.>. Moreover, it has been used for comparing different univariate distributions <cit.>. Our use of Stein's method is closely related to the dependency neighborhood method described in <cit.>.
§.§ Paper Outline
In <ref> we clarify some notations that we use throughout the paper. Then we present our results under two different local dependence conditions in <ref>. In <ref> and <ref> we respectively apply our results to m-dependent random fields and to U-statistics. In <ref> we apply our results to obtain non-uniform Berry–Esseen bounds with polynomial decay.
In <ref>, we give an overview of our proof techniques. In <ref> we present the main lemmas (notably <ref>) and use them to prove the main result <ref>. Those lemmas and additional results are proved in <ref>.
§ GENERAL NOTATIONS
§.§ Notations concerning integers and sets
In this paper, we write ⌈ x⌉ for the smallest integer greater than or equal to x and ⌊ x⌋ for the largest integer less than or equal to x. We use ℕ to denote the set of non-negative integers and let ℕ_+ be the set of positive integers. For any n∈ℕ_+, denote [n]:={ℓ∈ℕ_+:1≤ℓ≤ n}.
Moreover, for a finite set B we denote by |B| its cardinality.
§.§ Notations for sequences
Given a sequence (x_i) we will shorthand x_1:ℓ=(x_1,⋯,x_ℓ) and similarly for any subset B⊆ℕ_+ we denote x_B:=(x_i)_i∈ B.
§.§ Notations for functions
For any real valued functions f( · ),g( · ):ℕ_+→ℝ, we write f(n)≲ g(n) or f(n)=𝒪(g(n)) if there exists some constant C (with dependencies that are fixed in the contexts) and an integer N>0 such that the inequality f(n)≤ C g(n) holds for all n≥ N. We further write f(n)≍ g(n) as shorthand for f(n)≲ g(n) and g(n)≲ f(n).
§.§ Notations for probability distributions
For a random variable X we write by ℒ(X) the distribution of X.
§ MAIN THEOREMS
Let p≥ 1 be a positive real number, and write ω :=p+1-⌈ p⌉∈ (0,1]. We choose I to be an infinite index set and (I_n)_n=1^∞ to be an increasing sequence of finite subsets I_1⊆ I_2⊆⋯⊊ I of I that satisfies |I_n|→∞ as n→∞.
Let (X^0.6(n)_i)_i∈ I_n be a triangular array of random variables, each row indexed by i∈ I_n (n=1,2,⋯). We define W_n to be the following empirical average
W_n:=σ_n^-1∑_i∈ I_nX^0.6(n)_i, with σ_n^2:=Var(∑_i∈ I_n X^0.6(n)_i).
Under the hypothesis that the random variables (X^0.6(n)_i) are locally dependent we will, in this section, bound the Wasserstein-p distance between W_n and its normal limit. The bound we obtain depends on the size of the index set I_n, the moments of the random variables and the structure of local dependence in question.
To formally state our conditions on the dependency structure of (X_i^(n)), we first define the notion of dependency neighborhoods similarly as in <cit.>.
Given random variables (Y_i)_i∈ I and given J⊆ I, we say that N(J)⊆ I is a dependency neighborhood of J if { Y_j:j∉ N(J) } is independent of { Y_j: j∈ J}. To state our theorem, we impose that such dependency neighborhoods can be defined for (X_i^0.6(n)). More formally, we assume that there is a sequence (N_n(i_1:q))_q of subsets of I_n that satisfy the following conditions:
[LD-1]: For each i_1∈ I_n, the subset N_n(i_1)⊆ I_n is such that { X^0.6(n)_j:j∉ N_n(i_1) } is independent of X^0.6(n)_i_1.
[LD-q] (q≥ 2): For each i_1∈ I_n, i_2∈ N_n(i_1), ⋯, i_q∈ N_n(i_1:(q-1)), the subset N_n(i_1:q)⊂ I_n is such that { X^0.6(n)_j:j∉ N_n(i_1:q) } is independent of (X^0.6(n)_i_1,⋯,X^0.6(n)_i_q).
We remark that the sequence of subsets (N_n(i_1:q))_q is increasing, i.e., N_n(i_1:(q-1))⊆ N_n(i_1:q) in q, and that the neighborhoods N_n(i_1:q) are allowed to be different for different values of n, which reflects the triangular array structure of our problem. The condition of dependency neighborhoods here generalizes the one in <cit.> and was also adopted in <cit.>, inspired by <cit.>. <cit.> obtained a Wasserstein-1 bound under “decomposable” conditions similar to [LD-1] and [LD-2], <cit.> showed a Berry–Esseen type result under slightly stronger assumptions for local dependence, and <cit.> obtained a Wasserstein-2 bound.
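As a concrete illustration (not needed in the sequel), take I_n={1,⋯,n} and X^0.6(n)_i:=ε_iε_i+1 for an i.i.d. centered sequence (ε_i). Each X^0.6(n)_i depends only on (ε_i,ε_i+1), so one may take N_n(i_1):={ i_1-1,i_1,i_1+1}∩ I_n and, more generally, N_n(i_1:q):=(⋃_ℓ=1^q{ i_ℓ-1,i_ℓ,i_ℓ+1})∩ I_n: the variables indexed outside this union involve disjoint ε's and are therefore independent of (X^0.6(n)_i_1,⋯,X^0.6(n)_i_q), so that [LD-1] to [LD-q] hold with neighborhoods of size at most 3q.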
In order to define the remainder terms that will appear in our bounds, we introduce the following notions. Given t∈ℕ_+ and ℓ∈ℕ_+, we say that the tuple (η_1,η_2,⋯,η_ℓ) is an integer composition of t if and only if η_1:ℓ are positive integers such that η_1+η_2+⋯+η_ℓ=t. We denote by C(t) the set of all possible integer compositions
C (t):={ℓ,η_1:ℓ∈ℕ_+:∑_j=1^ℓη_j=t }.
Moreover, for any random variables (Y_i)_i=1^t, we define the order-t compositional expectation with respect to η_1:ℓ as
[η_1,⋯,η_ℓ]▹ (Y_1,⋯,Y_t):=
𝔼[Y_1⋯ Y_η_1] 𝔼[Y_η_1+1⋯ Y_η_1+η_2] ⋯ 𝔼[Y_η_1+⋯+η_ℓ-1+1⋯ Y_t].
Note that if η_ℓ=1, the last expectation reduces to 𝔼 [Y_t]. For any positive integer k and real value ω∈ (0,1], we define
R_k,ω,n:= ∑_(ℓ,η_1:ℓ)∈ C^*(k+2)∑_i_1∈ I_n∑_i_2∈ N_n(i_1)⋯∑_i_k+1∈ N_n(i_1:k)
[η_1,⋯,η_ℓ]▹(| X^0.6(n)_i_1|,⋯,| X^0.6(n)_i_k+1|,(∑_i_k+2∈ N_n(i_1:(k+1))| X^0.6(n)_i_k+2|)^ω),
where C^*(k+2) is given by
C^*(t):={(ℓ,η_1:ℓ)∈ C(t): η_j≥ 2 for 1≤ j≤ℓ-1,}⊆ C(t).
The terms (R_k,ω,n) are remainder terms that appear in our bound of the Wasserstein-p distance between W_n and its normal limit.
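For instance, for k=ω=1 we have C^*(3)={ (3),(2,1) }, so that, up to the normalization by σ_n, each summand of R_1,1,n is of the form 𝔼[| X^0.6(n)_i_1X^0.6(n)_i_2X^0.6(n)_i_3|]+𝔼[| X^0.6(n)_i_1X^0.6(n)_i_2|] 𝔼[| X^0.6(n)_i_3|] with i_2∈ N_n(i_1) and i_3∈ N_n(i_1,i_2); this matches the explicit expression for R_1,1,n given below.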
Let (X^0.6(n)_i)_i∈ I_n be a triangular array of mean zero random variables and suppose that they satisfy [LD-1] to [LD-(⌈ p⌉+1)]. Let σ_n^2:=Var(∑_i∈ I_n X^0.6(n)_i) and define W_n:=σ_n^-1∑_i∈ I_nX^0.6(n)_i. Further suppose that for any j∈ℕ_+ such that j≤⌈ p⌉ -1, it holds that R_j,1,n→ 0 as n→∞. Then there exists an integer N∈ℕ_+ such that for all n≥ N, we have the following Wasserstein bound:
𝒲_p(ℒ(W_n), 𝒩(0,1)) ≤ C_p (∑_j=1^⌈ p⌉-1R _j,1,n^1/j+∑_j=1^⌈ p⌉R _j,ω,n ^1/(j+ω -1) ),
where ω=p+1-⌈ p⌉ and C_p is a constant that only depends on p.
We note that the condition that the remainder terms R_j,1,n shrink to 0 for all j≤⌈ p⌉ -1 impose an implicit constraint on the size of the sets N_n(i_1:q).
In particular, for p=1,2 we have
𝒲_1(ℒ(W_n), 𝒩(0,1))≤ C_1R_1,1,n,
𝒲_2(ℒ(W_n), 𝒩(0,1))≤ C_2(R_1,1,n+R_2,1,n^1/2).
where the remainders are given by
R_1,1,n= σ_n^-3∑_i∈ I_n∑_j∈ N_n(i)∑_k∈ N_n(i,j)(𝔼[| X^0.6(n)_iX^0.6(n)_jX^0.6(n)_k|]+𝔼[| X^0.6(n)_iX^0.6(n)_j|] 𝔼[| X^0.6(n)_k|]),
R_2,1,n= σ_n^-4∑_i ∈ I_n∑_j∈ N_n(i)∑_k∈ N_n(i,j)∑_ℓ∈ N_n(i,j,k)(𝔼[| X^0.6(n)_iX^0.6(n)_jX^0.6(n)_kX^0.6(n)_ℓ|]
+𝔼[| X^0.6(n)_iX^0.6(n)_jX^0.6(n)_k|] 𝔼[| X^0.6(n)_ℓ|]+𝔼[| X^0.6(n)_iX^0.6(n)_j|] 𝔼[| X^0.6(n)_kX^0.6(n)_ℓ|]).
Note that (<ref>) was proven by <cit.> and (<ref>) is a corollary of Theorem 2.1, <cit.>. The bound (<ref>) with an integer p was also proposed as a conjecture in <cit.>. As p grows, the right-hand side of (<ref>) becomes more and more complicated, which suggests the necessity of new assumptions in order to obtain a simplified result. We further remark that the choice of N_n (i_1:q) might not be unique (even if we require that it has the smallest cardinality among all possible index sets that fulfill the assumption [LD-q]).
Therefore, to be able to obtain more interpretable upper bounds for the remainder terms (R_j,ω,n), we impose a slightly stronger assumption on the dependence structure:
[LD*]: We suppose that there exists a graph G_n=(V_n,E_n), with V_n:=I_n being the vertex set and E_n being the edge set, such that for any two disjoint subsets J_1,J_2⊆ I_n if there is no edge between J_1 and J_2, then { X^0.6(n)_j:j∈ J_1} is independent of { X^0.6(n)_j:j∈ J_2}.
Introduced by <cit.>, the graph G_n defined above is known as the dependency graph; it was later adopted in <cit.>. Please refer to <cit.> for a detailed discussion.
If [LD*] is satisfied, for any subset J⊆ I_n we define N_n(J) to be the set of vertices in the neighborhood of J in the graph G_n.
To be precise, this is
N_n(J):=J∪{ i∈ I_n: e(i,j)∈ E_n for some j∈ J },
where e(i,j) denotes an edge between the vertices i and j.
To simplify the notations, we further denote N_n(J) by N_n(i_1:q) if J={ i_1,⋯,i_q} for any 1≤ q≤⌈ p⌉+1.
Then (N_n(i_1:q)) not only satisfies [LD-1] to [LD-(⌈ p⌉+1)], but has the following properties as well:
* N_n(i_1:q)=N_n(i_π(1),⋯,i_π(q)) for any permutation π on { 1,⋯,q };
* i_q∈ N_n(i_1:(q-1))⇔ i_1∈ N_n(i_2:q).
We point out that, by the definition of the dependency graph, even if { X^0.6(n)_j:j∈ J_1} is independent of { X^0.6(n)_j:j∈ J_2}, there can still be edges between the vertex sets J_1 and J_2. In fact, there might not exist a graph G_n with no edges between any two such sets J_1 and J_2 whenever { X^0.6(n)_j:j∈ J_1} is independent of { X^0.6(n)_j:j∈ J_2}, since pairwise independence does not imply joint independence.
The condition [LD*] provides us with a tractable bound on R_k,ω,n, which is applicable in most commonly encountered settings, including m-dependent random fields and U-statistics.
Given M∈ℕ_+ and a real number ω∈ (0,1], suppose that (X^0.6(n)_i)_i∈ I_n satisfies [LD*] and that the cardinality of N_n(i_1:(k+1)) is upper-bounded by M<∞ for any i_1,⋯,i_k+1∈ I_n. Then there exists a constant C_k+ω only depending on k+ω such that
R_k,ω,n ≤ C_k+ω M^k+ω∑_i∈ I_nσ_n^-(k+1+ω)𝔼[| X^0.6(n)_i|^k+1+ω].
We remark that the upper bound on (R_k,ω,n) depends on the moments of the random variables (X^0.6(n)_i) and the maximum size of the dependency neighborhoods. The results of <ref> can be used to propose a more interpretable upper bound for the Wasserstein-p distance.
Suppose that (X^0.6(n)_i) is a triangular array of mean zero random variables satisfying [LD*], and that the cardinality of the index set N_n(i_1:(⌈ p⌉+1)) is upper-bounded by M_n<∞ for any i_1,⋯,i_⌈ p⌉ +1∈ I_n. Furthermore, assume that
M_n^1+ωσ_n^-(ω+2)∑_i∈ I_n𝔼[| X^0.6(n)_i|^ω +2]→ 0, M_n^p+1σ_n^-(p+2)∑_i∈ I_n𝔼[| X^0.6(n)_i|^p+2]→ 0.
Then there is N such that for all n≥ N we have
𝒲_p(ℒ(W_n),𝒩(0,1))
≤ C_p(M_n^1+ωσ_n^-(ω+2)∑_i∈ I_n𝔼[| X^0.6(n)_i|^ω +2] )^1/ω+C_p(M_n^p+1σ_n^-(p+2)∑_i∈ I_n𝔼[| X^0.6(n)_i|^p+2] )^1/p,
for some constant C_p that only depends on p.
We notably remark that if the moments are nicely behaved in the sense that
B_1:=sup_i∈ I_n, n∈ℕ_+√(| I_n|)·X^0.6(n)_i_p+2/σ_n<∞,
and that the size of the dependency neighborhood are universally bounded, i.e.,
B_2:=sup_i_1:(⌈ p⌉ +1)∈ I_n,n∈ℕ_+|N_n(i_1:(⌈ p⌉ +1))|<∞,
then there is a constant K_p that only depends on B_1, B_2 and p≥1 such that for n large enough we have
𝒲_p(ℒ(W_n),𝒩(0,1))≤K_p/√(|I_n|).
The rate of convergence matches the known rate for independent random variables (see <cit.>).
§ RESULTS FOR M-DEPENDENT RANDOM FIELDS
Let d∈ℕ_+ be a positive integer; in this section we study d-dimensional random fields.
A random field (X_i)_i ∈ T on T ⊆ℤ^d is m-dependent if and only if for any subsets U_1, U_2⊆ℤ^d, the random variables (X_i_1)_i_1∈ U_1∩ T and (X_i_2)_i_2∈ U_2∩ T are independent whenever ‖ i_1-i_2‖ >m for all i_1∈ U_1 and i_2∈ U_2.
Here ‖·‖ denotes the maximum norm on ℤ^d, that is ‖z‖=max _1 ≤ j ≤ d| z_j| for z=(z_1, ⋯, z_d).
Now we consider an increasing sequence T_1⊆ T_2⊆⋯ of finite subsets of ℤ^d that satisfies |T_n|→∞ as n→∞. We have the following result as a corollary of <ref>.
Let p∈ℕ_+ and m∈ℕ_+ be positive integers.
Suppose that (X^0.6(n)_i) is a triangular array where each row is an m-dependent random field indexed by a finite subset T_n⊆ℤ^d with |T_n|→∞ as n→∞. Let σ_n^2:=Var(∑_i∈ T_nX_i^0.6(n)) and define W_n:=σ_n^-1∑_i∈ T_nX_i^0.6(n). Further suppose that 𝔼[X^0.6(n)_i]=0 for any i∈ T_n and that the following conditions hold:
* Moment condition: σ_n^-(p+2)∑_i∈ T_n𝔼[| X^0.6(n)_i|^p+2] → 0 as n→∞;
* Non-degeneracy condition: lim sup_nσ_n^-2∑_i∈ T_n𝔼[| X_i^0.6(n)|^2]≤ M<∞ for some M≥ 1.
Then for n large enough, we have
𝒲_p(ℒ(W_n),𝒩(0,1))≤ C_p,dm^(1+ω)d/ωM^p-ω/pωσ_n^-p+2/p (∑_i∈ T_n𝔼[| X^0.6(n)_i|^p+2] )^1/p,
where C_p,d only depends on p and d.
In particular, for a triangular array of m-dependent stationary random fields, suppose that we have sup_n𝔼[| X_i^0.6(n)|^p+2]<∞, and that the non-degeneracy condition lim inf_nσ_n^2/| T_n |>0 holds. Then we have
𝒲_p(ℒ(W_n),𝒩(0,1))=𝒪 (| T_n |^-1/2).
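The following small simulation sketch (ours; the moving-average construction, sample sizes, and repetition counts are arbitrary choices) illustrates this rate for p=1 on a one-dimensional m-dependent sequence: the estimated 𝒲_1 distance, rescaled by √(|T_n|), should remain roughly bounded, although for large |T_n| the Monte Carlo error of order reps^(-1/2) eventually dominates the estimate.

```python
import numpy as np
from scipy.stats import wasserstein_distance  # empirical 1-Wasserstein distance

def standardized_sum(rng, n, m=2, reps=20_000):
    """Monte Carlo sample of W_n for the m-dependent moving average
    X_i = eps_i + ... + eps_{i+m} (i = 1..n) with i.i.d. centred uniform eps.

    sum_i X_i is computed as a weighted sum of the eps_t, where w_t counts
    how many X_i contain eps_t; this avoids materialising the X array.
    """
    eps = rng.uniform(-1.0, 1.0, size=(reps, n + m))
    t = np.arange(1, n + m + 1)
    w = np.minimum(n, t) - np.maximum(1, t - m) + 1
    S = eps @ w
    return S / S.std()  # empirical standardisation stands in for sigma_n

rng = np.random.default_rng(1)
for n in [50, 200, 800]:
    W = standardized_sum(rng, n)
    Z = rng.standard_normal(W.size)
    # If W_1(L(W_n), N(0,1)) = O(n^{-1/2}), this product stays roughly bounded
    # (up to Monte Carlo noise of order reps^{-1/2}).
    print(n, np.sqrt(n) * wasserstein_distance(W, Z))
```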
§ APPLICATION TO U-STATISTICS
Let (X_i)_i=1^n be a sequence of i.i.d. random variables. Fix m∈ℕ_+ such that m≥ 2. Let h:ℝ^m→ℝ be a fixed Borel-measurable function. The Hoeffding U-statistic is defined as
∑_1≤ i_1<⋯<i_m≤ nh(X_i_1,⋯,X_i_m).
Given p≥ 1, suppose that the U-statistic of an i.i.d. sequence (X_i)_i=1^n induced by a symmetric function h:ℝ^m→ℝ satisfies the following conditions
* Mean zero: 𝔼[h(X_1, ⋯, X_m)]=0;
* Moment condition: 𝔼[| h(X_1, ⋯, X_m)|^p+2]<∞;
* Non-degeneracy condition: 𝔼 [g(X_1)^2]>0, where g(x):=𝔼[h(X_1,⋯,X_m)| X_1=x].
If we let
W_n:=1/σ_n∑_1≤ i_1<⋯<i_m≤ nh(X_i_1,⋯,X_i_m),
where
σ_n^2:=Var(∑_1≤ i_1<⋯<i_m≤ n h(X_i_1,⋯,X_i_m)),
the following Wasserstein bound holds:
𝒲_p(ℒ(W_n),𝒩(0,1))=𝒪(n^-1/2).
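As a quick numerical sanity check (ours; the kernel, sample sizes, and Monte Carlo parameters are arbitrary choices, and the exact σ_n is replaced by an empirical standardisation), one can take m=2 and the non-degenerate, mean-zero kernel h(x,y)=(x-y)^2/2-1 for standard normal inputs, and verify that the standardised U-statistic is close to Gaussian in Wasserstein-1 distance.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import wasserstein_distance

def u_stat(sample):
    """Sum over pairs i<j of h(X_i, X_j) with h(x, y) = (x - y)^2 / 2 - 1,
    a mean-zero, non-degenerate kernel when the X_i are standard normal."""
    pair_sq = pdist(sample.reshape(-1, 1), metric="sqeuclidean")  # (X_i - X_j)^2, i<j
    return (0.5 * pair_sq - 1.0).sum()

rng = np.random.default_rng(2)
for n in [50, 100, 200]:
    U = np.array([u_stat(rng.standard_normal(n)) for _ in range(2000)])
    W = (U - U.mean()) / U.std()          # empirical stand-in for U / sigma_n
    Z = rng.standard_normal(W.size)
    print(n, wasserstein_distance(W, Z))  # expected to shrink roughly like n^{-1/2}
```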
§ APPLICATION TO NON-UNIFORM BERRY–ESSEEN BOUNDS
In this section, we show a specific application of our results to non-uniform Berry–Esseen bounds with polynomial decay of any order. Mirroring the classical literature, <cit.> established Berry–Esseen bounds for locally dependent random variables. Notably, their Theorem 2.4 showed that if the random variables (X_i^(n)) satisfy some boundedness condition on the dependency neighborhoods, then there is a constant C>0 such that
sup_t|ℙ(W_n≥ t)-Φ^c(t)|≤ C ∑_i∈ I_n‖ X_i^(n)‖ _3^3 / σ_n^3,
where Φ^c(t):=ℙ (Z≥ t) with Z∼𝒩(0,1).
This extends the classical Berry–Esseen bound to locally dependent random variables, and can potentially be used to construct Kolmogorov–Smirnov tests under local dependence in nonparametric inference. However, one of the drawbacks of this inequality is that it does not depend on t. One would imagine that for large t we could find tighter bounds for |ℙ(W_n≥ t)-Φ^c(t)|. Non-uniform Berry–Esseen bounds establish this. Notably <cit.> (Theorem 2.5) showed that under the above conditions, there exists some universal constant C' such that
|ℙ(W_n≥ t)-Φ^c(t)|≤C'/1+|t|^3∑_i∈ I_n‖ X_i^(n)‖ _3^3 / σ_n^3, ∀ t∈ℝ.
This bound does decrease as |t| increases and does so at a rate of |t|^-3. However, one would expect that this rate could be tightened if additional assumptions were made about the moments of (X_i^(n)). If the random variables admit finite exponential moments, then <cit.> demonstrated that locally dependent random variables admit moderate deviation inequalities. In this section, we show how the 𝒲_p bounds can help us obtain bounds that decrease polynomially fast in t, at a small price in the dependence on |I_n|, and do so without assuming finite exponential moments.
We assume that the conditions of <ref> are satisfied. There is a constant C>0 such that for all β>0 and t>0 satisfying
(√(2π)p)^1/p+1(1-√(2βlog t)/t)t^1-β/p+1≥𝒲_p(ℒ(W_n),𝒩(0,1)),
we have
- C/tφ(t(1-1/p+1)) ≤ℙ(W_n≥ t)-Φ^c(t)/𝒲_p(ℒ(W_n),𝒩(0,1))^1-1/p+1≤C/t^1+β(1-1/p+1),
where φ is the density function of 𝒩(0,1).
We can see from this result that the quantity |ℙ (W_n≥ t)-Φ^c(t) | decays in both t and n. Notably given any p,r≥ 1 assuming that the (p+2)-th moments of X _i's and the dependency neighborhoods are bounded in the sense that
sup_n∈ℕ^+,i_1:(p +1)∈ I_n|N_n(i_1:(p +1))|<∞,
we have |ℙ (W_n≥ t)-Φ^c(t) |=o(t^-r| I_n|^-p/2(p+1)) for t and n large enough.
In particular, for p∈ℕ^+, <ref> implies the uniform Berry–Esseen bound obtained by taking the supremum over t:
sup_t∈ℝ|ℙ(W_n≥ t)- Φ^c(t)|≤ C (∑_i∈ I_n‖ X_i^(n)‖ _3^3 / σ_n^3+∑_i∈ I_n‖ X_i^(n)‖ _p+2^1+2/p / σ_n^1+2/p).
Note that it recovers the uniform Berry–Esseen bound in <cit.> with p=1.
§ OVERVIEW OF THE PROOFS
The key idea of our proofs is to approximate the sum of weakly dependent random variables (X_i^0.6(n))_i∈ I_n by the empirical average of q_n i.i.d. random variables ξ_1^0.6(n),⋯,ξ_q_n^0.6(n), which we denote V_n:=1/√(q_n)∑_i=1^q_nξ_i^0.6(n). More specifically, we aim for the Wasserstein-p distance between them, 𝒲_p(ℒ(W_n),ℒ(V_n)), to be as small as possible. To establish the desired result we then exploit the triangle inequality, which guarantees that
𝒲_p(ℒ(W_n),𝒩(0,1))≤𝒲_p(ℒ(W_n),ℒ(V_n))+𝒲_p(ℒ(V_n),𝒩(0,1)),
and we use previously known bounds for 𝒲_p(ℒ(V_n),𝒩(0,1)) (<ref>).
To be able to show that such random variables ξ_1^0.6(n),⋯,ξ_q_n^0.6(n) exist, we first show (<ref>) that as long as the third and higher-order cumulants of W_n decay then there exist integers (q_n) and i.i.d. random variables such that the first k (k∈ℕ_+) cumulants of
V_n:=1/√(q_n)∑_i=1^q_nξ_i^0.6(n)
match those of W_n for n large enough. The decay of the cumulants can be proven to hold by exploiting the local dependence assumptions (see <ref>).
As a reminder, our goal is to establish that the Wasserstein distance 𝒲_p(ℒ(V_n),ℒ(W_n)) is small. We relate this to the cumulants thanks to the fact that the Wasserstein-p distance can be upper-bounded by integral probability metrics (<ref>) and the well-known Stein equation.
Indeed for i.i.d. random variables (ξ_i^0.6(n))_i=1^q_n, <cit.> showed that the following approximation holds (restated in <ref>)
𝔼[ h(V_n)]-𝒩h=𝔼 [f'_h(V_n)-V_nf_h(V_n)]
= ∑_(r,s_1:r)∈Γ (⌈ p⌉-1)(-1)^r∏_j=1^rκ _s _j+2(V_n)/(s _j+1)!𝒩 [∏_j=1^r(∂ ^s _j+1Θ) h] + Remainders,
where f_h is the solution of the Stein equation (<ref>) and κ_j( · ) denotes the j-th cumulant of a random variable. (All the other notations in (<ref>) will be made clear in <ref>.) We show that we can obtain similar expansions for 𝔼 [f'(W_n)-W_nf(W_n)] (see <ref>):
𝔼[ h(W_n)]-𝒩h=𝔼 [f'_h(W_n)-W_nf_h(W_n)]
= ∑_(r,s_1:r)∈Γ (⌈ p⌉-1)(-1)^r∏_j=1^rκ _s _j+2(W_n)/(s _j+1)!𝒩 [∏_j=1^r(∂ ^s _j+1Θ) h] + Remainders,
As mentioned in the previous paragraph, q_n and ξ_i^0.6(n) can be chosen to be such that κ_j(V_n)=κ_j(W_n) for j=1,⋯,⌈ p⌉+1. Thus, by taking the difference of (<ref>) and (<ref>), we get an upper bound on |𝔼[h(W_n)]-𝔼 [h(V_n)]| for a large class of function h. As shown in <ref>, this allows us to obtain an upper bound of the Wasserstein-p distance between ℒ(W_n) and ℒ(V_n) for a general p≥ 1. The desired result is therefore implied by the triangle inequality of the Wasserstein-p distance
𝒲_p(ℒ(W_n),𝒩(0,1))≤𝒲_p(ℒ(W_n),ℒ(V_n))+𝒲_p(ℒ(V_n),𝒩(0,1)),
and the already known Wasserstein-p bounds for i.i.d. random variables (<ref>).
To be able to show that (<ref>) holds, we develop new techniques to obtain such expansions, which will be carefully elaborated and discussed in <ref>.
§ ADAPTING STEIN'S METHOD FOR WASSERSTEIN-P BOUNDS
In this section, we provide the proofs of <ref> using Stein's method. We first introduce some background definitions and lemmas before showing the proofs of the main theorems.
§.§ Preliminaries and Notations
For any k∈ℕ and real number ω∈ (0,1], the Hölder space 𝒞^k,ω(ℝ) is defined as the class of k-times continuously differentiable functions f: ℝ→ℝ such that the k-times derivative of f is ω-Hölder continuous, i.e.,
| f|_k, ω:=sup _x ≠ y ∈ℝ|∂^k f(x)-∂^k f(y)|/| x-y|^ω<∞,
where ∂ denotes the differential operator. Here ω is called the Hölder exponent and | f |_k,ω is called the Hölder coefficient.
Using the notions of Hölder spaces, we define the Zolotarev's ideal metrics, which are related to the Wasserstein-p distances via <ref>.
Suppose μ and ν are two probability distributions on ℝ. For any p>0 and ω :=p+1-⌈ p⌉∈ (0,1], the Zolotarev-p distance between μ and ν is defined by
𝒵_p(μ, ν):=sup _f∈Λ_p(∫_ℝ f(x) μ(x)-∫_ℝ f(x) ν(x)),
where Λ_p:={ f ∈𝒞^⌈ p⌉-1,ω(ℝ):| f |_⌈ p⌉-1,ω≤ 1 }
We will see in <ref> how the Zolotarev distance can be used to obtain 𝒲_p(·,·) rates. To bound 𝒵_p( · , · ) we rely on Stein's method, which was introduced by <cit.> in order to prove the central limit theorem for dependent data. It has been widely adapted to all kinds of normal approximation problems. See <cit.>
for a detailed exposition.
§.§ Stein equation and its solutions
Let Z∼𝒩(0,1) be a standard normal random variable. For any measurable function h:ℝ→ℝ, if h(Z)∈ℒ^1(ℝ), we write 𝒩 h:=𝔼 [h(Z)]. Thus, h(Z)∈ℒ^1(ℝ) if and only if 𝒩|h|<∞. Moreover, we define f_h( · ) by
f_h(w) :=∫_-∞^w e^(w^2-t^2)/2(h(t)-𝒩 h) t
=-∫_w^∞ e^(w^2-t^2)/2(h(t)-𝒩 h) t .
We remark that f_h(·) is a solution of the Stein equation meaning that it satisfies
f_h'(w)-w f_h(w)=h(w)-𝒩 h, ∀ w∈ℝ.
Bounding |𝔼[f'_h(W_n)-W_nf_h(W_n)]| therefore allows us to control |𝔼[h(W_n)]-𝒩h|. If we do this for a large class of functions h, we can upper-bound the Zolotarev distance between ℒ(W_n) and the normal distribution. This is the key idea behind Stein's method. For notational convenience, we denote by Θ the operator that maps h to f _h for any h such that 𝒩| h |< ∞, i.e.,
Θ h=f _h.
Note that Θ h( · ) is a function. If h∈Λ_p, then we see in <ref> that Θ h can be bounded.
For any p>0, let h ∈Λ_p be as defined in <ref>. Then Θ h=f_h in (<ref>) is a solution to (<ref>). Moreover, Θ h∈𝒞^⌈ p⌉-1,ω(ℝ)∩𝒞^⌈ p⌉,ω(ℝ) and the Hölder coefficients |Θ h |_⌈ p⌉-1,ω and |Θ h |_⌈ p⌉,ω are bounded by some constant only depending on p.
§.§ Key Lemmas
First, we present an important result that states that the Wasserstein-p distance can be controlled in terms of the Zolotarev distance.
For any p≥ 1, there exists a positive constant C_p such that for any pair of distributions μ,ν on ℝ with finite absolute moments of order p, we have
𝒲_p(μ, ν) ≤ C_p(𝒵_p(μ, ν))^1/p.
In particular, 𝒲_1(μ,ν)=𝒵_1(μ,ν) by Kantorovich–Rubinstein duality.
In the next two lemmas, we present already-known results for the normal approximation of sums of independent random variables. Firstly, <ref> provides an expansion for the difference between 𝔼[h(S_n)], where S_n is an empirical average, and 𝒩h. This lemma will allow us to relate the Zolotarev distance to the cumulants.
For any p>0, let h ∈Λ_p and S_n:=∑_i=1^n X_i where {X_1, ⋯, X_n} are independent, with 𝔼 [X_i]=0 and 𝔼 [S_n^2]=1. Then it follows that
𝔼[ h(S_n)]-𝒩h
=
∑_(r,s_1:r)∈Γ (⌈ p⌉-1)(-1)^r∏_j=1^rκ _s _j+2(S_n)/(s _j+1)!𝒩 [∏_j=1^r(∂ ^s _j+1Θ) h] +𝒪( ∑_i=1^n𝔼 [| X_i| ^p+2]),
where the first sum is over Γ (⌈ p⌉ -1):={ r, s _1:r∈ℕ_+:∑_j=1^rs _j≤⌈ p⌉-1}.
Note that there is a slight abuse of notation in (<ref>). The last ∏ indicates the composition of the operators in the parentheses rather than the product.
Secondly <ref> gives an upper bound on the Wasserstein distance between the distribution of this empirical average, S_n, and the standard normal distribution. This lemma will guarantee that if an approximation of W_n by a sum of independent random variables V_n can be obtained then V_n is approximately normally distributed.
For any p≥ 1, let S_n:=∑_i=1^n X_i where {X_1, ⋯, X_n} are independent and satisfy that 𝔼 [X_i]=0 and 𝔼 [S_n^2]=1. Then it follows that
𝒲_p(ℒ(S_n), 𝒩(0,1)) ≤ C_p(∑_i=1^n𝔼[| X_i| ^p+2])^1/p,
where C_p continuously depends on p.
We now introduce two new lemmas crucial in the proof of <ref>. They will be proven in <ref> and <ref>. The first lemma generalizes <ref> to the dependent setting.
Suppose that (X^0.6(n)_i)_i∈ I_n is a triangular array of random variables with dependency neighborhoods satisfying the local dependence conditions [LD-1] to [LD-(⌈ p⌉+1)]. Let W_n:=∑_i∈ I_nX^0.6(n)_i with 𝔼[X_i^0.6(n)]=0, 𝔼 [W_n^2]=1.
Then for any p>0 and h ∈Λ_p, we have
𝔼 [h(W_n)]-𝒩h
= ∑_(r,s_1:r)∈Γ (⌈ p⌉ -1)(-1)^r∏_j=1^rκ _s _j+2(W_n)/(s _j+1)!𝒩 [∏_j=1^r(∂ ^s _j+1Θ) h]
+𝒪(∑_j=1^⌈ p⌉-1R _j,1,n^p/j+∑_j=1^⌈ p⌉R _j,ω,n ^p/(j+ω -1)),
where the first sum is over Γ (⌈ p⌉ -1):={ r, s _1:r∈ℕ_+:∑_j=1^rs _j≤⌈ p⌉-1}.
We can see that the expansions in <ref> look quite similar to one another, the only differences being the dependence structures of (X^0.6(n)_i) and the remainder terms in the expansions. This similarity inspires the proof of <ref>. To illustrate this, imagine that there existed some i.i.d. random variables (ξ_i^0.6(n))_i=1^q_n and a large sample size q_n such that the first ⌈ p⌉+1 cumulants of V_n:=q_n^-1/2∑_i=1^q_nξ_i^0.6(n) match those of W_n; then the expansions in (<ref>) and (<ref>) would be almost identical, and the difference between them would be controlled by the remainder terms (R_j,1,n) and (R_j,ω,n). If those remainder terms are small, we could then exploit the asymptotic normality of V_n to obtain the asymptotic normality of W_n. We show that such a sequence exists when |I_n| is large.
Let p≥ 1 and k:=⌈ p⌉. If p>1, let (u_j^0.6(n))_j=1^k-1 be a sequence of real numbers. Suppose that for any j=1,⋯, k-1, we have u_j^0.6(n)→ 0 as n→∞. Then there exist constants C_p, C_p' only depending on p and a positive value N>0 (that might depend on (u_j^0.6(n)) ) such that for any n >N, there exists q_n∈ℕ_+ and a random variable ξ^0.6(n) such that
* 𝔼 [ξ^0.6(n)]=0, 𝔼 [(ξ^0.6(n))^2]=1;
* κ_j+2(ξ^0.6(n))=q_n^j/2u_j^0.6(n) for j=1,⋯, k-1;
* Either max_1≤ j≤ k-1|κ_j+2(ξ^0.6(n)) |=0 or max_1≤ j≤ k-1|κ_j+2(ξ^0.6(n)) |≥ C_p>0;
* 𝔼 [|ξ^0.6(n)|^p+2]≤ C_p'.
Furthermore, q_n can be chosen such that q_n→∞ as n→∞.
We note that the condition that u_j^0.6(n)→ 0 as n→∞ is crucial. <ref> is an asymptotic statement in the sense that for a given n≤ N, q_n and ξ^0.6(n) might not exist.
Intuitively, <ref> and <ref> determine the cumulants of ξ^0.6(n) and relate them to the cumulants of W_n. <ref> requires that the maximum
max_1≤ j≤ k-1|κ_j+2(ξ^0.6(n)) |
is either 0 or bounded away from 0 as n grows. And <ref> indicates that the (p+2)-th absolute moment is upper-bounded.
§.§ Proof of Theorem <ref>
The proof of <ref> works in three stages:
* Using <ref> we find a sequence of i.i.d. random variables (ξ^0.6(n)_ℓ)_ℓ and a sample size q_n such that the first k+1 cumulants of W_n match the first k+1 cumulants of V_n:=q_n^-1/2∑_i=1^q_nξ^0.6(n)_i;
* Using <ref> we remark that we can bound the Wasserstein distance between the distributions of W_n and an empirical average, V_n, of i.i.d. observations in terms of |𝔼 [h(W_n)]-𝔼 [h(V_n)] | for a large class of functions h. We do so by exploiting <ref>;
* We remark that <ref> provides us with the bound on the Wasserstein distance between the distribution of V_n and the standard normal.
Then <ref> follows from the triangle inequality of the Wasserstein metric:
𝒲_p(W_n,𝒩(0,1))≤𝒲_p(ℒ(W_n),ℒ(V_n))+𝒲_p(ℒ(V_n),𝒩(0,1)).
Without loss of generality, we assume σ_n= 1 and denote W_n:=∑_i∈ I_nX^0.6(n)_i.
Firstly, we remark that according to <ref>, for all 1≤ j≤ k-1 we have |κ_j+2(W_n) |≲ R_j,1,n. Moreover, by assumption we have R_j,1,n→ 0 as n→∞. Therefore, |κ_j+2(W_n) |→ 0 as n→∞ and the assumptions of <ref> hold, which implies that there exist constants C_p and C'_p
such that for any n large enough there are positive integers (q_n) and random variables (ξ^0.6(n)) such that
* 𝔼 [ξ^0.6(n)]=0, 𝔼 [(ξ^0.6(n))^2]=1;
* κ_j+2(ξ^0.6(n))=q_n^j/2κ_j+2 (W_n) for j=1,⋯, k-1;
* Either max_1≤ j≤ k-1|κ_j+2(ξ^0.6(n)) |=0 or max_1≤ j≤ k-1|κ_j+2(ξ^0.6(n)) |≥ C_p>0;
* 𝔼 [|ξ^0.6(n)|^p+2]≤ C_p'.
Furthermore, we know that (q_n) satisfies that q_n→∞ as n→∞.
As presented in the proof sketch, we will use this to bound the distance between the distribution of W_n and that of an empirical average of at least q_n i.i.d. random variables. Note that when max_1≤ j≤ k-1|κ_j+2(ξ^0.6(n))|>0 we can obtain (by combining <ref>) a lower bound on q_n, which will be crucial in our arguments as it will allow us to control the distance between this empirical average and its normal limit. When κ_3(W_n)=⋯=κ_k+1(W_n)=0, such a lower bound on q_n cannot be obtained in the same way. Thus, we enlarge q_n if necessary: we redefine q_n:=| I_n |^2(p+1)/p∨ q_n when κ_3(W_n)=⋯=κ_k+1(W_n)=0, and keep q_n unchanged otherwise. We remark that the (possibly enlarged) sequence (q_n) still satisfies q_n→∞ as n→∞.
Let ξ_1^0.6(n),⋯,ξ_q_n^0.6(n) be i.i.d. copies of ξ^0.6(n). Define V_n:=q_n^-1/2∑_i=1^q_nξ_i^0.6(n).
By construction, for any j∈ℕ_+ such that j≤ k-1=⌈ p⌉ -1 we have
κ_j+2(V_n)(*)=q_n^-(j+2)/2∑_i=1^q_nκ_j+2(ξ_i^0.6(n))=q_n^-j/2κ_j+2(ξ^0.6(n))=κ_j+2(W_n).
Here in (*) we have used the facts that cumulants are additive over independent random variables and homogeneous, i.e., κ_j+2(aX)=a^j+2κ_j+2(X) for any a∈ℝ; both follow directly from the definition of cumulants. For more details on this, please refer to <cit.>.
Thus, by <ref> and <ref>, for any h∈Λ_p we have
|𝔼 [h(W_n)]-𝔼 [h(V_n)] |≲∑_j=1^k-1R _j,1,n^p/j+∑_j=1^kR _j,ω ,n^p/(j+ω -1)+q_n^-(p+2)/2∑_i=1^q_n𝔼[|ξ_i^0.6(n)|^p+2].
To be able to have this upper bound not depend on ξ_i^0.6(n) we will upper-bound
q_n^-(p+2)/2∑_i=1^q_n𝔼 [|ξ_i^0.6(n)|^p+2]
in terms of the remainders (R_j,1,n) and (R_j,ω,n). To do so we use the lower bounds on (q_n) implied by the specific form we chose.
If max_1≤ j≤ k-1|κ_j+2(W_n)|>0, <ref> implies that
C_p≤max_1≤ j≤ k-1|κ_j+2(ξ^0.6(n)) |(*)=max_1≤ j≤ k-1{q_n^j/2|κ_j+2(W_n)|}(**)≲max_1≤ j≤ k-1{q_n^j/2R_j,1,n}.
where to get (*) we used <ref> and to get (**) we used <ref>.
Thus, the following holds
q_n^-p/2=(q_n^-j_0/2)^p/j_0≲ R_j_0,1,n^p/j_0≤∑_j=1^k-1 R_j,1,n^p/j,
where j_0 is the integer satisfying that |κ_j_0+2(ξ^0.6(n)) |=max_1≤ j≤ k-1|κ_j+2(ξ^0.6(n)) |.
On the other hand, if κ_j+2(W_n)=0 for all 1≤ j≤ k-1, then by definitions we have q_n≥| I_n |^2(p+1)/p, and therefore, q_n^-p/2≤| I_n |^-(p+1).
Moreover, by Hölder's inequality we know that the following holds
∑_i∈ I_n𝔼[| X^0.6(n)_i|^2]≤| I_n|^p/(p+2)(∑_i∈ I_n𝔼[| X^0.6(n)_i|^p+2])^2/(p+2).
and
(∑_i∈ I_nX^0.6(n)_i)^2≤| I_n |∑_i∈ I_n| X^0.6(n)_i|^2.
Since 𝔼[(∑_i∈ I_nX^0.6(n)_i)^2]=σ_n^2=1, we have
q_n^-p/2≤ | I_n |^-(p+1)(𝔼[(∑_i∈ IX^0.6(n)_i)^2])^(p+2)/2
(*)≤ | I_n |^-p/2(∑_i∈ I𝔼[| X^0.6(n)_i|^2])^(p+2)/2
(**)≤ ∑_i∈ I_n𝔼[| X^0.6(n)_i|^p+2]≤ R_k,ω,n ,
where to obtain (*) we used (<ref>) and to obtain (**) we used (<ref>).
Thus, using <ref> and the fact that ξ_1^0.6(n),⋯,ξ_q_n^0.6(n) are i.i.d., we obtain
q_n^-(p+2)/2∑_i=1^q_n𝔼[|ξ_i^0.6(n)|^p+2]≤ C_p'q_n^-p/2≲∑_j=1^k-1R _j,1,n^p/j+∑_j=1^kR _j,ω ,n^p/(j+ω -1).
Therefore, by combining this with (<ref>) we obtain that there is a constant K>0 that does not depend on h such that
|𝔼 [h(W_n)]-𝔼 [h(V_n)] |≤ K( ∑_j=1^k-1R _j,1,n^p/j+∑_j=1^kR _j,ω ,n^p/(j+ω -1)).
By taking supremum over h∈Λ_p and by <ref> we obtain that
𝒲_p(ℒ(W_n),ℒ(V_n))≲sup_h∈Λ_p|𝔼[h(W_n)]-𝔼 [h(V_n)] |^1/p≲∑_j=1^k-1R _j,1,n^1/j+∑_j=1^kR _j,ω ,n^1/(j+ω -1).
Moreover, by combining <ref> and (<ref>) we have
𝒲_p(ℒ(V_n),𝒩(0,1))≲(q_n^-(p+2)/2∑_i=1^q_n𝔼[|ξ_i^0.6(n)|^p+2])^1/p≲∑_j=1^k-1R_j,1,n^1/j+∑_j=1^kR _j,ω,n ^1/(j+ω -1).
Therefore, as the Wasserstein distance 𝒲_p satisfies the triangle inequality we conclude that
𝒲_p(ℒ(W_n),𝒩(0,1))
≤ 𝒲_p(ℒ(W_n),ℒ(V_n))+𝒲_p(ℒ(V_n),𝒩(0,1))
≲ ∑_j=1^k-1R _j,1,n^1/j+∑_j=1^kR _j,ω ,n^1/(j+ω -1).
§ PROOF OF LEMMA <REF>
For ease of notation, when there is no ambiguity we will drop the dependence on n in our notation and write W, N(·), σ, X_i, I and R_j,ω for respectively W_n, N_n(·), σ_n, X^0.6(n)_i, I_n and R_j,ω,n.
§.§ Example and Roadmap
Given the form of expression in <ref>, it is natural to consider performing induction on ⌈ p⌉. In fact, <cit.> used a similar induction idea to prove <ref>, the analogous result to <ref> for independent variables. As <cit.> suggested, the key of each inductive step is the following expansion of 𝔼 [Wf(W)].
Denote by κ_j(W) the j-th cumulant of W. Given k∈ℕ_+ and real number ω∈ (0,1], for any f∈𝒞^k,ω(ℝ), we have
𝔼 [Wf(W)]=∑_j=1^kκ_j+1(W)/j !𝔼 [∂^j f(W)]+𝒪(| f |_k,ωR_k,ω).
The case k=ω=1 is a well-known result in the literature of Stein's method (for example see <cit.>). The case k=2, ω=1 was first proven by <cit.>, and they also conjectured that it was true for any positive integer k with ω=1. Inspired by <cit.>'s method, we confirm that this conjecture is correct by proving <ref>.
To help better understand the intuition behind our proof for the general settings, let's first consider the simplest case with k=ω=1. Given a positive integer m, suppose that (X_i)_i=1^n is an m-dependent random sequence (the special case of d=1 in <ref>). We let W:=∑_i=1^nX_i and require that 𝔼 [X_1]=0 and 𝔼 [W^2]=1. For simplicity, we further assume f∈𝒞^2(ℝ)∩𝒞^1,1(ℝ) meaning that f” is a continuous and bounded function.
For any indexes i,j∈ [n] (by convention [n]:={ 1,2,⋯,n }), we write
N(i)={ℓ∈ [n]: |ℓ-i |≤ m }, N(i,j):={ℓ∈ [n]:|ℓ-i |≤ m or |ℓ-j |≤ m }.
Denote W_i,m:=∑_j∉ N(i)X_j and W_i,j,m:=∑_ℓ∉ N(i,j)X_ℓ. The idea is that for each i, we split W into two parts, W_i,m and W-W_i,m. The former is independent of X_i while the latter is the sum of X_j's in the neighborhood of X_i and will converge to 0 when n grows to ∞. Thus, we perform the Taylor expansion for f(W) around W_i,m.
We have
𝔼[ Wf (W)- f'(W) ]
= ∑_i=1^n𝔼[X_i(f (W)-f (W_i,m) - f'(W_i,m)(W-W_i,m))]
+ ∑_i=1^n𝔼 [X_if (W_i,m)]
+ ∑_i=1^n𝔼[X_i(W-W_i,m)f'(W_i,m)]-𝔼 [f'(W)]
= ∑_i=1^n𝔼[X_i(f (W)-f (W_i,m) - f'(W_i,m)(W-W_i,m))]
+ ∑_i=1^n 𝔼 [X_i] 𝔼 [f (W_i,m)]
+ ∑_i=1^n∑_j∈ N(i)𝔼[X_iX_jf'(W_i,m)]-𝔼 [f'(W)]
= ∑_i=1^n𝔼[X_i(f (W)-f (W_i,m) - f'(W_i,m)(W-W_i,m))]
+ (∑_i=1^n∑_j∈ N(i)𝔼[X_iX_jf'(W_i,m)]-𝔼 [f'(W)] )=:E_1+E_2.
By assumption, ‖ f”‖ is bounded and we have
| E_1|= |∑_i=1^n𝔼[X_i(f (W)-f (W_i,m)
- f'(W_i,m)(W-W_i,m))]|
≤ ‖ f”‖/2∑_i=1^n𝔼[| X_i(W-W_i,m)^2|]
= ‖ f”‖/2∑_i=1^n𝔼[| X_i| (
∑_j∈ N(i)X_j)^2]
= ‖ f”‖/2∑_i=1^n∑_j∈ N(i)∑_ℓ∈ N(i)𝔼[| X_iX_jX_ℓ|]≤‖ f”‖/2∑_i=1^n∑_j∈ N(i)∑_ℓ∈ N(i,j)𝔼[| X_iX_jX_ℓ|].
Now we bound E_2.
E_2 = ∑_i=1^n∑_j∈ N(i)𝔼[X_iX_jf'(W_i,m)]-𝔼 [f'(W)]
= ∑_i=1^n∑_j∈ N(i)𝔼[X_iX_j(f'(W_i,m)-f'(W_i,j,m))]
+∑_i=1^n∑_j∈ N(i)𝔼[X_iX_jf'(W_i,j,m)] -𝔼 [f'(W)]
(*)= ∑_i=1^n∑_j∈ N(i)𝔼[X_iX_j(f'(W_i,m)-f'(W_i,j,m))]
+∑_i=1^n∑_j∈ N(i)𝔼[X_iX_j] 𝔼 [f'(W_i,j,m)] -𝔼 [f'(W)]
(**) = ∑_i=1^n∑_j∈ N(i)𝔼[X_iX_j(f'(W_i,m)-f'(W_i,j,m))]
+ ∑_i=1^n∑_j∈ N(i)𝔼[X_iX_j] 𝔼[f'(W_i,j,m)-f'(W)]
= (t_1)+(t_2),
where to obtain (*) we have used the fact that W_i,j,m is independent of (X_i,X_j), and to obtain (**) we have used that 𝔼[W^2]=1.
The first term (t_1), can be upper-bounded by the mean value theorem as
|∑_i=1^n∑_j∈ N(i)𝔼[X_iX_j(f'(W_i,m)-f'(W_i,j,m))]|
≤ ∑_i=1^n∑_j∈ N(i)‖ f”‖ 𝔼[| X_iX_j(W_i,m-W_i,j,m)|]
≤ ‖ f”‖∑_i=1^n∑_j∈ N(i)∑_ℓ∈ N(i,j)𝔼[| X_iX_jX_ℓ|].
By another application of the mean-value theorem, we remark that the second term (t_2), is controlled by
|∑_i=1^n∑_j∈ N(i)𝔼[X_iX_j] 𝔼[f'(W_i,j,m)-f'(W)]|
≤ ∑_i=1^n∑_j∈ N(i)‖ f”‖ 𝔼[| X_iX_j|] 𝔼[| W_i,j,m-W|]
≤ ‖ f”‖∑_i=1^n∑_j∈ N(i)∑_ℓ∈ N(i,j)𝔼[| X_iX_j|] 𝔼 [| X_ℓ|].
Thus,
|𝔼[Wf(W)-f'(W)] |
≤ ‖ f”‖∑_i=1^n∑_j∈ N(i)∑_ℓ∈ N(i,j)(3/2𝔼 [| X_iX_jX_ℓ|]+𝔼 [| X_iX_j|] 𝔼 [| X_ℓ|])
≤ 3‖ f”‖/2R_1,1.
This gives us a bound that matches with (<ref>).
For k≥ 2, we would like to carry out the expansion in the same spirit. However, it would be too tedious to write out every sum in the process. Thus, in <ref>, we introduce the terms called 𝒮-sums, 𝒯-sums, and ℛ-sums, which serve as useful tools in tracking different quantities when we approximate 𝔼 [f'(W)-Wf(W)] with respect to locally dependent random variables. Instead of performing the expansion to get (<ref>) for 𝔼 [Wf(W)], we first do it for any 𝒯-sum and use induction to prove a more general result for the existence of such expansions (see <ref>). In the general situation of 𝒯-sums, the cumulants are replaced by other constants that only depend on the specific 𝒯-sum in consideration and the joint distribution of (X_i)_i∈ I. Finally, we prove that in particular, those constants for 𝔼 [Wf(W)] are precisely the cumulants of W. This will be done by direct calculation when f is a polynomial and then extended to more general f's by applying <ref>.
§.§ Notations and Definitions
As in <ref>, given an integer k≥ 1, suppose (X_i)_i∈ I is a class of mean zero random variables indexed by I that satisfy the local dependence assumptions [LD-1] to [LD-k]. Without loss of generality, we always assume that σ^2:=Var(∑_i∈ IX_i)=1. We denote W:=σ^-1∑_i∈ IX_i=∑_i∈ IX_i.
§.§.§ 𝒮-sums
Fix k∈ℕ_+ and let t_1,⋯,t_k∈ℤ be integers such that | t_j|≤ j-1 for any j∈ [k]; in particular, t_1=0. Let z=|{ j:t_j>0 }| be the number of indexes j for which t_j is strictly positive. If z≥ 1, we write { j:t_j>0 }={ q_1,⋯,q_z}. Without loss of generality, we suppose that the sequence 2≤ q_1<⋯<q_z≤ k is increasing. We further let q_0:=1 and q_z+1:=k+1. We define an order-k 𝒮-sum with respect to the sequence t_1:k as
𝒮 [t_1,⋯,t_k]
:= ∑_i_1∈ N_1∑_i_2∈ N_2⋯∑_i_k∈ N_k [q_1-q_0,⋯,q_z+1-q_z]▹(X_i_1,⋯,X_i_k)
= ∑_i_1∈ N_1∑_i_2∈ N_2⋯∑_i_k∈ N_k 𝔼[X_i_q_0⋯ X_i_q_1-1] 𝔼[X_i_q_1⋯ X_i_q_2-1] ⋯ 𝔼[X_i_q_z⋯ X_i_q_z+1-1],
where N_1:=I, and for j∈ℕ_+ such that j≥ 2, we let
N_j:= N (i_1:| t_j|)=N(i_1,⋯,i_| t_j|) if t_j≠ 0
∅ if t_j=0
.
Note that N_j depends on t_j and the sequence i_1:(j-1). For ease of notation, we do not explicitly write out the dependence on i_1:(j-1) when there is no ambiguity. Further note that if t_j=0 for some j≥ 2, then N_j=∅ and therefore the 𝒮-sum 𝒮[t_1,⋯,t_k]=0.
By definition all 𝒮-sums are deterministic quantities, whose values only depend on t_1:k and the joint distribution of (X_i)_i∈ I. We also remark that the signs of the t_j's determine how an 𝒮-sum factorizes into different expectations. Notably, if z=0 (meaning that all the t_j with j≥ 2 are negative) then the 𝒮-sum consists of a single expectation,
𝒮 [t_1,⋯,t_k]=∑_i_1∈ N_1∑_i_2∈ N_2⋯∑_i_k∈ N_k𝔼[X_i_1⋯ X_i_k].
Since by assumption, X_i's are centered random variables, the 𝒮-sum vanishes if q_j+1=q_j+1 for some 0≤ j≤ z:
𝒮 [t_1,⋯,t_k]
=∑_i_1∈ N_1∑_i_2∈ N_2⋯∑_i_k∈ N_k 𝔼[X_i_q_0⋯ X_i_q_1-1] ·
𝔼[X_i_q_1⋯ X_i_q_2-1] ⋯𝔼[X_i_q_j] ⋯ 𝔼[X_i_q_z⋯ X_i_q_z+1-1]=0.
Furthermore, the absolute values of the t_j's influence the ranges of the running indexes. The bigger | t_j| is, the larger the set N_j is. The largest possible index set for i_j is N(i_1:(j-1)), which corresponds to the case | t_j|=j-1. On the other hand, if t_j=0, the sum is over an empty set and vanishes.
In particular, if we require that the 𝒮-sum is not always zero, then t_2 is always taken to be -1 and i_2∈ N(i_1).
§.§.§ 𝒯-sums
For any function f∈𝒞^k-1(ℝ) and integer s∈ℕ such that s≤ k, the order-k 𝒯-sum, with respect to the sequence t_1:k, is defined as
𝒯_f,s [t_1,⋯,t_k]
:=
∑_i_1∈ N_1∑_i_2∈ N_2⋯∑_i_k∈ N_k [q_1-q_0,⋯,q_z+1-q_z]▹(X_i_1,⋯,X_i_k-1,X_i_k∂^k-1f(W_i.[k-s]))
= { ∑_i_1∈ N_1∑_i_2∈ N_2⋯∑_i_k∈ N_k𝔼[X_i_1⋯ X_i_k∂^k-1 f(W_i.[k-s])] if z=0
∑_i_1∈ N_1∑_i_2∈ N_2⋯∑_i_k∈ N_k𝔼[X_i_q_0⋯ X_i_q_1-1] ⋯ 𝔼[X_i_q_(z-1)⋯ X_i_q_z-1] ·
𝔼[X_i_q_z⋯ X_i_k∂^k-1f(W_i.[k-s])]
if z≥ 1,
.
where N_1:k,z,q_0:(z+1) are defined as in the definition of 𝒮-sums and W_i.[j] is defined as
W_i.[j]:=
W if j=0
∑_i∈ I\ N(i_1:j)X_i if 1≤ j≤ k
.
Note that the bigger s is, the larger the set I\ N(i_1:(k-s)) is, which means that W_i.[k-s] is the sum of more X_i's. Again we remark that the values of 𝒯-sums can depend on the values of s and the sequences t_1:k. In particular, if s=0, then we have W_i.[k-s]=W_i.[k]=∑_i∈ I \ N (i_1:k)X_i, which implies that W_i.[k-s] is independent of X_i_1,⋯,X_i_k by the assumption [LD-k]. Thus, we have
𝔼[X_i_q_z⋯ X_i_k∂^k-1f(W_i.[k-s])]=𝔼[X_i_q_z⋯ X_i_k] 𝔼[∂^k-1f(W_i.[k-s])].
By definitions (<ref>) and (<ref>) we get
𝒯_f,0 [t_1,⋯,t_k]=𝒮[t_1,⋯,t_k] 𝔼 [∂^k-1f(W_i.[k])].
This equation will be useful in our discussion later. In general if z>0 then
𝒯_f,s [t_1,⋯,t_k]
= 𝒮[t_1,⋯,t_q_z-1] ∑_i_q_z∈ N_q_z∑_i_q_z+1∈ N_q_z+1⋯∑_i_k∈ N_k
𝔼[X_i_q_z⋯ X_i_k∂^k-1f(W_i.[k-s])].
§.§.§ ℛ-sums
For k≥ 2 and given a real number ω∈ (0, 1], we further define an order-k ℛ-sum with respect to the sequence t_1:k as
ℛ_ω [t_1,⋯, t_k]:=
∑_i_1∈ N_1∑_i_2∈ N_2⋯∑_i_k-1∈ N_k-1[q_1-q_0,⋯,q_z+1-q_z]▹(| X_i_1|,⋯,| X_i_k-1|,(∑_i_k∈ N_k| X_i_k|)^ω)
= { ∑_i_1∈ N_1∑_i_2∈ N_2⋯∑_i_k-1∈ N_k-1𝔼[| X_i_1|⋯| X_i_k-1| (∑_i_k∈ N_k| X_i_k|)^ω] if z=0
∑_i_1∈ N_1∑_i_2∈ N_2⋯∑_i_k-1∈ N_k-1𝔼[| X_i_q_0⋯ X_i_q_1-1|] ⋯ 𝔼[| X_i_q_(z-1)⋯ X_i_q_z-1|] ·
𝔼[| X_i_q_z⋯ X_i_k-1| (∑_i_k∈ N_k| X_i_k|)^ω]
if z≥ 1.
.
We again remark that if z≥ 1 then
ℛ_ω [t_1,⋯, t_k]
= ℛ_1[t_1,⋯,t_q_z-1] ∑_i_q_z∈ N_q_z∑_i_q_z+1∈ N_q_z+1⋯∑_i_k-1∈ N_k-1
𝔼[| X_i_q_z⋯ X_i_k-1| (∑_i_k∈ N_k| X_i_k|)^ω].
We call ω the exponent of the ℛ-sum. If ω=1, the only difference between an ℛ-sum and an 𝒮-sum is that the X_i_j's in (<ref>) are replaced by | X_i_j|'s in (<ref>). Thus, an 𝒮-sum is always upper-bounded in absolute value by the corresponding ℛ-sum with exponent 1, i.e.,
|𝒮 [t_1,⋯,t_k]|≤ℛ_1 [t_1,⋯,t_k].
Another important observation is that we can compare the values of ℛ-sums with respect to two different sequences t_1,⋯,t_k and t_1',⋯,t_k' in certain situations. Specifically, if for every j∈ [k] the entries t_j and t'_j are of the same sign and |t_j|≤ |t_j'|, then
ℛ_ω[t_1,⋯,t_k]≤ℛ_ω[t_1',⋯,t_k'].
In fact, the sequences (t_j) and (t_j') having the same sign indicates that { j:t_j>0 }={ j:t_j'>0 }.
Thus, we can write
ℛ_ω [t_1',⋯, t_k']
= ∑_i_1∈ N_1'∑_i_2∈ N_2'⋯∑_i_k-1∈ N_k-1'[q_1-q_0,⋯,q_z+1-q_z]▹
(| X_i_1|,⋯,| X_i_k-1|,(∑_i_k∈ N_k'| X_i_k|)^ω),
where we note that N_1'=I=N_1 and for j=2,⋯,k we have
N_j'=N(i_1,⋯,i_| t_j' |)⊇ N(i_1,⋯,i_| t_j|)=N_j.
By comparing (<ref>) with (<ref>), we obtain (<ref>).
§.§.§ Re-expression of the remainder terms R_k,ω
Using the notion of ℛ-sums, we rewrite the R_k,ω in <ref> as
R_k,ω:= ∑_(ℓ,η_1:ℓ)∈ C^*(k+2)∑_i_1∈ N_1'∑_i_2∈ N_2'⋯∑_i_k+1∈ N_k+1'
[η_1,⋯,η_ℓ]▹(| X_i_1|,⋯,| X_i_k+1|,(∑_i_k+2∈ N_k+2'| X_i_k+2|)^ω)
= ∑_t_1:(k+2)∈ℳ_1,k+2 ℛ_ω[t_1,t_2,⋯,t_k+2].
where N_1':=I and N_j':=N(i_1:(j-1)) for j≥ 2. C^*(k+2) and ℳ_1,k+2 are given by
C^*(k+2)={ℓ,η_1:ℓ∈ℕ_+: η_j≥ 2 ∀ j∈ [ℓ-1], ∑_j=1^ℓη_j=k+2},
and
ℳ_1,k+2:={t_1:(k+2): t_j+1=± j & t_j∧ t_j+1< 0 ∀ 1≤ j≤ k+1}.
Note that t_j∧ t_j+1<0 for any j∈ [k+1] means that there is at least one -1 in any two consecutive signs, which corresponds to the requirement that η_j≥ 2 for j∈ [ℓ-1] in (<ref>).
§.§ Proofs of Proposition <ref> and Lemma <ref>
In this section, we carry out the local expansion technique and prove <ref>.
Firstly, we establish the following lemma, which will be crucial in the inductive step of proving the main theorem.
Fix k∈ℕ_+. For any s∈ [k]∪{ 0 } and f∈𝒞^k,ω(ℝ), we have
|𝒯_f,s[t_1,⋯,t_k+1]-𝒮[t_1,⋯,t_k+1] 𝔼 [∂^kf(W)]|
≤ | f |_k,ω(𝕀(t_k+1<0)·ℛ_ω[t_1,⋯,t_k+1,k+1]
+ 𝕀(s≥ 1)·ℛ_ω[t_1,⋯,t_k+1,-(k+1)]).
Given any ℓ∈ [k] and s∈ [ℓ]∪{ 0 }, we further have
|𝒯_f,s[t_1,⋯,t_ℓ]-𝒮[t_1,⋯,t_ℓ] 𝔼 [∂^ℓ-1f(W)]
-𝕀(s≥ 1)·∑_j=1^k-ℓ+1∑_h=0^j(-1)^h1/h !(j-h)!𝒯_f,j[t_1,⋯,t_ℓ,s - ℓ,⋯,s - ℓ_h times,-ℓ,⋯,-ℓ_(j-h) times]
+𝕀(t_ℓ<0)·∑_j=1^k-ℓ+11/j!𝒯_f,j[t_1,⋯,t_ℓ,ℓ,-ℓ,⋯,-ℓ_(j-1) times] |
≤ | f |_k,ω/(k-ℓ+1)!(𝕀(t_ℓ<0)·ℛ_ω[t_1,⋯,t_ℓ,ℓ,-ℓ,⋯,-ℓ_(k-ℓ+1) times]
+ 𝕀(s≥ 1)·ℛ_ω[t_1,⋯,t_ℓ,-ℓ,⋯,-ℓ_(k-ℓ+2) times]).
Firstly, we remark that the definition of Hölder continuity implies that
|∂^kf(y)-∂^kf (x)|≤| f |_k,ω| y-x |^ω,
where ω is the Hölder exponent of f and | f |_k,ω is the Hölder constant (see <ref>).
Let z=|{ j∈ [k+1]: t_j>0 }| be the number of positive indexes (t_j). If z≥ 1, we write { j∈ [k+1]:t_j>0 }={ q_1,⋯,q_z}. Without loss of generality, we suppose that the sequence 2≤ q_1<⋯<q_z≤ k+1 is increasing. We further let q_0:=1 and q_z+1:=k+2.
Applying (<ref>) we have
|𝔼[X_i_q_z⋯ X_i_k+1∂^kf(W_i.[k+1-s])]-𝔼[X_i_q_z⋯ X_i_k+1∂^kf(W_i.[k+1])]|
≤ | f |_k,ω𝔼[| X_i_q_z⋯ X_i_k+1|·| W_i.[k+1-s]-W_i.[k+1] |^ω]
≤ | f |_k,ω𝔼[| X_i_q_z⋯ X_i_k+1|·|∑_i∈ N(i_1:(k+1))\ N (i_1:(k+1-s)) X_i|^ω]
≤ | f |_k,ω𝔼[| X_i_q_z⋯ X_i_k+1|·|∑_i∈ N(i_1:(k+1)) X_i|^ω],
where in the last inequality we have used the fact that N(i_1:(k+1))\ N (i_1:(k+1-s))⊆ N(i_1:(k+1)).
If z=0, this directly implies that
|𝒯_f,s[t_1,⋯,t_k+1]-𝒯_f,0[t_1,⋯,t_k+1] |≤𝕀(s≥ 1)·| f |_k,ωℛ_ω[t_1,⋯,t_k+1,-(k+1)].
If z≥ 1, by definition (<ref>) we have for s≥ 1
|𝒯_f,s[t_1,⋯,t_k+1]-𝒯_f,0[t_1,⋯,t_k+1] |
= |∑_i_1∈ N_1∑_i_2∈ N_2⋯∑_i_k+1∈ N_k+1𝔼[X_i_q_0⋯ X_i_q_1-1] ⋯ 𝔼[X_i_q_z-1⋯ X_i_q_z-1]·
𝔼[X_i_q_z⋯ X_i_k+1(∂^kf(W_i.[k+1-s])-∂^kf(W_i.[k+1]))] |
≤ ∑_i_1∈ N_1∑_i_2∈ N_2⋯∑_i_k+1∈ N_k+1𝔼[| X_i_q_0⋯ X_i_q_1-1|] ⋯ 𝔼[| X_i_q_z-1⋯ X_i_q_z-1|]·
|𝔼[ X_i_q_z⋯ X_i_k+1∂^kf(W_i.[k+1-s])-∂^kf(W_i.[k+1])]|
(<ref>)≤ | f |_k,ω∑_i_1∈ N_1∑_i_2∈ N_2⋯∑_i_k+1∈ N_k+1𝔼[| X_i_q_0⋯ X_i_q_1-1|] ⋯ 𝔼[| X_i_q_z-1⋯ X_i_q_z-1|] ·
𝔼[| X_i_q_z⋯ X_i_k+1|·|∑_i∈ N(i_1:(k+1)) X_i|^ω]
= | f |_k,ωℛ_ω[t_1,⋯,t_k+1,-(k+1)].
Here the last equality is due to the definition (<ref>).
Thus, (<ref>) is proven for both z=0 and z≥ 1. Next we show that
|𝒮[t_1,⋯,t_k+1] (𝔼 [∂^kf(W)]-𝔼 [∂^kf(W_i.[k+1])])|
≤ 𝕀(t_k+1<0)·| f |_k,ωℛ_ω[t_1,⋯,t_k+1,k+1].
In this goal, we first note that if t_k+1≥ 0, by definition (<ref>) we know that q_z=k+1 and therefore, according to (<ref>) we know that
𝒮[t_1,⋯,t_k+1]=0,
and so (<ref>) holds. Otherwise, we note that we have
|𝔼[∂^kf(W)]-𝔼[∂^kf(W_i.[k+1])]|≤| f |_k,ω𝔼[ | W-W_i.[k+1] |^ω]
=| f |_k,ω𝔼[ |∑_i∈ N(i_1:(k+1)) X_i|^ω].
This implies that
|𝒮[t_1,⋯,t_k+1] (𝔼 [∂^kf(W)]-𝔼 [∂^kf(W_i.[k+1])])|
≤ |𝒮[t_1,⋯,t_k+1] |·|𝔼[∂^kf(W)]-𝔼[∂^kf(W_i.[k+1])]|
(*)≤ | f |_k,ωℛ_1[t_1,⋯,t_k+1] 𝔼[ |∑_i∈ N(i_1:(k+1)) X_i|^ω]
= | f |_k,ω∑_i_1∈ N_1∑_i_2∈ N_2⋯∑_i_k+1∈ N_k+1[q_1-q_0,⋯,q_z+1-q_z]▹
(| X_i_1|,⋯,| X_i_k+1|)·𝔼[ |∑_i∈ N(i_1:(k+1)) X_i|^ω]
= | f |_k,ω∑_i_1∈ N_1∑_i_2∈ N_2⋯∑_i_k+1∈ N_k+1[q_1-q_0,⋯,q_z+1-q_z,1]▹
(| X_i_1|,⋯,| X_i_k+1|,(∑_i_k+2∈ N(i_1:(k+1))| X_i_k+2|)^ω)
= | f |_k,ωℛ_ω[t_1,⋯,t_k+1,k+1],
where (*) is due to (<ref>) and (<ref>).
Taking the difference of (<ref>) and (<ref>), we obtain (<ref>) by applying the equation (<ref>).
For ℓ≤ k, we apply the Taylor expansion with remainders taking the integral form and obtain that
∂^ℓ-1 f(y)-∂^ℓ-1 f(x)
= ∑_j=1^k-ℓ1/j!(y-x)^j∂^ℓ-1+jf(x)
+1/(k-ℓ+1)!(y-x)^k-ℓ+1∫_0^1(k-ℓ+1)v^k-ℓ∂^k f(v x+(1-v)y) v
(*) = ∑_j=1^k-ℓ+11/j!(y-x)^j∂^ℓ-1+jf(x)
+1/(k-ℓ+1)!(y-x)^k-ℓ+1∫_0^1(k-ℓ+1) v ^k-ℓ(∂^k f( v x+(1- v )y)-∂^k f(x)) v,
where to obtain (*) we added and subtracted (y-x)^k-ℓ+1/(k-ℓ+1)!∂^k f(x). Moreover, using the fact that ∂^k f(·) is assumed to be Hölder continuous we obtain that
|∂^k f( v x+(1- v )y)-∂^k f(x)|≤| f |_k,ω(1-v)^ω | y-x |^ω≤| f |_k,ω| y-x |^ω.
Therefore, as ∫_0^1(k-ℓ+1)v^k-ℓ v =1, by combining (<ref>) with (<ref>) we get that
|∂^ℓ-1 f(y)-∂^ℓ-1 f(x)- ∑_j=1^k-ℓ+11/j!(y-x)^j∂^ℓ-1+jf(x)|≤| f |_k,ω/(k-ℓ+1)!| y-x |^k-ℓ+1+ω.
We prove that the following inequality holds:
|𝒯_f,s[t_1,⋯,t_ℓ]-𝒯_f,0[t_1,⋯,t_ℓ]
-𝕀(s≥ 1)·∑_j=1^k-ℓ+1∑_h=0^j(-1)^h1/h !(j-h)!𝒯_f,j[t_1,⋯,t_ℓ,s - ℓ,⋯,s - ℓ_h times,-ℓ,⋯,-ℓ_(j-h) times] |
≤ 𝕀(s≥ 1)·| f |_k,ω/(k-ℓ+1)!ℛ_ω[t_1,⋯,t_ℓ,-ℓ,⋯,-ℓ_(k-ℓ+2) times],
First, let's establish (<ref>). Let z=|{ j∈ [ℓ]: t_j>0 }|. If z≥ 1, we write { j∈ [ℓ]:t_j>0 }={ q_1,⋯,q_z}. Without loss of generality, we suppose that the sequence 2≤ q_1<⋯<q_z≤ℓ is increasing. We further let q_0:=1 and q_z+1:=ℓ+1.
Applying (<ref>) we have
|𝔼[X_i_q_z⋯ X_i_ℓ∂^ℓ-1f(W_i.[ℓ-s])]-𝔼[X_i_q_z⋯ X_i_ℓ∂^ℓ-1f(W_i.[ℓ])]
-∑_j=1^k-ℓ+11/j!𝔼[ X_i_q_z⋯ X_i_ℓ (W_i.[ℓ-s]-W_i.[ℓ] )^j∂^ℓ-1+jf(W_i.[ℓ])]
|
≤ | f |_k,ω/(k-ℓ+1)!𝔼[| X_i_q_z⋯ X_i_ℓ|·| W_i.[ℓ-s]-W_i.[ℓ] |^k-ℓ+1+ω].
For convenience let
E_1:=∑_i_q_z∈ N_q_z⋯∑_i_ℓ∈ N_ℓ𝔼[X_i_q_z⋯ X_i_ℓ∂^ℓ-1f(W_i.[ℓ-s])]-𝔼[X_i_q_z⋯ X_i_ℓ∂^ℓ-1f(W_i.[ℓ])],
E_2,j:=∑_i_q_z∈ N_q_z⋯∑_i_ℓ∈ N_ℓ𝔼[ X_i_q_z⋯ X_i_ℓ (W_i.[ℓ-s]-W_i.[ℓ] )^j∂^ℓ-1+jf(W_i.[ℓ])],
E_3:=∑_i_q_z∈ N_q_z⋯∑_i_ℓ∈ N_ℓ𝔼[| X_i_q_z⋯ X_i_ℓ|·| W_i.[ℓ-s]-W_i.[ℓ] |^k-ℓ+1+ω].
Then (<ref>) reduces to
| E_1-∑_j=1^k-ℓ+1E_2,j/j! |≤| f |_k,ωE_3/(k-ℓ+1)!.
Then we observe that by definition of W_i.[·] we have
𝔼[ X_i_q_z⋯ X_i_ℓ (W_i.[ℓ-s]-W_i.[ℓ] )^j∂^ℓ-1+jf(W_i.[ℓ])]
= 𝔼[ X_i_q_z⋯ X_i_ℓ(∑_i∈ N(i_1:ℓ)X_i-∑_i∈ N(i_1:ℓ-s)X_i)^j∂^ℓ-1+jf(W_i.[ℓ])]
= ∑_h=0^j(-1)^hjh 𝔼[ X_i_q_z⋯ X_i_ℓ(∑_i∈ N(i_1:ℓ-s)X_i)^h(∑_i∈ N(i_1:ℓ)X_i)^j-h∂^ℓ-1+jf(W_i.[ℓ])],
and that
𝔼[| X_i_q_z⋯ X_i_ℓ|·| W_i.[ℓ-s]-W_i.[ℓ] |^k-ℓ+1+ω]
≤ 𝔼[| X_i_q_z⋯ X_i_ℓ|·|∑_i∈ N(i_1:ℓ)∖ N (i_1:(ℓ-s)) X_i|^k-ℓ+1+ω]
≤ 𝔼[| X_i_q_z⋯ X_i_ℓ|·|∑_i∈ N(i_1:ℓ) X_i|^k-ℓ+1+ω]
≤ 𝔼[| X_i_q_z⋯ X_i_ℓ|·( ∑_i∈ N(i_1:ℓ)| X_i|)^k-ℓ+1·|∑_i∈ N(i_1:ℓ) X_i|^ω].
If z=0, we take the sum of (<ref>) or (<ref>) over i_q_z∈ N_q_z,⋯,i_ℓ∈ N_ℓ. By definition (<ref>) and (<ref>) we have
E_1=𝒯_f,s[t_1,⋯,t_ℓ]-𝒯_f,0[t_1,⋯,t_ℓ],
E_2,j=∑_h=0^j(-1)^hjh 𝒯_f,j[t_1,⋯,t_ℓ,s - ℓ,⋯,s - ℓ_h times,-ℓ,⋯,-ℓ_(j-h) times],
E_3≤ℛ_ω[t_1,⋯,t_ℓ,-ℓ,⋯,-ℓ_(k-ℓ+2) times].
Combining (<ref>) and (<ref>), we have for s≥ 1
|𝒯_f,s[t_1,⋯,t_ℓ]-𝒯_f,0[t_1,⋯,t_ℓ]
-∑_j=1^k-ℓ+1∑_h=0^j(-1)^h1/h !(j-h)!𝒯_f,j[t_1,⋯,t_ℓ,s - ℓ,⋯,s - ℓ_h times,-ℓ,⋯,-ℓ_(j-h) times] |
(<ref>)= | E_1-∑_j=1^k-ℓ+1E_2,j/j! |(<ref>)≤| f |_k,ωE_3/(k-ℓ+1)!
(<ref>)≤ | f |_k,ω/(k-ℓ+1)!ℛ_ω[t_1,⋯,t_ℓ,-ℓ,⋯,-ℓ_(k-ℓ+2) times].
Thus, (<ref>) holds for z=0.
If z≥ 1, similar to (<ref>) we have
𝒮[t_1,⋯,t_q_z-1]
· E_1=𝒯_f,s[t_1,⋯,t_ℓ]-𝒯_f,0[t_1,⋯,t_ℓ],
𝒮[t_1,⋯,t_q_z-1]
· E_2,j
=∑_h=0^j(-1)^hjh 𝒯_f,j[t_1,⋯,t_ℓ,s - ℓ,⋯,s - ℓ_h times,-ℓ,⋯,-ℓ_(j-h) times],
ℛ_1[t_1,⋯,t_q_z-1]
· E_3≤ℛ_ω[t_1,⋯,t_ℓ,-ℓ,⋯,-ℓ_(k-ℓ+2) times].
Combining (<ref>) and (<ref>) we get for s≥ 1
|𝒯_f,s[t_1,⋯,t_ℓ]-𝒯_f,0[t_1,⋯,t_ℓ]
-∑_j=1^k-ℓ+1∑_h=0^j(-1)^h1/h !(j-h)!𝒯_f,j[t_1,⋯,t_ℓ,s - ℓ,⋯,s - ℓ_h times,-ℓ,⋯,-ℓ_(j-h) times] |
(<ref>)= |𝒮[t_1,⋯,t_q_z-1] |·| E_1-∑_j=1^k-ℓ+1E_2,j/j! |(<ref>)≤ℛ_1[t_1,⋯,t_q_z-1]·| E_1-∑_j=1^k-ℓ+1E_2,j/j! |
(<ref>)≤ ℛ_1[t_1,⋯,t_q_z-1]·| f |_k,ωE_3/(k-ℓ+1)!
(<ref>)≤| f |_k,ω/(k-ℓ+1)!ℛ_ω[t_1,⋯,t_ℓ,-ℓ,⋯,-ℓ_(k-ℓ+2) times].
Thus, we have shown (<ref>) for both z=0 and z≥ 1.
Next we prove that the following inequality holds:
|𝒮[t_1,⋯,t_ℓ](𝔼 [∂^ℓ-1f(W)]-𝔼 [∂^ℓ-1f(W_i.[ℓ])])
-𝕀(t_ℓ<0)·∑_j=1^k-ℓ+11/j!𝒯_f,j[t_1,⋯,t_ℓ,ℓ,-ℓ,⋯,-ℓ_(j-1) times] |
≤ 𝕀(t_ℓ<0)·| f |_k,ω/(k-ℓ+1)!ℛ_ω[t_1,⋯,t_ℓ,ℓ,-ℓ,⋯,-ℓ_(k-ℓ+1) times].
For (<ref>), we apply (<ref>) again and get that
|𝔼[∂^ℓ-1f(W)]-𝔼[∂^ℓ-1f(W_i.[ℓ])]
-∑_j=1^k-ℓ+11/j!𝔼[ (W-W_i.[ℓ] )^j∂^ℓ-1+jf(W_i.[ℓ])]
|≤| f |_k,ω/(k-ℓ+1)!𝔼[ | W-W_i.[ℓ] |^k-ℓ+1+ω].
For convenience let
E_4:=𝔼 [∂^ℓ-1f(W)]-𝔼 [∂^ℓ-1f(W_i.[ℓ])],
E_5,j:=𝔼[ (W-W_i.[ℓ] )^j∂^ℓ-1+jf(W_i.[ℓ])],
E_6:=𝔼[ | W-W_i.[ℓ] |^k-ℓ+1+ω].
Then (<ref>) reduces to
| E_4-∑_j=1^k-ℓ+1E_5,j/j! |≤| f |_k,ωE_6/(k-ℓ+1)!.
We first note that if t_ℓ≥ 0 then 𝒮[t_1,⋯,t_ℓ]=0 therefore, (<ref>) holds.
Moreover, similar to (<ref>), we have for t_ℓ<0
𝒮[t_1,⋯,t_ℓ]· E_4=𝒮[t_1,⋯,t_ℓ](𝔼 [∂^ℓ-1f(W)]-𝔼 [∂^ℓ-1f(W_i.[ℓ])]),
𝒮[t_1,⋯,t_ℓ]· E_5,j=𝒯_f,j[t_1,⋯,t_ℓ,ℓ,-ℓ,⋯,-ℓ_(j-1) times],
ℛ_1[t_1,⋯,t_ℓ]· E_6≤ℛ_ω[t_1,⋯,t_ℓ,ℓ,-ℓ,⋯,-ℓ_(k-ℓ+1) times].
Combining (<ref>) and (<ref>), we have
|𝒮[t_1,⋯,t_ℓ](𝔼 [∂^ℓ-1f(W)]-𝔼 [∂^ℓ-1f(W_i.[ℓ])])
-∑_j=1^k-ℓ+11/j!𝒯_f,j[t_1,⋯,t_ℓ,ℓ,-ℓ,⋯,-ℓ_(j-1) times] |
(<ref>)=|𝒮[t_1,⋯,t_ℓ] |·| E_4-∑_j=1^k-ℓ+1E_5,j/j! |(<ref>)≤ℛ_1[t_1,⋯,t_ℓ]·| E_4-∑_j=1^k-ℓ+1E_5,j/j! |
(<ref>)≤ℛ_1[t_1,⋯,t_ℓ]·| f |_k,ωE_6/(k-ℓ+1)!
(<ref>)≤| f |_k,ω/(k-ℓ+1)!ℛ_ω[t_1,⋯,t_ℓ,ℓ,-ℓ,⋯,-ℓ_(k-ℓ+1) times].
Therefore, we have established both (<ref>) and (<ref>). Taking their difference and applying (<ref>), we obtain (<ref>).
Equipped with the tools in <ref>, we approximate any 𝒯-sum 𝒯_f,s[t_1,⋯,t_ℓ] by order-j 𝒮-sums (j=ℓ,⋯,k+1) with remainder terms being order-(k+2) ℛ-sums.
Fix k∈ℕ_+. For any ℓ∈ [k+1], s∈ [ℓ]∪{ 0 }, and t_1,⋯,t_ℓ∈ℤ such that | t_j|≤ j-1 for any j∈ [ℓ], there exist Q_ℓ,⋯,Q_k+1 (which depend on s and t_1:ℓ and the joint distribution of (X_i)_i∈ I) and a constant C_k,ℓ (C_k,ℓ≤ 4^k-ℓ+1) such that for any f∈𝒞^k,ω(ℝ), we have
|𝒯_f,s [t_1,⋯, t_ℓ]-∑_j=ℓ^k+1Q_j𝔼 [∂^j-1 f(W)]|≤ C_k,ℓ| f |_k,ωR_k,ω.
Note that by (<ref>) R_k,ω is given as
R_k,ω= ∑_t_1:(k+2)∈ℳ_1,k+2 ℛ_ω[t_1,t_2,⋯,t_k+2],
where
ℳ_1,k+2:={t_1:(k+2): t_j+1=± j & t_j∧ t_j+1< 0 ∀ 1≤ j≤ k+1}.
If there exists an integer 2≤ j≤ℓ such that t_j=0, or there exists j∈ [ℓ-1] such that t_j and t_j+1 are both positive, then 𝒯_f,s[t_1,⋯,t_ℓ]=0 by definition and the theorem already holds by setting Q_ℓ=⋯=Q_k+1=0.
Otherwise, we claim:
Let 𝒯_f,s [t_1,⋯,t_ℓ] be a 𝒯-sum. For any j=ℓ+1,⋯,k+1, let
ℰ_ℓ+1,j:={ t_(ℓ+1):j: | t_h+1|≤ h & t_h∧ t_h+1< 0 ∀ℓ≤ h≤ j-1}.
For all j=ℓ+1,⋯, k+1, ν∈ [j]∪{ 0 }, and (t_ℓ+1,⋯,t_j)∈ℰ_ℓ+1,j, there are coefficients a_j,ν,t_(ℓ+1):j (additionally depending on s) such that if we set Q_ℓ:=𝒮[t_1,⋯,t_ℓ] and write, for j=ℓ+1,⋯,k+1,
Q_j=∑_t_(ℓ+1):j∈ℰ_ℓ+1,j∑_ν=0^j a_j,ν,t_(ℓ+1):j𝒯_f,ν[t_1,⋯,t_ℓ,t_ℓ+1,⋯,t_j],
then the following holds
|𝒯_f,s [t_1,⋯, t_ℓ]-∑_j=ℓ^k+1Q_j𝔼 [∂^j-1 f(W)]|
≤ 4^k-ℓ+1| f |_k,ω∑_t_(ℓ+1):(k+2)∈ℳ_ℓ+1,k+2ℛ_ω[t_1,⋯,t_ℓ, ⋯ ,t_k+2],
where
ℳ_ℓ+1,k+2:={t_(ℓ+1):(k+2): t_j+1=± j & t_j∧ t_j+1< 0 ∀ℓ≤ j≤ k+1}.
We establish this claim by performing induction on ℓ with ℓ taking the value k+1,k,⋯, 1 in turn.
For ℓ=k+1, by (<ref>) we have
|𝒯_f,s[t_1,⋯,t_k+1]-𝒮[t_1,⋯,t_k+1] 𝔼 [∂^kf(W)]|
≤ | f |_k,ω(𝕀(t_k+1<0)·ℛ_ω[t_1,⋯,t_k+1,k+1]
+ 𝕀(s≥ 1)·ℛ_ω[t_1,⋯,t_k+1,-(k+1)]).
If there exists j∈ [k] such that t_j and t_j+1 are both positive, then 𝒯_f,s[t_1,⋯,t_k+1]=0 and the claim holds with all coefficients a_j,ν,t_(ℓ+1):(k+1) equal to 0. Otherwise, for every j∈ [k] at least one of t_j and t_j+1 is negative. If t_k+1<0, then we have
𝕀(t_k+1<0)·ℛ_ω[t_1,⋯,t_k+1,k+1]
+ 𝕀(s≥ 1)·ℛ_ω[t_1,⋯,t_k+1,-(k+1)]
= ℛ_ω[t_1,⋯,t_k+1,k+1]
+ 𝕀(s≥ 1)·ℛ_ω[t_1,⋯,t_k+1,-(k+1)]
(*)≤ ℛ_ω[0, sgn(t_2),2sgn(t_3),⋯,k·sgn(t_k+1),k+1]
+ℛ_ω[0, sgn(t_2),2sgn(t_3),⋯,k·sgn(t_k+1),-(k+1)]
≤ ∑_t_k+2=± (k+1):
t_k+1∧ t_k+2< 0 ℛ_ω[t_1,⋯,t_k+1,t_k+2],
where (*) is a consequence of (<ref>) and sgn(x)=0,1, or -1 denotes the sign of a real number x.
Further note that if t_k+1>0, then 𝕀(t_k+1<0)=0 and we get
𝕀(t_k+1<0)·ℛ_ω[t_1,⋯,t_k+1,k+1]
+ 𝕀(s≥ 1)·ℛ_ω[t_1,⋯,t_k+1,-(k+1)]
= 𝕀(s≥ 1)·ℛ_ω[t_1,⋯,t_k+1,-(k+1)]
(*)≤ ℛ_ω[0, sgn(t_2),2sgn(t_3),⋯,k·sgn(t_k+1),-(k+1)]
≤ ∑_t_k+2=± (k+1):
t_k+1∧ t_k+2< 0 ℛ_ω[t_1,⋯,t_k+1, t_k+2],
where (*) is a consequence of (<ref>). Thus, we have shown that
|𝒯_f,s[t_1,⋯,t_k+1]-𝒮[t_1,⋯,t_k+1] 𝔼 [∂^kf(W)]|
≤ | f |_k,ω(𝕀(t_k+1<0)·ℛ_ω[t_1,⋯,t_k+1,k+1]
+ 𝕀(s≥ 1)·ℛ_ω[t_1,⋯,t_k+1,-(k+1)])
≤ | f |_k,ω∑_t_k+2=± (k+1):
t_k+1∧ t_k+2< 0 ℛ_ω[t_1,⋯,t_k+1,t_k+2].
Now suppose the claim holds for ℓ+1 and consider the case of ℓ. By (<ref>) we have
|𝒯_f,s[t_1,⋯,t_ℓ]-𝒮[t_1,⋯,t_ℓ] 𝔼 [∂^ℓ-1f(W)]
-𝕀(s≥ 1)·∑_j=1^k-ℓ+1∑_h=0^j(-1)^h1/h !(j-h)!𝒯_f,j[t_1,⋯,t_ℓ,s - ℓ,⋯,s - ℓ_h times,-ℓ,⋯,-ℓ_(j-h) times]
+𝕀(t_ℓ<0)∑_j=1^k-ℓ+11/j!𝒯_f,j[t_1,⋯,t_ℓ,ℓ,-ℓ,⋯,-ℓ_(j-1) times] |
≤ | f |_k,ω/(k-ℓ+1)!(𝕀(t_ℓ<0)·ℛ_ω[t_1,⋯,t_ℓ,ℓ,-ℓ,⋯,-ℓ_(k-ℓ+1) times]
+ 𝕀(s≥ 1)·ℛ_ω[t_1,⋯,t_ℓ,-ℓ,⋯,-ℓ_(k-ℓ+2) times]).
Note that 𝒯_f,j[t_1,⋯,t_ℓ,s - ℓ,⋯,s - ℓ_h times,-ℓ,⋯,-ℓ_(j-h) times] and 𝒯_f,j[t_1,⋯,t_ℓ,ℓ,-ℓ,⋯,-ℓ_(j-1) times] are 𝒯-sums of order at least ℓ+j (j≥ 1). Therefore, we can apply the inductive hypothesis to them. Specifically, the remainder term (ℛ-sums) in the expansion of
𝒯_f,j[t_1,⋯,t_ℓ,s - ℓ,⋯,s - ℓ_h times,-ℓ,⋯,-ℓ_(j-h) times]
is given by
4^k-ℓ-j+1| f |_k,ω∑_t_(ℓ+j+1):(k+2)∈ℳ_ℓ+j+1,k+2ℛ_ω[t_1,⋯,t_ℓ, s - ℓ,⋯,s - ℓ_h times,-ℓ,⋯,-ℓ_(j-h) times,t_ℓ+j+1,⋯ ,t_k+2]
(<ref>)≤ 4^k-ℓ-j+1| f |_k,ω∑_t_(ℓ+j+1):(k+2)∈ℳ_ℓ+j+1,k+2ℛ_ω[t_1,⋯,t_ℓ, -ℓ,-(ℓ+1),⋯,-(ℓ+j-1),t_ℓ+j+1,⋯ ,t_k+2]
≤ 4^k-ℓ-j+1| f |_k,ω∑_t_(ℓ+2):(k+2)∈ℳ_ℓ+2,k+2ℛ_ω[t_1,⋯,t_ℓ, -ℓ,t_ℓ+2,⋯ ,t_k+2]=:4^k-ℓ-j+1| f |_k,ω· U_1.
Similarly, the remainder term in the expansion of 𝒯_f,j[t_1,⋯,t_ℓ,ℓ,-ℓ,⋯,-ℓ_(j-1) times] is given by
4^k-ℓ-j+1| f |_k,ω∑_t_(ℓ+j+1):(k+2)∈ℳ_ℓ+j+1,k+2ℛ_ω[t_1,⋯,t_ℓ, ℓ,-ℓ,⋯,-ℓ_(j-1) times,t_ℓ+j+1,⋯ ,t_k+2]
≤ 4^k-ℓ-j+1| f |_k,ω∑_t_(ℓ+2):(k+2)∈ℳ_ℓ+2,k+2ℛ_ω[t_1,⋯,t_ℓ, ℓ,t_ℓ+2,⋯ ,t_k+2]=:4^k-ℓ-j+1| f |_k,ω· U_2.
Note that U_1+𝕀(t_ℓ<0)· U_2 is controlled by
U_1+𝕀(t_ℓ<0)· U_2
= ∑_t_(ℓ+2):(k+2)∈ℳ_ℓ+2,k+2 ℛ_ω[t_1,⋯,t_ℓ, -ℓ,t_ℓ+2,⋯ ,t_k+2]
+𝕀(t_ℓ<0)·∑_t_(ℓ+2):(k+2)∈ℳ_ℓ+2,k+2 ℛ_ω[t_1,⋯,t_ℓ, ℓ,t_ℓ+2,⋯ ,t_k+2]
≤ ∑_t_(ℓ+1):(k+2)∈ℳ_ℓ+1,k+2 ℛ_ω[t_1,⋯,t_ℓ,t_ℓ+1,⋯ ,t_k+2].
As we mentioned above, by inductive hypothesis we have that there exist coefficients Q_j satisfying (<ref>) such that
|𝒯_f,s [t_1,⋯, t_ℓ]-∑_j=ℓ^k+1Q_j 𝔼 [∂^j-1 f(W)]|
≤ ∑_j=1^k-ℓ+1∑_h=0^j1/h!(j-h)!4^k-ℓ-j+1| f |_k,ω· U_1+𝕀(t_ℓ<0)∑_j=1^k-ℓ+11/j!4^k-ℓ-j+1| f |_k,ω· U_2
+| f |_k,ω/(k-ℓ+1)!(𝕀(t_ℓ<0)·ℛ_ω[t_1,⋯,t_ℓ,ℓ,-ℓ,⋯,-ℓ_(k-ℓ+1) times]
+ ℛ_ω[t_1,⋯,t_ℓ,-ℓ,⋯,-ℓ_(k-ℓ+2) times]).
Noting that ∑_h=0^j1/(h!(j-h)!)=2^j/j!, we have
|𝒯_f,s [t_1,⋯, t_ℓ]-∑_j=ℓ^k+1Q_j 𝔼 [∂^j-1 f(W)]|
≤ ∑_j=1^k-ℓ+12^j· 4^k-ℓ-j+1/j!| f |_k,ω·( U_1+𝕀(t_ℓ<0)· U_2)
+| f |_k,ω/(k-ℓ+1)!(𝕀(t_ℓ<0)· U_2
+ U_1)
≤ (1+∑_j=1^k-ℓ+1 2^2k-2ℓ-j+2)| f |_k,ω( U_1+𝕀(t_ℓ<0)· U_2)
≤ 4^k-ℓ+1| f |_k,ω( U_1+𝕀(t_ℓ<0)· U_2)
(<ref>)≤ 4^k-ℓ+1| f |_k,ω∑_t_(ℓ+1):(k+2)∈ℳ_ℓ+1,k+2 ℛ_ω[t_1,⋯,t_ℓ,t_ℓ+1,⋯ ,t_k+2].
Thus, we have shown (<ref>).
Finally, we note that for all t_1:ℓ∈ℳ_1,ℓ, by (<ref>) we have
∑_t_(ℓ+1):(k+1)∈ℳ_ℓ+1,k+2 ℛ_ω[t_1,⋯,t_ℓ, ⋯ ,t_k+2]
≤ ∑_t_(ℓ+1):(k+2)∈ℳ_ℓ+1,k+2 ℛ_ω[0,sgn(t_2),2sgn(t_3),⋯,(ℓ-1)sgn(t_ℓ), t_ℓ+1⋯ ,t_k+2]
≤ ∑_t_1:(k+2)∈ℳ_1,k+2 ℛ_ω[t_1,t_2,⋯,t_k+2]=R_k,ω.
We remark that if f is a polynomial of degree at most k, then the Hölder constant | f |_k,ω=0 and hence the remainder C_k,ℓ| f |_k,ωR_k,ω vanishes.
For any 𝒯-sum, we have established the existence of expansions in <ref>. Next we show the uniqueness of such expansions.
Under the same settings as <ref>, suppose that there exist two sets of coefficients Q_ℓ,⋯,Q_k+1 and Q_ℓ',⋯,Q_k+1' only depending on s and t_1:ℓ, and the joint distribution of (X_i)_i∈ I such that for any polynomial f of degree at most k, we have
𝒯_f,s [t_1,⋯, t_ℓ]= Q_ℓ𝔼 [∂^ℓ-1 f(W)]+⋯+Q_k+1𝔼 [∂^k f(W)]
= Q_ℓ'𝔼 [∂^ℓ-1 f(W)]+⋯+Q_k+1'𝔼 [∂^k f(W)],
Then Q_j= Q_j' for any j=ℓ,⋯, k+1.
We prove this lemma by contradiction.
Let j be the smallest index such that Q_j≠ Q_j'. Since the coefficients Q_ℓ,⋯,Q_k+1 do not depend on f, we can choose f(x)=c x^j-1 with c≠ 0, so that ∂^j-1 f(x)= c(j-1)!≠ 0. Moreover, Q_j+1𝔼 [∂^jf(W)]=⋯=Q_k+1𝔼 [∂^k f(W)]=0 and the terms of index smaller than j coincide by the choice of j, so the two expansions yield c(j-1)!Q_j=c(j-1)!Q_j', i.e., Q_j=Q_j'. This is a contradiction. Therefore, Q_j= Q_j' for any j=ℓ,⋯, k+1.
Applying <ref> with ℓ=1, and s=t_1=t_2=0, we have for any f∈𝒞^k,ω(ℝ),
𝔼 [Wf(W)]=∑_i_1∈ I𝔼 [X_i_1f(W)]=𝒯_f,0[0]=∑_j=1^k+1Q_j𝔼 [∂^j-1 f(W)]+𝒪(| f |_k,ωR_k,ω),
for some Q_1,⋯, Q_k+1 that only depend on the distribution of (X_i)_i∈ I and where R_k,ω is defined in (<ref>). Suppose that f is a polynomial of degree at most k, then we observe that f∈𝒞^k,ω(ℝ) and | f |_k,ω=0. Thus, this implies that
𝒯_f,0[0]=𝔼 [Wf(W)]=∑_j=1^k+1Q_j𝔼 [∂^j-1 f(W)].
On the other hand, for any random variable, the moments (μ_j)_j≥ 0 and cumulants (κ_j)_j≥ 0, provided that they exist, are connected through the following relations <cit.>:
μ_n=∑_j=1^n\binom{n-1}{j-1}κ_jμ_n-j.
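As a quick numerical illustration (not part of the argument; the helper name below is ours), this recursion generates the moments from a given list of cumulants in a few lines of Python; for κ_2=1 and all other cumulants equal to zero it recovers the standard normal moments μ_2m=(2m-1)!!.

from math import comb

def moments_from_cumulants(kappa, n_max):
    # mu_n = sum_{j=1}^n C(n-1, j-1) * kappa_j * mu_{n-j}, with mu_0 = 1
    mu = [1]
    for n in range(1, n_max + 1):
        mu.append(sum(comb(n - 1, j - 1) * kappa.get(j, 0) * mu[n - j]
                      for j in range(1, n + 1)))
    return mu

# Standard normal: kappa_2 = 1, all other cumulants vanish.
assert moments_from_cumulants({2: 1}, 8) == [1, 0, 1, 0, 3, 0, 15, 0, 105]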
Using this we will obtain an expansion similar to (<ref>) in terms of the cumulants (κ_j). To this end, we first remark that if f(x)= x^j where j≤ k, then by using (<ref>) we obtain that
𝔼 [Wf(W)]=μ_j+1(W)
=∑_h=1^j+1\binom{j}{h-1}κ_h(W)μ_j+1-h(W)
= ∑_h=0^j\binom{j}{h}κ_h+1(W)μ_j-h(W)
=∑_h=0^kκ_h+1(W)/h !𝔼 [∂^h f(W)].
Moreover, we remark that this can be generalized to arbitrary polynomials f of degree k. Indeed, any polynomial f of degree k can be written as f(x)=∑_j=0^ka_jx^j for certain coefficients (a_j). By the linearity of expectations, we know that
𝔼 [Wf(W)]=∑_j=0^kκ_j+1(W)/j !𝔼 [∂^j f(W)].
Compare this to (<ref>) and apply <ref>. We conclude that Q_j=κ_j(W)/(j-1)! for any j∈ [k+1]. In particular, Q_1=0=κ_1(W).
Next we upper-bound the cumulants of W using R_k,1.
For any k∈ℕ_+, there exists a constant C_k that only depends on k such that |κ_k+2(W) |≤ C_kR_k,1.
Let f(x)=x^k+1/(k+1)!. We remark that f∈Λ_k+1 where Λ_k+1:={ f∈𝒞^k,1(ℝ):| f |_k,1≤ 1 }. Moreover, by using <ref> we have
𝔼 [Wf(W)]=∑_j=1^kκ_j+1(W)/j !𝔼 [∂^j f(W)]+𝒪(R_k,1).
Here the implicit constant in the 𝒪-term is bounded by 4^k.
On the other hand, by (<ref>) we have
𝔼 [Wf(W)]=1/(k+1)!μ_k+2(W)
= 1/(k+1)!∑_j=1^k+1\binom{k+1}{j}κ_j+1(W)μ_k+1-j(W)
= ∑_j=1^kκ_j+1(W)/j !𝔼 [∂^j f(W)]+κ_k+2(W)/(k+1)!.
Thus, there exists C_k such that |κ_k+2(W) |≤ C_kR_k,1.
Finally, we are able to prove <ref> based on <ref> and <ref>.
We perform induction on k:=⌈ p⌉. We start with k=1. To this end, we first remark that by <ref>, we have f = Θ h∈𝒞^1,ω(ℝ) and that | f |_1,ω is bounded by a constant. Moreover, f=Θ h is the solution to the Stein equation (<ref>), so by <ref> we obtain that
𝔼 [h(W)]-𝒩h= 𝔼 [f'(W)]-𝔼 [W f(W)] = 𝒪(R_1,ω).
Therefore, the desired result is established for 1.
Suppose that the proposition holds for 1,⋯,k-1; we want to prove that it also holds for k. Let f=Θ h, then by <ref> we know that f∈𝒞^k,ω(ℝ) and that | f |_k,ω is bounded by some constant that only depends on k,ω. Thus, by <ref>, we have
𝔼 [Wf(W)]=∑_j=1^kκ_j+1(W)/j !𝔼 [∂^j f(W)]+𝒪(R_k,ω).
Hence we have the following expansion of the Stein equation
𝔼 [h(W)]-𝒩h= 𝔼 [ f'(W)]-𝔼 [W f(W)]
= -∑_j=2^kκ_j+1(W)/j!𝔼 [∂^jf(W)]+𝒪 (R_k,ω)
= -∑_j=1^k-1κ_j+2(W)/(j+1)!𝔼 [∂^j+1Θ h(W)]+𝒪 (R_k,ω).
Noting that
∂^j+1Θ h∈𝒞^k-j-1,ω(ℝ) and |∂^j+1Θ h |_k-j-1,ω is bounded by a constant only depending on k,ω, then by inductive hypothesis we obtain that
𝔼 [∂^j+1Θ h(W)]-𝒩[∂^j+1Θ h]
= ∑_(r,s_1:r)∈Γ(k-j-1)(-1)^r∏_ℓ=1^rκ _s _ℓ+2(W)/(s _ℓ+1)!𝒩 [∏_ℓ=1^r(∂ ^s _ℓ+1Θ)∂^j+1Θ h]
+𝒪(∑_ℓ=1^k-j-1R _ℓ,1^(k-j-1+ω )/ℓ+∑_ℓ=1^k-jR _ℓ,ω^(k-j-1+ω )/(ℓ+ω -1)),
where we denoted Γ(k-j-1):={ r,s_1:r∈ℕ_+: ∑_ℓ=1^rs_ℓ≤ k-j-1 }.
By <ref> and Young's inequality, we have
|κ_j+2(W) R_ℓ,ω^k-j+ω -1/ℓ+ω -1|≲ R_j,1R_ℓ,ω^k-j+ω -1/ℓ+ω -1≤j/k+ω-1 R_j,1^k+ω -1/j+k-j+ω -1/k+ω -1R_ℓ,ω^k+ω -1/ℓ+ω -1,
|κ_j+2(W) R_ℓ,1^k-j+ω -1/ℓ|≲ R_j,1R_ℓ,1 ^k-j+ω -1/ℓ≤j/k+ω-1 R_j,1^k+ω-1/j+k-j+ω -1/k+ω-1R_ℓ,1 ^k+ω -1/ℓ.
Thus, we derive that
𝔼 [h(W)]-𝒩h
(<ref>)= -∑_j=1^k-1κ_j+2(W)/(j+1)!𝔼 [∂^j+1Θ h(W)]+𝒪 (R_k,ω)
(<ref>)= -∑_j=1^k-1κ_j+2(W)/(j+1)!𝒩 [∂^j+1Θ h] +∑_j=1^k-1κ_j+2(W)/(j+1)!·
∑_(r,s_1:r)∈Γ(k-j-1)(-1)^r∏_ℓ=1^rκ _s _ℓ+2(W)/(s _ℓ+1)!𝒩 [∏_ℓ=1^r(∂ ^s _ℓ+1Θ)∂^j+1Θ h]
+𝒪(R_k,ω+∑_j=1^k-1|κ_j+2(W) |∑_ℓ=1^k-j-1R _ℓ,1^(k+ω -j-1)/ℓ
+∑_j=1^k-1|κ_j+2(W) |∑_ℓ=1^k-jR _ℓ,ω^(k+ω -j-1)/(ℓ+ω -1))
(<ref>)= -∑_j=1^k-1κ_j+2(W)/(j+1)!𝒩 [∂^j+1Θ h]+∑_j=1^k-1κ_j+2(W)/(j+1)!·
∑_(r,s_1:r)∈Γ(k-j-1)(-1)^r∏_ℓ=1^rκ _s _ℓ+2(W)/(s _ℓ+1)!𝒩 [∏_ℓ=1^r(∂ ^s _ℓ+1Θ)∂^j+1Θ h]
+𝒪(R_k,ω+∑_j=1^k-1R_j,1^(k+ω-1 )/j+∑_j=1^k-1∑_ℓ=1^k-j-1R _ℓ,1^(k+ω -1)/ℓ
+∑_j=1^k-1∑_ℓ=1^k-jR _ℓ,ω^(k+ω -1)/(ℓ+ω -1))
= ∑_(r,s_1:r)∈Γ(k-1)(-1)^r∏_ℓ=1^rκ _s _ℓ+2(W)/(s _ℓ+1)!𝒩 [∏_ℓ=1^r(∂ ^s _ℓ+1Θ) h]
+𝒪(∑_ℓ=1^k-1R _ℓ,1^(k+ω -1)/ℓ+∑_ℓ=1^kR _ℓ,ω^(k+ω -1)/(ℓ+ω -1)).
Therefore, the desired property was established by induction.
§ PROOF OF LEMMA <REF>
In <ref>, we would like to find a random variable with a given sequence of real numbers as its cumulants. Constructing a random variable directly from its cumulants can be difficult in practice. However, there is a rich literature on establishing the existence of a random variable given the moment sequence, and it is well known that the moments can be recovered from the cumulants, and vice versa. The explicit relation between the moments μ_n and the cumulants κ_n is given by the Bell polynomials, i.e.,
μ_n =B_n(κ_1,⋯,κ_n)=∑_j=1^nB_n,j(κ_1,⋯,κ_n-j+1),
κ_n =∑_j=1^n(-1)^j-1(j-1)!B_n,j(μ_1,⋯,μ_n-j+1),
where B_n and B_n,j are the exponential Bell polynomial defined by
B_n(x_1,⋯,x_n):=∑_j=1^nB_n,j(x_1,x_2,⋯,x_n-j+1),
B_n,j(x_1,x_2,⋯,x_n-j+1):=∑n!/i_1!i_2!⋯ i_n-j+1!(x_1/1!)^i_1(x_2/2!)^i_2⋯(x_n-j+1/(n-j+1)!)^i_n-j+1.
The sum here is taken over all sequences i_1,⋯,i_n-j+1 of non-negative integers such that the following two conditions are satisfied:
i_1+i_2+⋯+i_n-j+1=j,
i_1+2 i_2+⋯ +(n-j+1)i_n-j+1=n.
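To make the two conversion formulas above concrete, the following Python sketch (ours; it assumes the standard recurrence B_{n,k}=∑_i \binom{n-1}{i-1} x_i B_{n-i,k-1} for the incomplete Bell polynomials) converts cumulants into moments and back, and checks that the round trip is the identity.

from math import comb, factorial

def partial_bell(n, k, x):
    # Incomplete exponential Bell polynomial B_{n,k}(x[1], ..., x[n-k+1]).
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return sum(comb(n - 1, i - 1) * x[i] * partial_bell(n - i, k - 1, x)
               for i in range(1, n - k + 2))

def moment(n, kappa):
    return sum(partial_bell(n, j, kappa) for j in range(1, n + 1))

def cumulant(n, mu):
    return sum((-1) ** (j - 1) * factorial(j - 1) * partial_bell(n, j, mu)
               for j in range(1, n + 1))

kappa = {i: i + 1 for i in range(1, 6)}                  # arbitrary test values
mu = {n: moment(n, kappa) for n in range(1, 6)}
assert all(cumulant(n, mu) == kappa[n] for n in range(1, 6))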
In mathematics, the classical moment problem is formulated as follows: Given a sequence (μ_i)_i≥ 0, does there exist a random variable X supported on a given interval such that μ_j=𝔼 [X^j] for any non-negative integer j? There are three essentially different types of (closed) intervals: either both end-points are finite, exactly one end-point is finite, or no end-point is finite, which corresponds to the Hausdorff, Stieltjes, and Hamburger moment problem, respectively. See <cit.> or <cit.> for a detailed discussion. For our purpose, there is no restriction on the support of the random variables. Thus, the following lemma for the Hamburger moment problem is all we need.
The Hamburger moment problem is solvable, i.e., (μ_j)_j≥ 0 is a sequence of moments if and only if μ_0=1 and the corresponding Hankel kernel
H=([ μ_0 μ_1 μ_2 ⋯; μ_1 μ_2 μ_3 ⋯; μ_2 μ_3 μ_4 ⋯; ⋮ ⋮ ⋮ ⋱ ])
is positive definite, i.e.,
∑_j, k ≥ 0μ_j+k c_j c_k≥ 0
for every real sequence (c_j)_j ≥ 0 with finite support, i.e., c_j=0 except for finitely many j's.
If we define the (j+1)-th upper-left determinant of a Hankel matrix by
H_j(x_0,x_1,⋯,x_2j):=|[ x_0 x_1 ⋯ x_j; x_1 x_2 ⋯ x_j+1; ⋮ ⋮ ⋱ ⋮; x_j x_j+1 ⋯ x_2j ]|,
by Sylvester's criterion in linear algebra <cit.>, the positive-definite condition above is equivalent to H_j(μ_0,⋯,μ_2j)>0 for any j∈ℕ_+.
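As a small illustration of this criterion (a sketch of ours, not needed for the proofs), one can check the finitely many leading minors H_j(μ_0,…,μ_2j) that a truncated moment sequence determines; for the moments of the standard normal all of them are positive, as expected.

import numpy as np

def hankel_minor(mu, j):
    # Determinant H_j(mu_0, ..., mu_{2j}) of the (j+1) x (j+1) Hankel matrix.
    return np.linalg.det(np.array([[mu[a + b] for b in range(j + 1)]
                                   for a in range(j + 1)], dtype=float))

def normal_moment(n):
    # mu_{2l} = (2l-1)!!, odd moments vanish.
    return 0.0 if n % 2 else float(np.prod(np.arange(1, n, 2))) if n else 1.0

mu = [normal_moment(n) for n in range(11)]
print([round(hankel_minor(mu, j)) for j in range(5)])    # [1, 1, 2, 12, 288]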
In order to prove <ref>, we construct a Hankel matrix from given values of cumulants and ensure that the upper-left determinants of (<ref>) are all positive. Then by <ref>, there exists a random variable that has matched moments with the ones in (<ref>) and hence it also has the required cumulants by (<ref>).
For convenience, we write
L_j(x_1,⋯,x_2j):=H_j(1,B_1(x_1),B_2(x_1,x_2),⋯,B_2j(x_1,⋯,x_2j)).
Taking x_1=0, from the definitions (<ref>) and (<ref>), there is an expansion
L_j(0,x_2,⋯,x_2j)
= H_j(1,0,B_2(0,x_2),⋯,B_2j(0,x_2,⋯,x_2j))
= ∑ a_t_2,⋯ , t_2j^(j)x_2^t_2⋯ x_2j^t_2j,
where the sum is taken over
t_2+t_3+⋯+t_2j≥ j,
2t_2+3t_3+⋯+(2j)t_2j=j(j+1).
We further define in the following way a sequence of univariate polynomials which will be essential in our construction in <ref>, by setting
P_j(x):=L_j(0,1,x,x^2,x^3,⋯,x^2j-2).
Firstly, we present a lemma on the properties of P_j(x).
P_j(x) is a polynomial of degree at most j(j-1) with only even-degree terms and if we write
P_j(x)=∑_ℓ=0^j(j-1)/2b_2ℓ^(j)x^2ℓ,
we have b_0^(j)=a_j(j+1)/2,0,⋯,0^(j)≥ 2 for any j≥ 2, j∈ℕ_+.
Note that by applying (<ref>) we obtain that
P_j(x)=L_j(0,1,x,⋯,x^2j-2)=∑ a_t_2,⋯,t_2j^(j)x^t_3+2t_4+⋯+(2j-2)t_2j,
where the sum is taken over
t_2+t_3+⋯+t_2j≥ j,
2t_2+3t_3+⋯+(2j)t_2j=j(j+1).
The degree of each term in (<ref>) is
t_3+2t_4+⋯+(2j-2)t_2j
= (2t_2+3t_3+⋯+(2j)t_2j)-2 (t_2+t_3+⋯+t_2j)
= j(j+1)-2 (t_2+t_3+⋯+t_2j).
This is even and no greater than j(j-1) since t_2+t_3+⋯+t_2j≥ j.
Then we show the constant term b_0^(j)≥ 2. Consider a standard normal random variable ξ∼𝒩(0,1). Then κ_j(ξ)=0 for all j≥ 1,j≠ 2, and κ_2(ξ)=1, which is straightforward by checking that the moment generating function of ξ is exp (t^2/2). By <ref>, we have
b_0^(j)=P_j(0)=L_j(0,1,0,⋯,0)
= L_j(κ_1(ξ),κ_2(ξ),⋯,κ_2j(ξ))
= H_j(μ_0(ξ),μ_1(ξ),⋯,μ_2j(ξ))>0.
Since μ_2ℓ(ξ)=(2ℓ-1)!! and μ_2ℓ-1(ξ)=0 are integers for ℓ∈ℕ_+, b_0^(j) is also an integer. Expanding the determinant of the (j+1)× (j+1) Hankel matrix H_j by the Leibniz formula <cit.>, we observe that the nonzero terms come in an even number and that each of them is odd. Specifically, the determinant of the Hankel matrix is given by
b_0^(j)=H_j(μ_0(ξ),μ_1(ξ),⋯,μ_2j(ξ))=∑_τ∈ S_j+1sgn(τ)∏_i=1^j+1μ_τ(i)+i-2(ξ),
where by abuse of notation sgn is the sign function of permutations in the permutation group S_j+1, which returns +1 and -1 for even and odd permutations, respectively. Since μ_2ℓ(ξ)=(2ℓ-1)!! and μ_2ℓ-1(ξ)=0 for all ℓ∈ℕ_+, the term
sgn(τ)∏_i=1^j+1μ_τ(i)+i-2(ξ)
is odd if τ (i)+i is even for all i=1,⋯,j+1, and it vanishes otherwise.
Noting that the number of permutations τ with τ (i)+i even for all i=1,⋯,j+1 equals ⌈ (j+1)/2⌉ !·⌊ (j+1)/2⌋ !, which is even when j≥ 2, we conclude that b_0^(j) is even, and thus it must be at least 2.
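This parity argument can be checked with exact integer arithmetic (a small verification of ours using sympy, complementing the floating-point check above): the determinants b_0^(j) for the standard normal are even integers at least 2 for every j≥ 2.

from sympy import Matrix

def double_factorial(n):
    return 1 if n <= 0 else n * double_factorial(n - 2)

def b0(j):
    # Hankel determinant of the standard normal moments mu_0, ..., mu_{2j}.
    mu = [0 if n % 2 else double_factorial(n - 1) for n in range(2 * j + 1)]
    return Matrix(j + 1, j + 1, lambda a, b: mu[a + b]).det()

print([b0(j) for j in range(2, 6)])    # [2, 12, 288, 34560], all even and >= 2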
As we have explained at the beginning of this section, we would like to construct a `moment' sequence such that the corresponding Hankel kernel is positive definite. The following lemma offers one single step in the construction.
Suppose there is some constant C such that |μ_ℓ|≤ C for ℓ=1,⋯, 2j+1 and H_j(μ_0,⋯,μ_2j)≥ 1. Then there exists C' only depending on j and C such that
H_j+1(μ_0,⋯,μ_2j,μ_2j+1,C')≥ 1.
Let C'=(j+1) (j+1)!C^j+2+1. Then by the Laplace expansion <cit.> of the determinant, we have
H_j+1(μ_0,⋯,μ_2j,μ_2j+1,C')= C'H_j(μ_0,⋯,μ_2j)+∑_ℓ=0^j(-1)^j+1+ℓμ_j+1+ℓA_j+2,ℓ+1
≥ C'-(j+1)C· (j+1)!C^j+1≥ 1,
where A_j+2,ℓ+1 is the determinant of the (j+1)× (j+1) submatrix obtained by deleting the (j+2)-th row and (ℓ+1)-th column of
A=([ μ_0 μ_1 ⋯ μ_j+1; μ_1 μ_2 ⋯ μ_j+2; ⋮ ⋮ ⋱ ⋮; μ_j+1 μ_j+2 ⋯ C' ]).
Now we prove <ref>.
The key of the proof is to use <ref>. To do so we need to postulate an infinite sequence that will be our candidate sequence of moments and check that the conditions of <ref> hold. We remark that, as we already know what we want the first k+1 cumulants to be, we already know the candidates for the first k+1 moments; we only need to find adequate proposals for the (k+2)-th moment onward. We will do so by iteratively applying <ref>.
To this end, recall that by <ref> we know that b_0^(j)≥ 2. Therefore, we can choose a small enough constant 0<C_p<1 only depending on k=⌈ p⌉ such that
b_0^(j)-∑_ℓ=1^j(j-1)/2∑_2t_2+2t_3+⋯+2t_2j=j(j+1)-2ℓ 2t_2+3t_3+⋯+2jt_2j=j(j+1)| a_t_2,⋯,t_2j^(j)| C_p^2ℓ≥ 1,
for any integer j=1,⋯, (k+1)/2. Given an index set I_n, if u_j^0.6(n)= 0 for all j=1,⋯, k-1, let ξ^0.6(n)∼𝒩(0,1) and q_n≫| I_n |. Then q_n and ξ^0.6(n) satisfy all the requirements since κ_j(ξ^0.6(n))=0 for all j∈ℕ_+,j≠ 2 and κ_2(ξ^0.6(n))=1, which is straightforward by checking that the momemt generating function of ξ^0.6(n) is exp (t^2/2).
Otherwise, let
q_n:=⌊min_1≤ j≤ k-1, u_j^0.4(n)≠ 0{ C_p^2| u_j^0.6(n)|^-2/j}⌋,
where ⌊ x⌋ denotes the largest integer not exceeding x. Since by assumption, for any j=1,⋯, k-1, u_j^0.6(n)→ 0 as n→∞, then we know that there exists N>0 such that (i) q_n≥ 1 for any n>N and (ii) q_n→∞ as n→∞.
We note that by definition
min_1≤ j≤ k-1, u_j^0.4(n)≠ 0{ C_p^2| u_j^0.6(n)|^-2/j}< q_n+1,
which implies
max_1≤ j≤ k-1{ q_n^j/2| u_j^0.6(n)|}>C_p^j(q_n/(q_n+1))^j/2>C_p^p/2^p/2.
On the other hand, (<ref>) also implies that C_p^2| u_j^0.6(n)|^-2/j≥ q_n. Thus, q_n^j/2| u_j^0.6(n)|≤ C_p^j. Now let κ_j+2:=q_n^j/2u_j^0.6(n). We remark that κ_j+2≤ C_p^j and κ̃_j+2≥ C_p^p/2^p/2. We write μ_j+2:=B_j+2(0,κ_2,⋯,κ_j+2) for j=1,⋯,k-1. Those will be our candidates for the first k+1 moments. Moreover, if k is odd, we also propose a candidate for (k+2)-th moment by setting μ_k+2:=0.
For j=1,⋯,⌈ k/2⌉
by (<ref>) we have
H_j(1,0,μ_2,μ_3,⋯,μ_2j)=L_j(0,κ_2,κ_3,⋯,κ_2j)
= ∑_2t_2+3t_3+⋯+2jt_2j=j(j+1)a_t_2,⋯ , t_2j^(j)κ_2^t_2⋯κ_2j^t_2j
= ∑_ℓ=0^j(j-1)/2∑_2t_2+2t_3+⋯+2t_2j=j(j+1)-2ℓ 2t_2+3t_3+⋯+2jt_2j=j(j+1)a_t_2,⋯ , t_2j^(j)κ_2^t_2⋯κ_2j^t_2j
(a)≥ b_0^(j)-∑_ℓ=1^j(j-1)/2∑_2t_2+2t_3+⋯+2t_2j=j(j+1)-2ℓ 2t_2+3t_3+⋯+2jt_2j=j(j+1)| a_t_2,⋯ , t_2j^(j)κ_2^t_2⋯κ_2j^t_2j|
(b)≥ b_0^(j)-∑_ℓ=1^j(j-1)/2∑_2t_2+2t_3+⋯+2t_2j=j(j+1)-2ℓ 2t_2+3t_3+⋯+2jt_2j=j(j+1)| a_t_2,⋯,t_2j^(j)| C_p^2ℓ (c)≥ 1.
where (a) uses the definition of b_0^(j), (b) uses the fact that |κ_j+2|≤ C_p^j, and (c) uses (<ref>).
Moreover, as |κ_j+2|≤ C_p^j, then we know that there exists some constant C_p' such that |μ_j+2|=| B_j+2(0,κ_2,⋯,κ_j+2) |≤ C_p' for any integer j=1,⋯, 2⌈ k/2⌉-1.
Therefore, by <ref>, there exists C_p” depending on k=⌈ p⌉ and C_p' such that
H_⌈ k/2⌉+1(1,0,μ_2,⋯,μ_2⌈ k/2⌉+1,C_p”)≥ 1.
Let μ_2⌈ k/2⌉+2:=C_p”. Applying <ref> repeatedly, we get a sequence (μ_j)_j≥ 0 such that μ_0=1 and H_j(μ_0,μ_1,⋯,μ_2j)≥ 1>0 for any j∈ℕ_+. The sequence (μ_j) is then our candidate for the moments and we remark that it satisfies the conditions of <ref>. Therefore, by <ref>, we conclude that there exists ξ^0.6(n) such that μ_j(ξ^0.6(n))=μ_j for any j∈ℕ_+. As the first k+1 moments uniquely determine the first k+1 cumulants of a random variable, we have κ_j+2(ξ^0.6(n))=κ_j+2=q_n^j/2u_j^0.6(n) for all j=1,⋯, k-1. Thus, the q_n and ξ^0.6(n) that we have constructed meet the requirements of <ref>. Moreover, (<ref>) implies that <ref> is also satisfied. Lastly, to show <ref> we note that
𝔼 [|ξ^0.6(n)|^p+2]=‖ξ^0.6(n)‖_p+2^p+2(*)≤‖ξ^0.6(n)‖_2⌈ k/2⌉ +2^p+2
= (μ_2⌈ k/2⌉ +2(ξ^0.6(n)))^(p+2)/(2⌈ k/2⌉ +2)
≤ (C_p”)^(p+2)/(2⌈ k/2⌉ +2).
Here (*) is due to the fact that k=⌈ p⌉≥ p.
§ PROOFS OF OTHER RESULTS
In this section, we provide the proofs of all the other results in the main text.
§.§ Proof of Proposition <ref> and Theorem <ref>
For ease of notation, in this section we will drop the dependence on n in our notation and write W, N( · ), σ, X_i, I and R_j,ω for respectively W_n, N_n( · ), σ_n, X^0.6(n)_i, I_n and R_j,ω,n.
Before we prove the bounds for R_k,ω, we note that R_k,ω can be defined without assuming local dependence. Thus, we first aim to generalize this concept, which makes the results derived in <ref> also applicable in general dependent situations. Let (X_i)_i∈ I be a class of mean zero random variables indexed by I. For any graph G (not necessarily the dependency graph) with vertex set I and a subset J⊆ I, we define N(J) to be the vertex set of the neighborhood of J. As in <ref>, we assume Var(∑_i∈ IX_i)=1, without loss of generality. Let W=∑_i∈ IX_i.
We extend the notation of ℛ-sums defined in (<ref>) to this general setting. Given an integer k∈ℕ_+ such that k≥ 2, for any t_1:k∈ℤ such that | t_j|≤ j-1 for any j∈ [k], let z=|{ j:t_j>0 }|. If z≥ 1, we write { j:t_j>0 }={ q_1,⋯,q_z}, where the sequence 2≤ q_1<⋯<q_z≤ k is taken to be increasing. We further let q_0:=1 and q_z+1:=k+1. Then we could still define the ℛ-sums by
ℛ_ω[t_1,t_2,⋯,t_k] : =
∑_i_1∈ N_1∑_i_2∈ N_2⋯∑_i_k-1∈ N_k-1[q_1-q_0,⋯,q_z+1-q_z]▹(| X_i_1|,⋯,| X_i_k-1|,(∑_i_k∈ N_k| X_i_k|)^ω),
where N_1:=I, and for 2≤ j≤ k
N_j:= N (i_1:| t_j|)=N(i_1,⋯,i_| t_j|) if t_j≠ 0
∅ if t_j=0
.
Now the remainder term R_k,ω is defined as
R_k,ω:= ∑_(ℓ,η_1:ℓ)∈ C^*(k+2)∑_i_1∈ N_1'∑_i_2∈ N_2'⋯∑_i_k+1∈ N_k+1'
[η_1,⋯,η_ℓ]▹(| X_i_1|,⋯,| X_i_k+1|,(∑_i_k+2∈ N_k+2'| X_i_k+2|)^ω)
= ∑_t_1:(k+2)∈ℳ_1,k+2 ℛ_ω[t_1,t_2,⋯,t_k+2].
where N_1':=I and N_j':=N(i_1:(j-1)) for j≥ 2. C^*(k+2) and ℳ_1,k+2 are given by
C^*(k+2)={ℓ,η_1:ℓ∈ℕ_+: η_j≥ 2 ∀ j∈ [ℓ-1], ∑_j=1^ℓη_j=k+2},
and
ℳ_1,k+2:={t_1:(k+2): t_j+1=± j & t_j∧ t_j+1< 0 ∀ 1≤ j≤ k+1}.
Note that the expressions of ℛ-sums and R_k,ω have the same forms as those in <ref>, but here we do not impose the assumption of the local dependence of (X_i)_i∈ I anymore as N(i_1:q)'s are defined directly from the graph structure we constructed on I. The main goal of this section is to prove the following proposition.
Fix k∈ℕ_+ such that k≥ 2 and real number ω∈ (0,1]. Let N(J) be defined as above and suppose the cardinality of N(J) is upper-bounded by M for any | J |≤ k. Then there exists a constant C_k+ω only depending on k+ω such that
ℛ_ω [t_1,t_2,⋯,t_k]≤ C_k+ω M^k-2+ω∑_i∈ I𝔼 [| X_i|^k-1+ω].
Before proving <ref>, we need the following two lemmas. <ref> helps us change the order of summation in ℛ_ω[t_1,⋯,t_k] and <ref> is a generalized version of Young's inequality, which allows us to bound the expectations of products by sums of moments.
Fix k∈ℕ_+ such that k≥ 2. For any J⊆ I, let N(J) be defined as above. Suppose (i_1,⋯,i_k) is a tuple such that i_1∈ I, i_2∈ N(i_1), ⋯, i_k∈ N(i_1:(k-1)). Then for any 1≤ h≤ k, there exists a permutation π on [k] such that π (1)=h, i_π(1)∈ I, i_π(2)∈ N(i_π(1)), ⋯, i_π(k)∈ N(i_π(1),⋯,i_π(k-1)).
We perform induction on k.
Firstly, suppose that k=2, then we remark that i_2∈ N(i_1)⇔ i_1∈ N(i_2). For h=1, we can choose π to be the identity and the desired identity holds. For h=2, we let π(1):=2 and π(2):=1 and remark than once again the desired result holds.
Suppose that the proposition is true for 2,⋯,k-1. We want to prove that it holds for k. If h<k, consider the tuple (i_1,⋯, i_h). By inductive hypothesis, there is a permutation π on { 1,2,⋯,h } such that π(1)=h, i_π(2)∈ N(i_π(1)), ⋯, i_π(h)∈ N(i_π(1),⋯,i_π(h-1)).
π̃(j):= π(j) if 1≤ j≤ h
j if h+1≤ j≤ k
.
Then π̃ satisfies the requirements in the lemma.
Now suppose h=k. i_k∈ N(i_1:(k-1)) indicates that i_k is a neighbor of { i_1,⋯,i_k-1}. Then there exists 1≤ℓ≤ k-1 such that there is an edge between i_k and i_ℓ in the graph G=(I,E). Thus, i_h∈ N(i_ℓ).
By inductive hypothesis, there is a permutation π on [ℓ] such that π(1)=ℓ, i_π(2)∈ N(i_π(1)), ⋯, i_π(ℓ)∈ N(i_π(1),⋯,i_π(ℓ-1)).
Define
π̃(j):=
k if j=1
π(j-1) if 2≤ j≤ℓ+1
j-1 if ℓ+2≤ j≤ k
.
Then π̃(1)=h=k. Moreover, we have i_π̃(2)=i_π(1)=i_ℓ∈ N(i_k)=N(i_π̃(1)), and for all j=2,⋯,ℓ we have i_π̃(j+1)=i_π(j)∈ N(i_π(1),⋯,i_π(j-1))⊆ N(i_π̃(1),⋯,i_π̃(j)). Finally, for all j≥ℓ+1 we have i_π̃(j+1)=i_j∈ N(i_1:(j-1))⊆ N(i_1,⋯,i_j-1,i_k)=N(i_π̃(1),⋯,i_π̃(j)). Thus, the lemma holds for k as well. By induction, the proof is complete.
Also, we need a generalization of Young's inequality.
Given t∈ℕ_+, let Y_1,⋯,Y_t be a sequence of random variables, and real numbers p_1,⋯, p_t>1 satisfy that 1/p_1+⋯+1/p_t=1. Then for any (ℓ, η_1:ℓ)∈ C(t):={ℓ,η_1:ℓ∈ℕ_+:∑_j=1^ℓη_j=t}, we have that
[η_1,⋯, η_ℓ]▹ (| Y_1|,⋯,| Y_t|)≤1/p_1𝔼 [| Y_1|^p_1]+⋯+ 1/p_t𝔼 [| Y_t|^p_t].
First, we prove
𝔼 [| Y_1⋯ Y_t|]≤1/p_1𝔼 [| Y_1|^p_1]+⋯+ 1/p_t𝔼 [| Y_t|^p_t],
𝔼 [| Y_1|]⋯𝔼 [| Y_t|] ≤1/p_1𝔼 [| Y_1|^p_1]+⋯+ 1/p_t𝔼 [| Y_t|^p_t].
To this end, recall that Young's inequality states the following: For any a_1,⋯,a_t≥ 0, and p_1,⋯,p_t>1 such that 1/p_1+⋯+1/p_t=1, we have
a_1⋯ a_t≤1/p_1a_1^p_1+⋯+1/p_ta_t^p_t.
Thus, by Young's inequality we know that
| Y_1⋯ Y_t|≤1/p_1| Y_1|^p_1+⋯+1/p_t| Y_t|^p_t.
Taking the expectation, we have
𝔼 [| Y_1⋯ Y_t|]≤1/p_1𝔼 [| Y_1|^p_1]+⋯+1/p_t𝔼 [| Y_t|^p_t].
Again by Young's inequality, we obtain that
𝔼 [| Y_1|]⋯𝔼 [| Y_t|]≤1/p_1𝔼 [| Y_1|]^p_1+⋯+1/p_t𝔼 [| Y_t|]^p_t.
By Jensen's inequality, 𝔼 [| Y_i|]^p_i≤𝔼 [| Y_i|^p_i] for i∈ [t].
This implies that
𝔼 [| Y_1|]⋯𝔼 [| Y_t|]≤1/p_1𝔼 [| Y_1|^p_1]+⋯+1/p_t𝔼 [| Y_t|^p_t].
Finally, we prove (<ref>). For 1≤ j≤ℓ, let 1/q_j:=∑_i=η_1+⋯+η_j-1+1^η_1+⋯+η_j1/p_i (for j=1 the lower limit of the sum is 1), so that 1/q_1+⋯+1/q_ℓ=1. Then
[η_1,⋯,η_ℓ]▹ (| Y_1|,⋯,| Y_t|)
= 𝔼[| Y_1⋯ Y_η_1|] 𝔼[| Y_η_1+1⋯ Y_η_1+η_2|] ⋯ 𝔼[| Y_η_1+⋯+η_ℓ-1+1⋯ Y_t|]
(<ref>)≤ 1/q_1𝔼[| Y_1⋯ Y_η_1|^q_1]+⋯+1/q_ℓ𝔼[| Y_η_1+⋯+η_ℓ-1+1⋯ Y_t|^q_ℓ]
(<ref>)≤ 1/p_1𝔼 [| Y_1|^p_1]+⋯+1/p_η_1𝔼 [| Y_η_1|^p_η_1]+⋯
+1/p_η_1+⋯+η_ℓ-1+1𝔼 [| Y_η_1+⋯+η_ℓ-1+1|^p_η_1+⋯+η_ℓ-1+1]+⋯+1/p_t𝔼 [| Y_t|^p_t].
Now we are ready to prove <ref>.
By (<ref>), we only need to prove that the following inequality holds for any k∈ℕ_+:
ℛ_ω [0,± 1,⋯,± k]≲ M^k-1+ω∑_i∈ I𝔼 [| X_i|^k+ω].
Once again we write z:=|{ j:t_j>0 }|. If z≥ 1, we write { j:t_j>0 }={ q_1,⋯,q_z}, where 2≤ q_1<⋯<q_z≤ k is increasing. Further let q_0:=1 and q_z+1:=k+1.
Noticing that
1/k+ω+⋯+1/k+ω_k times+ω/k+ω=1,
we apply <ref> and obtain that
[q_1-q_0,⋯,q_z+1-q_z]▹(| X_i_1|,⋯,| X_i_k|, (1/M∑_i_k+1∈ N(i_1:k)| X_i_k+1|)^ω)
≲ 𝔼 [| X_i_1|^k+ω]+ ⋯ +𝔼 [| X_i_k|^k+ω ] +𝔼[ (1/M∑_i_k+1∈ N(i_1:k)| X_i_k+1|)^k+ω].
Now by Jensen's inequality and the fact that | N(i_1:k)|≤ M, we get that
𝔼[ (1/M∑_i_k+1∈ N(i_1:k)| X_i_k+1|)^k+ω]≤1/M∑_i_k+1∈ N(i_1:k)𝔼 [| X_i_k+1|^k+ω].
Moreover, we remark that
M^ω [q_1-q_0,⋯,q_z+1-q_z]▹(| X_i_1|,⋯,| X_i_k|, (1/M∑_i_k+1∈ N(i_1:k)| X_i_k+1|)^ω)
= [q_1-q_0,⋯,q_z+1-q_z]▹(| X_i_1|,⋯,| X_i_k|, (∑_i_k+1∈ N(i_1:k)| X_i_k+1|)^ω)
Thus, this implies that
ℛ_ω[0,± 1,⋯,± k]
= ∑_i_1∈ I⋯∑_i_k∈ N(i_1:(k-1))[q_1-q_0,⋯,q_z+1-q_z]▹(| X_i_1|,⋯,| X_i_k|, (∑_i_k+1∈ N(i_1:k)| X_i_k+1|)^ω)
≲ M^ω∑_i_1∈ I⋯∑_i_k∈ N(i_1:(k-1))(𝔼 [| X_i_1|^k+ω]+⋯+𝔼 [| X_i_k|^k+ω]+1/M∑_i_k+1∈ N(i_1:k)𝔼 [| X_i_k+1|^k+ω]).
Since the cardinality of N(i_1),⋯,N(i_1:k) are bounded by M, for j=1 we have
∑_i_1∈ I∑_i_2∈ N(i_1)⋯∑_i_k∈ N(i_1:(k-1))𝔼 [| X_i_j|^k+ω]≤ M^k-1∑_i∈ I𝔼 [| X_i|^k+ω].
Now we bound
∑_i_1∈ I∑_i_2∈ N(i_1)⋯∑_i_k∈ N(i_1:(k-1))𝔼 [| X_i_j|^k+ω],
where j=2,⋯,k.
By <ref>, for any tuple (i_1,⋯,i_k) in the summation, there exists a permutation π such that π(1)=j, i_π(2)∈ N(i_π(1)), ⋯, i_π(k)∈ N(i_π(1),⋯,i_π(k-1)). Let ϕ_j be a map that sends (i_1,⋯,i_k) to (i_π(1),⋯,i_π(k)). Then no more than (k-1)! tuples are mapped to the same destination since (i_1,⋯,i_k) is a permutation of (i_π(1),⋯,i_π(k)) and i_j is fixed to be i_π(1). Thus, we obtain that
∑_i_1∈ I∑_i_2∈ N(i_1)⋯∑_i_k∈ N(i_1:(k-1))𝔼 [| X_i_j|^k+ω]
≤ (k-1)!∑_π:π(1)=j∑_i_π(1)∈ I∑_i_π(2)∈ N(i_π(1))⋯∑_i_π(k)∈ N(i_π(1),⋯,i_π(k-1))𝔼 [| X_i_π(1)|^k+ω]
≤ (k-1)!∑_π:π(1)=j∑_i_1∈ I∑_i_2∈ N(i_1)⋯∑_i_k∈ N(i_1:(k-1))𝔼 [| X_i_j|^k+ω]
≤ ((k-1)!)^2M^k-1∑_i∈ I𝔼 [| X_i|^k+ω]≲ M^k-1∑_i∈ I𝔼 [| X_i|^k+ω].
Similarly,
∑_i_1∈ I∑_i_2∈ N(i_1)⋯∑_i_k+1∈ N(i_1:k)𝔼 [| X_i_k+1|^k+ω]≲ M^k∑_i∈ I𝔼 [ | X_i|^k+ω].
Substituting (<ref>), (<ref>), and (<ref>) into (<ref>), we conclude
ℛ_ω[t_1,t_2,⋯,t_k]
≤ ℛ_ω[0,sgn(t_2),2·sgn(t_3),⋯,(k-1)sgn(t_k-1)]
≲ M^k-2+ω∑_i∈ I𝔼 [ | X_i|^k-1+ω].
By <ref>, we have
R_k,ω(<ref>)= ∑_t_1:(k+2)∈ℳ_1,k+2 ℛ_ω[t_1,t_2,⋯,t_k+2]
≲∑_t_1:(k+2)∈ℳ_1,k+2M^k+ω∑_i∈ I𝔼 [ | X_i|^k+1+ω].
Noting that |ℳ_1,k+2|< 2^k+1 <cit.>, we conclude that
R_k,ω≲ M^k+ω∑_i∈ I𝔼 [ | X_i|^k+1+ω].
The proof of <ref> relies on <ref> and <ref>.
Let k:=⌈ p⌉. Then p=k+ω -1. Without loss of generality, we assume σ_n= 1.
By <ref>,
R_j,ω,n≲ M_n^j+ω∑_i∈ I_n𝔼[| X^0.6(n)_i|^j+1+ω].
If we let q_1=(k-1)/(k-j) and q_2=(k-1)/(j-1), then 1/q_1+1/q_2=1 and (2+ω )/q_1+(k+1+ω )/q_2=j+1+ω.
Thus,
| X^0.6(n)_i|^j+1+ω=| X^0.6(n)_i|^(2+ω )/q_1·| X^0.6(n)_i|^(k+1+ω )/q_2.
By Hölder's inequality,
M_n^j+ω∑_i∈ I_n𝔼[| X^0.6(n)_i|^j+1+ω]
≤ (M_n^1+ω∑_i∈ I_n𝔼[| X^0.6(n)_i|^2+ω])^1/q_1(M_n^k+ω∑_i∈ I_n𝔼[| X^0.6(n)_i|^k+1+ω])^1/q_2
= (M_n^1+ω∑_i∈ I_n𝔼[| X^0.6(n)_i|^2+ω])^(k-j)/(k-1)(M_n^k+ω∑_i∈ I_n𝔼[| X^0.6(n)_i|^k+1+ω])^(j-1)/(k-1).
Since
ω (k-j)/(k-1)(j+ω -1)+(j-1)(k+ω-1 )/(k-1)(j+ω -1)=1,
by Young's inequality (See <ref> for details), we get
(M_n^1+ω∑_i∈ I_n𝔼[| X^0.6(n)_i|^2+ω])^k-j/(k-1)(j+ω -1)(M_n^k+ω∑_i∈ I_n𝔼[| X^0.6(n)_i|^k+1+ω])^j-1/(k-1)(j+ω -1)
≤ ω (k-j)/(k-1)(j+ω -1)(M_n^1+ω∑_i∈ I_n𝔼[| X^0.6(n)_i|^2+ω])^1/ω
+(j-1)(k+ω -1)/(k-1)(j+ω -1)(M_n^k+ω∑_i∈ I_n𝔼[| X^0.6(n)_i|^k+1+ω])^1/(k+ω -1).
Thus, we have
R_j,ω,n^1/(j+ω -1)≲(M_n^1+ω∑_i∈ I_n𝔼[| X^0.6(n)_i|^2+ω])^1/ω+(M_n^k+ω∑_i∈ I_n𝔼[| X^0.6(n)_i|^k+1+ω])^1/(k+ω -1).
Similarly, we derive that
R_j,1,n^1/j≲(M_n^j+1∑_i∈ I_n𝔼[| X^0.6(n)_i|^j+2])^1/j
≤ (M_n^1+ω∑_i∈ I_n𝔼[| X^0.6(n)_i|^2+ω])^k+ω -j-1/kj(M_n^k+ω∑_i∈ I_n𝔼[| X^0.6(n)_i|^k+1+ω])^j-ω/(k-1)j
≲ (M_n^1+ω∑_i∈ I_n𝔼[| X^0.6(n)_i|^2+ω])^1/ω+(M_n^k+ω∑_i∈ I_n𝔼[| X^0.6(n)_i|^k+1+ω])^1/(k+ω -1).
Since by assumption M_n^1+ω∑_i∈ I_n𝔼[| X^0.6(n)_i|^ω +2]→ 0 and M_n^k+ω∑_i∈ I_n𝔼[| X^0.6(n)_i|^p+2]→ 0 as n→∞, we have that R_j,1,n→ 0 as n→∞.
Therefore, by <ref> and noting the fact that p=k+ω -1, we conclude
𝒲_p(ℒ(W_n),𝒩(0,1))
≤ C_p((M_n^1+ω∑_i∈ I_n𝔼[| X^0.6(n)_i|^ω +2] )^1/ω+(M_n^p+1∑_i∈ I_n𝔼[| X^0.6(n)_i|^p+2] )^1/p),
where C_p only depends on p.
§.§ Proofs of Corollaries <ref> and <ref>
Define the graph (T_n,E_n) to be such that there is an edge between i_1,i_2∈ T_n if and only if ‖ i_1-i_2‖≤ m. From the definition of the m-dependent random field, (X_i^0.6(n))_i∈ T_n satisfies the required local dependence structure. We will therefore apply <ref> to obtain the desired result. We remark that j∈ N_n(i_1:(⌈ p⌉ +1)) only if there is ℓ∈[⌈ p⌉ +1] such that ‖ i_ℓ-j‖≤ m, which directly implies that |N_n(i_1:(⌈ p⌉ +1))|≤ (⌈ p⌉+1)(2m+1)^d.
Moreover, by Hölder's inequality, we have
∑_i∈ T_n𝔼[| X^0.6(n)_i|^ω+2]
≤ (∑_i∈ T_n𝔼[| X^0.6(n)_i|^2])^(p-ω)/p(∑_i∈ T_n𝔼[| X^0.6(n)_i|^p+2])^ω/p
(a)≤ M^(p-ω)/pσ^2(p-ω)/p(∑_i∈ T_n𝔼[| X^0.6(n)_i|^p+2])^ω/p.
Here (a) is due to the non-degeneracy condition. And this directly implies that
m^(1+ω)d/ω(σ_n^-(ω+2)∑_i∈ T_n𝔼[| X^0.6(n)_i|^ω +2] )^1/ω
≤ m^(1+ω)d/ωM^p-ω/pω(σ_n^-(p+2)∑_i∈ T_n𝔼[| X^0.6(n)_i|^p+2] )^1/p→ 0 as n→∞.
Therefore, by <ref>, there exists C_p,d>0 such that for n large enough we have
𝒲_p(ℒ(W_n),𝒩(0,1))≤ C_p,dm^(1+ω)d/ωM^p-ω/pωσ_n^-p+2/p(∑_i∈ T_n𝔼[| X^0.6(n)_i|^p+2] )^1/p.
Moreover, if (X^0.6(n)_i) is in addition assumed to be stationary, then by assumption there is a constant K such that lim inf_n→∞σ_n^2/| T_n |≥ K. Therefore, we get that
σ_n^-(p+2)∑_i∈ T_n𝔼[| X^0.6(n)_i|^p+2]≍| T_n |^-(p+2)/2·| T_n |=| T_n |^-p/2→ 0,
and
𝒲_p(ℒ(W_n),𝒩(0,1))
≤ C_p,dm^(1+ω)d/ωM^p-ω/pωσ_n^-p+2/p(∑_i∈ T_n𝔼[| X^0.6(n)_i|^p+2] )^1/p
= 𝒪(| T_n |^-1/2).
Consider the index set I_n={i=(i_1,⋯,i_m):1≤ i_1≤⋯≤ i_m≤ n }⊆ℤ^m. For each i∈ I_n, let ξ_i:=h(X_i_1,⋯,X_i_m). Then W_n=σ_n^-1∑_i∈ Iξ_i. Let (I_n,E_n) be the graph such that there is an edge between i,j∈ I_n if and only if { i_1,⋯,i_m}∩{ j_1,⋯,j_m}≠∅.
Then we remark that the conditions hold. Moreover, note that j is in
N_n(i_1:(⌈ p⌉+1)) only if there is ℓ∈ [⌈ p⌉+1] and k_1,k_2∈ [m] such that j_k_1=(i_ℓ)_k_2, where (i_ℓ)_k_2 denotes the k_2-th component of the vector i_ℓ. This directly implies that the cardinality of the dependency neighborhoods are bounded by n^m-(n-m(⌈ p⌉+1))^m≍ n^m-1. Moreover, the non-degeneracy condition of the U-statistic implies that σ_n^2≍ n^2m-1 <cit.>. Applying <ref>, we get that
𝒲_p(ℒ(W_n),𝒩(0,1))
≲ (n^m(n^m-1)^1+ω1/σ_n^ω +2𝔼[| h(X_1,⋯,X_m) |^ω +2])^1/ω
+(n^m(n^m-1)^p+11/σ_n^p+2𝔼[| h(X_1,⋯,X_m) |^p+2])^1/p
≲ n^-1/2(𝔼[| h(X_1,⋯,X_m) |^ω +2])^1/ω+n^-1/2(𝔼[| h(X_1,⋯,X_m) |^p+2])^1/p
≤ n^-1/2‖ h(X_1,⋯,X_m) ‖ _p+2^(ω+2)/ω+n^-1/2‖ h(X_1,⋯,X_m) ‖ _p+2^(p+2)/p.
By the moment condition, ‖ h(X_1,⋯,X_m) ‖ _p+2<∞. Thus, we conclude
𝒲_p(ℒ(W_n),𝒩(0,1))=𝒪(n^-1/2).
§.§ Proof of Theorem <ref>
For ease of notation we write ω_p:=𝒲_p(ℒ(W_n),𝒩(0,1)). Choose ρ∈ (0,1). Note that for all ϵ>0 there is G∼𝒩(0,1) such that ‖ G-W_n‖_p≤𝒲_p(ℒ(W_n),𝒩(0,1))+ϵ. Therefore, by the union bound we have
ℙ(W_n≥ t) = ℙ(W_n-G+G≥ t)
≤ℙ(W_n-G≥ (1-ρ)t)+ ℙ(G≥ρ t)
(a)≤Φ^c(ρ t)+ ‖ W_n-G‖_p^p/((1-ρ) t)^p
≤Φ^c(ρ t)+ ( ω_p+ϵ)^p/((1-ρ) t)^p
where to obtain (a) we have used Markov's inequality. Now as this holds for any arbitrary choice of ϵ>0 we conclude that
ℙ(W_n≥ t) ≤Φ^c(ρ t)+ ω_p^p/((1-ρ) t)^p.
Define the function g_t:x↦ (1-x)^p+1 e^-(xt)^2/2, then we can remark that g_t:[0,1]→[0,1] is a bijection. Choose ρ^*_t:=g_t^-1(√(2π)pω_p^p/t^p+1).
Moreover, we obtain that
ℙ(W_n≥ t) ≤Φ^c(ρ^*_t t)+ (1-ρ^*_t)tφ(ρ^*_t t)·ω_p^p/(t^p+1(1-ρ_t^*)^p+1φ(ρ^*_t t))
(a)≤Φ^c(t)+ (1-ρ^*_t)tφ(ρ^*_t t)(1+1/p)
≤Φ^c(t)+ p^1/p+1ω_p^1-1/p+1/tφ(ρ^*_t t(1-1/p+1))(1+1/p)
where to obtain (a) we used the fact that Φ^c(ρ^*_t t)≤Φ^c(t)+(1-ρ^*_t)tsup_x∈ [ρ^*_t t,t]φ(x).
Suppose that t≥ 1 satisfies √(2βlog t)/t≤ 1. Define
x:= √(2βlog t )/t,
we notice that x∈ [0,1]. We remark that if
ω_p≤ (√(2π)p)^1/p+1(1-√(2βlog t )/t)t^1-β/p+1.
then we have g_t(x)≥√(2π)pω_p^p/t^p+1.
Therefore, as g_t^-1(·) is a decreasing function, we have that x≤ρ^*_t, which implies that
ℙ(W_n≥ t)≤Φ^c(t)+ 1/t^1+β(1-1/p+1)p^1/p+1ω_p^1-1/p+1(1+1/p).
Moreover, similarly we can remark that
ℙ(G≥ (1+ρ) t) ≤ℙ(W_n≥ t )+ℙ(G-W_n≥ρ t)
≤ℙ(W_n≥ t )+(ω_p+ϵ)^p/ρ^pt^p
Therefore, as this holds for any arbitrary ϵ>0 we obtain that
Φ^c((1+ρ) t) ≤ℙ(W_n≥ t )+ω_p^p/ρ^pt^p.
Moreover, we can define g̃_t:x↦ x^p+1e^-(1+x)^2t^2/2 and then choose ρ̃_t^*:=g̃_t^-1(√(2π)pω_p^p/t^p+1). We similarly obtain that
ℙ(W_n≥ t)
≥Φ^c(t)-p^1/p+1ω_p^1-1/p+1/tφ(t(1-1/p+1))(1+1/p).
|
http://arxiv.org/abs/2307.06079v1 | 20230712110108 | Better bounds on the minimal Lee distance | [
"Jessica Bariffi",
"Violetta Weger"
] | cs.IT | [
"cs.IT",
"cs.DM",
"math.IT"
] |
Jessica Bariffi^1,2
Violetta Weger^3
^1Institute of Communication and Navigation
German Aerospace Center
Germany
^2Institute of Mathematics
University of Zurich
Switzerland
[email protected]
^3Department of Electrical and Computer Engineering
Technical University of Munich
Germany
[email protected]
[2020]94B05,94B65
Better bounds on the minimal Lee distance
===============================================================
This paper provides new and improved Singleton-like bounds for Lee metric codes over integer residue rings. We derive the bounds using various novel definitions of generalized Lee weights based on different notions of a support of a linear code. In this regard, we introduce three main different support types for codes in the Lee metric and analyze their utility to derive bounds on the minimum Lee distance. Eventually, we propose a new point of view to generalized weights and give an improved bound on the minimum distance of codes in the Lee metric for which we discuss the density of maximum Lee distance codes with respect to this novel Singleton-like bound.
§ INTRODUCTION
The Lee metric was introduced in 1958 by Lee <cit.> to cope with phase modulation in communication. It provides an interesting alternative to the Hamming and rank metric which are considered for orthogonal modulation and network coding, respectively. The Lee metric is most known for the celebrated result in <cit.> where the authors showed that some optimal non-linear binary codes can be represented as linear codes over ℤ/4ℤ endowed with the Lee metric.
Recently, there is a renewed interest in the Lee metric due to its similarities with the Euclidean norm used in lattice-based cryptography. In fact, the Lee metric was introduced to cryptography in <cit.> and its hard problems were further studied in <cit.>. Recently, a first Lee-metric primitive <cit.> has been submitted to the reopened NIST standardization process for digital signature schemes.
Although the Lee metric is one of the oldest metrics and has interesting properties and applications, it did not receive as much attention as other metrics which is visible in the lacking of a well understood algebraic foundation of Lee-metric codes. Indeed, only recently it was discovered that Lee-metric codes attain the Gilbert-Varshamov bound with high probability <cit.>, for the length of the code tending to infinity. This aligns with famous and well studied results in the Hamming <cit.> and the rank metric <cit.>.
In addition, the characterization of constant Lee weight codes, initiated by Wood <cit.>, has only recently been completed in <cit.>.
The study of optimal codes is a classical direction
in algebraic coding theory and the most famous bound within this direction is the Singleton bound.
The bound gives an upper bound on the minimum distance of a code, given other parameters of the code. Thus, codes attaining this bound have the maximal possible minimum distance and in turn the largest error correction capacity.
In the Hamming metric, the Singleton bound was introduced by Singleton <cit.> and was already studied by Komamiya <cit.>.
Codes in the Hamming metric that are attaining the Singleton bound are called Maximum Distance Separable (MDS) codes and are constituting some of the most used and studied codes in coding theory. It is well known that codes attain the Hamming-metric Singleton bound with high probability when letting the size of the finite field q tend to infinity.
Instead, if one lets the length n tend to infinity, the probability for a code to attain the Singleton bound tends to 0.
When changing the underlying metric, this behaviour can drastically change. This is for example the case in the rank metric.
While the rank metric is younger than the Lee metric, first being introduced by Delsarte <cit.> in 1978 and reintroduced by Gabidulin <cit.> and Roth <cit.>, its Singleton bound was already given in <cit.> in 1985 and its optimal codes, called Maximum Rank Distance (MRD) codes, have been well studied since. In fact, we know that for 𝔽_q^m-linear codes, MRD codes are dense when letting q or m tend to infinity <cit.>. For 𝔽_q linear codes, however MRD codes are sparse when letting q tend to infinity <cit.>, except for some special cases, where m or n are 2 <cit.>.
The situation in the Lee metric is completely different.
Indeed, the first Singleton bound in the Lee metric has only been stated in 2000 by Shiromoto <cit.>. This bound is tight, as there exists a code attaining it.
However, the recent paper <cit.>
revealed that this example is in fact the only non-trivial linear code that is optimal with respect to this Lee-metric Singleton bound. Thus, such optimal codes are "extremely" sparse and show the need for a more thorough study of bounds in the Lee metric and their optimal codes.
The puncturing argument used to derive the classical Singleton bound, and thus also the Lee-metric alternative in <cit.>, proves to be not well suited to the Lee metric. One thus requires other techniques to derive a Singleton-like bound that are more tailored to the Lee-metric setting. In fact, one possible technique is through generalized weights, first introduced in <cit.>.
In this paper we introduce several possible definitions of Lee supports of a code, which allows us to define generalized Lee weights.
We compare the resulting Lee-metric Singleton bounds and compute the density of their optimal codes.
The main contribution of this paper is a new Lee-metric Singleton bound for which the optimal codes are not sparse.
This paper is structured as follows. In Section <ref> we provide the preliminary basics on ring-linear codes and the Lee metric, which will be helpful for the remainder of the paper.
We also provide the already known Singleton-like bounds for the Lee metric and the densities of their optimal codes.
Section <ref> serves as a recap of generalized Hamming weights for finite fields and for finite integer rings. We restate there the main properties and definitions and discuss how to derive bounds on the minimum Lee distance using generalized weights. In Sections <ref> and <ref> we introduce two new support definitions in the Lee metric deriving from the ideas used in the Hamming metric. For both of them we are able to derive new bounds on the minimum Lee distance defining generalized Lee weights with respect to the new supports. Additionally, we show that the newly proposed generalized Lee weights are invariant under isometries in the Lee metric, and we discuss the density of optimal codes. Even though these new bounds are sharper than the existing bounds on the minimum Lee distance, they are still not sufficiently tight. Therefore, we introduce new generalized weights based on filtrations of a code in Section <ref>. Together with some additional parameters for a generator matrix of a filtration of a code we are able to derive an improved bound on the minimum Lee distance. Again, we discuss the density of optimal codes and the invariance under isometries for the new generalized weights. In Section <ref> we compare all the bounds for several parameters.
Finally, we conclude the paper in Section <ref>.
§ PRELIMINARIES
We start this section by introducing the main definitions, notions and results used in the course of this paper. Throughout this paper we denote by p a prime number, by s a positive integer, and by ℤ/p^sℤ the integer residue ring. Furthermore, for any integer i ∈{0, …, s-1}, we write ⟨ p^i ⟩ to denote either the ideal p^i(ℤ/p^sℤ) or the submodule p^i(ℤ/p^sℤ)^n.
§.§ Ring-Linear Codes
Classical coding theory considers finite fields 𝔽_q with q elements and a linear code 𝒞 is a subspace of the vector space 𝔽_q^n. Thus, 𝒞 has a dimension k, which determines its size |𝒞| =q^k and the minimum number of generators.
Such a code 𝒞 can then be represented through a generator matrix G ∈𝔽_q^k × n or a parity-check matrix H ∈𝔽_q^(n-k) × n, which have the code as image, respectively as kernel.
In this paper, we focus on codes over ℤ/p^sℤ.
A linear code 𝒞⊆ (ℤ/p^sℤ)^n is a ℤ/p^sℤ-submodule of (ℤ/p^sℤ)^n.
Due to the fundamental theorem of finite abelian groups, a linear code is isomorphic to the following direct sum of ℤ/p^sℤ-modules
𝒞≅⊕_i=1^K (ℤ/p^sℤ) / ⟨ p ⟩^λ_i.
The type of a module 𝒞 is then defined as the partition
λ= (s, …, s_k_0, s-1, …, s-1_k_1, …, 1, …, 1_k_s-1)
or equivalently
λ=(s^k_0(s-1)^k_1⋯ 1^k_s-1). Instead of using this well-known notation from the theory of modules, we prefer the notation (k_0, …, k_s-1), called the subtype of the code 𝒞, due to its simplicity.
We call n the length of the code and its elements codewords. We can define the ℤ/p^sℤ-dimension of a linear code 𝒞 as
k := log_p^s(|𝒞|).
The rank K of a code is given by the minimal number of generators, i.e., K := ∑_i = 0^s-1 k_i. Additionally, k_0 is called the free rank of the code 𝒞.
We denote by the ℤ/p^sℤ-dimension of the code 𝒞 the following
k= log_p^s(|𝒞|).
Note that the ℤ/p^sℤ-dimension of a code is not necessarily an integer. In fact, k is determined by the subtype as
k= ∑_i=0^s-1s-i/sk_i.
In general it holds that 0≤ k_0 ≤ k ≤ K ≤ n.
If the rank and the ℤ/p^sℤ-dimension of a code coincide, we call the code free.
It has been shown in <cit.> that free codes are dense as p ⟶∞ but they are neither dense nor sparse as the length n or s tend to infinity.
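For a concrete feel of the quantities just introduced, the following small Python helper (our own illustration; the function name is not from the paper) recovers the cardinality, the ℤ/p^sℤ-dimension k, the rank K and the free rank k_0 of a code from its subtype.

def code_parameters(p, s, subtype):
    # subtype = (k_0, ..., k_{s-1})
    cardinality = p ** sum((s - i) * k_i for i, k_i in enumerate(subtype))
    k = sum((s - i) / s * k_i for i, k_i in enumerate(subtype))
    K = sum(subtype)
    return cardinality, k, K, subtype[0]

# A code over Z/8Z (p = 2, s = 3) of subtype (1, 1, 1): |C| = 2^6, k = 2, K = 3, k_0 = 1.
assert code_parameters(2, 3, (1, 1, 1)) == (64, 2.0, 3, 1)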
As in the finite field case, ring-linear codes are represented through a generator matrix or a parity-check matrix.
Consider a code 𝒞⊆ (ℤ/p^sℤ)^n of rank K and free rank k_0. A matrix G ∈ (ℤ/p^sℤ)^K × n is called a generator matrix of 𝒞 if the rows of G span the code. A parity-check matrix H of 𝒞 is an (n-k_0)× n matrix over ℤ/p^sℤ whose null-space coincides with 𝒞.
Usually it is helpful to consider these matrices in their systematic form.
Let 𝒞 be a linear code in (ℤ/p^sℤ)^n of subtype (k_0, …, k_s-1) and rank K. Then 𝒞 is permutation equivalent to a code having a generator matrix G_𝗌𝗒𝗌∈ (ℤ/p^sℤ)^K × n of the form
=
[ 𝕀_k_0 A_1,2 A_1,3 ⋯ A_1,s A_1,s+1; 0 p𝕀_k_1 pA_2,3 ⋯ pA_2,s pA_2,s+1; 0 0 p^2 𝕀_k_2 ⋯ p^2A_3,s p^2 A_3,s+1; ⋮ ⋮ ⋮ ⋮ ⋮; 0 0 0 ⋯ p^s-1𝕀_k_s-1 p^s-1 A_s,s+1 ],
where A_i,s+1∈ (ℤ / p^s+1-iℤ)^k_i-1× (n-K), A_i,j∈ (ℤ / p^s+1-iℤ)^k_i-1× k_j for j ≤ s.
In addition, the code is permutation equivalent to a code having a parity-check matrix H ∈(n-k_0) × n of the form
H_𝗌𝗒𝗌 =
[ B_1,1 B_1,2 ⋯ B_1,s-1 B_1,s 𝕀_n-K; pB_2,1 pB_2,2 ⋯ pB_2,s-1 p𝕀_k_s-1 0; p^2B_3,1 p^2B_3,2 ⋯ p^2 𝕀_k_s-2 0 0; ⋮ ⋮ ⋮ ⋮ ⋮; p^s-1B_s,1 p^s-1𝕀_k_1 ⋯ 0 0 0 ],
where B_1,j∈ (ℤ / p^sℤ)^(n-K) × k_j+1, B_i,j∈ (ℤ / p^s+1-iℤ)^k_s-i+1× k_j+1 for i > 1.
We call the forms in (<ref>) and (<ref>) the systematic form of a generator matrix and a parity-check matrix, respectively.
Additionally to the subtype of a code 𝒞⊆ (ℤ/p^sℤ)^n, we can define a similar parameter going over the columns of a generator matrix of 𝒞.
Let 𝒞⊆ (ℤ/p^sℤ)^n be a linear code of rank K. For each j ∈{1, … , n} consider the j-th coordinate map
[ π_j: 𝒞⟶ℤ/p^sℤ ; (c_1,…,c_n) ⟼ c_j ].
The support subtype of 𝒞 is defined to be the (s+1)-tuple (n_0(𝒞), …, n_s(𝒞)), where n_i(𝒞) counts the number of coordinates j ∈{1, … ,n} for which π_j(𝒞) generates the ideal ⟨ p^i ⟩, i.e.,
n_i(𝒞) := |{ j ∈{1, …, n}: ⟨π_j(𝒞) ⟩ = ⟨ p^i ⟩}|.
A code with n_s(𝒞) = 0 is called non-degenerate.
We will simply write n_i instead of n_i(𝒞) if the code is clear from the context.
§.§ Lee Metric
In this paper we will focus on the Lee metric introduced in <cit.>. This metric was introduced to cope with phase modulation in communications.
However, we will often refer to and compare the Lee metric with the Hamming metric.
Thus, recall that for two n-tuples x, y ∈ (ℤ/p^sℤ)^n their Hamming distance is defined to be the number of positions in which they differ, i.e.,
d_𝖧(x, y) = |{ i ∈{1, …, n}: x_i ≠ y_i }|.
For a single element x∈(ℤ/p^sℤ)^n its Hamming weight is its Hamming distance to zero, i.e., the number of nonzero entries of x, and we denote it by wt_𝖧(x) := d_𝖧(x, 0).
The minimum Hamming distance of a linear code 𝒞⊆ (ℤ/p^sℤ)^n is then defined as
d_𝖧(𝒞) := min{ wt_𝖧(x) : x ∈𝒞∖{0}}.
Given the integer residue ring ℤ/p^sℤ, consider an element a ∈ℤ/p^sℤ interpreted as an integer in {0, … , p^s-1}. The Lee weight of a is given by
wt_𝖫(a) := min{ a, p^s-a }.
For x ∈ (ℤ/p^sℤ)^n its Lee weight is defined in an additive fashion. That is,
wt_𝖫(x) = ∑_i = 1^n wt_𝖫(x_i).
Similarly to the Hamming metric, the Lee weight induces a distance which we refer to as the Lee distance. In fact, for x, y ∈ (ℤ/p^sℤ)^n their Lee distance is defined to be
d_𝖫(x, y) = wt_𝖫(x-y).
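These definitions translate directly into code. The following Python sketch is ours (the function names are hypothetical, not from the paper) and will be reused in the later examples.

def lee_weight(x, q):
    # Lee weight of a scalar or of a vector, entries taken modulo q.
    if isinstance(x, int):
        a = x % q
        return min(a, q - a)
    return sum(lee_weight(a, q) for a in x)

def lee_distance(x, y, q):
    return lee_weight([a - b for a, b in zip(x, y)], q)

assert lee_weight(7, 9) == 2                       # min{7, 9 - 7}
assert lee_weight([1, 0, 2, 3], 9) == 6            # 1 + 0 + 2 + 3
assert lee_distance([1, 2], [8, 0], 9) == 4        # Lee weight of (2, 2)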
Note that the Hamming weight of an element x ∈ (ℤ/p^sℤ)^n is a natural lower bound on its Lee weight. Additionally, the Lee weight of each entry x_i can never exceed M := ⌊ p^s/2⌋. Hence, we have
0 ≤ wt_𝖧(x) ≤ wt_𝖫(x) ≤ wt_𝖧(x)· M ≤ n M.
Considering the Lee metric for linear codes over ℤ/p^sℤ, we are able to introduce the minimum Lee distance of a code 𝒞.
Given a linear code 𝒞⊆ (ℤ/p^sℤ)^n, we define its minimum Lee distance as
d_𝖫(𝒞) := min{ wt_𝖫(c) : c ∈𝒞∖{0}}.
Again, we easily observe that
d_𝖧(𝒞) ≤ d_𝖫(𝒞) ≤ M· d_𝖧(𝒞).
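For small parameters the minimum Lee distance can be computed by brute force from a generator matrix. The sketch below is ours (it relies on the lee_weight helper defined above) and simply enumerates all ℤ/qℤ-linear combinations of the rows.

from itertools import product

def min_lee_distance(G, q):
    K, n = len(G), len(G[0])
    best = None
    for u in product(range(q), repeat=K):
        c = tuple(sum(u[i] * G[i][j] for i in range(K)) % q for j in range(n))
        if any(c):
            w = lee_weight(c, q)
            best = w if best is None else min(best, w)
    return best

# The code <(1, 1, 1)> over Z/7Z: every nonzero codeword a*(1, 1, 1) has Lee weight 3*wt_L(a).
assert min_lee_distance([[1, 1, 1]], 7) == 3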
The linear isometries of the Lee metric are given by { -1,1 }^n⋊ S_n, that is, we can permute the entries and multiply each entry by either 1 or -1.
§.§ Singleton-like Bounds over
One of the major research directions in coding theory is to bound the minimum distance of a code.
This will then, in turn, bound the error-correction capability of a code 𝒞⊆ (ℤ/p^sℤ)^n. In fact, the higher the minimum distance of 𝒞, the more errors can be corrected.
The task of bounding the minimum distance, clearly depends on the metric used to endow the ambient space.
For the Hamming metric over finite fields, the best-known bound is the Singleton bound describing the trade-off between the minimum Hamming distance of a code and its dimension.
Given a linear code 𝒞⊆𝔽_q^n of dimension k over 𝔽_q, its minimum Hamming distance is upper bounded by
d_𝖧(𝒞) ≤ n - k + 1.
Codes achieving this bound are called maximum distance separable (MDS). It is well known that for q tending to infinity a random linear code will attain the Singleton bound. Hence, MDS codes are dense for q tending to infinity with high probability. On the other hand, if we let the length n grow, MDS codes are sparse.
Note that this also follows immediately from the famous MDS conjecture <cit.>, which states that if q is odd, then an MDS code must have n ≤ q+1; the only exceptions occur for q=2^s and k = 3 or k = q-1, in which case n ≤ q + 2.
A very analogous bound can be established over finite integer residue rings for the Hamming metric.
Let 𝒞⊆( ℤ/p^sℤ)^n be a linear code of ℤ/p^sℤ-dimension k. Then
d_𝖧(𝒞) ≤ n-k+1.
This Singleton-like bound does hold for non-linear codes as well, stating that
|𝒞|≤ q^n- d_𝖧(𝒞) +1. For linear codes only, this bound has been further tightened:
Let 𝒞⊆( ℤ/p^sℤ)^n be a linear code of rank K. Then
d_𝖧(𝒞) ≤ n-K+1.
To differentiate codes achieving the bound in Proposition <ref> from MDS codes, we will refer to the former as maximum distance codes with respect to the rank (or MDR codes for short). However, MDS codes and MDR codes are closely related. In fact, any linear MDS code is always an MDR code. Vice versa, we can say that a code 𝒞⊆ (ℤ/p^sℤ)^n is MDR if and only if the socle 𝒞∩⟨ p^s-1⟩ can be identified with an MDS code over 𝔽_p. We can therefore directly apply the results from finite fields, which means that by the MDS conjecture, MDR codes are sparse as n grows large and, as usual, they are dense as p grows large.
Note that the relation
d_𝖧(𝒞) ≤ d_𝖫(𝒞) ≤ M· d_𝖧(𝒞)
immediately gives rise to a bound for codes in the Lee metric, i.e.,
d_𝖫(𝒞) ≤ M(n-⌊ k ⌋ +1).
Apart from this obvious bound, the first Lee-metric Singleton-like bound was introduced by Shiromoto in 2000 <cit.>.
Consider a linear code 𝒞⊆ (ℤ/p^sℤ)^n of ℤ/p^sℤ-dimension k. Then the following bound holds
⌊ (d_𝖫(𝒞) - 1)/M ⌋≤ n - k.
Note that this bound implies d_𝖫(𝒞) ≤ M(n-k)+α for some α∈{1, …, M}, and can be generalized easily to d_𝖫(𝒞) ≤ M(n-⌊ k ⌋)+α, again for some α∈{1, …, M}.
The bound from Theorem <ref> can be derived from the simple fact that d_𝖫(𝒞) ≤ M· d_𝖧(𝒞) for a code 𝒞⊆ (ℤ/p^sℤ)^n, or by a puncturing argument, as in the classical case.
That is, given a code 𝒞 of size |𝒞| and minimum Lee distance d_𝖫(𝒞), we can puncture the code in only ⌊ (d_𝖫(𝒞) - 1)/M ⌋ coordinates to make sure that the punctured code 𝒞' still has size |𝒞|. In fact, every pair of distinct codewords in 𝒞 has Lee distance at least d_𝖫(𝒞); by puncturing in one position, one has to assume that their distance decreased by the maximal possible value, i.e., by M.
The claim then follows easily as
𝒞' ⊆(ℤ/p^sℤ)^n- ⌊ (d_𝖫(𝒞) - 1)/M ⌋.
Shiromoto also provided a code which attains the bound, showing its tightness.
Let 𝒞= ⟨ (1,2) ⟩⊆(ℤ/5ℤ)^2. Then, since d_𝖫(𝒞)=3, this code attains the bound of Theorem <ref>, as
⌊ (d_𝖫(𝒞) - 1)/M ⌋ = ⌊ (3-1)/2⌋=1= n - k =2-1.
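This can be verified directly with the brute-force helpers introduced above (again a sketch of ours, assuming lee_weight and min_lee_distance as defined there).

d = min_lee_distance([[1, 2]], 5)      # the code <(1, 2)> over Z/5Z
assert d == 3
assert (d - 1) // 2 == 2 - 1           # floor((d_L - 1)/M) = n - k, so the bound is attained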
However, in <cit.>, it was observed that this is actually the only non-trivial linear code that attains the bound in Theorem <ref>. The Lee-metric Singleton bound can be further improved, as shown in <cit.>, e.g. by employing the rank K.
Let 𝒞⊆(ℤ/p^sℤ)^n be a linear code of rank K. Then
d_𝖫(𝒞) ≤ M(n-K+1).
Alderson and Huntemann in <cit.> provided a similar bound to Corollary <ref> by restricting k to a positive integer bounded by n.
For any code in (ℤ/p^sℤ)^n whose ℤ/p^sℤ-dimension k is a positive integer with 1<k<n, we have that
d_𝖫(𝒞) ≤ M(n-k).
However, in <cit.>, the authors characterized all codes attaining the above Lee-metric Singleton bounds, with the result that their optimal codes are sparse in both cases, i.e., when n, p or s →∞.
Using the newly introduced parameter for the code, i.e., the support subtype, one can easily derive an improved Lee-metric Singleton bound from the puncturing argument.
Let 𝒞⊆(ℤ/p^sℤ)^n be a linear code of rank K and support subtype (n_0, …, n_s-1,0). Define for all i ∈{0, …, s}
M_i = ⌊ p^s-i/2⌋ p^i, B_j = ∑_i=j^s-1 n_i, A_j = ∑_i=j^s-1 n_iM_i.
Let j ∈{1, …, s} be the smallest positive integer such that A_j < d_𝖫(𝒞). Then
K ≤ n-B_j- ⌊ (d_𝖫(𝒞)-A_j-1)/M_j-1⌋.
We start by puncturing the code in the positions of smallest possible Lee weight. To identify these positions, we use the support subtype. Clearly, in the ideal ⟨ p^i⟩ the largest possible Lee weight is M_i= ⌊ p^s-i/2⌋ p^i, and thus we would start puncturing in the positions where all codewords live in ⟨ p^s-1⟩, i.e., in the positions belonging to the support subtype n_s-1.
We hence assume that the minimum distance between two distinct codewords decreased by A_s-1=n_s-1M_s-1. If this is still smaller than the minimum Lee distance, we can continue puncturing in the next ideal, namely ⟨ p^s-2⟩.
We continue in this fashion, every time puncturing the n_i positions belonging to ⟨ p^i⟩ and decreasing the distance by at most n_iM_i, until A_j=∑_i=j^s-1 n_iM_i has reached the minimum Lee distance. We are then left with codewords that are at least d_𝖫(𝒞)-A_j apart, thus we can continue puncturing in ⌊ (d_𝖫(𝒞)-A_j-1)/M_j-1⌋ positions living in ⟨ p^j-1⟩, i.e., belonging to the support subtype n_j-1, and still be sure that the punctured code has the same size as the original code.
In this case, the punctured code has length n-B_j -⌊ (d_𝖫(𝒞)-A_j-1)/M_j-1⌋, for B_j= ∑_i=j^s-1 n_i.
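The computation carried out in this proof is easy to automate. The following Python sketch is ours (the function name is hypothetical; it assumes a non-degenerate code, i.e. n_s=0, and that the minimum Lee distance is known) and evaluates the bound of Theorem <ref> from the support subtype.

def rank_bound_from_support_subtype(p, s, n, subtype, d_lee):
    # subtype = (n_0, ..., n_s); M_i = floor(p^(s-i)/2) * p^i
    M = [(p ** (s - i) // 2) * p ** i for i in range(s + 1)]
    A = lambda j: sum(subtype[i] * M[i] for i in range(j, s))
    B = lambda j: sum(subtype[i] for i in range(j, s))
    j = next(j for j in range(1, s + 1) if A(j) < d_lee)
    return n - B(j) - (d_lee - A(j) - 1) // M[j - 1]

# With the parameters of the subsequent example (as stated there): p = 3, s = 2, n = 4,
# support subtype (2, 2, 0) and minimum Lee distance 6, the bound on the rank is 3.
assert rank_bound_from_support_subtype(3, 2, 4, (2, 2, 0), 6) == 3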
Let us consider 𝒞⊆(ℤ/9ℤ)^4 generated by
G = [ 1 0 2 3; 0 3 6 0; 0 0 3 6 ].
The ℤ/9ℤ-dimension of this code is k=2.
The minimum Lee distance of this code is ()=6. The code also has support subtype (2,2,0). Thus, we would identify j=s=2, as we cannot puncture in both positions belonging to n_1=2, namely the second and the last column, as we would get n_1 M_1=6 ≮(). However, we can puncture in one of these two columns. In fact, ⌊()-0-1/3⌋ = 1. That is, the bound in Theorem <ref> is attained as
K=3 =4-0-⌊6-0-1/3⌋ = n-B_j- ⌊()-A_j-1/M_j-1⌋.
The bound from Theorem <ref> would instead give
⌊()-1/M⌋ = ⌊6-1/4⌋ =1 < 2 = n- k.
Since we are also in the case where k is an integer strictly larger than 1, we can also apply the bound from Theorem <ref>, and get
() =6 < 8= (4-2) · 4 = (n-k)M.
We can rewrite the bound from Theorem <ref> as upper bound on the minimum Lee distance as, for j the smallest positive integer with A_j < () we have
() ≤ M_j-1(∑_i=0^j-1 n_i -K) +∑_i=j^s-1 n_i M_i + α
for some α∈{1, …, M_j-1}.
However, the condition to find the smallest j such that A_j < (), renders the bound impractical, as usually one does not know the minimum Lee distance of the code and thus wants to bound it.
§ GENERALIZED HAMMING WEIGHTS
In this section we propose a new definition of generalized Lee weights. We base the definition on the Hamming weight counterparts. Generalized Hamming weights have originally been introduced in <cit.> and were then rediscovered by Wei in <cit.>. Generalized weights in the Hamming metric have been studied in various areas <cit.>. In <cit.> the authors defined and the generalized Hamming weights of ring-linear codes by considering the join-Hamming support of a code.
Let us recap the definition of a Hamming support of a vector and a code. For this, consider a finite field _q of q elements and a positive integer n. The Hamming support of x ∈𝔽_q^n is defined to be the set of indices where x is nonzero, i.e.,
(x) := i ∈1, …, n x_i ≠ 0 .
Note here, that the cardinality of the support of x corresponds to the Hamming weight of x, that is
(x) = (x).
Let us consider now a linear code ⊂^n of length n and dimension K. For the definition of the Hamming support of we take into account every codeword c∈ and for each index, we figure out whether a codeword exists which is nonzero in this position, i.e.,
()= {i ∈{1, …, n }|∃ c ∈, c_i ≠ 0}.
Analogously to the definition of the weight of a vector x in (<ref>), we can define the weight of a code to be the size of its support, i.e.,
()= () .
The goal is to generalize these notions to other metrics and ambient spaces. In particular, we are interested in the Lee metric defined over rings. In order to do so, we will follow the approach of <cit.>.
Let ℛ denote a finite unitary ring.
Let us consider a weight function
: ℛ→ℕ,
which is such that
* (0)=0 and (x)>0 for all x ≠ 0,
* (x)=(-x),
* (x+y) ≤(x)+(y).
Note the difference to the definition used in <cit.> which corresponds to a norm, as they also require the absolute homogeneity property. That is, for any λ∈ℛ∖{0}, we have that
4. (λ x)= (x).
This property holds for the Hamming weight and many other weights, however it is not a requirement for a weight function. The properties <ref>.-<ref>. are enough to induce a distance. In fact, for a weight function with properties <ref>.-<ref>. we can define a distance as
: ℛ×ℛ →ℕ,
(x,y) ↦(x-y).
By abuse of notation, we will also denote their coordinate-wise extension by and d, that is for x,y ∈ℛ^n we have
(x)= ∑_i=1^n (x_i) and (x,y)= ∑_i=1^n (x_i,y_i).
We call such weight functions additive weights.
Given a weight function, one can then define the support of x ∈ℛ^n as an n-tuple
(x) := ( (x_1), …, (x_n) ).
As we are now dealing with an n-tuple instead of a subset of {1, …, n}, we require some additional definitions.
Let s ,t∈ℕ^n be n-tuples.
The size of s is given by the sum of its entries, that is s = ∑_i=1^n s_i.
The join of s and t, denoted by s ∨ t is given by taking the maximum in each position, that is
s ∨ t= ( max{s_1,t_1}, …, max{s_n,t_n}).
The meet of s and t, denoted by s ∧ t is given by taking the minimum in each position, that is
s ∧ t = (min{s_1, t_1}, …, min{s_n,t_n}).
Since the weight is additive, we have that
(x) =∑_i=1^n (x_i)= (x).
The Hamming support is thus such an example, where instead of considering the support as subset of {1, …, n}, the support is considered as the n-tuple
(x) = ( (x_1), …, (x_n) ).
In order to extend this to the support of codes, we have several options. One of those, is the join-support, as considered in <cit.>:
for ⊆ℛ^n a linear code, we define its join-support as
() := ( max_c ∈(c_1), … , max_c ∈(c_n) )= ⋁_c ∈(c).
Note that another possibility would be to define the meet-support, as follows
() := ( min_c ∈{max{(c_1),0}}, …, min_c ∈{max{(c_n),0}})
= ⋀_c ∈ ((c) ∨ 0).
As the Hamming weight of nonzero elements equals one, we observe that the join-support coincides with the meet-support of a code in the Hamming metric, i.e.,
() = ().
Let us consider the code over 𝔽_3 generated by
G= [ 1 0 0 1 0; 0 1 0 1 0; 0 0 1 0 0 ].
With the usual definition of the Hamming support in (<ref>), we have that
()= {1,2,3,4}.
With the join-support, we are considering the maximal value of the weight of the entries of a codeword in each position, that is
()=(1,1,1,1,0).
For the meet-support, we take the minimum nonzero value of the weight of the entries of a codeword in each position, which also gives (1,1,1,1,0).
By applying definition of the weight of a code (<ref>) we observe that all the three support definitions of yield the same weight
() = () = 4,
() = () = 4.
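The computation in this example is easily reproduced in code. The sketch below is ours (the helper names are hypothetical, and the meet is taken over the nonzero entries of each position, as in the example); it enumerates the code over 𝔽_3 and returns both supports.

from itertools import product

def hamming_wt(a):
    return int(a != 0)

def join_support(codewords):
    n = len(next(iter(codewords)))
    return tuple(max(hamming_wt(c[j]) for c in codewords) for j in range(n))

def meet_support(codewords):
    n = len(next(iter(codewords)))
    return tuple(min((hamming_wt(c[j]) for c in codewords if c[j] != 0), default=0)
                 for j in range(n))

G = [[1, 0, 0, 1, 0], [0, 1, 0, 1, 0], [0, 0, 1, 0, 0]]
C = {tuple(sum(u[i] * G[i][j] for i in range(3)) % 3 for j in range(5))
     for u in product(range(3), repeat=3)}
assert join_support(C) == meet_support(C) == (1, 1, 1, 1, 0)   # weight 4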
In the classical case, one defines the r-th generalized weights as follows.
Let ⊆𝔽_q^n be a linear code of dimension k. Then for any r ∈{1, …, k} the r-th generalized weight is given by
^r()= min{wt(𝒟) |𝒟≤, dim(𝒟) = r}.
For the generalized weights, we want the following properties to hold.
Let be a linear code of dimension k. Then we have
* () = ^1(),
* ^r() < ^r+1() for every 1 ≤ r < k,
* ^k() = ().
For the Hamming support and the rank support these properties have been showed in <cit.>. In <cit.> they get similar properties, with the exception of ^r() ≤^r+1(), instead of the strict inequality.
The strict inequality is important to us, however, as it then leads to neat Singleton bounds, that is as
()=^1() < ^2() < ⋯ < ^k-1() <^k() = (),
we get
() ≤()-k+1.
Note that for non-degenerate codes we have that ()=n, and thus we retrieve the classical Singleton bound
()≤ n-k+1.
Since we move from the classical case of finite fields to rings, we have to exchange the fixed dimension of the subcodes with a ring-analogue parameter. A natural choice would be the ℤ/p^sℤ-dimension, but as this value is not necessarily an integer and there might not exist subcodes of 𝒞 of certain fixed smaller rational number as the ℤ/p^sℤ-dimension, we choose to discard this option.
In <cit.>, the authors chose to exchange the dimension with the subtype.
In fact, in the same paper the authors defined generalized Lee weights for ℤ/4ℤ. This particular case is, however, not of interest for us, as the Lee-metric Singleton bound over ℤ/4ℤ directly follows from the Gray isometry.
Following the idea of <cit.>, a first attempt on defining generalized weights over ℤ/p^sℤ would be the following.
Let ⊆(ℤ/p^sℤ)^n be a linear code of subtype (k_0, …, k_s-1). Then for any (r_0, …, r_s-1) with r_i ≤ k_i for all i ∈{1, …, s-1} the (r_0, …, r_s-1)-th generalized weight is given by
^(r_0, …, r_s-1)()= min{wt(𝒟) |𝒟≤, 𝒟 has subtype (r_0, …, r_s-1)}.
Note that this definition is not considering all possible subcodes or all possible subtypes of subcodes.
To allow for a comparison between two different subtypes (r_0, …, r_s-1) and (r_0', …, r_s-1') which might have r_i<r_i' for some i but r_j > r_j' for some j, a natural choice is
to impose a lexicographical order, i.e., we consider the order
(k_0,…, k_s-1) >(k_0-1,…, k_s-1) > ⋯ > (0,k_1, …, k_s-1) > ⋯ > (0,…, 0, 1).
However, then the property ()=^(0, …, 0,1)() is not guaranteed.
In fact, a minimum Lee weight codeword will live in a subcode having subtype one of the standard vectors e_i. Thus, we have ()= ^e_i() for some i.
Observing that this just means to fix the rank of the subcode as 1, we choose to directly fix the rank instead.
Let 𝒞⊆(ℤ/p^sℤ)^n be a linear code of rank K. Then for any r ∈{1, …, K} the r-th generalized weight is given by
^r()= min{wt(𝒟) |𝒟≤, rk(𝒟)=r }.
§ JOIN-SUPPORT IN THE LEE METRIC
Now we turn our focus on exchanging the Hamming weight with the Lee weight.
We want to define the Lee support and hence the generalized Lee weights in a similar fashion.
For the Lee support of a vector x ∈ ()^n we view the Lee support as an n-tuple and define it analogous to the Hamming support, i.e.,
(x) := ((x_1), … ,(x_n)).
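For concreteness, the Lee weight of a single coordinate and the Lee support of a vector over ℤ/p^sℤ can be computed as in the following Python sketch (the modulus and the example vector are chosen arbitrarily for illustration).

```python
def lee_weight(x, m):
    # Lee weight of x in Z/mZ: distance to 0 on a cycle of length m
    x %= m
    return min(x, m - x)

def lee_support(vec, m):
    # Lee support of a vector as an n-tuple of Lee weights
    return tuple(lee_weight(x, m) for x in vec)

m = 9                          # p = 3, s = 2
x = (1, 0, 3, 7)
print(lee_support(x, m))       # (1, 0, 3, 2)
print(sum(lee_support(x, m)))  # 6, the Lee weight of x
```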
We now want to define the Lee support and the generalized Lee weights of a code ⊆ℤ/p^sℤ^n, according to the two possibilities: the join-support and the meet-support. However, in the Lee metric, the meet-support is not practical to derive bounds on the minimum distance for the code. Let us quickly argue why.
For a code ⊂ ()^n we define the Lee meet-support as the minimal (if possible) nonzero Lee weight in each position among all codewords, meaning that
() := (min_c ∈{max{(c_1), 0}} , … , min_c ∈{max{(c_n), 0 }}).
For ⊆ ()^n of support subtype (n_0, …, n_s), we have that
() = ()= ∑_i=0^s n_ip^i.
The Lee meet-support asks to take the smallest nonzero Lee weight in position j and then to sum over all entries j ∈{1, …, n}. Since any position belonging to the support subtype n_i is living in the ideal ⟨ p^i ⟩, this position has as smallest nonzero Lee weight p^i.
We can then define the r-th generalized meet-Lee weights.
Let ⊆(ℤ/p^sℤ)^n be a linear code of rank K.
For i ∈{1, …, K} define the i-th generalized meet-Lee weight as
^i()= min{(𝒟) |𝒟≤, rk(𝒟)= i}.
Unfortunately, the meet-support in the Lee metric does not generally fulfill the property
() ≤^1().
As an easy example for () > ^1(), consider =⟨ (1,2) ⟩⊆ (ℤ/9ℤ)^2. The minimum Lee distance of this code is 3; however, the first generalized meet-Lee weight is 2, as the meet-Lee support of ⟨(1,2)⟩ is (1,1). This will then not lead to a Singleton bound and is thus discarded.
Instead we will now focus on the join-Lee support, as also promoted in <cit.>.
For a code ⊂ ()^n its join-Lee support is defined as the maximal possible Lee weight in each position among all codewords, i.e.,
() := (max_c ∈(c_1), … , max_c ∈(c_n)).
For ⊆ ()^n of support subtype (n_0, …, n_s), we have that
() = ()= ∑_i=0^s n_iM_i.
In each index j ∈{1, …, n}, we can check in which ideal this coordinate of the code lives. Let us assume that this is ⟨ p^i ⟩, for some i ∈{0, …, s}. Since the support of the code takes the maximum over all codewords in the code, we will reach in this entry the maximal Lee weight of the ideal ⟨ p^i ⟩, which is given by M_i= ⌊p^s-i/2⌋ p^i. Since we know the support subtype of the code, we have n_i many of this entries.
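The value M_i = ⌊p^s-i/2⌋ p^i used here is simply the largest Lee weight occurring in the ideal ⟨p^i⟩; this is easy to confirm numerically, as in the following Python sketch (parameters chosen arbitrarily).

```python
def lee_weight(x, m):
    x %= m
    return min(x, m - x)

p, s = 3, 3
m = p ** s
for i in range(s + 1):
    ideal = range(0, m, p ** i)                       # the ideal <p^i> of Z/27Z
    max_wt = max(lee_weight(x, m) for x in ideal)
    print(i, max_wt, (p ** (s - i) // 2) * p ** i)    # 0 13 13 / 1 12 12 / 2 9 9 / 3 0 0
```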
The r-th generalized join-Lee weight is then defined as follows.
Let ⊆(ℤ/p^sℤ)^n be a linear code of rank K.
For i ∈{1, …, K} we define the ith generalized join-Lee weight as
^r()= min{(𝒟) |𝒟≤, rk(𝒟)=r}.
Let us consider an example, which also perfectly shows the differences between the meet-Lee support and the join-Lee support.
Let us consider the code ⊆ℤ/9ℤ^4 generated by
G= [ 1 0 3 2; 0 1 2 0; 0 0 3 3 ],
which has support subtype (4,0,0) and minimum Lee distance 2, for example (1,0,0,8) is a minimal Lee weight codeword.
For the generalized meet-Lee weights we have that
() ≥^1() ≤^2()=^3() = ().
Since
^1() =(⟨ (0,1,2,0)⟩)= 2
^2() = (⟨ (1,0,3,2),(0,1,2,0)⟩)= 4
^3() = (⟨ G ⟩)= 4 = ().
For the generalized join-Lee weights we have that
() ≤^1() < ^2() < ^3() ≤().
Since
^1() =(⟨ (0,0,3,3) ⟩)= 6
^2() = (⟨ (0,0,3,3),(3,0,0,6) ⟩ )= 9
^3() = (∩⟨ 3⟩)=12
() = 16.
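All quantities in this example are small enough to verify exhaustively. The following Python sketch (helper names are ad hoc) enumerates the relevant subcodes of 𝒞 ⊆ (ℤ/9ℤ)^4 and recomputes the meet- and join-Lee weights listed above, as well as the minimum Lee distance.

```python
import itertools

m = 9                                    # Z/9Z, i.e. p = 3, s = 2

def lee_wt(x):
    x %= m
    return min(x, m - x)

def span(gens):
    # all Z/9Z-linear combinations of the given generators
    n = len(gens[0])
    return {tuple(sum(a * g[j] for a, g in zip(coeffs, gens)) % m for j in range(n))
            for coeffs in itertools.product(range(m), repeat=len(gens))}

def join_wt(code):
    n = len(next(iter(code)))
    return sum(max(lee_wt(c[j]) for c in code) for j in range(n))

def meet_wt(code):
    # minimal nonzero Lee weight per position (0 if a position is identically zero)
    n = len(next(iter(code)))
    total = 0
    for j in range(n):
        nz = [lee_wt(c[j]) for c in code if c[j] != 0]
        total += min(nz) if nz else 0
    return total

C = span([(1, 0, 3, 2), (0, 1, 2, 0), (0, 0, 3, 3)])
print(min(sum(lee_wt(x) for x in c) for c in C if any(c)))          # 2  (minimum Lee distance)
print(meet_wt(span([(0, 1, 2, 0)])))                                # 2
print(meet_wt(span([(1, 0, 3, 2), (0, 1, 2, 0)])), meet_wt(C))      # 4 4
print(join_wt(span([(0, 0, 3, 3)])))                                # 6
print(join_wt(span([(0, 0, 3, 3), (3, 0, 0, 6)])))                  # 9
print(join_wt({c for c in C if all(x % 3 == 0 for x in c)}))        # 12 (socle C ∩ <3>)
print(join_wt(C))                                                   # 16
```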
The subcodes which attain the r-th generalized join-Lee weights all live in the socle.
By contradiction, assume that ≤ of rank r achieves the r-th generalized Lee weight ^r() and 𝒟 does not live in the socle. That is, if 𝒟 has support subtype (n_0, …, n_s), then for some i < s-1 we have n_i ≠ 0. Thus,
^r()= (𝒟)≤∑_i=0^s-1 n_iM_i.
By considering the subcode _0 = ∩⟨ p^s-1⟩, we observe that its support subtype is (0, …, 0, n_0 + ⋯ + n_s-1, n_s). Furthermore,
(_0) = M_s-1(n_0 + ⋯ +n_s-1) < ∑_i=0^s-1 n_iM_i,
since M_s-1<M_i for all i < s-1. This gives a contradiction to the minimality of the subcode 𝒟.
Thus, it is enough to only consider the generalized join-Lee weights of the socle ∩⟨ p^s-1⟩.
Let ⊆ℤ/p^sℤ^n be a linear code of rank K. Then for all r ∈{1, …, K} we have
^r() = ^r(∩⟨ p^s-1⟩).
This property gives us an immediate relation to the generalized Hamming weights.
In fact, the socle can be considered as a code over 𝔽_p and the subcodes which attain the minimal join-Lee support are then those which attain the minimal Hamming support.
Let ⊆ℤ/p^sℤ^n be a linear code of rank K. Then for all r ∈{1, …, K} we have
^r() = ^r() M_s-1.
Thus, we can use the properties of the generalized Hamming weights to show the following.
Let ⊂ ()^n be a linear code of rank K. Then we have
* () ≤^1().
* ^r() < ^r+1() for every 1 ≤ r < K.
* ^K() ≤().
The first property follows easily from the definition of the join-Lee support of a vector x. It can be tight whenever the minimal Lee weight codeword is in the socle, which need not be the case.
For the second property we simply use Corollary <ref> and the third property also simply follows from the definition of join-Lee support.
In fact, we do not recover the exact properties of the generalized Hamming weight codes. We do not have ()= ^1() and ()= ^K(). This seems to be the price we have to pay in order to drop the absolute homogeneity property and to be able to consider the Lee metric.
However, unlike the meet-Lee support we get a nice chain of inequalities:
() ≤^1() < ^2() < ⋯ < ^K() ≤().
This gives us a new Lee-metric Singleton bound.
Let ⊂ ()^n be a (non-degenerate) linear code of rank K. Then we have
() ≤ M_s-1(n-K+1)= ⌊ p/2 ⌋ p^s-1 (n-K+1).
Using the properties 1.-3. from Proposition <ref> we know that
() ≤^1() = ^K() -∑_i=1^K-1 x_i,
where
x_i = ^i+1() - ^i().
From Corollary <ref> we know that
x_i=^i+1() - ^i() ≥ M_s-1.
We get the claim using that
^K()= ∑_i=0^s-1 n_i M_s-1 = nM_s-1,
where we have assumed that the code is non-degenerate.
Note that
we could have gotten this bound also by directly using
() ≤^1() = ^1() M_s-1 = () M_s-1≤ (n-K+1) M_s-1.
This new Singleton bound is clearly much sharper than the previously known Lee-metric Singleton bounds, for example the bound from Theorem <ref>.
§.§ Density of Optimal Codes with respect to the Join-Lee Support
Clearly, any code ∈ ()^n of rank K attaining this bound can be characterized by the following two properties:
* The socle _s-1=∩⟨ p^s-1⟩ is an MDS code over 𝔽_p.
* There exists a minimum Lee weight codeword in the socle.
The first property already implies sparsity as n tends to infinity, and triviality for p=2. Even the second property is problematic: (_s-1)=(n-K+1)M_s-1 implies that all nonzero entries of a minimal Hamming weight codeword in the socle must be of maximal Lee weight. Using the systematic form of the socle,
G_s-1 = [ p^s-1𝕀_K p^s-1A ],
we can immediately see that any row g of G_s-1 is also of minimal Hamming weight n-K+1. Thus, for g to have only nonzero entries of maximal Lee weight implies p^s-1=M_s-1, which restricts optimal codes with respect to this bound to p∈{2,3} and any positive integer s. Because of the MDS property over 𝔽_3, we must have block length n≤ 4.
We can drop the second condition, i.e., there exists a minimal Lee weight codeword in the socle, if we manage to estimate the difference
^1() - ().
This task is, however, equally hard as bounding () itself.
Hence, knowing that only codes over ℤ/p^sℤ with p ∈{2,3} and length n ≤ 4 can attain this bound,
the socles of the codes 𝒞⊆ (ℤ/3^sℤ)^n with n ≤ 4 attaining the bound in Theorem <ref> are generated by a matrix of the form
[ 3^s-1 0 0 A; 0 3^s-1 0 B; 0 0 3^s-1 C ],
where A, B, C ∈ℤ/3^sℤ are such that their Lee weight is M_s-1.
For instance, the code ⊆(9)^4 of rank K = 3 generated by
[ 3 0 0 3; 0 3 0 6; 0 0 3 6 ].
This code has ()=6 and thus attains the join-Lee metric Singleton bound as ()=6= 3 · (4-3+1)= M_s-1(n-K+1).
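That this code indeed attains the join-Lee Singleton bound can be confirmed by exhaustive search over all coefficient combinations; a short Python sketch:

```python
import itertools

m, p, s, n, K = 9, 3, 2, 4, 3
G = [(3, 0, 0, 3), (0, 3, 0, 6), (0, 0, 3, 6)]

def lee_wt(x):
    x %= m
    return min(x, m - x)

code = {tuple(sum(a * g[j] for a, g in zip(coeffs, G)) % m for j in range(n))
        for coeffs in itertools.product(range(m), repeat=K)}

d_min = min(sum(lee_wt(x) for x in c) for c in code if any(c))
M_sm1 = (p // 2) * p ** (s - 1)            # M_{s-1} = 3
print(d_min, M_sm1 * (n - K + 1))          # 6 6  -> the bound is attained
```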
Note that for MDS codes, we actually know all r-th generalized Hamming weights: let 𝒞⊆𝔽_q^n be a linear code of dimension k, then
^r()=n-k+r.
Thus, a natural question that arises is whether the optimal codes with respect to the newly defined Lee-metric Singleton bound have a similar behaviour. That is, we are interested in an expression for the r-th generalized join-Lee weight ^r() for every r ∈{1, … , K}.
Let ⊆ ()^n be code of rank K attaining the join-support bound in Theorem <ref>. Then, for each r ∈{1, … , K}, the r-th generalized join-Lee weight is given by
^r() = p^s-1(n-K+r).
Recall that 𝒞 is optimal with respect to the join-support Lee-metric Singleton bound if its minimum Lee weight codeword has the form
c_min = ( [ 0 ⋯ 0 p^s-1 0 ⋯ 0 ± p^s-1 ⋯ ± p^s-1 ] ).
Thus, it holds ^1() = p^s-1(n - K + 1).
For arbitrary r∈{1, … , K}, the rank-r subcodes of attaining the r-th generalized join-Lee weight are contained in the socle of the code as well. Hence, they admit a generator matrix G_r which is permutation equivalent to a matrix of the form
G_r = ( [ 0 p^s-1𝕀_r ± p^s-1𝕁 ] ),
where 0 is the all-zero matrix of size r× (K-r) and 𝕁 is an all-one matrix of size r× (n-K). Thus, the desired result follows.
§.§ Invariance under Isometry in the Lee Metric
For the generalized Hamming weights of a linear k-dimensional code 𝒞⊆𝔽_q^n, we also know that ^r() = ^r('), for any equivalent code ' and any r ∈{1, …, k}.
Also for the generalized join-Lee weights we have the same behaviour.
Let 𝒞⊆( ℤ/ p^sℤ)^n be a linear code of rank K, then
^r()= ^r('), for all r ∈{1, …, K} and all ' which are equivalent to , under the Lee-metric isometries.
Recall that the Lee-metric isometries only consist of permuting the positions and multiplying any position by 1 or -1. Thus, all codewords of ' can be written as c' =σ(c) ⋆ v, for some permutation σ and v ∈{1,-1}^n, where ⋆ denotes the coordinatewise multiplication and c ∈. Now the claim follows immediately as
^r() = min{ |( max_c ∈𝒟{(c_1) }, …, max_c ∈𝒟{(c_n) } )| |𝒟≤𝒞, rk(𝒟)=r}
= min{ |σ( max_c ∈𝒟{(c_1) }, …, max_c ∈𝒟{(c_n) } )| |𝒟≤𝒞, rk(𝒟)=r}
= ^r(').
§ COLUMN SUPPORT FOR THE LEE METRIC
We observe that in order to compute the r-th generalized Hamming weight of a code , all we do is consider a generator matrix G and count the number of nonzero columns, i.e., the column weight. In fact, for any r-th generalized Hamming weight one can choose r rows of G which attain the minimal column weight.
For a matrix A ∈ℛ^K × n we will denote by S_r(A) the r × n submatrices of A formed by choosing r of its rows.
Consider a matrix A = (a_1^⊤⋯ a_n^⊤) ∈ℛ^K × n. We define the column weight, wt_C(A), of A as the number of nonzero columns of A, i.e.,
wt_C(A) := |{ i ∈{1, … , n}| a_i ≠ 0 ∈ℛ^K }|.
The column support, supp_C(A), of A is given by
supp_C(A) := ( max(supp(a_1)), …, max(supp(a_n)) ).
Again we have the nice property that |supp_C(A)| = wt_C(A).
In fact,
wt_C(A) = |supp_C(A)| = ∑_i=1^n max(supp(a_i)).
Thus, we can define the column support, column weight and the generalized column weights of a code.
Let ⊆ℛ^n be a linear code of rank K. The column support of is given by the minimal column support of any generator matrix, i.e.,
_C()=min_G: ⟨ G ⟩ = _C(G).
The column weight of a code is then given by the size of the column support, i.e.,
()= _C().
Finally, the r-th generalized column weight of is defined as
^r()= min{() |≤, ()=r}.
Note that the definition of the r-th generalized column weight of a linear code ⊂ℛ^n of rank K is equivalent to
^r()= min{wt_C(S_r(G)) | rk(⟨ S_r(G) ⟩ )=r, ⟨ G ⟩ = }.
The difficulty of this new definition lies in the choice of the generator matrix instead of the choice of the subcode. This is the only difference to the usual definition of join support and weight.
The difficulty of finding the correct generator matrix to read of the minimal column weights, or the subcode with minimal weight is equivalent.
Let us show the dependency on the choice of generator matrix in the following example.
Let us consider ⊆𝔽_2^5 generated by
G = [ 1 0 0 1 1; 0 1 0 1 1; 0 0 1 1 1 ].
If we were to compute the column (Hamming) weights of S_r(G), we would get for S_1(G)
([ 1 0 0 1 1 ])= 3.
However, this is not the first generalized Hamming weight of the code.
There exists a generator matrix G', such that S_r(G') attains the r-th generalized Hamming weights as column weights, for each r ∈{1, …, k}:
G'= [ 1 1 0 0 0; 0 1 1 0 0; 0 1 0 1 1 ].
Now we can read off the r-th generalized Hamming weights easily:
^1() = ([ 1 1 0 0 0 ])=2,
^2() = ([ 1 1 0 0 0; 0 1 1 0 0 ])=3,
^3() = ([ 1 1 0 0 0; 0 1 1 0 0; 0 1 0 1 1 ])=5.
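The values 2, 3, 5 read off from G' can also be recovered directly from the subcode definition by enumerating all subcodes of this small binary code; the following brute-force Python sketch does exactly that (helper names are ad hoc).

```python
import itertools

G = [(1, 0, 0, 1, 1), (0, 1, 0, 1, 1), (0, 0, 1, 1, 1)]
n, k = 5, 3

def add(u, v):
    return tuple((a + b) % 2 for a, b in zip(u, v))

def span(vectors):
    s = {(0,) * n}
    for v in vectors:
        s |= {add(v, w) for w in s}
    return s

code = [c for c in span(G) if any(c)]      # all nonzero codewords

def support_size(subcode):
    return sum(1 for j in range(n) if any(c[j] for c in subcode))

for r in range(1, k + 1):
    d_r = min(support_size(span(vs))
              for vs in itertools.combinations(code, r)
              if len(span(vs)) == 2 ** r)   # keep only r-dimensional subcodes
    print(r, d_r)                           # 1 2 / 2 3 / 3 5
```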
Thus, the definition is not independent on the choice of generator matrix.
Let us now adapt the definitions to the Lee weight.
Consider a matrix A = [ a_1^⊤ ⋯ a_n^⊤ ]∈ℛ^K × n. Its column Lee support is given by the n-tuple
supp_L,C(A) = ( max(supp_L(a_1)), … , max(supp_L(a_n)) ).
The column Lee weight of A is given by
wt_L,C(A) = |supp_L,C(A)| = ∑_i=1^n max(supp_L(a_i)).
Note that this definition asks us to choose in each column the entry of maximal Lee weight.
Let us consider the matrix
G= [ 1 0 3 2; 0 1 2 0; 0 0 3 3 ]∈ℤ/9ℤ^3 × 4.
Then, the column Lee support and the column Lee weight of G are given by
(G)= (1,1,3,3) and (G) = 8.
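This computation amounts to taking the maximal Lee weight in each column and summing up; in Python, for the matrix above:

```python
m = 9
G = [(1, 0, 3, 2), (0, 1, 2, 0), (0, 0, 3, 3)]

def lee_wt(x):
    x %= m
    return min(x, m - x)

col_supp = tuple(max(lee_wt(row[j]) for row in G) for j in range(len(G[0])))
print(col_supp, sum(col_supp))   # (1, 1, 3, 3) 8
```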
We are now able to extend the definitions of column Lee support and column Lee weight to a linear code ⊆(ℤ/p^sℤ)^n of rank K.
Consider a linear code ⊆(ℤ/p^sℤ)^n of rank K. We define its column Lee support by the minimal column Lee weight of any generator matrix of , i.e.,
()=min_G: ⟨ G ⟩ = (G).
The column Lee weight of is then given by the size of its column Lee support, i.e.,
()= ().
As in the case for the Hamming metric, also in this case the definition is not independent on the choice of generator matrix. For this, we introduce the following matrix, called reduced systematic generator matrix.
Consider a matrix G ∈(ℤ/p^sℤ)^K × n as given in (<ref>). We say that G is in reduced systematic form if for every entry a of A_i, j∈ (ℤ / p^s+1-iℤ)^k_i × k_j with i < j ≤ s it holds that wt_L(a) ≤ p^j-1.
We will denote a matrix G in reduced systematic form by G_𝗋𝗌𝗒𝗌. Let us give an example to clarify Definition <ref>.
Consider G ∈27^3× 4
G = [ 1 14 11 0; 0 9 18 0; 0 0 9 18 ].
Note that G is in systematic form as defined in (<ref>). By elementary row reduction, i.e., by subtracting suitable multiples of the rows r_j from row r_i with 1 ≤ i < j ≤ 3, we obtain a matrix G_𝗋𝗌𝗒𝗌 in reduced systematic form
G_𝗋𝗌𝗒𝗌 = [ 1 5 -7 0; 0 9 9 -18; 0 0 9 18 ].
By a similar argument used to prove Proposition <ref> we observe the following.
Consider a linear code ⊆()^n of rank K and subtype (k_0, … , k_s-1). The code is permutation equivalent to a code having a generator matrix in reduced systematic form.
This new systematic form yields a natural upper bound on the column Lee weight of a code .
For this let us now consider the support subtype outside an information set of size K of the code. Since we can always find a permutation equivalent code, which has an information set in the first K positions, we can assume that we only consider the last n-K columns of a generator matrix in reduced systematic form. In order not to confuse it with the support subtype (n_0, …, n_s) of the entire generator matrix, we will denote it by (μ_0, … , μ_s).
Let ⊆ ()^n be a linear code of rank K and subtype (k_0, …, k_s-1) and let (μ_0, …, μ_s-1) be the support subtypes in the last n-K positions.
Then the column Lee weight of is upper bounded by
() ≤∑_i = 0^s-1 p^ik_i + ∑_i = 0^sμ_iM_i.
By Definition <ref> we have
() = min_G: ⟨ G ⟩ = (G) .
Furthermore, by Proposition <ref>, admits a generator matrix G_𝗋𝗌𝗒𝗌∈ (ℤ/p^sℤ)^K × n in reduced systematic form. Hence, the column Lee weight of G_𝗋𝗌𝗒𝗌 is a natural upper bound on the column Lee weight of the code, i.e.,
() ≤ (G_𝗋𝗌𝗒𝗌).
Thanks to the form of G_𝗋𝗌𝗒𝗌, we observe that the maximum Lee weight in the first K columns is given by the entry (G_𝗋𝗌𝗒𝗌)_i,i for i ∈{1, …, K}. For the last n-K columns we have to assume the maximal Lee weight. The support subtype (μ_0, …, μ_s) in these positions immediately tells us how many columns are contained in which ideal. Hence, for each column lying in ⟨ p^i ⟩ (where i is maximal for this column) the maximal Lee weight is M_i. This yields the desired result.
Let us now introduce the r-th generalized column Lee weights of a code .
Given a linear code ⊆ ()^n of rank K and subtype (k_0, … , k_s-1). The r-th generalized column Lee weight of is defined as
^r()= min{() ≤, ()=r}.
Similarly to Definition (<ref>), the r-th generalized column Lee weight is equivalent to
^r()= min{(S_r(G)) | rk(⟨ S_r(G) ⟩ )=r, ⟨ G ⟩ = }.
As in the Hamming-metric case, the difficulty lies now in finding a generator matrix attaining the r-th generalized column Lee weights.
To visualize this, let us return to our previous example for the Lee-metric support.
Let us consider the code ⊆ℤ/9ℤ^4 generated by
G= [ 1 0 3 2; 0 1 2 0; 0 0 3 3 ],
which has support subtype (4,0,0) and minimum Lee distance 2.
If we compute the minimal column weights of submatrices of G we get
([ 0 1 2 0 ]) =3,
([ 0 1 2 0; 0 0 3 3 ]) =7,
(G)=8.
However, there is a generator matrix of the code which is not in systematic form and which attains smaller column Lee weights:
G'=[ 8 0 0 1; 0 1 2 0; 0 8 1 3 ].
The r-th generalized Lee weights are then
^1() = ([ 8 0 0 1 ])=2 = (),
^2() = ([ 8 0 0 1; 0 1 8 0 ])=4,
^3() = ([ 8 0 0 1; 0 1 8 0; 0 0 3 0 ]) = 6 = ().
Note that both matrices within this example are of reduced systematic form.
Let 𝒞⊆ (ℤ/p^sℤ)^n be a code of rank K.
Let G^(i)∈(ℤ/p^sℤ)^i × n of rank i ∈{1, …, K-1} be a generator matrix of a subcode of attaining ^i(). Let c ∈ be such that [ G^(i); c ] is a generator matrix of a subcode of rank i+1. Then it holds,
( [ G^(i); c ]) > (G^(i)).
Let us define for all columns j ∈{1, …, n} the maximal Lee weight of the j-th column in G^(i) as A_j^(i).
We clearly have (G^(i)) ≤( [ G^(i); c ]).
Thus, let us assume that (G^(i)) = ( [ G^(i); c ]).
Then
∑_j=1^n A_j^(i) = ∑_j=1^n max{ A_j^(i), (c_j) }
and so for all j ∈{1, …, n} we have (c_j) ≤ A_j^(i).
However, as G^(i) attains ^i(), the sum ∑_j = 1^n A_j^(i) is minimal among all rank i subcodes of . Hence, there is no index j∈1, …, n for which (c_j) < A_j^(i) and thus for all j we have (c_j)=A_j^(i). This implies c_j = ± A_j^(i).
This means that c has in every position the maximal Lee weight over all rows of G^(i).
Thus, for every row g_ℓ of G^(i) with ℓ∈1, …, i for which (g_ℓ) > (c), we can add and/or subtract c to decrease its weight.
For each row ℓ∈1, … , i let us therefore define the sets
I_ℓ^- = { j ∈{1, …, n }|(c_j - g_ℓ j) < (c_j)},
I_ℓ^+ = {j ∈{1, …, n}|(c_j + g_ℓ j) ≤(c_j)}.
For a fixed row ℓ∈{1, …, i}, if
∑_j ∈ I_ℓ^-(c_j)< ∑_j ∈ I_ℓ^+(c_j),
then we add c to the row g_ℓ. If however,
∑_j ∈ I_ℓ^+(c_j) ≤∑_j ∈ I_ℓ^-(c_j),
we subtract c from that row g_ℓ.
We consider now the new row g'_ℓ := c ± g_ℓ which has a strictly smaller Lee weight than c.
Since the cases are similar, assume that for g_ℓ the first case is true, i.e., ∑_j ∈ I_ℓ^-(c_j) < ∑_j ∈ I_ℓ^+(c_j) and thus we add the row c, getting g'_ℓ:= c+g_ℓ.
Clearly for each position j in I_ℓ^- we at most added a Lee weight of A_j^(i), while in each position j in I_ℓ^+ we subtracted a Lee weight of at most A_j^(i), thus
(g'_ℓ)
=∑_j ∈ I_ℓ^+(g_ℓ j + c_j) + ∑_j ∈ I_ℓ^-(g_ℓ j+c_j)
< ∑_j ∈ I_ℓ^+(g_ℓ j + c_j) + ∑_j ∈ I_ℓ^- A_j^(i) + ∑_j ∈ I_ℓ^-(c_j)
<∑_j ∈ I_ℓ^+(c_j) - ∑_j ∈ I_ℓ^+ A_j^(i)
+ ∑_j ∈ I_ℓ^+ A_j^(i) + ∑_j ∈ I_ℓ^-(c_j)
= (c).
Doing this procedure for every row of the matrix G^(i), obtaining the new matrix G'^(i) of rank i, we have
(G'^(i)) < (G^(i)),
since in every row we now reduced the Lee weight, but this is a contradiction to G attaining ^i().
Finally, we are able to prove the desired properties for the generalized column Lee weights.
Let ⊆ ()^n be a linear code of rank K. Then
* ^1()=().
* ^r()< ^r+1() for all r<K.
* ^K()=().
For the first property, note that the column Lee weight of a 1 × n matrix is equal to the Lee weight of that n-tuple. Since a minimal Lee-weight codeword c is the rank 1 subcode of with the smallest column Lee weight, it attains _L,C(c)=^1().
The second property follows from Lemma <ref>.
Indeed, any matrix G^(i+1)∈( ℤ/p^sℤ)^(i+1) × n attaining ^i+1() can be written as G^(i+1)= [ G^(i); g' ]. Either G^(i) already attains ^i(), and hence
^i() = (G^(i)) < (G^(i+1))=^i+1(),
or G^(i) does not attain ^i(), and then
^i() < (G^(i)) ≤(G^(i+1)).
In either case, we get that
^i() < ^i+1().
Lastly, the third property follows immediately from the definition of the column Lee weight of a code .
The properties in Proposition <ref> allow us to deduce a natural Singleton-like bound for the Lee metric.
Given a linear code ∈ ()^n of rank K. The minimum distance of is upper bounded by
() ≤() - K + 1.
Using the properties given in Proposition <ref> we note that
() = ^1() ≤^K() - ∑_i = 2^K( ^i() - ^i-1() ).
By the strict inequality between the generalized column Lee weights, we have a difference of at least one, i.e.,
^i() - ^i-1() ≥ 1.
Since ^K() = (), the desired bound follows.
As for increasing parameters of a linear code ⊆ ()^n of rank K and subtype (k_0, … , k_s-1) it becomes harder to compute (), applying Proposition <ref> we obtain a direct consequence to Theorem <ref>, which requires no computational effort.
Given a linear code ∈ ()^n of rank K. The minimum distance of is upper bounded by
() ≤∑_i = 0^s-1 p^ik_i + ∑_i = 0^sμ_iM_i - K + 1.
The bounds given in Theorem <ref> and Corollary <ref> improve the Singleton bound by Shiromoto <cit.> and the one by Alderson-Huntemann <cit.>. In the proof of Theorem <ref> we bounded the differences ^i() - ^i-1() by one for every i = 2, … , K. However, for a relatively small rank K this bound is not very tight. The sum in Equation (<ref>) is a telescoping sum, meaning that
∑_i = 2^K( ^i() - ^i-1() ) = ^K() - ^1() = () - ().
Hence, the goal is now to derive a lower bound on the difference () - () allowing us to further tighten the Singleton-like bound.
In the following let ⊆ (ℤ/p^sℤ)^n be a linear code of rank K and subtype (k_0, …, k_s-1). Let us introduce the maximal index i ∈{0, …, s-1} for which k_i is nonzero, that is
σ := max{ i ∈{0, …, s-1}| k_i ≠ 0 }.
Let p be an odd prime. For a linear code ⊆ ()^n of rank K and subtype (k_0, … , k_s-1) and maximal subtype k_σ, we get the following lower bound
() - () ≥∑_i = 0^σ - 1( ∑_j = 0^i k_j )⌊ p/2 ⌋ p^i + (k_σ-1)p^σ.
Let us start by focusing on the generalized column Lee weights. Assume that c_1 ∈ is such that ^1() = (⟨ c_1 ⟩). By Lemma <ref>, we know that the generalized column Lee weights can be obtained in an iterative fashion. Hence, to find a subcode _2 of rank 2 we are looking for a codeword c_2 ∈ such that [ c_1; c_2 ] is of rank 2 and such that it minimizes ([ c_1; c_2 ]). We continue with this process until we obtain a matrix
G_K := [ c_1; ⋮; c_K ]
of rank K such that (G_K) = () = ^K().
Since the code is of subtype (k_0, … , k_σ, 0,…, 0), the rows of the matrix G_K each correspond to one of the σ blocks formed by the systematic form of G_K. To understand the difference of () and the first generalized column Lee weight ^1() we can think of successively removing rows from G_K until we are only left with the minimum weight codeword c_1. Thinking in the block-wise structure of a generator matrix in systematic form, at some point we will have cancelled k_i rows corresponding to the i-th block of . Hence, the minimal difference subtracted is
M_i-1 - M_i = ⌊ p/2 ⌋ p^i-1.
Doing this successively for every k_i, with i ∈{ 0, …, σ}, gives
∑_i = 0^σ - 1(∑_j = 0^i k_j) ⌊ p/2 ⌋ p^i.
At this point we are left only with a block corresponding to the rows belonging to the maximal subtype k_σ. The minimal difference between the rows of the same block is given by p^σ. Hence, cancelling (k_σ - 1) rows yields to a difference of p^σ(k_σ -1) and the desired result follows.
A natural consequence (combining Propositions <ref> and <ref>) is the next bound on the minimum Lee distance () of a code of given rank and subtype.
Consider a linear code ⊆ ()^n, where p is an odd prime. Let be of rank K and subtype (k_0, … , k_s-1) with maximal subtype k_σ and having support subtype (μ_0, …, μ_s-1) in the last n-K positions. Then the following upper bound on the minimum Lee distance of holds
() ≤∑_i = 0^s-1 p^ik_i + ∑_i = 0^sμ_iM_i - [ ∑_i = 0^σ - 1( ∑_j = 0^i k_j )⌊ p/2 ⌋ p^i + (k_σ-1)p^σ].
Let us give an example over 9.
Consider again the code generated by
G= [ 1 0 3 2; 0 1 2 0; 0 0 3 3 ],
over 9. In the last n-K = 1 column, the code has support subtype (μ_0, μ_1, μ_2) = (1, 0, 0), and its minimum Lee distance is 2. Furthermore, the code has subtype (k_0, k_1) = (2, 1), so σ = 1. Hence, by Corollary <ref>
() ≤ 2 + 3 + 1·4 - [ 2· 1 + (1 - 1)3 ] = 7.
Similarly to the join-support, examples of codes attaining this bound are codes generated by matrices G = [ p^s-1𝕀_K p^s-1A ] for A ∈ (ℤ/p^sℤ)^K × (n-K), where p = 3. In fact, for any odd p these codes have a minimum Lee distance d = p^s-1(n-K+1). Furthermore, we note that in the last n-K positions we have support subtype (0, …, 0, n-K) and M_s-1 = ⌊ p/2 ⌋ p^s-1. Hence, inserting these values in the bound given in Corollary <ref> gives
() ≤ p^s-1(1 + (n-K) ⌊ p/2 ⌋).
This is equal to () exactly if p = 3.
For instance, consider again the code ⊆ (9^4) of rank K = 3 with generator matrix
G =
[ 3 0 0 3; 0 3 0 6; 0 0 3 6 ].
This code has minimum Lee distance () = 6 and subtype (k_0, k_1) = (0, 3). Hence, we also have σ = 1. The support subtype in the last n-K = 1 positions is (0, 1, 0) and M_1 = 3. Computing the bound in Corollary <ref> gives then
∑_i = 0^1 p^ik_i + ∑_i = 0^2μ_iM_i - (k_1-1)p = 9 + 3 - 6 = 6
and we conclude that this code is optimal with respect to Lee-metric Singleton bound <ref>.
§.§ Density of Optimal Codes with respect to the Column-Lee Support
Let us discuss the density of the codes attaining the bound in Corollary <ref>. Recall that the bound is derived by () ≤() - ( () - ()) where we upper bounded the column weight of the code by () ≤∑_i = 0^s-1 p^ik_i + ∑_i = 0^sμ_iM_i. Hence, in order to have codes attaining the bound on the minimum Lee distance, they must attain the bound on the column Lee weight too. That is, their generator matrix G must be in reduced systematic form. Furthermore, the support subtype of the last n-K positions is (μ_0, …, μ_s) where in each of the μ_i positions the maximum Lee weight M_i is attained. For instance, a generator matrix may look as follows:
G_𝗋𝗌𝗒𝗌 = [ systematic part | last n-K columns, grouped into blocks of μ_0, μ_1, …, μ_s positions in which the maximal Lee weights M_0, M_1, …, M_s are attained ].
There are two options to attain a Lee weight M_i. Hence, the probability that a generator matrix is of this form is given by the number of such matrices divided by the number of all matrices, i.e.,
∏_i = 0^s-1( 2 (p^s-i)^K-1/( (p^s-i)^K-1(p^s-i-p^s-i-1) ) )^μ_i = ∏_i = 0^s-1( 2/(p^s-i-p^s-i-1) )^μ_i
= 2^n-K∏_i = 0^s-1( 1/(p^s-i-1(p-1)) )^μ_i
= 2^n-K∏_i = 0^s-1( p^i+1/(p^s(p-1)) )^μ_i.
Note that p^s(p-1) > p^i+1 for every i ∈{0, …, s-1}. Hence, the fraction in the product is smaller than 1. Therefore, for p →∞ the product tends to 0. The same argument holds if we let s tend to infinity. Similarly, as μ_i depends on n, we note that 2/(p^s-i-p^s-i-1) < 1. This implies that if n →∞ the product tends to zero as well. Thus, codes attaining the bound in Corollary <ref> are sparse with respect to p, s and n.
Given an optimal code with respect to the Lee-metric Singleton bound <ref>, one could also ask if the r-th generalized column Lee weights are then also fixed. Since the main problem of the column Lee weight of a code is the computational difficulty, we leave this as an open question.
§.§ Invariance under Isometry in the Lee Metric
Finally, we ask if the r-th generalized column Lee weights are fixed under isometries.
Let 𝒞⊆( ℤ/p^sℤ)^n be a linear code of rank K, then any equivalent code ', under the linear Lee-metric isometries is such that
^r()= ^r('),
for every r ∈{1, …, K}.
Recall that any generator matrix G'^(i) of a subcode of rank i of a equivalent ' can be written as G'^(i) =G^(i)P diag(v), for some permutation matrix P, v ∈{1,-1}^n and some generator matrix G^(i) of a subcode of rank i of . Both, G'^(i) and G^(i) have the same column weight. Now the claim follows immediately as
^r() = min{(G^(r)) |⟨ G^(r)⟩≤𝒞, rk(⟨ G^(r)⟩)=r}
= min{(G^(r)Pdiag(v)) |⟨ G^(r)⟩≤𝒞, rk(⟨ G^(r)⟩)=r} = ^r(').
§ GENERALIZED LEE WEIGHTS FROM FILTRATION
The resulting Lee-metric Singleton bounds in Theorem <ref> and Corollary <ref> are improving the previously known bounds, however their optimal codes are sparse and the column Lee weight of a code is computationally difficult to compute.
We thus ask if fixing the rank of the subcode is the correct direction. In fact, a ring-linear code 𝒞⊆(ℤ/p^sℤ)^n of rank K has very natural subcodes to consider, which are all of rank K.
For each i ∈0, …, s-1 we define the i-th filtration subcode _i of as the intersection of with the ideal ⟨ p^i ⟩, i.e.,
_i := ∩⟨ p^i ⟩.
The (s-1)-st filtration _s-1 is commonly known as the socle of the code .
Note that the filtration subcodes naturally form a chain, namely
_s-1⊆_s-2⊆…⊆_1 ⊆_0 = .
We then define a new class of generalized Lee weights, or more concretely generalized Lee distances, coming from filtration subcodes.
Let ⊆( )^n be a linear code.
For each r ∈1, …, s we define the r-th generalized minimum Lee distance of the code to be the minimum distance of the filtration subcode _r-1, that is
^r() = (_r-1).
The generalized minimum Lee distances have some natural properties that are summarized in the following.
Given a linear code ⊆( ℤ/p^sℤ)^n of rank K and subtype (k_0, …, k_s-1), let σ := max{ i ∈{0, …, s-1}| k_i ≠ 0 }. Then the generalized minimum Lee distances satisfy
* ^1() = (),
* ^r() ≤^r+1() for every r ∈{1, … s-1},
* ^r() ≤ p^r-1 + (n-K)M_r-1 for every r = σ + 1, … , s.
The first and second property immediately follow from (<ref>).
For the third property we observe that for every r ∈{σ + 1, … , s}, by applying elementary row operations, we can bring a generator matrix G_r-1 of _r-1 into the form
G_r-1 =
[ p^r-1𝕀_K A ],
where A ∈⟨ p^r-1⟩^K × (n-K). The r-th minimum Lee distance is upper bounded by the Lee weight of any row of G_r-1. For each row, the first K positions have a Lee weight of exactly p^r-1. In the last n-K positions of each row we assume the maximal Lee weight given by M_r-1 := ⌊ p^s-(r-1)/2 ⌋ p^r-1 and hence the inequality follows.
Due to Property 2. in Proposition <ref>, we cannot use the usual Singleton-like argument and decrease the weight of the whole code. Instead, we note that any ^r() is a direct upper bound on the minimum Lee distance. The only question that remains is how far we have to go down in the filtration to expect the lowest minimum Lee distance ^r(). In the following we identify several parameters of the code, easy to read off from any generator matrix, that indicate which filtration subcode gives an appropriately low upper bound on ().
The upper bound on the generalized minimum Lee distances in Property <ref>. of Proposition <ref> is relatively loose. This is due to the fact, that we have assumed no knowledge about the matrix A given in (<ref>).
As computing the minimum Lee distance of every subcode _i is an exhausting task, especially if there is no knowledge about the structure of A, we would like to introduce some more parameters regarding A for the first filtration of admitting a generator matrix of the form (<ref>),
that is, the filtration _σ with a generator matrix of the form G_σ = [ p^σ𝕀_K A ], for some matrix A ∈⟨ p^σ⟩^K × (n-K).
Let a_ij denote the entry of A lying in row i and column j.
For each row of A, we determine the maximal power of p appearing and we denote it by
ℓ_i := max{ k ∈{σ, … , s-1}|∃ a_ij : ⟨ a_ij⟩ = ⟨ p^k ⟩, K+1 ≤ j ≤ n }.
Clearly, ℓ_i ≥σ.
Let n'_ℓ_i denote the number of entries of the i-th row of A that live in the ideal ⟨ p^ℓ_i⟩, i.e.,
n'_ℓ_i := |{ j ∈{K+1, … , n}| a_ij∈⟨ p^ℓ_i⟩}|.
For a given linear code ⊆, these parameters help us to understand the evolution of the matrix A in the generator matrices of the filtration subcodes _r-1, for r ∈{σ + 1, … , s}. In fact, given a generator matrix G_σ of the filtration _σ in the form (<ref>), the parameters ℓ_i and n'_ℓ_i for a row i ∈1, …, K allow to understand at which point in the filtration these positions become zero. More precisely, knowing ℓ_i and n'_ℓ_i implies that in _s-ℓ_i + σ there are n'_ℓ_i many zero entries in i-th row of A.
Knowing the number of entries turning into zero in a certain filtration is a huge advantage in bounding the minimum distance of a code. Therefore, we define by n'^(r-1) the maximal number of zeros we can get in the last n-K positions of a row of a generator matrix of the filtration _r-1. That is, for every r ∈{σ + 1, …, s},
n'^(r-1) := max{ n'_ℓ_i|ℓ_i > s - r + σ, i ∈{1, …, K}}.
If there is no ℓ_i with ℓ_i > s - r + σ, we will set n'^(r-1) = 0.
Furthermore, let ℓ^(r-1) be the corresponding value ℓ_i to n'^(r-1), i.e.,
ℓ^(r-1) := max{ℓ_i | n'_ℓ_i = n'^(r-1), i ∈{ 1, …, K}}.
We can hence refine the third property in Proposition <ref> as follows.
Given a linear code ⊆ of subtype (k_0, …, k_s-1) with maximal subtype k_σ. Then, for every r ∈σ +1 , … , s, the r-th generalized Lee distance can be upper bounded by
^r() = (_r-1) ≤ p^r-1 + (n - K - n'^(r-1))M_r-1.
The proof follows in a similar fashion as the proof of Proposition <ref> by focusing on the row with the maximal number of zeros in the last n-K columns of _r-1 which is captured in n'^(r-1). Hence, the remaining (n-K-n'^(r-1)) positions are bounded by the maximal Lee weight in the ideal considering, which is given by M_r-1.
Let us consider a free code ∈ (27)^5 spanned by the rows of the matrix
[ 1 0 0 21 6; 0 1 0 10 7; 0 0 1 18 8 ]
=:
[ 𝕀_3 A ].
We easily check that
ℓ_1 = 1 and n'_ℓ_1 = 2,
ℓ_2 = 0 and n'_ℓ_2 = 2,
ℓ_3 = 2 and n'_ℓ_3 = 1.
Let us now consider the filtration subcodes _1 and _2 in order to compute the bound given in Proposition <ref>.
Note that in this case σ = 0 as the code is free. For _σ = _0 =, the values ℓ_i and n'_i are given above. As ℓ_3 = 2 and n'_3 = 1, at filtration _3-2+0 = _1 there is one entry equal to zero. Indeed, _1 = ∩⟨ 3 ⟩ has a generator matrix of the form
[ 3 0 0 9 18; 0 3 0 3 21; 0 0 3 0 24 ],
where the last row contains one zero element in the last 2 columns.
Note that n'^(1) = n'_ℓ_3 = 1 and hence, ^2() ≤ 3 + (5-3-1)12 = 15.
Similarly, at filtration _2 = ∩⟨ 9 ⟩ we observe two zero entries in the first row, as
[ 9 0 0 0 0; 0 9 0 9 9; 0 0 9 0 18 ].
Here we notice that n'^(2) = n'_ℓ_1 = 2 and thus ^3() ≤ 9 + (5-3-2)9 = 9.
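The quantities in this example can be recomputed mechanically. The following Python sketch extracts ℓ_i and n'_{ℓ_i} from the last n-K columns of G (the code is free, so σ = 0) and evaluates the refined bound ^r() ≤ p^{r-1} + (n-K-n'^{(r-1)})M_{r-1} for r = 2, 3 (helper names are ad hoc).

```python
m, p, s, n, K = 27, 3, 3, 5, 3       # free code, so sigma = 0
G = [(1, 0, 0, 21, 6),
     (0, 1, 0, 10, 7),
     (0, 0, 1, 18, 8)]

def val(x):
    # exponent k with <x> = <p^k> in Z/p^sZ (k = s for x = 0)
    x %= m
    if x == 0:
        return s
    k = 0
    while x % p == 0:
        x //= p
        k += 1
    return k

def M(i):
    # maximal Lee weight in the ideal <p^i>
    return (p ** (s - i) // 2) * p ** i

A = [row[K:] for row in G]           # last n - K columns
ell, nprime = [], []
for row in A:                        # assumes no all-zero row in A
    li = max(val(a) for a in row if val(a) < s)
    ell.append(li)
    nprime.append(sum(1 for a in row if val(a) >= li))
print(ell, nprime)                   # [1, 0, 2] [2, 2, 1]

for r in (2, 3):                     # bound on the r-th generalized Lee distance
    cands = [nprime[i] for i in range(K) if ell[i] > s - r]
    n_zero = max(cands) if cands else 0
    print(r, n_zero, p ** (r - 1) + (n - K - n_zero) * M(r - 1))   # 2 1 15 / 3 2 9
```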
By Proposition <ref>, we know that the r-th generalized Lee distances are in non-decreasing order. Therefore, for any r ∈σ + 1, … , s the bound in Lemma <ref> is a valid upper bound for the minimum Lee distance of a code . However, as visible in Example <ref>, the bounds on the r-th minimum Lee distances do not have to follow the same non-decreasing order. As they all hold as an upper bound to the minimum Lee distance of the code, the following bound is a direct consequence of Lemma <ref> by choosing the smallest among the bounds given in the statement.
Given a code ⊆ (ℤ/p^sℤ)^n of subtype (k_0, …, k_s-1), let (ℓ, n') be the pair (ℓ^(r-1), n'^(r-1)) with ℓ≥ 1, r ∈{σ + 1, … , s}, minimizing
p^s - ℓ^(r-1) + σ + (n - K - n'^(r-1))M_s - ℓ^(r-1) + σ.
Then the code's minimum distance is bounded by
() ≤ p^s - ℓ + σ + (n - K - n')M_s - ℓ + σ.
As for large rank K this minimum can take a while to compute, we can also derive a slightly weaker bound depending on the maximal value ℓ_i, which is easy to compute.
Given a code ⊆ (ℤ/p^sℤ)^n of subtype (k_0, …, k_s-1) with maximal subtype k_σ, let ℓ := max{ℓ_i | i = 1, … , K} and define the corresponding value n':= max{ n'_ℓ_i|ℓ_i = ℓ, i = 1, … , K}.
Then, the minimum distance is bounded by
() ≤ p^s - ℓ + σ + (n - K - n')M_s - ℓ + σ if ℓ≥ 1, and
() ≤ p^σ + (n-K)M_σ else.
In fact, we can identify conditions, leading to four different cases for the bound provided in Corollary <ref>.
For this very last observation, leading to the very last Lee-metric Singleton bound, we first need one last definition. Let 𝒞⊆( ℤ/p^sℤ)^n be a linear code of maximal subtype k_σ and assume that _σ is generated by (p^σ𝕀 A). Let us denote the entries of A by a_i,j, for i ∈{1, …, K} and j ∈{K+1, …, n}. We define
N' := max_i ∈{1, …, K} |{ j ∈{K+1, …, n}| p | a_i,j}|.
That is, N' is the maximal number of entries in a row of A which are divisible by p.
Let us consider the code over ℤ/27ℤ generated by
G= [ 1 0 3 6; 0 1 18 1 ].
The previous bound from Corollary <ref> would take ℓ=2 and n'=1; instead, N'=2, as in the first row of A we have two entries that are divisible by p. In fact, this indicates a minimum Hamming weight codeword in the socle, in this case of weight 1. Clearly, if N' is large, it is beneficial to go all the way down to the socle.
* Case ℓ = σ or n'/2 ≤p^s-ℓ-1/p^s -σ-1. In this case we stay in _σ:
() ≤ p^σ + (n-K)M_σ
* Case ℓ = s. In this case we also stay in _σ, but observed some zero entries:
() ≤ p^σ + (n-k-n')M_σ
* Case ℓ≠σ or ℓ≠ s and n'/2 ≥p^s-ℓ-1/p^s -σ-1. In this case we can move to _s-ℓ+σ:
() ≤ p^s-ℓ + σ + (n-k-n')M_s - ℓ + σ
* Case: if n' ≤ N' p^ℓ-σ-p^ℓ-σ-1/p^ℓ-σ-1+(n-K-2)p^ℓ-σ-1-1/p^ℓ-σ-1. In this case we go to the socle:
() ≤ p^s - 1 + (n - K - N')M_s - 1.
Note also, that instead of taking the filtration subcodes 𝒞_i = 𝒞∩⟨ p^i ⟩, we could have also considered the torsion subcodes.
Let 𝒞⊆( ℤ / p^sℤ)^n. For i ∈{0, …, s-1}, we call 𝒞̃_i = 𝒞 mod p^s-i⊆( ℤ/p^s-iℤ)^n the i-th torsion code.
We can, however, immediately observe that the i-th torsion code represented as a code over the ambient space is naturally a subcode of the filtration subcode, as
p^i𝒞̃_i⊆𝒞_i ⊆( ℤ/p^sℤ)^n,
with rk(p^i 𝒞̃_i)= ∑_j=0^i-1 k_j < rk(𝒞_i) =K.
In fact, any generator matrix of 𝒞̃_i is a truncation of a generator matrix G of 𝒞, i.e., we cut off the rows belonging to the subtypes k_i, …, k_s-1.
Thus, if we defined the r-th generalized Lee distances as ^r(𝒞)= (𝒞̃_r), for r ∈{0, …, s-1}, then
(𝒞) ≤(𝒞_i) ≤(p^i𝒞̃_i). Thus, any upper bound on (p^i𝒞̃_i) would serve as an upper bound on (𝒞), but would be worse than directly bounding the smaller (𝒞_i).
Finally, we want to note, that the same considerations also apply to the Hamming metric.
Given a code ⊆ (ℤ/p^sℤ)^n of subtype (k_0, …, k_s-1) with maximal subtype k_σ, let ℓ := max{ℓ_i | i = 1, … , K} and define the corresponding value n':= max{ n'_ℓ_i|ℓ_i = ℓ, i = 1, … , K}.
Then, the minimum Hamming distance is bounded by
() ≤ 1 + (n - K - n') if ℓ≥ 1, and
() ≤ 1 + (n-K) else.
Note that the Lee-metric version, that is Corollary <ref>, is not directly implied by the Hamming-metric bound. Such a direct bound would say
() ≤ M(1 + (n - K - n')) if ℓ≥ 1, and
() ≤ M(1 + (n-K)) else.
This is clearly a worse bound than our Lee-metric Singleton bound of Corollary <ref>.
§.§ Density of Optimal Codes with respect to Filtrations
One interesting quantity is the number of codes of maximum achievable Lee distance for given parameters. We call such a code a maximum Lee distance (MLD) code. We have already seen that codes attaining the bounds based on the join-support and based on the column support are sparse as p, s and n tend to infinity.
In this subsection we discuss the density of MLD codes with respect to the new Lee-metric Singleton bound <ref> from the filtration.
If nothing else is stated we consider a code ⊆ of rank K and subtype (k_0, …, k_s-1).
Recall that the bound from Corollary <ref> is especially tight if there are many zero positions in a row of a generator matrix of a filtration subcode. Given the rank K of a code ⊆ (ℤ/p^sℤ)^n, the probability that an entire row of A is zero, where A denotes the last n-K columns of a generator matrix of a filtration _r-1
with r ∈{σ + 1, …, s}, depends on σ, i.e., it depends on whether the code is free or not.
For n →∞ it is known <cit.> that
( is free) =
1 if R < 1/2,
0 if R > 1/2,
where R = K/n denotes the rank rate.
Hence, in this case we would have to distinguish again the two cases. On the contrary, for p →∞, it is well-known that the code is free with high probability, which implies that σ = 0. In this case, we have
* For every i ∈{1, …, K}, ℓ_i = 0. Thus, the bound in Corollary <ref> reduces to
() ≤ 1 + (n-K)M,
which coincides with the Singleton-like bound provided by <cit.>.
* There is an i ∈{1, …, K} with ℓ_i ≠ 0. In this case, we can find the pair (ℓ, n') as in Corollary <ref> and the minimum Lee distance is bounded by
() ≤ p^s-ℓ + (n-K-n')M_s-ℓ.
The following lemma shows that for p →∞ the first case occurs with high probability.
For a free linear code ⊆ (ℤ/p^sℤ)^n, as p →∞, we have ℓ = 0 with high probability.
Note that (ℓ = 0) is the probability that there is no multiple of p contained in the last n-K columns of a generator matrix G of in systematic form. More explicitly, it is the probability that all of the entries in the last n-K columns of G are units. That is,
(ℓ = 0) = ( (p-1)p^s-1/p^s)^K(n-K) = (1 - 1/p)^K(n-K).
Hence, letting p grow to infinity and keeping n and K fixed yields the desired result.
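The closed form for (ℓ = 0) is straightforward to validate numerically; a small Monte Carlo sketch in Python (parameters chosen arbitrarily):

```python
import random

def prob_ell_zero(p, s, K, n, trials=100_000):
    m = p ** s
    hits = 0
    for _ in range(trials):
        # draw the last n - K columns of a systematic generator matrix uniformly
        if all(random.randrange(m) % p != 0 for _ in range(K * (n - K))):
            hits += 1
    return hits / trials

p, s, K, n = 5, 2, 2, 4
print(prob_ell_zero(p, s, K, n))       # Monte Carlo estimate, around 0.41
print((1 - 1 / p) ** (K * (n - K)))    # closed form: 0.4096
```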
This means that, with high probability, MLD codes are sparse as p →∞, as codes attaining the bound <ref> of Shiromoto are sparse.
Note that, letting s grow to infinity and keeping p fixed, the size of the ring still grows whereas the probability (ℓ = 0) is a nonzero constant. This suggests that codes attaining the bound on the minimum distance derived from filtration subcodes might not be sparse.
We start by discussing the case where the code is a free code, hence σ = 0. Free codes have a generator matrix of the form (𝕀_K A), with A ∈ (ℤ/p^sℤ)^K×(n-K).
If there is an ℓ with 0<ℓ<s such that n' = n'_ℓ = n-K, the filtration _s-ℓ has an entire row equal to zero. This results in having an (s-ℓ)-th generalized Lee distance of p^s - ℓ and hence () ≤ p^s - ℓ.
Let us investigate the probability of A having a maximal ℓ_i = ℓ with 0<ℓ < s and corresponding n' = n-K. This requires that all other rows of A are contained at most in the ideal ⟨ p^ℓ⟩. The probability that A is of this form is therefore
𝒫 := (p^s-ℓ- p^s-ℓ-1)^(n-K) (p^s-1 - p^s-ℓ-1)^(K-1) (p^s-ℓ - p^s-ℓ-1)^(n-K-1)(K-1)/(p^s)^(n-K)K
= ( 1/p^ℓ - 1/p^ℓ+1)^(n-K-1)K + 1( 1/p - 1/p^ℓ+1)^(K-1).
This probability tends to zero as n →∞, and thus MLD codes are sparse with respect to the bound given in Corollary <ref> as n →∞. However, since 𝒫 does not depend on s and is a nonzero constant as s →∞, this implies neither sparsity nor density. In any case, we have that with probability ( is free) ·𝒫 the minimum distance of the code is bounded by ()≤ p^s-ℓ.
Let us now consider codes that achieve the bound on the minimum Lee distance based on filtration subcodes, i.e., Corollary <ref>, and check whether this fixes the r-th generalized Lee distances.
Clearly, if 𝒞 has maximal subtype k_σ and attains the bound in Corollary <ref>, then
()= ^1() = ⋯ = ^σ+1()= (_σ).
If σ=s-1, or if we are in case 4 above, the bound is obtained from the socle, and hence all ^r() are equal.
If we are not in case 4, the behaviour of the filtration subcodes _r with r≥σ is more unpredictable.
As already discussed above there are codes with several properties which are attaining the bound in Corollary <ref>.
One class of codes that we want to consider is the class of codes having n' = n-K. Assuming that such a code attains the bound, the following result gives us a closed expression for the r-th generalized Lee distances for all r.
Let ⊆ (ℤ/p^sℤ)^n be of rank K and subtype (k_0, …, k_s-1) with maximal subtype k_σ and pair (ℓ, n') = (ℓ, n-K), such that () = ^s-ℓ+σ(). Then the r-th generalized Lee distance is given by
^r() =
p^s-ℓ+σ for every r ≤ s - ℓ + σ,
p^r for every r > s - ℓ + σ.
Since () = ^s-ℓ+σ() and since the r-th generalized Lee distances are increasing in r, we have ^r() = ^s-ℓ+σ() for every r ≤ s - ℓ + σ. Hence, the first case is clear.
For the second case note that _s - ℓ + σ admits a generator matrix containing only zeros in the last n-K columns. These entries remain to be zero for every filtration _r with r > s - ℓ + σ. Hence, the minimum distance (_r) is always given by p^r.
§.§ Invariance under Isometry in the Lee Metric
Finally, we observe again that the r-th generalized Lee distance for a code ⊆ coincides with the r-th generalized Lee distance of a code ' ⊆ that is equivalent to .
Let ⊆ of rank K and let ' ⊆ another code that is equivalent to . Then, for every r ∈1, … , K, we have
^r() = ^r(').
Let ϕ denote an isometry preserving the Lee distance. Recall that this isometry can only consist of permutations and multiplications by ± 1. Furthermore, recall that the r-th generalized Lee distance is given by the minimum Lee distance of the r-th filtration subcode _r of , i.e.,
^r() = (_r).
By the inclusion property of the filtrations, we have _r ⊆. Note that ϕ additionally preserves this inclusion property, i.e.,
'_r := ϕ(_r) ⊆ϕ().
Hence, the minimum Lee distances of _r and '_r coincide.
Now let us study the density in the limit of large block length, i.e., n →∞. Assume that we fix the rank rate of the code to be R := K/n.
Recall that the bound on the minimum distance derived from bounds on the minimum Lee distance of the filtrations of a code is especially tight if there are many zero positions in a row of a filtration. Given the rank K of a code ⊆ (ℤ/p^sℤ)^n, the probability that an entire row is zero in its last n-K positions of a generator matrix of a filtration _r-1 with r ∈{σ + 1, …, s} depends on σ, i.e., it depends on whether the code is free or not.
As n →∞ it is known <cit.> that
( is free) =
1 if R < 1/2,
0 if R > 1/2.
§.§.§ Free Code
Let us first assume that the code ⊆ (ℤ/p^sℤ)^n is a free code. This means that the rank rate satisfies R < 1/2 and σ = 0. Let G denote a generator matrix of in systematic form G = [ 𝕀_K A ] with A ∈ (ℤ/p^sℤ)^K × (n-K). Codes in this setting are MLD if they have an entire row equal to zero. Using the proof of Lemma <ref> for n →∞ instead of p →∞, we observe that with high probability ℓ≠ 0, since 1-1/p < 1.
For i = 1, … , s, let us define the probability
p_i := (ℓ = i).
Given a free linear code ⊆ (ℤ/p^sℤ)^n of rank K and rank rate R < 1/2, then for each i ∈{1, …, s}, as n →∞,
(ℓ = i)
Since is free, we have σ =0. Let G be a generator matrix of in systematic form, i.e., G = [ 𝕀_K A ], where A ∈ (ℤ/p^sℤ)^K×(n-K). Let us focus first on the probability p_1. That is, the probability that no entry of A is in the ideal ⟨ p^2 ⟩ and that there is at least one entry a_ij of A satisfying a_ij∈⟨ p^1 ⟩∖⟨ p^2 ⟩. More precisely,
(ℓ = 1) = (every a_ij∈⟨ p^0 ⟩∖⟨ p^2 ⟩ and there exists an entry a_ij∈⟨ p^1 ⟩ )
= ∑_m = 1^(n-K)K(every a_ij∈⟨ p^0 ⟩∖⟨ p^2 ⟩ and there exist m many entries in ⟨ p^1 ⟩ )
= ∑_m = 1^(n-K)K((n-K)K - m units) (m many entries are in ⟨ p^1 ⟩∖⟨ p^2 ⟩)
= ∑_m = 1^(n-K)K(p^s - p^s-1)^(n-K)K-m/(p^s)^(n-K)K-m·(p^s-1 - p^s-2)^m/(p^s)^m
= ∑_m = 1^(n-K)K (p^s - p^s-1)^(n-K)K-m · (p^s-1 - p^s-2)^m/(p^s)^(n-K)K
§ COMPARISON OF THE BOUNDS
At this point let us compare the bound of Corollary <ref> to the bounds derived from the new puncturing argument (Theorem <ref>), the join-Lee support (Theorem <ref>), the column support (Corollary <ref>), and the bounds provided by <cit.>. We do so by first providing some examples that attain the bound from Corollary <ref> and comparing it to the other bounds.
* Let ⊆ (9)^4 generated by
G =
[ 1 0 0 2; 0 1 0 6; 0 0 1 4 ].
We quickly observe that this code has a minimum Lee distance () = 3. For the last n-K = 1 column, we note that the entry 6 in the second row lies in the ideal ⟨ 3 ⟩, while the other entries are units. This means that ℓ = 1 and n' = n-K = 1. Then the bounds are computed as follows.
Filtration: () ≤ 3 (Corollary <ref>)
Join-support: () ≤ 6 (Theorem <ref>)
Column support: () ≤ 5 (Corollary <ref>)
New puncturing: () ≤ 8 (Theorem <ref>)
Shiromoto: () ≤ 5 (<cit.>)
Alderson - Huntemann: () ≤ 4 (<cit.>)
* Let ⊆ (27)^5 generated by
G =
[ 1 10 4 20 9; 0 3 9 18 9 ].
The minimum Lee distance of this code is () = 9. For the last n-K = 3 columns, we quickly compute ℓ = 2 and n' = 3. Then the bounds are computed as follows.
Filtration: () ≤ 9 (Corollary <ref>)
Filtration: () ≤ 9 (Corollary <ref>)
Join-support: () ≤ 36 (Theorem <ref>)
Column support: () ≤ 38 (Corollary <ref>)
New puncturing: () ≤ 48 (Theorem <ref>)
Shiromoto: () ≤ 40 (<cit.>)
Alderson - Huntemann: not existing (<cit.>)
* In this example let us consider the code ⊆ (125)^6 generated by
G =
[ 1 0 25 50 75 100; 0 1 2 3 4 5 ].
This code has minimum distance () = 5.
Note that the two bounds with respect to the filtration (Corollary <ref> and <ref>) coincide. Hence, we obtain
Filtration: () ≤ 5 (Corollary <ref> and <ref>)
Join-support: () ≤ 250 (Theorem <ref>)
Column support: () ≤ 247 (Corollary <ref>)
New puncturing: () ≤ 300 (Theorem <ref>)
Shiromoto: () ≤ 249 (<cit.>)
Alderson - Huntemann: () ≤ 248 (<cit.>)
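The minimum Lee distances claimed in these three examples ((𝒞) = 3, 9 and 5, respectively) can be confirmed by exhaustive search; a short Python sketch:

```python
import itertools

def lee_wt(x, m):
    x %= m
    return min(x, m - x)

def min_lee_distance(G, m, orders):
    # enumerate all codewords sum_i a_i * G_i with a_i running over the row orders
    n = len(G[0])
    best = None
    for coeffs in itertools.product(*[range(o) for o in orders]):
        cw = tuple(sum(a * g[j] for a, g in zip(coeffs, G)) % m for j in range(n))
        if any(cw):
            w = sum(lee_wt(x, m) for x in cw)
            best = w if best is None else min(best, w)
    return best

# Example 1 over Z/9Z
print(min_lee_distance([(1,0,0,2), (0,1,0,6), (0,0,1,4)], 9, (9, 9, 9)))         # 3
# Example 2 over Z/27Z (the second row has additive order 9)
print(min_lee_distance([(1,10,4,20,9), (0,3,9,18,9)], 27, (27, 9)))              # 9
# Example 3 over Z/125Z
print(min_lee_distance([(1,0,25,50,75,100), (0,1,2,3,4,5)], 125, (125, 125)))    # 5
```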
We now compare the bounds for different parameters. We will leave out the bound given by the column support, i.e., Corollary <ref>, as we would need to consider too many different parameters which would not fit in the overview.
(n, K, p^s, σ) | Alderson–Huntemann <cit.> | Shiromoto <cit.> | Join-support (Theorem <ref>) | Filtration (Corollary <ref>), listed by (ℓ, n')
(6, 3, 9, 0) | 12 | 16 | 12 | (0,3): 13; (1,1): 9; (1,2): 6; (1,3): 3; (2,1): 9; (2,2): 5; (2,3): 1
(6, 3, 9, 1) | not existing | 16 | 12 | (1,⋆): 12; (2,1): 9; (2,2): 6; (2,3): 3
(6, 3, 125, 0) | 186 | 248 | 200 | (0,3): 187; (1,1): 125; (1,2): 75; (1,3): 25; (2,1): 125; (2,2): 65; (2,3): 5; (3,1): 125; (3,2): 63; (3,3): 1
(6, 3, 125, 1) | 248 (only for subtype (0,3,0)) | 248 | 200 | (1,⋆): 185; (2,1): 125; (2,2): 75; (2,3): 25; (3,1): 125; (3,2): 65; (3,3): 5
(6, 3, 125, 2) | 310 (only for subtype (0,0,3)); 248 (only for subtype (1,1,1)) | 248 | 200 | (2,⋆): 175; (3,1): 125; (3,2): 75; (3,3): 25
Comparison of the bounds on the minimum Lee distance of a code with given parameters.
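The filtration column of this table is easy to reproduce; the following Python sketch recomputes it for the two free-code rows (σ = 0):

```python
def M(p, s, i):
    # maximal Lee weight in the ideal <p^i> of Z/p^sZ
    return (p ** (s - i) // 2) * p ** i

def filtration_bound(p, s, n, K, sigma, ell, n_prime):
    if ell > sigma:
        r = s - ell + sigma
        return p ** r + (n - K - n_prime) * M(p, s, r)
    return p ** sigma + (n - K) * M(p, s, sigma)

n, K, sigma = 6, 3, 0
for p, s in [(3, 2), (5, 3)]:          # the rows (6,3,9,0) and (6,3,125,0)
    vals = {(l, t): filtration_bound(p, s, n, K, sigma, l, t)
            for l in range(s + 1) for t in range(1, n - K + 1)
            if l > 0 or t == n - K}
    print(p ** s, vals)
# 9   -> {(0,3): 13, (1,1): 9, (1,2): 6, (1,3): 3, (2,1): 9, (2,2): 5, (2,3): 1}
# 125 -> {(0,3): 187, (1,1): 125, (1,2): 75, (1,3): 25, (2,1): 125, (2,2): 65,
#         (2,3): 5, (3,1): 125, (3,2): 63, (3,3): 1}
```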
Let us first focus on a free code, i.e., σ = 0. Observe that if the last n-K columns of a generator matrix consist only of units, i.e., ℓ = 0, the bound by Alderson and Huntemann beats our bounds. However, as soon as ℓ≠ 0 the new bound based on the minimum distance of filtration subcodes (Corollary <ref>) always outperforms any other bound. In Table <ref> we also observe that the bound provided by Shiromoto is the loosest.
For nonfree codes, recall that the bound in <cit.> only works for integer ℤ/p^sℤ-dimensions k> 1.
Furthermore, we note that for a given σ≥ 1 we always have ℓ≥σ, and if ℓ = σ the filtration bound (Corollary <ref>) is the same for any n'. This is denoted by n' = ⋆ in Table <ref>. For all of the parameters presented, the bound based on the minimum Lee distance of a filtration subcode of the code (Corollary <ref>) outperforms all other bounds.
§ CONCLUSIONS AND FUTURE WORK
In this paper we presented several novel definitions of Lee supports and the corresponding generalized Lee weights of subcodes of a fixed rank.
These give rise to new Lee-metric Singleton bounds that beat the previous bound by Shiromoto <cit.>, which follows from a puncturing argument.
However, their optimal codes are still sparse for s, n or p going to infinity. This led us to consider different subcodes, namely the filtration subcodes. Bounding their minimum distances gives a sharper Singleton bound that finally has the desired property: its optimal codes are not sparse for s going to infinity.
It remains an open question whether there exists a bound on the minimum Lee distance of codes such that the optimal codes are dense for s, n or p going to infinity.
§ ACKNOWLEDGEMENTS
Violetta Weger is supported by the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement no. 899987.
|
http://arxiv.org/abs/2307.04879v2 | 20230710195954 | Modeling evidential cooperation in large worlds | [
"Johannes Treutlein"
] | econ.GN | [
"econ.GN",
"q-fin.EC"
] |
Modeling evidential cooperation in large worlds
Johannes Treutlein
First written in 2018; major edits in 2023
===============================================
Evidential cooperation in large worlds (ECL) refers to the idea that humans and other agents can benefit by cooperating with similar agents with differing values in causally disconnected parts of a large universe. Cooperating provides agents with evidence that other similar agents are likely to cooperate too, resulting in gains from trade for all. This could be a crucial consideration for altruists.
I develop a game-theoretic model of ECL as an incomplete information bargaining problem. The model incorporates uncertainty about others' value systems and empirical situations, and addresses the problem of selecting a compromise outcome. Using the model, I investigate issues with ECL and outline open technical and philosophical questions.
I show that all cooperators must maximize the same weighted sum of utility functions to reach a Pareto optimal outcome. However, I argue against selecting a compromise outcome implicitly by normalizing utility functions. I review bargaining theory and argue that the Nash bargaining solution could be a relevant Schelling point. I introduce dependency equilibria Spohn2007-fp, an equilibrium concept suitable for ECL, and generalize a folk theorem showing that the Nash bargaining solution is a dependency equilibrium. I discuss gains from trade given uncertain beliefs about other agents and analyze how these gains decrease in several toy examples as the belief in another agent decreases.
Finally, I discuss open issues in my model. First, the Nash bargaining solution is sometimes not coalitionally stable, meaning that a subset of cooperators can unilaterally improve payoffs by deviating from the compromise. I investigate conditions under which stable payoff vectors exist. Second, I discuss how to model agents' default actions without ECL.
§ INTRODUCTION
Evidential cooperation in large worlds (ECL) Oesterheld2017-qg, Gloor2017, Oesterheld2018 is a crucial consideration that could have important implications for the prioritization of altruists.[In previous work, this concept has been referred to as “multiverse-wide
cooperation via superrationality” (MSR).] Oesterheld2017-qgGloor2017Oesterheld2018 is a crucial consideration that could have important implications for the prioritization of altruists.
To illustrate the idea, consider a prisoner's dilemma between two artificial
agents with identical source code. Even if both agents cannot causally
interact, one agent's action provides them with strong
evidence about the other agent's action. Evidential
decision theory (EDT), as well as functional decision theory Yudkowsky2017-vb and some variants of causal decision theory (CDT) Spohn2012-fo,Poellinger2013-we,
say that agents should take such evidence into account when making
decisions. In situations like the prisoner's dilemma with two identical
agents, they prescribe cooperation for this reason, an idea that is also called superrationality hofstadter1983dilemmas. ECL is based on
the idea that humans on Earth are in a similar situation as such agents.
First, there probably is a large or infinite universe,
containing vast numbers of civilizations, inhabiting different, causally disconnected parts of the universe tegmark2003parallel,tegmark2015our. I refer to such a large universe as a multiverse, and to causally disconnected parts of it as universes, regardless of the specific structure of the universe (e.g., these parts could just be far-apart regions of space). Given their vast number, there are likely universes containing agents that are very similar to humans, such that humans'
actions are evidence about these agents' actions macaskill2021evidentialist.
Second, these
agents may pursue different goals, leading to possible gains from trade. For instance, pursuing a given goal in one universe may have diminishing returns, and agents may care about other universes as well. In that case, it may be beneficial for agents to trade by pursuing a mixture of everyone's goals in all universes. Since agents in different universes cannot
communicate and there is no way to enforce an agreement, this puts them in a collective prisoner's dilemma. Under the right conditions,
the abovementioned decision theories recommend that humans take the
preferences of other, similar agents in the multiverse into account,
in order to produce the evidence that these agents do in turn take humans'
preferences into account, leaving everyone better off.
According to [sec. 4]Oesterheld2017-qg, this idea could
have far-reaching implications for the prioritization of altruists. For instance, given
ECL, some forms of moral advocacy could become ineffective: agents
advocating their particular values provides them with evidence that
others will do the same, potentially neutralizing each other's
efforts [sec. 4.2]Oesterheld2017-qg. Moreover, ECL could play a role in deciding which strategies
to pursue in AI alignment. If potential gains from cooperation
are vast, then it becomes more important to ensure that AI systems are aligned with humans' idealized philosophical views on decision theory and ECL.[Note that interventions to promote ECL could also backfire by exacerbating other risks from advanced AI [see][]xu2021open.]
In this report, I develop a game-theoretic model of ECL as an incomplete information bargaining problem, incorporating uncertainty about the values and empirical situations of potential superrational cooperators, and addressing the problem of selecting a compromise outcome. I clarify the conditions that make ECL feasible and analyze gains from trade given empirical uncertainty. Moreover, I discuss several technical and philosophical problems that arise.
Basic knowledge of game theory, such as normal form games, Nash equilibria, and the prisoner's dilemma [see][]osborne1994course, as well as decision theory and ECL (see Gloor2017 for an introduction), will be helpful for understanding this report.
§.§ Summary
Here, I provide a short summary of the report, highlighting key contributions. Afterwards, I outline the organization of the remaining report, and briefly discuss related work.
§.§.§ Game-theoretic models
I introduce three models: a bargaining model, a Bayesian game model, and a Bayesian bargaining model, combining the two previous models. The first two models are useful since many issues can more easily be discussed in the less general setting, and this structure may make the report easier to follow. However, it is possible to skip directly to the final model in <Ref>.
In a bargaining game, players have to agree on some compromise outcome, from a feasible set of achievable payoff vectors. A disagreement point specifies the outcome that is realized if no compromise is reached. I argue for modeling ECL as a bargaining problem, since (i) there is an inherent bargaining problem in determining a compromise between superrational cooperators that needs to be addressed, (ii) bargaining solutions that are supported by plausible axioms serve as Schelling points, and (iii) there are important parallels between acausal trade[<https://www.lesswrong.com/tag/acausal-trade>] (where agents use mutual simulations to reach an agreement) and ECL, meaning that bargaining could be a relevant model for agents forming conditional beliefs over other agents' actions. I also address [Sec. 2.8]Oesterheld2017-qg's suggested approach of pursuing a sum of normalized utility functions as a compromise utility function. I show that to achieve a Pareto optimal outcome, i.e., an outcome that cannot be improved upon without making any player worse off, everyone has to maximize the same compromise utility function. However, I argue for choosing a compromise based on a bargaining solution rather than a normalization method such as variance normalization, on the grounds that the latter can leave agents worse off than without the compromise. I review two popular bargaining solutions, the Nash bargaining solution (NBS) and the Kalai-Smorodinsky bargaining solution (KSBS), and conclude that the NBS could be a relevant Schelling point for ECL.
The Bayesian game formalism serves to incorporate incomplete information—that is, information about the values and available options of other players. Specifically, I use a modified version of Harsanyi1967's type space formalism. In my model, there is a large number of players, living in different universes. Each player is assigned a type, representing their values and empirical situation, according to some prior distribution p. Players' posterior beliefs over types, after updating on their own type, represent their beliefs over other universes. Players' utility functions depend on the actions and types of all players. Relaxing the assumption of a common prior p is an important area for future work.
Finally, the Bayesian bargaining model implements a bargaining game on top of a Bayesian game, incorporating bargaining with incomplete information. The feasible set here is the set of expected utilities that can be produced by players for all the types, given the types' beliefs about other players.
§.§.§ Gains from trade under uncertainty
Two important assumptions in this report are additive separability and anonymity. Additive separability means each player's utility functions can be expressed as a sum of contributions from other players. This would be true for total utilitarians but false for average utilitarians valuing average well-being across the multiverse, for instance.
Anonymity means beliefs, utilities and strategies depend only on types, not on specific players. This means we do not distinguish between different universes.
Given these two assumptions, we can regard strategies as vectors α∈ A^T, where T is the set of types and A the set of strategies for any type. The expected utility of a strategy for a type t∈ T can be simplified to the expression
EU_t(α)=u_t,t(α_t) + (n-1)∑_t'∈ Tp(t'| t)u_t',t(α_t')
where n is the number of players, p(t'| t) is the belief of any player of type t that any other player has type t', and u_t',t(α_t') is the utility provided by a player of type t' to a player of type t.
The first term is the utility produced by a player for themself, and the second term stands for the expected utility produced by all the other players.
Note that if n is large, the expected utility is dominated by the second term, meaning that the utilities produced by a player for themself in their own universe can be ignored. It follows that a potential compromise option β produces gains from trade for a type t if
∑_t'∈ T∖{t}p(t'| t)(u_t',t(β_t')-u_t',t(α_t'))≥ p(t| t)(u_t,t(α_t)-u_t,t(β_t)).
That is, both potential gains from other types' cooperation as well as potential losses due to players of type t compromising are weighted by type t's posterior beliefs over the types of other players. If the former outweigh the latter, then β leads to gains for players of type t.
This shows that when it comes to gains from trade, what matters are players' posterior beliefs over other players' types. For instance, if types are certain that all players have the same type, i.e., p(t| t)=1, then no trade is possible. If p(t'| t)=0 for a specific type t', then that type cannot benefit players of type t. If all types have the same posterior beliefs, then trade may in principle be possible, depending on the different types' options. In general, different beliefs can put a tax on trade.
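As a sanity check on these expressions, the following minimal sketch (Python) evaluates EU_t and the gains-from-trade condition above for two hypothetical types; the number of players, the beliefs p(t'|t), and the payoff entries u_{t',t} are all made-up illustrative values, not numbers from this report.

types = ["1", "2"]
n = 1000                                 # number of players (large)
p = {"1": {"1": 0.6, "2": 0.4},          # p[t][t2] = belief of a type-t player that any
     "2": {"1": 0.4, "2": 0.6}}          # other player has type t2 (hypothetical values)

# u[t2][t][a] = utility that a type-t2 player playing action a provides to a type-t player
# (additively separable and anonymous, as assumed above); values are hypothetical.
u = {"1": {"1": {"selfish": 1.0, "compromise": 0.8},
           "2": {"selfish": 0.0, "compromise": 0.5}},
     "2": {"1": {"selfish": 0.0, "compromise": 0.5},
           "2": {"selfish": 1.0, "compromise": 0.8}}}

def EU(t, alpha):
    """Expected utility of a type-t player when every type t2 plays alpha[t2]."""
    own = u[t][t][alpha[t]]
    others = (n - 1) * sum(p[t][t2] * u[t2][t][alpha[t2]] for t2 in types)
    return own + others

default = {"1": "selfish", "2": "selfish"}
compromise = {"1": "compromise", "2": "compromise"}

for t in types:
    gains = sum(p[t][t2] * (u[t2][t][compromise[t2]] - u[t2][t][default[t2]])
                for t2 in types if t2 != t)
    losses = p[t][t] * (u[t][t][default[t]] - u[t][t][compromise[t]])
    print(t, EU(t, default), EU(t, compromise), gains >= losses)

With these numbers the condition holds for both types, and correspondingly EU_t is higher under the compromise than under the default for both.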
I also consider a model that includes uncertainty over whether other players are superrationalists or sufficiently similar to enable ECL. However, I argue that such considerations can also be incorporated into the posterior beliefs p(t'| t), so this extension does not increase the generality of the model.
§.§.§ Double decrease and Paretotopia
Using <Ref>, we can analyze gains from trade in different toy models. I consider an example with a trade between two types, T={1,2}. Both types start out with an equal number of resources and can invest resources into either type's utility function. Resource investments have diminishing returns, leading to potential gains from trade. As a compromise outcome, I use the NBS. I consider square root as well as logarithmic returns to resources. <Ref> shows individual feasible sets in each case, which are the sets of expected utilities a player of each type can produce for both types. Gains from trade are larger given logarithmic utilities, since utilities diminish faster in this case.
Using this model, I analyze resource investments in the respective other type and gains from trade under the NBS, for different posterior beliefs p(t'| t) in the other type (assuming both types have the same prior weight p(1)=p(2)=1/2) (<Ref>). As this belief goes down, gains from trade go down approximately quadratically in the square root utility case, leading to a “double decrease” as observed by armstrong2017double. However, in drexler2019pareto's “Paretotopia” model with logarithmic returns to resource investments, gains from trade diminish more slowly with the belief in the other player.
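The qualitative "double decrease" is easy to reproduce in a stripped-down symmetric variant of the square-root model (the exact model behind the figures is set up in <Ref> and is not reproduced here). The sketch below (Python) assumes: one other player; each player holds one unit of resources; a type's strategy is the fraction invested into the other type's utility function; an investment r into a type's function yields sqrt(r) for that type; q stands in for the belief p(t'|t) that the other player has the other type; the disagreement point is no cross-investment; and the compromise is the NBS found by grid search.

import numpy as np

def nbs_cross_investment(q, grid=801):
    """Symmetric two-type toy model with square-root returns and belief q in the other type."""
    x = np.linspace(0.0, 1.0, grid)
    X, Y = np.meshgrid(x, x, indexing="ij")     # X: type 1's investment in type 2, Y: vice versa
    eu1 = np.sqrt(1.0 - X) + q * np.sqrt(Y)     # type 1's expected utility
    eu2 = np.sqrt(1.0 - Y) + q * np.sqrt(X)     # type 2's expected utility
    g1, g2 = eu1 - 1.0, eu2 - 1.0               # gains over the no-trade disagreement utility 1
    nash = np.where((g1 >= 0) & (g2 >= 0), g1 * g2, -np.inf)
    i, j = np.unravel_index(np.argmax(nash), nash.shape)
    return x[i], g1[i, j]

for q in [1.0, 0.5, 0.25, 0.1]:
    invest, gain = nbs_cross_investment(q)
    print(f"q={q:4.2f}  cross-investment={invest:.3f}  gain per type={gain:.4f}")

In this variant the optimal cross-investment is q²/(1+q²) and the per-type gain is sqrt(1+q²)−1 ≈ q²/2 for small q, so both the investment and the resulting gain shrink roughly quadratically in the belief — the double decrease. I do not attempt to reproduce the logarithmic (Paretotopia) case here, since its behavior depends on the exact functional form used in <Ref>.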
§.§.§ Equilibrium concepts
I introduce two equilibrium concepts for the Bayesian game and Bayesian bargaining game models. First, I introduce Bayesian Nash equilibria. In the additively separable case, these equilibria are trivial as each player is simply optimizing for their own values in their own universe, ignoring other players. Second, I introduce a generalization of Spohn2007-fp's dependency equilibria for Bayesian games and for continuous action spaces. A dependency equilibrium is a joint belief over the actions of all players, where every player's actions have optimal conditional expected utility. Since it evaluates conditional probabilities and allows for dependencies between players' actions, dependency equilibria are suitable to model the superrational reasoning required for ECL. For instance, in a prisoner's dilemma, there is a dependency equilibrium in which both players cooperate.[There are several other equilibrium concepts in the literature with similar properties al2015evidential,daley2017magical,halpern2018game, which I have not looked at in this report.] My technical contributions are generalizing dependency equilibria to Bayesian games and to continuous action spaces. The latter is necessary for my bargaining model since players bargain over a continuous space of, e.g., independent randomizations over actions, or continuous resource investments.
I prove several results about dependency equilibria in my model, including a generalization of Spohn2007-fp's folk theorem for dependency equilibria, showing that any Pareto improvement over a Bayesian Nash equilibrium is a dependency equilibrium. As a corollary, it follows that the NBS with the Nash equilibrium disagreement point is a dependency equilibrium. I also show that a dependency equilibrium with independent action distributions is a Bayesian Nash equilibrium.
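To make the conditional-expected-utility criterion concrete, here is a small numerical illustration (Python) for a standard prisoner's dilemma, using a full-support joint distribution that puts almost all mass on mutual cooperation and correlates the residual mass (if one player defects, the other probably does too). The payoff numbers and the particular distribution are illustrative choices; using a full-support distribution sidesteps conditioning on probability-zero actions.

# Row player's payoffs in a standard prisoner's dilemma (the column player is symmetric).
u_row = {("C", "C"): 2.0, ("C", "D"): 0.0, ("D", "C"): 3.0, ("D", "D"): 1.0}

eps = 1e-4
# Joint distribution over (row action, column action): almost all mass on (C, C),
# and defection by one player is strongly correlated with defection by the other.
joint = {("C", "C"): 1.0 - eps - 2 * eps**2,
         ("C", "D"): eps**2,
         ("D", "C"): eps**2,
         ("D", "D"): eps}

def conditional_eu(action):
    """Row player's expected utility conditional on playing `action`."""
    mass = sum(pr for (r, c), pr in joint.items() if r == action)
    return sum(pr * u_row[(r, c)] for (r, c), pr in joint.items() if r == action) / mass

print("EU(cooperate | cooperate) =", conditional_eu("C"))   # close to 2
print("EU(defect    | defect)    =", conditional_eu("D"))   # close to 1

Cooperation has the higher conditional expected utility for both (symmetric) players, so a joint belief of this kind supports mutual cooperation, whereas no independent (Nash) strategy profile in the prisoner's dilemma does.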
§.§.§ Disagreement points
I discuss the problem of choosing a disagreement point in ECL. Since ECL only involves choosing some compromise action based on some joint belief, without any actual bargaining, it is unclear what the relevant notion of non-compromise outcome should be. However, how to model agents' default options without ECL is an important question in general, not only in my bargaining model.
A natural option is the Bayesian Nash equilibrium, but there is also the threat point, which is the equilibrium of a game in which players choose disagreement actions to improve their bargaining position. I review a plausible axiomatization of the threat point by Nash1953 and show that the NBS with the threat disagreement point can sometimes lead to bargaining outcomes that are worse than a Nash equilibrium. I also show that the NBS with the threat disagreement point is still a dependency equilibrium. Coercion to join a compromise should not be relevant to ECL, since there are no explicit threats. However, threats might be relevant for the same reason bargaining in general is relevant to ECL. The question of disagreement points is an important area for future work.[It may be valuable to review recent work by diffractor2022rose on threat-resistant bargaining in this context.]
§.§.§ Coalitional stability
Finally, I discuss the issue of coalitional stability. A bargaining solution is coalitionally stable if it is in the core, which is the set of payoff vectors in the feasible set such that no subgroup of players (coalition) can strictly increase payoffs for all of its members by unilaterally deviating from the compromise. Coalitional stability is an important criterion for a compromise solution for ECL since it seems plausible that players would choose to pursue a compromise with a subgroup of players if this leads to higher payoffs. Hence, if the grand coalition of all players is not stable, this would lead to a difficult coalition finding problem, making ECL even more complicated to implement.
I show that the NBS with either the Nash or the threat disagreement point can sometimes be unstable. I then analyze the existence of core allocations. The core is known to be empty in general games [][ch. 13.2]osborne1994course. However, using a result by scarf1967core, I show that assuming additively separable utilities, the core is always nonempty. In this analysis, I assume worst-case responses by players outside the coalition, from among the possible Pareto optimal strategies they could pursue for themselves. (Specifically, I do not assume other players respond with threats against coalitions.)
Additionally, I show that if players outside the coalition respond with a Nash equilibrium, the core can be empty even given additive separability. This demonstrates that sometimes no stable bargaining solution exists that improves upon the Nash equilibrium disagreement point, a strong argument against this disagreement point. The intuition is that sometimes two players cooperating leads to negative externalities for a third player, leaving the third player worse off than with no cooperation. Motivated by my result on the existence of core allocations, I suggest an alternative disagreement point that guarantees stability.
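The blocking logic behind the core is easiest to see in a transferable-utility toy example, which is a simplification of the setting above (where payoffs are not transferable); the coalition values below are hypothetical. In this toy example, a payoff vector is in the core exactly if no coalition's value exceeds what the vector already gives its members.

from itertools import combinations

players = [1, 2, 3]
# Hypothetical transferable-utility coalition values, chosen only for illustration.
v = {frozenset(): 0.0,
     frozenset({1}): 0.0, frozenset({2}): 0.0, frozenset({3}): 0.0,
     frozenset({1, 2}): 8.0, frozenset({1, 3}): 5.0, frozenset({2, 3}): 5.0,
     frozenset({1, 2, 3}): 9.0}

def blocking_coalitions(x):
    """Coalitions that can guarantee their members more than the payoff vector x gives them."""
    blocks = []
    for r in range(1, len(players) + 1):
        for S in combinations(players, r):
            if v[frozenset(S)] > sum(x[i - 1] for i in S) + 1e-9:
                blocks.append(S)
    return blocks

print(blocking_coalitions([3.0, 3.0, 3.0]))   # [(1, 2)] -> the equal split is not in the core
print(blocking_coalitions([4.0, 4.0, 1.0]))   # []       -> this allocation is in the core

Here the equal split is blocked by coalition {1,2}, while (4,4,1) is stable; in the report's setting the analogous check runs over the coalitions' feasible sets and the assumed responses of the players outside the coalition.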
§.§ Outline
* In <Ref>, I discuss several assumptions and simplifications I make in the report.
* In <Ref>, I introduce a standard bargaining formalism. I argue that a bargaining problem is an appropriate model for ECL. After providing an example bargaining problem (<Ref>), I introduce the formal bargaining model and relevant notation (<Ref>). I then discuss maximizing a sum of normalized utility functions as a compromise utility function (<Ref>). Next, I briefly review bargaining theory, introducing the Nash and Kalai-Smorodinsky bargaining solutions (<Ref>). Lastly, in <Ref>, I make some initial observations about the model.
* In <Ref>, I introduce a Bayesian game model. In Sections <ref>–<ref>, I introduce the formalism and notation. I then introduce Bayesian Nash equilibria and dependency equilibria (<Ref>) and discuss extending the model to include uncertainty about decision procedures and similarity to other agents (<Ref>). Finally, I prove several equilibrium results (<Ref>).
* In <Ref>, I introduce Bayesian bargaining game, combining the previous models. I introduce the formal setup and notation in Sections <ref>–<ref>. In <Ref>, I introduce a version of the Nash bargaining solution adapted to my model. I then define equilibria in the model (<Ref>). Lastly, in <Ref>, I discuss several takeaways: I provide equilibrium results, discuss how to think about gains from trade given uncertainty, and work through several toy examples, including armstrong2017double's “double decrease” and drexler2019pareto's “Paretotopia” model.
* In <Ref>, I discuss two important issues: disagreement points (<Ref>) and coalitional stability (<Ref>).
* Finally, in <Ref>, I conclude and outline possible future work.
§.§ Related work
A list of prior work on ECL can be found at <https://longtermrisk.org/msr>. No prior work introduces a formal game-theoretic model and discusses equilibria or bargaining theory. [Sec. 2.7]Oesterheld2017-qg includes a simple calculation establishing the plausibility of ECL but without modeling different players, beliefs, or utilities. [][Sec. 2.9.4]Oesterheld2017-qg introduces a variable for other players' decision theories, an idea I discuss in <Ref>. treutlein2018three introduces a simple model with variables for correlations, gains from trade, and number of cooperators, to establish a wager for ECL.
The most important related work is armstrong2017acausal's sequence on acausal trade. He introduces a toy model where players have different utility functions and uncertainty about the existence of other players. Among other issues, he discusses how gains from trade change under different beliefs. I reproduce some of Armstrong's findings in <Ref>. Armstrong focuses on acausal trade and does not discuss relevance to ECL.
§ PRELIMINARIES
I make several assumptions and simplifications in this report:
* I focus on EDT as a decision theory. In particular, I introduce a game-theoretic solution concept based on conditional expected utilities (see <Ref>). While I believe my analysis also applies to other decision theories that take dependencies between similar
agents into account, I will not discuss this. My
analysis may also be relevant to readers with decision-theoretic
uncertainty, since there may be a wager to take ECL into account given any nonzero credence in EDT macaskill2021evidentialist,treutlein2018three.
I do not model decision-theoretic uncertainty, but it could be added similarly to uncertainty about decision-theoretic similarity (see <Ref>).
* I do not address questions about the nature of dependencies between the decisions of different agents, how one could evaluate whether different agents' decisions are dependent, etc. However, I discuss modeling partial correlations or uncertain beliefs about dependencies in <Ref>.
* I assume that there is only a finite set of
agents and the utilities of the options involved are all finite. This is a problem, since the most likely case in which the universe
is large enough to give rise to ECL is an infinite universe. It seems plausible that solutions to infinite
ethics will not change conclusions from my model [cf.][]macaskill2021evidentialist[][sec. 6.10]Oesterheld2017-qg. The assumption
of a finite set of agents is more problematic, since there likely exists a continuum of agents with a continuum of value systems. One may be able to discretize such a set and approximately recover the model discussed here, but it also seems possible that the general case leads to qualitatively new problems.
* I assume that agents are Bayesian (conditional)
expected utility maximizers.
* For simplicity, I model ECL as a one-off decision. For instance, this
could be a commitment to a policy or a decision to maximize some compromise
utility function in the future. I assume that it is possible to
commit oneself to this compromise, and that there won't be changes to the compromise based on new information about one's empirical situation in the universe. This is plausible if either the agents
can actually commit themselves in this way, or if they just
never learn enough such that their assessment of the situation would
relevantly change. Note that this does not affect how agents arrive at this compromise (whether by first-principles reasoning, by simulating agents in the multiverse, etc.; see the discussion in the next section).
§ COMPLETE INFORMATION BARGAINING MODEL
In this section, I develop a model of ECL as a complete information
bargaining problem. A bargaining problem is a game between players
in which the players have some method of negotiating a binding agreement.
If everyone accepts the agreement, the actions specified by the agreement
are carried out. Otherwise, players carry out some disagreement action.
Complete information, as opposed to incomplete information, means
that everyone knows who the other players are, as well as their options and utility functions.
In ECL, players are uncertain about their superrational cooperators, so an incomplete information model would be more appropriate. Nevertheless, it is useful to start with complete information for simplicity, since many ideas from the complete information setup will transfer. I will relax the complete information assumption in the following sections.
A more critical assumption is that of using a bargaining model for ECL. ECL is based on the idea that an agent has some belief about other agents' actions,
conditional on their own action. The agent takes some
cooperative action, to produce the evidence that other agents also take more cooperative actions. This does not involve any explicit bargaining between the agents. Nevertheless, I believe using a bargaining model is useful for thinking about ECL.
First, the problem of choosing a compromise outcome in ECL has to be addressed in some way. In <Ref>, I discuss [Sec. 2.8]Oesterheld2017-qg's suggested approach of maximizing a compromise utility function, consisting of a sum of normalized utility functions of all superrational cooperators. This is a valid approach, since every compromise that is Pareto optimal, i.e., that cannot be improved upon without making anyone worse off, is the result of maximizing some common weighted sum of utility functions (see <Ref>). However, I argue against choosing a compromise outcome implicitly by normalizing all agents' utility functions, e.g., according to variance, since that approach might leave some cooperators worse off than without a compromise. Formulating ECL as a bargaining problem and reviewing the relevant literature is a natural starting point for addressing the problem explicitly.
Second, a solution to a bargaining problem may serve as a Schelling point[<https://en.wikipedia.org/wiki/Focal_point_(game_theory)>] for superrational cooperators. Solutions can be supported by plausible axioms that could be universally agreed upon. Hence, bargaining theory can be one relevant reference point for determining which evidence one's actions provide about other agents' actions. It seems plausible that, if humans adopt some parsimonious solution, then other, similar agents will do the same.
Third, bargaining may be important because of a parallel between ECL and acausal trade[<https://www.lesswrong.com/tag/acausal-trade>]. Acausal trade refers to the more general
idea that agents could be able to negotiate and enforce a cooperative outcome via mutual simulations, in the absence of any causal interaction. ECL is the special case in which, instead of mutual
simulation, similarity in decision algorithms or psychological processes
ensures a joint cooperative action. While humans might be able to engage in ECL, acausal trade is likely only feasible for superhuman AI systems.
I think there is no principled
distinction between acausal trade and ECL. Determining the conditional
beliefs about the actions of other agents involves, at least in principle,
similar questions as those concerning acausal trade. Conditioning on one's own decision process having some output, one needs to determine which
actions a similar but non-identical decision process in a similar
or symmetrical, but non-identical decision situation would output. At the same time, the other decision process is trying to make the same determination.
Due to such mutual dependencies between the actions of agents, one cannot divide the decision process clearly
into given conditional beliefs that specify which inferences to make
based on different actions, and the subsequent choice of the action
with the highest expected utility. Instead, one has to already make choices while
inferring the (logical) conditional credences. For instance, the inferred conditional distribution over opponent actions may be influenced by one's own commitment
to respond to opponent actions in a certain way [see][]kokotajlo2019commitment,mennen2018wishful.[In a comment on an earlier draft, Max Daniel writes: “If I understand this correctly, this seems important to me, and quite connected to some of the reasons why I feel skeptical about ECL having practical implications for humans. I also feel like it has been underemphasized in texts on ECL so far.”]
It is prudent for humans to have conditional beliefs about the world,
including other agents, even without being able to entirely solve
this issue (which involves various open problems, for instance,
in logical uncertainty[<https://www.alignmentforum.org/tag/logical-uncertainty>]). In this situation, it makes sense to try
to improve one's guesses about ECL both based on reasoning
that is purely based on agents having beliefs about other agents,
and reasoning that involves hypothetical (acausal) bargaining.
Lastly, another way in which a bargaining problem is an inadequate model is that ECL is really a coalitional game. In a bargaining game,
there are only two possibilities: either all players agree to a proposed compromise action, or bargaining completely fails. If one player
disagrees, everyone plays their disagreement action. In a coalitional
game, any group of players can split off and negotiate an agreement, if this is beneficial to that group. I discuss this issue in <Ref>.
Next, I give an example bargaining problem (<Ref>) and introduce the formal bargaining framework (<Ref>). Afterwards, I discuss approaches to compromise that work via maximizing a sum of normalized utility functions (<Ref>). I argue against normalization and for explicitly picking out compromise outcomes. I then review some bargaining theory and discuss the Nash bargaining and the Kalai-Smorodinsky bargaining solutions, concluding that the former may serve as a good Schelling point (I review another solution by Armstrong2013 in <Ref>). Finally, in <Ref>, I make some initial observations and discuss issues that arise in the bargaining model. I address the uniqueness of the actions corresponding to bargaining solutions (<Ref>) and discuss how to think about gains from trade in the bargaining framework (<Ref>).
§.§ Example
Here, I give an example bargaining problem to motivate the following formal definitions, derived from a case by armstrong2017double.
There are two players, Alice and Bob. Alice and
Bob care in an additive way about the things that both do. Say Alice
has 10 and Bob has 5 units of some resource, and A and B are the amounts spent on Alice's utility function by Alice and Bob, respectively. 10-A and 5-B are the respective amounts spent on Bob. Resources invested in
Alice's utility function produce linear utility for her, so her utility
is A+B. Bob's utility function,
on the other hand, has diminishing returns; the marginal cost of one
additional utilon equals the utilons that have already been produced. So to create
x utilons for Bob, both Alice and Bob need to invest ∫_0^xydy=1/2x^2
resources. Hence, Bob's utility function is √(2(10-A))+√(2(5-B)).
It is possible to plot both Alice's and Bob's actions (i.e., ways
to split up their resources between Alice and Bob) in a two-dimensional
plane in which the axes are the utility functions of Alice (x-Axis)
and Bob (y-Axis). Additionally, one can plot all combinations of actions
of Alice and Bob in one joint graph for both utilities (<Ref>). The upper right boundary of the set of feasible utilities is the Pareto frontier—the set of utility vectors such that
no one can improve on their utility without making someone else worse
off.
In this example, both agents maximizing their own utility function leads to a Pareto inferior outcome: the
point (10,√(10)), which does not lie on the Pareto frontier.
If, on the other hand, Bob and Alice are able to coordinate on a cooperative
combination of actions, this leaves both better off. There is the
question, though, which point on the Pareto frontier they should
choose. In this section, I am considering this question in the case of ECL.
An interesting property of the Pareto frontier is that if Alice and Bob choose actions such that the slopes
of their individual Pareto frontiers—in this example, the slope
of the lines in <Ref> (a)—at the point of their actions
are not the same, then the actions are not Pareto optimal. Regarding
the slope of the Pareto frontiers as marginal rates of substitution,
this is a well-known concept in economics. If the marginal rates of substitution for Alice and Bob are not the same, then both players can move in opposite directions on
their Pareto frontier to become jointly better off. One person can give
up some amount x of utility for Alice to gain some amount y
of utility for Bob, and at the same time, the other person can give
up less than y utility for Bob and gain more than x for Alice,
such that jointly, the effect on both of their utilities is positive.
§.§ Formal setup
A (complete information) bargaining game is a 4-tuple
B=(N,(A_i)_i∈ N,(u_i)_i∈ N,d),
where
* N={1,…,n} is the set of players;
* (A_i)_i∈ N is the tuple of finite sets of actions for
each player;
* (u_i)_i∈ N, u_i: A→ℝ, is the tuple
of utility functions for all players, where A=∏_i∈ NA_i
is the set of outcomes, or the set of pure strategy profiles;
* d∈ℝ^n is the disagreement point.
A bargaining game as defined above is a standard normal form game, with the addition of a disagreement point, which is needed to specify a default outcome that is realized when bargaining fails.
In the following, I introduce some initial notation and definitions. As is standard, I write a_-i∈ A_-i:=∏_j∈ N, j≠ iA_j and (a_-i,a_i) for the vector in which the i-th entry is a_i∈ A_i and the remaining entries are given by a_-i∈ A_-i.
Given a bargaining game B, players
are able to randomize between actions. Let Σ_i:=Δ(A_i) be the set of
probability distributions (identified with probability mass functions) over the actions in A_i. Then σ_i∈Σ_i is called a mixed strategy. Moreover, σ∈Σ:=∏_i∈ NΣ_i is called a mixed strategy profile, and I write
u_i(σ):=∑_a∈ A(∏_j∈ Nσ_j(a_j))u_i(a)
for player i's expected utility given mixed strategy profile
σ∈Σ.
I regard the mixed strategies as the options of the players. Note
that at this stage, strategies are always independent. Later I introduce
a different concept which involves possibly dependent joint distributions
over players' actions, which I call “joint strategy distributions”.
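As a concrete reading of the definition of u_i(σ) above, here is a direct transcription (Python) for a small two-player game; the action names and payoff numbers are arbitrary illustrative choices.

from itertools import product

# Hypothetical 2-player game; A[i] is player i's action set, U[i][(a1, a2)] their payoff.
A = {1: ["a", "b"], 2: ["c", "d"]}
U = {1: {("a", "c"): 2.0, ("a", "d"): 0.0, ("b", "c"): 3.0, ("b", "d"): 1.0},
     2: {("a", "c"): 1.0, ("a", "d"): 2.0, ("b", "c"): 0.0, ("b", "d"): 3.0}}

def expected_utility(i, sigma):
    """u_i(sigma) = sum over pure profiles of (product of sigma_j(a_j)) times u_i(a)."""
    total = 0.0
    for profile in product(*(A[j] for j in sorted(A))):
        prob = 1.0
        for j, a_j in zip(sorted(A), profile):
            prob *= sigma[j][a_j]
        total += prob * U[i][profile]
    return total

sigma = {1: {"a": 0.5, "b": 0.5}, 2: {"c": 0.25, "d": 0.75}}   # a mixed strategy profile
print(expected_utility(1, sigma), expected_utility(2, sigma))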
Given a bargaining game B, one can define
F(B)={x∈ℝ^n|∃σ∈Σ∀ i∈ N x_i=u_i(σ)}
as the feasible set. This set is by construction a simplex, and hence
convex and compact. The feasible set contains, for all the possible
mixed strategy profiles that the players can choose, vectors that
specify the expected utilities for each player given that profile—i.e.,
the utility vectors that are feasible given the bargaining game B. I also
define
H(B)={x∈ F(B)|∀ y∈ F(B):(∀ i∈ N:y_i≥ x_i)⇒ y=x}
as the (strict) Pareto frontier of F(B).
As an addendum to <Ref>, I will assume in the following
that the disagreement point simply corresponds to one of the possible mixed strategies, and hence lies in the feasible set, i.e.,
d∈ F(B). Moreover, I assume that it is possible to achieve gains from trade for all players. I.e., there exists x∈ F(B) such that x_i>d_i for all i∈ N. This simplifies matters and is not really a substantial restriction. To see this, note that if players cannot receive gains from trade, then it does not make sense for them to participate in ECL. Moreover, consider the set of players that can receive gains from trade without making other players worse off than their disagreement point. Then by convexity of F(B), there also exist outcomes that make all of these players better off simultaneously.
Utility functions are only specified up to positive affine transformation,
i.e., if there are utility functions u,u' and there is a∈ℝ_>0,
b∈ℝ such that u=au'+b, then these utility functions
imply exactly the same preference relation over Σ. I write
u∼ u' to denote that two utility functions are equivalent in
this sense.
If one subtracts the disagreement point from a utility function,
then the resulting function is at least pinned down with respect
to the additive constant b∈ℝ, i.e., the freedom to add a constant is used up. One can of course
still multiply the function by an arbitrary positive number.
The disagreement point may be just some outcome that would obtain
if everyone were to maximize their own utility function, but it could
also be a “threat point” where everyone takes the action which
produces the best subsequent bargaining solution. More on this in <Ref>, but for now I assume such a point is given.
In the ECL context,
we can assume that players live in completely causally disconnected universes. Hence, if all the players have
value systems that are additive in the universes, it does not make a difference to the
utility one player gets from another player what all the other players
do (this is false in general, e.g., if players were able to causally interact).
So it makes sense to give the following definition:
Utility functions are called additively
separable if there are functions u_i,j for i,j∈ N such that for all
i∈ N and a∈ A, we have
u_i(a)=∑_j∈ Nu_i,j(a_j). A bargaining game B is called additively separable if the corresponding utility functions are additively separable.
Unfortunately, this excludes some notable value systems. For instance, value systems which have diminishing
returns for some good across the multiverse, or value systems that care about the average happiness of all beings in the multiverse. However, it makes things
much easier. I will assume additive separability in many results.
In case utility functions are additively separable, it is possible
to write
F(B)=∑_i∈ NF_i(B):={∑_i∈ Nx_i|∀ j∈ N x_j∈ F_j(B)},
where
F_i(B):={x∈ℝ^n|∃σ_i∈Σ_i∀ j∈ N x_j=u_j,i(σ_i)}
is the feasible set for player i∈ N and u_j,i(σ_i)=∑_a_k∈ A_iσ_i(a_k)u_j,i(a_k).
That is, for each player, there is an individual feasible set of utility vectors that this player can generate for all players, and the joint feasible
set consists of all the points x that are sums of points in the
individual feasible sets. I define
H_i(B)={x∈ F_i(B)|∀ y∈ F_i:(∀ j∈ N:y_j≥ x_j)⇒ y=x}
as the strict Pareto frontier of F_i(B).
This is the upper right boundary of
the utilities that the individual player i∈ N can contribute in
their part of the universe. Note that not every sum ∑_i x_i of points x_i∈ H_i(B) on the individual Pareto frontiers will be Pareto optimal.
In <Ref>, utilities are additively
separable, and <Ref> (a) depicts the two individual feasible sets, while (b) depicts the one combined feasible set.
These feasible sets are not valid in the sense of the above definition,
since they are not convex and they are not simplices. However, we can relax the assumption that F(B) is a simplex by allowing for a bargaining problem to be directly defined as a tuple B=(N,F,d), where N is the set of players, F the feasible set, and d∈ F the disagreement point. In this case, F still has to be compact and convex, but it need not be a simplex (e.g., if the underlying set of actions is continuous, as in <Ref>). Convexity and compactness are required so that we can apply bargaining theory (e.g., to ensure that strictly convex functions have unique minima).
Assuming additive separability, it is practical to just identify
the space of actions of player i∈ N with their feasible set
F_i(B). In that case, we can define a bargaining problem as a tuple B=(N,(F_i)_i∈ N,d) of a set of players N, individual feasible sets F_i for each player, and a disagreement point d. Then we have F(B)=∑_i∈ NF_i. Here, the F_i have to be compact, convex sets, but need not be simplices. Particularly, there are feasible sets F_i such that the
H_i are smooth, n-1-dimensional manifolds, which we will assume in some results below.
Lastly, it is useful to define
ℱ^N={F(B)| B is a bargaining game with set of players N}
as the set of all possible sets of feasible utilities for the set
of players N and ℱ=⋃_N∈𝒫ℱ^N
as the set of all possible feasible sets, where 𝒫={{1,…,n}| n∈ℕ}
is the set of all finite sets of agents. Moreover,
Υ={(F,d)| N∈𝒫,F∈ℱ^N,d∈ F}.
With these definitions, a bargaining solution is a function μ
on Υ such that μ(F,d)∈ F. That is, it takes a feasible
set and a disagreement point, and outputs a unique point in the feasible
set as solution.
§.§ Normalizing utility functions
One possible approach to determining the actions of individual players
in the bargaining problem posed by ECL is maximizing some compromise utility function [Sec. 2.8]Oesterheld2017-qg. In particular, one may start by normalizing individual utility functions via shifting and scaling, and then maximize a weighted sum of them. Maximizing a sum picks out a specific point or affine subset of the Pareto frontier. Note that this correspondence also works the other way around—for every point on the Pareto frontier, we can derive weights such that the point maximizes the corresponding weighted sum. In this section, we will first argue why all players have to maximize the same sum to reach a Pareto optimal agreement. Second, we motivate the use of bargaining solutions that directly pick out points on the Pareto frontier, by arguing against an approach that starts by normalizing utility functions.
One motivation behind the idea of maximizing a weighted sum of utility functions is Harsanyi's utilitarian theorem hammond1992harsanyi. Assume that a player wants to maximize a compromise
utility function u^* that also incorporates other players' preferences.
A very plausible axiom in this case is the following:
Let α,β∈Σ and u_i(α)≥ u_i(β)
for all i∈ N. Then u^*(α)≥ u^*(β).
This is a kind of Pareto optimality condition. If one mixed strategy
profile is at least as good for everyone as another mixed strategy
profile, then it should also be at least as good for the new utility
function u^*. According to a version of Harsanyi's utilitarian
theorem, it follows from this axiom that u^* is just a weighted
sum of the utility functions of individual players:
Resnik1983, Fishburn1984-FISOHU Let u^* satisfy Axiom <ref>.
Then there are weights λ_1,…,λ_n∈ℝ_≥0
such that
u^*∼∑λ_iu_i.
This result says that a player that wants to pursue a compromise and respect the Pareto axiom has to maximize some sum of utility functions. But it leaves open how
to choose the weights in this sum of utility functions.
Assuming additive
separability, we can also show that, to get a Pareto optimal outcome, different players have to maximize the same weighted sum of utility functions. This
follows from the fact that maximizing a weighted sum picks out the
point on the Pareto frontier where the slope of the frontier corresponds
exactly to the weights in the sum. But if two players choose points
on their frontiers with different slopes, there are gains from trade
left on the table. As mentioned in <Ref>, in a Pareto-optimal outcome,
the slopes of the frontiers, i.e., marginal rates of substitution,
have to be identical. Otherwise, both players could jointly move in
opposite directions on the frontier such that both gain more than
they lose.
Let B=(N,(F_i)_i∈ N,d) be an additively separable bargaining game. Assume that there are weight
vectors μ_i∈ℝ_≥0^n for
i∈ N such that player i∈ N takes an action x_i∈ F_i
that maximizes ∑_j∈ Nμ_i,jx_i,j. Then
(i) If μ_1,i=…=μ_n,i>0
for all i∈ N, then ∑_i∈ Nx_i
is Pareto optimal.
(ii) If the boundaries ∂ F_i are smooth n-1-dimensional manifolds and there exist i,j such that μ_i≠μ_j, then ∑_ix_i is not Pareto optimal.
To begin, note that w.l.o.g. we can assume that for all i∈ N, we have ‖μ_i‖_2=1. This is because we assume μ_i≠ 0 for both (i) and (ii), and we can rescale μ_i to have norm 1 without changing the optimum x_i.
Now, to prove (i), assume that μ_1=…=μ_n. We have
∑_i∈ N∑_j∈ Nμ_i,jx_i,j=∑_i∈ Nmax_y_i∈ F_i(B)∑_j∈ Nμ_i,jy_i,j=max_y∈∏_i∈ NF_i(B)∑_j∈ Nμ_1,j∑_i∈ Ny_i,j=max_y∈ F(B)∑_j∈ Nμ_1,jy_j.
If a point is a solution to a maximization problem max_y∈ F(B)∑_j∈ Nμ_1,jy_j
such that μ_1,i>0 for all i, then we cannot improve the utilities for one of the players without making anyone else worse off. Hence, the point is Pareto optimal.
Next, we show (ii) via contraposition. Assume that ∑_i∈ Nx_i
is Pareto optimal.
For any Pareto
optimal point, there is a weight vector ν∈ℝ_≥0^n,
‖ν‖=1 such that ∑_i∈ Nx_i∈argmax_y∈ F(B)ν^⊤y.
Moreover, since the boundaries ∂ F_i are smooth, we can define smooth functions h_iℝ^n→ℝ such that ∂ F_i={x| h_i(x)=0}, i.e., the boundaries ∂ F_i are the level sets h_i=0, and such that for any x∈∂ F_i, ∇ h_i(x) with ‖∇ h_i(x)‖ =1 is a normal vector to the boundary ∂ F_i at x.
Then we have h_i(x_i)=0 for i∈ N. Hence,
x:=(x_1,…,x_n) is a solution to the problem of maximizing
f:∏_i∈ NF_i→ℝ, y↦ν^⊤∑_i∈ Ny_i
under the side-constraint that ℋ(y)=0, where ℋ:∏_i∈ NF_i→ℝ^n
such that ℋ_i:∏_j∈ NF_j→ℝ, y↦ h_i(y_i).
According to the method of Lagrange multipliers, there hence are λ_j∈ℝ
for j∈ N such that
∂_if(x)=∑_j∈ Nλ_j∂_iℋ_j(x),
for all i∈ N. Since ∂_iℋ_j(y)=δ_i,j∇ h_i(y_i) (where δ_i,j is the Kronecker delta),
it follows that
ν=∂_if(x)=∑_j∈ Nλ_j∂_iℋ_j(x)=λ_i∇ h_i(x_i).
In particular, λ_i≠ 0.
Moreover, by assumption, for all i∈ N, x_i maximizes g_i: F_i→ℝ, y_i↦∑_j∈ Nμ_i,jy_i,j
under the side-constraint that h_i(y_i)=0. Hence, it follows that
there is λ'_i∈ℝ such that
μ_i=∇ g_i(x_i)=λ'_i∇ h_i(x_i).
Putting everything together, it follows that
μ_i=λ'_i∇ h_i(x_i)
=λ'_i/λ_iν
for all i∈ N. Since ‖μ_i‖=1=‖μ_j‖,
it follows that μ_i=ν/‖ν‖=μ_j for all i,j∈ N. This shows the contrapositive.
I believe the result carries over to some degree to a game with non-smooth feasible sets. If there are kinks
in the Pareto frontiers, then at these points, it will be possible to maximize
slightly different weights and still achieve a Pareto optimal outcome,
since several different maximized weighted sums or normal vectors of the
frontier will correspond to the same point.
Since there exist bargaining problems for which the boundaries ∂ F_i are smooth n-1-dimensional manifolds (e.g., in the trivial case in which the F_i are n-dimensional balls), this result shows that there exist problems for which maximizing different weighted sums would result in Pareto suboptimal outcomes.
Assume that there are weight vectors λ_i∈ℝ_≥0^n,
‖λ_i‖_1=1 for all i∈ N, such that λ_i,k≠λ_j,k
for some i,j,k∈ N. Then there is an ECL bargaining game B
such that if all players i∈ N choose to play a mixed strategy
that corresponds to a point x_i∈ F_i(B) such that
x_i∈argmax_y_i∈ F_i(B)λ_i^⊤y_i,
it follows that x=∑_i∈ Nx_i is not Pareto optimal.
Follows directly from Theorem 6.
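To see part (ii) of the theorem numerically, the sketch below (Python) revisits the Alice–Bob example from <Ref>: when Alice and Bob maximize differently weighted sums over their individual feasible sets, the resulting joint payoff vector is Pareto-dominated by another feasible point. The particular weights are arbitrary illustrative choices.

import numpy as np

# Individual contribution curves from the Alice-Bob example:
# Alice invests A in Alice's function and contributes (A, sqrt(2*(10 - A))) to (Alice, Bob);
# Bob invests B in Alice's function and contributes (B, sqrt(2*(5 - B))).
A_grid = np.linspace(0.0, 10.0, 1001)
B_grid = np.linspace(0.0, 5.0, 501)
alice_pts = np.stack([A_grid, np.sqrt(2.0 * (10.0 - A_grid))], axis=1)
bob_pts = np.stack([B_grid, np.sqrt(2.0 * (5.0 - B_grid))], axis=1)

w_alice = np.array([0.8, 0.2])   # weights Alice maximizes (hypothetical)
w_bob = np.array([0.2, 0.8])     # different weights Bob maximizes (hypothetical)
outcome = alice_pts[np.argmax(alice_pts @ w_alice)] + bob_pts[np.argmax(bob_pts @ w_bob)]
print("outcome under mismatched weights:", outcome)

# Search the joint feasible set for a point that Pareto-dominates this outcome.
joint = alice_pts[:, None, :] + bob_pts[None, :, :]
mask = (joint[..., 0] >= outcome[0]) & (joint[..., 1] > outcome[1] + 1e-6)
print("a dominating feasible point:", joint[mask][0] if mask.any() else None)

With the same strictly positive weights for both players, no such dominating point exists, in line with part (i).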
Together with the utilitarian theorem, we can conclude that all superrationalists should maximize some common sum of utility
functions. This leaves open the question of which weighted
sum to maximize.
One suggestion by [][sec. 2.8.5]Oesterheld2017-qg is to choose weights
that normalize utility functions according to their variance. Variance
normalization is also supported by MacAskill2020-MACSNM-2, who
set up a scenario in which players submit utility functions to cast
their vote on a social utility function. Using
relatively strong ignorance assumptions, they show that
normalizing the variance of utility functions leads all players to
have equal voting power; that is, they are all equally likely to change
the option that is best under the social utility function.
For my setting, I think this approach does not work well. This is because
under some circumstances, variance normalization can lead one player
to expect negative gains from trade, and I think that one important
requirement for a compromise is that everyone gets positive gains
from trade. This is true even for players that implement an updateless decision
theory dai2009updateless or have only very little prior knowledge about ECL. Players will have some
(prior) beliefs with which to determine whether a trade will be positive. Given
these beliefs, the trade has to be positive. Otherwise, rational players
will decide not to engage in the compromise.
As an example where variance normalization does not work, take a game
with players 1,2 and action sets A_1={a_1,b_1} and A_2={a_2,b_2}, with utilities
as depicted in Tables <ref>–<ref>. Note that utility functions are additively separable.
Here, the
dominant option for both players is (a_1,a_2). To normalize according to variance, we have to determine a distribution over actions. Here, I assume a uniform distribution. Then the mean
for player 1 is μ_1=-2, and for player 2 it is μ_2=1.
We subtract this mean from the utilities
of all the players, then divide the utilities by their variance.
The variance is σ_i^2=∑_x∈ A_1× A_2(u_i(x)-μ_i)^2
for player i, which is 10 for both players. The normalized
utilities are as depicted in <Ref>.
Here, b_1,a_2 maximizes the sum of normalized utility
functions. But this leaves player 1 worse off than without a compromise.
Though I do not investigate this in more detail, I believe problems may arise with all methods that do not directly pick a point on the Pareto frontier as a compromise solution. It would still be interesting to investigate under which conditions variance normalization or other normalization methods give all agents positive gains from trade, but I will not pursue this approach here further.
In the following, I will consider solutions that directly pick out
a point in the feasible set. Once such a point is given, it is possible to derive weights for utility functions such that the point maximizes the corresponding
weighted sum. If the Pareto frontier is differentiable at this point,
it follows from the proof of <Ref> that these weights are unique. I will not delve into the issue of translations between maximizing
weighted sums and points on the Pareto frontier further in this report. (Though I will address the related issue of uniqueness of the individual mixed strategies maximizing a particular weighted sum in <Ref>.)
§.§ Bargaining theory
Here, I briefly review the existing literature on bargaining theory. Since there exists a large literature on bargaining, it seems likely to me that the most plausible and easy to find solutions to
bargaining problems have already been discovered. There are
two main approaches to bargaining problems:
* The axiomatic or normative approach, which involves specifying plausible axioms for bargaining solutions and proving that these
axioms are equivalent to some choice of a bargaining solution.
* The noncooperative or positive approach, which involves specifying a bargaining
game and analyzing the equilibria of the game.
Both approaches are interesting from an ECL perspective. First, the axiomatic
approach is interesting because a solution that has any chance of giving an agent evidence
that others are pursuing the same solution must be parsimonious. This seems more likely if the bargaining solution depends on plausible axioms. Moreover, it is an argument for relying on the existing
literature, because solutions that have already been found by economists are ceteris
paribus also solutions that are more likely to be found by other superrationalists.[Note that this argument is informal, assuming dependencies of the sort “if I look for plausible axioms
and find them, the other agent will do the same and find the same
axioms”. It is not backed up by some equilibrium or game-theoretic
analysis but a judgement of psychological plausibility.]
Second, modeling the situation using noncooperative game theory can provide
one with evidence in favor of a particular solution being more likely
to result from real-world bargaining situations. This has only been
done for causal bargaining, but hopefully acausal
bargaining theory would give similar results to the causal setting.
Some work points in the direction that such transfer may be possible oesterheld2019robust.
In the following, I will turn to the axiomatic approach and review some of the desiderata from the literature. There are several
plausible axioms for a bargaining solution:
Let μ be some bargaining solution, B a
bargaining game, F(B) its feasible set with Pareto frontier H,
and d∈ F(B) its disagreement point.
(1) (Weak) Individual rationality. The solution should give everyone non-negative
gains from trade. So μ_i(F(B),d)≥ d_i for all i∈ N.
(2) (Strong) Pareto optimality: μ(F(B),d)∈ H(B).
(3) Invariance to affine transformations of utility functions. Let ϕ:ℝ^n→ℝ^n be such that ϕ(x)=[λ_1x_1,…,λ_nx_n]^⊤+y
for some λ_i∈ℝ_>0,y∈ℝ^n. Then
μ(ϕ(F(B)),ϕ(d))=ϕ(μ(F(B),d)).
(4) Anonymity. For any permutation π on N, define π(x)=(x_π(1),…,x_π(n))
for x∈ℝ^n. Then π(μ(F(B),d))=μ(π(F(B)),π(d)).
I argued for (1) in the preceding section. (2) seems fairly plausible
on the grounds that ECL should not leave any possible gains from trade on the
table. (3) is plausible since the solution should not depend on which
representative we pick out of the equivalence class of utility functions
which give rise to the same cardinal ranking over mixed strategy profiles.
(4) tells us that the bargaining solution should be equivariant: the payoffs assigned to players should stay the same, even if we change their indices. While anonymity is plausible, this definition unfortunately ignores the individual feasible
sets F_i(B) for each player i that exist in the additively separable case. This means that players may have to be treated equally, even if their contributions F_i(B) to the overall payoffs differ. However, it seems that the relative size of the contributions
should make a difference for fairness. We will turn to this fairness point again
in <Ref>.
The axioms outlined above do not yet uniquely specify a bargaining solution. However, they do so after adding a fifth axiom. In the following sections, I will turn to two popular suggestions for fifth axioms, which correspond to two different bargaining solutions, the Nash bargaining solution (NBS) and the Kalai Smorodinsky bargaining solution (KSBS). In <Ref>, I discuss an additional solution proposed by Armstrong2013. I do not focus on it here since it violates individual rationality. Since the main parts of this report were written in 2018, I do not consider more recent work such as diffractor2022rose.
§.§.§ The Nash bargaining solution
The Nash bargaining solution (NBS) Nash1950-vg,Harsanyi1972,Lensberg1988-lf,Okada2010-ql,Anbarci2013-yd,Roth1979a
is the point in F(B) which maximizes the product of the players'
gains from trade, also called the Nash welfare.
μ(F(B),d):=argmax_x∈ F(B)^≥ d∏_i∈ N(x_i-d_i),
where F(B)^≥ d:={x∈ F(B)|∀ i∈ N x_i≥ d_i}.
Since F(B)^≥ d is compact and convex and there exists x∈ F(B)
such that x_i>d_i for all i∈ N by assumption, this point exists and is unique. It is also called the symmetric NBS.
Applying the NBS to <Ref>, using the point (10,√(10)) as a disagreement point,
we get the optimization problem
max_A∈[0,10],B∈[0,5](A+B-10)(√(2(10-A))+√(2(5-B))-√(10)).
This has a maximum at
A≈ 8.15, B≈ 3.15, which I have plotted as a green dot in <Ref>.
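This maximum is easy to verify numerically; below is a minimal grid-search check (Python) of the optimization problem above.

import numpy as np

A = np.linspace(0.0, 10.0, 2001)[:, None]   # Alice's investment in Alice's function
B = np.linspace(0.0, 5.0, 1001)[None, :]    # Bob's investment in Alice's function
gain_alice = A + B - 10.0                   # Alice's utility minus her disagreement utility 10
gain_bob = np.sqrt(2 * (10.0 - A)) + np.sqrt(2 * (5.0 - B)) - np.sqrt(10.0)
nash_product = np.where((gain_alice >= 0) & (gain_bob >= 0), gain_alice * gain_bob, -np.inf)
i, j = np.unravel_index(np.argmax(nash_product), nash_product.shape)
print(A[i, 0], B[0, j])   # approx. 8.15 and 3.15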
There are different axiomatizations of the NBS (i.e., choices for
the fifth axiom) which are equivalent. In the many-player case, an
axiom which I find plausible is due to Lensberg1988-lf.
It has an intuitive geometric interpretation but its mathematical formulation is quite technical.
Let P,Q⊆ N, P⊆ Q. Let x_P denote the projection
of x∈ℝ^Q onto ℝ^P. Let H_P^x={y∈ℝ^Q| y_Q∖ P=x_Q∖ P}.
Given C⊆ℝ^Q and x∈ C, write t_P^x(C) for
the projection of H_P^x∩ C onto ℝ^P.
Multilateral stability. If P⊆ N, μ(F(B),d)=x,
and D=t_P^x(F(B)), then
μ(D,d_P)=x_P.
<Ref> shows an illustration from Lensberg1988-lf. Basically, the idea is the following: fix the payoffs for a subset
N∖ P of the players and let the remaining players P
renegotiate their solution on the new feasible set D=t_P^x(F(B))
that results from fixing those payoffs and projecting the feasible set onto ℝ^P.
The result should be the same as the solution of the
entire problem, projected onto ℝ^P. I find this
axiom very appealing.
<Ref> (Pareto optimality, Invariance to affine transformations, Anonymity) and <Ref> (Multilateral stability) together are necessary and sufficient to specify the Nash bargaining solution.
Assuming <Ref>, multilateral stability is interchangeable with the Independence of irrelevant alternatives axiom:
[Independence of irrelevant
alternatives] Let B,B' be two bargaining games such that F(B)⊆ F(B'),
μ(F(B'),d)∈ F(B). Then μ(F(B'),d)=μ(F(B),d).
This also seems like an appealing desideratum. There are several further axiomatizations
of the NBS (in the 2 or n-player case).
There also exists an asymmetric version of the NBS Kalai1977,Roth1979a. Either Pareto optimality or strong Individual rationality
(i.e., μ_i(F(B),d)>d_i for all i∈ N) in combination
with Invariance to affine transformations and Independence of irrelevant alternatives
are necessary and sufficient to characterize all functions
argmax_x∈ F(B)^≥ d∏_i∈ N(x_i-d_i)^α_i,
where α_i>0 and ∑_i∈ Nα_i=1. (I am not sure
whether this would also work with Multilateral stability instead of
Independence of irrelevant alternatives.)
The NBS is also supported by several noncooperative bargaining models Nash1953, Binmore1986, Anbarci2013-yd, BRANGEWITZ2013224, Okada2010-ql.
The most common one is a version of
the alternating offers model by Rubinstein1982-vw. Here, players take turns in making offers, and the other
party (or, in the multilateral case, all other players) can reject
or accept the offer. Players are impatient: either there is a chance at each step that bargaining
breaks down, or the players discount their utilities over time. In this game, there is a unique subgame perfect equilibrium. The limit of this equilibrium as the probability of breakdown approaches zero is the NBS Binmore1986.
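For the textbook two-player case of dividing a unit surplus with discount factors δ_1, δ_2, the unique subgame perfect equilibrium gives the first proposer the share (1-δ_2)/(1-δ_1δ_2); as both discount factors approach 1, this converges to the even split selected by the symmetric NBS. A quick check (Python) of this standard result, included only as an illustration and not as part of the model in this report:

def rubinstein_share(delta1, delta2):
    """First proposer's subgame perfect equilibrium share when dividing a unit surplus."""
    return (1.0 - delta2) / (1.0 - delta1 * delta2)

for delta in [0.5, 0.9, 0.99, 0.999]:
    print(delta, rubinstein_share(delta, delta))   # tends to 0.5, the symmetric NBS split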
§.§.§ The Kalai-Smorodinsky bargaining solution
The KSBS Kalai1975-yv is a more recent alternative to
the NBS. I think it is less suitable for ECL. In the KSBS, the utility
functions are normalized such that 0 is the disagreement point and
1 is the ideal point—the best possible payoff for the agent (among
all payoffs in F(B)). Then the bargaining solution is the point
where the line from the zero point to the point where everyone has
1 utility intersects with the Pareto frontier. This means that the
solution is the point on the Pareto frontier at which ratios between players' utilities are equal to the ratios between their ideal utilities. Formally, if U_i(B) is the best
possible attainable point for i, then the solution is the point
x∈ H(B) such that
U_i(B)-d_i/U_j(B)-d_j=x_i-d_i/x_j-d_j
for all i,j∈ N.
Apparently, there are some problems with generalizing the KSBS to
n-player games Roth1979b. One needs several axioms. One
possible way to axiomatize it in this case is via the following axioms
(in addition to <Ref>) Karos2018generalization.
To make the definitions easier, assume for now that d=0
(if this is not the case, just subtract d from all points in the
feasible set). Then a bargaining solution is just a function of the
feasible set, assuming d=0.
[Individual monotonicity] μ_i(F(B))≤μ_i(F(B')) for all
i∈ N and all problems B, B' with F(B)⊆ F(B'),
U_i(B)≤ U_i(B') and U_j(B)=U_j(B') for all j≠ i.
That is, if someone's ideal point is greater in B than in B', then,
all else equal, their bargaining solution should also be greater in
B than in B'.
[Homogeneous ideal independence of irrelevant alternatives] μ(F(B))=μ(F(B'))
for all bargaining problems B,B' with F(B)⊆ F(B'), μ(F(B))∈ F(B'),
and U(B)=rU(B') for some r≤1.
This is a weakened version of independence of irrelevant alternatives
which requires the ratios of the ideal points to be equal for the
axiom to apply.
[Midpoint domination] For all bargaining problems B and any player i, we have μ_i(F(B))≥1/nU_i(B).
This axiom is also known as Proportional fairness.[<https://en.wikipedia.org/wiki/Proportional_division>]
The KSBS is supposed to be fairer than the NBS, in the sense that if someone has better options (their ideal point
is better), then they should be left better off in bargaining. This
is not the case in the NBS but is the case for the KSBS. However, I disagree with this notion of fairness. To me, fairness in the two-player case is concerned with splitting
the gains from trade equally or according to differences in power.[In the Rubinstein bargaining model, one can derive bargaining power
from players' discount rates. If a player
has a higher time discount, they have a weaker bargaining position.]
But splitting gains from trade equally only makes sense in a transferable utility game, i.e., a game in which there is a common currency of money or resources which has equal utility for both players. Since ECL deals with arbitrary utility functions, we cannot in general assess fairness in the same way here.
An important aspect of fairness is the idea
that there should not be one player or a group of players that only contribute very little to the ECL-compromise, while gaining a lot from it. This type of fairness can be ensured in a coalitional game by requiring that the solution is coalitionally stable. If a player contributes little, then a coalition
of players can split off such that all players in the coalition are
better off, making the solution unstable. I will discuss this in <Ref> and conclude that the KSBS does not fare better than the NBS in this respect.
I think there is a problem with the KSBS that arises
if several agents have the same utility function. Consider a case
with players 1,2,3 and utility functions u_1,u_2,u_3,
where players 1,2 have the same utility function. Initially, each player produces 1 utility for their own utility function, so d=(2,2,1) (since 1 and 2 benefit each other). Now they are trying to decide how to split a surplus of 1 utility that arises from cooperating. The best achievable utilities are b_1=b_2=3 and b_3=2. Hence,
(b_1-d_1)/(b_2-d_2)=(b_2-d_2)/(b_3-d_3)=(b_1-d_1)/(b_3-d_3)=1,
so the ratios of utilities minus the default points in the chosen
outcome have to be equal. Hence, the KSBS chooses (2.5,2.5,1.5).
But this seems wrong: Players 1 and 2 have only received half of the utility, even though there are two of them. If they had been two players with distinct goals, then they would have each gotten one third of the utility, giving 1 and 2 a total of 2/3.
The NBS, since it is maximizing a product, is instead skewed towards 1 and 2 and chooses the point ≈(2.7,2.7,1.3), effectively giving each player one third of the utility surplus.
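To make this concrete, here is a small numerical sketch of the example (my own code; it parameterizes the split of the unit surplus by the share a credited to the utility function shared by players 1 and 2):

```python
import numpy as np

d = np.array([2.0, 2.0, 1.0])        # disagreement point
ideal = np.array([3.0, 3.0, 2.0])    # best attainable payoffs b

def payoff(a):
    # a = share of the unit surplus going to the utility function shared by
    # players 1 and 2; the remaining 1 - a goes to player 3.
    return d + np.array([a, a, 1.0 - a])

grid = np.linspace(0.0, 1.0, 100001)

# NBS: maximize the product of gains from trade over the split a.
a_nbs = max(grid, key=lambda a: np.prod(payoff(a) - d))
print("NBS :", payoff(a_nbs))        # ~ (2.67, 2.67, 1.33), i.e. a = 2/3

# KSBS: farthest feasible point on the ray from d to the ideal point,
# i.e. the split maximizing the smallest normalized gain.
a_ks = max(grid, key=lambda a: min((payoff(a) - d) / (ideal - d)))
print("KSBS:", payoff(a_ks))         # (2.5, 2.5, 1.5)
```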
Lastly, another reason to prefer the NBS over the KSBS is that the KSBS seems to lack the widespread support via noncooperative models and plausible axiomatizations that the NBS has. The axioms for the KSBS in the multilateral
case seem much more contrived than those for the NBS, which makes
it less plausible as a multiverse-wide Schelling point.
Although there apparently are some noncooperative bargaining models
supporting the KSBS Anbarci1997, the support for the
NBS seems greater to me. Relatedly, Google scholar searches for
word combinations such as “Nash bargaining noncooperative” and
“Kalai smorodinsky bargaining noncooperative”, or for the names of the solutions, consistently turn up more than
ten times as many papers for the NBS. Admittedly, there may be
some path dependency or founder's effect here. Nash is a more prominent
name and the NBS was the first published solution. Still,
it seems reasonable that a solution like the NBS—simply maximizing
the product of gains from trade—will be discovered first and thus considered a Schelling point in many parts of the multiverse.
§.§ Observations
Here, I make some initial observations about the bargaining model and discuss potential issues. First, I discuss the question of whether the actions corresponding to a point on the Pareto frontier are unique (<Ref>). If not, this could lead to a coordination problem. I give an example where the decomposition is not unique, show that it is unique if utilities are additively separable and the feasible set is strictly convex, and argue that this should not be an issue in practice. Second, I make some basic observations about gains from trade given additively separable utilities, based on marginal rates of substitutions between different utility functions on individual Pareto frontiers (<Ref>). I conclude with remarks on trade between more than two players and continuity of the NBS (<Ref>).
§.§.§ Uniqueness of the actions corresponding to bargaining solutions
The NBS provides players with a unique compromise point x∈ H(B).
The question arises whether this leaves all players with
a clear instruction on which action to take. There may be
several mixed strategy profiles in Σ which correspond to x.
Then, if players cannot coordinate, the actually chosen outcome may
differ from the NBS. In principle, this outcome could even be worse than the disagreement point
for some.
As an example, take the game with individual Pareto frontiers H_1=H_2={x∈ℝ^2| x_1+x_2=1}
and d=0 (Figure <ref>). The overall Pareto frontier is H={x∈ℝ^2| x_1+x_2=2}, and the point (1,1)∈ H is the NBS. Given any action combination a∈ H_1, b∈ H_2 such that a_1=b_2 and a_2=b_1, we have a_1+b_1=a_2+b_2=1. Hence, any such combination corresponds to (1,1) and maximizes the product of gains.
Now, if player 1 chooses (2,-1), and the other player chooses
(0.5,0.5), which are both individually possible choices if they
were combined with a suitable choice by the respective other player,
then one of the players is worse off than their disagreement point.
Hence, choosing a compromise outcome leads to a coordination problem.
The problem would be even worse without separability of utility
functions. In this case, the coordination problem may be
severe and wrong combinations of action may even be Pareto suboptimal.
Given additive separability, the problem is not as severe. It is not a problem
if, for a player i∈ N, there are several σ_i∈Σ_i
with the same utilities for all players. Hence, it suffices to analyze
a game directly on the basis of the individual feasible sets (F_i)_i∈ N.
Here, as the next result shows, the outcomes will at least always be Pareto optimal.
Let B=(N,(F_i)_i∈ N,d) be an additively separable bargaining
game. Let x∈ H. Let
X_i={y_i∈ H_i|∃ y_-i∈ H_-i:=∑_j∈ N∖{i}H_j:y_i+y_-i=x}
be the set of points y_i that player i can choose from to realize x. Then ∑_i∈ NX_i⊆ H. That is, any combination of such points chosen independently by different players is Pareto optimal.
Let μ∈ℝ^n such that μ^⊤x=max_y∈ Fμ^⊤y
(since x is Pareto optimal, such a weight vector exists). Since F=∑_j∈ NF_j, the maximum of μ^⊤ over F decomposes into the sum of the maxima over the F_j, so for any x_i∈ X_i, we have μ^⊤x_i=max_y∈ F_iμ^⊤y.
Hence, μ^⊤y=max_z∈ Fμ^⊤z for any y∈∑_i∈ NX_i.
But this means that y∈ H.
Under which conditions could ∑_i∈ NX_i contain more
than one vector? At least if F is strictly convex, this cannot
happen.
Same assumptions as <Ref>. Moreover, assume that F(B)
is strictly convex.
Then ∑_i∈ NX_i={x}.
Assume that for i∈ N there are two points x≠ x'∈ H_i
and points y,y'∈ H_-i:=∑_j∈ N∖{i}H_j such
that x+y=x'+y'=h∈ H. Let λ=1/2. Then λ x+(1-λ)x'+λ y+(1-λ)y'=λ(x+y)+(1-λ)(x'+y')=h∈ H.
Moreover, from <Ref>, it follows that h̃:=x'+y∈ H
and λ h+(1-λ)h̃=λ x+(1-λ)x'+y∈ H.
Since h≠h̃∈∂ F(B) and λ h+(1-λ)h̃∈∂ F(B),
F(B) is not strictly convex, which is a contradiction. Hence, x=x'.
Overall, I believe that the kind of non-uniqueness discussed here is unlikely to be a big problem. First, even if the decomposition is not unique in principle, there may still be unique points that are somehow more parsimonious and can thus serve as Schelling points. E.g., in <Ref>, this could be the symmetric point (0.5,0.5). Second, I think it is very unlikely that a situation in which
∑_i∈ NX_i contains more than one point occurs in practice. I have not formalized this, but intuitively, the reason is that Pareto optimal points are points at which the normal vectors to the individual Pareto frontiers of all players are collinear. It is unlikely that two players have Pareto frontiers that have a part that is affine and thus not strictly convex, for which their normal vectors are also exactly collinear. This is because there can only be countably many such affine parts.
§.§.§ Possible gains from trade
We can assess possible gains from trade by looking at the individual Pareto frontiers. Assume that
the whole surface of F_i is a smooth manifold, for each player i (recall that F_i is the set of expected utility vectors that player i can choose from, assuming additive separability, i.e., that the total expected utility for each player is a sum of the individual contributions from each player). For instance, one could justify this with the fact that there exists a continuum of possible actions in the real world.
Then there exists a unique normal vector to this surface at each point on the Pareto frontier H_i⊆∂ F_i. As mentioned in <Ref>, in the 2-D case, the slope of the Pareto frontier at a point corresponds to the marginal rate of substitution between the two utility functions. Pareto optimal points are points at which those slopes are equal for both players, and the normal vectors colinear. Gains from trade
are possible whenever the marginal rates of substitution between the
different utility functions on the Pareto frontier are not equal for
all players. In particular,
if a player was
previously optimizing for their own goals, then giving utility
to other players costs them nothing on the margin (see <Ref>). This idea was introduced as “marginal charity” by hanson2012marginal.
The amount of trade that can happen depends on the specific shape of the Pareto frontiers. If the Pareto frontiers are curved strongly at the disagreement point, such that Pareto optimal trades are very close to this point, then barely any trade is possible
(see <Ref>). I will return to this analysis of possible gains from trade using different toy models for the Pareto frontiers in <Ref>.
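A toy illustration of marginal charity (my own sketch, not one of the toy models referred to above): take the quarter-circle Pareto frontier x_1^2+x_2^2=1 for player 1, where x_1 is their own utility and x_2 the utility they provide to player 2. Starting from the selfish optimum (1,0), providing a small amount ε to the other player costs only on the order of ε^2/2 of own utility:

```python
import math

def own_utility(given):
    # Player 1's own utility on the quarter-circle frontier x1^2 + x2^2 = 1,
    # as a function of the utility x2 = `given` provided to player 2.
    return math.sqrt(1.0 - given ** 2)

for eps in (0.1, 0.01, 0.001):
    cost = 1.0 - own_utility(eps)
    print(f"give {eps}: own cost {cost:.2e}  (~ eps^2/2 = {eps**2 / 2:.2e})")
# First-order gains for the other player come at only second-order own cost.
```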
§.§.§ Further observations
Another observation concerns trades between more than
two players, which can exhibit a more complicated graph structure. For instance, if there are three players 1,2,3,
it is possible that 1 benefits 2, 2 benefits 3, and 3
benefits 1. Not everyone has to receive gains from trade from everyone
else. This property allows for higher gains from trades, but it also means that there can be players involved that don't benefit anyone else, which can be problematic (see <Ref>).
Lastly, it is worth noting that the NBS is continuous in the
feasible set and the disagreement point Lensberg1988-lf.
This means that the NBS is in some sense robust to slight changes or uncertainties about the right specification of the bargaining game.[I believe this is also true of the KSBS, though I have not investigated this.]
§ BAYESIAN GAME MODEL
In this section, I introduce a Bayesian game formalism to model uncertainty about the values and empirical situations of agents in the multiverse, using the type space formalism by Harsanyi1967,
adapted to ECL. In a Bayesian game, players have incomplete information, meaning they are uncertain about the utility function of other players. I will build on the formalism introduced in Sections <ref> and <ref> in <Ref> to define incomplete information bargaining games.
In <Ref>, I introduce the basic formalism and notation, and in <Ref>, I define pure and mixed strategies and their expected utilities. Next, I introduce joint distributions over strategies in <Ref>, which I use in <Ref> to define dependency equilibria, alongside standard Bayesian Nash equilibria. Dependency equilibria assume optimal conditional expectations of actions under joint distributions over strategies and can thus incorporate the evidential reasoning required by ECL.
I then show several equilibrium results, including a generalization of Spohn2007-fp's folk theorem for dependency equilibria, which says that all strategy profiles that Pareto dominate a Nash equilibrium are dependency equilibria (<Ref>). This result shows that dependency equilibria alone won't be useful in constraining beliefs over the players' strategies in ECL further. I also derive simple conditions for when a strategy profile leads to positive gains from trade and is thus a dependency equilibrium.
Finally, in <Ref>, I discuss a possible extension of the formalism to include uncertainty about decision procedures and similarity to other agents.
§.§ Formal setup
An ECL Bayesian game is a tuple G=(N,A,T,p,(u_i)_i∈ N), where
* N={1,…,n} is the set of players;
* A is a generic, finite set of actions;
* T={1,…,m} is a generic set of types, specifying the private information available to each player, i.e., the values and empirical situation in each universe;
* p T^n→[0,1] is a prior probability distribution
over the players' types such that all types have positive probability for all players, i.e., ∑_t_-i∈ T_-ip(t_-i,t_i)>0 for all players i∈ N and types t_i∈ T;
* (u_i)_i∈ N is the tuple of utility functions for each player,
where u_i A^n× T^n→ℝ.
Each player's type gets randomly chosen according to the joint distribution p. A type specifies a player's private information, i.e., whatever information about the player
is not common knowledge. In an ECL Bayesian game, I understand each player as a causally separate universe, inhabited by some intelligent civilization that is able to engage in ECL. The player's type then specifies this civilization's values, as well as their options in furthering any of the other types' values. Players know how many universes and thus causally disconnected civilizations there are, but they are uncertain about everyone's types.
I assume that this formal framework is common knowledge. In particular, everyone knows the common prior
over types. I believe this is a good starting point to analyze the situation, but I am unsure to what degree ECL breaks down as we relax the assumption. One possible generalization for future work would be to allow for individual
probability distributions over types that don't stem from a common
prior over types [see][]Harsanyi1967, or analyze relaxations to common knowledge such as common
p-belief Monderer1989-pj.
My formalization is different from a standard Bayesian game [e.g.][p. 347f.]maschler2020game since the set of types T is the same for each player, and there is only one set of actions, independent of the player and type. Both of these are simply notational simplifications without loss of generality. First, all the information about the actions is encoded in the utility functions, which can depend on players and types (if there are too many actions for some types, we can simply map several actions onto the same utilities). Second, we can still distinguish the different players' type distributions by choosing an appropriate prior distribution p. The only restricting condition here is the assumption that each type for each player has strictly positive probability. However, this assumption could easily be relaxed without changing anything substantial; it merely serves to avoid cumbersome case distinctions based on whether a type has zero probability.
My simplification makes particular sense in ECL, where players are causally disconnected universes. Here, we can regard players simply as vessels that can be inhabited by any of the types, such that really only the types matter. I do not think it would be useful at this point to try to distinguish the different universes.
I formalize the idea that we cannot distinguish between the players as anonymity below, alongside the additive separability assumption from the previous section. I will use this assumption a lot in the following.
Assume that there
are utility functions u_t,t' for all t,t'∈ T such that for
all (a_1,t_1,…,a_n,t_n)∈ A^n× T^n, we have
u_i(a_1,t_1,…,a_n,t_n)=∑_j=1^nu_t_j,t_i(a_j)
for each player i∈ N. Then the utility functions are called additively separable and anonymous.
This definition says that we can express the utility function of any player as a sum of contributions from every player (additive separability), where the received utility only depends on their type, as well as the type of the other player (anonymity). The term u_t,t'(a) thus expresses the utility that any player of type t' gets from any player of type t, when that player chooses action a∈ A.
The prior distribution
p is called anonymous if, for all permutations on players
π and type vectors t∈ T^n, we have p(t_1,…,t_n)=p(t_π(1),…,t_π(n)).
This says that also the distribution over types is anonymous, i.e., symmetric in the players. Note that this does not mean that players' types have to be independent. One could still incorporate a belief under which, for instance, players always believe that other players are likely of the same type as they are.
Lastly, I define the same properties for Bayesian games.
I say that an ECL Bayesian game G=(N,A,T,p,(u_i)_i∈ N) is additively separable and anonymous if u is additively separable and anonymous and if p is anonymous.
§.§ Pure and mixed strategies
Now I turn to the strategies of players in an ECL Bayesian game, as well as associated expected utilities. I start with pure, i.e., deterministic strategies. I then turn to mixed strategies.
A pure strategy α_i∈ A^m is a mapping from the possible
types of player i to their actions. We denote a pure strategy profile
as α∈ A^n,m.
To introduce expected utilities, we need some additional notation for the distribution over types. In a slight abuse of notation, I denote the prior probability of player i
having type t_i as
p(t_i):=∑_t_-i∈ T_-ip(t_-i,t_i).
Note that, if p is anonymous,
p(t_i) does not depend on the player.
Player i of type t_i has a conditional belief over t_-i∈ T_-i, which is given by
p(t_-i| t_i):=p(t_-i,t_i)/p(t_i).
Now the expected utility of α for player i of type t_i
is
EU_i(α; t_i):=∑_t_-i∈ T_-ip(t_-i| t_i)u_i(α_1,t_1,t_1,…,α_n,t_n,t_n).
This is an ex interim expected utility, i.e., after updating on the player's own type, but before having seen anyone else's type. I will focus on ex interim expected utilities in this report since they allow for modeling players with different beliefs, which is an important aspect of ECL in my view.[For more discussion on the question of whether players should update on their own type in principle, see benya2014sin,treutlein2018udt.]
Given two players i≠ j∈ N, the joint prior p and types
t_i,t_j∈ T, we can define
p(t_i| t_j):=∑_t'_-j∈ T_-j s.t. t'_i=t_ip(t_-j'| t_j).
If p is anonymous, then p(t_i| t_j)
depends only on the two types and not the players. We can thus write p(t'| t) for the probability that any player of type t assigns to any other player having type t'.
Given additive separability and anonymity of u, one can use this to simplify
the expected utility of α as
EU_i(α; t_i)=u_t_i,t_i(α_i,t_i)+∑_j∈ N∖{i}∑_t_j∈ Tp(t_j| t_i)u_t_j,t_i(α_j,t_j).
If α_1,t=…=α_n,t for all t∈ T, I say that
α is anonymous and we can write α∈ A^m. If, in
addition to additive separability/anonymity of u, α
and p are anonymous, we can write
EU_t(α)=u_t,t(α_t)+(n-1)∑_t'∈ Tp(t'| t)u_t',t(α_t')
for the expected utility for any player of type t, if the anonymous strategy α∈ A^m is played. Here, the first term stands for the utility that the player produces for themself, while the term with the factor (n-1) stands for the expected utility provided by all other n-1 players, where the expectation is over possible types for any of the other players (and this term is the same for every player due to anonymity).
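As a minimal computational sketch of this formula (the data layout and all names are mine, and the numbers are an arbitrary toy instance, not the example discussed later):

```python
# Sketch of EU_t(alpha) = u[t][t][alpha[t]] + (n-1) * sum_{t'} p_cond[t][t'] * u[t'][t][alpha[t']],
# where u[s][t][a] is the utility a player of type s produces for type t by
# playing action a, and p_cond[t][t2] = p(t2 | t).
def expected_utility(t, alpha, u, p_cond, n):
    own = u[t][t][alpha[t]]
    from_others = sum(p_cond[t][t2] * u[t2][t][alpha[t2]] for t2 in u)
    return own + (n - 1) * from_others

u = {
    "A": {"A": {1: 3, 2: 2}, "B": {1: 0, 2: 2}},   # what type A produces
    "B": {"A": {1: 0, 2: 2}, "B": {1: 3, 2: 2}},   # what type B produces
}
p_cond = {"A": {"A": 0.5, "B": 0.5}, "B": {"A": 0.5, "B": 0.5}}
alpha = {"A": 2, "B": 2}                            # anonymous profile: everyone plays 2
print(expected_utility("A", alpha, u, p_cond, 5))   # 2 + 4 * (0.5*2 + 0.5*2) = 10.0
```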
Next, I consider mixed strategy profiles. In a Bayesian game, we have to specify a distribution over actions for each tuple (i,t_i)∈ N× T of a player and associated type.
A mixed strategy σ_i∈Σ_i:=Δ(A)^T for player i specifies for each possible type t_i, a probability distribution over actions, denoted via σ_i(·| t_i). A mixed strategy profile is a vector σ∈∏_i∈ NΣ_i of mixed strategies for each player.
As with actions, we can denote a mixed strategy profile specifying only distributions over actions for players N∖{i} as σ_-i∈Σ_-i.
The expected utility for player i of action a_i, given mixed strategy profile σ_-i∈Σ_-i, is defined as
EU_i(σ_-i,a_i;t_i):=∑_t_-i∈ T_-ip(t_-i| t_i)∑_a_-i(∏_j∈ N∖{i}σ_j(a_j| t_j))u_i(a_1,t_1,…,a_n,t_n).
Similarly, we can define
EU_i(σ;t_i)
:=∑_a_i∈ Aσ_i(a_i)[EU_i(σ_-i,a_i;t_i)].
Similarly to the above for pure strategy profiles, we could simplify this expression for additively separable and anonymous games. I will skip this here since it won't be needed in the following.
§.§ Joint distributions over strategies
Mixed strategy profiles specify independent distributions over actions for each player. I will use them below to define Bayesian Nash equilibria [][p. 354]maschler2020game. However, in the case of ECL, it is important to consider dependencies between the actions of different players. In this section, I will thus define joint strategy distributions, which allow for different players' actions to be dependent. I will use them to introduce a Bayesian game generalization of dependency equilibria Spohn2007-fp,Spohn2010Depen-13626,Spohn2003-gi,Spohn2005-hi, which explicitly take such dependencies into account.
Let S={s T^n→Δ(A^n),t↦ s(·| t)}
be the set of conditional joint probability distributions over the
actions of all players given their types. Then s∈ S is called a joint strategy
distribution.
Unlike the mixed strategy profiles in bargaining problems, I interpret the distributions over strategies here as subjective credences, rather than as options that could be implemented by the players, e.g., via a randomization device. If players were able to randomize, then this would naturally lead to independent distributions (absent a randomization device that is correlated across the multiverse). Instead, ECL is based on beliefs over actions that imply that agents' actions are dependent, due to the similarity of their decision procedures. I use joint strategy distributions to formalize such beliefs.[The idea that distributions over actions describe beliefs rather than randomization is also common in traditional game theory. E.g.,
Aumann1987 writes:
“An important feature of our approach is that it does not require
explicit randomization on the part of the players. Each player always
chooses a definite pure strategy, with no attempt to randomize; the
probabilistic nature of the strategies reflects the uncertainties
of other players about his choice.”]
Joint strategy distributions can also be anonymous, i.e., symmetric in the player number.
A joint strategy profile s∈ S is called anonymous if for any player permutation
π N→ N, action vector a∈ A^n, and type vector t∈ T^n, we have
s(a| t)=s(a_π(1),…,a_π(n)| t_π(1),…,t_π(n)).
Joint strategy distributions are equivalent to standard mixed strategy profiles in the special case in which the marginals over the different players' actions are independent. To define this formally, we denote the probability for player i∈ N of playing a_i given
type vector t∈ T^n by
s(a_i| t):=∑_a_-i∈ A_-is(a_-i,a_i| t).
Moreover, the prior probability for player i∈ N of type t_i of playing
a_i is
s(a_i| t_i):=∑_t_-i∈ T_-is(a_i| t_-i,t_i)p(t_-i,t_i)/p(t_i).
If s is anonymous, these probabilities don't depend on i. This
justifies defining s(a| t) for any a∈ A and t∈ T in
this case.
A joint strategy distribution s is said to be uncorrelated, if
* s(a_i| t_i)=s(a_i| t) for any player i∈ N, type t∈ T^n and action a_i∈ A;
* s factorizes into a product of its marginals, i.e., if for any t∈ T^n and a∈ A^n, we have
s(a| t)=∏_i∈ Ns(a_i| t_i).
Note that the term “uncorrelated” is imprecise, since the definition actually requires independence. However, I am using the term for simplicity.
Now I turn to conditional expected utilities of actions. For player i∈ N, the conditional probability of other players' actions a_-i∈ A_-i
given player i's action a_i, type vector t∈ T^n, and joint strategy profile s∈ S is
s(a_-i| a_i,t):=s(a_-i,a_i| t)/s(a_i| t).
If the players' action distributions under s are dependent, then this probability might differ between different actions a_i. It takes dependencies into account, instead of simply marginalizing over all possible actions for player i to arrive at an unconditioned probability.
Next, for i,j∈ N, the probability of a∈ A^n given t_i,t_j∈ T
is
s(a| t_i,t_j):=∑_t'∈ T^n s.t. t'_i=t_i,t'_j=t_js(a| t')p(t')/∑_t'∈ T^n s.t. t'_i=t_i,t'_j=t_jp(t').
In another slight abuse of notation, I regard α
as an A^n,m-valued random variable and denote the probability
that player i of type t_i plays a_i given that player
j of type t_j plays a_j via
s(α_i,t_i=a_i|α_j,t_j=a_j,t_i,t_j):=∑_a'∈ A^n s.t. a'_i=a_i,a'_j=a_js(a'| t_i,t_j)/∑_a'∈ A^n s.t. a'_j=a_js(a'| t_i,t_j).
Given anonymous s and p, if i≠ j∈ N, this does not depend on
the players. Lastly, I define
s(a_i,t_i| a_j,t_j):=s(α_i,t_i=a_i|α_j,t_j=a_j,t_i,t_j)p(t_i| t_j).
Apparently, given anonymity, s(a_i,t_i| a_j,t_j) only
depends on the types and actions, but not on either i or j (as
long as i≠ j).
With these notations at hand, we can proceed and define conditional expected
utilities.
The conditional expected
utility of strategy s∈ S, given action a_i∈ A and type t_i for player i is defined as
EU_i(s; a_i,t_i):=∑_t_-i∈ T_-ip(t_-i| t_i)∑_a_-i∈ A_-is(a_-i| a_i,t_-i,t_i)u_i(a_-i,a_i,t_-i,t_i).
Moreover, assuming anonymity of s and p and additive separability and anonymity of u, we define
EU_t(s; a):=u_t,t(a)+(n-1)∑_t'∈ T∑_a'∈ As(a',t'| a,t)u_t',t(a')
as the conditional expected utility of s∈ S for any player of type t given action
a∈ A.
Note that here, we condition the distribution over the other players' actions on player i's action. The conditional expected utility of different actions hence differs not only due to the different causal effects of the actions, but also due to potential dependencies between different players' actions under the distribution s. For instance, in a prisoner's dilemma, one could define a distribution s under which either all players cooperate or all players defect. Then the conditional expected utility of cooperating would be higher, since it would take into account the correlations between the players' actions.
The following lemma justifies the above definition of EU_t(s; a) in the case of anonymity and additive separability.
Assume s∈ S and p are anonymous and u is additively separable and anonymous. Then we have
EU_i(s;a_i,t_i)=EU_t_i( s;a_i)
for any player i∈ N, action a_i∈ A, joint strategy profile s∈ S, and type t_i∈ T.
We have
EU_i(s;a_i,t_i)
=∑_a_-i∈ A_-i∑_t_-i∈ T_-is(a_-i| a_i,t_-i,t_i)p(t_-i| t_i)u_i(a_-i,a_i,t_-i,t_i)
=u_t_i,t_i(a_i)+∑_a'∈ A_-i∑_t'∈ T_-is(a'| a_i,t',t_i)p(t'| t_i)∑_k∈ N∖{i}u_t'_k,t_i(a'_k)
=u_t_i,t_i(a_i)+∑_k∈ N∖{i}∑_t”∈ T∑_a”∈ A∑_a'∈ A_-i s.t. a'_k=a”∑_t'∈ T_-i s.t. t'_k=t”s(a'| a_i,t',t_i)p(t'| t_i)u_t'_k,t_i(a'_k)
=u_t_i,t_i(a_i)+(n-1)∑_t'∈ T∑_a'∈ As(a',t'| a_i,t_i)u_t',t_i(a')
=EU_t_i( s;a_i).
Before turning to equilibrium concepts, we briefly consider the case in which strategies are uncorrelated. In this case, conditional expected utilities correspond to standard expected utilities given a mixed strategy profile and an action.
Assume s∈ S is uncorrelated and define σ via σ_i(a_i| t_i):=s(a_i| t_i) for any player i∈ N, action a_i∈ A, and type t_i∈ T. Then we have
EU_i(s; a_i,t_i)=EU_i(σ_-i,a_i;t_i)
for all players i∈ N, actions a_i∈ A, and types t_i∈ T.
We have
EU_i(s;a_i,t_i) =∑_t_-i∈ T_-ip(t_-i| t_i)∑_a_-i∈ A_-is(a_-i| a_i,t_-i,t_i)u_i(a_-i,a_i,t_-i,t_i)
=∑_t_-i∈ T_-ip(t_-i| t_i)∑_a_-i∈ A_-i(s(a_-i,a_i| t_-i,t_i)/s(a_i| t_-i,t_i))u_i(a_-i,a_i,t_-i,t_i)
=∑_t_-i∈ T_-ip(t_-i| t_i)∑_a_-i∈ A_-i(∏_j∈ Ns(a_j| t_j)/s(a_i| t_i))u_i(a_-i,a_i,t_-i,t_i)
=∑_t_-i∈ T_-ip(t_-i| t_i)∑_a_-i∈ A_-i∏_j∈ N∖{i}s(a_j| t_j)u_i(a_-i,a_i,t_-i,t_i)
=∑_t_-i∈ T_-ip(t_-i| t_i)∑_a_-i∈ A_-i∏_j∈ N∖{i}σ_j(a_j| t_j)u_i(a_-i,a_i,t_-i,t_i)
=EU_i(σ_-i,a_i;t_i).
§.§ Equilibrium concepts
To analyze the equilibria of ECL Bayesian games, I first define a Bayesian Nash equilibrium, which is a standard solution concept for Bayesian games and which assume mixed strategy profiles, or equivalently, uncorrelated joint strategy distributions. Afterwards, I will introduce dependency equilibria Spohn2007-fp,Spohn2010Depen-13626,Spohn2003-gi,Spohn2005-hi, which are based on conditional expected utility of potentially dependent strategy distributions and thus more suitable for ECL. Both equilibrium concepts can be motivated descriptively, to analyze how agents in the multiverse might behave, as well as normatively, to ask how rational agents should behave. In addition to the assumption of common knowledge in rationality, both equilibrium concepts are based on the assumption that all players share the same belief over the actions of the other players, conditional on their types. This assumption is too restrictive, absent a mechanism that could force such a common belief, such as repeated interactions or mutual simulation. However, as with other modeling assumptions, we will use this as a starting point for our analysis. In the case of dependency equilibria, our assumptions don't constrain the space of equilibria much: there exists a result similar to the folk theorems for iterated games [see][]fudenberg1986folk, saying that any Pareto improvement over a Bayesian Nash equilibrium is a dependency equilibrium (see <Ref>).
A mixed strategy profile σ is a Bayesian Nash equilibrium
if for all players i∈ N, types t_i∈ T, actions a_i∈ A such that σ_i(a_i| t_i)>0, we have
EU_i(σ_-i,a_i;t_i)≥ EU_i(σ_-i,a'_i;t_i) ∀ a'_i∈ A.
Equivalently, an uncorrelated joint strategy distribution s is a Bayesian Nash equilibrium if
EU_i(s;a_i,t_i)≥ EU_i(s;a'_i,t_i) ∀ a'_i∈ A
for all players i∈ N, types t_i∈ T, and actions a_i∈ A such that s(a_i| t_i)>0.
Note that for a Bayesian Nash equilibrium σ, we have
EU_i(σ;t_i)
=∑_a_i∈ Aσ_i(a_i)EU_i(σ_-i,a_i;t_i)
≥∑_a_i∈ Aσ_i(a_i)EU_i(σ_-i,a'_i;t_i)
=EU_i(σ_-i,a'_i;t_i)
for any player i∈ N, action a'_i∈ A, and type t_i∈ T. Similarly, one can show that if EU_i(σ;t_i)≥ EU_i(σ_-i,a'_i;t_i) holds for all actions a'_i, then σ is a Bayesian Nash equilibrium.
A Bayesian Nash equilibrium is a generalization of a Nash equilibrium for Bayesian games, where the expected utility of a strategy is replaced with the ex interim expected utility. The condition for Nash equilibria is simply EU_i(σ_-i,a_i)≥ EU_i(σ_-i,a'_i) for all players i∈ N and actions a_i,a_i'∈ A where σ_i(a_i)>0.
In a Bayesian Nash equilibrium, we assume that players respond optimally to the distributions over other players' actions. We assume that these distributions are independent, and taking an action does not provide any evidence about the actions of other players. Hence, the notion of best response here takes into account only causal effects of an action, by influencing u directly, rather than by influencing the distribution over actions. As a result, Bayesian Nash equilibria cannot capture the type of reasoning that is required for ECL.
There is another standard solution concept that does assume potential correlations between players' actions, the correlated equilibrium [][ch. 8]maschler2020game. However, this equilibrium concept also fails to capture ECL-type reasoning. Even though players' actions can be correlated, the notion of best response still requires that a player cannot improve their payoff by unilaterally deviating from the joint distribution, without taking into account the evidence such deviations would provide about other players' actions. Hence, I will not delve further into correlated equilibria here.
Instead, I will turn to dependency equilibria, which incorporate evidential reasoning by considering potentially correlated joint distributions and evaluating only conditional expected utilities of actions. There are several other concepts achieving a similar purpose that one could look at in future work al2015evidential,daley2017magical,halpern2018game, but I will focus on dependency equilibria in the following. The following definition of a dependency equilibrium is a Bayesian game generalization of the definition in Spohn2007-fp.[For more discussions on dependencies between agents in games, see Spohn2007-fp,Spohn2010Depen-13626. Spohn sees prior
causal interactions as a common cause between agents' actions, leading to a dependency [cf.][]sep-physics-Rpcc. ECL involves dependencies despite no prior causal interaction. Instead, the dependency is caused by the similarity of decision
algorithms and decision situations of agents in ECL. It could be considered a logical dependency, for which there does not need to exist a common cause. Alternatively, the decision situation and decision algorithm similarity could be considered as an abstract common cause [cf.][]Yudkowsky2010-ur.]
A joint strategy distribution s∈ S is a dependency equilibrium if
there exists a sequence of distributions (s_r)_r∈ℕ
such that lim_r→∞s_r=s, and s_r(a_i| t_i)>0
for all players i∈ N, actions a_i∈ A, types t_i∈ T
and r∈ℕ, and if for all i∈ N, t_i∈ T and
a_i∈ A with s(a_i| t_i)>0, it is
lim_r→∞EU_i(s_r;a_i,t_i)≥lim_r→∞EU_i(s_r;a'_i,t_i) ∀ a'_i∈ A.
The requirement of rationality here is that any action with nonzero probability (in the limit) has to have
greater or equal conditional expected utility for the player performing that
action than any other action. This is similar to a Bayesian Nash equilibrium, only that players' actions are potentially dependent, and we take such dependencies into account when calculating conditional expected utilities.
The construction with limits is required since conditional credences s(a_-i| a_i,t) can only be computed for actions a_i that have positive probability. Hence, to be able to compute all possible conditional credences, we represent a dependency equilibrium s as a limit of distributions s_r for which this is the case.
As an example, consider a Bayesian
version of a prisoners' dilemma with additively separable and anonymous
utilities. There are two players, 1,2, and two types 1,2. Assume that there is a simple ignorance prior p which gives each
combination of types equal probability. In particular, p is anonymous.
Table <ref> shows the utilities that players of the two types produce
with either of two actions 1,2.
For any of the two players, given an anonymous strategy profile s and action a, using <Ref>, we get
EU_t(s;a) =u_t,t(a)+ ∑_t'∈{1,2}∑_a'∈{1,2}u_t',t(a')· s(a',t'| a,t).
Given an uncorrelated strategy profile we have s(a',t'| a,t)=s(a',t'|â,t) for any two actions a,â. Hence, the only term differing between different actions is the term u_t,t(a). It follows that the only possible optimal choice for either type is a=2, leading to an expected utility of EU_t(s;a)=3 + 1/2· 3=4.5, consisting of 3 utility produced by a player for themself and 3 utility provided by the other player in the 50% of cases in which the other player has the same type, and 0 utility provided otherwise. This is the only Bayesian
Nash equilibrium.
What about
dependency equilibria? For simplicity, I restrict myself to joint
strategies that have players of the same type always performing
the same action. This leaves us with 4 probabilities to be determined
(Table <ref>) and the following payoffs for the two types:
EU_1(s;1)= 2+1/2· 2+1/2· 2· a/(a+b) = 3 + a/(a+b)
EU_1(s;2)= 3+1/2· 3 + 1/2· 2· c/(c+d) = 4.5 + c/(c+d)
EU_2(s;1)= 2+1/2· 2+1/2· 2· a/(a+c) = 3 + a/(a+c)
EU_2(s;2)= 3+1/2· 3 + 1/2· 2· b/(b+d) = 4.5 + b/(b+d).
Here, the first term is the utility produced by a player for themself, the second term the utility produced by the other player given that they have the same type (which happens with probability 1/2), and the third term is the utility produced by the other player, assuming they have the opposite type. The term a/(a+b), for instance, stands for the probability that the player of type 2 plays action 1, conditional on the player of type 1 playing action 1.
In this case, there can be no dependency equilibrium in which action 1
gets any probability, since 1 is worse than 2, regardless of the chosen probabilities. In the best case, we have a=d=1/2, in which case EU_t(s;1)=4 and EU_t(s;2)=4.5.
This changes if there are many players. Suppose there
are 10 players, and the other properties of the game stay
the same. Then we have the following payoffs:
EU_1(s;1)= 12+10· a/(a+b)
EU_1(s;2)= 18+10· c/(c+d)
EU_2(s;1)= 12+10· a/(a+c)
EU_2(s;2)= 18+10· b/(b+d).
Here, everyone playing action 1 can be a dependency equilibrium. Given any distribution that puts weight only on a and d, action 1 is always better for either type. Hence, we can define s_r via a=(r-1)/r and d=1/r. Then, for any r∈ℕ, action 1 is the optimal action under distribution s_r, and s:=lim_r→∞s_r is the distribution in which all players play action 1. To find all the mixed joint strategy dependency equilibria, we would have to solve for s such that EU_t(s;1)=EU_t(s;2). I leave this as an exercise.
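These payoffs are easy to check numerically. The following sketch (my own code) encodes the displayed expressions as functions of the table probabilities a, b, c, d and of the coefficient k on the other-player term (k=1 in the two-player case, k=10 in the expressions for the larger game above):

```python
# Payoffs from the displayed expressions, parameterized by the probabilities
# (a, b, c, d) of Table <ref> and by the multiplier k on the other-player term.
def eu_type1(action, a, b, c, d, k):
    if action == 1:
        return 2 + k * (0.5 * 2 + 0.5 * 2 * a / (a + b))
    return 3 + k * (0.5 * 3 + 0.5 * 2 * c / (c + d))

def eu_type2(action, a, b, c, d, k):
    if action == 1:
        return 2 + k * (0.5 * 2 + 0.5 * 2 * a / (a + c))
    return 3 + k * (0.5 * 3 + 0.5 * 2 * b / (b + d))

# Two players (k = 1): even under full correlation (a = d = 1/2), action 2 is better.
print(eu_type1(1, 0.5, 0, 0, 0.5, 1), eu_type1(2, 0.5, 0, 0, 0.5, 1))  # 4.0 4.5

# Larger game (k = 10): approximate the all-play-1 distribution by s_r with
# a = (r-1)/r and d = 1/r; action 1 is better for every r (22 vs 18).
for r in (10, 100, 1000):
    a, d = (r - 1) / r, 1 / r
    print(r, eu_type1(1, a, 0, 0, d, 10), eu_type1(2, a, 0, 0, d, 10))
```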
For further examples and to become more familiar with the concept,
see Spohn2007-fp.
As in the above example, both equilibrium concepts can again
be adapted to an anonymous and additively separable setting. For instance, for Bayesian Nash equilibria, given an anonymous and additively separable game G and anonymous and uncorrelated s∈ S, we get the requirement
EU_t(s;a)≥ EU_t(s;a') ∀ a'∈ A
for all types t∈ T and actions a∈ A such that s(a| t)>0.
§.§ Observations
In this section, I show basic results about equilibria in ECL Bayesian games. First, I show that in an additively separable and anonymous game, there is essentially only one unique Bayesian Nash equilibrium—the strategy profile in which each type simply optimizes for their own values in their own universe, disregarding what everyone else is doing.
Let s∈ S be a Bayesian Nash equilibrium of an additively separable and anonymous ECL Bayesian game G. Then for any player i∈ N, action a_i∈ A and type t_i∈ T, we have s(a_i| t_i)>0 if and only if u_t_i,t_i(a_i)=max_a'∈ Au_t_i,t_i(a'). In particular, if the maximizer of u_t,t is unique for any type t, then s is anonymous and it corresponds to a unique anonymous pure strategy profile α∈ A^m. Moreover, an anonymous pure strategy Bayesian Nash equilibrium always exists in an anonymous and additively separable game.
First, since s is a Bayesian Nash equilibrium, it is uncorrelated, so by <Ref>, there exists σ∈Σ such that EU_i(s;a_i,t_i)=EU_i(σ_-i,a_i;t_i) for all players i, actions a_i, and types t_i. Now let a_i∈ A,t_i∈ T arbitrary. Then we have
EU_i(s; a_i,t_i) =EU_i(σ_-i,a_i;t_i)
=∑_t_-i∈ T_-ip(t_-i| t_i)∑_a_-i∈ A_-i∏_j∈ N∖{i}σ_j(a_j| t_j)u_i(a_-i,a_i,t_-i,t_i)
=∑_t_-i∈ T_-ip(t_-i| t_i)∑_a_-i∈ A_-i∏_j∈ N∖{i}σ_j(a_j| t_j)(u_t_i,t_i(a_i)+∑_j∈ N∖{i}u_t_j,t_i(a_j))
=∑_t_-i∈ T_-ip(t_-i| t_i)∑_a_-i∈ A_-i∏_j∈ N∖{i}σ_j(a_j| t_j)u_t_i,t_i(a_i)+∑_a_-i∈ A_-i∑_t_-i∈ T_-ip(t_-i| t_i)∏_j∈ N∖{i}σ_j(a_j| t_j)∑_j∈ N∖{i}u_t_j,t_i(a_j)
=u_t_i,t_i(a_i)+∑_t_-i∈ T_-ip(t_-i| t_i)∑_a_-i∈ A_-i∏_j∈ N∖{i}σ_j(a_j| t_j)∑_j∈ N∖{i}u_t_j,t_i(a_j).
Note that the second term does not depend on a_i. Hence, by the definition of a Bayesian Nash equilibrium, for any action a_i∈ A such that s(a_i| t_i)>0, and any alternative action a'_i∈ A, we have
0≤ EU_i(s;a_i,t_i)-EU_i(s;a'_i,t_i)
=u_t_i,t_i(a_i)-u_t_i,t_i(a'_i).
This shows that
u_t_i,t_i(a_i)≥ u_t_i,t_i(a'_i) for all a'_i∈ A, so
u_t_i,t_i(a_i)=max_a'∈ Au_t_i,t_i(a').
For the “in particular” part, note that if the maximizer is unique, it follows for any i∈ N and t_i∈ T that s(a_i| t_i)=1 for a_i=_a'∈ Au_t_i,t_i(a'). Hence, s corresponds to the unique anonymous pure strategy profile α∈ A^m, defined via α_t:=_a'∈ Au_t,t(a') for any t∈ T.
Since this does not depend on the player, we have
s(a| t)
=1_a=(α_t_i)_i∈ N
=1_(a_π(i))_i∈ N=(α_t_π(i))_i∈ N
=s(a_π(1),…,a_π(n)| t_π(1),…,t_π(n))
for any permutation π N→ N.
Lastly, it is clear from the above that we can always define an anonymous pure strategy Bayesian Nash equilibrium α∈ A^m given an additively separable and anonymous game by choosing α_t∈_a'∈ Au_t,t(a') arbitrarily for each t.
Next, I turn to dependency equilibria. First, as mentioned by Spohn2007-fp, every Bayesian Nash equilibrium is also a dependency equilibrium.
Every Bayesian Nash equilibrium is a dependency equilibrium.
Let s∈ S be a Bayesian Nash equilibrium. Define s_r=s for any r∈ℕ. Then, by the definition, we have
lim_r→∞EU_i(s_r;a_i,t_i)
=EU_i(s;a_i,t_i)
≥ EU_i(s;a'_i,t_i)
=lim_r→∞EU_i(s_r;a'_i,t_i)
for any player i∈ N, type t_i∈ T, and actions a_i,a'_i∈ A such that s(a_i| t_i)>0. This shows that s is also a dependency equilibrium.
Second, any pure strategy profile that is at least as good as some mixed strategy profile for every player and given any action is a dependency equilibrium. It follows as a corollary that a profile is a dependency equilibrium if it is a (weak) Pareto improvement over a Bayesian Nash equilibrium. The latter was proven in [][Observation 5]Spohn2007-fp for two-player normal form games.
Let σ∈Σ be any mixed strategy profile. Let α∈ A^n,m be a pure strategy profile such
that EU_i(α| t_i)≥ EU_i(σ_-i,a_i;t_i) for all players
i∈ N, t_i∈ T, and actions a_i∈ A.
Define
q∈ S such that for all t∈ T^n, q(α_1,t_1,…,α_n,t_n| t)=1
and q(a| t)=0 for a∈ A^n such that a≠(α_1,t_1,…,α_n,t_n).
Then q is a dependency equilibrium.
We construct distributions q_r that converge to q and such that, conditional on any player taking an action other than the one specified by α, the remaining players play σ. It then follows from the assumption that the actions in α have highest conditional expected utility.
To begin, note that for any player i∈ N and type t_i∈ T, we have
EU_i(q;α_i,t_i,t_i)=∑_t_-i∈ T_-ip(t_-i|t_i)∑_a_-i∈ A_-iq(a_-i|α_i,t_i,t_-i,t_i)u_i(a_-i,α_i,t_i,t_-i,t_i)
=∑_t_-i∈ T_-ip(t_-i| t_i)u_i(α_1,t_1,t_1,…,α_n,t_n,t_n)=EU_i(α;t_i),
so conditional on taking actions specified by α, q is equivalent to α.
Now let s be the (uncorrelated) joint strategy distribution corresponding to σ, i.e., such that EU_i(s;a_i,t_i)=EU_i(σ_-i,a_i;t_i) for all i∈ N, t_i∈ T, and a_i∈ A such that σ_i(a_i| t_i)=s(a_i| t_i)>0. We distinguish two cases.
First, we assume s is strictly positive, i.e., s(a_i| t_i)>0 for any i∈ N, a_i∈ A and t_i∈ T.
Define q_r:=((r-1)/r)q+(1/r)s
(for r>0). Then for any i∈ N, t∈ T^n, a_-i∈ A_-i, and
a_i∈ A such that a_i≠α_i,t_i,
we have
q_r(a_-i| a_i, t)
=(((r-1)/r)q(a_-i,a_i| t)+(1/r)s(a_-i,a_i| t))/∑_a'_-i∈ A_-i(((r-1)/r)q(a'_-i,a_i| t)+(1/r)s(a'_-i,a_i| t))
=((1/r)s(a_-i,a_i| t))/∑_a'_-i∈ A_-i(1/r)s(a'_-i,a_i| t)
=s(a_-i| a_i,t).
That is, conditional on taking action a_i, a_-i is distributed according to s. Hence, using the assumption on α and σ, it follows
EU_i( q_r; a_i,t_i)=EU_i(s; a_i,t_i)=EU_i(σ_-i,a_i;t_i)≤ EU_i(α; t_i)
for any r∈ℕ_>0. Since the expected utility is continuous in the joint strategy distribution, it follows that
lim_r→∞EU_i(q_r;a_i,t_i)≤ EU_i(α;t_i)=EU_i(q;α_i,t_i,t_i)=EU_i(lim_r→∞q_r;α_i,t_i,t_i)=lim_r→∞EU_i(q_r;α_i,t_i,t_i),
where the first equality holds by (<ref>).
Hence, since α_i,t_i is the only action that player i of type t_i plays under q, it follows that q is a dependency equilibrium.
Now consider the case where s is not strictly positive. Then we need to modify q_r to put weight on all actions, to satisfy the definition of a dependency equilibrium. Spohn2007-fp does not specify exactly how this
would be done in their setup,
so I am providing a more detailed proof here.
To this end, I define
a joint distribution s'. Let s'(a| t)=0 for a∈ A^n,t∈ T^n,
unless specified otherwise. Take any i∈ N, a_i∈ A, t∈ T^n
such that s(a_i| t)=0. We want to define s' in a way such that s'(a_i| t)>0. To do so, for some yet to be determined constant c>0, we let s'(a_-i,a_i| t):=c·σ_-i(a_-i| t)
for all a_-i∈ A_-i. Now s'(a_i| t)>0 and
s'(a_-i| a_i,t)=cσ_-i(a_-i| t)/∑_a'_-i∈ A_-icσ_-i(a'_-i| t)=cσ_-i(a_-i| t)/c=σ_-i(a_-i| t)
for all a_-i∈ A_-i. Moreover, assume for some j≠ i, that there
is s(a_j| t)=0 for some a_j∈ A. Then s'(a_j| t)=0
still after these definitions, and by defining s'(a_-j,a_j| t) for player j, I don't change
the previously defined s'(a_-i,a_i| t) for player i. Apply the same procedure
to all actions and players.
Now choose c such that ∑_a∈ A^ns'(a| t)=1.
Proceed in the same manner with all t∈ T^n. Define q_r(a| t):=((r-1)/r)q(a| t)+((r-1)/r^2)s(a| t)+(1/r^2)s'(a| t)
for a∈ A^n,t∈ T^n. Then for any player i∈ N, action a_i, and t∈ T^n such that q(a_i| t)=s(a_i| t)=0, we now have q_r(a_-i| a_i,t)=σ_-i(a_-i| t) and hence
lim_r→∞EU_i(q_r;a_i,t_i)=EU_i(σ_-i,a_i;t_i).
Moreover, for any of the actions a_i that receive positive probability
by s under some t∈ T^n (but not by q), we have
lim_r→∞q_r(a_-i| a_i,t)=lim_r→∞(((r-1)/r^2)s(a_-i,a_i| t)+(1/r^2)s'(a_-i,a_i| t))/(((r-1)/r^2)s(a_i| t)+(1/r^2)s'(a_i| t))
=lim_r→∞(s(a_-i,a_i| t)-(1/r)s(a_-i,a_i| t)+(1/r)s'(a_-i,a_i| t))/(s(a_i| t)-(1/r)s(a_i| t)+(1/r)s'(a_i| t))
=s(a_-i,a_i| t)/s(a_i| t)=s(a_-i| a_i,t)
and hence also lim_r→∞EU_i(q_r;a_i,t_i)=EU_i(s;a_i,t_i)=EU_i(σ_-i,a_i;t_i).
It follows that lim_r→∞EU_i(q_r;a_i,t_i)=EU_i(σ_-i,a_i;t_i)
for any player i∈ N, type t_i, and action a_i∈ A. From here on we can proceed as above, thus concluding the proof.
Let σ be a Bayesian Nash equilibrium and α a pure strategy profile such that EU_i(α| t_i)≥ EU_i(σ| t_i) for any player i∈ N and type t_i∈ T. Then q, defined as in <Ref>, is a dependency equilibrium.
Using the assumption and <Ref>, we have
EU_i(α| t_i)≥ EU_i(σ; t_i)≥ EU_i(σ_-i,a_i; t_i)
for any i∈ N, a_i∈ A, and t_i∈ T. Hence, the result follows from <Ref>.
<Ref> provides a sufficient criterion to see whether an anonymous pure strategy profile α∈ A^m might be a dependency equilibrium.
Let G be an additively separable and anonymous ECL Bayesian game. Let α∈ A^m be an anonymous pure strategy profile and β∈ A^m an anonymous pure strategy Bayesian Nash equilibrium of the game. Then α is a dependency equilibrium if
u_t,t(α_t)-u_t,t(β_t)+(n-1)∑_t'∈ Tp(t'| t)(u_t',t(α_t')-u_t',t(β_t'))≥0
for all t∈ T.
Since G is additively separable and anonymous, an anonymous pure strategy Bayesian Nash equilibrium β∈ A^m exists by <Ref>. Hence, by <Ref> and <Ref>, α is a dependency equilibrium if
EU_t(α)≥ EU_t(β)
for all t∈ T. Using <Ref>, it follows
0≤ EU_t(α)-EU_t(β)
=u_t,t(α_t)+(n-1)∑_t'∈ Tp(t'| t)u_t',t(α_t')-(u_t,t(β_t)+(n-1)∑_t'∈ Tp(t'| t)u_t',t(β_t'))
=u_t,t(α_t)-u_t,t(β_t)+(n-1)∑_t'∈ Tp(t'| t)(u_t',t(α_t')-u_t',t(β_t')).
Intuitively, α is a dependency equilibrium if, for every type t, the gains t receives from the other players choosing α outweigh t's own loss from adopting α themself. If n is sufficiently large, a player's own loss becomes negligible, and the only relevant losses are those caused by other players of the same type also adopting α.
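The corollary's condition can be checked mechanically. Below is a small sketch (my code and naming; u[s][t][a] denotes the utility a player of type s produces for type t with action a, and the numbers are the utilities implied by the earlier prisoner's dilemma example):

```python
# Check of the condition: for every type t,
#   u[t][t][alpha[t]] - u[t][t][beta[t]]
#   + (n-1) * sum_{t'} p_cond[t][t'] * (u[t'][t][alpha[t']] - u[t'][t][beta[t']]) >= 0.
def satisfies_corollary_condition(alpha, beta, u, p_cond, n):
    for t in u:
        own = u[t][t][alpha[t]] - u[t][t][beta[t]]
        others = sum(p_cond[t][t2] * (u[t2][t][alpha[t2]] - u[t2][t][beta[t2]])
                     for t2 in u)
        if own + (n - 1) * others < 0:
            return False
    return True

u = {1: {1: {1: 2, 2: 3}, 2: {1: 2, 2: 0}},
     2: {1: {1: 2, 2: 0}, 2: {1: 2, 2: 3}}}
p_cond = {1: {1: 0.5, 2: 0.5}, 2: {1: 0.5, 2: 0.5}}
alpha = {1: 1, 2: 1}   # everyone cooperates
beta = {1: 2, 2: 2}    # the Bayesian Nash equilibrium: everyone maximizes u_{t,t}
print(satisfies_corollary_condition(alpha, beta, u, p_cond, 2))   # False: -1 + 0.5 < 0
print(satisfies_corollary_condition(alpha, beta, u, p_cond, 10))  # True: -1 + 9*0.5 >= 0
```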
Lastly, if the distributions s_r are all uncorrelated, then the dependency equilibrium must be a Bayesian Nash equilibrium. This tells us that if players' actions are independent, then only Bayesian Nash equilibria are relevant even for superrational players.
Assume s=lim_r→∞s_r is a dependency equilibrium, with (s_r)_r as in <Ref>. Then if for every r∈ℕ, s_r is uncorrelated, s is a Bayesian Nash equilibrium.
Let σ be the mixed strategy profile corresponding to s, and σ^r the one corresponding to s_r. Let i∈ N arbitrary. It is easy to see that since lim_r→∞s_r=s, also lim_r→∞σ_-i^r=σ_-i. Moreover, the expected utility of player i is a continuous function in σ_-i. Now let t_i∈ T, a_i such that σ_i(a_i| t_i)>0, and a'_i∈ A. Then also s(a_i| t_i)=σ_i(a_i| t_i)>0 and thus (i) lim_r→∞EU_i(s_r;a_i,t_i)≥lim_r→∞EU_i(s_r;a'_i,t_i) by the definition of a dependency equilibrium. It follows that
EU_i(σ_-i,a_i;t_i)
=EU_i(lim_r→∞σ_-i^r,a_i;t_i)
=lim_r→∞EU_i(σ_-i^r,a_i;t_i)
=lim_r→∞EU_i(s_r;a_i,t_i)
≥lim_r→∞EU_i(s_r;a'_i,t_i) (by (i))
=lim_r→∞EU_i(σ^r_-i,a'_i;t_i)
=EU_i(lim_r→∞σ^r_-i,a'_i;t_i)
=EU_i(σ_-i,a'_i;t_i).
This shows that σ is a Bayesian Nash equilibrium and thus concludes the proof.
§.§ Uncertainty about decision procedures and similarity
The Bayesian game model introduced here does not explicitly incorporate players with different decision procedures, or with different degrees of similarity. While these aspects could still be modeled implicitly, by defining a suitable joint distribution over actions s∈ S, it might be valuable to introduce explicit controllable parameters. Moreover, dependency equilibria are based on conditional expected utilities and thus effectively assume that all players act optimally under evidentialist or superrational reasoning. In this section, I will relax this assumption by extending the model to incorporate different decision procedures. My analysis is a generalization of the discussion in [][Sec. 2.9.4]Oesterheld2017-qg. As I will show below, this does not substantially increase the generality of my model. For this reason, I will not use the concepts introduced here in the rest of the report, so this subsection can be skipped.
One approach would be splitting types into different subtypes. Assume that there is a finite index set Ω for the different subtypes (e.g., specifying the types' decision procedures). Then we can define new subsets of types T_ω={(1,ω),…,(m,ω)} for each ω∈Ω,
and let the new set of types be T=⋃_ω∈ΩT_ω. We also need to specify a new prior p over this bigger set of types T. I assume that utility functions do not depend on the subtype ω. Having defined these types, we can then restrict the space of possible joint distributions
in S in some way based on types.
I consider a simple binary approach, with two indices: C
for cooperators and D for anyone else.
The cooperators can be thought of as implementing the same or an equivalent decision procedure. Moreover, I assume that these agents maximize conditional expected utility in an ECL Bayesian game, such that we can apply
dependency equilibria to joint distributions over their actions. I assume that the actions of the players with subtype D are independent from those of the C players.[This makes the situation easier to analyze, though I think it is not entirely realistic. Even though the players in T_D are not thought of as superrational cooperators, the players in T_C may still have some conditional beliefs
about their actions. This could include the possibility of these agents
being seen as irrational in some way. For instance, players in T_C
may believe that it is more likely for a player of type T_D to
choose an uncooperative action given that they choose a cooperative
action.] In this framework, one could
model gradual beliefs about similarity by being uncertain about
whether another player belongs to T_C or not.[Compare the comment discussion on treutlein2018request, in particular <https://forum.effectivealtruism.org/posts/92wCvqF73Gzg5Jnrr/request-for-input-on-multiverse-wide-superrationality-msr?commentId=iXXvEremjJtedccwh>]
For simplicity, I assume an additively separable and anonymous setting. I write p(t',ω'| t,ω) for the probability that any player j≠ i has type (t',ω'), given that player i has type (t, ω). Moreover, I define a joint strategy distribution
s_C∈Δ(A^T_C) for all the types in T_C, and a similar distribution s_D for the types in T_D. Note that here, I take a distribution over actions given types as fundamental, rather than deriving such a distribution from an anonymous joint strategy profile as in <Ref>. Given this distribution, one can derive all the relevant probabilities, though I will not explicate this here. I denote with s_C(α_t'=a'| a,t) the belief of a player of type (t,C) that any other player of type (t',C) would play action a', given that the first player plays action a. We also need the marginal probability s_D(α_t'=a') that a player of type (t',D) plays action a' (since that type's action is independent from the actions of a player in T_C, we do not condition it on anything).
Given joint distributions s_C,s_D and a type (t,C), we can then use <Ref> to define expected utilities:
EU_t,C(s_C,s_D; a)
:=u_t,t(a)
+(n-1)∑_t'∈ Tp(t',C| t,C)∑_a'∈ As_C(α_t'=a'| a,t)u_t',t(a')
+(n-1)∑_t'∈ Tp(t',D| t,C)∑_a'∈ As_D(α_t'=a')u_t',t(a').
Since the actions of players in T_C and T_D are independent, the term for the utility from the D types does not depend on the action of a player of type C. Hence, we get
EU_t,C(s_C,s_D;a)-EU_t,C(s_C,s_D;â)
≥ 0
if and only if
u_t,t(a)-u_t,t(â)+(n-1)∑_t'∈ Tp(t',C| t,C)∑_a'∈ A(s_C(α_t'=a'| a,t)-s_C(α_t'=a'|â,t))u_t',t(a')≥ 0.
We can use this to determine when an anonymous pure strategy profile α is a dependency equilibrium, as in <Ref>. To that end, let β be the unique anonymous pure strategy profile Bayesian Nash equilibrium (this does not depend on the subtypes, since utilities do not depend on subtypes). I will not work this out formally here, but analogously to <Ref>, we get the condition
u_t,t(α_t)-u_t,t(β_t)+(n-1)∑_t'∈ Tp(t',C| t,C)(u_t',t(α_t')-u_t',t(β_t'))≥ 0.
To see what this means, in another abuse of notation, I write p(C| t',t,C) to denote the belief of a player of type (t,C) that another player has subtype C, given that they are of type t'. Then p(t',C| t,C)=p(C| t',t,C)p(t'| t,C). Hence, we get the condition
u_t,t(α_t)-u_t,t(β_t)+(n-1)∑_t'∈ Tp(t'| t,C)p(C| t', t,C)(u_t',t(α_t')-u_t',t(β_t'))≥ 0
⇔ u_t,t(α_t)-u_t,t(β_t)+(n-1)∑_t'∈ Tp(t'| t,C)(1-p(D| t', t,C))(u_t',t(α_t')-u_t',t(β_t'))≥ 0
We can see that it differs from <Ref> in that the weight of each type t is reduced based
on how likely such a player is of subtype D instead of C. Given sufficiently large n, this
is not a problem per se—if n goes to infinity, it does not matter
how much weight there is on T_D in total—but it may shift
the relative conditional credences about types of the players. For
instance, a type (t,C) may deem other players with type t more
likely to have subtype C, but may be sceptical whether players of types
t'≠ t are of the C subcategory.
Such shifts in the relative weight of the types, due to different coefficients p(C| t',t,C), can be equivalently modeled by an anonymous prior p' over types, without any subtypes. To sketch an argument for this, note first that we can regard such a prior p' as a symmetric joint distribution p'∈Δ(T× T). Here, p'(t,t')=p'(t',t) is the probability that any two distinct players will have types t and t'. Now we can let
p'(t,t'):=δ^-1p((t,C),(t',C)),
where p is the original prior over types and subtypes, and δ:=∑_t,t'∈ Tp((t,C),(t',C)) is a normalization constant. Then this is apparently a symmetric probability distribution, and we have
p(t',C| t,C)
=p((t',C),(t,C))/p(t,C)
=δ p'(t',t)/p(t,C)
= p'(t'| t) p'(t) δ/p(t,C)
=p'(t'| t)δ'
for some constant δ':=p'(t) δ/p(t,C) that only depends on t, but not on t'. This shows that the relative weights of the different types t', after conditioning on being of type (t,C), are preserved under p'. So, in particular, we have
u_t,t(α_t)-u_t,t(β_t)+(n-1)∑_t'∈ Tp(t',C| t,C)(u_t',t(α_t')-u_t',t(β_t'))
=u_t,t(α_t)-u_t,t(β_t)+δ' (n-1)∑_t'∈ Tp'(t'| t)(u_t',t(α_t')-u_t',t(β_t')).
This shows that at least the simplified model of subtypes considered here is not more general than the already introduced type space model. Assuming large n, believing that another player is of subtype D is equivalent to just giving them less relative weight in the conditional distribution p'(t'| t). For this reason, I will continue without the model introduced in this section in the following.
While I will not pursue this in this report, there may be other, more interesting ways to extend the model introduced here in future work. For instance, one could specify a specific bargaining solution or point on the Pareto frontier for each subtype. The same could be done for other contingent parameters such as disagreement points. One could then analyze how possible gains from trade change with different assumptions about these parameters.
§ ECL AS A BAYESIAN BARGAINING PROBLEM
In this section, I combine the models from the two previous sections, by defining a bargaining game on top of a Bayesian game (Sections <ref> and <ref>). To simplify the formal setup and analysis, I will assume from the start that the underlying ECL Bayesian game is additively separable and anonymous. In <Ref>, I introduce a version of the Nash bargaining solution for incomplete information bargaining games by Harsanyi1972. In <Ref>, I adapt Bayesian Nash equilibria and dependency equilibria to the Bayesian bargaining setup. To be able to apply dependency equilibria to bargaining problems, I generalize dependency equilibria to joint beliefs over continuous strategy spaces.
Finally, I conclude with several takeaways from the model (<Ref>). First, I adapt the results about dependency equilibria from <Ref>, including Spohn2007-fp's folk theorem. I then discuss gains from trade given different beliefs in general (<Ref>) and analyze several toy examples (Sections <ref>–<ref>).
§.§ Formal setup
An ECL Bayesian bargaining game is a tuple G=(N,T,(A_t)_t∈ T,p,d),
where
* N={1,…,n} is the set of players;
* T={1,…,m} is a generic set of types;
* 𝒜_t⊆ℝ^m is the convex and compact set of actions
for type t;
* p∈Δ(T^n) is an anonymous prior probability
distribution over type vectors, such that each type has positive prior probability (i.e., for any i∈ N, p(t_i)>0 for all types t_i∈ T).
* d∈ℝ^m is the disagreement point.
The set of players and types are the same as in an ECL Bayesian game, but the actions are now different. In the previous model, there
was a finite set of actions A, and in the additively separable
and anonymous case, there were utility functions u_t,t' for each tuple of types
t,t'∈ T, specifying the utility that type t produces for type t' with their actions. Since we assume additive separability and anonymity from the start, we now directly define sets of actions 𝒜_t for each type t∈ T, such that each vector x_t∈𝒜_t specifies the utilities x_t,t' that a player of type t can produce for players of type t'∈ T. One could regard 𝒜_t as the convex hull of the
image of A under the function u_t:=[u_t,1,…,u_t,m]^⊤.
That is, if Σ_t:=Δ(A)
is the set of t's mixed strategies, then
𝒜_t={∑_a∈ Aσ_t(a)u_t(a)|σ_t∈Σ_t}.
This corresponds to what was in <Ref> the feasible set F_i for an individual
player (though note that we will define separate feasible sets for the setting here later). In the anonymous incomplete information setup, utilities depend only on
the types, so it suffices to have one such set for each type, with
as many dimensions as there are types. As in <Ref>, this set could
also be something other than a simplex—it only needs to be convex
and compact.
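To make this concrete, the following Python sketch (an illustration only; the payoff matrix and all numbers are made up) represents 𝒜_t by sampling mixed strategies over a hypothetical three-action set and mapping them through u_t.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical payoff matrix: u_t[a, t'] is the utility that a player of type t
# produces for type t' by playing base action a (|A| = 3 actions, m = 2 types).
u_t = np.array([
    [1.0, 0.0],   # an action that only benefits type 1
    [0.0, 1.0],   # an action that only benefits type 2
    [0.6, 0.6],   # a compromise action
])

# Sample mixed strategies sigma_t from the simplex Delta(A) and map them
# through u_t; the resulting utility vectors are points of the convex set A_t.
sigma = rng.dirichlet(np.ones(u_t.shape[0]), size=1000)   # shape (1000, |A|)
A_t_points = sigma @ u_t                                  # shape (1000, m)

print(A_t_points.max(axis=0))   # componentwise maxima of the sampled points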
§.§ Strategies and feasible sets
Turning to pure strategies and expected utilities, I directly introduce strategies that are anonymous and only depend on the types. First, players of the same type have the same information, so it seems plausible that they would all choose the same action. Second, since they have exactly the same set of actions with the same utilities, they likely have to choose the same option to produce Pareto optimal outcomes (as discussed in <Ref>).
Let G=(N,T,(𝒜_t)_t∈ T,p,d) be an ECL Bayesian bargaining
game. Let α∈𝒜:=∏_t∈ T𝒜_t. Then α is called a pure
strategy profile.
Using <Ref>, we can define the expected utility of α∈𝒜 for type t, after updating on observing their own type, as
EU_t(α):=α_t,t+(n-1)∑_t'∈ Tp(t'| t)α_t',t,
where the first term is the utility provided by the player to themself, and the second term is the utility provided by the n-1 other players in expectation.
Next, a feasible set is the set of vectors of expected utilities for all types that can be produced by pure strategy profiles.
Let G=(N,T,(𝒜_t)_t∈ T,p,d) be an ECL Bayesian bargaining
game. Then
F(G):={x∈ℝ^m|∃α∈𝒜 ∀ t∈ T EU_t(α)=x_t}
is the feasible set of G.
As in <Ref>, I assume that d∈ F(G) and that at least one payoff x∈ F(G) exists that is a strict Pareto improvement, i.e., x_i>d_i for all i∈ N.
Next, we turn to the individual feasible sets. These are sets of vectors of expected utilities for all types t' that can be produced by type t with their pure strategies. Here, we have to carefully scale the utilities in 𝒜_t to satisfy <Ref>.
Let t∈ T. Define f^(t):ℝ^T→ℝ^T
via its component functions f_t'^(t)(y):=(n-1)p(t| t')y_t'
for t'∈ T∖{t} and f_t^(t)(y)=y_t+(n-1)p(t| t)y_t.
Then t's individual feasible set is
F_t(G):=f^(t)(𝒜_t).
Given this definition, it follows that F(G)=∑_t∈ TF_t(G). That is,
similarly to complete information bargaining, the feasible set of G is the set of sums of vectors from the individual feasible sets. Since the sets 𝒜_t for
each t∈ T are convex and compact, F_t(G) is also convex
and compact, since it is just the image of 𝒜_t under the linear
mapping f^(t). Hence, the sum F(G) is also convex and compact.
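As a quick numerical sanity check of these definitions, the following Python sketch (an illustration with randomly generated placeholder numbers) implements EU_t(α) and the maps f^(t) as defined above and verifies that summing the individual contributions reproduces the expected utilities.

import numpy as np

def expected_utilities(alpha, p_cond, n):
    """alpha[t, t'] = utility a type-t player produces for type t';
    p_cond[t, t'] = p(t' | t). Returns the vector (EU_t(alpha))_t."""
    m = alpha.shape[0]
    eu = np.empty(m)
    for t in range(m):
        own = alpha[t, t]
        others = (n - 1) * sum(p_cond[t, s] * alpha[s, t] for s in range(m))
        eu[t] = own + others
    return eu

def f_map(t, y, p_cond, n):
    """f^(t)(y): type t's contribution to the expected utility of every type."""
    m = len(y)
    out = np.array([(n - 1) * p_cond[s, t] * y[s] for s in range(m)])
    out[t] += y[t]   # the player's own contribution to their own type
    return out

# Consistency check: EU(alpha) equals the sum of individual contributions,
# which is the statement F(G) = sum_t F_t(G) at the level of single profiles.
rng = np.random.default_rng(1)
n, m = 3, 2
alpha = rng.random((m, m))
p_cond = np.full((m, m), 0.5)          # independent, uniform type beliefs
lhs = expected_utilities(alpha, p_cond, n)
rhs = sum(f_map(t, alpha[t], p_cond, n) for t in range(m))
assert np.allclose(lhs, rhs)
print(lhs)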
Assume there are two types,
1,2. First, we have to specify the sets of actions. Suppose that there are
diminishing returns for both types' utility functions, such that the sets of actions are 𝒜_1=𝒜_2={x∈ℝ^2_≥ 0| x_1^2+x_2^2≤1}
(<Ref>). This could be motivated, for instance, by assuming that resources invested are quadratic in the utilities, and both types can allocate at most one unit of resources to both utility functions.
Second, we compute the feasible sets F_t(G) for both types.
Say there are 3 players in total with independent and uniform type distributions, such that p(1|1)=p(1|2)=p(2|1)=p(2|2)=0.5 (where p(t'| t) is the probability that any player of type t assigns to any other player having type t', as defined in <Ref>).
In the feasible set for type 2, the expected utilities
for type 1 are lower than for type 2, because a player of type 1 is
certain that they themselves have type 1 and hence they believe
that there are in expectation two players of type 1 and only one
type 2 player. The same applies vice versa. The resulting feasible
sets are depicted in <Ref> (a), where F_1=f^(1)(𝒜_1),F_2=f^(2)(𝒜_2) with
f_1^(1)(y) =y_1+2·1/2y_1=2y_1
f_2^(2)(y) =y_2+2·1/2y_2=2y_2
f_2^(1)(y) =2·1/2y_2=y_2
f_1^(2)(y) =2·1/2y_1=y_1
for y∈𝒜_1 or y∈𝒜_2. The feasible set F(G) is then just the set of points x_1+x_2 for all
x_1∈ F_1(G) and x_2∈ F_2(G) (<Ref> (b)).
The above example has a type prior p that assigns equal probability to
all combinations of types, but players end up with different feasible sets because they update on their own type and because there are only few players in the game. I will give examples with different priors, where n is very large, in the next section.
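The feasible sets of this example can also be traced numerically; the following sketch (an illustration only) pushes the boundary of the quarter disc through the maps f^(1) and f^(2) with n=3 and p(t'| t)=0.5.

import numpy as np

n = 3
p_cond = np.array([[0.5, 0.5],    # p(. | type 1)
                   [0.5, 0.5]])   # p(. | type 2)

def f_map(t, y):
    """f^(t)(y): type t's contribution to the expected utility of each type."""
    out = np.array([(n - 1) * p_cond[s, t] * y[s] for s in range(2)])
    out[t] += y[t]
    return out

# Boundary of the quarter disc A_1 = A_2 = {y >= 0 : y_1^2 + y_2^2 <= 1}.
theta = np.linspace(0.0, np.pi / 2, 200)
boundary = np.stack([np.cos(theta), np.sin(theta)], axis=1)

F1 = np.array([f_map(0, y) for y in boundary])   # frontier of F_1(G): (2 y_1, y_2)
F2 = np.array([f_map(1, y) for y in boundary])   # frontier of F_2(G): (y_1, 2 y_2)

# F(G) consists of all sums x + x' with x in F_1(G) and x' in F_2(G).
print(F1.max(axis=0), F2.max(axis=0))            # componentwise maxima (2, 1) and (1, 2)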
§.§ The Nash bargaining solution for incomplete information games
Here, I define the NBS for an ECL Bayesian bargaining game G. I will use this bargaining solution below in my examples to compute cooperative outcomes. Harsanyi1972 introduce an axiomatization of the NBS for a two-player incomplete information game, which I discuss in <Ref>. This axiomatization includes versions of all the axioms of the complete information NBS discussed in <Ref>, and adds two additional axioms to deal with weights for the different types. The definition below generalizes Harsanyi1972's definition to more than two players, adapted to my formal setup.
Let G be an ECL Bayesian bargaining game, and let (ν_t)_t∈ T be a set of weights for each type, such that ν_t≥0 for all t∈ T and ∑_t∈ Tν_t=1. Then the Nash bargaining solution (NBS) for these weights is defined as
argmax_x∈ F(G)^≥ d∏_t∈ T(x_t-d_t)^ν_t.
Harsanyi1972 derive the specific weighting ν_t=p(t) for all t∈ T, suggesting that the utility of a type should be weighed by the prior probability of that type. This weighting ensures the desirable behavior that the bargaining solution does not change if a type is split into two types with identical actions and utilities, provided their combined probability equates to the probability of the original type. These weights also appear reasonable from an ex ante fairness perspective, given that a type with a higher prior likelihood would, in expectation, occupy a larger number of universes. However, when it comes to fairness, there are other criteria that could be important for determining weights (see <Ref>).
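Numerically, the weighted NBS can be approximated by maximising the weighted logarithm of the Nash product. The following Python sketch does this with scipy; the helper name weighted_nbs, the disc-shaped feasible set, the disagreement point, and the weights are all hypothetical placeholders, not part of the formal development.

import numpy as np
from scipy.optimize import minimize

def weighted_nbs(feasibility, d, nu, x0):
    """feasibility(x) <= 0 describes F(G); d is the disagreement point;
    nu are the type weights. Maximises sum_t nu_t * log(x_t - d_t)."""
    def objective(x):
        gains = np.maximum(x - d, 1e-12)   # clipped so that the log stays defined
        return -np.dot(nu, np.log(gains))
    constraints = [
        {'type': 'ineq', 'fun': lambda x: -feasibility(x)},  # x in F(G)
        {'type': 'ineq', 'fun': lambda x: x - d},            # x >= d
    ]
    return minimize(objective, x0, constraints=constraints).x

# Hypothetical two-type game: F(G) = {x : x_1^2 + x_2^2 <= 1}, disagreement
# point d = (1/2, 1/2), and equal weights nu = (1/2, 1/2).
x_star = weighted_nbs(lambda x: x[0] ** 2 + x[1] ** 2 - 1.0,
                      d=np.array([0.5, 0.5]),
                      nu=np.array([0.5, 0.5]),
                      x0=np.array([0.6, 0.6]))
print(np.round(x_star, 3))   # close to (sqrt(2)/2, sqrt(2)/2)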
§.§ Joint distributions and equilibria
Finally, I define joint distributions and equilibria in ECL Bayesian bargaining games. We do not need to introduce distributions to define Bayesian Nash equilibria. Action spaces are already convex, and Bayesian Nash equilibria are in any case trivial in the additively separable case.
Let G be an ECL Bayesian bargaining game. Then a strategy profile α∈𝒜 is a Bayesian Nash equilibrium if for all types t∈ T and actions α'_t∈𝒜_t, we have
EU_t(α)≥ EU_t(α_-t,α'_t).
Note that the notion of best response here assumes that all players of the same type change their action simultaneously, rather than only a single player deviating. This assumes perfect correlations between players of the same type, which seems inappropriate for the uncorrelated notion of Bayesian Nash equilibria. However, I do not consider this an issue, since, as the next proposition shows, Bayesian Nash equilibria are trivial in the additively separable case.
Let G be an ECL Bayesian bargaining game. Then the set of Bayesian Nash equilibria is given via A^*=∏_t∈ TA_t^*, where
A^*_t:=argmax_α_t∈𝒜_tα_t,t
for t∈ T.
This follows as a simple exercise from the definition of a Bayesian Nash equilibrium and <Ref>.
Next, I introduce joint distributions to define dependency equilibria. In the present model, the sets 𝒜_t of strategies are continuous, potentially containing all possible (independent) randomizations over a set of actions that a type could implement. This is necessary to enable bargaining. Joint strategy distributions are then separately defined as joint distributions over the space 𝒜. This allows us to express beliefs such as “if I choose the NBS, other players do the same”, where the NBS is an arbitrary point in the continuous set 𝒜_t.
I define the set S of joint strategy distributions as the set of probability measures on 𝒜⊆ℝ^m,m, endowed with the Borel σ-algebra. For a set A_t⊆𝒜_t, I write s(A_t):=s({α∈𝒜|α_t∈ A_t}), and similarly I define conditionals
s(A| A_t):=s({α∈ A|α_t∈ A_t})/s(A_t)
for A⊆𝒜 and A_t⊆𝒜_t such that s(A_t)>0.
For any set A_t⊆𝒜_t with s(A_t)>0, the expected utility given A_t is defined as
EU_t(s;A_t):=𝔼_α∼ s[EU_t(α)|α_t∈ A_t]
=∫_α∈𝒜EU_t(α)ds(·| A_t).
Now we can define dependency equilibria as a generalization of <Ref>.
Let s∈ S be a joint strategy distribution, and assume there exists a sequence of distributions (s_r)_r∈ℕ that converges weakly to s such that for each r∈ℕ, s_r has full support on each 𝒜_t, i.e., such that s_r(A_t)>0 for every t∈ T and all nonempty open sets A_t⊆𝒜_t. Then s is a dependency equilibrium if for every t∈ T, every A_t⊆𝒜_t with s(A_t)>0, and every nonempty open set A'_t⊆𝒜_t, we have
lim_r→∞EU_t(s_r;A_t)≥lim_r→∞EU_t(s_r;A'_t).
We say that α∈𝒜 is a dependency equilibrium if δ_α, the Dirac measure with δ_α(A)=1 if and only if α∈ A, is a dependency equilibrium.
Here, weak convergence means that for any continuous function f:𝒜→ℝ, we have
lim_r→∞𝔼_α∼ s_r[f(α)]=𝔼_α∼ s[f(α)].
I choose weak convergence as a generalization of the pointwise convergence we assumed in the case where s is a discrete distribution over a finite set of joint actions. Note that weak convergence does not require that the probability of each set converges; for instance, assume that s is the Dirac measure for some point α. Then we can define s_r via densities f_rα'↦ c_r·exp(-r‖α'-α‖), where c_r is some normalization constant. s_r becomes more and more concentrated on α as r→∞, and thus the integral with respect to s_r converges to the one with respect to δ_α for continuous functions. But s_r({α})=0 for all r∈ℕ and s({α})=δ_α({α})=1.
§.§ Observations
In this section, I discuss several takeaways from the model introduced above. I begin by adapting results from <Ref>. I prove a version of [][Observation 5]Spohn2007-fp, saying that any strategy profile that is a Pareto improvement over a Bayesian Nash equilibrium is a dependency equilibrium. In particular, the NBS with the Bayesian Nash equilibrium disagreement point is a dependency equilibrium.
In <Ref>, I make some general remarks about gains from trade in this model. I show that, given large n, only the beliefs over the types of other players matter.
I then work through several toy examples with two types. I apply the NBS as a compromise solution and analyze how gains from trade are affected by different assumptions about beliefs and utility functions. I start with a model in which all types have the same posterior beliefs, but different prior weights (<Ref>). Next, I analyze the situation in which players' types are correlated, such that players have higher posterior weight for their own type. In this case, gains from trade diminish when players become more confident that other players have the same type. This happens roughly quadratically in a model where utilities are square roots of resource investments (<Ref>), reproducing the “double decrease” observed by armstrong2017double. However, given logarithmic returns, as in drexler2019pareto's “Paretotopia” model, gains from trade go down more slowly (<Ref>).
§.§.§ Dependency equilibria
To begin, I show that if a strategy profile is at least as good for each type as some other strategy profile, for any possible action they could take, then it is a dependency equilibrium. The proof idea is the same as for <Ref>. As a corollary, it follows that Bayesian Nash equilibria and Pareto improvements over Bayesian Nash equilibria are dependency equilibria.
Let α,β be two strategy profiles such that for every t∈ T and β'_t∈𝒜_t, we have
EU_t(α)≥ EU_t(β_-t,β'_t).
Then α is a dependency equilibrium.
In <Ref>.
Let β be a Bayesian Nash equilibrium and α such that
EU_t(α)≥ EU_t(β)
for all t∈ T. Then α is a dependency equilibrium. In particular, any Bayesian Nash equilibrium β is a dependency equilibrium.
Since β is a Bayesian Nash equilibrium, we have
EU_t(α)≥ EU_t(β)≥ EU_t(β_-t,β'_t)
for all β'_t∈𝒜_t. Hence, the result follows by <Ref>.
Lastly, I conclude that the NBS with the Bayesian Nash equilibrium disagreement point is a dependency equilibrium.
Let α be the strategy profile corresponding to the NBS with Bayesian Nash equilibrium disagreement point d. Then α is a dependency equilibrium.
The NBS as defined in <Ref> always chooses a point x such that x_t>d_t for all t∈ T. Hence, if β is the profile corresponding to the disagreement point d, then EU_t(α)≥ d_t= EU_t(β) for all t∈ T. By <Ref>, it follows that α is a dependency equilibrium.
§.§.§ General takeaways about gains from trade
Here, I give some general takeaways from the incomplete information bargaining model outlined above. First, assuming additively separable utilities and anonymous strategy profiles allows us to greatly simplify expected utilities received by each type. Recalling <Ref>, we have
EU_t(α)=α_t,t+(n-1)∑_t'∈ Tp(t'| t)α_t',t.
Assuming large n, this becomes
EU_t(α)≈ (n-1)∑_t'∈ Tp(t'| t)α_t',t.
That is, only the expected utility provided by the other players matters. In the following, I will assume large n such that this is the case (unlike in <Ref>, where I assumed n=3).
Second, the contributions α_t',t by other players of different types are weighted by type t's posterior weight for that type, p(t'| t). The higher the posterior weight p(t| t), the lower the weight of all other types, reducing the gains from trade from other types cooperating. If players of type t believe that type t' does not exist, then that type t' cannot benefit players of type t. As observed in <Ref>, uncertainty about the decision procedures of other players or similarities between players similarly factor into expected utilities. It does not matter, for instance, whether a type just cannot benefit other types much, or whether other types believe that the type has low posterior weight.
Positive correlations between players' types reduce gains from trade. In the extreme case in which all players are always of the same type, for instance, no trade is possible. However, if there are no or only small correlations, then trade is possible even given uncertainty about types of other players.
Note that this analysis still depends on the common prior assumption. Relaxing it could lead to further reductions in gains from trade, or it could completely break the analysis.
Moreover, I have not addressed uncertainty about actions, such as which bargaining solution other players will choose. Lastly, it is unclear what happens if the number of types m is as large as the number of players n. In that case, we cannot simply assume that only the other players matter, as the product (n-1)p(t'| t) may stay roughly constant for any given types t,t'. If the different types all have different values, then this could imply a situation more similar to the one with only few players and types.
§.§.§ Different prior probabilities but equal posterior beliefs
Here, I analyze a toy model in which two types have different prior weight, but players have the same beliefs over types, since players' type distributions are independent. I continue as in <Ref> with
two different types of players.
Recall Example <ref>. Assume large n, such that approximately only the expected utilities from the other players matter, and assume
the utility functions are rescaled such that if p(t| t)=0.5
for t=1,2, we have F_1(G)=F_2(G)={x∈ℝ_+^2|(2x_1)^2+(2x_2)^2≤1}. I assume the disagreement point is the Bayesian Nash equilibrium, which is the point (1/2,1/2).
Since the sets and the disagreement point are symmetric, the NBS would pick the symmetric point on the Pareto frontier,
2·(√(2)/4,√(2)/4)
=(√(2)/2,√(2)/2)∈ F(G).
Now consider a situation in which everyone has the same conditional
beliefs about other players (i.e., types of different players are independent), but one type has lower prior probability, p(1|2)=p(1|1)=3/4
and p(2|2)=p(2|1)=1/4. In this case, the individual feasible
sets get rescaled and we get
F_1(G) ={x∈ℝ_+^2 | (4/3x_1)^2+(4/3x_2 )^2≤1}
F_2(G) ={x∈ℝ_+^2 | (4x_1)^2+(4x_2)^2≤1},
as displayed in <Ref>.
Fewer players have type 2 in expectation, so their actions produce less expected utility, both for themselves and for players of the other type. Since the shapes of both Pareto frontiers are the same, the NBS will still pick the same point on F_2(G) as F_1(G), only scaled down. However, the Bayesian Nash equilibrium is asymmetric, given by d_1=3/4 and d_2=1/4, and the prior weights of types are also asymmetric. Using the Bayesian Nash equilibrium as disagreement point and the prior probabilities as weights in the NBS, we hence get
argmax_x∈ F_1(G)+F_2(G)(x_1-d_1)^3/4(x_2-d_2)^1/4≈ (0.92,0.39).
The points in the individual feasible sets corresponding to the NBS and the disagreement point, as well as corresponding points in the overall feasible set, are plotted as green and red dots in <Ref>.
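The approximate value (0.92,0.39) can be checked with a few lines of Python (a rough sketch only). It uses the fact that the Pareto frontier of F_1(G)+F_2(G) is here the unit quarter circle, since the Minkowski sum of quarter discs with radii 3/4 and 1/4 is the quarter disc of radius 1.

import numpy as np

d = np.array([3 / 4, 1 / 4])      # Bayesian Nash equilibrium disagreement point
nu = np.array([3 / 4, 1 / 4])     # prior weights of the two types

# Parametrise the Pareto frontier of F(G) = F_1(G) + F_2(G): the unit quarter circle.
theta = np.linspace(0.0, np.pi / 2, 200001)
x = np.stack([np.cos(theta), np.sin(theta)], axis=1)

gains = x - d
valid = np.all(gains > 0, axis=1)
log_product = np.full(theta.shape, -np.inf)
log_product[valid] = nu[0] * np.log(gains[valid, 0]) + nu[1] * np.log(gains[valid, 1])

print(np.round(x[np.argmax(log_product)], 2))   # approximately [0.92 0.39]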
§.§.§ “Double decrease” given different beliefs and square root utilities
Next, I consider a case in which types have different conditional beliefs.
In this situation, if a type deems another type more likely, then
this increases the gains they can receive from trade. Conversely, if a type deems another
less likely, this decreases their potential gains. Since it is more plausible
that players consider their own type more likely than others, I only consider the latter case.
First, I investigate to what degree lower beliefs in the other type decrease gains from trade, given that utilities are square roots of invested resources. armstrong2017double has observed a “double decrease” in this case, which is the effect that gains from trade
quadratically decrease with the probability assigned to the other type.
Assume that types have equal prior weights, but beliefs p(1|1)=p=p(2|2)
and p(1|2)=1-p=p(2|1). That is, conditional on observing their own type, players of either type believe other players have the same type with probability p, and the other type with probability 1-p. For p=3/4, we get the feasible
sets F_1(G)={x∈ℝ_+^2|(4/3x_1)^2+(4x_2)^2≤1},
F_2(G)={x∈ℝ_+^2|(4x_1)^2+(4/3x_2)^2≤1}
(Figure <ref>).
Due to the symmetry of the situation, it is easy to see that the NBS always picks points on the individual Pareto frontiers where the Pareto frontier has slope
-1 (i.e., the symmetric point on the overall Pareto frontier). Using this, we can compute the point
(p^2/√(1-2(1-p)p),(1-p)^2/√(1-2(1-p)p))∈ F_1(G)
for the first type, and the same point with swapped coordinates for the
second type.
Now we compute the share of expected utility received by the other type, as well as the percent gains from trade, at the NBS outcome. The expected utility received by the other player is (1-p)^2/√(1-2(1-p)p), while the sum of expected utilities received by both types is
p^2/√(1-2(1-p)p)+(1-p)^2/√(1-2(1-p)p)
=√(1 - 2 (1-p) p).
Overall, we get a share of (1-p)^2/(1 - 2 (1-p) p). This is approximately quadratic as 1-p→ 0. Next, turning to gains from trade, the total expected utility from compromise for either player is
p^2/√(1-2(1-p)p)+(1-p)^2/√(1-2(1-p)p)=√(1-2(1-p)p).
The individually achievable expected utility is p. The gain from trade
in percentage of disagreement expected utility is hence √(1-2(1-p)p)/p-1.
We plot share of expected utilities received by the other type as well as percent gains from trade for 1/2≤ p≤ 1 in <Ref> (where p=1-p(t'| t) for t'≠ t). Interestingly, gains from
trade as a percentage of individually attainable utility decline even faster than share of expected utility received by the other type. Overall, this confirms armstrong2017double's observation of a “double decrease” in the square root utility model.
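The two curves can be reproduced with the following short sketch (an illustration only), which simply evaluates the closed-form expressions derived above at a few values of p.

import numpy as np

p = np.linspace(0.5, 0.999, 500)            # posterior weight on one's own type
norm = np.sqrt(1 - 2 * (1 - p) * p)

share_from_other_type = (1 - p) ** 2 / (1 - 2 * (1 - p) * p)
percent_gains_from_trade = norm / p - 1

for q in (0.5, 0.75, 0.9):
    i = np.argmin(np.abs(p - q))
    print(q, round(share_from_other_type[i], 3), round(percent_gains_from_trade[i], 3))
# At p = 0.5 the split is symmetric (share 0.5, roughly 41% gains from trade);
# both quantities fall off quickly as p -> 1, with percent gains from trade
# declining even faster than the share received from the other type.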
§.§.§ “Paretotopia” given logarithmic utilities
While the previous example demonstrates a “double decrease”, this relies on the particular shape of Pareto frontiers in that example. In this section, I show a different result in the case of logarithmic utilities. Given logarithmic utilities, gains from trade can be very large, and Pareto frontiers are shaped in a way that makes compromise expected utilities and gains from trade change less as the belief in the other type goes down. This relates to drexler2019pareto's idea of a “Paretotopia” in the case of logarithmic returns to resources, where reaping gains from trade at all is vastly more important to players than increasing their share of the compromise outcome.
As before, let p(t| t)=p and p(t'| t)=1-p for t≠ t'∈{1,2}. Assume feasible sets
are given by
F_1(G) ={x∈ℝ_+^2 | exp(x_1/p)+exp(x_2/(1-p))≤ r}
F_2(G) ={x∈ℝ_+^2 | exp(x_1/(1-p))+exp(x_2/p)≤ r}
as illustrated in <Ref>. We could interpret this as a case in which
resources produce logarithmic utility for either value system and where r is the amount
of available resources. For symmetry reasons, the NBS is again
the point on the Pareto frontiers where the frontier has slope -1, as long as that point is in the feasible set. This is the point (p log(pr),(1-p)log((1-p)r))
for type 1, with swapped coordinates for type 2, for p≤ 1-1/r. For p> 1-1/r, no trade is possible, and players just optimize for their own values. p log(r) is the amount of utility either type can produce for themself.
Performing the same calculations as in Example <ref>,
we get
((1-p)log(max{(1-p)r,1}))/(p log(pr)+(1-p)log(max{(1-p)r,1}))
as the share of expected utility received by the other type, and
(p log(pr)+(1-p)log(max{(1-p)r,1}))/(p log(r))-1
percent gains from trade.
I plot both functions for p∈ [1/2,1], for the cases r=100 and r=10^9, in <Ref>.
Both share of expected utility received by the other player and percent gains from trade decrease much more slowly than in <Ref>, particularly for the case in which the amount of resources is large and thus gains from trade are vast. Given r=10^9, both percent gains from trade and share of expected utility received by the other type appear to go down approximately linearly as p→ 1.
This shows that the shape of the Pareto frontier determines how gains from trade are affected by differing posterior beliefs. In future work, it would be interesting to extend this analysis, for instance, by investigating situations in which Pareto frontiers are asymmetric.
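The corresponding quantities for the logarithmic case can be evaluated analogously (again only a sketch), for the two resource levels used in the figure.

import numpy as np

def share_and_gains(p, r):
    """Share of EU received from the other type and percent gains from trade."""
    own = p * np.log(p * r)
    other = (1 - p) * np.log(np.maximum((1 - p) * r, 1.0))
    share_from_other_type = other / (own + other)
    percent_gains = (own + other) / (p * np.log(r)) - 1
    return share_from_other_type, percent_gains

for r in (100, 1e9):
    for p in (0.5, 0.75, 0.9):
        share, gains = share_and_gains(p, r)
        print(r, p, round(share, 3), round(gains, 3))
# For r = 1e9, both quantities decline only roughly linearly as p -> 1, much
# more slowly than the roughly quadratic decline in the square-root model above.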
§ DISCUSSION
In this section, I discuss two issues that arise in my model.
First, I discuss the problem of choosing a disagreement point (<Ref>). I define the threat game disagreement point, which is an equilibrium of a game in which players choose disagreement actions to improve their bargaining position. I discuss an axiomatization that supports the threat point, and show that the NBS with this disagreement point is a dependency equilibrium, even though it can be worse for some players than the Nash equilibrium. I also discuss reasons against its relevance to ECL.
Second, I discuss coalitional stability (<Ref>). A compromise outcome is coalitionally stable and thus in the core of a game if no subset of players can unilaterally guarantee its members higher payoffs. I argue that stability is a desirable property in the ECL context. Unfortunately, the NBS with the Nash equilibrium or threat game disagreement point is sometimes not stable. I investigate the existence of stable allocations and show that in an additively separable game, the core is always nonempty. However, I also show that sometimes all core allocations make some players worse off than a Nash equilibrium, providing a strong argument against the Nash equilibrium disagreement point. I conclude by suggesting an alternative disagreement point that guarantees stability.
§.§ Disagreement points
The bargaining model introduced in <Ref> requires a disagreement
point, an outcome that is obtained if players do not reach an agreement.
For ECL, a plausible choice for a disagreement point is a Bayesian Nash equilibrium, which is unique in an anonymous and additively separable game, up to each type's choice of an action that optimizes their own values (Propositions <ref> and <ref>). This is the outcome that players would plausibly choose absent any dependencies
between players. In particular, in the Bayesian game model from <Ref>, I showed that this is the only dependency equilibrium in this case (<Ref>). I also showed in the model from <Ref> that the NBS with the Bayesian Nash equilibrium disagreement point is a dependency equilibrium (<Ref>).
Unfortunately, I will show in <Ref> in <Ref> that sometimes no point that is a weak Pareto improvement over the Bayesian Nash equilibrium is coalitionally stable (even if a stable point exists in principle, i.e., if the core of the game is nonempty). This strongly suggests that the Nash equilibrium may not be the right disagreement point.
Similar to bargaining solutions, one can also find a disagreement point by positing axioms that constrain the possible choices for disagreement points, or by setting up a noncooperative game and analyzing its equilibria. As argued in <Ref>, both approaches can provide relevant insights for ECL, even if ECL does not involve any actual bargaining.
Nash1953 provides both an axiomatization and a noncooperative game that implies the “threat game” disagreement point. This point represents the equilibrium of a game where players choose disagreement actions and receive as payoffs the Nash bargaining solution computed with these disagreement actions. I define this point here for the setup from <Ref>.
For the following definition, I assume that μ^ν(F(G),d) is defined as the NBS with weights ν, computed for the set P⊆ T of all types t for which there exists x∈ F(G)^≥ d such that x_t>d_t. Note that since F(G)^≥ d is convex, if such a point exists for each t∈ P individually, then there also exists a point x'∈ F(G)^≥ d such that x'_t>d_t for all t∈ P simultaneously. Hence, we can define
μ^ν(F(G),d):=argmax_x∈ F(G)^≥ d∏_t∈ P(x_t-d_t)^ν_t.
Let G be an ECL Bayesian bargaining game. Then the threat game disagreement point or threat point is a point d∈ F(G) such that there exists a strategy profile α∈𝒜 with d_t=EU_t(α) for all types t∈ T, and for any t∈ T and α'_t∈𝒜_t, letting d':=(EU_t'(α_-t,α'_t))_t'∈ T, we have
μ_t^ν(F(G),d)≥μ^ν_t(F(G),d').
This definition says that the threat point is a point d, corresponding to a strategy profile α∈𝒜, such that no type can improve their bargaining outcome by changing their action in α. Nash1953 shows that the threat point exists and is unique in his two-player bargaining game. I believe Nash's proof translates to my setup at least with respect to existence, though uniqueness could be violated if there are more than two players.
In Nash1953's axiomatization of the NBS with the threat point, there exists a feasible set
F together with two sets S_1,S_2 that contain the possible
disagreement strategies for players 1,2. In addition to versions
of Axioms <ref> and <ref>,
[][p. 137]Nash1953 requires the following axioms:
A restriction of the set of strategies available to a player cannot
increase the value to him of the game. That is, if S_1'⊆ S_1,
then μ_1(S_1',S_2,F)≤μ_1(S_1,S_2,F). The same
applies for the second player.
There is some way of restricting both players to single strategies
without increasing the value to player one of the game. That is,
there exist s_1∈ S_1,s_2∈ S_2 such that μ_1({s_1},{s_2},F)≤μ_1(S_1,S_2,F).
The same applies for the second player.
It follows from those axioms that the bargaining solution μ will be the NBS with the threat game disagreement point. Note that the
axioms and Nash's proof require a separate set for disagreement strategies,
so this does not directly translate to my setting. However, it seems plausible that one may be able to extend the result.
Note that, even in the two-player case, the NBS with the threat point can be worse for a player than a Nash equilibrium.
Take the game with two players 1,2 and actions a_1,a_2,a_3 and b_1,b_2, respectively, given by <Ref> (a).
Here, the threat game disagreement point would be (-3,2), since given actions (a_3,b_2) as disagreement point, none of the players can change their action to improve their bargaining outcome. Normalizing by this point leads to the
payoffs in <Ref> (b). The feasible set, alongside the relevant points, is illustrated in <Ref>.
One can calculate the NBS as the point (5.5,2.75), which is worse for player
1 than the Nash equilibrium (3,3).
An important question when it comes to ECL is whether there is a dependency equilibrium supporting the NBS with the threat point. This gives at least some basic plausibility to joint beliefs that imply this compromise outcome. Despite it potentially being worse than a Nash equilibrium, this is the case.
Let α∈𝒜 be a strategy profile corresponding to the NBS with the threat game disagreement point. Then α is a dependency equilibrium.
Let β be the strategy corresponding to the threat point d. We show that EU_t(α)≥ EU_t(β_-t,β_t') for all t∈ T and β_t'∈𝒜_t. Then the result follows using <Ref>.
Towards a contradiction, assume that there exists t∈ T and β'_t with EU_t(α)<EU_t(β_-t,β'_t). Then, defining d':=(EU_t'(β_-t,β'_t))_t'∈ T, we have μ^ν_t(F(G),d')≥ d'_t>EU_t(α)=μ^ν_t(F(G),d). But this is a contradiction to the definition of a threat point d.
One problem with the threat point in conventional bargaining is that it supposes an ability to commit to a non-equilibrium action in case no agreement is reached. Insofar as humans cannot credibly commit to certain actions, this suggests that it may not be an appropriate solution concept for bargaining problems between humans. Another concern with the threat point is that it potentially leads to an agreement reached through coercion. It seems reasonable to assume that rational agents should refuse to give in to such coercion. Therefore, if the opponent commits to pursuing a threat in case no agreement is reached, one should not take this as a disagreement point for evaluating gains from trade.[Note that, in general, the distinction between extortion and a fair trade depends on some assignment
of a default outcome [][]armstrong2016extortion. In an additively separable game, the Nash equilibrium is a plausible non-threat default outcome, but it leads to coalitional instability. I will turn to defining an alternative non-threat default outcome in the next section.] It is unclear how these considerations apply to ECL, though it seems plausible that threats should be even less relevant to ECL than to conventional bargaining.
Overall, disagreement points are an important area of further study for ECL. Some recent work on threat-resistant bargaining may be particularly relevant diffractor2022rose. Moreover, it would be interesting to investigate acausal bargaining models to gain insights into the question [e.g.][]diffractor2018cooperative,kosoy2015superrationality.
§.§ Coalitional stability
Another issue with ECL is coalitional
stability. In a coalitional game [][Pt. 4]osborne1994course, players can choose to cooperate with a smaller coalition (subset of players), ignoring the remaining players. This is different from a bargaining game, where all players have to agree to a compromise. A bargaining solution is coalitionally stable if no coalition can unilaterally ensure higher payoffs for their members. In the ECL case, it seems possible for superrationalists
to choose to cooperate with a subset of players rather than with all players (the “grand coalition”). Hence, coalitional stability is an important desideratum for a bargaining solution in the ECL case.[Issues with coalitional stability in ECL were also informally discussed by gloor2018commenting2.]
In the following, I will focus on a complete information bargaining model for simplicity. To formalize coalitional stability, let P⊆ N be an arbitrary coalition. Then we define ν(P)⊆ℝ^n as the set of payoff vectors x∈ℝ^n
such that the players in P can jointly guarantee each member i∈ P at least x_i via a collective action. Depending on assumptions about the responses by the remaining players, this can be formalized in different ways, leading to different functions ν. In any case, we have
ν(N)={x∈ℝ^n|∃ y∈ F(B)∀ i∈ N x_i≤ y_i}.
Given a function ν, the core C^ν(B) is defined as the set of all payoffs x∈ν(N) such that no coalition can guarantee their members strictly higher payoffs.
The core of the bargaining game B with respect to ν is the set
C^ν(B):={x∈ν(N)|∀ P⊆ N∀ y∈ν(P)∃ i∈ P x_i≥ y_i}.
A standard definition for ν is the set of α-effective vectors, which assumes worst-case responses by the remaining players. Formally, x∈ν^α(P)
if and only if there is σ_P∈∏_i∈ PΣ_i such
that for all σ_-P∈∏_j∈ N∖ PΣ_j,
u_i(σ_P,σ_-P)≥ x_i for all i∈ P. The corresponding core C^α(B) is called the α-core.
This definition allows for threats to discourage formation of a coalition. In general, it is unclear whether unfriendly actions like
these should play a role (see <Ref>).
I also consider another way to define ν that does not involve outright threats. For simplicity, I assume additively separable utility functions. Assume that there is some worst case payoff matrix A∈ℝ^n,n, specifying for each player i∈ N payoffs A_i,j∈{x_i,j| x_i∈ F_i(B)} they may produce for a player j∈ N, if they are left out of the coalition. Then I define ν^A as the set of A-effective vectors, via x∈ν^A(P) if there exists y_j∈ F_j(B) for all j∈ P such that for each coalition member i∈ P, we have
x_i≤∑_j∈ Py_j,i+∑_j'∈ N∖ PA_j',i.
That is, we assume coalition members contribute payoffs y_j,i and the remaining players payoffs A_j',i to player i. For instance, A may represent Nash equilibrium payoffs, i.e., A_i∈argmax_x_i∈ F_i(B)x_i,i for each i∈ N. The A-core C^A(B) is defined analogously to the α-core, but using the A-effective vectors ν^A. In the case where A represents the Nash equilibrium, I also write C^NE(B) for the Nash equilibrium core.
Note that the α-core is the largest among these cores, as the following lemma shows.
Let B be a bargaining problem with additively separable utilities. Then for any A∈ℝ^n,n with A_i,j∈{x_i,j| x_i∈ F_i(B)} for i,j∈ N, we have C^A(B)⊆ C^α(B).
Let x∈ C^A(B) be arbitrary. Let P⊆ N and y∈ν^α(P). To show x∈ C^α(B), we have to show that there exists at least one i∈ P such that x_i≥ y_i. We know by definition of ν^α that there exists σ_P∈Σ_P such that y_i≤ u_i(σ_P,σ_-P) for all i∈ P and σ_-P∈Σ_-P. In particular, let σ_j be the strategy corresponding to A_j for j∉ P. Then we know
that
y_i≤ u_i(σ_P,σ_-P) for all i∈ P. Letting x̂_j∈ F_j(B) be the payoff vector corresponding to σ_j for j∈ P, it follows
y_i≤ u_i(σ)=∑_j∈ Px̂_j,i + ∑_j'∈ N∖ PA_j',i
for all i∈ P. Hence, we have y∈ν^A(P).
By the definition of C^A(B), there thus exists i∈ P such that x_i≥ y_i. This concludes the proof.
Now we turn to analyzing the existence of core allocations. First, we show that the NBS is not necessarily in the α-core, even if the core is nonempty.
Consider the game of three players 1,2,3, where each player has
three options 1,2,3. The (additively separable)
utilities generated by each player taking any of the options are specified
in <Ref>.
The unique Nash equilibrium disagreement point is d=(3,3,3). This is also the threat game disagreement point—no matter the actions of the other players, a player can always increase their bargaining position by moving to action 1 and thus raising their own disagreement payoff and reducing that of the other players.
Now, if 1 and 2 form a coalition, they can both guarantee each other a payoff of 5 each, so
ν^α({1,2})={x∈ℝ^3| x_1≤ 5, x_2≤ 5}.
However, the NBS payoffs can be computed as x=(4.33,4.33,5.67). While both 1 and 2 can benefit 3
well, 3
cannot benefit 1 or 2 well, so they are left worse off by joining the grand coalition. Hence, x∉ν^α({1,2}), and the NBS is unstable.
The same goes for the KSBS, since 3's ideal point is better than 1 and 2's, so
the KSBS would grant 3 the highest surplus of the players. In particular, the KSBS does not seem fairer than the NBS in this example, giving the highest surplus to a player that is not contributing much.
However, the α-core (and thus also the A-core for any payoff matrix A) is nonempty. For instance, consider the payoff vector (5,5,3). This is feasible via the strategy profile (2,2,1), and one can show that no coalition could guarantee strictly higher payoffs for all of their members.
To address the instability issue, one could try to find a bargaining solution that always picks elements from the core. For instance, if one chooses the disagreement payoff in the core, then the NBS will always be in the core as well. While the α-core can be empty in general games [][ch. 13.2]osborne1994course, if we assume additive separability, the α-core is always nonempty. This follows as a corollary from a theorem by scarf1967core. Similar results have been shown in the literature scarf1967core, but I have not found this exact result upon a cursory search, so I am providing it here.
Since the α-core includes possible threats, which I regard as undesirable, I show the result for a somewhat more strict notion of core. For a coalition P⊆ N, define Σ^H_P⊆∏_i∈ PΣ_i as the set of Pareto optimal strategies for the players in P. That is, σ_P∈Σ_P^H if and only if, with
x:=∑_i∈ Pu_i(σ_i), for every σ'_P∈Σ_P with y:=∑_i∈ Pu_i(σ'_i) the following holds: if y_j≥ x_j for all j∈ P, then y_j=x_j for all j∈ P.
We then define A as the worst-case Pareto optimal payoffs in any coalition. That is,
A_i,j:=min_P⊆ N s.t. i∈ Pmin_σ_P∈Σ_P^Hu_i,j(σ_i)
for i,j∈ N. If we assume that players outside of a coalition are allowed to form their own arbitrary coalitions and Pareto optimal compromises, but we do not allow any threats, this is the relevant notion of core.
The A-core as defined above, and thus by <Ref> also the α-core, is nonempty in additively separable games.
Let B be a bargaining game with additively separable utility functions, as defined in <Ref>. Let A∈ℝ^n,n be defined as in <Ref>. Then the A-core C^A(B) is nonempty.
In <Ref>.
One may ask whether the same would hold for the Nash equilibrium core. Unfortunately, the next example shows that the Nash equilibrium core can be empty, even given additive separability. The intuition behind this is that sometimes, if two players cooperate, this can lead to negative externalities for a third party. However, the Nash equilibrium point does not take this possibility of cooperation between two players into account. Hence, the two players are better off ignoring any agreement that gives everyone at least their Nash equilibrium payoffs.
Consider a bargaining game with three players, N={1,2,3}, with payoffs as in <Ref>. There is a unique Nash equilibrium in which all players play action 1 and receive utilities (5,5,15). However, players 1 and 2 can also coordinate on action 2, which serves as a compromise between the two and produces 8 utility for both. Intuitively, we can imagine that 1 and 2 share some common goal that they can choose maximize instead of their own idiosyncratic goals. However, player 3 benefits from the players optimizing their idiosyncratic goals, and if 1 and 2 cooperate, player 3 loses out.
I added a third strategy for the first two players to make sure an option x∈ F(B) exists that strictly dominates the Nash equilibrium disagreement point, but this is inessential to the example. (Similarly, the fact that player 3 only has one option that is not Pareto dominated is inessential and can easily be relaxed.)
Now let A correspond to the Nash equilibrium strategies. Then
A=[ 5 0 5; 0 5 5; 0 0 5 ].
The coalition P={1,2} can guarantee its members a payoff of 8 each, so
ν^NE({1,2})={x∈ℝ^3| x_1,x_2≤ 8}.
Moreover, we have
ν^NE({3})={x∈ℝ^3| x_3≤ 15},
since A_1,3+A_2,3+A_3,3=15, and 3 cannot improve upon this payoff by changing their action.
It follows from the above that any payoff vector x∈ C^NE(B) has to satisfy x_3≥ 15 and x_1,x_2≥ 8. However, such a payoff vector (8,8,15) is not in the feasible set and thus impossible to obtain.
The only way to produce 8 utility for both 1 and 2 is for both players to play 2. But then player 3 can have at most 5<15 utility. Hence, C^NE(B)=∅.
There exists a bargaining game B with additively separable utilities such that the Nash equilibrium core C^NE(B) is empty.
See <Ref>.
Based on the above results, one possible way to define a disagreement point that leads to a stable bargaining solution and that does not involve threats would be via
d_j =max{x_j| x∈ν^A({j})}=max_σ_j∈Σ_ju_j,j(σ_j) + ∑_i∈ N∖{j}A_i,j
for any j∈ N and where A is defined as in <Ref>. That is, we let d_j correspond to the best possible payoff that j can attain given worst-case Pareto optimal responses by the other players.
It would be valuable to investigate coalitional stability and stable solution concepts in future work, including a more thorough review of the relevant literature on nontransferable utility coalitional games [e.g.][]shapley1967utility,Maschler1989,Maschler1992,hart1996,Harsanyi1963. As in the case of disagreement points, the work by diffractor2022rose may also be relevant.
§ CONCLUSION AND FUTURE WORK
In this report, I developed a game-theoretic model of ECL, making it possible to formalize many important aspects and issues with ECL. This includes agents' uncertainty about other agents in the multiverse, the problem of selecting a multiverse-wide compromise outcome, and the question of which joint beliefs to adopt over the actions of agents. There are many interesting open problems and avenues for future work:
* How to model agent's default options without ECL? The choice of a disagreement point (<Ref>) is a fundamental issue in ECL. In particular, there is the question whether threats should play a role in selecting a multiverse-wide compromise. It may be valuable to review the ROSE value diffractor2022rose or consider acausal bargaining models [e.g.][]diffractor2018cooperative,kosoy2015superrationality in future work.
* Another fundamental issue is that of coalitional stability (<Ref>), which is related to the problem that compromise between some parties can make other parties worse off, potentially preventing the formation of a grand multiverse-wide coalition. While there always exist stable payoff allocations given additive separability, it is unclear what happens if some value systems violate this assumption. Additionally, it is an open question how to choose a stable bargaining solution. Here, it may be useful to review the literature on nontransferable utility coalitional games [e.g.][]shapley1967utility,Maschler1989,Maschler1992,hart1996,Harsanyi1963, as well as diffractor2022rose.
* How to assess possible dependencies between different agents, especially in the human case where no source code is available? What is the nature of these dependencies? What is the relevant reference class of agents for superrationality in ECL? Can one rigorously justify inferences such as “if I choose the NBS, other players are likely to do so, too”?
* How can acausal bargaining models inform ECL? Can we model the process of arriving at conditional beliefs about other agents' actions as some kind of bargaining procedure? If so, what is a plausible model, and how can it inform the problems discussed above?
* I make standard common knowledge and common prior assumptions (see <Ref>), which are unrealistic in the ECL context, at least when it comes to ECL among humans. How to relax these assumptions? Assigning posterior beliefs to other players is important to assess their gains from trade. How to do this without a common prior? See Harsanyi1967,Monderer1989-pj.
* How do gains from trade diminish when agents have different models or choose different bargaining solutions? This would lead to wasted gains from trade, but it is unclear how much would get lost, and how much different value systems would be affected.[Thanks to Lukas Gloor for a comment on an earlier draft.] How robust are bargaining solutions in practice to different empirical assumptions and model parameters?
* An alternate approach to the one employed in this report would be to take joint distributions over actions as given, analyzing and classifying them based on the dependencies they imply. For example, a specific joint distribution could imply positive or negative correlations between more or less cooperative actions of players. One could then investigate which joint distributions enable ECL.[This approach was suggested to me by Philip Trammell.]
* How to deal with the infinities involved in ECL in an infinite universe, as well as the potential continuum of players and values, rather than the discrete set assumed in this report? Is there a relatively small number of discrete clusters of similar value systems, or are there as many different types as players?
* What is the distribution over values of superrational cooperators, and what are their beliefs? Can humans usefully make progress on this question, and if not, would superintelligent AI systems be able to do so?
The main purpose of this report is to contribute towards the development of a theory of ECL and to outline open technical and philosophical problems, rather than to introduce an applicable model. However, the Bayesian bargaining model from <Ref> could still be useful for preliminary simulations to investigate possible gains from trade. This might help estimate the potential value of ECL and inform prioritization decisions.
§ ACKNOWLEDGEMENTS
Part of the work on this report was carried out during a Summer Research Fellowship at the Centre for Effective Altruism (CEA). Special thanks go to Max Dalton, who was my supervisor at the CEA. I am grateful for support by the Center on Long-Term Risk, the Center on Long-Term Risk Fund, an Open Phil AI Fellowship, and an FLI PhD Fellowship. Moreover, I am indebted to Lennart Stern, Philip Trammell, Owen Cotton-Barratt, Caspar Oesterheld, Max Daniel, Sam Clarke, Daniel Kokotajlo, Lukas Gloor, Leon Lang, Abram Demski, and Stuart Armstrong for their invaluable discussions and feedback, as well as for their help with the mathematics and game theory in this report. Finally, I want to express my gratitude to commenters on an earlier post, where I requested input on this report treutlein2018request.
§ ARMSTRONG2013'S BARGAINING SOLUTION
Armstrong2013 has published a series of blog
posts on bargaining in which he develops a bargaining solution. In this appendix, I will discuss the solution and argue against using it to model ECL.
In Armstrong's solution, utility functions are normalized such that their
zero point is the disagreement point and 1 is their
ideal point, just as with the KSBS. But instead of then taking the
point on the Pareto frontier where everyone has the same utility given
this normalization (as the KSBS would), Armstrong suggests maximizing the
sum of the thus normalized utility functions.
Armstrong discusses two ideas to support his proposed solution. The
first one is the normalization according to the KSBS, which is supposed
to give credit to the fact that if a player can benefit another player
a lot, the other's ideal point will also be higher, and their utility
function will thus be scaled down in the normalization in comparison
to the utility function of the player. The second idea is that of maximizing a sum
instead of maximizing a product or just taking some point with a fixed
ratio of utilities, which is supposed to give agents higher ex ante
expectations of utility.
I think Armstrong's solution is unsuitable for my
setting. First, his solution does not solve the issue with fairness
in a multilateral setting that I discuss in <Ref>. Second, as argued in <Ref>,
solutions should guarantee positive gains from trade for all participants.
Maximizing a sum of normalized utility functions does not generally
guarantee that, as I have shown in the case of variance normalization.
As has been pointed out in the comments to Armstrong2013, normalizing according
to disagreement and ideal point may also not guarantee positive gains
from trade.
Third, the fact that a bargaining solution maximizes the sum of utilities is not a reason to choose it over other Pareto optimal solutions. Even the KSBS or NBS will maximize some
weighted sum of utility functions, since every point on the Pareto frontier corresponds to the maximizer of some weighted sum of
coordinates. I currently don't see a reason why
choosing the weighting based on knowledge of the entire Pareto frontier is
at an (a priori) disadvantage over weightings which are chosen based
on other information.
Lastly, note that the NBS maximizing a product does not mean that an agent's
uncertainty cannot be taken into account well by the NBS. As outlined in <Ref>, the expectations
of agents over different possible games can be incorporated
into feasible sets and Pareto frontiers, so the NBS need not only be applied to games
with certainty. Hence, when it comes to expectations over different
games, the NBS chooses a point that is Pareto optimal as judged by agents' beliefs—as
opposed to, for instance, choosing a point which leads to certain gains from
trade but to a lower expectation
across games.
§ HARSANYI1972'S AXIOMATIZATION OF THE NASH BARGAINING SOLUTION IN INCOMPLETE INFORMATION GAMES
In this section, I outline Harsanyi1972's axiomatization of the NBS in two-player incomplete information games. It is not directly applicable to my setup in <Ref>, and I did not find a more relevant result in the literature. I believe one should be able to translate the analysis to my setup, but I will not investigate this here.
Harsanyi1972's axiomatization
includes versions of the axioms from <Ref>, namely Individual rationality, Pareto optimality,
Invariance to affine transformations, a version of Anonymity for both
players and all types, and the Independence of irrelevant alternatives
axiom. In addition, there are two new axioms which specifically address the types.
To define these new axioms as in Harsanyi1972, we first have to specify a slightly different version of a Bayesian
bargaining game.
A two-player Bayesian bargaining game is a tuple G=(T_1,T_2,F,p)
where
* T_1={1,…,m} and T_2={m+1,…,l} are the two sets of types
for either player;
* F⊆ℝ^l is the feasible set, which specifies the
ex interim expected utilities for each type;
* p is a joint distribution over types for both players.
In this game, there are only two players, 1 and 2, and each
player has their own set of types. The feasible set F is just what
would have been the set F(G) in my case, only that the payoffs
depend on both types and players instead of just depending on types. If x∈ F, then there
exists a mixed strategy profile such that x_i specifies the utility
that type i would expect given this mixed strategy profile and
their beliefs about which types the other player could have.
The set F is assumed to be chosen such that the minimal element
in F is the disagreement point. That is, there exists d∈ F
such that d_i≤ x_i for all x∈ F,i∈ T_1∪ T_2.
Moreover, it is assumed that there are positive gains from trade to
be had for everyone—i.e., there is an element x∈ F such that
x_i>d_i for all i∈ T_1∪ T_2.
To define one of the new axioms, we need to define the operation of “splitting a type”.
We can define splitting a type for feasible payoff vectors as well as for games:
* Let j∈{1,…,m}. j is the type of player 1 we want
to split (the definition is analogous for player 2). We have two
new sets of types T'_1={1,…,m+1} and T'_2={m+2,…,l+1}.
Define F' such that it contains all x' such that there is x∈ F
such that x'_i=x_i for i∈{1,…,j}, x'_{j+1}=x_j,
and x'_i=x_{i-1} for i∈{j+2,…,l+1}. This is called
deriving x' from x by splitting type j of player 1 into
two types.
* Let 0<ν<1. Let t∈ T'_2. We then define p' such that
p'(k,t)=p(k,t-1) for all k=1,…,j-1, p'(j,t)=ν p(j,t-1),
p'(j+1,t)=(1-ν)p(j,t-1), and p'(k,t)=p(k-1,t-1) for k∈{j+2,…,m+1}.
The new game G'=(T'_1,T'_2,F',p') with F' as feasible set,
types T'_1,T'_2, and p' as distribution over types is derived
from splitting type j of player 1 into two types with probabilities
ν and 1-ν.
With these definitions, the two new axioms are as follows:
Splitting types. If G'=(T'_1,T'_2,F',p') is derived from G
by splitting type j of player 1 into two types with probabilites
ν and ν-1, then x'=μ(G') is derived from x=μ(G)
by splitting type j of player 1 into two types.
Mixing basic probability matrices. If G=(T_1,T_2,F,p) and G'=(T_1,T_2,F,p')
have the same solution vector, then for every G”=(T_1,T_2,F,p”)
with p”=ν p+(1-ν)p' where ν∈[0,1], it is μ(G”)=μ(G')=μ(G).
Given these two additional axioms, Harsanyi1972 show that the solution
function must be
μ(G)=argmax_x∈ F∏_t∈ T_1∪ T_2(x_t-d_t)^p(t).
That is, an asymmetric version of the NBS where the weights are the
prior probabilities of the types.
§ PROOF OF THEOREM <REF>
Let α,β be two strategy profiles such that for every t∈ T and β'_t∈𝒜_t, we have EU_t(α)≥ EU_t(β_-t,β'_t). Then α is a dependency equilibrium.
Let q:=δ_α be the Dirac measure, defined via δ_α(A)=1 if and only if α∈ A. As in <Ref>, we now want to define a joint distribution q_r for each r∈ℕ that converges weakly to q. To that end, define s as follows. For every t∈ T, let μ_t be some probability measure on 𝒜_t with full support such that μ_t({α'_t})=0 for any α'_t∈𝒜_t (assuming 𝒜_t contains more than one point, and thus by convexity a continuum of points). For any set A⊆𝒜, define
s(A):=m^-1∑_t∈ Tμ_t({α'_t| (α'_t,β_-t)∈ A}).
To show that this is a probability measure, note that s(∅)=0, s is always non-negative, and
s(𝒜)
=m^-1∑_t∈ Tμ_t({α'_t| (α'_t,β_-t)∈𝒜})
=m^-1∑_t∈ Tμ_t(𝒜_t)
=1.
Moreover, for any countable collection of pairwise disjoint sets A^1,A^2,…, we have
s(⋃_l∈ℕA^l)
=m^-1∑_t∈ Tμ_t({α'_t| (β_-t,α'_t)∈⋃_l∈ℕA^l})
=
m^-1∑_t∈ T∑_l∈ℕμ_t({α'_t| (β_-t,α'_t)∈ A^l})
=∑_l∈ℕm^-1∑_t∈ Tμ_t({α'_t| (β_-t,α'_t)∈ A^l})
=∑_l∈ℕs(A^l).
This shows that s is a probability measure.
Moreover, for any open, nonempty A_t⊆𝒜_t, we have
s(A_t)
=m^-1∑_t'∈ Tμ_t'({α'_t'| (α'_t',β_-t')∈𝒜_-t× A_t})
≥ m^-1μ_t(A_t)>0,
so this measure satisfies the full support condition that is required to define q_r.
Now we define q_r:=(r-1)/r· q+(r-1)/r^2·δ_β+1/r^2· s. Since this is a convex combination of probability measures, it is still a probability measure. It remains to show that this measure satisfies our requirements. First, clearly, this weakly converges to q as r→∞. Second, since 1/r^2>0 for all r∈ℕ, we have q_r(A_t)≥ 1/r^2· s(A_t)>0 for any t∈ T and nonempty open set A_t⊆𝒜_t.
Now we turn to the condition on expected utilities. Let t∈ T and A_t⊆𝒜_t with q(A_t)>0 arbitrary but fixed in the following. Then it follows that α_t∈ A_t, and thus
q({α}| A_t)=q({α})/q(A_t)=1. Hence, for measurable A⊆𝒜, it follows
lim_r→∞q_r(A| A_t)
=lim_r→∞((r-1)/r·δ_α(A)+(r-1)/r^2·δ_β(A∩(𝒜_-t× A_t))+1/r^2· s({α'∈ A|α'_t∈ A_t}))/((r-1)/r+(r-1)/r^2·δ_β_t(A_t)+1/r^2· s(A_t))
=δ_α(A)=q(A| A_t)
and thus
lim_r→∞EU_t(q_r;A_t)=EU_t(q;A_t)=EU_t(α).
Next, let B_t⊆𝒜_t an arbitrary nonempty open set, representing any other set of actions type t could condition on. We have to show that lim_r→∞EU_t(q_r;A_t)≥lim_r→∞EU_t(q_r;B_t). To that end, note that if q(B_t)>0, it follows from the above that
lim_r→∞EU_t(q_r;B_t)=EU_t(α)=lim_r→∞EU_t(q_r;A_t),
and we are done.
Now consider the case q(B_t)=0. First, assume β_t∈ B_t. In this case, for measurable A⊆𝒜, we have
lim_r→∞q_r(A| B_t)
=lim_r→∞((r-1)/r·δ_α(A∩(𝒜_-t× B_t))+(r-1)/r^2·δ_β(A∩(𝒜_-t× B_t))+1/r^2· s(A∩(𝒜_-t× B_t)))/((r-1)/r· q(B_t)+(r-1)/r^2·δ_β_t(B_t)+1/r^2· s(B_t))
=lim_r→∞((r-1)/r^2·δ_β(A∩(𝒜_-t× B_t))+1/r^2· s(A∩(𝒜_-t× B_t)))/((r-1)/r^2·δ_β_t(B_t)+1/r^2· s(B_t))
=lim_r→∞((r-1)/r·δ_β(A∩(𝒜_-t× B_t))+1/r· s(A∩(𝒜_-t× B_t)))/((r-1)/r·δ_β_t(B_t)+1/r· s(B_t))
=δ_β(A∩(𝒜_-t× B_t))/δ_β_t(B_t)
=δ_β(A).
Hence, it follows that
lim_r→∞EU_t(q_r;B_t)
=lim_r→∞𝔼_α'∼ q_r[EU_t(α')|α'_t∈ B_t]
=lim_r→∞𝔼_α'∼δ_β[EU_t(α')]
=EU_t(β).
Using the assumption on α and β, we can conclude that
lim_r→∞EU_t(q_r;B_t)=EU_t(β)≤ EU_t(α)=lim_r→∞EU_t(q_r;A_t),
and we are done.
Second, consider the case β_t∉ B_t. Then for any r∈ℕ, we have
q_r(A| B_t)
=((r-1)/r·δ_α(A∩(𝒜_-t× B_t))+(r-1)/r^2·δ_β(A∩(𝒜_-t× B_t))+1/r^2· s(A∩(𝒜_-t× B_t)))/((r-1)/r· q(B_t)+(r-1)/r^2·δ_β_t(B_t)+1/r^2· s(B_t))
=(1/r^2· s(A∩(𝒜_-t× B_t)))/(1/r^2· s(B_t))
=s(A∩(𝒜_-t× B_t))/s(B_t)
=s(A| B_t).
It follows that EU_t(q_r;B_t)=EU_t(s;B_t).
Now define A_t^β_-t:={α'_t|α'∈ A, α'_-t=β_-t}. Then
s(A∩ (𝒜_-t× B_t))
=m^-1∑_t'∈ Tμ_t'({α'_t'| (α'_t',β_-t')∈ A∩ (𝒜_-t× B_t)})
=m^-1μ_t({α'_t| (α'_t,β_-t)∈ A, α'_t∈ B_t})
=m^-1μ_t(B_t∩ A_t^β_-t),
where the terms for t'≠ t vanish because β_t∉ B_t.
It follows that s(A| B_t)=0 if β_-t∉ A_-t, so
for a random variable α'∼ s, we have
s(α'_-t=β_-t| B_t)=1.
It follows for any r∈ℕ that
EU_t(q_r;B_t)=EU_t(s;B_t)=𝔼_α'∼ s[EU_t(α')|α'_t∈ B_t]=𝔼_α'∼ s[EU_t(β_-t,α'_t)|α'_t∈ B_t]
≤𝔼_α'∼ s[EU_t(α)|α'_t∈ B_t]=EU_t(α),
where the first equality holds by (<ref>) and the inequality uses the assumption on α and β.
Hence, also
lim_r→∞EU_t(q_r;B_t)≤ EU_t(α)=lim_r→∞EU_t(q_r;A_t),
which concludes the proof.
§ PROOF OF THEOREM <REF>
I begin by introducing some additional notation, in order to be able to state the result used to prove <Ref>.
The following definitions and conditions are adapted from kannai1992core. I assume a set N of players is given.
A function ν𝒫(N)→𝒫(ℝ^N) is called characteristic function if it satisfies the following criteria:
(i) ν(∅)=∅;
(ii) for all S⊆ N, S≠∅, ν(S) is a nonempty closed subset of ℝ^N;
(iii) if x∈ν(S) and y_i≤ x_i for all i∈ S, then y∈ν(S);
(iv) there exists a closed set F⊆ℝ^N such that
ν(N)={x∈ℝ^N|∃ y∈ F∀ i∈ N x_i≤ y_i};
(v)The set
F∩{x∈ℝ^N|∀ i∈ N x_i≥max{y_i| y∈ν({i})}}
is nonempty and compact.
Now let B be a bargaining game and A∈ℝ^n,n such that A_i,j∈{x_i,j| x_i∈ F_i(B)} for all i,j∈ N.
Then the function ν^A of A-dominated vectors as introduced in <Ref>, defined via
ν^A(P):={x∈ℝ^n|∃ y∈∏_i∈ PF_i∀ i∈ P x_i ≤∑_j∈ Py_j,i+∑_j'∈ N∖ PA_j',i},
satisfies these criteria.
ν^A is a characteristic function.
Left as an exercise. It follows from the assumption that the F_i(B) are compact, convex sets, together with the definition of ν^A. For (iv) and (v), we can take F=F(B).
Next, we need two technical definitions to be able to state the result.
Let T⊆𝒫(N) be a collection of coalitions. Then T is said to be a balanced collection if there exist nonnegative weights (δ_S)_S∈ T such that
∑_S∈ T s.t. i∈ Sδ_S=1 for every i∈ N.
This means that there exist weights for each set in T such that, for each player i∈ N, the weights of all the sets containing that player add up to 1.
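For concreteness, here is a small Python sketch (our own example, not from the text): for N={1,2,3}, the collection of all two-player coalitions is balanced with weights δ_S = 1/2.

```python
from itertools import combinations

# Minimal sketch: check balancedness of a collection T, i.e. that for every
# player i the weights of the coalitions containing i sum to exactly 1.
N = {1, 2, 3}
T = [frozenset(S) for S in combinations(N, 2)]   # {1,2}, {1,3}, {2,3}
weights = {S: 0.5 for S in T}

def is_balanced(N, weights):
    return all(
        abs(sum(w for S, w in weights.items() if i in S) - 1.0) < 1e-9
        for i in N
    )

print(is_balanced(N, weights))  # True
```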
A characteristic function ν is called balanced if for every balanced collection T, we have
⋂_S∈ Tν(S)⊆ν(N).
This means that if a payoff vector x can be guaranteed for their members by every single coalition in a balanced set of coalitions, then it must also be achievable by the grand coalition. This is in general not true, but we will show that it is true in the case of an additively separable bargaining game.
Now we can state the main result used to prove <Ref>. Recall the definition of the core as the set of vectors x∈ν(N) such that for all coalitions P⊆ N and y∈ν(P), there exists at least one player i∈ P such that x_i≥ y_i. Note that every characteristic function ν defines a core C^ν.
Every balanced characteristic function has a nonempty core.
See kannai1992core.
Now we can prove <Ref>.
Consider a bargaining game B with additively separable utility functions. Recall
A_i,j:=min_P⊆ N s.t. i∈ Pmin_σ_P∈Σ_P^Hu_i,j(σ_i)
for i,j∈ N, where Σ^H_P is the set of Pareto optimal strategies for the players in P. By <Ref>, ν:=ν^A is a characteristic function. It remains to show that ν is balanced. Then it follows from <Ref> that C^A(B)=C^ν is nonempty.
To that end, assume T is a balanced collection with weights (δ_S)_S∈ T, and assume x∈ν(S) for all S∈ T.
Then by definition, for each S∈ T there exist vectors x_i^S∈ F_i(B), i∈ S, corresponding to the utilities produced by player i in coalition S, such that
x_j≤∑_i∈ S x_i,j^S + ∑_i ∈ N∖ SA_i,j
for all j∈ S. Note that w.l.o.g., we can assume that for some σ_S∈Σ_S^H, we have x_i^S=u_i(σ_i) for all i∈ S. That is, we can choose vectors x_i^S that result in Pareto optimal payoffs for the members of S.
Then, by definition of A, we have
x_i,j^S≥ A_i,j
for any player i∈ S and j∈ N.
Now we want to find a matrix of vectors x̂∈ℝ^n,n such that
x̂_i∈ F_i(B) for each i∈ N, and such that
∑_i∈ Nx̂_i,j≥ x_j
for all j∈ N. If we can find such a matrix, then it follows that x∈ F(B) and thus x∈ν(N), and we are done.
To define this matrix, let i∈ N arbitrary and set
x̂_i:=∑_S∈ T s.t. i∈ Sδ_Sx_i^S.
Note that this is a convex combination of vectors x_i^S∈ F_i(B), and thus also x̂_i∈ F_i(B) since the feasible sets are convex. It follows that
∑_ix̂_i,j=∑_i∈ N∑_S∈ T s.t. i∈ Sδ_Sx_i,j^S
=∑_S∈ T∑_i∈ Sδ_Sx_i,j^S
=∑_S∈ T(1_S(j)
∑_i∈ Sδ_Sx_i,j^S
+(1-1_S(j))∑_i∈ Sδ_Sx_i,j^S)
<ref>≥∑_S∈ T(1_S(j)δ_S(x_j- ∑_i∈ N∖ SA_i,j)
+(1-1_S(j))∑_i∈ Sδ_SA_i,j)
=
x_j- ∑_S∈ Tδ_S(1_S(j)∑_i∈ NA_i,j
-∑_i∈ SA_i,j)
=
x_j- ∑_i∈ NA_i,j+∑_S∈ Tδ_S∑_i∈ SA_i,j
=
x_j-∑_i∈ NA_i,j+∑_i∈ N∑_S∈ T s.t. i∈ Sδ_SA_i,j
=
x_j-∑_i∈ NA_i,j+∑_i∈ NA_i,j
=x_j.
This shows that x∈ν(N) and thus concludes the proof.
|
http://arxiv.org/abs/2307.04079v1 | 20230709011140 | Projective Rectangles | [
"Rigoberto Florez",
"Thomas Zaslavsky"
] | math.CO | [
"math.CO",
"Primary 51E26, Secondary 05B15, 05B35, 05C22, 51A30, 51E20"
] |
Dept. of Mathematical Sciences, The Citadel, Charleston, South Carolina 29409
[email protected]
Dept. of Mathematical Sciences, Binghamton University, Binghamton, New York 13902-6000
[email protected]
A projective rectangle is like a projective plane that has different lengths in two directions. We develop the basic theory of projective rectangles including incidence properties, projective subplanes, configuration counts, a partial Desargues's theorem, a construction from projective planes, and alternative formulations. In sequels we study harmonic conjugation and the graphs of lines and subplanes.
2010 Mathematics Subject Classification: Primary 51E26; Secondary 05B15, 05B35, 05C22, 51A30, 51E20.
Projective Rectangles
Rigoberto Flórez and Thomas Zaslavsky
August 12, 2023
=====================
§ INTRODUCTION
A projective rectangle is like a projective plane, but narrower than it is tall. More precisely, it is like the set of points on a certain kind of family of lines in a projective plane, with their induced lines. Very precisely, it is an axiomatic incidence structure based on adapting axioms of projective geometry.
Projective rectangles are found in all known harmonic matroids, such as full algebraic matroids. Harmonic matroids are matroids within which there is harmonic conjugation <cit.>; their definition was inspired by Lindström's article <cit.> about abstract harmonic conjugation. Harmonic conjugation applied to complete lift matroids of group expansions <cit.> of a triangle (for instance, L_2^k, Example <ref>) led us to structures that looked like vertical strips in projective planes—whence the name “projective rectangle” and the impulse to find a general theory of this idea in terms of incidence geometry. Projective rectangles themselves are almost examples of harmonic matroids, seemingly falling short only in special lines, as we prove in the sequel <cit.>.
An indication of what we accomplish in this article: First, the axioms (Section <ref>) and basic consequences for incidence geometry (Section <ref>) and counting (Section <ref>). Especially, we see that a projective rectangle, if it is not a projective plane, contains a multitude of maximal projective planes; we call them its “planes”. Section <ref> develops partial Desarguesian properties of projective rectangles, which satisfy limited versions of the two halves of Desargues's Theorem. In Section <ref> we show that the construction based on a subplane and a special point, alluded to above, actually works to produce projective rectangles in planes that are Pappian, i.e., coordinatized by a field; we do not know how far that subplane construction generalizes. The following section treats the narrowest projective rectangles, which are the simplest and best understood. Next are two sections that give alternative viewpoints: in Section <ref> we see that a projective rectangle is essentially a Paschian transversal design and thus is equivalent to a special kind of orthogonal array, and in Section <ref> we take the approach of projective duality by interchanging points and lines, which may suggest new properties but which we have not studied deeply.
We have only an elementary understanding of projective rectangles in general, as is shown by the list of significant open problems in Section <ref>.
In sequels we treat adjacency graphs and harmonic conjugation.
One concerns the graphs of adjacency of lines and of planes <cit.>. Notably, in projective rectangles that are not projective planes the graph of planes, where adjacency means having an ordinary line in common, has striking internal structure that presents a tantalizing vision of higher dimensionality.
The other sequel <cit.> explores abstract harmonic conjugation as a theme linking harmonic matroids and projective rectangles. In one direction, a projective rectangle is almost a harmonic matroid. In the other direction, a harmonic matroid contains a projective rectangle if it contains a matroid of a finite-field expansion of a triangle, in particular if it contains a Reid cycle matroid.
Our personal interest is mainly in finite systems, but many results apply to infinite projective rectangles. For instance, Section <ref> encompasses infinite systems, while Section <ref> requires finiteness. Our viewpoint is influenced by matroid theory but is largely that of incidence geometry; matroid theory is not needed to read this paper.
We wish to acknowledge the inspiration of the elegant and deep short papers <cit.> of Bernt Lindström. Lindström's ideas, as further developed by the first author in his doctoral dissertation and <cit.>, led to this study of projective rectangles.
§ PROJECTIVE RECTANGLES
An incidence structure is a triple (,ℒ,ℐ) of sets with ℐ⊆×ℒ. The elements of
are points, the elements of ℒ are lines.
A point p and a line l are incident if (p,l) ∈ℐ. A set P of points is said to be
collinear if all points in P are in the same line. We say that two
distinct lines intersect in a point if they are incident with the same point.
A projective rectangle is an incidence structure (,ℒ,ℐ) that satisfies the following axioms:
* Every two distinct points are incident with exactly one line.
* There exist four points with no three of them collinear.
* Every line is incident with at least three distinct points.
* There is a special point D.
A line incident with D is called special. A line that is not incident with D is called ordinary, and a point that is not D is called ordinary.
* Each special line intersects every other line in exactly one point.
* If two ordinary lines l_1 and l_2 intersect
in a point, then every two lines that intersect both l_1 and l_2 in four
distinct points intersect in a point.
A complete quadrilateral is an incidence structure that consists of four lines, no three concurrent, and their six points of intersection. A nearly complete quadrilateral is like a complete quadrilateral but with only five of the intersection points; the sixth intersection point may or may not exist. Axiom (A<ref>) states that almost every nearly complete quadrilateral in a projective rectangle is complete. This is a partial Pasch axiom (e.g., see <cit.>), not the full Pasch axiom because it has an exception when either of the first two lines is special; then the remaining two lines may or may not be concurrent. This exception is what admits projective rectangles that are not projective planes. Section <ref> has more discussion of the significance of Axiom (A<ref>).
Notation: We write pq for the unique line that contains two points p and q. After we establish the existence of projective planes in , we use the notation abc… to mean the unique line (if abc… are collinear) or plane (if they are coplanar but not collinear) that contains the points abc….
Projective planes are familiar examples of projective rectangles.
A projective plane is called a trivial projective rectangle.
In particular the Fano plane F_7 is the smallest projective rectangle (see Theorem <ref> Part (<ref>)).
The non-Fano configuration is not a projective rectangle; it fails Axiom (A<ref>).
The matroid L_2^k is another example of a projective rectangle
(see Figure <ref>). It has m=3 special lines. Let A:= { a_g | g ∈_2^k }∪{D }, B:= { b_g | g ∈_2^k }∪{D } and
C:= { c_g | g ∈_2^k }∪{D }, where we think of _2^k as a multiplicative group, writing gh for the group operation. Let
L_2^k be the simple matroid of rank 3 defined on the ground set
E:= A∪ B∪ C by its rank-2 flats. The non-trivial rank-2 flats are A, B, C, which are the special lines,
and the sets {a_g, b_g h, c_h } with g and h in _2^k, which are the ordinary lines.
We note that L_2^k is the complete lift matroid of the group expansion of a triangle, i.e., L_0(_2^k) in the language of <cit.>.
We say more about projective rectangles with m=3 and matroids similar to L_2^k in Section <ref>.
§ PROPERTIES OF PROJECTIVE RECTANGLES
In this section we study essential properties of projective rectangles. We begin with basic facts; then we prove that the projective rectangle contains projective planes and we conclude with a section of counting formulas for later use.
§.§ Fundamental properties
If a projective rectangle with exactly m special lines has one of them with n points, then we say that the
order of is (m,n). We do not assume m or n is finite unless we so state.
In Theorem <ref>
we prove m≤ n; we also prove that every special line has the same number of
points, that every ordinary line has the same number of points, and many other elementary facts about points and lines.
(If we define ν := n-1 and μ := m-1, then when the projective rectangle is a projective plane, ν=μ= the order of the plane as customarily defined; that is, one less than the number of points in a line.)
The following result states basic properties of a projective rectangle.
If is a projective rectangle of order (m,n), then the following hold in :
* The point set of ∖ D is partitioned by all special lines deleting D.
* There are at least three special lines
and four ordinary lines. Moreover, there are at least seven points.
* If l is a line and p is a point not in l, then the number of distinct
lines incident with p intersecting l equals the number of points on l.
*
Through each ordinary point there passes exactly one special line.
*
All ordinary lines have the same number of points. The number of points in an ordinary line is equal to the number of special lines, that is, m.
* All special lines have the same number of points, i.e., n points, and the same number of ordinary points, i.e., n-1.
* There are exactly m(n-1) ordinary points.
* The number of lines incident with an
ordinary point is equal to the number of points in a special line, that is, n.
The number of ordinary lines that contain each ordinary point is n-1.
* The number of points in a special line
is at least the number of points in an ordinary line; that is, n ≥ m.
* There are exactly (n-1)^2 ordinary lines.
* For a given point p in an ordinary line l,
there are n-2 ordinary lines intersecting l at p.
Proof of Part (<ref>).
By Axiom (A<ref>), every point p ∈∖ D belongs to the unique special line pD.
Proof of Part (<ref>). From Axiom (A<ref>) we know that in there are four points, no three of them collinear. If one is D, each other one with D generates a special line, all of which are distinct by noncollinearity. If none of them is D, the points generate six distinct lines, of which at most two can contain D because no three of the four points are collinear.
Thus, the four remaining lines are ordinary lines. Since in one of the ordinary lines there are at least three points, these points form with D three special lines.
We have proved that in there are at least three special lines and three ordinary lines.
By Axiom (A<ref>), each special line contains at least two ordinary points, so there are at least seven points.
Now consider two special lines s, s', two ordinary points p_1,p_2 on s, and two ordinary points p_1',p_2' on s'. The lines p_ip'_j are four distinct ordinary lines.
We prove Part (<ref>).
From Part (<ref>) we can deduce that in there are a non-incident ordinary point and ordinary line, also that there are a non-incident ordinary point and special line.
Let q ∈ l and p∉ l. From (A<ref>) there is exactly one
line incident with p that intersects l at q, and all such lines are distinct.
We prove Parts (<ref>) and (<ref>).
Given an arbitrary ordinary line l, we know by (A<ref>) that each point in l together with D determines a unique special line.
Every special line is generated in this way, by (A<ref>).
Thus, there is a bijection between the special lines and the points in l. This implies the number of points in any ordinary line equals the number of special lines.
We prove Parts (<ref>) and (<ref>).
We suppose that l_1 and l_2 are special lines in with n_1 and n_2
points, respectively. Let p be a point non-incident with
either of those lines. Part (<ref>) implies
that there are n_1 distinct lines intersecting l_1 that are incident with p.
Those n_1 lines also intersect l_2. Indeed, one of those lines is special and
the remaining (n_1-1) lines intersect l_2 because they are ordinary.
Therefore, n_1 ≤ n_2. Similarly, n_2 ≤ n_1. This proves that all special lines have the same number of points. Deducting 1 for the special point D gives the number of ordinary points on a special line.
Proof of Part (<ref>). The number of special lines is m, Part (<ref>) says the number of ordinary points in each special line equals n-1 and Part (<ref>) says the special lines partition the ordinary points.
Proof of Part (<ref>). We suppose that p is an ordinary
point with exactly k incident lines. Let l be a special line with n points
and p∉l. From Part (<ref>) we know that there
are exactly n distinct lines intersecting l that are incident with p.
This implies that k≥ n. We want to prove that k = n. Suppose by contradiction
that there is another line l_1 incident with p and not intersecting l.
It is clear that l_1 must be an ordinary line. That is a contradiction,
because an ordinary line always intersects every special line.
By Part (<ref>) every special line has n-1 ordinary points, and by definition there are m special lines.
Proof of Part (<ref>). Fix two distinct special lines. Two ordinary points, one on each of them, determine a unique ordinary line. Since each special line has n points, one of which is D, these pairs give rise to (n-1)^2 distinct ordinary lines, and these are all the ordinary lines that intersect both special lines. Since every ordinary line intersects every special line, we conclude that there are no more ordinary lines in .
Proof of Part (<ref>). Since p is a point in an ordinary line l,
from Part (<ref>) there are n
lines incident with p. Only one of those n lines is special; the other n-1 are not.
This implies that there are n-2 ordinary lines intersecting l at p.
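As an illustration of these counts (our own sketch, not part of the paper), the following Python snippet evaluates the formulas of the theorem for the two smallest cases mentioned later in the text: the Fano plane, of order (3,3), and the narrow rectangle L_2^2, of order (3,5).

```python
# Minimal sketch: the elementary counts above for a projective rectangle of
# order (m, n), evaluated for (m, n) = (3, 3) (the Fano plane) and
# (m, n) = (3, 5) (the rectangle L_{2^2} of the earlier example).
def basic_counts(m, n):
    return {
        "points per ordinary line": m,
        "points per special line": n,
        "ordinary points": m * (n - 1),
        "ordinary lines": (n - 1) ** 2,
        "lines through an ordinary point": n,
        "ordinary lines through an ordinary point": n - 1,
    }

for m, n in [(3, 3), (3, 5)]:
    print((m, n), basic_counts(m, n))
```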
§.§ Projective subplanes
We show that a projective rectangle is a combination of projective planes, in the strong sense that every two intersecting ordinary lines are lines of a substructure that is a projective plane. Before our results, though, we have to clarify the notion of substructure of an incidence structure (,,).
An incidence substructure of (,,) is an incidence structure (',',') in which ' ⊆, ' ⊆, and ' = |'×', i.e., the incidence relation is the same as in the superstructure but restricted to the elements of the substructure. In particular, if (',',') is a projective plane, we call it a subplane of (,,).
In a projective rectangle a subplane may contain an ordinary line and all its points; we call that kind full. A full subplane necessarily has order m-1. A subplane need not be full; it also need not be a maximal subplane, for instance if it is a proper subplane of a full subplane. In fact, that is the only way a subplane can fail to be maximal, as we will see in Theorem <ref>.
The special point D is very special, as are the special lines.
In a projective rectangle , the special point D is a point of every full subplane. Also, for every special line s and every full subplane π, s∩π is a line of π.
A full subplane π contains at least two lines, l and l', which intersect at a point p ∈π, and at least one is ordinary, say l.
If l' is ordinary, then every special line s intersects both l and l' at different points, unless s is the special line s_p on p. These two points of s determine a line of π, which is the intersection of s with π. Thus, for every special line except possibly s_p, s ∩π is a line of π.
If l' is special, or rather if l'=s'∩π for some special line s', then there is at least one point p' on l' that is neither p nor D. Let q be a point in l ∖ p; then π has a line m determined by p' and q, which is ordinary since it contains not only p ∈ s_p but also q ∉ s_p. Then we can replace l' by m and have the case of two ordinary lines, so we may as well assume l' is ordinary.
Let s_1 and s_2 be two special lines that are not s_p. Their intersection is in π, but their intersection is D. Therefore, D ∈π.
Let p_1 be the intersection of l with s_1 and let p_2 be the intersection of l' with s_2. Since p_1 ∉ l' and p_2 ∉ l, the line m of π determined by p_1 and p_2 does not contain p. Since the points p_1,p_2 are not D and are not in the same special line, m is ordinary, hence it is contained in π. Therefore, m intersects s_p in a point p_12, which cannot be p, so p and p_12 determine a line of π, which must be s_p∩π. That is, s_p∩π is a line of π.
Now we present the fundamental result about subplanes.
Let be a projective rectangle. If two ordinary lines in intersect in a point, then both lines are lines of a unique full projective plane in .
First we state the construction that gives the projective plane.
Let l_0 and l_1 be ordinary lines in with exactly one point q in common. (See Figure <ref>.)
Let a_0s= l_0∩ s and a_1s= l_1∩ s, where s ranges over the set of special lines in , and pick three special lines to be called x, y, and z such that q ∈ x. Thus, q=a_0x=a_1x.
(We know there are three special lines by Theorem <ref> Part (<ref>).)
Let b_1s= n_1∩ s, where n_1 is the ordinary line that passes through a_0y and a_1z.
Suppose that s and t denote two special lines. We denote by l_st the ordinary line passing
through a_0s and a_1t with s,t ≠ x, and we denote by n_st the
ordinary line passing through a_0s and b_1t with s,t ≠ y. Let
L={l_st: s,t special lines with s,t ≠ x and s ≠ t }
and
N={n_st: s,t special lines with s,t ≠ y and s ≠ t }.
Note that n_1 = l_yz∈ L and l_1 = n_xz∈ N.
We set :=(_,ℒ_,ℐ_), where ℐ_ is
the incidence relation defined in and
_ := (⋃_l∈ N l) ∪ (⋃_l∈ L l) ∪ l_0 ∪{ D },
_1 := { s∩_ : s a special line },
_2 := L ∪ N ∪{ l_0},
_ := _1 ∪_2.
We begin with the incidence structure given by Construction <ref>. With the notation there, we prove that is a projective plane.
First of all, we note that one of the defining properties of a projective plane, that there are four points in _ with no three of them collinear, is satisfied by a_0y, a_1z, q, and D.
We next prove that given two lines in , they intersect. Suppose that the two given lines
are in L (they are ordinary). If they intersect in a point in l_0 or in a point in l_1,
there is nothing to prove. Suppose that neither of those two cases holds. So, they are two ordinary lines
that intersect l_0 and l_1 in four different points. Therefore, by Axiom (A<ref>) the two given lines intersect.
By a similar argument we conclude that if the two given lines are in N, then they intersect. It is clear
that any two lines in _1 intersect in D and that a line in _2 intersects every line in _1.
Suppose the two given lines are λ and η with λ∈ L and η∈ N. If a_0y∈λ and q∈η, then λ and η intersect both l_0 and n_1 in four distinct points. Since l_0 and n_1 intersect in a_0y, by (A<ref>) we conclude that λ and η intersect.
Now suppose that a_0y∉λ.
Since λ intersects both l_0 and l_1 in distinct points, and n_1 intersects l_0 and l_1 in distinct points, by (A<ref>) we know that λ intersects n_1.
Then λ intersects l_0 and n_1 in distinct points (because n_1 intersects l_0 at a_0y∉λ).
The fact that λ and η both intersect l_0 and n_1 in distinct points, with (A<ref>), implies that λ and η intersect in a point.
Supposing q∉η, the proof is similar. Since λ meets l_0 at a_0y∉ l_1, and q = l_0 ∩ l_1 ∉η, each of λ and η intersects l_0 and l_1 in distinct points; thus, λ and η intersect in a point.
This completes the proof that any two lines in _ intersect.
We now prove that given two points p_0, p_1 ∈_, they are in a line in . (If they are in one line, they cannot be in two, because the lines of are ordinary lines or restrictions of special lines of , and every line in is determined by two of its points.)
This proof requires cases depending on the locations of the two points. The proofs (if not trivial) depend on repeated application of Axiom (A<ref>).
For economy of notation we employ a shorthand: p_34 = A6(l_1,l_2;l_3,l_4| p_12;p_13,p_23,p_14,p_24) means that each pair {l_i,l_j} intersects at p_ij for ij = 12, 13, 14, 23, 24. Axiom (A<ref>) then implies that l_3 and l_4 intersect at a point p_34, provided that l_1 and l_2 are ordinary. In this proof all four lines are always ordinary.
Case 1. If both points are in a special line s, the line in is s∩_∈_1. This includes the case in which one of those points is D. Henceforth we assume the points are not in the same special line.
Case 2. If both points are in l_0 or l_1, there is nothing to prove.
Case 3. Suppose both points are not in x ∪ l_0 ∪ l_1. Then p_0 is in a line l_st = a_0sa_1t for some two special lines s and t, not equal, and p_1 is in a line l_uv = a_0ua_1v for some two special lines u and v, not equal (but s,t may not be distinct from u,v). Form the point p_3 = A6(l_0,l_1;l_st,l_uv| q; a_0u,a_1v,a_0s,a_1t), then the point p_4 = A6(l_st,l_uv;l_1,| p_3;a_1t,a_0s,p_1,p_0), and finally the point p_5 = A6(l_st,l_uv;l_0,| p_3;a_0u,a_1v,p_1,p_0).
Now p_3 and p_4 are the intersections of l_0 and l_1, respectively, with . Since p_3 ≠ p_4, is a line generated by a point on l_0 ∖ q and a point on l_1 ∖ q (as p_0, p_1 ≠ q). Since that line is not a special line, it is in L. Therefore, p_0 and p_1 are collinear.
Case 4. In this case p_0 ∈ l_0 but p_1 ∉ x ∪ l_0 ∪ l_1. We choose names so p_0 = a_0s and p_1 ∈ l_uv as in Case 3. Choose a_1t∈ l_0 ∖ (∪{q}) and form p_2 = A6(l_0,l_1;l_uv,l_st| q;a_0u,a_1v,a_0s, a_1b); then let p_3 = A6(l_uv,l_st;,l_1 | p_2;a_1t,a_1v,p_0,p_1).
Now p_3 is the intersection of with l_1, which implies that is generated by p_0 ∈ l_0 ∖ q and p_3 ∈ l_1 ∖ q. Since is not special, it is a line in L.
Case 5. In this most complicated case we assume p_0 ∈ x ∖ q and p_1 ∉ x ∪ l_0 ∪ l_1. As in the preceding cases we take p_1 ∈ l_uv. Step 1: Choose p_2 = A6(n_1,l_0;n_st,l_1 | a_0s;b_1t,a_0s,a_0u,a_1v). Step 2: p_3 = A6(l_0,l_1;n_st,l_uv| q;a_0s,p_2,a_0u,a_1v). Step 3: p_4 = A6(n_st,l_uv;l_1,| p_3;p_2,a_1v,p_0,p_1). Step 4: p_5 = A6(n_st,l_uv;l_0,| p_3;a_0s,a_0u,p_0,p_1). The result is that is generated by p_5 ∈ l_0 ∖ q and p_4 ∈ l_1 ∖ q so it is in L.
Case 6. Here we assume p_0 ∈ x ∖ q and p_1 ∈ l_1 ∖ q. In this case we take p_0 ∈ n_su. We first find p_2 = A6(l_1,n_1;n_su,l_1 | a_0s;a_0s,b_1u,q,a_1z). Then we find p_3 = A6(n_su,l_1;,n_1 | p_2;p_0,p_1,b_1u,a_1z) and last p_4 = A6(l_1,n_1;l_0,| a_1t;q,a_0s,p_1,p_3). Then is generated by p_4 ∈ l_0 ∖ q and p_1 ∈ l_1 ∖ q, therefore it is in L.
Case 7. Now p_0=q and p_1 ∉ x ∪ l_0 ∪ l_1. As usual we take p_1 ∈ l_uv. The first step is to define p_2 = A6(l_0,l_1;l_uv,n_1 | q;a_0u,a_1v,a_0s,a_1t), and then p_3 = A6(l_1,l_uv;,n_1 | a_0u;p_0,p_1,a_1t,p_2). Since p_3 lies on n_1 it is a point b_1w for a special line w ≠ x. Thus, is generated by p_0 = q = a_0x and p_3 = b_1w; this line is n_xw so it is in N.
Case 8. The last case is where p_0=q and p_1 ∈ l_1. Both are in the line l_1.
In all cases there is a line in _ that contains both p_0 and p_1, so they are collinear in .
We have proved collinearity of all pairs of points in _, so is indeed a projective plane.
An interpretation of Theorem <ref> is the following corollary.
Given three noncollinear ordinary points in a projective rectangle , there is a unique full projective plane in that contains all three points.
Given an ordinary line l and an ordinary point p not in l, there is a unique full projective plane in that contains both.
For the first part, let the three points be p,q,r. No special line contains all three, so there is one, say p, that is not in a special line through the others. The lines pq and pr are ordinary lines, they are distinct by noncollinearity of the three points, and they intersect, so by Theorem <ref> there is a unique full projective plane that contains them and the three points.
The second part follows by taking q,r ∈ l.
In a projective rectangle, every maximal subplane is full.
The line set of an incidence subplane π contains two ordinary lines l_1,l_2 and its point set contains their intersection point. It follows from Theorem <ref> that π is a subplane of the full subplane determined by l_1 and l_2.
Thus, maximality and fullness are equivalent for projective subplanes of a projective rectangle.
From now on, when we refer to a plane in a projective rectangle, we mean a full projective subplane. Also, when we say several lines are coplanar, we mean there is a plane π such that each of the lines that is ordinary is a line of π and for each line s that is special, s ∩π is a line of π.
We can now characterize a nontrivial projective rectangle as a projective rectangle that contains more than one maximal projective subplane. Such projective rectangles have properties not common to all projective planes; e.g., they satisfy the dual half of Desargues's Theorem (see Theorem <ref>) and they are harmonic matroids (see <cit.>).
Let be a projective rectangle. Every ordinary line in is a line of a plane in . If is nontrivial, then every ordinary line l is a line of at least three planes that contain l.
Let l be an ordinary line in . From Theorem <ref> Part (<ref>) we know that there
is another ordinary line l' that intersects l at exactly one point. This and Theorem <ref>
imply that l is in a plane π.
If is nontrivial, there is a point q not in π. Let p_1,p_2 ∈π be points in l that are not in the special line that contains q. Then the plane p_1p_2q that contains both ordinary lines p_1q and p_2q, which exists and is unique by Theorem <ref>, is a plane containing l that is different from π. To find a third plane, choose ordinary points q_1 in π and q_2 in p_1p_2q, neither on l and not on a common special line. The ordinary line q_1q_2 must contain a third point q_3 since m≥3 by Theorem <ref>, and q_3 lies in neither of the first two planes. By Corollary <ref> there is a unique plane that contains l and q_3, and it is a third plane on l.
If s is a special line in the projective rectangle and π is a plane in , then s ∩π is a line of π.
Let p_1 and p_2 be points in distinct special lines that are not s. Then by Axiom (A<ref>) there is an ordinary line l that contains both p_1 and p_2, and by Corollary <ref> there is a plane π that contains l.
In π there is another line l' that intersects l at p_1; then q=l∩ s and q'=l' ∩ s are two points in s ∩π, which determine a line in π that is contained in the unique line s of that contains q and q'. Thus, s ∩π is a line of π.
Now we prove a generalization of Theorem <ref> to all lines, although we lose uniqueness of the containing plane.
Let be a projective rectangle. If two lines l_1 and l_2 intersect in a point p, then they are coplanar.
Suppose l_1 is a special line. There are points p_1 in l_1 ∖ l_2 ∖ D and p_2 in l_2 ∖ l_1. By Axiom (A<ref>) there is an ordinary line l_3 determined by p_1 and p_2.
If l_2 is ordinary, by Theorem <ref> there is a unique plane π that contains l_2 and l_3. By Proposition <ref> the restriction of l_1 to π is a line of π, so l_1 and l_2 are coplanar.
If l_2 is special, then l_3 is ordinary. By Proposition <ref> there is a plane π that contains l_3, and by Proposition <ref> both l_1∩π and l_2∩π are lines of π. Thus, l_1 and l_2 are coplanar.
Next is an intersection property of lines that has a consequence for the matroid structure of a projective rectangle.
Suppose three lines in a projective rectangle intersect pairwise in three different points. Then they are a coplanar triple.
Equivalently, if three lines intersect pairwise (i.e., are pairwise coplanar) but are not a coplanar triple, then they all intersect in the same point.
Suppose two ordinary lines l_1, l_2 intersect in a point p and lie in a common plane π, and suppose a third line l_3, possibly special, intersects l_1 and l_2 in points different from p. Choosing any points q_1 ∈ l_1 ∖ p and q_2 ∈ l_2 ∖ p determines a line of π through q_1 and q_2. By Construction <ref> and Theorem <ref>, this line is either an ordinary line of or the restriction to π of a special line of . In particular, this applies to l_3, hence l_1, l_2 and l_3 are a coplanar triple of lines of .
In case l_1 is ordinary while l_2 and l_3 are special, by Corollary <ref> l_1 and l_2 are coplanar in a plane π and by Proposition <ref> l_3∩π is a line of π, so the three lines are coplanar.
The second statement, which is the contrapositive of the first (and see Corollary <ref>), is a useful restatement.
If a finite projective rectangle has order (n,n), then it is a projective plane.
Because n=m, the projective plane of Corollary <ref> is the whole projective rectangle.
This proposition does not apply to the infinite case; see Example <ref>.
§.§ No Vamos configuration
The Vamos matroid is the matroid of eight points in Figure <ref>. It is one of the smallest matroids that cannot be represented in a projective geometry; for that reason it is one of the fundamental matroid examples.
However, we shall not think of it as a matroid but as an incidence structure with eight points as well as lines and planes. The lines are the solid lines in Figure <ref> and the planes are the ones composed of pairs of lines as described in the caption.
(As a matroid a projective rectangle has rank 3 while the Vamos matroid has rank 4 and therefore it is trivial that it cannot be a submatroid of a projective rectangle. That is why it is important to think of the Vamos incidence structure instead of the Vamos matroid, even though they look the same in a diagram.)
The Vamos incidence structure is not a substructure of any projective rectangle.
Suppose a configuration of this kind exists in a projective rectangle. By Proposition <ref> the lines l_1,l_2,l_3 are concurrent in a point and the lines l_2,l_3,l_4 are also concurrent in a point. Clearly, these points are one point, so l_1 and l_3 contain a common point and hence are coplanar, contrary to the structure of the Vamos matroid. That proves the corollary.
§ FINITE PROJECTIVE RECTANGLES
In finite projective rectangles there are many possibilities for counting elements and configurations. They are the topic of this section.
§.§ Counts
We extend the counts of points, lines, etc. in Section <ref> to planes and various kinds of incidence.
Let be a projective rectangle of order (m,n).
* The number of ordinary lines that are concurrent with each ordinary line is m(n-2).
* There are m(m-1) ordinary points and (m-1)^2 ordinary lines in each plane.
* The number of pairs (p,l) that consist of an ordinary point p and an ordinary line l that contains p is m(n-1)^2.
* The number of planes that contain each ordinary line is (n-2)/(m-2).
* The number of pairs (l,π) such that l is an ordinary line and π is a plane that contains l is (n-1)^2(n-2)/(m-2).
* The number of planes in is (n-1)^2(n-2)/[(m-1)^2(m-2)].
* For a fixed ordinary point p, the number of triples (p,l,π) such that l is an ordinary line incident with p and π is a plane that contains l is (n-1)(n-2)/(m-2).
* The number of triples (p,l,π) such that p is an ordinary point, l is an ordinary line, and π is a plane that contains l is m(n-1)^2(n-2)/(m-2).
* The number of pairs (p,π) such that p is an ordinary point and π is a plane that is incident with p is m(n-1)^2(n-2)/[(m-1)(m-2)].
* The number of planes that are incident with each ordinary point is (n-1)(n-2)/[(m-1)(m-2)].
Proof of (<ref>). Let l be an ordinary line. From Part (<ref>) there are m points on l. From Theorem <ref> Part (<ref>) we know there are n-2 ordinary lines that intersect l at each point. All those lines are distinct.
Proof of (<ref>). This follows from the fact that the plane is projective of order m-1. We exclude the one special point D and the m special lines in the plane.
Proof of (<ref>). Each of the (n-1)^2 ordinary lines (Theorem <ref> Part (<ref>)) contains m ordinary points (Part (<ref>)).
Proof of (<ref>).
Let l be an ordinary line. From Part (<ref>) there are m(n-2) ordinary lines l' that intersect l at exactly one point.
Theorem <ref> guarantees the existence of a unique plane π that contains
both l and l'. By Part (<ref>) the number of ordinary lines in π that intersect l is (m-1)^2-1 = m(m-2). Thus, the number of planes on l is the quotient, m(n-2)/m(m-2)=(n-2)/(m-2).
Proof of (<ref>). The number of ordinary lines should be multiplied by the number of planes on each line.
Proof of (<ref>). The number of incident line-plane pairs should be divided by the number of ordinary lines in a plane.
Proof of (<ref>). The number of incident line-plane pairs should be multiplied by the number of points in an ordinary line.
Proof of (<ref>). The number of triples in Part (<ref>) should be multiplied by the number of ordinary points from Part (<ref>).
Proof of (<ref>). The number of triples in Part (<ref>) should be divided by the number of ordinary lines in π that contain p, which is m-1.
Proof of (<ref>). Either divide the number of triples in Part (<ref>) by m-1, the number of ordinary lines on p in π, or divide the number in Part (<ref>) by m(n-1), the whole number of ordinary lines on p.
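The following Python sketch (ours) evaluates these plane counts for a given order (m,n); all of the displayed quantities must be integers, which is the numerical constraint exploited in the subsection on parameters below. For (m,n)=(3,5) it gives 3 planes per ordinary line, 6 planes per ordinary point, and 12 planes in all.

```python
from fractions import Fraction

# Minimal sketch: derived plane counts for a projective rectangle of order
# (m, n); every value must be an integer for such a rectangle to exist.
def plane_counts(m, n):
    return {
        "ordinary points per plane": m * (m - 1),
        "ordinary lines per plane": (m - 1) ** 2,
        "planes per ordinary line": Fraction(n - 2, m - 2),
        "planes per ordinary point": Fraction((n - 1) * (n - 2), (m - 1) * (m - 2)),
        "planes": Fraction((n - 1) ** 2 * (n - 2), (m - 1) ** 2 * (m - 2)),
    }

print(plane_counts(3, 5))
```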
Two lines are skew if they have no point in common.
A skew class of lines is a maximal set of lines in which every pair is skew. If a line has no skew mate, it is a skew class of one. A line may belong to more than one skew class. Two lines that are skew to the same line may intersect.
If is a finite projective rectangle of order (m,n), then the following hold in :
* Given an ordinary point p and given any ordinary line
l that does not contain p, there are exactly n-m ordinary lines containing p that are skew to l.
* If l is an ordinary line, then there are (n-2)(n-m) lines that are skew to l.
* If l_1 is skew to l, there are m(n-m) lines skew to l that are concurrent with l_1.
Proof of Part (<ref>). From Theorem <ref> Part (<ref>)
we know that there are exactly n lines passing through p (including a special line).
From Theorem <ref> Part (<ref>) we also know that there are exactly m lines
passing through p that intersect l (including a special line). Therefore, there are exactly (n-1)-(m-1)
ordinary lines passing through p and skew to l.
Part (<ref>) follows by subtracting from the number of ordinary lines, (n-1)^2 (Theorem <ref> Part (<ref>)), the number that are concurrent with l, which is m(n-2) (Theorem <ref> Part (<ref>)), and the number that are l, which is 1.
Part (<ref>) follows from Part (<ref>).
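As with the earlier counts, these skew-line numbers are easy to tabulate; the short sketch below (ours) prints them for order (3,5), where each ordinary line has (n-2)(n-m)=6 skew mates.

```python
# Minimal sketch: skew-line counts from the preceding theorem for order (m, n).
def skew_counts(m, n):
    return {
        "skew lines through a point off l": n - m,
        "lines skew to an ordinary line l": (n - 2) * (n - m),
        "lines skew to l and concurrent with a fixed skew mate": m * (n - m),
    }

print(skew_counts(3, 5))
```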
Suppose that is a nontrivial projective rectangle of order (m,n).
Let l be an ordinary line in .
There is a skew class containing l that has at least m lines in it.
I.e., there are m-1 ordinary lines skew to l and skew to one another.
Let M = ⌈ (n-m)/(m-1) ⌉ - m, the largest integer such that (n-1)/(m-1)>m+M. Then there is a skew class containing l that has at least m+M lines in it.
I.e., there are m+M-1 ordinary lines skew to l and skew to one another.
Let l be an ordinary line and let l_1 ≠ l be an ordinary line passing through q∈ l. Let p ≠ q be a second point in l.
By Theorem <ref> Part (<ref>), since n>m there is an ordinary line l_2 passing through
p skew to l_1. Let a_i and b_i' be the points in l_1 and l_2, respectively, for i=1,2, …, m, labeled so that the line a_ib_i' is special. For points b_i, b_j in l_2, the lines a_ib_i and a_jb_j with i ≠ j, b_i ≠ b_j, b_i ≠ b_i', and b_j ≠ b_j' are ordinary and are skew to each other, because
if they intersected, then by Axiom (A<ref>), l_1 would intersect l_2, which is a contradiction.
Note that it is easy to choose all b_i ≠ b_i' since m>1.
Also, we can suppose that l is the line a_1b_1.
Now we suppose that (n-1)/(m-1)-m>0 and M is the largest integer such that (n-1)/(m-1)>m+M. (Thus, n>m+M.) Let s be a special line with points
s_1, s_2, …, s_m, …, s_n-1,D. Suppose that s∩ a_ib_i=s_i for i=1, …, m. We prove by induction that there are lines
h_1, h_2, …, h_M, skew to one another and to all lines of the form a_ib_i.
Assume we have k lines h_1, h_2, …, h_k that are skew to one another and to
all lines of the form a_ib_i for some k∈{0,1, …, M-1}, where s_m+t∈ h_t for t=1, 2, …, k.
First note that neither h_t nor a_ib_i contains the point s_m+k+1 and that (m-1)(m+k) is the number of points in
(⋃_t=1^k h_t∪⋃_i=1^m a_ib_i)∖ s.
Thus, the maximum number of ordinary lines passing through s_m+k+1 that intersect a line of the form a_ib_i or one of the lines
h_1, …, h_k is (m-1)(m+k). Since s_m+k+1 is an ordinary point, by Theorem <ref> Part (<ref>)
we know there are n-1 ordinary lines passing through this point. Since (n-1)>(m-1)(m+k) there must be at least one ordinary line h_k+1
passing through s_m+k+1 that is skew to all lines of the form a_ib_i and the lines h_1, …, h_k. This proves the induction, completing the proof.
In the notation of Theorem <ref>, M = (τ-1)m - 2τ. This is negative or zero if τ = 1, or if τ=2 and m≤4, and positive otherwise, so in the “otherwise” case the second bound on the maximum size of the skew class is the better one.
§.§ Constraints on the parameters
We have found some integers in Theorem <ref>, namely,
ρ=(n-2)/(m-2), (n-1)(n-2)/[(m-1)(m-2)], and (n-1)^2(n-2)/[(m-1)^2(m-2)].
These integral fractions imply relationships between m and n.
Theorem <ref> is a constraint on n, given a value of m. By Section <ref> m-1 must be the order of a projective plane; that is the only constraint we know on m.
Let p,p' be two ordinary points in a special line s. Let s' be any other special line. The planes π that contain both p and p' partition s'∖ D into sets π∩(s'∖ D) of size m-1, and each such set is in a unique plane that contains p and p', so there are (n-1)/(m-1) such planes.
For an ordinary point q∈ s' let π(q) denote the plane that contains p,p',q. This plane is unique, by Theorem <ref>, because it is determined by the intersecting ordinary lines pq and p'q.
Choose another ordinary point q' ∈ s' ∖π(q) and suppose π(q) and π(q') contain a common point r. Then both planes contain the intersecting ordinary lines pr and p'r, so they must be the same plane. It follows that the distinct planes π(q) for q ∈ s' ∖ D partition the points of s' ∖ D. The intersection π(q) ∩ s' is a line of π(q) that contains D, so the number of ordinary points in it is m-1. The number of sets into which s' ∖ D is partitioned is therefore equal to (n-1)/(m-1), and this is the number of planes that contain both p and p'.
For a projective rectangle of order (m,n), there is an integer τ≥ 0 such that n = m + τ (m-1)(m-2). If is nontrivial, then τ≥ 1.
We simplify the notation by writing ν=n-1 and μ=m-1.
Integrality of (n-2)/(m-2) implies that there is an integer ρ≥ 1 such that ν = 1 + ρ(μ-1).
Proposition <ref> implies that ν = σμ for some positive integer σ. Therefore, ν = ρ(μ-1)+1 = σμ. It follows that (ρ-σ)μ = ρ-1, so ρ-1 is a multiple of μ, say ρ = τμ+1 where τ≥0. Substituting ρ = τμ+1 into (ρ-σ)μ = ρ-1 = τμ gives ρ-σ = τ, and hence σ = ρ-τ = τ(μ-1) + 1. This implies ν = σμ = τμ(μ-1) + μ, so n-m = ν-μ = τμ(μ-1).
We infer the expressions
(n-2)/(m-2) = τ(m-1)+1, (n-1)/(m-1) = τ(m-2)+1,
(n-1)(n-2)/[(m-1)(m-2)] = [τ(m-2)+1] [τ(m-1)+1],
(n-1)^2(n-2)/[(m-1)^2(m-2)] = [τ(m-2)+1]^2 [τ(m-1)+1].
If the projective rectangle is nontrivial, n ≥ (m-1)^2 + 1 and ρ≥ m.
If the projective rectangle has m=3, then n= 3 + 2τ, where τ≥0. The value τ=0 gives the Fano plane and τ=1 gives n=5 as with the L_2^2 projective rectangle of Example <ref>.
However, not all those values of τ admit a projective rectangle with m=3; there are examples only for n = 2^k+1, that is, for τ = 2^{k-1}-1 (see Section <ref>). Our numerical constraints need strengthening.
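To make the constraint concrete, the following sketch (ours) lists the orders (m,n) permitted by n = m + τ(m-1)(m-2) for small m and τ; as noted above, this is only a necessary condition, and for m=3 examples are known only when n = 2^k+1.

```python
# Minimal sketch: candidate orders (m, n) allowed by n = m + t*(m-1)*(m-2),
# where t plays the role of tau.  This is a necessary condition only.
def admissible_orders(m, t_max=5):
    return [m + t * (m - 1) * (m - 2) for t in range(t_max + 1)]

for m in (3, 4, 5):
    print(m, admissible_orders(m))
```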
§ AXIAL AND CENTRAL DESARGUES'S THEOREMS
Consider two triangles in a projective rectangle, A = a_1a_2a_3 and B = b_1b_2b_3. (A triangle consists of three points, not all collinear, and the three lines joining the points in pairs.) There are three lines l_i = a_ib_i; if they concur in a point p we say the triangles are centrally perspective from center p. If each of the three pairs of lines a_ia_j and b_ib_j meets in a point p_ij and the points p_12, p_13, p_23 are collinear in a line l, we say A and B are axially perspective from axis l.
The Central Desargues's Theorem says that, if two triangles are centrally perspective, then they are axially perspective. The converse is the Axial Desargues's theorem. The two together are generally known as Desargues's Theorem.
In a projective plane the points p_ij always exist. However, neither half of Desargues's Theorem is valid in every projective plane; in fact the validity of Desargues's Theorem is equivalent to the existence of plane coordinates in a division ring. Thus, for any plane, knowing whether Desargues's theorem holds true is a fundamental question.
Every projective plane is a projective rectangle, so we cannot say that Desargues's Theorem holds true in every projective rectangle; but eliminating projective planes from consideration changes the situation. We first establish that each triangle in the axial configuration is necessarily coplanar.
If A= a_1a_2a_3 is a triangle and l is a line that intersects the three lines a_ia_j in three points p_ij, then all six points and the four lines are contained in a unique plane.
There are four lines in the configuration of six points: l and the lines l_ij = a_ia_j. At most two can be special, so two are ordinary, say l' and l”. Any two of the four lines intersect, so l' and l” intersect; this implies they are in a unique plane π (by Theorem <ref>). The other two lines of the four are each determined by one point on l' and one on l”, so each is a line of π, or if special its intersection with π is a line of π.
Let be a nontrivial projective rectangle. Every plane in satisfies the Axial Desargues's Theorem when the axis is an ordinary line.
We begin by assuming triangles A and B are in planes π_A and π_B, respectively, and are axially perspective from an ordinary line l with intersection points p_ij, as in Figure <ref>. The two planes may be the same or different; if they are different, l is their intersection. We may assume a_i ≠ b_i for i=1,2,3 because otherwise the conclusion is trivial.
If a_1b_1, a_2b_2, a_3b_3 are not all coplanar, they are coplanar in pairs, since a_i,b_i,a_j,b_j all lie in the plane determined by the intersecting lines a_ia_j and b_ib_j, which meet at p_ij. Hence, by Proposition <ref> there is a point q at which all three lines are concurrent; therefore, q is a center of perspectivity for A and B.
Thus, we assume henceforth that a_1b_1, a_2b_2, a_3b_3 are all in one plane, so that π_A = π_B.
There is another plane π_C on l because is nontrivial and l is ordinary (by Corollary <ref>), and in this plane we can find a triangle C = c_1c_2c_3 that is axially perspective from l with the same intersection points p_ij = l ∩c_ic_j.
The lines b_ic_i and b_jc_j are coplanar in a plane p_ijb_ic_j = b_ic_ib_jc_j. Therefore, they intersect in a point s_ij.
The pairwise coplanar lines b_1c_1, b_2c_2, and b_3c_3 are not all coplanar because c_1c_2c_3 = π_C ∌b_1,b_2,b_3. By Proposition <ref>, those three lines have a common point s = s_12 = s_13 = s_23. See Figure <ref>.
Similarly, there is a point r = a_1c_1∩a_2c_2∩a_3c_3.
We prove that r ≠ s and r,s ∉π_A. If r=s, then a_ic_i = ra_ic_i = rc_i and b_ic_i = sb_ic_i = rc_i, so ra_ic_i and rb_ic_i are the same line; that is, a_i,b_i,c_i are collinear; but this is impossible. Similarly, a_i,b_i,c_i are collinear, which is impossible, if r or s ∈π_A.
Each plane a_ib_ic_i contains r and s so the lines a_ib_i and rs are coplanar. We know that r,s ∉a_ib_i⊂π_A. Hence, we have three triples a_ib_i, a_jb_j, rs of lines that are coplanar in pairs but not all coplanar. By Proposition <ref> there is a point q_ij at which each triple is concurrent. Then taking i=1 and j=2,3, we have q_12 = rs∩a_1b_1 = q_13, so q_12=q_13 is a point on all three lines a_1b_1, a_2b_2, a_3b_3 and a center of perspectivity for A and B.
That completes the proof.
The case in which A and B are not coplanar is reminiscent of the higher-dimensional Desargues's Theorem for projective geometries. That suggests a central Desargues's Theorem for noncoplanar triangles.
Let be a nontrivial projective rectangle. Then satisfies the Central Desargues's Theorem for triangles that are not coplanar.
We begin by assuming triangles A and B are in two different planes, π_A and π_B respectively, and are centrally perspective from a point p.
We show that we may assume a_i ≠ b_i for i=1,2,3. Since the triangles are not coplanar, they cannot be equal; in particular, say, a_3 ≠ b_3. The conclusion is trivial if a_1=b_1 and a_2=b_2; the axis is then a_1a_2=b_1b_2. Suppose henceforth that a_2 ≠ b_2 and a_3 ≠ b_3.
Assume first that a_1 ≠ b_1.
Let l_i := a_ib_i (which exists and contains p by central perspectivity), p_ij := a_ia_j∩b_ib_j (which exists because a_i,b_i,a_j,b_j,p are coplanar and any distinct three of them, excluding D if one of them is not ordinary, determine the plane), and λ_ij := p_ikp_jk where {i,j,k} = {1,2,3}. The lines λ_ij exist if a_1 ≠ b_1 because if p_ij=p_ik (i,j,k all different), then this point is the intersection of a_ia_j and a_ia_k but that intersection is a_i, and it is also the intersection of b_ib_j and b_ib_k but that intersection is b_i, from which it follows that a_i=b_i, contrary to our assumption.
Now we observe that all points p_ij∈π_A ∩π_B, so all lines λ_ij⊆π_A ∩π_B. But as we assumed π_A ≠π_B, their intersection cannot consist of more than one line. It follows that λ_12 = λ_13 = λ_23 and this is the required axis of perspectivity.
If a_1=b_1, in the previous discussion the line l_1 degenerates to a point and the rest of the proof is similar but simpler, with a_1p_23 as the axis of perspectivity.
We note that any of the lines in the proof might be special, but because we only argue within planes, the proof is not affected.
Theorem <ref> reinforces our belief that a nontrivial projective rectangle should be regarded as, in a strange way, nonplanar. Unfortunately, we were not able to make this intuition precise.
§ THE SUBPLANE CONSTRUCTION OF PROJECTIVE RECTANGLES
Given a projective plane π and a subplane π', we wish to get a projective rectangle by taking a point D, all the lines joining it to points of π', all the points on those lines, and all the restrictions to our point set of the lines in π that are generated by our points (i.e., contain at least two of our points).
D must be taken in the subplane.
Suppose D is not in π'. Take a point P ∈π' and the line L:=PD. This is supposed to be a special line, so it must be a line of any plane in the projective rectangle; the proof is that every line of a projective rectangle, thus every line of π', intersects every special line (Axiom (A<ref>)), so L ∩π' cannot be a single point. Therefore L ∩π' must be a line of π'. Now consider a second point P' ∈π' ∖ L. Then L and L'=P'D are both extensions of lines of π', so they intersect in a point of π'; but they intersect in D; this means D ∈π'.
We could simplify the construction: Take a subplane π' and one line l in it, and any point D in π' ∖ l. For the projective rectangle, take all lines that join D to l and for ' take all points of π on those lines. This gives precisely the subplane construction, because already it gives all the points of π' and then only the points generated from D and π' in that construction.
A plane is Pappian if it is coordinatized by a (commutative) field.
The subplane construction in a Pappian projective plane produces a projective rectangle.
Let our point set be ' and the incidence structure induced on it by π be '. There are two kinds of line in ': a long line is a line of π and a short line l is the restriction to ' of a line L of π that is not contained in ', so if l is any short line, L denotes its extension into π. If ' turns out to be a projective rectangle, the long lines will be the special lines of ' and the short lines will be the ordinary lines.
Axiom (A<ref>): By definition, since we took every line generated by two points of '.
Axiom (A<ref>): Four such points exist in the subplane π'.
Axiom (A<ref>): By definition.
Axiom (A<ref>): Every point of ' is in a long line, every short line of ' is a restriction of a line L of π, and any two lines of π intersect in a point P. Thus, for each short line l of ', its extension L intersects each long line s in a point which, by definition of ', is in the long line s.
Axiom (A<ref>): Follows from (A<ref>) because there are at least 3 special lines.
Axiom (A<ref>): Let the other two lines be l_1' and l_2'. If either of them is long, the conclusion follows from Axiom (A<ref>). Therefore, assume l_1' and l_2' are short lines. If two or more of them are in π', then all four are and the property follows from that of a projective plane. This leaves two cases: One of the lines is in π', or none is. We give an analytic proof, using coordinates, when π=π() for a field , so we can take π' to be a subplane generated by a subfield '.
We write P := l_1 ∩ l_2, Q_ij := l_i ∩ l_j', R := L_1' ∩ L_2'. We need to prove that R ∈'.
Write I_m for the point on the ideal line L_∞ that is on all lines of slope m. We choose D to be the point I_∞ on all vertical lines of π; thus, the point set of our supposed projective rectangle is
' = {[z:x:y] : z=0, or z=1 and x ∈'}.
We consider two cases, depending on whether or not one of the short lines is within π'=π(').
Case 1. One of the short lines is in π', say l_1 ⊆π'.
Since we can assign coordinates arbitrarily to any three noncollinear points in π', we may choose the coordinate system so that l_1 has the equation y=0, P = (0,0), l_2 has the equation y=m_2x, and l_2' has the equation y = b_2' (where b_2' ∉' since l_2' ⊈π'). Then Q_12 = I_0. The equation of l_1' has the form y = m_1'x+b_1'. Note that m_2, m_1' ∉' since l_2, l_1' are not in π'.
From this information we can find the coordinates of the other intersection points. They are
Q_11 = (-b_1'/m_1', 0 ), Q_21 = (b_1'/(m_2-m_1'), y_21), Q_22 = (b_2'/m_2, b_2'),
R = ((b_2'-b_1')/m_1' , b_2').
Because Q_11, Q_21, Q_22∈', their x-coordinates are in '. None equals 0. Therefore,
m_1'/b_1', (m_2-m_1')/b_1', b_2'/m_2∈',
so also
m_2/b_1'∈'.
The x-coordinate of R is
(b_2'-b_1')/m_1' = b_2'/m_1' - b_1'/m_1' = (b_2'/m_2)(m_2/b_1')(b_1'/m_1') - b_1'/m_1'∈',
proving that R ∈'.
Case 2. None of the four short lines is in π'.
We choose coordinates so that P ∈ L_∞; that is, P = I_m for some m ∈, so l_1 has equation y=mx+b_1 and l_2 has equation y=mx+b_2 with b_1,b_2 ∈ and b_1 ≠ b_2. The other lines l_j' have equations y = m_j'x+b_j', where m_j' ≠ m. The special case m_1'=m_2' is not excluded, but then R ∈ L_∞⊆', so we may assume m_1' ≠ m_2'. The special case b_1' = b_2' is also not excluded; then R is in the line x=0; this case will be dealt with in the course of the proof. We can exclude m_1'=m and m_2'=m since then P ∈ l_1' or l_2', respectively, which violates the assumption of Axiom (A<ref>).
The intersection points (other than P), which cannot be in L_∞, have coordinates
Q_11 = ((b_1-b_1')/(m_1'-m), y_11),
Q_12 = ((b_1-b_2')/(m_2'-m), y_12),
Q_21 = ((b_2-b_1')/(m_1'-m), y_21),
Q_22 = ((b_2-b_2')/(m_2'-m), y_22),
R = ((b_1'-b_2')/(m_2'-m_1'), y_R).
The x-coordinates of the Q_ij are in '; we want to show that of R is also in '.
Write ρ_ij for the x-coordinate of Q_ij. That is, b_i-b_j' = ρ_ij(m_j'-m). These are four equations E_ij. By combining E_11 with E_21 and E_12 with E_22 we infer that
b_2-b_1 = (ρ_21-ρ_11)(m_1'-m) = (ρ_22-ρ_12)(m_2'-m).
Thus,
(m_1'-m)/(m_2'-m) = (ρ_22-ρ_12)/(ρ_21-ρ_11) =: α∈'.
(This last step would be forbidden if ρ_21=ρ_11, but that implies l_1' contains D, contrary to assumption.)
Now combining E_11 with E_12 and E_21 with E_22 we infer that
b_2'-b_1' = ρ_11(m_1'-m) - ρ_12(m_2'-m) = (αρ_11-ρ_12)(m_2'-m)
with α∈' as above, and similarly
b_2'-b_1' = (ρ_21-βρ_22)(m_1'-m)
with β := 1/α = (ρ_21-ρ_11)/(ρ_22-ρ_12) ∈'. Rewriting,
m_1'-m = (b_2'-b_1')/(ρ_21-βρ_22),
m_2'-m = (b_2'-b_1')/(αρ_11-ρ_12),
which combine to give
m_2'-m_1' = (b_2'-b_1') ( 1/(αρ_11-ρ_12) - 1/(ρ_21-βρ_22) ),
or in a different form,
(m_2'-m_1')/(b_2'-b_1')∈'.
Up to sign, this is the reciprocal of the x-coordinate of R; consequently, R ∈'.
The one caveat is that, if b_1'=b_2', we cannot proceed from Equation (<ref>); but then that equation implies m_1'=m_2', which was excluded at the beginning of the proof. So this difficulty will not occur.
That concludes the proof of Theorem <ref>.
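As a concrete check of the theorem in its smallest case, the following Python sketch (our own encoding, not the paper's notation) carries out the subplane construction with coordinate field GF(4), subfield GF(2), and D the common point of the vertical lines, using the description of the point set from the proof. It verifies that the result has 3 special lines of 5 points, 16 ordinary lines of 3 points, and exactly one line through every pair of points, i.e., a projective rectangle of order (3,5), which matches the narrow rectangle L_2^2 of the earlier example.

```python
from itertools import combinations

# Minimal sketch of the subplane construction for PG(2, GF(4)) with the
# subplane over GF(2) and D = common point of the vertical lines.
ADD = lambda a, b: a ^ b                         # GF(4) addition (characteristic 2)
MUL_TABLE = {(2, 2): 3, (2, 3): 1, (3, 2): 1, (3, 3): 2}
def mul(a, b):                                   # GF(4) multiplication
    if a in (0, 1) or b in (0, 1):
        return 0 if 0 in (a, b) else (a if b == 1 else b)
    return MUL_TABLE[(a, b)]

F, Fsub = [0, 1, 2, 3], [0, 1]

D = ("D",)                                       # the special point (vertical direction)
ideal = [("I", a) for a in F]                    # remaining points of the ideal line
affine = [(x, y) for x in Fsub for y in F]       # affine points with x in the subfield
points = [D] + ideal + affine

special = [[D] + [p for p in affine if p[0] == c] for c in Fsub] + [[D] + ideal]
ordinary = [[("I", a)] + [(x, ADD(mul(a, x), b)) for x in Fsub]
            for a in F for b in F]               # restrictions of the lines y = ax + b
lines = special + ordinary

assert len(special) == 3 and all(len(s) == 5 for s in special)
assert len(ordinary) == 16 and all(len(l) == 3 for l in ordinary)
for p, q in combinations(points, 2):             # Axiom (A1): one line per point pair
    assert sum(1 for l in lines if p in l and q in l) == 1
print("order (3,5):", len(points), "points,", len(lines), "lines")
```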
If π is Pappian and its coordinatizing field is not a prime field, then π has a proper prime subplane, so there are proper subplanes with which to carry out this construction. All Desarguesian planes and many others have proper subplanes (e.g., planes over near fields; cf. the book of Hughes and Piper <cit.>). However, we do not know whether the subplane construction works in a non-Pappian plane.
We did not try to construct an algebraic proof for Desarguesian planes; we chose to study only Pappian planes to keep the algebra simple.
We fear that generalization may require finding a synthetic proof.
There are nontrivial projective rectangles in which n=m, but n,m must be infinite. Suppose is a field that has a proper subfield ' of the same infinite cardinality. The subplane construction generates a nontrivial projective rectangle with n=|| and m = |'| = n, within which π(') is one of the (full) planes.
This contrasts with the case of finite m=n in Proposition <ref>.
§ NARROW RECTANGLES
The smallest allowed value of m is 3. We call a projective rectangle narrow if it has m=3.
The matroid L_2^k of Example <ref> is defined for any group 𝔊 (except the trivial group), simply replacing ℤ_2^k by 𝔊. In fact, all we need for 𝔊 is a (nontrivial) quasigroup; this matroid is the complete lift matroid L_0(𝔊K_3) from <cit.> or <cit.>. We define L_0(𝔊K_3) in a way compatible with Example <ref>. The ground set is E:= A∪ B∪ C where A:= { a_g | g ∈𝔊}∪{D }, B:= { b_g | g ∈𝔊}∪{D } and
C:= { c_g | g ∈𝔊}∪{D }. The lines (rank-2 flats of the matroid) are A, B, and C and the sets {a_g, b_gh, c_h } with g, h ∈𝔊. If this is a projective rectangle, A, B, and C are the special lines and the other lines are the ordinary lines. But L_0(𝔊K_3) is not always a projective rectangle.
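This construction is easy to carry out explicitly. The following Python sketch (helper names such as build_L0 are ours, purely for illustration) lists the points, special lines and ordinary lines of L_0(𝔊K_3) for 𝔊 = ℤ_2^k, with group elements written additively as bit-tuples; for k=2 it reproduces the rectangle L_2^2 used in the examples below.

```python
from itertools import product

def build_L0(k):
    """Points and lines of L_0(G K_3) for G = (Z_2)^k, with group elements as bit-tuples."""
    G = list(product([0, 1], repeat=k))
    add = lambda g, h: tuple(x ^ y for x, y in zip(g, h))   # the group operation
    D = ('D',)
    A = {('a', g) for g in G} | {D}
    B = {('b', g) for g in G} | {D}
    C = {('c', g) for g in G} | {D}
    special = [frozenset(A), frozenset(B), frozenset(C)]
    ordinary = [frozenset({('a', g), ('b', add(g, h)), ('c', h)})   # the line {a_g, b_gh, c_h}
                for g in G for h in G]
    return A | B | C, special, ordinary

points, special, ordinary = build_L0(2)          # the narrow rectangle L_2^2
print(len(points), len(special), len(ordinary))  # 13 points, 3 special lines, 16 ordinary lines
```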
Every narrow projective rectangle has the form L_0(𝔊K_3) where 𝔊 is a nontrivial group with exponent 2, and conversely. If is finite the group is ℤ_2^k with k≥1 and its parameters are (m,n)=(3,2^k+1) with k≥1.
This proposition includes infinite groups.
First we note that every narrow projective rectangle is an L_0(𝔊K_3) where 𝔊 is a quasigroup of order greater than 1. There are three special lines, which we call A, B, and C. We label the elements of each line, except D, by a set G of labels and we define an operation on G by gh=k such that a_gc_hb_k is an ordinary line of . It is clear that this is well defined and that any two of g,h,k determine the third, so G is a quasigroup. Then is the same as L_0(𝔊K_3) except that in the projective rectangle we ignore the trivial lines of the matroid.
Now let us assume that a matroid L_0(𝔊K_3) is a projective rectangle. We prove that 𝔊 satisfies the following fundamental property:
gh=ef ⟹ gf=eh.
Consider the lines l_1={a_g,b_gh,c_h} and l_2={a_e,b_ef,c_f} in Axiom (A<ref>), and two other lines, l={a_g,b_gf,c_f} and l'={a_e,b_eh,c_h}. According to Axiom (A<ref>) the lines l and l' should have a common point, so b_gf=b_eh, which means gf=eh.
Any quasigroup is isotopic to a loop (a quasigroup with identity element, 1), so we may assume 𝔊 is a loop. Suppose h=e=1 in Equation (<ref>). Then g=f ⟹ gf=1; in other words, gg=1 for every element of 𝔊. Suppose g=h and e=f. Then gh=gg=1=ee=ef by the previous step, so Equation (<ref>) gives gf=eh, that is, ge=eg; hence 𝔊 is commutative. A property that characterizes a quasigroup that is isotopic to a group is the Quadrangle Criterion <cit.>, which is
a_1c_1=a_2c_2, a_1d_1=a_2d_2, b_1c_1=b_2c_2 ⟹ b_1d_1=b_2d_2.
We prove the Quadrangle Criterion for 𝔊 by means of Equation (<ref>).
a_1c_1=a_2c_2 ⟹ a_1a_2=c_1c_2,
a_1d_1=a_2d_2 ⟹ a_1a_2=d_1d_2,
b_1c_1=b_2c_2 ⟹ b_1b_2=c_1c_2.
The first two lines imply that c_1c_2=d_1d_2 and combined with the third line we deduce that b_1b_2=d_1d_2, proving the Quadrangle Criterion. Hence, 𝔊 is isotopic to a group. By isotopy we may assume 𝔊 is a group, and we have seen that it is abelian and has exponent 2. If 𝔊 is finite, it is ℤ_2^k for some positive integer k as in Example <ref>. These necessary properties of 𝔊 are sufficient for L_0(𝔊K_3) to be a projective rectangle, because exponent 2 implies Axiom (A<ref>), as is easy to verify.
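The role of the fundamental property is easy to probe computationally. The brute-force sketch below (function names are ours) confirms that Equation (<ref>) holds for ℤ_2^2 but already fails for ℤ_4, in line with the proposition.

```python
from itertools import product

def satisfies_fundamental_property(elements, op):
    """Check that g*h == e*f implies g*f == e*h for all choices of g, h, e, f."""
    return all(op(g, f) == op(e, h)
               for g, h, e, f in product(elements, repeat=4)
               if op(g, h) == op(e, f))

Z2xZ2 = list(product([0, 1], repeat=2))
xor = lambda g, h: tuple(x ^ y for x, y in zip(g, h))
Z4 = list(range(4))
mod4 = lambda g, h: (g + h) % 4

print(satisfies_fundamental_property(Z2xZ2, xor))   # True: consistent with the proposition
print(satisfies_fundamental_property(Z4, mod4))     # False: Z_4 has exponent 4, so L_0(Z_4 K_3) fails
```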
The geometry of a narrow projective rectangle is determined by the isotopy type of its quasigroup. Thus, the finite such rectangles are obtained from a finite Pappian projective plane of 2-power order by the subplane construction of Section <ref> using a Fano subplane.
§ ORTHOGONAL ARRAYS FROM PROJECTIVE RECTANGLES
A transversal design is a partition of a set _T of m(n-1) points into m special sets of size n-1 together with a family of m-subsets of _T such that each such m-set intersects each special set exactly once and each pair of points not contained in a special set lies in exactly one m-set. A projective rectangle with D deleted is exactly a transversal design with the extra partial Pasch property Axiom (A<ref>). A dual concept to transversal designs is that of orthogonal arrays; the corresponding dual to projective rectangles is orthogonal arrays with a dual property to (A<ref>). We explore that dual concept in this section.[We thank Douglas Stinson for drawing our attention to transversal designs.]
An orthogonal array (OA) is a generalization of orthogonal latin squares. We adopt the notation for orthogonal arrays used in <cit.>.
An N× k array A with entries from S (a set of size s) is said to be an orthogonal array, OA_λ(N,k,s,t), with s symbols, strength 0≤ t ≤ k,
and index λ if every N× t subarray of A contains each t-tuple based on S exactly λ times as a row. We write a(r,c) for the label that appears in row r and column c.
§.§ An orthogonal array from points and lines
We represent a projective rectangle as an orthogonal array of points and lines. In ∖ D we have m special lines partitioning all the points, and (n-1)^2 ordinary lines.
By Theorem <ref>, every ordinary line intersects every special line exactly once and every pair of points in different special lines lie in exactly one ordinary line.
Each ordinary line will give a row of the orthogonal array and each special line will give a column. We label the points in each special line by the numbers 1,…,n-1 and we write a(p) for the label of the point p. The entries in a row are the labels of the points that appear in that ordinary line, arranged in the column of the special line that contains the point. Thus, each pair of labels appears once in each pair of columns.
That is a 2-(n-1,m,1) orthogonal array in standard notation.
In the notation used in <cit.>, it is an OA_1((n-1)^2, m, n-1,2).
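For the narrow rectangle L_2^2 this construction can be carried out and verified in a few lines; the following self-contained sketch uses an arbitrary labelling of the points, as in the text.

```python
from itertools import product, combinations

# Rebuild the special and ordinary lines of L_2^2 (group (Z_2)^2); D is omitted from
# the special lines because it never lies on an ordinary line.
G = list(product([0, 1], repeat=2))
xor = lambda g, h: tuple(x ^ y for x, y in zip(g, h))
special = [frozenset((t, g) for g in G) for t in "abc"]
ordinary = [frozenset({('a', g), ('b', xor(g, h)), ('c', h)}) for g in G for h in G]

# Each ordinary line gives a row, each special line a column; the entry is the
# label (1, ..., n-1) of the unique point of the line on that special line.
labels = {p: i + 1 for s in special for i, p in enumerate(sorted(s))}
OA = [[labels[next(iter(l & c))] for c in special] for l in ordinary]

# Verify that OA is an OA_1((n-1)^2, m, n-1, 2) = OA_1(16, 3, 4, 2).
for c1, c2 in combinations(range(3), 2):
    pairs = [(row[c1], row[c2]) for row in OA]
    assert len(pairs) == len(set(pairs)) == 16
print("OA_1(16, 3, 4, 2) verified")
```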
We formulate a special property for an orthogonal array of type OA_1((n-1)^2, m, n-1,2).
(OA6) If four rows in the orthogonal array appear like the first five columns c_ij in this table,
        c_12   c_13   c_24   c_14   c_23   c_34
 r_1    a_12   a_13          a_14
 r_2    a_12          a_24          a_23
 r_3           a_13                 a_23   a_34
 r_4                  a_24   a_14          a_34
where it is possible that c_13=c_24 or c_14=c_23, then there is a sixth column that appears like c_34.
(The empty cells are arbitrary.)
The property (OA6) does not follow from the definition of an orthogonal array. We are not aware that it has been considered in the theory of orthogonal arrays or dually in transversal designs. Its contrary, that the sixth column of (OA6) never appears, arises (in the language of transversal designs) as the “anti-Pasch configuration” in <cit.> (whose “Pasch configuration” is slightly stricter than ours).[We are very grateful to Charles Colbourn for hunting in the literature and communicating these facts.]
Let n≥ m ≥ 3.
* A projective rectangle of order (m,n) gives rise to an orthogonal array OA_1((n-1)^2, m, n-1,2) with property (OA6).
* An orthogonal array OA_1((n-1)^2, m, n-1,2) gives rise to a projective rectangle of order (m,n) if, and only if, it satisfies the additional property (OA6).
Proof of Part (i). We have shown that gives rise to an orthogonal array with the stated parameters.
Conversely, suppose we have an OA_1((n-1)^2, m, n-1,2). Let C be the set of m columns, let R be the set of rows, let L be the set of n-1 labels in the array, and write a(r,c) for the entry in row r, column c. We form an incidence structure whose point set is (C× L) ∪ D. The lines of this structure are special lines, of the form s_c = {(c,a) : a ∈ L }∪ D, for each c∈ C, and ordinary lines, of the form l_r = {(c,a) : c ∈ C and a= a(r,c) }, for each r∈ R.
We prove this incidence structure satisfies Axioms (A<ref>)–(A<ref>) of a projective rectangle. We assumed n-1≥ m-1≥2 so in the orthogonal array there are at least two distinct labels, which we call a_1 and a_2, and at least 3 columns, of which three are c_1,c_2,c_3. There are also at least 2^3 rows.
Proof of Axiom (A<ref>). We consider two points p_1=(r_1,a_1) and p_2=(r_2,a_2) where a_1=a(r_1,c_1) and a_2=a(r_2,c_2).
The points belong to the same special line if and only if c_1=c_2. The special line is s_c_1. Otherwise, there is exactly one row r where the entry in column c_1 is a_1 and the entry in column c_2 is a_2. Then p_1 and p_2 belong to the ordinary line l_r.
Proof of Axiom (A<ref>). Among the three pairs a(r_1,c_j), a(r_2,c_j) for j=1,2,3, only one can be the same label, a(r_1,c_j) = a(r_2,c_j), because each ordered pair of labels appears only once in the same two columns. Say a(r_1,c_1) ≠ a(r_2,c_1) and a(r_1,c_2) ≠ a(r_2,c_2). Then (c_1,a(r_1,c_1)), (c_1,a(r_2,c_1)), (c_2,a(r_1,c_2)), (c_2,a(r_2,c_2)) are four points, no three collinear.
Proof of Axiom (A<ref>). The special line s_c contains at least the three points D, (c,a_1), (c,a_2). The ordinary line l_r contains the points (c_1,a(r,c_1)), (c_2,a(r,c_2)), (c_3,a(r,c_3)).
Proof of Axiom (A<ref>). This follows by the definition of the incidence structure.
Proof of Axiom (A<ref>). Two special lines intersect only in D. A special line s_c and an ordinary line l_r intersect only in the point (c,a(r,c)).
Proof of Part (ii). We assume an orthogonal array is constructed from . Property (OA6) is the interpretation of Axiom (A<ref>) for an OA_1((n-1)^2, m, n-1,2). In Axiom (A<ref>) let l_3 and l_4 be the two lines besides l_1 and l_2. The assumption in the axiom is that points p_ij = l_i ∩ l_j exist for (i,j) = (1,2),(1,3),(2,4),(1,4),(2,3). Let s_ij be the special line that contains p_ij; we note that the special lines are distinct except that s_13 may be the same as s_24 and s_14 may be the same as s_23. In the orthogonal array derived from , the row of line l_i is r_i, the column of line s_ij is c_ij, and the label of p_ij is a(r_i,c_ij)=a(r_j,c_ij). Therefore, the array looks as in Property (OA6), except for the last column.
The conclusion of Axiom (A<ref>) is that there is a point p_34 that is incident with both lines l_3 and l_4. That translates to the existence of a final column as in (OA6) with a_34 = a(p_34). Hence, Property (OA6) is satisfied by the array derived from the projective rectangle .
Conversely, we prove Axiom (A<ref>) from Property (OA6). Let r_1, r_2 be the rows of the array that correspond to the lines l_1, l_2 in this axiom and let l_3,l_4 be the two other lines with corresponding rows r_3,r_4. The hypotheses of intersection imply that the diagram in Property (OA6) is satisfied, possibly except for the last column. By the assumption of Property (OA6), the final column does exist. This implies that l_3∩ l_4 is the point p_34 in the special line s_34 that corresponds to column c_34 and has the label a(p_34) = a_34. Therefore, the conclusion of Axiom (A<ref>) is satisfied.
§.§ An orthogonal array from points and planes
Ryser gives a nice construction of an orthogonal array from a projective plane <cit.>.
We extend Ryser's ideas to construct an orthogonal array from points and planes of a projective rectangle by partitioning the ordinary points outside a given ordinary line by means of the separate planes that contain that line.
The proof is based on the proof that Ryser gives for projective planes, adapted to the existence of multiple planes.
Let l be an ordinary line in a finite . The family of sets π∖ (l∪ D) for all planes π that contain l is a partition of the points in ∖ (l ∪ D) into (n-2)/(m-2) parts of m(m-2) points each.
We observe that every plane in containing l also contains the special point D.
If p∉l ∪ D, then by Corollary <ref> there is a unique plane on l that contains p; thus, the planes on l partition the points in ∖ (l ∪ D). The number of such planes is given by Theorem <ref> Part (<ref>).
The number of parts of the resulting partition equals the number of planes that contain the line l.
Suppose that (m,n) is the order of the projective rectangle .
Let l ∈ be an ordinary line and let π_1, π_2, …, π_w be all the planes in that contain l, where w=(n-2)/(m-2). Then
gives rise to an orthogonal array of the form OA_w(w(m-1)^2, m, m-1,2).
Let p_1, p_2, …, p_m be the points of l. We label the points in π_i∖ l by q_1^i, q_2^i, …, q_k^i where k=(m-1)^2
(D is one of these points) and label the lines on p_r in π_i∖ l with 1, 2, …, m-1 for each r=1,2, …, m.
We write a_st^i to record the label of the line q_s^ip_t∈π_i.
We claim that the matrix A_i=[a_st^i]_s,t is an orthogonal array of the form OA_1((m-1)^2,m,m-1,2).
We prove this by contradiction. Suppose that there are two ordered pairs in the rows of A_i that are equal; that is, (a_s_1t_1^i,a_s_1t_2^i) =(a_s_2t_1^i,a_s_2t_2^i) with s_1 ≠ s_2.
Therefore, a_s_1t_1^i=a_s_2t_1^i and a_s_1t_2^i =a_s_2t_2^i. The equality of these labels implies that the points q_s_1^i, q_s_2^i, and p_t_1^i
are collinear and that q_s_1^i, q_s_2^i, and p_t_2^i are also collinear. Thus, each p_t_j^i is the unique point of l on the same line q_s_1^i q_s_2^i. Therefore, p_t_1^i = p_t_2^i, but that is impossible because t_1 ≠ t_2.
Now let B=[ A_1; A_2; ⋮; A_w ]. This matrix is an orthogonal array of the form OA_λ(w(m-1)^2,m,m-1,2) where λ = ∑_i=1^w 1 = w. That completes the proof.
We give an example for Theorem <ref> using the projective rectangle L_2^2 depicted in Figure <ref>. For the sake of simplicity we pick the
line l={a_1, b_1,c_1}. We recall that for an ordinary line in L_2^2, there are exactly λ=3 planes having that line in common.
Figure <ref> shows the three planes embedded in L_2^2 with l as common line.
For the first plane, let's say π_1, we distinguish the points
a_1, a_g, b_1, b_g, c_1, c_g and D_1:=D. For a fixed point in l there are two lines in π_1∖ l passing through the fixed point; from the set {1,2} we assign labels to these lines.
For the lines {a_1,a_g,D_1} and {a_1,b_g,c_g}, which intersect l at a_1, we assign 1 and 2 to them, respectively. We arbitrarily assign 1 and 2 to
{b_1,b_g,D_1} and {a_g,b_1,c_g}, respectively, and also to {a_g,b_g,c_1} and {c_g,c_1,D_1}.
With these labels we construct the first four rows of the rectangular array in Table <ref>.
The columns of the array are labeled on top with the points in the line
l and the rows are labeled on the left with the points in each plane that are not in l.
In this case the first four rows are
labeled with the points in π_1∖ l. The entries of the rectangular array are the labels of the lines passing through the point in the column label and the point
in the row label. For instance, the first entry of the first row in Table <ref> is 1, because the line passing through a_1 and a_g has label 1.
The first entry of the fourth row is 1, because the line passing through a_1 and D has label 1.
The second plane in Figure <ref>, π_2, has the points a_1, a_h, b_1, b_h, c_1, c_h and D_2:=D.
As in π_1, we assign arbitrary labels from {1,2}. We choose 1 to be
the label of {a_1,b_h,c_h}, {a_h,b_1,c_h}, and {c_1,c_h,D_2} and 2 as the label of {a_1,a_h,D_2}, {b_1,b_h,D_2}, and {a_h,b_h,c_1}.
For the third plane in Figure <ref>, π_3 with points a_1, a_gh, b_1, b_gh, c_1, c_gh and D_3:=D, we also assign arbitrary labels from {1,2}. So, for example, 1 will be
the label of {a_1,a_gh,D_3}, {a_gh,b_1,c_gh}, and {a_gh,b_gh,c_1} and 2 will be the label of {a_1,b_gh,c_gh}, {b_1,b_gh,D_3}, and {c_1,c_gh,D_3}.
These give the orthogonal array OA_3(12,3,2,2). This is a 12 × 3 array filled with 2 symbols, such that in any 2 columns there are 4 different ordered pairs, each repeated λ=3 times.
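The example can be reproduced mechanically. The sketch below rebuilds the three planes of L_2^2 through l, stacks the three 4×3 arrays and checks the index; our label assignment is a different arbitrary choice from the one above (any choice works), so individual entries may differ from Table <ref> while the orthogonal-array property is the same.

```python
from itertools import product, combinations

one, D = (0, 0), ('D',)
layers = [(0, 1), (1, 0), (1, 1)]               # the elements g, h, gh of (Z_2)^2

def lines_of_plane(x):
    """The seven lines of the plane of L_2^2 spanned by l = {a_1, b_1, c_1} and layer x."""
    a, b, c = ('a', one), ('b', one), ('c', one)
    ax, bx, cx = ('a', x), ('b', x), ('c', x)
    return [frozenset(s) for s in (
        {a, b, c},                      # the common line l
        {a, ax, D}, {a, bx, cx},        # the two further lines through a_1
        {b, bx, D}, {ax, b, cx},        # ... through b_1
        {c, cx, D}, {ax, bx, c},        # ... through c_1
    )]

l_pts = [('a', one), ('b', one), ('c', one)]
rows = []
for x in layers:                                 # the three planes pi_1, pi_2, pi_3
    lines = lines_of_plane(x)[1:]                # drop l itself
    label = {}                                   # label the two lines through each point of l
    for p in l_pts:
        for j, ln in enumerate([ln for ln in lines if p in ln]):
            label[ln] = j + 1
    for q in [('a', x), ('b', x), ('c', x), D]:  # one row per point of the plane not on l
        rows.append([label[next(ln for ln in lines if p in ln and q in ln)]
                     for p in l_pts])

# rows is a 12 x 3 array over {1, 2}: every ordered pair occurs 3 times in any two columns.
for c1, c2 in combinations(range(3), 2):
    for pair in product([1, 2], repeat=2):
        assert sum((r[c1], r[c2]) == pair for r in rows) == 3
print("OA_3(12, 3, 2, 2) verified")
```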
§ THE DUAL INCIDENCE STRUCTURE
The dual structure is obtained by interchanging the roles of points and lines. It is interesting in its own right, as it connects projective rectangles with incidence geometry in a different way. The dual is essentially a net with a complete quadrangle property. Being a dual projective rectangle, it contains all the dual projective planes of the planes of the original projective rectangle.
A net is an incidence structure (,,ℐ) which consists of a set of points and a set of parallel classes _i (i ∈ an index set) of lines, such that each line is a set of points, every point belongs to exactly one line of each parallel class, and any two lines of different parallel classes have exactly one point in common. The theory of nets is extensive. It is easy to prove that every parallel class has the same number of lines and that the number of points on every line is the same.
We call these points and lines ordinary. By adding a special point for each parallel class, which is defined to belong to all lines of that class and no other ordinary lines, and adding one special line that contains all the special points, we get a projectively extended net. (“Projectively” refers to the existence of the special line.)
Two points might not be in any common line. They are called collinear if they are in a line. They cannot be in more than one line.
A complete quadrangle in a net consists of 4 points, no three collinear, and 6 lines determined by them. A nearly complete quadrangle consists of the same 4 points and 5 of the 6 lines, the 6th line possibly existing or not existing.
The dual of Axiom (A<ref>) is
(A<ref>*) (Complete Quadrangle Property) Every nearly complete quadrangle is complete.
A projective extension of a net has the complete quadrangle property if and only if the unextended net has it.
Assume a net has the complete quadrangle property and consider the cases in its extension that are not in itself. If P_1' and P_2' are special points, they are already collinear. Suppose only P_1' is special: then it is in every line of some parallel class, and that class includes a line that contains P_2'.
The dual of a projective rectangle is a projective extension of a net that has the complete quadrangle property, at least three parallel classes, and at least 2 lines in each parallel class, and vice versa.
We dualize the rectangle axioms and consider how they apply to the net.
* Every two distinct lines contain exactly one point in common.
This is true by definition if one of the lines is the special line. It is valid in the net except when the lines are parallel. Parallel lines have a common point in the extension.
* There exist four lines in the extended net with no three of them concurrent.
Take the special line, three special points, and one ordinary line on each of the special points. If the three ordinary lines are concurrent, replace one of them by a parallel line.
Or, take two lines from each of two parallel classes.
* Every point is in at least three distinct lines.
This is equivalent for an ordinary point to the existence of at least 3 parallel classes and for a special point to the existence of a parallel to each ordinary line.
* There is a special line D.
(A point incident with D is called special. A point that is not in D and a line that is not D are called ordinary.)
This is part of the definition of a projectively extended net.
* Each special point belongs to exactly one line with each other point.
This is part of the definition of a projectively extended net.
* If two ordinary points P_1 and P_2 are collinear, then any two other points that are collinear with P_1 and P_2 through four distinct lines (i.e., there are four distinct lines P_iP_j' for i,j=1,2), are themselves collinear.
It is clear that Axiom (A*<ref>) is the complete quadrangle property for the extended net, excluding the case where P_1 or P_2 is special. Lemma <ref> says that the two formulations are actually equivalent.
§ OPEN PROBLEMS
Our work on nontrivial projective rectangles leaves many unanswered questions. Here are some to add those in the body of the paper.
* All our examples of projective rectangles are substructures of Pappian projective planes that can be obtained by the subplane construction. Are there other examples?
* We are ignorant of how a special line compares in its intersections with two planes π and π'. Two questions stand out.
* If a plane π has an ordinary line l, there are many other planes in which l is a line. However, if l is special, i.e., l = s ∩π for a special line s, we have no idea whether even one other plane has l as a line.
* We do not know whether there may be another plane π' such that s ∩π∩π' has a specific cardinality (not greater than m), what the possible values of |s ∩π∩π'| may be, whether 0 is a possible value in every nontrivial (aside from L_2^2, where it is not), or in the infinite case whether it is even possible that s ∩π' may properly contain s ∩π.
* We proved the subplane construction of Section <ref> only for Pappian planes, coordinatizable by a field.
* Is there an analytic proof for skew fields?
* Does an analytic proof using alternative algebras succeed in planes with weaker coordinate algebras such as near fields and alternative algebras?
* Is there a synthetic proof for Pappian or Desarguesian or other projective planes?
* Does the construction exist in non-Desarguesian, or non-Moufang, planes?
* Are all planes in a projective rectangle isomorphic? We were unable to find a proof or a counterexample.
* What do the partial Desargues's theorems in Section <ref> imply about automorphisms and coordinatizations?
* Is there a rigorous sense in which a projective rectangle is higher-dimensional, as suggested in Section <ref> and <cit.>?
* If every plane in is Moufang, it has coordinates in an alternative ring. If all such rings are isomorphic, does extend to a Moufang plane with an alternative ring that extends that of the planes in ?
* Given a projective rectangle, in what projective planes can it be embedded? In particular, our constructions by subplanes and harmonic extension give projective rectangles embedded in a Pappian plane but the same rectangles may possibly be isomorphically embeddable in planes that are not Pappian, not Desarguesian, maybe not even Moufang, in a nontrivial way, i.e., not by finding the Pappian plane as a subplane of a non-Pappian plane.
99
dk
J. Dénes and A. D. Keedwell, Latin Squares and Their Applications.
Academic Press, New York–London, 1974.
dls Jeff H. Dinitz, Alan C. H. Ling, and Douglas R. Stinson, Perfect hash families from transversal designs. Australas. J. Combin. 37 (2007), 233–242.
rfhc Rigoberto Flórez, Harmonic conjugation in harmonic matroids.
Discrete Math. 309 (2009), 2365–2372.
bgpp
Rigoberto Flórez and Thomas Zaslavsky, Projective planarity of matroids of 3-nets and biased graphs.
Australasian J. Combin. 77(2) (2020), 299–338.
pr2
Rigoberto Flórez and Thomas Zaslavsky, Projective rectangles: Incidence graphs and higher structure. In preparation.
pr3
Rigoberto Flórez and Thomas Zaslavsky, Projective rectangles: Harmonic conjugation. In preparation.
Hedayat A. S. Hedayat, N. J. A. Sloane, and J. Stufken,
Orthogonal Arrays, Theory and Applications.
Springer-Verlag, New York, 1999.
HP
Daniel R. Hughes and Fred C. Piper, Projective Planes.
Grad. Texts in Math., Vol. 6. Springer-Verlag, New York, 1973.
MR 48 #12278. Zbl 267.50018.
ldt
Bernt Lindström, A Desarguesian theorem for algebraic combinatorial geometries.
Combinatorica 5 (1985), no. 3, 237–239.
lhc
Bernt Lindström, On harmonic conjugates in full algebraic combinatorial geometries.
Europ. J. Combin. 7 (1986), 259–262.
Ryser H. J. Ryser, Combinatorial Mathematics.
Carus Math. Monographs, No. 14.
Math. Assoc. Amer., New York, 1963.
vw J. H. van Lint and R. M. Wilson, A Course in Combinatorics. Second ed. Cambridge University Press, Cambridge, Eng., 2001.
b1 Thomas Zaslavsky,
Biased graphs. I. Bias, balance, and gains.
J. Combin. Theory Ser. B 47 (1989), 32–52.
b2
Thomas Zaslavsky, Biased graphs. II. The three matroids.
J. Combin. Theory Ser. B 51 (1991), 46–72.
|
http://arxiv.org/abs/2307.07208v1 | 20230714075537 | Signatures of Quantum Chaos and fermionization in the incoherent transport of bosonic carriers in the Bose-Hubbard chain | [
"P. S. Muraev",
"D. N. Maksimov",
"A. R. Kolovsky"
] | quant-ph | [
"quant-ph"
] |
Kirensky Institute of Physics, Federal Research Centre KSC SB RAS, 660036, Krasnoyarsk, Russia
School of Engineering Physics and Radio Electronics, Siberian Federal University, 660041, Krasnoyarsk, Russia
IRC SQC, Siberian Federal University, 660041, Krasnoyarsk, Russia
Kirensky Institute of Physics, Federal Research Centre KSC SB RAS, 660036, Krasnoyarsk, Russia
IRC SQC, Siberian Federal University, 660041, Krasnoyarsk, Russia
Kirensky Institute of Physics, Federal Research Centre KSC SB RAS, 660036, Krasnoyarsk, Russia
School of Engineering Physics and Radio Electronics, Siberian Federal University, 660041, Krasnoyarsk, Russia
We analyse the stationary current of Bose particles across the Bose-Hubbard chain connected to a battery, focusing on the effect of inter-particle interactions. It is shown that the current magnitude drastically decreases as the strength of inter-particle interactions exceeds the critical value which marks the transition to quantum chaos in the Bose-Hubbard Hamiltonian. We found that this transition is well reflected in the non-equilibrium many-body density matrix of the system. Namely, the level-spacing distribution for eigenvalues of the density matrix changes from Poisson to Wigner-Dyson distributions. With the further increase of the interaction strength, the Wigner-Dyson spectrum statistics changes back to the Poisson statistics which now marks fermionization of the bosonic particles. With respect to the stationary current, this leads to the counter-intuitive dependence of the current magnitude on the particle number.
Signatures of Quantum Chaos and fermionization in the incoherent transport of bosonic carriers in the Bose-Hubbard chain
A. R. Kolovsky
August 12, 2023
========================================================================================================================
1. Recently, we have seen a surge of interest in quantum transport in one-dimensional systems coupled at their edges to particle reservoirs. Following <cit.> we refer to these systems as boundary driven systems. In particular, one very important example of a boundary driven system is the so-called open Bose-Hubbard (BH) model or BH chain <cit.>. This system can be realised experimentally by using different physical platforms including superconducting circuits <cit.>, photonic crystals <cit.>, and cold Bose atoms in optical lattices <cit.>. The central question to be addressed with the open BH chain, both theoretically and experimentally, is the stationary current of Bose particles across the chain and its dependence on the strength of inter-particle interactions. It is known that properties of the closed/conservative BH system crucially depend on the ratio of the hopping matrix element J and the interaction constant U, which are two of the four parameters of the BH Hamiltonian, the others being the chain length L and the particle number N. For example, for integer N/L the ground state of the system shows the quantum phase transition between the super-fluid state for J≫ U and the Mott-insulator state in the opposite limit <cit.>. As for the excited states, they show a qualitative change from the regular to the chaotic <cit.>. Thus, one may expect that the stationary current in the open BH chain may also crucially depend on the interaction constant.
Up to now the above problem has been analysed only by using the pseudoclassical approach which approximates the quantum dynamics by that of the classical counterpart of the BH model <cit.>. This is because the exact quantum treatment, both numerical and analytic, is complicated by the fact that the open BH model does not conserve the particle number. Thus, the system dynamics takes place in the extended Hilbert space spanned by the direct sum of the subspaces with a given number of particles. Taking into account that the transition to quantum chaos in the closed BH model depends not only on the ratio U/J but also on the ratio N/L, it is very problematic to track this transition in the transport properties of the open BH model.
In the present work we introduce a boundary driven BH model which shows a behaviour similar to the standard open BH model but conserves the number of particles. This allows us to separate the dependencies on the interaction constant U and the particle number N, thus, relating the obtained results to the transition to chaos in the conservative BH model.
2. We consider the BH chain of length L with incoherent coupling between the first and the L-th sites. The coupling is described by the following Lindblad operators,
L_1(R) = V^†V R + R V^†V - 2 V R V^† ,
L_2(R) = V V^† R + R V V^† - 2 V^† R V ,
where V=â_1^†â_L. Thus, the master equation for the carriers density matrix R reads
∂R/∂ t=-i[H, R] -Γ_1 L_1(R) - Γ_2 L_2(R) ,
where
H = -(J/2)∑^L-1_ℓ=1(â^†_ℓ+1â_ℓ + h.c.) + (U/2)∑^L_ℓ=1n̂_ℓ(n̂_ℓ-1)
is the Bose-Hubbard Hamiltonian. It is easy to see that the Lindblad operator L_1(R) induces incoherent transport of the carriers from the last site to the first site, while the operator L_2(R) is responsible for incoherent transport in the reverse direction. If the rates Γ_1 ≠ Γ_2, there is a non-zero current in the clockwise or counterclockwise direction, depending on the inequality relationship between the two relaxation constants.
We notice that, by an analogy with electronic devices, the introduced Lindblad operators mimic the effect of a battery which induces direct current in the electric circuits.
3. We are interested in the stationary current I= Tr[IR] where R=R(t→∞) is now the steady-state density matrix and I is the current operator,
I= (J/2i)∑^L-1_ℓ=1(â^†_ℓ+1â_ℓ - h.c.) .
First of all, we notice that if Γ_1 = Γ_2 the steady-state density matrix is proportional to the identity matrix, namely, R=1/ N, where N is the dimension of the Hilbert space. In what follows we focus on the liner response regime where Γ_1=Γ +ΔΓ/2, Γ_2=Γ -ΔΓ/2, and ΔΓ≪Γ. Thus, we have
R=1/ N+ΔΓR̃ ,
where Tr[R̃]=0. Substituting the Ansatz (<ref>) into the master equation we obtain
-i[H,R̃] - Γ[ L_1(R̃) + L_2(R̃)]
-2(n̂_L-n̂_1)/ N = O(ΔΓ) .
In the limit ΔΓ→ 0 Eq. (<ref>) transforms into an algebraic equation for the elements of the unknown matrix R̃. In our numerical approach, however, we do not solve this algebraic equation but evolve the density matrix R(t)
and use Eq. (<ref>) to check that we reached the true steady state. We found this method to be more efficient than the straightforward solution of the algebraic equation.
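A minimal numerical sketch of this procedure, written with QuTiP and toy parameters of our own choosing (L=4, N=2, Γ=0.1, ΔΓ=0.02 in units of J), is given below. Note that the operators L_1 and L_2 above equal minus twice the standard Lindblad dissipators D[V] and D[V^†], so the corresponding jump operators carry the factors √(2Γ_1) and √(2Γ_2).

```python
import numpy as np
import qutip as qt

L, N = 4, 2                     # chain length and particle number (toy values)
J, U = 1.0, 0.5
Gamma, dGamma = 0.1, 0.02       # Gamma_{1,2} = Gamma +/- dGamma/2

cutoff = N + 1                  # local Fock space 0..N is exact since the particle number is conserved
a = [qt.tensor(*[qt.destroy(cutoff) if j == l else qt.qeye(cutoff) for j in range(L)])
     for l in range(L)]
n = [op.dag() * op for op in a]

H = sum(-J / 2 * (a[l + 1].dag() * a[l] + a[l].dag() * a[l + 1]) for l in range(L - 1))
H += sum(U / 2 * n[l] * (n[l] - 1) for l in range(L))

V = a[0].dag() * a[L - 1]       # incoherent transfer from site L to site 1
c_ops = [np.sqrt(2 * (Gamma + dGamma / 2)) * V,
         np.sqrt(2 * (Gamma - dGamma / 2)) * V.dag()]

I_op = sum(J / 2j * (a[l + 1].dag() * a[l] - a[l].dag() * a[l + 1]) for l in range(L - 1))

# start with all N particles on the first site and propagate to (quasi-)stationarity
psi0 = qt.tensor(*[qt.basis(cutoff, N if l == 0 else 0) for l in range(L)])
times = np.linspace(0, 400, 200)
result = qt.mesolve(H, psi0, times, c_ops=c_ops, e_ops=[I_op])
print("stationary current ~", result.expect[0][-1])
```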
Since our primary goal is the stationary current across the chain, we shall analyse the matrix R in the basis of the eigenstates of the current operator,
I=∑_j=1^ Nσ_j |Φ_j⟩⟨Φ_j| .
Two examples of the matrix R in this basis are given in Fig. <ref> for U=0, left panel, and U=J, right panel. A qualitative difference between these two cases is clearly visible from the plot. In the next paragraph we quantify this difference. We conclude the present paragraph by displaying the commutation relation between the current operator and the BH Hamiltonian for U=0,
-i[H,I] - (n̂_L-n̂_1)/2=0 ,
which we shall use later on.
4. Knowing that the BH Hamiltonian (<ref>) exhibits transition to quantum chaos as U is increased, we expect a similar transition for the non-equilibrium density matrix. This expectation is supported by the visual analysis of the matrices depicted in Fig. <ref> and results of the relevant studies on the boundary driven spin chains <cit.>. Following Ref. <cit.> we consider the spectrum and eigenstates of the non-equilibrium density matrix,
R=∑_j=1^ Nλ_j |Ψ_j⟩⟨Ψ_j| .
In what follows we restrict ourselves by the case Γ≪ J. Then, if U=0, the states |Ψ_j⟩ practically coincide with the eigenstates of the current operator |Φ_j⟩, while the eigenvalues are related to each other as
λ_j ≈σ_j / 4 N .
To see that let us scale the density matrix R→ 4 NR and set ΔΓ=0 in Eq. (<ref>). Then the obtained algebraic equation differs from the commutation relation Eq. (<ref>) by a small term ∼Γ which can be taken into account perturbatively. As expected, application of the perturbation theory removes degeneracies of the eigenvalues of the current operator, see the lower inset in Fig. <ref>.
The magenta staircase curve in Fig. <ref> depicts the case U≠0. Here we see that the width of the spectrum increases as U is increased. The main difference is, however, in the spectrum statistics. Figure <ref> shows the distribution of the scaled spacings s=(λ_j+1-λ_j) f(λ_j), f(λ) being the mean density of states, as compared to the Poisson and Wigner-Dyson distributions <cit.>. For U=J an excellent agreement with the Wigner-Dyson distribution, which is the hallmark of quantum chaos <cit.>, is noticed.
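The spacing analysis itself is standard. A sketch of one common implementation (unfolding the spectrum by a polynomial fit of the integrated density of states, which is our choice and not necessarily the one used for Fig. <ref>) is given below, exercised on a random GOE matrix as a stand-in for the eigenvalues λ_j.

```python
import numpy as np

def scaled_spacings(levels, fit_order=9):
    """Unfold a spectrum so the mean level spacing is 1, then return nearest-neighbour spacings."""
    lam = np.sort(np.asarray(levels))
    staircase = np.arange(1, lam.size + 1)                    # integrated density of states
    smooth = np.polynomial.Polynomial.fit(lam, staircase, fit_order)
    return np.diff(smooth(lam))

poisson = lambda s: np.exp(-s)                                # regular spectra
wigner_dyson = lambda s: (np.pi * s / 2) * np.exp(-np.pi * s**2 / 4)   # quantum chaos (GOE)

# usage: a random GOE matrix as a stand-in for the spectrum to be analysed
rng = np.random.default_rng(0)
A = rng.normal(size=(500, 500))
s = scaled_spacings(np.linalg.eigvalsh((A + A.T) / 2))
print(s.mean())                                               # close to 1 after unfolding
print(np.mean(s < 0.25))                                      # level repulsion: few small spacings
```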
5. Next we analyse the stationary current I across the chain,
I = Tr[IR] =∑_j=1^ Nλ_j ⟨Ψ_j | I |Ψ_j⟩≡∑_j=1^ Nλ_j I_j .
In the case of vanishing inter-particle interactions one derives by using Eq. (<ref>) the following semi-analytic equation,
I = 4J N^2 ΔΓ∫_0^1 σ^2(x) dx ,
where x=j/ N and σ(x) is the inverse function to the integrated density of states of the current operator, which interpolates the blue and red lines in Fig. <ref>. Thus, as it is intuitively expected, for U=0 the total current increases with the number of particles in the system.
The case U≠0 is more subtle. Figure <ref> shows the numerically obtained dependence of the stationary current on the interaction constant U for L=6 and different numbers of particles N. One clearly identifies in this figure the critical U_cr=U_cr(n̅ ), n̅=N/L, above which the current drastically decreases. This critical interaction marks the crossover from the Poisson to Wigner-Dyson spectrum statistics for the non-equilibrium density matrix R. An unexpected result is that for U≫ U_cr the current decreases with the number of particles. Furthermore, we find that for these large U the spectrum statistics is again Poissonian.
6. We relate the observed change of the spectrum statistics and the counter-intuitive dependence of the current on the particle number to the interaction-induced localization of the eigenstates |Ψ_j⟩ and the fermionization of the strongly interacting Bose particles <cit.>. Indeed, the obvious consequence of the eigenstate localization is that the mean I_j=⟨Ψ_j | I |Ψ_j⟩ tends to zero and is strictly zero if all bosons occupy a single site of the chain. Figure <ref> shows the quantities I_j for (L,N)=(6,3) and U=0,0.5,10. It is seen that the fraction of the delocalized states which support the current decreases in favor of the localised states for which I_j≈0. For example, in Fig. <ref> (c) the states corresponding to the minimal and maximal eigenvalues are the Fock states |0,0,0,0,0,3⟩ and |3,0,0,0,0,0⟩, respectively. Along with the localised and partially localised states one can see in Fig. <ref> (c) a number of delocalized states. A closer inspection of these states shows that they are a superposition of the Fock states where the occupation numbers of the chain sites are either zero or unity. Since this subspace of the Hilbert space is the Hilbert space of the hard-core bosons, we conclude that the residual conductivity of the system at large U is mainly due to the hard-core bosons. As is known, the spectral and transport properties of the hard-core bosons are similar to those of non-interacting fermions and, hence, they can support ballistic transport for arbitrarily large U if N/L<1.
We also mention that the appearance of the localised states and the integrable `hard-core boson states' is consistent with the observed change of the level-spacing distribution from the Wigner-Dyson distribution back to the Poisson distribution in the limit of large U.
7. In summary, we introduced a model for quantum transport of Bose particles across the Bose-Hubbard chain which conserves the number of particles in the chain. Similar to the standard transport model, where the Bose-Hubbard chain connects two particle reservoirs with different chemical potentials and where the number of particles is not conserved, the introduced model shows different transport regimes depending on the ratio between the tunnelling and interaction constants in the Bose-Hubbard Hamiltonian (<ref>). Namely, for U∼ J the stationary current of the Bose particles is drastically suppressed as compared to the case U=0. In our previous publication <cit.> we explained this effect by the transition to chaotic dynamics of the classical counterpart of the system.
In this work we use a genuine quantum approach where the object of interest is the non-equilibrium many-body density matrix of the bosonic carriers in the chain. To the best of our knowledge this is the first example where the non-equilibrium density matrix is calculated and analysed for a non-integrable bosonic system. (For transport properties of fermionic and spin systems we refer the reader to the recent review <cit.>.) We found that the spectrum of this matrix exhibits a transition from a regular spectrum for U ≪ J, which obeys the Poisson statistics, to an irregular spectrum for U∼ J, which obeys the Wigner-Dyson statistics. In this sense we confirm the conjecture of Ref. <cit.> that the drastic reduction of the current for U∼ J is due to the transition to quantum chaos.
Within the framework of the introduced model we also observed a new effect, – the residual conductivity due to fermionization of the Bose particles. We notice that this is a pure quantum effect which cannot be addressed by using the classical (mean-field) or pseudoclassical (truncated Wigner function) approaches.
Acknowledgment.
This work has been supported by Russian Science Foundation through Grant No. N19-12-00167.
|
http://arxiv.org/abs/2307.03928v1 | 20230708080247 | Bounding data reconstruction attacks with the hypothesis testing interpretation of differential privacy | [
"Georgios Kaissis",
"Jamie Hayes",
"Alexander Ziller",
"Daniel Rueckert"
] | cs.CR | [
"cs.CR",
"cs.AI"
] |
Bounding data reconstruction attacks with the hypothesis testing interpretation of differential privacy
Georgios Kaissis Jamie Hayes Alexander Ziller Daniel Rueckert
August 12, 2023
===========================================================================================================================================================================================================================================================
We explore Reconstruction Robustness (ReRo), which was recently proposed as an upper bound on the success of data reconstruction attacks against machine learning models.
Previous research has demonstrated that differential privacy (DP) mechanisms also provide ReRo, but so far, only asymptotic Monte Carlo estimates of a tight ReRo bound have been shown.
Directly computable ReRo bounds for general DP mechanisms are thus desirable.
In this work, we establish a connection between hypothesis testing DP and ReRo and derive closed-form, analytic or numerical ReRo bounds for the Laplace and Gaussian mechanisms and their subsampled variants.
§ INTRODUCTION
In the rapidly advancing field of machine learning (ML), the importance of preserving privacy cannot be understated, particularly in critical tasks where privacy may be compromised through attacks on unprotected ML models.
Among these, membership inference (MI) poses a considerable risk <cit.>.
Here, an adversary attempts to determine whether a candidate record was part of the model's training database.
Differential privacy (DP) <cit.> plays a crucial role as a safeguard against privacy risks in ML.
Its guarantees can be interpreted in terms of the protection it offers against MI, a notion termed the hypothesis testing interpretation of DP <cit.>.
Broadly speaking, protecting against MI also serves to protect against all weaker forms of attack <cit.>.
For example, data reconstruction (DR) attacks <cit.>, where adversaries attempt to extract records from the model's weights or gradients <cit.>, are also prevented by DP mechanisms.
In fact, it can be shown that protecting against DR requires substantially less noise than protecting against MI <cit.>.
Recent works have proposed formal bounds tailored to DR.
For instance, Guo et al. <cit.> frame DR as a signal estimation problem and use the properties of the Fisher information matrix to lower-bound reconstruction error.
Moreover, Guo et al. <cit.> utilise Fano's inequality to bound the mutual information between the training data and the model's parameters.
Last but not least, Balle et al. <cit.> recently proposed Reconstruction Robustness (ReRo), which serves as a high-probability bound on successful DR.
Moreover, this work's authors prove a strong relationship between DP and ReRo in the sense that (Rényi-)DP <cit.> implies ReRo (and vice versa under some preconditions).
Very recently, Hayes et al. <cit.> strengthened the aforementioned results by circumventing the utilisation of Rényi-DP and bounding ReRo directly.
In this work, we expand upon the previous investigations on ReRo, which we regard as the most promising DR bound (as it both outperforms previous DR guarantees and is closely matched by the results of empirical DR attacks against ML models).
The aforementioned work by Hayes et al. <cit.> limits its purview to DP-SGD <cit.> and utilises a Monte Carlo (MC) technique to estimate the ReRo bound.
This MC bound only holds asymptotically and cannot be used efficiently in workflows involving large datasets.
Methods to directly obtain ReRo upper bounds for arbitrary datasets and mechanisms (e.g. also the Laplace mechanism and its subsampled variant), would thus be of value to practitioners.
Contributions
The contributions of our work are as follows:
(1) We extend the work of Hayes et al. by proposing ReRo bounds derived from the hypothesis testing interpretation of DP.
(2) We furnish closed-form bounds for the Gaussian and Laplace mechanisms and provide an analytic formulation for the Poisson-sampled Gaussian and Laplace mechanisms using an Edgeworth series.
Both techniques are very efficient in terms of memory and run time, even for very large datasets and across broad ranges of the mechanism parameters.
(3) We experimentally corroborate the accuracy of our bounds against a numerical ground truth, provide the first ReRo bounds for ImageNet-scale workflows and explain a finding by <cit.> regarding differences in ReRo bounds when DP-SGD parameters are varied at a fixed (ε, δ)-value.
Background
We assume familiarity with the fundamentals of DP and omit a detailed introduction due to space constraints.
In brief, we will focus on the global model of DP and the add/remove one adjacency relation between databases D and D'.
An extension to replacement adjacency is straightforward.
We will denote the deterministic query function (e.g. a single step of SGD outputting a gradient containing sensitive information) by q and its global sensitivity by Δ with an appropriate subscript to indicate the order of the norm it is measured in.
We will use ℳ for an (additive noise) mechanism, i.e. the Laplace mechanism (LM), Gaussian mechanism (GM) or their Poisson-subsampled variants (SLM and SGM).
For details on subsampling, we refer to <cit.>; in brief, to realise Poisson subsampling, each record in a database participates in the query with individual probability p.
In the hypothesis testing interpretation of DP, we presume that an adversary 𝒜 who has complete knowledge of D, D', q, and all specifications of ℳ observes a mechanism output y and must decide: ℋ_0: y ∼ℳ(D) vs. ℋ_1: y ∼ℳ(D').
ℋ_0 and ℋ_1 are called the null and alternative hypothesis, respectively.
We stress that the only unknown in the aforementioned hypothesis testing problem is the exact noise draw realised by ℳ.
The privacy guarantee of ℳ thus expresses how difficult it is to distinguish between the distributions ℳ(D) and ℳ(D') as measured in terms of trade-off between the fundamental errors of hypothesis testing: the Type-I error α and the Type-II error β.
Since the aforementioned hypothesis testing problem is one between two simple hypotheses, 𝒜 is endowed with the optimality properties furnished by the Neyman-Pearson (NP) lemma <cit.>.
In other words, their test has the highest power 1-β at any given level α∈ [0, 1].
f-DP <cit.> utilises a trade-off function T: α↦β to express DP guarantees.
Concretely, let ϕ be a rejection rule for the aforementioned hypothesis testing problem.
Then, T(ℳ(D), ℳ(D'))(α) = inf_ϕ{β_ϕ|α_ϕ≤α}.
A mechanism is said to satisfy f-DP, if, for all α∈ [0,1] and all adjacent D, D' it holds that T(ℳ(D), ℳ(D'))(α) ≥ f(α), where f is some reference trade-off function.
The inf_ϕ means that, by definition, f-DP only considers the rejection rule with the highest power among all realisable rejection rules at the same level α, which is consistent the optimality properties of 𝒜.
For rejection rules with asymmetric trade-off functions (e.g. for sub-sampled mechanisms), one must also consider T^-1=T(ℳ(D'), ℳ(D)) and obtain the symmetrised/convexified curve C(T, T^-1).
This is important as the DP guarantee must hold identically for the add one and the remove one adjacency relations.
A mechanism whose trade-off function is β(α)=1-α, i.e. the off-diagonal of the unit square, offers perfect privacy.
As a worst-case guarantee, f-DP thus additionally only considers the trade-off function which is farthest from this off-diagonal, corresponding to the pair of mechanism distributions exhibiting the greatest effect size.
This pair is called the dominating pair of a mechanism <cit.>.
For the GM, the dominating pair is (𝒩(0, σ^2), 𝒩(Δ_2, σ^2)) and for the LM it is (Lap(0, b), Lap(Δ_1, b)).
For the SGM two pairs must be considered: (𝒩(0, σ^2), (1-p)𝒩(0, σ^2)+p𝒩(Δ_2, σ^2)) and ((1-p)𝒩(Δ_2, σ^2)+p𝒩(0, σ^2), 𝒩(Δ_2, σ^2)); this transfers to the SLM by replacing the Gaussian by the Laplace density.
ReRo <cit.> is an upper bound on the probability of a successful DR attack.
In this work, we will study ReRo under a pessimistic threat model which is very similar to that of DP:
𝒜 has access to all database records and executes a DR attack R on a model w outputting a reconstructed record z^∗∼ R(w).
The goal of 𝒜 is to select the correct database record z corresponding to z^∗ (i.e. record matching).
Formally, let π denote 𝒜's prior distribution (i.e. auxiliary information) and let ρ be a reconstruction error function.
Then, ℳ satisfies (η, γ)-ReRo if, for any fixed D, it holds that ℙ_z∼π, w ∼ℳ(D ∪{ z })(ρ(z, R(w))≤η) ≤γ.
Note the difference to DP: ReRo is defined purely through the add one adjacency relation.
The authors of <cit.> directly show that mechanisms whose output distributions satisfy a bound on the so-called blow-up function ℬ_κ(η) also satisfy ReRo.
Concretely, let μ and ν be ℳ's dominating pair distributions for the add one adjacency relation and E be a measurable event.
Then, ℳ satisfies (η, γ)-ReRo with respect to a prior κ(η) with γ = ℬ_κ(η)(μ, ν) = sup{ℙ_μ(E) |ℙ_ν(E) ≤κ(η) }.
Throughout, we follow <cit.> and let ρ=1(z≠ z^∗) (i.e. an exact match) and assign a uniform prior κ(η)=1/n, where n can e.g. be the cardinality of the database, since 𝒜 has an a priori probability of 1/n to select the correct candidate record without observing R(w), or some more pessimistic fixed prior, e.g. 1/10.
Although general hypothesis testing theory is used in <cit.> to prove the ReRo bound for DP mechanisms, the authors do not directly use f-DP to bound ReRo and instead estimate γ using MC (Algorithm 1 of <cit.>).
This strategy has the drawback of holding only at the limit as the number of MC samples approaches infinity and is impracticable for very large n or very small κ.
Next, we will show that ℬ_κ(η)(μ, ν) has a natural hypothesis testing interpretation, allowing us to circumvent the MC procedure and directly bound γ.
§ RERO BOUNDS FOR DP MECHANISMS THROUGH HYPOTHESIS TESTING
We begin by expressing ℬ_κ(η) in terms of the hypothesis testing problem between ℳ(D) and ℳ(D').
Assume that 𝒜 employs a rejection rule ϕ with power 1-β_ϕ(α) at a pre-selected level α.
Consistent with the worst-case guarantee, we will only consider the rejection rule with the highest power among all realisable rejection rules and denote this supremum power as 𝒫(α).
(1) We remark that we make no further specifications about the rejection rule.
Therefore, although we will consider the DP threat model which assumes an optimal ϕ using the likelihood ratio test statistic evaluated at the dominating pair, all results transfer to threat model relaxations, provided the realisable rejection rules and their corresponding test statistics can be specified.
(2) We formulate our results in terms of the test ℋ_0:ℳ(D) vs. ℋ_1:ℳ(D') because we only need to bound the add one adjacency relation to bound ReRo.
The upshot of this choice can be seen in Figure 1, panel f.
If ℳ upper-bounds the adversary's supremum power 𝒫(α), then it also satisfies (η, 𝒫(κ(η)))-ReRo for a prior κ.
In particular, if ℳ satisfies f-DP, it also satisfies (η, 1-f(κ(η)))-ReRo and if it satisfies (ε, δ)-DP, it also satisfies (η, min{e^εκ(η) + δ, 1})-ReRo.
The special case of (ε, 0)-DP appeared previously in <cit.>.
The theorem's main advantage is that it allows us to think about the relationship between DP and ReRo in terms of statistical power analysis, for which robust tools and an extensive body of theory exist.
Moreover, it explains the finding by <cit.> that directly bounding ReRo using ℬ_κ(η)(μ, ν) instead of taking a detour via Rényi DP results in a tighter bound: ReRo has a natural hypothesis testing interpretation, whereas Rényi DP does not <cit.>.
Furthermore, the theorem establishes ReRo as a weaker guarantee than f-DP in the sense that f-DP bounds 𝒜's supremum power at all levels α∈ [0,1], whereas ReRo is a bound on the supremum power at a single level α = κ(η).
Consequently, achieving ReRo is easier (i.e. requires less noise) than achieving f-DP.
In terms of concrete mechanisms, we obtain the following results:
Let μ_1 = Δ_1/b and
f_Lap(α, μ_1) =
1-α e^{μ_1}, α < e^{-μ_1}/2
e^{-μ_1}/(4α), e^{-μ_1}/2 ≤ α ≤ 1/2
(1-α) e^{-μ_1}, α > 1/2.
Then, the LM satisfies (η, γ)-ReRo with γ = 1-f_Lap(κ(η), μ_1).
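Corollary 1 is directly computable; a minimal sketch (function names are ours):

```python
import numpy as np

def f_lap(alpha, mu1):
    """Trade-off function of Lap(0, b) vs. Lap(Delta_1, b), with mu1 = Delta_1 / b."""
    if alpha < np.exp(-mu1) / 2:
        return 1 - alpha * np.exp(mu1)
    if alpha <= 0.5:
        return np.exp(-mu1) / (4 * alpha)
    return (1 - alpha) * np.exp(-mu1)

def rero_laplace(kappa, mu1):
    """gamma such that the Laplace mechanism is (eta, gamma)-ReRo under prior kappa."""
    return 1 - f_lap(kappa, mu1)

print(rero_laplace(kappa=0.1, mu1=1.0))   # compare with the baseline success probability kappa = 0.1
```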
Let μ_2 = Δ_2/σ and f_Gauss(α, μ_2) = Φ(Φ^-1(1-α) - μ_2), where Φ and Φ^-1 are the cumulative distribution and quantile function of the standard normal distribution. Under N-fold homogeneous composition, the GM satisfies (η, γ)-ReRo with γ = 1-f_Gauss(κ(η), √(N)μ_2).
Under heterogeneous composition of mechanisms with μ_a, μ_b, …, we have γ = 1-f_Gauss(κ(η), √(μ_a^2+μ_b^2+…)).
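Corollary 2 is equally direct, requiring only the standard normal CDF and quantile function; the parameter values in the sketch below are illustrative.

```python
from scipy.stats import norm

def rero_gaussian(kappa, mu2, steps=1):
    """gamma for the Gaussian mechanism with per-step mu2 = Delta_2 / sigma under N-fold composition."""
    mu_total = (steps ** 0.5) * mu2
    return 1 - norm.cdf(norm.ppf(1 - kappa) - mu_total)

print(rero_gaussian(kappa=0.1, mu2=1.0))              # single application
print(rero_gaussian(kappa=0.1, mu2=1.0, steps=100))   # composition drives gamma towards 1
```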
These two corollaries allow us to obtain an exact bound on ReRo for the respective mechanisms.
Unfortunately, the trade-off functions for the LM under composition and for the SLM and SGM are not available in closed form.
Three distinct options exist for evaluating these functions:
(1) Compute the trade-off functions numerically either through direct numerical integration or e.g. using the technique by <cit.>.
This approach can be optimal in the sense that it can provide an exact bound up to numerical precision (or with a controlled error tolerance).
To obtain a valid ground truth, we use direct numerical integration by performing a grid discretisation over G points and using an arbitrary-precision floating point library such as <cit.>.
This technique is extremely time-consuming, as (for N composition steps) it requires G · N numerical integrations (in neural network applications N = 𝒪(10^4)) and thus only serves as a gold standard.
An approach using the technique by <cit.> can be found in the appendix.
(2) One can leverage an analytic (e.g. Edgeworth or saddle-point) finite sample approximation to the trade-off function which can be computed in constant time for homogeneous composition.
Such approximations are a cornerstone of statistical power analysis <cit.>, and have been previously used for (ε, δ)-DP accounting <cit.>.
For our experiments, we use an improved version of the technique proposed by <cit.>, i.e. a fourth order Edgeworth approximation, which has error 𝒪(N^-2).
(3) Asymptotically, the trade-off function of a (Poisson-)subsampled mechanism with sampling rate p converges to f_Gauss(α, μ̃) with μ̃= p√(N(e^{μ_2^2}-1)) when p√(N) converges to a positive constant as the compositions N→∞ <cit.>.
This so-called CLT approximation is essentially an order zero Edgeworth approximation and has an error of 𝒪(N^-1/2).
We note that, although the MC technique of <cit.> has a nominally even higher error rate of 𝒪((κ N)^-1/2), it performs better than the CLT approximation in practice because it is unbiased, whereas the CLT approximation presupposes that the approximated trade-off function is Gaussian, which leads to poor performance when its assumptions are violated (see experiments below and <cit.> for discussion).
Independent of the technique used to approximate the trade-off function, we can formulate the following results:
Let f̃_SLM(α, μ_1, N, p) denote the approximate trade-off function for the SLM with sampling rate 0<p≤ 1 under N-fold composition using one of the approximation techniques above.
Then, the SLM satisfies (η, γ)-ReRo with γ≈ 1-f̃_SLM(κ(η), μ_1, N, p).
Similarly, let f̃_SGM(α, μ_2, N, p) denote the approximate trade-off function for the SGM with sampling rate 0<p<1.
Then, the SGM satisfies (η, γ)-ReRo with γ≈ 1-f̃_SGM(κ(η), μ_2, N, p).
Note that for the SGM, when p=1, we revert to the GM and can use the closed-form bound from Corollary 2 (see Figure 1c below).
We remark for completeness that heterogeneous composition is also possible using the techniques above and that approximations are not necessarily valid upper bounds unless verified, e.g. using the technique by <cit.>.
We omit a detailed discussion of these points due to space constraints.
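As an illustration, the CLT-based ReRo estimate of option (3) for the SGM can be evaluated in closed form. The sketch below uses illustrative hyperparameters of our own choosing (κ, μ_2, N, p) and inherits the 𝒪(N^{-1/2}) error of the CLT approximation discussed above.

```python
import numpy as np
from scipy.stats import norm

def rero_sgm_clt(kappa, mu2, steps, p):
    """CLT estimate of gamma for the subsampled Gaussian mechanism."""
    mu_tilde = p * np.sqrt(steps * (np.exp(mu2 ** 2) - 1.0))
    return 1 - norm.cdf(norm.ppf(1 - kappa) - mu_tilde)

# illustrative large-scale values: small prior, small sampling rate, many steps
print(rero_sgm_clt(kappa=1e-5, mu2=0.5, steps=60_000, p=4e-4))
```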
§ EXPERIMENTAL EVALUATION AND CONCLUSION
Figure <ref> compares the MC estimate <cit.> of γ (10^6 samples) at a fixed prior κ to the asymptotic CLT approximation <cit.>, the fourth-order Edgeworth approximation and the Ground Truth computed by numerical integration (a,b,d) or in closed form (c).
γ is plotted against the effect size (Δ_1/b or Δ_2/σ), corresponding to increasing privacy loss: a: ε_max=20, b/c: ε_max=100, d: ε_max=𝒪(10^8) at δ=10^-6 for c/d.
Observe that in panel d, the MC algorithm of <cit.> already has too high variance to provide an accurate estimate of γ.
This means that the analysis of ImageNet-sized datasets where the values of κ and p are very low and the number of steps N is very high is infeasible using MC (or the Ground Truth) due to memory or time constraints.
In contrast, estimating γ using the Edgeworth approximation yields excellent precision at a constant memory consumption and run time of only about 1.5s, exactly matching the Ground Truth.
Panel e shows γ as a function of κ for a very low p and a very large N, similar to the hyperparameters used by <cit.> when training ImageNet from scratch.
Even at κ=10^-7, our presented techniques allow for estimating γ, and the CLT approximation matches the Edgeworth approximation very well.
Further examples of CIFAR-10 and ImageNet workflows are shown in the Appendix.
Panel f explains the observation by <cit.>, where, at a constant (ε, δ), the authors find that different sampling rates p lead to different values of γ.
The crux of this finding is that the authors of <cit.> choose mechanism parameter combinations which result in the same privacy guarantee in terms of a single (ε, δ)-pair (recall that SGMs are only identical if they correspond in all possible (ε, δ)-pairs).
Mechanisms with different p are therefore fundamentally distinct and lead to different γ values across the range of κ.
In particular, the trade-off function (and thus the ReRo bound function) is increasingly asymmetric at low values of p.
As seen in the figure, for κ=0.1 (used by <cit.>), γ is lower at p=0.1 (blue) compared to 0.9 (lavender), matching Figure 6 of <cit.>.
A detailed discussion on this topic can be found in the Appendix.
Conclusion
In this work, we expanded on the connection between ReRo and DP by leveraging hypothesis testing theory and techniques from statistical power estimation.
This allowed us to formulate refined ReRo bounds for relevant DP mechanisms and propose techniques to estimate them with high precision across a broad range of use-cases.
Our results can thus help ML practitioners to evaluate the vulnerability of sensitive data processing systems against data reconstruction attacks, thereby increasing user trust.
In future work, we intend to assess ReRo bound tightness for large vision and language models/datasets, provide ReRo bounds in the shuffle model of DP and for individual privacy accounting schemes, expand our analysis to non-uniform priors other reconstruction error functions and heterogeneous compositions.
§ APPENDIX
§.§ Proofs
Proof of Theorem 1
Let y be a mechanism output, μ, ν be the dominating pair distributions of ℳ and κ(η) ∈ [0,1] be a prior.
Since E is an arbitrary measurable event, we can fix E to be the event of rejecting ℋ_0 (this mirrors the event definitions in Corollary 3 of <cit.> and standard hypothesis testing theory).
Moreover, let ϕ be a rejection rule for ℋ_0: y ∼ν and ℋ_1: y ∼μ.
This is without loss of generality since f can always be considered (or made) symmetric, and thus the following statements also hold when the role of the hypotheses is exchanged, although this is not required to bound ReRo, which only considers the add one adjacency relation.
From the definition of ReRo, γ = ℬ_κ(η)(μ, ν) = sup{ℙ_μ(E) |ℙ_ν(E) ≤κ(η) }.
From our assumption above, ℙ_μ(E)=1-β_ϕ (correctly reject ℋ_0 given ℋ_1) and ℙ_ν(E) = α_ϕ (wrongly reject ℋ_0 given ℋ_0).
Substituting, we obtain γ = sup{ 1-β_ϕ | α_ϕ≤κ(η) }.
In other words, γ exactly corresponds to the supremum power of ϕ given a pre-selected bound on Type-I error rate, i.e. γ = 𝒫(α), and thus a bound on γ is implied by a bound on 𝒫(α) with α=κ(η).
To prove the ReRo bound implied by f-DP, we consider the definition of the trade-off function: f(κ(η)) := inf{β_ϕ | α_ϕ≤κ(η)}.
Since f is convex, continuous and non-increasing on the unit square, 1-f(κ(η)) = sup{ 1-β_ϕ | α_ϕ≤κ(η)} = 𝒫(α) = γ.
We note that the reverse does not hold in general: bounding ReRo through a bound on γ implies a bound on 𝒫(α) for a specific level α, whereas f-DP implies a bound on 𝒫(α) at all levels α∈ [0,1].
To prove the ReRo bound implied by (ε, δ)-DP, we leverage a result by <cit.>, who show that, if a mechanism satisfies (ε, δ)-DP, it imposes a bound on the power 1-β at a level α of the optimal hypothesis test ϕ such that 1-β_ϕ(α_ϕ) ≤ e^εα_ϕ + δ, i.e. 𝒫(α) ≤ e^εα + δ.
Finally, we substitute κ(η) as the desired level α_ϕ and take the min since γ is a probability, from which the claim follows.
Algorithm 1 of <cit.> essentially computes an MC estimate of the complementary trade-off function for 𝒩(0, σ^2) vs. (1-p)𝒩(0, σ^2)+p𝒩(Δ_2, σ^2).
The sampling inefficiency and high variance at small values of κ are due to the fact that the algorithm draws S MC samples but discards all but ⌈κ· S ⌉ of them.
This is exacerbated in extreme parameter regimes such as the ones discussed above, necessitating orders of magnitude more samples to correctly estimate the bound, which eventually becomes infeasible due to memory constraints.
In terms of the distributions of the likelihood ratio test statistics under ℋ_0 and ℋ_1, constructing 𝒫(α) corresponds to the following steps:
Per the Neyman-Pearson lemma, the optimal test ϕ is realised by thresholding the likelihood ratio test statistics.
Let c be the critical value for rejecting ℋ_0.
Then, (1) determine the value of c for which α_ϕ(c)<κ(η) by computing the quantile function of the test statistic under ℋ_0 at 1-κ(η) and
(2) compute the value of the cumulative distribution function of the test statistic under ℋ_1 evaluated at c.
The likelihood ratios under ℋ_0 and ℋ_1 are also called the privacy loss random variables in DP.
The equivalence between the privacy loss random variables and the test statistics from which the trade-off function f is computed represents the intuitive link between f-DP, (ε, δ)-DP and ReRo and reinforces the pivotal role of the privacy loss random variable.
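As a concrete illustration of the two steps above, consider the Gaussian mechanism with dominating pair 𝒩(0, σ^2) vs. 𝒩(Δ_2, σ^2), for which thresholding the output y itself is equivalent to thresholding the likelihood-ratio statistic. The following minimal sketch (our illustration, not the paper's code) computes γ at a prior κ:

```python
# Our illustration (not the paper's code): the two-step construction of
# P(alpha) = gamma for the Gaussian pair H0: N(0, s^2) vs. H1: N(d, s^2).
# Thresholding y itself is equivalent to thresholding the likelihood ratio here.
from scipy.stats import norm

def gamma_gaussian(kappa: float, d: float, s: float) -> float:
    # Step 1: critical value c such that the Type-I error under H0 equals kappa.
    c = s * norm.ppf(1.0 - kappa)
    # Step 2: power under H1, i.e. one minus the CDF of the H1 statistic at c.
    return 1.0 - norm.cdf((c - d) / s)
```

This matches the Gaussian trade-off relation γ = 1 - Φ(Φ^{-1}(1-κ) - d/s).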
The Edgeworth approximation utilises the cumulant generating functions of the likelihood ratio test statistics computed numerically, followed by a series approximation combined with a numerical inverse for the quantile function.
The CLT approximation is equivalent to an Edgeworth approximation of order zero, rendering it quite inflexible, which explains its poor performance when its assumptions are violated.
Proof of Corollaries 1 and 2
The claims of both corollaries follow directly from the closed-form expressions of the trade-off functions of the LM and the GM.
The derivations of the trade-off functions themselves can be found e.g. in <cit.>.
Proof of Corollary 3
The claims follow from the ReRo bound implied by f-DP proven in Theorem <ref>.
We remark that since we are dealing with trade-off function approximations, minimising the approximation error is crucial for obtaining an exact bound on γ.
§.§ Supplementary Figure
The following figure illustrates further scenarios in which the Edgeworth and CLT approximation yield excellent results, whereas the MC technique of <cit.> would not be usable due to an impracticably high number of MC samples required to obtain an accurate estimate.
Moreover, in these scenarios, the numerical ground truth would take on the order of weeks to compute and is thus unavailable.
In contrast, the Edgeworth and CLT approximations are computable in constant time.
Moreover, the assumptions of the CLT approximation kick in for these parameter values and thus the two methods yield identical results.
The top figure row shows ReRo bounds for CIFAR-10-style workflows with hyperparameters taken from Table 13 of <cit.> (left) and an even smaller sampling rate (right), whereas the bottom row shows ImageNet-style workflows with the hyperparameters from Table 15 of <cit.> (left) and an even smaller batch size (right).
The bottom right panel is identical to Figure 1, panel e in the main manuscript.
For all panels, κ∈ [10^-7, 10^-1].
§.§ Experimental details
Conversion to (ε, δ)-DP
Conversions to (ε, δ)-DP were performed as follows:
* For the LM, following the simple composition theorem: ε = NΔ_1/b.
* For the SLM, following <cit.>: ε = log(1 + p(e^NΔ_1/b-1)).
* For the GM, following <cit.>: Compute δ(ε) = Φ(-σε/Δ_2 + Δ_2/(2σ)) - e^εΦ(-σε/Δ_2 - Δ_2/(2σ)), then solve for ε at a given δ numerically (a numerical sketch of this conversion is given after this list).
* For the SGM, following <cit.>: Compute the (symmetrised) trade-off function, then compute the convex conjugate numerically and solve for ε at a given δ.
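For concreteness, the GM conversion can be carried out with a few lines of numerical code. The sketch below is our illustration (function names and the bracketing interval are arbitrary choices), not the implementation used for the experiments:

```python
# Illustrative sketch: convert a Gaussian mechanism with L2 sensitivity delta_2
# and noise scale sigma into an (epsilon, delta)-DP guarantee by numerically
# inverting delta(epsilon) as given in the list above.
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def delta_of_eps(eps: float, sigma: float, delta_2: float) -> float:
    """delta(eps) for the Gaussian mechanism."""
    return (norm.cdf(-sigma * eps / delta_2 + delta_2 / (2 * sigma))
            - np.exp(eps) * norm.cdf(-sigma * eps / delta_2 - delta_2 / (2 * sigma)))

def eps_at_delta(delta: float, sigma: float, delta_2: float, eps_hi: float = 200.0) -> float:
    """Solve delta_of_eps(eps) = delta for eps by bracketing and bisection."""
    return brentq(lambda e: delta_of_eps(e, sigma, delta_2) - delta, 1e-9, eps_hi)

# e.g. eps_at_delta(1e-6, sigma=1.0, delta_2=1.0)
```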
Details of numerical techniques
The numerical Ground Truth was evaluated by using the technique proposed in Section 5.1 of <cit.> with G=1 000 grid points and using 25 digits of numerical precision in <cit.> (for reference, a 64-bit floating point value provides ≈ 15 digits of precision).
We recall that this technique requires one numerical integral per step N and grid point, rendering it extremely time consuming and impracticable for any use beyond establishing a gold standard.
The fourth-order Edgeworth approximation was computed as previously described (see Section 3.1 of <cit.>).
However, we expanded the Edgeworth series up to order four as described in the main manuscript.
Moreover, the original work <cit.> approximates the trade-off function for only one of the two dominating pairs of the SGM (𝒩(0, σ^2), (1-p)𝒩(0, σ^2)+p𝒩(Δ_2, σ^2)).
Whenever required (e.g. for conversions to (ε, δ)-DP or for Figure 1, panel e), we also instantiated the trade-off function for the other dominating pair ((1-p)𝒩(Δ_2, σ^2)+p𝒩(0, σ^2), 𝒩(Δ_2, σ^2)) and obtained the symmetrisation/convexification of the two trade-off functions, in line with the assumption that the trade-off function is symmetric.
Monte Carlo (MC) estimation of γ was performed according to Algorithm 1 of <cit.>.
All MC experiments were performed with S=1 000 000 samples.
We used multi-core sampling with 16 concurrent processes on a single 2019 Apple MacBook Pro with an 8 core Intel i9 CPU and 64 GB of memory.
The CLT and Edgeworth approximations have constant run time, the latter provided the composition is homogeneous (i.e. the effect size is constant over all N).
In terms of memory usage, the MC algorithm allocates an array of size S · N, where S is the number of MC samples and N is the number of SGM steps.
The numerical Ground Truth, Edgeworth and CLT approximations require constant memory.
§.§ Discussion of ReRo bound sensitivity to subsampling probability
In <cit.>, the authors found that the ReRo upper bound is dependent on the subsampling probability, p.
They showed this by fixing the number of steps in DP-SGD and the gradient clipping norm, and finding a σ that would give a fixed (ε, δ)-DP guarantee across different subsampling rates.
In <cit.>, the authors chose a small number of steps (100) for this experiment, due to the computational overhead of their MC estimation method.
For this relatively small number of compositions, the CLT assumption for Gaussian DP is not yet fully in effect, meaning the mechanisms the authors selected at different values of p are not identical; they only intersect for a specific choice of ε and δ.
We plot this in <Ref>, and show the corresponding trade-off curves for the add one and remove one adjacency relations along with their symmetrised version (see Definition F.1 in <cit.>).
When the CLT does not apply, this comparison is across fundamentally distinct mechanisms with different trade-off curves, and so the upper bounds for ReRo are different.
We compare different values of subsampling probabilities when the CLT is assumed to hold (number of steps N=10,000).
From <Ref>, the three mechanisms intersect at identical (ε, δ) pairs, and so they are identical mechanisms.
In the right figure, we plot the trade-off curves under the assumption that the CLT is valid [We use μ̃ = p√(N(e^1/σ^2 -1)), so the collection of all σ that yield the same μ̃ can be found through σ = 1/√(log(1 + μ̃^2/(Np^2))); see <cit.> for details. A short numerical sketch of this relation is given at the end of this subsection.].
For each mechanism, we also numerically compute its privacy profile using <cit.> and convert to a trade-off curve using the (ε, δ) trade-off function (Eq. 5 in <cit.>).
These all coincide perfectly.
When the CLT holds, the mechanisms are identical, the trade-off curves are independent of p, and since the curves are symmetric, the add one and the remove one curves are one and the same.
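For completeness, the calibration in the footnote above can be written as a one-line helper; this is merely our illustrative sketch, not the code used to produce the figures:

```python
# Sketch: choose sigma for a given sampling rate p and step count N so that all
# mechanisms share the same CLT parameter mu_tilde (relation from the footnote).
import numpy as np

def sigma_for_mu(mu_tilde: float, N: int, p: float) -> float:
    return 1.0 / np.sqrt(np.log(1.0 + mu_tilde**2 / (N * p**2)))
```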
|
http://arxiv.org/abs/2307.05095v1 | 20230711080710 | ATWM: Defense against adversarial malware based on adversarial training | [
"Kun Li",
"Fan Zhang",
"Wei Guo"
] | cs.CR | [
"cs.CR",
"cs.AI"
] |
ATWM: Defense against adversarial malware based on adversarial training
1st Kun Li
PLA Information Engineering University
Zhengzhou, China
moyue_lk@foxmail
2nd Fan Zhang
PLA Information Engineering University
Zhengzhou, China
3rd Wei Guo
PLA Information Engineering University
Zhengzhou, China
Deep learning technology has achieved great success in the image domain. To defend against malware attacks, researchers have proposed many Windows malware detection models based on deep learning. However, deep learning models are vulnerable to adversarial example attacks: attackers can generate adversarial malware that keeps the same malicious functionality yet evades detection by the malware detection model. Many adversarial defense studies have been proposed, but they are based on image samples and cannot be directly applied to malware samples. Therefore, this paper proposes an adversarial malware defense method based on adversarial training. The method uses preprocessing to defend against simple adversarial examples and thereby reduce the difficulty of adversarial training, and it improves the adversarial defense capability of the model through adversarial training. We experimented with three attack methods on two datasets, and the results show that our method can improve the adversarial defense capability of the model without reducing its accuracy.
Adversarial malware, Adversarial examples, Adversarial defense, Deep learning, Malware detection
§ INTRODUCTION
With the development of Internet technology, more and more researchers are focusing on the security of cyberspace. Malware is an important vehicle of network attack and poses a huge threat to cyberspace security. In recent years, deep learning has made great progress in image classification<cit.><cit.>, natural language processing<cit.><cit.>, speech signal processing<cit.><cit.>, recommendation systems<cit.><cit.>, and machine translation<cit.>, and researchers have begun applying deep learning techniques to malware detection. At present, the main line of research on deep learning malware detection is end-to-end detection, which only requires feeding the Portable Executable (PE) binary file into the detection model; the model then outputs the detection result. Compared to traditional malware detection methods, deep learning malware detection automatically learns sample features through neural networks, which reduces manual involvement and lowers detection costs<cit.>. Deep learning has achieved significant results in end-to-end malware detection<cit.><cit.>.
However, existing research shows that deep learning-based malware detection models are vulnerable to adversarial malware<cit.><cit.><cit.><cit.>. Adversarial attacks can be divided into black-box and white-box attacks according to whether the attacker can obtain the model weights, gradients, structure, and other information. In a black-box attack, the attacker cannot obtain important information such as the model structure, weights, or gradients; the attacker can only observe the model's inputs and outputs and extract useful information from them. In a white-box attack, the attacker can obtain important information such as the model gradients, structure, and weight parameters, but cannot interfere with model training and can only mount the attack at the testing stage.
To sum up, deployed deep learning malware detection models are exposed to adversarial example attacks, so improving the adversarial defense capability of detection models has become an important research topic. Existing adversarial example defense research can be mainly divided into adversarial training<cit.>, randomization<cit.>, and denoising<cit.>, but this research targets the image domain. Malware is different from images: small changes to the pixels of an image do not change its semantic information, whereas small changes to the content of malware can destroy its malicious functionality and executability. Because of this difference, the above defense methods cannot defend against adversarial malware and would interfere with the normal detection of malware detection models, so existing adversarial example defenses cannot be directly applied to deep learning malware detection models. Defending against adversarial malware is therefore very challenging: the defense must reduce the negative impact of adversarial malware on the model without affecting the model's normal detection of malware.
This paper proposes an adversarial malware defense method ATWM (Adversarial Training for Windows Malware) based on adversarial training. We evaluate the effectiveness of ATWM, and the results show that ATWM can improve the adversarial defense capability of malware detection models. The contributions of this paper are summarized as follows:
* We use preprocessing to filter low-entropy perturbations and defend against BARAF attacks. Moreover, preprocessing removes simple adversarial examples, which makes adversarial training less difficult.
* We use two perturbation generation approaches and two perturbation injection methods in stochastic combination to generate diverse adversarial examples.
* We improve the robustness and adversarial defense of malware detection models through adversarial training.
The remainder of the paper is organized as follows. In Section 2, we present work related to adversarial example defense. Section 3 describes the proposed ATWM. Section 4 reports the experimental results of ATWM. Finally, our paper is summarized in Section 5.
§ RELATED WORK AND BACKGROUND
§.§ Related work
Adversarial examples have been widely shown to trick deep learning models into producing erroneous outputs. Therefore, researchers treat them as attacks against deep learning models and have proposed various methods to defend against them. Most existing adversarial example defense research focuses on the image and text domains, so it cannot be directly applied to adversarial malware defense; however, the underlying ideas are universal. This section introduces the existing image adversarial example defense ideas and the state of the research, which can be mainly divided into adversarial training, randomization, and denoising.
Adversarial Training: Adversarial training is a general adversarial example defense method that adds adversarial examples to the training set of the model and enhances the robustness of the model by retraining. Goodfellow<cit.> et al. added adversarial examples generated by the Fast Gradient Sign Method (FGSM) to model training to enhance model robustness. Experimental evaluation shows that the model trained by this method can defend against adversarial examples generated by FGSM, but it remains very fragile when faced with more powerful adversarial attacks. Madry<cit.> et al. proposed using Projected Gradient Descent (PGD) to generate adversarial examples for adversarial training. Experiments show that the robustness of the model trained by this method is greatly improved, and it can defend against typical adversarial attack methods such as FGSM, PGD, and C&W<cit.>. However, this method consumes a lot of resources during adversarial training and cannot defend against other paradigms of adversarial attack. The adversarial example is the key to adversarial training: to improve the quality of adversarial examples, ensemble adversarial training<cit.> (EAT) was proposed. EAT adds adversarial examples from multiple pre-trained models to increase the diversity of adversarial examples. Experiments show that against some adversarial attacks, the adversarial defense capability of the EAT-trained model is better than that of the PGD-trained model.
In contrast to the above ideas, Lee<cit.> et al. proposed using a Generative Adversarial Network<cit.> (GAN) for adversarial training rather than a specific adversarial example generation algorithm. The GAN consists of a generator and a discriminator. The method uses the generator to produce adversarial perturbations, which are added to the original samples to generate adversarial examples. The discriminator is then improved by feeding it both original samples and adversarial examples. Through the mutual adversarial training of the generator and discriminator, the perturbation-generating ability of the generator improves. After the generative adversarial network is trained, the adversarial examples it generates are used for adversarial training. This method can not only generate high-quality adversarial examples but also train a discriminator to detect adversarial examples; however, GAN training is difficult.
Randomization: The aim of randomization is to reduce the attack capability of adversarial examples in order to improve the adversarial defense capability of the model. Xie<cit.> et al. apply stochastic transformations to input images to defend against adversarial examples, such as stochastic image resizing and stochastic zero-padding of image edges. Dhillon<cit.> et al. proposed stochastic feature pruning to reduce the influence of adversarial perturbations on classification results. In addition to stochastic processing of input samples, Liu<cit.> et al. add stochastic noise to the network itself: their method achieves stochastic model inference by adding a noise layer in front of the convolutional layers of the model.
Denoising: Denoising reduces noise in samples to defend against adversarial attacks. Researchers found that perturbations added to images exhibit noise-like characteristics, so noise reduction can be used to attenuate the attack power of adversarial examples. Xu<cit.> et al. proposed reducing high-frequency noise in input samples by reducing the bit depth of image samples and blurring the images. Experiments show that this method can reduce the attack capability of adversarial examples and improve the adversarial defense capability of the model. In addition, some studies map adversarial examples back to the original samples through generative adversarial networks<cit.> (GAN) and auto-encoders (AE); input noise is reduced by the mapping process. The main works are Defense-GAN<cit.>, APE-GAN<cit.>, and MagNet<cit.>.
In conclusion, the above adversarial defense research cannot be applied directly to adversarial malware defense. Adversarial training is general, but methods that generate adversarial images cannot generate adversarial malware. Randomization and denoising will change the content of the PE file, resulting in a decrease in the accuracy of the malware detection model. Therefore, how to defend against adversarial malware based on adversarial training is the focus of this paper.
§.§ Malware detection model
The malware detection model in this paper is a byte-to-image malware detection model. Its pipeline is shown in Figure<ref> and is mainly divided into binary2img, resize, and classification<cit.><cit.><cit.>. First, binary2img converts the binary bytes of the PE file into a gray image. Second, resizing scales all gray images to the same size to enable parallel computation. Finally, classification feeds the gray images into an existing convolutional neural network model. In summary, the input of this detection method is a resized gray image, so the input loses some features of the original data source, which weakens the interpretability of the method; however, the resizing reduces resource consumption and improves detection speed.
In binary2img, the specific steps follow the B2IMG algorithm<cit.>. First, the algorithm determines the gray image size according to the size of the PE file: gray images of different PE files have different sizes, but each image's width equals its height, and the width multiplied by the height is approximately equal to the PE file size. Second, the algorithm converts each 8-bit unsigned binary number into a decimal value in [0,255] and arranges these values into an image array according to the chosen width and height, padding any remaining area with zeros.
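For illustration, the byte-to-image step can be sketched as follows (our reconstruction of the idea, not the B2IMG reference implementation); the subsequent resize to a fixed input size is performed separately:

```python
# Rough sketch: bytes become pixel values in [0, 255] arranged into a roughly
# square grayscale image, with the trailing area zero-padded.
import math
import numpy as np

def bytes_to_gray_image(pe_bytes: bytes) -> np.ndarray:
    side = math.ceil(math.sqrt(len(pe_bytes)))      # width == height
    arr = np.frombuffer(pe_bytes, dtype=np.uint8)
    padded = np.zeros(side * side, dtype=np.uint8)  # pad the shortfall with 0
    padded[:len(arr)] = arr
    return padded.reshape(side, side)
```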
Several byte-to-image malware detection models have been proposed. This paper selects the best-performing one as its detection model<cit.>; the convolutional neural network used is DenseNet121<cit.>.
§ ATWM
This section introduces the details of the proposed adversarial malware defense method based on adversarial training. Firstly, the overall process of ATWM is introduced, followed by the perturbation generation and injection methods used by ATWM, and finally the algorithm for generating diverse adversarial examples.
§.§ Problem formulation
Deep learning can be divided into supervised and unsupervised learning according to the training method; the dominant approach is supervised learning, through which a model ℱ is obtained. Model ℱ is essentially a mapping between the feature vector X and the label vector Y, where X and Y follow the joint distribution P(X, Y). During supervised training, a loss function l is defined and the model weights are continuously adjusted to minimize it until the model learns the mapping between X and Y. The average of the loss function over the distribution P is the expected risk<cit.>, as shown in Equation<ref>, so the goal of supervised learning is to minimize the expected risk. In practice, however, it is impossible to obtain all the data that follow the joint distribution P; the distribution of the available training data can only be called an empirical distribution. The average of the loss function over the empirical distribution P_σ is called the empirical risk<cit.>, as shown in Equation<ref>. When the difference between the empirical distribution P_σ and the joint distribution P is very small, the trained model can fully map the feature vector X to the label vector Y. However, when the training distribution P_σ differs substantially from P, the model obtained through empirical risk minimization has poor robustness and is vulnerable to adversarial examples.
R(f)=∫ l(f(x),y)dP(x,y)
R_σ (f)=∫ l(f(x),y)dP_σ (x,y) =1/n∑_i=1^nl(f(x_i ),y_i)
The essence of adversarial training is that, by adding adversarial examples to the training data, the distribution of the training set is brought closer to the joint distribution P. We can therefore enhance the robustness of the malware detection model against adversarial example attacks by training it on this adversarial training set.
§.§ ATWM framework
The ATWM framework is shown in Figure<ref> and is mainly divided into perturbation generation, perturbation injection, adversarial training, and preprocessing. First, a pre-trained model is obtained from the original training set; perturbations are then generated based on the pre-trained model and injected into the original samples to produce adversarial examples. When the pre-trained model misclassifies an adversarial example, that example is considered successfully generated. Finally, the original samples and adversarial examples are combined into an adversarial training set, which is used for training (adversarial training) to obtain a robustness-enhanced malware detection model. In addition, ATWM adds preprocessing before the model input, which filters out simple perturbations with obvious features. Existing research has shown that adversarial training can improve the adversarial defense of a model but also reduce its accuracy<cit.>. By filtering simple perturbations through preprocessing and thereby reducing the number of simple adversarial examples in the adversarial training set, ATWM limits this decline in model accuracy.
§.§ Preprocessing
The goal of ATWM's preprocessing is to filter out perturbations that have adversarial attack capability but obvious features. This type of perturbation destroys the byte features of PE files, and very low information entropy is its characteristic. In general, a PE file contains no content with very low information entropy, and even when such content exists it is not used (for example, bytes filling alignment gaps). Such adversarial examples do not appear under normal circumstances and do not belong to the joint distribution P; if they are added to training, the accuracy of the adversarially trained model is greatly reduced. Therefore, ATWM preprocesses PE files by removing content with very low information entropy. The specific operation is shown in Algorithm 1. Its main principle is to remove low-entropy sequences from the byte sequence of the PE file: obtain the byte sequence within a sliding window and compute its entropy; when the entropy is higher than the threshold, the sequence is added to the preprocessed byte sequence, otherwise it is discarded.
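A rough sketch of this preprocessing idea is given below (our reconstruction, not the paper's Algorithm 1; the window size is an illustrative assumption and non-overlapping windows are used for simplicity, while the threshold of 1 follows the parameter settings reported in the experiments):

```python
# Sliding-window Shannon entropy filter: keep only byte windows whose entropy
# exceeds the threshold; very low-entropy content (e.g. constant filler) is dropped.
import math
from collections import Counter

def window_entropy(window: bytes) -> float:
    counts = Counter(window)
    n = len(window)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def preprocess(pe_bytes: bytes, window: int = 256, threshold: float = 1.0) -> bytes:
    kept = bytearray()
    for i in range(0, len(pe_bytes), window):
        chunk = pe_bytes[i:i + window]
        if window_entropy(chunk) > threshold:   # keep only non-trivial content
            kept.extend(chunk)
    return bytes(kept)
```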
§.§ Perturbation generation and injection
In this paper, various perturbation generation and perturbation injection methods are set up to generate diverse adversarial malware through stochastic combination methods. This section will introduce two perturbation generation methods and two perturbation injection methods.
§.§.§ Perturbation generation
The perturbations generated by ATWM are stochastic perturbation bytes and gradient perturbation bytes, which correspond to the perturbations produced by different types of attacks. The content of a stochastic perturbation is generated randomly, as shown in Equation<ref>.
s=random(0,255)
The gradient perturbation content is generated by the original PE file, and the generation method is shown in Equation<ref>, where x is the byte of the original PE file, g_sign is the gradient direction of the malware detection model, and η is the perturbation intensity. The gradient perturbation content is obtained by adding the original PE file and the gradient perturbation, which retains the structure information of the original PE file. This type of perturbation is mainly generated by gradient-based attack methods, such as FGSM<cit.>, PGD<cit.>, etc.
s=x+η g_sign,η∈ [0,255]
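A minimal sketch of these two generators is given below (our illustration; clipping the gradient perturbation back into the byte range [0,255] is our assumption, since the result must remain a valid byte value):

```python
# Sketch of the two perturbation generators defined by the equations above.
import numpy as np

def random_perturbation(length: int) -> np.ndarray:
    # Stochastic perturbation bytes: uniform random values in [0, 255].
    return np.random.randint(0, 256, size=length, dtype=np.uint8)

def gradient_perturbation(x: np.ndarray, g_sign: np.ndarray, eta: int) -> np.ndarray:
    # x: original bytes; g_sign: sign of the detection model's gradient (+1/-1);
    # clipping to the byte range is our assumption.
    return np.clip(x.astype(np.int32) + eta * g_sign, 0, 255).astype(np.uint8)
```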
§.§.§ Perturbation injection
After obtaining the adversarial perturbation, ATWM injects it through byte replacement and byte insertion. Currently, adversarial examples of malware appear as variants of malware samples, and the byte replacement and byte insertion operations in this paper are equivalent to existing malware variant operations.
The first is byte replacement. The main characteristic of this type of operation is that part of the byte content in the PE file is replaced. When this part of the content has a significant impact on the model decision-making, replacing this part of the content will cause malware to evade detection of the model. The main representative operations are as follows:
* Code replacement: In the PE file, the original code is replaced with logically equivalent code, such as code that computes constant values. This changes part of the PE file, and at the byte level it appears as the replacement of some bytes.
* Code Encryption: In a PE file, the content of the code is encrypted. For example, strings, text, etc. are encrypted. This operation changes part of the contents of the PE file, but in the PE file bytes, this operation replaces part of the bytes.
The second is byte insertion. The main characteristic of this type of operation is that some content is inserted in the PE file. When this area has a significant impact on decision-making, inserting content near this area will cause the original features of the area to be destroyed, allowing malware to evade detection of the model. The main representative operations are as follows:
* Junk code insertion: Junk code that does not interfere with the normal operation of the program is inserted into the PE file, for example empty NOP instructions, successive additions and subtractions of register values, pushing values onto the stack and popping them off, or computing constants. There are various ways to implement these operations in practice, but at the byte level they all inject some bytes.
* Gap filling. Fill in any contents in the PE file byte gap. Due to the alignment mechanism of the PE file, there are many byte gaps in the PE file. The byte gap space is not used by the program and can be filled with any content, and the filling content does not affect the original function of the program. This operation will inject some bytes into the bytes of the PE file.
* Indirect call function: Call the function in the PE file by nesting multiple function calls, state synchronization calls, etc. This operation adds code logic to the code. But in the PE file bytes, the operation injects some bytes.
The above operations can be implemented in many ways in practice, but at the byte level they all add or replace some bytes. Both operations are equivalent to existing malware variant techniques, although the content produced by a specific malware variant is unpredictable. Therefore, the main purpose of these operations is to force the malware detection model to pay more attention to global features and to reduce its sensitivity to local content insertion and replacement, in order to improve the robustness and adversarial defense capability of the model.
§.§ Adversarial example generation
The specific process of generating an adversarial example is shown in Algorithm 2. We first obtain the length of the original malicious byte sequence and, given the number of perturbation injections, stochastically generate a list of injection position coordinates (line1-line2). We sort the coordinate list in reverse order to avoid the coordinate shifts caused by byte insertion operations (line3). The size of the injected perturbation is a stochastic value between 0 and the maximum perturbation (line4). We use the two perturbation generation methods to produce perturbations and randomly select one of the injection operations (line5-line6). We then inject perturbations according to the coordinate list; since the list is in reverse order, each injected perturbation does not change the subsequent injection coordinates, and the original sample is injected multiple times to produce an adversarial example (line7-line11). Note that the adversarial malware generated by ATWM cannot be executed, but it presents the same byte features as executable adversarial examples; it can therefore be used for adversarial training to improve the adversarial defense of the model.
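The following sketch illustrates the flow of Algorithm 2 (our reconstruction; the number of injections, the maximum perturbation length, and the 50/50 choice probabilities are illustrative assumptions):

```python
# Positions are sorted in reverse so that insertions do not shift the
# coordinates processed later; g_sign is assumed to have the same length as x.
import random
import numpy as np

def generate_adversarial(x: np.ndarray, g_sign: np.ndarray,
                         n_inject: int = 16, max_len: int = 512) -> np.ndarray:
    positions = sorted(random.sample(range(len(x)), n_inject), reverse=True)
    adv = x.copy()
    for pos in positions:
        length = random.randint(1, min(max_len, len(x) - pos))
        if random.random() < 0.5:   # stochastic perturbation bytes
            pert = np.random.randint(0, 256, size=length, dtype=np.uint8)
        else:                       # gradient perturbation bytes (clipped to byte range)
            eta = random.randint(0, 255)
            seg = x[pos:pos + length].astype(np.int32)
            pert = np.clip(seg + eta * g_sign[pos:pos + length], 0, 255).astype(np.uint8)
        if random.random() < 0.5:   # byte insertion
            adv = np.concatenate([adv[:pos], pert, adv[pos:]])
        else:                       # byte replacement
            adv[pos:pos + length] = pert
    return adv
```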
§ EXPERIMENT
To verify the effect of ATWM on the robustness and adversarial defense capability of the model, both the accuracy and the adversarial defense capability of the model are evaluated. Existing studies have shown that adversarial training can improve the adversarial defense capability of a model but reduce its detection accuracy, so adversarial training should balance model accuracy with adversarial defense capability. Therefore, this paper evaluates ATWM from two perspectives: model accuracy and adversarial defense capability. The experimental environment is a Linux CentOS 7.9 system equipped with an Intel(R) Xeon(R) Gold 5218 CPU @2.30GHz, 256 GB of RAM, and three NVIDIA Tesla V100 32GB GPUs, using the PyTorch deep learning framework and Python for programming.
§.§ Experimental datasets
In this article, we use two datasets for experiments: the BIG2015 dataset and the VirusShare dataset. The BIG2015 dataset comes from Microsoft<cit.>; to ensure safety, the PE file headers have been removed, so the files cannot be executed. The composition of the dataset is shown in Table<ref>. The classes of the BIG2015 dataset are highly imbalanced, which limits the robustness of the trained model.
The VirusShare dataset is a non-public dataset that we collected ourselves from the VirusShare website[https://virusshare.com/] and the Windows system. The PE files in this dataset have not been processed and can therefore be executed. Its composition is shown in Table<ref>: the benign samples come from the Windows system, and the malware comes from the VirusShare website.
The above datasets are divided into training, validation, and test sets at a ratio of 6:2:2. During the experiments, the malware detection model is trained on the training set, the model that performs best on the validation set is selected as the final model, and its accuracy is verified on the test set. It is worth noting that the adversarial attack process is time-consuming, so 500 samples were randomly selected from the test set to form the adversarial defense capability test set.
§.§ ATWM parameters and examples
The ATWM parameter settings are shown in Table<ref>. The preprocessing threshold is set to 1: when a perturbation consists of a single repeated byte value, its entropy is 0, and when the bytes fluctuate between only two values, the entropy is approximately 1. Therefore, ATWM filters out byte sequences in PE files whose byte fluctuations do not exceed two values.
Gray images of ATWM-generated adversarial malware are shown in Figure 2. The adversarial malware cannot be executed, but it has the same gray-image characteristics as executable adversarial malware. The examples show that the adversarial malware retains some features of the original sample, while parts of it have been inserted into or replaced with adversarial perturbations.
§.§ Model accuracy
ATWM adds preprocessing before the malware detection model and uses a diverse adversarial malware dataset for adversarial training. This section evaluates model accuracy on the two datasets. For both datasets, the SGD optimizer with a dynamic learning rate is used for training, with a learning rate of 0.1, a decay rate of 0.6, a step size of 5, and 100 training epochs. First, the accuracy of the original model (OM), preprocessing + original model (P + OM), and preprocessing + adversarial training model (P + ATM) was tested on the VirusShare dataset. The experimental results are shown in Table<ref> and indicate that the adversarially trained model obtained by ATWM does not suffer from accuracy degradation on the VirusShare dataset.
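In PyTorch, the training configuration described above corresponds roughly to the following sketch (`model` and `train_loader` are assumed to be defined; momentum and other unspecified options are left at their defaults):

```python
# Hedged sketch of the reported optimizer settings: SGD with lr 0.1, a step
# scheduler decaying by 0.6 every 5 epochs, trained for 100 epochs.
import torch

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.6)
criterion = torch.nn.CrossEntropyLoss()

for epoch in range(100):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```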
Secondly, the accuracy of the original model, preprocessing + original model, and preprocessing + adversarial training model was tested on the BIG2015 dataset. The experimental results, shown in Table<ref>, indicate that the adversarially trained model obtained by ATWM does not decrease accuracy on the BIG2015 dataset but actually improves it. The main reason is that preprocessing filters out meaningless bytes in the original samples, making their features more prominent.
§.§ Model adversarial defense capabilities
To evaluate the effectiveness of ATWM, the adversarial defense capability of the model was tested using existing adversarial attack methods, including both black-box and white-box attacks. The black-box attacks are GAMMA<cit.> and BARAF<cit.>, and the white-box attack is COPYCAT<cit.>. Note that, for a uniform comparison, the amount of injected perturbation is used as the attack strength control variable.
The first is GAMMA. GAMMA is a black box attack method based on genetic algorithms. This method generates adversarial malware through genetic algorithms to attack malware detection models. The GAMMA parameters are set as follows: the number of genetic algorithm iterations is set to 20, the population size is set to 50, the crossover parameter is set to 0.7, and the mutation parameter is set to 0.8. The experimental results are shown in Table<ref>. Experimental results show that the adversarial defense capability of the model obtained by the ATWM method is better than that of the original model on both datasets. In addition, as the attack intensity increases, the model accuracy gradually decreases, but the ATWM method is still effective.
The second is BARAF, a black-box attack method based on destroying image texture features: it generates adversarial malware by destroying the texture features of the original sample to attack the malware detection model. In the experiment, the 0xFF byte, which had the best attack effect in the original paper, was used as the perturbation byte. The attack results are shown in Table<ref>. They show that the perturbation used by BARAF is a simple perturbation that can be completely filtered by ATWM preprocessing and does not lead to a decrease in model accuracy. In addition, as the amount of perturbation increases, the defense effect does not decrease.
Finally, there is COPYCAT, a white-box attack method based on model gradients that generates adversarial malware by injecting adversarial perturbations at the end of the original sample. During the attack, the perturbation strength of COPYCAT is set to 0.1, perturbations are generated for the entire original sample, and the perturbation near the injection area is selected as the injected perturbation. The attack results are shown in Table<ref> and show that ATWM improves the defense capability of the model against COPYCAT attacks; as the amount of perturbation increases, the defense capability does not decrease significantly.
§ CONCLUSION
End-to-end deep learning malware detection models are vulnerable to adversarial attacks, but existing adversarial defense research cannot be applied directly to adversarial malware defense. To address this problem, this paper proposes an adversarial malware defense method (ATWM) based on adversarial training. ATWM filters out simple perturbations through preprocessing and generates adversarial examples through diverse stochastic combinations for adversarial training, thereby improving the adversarial defense capability of the model. Experimental evaluation on two datasets of different types shows that ATWM can improve the adversarial defense capability of the model without causing a significant decrease in model accuracy. ATWM is highly extensible: in future work, its effect can be improved by adding more adversarial malware generation approaches to the stochastic combinations. In addition, ATWM is versatile and can be applied to multiple malware detection models in future work.
|
http://arxiv.org/abs/2307.07618v1 | 20230714202514 | $\texttt{BTSbot}$: A Multi-input Convolutional Neural Network to Automate and Expedite Bright Transient Identification for the Zwicky Transient Facility | [
"Nabeel Rehemtulla",
"Adam A. Miller",
"Michael W. Coughlin",
"Theophile Jegou du Laz"
] | astro-ph.IM | [
"astro-ph.IM"
] |
BTSbot: A Multi-input Convolutional Neural Network to Automate and Expedite Bright Transient Identification for the Zwicky Transient Facility
Nabeel RehemtullaNU,CIERA
Adam A. MillerNU,CIERA
Michael W. CoughlinUMinn
Theophile Jegou du LazCaltech
on behalf of ZTF
NUDepartment of Physics and Astronomy, Northwestern University, 2145 Sheridan Road, Evanston, IL 60208, USA
CIERACenter for Interdisciplinary Exploration and Research in Astrophysics (CIERA), 1800 Sherman Ave., Evanston, IL 60201, USA
UMinnSchool of Physics and Astronomy, University of Minnesota,
Minneapolis, Minnesota 55455, USA
CaltechDivision of Physics, Mathematics, and Astronomy, California
Institute of Technology, Pasadena, CA 91125, USA
Nabeel [email protected]
Machine Learning, ICML
The Bright Transient Survey (BTS) relies on visual inspection ("scanning") to select sources for accomplishing its mission of spectroscopically classifying all bright extragalactic transients found by the Zwicky Transient Facility (ZTF). We present BTSbot, a multi-input convolutional neural network, which provides a bright transient score to individual ZTF detections using their image data and 14 extracted features. BTSbot eliminates the need for scanning by automatically identifying and requesting follow-up observations of new bright (m < 18.5 mag) transient candidates. BTSbot outperforms BTS scanners in terms of completeness (99% vs. 95%) and identification speed (on average, 7.4 hours quicker).
§ INTRODUCTION
Large, wide-field surveys have recently revolutionized time-domain astronomy by repeatedly imaging the entire night sky. These surveys produce staggering amounts of data every night, an influx that demands the adoption of machine learning (ML) techniques. ML models have been applied to a variety of tasks in astronomy including real/bogus classification <cit.>, photometric transient classification <cit.>, photometric redshift estimation <cit.>, and many others. For the most part, ML models in astronomy perform their task using a small number of extracted numeric features. Limiting these models to extracted features ignores potentially valuable information present in the associated images. A comparatively small number of convolutional neural networks (CNNs) have been built that make use of the information embedded in astronomical images, and they have generally had great success <cit.>. CNNs are particularly well suited to astronomy because they can capture properties, like galaxy morphology, which often remain largely obscured to other image processing techniques. Only a very small subset of these CNNs are multi-input, meaning they take in images and input of another type, like extracted features <cit.>. Multi-input CNNs (MI-CNNs) have more and varied information to draw from compared to analogous single-input models. Here, we present a new MI-CNN for bright transient identification.
The Bright Transient Survey <cit.> aims to classify all extragalactic transients with m < 18.5 mag in g or r band from the public alert stream of the Zwicky Transient Facility <cit.>. Every night, experts inspect candidate BTS sources, a process called "scanning," and select the bright extragalactic transients[We use "supernova" interchangeably with "extragalactic transient." These are not equivalent, but it simplifies the prose.] for spectroscopic observation and classification. There are typically ∼50 BTS candidates per night, of which ∼10 are real bright supernovae (SNe), with the others being mostly dim SNe, active galactic nuclei (AGN), and cataclysmic variables (CVs). Since its origin, BTS has maintained near-perfect completeness of relevant sources and has rapidly and publicly released their classifications: a monumental service to the community. The BTS sample enables an enormous amount of science including some of the largest SN population studies ever conducted. The MI-CNN introduced here, dubbed BTSbot, is built to automate scanning for BTS by performing binary classification (bright SN / not bright SN) on ZTF alert packets.
§ MODEL SCOPE, ARCHITECTURE, AND TRAINING
Our MI-CNN, BTSbot, automates BTS scanning by assigning each individual ZTF alert packet a bright transient score. Alerts from real bright SNe will have high scores, allowing us to automatically select them from the pool of BTS candidates for follow-up and classification. Further, our model identifies sources and requests follow-up before a human can, helping to maintain the survey's near-perfect completeness by obtaining a spectrum before the transient fades beyond detection.
Figure <ref> shows how input is fed into BTSbot and how the information from the extracted features and images is combined to produce the output. BTSbot contains three main components: (1) The convolutional branch processes the science, reference, and difference images as a three-channel image through a VGG-like architecture <cit.> followed by flattening. (2) The fully-connected branch processes the extracted features through two dense layers. (3) The combined section concatenates the output of the two branches and passes it through two more fully-connected layers, the second of which produces the final output using a softmax activation function. The output is a unit-interval score where higher scores represent increased confidence that the source in the input alert packet is, or will become, a bright extragalactic transient.
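A schematic Keras sketch of this multi-input layout is shown below; the cutout size, layer widths, and depth of the VGG-like block are illustrative assumptions rather than the exact BTSbot configuration.

```python
# Schematic multi-input CNN: an image branch and a feature branch are
# concatenated and passed through fully-connected layers to a softmax output.
from tensorflow import keras
from tensorflow.keras import layers

img_in = keras.Input(shape=(63, 63, 3), name="triplet")   # sci/ref/diff cutouts (size assumed)
x = layers.Conv2D(32, 3, activation="relu", padding="same")(img_in)
x = layers.Conv2D(32, 3, activation="relu", padding="same")(x)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
x = layers.MaxPooling2D()(x)
x = layers.Flatten()(x)

feat_in = keras.Input(shape=(14,), name="features")       # 14 extracted alert features
f = layers.Dense(32, activation="relu")(feat_in)
f = layers.Dense(32, activation="relu")(f)

z = layers.Concatenate()([x, f])
z = layers.Dense(64, activation="relu")(z)
out = layers.Dense(2, activation="softmax")(z)             # bright-SN vs. not

model = keras.Model(inputs=[img_in, feat_in], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy")
```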
The choice of an MI-CNN is motivated by the fact that the images and the extracted features provide different, valuable information for performing our task. For example, two of the extracted features respectively represent the angular distance to and the star-galaxy score <cit.> of the Pan-STARRS1 <cit.> cataloged source nearest to this ZTF source. While the new SNe are not present in PS1, their host galaxies often are. Thus, alerts from bright SNe tend to have a moderate distance and a small star-galaxy score, indicating a galaxy projected nearby to the source. In contrast, AGN and CVs will typically be cataloged in PS1 themselves and thus have a distance very near to zero. The images also provide important information following a similar heuristic. Bright SNe tend to be associated with prominent off-center extended sources (their host galaxies); faint SNe tend to have less prominent host galaxies because they tend to be farther away; AGN will appear as exactly centered extended sources; CVs will often appear surrounded by many bright point sources because they tend to occur near dense star fields. An MI-CNN is able to pool information from all input types and consider them together when making a prediction. Where a single-input CNN might fail due to a lack of discriminating information, an MI-CNN leverages additional and distinct information when making its prediction. For example, a very faint alert that shows next to no features in its images can be identified as an AGN by an MI-CNN because AGN often have a large number of previous detections at their location (captured by one of the extracted features), information a single-input CNN could not have.
Given the scope and architecture of BTSbot, we encounter a number of challenges. First, we are requiring the model to learn multiple non-trivial separations. BTSbot must learn to separate SNe from other sources without using distance information because it is not known a priori. It must also learn to identify bright SNe from a single alert irrespective of the SN's current phase. Early in its rise or late in its fade, a bright SN can appear very similar to a near-peak dim SN. This is related to the second complication, which is that BTSbot uses no time-series information. Although light curves contain crucial information for evaluating whether a source is a bright SN, we choose to omit time-series information, in part, because it would introduce complexity and, likely, noise. There is no established method for representing partial light curves of the wide variety our model encounters in a way that is fit for input into a neural network. There has been a great deal of work to accomplish this for SNe alone <cit.>, but these methods are not applicable to all the types of sources that our model encounters. Further, omitting time-series information offers an advantage over light curve-based models. With its alert-based architecture, BTSbot can identify a bright SN from its very first detection, rather than preferring multiple detections to begin constraining a light curve. We expect that a similar model with light curve information would identify bright SNe as quickly as, or less quickly than, BTSbot does, potentially hurting BTS's completeness. Section <ref> illustrates how prompt and automatic identification of SNe with BTSbot can aid in uncovering poorly understood SN physics. To our knowledge, no other CNNs exist that simultaneously filter out non-SNe and predict a SN's future behavior given a single snapshot in time.
§.§ Training
Solving this complex classification task requires a significant training set. Having run since 2018, BTS has now amassed the largest set of public SNe classifications ever. The size of this labeled data set enables the construction of BTSbot. Bright SNe classified by BTS populate the positive class, while AGN, CVs, and dim SNe rejected by BTS constitute the negative class. In total, we have 561,000 alerts (∼44% of which belong to the positive class) from 14,348 sources (∼33% of which belong to the positive class). This difference in class distribution between alerts and sources stems from some sources having many more alerts than others: AGN can have thousands, bright SNe typically have dozens, and dim SNe can have as few as a handful. Training on every alert would result in some types of sources being over-represented. To remedy this we define a hyperparameter called N_max, the maximum number of alerts included per source. We find N_max=60 to be optimal; it balances between thinning extra alerts from sources like AGN and maximizing the training set size. We also weight contributions to the loss function by the relative size of each class to mitigate the effects of class imbalance.
BTSbot is trained with mostly standard hyperparameters. We adopt the Adam optimizer <cit.> and the binary cross-entropy loss function. We perform a number of Bayesian hyperparameter searches with the Weights and Biases platform <cit.> to optimize our choice of batch size, Adam parameters, N_max, which extracted features to include, and more. We also apply data augmentation to the image cutouts, such as random rotations of 0^∘, 90^∘, 180^∘, and 270^∘ and random horizontal and vertical flipping. These help prevent overfitting and ensure that BTSbot is invariant to these transformations.
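In Keras/TensorFlow, the image augmentations described above can be sketched as follows (an illustrative implementation, not necessarily the one used for BTSbot):

```python
# Random rotations by multiples of 90 degrees plus random flips, applied to the
# 3-channel (science/reference/difference) cutout triplet.
import tensorflow as tf

def augment(triplet):
    k = tf.random.uniform([], 0, 4, dtype=tf.int32)   # 0, 90, 180, or 270 degrees
    triplet = tf.image.rot90(triplet, k=k)
    triplet = tf.image.random_flip_left_right(triplet)
    triplet = tf.image.random_flip_up_down(triplet)
    return triplet
```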
We employ a standard 81/9/10% train/validation/test split. We also prevent sources from having alerts in multiple splits. This prevents different splits from containing extremely similar data, which would introduce bias.
§ MODEL PERFORMANCE
The trained model is run on the full validation split (i.e. no N_max thinning) to create unbiased performance diagnostics.
Figure <ref> illustrates classification outcomes as a function of PSF magnitude. The highlight marks m < 18.5 mag, the BTS magnitude threshold. The alerts within the highlight are especially important to classify correctly because they influence whether or not a spectrum of the source is to be collected. In that region, misclassifications are very limited and mostly in the dimmest bin. For m > 18.5 mag, there are more misclassifications and many more alerts overall.
Ultimately, alert-based metrics are not perfectly representative of the model's real-world performance; source-based classification is more relevant. The chosen metrics must consider that the model has multiple chances to correctly (or incorrectly) classify a source. To this end, we define a “policy" that maps the real-time history of a source's scores to a source-based classification. For simplicity, we put aside policy optimization for now and only consider one naive policy with the impression that we could improve performance with a more sophisticated choice of policy.
Our naive policy classifies a source as a bright SN once it has one alert with score ≥0.5. The left panel of Figure <ref> shows the purity and completeness of the sample produced when following this policy. We observe that it yields a sample with 99.1% completeness and 90.7% purity overall. BTS scanners produce a sample that is roughly 95% in both completeness and purity <cit.>. In tests with other, more conservative policies (defined analogously), we observe that purity is increased at the cost of completeness. We favor the naive policy because it maximizes completeness, BTS's highest priority.
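In code, the naive policy amounts to a one-line rule over a source's alert scores (a toy sketch for clarity):

```python
# A source is flagged as a bright SN as soon as any one of its alerts
# receives a bright transient score at or above the threshold.
def is_bright_sn(alert_scores, threshold=0.5):
    return any(score >= threshold for score in alert_scores)
```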
§ REAL-TIME, REAL-WORLD USAGE
BTSbot has been integrated into Fritz, the first-party ZTF alert broker and a SkyPortal instance <cit.>, and is currently posting scores to all new ZTF alert packets. Alert packets are created and augmented with a bright transient score just minutes after the observations are taken with ZTF. During the night, an accompanying tool checks for new BTS candidates every half hour and immediately saves those that pass the policy. This tool will soon also simultaneously request follow-up for passing sources. With this, we can monitor the model's real-time, real-world performance and note any frequent misclassifications. We have been awarded a large observing program for the 2023B semester with the SEDMv2 spectrograph on the Kitt Peak 2.1 m telescope. With this time, we will build a spectroscopic sample that represents the model's unbiased performance. The resulting performance metrics will be invaluable when considering adaptations of this MI-CNN, e.g., to specific SN sub-types or exotic transients like kilonovae.
§.§ SN 2023ixf & Rapid follow-up with
We take SN 2023ixf, the recent Type-II supernova in M101, as an example illustrative of the additional discovery potential of an alert-based model like .
SN 2023ixf was reported to TNS by Koichi Itagaki at 14:42 PDT on May 19th <cit.>.
The earliest published spectrum was collected by Daniel Perley less than an hour later with the SPRAT spectrograph <cit.>. About 14 hours before the first TNS report, SN 2023ixf was detected by ZTF, and, just minutes later, this alert packet was assigned a bright transient score of 0.840 by . If it was in-place at the time, would have identified this new source passing at about 01:00 PDT and requested a spectrum from one of the numerous robotic spectrographs associated with ZTF and BTS, e.g., SEDM. At this point, there is about half of the night remaining at SEDM's location, plenty of time for a spectrum to be collected. Even if the spectrum is collected at the end of the night (∼05:00 PDT), this represents a ∼10 hour speed-up over the otherwise earliest spectrum. In this example, and probe the early, rapidly-evolving explosion physics mostly unavailable to traditional triggering methods. This would not be possible with light curve-based models, which typically require using the source's evolution over multiple days to identify it as a transient. This extremely rapid follow-up is enabled by the alert-based architecture of .
§ SOFTWARE AND DATA
Code relating to and are made public at <https://github.com/nabeelre/BTSbot>. In their development, we use Astropy <cit.>, cron <cit.>, Keras <cit.>, Matplotlib <cit.>, NumPy <cit.>, pandas <cit.>, penquins <cit.>, scikit-learn <cit.>, and Tensorflow <cit.>.
§ ACKNOWLEDGEMENTS
The material contained in this document is based upon work supported by a National Aeronautics and Space Administration (NASA) grant or cooperative agreement. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author and do not necessarily reflect the views of NASA. This work was supported through a NASA grant awarded to the Illinois/NASA Space Grant Consortium.
M. W. Coughlin acknowledges support from the National Science Foundation with grant numbers PHY-2010970 and OAC-2117997.
Based on observations obtained with the Samuel Oschin Telescope 48-inch and the 60-inch Telescope at the Palomar Observatory as part of the Zwicky Transient Facility project. Major funding has been provided by the U.S National Science Foundation under Grant No. AST-1440341 and by the ZTF partner institutions: the California Institute of Technology, the Oskar Klein Centre, the Weizmann Institute of Science, the University of Maryland, the University of Washington, Deutsches Elektronen-Synchrotron, the University of Wisconsin-Milwaukee, and the TANGO Program of the University System of Taiwan.
|
http://arxiv.org/abs/2307.05961v1 | 20230712070630 | FGo: A Directed Grey-box Fuzzer with Probabilistic Exponential cut-the-loss Strategies | [
"Harvey Lau"
] | cs.SE | [
"cs.SE"
] |
FGo: A Directed Grey-box Fuzzer with Probabilistic Exponential cut-the-loss Strategies
Harvey Lau
Shanghai Jiao Tong University
August 12, 2023
======================================================================================
§.§ Abstract
Traditional coverage-guided grey-box fuzzers perform a breadth-first search of the state space of the Program Under Test (PUT). This aimlessness wastes a lot of computing resources. Directed grey-box fuzzing focuses on a target in the PUT and has become one of the most popular topics in software testing. Terminating unreachable test cases early is one way to improve directed grey-box fuzzing. However, existing solutions have two problems: firstly, reachability analysis needs to introduce extra technologies (e.g., static analysis); secondly, the performance of the reachability analysis and its auxiliary technologies lacks versatility.
We propose FGo, a probabilistic exponential cut-the-loss directed grey-box fuzzer. FGo terminates unreachable test cases early with exponentially increasing probability. Compared to other approaches, FGo makes full use of the unreachability information contained in the iCFG and does not incur any additional overhead from reachability analysis. Moreover, it generalizes easily to all PUTs. This probability-based strategy is well suited to the inherent randomness of fuzzing.
The experimental results show that FGo is 106% faster than AFLGo in reproducing crashes. We compare multiple parameter settings of the probabilistic exponential cut-the-loss algorithm and analyze them in detail. In addition, to enhance the interpretability of FGo, this paper discusses the difference between the theoretical and practical performance of the probabilistic exponential cut-the-loss algorithm.
§ INTRODUCTION
Grey-box fuzzing is a popular automated vulnerability detection technique. It takes improving code coverage as its goal and performs a breadth-first search on the PUT, because the more code is covered, the higher the possibility of discovering bugs. However, the proportion of code containing bugs is very small, which means that many of the computing resources spent on improving code coverage are wasted. Directed grey-box fuzzing emerged to solve this problem: a directed grey-box fuzzer focuses on a target in the PUT and tilts computing resources towards it, effectively performing a depth-first search on the PUT. From the perspective of state space, the states of coverage-guided grey-box fuzzing are all paths, while the states of directed grey-box fuzzing are a set of specific paths.
The essence of fuzzing is to obtain information about the PUT through test cases, and in the application scenario of fuzzing, the information we need is bugs. Grey-box fuzzing often uses known information about the PUT to speed up the search of states or to reduce the state space. Existing directed grey-box fuzzers obtain the call graph (CG) and control flow graph (CFG) of the PUT through static analysis and set the target of the PUT based on known vulnerabilities<cit.>. Firstly, a directed grey-box fuzzer extracts the CG and CFG from the source code of the PUT; secondly, combined with the target, it pre-computes the distances of all functions and basic blocks (BBs) to the target functions and target BBs; thirdly, it uses the feedback information obtained at runtime to reduce the distance of test cases and approach the target until the corresponding vulnerability is triggered.
Improvements to directed grey-box fuzzers can be divided into two categories: improving the framework of directed grey-box fuzzing before test cases are obtained<cit.> and improving the execution process of the PUT after test cases are obtained<cit.>. As shown in Figure 1.a, improving the framework before test cases are obtained, for example the distance definition, the power schedule, or the mutation strategy, is similar to placing a stone in the upper reaches of a river: although such an improvement can tilt computing resources towards the target, its effect is very limited. As shown in Figure 1.b, improving the execution process of the PUT after test cases are obtained, for example through early termination, is similar to building a dam in the middle reaches of a river: this kind of improvement can greatly increase the efficiency of directed grey-box fuzzing.
The key information for improving the execution process of the PUT after test cases are obtained is reachability. FuzzGuard<cit.> divides test cases into reachable and unreachable ones using a neural network. Through backward interval analysis, BEACON<cit.> pre-inserts the conditions that must hold for an execution to reach the target. They have two things in common: firstly, the reachability analysis needs to introduce other technologies; secondly, the effect of the reachability analysis depends on the specific PUT. FuzzGuard<cit.> predicts reachability with a pre-trained neural network, but vulnerabilities are diverse and a neural network trained on a few specific PUTs lacks versatility. Similarly, BEACON requires static analysis, and its preconditions are only numerical conditions<cit.>.
We found that existing directed grey-box fuzzers do not make full use of the control flow information (iCFG). In fact, the distances pre-computed by AFLGo<cit.> implicitly contain reachability information. From the perspective of reachability, the path of a test case is divided into two segments: in the iCFG, the functions and BBs in the first segment are reachable to the target, while the functions and BBs in the second segment are unreachable to the target. Therefore, when the execution of a test case enters the unreachable segment, it should be terminated early. However, existing directed grey-box fuzzers do not take this into consideration and cannot cut the loss in time.
This paper focuses on exploiting the known iCFG to terminate unreachable test cases early. We propose FGo, a probabilistic exponential cut-the-loss directed grey-box fuzzer. FGo terminates unreachable test cases early with exponentially increasing probability. When a test case is not terminated early, this prevents false positives among unreachable test cases; when it is terminated early, it saves a lot of computing resources. Compared to directed grey-box fuzzers that introduce other technologies, FGo makes full use of the reachability information contained in the known iCFG and does not incur any additional overhead from reachability analysis. Moreover, it generalizes easily to all PUTs. This probability-based strategy is well suited to the randomness of fuzzing. We implement the prototype of FGo on top of AFLGo.
The contributions of this paper are as follows:
* Without introducing any additional overhead caused by reachability analysis, we propose probabilistic exponential cut-the-loss strategies to terminate unreachable test cases early. This idea can be generalized to all directed grey-box fuzzers and PUTs.
* We extend AFLGo to implement FGo and publish its source code.
* We evaluate FGo on real programs and analyze the experimental results in detail.
* We study the theoretical performance of probabilistic exponential cut-the-loss algorithm and analyze its difference from the practical performance.
To facilitate the research in this field, we open source FGo at <https://github.com/harvey-lau/fgo>.
§ BACKGROUND
§.§ Directed Grey-box Fuzzing
According to the degree of dependence on known information about the PUT, fuzzing can be divided into black-box, white-box and grey-box fuzzing<cit.>. Black-box fuzzing<cit.> treats the PUT as a black box: no information about the PUT is known before fuzzing, and only the output corresponding to an input can be observed. White-box fuzzers<cit.> generate test cases based on analyzing the PUT and its runtime information. Grey-box fuzzing<cit.> performs lightweight static analysis on the PUT or collects some runtime information from test cases to guide mutation. Black-box fuzzing is not effective enough, while white-box fuzzers pay for their effectiveness with efficiency<cit.>. Grey-box fuzzing strikes a balance between effectiveness and efficiency and has therefore become the most widely used fuzzing method.
At first, grey-box fuzzers aimed to improve code coverage<cit.>, because the higher the code coverage, the greater the possibility of discovering vulnerabilities. However, the paths corresponding to vulnerabilities account for a very small proportion of the code<cit.>, which means that a large amount of the computing resources used to improve code coverage does not improve vulnerability coverage. Therefore, blindly improving code coverage is very inefficient. An empirical analysis of coverage-based fuzzer benchmarking shows that the relationship between the coverage achieved and the number of vulnerabilities found is a strong correlation but not strong agreement<cit.>.
Directed fuzzing focuses on a target in the PUT and tilts computing resources towards the target, rather than trying to cover all paths. In the beginning, directed fuzzing used symbolic execution to generate inputs that trigger specific paths, often building on a symbolic execution engine such as KLEE<cit.>. However, directed symbolic execution relies on program analysis and constraint solving, which carry an unacceptable time overhead. Compared to directed symbolic execution, directed grey-box fuzzing is not only suitable for large programs but also highly flexible.
§.§ The Distance and Reachability of Test Cases
AFLGo is the first directed grey-box fuzzer<cit.>. It regards the process of reaching the target of the PUT as an optimization problem and treats the distance between a test case and the target as the loss function. AFLGo defines distance in three steps.
Function-level distance defines the distance between a function and all target functions. Firstly, the distance between two functions is defined as the shortest path length in the CG; secondly, the distance between a function and a set of functions is defined as the harmonic mean of the distances between that function and each function in the set.
BB-level distance defines the distance between a BB and all target BBs. Different situations are discussed as follows:
* If the BB belongs to the BB set of the target, the distance is zero.
* If the function containing the BB is reachable to the target function set, then the BB is reachable to the target BB set, and the BB-level distance is defined as a multiple of the function-level distance.
* If the BB has no direct reachability relationship with the BB set of the target, the BB-level distance is defined as the harmonic mean of the distance between the BB and each BB in the same CFG.
Normalized distance defines the distance between a test case and the target. We can regard the path of a test case as a series of BBs. Based on the BB-level distance defined above, the distance between a test case and the target is defined as the arithmetic mean of the distances between each BB of the test case and the target BBs. In order to make the maximum distance independent of the scale of the PUT, the distance of a test case is normalized to [0, 1] based on the minimum and maximum distances.
So, how does AFLGo handle unreachable test cases?
* In function-level distance, the distances of unreachable functions are undefined.
* In BB-level distance, if the BB is unreachable to the target BB set, its distance, which is based on the function-level distance, is also undefined.
* The same holds for the test case-level distance.
Let us turn to the implementation of AFLGo<cit.>. At compile time, AFLGo initializes the distance between a BB and the target to -1. If this initial distance is not updated with a positive value, the BB does not participate in the instrumentation that calculates the final distance. At runtime, AFLGo initializes the distance between a test case and the target to -1. If this initial distance is not updated with a positive value, the test case does not participate in the power schedule based on simulated annealing.
FuzzGuard and BEACON cut the loss based on reachability. However, both of their reachability analyses rely on auxiliary technologies: FuzzGuard needs to pre-train a classification neural network on test cases and their reachability information<cit.>, and BEACON needs to extract numerical conditions from the source code of the PUT through backward interval analysis<cit.>.
§.§ Exponential Backoff Algorithm
Exponential backoff algorithms<cit.> are used in control systems to reduce the rate of responding to adverse events. For example, if a client cannot connect to the server, it will try again after 1ms; if it fails again, it will try again after 2ms, 4ms, 8ms, ..., and so on.
The mathematical model of reducing the rate of responding to adverse events is an exponential function:
t = c^k, k = 0, 1, 2, ...
* t represents the delay between adverse events.
* c represents the multiple of each delay.
* k represents the number of adverse events.
When c = 2, exponential backoff algorithm is also known as binary exponential backoff algorithm.
In this paper, the backoff of adverse events is replaced by the cut-the-loss of test cases.
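To make the schedule concrete, the following short Python sketch (our illustration, not part of any cited system; all values are assumed) lists the successive delays t = c^k for the binary case c = 2.

# Illustrative sketch of the exponential backoff schedule t = c^k (assumed values).
def backoff_delays(c=2, max_events=5):
    """Delays (e.g., in ms) after the k-th adverse event, k = 0, 1, 2, ..."""
    return [c ** k for k in range(max_events)]

print(backoff_delays())  # [1, 2, 4, 8, 16]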
§ MOTIVATING EXAMPLE
As shown in Figure 2, let white nodes represent the BBs of a test case and red nodes represent the BBs of the target. In Figure 2.a, the number on a white node represents the distance between that BB and the last BB of the target. In Figure 2.b, the number on a white node represents the cut-the-loss probability of that BB.
Firstly, if a test case is reachable to the first sub-target, then it is reachable to the third sub-target.
Secondly, if the second BB of a test case is reachable to the last sub-target, then the first BB of the test case is also reachable to the last sub-target. It is worth mentioning that the distance of the first BB does not necessarily build on the distance of the second BB, because the path through the second BB may be only one of several paths between the first BB and the target.
Thirdly, if the fourth BB of a test case is unreachable to the last sub-target, then the fifth BB of the test case is also unreachable to the last sub-target.
Based on the above analysis, it can be known that:
* The reachability between a test case and the target is equivalent to the reachability between a test case and the last sub-target.
* A test case transfers from reachable to unreachable (if such transfer exists).
Because the first conclusion about reachability involves the completeness of the iCFG, it is not the focus of this paper. For the second conclusion, we use the idea of the exponential backoff algorithm to calculate the cut-the-loss probability of each unreachable BB.
We know that a test case can be divided into a reachable segment and an unreachable segment. As shown in Figure 2.b, if we set the exponential cut-the-loss probability to 0.2, the probability that the test case is terminated at the first unreachable BB is 0.2, the probability that it is terminated at the second unreachable BB is (1 - 0.2) · 0.2 = 0.16, the probability that it is terminated at the third unreachable BB is (1 - 0.2)^2 · 0.2 = 0.128, and in general the probability that it is terminated at the i-th unreachable BB is (1 - 0.2)^{i-1} · 0.2.
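The probabilities quoted above follow a geometric pattern; the short Python sketch below is our illustration of that calculation (with p = 0.2 as in Figure 2.b) and is not taken from the FGo implementation.

# Probability of terminating exactly at the i-th unreachable BB: (1 - p)^(i - 1) * p.
p = 0.2
per_bb = [(1 - p) ** (i - 1) * p for i in range(1, 6)]
print(per_bb)  # [0.2, 0.16, 0.128, 0.1024, 0.08192]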
§ METHODOLOGY
The workflow of FGo is shown in Figure 3. We abstract the process of fuzzing a seed into four steps.
* Firstly, according to its importance, a seed is assigned some given computing resource (e.g., the number of mutations n).
* Secondly, one seed is mutated to n test cases in the light of mutation strategy.
* Thirdly, reachable test cases are executed normally while unreachable test cases are terminated early. For example, if almost all BBs of test case 1 are unreachable, test case 1 is terminated at the beginning of its execution; if almost all BBs of test case n are reachable, test case n is terminated near the end of its execution; and if test case k is reachable, it is executed normally.
* Fourthly, if a test case is interesting, it is added to the seed pool.
§.§ Probabilistic Exponential cut-the-loss Algorithm
Algorithm 1 describes the execution of a test case with the probabilistic exponential cut-the-loss algorithm. initializes a random number seed; generates a random integer; p represents the exponential cut-the-loss probability. For each BB of a test case, if its distance is -1, gets a random integer between 1 and 10. If the integer is greater than (1 - p) · 10, the test case is terminated early. As a result, the probabilistic exponential cut-the-loss algorithm terminates test case t at each unreachable BB with exponential cut-the-loss probability p.
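The sketch below restates this logic in Python for readability; it is our paraphrase of Algorithm 1, not FGo's actual instrumentation inside AFLGo, and the helper names (bb_distance, execute_bb) are hypothetical.

import random

def run_with_cut_the_loss(basic_blocks, bb_distance, execute_bb, p=0.1, seed=None):
    """Execute a test case BB by BB; at each unreachable BB (distance == -1),
    terminate early with probability p, so the overall chance of termination
    grows with the number of unreachable BBs encountered."""
    rng = random.Random(seed)              # corresponds to initializing the random seed
    for bb in basic_blocks:
        if bb_distance(bb) == -1:          # this BB cannot reach the target
            if rng.randint(1, 10) > (1 - p) * 10:
                return "terminated_early"  # cut the loss
        execute_bb(bb)
    return "finished"

Because p is restricted to multiples of 0.1 in the experiments, the integer comparison realizes the probability p exactly.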
§.§ True Positive
We refer to unreachable test cases with early termination as true positive test cases. BEACON shows that more than 95% of unreachable test cases are fully executed<cit.>. If they are terminated early, the performance of directed grey-box fuzzers will be greatly improved.
Figure 4 shows the distance and cut-the-loss probability of each BB of a test case. White nodes represent reachable BBs while blue nodes represent unreachable BBs. The computing resources saved when the test case is terminated early depend on the number of unreachable BBs. As shown in Figure 4.b, if a test case is terminated early at the first unreachable BB, the computing resources for executing the subsequent k unreachable BBs are saved. From the perspective of static analysis, the cut-the-loss probability at each unreachable BB is not the same. For example, the probability of being terminated at the second unreachable BB equals the product of the probability of not being terminated at the first unreachable BB and the probability of being terminated at the second unreachable BB. In summary:
* The more unreachable BBs there are, the greater the probability that the test case is terminated early and the more computing resources are saved.
* The later an unreachable BB appears, the lower the probability that it is executed.
We conduct a theoretical analysis in Section 7.
§.§ False Positive
We refer to reachable test cases that are terminated early as false positive test cases; this is what we want to avoid.
What would happen if FGo terminated unreachable test cases early with exponential cut-the-loss probability p = 1? Assuming that the iCFG completely represents the PUT and that the distance information completely captures the reachability of every BB (e.g., non-negative values mean reachable while negative values mean unreachable), setting p = 1 would maximize the performance of FGo. However, a function-level or BB-level distance of -1 does not necessarily mean that the BB is unreachable<cit.>. Therefore, when the exponential cut-the-loss probability p equals 1, some false positive test cases are terminated early, and we need the margin 1 - p to prevent reachable test cases from being terminated. Furthermore, in the early stages of fuzzing, terminating test cases early too frequently would significantly disturb the original feedback mechanism of AFL and make the directed search fall into a local optimum. From a high-level view, the exponential cut-the-loss probability p is a trade-off between true positives and false positives.
§ IMPLEMENTATION
We implemented a prototype of FGo in and as an extension of AFLGo. At runtime, FGo calls a function to realize the probabilistic exponential cut-the-loss algorithm. In it, we use a random integer between 1 and 10 to decide whether a test case is terminated early under a given exponential cut-the-loss probability p. Similar to BEACON<cit.>, termination is achieved by .
§ EVALUATION
Benchmarks. We used LibMing version 0.4.7<cit.>, which contains 4 different categories of crashes corresponding to different function call stacks. BEACON<cit.> shows that LibMing version 0.4.7 contains 4 vulnerabilities with assigned CVE IDs and that they can be reproduced within 6 hours. Because their CVE descriptions are too vague to set the targets for LibMing version 0.4.7, we first tested LibMing version 0.4.7 with AFL<cit.>. According to our analysis, the crashes of LibMing version 0.4.7 found by AFL fall into 4 categories, as shown in Table 1. We then take the most common function call stacks of these 4 categories of crashes as 4 different targets for LibMing version 0.4.7, and we determine which bug a test case triggers by the features of the crash.
Fuzzers. We compare FGo with AFLGo.
* AFLGo, the first and most widely used directed grey-box fuzzer.
* FGo, the tool proposed in this paper.
RQs. We conducted all experiments to answer following questions:
* Time to Exposure (TTE): does FGo outperform AFLGo in reproducing crashes?
* The different parameters of the probabilistic exponential cut-the-loss algorithm: how do the parameters of the algorithm affect its performance?
We used the seeds in the GitHub repository of AFLGo and repeated every experiment 7 times with a timeout of 6 hours. We set the time-to-exploitation t_x to 2 hours. The fuzzing command is:
We conducted all experiments on a computer with Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz with 16 cores, 16GB memory, and Ubuntu 20.04.1 LTS.
§.§ Time to Exposure (TTE)
Compared to AFLGo, FGo with exponential cut-the-loss probability p = 0.1 improves performance by an average of 106%, as shown in Table 1.
* For , this crash is difficult to reproduce. The crash number at which AFLGo triggers it ranges from 18 to 31, and its TTE fluctuates between 5573s and 13258s across the 7 repeated experiments. In contrast, FGo can filter out many irrelevant crashes (e.g., the crashes of ), so the minimum crash number at which it triggers this crash is only 2. In this situation, it took FGo only 440s to reproduce it. Because the TTE of is relatively long, its reproduction depends more on several specific mutations, so the performance gain of FGo is modest, about 40%.
* For , this is the crash that is easiest to reproduce. The crash number at which AFLGo triggers it is stably 1, and its TTE is relatively small. Therefore, the interference of early termination with the coverage feedback mechanism of AFL is almost negligible. In such cases, the effect of FGo's probabilistic exponential cut-the-loss algorithm is very significant, increasing performance by an average of 360%.
* For , this crash is moderately difficult to reproduce. The crash number at which AFLGo triggers it is about 7. In terms of TTE, FGo performs poorly here: its maximum TTE was as large as 17570s. A possible reason why AFLGo outperformed FGo is that the probabilistic exponential cut-the-loss algorithm caused FGo to fall into a local optimum.
* For , neither AFLGo nor FGo can reproduce this crash.
§.§ The Different parameters of probabilistic exponential cut-the-loss algorithm
To study the performance of different cut-the-loss probabilities, we tested LibMing version 0.4.7 with p = 0.1, p = 0.2, and p = 0.4, respectively.
As shown in Table 3, p = 0.1 outperforms p = 0.2 and p = 0.4, because the larger the value of p, the lower the stability of the probabilistic exponential cut-the-loss algorithm. Although p = 0.1 seems very small, it only represents the probability of early termination at the first unreachable BB. The probability that a test case is terminated is 1 - (1 - p)^u, where u is the number of unreachable BBs. For example, when p = 0.1 and u = 10, the probability that a test case is terminated is 0.65; when p = 0.1 and u = 20, it is 0.88.
§ DISCUSSION
§.§ The Theoretical Analysis
Based on the definitions in Table 4, the proportion of the execution time t_2 that is saved by the probabilistic exponential cut-the-loss algorithm (which reduces the execution time to t_2') equals:
s = ∑_{i=1}^{u} (1 - p)^{i-1} · p · (u + 1 - i)/(r + u)
Here s represents the proportion of time overhead reduced by the probabilistic exponential cut-the-loss algorithm, where s ∈ [0, 1]. Therefore, the theoretical value of the speedup is:
I = T/T' = (t_1 + t_2)/(t_1 + t_2') = (t_1/t_2 + 1)/(t_1/t_2 + 1 - s)
Obviously, dI/ds > 0 and therefore the sign of dI/dp = dI/ds·ds/dp depends on the sign of ds/dp.
For better performance, FGo should terminate unreachable test cases as early as it can; a larger p terminates them earlier, so ds/dp > 0 and hence dI/dp > 0. This means that the larger p is, the better FGo performs in theory.
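For illustration, the following Python sketch evaluates s and the resulting theoretical speedup I for a few values of p; the choices of r, u, and t_1/t_2 are arbitrary and not taken from the paper.

def saved_fraction(p, r, u):
    # s = sum_{i=1}^{u} (1 - p)^(i - 1) * p * (u + 1 - i) / (r + u)
    return sum((1 - p) ** (i - 1) * p * (u + 1 - i) / (r + u) for i in range(1, u + 1))

def speedup(p, r, u, t1_over_t2):
    s = saved_fraction(p, r, u)
    return (t1_over_t2 + 1) / (t1_over_t2 + 1 - s)

for p in (0.1, 0.2, 0.4):
    print(p, round(saved_fraction(p, 20, 20), 3), round(speedup(p, 20, 20, 0.5), 3))

For these illustrative values, both s and I increase with p, matching the monotonicity argument above.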
§.§ Future Work
As analyzed above, we should choose p as large as possible. Why, then, does p = 0.1 outperform p = 0.2 and p = 0.4 in practice? There are two major reasons: incomplete pre-computed distances and the interference of early termination with the coverage feedback mechanism of AFL. We can derive two directions of future work from these reasons.
Firstly, we should reconstruct the definition and calculation of distance so that the pre-computed distance files are complete. Secondly, we can move from the plain exponential model to a more sophisticated model to reduce or eliminate its negative impact; for example, FGo could do nothing in the exploration phase and run the exponential cut-the-loss algorithm only in the exploitation phase, which would help FGo avoid falling into a local optimum.
§ RELATED WORK
Let us summarize the related works mainly in chronological order.
Coverage-based Grey-box Fuzzing. AFL is one of the most classic coverage-based grey-box fuzzers, and many fuzzers are developed on top of it, optimizing the framework of the fuzzing loop. AFLFast<cit.> models the process of generating new test cases as a Markov chain and gravitates the fuzzer towards low-frequency paths to increase coverage. MOpt<cit.> uses particle swarm optimization to improve mutation strategies. AFL++<cit.> combines other open source fuzzers into a new fuzzer that contains a variety of novel improvements. LibAFL<cit.> deconstructs AFL into modules to integrate orthogonal techniques.
Directed White-box Fuzzing. At first, directed fuzzing was not achieved by lightweight instrumentation. It was referred to as directed white-box fuzzing, which mainly relies on symbolic execution and generates exploitable test cases<cit.>. However, the problems of path explosion and heavy constraint solving harm the scalability of directed symbolic execution<cit.>. BugRedux<cit.> takes a sequence of statements as input and then generates a test case to trigger the crash of the PUT.
Directed Grey-box Fuzzing. In terms of problem formulation, directed symbolic execution casts directedness as an iteration problem, while directed grey-box fuzzing defines distance as a loss function and treats the reachability of test cases as an optimization problem. AFLGo<cit.> is the first directed grey-box fuzzer; it realizes directedness through a power schedule based on simulated annealing. Hawkeye<cit.> makes comprehensive improvements (e.g., to the mutation strategy) over AFLGo. WindRanger<cit.> defines Deviation Basic Blocks (DBBs), which distinguish important BBs from unimportant ones, and makes comprehensive improvements based on them. FuzzGuard<cit.> is an extension of AFLGo that uses a neural network to predict the reachability of a test case before execution and skips the test case if it is not reachable. BEACON<cit.> utilizes backward interval analysis to insert assertions before the branches of the program, so that test cases that cannot reach the target are terminated in advance. One major difference between BEACON and FuzzGuard is that BEACON cuts the loss in a provable way.
§ CONCLUSION
We present FGo, which terminates unreachable test cases early with exponentially increasing probability. FGo is faster than AFLGo in reproducing crashes. Compared to other directed grey-box fuzzers, FGo makes full use of the unreachability information contained in the iCFG and does not incur any additional overhead from reachability analysis. Moreover, this idea can easily be integrated into other fuzzers.
§ ACKNOWLEDGEMENT
|
http://arxiv.org/abs/2307.04353v1 | 20230710053014 | On Sufficient Graphical Models | [
"Bing Li",
"Kyongwon Kim"
] | stat.ML | [
"stat.ML",
"cs.LG"
] |
On Sufficient Graphical Models
Bing Li [email protected]
Department of Statistics, Pennsylvania State University
326 Thomas Building, University Park, PA 16802
Kyongwon Kim [email protected]
Department of Statistics, Ewha Womans University
52 Ewhayeodae-gil, Seodaemun-gu, Seoul, Republic of Korea, 03760
August 12, 2023
=======================================================================================================================================================================================================================================================================================================================
We introduce a sufficient graphical model by applying the recently developed nonlinear sufficient dimension reduction techniques to the evaluation of conditional independence. The graphical model is nonparametric in nature, as it does not make distributional assumptions such as the Gaussian or copula Gaussian assumptions. However, unlike a fully nonparametric graphical model, which relies on the high-dimensional kernel to characterize conditional independence, our graphical model is based on conditional independence given a set of sufficient predictors with a substantially reduced dimension. In this way we avoid the curse of dimensionality that comes with a high-dimensional kernel. We develop the population-level properties, convergence rate, and variable selection consistency of our estimate.
By simulation comparisons and an analysis of the DREAM 4 Challenge data set, we demonstrate that our method outperforms the existing methods when the Gaussian or copula Gaussian assumptions are violated, and its performance remains excellent in the high-dimensional setting.
conjoined conditional covariance operator, generalized sliced inverse regression, nonlinear sufficient dimension reduction, reproducing kernel Hilbert space
§ INTRODUCTION
In this paper we propose a new nonparametric statistical graphical model, which we call the sufficient graphical model, by incorporating the recently developed nonlinear sufficient dimension reduction techniques to the construction of the distribution-free graphical models.
Let G = ( Γ, E) be an undirected graph consisting of a finite set of nodes Γ={1, …, p} and set of edges
ℰ⊆{(i,j)∈Γ×Γ : i ≠ j }.
Since (i,j) and (j,i) represent the same edge in an undirected graph, we can assume without loss of generality that i>j.
A statistical graphical model links G with a random vector X=(X1, …, X p) by the conditional independence:
(i,j) ∉ℰ⇔ X i X j | X-(i,j),
where
X -(i,j)= {X 1, …, X p }∖{X i, X j}, and
A B | C means conditional independence. Thus, nodes i and j are connected if and only if X i and X j are dependent given X -(i,j).
Our goal is to estimate the set E based on a sample X 1, …, X n of X.
See <cit.>.
One of the most popular statistical graphical models is the Gaussian graphical model, which assumes that X ∼ N(μ, Σ). Under the Gaussian assumption, conditional independence in (<ref>) is encoded in the precision matrix Θ = Σ^{-1} in the following sense
X i X j | X -(i,j) ⇔ θ ij = 0,
where θij is the (i,j)th entry of the precision matrix Θ. By this equivalence, estimating E amounts to identifying the positions of the zero entries of the precision matrix, which can be achieved by sparse estimation methods
such as the <cit.>, <cit.>, and <cit.>. A variety of methods have been developed for estimating the Gaussian graphical model, which include, for example, <cit.>, <cit.>, <cit.>, and <cit.>. See also <cit.>, <cit.>, and <cit.>.
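As a concrete illustration of this sparse-precision approach (a generic sketch, not the implementation of any particular cited method), the following Python snippet uses the graphical lasso from scikit-learn to read off an edge set from the zero pattern of the estimated precision matrix; the simulated data, regularization level, and zero tolerance are arbitrary choices of ours.

import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))        # n = 200 observations of a p = 5 dimensional X
X[:, 1] += 0.8 * X[:, 0]                 # induce dependence between the first two components

Theta = GraphicalLasso(alpha=0.1).fit(X).precision_
edges = [(i, j) for i in range(5) for j in range(i)
         if abs(Theta[i, j]) > 1e-6]     # a nonzero theta_ij corresponds to an edge (i, j)
print(edges)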
Since the Gaussian distribution assumption is restrictive, many recent advances have focused on relaxing this assumption. A main challenge in doing so is to avoid the curse of dimensionality <cit.>: a straightforward nonparametric extension would resort to a high-dimensional kernel, which are known to be ineffective.
One way to relax the Gaussian assumption without evoking a high dimensional kernel is to use the copula Gaussian distribution, which is the approach taken by <cit.>, <cit.>, and <cit.>, and is further extended to the transelliptical model by <cit.>.
However, the copula Gaussian assumption could still be restrictive: for example, if A and B are random variables satisfying B=A2+ϵ, where A and ϵ are i.i.d. N(0,1), then (A,B) does not satisfy the copula Gaussian assumption. To further relax the distributional assumption, <cit.> proposed a new statistical relation called the additive conditional independence as an alternative criterion for constructing the graphical model. This relation has the advantage of achieving nonparametric model flexibility without using a high-dimensional kernel, while obeying the same set of semi-graphoid axioms that govern the conditional independence <cit.>. See also <cit.> and <cit.>. Other approaches to nonparametric graphical models include <cit.> and <cit.>.
In this paper, instead of relying on additivity to avoid the curse of dimensionality, we apply the recently developed nonparametric sufficient dimension reduction <cit.> to achieve this goal. The estimation proceeds in two steps: first, we use nonlinear sufficient dimension reduction to reduce X -(i,j) to a low-dimensional random vector U ij; second, we use the kernel method to construct a nonparametric graphical model based on (X i, X j) and the dimension-reduced random vectors U ij. The main differences between this approach and <cit.> are, first, we are able to retain conditional independence as the criterion for constructing the network, which is a widely accepted criterion with a more direct interpretation, and second, we are no longer restricted by the additive structure in the graphical model. Another attractive feature of our method is due to the “kernel trick”, which means its computational complexity depends on the sample size rather than the size of the networks.
The rest of the paper is organized as follows. In Sections <ref> and <ref>, we introduce the sufficient graphical model and describe its estimation method at the population level. In Section <ref> we lay out the detailed algorithms to implement the method. In Section <ref> we develop the asymptotic properties such as estimation consistency, variable selection consistency, and convergence rates. In Section <ref>, we conduct simulation studies to compare of our method with the existing methods. In Section <ref>, we apply our method to the DREAM 4 Challenge gene network data set. Section <ref> concludes the paper with some further discussions. Due to limited space we put all proofs and some additional results in the Supplementary Material.
§ SUFFICIENT GRAPHICAL MODEL
In classical sufficient dimension reduction, we seek the lowest dimensional subspace S of p, such that, after projecting X ∈ p on to S, the information about the response Y is preserved; that is, Y X | P S X, where P S is the projection onto S. This subspace is called the central subspace, written as S Y|X. See, for example, <cit.>, <cit.>, and <cit.>. <cit.> and <cit.> extended this framework to the nonlinear setting by considering the more general problem: Y X | G, where G a sub-σ field of the σ-field generated by X. The class of functions in a Hilbert space that are measurable with respect to G is called the central class, written as S Y|X. <cit.> introduced the Principal Support Vector Machine, and <cit.> generalized the Sliced Inverse Regression <cit.> and the Sliced Average Variance Estimate <cit.> to estimate the central class. Precursors of this theory include <cit.>, <cit.>, and <cit.>.
To link this up with the statistical graphical model, let (Ω, F, P) be a probability space, (Ω X, F X) a Borel measurable space with Ω X ⊆ p, and X: Ω→Ω X a random vector with distribution P X.
The ith component of X is denoted by X i and its range denoted by ΩX i. We assume Ω X = ΩX 1×⋯×ΩX p. Let X (i,j)=(X i, X j) and X -(i,j) be as defined in the Introduction. Let σ (X - (i,j)) be the σ-field generated by X -(i,j).
We assume, for each (i,j) ∈Γ×Γ, there is a proper sub σ-field G -(i,j) of σ (X -(i,j)) such that
X (i,j) X -(i,j) | G -(i,j).
Without loss of generality, we assume G -(i,j) is the smallest sub σ-field of σ ( X -(i,j) ) that satisfies the above relation; that is, G -(i,j) is the central σ-field for X (i,j) versus X -(i,j). There are plenty examples of joint distributions of X for which the condition (<ref>) holds for every pair (i,j): see Section S10 of the Supplementary Material.
Using the properties of conditional independence developed in <cit.> (with a detailed proof given in <cit.>), we can show that (<ref>) implies the following equivalence.
If X (i,j) X -(i,j) | G -(i,j), then
X i X j | X -(i,j) ⇔X i X j | G -(i,j).
This equivalence motivates us to use X i X j | G -(i,j) as the criterion to construct the graph G after performing nonlinear sufficient dimension reduction of X (i,j) versus X -(i,j) for each (i,j) ∈Γ×Γ, i > j.
Under condition (<ref>), the graph defined by
(i,j) ∉ E ⇔ X i X j | G -(i,j)
is called the sufficient graphical model.
§ ESTIMATION: POPULATION-LEVEL DEVELOPMENT
The estimation of the sufficient graphical model involves two steps: the first step is to use nonlinear sufficient dimension reduction to estimate G -(i,j); the second is to construct a graph G based on reduced data
{ (X (i,j), G -(i,j)): (i,j) ∈Γ×Γ, i > j }.
In this section we describe the two steps at the population level. To do so, we need some preliminary concepts such as the covariance operator between two reproducing kernel Hilbert spaces, the mean element in an reproducing kernel Hilbert spaces, the inverse of an operator, as well as the centered reproducing kernel Hilbert spaces. These concepts are defined in the Supplementary Material, Section S1.2. A fuller development of the related theory can be found in <cit.>. The symbols (·) and (·) will be used to denote the range and the closure of the range of a linear operator.
§.§ Step 1: Nonlinear dimension reduction
We use the generalized sliced inverse regression <cit.>, <cit.> to perform the nonlinear dimension reduction. For each pair (i,j) ∈Γ×Γ, i > j, let ΩX -(i,j) be the range of X -(i,j), which is the Cartesian product of ΩX 1, …, ΩX p with ΩX i and ΩX j removed. Let
X -(i,j): ΩX -(i,j)×ΩX -(i,j)→
be a positive semidefinite kernel.
Let H X -(i,j) be the centered reproducing kernel Hilbert space generated by X -(i,j). Let ΩX (i,j), X (i,j), and H X (i,j) be the similar objects defined for X (i,j).
E[ X -(i,j) ( X -(i,j), X -(i,j) ) ]< ∞, E[ X (i,j) ( X (i,j), X (i,j) )] < ∞.
This is a very mild assumption that is satisfied by most kernels.
Under this assumption, the following covariance operators are well defined:
ΣX -(i,j) X (i,j): H X (i,j)→ H X -(i,j), ΣX -(i,j) X -(i,j): H X -(i,j)→ H X -(i,j).
For the formal definition of the covariance operator, see S1.2. Next, we introduce the regression operator from H X (i,j) to H X -(i,j). For this purpose we need to make the following assumption.
( ΣX -(i,j) X (i,j) ) ⊆ ( ΣX -(i,j) X -(i,j) ).
As argued in <cit.>, this assumption can be interpreted as a type of collective smoothness in the relation between X (i,j) and X -(i,j): intuitively, it requires the operator ΣX -(i,j) X (i,j) sends all the input functions to the low-frequency domain of the operator ΣX -(i,j) X -(i,j). Under Assumption <ref>, the linear operator
R X -(i,j) X (i,j) = ΣX -(i,j) X -(i,j)^{-1} ΣX -(i,j) X (i,j)
is defined, and we call it the regression operator from H X (i,j) to H X -(i,j). The meaning of the inverse
ΣX -(i,j) X -(i,j) is defined in Section S1.2 in the Supplementary Material.
The regression operator in this form was formally defined in <cit.>, but earlier forms existed in <cit.>; see also <cit.>.
R X -(i,j) X (i,j) is a finite-rank operator, with rank d ij.
Intuitively, this assumption means that R X -(i,j)X (i,j) filters out the high frequency functions of X (i,j), so that, for any f ∈ H (i,j), R X -(i,j)X (i,j) f is relatively smooth. It will be violated, for example, if one can find an f ∈ H (i,j) that makes R X -(i,j)X (i,j) f arbitrarily choppy.
The regression operator plays a crucial role in nonlinear sufficient dimension reduction. Let L 2 ( P X -(i,j) ) be the L 2-space with respect to the distribution P X -(i,j) of X -(i,j). As shown in <cit.>, the closure of the range of the regression operator is equal to the central subspace; that is,
( R X -(i,j) X (i,j) ) = 𝔖X (i,j) | X -(i,j)
under the following assumption.
* H X -(i,j) is dense in L 2 (P X -(i,j) ) modulo constants; that is, for any f ∈ L 2 (P X -(i,j) ) and any ϵ > 0, there is a g ∈ H X -(i,j) such that [ f( X -(i,j) ) - g( X -(i,j) ) ] < ϵ;
* 𝔖X (i,j) | X -(i,j) is a sufficient and complete.
The first condition essentially requires the kernel X -(i,j) to be a universal kernel with respect to the L 2(P X -(i,j))-norm. It means H -(i,j) is rich enough to approximate any L 2(P X -(i,j))-function arbitrarily closely. For example, it is satisfied by the Gaussian radial basis function kernel, but not by the polynomial kernel. For more information on universal kernels, see <cit.>. The completeness in the second condition means
E[ g (X -(i,j)) | X (i,j)] = 0 ⇒ g (X -(i,j)) = 0 .
This concept is defined in <cit.>,
and is similar to the classical definition of completeness treating X -(i,j) as the parameter. <cit.> showed that completeness is a mild condition, and is satisfied by most nonparametric models.
A basis of the central class 𝔖X (i,j) | X -(i,j) can be found by
solving the generalized eigenvalue problem: for k = 1, …, d ij,
f k maximizes ⟨ f, ΣX -(i,j) X (i,j) A ΣX (i,j) X -(i,j) f ⟩-(i,j) subject to
⟨ f, ΣX -(i,j) X -(i,j) f ⟩-(i,j) = 1,
⟨ f, ΣX -(i,j) X -(i,j) f ℓ⟩-(i,j) = 0, ℓ=1, …, k-1,
where A: H X (i,j)→ H X (i,j) is any nonsingular and self adjoint operator, and ⟨·, ·⟩-(i,j) is the inner product in H X -(i,j). That is, if f ij 1, … f ijd ij are the first d ij eigenfunctions of this eigenvalue problem, then they span the central class. This type of estimate of the central class is called generalized sliced inverse regression.
Convenient choices of A are the identity mapping I or the inverse operator ΣX (i,j) X (i,j)^{-1}. If we use the latter, then we need the following assumption.
( ΣX (i,j) X -(i,j) ) ⊆ ( ΣX (i,j) X (i,j) ).
This assumption has the similar interpretation as Assumption <ref>; see Section S11 in the Supplementary Material.
At the population level, choosing A to be ΣX -(i,j) X -(i,j) achieves better scaling because it down weights those components of the output of ΣX -(i,j)X (i,j) with larger variances. However, if the sample size is not sufficiently large, involving an estimate of ΣX -(i,j)X (i,j) in the procedure could incur extra variations that overwhelm the benefit brought by ΣX -(i,j)X (i,j). In this case, a nonrandom operator such as A=I is preferable.
In this paper we use A = ΣX (i,j) X (i,j)^{-1}. Let U ij denote the random vector
( f ij 1 (X -(i,j)) , … f ijd ij(X -(i,j)) ).
The set of random vectors { U ij: (i,j) ∈Γ×Γ, i > j } is the output for the nonlinear sufficient dimension reduction step.
§.§ Step 2:Estimation of sufficient graphical model
To estimate the edge set of the sufficient graphical model
we need to find a way to determine whether X i X j | U ij is true. We use a linear operator introduced by <cit.> to perform this task, which is briefly described as follows.
Let U, V, W be random vectors taking values in measurable spaces (Ω U, F U), (Ω V, F V), and (Ω W, F W).
Let ΩUW = Ω U ×Ω W, ΩVW = Ω V ×Ω W, F UW= F U × F V, and F VW = F V × F W.
Let
UW: ΩUW×ΩUW→, VW: ΩVW×ΩVW→, W: Ω W ×Ω W →
be positive kernels. For example, for (u 1, w 1), (u 2, w 2) ∈ΩUW×ΩUW, UW returns a real number denoted by UW[(u 1, w 1), (u 2, w 2)]. Let H UW, H VW, and H W be the centered reproducing kernel Hilbert space's generated by the kernels UW, VW, and W.
Define the covariance operators
Σ(UW)(VW): H VW→ H UW, Σ(UW)W: H W → H UW,
Σ(VW)W: H W → H VW, ΣWW: H W → H W
as before.
The following definition is due to <cit.>. Since it plays a special role in this paper, we give it a name – “conjoined conditional covariance operator” that figuratively depicts its form.
Suppose
* If S is W, or (U,W), or (V, W), then E [ S (S, S) ] < ∞;
* (ΣW (VW) ) ⊆ (ΣWW), (ΣW (UW) ) ⊆ (ΣWW).
Then the operator
ΣÜV̈|W = Σ(UW)(VW) - Σ(UW)WΣWWΣW(VW)
is called the conjoined conditional covariance operator between U and V given W.
The word “conjoined” describes the peculiar way in which W appears in Σ(UW)W and ΣW(VW), which differs from an ordinary conditional covariance operator, where these operators are replaced by ΣUW and ΣWV. The following proposition is due to <cit.>, a proof of a special case of which is given in <cit.>.
Suppose
* H UW⊗ H VW is probability determining;
* for each f ∈ H UW, the function E[ f(U, W) | W=·] belongs to H W;
* for each g ∈ H VW, the function E[ g(V, W) | W =· ] belongs to H W;
Then ΣÜV̈|W = 0 if and only if U V | W.
The notion of probability determining in the context of reproducing kernel Hilbert space was defined in <cit.>. For a generic random vector X, an reproducing kernel Hilbert space H X based on a kernel X is probability determining if and only if the mapping
P ↦ E P [ X(·, X)]
is injective.
Intuitively, this requires the family of expectations { E P f(X): f ∈ H X } to be rich enough to identify P. For example, the Gaussian radial basis function is probability determining, but a polynomial kernel is not. We apply the above proposition to X i, X j, U ij for each (i,j) ∈Γ×Γ, i > j. Let
XUi,ij: (ΩX i×ΩU ij ) × (ΩX i×ΩU ij ) →
be a positive definite kernel, and H XUi,ij the centered reproducing kernel Hilbert space generated by XUi,ij. Similarly, let
Uij: ΩU ij×ΩU ij→
be a positive kernel, and H Uij the centered reproducing kernel Hilbert space generated by Uij.
Conditions (1) and (2) of Definition <ref> and conditions (1), (2), and (3) of Proposition <ref> are satisfied with U, V, and W therein replaced by
X i, X j, and U ij, respectively, for each (i,j) ∈Γ×Γ and i > j.
Under this assumption, the conjoined conditional covariance operator ΣẌ i Ẍ j | U ij is well defined and has the following property.
Under Assumption <ref>, we have
(i,j) ∉ℰ⇔ΣẌ i Ẍ j | U ij = 0.
This corollary motivates us to estimate the graph by thresholding the norm of the estimated conjoined conditional covariance operator.
§ ESTIMATION: SAMPLE-LEVEL IMPLEMENTATION
§.§ Implementation of step 1
Let (X 1, Y 1), …, (X n, Y n) be an i.i.d. sample of (X,Y). At the sample level, the centered reproducing kernel Hilbert space H X -(i,j) is spanned by the functions
{ X -(i,j) ( ·, X -(i,j) a ) - E n [ X -(i,j) ( ·, X -(i,j))]: a = 1, …, n },
where X -(i,j) (·, X -(i,j) ) stands for the function u ↦ X -(i,j) (u, X -(i,j) ), and
E n [ X -(i,j) (·, X -(i,j) )] the function u ↦ E n [ X -(i,j) (u, X -(i,j) )].
We estimate the covariance operators ΣX -(i,j) X (i,j) and Σ X -(i,j) X -(i,j) by
Σ̂X -(i,j) X (i,j) =
E n {[ X -(i,j) ( ·, X -(i,j) )
-E n X -(i,j) ( ·, X -(i,j) )]
⊗
[ X (i,j) ( ·, X (i,j) )
-E n X (i,j) ( ·, X (i,j) )] }
Σ̂ X -(i,j) X -(i,j) =
E n { [ X -(i,j) ( ·, X -(i,j) )
-E n X -(i,j) ( ·, X -(i,j) )]
⊗
[ X -(i,j) ( ·, X -(i,j) )
-E n X -(i,j) ( ·, X -(i,j) )] },
respectively. We estimate ΣX (i,j) X (i,j) by the Tychonoff-regularized inverse
( Σ̂X (i,j) X (i,j) + ϵ X (i,j) I ),
where I: H X (i,j)→ H X (i,j) is the identity operator.
The regularized inverse is used to avoid overfitting. It plays the same role as ridge regression <cit.>, which alleviates overfitting by adding a multiple of the identity matrix to the sample covariance matrix before inverting it.
At the sample level, the generalized eigenvalue problem (<ref>) takes the following form: at the kth iteration,
maximize ⟨ f, Σ̂X -(i,j) X (i,j) ( Σ̂X (i,j) X (i,j) + ϵ X (i,j) I )^{-1} Σ̂X (i,j) X -(i,j) f ⟩-(i,j) subject to
⟨ f, Σ̂X -(i,j) X -(i,j) f ⟩-(i,j) = 1,
⟨ f, Σ̂X -(i,j) X -(i,j) f ℓ⟩-(i,j) = 0, ℓ = 1, …, k-1,
where f 1, …, f k-1 are the maximizers in the previous steps. The first d ij eigenfunctions are an estimate of a basis in the central class S X (i,j) | X -(i,j).
Let K X -(i,j) be the n × n matrix whose (a,b)th entry is X -(i,j) (X a -(i,j), X b -(i,j)), Q = I n - 1 n 1 n / n, and
G X -(i,j) = Q K X -(i,j) Q.
Let a 1, …, a d ij be the first d ij eigenvectors of the matrix
( G X -(i,j) + ϵ X -(i,j) I n )^{-1} G X -(i,j) G X (i,j) ( G X (i,j) + ϵ X (i,j) I n )^{-1} G X -(i,j) ( G X -(i,j) + ϵ X -(i,j) I n )^{-1}.
Let
b r = ( G X -(i,j) + ϵ X -(i,j) I n )^{-1} a r for r = 1, …, d ij.
As shown in Section S12.2, the eigenfunctions f 1 ij, …, f d ijij are calculated by
f r ij = ∑a=1 n b r a { X -(i,j) ( ·, X -(i,j) a ) - E n [ X -(i,j) ( ·, X -(i,j))]}.
The statistics Ûij a = ( f 1 ij (X a -(i,j)) , …, f d ijij (X a -(i,j))), a = 1, …, n, will be used as the input for the second step.
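The following numpy sketch illustrates this first step for a single pair (i, j); it is our simplified rendering of the formulas above (Gaussian kernels, a common regularization constant for both Gram matrices, and arbitrary tuning values), not the authors' code, and all function names are ours.

import numpy as np

def gram(Z, gamma):
    """Centered Gaussian Gram matrix G = Q K Q."""
    sq = np.sum(Z**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * Z @ Z.T))
    n = Z.shape[0]
    Q = np.eye(n) - np.ones((n, n)) / n
    return Q @ K @ Q

def step1_sufficient_predictor(X_pair, X_rest, d, gamma=0.5, eps=1e-2):
    """Return the n x d matrix of (centered) evaluations of the first d sufficient predictors."""
    n = X_rest.shape[0]
    G_rest, G_pair = gram(X_rest, gamma), gram(X_pair, gamma)
    R = np.linalg.inv(G_rest + eps * np.eye(n))               # (G_{-(i,j)} + eps I_n)^{-1}
    M = R @ G_rest @ G_pair @ np.linalg.inv(G_pair + eps * np.eye(n)) @ G_rest @ R
    vals, vecs = np.linalg.eigh((M + M.T) / 2)                # M is symmetric up to rounding
    A = vecs[:, ::-1][:, :d]                                  # leading eigenvectors a_1, ..., a_d
    B = R @ A                                                 # coefficient vectors b_r
    return G_rest @ B                                         # centered evaluations of f_1, ..., f_d at the sample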
§.§ Implementation of step 2
This step consists of estimating the conjoined conditional covariance operator's for each (i,j) and thresholding their norms. At the sample level, the centered reproducing kernel Hilbert space's generated by the kernels XUi,ij, XUj,ij, and U ij are
H XU i,ij= {XUi,ij ( ·, (X a i, U a ij)) - E n [ XUi,ij ( ·, (X i, U ij)) ]: a = 1, …, n },
H XU j,ij= {XUj,ij ( ·, (X a j, U a ij)) - E n [ XUj,ij ( ·, (X j, U ij)) ]: a = 1, …, n },
H U ij= {Uij ( ·, U a ij) - E n [ Uij ( ·, U ij) ]: a = 1, …, n },
where, for example, XUi,ij ( ·, (X a i, U a ij)) denotes the function
ΩX i×ΩU ij→, (x i, u ij ) ↦XUi,ij ( (x i, u ij ), (X a i, U a ij))
and E n [ XUi,ij ( ·, (X i, U ij)) ] denotes the function
ΩX i×ΩU ij→, (x i, u ij ) ↦ E n [ XUi,ij ( (x i, u ij ), (X i, U ij))].
We estimate the covariance operators
Σ(X i U ij)( X i U ij), Σ(X i U ij)U ij, ΣX j (X jU ij), and ΣU ij U ij by
Σ̂(X i U ij) (X j U ij) = E n { [ XUi,ij ( ·, ( X i, U ij))- E n XUi,ij ( ·, ( X i, U ij)) ]
⊗ [ XUj,ij ( ·, ( X j, U ij))- E n XUj,ij ( ·, ( X j, U ij)) ] }
Σ̂(X i U ij) U ij = E n { [ XUi,ij ( ·, ( X i, U ij))- E n XUi,ij ( ·, ( X i, U ij)) ]
⊗ [ Uij ( ·, U ij)- E n Uij ( ·, U ij) ] }
Σ̂U ij(X j U ij) = E n { [ Uij ( ·, U ij)- E n Uij ( ·, U ij) ]
⊗ [ XUj,ij ( ·, ( X j, U ij))- E n XUj,ij ( ·, ( X j, U ij)) ] }
Σ̂U ij U ij = E n { [ Uij ( ·, U ij)- E n Uij ( ·, U ij) ]
⊗ [ Uij ( ·, U ij)- E n Uij ( ·, U ij) ] },
respectively. We then estimate the conjoined conditional covariance operator by
Σ̂Ẍ i Ẍ j | U ij=
Σ̂(X i U ij) (X j U ij) -
Σ̂(X i U ij) U ij
(Σ̂U ij U ij + ϵ U (i,j) I )^{-1} Σ̂U ij(X j U ij) ,
where, again, we have used Tychonoff regularization to estimate the inverted covariance operator ΣU ij U ij.
Let K U ij, K X i U ij, and K X j U ij be the Gram matrices
K U ij= { U ij (U a ij, U b ij) }a, b = 1 n,
K X i U ij= {XUi, ij ((X i a, U a ij), (X i b, U b ij)) }a, b =1 n,
K X j U ij = {XUj, ij ((X j a, U a ij), (X j b, U b ij)) }a,b=1 n,
and G X i U ij,
G X j Uij, and
G U ij their centered versions
G X i U ij = Q K X i U ij Q,
G X j Uij = Q K X jU ij Q,
G U ij = Q K U ij Q.
As shown in Section S12 in the Supplementary Material,
‖ Σ̂Ẍ i Ẍ j| U ij ‖hs
= ‖ G X i U ij^{1/2} G X j U ij^{1/2} - G X i U ij^{1/2} G U ij ( G U ij + ϵ U (i,j) Q )^† G X j U ij^{1/2} ‖F,
where ‖·‖F is the Frobenius norm.
Estimation of the edge set is then based on thresholding this norm; that is,
Ê = { (i,j) ∈Γ×Γ: i > j, Σ̂Ẍ i Ẍ j| U ijhs > ρ n }
for some chosen ρ n > 0.
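In code, the statistic and the thresholding rule can be written directly from the Gram-matrix formula above; the numpy sketch below is our illustration only (the Gram matrices are assumed to be the centered matrices defined earlier, and the regularization constant and threshold are arbitrary).

import numpy as np

def psd_sqrt(G):
    """Symmetric square root of a positive semidefinite matrix."""
    vals, vecs = np.linalg.eigh((G + G.T) / 2)
    return vecs @ np.diag(np.sqrt(np.clip(vals, 0, None))) @ vecs.T

def ccco_statistic(G_iu, G_ju, G_u, eps):
    """Frobenius-norm statistic for the estimated conjoined conditional covariance operator."""
    n = G_u.shape[0]
    Q = np.eye(n) - np.ones((n, n)) / n
    S_i, S_j = psd_sqrt(G_iu), psd_sqrt(G_ju)
    P = G_u @ np.linalg.pinv(G_u + eps * Q)      # G_U (G_U + eps Q)^dagger
    return np.linalg.norm(S_i @ S_j - S_i @ P @ S_j, ord="fro")

# the edge (i, j) is retained when the statistic exceeds the threshold rho_n:
# edge_present = ccco_statistic(G_iu, G_ju, G_u, eps=1e-2) > rho_n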
§.§ Tuning
We have three types of tuning constants: those for the kernels, those for Tychonoff regularization, and the threshold ρ n. For the Tychonoff regularization, we have ϵ X (i,j) and ϵ X -(i,j) for step 1, and ϵ U (i,j) for step 2. In this paper we use the Gaussian radial basis function as the kernel:
(u,v) = exp ( - γ u - v 2 ).
For each (i,j), we have five γ's to determine: γ X (i,j) for the kernel X (i,j), γ X -(i,j) for X -(i,j), γXUi,ij for XUi,ij, γXUj,ij for XUj,ij, and γ U ij for U ij, which are chosen by the following formula (see, for example, <cit.>)
1/√(γ) = [n(n-1)/2]^{-1} ∑_{a < b} ‖ s a - s b ‖,
where s 1, …, s n are the sample of random vectors corresponding to the mentioned five kernels. For example, for the kernel XUj, ij, s a = (X a j, U a ij).
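In code, this bandwidth rule sets 1/√γ to the average pairwise distance of the relevant sample; a minimal sketch (ours, with a hypothetical function name) is:

from scipy.spatial.distance import pdist

def tune_gamma(S):
    """S: n x q sample matrix for the kernel in question, e.g., rows (X_a^j, U_a^{ij})."""
    avg = pdist(S).mean()          # average of ||s_a - s_b|| over all pairs a < b
    return 1.0 / avg ** 2          # since 1 / sqrt(gamma) equals the average distance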
For the tuning parameters in Tychonoff regularization, we use the following generalized cross validation scheme (GCV; see <cit.>):
GCV(ϵ) = ϵ ∑i<j ‖ G 1 - G 2 [ G 2 + ϵλmax(G 2) I n ]^{-1} G 1 ‖F / { (1/n) tr[ I n - G 2 ( G 2 + ϵλmax(G 2) I n )^{-1} ] },
where G 1, G 2 ∈n × n are positive semidefinite matrices, and λmax (G 2) is the largest eigenvalue of G 2. The matrices G 1 and G 2 are the following matrices for the three tuning parameters:
* G 1 = G X -(i,j), G 2 = G X (i,j) for ϵ X (i,j),
* G 1 = G X (i,j), G 2 = G X -(i,j) for ϵ X -(i,j),
* G 1 = G X (i,j), G 2 = G U ij for ϵ U (i,j),
We minimize (<ref>) over a grid to choose ϵ, as detailed in Section <ref>.
We also use
generalized cross validation to determine the thresholding parameter ρ n. Let Ê(ρ) be the estimated edge set using a threshold ρ, and, for each i ∈Γ, let C i (ρ)={ X j: j ∈Γ, (i,j) ∈Ê(ρ) } be the subset of components of X at the neighborhood of the node i in the graph (Γ, Ê ( ρ)). The basic idea is to apply the generalized cross validation to the regression of the feature of X i on the feature of C i (ρ). The generalized cross validation for this regression takes the form
GCV(ρ) = ∑i=1 p ‖ G X i - G C i (ρ) [ G C i (ρ) + ϵλmax(G C i (ρ)) I n ]^{-1} G X i ‖F / { (1/n) tr[ I n - G C i (ρ) ( G C i (ρ) + ϵλmax(G C i (ρ)) I n )^{-1} ] },
where G C i (ρ)= Q K C i (ρ) Q, and K C i (ρ) is the n × n kernel matrix for the sample of C i (ρ).
We minimize GCV(ρ) over the grid ρ∈{ℓ× 10-2: ℓ=2, …, 7} to determine the optimal threshold ρ n.
Regarding the selection of the dimension of U ij, to our knowledge there has been no systematic procedure available to determine the dimension of the central class for nonlinear sufficient dimension reduction. While some recently developed methods for order determination for linear sufficient dimension reduction, such as the ladle estimate and predictor augmentation estimator <cit.>, may be generalizable to the nonlinear sufficient dimension reduction setting, we will leave this topic to future research. Our experiences and intuitions indicate that a small dimension, such as 1 or 2, for the central class would be sufficient in most cases. For example, in the classical nonparametric regression problems Y = f(X) + ϵ with X ϵ, the dimension of the central class is by definition equal to 1.
§ ASYMPTOTIC THEORY
In this section we develop the consistency and convergence rates of our estimate and related operators. The challenge of this analysis is that our procedure involves two steps: we first extract the sufficient predictor using one set of kernels, and then substitute it into another set of kernels to get the final result. Thus we need to understand how the error propagates from the first step to the second. We also develop the asymptotic theory allowing p to go to infinity with n, which is presented in the Supplementary Material.
§.§ Overview
Our goal is to derive the convergence rate of
| Σ̂Ẍ i Ẍ j | Ûijhs - ΣẌ i Ẍ j | U ijhs|,
as Σ̂Ẍ i Ẍ j | Ûijhs is the quantity we threshold to determine the edge set.
By the triangular inequality,
| Σ̂Ẍ i Ẍ j | Ûijhs - ΣẌ i Ẍ j | U ijhs|
≤Σ̂Ẍ i Ẍ j | Ûij - ΣẌ i Ẍ j | U ijhs
≤Σ̂Ẍ i Ẍ j | Ûij- Σ̂Ẍ i Ẍ j | U ijhs + Σ̂Ẍ i Ẍ j | U ij - ΣẌ i Ẍ j | U ijhs.
So we need to derive the convergence rates of the following quantities:
‖ Û^{ij} - U^{ij} ‖_{[H_{-(i,j)}(X)]^{d_{ij}}},
‖ Σ̂_{Ẍ^i Ẍ^j | Û^{ij}} - Σ̂_{Ẍ^i Ẍ^j | U^{ij}} ‖_{hs},
‖ Σ̂_{Ẍ^i Ẍ^j | U^{ij}} - Σ_{Ẍ^i Ẍ^j | U^{ij}} ‖_{hs},
where, to avoid overly crowded subscripts, we have used H -(i,j) (X) to denote H -(i,j) X when it occurs as a subscript.
The first and third convergence rates can be derived using the asymptotic tools for linear operators developed in <cit.>, <cit.>, <cit.>, and <cit.>. The second convergence rate is, however, a new problem, and it will also be useful in similar settings that require constructing estimators based on predictors extracted by sufficient dimension reduction. In some sense, this is akin to the post dimension reduction problem considered in <cit.>.
In the following, if {a n } and { b n } are sequences of positive numbers, then we write a n ≺ b n if a n / b n → 0. We write a n ≍ b n if 0< lim inf n (b n / a n) ≤lim sup n (b n / a n) < ∞. We write b n ≼ a n if either b n ≺ a n or b n ≍ a n. Because (i,j) is fixed in the asymptotic development, and also to emphasize the dependence on n, in the rest of this section we denote ϵ X (i,j), ϵ X -(i,j), and ϵ U (i,j) by ϵ n, η n, and δ n, respectively.
§.§ Transparent kernel
We first develop what we call the “transparent kernel” that passes information from step 1 to step 2 efficiently. Let Ω be a nonempty set, and κ: Ω × Ω → ℝ a positive kernel.
We say that κ is a transparent kernel if, for each t ∈ Ω, the function s ↦ κ(s,t) is twice differentiable and
* ∂κ(s,t)/∂s |_{s=t} = 0;
* the matrix H(s,t) = ∂²κ(s,t)/∂s∂sᵀ has a bounded operator norm; that is, there exist -∞ < C_1 ≤ C_2 < ∞ such that
C_1 ≤ λ_{min}(H(s,t)) ≤ λ_{max}(H(s,t)) < C_2
for all (s,t) ∈ Ω × Ω, where λ_{min}(·) and λ_{max}(·) indicate the smallest and largest eigenvalues.
For example, the Gaussian radial basis function kernel is transparent, but the exponential kernel
κ(u,v) = τ² exp(-γ ‖ u-v ‖) is not.
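The first defining property can be checked numerically. The finite-difference sketch below (ours; the bandwidths and the evaluation point are arbitrary) verifies that the gradient of the Gaussian RBF kernel in its first argument vanishes at s = t, while the exponential kernel has a gradient bounded away from zero near s = t:

import numpy as np

def grad_first_arg(kappa, s, t, h=1e-6):
    """Central-difference gradient of kappa(s, t) with respect to s."""
    s = np.asarray(s, dtype=float)
    g = np.zeros_like(s)
    for k in range(s.size):
        e = np.zeros_like(s); e[k] = h
        g[k] = (kappa(s + e, t) - kappa(s - e, t)) / (2 * h)
    return g

gauss = lambda u, v: np.exp(-0.5 * np.sum((u - v)**2))      # transparent
expon = lambda u, v: np.exp(-0.5 * np.linalg.norm(u - v))   # not transparent

t = np.array([0.3, -1.2])
print(grad_first_arg(gauss, t.copy(), t))   # approximately [0, 0]
print(grad_first_arg(expon, t + 1e-3, t))   # stays away from 0 as s approaches t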
This condition implies a type of Lipschitz continuity in a setting that involves two reproducing kernels κ_0 and κ_1, where the argument of κ_1 is the evaluation of a member of the reproducing kernel Hilbert space generated by κ_0.
Suppose H_0 is the reproducing kernel Hilbert space generated by κ_0, and H_0^d is the d-fold Cartesian product of H_0 with inner product defined by
⟨ U, V ⟩_{H_0^d} = ⟨ u_1, v_1 ⟩_{H_0} + ⋯ + ⟨ u_d, v_d ⟩_{H_0},
where U = (u_1, …, u_d) and V = (v_1, …, v_d) are members of H_0^d, and
H_1 is the reproducing kernel Hilbert space generated by κ_1. Then:
(i) for any U, V ∈ H_0^d and a ∈ Ω, we have
‖ U(a) - V(a) ‖_{ℝ^d} ≤ [ κ_0(a,a) ]^{1/2} ‖ U - V ‖_{H_0^d};
(ii)
if κ_1(s,t) is a transparent kernel,
then there exists a C > 0 such that, for each U, V ∈ H_0^d and a ∈ Ω,
‖ κ_1(·, U(a)) - κ_1(·, V(a)) ‖_{H_1} ≤ C [ κ_0(a,a) ]^{1/2} ‖ U - V ‖_{H_0^d}.
A direct consequence of this theorem is that, if Û is an estimate of some U, a member of H_0^d, with ‖ Û - U ‖_{H_0^d} = O_P(b_n) for some 0 < b_n → 0, Σ̂(Û) is a linear operator estimated from the sample Û_1, …, Û_n (and perhaps some other random vectors), and Σ̂(U) is a linear operator estimated from the sample U_1, …, U_n, then
‖ Σ̂(Û) - Σ̂(U) ‖_{hs} = O_P(b_n).
This result is somewhat surprising, because sample estimates such as Σ̂(Û) can be viewed as E_n 𝔾(X, Û), where Û is an estimate of a function U in a functional space with norm ‖·‖ and 𝔾 is an operator-valued function. If ‖ Û - U ‖ = O_P(b_n) for some b_n → 0, then it is not necessarily true that
‖ E_n 𝔾(X, Û) - E_n 𝔾(X, U) ‖ = O_P(b_n),
particularly when U is an infinite dimensional object. Yet relation (<ref>) states exactly this. The reason behind this is that the reproducing kernel property separates the function Û and its argument (i.e. Û(x) = ⟨ Û, κ_0(·, x) ⟩), which implies a type of uniformity among Û(X_1), …, Û(X_n). This point will be made clear in the proof in the Supplementary Material.
Statement (<ref>) is made precise by the next theorem.
Suppose conditions (1) and (2) of Definition <ref> are satisfied with U, V, W therein replaced by X^i, X^j, and U^{ij}. Suppose, furthermore:
(a) κ_{U^{ij}}, κ_{XU_i,ij}, and κ_{XU_j,ij} are transparent kernels;
(b) ‖ Û^{ij} - U^{ij} ‖_{[H_{-(i,j)}(X)]^{d_{ij}}} = O_P(b_n) for some 0 < b_n → 0.
Then
(i) ‖ Σ̂_{Û^{ij} Û^{ij}} - Σ̂_{U^{ij} U^{ij}} ‖_{hs} = O_P(b_n);
(ii) ‖ Σ̂_{(X^i Û^{ij}) Û^{ij}} - Σ̂_{(X^i U^{ij}) U^{ij}} ‖_{hs} = O_P(b_n);
(iii) ‖ Σ̂_{(X^i Û^{ij})(X^j Û^{ij})} - Σ̂_{(X^i U^{ij})(X^j U^{ij})} ‖_{hs} = O_P(b_n).
Using Theorem <ref> we can derive the convergence rate of ‖ Σ̂_{Ẍ^i Ẍ^j | Û^{ij}} - Σ̂_{Ẍ^i Ẍ^j | U^{ij}} ‖_{hs}.
Suppose the conditions in Theorem <ref> are satisfied and, furthermore,
(a)
Σ_{U^{ij} U^{ij}}^{-1} Σ_{U^{ij} (X^i U^{ij})} and Σ_{U^{ij} U^{ij}}^{-1} Σ_{U^{ij} (X^j U^{ij})}
are bounded linear operators;
(b) b_n ≼ δ_n ≺ 1.
Then
‖ Σ̂_{Ẍ^i Ẍ^j | Û^{ij}} - Σ̂_{Ẍ^i Ẍ^j | U^{ij}} ‖_{hs} = O_P(b_n).
Note that, unlike in Theorem <ref>, where our assumptions imply that
Σ_{X^{-(i,j)} X^{-(i,j)}}^{-1} Σ_{X^{-(i,j)} X^{(i,j)}}
is a finite-rank operator, here we do not assume
Σ_{U^{ij} U^{ij}}^{-1} Σ_{U^{ij} (X^j U^{ij})} to be a finite-rank (or even Hilbert-Schmidt) operator; instead, we assume it to be a bounded operator.
This is because (X^j, U^{ij}) contains U^{ij}, which makes it unreasonable to assume Σ_{U^{ij} U^{ij}}^{-1} Σ_{U^{ij} (X^j U^{ij})} to be finite-rank or Hilbert-Schmidt. For example, when X^j is a constant, Σ_{U^{ij} (X^j U^{ij})} is the same as Σ_{U^{ij} U^{ij}}, and Σ_{U^{ij} U^{ij}}^{-1} Σ_{U^{ij} U^{ij}} is not a Hilbert-Schmidt operator, though it is bounded.
Theorem <ref> shows that convergence rate of (ii) in (<ref>) is the same as the convergence rate of (i) in (<ref>); it now remains to derive the convergence rate of (i) and (iii).
§.§ Convergence rates of (i) and (iii) in (<ref>)
We first present the convergence rate of Û^{ij} to U^{ij}. The proof is similar to that of Theorem 5 of <cit.>, but with two differences. First, <cit.> took A in (<ref>) to be I, whereas we take it to be Σ_{YY}. In particular, the generalized sliced inverse regression in <cit.> only has one tuning parameter η_n, but we have two tuning parameters η_n and ϵ_n. Second, <cit.> defined (in the current notation) f_r^{ij} to be the eigenfunctions of
Σ_{X^{-(i,j)} X^{-(i,j)}}^{-1} Σ_{X^{-(i,j)} X^{(i,j)}} Σ_{X^{(i,j)} X^{(i,j)}}^{-1} Σ_{X^{(i,j)} X^{-(i,j)}} Σ_{X^{-(i,j)} X^{-(i,j)}}^{-1},
which is different from the generalized eigenvalue problem (<ref>).
For these reasons we need to re-derive the convergence rate of Ûij.
Suppose
(a) Assumption <ref> is satisfied;
(b) Σ_{X^{-(i,j)} X^{(i,j)}} is a finite-rank operator with
ran( Σ_{X^{-(i,j)} X^{(i,j)}} ) ⊆ ran( Σ_{X^{-(i,j)} X^{-(i,j)}}^2 ),
ran( Σ_{X^{(i,j)} X^{-(i,j)}} ) ⊆ ran( Σ_{X^{(i,j)} X^{(i,j)}} );
(c) n^{-1/2} ≺ η_n ≺ 1, n^{-1/2} ≺ ϵ_n ≺ 1;
(d) the nonzero eigenvalues are distinct: λ_1^{ij} > ⋯ > λ_{d_{ij}}^{ij}.
Then,
‖ Û^{ij} - U^{ij} ‖_{[H_{-(i,j)}(X)]^{d_{ij}}} = O_P(
η_n^{-3/2} ϵ_n^{-1} n^{-1} + η_n^{-1} n^{-1/2} + η_n + ϵ_n ).
An immediate consequence is that, under the transparent kernel assumption, the b n in Theorem <ref> is the same as this rate. We next derive the convergence rate in (iii) of (<ref>). This rate depends on the tuning parameter δ n in the estimate of conjoined conditional covariance operator, and it reaches b n for the optimal choice of δ n.
Suppose conditions (1) and (2) of Definition <ref> are satisfied with U, V, W therein replaced by X^i, X^j, and U^{ij}. Suppose, furthermore,
(a)
Σ_{U^{ij} U^{ij}}^{-1} Σ_{U^{ij} (X^i U^{ij})} and Σ_{U^{ij} U^{ij}}^{-1} Σ_{U^{ij} (X^j U^{ij})}
are bounded linear operators;
(b) b_n ≼ δ_n ≺ 1.
Then
‖ Σ̂_{Ẍ^i Ẍ^j | U^{ij}} - Σ_{Ẍ^i Ẍ^j | U^{ij}} ‖_{hs} = O_P(δ_n). Consequently, if δ_n ≍ b_n, then
‖ Σ̂_{Ẍ^i Ẍ^j | U^{ij}} - Σ_{Ẍ^i Ẍ^j | U^{ij}} ‖_{hs} = O_P(b_n).
Finally, we combine Theorem <ref> through Theorem <ref> to come up with the convergence rate of Σ̂Ẍ i Ẍ j | Ûij. Since there are numerous cross references among the conditions in these theorems, to make a clear presentation we list all the original conditions in the next theorem, even if they already appeared. These conditions are of two categories: those for the step 1 that involves sufficient dimension reduction of X (i,j) versus X -(i,j), and those for the step 2 that involves the estimation of the conjoined conditional covariance operator. We refer to them as the first-level and second-level conditions, respectively.
Suppose the following conditions hold:
(a) (First-level kernel) E[ κ(S,S) ] < ∞ for κ = κ_{X^{(i,j)}} and κ = κ_{X^{-(i,j)}};
(b) (First-level operator) Σ_{X^{-(i,j)} X^{(i,j)}} is a finite-rank operator with rank d_{ij} and
ran( Σ_{X^{-(i,j)} X^{(i,j)}} ) ⊆ ran( Σ_{X^{-(i,j)} X^{-(i,j)}}^2 ),
ran( Σ_{X^{(i,j)} X^{-(i,j)}} ) ⊆ ran( Σ_{X^{(i,j)} X^{(i,j)}} );
all the nonzero eigenvalues of Σ_{X^{(i,j)} X^{-(i,j)}} Σ_{X^{-(i,j)} X^{-(i,j)}}^{-1} Σ_{X^{-(i,j)} X^{(i,j)}} are distinct;
(c) (First-level tuning parameters) n^{-1/2} ≺ η_n ≺ 1, n^{-1/2} ≺ ϵ_n ≺ 1, η_n^{-3/2} ϵ_n^{-1} n^{-1} + η_n^{-1} n^{-1/2} + η_n^{1/2} + ϵ_n ≺ 1;
(d) (Second-level kernel) E[ κ(S,S) ] < ∞ is satisfied for κ = κ_{U^{ij}}, κ_{XU_i,ij}, and κ_{XU_j,ij}; furthermore, they are transparent kernels;
(e) (Second-level operators) Σ_{U^{ij} U^{ij}}^{-1} Σ_{U^{ij} (X^i U^{ij})} and Σ_{U^{ij} U^{ij}}^{-1} Σ_{U^{ij} (X^j U^{ij})}
are bounded linear operators;
(f) (Second-level tuning parameter) δ_n ≍ η_n^{-3/2} ϵ_n^{-1} n^{-1} + η_n^{-1} n^{-1/2} + η_n + ϵ_n.
Then
‖ Σ̂_{Ẍ^i Ẍ^j | Û^{ij}} - Σ_{Ẍ^i Ẍ^j | U^{ij}} ‖_{hs} = O_P( η_n^{-3/2} ϵ_n^{-1} n^{-1} + η_n^{-1} n^{-1/2} + η_n + ϵ_n ).
Using this result we immediately arrive at the variable selection consistency of the Sufficient Graphical Model.
Under the conditions in Theorem <ref>, if
η_n^{-3/2} ϵ_n^{-1} n^{-1} + η_n^{-1} n^{-1/2} + η_n + ϵ_n ≺ ρ_n ≺ 1
and
Ê = { (i,j) ∈ Γ × Γ : i > j, ‖ Σ̂_{Ẍ^i Ẍ^j | Û^{ij}} ‖_{hs} ≥ ρ_n },
then lim_{n→∞} P( Ê = E ) = 1.
§.§ Optimal rates of tuning parameters
The convergence rate in Theorem <ref> depends on ϵ n and η n explicitly, and δ n implicitly (in the sense that δ n ≍η n -3/2ϵ n -1 n -1 + η n -1 n -1/2 + η n + ϵ n is optimal for fixed ϵ n and η n). Intuitively, when ϵ n, η n, and δ n increase, the biases increase and variances decrease; when they decrease, the biases decrease and the variances increase. Thus there should be critical rates for them that balance the bias and variance, which are the optimal rates.
Under the conditions in Theorem <ref>, if ϵ_n, η_n, and δ_n are of the form n^{-a}, n^{-b}, and n^{-c} for some a > 0, b > 0, and c > 0, then
(i) the optimal rates of the tuning parameters are
n^{-3/8} ≼ ϵ_n ≼ n^{-1/4}, η_n ≍ n^{-1/4}, δ_n ≍ n^{-1/4};
(ii) the optimal convergence rate of the estimated conjoined conditional covariance operator is
‖ Σ̂_{Ẍ^i Ẍ^j | Û^{ij}} - Σ_{Ẍ^i Ẍ^j | U^{ij}} ‖_{hs} = O_P(n^{-1/4}).
Note that a whole range of ϵ_n is optimal; this is because the convergence rate does not have a unique minimizer. This also means the result is not very sensitive to this tuning parameter.
In the above asymptotic analysis, we have treated p as fixed when n →∞. We have also developed the consistency and convergence rate in the scenario where the dimension of p n of X goes to infinity with n, which is placed in the Supplementary Material (Section S9) due to limited space.
§ SIMULATION
In this section we compare the performance of our sufficient graphical model with previous methods such as <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, and a Naïve method which is based on the conjoined conditional covariance operator without the dimension reduction step.
By design, the sufficient graphical model has advantages over these existing methods under the following circumstances. First, since the sufficient graphical model does not make any distributional assumption, it should outperform <cit.> and <cit.> when the Gaussian or copula Gaussian assumptions are violated; second, due to the sufficient dimension reduction in sufficient graphical model, it avoids the curse of dimensionality and should outperform <cit.>, <cit.>, and a Naïve method in the high-dimensional setting; third, since sufficient graphical model does not require additive structure, it should outperform <cit.> when there is severe nonadditivity in the model. Our simulation comparisons will reflect these aspects.
For the sufficient graphical model, <cit.>, and the Naïve method, we use the Gaussian radial basis function as the kernel. The regularization constants ϵ_{X^{(i,j)}}, ϵ_{X^{-(i,j)}}, and ϵ_{U^{(i,j)}} are chosen by the generalized cross validation criterion described in Section <ref> with the grid {10^{-ℓ} : ℓ = -1,0,1,2,3,4}. The kernel parameters γ_{X^{(i,j)}}, γ_{X^{-(i,j)}}, γ_{XU_i,ij}, γ_{XU_j,ij}, and γ_{U^{ij}} are chosen according to (<ref>). Because the outcomes of tuning parameters are stable, for each model, we compute the generalized cross validation for the first five samples and use their average value for the rest of the simulation.
The performance of each estimate is assessed using the averaged receiver operating characteristic curve as a function of the threshold ρ.
The accuracy of a method across all ρ is measured by the area under the receiver operating characteristic curve.
To isolate the factors that affect accuracy, we first consider two models with relatively small dimensions and large sample sizes, which are
Model 1: X^1 = ϵ^1, X^2 = ϵ^2, X^3 = sin(2X^1) + ϵ^3,
X^4 = (X^1)^2 + (X^2)^2 + ϵ^4, X^5 = ϵ^5;
Model 2: X^1 = ϵ^1, X^2 = X^1 + ϵ^2, X^3 = ϵ^3, X^4 = (X^1 + X^3)^2 + ϵ^4,
X^5 = cos(2X^2X^3) + ϵ^5, X^6 = X^4 + ϵ^6,
where the ϵ^i, i = 1, …, p, are independent and identically distributed standard normal random variables. The edge sets of the two models are
Model 1: E = { (1,3), (1,4), (2,4), (1,2) },
Model 2: E = { (1,2), (1,4), (3,4), (1,3), (2,5), (3,5), (2,3), (4,6) }.
We use n = 100, 1000 for each model, and for each n, we generate 50 samples to compute the averaged receiver operating characteristic curves. The dimension d ij for sufficient graphical model is taken to be 2 for all cases (we have also used d ij = 1 and the results are very similar to those presented here).
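For reproducibility, a minimal numpy sketch of the two data-generating mechanisms is given below (our own illustration, not the authors' code; the seed is arbitrary):

import numpy as np

def model1(n, rng):
    e = rng.standard_normal((n, 5))
    X = np.empty((n, 5))
    X[:, 0] = e[:, 0]
    X[:, 1] = e[:, 1]
    X[:, 2] = np.sin(2 * X[:, 0]) + e[:, 2]
    X[:, 3] = X[:, 0]**2 + X[:, 1]**2 + e[:, 3]
    X[:, 4] = e[:, 4]
    return X

def model2(n, rng):
    e = rng.standard_normal((n, 6))
    X = np.empty((n, 6))
    X[:, 0] = e[:, 0]
    X[:, 1] = X[:, 0] + e[:, 1]
    X[:, 2] = e[:, 2]
    X[:, 3] = (X[:, 0] + X[:, 2])**2 + e[:, 3]
    X[:, 4] = np.cos(2 * X[:, 1] * X[:, 2]) + e[:, 4]
    X[:, 5] = X[:, 3] + e[:, 5]
    return X

rng = np.random.default_rng(2023)
X1, X2 = model1(100, rng), model2(100, rng)   # one n = 100 replicate of each model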
The plots in the first row of Figure <ref> show the averaged receiver operating characteristic curves for the seven methods, with the following plotting symbol assignment:
Sufficient graphical model: red solid line <cit.>: red dotted line
<cit.>: black solid line <cit.>: black dotted line
<cit.>: red dashed line Naïve: blue dotted line
<cit.>: black dashed line
From these figures we see that
the two top performers are clearly sufficient graphical model and <cit.>, and their performances are very similar. Note that none of the two models satisfies the Gaussian or copula Gaussian assumption, which explains why sufficient graphical model and <cit.> outperform <cit.> and <cit.>. Sufficient graphical model and <cit.> also outperform <cit.>, <cit.>, and Naïve method, indicating that curse of dimensionality already takes effect on the fully nonparametric methods. The three nonparametric estimators have similar performances. Also note that Model I has an additive structure, which explains the slight advantage of <cit.> over sufficient graphical model in subfigure (a) of Figure <ref>; Model II is not additive, and the advantage of <cit.> disappears in subfigure (b) of Figure <ref>.
We next consider two models with relatively high dimensions and small sample sizes. A convenient systematic way to generate larger networks is via the hub structure. We choose p = 200, and randomly generate ten hubs h 1, …, h 10 from the 200 vertices. For each h k, we randomly select a set H k of 19 vertices to form the neighborhood of h k. With the network structures thus specified, our two probabilistic models are
Model 3: X^i = 1 + |X^{h_k}|^2 + ϵ^i, where i ∈ H_k ∖ {h_k},
Model 4: X^i = sin((X^{h_k})^3) ϵ^i, where i ∈ H_k ∖ {h_k},
and the ϵ^i's are the same as in Models 1 and 2. Note that, in Model 3, the dependence of X^i on X^{h_k} is through the conditional mean E(X^i | X^{h_k}), whereas in Model 4, the dependence is through the conditional variance var(X^i | X^{h_k}).
For each model, we choose two sample sizes n=50 and n=100. The averaged receiver operating characteristic curves (again averaged over 50 samples) are presented in the second row in Figure <ref>. From the figures we see that, in the high-dimensional setting with p > n, sufficient graphical model substantially outperforms all the other methods, which clearly indicates the benefit of dimension reduction in constructing graphical models.
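A sketch of the hub-based generator follows (ours, not the authors' code; the exclusion of other hubs from each neighborhood and the overwriting of any overlapping neighbors are simplifying assumptions):

import numpy as np

def hub_model(n, p=200, n_hubs=10, nbr_size=19, variant=3, seed=0):
    """Generate Model 3 (conditional-mean dependence) or Model 4
    (conditional-variance dependence) over a random hub structure."""
    rng = np.random.default_rng(seed)
    hubs = rng.choice(p, size=n_hubs, replace=False)
    X = rng.standard_normal((n, p))            # non-neighbors stay pure noise
    edges = set()
    for h in hubs:
        candidates = [j for j in range(p) if j != h and j not in hubs]
        for j in rng.choice(candidates, size=nbr_size, replace=False):
            eps = rng.standard_normal(n)
            if variant == 3:
                X[:, j] = 1 + np.abs(X[:, h])**2 + eps
            else:                               # Model 4
                X[:, j] = np.sin(X[:, h]**3) * eps
            edges.add((min(h, j), max(h, j)))
    return X, edges

X, true_edges = hub_model(n=50, variant=4)     # p > n setting used in the text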
We now consider a Gaussian graphical model to investigate any efficiency loss incurred by the sufficient graphical model. Following a structure similar to that used in <cit.>, we choose p = 20, n = 100, 200, and the model
Model 5: X ∼ N(0, Θ^{-1}),
where Θ is a 20 × 20 precision matrix with diagonal entries 1, 1, 1, 1.333, 3.010, 3.203, 1.543, 1.270, 1.544, 3, 1, 1, 1.2, 1, 1, 1, 1, 3, 2, 1, and nonzero off-diagonal entries θ_{3,5} = 1.418, θ_{4,10} = -0.744, θ_{5,9} = 0.519, θ_{5,10} = -0.577, θ_{13,17} = 0.287, θ_{17,20} = 0.542, θ_{14,15} = 0.998. As expected, Figure <ref> shows that <cit.>, <cit.>, and <cit.> perform better than the sufficient graphical model in this case. However, the sufficient graphical model still performs reasonably well and significantly outperforms the fully nonparametric methods.
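The Gaussian benchmark can be simulated directly from the stated precision matrix; the sketch below (ours) symmetrizes the listed off-diagonal entries and draws n observations:

import numpy as np

diag = [1, 1, 1, 1.333, 3.010, 3.203, 1.543, 1.270, 1.544, 3,
        1, 1, 1.2, 1, 1, 1, 1, 3, 2, 1]
offdiag = {(3, 5): 1.418, (4, 10): -0.744, (5, 9): 0.519, (5, 10): -0.577,
           (13, 17): 0.287, (17, 20): 0.542, (14, 15): 0.998}   # 1-based indices

Theta = np.diag(diag).astype(float)
for (i, j), v in offdiag.items():
    Theta[i - 1, j - 1] = Theta[j - 1, i - 1] = v

rng = np.random.default_rng(5)
X = rng.multivariate_normal(mean=np.zeros(20), cov=np.linalg.inv(Theta), size=100)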
Finally, we conducted some simulation on the generalized cross validation criterion (<ref>) for determining the threshold ρ n. We generated samples from Models I through V as described above, produced the receiver operating characteristic curves using sufficient graphical model, and determined the threshold ρ n by (<ref>). The results are presented in Figure S1 in the Supplementary Material. In each penal, the generalized cross validation-determined threshold ρ n are represented by the black dots on the red receiver operating characteristic curves.
§ APPLICATION
We now apply sufficient graphical model to a data set from the DREAM 4 Challenge project and compare it with other methods.
The goal of this Challenge is to recover gene regulation networks from simulated steady-state data.
A description of this data set can be found in <cit.>.
Since <cit.> already compared their method with <cit.>, <cit.>, <cit.>, <cit.>, and Naïve method for this dataset and demonstrated the superiority of <cit.> among these estimators, here we will focus on the comparison of the sufficient graphical model with <cit.> and the champion method for the DREAM 4 Challenge.
The data set contains data from five networks each of dimension of 100 and sample size 201. We use the Gaussian radial basis function kernel for sufficient graphical model and <cit.> and the tuning methods described in Section <ref>. For sufficient graphical model, the dimensions d ij are taken to be 1. We have also experimented with d ij = 2 but the results (not presented here) show no significant difference. Because networks are available, we can compare the receiver operating characteristic curves and their areas under the curve's, which are shown in Table <ref>.
As we can see from Table <ref>, sufficient graphical model has the same areas under the receiver operating characteristic curve values as <cit.> for Networks 2, 3, and 4, performs better than <cit.> for Network 5, but trails slightly behind <cit.> for Network 1; sufficient graphical model has the same areas under the curve as the champion method, performs better for Network 5 and worse for Network 1. Overall, sufficient graphical model and <cit.> perform similarly in this dataset, and they are on a par with the champion method. We should point out that sufficient graphical model and <cit.> are purely empirical; they employ no knowledge about the underlying physical mechanism generating the gene expression data. However, according to <cit.>, the champion method did use a differential equation that reflects the underlying physical mechanism.
The results for threshold determination are presented in Figure S2 in the Supplementary Material.
§ DISCUSSION
This paper is a first attempt to take advantage of the recently developed nonlinear sufficient dimension reduction method to nonparametrically estimate the statistical graphical model while avoiding the curse of dimensionality. Nonlinear sufficient dimension reduction is used as a module and applied repeatedly to evaluate conditional independence, which leads to a substantial gain in accuracy in the high-dimensional setting.
Compared with the Gaussian and copula Gaussian methods, our method is not affected by the violation of the Gaussian and copula Gaussian assumptions. Compared with the additive method <cit.>, our method does not require an additive structure and retains the conditional independence as the criterion to determine the edges, which is a commonly accepted criterion. Compared with fully nonparametric methods, sufficient graphical model avoids the curse of dimensionality and significantly enhances the performance.
The present framework opens up several directions for further research. First, the current model assumes that the central class S X (i,j) | X -(i,j) is complete, so that generalized sliced inverse regression is the exhaustive nonlinear sufficient dimension reduction estimate. When this condition is violated, generalized sliced inverse regression is no longer exhaustive and we can employ other nonlinear sufficient dimension reduction methods such as the generalized sliced averaged variance estimation <cit.> to recover the part of the central class that generalized sliced inverse regression misses. Second, though we have assumed that there is a proper sufficient sub-σ-field G -(i,j) for each (i,j), the proposed estimation procedure is still justifiable when no such sub-σ-field exists. In this case, U ij is still the most important set of functions that characterize the statistical dependence of X (i,j) on X -(i,j) – even though it is not sufficient. Without sufficiency, our method may be more appropriately called the Principal Graphical Model than the sufficient graphical model. Third, the current method can be extended to functional graphical model, which are common in medical applications such as EEG and fMRI. Several functional graphical models have been proposed recently, by
<cit.>, <cit.>, <cit.>, and <cit.>. The idea of a sufficient graph can be applied to this setting to improve efficiency.
This paper also contains some theoretical advances that are novel to nonlinear sufficient dimension reduction. For example, it introduces a general framework to characterize how the error of nonlinear sufficient dimension reduction propagates to the downstream analysis in terms of convergence rates. Furthermore, the results for convergence rates of various linear operators allowing the dimension of the predictor to go to infinity are the first of its kind in nonlinear sufficient dimension reduction. These advances will benefit the future development of sufficient dimension reduction in general, beyond the current context of estimating graphical models.
Bing Li's research on this work was supported in part by the NSF Grant DMS-1713078. Kyongwon Kim's work was supported by the National Research Foundation of Korea(NRF) grant funded by the Korea government(MSIT) (No.2021R1F1A1046976, RS-2023-00219212), basic Science Research Program through the National Research Foundation of Korea(NRF) funded by the Ministry of Education (2021R1A6A1A10039823).
§ SUPPLEMENTARY MATERIAL
Supplementary material includes proofs of all theorems, lemmas, corollaries, and propositions in the paper, asymptotic development for the high-dimensional setting, some additional simulation plots for threshold determination.
|
http://arxiv.org/abs/2307.04209v1 | 20230709154400 | Sharper Asymptotically Optimal CDC Schemes via Combinatorial Designs | [
"Yingjie Cheng",
"Gaojun Luo",
"Xiwang Cao",
"Martianus Frederic Ezerman",
"San Ling"
] | cs.IT | [
"cs.IT",
"math.CO",
"math.IT"
] |
Sharper Asymptotically Optimal CDC Schemes via Combinatorial Designs
Yingjie Cheng, Gaojun Luo, Xiwang Cao, Martianus Frederic Ezerman, and San Ling
Y. Cheng, and X. Cao are with the Department of Mathematics, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China, and also with Key Laboratory of Mathematical Modeling and High Performance Computing of Air Vechicles (NUAA), MIIT, Nanjing 210016, China, e-mails: { xwcao,chengyingjie}@nuaa.edu.cn
G. Luo, M. F. Ezerman, and S. Ling are with the School of Physical and Mathematical Sciences, Nanyang Technological University, 21 Nanyang Link, Singapore 637371, e-mails: { gaojun.luo, fredezerman, lingsan}@ntu.edu.sg.
G. Luo, M. F. Ezerman, and S. Ling are supported by Nanyang Technological University Research Grant No. 04INS000047C230GRT01. X. Cao, Y. Cheng, and G. Luo are also supported by the National Natural Science Foundation of China under Grant 12171241.
August 12, 2023
========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Coded distributed computing (CDC) was introduced to greatly reduce the communication load for MapReduce computing systems. Such a system has K nodes, N input files, and Q Reduce functions. Each input file is mapped by r nodes and each Reduce function is computed by s nodes. The architecture must allow for coding techniques that achieve the maximum multicast gain. Some CDC schemes that achieve optimal communication load have been proposed before. The parameters N and Q in those schemes, however, grow too fast with respect to K to be of great practical value. To improve the situation, researchers have come up with some asymptotically optimal cascaded CDC schemes with s+r=K from symmetric designs.
In this paper, we propose new asymptotically optimal cascaded CDC schemes. Akin to known schemes, ours have r+s=K and make use of symmetric designs as construction tools. Unlike previous schemes, ours have much smaller communication loads, given the same set of parameters K, r, N, and Q. We also expand the construction tools to include almost difference sets. Using them, we have managed to construct a new asymptotically optimal cascaded CDC scheme.
Almost difference set, coded distributed computing, communication load, symmetric design.
§ INTRODUCTION
Processing large amount of data efficiently is a must in this era of big data. Handling such a computational task lies beyond the capability of a single computer. The challenge to complete huge computational assignments motivates the design of distributed computing systems. The main objective is to greatly expedite task execution by letting distributed computing nodes perform computational jobs in parallel by exploiting the distributed nature of available resources, both computing and storage. It is often the case that a large amount of data needs to be exchanged among the computing nodes, which limits the system's performance. In a Facebook Hadoop cluster, for example, it has been observed that 33% of the overall job execution time was spent on data shuffling <cit.>. We know from <cit.> that 70% of the overall job execution time is spent on data shuffling when running a self-join application on an Amazon EC2 cluster.
S. Li et al. in <cit.> introduced coded distributed computing(CDC) to reduce the communication load in distributed computing systems. The reduction is the result of CDC's capability to increase the computation load of the so-called Map functions to create novel coding opportunities. Some systems, which had already been in use by then, including Dean and Ghemawat's MapReduce <cit.> and Spark of Zaharia et al. from <cit.>, could subsequently be improved.
We call a system a (K,N,r,s,Q)-CDC when the system has K computing nodes, N input data files of equal size, and Q output values, each of which is computed by a function on the N files. A computation in this system is divided into three phases, namely Map, Shuffle, and Reduce. In the Map phase, a given input file is exclusively mapped by a distinct r-subset of computing nodes to Q intermediate values (IVs) of T bits each. In the Shuffle phase, each Reduce function is assigned to an s-subset of computing nodes. Subsequently, all computing nodes generate coded symbols from their respective local IVs in such a way that each computing node can derive the needed IVs that it cannot, by itself, calculate locally. In the Reduce phase, any computing node can compute each Reduce function assigned to it after receiving the coded signals during the Shuffle phase. We underline the fact that nodes have to spend most of their execution time in exchanging IVs among themselves, causing a substantial communication bottleneck in the system <cit.>. Hence, it is highly desirable to reduce the execution time in the Shuffle phase.
A fundamental trade-off between computation load in the Map phase and communication load in the Shuffle phase was formulated and characterized by Li et al. in <cit.>. Increasing the computation load by a factor of r can reduce the communication load by the same factor. The authors of the said work also proposed several CDC schemes that achieved the optimal communication load. Their main idea is as follows. In the Map phase the nodes need to compute some side information locally. In the Shuffle phase, the nodes exchange some coded data among themselves. The side information makes each coded data simultaneously useful for multiple Reduce tasks.
In a general (K,N,r,s,Q)-CDC scheme, if s=1, then each Reduce function is calculated by exactly one node. This scheme is similar to the coded caching scheme for the D2D network treated in, e.g., <cit.> and <cit.>. If s≥ 1, then each Reduce function is calculated by multiple nodes. The scheme is known as cascaded CDC scheme. Numerous works, e.g., <cit.>, <cit.>, <cit.> and <cit.>, proposed CDC schemes with stragglers. In heterogeneous networks, CDC schemes have been studied in <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>. Extension to various setups had been pursued. Without attempting a complete listing, we mention works on CDC schemes in wireless network in <cit.> and <cit.>, and in the context of matrix multiplication in <cit.> and <cit.>. Our present work focuses on cascaded CDC schemes. Prior works on such schemes include <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>.
The scheme in <cit.>, henceforth the Li-CDC, splits the data set into N=Kr files and designs Q=Ks output functions, with r being the average number of nodes that store each file and s being the average number of nodes that calculate each function. The Li-CDC achieves the minimum communication load. The number N=Kr of files and the number Q=Ks of functions in the Li-CDC grow too fast with respect to K for practical scenarios. This was shown by Konstantinidis and Ramamoorthy in <cit.>. Woolsey, Chen, and Ji in <cit.> introduced a combinatorial structure called hypercube structure to design the file and function assignments. Their scheme requires the data set to be split into N=(K/r)^r-1 files and designs Q=(K/r)^r-1 output functions. They also showed that the communication load of their scheme is close to that of the one in <cit.>. Jiang and Qu in <cit.> put forward some cascaded CDC schemes with N=(K/r)^r-1 and Q=K/(K,s) by using placement delivery array. Such an array had previously been introduced by Yan et al. to construct coded caching schemes in <cit.>. The communication load of the schemes built by Jiang and Qu, however, is about twice as large as that of the Li-CDC. Recently, Jiang, Wang, and Zhou in <cit.> used a symmetric balanced incomplete block design (SBIBD) to generate the data placement and the Reduce function assignment to obtain an asymptotically optimal scheme with K=N=Q. In <cit.>, Cheng, Wu, and Li proposed some asymptotically optimal schemes based on t-designs, with t≥ 2. More specifically, their main tool consists of t-group divisible designs (GDDs). For ease of reference, we list the above-mentioned known cascaded CDC schemes in the first part of Table <ref>.
This paper has two main contributions. First, we construct a new class of asymptotically optimal schemes for the cascaded case with r+s=K. In our construction, we carefully arrange the data placement and assign the Reduce functions by using symmetric designs that meet specific requirements. As shown in Table <ref>, our new schemes have the following advantages.
* Compared with the Li-CDC scheme in <cit.>, our schemes have much smaller N and Q. Using the known symmetric designs listed in Table <ref>, the respective communication loads of our schemes approximate that of the Li-CDC scheme.
* Compared with the schemes of Jiang and Qu in <cit.>, ours have smaller respective communication loads for the same (K,r,N,Q). Although both our schemes and those in <cit.> make use of symmetric designs in their constructions, we devise a different transmission scheme from what Jiang and Qu had chosen.
Second, we present a class of new asymptotically optimal cascaded CDC schemes. They are constructed based on specially built 1-designs from almost difference sets. Although our schemes bear some similarities with the schemes of Cheng, Wu, and Li in <cit.>, the parameters differ, as shown in Table <ref>.
In terms of organization, Section <ref> introduces useful properties of symmetric designs, almost difference sets, and cascaded CDC systems. We explain two new constructions of CDC schemes in Section <ref>. Comparative performance analysis of our new schemes relative to known schemes can be found in Section <ref>. The section also collects concluding remarks.
§ PRELIMINARIES
We denote by |·| the cardinality of a set or the length of a vector. For any positive integers a and b with a<b, we use [a,b] to denote the set {a,a+1,…,b}. If a=1, then we use the shorter form [b].
§.§ Cascaded Coded Distributed Computing Systems
In a coded distributed computing system, K distributed computing nodes compute Q Reduce functions by taking advantage of N input files, each of equal size. Let W={w_1,w_2,…,w_N} be the set of the N files, each of size B bits. The set of functions is 𝒬={ϕ_1,ϕ_2,…,ϕ_Q}, where, for any q∈[Q], ϕ_q maps the N files to a C-bit value u_q:=ϕ_q(w_1,w_2,…,w_N) ∈𝔽_2^C. Figure <ref> depicts how each output function ϕ_q is decomposed into
ϕ_q(w_1,w_2,…,w_N) = h_q(g_q,1(w_1),g_q,2(w_2),…,g_q,N(w_N)).
Here, g_q,n is a Map function for any q∈[Q] and n∈[N], whereas h_q is a Reduce function for any q∈[Q]. We name v_q,n := g_q,n(w_n) ∈𝔽_2^T, where q∈[Q] and n∈[N], an intermediate value (IV) of length T. Figure <ref> shows that a cascaded CDC consists of three phases.
* Map Phase: Each node k∈𝒦 stores M files. For each file w_n, let 𝒟_n represent the set of nodes, each of which stores file w_n. We can then write the files stored by node k as elements in the set
𝒵_k={w_n : n∈[N],k∈𝒟_n}.
Using the stored files in (<ref>) and the Map functions in {g_q,n(·) : q ∈ [Q] n ∈ [N]}, node k can compute the IVs in
ℐ_k={v_q,n=g_q,n(w_n) ∈𝔽_2^T: q∈[Q],n∈[N],k∈𝒟_n}.
* Shuffle Phase: For any k∈𝒦, let
𝒬_k={ϕ_q : q∈[Q],k∈𝒜_q}
be the set of output functions to be calculated by node k. Collectively and in a coordinated way, the nodes exchange calculated IVs such that each node can derive the IVs that it cannot locally calculated. The node k, for k∈𝒦, multicasts a coded message X_k of length ℓ_k.
* Reduce Phase: Upon receiving the coded signals 𝒳={X_1,X_2,…,X_K} and its locally computed IVs in ℐ_k, node k∈𝒦 can compute each Reduce function in 𝒬_k.
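To make the decomposition concrete, the toy Python sketch below (our own; the word-count functions and the tiny file contents are invented for illustration) mimics the three phases for an uncoded system: every Map output g_{q,n}(w_n) is an IV, and each Reduce function h_q aggregates the IVs of one output value:

# Toy decomposition: count occurrences of Q keywords across N files.
files = {1: "a b a", 2: "b c", 3: "a c c"}          # w_1..w_N
keywords = {1: "a", 2: "b", 3: "c"}                  # output functions phi_1..phi_Q

def g(q, n):                       # Map: IV v_{q,n} = g_{q,n}(w_n)
    return files[n].split().count(keywords[q])

def h(q, ivs):                     # Reduce: u_q = h_q(v_{q,1}, ..., v_{q,N})
    return sum(ivs)

# Map phase: compute all IVs; the Shuffle phase would route v_{q,n} to the reducers of phi_q.
iv = {(q, n): g(q, n) for q in keywords for n in files}
# Reduce phase
u = {q: h(q, [iv[(q, n)] for n in files]) for q in keywords}
print(u)   # {1: 3, 2: 2, 3: 3}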
Keeping the relevant definitions from <cit.>, we know that there are two important quantities that measure the goodness of a CDC. First, the average number of nodes that store each file is the computation load
r = ∑_{k=1}^{K} |𝒵_k| / N.
Second, the ratio of the amount of transmitted data to the product Q N T is the communication load
L=∑^K_k=1ℓ_k/Q N T.
<cit.>
Let K ∈ ℕ. Given r, s ∈ [K], there exists a CDC scheme that achieves the optimal communication load
L = ∑_{ℓ = max(r+1,s)}^{min(r+s,K)} ( \binom{K-r}{K-ℓ} \binom{r}{ℓ-s} / \binom{K}{s} ) · (ℓ-r)/(ℓ-1),
where r is the computation load and s is the number of nodes that calculate each function.
Given K, r, and s, we call any CDC scheme whose L is as in (<ref>) a
Li-CDC scheme and denote by L_ Li the communication load of a Li-CDC scheme. It is clear from Lemma <ref> that, given r and s, we seek to minimize L.
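Since L_Li is used repeatedly as a benchmark, a small helper (ours, not from the paper) that evaluates the expression in the lemma with exact rational arithmetic is convenient; the example parameters are illustrative:

from fractions import Fraction
from math import comb

def L_li(K, r, s):
    """Optimal communication load of the lemma for a (K, r, s) cascaded CDC."""
    total = Fraction(0)
    for ell in range(max(r + 1, s), min(r + s, K) + 1):
        total += Fraction(comb(K - r, K - ell) * comb(r, ell - s), comb(K, s)) \
                 * Fraction(ell - r, ell - 1)
    return total

print(L_li(7, 3, 4))   # K = 7, r = 3, s = 4 -> 13/25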
§.§ Almost Difference Sets
We recall useful results on almost difference sets. Let (A,+) be a finite abelian group of order n and let D be a subset of size k in (A,+). The difference function on a subset D of (A,+) is
diff_D(x) = |D∩(D+x)|,
where D+x={y+x:y∈ D} and x∈ A.
<cit.>
Let (A,+) be an abelian group of order n. A k-subset D of A is an (n,k,λ,t) almost difference set (ADS) of A if diff_D(x) takes on λ altogether t times and λ+1 altogether n-1-t times as x traverses the nonzero elements of A.
We list useful facts from <cit.>. Let an abelian group (A,+) be given.
* If an (n,k,λ,t) ADS exists, then
k(k-1)=tλ+(n-1-t)(λ+1).
* If D is an (n,k,λ,t) ADS, then its complement D^c=A ∖ D is an (n,n-k,n-2k+λ,t) ADS in (A,+).
* If D is an (n,k,0,t) ADS with
t=n-1-k(k-1), then D is also called a modular Golomb ruler in (A,+).
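The facts above are easy to check by brute force. The helper below (ours) computes diff_D over Z_n and reports the (n, k, λ, t) parameters of a candidate set; the two sets checked at the end reappear in the examples of Section III:

from collections import Counter

def ads_parameters(n, D):
    """Classify D as an (n, k, lam, t) almost difference set in Z_n, where
    diff_D takes the value lam exactly t times and lam + 1 otherwise."""
    D = {d % n for d in D}
    diff = [len(D & {(d + x) % n for d in D}) for x in range(1, n)]
    counts = Counter(diff)
    if len(counts) == 1:                 # constant difference function
        (lam,) = counts
        return n, len(D), lam, n - 1     # a (perfect) difference set
    if len(counts) == 2:
        lo, hi = sorted(counts)
        if hi == lo + 1:
            return n, len(D), lo, counts[lo]
    return None                          # not an almost difference set

print(ads_parameters(6, {0, 1, 3}))   # (6, 3, 1, 4)
print(ads_parameters(6, {0, 1}))      # (6, 2, 0, 3): a modular Golomb ruler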
Ruzsa introduced a class of modular Golomb ruler, which we will use in the next section, in <cit.>.
<cit.>
For every prime p, there exists a (p^2-p,p-1,0,2p-3) almost difference set. The missing differences are the 2p-2 multiples of p or p-1.
§.§ Symmetric Designs
We gather useful results on relevant combinatorial designs.
<cit.>
Let 𝒳 be a set of v elements. Let ℬ:={B_1,B_2,…,B_u} be such that B_i⊆𝒳 and |B_i| =t for any i∈[u]. Given any two distinct elements x,y∈𝒳, if there exist exactly λ elements in ℬ containing them, then (𝒳,ℬ) is a (v,t,λ) balanced incomplete block design (BIBD). For each i∈[u], we call B_i a block.
Since any BIBD is also a 2-design, we have
|ℬ|=λ v (v-1)/t (t-1) for any (v,t,λ) BIBD (𝒳,ℬ).
We will soon use symmetric designs as a main construction tool for a class of CDC schemes.
<cit.>
A (v,t,λ) BIBD (𝒳,ℬ) is a (v,t,λ) symmetric design (SD) if |ℬ|=v.
<cit.>
Given a (v,t,λ) symmetric design (𝒳,ℬ), the following statements hold.
* Each x∈𝒳 is contained in t blocks among the elements of ℬ.
* For any two distinct blocks B and B' in ℬ, we have | B⋂ B'|=λ.
* λ=t (t-1)/v-1.
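These properties are straightforward to verify computationally. The sketch below (ours) checks them for the Fano plane, the (7,3,1) symmetric design whose blocks are reused in the example of Section III-A:

from itertools import combinations

# Blocks of the (7, 3, 1) symmetric design (Fano plane), points 1..7.
blocks = [{1, 2, 4}, {2, 3, 5}, {3, 4, 6}, {4, 5, 7}, {1, 5, 6}, {2, 6, 7}, {1, 3, 7}]
points = set().union(*blocks)
v, t = len(points), len(blocks[0])

assert len(blocks) == v                                         # symmetric: b = v
assert all(sum(x in B for B in blocks) == t for x in points)    # each point in t blocks
lam = t * (t - 1) // (v - 1)                                    # lambda = t(t-1)/(v-1) = 1
assert all(len(B & C) == lam for B, C in combinations(blocks, 2))
assert all(sum({x, y} <= B for B in blocks) == lam
           for x, y in combinations(points, 2))                 # every pair in lambda blocks
print("valid (%d, %d, %d) symmetric design" % (v, t, lam))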
In <cit.>, Ionin and van Trung listed the parameters of four known classes of symmetric designs.
<cit.>
A (v,k,λ) symmetric design exists if its parameters can be found in Table <ref>.
The four classes of symmetric designs in the table play important roles in our construction of asymptotically optimal cascaded CDC schemes.
§ TWO CONSTRUCTIONS OF CDC SCHEMES
§.§ Construction One
This subsection introduces a new construction method for the case r≠ s. Let (𝒳,𝔅) be an (N,t,λ) symmetric design, 𝒳={x_1,x_2,…,x_N}, and 𝔅 = {ℬ_1,ℬ_2,…,ℬ_N}. By Lemma <ref>, any two distinct blocks ℬ_i and ℬ_j intersect in exactly λ points. We now construct a CDC scheme with N nodes, where 𝒦 = ℬ, on N files, which are elements of 𝒲 = {w_x_1,w_x_2,…,w_x_N}, and N functions in 𝒬 = {ϕ_x_1,ϕ_x_2,…,ϕ_x_N}. Each node stores t files and each Reduce function is computed by s nodes.
During the Map phase, each node ℬ∈𝔅 stores the files in the set Z_ℬ={w_x : x∈ℬ, x∈𝒳}. Since |ℬ|=t, for any block B, the computation load is
r=∑^N_i=1 |Z_i|/N=t N/N=t.
In the Shuffle phase, we arrange each node ℬ∈𝔅 to compute the Reduce functions
𝒬_ℬ= {u_y = ϕ_y(w_x_1,w_x_2,…,w_x_N) : y∈𝒳, y∈ℬ},
with ℬ denoting the complement set of ℬ with respect to 𝒳. Given the assigned stored files and the set 𝒬_ℬ, node ℬ can compute the intermediate values in the set
ℐ_ℬ = {v_y,x = g_y,x(w_x) : x,y∈𝒳, x∈ℬ}.
Hence, for any x,y∈𝒳 and for any block ℬ∈𝔅, the intermediate value v_y,x is both required and cannot be locally computed by node ℬ if and only if y ∈ℬ and x ∈ℬ, i.e., y ∉ℬ and x ∉ℬ. The intermediate value v_y,x is locally computable by node ℬ if and only if x∈ℬ.
Based on what we have just investigated, we can divide the delivery strategy into two classes. The first class is for the N intermediate values v_x,x : x ∈𝒳. We cluster each v_x,x into t segments as
v_x,x = (v^ℬ_k_1_x,x, v^ℬ_k_2_x,x, …, v^ℬ_k_t_x,x),
where x ∈ℬ_k_i for each i∈ [t]. Since v_x,x∈𝔽_2^T, we know that v^ℬ_k_i_x,x∈𝔽_2^T/t.
A node ℬ_k has access to t stored files in {w_z_1,w_z_2,…,w_z_t}, giving it the intermediate values in
𝒱_k = {v_w_z_1,w_z_1,v_w_z_2,w_z_2, …,v_w_z_t,w_z_t}.
If α_1,α_2,…,α_t∈𝔽_2^T/t are all distinct, then t must be a divisor of T and T ≥ t^2. Node ℬ_k multicasts the t-λ signals
X^ℬ_k[1] = v^ℬ_k_w_z_1,w_z_1+ v^ℬ_k_w_z_2,w_z_2+…+ v^ℬ_k_w_z_t,w_z_t,
X^ℬ_k[2] =α_1v^ℬ_k_w_z_1,w_z_1+ α_2v^ℬ_k_w_z_2,w_z_2+ …+α_tv^ℬ_k_w_z_t,w_z_t,
⋮
X^ℬ_k[t-λ]
=α^t-λ-1_1v^ℬ_k_w_z_1, w_z_1+α^t-λ-1_2v^ℬ_k_w_z_2, w_z_2+…+ α^t-λ-1_tv^ℬ_k_w_z_t,w_z_t,
which we express as
[ X^ℬ_k[1]; X^ℬ_k[2]; ⋮; X^ℬ_k[t-λ] ]
=
[ 1 1 ⋯ 1; α_1 α_2 ⋯ α_t; ⋮ ⋮ ⋱ ⋮; α^t-λ-1_1 α^t-λ-1_2 ⋯ α^t-λ-1_t; ] [ v^ℬ_k_w_z_1,w_z_1; v^ℬ_k_w_z_2,w_z_2; ⋮; v^ℬ_k_w_z_t,w_z_t ].
The total number of bits transmitted by ℬ_k is, therefore, (t-λ)T/t, which comes from (t-λ)1/t intermediate values. Thus, the total number of intermediate values transmitted by all the nodes combined is (t-λ)v/t.
If a node ℬ is unable to compute v_y,y, then w_y∉ℬ. Hence, there exist nodes ℬ_u_i : i∈[t] such that w_y∈ℬ_u_i. Without loss of generality, let ℬ_u_1 be a node whose stored files are in {w_y,w_ℓ_1,w_ℓ_2,…,w_ℓ_t-1}. By Lemma <ref>, we have |ℬ_u_1⋂ℬ|=λ. If these λ stored files are elements of {w_ℓ_t-λ,w_ℓ_t-λ+1,…, w_ℓ_t-1}, then node ℬ can locally compute
v^ℬ_u_1_w_ℓ_t-λ, w_ℓ_t-λ, v^ℬ_u_1_w_ℓ_t-λ+1,w_ℓ_t-λ+1, …, v^ℬ_u_1_w_ℓ_t-1,w_ℓ_t-1.
Thus, ℬ only needs to solve the system of equations
[ X^ℬ_u_1[1]- ∑^t_i=t-λ+1 v^ℬ_u_1_w_l_i-1,w_l_i-1; X^ℬ_u_1[2]-∑^t_i=t-λ+1α_iv^ℬ_u_1_w_l_i-1,w_l_i-1; ⋮; X^ℬ_u_1[t-λ]-∑^t_i=t-λ+1α^t-λ-1_iv^ℬ_u_1_w_l_i-1,w_l_i-1 ]
=
[ 1 1 ⋯ 1; α_1 α_2 ⋯ α_t-λ; ⋮ ⋮ ⋱ ⋮; α^t-λ-1_1 α^t-λ-1_2 ⋯ α^t-λ-1_t-λ; ] [ v^ℬ_u_1_w_y,w_y; v^ℬ_u_1_w_l_1,w_l_1; ⋮; v^ℬ_u_1_w_l_t-λ-1,w_l_t-λ-1 ].
The coefficient matrix is clearly Vandermonde. Since α_1,α_2,…,α_t are all distinct, node ℬ decodes v^ℬ_u_1_w_y,w_y for node ℬ_u_1. Similarly, node ℬ can also derive v^ℬ_u_i_w_y,w_y for any node ℬ_u_i : i∈{2,3,…,t}. Thus, node ℬ can derive
v_w_y,w_y = {v^ℬ_u_1_w_y, w_y,v^ℬ_u_2_w_y,w_y, …, v^ℬ_u_t_w_y,w_y}.
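The decoding step is simply the solution of a Vandermonde system after subtracting the locally known segments. The numpy sketch below (ours) illustrates the idea with real numbers standing in for the finite field 𝔽_{2^{T/t}}, an assumption made purely for readability:

import numpy as np

t, lam = 3, 1                            # e.g. the (7, 3, 1) design: t - lam = 2 coded signals
alphas = np.array([1.0, 2.0, 3.0])       # distinct evaluation points alpha_1..alpha_t
segments = np.array([5.0, 7.0, 11.0])    # the three segments held by the sending node

# Encoder at node B_k: t - lam coded signals
V = np.vander(alphas, N=t - lam, increasing=True).T   # rows: alpha^0, alpha^1
coded = V @ segments

# Decoder at a node that shares lam files with B_k (so it knows segments[t-lam:]):
known = V[:, t - lam:] @ segments[t - lam:]
recovered = np.linalg.solve(V[:, :t - lam], coded - known)
print(recovered)        # equals segments[:t-lam], the two segments it could not compute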
Proceeding to the second class of intermediate values v_x,y, where x, y ∈𝒳 are distinct, we cluster v_x,y into the λ segments
v_x,y= (v^ℬ_s_1_x,y, v^ℬ_s_2_x,y,…, v^ℬ_s_λ_x,y),
where x,y∈ℬ_s_i for any i∈ [λ]. Since v_x,y∈𝔽_2^T, it is immediate to confirm that v^ℬ_s_i_x,y∈𝔽_2^T/λ for any i∈ [λ].
Any node ℬ_s has access to t stored files in {w_a_1,w_a_2,…,w_a_t}. Hence, ℬ_s has the intermediate values in
{v_w_a_1,w_a_2, v_w_a_1,w_a_3, …,v_w_a_1,w_a_t,…,v_w_a_t, w_a_1,v_w_a_t,w_a_2,…, v_w_a_t,w_a_t-1}.
If β_1, β_2, …, β_{t-1} ∈ 𝔽_2^{T/λ} are all distinct, then λ divides T and T ≥ λ(t-1). The t(t-λ-1) signals that node ℬ_s multicasts can be expressed as
[ Y_i^ℬ_s[1]; Y_i^ℬ_s[2]; ⋮; Y_i^ℬ_s[t-λ-1] ]
=
[ 1 1 ⋯ 1; β_1 β_2 ⋯ β_t-1; ⋮ ⋮ ⋱ ⋮; β^t-λ-2_1 β^t-λ-2_2 ⋯ β^t-λ-2_t-1; ][ v^ℬ_s_w_a_i,w_a_1; v^ℬ_s_w_a_i,w_a_2; ⋮; v^ℬ_s_w_a_i,w_a_t ],
where i∈ [t]. The total number of bits transmitted by ℬ_s is, therefore, t (t-λ-1) T/λ, which comes from t (t-λ-1) 1/λ intermediate values. Thus, the total number of intermediate values v_x,y transmitted by all nodes combined is t (t-λ-1) v/λ.
If a node ℬ_m is unable to compute v_x,y, then w_x,w_y∉ℬ_m. Since (𝒳,𝔅) is a symmetric design, there exist λ nodes ℬ_n_i : i∈[λ] with access to files w_x and w_y. Without loss of generality, let ℬ_n_1 be a node such that its stored files are the elements in {w_x,w_y,w_b_1,w_b_2,…,w_b_t-2}. By Lemma <ref>, |ℬ_n_1⋂ℬ_m|=λ. If the λ stored files are the elements in {w_b_t-λ-1, w_b_t-λ, …,w_b_t-2}, then node ℬ can locally compute
v^ℬ_n_1_w_x, w_b_t-λ-1, v^ℬ_n_1_w_x, w_b_t-λ, …, v^ℬ_n_1_w_x,w_b_t-2.
Thus, ℬ_m only needs to solve the system of equations
[ Y_x^ℬ_n_1[1] - ∑^t_i=t-λ+1 v^ℬ_n_1_w_x,w_b_i-2; Y_x^ℬ_n_1[2] - ∑^t_i=t-λ+1β_i-1 v^ℬ_n_1_w_x,w_b_i-2; ⋮; Y_x^ℬ_n_1[t-λ-1]- ∑^t_i=t-λ+1β^t-λ-2_i-1 v^ℬ_n_1_w_x,w_b_i-2 ]
=
[ 1 1 ⋯ 1; β_1 β_2 ⋯ β_t-λ-1; ⋮ ⋮ ⋱ ⋮; β^t-λ-2_1 β^t-λ-2_2 ⋯ β^t-λ-2_t-λ-1 ][ v^ℬ_n_1_w_x,w_y; v^ℬ_n_1_w_x,w_b_1; ⋮; v^ℬ_n_1_w_x,w_b_t-2 ].
The coefficient matrix is obviously Vandermonde. Since β_1,β_2,…,β_t-1 are all distinct, node ℬ_m decodes v^ℬ_n_1_w_x,w_y for node ℬ_n_1. Similarly, node ℬ_m can provide v^ℬ_n_i_w_x,w_y to any node ℬ_n_i : i∈{2,3,…,λ}. Thus, node ℬ_m can derive
v_w_x,w_y= (v^ℬ_n_1_w_x,w_y, v^ℬ_n_2_w_x,w_y,…,
v^ℬ_n_t_w_x,w_y).
Since λ=t(t-1)/v-1 in the known (v,t,λ) SD, the communication load is
L = ( t(t-1-λ)(v/λ)T + (v/t)(t-λ)T ) / (Q N T)
= ( t(t-1-λ)(v/λ) + (v/t)(t-λ) ) / v^2
= ( t( t-1 - t(t-1)/(v-1) ) (v-1)/(t(t-1)) + (1/t)( t - t(t-1)/(v-1) ) ) / v
= ( (v-1)^2 - t(v-1) + v - 1 - t + 1 ) / ( v(v-1) )
= ( (v-1)^2 - tv + v ) / ( v(v-1) ).
In the Reduce phase, we know that each node ℬ∈𝔅 can derive the intermediate values
{v_x,y : x,y∈𝒳, x,y∈ℬ}
during the Shuffle phase. Node ℬ can locally compute the Reduce functions
𝒬_ℬ = {u_y = ϕ_y(w_x_1,w_x_2,…,w_x_N) : y∈𝒳, y∈ℬ}.
We formalize the above discussions in the following theorem.
Given a (v,t,λ) SD with t>λ+1, one can construct a CDC scheme with v distributed computing nodes, N=v files and Q=v output functions such that
* each output function is computed by s=v-t nodes,
* the computation load is r=t, and
* the communication load is L = ( (v-1)^2 - tv + v ) / ( v(v-1) ).
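The loads of this theorem are easy to tabulate and to compare against the optimal load of Lemma <ref>. The short script below (ours) does this in exact rational arithmetic for three small symmetric designs, the (7,3,1) Fano plane and the (11,5,2) and (37,9,2) biplanes:

from fractions import Fraction
from math import comb

def L_li(K, r, s):
    return sum(Fraction(comb(K - r, K - ell) * comb(r, ell - s), comb(K, s))
               * Fraction(ell - r, ell - 1)
               for ell in range(max(r + 1, s), min(r + s, K) + 1))

def L_theorem(v, t):
    return Fraction((v - 1)**2 - t * v + v, v * (v - 1))

for v, t in [(7, 3), (11, 5), (37, 9)]:
    K, r, s = v, t, v - t
    print(v, t, float(L_theorem(v, t)), float(L_li(K, r, s)))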
We use (7,3,1) SD in an example to illustrate our construction.
When N=Q=K=7, there are 7 files in 𝒲 = {w_1,w_2,…,w_7} and 7 functions in 𝒬= {ϕ_1, ϕ_2,…,ϕ_7}. In the first stage, the nodes store the respective files
𝒵_ℬ_1 ={w_1,w_2,w_4},
𝒵_ℬ_2 ={w_2,w_3,w_5},
𝒵_ℬ_3 ={w_3,w_4,w_6},
𝒵_ℬ_4 ={w_4,w_5,w_7},
𝒵_ℬ_5 ={w_1,w_5,w_6},
𝒵_ℬ_6 ={w_2,w_6,w_7},
𝒵_ℬ_7 ={w_1,w_3,w_7}.
The computation load is r=3 · 7/7=3.
If the Reduce functions are arranged by nodes as
𝒬_ℬ_1 ={ϕ_3,ϕ_5,ϕ_6,ϕ_7},
𝒬_ℬ_2 ={ϕ_1,ϕ_4,ϕ_6,ϕ_7}, 𝒬_ℬ_3 ={ϕ_1,ϕ_2,ϕ_5,ϕ_7},
𝒬_ℬ_4 ={ϕ_1,ϕ_2,ϕ_3,ϕ_6},
𝒬_ℬ_5 ={ϕ_2,ϕ_3,ϕ_4,ϕ_7}, 𝒬_ℬ_6 ={ϕ_1,ϕ_3,ϕ_4,ϕ_5}, 𝒬_ℬ_7 ={ϕ_2,ϕ_4,ϕ_5,ϕ_6},
then each function is computed by s=4 nodes.
The locally computable intermediate values, arranged by nodes, can be listed as
ℐ_ℬ_1 ={v_q,n : q∈[7], n∈{1,2,4}},
ℐ_ℬ_2 ={v_q,n : q∈[7], n∈{2,3,5}},
ℐ_ℬ_3 ={v_q,n : q∈[7], n∈{3,4,6}},
ℐ_ℬ_4 ={v_q,n : q∈[7], n∈{4,5,7}},
ℐ_ℬ_5 ={v_q,n : q∈[7], n∈{1,5,6}}, ℐ_ℬ_6 ={v_q,n : q∈[7], n∈{2,6,7}},
ℐ_ℬ_7 ={v_q,n : q∈[7], n∈{1,3,7}}.
Table <ref> lists the intermediate values required by each of the nodes. We cluster each v_x,x : x∈𝒳 into 3-segments
v_1,1 = (v^ℬ_1_1,1,v^ℬ_5_1,1,
v^ℬ_7_1,1),
v_2,2 = (v^ℬ_1_2,2, v^ℬ_2_2,2, v^ℬ_6_2,2),
v_3,3 = (v^ℬ_2_3,3, v^ℬ_3_3,3, v^ℬ_7_3,3),
v_4,4 = (v^ℬ_1_4,4, v^ℬ_3_4,4, v^ℬ_4_4,4),
v_5,5 = (v^ℬ_2_5,5, v^ℬ_4_5,5, v^ℬ_5_5,5),
v_6,6 = (v^ℬ_3_6,6, v^ℬ_5_6,6, v^ℬ_6_6,6),
v_7,7 = (v^ℬ_4_7,7, v^ℬ_6_7,7,v^ℬ_7_7,7).
When this is the case, the nodes can collectively send the coded signals listed in Table <ref>, with distinct α_1,α_2,α_3∈𝔽_2^T/3. Node ℬ_1, for instance, sends the coded signals
v^ℬ_1_1,1+ v^ℬ_1_2,2+ v^ℬ_1_4,4α_1 v^ℬ_1_1,1 + α_2 v^ℬ_1_2,2 + α_3 v^ℬ_1_4,4.
After receiving the signals in (<ref>), node ℬ_2 can individually decode the intermediate values v^ℬ_1_1,1 by using the locally computed intermediate value v^ℬ_1_2,2. Similarly, node ℬ_2 can decode the required intermediate values v^ℬ_5_1,1 and v^ℬ_7_1,1 from nodes ℬ_5 and ℬ_7, respectively. Doing so allows node ℬ_2 to decode v_1,1. It is straightforward to verify that the situation holds for each node and the required value v_x,x : x ∈[7].
Let us now consider v_x,y : x ≠ y. Node ℬ_1, for example, sends the coded signal v_1,2 + v_1,4. Upon receiving the signal, nodes ℬ_3 and ℬ_4 can individually decode v_1,2 by using the locally computable v_1,4. Nodes ℬ_2 and ℬ_6 can individually decode v_1,4 from the locally computable v_1,2. Similarly, all other nodes can obtain their respective intermediate values. Thus, the communication load of our scheme is L=7 ·2/3 + 3 · 7/7 · 7 = 11/21. When K=7, r=3, and s=4, we reproduce a cascaded CDC scheme from <cit.> with N=Q=7 whose communication load L'=7-3/7-1=2/3 is larger than that of ours.
§.§ Construction Two
Cheng, Wu, and Li in <cit.> constructed some asymptotically optimal cascaded CDC schemes by using t-designs and t-GDDs with t ≥ 2. We propose a construction of such schemes based on 1-designs. For the case of r=s we use almost difference (AD) sets. For any (n,k,λ,μ) AD set (A,D) with λ < k-1, we denote by (A,+) the abelian group {0,1,…,n-1} under addition and by D the set {i_1,i_2,…,i_k : i_t ∈ {0,1,…,n-1} for t ∈ [k]}. There exist n subsets ℬ_r = {i'_1,i'_2,…,i'_k} ⊆ A, where i'_t ≡ i_t + r - 1 (mod n) with t ∈ [k] and r ∈ [n]. By the definition of the difference function, when λ < k-1, we know that ℬ_u ≠ ℬ_v for any u,v ∈ [n] with u ≠ v. Letting 𝔅 = {ℬ_1,ℬ_2,…,ℬ_n}, we confirm that (A,𝔅) is a 1-design with parameters (n,k,k). To verify that (A,𝔅) is not a 2-design, we observe that, if {a,b} ⊆ A with diff_D(a-b) = λ+1, then {a,b} is contained in λ+1 elements of 𝔅. If {c,d} ⊆ A with diff_D(c-d) = λ, then {c,d} is contained in λ elements of 𝔅. Focusing on the \binom{n}{2} subsets of two elements of A, there are, respectively, nμ/2 and (n-1-μ)n/2 such subsets which are contained in λ and λ+1 elements of 𝔅.
We use A={0,1,2,3,4,5} to form an abelian group under addition. We verify that D={0,1,3} is a (6,3,1,4) AD set, where the function diff_D(x) takes on 1, in total, 4 times, if x∈{1,2,4,5}, and takes on 2 once if x=3. Our construction yields the composite structure (A,𝔅), where
𝔅 = {{0,1,3},{1,2,4},{2,3,5},{3,4,0},{4,5,1},{5,2,0}}.
We confirm that (A,𝔅) is a 1-design with parameters (6,3,3). It, however, is not a 2-design since the pairs
{0,3}, {1,4}, {2,5}
are contained in 2 elements of 𝔅, but the pairs
{0,1}, {0,2}, {0,4}, {0,5}, {1,2}, {1,3}, {1,5}, {2,3}, {2,4}, {3,4}, {3,5}, {4,5}
are contained in only a single element of 𝔅.
Let (A,+)={0,1,2,3,4,5} be the abelian group. We confirm that D={0,1} is a (6,2,0,3) AD set. Its diff_D(x) takes on 1, in total, twice for x ∈{1,5} and 0, in total, 3 times if x∈{2,3,4}. We have the composite structure (A,𝔅) with
𝔅 = {{0,1},{1,2},{2,3},{3,4},{4,5},{5,0}}.
We verify that (A,𝔅) is a 1-design with parameter (6,2,2). It is not a 2-design since the pairs
{0,1}, {1,2}, {2,3}, {3,4}, {4,5}, {0,5}
are contained in a single element of 𝔅, but the pairs
{0,2}, {0,3}, {0,4}, {1,3}, {1,4}, {1,5}, {2,4}, {2,5}, {3,5}
are not contained in any element of 𝔅.
We refine our construction into two cases: λ≥1 and λ=0. We start with the case of λ≥1 and construct a CDC scheme with N nodes, 𝒦=ℬ, n files in 𝒲 = {w_0,w_1,…,w_n-1}, and n functions in 𝒬 = {ϕ_0,ϕ_1,…,ϕ_n-1}. Each node stores k files and each Reduce function is computed by s nodes.
During the Map phase, let each node ℬ∈𝔅 store the files in Z_ℬ={w_x : x∈ℬ, x∈𝒳}. Since the cardinality of any block is |ℬ| = k, the computation load is
r=∑^n_i=1| Z_i|/n = kn/n=k.
In the Shuffle phase, let each node ℬ∈𝔅 be arranged to compute the Reduce functions in
𝒬_ℬ = {u_y=ϕ_y(w_0,w_1,…,w_n-1) : y∈ A, y∈ℬ}
Using the stored files and the functions in 𝒬, node ℬ can compute the intermediate values
ℐ_ℬ = {v_y,x = g_y,x(w_x) : x,y∈ A,x∈ℬ}.
For any x,y∈ A and any block ℬ∈𝔅, the intermediate value v_y,x is required. It is not locally computable by node ℬ if and only if y∈ℬ and x∉ℬ. On the other hand, v_y,x is locally computable by node ℬ if and only if x∈ℬ. We devise our delivery strategy accordingly.
If x,y∈ A are such that | diff_D(x-y)| =λ+1, then there exist λ+1 nodes with access to the pair of files (x,y). We call these nodes ℬ_1,j with j ∈[λ+1]. There are k-λ-1 nodes with access to file x but not file y. We label these nodes ℬ_2,u with u ∈[k-λ-1]. There are k-λ-1 nodes with access to file y but not file x. We name these nodes ℬ_3,v with v∈[k-λ-1].
Our delivery strategy must allow for the exchange of relevant intermediate values among the nodes. Each node ℬ_1,j can locally compute v_x,y and v_y,x since it stores files w_x and w_y. Each node ℬ_2,u can locally compute v_y,x since it stores file w_x but requires v_x,y from some other nodes. This node does not store w_y but is assigned to compute the Reduce function u_y. Each node ℬ_3,v can locally compute v_x,y since it stores file w_y but requires v_y,x from some other nodes. This node does not store w_x but is assigned to compute the Reduce function u_x.
We divide v_x,y and v_y,x into λ+1 sub-intermediate values
v_y,x={v^(1)_y,x,v^(2)_y,x,…,v^(λ+1)_y,x} v_x,y={v^(1)_x,y,v^(2)_x,y,…,v^(λ+1)_x,y}.
Node ℬ_{1,j} multicasts the coded signal v^{(j)}_{y,x} + v^{(j)}_{x,y}, so that the λ+1 signals { v^{(i_1)}_{y,x} + v^{(i_1)}_{x,y} : i_1 ∈ [λ+1] } reach the nodes ℬ_{2,i_2} and ℬ_{3,i_3}, with i_2, i_3 ∈ [k-λ-1]. Hence, any ℬ_{2,i_2} and ℬ_{3,i_3} can derive v^{(i_1)}_{x,y} and v^{(i_1)}_{y,x}, respectively. Since (A,D) is an (n,k,λ,μ) AD set, for each node ℬ ∈ 𝔅 there exist n-1-μ pairs (x,y), with {x,y} ⊆ ℬ, such that diff_D(x-y) = λ+1. Focusing on the \binom{n}{2} subsets of two elements of A, we infer that there are (n-1-μ)n/2 such subsets which are contained in λ+1 elements of 𝔅. Thus, in this particular delivery strategy, there are exactly (n-1-μ)(λ+1)n/2 transmitted sub-intermediate values, each of which has T/(λ+1) bits. The total number of bits transmitted by the nodes is (n-1-μ)Tn/2.
If u,v ∈ A are such that diff_D(u-v) = λ, then the same exchange scheme applies with λ segments in place of λ+1. For these pairs the number of bits transmitted is μTn/2, so that in total μTn/2 + (n-1-μ)Tn/2 = n(n-1)T/2 bits are transmitted. The communication load is therefore L = ( n(n-1)T/2 ) / ( n^2 T ) = (n-1)/(2n), leading us to the following theorem.
Given an (n,k,λ,μ) almost difference set (A,D) with 1 ≤ λ < k-1, one can construct a CDC scheme with n distributed computing nodes, N=n files, and Q=n output functions such that each output function is computed by s=k nodes. The scheme's respective computation and communication loads are r=k and L = (n-1)/(2n).
Continuing from Example <ref>, we can construct the following coded distributed computing. When N=Q=K=6, we have 6 files 𝒲={w_0,w_1,⋯,w_5} and 6 output functions 𝒬={ϕ_0,ϕ_1,…,ϕ_5}. In the Map phase, the nodes and their respective stored files are
𝒵_ℬ_1 ={w_0,w_1,w_3}, 𝒵_ℬ_2 ={w_1,w_2,w_4}, 𝒵_ℬ_3 ={w_2,w_3,w_5},
𝒵_ℬ_4 ={w_3,w_4,w_0}, 𝒵_ℬ_5 ={w_4,w_5,w_1}, 𝒵_ℬ_6 ={w_5,w_0,w_2}.
The computation load is r=3 · 6/6=3.
Let the Reduce functions be arranged by nodes, such that each function is computed by s=3 nodes, as
𝒬_ℬ_1 ={ϕ_0,ϕ_1,ϕ_3}, 𝒬_ℬ_2 ={ϕ_1,ϕ_2,ϕ_4}, 𝒬_ℬ_3 ={ϕ_2,ϕ_3,ϕ_5},
𝒬_ℬ_4 ={ϕ_3,ϕ_4,ϕ_0}, 𝒬_ℬ_5 ={ϕ_4,ϕ_5,ϕ_1}, 𝒬_ℬ_6 ={ϕ_5,ϕ_0,ϕ_2}.
The indicated nodes can then locally compute their respective intermediate values
ℐ_ℬ_1 ={v_q,n : q∈{0,1,2,3,4,5}, n∈{0,1,3}},
ℐ_ℬ_2 ={v_q,n : q∈{0,1,2,3,4,5}, n∈{1,2,4}},
ℐ_ℬ_3 ={v_q,n : q∈{0,1,2,3,4,5}, n∈{2,3,5}},
ℐ_ℬ_4 ={v_q,n : q∈{0,1,2,3,4,5}, n∈{3,4,0}},
ℐ_ℬ_5 ={v_q,n : q∈{0,1,2,3,4,5}, n∈{4,5,1}},
ℐ_ℬ_6 ={v_q,n : q∈{0,1,2,3,4,5}, n∈{5,0,2}}.
Table <ref> lists the required intermediate values in relation to the nodes.
We cluster all intermediate values v_0,3, v_3,0, v_1,4, v_4,1, v_5,2 and v_2,5 into the 2-segments
v_0,3 = (v^(1)_0,3,v^(2)_0,3),
v_1,4 = (v^(1)_1,4,v^(2)_1,4),
v_2,5 = (v^(1)_2,5,v^(2)_2,5),
v_3,0 = (v^(1)_3,0,v^(2)_3,0),
v_4,1 = (v^(1)_4,1,v^(2)_4,1),
v_5,2 = (v^(1)_5,2,v^(2)_5,2).
The nodes send the coded signals listed in Table <ref>. Node ℬ_1, for instance, sends v_0,1+v_1,0. After receiving it, nodes ℬ_4 and ℬ_6 get the required v_0,1 since they can locally compute v_1,0. Similarly, nodes ℬ_2 and ℬ_5 can obtain v_1,0 after receiving v_0,1+v_1,0. Inspecting the rest of the nodes, we have analogous situation. The communication load is (2+1/2)6/6· 6 = 5/12.
We continue to the case of λ=0. If {u,w}⊆ A such that | diff_D(u-w)|=1, then there exists an element of 𝔅 that contain u and w. If {u,w}⊆ A such that | diff_D(u-w)|=0, then there is no element of 𝔅 that contains u and w. We recall that an (n,k,0,μ) almost difference sets are also known as modular Golomb rulers.
Since the Map phase is the same as in the case of λ≥ 1 above, the computation load is also r=k.
In the Shuffle phase, if each node ℬ∈𝔅 is to compute the Reduce functions in
𝒬_ℬ = {u_y=ϕ_y(w_0,w_1,…,w_n-1) : y∈ A, y∈ℬ},
then s=r=k. Using the stored files and the functions in 𝒬, node ℬ can compute the intermediate values
ℐ_ℬ = {v_y,x = g_y,x(w_x) : x,y∈ A,x∈ℬ}.
Hence, for any x,y∈ A and any block ℬ∈𝔅, the intermediate value v_y,x is required but not locally computable by node ℬ if and only if y ∈ℬ and x ∉ℬ. It is locally computable by node ℬ if and only if x∈ℬ. By a similar analysis as in the case of λ≥ 1, there exist k(k-1)n/2 pairs of elements of A which are contained by elements of 𝔅. On the other hand, there exist nμ/2 pairs of elements of A which are not contained in any element of 𝔅. We adjust the delivery strategy accordingly.
First, for the k(k-1)n/2 pairs, we use the delivery strategy in the proof of Theorem <ref>. There are k(k-1)nT/2 transmitted signals in total. Second, for the remaining nμ/2 intermediate values v_u,w for which {u,w} is not contained in any element of 𝔅, no node can broadcast the coded signal v_u,w+v_w,u. If u,w ∈ A are files such that | diff_D(u-w)|=0, then u and w are stored by nodes whose block representatives contain u. Both files are required by nodes whose block representatives contain w. We collect the respective blocks containing u and w into sets
𝔅_u ={ℬ^u_1, ℬ^u_2,…,ℬ^u_k}𝔅_w = {ℬ^w_1,ℬ^w_2,…, ℬ^w_k}.
We split v_u,w into k sub-intermediate values
v^(1)_u,w, v^(2)_u,w, …, v^(k)_u,w.
Node ℬ^w_i sends v^(i)_u,w to the nodes in 𝔅_u. Clearly, each node in 𝔅_u can obtain v_u,w from the k sub-intermediate values sent by the nodes in 𝔅_w. In total, the transmitted signals have length (nμ+k(k-1)n/2)T. Since k(k-1)=n-1-μ, the system transmits (n(n-1)-k(k-1)n/2)T signals. From the above discussion, the communication load is L=(2n-2-k(k-1))/(2n). We have thus proved the next result.
Given an (n,k,0,μ) almost difference set (A,D), one can construct a CDC scheme with n distributed computing nodes, N=n files, and Q=n output functions such that each output function is computed by s=k nodes. The respective computation and communication loads are r=k and L=(2n-2-k(k-1))/(2n).
Continuing from Example <ref>, we construct the following CDC scheme. When N=Q=K=6, we have 6 files in 𝒲={w_0,w_1,…,w_5} and 6 output functions in 𝒬={ϕ_0,ϕ_1,…,ϕ_5}. In the Map phase, the nodes and their respective stored files are
𝒵_ℬ_1 ={w_0,w_1},
𝒵_ℬ_2 ={w_1,w_2}, 𝒵_ℬ_3 ={w_2,w_3},
𝒵_ℬ_4 ={w_3,w_4}, 𝒵_ℬ_5 ={w_4,w_5}, 𝒵_ℬ_6 ={w_5,w_0}.
Hence, the computation load is r=2 · 6/6=2.
Let the Reduce functions be arranged by nodes such that each function is computed by s=2 nodes as
𝒬_ℬ_1 ={ϕ_0,ϕ_1},
𝒬_ℬ_2 ={ϕ_1,ϕ_2}, 𝒬_ℬ_3 ={ϕ_2,ϕ_3},
𝒬_ℬ_4 ={ϕ_3,ϕ_4}, 𝒬_ℬ_5 ={ϕ_4,ϕ_5}, 𝒬_ℬ_6 ={ϕ_5,ϕ_0}.
The indicated nodes can then compute the respective intermediate values
ℐ_ℬ_1 ={v_q,n : q∈{0,1,2,3,4,5}, n∈{0,1}},
ℐ_ℬ_2 ={v_q,n : q∈{0,1,2,3,4,5}, n∈{1,2}},
ℐ_ℬ_3 ={v_q,n : q∈{0,1,2,3,4,5}, n∈{2,3}},
ℐ_ℬ_4 ={v_q,n : q∈{0,1,2,3,4,5}, n∈{3,4}},
ℐ_ℬ_5 ={v_q,n : q∈{0,1,2,3,4,5}, n∈{4,5}},
ℐ_ℬ_6 = {v_q,n : q∈{0,1,2,3,4,5}, n∈{5,0}}.
Table <ref> lists the required intermediate values in relation to the nodes.
We cluster the intermediate values
v_0,2, v_0,3, v_0,4, v_1,3, v_1,4, v_1,5, v_2,4, v_2,5, v_2,0, v_3,1, v_3,5, v_3,0, v_4,1, v_4,2, v_4,0, v_5,2, v_5,3, v_5,1
into the 2-segments
v_0,2 = (v^(1)_0,2,v^(2)_0,2), v_0,3 = (v^(1)_0,3,v^(2)_0,3), v_0,4 = (v^(1)_0,4,v^(2)_0,4),
v_1,3 = (v^(1)_1,3,v^(2)_1,3),
v_1,4 =(v^(1)_1,4,v^(2)_1,4),
v_1,5 =(v^(1)_1,5,v^(2)_1,5),
v_2,4 =(v^(1)_2,4,v^(2)_2,4), v_2,5 =(v^(1)_2,5,v^(2)_2,5), v_2,0 =(v^(1)_2,0,v^(2)_2,0),
v_3,5 =(v^(1)_3,5,v^(2)_3,5), v_3,0 =(v^(1)_3,0,v^(2)_3,0), v_3,1 =(v^(1)_3,1,v^(2)_3,1),
v_4,2 =(v^(1)_4,2,v^(2)_4,2), v_4,0 =(v^(1)_4,0,v^(2)_4,0), v_4,1 =(v^(1)_4,1,v^(2)_4,1),
v_5,2 =(v^(1)_5,2,v^(2)_5,2), v_5,3 =(v^(1)_5,3,v^(2)_5,3), v_5,1 =(v^(1)_5,1,v^(2)_5,1).
In this case, the nodes can send the coded signals listed in Table <ref>. Node ℬ_1, for example, sends v_0,1+v_1,0. After receiving the signal, node ℬ_6 can obtain the intermediate value v_0,1 because it can locally compute v_1,0. Similarly, node ℬ_2 can obtain the required v_1,0 after receiving v_0,1+v_1,0. On the other hand, nodes ℬ_2 and ℬ_3 send the respective coded signals v^(1)_0,2 and v^(2)_0,2. Upon receiving v^(1)_0,2 and v^(2)_0,2, nodes ℬ_1 and ℬ_6 can obtain v_0,2. The rest of the nodes can obtain their respective required intermediate values in a similar manner. The communication load is ((1+ 6 ·1/2)· 6)/(6 · 6)=2/3.
Although the schemes in Theorems <ref> and <ref> are quite similar to the scheme in <cit.>, we obtain asymptotically optimal cascaded CDC schemes with different parameters. The next section examines their performance for comparative purposes.
§ PERFORMANCE COMPARISON AND CONCLUDING REMARKS
For fixed (r,s), the number of files N=\binom{K}{r} and the number of functions Q=\binom{K}{s} in the Li-CDC schemes grow quickly as the number of computing nodes K increases. In practical scenarios, as Konstantinidis and Ramamoorthy have shown in <cit.>, this fast growth is detrimental to the performance of the schemes. The numbers of input files and output functions in each of our new schemes, in contrast, are equal to the number of computing nodes, confirming the superiority of our schemes.
What about the respective communication loads? This section compares the communication load of our scheme in Subsection <ref> with that of the Li-CDC. Jiang, Wang, and Zhou in <cit.> have constructed an asymptotically optimal cascaded CDC scheme with r≠ s based on symmetric designs. Their scheme, which we call Jiang-CDC for ease of reference, has a larger communication load than our scheme in Subsection <ref>, given the same input files, output functions, and computation load. Here we also compare the communication load of our scheme with that of the Jiang-CDC scheme in <cit.>.
§.§ On the CDC Schemes from Theorem <ref>
Using a (v,t,λ) SD, a Jiang-CDC scheme with r=t and s=v-t has communication load L_Jiang=(v-t)/(v-1). From the same (v,t,λ) SD, Theorem <ref> gives us a CDC scheme with r=s and (minimum) communication load L_ours=((v-1)^2-tv+v)/(v(v-1)). It is straightforward to prove that L_ours is smaller than L_Jiang for the same number of input files, output functions, computing nodes, r, and s. Let a suitable (v,t,λ) SD be given. For a contradiction, let us assume that L_Jiang≤ L_ours. Hence, (v-t)/(v-1)≤((v-1)^2-tv+v)/(v(v-1)), which is equivalent to v≤ 1. It is then clear that L_Jiang≤ L_ours if and only if v≤ 1, which contradicts the very definition of a symmetric design.
We know that Jiang-CDC schemes constructed from the symmetric designs in Table <ref> are all asymptotically optimal. Thus, using the same symmetric designs, our cascaded CDC schemes are also asymptotically optimal. Figure <ref>
provides a concrete performance comparison between our CDC schemes and the Jiang-CDC schemes based on the specified SDs.
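For concreteness, the two loads can also be tabulated with exact rational arithmetic for a few classical symmetric designs, e.g., the Fano plane, a (7,3,1) SD, the (11,5,2) biplane, and the projective plane of order 3, a (13,4,1) SD; the short Python sketch below is illustrative, and these designs need not coincide with the entries of Table <ref>:

from fractions import Fraction

def L_jiang(v, t):                       # (v-t)/(v-1), the Jiang-CDC load quoted above
    return Fraction(v - t, v - 1)

def L_ours(v, t):                        # ((v-1)^2 - t*v + v)/(v*(v-1)), the minimum load of our scheme
    return Fraction((v - 1) ** 2 - t * v + v, v * (v - 1))

for v, t in [(7, 3), (11, 5), (13, 4)]:  # (v, t, lambda) symmetric designs
    print(f"v={v}, t={t}: ours = {L_ours(v, t)} < Jiang = {L_jiang(v, t)}")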
§.§ On the CDC Schemes from Theorem <ref>
For any prime p, Lemma <ref> and Theorem <ref> lead to a construction of a class of CDC schemes. The class has r=s=p-1, K=p^2-p, and communication load L_1:=(p^2+p-4)/(2(p^2-p)). The class has p^2-p input files and the same number, p^2-p, of output functions. We establish that the schemes in this class are asymptotically optimal, that is, L_1/L_Li converges to 1 when p is large. We begin with the following lemma, whose proof is given in the appendix.
For a positive integer p≥ 5,
∑^p-1_ℓ=0 ℓ \binom{p^2-2p+1}{ℓ}\binom{p-1}{ℓ} > (p-3) \binom{p^2-p}{p-1}.
If p is large, then p^2-p>2(p-1). Taking r=s=p-1 and K=p^2-p in Lemma <ref> yields
L_Li = ∑^2(p-1)_ℓ=p ( (ℓ-(p-1))/(ℓ-1) · \binom{p^2-2p+1}{p^2-p-ℓ}\binom{p-1}{ℓ-(p-1)} / \binom{p^2-p}{p-1} )
= ( 1/\binom{p^2-p}{p-1} ) ∑^p-1_ℓ=1 ( (ℓ/(ℓ+p-2)) \binom{p^2-2p+1}{ℓ}\binom{p-1}{ℓ} ).
By Lemma <ref>, we obtain
L_Li = ( 1/\binom{p^2-p}{p-1} ) ∑^p-1_ℓ=1 ( (ℓ/(ℓ+p-2)) \binom{p^2-2p+1}{ℓ}\binom{p-1}{ℓ} )
≥ ( 1/( (2p-3)\binom{p^2-p}{p-1} ) ) ∑^p-1_ℓ=1 ( ℓ \binom{p^2-2p+1}{ℓ}\binom{p-1}{ℓ} )
> ( 1/( (2p-3)\binom{p^2-p}{p-1} ) ) (p-3) \binom{p^2-p}{p-1} = (p-3)/(2p-3).
On the other hand,
L_Li = ( 1/\binom{p^2-p}{p-1} ) ∑^p-1_ℓ=1 ( (ℓ/(ℓ+p-2)) \binom{p^2-2p+1}{ℓ}\binom{p-1}{ℓ} )
≤ ( 1/\binom{p^2-p}{p-1} ) ( (p-1)/(2p-3) ) ∑^p-1_ℓ=1 \binom{p^2-2p+1}{ℓ}\binom{p-1}{ℓ}
< ( 1/\binom{p^2-p}{p-1} ) ( (p-1)/(2p-3) ) ∑^p-1_ℓ=0 \binom{p^2-2p+1}{ℓ}\binom{p-1}{ℓ} = (p-1)/(2p-3).
Thus, (p-3)/(2p-3) < L_Li < (p-1)/(2p-3). Since
lim_p →∞ (p-3)/(2p-3) = lim_p →∞ (p-1)/(2p-3) = lim_p →∞ L_1 = 1/2,
it follows that lim_p →∞ L_1/L_Li = 1,
confirming that our cascaded CDC scheme is asymptotically optimal. Figure <ref> compares the communication load of our scheme with L_Li.
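This convergence is also easy to observe numerically; the Python sketch below evaluates L_1 and the expression for L_Li above (with r=s=p-1 and K=p^2-p) in exact rational arithmetic and prints the ratio L_1/L_Li for a few primes (the function names are ours):

from fractions import Fraction
from math import comb

def L_li(p):                             # Li-CDC load for r = s = p-1 and K = p^2 - p
    total = sum(Fraction(l, l + p - 2) * comb(p * p - 2 * p + 1, l) * comb(p - 1, l)
                for l in range(1, p))
    return total / comb(p * p - p, p - 1)

def L_1(p):                              # (p^2 + p - 4)/(2*(p^2 - p))
    return Fraction(p * p + p - 4, 2 * (p * p - p))

for p in (5, 7, 11, 13, 17, 19):
    print(p, float(L_1(p) / L_li(p)))    # the ratio slowly approaches 1 as p grows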
§.§ Concluding Remarks
Our constructions highlight the prominent role that combinatorial designs play in minimizing the communication load. We believe that construction routes starting from known combinatorial structures still hold considerable potential for improving the performance of distributed computing schemes. Our schemes are asymptotically optimal. So are many previously known schemes, most notably the Li-CDC and Jiang-CDC schemes. Unlike those prior schemes, however, ours have generally improved communication loads, being consistently closer to the theoretical lower bound. Another significant advantage of our schemes lies in their parameters.
The schemes constructed in Subsection <ref> have N=Q=v, with v as specified in Table <ref>. This means that the growth of the number N (of input files) and Q (of functions to run) can be nicely calibrated to suit practical constraints. The schemes built in Subsection <ref> have N=Q=n, with n being the number of nodes. Since Q ≥ n, the schemes require only the least possible number of computing nodes and the least number of functions to complete the given task.
The framework depicted in Figure <ref> does not incorporate any error-control mechanism. The assumption is that the whole system is robust, i.e., that none of the nodes fails and that the broadcasts are sent and received error-free. In practice, a small number of nodes may fail, or a few intermediate values may not be made available due to transmission errors. The question of error-control coding for CDC schemes appears open for investigation.
§ APPENDIX: PROOF OF LEMMA <REF>
We begin by establishing the inequality
\binom{p^2-2p+1}{p-1} > \binom{p-1}{3} \binom{p^2-2p+1}{p-4}
by observing directly that
\binom{p^2-2p+1}{p-1} / ( \binom{p-1}{3} \binom{p^2-2p+1}{p-4} ) = 6 (p^2-3p+5)(p^2-3p+4)(p^2-3p+3) / ( (p-1)^2 (p-2)^2 (p-3)^2 )
= 6 (p^2-3p+5)(p^2-3p+4)(p^2-3p+3) / ( (p^2-4p+3)^2 (p^2-4p+4) ) > 1.
Our next step is to prove the inequality
2 \binom{p^2-2p+1}{p-4} \binom{p-1}{3} > ∑^p-4_ℓ=0 (p-3-ℓ) \binom{p-1}{p-1-ℓ} \binom{p^2-2p+1}{ℓ}.
For any ℓ∈{0,1,…,p-4}, let
d_ℓ = (p-3-ℓ) \binom{p-1}{p-1-ℓ} \binom{p^2-2p+1}{ℓ}.
As ℓ increases in the range 0,1,…,p-5, the ratio
d_ℓ/d_{ℓ+1} = ( (p-3-ℓ)/(p-4-ℓ) ) · (ℓ+1)^2 / ( (p^2-2p+1-ℓ)(p-1-ℓ) )
is increasing. Hence, for any ℓ∈{0,1,…,p-5},
d_ℓ/d_{ℓ+1} ≤ d_{p-5}/d_{p-4} = 2(p^2-8p+16) / ( 4(p^2-3p+6) ) < 1/2,
making it evident that
d_ℓ < (1/2) d_{ℓ+1} < … < (1/2)^{p-4-ℓ} d_{p-4}
and hence
∑^p-4_ℓ=0 d_ℓ < ∑^p-4_ℓ=0 (1/2)^{p-4-ℓ} d_{p-4} = ( 2-(1/2)^{p-4} ) d_{p-4} < 2 d_{p-4},
settling (<ref>).
Our last step is to establish (<ref>). We use (<ref>) and (<ref>), respectively, to get the last two inequalities in the expression
∑^p-1_ℓ=0 ℓ \binom{p^2-2p+1}{ℓ} \binom{p-1}{ℓ}
= ∑^p-4_ℓ=0 ℓ \binom{p^2-2p+1}{ℓ} \binom{p-1}{p-1-ℓ} + (p-3) ∑^p-1_ℓ=p-3 \binom{p^2-2p+1}{ℓ} \binom{p-1}{p-1-ℓ}
+ \binom{p-1}{1} \binom{p^2-2p+1}{p-2} + 2 \binom{p^2-2p+1}{p-1}
> ∑^p-4_ℓ=0 ℓ \binom{p^2-2p+1}{ℓ} \binom{p-1}{p-1-ℓ} + (p-3) ∑^p-1_ℓ=p-3 \binom{p^2-2p+1}{ℓ} \binom{p-1}{p-1-ℓ} + 2 \binom{p^2-2p+1}{p-1}
> ∑^p-4_ℓ=0 ℓ \binom{p^2-2p+1}{ℓ} \binom{p-1}{p-1-ℓ} + (p-3) ∑^p-1_ℓ=p-3 \binom{p^2-2p+1}{ℓ} \binom{p-1}{p-1-ℓ} + 2 \binom{p^2-2p+1}{p-4} \binom{p-1}{3}
> (p-3) ∑^p-4_ℓ=0 \binom{p^2-2p+1}{ℓ} \binom{p-1}{p-1-ℓ} + (p-3) ∑^p-1_ℓ=p-3 \binom{p^2-2p+1}{ℓ} \binom{p-1}{p-1-ℓ} = (p-3) \binom{p^2-p}{p-1}.
The proof is now complete.
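As a quick sanity check, the inequality of Lemma <ref> can also be verified numerically for small primes with a few lines of Python (the helper names lhs and rhs are ours):

from math import comb

def lhs(p):
    return sum(l * comb(p * p - 2 * p + 1, l) * comb(p - 1, l) for l in range(p))

def rhs(p):
    return (p - 3) * comb(p * p - p, p - 1)

for p in (5, 7, 11, 13):
    assert lhs(p) > rhs(p)
    print(p, lhs(p), ">", rhs(p))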
chowdhury2011
M. Chowdhury, M. Zaharia, J. Ma, M. I. Jordan, and I. Stoica, “Managing data
transfers in computer clusters with orchestra,” ACM SIGCOMM computer
communication review, vol. 41, no. 4, pp. 98–109, 2011.
zhang2013
Z. Zhang, L. Cherkasova, and B. T. Loo, “Performance modeling of mapreduce jobs in heterogeneous cloud environments,” in 2013 IEEE Sixth International Conference on Cloud Computing. IEEE, 2013, pp. 839–846.
li2017
S. Li, M. A. Maddah-Ali, Q. Yu, and A. S. Avestimehr, “A fundamental tradeoff
between computation and communication in distributed computing,” IEEE
Transactions on Information Theory, vol. 64, no. 1, pp. 109–128, 2017.
dean2008
J. Dean and S. Ghemawat, “Mapreduce: simplified data processing on large
clusters,” Communications of the ACM, vol. 51, no. 1, pp. 107–113,
2008.
zaharia2010
M. Zaharia, M. Chowdhury, M. J. Franklin, S. Shenker, I. Stoica et al.,
“Spark: Cluster computing with working sets.” HotCloud, vol. 10, no.
10-10, p. 95, 2010.
ji2015
M. Ji, G. Caire, and A. F. Molisch, “Fundamental limits of caching in wireless
d2d networks,” IEEE Transactions on Information Theory, vol. 62,
no. 2, pp. 849–869, 2015.
agrawal2020
S. Agrawal and P. Krishnan, “Low complexity distributed computing via binary matrices with extension to stragglers,” in 2020 IEEE International Symposium on Information Theory (ISIT). IEEE, 2020, pp. 162–167.
lee2017
K. Lee, M. Lam, R. Pedarsani, D. Papailiopoulos, and K. Ramchandran, “Speeding
up distributed machine learning using codes,” IEEE Transactions on
Information Theory, vol. 64, no. 3, pp. 1514–1529, 2017.
li2016
S. Li, M. A. Maddah-Ali, and A. S. Avestimehr, “A unified coding framework for distributed computing with straggling servers,” in 2016 IEEE Globecom Workshops (GC Wkshps). IEEE, 2016, pp. 1–6.
yan2020
Q. Yan, M. Wigger, S. Yang, and X. Tang, “A fundamental storage-communication
tradeoff for distributed computing with straggling nodes,” IEEE
Transactions on Communications, vol. 68, no. 12, pp. 7311–7327, 2020.
kiamari2017
M. Kiamari, C. Wang, and A. S. Avestimehr, “On heterogeneous coded distributed computing,” in GLOBECOM 2017-2017 IEEE Global Communications Conference. IEEE, 2017, pp. 1–7.
shakya2018
N. Shakya, F. Li, and J. Chen, “On distributed computing with heterogeneous communication constraints,” in 2018 52nd Asilomar Conference on Signals, Systems, and Computers. IEEE, 2018, pp. 1795–1799.
woolsey2021combinatorial
N. Woolsey, R.-R. Chen, and M. Ji, “A combinatorial design for cascaded coded
distributed computing on general networks,” IEEE Transactions on
Communications, vol. 69, no. 9, pp. 5686–5700, 2021.
woolsey2019
——, “Cascaded coded distributed computing on heterogeneous networks,” in 2019 IEEE International Symposium on Information Theory (ISIT). IEEE, 2019, pp. 2644–2648.
woolsey2020coded
——, “Coded distributed computing with heterogeneous function assignments,” in ICC 2020-2020 IEEE International Conference on Communications (ICC). IEEE, 2020, pp. 1–6.
xu2019
F. Xu and M. Tao, “Heterogeneous coded distributed computing: Joint design of file allocation and function assignment,” in 2019 IEEE Global Communications Conference (GLOBECOM). IEEE, 2019, pp. 1–6.
li2019
F. Li, J. Chen, and Z. Wang, “Wireless mapreduce distributed computing,”
IEEE Transactions on Information Theory, vol. 65, no. 10, pp.
6101–6114, 2019.
li2016edge
S. Li, Q. Yu, M. A. Maddah-Ali, and A. S. Avestimehr, “Edge-facilitated wireless distributed computing,” in 2016 IEEE Global Communications Conference (GLOBECOM). IEEE, 2016, pp. 1–7.
lee2017high
K. Lee, C. Suh, and K. Ramchandran, “High-dimensional coded matrix multiplication,” in 2017 IEEE International Symposium on Information Theory (ISIT). IEEE, 2017, pp. 2418–2422.
d2020notes
R. G. D’Oliveira, S. El Rouayheb, D. Heinlein, and D. Karpuk, “Notes on communication and computation in secure distributed matrix multiplication,” in 2020 IEEE Conference on Communications and Network Security (CNS). IEEE, 2020, pp. 1–6.
cheng2023
M. Cheng, Y. Wu, and X. Li, “Asymptotically optimal cascaded coded distributed
computing via combinatorial designs,” arXiv preprint
arXiv:2302.05826, 2023.
jiang2022
J. Jiang, W. Wang, and L. Zhou, “Cascaded coded distributed computing schemes
based on symmetric designs,” IEEE Transactions on Communications,
vol. 70, no. 11, pp. 7179–7190, 2022.
woolsey2021
N. Woolsey, R.-R. Chen, and M. Ji, “A combinatorial design for cascaded coded
distributed computing on general networks,” IEEE Transactions on
Communications, vol. 69, no. 9, pp. 5686–5700, 2021.
jiang2020
J. Jiang and L. Qu, “Cascaded coded distributed computing schemes based on
placement delivery arrays,” IEEE Access, vol. 8, pp.
221 385–221 395, 2020.
kon2020
K. Konstantinidis and A. Ramamoorthy, “Resolvable designs for speeding up
distributed computing,” IEEE/ACM Transactions on Networking, vol. 28,
no. 4, pp. 1657–1670, 2020.
yan2017
Q. Yan, M. Cheng, X. Tang, and Q. Chen, “On the placement delivery array
design for centralized coded caching scheme,” IEEE Transactions on
Information Theory, vol. 63, no. 9, pp. 5821–5833, 2017.
ding2014
C. Ding, Codes from difference sets. World Scientific, 2014.
ruzsa1993
I. Z. Ruzsa, “Solving a linear equation in a set of integers i,” Acta
arithmetica, vol. 65, no. 3, pp. 259–282, 1993.
ionin2006
Y. J. Ionin and T. van Trung, “Symmetric designs,” in Handbook of Combinatorial Designs. Chapman and Hall/CRC, 2006, pp. 136–149.
|
http://arxiv.org/abs/2307.07356v1 | 20230714140325 | SDF-Pack: Towards Compact Bin Packing with Signed-Distance-Field Minimization | ["Jia-Hui Pan", "Ka-Hei Hui", "Xiaojie Gao", "Shize Zhu", "Yun-Hui Liu", "Pheng-Ann Heng", "Chi-Wing Fu"] | cs.RO | ["cs.RO"] |
SDF-Pack: Towards Compact Bin Packing with Signed-Distance-Field Minimization
Jia-Hui Pan, Ka-Hei Hui, Xiaojie Gao, Shize Zhu, Yun-Hui Liu, Pheng-Ann Heng, Chi-Wing Fu
August 12, 2023
=======================================================================================================================================================
Robotic bin packing is very challenging, especially when considering practical needs such as object variety and packing compactness.
This paper presents SDF-Pack, a new approach based on signed distance field (SDF) to model the geometric condition of objects in a container and compute the object placement locations and packing orders for achieving a more compact bin packing.
Our method adopts a truncated SDF representation to localize the computation, and based on it, we formulate the SDF-minimization heuristic to find optimized placements to compactly pack objects with the existing ones.
To further improve space utilization, if the packing sequence is controllable, our method can suggest which object to be packed next.
Experimental results on a large variety of everyday objects show that our method can consistently achieve higher packing compactness over 1,000 packing cases, enabling us to pack more objects into the container, compared with the existing heuristics under various packing settings.
The code is publicly available at: https://github.com/kwpoon/SDF-Packhttps://github.com/kwpoon/SDF-Pack.
§ INTRODUCTION
Robotic bin packing is an important logistics task, aiming at leveraging robot arms to help automatically pack objects in a container.
Given a sequence of objects of arbitrary sizes and shapes, a bin-packing algorithm should suggest the optimized placement location of each object, such that the objects can be packed compactly in the container.
If the sequence is controllable, the algorithm may further suggest which object to be packed next in each iteration.
At present, widely-used basic algorithms focus mainly on regularly-shaped objects, e.g., boxes.
Packing irregularly-shaped objects still heavily relies on manual efforts, since several challenges remain.
First, it is difficult to effectively model the container geometry, especially when considering varying objects of irregular shapes.
Second, it is difficult to exploit the geometry of irregular objects, and also the container, to search for a compact bin-packing solution that maximizes space utilization.
Last, the search should be sufficiently fast to avoid long delays in the robotic actions.
For the above practical concerns, packing heuristics or strategies <cit.> have long been a favorite for robotic bin packing, especially for handling general irregular objects, since these approaches are fast to compute.
Basically, they assume a predefined packing order and suggest a compact packing location for each object according to a designed objective function.
Yet, existing heuristics cannot sufficiently account for the compactness between the new object to be packed and the existing ones in the container, so the first two challenges cannot be well addressed.
For example, the deepest bottom-left <cit.> places the object as deep and bottom-left as possible, without modeling the object geometry;
heightmap minimization <cit.> aims to reduce the overall heightmap value, without considering the compactness between the new object and the existing ones at the same height level;
while maximum touching area <cit.> encourages direct object contacts, without attending to non-touching but very close packing.
Hence, they have limited capability to handle irregular objects for better space utilization.
This paper presents SDF-Pack, a new approach to address the challenges for compact packing of general objects in a container;
see the illustration in Figure <ref>.
We adopt the signed distance field (SDF) <cit.> to model the container's geometry, assess the spatial compactness in the container, and formulate the SDF-minimization heuristic to locate compact object placements.
In particular, we first construct an SDF using a scanned top-down heightmap of the container, a volume field that records the shortest distance from each 3D location in the container to the nearest object/container surface.
Using the constructed SDF, we can quickly identify locations with minimum non-negative SDF values and find collision-free locations that compactly pack the next object with the existing ones.
Further, by comparing the SDF heuristic values of different candidate objects, our proposed algorithm may suggest which object to be packed next, if the sequence is controllable.
Overall, the contributions of this work are summarized below:
(i)
We introduce SDF-Pack, a new approach based on the truncated signed distance field to model the geometry of the container and packed objects.
Importantly, we formulate the SDF-minimization heuristic to find the most compact placement for a given object to improve space utilization and packing compactness.
(ii)
For controllable packing sequence, SDF-Pack can suggest the next object to be packed to further improve the overall object packing compactness.
(iii)
To speed up our framework, we further develop a GPU implementation to build the SDF and a local update scheme to avoid redundant computation.
We compare SDF-Pack with the state-of-the-art heuristics <cit.> on general irregular objects from the YCB dataset <cit.> and the Rutgers APC RGB-D dataset <cit.>.
Experimental results show that SDF-Pack is able to consistently achieve higher packing compactness over 1,000 packing cases under various packing settings.
§ RELATED WORK
3D bin packing (3D-BPP) is an NP-hard problem, for which no polynomial-time exact algorithm is known.
It involves two sub-problems:
(i) finding the object packing order and
(ii) finding the placement location of each object.
Many methods are proposed to optimize both (i) and (ii) simultaneously.
Since the problem is inherently NP-hard, computational time remains an issue.
Some exact methods, e.g., branch-and-bound <cit.>, try to reduce the search space for the optimal solution.
Instead of exhaustively searching the whole solution space, meta-heuristic methods such as the genetic algorithm <cit.>, tabu search <cit.>, simulated annealing <cit.>, and integer linear programming <cit.> search for sub-optimal solutions. Yet, these methods require trying multiple packing sequences and thus are still computationally expensive.
Very recently, reinforcement learning is introduced for 3D-BPP <cit.>, and in particular, <cit.> attempt to pack non-regular objects. Reinforcement learning methods help to optimize the multi-step packing results.
However, they require an extra training stage to create a network model and train on sampled packing sequences, e.g., <cit.> took 16 hours of training.
On the other hand, some other works simplify the problem by assuming a user-defined packing sequence and focus mainly on (ii).
They propose data structures such as Markov decision tree <cit.> and packing configuration tree <cit.> to look ahead multiple packing steps to choose the best placement.
However, as the number of possible placements in each packing step is still very large, analyzing multiple future steps is still not sufficiently efficient.
Aiming at high efficiency in robotic bin packing, some packing heuristics are proposed to quickly determine the best object placement location by evaluating an objective function.
These approaches have long been a favorite.
For example, the deepest bottom-left <cit.> suggests placements closer to the deepest bottom-left corner;
heightmap minimization <cit.> suggests placements with smaller heightmap increment;
and maximum touching area <cit.> finds placements with more contact area with the already-packed objects.
Though some of the heuristics can deal with non-regular objects, existing packing heuristics are insufficient to account for the object compactness in the container, due to the lack of modeling the object geometry and considering near but non-contacting (adjacent) objects.
In this work, we propose SDF-Pack using the truncated signed distance field to model the container's geometry and formulate the SDF-minimization heuristic to encourage nearby objects to be packed more closely, even though they are not directly contacting one another in the container.
§ METHOD
§.§ Problem Setup
Given a sequence of objects, which can be regular or irregular, our goal is to iteratively suggest a placement location and orientation for each object, such that the packing solution is more compact and more objects can be successfully packed into the container.
A good packing solution should be both feasible and compact, and a feasible bin packing solution should satisfy the following conditions:
* Containment.
Objects must be fully inside the container.
* Collision-free.
When putting an object into the container, the robot arm and the object must not collide with any previously-packed object and the container itself.
* Stability.
Each object should remain physically stable after it is placed into the container.
Furthermore, a compact packing solution should pack the objects as tightly as possible for higher space utilization.
§.§ Our packing framework
Figure <ref> illustrates a single packing step in our framework.
In each step, we scan the current container with the packed objects to first obtain a top-down heightmap, and then construct a signed distance field to represent its geometry.
We leverage the top-down and bottom-up heightmaps to represent each candidate object to be packed and find all feasible placements for each object.
Then, we can make use of the SDF to evaluate all feasible placements and select the one with the lowest SDF objective value to be executed.
Further, when the packing sequence is controllable, we compare the SDF heuristic values of different candidate objects to suggest the best object to be packed to achieve better packing compactness.
For clarity, we present our framework in this section for the case of a controllable packing sequence, which is more general.
In the following, we introduce how we model the container (Section <ref>) with SDF and how to find feasible placements (Section <ref>); after that, we introduce our SDF-minimization heuristic (Section <ref>) and additional strategies for controllable packing sequence (Section <ref>) and accelerating the computation (Section <ref>).
§.§ Modeling the container
We follow <cit.> to scan the top-down heightmap of the container.
The heightmap depicts the 3D volume occupied by previously-packed objects in the container.
We regard the volumes under the packed objects as occupied, since they are not reachable by the robot arm. Then, from the top-down heightmap, we construct the 3D signed distance field of the container (Figure <ref> left).
In real-world packing, objects may shift or roll after being put into the container. So, we re-scan the container after each packing step.
Signed-distance field (SDF) <cit.> is an implicit representation of the object geometry and is often employed in 3D reconstruction.
The effectiveness of modeling the distance field between objects for object manipulation has also been noted recently <cit.>.
In our case, the SDF denotes the shortest distance from a 3D location in the container to the nearest surface point on the packed objects or the container's interior.
A positive (negative) value indicates an unoccupied (occupied) location.
Intrinsically, the SDF describes how close a 3D location is to the existing objects, so we can readily assess the packing compactness of a new object relative to the existing ones in the container.
Further, when estimating the packing compactness of a candidate object placement, we only need to consider the geometry of the nearby objects.
So, we adopt a truncated SDF, which clamps large distance values to improve the computation efficiency; also, it helps to localize the SDF computation and update.
Compared with previous packing heuristics, the distinctive advantage of our approach is that it can better optimize the object packing compactness.
Using SDF, we can account for the object proximity and consider objects that are nearby but not necessarily contacting, enabling us to readily assess compactness and find more compact object placements.
§.§ Finding feasible placements.
To pack an object (represented by a pair of top-down and bottom-up heightmaps), we need to enumerate all feasible locations {(x, y, z, r)} of placing it in the container, where x,y are the horizontal object coordinates, z is the height level, and r is the object's orientation on the xy-plane.
A placement (x, y, z, r) is regarded as feasible, if it fulfills the containment, collision-free, and stability constraints.
To obtain feasible placements,
we find all possible combinations of x, y, and r, locate the deepest z to place the object without collisions (Eq. (<ref>)), and then perform the stability test (Algorithm <ref>).
Algorithm <ref> shows the procedure.
Similar to <cit.>, we leverage the top-down heightmap of the container and the bottom-up heightmap of the candidate object at the orientation r to find the deepest reachable z:
z = G(x,y,r) = max^W-1_i=0max^D-1_j=0(H_c[x+i,y+j] - Ĥ_o^r[i,j]),
where H_c is the top-down heightmap of the container;
Ĥ_o^r ∈ℝ^W × D × H is the bottom-up heightmap of the input object at orientation r; and
W, D, and H are the width, depth, and height of the object.
The heightmap of each view is measured towards the opposite plane of the object's bounding box.
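For illustration, Eq. (<ref>) can be evaluated with a few lines of NumPy for one footprint position (x, y) of an object at a fixed orientation; the function and array names below are illustrative rather than those of our implementation, and feasible x, y are assumed to keep the footprint inside the container:

import numpy as np

def deepest_z(container_top: np.ndarray, obj_bottom: np.ndarray, x: int, y: int) -> float:
    # z = max_{i,j} ( H_c[x+i, y+j] - H_o^r[i, j] ): the lowest height at which the
    # object's bottom surface clears everything already lying under its footprint
    W, D = obj_bottom.shape
    window = container_top[x:x + W, y:y + D]   # container heights under the footprint
    return float(np.max(window - obj_bottom))

Sweeping (x, y) over the grid and r over the allowed in-plane rotations enumerates the candidate placements that are then passed to the stability test.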
Then, we measure the stability of the object placed at (x, y, z, r) by checking if the object's mass center lies inside its support polygon <cit.>.
The stability test is shown in Algorithm <ref>.
First, we retrieve the supporting points of the object, i.e., the points that the object contacts the bottom or already-packed objects in the container.
Then, we project the supporting points and the object's mass center to the xy-plane.
If the projected mass center locates in the convex hull of the projected supporting points (i.e., the support polygon <cit.>), the placement is regarded as stable.
Due to the complex tilted forces between the non-regularly shaped objects, passing the stability test may not always guarantee stable placement in the real world, yet this support polygon test can quickly eliminate major non-stable placements.
A candidate placement is regarded as feasible only if the object does not exceed the container's top and it passes the stability test.
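A minimal sketch of this support-polygon test is given below; it assumes that the supporting points have already been extracted (e.g., as the footprint cells that actually touch at height z) and uses a Delaunay triangulation for the point-in-polygon query, which is one possible implementation choice rather than the only one:

import numpy as np
from scipy.spatial import Delaunay

def is_stable(support_xy: np.ndarray, center_xy: np.ndarray) -> bool:
    # True if the projected mass center lies inside the convex hull (support polygon)
    # of the projected supporting points
    if len(support_xy) < 3:
        return False                      # fewer than three support points: treat as unstable
    try:
        tri = Delaunay(support_xy)        # triangulates the support polygon
    except Exception:                     # degenerate (e.g., collinear) supports
        return False
    return bool(tri.find_simplex(center_xy) >= 0)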
Lastly, we choose the optimal placement by finding the feasible placement with the optimal packing objective value.
§.§ SDF-minimization heuristic
We formulate the following SDF-minimization heuristic consisting of three terms to evaluate the object packing compactness (lower is better) locally around each obtained feasible placement (x, y, z, r) for an object o.
F_o(x, y, z, r) =
α/V_o∑_w=0^W-1∑_d=0^D-1∑_h=0^H-1Φ(x+w,y+d,z+h) · O_o^r(w,d,h)
+ β (1 - √(V_o/W · D · H)) + γ· z,
where
α, β, and γ are the weights on the three terms, respectively;
Φ(·) is the truncated SDF;
W, D, and H are the width, depth, and height of the object at orientation r;
O_o^r(·) records the occupancy of the object (its value is 1 for occupied location and 0 otherwise);
V_o is the total volume of the object.
All of W, D, H, O_o^r(·) and V_o can be obtained using the heightmaps of object o.
The first term computes the average TSDF value of the volumes occupied by the object.
By minimizing this term, we can find a candidate location that places the object more compactly around the already-packed objects and the container walls.
The second term further encourages axis-aligned object placements for regularly-shaped objects.
Here, we measure the volume ratio of the object to its axis-aligned bounding box, so minimizing this term encourages a more axis-aligned object orientation.
Lastly, to further maximize the space utilization for accommodating more objects in the future, we encourage the placement to be as deep as possible by minimizing the z value of the placement.
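For clarity, the evaluation of F_o for one feasible placement can be sketched as follows; here tsdf is the truncated SDF volume of the current container, occ the binary occupancy grid of the object at orientation r, z is measured in grid units, the square root follows Eq. (<ref>) as written, and the default weights are the values given in Section <ref> (the code is an illustrative sketch, not our exact implementation):

import numpy as np

def sdf_heuristic(tsdf, occ, x, y, z, obj_volume, alpha=2.5, beta=10.0, gamma=1.0):
    # packing objective F_o for one placement (x, y, z); lower values mean more compact placements
    W, D, H = occ.shape
    region = tsdf[x:x + W, y:y + D, z:z + H]                  # TSDF values over the object's bounding box
    avg_tsdf = np.sum(region * occ) / obj_volume              # first term: closeness to packed objects/walls
    rectangularity = 1.0 - np.sqrt(obj_volume / (W * D * H))  # second term: prefer axis-aligned placements
    return alpha * avg_tsdf + beta * rectangularity + gamma * z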
§.§ Extension to controllable packing sequence.
The SDF-minimization heuristic F_o in Eq. (<ref>) can be extended to handle a controllable packing sequence, in which we are allowed to choose which candidate object to be packed first to further improve the packing compactness.
One simple approach is to evaluate F_o for all candidate objects, and then select the one and its associated placement with the lowest F_o value to execute.
However, this approach may not be optimal, as picking small objects too early may break the container space into fragments that large objects may not easily fit into, thereby lowering the overall space utilization.
Hence, we account for objects of varying sizes by adding a size-balancing term to try to pick larger objects earlier, and pick small objects unless they can well fit holes and gaps around the existing objects in the container:
F̂_̂ô(x, y, z, r)
=
F_o(x, y, z, r) + δ· (1 - √(V_o/X · Y · Z)),
where δ is a weight; and X, Y, and Z are the width, depth, and height of the container.
Note that minimizing the last term essentially maximizes the volume ratio of the selected object to the container, preferring larger objects as a result.
Our method can flexibly re-plan the packing order on the fly.
When the number of unpacked objects is extremely large, we simulate a buffer as in many real-world packing scenarios and re-plan the packing sequence of the first K objects.
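A possible sketch of the buffered selection is shown below; each buffer entry is assumed to carry its volume and its precomputed feasible placements together with the corresponding F_o values (these attribute names are illustrative, and the buffer is assumed to be non-empty):

def choose_next(buffer_objects, container_dims, delta=80.0):
    # return the buffered object and placement minimizing the size-balanced objective above;
    # the per-object term favors packing larger objects earlier
    X, Y, Z = container_dims
    best = None
    for obj in buffer_objects:
        size_term = delta * (1.0 - (obj.volume / (X * Y * Z)) ** 0.5)
        for placement, F in obj.feasible_placements:   # F precomputed by the SDF-minimization heuristic
            score = F + size_term
            if best is None or score < best[0]:
                best = (score, obj, placement)
    return best[1], best[2]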
§.§ Improving computation efficiency
GPU computation for truncated SDF field construction.
Constructing the truncated SDF (TSDF) field is to determine the distance from each location in the container to the closest occupied location within the truncated distance.
Such a computation process can be parallelized by sliding a 3D kernel in the container and accumulating the shortest distance.
To this end, we develop a GPU implementation in PyTorch <cit.> to speed up the process.
Note that we only perform GPU computation in the TSDF construction, and we still use CPU sequential computation in a single thread for a fair comparison with other methods.
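As an illustration of such a parallel construction, the sketch below measures a truncated Chebyshev distance by repeatedly dilating the occupied mask with a 3×3×3 max-pooling kernel; this is one simple GPU-friendly variant rather than our exact implementation, and occupied voxels are simply flagged with a negative value:

import torch
import torch.nn.functional as F

def truncated_sdf(occ: torch.Tensor, trunc: int = 5) -> torch.Tensor:
    # occ: binary (W, D, H) occupancy grid (1 = occupied) on the GPU; returns a field whose
    # free voxels hold the Chebyshev distance to the nearest occupied voxel, clamped at trunc
    x = occ[None, None].float()                      # reshape to (1, 1, W, D, H) for 3D pooling
    dist = torch.full_like(x, float(trunc))
    dist[x > 0] = -1.0                               # occupied voxels
    reached = x.clone()
    for d in range(1, trunc):
        reached = F.max_pool3d(reached, kernel_size=3, stride=1, padding=1)  # dilate by one voxel
        newly = (reached > 0) & (dist == float(trunc))
        dist[newly] = float(d)
    return dist[0, 0]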
Local update scheme.
To avoid repetitive computation and to speed up the feasibility test (Section <ref>) and the heuristic computation (Sections <ref> and <ref>), we further adopt a local update scheme.
Since the effect of a new placement is local, after each packing step we only update the feasible placements and heuristic values within the truncated distance around the 2D bounding box in which the heightmap values have changed since the last packing step.
§ EXPERIMENT
§.§ Dataset
To test the packing performance of our method, we build a dataset that contains 96 types of real-world objects: 71 from the YCB dataset <cit.> and 25 from the Rutgers APC RGB-D dataset <cit.>.
Figure <ref> shows some of these objects.
Considering the picking dynamics of the robot arm with a suction cup, we re-orientate some objects to ensure that all objects remain stable on a horizontal plane and are graspable in the vertical direction.
Due to the robot arm movement, only horizontal in-plane rotations are allowed on the objects during the packing procedure.
We pre-process the 3D models of all objects using <cit.>.
Each processed mesh has at most 100 vertices and 150 faces.
Further, we perform convex decomposition on them using the V-HACD <cit.> algorithm to enable a more realistic collision effect in our experiments with physical simulation.
§.§ Implementation details
Packing environment setup.
Following <cit.>, we consider a container of size 32 cm × 32 cm × 30 cm with resolutions of 0.01 m in x and y dimensions and 0.002 m in the z dimension to discretize the scanned heightmaps of the container and the objects when constructing the truncated SDF field.
Since many objects from the dataset are centrosymmetric (e.g., the mustard bottle, bowl, etc.), we search for the object's possible xy-plane orientations (r) for every π/4 within [0,π) for efficiency.
The truncate distance for the SDF field is set as five units.
In our implementation, the hyper-parameters in Eqs. (<ref>) and (<ref>) are set as α=2.5, β=10.0, γ=1.0, and δ=80.0.
Our GPU computation for the TSDF construction is performed on a single NVIDIA TITAN Xp.
Physical simulation.
All our packing experiments are performed in the PyBullet <cit.> physical simulator.
We set the gravity as -9.8 m/(s^2) and the mass value of each object is set randomly around (0,1) per cm^3.
After packing an object, we wait for 0.25 seconds until it is stable. Then, we scan the container's top-down heightmap in the next packing step.
Packing cases.
We perform experiments on 1,000 object sequences randomly generated using 80 everyday objects, four times as many as in <cit.>, to evaluate different methods towards the packing limit of the container.
In the experimental setting of <cit.>, they run through the objects in the sequence one by one and skip an object if it does not have a feasible placement.
In comparison, since we cannot skip too many unpacked objects in the real-world packing scenario,
we stop a packing procedure for the current container early if in total K objects cannot find a feasible placement.
Otherwise, the packing procedure finishes when all objects in the sequence have been evaluated.
Fixed and controllable packing sequences.
We perform experiments in two packing settings with (i) a fixed packing sequence, in which we must follow the order of the randomly generated sequences to pack the objects and (ii) a controllable packing sequence, in which we can partially re-arrange the packing order.
In the latter setting, as in some packing scenarios <cit.>, we set a buffer of size K to store the first K arriving objects, select an object from the buffer one at a time, and then fill the buffer with the next arriving object.
In other words, the packing order of the buffered objects can be re-arranged using some rules (e.g., volume decreasing order <cit.>) or using some objectives (e.g., Eq. <ref>) (see Section <ref>).
We set K=5 in our experiments, and the former setting is equivalent to setting a buffer size of one.
Evaluation metrics.
We run all methods in the same packing environment, physical simulation, and packing sequences.
We evaluate various packing methods using (i) packed volume (or packed vol.), i.e., the total volume of the successfully-packed objects measured in cm^3; (ii) compactness, i.e., the average ratio of the total volume occupied by all the packed objects over the volume of the overall bounding box containing all the packed objects; and (iii) packed object number, i.e., the number of objects successfully packed in the container. In addition, to better indicate the performance of our method and the existing packing heuristics, we also show the increment of the packed volume (vol. inc.) of each method relative to the deepest bottom-left (DBL) <cit.>.
§.§ Comparison on fixed packing sequence
We first compare our method for the scenario of fixed packing sequence on 1,000 packing cases against the existing heuristics: deepest bottom-left (DBL) <cit.>, maximum-touching-area (MTA) <cit.>, and heightmap-minimization (HM) <cit.>.
All compared methods share the same inputs (i.e., a top-down heightmap for the container, a top-down and a bottom-up heightmaps for each object) and the same feasible placement search procedure. The only difference is that they use different objective functions to find the best placement for each object.
We also compare our method with random placement (Random), which randomly selects a feasible placement, and the first-fit placement (FF) <cit.>, which selects the first feasible placement.
Table <ref> reports the experimental results, showing that our method achieves a better packing performance than the compared methods.
On average, our method achieves a gain of 7.8% on the packed volume compared with DBL, exceeding all compared heuristics.
Also, it improves the packing compactness, packing one to three more objects compared with the existing heuristics.
SDF-Pack's computation time is only 0.54s per object, similar to other methods (0.36s for MTA, 0.35s for HM, and 0.33s for DBL), which is negligible compared with the robot arm's movement time.
§.§ Comparison on controllable packing sequence
Bounding-box volume decreasing order. Second, we compare our SDF-Pack with the same set of methods on 1,000 random packing cases in the case of controllable packing sequences (see Section <ref>).
Given the K objects in the buffer, we first follow <cit.> to sort the objects in descending order of their bounding-box volume, and then pack the first feasible object in each packing step.
After that, we fill the buffer with the next object in the packing sequence and sort the buffered objects again.
As shown in Table <ref>, our method outperforms others in terms of packed volume, achieving 6.7% larger than DBL.
Further, our method achieves the best packing compactness and can pack one to ten more objects into the container when compared to different approaches.
The results demonstrate the effectiveness of our approach in finding a better placement location, even when only using a fixed rule to select the next object in the buffer.
Re-planned packing order using Eq.(<ref>).
Then, we evaluate our SDF-minimization heuristic that uses Eq. (<ref>) to select the optimal buffered object instead of using the above rule.
We compare our complete heuristic with DBL <cit.>, MTA <cit.>, and HM <cit.>.
For a fair comparison, we incorporate the size-balancing term into each compared heuristic.
Specifically, for the heuristics that prioritize the lowest objective value (i.e., DBL and HM), we add the size-balancing term as in our method; for MTA, which prioritizes the highest objective value, we subtract the term instead.
Table <ref> shows that our complete heuristic further improves the packing volume by 6% in the setting that allows re-planning the packing order (see the last rows of Tables <ref> and <ref>).
Also, our SDF-minimization heuristic outperforms the compared methods by at least 9.7% in terms of packed volume, over 5% in terms of compactness, and over 1.5 in terms of packed object number, giving clear evidence of the effectiveness of our complete SDF-minimization heuristic.
MTA and HM perform worse than DBL because MTA may leave a large object unpacked if it has a very limited exact contact area (e.g., balls), and HM inherently tends to place small objects first because they have a small heightmap increment.
On average, our method suggests the best object and the corresponding placement in just 1.45s, which is slightly faster than MTA (1.73s) and comparable with DBL (1.35s) and HM (1.21s).
The efficiency advantage of our method is that we pre-compute the SDF field at the start of each packing step (see Section <ref>) and only need to sum up the SDF values for each object and each placement.
Compared with a genetic-algorithm-based method.
Also, we compare our method on 200 random packing cases with <cit.>, which uses a genetic algorithm to re-plan the whole packing sequence.
In short, the method adopts DBL to suggest each placement location and uses the genetic algorithm to find the order.
Due to the long computation time required by the genetic method, we only run it for 10 iterations.
As Table <ref> shows, for genetic-DBL to achieve performance similar to our method, it takes over 15× the computation time.
This can be an issue when applied to real usage, since the robot has to wait for the computation.
In comparison, our SDF-minimization heuristic can better re-plan the packing order of irregular objects almost on the fly with a lower computation cost, since it does not require trying multiple packing orders to get the overall packing.
§.§ Ablation Study
We perform an ablation study on 200 random packing cases to explore the effect of each term in our SDF-minimization heuristic.
We perform experiments in the cases of both fixed and controllable packing sequences, using the heuristic value to suggest the placement and, in the latter case, the object to be packed.
From Table <ref>, we can observe a drop in the packed volume after removing the TSDF term in both the fixed (Rows 2 vs. 1) and controllable (Rows 4 vs. 3) packing-sequence settings.
Removing both the TSDF and Regular terms reduces the packed volume on average by 4% (Rows 3 vs. 1) in the setting of a fixed packing sequence.
A performance drop is also seen when removing the size-balancing term in Eq. (<ref>) (Rows 6 vs. 4, about -7.7%).
These results show that all proposed terms in the heuristic contribute to the final performance of our packing framework in different settings.
§.§ Qualitative analysis
For ease of visualization, we show a visual comparison with a reduced object set (30 objects).
As shown in Figure <ref>, our method achieves a more compact packing consistently for both controllable and fixed packing sequences.
In comparison, other approaches either cannot effectively utilize the empty space near the upper-right corner of the container, or tend to place objects right next to the container's interior to maximize the touching area, leaving the central area underutilized.
More visual comparison results can be found in the supplementary material.
§.§ Robotic demos
Further, we set up a real-world robotic packing scene using the Franka Emika Robot with a suction cup and apply our method to pack everyday supermarket products in this environment.
Besides, we build a virtual packing scene based on the Ravens <cit.> for a virtual-to-real comparison.
The procedure of using our heuristic to pack nine supermarket products into a container of size 45cm × 32cm × 20cm is shown in Figure <ref>.
From the figure, we can see that our heuristic helps to better utilize the container space.
It first packs the tissue box compactly towards the container's interior (since it is the largest with the lowest SDF value), making full use of the bottom-left space in the container.
Then, it compactly fills the container's bottom layer by putting the dehumidifier, the boxed lemon tea, etc.
Please refer to the supplementary video for more results.
§ CONCLUSION
We presented SDF-Pack, a new approach to enhance robotic bin packing with the truncated signed distance field to model the container's geometric condition and the SDF-minimization heuristic to effectively assess the spatial compactness and find compact object placements.
Experimental results manifest that SDF-Pack can consistently achieve the highest packing performance compared with all existing heuristics for both packing scenarios with the fixed and the controllable packing sequences.
In the future, we plan to explore the following: first, how to improve the robustness of our packing computation when given noisy heightmaps and how to generalize to non-rigid deformable objects; and
second, how to integrate our SDF-based objective with reinforcement learning, to generate a better packing sequence by adjusting the object packing order and to utilize the container space more optimally.
|
http://arxiv.org/abs/2307.03976v2 | 20230708133744 | Short-time large deviations of the spatially averaged height of a KPZ interface on a ring | ["Timo Schorlepp", "Pavel Sasorov", "Baruch Meerson"] | cond-mat.stat-mech | ["cond-mat.stat-mech"] |
[email protected]
Institute for Theoretical Physics I,
Ruhr University Bochum, 44801 Bochum, Germany
[email protected]
ELI Beamlines Facility,
ERIC, 25241 Dolní Br̆ežany, Czech Republic
[email protected]
Racah Institute of Physics, Hebrew
University of Jerusalem, Jerusalem 91904, Israel
Using the optimal fluctuation method, we evaluate the short-time probability
distribution P (H̅, L, t=T) of the spatially averaged height H̅ = (1/L) ∫_0^L h (x, t=T) dx
of a one-dimensional interface h (x, t) governed by the Kardar–Parisi–Zhang equation
∂_th=ν∂_x^2h+λ/2(∂_xh)^2+√(D)ξ(x,t)
on a ring of length L. The process starts from a flat interface, h(x,t=0)=0.
Both at λH̅<0, and at sufficiently small positive λH̅ the optimal
(that is, the least-action) path h(x,t) of the interface, conditioned on H̅, is uniform
in space, and the distribution P (H̅, L, T) is Gaussian. However, at sufficiently
large λH̅>0 the spatially uniform solution becomes sub-optimal and gives way
to non-uniform optimal paths. We study them, and the resulting non-Gaussian distribution P (H̅, L, T),
analytically and numerically. The loss of optimality of the uniform solution occurs via a dynamical
phase transition of either first, or second order, depending on the rescaled system size
ℓ = L/√(ν T), at a critical value H̅=H̅_c(ℓ). At large but
finite ℓ the transition is of first order. Remarkably, it becomes an “accidental" second-order
transition in the limit of ℓ→∞, where a large-deviation
behavior -ln P (H̅, L, T) ≃ (L/T) f(H̅)
(in the units λ=ν=D=1) is observed. At small ℓ the transition is of second order,
while at ℓ =O(1) transitions of both types occur.
Short-time large deviations of the spatially averaged height
of a KPZ interface on a ring
Baruch Meerson
August 12, 2023
=========================================================================================
§ INTRODUCTION
Atypically large fluctuations in macroscopic systems out of
equilibrium continue to attract great interest from statistical physicists.
Although a universal description of such fluctuations is unavailable, there has been
much progress in studies of particular systems. One of the main theoretical tools
in this area is known under different names in different areas of physics:
the optimal fluctuation method
(OFM), the instanton method, the weak-noise theory, the
macroscopic fluctuation theory, etc. This method relies
on a saddle-point evaluation of the pertinent path integral
of the stochastic process, conditioned on the
large deviation. The method is based on a model-specific
small parameter (often called “weak noise"), and it brings about a
conditional variational problem. The solution of this problem – a
deterministic, and in general time-dependent, field – describes the “optimal path" of the system:
the most probable system's history which dominates the contribution of different
paths to the statistics in question.
Among multiple applications of the OFM, we focus on one set of problems which has attracted attention in the last
two decades <cit.>: short-time
large deviations of a stochastically growing interface as described by the one-dimensional Kardar–Parisi–Zhang (KPZ) equation <cit.>
∂_th=ν∂_x^2h+λ/2(∂_xh)^2+√(D)ξ(x,t) ,
where ξ(x,t) is a white noise with
⟨ξ(x,t)⟩=0 , ⟨ξ(x,t)ξ(x^',
t^')⟩=δ(x-x^')δ(t-t^') .
Here we employ the OFM to study a KPZ interface on a ring of length L, i.e. with periodic boundary
conditions at x=0 and x=L. The interface is initially flat,
h(x,t=0)=0 ,
and we are interested in evaluating
the probability density function (PDF) P(H̅, L, T)
of the spatially averaged surface height
H̅ = 1/L∫_0^L h(x,T) dx
at a final time t=T >0, which is much shorter than the characteristic nonlinear
time of Eq. (<ref>), τ_NL= ν^5/D^2 λ^4.
The short-time limit allows one to employ the OFM in a controlled
manner <cit.>, as we will
reiterate shortly. The problem, defined by Eqs. (<ref>)-(<ref>), continues the
line of studies of Refs. <cit.> of finite system-size effects (which turn out to be quite dramatic)
in large deviations of height of the KPZ interface.
Upon rescaling t → tT,
x → (ν T)^1/2 x, h →ν h / λ and ξ→(ν T^3)^-1/4ξ, Eq. (<ref>) becomes
∂_th= ∂_x^2h+1/2(∂_xh)^2
+√(ε)ξ(x,t) ,
with rescaled noise strength ε = D λ^2 T^1/2
/ ν^5/2 on a ring of rescaled length ℓ = L / √(ν T).
The PDF of the rescaled average height H̅ at final time t = 1
can then be written as a path integral
P(H̅,ℓ,ε) = ∫_h(·, 0) = 0 Dh δ(
1/ℓ∫_0^ℓ h(x,1) dx - H̅)
J[h] exp{-1/ε S[h] }
with action functional
S[h] = ∫_0^1 dt ∫_0^ℓ dx L(h, ∂_t h ) = 1/2∫_0^1 dt
∫_0^ℓ dx [∂_th - ∂_x^2h-1/2(∂_xh)^2 ]^2 ,
where ℒ(h,∂_t h) is the Lagrangian.
The OFM assumes a weak-noise limit ε→ 0, when the path integral (<ref>) can be evaluated
by the saddle-point method, while the Jacobian J[h] does not contribute in the leading-order.
In this limit, the PDF P(H̅,ℓ,ε) is dominated by
the optimal path of the system, that is by the most likely history h(x,t) conditional on a given average height at t=1:
-ln P(H̅, ℓ, ε) ≃ ε^-1 min_{h(·, 0)= 0, ∫_0^ℓ h(x,1) dx = ℓH̅} S[h] = ε^-1 S(H̅, ℓ) as ε→ 0 .
Hence, the PDF can be determined, up to pre-exponential factors, from the
solution of this constrained minimization problem. Here
we will solve this minimization problem numerically, for different H̅
and ℓ, and analytically in the asymptotic limits of large and small
ℓ[Note that whenever there exists a spatially
non-uniform optimal path, there are actually infinitely many possible
paths due to the translational symmetry of the problem with respect to x. Accounting for
this submanifold of degenerate solutions and for the associated zero
mode is, however, only relevant for pre-exponential factors <cit.> which
we do not address here.].
It will be convenient to present our results by setting
ν=λ=D=1[In most of the paper we assume, without
loss of generality, that λ>0. Indeed, changing λ to -λ is equivalent to changing h to -h.].
Then the weak-noise scaling (<ref>) reads
-ln P(H̅, ℓ, ε→ 0) ≃
T^-1/2 S(H̅, ℓ) .
Note that the limit ε→ 0 at fixed ℓ corresponds to
the short-time limit T → 0 and small-length limit L → 0
with L / √(T) = const.
When instead T goes to zero at L=const, one has
both ε→ 0 and ℓ→∞. The latter limit turns out to be most interesting, and it is analyzed here
in detail. It is natural to expect that for
any H̅, when ℓ→∞, the action S(H̅, ℓ) should exhibit
a large-deviation form
S(H̅,ℓ) ≃ℓ f(H̅) as ℓ→∞ ,
leading to
-ln P(H̅, L, T→ 0) ≃
(L/T) f(H̅) ,
and this is what we indeed observe here. Less expectedly, we also find that the rate
function f(H̅) exhibits, at a critical value H̅=H̅_c(ℓ),
a dynamical phase transition (DPT) which is accidentally second-order.
By that we mean that
the rate function at the critical point becomes continuously differentiable
only in the limit of ℓ→∞. At arbitrary large but finite ℓ the
large-deviation form (<ref>) breaks down. We show, however, that the action S(H̅,ℓ) still exhibits
a DPT at a critical point H̅=H̅_c, but this DPT is of first order and the optimal
path at the critical point changes discontinuously via a subcritical bifurcation.
For small ℓ a truly second-order DPT is observed as predicted earlier <cit.>.
At intermediate values of ℓ = O(1) DPTs of both types occur. In the latter regime analytical
results are unavailable as of yet, and we present some numerical results. All the DPTs that we
found in this system occur because of a loss of optimality of a path that is uniform in space.
The loss of optimality takes the form either of a subcritical bifurcation (for the first-order DPTs),
or a supercritical bifurcation (for the true second-order DPTs).
The remainder of this paper is structured as follows. In Sec. <ref> we formulate
the OFM equations and boundary conditions, present a simple uniform solution of these equations,
previously studied in Refs. <cit.>, and
argue that it describes the optimal path of the system at all λ H<0. Supercritical
bifurcations of the uniform solution have been recently studied in Ref. <cit.>. Still,
for convenience of further discussion, we briefly rederive them in Sec. <ref>.
Section <ref> includes our results of numerical minimization of the action
functional (<ref>) in different regions of the (H̅,ℓ) phase diagram.
These numerical results provided valuable insights into the nature of optimal paths of the
interface which led us to develop asymptotic analytical solutions of the OFM problem for
large ℓ that we present in Sec. <ref>. The asymptotic solution for small ℓ
is briefly discussed in Sec. <ref>. We summarize and discuss our main results
in Sec. <ref>. A description of numerical algorithms that we use here is relegated to the Appendix.
§ OFM EQUATIONS AND UNIFORM SOLUTION
At a technical level, the main objective of this work is to determine the minimum action S(H̅, ℓ)
as a function of the rescaled average height H̅ and rescaled
system size ℓ. In this section, we present the necessary
conditions for minimizers of the action functional (<ref>) – the OFM equations and the boundary conditions.
We argue then that a simple spatially uniform solution
of the ensuing OFM problem is always optimal for H̅ < 0.
The first-order necessary conditions for a minimizer of the action
functional (<ref>) can be represented as a pair of Hamilton's equations
for the optimal history of the interface h(x,t) and the
conjugate momentum density p = ∂ L / ∂(∂_t h). These equations
were derived in many papers <cit.>, and they take the form
∂_th = ∂_x^2h+1/2(∂_xh)^2+p
,
∂_tp = -∂_x^2p+∂_x(p∂_xh)
.
The “momentum density" p(x,t) describes the (rescaled) optimal realization of
the external noise ξ(x,t) that drives the interface conditional on a specified H̅.
In the present case Eq. (<ref>) and (<ref>) should be complemented by the periodic boundary conditions
at x=0 and x = ℓ, by the initial condition
h(x,0)=0 ,
and by the final-time condition
p(x,1)=Λ ,
which follows from the demand that a boundary term at t=1, originating from an
integration by parts, should vanish for any h(x,1).
The parameter Λ is a Lagrange multiplier which needs
to be chosen so as to impose the rescaled final-time condition
1/ℓ∫_0^ℓ h(x,1) dx = H̅ .
Once the optimal path is determined, the action S(H̅,ℓ)
can be determined from the equation
S = 1/2∫_0^1 dt∫_0^ℓ dx p^2(x,t) ,
which follows from Eqs. (<ref>) and (<ref>).
By differentiating the action S(H̅, ℓ) = S[h(x,t;H̅,ℓ)] of
the optimal profile h = h(x,t;H̅,ℓ) with respect to H̅ using
the chain rule, one can show that Λ is related to the action via
Λ=1/ℓ ∂ S(H̅, ℓ)/∂H̅ (so that dS=ℓΛ dH̅) .
If the action S(H̅, ℓ) is a strictly convex function of H̅,
there is a bijective relation between Λ and H̅, and it
suffices, for the purpose of calculating the action, to only
determine H̅(Λ) and use Eq. (<ref>). This shortcut is very convenient and
holds for many large-deviation calculations <cit.>.
There is an obvious exact solution of the OFM equations and the boundary conditions:
h(x,t)=H̅ t , p(x,t)=Λ , Λ = H̅ ,
S=ℓ/2H̅^2 ,
which describes a uniformly growing flat interface.
We will often call this branch of solutions branch 1. By virtue of Eq. (<ref>),
whenever the uniform solution (<ref>) is the optimal one, we have
a Gaussian PDF for H̅ up to pre-exponential factors. Of most interest, however,
are the regions of parameters H̅
and ℓ, for which the uniform solution is sub-optimal. As we will see,
the loss of optimality can occur via either a supercritical, or a subcritical bifurcation.
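As a quick consistency check, the uniform solution can be verified symbolically. The short SymPy sketch below (an illustration, not part of the numerical code described in the Appendix) confirms that h=H̅t, p=H̅ solves Eqs. (<ref>) and (<ref>) and reproduces the branch-1 action S=ℓH̅^2/2:

import sympy as sp

# Check that the uniform profile h = Hbar*t, p = Hbar solves the OFM equations
# and gives the branch-1 action S = ell*Hbar**2/2.
x, t, Hb, ell = sp.symbols('x t Hbar ell', positive=True)
h, p = Hb * t, Hb
res_h = sp.diff(h, t) - (sp.diff(h, x, 2) + sp.Rational(1, 2) * sp.diff(h, x)**2 + p)
res_p = sp.diff(p, t) - (-sp.diff(p, x, 2) + sp.diff(p * sp.diff(h, x), x))
S = sp.Rational(1, 2) * sp.integrate(p**2, (x, 0, ell), (t, 0, 1))
print(res_h, res_p, S)   # expected output: 0 0 Hbar**2*ell/2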
First of all, we can argue that, for negative H̅, the uniform
solution (<ref>) is always optimal. Using the evident conservation law
1/ℓ∫_0^ℓ p(x,t)
d x = Λ = const
of Eq. (<ref>), we can rewrite the action (<ref>) for any solution
of the OFM equations as
S = 1/2∫_0^1 dt∫_0^ℓ
dx p^2(x,t)=ℓΛ^2/2+1/2∫_0^1 dt∫_0^ℓ dx
[p(x,t)-Λ]^2 ,
Also, integrating both sides of Eq. (<ref>) with respect to t from 0 to 1 and
with respect to x over the ring, and using the periodic boundary conditions
and the conservation law (<ref>), we obtain
H̅=1/ℓ∫_0^ℓ h(x,1) dx
=Λ+1/2ℓ∫_0^1 dt∫_0^ℓ
dx [∂_xh(x,t)]^2 .
One can easily see from Eqs. (<ref>) and (<ref>) that, at negative Λ
(or H̅) any inhomogeneity in the
momentum density p both increases
the action S, and decreases the average height |H̅| in comparison to their
values for the uniform solution. Therefore, any nonuniform solution here is sub-optimal.
In contrast to this, for Λ >0 (or
H̅>0), an inhomogeneity increases both S,
and H̅ in comparison to the uniform solution. A competition
between these two opposite effects may give rise to non-uniform solutions with lesser action than
the uniform one, as we will indeed see in the following.
§ BIFURCATIONS OF THE UNIFORM SOLUTION
In this brief section we carry out a linear stability analysis of the
uniform solution (<ref>). We find that, for sufficiently
large positive H̅, the uniform solution can continuously
and supercritically bifurcate to a non-uniform solution. The first
spatial Fourier mode to become unstable as H̅ increases depends
on the rescaled system size ℓ in a nontrivial way and is determined
from Eq. (<ref>). This equation has also been obtained in Ref. <cit.>
by calculating the leading-order prefactor correction to the asymptotic
scaling in Eq. (<ref>) through Gaussian integration of
fluctuations around the uniform solution (<ref>).
At first order of a perturbation theory around the uniform
solution (<ref>) we have
p(x,t)=H̅+b(t)cos qx , h(x,t)=H̅ t + a(t)cos qx
, |a|, |b|≪ 1 .
Here the wave number q spans the set 2π m/ℓ for
m=1,2,…. Substituting the expressions (<ref>)
into Eqs. (<ref>) and (<ref>) and neglecting higher-order terms, we obtain
the following system
of linear ordinary differential equations:
ȧ=-q^2a+b , ḃ=q^2b-q^2H̅ a .
It has solutions proportional to e^iω t, where
ω=± q √(H̅-q^2) .
Using the boundary conditions (<ref>) and (<ref>), we obtain the
following relationship between q and H̅ = H̅_c(q)
at the bifurcation points:
tan(q√(H̅-q^2))=-√(H̅-q^2)/q .
Note that the trivial solution H̅=q^2 of Eq. (<ref>) does
not correspond to a valid non-uniform solution due to the boundary conditions
at t=0 and 1. The resulting dependence H̅(q) can be expressed in a
parametric form
H̅ = -2 u/sin 2u , q=√(-u cot u) ,
(2n-1)π/2<u<nπ; n=1,2,3,… ,
where, for given ℓ, only values of q = 2 π m / ℓ
with m = 1, 2, 3, … are allowed.
The first three branches of Eq. (<ref>) are shown in
Fig. <ref>. As one can see, the first instability appears for n = 1,
and a necessary condition for the instability, for any ℓ, is H̅_c≥ 4.603.
When ℓ→∞, the first instability of the
uniform solution will occur, at H̅_c≃ 4.603, for a very high mode
m ≃ 1.343 ℓ/ 2 π.
For finite ℓ, one can find the bifurcation point on the n=1 branch of Eq. (<ref>)
numerically.
Finally, for ℓ→ 0, the first instability occurs for the m = 1 mode at
H̅≃ (2 π / ℓ)^2 in
agreement with Ref. <cit.>.
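The numbers quoted above are straightforward to reproduce from the parametric representation. The following Python snippet (an illustrative sketch relying on SciPy, not the code used for the figures) minimizes H̅(u) on the n=1 branch and recovers H̅_c ≈ 4.603 at q ≈ 1.343:

import numpy as np
from scipy.optimize import minimize_scalar

# n = 1 branch of the marginal-stability curve, pi/2 < u < pi:
# Hbar(u) = -2u/sin(2u),  q(u) = sqrt(-u*cot(u)).
Hbar = lambda u: -2.0 * u / np.sin(2.0 * u)
q_of = lambda u: np.sqrt(-u / np.tan(u))

res = minimize_scalar(Hbar, bounds=(np.pi / 2 + 1e-6, np.pi - 1e-6), method="bounded")
print(Hbar(res.x), q_of(res.x))   # ~4.6033 and ~1.3434
# On a finite ring only q = 2*pi*m/ell is allowed, so for a given ell the first
# instability occurs at the allowed wave number closest to this minimum.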
§ NUMERICAL RESULTS
Now we proceed with a numerical solution of the
minimization problem in Eq. (<ref>) for different H̅ and ℓ. The numerical methods that
we used are described in the Appendix. In addition to confirming
the supercritical bifurcations of the uniform solution that we discussed in Sec. <ref>,
we will uncover important subcritical bifurcations
and get insight into non-perturbative optimal paths which
will be studied analytically in Secs. <ref> and <ref>.
We start with the simpler case of small ℓ.
Choosing a moderately small value ℓ = π / 8 and numerically
minimizing the action (<ref>) for different Λ, we
obtain
the rate function S(H̅, ℓ) and Lagrange
multiplier Λ(H̅) shown in Fig. <ref>.
The spatially uniform solution (<ref>), corresponding
to branch 1 of the action, is seen to become unstable
close to H̅≃ (2 π / ℓ)^2 as stated in Sec. <ref>,
and there is a
continuous (second-order) DPT to a spatially
nonuniform solution. Indeed, the (m = 1)-spatial Fourier mode of the
profile becomes unstable at this point. One such spatially nonuniform solution close to the transition point
is shown in Fig. <ref>. As H̅ increases, the optimal solution
turns, for most of the time 0<t<1, into a stationary “cnoidal" solution for p which
drives an h-profile which is non-uniform in x, but is uniformly translating in the vertical direction.
The same solution appears in the problem of the one-point height distribution for the KPZ
equation on a ring <cit.>, and we use it in
Sec. <ref> to calculate the theoretical curves in
Figs. <ref> and <ref>,
which match the numerical results quite well.
Next, we turn to the more complicated and interesting case of large
ℓ.
For ℓ = 16 π the minimization of the augmented action (<ref>)
leads to the results for the rate function S(H̅) and Lagrange
multiplier Λ(H̅) shown
in Fig. <ref>. In addition to branch 1 we observe two other branches of solutions.
Branch 2 is observed to the right of a narrow
transition region close to H̅≃ 4. On this branch the action S(H̅) is
approximately a linear function, while Λ is almost constant. Further, for much larger H̅,
there is a smoothed-out second-order transition from branch 2 to a
third branch 3 with a different scaling behavior.
The optimal paths for branches 2 and 3 are shown in
Fig. <ref>. They consist of strongly localized large-amplitude stationary
solitons of p that drive an outgoing almost triangular structure of h (or two antishocks
of V(x,t) = -∂_x h(x,t); see Sec. <ref>). The solution, corresponding to branch 2,
clearly emerges via a subcritical, rather than supercritical bifurcation. Strikingly, the soliton
has a well-defined life time which is very close to 1/2. The
difference between branches 2 and 3 is that, for branch 3, the two edges
of the triangular structure of h(x,t) collide before the final time t=1 is reached,
while for branch 2 they do not.
These crucial findings will guide our stationary-soliton-based asymptotic theory for large ℓ that we develop
in Sec. <ref>. There we give an analytical description of the optimal paths
for branches 2 and 3, which are the only relevant ones for large
ℓ. There we establish a first-order transition at H̅≃ 4 for large but finite ℓ
and show that it becomes “accidentally" second order in the limit of ℓ→∞.
We also find that the smoothed-out second-order
transition from branch 2 to branch 3 occurs at H̅ = ℓ^2 / 6. The resulting
analytical predictions, indicated by the lines in
Figs. <ref> and <ref>, are in good agreement with numerics
at large, but finite ℓ.
At moderate ℓ the transition region where the spatially uniform
solution (<ref>) of branch 1 becomes sub-optimal is quite
complex, as one can appreciate from
Fig. <ref>.
We see that, in general, there are both first and second order
transitions in this region: The uniform solution becomes
linearly unstable for some m > 1, leading to second-order
transitions, but there is also a competition with the (subcritical) one-soliton
solution. The subcritical scenario clearly wins for sufficiently large ℓ. Indeed, for ℓ = 32 π
we observe only a first-order
transition from the spatially uniform to the soliton solution,
while the linear instability becomes irrelevant.
Note that, for branch 2, in addition to stationary single-soliton
solutions of the OFM equation, discussed so far, there are also stationary multi-soliton solutions
consisting of two or more (almost) non-interacting strongly localized stationary solitons
of p and corresponding expanding triangles of h. One such solution, which we observed numerically, is
shown in the top row of
Fig. <ref>. We found, however,
that such solutions always have a larger action than
the one-soliton solution for the same ℓ
and H̅. Therefore, the one-soliton solution indeed seems to provide
the optimal solution. In the limit ℓ→∞,
these multi-soliton solutions – a soliton gas – would contribute to the
pre-exponential factor for 𝒫(H̅, ℓ), but
pre-exponential factors are beyond the scope of this paper. Additionally, in the
bottom row in Fig. <ref>,
we show an optimal path for ℓ = 16 π and close
to H̅ = 4, which emerges through linear instability of
the (m = 11)-mode. Later on, however, it is overtaken by the
one-soliton solution.
§ LARGE-ℓ ASYMPTOTICS: RISE AND FALL OF THE SOLITON
§.§ General description of the solution
Guided by our numerical solutions and by the previous works on the one-point KPZ height
statistics on the line <cit.> and on a ring <cit.>, here we find approximate
asymptotic solutions of Eqs. (<ref>)-(<ref>) which give rise to two nontrivial
branches (we call them branches 2 and 3) of the large-deviation function S(H̅) for large ℓ.
As we found, for both branches the maximum one-point height of the interface H=max h(x,t=1) turns
out to be very large: H≫ 1. Therefore, in addition to the strong inequality ℓ≫ 1,
we can also use the strong inequality H≫ 1. This allows us to construct “inviscid" asymptotic
solutions in different regions of space, separated by discontinuities of proper types. Like their
numerical counterparts, the analytical solutions exhibit two distinct stages in time, with an abrupt
transition between them at some branch-dependent intermediate time 0<t=τ<1 which we will determine.
For 0<t<τ the solution has the form of a strongly localized stationary soliton of p(x,t)
and “antishock" of V(x,t)= -∂_x h(x,t) which were previously identified in the problem
of one-point height statistics on the line <cit.> and on a ring <cit.>.
The characteristic width, O(1/√(H)), of the soliton-antishock structure is much less than
unity. Outside of the soliton-antishock one has p(x,t) ≃ 0. As a result, Eq. (<ref>)
is obeyed trivially and, at distances ≳ 1 from the soliton, h(x,t) follows the deterministic KPZ dynamics
∂_th=∂_x^2h+1/2(∂_xh)^2 ,
which is equivalent to the Burgers equation
∂_tV+ V ∂_x V =∂_x^2V
for the field V(x,t) =-∂_x h(x,t). In addition, the diffusion term in Eq. (<ref>)
can also be neglected at large distances <cit.>, and one arrives at the inviscid Hopf equation
∂_tV+V∂_x V=0 .
The stationary soliton-antishock structure drives an almost triangular configuration of h(x,t)
which is expanding outwards <cit.>. The height of the triangle grows linearly with time, while
its two edges propagate with a constant speed as “ordinary" shocks of V(x,t) obeying Eq. (<ref>)
or, when treated as discontinuities, obeying Eq. (<ref>) <cit.>. The positions of these shocks
at t=1 determine the boundaries of the “impact region" of the soliton-antishock structure. When the
size of the impact region, which scales as O(√(H)) <cit.>, is shorter than the rescaled system
size ℓ (this happens when H̅ is not too large, see below), there is also an external region
where the uniform solution p(x,t)=Λ =const and V(x,t)=0 holds, see Eq. (<ref>).
The external uniform solution holds for all times 0<t<1, and it contributes to the large-deviation
function of H̅. In the inviscid limit the regions of zero and nonzero p are divided by a
stationary discontinuity. This regime corresponds to branch 2.
Branch 3 appears when, due to the periodicity of the system, the ordinary shocks of V(x,t)
collide with each other before the final time t=1 is reached. In this case the impact region
of the soliton-antishock structure extends to the whole system, and a region of the uniform solution does not appear.
For the solution to obey the boundary condition (<ref>), the p-soliton must turn into a
constant p= Λ at t=1. Remarkably, as we have seen in our numerical results for large ℓ,
the soliton rapidly decays in the vicinity of a well-defined time t=τ<1. For both branches 2 and 3,
the subsequent dynamics, at τ<t<1,
gives only a subleading contribution (which we neglect, alongside with other subleading contributions)
to the maximum one-point height H and to the action. This stage is important, however, for determining H̅.
We can qualitatively understand this nontrivial temporal structure of the solutions from the viewpoint of action
minimization: First, for 0 ≤ t ≤τ, the interface is efficiently driven upward by a stationary
p-soliton, in the same manner as for the one-point height PDF of the KPZ equation on the line <cit.>
and on a ring <cit.>. Then, quickly suppressing the soliton at an intermediate time 0<τ < 1 and
evolving the interface according to the almost free KPZ dynamics for τ < t ≤ 1 increases considerably
the average height H̅ for a negligible additional cost in terms of action. The optimal value of τ
is the one that minimizes the action for a given H̅.
As an overview, we present here the action S(H̅, ℓ) at leading order for large ℓ,
as will be derived in subsections <ref> and <ref>:
S(H̅, ℓ) ≃{[ H̅^2ℓ/2 , -∞ < H̅≤ 4 , (branch 1); (4 H̅ - 8) ℓ , 4 < H̅≤ℓ^2/6 , (branch 2); H̅^3/2Φ(H̅ / ℓ^2) , ℓ^2/6 < H̅ < ∞ , (branch 3) ].
where the function Φ(…) is defined in Eq. (<ref>) and
obeys Φ(z →∞) → 8 √(2) /3. The first line in Eq. (<ref>)
comes from the uniform solution (<ref>). The first two lines manifestly reveal the large-deviation
scaling (<ref>), while the third line does not.
Now we proceed to a more detailed description of the solutions, and we will start with branch 2.
§.§ Branch 2
Due to a translational symmetry of the problem (<ref>)-(<ref>), we can place the soliton-antishock
structure at x=0 (see Fig. <ref>) so that, to the leading order, H≃ h(0,τ).
As explained above, at H≫ 1, the p-soliton can be considered as a point-like object. We will only need
the value of its “mass", ∫ dx p(x,t) which, by virtue of Eq. (<ref>), is conserved. Using
the explicit expression for the soliton, p(x,t)=p_s(x) = 2 c cosh^-2 (√(c/2) x) <cit.>,
where c=H/τ, we obtain
∫_-∞^∞ dx p_s(x) = √(32 H/τ) .
The base of the triangular structure of the h-profile is equal to
2a(t)=√(2H/τ) t ,
while the triangle's height is
h(0,t)=Ht/τ , 0<t<τ .
Let us denote the total size of the impact region of the soliton-antishock structure
by 2a_1, where a_1 ≡ a(t=1). In the region a(t)<|x|<a_1 we have
p=h=0 .
The triangular profile of h on the interval 0<|x|<a(t) is described by the expressions <cit.>
p(x,t)=0 , h(x,t)
=H(t/τ-√(2)|x|/√(Hτ))
, and
V(x,t)=-∂_xh(x,t) = Ṽ sgn(x) ,
where
Ṽ=√(2H/τ) .
As one can see from Eqs. (<ref>) and (<ref>), the ordinary shocks propagate
with the speed Ṽ/2, as to be expected from Eq. (<ref>) or (<ref>) <cit.>.
After the rapid decay of the soliton at t=τ, the “post-soliton" solution (in the region to be determined)
can be described by the ideal hydrodynamic equations corresponding to the inviscid limit of Eqs. (<ref>)
and (<ref>):
∂_tV +V ∂_xV = -∂_x p ,
∂_tp+∂_x(pV) = 0 .
The V-antishock now plays the role of a discontinuity which undergoes a decay starting from t=τ.
In the leading order we can neglect the -∂_x p term, so that Eq. (<ref>) becomes the Hopf
equation (<ref>). Its solution is
V(x,t)=x/t-τ .
Plugging Eq. (<ref>) into Eq. (<ref>) and using the “final" condition (<ref>)
on p(x,t=1), we obtain
p(x,t) =Λ(1-τ)/(t-τ) .
The solution (<ref>) and (<ref>) holds at t>τ and |x|≤ a_d(t). The boundaries of this region,
x= ± a_d(t)≡Ṽ(t-τ) ,
represent weak discontinuities, moving with the speed Ṽ – that is twice as fast as
the ordinary shocks at x=± a(t), see Eq. (<ref>). Our simulations show
that the weak discontinuities catch up with the shocks at t=1. The corresponding condition can
be written as a_d(1) = a_1, and it yields τ=1/2. (We also obtained τ=1/2 analytically by solving the problem for a general τ and then minimizing the resulting action with respect to τ; these calculations are somewhat cumbersome, and we do not show them here.)
Therefore, during the second stage of the dynamics, 1/2<t<1, V(x,t) is described by the following expressions:
V(|x|≤ a_d(t),t)=x/t-1/2 , V(a_d(t)≤|x|≤ a(t),t)=±Ṽ , V(a(t)<|x|< a_1,t)=0 .
Using the relation V(x,t)=-∂_x h(x,t), we can obtain the h-profile at any time 1/2<t<1
by integrating Eq. (<ref>) over x. The result describes a parabolic profile of h at |x|<a_d(t),
flanked by the linear profiles at a_d(t)<|x|<a_1 corresponding to the triangular structure of h(x,t) of
the first stage the dynamics. At t=1 the parabolic profile takes over the whole interval |x|<a_1, and we obtain
h(x,t=1)=H-x^2 , |x|<a_1=√(H).
At |x|>a_1 the uniform solution holds:
h(|x|>a_1,t)=Λ t , p(|x|>a_1,t)=Λ .
Now we evaluate the contributions of the uniform solution to the action, Δ S_u, and to the average
height, ΔH̅_u, at t=1. As ℓ goes to infinity, we can neglect the difference between the
total system length ℓ and the length of the domain of uniform solution ℓ-2a_1, and obtain
Δ S_u=Λ^2ℓ/2 , ΔH̅_u=Λ .
The leading-order contribution of the soliton-antishock solution to the action is <cit.>
Δ S_s=8√(2)/3 H^3/2/√(τ)=16 H^3/2/3 .
This contribution comes from the first stage of the process, 0<t<1/2, while the second stage gives
only a subleading contribution which we neglect.
The second stage, 1/2<t<1 does contribute to H̅, however. Using Eq. (<ref>), we obtain
ΔH̅_s=4 H^3/2/3ℓ .
What remains to be done is to determine Λ, to collect the contributions to S and H̅,
and to eliminate H in favor of H̅ and ℓ.
In order to determine Λ, we use the local conservation of p(x,t) evident in Eq. (<ref>).
Because of this local conservation law,
the total soliton “mass", see Eq. (<ref>), must be equal to the integral of the solution (<ref>)
for p(x,t) over x from -a_1 to a_1. This condition yields a remarkably simple result: Λ=4,
a constant value (up to small subleading corrections).
Combining Eqs. (<ref>)-(<ref>), we obtain
H̅=4+4 H^3/2/3ℓ ,
S=8ℓ+16 H^3/2/3 .
Eliminating H, we arrive at the leading-order result for the large-deviation function of H̅
for branch 2 in the limit of large ℓ, which was announced in the second line of Eq. (<ref>):
S=(4H̅ -8) ℓ .
This expression obeys the large-deviation scaling (<ref>). As was to be expected, the actions
of branches 1 and 2 coincide at
H̅=H̅_c=4. Noticeably, their first derivatives with respect to H̅
also coincide at this point.
In addition, using Eq. (<ref>), we see that Eq. (<ref>) is consistent with Λ=4,
independently of H̅, for branch 2.
We will look into these peculiarities more carefully in Sec. <ref>.
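Both the value Λ=4 and the linear action (<ref>) follow from elementary algebra, which can be reproduced with a few lines of SymPy; the sketch below is purely illustrative:

import sympy as sp

H, Hb, ell, Lam = sp.symbols('H Hbar ell Lambda', positive=True)

# The soliton "mass" sqrt(32*H/tau) at tau = 1/2 must equal the mass
# 2*Lambda*sqrt(H) carried by p(x,1) = Lambda over the impact region |x| < sqrt(H):
print(sp.solve(sp.Eq(sp.sqrt(64 * H), 2 * Lam * sp.sqrt(H)), Lam))    # -> [4]

# Eliminate H between Hbar = 4 + 4*H**(3/2)/(3*ell) and S = 8*ell + 16*H**(3/2)/3:
H32 = sp.Rational(3, 4) * (Hb - 4) * ell       # H**(3/2) from the first relation
print(sp.expand(8 * ell + sp.Rational(16, 3) * H32))                  # -> 4*Hbar*ell - 8*ell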
One applicability condition of Eq. (<ref>) is the strong inequality H≫ 1.
Using the first relation in Eq. (<ref>),
we can rewrite this strong inequality in terms of H̅ and ℓ≫ 1:
H̅-4 ≫ 1/ℓ .
This condition limits H̅ from below. A condition on H̅ from above distinguishes
branch 2 from branch 3. It demands that the ordinary shocks of V(x,t) do not collide with
each other until t=1. (While deriving Eq. (<ref>) we demanded a strong inequality 2√(H)≪ℓ. However, when H̅≫ 1, the main contribution to S and H̅ comes from the soliton-antishock solution, rather than from the uniform one. As a result, the strong inequality 2√(H)≪ℓ becomes unnecessary, and a simple inequality suffices.)
This condition can be written as 2√(H)<ℓ or, using Eq. (<ref>),
H̅-4<ℓ^2/6 , ℓ≫1 .
Now we proceed to a description of branch 3.
§.§ Branch 3
When the inequality (<ref>) is violated, the two outgoing ordinary shocks of V(x,t) collide
with each other and merge at x=±ℓ / 2 (which is the same point of the ring) at some t<1.
Upon the merger, a single stationary shock appears, see Fig. <ref>. Now the impact region of
the soliton-antishock is the whole system: 2a_1=ℓ, and the external region of the uniform solution,
characteristic of branch 2, does not appear here.
Most of the general formulas, derived in the context of branch 2, remain valid for branch 3.
In particular, here too τ is determined by the condition that the weak discontinuities catch
up with the ordinary shocks at t=1. The only difference is that a_1=ℓ/2 now. Solving the
equation a_d(1) = a_1, or
√(2H/τ)(1-τ) = ℓ/2 ,
we obtain
τ =1+ℓ^2/16 H-ℓ√(ℓ^2+32H)/16 H ,
so that τ depends on H and ℓ. Unsurprisingly, Eq. (<ref>) yields τ=1/2 in
the boundary case H=ℓ^2/4, when the size 2a_1 of the impact region of the soliton-antishock
in an infinite system is equal to the system size ℓ. When H goes to infinity, τ approaches 1.
We will not repeat here all expressions for h(x,t), V(x,t) and p(x,t) in different regions,
and present only the expression for h(x,1):
h(x,1)=H-x^2/2(1-τ) ,
with τ from Eq. (<ref>).
Using this expression, we can evaluate H̅. The action S remains the same as in the
first equality in Eq. (<ref>), and we obtain
H̅=H-1/24 ℓ^2/(1-τ) ,
S=8√(2)/3 H^3/2/√(τ) .
Eliminating H from these relations and using Eq. (<ref>), we arrive at a leading-order
result for the large-deviation function S(H̅,ℓ) in the limit of large ℓ and very
large H̅, which was announced in the third line of Eq. (<ref>):
S(H̅,ℓ) = H̅^3/2Φ(H̅/ℓ^2) , where Φ(z) =2 √(2) (9 z+1+√(18z+1))^1/2(36 z+1+√(18z+1))/81 z^3/2 .
In terms of H̅, the condition H>ℓ^2/4 becomes, in the leading order, H̅>ℓ^2/6.
As a result, the function Φ(z) is defined for z≥ 1/6, and Φ(1/6) = 4 √(6).
A graph of Φ(z) is depicted in Fig. <ref>.
In the limit of H̅≫ℓ^2≫ 1 Eq. (<ref>) yields
S=8√(2)/3H̅^3/2+4/3H̅ℓ+ … .
The leading-order term of this expression coincides with the action for a single-point height H <cit.>.
This is to be expected, because for very large H̅, τ approaches 1, and the difference
between H̅ and H becomes relatively small.
The expressions in Eqs. (<ref>) and (<ref>) match in the leading order in ℓ
at the boundary H̅≃ℓ^2/6 between the branches 2 and 3, both giving (2/3) ℓ^3+O(ℓ).
For completeness, we also present the optimal transition time τ in Eq. (<ref>) in terms of H̅ and ℓ:
τ(H̅,ℓ)=1+ℓ^2/12 H̅-ℓ√(ℓ^2+18
H̅)/12 H̅ .
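The limiting values of Φ and the matching of branches 2 and 3 at H̅=ℓ^2/6 are easily verified numerically; an illustrative check (not part of the paper's code) is:

import numpy as np

def Phi(z):
    r = np.sqrt(18.0 * z + 1.0)
    return 2.0 * np.sqrt(2.0) * np.sqrt(9.0 * z + 1.0 + r) * (36.0 * z + 1.0 + r) / (81.0 * z**1.5)

print(Phi(1.0 / 6.0), 4.0 * np.sqrt(6.0))     # Phi(1/6) = 4*sqrt(6)
print(Phi(1e8), 8.0 * np.sqrt(2.0) / 3.0)     # Phi(z -> infinity) -> 8*sqrt(2)/3

ell = 200.0                                    # an illustrative large ell
Hb = ell**2 / 6.0                              # boundary between branches 2 and 3
print((4.0 * Hb - 8.0) * ell,                  # branch 2
      Hb**1.5 * Phi(Hb / ell**2),              # branch 3
      2.0 * ell**3 / 3.0)                      # both equal (2/3)*ell**3 up to O(ell)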
§.§ Dynamical phase transition
In this subsection we resolve the nature of the DPT between
branches 1 and 2, which corresponds to the subcritical bifurcation from the uniform solution (<ref>)
to the leading-order soliton solution discussed in Sec. <ref>. To this end we will have to focus
on subleading corrections that we have previously ignored. We will also present the large-deviation
scaling of 𝒫(H̅,L,T) in the limit of T → 0 at fixed L, in the physical units.
As we have already noticed, the actions S_1(H̅, ℓ) and S_2(H̅, ℓ), described
by the first and second lines of Eq. (<ref>),
coincide at H̅=H̅_c=4 together with their first derivatives ∂ S_1(H̅, ℓ) /
∂H̅ and ∂ S_2(H̅, ℓ)/∂H̅
at H̅_c=4. It would be incorrect, however,
to conclude from here that the DPT between branches 1 and 2 at H̅=H̅_c
is of second order. Indeed, the supercritical first bifurcation of the uniform solution (<ref>)
to a solution with a single maximum of h(x,1) – the one with q = 2 π / ℓ
in Eq. (<ref>) – actually occurs, as ℓ→∞, at much
larger H̅≃ℓ^2 / 16 ≫ 4. Furthermore,
as follows from numerical minimization of Eq. (<ref>), instability
of any Fourier mode around the uniform solution can only occur
at H̅≃ 4.60334 (for q ≃ 1.34336). It
is not surprising, therefore, that
at large but finite ℓ, and at a slightly shifted transition
point H̅_c> 4 where the actions of branches 1 and 2
are equal, the optimal paths h(x,t) for branches 1 and 2, that we found numerically,
are dramatically different, and their respective Lagrange
multipliers Λ are not equal. The latter fact means, by
virtue of Eq. (<ref>), that at large ℓ we actually observe a first-order DPT, not a second-order one.
To make sense of these facts, we recall that Eq. (<ref>)
for the action of branch 2 is merely a leading order asymptotic
at ℓ→∞. Subleading terms, so far unaccounted for, should remove
the degeneracy of the leading-order results by breaking the accidental continuity
of the first derivative ∂ S(H̅, ℓ)/∂H̅
at H̅=H̅_c, and
rendering the corresponding bifurcation subcritical and the corresponding DPT
first-order. The subleading terms should also account for a slight shift of the critical
point H̅_c to the right from its leading-order
value H̅_c=4, as observed in our numerics.
Motivated by the large-H asymptotic of the upper tail of the exact
short-time probability distribution of the one-point height h(x = 0,t = 1)=H
on the line, determined in Ref. <cit.>, we can conjecture the following
subleading terms of S_2(H̅,ℓ) at large ℓ:
S_2(H̅,ℓ)=(4H̅ -8) ℓ+B H^1/2+C H^-1/2+… ,
where B>0 and C are numerical constants O(1), which are independent
of ℓ. The condition B>0 is necessary for the equation
S_1 ( H̅_c,ℓ) =
S_2 ( H̅_c,ℓ)
to have a solution for H̅_c close to
4 at large ℓ.
To verify Eq. (<ref>), we plotted in Fig. <ref> our large-ℓ numerical results for
[S_2(H̅,ℓ) - (4H̅ -8)
ℓ]/√(H) versus H. A fair plateau at large H is observed, with B ≃ 5.3 > 0 found by fitting.
Now, keeping the first subleading term in Eq. (<ref>)
and the leading-order dependence of H on H̅ in Eq. (<ref>),
we can rewrite Eq. (<ref>) in terms of H̅ and ℓ:
S_2(H̅,ℓ)=8ℓ+4(H̅ -4) ℓ
+ (3/4)^1/3 B [(H̅-4)ℓ]^1/3
+ … ,
(H̅-4)ℓ≫ 1 .
Now Eq. (<ref>) for the critical point becomes
1/2(H̅_c-4 )^2ℓ
= (3/4)^1/3 B [ (H̅_c
-4 )ℓ]^1/3+… ,
Its approximate solution,
H̅_c = 4 + 6^1/5 B^3/5 ℓ^-2/5+… ,
describes a small ℓ-dependent positive shift of the critical point from the leading-order value 4.
This H̅_c corresponds to
H = (9/8)^2/5 B^2/5ℓ^2/5 +…
of the branch-2 solution at the critical point. We observe that, for this solution, H →∞
as ℓ→∞, guaranteeing applicability of our theory at large ℓ. Going back to the
large-deviation scaling (<ref>), we notice that there is now a small but finite jump ∼ℓ^-2/5
of the derivative ℓ^-1∂ S/∂H̅ of the effective rate function at the shifted critical
point. The transition between branches 1 and 2, therefore, is of first order.
By virtue of Eq. (<ref>), the subleading correction in Eq. (<ref>) also removes the degeneracy
of the leading-order result Λ=4 by adding to it a small ℓ-dependent correction that goes
to zero as ℓ→∞.
Using Eq. (<ref>), we plotted in Fig. <ref> the actions of branches 1 and 2, normalized
by ℓH̅^2, in the
vicinity of the H̅ = H̅_c. It is clearly seen that the subleading correction removes the degeneracy
and makes the DPT first-order. Furthermore,
the predicted H̅_c from Eq. (<ref>)
for ℓ = 32 π, which is H̅_c≃ 4.6, is close to our numerical result H̅_c≃ 4.57 for this ℓ, see
Fig. <ref>.
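The comparison just quoted amounts to a one-line evaluation of Eq. (<ref>); for instance (the fitted value B ≈ 5.3 being an empirical input):

import numpy as np

B = 5.3                                  # fitted from the plateau in Fig. <ref>
for ell in (16 * np.pi, 32 * np.pi):
    print(ell, 4.0 + 6.0**0.2 * B**0.6 * ell**(-0.4))   # ell = 32*pi gives ~4.6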
Note that our arguments in favor of the expansion (<ref>) are far from rigorous.
In particular, we cannot exclude a very
slow (for example, logarithmic) dependence of the coefficient B on H in Eq. (<ref>)
based only on the numerical evidence. However,
our main conclusion about the first-order DPT between branches 1 and 2
seems robust.
To conclude this section, we present our large-deviation results, described by the first two lines
of Eq. (<ref>), in
the physical units. Recall that, by taking the
limit T → 0 at fixed L,
we have both ε∝ T^1/2→ 0 and ℓ→∞. In this limit only the first
two lines of Eq. (<ref>) are relevant, and we
obtain (note the factor of T, instead of the customary weak-noise factor T^1/2, on the left-hand side of Eq. (<ref>)):
-lim_T→ 0 T ln P(H̅,L,T)
=ν^2/Dλ^2 L f(λH̅/ν) ,
f(w)={[ w^2/2 , w<4 ; 4w-8 , w>4 ].
As we
elaborated in this subsection, the DPT
in Eq. (<ref>) at w = 4 can be called an “accidental”
second order DPT in the sense that the optimal paths, that are responsible for the two branches in Eq. (<ref>),
transition into each other discontinuously, and that the differentiability of the rate function
at the critical point emerges only in
the limit T → 0 at fixed L.
§ SMALL-ℓ ASYMPTOTICS
We found that our numerical results on the second-order DPT at small ℓ, shown in Figs. <ref>
and <ref> and described in Sec. <ref>,
can be understood in terms of a small-ℓ asymptotic solution of the OFM equations (<ref>)
and (<ref>) which was previously found in the context of the one-point
height distribution on a ring <cit.>. In this solution
the interface is driven by a stationary dn^2 profile (see below) of p. The solution represents a finite-amplitude
generalization of a weak sinusoidal modulation with m = 1 which results from the second-order DPT from the uniform solution. This solution is given by the following expressions (they are invalid inside narrow boundary layers in time at t=0 and t=1, but the contribution of these layers to the action is negligible):
h(x,t) ≃ H t + 2 lndn[2 K(k) x/ℓ,
k ] ,
p(x,t) ≃ p_0(x) = [4 K(k)/ℓ]^2
dn^2 [2 K(k) x/ℓ , k] ,
where K(k) is the complete elliptic integral of the first kind
and dn(…) is one of the Jacobi elliptic functions <cit.>.
The elliptic modulus k ∈ (0,1) is determined by H via the relation
8 (2 - k^2) K^2(k)/ℓ^2 = H .
The action of this solution as a function of k is <cit.>
S(k) = 128 K^3(k)/(3 ℓ^3) [2(2-k^2) E(k) - (1-k^2) K(k) ] .
At given ℓ≪ 1, Eqs. (<ref>) and (<ref>) determine S as a
function of H in a parametric form. The critical point H̅ = (2 π / ℓ)^2 corresponds
to k=0, at which Eqs. (<ref>) and (<ref>) reduce to the uniform solution; values k>0 correspond to supercritical solutions.
In order to recast this dependence in terms of S(H̅,ℓ),
we need to express H through H̅ and ℓ. Although Eq. (<ref>) is formally inapplicable
at t=1, asymptotically as ℓ→ 0 we still have
H - H̅≃ -1/ℓ∫_-ℓ /2^ℓ / 2
2 lndn[2 K(k) x/ℓ,
k ] dx = (1/2) ln[1/(1 - k^2)] .
where we have used a product formula for dn <cit.>.
Using Eqs. (<ref>) and (<ref>), we obtain
H̅(k) = 8 (2 - k^2) K^2(k)/ℓ^2 - (1/2) ln[1/(1-k^2)] .
Equations (<ref>) and (<ref>) determine S=S(H̅,ℓ) and were
used in Fig. <ref> to draw the theoretical curves for the action and
Lagrange multiplier (via Eq. (<ref>))
at ℓ = π / 8, which agree very well with the numerical action minimization results. Also shown is the
asymptotic action
S(H̅) ≃8 √(2)/3H̅^3/2
as H̅→∞, which agrees with Eq. (<ref>) and can be obtained from
Eqs. (<ref>) and (<ref>) by considering the limit k → 1
with E(k) → 1 and K(k) ≃ (1/2) ln[1/(1-k)]. As one can see from
Fig. <ref>, the asymptotic relation (<ref>)
is not yet satisfied for the moderately small ℓ = π / 8: noticeably, the solution h(x,1)
at the final time deviates from Eq. (<ref>). However, the numerically found action
is already accurately described by Eqs. (<ref>) and (<ref>), because
the difference between H and H̅ is always subleading – at most O(√(H)) – at small ℓ.
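For completeness, the parametric curves of this section are simple to tabulate. The sketch below (illustrative; note that SciPy's elliptic integrals take the parameter m=k^2 as argument) evaluates S and H̅ along the k-parametrization at ℓ=π/8 and checks the two limits discussed above:

import numpy as np
from scipy.special import ellipk, ellipe

ell = np.pi / 8.0
m = np.linspace(1e-6, 1.0 - 1e-9, 400)        # elliptic parameter m = k**2
K, E = ellipk(m), ellipe(m)
H = 8.0 * (2.0 - m) * K**2 / ell**2
Hbar = H - 0.5 * np.log(1.0 / (1.0 - m))
S = 128.0 * K**3 * (2.0 * (2.0 - m) * E - (1.0 - m) * K) / (3.0 * ell**3)
print(Hbar[0], (2.0 * np.pi / ell)**2)        # k -> 0: the critical point
print(S[-1] / Hbar[-1]**1.5, 8.0 * np.sqrt(2.0) / 3.0)   # k -> 1: tends to 8*sqrt(2)/3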
§ SUMMARY AND DISCUSSION
We applied the OFM to evaluate analytically and numerically the short-time PDF P (H̅, L, t=T),
and the optimal paths which dominate this PDF, of the KPZ interface on a ring. The short-time PDF has
the scaling form (<ref>), where ε∼ T^1/2 plays the role of the weak-noise
parameter. The phase diagram of the system
represents the (H̅, ℓ=L/√(ν T)) plane. We were especially interested in the DPTs that occur
in this system at sufficiently large positive λH̅>0. We found that, depending on ℓ, these
DPTs occur via either a supercritical, or a subcritical bifurcation of the “trivial" (uniform in space)
optimal path of the KPZ interface. The supercritical bifurcations dominate at very small ℓ, the subcritical
bifurcations dominate at very large ℓ. In these two limits we obtained asymptotic analytical solutions
for the optimal paths of the system, evaluated the resulting action, and verified the analytical results
numerically. We also found that, as T goes to zero at constant L, the PDF acquires a simple large-deviation
form (<ref>). Interestingly, the rate function f(H̅) exhibits, at a critical value
of H̅=H̅_c(ℓ), a DPT which is accidentally second-order.
In the (much more complicated) region of intermediate ℓ=O(1) we observed numerically both supercritical,
and subcritical bifurcations of the uniform solution. This region of the phase diagram is presently out of
reach of analytical theory. It would be very interesting, but challenging, to determine the complete phase
diagram of the system in this region. In particular, it would be interesting to locate, somewhere
between ℓ=16 π and ℓ = 32π, at least one critical point (H̅_*, ℓ_*) where the
second order DPT curve H̅_c^(2)(ℓ) ends when it meets the first order DPT curve H̅_c^(1)(ℓ),
as well as other possible critical points.
These tasks will become more feasible if this problem, as described by Eqs. (<ref>)-(<ref>),
joins the list of similar
large-deviation OFM problems for the KPZ equation which have been solved exactly by the inverse scattering
method (ISM) <cit.>. Indeed, as was previously found in Ref. <cit.>,
a canonical Hopf–Cole transformation brings Eqs. (<ref>) and (<ref>) into the nonlinear
Schrödinger equation in imaginary space and time. Therefore, Eqs. (<ref>) and (<ref>)
belong to a family of completely integrable models. The only problem (but potentially a big one) is to
adapt the ISM to a finite system with periodic boundaries and to accommodate the problem-specific boundary
conditions (<ref>) and (<ref>). The exact solution would also provide
a full analytic control of the subleading corrections to the action of branch 2, which are presently only semi-empirical.
Finally, it would be very interesting to explore the possibility of extending to the spatially averaged KPZ
interface height some of the recent “stochastic integrability" approaches, which led, for selected initial
conditions, to exact representations for the complete statistics of the one-point interface
height <cit.>.
§ ACKNOWLEDGMENTS
The authors thank Eldad Bettelheim and Naftali R. Smith for useful discussions.
This research was supported by the program
“Advanced Research Using High Intensity Laser-Produced Photons and Particles"
(ADONIS) (CZ.02.1.01/0.0/0.0/16019/0000789) of the European Regional Development Fund (ERDF) (PS),
and by the Israel Science Foundation (Grant No. 1499/20) (BM).
§ NUMERICAL METHODS
Our numerical procedure of finding solutions h and p of the
OFM problem (<ref>)-(<ref>)
can be summarized as follows:
To compute numerical solutions to the boundary-value problem
for h and p for given ℓ and H̅, we use a
refined version of the popular Chernykh–Stepanov
back-and-forth iteration algorithm <cit.> as described in detail
in Ref. <cit.>, using the language of PDE-constrained optimization.
The idea is to interpret the back-and-forth
iterations – fixing Λ and solving Eq. (<ref>) forward in time
with fixed p, and Eq. (<ref>) backward in time with fixed h until
convergence – as adjoint <cit.> gradient evaluations δ S /
δ p of the action
functional with fixed Λ,
S[p] = 1/2∫_0^1 dt ∫_0^ℓ
d x p^2(x,t) - Λ∫_0^ℓ h[p](x,1) dx ,
with the height profile h = h[p] determined for a
given p through Eq. (<ref>).
This interpretation allows us to use automatic update step-size
control (here: Armijo line search <cit.>) and
preconditioning for faster convergence (here: L-BFGS method <cit.>).
Conceptually, one fixes Λ in this formulation and obtains
the corresponding average height value H̅ a posteriori.
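To make the structure of the iteration concrete, here is a minimal self-contained sketch (not the production code of Refs. <cit.>): spectral x-derivatives, exact treatment of the diffusion terms through an integrating factor, first-order explicit stepping of the nonlinear terms, and alternating forward/backward sweeps at fixed Λ. All grid parameters and the value of Λ are illustrative:

import numpy as np

ell, nx, nt, LAM, sweeps = np.pi / 8, 64, 2000, 2.0, 50   # illustrative values
dt = 1.0 / nt
kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=ell / nx)
E = np.exp(-kx**2 * dt)            # integrating factor for the diffusion terms

def dx(f):
    return np.real(np.fft.ifft(1j * kx * np.fft.fft(f)))

h = np.zeros((nt + 1, nx))         # h(x, t_n); h(x, 0) = 0
p = np.full((nt + 1, nx), LAM)     # initial guess for p(x, t_n)

for sweep in range(sweeps):
    # forward sweep: d_t h = h_xx + (h_x)**2/2 + p, with p frozen
    for n in range(nt):
        nonlin = 0.5 * dx(h[n])**2 + p[n]
        h[n + 1] = np.real(np.fft.ifft(E * np.fft.fft(h[n] + dt * nonlin)))
    # backward sweep: d_t p = -p_xx + (p h_x)_x, p(x, 1) = Lambda, with h frozen
    p[nt] = LAM
    for n in range(nt, 0, -1):
        nonlin = -dx(p[n] * dx(h[n]))
        p[n - 1] = np.real(np.fft.ifft(E * np.fft.fft(p[n] + dt * nonlin)))

H_bar = h[nt].mean()
S = 0.5 * dt * (ell / nx) * np.sum(p[:nt]**2)
print(H_bar, S)   # settles on the uniform solution here: H_bar ~ 2, S ~ ell*LAM**2/2

For the parameters chosen above the fixed point is simply the uniform solution, while reaching the nontrivial branches in practice requires the step-size control, preconditioning and branch-tracing refinements described in this Appendix.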
For large ℓ we find multiple solutions for the
same H̅, and the action S(H̅,ℓ) of the optimal solution as a
function of H̅
becomes nonconvex for some H̅. Nonconvexity of the rate
function S(H̅) is an issue because
minimizing the functional (<ref>) effectively computes the
Legendre–Fenchel transform of the rate function at Λ,
which may diverge in this case. Therefore, we add a
penalty term to the action, leading to the so-called
augmented Lagrangian formulation <cit.>
S[p] = 1/2∫_0^1 dt ∫_0^ℓ
d x p^2(x,t) - Λ(
∫_0^ℓ h[p](x,1) dx - ℓH̅)
+ μ/2(∫_0^ℓ h[p](x,1)
dx - ℓH̅)^2 ,
and solve multiple minimization problems for increasing penalty
parameters μ.
In this formulation, one can directly prescribe H̅ at the
cost of solving multiple optimization problems, and it is usable
regardless of convexity of the rate function, or in other words regardless of
bijectivity of the map between H̅ and Λ.
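The outer loop of this formulation is the standard method of multipliers. The toy example below (a finite-dimensional stand-in for the PDE problem, with an exactly solvable inner minimization) illustrates the multiplier and penalty updates:

import numpy as np

rng = np.random.default_rng(0)
a, c = rng.normal(size=5), 2.0          # toy constraint a.p = c
p, Lam, mu = np.zeros(5), 0.0, 1.0
for _ in range(8):
    # inner problem: min_p (1/2)|p|**2 - Lam*(a.p - c) + (mu/2)*(a.p - c)**2,
    # solved exactly here; in the PDE setting this is the inner L-BFGS solve
    alpha = (Lam + mu * c) / (1.0 + mu * (a @ a))
    p = alpha * a
    constraint = a @ p - c
    Lam -= mu * constraint              # multiplier update
    mu *= 10.0                          # increase the penalty parameter
print(a @ p, Lam)                       # a.p -> 2.0, Lam -> c/|a|**2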
The formulation (<ref>) is more convenient to
trace solution branches: one initializes the optimization on an
already found solution on a given branch and slightly changes
Λ. In order to trace branches close to the transition
region for large ℓ in
the nonconvex case, we temporarily reparameterize the observable
as described in Ref. <cit.> with reparameterizations
g(z) = lnln z or g(z) = 1 - exp{-(z - 3.5) }.
Within this general framework, we use a
pseudo-spectral code with spatial resolution n_x
to solve Eqs. (<ref>)
and (<ref>), with an exact integration of the diffusion
terms through an integrating factor in Fourier space. An explicit
second-order Runge–Kutta integrator with n_t equidistant steps
is used in time. The gradient of the action functional is
evaluated exactly on a discrete level (“discretize,
then optimize”). Python source code to illustrate the optimization
methods in a simple toy problem
can be found in Ref. <cit.>.
99
KK2007 I. V. Kolokolov and S. E. Korshunov, Phys. Rev. B 75, 140201(R) (2007).
KK2008 I. V. Kolokolov and S. E. Korshunov, Phys. Rev. B 78, 024206 (2008).
KK2009 I. V. Kolokolov and S. E. Korshunov, Phys. Rev. E 80, 031107 (2009).
MKV B. Meerson, E. Katzav, and A. Vilenkin, Phys. Rev. Lett. 116, 070601 (2016).
KMSparabola A. Kamenev, B. Meerson, and P. V. Sasorov, Phys. Rev. E 94, 032108 (2016).
LDMRS P. Le Doussal, S. N. Majumdar, A. Rosso, and G. Schehr,
Phys. Rev. Lett. 117, 070403 (2016).
Janas2016 M. Janas, A. Kamenev, and B. Meerson, Phys. Rev. E 94, 032133 (2016).
KLD2017 A. Krajenbrink and P. Le Doussal, Phys. Rev. E 96, 020102(R)
(2017).
MeersonSchmidt2017 B. Meerson and J. Schmidt, J. Stat. Mech. (2017) P103207.
SMS2018 N. R. Smith, B. Meerson, and P. V. Sasorov, J. Stat. Mech. (2018) 023202.
SKM2018 N. R. Smith, A. Kamenev, and B. Meerson, Phys. Rev. E 97, 042130 (2018).
SmithMeerson2018 N. R. Smith and B. Meerson, Phys. Rev. E 97, 052110 (2018).
Hartmann2018 A. K. Hartmann, P. Le Doussal, S. N. Majumdar, A. Rosso,
and G. Schehr, Europhys. Lett. 121, 67004 (2018).
MV2018 B. Meerson and A. Vilenkin, Phys. Rev. E 98, 032145 (2018).
Asida2019 T. Asida, E. Livne, and B. Meerson, Phys. Rev. E 99, 042132 (2019).
SMV2019 N. R. Smith, B. Meerson, and A. Vilenkin, J. Stat. Mech. (2019)
053207.
HMS2019 A. K. Hartmann, B. Meerson, and P. Sasorov, Phys. Rev. Res. 1, 032043(R) (2019).
KLD2021 A. Krajenbrink and P. Le Doussal, Phys. Rev. Lett. 127, 064101 (2021).
HMS2021 A. K. Hartmann, B. Meerson, and P. Sasorov, Phys. Rev. E 104, 054125 (2021).
KLD2022 A. Krajenbrink and P. Le Doussal, Phys. Rev. E 105, 054142 (2022).
Lamarre P. Y. G. Lamarre, Y. Lin, L.-C. Tsai,
Probab. Theor. Rel. Fields 185, 885 (2023).
SGG T. Schorlepp, T. Grafke, and R. Grauer, J. Stat. Phys. 190, 50 (2023).
KPZ M. Kardar, G. Parisi, and Y.-C. Zhang, Phys. Rev. Lett. 56, 889
(1986).
shortcut F. D. Cunden, P. Facchi, and P. Vivo, J. Phys. A: Math. Theor. 49, 135202 (2016).
Whithambook G. B. Whitham, Linear and Nonlinear Waves (Wiley, New York, 2011).
SM18 N. Smith and B. Meerson, Phys. Rev. E 97, 052110 (2018).
Jacobi Wolfram MathWorld, https://mathworld.wolfram.com/JacobiEllipticFunctions.html
Wolf Wolfram Research, Inc., https://functions.wolfram.com/EllipticFunctions/JacobiDN/08/
SS T. Sasamoto and H. Spohn, Phys. Rev. Lett. 104, 230602 (2010).
CDR P. Calabrese, P. Le Doussal, A. Rosso, Europhys. Lett.
90, 20002 (2010).
Dotsenko V. Dotsenko, Europhys. Lett. 90, 20003 (2010).
ACQ G. Amir, I. Corwin, and J. Quastel, Comm. Pur. Appl. Math.
64, 466 (2011).
CLD11 P. Calabrese, and P. Le Doussal, Phys. Rev. Lett. 106, 250603 (2011).
CLD12 P. Le Doussal and P. Calabrese, J. Stat. Mech. (2012) P06001.
IS12 T. Imamura and T. Sasamoto, Phys. Rev. Lett. 108, 190603 (2012).
IS13 T. Imamura and T. Sasamoto, J. Stat. Phys. 150, 908 (2013).
Borodinetal A. Borodin, I. Corwin, P. L. Ferrari, and B. Vető, Math. Phys. Anal. Geom. 18, 20 (2015).
CS A. I. Chernykh and M. G. Stepanov, Phys. Rev. E 64,
026306 (2001).
SGMG T. Schorlepp, T. Grafke, S. May, and R. Grauer, Philos. Trans. Royal Soc. A 380, 20210051 (2022).
Plessix R.-E. Plessix, Geophys. J. Int. 167, 495 (2006).
Armijo L. Armijo, Pacific J. Math. 16, 1 (1966).
LN D. C. Liu and J. Nocedal, Math. Program. 45, 503 (1989).
Hestenes M. R. Hestenes, J. Optim. Theory. Appl. 4, 303 (1969).
AG M. Alqahtani and T. Grafke, J. Phys. A: Math. Theor. 54 175001 (2021).
STGS T. Schorlepp, S. Tong, T. Grafke, and G. Stadler, arXiv:2303.11919 (2023).
|
http://arxiv.org/abs/2307.04545v1 | 20230710132434 | The Pairing-Hamiltonian property in graph prisms | Marién Abreu, Giuseppe Mazzuoccolo, Federico Romaniello, Jean Paul Zerafa | math.CO | math.CO, 05C76, 05C70, 05C45 |
The Pairing-Hamiltonian property
in graph prisms
Marién Abreu
Dipartimento di Matematica, Informatica ed Economia
Università degli Studi della Basilicata, Italy
[email protected]
Giuseppe Mazzuoccolo
Dipartimento di Scienze Fisiche, Informatiche e Matematiche
Università degli Studi di Modena e Reggio Emilia, Italy
[email protected]
Federico Romaniello
Dipartimento di Matematica “Giuseppe Peano"
Università di Torino, Italy
[email protected]
Jean Paul Zerafa
St. Edward's College, Triq San Dwardu
Birgu (Città Vittoriosa), BRG 9039, Cottonera, Malta
[email protected]
05C76, 05C70, 05C45
Let G be a graph of even order, and consider K_G as the complete graph on the same vertex set as G. A perfect matching of K_G is called a pairing of G. If for every pairing M of G it is possible to find a perfect matching N of G such that M ∪ N is a Hamiltonian cycle of K_G, then G is said to have the Pairing-Hamiltonian property, or PH-property, for short. In 2007, Fink [J. Combin. Theory Ser. B, 97] proved that for every d≥ 2, the d-dimensional hypercube 𝒬_d has the PH-property, thus proving a
conjecture posed by Kreweras in 1996.
In this paper we extend Fink's result by proving that given a graph G having the PH-property, the prism graph 𝒫(G) of G has the PH-property as well. Moreover, if G is a connected graph, we show that there exists a positive integer k_0
such that the k^th-prism of a graph 𝒫^k(G) has the PH-property for all k ≥ k_0.
§ INTRODUCTION
The problem of extending perfect matchings of a graph to a Hamiltonian cycle was first considered by Las Vergnas <cit.> and Häggkvist <cit.> in the 1970s. They both proved Ore-type conditions which ensure that every perfect matching of a graph satisfying certain initial conditions can be extended to a Hamiltonian cycle.
Some years later, Kreweras <cit.> conjectured that any perfect matching of the hypercube 𝒬_d, d≥ 2, can be extended to a Hamiltonian cycle. This conjecture was proved in 2007 by Fink <cit.>. Actually, he proved a stronger version of the problem. Given a graph G, let K_G denote the complete graph on the same vertex set V(G) of G. Fink shows that every perfect matching of K_𝒬_d, and not only the perfect matchings of 𝒬_d, can be extended to a Hamiltonian cycle of K_𝒬_d, by using only edges of 𝒬_d. More in general, for a graph G of even order, a perfect matching of K_G is said to be a pairing of G. Given a pairing M of G, we say that M can be extended to a Hamiltonian cycle H of K_G if we can find a perfect matching N of G such that M ∪ N = E(H), where E(H) is the set of edges of H.
A graph G is said to have the Pairing-Hamiltonian property (or, the PH-property for short), if every pairing M of G can be extended to a Hamiltonian cycle as described above. For simplicity, we shall also say that a graph G is PH if it has the PH-property. This notation was introduced in <cit.>, where amongst other results, a classification of which cubic graphs admit the PH-property was given: these are the complete graph K_4, the complete bipartite graph K_3,3, and the cube 𝒬_3. We remark that this was the first non-trivial classification of graphs (having regular degree) admitting the PH-property, as, the only 2-regular graph admitting the PH-property is the cycle on 4 vertices, which happens to be 𝒬_2. We also remark that there is an infinite number of 4-regular graphs having the PH-property (see <cit.>). Following such a terminology we can state Fink's result from <cit.> as follows.
The hypercube 𝒬_d has the PH-property, for every d≥ 2.
Recall that the Cartesian product G H of two graphs G and H is a graph whose vertex set is V(G) × V(H), and two vertices (u_i, v_j) and (u_k, v_ℓ) are adjacent precisely if u_i = u_k and v_jv_ℓ∈ E(H), or u_iu_k ∈ E(G) and v_j = v_ℓ.
Given a graph G, the prism operator 𝒫(G) consists of two copies G_1 and G_2 of G with the same vertex labelling as in G, and an edge between the vertices having the same label. Note that 𝒫(G)=G K_2, the Cartesian product of G with K_2. The result of a single application of the operator is usually called the prism graph 𝒫(G) of G (see <cit.>), and repeated applications shall be denoted by powers, with 𝒫^k(G) being the prism graph of 𝒫^k-1(G). If needed we shall assume that 𝒫^0(G)=G.
It is worth noting that for d≥ 2, 𝒬_d=𝒫^d-2(𝒬_2). Hence, Theorem <ref> is equivalent to saying that for each k>0, 𝒫^k(𝒬_2) admits the PH-property. One might wonder whether it is possible to replace 𝒬_2 with some other initial graph. The main contribution of this paper is Theorem <ref>, which generalises Theorem <ref>. We obtain a much larger class of graphs with the PH-property by proving that for every graph G having the PH-property, the graph 𝒫^k(G) has the PH-property for each k≥0. Hence, Kreweras' Conjecture, and therefore Theorem <ref>, turn out to be special consequences of Theorem <ref> obtained starting from G=𝒬_2, which is trivially PH.
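For very small graphs the PH-property can be verified by brute force, which also makes the role of the prism operator concrete. The sketch below (illustrative Python code using NetworkX, not part of this paper) confirms the property for 𝒬_2=C_4 and for its prism 𝒬_3=𝒫(𝒬_2):

import networkx as nx

def pairings(nodes):
    # all perfect matchings of the complete graph on `nodes` (even order assumed)
    nodes = list(nodes)
    if not nodes:
        yield []
        return
    a = nodes[0]
    for i in range(1, len(nodes)):
        rest = nodes[1:i] + nodes[i + 1:]
        for rem in pairings(rest):
            yield [(a, nodes[i])] + rem

def has_PH_property(G):
    matchings_G = [M for M in pairings(G.nodes()) if all(G.has_edge(u, v) for u, v in M)]
    for P in pairings(G.nodes()):
        ok = False
        for N in matchings_G:
            H = nx.Graph()
            H.add_nodes_from(G.nodes())
            H.add_edges_from(P)
            H.add_edges_from(N)
            # P together with N forms a Hamiltonian cycle of K_G iff their union
            # is 2-regular and connected (this also forces P and N to be disjoint)
            if all(d == 2 for _, d in H.degree()) and nx.is_connected(H):
                ok = True
                break
        if not ok:
            return False
    return True

Q2 = nx.cycle_graph(4)                                   # Q_2 = C_4
Q3 = nx.cartesian_product(Q2, nx.complete_graph(2))      # P(Q_2) = Q_3
print(has_PH_property(Q2), has_PH_property(Q3))          # True True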
Other results on this topic, dealing with the Cartesian product of graphs, were also obtained in <cit.> and <cit.>. In particular, we state the following theorem which shall be needed in Section <ref>.
Let P_q be a path of length q. The graph P_q𝒬_d admits the PH-property, for d ≥ 5.
The above theorem is stated as Theorem 5 in <cit.>, where some other results apart from the statement above are proved. We use this result to obtain one of the same flavour for every connected graph G (see Theorem <ref>). More precisely, we prove that for every arbitrary connected graph G, the graph 𝒫^k(G) has the PH-property for a sufficiently large k, depending on the minimum number of leaves over all spanning trees of G. We refer the reader to <cit.> and <cit.> for other papers dealing with the Pairing-Hamiltonian property and related concepts under some graph operations.
§ GENERALISING FINK'S RESULT
As stated in the introduction, this section will be devoted to generalising Theorem <ref>.
Let G be a graph having the PH-property. Then, for each k≥0, 𝒫^k(G) admits the PH-property.
Consider 𝒫(G) and let G_1 and G_2 be the two main copies of the graph G in 𝒫(G). Then, a pairing P of 𝒫(G) can be partitioned into three subsets P_1 ∪ P_2 ∪ X where:
P_i={xy ∈ P | {x,y}⊂ V(G_i), for each i∈{1,2}}; and
X={xy ∈ P | x ∈ V(G_1), y ∈ V(G_2)}.
Note that |X| ≡ 0 (mod 2), since each G_i admits the PH-property and hence both are of even order. We shall distinguish between two cases: whether X is empty or not.
Case 1. |X|=0.
In this case, P=P_1 ∪ P_2. Since G_1 has the PH-property, there exists a perfect matching M of G_1 such that P_1 ∪ M is a Hamiltonian cycle of K_G_1. Let M' be the perfect matching of G_2 such that x'y' ∈ M' if and only if xy ∈ M. In other words, M' is the copy of M in G_2. We observe that P_2 ∪ M' consists of the union of cycles of even length, say C_1,… , C_t. Note that cycles of length 2 shall be allowed in the sequel as they arise when P_2 ∩ M' ≠∅. For each i ∈{1,…,t}, we choose an edge e_i'=x_i'y_i' ∈ M' ∩ C_i and we denote the corresponding edge in M by e_i=x_iy_i. Consequently, the set
N=(M ∖{ e_1,…, e_t}) ∪ (M' ∖{e'_1,…, e'_t}) ∪{ x_ix_i',y_iy_i' | i∈{1,…,t}}
is a perfect matching of 𝒫(G) such that P ∪ N is a Hamiltonian cycle of K_𝒫(G). We note that the vertex x_i' in G_2 corresponds to the vertex x_i in G_1, see Figure <ref>.
Case 2. |X|=2r>0.
In this case we consider an analogous argument to the one used by Fink to prove Theorem <ref>. Since |X| ≠ 0, P_1 is a matching of K_G_1 which is not perfect, as there are 2r unmatched vertices. Let L be an arbitrary set of r edges of K_G_1 such that P_1 ∪ L is a pairing of G_1. Since G_1 has the PH-property, there exists a perfect matching M, of G_1, such that P_1 ∪ L ∪ M is a Hamiltonian cycle of K_G_1. Next we define the following set
R = {x̅y̅ ∈ E(K_G_2) | ∃ x,y ∈ V(G_1) with {xx̅, yy̅}⊆ X and ∃ an (x,y)-path contained in P_1 ∪ M },
such that P_2 ∪ R is a pairing of G_2. Note that xx̅ and yy̅ are edges of K_𝒫(G) since |X| ≠ 0, and their endpoints need not be corresponding vertices of G_1 and G_2, as they were in the former case.
Since G_2 has the PH-property, there exists a perfect matching M^' of G_2 such that P_2 ∪ R ∪ M^' is a Hamiltonian cycle of K_G_2. It follows that P_1 ∪ P_2 ∪ X ∪ M ∪ M^' is a Hamiltonian cycle of K_𝒫(G) in which M ∪ M^' is a perfect matching of 𝒫(G), see Figure <ref>.
This proves that 𝒫(G) has the PH-property and thus, by iterating the prism operator, the result follows.
§ CONVERGENCE OF GENERAL GRAPH PRISMS TO THE PH-PROPERTY
In this section we show that given any connected graph G, there exist a sufficiently large integer k such that 𝒫^k(G) has the PH-property. In other words, after iterating the prism operator a sufficient number of times, the resulting graph will have the PH-property. We remark that if a graph contains a spanning subgraph admitting the PH-property, then the graph itself admits the PH-property. Hence, by Theorem <ref>, the next corollary follows.
Let G be a traceable graph. For k ≥ 5, the graph 𝒫^k(G) has the PH-property.
Recall that a traceable graph is a graph admitting a Hamiltonian path. Next, we show that starting from an arbitrarily connected graph G, we can always obtain a traceable graph by iterating the prism operator a suitable number of times. To this purpose, we need the following definition and lemma.
Let G be a connected graph. The minimum leaf number of G, denoted by ml(G), is the minimum number of leaves over all spanning trees of G.
Clearly, for any connected graph G, ml(G)≥ 2, and ml(G)=2 if and only if G is traceable.
Let G be a connected graph with ml(G) >2. Then, ml(G) > ml(𝒫(G)).
Suppose that ml(G) =t>2 and let G_1 and G_2 be the two copies of G in 𝒫(G). Let R_1,R_2 be two copies of a spanning tree of G with t leaves in G_1 and G_2, respectively. Let S={e_0,e_1,…,e_t-1} be the set consisting of the t edges which connect a leaf of R_1 to the corresponding leaf of R_2. Consequently, we have that T_0=(R_1 ∪ R_2) + e_0 is a spanning tree of 𝒫(G) with 2t-2 leaves. Moreover, T_0+e_1 has exactly one cycle, say C_1. Since ml(G) >2, C_1 is a proper subgraph of T_0 +e_1 and there exists a vertex v of C_1 such that deg_T_0+e_1(v) >2. We note that the removal of an edge of C_1, say f_1, which is incident to v gives rise to a spanning tree T_1=T_0+e_1-f_1 of 𝒫(G) with at most 2t-3 leaves. Then, for every j∈{2,…, t-1}, starting from j=2 and continuing consecutively up to t-1, we choose an edge f_j from E(T_j-1+e_j) lying on the unique cycle in T_j-1+e_j and incident to a vertex of degree at least 3 in T_j-1+e_j. We then let T_j to be equal to T_j-1+e_j-f_j, which by a similar argument to the above is a spanning tree of 𝒫(G) with at most 2t-2-j leaves. Therefore, T_t-1 has at most t-1 leaves and ml(𝒫(G)) ≤ t-1 < ml(G).
From the above statements, it is easy to obtain the following result.
Let G be a connected graph. Then, 𝒫^k(G) is traceable for all k ≥ml(G)-2.
If we start from G and apply the prism operator ml(G)-2 times, by Lemma <ref>, the graph 𝒫^ml(G)-2(G) has ml(𝒫^ml(G)-2(G))=2.
Consequently, it admits a Hamiltonian path.
Combining Theorem <ref> and Proposition <ref> we obtain the following.
Let G be a connected graph with m=ml(G), then 𝒫^m+3(G) has the PH-property.
If G is traceable, then m=2, and so, from Theorem <ref> we have that 𝒫^5(G) has the PH-property. On the other hand, if G is not traceable, then m>2. By Theorem <ref>, the graph 𝒫^m-2(G) is traceable. Hence, by Theorem <ref>, 𝒫^m-2(𝒫^5(G))=𝒫^m+3(G) admits the PH-property.
§ FINAL REMARKS
Several open problems were posed in <cit.>. In particular, proving that the graph P_q 𝒬_d has the PH-property for d=3,4 and an arbitrary q is still open. It is dutiful to note that we are aware that in case of a positive answer, Theorem <ref> should be refined accordingly.
A much more ambitious problem is to wonder whether it is enough for two graphs G and H to have the PH-property, for G H to have the PH-property as well.
This latter question seems very difficult to prove. Here, we have shown, in Theorem <ref>, that it holds when H is the hypercube, which is an iteration of the prism operator. In Theorem <ref>, we see that even if G does not have the PH-property, but is traceable, a large enough number of iterations of the prism operator make it converge to a graph with the PH-property. As a matter of fact, we can define the parameter 𝔭(G) as the smallest positive integer 𝔭=𝔭(G) such that 𝒫^𝔭(G) admits the PH-property. It trivially follows that 𝔭(G)=0 if and only if G is PH. Henceforth, the parameter 𝔭(G) can be considered as a measure of how far a graph G is from having the PH-property, with respect to the prism operator. Determining the behaviour of 𝔭(G) for some special classes of graphs could be of interest in the study of the PH-property.
We could also wonder if there are other graphs that speed up the convergence to the PH-property under the Cartesian product, or on the other hand if there are other products under which the convergence to the PH-property is faster. It seems so if we consider the strong product of graphs.
The strong product G ⊠ H is a graph whose vertex set is the Cartesian product V(G) × V(H) of V(G) and V(H), and two vertices (u_i, v_j), (u_k, v_ℓ) are adjacent if and only if they are adjacent in G H or if u_i,u_k∈ E(G) and v_j,v_ℓ∈ E(H).
It is trivial that G H is a subgraph of G ⊠ H; hence, if G H has the PH-property, then G ⊠ H will inherit the same property as well.
A result from <cit.> on accordion graphs easily implies that in the case of Hamiltonian graphs, only one occurrence of the strong product with K_2 is enough to obtain a graph with the PH-property.
Let G be a Hamiltonian graph, then G ⊠ K_2 has the PH-property.
This suggests that the strong product may have a faster convergence to the PH-property than the Cartesian product also for general graphs.
999
AGZ-Rook M. Abreu, J.B. Gauci and J.P. Zerafa, Saved by the rook: a case of matchings and Hamiltonian cycles, Contrib. Discrete Math. (2023), accepted.
AAAHST A. Alahmadi, R.E.L. Aldred, A. Alkenani, R. Hijazi, P. Solé and C. Thomassen, Extending a perfect matching to a Hamiltonian cycle, Discrete Math. Theor. Comput. Sci., 17(1) (2015), 241–254.
PrismGraphs R.E.L. Aldred and M.D. Plummer, Matching extension in prism graphs, Discrete Appl. Math., 221 (2017), 25–32.
Fink J. Fink, Perfect matchings extend to Hamilton cycles in hypercubes, J. Combin. Theory Ser. B, 97 (2007), 1074–1076.
accordions J.B. Gauci and J.P. Zerafa, Accordion graphs: Hamiltonicity, matchings and isomorphism with quartic circulants, Discrete Appl. Math., 321 (2022), 126–137.
GauZer J.B. Gauci and J.P. Zerafa, Perfect Matchings and Hamiltonicity in the Cartesian Product of Cycles, Ann. Comb., 25 (2021), 789–796, https://doi.org/10.1007/s00026-021-00548-1.
Hag R. Häggkvist, On F-Hamiltonian graphs, in: J.A. Bondy, U.S.R. Murty (eds.), Graph Theory and Related Topics, Academic Press, New York, 1979, 219–231.
Kre G. Kreweras, Matchings and Hamiltonian cycles on hypercubes, Bull. Inst. Combin. Appl., 16 (1996), 87–91.
LasVergnas M. Las Vergnas, Problèmes de couplages et problèmes hamiltoniens en théorie des graphes, Thesis, University of Paris 6, Paris, 1972.
betwixt F. Romaniello and J.P. Zerafa, Betwixt and between 2-Factor Hamiltonian and Perfect-Matching-Hamiltonian Graphs, Electron. J. Combin., 30(2) (2023), #P2.5.
|
http://arxiv.org/abs/2307.05747v2 | 20230708141455 | Integrating Curricula with Replays: Its Effects on Continual Learning | [
"Ren Jie Tee",
"Mengmi Zhang"
] | cs.LG | [
"cs.LG",
"cs.AI",
"cs.CV"
] |
Integrating Curricula with Replays: Its Effects on Continual Learning
Ren Jie Tee and Mengmi Zhang
=========================================================================
Humans engage in learning and reviewing processes with curricula when acquiring new skills or knowledge.
This human learning behavior has inspired the integration of curricula with replay methods in continual learning agents. The goal is to emulate the human learning process, thereby improving knowledge retention and facilitating learning transfer.
Existing replay methods in continual learning agents involve the random selection and ordering of data from previous tasks, which has been shown to be effective. However, limited research has explored how different curricula can be integrated with replay methods to enhance continual learning.
Our study takes initial steps in examining the impact of integrating curricula with replay methods on continual learning in three specific aspects: the frequency with which replayed exemplars are interleaved with training data, the sequence in which exemplars are replayed, and the strategy for selecting exemplars into the replay buffer. These aspects of curriculum design align with cognitive psychology principles and leverage the benefits of interleaved practice during replays, easy-to-hard rehearsal, and an exemplar selection strategy that draws exemplars from a uniform distribution of difficulties.
Based on our results, these three curricula
effectively mitigated catastrophic forgetting and enhanced positive knowledge transfer, demonstrating the potential of curricula in advancing continual learning methodologies. Our code and data are available: <https://github.com/ZhangLab-DeepNeuroCogLab/Integrating-Curricula-with-Replays>
§ INTRODUCTION
Continual learning enables consecutive task acquisition without forgetting previously trained tasks <cit.>. This adaptability is vital for autonomous systems in dynamic environments, such as updating a grocery classification model with new products without retraining it on previous products. However, a significant challenge in continual learning is catastrophic forgetting, where knowledge from recent tasks interferes with earlier ones <cit.>, leading to performance degradation on earlier tasks after training on a task sequence.
To resolve this problem,
there are three primary types of continual learning methods commonly employed in the field:
regularization-based methods introduce regularization terms to mitigate catastrophic forgetting by preserving important parameters during training <cit.>; rehearsal-based methods store and replay a subset of previous data during training to maintain knowledge from previous tasks <cit.> and parameter isolation methods isolate specific parameters for each task to prevent interference between tasks <cit.>.
Rehearsal-based methods have proven highly effective in continual learning. However, existing approaches typically involve randomly selecting and rehearsing data from previous tasks. Limited research explores the incorporation of meaningful curricula into replay methods.
In parallel, in the curriculum learning literature, various approaches have focused on weakly supervised <cit.>, unsupervised <cit.>, and reinforcement learning tasks <cit.>. These studies demonstrate that curricula improve generalization abilities, task performances,
and convergence speed <cit.> during training. However, they primarily address intra-class difficulty and example scheduling within a single task, neglecting the impact of class presentation sequences across multiple tasks. Recent research has explored curricula in continual learning scenarios without data replays <cit.>. In complement to this work, our study investigates the role of curricula specifically during replay in continual learning, while keeping the curricula consistent for the feed-forward training process.
Exploring optimal curricula offers countless possibilities, and in our study, we take initial steps to investigate a limited set of potential curricula. We draw inspiration from two sources to guide the design of these curricula. Firstly, neuroscience research has revealed that neural activity patterns associated with past experiences are replayed in specific orders during rest or sleep, which is believed to contribute to memory consolidation and spatial navigation <cit.>. Secondly, pedagogy studies indicate that repetitive practice and revisiting previous knowledge with increasing difficulty enhance long-term memory integration in students <cit.>.
Specifically, we propose three types of curricula for replays and examine their impact on catastrophic forgetting and positive knowledge transfer: (1) the interleaved frequency of replayed exemplars with training data, (2) the replay sequence of exemplars, and (3) the strategy for selecting exemplars into the replay buffer. The experimental findings align with cognitive psychology principles, highlighting the advantages of frequently interleaving between training data and replayed exemplars, incorporating easy-to-hard rehearsals, and selecting exemplars from a uniform distribution of difficulties for replay. These observations present a promising avenue for advancing continual learning methods. It also provides insights into the underlying mechanisms of replay strategies in mitigating forgetting and facilitating knowledge transfer across tasks.
§ RELATED WORKS
§.§ Replay Methods in Continual Learning
Extensive research has focused on utilizing replay methods to address the issue of catastrophic forgetting. Conventional replay methods, such as iCaRL <cit.> and ER <cit.>, involve explicit training on previously saved data, while several variants, like DGR <cit.> and Pseudo-Recursal <cit.>, replay on artificially synthesized samples by generative models, resembling data from previous tasks.
Although these replay methods have made significant contributions in reducing catastrophic forgetting, they paid little attention to the incorporation of meaningful curricula into replay methods. Most methods randomly interleave the replay samples with the training data, without exploring the optimal mixing strategies <cit.>. In our work, we systematically studied the effect of interleaving curricula, which involves mixing training data and replay samples within a pre-defined interleave interval.
§.§ Curriculum Learning
Curriculum learning methods can be broadly categorized into two groups. The first group involves manual curriculum design by humans before training <cit.>, but these methods typically rely on human expertise and struggle to generalize to new domains. The second group consists of models that can autonomously design curricula without human intervention <cit.>. However, the application of these methods to enhance model performance has received limited attention in the continual learning setting.
Here, we highlight two factors to consider when applying curricula on the replay methods in continual learning. Firstly, while curriculum learning has demonstrated efficacy in enhancing generalization and training speed within a single task, the objective of curriculum learning in the context of continual learning is to retain knowledge from previous tasks while acquiring new knowledge from the current task. Secondly, unlike within-task curriculum learning, models in continual learning only have access to data from the current task, making it challenging to create a comprehensive between-task curriculum that encompasses the entire dataset.
Here, we take initial steps in this direction by exploring automated methods to determine the sequence of replay samples and by introducing a sample selection strategy that finds the best replay samples for building a curriculum.
§ EXPERIMENTS
We investigated the effect of three types of replay curricula in the class incremental learning (CIL) setting. We first introduce CIL, and then elaborate on the three replay curricula individually.
Problem Setting. The objective of CIL is to teach a unified classification model Θ to recognize sets of object classes incrementally over time. Specifically, an image dataset D, consisting of N object classes, is split into subsets {D_1,...,D_t,...,D_T} of images and presented over a sequence of T tasks. In each task t, the model only has access to the training data in D_t, consisting of samples from distinct classes C_t, and (x_i,t,y_i,t) is the i-th (image, label) pair in D_t. The model Θ can run multiple passes over D_t in task t. The model stops training on D_t after its performance on the validation set saturates, considering the five most recent epochs.
We implemented the naive replay method where some raw images and their corresponding labels are selected from previous tasks and are stored in the replay buffer R_t. These data in R_t are interleaved with D_t for rehearsals. There are three types of replay curricula involved in this study: (1) the interleave frequency; (2) the rehearsal sequence of R_t in CIL; and (3) the image selection for R_t.
R_t is kept at a constant size of 1200 over all the tasks. See Appendix for more training details.
As an upper bound, we also include the offline method where the model Θ is trained on the entire dataset D = {D_1,...,D_T} over multiple epochs without any continual learning.
Datasets. We conducted experiments to investigate the use of these three types of curricula in replay methods on the two image datasets ciFAIR-10 and ciFAIR-100 <cit.>.
ciFAIR-10 dataset contains 10 object classes. The protocol asks the model Θ to incrementally learn 2 object classes in each task. There are a total of 5 tasks. ciFAIR-100 dataset contains 100 object classes. The CIL protocol asks the model Θ to incrementally learn 5 object classes in each task. There are a total of 20 tasks.
Both datasets have a total of 60,000 images, with 50,000 images used for training and 10,000 images used for testing.
The conclusions drawn from the experiments on both datasets are consistent. Without loss of generality, we focus on all the experiments and result analysis in ciFAIR-100 in the main text.
See Appendix for more implementation details and results on ciFAIR-10.
Evaluation Metrics. To assess the continual learning performance of the model Θ, we follow <cit.> and introduce two standard evaluation metrics. We define Forgetfulness (F) as the percentage decrease in classification accuracy on the test instances from C_1
between Θ_t (the model after being trained on D_t) and Θ_1. An ideal Θ_t would maintain the same classification accuracy on C_1 over tasks, i.e. ∀ t, F_t=0. The higher F is, the more Θ suffers from catastrophic forgetting. To assess the overall classification performance of Θ over tasks, we also report the continual average classification accuracy (Avg. Accu.). Avg. Accu. is computed as the average accuracy on all the test instances from C_i, where i∈{1, 2, ..., t}. For simplicity, we report the averaged F and Avg. Accu. over all the tasks.
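As a small illustration (ours, not the authors' code), both metrics can be computed from a table acc[t][i] holding the accuracy of the model after task t on the test instances of C_i; here "percentage decrease" is read as a relative drop, which is one possible reading of the definition above.

def forgetfulness(acc, t):
    # relative drop (in percent) on the first task's classes between model 1 and model t
    return 100.0 * (acc[1][1] - acc[t][1]) / acc[1][1]

def avg_accuracy(acc, t):
    # mean accuracy over all class sets seen up to and including task t
    return sum(acc[t][i] for i in range(1, t + 1)) / t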
Experimental Controls.
Within each experiment, only one variable of interest changes while the rest of the experiment conditions are fixed as control variables. As the previous study has shown that the sequence of class presentations affects the continual learning performance <cit.>, we use the same class presentation sequence in all three experiments. The same MobileNetV3 (small) network architecture is used as the backbone for the model Θ for all experiments. In every experiment, the total number of training samples and the total number of replay samples exposed to Θ remain the same across all experiment variables. Each experiment is conducted with 4 runs initialized with 4 random seeds, where the seeds are used to vary the controlled variables. The average performance across all 4 runs is reported.
§.§ Interleave Divisions during Rehearsals
The number of interleaving divisions refers to the number of splits of D_t and R_t. It indicates how often the model Θ rehearses on R_t, while learning on a subset of D_t. For example, for interleaving division number 400, D_t is split into 400 groups where each group contains an equal number of (x_i,t,y_i,t) (image, label) pairs, and these (image, label) pairs are randomly selected from D_t without replacement. Correspondingly, R_t is also split into 400 groups with the same splitting criteria as D_t. At each training epoch, the model Θ_t at task t is repeatedly trained with one group of D_t followed by one group of R_t, until the entire D_t and R_t are exhaustively seen by Θ_t. We vary the interleave division number over the range 1, 8, 60, 120, and 300.
The training data is interleaved with replay data and then presented to the model in sequence. Different interleave division numbers result in different data presentation sequences; hence, different curricula.
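The following minimal Python sketch (ours; list-like datasets of (image, label) pairs are assumed, and names are illustrative) makes the interleaving schedule explicit: both D_t and R_t are shuffled, split into the same number of groups, and the model is then fed one group of current-task data followed by one group of replayed data.

import random

def interleave(D_t, R_t, divisions):
    D_t, R_t = list(D_t), list(R_t)
    random.shuffle(D_t)                      # groups are drawn without replacement
    random.shuffle(R_t)
    def split(data, k):
        size = len(data) // k                # any remainder is dropped for simplicity
        return [data[i * size:(i + 1) * size] for i in range(k)]
    schedule = []
    for d_group, r_group in zip(split(D_t, divisions), split(R_t, divisions)):
        schedule.extend(d_group)             # one group of current-task data ...
        schedule.extend(r_group)             # ... followed by one group of replays
    return schedule

# With divisions=1 the model sees all of D_t before any replay; larger values
# alternate between new data and rehearsal more frequently.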
§.§ Rehearsal Sequence of Replay Samples
We use the interleave divisions 1 and 600 for all the experiments in this subsection and vary the rehearsal sequence of data samples in R_t by taking into account the two factors: the sample difficulty levels and the increasing or decreasing directions of sample difficulty levels.
To measure whether a sample is easy or hard to learn, we introduce two difficulty measures: (1) the confidence score difficulty metrics and (2) the distance vector difficulty metrics. The confidence score difficulty metrics were used to assess whether a teacher network with full knowledge of the entire dataset D predicted high or low confidence of the given sample belonging to its ground truth class label. Specifically, each image within R_t was input to a teacher network. The teacher network is based on a MobileNetV3 (small) architecture, pre-trained on the entire dataset D. After this, the confidence score for the ground truth class of each sample was extracted from the teacher network’s output. R_t was then sorted according to its individual sample’s confidence score, where a higher confidence score means that the sample is easier to learn for Θ.
However, in the CIL setting, having a teacher network with full access to the whole dataset is impractical, as the data only becomes available incrementally over tasks. Hence, we also employ the distance vector difficulty metrics, widely used in the literature <cit.>. Intuitively, if a sample is close to the other samples in the memory buffer, it is easier for Θ to learn and to generalize to other samples as well.
The 2nd last layer from a ResNet-50 model <cit.>, pretrained on the ImageNet dataset, was used to extract the feature vector of each sample in R_t. A Euclidean distance matrix was created, where the pairwise Euclidean distance for all the samples based on their feature vectors was computed. We then compute the sum of each row of the matrix and denote this column vector as a distance vector. Each element in this distance vector represents how much a particular sample differs from all other samples in the feature space. A smaller value in the distance vector means that the particular replay sample is easier to learn for Θ.
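A sketch of the two difficulty measures is given below (ours, not the authors' implementation; PyTorch and torchvision ≥ 0.13 are assumed). The teacher is any classifier trained on the full dataset, and the distance-vector score uses an ImageNet-pretrained ResNet-50 with its classification head removed.

import torch
import torch.nn as nn
from torchvision import models

@torch.no_grad()
def confidence_difficulty(teacher, images, labels):
    # labels: LongTensor of ground-truth class indices
    probs = teacher(images).softmax(dim=1)            # (N, num_classes)
    conf = probs[torch.arange(len(labels)), labels]   # confidence of the true class
    return -conf                                      # higher confidence = easier sample

@torch.no_grad()
def distance_vector_difficulty(images):
    backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    backbone.fc = nn.Identity()                       # keep the 2048-d penultimate features
    backbone.eval()
    feats = backbone(images)                          # (N, 2048)
    return torch.cdist(feats, feats).sum(dim=1)       # distance vector: smaller = easier

Sorting the replay buffer by either score in ascending order gives the easy-to-hard rehearsal sequence used in the experiments; descending order gives its reverse.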
We introduce a series of rehearsal sequences in the orders of either easy-to-hard samples or hard-to-easy samples, where the difficulty levels of each sample are determined by either the confidence score difficulty metrics or the distance vector difficulty metrics.
As the previous study has shown that the class orders are also essential for continual learning <cit.>, here we also explore the effect of the class orders during replays. When we design the rehearsal sequence based on class difficulties in R_t, we adapt the two sample-level difficulty measures above to compute class-level difficulty measures by taking the average over all samples of the same class in R_t. We then sort all the samples in R_t by their class difficulty metrics, regardless of their individual sample difficulty scores.
Samples in R_t sorted by their class difficulties
were then presented to the model Θ in either the easy-to-hard or hard-to-easy
orders.
§.§ Selection of Samples for Replay Buffer
In common practice, selecting samples for R_t+1 from task t is often conducted in a random manner <cit.>. In contrast to the previous works, we vary the sample selection criteria for R_t+1 as follows: selecting only the easiest samples from task t for R_t+1, selecting the hardest samples from task t for R_t+1, and selecting samples that are uniformly distributed across difficulty levels from task t for R_t+1. The difficulty levels are judged based on the confidence scores and the distance vectors defined in the previous subsection. We use interleave division numbers 1 and 600 for all the experiments in this subsection.
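For concreteness, the three selection strategies can be sketched as follows (ours, not the authors' code; difficulty holds one score per candidate sample from task t, e.g. from the metrics above, and buffer_size is assumed not to exceed the number of candidates).

import torch

def select_for_replay(difficulty, buffer_size, strategy="uniform"):
    order = torch.argsort(difficulty)                # indices sorted from easy to hard
    if strategy == "easiest":
        idx = order[:buffer_size]
    elif strategy == "hardest":
        idx = order[-buffer_size:]
    elif strategy == "uniform":                      # spread evenly across difficulty levels
        positions = torch.linspace(0, len(order) - 1, buffer_size).round().long()
        idx = order[positions]
    else:
        raise ValueError(strategy)
    return idx                                       # indices of samples to store in R_{t+1}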
§ RESULTS
§.§ Frequent Replays Enhance Performances
We report F and Avg. Accu. as a function of interleave divisions in Table <ref>.
Notably, we observed that interleave divisions are important factors influencing the continual learning performance of the replay method with the larger interleave divisions leading to better performances, as indicated by the decreasing F and increasing Avg. Accu. over all the tasks. It is possible that the model parameters at large division numbers are updated more frequently for both the current task and all previous tasks, resulting in minimal forgetting. However, we also note that the continual learning performance saturates at interleave division number 120. This implies that increasing interleave divisions beyond optimal values brings no extra benefits in continual learning.
§.§ Easy-To-Hard Rehearsal Sequences are Beneficial
We studied the models trained with different rehearsal sequences sorted in easy-to-hard or hard-to-easy curricula based on sample-level or class-level difficulty measures computed from either the confidence scores or distance vectors. We reported the Avg. Accu. results in Figure <ref> and F scores in Appendix and made four key observations. First, aligning with the observations in Table <ref> and the discussion from the previous subsection, large interleave divisions benefit continual learning models with higher average accuracy and less forgetting. Second, rehearsal sequences sorted by instance-level difficulties lead to much better continual learning performances (compare red bars versus blue bars). Third, the confidence score is a better evaluation metric measuring instance-level difficulties, as shown by the bars with and without texture patterns. Finally, the models trained with the easy-to-hard rehearsal sequences outperform the ones with reversed rehearsal sequences (compare light versus dark grey bars). It is possible that easy-to-hard rehearsal sequences make the models converge faster on the previous tasks due to more stable gradient updates; hence, the sequences lead to minimal forgetting and higher classification accuracy. We also compared the continual learning performance for both the offline method and the continual learning method with various curricula and observed that there still exists a large performance gap between these two.
§.§ Replays with Only Hard Data Hurt Performances
Here, we explored the effect of different sample selection strategies for replay samples in terms of the sample difficulty levels based on distance vectors or confidence scores. From Figure <ref>,
our observations indicate that exclusively choosing the most challenging replay samples leads to inferior performance compared to selecting the easiest samples or incorporating samples with a balanced distribution of difficulty levels. Selecting samples with a uniform distribution of difficulty levels yields the best continual learning performance. This outcome may be attributed to the fact that difficult replay samples result in less flat loss landscapes, which in turn make the training process more challenging and slower to converge <cit.>. A curriculum for training the models to rehearse from the easiest to the hardest samples is the best, as it balances the greater precision in data fitting due to the hardest samples and the fast convergence speed during training due to the easier samples. Similar to the previous subsection, we also noted that the confidence score is a better measure of sample difficulty levels than the distance vectors.
§ CONCLUSION
Our study
examines the role of curricula during replays in the class-incremental learning setting in continual learning. We designed and conducted a series of controlled experiments to study the three key questions on replays: how often is the replay, what data should be replayed, and in what sequence to replay.
Across the two common image datasets, our experimental results shed light on the underlying principles of replay methods in continual learning and reveal the good curricula design choices for replay methods.
These curricula designs not only facilitate positive knowledge transfers (which has been explored in existing curriculum learning literature), but also mitigate catastrophic forgetting (a significant problem we need to solve in continual learning). Specifically, we
found that (1) replays should happen frequently; (2) only rehearsing on the most difficult exemplars hurts continual learning performances; and (3) rehearsals on samples with increasing difficulty eliminate forgetting more than its reversed difficulty orders.
There are numerous other possible choices of curricula designs for replay methods, such as a unified difficulty metric considering both confidence scores and distance vectors or the use of a student feedback loop to update the difficulty scores. In the future, we will look into the role of curricula under
stringent continual learning conditions, such as learning with limited training time or noisy data. We will also conduct experiments on other large-scale datasets and apply our replay curriculum to existing replay-based continual learning algorithms.
§ ACKNOWLEDGEMENTS
This research is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG2-RP-2021-025), its NRFF award NRF-NRFF15-2023-0001, Mengmi Zhang's Startup Grant from Agency for Science, Technology, and Research (A*STAR), and Early Career Investigatorship from Center for Frontier AI Research (CFAR), A*STAR. The authors declare that they have no competing interests.
§ APPENDIX
§.§ Experimental Details
For experiments on both ciFAIR-10 and ciFAIR-100, PyTorch’s default implementation of cross entropy loss was used for object classification tasks. The SGD algorithm was used as the optimizer. The learning rate was set at a constant of 0.001. Momentum was fixed at 0.9. A batch size of 32 is used.
For ciFAIR-10, we employ a 2-layer 2D-convolutional network with 6 and 16 channels in the successive layers, followed by 3 fully connected layers with 400, 120 and 84 hidden units respectively. ReLU was used as the activation function.
We follow the standard training and testing data splits from the original ciFAIR-10.
In every task, the model is trained for 250 epochs. Each experiment is conducted with 20 runs initialized with 20 random seeds, where the seeds are used to vary the controlled variables. The average performance across all 20 runs is reported.
For ciFAIR-100, PyTorch's implementation of MobileNetV3 (small) was used, including the default layers and activation function. We used a custom training, validation, and test data splits with a ratio of 9:1:2, and a stopping criteria for training depending on the validation loss. The ciFAIR-100 images were upscaled to 72x72 using PyTorch's bicubic interpolation function before training.
§.§ More Results and Analysis
We reported the continual learning performance on ciFAIR-10 dataset of the models trained with the three types of curricula as elaborated in Experiments Section.
See Table <ref> for interleave divisions, Figure <ref> for rehearsal sequences, and Figure <ref> for sample selections.
All the tables and figures on ciFAIR-10 dataset follow the same design conventions as the corresponding tables and figures on ciFAIR-100 dataset in the main text. The conclusions from the results of ciFAIR-10 dataset are consistent with the ones on the ciFAIR-100 dataset.
|
http://arxiv.org/abs/2307.07491v2 | 20230714172335 | Two string theory flavours of generalised Eisenstein series | [
"Daniele Dorigoni",
"Rudolfs Treilis"
] | hep-th | [
"hep-th",
"math.NT"
] |
Two string theory flavours
of generalised Eisenstein series
Daniele Dorigoni and Rudolfs Treilis
Centre for Particle Theory & Department of Mathematical Sciences
Durham University, Lower Mountjoy, Stockton Road, Durham DH1 3LE, UK
Generalised Eisenstein series are non-holomorphic modular invariant functions of a complex variable, τ, subject to a particular inhomogeneous Laplace eigenvalue equation on the hyperbolic upper-half τ-plane. Two infinite classes of such functions arise quite naturally within different string theory contexts.
A first class can be found by studying the coefficients of the effective action for the low-energy expansion of type IIB superstring theory, and relatedly in the analysis of certain integrated four-point functions of stress tensor multiplet operators in 𝒩 = 4 supersymmetric Yang-Mills theory.
A second class of such objects is known to contain all two-loop modular graph functions, which are fundamental building blocks in the low-energy expansion of closed-string scattering amplitudes at genus one.
In this work, we present a Poincaré series approach that unifies both classes of generalised Eisenstein series and manifests certain algebraic and differential relations amongst them.
We then combine this technique with spectral methods for automorphic forms to find general and non-perturbative expansions at the cusp τ→ i ∞.
Finally, we find intriguing connections between the asymptotic expansion of these modular functions as τ→ 0 and the non-trivial zeros of the Riemann zeta function.
empty
§ INTRODUCTION
There is a multitude of places where modular invariance plays a crucial rôle in string theory and quantum field theory. For example, the modular group, SL(2,ℤ), appears as U-duality group <cit.> of ten-dimensional type IIB string theory and, via the AdS/CFT correspondence, as Montonen-Olive electro-magnetic duality group <cit.> of its holographic dual 𝒩=4 supersymmetric Yang-Mills theory (SYM).
Similarly, in closed-string perturbation theory the modular group arises as mapping class group for genus-one world-sheet, i.e. for strings whose world-sheet is a two-dimensional torus, strongly constraining the low-energy expansion of string scattering amplitudes <cit.>.
A consequence of modularity particularly relevant for the present work is that physical observables must be invariant, or more generally covariant, under the modular group, i.e. physical observables are automorphic forms with respect to the modular group.
Although the world of modular forms is extremely vast and diverse, we focus our attention to an interesting class of automorphic forms relevant for string theory and known as non-holomorphic Eisenstein series and generalised Eisenstein series. For a recent and more general introduction on the broader subject we refer to the beautifully written set of lectures <cit.>.
As we will discuss in more detail shortly, generalised Eisenstein series are non-holomorphic modular invariant functions of a complex variable, τ, which parametrises the usual hyperbolic upper-half complex plane. These functions satisfy an inhomogeneous Laplace eigenvalue equation on the τ-plane with sources bilinear in non-holomorphic Eisenstein series. The physical rôle of the parameter τ, as well as the spectrum of eigenvalues and the details of the source terms do depend on the particular string theory calculation under consideration.
A first flavour of generalised Eisenstein series arises in the context of higher-derivative corrections to the low-energy effective action of type IIB superstring theory and in related calculations in the holographic dual 𝒩 = 4 SYM gauge theory.
Thanks to modular invariance, combined with supersymmetric arguments <cit.>, the coefficients of certain higher-derivative operators in the low-energy effective action of type IIB superstring theory are computed exactly in terms of Eisenstein and generalised Eisenstein series where their argument, τ, is given by the axio-dilaton vacuum expectation value.
On the dual side we consider 𝒩=4 SYM theory with gauge group SU(N),
for which it was argued in <cit.> that certain integrated correlation functions of four stress-tensor superconformal primaries, computable via supersymmetric localisation, can be related to four-graviton scattering amplitudes in IIB superstring theory.
A lot of progress has recently been made in this direction <cit.> and, using the holographic dictionary, these calculations were shown to reproduce and extend known results for the low-energy type IIB superstring effective action in ten-dimensional flat-space, as well as having striking implications for analogous considerations in AdS_5× S^5.
In particular <cit.> considered the expansion of one such integrated correlator in the large-N limit (with N the rank of the gauge group) with fixed complexified Yang-Mills coupling, τ.
Order by order in 1/N, Montonen-Olive duality constrains the coefficients of this expansion to be modular invariant functions of the complexified coupling τ. While at half-integer orders in 1/N only Eisenstein series appear, at integer orders in 1/N we encounter an infinite class of generalised Eisenstein series akin to the higher-derivative coefficients just mentioned.
From a seemingly very different albeit closely related point of view, a second flavour of generalised Eisenstein series can be found while studying the low-energy expansion of closed-string perturbation theory at genus one.
The low-energy expansion for string amplitudes with toroidal world-sheet leads to the introduction of an infinite class of non-holomorphic and modular covariant building blocks usually named modular graph functions <cit.> and modular graph forms (MGFs) <cit.>, where to a Feynman world-sheet diagram we associate a modular invariant or covariant function whose argument is the torus complex structure. Similarly to standard Feynman integrals, the number of loop-momenta dictates the complexity of the objects under consideration.
While one-loop MGFs evaluate to non-holomorphic Eisenstein series, two-loop MGFs are contained in a second infinite class of generalised Eisenstein series <cit.>.
Given the physical and mathematical importance of generalised Eisenstein series, a crucial problem is understanding their analytic, algebraic and differential properties.
For this reason, a first approach is to try and represent these modular invariant functions as Poincaré series, an extremely convenient way of
rewriting a modular invariant function as a sum over images under the modular group of a simpler function, usually called seed function.
In general, passing from a modular invariant function to its Poincaré seed reduces the functional complexity. This viewpoint was exploited in <cit.> to obtain Poincaré-series representations for various two-loop MGFs, then extended to all two-loop MGFs in <cit.>.
A streamlined approach was presented in <cit.> where a unified description was presented: Poincaré seeds for this second infinite family of generalised Eisenstein series, and hence for all two-loop MGFs, can be constructed from iterated integrals over single holomorphic Eisenstein series and their complex conjugates thus considerably simplifying their studies. However, it had already been noted in <cit.>, that this type of seed functions is ill-suited for describing the first class of generalised Eisenstein series relevant for higher-derivative corrections and integrated correlators.
One of our main results is the derivation of a new Poincaré series representation unifying both classes of generalised Eisenstein series. We introduce a new space of modular invariant functions embedding quite naturally both flavours of generalised Eisenstein series and manifesting many of their algebraic and differential properties. In particular, we find that this space of functions is closed under the action of the hyperbolic Laplacian and exploit this fact to clarify the origins of the spectrum of eigenvalues and respective source terms for the two classes of generalised Eisenstein series that have relevance to string theory.
We then combine our Poincaré series approach with methods coming from spectral analysis for automorphic forms, see e.g. <cit.>, to derive the complete asymptotic expansion at the cusp τ→ i ∞ for both flavours of generalised Eisenstein series in the Fourier zero-mode sector.
As argued in <cit.>, resurgence analysis can be used to reconstruct the entire non-perturbative completion of the MGFs, i.e. the exponentially suppressed terms, from their perturbative expansion around the cusp. With the help of spectral analysis we clarify these resurgence results and extend them to the case of higher-derivative corrections and integrated correlators where such non-perturbative terms can be interpreted as D-instanton/anti-D-instanton events <cit.>.
Finally, from our analysis we can easily derive an asymptotic expansion for τ→ 0, where surprisingly (and quite mysteriously) we find that the non-trivial zeros of the Riemann zeta function play a key rôle. While for MGFs the limit τ→ 0 is simply a particular degeneration case of the toroidal world-sheet, for higher-derivative corrections and integrated correlators this limit corresponds to a strong coupling regime, hence extremely difficult to access by other means.
The rest of the paper is organised as follows. In section <ref> we properly define the key characters of our story, namely the generalised Eisenstein series, and review in more detail the string theory origins for these two infinite families thereof. In section <ref> we propose a new Poincaré series representation, which provides a unifying framework to derive analytic and differential properties for both classes of generalised Eisenstein series. We also present various examples coming from higher-derivative corrections as well as MGFs.
After a brief review of the Roelcke-Selberg spectral decomposition for automorphic forms, section <ref> is devoted to extracting analytic properties of the modular functions discussed. We derive their asymptotic expansion at the cusp, τ→ i∞, where a connection is made with previous resurgence results, and their expansion at the origin, τ→ 0, where we find intriguing connections with the non-trivial zeros of the Riemann zeta function. We end in section <ref> with a brief summary and discussion on future directions. Some more technical details are contained in two appendices.
§ STRING THEORY AND GENERALISED EISENSTEIN SERIES
In this section we briefly review two instances where string theory calculations give rise to generalised Eisenstein series. In particular, we firstly discuss how these modular functions are defined, how they arise in the study of the low-energy expansion of type IIB superstring theory and how this connects, via the holographic dictionary, with integrated correlators in 𝒩=4 super-Yang-Mills theory. Secondly, we present a different class of generalised Eisenstein series arising in the low-energy expansion of genus-one string-scattering amplitudes, i.e. for a string with a toroidal world-sheet. Most of the topics presented in this section are reviewed in <cit.>.
§.§ Low-energy expansion of type IIB superstring theory
When discussing the low-energy expansion of type IIB superstring theory, the group is interpreted as the non-perturbative U-duality group of ten-dimensional type IIB string theory <cit.>.
In the classical theory the vacuum expectation value of the axio-dilaton scalar field,
τ = χ + i/g_s = Re(τ) + i Im(τ) ,
with g_s the string coupling constant, parametrises the coset space
ℋ := SL(2,ℝ)/U(1) = {τ∈ℂ | Im(τ) > 0 } .
However, quantum corrections <cit.> generate an anomaly in the U(1) R-symmetry thus breaking SL(2,ℝ) to SL(2,ℤ).
This U-duality symmetry group, SL(2,ℤ), acts on the axio-dilaton in the standard way
γ =
[ a b; c d ]∈ SL(2,ℤ) ,
γ·τ := (aτ + b)/(cτ + d) .
Since the axio-dilaton includes the string coupling, g_s, U-duality is an extremely powerful and non-perturbative symmetry, relating perturbative and non-perturbative effects in g_s.
The low-energy expansion of type IIB superstring theory is therefore expected to be invariant (or covariant) under SL(2,ℤ), where τ parameterises a fundamental domain that can be chosen to be
ℱ := SL(2,ℤ)\ℋ
:= {τ∈ℋ | |τ|> 1 , -1/2 < Re(τ) ≤ 1/2}∪{τ∈ℋ | |τ| = 1 , 0 ≤ Re(τ) ≤ 1/2} .
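For intuition (this sketch is ours, not part of the original text), any τ in ℋ can be brought into ℱ by repeatedly applying the generators T: τ → τ+1 and S: τ → -1/τ, as in the following standard reduction algorithm written in Python.

def reduce_to_fundamental_domain(tau, max_iter=1000):
    # bring tau in the upper half-plane into F, up to boundary identifications
    for _ in range(max_iter):
        tau = complex(tau.real - round(tau.real), tau.imag)  # enforce |Re(tau)| <= 1/2
        if abs(tau) < 1:
            tau = -1 / tau                                   # apply S and repeat
        else:
            return tau
    return tau

# e.g. reduce_to_fundamental_domain(0.7 + 0.1j) returns a representative inside F.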
At low energy type IIB supergravity receives corrections coming from excited string states which can be neatly assembled in an effective Lagrangian.
Focusing for simplicity only on four-graviton interactions, we expect to find an effective Lagrangian containing the standard Einstein-Hilbert term, as well as an infinite tower of higher-derivative corrections schematically of the form d^2nR^4, where R^4 is a certain contraction of Riemann tensors and d is the covariant derivative.
For n≤ 3 these terms are fixed by supersymmetry to be of the form (in the string frame)
ℒ_ eff = (α')^-4g_s^-2R + (α')^-1g_s^-1/2π^3/2τR^4+α'g_s^1/2π^5/2τd^4R^4-(α')^2g_s π^34τ d^6R^4+... ,
where α' = ℓ_s^2 is the square of the string length scale.
As expected, the leading term when α'→0 is simply given by the Einstein-Hilbert term (where R is the Ricci scalar). Although we only wrote the bosonic part, this term comes with its supersymmetric completion, involving other bosonic as well as fermionic fields, reproducing the type IIB supergravity lagrangian in ten dimensions.
For the higher-derivative corrections here reviewed, we have that maximal supersymmetry uniquely fixes the Lorentz contractions of the tensor indices and forbids the presence of R^2 and R^3 interactions. The first correction is proportional to R^4 <cit.> which is a 1/2-BPS operator, i.e. it preserves only 16 of the 32 supersymmetries associated with ten-dimensional maximal supersymmetry.
Similarly, the higher-derivative term d^4R^4 is 1/4-BPS while d^6R^4 is 1/8-BPS and it is the last term to be protected by supersymmetry.
The ellipsis in (<ref>) represents various supersymmetric completions, as well as higher-order terms and terms contributing to higher-point amplitudes. A particular class of interesting higher-point BPS amplitudes involve the scattering of four gravitons with certain massless fields of type IIB supergravity carrying specific U(1) charges <cit.> and transforming covariantly under U-duality. The modular properties of these amplitudes have been analysed in <cit.>, while their connection with the holographic dual picture of integrated correlators in 𝒩=4 SYM is presented in <cit.>. We will not be discussing these corrections here.
Interestingly, the coefficients of the higher-derivative and BPS protected corrections displayed in (<ref>) can be computed exactly and are expressible in terms of special modular invariant functions.
In particular, we see that the coefficient of the R^4 <cit.> and the d^4R^4 <cit.> interactions involve non-holomorphic Eisenstein series:
E(s;τ) := ∑_{(m,n)≠(0,0)} (y/π)^s/|m+nτ|^{2s}
= (2ζ(2s)/π^s) y^s + (2ξ(2s-1)/Γ(s)) y^{1-s} + (4/Γ(s)) ∑_{k≠ 0}|k|^{s-1/2}σ_{1-2s}(k) y^{1/2} K_{s-1/2}(2π |k|y) e^{2π ikx}
with τ=x+iy∈ℋ and Re(s)>1 for now.
We denote by ξ(s):=π^-s/2Γ(s/2)ζ(s) the completed zeta function invariant under reflection ξ(s)=ξ(1-s), while σ_a(k):=∑_d|kd^a is a divisor sum and K_ν(y) is a modified Bessel function of the second kind.
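As a numerical sanity check of the expansion above (ours, not part of the original text; the mpmath library is assumed), one can compare the truncated lattice sum defining E(s;τ) with its truncated Fourier expansion; for s=2 and τ=0.3+1.1i the two truncations below should agree to roughly four digits.

import mpmath as mp

def E_lattice(s, tau, N=100):
    # truncated lattice sum: (m, n) != (0, 0), (y/pi)^s / |m + n*tau|^(2s)
    y = tau.imag
    total = mp.mpf(0)
    for m in range(-N, N + 1):
        for n in range(-N, N + 1):
            if (m, n) != (0, 0):
                total += (y / mp.pi)**s / abs(m + n * tau)**(2 * s)
    return total

def E_fourier(s, tau, K=10):
    # truncated Fourier expansion quoted above, with k and -k combined into a cosine
    x, y = tau.real, tau.imag
    xi = lambda w: mp.pi**(-mp.mpf(w) / 2) * mp.gamma(mp.mpf(w) / 2) * mp.zeta(w)
    sigma = lambda a, k: sum(mp.mpf(d)**a for d in range(1, k + 1) if k % d == 0)
    res = 2 * mp.zeta(2 * s) / mp.pi**s * y**s + 2 * xi(2 * s - 1) / mp.gamma(s) * y**(1 - s)
    for k in range(1, K + 1):
        res += (8 / mp.gamma(s)) * k**(s - mp.mpf(1) / 2) * sigma(1 - 2 * s, k) \
               * mp.sqrt(y) * mp.besselk(s - mp.mpf(1) / 2, 2 * mp.pi * k * y) \
               * mp.cos(2 * mp.pi * k * x)
    return res

tau = mp.mpc(0.3, 1.1)
print(E_lattice(2, tau), E_fourier(2, tau))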
The coefficient of d^6R^4 <cit.> is a new kind of object known in the literature as a generalised non-holomorphic Eisenstein series, and which is defined as the unique modular-invariant solution to the differential equation
[Δ - λ(λ-1)] ℰ(s_1,s_2;λ;τ) = E(s_1;τ) E(s_2;τ) ,
where Δ := y^2(∂_x^2+∂_y^2) is the hyperbolic Laplace operator in τ. The solution is taken subject to the boundary condition that the term of order y^λ in the Laurent polynomial around the cusp y≫ 1 has vanishing coefficient.
This boundary condition uniquely fixes the modular-invariant solution, since the Eisenstein series is the unique modular-invariant solution with polynomial growth at the cusp to the differential equation
[Δ - λ(λ-1)] E(λ;τ) = 0 .
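As a quick check on the two power-behaved terms at the cusp: for a function of y alone the hyperbolic Laplacian reduces to
Δ y^a = y^2 ∂_y^2 y^a = a(a-1) y^a ,
so that [Δ - λ(λ-1)] y^a = 0 precisely for a∈{λ, 1-λ}. These are the two powers appearing in the Laurent polynomial of E(λ;τ) at the cusp, and requiring the y^λ coefficient of the generalised Eisenstein series to vanish removes exactly the residual freedom of adding a multiple of E(λ;τ) to a solution of (<ref>).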
Beyond d^6R^4, higher derivative corrections are not supersymmetrically protected any longer, hence the same methods leading to the exact results presented above cannot be applied.
However, novel results have been obtained by considering the holographic dual of type IIB superstring theory on AdS_5× S^5, notoriously given by 𝒩 = 4 SYM theory with gauge group SU(N).
Thanks to supersymmetric localisation <cit.>, we can obtain various specific 𝒩=4 four-point integrated correlators of superconformal primaries of the stress-energy tensor multiplet[Very recently exciting results have been obtained in <cit.> for a different class of integrated four-point functions of local operators, as well as for integrated two-point functions of superconformal primaries of the stress-energy tensor multiplet in the presence of a half-BPS line defect <cit.>.] by taking different combinations of four derivatives of the partition function for the 𝒩 = 2^* theory (a massive deformation of 𝒩 = 4) on a squashed S^4 with respect to different parameters (squashing, mass and complexified coupling).
In <cit.> the authors exploited these supersymmetric localisation results to compute the large-N expansion of such integrated correlators while keeping fixed the modular parameter, τ, now denoting the Yang-Mills complexified coupling τ = θ/ 2π + 4π i/ g__YM^2.
Using the AdS/CFT dictionary, we identify g__YM^2 = 4 π g_s and ( g__YM^2 N)^= L^2/α', where g_s is the string coupling constant and L is the scale of AdS_5× S^5. Hence the large-N limit of such integrated correlators can help us in understanding higher derivative corrections in type IIB superstring theory on AdS_5× S^5 beyond d^6R^4 <cit.> as well as non-perturbative effects in α' <cit.>.
As a consequence of 𝒩=4 Montonen–Olive duality (also known as S-duality), order by order at large-N we must have an expansion with coefficients that are non-holomorphic modular invariant functions of τ.
From <cit.> we know that half-integer orders in 1/N produce only Eisenstein series. However, for integer orders in 1/N this expansion is conjectured to involve an infinite class of generalised Eisenstein series, ℰ(s_1,s_2;λ;τ),
with half-integer indices s_1,s_2 and spectrum of eigenvalues λ∈ Spec_1(s_1,s_2) constrained by
λ∈ Spec_1(s_1,s_2) := {s_1+s_2+1, s_1+s_2+3, s_1+s_2+5, ...} , s_1,s_2 ∈ℕ+1/2.
The coefficient of the d^6R^4 higher-derivative correction, ℰ(3/2,3/2;4;τ), in (<ref>) is simply the first instance of a generalised Eisenstein series belonging to this first class (<ref>).
For future reference, we notice that within this first flavour of generalised Eisenstein series, ℰ(s_1,s_2;λ;τ), relevant for higher derivative corrections and the large-N expansion of integrated correlators, the eigenvalue λ always has the opposite even/odd parity when compared to the “weight” w=s_1+s_2.
As a final comment, we stress again that from the gauge theory side we obtain exact expressions for four-point correlators which are integrated against different measures over the four insertion points. A difficult open problem is how to reconstruct from the dual IIB superstring side which higher-derivative corrections are responsible for a given generalised Eisenstein series in the large-N expansion. However, in <cit.> the authors used the gauge theory results to reproduce exactly the known BPS corrections to the low-energy expansion of the four-graviton amplitude (<ref>) in type IIB superstring theory in ten-dimensional flat-space. We expect the generalised Eisenstein series (<ref>) to have important implications in our understanding of flat-space higher derivative corrections as well as for the structure of a similar expansion in AdS_5× S^5.
§.§ Modular graph functions and superstring perturbation theory
We now turn our attention towards string perturbation theory where a different manifestation of modularity arises and for which a second and distinct flavour of generalised Eisenstein series plays an important rôle.
The study of the low-energy expansion of superstring perturbation theory has broader connections with different areas of algebraic geometry and number theory. Many recent developments have appeared both in the theoretical physics literature
<cit.>
and the mathematics literature <cit.>.
It is well known that string amplitudes can be computed as perturbative power series expansions in g_s^2, in which a term of order g_s^{2g-2} is associated with a functional integral over a genus-g world-sheet.
For the present work we focus our attention to the well-studied case of the ten-dimensional four-graviton scattering amplitude in type IIB superstring theory at genus one.
As already seen in (<ref>), a consequence of supersymmetry is that the four-graviton amplitude has a prefactor of R^4, for a particular scalar contraction of four linearised Riemann curvature tensors. This means that the genus-g contribution to the four-graviton amplitude takes the form
𝒜_g^(4)(ϵ_i,k_i) =κ_10^2 R^4 T_g(s,t,u) ,
where (ϵ_i,k_i) denotes the polarisations and momenta of the scattered massless particles and κ_10^2 is the ten-dimensional Newton constant.
The function T_g(s,t,u) contains all the non-trivial dynamical structure of the amplitude and is a scalar function of the Mandelstam invariants, conventionally defined by s_ij := -α' (k_i + k_j )^2 /4 with s:=s_12=s_34 , t:=s_13=s_24 and u:=s_14=s_23 satisfying s+t+u=0.
Let us focus our attention to the genus-one contribution in string perturbation theory. A genus-one world-sheet, Σ_τ, has the topology of a torus, which is diffeomorphic to ℂ/Λ, where the lattice Λ = ℤ+τℤ defines the shape of the torus for τ in ℋ. Inequivalent tori are parametrised by different complex structures modulo identifications under large diffeomorphisms associated with the modular group, , i.e. inequivalent tori are parametrised by τ in ℱ =\ℋ.
The genus-one amplitude 𝒜_g=1^(4)(ϵ_i , k_i) can then be expressed as an integral over the insertion points z_i∈Σ_τ of the four-graviton punctures and an integral over τ in ℱ,
𝒜_g=1^(4)(ϵ_i,k_i) = 2πκ_10^2 R ^4∫_ℱd^2τ/y^2ℳ_4(s_ij;τ)
,
where ℳ_4(s_ij;τ) is a modular function that results from the integral
ℳ_4(s_ij;τ):= ∫_Σ_τ( ∏_i=2^4 d^2 z_i/y) exp( ∑_1≤ i < j≤ 4 s_ij G(z_i -z_j | τ) ) ,
having used translational invariance to fix the insertion point z_1. The function G(z|τ) is the Green function on a torus and it is given by
G(z|τ) := -log|θ_1(z|τ)/θ_1'(0|τ)|^2 - π/2y(z-z̅)^2 = y/π∑_(m,n) ≠ (0,0) e^2π i (nu-mv)/|m+nτ|^2,
where τ = x+i y, z = u+vτ with u,v∈ [0,1), and θ_1(z|τ) is a Jacobi theta function.
Needless to say the string amplitude (<ref>) cannot be computed in closed form, however it can be expanded as an infinite series of low-energy contributions by considering the limit in which the Mandelstam invariants s_ij→ 0. In this way, we obtain a perturbative expansion in both α'→ 0 and g_s→0, directly connecting with the perturbative part of the previously discussed effective action (<ref>) in type IIB superstring theory.
There is a nice graphical formalism to compute the low-energy expansion of string amplitudes such as (<ref>), where different terms in this expansion are represented in terms of Feynman diagrams for a conformal scalar field theory on the torus.
Each diagram corresponds to a specific way of contracting different Green functions joining pairs of points at positions z_i and z_j, which are then integrated over Σ_τ, thus from Feynman graphs we obtain associated modular invariant functions called Modular Graph Functions (MGFs) <cit.>.
It is convenient to represent the propagator as the momentum-space lattice-sum (<ref>) and divide diagrams according to the number of independent loop-momenta we sum over.
The simplest class of MGFs is associated with one-loop graphs containing s∈ℕ propagators, which can be evaluated to non-holomorphic Eisenstein series E(s;τ) with integer index s∈ℕ, see e.g. <cit.>.
Two-loop modular graph functions are less familiar and much more interesting.
Surprisingly <cit.>, the action of the Laplacian Δ closes on the vector space of two-loop modular graph functions and produces source terms which are either linear or bilinear in integer index Eisenstein series.
In <cit.> it was shown that all two-loop modular graph functions can be expressed in terms of a second flavour of generalised Eisenstein series, ℰ(s_1,s_2;λ;τ),
with integer indices s_1,s_2∈ℕ and s_1,s_2≥ 2, and with a spectrum of eigenvalues λ∈ Spec_2(s_1,s_2) now given by:
λ∈ Spec_2(s_1,s_2) := {|s_1-s_2|+2, |s_1-s_2|+4, ... ,s_1+s_2-2} , s_1,s_2∈ℕ^≥ 2 .
In contrast to (<ref>), all generalised Eisenstein series relevant for two-loop MGFs have eigenvalues λ with the same even/odd parity as their “weight” w=s_1+s_2.
As shown in <cit.>, the space generated by this second flavour of generalised Eisenstein with spectrum (<ref>) actually goes beyond the world of two-loop MGFs considered in <cit.>. As argued from the generating series of modular graph forms <cit.>, all MGFs are conjecturally given by single-valued iterated integrals of holomorphic Eisenstein series, while the space spanned by the generalised Eisenstein series has to be extended to also include iterated integrals of holomorphic cusp forms. The presence of holomorphic cusp forms has deep consequences; in particular we find that the Fourier expansion of these modular functions presents novel coefficients given by L-values of these holomorphic cusp forms inside and outside the critical strip[In <cit.> it was argued that for linear combinations of generalised Eisenstein series corresponding to two-loop MGFs, the holomorphic cusp forms always drop out. More recently in <cit.>, a similar phenomenon (albeit completely different in nature) has been observed for the generalised Eisenstein with spectrum (<ref>) and their special linear combinations appearing in the large-N expansion of the 𝒩=4 integrated correlators. We thank Ksenia Fedosova and Kim Klinger-Logan for related discussions and for sharing their results with us.]. We will come back to these issues in section <ref>.
To conclude this introductory section, we re-emphasise the importance of understanding the spaces of generalised Eisenstein series (<ref>) and (<ref>). We presented two fundamental instances in non-perturbative and perturbative string theory, where the study of generalised Eisenstein series can provide insight into the possible gravitational interactions of type IIB superstring theory.
In this work we firstly introduce a unifying framework which incorporates in a natural way both flavours (<ref>) and (<ref>) of generalised Eisenstein series, and subsequently combine Poincaré series, resurgence theory and spectral analysis techniques to extract novel results.
§ A UNIFYING POINCARÉ SERIES APPROACH
As explained in the previous section, the key player for the rest of the paper is the generalised Eisenstein series, non-holomorphic modular invariant solution to the inhomogeneous Laplace equation
[Δ - λ(λ-1)] ℰ(s_1,s_2;λ;z) = E(s_1;z) E(s_2;z) .
In the rest of the paper we denote the modular parameter by z=x+iy and by Δ = y^2(∂_x^2+∂_y^2) its associated hyperbolic Laplace operator.
Although our studies will be completely general, we will always refer back to the special (i.e. of string theory origin) cases (<ref>) and (<ref>) for which the modular parameter z→τ is respectively the axio-dilaton (or complexified Yang-Mills coupling in the dual gauge theory side) or the genus-one world-sheet complex structure.
The first representation we are going to discuss for these modular invariant functions is in terms of Poincaré series, i.e. we will express a generalised Eisenstein series, ℰ(s_1,s_2;λ;z), as a sum over SL(2,ℤ)-images of a special class of seed functions.
The idea behind Poincaré series is very natural <cit.>: if we are interested in constructing functions which are invariant under a symmetry group, we can start with an arbitrary seed function and then consider the sum over its orbits under said symmetry group. This sum, if it exists, is guaranteed to be invariant under the required symmetry group.
When the symmetry group is SL(2,ℤ), i.e. for modular invariant functions, we can proceed as follows. Denoting by Φ(z) a modular invariant function, and by φ(z) its seed function, the Poincaré series representation for Φ(z) is given by:
Φ(z) = ∑_{γ∈ B(ℤ)\SL(2,ℤ)} φ(γ· z) ,
where, as usual, we have defined the SL(2,ℤ) action:
γ = [ a b; c d ]∈ SL(2,ℤ) ,
γ· z := (az+b)/(cz+d) ,
and we assumed that the seed function φ(z) is periodic in the real direction, i.e. φ(z+n)=φ(z) for all n∈ℤ, thus explaining the presence of the (Borel) stabiliser
B(ℤ) := {[ ± 1 n; 0 ±1 ] | n∈ℤ}⊂ SL(2,ℤ)
in (<ref>). Note that in general, the Poincaré sum (<ref>) is only absolutely convergent for appropriate seeds; however, we will shortly clarify that the representation (<ref>) can often be understood as a suitable analytic continuation in some complex parameters which φ(z) depends on.
The simplest example of this construction is the non-holomorphic Eisenstein series:
E_s(z) := (π^s/(2ζ(2s))) E(s;z) = ∑_{γ∈ B(ℤ)\SL(2,ℤ)} Im(γ· z)^s
= y^s + (ξ(2s-1)/ξ(2s)) y^{1-s} + (2π^s/(Γ(s)ζ(2s))) ∑_{k≠ 0}|k|^{s-1/2}σ_{1-2s}(k) y^{1/2} K_{s-1/2}(2π |k|y) e^{2π ikx} ,
where we introduced a different normalization compared to (<ref>) for later convenience.
It is important to note that for a given modular invariant function, its Poincaré series representation is far from being unique. Since a Poincaré series is just a sum over SL(2,ℤ) images of a particular seed function, we can simply consider as a new seed function any of these images (or even an infinite sum thereof), and the Poincaré series associated with this new seed function will produce exactly the same modular invariant function. We stress that this change in seed function does in general change the stabiliser of the cusp from B(ℤ) to some other conjugate Borel subgroup. However, if we allow for formally divergent Poincaré series to be interpreted via analytic continuation, we can construct different seeds with the same Borel stabiliser as in (<ref>), and all giving rise to the same modular invariant function.
The easiest example of such phenomenon can be seen immediately from (<ref>). Firstly, we notice that (<ref>) converges only for Re(s)>1 and then we observe that there exists an analytic continuation for the Eisenstein series, which satisfies the reflection formulae
Γ(s) E(s;z) = Γ(1-s) E(1-s;z) ,
ξ(2s) E_s(z) = ξ(2-2s) E_{1-s}(z) .
Hence, at least formally, the Poincaré series of the two seeds y^s and y^{1-s} give multiples of the very same E_s(z), even though y^{1-s} cannot be written as a single SL(2,ℤ) image of y^s but only as an infinite sum of images.
Note that yet another Poincaré seed for E_s(z) was given in <cit.>:
∑_{γ∈ B(ℤ)\SL(2,ℤ)}[ √(|k| y) K_{s-1/2}(2π|k| y) e^{2π i k x}]_γ
= π^{2s+1/2}σ_{2s-1}(k) E_s(z) / (4 |k|^{s-1}cos(π s) Γ(s+1/2)ζ(2s-1) ζ(2s)) ,
where the notation [⋯]_γ means that γ acts on all occurrences of z (and z̅) inside the bracket using the fractional linear action (<ref>).
As we can easily see from (<ref>), the seed appearing in the Poincaré sum is given by the generic Fourier non-zero mode of E_s(z) (or alternatively of E(s;z)) and is therefore expected again to be proportional to E_s(z) (or E(s;z)).
We stress that the sum (<ref>) is divergent, but the result on the right-hand side, as argued for in <cit.>, can be obtained via analytic continuation by rewriting the Poincaré series as the difference of two Niebur–Poincaré series, introduced in <cit.>, that are absolutely convergent on the two non-intersecting domains Re(s)>1 and Re(1-s)>1. In the next section we show that a generalisation of such a Niebur–Poincaré series (<ref>) provides a privileged class of seed functions whose Poincaré sums produce all generalised Eisenstein series, thus constructing a unifying framework to discuss both higher derivative corrections and MGFs.
As already mentioned in the introduction, there are many reasons for seeking Poincaré series representations of modular functions.
Perhaps most importantly: Poincaré series representations are manifestly modular-invariant expressions which in general reduce the complexity of the objects under consideration, e.g. for the Eisenstein series E_s(z) the seed is simply y^s. Similarly, for generalised Eisenstein series relevant for two-loop MGFs, convenient Poincaré seeds were proposed in <cit.>, and were given in terms of certain iterated Eisenstein integrals of depth one, while their corresponding Poincaré series, i.e. two-loop modular graph forms, have to be built in general from iterated Eisenstein integrals of depth two.
While the Poincaré seed is in general of reduced complexity when compared to its modular invariant companion, a drawback is that dealing with generic Poincaré series usually makes it rather cumbersome to extract the analytic properties of the objects under study.
For example, it is not straightforward to obtain from the Poincaré series representation (<ref>) the asymptotic expansion at the cusp, z→ i∞, of a modular function Φ(z).
For Eisenstein series it is a standard result <cit.> to obtain the Fourier mode decomposition from its Poincaré series, as presented in (<ref>). However, for general Poincaré series, a similar analysis is far more intricate. The general asymptotic expansion for Poincaré series of two-loop MGFs was presented in <cit.>, while for alternative seeds analogous results <cit.> involved certain Kloosterman sums. For the generalised Eisenstein series similar to the d^6 R^4 higher-derivative coefficient, the asymptotic expansion at the cusp was derived from yet another different “double”-Poincaré series in <cit.>. In this work we present a unified Poincaré series approach thanks to which both classes of generalised Eisenstein series can be treated in parallel.
As it will be useful shortly, we briefly review how to obtain the Fourier mode expansion of a modular function from that of its Poincaré seed.
Given the Fourier expansions
Φ(z) =∑_k ∈ℤa_k(y)e^2π ik x = ∑_γ∈ B(ℤ)\ SL(2,ℤ)φ(γ· z) ,
φ(z) = ∑_k∈ℤc_k(y)e^2π ikx ,
with x= Re(z) and y = Im(z) and φ denoting the seed, the Fourier modes a_k(y) can be reconstructed from the seed function using the well-known result <cit.>:
a_k(y) = c_k(y) + ∑_d=1^∞∑_m∈ℤ S(k,m;d) ∫_ℝ e^-2π i k ω -2π i m ω/d^2 (y^2+ω^2) c_m(y/d^2(y^2+ω^2))ω .
Here S(k,m;d) denotes in general a Kloosterman sum
S(k,m;d) := ∑_r∈ (ℤ/dℤ)^× e^2π i/d (k r + m r^-1) ,
which is a finite sum over all 0≤ r <d that are coprime to d, such that r has a multiplicative inverse, denoted by r^-1, in (ℤ/dℤ)^×.
In particular, we have that the Fourier zero-mode a_0(y) can be expressed as
a_0(y) = c_0(y) + ∑_d=1^∞∑_m∈ℤ∑_r∈ (ℤ/dℤ)^× e^2π i m r/d∫_ℝ e^-2π i m ω/d^2 (y^2+ω^2) c_m(y/d^2(y^2+ω^2))ω ,
where we rewrote the Kloosterman sum S(0,m;d) as a Ramanujan sum,
S(0,m;d) = S(m,0;d) = ∑_r∈ (ℤ/dℤ)^× e^2π i m r/d .
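The finite sums entering these formulae are straightforward to tabulate; the short Python illustration below (helper names ours) computes S(k,m;d) directly from the definition and confirms numerically that S(0,m;d)=S(m,0;d) reduces to the Ramanujan sum above.

```python
# Kloosterman sums S(k,m;d) and their Ramanujan-sum specialisation, checked numerically.
from cmath import exp, pi
from math import gcd

def kloosterman(k, m, d):
    """S(k,m;d) = sum over r in (Z/dZ)^x of exp(2 pi i (k r + m r^{-1}) / d)."""
    total = 0
    for r in range(1, d):
        if gcd(r, d) == 1:
            rinv = pow(r, -1, d)          # multiplicative inverse of r mod d
            total += exp(2j*pi*(k*r + m*rinv)/d)
    return total

def ramanujan(m, d):
    """c_d(m) = sum over r in (Z/dZ)^x of exp(2 pi i m r / d)."""
    return sum(exp(2j*pi*m*r/d) for r in range(1, d) if gcd(r, d) == 1)

for d in (2, 3, 4, 6, 12):
    for m in (1, 5, 7):
        assert abs(kloosterman(0, m, d) - ramanujan(m, d)) < 1e-9
        assert abs(kloosterman(m, 0, d) - ramanujan(m, d)) < 1e-9
print("S(0,m;d) = S(m,0;d) = Ramanujan sum: verified on a few cases")
print(kloosterman(1, 1, 5).real)          # S(1,1;5) = 2 + 2 cos(4 pi / 5) ~ 0.382
```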
Note that although (<ref>) is an expression for the whole Fourier zero-mode sector, a_0(y), it is in general quite hard to separate the perturbative terms in the asymptotic expansion at the cusp y≫1, i.e. the power-behaved terms, from the non-perturbative, exponentially suppressed terms (qq̅)^n = e^-4π n y.
In <cit.> it was proven that for two-loop modular graph functions it is actually possible to reconstruct these non-perturbative corrections from the perturbative terms using methods from resurgent analysis <cit.>.
§.§ A new Niebur-Poincaré series
One way of constructing a Poincaré series representation for the generalised Eisenstein series,
s_1s_2λz = ∑_γ∈s_1s_2λγ· z ,
relies on rewriting the Laplace equation (<ref>) after having replaced one of the Eisenstein series in the source term, say s_1z, by its Poincaré series (<ref>), usually dubbed as folding s_1z. This leads us to consider an auxiliary Laplace equation for the candidate seed function s_1s_2λ z:
[Δ-λ(λ-1)] s_1s_2λ z = 2ζ(2s_1)/π^s_1 y^s_1s_2z .
We can first rewrite the source term as a Fourier series (<ref>) with respect to x =(z), and then find a particular solution for this Laplace equation mode by mode.
For the Fourier zero-mode sector there is no issue in finding such a particular solution. However, for a Fourier non-zero mode it is rather difficult to find a particular solution to (<ref>) which is expressible in terms of simple building-block seed functions for generic values of s_1,s_2 and λ .
In <cit.> it was shown that all two-loop MGFs, or more broadly all generalised Eisenstein series with spectrum given by (<ref>), can be written as Poincaré series of finite linear combinations of the building-block seed functions introduced in <cit.>
φ(a,b,r| z)= ∑_m≠ 0σ_a( m ) (4π| m | )^b y ^r e^-2π |m| y e^2π i m x ,
for different values of the parameters (a,b,r). It was nonetheless noticed in <cit.> that such seeds are rather ill-suited to describe generalised Eisenstein series relevant for higher-derivative corrections and integrated correlators, where the spectrum is (<ref>). For these generalised Eisenstein series it is still possible to write a seed function in terms of building-blocks (<ref>), but one requires an infinite sum over such simple seeds, thus making it quite hard to extract the asymptotic expansion at the cusp or other analytic properties from the corresponding Poincaré series.
Other types of Poincaré series have been proposed in the literature <cit.> for the diagonal elements, i.e. s_1=s_2, in the first family (<ref>), while in <cit.> other examples in this class are analysed directly from the differential equation point of view.
To construct a class of Poincaré seeds suited for discussing both (<ref>)-(<ref>) in a uniform manner, we have to re-examine the Laplace equation (<ref>). From the Fourier decomposition of the Eisenstein series (<ref>), we notice that the m^th Fourier mode, with m≠0, of the source term is schematically of the form
σ_a(m) |m|^b-1/2 y^r+1/2 K_s-1/2(2π |m| y)e^2π i mx ,
for some specific values of the parameters (a,b,r,s).
Thanks to the recurrence relations satisfied by the modified Bessel function K_s(y), for both spectra (<ref>)-(<ref>) it is always possible to find a finite linear combination over different values of the parameters[In this context the parameter a is rather special, since it is the index of the divisor sum function σ_a(m). From the Laplace equation (<ref>) and the Fourier mode decomposition (<ref>) it is easy to see that a=1-2s_2 for the present discussion.] (a,b,r,s) of terms as above which is a solution to (<ref>) in the m^th Fourier mode sector.
With this fact at hand, we can now introduce a novel space of Poincaré seeds and associated Poincaré series which is both general enough, in that every string theory generalised Eisenstein series (<ref>)-(<ref>) can be written as a Poincaré series of finite linear combinations of these novel seeds, and simple enough so that we can easily extract asymptotic data both at the cusp y≫ 1 and at the origin y→0.
We define the seed function
abrsz =∑_m≠0υ_m(a,b,r,s ; y) e^2π i mx
:= ∑_m≠ 0σ_a(m)|m|^b-1/2 y^r+1/2 K_s-1/2(2π |m| y)e^2π i mx,
which depends on four complex parameters (a,b,r,s). Given that the Bessel function K_s(y) is exponentially suppressed for large values of its argument, we immediately have that the sum over the Fourier mode, m, is absolutely convergent for any values of the parameters (a,b,r,s). This property of the Bessel function implies as well that the seed function is exponentially suppressed for y≫ 1, however, the limit y→ 0 is more delicate to analyse.
Under the assumption that the Poincaré series of such a class of seed functions is well-defined, we can introduce a novel class of modular invariant functions which we denote by
abrsz : = ∑_γ∈abrsγ· z .
The convergence of this Poincaré series is studied in appendix <ref>, where we prove that absolute convergence is guaranteed when
min{(r+1-s), (r+s), (r-b), (r-a-b)}>1 .
In what follows we can relax the requirement of absolute convergence and consider if necessary the Poincaré series (<ref>) in terms of its analytic continuation in some of its complex parameters (a,b,r,s), in direct analogy with the discussion below (<ref>).
The keen-eyed reader will notice that the new seeds (<ref>) are very reminiscent of the rather unconventional Poincaré series representation (<ref>) for sz. The reason is that, very much like (<ref>), our expression (<ref>) can be obtained as an infinite sum over all Fourier non-zero modes, m≠ 0, of the difference between two Niebur–Poincaré series <cit.>. We will shortly prove that both string theory generalised Eisenstein series (<ref>)-(<ref>) can be obtained from finite linear combinations of these new Niebur–Poincaré series (<ref>).
As already stressed, one of the perks of a Poincaré series representation is that in general it simplifies the complexity of the objects under consideration.
In particular, from the seed function definition (<ref>) we can already deduce various algebraic and differential identities satisfied by the modular objects abrsz.
Firstly, we note that the seed functions (<ref>) are invariant under the reflection s→ 1-s,
abr1-sz = ∑_m≠ 0σ_a(m)|m|^b-1/2 y^r+1/2K_1/2-s (2π |m| y)e^2π i mx = abrsz ,
due to the Bessel function identity K_s(y)=K_-s(y).
Similarly, we have invariance under the transformation (a,b)→ (-a,b+a),
-aa+brsz = ∑_m≠ 0σ_-a(m)|m|^a+b-1/2 y^r+1/2K_s-1/2 (2π |m| y)e^2π i mx = abrsz ,
a straightforward consequence of the identity σ_-a(m) = |m|^-aσ_a(m).
From these two observations we deduce that the modular functions must also inherit these symmetries,
abrsz =abr1-sz ,
abrsz =-ab+arsz .
More interestingly, given the well-known Bessel function recurrence relation
K_s+1(y)-K_s-1(y) = 2s/yK_s (y) ,
we can immediately derive the three-term recursion
abrs+1z-abrs-1z = 2s-1/2πab-1r-1sz .
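These identities are simple to test numerically at the level of the seed functions; the mpmath sketch below (our names, and assuming the half-integer shifts |m|^b-1/2 y^r+1/2 K_s-1/2 in the seed) verifies the reflection s→1-s, the relabelling (a,b)→(-a,a+b) and the three-term recursion at a sample point of the upper half-plane.

```python
# Numerical spot-checks of the seed-level identities behind the reflection, relabelling
# and three-term recursion above.  Truncating the m-sum is harmless because of the
# exponential decay of the Bessel function.
from mpmath import mp, mpc, mpf, besselk, exp, pi

mp.dps = 25
z = mpc(0.27, 0.8)

def sigma(a, n):
    return sum(mpf(d)**a for d in range(1, n + 1) if n % d == 0)

def seed(a, b, r, s, z, M=25):
    """sum_{m != 0} sigma_a(m) |m|^{b-1/2} y^{r+1/2} K_{s-1/2}(2 pi |m| y) e^{2 pi i m x}."""
    x, y = z.real, z.imag
    out = mpc(0)
    for m in range(1, M + 1):
        radial = sigma(a, m) * mpf(m)**(b - mpf('0.5')) * y**(r + mpf('0.5')) \
                 * besselk(s - mpf('0.5'), 2*pi*m*y)
        out += radial * (exp(2j*pi*m*x) + exp(-2j*pi*m*x))
    return out

a, b, r, s = mpf('0.3'), mpf('1.1'), mpf('2.4'), mpf('1.7')

print(abs(seed(a, b, r, s, z) - seed(a, b, r, 1 - s, z)))        # reflection s -> 1-s
print(abs(seed(a, b, r, s, z) - seed(-a, a + b, r, s, z)))       # (a,b) -> (-a, a+b)
lhs = seed(a, b, r, s + 1, z) - seed(a, b, r, s - 1, z)          # three-term recursion
rhs = (2*s - 1)/(2*pi) * seed(a, b - 1, r - 1, s, z)
print(abs(lhs - rhs))      # all three differences are zero up to the working precision
```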
Note that even if we consider a seed function whose parameters (a,b,r,s) satisfy the conditions (<ref>) for absolute convergence of the Poincaré series, repeated applications of this recursion formula (<ref>) will inevitably bring us outside of the domain (<ref>) where the analytic continuation of (<ref>) has to be discussed carefully.
Finally, given that our discussion started from the inhomogeneous Laplace equation (<ref>), it is natural to consider the action of the Laplace operator on (<ref>). By simply applying the Laplacian to (<ref>) and using the known identity for the derivative of the Bessel function,
K'_s(y) = - s/y K_s(y) - K_s-1(y) ,
we arrive at
[ Δ -(r+1-s)(r-s)] abrsz = -4π r ab+1r+1s-1z ,
or equivalently making use of (<ref>):
[ Δ- (r+s)(r+s-1)] abrsz = -4π rab+1r+1s+1z .
We have thus obtained that the functions abrsz satisfy a closed system of inhomogeneous Laplace eigenvalue equations where the source term is given by yet another function of the same type, but different parameters (a,b,r,s).
Both Laplace equations (<ref>)-(<ref>) simplify dramatically for r=0, where they reduce to
[Δ - s(s-1) ] ab0sz= 0 ,
and since the function ab0sz is manifestly a modular invariant eigenfunction of Δ with eigenvalue s(s-1) it must be proportional to sz.
We will shortly prove that abrsz has polynomial growth at the cusp and compute explicitly its asymptotic expansion using the integral representation (<ref>), thus easily fixing the coefficient of proportionality between ab0sz and sz.
Alternatively, we can see from (<ref>) that each summand with Fourier mode m=k in the seed function ab0sz is proportional to the Poincaré seed (<ref>) for sz. The only difference with (<ref>), is that the sum over m in (<ref>) will simply produce a particular Dirichlet series which will contribute to the proportionality factor between ab0sz and sz.
As already mentioned, the novel seeds (<ref>) are constructed precisely to provide for a broad enough basis of solutions to (<ref>).
Correspondingly, we will show that it is possible to produce finite linear combinations of abrsz which are solutions to the generalised Eisenstein series differential equation (<ref>) relevant for string theory.
A central part of this analysis is the observation that the space of functions abrsz contains all products of two Eisenstein series, i.e. all possible source terms of (<ref>). The proof of this statement is very simple. If we consider the bilinear s_1zs_2z, we first fold s_1z and then re-express s_2z in Fourier modes arriving at
s_1zs_2z=8ξ(2s_1)/Γ(s_1)Γ(s_2)1-2s_2s_2s_1s_2z
+ 2Γ(s_1+s_2)ξ(2s_1)ξ(2s_2)/Γ(s_1)Γ(s_2)ξ(2(s_1+s_2))s_1+s_2z+2Γ(s_1+1-s_2)ξ(2s_1)ξ(2s_2-1)/Γ(s_1)Γ(s_2)ξ(2(s_1+1-s_2))s_1+1-s_2z .
Alternatively we can use the reflection formula (<ref>), combined with (<ref>)-(<ref>), to derive
s_1zs_2z=8ξ(2s_2-1)/Γ(s_1)Γ(s_2)1-2s_1s_11-s_21-s_1z
+ 2Γ(s_1+s_2-1)ξ(2s_1-1)ξ(2s_2-1)/Γ(s_1)Γ(s_2)ξ(2(s_1+s_2)-3)s_1+s_2-1z+2Γ(s_1+1-s_2)ξ(2s_1)ξ(2s_2-1)/Γ(s_1)Γ(s_2)ξ(2(s_1-s_2)+2)s_1+1-s_2z .
Note that by folding s_1z we break the symmetry between s_1↔ s_2. This comes at a notable price in the diagonal case s_1=s_2, where (<ref>)-(<ref>) have to be regulated. For s_1=s_2, the right-hand side of both equations contains the divergent Eisenstein series 1z. However, since the bilinear s_1z^2 is perfectly regular for s_1≠1, this implies that the modular functions 1-2s_1s_1s_1 s_1 and 1-2s_1s_11-s_11-s_1z must diverge as well. A regularised version of (<ref>)-(<ref>) for the case s_1=s_2 is easily obtained by approaching the diagonal s_1=s_2 case as a continuous limit:
s_1z^2=lim_ϵ→ 0[ s_1+ϵz s_1z] .
When ϵ≠0 we can safely write the right-hand side using (<ref>)-(<ref>). As ϵ→ 0 our formulae (<ref>)-(<ref>) produce a divergent contribution coming from 1+ϵz which cancels against the similarly singular Υ thus leaving us with a regular expression.
The need for a regularisation of the diagonal case s_1=s_2 is a ubiquitous phenomenon <cit.> and is independent of the particular seeds considered in the present work.
§.§ Asymptotic expansion at the cusp
Let us now derive the asymptotic expansion near the cusp z→ i ∞ for the modular invariant functions abrsz.
Firstly we perform a Fourier mode decomposition,
abrsz = ∑_k∈ℤkabrsye^2π ikx ,
and focus on deriving the asymptotic expansion for large y of the Fourier zero-mode abrsy.
In the previous section we have already reviewed how to retrieve the Fourier modes of a Poincaré series from an integral transform (<ref>) of the Fourier modes for the corresponding seed function. In particular, if we focus on the Fourier zero-mode sector (<ref>) for the specific seeds (<ref>) under consideration, we have to compute:
abrsy
= ∑_d=1^∞∑_m≠ 0 S(m,0;d) ∫_ℝ e^-2π im ω/d^2(ω^2 + y^2)σ_a(m)|m|^b-(y/d^2(ω^2 + y^2))^r+ K_s-(2π|m|y/d^2(ω^2 + y^2))ω.
In appendix <ref> we show how the above integral transform can be rewritten as a nicer Mellin-Barnes type of contour integral, thus making the task of extracting the asymptotic expansion at the cusp much more manageable.
Relegating the more technical details to the appendix, we present here the key result of our analysis: the integral representation (<ref>) can be rewritten as the Mellin-Barnes integral
abrsy= ∫_1/2-i∞^1/2+i∞ U(a,b,r,s| t) y^t t/2π i ,
where we define
U(a,b,r,s| t) := Γ((r+1-s-t)/2)Γ((r+s-t)/2)Γ((t+r-s)/2)Γ((t+r+s-1)/2)/(2π^r Γ(r)ξ(2-2t))
×ζ(r+1-b-t)ζ(r+1-a-b-t)ζ(t+r-b)ζ(t+r-a-b)/ζ(2r+1-a-2b) .
For this section, unless otherwise specified, we restrict ourselves to the range of parameters (<ref>) for which the Poincaré series is absolutely convergent. However, at the end of appendix <ref> we explain that for parameters, (a,b,r,s), which do not produce convergent Poincaré series, the Mellin-Barnes representation (<ref>) is still perfectly valid provided the contour of integration is modified from the straight line Re(t)=1/2 to a contour separating the two families of poles we are about to discuss.
It is now fairly straightforward to extract from the Mellin-Barnes integral (<ref>) the asymptotic expansion of abrsy as y ≫ 1 by closing the contour of integration in the left half-plane Re(t)<1/2 and collecting residues from the different singular terms in (<ref>).
As we can see in Figure <ref>, when the parameters (a,b,r,s) define an absolutely convergent Poincaré series, i.e. when they satisfy (<ref>), closing the contour at infinity in the half-plane Re(t)<1/2 selects two different types of poles:
* From the zeta functions ζ(t+r-b) and ζ(t+r-a-b) we have two poles located respectively at t = b+1-r and t=a+b+1-r;
* From the gamma functions Γ((t+r-s)/2) and Γ((t+r+s-1)/2) we have two infinite families of poles located respectively at t= s - r -2n and t=1-s-r-2n with n∈ℕ.
It is easy to see that under the assumption (<ref>) of an absolutely convergent Poincaré series, the above poles are all located in the half-plane (t)<1/2, while all remaining poles in (<ref>) are located in the half-plane (t)>1/2.
Computing the residues at said poles and summing over all of them produces the wanted asymptotic expansion for large-y of the Fourier zero-mode,
abrsy∼ ζ(2r-a-2b)/(2Γ(r)ζ(2r-a-2b+1))[ c^(1)(a,b,r,s) y^b+1-r+ c^(1)(-a,a+b,r,s)y^a+b+1-r]
+∑_n=0^∞ y^-r-2n[ y^s c^(2)_n(a,b,r,s)+ y^1-s c^(2)_n(a,b,r,1-s)] ,
where for convenience of presentation we defined the coefficients
c^(1)(a,b,r,s) = Γ((b+1-s)/2)Γ((2r-b-s)/2)Γ((b+s)/2)Γ((2r+s-b-1)/2)ζ(1-a)/(π^b Γ(r-b)) ,
c^(2)_n(a,b,r,s) = (-1)^n π^2n+1-sΓ(n+r) Γ(s-n-1/2) Γ(n+r-s+1/2) /(n! Γ(r) Γ(2n+r+1-s))
×ζ(s-b-2n)ζ(s-a-b-2n) ζ(2n+2r+1-b-s) ζ(2n+2r+1-a-b-s)/(ζ(2r-a-2b+1) ζ(4n+2r+2-2s)) .
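As an internal consistency check, the residue computation above can be tested against a direct numerical evaluation of the Mellin-Barnes integral (<ref>). The mpmath sketch below transcribes U and the coefficients following our reading of the halved gamma arguments; the sample parameters lie inside the convergence region (<ref>), and all names and parameter values are ours.

```python
# Mellin-Barnes contour integral for the Fourier zero-mode vs. its truncated asymptotic
# expansion, at sample parameters for which the Poincare series converges absolutely.
from mpmath import mp, mpf, gamma, zeta, pi, quad, factorial

mp.dps = 30
a, b, r, s, y = mpf('0.2'), mpf('0.4'), mpf('3.0'), mpf('1.3'), mpf(5)

def xi(w):
    return pi**(-w/2) * gamma(w/2) * zeta(w)

def U(t):
    num = gamma((r+1-s-t)/2)*gamma((r+s-t)/2)*gamma((t+r-s)/2)*gamma((t+r+s-1)/2)
    zet = zeta(r+1-b-t)*zeta(r+1-a-b-t)*zeta(t+r-b)*zeta(t+r-a-b)
    return num*zet / (2*pi**r*gamma(r)*xi(2-2*t)*zeta(2*r+1-a-2*b))

# integral over Re(t)=1/2; the integrand decays exponentially, so |Im t| <= 40 suffices
mb = quad(lambda u: U(mpf('0.5') + 1j*u) * y**(mpf('0.5') + 1j*u),
          [-40, -10, 0, 10, 40]) / (2*pi)

def c1(a, b, r, s):
    return gamma((b+1-s)/2)*gamma((2*r-b-s)/2)*gamma((b+s)/2)*gamma((2*r+s-b-1)/2) \
           * zeta(1-a) / (pi**b * gamma(r-b))

def c2(n, a, b, r, s):
    pref = (-1)**n * pi**(2*n+1-s) * gamma(n+r)*gamma(s-n-mpf('0.5'))*gamma(n+r-s+mpf('0.5')) \
           / (factorial(n)*gamma(r)*gamma(2*n+r+1-s))
    return pref * zeta(s-b-2*n)*zeta(s-a-b-2*n)*zeta(2*n+2*r+1-b-s)*zeta(2*n+2*r+1-a-b-s) \
           / (zeta(2*r-a-2*b+1)*zeta(4*n+2*r+2-2*s))

asym = zeta(2*r-a-2*b)/(2*gamma(r)*zeta(2*r-a-2*b+1)) \
       * (c1(a, b, r, s)*y**(b+1-r) + c1(-a, a+b, r, s)*y**(a+b+1-r))
asym += sum(y**(-r-2*n) * (y**s*c2(n, a, b, r, s) + y**(1-s)*c2(n, a, b, r, 1-s))
            for n in range(9))

print(mb.real)   # the two numbers agree to high accuracy at y = 5, up to corrections
print(asym)      # of order exp(-4 pi y) and the truncation of the asymptotic series
```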
Besides the first two terms y^b+1-r and y^a+b+1-r, coming from the isolated poles of the two Riemann zeta functions, the remaining perturbative series is, for general parameters (a,b,r,s), an asymptotic, factorially divergent power series.
From (<ref>), the growth of the perturbative coefficients is c^(2)_n(a,b,r,s) = O( (2n)!), which, combined with the powers (4π y )^-2n, immediately suggests the presence of exponentially suppressed corrections (qq̅)^n = e^-4π n y.
While the modular functions abrsz provide a natural extension of the generalised Eisenstein series, for generic values of the parameters they have non-terminating and factorially divergent formal power-series expansions at the cusp y ≫ 1, unlike the generalised Eisenstein series themselves.
Crucially, at non-generic and physically relevant points in parameter space, i.e. for special values of a,b,r,s corresponding to generalised Eisenstein series, the asymptotic tail of abrsz vanishes and (<ref>) reduces to a sum of finitely many terms. In the next section, we show that this happens for a∈ℤ and (b,r,s) either all integers or all half-integers.
This dramatic change of the asymptotic series (<ref>) from a factorially divergent formal power series to a finite sum can be understood quite easily from the contour integral representation given in (<ref>). From the definition (<ref>) of the function U(a,b,r,s| t) we notice that the gamma functions generate two infinite families of poles in t on both sides of the integration contour Re(t)=1/2. At the same time, the four Riemann zeta functions present two pairs of identically spaced, infinite families of (trivial) zeros in t, again on both sides of the integration contour Re(t)=1/2. The truncation of the asymptotic series (<ref>) to a finite sum happens precisely for those special values of a,b,r,s for which these families of poles and zeros start overlapping. As we will show in the next section, the case of interest, the generalised Eisenstein series, falls neatly into this category.
The analytic continuation in (a,b,r,s) is crucial for fixing the exponentially suppressed (qq̅)-terms from the formal and factorially divergent perturbative expansion at the cusp, since for generic (a,b,r,s) the requirement of a well-defined Borel-Ecalle resummation of (<ref>) allows for calculation of all (qq̅)-terms, similar to <cit.>.
Surprisingly, even when at special values of the parameters (a,b,r,s) the series (<ref>) becomes a finite sum, such non-perturbative resurgent corrections do survive. In the literature, this is usually dubbed Cheshire Cat resurgence <cit.> from the eponymous feline of Alice in Wonderland with a disappearing body but a lingering enigmatic grin.
Since such a resurgence analysis is akin to the one carried out in <cit.> for a different general class of seed functions (<ref>), we will not repeat this calculation here. Later in the paper we will however revisit the calculation of exponentially suppressed terms from the spectral analysis point of view.
We conclude this section with a simpler “special” example, namely the case of the standard Eisenstein series.
As previously remarked, since abr=0sz is a modular solution to the Laplace equation (<ref>), it must be proportional to sz.
Given the generic asymptotic expansion (<ref>), we can now fix the constant of proportionality.
Firstly, it is a well-known result (<ref>) that the asymptotic expansion at the cusp for sz has only two power-behaved terms: y^s and y^1-s.
These two terms are easily recognisable in (<ref>) as regulated versions of the n=0 terms y^s-r c^(2)_0(a,b,r,s)+ y^1-s-r c^(2)_0(a,b,r,1-s), while all other terms vanish.
More precisely, from the definition (<ref>) we notice that the factor Γ(r) appearing in the denominator is singular at r=0; this is easily regulated by setting r=ϵ and taking the limit ϵ→ 0 at the very end. Only the poles of (<ref>) located at t=s+ϵ and t=1-s-ϵ have a non-vanishing residue in the limit ϵ→ 0, and they produce precisely a multiple of the expected Eisenstein series Laurent polynomial (<ref>).
This allows us to fix the proportionality factor between abr=0sz and sz as such
ab0sz = 2 tan(π s) Γ(s) ζ(1-b-s)ζ(1-a-b-s)ζ(s-b)ζ(s-a-b)/(2s-1)π^s-1 ζ(1-a-2b)ζ(2-2s) sz .
We stress again that we could have reached the same result from a direct comparison between each Fourier mode of the Poincaré seed (<ref>) and the unusual Poincaré series (<ref>) for sz. Applying (<ref>) to each Fourier mode in (<ref>), leaves us with a particular Dirichlet sum over the Fourier non-zero modes m ∈ℤ∖{0} which, once evaluated, brings us back (<ref>).
Lastly, an easy application of the recursion formula (<ref>) shows that all of ab-nsz, with n∈ℕ, are also finite sums of Eisenstein series,
ab- nsz =π^n n! ∑_k=0^n(1)^ k+1 (s+n-2 k-) Γ(s-k-)/k! Γ (n-k+1) Γ(n+s-k+)γ(a,b+n,s+n-2k)s+n2kz ,
where the coefficient γ(a,b,s) is the proportionality constant appearing in (<ref>), i.e.
γ(a,b,s) = 2 tan(π s) Γ(s) ζ(1-b-s)ζ(1-a-b-s)ζ(s-b)ζ(s-a-b)/(2s-1)π^s-1 ζ(1-a-2b)ζ(2-2s) .
§.§ A ladder of inhomogeneous Laplace equations
We have just seen that this newly defined space (<ref>) of modular invariant functions does contain both single Eisenstein series (<ref>) and products of two Eisenstein series (<ref>)-(<ref>). We now show that the functions abrsz are also closed under the action of the Laplace operator in z.
In particular, we describe a method of generating solutions to an infinite ladder of Laplace equations where the source term is a fixed function abrsz and the eigenvalues lie in the spectrum
Spec(r+s) = {r+s-2, r+s-4, r+s-6, ...} ,
i.e. they take the form
λ_n(r+s):=r+s-2(n+1) ,
with n∈ℕ .
Once the source term abrsz is properly chosen, this spectrum reduces to the string theory spectra (<ref>)-(<ref>) and the constructed solution produces precisely a given generalised Eisenstein series expressed as a finite linear combination of novel Poincaré series (<ref>).
Not to clutter the notation, in this section we will suppress the explicit z-dependence.
The starting point of our analysis is the differential equation (<ref>), rewritten here in a more convenient form
[Δ -λ_0(r+s)(λ_0(r+s)-1)] ab-1r-1s-1/4π (1-r)=abrs .
To construct this ladder of Laplace equations, we view this equation as the top element in a tower of similar equations with decreasing eigenvalues.
We now look for linear combinations, _n(a,b,r,s), of functions a'b'r's' with different parameters (a',b',r',s') that are solutions to
[ Δ-λ_n(r+s)(λ_n(r+s)-1)]_n(a,b,r,s)= abrs .
The starting Laplace equation (<ref>) gives us the initial condition
_0(a,b,r,s) = ab-1r-1s-1/4π (1-r) ,
while the rest of the ladder is generated from here by exploiting the crucial recursion relation (<ref>) as we now show.
To simplify the discussion we introduce a linear operator 𝒟 which acts on the space of modular functions (<ref>) as
𝒟abrs:=abrs-2+2s-3/2πab-1r-1s-1 ,
for which the recursion relation (<ref>) can then be written in the compact form
𝒟abrs=abrs .
One can easily check by induction that an n-fold application of this operator produces a sum of n+1 modular functions given by
𝒟^nabrs=∑_k=0^n n k(∏_i=0^k-12(s+i-n)-1/2π)ab-kr-ks+k-2n .
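The induction behind this closed formula is easy to delegate to a computer algebra system; in the sympy sketch below (our names) the modular functions are represented purely by their parameter 4-tuples with exact coefficients, 𝒟 acts by shifting labels as in (<ref>), and its n-fold action is compared with the claimed closed form.

```python
# Symbolic check of the closed formula for the n-fold action of D on the labels (a,b,r,s).
import sympy as sp
from collections import defaultdict

a, b, r, s = sp.symbols('a b r s')

def apply_D(terms):
    """One application of D: F(a,b,r,s) -> F(a,b,r,s-2) + (2s-3)/(2 pi) F(a,b-1,r-1,s-1)."""
    out = defaultdict(lambda: sp.Integer(0))
    for (pa, pb, pr, ps), c in terms.items():
        out[(pa, pb, pr, ps - 2)] += c
        out[(pa, pb - 1, pr - 1, ps - 1)] += c * (2*ps - 3) / (2*sp.pi)
    return dict(out)

def Dn_closed_form(n):
    """Claimed result for D^n F(a,b,r,s)."""
    out = {}
    for k in range(n + 1):
        coeff = sp.binomial(n, k)
        for i in range(k):
            coeff *= (2*(s + i - n) - 1) / (2*sp.pi)
        out[(a, b - k, r - k, s + k - 2*n)] = coeff
    return out

for n in range(1, 7):
    lhs = {(a, b, r, s): sp.Integer(1)}
    for _ in range(n):
        lhs = apply_D(lhs)
    rhs = Dn_closed_form(n)
    assert set(lhs) == set(rhs)
    assert all(sp.simplify(lhs[key] - rhs[key]) == 0 for key in rhs)
print("D^n closed formula verified for n = 1,...,6")
```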
While it is not immediately obvious how to use the Laplace equation (<ref>) to invert (<ref>) and find _n(a,b,r,s), we can use the recursion relation to rewrite (<ref>) as
[ Δ-λ_0(r+s-2n)(λ_0(r+s-2n)-1)]_n(a,b,r,s)=abrs
= 𝒟^nabrs=∑_k=0^n n k(∏_i=0^k-12(s+i-n)-1/2π)ab-kr-ks+k-2n .
Although 𝒟^n abrs is a linear combination of modular functions ab'r's' with different parameters (a,b',r',s'), we notice that the action of 𝒟^n produces a uniform shift on r+s, i.e. for every term in this linear combination we have r'+s' = r+s-2n.
This means that if we consider the left-hand side of (<ref>) term by term, we have reduced the problem to a collection of equations (<ref>) for different values of parameters (a,b',r',s') but all satisfying r'+s' = r+s-2n.
We can then use the inversion of the Laplacian (<ref>) term by term to arrive at
_n(a,b,r,s) = ∑_k=0^n n k(∏_i=0^k-12(s+i-n)-1/2π) ab-k-1r-k-1s+k-2n-1/4π(k+1-r) ,
which is the sought-after solution to the ladder of Laplace equations (<ref>) with eigenvalue λ_n(r+s) = r+s-2(n+1) and source abrs.
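The same label-level bookkeeping also verifies this ladder solution symbolically: acting with Δ through the Laplace equation (<ref>), written as Δ F(a,b,r,s) = (r+s)(r+s-1) F(a,b,r,s) - 4πr F(a,b+1,r+1,s+1), and subtracting λ_n(λ_n-1) reproduces 𝒟^n applied to the source. The sympy sketch below uses our own names and is only a consistency check of the formulae quoted above.

```python
# Acting with Delta - lambda_n(lambda_n - 1) on the ladder solution reproduces D^n F.
import sympy as sp
from collections import defaultdict

a, b, r, s = sp.symbols('a b r s')

def laplacian(terms):
    """Label-level rule: Delta F(a,b,r,s) = (r+s)(r+s-1) F(a,b,r,s) - 4 pi r F(a,b+1,r+1,s+1)."""
    out = defaultdict(lambda: sp.Integer(0))
    for (pa, pb, pr, ps), c in terms.items():
        out[(pa, pb, pr, ps)] += c*(pr + ps)*(pr + ps - 1)
        out[(pa, pb + 1, pr + 1, ps + 1)] += -4*sp.pi*pr*c
    return dict(out)

def prod_coeff(n, k):
    coeff = sp.binomial(n, k)
    for i in range(k):
        coeff *= (2*(s + i - n) - 1) / (2*sp.pi)
    return coeff

def ladder_solution(n):
    return {(a, b - k - 1, r - k - 1, s + k - 2*n - 1):
            prod_coeff(n, k) / (4*sp.pi*(k + 1 - r)) for k in range(n + 1)}

def Dn_closed_form(n):
    return {(a, b - k, r - k, s + k - 2*n): prod_coeff(n, k) for k in range(n + 1)}

for n in range(0, 5):
    lam = r + s - 2*(n + 1)
    En = ladder_solution(n)
    lhs = laplacian(En)
    for key, c in En.items():                       # subtract lambda_n(lambda_n - 1) E_n
        lhs[key] = lhs.get(key, 0) - lam*(lam - 1)*c
    lhs = {key: v for key, v in lhs.items() if sp.simplify(v) != 0}
    rhs = Dn_closed_form(n)
    assert set(lhs) == set(rhs)
    assert all(sp.simplify(lhs[key] - rhs[key]) == 0 for key in rhs)
print("ladder solution verified against D^n F for n = 0,...,4")
```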
Note that while in general this ladder does not terminate, whenever the parameter r is a strictly positive integer, which will be the relevant case for the MGFs spectrum (<ref>), the ladder does in fact terminate after finitely many steps.
This is easy to see from (<ref>): let us assume that r=n+1 with n∈ℕ, for which (<ref>) becomes ill-defined. In (<ref>) the would-be k=n term reduces to ab-n-10s-n-1 and according to the differential equation (<ref>)
the action of the Laplace eigenvalue operator on such a factor should produce the corresponding source proportional to ab-n1s-n. However, this is not possible since ab-n-10s-n-1 is precisely proportional (<ref>) to the Eisenstein series s-n-1z which is annihilated by (<ref>) in the case r=n+1. We will come back to this point when discussing this ladder of equations for the case of MGF generalised Eisenstein series.
In the context of this paper, we are particularly interested in generating solutions to Laplace eigenvalue equations with sources given by products of two Eisenstein series.
One of the perks of our approach is that the ladder of Laplace equations (<ref>) just found precisely reduces to the desired inhomogeneous Laplace
eigenvalue equations when the source term abrsz is suitably chosen so as to reproduce the wanted bilinear in Eisenstein series as given in
(<ref>)-(<ref>).
The first flavour of generalised Eisenstein series
Let us now use the ladder (<ref>) just discussed to reconstruct the first string theory flavour of generalised Eisenstein series (<ref>).
We then consider half-integer indices s_1,s_2∈ℕ+1/2 and we want to reproduce the non-terminating spectrum of eigenvalues
Spec_1(s_1,s_2)={s_1+s_2+1, s_1+s_2+3, s_1+s_2+5, ...} .
To this end, we specialise the ladder (<ref>) to the case for which the source term abrsz produces the second representation we found for the product of two Eisenstein series (<ref>), i.e. we specialise our ladder to
(a,b,r,s) = ( 1-2 s_1, s_1, 1 -s_2, 1-s_1) ,
and assume that s_1,s_2 are fixed half-integers, in which case (<ref>) can be reduced to
[Δ-λ^(1)_n(λ^(1)_n-1)]8ξ(2s_2-1)/Γ(s_1)Γ(s_2)_n(1-2s_1, s_1, 1-s_2, 1-s_1| z)= s_1zs_2z
-2Γ(s_1+s_2-1)ξ(2s_1-1)ξ(2s_2-1)/Γ(s_1)Γ(s_2)ξ(2(s_1+s_2)-3)s_1+s_2-1z-2Γ(s_1+1-s_2)ξ(2s_1)ξ(2s_2-1)/Γ(s_1)Γ(s_2)ξ(2(s_1+1-s_2))s_1+1-s_2z .
If we apply the ladder procedure directly with the fixed parameters (<ref>), we find that the ladder eigenvalues (dropping their explicit dependence on the fixed source indices s_1,s_2) are now λ̃^(1)_n=-s_1-s_2-2n; however, the exchange λ̃^(1)_n→λ^(1)_n = 1-λ̃^(1)_n leaves the equation invariant and produces the expected spectrum of eigenvalues
λ^(1)_n = s_1+s_2+2n+1 .
This change is not without consequences: the constructed modular invariant solution, _n, does not quite land (modulo single Eisenstein terms) on ℰ(λ^(1)_n;s_1,s_2| z), the generalised Eisenstein series we are interested in, but rather on the reflected ℰ(1-λ^(1)_n;s_1,s_2| z). We can use the general expression (<ref>) to compute the asymptotic expansion of _n(1-2s_1, s_1,1-s_2,1-s_1| z) at large-y and confirm that the homogeneous solution y^1-λ^(1)_n has vanishing coefficient, i.e. we land exactly on the opposite boundary condition compared to the wanted generalised Eisenstein series ℰ(λ^(1)_n;s_1,s_2| z).
This can be fixed by adding a suitable multiple of the modular invariant homogeneous solution, λ^(1)_nz, such that the new solution satisfies the desired boundary condition of a vanishing coefficient for the homogeneous solution y^λ^(1)_n.
Lastly, with the help of the differential equation (<ref>) we can easily invert the single Eisenstein source terms in (<ref>).
With all these considerations in mind, we arrive to the final expression
s_1s_2λ^(1)_nz= 8ξ(2s_2-1)/Γ(s_1)Γ(s_2)_n(1-2s_1, s_1, 1-s_2, 1-s_1| z)
-2Γ(λ^(1)_n)ξ(2n+2)ξ(2s_1+2n+1)ξ(2s_2+2n+1)ξ(2(s_1+s_2+n))/(2λ^(1)_n-1)Γ(s_1)Γ(s_2)ξ(2λ^(1)_n-1)ξ(2λ^(1)_n)λ^(1)_nz
+2Γ(s_1+s_2-1)ξ(2s_1-1)ξ(2s_2-1)/Γ(s_1)Γ(s_2)ξ(2(s_1+s_2)-3)μ(s_1+s_2-1,λ^(1)_n)s_1+s_2-1z
+2Γ(s_1+1-s_2)ξ(2s_1)ξ(2s_2-1)/Γ(s_1)Γ(s_2)ξ(2(s_1+1-s_2))μ(s_1+1-s_2,λ^(1)_n)s_1+1-s_2z ,
where we defined μ(s,λ):=s(s-1)-λ(λ-1).
The second flavour of generalised Eisenstein series
We turn our focus to the second flavour of generalised Eisenstein series. The indices s_1,s_2≥2 are now integers and without loss of generality we assume s_1≥ s_2. We want to use the ladder (<ref>) to reproduce the finite spectrum of eigenvalues
Spec_2(s_1,s_2)={|s_1-s_2|+2, |s_1-s_2|+4, ... , s_1+s_2-2} .
Consequently, we specialise the ladder (<ref>) to the case for which the source term abrsz produces the first representation we found for the product of two Eisenstein series (<ref>), i.e. we specialise our ladder to
(a,b,r,s) = ( 1-2 s_2, s_2, s_1, s_2 ) .
With this choice of parameters the ladder equation (<ref>) reduces to
[Δ-λ^(2)_n(λ^(2)_n-1)]8ξ(2s_1)/Γ(s_1)Γ(s_2)_n(1-2s_2, s_2 , s_1 , s_2 | z) = s_1zs_2z
-2Γ(s_1+s_2)ξ(2s_1)ξ(2s_2)/Γ(s_1)Γ(s_2)ξ(2(s_1+s_2))s_1+s_2z-2Γ(s_1+1-s_2)ξ(2s_1)ξ(2s_2-1)/Γ(s_1)Γ(s_2)ξ(2(s_1+1-s_2))s_1+1-s_2z ,
and the ladder eigenvalues, λ^(2)_n=s_1+s_2-2(n+1), reproduce immediately the desired spectrum.
In this second setup there is no issue with the large-y asymptotic behaviour for the solution _n(1-2s_2,s_2 ,s_1 ,s_2 | z): using the general expression (<ref>) we can confirm that our ladder solution satisfies the desired boundary condition for which the coefficient of the homogeneous solution y^λ^(2)_n vanishes. This means that for the specific parameters (<ref>) the ladder solution (<ref>) must reproduce (modulo single Eisenstein terms) the second flavour of generalised Eisenstein series, ℰ(λ^(2)_n;s_1,s_2| z).
Proceeding as we did before, we use (<ref>) to invert the single Eisenstein source terms in (<ref>) and arrive at
s_1s_2λ^(2)_nz= 8ξ(2s_1)/Γ(s_1)Γ(s_2)_n(1-2s_2, s_2 , s_1, s_2 | z)
+2Γ(s_1+s_2)ξ(2s_1)ξ(2s_2)/Γ(s_1)Γ(s_2)ξ(2(s_1+s_2))μ(s_1+s_2,λ^(2)_n)s_1+s_2z
+2Γ(s_1+1-s_2)ξ(2s_1)ξ(2s_2-1)/Γ(s_1)Γ(s_2)ξ(2(s_1+1-s_2))μ(s_2-s_1,λ^(2)_n)s_1+1-s_2z .
Unlike what happens in the previous case, when the sources have integer indices, s_1,s_2, we notice that the spectrum of eigenvalues is bounded both from above and below.
There is a maximal eigenvalue in the ladder which is given by λ^(2)_0=s_1+s_2-2 and agrees with the maximal eigenvalue obtained in the study of MGFs in the second spectrum (<ref>).
However the minimal eigenvalue in the ladder does not quite reproduce the minimal eigenvalue expected from (<ref>).
As discussed below equation (<ref>), in the case when the parameter r=ñ+1, with ñ∈ℕ, the ladder terminates after ñ steps. In the present case (<ref>), the parameter r= s_1 has precisely this property, hence the ladder terminates after ñ = s_1-1 steps, i.e. we have constructed generalised Eisenstein solutions (<ref>) for n=0,...,s_1-2 and fixed sources.
The minimal eigenvalue we obtain is then λ^(2)_s_1-2=s_2-s_1+2, in general lower than the minimal eigenvalue expected from the spectrum (<ref>). These ladder solutions (<ref>) with eigenvalues lower than the MGFs spectrum (<ref>) correspond precisely to the modular objects discussed in section 7.3 of <cit.> and constructed from certain “overly-integrated seed functions”.
In summary, the ladder of Laplace equations (<ref>) includes in a natural and uniform way the two string theory flavours of generalised Eisenstein series (<ref>)-(<ref>). In both cases (<ref>)-(<ref>), we expressed these generalised Eisenstein series as linear combination of finitely many novel Poincaré series (<ref>). We now discuss some concrete examples for both flavours.
§.§ Examples
In this section we present some concrete, and string theory relevant, examples of our general construction.
We begin with the generalised Eisenstein series 4z, coefficient of the higher derivative correction d^6R^4 in the effective low-energy action of type IIB superstring theory (<ref>). For the given indices, s_1=s_2=, the eigenvalue is λ=λ_0^(1)=s_1+s_2+1 = 4, hence 4z is the function with smallest eigenvalue in the spectrum (<ref>) for these sources.
Since this is a diagonal example where the indices s_1 and s_2 coincide, we need to use the regularisation scheme described in (<ref>).
Substituting the regularised parameters s_1 = +ϵ, s_2= and λ_0^(1) =4+ϵ in the general expression (<ref>) we derive
4z =
lim_ϵ→ 0[ 4/9√(π) Γ(3/2+ϵ) 2(1+ϵ)+ ϵϵz-ζ(3+2ϵ)/9(2+ϵ)ζ(2+2ϵ)1+ϵz]
-32π^6/127 575ζ(7)4z -2π^2/45ζ(3)2z .
As previously stated, each term inside the limit is separately singular at ϵ=0; however, the combination is such that the divergences in 1/ϵ cancel out and produce a finite expression at ϵ=0.
We can substitute this regulated expression into the general formula (<ref>) to recover the well-known asymptotic expansion <cit.> of the d^6R^4 correction
4z∼ -2ζ(3)^2y^3/3π^3-2ζ(3)y/9π-2π/45y-4π^3/25 515y^3 as y≫ 1 .
A second related example is the modular invariant function 7z which arises at order O(N^2) in the large-N expansion of the particular 𝒩=4 SYM integrated correlator discussed in <cit.>. This case falls again into the spectrum (<ref>): the indices are s_1=5/2, s_2=3/2, while the eigenvalue is λ =λ_1^(1)= s_1+s_2+3 =7, hence one step above the lowest eigenvalue in our Laplace tower (<ref>).
If we substitute these specific values for s_1,s_2 and λ_1^(1) in (<ref>) we obtain the Poincaré series representation
7z= -16/15π^2-4--z+16/27π-4--z
-4096π^12/46 414 974 375ζ(13)7z-8π^4/10 935ζ(5)3z-3ζ(5)/2π^42z .
Substituting this expression in the general formula (<ref>) we obtain the asymptotic expansion
7z∼ -2ζ(3)ζ(5)y^4/15π^4-ζ(5)y^2/30π^2-4ζ(3)/2835-2π^2/3645y^2-8π^6/200 930 625y^6 as y≫1 .
Finally, we discuss an example of generalised Eisenstein series belonging to the second spectrum (<ref>).
We consider the function 323z which captures the genuine depth-two part of the two-loop MGF usually denoted by C_3,1,1(z),
C_3,1,1(z) = - 4 323z + 43/35 5z - ζ(5)/60 .
The indices are s_1=3,s_2=2 while the eigenvalue is λ=λ_0^(2)=s_1+s_2-2=3 hence 323z is the function with largest eigenvalue in the second spectrum (<ref>) for these sources.
Substituting the specific values for s_1, s_2 and λ_0^(2) in the general solution (<ref>) we obtain
323z =-π^2/945-3121z+ 11/705z-ζ(3)/422z .
It is interesting to compare the present Poincaré series representation (<ref>) with a different one (finely tuned to represent all two-loop MGFs) considered in <cit.> for which we have
323z = ∑_γ∈ B()\ SL(2,)[ (π y)^5/297 675-(π y)^2 ζ(3)/1890- (π y)^2/1890∑_m=1^∞σ_-3(m) ( q^m + q̅^m)]_γ .
Again thanks to the general expression (<ref>), starting from (<ref>) we can retrieve the known asymptotic expansion
323z∼π^5y^5/297 675-ζ(3)π^2y^2/1890-ζ(5)/360-7ζ(7)/64π^2y^2+ζ(3)ζ(5)/8π^3y^3 as y≫ 1.
Compared to previous results in the literature, one novelty of our Poincaré series (<ref>) is that all the examples here considered, and more broadly all generalised Eisenstein with spectra (<ref>)-(<ref>) can be expressed as linear combinations of finitely many abrsz.
§ SPECTRAL ANALYSIS POINT OF VIEW
The second representation we wish to discuss for the modular objects under consideration relies on spectral theory.
The key idea behind spectral theory is to decompose any modular invariant function as a linear combination of “good” basis elements, i.e. normalisable eigenfunctions of the hyperbolic Laplace operator.
This has been extremely fruitful in the study of two-dimensional conformal field theories <cit.> and integrated correlators in 𝒩=4 SYM <cit.>. In particular, appendix B of <cit.> presents a self-contained spectral analysis discussion for some of the generalised Eisenstein series appearing in our work, while in <cit.> a more general study is presented.
A complete treatment of spectral analysis is beyond the scope of the present work and we refer to <cit.> for a thorough introduction to the subject while presenting here only some of the key details.
We remind the reader that the standard fundamental domain of is defined by
ℱ :=\ℋ
:={ z∈ℋ | |z|> 1 , -1/2 < Re(z) ≤1/2}∪{ z∈ℋ | |z| = 1 , 0≤ Re(z)≤1/2} ,
endowed with the natural hyperbolic metric
s^2 = x^2 + y^2/y^2 ,
where z = x+i y.
Given that any point z in the upper half-plane ℋ is conjugate to a point γ· z ∈ℱ by a suitable γ∈, we have that modular invariant functions f(z)= f(γ· z) with f:ℋ→ℂ can be considered simply as functions defined on ℱ.
We can then define the Hilbert space L^2(ℱ) of square-integrable functions with respect to the Petersson inner product
(f,g) = ∫_ℱ f(z) g(z) μ ,
where the invariant Haar measure is μ = y^-2 x y.
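Although not needed in what follows, these conventions are easy to test numerically; the short mpmath sketch below (names ours) integrates the measure dμ over ℱ and reproduces the volume π/3 quoted in the first bullet point below.

```python
# Hyperbolic volume of the fundamental domain F: integrate dx dy / y^2 over F.
from mpmath import mp, quad, sqrt, pi

mp.dps = 20
# for |x| <= 1/2 the domain is y > sqrt(1 - x^2); the y-integral of 1/y^2 gives 1/sqrt(1-x^2)
vol = quad(lambda x: 1/sqrt(1 - x**2), [-0.5, 0.5])
print(vol, pi/3)   # both evaluate to 1.047197551...
```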
Note that for a function f to be an element of L^2(ℱ), its growth at the cusp y≫1 must be at most |f(z)| = O(y^1/2). In what follows, we will often encounter modular invariant functions f violating such a bound, i.e. non-L^2(ℱ)-normalisable functions.
Although this growth condition seems quite restrictive, and in sharp conflict with the asymptotic expansion (<ref>) previously found, spectral analysis methods can be extended from square-integrable functions to a broader class of functions that have moderate growth at the cusp.
If a function f has cuspidal growth |f(z)| = O(y^α) with Re(α)>1/2, we can find a coefficient β such that the new modular invariant combination f(z) - β αz has a tamer growth at the cusp. More generally, we will be discussing modular invariant functions whose asymptotic expansion at the cusp is controlled by finitely many non-integrable power-like terms y^α_i with Re(α_i)>1/2. Although such functions f(z) are not elements of L^2(ℱ), we can find coefficients β_i for which the linear combination
f_ new(z) = f(z) - ∑_i β_i α_iz∈ L^2(ℱ) ,
is L^2-normalisable.
Modulo the caveat just mentioned, we now consider in more detail the Hilbert space L^2(ℱ) with inner product (<ref>). One of the main benefits of working with a vector space is that we can always express a generic element in terms of a basis. Furthermore, since we are interested in solving differential equations with respect to the hyperbolic Laplacian[Note that in the mathematics literature, the hyperbolic Laplacian considered is usually Δ̃ = -y^2(∂_x^2+∂_y^2) so that the spectrum of its L^2(ℱ) eigenfunctions is non-negative.]Δ = y^2(∂_x^2+∂_y^2), and since this operator is self-adjoint with respect to the inner product (<ref>), it is natural to use the Laplace eigenfunctions as a basis for L^2(ℱ).
The spectrum of the hyperbolic Laplacian decomposes into three distinct eigenspaces (again we refer to <cit.> for details):
* The constant function f(z)=1 is clearly an eigenfunction of Δ with eigenvalue 0, and it is an element of L^2(ℱ), since (1,1) = Vol(ℱ) = π/3 is the volume of the fundamental domain.
* The continuous part of the spectrum is spanned by sz with Re(s) = 1/2 and eigenvalue s(s-1), as given in (<ref>).
* The discrete part of the spectrum is spanned by the Maass cusp forms, ϕ_n(z) with n∈ℕ^>0 .
While the non-holomorphic Eisenstein series sz with Re(s) = 1/2 are simply meromorphic continuations in s of (<ref>), the Maass cusp forms ϕ_n(z) are different beasts altogether.
These are modular invariant eigenfunctions of the Laplacian
Δϕ_n(z) = μ_nϕ_n(z), where μ_n=-( 1/4+t_n^2) , 0<t_1<t_2<... ,
with the spectral parameters t_n, specifying the eigenvalue μ_n, forming an infinite and unbounded set of sporadic positive numbers.
Similarly to (<ref>), they admit a Fourier mode decomposition
ϕ_n(z) = ∑_k≠ 0 a_k^(n) y^1/2K_it_n(2π |k|y)e^2π ikx ,
and the Fourier coefficients a_k^(n) are once more a set of sporadic real numbers.
Given the outer automorphism of order two z→ -z̅, we can divide the Maass forms into even forms, i.e. ϕ_n(z) = ϕ_n(-z̅), and odd forms, i.e. ϕ_n(z) = -ϕ_n(-z̅).
Presently, we are working with the convention that ϕ_n(z) is normalised in the sense of the Petersson inner product (<ref>), i.e. we have (ϕ_n,ϕ_n)=1. However, another common choice is for ϕ_n(z) to be Hecke normalised, i.e. to have a_1^(n)=1. Clearly the two normalisations are just a scalar multiple of one another.
Note that from the Fourier decomposition (<ref>), and as suggested by their name, the Maass cusp forms are indeed cuspidal objects, i.e. they have vanishing Fourier zero-mode and decay exponentially fast as y≫ 1:
ϕ_n(z) ∼ e^-2π y for y≫ 1 .
Since the Maass cusp forms only contribute to the Fourier non-zero mode sector, we will not be discussing their effects in what follows as our focus will be primarily on the Fourier zero-mode sector. The interested reader can find both spectral parameters and Fourier coefficients for various even/odd Maass cusp forms on the L-functions and modular forms database (LMFDB) <cit.>.
Once the basis of eigenfunctions for the Laplacian is understood, we are naturally led to consider the Roelcke-Selberg spectral decomposition:
f(z) = f +∫_(t)=1/2 (f, E_t) tz t/4π i+∑_n=1^∞ (f,ϕ_n)ϕ_n(z) ,
for a generic f∈ L^2(ℱ) (inside the inner product we use the short-hand notation E_t:=tz).
The first term is simply f = ∫_ℱ f(z) dμ, which can be understood as the average of the function over the fundamental domain, or equivalently as the spectral overlap with the constant function f = (f,1). The remaining part of the decomposition can be understood as a “linear” combination of orthonormal basis elements whose coefficients are simply given by the inner product of the function f(z) under consideration and the respective basis element.
We will shortly focus on analysing the Fourier zero-mode of generalised Eisenstein series s_1s_2λz and the functions abrsz, or rather suitable L^2(ℱ) versions thereof, by using spectral analysis. To this end, we notice that if we Fourier decompose f∈ L^2(ℱ) as
f(z) = ∑_k∈ℤf_k(y)e^2π ikx ,
the spectral decomposition (<ref>) immediately provides for a nice contour integral representation for the Fourier zero-mode f_0(y).
Since from (<ref>) we know that the Maass cusp forms have vanishing Fourier zero-mode, we conclude that only the Eisenstein series can contribute.
Furthermore, from the Fourier decomposition (<ref>) for tz, we know that the Fourier zero-mode of the Eisenstein series contains only two power-behaved terms, y^t and y^1-t. We can however combine the reflection property (<ref>), relating tz to 1-tz, with a change of variables t→ 1-t to show that both terms y^t and y^1-t give an equal contribution, arriving at
f_0(y) = f + ∫_(t)=1/2(f, E_t) y^t t/2π i .
This formula may appear rather useless since to extract the Fourier zero-mode f_0(y) it would seem necessary to know already the full modular function f(z) to be able to compute its spectral overlap (f, E_t). However, in the next section we will show that for both generalised Eisenstein s_1s_2λz and abrsz equation (<ref>) becomes extremely useful and the overlap (f, E_t) can be neatly computed using an “unfolding-trick” involving the Poincaré series representations.
Finally, we notice that once the spectral overlap (f, E_t) is known, the integral representation (<ref>) enables us to explore both the “weak-coupling” asymptotic regime y≫ 1 as well as the “strong-coupling” regime y→ 0 by a suitable choice on how we close the t-contour of integration at infinity.
§.§ Back to the Fourier zero-mode
Let us briefly review how one can exploit the differential equation (<ref>) to compute the spectral decomposition of the generalised Eisenstein series and in particular obtain a useful integral representation (<ref>) for its Fourier zero-mode. Since the generalised Eisenstein series, s_1s_2λz, is not an element of L^2(ℱ), one has to be a little careful in defining a proper regularised version for the spectral overlaps when dealing with functions not of rapid decay.
This problem was addressed in a beautiful and classic paper by Don Zagier <cit.> from which we present here a few key details; we also refer to <cit.> and appendix B of <cit.> for more details on the generalised Eisenstein series.
Firstly we want to understand the behaviour at the cusp y≫1 of the generalised Eisenstein series by exploiting its differential equation (<ref>), repeated here for convenience
[Δ -λ(λ-1)]s_1s_2λz = s_1zs_2z.
As usual we perform the Fourier decomposition in x = (z),
s_1s_2λz=∑_k∈ℤe_k(λ; s_1,s_2| y)e^2π ikx ,
and thanks to linearity, we can solve the inhomogeneous Laplace equation mode by mode.
From (<ref>) we easily extract the Fourier zero-mode contribution to the bilinear source term s_1zs_2z, comprised of power-behaved terms and exponentially suppressed terms e^-4π y.
Thus we find a solution to the differential equation for the Fourier zero-mode e_0(s_1,s_2;λ| y):
e_0(λ;s_1,s_2 | y) = 4π^-s_1-s_2ζ(2s_1)ζ(2s_2)/(s_1+s_2-λ)(s_1+s_2+λ-1)y^s_1+s_2
+4π^-s_1ξ(2s_2-1)ζ(2s_1)/(s_1+1-s_2-λ)(s_1-s_2+λ)Γ(s_2)y^s_1+1-s_2 +4π^-s_2ξ(2s_1-1)ζ(2s_2)/(s_2+1-s_1-λ)(s_2-s_1+λ)Γ(s_1)y^s_2+1-s_1
+4ξ(2s_1-1)ξ(2s_2-1)/(s_1+s_2-λ-1)(s_1+s_2+λ-2)Γ(s_1)Γ(s_2)y^2-s_1-s_2 + α(λ;s_1,s_2)y^1-λ +O(e^-4π y) .
The constant α(λ; s_1,s_2) parametrises the homogeneous solution, y^1-λ, and cannot be determined by analysing the differential equation alone.
However, the coefficient α(λ;s_1,s_2) will be promptly fixed by requiring modular invariance for the solution. Furthermore, since we are dealing with a second-order differential equation, we must have two linearly independent homogeneous solutions, which in the Fourier zero-mode sector are y^1-λ and y^λ. It is conventional to choose a vanishing coefficient for the second homogeneous solution, y^λ. Once the modular invariant solution, s_1s_2λz, subject to this boundary condition has been found, we can always consider s_1s_2λz + a λz, with a≠0, which is a different modular invariant solution to the same Laplace system, but this time with a non-vanishing coefficient for y^λ.
As anticipated, from the Fourier zero-mode analysis (<ref>) we immediately deduce that the generalised Eisenstein series is not an element of L^2(ℱ). To simplify the discussion, we can assume that the eigenvalue λ is such that (λ)>1/2, a condition that is satisfied by both spectra (<ref>) and (<ref>).
With this assumption, from (<ref>) we have full control over all power-behaved terms that might grow faster than y^1/2 at the cusp, and subsequently we can subtract suitable Eisenstein series in order to cancel all non-integrable terms thus obtaining a modular invariant and square-integrable function.
We are then led to consider the “regularised” linear combination
ℰ̃(λ;s_1,s_2| z) = s_1s_2λz-∑_Iβ_I Iz,
where I∈{s_1+s_2,s_1+1-s_2,s_2+1-s_1,2-s_1-s_2} and the β_I are chosen such that the term of order y^I in ℰ̃(λ;s_1,s_2| z) has a vanishing coefficient if Re(I)>1/2, and β_I=0 otherwise. By construction, we clearly have ℰ̃(λ;s_1,s_2)∈ L^2(ℱ), hence its Fourier zero-mode can be given in terms of the contour integral representation (<ref>).
Now that we have modified the generalised Eisenstein series to obtain a nice and square-integrable function, ℰ̃(λ;s_1,s_2| z), we can combine the spectral methods described in the previous section with the Laplace equation (<ref>).
It is fairly easy to see from our definition (<ref>) that the inhomogeneous Laplacian equation is modified to
[Δ-λ(λ-1)]ℰ̃(λ;s_1,s_2| z) = s_1zs_2z+∑_I [λ(λ-1)-I(I-1)]β_IIz .
Since both sides of this equation are in L^2(ℱ), we can now take the Petersson inner product against the constant function, the continuous part and the discrete part of the spectrum on both sides of (<ref>) to obtain the spectral overlaps previously discussed.
A slight complication arises from the fact that, although both sides of (<ref>) are square-integrable, the source term is made of non-square integrable objects, hence a suitable regularisation is required to discuss the Petersson inner product for functions not of rapid decay.
To this end, we follow <cit.> and introduce a specific regularisation for the divergent integral
ℐ(s) = ∫_0^∞ y^s y = ∫_0^1 y^s y+∫_1^∞ y^s y = ℐ_1(s)+ℐ_2(s) .
Clearly the starting integral does not converge for any s∈ℂ, but the two parts it splits into do converge on disjoint regions. Namely, for Re(s)>-1 the integral ℐ_1(s) is well-defined and we have ℐ_1(s)=1/(s+1), while similarly for Re(s)<-1 the second integral is well-defined and we have ℐ_2(s)=-1/(s+1). Since both integrals admit an analytic continuation in s ∈ℂ∖{-1}, we may define ℐ(s)=ℐ_1(s)+ℐ_2(s) = 0.
As a direct application of this formula, we compute the average E_r = ( E_r,1), i.e. the spectral overlap of an Eisenstein series with the constant function, as well as the spectral overlap ( E_r, E_t) for r≠ t:
E_r =∫_ℱ[∑_γ∈(γ· z)^r] μ= ∫_ B(ℤ) \ℋ y^r x y/y^2=∫_0^∞ y^r-2 y=0 ,
( E_r, E_t) = ∫_ℱrz[∑_γ∈(γ· z)^t̅ ]μ =∫_ B(ℤ) \ℋrz y^t̅ x y/y^2
= ∫_0^∞(y^t̅+r-2+ξ(2r-1)π^r/Γ(r)ζ(2r)y^t̅-r-1) y=0
In both calculations we make crucial use of what is usually called the “unfolding trick”, namely we write part of the integrand as a Poincaré series and then use this sum over images under B(ℤ) \ to unfold the starting domain of integration ℱ=\ℋ onto the strip
B(ℤ) \ℋ = { z∈ℋ: |x| ≤1/2 , y>0} ,
after which we can easily integrate over x and subsequently over y.
Note that all of the above integrals are ill-defined and need to be regularised in the same way as the original integral ℐ(s). We will shortly see more interesting examples where the unfolding procedure produces convergent integrals, which can nevertheless be treated via the same type of analytic continuation.
In particular, we can use the differential equation (<ref>) to show the vanishing of the spectral overlap of ℰ̃(s_1,s_2;λ| z) with the constant function,
ℰ̃(λ;s_1,s_2)=∫_ℱℰ̃(λ;s_1,s_2| z) μ
=1/λ(λ-1)∫_ℱ{Δℰ̃(λ;s_1,s_2| z)-s_1zs_2z+∑_I[I(I-1)-λ(λ-1)]β_IIz}μ=0 .
The first term vanishes since it is an integral of a total derivative over a closed surface, while the second and third term vanish due to the previously derived identities (<ref>)-(<ref>).
As a result, to derive a useful expression for the Fourier zero-mode integral representation (<ref>) of ℰ̃(λ;s_1,s_2| z), we only need to consider the spectral overlap with the Eisenstein series tz with Re(t)=1/2:
(ℰ̃(λ;s_1,s_2), E_t) = ∫_ℱℰ̃(λ;s_1,s_2| z) 1-tzμ= ∫_ℱℰ̃(λ;s_1,s_2| z) Δ1-tz/t(t-1)μ
= ∫_ℱ{s_1zs_2z +λ(λ-1)ℰ̃(s_1,s_2;λ| z) + ∑_I [λ(λ-1)-I(I-1)]β_I Iz}1-tz/t(t-1)μ ,
where in the Petersson inner product we used the fact that the complex conjugate of tz equals t̅z=1-tz on the critical line Re(t)=1/2, for which t̅ = 1-t.
In the first line of (<ref>) we used the differential equation satisfied by the Eisenstein series (<ref>), while in the second line we integrated by parts and then used the inhomogeneous Laplace equation (<ref>). Since we have already shown that the integral over the fundamental domain ℱ of a product of two Eisenstein series vanishes (<ref>), the overlap (ℰ̃(λ;s_1,s_2), E_t) can be expressed simply as an integral of a triple product of Eisenstein series.
Once again this integral can be evaluated <cit.> via the unfolding trick by rewriting one of the Eisenstein series as a Poincaré series and then using the sum over images to unfold the fundamental domain ℱ onto the strip B(ℤ)\ℋ:
(ℰ̃(λ;s_1,s_2), E_t) = 1/(t-λ)(t+λ-1)∫_ℱs_1zs_2z1-tzμ
=4ξ(t+s_1+s_2-1)ξ(t+s_1-s_2)ξ(t+s_2-s_1)ξ(t+1-s_1-s_2)/(t-λ)(t+λ-1)Γ(s_1)Γ(s_2)ξ(2t-1).
We can then write the spectral decomposition (<ref>) for the generalised Eisenstein series
s_1s_2λz= ∫_(t)=1/24ξ(t+s_1+s_2-1)ξ(t+s_1-s_2)ξ(t+s_2-s_1)ξ(t+1-s_1-s_2)/(t-λ)(t+λ-1)Γ(s_1)Γ(s_2)ξ(2t-1) tz t/4π i
+∑_Iβ_I Iz + ∑_n=1^∞ (ℰ̃(λ;s_1,s_2),ϕ_n)ϕ_n(z),
where the spectral overlap with the Maass cusp forms can be made more explicit, but it is of little concrete use given the poor analytic control over these objects.
We are now in a position to specialise the integral representation (<ref>) to the case of ℰ(s_1,s_2;λ), thus arriving at the useful expression for its Fourier zero-mode
e_0(λ;s_1,s_2 | y) = ∑_I β_I[2ζ(2I)/π^Iy^I+2ξ(2I-1)/Γ(I)y^1-I]
+∫_(t)=1/24ξ(t+s_1+s_2-1)ξ(t+s_1-s_2)ξ(t+s_2-s_1)ξ(t+1-s_1-s_2)/(t-λ)(t+λ-1)Γ(s_1)Γ(s_2)ξ(2t-1) y^t t/2π i ,
where again I∈{s_1+s_2,s_1+1-s_2,s_2+1-s_1,2-s_1-s_2} and β_I was defined in (<ref>).
The integrand of (<ref>) is a meromorphic function of t for which it is rather easy to understand the structure of singularities. Firstly, we note that the completed Riemann function ξ(s) = π^-s/2Γ(s/2)ζ(s) is meromorphic with simple poles at s=0 and s=1, while it vanishes only at the non-trivial zeros of the Riemann zeta function, which, from the conjectural Riemann hypothesis, are of the form s=1/2+iρ_n with ρ_n real. We then deduce that the integrand of (<ref>) has poles located at:
* t= 1-I,I with I∈{s_1+s_2,s_1+1-s_2,s_2+1-s_1,2-s_1-s_2}, for which one of the completed Riemann zeta functions in the numerator has argument equal to 0 or 1 respectively;
* t=λ, 1-λ, coming from the two rational terms [(t-λ)(t+λ-1)]^-1;
* t=3/4+iρ_n/2, coming from the non-trivial zeroes of ξ(2t-1) present in the denominator.
We can now use (<ref>) to distinguish between the different contributions arising in the asymptotic expansions of the Fourier zero-mode e_0(λ;s_1,s_2| y) as y≫ 1 or as y→0.
Focusing for the present time on the asymptotic expansion at the cusp y≫ 1, we see that the integral contour in (<ref>) can be closed in the left half-plane Re(t)<0. In doing so, we pick up the residues for the poles located at Re(t)<1/2, which are:
(i) t=I for I∈{s_1+s_2,s_1+1-s_2,s_2+1-s_1,2-s_1-s_2} with Re(I)<1/2;
(ii) t=1-I for I∈{s_1+s_2,s_1+1-s_2,s_2+1-s_1,2-s_1-s_2} with Re(I)>1/2;
(iii) t = 1-λ under the original assumption Re(λ)>1/2.
The end result can be made more concrete by considering the case relevant for our spectra (<ref>)-(<ref>), where s_1,s_2≥3/2 and without loss of generality s_1≥ s_2. Under these conditions and considering the non-diagonal case where s_1-s_2≥ 1, we simply collect the residues from the poles at t∈{s_2+1-s_1,s_2-s_1,2-s_1-s_2,1-s_1-s_2} and t=1-λ.
Note that, for this range of parameters, the square-integrable function ℰ̃(λ;s_1,s_1) in (<ref>) is obtained by removing suitable multiples of the Eisenstein series Iz with I∈{s_1+s_2,s_1+1-s_2}. From the Fourier zero-mode (<ref>), we see that this subtraction indeed removes the non-square integrable powers y^s_1+s_2 and y^s_1+1-s_2. However, since at the cusp Iz∼# y^I +# y^1-I, we also introduce “unwanted” reflected powers y^1-s_1-s_2 and y^1-(s_1+1-s_2)=y^s_2-s_1. These unwanted terms are exactly cancelled by the residues coming from the above-mentioned poles located at t∈{s_2-s_1,1-s_1-s_2}. The remaining poles at t∈{s_2+1-s_1,2-s_1-s_2} produce the remaining powers for the particular solution (<ref>), while the pole at t=1-λ produces the homogeneous solution term.
The diagonal case, s_1=s_2, requires some extra care since to define the square-integrable function ℰ̃(λ;s_1,s_1) in (<ref>) we need to subtract a regularised version for the divergent Eisenstein series 1z, see e.g. appendix B of <cit.>. At the same time, we see that the spectral overlap (<ref>) develops a double pole at t=0 and t=1 precisely for s_1=s_2. To avoid these complications, we can obtain the diagonal case as the off-diagonal limit s_2=s_1-ϵ with ϵ→0.
We can directly use (<ref>) to determine the previously unknown coefficient α(λ;s_1,s_2) multiplying the homogeneous solution y^1-λ.
This coefficient was first computed in <cit.> with a similar method, and can now be calculated by simply picking up the pole of (<ref>) at t=1-λ, giving us
α(λ;s_1,s_2)=-4ξ(s_1+s_2-λ)ξ(s_1-s_2+λ)ξ(s_2-s_1+λ)ξ(s_1+s_2+λ-1)/(2λ-1)Γ(s_1)Γ(s_2)ξ(2λ) .
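As a small numerical cross-check (with our own variable names), one can verify that the Fourier zero-mode (<ref>) together with the coefficient α just obtained reproduces, for λ=4 and s_1=s_2=3/2, the d^6R^4 Laurent polynomial quoted in the example of the previous section.

```python
# The general zero-mode coefficients and alpha(lambda; s1, s2) at lambda = 4, s1 = s2 = 3/2
# versus the known d^6 R^4 Laurent polynomial -2 zeta(3)^2 y^3/(3 pi^3) - 2 zeta(3) y/(9 pi)
# - 2 pi/(45 y) - 4 pi^3/(25515 y^3).
from mpmath import mp, mpf, gamma, zeta, pi

mp.dps = 25
def xi(w):
    return pi**(-w/2) * gamma(w/2) * zeta(w)

s1 = s2 = mpf(3)/2
lam = mpf(4)

coeff_y3 = 4*pi**(-s1-s2)*zeta(2*s1)*zeta(2*s2)/((s1+s2-lam)*(s1+s2+lam-1))
coeff_y1 = 4*pi**(-s1)*xi(2*s2-1)*zeta(2*s1)/((s1+1-s2-lam)*(s1-s2+lam)*gamma(s2)) \
         + 4*pi**(-s2)*xi(2*s1-1)*zeta(2*s2)/((s2+1-s1-lam)*(s2-s1+lam)*gamma(s1))
coeff_ym1 = 4*xi(2*s1-1)*xi(2*s2-1)/((s1+s2-lam-1)*(s1+s2+lam-2)*gamma(s1)*gamma(s2))
alpha = -4*xi(s1+s2-lam)*xi(s1-s2+lam)*xi(s2-s1+lam)*xi(s1+s2+lam-1) \
        / ((2*lam-1)*gamma(s1)*gamma(s2)*xi(2*lam))

print(coeff_y3,  -2*zeta(3)**2/(3*pi**3))   # y^3 term
print(coeff_y1,  -2*zeta(3)/(9*pi))         # y^1 term
print(coeff_ym1, -2*pi/45)                  # y^-1 term
print(alpha,     -4*pi**3/25515)            # y^-3 term (homogeneous solution)
```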
In the next section, we discuss the asymptotic expansion of (<ref>) as y→0 where the contour of integration has to be closed instead in the right half-plane (t)>0. This will select the “complementary” poles to the ones just discussed, and a new infinite family of poles coming from the non-trivial zeros of the Riemann zeta function will also play an essential rôle.
We conclude this section by analysing the spectral decomposition for the novel functions abrsz.
Firstly, from the previously determined asymptotic expansion at the cusp (<ref>), we see that all abrsz are directly square integrable functions in the region of parameters a,b,r,s where the Poincaré series converges (<ref>), i.e. we have immediately abrs∈ L^2(ℱ) when (<ref>) is satisfied.
As a consequence, we can directly compute the spectral overlaps without any need for subtracting Eisenstein series.
We start by observing that the spectral overlap with the constant function vanishes
(abrs, 1) = ∫_ℱabrsz dμ = 0 ,
since we can use the Poincaré series representation (<ref>) for abrsz to unfold the integral from the fundamental domain ℱ to the strip B(ℤ)\ℋ, and we conclude that the integral over x vanishes since the seed function abrsz does not have a Fourier zero-mode.
We proceed by computing the spectral overlap with the Eisenstein series. A calculation very similar to (<ref>) yields
(abrs, E_t) = ∫_ℱabrsz E_1-t(z) dμ
= ∫_ B(ℤ) \ℋabrs z E_1-t(z) dx dy/y^2
= Γ(r+1-s-t/2)Γ(r+s-t/2)Γ(t+r-s/2)Γ(t+r+s-1/2)/2π^r Γ(r)ξ(2-2t)
×ζ(r+1-b-t)ζ(r+1-a-b-t)
ζ(t+r-b)ζ(t+r-a-b)/ζ(2r+1-a-2b) .
The keen-eyed reader will notice that if we now plug the spectral overlap just derived into the integral representation formula for the Fourier zero-mode (<ref>), we obtain exactly the same expression (<ref>) previously derived from the Poincaré series representation. This is a significantly simpler derivation of (<ref>) when compared to the Mellin-Barnes discussion presented in appendix <ref>. However, we need to stress that without having already obtained the result (<ref>), we could not have immediately inferred that the functions abrsz are in L^2(ℱ).
§.§ Non-perturbative terms and small-y behaviour
So far our analysis of the Fourier zero-mode (<ref>) has only concerned the power-behaved terms at the cusp y≫ 1.
In this section we show how the exponentially suppressed corrections e^-4π y are encoded in (<ref>) and clarify how the resurgent analysis carried out in <cit.> nicely connects with the present discussion.
In the limit y→ 0, the non-perturbative terms stop being exponentially suppressed and produce instead an infinite sum of perturbative corrections related to the non-trivial zeros of the Riemann zeta function.
As discussed in the previous section, we can easily evaluate the perturbative expansion of the Fourier zero-mode integral representation (<ref>) as y≫ 1 by closing the contour of integration in the left half-plane ℜ(t)<0. Picking up the various residues reproduces all power-behaved terms present in (<ref>); however, the integral does not vanish when we push the contour of integration to infinity, but rather produces the remaining exponentially suppressed corrections in the Fourier zero-mode sector.
We follow this procedure and push the contour of integration into the left half-plane ℜ(t)<0, while collecting the residues, to arrive at
e_0(λ;s_1,s_2| y) = 4π^-s_1-s_2ζ(2s_1)ζ(2s_2)/(s_1+s_2-λ)(s_1+s_2+λ-1)y^s_1+s_2
+4π^-s_1ξ(2s_2-1)ζ(2s_1)/(s_1+1-s_2-λ)(s_1-s_2+λ)Γ(s_2)y^s_1+1-s_2 +4π^-s_2ξ(2s_1-1)ζ(2s_2)/(s_2+1-s_1-λ)(s_2-s_1+λ)Γ(s_1)y^s_2+1-s_1
+4ξ(2s_1-1)ξ(2s_2-1)/(s_1+s_2-λ-1)(s_1+s_2+λ-2)Γ(s_1)Γ(s_2)y^2-s_1-s_2 + α(λ;s_1,s_2)y^1-λ
+∫_(t)=γ̃4ξ(t+s_1+s_2-1)ξ(t+s_1-s_2)ξ(t+s_2-s_1)ξ(t+1-s_1-s_2)/(t-λ)(t+λ-1)Γ(s_1)Γ(s_2)ξ(2t-1) y^t t/2π i ,
where γ̃<min{ℜ(I),ℜ(1-I),ℜ(λ),ℜ(1-λ)}. The integrand in (<ref>) is manifestly analytic for t in the half-plane ℜ(t)≤γ̃ and we claim that the corresponding integral is exponentially suppressed at the cusp y≫ 1, thus containing all of the non-perturbative, e^-4 π y, terms.
For aesthetic reasons we perform the change of variables t→ 1-t and use the reflection identity ξ(s)=ξ(1-s) to rewrite the above integral as
NP^(λ)_s_1,s_2(y):=∫_(t)=γ4ξ(t+s_1+s_2-1)ξ(t+s_1-s_2)ξ(t+s_2-s_1)ξ(t+1-s_1-s_2)/(t-λ)(t+λ-1)Γ(s_1)Γ(s_2)ξ(2t) y^1-t t/2π i ,
where γ>max{ℜ(I),ℜ(1-I),ℜ(λ),ℜ(1-λ)} is arbitrary as long as it lies to the right of all the pole locations. We can expand all the completed Riemann functions as ξ(s) = π^-s/2Γ(s/2) ζ(s) and use the Ramanujan identity (<ref>) in reverse to rewrite the particular combination of Riemann zeta functions appearing in (<ref>) as a Dirichlet series for the product of two divisor functions, arriving at
NP^(λ)_s_1,s_2(y) = ∑_n=1^∞4σ_1-2s_1(n)σ_1-2s_2(n)n^s_1+s_2-1y/Γ(s_1)Γ(s_2)
×∫_ℜ(t)=γΓ(t+s_1+s_2-1/2)Γ(t+s_1-s_2/2)Γ(t+s_2-s_1/2)Γ(t+1-s_1-s_2/2)/(t-λ)(t+λ-1)Γ(t) (π ny)^-t dt/2π i .
We have not managed to evaluate (<ref>) in closed form; however, its asymptotic expansion as y≫ 1 can be obtained via a saddle-point approximation.
Firstly we can use the Stirling approximation for the gamma functions to confirm that the integrand has a stationary point at t=4π ny.
Hence a simple steepest descent calculation produces the required asymptotic expansion,
NP^(λ)_s_1,s_2(y) = ∑_n=1^∞σ_1-2s_1(n)σ_1-2s_2(n)n^s_1+s_2-2/Γ(s_1) Γ(s_2) e^-4π n yϕ^(λ)_s_1,s_2(4π n y) ,
where the first few perturbative corrections are given by
ϕ^(λ)_s_1,s_2(y)=
8/y^2+ 8[s_1(s_1-1)+s_2(s_2-1)-4]/y^3+ 4 { [s_1(s_1-1)+s_2(s_2-1)-7]^2+2λ(λ-1)-13}/y^4+O(y^-5) .
A few comments are in order.
Firstly, although for general values of the parameters s_1,s_2 and λ the function ϕ^(λ)_s_1,s_2(y) contains infinitely many perturbative terms, we find that for some special cases, this series has only a finite number of terms. For example, if we fix s_1=3,s_2=2 and λ=2, corresponding to the modular invariant function (<ref>) belonging to the spectrum (<ref>), the non-perturbative sector (<ref>) simplifies to
NP^(3)_3,2(y) = ∑_n=1^∞ 16π y σ_-3(n)σ_-5(n)n^4∫_(t)=5t Γ(t-4) (4π ny)^-t t/2π i
=∑_n=1^∞σ_-3(n)σ_-5(n)n^3/2e^-4π ny[ 8/(4π ny)^2+32/(4π ny)^3] .
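The inner Mellin-Barnes integral in the first line can be checked numerically term by term. The following sketch (mpmath; the variable names and sample values of n and y are ours, and the common divisor-sum factors are dropped since they multiply both lines) integrates along the vertical line ℜ(t)=5 and compares with the bracketed closed form in the second line:

```python
# Sketch: for a single n, compare the Mellin-Barnes integral of the first line with
# the Bessel-free closed form of the second line.  The factors sigma_{-3}(n)sigma_{-5}(n)
# are common to both sides and omitted; the values of n and y are illustrative only.
import mpmath as mp
mp.mp.dps = 25

n, y = 2, mp.mpf('0.7')
x = 4*mp.pi*n*y

def integrand(tau):
    t = 5 + 1j*tau                      # contour Re(t) = 5; dt = i d(tau)
    return t * mp.gamma(t - 4) * x**(-t)

# dt/(2*pi*i) = d(tau)/(2*pi); the integrand decays like e^{-pi|tau|/2}
lhs = 16*mp.pi*y*n**4 * mp.re(mp.quad(integrand, [-60, -20, -5, 0, 5, 20, 60])) / (2*mp.pi)
rhs = (mp.mpf(n)**3/2) * mp.exp(-x) * (8/x**2 + 32/x**3)
print(lhs, rhs)                         # the two agree to working precision
```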
Although we have not proven that for generic values of s_1,s_2 and λ the perturbative series ϕ^(λ)_s_1,s_2(y) contains infinitely many terms, it is easy to see that for the case associated with the spectrum (<ref>) (corresponding to depth-two modular graph functions) the series ϕ^(λ)_s_1,s_2(y) is always a polynomial. This is expected from the Laplace equation (<ref>): for the spectrum (<ref>) the Eisenstein series appearing in the source term have integer index, hence the corresponding Bessel functions, which appear in the Fourier decomposition (<ref>) and are responsible for the non-perturbative terms, have half-integer index and therefore produce only finitely many perturbative terms in the non-perturbative sector.
A second comment we want to stress is that our expression (<ref>) can be shown to be the exact solution to the Laplace equation (<ref>) for the non-perturbative part of the Fourier zero-mode sector. Given the Laplace equation (<ref>) and the Fourier decomposition (<ref>) we must have
[y^2 ∂_y^2 -λ(λ-1)] NP^(λ)_s_1,s_2(y) =∑_n=1^∞32 n^s_1+s_2-1σ_1-2s_1(n)σ_1-2s_2(n) /Γ(s_1)Γ(s_2) y K_s_1-1/2(2π n y)K_s_2-1/2(2π n y) .
If we rewrite the source term using the Mellin-Barnes type integral representation for the product of two Bessel functions,
y K_s_1-1/2( 2y)K_s_2-1/2( 2y) =∫_(t) = γΓ(t+s_1+s_2-1/2)Γ(t+s_1-s_2/2)Γ(t+s_2-s_1/2)Γ(t+1-s_1-s_2/2)/Γ(t) y^1-t t /16 π i ,
where γ>max{ℜ(I),ℜ(1-I)}, and then simply solve the differential equation for NP^(λ)_s_1,s_2(y) by inverting the differential operator as
1/[y^2 ∂_y^2 -λ(λ-1)] y^1-t = y^1-t/(t-λ)(t+λ-1) ,
we find the exact integral representation (<ref>).
Furthermore, we note that the formula for the perturbative series expansion (<ref>) in the non-perturbative sector reproduces exactly the results obtained in <cit.> for the modular invariant functions associated with the spectrum (<ref>).
We stress that in <cit.>, the authors started from the seed functions (<ref>) and obtained the non-perturbative sector for the generalised Eisenstein series with spectrum (<ref>) from a careful resummation of an evanescent, yet factorially divergent formal perturbative expansion in an example of Cheshire cat resurgence, very similar to our discussion below (<ref>).
We now see that the results obtained in <cit.> are actually more general than originally stated in that reference, and in particular (<ref>) appears to be valid for all values of s_1,s_2 and λ and not just for the spectrum (<ref>).
Finally, as discussed in <cit.>, it is easy to see that while for y≫1 the Fourier zero-mode contribution (<ref>) is non-perturbative and exponentially suppressed, its nature changes dramatically when y→ 0.
Rather than splitting the complete Fourier zero-mode e_0(λ;s_1,s_2| y) into perturbative plus non-perturbative terms as in (<ref>), we can analyse directly the integral representation (<ref>) in the limit y→ 0.
As previously anticipated just below (<ref>), in the limit y→ 0 we can close the contour of integration to the right half-plane ℜ(t)>0 and collect the residues from the various “complementary” poles plus the infinite set of completely novel poles located at t=3/4+iρ_n/2 and coming from the non-trivial zeroes of ξ(2t-1) present in the denominator of the integrand in (<ref>). Once again, after we have pushed the contour of integration past all the poles, the remaining integral captures all exponentially suppressed contributions[We are extremely grateful to Nathan Benjamin and Cyuan-Han Chang for pointing out that such non-perturbative corrections have to be present and for bringing <cit.> to our attention where very similar results were obtained in the spectral decomposition for the partition function of certain 2-d conformal field theories.] now of the form e^-4π/y. The asymptotic expansion of (<ref>) as y→ 0 is simply given by the sum over the residues of all the poles located at ℜ(t)>1/2 plus a remaining contour integral,
e_0(λ;s_1,s_2| y) = 4ξ(2s_1-1)ξ(2s_2-1)ξ(2s_1+2s_2-2)/(s_1+s_2-λ-1)(s_1+s_2+λ-2)Γ(s_1)Γ(s_2)ξ(2s_1+2s_2-3)y^s_1+s_2-1
+4ξ(1-2s_1)ξ(2s_2-1)ξ(2s_2-2s_1)/(s_1+1-s_2-λ)(s_1-s_2+λ)Γ(s_1)Γ(s_2)ξ(2s_2-2s_1-1)y^s_2-s_1
+4ξ(1-2s_2)ξ(2s_1-1)ξ(2s_1-2s_2)/(s_2+1-s_1-λ)(s_2-s_1+λ)Γ(s_1)Γ(s_2)ξ(2s_1-2s_2-1)y^s_1-s_2
+4ξ(1-2s_1)ξ(1-2s_2)ξ(2-2s_1-2s_2)/(s_1+s_2-λ)(s_1+s_2+λ-1)Γ(s_1)Γ(s_2)ξ(1-2s_1-2s_2)y^1-s_1-s_2
-α(1-λ;s_1,s_2)y^λ
+∑_ρ_n2ξ(t+s_1+s_2-1)ξ(t+s_1-s_2)ξ(t+s_2-s_1)ξ(t+1-s_1-s_2)/(1-t-λ)(t-λ)Γ(s_1)Γ(s_2)π^1/2-tΓ(t-1/2)ζ'(2t-1) y^t|_t=3/4+iρ_n/2
+ NP^(λ)_s_1,s_2(y) .
where the coefficient α(λ;s_1,s_2) is given in (<ref>).
Similar to our large-y discussion, at small-y the non-perturbative terms, NP^(λ)_s_1,s_2(y), come from having pushed the contour of integration past all the poles on the right t-half-plane,
NP^(λ)_s_1,s_2(y):=∫_(t)=γ4ξ(t+s_1+s_2-1)ξ(t+s_1-s_2)ξ(t+s_2-s_1)ξ(t+1-s_1-s_2)/(t-λ)(t+λ-1)Γ(s_1)Γ(s_2)ξ(2t-1) y^t t/2π i ,
where γ>max{ℜ(I),ℜ(1-I),ℜ(λ),ℜ(1-λ)}. We proceed as before and expand all the completed Riemann functions as ξ(s) = π^-s/2Γ(s/2) ζ(s); however, we notice that this time the ratio of Riemann zeta functions we obtain,
ζ(t+s_1+s_2-1)ζ(t+s_1-s_2)ζ(t+s_2-s_1)ζ(t+1-s_1-s_2) /ζ(2t-1) ,
cannot be written immediately as a Dirichlet series using Ramanujan identity (<ref>).
However, the present discussion is very similar to the spectral decomposition analysis considered in <cit.> for the study of certain partition functions in 2d CFTs. Building on <cit.>, we can combine (<ref>) with
ζ(2t)/ζ(2t-1) = ∑_n=1^∞φ^-1(n)/n^2t ,
where φ^-1(n) denotes the Dirichlet inverse[The Dirichlet inverse, f^-1, of an arithmetic function, f, is defined such that the Dirichlet convolution of f with its inverse produces the multiplicative identity, i.e. ∑_d|n f(d) f^-1(n/d) = δ_n,1. The Dirichlet series L(f;s) := ∑_n=1^∞f(n)/n^s has the property that L(f^-1;s) = (L(f;s))^-1. The Dirichlet inverse, φ^-1, of the Euler totient function, φ, is given by φ^-1(n) = ∑_d|n d μ(d) where μ is the Möbius function.] of the Euler totient function, φ(n), so that (<ref>) can be rewritten in terms of a double Dirichlet series and an easier contour integral,
NP^(λ)_s_1,s_2(y)= ∑_m=1^∞∑_n=1^∞4σ_1-2s_1(m)σ_1-2s_2(m) m^s_1+s_2-1φ^-1(n)/√(π)Γ(s_1)Γ(s_2)
×∫_(t)=γΓ(t+s_1+s_2-1/2)Γ(t+s_1-s_2/2)Γ(t+s_2-s_1/2)Γ(t+1-s_1-s_2/2)/(t-λ)(t+λ-1)Γ(t -1/2)(y/π m n^2)^t t/2π i .
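To make the Dirichlet-inverse coefficients concrete, the following sketch tabulates φ^-1(n)=∑_d|n d μ(d) with elementary helpers written from scratch, and checks the Dirichlet series ζ(2t)/ζ(2t-1)=∑_n φ^-1(n)/n^2t by brute-force truncation (the value of t and the cutoff are illustrative):

```python
# Sketch: Dirichlet inverse of the Euler totient, phi^{-1}(n) = sum_{d|n} d*mu(d),
# and a brute-force check of zeta(2t)/zeta(2t-1) = sum_n phi^{-1}(n)/n^{2t}.
import mpmath as mp

def factorize(n):
    """Prime factorisation as {p: exponent} by trial division (fine for small n)."""
    fac, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            fac[p] = fac.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        fac[n] = fac.get(n, 0) + 1
    return fac

def phi_inverse(n):
    """phi^{-1} is multiplicative; only d = 1 and d = p survive mu, giving prod_p (1 - p)."""
    val = 1
    for p in factorize(n):
        val *= (1 - p)
    return val

t, N = mp.mpf(2), 5000                 # any t with Re(2t-1) > 1; N is the truncation
series = sum(phi_inverse(n) / mp.mpf(n)**(2*t) for n in range(1, N + 1))
exact  = mp.zeta(2*t) / mp.zeta(2*t - 1)
print(series, exact)                   # agree up to the ~N^{-2} truncation error
```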
We can now evaluate the asymptotic expansion as y→0 of (<ref>) via saddle point approximation.
The integrand has a stationary point at t= 4π m n^2/y and a simple steepest descent calculation yields the required asymptotic expansion,
NP^(λ)_s_1,s_2(y)= ∑_m=1^∞∑_n=1^∞σ_1-2s_1(m)σ_1-2s_2(m) m^s_1+s_2-3/2φ^-1(n)√(4 y)/Γ(s_1)Γ(s_2) n e^-4 π m n^2/yϕ^(λ)_s_1,s_2(y/4π m n^2) ,
where the first few perturbative corrections are given by
ϕ^(λ)_s_1,s_2(y)=
8 y^2+ 8[s_1(s_1-1)+s_2(s_2-1)-11/4]y^3+ 4 {[s_1(s_1-1)+s_2(s_2-1)-42/8]^2+2λ(λ-1)-31/4} y^4+O(y^5) .
We note the striking similarity between the small-y exponentially suppressed terms (<ref>)-(<ref>) and the parallel large-y expressions (<ref>)-(<ref>). Equation (<ref>) is directly analogous to the crossing equation (3.22) derived in <cit.>.
Going back to the perturbative terms in (<ref>), we see that the infinite series over ρ_n comes precisely from having collected the residues from the poles[The contribution from these poles was inadvertently missed in <cit.>.] of 1/ξ(2t-1) in (<ref>).
Under the assumption that the Riemann hypothesis is correct, these poles are associated with all non-trivial zeros of the Riemann zeta function ζ(s), located at s=1/2+iρ_n with ρ_n∈ℝ. Hence in the small-y limit, the last line in equation (<ref>) behaves as the power y^3/4 modulated by oscillatory terms in y with frequencies determined by the ρ_n. A similar behaviour was already observed in <cit.> for the modular-invariant function f(z) = y^12 |Δ(z)|^2, with Δ(z) the Ramanujan discriminant cusp form.
Similarly, in a different string theory context <cit.> and in a two-dimensional CFT context <cit.>, the non-trivial zeros of the Riemann zeta function appear in the asymptotic expansion of the spectral decomposition of various physical quantities.
In figure <ref>, we present numerical evidence for the small-y expansion (<ref>) of the d^6 R^4 case, e_0(4;3/2,3/2| y). We have numerically evaluated to high precision the integral representation (<ref>) for e_0(4;3/2,3/2| y) at small y and subtracted from it all the terms in (<ref>) except the Riemann zeta contributions. In figure <ref> we first plot this quantity and then subtract from it the predicted series of contributions from the first 10 non-trivial zeros (<ref>) of the Riemann zeta function, plotting this difference. As the second plot shows, our formula (<ref>) is consistent with the numerical data to within an error of 10^-19 over the whole range of y considered.
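The individual contributions of the non-trivial zeros are straightforward to tabulate, since mpmath provides both the zeros themselves and the derivative ζ' entering the denominator. The sketch below evaluates the summand of the sum over ρ_n in the small-y expansion above for the d^6 R^4 case λ=4, s_1=s_2=3/2; it takes the displayed expression literally and assumes that conjugate zeros are accounted for by keeping twice the real part of the terms with ρ_n>0:

```python
# Sketch: contribution of the first 10 non-trivial Riemann zeros to e_0(4;3/2,3/2|y)
# at small y, taking the displayed summand literally.  We assume conjugate pairs of
# zeros are covered by 2*Re of the rho_n > 0 terms; all values are illustrative.
import mpmath as mp
mp.mp.dps = 30

def xi(s):
    return mp.pi**(-s/2) * mp.gamma(s/2) * mp.zeta(s)

def zero_term(rho, y, lam, s1, s2):
    t = mp.mpf(3)/4 + 1j*rho/2                 # pole of 1/xi(2t-1): 2t-1 = 1/2 + i*rho
    num = 2 * xi(t+s1+s2-1) * xi(t+s1-s2) * xi(t+s2-s1) * xi(t+1-s1-s2) * y**t
    den = ((1-t-lam)*(t-lam) * mp.gamma(s1)*mp.gamma(s2)
           * mp.pi**(mp.mpf(1)/2 - t) * mp.gamma(t - mp.mpf(1)/2)
           * mp.zeta(2*t - 1, derivative=1))
    return num / den

lam, s1, s2, y = mp.mpf(4), mp.mpf(3)/2, mp.mpf(3)/2, mp.mpf('0.05')
contrib = sum(2*mp.re(zero_term(mp.im(mp.zetazero(k)), y, lam, s1, s2))
              for k in range(1, 11))
print(contrib)                                  # oscillatory y^{3/4}-type contribution
```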
Although from a physical point of view, the limit y→0 for the MGFs spectrum (<ref>) corresponds simply to a particular degeneration limit of the worldsheet torus, for the generalised Eisenstein series associated with the spectrum (<ref>), and in particular for the coefficient of the d^6 R^4 in the low-energy expansion of type IIB superstring theory, this limit corresponds to the strong coupling regime g_s→∞. It would be extremely interesting, and equally difficult, to understand the string theory origins at strong coupling for the appearance of the non-trivial zeroes of the Riemann zeta.
§.§ The instanton sectors
So far we have focused our attention entirely on the Fourier zero-mode sector, while the spectral decomposition (<ref>) in principle allows us to reconstruct all of the Fourier modes, in particular the Fourier non-zero modes which we will refer to as instanton sectors.
Given (<ref>), we can extract the k-instanton sector e_k(λ;s_1,s_2| y), i.e. the Fourier mode e^2π i k x, for the generalised Eisenstein series ℰ(λ;s_1,s_2|z):
e_k(λ;s_1,s_2| y)= ∑_Iβ_I 4/Γ(I)|k|^I-1/2σ_1-2I(k)y^1/2K_I-1/2(2π |k|y)
+ ∫_(t)=1/2[ 4ξ(t+s_1+s_2-1)ξ(t+s_1-s_2)ξ(t+s_2-s_1)ξ(t+1-s_1-s_2)/(t-λ)(t+λ-1)Γ(s_1)Γ(s_2)ξ(2t-1)4|k|^t-1/2σ_1-2t(k)/Γ(t)
× y^1/2K_t-1/2(2π |k|y) t/4π i] +∑_n=1^∞ (ℰ̃(λ;s_1,s_2),ϕ_n)a_k^(n) y^1/2K_it_n(2π |k|y) .
Although a complete analysis of the instanton sector is beyond the scope of the present paper, we note that a naive attempt at extracting the large-y behaviour of (<ref>) would produce an incorrect result. At first glance we may try and expand directly the different Bessel functions for large argument, thus immediately obtaining the expected exponential suppression factor e^-2π |k| y, hallmark of the k-instanton sector. However, by doing so the perturbative expansion on top of the instanton factor q^k, for k>0 with q= e^2π i z, or anti-instanton factor q̅^k, for k<0, would start at order y^0 with sub-leading corrections O(y^-1), which turns out to be incorrect.
In <cit.>, a representation for all generalised Eisenstein series with spectrum (<ref>) was provided in terms of iterated integrals of holomorphic Eisenstein series. This representation is extremely convenient for extracting all of the instanton expansions and, by comparing with the examples discussed in <cit.>, we can clearly see that the above naive argument cannot possibly provide the correct answer for the generalised Eisenstein series with spectrum (<ref>).
Furthermore, in the same references, the authors discovered that amongst the coefficients of the perturbative expansion in the instanton sector, e_k(λ;s_1,s_2| y), besides rational numbers and odd-zeta values, a new class of numbers appears whenever the eigenvalue λ is such that the vector space of holomorphic cusp forms of modular weight w=2λ has non-zero dimension. For these special eigenvalues the perturbative expansion at large-y of e_k(λ;s_1,s_2| y) contains non-critical completed L-values of holomorphic cusp forms.
Very recently in <cit.> a very similar (albeit so far completely different in nature) phenomenon was discovered for the generalised Eisenstein series with spectrum (<ref>) for exactly the same eigenvalues.
It would be extremely interesting to extract the asymptotic expansion as y≫ 1 of the k-instanton sector (<ref>) and understand the origin of these completed L-values for holomorphic cusp forms from the spectral decomposition point of view (<ref>). In particular, it is tantalising to conjecture some interplay between the non-holomorphic cusp forms and the appearance of holomorphic cusp forms.
§ CONCLUSIONS
In this work, we have presented a family of Poincaré series (<ref>) which contains both string theory flavours of generalised Eisenstein series, namely higher derivative corrections in the low-energy effective action of type IIB superstring theory and integrated-correlator coefficients from the dual gauge theory counterpart (<ref>), as well as all two-loop modular graph functions (<ref>) from the low-energy expansion of perturbative string amplitudes at genus one.
Besides giving a unifying picture, the newly introduced family of modular invariant functions satisfies a variety of algebraic and differential relations. In particular, since this space is closed under the action of the Laplace operator (<ref>)-(<ref>), we find a natural explanation (<ref>)-(<ref>) for the string theory spectra of eigenvalues and possible source terms (<ref>)-(<ref>).
From the Poincaré series integral representation (<ref>), or equivalently from the spectral decomposition (<ref>), we derive in (<ref>) the complete asymptotic expansion as y≫1 for the Fourier zero-mode sector, as well as all non-perturbative, exponentially suppressed terms (<ref>), which in the context of higher derivative corrections and integrated correlators correspond to instanton/anti-instanton events.
It would be interesting to repeat a similar analysis in the instanton sector, i.e. for the Fourier non-zero modes, starting from the integral representation (<ref>). As shown in <cit.>, for particular eigenvalues the large-y perturbative expansion of any Fourier non-zero mode coefficient for both flavours of generalised Eisenstein series (<ref>)-(<ref>) does contain L-values of holomorphic cusp forms. Obtaining these results from a Poincaré series or spectral decomposition is as interesting as it is challenging. From the Poincaré series side this involves tackling infinite sums involving general Kloosterman sums (<ref>), while from the spectral decomposition side (<ref>) we have to sum all contributions from non-holomorphic Maass cusp forms.
Finally, we have also presented in (<ref>) the general expansion as y→0, which crucially involves the non-trivial zeros of the Riemann zeta function. For the generalised Eisenstein series (<ref>) corresponding to two-loop MGFs, this limit corresponds to a particular degeneration of the toroidal world-sheet. However, for the higher derivative corrections and integrated-correlator coefficients (<ref>), the limit y→0 corresponds to the strong coupling regime g_s→∞, or equivalently g_YM^2→∞ on the dual gauge theory side. We do not know why the Riemann hypothesis should play any rôle in the strong coupling limit of string theory; nonetheless, we find this observation extremely fascinating and in need of further exploration.
§.§ Acknowledgements
We would like to thank Nathan Benjamin, Cyuan-Han Chang, Ksenia Fedosova, Axel Kleinschmidt, Kim Klinger-Logan, Eric Perlmutter, Oliver Schlotterer, and Don Zagier for useful discussions. In particular we would like to thank Nathan Benjamin and Cyuan-Han Chang for helping us correct one of our results and Axel Kleinschmidt for comments on the draft. We are grateful to the organisers of the Pollica Summer Workshop “New Connections between Physics and Number Theory” supported by the Regione Campania, Università degli Studi di Salerno, Università degli Studi di Napoli "Federico II", the Physics Department "Ettore Pancini" and "E.R. Caianiello", and Istituto Nazionale di Fisica Nucleare. DD would also like to thank the Albert Einstein Institute, Golm, for the hospitality during the final stages of this project.
§ CONVERGENCE OF THE POINCARÉ SERIES
In this appendix we discuss the region in parameter space, (a,b,r,s), for which the Poincaré series (<ref>) converges absolutely.
This will be achieved by constructing an auxiliary Poincaré series which has the same domain of absolute convergence but is easier to analyse.
We start by observing that under a modular transformation γ∈ the magnitude of the seed functions abrsz is bounded from above by an x-independent function
|abrsγ· z| ≤∑_m≠ 0|[σ_a(m)|m|^b-1/2 y^r+1/2 K_s-1/2(2π |m|y)]_γ| ,
a simple consequence of the triangle inequality combined with
|[e^2π ix]_γ|=1 .
Motivated by this observation, we define the auxiliary Poincaré series
ψ(a,b,r,s| y) := ∑_m=1^∞σ_a(m)m^b-1/2y^r+1/2 K_s-1/2(2π my) ,
Ψ(a,b,r,s| z) :=∑_γ∈[ψ(a,b,r,s| y)]_γ ,
and notice that the auxiliary Poincaré series (<ref>) converges absolutely if and only if the original Poincaré series (<ref>) does.
We continue by showing that (<ref>) can be written in terms of a contour integral thus manifesting the convergence properties of the Poincaré series.
Given a function f(y) we define its Mellin transform as
ℳ [ f ] (t) := ∫_0^∞ f(y) y^t dy/y ,
and proceed to compute the Mellin transform of our new seed function
ψ̃(a,b,r,s| t) :=ℳ[ψ(a,b,r,s)](t) =∫_0^∞ψ(a,b,r,s| y) y^t dy/y
= 1/4π^t+r+1/2Γ(t+r+1-s/2)Γ(t+r+s/2)ζ(t+r+1-b)ζ(t+r+1-a-b) ,
using the identities
∫_0^∞ K_s(y)y^b dy = 2^b-1Γ(b+1-s/2)Γ(b+s+1/2) ,
∑_m=1^∞σ_a(m)m^b = ζ(-a-b)ζ(-b) .
The Mellin transform (<ref>) is well-defined in the strip
ℜ(t)>α = max{ℜ(s-r-1),ℜ(-s-r),ℜ(b-r),ℜ(a+b-r)} .
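This closed form is easy to test numerically by truncating the m-sum in ψ and integrating in y. A minimal sketch (mpmath; the parameter values are ours, chosen inside the convergence region (<ref>), and the m-sum is truncated):

```python
# Sketch: numerical check of the Mellin transform of the auxiliary seed
#   psi(a,b,r,s|y) = sum_m sigma_a(m) m^{b-1/2} y^{r+1/2} K_{s-1/2}(2 pi m y).
# Illustrative parameters; the Bessel decay makes the truncated m-sum very accurate.
import mpmath as mp
mp.mp.dps = 20

a, b, r, s = mp.mpf(-2), mp.mpf(-1), mp.mpf(4), mp.mpf('1.5')
t = mp.mpf('0.5')                      # must lie in the strip Re(t) > alpha

def sigma(a, m):
    return sum(mp.mpf(d)**a for d in range(1, m + 1) if m % d == 0)

def psi(y, M=25):
    return sum(sigma(a, m) * m**(b - mp.mpf('0.5')) * y**(r + mp.mpf('0.5'))
               * mp.besselk(s - mp.mpf('0.5'), 2*mp.pi*m*y) for m in range(1, M + 1))

numeric = mp.quad(lambda y: psi(y) * y**(t - 1), [0, 1, mp.inf])
closed  = (mp.gamma((t+r+1-s)/2) * mp.gamma((t+r+s)/2)
           * mp.zeta(t+r+1-b) * mp.zeta(t+r+1-a-b)) / (4*mp.pi**(t+r+mp.mpf('0.5')))
print(numeric, closed)                 # the two values agree
```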
We can now apply Mellin inversion formula to obtain the integral representation
ψ(a,b,r,s| y) = ℳ^-1[ψ̃(a,b,r,s)](y)= ∫_β-i∞^β+i∞ψ̃(a,b,r,s| t) y^-t dt/2π i ,
where β is an arbitrary constant such that β>α. The reason for deriving (<ref>) is that all of the explicit y dependence has now been reduced to the simple term y^-t. At this point we can easily perform the Poincaré sum (<ref>), arriving at
Ψ(a,b,r,s| z) = ∫_β-i∞^β+i∞ψ̃(a,b,r,s| t) E_-t(z) dt/2π i .
The absolute convergence of the auxiliary Poincaré series (<ref>), and hence of the original Poincaré series (<ref>), is then equivalent to understanding the conditions for which the Poincaré series of the integral representation (<ref>) is absolutely convergent. This question is much easier to answer: with (<ref>) the problem has been reduced to the convergence of the Poincaré series for Eisenstein series (<ref>).
We conclude that absolute convergence of (<ref>) and (<ref>) is guaranteed whenever
ℜ(-t) = -β > 1 ⇒ α <β < -1 ,
which, upon use of the condition (<ref>) for a well-defined Mellin transform, reproduces precisely the domain in parameters space (<ref>) stated in the main text
min{ℜ(r+1-s),ℜ(r+s),ℜ(r-b),ℜ(r-a-b)}>1 .
It is interesting to note that the integral representation (<ref>) implies that the spectral overlap (Ψ(a,b,r,s),ϕ_n) vanishes for all Maass cusp forms ϕ_n(z); such a result is not expected to hold for the more complicated abrs.
§ MELLIN-BARNES REPRESENTATION
In this appendix we derive a Mellin-Barnes representation for the Fourier zero-mode abrsy, starting from the general integral representation (<ref>) specialised to the seed function (<ref>) under consideration.
Hence we start by considering
abrsy
= ∑_d=1^∞∑_m≠ 0 S(m,0;d) ∫_ℝ e^-2π im ω/d^2(ω^2 + y^2)σ_a(m)|m|^b-1/2(y/d^2(ω^2 + y^2))^r+1/2 K_s-1/2(2π|m|y/d^2(ω^2 + y^2)) dω.
The Bessel function can now be substituted by its Mellin-Barnes integral representation
K_s(y) =(y/2)^s ∫_α - i∞^α+i∞Γ(t)Γ(t-s)(y/2)^-2t dt/4π i,
where α is a real parameter such that α > max{ℜ(s),0}.
To perform the integral over ω we furthermore expand the exponential as
e^-2π im ω/d^2(ω^2 + y^2) = ∑_k=0^∞1/k!( -2π i mω/d^2(ω^2+y^2))^k.
Substituting both the Mellin-Barnes representation for the Bessel function and the above convergent expansion in (<ref>), we obtain
abrsy =∑_d=1^∞∑_m≠ 0∑_k=0^∞∫_ℝ∫_α-i∞^α +i∞ S(m,0;d) σ_a(m) |m|^b-1/2(y/d^2(ω^2 + y^2))^r+1/2
(-2π imω/d^2(ω^2+y^2))^k(π |m|y/d^2(ω^2+y^2))^s-2t-1/2 Γ(t)Γ(t-s+1/2)/k! dt dω/4π i .
The integral over ω can be performed
∫_ℝω^k/(ω^2+y^2)^k+r+s-2t dω = [1+(-1)^k]y^4t+1-k-2r-2s Γ(k+1/2)Γ(k-1/2+r+s-2t)/2Γ(k+r+s-2t ),
provided that the integrand falls off sufficiently fast as ω→±∞, which in turn requires the parameter α to be bounded from above by 4α < k+2ℜ(r+s)-2.
Under the conditions (<ref>) for absolute convergence of the Poincaré series, we can easily see that for all k∈ℕ the constraints on the parameter α:
max{ℜ(s),0} < α< (k+2ℜ(r+s)-1)/4 ,
always admit a non-vanishing strip of allowed integration contours in t.
At this point, we focus on the series in m, d and k. Firstly, given the explicit expression (<ref>) for the Kloosterman sum S(m,0;d) we use that r∈(ℤ/dℤ)^× implies -r∈(ℤ/dℤ)^× to derive S(m,0;d)=S(-m,0;d). We can then replace the sum over all non-zero integers m by twice the sum over the positive integers m>0. Secondly, it is possible to evaluate explicitly the sum over d, which takes the form of a well-known Dirichlet series for the Ramanujan sum S(m,0;d),
∑_d=1^∞S(m,0;d)/d^s̃ = σ_1-s̃(m)/ζ(s̃) ,
specialised to s̃=2r+2k+2s-4t.
Finally, we note that the term [1+(-1)^k] in the numerator of (<ref>) restricts the sum over k to only run over even integers 2k.
When the dust settles and after performing the change of variables t→t+ r+s-1/2, we are left with the expression
abrsy
=∑_m=1^∞∑_k=0^∞∫_1/2-i∞^1/2+i∞σ_a(m)σ_2t-4k-1(m)/m^t+r-2k-b(-1)^k π^2k+1-r-tΓ(k+1/2-t)Γ(t+r-s/2)Γ(t+r+s-1/2)/Γ(2k+1-t)ζ(4k+2-2t)k! y^t-2k dt/4π i .
The next sum to evaluate is that over k. To this end, we begin by making the change of variable t→ t'=t-2k, thus shifting the contour of integration from ℜ(t)=1/2 to ℜ(t')=1/2-2k and, after having changed the integration variable back to t, we are left with
abrsy
=∑_m=1^∞∑_k=0^∞∫_1/2-2k-i∞^1/2-2k+i∞σ_a(m)σ_2t-1(m)/m^t+r-b(-1)^k π^1-t-rΓ(1/2-k-t)Γ(t+2k+r-s/2)Γ(t+2k+r+s-1/2)/Γ(1-t)ζ(2-2t)k!y^t dt/4π i .
We would like to translate the shifted integration contour back to its original position at ℜ(t)=1/2; however, additional poles originating from Γ(1/2-k-t) appear at t=1/2-ℓ, with ℓ∈ℕ and 0<ℓ≤ k. Although the shifted contour cannot be moved back immediately to its initial place, we can nevertheless rewrite it as a sum of two different contours: the original one along ℜ(t)=1/2 and a new contour encircling these new poles along the negative t-axis. As depicted in figure <ref>, these two contours can be connected at infinity to form a single auxiliary contour of integration 𝒞 which is independent of the summation variable k. We exchange the sum over k with the integral over 𝒞 and perform the sum over k
∑_k=0^∞(-1)^kΓ(1/2-k-t)Γ(2k+r+t-s/2)Γ(2k+r+s+t-1/2)/k!
= sin[π(r-t)]+sin(π s)/2 sin(π r) cos(π t)Γ(r+1-s-t/2)Γ(r+s-t/2)Γ(t+r-s/2)Γ(t+r+s-1/2)/Γ(r) .
We are then left with the expression
abrsy = ∑_m>0∫_𝒞 ( sin[π(r-t)]+sin(π s)/2 sin(π r) cos(π t))( σ_a(m)σ_2t-1(m)/m^t+r-b)
×Γ(r+1-s-t/2)Γ(r+s-t/2)Γ(t+r-s/2)Γ(t+r+s-1/2)/π^rΓ(r)ξ(2-2t)y^t t/4π i .
Since the integration contour 𝒞 is closed, the integral is uniquely fixed by the residues at the poles in the interior of 𝒞.
The only poles situated in the interior of the contour 𝒞 are located at t=-2n+s-r and t=-2n+1-s-r for n∈ℕ and come from the last two gamma functions in the numerator of the above integrand.
Furthermore, we notice that at these pole locations the ratio of trigonometric factors in (<ref>) always evaluates to 1.
We conclude that this ratio of trigonometric terms can be dropped from the contour integral (<ref>) without changing the result
abrsy = ∑_m>0∫_𝒞( σ_a(m)σ_2t-1(m)/m^t+r-b)
Γ(r+1-s-t/2)Γ(r+s-t/2)Γ(t+r-s/2)Γ(t+r+s-1/2)/π^rΓ(r)ξ(2-2t)y^t t/4π i .
Since the trigonometric factors have been removed, the previously mentioned poles located on the negative t-axis at t=1/2-ℓ, with ℓ∈ℕ, are no longer present in (<ref>). As depicted in figure <ref>, we are now free to deform the auxiliary contour of integration 𝒞 into an infinite semi-circle. The contribution from the arc at infinity vanishes and the only non-trivial contribution to the integral comes from the line ℜ(t)=1/2; hence we have managed to restore the original contour of integration.
Finally, we turn to the sum over m. We notice that at large m the summand is bounded by
|σ_a(m)σ_2t-1(m)/m^t+r-b| = O(m^- [ ℜ(r-b) - max{ℜ(a),0} ] ) ,
and, thanks to the conditions (<ref>) for the absolute convergence of the Poincaré series, we can easily see that ℜ(r-b) - max{ℜ(a),0}>1 for the range of parameters considered, hence this sum converges absolutely (note that for the convergence of this sum it is crucial that we managed to reduce the contour 𝒞 back to just the line ℜ(t)=1/2). We can then use a well-known identity due to Ramanujan,
∑_m>0σ_a(m)σ_b(m)/m^s =ζ (s) ζ (s-a) ζ (s-b) ζ (s-a-b)/ζ (2s-a-b) ,
specialised to the case b=2t-1 , s=r+t-b and substitute it back in equation (<ref>).
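Since this identity enters the final result, it is worth a quick numerical sanity check; truncating the sum at a few hundred terms already reproduces the ζ-ratio to many digits for ℜ(s) sufficiently large. A minimal sketch (mpmath; the chosen values of a, b, s and the cutoff are illustrative):

```python
# Sketch: brute-force check of Ramanujan's identity
#   sum_m sigma_a(m) sigma_b(m)/m^s = zeta(s) zeta(s-a) zeta(s-b) zeta(s-a-b)/zeta(2s-a-b)
import mpmath as mp

def sigma(a, m):
    return sum(mp.mpf(d)**a for d in range(1, m + 1) if m % d == 0)

a, b, s = mp.mpf(-3), mp.mpf(-5), mp.mpf(4)   # illustrative; Re(s) large enough to converge
N = 800
lhs = sum(sigma(a, m) * sigma(b, m) / mp.mpf(m)**s for m in range(1, N + 1))
rhs = mp.zeta(s)*mp.zeta(s-a)*mp.zeta(s-b)*mp.zeta(s-a-b) / mp.zeta(2*s-a-b)
print(lhs, rhs)                               # agreement up to the small truncation error
```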
Our final result is the Mellin-Barnes integral representation for the Fourier zero-mode,
abrsy= ∫_1/2-i∞^1/2+i∞ U(a,b,r,s| t) y^t t/2π i,
where we define
U(a,b,r,s| t) := Γ(r+1-s-t/2)Γ(r+s-t/2)Γ(t+r-s/2)Γ(t+r+s-1/2)/2π^r Γ(r)ξ(2-2t)
×ζ(r+1-b-t)ζ(r+1-a-b-t)
ζ(t+r-b)ζ(t+r-a-b)/ζ(2r+1-a-2b) .
The Mellin-Barnes integral representation (<ref>) can be analytically continued to values of parameters, (a,b,r,s), for which the Poincaré series (<ref>) is not absolutely convergent. In general, rather than the vertical line ℜ(t)=1/2, the integration contour, γ, in (<ref>) has to be chosen such that it separates two sets of poles of (<ref>).
The contour γ is such that the poles coming from
Γ(t+r-s/2)Γ(t+r+s-1/2)
ζ(t+r-b)ζ(t+r-a-b) ,
are located to the left of γ, while the remaining poles coming from
Γ(r+1-s-t/2)Γ(r+s-t/2)ζ(r+1-b-t)ζ(r+1-a-b-t)
/ξ(2-2t) ,
are located to the right of γ.
|
http://arxiv.org/abs/2307.04900v1 | 20230710210214 | The angular dependence of spin-orbit torque in monolayer $Fe_3GeTe_2$ | [
"Fei Xue",
"Mark D. Stiles",
"Paul M. Haney"
] | cond-mat.mtrl-sci | [
"cond-mat.mtrl-sci",
"cond-mat.mes-hall"
] |
Department of Physics, University of Alabama at Birmingham, Birmingham, AL 35294, USA
Physical Measurement Laboratory, National Institute of Standards and Technology, Gaithersburg, MD 20899, USA
Institute for Research in Electronics and Applied Physics & Maryland Nanocenter, University of Maryland, College Park, MD 20742, USA
Physical Measurement Laboratory, National Institute of Standards and Technology, Gaithersburg, MD 20899, USA
Physical Measurement Laboratory, National Institute of Standards and Technology, Gaithersburg, MD 20899, USA
In ferromagnetic systems lacking inversion symmetry, an applied electric field can control the ferromagnetic order parameters through the spin-orbit torque. The prototypical example is a bilayer heterostructure composed of a ferromagnet and a heavy metal that acts as a spin current source. In addition to such bilayers, spin-orbit coupling can mediate spin-orbit torques in ferromagnets that lack bulk inversion symmetry. A recently discovered example is the two-dimensional monolayer ferromagnet Fe3GeTe2. In this work, we use first-principles calculations to study the spin-orbit torque and ensuing magnetic dynamics in this material. By expanding the torque versus magnetization direction as a series of vector spherical harmonics, we find that higher order terms (up to ℓ=4) are significant and play important roles in the magnetic dynamics. They give rise to deterministic, magnetic field-free electrical switching of perpendicular magnetization.
The angular dependence of spin-orbit torque in monolayer Fe3GeTe2
Paul M. Haney
August 12, 2023
=================================================================
§ INTRODUCTION
The electrical control of magnetization without external magnetic fields has attracted a lot of interest due to its potential applications in energy-efficient nonvolatile magnetic random access memory devices and neuromorphic computing <cit.>. One of the promising mechanisms to realize such functionality is spin-orbit torque <cit.>, which is derived from spin-orbit coupling and transfers angular momentum from the crystal lattice to the magnetization <cit.>.
The symmetry of the system determines the dependence of the spin-orbit torque on the magnetization direction. This dependence in turn determines the possible functionality of the torque in devices. As an example, a bilayer heterostructure consisting of a ferromagnetic and a heavy metal layer often possesses a symmetry mirror plane containing the electric field and the interface normal directions. This symmetry requires that the spin-orbit torque vanishes when the magnetization is in-plane and perpendicular to the electric field. This property in turn prevents the spin-orbit torque from affecting deterministic switching of magnetic devices with perpendicular magnetic anisotropy, which are desired for applications <cit.>.
Utilizing materials with reduced crystal symmetry such as two-dimensional layered materials can overcome this limitation and result in deterministic perpendicular switching <cit.>.
In addition to conventional bilayer heterostructures, ferromagnets without inversion symmetry <cit.> can also exhibit sizable spin-orbit torques, offering another route to useful switching dynamics. An example is the recently discovered 2-d magnetic material, monolayer Fe3GeTe2. Fe3GeTe2 is additionally of great interest in ferromagnetic spintronics applications because it is metallic and has strong perpendicular magnetic anisotropy <cit.>. Johansen et al. recently predicted that this material's C_3z symmetry leads to novel bulk spin-orbit torques <cit.>.
For example, the lowest order spin-orbit torque is found to be time-reversal even and fieldlike, in contrast to the conventional bilayer case that has a time-reversal odd fieldlike torque and a time-reversal even dampinglike torque. Interestingly, although the material symmetry is compatible with deterministic perpendicular magnetization switching, the lowest order torques identified in previous work do not lead to deterministic switching.
Motivated by this, we compute the spin-orbit torques in monolayer Fe3GeTe2 in this work using ab initio calculations. We generalize the analysis of the symmetry properties of the material response and show that higher order terms in the spin-orbit torque enable deterministic switching of perpendicular magnetization.
This paper is organized as follows: In Sec. <ref>, we describe how symmetry determines the form of spin-orbit torques, which we express in vector spherical harmonics. Using vector spherical harmonics as the expansion basis enables the convenient analysis of higher-order terms. We provide symmetry tables for the Fe3GeTe2 structure and for conventional bilayer systems. Sec. <ref> presents first-principles calculations of spin-orbit torques in monolayer Fe3GeTe2 and analyzes the important higher-order terms in the results. Sec. <ref> presents the resulting dynamics of the ab initio torques computed with the Landau-Lifshitz-Gilbert-Slonczewski equation.
In Sec. <ref>, we provide a brief discussion of our main findings and relevance to the experiments.
§ SYMMETRY ANALYSIS
§.§ Vector Spherical Harmonics
Crystal symmetry ultimately determines the dependence of spin-orbit torque on the electric field and magnetization directions. Following Belashchenko et al. <cit.>, we expand the spin-orbit torque in the basis of vector spherical harmonics. This expansion offers several advantages over other approaches <cit.> when describing spin-orbit torques in systems with more complicated symmetries than the typical bilayer system. First, the expansion elements are orthogonal to each other so that adding more terms to the expansion does not change the fit values for the lower order terms. Second, there is a straightforward procedure to determine all symmetry allowed elements of the expansion set. This is in contrast to a polynomial expansion of the torque in Cartesian coordinates, where the number of tensor elements grows exponentially with polynomial order. This makes higher order terms difficult to identify and evaluate. As we show in this paper, higher order terms (4^ th order) can qualitatively impact the features of the spin-orbit torque-induced magnetization dynamics, so their identification is important. Third, the terms in the vector spherical harmonics are automatically partitioned into dampinglike or fieldlike torque terms <cit.>. Knowledge of the fieldlike/dampinglike characteristic of the torque can provide intuition about the role of each term in magnetic dynamics. Finally, the expansion allows easy identification of time-reversal even and odd torques. As we show below, both fieldlike and dampinglike torques include time-reversal even and odd components.
We discuss these points in more detail below.
We follow the same convention adopted in <cit.> to use two of the three vector spherical harmonics. For the magnetization direction
m̂= (sinθcosϕ,sinθsinϕ,cosθ),
the torque components are defined in terms of scalar spherical harmonics Y_lm[m̂(θ,ϕ)] as
Y^ D_lm(m̂)
=∇_m̂ Y_lm(m̂)/√(l(l+1)),
Y^ F_lm(m̂)
=m̂×∇_m̂ Y_lm(m̂)/√(l(l+1)),
We explicitly label the vector spherical harmonics terms in Eq. <ref> and Eq. <ref> based on the fieldlike or dampinglike nature of the torque. We label Y^ F_lm as fieldlike because its corresponding effective field ∇_m̂ Y_lm is a pure gradient and has zero curl. Fieldlike torques result in precessional motion of the magnetization. We label Y^ D_lm as dampinglike because it is proportional to m×Y^ F_lm and can be generated from the curl of an effective field. Dampinglike torques direct the magnetization to fixed points. The time-reversal properties of fieldlike and dampinglike torques depend on whether l is even or odd; Table <ref> summarizes this relationship.
For the most common spin-orbit torques found in bilayers with a broken mirror plane perpendicular to z, the dampinglike torque 𝐦̂×(( E×ẑ)×𝐦̂) is even under time reversal and the fieldlike torque 𝐦̂×( E×ẑ) is odd. The terms “time-reversal even torque” and “dampinglike torque” are often used interchangeably, as are the terms “time-reversal odd torque” and “fieldlike torque”. However, these equivalences do not hold for higher order terms in the expansion of the torque.
Since the electric-field-induced spin-orbit torque is always perpendicular to the magnetization m and the vector spherical harmonics form a complete set of functions, we can write down the spin-orbit torkance for an electric field in the Ê direction of magnitude E
T_Ê(m̂)=τ_Ê(m̂)E
in the basis of Y^ D_lm and Y^ F_lm
τ_Ê(m̂)=∑_lm[Y^ D_lmC^ D_lm(Ê)+Y^ F_lmC^ F_lm(Ê)],
where the Cs are complex Cartesian coefficients with the real part being the coefficient of the ReY^ D,F_lm and the imaginary parts the negative coefficients of the ImY^ D,F_lm. The crystal symmetry determines what combinations of coefficients are allowed.
When we expand spin-orbit torkance in vector spherical harmonics, we have 2l+1 independent choices of vector spherical harmonics, one for each integer m with -l ≤ m ≤ l for a given l in the absence of symmetry constraints. As with spherical harmonics, the vector spherical harmonics with -m are the complex conjugates of those with m.
Since the torques are real, we use the real and imaginary parts of the vector spherical harmonics as the expansion functions, e.g. ReY^ D,F_lm and ImY^ D,F_lm
When we make this choice, we restrict m to be non-negative so we do not overcount. Note that we use a different notation for the vector spherical harmonic torque components than found in Belashchenko et al. [Our vector spherical harmonics convention can be converted to the one adopted by Belashchenko et al. <cit.>: ReY^ D,F_lm=-Z^ (1),(2)_l,m/√(2), ImY^ D,F_lm=-Z^ (1),(2)_l,-m/√(2)]. Crystal symmetries constrain the choices of m for a given l. Table <ref> gives the constraints due to important mirror plane symmetries of the structure. Rotational crystal symmetries place additional constraints on m, as described in Appendix A.
Conventionally, for thin film heterostructures composed of ferromagnets and heavy metals, the structure is assumed to be disordered, so that crystal symmetry does not play a role. The bilayer structure itself breaks the mirror plane σ_p̂,Ê, but the other two structural mirror planes remain. The presumed continuous rotational symmetry restricts m=1 <cit.>, so that for l odd, only ImY^ D,F_l1 is allowed and for l even, only ReY^ D,F_l1.
The material of interest in this paper, Fe3GeTe2, preserves the mirror plane perpendicular to the interface normal but breaks one of the mirror planes that contain the interface normal. The mirror plane perpendicular to the interface normal restricts m to be even. When the crystal is oriented such that the electric field is along the x-direction as in Fig. <ref>(a), σ_p̂,n̂ is preserved, so that terms containing ReY^ D,F_lm require l to be odd and terms containing ImY^ D,F_lm require l to be even. If the crystal is oriented so that the electric field is along the y-direction as in Fig. <ref>(a), the allowed l values for the different terms switch.
Systems like that in Ref. <cit.> are similar but do not have the mirror plane perpendicular to the interface, so there is no restriction that m be even. Depending on the orientation of the electric field along the crystal, different terms are allowed for different combinations of l and m.
It can be informative to take a different approach from that used in Table <ref>, in which the vector spherical harmonics are defined with respect to the interface normal and the electric field direction and instead to fix the crystal orientation. Then the vector spherical harmonics do not change as the electric field direction is changed and it becomes possible to relate the coefficients of the different terms for the different electric field directions. This process is explained in Appendix A, allowing us to determine the angular dependence of the torque when the electric field is along y from calculations done for the field along x.
§.§ General form of the torkance for monolayer Fe3GeTe2
The vector spherical harmonic expansion of the spin-orbit torque for Fe3GeTe2 is determined by its crystal structure, shown in Fig. <ref>. Monolayer Fe3GeTe2 has the D_3h symmetry of the P63/mmc space group, which means that it has mirror plane symmetry with respect to the plane of the film (x-y plane), three-fold rotational symmetry around the out-of-plane axis, and three in-plane mirror planes (y-z plane and equivalents rotated by 120^∘), but mirror-plane symmetry is broken in the mutually perpendicular planes (x-z plane and equivalents rotated by 120^∘). Its lack of inversion symmetry is the key to allowing current-induced spin-orbit torque.
Following the general procedure outlined in Appendix <ref>, the symmetry-allowed spin-orbit torkance for an electric field in the x-direction is given by
τ^even_x̂(m̂)=∑_lm C^ F_2l,6m±2 ImY^ F_2l,6m±2(m̂)
+ C^ D_2l+1,6m±2 ReY^ D_2l+1,6m±2(m̂).
Our first-principles calculation and analysis of the magnetic dynamics indicate that the following three terms in this expansion are dominant:
τ^even_x̂
(m̂) ≈ C^ F_2,2ImY^ F_2,2 +
C^ F_4,2ImY^ F_4,2
+ C^ D_3,2ReY^ D_3,2 .
Some of these terms are illustrated in Fig. <ref>. The lowest order time-reversal even term can be written in Cartesian coordinates as:
ImY^ F_2,2 ∝-sinθcos2ϕ θ̂+1/2sin2θsin2ϕ ϕ̂
=m̂×(m_y,m_x,0) .
This form, which is shown in Fig. <ref>(c) and which has been derived from the Cartesian expansion <cit.>, acts as a fieldlike torque even though it is the time-reversal even component of the spin-orbit torque.
The time-reversal odd torkance is given by:
τ^odd_x̂(m̂)=∑_lm C^ D_2l,6m±2ImY^ D_2l,6m±2
+ C^ F_2l+1,6m±2ReY^ F_2l+1,6m±2.
Our analysis shows that for Fe3GeTe2, the important terms in this expansion are:
τ^odd_x̂(m̂) ≈ C^ D_2,2ImY^ D_2,2+
C^ D_4,2ImY^ D_4,2
+C^ F_3,2ReY^ F_3,2.
The leading term in this expression is in Fig. <ref>(d), and in Cartesian coordinates takes the form:
ImY^ D_2,2 ∝1/2sin2θsin2ϕ θ̂+sinθcos2ϕ ϕ̂
=m×((m_y,m_x,0)×m) .
This time-reversal odd torque acts as dampinglike and is the second lowest-order in magnetization m.
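Both Cartesian identities are easy to verify directly from the defining gradients of Y_22. The short sketch below (numpy with finite differences; normalization constants are not tracked, so only proportionality is checked, and all names are ours) evaluates the definitions for (l,m)=(2,2) at an arbitrary direction and compares with m̂×(m_y,m_x,0) and m̂×((m_y,m_x,0)×m̂):

```python
# Sketch: check Im Y^F_{2,2} ∝ m x (m_y, m_x, 0) and Im Y^D_{2,2} ∝ m x ((m_y, m_x, 0) x m)
# directly from the gradient definitions, via finite differences of Y_22(theta, phi).
import numpy as np

def Y22(theta, phi):                       # scalar spherical harmonic, l = m = 2
    return 0.25*np.sqrt(15/(2*np.pi)) * np.sin(theta)**2 * np.exp(2j*phi)

def unit_vectors(theta, phi):
    mhat = np.array([np.sin(theta)*np.cos(phi), np.sin(theta)*np.sin(phi), np.cos(theta)])
    et   = np.array([np.cos(theta)*np.cos(phi), np.cos(theta)*np.sin(phi), -np.sin(theta)])
    ep   = np.array([-np.sin(phi), np.cos(phi), 0.0])
    return mhat, et, ep

def vsh_22(theta, phi, h=1e-6):
    """Cartesian 3-vectors Y^D_22 and Y^F_22 from the definitions."""
    dY_dth = (Y22(theta + h, phi) - Y22(theta - h, phi)) / (2*h)
    dY_dph = (Y22(theta, phi + h) - Y22(theta, phi - h)) / (2*h)
    mhat, et, ep = unit_vectors(theta, phi)
    grad = dY_dth*et + (dY_dph/np.sin(theta))*ep      # gradient on the unit sphere
    return grad/np.sqrt(6), np.cross(mhat, grad)/np.sqrt(6)

theta, phi = 1.1, 0.7                                  # arbitrary test direction
mhat, _, _ = unit_vectors(theta, phi)
v = np.array([mhat[1], mhat[0], 0.0])                  # the (m_y, m_x, 0) vector

YD, YF = vsh_22(theta, phi)
print(np.imag(YF), np.cross(mhat, v))                  # proportional: fieldlike, time-even
print(np.imag(YD), np.cross(mhat, np.cross(v, mhat)))  # proportional: dampinglike, time-odd
```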
Utilizing Eq. <ref>, we can write down the final symmetry-constrained form of the torkance for an applied E-field in the ŷ-direction by keeping the same coefficients and swapping the Re and Im operating on the vector spherical harmonics (see Appendix <ref> for details). In this material, even though the coefficients of the torques are the same for fields in the x̂ and ŷ directions, and the real and imaginary parts of the vector spherical harmonics are the same but rotated through π/2m, the differences between those rotation angles are sufficient to qualitatively change the torques for fields in the two directions. For electric fields in the ŷ direction, symmetry prevents magnetic-field-free switching of perpendicular magnetizations. However, the different relationship between the electric field and the mirror plane allows for predictable perpendicular switching for an electric field in the x-direction. In the following, we focus particularly on this case.
It is interesting to compare the spin-orbit torques for this system with those typically discussed for bilayer systems. Panels (a) and (b) in Fig. <ref> respectively show the typical fieldlike and dampinglike torques. These systems have a broken mirror plane perpendicular to the interface normal. When the electric field is applied in-plane, both torques vanish when the magnetization points in the in-plane direction perpendicular to the electric field. The torques are finite when the magnetization is perpendicular to the interface. Monolayer Fe3GeTe2 does not break this mirror plane but rather one containing the interface normal. In this case the torques are strictly zero when the magnetization is perpendicular to the layer. The threefold rotational symmetry then gives a more complicated angular dependence than that seen in the bilayer systems. We discuss the consequences of these differences in Sec. <ref>.
A motivation for symmetry analysis is the technological application of current-induced switching of perpendicular magnets <cit.>. Deterministic spin-orbit torque switching of perpendicular magnetization requires a nonzero out-of-plane torque when the magnetization is along the equator. This form of torque cannot be realized in typical devices composed of isotropic heavy metal layers and ferromagnetic layers due to their in-plane mirror symmetries. The use of in-plane-symmetry-breaking materials such as WTe2 has been reported previously <cit.> as a means of accomplishing field-free switching.
Here we describe a different scenario for achieving deterministic switching of perpendicular magnetizations in Fe3GeTe2, in which symmetry-allowed higher-order terms in the vector spherical harmonics expansion play an essential role. A first requirement is that when the magnetization is in-plane there be an out-of-plane torque to break the symmetry between up and down. Only time-reversal even torques (such as panels (c), (f), (g) in Fig.2) can provide such functionality because C_2y symmetry enforces the out-of-plane torque to have the time-reversal even form, τ_z∝cos2mϕ. The second requirement is that there be a stable fixed point out of the plane; otherwise, the torque will vanish at an in-plane direction. Fig. <ref> (f) shows that the torque ReY^ D_3,2 is the lowest-order expansion term to satisfy this requirement. However, a ReY^ D_3,2 torque alone cannot switch the magnetization from one hemisphere to the other because of the symmetry around the equator for m=2 terms. The fixed point in one hemisphere is exactly equivalent to a fixed point in the other hemisphere connected by (θ,ϕ)→(π-θ,π/2-ϕ). Although the ReY^ D_3,2 torque can drive the magnetization away from the north or south pole when we turn on the field, the new fixed point is still in the same hemisphere. As we turn off the electric field, the magnetization then goes back to the same pole, resulting in no switching. The third requirement is breaking the symmetry connecting points in the northern and southern hemispheres, which can happen if higher-order torques with m>2 terms are also present. Fig. <ref>(g) shows one example of such a torque, ImY^ F_4,4. The combination of ImY^ F_4,4 and ReY^ D_3,2 can deterministically switch ferromagnets with perpendicular magnetic anisotropy, as we show in the following sections.
§ FIRST-PRINCIPLES CALCULATIONS OF SPIN-ORBIT TORKANCES IN MONOLAYER FE3GETE2
We adopt the experimental unit cell parameter <cit.> a=0.3991 nm of monolayer Fe3GeTe2 (point group D_3h) for our first-principles calculations using Quantum ESPRESSO <cit.>. We then use a Wannier function based approach <cit.> to compute the linear responses, described in more detail in Appendix <ref>. The time-reversal even and odd torkances are given by
τ^ even_ij=2e∑_𝐤,n,
m≠ n f_nkIm⟨ψ_nk|∂ H_ k/∂ k_i|ψ_mk⟩⟨ψ_mk|𝒯_j|ψ_nk⟩/(E_m-E_n)^2+η^2,
τ_ij^ odd=-e∑_𝐤,n1/2η∂ f_nk/∂ E_nk⟨ψ_nk|∂ H_ k/∂ k_i|ψ_nk⟩⟨ψ_nk|𝒯_j|ψ_nk⟩.
|ψ_nk⟩ and E_nk are the eigenstates and eigenvalues of the Hamiltonian H_ k, where k is the Bloch wave vector and n is the band index. The equilibrium Fermi-Dirac distribution function is f_nk=(e^ (E_nk-μ)/k_ BT+1)^-1, μ is the Fermi level, η is the broadening parameter, and e is the electron charge.
The torque operator is 𝒯=-i/ħ[Δ·𝐒̂ ,𝐒̂].
𝐒 is the spin operator and Δ is the time-reversal odd spin-dependent exchange-correlation potential.
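To illustrate the structure of these two linear-response expressions, the sketch below evaluates them for a toy two-band model (a 2D Rashba ferromagnet with made-up parameters), not for the Wannier Hamiltonian of Fe3GeTe2 used in our actual calculations; units are set to ħ=e=1 and the k-grid, broadening, and exchange values are illustrative only:

```python
import numpy as np

# Pauli matrices and spin operator S = sigma/2 (hbar = e = 1 throughout)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
S  = np.array([sx, sy, sz]) / 2

# Toy-model parameters (illustrative only, NOT fitted to Fe3GeTe2)
m_eff, alpha_R, Delta, mu, kT, eta = 1.0, 0.3, 0.5, 0.2, 0.01, 0.025
mhat = np.array([0.0, 1.0, 0.0])                 # magnetization direction
Dvec = Delta * mhat                              # exchange field entering Dvec . S

# Torque operator T_j = (S x Dvec)_j, obtained from T = -i [Dvec.S, S]
T_op = np.array([S[1]*Dvec[2] - S[2]*Dvec[1],
                 S[2]*Dvec[0] - S[0]*Dvec[2],
                 S[0]*Dvec[1] - S[1]*Dvec[0]])

def H(kx, ky):                                   # 2D Rashba ferromagnet
    kin = (kx**2 + ky**2) / (2*m_eff) * np.eye(2)
    return kin + alpha_R*(ky*sx - kx*sy) + np.einsum('i,ijk->jk', Dvec, S)

def dH_dkx(kx, ky):                              # velocity operator for E || x
    return kx/m_eff * np.eye(2) - alpha_R*sy

def fermi(E):                                    # numerically stable Fermi function
    return 0.5*(1.0 - np.tanh((E - mu)/(2*kT)))

tau_even, tau_odd = np.zeros(3), np.zeros(3)
kgrid = np.linspace(-3, 3, 151)
for kx in kgrid:
    for ky in kgrid:
        E, U = np.linalg.eigh(H(kx, ky))
        v  = U.conj().T @ dH_dkx(kx, ky) @ U     # band-basis matrix elements of dH/dk_x
        Tb = np.array([U.conj().T @ t @ U for t in T_op])
        f  = fermi(E)
        dfdE = -f*(1 - f)/kT
        for n in range(2):
            tau_odd += -dfdE[n]/(2*eta) * (v[n, n]*Tb[:, n, n]).real
            for m in range(2):
                if m != n:
                    tau_even += 2*f[n]*np.imag(v[n, m]*Tb[:, m, n]) / ((E[m]-E[n])**2 + eta**2)

dk2 = (kgrid[1] - kgrid[0])**2 / (2*np.pi)**2    # k-space measure per unit area
print("time-reversal even torkance (interband):", tau_even*dk2)
print("time-reversal odd  torkance (intraband):", tau_odd*dk2)
```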
One important input parameter to the calculation is the broadening parameter. Fig. <ref> shows the dependence of the torkance on the broadening parameter and the chemical potential. In Fig. <ref>(a), we find that the time-reversal odd component τ_xx is always larger than the even component τ_xz when m̂=ŷ at the Fermi level. Both time-reversal even and odd torkances increase as the broadening parameter becomes smaller, with the odd component increasing faster. The longitudinal resistance is indicated by the black line in Fig. <ref>(a). In the broadening parameter regime η∈(0.02,0.04) eV, where the resistance is about 400 Ω, the odd torkance is almost one order of magnitude larger than the even component. However, the torkance as a function of chemical potential for a fixed η=25 meV, shown in Fig. <ref>(b), indicates that this ratio does not always hold. Both even and odd components are peaked around 0.3 eV above the Fermi level with a much smaller magnitude difference. In some regions, such as 0.2 eV below the Fermi level, the even component can be much larger than the odd component.
We choose a constant broadening parameter η=25 meV for the results presented below. The corresponding constant electron momentum relaxation time is τ=ħ/2η=13 fs. The computed longitudinal resistance (Fig. <ref>(a)) using this η=25 meV at low temperature is around 400 Ω, which agrees well with experiment <cit.>. Although one experiment <cit.> finds that the Curie temperature for monolayer Fe3GeTe2 can reach up to 100 K, we use the lower temperature T=20 K <cit.>, where the ferromagnetic order is most robust.
Figure <ref> gives the first-principles calculations of spin-orbit torkance in the monolayer Fe3GeTe2 as a function of magnetization angle (θ,ϕ).
Comparing Fig. <ref>(a) with Fig. <ref>(c) gives clear evidence of the existence of higher-order terms. There is a band of vanishing torque in both the northern and southern hemispheres.
By using the fitted coefficients of these nonzero vector spherical harmonic terms, we can replicate Fig. <ref>. This allows us to understand specifically how each term contributes to the magnetization dynamics, the focus of the next section.
The full expansions of the even and odd torques in vector spherical harmonics, as in Eq. <ref> and Eq. <ref>, are given in Table <ref> and Table <ref>.
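In practice these coefficients are obtained by projecting, or least-squares fitting, the computed torque field onto the real and imaginary parts of Y^ D_lm and Y^ F_lm on a (θ,ϕ) grid. The sketch below illustrates such a fit on synthetic data built from a known coefficient set; the basis construction (associated Legendre functions plus finite-difference gradients), the chosen subset of terms, and all names are ours and purely illustrative:

```python
# Sketch: least-squares extraction of vector-spherical-harmonic coefficients from a
# torque field sampled on a (theta, phi) grid.  Synthetic test data; illustrative only.
import numpy as np
from math import factorial
from scipy.special import lpmv

def Ylm(l, m, th, ph):
    norm = np.sqrt((2*l + 1)/(4*np.pi) * factorial(l - m)/factorial(l + m))
    return norm * lpmv(m, l, np.cos(th)) * np.exp(1j*m*ph)

def vsh(kind, l, m, th, ph, h=1e-5):
    """(theta-hat, phi-hat) components of Y^D_lm ('D') or Y^F_lm ('F')."""
    dth = (Ylm(l, m, th + h, ph) - Ylm(l, m, th - h, ph)) / (2*h)
    dph = (Ylm(l, m, th, ph + h) - Ylm(l, m, th, ph - h)) / (2*h)
    a, b = dth/np.sqrt(l*(l + 1)), dph/(np.sin(th)*np.sqrt(l*(l + 1)))
    return (a, b) if kind == 'D' else (-b, a)   # m-hat x (a th + b ph) = -b th + a ph

# small illustrative basis; a real fit would use all symmetry-allowed (l, m) terms
basis = [lambda t, p: np.imag(vsh('F', 2, 2, t, p)),
         lambda t, p: np.imag(vsh('D', 2, 2, t, p)),
         lambda t, p: np.real(vsh('D', 3, 2, t, p)),
         lambda t, p: np.imag(vsh('F', 4, 2, t, p))]
c_true = np.array([1.0, 0.4, -0.15, 0.05])      # synthetic coefficients (stand-in data)

pts = [(t, p) for t in np.linspace(0.2, np.pi - 0.2, 20)
              for p in np.linspace(0.0, 2*np.pi, 36, endpoint=False)]
A = np.array([[f(t, p)[comp] for f in basis] for (t, p) in pts for comp in (0, 1)])
b = A @ c_true                                  # replace with the sampled ab initio torque
c_fit, *_ = np.linalg.lstsq(A, b, rcond=None)
print(c_fit)                                    # recovers c_true; orthogonality keeps the
                                                # low-order values stable as terms are added
```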
Fig. <ref> (c) and (d) show the angular dependence of the spin-orbit torques when the applied electric field is in the ŷ direction. Because of the C_3z rotation symmetry, these results are expected to be related to the results for an applied field in the x̂ direction according to Eq. <ref>.
We have checked that the numerical results are indeed consistent with this relationship. If we look at each individual vector spherical harmonic term, the difference between the cases for E∥ŷ and E∥x̂ is a simple azimuthal rotation by an angle of π/2m to swap the real and imaginary parts. After summing over all m, the total torques for the two cases are not related by a simple rotation. This enables an out-of-equator fixed point for E∥x̂, as we describe next.
Fig. <ref> shows a zoomed-in contour plot of the magnitude of the total spin-orbit torkance near the equator. In the case of E∥ŷ, the mirror symmetry σ_yz enforces a zero-torkance fixed point at m=x̂, shown in Fig. <ref>(b). Microscopically, all vector spherical harmonic terms in Eq. <ref> are zero when (θ,ϕ)=(π/2,0). In contrast, Fig. <ref>(a) shows one of the four out-of-equator zero-torkance fixed points near (θ,ϕ)=(π/2,π/4). The fixed points in (a) and (b) are inequivalent due to the broken σ_xz mirror symmetry in Fe3GeTe2. The three additional zero-torkance points include one in the same hemisphere and two in the opposite hemisphere. For a particular electric field, the two fixed points in the same hemisphere are stable and the other two in the opposite hemisphere are unstable. The stability of each point changes with the sign of the electric field, allowing deterministic switching, discussed in the next section.
Fig. <ref>(a) shows a tiny polar angle deviation from π/2, which is unlikely to be thermally stable in realistic applications. The reason the angle is so small is that C^D_3,2 is relatively small compared to lower-order terms such as C^D,F_2,2, which all have their fixed points at the equator. However, C^D_3,2 is not always small, as shown by the fitted coefficients as a function of the chemical potential in Fig. <ref>. The important C^D_3,2 term becomes very prominent as the chemical potential is increased by a few tens of millielectronvolts, as indicated by the red line. In this chemical potential range, the out-of-equator fixed point can be detected much more easily, as shown in the contour plot of Fig. <ref> (a). While the calculated properties of Fe3GeTe2 are not likely to be suitable for applications, our focus is on the new physics and its trends dictated by the symmetries of Fe3GeTe2, rather than on specific values. Other materials that share the same symmetry may have properties that are more amenable.
§ DYNAMICS
In this section, we focus on how the spin-orbit torques computed in the previous section affect the magnetization dynamics. The spin dynamics of a ferromagnet with perpendicular easy-axis anisotropy is governed by the following Landau-Lifshitz-Gilbert equation with additional current-induced spin-orbit torque terms <cit.>
d𝐦̂/dt-α𝐦̂×d𝐦̂/dt=-γμ_0H_ A(𝐦̂×ẑ)(𝐦̂·ẑ)+𝒯,
where 𝐦̂ is the normalized magnetization, α is the Gilbert damping parameter, γ is the absolute value of the gyromagnetic ratio, μ_0 is vacuum magnetic permeability, H_ A is the magnetic anisotropy field, and 𝒯 is the current-induced spin-orbit torque.
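A minimal sketch of how Eq. <ref> is integrated numerically is given below (scipy's solve_ivp). It includes only the two lowest-order Cartesian torque forms of Sec. <ref>, with made-up magnitudes, and therefore omits the higher-order terms responsible for the deterministic switching; it is meant only to illustrate the numerical setup, not to reproduce the trajectories of Fig. <ref>:

```python
# Sketch: integrate the LLG equation with a perpendicular anisotropy field and the two
# lowest-order spin-orbit torque forms of Sec. II.  Torque magnitudes are made up.
import numpy as np
from scipy.integrate import solve_ivp

gamma  = 1.76e11          # gyromagnetic ratio (rad s^-1 T^-1)
muH_A  = 20.0             # mu_0 H_A in tesla (value used in our simulations)
alpha  = 0.01             # Gilbert damping
w_fl, w_dl = 4e10, 1e10   # made-up torque magnitudes (rad/s) standing in for the fitted terms

def total_torque(m):
    v = np.array([m[1], m[0], 0.0])                        # the (m_y, m_x, 0) vector of Sec. II
    T_anis = -gamma*muH_A*m[2]*np.cross(m, [0.0, 0.0, 1.0])
    T_sot  = w_fl*np.cross(m, v) + w_dl*np.cross(m, np.cross(v, m))
    return T_anis + T_sot

def rhs(t, m):
    m = m/np.linalg.norm(m)
    G = total_torque(m)                                    # total torque, perpendicular to m
    return (G + alpha*np.cross(m, G))/(1 + alpha**2)       # LLG solved explicitly for dm/dt

m0 = np.array([0.02, 0.0, -1.0]); m0 /= np.linalg.norm(m0) # start slightly off the south pole
sol = solve_ivp(rhs, (0.0, 2e-9), m0, max_step=2e-13, rtol=1e-8)
print(sol.y[:, -1])        # final magnetization direction after 2 ns of driving
```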
We directly compute the spin dynamics with the ab initio fitted spin-orbit torques as input to Eq. <ref>. In the simulation, we choose μ_0 H_ A=20 T by calculating the energy difference between out-of-plane and in-plane magnetic configurations <cit.>. For the Gilbert damping, we choose α=0.01 <cit.>. Fig. <ref>(b) shows a typical zero-temperature magnetic trajectory when the applied electric field is larger than a critical threshold. The stable fixed point (θ_E,ϕ_E) corresponds to the same fixed point near ϕ=π/4 determined by the spin-orbit torkance shown in Fig. <ref>(a), but shifted by the presence of the anisotropy torque. There is another electric-field-driven stable point near the symmetry-related fixed point (θ_E,ϕ_E+π), reached depending on the initial state of the magnetization. Reversing the sign of the electric field makes the other two fixed points [(π-θ_E,-ϕ_E) and (π-θ_E,-ϕ_E-π)] become stable, so that it is possible to switch the magnetization from the south pole to the northern hemisphere.
The spin-orbit torques in monolayer Fe3GeTe2 lead to dynamics that are quite distinct from those of the conventional cases. First, the instability condition of the initial magnetization is very different from the cases found in bilayers. In the bilayer case, for a perpendicular easy-axis anisotropy, the spin-orbit torque is finite on the initial magnetization (±𝐳̂), see Fig. <ref>(a,b). For Fe3GeTe2 on the other hand, the torque on that initial magnetization is zero by symmetry as seen in Fig. <ref>. For this aspect of the reversal, the initial instability for Fe3GeTe2 has more in common with the instability for a bilayer system with an in-plane easy-axis along the ±𝐲̂, because in that case the torque is also zero.
The instability case for Fe3GeTe2 also differs significantly from that of the bilayer with in-plane easy-axis anisotropy. As seen in Fig. <ref>(b), when the magnetization in the bilayer system precesses around the easy axis, the dampinglike torque pushes magnetization toward the easy axis or away from it depending on the sign of the current but independent of the phase of the precession. This means that the dampinglike torque competes with the damping torque, which is a factor of α smaller than the precession torques. On the other hand, the torques shown in Fig. <ref>(a,b) have no net push toward the easy axis along the poles (due to the σ_xy symmetry making the poles saddle points for the spin-orbit torques) and so they do not compete with damping torque. For Fe3GeTe2, when the magnetization is near the poles, the spin-orbit torques compete with the anisotropy directly. This competition gives the unfortunate consequence that reversal instability in Fe3GeTe2 requires larger currents than might be the case for other symmetries. However, when the magnetization is close to the fixed points near the equator, the spin-orbit torque competes directly with the damping, giving smaller critical currents for the stability of those fixed points.
Once the critical current is reached and the 𝐳̂ direction becomes unstable for the magnetization, Fe3GeTe2 has the advantage over the bilayer system with perpendicular anisotropy that the switching is deterministic without any other symmetry breaking, such as in-plane magnetic fields, applied to the system. In the bilayer system without symmetry breaking, the magnetization goes to the 𝐲̂ direction. When the current is turned off, small fluctuations determine whether the magnetization reverses or returns to its original state. For Fe3GeTe2, on the other hand, as shown in Fig. <ref>, the stable minima near m_z=0 lie on one side of the equator or the other, so that when the current is turned off, the magnetization goes to the pole on that side of the equator.
§ DISCUSSION
Our findings have several experimental implications. The lowest order ImY^ F_2,2 has been found to be important in assisting the conventional dampinglike torque ImY^ D_1,1 in the perpendicular switching of bilayer CoPt/CuPt <cit.>. This combination shares similar traits with Fe3GeTe2: reversal requires mixing vector spherical harmonics with different m and nonzero out-of-plane torques when the magnetization is in-plane. Our numerical results also give a large time-reversal-odd dampinglike torque ImY^ D_2,2 in Fe3GeTe2, which can be tested in existing second-harmonic setups <cit.>.
In order to quantify all the symmetry-allowed higher-order torques, a complete sweep of the magnetization is required. Similar work has been done in the WTe2/Ni80Fe20 bilayer <cit.>. Instead of expanding the measured torques into trigonometric functions, we need to expand them into vector spherical harmonics and obtain the fitting parameters. As we have shown, the coefficients vary strongly as the chemical potential changes. Thus, adding a bias gate to change the charge density <cit.> in monolayer Fe3GeTe2 might be a way to find useful experimental conditions.
The critical electric field needed to switch the perpendicular magnetization in Fe3GeTe2 is high because the mirror symmetry σ_xy restricts torques to those with even m. This restriction requires the spin-orbit torques to compete with the anisotropy torque instead of the damping torque. This mirror symmetry can be broken in the presence of a substrate or an applied out-of-plane electric field, similar to the case of bilayer CoPt/CuPt <cit.>.
In summary, we perform first-principles calculations of spin-orbit torque in monolayer Fe3GeTe2 and discover that the bulk spin-orbit torque expressed in higher-order vector spherical harmonics can deterministically switch the perpendicular magnetization. We have provided a symmetry table for other reduced symmetry systems as well. Utilizing higher-order spin-orbit torque offers a new perspective to realize novel electrical control of magnetization.
§ ACKNOWLEDGEMENT
We thank Kirill Belashchenko and Alexey Kovalev for useful discussions. The work done at University of Alabama at Birmingham is supported by the National Science Foundation under Grant No. OIA-2229498, UAB internal startup funds, and UAB Faculty Development Grant Program, Office of the Provost.
F.X. also acknowledges support under the Cooperative Research Agreement between the University of Maryland and the National Institute of Standards and Technology Physical Measurement Laboratory, Award 70NANB14H209, through the University of Maryland.
§ SYMMETRY-CONSTRAINED FORM OF SPIN-ORBIT TORQUE IN VECTOR SPHERICAL HARMONICS BASIS
The symmetry allowed form of spin-orbit torque tensor can be obtained by averaging all possible symmetry transformed tensors τ^sym=1/N∑τ' where N is the number of symmetry operations and τ' indicates the tensor after the transformation. If we consider an orthogonal transformation to a Cartesian tensor, we can usually get the explicit transformation form under a rotation R
τ'_ijk...=∑_αβγ...det(R) R_iαR_jβR_kγ...τ_αβγ....
In Cartesian form, such as T_i=τ_ijk...E_j m_k... extended to arbitrary order in m̂, the number of nonzero components in the tensor τ grows exponentially as the tensor rank increases. It therefore becomes practically intractable to obtain the symmetry-allowed higher-order terms of τ in m̂ in Cartesian form.
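As an illustration of the averaging τ^sym=1/N∑τ' described above, the snippet below symmetrizes an arbitrary rank-3 Cartesian tensor over a C_3z subgroup; extending it to the full D_3h group (and including the det(R) factor for axial quantities) follows the same pattern. This is a generic sketch, not code used for the paper's results.

```python
import numpy as np

def rot_z(angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def transform(tau, R):
    # tau'_{ijk} = R_{ia} R_{jb} R_{kc} tau_{abc}
    return np.einsum("ia,jb,kc,abc->ijk", R, R, R, tau)

group = [rot_z(2.0 * np.pi * n / 3.0) for n in range(3)]      # {E, C3, C3^2}
tau = np.random.default_rng(0).normal(size=(3, 3, 3))          # arbitrary starting tensor
tau_sym = sum(transform(tau, R) for R in group) / len(group)

# the averaged tensor is invariant under every operation of the group
for R in group:
    assert np.allclose(transform(tau_sym, R), tau_sym)
print("nonzero components after averaging:", np.count_nonzero(np.abs(tau_sym) > 1e-12))
```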
We next describe the transformation of the torkance tensor in the expansion of vector spherical harmonics. For this purpose, it is convenient to write the tensor with a slightly different notation than that used in the main text. In what follows, the tensor τ relates the electric field E to the torque T according to:
T = τ· E
τ is the outer product of a vector spherical harmonic Y which specifies the torque direction, and a row vector C that contracts with the electric field:
τ = Y(θ,ϕ) ⊗ C
A coordinate transformation U of the system acts on both the magnetization and electric-field directions, and is represented by U_M̂ and U_Ê, respectively:
U_M̂ T = τ· (U_Ê E)
For operations which leave the crystal invariant, we require that the transformed torkance is also invariant, so that τ satisfies:
τ = U_M̂^-1τU_Ê
The above equation provides symmetry constraints on τ for a given symmetry transformation U.
In the following, we apply this procedure to Fe3GeTe2 for each of the material's symmetry operations. Monolayer Fe3GeTe2 has the point group symmetry D_3h <cit.>, which consists of one C_3 rotation around the z-axis, three C_2 rotations including one around the y-axis, and one mirror reflection with respect to the xy-plane, as shown in Fig. <ref>. Since we are interested in the case where the electric field is applied in-plane, it is convenient to consider the rotation symmetry around the z-axis first.
According to Eq. <ref>, the torkance tensor τ is invariant under a rotation because both torque T and electric field E follow the same transformation under a rotation. Since the vector spherical harmonics absorbs an extra phase under a rotation of angle γ, i.e., Y^(ν)_lm(θ,ϕ-γ)→Y^(ν)_lm(θ,ϕ)e^-imγ, the transformed vector coefficients C need to have additional phase factors e^imγ to compensate e^-imγ in order to keep the tensor τ invariant.
If the rotation symmetry is continuous, the only possible way is either m=0, C∝ẑ or m=±1, C∝x̂± iŷ. We can then get the relation C_l,±1(ŷ)=C_l,±1(x̂)e^∓iπ/2=∓iC_l,±1(x̂).
For the discrete rotation angle γ=2π/ν (ν=3 for Fe3GeTe2), we can consider ν cases depending on the modulus m mod ν=0,1,...,ν-1.
When we perform a rotation of angle γ from x-axis, the new electric field becomes E=(cosγ,sinγ,0)E. Because the torkance is invariant under this transformation, we can rewrite the x-axis as the new axis and the ϕ goes to ϕ-γ. This leads to the following equation
C_lm(x̂)e^-imγ=C_lm(x̂)cosγ+C_lm(ŷ)sinγ,
where C_lm(x̂) and C_lm(ŷ) are scalar coefficients that need to be obtained by fitting the numerical results. The full vector form is C_lm=C_lm(x̂)x̂+C_lm(ŷ)ŷ, which contracts with the applied E-field vector E. If m=nν, Eq. <ref> cannot be satisfied because the phase factor on the left-hand side is always 1. The reason is that the C_ν z symmetry only allows an out-of-plane field-induced torque (E∥ẑ) in this case.
Now let's consider the case m=nν±1, Eq. <ref> gives
C_l,nν±1(ŷ)=∓ i C_l,nν±1(x̂).
In fact, m=nν±1 are the only two possible cases for the C_3z rotation symmetry. For C_2z symmetry, Eq. <ref> is always satisfied for odd m. For C_4z and C_6z symmetries, we need to consider more cases, which are summarized in Table <ref> and Table <ref>.
Now we only need to focus on the applied field in the x̂ direction to obtain the additional symmetry constraints. Under the mirror reflection with respect to the xy-plane, both the torque T decomposed along the θ̂,ϕ̂ directions and the applied field are even. Thus the torkance τ has to be even under this transformation as well,
τ(θ,ϕ)→τ(θ,ϕ+π)=e^imπτ(θ,ϕ).
This enforces that m must be an even number, i.e., m=6n±2. The remaining crystal symmetry constraint is due to the C_2y rotation symmetry,
τ(θ,ϕ)→τ(π-θ,π-ϕ)=τ(π-θ,-ϕ).
Because T_θ=T·θ̂, T_ϕ=T·ϕ̂, and the applied field along x̂ all flip sign under the C_2y rotation, the torkance has to be even under this rotation.
To further simplify the constraint, we consider the real and imaginary part of vector spherical harmonics separately by observing the following relation:
ReY^(D,F)_lm(π-θ,-ϕ)=(-1)^l+m+1ReY^(D,F)_lm(θ,ϕ),
ImY^(D,F)_lm(π-θ,-ϕ)=(-1)^l+mImY^(D,F)_lm(θ,ϕ).
Given that m is always even, we are only allowed to have odd/even number l for the real/imaginary part of vector spherical harmonics.
Last but not least, we can always decompose the current-induced torque into time-reversal even and odd parts. Under the time-reversal symmetry transformation,
τ(θ,ϕ)→τ(π-θ,π+ϕ)=τ(π-θ,ϕ).
The real and imaginary part of vector spherical harmonics both satisfy
Y^ D_lm(π-θ,ϕ)=(-1)^l+1Y^ D_lm(θ,ϕ),
Y^ F_lm(π-θ,ϕ)=(-1)^l Y^ F_lm(θ,ϕ).
The final symmetry-constrained form of time-reversal even torkance under the applied E-field in x̂-direction is
τ^even(x̂)=∑_lm C^ F_2l,6m±2ImY^ F_2l,6m±2
+ C^ D_2l+1,6m±2ReY^ D_2l+1,6m±2,
and time-reversal odd torkance is
τ^odd(x̂)=∑_lm C^ D_2l,6m±2ImY^ D_2l,6m±2
+ C^ F_2l+1,6m±2ReY^ F_2l+1,6m±2.
By utilizing Eq. <ref>, we can write down the final symmetry-constrained form of torkance under the applied E-field in ŷ-direction is
τ^even(ŷ)=∑_lm ± C^ F_2l,6m±2ReY^ F_2l,6m±2
∓ C^ D_2l+1,6m±2ImY^ D_2l+1,6m±2,
and time-reversal odd torkance is
τ^odd(ŷ)=∑_lm ± C^ D_2l,6m±2ReY^ D_2l,6m±2
∓ C^ F_2l+1,6m±2ImY^ F_2l+1,6m±2.
Note that the scalar coefficients C_lm appearing in the equations above are the same. We only need to calculate the case of an applied E-field along x̂ and fit the numerical results with the vector spherical harmonics form to obtain the coefficients C_lm. Note that for this system, changing the direction of the electric field swaps Re and Im. The differences in these functions correspond to rotations through the azimuth by π/2.
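The coefficient extraction described above amounts to an ordinary linear least-squares fit. The sketch below assumes the symmetry-allowed vector-spherical-harmonic basis functions have already been evaluated on the same (θ,ϕ) grid as the computed torkance; building those basis functions is not shown, and the function and variable names are illustrative.

```python
import numpy as np

def fit_torkance(tau_theta, tau_phi, basis):
    """tau_theta, tau_phi: sampled torkance components for E || x, shape (Npts,).
    basis: dict mapping a label such as 'ReY_D_3,2' to a pair
           (b_theta, b_phi) of arrays evaluated on the same (theta, phi) grid.
    Returns the fitted coefficients C_lm as {label: value}."""
    labels = list(basis)
    # stack the theta- and phi-components so both are fitted simultaneously
    A = np.stack([np.concatenate(basis[name]) for name in labels], axis=1)
    y = np.concatenate([tau_theta, tau_phi])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return dict(zip(labels, coeffs))
```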
§ DETAILS OF THE TORQUE CALCULATION.
The first step is to obtain the tight-binding Hamiltonian in a localized atomic orbital basis using a combination of Quantum Espresso <cit.> and Wannier90 <cit.>.
In the Quantum ESPRESSO implementation, we use the pseudopotentials from PSlibrary <cit.> generated with a fully relativistic calculation using the projector augmented-wave method <cit.> and the local density approximation for exchange and correlation <cit.>. We utilize an 18×18×1 Monkhorst-Pack mesh <cit.>, a 2 nm vacuum layer, a 2720 eV cutoff energy, and a 1.36×10^-3 eV total-energy convergence threshold, and we obtain the relaxed positions with forces smaller than 0.02 eV/nm.
The second step is to use Wannier90 <cit.> to obtain the Hamiltonian in an atomic basis. We project the plane-wave solutions onto the atomic s,p,d orbitals of the Fe atoms and the s,p orbitals of the Ge and Te atoms. We then symmetrize the tight-binding Hamiltonian using TBmodels <cit.>. The final symmetrized tight-binding band structure agrees very well with the bands from the plane-wave method, as shown in Fig. <ref>(c). The band inconsistencies well above the Fermi level are expected and do not significantly affect our results because states near the Fermi level dominate the torkance calculations through the energy denominator in Eq. (<ref>).
Equipped with the spin-orbit-coupled tight-binding Hamiltonian, we then apply linear response theory to compute the torkance <cit.>. We denote the j^ th component of the torkance in response to an electric field along the i-direction by τ_ij. The even and odd components of the torkance are given by Eq. (<ref>) and Eq. (<ref>), respectively.
The torque operator is obtained as the change of magnetization with respect to time,
𝒯=dΔ/dt=i/ħ[H,Δ]=-i/ħ[Δ·𝐒̂ ,𝐒̂].
where 𝐒 is the spin operator and Δ is the time-reversal odd spin-dependent exchange-correlation potential.
We use a very dense k-mesh of 1200×1200 to evaluate the torkance in Eqs. <ref> and <ref>. Note that in the implementation we adopt the tight-binding approximation <cit.> in which the Wannier orbitals are perfectly localized on the atomic sites, and the spin operators 𝐒 are described by Pauli matrices spanned in the Wannier orbital basis.
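As a small self-contained check of the torque operator defined above, the snippet below constructs 𝒯=-i[Δ·𝐒,𝐒] (with ħ=1) in a single spin-1/2 subspace and verifies that it reduces to 𝐒×Δ; in the actual calculation the Pauli matrices are promoted to the full Wannier orbital basis, and the exchange-field value used here is an arbitrary toy number.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
S = 0.5 * np.array([sx, sy, sz])                      # spin operators, hbar = 1

delta = 1.5 * np.array([0.0, 0.0, 1.0])               # toy exchange field Delta
H_ex = np.einsum("i,ijk->jk", delta, S)               # Delta . S

def comm(a, b):
    return a @ b - b @ a

# torque operator components: T_k = -i [Delta.S, S_k]
T = np.array([-1j * comm(H_ex, S[k]) for k in range(3)])

# sanity check: the commutator reduces to the cross product (S x Delta)_k
T_cross = np.cross(S, delta, axisa=0, axisb=0).transpose(2, 0, 1)
assert np.allclose(T, T_cross)
print("torque operator check passed")
```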
We also adopt a constant broadening model to evaluate the longitudinal conductivity <cit.>,
σ_xx=e^2/πħ∑_kn mη^2 Re[⟨ψ_nk|∂ H/∂ k_x|ψ_mk⟩⟨ψ_mk|∂ H/∂ k_x|ψ_nk⟩]/[(E_m-μ)^2+η^2][(E_n-μ)^2+η^2].
Eq. <ref> also diverges as 1/η in the zero-broadening limit, similar to Eq. <ref>. The sheet resistance is then the reciprocal of the longitudinal conductivity per unit cell area.
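To make the constant-broadening evaluation concrete, here is a hedged sketch of Eq. <ref> applied to a generic two-band toy Hamiltonian rather than the Fe3GeTe2 Wannier Hamiltonian; the prefactor e²/ħ is set to 1 and the unit-cell-area normalization is omitted, so only the structure of the sum over bands and k-points is meant to be illustrative.

```python
import numpy as np

sig = np.array([[[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]]], dtype=complex)

def h_and_vx(kx, ky, M=1.5):
    # toy two-band model H(k) = d(k).sigma and its velocity dH/dkx
    d = np.array([np.sin(kx), np.sin(ky), M - np.cos(kx) - np.cos(ky)])
    dd_dkx = np.array([np.cos(kx), 0.0, np.sin(kx)])
    return np.einsum("i,ijk->jk", d, sig), np.einsum("i,ijk->jk", dd_dkx, sig)

def sigma_xx(nk=80, mu=0.0, eta=0.01):
    ks = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    total = 0.0
    for kx in ks:
        for ky in ks:
            H, Vx = h_and_vx(kx, ky)
            E, U = np.linalg.eigh(H)
            V = U.conj().T @ Vx @ U                  # <psi_n| dH/dkx |psi_m>
            lor = eta / ((E - mu) ** 2 + eta**2)     # eta / [(E-mu)^2 + eta^2]
            total += np.einsum("n,m,nm->", lor, lor, np.abs(V) ** 2)
    # BZ average; overall prefactor e^2/hbar set to 1, area normalization omitted
    return total / (np.pi * nk * nk)

print("sigma_xx (toy model, arbitrary units):", sigma_xx())
```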
|
http://arxiv.org/abs/2307.05700v1 | 20230711180725 | SepHRNet: Generating High-Resolution Crop Maps from Remote Sensing imagery using HRNet with Separable Convolution | [
"Priyanka Goyal",
"Sohan Patnaik",
"Adway Mitra",
"Manjira Sinha"
] | cs.CV | [
"cs.CV",
"eess.IV"
] |
SepHRNet: Generating High-Resolution Crop Maps from Remote Sensing imagery using HRNet with Separable Convolution
Priyanka Goyal, Sohan Patnaik, Adway Mitra, Manjira Sinha
Indian Institute of Technology Kharagpur
=================================================================================================================
The accurate mapping of crop production is crucial for ensuring food security, effective resource management, and sustainable agricultural practices. One way to achieve this is by analyzing high-resolution satellite imagery. Deep Learning has been successful in analyzing images, including remote sensing imagery. However, capturing intricate crop patterns is challenging due to their complexity and variability. In this paper, we propose a novel Deep learning approach that integrates HRNet with Separable Convolutional layers to capture spatial patterns and Self-attention to capture temporal patterns of the data. The HRNet model acts as a backbone and extracts high-resolution features from crop images. Spatially separable convolution in the shallow layers of the HRNet model captures intricate crop patterns more effectively while reducing the computational cost. The multi-head attention mechanism captures long-term temporal dependencies from the encoded vector representation of the images. Finally, a CNN decoder generates a crop map from the aggregated representation. Adaboost is used on top of this to further improve accuracy. The proposed algorithm achieves a high classification accuracy of 97.5% and IoU of 55.2% in generating crop maps. We evaluate the performance of our pipeline on the Zuericrop dataset and demonstrate that our results outperform state-of-the-art models such as U-Net++, ResNet50, VGG19, InceptionV3, DenseNet, and EfficientNet. This research showcases the potential of Deep Learning for Earth Observation Systems.
§ INTRODUCTION
Spatiotemporal crop mapping is a significant area of research in remote sensing and agriculture that uses satellite imagery to identify and monitor the cultivation of crops over time and space. Accurate crop mapping is crucial for sustainable agriculture, as it helps optimize crop yields and increase food production. Additionally, crop mapping provides valuable insights into land-use changes, agricultural practices, and crop management. Furthermore, spatiotemporal crop mapping has applications beyond agriculture, including monitoring invasive species, urban growth, and changes in natural habitats, making it a valuable tool for environmental monitoring and management.
Recent advancements in crop mapping have been driven by the increasing availability of high-resolution satellite imagery and advances in machine learning algorithms. Deep learning models, such as Convolutional Neural Networks (CNNs), have improved the accuracy and efficiency of crop mapping from spatial satellite data by representing complex structures associated with different types of croplands.
Integration of multiple data sources, such as weather data, soil information, and topographic data, into crop mapping models, has further enhanced their accuracy and provided a more comprehensive understanding of crop growth and management, enabling more accurate predictions of crop yields and environmental impacts. There has also been a shift towards using spatio-temporal models that consider the dynamics of crop growth over time and space. These models provide insights into the effects of climate change, natural disasters, and land-use changes on crop production, informing strategies for adaptation.
In this study, we explore the task of spatio-temporal crop mapping from remote sensing images using several recent developments in Deep Learning, such as separable convolutions. We propose a pipeline that takes a sequence of remote sensing images as input and incorporates a High-Resolution Network (HRNet) to capture spatial relations, together with an LSTM-based block or a self-attention mechanism to capture the temporal dependencies, to obtain a segmented image where each segment indicates a particular crop being grown. Promising results are obtained on the publicly available ZueriCrop dataset, and several metrics are used to validate the robustness of the proposed pipeline.
The novelty of the proposed approach lies in our use of recent Deep Learning models and concepts for this task. We use the High Resolution Network (HRNet) and show its strong improvement in comparison to well-established image segmentation approaches such as U-Net. Further, we show that the use of separable convolution is far more effective for this task in comparison to traditional convolution. Further, we show that utilizing the sequential information is useful to create a more accurate representation of the crop map, and explore the use of sequential models like LSTM and Multi-Head Self-Attention.
The contributions of this work can be summarized as follows:
* We propose SepHRNet: an encoder-decoder based pipeline for generating high-resolution crop maps from remote sensing image sequence
* We compare many recent Deep learning-based models at each step of the pipeline to choose the best one
* We show that use of separable convolution instead of standard convolution and multi-head self-attention instead of LSTM improve the spatial and temporal representation respectively
* We show that Boosting can help the models further
To establish the veracity of our contributions, we carry out extensive experiments on the ZueriCrop dataset, which contains sequences of remote sensing imagery over farmlands with ground-truth labels of crop production. We test different aspects of our proposed pipelines against alternate approaches. We consider and compare different deep learning architectures for the spatial component, as well as different convolution techniques. Regarding the temporal component, we compare LSTM and self-attention. Finally, we show how the use of Boosting (Adaboost) can further improve the performance of the proposed pipeline. We carry out a detailed ablation study to establish the importance of each part of the proposed pipeline.
The following section includes a description of prior work in the domain of mapping crop types using remote-sensing images in a spatiotemporal setting. Section 3 provides details about the dataset used along with data processing. In Section 4, the methodology is presented, including baselines and the proposed architecture, with a detailed mathematical representation. Training details, along with mainstream experiments, simulation results, and a performance comparison of the proposed model with existing ones, are explained in Section 5. Section 6 presents an ablation study conducted. Finally, the last section presents the conclusions drawn from this work.
§ RELATED WORK
Crop mapping is an important task for agricultural planning. In recent years, remote sensing has emerged as a useful source of information for such crop mapping. Various techniques have been employed, including deep learning, time-series analysis, and machine learning, resulting in high classification accuracy for different crop types.
§.§ Crop Mapping
Mazzia et al. (2021) <cit.> utilized multi-temporal Sentinel-2 imagery and crop information from the USDA National Agricultural Statistics Service (NASS) to train and evaluate their proposed spatiotemporal recurrent neural networks (STRNNs).
Turkoglu et al. (2021) <cit.> introduced ms-convSTAR (multistage ConvRNN) and evaluated its performance on the Zuericrop dataset. They compared its performance with RF, LSTM, TCN, Transformer network, Unet, Unet + convLSTM, and Bi-convGRU.
Konduri et al. (2020) <cit.> employed the Cluster-then-label approach using Multivariate Spatio-Temporal Clustering and Mapcurves on the MODIS NDVI and USDA CDL dataset.
Rußwurm et al. (2019) <cit.> proposed the Breizhcrop time series dataset for crop identification and evaluated different models including RF, TCN, MSResNet, InceptionTime, OmniscaleCNN, LSTM, StarRNN, and Transformer.
Rustowicz et al. (2019) <cit.> introduced the first small-holder farms' crop type dataset of Ghana and South Sudan. They compared the performance of 2D U-Net + CLSTM, 3D CNN with RF, and bidirectional sequential encoder.
Khaleque et al. (2020) <cit.> utilized Sentinel-2 time-series data and machine learning algorithms to classify crops, considering temporal variations. Chen et al. (2022) <cit.> integrated convolutional neural networks (CNNs) and long short-term memory networks (LSTMs) for spatiotemporal crop mapping.
Zhang et al. (2021) <cit.> combined spectral-temporal features and multi-scale spatial information using a multi-task CNN and morphological profile (MP) technique. Zhu et al. (2021) <cit.> employed a multi-scale CNN with random forest (RF) classification for crop mapping from multi-temporal Landsat imagery.
Temporal variability of crop reflectance was considered by Liu et al. (2020) <cit.> using Sentinel-2 data and normalized difference vegetation index (NDVI) and enhanced vegetation index (EVI) as input features. Liu et al. (2021) <cit.> studied the temporal consistency and variability of optical indices for crop mapping in Southwest China.
Yang et al. (2021) <cit.> proposed a multi-scale feature fusion approach for crop mapping using Sentinel-2 imagery. Hu et al. (2021) <cit.> used CNNs and a feature pyramid network (FPN) for multi-scale feature extraction. Shao et al. (2020) <cit.> combined spectral indices and image patches with a UNet architecture for crop classification.
These advancements are crucial for accurate crop mapping, enabling effective crop management and decision-making in agriculture.
§.§ Deep Learning architectures
VGG19 <cit.> is a deep architecture with 19 layers, enabling it to learn hierarchical representations and capture complex patterns in crop images. However, its large number of parameters makes it computationally expensive, memory-intensive, and slower compared to other models. ResNet50 <cit.> introduces residual connections that facilitate training deeper networks and capture discriminative features for crop mapping. However, its larger model size can be challenging in terms of memory usage and computational resources. InceptionV3 <cit.> incorporates multi-scale feature extraction through inception modules with parallel convolutional layers of different sizes. This reduces the number of parameters, allowing for faster training and inference. However, multiple parallel convolutional layers increase computational complexity and may lead to information loss, although auxiliary classifiers help mitigate this issue.
DenseNet121's <cit.> dense connectivity pattern allows for the direct flow of information between layers, enhancing gradient propagation and feature reuse. This improves parameter efficiency and captures fine-grained details and local features in crop images. However, direct connections increase memory usage during training and inference. EfficientNetV2 <cit.> uses a compound scaling method to optimize resource allocation and achieve computational efficiency while maintaining accuracy. It incorporates Squeeze and Excitation (SE) blocks to capture important features and depthwise separable convolutions to reduce computational cost. However, the complex scaling coefficients may reduce model interpretability, and depthwise separable convolutions might impact the capture of complex relationships in crop images. HRNet <cit.> captures high-resolution details, multi-scale features, and contextual information. It maintains high-resolution representations throughout the network, captures fine-grained and coarse-grained features simultaneously, and integrates information from different levels of abstraction. However, it requires more computational resources, resulting in increased memory usage and longer training and inference times.
UNet <cit.> is effective in capturing fine details and spatial relationships within an image. The UNet architecture consists of an encoder and a decoder, with skip connections between corresponding layers in the encoder and decoder. The encoder captures hierarchical information at different scales, while the decoder upsamples the feature maps. The skip connections help preserve spatial information during upsampling.
UNet++ <cit.> builds upon the skip connections of the original UNet by introducing nested skip pathways, which allow for the integration of multi-scale contextual information. Each encoder block is connected not only to the corresponding decoder block but also to higher-resolution decoder blocks. This nested skip connection leverages multi-scale contextual information, enabling the network to capture both local and global contextual information more comprehensively. UNet++ offers an advanced and powerful architecture for crop mapping, allowing for more accurate and detailed segmentation of crops in satellite images or other remote sensing data.
Each of these deep learning models has unique architectural characteristics that make it suitable for crop mapping. These models can be trained to classify different types of crops or identify crop boundaries within an image. The input to the network is an image patch or a satellite image, and the output is a pixel-wise segmentation map where each pixel is assigned a class label representing the crop type or boundary. These models have demonstrated effectiveness in capturing spatial dependencies, contextual information, and fine-grained details, which are crucial for accurate crop mapping.
§ DATASET: ZUERICROP
ZueriCrop <cit.> is a large-scale, time-series dataset of satellite imagery of agricultural fields in Switzerland. It contains 116,000 field instances, each with ground-truth labels for 48 different crop types. The images were captured in 2019 under a variety of weather and lighting conditions, over a 50 km x 48 km area in the Swiss Cantons of Zurich and Thurgau. The dataset was made publicly available in 2021.
The images were acquired by the Sentinel-2 satellite, which provides high-resolution (10-meter) multispectral imagery. The images were atmospherically corrected using the standard Sen2Cor software package.
Several crop land images can be seen in Figure <ref>.
It is the largest publicly available dataset of time-series satellite imagery of agricultural fields. It contains a variety of crop types, some of which are difficult to distinguish from each other using satellite imagery. It may not be representative of agricultural practices in other parts of the world, since it was collected for a small area of Switzerland. Despite these challenges, ZueriCrop is a valuable resource for research in precision agriculture.
Since the images have a resolution of 24×24 and most deep learning architectures require higher-resolution inputs, we resize the images using interpolation as well as padding. Moreover, the same transformation is also applied to the crop map in order to align the input with the output. After this, we normalize all the images by calculating the per-channel mean and standard deviation of the pixels in order to make the pixel distribution uniform across channels.
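A minimal sketch of this preprocessing is given below, assuming PyTorch tensors; the number of spectral bands, the target resolution of 128, and the use of bilinear interpolation for images and nearest-neighbour interpolation for the label map are illustrative assumptions rather than the exact choices of our implementation.

```python
import torch
import torch.nn.functional as F

def preprocess(images, labels, size=128, mean=None, std=None):
    """images: (T, C, 24, 24) float tensor, labels: (24, 24) long tensor."""
    images = F.interpolate(images, size=(size, size), mode="bilinear",
                           align_corners=False)
    # nearest-neighbour keeps the class ids of the crop map intact
    labels = F.interpolate(labels[None, None].float(), size=(size, size),
                           mode="nearest").squeeze().long()
    if mean is None:                     # per-channel statistics
        mean = images.mean(dim=(0, 2, 3), keepdim=True)
        std = images.std(dim=(0, 2, 3), keepdim=True)
    return (images - mean) / (std + 1e-6), labels

imgs = torch.rand(71, 4, 24, 24)         # 71 time frames, 4 spectral bands (assumed)
lbls = torch.randint(0, 48, (24, 24))
x, y = preprocess(imgs, lbls)
print(x.shape, y.shape)                  # torch.Size([71, 4, 128, 128]) torch.Size([128, 128])
```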
§ METHODOLOGY
The task of spatio-temporal crop mapping involves encoding the sequence of images, capturing temporal dependencies across the encodings, and finally obtaining the crop map as accurately as possible. We propose a deep learning-based solution to effectively capture temporal dependencies among the satellite images of land captured at different times of the year to obtain the crop distribution over that region.
The rest of this section explains the proposed pipeline and its various parts that we explored to finally come up with a design that achieved the best performance on the ZueriCrop dataset.
§.§ Pipeline Design
Encoder - Decoder Architecture
The standard encoder-decoder pipeline can be used with the motivation of treating images at different time frames independently. This is essentially the image segmentation problem of computer vision, and well-known models that fit this approach are UNet <cit.> and UNet++ <cit.>. These can produce crop segmentation maps for the images at different time points independently. Subsequently, we compute the mean of all the resulting crop maps to aggregate the information and obtain the final crop map representing the land cover. The overview of this pipeline can be seen in Figure <ref>.
Encoder - LSTM - Decoder Architecture
An alternative paradigm of architecture is where we utilize the sequential relation between the images directly. Here, we can use convolutional neural networks such as VGG19 <cit.>, ResNet50 <cit.>, InceptionV3 <cit.>, DenseNet121 <cit.>, EfficientNetV2 <cit.>, or HRNet <cit.> to encode and obtain vector representation of the ZueriCrop images. VGG19, ResNet50, InceptionV3, DenseNet121, and EfficientNetV2 are pre-trained on ImageNet to learn generic features, that can be fine-tuned for crop mapping. They capture spatial dependencies and contextual information, which are crucial for accurate crop mapping. Next, a sequential model like LSTM <cit.> can be used in order to establish temporal relationships and obtain an aggregated representation of the land cover over the specified time frame.
We found that a stacked LSTM with three layers gives the best results.
Finally, the last hidden state of the LSTM block is fed to a Transposed Convolution <cit.> based decoder in order to obtain the segmented crop map. The overview of this architecture can be seen in Figure <ref>.
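The sketch below illustrates this Encoder-LSTM-Decoder flow with a tiny stand-in CNN encoder and decoder; only the three-layer bidirectional LSTM with hidden size 256 and no bias terms reflects the configuration described later in this paper, and all other layer choices are placeholders.

```python
import torch
import torch.nn as nn

class EncoderLSTMDecoder(nn.Module):
    def __init__(self, in_ch=4, feat_dim=256, n_classes=48):
        super().__init__()
        self.encoder = nn.Sequential(                      # stand-in CNN encoder
            nn.Conv2d(in_ch, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.lstm = nn.LSTM(feat_dim, 256, num_layers=3,
                            bidirectional=True, batch_first=True, bias=False)
        self.decoder = nn.Sequential(                      # stand-in decoder
            nn.ConvTranspose2d(512, 64, 4, stride=4), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, n_classes, kernel_size=32, stride=32))

    def forward(self, x):                                  # x: (B, T, C, H, W)
        b, t = x.shape[:2]
        f = self.encoder(x.flatten(0, 1)).view(b, t, -1)   # (B, T, feat_dim)
        out, _ = self.lstm(f)
        h = out[:, -1]                                     # last hidden state, (B, 512)
        return self.decoder(h[:, :, None, None])           # (B, n_classes, 128, 128)

model = EncoderLSTMDecoder()
print(model(torch.rand(2, 71, 4, 128, 128)).shape)
```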
Encoder - Self Attention - Decoder Paradigm
In this paradigm of architecture, we can use the same encoder and decoder types proposed in Section <ref>. However, instead of using a stacked LSTM block for temporal modeling, we use Self Attention <cit.> mechanism to aggregate the vector representation of the images from different time-points. The architecture overview of this pipeline can be seen in Figure <ref>.
§.§ Proposed Pipeline Design
After conducting exhaustive experimentation and hyperparameter tuning, we have developed a pipeline that achieves the best performance among all the candidates previously discussed. The overview of our proposed pipeline is illustrated in Figure <ref>.
We choose the Encoder-Self Attention-Decoder pipeline. In the encoder, we employ HRNet <cit.> to create a high-resolution representation of the satellite images at each time-point.
In the shallow layers of the network, we utilize spatially separable convolution to reduce the model's parameter count while maintaining performance. After obtaining vector representations of all image frames, we leverage multi-head attention <cit.> to capture long-term temporal dependencies and generate a comprehensive representation of the land over the entire time period. The aggregated representation is then fed into the decoder to produce the crop map. We call this combined model SepHRNet.
We further improve performance using Boosting. SepHRNet serves as the base model for the AdaBoost <cit.> algorithm, with a modified rule for updating the sampling probability of data points.
We combine multiple versions of SepHRNet trained on 80% of the data, using the weighted ensemble as specified by the AdaBoost algorithm. Further, the loss function for AdaBoost is designed as follows: each image is assigned an error value of 1 if the percentage of misclassified pixels is more than 20%, and -1 otherwise. As a result, the aggregated model demonstrates strong performance across the entire dataset.
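A hedged sketch of this boosting loop is shown below. The names train_base_model and pixel_error_rate are placeholders for training SepHRNet on a subset and computing its per-image fraction of misclassified pixels; the weight update follows the classic discrete AdaBoost rule applied to the ±1 error values described above.

```python
import numpy as np

def adaboost_ensemble(dataset, n_models=5, threshold=0.2):
    n = len(dataset)
    w = np.full(n, 1.0 / n)                       # sampling probabilities
    models, alphas = [], []
    for _ in range(n_models):
        idx = np.random.choice(n, size=int(0.8 * n), replace=False, p=w)
        model = train_base_model([dataset[i] for i in idx])   # placeholder
        # +1 if more than `threshold` of the pixels are misclassified, else -1
        e = np.array([1.0 if pixel_error_rate(model, s) > threshold else -1.0
                      for s in dataset])
        err = np.clip(w[e > 0].sum() / w.sum(), 1e-6, 1 - 1e-6)
        alpha = 0.5 * np.log((1.0 - err) / err)
        w = w * np.exp(alpha * e)                 # up-weight badly segmented images
        w = w / w.sum()
        models.append(model)
        alphas.append(alpha)
    return models, np.array(alphas) / np.sum(alphas)   # weights for the ensemble
```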
§.§ Components of Proposed Architecture
HRNet Encoder
Let x be the input to the HRNet network.
HRNet(x) = H_n(H_n-1(...H_2(H_1(x))...))
where H_i denotes the i_th stage of the HRNet network. Each stage consists of parallel branches, denoted as B_i, which operate on different resolutions of the input feature maps. The outputs of the branches in each stage are then combined to obtain the output of that stage. The HRNet network iteratively applies the stages H_i to the input x, with the final output being the output of the last stage H_n. This allows the network to capture and integrate features at multiple resolutions, enabling it to maintain high-resolution representations throughout the network.
Spatially separable Convolution
Convolution is a well-known technique in image processing, which is widely used in Convolutional Neural Networks for image representation. Here we have a rectangular kernel w, which represents a spatial pattern, and we scan the image with it to see which parts of it contains that pattern.
y(i, j) = ∑_m∑_n x(i-m, j-n) · w(m, n)
A typical CNN has many layers for convolution, each of which uses many kernels for parallel channels. The parameters w are not specified but learnt from data while training the neural network.
Spatially separable convolution is a convolutional technique that offers advantages over standard convolution, particularly in scenarios with high aspect ratio images or when applying filters to small image regions. By using different kernel sizes for the vertical and horizontal dimensions, it can reduce the number of parameters in a convolutional neural network (CNN) and improve generalization performance by avoiding overfitting.
z(i, j) = ∑_m, n x(i-m, j-n) · w_row(m) · w_col(n)
Equation <ref> represents the spatially separable convolution operation where z is the output obtained by convolving the input x with the row-wise filter w_row and the column-wise filter w_col. The summation is performed over the filter dimensions m and n, and the element-wise multiplication of the input and filters is performed at each spatial location (i, j).
In SepHRNet, we replace the standard convolution in shallow layers with spatially separable convolution. This replacement involves using two sequential convolutional layers with kernel sizes of k×1 and 1×k, respectively, instead of a single k×k kernel.
This modification maintains the same receptive field, reduces parameter count, and promotes more comprehensive interactions among pixels. As a result, our segmentation performance improves, especially considering the non-uniform distribution of land cover.
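A minimal PyTorch sketch of this replacement is shown below; the channel widths and kernel size are illustrative, and the exact placement of the module inside HRNet's shallow stages is not reproduced here.

```python
import torch
import torch.nn as nn

class SpatiallySeparableConv2d(nn.Module):
    """k x 1 followed by 1 x k convolution, same receptive field as k x k."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=(k, 1), padding=(k // 2, 0)),
            nn.Conv2d(out_ch, out_ch, kernel_size=(1, k), padding=(0, k // 2)),
        )

    def forward(self, x):
        return self.conv(x)

std = nn.Conv2d(64, 64, 3, padding=1)
sep = SpatiallySeparableConv2d(64, 64, 3)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(std), "->", count(sep))    # 36928 -> 24704 parameters
x = torch.rand(1, 64, 128, 128)
print(std(x).shape, sep(x).shape)      # identical spatial output sizes
```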
Self-Attention
Let q_t ∈ℛ^𝒹_𝓀, k_t ∈ℛ^𝒹_𝓀, and v_t ∈ℛ^𝒹_𝓀 represent the query, key, and value vectors, respectively, at time step t. Matrix representations of the query, key, and value vectors are denoted as Q = [q_1, q_2, …, q_T], K = [k_1, k_2, …, k_T], and V = [v_1, v_2, …, v_T], respectively. To compute the attention-weighted representation at a specific time step, we use the following equation:
Attention(Q, K, V) = softmax(QK^T/√(d_k))V
Here, d_k represents the dimension of the query, value, and key vectors, i.e., the dimension of the output vector obtained from the encoder.
The Attention function performs scaled dot-product attention, where the queries and keys are scaled by the square root of the key dimension (d_k) and the result is weighted by the softmax of the query-key dot product. The final output is obtained by multiplying the weighted values with the softmax weights.
In order to incorporate self-attention, we require three types of vectors: query, key, and value for each input in the sequence. To obtain these vectors, instead of a single layer at the end of the encoder, we utilize three feed-forward layers. This allows us to generate the necessary query-key-value triplet of vectors.
By summing the attention-weighted vectors, we obtain the aggregated representation of the cropland.
Figure <ref> illustrates the single-head attention block.
Multi-head Attention
Instead of having just one query-key-value triplet from the encoder, we obtain multiple triplets and compute the aggregated representation of all the queries, keys, and values. These representations are then concatenated and projected to the required dimension, resulting in the multi-head representation of the cropland illustrated in Figure <ref>. This approach allows us to attach more importance to the more complex structures in the cropland.
head_i = Attention(Q · W_Q^i, K · W_K^i, V · W_V^i)
Each head_i is computed by applying the Attention function to the transformed queries (Q · W_Q^i), keys (K · W_K^i), and values (V · W_V^i).
MultiHead(Q, K, V) = FFN(Concat(head_1, ..., head_h))
Here, head_i represents the output of a single-head self-attention, and FFN refers to a feed-forward network used to downsample the concatenated representation. The MultiHead function calculates the multi-head self-attention by concatenating the individual attention heads (head_i) and applying a feed-forward network.
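The aggregation described above can be sketched with PyTorch's built-in multi-head attention layer, as below; the 768-dimensional encodings and six heads follow the configuration stated in this paper, while the layer itself is only a stand-in for our implementation.

```python
import torch
import torch.nn as nn

class TemporalAggregator(nn.Module):
    def __init__(self, dim=768, heads=6):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=heads,
                                          batch_first=True)

    def forward(self, frame_feats):            # (B, T, dim) per-frame encodings
        attended, _ = self.attn(frame_feats, frame_feats, frame_feats)
        return attended.sum(dim=1)             # sum over time -> (B, dim)

agg = TemporalAggregator()
z = agg(torch.rand(2, 71, 768))                # 71 time frames per input
print(z.shape)                                 # torch.Size([2, 768])
```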
Decoder
The aim of the decoder is to create the segmentation map at the same resolution as the input images. We use transposed convolution for this purpose. The architecture details can be seen in Figure <ref>.
X = ReLU(BN(ConvTranspose(Z, W, S, padding) + B))
In the above representation, Z represents the input feature map, W denotes the learnable weights of the transposed convolution operation, S represents the stride, and padding refers to the amount of zero-padding applied to the input feature map. The ConvTranspose operation performs the transposed convolution on Z using W, S, and the specified padding. It upsamples the input feature map by performing the reverse of the convolution operation, effectively increasing its spatial dimensions. The resulting output feature map X is then obtained by adding a bias term B to the transposed convolution output. BN denotes the batch normalization block, and ReLU denotes the rectified linear unit activation function.
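A hedged sketch of the decoder is given below: each stage applies the transposed convolution, batch normalization, and ReLU of the equation above, and a stack of such stages maps the aggregated 768-dimensional representation to a 48-class map at 128×128 resolution; the intermediate channel widths are illustrative.

```python
import torch
import torch.nn as nn

def up_block(in_ch, out_ch, k=4, stride=2, padding=1):
    return nn.Sequential(
        nn.ConvTranspose2d(in_ch, out_ch, k, stride=stride, padding=padding),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True))

decoder = nn.Sequential(                 # 1x1 -> 4 -> 8 -> 16 -> 32 -> 64 -> 128
    up_block(768, 512, k=4, stride=1, padding=0),
    up_block(512, 256), up_block(256, 128), up_block(128, 64),
    up_block(64, 32), up_block(32, 16),
    nn.Conv2d(16, 48, kernel_size=1))    # per-pixel class logits

z = torch.rand(2, 768, 1, 1)             # aggregated representation
print(decoder(z).shape)                  # torch.Size([2, 48, 128, 128])
```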
§ EXPERIMENTAL EVALUATION OF PIPELINE
§.§ Training Procedure
The proposed architecture is trained using the sparse categorical cross-entropy loss function, which compares the softmax probabilities (P_p) with the ground truth labels (T_p) for each class. The loss is calculated according to Equation (<ref>):
L = -∑_p=1^N(T_p * log(P_p))
In this equation, P_p is obtained using the softmax function to normalize the class probabilities. The loss function assigns smaller values for smaller differences and larger values for larger differences between the predicted and actual values. The goal is to minimize the loss, with a perfect model achieving a loss value of zero.
The segmentation network is trained on a split of 80% for the train set (20,982 inputs) and 20% for the test set (6,995 inputs), with each input consisting of 71 images capturing different time frames.
During training, the network is optimized using the Adam optimizer with a learning rate of 1e-4, a batch size of 128, and a weight decay of 0.0001. These hyperparameters are determined through grid search cross-validation. A cosine learning rate scheduler is employed over 25 epochs, and the training utilizes two V100 32GB GPUs in a distributed setup.
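The optimization setup can be summarized by the following sketch, which assumes a model and data loader already exist and omits the distributed multi-GPU details; the loss is the per-pixel sparse categorical cross-entropy of the equation above, and the learning rate, weight decay, and cosine schedule follow the values stated in this section.

```python
import torch
import torch.nn.functional as F

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=25)

for epoch in range(25):
    for images, target in train_loader:        # images: (B, T, C, H, W), target: (B, H, W)
        logits = model(images)                 # (B, 48, H, W) class scores
        loss = F.cross_entropy(logits, target) # sparse categorical cross-entropy
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()
```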
Stacked bidirectional LSTM with three layers is explored to capture temporal dependencies. The LSTM has a hidden state and output size of 256, and the LSTM block excludes bias terms in linear activations.
The self-attention block employs the standard attention mechanism, producing an aggregated representation of 768-dimensional vectors, serving as the base version. The multi-head attention block utilizes six attention heads as the default configuration.
Five models are used as the default number of base models in the ensemble paradigm experiments.
§.§ Comparative Analysis of Architectures
Section <ref> introduces the proposed approach, which achieves the best performance. Table <ref> provides a summary of the experiments, utilizing multi-head attention with six attention heads. HRNet-base as the encoder outperforms other encoder variants. Moreover, incorporating the self-attention mechanism to capture temporal dependencies in the underlying data improves the performance of each individual model. The ensembling approach demonstrates strong performance across the entire dataset.
The experimental analysis demonstrates the performance of different encoder architectures with the ESD paradigm for crop mapping tasks. Among the models, HRNet-base achieves the highest performance across all metrics, with an accuracy of 0.975, precision of 0.701, recall of 0.733, F1-score of 0.717, and mIoU of 0.552. The use of self-attention for sequence modeling instead of LSTM improves the performance of all models. This finding highlights the effectiveness of the ESD paradigm and the self-attention mechanism for accurate crop mapping, enabling precise crop segmentation and classification.
Figure <ref> provides a comparative overview of the improvement in different metrics when modifying the pipeline. It demonstrates that using self-attention instead of an LSTM block better captures temporal dependencies across time frames. Furthermore, combining spatially separable convolution and standard convolution in the encoder architecture enables the model to understand the underlying cropland with higher precision and accuracy. Based on the promising results of our proposed approach, we anticipate its generalizability to other datasets in the future.
§.§ Visualization
In this section, Figure <ref> provides the visualization of some crop maps generated comparing the proposed model with baseline models HRNet and UNet. The proposed model seems to display better-quality crop maps on the ZueriCrop dataset. This serves as a motivation to incorporate Spatially separable convolution in place of the standard convolution of several standard encoder architectures. Moreover, ensembling the base models also shows promising results.
§ ABLATION STUDY
In this section, we present the results of various ablation experiments that demonstrate the enhanced performance of the models.
§.§ Baselines
In this section, we present the baseline results of the three proposed architectures discussed in Section <ref>.
In Table <ref>, "ELD", and "ED" refers to the Encoder-LSTM-Decoder, and Encoder-Decoder paradigm respectively.
Encoder - LSTM - Decoder Architecture
To establish a baseline performance for spatio-temporal crop mapping and understand the behavior of different encoder architectures, we compare the results on the test set, as shown in Table <ref>. For all experiments, we use a stacked bidirectional LSTM with three layers.
Upon evaluating the performance of various encoder architectures, we find that the HRNet-base encoder significantly outperforms other versions. HRNet maintains multi-resolution inputs by fusing information from multiple resolutions in parallel, enabling the model to capture both fine-grained and coarse information in the image. It also enhances the localization of land patches, resulting in promising crop mapping results.
Encoder - Decoder Architecture
We enumerate the evaluation metrics for different encoder-decoder architectures in this subsection. As mentioned in Section <ref>, we obtain the final crop distribution by taking the mean of the crop maps for each time frame. From Table <ref>, we observe that UNet++ outperforms UNet. Consequently, we choose HRNet-base as the preferred encoder due to its superior overall performance.
§.§ Spatially Separable Convolution
We evaluate the performance by replacing standard convolution with Spatially Separable Convolution in the encoder architecture. Table <ref> illustrates this change also improves the performance. By capturing features along one dimension before moving to the other, the model effectively captures the contours of the cropland, resulting in better outcomes.
The experimental analysis shows that HRNet-base with the ELD paradigm achieves the highest performance in terms of accuracy, precision, recall, F1-score, and mIoU. It outperforms other encoder architectures in accurately classifying crop types and identifying crop boundaries. UNet++ with the ED paradigm also demonstrates competitive performance. The incorporation of spatially separable convolution improves the performance of both paradigms, highlighting its effectiveness in capturing fine-grained details and spatial relationships. Overall, HRNet-base with the ELD paradigm and UNet++ with the ED paradigm, incorporating spatially separable convolution, are effective for crop mapping tasks, providing accurate and detailed segmentation of crops in satellite images or other remote sensing data.
§.§ Ensemble through Boosting
We explore whether the ensemble strategy improves upon the baseline experiments by employing the AdaBoost algorithm. Table <ref> shows that the AdaBoost algorithm indeed enhances the performance of the corresponding baseline architectures. Among the ensembles, the HRNet-base ensemble demonstrates the best performance, as detailed in Section <ref>.
The experimental analysis shows that HRNet-base with the ELD paradigm achieves the highest accuracy, precision, recall, F1-score, and mIoU among the encoder architectures and paradigms. The ensemble strategy improves the performance of all models, with HRNet-base and UNet++ demonstrating competitive results. This highlights the effectiveness of the ensemble strategy for crop mapping tasks. Overall, HRNet-base with the ELD paradigm and UNet++ with the ED paradigm, combined through an ensemble strategy, offer accurate and detailed crop segmentation, enabling precise crop classification and boundary delineation.
§.§ Separable Convolution with Boosting
We also see the impact of combining spatially separable convolution at the encoding stage, along with AdaBoost. We find that this results in improvement of performance in all baseline architectures of the encoder.
The experimental analysis for Table <ref> reveals that HRNet-base with the ELD paradigm achieves the highest accuracy, precision, recall, F1-score, and mIoU among the different encoder architectures and paradigms. The incorporation of spatially separable convolution and the ensembling approach further improves the performance of all models. HRNet-base with the ELD paradigm achieves remarkable results with an accuracy of 0.964, precision of 0.692, recall of 0.717, F1-score of 0.704, and mIoU of 0.547. The UNet++ model with the ED paradigm also demonstrates competitive performance. This demonstrates the effectiveness of the spatially separable convolution and ensembling approach in enhancing crop mapping tasks. Overall, the combination of HRNet-base with the ELD paradigm and UNet++ with the ED paradigm, incorporating spatially separable convolution and employing an ensembling approach, provides accurate and detailed crop segmentation, enabling precise classification and boundary delineation of crops.
§.§ Self-attention versus LSTM
Instead of using LSTM to capture temporal dependencies between cropland representations at different time frames, we employ a self-attention mechanism to better weigh the contribution of each representation. In all experiments listed in Table <ref>, we utilize multi-head attention with six attention heads. As expected, HRNet-base as the encoder outperforms other encoder variants. Additionally, incorporating the self-attention mechanism improves the performance of each individual model, enhancing the capture of temporal dependencies in the data.
Its experimental analysis reveals that HRNet-base with the ESD paradigm achieves the highest accuracy, precision, recall, F1-score, and mIoU among the encoder architectures, making it an effective choice for crop mapping. Comparing the ELD and ESD paradigms, the ESD paradigm consistently outperforms the ELD paradigm across various encoder architectures in terms of accuracy, precision, recall, F1-score, and mIoU, indicating its effectiveness for crop mapping tasks. HRNet-base demonstrates superior performance in accurately classifying crop types and identifying crop boundaries compared to other encoders. UNet++ with ED, EfficientNetV2, and InceptionV3 with ESD also show competitive performance, while VGG19 exhibits lower performance. HRNet-base with the ESD paradigm emerges as a powerful choice, offering high accuracy, precision, recall, F1-score, and mIoU, which are crucial for precise crop classification and boundary delineation.
§ CONCLUSION
The aim of this work is to generate high-resolution crop maps based on remote sensing imagery. We use image sequences collected over a period of time, and aim to incorporate this temporal information into the model for more robust estimation of the segmented crop maps. For this purpose, we proposed a deep learning pipeline using Encoder-Self Attention-Decoder structure, which we named SepHRNet. For each of the parts, we compared multiple baselines on multiple criteria, and chose the best-performing options. In the encoder, HRNet along with spatially separable convolution is used, followed by Multi-Head Self-Attention followed by decoder based on Transposed Convolutions which produces the segmented map at the original resolution. Further, we see that the results can improve significantly by building an ensemble of SepHRNet by Adaboost. The pipeline was tested on the Zuericrop dataset for mapping 48 different types of crops over Switzerland. The proposed model demonstrated high accuracy, precision, recall, F1-score, and mIoU, making it an effective choice for crop mapping. This work highlights the importance of separable convolution for spatial modeling and multi-head self-attention for temporal modeling. Future work will include scaling up the proposed architecture for mapping of larger regions with more crop-types, and over different regions of the world.
|
http://arxiv.org/abs/2307.04106v2 | 20230709060722 | Parametric Depth Based Feature Representation Learning for Object Detection and Segmentation in Bird's Eye View | [
"Jiayu Yang",
"Enze Xie",
"Miaomiao Liu",
"Jose M. Alvarez"
] | cs.CV | [
"cs.CV"
] |
Parametric Depth Based Feature Representation Learning for Object Detection and Segmentation in Bird’s-Eye View
Jiayu Yang^1,3^*, Enze Xie^2, Miaomiao Liu^1, Jose M. Alvarez^3
^1Australian National University, ^2The University of Hong Kong, ^3NVIDIA
{jiayu.yang, miaomiao.liu}@anu.edu.au, [email protected], [email protected]
Accepted 08-Jul-2023. Received 18-Jun-2023; in original form 23-May-2023
========================================================================================================================================================================================================================================
Figure: Given multi-view images and camera parameters, our framework utilizes parametric depth to transform image features into BEV space for jointly estimating 3D object detection, BEV segmentation, and a BEV visibility map.
Recent vision-only perception models for autonomous driving achieved promising results by encoding multi-view image features into Bird's-Eye-View (BEV) space. A critical step and the main bottleneck of these methods is transforming image features into the BEV coordinate frame. This paper focuses on leveraging geometry information, such as depth, to model such feature transformation. Existing works rely on non-parametric depth distribution modeling, leading to significant memory consumption, or ignore the geometry information to address this problem. In contrast, we propose to use parametric depth distribution modeling for feature transformation. We first lift the 2D image features to the 3D space defined for the ego vehicle via a predicted parametric depth distribution for each pixel in each view. Then, we aggregate the 3D feature volume based on the 3D space occupancy derived from depth to the BEV frame. Finally, we use the transformed features for downstream tasks such as object detection and semantic segmentation. Existing semantic segmentation methods also suffer from a hallucination problem as they do not take visibility information into account. This hallucination can be particularly problematic for subsequent modules such as control and planning. To mitigate the issue, our method provides depth uncertainty and reliable visibility-aware estimations.
[^*The work is done during an internship at NVIDIA]
We further leverage our parametric depth modeling to present a novel visibility-aware evaluation metric that, when taken into account, can mitigate the hallucination problem.
Extensive experiments on object detection and semantic segmentation on the nuScenes datasets demonstrate that our method outperforms existing methods on both tasks.
§ INTRODUCTION
In autonomous driving, multiple input sensors are often available, each of which has its own coordinate frame, such as the image coordinate frame used by RGB cameras or the egocentric coordinate frame used by the Lidar scanner. Downstream tasks, such as motion planning, usually require inputs in a unified egocentric coordinate system, like the widely used Bird's Eye View (BEV) space. Thus, transforming features from multiple sensors into the BEV space has become a critical step for autonomous driving. Here, we focus on this transformation for the vision-only setup, where we take as input multi-view RGB images captured at a single time stamp by cameras mounted on the ego vehicle and output estimation results, such as object detection and segmentation, in a unified BEV space, see Fig. <ref>.
In general, accurate depth information is crucial to achieve effective transformations.
Early methods<cit.> forgo explicit depth estimation and learn implicit feature transformations using neural networks, which suffers from a generalization problem since the neural network does not have an explicit prior on the underlying geometric relations. More recent methods <cit.> adopt explicit but simplified depth representations for the transformation, which either require large memory consumption, limiting the resolution <cit.>, or over-simplify the representation, leading to noise in the BEV space<cit.>. Moreover, these simplified depth representations cannot efficiently provide visibility information. As downstream tasks such as semantic segmentation are trained using aerial map ground truth, the lack of visibility estimation usually results in hallucination effects where the network segments areas that are not visible to the sensor <cit.>, see Figure <ref>. As a consequence, those estimations can mislead downstream planning tasks, as it is extremely dangerous to drive towards a road that is hallucinated but actually non-drivable, especially at high speed.
To address these limitations, we propose to adopt an explicit parametric depth representation and geometric derivations as guidance to build a novel feature transformation pipeline. We estimate a parametric depth distribution and use it to derive both a depth likelihood map and an occupancy distribution to guide the transformation of image features into the BEV space. Our approach consists of two sequential modules: a geometry-aware feature lifting module and an occupancy-aware feature aggregation module. Moreover, our parametric depth-based representation enables us to efficiently derive a visibility map in BEV space, which provides valuable information to decouple visible and occluded areas in the estimations and thus mitigate the hallucination problem. We also derive ground-truth visibility in BEV space, which enables us to design a novel evaluation metric for BEV segmentation that takes visibility into account and reveals insights into selected recent methods <cit.> in terms of estimation on visible regions and hallucination on occluded regions.
Our contributions can be summarized as follows:
* We propose a geometry-aware feature transformation based on parametric depth distribution modeling to map multi-view image features into the BEV space. Our depth distribution modeling enables the estimation of visibility maps to decouple visible and occluded areas for downstream tasks.
* The proposed feature transformation framework consists of a novel feature lifting module that leverages the computed depth likelihood to lift 2D image features to the 3D space; and a feature aggregation module to project feature to the BEV frame through the derived 3D occupancy.
* We further propose a novel visibility-aware evaluation metric for segmentation in BEV space that reveals insights into estimation on visible space and hallucination on occluded space.
Extensive experiments on object detection and semantic segmentation on the nuScenes dataset demonstrate the effectiveness of our method, yielding state-of-the-art results on both tasks with negligible compute overhead.
§ RELATED WORK
External depth based feature transformations.
When given depth input, either from a Lidar sensor or from stereo matching, image features can easily be transformed into BEV space<cit.>. PointPillars<cit.> extracts features from a 3D point cloud and aggregates the features into BEV space. PseudoLidar<cit.>-based methods first estimate depth using stereo matching given a stereo image pair as input, and then unproject the features based on the estimated depth. However, in real-life applications, Lidar sensors or stereo image inputs are not always available, which limits this line of methods.
Feature transformations without reliable depth input.
Without reliable depth input, various feature transformation methods have been proposed<cit.>, starting from early methods<cit.> that learn implicit feature transformations using neural networks. Learned transformations can suffer from the generalization problem, since the neural network does not explicitly account for changes in the cameras' intrinsic and extrinsic parameters. Recent methods <cit.> adopt various depth representations to explicitly transform features to the BEV space based on multi-view geometry. The key in these methods is the underlying depth representation, which determines the resolution and accuracy the feature transformation module can achieve. For instance, LSS <cit.> adopts a non-parametric depth representation. It represents depth as a discretized probability density function along each visual ray, which can be treated as a categorical distribution over depth, and forms a depth probability volume over all pixels in an image. When the sampling rate is sufficient, such a non-parametric depth distribution can adequately represent a large variety of depths, including multi-modal depth distributions. In practice, however, estimating such a depth representation requires the backbone to produce a probability volume that grows cubically with the input image size and increases significantly with the number of input images, which limits the image and depth resolution.
To address this limitation, M^2BEV <cit.> adopts a simplified depth representation that assumes the depth of all pixels follows a uniform distribution. Under this assumption, features are directly lifted to every location on the visual ray, resulting in identical features along the entire ray. Follow-up works <cit.> adopted similar depth representations. Such simplified representations have an advantage in efficiency, as the backbone network does not need to estimate any parameters for the depth, but they can cause ambiguity and noise in the 3D space.
Unlike the non-parametric depth distribution used in <cit.> or the uniform depth distribution in M^2BEV<cit.>, we adopt a parametric depth distribution to model pixel-wise depth for feature lifting. A parametric depth distribution represents depth as a continuous distribution, such as the Gaussian or the Laplacian distribution, and its estimated distribution parameters can be used to evaluate the depth likelihood or depth probability at any given depth value along each ray. Modeling the depth of a pixel takes only two parameters, (μ,σ) for a Gaussian and (μ,b) for a Laplacian, so it can be more efficient than a non-parametric distribution. Moreover, its continuous nature allows evaluating the depth likelihood at any point along the visual ray, which can achieve a higher depth resolution than the discretized non-parametric distribution. We specifically design our pipeline around parametric depth to improve the 2D-to-BEV feature transformation, and we also propose the derivation of visibility for subsequent planning tasks and visibility-aware evaluations.
Aggregating 3D features into BEV space. Given the lifted features in 3D space, most existing works, including LSS <cit.> and M^2BEV <cit.>, use the feature concatenation method introduced by PointPillars<cit.> for transforming 3D features into BEV space. The 3D feature volume is split along the horizontal dimensions and interpreted as pillars of features. Then, a feature vector is created by concatenating features along the vertical dimension for each pillar. All the concatenated features form a 2D feature map, which is converted into a BEV feature map by a few convolution layers. This design gives each voxel along the Z-axis an equal contribution to the final BEV feature. However, this method can be affected by noisy features in empty spaces. We thus propose to compress the features based on a space occupancy probability calculated from the parametric depth distribution. Our proposed method largely reduces the influence of those empty voxels on the aggregated features.
Joint Detection and Segmentation in BEV space.
M^2BEV recently proposed a unified detection and segmentation framework in BEV space, which we leverage to evaluate the effectiveness of our method. Specifically, the image features are transformed into a unified BEV feature, which is used by two parallel heads, a detection head and a segmentation head, to achieve multi-task estimation. M^2BEV leverages a detection head design from Lidar-based detection methods <cit.> and modifies it to better suit camera-based methods. Its segmentation head is inspired by the design from <cit.>. However, in contrast to prior work, we leverage the proposed explicit feature transformations based on parametric depth to address its weaknesses.
Temporal extension.
A few concurrent methods <cit.> propose to utilize temporal information to further boost segmentation and detection performance in BEV space and have achieved promising results. Most of these methods, including BEVFormer<cit.>, BEVerse<cit.>, and BEVDet4D<cit.>, are based on the feature transformation module in LSS<cit.>.
<cit.> adopts depth supervision and temporal stereo matching to improve depth quality and further proposes a more efficient implementation of LSS's lift-splat step. <cit.> queries 2D features from the projected locations of 3D voxels, which does not explicitly use depth and is similar to the uniform depth assumption in M^2BEV<cit.>. Our contributions, focusing on depth representation, feature transformation and visibility estimation, are orthogonal to the temporal extensions of these methods, and our method can potentially be applied to them to further boost their performance and enable efficient visibility inference.
§ METHOD
Let us now introduce our framework to jointly perform segmentation and object detection. As shown in Fig. <ref>, our framework comprises three fundamental components: feature extraction, feature transformation, and multi-task estimation. The framework's key contributions include a parametric depth decoder integrated into the feature extraction, a geometry-aware feature lifting module, and an occupancy-aware feature aggregation module. Furthermore, we introduce a visibility estimation module as a constituent of the multi-task estimation that provides crucial visibility information for downstream planning tasks.
§.§ Problem Statement
Let { I_i} _i=1^N, I_i∈ℝ^H× W × 3,
be a set of RGB images taken at the same time slot, H and W define the image dimension, and { K_i, R_i, T_i}_i=1^N represent the intrinsic and extrinsic parameters for their corresponding camera poses, respectively. We focus on lifting the image features f_i^2D∈ℝ^H× W × CH to the 3D space as f^3D∈ℝ^X'× Y' × Z'× CH and then aggregate them to the BEV space as f^BEV∈ℝ^X× Y × CH_B for 3D object detection and segmentation.
§.§ Parametric Depth Distribution Modelling
Let us first introduce our parametric depth distribution modelling. Given an image I_i, we extract its latent features f_i^T using a backbone network followed by an image feature decoder network to extract 2D image features, f_i^2D, see fig. <ref>. Then, following depth estimation methods <cit.>, we adopt a Laplacian distribution to model depth in real-world scenarios, where the depth distribution for each pixel is given by,
ℒ(d|μ,b) = 1/2bexp(-|d-μ|/b),
where μ provides an estimation of the depth, and b is the diversity parameter of the distribution, see Fig. <ref>. The goal in this module is to estimate (μ, b).
We design the parametric depth decoder network Φ_θ to map the latent feature to the parameter space of the depth distribution: Φ_θ: ℝ^H× W× CH_T→ℝ^H× W× 2,
where CH_T is the latent feature dimension. Note that when the ground-truth depth for each pixel is known, the depth distribution becomes a delta function, where the depth probability p(d_gt) on ground-truth depth d_gt is one and zero anywhere else. However, in practice, the depth is unknown for each pixel. Given our modelled depth distribution, we can calculate the depth likelihood analytically based on our parametric modelling.
Fig. <ref> shows an example of depth distribution where μ gives an estimate of the depth and b could be interpreted as the uncertainty of each estimation. Larger values of b correspond to areas where the estimation is more uncertain.
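As a concrete illustration (not the authors' implementation; the module name, channel layout, and the 60 m depth cap are assumptions), a parametric depth decoder only needs to output two channels per pixel, and the Laplacian likelihood can then be evaluated in closed form:

import torch
import torch.nn as nn
import torch.nn.functional as nnF

class ParametricDepthDecoder(nn.Module):
    # Maps latent features (B, CH_T, H, W) to per-pixel Laplacian parameters (mu, b).
    def __init__(self, in_channels, max_depth=60.0):
        super().__init__()
        self.head = nn.Conv2d(in_channels, 2, kernel_size=1)
        self.max_depth = max_depth

    def forward(self, latent):
        out = self.head(latent)
        mu = torch.sigmoid(out[:, :1]) * self.max_depth  # depth estimate, constrained to (0, max_depth)
        b = nnF.softplus(out[:, 1:]) + 1e-3              # diversity (uncertainty), strictly positive
        return mu, b

def laplace_likelihood(d, mu, b):
    # Depth likelihood L(d | mu, b) = exp(-|d - mu| / b) / (2 b), element-wise.
    return torch.exp(-torch.abs(d - mu) / b) / (2.0 * b)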
§.§ Geometry-aware Feature Lifting
Fig. <ref> depicts our geometry-aware feature lifting module to transform the 2D image features f_i^2D∈ℝ^H× W× CH from the camera coordinate system into 3D space defined for the ego vehicle coordinate system, generating the 3D feature volume f_i^3D∈ℝ^X'× Y'× Z'× CH_I.
Ideally, the 2D image feature for each pixel is back-projected along the visual ray to the 3D location defined by its ground truth depth value f^3D( P_gt) = f^2D( p), where P_gt = d_gt K_i^-1p̃, p̃ is the homogeneous coordinate for p. Without knowing the true depth value for each pixel, we discretize the 3D space into voxels and thus aggregate the feature for each voxel by forward projecting it to multi-view images.
Precisely, let P_j = (x_j, y_j, z_j)^T define the 3D coordinate of the centre of voxel j. Given the camera poses for multiple views, we project it to image I_i as
d^i_jp̃^i_j = K_i( R_iP̃_j+ T_i), where p̃^i_j denotes the homogeneous coordinate of p^i_j in image I_i. Meanwhile, we obtain the depth value of P_j in view i as d^i_j. Based on our parametric depth modelling, we obtain the likelihood of d^i_j being on the object surface as
α_d^i_j = ℒ(d^i_j|μ^i_ p^i_j,b^i_ p^i_j) = 1/2b^i_ p^i_jexp(-|d^i_j-μ^i_ p^i_j|/b^i_ p^i_j).
We similarly project the voxel to all views and aggregate the feature for the j-th voxel as
f_j^3D = ∑_i=1^Nα_d^i_j f_i^2D( p^i_j),
where f_i^2D is the extracted image feature. We adopt bilinear interpolation to obtain f_i^2D( p^i_j) when p^i_j is a non-grid coordinate. All lifted 3D features form the 3D feature volume f^3D∈ℝ^X'× Y'× Z'× CH, which is then aggregated by our occupancy-aware feature aggregation module into a 2D BEV feature, as introduced in the following section.
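To make the lifting step concrete, the following NumPy sketch projects voxel centres into each view, evaluates the Laplacian depth likelihood α, and accumulates likelihood-weighted image features as in the equations above. It is an illustration under simplifying assumptions: nearest-neighbour sampling replaces the bilinear interpolation used in the paper, and all array shapes and names are ours.

import numpy as np

def lift_features(voxel_centers, feats_2d, mu, b, K, R, T):
    # voxel_centers: (V, 3) ego-frame voxel centres P_j
    # feats_2d:      (N, C, H, W) per-view 2D image features
    # mu, b:         (N, H, W) per-pixel Laplacian depth parameters
    # K, R, T:       (N, 3, 3), (N, 3, 3), (N, 3) camera intrinsics / extrinsics
    N, C, H, W = feats_2d.shape
    f3d = np.zeros((voxel_centers.shape[0], C), dtype=np.float32)
    for i in range(N):
        cam = (R[i] @ voxel_centers.T).T + T[i]           # ego frame -> camera frame
        depth = cam[:, 2]                                 # d^i_j
        pix = (K[i] @ cam.T).T
        u = pix[:, 0] / np.clip(pix[:, 2], 1e-6, None)
        v = pix[:, 1] / np.clip(pix[:, 2], 1e-6, None)
        valid = (depth > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
        ui, vi = u[valid].astype(int), v[valid].astype(int)             # nearest-neighbour sampling
        mu_p, b_p = mu[i, vi, ui], b[i, vi, ui]
        alpha = np.exp(-np.abs(depth[valid] - mu_p) / b_p) / (2 * b_p)  # depth likelihood at d^i_j
        f3d[valid] += alpha[:, None] * feats_2d[i, :, vi, ui]           # weighted feature aggregation
    return f3d                                            # (V, C); reshape to (X', Y', Z', C) as needed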
§.§ Occupancy-aware Feature Aggregation
Our occupancy-aware feature aggregation module aggregates the 3D feature volume f^3D∈ℝ^X'× Y'× Z'× CH from ego vehicle 3D coordinate frame into BEV space, forming BEV feature map f^BEV∈ℝ^X× Y× CH_B.
As shown in Fig. <ref>, the 2D BEV coordinate system is aligned with the XY plane of the ego vehicle coordinate system, where the shared origin is defined at the center of the ego vehicle. Note that the BEV coordinate system only has 2 dimensions, forgoing the Z dimension. The goal of the feature aggregation is to transform the 3D feature volume in the ego vehicle coordinate frame into a 2D feature map in the BEV space, which can be treated as aggregating the 3D feature volume along its Z axis. To this end, we first rearrange the previously computed depth likelihoods of all voxels from Eq. <ref> into a depth likelihood volume P^3D∈ℝ^X'× Y'× Z', which shares the same volumetric coordinates as the 3D feature volume f^3D. For each column along the Z-axis in the depth likelihood volume, the likelihood of each voxel at a different height reflects its spatial occupancy. Thus, we normalize the depth likelihood along the Z axis into a spatial occupancy distribution, forming a spatial occupancy volume O^3D∈ℝ^X'× Y'× Z' defined as
O^3D(x,y,z) = P^3D(x,y,z) + b_o/∑_z_i=0^Z'-1P^3D(x,y,z_i) + b_o,
where b_o is a bias term that encourages an equal feature contribution in completely occluded regions.
Our feature aggregation along the Z-axis reduces the influence of features from empty voxels on the final feature in the BEV frame. Given the spatial occupancy volume O^3D, we compute the final 2D BEV feature as a weighted sum of 3D features
f̂^BEV(x,y) = ∑_z_i=0^Z'-1 (O^3D(x,y,z_i)× f^3D(x,y,z_i)),
where we use the normalized spatial occupancy distribution as the 3D feature weight.
We further transform f̂^BEV via a few convolution layers to obtain the final BEV feature f^BEV, which is then used for the detection and segmentation tasks.
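A minimal NumPy sketch of this aggregation step (tensor layout and the value of b_o are illustrative assumptions) is:

import numpy as np

def aggregate_to_bev(f3d, p3d, b_o=1e-2):
    # f3d: (X', Y', Z', CH) lifted 3D feature volume; p3d: (X', Y', Z') depth likelihood volume P^3D.
    occ = (p3d + b_o) / (p3d.sum(axis=2, keepdims=True) + b_o)  # spatial occupancy O^3D along Z
    f_bev = (occ[..., None] * f3d).sum(axis=2)                  # occupancy-weighted sum over height
    return f_bev                                                # (X', Y', CH), before the BEV convolutions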
§.§ Object Detection and Segmentation
Given the BEV feature map, we use two heads for detection and segmentation. Specifically, we adopt the detection head and segmentation head from M^2BEV <cit.> without modification for a fair comparison. The detection head consists of three convolution layers and outputs dense 3D anchors in BEV space along with the category, box size, and direction of each object. The segmentation head consists of five convolution layers and outputs predictions for 2 classes, road and lane, as originally defined by LSS<cit.>.
§.§ Training Strategy
We adopt a supervised training strategy. We supervise the parametric depth estimation by maximizing its depth likelihood on ground-truth depth observations. Specifically, we minimize the negative log-likelihood loss ℒ_D using sparse ground-truth depth d_gt generated from sparse lidar measurements, where ℒ denotes the Laplacian distribution.
ℒ_D(θ) =∑_i=1^N∑_p∈𝒫^i-log(ℒ(d^p_gt,i|μ_i^p(θ), b_i^p(θ)))
where 𝒫^i defines the set of pixel coordinates with valid ground truth depth map for view i.
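The depth term reduces to a masked Laplacian negative log-likelihood; a small NumPy sketch (array shapes and names are assumptions) is:

import numpy as np

def laplace_depth_nll(mu, b, d_gt, valid):
    # mu, b: predicted Laplacian parameters; d_gt: sparse lidar depth; valid: boolean mask of pixels in P^i.
    # -log L(d_gt | mu, b) = log(2 b) + |d_gt - mu| / b
    nll = np.log(2.0 * b) + np.abs(d_gt - mu) / b
    return nll[valid].sum()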
For the detection head, we use the 3D detection loss of PointPillars<cit.> as follows, where ℒ_loc is the total localization loss, ℒ_cls is the object classification loss, ℒ_dir is the direction classification loss, N_pos is the number of positive samples, and β_cls, β_loc, β_dir are set to 1.0, 0.8, and 0.8, respectively.
ℒ_det = 1/N_pos(β_clsℒ_cls + β_locℒ_loc + β_dirℒ_dir)
Please refer to <cit.> for more details.
For the segmentation head, we use both the Dice loss ℒ_dice and the binary cross-entropy loss ℒ_bce as the segmentation loss ℒ_seg, with equal weights β_dice = β_bce = 1.
ℒ_seg = β_diceℒ_dice + β_bceℒ_bce
For the visibility map and additional outputs, since they are geometrically derived from the estimated parametric depth representation without any learned parameters, it is not necessary to apply supervision to them.
§ VISIBILITY
§.§ Visibility Map
The segmentation in BEV space mainly focuses on segmenting lane regions. However, those regions are not always visible in the camera views due to occlusion by vertical scene structures such as buildings (see Fig.<ref>). We thus propose to use our parametric depth modeling to infer a visibility map that decouples visible and occluded areas and thereby helps mitigate the hallucination effect.
We define a visibility map V^BEV∈ℝ^X× Y to describe the visibility range of ego vehicle's multi-view cameras. Starting from the likelihood of the Laplacian distribution in Eq. <ref>, the occlusion probability B(d) of a voxel in 3D space that has a back-projected depth d in camera view is
B(d) = ∫_0^dℒ(x|μ,b) dx.
We derive this occlusion probability as follows. First, we compute the cumulative distribution function of Eq. <ref> as
F(x) = ∫_-∞^x ℒ(t|μ,b) dt = 1/2exp((x-μ)/b) if x < μ, and F(x) = 1-1/2exp(-(x-μ)/b) if x ≥ μ.
Then we calculate the definite integral between [0,d] as the occlusion probability B(d), which is defined as
B(d) = F(d) - F(0) = F(d)-1/2exp(-μ/b).
In practice, this is computed very efficiently, without the need to perform the discrete integration of the depth likelihood over the range [0,d]. Based on the relationship between visibility and occlusion, we convert the occlusion probability B to visibility probability V by
V(d) = 1-B(d) = 1 + 1/2exp(-μ/b)-F(d).
To finally compute the visibility in BEV space, we take the maximum visibility probability along the Z axis to form the visibility map V^BEV.
Ṽ^BEV(x,y) = max_z∈𝒵'V(x,y,z)
where 𝒵'={0,1,2⋯ Z'-1}. The V^BEV is obtained via interpolation from Ṽ^BEV.
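Because the Laplacian CDF is available in closed form, the visibility map can be computed without any discrete integration. The sketch below (NumPy; array shapes are assumptions) follows the derivation above, assuming the per-voxel back-projected depths and the Laplacian parameters of the pixels they project to have already been gathered as in the lifting step:

import numpy as np

def laplace_cdf(x, mu, b):
    # Cumulative distribution function F(x) of the Laplacian distribution.
    return np.where(x < mu, 0.5 * np.exp((x - mu) / b), 1.0 - 0.5 * np.exp(-(x - mu) / b))

def visibility_bev(d_vox, mu_vox, b_vox):
    # d_vox, mu_vox, b_vox: (X, Y, Z') per-voxel depths and gathered Laplacian parameters.
    occlusion = laplace_cdf(d_vox, mu_vox, b_vox) - 0.5 * np.exp(-mu_vox / b_vox)  # B(d) = F(d) - F(0)
    visibility = 1.0 - occlusion                                                   # V(d)
    return visibility.max(axis=2)                                                  # V^BEV: max over Z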
§.§ Visibility-aware Evaluation
For semantic segmentation, where the ground truth is usually generated from aerial images, standard evaluation metrics cannot evaluate predictions separately in visible and occluded areas. Therefore, in this section, we follow a process similar to the one used to generate the visibility map to derive a visibility-aware evaluation method for segmentation in BEV space. In this case, however, we project the Lidar 3D points (ground truth) into the multi-view image space and use a depth completion network to obtain multi-view dense depth maps. These depth maps are then used as the expected depth values to build a parametric depth representation F(θ_gt). We then evaluate the ground-truth depth likelihood at each voxel in 3D space using Eq. <ref>, forming the ground-truth depth likelihood volume L_gt. Finally, we derive the ground-truth visibility map in BEV space V using Eq. <ref> and Eq. <ref>.
In this case, V reflects the maximum visibility of the multi-view cameras in BEV space. Thus, it can be used as a mask to explicitly evaluate results in BEV space subject to visibility. Specifically, we use a threshold τ_vis to split the predicted segmentation s_pred and ground-truth segmentation label s_gt into visible region {s^vis_pred,s^vis_gt} and occluded region {s^occ_pred,s^occ_gt}. We can then compute the IoU for the visible (IoU_vis) and occluded (IoU_occ) regions separately as
s^vis = ∑_x∈𝒳,y∈𝒴s(x,y)× 1(V(x,y) ≥τ _vis),
s^occ = ∑_x ∈𝒳, y∈𝒴s(x,y)×1(V(x,y) < τ _occ),
IoU_vis = s^vis_pred∩ s^vis_gt/s^vis_pred∪ s^vis_gt, IoU_occ = s^occ_pred∩ s^occ_gt/s^occ_pred∪ s^occ_gt where 𝒳={0,1,⋯,X-1}, 𝒴={0,1,⋯,Y-1}, and 1(·) is the indicator function.
We also report the occlusion rate on nuScenes as the percentage of visible or occluded segmentation labels over total number of segmentation labels.
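A possible NumPy sketch of this metric (the threshold value and the boolean-map conventions are assumptions, and a single threshold is used for both splits) is:

import numpy as np

def visibility_aware_iou(pred, gt, vis, tau=0.5):
    # pred, gt: (X, Y) boolean BEV segmentation maps; vis: (X, Y) ground-truth visibility map.
    def iou(p, g):
        union = np.logical_or(p, g).sum()
        return np.logical_and(p, g).sum() / union if union > 0 else float("nan")
    visible = vis >= tau
    iou_vis = iou(pred & visible, gt & visible)       # IoU restricted to the visible region
    iou_occ = iou(pred & ~visible, gt & ~visible)     # IoU restricted to the occluded region
    occlusion_rate = (gt & ~visible).sum() / max(gt.sum(), 1)  # share of occluded segmentation labels
    return iou_vis, iou_occ, occlusion_rate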
§ EXPERIMENTS
In this section, we first detail our experimental settings, then we demonstrate the effectiveness of our approach on the nuScenes dataset, and, finally, we provide ablation studies on the main components of our method.
§.§ Implementation Details
Dataset. We conduct our experiments on the nuScenes dataset <cit.>. The nuScenes dataset provides video sequences along with multiple sensor outputs, including Lidar, Radar, GPS and IMU, all of which are collected by calibrated and synchronized sensors mounted on a vehicle driving across Boston and Singapore. The dataset consists of 1000 sequences, split into 700 for training, 150 for validation, and 150 for testing. Each sample provides six RGB images captured by 6 cameras with divergent viewing directions, along with Lidar sparse 3D points, Radar sparse 3D points, GPS pose and IMU readouts. We follow <cit.> to generate ground-truth segmentation labels from the global map provided by the nuScenes dataset.
Evaluation metrics. We report our results using the same metrics as in the nuScenes benchmark. For detection, we report mean Average Precision (mAP) and the nuScenes detection score <cit.>. For segmentation, we follow LSS <cit.>, and report the mean IoU score (mIoU). In addition, we report results using the proposed visibility-aware evaluation detailed in Sec. <ref>. Unless specified, we report numbers on the validation set.
Network architecture. We use a unified framework to demonstrate the benefits of our depth-based feature transformation module. The network consists of a backbone image encoder and two decoding heads, one for segmentation and one for detection. We use ResNet with deformable convolutions as the image encoder. For the decoding heads, we use the same architecture as in PointPillars <cit.>.
We set the size of the intermediate 3D volume consisting of X'× Y'× Z' = 400×400×12 voxels, with a voxel size of 0.25m× 0.25m× 0.5m, respectively. The final BEV space dimension consists of X× Y = 200×200 grids. Each grid is of size 0.5m× 0.5m.
Training and inference. During training, we use 6 RGB images and corresponding camera parameters as input.
The training for parametric depth estimation is supervised by the ground-truth sparse Lidar points provided in the dataset. Ground-truth detection and segmentation labels are used to supervise the detection and segmentation heads. We set batch size to 1 per GPU and use 3 nodes with 8 Nvidia V100 GPUs. For inference, our method only requires the 6 input RGB images together with the corresponding camera parameters.
§.§ Results
We now compare our results with M^2BEV and other state-of-the-art methods on the nuScenes dataset. To facilitate the comparison to other approaches, we use ResNeXt-101 as the backbone of our method for the detection and segmentation experiments, and ResNet-50 as the backbone for the multi-task learning experiments and efficiency analysis.
Detection. We report the results of our method and related state-of-the-art methods in Tab. <ref> and Tab. <ref> for the validation set and the test set, respectively. For the validation set, we only include frame-wise camera-based methods. That is, we exclude approaches that use temporal information. For the test set, we include the latest results covering Camera, Lidar, Radar and their combinations. As we can see, on both sets, our approach outperforms all existing camera-based methods on both mAP and the NDS score.
Segmentation. We now focus on evaluating our semantic segmentation results. We report our performance compared to state-of-the-art methods on the nuScenes validation set in Tab. <ref>.
We also report a variant of our model trained without depth supervision (Ours*) to fairly compare with LSS <cit.>.
Our method performs significantly better compared to LSS <cit.> on both road and lane segmentation and slightly better compared to M^2BEV <cit.>, the closest method to ours.
Our model without depth supervision still outperforms existing methods.
Interestingly, if we take visibility into account, as shown in Tab. <ref> and Fig. <ref>, our method clearly outperforms the baselines on the visible areas while maintaining comparable performance to M^2BEV on the occluded regions. These results evidence the benefits of our parametric depth approach.
Joint detection and segmentation. Finally, we report results for jointly evaluating both tasks. In this case, we compare our results to the multi-task version of M^2BEV. We show results for this experiment in Tab. <ref>. Our method, once again, outperforms the baseline on both detection and segmentation tasks. These results further evidence the benefits of an improved depth representation in the 2D to 3D feature transformation process.
Efficiency. Our parametric depth estimation requires estimating additional parameters compared to simplified depth estimation approaches. As shown in Tab. <ref>, our model requires a slightly larger amount of memory; however, this does not lead to a significant increase in inference time.
§.§ Ablation Studies
We carry out ablation experiments to study the influence of feature transformations on the final detection and segmentation performance and the robustness of our model to calibration errors. More ablation experiments can be found in the supplementary material. We use ResNet-50 as the backbone for all ablation experiments.
Feature transformations
We evaluate the effectiveness of the parametric depth-based feature lifting and aggregation modules by comparing against the baseline non-parametric depth-based lifting of LSS<cit.>, a baseline uniform depth-based lifting similar to M^2BEV, and the widely used PointPillars<cit.> feature aggregation. Results are shown in Tab. <ref>. Our proposed parametric depth-based lifting coupled with occupancy-based feature aggregation achieves the best performance for both detection and segmentation.
Limitations. Like all camera-based methods, our method can only provide reliable detection and segmentation results in visible regions. In occluded regions, although our method can provide hallucinated estimates and visibility information, the results are not reliable for making critical driving decisions. Downstream planning tasks should utilize the visibility and uncertainty information to achieve reliable planning.
§ CONCLUSION
We propose a parametric depth distribution modeling-based feature transformation that efficiently transforms 2D image features into BEV space. By incorporating visibility inference, our method provides crucial visibility information to downstream planning tasks. Moreover, our approach outperforms existing methods on both detection and segmentation tasks, making it a promising candidate for the feature transformation step in future systems. In future work, we aim to investigate the integration of temporal information to further improve estimation accuracy.
|
http://arxiv.org/abs/2307.03886v1 | 20230708033922 | On Regularization and Inference with Label Constraints | ["Kaifu Wang", "Hangfeng He", "Tin D. Nguyen", "Piyush Kumar", "Dan Roth"] | cs.LG | ["cs.LG", "stat.ML"] |
On Regularization and Inference with Label Constraints (ICML 2023)
Kaifu Wang (University of Pennsylvania, Philadelphia, PA, USA), Hangfeng He (University of Rochester, Rochester, NY, USA; part of the work done while at the University of Pennsylvania), Tin D. Nguyen (Massachusetts Institute of Technology, Cambridge, MA, USA), Piyush Kumar (Systems and Technology Research, Woburn, MA, USA), Dan Roth (University of Pennsylvania, Philadelphia, PA, USA)
Correspondence to: Piyush Kumar, Dan Roth.
Prior knowledge and symbolic rules in machine learning are often expressed in the form of label constraints, especially in structured prediction problems.
In this work, we compare two common strategies for encoding label constraints in a machine learning pipeline, regularization with constraints and constrained inference, by quantifying their impact on model performance.
For regularization, we show that it narrows the generalization gap by precluding models that are inconsistent with the constraints. However, its preference for small violations introduces a bias toward a suboptimal model.
For constrained inference, we show that it reduces the population risk by correcting a model's violation, and hence turns the violation into an advantage.
Given these differences, we further explore the use of the two approaches together and propose conditions for constrained inference to compensate for the bias introduced by regularization, aiming to improve both the model complexity and the optimal risk.
§ INTRODUCTION
Domain knowledge in machine learning is often framed as constraints on the output label space.
Such label constraints have been widely identified in natural language processing tasks
<cit.>
and studied in the context of structured prediction
<cit.>.
For example, in temporal reasoning <cit.> where the model is asked to label the relations (“before” or “after”) among a set of events, the assigned labels will need to satisfy a transitivity constraint which means, for example, the facts that an event E_1 is after E_2 and that E_2 is after E_3 imply that E_1 is after E_3.
The central question is how to encode such a constraint into a learning algorithm to ensure better performance and generalization of the learned model.
Practitioners have developed two techniques to encode a label constraint in a machine learning pipeline. The first, called regularization with constraints, penalizes a model for its violation of the constraint in addition to the classification loss <cit.>. The second, called inference with constraints, modifies prediction rules directly by enforcing strictly constrained inference <cit.> or balancing the original model's output with the constraint in a soft way <cit.>.
Although these two learning algorithms have been shown to be empirically successful, we are not aware of theoretical analyses that elucidate each algorithm's advantages or disadvantages in comparison with the other one. Natural questions include: how do these two differ in their impact on the learned model? Moreover, in practice, the constraints could be noisy <cit.>. In such cases, do they still improve the model performance? If so, by how much?
Focusing on multiclass classification with label constraints, we compare regularization with constraints and constrained inference.
For each algorithm, we quantify its optimal risk (aka approximation error) and its generalization gap (aka estimation error).
Specifically, in Section <ref>, we show that regularization with constraints achieves a smaller generalization error by reducing the model complexity but will introduce a bias towards a suboptimal model if the risk minimizer and the violation minimizer do not coincide.
In Section <ref>, we study a broad family of constrained inference models called the Constrained Conditional Model (CCM) <cit.> and point out that constrained inference could reduce the risk of a model if and only if the model violates the constraint more than the true data distribution does.
This further suggests finding models with higher violation, which contrasts with the learning objective used in regularization, which discourages violation.
Given these contrasts, we further study the combination and interaction of the two methods in Section <ref> and describe how constrained inference could compensate for the bias introduced by regularization.
To the best of our knowledge, our analysis is the first to provide a theoretical view on comparing the two approaches. We believe in the importance of this comparison and hope to bring this problem to the attention of the machine learning community.
In summary, our contributions include:
* We provide an error bound (Theorem <ref>) that describes the tradeoff between the generalization gap and the optimal risk when performing regularization with constraints.
* We propose a sufficient and necessary condition (Theorem <ref>) for constrained inference to improve a model by quantifying its reduction in risk.
Based on this, we further argue that constrained inference, when used at training time, implicitly modifies the training objective in the opposite direction to the regularization approach (Proposition <ref>).
* We study the combination of regularization and constrained inference, and propose sufficient (Theorem <ref>) as well as necessary (Theorem <ref>) conditions for the combined algorithm to achieve improvement in both optimal risk and model complexity.
Proofs of all the theoretical results are in the appendix.
§ PRELIMINARIES
Our goal is to learn a mapping from the instance space X to the output space Y.
The learner has access to a set of labeled training data S_ L of size m_ L, which contains i.i.d. samples of a distribution P on X ×Y.
The marginal distribution of X is denoted as P_X.
In this work, we assume the ground truth label associated with x ∈X is generated by a deterministic mapping y_:X →Y (_ is short for oracle). We also denote the true label as y_ when the context is clear.
Model.
The scoring class F contains scoring functions f:X ×Y →R.
We will also call a f∈F a classifier.
Let Δ_Y be the |Y|-dimensional probability simplex. Each scoring function
induces a probabilistic prediction P_f(·|x) ∈Δ_Y via softmax inference: P_f(y|x) ∝exp(f(x,y)).
Loss Function.
The prediction of f at x is evaluated by the classification error (or ℓ^1 loss) L(x,y_,f) := 1 - P_f(y_|x), which is half the ℓ^1 distance between the one-hot distribution e_y_ and P_f on Δ_Y.
It can also be viewed as a smoothed version of the standard zero-one loss in the sense that lim_t →∞ L(x,y_,tf) = 1{argmax_y∈Y f(x,y) ≠ y_}.
More background on the definition of the ℓ^1 loss are provided in Appendix <ref>.
A scoring function f is evaluated by its risk R(f) := E[L(x,y_,f)]. The empirical estimate of the risk using the labeled examples in S_ L is denoted as R(f, S_ L).
We also consider the cross-entropy surrogate loss defined as L_ (x,y_,f) := -logP_f(y_|x) and refer its expectation R_(f) = E[L_(x,y_,f)] as cross-entropy risk.
Label constraint.
A label constraint (or constraint for short) is a deterministic mapping C:X → 2^Y-{∅}. Namely, C maps an instance x to a nonempty subset of Y, which may or may not contain the true label y_(x). In particular, we say a constraint C is noise-free if P(y_∈ C(x))=1. Otherwise, C is said to be a noisy constraint and its noise rate is denoted as V_ := P(y_(x) ∉ C(x)).
Violation.
A constraint C is equipped with a violation function, which is an indicator function v_C(x,y) = 1{y∉ C(x)}. We also overload the notation v and define the violation of a classifier f at an instance x as v_C(x,f):= 1-P_f(C(x)|x) = ∑_y∉ C(x)P_f(y|x). Its expectation is V_C(f):= E[v_C(x,f)]. We elide the subscript C and write them as v(x,y), v(x,f) and V(f) when the context is clear. Similar to the classification error, we consider a cross-entropy surrogate of the violation function defined as v_(x,f):=-logP_f(C(x)) and its expectation V_(f) = E[v_(x,f)].
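For concreteness, all of these quantities can be computed from the scores of a single instance, as in the following Python sketch (the label set is represented by a score vector and the constraint by a boolean mask; the function names are ours):

import numpy as np

def softmax(scores):
    z = np.exp(scores - scores.max())
    return z / z.sum()

def losses_and_violations(scores, y_true, c_mask):
    # scores: (|Y|,) values of f(x, .); y_true: index of the true label; c_mask: True for labels in C(x).
    p = softmax(scores)
    ell1_loss = 1.0 - p[y_true]               # classification error L(x, y, f)
    ce_loss = -np.log(p[y_true])              # cross-entropy surrogate
    violation = 1.0 - p[c_mask].sum()         # violation v_C(x, f)
    ce_violation = -np.log(p[c_mask].sum())   # cross-entropy violation (semantic-loss style)
    return ell1_loss, ce_loss, violation, ce_violation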
Rademacher complexity.
We use the following version of Rademacher complexity that is adopted from <cit.> to characterize the generalization ability of the scoring space of multiclass classifiers F:
The empirical Rademacher complexity of scoring class F with respect to a set S = {x_i}_i=1^m that contains m samples of the instance is defined as
ℜ_m(F;S)
:=
1/mE_ϵ[
sup_f∈F∑_i=1^m
∑_y∈Yϵ_i,y f(x_i,y)
]
where ϵ=(ϵ_i,y)_i∈ [m],y∈Y are independent Rademacher random variables, each of which is uniformly distributed over {-1,+1}. The Rademacher complexity of scoring class F is the expectation of the empirical version:
ℜ_m(F)
:= E_S ∼P_X^m[ℜ_m(F;S)]
This definition of Rademacher complexity is a special case of the factor graph complexity proposed by <cit.>, which is defined for more general structured prediction models. It is hence possible to extend our results of the generalization bounds to structured models by replacing the Rademacher complexity with factor graph complexity. In this work, we focus on multiclass classifiers for the simplicity of presentation.
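For a finite scoring class, the quantity in the definition above can be approximated by Monte Carlo over the Rademacher variables; the sketch below (NumPy; the finite class and the number of draws are illustrative assumptions) mirrors the definition directly:

import numpy as np

def empirical_rademacher(scores, n_draws=500, seed=0):
    # scores: (|F|, m, |Y|) array with entries f(x_i, y) for every f in a finite class F.
    rng = np.random.default_rng(seed)
    n_f, m, c = scores.shape
    estimate = 0.0
    for _ in range(n_draws):
        eps = rng.choice([-1.0, 1.0], size=(m, c))             # Rademacher variables eps_{i,y}
        estimate += (scores * eps).sum(axis=(1, 2)).max() / m  # sup over f of the correlation, divided by m
    return estimate / n_draws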
§ REGULARIZATION WITH CONSTRAINTS
In a standard machine learning algorithm, the learner receives a set of labeled data S_ L ∈∪_m=1^∞(X ×Y)^m and finds the empirical risk minimizer, which is defined as argmin_f ∈FR̂(f;S_ L).
In this section, we consider a method that modifies this learning objective by adding a regularization term defined with the constraint C. Precisely, we consider minimizing an augmented objective defined as
L_ρ (f)
:= R(f) + ρ V(f)
where ρ≥ 0 is a fixed tradeoff parameter.
The idea of regularizing the model by adding a penalty for the violation of the constraints on an unlabeled dataset is widely adopted in the literature. In particular, the cross entropy violation is known as the semantic loss <cit.> in the context of logical constraints. Other designs of the regularization term include using the KL-divergence on the probability space in the posterior regularization algorithm <cit.> and using the t-norms from fuzzy logic <cit.>.
We will show this algorithm improves the generalization error by reducing the complexity of the scoring space (Theorem <ref>), but in general leads to a larger classification risk in the long run (Proposition <ref>), thus resulting in a tradeoff between estimation and approximation errors.
§.§ Semi-supervised Regularization with Constraints
We consider a semi-supervised approach where the learner has access to an unlabeled dataset S_ U that contains m_ U independent samples of the instance X, resulting in the following definition.
Given a labeled dataset S_ L of size m_ L and an unlabeled dataset S_ U of size m_ U, a scoring space F and a tradeoff parameter ρ≥ 0, we define and denote the empirical risk and violation minimizer (ERVM) as:
f_ρ(S_ L,S_ U)
:= argmin_f∈F(
1/m_ L∑_(x,y)∈ S_ L L(x,y,f)
+ ρ/m_ U∑_x∈ S_ U v_C(x,f)
).
We also denote the expected version as:
f_ρ := argmin_f ∈F R(f) + ρ V_C(f).
For example, with our notation, f̂_0 is the ERM and f_∞ is the minimizer of the expected violation function. Notice that the minimizer in general is non-unique. Therefore, when we state any proposition that is related to f_ρ or f̂_ρ, we mean the proposition will hold for any of the minimizers.
§.§ Deviation from The Optimal Risk
In this section, we study how the risk of the minimizer f_ρ deviates from the optimal risk in F. The reason we are interested in bounding R(f_ρ) is that the minimizer f_ρ is in general non-unique, and different minimizers may have different risks. Therefore, to describe the risk of ERVM in the long run (in Theorem <ref>), we provide an upper bound that holds for all possible risks of f_ρ.
For any constraint C and ρ≥ 0, the following holds.
R(f_0)
≤R(f_ρ)
≤R(f_0) + ρ (V(f_0) - V(f_∞))
.
The same relation also holds for the empirical estimates R̂ and V̂. Moreover, for any ρ>0, there exists a scoring space and data distribution so that the RHS can be reached even with a noise-free constraint C.
This result shows the minimizer of the regularized objective in general has a suboptimal risk over F. On the other hand, if the risk minimizer is simultaneously a violation minimizer, i.e., V(f_0) = V(f_∞), this relation implies consistency, i.e., R(f_ρ) = R(f_0).
This quantity V(f_0) can be small when the noise rate V_ is small and the model is expressive enough (e.g., a deep neural net) to approximate the true model.
§.§ Generalization Bounds
Now we discuss how regularization could reduce the complexity of the hypothesis class. The first step is to show that the violation of the target hypothesis is not too large. In particular, the following bound is a direct consequence of minimizing the regularized objective:
Let f_ρ be the minimizer of the regularized learning objective defined in (<ref>). If the minimum violation in F is upper bounded by a known constant u ≥ 0, i.e., V(f_∞) ≤ u, then V(f_ρ) ≤ 1/ρ + u.
The upper bound u can be made arbitrarily small by adding a baseline model defined as f_t(x,y) = t·1{y∈ C(x)} and driving t to infinity. This construction is possible because the mapping C is known to the learner. The benefits of knowing C will be further explored in Section <ref> when we discuss inference with constraints.
For any B ≥ 0, we let F_B := {f ∈F| V(f) ≤ B} be the set of classifiers with small violation.
From the above discussion, we know that the target hypothesis f_ρ will lie in a smaller space F_u+1/ρ, which is characterized by the violation function and hence can be identified only with unlabeled data. To this end, we describe how the violation as well as the risk can be estimated with data.
Given a labeled dataset S_ L of size m_ L, for any δ>0, with probability at least 1-δ, the following inequality holds uniformly for f ∈F:
R(f)
≤R̂(f;S_ L) + ℜ_m_ L(F) + √(log(1/δ)/2m_ L)
Given an unlabeled dataset S_ U of size m_ U, for any δ>0, with probability at least 1-δ, the following inequality holds uniformly for f ∈F:
V(f)
≤V̂(f;S_ U) + ℜ_m_ U(F) + √(log(1/δ)/2m_ U)
The proof of this result relies on a contraction lemma established in <cit.>, which was used to analyze the argmax inference with margin losses. Our analysis extends their results to softmax inference, which may be of independent interest.
Furthermore, if the size of the constrained set C(x) is a constant, namely |C(x)|=c_0 < c = |Y| for all x ∈X, then the Rademacher complexity term of equation (<ref>) can be improved to (√(2)/2)√(1/(c-c_0) + 1/c_0) ℜ_m_ U(F) (see the discussion in the proof).
This term is symmetric under the transformation c_0 ↦ c-c_0, due to the fact that estimating the violation V_C of a constraint C is equivalent to estimating V_Y-C.
In particular, when c_0 < c/2, if the constraint is more restrictive and informative (so that c_0 is small), it can be more difficult to estimate the violation.
Assuming lim_m→∞ℜ_m(F) = 0, this result implies that L_ρ can be approximated by its empirical version L̂_ρ given a sufficient amount of data. On the other hand, since L̂_ρ is upper bounded by its cross-entropy surrogate R̂_ + ρV̂_, we further have that
L_ρ(f)
≤R̂_(f,S_ L) + ρV̂_(f,S_ U) + o_m_ L, m_ U(1)
where o_m_ L, m_ U(1) converges to 0 as m_ L, m_ U →∞.
Therefore, in practice one can minimize this upper bound by solving the convex surrogate problem
min_f ∈FR̂_(f,S_ L) + ρV̂_(f,S_ U).
where R̂_(f,S_ L) and V̂_(f,S_ U) are the empirical averages of the cross-entropy loss and violation.
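In practice, problem (<ref>) can be optimized with standard gradient-based tools. The following PyTorch-style sketch (batching, masking conventions, and the clamping constant are assumptions rather than the authors' code) combines the empirical cross-entropy risk on labeled data with the cross-entropy violation on unlabeled data:

import torch

def regularized_surrogate(scores_lab, y_lab, scores_unlab, c_mask_unlab, rho):
    # scores_lab: (m_L, |Y|); y_lab: (m_L,); scores_unlab: (m_U, |Y|); c_mask_unlab: (m_U, |Y|) boolean.
    risk = torch.nn.functional.cross_entropy(scores_lab, y_lab)   # empirical cross-entropy risk
    p = torch.softmax(scores_unlab, dim=-1)
    p_in_c = (p * c_mask_unlab).sum(dim=-1).clamp_min(1e-12)      # P_f(C(x) | x) on unlabeled data
    violation = -p_in_c.log().mean()                              # empirical cross-entropy violation
    return risk + rho * violation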
Finally, using these results, we bound the risk of the classifier learned by ERVM. For simplicity, we will denote the generalization gap B(δ, m, F) := ℜ_m(F) + 2√(log(1/δ)/2m).
We have with probability at least 1-6δ that
R(f̂_ρ)
≤ R(f_0) + ρ V(f_0) - ρ V(f_∞)
+ ℜ_m_ L(F_1/ρ + u + B(δ, m_ U, ℱ))
+ ρℜ_m_ U(F_1/ρ + u + B(δ, m_ U, ℱ))
+ 2 √(log(2/δ)/2m_ L) + 2ρ√(log(2/δ)/2m_ U)
where ℜ(·) is the Rademacher complexity defined in (<ref>).
First, we show f̂_ρ and f_ρ both lie in the subspace F_1/ρ + u + B(δ, m_ U, ℱ) with high probability since the violation can be well-approximated, according to Lemma <ref>.
Then, the gap between the objective L(f_ρ) and L(f̂_ρ) is controlled by the Rademacher complexity of F_1/ρ + u + B(δ, m_ U, ℱ).
Finally, we use the inequalities established in Lemma <ref> to further upper bound the term L(f_ρ) using the risk and violation of f_0.
Using the same proof technique, this result can be extended to other choices of loss function as long as:
(a) The loss is bounded so that the optimal regularized model has a small violation, as in Lemma <ref>. (b) The loss is Lipschitz with the model scores so that a generalization bound associated with the loss holds, as in Lemma <ref>.
Reducing the generalization gap.
The bound (<ref>) contains three parts: the first line is the worst risk that can be achieved by f_ρ, as described in Proposition <ref>; the second and third lines capture the complexity of the classifiers that have a small violation; and the last line collects errors that are independent of the model.
This bound (<ref>) is most preferable when a large set of unlabeled data is available so that the approximation errors of violations (i.e., term B(δ/2, m_ U, ℱ), ℜ_m_ U(F_1/ρ + u + B(δ/2, m_ U, ℱ)) and √(log(1/δ)/2m_ U)) are all small. Then, the model complexity is mainly described by the term ℜ_m_ L(F_1/ρ + u), which is the Rademacher complexity of a proper subset of F.
In this sense, the regularization method reduces the generalization gap by reducing the model complexity of the scoring space.
Tradeoff in regularization.
In situations where m_ U is large, the tradeoff parameter ρ balances two quantities: a larger ρ leads to a smaller scoring space F_1/ρ + u, but brings more bias depending on the suboptimality of f_0 in violation, measured by V(f_0)-V(f_∞).
The benefit of regularization is greater if fewer classifiers can achieve a violation that is close to the optimal value V(f_∞).
We provide the following example to illustrate how the Rademacher complexity can be reduced in linear models.
[Logistic Regression]
Consider a linear model for multiclass classification where Y=[c] and f(x,j)=w_j^T x with ∑_j=1^c ‖w_j‖_2^2 ≤ 1.
Suppose x ∈R^p is distributed in the unit ball ‖x‖_2 ≤ 1 with expectation E[x] = α∈R^p and covariance matrix σ^2 I_p× p.
Without the constraint, the Rademacher complexity is upper bounded as ℜ_m(F) ≤√(c/m) as in <cit.> (Theorem 2).
Now, consider a constraint that removes exactly one label so that C(x) ≡ [c-1].
With regularization, for sufficiently small t<1/(c+2), we have the following bound
ℜ_m(F_t)
≤ 1/2(√(c/m) + √((c-σ^2-‖α‖_2^2)/m))
which is strictly tighter than the standard bound. Intuitively, if x is concentrated around the origin 0, the prediction of any classifier tends toward a uniform distribution. Therefore, a large bias and variance in x (captured by σ^2+‖α‖_2^2) help to distinguish models with different levels of violation.
Compare to existing results.
Previous works mostly consider a zero-one loss for both classification and violation under the assumption that the risk minimizer also achieves zero violation.
Then, one can simply preclude all the classifiers f∈F that have nonzero empirical violations on the unlabeled dataset and find the ERM among the remaining classifiers.
This approach has been theoretically studied in <cit.> for binary classification and <cit.> in a similar manner for regression by characterizing the complexity of the reduced set of hypotheses that achieve zero violation.
Conceptually, we can regard this algorithm as a special case of problem (<ref>) when ρ = ∞.
Our study, therefore, extends previous works with a soft learning objective to multiclass classification problems.
§ INFERENCE WITH CONSTRAINTS
An inference algorithm is a mapping F ×X →Δ_Y.
By default, we define it as the softmax inference: (f,x) ↦P_f(·|x).
When performing inference with constraints (or constrained inference), we modify this softmax mapping for the given function f using the additional information of C.
In this section, we study the Constrained Conditional Model (CCM) <cit.>, a broad family of models that perform inference with constraints.
We show that, at testing time, whether CCM reduces the risk depends on whether the model's expected violation is larger than the noise rate of the constraint V_ (Theorem <ref>).
In particular, when the constraint is noise-free, CCM always achieves a smaller or equal risk.
Furthermore, we show better risks are achieved if the constrained inference is also performed at training time, and pursuing this optimal risk leads to a learning objective that contrasts with the one used in the regularization approach (Proposition <ref>).
To distinguish them, we will refer to a model in the original space F as a base model and to an augmented model as a constrained model.
§.§ Constrained Conditional Model
CCM augments existing scoring functions using a linear combination with the violation function. Precisely, given a vanilla scoring space F, the scoring space of CCM is defined as follows.
Given a scoring space F, a constraint C and a fixed tradeoff parameter μ∈ [0, ∞], the scoring space of the Constrained Conditional Model (CCM) is defined as:
F^μ
:= { (x,y) ↦ f(x,y) - μ v_C(x,y) | f∈F}
We will also denote
f^μ(x,y)
:= f(x,y) - μ v_C(x,y)
to be the augmented scoring function for a given f∈F. In particular, setting μ = ∞ will assign a score -∞ to any y ∉ C(x), which implies P_f^∞(y|x)=0, namely forcing strictly-constrained inference.
The tradeoff parameter μ allows CCM to improve the base model f despite noisy constraints, as we will discuss in detail in the following sections. Otherwise, if the noise rate is large, performing strictly-constrained inference can be harmful because it assigns 0 probability mass to any label y that is outside C(x) and hence has a classification loss L(x,y_,f^∞)=1 at any x where y_∉ C(x).
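Operationally, CCM only shifts the scores of labels outside C(x) before the softmax, as in the following sketch (NumPy; the handling of μ=∞ relies on the usual max-subtraction trick, and the function name is ours):

import numpy as np

def ccm_probabilities(scores, c_mask, mu):
    # scores: (|Y|,) base scores f(x, .); c_mask: True for labels in C(x); mu in [0, inf].
    shifted = scores - np.where(c_mask, 0.0, mu)   # f^mu(x, y) = f(x, y) - mu * 1{y not in C(x)}
    z = np.exp(shifted - shifted.max())            # labels outside C(x) get zero weight when mu = inf
    return z / z.sum()                             # P_{f^mu}(. | x)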
The learner can choose whether or not to perform the constrained inference either at the training time. This choice leads to the following two different approaches:
* On-training approach: perform constrained inference both at training and testing time, and directly find the ERM over F^μ using labeled data (also known as Inference Based Training in <cit.>)
* Post-training approach: first find the ERM over the vanilla F using labeled data, and then perform constrained inference at the testing time (also known as Learning Plus Inference in <cit.>).
For both approaches, the generalization ability of CCM is characterized by the complexity of F^μ. So, we first point out that CCM does not increase the Rademacher complexity.
For any fixed μ≥ 0 and m ∈N, we have the following identity:
ℜ_m(F^μ)
= ℜ_m(F)
§.§ Post-training Constrained Inference
For a given and fixed classifier f (presumably trained with data), how does performing constrained inference impact the model performance?
In this section, we study the change in risk when the learner chooses to augment f as a CCM f^μ defined in (<ref>).
It is most convenient to characterize the risk of a CCM using the cross-entropy loss, although we will also conduct the same analysis for the hinge and ℓ^1 losses, as we will point out later.
To start with, for any f and μ∈ [0, ∞], we let
Δ^μ_(f)
:=R_(f) - R_(f^μ)
be the difference in the risk between the base model and the CCM (the larger the better).
We have:
* For any fixed model f, there exists an μ_0 > 0 such that R_(f^μ_0) < R_(f) if and only if
V(f) > V_
* The change in risk can be lower bounded as
Δ^μ_(f)
≥ V(f)(1-e^-μ) - μ V_
* In particular, if the constraint is noise-free, we have
Δ^∞_(f)
= V_(f)
The first result describes the sufficient and necessary condition for constrained inference to be helpful.
It requires f to have a larger violation (measured by ℓ^1 violation) than the true data on average so that it has the potential to be improved. This condition is easier to satisfy when the constraint is less noisy.
The second result further quantifies the risk reduction as an explicit function of μ.
The last result shows that in the noise-free case, the maximum risk reduction is exactly the expected violation measured by cross-entropy. Its consequences will be further discussed in the next section.
We present the counterparts of Theorem <ref> for hinge loss and ℓ^1 loss in the Appendix <ref>.
The information delivered by those results is consistent with Theorem <ref> in the sense that (1) whether CCM can reduce the risk depends on the comparison between the violation of the original model and that of the oracle, and
(2) the reduction can be described or lower bounded by some measure of the violation.
The drawback of the hinge loss is its non-smoothness due to the discontinuity of the argmax inference. The drawback of the ℓ^1 loss is that the range of μ such that R(f^μ) ≤ R(f) can be disconnected and difficult to describe. Therefore, we provide weaker results by deriving only sufficient or necessary conditions for CCM to reduce the risks.
As an application of Theorem <ref>, we derive a sufficient condition under which CCM achieves smaller risks.
Assuming V(f) ≥ V_, then R_(f^μ) ≤ R_(f) if the following condition holds:
μ≤ W(-η/e^η)+η
where η := V(f)/V_ is the relative violation rate and W is the Lambert W function, whose value W(t) is defined to be the solution of the equation w e^w = t in w.
The RHS of (<ref>) increases with η and vanishes as η→ 1.
In particular, when the constraint is noise-free, one should encourage strictly-constrained inference and set μ = ∞. We also provide a plot of the RHS in the proof in the appendix.
§.§ On-training Constrained Inference
In this subsection, we study the on-training approach where we perform constrained inference both at the training and testing time. We use the results we established in the last subsection to describe the learning objective of the on-training approach, and argue that it achieves better risks than the post-training approach. Based on this, we further show that minimizing the cross entropy over CCM encourages a large violation of the base model, which contrasts the learning objective (<ref>) that is used in regularization.
We provide a simplified analysis for the noise-free setting where we choose μ = ∞ and perform strictly-constrained inference.
Then, the on-training approach aims to find the optimal (in terms of cross entropy) base model, which we denote f_on, as follows:
f_on
:= argmin_f ∈F R_(f^∞)
(recall that f^∞ means performing strictly-constrained inference with f). We characterize the behavior of f_on with the following results, which are direct corollaries of Theorem <ref>.
Assuming C is noise-free, we can reformulate the learning objective (<ref>) as
f_on = argmin_f∈F R_(f) - V_(f)
A fundamental difference.
Surprisingly, the reformulated learning objective (<ref>) is opposite to the surrogate regularized objective defined in (<ref>) in their attitudes towards violations. This contrast suggests a fundamental difference between regularization and constrained inference: the regularization method views violation as a bad thing and it precludes classifiers with substantial violations. But constrained inference corrects a model from its violation, so a large violation means a great potential to be improved.
On-training vs post-training.
Loosely speaking, this result also suggests that in general, the best constrained model is not the constrained best model. To be more precise, suppose we perform post-training constrained inference for the cross-entropy risk minimizer in the vanilla model, i.e., := _f∈F R_ (f).
Then, we can reformulate the definition of as
:= _f∈F(R_(f) - V_(f))_objective in (<ref>), post-training risk + V_(f)
which can be regarded as a “regularized” version of (<ref>). Therefore, similar to Proposition <ref>, we can argue that the risk minimizer over F, as a base model of CCM, contains a bias towards a higher risk than the on-training method's as follows:
R_((f_on)^∞)
≤ R_((f_ce)^∞)
≤ R_(f_ce) - min_f∈F V_(f)
The proof is included in the proof of Proposition <ref>.
Computational considerations.
In practical structured prediction problems where the output is sequential or graphical, performing constrained inference during training time is typically expensive due to the complexity of the constraints. For example, as pointed out by <cit.>, when the constraint is defined by a logical expression over several output variables, computing the probability of constraint being satisfied corresponds to the problem of weighted model counting (WMC) and is #P-complete <cit.>.
Therefore, to implement the on-training approach in practice, one can alternatively use approximate inference to ensure tractability.
For example, strictly constrained inference, formulated as Integer Linear Programming <cit.>, can be further relaxed as Linear Programming <cit.>.
Another example is amortized inference <cit.>, which accelerates the convergence to the optimal model while only performing exact inference in every τ>1 iterations.
Compare to existing results.
There has been limited theoretical work discussing the impact of performing constrained inference. The most related one is <cit.>, which derives VC-style generalization bounds for linear structured models to argue that (1) performing strictly constrained inference in a post-training manner (Learning Plus Inference in the paper) improves the model performance and (2) the on-training approach (Inference Based Training in the paper) further reduces the error in the long run. Our approach directly analyses the classification risk and extends the comparison to noisy constraints and soft-constrained inference with CCM.
§ REGULARIZATION WITH CONSTRAINED INFERENCE
We have seen that regularization and constrained inference have different impacts on the generalization gap and the risk.
On one hand, CCM has the same Rademacher complexity ℜ(F) as the original model (Proposition <ref>), which can be reduced by regularization. So, applying the regularized algorithm to CCM also reduces the generalization gap.
On the other hand, their impacts on the risks are contradictory, as summarized in Figure <ref>.
In this section, we aim to describe how these impacts can interact with each other by applying our established results to explore the usage of these two methods together.
We show both positive and negative results for the combination. On one hand, we propose sufficient conditions under which the bias introduced by regularization can be compensated by performing constrained inference (Proposition <ref>).
On the other hand, we study whether post-training constrained inference can reduce the risk of the optimal classifier f_ρ. We show that, with a noisy constraint, choosing a large value of ρ in the regularized objective (<ref>) will make CCM incapable of reducing the risk (Proposition <ref>).
§.§ CCM Compensates for Regularization Bias
As the red part of Figure <ref> summarizes, we have shown that regularization and constrained inference have contradictory influences on the risk. Moreover, the regularization bias is controlled by the violation of the risk minimizer (Proposition <ref>), which can be reduced by constrained inference. This suggests that CCM may be able to reduce the additional risk introduced by regularization.
We formally describe this phenomenon by considering the following combination: an on-training approach that aims to find the minimizer of the following regularized surrogate objective over the CCM F^μ:
f_⋆^μ
:= _g∈F^μ R_(g) + ρ V_(g)
Recall that R_() is the minimum cross-entropy risk that can be achieved in F.
We show that unlike the vanilla regularized objective (<ref>), it is possible for this algorithm to achieve a smaller risk than R_() as follows.
If
CCM improves so that Δ^μ_()> 0,
then letting
ρ < (V_() - μ V_) / V_(^μ) - 1
will imply R_(f_⋆^μ) < R_().
This result shows a small choice of ρ allows the regularized optimizer f_⋆^μ to achieve better cross-entropy.
A less noisy constraint allows more choices of ρ to make this happen.
In particular, when the constraint is noise-free, since V_(^μ) → 0 as μ→∞, driving μ to ∞ will make R(f_⋆^μ) < R() for all ρ > 0.
As a cost, regularization will be less effective in reducing the Rademacher complexity with a large value of μ. In the extreme case, all the classifiers in F^∞ make zero violation, and hence cannot be distinguished by the regularization objective.
§.§ Post-regularized-training Constrained Inference
Finally, as the blue part of Figure <ref> summarizes, we have shown that post-training inference is beneficial only if the average violation of f is larger than V_ (Theorem <ref>). However, the minimizer of the regularized objective f_ρ tends to have a small violation (Proposition <ref>) scaled with 1/ρ.
Therefore, it is possible that choosing a large value of ρ will make post-training inference incapable of reducing the risk when the constraint is noisy.
Formally, assuming a model is already trained with the vanilla regularized ℓ^1 objective as in (<ref>), the following result holds.
Recall V(f_∞) is the minimal expected violation that can be achieved by F. If V_≥ V(f_∞) and
ρ ≥ 1 / (V_ - V(f_∞))
then the minimizer f_ρ of the regularized objective (<ref>) will not be improved by post-training constrained inference for any μ∈ (0, ∞] in the sense that R_(f_ρ) ≤ R_((f_ρ)^μ).
The RHS of (<ref>) shrinks with a larger noise rate V_ and smaller V(f_∞). Intuitively, a more noisy constraint is less helpful (Theorem <ref>), while a small value of V(f_∞) allows f_ρ to violate less (Proposition <ref>) and hence gains fewer benefits from constrained inference (Theorem <ref>).
As a consequence, with a noisy constraint, choosing a large ρ in the regularized objective will make post-training constrained inference unnecessary or even harmful.
§ RELATED WORKS
Regularization with constraints.
In the context of structured prediction, the Posterior Regularization (PR) framework <cit.> proposed to regularize the log-likelihood by adding a distance from the probabilistic prediction to the constrained subspace of distributions.
The CoDL algorithm <cit.> is a semi-supervised algorithm that repeatedly assigns constrained pseudo-labels to the unlabeled dataset and uses the pseudo-labels to retrain the model.
CoDL and PR are further unified in <cit.> as special cases of a parameterized EM algorithm.
More recent works have proposed injecting logical constraints into deep models by augmenting the training objective with explicitly defined violation functions, such as the semantic loss <cit.>, the DL2 loss <cit.> and the inconsistency loss <cit.>, which motivate our theoretical formulation in (<ref>).
Inference with constraints.
The idea of injecting prior knowledge directly into a predictive model dates back to <cit.>, which formulates the problem of inference with hard constraints as Integer Linear Programming (ILP).
The idea of constrained inference has been followed and developed by NLP researchers and empirically shown to be effective in various problems such as summarization <cit.>, temporal reasoning <cit.>, semantic parsing <cit.> and text generation <cit.>.
<cit.> further defines the CCM to incorporate soft constraints into linear models.
Another related work is <cit.>, which uses Bayesian networks to model the label correlations and impose an order on the labels.
The order information is then used as extended features at inference time.
Theoretically, <cit.> provides a comparison between the on-training and post-training constrained inference using VC-style error bounds.
Semi-supervised learning theory.
Several theoretical semi-supervised learning frameworks such as <cit.> and <cit.> illustrate how hard constraints on the hypothesis space could reduce the generalization error. A detailed comparison can be seen in the discussion at the end of Section <ref>.
Learning with partial labels.
The problem of learning with constraints is closely related to the problem of learning from partial labels (also known as superset labels) <cit.>, where each instance x in the dataset is assigned a partial label s that also takes values in 2^Y.
The difference is that the constraint mapping itself is known to the learner and hence can be encoded in the inference algorithm directly, for example, via the CCM. Another difference is that partial labels are typically more informative and can guarantee learnability on their own <cit.>. In contrast, the constraints that appear in practice typically provide only side information and need to be used together with gold labels.
§ CONCLUSION AND FUTURE WORKS
In this paper, we presented a theoretical study of two methods to encode label constraints into a learning system: regularization and constrained inference.
We compared these two approaches by quantifying their impact on the optimal risk as well as the generalization error.
Our study revealed that the success of these two approaches relies on different data assumptions:
the regularization method requires the optimal classifier in the model to have a small violation, while constrained inference requires the true data distribution to have a small violation.
We further elucidated the detrimental consequences that arise when these assumptions fail to hold.
Finally, we demonstrated how their impacts on the model can interact when the two methods are used together.
We have focused on multiclass classification, aiming to provide a starting point for understanding the different mechanisms of the two methods. For future work, we will extend the discussion to structured prediction problems where complex constraints are naturally defined. In particular, while the presence of constraints can improve the model performance, it also suggests a strong dependency inside the structure, which may hurt the generalization performance, as pointed out by <cit.>.
§ ACKNOWLEDGEMENTS
This work was partially supported by Contract FA8750-19-2-0201 with the US Defense Advanced Research Projects Agency (DARPA). Approved for Public Release, Distribution Unlimited. The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government.
This work was also partially sponsored by the Army Research Office and was accomplished under Grant Number W911NF-20-1-0080. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
This work was also partially funded by ONR Contract N00014-19-1-2620.
APPENDIX
§ DETAILS ON LOSS FUNCTION
The ℓ^1 loss is a smoothed alternative to the zero-one loss and has been used in theoretical analyses of the generalization error; see, for example, <cit.> (Section 6.2). It can be related to other common loss functions as follows.
As distances on the probability simplex.
Let e_y ∈ R^|Y| be a one-hot vector whose y-th coordinate is 1 and all others are 0. We then have that
L(x,y_,f)
:= 1 - P_f(y_|x)
= (1/2) ‖ e_y_ - P_f ‖_1
Moreover, since our label space Y is of finite cardinality, we further have that (1/2) ‖ e_y_ - P_f ‖_1 = TV(e_y_, P_f), the total variation distance.
Relation to zero-one loss.
By introducing a temperature parameter t ∈ R_≥0 into the softmax function, it is well known that softmax(tu) converges to the indicator of argmax(u) as t → ∞ for a vector u. This implies
lim_t→∞ L(x, y_, tf)
= 1 - 1{argmax_y∈Y f(x,y) = y_}
= 1{argmax_y∈Y f(x,y) ≠ y_}
which is the zero-one loss.
Since performing softmax inference with temperature t can be equivalently regarded as performing softmax inference for the scoring space tF, for the simplicity of our presentation, we omit the temperature parameter in the softmax inference.
Relation to cross-entropy.
The total variation distance to a one-hot probability vector can be upper bounded in terms of the cross-entropy via Pinsker's inequality. More directly, in our case, we have 1-p ≤ -log(p) for any p ∈ [0,1] from a basic inequality. This implies L(x,y,f) ≤ L_(x,y,f).
In conclusion, the ℓ^1 loss is an ℓ^1 and total variation distance on the probability space, is a smoothed version of the zero-one loss, and is upper bounded by the cross-entropy. It is differentiable and bounded, so we can derive generalization bounds with Rademacher complexity. Another reason we are interested in softmax inference will become clearer in the discussion of constrained inference: in Theorems <ref>, <ref> and <ref>, the change of the expected cross-entropy and ℓ^1 loss can be lower bounded by a smooth function, whereas with argmax inference the risk is in general not continuous and needs to be assumed Lipschitz to obtain similar results.
§ PROOFS FROM SECTION 3
§.§ Proof of Proposition <ref>
The first inequality is straightforward. For the second inequality, by definition (<ref>) we have
R(f_ρ) + ρ V(f_ρ)
≤ R(f_0) + ρ V(f_0)
and
V(f_ρ) ≥ V(f_∞).
Combining the two inequalities above yields
R(f_ρ) + ρ V(f_∞) ≤ R(f_0) + ρ V(f_0).
The desired inequality follows by rearranging these terms. This argument also holds if we replace the expectations with empirical estimates.
To see how the RHS bound can be reached, consider the following scoring space that contains two classifiers, f_0 and f_∞, and an instance space X that only contains one point x. Let C(x) = {y_,y'}. Let f_0 be such that P_f_0(y_)=a∈(0,1) and P_f_0(y')=b. Let f_∞ be such that P_f_∞(y_)=a-ϵ_1 and P_f_∞(y')=b+ϵ_2 so that ϵ_1 < ρϵ_2. Then
R(f_∞) + ρ V(f_∞)
≤ 1 - (a - ϵ_1) + ρ (b-ϵ_2)
< 1-a + ρ b
= R(f_0) + ρ V(f_0)
which means f_∞ will be preferred to f_0 by the regularized objective.
§.§ Proof of Lemma <ref>
By definitions, we have
ρV(f_ρ)
≤R(f_ρ) + ρV(f_ρ)
≤R(f_∞) + ρV(f_∞)
≤ 1 + ρV(f_∞)
≤ 1 + ρ u
Therefore, we have that V(f_ρ) ≤ u + 1/ρ.
§.§ Proof of Lemma <ref>
To prove this theorem, we need the following lemmas. The first one is a contraction inequality established in <cit.>.
Let H be a set of functions mapping X to R^N. Suppose Φ_i is μ_i-Lipschitz with respect to the 2-norm, i.e.,
|Φ_i(v') - Φ_i(v)|
≤μ_i v'-v_2
∀ v,v'∈R^N
Then for any set of m points x_1,…, x_m ∈X, the following inequality holds
1/mE_σ[
sup_h ∈H∑_i=1^m σ_i Φ_i(h(x_i))
]
≤√(2)/mE_ϵ[
sup_h∈H∑_i=1^m ∑_j=1^N ϵ_ijμ_i h_j(x_i)
]
where σ_is and ϵ_ijs are independent Rademacher variables uniformly distributed over {-1,+1}.
The second one computes the Lipschitz constants of the ℓ^1 losses by bounding its gradient's 2-norm.
Given a scoring function f:X ×Y →R, let f(x) = [f(x,y)]_y ∈Y∈R^|Y| be the vector of scores for each label.
For any two scoring functions f,f' and data (x,y), we have that
|P_f(y|x) - P_f'(y|x)|
≤√(2)/4f(x) - f'(x)_2
Furthermore, for any constraint C, we have
|P_f(C|x) - P_f'(C|x)|
≤1/4√(1 + 1/|C(x)|)f(x) - f'(x)_2
where P_f(C|x)=P_f(C(x)|x)=∑_y ∈ C(x)P_f(y|x).
We start with the second claim.
Suppose C(x) = Y; then P_f(C|x) = 1 for every scoring function f, so the left-hand side of the second claim is zero and the inequality trivially holds.
Next, we assume C(x) ⊂Y.
Given a constraint C:X → 2^𝒴, the derivative of its violation function with respect to the score for a label y is
P_f(C|x)/ f(x,y) = ∑_y' ∈ C(x)P_f(y'|x)/ f(x,y)
= ∑_y' ∈ C(x)P_f(y|x) 1{y' = y} - P_f(y|x) P_f(y'|x)
The 2-norm of the gradient of the mapping f(x) ↦P_f(y|x) is then
(
∑_y ∈Y( ∑_y' ∈ C(x)P_f(y|x) 1{y' = y} - P_f(y|x) P_f(y'|x) )^2
)^1/2
which is maximized when P_f(y|x) = 1/(2|C(x)|) for all y ∈ C(x) and P_f(y|x) = 1/(2(|Y| - |C(x)|)) for all y ∉ C(x) (so that P_f(C|x) = 1/2). The maximum is then
(
∑_y ∈ C(x) ( ∑_y' ∈ C(x) P_f(y|x) 1{y' = y} - P_f(y|x) P_f(y'|x) )^2
+ ∑_y ∉ C(x) ( ∑_y' ∈ C(x) P_f(y|x) P_f(y'|x) )^2
)^1/2
= √( |C(x)| (1/(4|C(x)|))^2 + (|Y| - |C(x)|) (1/(4(|Y| - |C(x)|)))^2 )
= √( 1/(16|C(x)|) + 1/(16(|Y| - |C(x)|)) )
≤ √( 1/(16|C(x)|) + 1/16 )
= (1/4) √( 1 + 1/|C(x)| )
The boundedness of the gradient implies that the function f(x) ↦P_f(C|x) is Lipschitz with a Lipschitz constant 1/4√(1 + 1/|C(x)|).
The first claim then follows by considering the special constraint C(x) := {y_(x)} so that |C(x)| = 1.
Next, we present the proof of the theorem. By standard Rademacher complexity bounds, given a labeled dataset S of size m, for any δ>0, with probability at least 1-δ, the following inequality holds uniformly for f ∈F:
R(f)
≤R̂(f;S_ L) + 2 ℜ_m(H) + √(log (1/δ)/2m)
where
H
:= {(
x,y) ↦ 1- P_f(y|x): f ∈F
}
By the contraction lemma and Lipschitzness, we have
ℜ_m(H)
= 1/mE_SE_σ[
sup_f ∈F∑_i=1^m σ_i ( 1 - P_f(y_i|x_i))
]
≤√(2)/mE_SE_ϵ[
sup_f ∈F∑_i=1^m ∑_y ∈Yϵ_iy√(2)/4 f(x, y)
]
= 1/2mE_SE_ϵ[
sup_f ∈F∑_i=1^m ∑_y ∈Yϵ_iy f(x, y)
]
This implies
R(f)
≤R̂(f;S_ L) + ℜ_m(F) + √(log (1/δ)/2m)
The proof for the generalization bound of violation follows from the same argument. In particular, if the size of the constrained set C(x) is a constant, namely |C(x)|=c_0 < c = |Y| for all x ∈X, then from Equation (<ref>), we know that the mapping x ↦ 1- P_f(y|x) is Lipschitz with a Lipschitz constant 1/4√(1/c_0 + 1/c-c_0). So in this case, the generalization bound for the violation function can be improved as
V(f)
≤V̂(f;S_ U)
+ √(2)/2√(1/c_0 + 1/c-c_0)ℜ_m_ U(F)
+ √(log(1/δ)/2m_ U)
§.§ Proof of Theorem <ref>
Step 1. Showing the expected violation of f̂_̂ρ̂ is bounded.
First, we have with probability 1-δ,
ρV̂(f̂_ρ)
≤R̂(f̂_ρ) + ρV̂(f̂_ρ)
≤R̂(f_∞) + ρV̂(f_∞)
≤ 1 + ρV̂(f_∞)
≤ 1 + ρ(u + √(log(1/δ)/2m_ U))
where the last step follows by applying Hoeffding's inequality to V̂(f_∞). This result implies V̂(f̂_ρ) ≤1/ρ + u + √(log(1/δ)/2m_ U).
Second, Theorem <ref> claims that with probability 1-δ, the following inequality holds:
V(f̂_ρ) - V̂(f̂_ρ) ≤ℜ_m_ U(F) + √(log(1/δ)/2m_ U)
Putting these two inequalities together using union bound, we know with probability 1-2δ,
V(f̂_ρ)
≤1/ρ + u + ℜ_m_ U(F) + √(log(1/δ)/2m_ U) + √(log(1/δ)/2m_ U)
= 1/ρ + u + B(δ,m_ U,F)
Namely, with probability no less than 1-2δ, f̂_ρ lies in F_1/ρ + u + B(δ,m_ U,F), which is a fixed hypothesis class.
Step 2. Bounding the generalization gap of L_ρ.
Since f̂_ρ∈F_1/ρ + u + B(δ,m_ U,F), we can bound the generalization gap of L_ρ using the uniform convergence property of F_1/ρ + u + B(δ,m_ U,F). By standard decomposition,
L_ρ (f̂_ρ) - L_ρ (f_ρ)
=
L_ρ (f̂_ρ) - L̂_ρ (f̂_ρ)_(*)
+ L̂_ρ (f̂_ρ) - L̂_ρ (f_ρ)_≤ 0
+ L̂_ρ (f_ρ) - L_ρ (f_ρ)_(**)
For term (*), combining the two inequalities in Lemma <ref> and Step 1 via union bound, we know with probability 1-4δ,
(*)
≤ℜ_m_ L(F_1/ρ + u + B(δ,m_ U,F)) + √(log(1/δ)/2m_ L) + ρ( ℜ_m_ U(F_1/ρ + u + B(δ,m_ U,F)) + √(log(1/δ)/2m_ U))
For term (**), using Hoeffding's inequality for the risk and violation separately, we have with probability 1-2δ,
(**)
≤√(log(2/δ)/2m_ L) + ρ√(log(2/δ)/2m_ U)
By union bound, with probability 1-6δ,
L_ρ (f̂_ρ) - L_ρ (f_ρ)
≤ℜ_m_ L(F_1/ρ + u + B(δ,m_ U,F)) + ρℜ_m_ U(F_1/ρ + u + B(δ,m_ U,F)) + 2 √(log(2/δ)/2m_ L) + 2ρ√(log(2/δ)/2m_ U)_for convenience, denote these terms as B'
Step 3. Bounding the risk of f_ρ.
By Step 2, we have with probability 1-6δ,
R(f̂_ρ)
≤ R(f_ρ) + ρ V(f_ρ) - ρ V(f̂_ρ) + B'
≤ R(f_0) + ρ V(f_0) - ρ V(f̂_ρ) + B'
≤ R(f_0) + ρ V(f_0) - ρ V(f_∞) + B'
We conclude that with probability 1-6δ,
R(f̂_ρ)
≤ R(f_0) + ρ V(f_0) - ρ V(f_∞)
+ ℜ_m_ L(F_1/ρ + u + B(δ,m_ U,F)) + ρℜ_m_ U(F_1/ρ + u + B(δ,m_ U,F)) + 2 √(log(2/δ)/2m_ L) + 2ρ√(log(2/δ)/2m_ U)
as claimed.
§.§ Proof of Example <ref>
The normalizing factor ∑_j=1^c e^{w_j^T x} is maximized at w_1 = x = [1,0,0,…,0] and w_2 = … = w_c = 0, so that
∑_j=1^c e^{w_j^T x} ≤ e + (c-1)
≤ c+2
This implies P_w(y_c) ≥ e^{w_c^T x}/(c+2). Therefore, E[P_w(y_c)] ≤ t implies t(c+2) ≥ E[e^{w_c^T x}] ≥ e^{E[w_c^T x]} = e^{α^T w_c}, or equivalently α^T w_c ≤ log(t(c+2)).
Therefore, given a set of data S={x_i}_i=1^m and Rademacher random variables ϵ, the inner supremum in the definition of Rademacher complexity can be upper bounded by solving the following program
max ∑_i=1^m ∑_j=1^c ϵ_i, j w_j^ T x_i
s.t. ∑_j=1^c w_j^ T w_j ≤ 1
α^ T w_c ≤log(t(c+2))
Consider its Lagrangian
L(w, λ, μ)
= ∑_i=1^m ∑_j=1^c ϵ_i,j w_j^ T x_i
+ λ(1 - ∑_j=1^n w_j^ T w_j )
+ ν(log(t(c+2)) - α^ T w_c )
Denote ξ_j := ∑_i=1^m ϵ_i,jx_i. The Lagrangian is then maximized at w_j = ξ_j/(2λ) for j<c and w_c = (ξ_c- να)/(2λ). The dual function then writes:
g(λ, ν)
= ν log(t(c+2)) + λ + ∑_j=1^c-1 ‖ξ_j‖^2_2/(4λ) + ‖ξ_c - να‖^2_2/(4λ)
≥ ν log(t(c+2)) + √( ∑_j=1^c-1 ‖ξ_j‖_2^2 + ‖ξ_c - να‖_2^2 )
By weak duality, we have that
ℜ̂_m (F_t)
≤1/mE_ϵ[
min_ν≥ 0(
νlog(t(c+2)) + √(∑_j=1^c-1ξ_j _2^2 + ξ_c - να_2^2 ))
]
Assuming t<1/(c+2) so that log(t(c+2))<0. We can upper bound (<ref>) as
1/mE_ϵ[
min_ν≥ 0(
√(∑_j=1^c-1ξ_j _2^2 + ξ_c - να_2^2 ))
]
The function ∑_j=1^c-1ξ_j _2^2 + ξ_c - να_2^2 is minimized at ν = 0 if ξ_c^ T α≤ 0 and ν = ξ_c^ T α /α_2^2 otherwise. Denote the event ξ_c^ T α≤ 0 as E. By symmetry, we have that P(E) = 1/2 so that
1/mE_ϵ[
min_ν≥ 0(
√(∑_j=1^c-1ξ_j _2^2 + ξ_c - να_2^2 ))
]
= 1/2E_ϵ[ √(∑_j=1^cξ_j _2^2)| E ]
+ 1/2E_ϵ[√(∑_j=1^cξ_j _2^2 - (ξ_c^ T α)^2/α_2^2)| E]
Again by symmetry, the quantity (ξ_c^ T α)^2 is independent of E. Therefore, by Jensen's inequality, we have that
E_S,ϵ[√(∑_j=1^cξ_j _2^2 - (ξ_c^ T α)^2/α_2^2)| E]
≤√(E_S,ϵ[
∑_j=1^cξ_j _2^2 - (ξ_c^ T α)^2/α_2^2]
)
≤√(
cm - E_S,ϵ[ (ξ_c^ T α)^2/α_2^2]
)
= √(
cm - Var(ξ_c^ T α)/α_2^2)
= √(
cm - mσ^2 α_2^2+α_2^4/α_2^2)
= √(
(c-σ^2-α_2^2)m
)
Similarly, we can use Jensen's inequality to bound E_S,ϵ[ √(∑_j=1^cξ_j _2^2)| E ] ≤√(cm). Putting these together, we have that
ℜ_m (F_t)
=E_x[ℜ̂_m (F_t)]
≤1/2√(c/m) +1/2√(c-σ^2-α_2^2/m)
§ PROOFS FROM SECTION 4
§.§ Proof of Propostion <ref>
First, we show the Rademacher complexity of the singleton mapping is zero:
ℜ_m({(x,y)↦ -μ v(x,y)})
= 1/mE_x, ϵ[
∑_i=1^m∑_y ∈Y -ϵ_i,yμ v(x_i,y)
]
= 1/mE_x[
∑_i=1^m∑_y ∈Y -E[ϵ_i,y] μ v(x_i,y)
]
= 0
Second, we use the linearity of Rademacher complexity to obtain the desired result.
ℜ_m(F^μ)
= 1/mE_x, ϵ[ sup_f ∈F∑_i=1^m∑_y ∈Yϵ_i,y (f(x_i,y) - μ v(x_i,y))
]
= 1/mE_x, ϵ[ sup_f ∈F∑_i=1^m∑_y ∈Yϵ_i,y f(x_i,y)
] + 1/mE_x, ϵ[
∑_i=1^m∑_y ∈Y -ϵ_i,yμ v(x_i,y)
]
= ℜ_m(F) + ℜ_m({(x,y)↦ -μ v(x,y)}) = ℜ_m(F)
§.§ Proof of Proposition <ref>
* Given any scoring function f, let Z_f^C(x) := ∑_y ∈ C(x)exp(f(x,y)) and Z_f^-C(x) := ∑_y ∉ C(x)exp(f(x,y)). We have
∂/∂μ Δ^μ_ (f)
= ∂/∂μ E[ log( exp(f(x,y_) - μ v(x,y_)) / (Z_f^C(x) + Z_f^-C(x)/e^μ) ) ]
= E[ ∂/∂μ log( exp(f(x,y_) - μ v(x,y_)) / (Z_f^C(x) + Z_f^-C(x)/e^μ) ) ]
= E[ (Z_f^-C(x)/e^μ) / (Z_f^C(x) + Z_f^-C(x)/e^μ) - v(x,y_) ]
= V(f^μ) - V_
Moreover,
∂/∂μ V(f^μ)
= E[ ∂/∂μ ( Z_f^μ^-C(x) / Z_f^μ(x) ) ]
= E[ ( -Z_f^μ(x) Z_f^μ^-C(x) + (Z_f^μ^-C(x))^2 ) / (Z_f^μ(x))^2 ]
= E[ P_f^μ^2(-C) - P_f^μ(-C) ]
which is negative and bounded, implying that V(f^μ) - V_ is decreasing and Lipschitz in μ. Therefore, there is a μ > 0 such that R_(f^μ) < R_(f) if and only if the derivative is positive at μ = 0, i.e., V(f) > V_.
* By (<ref>),
Δ^μ_ (f)
= ∫_0^μ (V(f^t) - V_) dt
= E[ ∫_0^μ (Z_f^-C(x)/e^t) / (Z_f^C(x) + Z_f^-C(x)/e^t) dt ] - μ V_
≥ E[ ∫_0^μ (Z_f^-C(x)/e^t) / (Z_f^C(x) + Z_f^-C(x)) dt ] - μ V_
= (1 - e^-μ) E[ Z_f^-C(x) / (Z_f^C(x) + Z_f^-C(x)) ] - μ V_
= (1 - e^-μ) V(f) - μ V_
* If V_=0, we have
Δ^∞_ (f)
= ∫_0^∞ E[ (Z_f^-C(x)/e^t) / (Z_f^C(x) + Z_f^-C(x)/e^t) ] dt
= E[ ∫_0^∞ (Z_f^-C(x)/e^t) / (Z_f^C(x) + Z_f^-C(x)/e^t) dt ]
= E[ log( (Z_f^C(x) + Z_f^-C(x)) / Z_f^C(x) ) ]
= V_(f)
§.§ Proof of Corollary <ref>
Using Proposition <ref> (b), the result follows by solving the inequality
(1 - e^-μ) V(f) - μ V_ ≥ 0.
It is known that the solution in u to the inequality u ≤ a + b e^{cu} is u ≤ a - (1/c) W(-bc e^{ac}). Substituting a = η = V(f)/V_ = -b and c = -1 yields the desired result:
μ ≤ W(-η e^-η) + η
where the RHS is positive only when η>1. A plot of this solution as a function of η is presented below in Figure <ref>.
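For reference, the bound is easy to evaluate numerically; the following small sketch (assuming SciPy is available; it is not part of the original analysis) computes W(-η e^-η) + η on the principal branch, which is real because -η e^-η ≥ -1/e.

import numpy as np
from scipy.special import lambertw

def max_mu(eta):
    # Largest mu guaranteed by the corollary, with eta = V(f) / V_D.
    return float(lambertw(-eta * np.exp(-eta)).real) + eta

for eta in [1.5, 2.0, 5.0]:
    print(eta, max_mu(eta))   # positive only when eta > 1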
§.§ Proof of Proposition <ref>
This claim follows from the fact that R_(f^∞)=R_(f)-V_(f) from Proposition <ref> (c).
For equation (<ref>), the first inequality follows from the optimality of . For the second inequality, by definition we have
R_(^∞) + V_() = R_()
≤ R_()
⇒ R_(^∞) ≤ R_() - V_() ≤ R_() - min_f∈F V_(f)
§ ANALYSIS FOR HINGE LOSS AND ℓ^1 LOSS
§.§ Hinge Loss
The margin of a scoring function f at a sample (x,y_) is defined as
m(x,y_, f)
:= max_y∈Y{f(x,y)} - f(x,y_)
We denote its expectation as M(f) = E[m(x,y_,f)].
Given a loss function ℓ:Y×Y →R, the structured hinge loss <cit.> is defined as the margin of the loss augmented scoring function f+ℓ: (x,y)↦ f(x,y) + ℓ(y, y_). Namely,
L_hinge (x,y_, f)
:= m(x,y_, f+ℓ)
Therefore, we can study the impact of constrained inference on the hinge loss via the impact on the margin. Let Δ_margin^μ(f) = M(f) - M(f^μ). We present the following result.
The following results hold:
* For any fixed model f, there exists an μ_0 > 0 such that M(f^μ) ≤ M(f) only if
V_01(f) > V_
where V_01(f) is the zero-one style violation defined as E[1{_y ∈Yf(x,y) y_}].
* In particular, if the constraint is noise-free, we have
Δ^∞_margin(f)
= E[ max_y ∈Y f(x,y) - max_y∈ C(x) f(x,y) ]
= E[ (max_y ∉ C(x) f(x,y) - max_y∈ C(x) f(x,y))_+ ]
* The derivative of the change of the margin is
∂/∂μ Δ^μ_margin(f) = -∂/∂μ M(f^μ)
= -∂/∂μ E[ max_y ∈Y { f(x,y) - μ v(x,y) } - f(x,y_) + μ v(x,y_) ]
= E[ v(x, y_f^μ) - v(x, y_) ]
where y_f^μ:= _y ∈Y{ f(x,y) - μ v(x,y)} is the argmax inference output of CCM. Moreover, this derivative is non-increasing with μ. Therefore, a necessary condition for CCM to reduce the margin is
E[v(x,y_f)] = V_01(f)
> V_
* This follows directly by taking the difference between M(f) and M(f^∞).
Due to the discontinuous nature of the argmax inference, the function v(x,y_f^μ) is in general not continuous with μ. On the other hand, if we assume μ↦E[v(x,y_f^μ)] is Lipschitz continuous, the condition proposed in (a) is also sufficient, as in the analysis for cross-entropy.
The impact of constrained inference on the hinge loss can be investigated by substituting f with f+ℓ. For example, a sufficient condition for improving the average hinge loss is V_01(f+ℓ) > V_.
The quantity (max_y ∉ C(x) f(x,y) - max_y∈ C(x) f(x,y))_+ is closely related to the integrality loss defined in <cit.>. It is a hinge-style surrogate loss function for the zero-one style violation function of f with argmax inference:
P{max_y ∉ C(x) f(x,y) - max_y∈ C(x) f(x,y)
≥ 0
}
= V_01(f)
§.§ ℓ^1 Loss
To facilitate our discussion, we first present the following lemmas that will be useful in this section.
For any constraint C we have the following:
* The derivative of the predicted probability is
∂/∂μ P_f^μ(y|x)
= P_f^μ(y|x) (P_f^μ(-C|x) - v(x,y))
* The second order derivative of the probability is
∂^2/∂μ^2 P_f^μ(y|x)
= P_f^μ(y|x) ( ( P_f^μ(-C|x) - v(x,y))^2 + P_f^μ^2(-C|x) - P_f^μ(-C|x) )
Recall that given any scoring function f, we denote
Z_f^C(x) := ∑_y ∈ C(x)exp(f(x,y))
and
Z_f^-C(x) := ∑_y ∉ C(x)exp(f(x,y))
We also let Z_f(x) = Z_f^C(x) + Z_f^-C(x).
* The pointwise derivative of the predicted probability with respect to μ is then
∂/∂μ P_f^μ(y|x)
= ∂/∂μ [ e^{f(x,y) - μ v(x,y)} / Z_f^μ(x) ]
= ( 1/(Z_f^μ(x))^2 ) ( Z_f^μ(x) (-v(x,y) e^{f(x,y) - μ v(x,y)}) + Z_f^μ^-C(x) e^{f(x,y) - μ v(x,y)} )
= P_f^μ(y|x) (P_f^μ(-C|x) - v(x,y))
where the second equality follows from the fact that ∂/∂μ Z_f^μ(x) = -Z_f^μ^-C(x).
* Based on (a),
∂^2/∂μ^2 P_f^μ(y|x)
= ( P_f^μ(y|x) (P_f^μ(-C|x) - v(x,y)) ) (P_f^μ(-C|x) - v(x,y))
+ P_f^μ(y|x) ( P_f^μ^2(-C|x) - P_f^μ(-C|x) )
= P_f^μ(y|x) ( ( P_f^μ(-C|x) - v(x,y))^2 + P_f^μ^2(-C|x) - P_f^μ(-C|x) )
Now we discuss the change in ℓ^1 risk that is defined as Δ^μ(f):=R(f)-R(f^μ).
The following results hold:
* For any fixed model f, there exists an μ_0 > 0 such that R(f^μ) < R(f) if
E[P_f(y_)P_f(-C)]
> E[P_f(y_)v(x,y_)]
* The change of risk can be lower bounded by
Δ^μ(f)
≥1-^-2μ/2E_x[P_f(y_)P_f(-C)] - μ V_
* In particular, if the constraint is noise-free, we have
Δ^∞(f)
≥E_x[P_f(y_)P_f(-C)]
* From Lemma <ref> (a) we know the derivative of the risk with respect to μ at μ=0 is
E[P_f(y_)P_f(-C)] - E[P_f(y_)v(x,y_)]
Further, Lemma <ref> (b) implies this derivative is Lipschitz with respect to μ since for any μ,
| P_f^μ(y|x) (
( P_f(-C|x) - v(x,y))^2 + P_f^μ^2(-C|x) - P_f^μ(-C|x)
) |
≤ 1
Therefore, a sufficient condition for the existence of an μ_0 > 0 such that R(f^μ) < R(f) is that E[P_f(y_)P_f(-C)] > E[P_f(y_)v(x,y_)].
* First, we note for any y and μ that
P_f^μ(y)P_f^μ(-C)
= ^f(x,y)-μ v(x,y) Z_f^-C(x)/^μ/(Z_f^μ(x))^2
≥^f(x,y)-μ v(x,y) Z_f^-C(x)/^μ/(Z_f(x))^2
≥^f(x,y)-μ Z_f^-C(x)/^μ/(Z_f(x))^2
= P_f(y)P_f(-C)^-2μ
Also,
E[P_f(y_)v(x,y_)]
≤E[v(x,y_)]
= V_
Integrating the derivative gives
Δ^μ(f)
≥∫^μ_0 E[
P_f(y_)P_f(-C)^-2t - V_] t
= 1-^-2μ/2E_x[P_f(y_)P_f(-C)] - μ V_
* With noise-free constraints,
P_f^μ(y_)P_f^μ(-C)
= ^f(x,y_) Z_f^-C(x)/^μ/(Z_f^μ(x))^2
≥^f(x,y_) Z_f^-C(x)/^μ/(Z_f(x))^2
= P_f(y_)P_f(-C)^-μ
Integrating both sides gives
Δ^μ(f)
≥∫^μ_0 E[
P_f(y_)P_f(-C)^-t] t
= E_x[P_f(y_)P_f(-C)]
The term E_x[P_f(y_)P_f(-C)] plays a key role in these results, and it measures the average violation of the model f, weighted by the model's confidence of the true label. The first result shows that if this weighted average violation is larger than that of the true data distribution, then CCM is helpful. The last result shows that a model with a larger weighted violation obtains more benefits from strictly constrained inference.
§ PROOFS FROM SECTION 5
§.§ Proof of Theorem <ref>
Recall f_⋆^μ = _g∈F^μ R_(g) + ρ V_(g) is the optimal CCM for the regularized surrogate objective and is the cross entropy risk minimizer in F. According to our notation, ^μ is the constrained model with base model .
By this definition, we have
R_(f_⋆^μ ) +ρ V_(f_⋆^μ)
≤ R_(^μ) +ρ V_(^μ)
Therefore,
R_(f_⋆^μ)
≤ R_(^μ) + ρ (V_(^μ) - V_(f_∞^μ))
≤ R_(^μ) + ρ V_(^μ)
≤ R_() - Δ_^μ() + ρ V_(^μ)
Therefore, a sufficient condition for R_(f_⋆^μ) ≤ R_() is that ρ V_(^μ) < Δ_^μ(). Furthermore, recall for any scoring function f, we define Z_f^C(x) := ∑_y ∈ C(x)exp(f(x,y)) and Z_f^-C(x) := ∑_y ∉ C(x)exp(f(x,y)). We then have
V_(f) - V_(f^μ)
= E[ -log( Z_f^C(x) / (Z_f^C(x) + Z_f^-C(x)) ) ] - E[ -log( Z_f^C(x) / (Z_f^C(x) + Z_f^-C(x)/e^μ) ) ]
= E[ -log( (Z_f^C(x) + Z_f^-C(x)/e^μ) / (Z_f^C(x) + Z_f^-C(x)) ) ]
= ∫_0^μ E[ (Z_f^-C(x)/e^t) / (Z_f^C(x) + Z_f^-C(x)/e^t) ] dt
= Δ^μ_(f) + μ V_ (compare to equation (<ref>))
Therefore, Δ^μ_() = V_() - V_(^μ) - μ V_. So, the sufficient condition can be reformulated as
ρ < ( V_() - V_(^μ) - μ V_ ) / V_(^μ)
§.§ Proof of Theorem <ref>
We have seen in Theorem <ref> that for any scoring function f, there is a μ > 0 such that R_(f^μ) < R_(f) if and only if V(f) ≥ V_. On the other hand, we know from Lemma <ref> that
V(f_ρ)
≤ V(f_∞) + 1/ρ
Therefore, if
ρ ≥ 1 / (V_ - V(f_∞))
we must have V(f_ρ) ≤ V_, which implies there is no μ > 0 such that R_((f_ρ)^μ) < R_(f_ρ).
|
http://arxiv.org/abs/2307.04276v1 | 20230709230219 | Automated Essay Scoring in Argumentative Writing: DeBERTeachingAssistant | [
"Yann Hicke",
"Tonghua Tian",
"Karan Jha",
"Choong Hee Kim"
] | cs.CL | [
"cs.CL"
] |
Copyright 2022 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
LAK'23: Workshop on Partnerships for Cocreating Educational Content, March 13, 2023, Arlington, TX, USA
Yann Hicke ([email protected]), Cornell University, Department of Computer Science
Tonghua Tian, Cornell University, Department of Operations Research and Information Engineering
Karan Jha, Cornell University, Department of Mechanical Engineering
Choong Hee Kim, Cornell University, Department of Mechanical Engineering
Corresponding author: Yann Hicke. These authors contributed equally.
Automated Essay scoring has been explored as a research and industry problem for over 50 years. It has drawn a lot of attention from the NLP community because of its clear educational value as a research area that can engender the creation of valuable time-saving tools for educators around the world. Yet, these tools are generally focused on detecting good grammar, spelling mistakes, and organization quality but tend to fail at incorporating persuasiveness features in their final assessment. The responsibility to give actionable feedback to the student to improve the strength of their arguments is left solely on the teacher's shoulders. In this work, we present a transformer-based architecture capable of achieving above-human accuracy in annotating argumentative writing discourse elements for their persuasiveness quality and we expand on planned future work investigating the explainability of our model so that actionable feedback can be offered to the student and thus potentially enable a partnership between the teacher's advice and the machine's advice.
Automated Essay Scoring, Argument Mining, Large Language Models
Automated Essay Scoring in Argumentative Writing: DeBERTeachingAssistant
August 12, 2023
========================================================================
§ INTRODUCTION
ETS e-rater <cit.> is one of many commercial tools available today that can automatically grade essays and hence save a substantial amount of human time. It follows a long lineage of tools created over the past 50 years, all tracing back to Page's pioneering work on the Project Essay Grader <cit.>. High school students taking the SAT to get into college and undergraduate students applying to graduate schools with their GRE or GMAT scores will have their essays graded by an Automated Essay Scoring (AES) system. The vast majority of AES software performs holistic grading, in the sense that it summarizes the entire quality of an essay in one single score. The main reason for this trend is the nature of the vast majority of available annotated corpora, which carry a single holistic score.
In August 2022, Crossley et al. released a dataset that is somewhat unique of its kind: a large-scale corpus of writing with annotated discourse elements (PERSUADE) indicating their level of persuasiveness <cit.>. The originality of this dataset motivates our work: can we achieve human-level accuracy on the persuasiveness prediction task? And, building on this performance, can we provide feedback to the student writer?
§ RELATED WORK
We will outline here work that has been done in Automated Essay Scoring that does not solely focus on holistic scoring.
§.§ Identifying Argumentative Discourse Structures in Persuasive Essays:
In 2014, Stab and Gurevych <cit.> developed a corpus of essays and sought to identify the structure of arguments in persuasive essays, proposing novel feature sets for identifying argument components and argumentative relations; this was one of the first approaches in the field of argument mining.
§.§ SVM Regressor for Modeling Argument Strength:
In 2015, Persing and Ng <cit.> proposed an SVM regressor model to score an essay based on the strength of its argument. They also publicly released a human-annotated dataset of 1000 essays to stimulate further research. In this dataset, the essays were scored from 1 through 4, with higher scores indicating stronger arguments.
§.§ Neural Models for Predicting Argument Persuasiveness:
In 2018, Carlile et al. <cit.> released an argument mining dataset, annotating the arguments within each essay as MajorClaims, Claims, Premises, Support, and Attack, and scoring these sections on attributes such as Specificity, Eloquence, and Strength. In another 2018 paper <cit.>, the same group proposed a bidirectional LSTM model with attention for predicting scores on these metrics (Specificity, Eloquence, Strength, etc.) using the same dataset.
In 2019, Toledo et al. <cit.> also released a new dataset annotating arguments on the basis of quality and comparing argument pairs to identify the stronger one. They used a BERT-base-uncased model architecture to create word embeddings, with an Argument Classification head and an Argument Ranking head.
§ METHOD
§.§ Data Preprocessing and Problem Formulation
Recall that our goal is to predict the effectiveness rating for each discourse element given its type label. In the training data, except a table of discourse elements, we also have access to the complete essays which these discourse elements are extracted from. Each essay contains a variable number of discourse elements, with possibly repeated type labels.
§.§.§ Data Preprocessing:
In order to fully utilize the context information, when evaluating each discourse element, we aim to include all other discourse elements extracted from the same essay in the input as well. For the purpose of efficiency, predictions are done on the essay level.
The data preprocessing goes as follows. For each essay, we first look for every discourse element extracted from it and locate them within the essay. Then we add special tokens to the beginning and the end of each discourse element indicating the corresponding discourse type. Finally, we concatenate all the discourse elements together to form a new essay, following the same order as when they appear in the original one. An example of preprocessed essays is the following:
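The original example figure is not reproduced here; the following minimal sketch (with hypothetical special-token names, not the exact markers used in our experiments) illustrates the wrapping and concatenation described above.

def preprocess_essay(discourse_elements):
    # discourse_elements: list of (discourse_type, text) pairs in essay order.
    pieces = []
    for dtype, text in discourse_elements:
        pieces.append(f"[{dtype.upper()}_START] {text} [{dtype.upper()}_END]")
    return " ".join(pieces)

example = preprocess_essay([
    ("Lead", "Cars are everywhere these days."),
    ("Position", "We should limit car usage in cities."),
    ("Evidence", "Studies show air quality improves when traffic drops."),
])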
After this step, we tokenize the essays and use the resulting sequences as the inputs to our model.
§.§.§ Problem Formulation:
Eventually, we want to produce one prediction, which is a probability distribution over the three effectiveness ratings, for every discourse element. Essentially this is a sequence classification problem. However, instead of directly handling it as a sequence classification task, we find it more effective to formulate the problem as a token classification task. At training time, we label each token with the effectiveness rating of its corresponding discourse element and try to correctly predict all labels. Then, at inference time, we average the scores over all tokens of a discourse element to obtain its prediction.
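A minimal sketch of this inference step is shown below (it assumes per-token class probabilities have already been computed by the token classifier; the function and variable names are illustrative).

import numpy as np

def element_predictions(token_probs, element_ids):
    # token_probs: (n_tokens, 3) softmax outputs over the effectiveness ratings;
    # element_ids: (n_tokens,) integer id of the discourse element of each token.
    preds = {}
    for eid in np.unique(element_ids):
        preds[eid] = token_probs[element_ids == eid].mean(axis=0)
    return preds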
§.§ Model Selection
We build our classifier on the pre-trained large language model DeBERTaV3. The DeBERTa model, originally proposed in <cit.>, improves BERT <cit.> using a disentangled attention mechanism and an enhanced mask decoder. Then DeBERTaV3 <cit.> further improves the original DeBERTa model with a new ELECTRA-style pre-training method. We briefly introduce the three techniques here.
§.§.§ Disentangled Attention
In a classical attention mechanism, each token is represented by one vector which is the sum of the content embedding and the position embedding, whereas in DeBERTa these two embeddings are kept separate. For each pair of tokens x_i and x_j at position i and j respectively, we have two pairs of embeddings: the content embeddings H_i, H_j and the relative position embeddings P_i|j, P_j|i. The cross-attention score between x_i and x_j is then calculated as
A_i,j = H_iH_j^⊤ + H_iP_j|i^⊤ + P_i|jH_j^⊤,
where position-to-position attention is omitted in the implementation for lack of useful information.
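The following simplified sketch (illustrative shapes and projections, not DeBERTa's actual implementation) shows how the three terms of the disentangled attention score are combined.

import numpy as np

def disentangled_scores(H, P_rel):
    # H: (n, d) content embeddings; P_rel: (n, n, d) relative position
    # embeddings, P_rel[i, j] encoding the position of token j relative to i.
    n, d = H.shape
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            c2c = H[i] @ H[j]          # content-to-content: H_i H_j^T
            c2p = H[i] @ P_rel[i, j]   # content-to-position: H_i P_{j|i}^T
            p2c = P_rel[j, i] @ H[j]   # position-to-content: P_{i|j} H_j^T
            A[i, j] = c2c + c2p + p2c
    return A / np.sqrt(3 * d)          # scaling factor used by DeBERTa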
§.§.§ Enhanced Mask Decoder:
DeBERTa is pre-trained using masked language modeling (MLM). The absolute position information is important in this task as well as many other NLP tasks. To use this information, DeBERTa incorporates the absolute position embeddings after the Transformer layers but before the softmax layer for masked token prediction, hence enhancing the mask decoder.
§.§.§ ELECTRA-style Pre-training:
DeBERTaV3 replaces the MLM pre-training procedure in DeBERTa with an ELECTRA-style pre-training procedure. In the pre-training stage, DeBERTaV3 trains a generator that aims to minimize an MLM loss and a discriminator which aims to minimize an RTD (Replaced Token Detection) loss simultaneously. A Gradient-Disentangled Embedding Sharing method is adopted to avoid the tug-of-war between the generator and the discriminator.
§.§ Memory constraints
The "scaling laws" draw us towards picking models that are ever larger. Yet, this added performance brought by a larger number of weights does not come for free for all Machine Learning practitioners.
DeBERTaV3_large represents around 800MB of weights when stored in a PyTorch bin file. The virtual machines that many practitioners have access to for free (be it Google Colab or Kaggle notebooks) tend to have around 13GB of RAM available. This is not sufficient memory; we lay out a simple memory requirement estimation below.
For a 1-billion-parameter fp32 model, the memory needs break down as follows: 4GB just for the weights, since these are fp32 numbers, and the same amount to store the gradients. On top of these 8GB, with Adam as the optimizer (which is the optimizer we use), we need another 8GB to store the first and second moments of each gradient. As a rough estimate, we therefore end up with 16GB required just to load a 1bn-parameter model for training, before taking into account the memory required to store the activations during the forward pass. If we want to load a decently sized model such as DeBERTaV3_large, we need a few engineering tricks to circumvent these constraints.
§.§.§ fp16:
Mixed precision training is the first trick we used <cit.>. It is an intuitive technique that relies on cleverly using low-precision arithmetic. Instead of storing all real numbers in their 32-bit representation, we represent most of them as 16-bit numbers. This saves a substantial amount of memory without compromising the accuracy of computations (fp16 loses representative power due to its limited numerical range but has decent precision otherwise). Since the architecture frequently normalizes activations, quantization errors are most likely negligible. Mixed precision is enabled by passing the "fp16" flag to the Trainer object. The implementation that Hugging Face uses does not represent all weights, gradients, activations, and optimizer moments as fp16 numbers, only the forward activations saved for gradient computation. Thus it does not halve the memory needs; more optimization techniques are required.
§.§.§ Gradient checkpointing:
Chen et al. introduced this technique, also known as "activation checkpointing", in their paper <cit.>. It uses significantly less memory: when enabled, a lot of memory can be freed at the cost of a small decrease in training speed, with memory savings on the order of O(n), where n is the number of feed-forward layers. The general idea is to cleverly analyze the computation graph and, based on it, decide which intermediate results to store. For example, if a low-cost operation of the forward pass can be dropped and only recomputed later during the backward pass, this becomes a savvy trade-off between computation and memory.
§.§.§ Gradient accumulation:
This technique modifies the last step of the backward pass when training a neural network. Instead of updating the weights after the forward and backward pass of each mini-batch, the gradients are accumulated and the update is only applied after several mini-batches. This lets an algorithm emulate training on larger batches even though the forward and backward passes are executed on smaller batches, since the weight update is performed on the accumulated gradients, thereby saving extra memory.
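The three techniques above are exposed directly as options of the Hugging Face Trainer; the sketch below shows the relevant flags (the batch size and accumulation values are illustrative, not the exact settings of our runs).

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="outputs",
    per_device_train_batch_size=2,   # small physical batch that fits in memory
    gradient_accumulation_steps=4,   # emulates an effective batch size of 8
    gradient_checkpointing=True,     # trade extra compute for activation memory
    fp16=True,                       # mixed precision training
    num_train_epochs=3,
)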
§.§ Ensembling
Ensembling is an approach that combines predictions from multiple models in order to obtain better predictive performance. There are multiple ways of ensembling models. In our model, we used a "K-fold cross-training" approach - a combination of K-fold cross-validation and bagging - where we divided the training data into five folds and trained five models independently, each holding out a different fold as its validation set. The final prediction is the average of all five models. A brief description of bagging and other ensembling methods is listed below.
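Before describing those methods, we give a minimal sketch of the K-fold cross-training procedure itself; train_model and predict are placeholders for our fine-tuning and inference routines, not actual library functions.

import numpy as np
from sklearn.model_selection import KFold

def kfold_ensemble_predict(X, X_test, train_model, predict, n_splits=5):
    fold_preds = []
    for train_idx, val_idx in KFold(n_splits=n_splits, shuffle=True,
                                    random_state=0).split(X):
        model = train_model(train_idx, val_idx)    # one model per fold
        fold_preds.append(predict(model, X_test))  # (n_test, 3) probabilities
    return np.mean(fold_preds, axis=0)             # bagged average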
§.§.§ Bagging
We noticed that models trained over different folds of training samples had high variance. To reduce this variance, we used bagging <cit.> by averaging the predictions of five different models.
§.§.§ Boosting
For this paradigm we used a LightGBM <cit.> Bag-of-Words model <cit.>, i.e., a bag-of-words classifier trained with the Light Gradient Boosting Method <cit.>. This model worked decently well, slightly better than a BERT-base sequence classifier, but we needed a stronger model to obtain competitive results.
§.§.§ Stacking
Another ensembling technique that we tried was stacking <cit.>: we tried to combine the predictions of the five models and the bag-of-words classifier using a meta-model, which was a neural network. This approach is useful for reducing bias among models. We were not able to improve the final results using stacking because the five models combined already make a strong learner, while the bag-of-words model is quite weak, so the neural network almost completely ignores the bag-of-words model. It would be interesting to stack a few other weak models before ensembling with our main strong model; some improvement in the loss can be shown this way, as was done to identify fake news in <cit.>.
§.§ Hyperparameter Optimization
As mentioned before, DeBERTaV3_large is a heavy model. The training time of the four models that we then ensemble is about 12 hours on a Tesla P100, which illustrates the limitations of free VM access such as Colab free or Kaggle notebooks. Hyperparameter optimization was therefore a low priority because of our inability to run instances for more than 30 hours per week. Still, we decided to focus on one hyperparameter:
the gradient accumulation step interval. We decreased it so that gradients would be updated after every batch, at the expense of extra memory usage. It granted us a +0.01 improvement in AUC.
§.§ Regularization
We used several regularization techniques to help our model generalize. We used dropout probabilities of 10%, 20%, 30%, 40% and 50% and averaged the outputs to get the final output logits. Additionally, we used Adversarial Weight Perturbation, which is briefly described below. We also tried to implement Scale Invariant Fine-tuning, a work in progress that we also describe below.
§.§.§ Adversarial Weight Perturbation:
Adversarial Weight Perturbation <cit.>,<cit.> is a regularization method that perturbs the weights of the deep neural network to prevent overfitting to the data. Here, we apply weight perturbation every time the training loss goes below a set threshold, adding noise in the direction of the gradient of the loss function with respect to the weights. This method works similarly to Stochastic Weight Averaging in that it keeps pushing the learner away from falling into local minima.
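The following PyTorch-style sketch outlines the perturbation step described above; the threshold logic, scaling constant, and the restoration of the original weights after the adversarial step are simplified or omitted here, so this is an illustration rather than a faithful AWP implementation.

import torch

def awp_perturb(model, loss, eps=1e-3):
    # Add a small perturbation along the loss gradient to each trainable weight.
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, params, retain_graph=True, allow_unused=True)
    with torch.no_grad():
        for p, g in zip(params, grads):
            if g is not None:
                p.add_(eps * p.norm() * g / (g.norm() + 1e-12))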
§.§.§ Scale Invariant Fine-tuning:
Virtual Adversarial Perturbation <cit.> is a regularization technique that introduces small perturbations in the input, regularizing the model by encouraging it to generate the same output for an example as for an adversarial perturbation of that example.
In the case of NLP tasks, these perturbations are added to the word embeddings instead of the original sequence. The problem is that the word embedding values vary greatly between different words and models. To solve this, the authors of the DeBERTa paper <cit.> propose Scale Invariant Fine-tuning (SiFT), which normalizes the embedding layers before applying perturbations. This method significantly improves the performance of the model on downstream NLP tasks.
§ RESULTS
§.§ Evaluation Metric
The output is evaluated based on the log loss as follows.
log-loss = -1/N ∑_i=1^N ∑_j=1^M y_ij log(p_ij)
where N is the number of rows in the test set, M is the number of class labels, y_ij is 1 if observation i is in class j and 0 otherwise, and p_ij is the predicted probability that observation i belongs to class j.
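Equivalently, with integer class labels the metric can be computed as below (comparable to scikit-learn's log_loss); the clipping constant is a standard numerical safeguard.

import numpy as np

def multiclass_log_loss(y_true, p, eps=1e-15):
    # y_true: (N,) integer labels in {0, ..., M-1}; p: (N, M) predicted probabilities.
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(np.log(p[np.arange(len(y_true)), y_true]))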
§.§ Overall Results
Table <ref> below describes the performance of the different language models that were used for generating the embeddings for the given task.
The models trained on DeBERTa Large embeddings significantly outperformed the other models. Section <ref> explains how DeBERTa improves over the BERT model. Using a larger model improves the performance significantly; however, it also severely increases the computational costs and memory requirements. Section <ref> explains how we overcame such memory constraints.
§ DISCUSSION
This project seeks to identify a way to include artificial intelligence in assessing argument persuasiveness. Our model, combined with an argument-mining AI, is capable of identifying the sections of an essay that are "effective" or "ineffective" in persuading the reader. Based on the segments identified as "ineffective" by the machine, a teacher can go through those sections carefully, identify the scope for improvement, and provide the necessary feedback. This allows the teacher to target specifically the segments of the essay that need attention, making the teacher's job more focused and helping the student identify the sections where they need to strengthen their arguments.
In this project, various aspects of language models were explored in order to achieve competitive accuracy. This discussion section tries to summarize some of the more vital aspects of the architecture, and the key takeaways from those.
§.§ Choosing the right language model
The choice of model was a vital piece of the puzzle. A vanilla DeBERTa_base <cit.> model performs on par with a fine-tuned BERT_base <cit.> architecture, which is one of the most popular Transformer-based architectures. Recall that the DeBERTa architecture differs from BERT <cit.> or RoBERTa <cit.> mainly through the introduction of disentangled attention. Increasing the size of the model by using the DeBERTa_large architecture improved the performance significantly. This led us to two major conclusions. First, the disentangled attention, which separates the token and positional embeddings, creates a more robust representation of text for assessing its persuasiveness. The AI Index Report of 2021 <cit.> shows that the DeBERTa architecture tops the leaderboard of the SuperGLUE benchmark <cit.>, a benchmark for complex language understanding tasks. This suggests that the DeBERTa model, with its disentangled attention mechanism, better encapsulates the contextual understanding of the text and hence supports sentence evaluation better than the other language models.
Second, a more straightforward conclusion was that increasing the size of the model significantly improves its performance.
§.§ Ensembling methods
Among the various methods applied to our initial architecture, ensembling applied to DeBERTaV3 led to the most significant improvement. We ensembled five identical DeBERTaV3 architectures, each using a different training/validation split of our working dataset. After ensembling these models we reached 0.63 in log loss. However, ensembling requires substantial extra GPU memory during training due to having to deal with four other models. Section <ref> describes how we went about solving the computational challenges.
We also applied boosting to the bag-of-words model <cit.>. Even though a BERT_base <cit.> model is known to provide significantly better performance than a bag-of-words model for most NLP tasks, our results show that boosting improves the bag-of-words model's performance for specific tasks. However, the worsening of performance from stacking shows that ensembling unbalanced models can lead to poorer models.
§.§ Overcoming Computational Challenges
Training a large language model requires an appropriate quantization of the model. Reducing the precision of the model to fp16 reduced the memory requirements significantly. Furthermore, increasing the gradient accumulation steps reduced the amount of computation required and hence the time required to train the model. Overall, training a large language model may require trade-offs between accuracy and training speed, as well as judgment calls regarding the precision that the model requires.
§ FUTURE WORK
In future work, we aspire to use Explainable AI (XAI) to have the machine pivot from a grader position to a Teaching Assistant position. The hope is to transform our predictive model into a feedback provider. The feedback would then trigger a conversation between the student, the teacher, and the machine.
XAI is a subset of artificial intelligence that focuses on making the decision-making process of a model transparent and interpretable to human users. In the context of providing feedback to students on the strength of their arguments in an essay, a large language model can be used to analyze the text and then identify key elements such as the structure of the argument, the use of evidence, and the clarity of the writing so that the student can improve on those.
In future work, we aspire to use XAI to make the model's feedback more explainable and use natural language generation (NLG) techniques to generate human-readable explanations as feedback. For example, the model could identify that a student's essay lacks a clear thesis statement and generate the feedback "Your essay does not have a clear thesis statement. A strong thesis statement is essential for guiding the structure and direction of your argument."
XAI would be done by using techniques such as attention visualization, which can show which parts of the text the model is focusing on when making its grade predictions. Once identified, it is shown to the student and thus can help them understand why the machine is giving them a certain grade and how they can improve their writing.
On a more granular level, one way to use attention visualization for feedback is to display a heatmap of the essay, where each word is colored based on the level of attention the model gives to it. The words that are colored more brightly are the ones the model is paying more attention to, and therefore the ones the student should focus on most when revising the essay. For example, if the model gives low attention to the introduction, it could indicate a weak thesis statement or a lack of clear direction for the essay; if the model gives low attention to certain key vocabulary words related to the topic, it could indicate a lack of understanding of the topic or weak research.
Overall, attention visualization can be a useful tool for providing feedback to students on their writing by allowing them to see which parts of the essay the model is focusing on and why. This can help them to better understand the model's feedback and make more informed revisions to their writing.
Thanks to Professor Kilian Weinberger for his support and ideas throughout our work.
|
http://arxiv.org/abs/2307.06231v1 | 20230712152418 | Submillimeter Observations of Magnetic Fields in Massive Star-forming Region W75N | [
"Lingzhen Zeng",
"Qizhou Zhang",
"Felipe O. Alves",
"Tao-Chung Ching",
"Josep M. Girart",
"Junhao Liu"
] | astro-ph.GA | [
"astro-ph.GA"
] |
1Center for Astrophysics | Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA
2Center for Astrochemical Studies, Max-Planck-Institut für extraterrestrische Physik (MPE), Gieβenbachstr. 1, D-85741 Garching, Germany
3Research Center for Intelligent Computing Platforms, Zhejiang Lab, Hangzhou 311100, People’s Republic of China
4National Radio Astronomy Observatory, 1003 Lopezville Road, Socorro, NM 87801, USA
5Institut de Ciències de l’Espai (ICE), CSIC, Can Magrans s/n, E-08193 Cerdanyola del Vallès, Catalonia, Spain
6Institut d’Estudis Espacials de Catalunya (IEEC), E-08034 Barcelona, Catalonia, Spain
7East Asian Observatory, 660 N. A’ohōkū Place, University Park, Hilo, HI 96720, USA
This paper presents the results of full polarization observations of the massive star-forming region W75N, conducted with ∼3″ spatial resolution at 345 GHz using the Submillimeter Array (SMA). The magnetic field structures in the dense cores of the region are derived using the linearly polarized continuum emission. The overall magnetic field strength and orientation are found to agree with those from previous observations. The plane-of-sky (POS) component of the magnetic field in the region was calculated to be ∼0.8 ± 0.1 mG using the angular dispersion function (ADF) method. Further analyses involving the polarization-intensity gradient-local gravity method and H^13CO^+ (4–3) line data indicated that the cloud is undergoing global gravitational collapse and that the magnetic field is shaped by gravity and outflows in the dense core regions.
§ INTRODUCTION
Stars are born in dense molecular cores when self-gravity exceeds the internal support and drives gravitational collapse and the formation of an embedded protostar. In addition to gravity, both turbulence and magnetic fields influence the dynamical evolution of the molecular gas and impact the outcome of star formation. Solenoidal turbulence suppresses star formation since it acts similarly to the thermal pressure that counteracts gravity. Compressive turbulence, on the other hand, compresses the gas and enhances its density, thus promoting star formation <cit.>. Magnetic fields, well coupled to the molecular gas, tend to restrict the movement of material to directions along the field lines, thus hindering star formation <cit.>.
There have been considerable efforts devoted to assessing the role of magnetic fields in star-forming dense molecular cores. Thanks to the improvement in sensitivity, polarimetric observations at millimeter and sub-millimeter wavelengths have become increasingly accessible for probing the plane-of-the-sky component of magnetic fields through linearly polarized dust emission <cit.>. We refer readers to recent reviews on the development of observational efforts on magnetic fields in molecular clouds and star formation <cit.>.
Despite the considerable progress, there is a lack of understanding of how magnetic fields may affect star formation in a protocluster environment, where multiple stars arise from the collapse and fragmentation of molecular gas. We present Submillimeter Array observations of W75N, a massive star-forming region that contains a number of H II regions and is located in the local spiral arm at a distance of approximately 1.3 kpc <cit.>. W75N is part of the Cygnus-X giant molecular cloud, which spans over 100 pc and includes the renowned DR21 region.
Early observations of the region indicated that W75N IRS 1, a cluster of young stellar objects (YSOs), powered a massive molecular outflow. VLA observations at 4.9 GHz detected three ionized regions, W75N(A), W75N(B), and W75N(C) near the center of the outflow <cit.>. Later, VLA 8.4 GHz observations revealed that W75N(B) consisted of three compact regions, Ba, Bb, and Bc <cit.>. Using 1.3 cm continuum VLA observations, <cit.> discovered VLA 1 (Ba), VLA 3 (Bb), and another compact source located between them (VLA 2). <cit.> suggested that source Bc was a radio Herbig–Haro object <cit.> powered by the VLA 3 radio jet. They also discovered the VLA 4 source, located south of the VLA 1–VLA 3 group. <cit.> found that the outflow of VLA 2 was in a transition from a shell-like to a more elongated jet-like shape based on VLBI observations of 22 GHz water masers. Further observations by <cit.> showed that the water maser distribution around VLA 1 was stable, while the shell-like structure in VLA 2 was expanding along the direction parallel to the thermal radio jet of VLA 1, which was later confirmed by <cit.>. Recently, using VLA-A data covering 4 to 48 GHz, <cit.> concluded that Bc and VLA 4 were obscured Herbig–Haro objects excited by the jet from VLA 3.
Observations in millimeter wavelengths have revealed the presence of 9 dense cores (MM1 to MM9) in the W75N region. These were identified using continuum data obtained with BIMA and CARMA <cit.>. The MM1 core was further studied using the SMA and resolved into two compact continuum sources, MM1a and MM1b <cit.>. In addition, a dense core labeled as MM[N] was recently reported to the north of MM1 using ALMA data at 1.3 mm <cit.>.
Previous polarization observations of W75N at 450, 870, and 1100 μm, using the JCMT, yielded only a single polarization segment due to the large beamsizes of around 12″–19″ <cit.>. At 870 and 1100 μm, the inferred magnetic field had an average position angle of approximately 150°, while at 450 μm, it was measured to be around 37°. To improve the angular resolution, we conducted full polarization observations of the W75N region using the SMA with a spatial resolution of approximately 3″ at 345 GHz. In this study, we focus on the central region of W75N, which includes the MM1 to MM4 and MM[N] cores. We present the derived parameters of these dense cores using the dust continuum polarization data in this paper. We summarize the SMA observations in Section <ref> and present the results in Section <ref>. A discussion of the results is shown in Section <ref>, followed by a summary in Section <ref>.
§ OBSERVATIONS AND DATA REDUCTION
The observations of W75N were carried out between 2012 July 03 and 2012 Aug 09 with the Submillimeter Array (SMA) <cit.>. Three observations were made in July using the compact array configuration, and three were made in August using the subcompact configuration. The number of antennas in the array varied between 6 and 7. The observational parameters and calibration sources can be found in Table <ref>. The Application Specific Integrated Circuit (ASIC) correlator provided a 4 GHz IF bandwidth (4-8 GHz) with a uniform spectral width of 812.5 kHz per channel. The receivers were tuned to the 345 GHz band, which captured the CO (3-2) and H^13CO^+ (4–3) lines, with a velocity resolution of approximately 0.70 km s^-1.
The visibility data from the observations were calibrated for bandpass, flux, and time-dependent gains using the IDL superset MIR package adapted for the SMA <cit.>. The calibrated data were then exported to the Miriad <cit.> format for instrumental polarization calibrations and imaging. Table <ref> lists the calibrators used for each track. The synthesized beam size of the combined visibilities was approximately 3.05″ × 2.83″. The 1σ RMS noise of the Stokes I image of the continuum emission was approximately 26.1 mJy beam^-1, while the RMS noise of the Stokes Q/U maps after de-biasing using the method from <cit.> was approximately 1.4 mJy beam^-1. The Astropy package <cit.> was used for the final analysis.
Observational Summary

Observation date | Number of antennas | Array configuration | Baseline range (m) | Flux calibrator | Gain calibrator | Polarization and bandpass calibrator
2012 Jul 03 | 7 | compact | 16.5–32 | Titan, Uranus | MWC349A | 3c279
2012 Jul 04 | 7 | compact | 16.5–32 | Titan | MWC349A | 3c279
2012 Jul 05 | 7 | compact | 16.5–32 | Titan, Uranus | MWC349A | 3c84
2012 Aug 07 | 6 | subcompact | 9.5–25 | Uranus | MWC349A | 3c84
2012 Aug 08 | 6 | subcompact | 9.5–25 | Uranus | MWC349A | 3c84
2012 Aug 09 | 6 | subcompact | 9.5–25 | Uranus | MWC349A | 3c84
§ RESULTS
§.§ Continuum emission
Figure <ref> (a) illustrates the 345 GHz continuum emission of the W75N region. To identify dense structures in this area, we applied the dendrogram algorithm <cit.> to the continuum data using the astrodendro[<http://www.dendrograms.org/>] package. For the astrodendro analysis, we set the minimum value for a structure to be considered to 3σ, the minimum height required for an independent structure to be retained to 1σ, and the minimum number of pixels for a structure to half of the synthesized beam area. Using the astrodendro results as the initial input, we performed a final 2D Gaussian fit to each of the identified cores using the Cube Analysis and Rendering Tool for Astronomy (CARTA) <cit.>. We followed the nomenclature for dense cores used in <cit.> and <cit.>. The mask for the entire cloud and the FWHM ellipses representing the dense cores are shown in Figure <ref> (b). Table <ref> lists the observation parameters for those structures. The parameters for the “all” mask are from astrodendro, and the equivalent FWHMs are calculated from the intensity-weighted second moment in the corresponding directions. The parameters of the dense cores are from CARTA.
Observation parameters of dense structures

Structure | RA (J2000) (h:m:s) | Dec (J2000) (d:m:s) | Integrated Flux (Jy) | FWHM a × b (″) | Peak Intensity (Jy beam^-1) | PA (°)
all | 20 38 36.44 | 42 37 33.75 | 17.9 | 9.2 × 7.4 | 2.3 | 19.6
MM1 | 20 38 36.46 | 42 37 34.11 | 10.2 | 5.2 × 4.1 | 4.1 | 146.9
MM2 | 20 38 36.08 | 42 37 31.54 | 3.0 | 4.5 × 3.5 | 1.7 | 106.1
MM3 | 20 38 37.20 | 42 37 36.69 | 1.1 | 4.1 × 2.8 | 0.9 | 75.2
MM4 | 20 38 36.50 | 42 37 27.57 | 2.3 | 5.2 × 4.6 | 0.8 | 98.7
MM[N] | 20 38 36.49 | 42 37 42.57 | 1.1 | 4.1 × 4.0 | 0.5 | 73.2
Fitting parameters of dense structures

Structure | T_d (K) | M (M_⊙) | ρ (10^-15 kg m^-3) | N_H_2 (10^23 cm^-2) | n_H_2 (10^6 cm^-3)
all | 63 | 35.5 | 4.1 | 1.8 | 0.85
MM1 | 73 | 17.1 | 11.2 | 2.8 | 2.3
MM2 | <45 | 8.9 | 9.2 | 2.0 | 1.9
MM3 | <45 | 3.4 | 5.6 | 1.0 | 1.2
MM4 | 58 | 5.0 | 2.8 | 0.73 | 0.58
MM[N] | 45 | 3.1 | 3.0 | 0.67 | 0.63
Assuming that the cloud is isothermal, the continuum emission is optically thin, and the gas-to-dust mass ratio is a constant Λ = 100, we can derive the total mass of the structures from the observed integrated flux of the dust emission, F_ν, by
M = Λ F_ν D^2 / [B_ν(T_d) κ_ν],
where D = 1.3 kpc is the distance to the source, κ_ν = (ν/1000 GHz)^β m^2 kg^-1 is the dust opacity <cit.>, and B_ν(T_d) is the Planck function at a given dust temperature T_d. We utilized an opacity index of β = 1.5 <cit.>, and the average T_d within each dense structure in W75N was listed in Table <ref> from ammonia hyper-fine line fitting using EVLA data (Zhang, X., et al. 2023, in prep.). While the fittings for MM2 and MM3 did not converge, we were still able to estimate the temperatures to be between 30 K and 45 K, and hence, we used T_d = 45 K to determine the lower limits for the mass. The average density, column density N_H_2 and volume density n_H_2 within each structure are calculated as:
ρ = 3M/(4π r^3),
N_H_2 = M/(π m_H μ_H_2 r^2),
n_H_2 = 3M/(4π m_H μ_H_2 r^3),
where r = √(FWHM_a × FWHM_b) <cit.> is the geometric mean radius of the structure, μ_H_2 = 2.86 is the mean hydrogen molecular weight <cit.>, and m_H is the hydrogen atomic mass. The mass, average density, column density, and volume density of the dense structures derived from Equations (<ref>) to (<ref>) are listed in Table <ref>. The estimated column and volume densities of the structures in W75N are generally similar to those in other massive star-forming regions.
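As an illustrative sanity check of these relations (not part of the original analysis), the short Python sketch below plugs the fitted mass and radius of the "all" structure from the tables into the expressions for ρ, N_H_2, and n_H_2; the physical constants are standard values, the helper names are ours, and the conversion of the angular radius assumes the 1.3 kpc distance quoted above.

```python
import numpy as np

# Standard constants (SI)
M_SUN = 1.989e30      # kg
M_H = 1.6726e-27      # kg, hydrogen atomic mass
AU = 1.496e11         # m
MU_H2 = 2.86          # mean hydrogen molecular weight, as in the text

# Values for the "all" structure taken from the tables above
M = 35.5 * M_SUN                  # total mass
D_pc = 1.3e3                      # distance [pc]
r = 8.2 * D_pc * AU               # r = sqrt(FWHM_a * FWHM_b) = 8.2", and 1" subtends D[pc] AU

rho = 3 * M / (4 * np.pi * r**3)                  # average density [kg m^-3]
N_H2 = M / (np.pi * M_H * MU_H2 * r**2)           # column density [m^-2]
n_H2 = 3 * M / (4 * np.pi * M_H * MU_H2 * r**3)   # volume density [m^-3]

print(f"rho  ~ {rho:.1e} kg m^-3")         # ~4e-15, as in the table
print(f"N_H2 ~ {N_H2 * 1e-4:.1e} cm^-2")   # ~1.8e23
print(f"n_H2 ~ {n_H2 * 1e-6:.1e} cm^-3")   # ~0.9e6
```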
The uncertainties in the parameters discussed above arise from various sources. The characterization of the constant Λ and κ_ν is not well-constrained and contributes to an uncertainty over 50% <cit.> and a factor of 2 <cit.>, respectively. The ammonia line data yields dust temperatures ranging from 30 – 73 K, consistent with the results of <cit.>, which estimated temperatures of 35–75 K. For MM2 and MM3, we estimated the lower mass limits using the upper fitting temperatures. The distance to W75N, as estimated by <cit.>, is uncertain by approximately 5%. As a result, the final uncertainties for the mass, density, column density, and volume density listed in Table <ref> are estimated to be at least a factor of 2.1.
§.§ Dust polarization
Since polarized intensity and polarized percentage are defined as positive values, the measurements of these two parameters tend to be biased towards larger values. In order to correct for this bias, the debiased polarized intensity (PI) can be calculated using the following formula <cit.>:
PI = √(Q^2+U^2-0.5(σ^2_Q + σ^2_U)),
where σ_Q and σ_U are the 1σ rms noise of the Q and U maps. The polarization fraction is calculated as:
Pf = PI/I,
where I is the Stokes I intensity.
Assuming that irregular grains have their shortest axis aligned with the magnetic field lines <cit.>, we can determine the magnetic field orientation projected on the plane of sky (POS) by rotating the polarization segments by 90°. Figure <ref> displays the magnetic field orientations overlaid on the polarization intensity map, where two polarization intensity peaks are observed, one close to MM2 and the other to the northwest of MM1. As shown in Figure <ref>, the magnetic field orientation distribution falls into three major groups. The first group, with position angles between 0° and 40°, is dominated by the polarized emission from MM[N], while the second group, with position angles between 60° and 120°, is mainly associated with the polarized emission from MM4. The last group comprises detections from the polarization intensity peaks around MM1 and MM2, with polarization angles from 130° to 180°. As these groups are found to be related to the dense structures described in Section <ref>, the magnetic field angles can be assumed to be uniform within each dense structure.
In Figure <ref>, we present the polarization fraction (Pf) as a function of I for the entire W75N region. We then fitted the Pf–I relation using a simple power law of P ∝ I^α, with an estimated index of α = -0.4 ± 0.3. This relation can be used to evaluate the grain alignment efficiency within a cloud. In more developed star forming regions, the alignment efficiency is often enhanced by additional radiation, resulting in a power law index with smaller absolute value (the slope is shallower) in the Pf–I relation.
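The debiasing and the Pf–I power-law fit described above reduce to a few lines of array arithmetic. The sketch below shows one way to compute PI, Pf, and the index α from Stokes I, Q, U values via a log–log least-squares fit; the synthetic arrays and noise levels are placeholders standing in for the actual SMA maps, and the fit is a simplified version of the analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder Stokes values and 1-sigma noise (stand-ins for the SMA maps)
I = rng.uniform(0.1, 4.0, size=500)                 # Jy/beam
Q = 0.05 * I * rng.normal(1.0, 0.3, size=500)
U = 0.05 * I * rng.normal(1.0, 0.3, size=500)
sigma_Q = sigma_U = 1.4e-3                          # Jy/beam, quoted Q/U rms

# Debiased polarized intensity and polarization fraction
PI = np.sqrt(np.clip(Q**2 + U**2 - 0.5 * (sigma_Q**2 + sigma_U**2), 0.0, None))
Pf = PI / I

# Fit Pf proportional to I^alpha by least squares in log-log space
mask = PI > 3 * sigma_Q                             # keep significant detections only
alpha, _ = np.polyfit(np.log10(I[mask]), np.log10(Pf[mask]), 1)
print(f"fitted power-law index alpha = {alpha:.2f}")
```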
§.§ Magnetic field analysis
The Davis-Chandrasekhar-Fermi (DCF) method <cit.> relates the dispersion of polarization position angles to the large scale mean magnetic field strength. This analysis tool has been widely used to obtain the strength of the magnetic field projected on the POS. We refer readers to <cit.> for a review and detailed discussion of the assumptions in the DCF analysis. Further studies have been made to expand the DCF method using the angular dispersion function (ADF) analysis <cit.>. Specifically, using the twin Gaussian model for the interferometer beams, <cit.> derived the angular dispersion solutions for the interferometer, which can be expressed as equation (13) in their work. We can rewrite it as:
1 - <cos[Δϕ(ℓ)]> = ∑_j=1^∞ a_2j ℓ^2j + 1/[1 + N <B_0^2>/<B_t^2>] - b^2(ℓ),
where Δϕ(ℓ) is the angular difference of the two polarization segments separated by a distance of ℓ, N is the number of the turbulent cells, <B_0^2>/<B_t^2> is the large scale to turbulent magnetic strength ratio, and b^2(ℓ) is the local turbulent component of the angular dispersion function. The contribution of the large scale component to the dispersion function can be written as 1-<cos[Δϕ(ℓ)]> - b^2(ℓ). Assuming the turbulent correlation length is δ, the effective thickness of the observation region is Δ^', and the beamsizes (standard deviation) of the twin Gaussian model are W_1 and W_2, N and b^2(ℓ) can be written as:
N_1 = (δ^2 + 2W_1^2) Δ^' / (√(2π) δ^3),
N_2 = (δ^2 + 2W_2^2) Δ^' / (√(2π) δ^3),
N_12 = (δ^2 + W_1^2 + W_2^2) Δ^' / (√(2π) δ^3),
N = (1/N_1 + 1/N_2 - 2/N_12)^-1,
b^2(ℓ) = N/[1 + N <B_0^2>/<B_t^2>] { (1/N_1) e^-ℓ^2/[2(δ^2+2W_1^2)] + (1/N_2) e^-ℓ^2/[2(δ^2+2W_2^2)] - (1/N_12) e^-ℓ^2/[2(δ^2+W_1^2+W_2^2)] }.
Due to the limited number of detected polarization segments, performing the angular dispersion analysis on each dense structure in the W75N region is impractical. Therefore, we estimated the mean magnetic field (B_0) for the entire cloud by utilizing the position angle data from the polarization measurements presented in Figure <ref>. For the twin Gaussian beamsizes, the telescope beamwidth radius W_1 can be estimated using the size of the synthesized beam, W_1 = √(FWHM_a^beam × FWHM_b^beam)/√(8 ln 2), and W_2 is the resolution calculated from the shortest baseline of the array. For our analysis, we set W_1 = 1.2″ and W_2 = 8.0″. We determined the effective thickness of the cloud to be the ratio of the volume to the cross-sectional area of the equivalent sphere of the entire cloud:
Δ^' = V/A = (4π r^3/3)/(π r^2) = 4r/3 = 11.0″,
where r = √(FWHM_a × FWHM_b) = 8.2″. With the parameters outlined above, we plotted the derived W75N polarization angular dispersion data and fittings in Figure <ref>. We fitted the data points between 4″ < ℓ < 8″, as scales below ℓ < 4″ were smaller than our synthesized beam. We set the upper fitting boundary at ℓ ≈ 8″, as Equation <ref> is valid when ℓ is less than a few times the beamsize (W_1) <cit.>. Our fitting results yielded the turbulent-to-total magnetic energy ratio, <B_t^2>/<B_0^2> = 2.1 ± 0.7, and δ = 1.7″ ± 0.2″. The large scale magnetic field strength was estimated as <cit.>:
B_0 = √(μ_0 ρ) δν/δθ = √(μ_0 ρ) δν [<B_t^2>/<B_0^2>]^-1/2 = 0.8 ± 0.1 mG,
where μ_0 is the vacuum permeability, ρ is the average density of the cloud, and δν = 1.5 km s^-1 is the turbulent velocity dispersion in the cloud, which was estimated from the H^13CO^+ (4–3) line-of-sight (LOS) velocity dispersion (see Section <ref>).
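As a quick numerical illustration of the expression above (not a re-derivation of the fit itself), the following sketch evaluates B_0 and the corresponding Alfvén speed using the fitted energy ratio, the average cloud density from the tables, and δν = 1.5 km s^-1; the variable names are ours.

```python
import numpy as np

MU0 = 4 * np.pi * 1e-7          # vacuum permeability [T m A^-1]

rho = 4.1e-15                   # average cloud density [kg m^-3], from the tables
dv = 1.5e3                      # turbulent velocity dispersion [m s^-1]
energy_ratio = 2.1              # fitted <B_t^2>/<B_0^2>

B0 = np.sqrt(MU0 * rho) * dv * energy_ratio**-0.5   # field strength [T]
vA = B0 / np.sqrt(MU0 * rho)                        # Alfven speed [m s^-1]

print(f"B_0 ~ {B0 * 1e7:.2f} mG")    # 1 T = 1e7 mG; gives ~0.7-0.8 mG
print(f"v_A ~ {vA / 1e3:.1f} km/s")  # ~1.0 km/s, as quoted in the text
```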
Previous studies have revealed that the W75N cloud is linked to the DR21 region, and both regions are in a comparable global collapse state as a result of converging flows on large scales <cit.>. Magnetic field strength measurements of DR21 cores from earlier observations range from 0.4 to 2.1 mG <cit.>, which is consistent with the magnetic field strength derived in this study for W75N.
The Alfvénic Mach number (M_A), sonic Mach number (M_s) and the ratio of thermal-to-magnetic pressures (β) of the cloud can be calculated as:
M_A = √(3)δν/ν_A,
M_s = √(3)δν/c_s,
β = 2(M_A/M_s)^2 = 2(c_s/ν_A)^2,
where δν = δν_los is the one-dimensional velocity dispersion, ν_A = B_0/√(μ_0ρ) is the Alfvénic velocity and c_s = √(γ k_B T/(μ m_H)) is the sound speed at temperature T using the adiabatic index γ = 5/3 and the mean molecular weight μ = 2.33. With the average cloud temperature of 63 K, we calculated c_s = 0.61 km s^-1. ν_A is calculated to be 1.0 km s^-1 and the corresponding β value is 0.7. The calculated M_A, M_s and β values for the cloud are listed in Table <ref>.
§.§ Polarization–intensity gradient analysis
Within the framework of ideal magnetohydrodynamics (MHD), and assuming that the intensity gradient traces the direction of gas motion in the MHD force equation, <cit.> developed a technique to connect the position angle between polarization and intensity gradient orientations to the total magnetic field strength. Using this technique, we calculated the angular differences between the intensity gradient, the local gravity, and the magnetic field orientation. Figure <ref> (a) displays the sinψ–map for pixels with a detection higher than 3σ, where ψ represents the difference between the intensity gradient and local gravity orientations. Assuming that mass is proportional to the detected dust emission intensity, for an intensity map with n positions, the local gravity at a given position 𝐫_i can be calculated using the following formula <cit.>:
𝐠(𝐫_i) ∝∑_j=1^nI_j/|𝐫_i-𝐫_j|^2·𝐞_ji, (for j ≠ i)
where 𝐞_ji is the unit directional vector between position 𝐫_j and 𝐫_i, and I_j is the continuum intensity at position 𝐫_j. Figure <ref> (b) shows that the majority of sinψ values are small, less than 0.4, indicating that changes in the local intensity structure closely follow the local gravity. Positions with high sinψ values are mostly situated between intensity peaks, where the local gravity is canceled out in a particular orientation.
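The local-gravity construction above is straightforward to evaluate on a pixel grid by direct summation. The sketch below (with a synthetic map standing in for the continuum image, and with a helper name chosen here) returns the position angle of 𝐠(𝐫_i) at every pixel, from which ψ and ω follow by differencing with the intensity-gradient and magnetic-field angles; it omits any masking or weighting details of the actual analysis.

```python
import numpy as np

def local_gravity_angles(intensity):
    """Position angle (radians) of the local gravity vector at each pixel,
    computed by summing I_j * e_ji / |r_i - r_j|^2 over all other pixels."""
    ny, nx = intensity.shape
    yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
    angles = np.zeros_like(intensity, dtype=float)
    for iy in range(ny):
        for ix in range(nx):
            dx, dy = xx - ix, yy - iy
            r2 = dx**2 + dy**2
            r2[iy, ix] = np.inf            # exclude the j = i term
            w = intensity / r2**1.5        # I_j / |r|^2 times the 1/|r| of the unit vector
            gx, gy = np.sum(w * dx), np.sum(w * dy)
            angles[iy, ix] = np.arctan2(gy, gx)
    return angles

# Toy example: a single bright "core" pulls the local gravity toward the center
yy, xx = np.mgrid[0:32, 0:32]
img = np.exp(-((yy - 16)**2 + (xx - 16)**2) / 50.0)
print(local_gravity_angles(img).shape)
```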
The sinω–map, which displays the difference between the magnetic field and local gravity orientations, is presented in Figure <ref> (a), and its corresponding histogram is shown in Figure <ref> (b). The sinω distribution is characterized by two major peaks, one ranging from 0.2 to 0.5, and the other from 0.8 to 1.0. Regions with low sinω values, particularly along the MM1 to MM[N] direction, indicate a strong alignment between the magnetic field and local gravity, resulting in a magnetic field morphology that is primarily shaped by gravity. Conversely, regions with high sinω values, such as those located around the MM2, MM3, and MM4 peaks, suggest that the magnetic field is more dominant.
We also studied the magnetic field significance (Σ_B) to evaluate the relative importance of the magnetic field (F_B) in comparison to gravity (F_G) and the pressure gradient (F_P) at various locations within the cloud. Σ_B is calculated using the equation:
Σ_B = F_B/|F_G+F_P| = sin ψ/cos δ,
where δ represents the difference between the magnetic field and intensity gradient orientations. The resulting Σ_B–map and distribution are depicted in Figure <ref>.
Based on the results presented in Figure <ref> and Figure <ref>, it appears that the MM[N] region is strongly influenced by the gravity of the main MM1 core. This gravity exerts a strong pull on the magnetic field, directing it towards the center of the cloud. Near the MM[N] peak, there is a notable discrepancy between the magnetic field and intensity gradient orientations. We conclude that the MM[N] core is a low-mass structure that is dominated by the gravity of the nearby high-mass core (MM1), similar to the case of “Region IV" in <cit.>. In such scenarios, the basic assumption that the intensity gradient traces the gas motion direction does not hold strictly, leading to high uncertainties. The Σ_B values are dominated by large changes in ψ when linked to the gravitational center of the main core. Given the lack of a clear identification of a local gravity center, the calculated ψ values may be much smaller, resulting in overestimated Σ_B values in the region, which are shown in Figure <ref>.
If we ignore the Σ_B values near the MM[N] region, the majority of the Σ_B values are below 1.0, particularly in the northern MM1 region, indicating that the cloud is experiencing global collapse, with the magnetic field being unable to balance the gravitational and pressure forces. Conversely, in the MM3 and MM4 core regions, the Σ_B values are higher, suggesting that the magnetic field may be more dominant. Around the MM2 peak, the value is approximately 1, indicating that the magnetic force is comparable to the other forces.
§.§ Molecular line emissions analysis
The kinematic information on the gas dynamics in star-forming clouds enables us to probe the star formation scenario. The H^13CO^+ (4–3) line emission, which is optically thin and devoid of self-absorption features, enables us to estimate the physical parameters of the dense cores in W75N. Figure <ref> illustrates the first-moment map of the H^13CO^+ (4–3) line emission in color scale overlaid on the continuum contours. The magnetic field orientations are depicted by red segments. The figure shows that the high-velocity components are contaminated by the outflows, indicated by the redshifted lobes in Figure <ref>, located to the east and west of the central MM1 region and around MM3. A significant velocity gradient from the MM[N] region to the MM1 core is observed. Based on our analysis in Section <ref>, the W75N cloud is undergoing global collapse. The observed velocity gradient may be caused by gas flow from MM[N] to MM1 or by cloud rotation.
To avoid the contamination from the outflows, we perform a position–velocity (PV) analysis to model the velocity gradient along the vertical white path (PA = –20°) shown in Figure <ref>. The PA is chosen to be perpendicular to the large-scale outflow shown in Figure <ref> and consistent with the disk-like structure identified by <cit.> and <cit.>. The ellipse resulting from the best 2D Gaussian fit, shown in Figure <ref>, indicates a slope angle of 24° and a velocity gradient of approximately 0.9 km s^-1 arcsec^-1. If the observed gradient is due to cloud rotation, it corresponds to an angular velocity of ω = 1.4×10^-4 yr^-1, resulting in (ω/B)_obs = 1.7×10^-7 yr^-1 μG^-1. Depending on the magnetic field strength and rotation velocity, the evolution of a collapsing dense core can be regulated either by centrifugal forces or by magnetic forces. We define a centrifugal critical parameter χ, which is the ratio of the observed (ω/B)_obs to the critical (ω/B)_crit <cit.>:
χ = (ω/B)_obs / (ω/B)_crit = (ω/B)_obs / [1.69 × 10^-7 (c_s/0.19 km s^-1)^-1 yr^-1 μG^-1].
Given c_s = 0.61 km s^-1, (ω/B)_crit is calculated to be 5.3 × 10^-8 yr^-1 μG^-1, and the χ value for the cloud is 3.5, which is greater than 1. The centrifugal force therefore dominates the dynamics of the collapse over the magnetic field.
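The numbers quoted in this paragraph can be reproduced to within rounding by the short sketch below; the unit conversions are ours, the 1.3 kpc distance is assumed, and the values are only meant to illustrate the order of magnitude of χ.

```python
import numpy as np

YR = 3.156e7                  # seconds per year
AU = 1.496e11                 # m

grad = 0.9e3                  # observed velocity gradient [m s^-1 per arcsec]
D_pc = 1.3e3                  # distance [pc]
B_uG = 0.8e3                  # POS field strength, 0.8 mG in microgauss
c_s = 0.61                    # sound speed [km s^-1]

arcsec_in_m = D_pc * AU                     # 1 arcsec subtends D[pc] AU
omega = grad / arcsec_in_m * YR             # rotation angular velocity [yr^-1]

ratio_obs = omega / B_uG                    # (omega/B)_obs [yr^-1 uG^-1]
ratio_crit = 1.69e-7 * (c_s / 0.19)**-1     # (omega/B)_crit [yr^-1 uG^-1]
chi = ratio_obs / ratio_crit

print(f"omega          ~ {omega:.1e} yr^-1")   # close to the quoted 1.4e-4 yr^-1
print(f"(omega/B)_obs  ~ {ratio_obs:.1e}")     # close to the quoted 1.7e-7
print(f"(omega/B)_crit ~ {ratio_crit:.1e}")    # ~5.3e-8
print(f"chi            ~ {chi:.1f}")           # ~3.5, i.e. > 1
```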
We determined the turbulent velocity dispersion, δν_los, by fitting the line width of the H^13CO^+ (4–3) spectrum. Since the molecular weight is high and T_d (temperature) is low, the impact of thermal velocity dispersion is negligible. To eliminate the contribution of large-scale velocity motion within the cloud, we applied a method that shifts the velocity of a spectrum for each spatial pixel by the centroid velocity indicated in the moment 1 map (refer to Figure <ref>) to remove the large-scale velocity field. This technique shifts the average velocity of each pixel to zero, isolating only the turbulent component. The turbulent velocity is then determined by fitting a Gaussian profile to the intensity–velocity curve. The final estimated value for δν_los is approximately 1.5 km s^-1.
The ratio of the turbulent to magnetic energy β_turb is usually calculated using the Alfvénic Mach number:
β_turb = M_A^2 = 3(δν_los/ν_A)^2.
The β_turb for the entire cloud is calculated to be 10.9, indicating the turbulent energy dominates the magnetic energy.
The relative importance between the magnetic field and the gravity of individual sources can be estimated by the magnetic critical parameter λ, which is the mass-to-flux ratio in units of the critical value 1/(2π√(G)) <cit.>:
λ = (M/Φ_B)_obs / (M/Φ_B)_crit = 7.6 × 10^-21 [N_H_2/(cm^-2)] / [B/(μG)].
The calculated λ value for W75N is about 2.0, indicating gravity dominates the magnetic field.
Table <ref> lists the virial parameters of W75N. The cloud has M_s > 1, revealing that non-thermal motions are supersonic. The M_A value is greater than 1, indicating that turbulent energy is stronger than magnetic energy. These supersonic and super-Alfvénic Mach numbers imply the presence of strong non-thermal motions in the cloud. The β value is less than one, indicating that although weaker than the non-thermal pressure, the magnetic pressure is stronger than the thermal pressure. This M_s > M_A > 1 > β relationship has been previously observed in other high-mass star-forming regions, such as the DR21 cores <cit.>.
The average λ value for the cloud is 2.0, indicating that it is undergoing a global collapse. The estimated B_0 and λ values in this work are consistent with the results (B_0 = 0.3 – 1.2 mG and λ = 0.6 – 2.2) obtained by <cit.>. The cloud exhibits a large-scale velocity gradient, but it is unclear whether it is due to gas infall or cloud rotation. If the cloud is rotating, the high χ value suggests that the centrifugal force dominates over the magnetic force.
Virial parameters of W75N

δν_los (km s^-1) | M_s | M_A | β | β_turb | λ | χ
1.5 | 5.6 | 3.3 | 0.7 | 10.9 | 2.0 | 3.5
§ DISCUSSION
Observations of the Zeeman effect towards maser sources at small scales have been used to derive magnetic field strengths in the line-of-sight (LOS) direction. The magnetic field strength derived from Zeeman pairs of opposite circular polarization ranges from +8 to –8 mG using OH masers at 1665, 1667, and 1720 MHz <cit.>. <cit.> detected a strong magnetic field source of about 40 mG near VLA 2. <cit.> observed the 6.7 GHz methanol maser using the European VLBI network and found that the Zeeman-splitting measurements indicated LOS magnetic fields in the maser regions ranging from 11 to 16 mG. In contrast, the magnetic field strength measured from observations of the 22 GHz water masers is about 1000 mG <cit.>, which is much higher than those from the methanol maser observations. Recently, <cit.> measured –764 mG < B^VLA1 < –676 mG and B^VLA2 between –355 and –2426 mG in the LOS direction with 22 GHz water maser observations. These high-resolution (typically around 10^2 AU) maser observations detected much higher LOS magnetic field strengths at small scales in protostellar envelopes. The hydrogen number densities of those regions, estimated using the empirical equation B ∝ n^0.65_H_2 <cit.>, range from 10^8 to 10^10 cm^-3 <cit.>. It is not straightforward to compare the results from our work using thermal dust emission to those from maser observations arising from non-thermal processes. Based on the findings from <cit.>, the density and magnetic field strength (0.85 × 10^6 cm^-3 and 0.8 mG) from this work indicate that the cloud is in a magnetically supercritical phase.
<cit.> conducted JCMT observations towards the compact source W75N-IRS1 using a beamsize of 12″ at 870 μm. They detected one magnetic segment with a PA = 145° ± 5° and estimated a magnetic field strength of B = 0.8 mG using a simple statistical relation between the magnetic field strength and the gas density. Their magnetic field strength and PA are consistent with the mean field of our results. Using the JCMT, <cit.> obtained a similar magnetic field position angle of 153° ± 22° at 1100 μm, while at 450 μm, the derived magnetic field was at 37° ± 9°. The change in magnetic field PA could be attributed to twisted magnetic field lines around the region. The net magnetic field value could change as the beamsize varies. Similarly, the maser observations obtained magnetic fields perpendicular to our submillimeter polarization observations because the maser observations were at milliarcsecond (mas) resolution to trace the compact H II regions. The magnetic field could twist significantly from mas to arcsec scales.
In Figure <ref>, we present the CO (3–2) blueshifted and redshifted emission contours from our work. We chose the velocity boundaries of the blueshifted (–18.0 to 0 km s^-1) and redshifted (20.0 to 28.0 km s^-1) emissions to be symmetrical with respect to the cloud's ν_LSR = 10.0 km s^-1, as reported by <cit.>. The compact sources VLA 1 (Ba), VLA 2, VLA 3 (Bb), Bc, and VLA 4 are marked as filled triangles, and the dense cores of the cloud from Figure <ref> are labeled using dashed ellipses. The black dashed arrows indicate the direction of the bipolar outflow (66°) from <cit.>, and the three black solid arrows from <cit.> show the outflow orientations for the redshifted component (45°, starting from VLA 1), the blueshifted component (135°, starting from MM2), and the bipolar outflow from VLA 3 (101°, centered at VLA 3). <cit.> also suggested that VLA 1 powers the large-scale molecular bipolar outflow of W75N(B).
We found that the main outflows centered at VLA 1 and MM2 from <cit.> match well with the high velocity gas detected in our CO (3–2) emission map. However, we did not detect the blueshifted components of the bipolar outflows from VLA 3 to the west of the source. We propose the existence of another outflow centered at MM2, extending in a direction almost opposite to that of MM4, indicated by the orange arrow in Figure <ref>. The bipolar outflows originating from the MM2 core drag and align the magnetic field lines in the MM2 and MM4 regions. In addition, we found enhanced dust polarization along the cavity walls of the redshifted lobe of the outflow, specifically around the MM3 region. The magnetic field lines in the MM[N] and MM1 regions are shaped by gas infall from the MM[N] to MM1 core. These findings are consistent with the results of the polarization angle analysis presented in Section <ref>.
The overall λ is greater than 1, and the Σ_B values shown in Figure <ref> (b) predominantly fall below 1, indicating that the W75N cloud is undergoing global collapse. In the MM2 and MM4 regions, while the Σ_B values increase, they still remain primarily below 1, as these regions are dominated by gravity and the pressure gradient. The magnetic field is also shaped by the outflows from the MM2 core. If the large-scale velocity is from cloud rotation, the average cloud M_A = 3.3 and χ = 3.5 indicate that turbulence and the centrifugal force dominate over the magnetic field.
§ CONCLUSION
We present 345 GHz polarization observations of the W75N region using the SMA interferometer. We estimated the physical parameters of the dense structures in the region from the dust continuum emission. Our analysis reveals a uniform distribution of polarization angles within each dense structure. We used the ADF method to study the POS magnetic field and estimated a large-scale magnetic field component of 0.8 ± 0.1 mG. We also investigated the dynamical state of the cloud by analyzing the polarization–intensity gradient and the H^13CO^+ (4–3) line data. Our findings suggest that the W75N region is undergoing global collapse because the magnetic force is weaker than the other forces. We observed that the magnetic field around the MM[N] and MM1 regions is aligned by gas infall, while in the MM2 and MM4 regions, the magnetic field is shaped by outflows from the MM2 core. We also observed enhanced dust polarization along the cavity walls around the MM3 region.
This work was partially supported by the program Unidad de Excelencia María de Maeztu CEX 2020-001058-M. JG also acknowledges support by the grant PID 2020-117710 GB-I00 (MCI-AEI-FEDER,UE).
aasjournal
|
http://arxiv.org/abs/2307.04042v1 | 20230708202414 | Sup-Norm Convergence of Deep Neural Network Estimator for Nonparametric Regression by Adversarial Training | ["Masaaki Imaizumi"] | stat.ML | ["stat.ML", "cs.LG"] |
Sup-Norm Convergence of Deep Neural Network Estimator for Nonparametric Regression by Adversarial Training
Masaaki Imaizumi
====================================================
We show the sup-norm convergence of deep neural network estimators with a novel adversarial training scheme. For the nonparametric regression problem, it has been shown that an estimator using deep neural networks can achieve better performance in the sense of the L^2-norm. In contrast, it is difficult for the neural estimator with least squares to achieve the sup-norm convergence, due to the deep structure of neural network models. In this study, we develop an adversarial training scheme and investigate the sup-norm convergence of deep neural network estimators. First, we find that ordinary adversarial training makes neural estimators inconsistent. Second, we show that a deep neural network estimator achieves the optimal rate in the sup-norm sense by the proposed adversarial training with correction. We extend our adversarial training to general setups of a loss function and a data-generating function. Our experiments support the theoretical findings.
§ INTRODUCTION
We study the nonparametric regression problem.
Suppose we observe (X_1,Y_1),...,(X_n,Y_n) ∈ [0,1]^d × ℝ with dimension d ∈ ℕ that are independent and identical copies of a [0,1]^d × ℝ-valued random element (X,Y) which follows the following regression model:
Y = f^*(X) + ξ,
where f^*: [0,1]^d → ℝ is an unknown function, ξ is a random noise variable with zero mean and finite variance that is independent of X, and X follows a marginal measure P_X on [0,1]^d.
Our interest is to utilize a deep neural network model and develop an estimator f̂ from the model and the n observations, then study its estimation risk in terms of the sup-norm, referred to as an L^∞-risk:
sup_x ∈ [0,1]^d |f̂(x) - f^*(x)|,
which implies uniform convergence of the estimator.
In this study, we prove that an adversarial training framework can provide an estimator with deep neural networks whose L^∞-risk converges, then derive a convergence rate of the risk and show the minimax optimality of the rate.
§.§ Background and Question
Deep learning is a data-driven statistical method using deep neural network models <cit.>, which have multiple layers.
It has many well-known extensions, such as a deep convolutional network <cit.>, a residual network <cit.>, and an attention mechanism <cit.>.
Owing to the multiple layers and the well-designed training algorithm, deep learning has achieved quite accurate prediction performance in various tasks.
The framework of nonparametric regression has been actively used to analyze deep neural networks, and many roles of deep learning have been revealed.
A deep neural network is a model of functions f: [0,1]^d → ℝ with multiple layers such that
f(x) = g_L ∘ g_L-1∘⋯∘ g_1(x),
where g_1(·),...,g_L(·) are trainable functions given by the L layers.
Deep learning is a method of fitting the function represented by deep neural networks to observed data; hence it can naturally be regarded as a method for the nonparametric regression problem.
Specifically, in most studies on the nonparametric regression with deep neural networks, the following least-square estimator has been studied:
f̂^LS ∈ argmin_f 1/n∑_i=1^n (Y_i - f(X_i))^2,
where the minimization is taken over a set of functions realized by deep neural networks of the form (<ref>).
Further, performance of the estimator f̂^LS has been studied by its L^2-risk
f̂^LS - f^*_L^2^2 := 𝔼[ (f̂^LS(X) - f^*(X))^2 ].
Using this framework, seminal works <cit.> show that the multilayer structure of deep neural networks fits an internal structure of the unknown function f^* and that its estimation error achieves a faster convergence.
<cit.> investigate statistical properties of the neural estimators such as asymptotic distribution and robustness.
<cit.> show that the multilayer structure of the neural estimator is effective when the target function f^* has irregular properties such as discontinuity and heterogeneous smoothness.
<cit.> shows an adaptive property of the neural estimators to an intrinsic low-dimensionality of the observations, e.g., data concentrates on a low-dimensional manifold in its domain.
Studying a sup-norm value of the estimation error has been an important interest in nonparametric regression problems.
The sup-norm value, referred to as an L^∞-risk, is a sharper measure of accuracy and sensitivity of estimators than the L^2-risk.
Furthermore, the sup-norm convergence of errors is useful for statistical inference, such as a uniform confidence band, and is effective in the case with covariate shift of the transfer learning <cit.>.
For several conventional (non-deep) nonparametric estimators for f^*, their sup-norm convergence has been actively studied.
Classically, the convergence of kernel methods <cit.> and series methods <cit.> has been investigated.
More recently, the convergence of wavelet methods <cit.>, methods with reproducing kernel Hilbert spaces <cit.>, and Gaussian process methods <cit.> have been clarified.
Roughly speaking, when studying the sup-norm convergence of these non-deep estimators f̂^ND, the following linear-in-basis form plays an effective role:
f̂^ND = ∑_j ∈ J w_j ψ_j(·),
where J is an index set, {w_j}_j ∈ J is a set of weights in trained by the least-square approach, and {ψ_j(·)}_j ∈ J is a family of basis functions (possibly depending on covariates) such as wavelets or kernels.
Since the non-deep estimators have this linear form, it is possible to control the L^∞-risk effectively and show its convergence; an exception is a general result by <cit.>.
Our interest is to evaluate the L^∞-risk of an estimator using deep neural networks (<ref>).
Since the deep neural network model (<ref>) does not have the linear-in-basis form (<ref>) as the non-deep methods, the existing analysis cannot study the L^∞-risk of deep neural networks.
Based on the background, we have the following questions:
Is it possible to obtain an estimator of f^* by deep neural networks whose L^∞-risk converges?
If so, is it possible to show the optimality of a convergence rate of the L^∞-risk?
§.§ Introduction to Adversarial Training
Adversarial training is a training scheme for deep neural networks that was developed to deal with adversarial attacks on predictions by neural networks.
An adversarial attack is a methodology to mislead deep neural networks in its predictions, by putting a tiny perturbation into a covariate for a trained deep neural network.
Since functions by trained deep neural networks are unstable, the perturbed samples, called adversarial samples, vary the outputs of deep neural networks drastically.
<cit.> reported the phenomenon by introducing a case in which a deep neural network misclassified an image of a panda as an image of a gibbon after very fine noise was added to the image.
After the finding, many adversarial attack methods have been developed <cit.>, threatening the robustness of neural networks.
A standard approach to adversarial training is to minimize a robustified empirical risk, which is measured by adding perturbations to the observed input variable <cit.>.
Rigorously, an estimator by the adversarial training for regression is defined as the minimizer of the following empirical risk:
min_f ∈1/n∑_i=1^n max_x' : x' - X_i_∞≤ h (Y_i-f(x'))^2,
with some h > 0.
The outer minimization is solved by the gradient descent method, as with the usual least-squares loss, and the inner maximization is solved by a gradient ascent method.
Several efficient algorithms have been proposed to solve this problem effectively <cit.>, such as the fast gradient sign method <cit.>.
The optimization process is summarized in the following:
i. Initialize f ∈ and repeat the following steps ii and iii:
ii. For each (Y_i,X_i), find x^*_i = argmax_x' ∈{x: x-X_i_∞≤ h} (Y_i - f(x'))^2.
iii. Update function f ← f - η∇ ( n^-1∑_i=1^n (Y_i - f(x^*_i))^2),
where η > 0 is a learning rate and ∇ denotes a derivative with respect to neural network parameters of f.
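To make the loop concrete, the following PyTorch sketch implements one outer iteration of steps ii–iii, approximating the inner maximization by projected gradient ascent with sign updates over the ℓ^∞-ball (a common implementation choice in the literature cited above, not a prescription of this paper); the function name, step counts, and learning rates are placeholders, and the model is assumed to map a batch of inputs in [0,1]^d to scalar outputs.

```python
import torch

def adversarial_training_step(model, X, Y, h=0.1, ascent_steps=5, ascent_lr=0.05, eta=1e-3):
    """One outer iteration: inner gradient ascent over the L-infinity ball, then a descent update."""
    # Step ii: approximate x_i^* by projected gradient ascent inside {x : |x - X_i|_inf <= h}
    x_adv = X.clone()
    for _ in range(ascent_steps):
        x_adv = x_adv.detach().requires_grad_(True)
        inner_loss = ((Y - model(x_adv)) ** 2).sum()
        grad, = torch.autograd.grad(inner_loss, x_adv)
        x_adv = x_adv.detach() + ascent_lr * grad.sign()          # ascent on the loss
        x_adv = torch.min(torch.max(x_adv, X - h), X + h)         # project back onto the ball
        x_adv = x_adv.clamp(0.0, 1.0)                             # stay inside [0,1]^d

    # Step iii: one gradient descent update on the empirical adversarial risk
    outer_loss = ((Y - model(x_adv)) ** 2).mean()
    model.zero_grad()
    outer_loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            p -= eta * p.grad
    return outer_loss.item()
```

Repeating this step over minibatches corresponds to iterating steps ii–iii; with a single ascent step the inner update reduces to a fast-gradient-sign-style attack.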
Note that the efficiency of the algorithm is not a primary interest of this study, hence we focus on the estimation error by the global minimizer of the adversarial risk.
Several works actively pursue a theoretical understanding of adversarial training.
One of the most significant issues is a trade-off between the robustness and accuracy of the adversarial training, which studies the possibility of balancing the predictive performance of deep neural networks with their ability to defend against adversarial samples.
Risk bounds and the sample complexity of adversarial training in general settings have been widely examined <cit.>.
The predictive performance of the adversarial training has been also studied, particularly in linear regression models with over-parameterization <cit.>.
§.§ This Study
The purpose of this study is to investigate the sup-norm convergence of an error by deep neural networks using the adversarial training scheme.
For this aim, we develop a novel formulation of adversarial training and study its efficiency.
Specifically, our formulation includes a preprocessing for smoothing the output variable at the first step, then formulates a neural estimator as a minimizer of an empirical adversarial risk associated with the preprocessing.
The preprocessing serves to reduce the bias on the estimator caused by the perturbation of the adversarial training scheme.
As a specific form of preprocessing, we can employ several nonparametric estimators including the nearest neighbor method and the kernel method.
As a result, we derive an upper bound on the L^∞-risk of the estimator with deep neural networks using our adversarial training scheme, then reveal some properties of its convergence rate.
Specifically, our contributions are summarized as follows.
(i) We derive a convergence rate of the L^∞-risk of the estimator when the true function f^* belongs to the Hölder space.
The derived rate achieves the minimax optimal rate with an appropriately designed preprocessing.
(ii) We show the inconsistency of the ordinary adversarial training without preprocessing.
This is due to the inability of an output variable in the regression problem to accommodate perturbations of the adversarial training.
(iii) Our approach applies to not only the adversarial training with a squared loss but also a general convex loss.
Specifically, we study an L^∞-risk of the regression problem of general loss, which is useful for handling data that have heavy-tailed noise.
(iv) We additionally study the L^∞-risk when the true function f^* has a heterogeneous smoothness, i.e. it belongs to the Besov space.
Our analysis shows the minimax optimality of the convergence rate of the L^∞-risk in this case.
(v) Our result is applicable to a wide range of architectures of deep neural networks, such as a fully-connected dense layer.
Also, it allows both finite depth networks and finite width networks.
We conduct numerical experiments and confirm that they are consistent with our theoretical findings.
Our results provide new implications for the understanding of adversarial training, which has been discussed in terms of the trade-off between the robustness and accuracy of prediction by adversarial training.
Along this line, we show that (i) ordinary adversarial training is not consistent in the regression problem in the first place, (ii) the robustness obtained by adversarial training is described by the sup-norm convergence of the estimation error, and (iii) the adversarial training achieves the optimal rate with appropriate preprocessing.
Technical contributions in our proof are summarized as follows.
First, we derive an upper bound of the sup-norm of an estimation error by the adversarial risk up to constants.
This bound uses a volume of a neighborhood set of an input variable, which is utilized to design the adversarial perturbation.
Second, we develop an empirical process technique for the evaluation of preprocessing.
To control the effects of the preprocessing and the adversarial training simultaneously, we involve two levels of evaluation of biases and variances as appropriate.
§.§ Organization
The rest of this paper is organized as follows.
Section <ref> gives a setup for the nonparametric regression problem and the definition of deep neural networks.
Section <ref> gives a general formulation of adversarial training and an overview of analysis on it.
Furthermore, the section shows that naive adversarial training does not give a consistent estimator.
In Section <ref>, as a main result, we derive an upper bound by a sup-norm of an estimation error by the developed estimator
Section <ref> gives extensions and applications.
Section <ref> gives numerical simulations, and Section <ref> concludes.
§.§ Notation
For n ∈ ℕ, [n] := {1,2,...,n} is a set of natural numbers no more than n.
For a,a' ∈ ℝ, a ∨ a' := max{a,a'} is the maximum.
⌊ a ⌋ denotes the largest integer which is no more than a.
The Euclidean norm of a vector b ∈ ℝ^d is denoted by b_2 := √(b^⊤ b).
Let C_w be a positive finite constant depending on a variable w.
1{E} denotes the indicator function. It is 1 if the event E holds and 0 otherwise.
For a matrix A ∈ ℝ^N × N, A_i,j denotes an (i,j)-th element of A for i,j=1,...,N.
For a measurable function f: Ω → ℝ on a set Ω ⊂ ℝ^d, f_L^p(μ) := (∫ |f(x)|^p dμ(x) )^1/p denotes an L^p-norm for p ∈ [1,∞) with a measure μ, and f_L^∞ := sup_x ∈Ω|f(x)| denotes a sup-norm.
Also, L^p(Ω) denotes a set of measurable functions such that f_L^p(λ) < ∞ with the Lebesgue measure λ.
For x ∈ ℝ^d, δ_x denotes the Dirac measure at x.
For a function f : ℝ^d → ℝ with a multi-variate input (x_1,...,x_d) ∈ ℝ^d and a multi-index a = (a_1,...,a_d) of non-negative integers, ∂^a f(x_1,...,x_d) := ∂_x_1^a_1∂_x_2^a_2⋯∂_x_d^a_d f(x_1,...,x_d) denotes a partial derivative with the multi-index.
For a variable x, C_x denotes some positive finite constant that polynomially depends on x, and it can have different values in different places.
For sequences of reals {a_n}_{n ∈ ℕ} and {b_n}_{n ∈ ℕ}, a_n ≍ b_n denotes lim_n →∞ a_n/b_n = c with some c ∈ (0,∞), a_n = O(b_n) denotes |a_n| ≤ M|b_n|, and a_n = Ω (b_n) denotes |a_n| ≥ M |b_n| with some M > 0 for all sufficiently large n. a_n = o(b_n) denotes |a_n| ≤ M |b_n| for any M > 0 and for all sufficiently large n.
Õ(·) and Ω̃(·) denote O(·) and Ω(·) ignoring multiplied polynomials of log(n), respectively.
For a sequence of random variables {X_n}_{n ∈ ℕ}, X_n = O_P(a_n) denotes Pr(|X_n/a_n| > M) ≤ ε for any ε > 0 and some M>0 for all sufficiently large n, and X_n = o_P(a_n) denotes lim_n →∞ Pr(|X_n/a_n| > ε) = 0 for any ε > 0.
§ PROBLEM SETTING AND PRELIMINARIES
§.§ Nonparametric Regression and L^∞-Risk
§.§.§ Model and Observations
For the nonparametric regression, suppose that we have n observations (X_1,Y_1),...,(X_n,Y_n) ∈ [0,1]^d × that are independent and identical copies of a random variable (X,Y) which follows the regression model (<ref>).
Note that the model is characterized by the unknown function f^* and the noise variable ξ.
Let P_X be a marginal measure of X.
§.§.§ Basic Assumption
We introduce a standard assumption on the regression model.
P_X has a density function that is uniformly lower bounded by C_P_X > 0 on [0,1]^d.
Assumption <ref> is important to estimate f^* on the entire domain [0,1]^d.
Both of the assumptions are commonly introduced in the nonparametric regression for neural networks <cit.>.
We suppose that f^* belongs to a function class with the Hölder smoothness with an index β > 0.
To this end, we define a ball of the Hölder space with β > 0 as
ℋ^β([0,1]^d) := { f: [0,1]^d → ℝ |
∑_b: b_1 < ⌊β⌋∂^b f_L^∞ + ∑_b: b_1 = ⌊β⌋sup_x,x' ∈ [0,1]^d, x ≠ x'|∂^b f(x) - ∂^b f(x')|/x - x'_∞^β - ⌊β⌋≤ B},
with its radius B ≥ 1, where the sums run over multi-indices b of non-negative integers.
Intuitively, ℋ^β([0,1]^d) is a set of functions on [0,1]^d that are ⌊β⌋ times partially differentiable and whose derivatives are (β - ⌊β⌋)-Hölder continuous.
There exists β > 0 such that f^* ∈ ℋ^β'([0,1]^d) holds for all β' ∈ (0,β].
To impose differentiability for f^* is the usual setting for nonparametric regression (see <cit.>, for example).
Further, in the statistical studies on deep neural networks, it has also studied the estimation of functions with more complex structures <cit.>.
We will discuss an extension on this assumption in Section <ref>.
§.§.§ Goal: Sup-norm Convergence
Our goal is to estimate the true function f^* in the model (<ref>) and study an estimation error of an estimator in terms of the sup-norm ·_L^∞.
Rigorously, we will develop an estimator f̂ and study its L^∞-risk defined as follows:
f̂ - f^*_L^∞ := sup_x ∈ [0,1]^d |f̂(x) - f^*(x)|.
The L^∞-risk is a sharp measure for the robustness of estimators and is applied to statistical inference such as a uniform confidence band.
To understand this point, we discuss its relation to the commonly used L^2-risk measured by the L^2-norm, which is a typical case with the following L^p-norm (p ∈ [1,∞)) with p=2:
f̂ - f^*_L^p(P_X)^p := 𝔼_X[ |f̂(X) - f^*(X)|^p ].
Since the L^∞-risk bounds the L^p-risk, i.e., f̂ - f^*_L^∞ ≥ f̂ - f^*_L^p(P_X) holds for every p ≥ 1, the L^∞-risk implies stronger convergence.
Figure <ref> illustrates the difference between the convergences in the L^2-norm and the sup-norm.
In the related studies with neural networks (e.g. <cit.>), the L^2-risk has been mainly studied, but the L^∞-risk of neural network estimators has not been proved to converge.
§.§ Deep Neural Network Model
We define a deep neural network, which is a model of functions by multiple layers.
Specifically, we consider deep neural networks with fully-connected layers and the rectified linear unit (ReLU) activation function, which is one of the most commonly used activations.
Let L ∈ ℕ be the number of layers, and let 𝐖 = (W_1,...,W_L+1) ∈ ℕ^L+1 be a tuple of width parameters, where W_ℓ denotes the width of the ℓ-th layer.
Deep neural networks have a weight matrix A_ℓ ∈ ℝ^W_ℓ+1 × W_ℓ and a weight vector b_ℓ ∈ ℝ^W_ℓ for each ℓ ∈ [L].
For each d ∈ ℕ, we introduce a ReLU activation function σ: ℝ^d → ℝ^d such that σ(z) = ((z_1 ∨ 0), (z_2 ∨ 0),...,(z_d ∨ 0))^⊤ for z = (z_1,...,z_d) ∈ ℝ^d.
For each ℓ ∈ [L-1], we define a map g_ℓ: ℝ^W_ℓ → ℝ^W_ℓ+1 given by the ℓ-th layer as
g_ℓ(z) = σ(A_ℓ z + b_ℓ), z ∈ ℝ^W_ℓ.
For the last L-th layer, we define g_L(z) = A_L z + b_L with z ∈ ℝ^W_L.
For L and 𝐖, we define a parameter space Θ_L,𝐖 := (ℝ^W_2 × W_1 × ℝ^W_1) × (ℝ^W_3 × W_2 × ℝ^W_2) × ⋯ × (ℝ^W_L+1 × W_L × ℝ^W_L) whose elements are θ = ((A_1,b_1),(A_2,b_2),...,(A_L,b_L)); we then define a function f_θ: [0,1]^d → ℝ by a deep neural network with d = W_1 and W_L+1 = 1 as
f_θ(x) = g_L ∘ g_L-1∘⋯∘ g_1(x), x ∈ [0,1]^d.
Intuitively, f_θ(x) is constituted by compositions of L maps given by the multiple layers, with the maximum width 𝐖_∞ = max_ℓ∈ [L+1] W_ℓ.
There are at most ∑_ℓ=1^L (W_ℓ + 1) W_ℓ+1 ≤ L (𝐖_∞ + 1)^2 parameters in the deep neural network model.
We introduce a set of functions by deep neural networks with L layers and W maximum width.
With a tuple (L, W) ∈ ℕ^2 and an upper bound B ≥ 1, we define the set of functions by deep neural networks as
(L,W):= { f_θ | f_θ_L^∞ ≤ B, θ ∈ Θ_L,𝐖, 𝐖_∞ ≤ W }.
The condition on the upper bound B can be satisfied by a clipping operation using the ReLU activation function <cit.>.
This definition of deep neural networks includes several variations of neural networks.
If the parameter matrix A_ℓ is not sparse, the defined neural network is a fully-connected neural network.
If the matrix A_ℓ is constrained to be sparse with some structure, it is equivalent to a convolutional neural network <cit.> or a residual network <cit.>.
One advantage of the definition (<ref>) is that it controls the easily manipulated values of width W and depth L of neural networks, that can be easily specified when designing neural network models.
This is in contrast to manipulating the number of nonzero parameters and the maximum parameter value, which are difficult to control in practice (for example, see <cit.>).
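As an illustrative PyTorch construction of an element of (L,W) (the class name, the uniform-width choice, and the default values below are ours), the sketch stacks L fully-connected ReLU layers of width W and enforces the bound f_θ_L^∞ ≤ B through ReLU-based clipping, in the spirit of the clipping remark above.

```python
import torch
import torch.nn as nn

class ClippedReLUNet(nn.Module):
    """Depth-L, width-W fully-connected ReLU network with outputs clipped to [-B, B]."""

    def __init__(self, d: int, L: int, W: int, B: float = 1.0):
        super().__init__()
        widths = [d] + [W] * (L - 1) + [1]
        layers = []
        for l in range(L):
            layers.append(nn.Linear(widths[l], widths[l + 1]))
            if l < L - 1:
                layers.append(nn.ReLU())
        self.net = nn.Sequential(*layers)
        self.B = B

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.net(x).squeeze(-1)
        # Clip to [-B, B] with ReLU only: min(a, B) = B - relu(B - a), max(a, -B) = relu(a + B) - B
        out = self.B - torch.relu(self.B - out)
        out = torch.relu(out + self.B) - self.B
        return out

model = ClippedReLUNet(d=2, L=4, W=32, B=5.0)
print(model(torch.rand(8, 2)).shape)   # torch.Size([8])
```

Here the depth L counts affine layers, matching the definition above, and the two ReLU operations at the output implement min(·, B) followed by max(·, −B).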
§ ADVERSARIAL TRAINING ESTIMATOR FOR REGRESSION
§.§ Ordinary Adversarial Training and its Inconsistency
We introduce a framework of adversarial training.
The adversarial training framework defines its loss using an input point in the neighborhood of a data point that maximizes loss, as reviewed in (<ref>).
Rigorously, with a scale multiplier h ∈ (h_0, 1), where h_0 > 0 is a fixed lower bound, we consider a neighbourhood of x ∈ [0,1]^d as
Δ_h^p(x) = {x' ∈ [0,1]^d |x - x'_p ≤ h}⊂ [0,1]^d.
Then, we consider the following estimator by the empirical adversarial risk with a function f: [0,1]^d → and p ≥ 1:
R_n^o(f) := 1/n∑_i=1^n sup_x' ∈Δ_h^p(X_i) (Y_i - f(x'))^2.
We can define an estimator of f^* by the minimizer of this empirical adversarial risk as
f̌ := argmin_f ∈(L,W) R_n^o(f).
The minimax optimization in the problem (<ref>) is solved by various algorithms <cit.>.
§.§.§ Inconsistency of Ordinary Adversarial Training
In this section, we show the inconsistency of f̃ by ordinary adversarial training.
Specifically, we obtain the following result.
Suppose n ≥ 3.
There exists a sub-Gaussian noise ξ_i, f^* ∈ ℋ^1([0,1]^d), P_X, and h ∈ (0,1) such that the estimator f̌ in (<ref>) satisfies the following inequality for some constant c^* > 0 with probability at least 0.5:
f̌ - f^*_L^2(P_X)^2 ≥ c^*.
This result shows that the L^∞-risk of f̌ does not converge to zero with the ordinary adversarial training, regardless of the sample size n and the neural network architecture.
Since the L^∞-risk is bounded below by the L^2-risk, the ordinary adversarial training also yields an inconsistent estimator in the sense of the sup-norm.
This result is not limited to the choice of model used for the estimator, hence it occurs with methods other than neural networks.
Intuitively, ordinary adversarial training produces a bias by the design of perturbations on inputs (see the middle panel of Figure <ref>).
This is because the perturbation makes f̌(X_i) fit to an output with a shift ς = x' - X_i, which creates the inconsistency.
Hence, we need to correct the bias by the ordinary adversarial training in the regression problem.
§.§ Proposed Framework of Adversarial Training
We introduce an empirical risk function for adversarial training based on a quadratic loss.
We develop a random map Ŷ: [0,1]^d → for surrogate outputs, which referred to a preprocessed output.
This notion is a general expression of several methods, and its specific configurations will be given later.
With Ŷ, we define an empirical preprocessed adversarial risk as
R_n(f) := 1/n∑_i=1^nsup_x' ∈Δ_h^p(X_i) (Ŷ(x') - f(x'))^2,
for a function f ∈ L^2([0,1]^d).
This loss function is a generalized version of the ordinary adversarial risk (<ref>) with the preprocessing Ŷ.
Using this notion, we define an estimator as the minimizer of the empirical risk as
f̂ ∈ argmin_f ∈(L,W) R_n(f).
This framework intends to perturb an output variable in response to the perturbation on the input X_i.
That is, when the input point X_i is shifted by ς = x' - X_i due to the adversarial training, we also shift the output side by ς.
Hence, the observed outputs may not be able to accommodate the shift.
To address this issue, we prepare the corresponding output using a preprocessing approach, such as the nearest neighbor method.
Figure <ref> illustrates differences between the least square estimator f̂^LS, the ordinary adversarial training f̌, and our proposal estimator by the adversarial training with preprocessing f̂.
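Relative to the ordinary adversarial training loop sketched earlier, the only change is that the perturbed input x' is compared with the preprocessed output Ŷ(x') instead of the fixed observation Y_i. The sketch below isolates this change; `y_hat` is assumed to be a callable that accepts and returns torch tensors (for example, a wrapped version of the nearest-neighbor preprocessing illustrated below), is treated as fixed data during the inner ascent, and all names and default values are placeholders.

```python
import torch

def preprocessed_adversarial_loss(model, y_hat, X, h=0.1, ascent_steps=5, ascent_lr=0.05):
    """Empirical preprocessed adversarial risk:
    (1/n) sum_i max_{|x' - X_i|_inf <= h} (Yhat(x') - f(x'))^2, with the max approximated by PGD."""
    x_adv = X.clone()
    for _ in range(ascent_steps):
        x_adv = x_adv.detach().requires_grad_(True)
        target = y_hat(x_adv.detach())                    # Yhat(x'), treated as data (no gradient)
        inner = ((target - model(x_adv)) ** 2).sum()
        grad, = torch.autograd.grad(inner, x_adv)
        x_adv = x_adv.detach() + ascent_lr * grad.sign()
        x_adv = torch.min(torch.max(x_adv, X - h), X + h).clamp(0.0, 1.0)
    # Outer objective: minimized over the network parameters by the caller (e.g., loss.backward())
    return ((y_hat(x_adv) - model(x_adv)) ** 2).mean()
```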
§.§.§ Preprocessing Design
We impose the following assumptions on the preprocessing.
[Preprocessing]
Ŷ(x) is continuous and 𝔼[Ŷ_L^∞^2] ≤ V^2 with some V > 0.
Also, there exists a non-negative sequence {ζ_n}_{n ∈ ℕ} with ζ_n → 0 as n → ∞ such that the following holds for all n ∈ ℕ:
ζ_n^2 ≥ 𝔼[ Ŷ - f^*_L^∞^2 ].
The sequence {ζ_n}_{n ∈ ℕ} represents a convergence rate of the preprocessing Ŷ to f^*.
Importantly, the data used to construct the preprocessed output Ŷ here may overlap with the data used for the estimator in (<ref>).
There are several examples for preprocessing as follows.
[Nearest neighbour]
First, we consider the k-nearest neighbor method.
For k ∈ ℕ and x ∈ [0,1]^d,
we define a ball B_x(r) := {x' ∈ [0,1]^d | x-x'_2 ≤ r} with r>0, the k-nearest neighbour radius r_k(x) := inf{r > 0 : |B_x(r) ∩ D_n| ≥ k}, and the corresponding neighbour set N_k(x) := B_x(r_k(x)) ∩ D_n, where D_n := {X_1,...,X_n} is the set of observed covariates.
With this notion, we define the k-nearest neighbor preprocessing as
Ŷ(x) = 1/|N_k(x)| ∑_i=1^n Y_i 1{X_i ∈ N_k(x)}.
In this example, if Assumption <ref> holds with β∈ (0,1], we have ζ_n^2 = O(n^-2β/(2β + d)log n) with k ≍ n^2β/(2β + d) by Theorem 1 in <cit.>.
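A plain-NumPy sketch of this nearest-neighbor preprocessing is given below; it averages Y over the k training inputs closest to each query point using brute-force distances, which is all the example above requires. The function name and the toy data are ours, and the value of k is a placeholder of the order n^2β/(2β + d) suggested above.

```python
import numpy as np

def knn_preprocess(X_train, Y_train, k):
    """Return a callable x -> Yhat(x): the average of Y over the k nearest training inputs."""
    X_train = np.asarray(X_train, dtype=float)
    Y_train = np.asarray(Y_train, dtype=float)

    def y_hat(X_query):
        Xq = np.atleast_2d(np.asarray(X_query, dtype=float))
        # Brute-force pairwise squared Euclidean distances between query and training points
        d2 = ((Xq[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
        idx = np.argpartition(d2, kth=k - 1, axis=1)[:, :k]   # k nearest neighbors per query
        return Y_train[idx].mean(axis=1)

    return y_hat

# Toy usage: n = 200 points in [0,1]^2 with a smooth f^* plus noise
rng = np.random.default_rng(1)
X = rng.uniform(size=(200, 2))
Y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.normal(size=200)
print(knn_preprocess(X, Y, k=25)(np.array([0.3, 0.7])))
```

To plug the returned callable into the torch-based adversarial sketch above, it suffices to wrap it so that it converts between tensors and arrays.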
[Posterior mean by Bayesian method]
We consider a mean of a posterior distribution by a prior distribution on functions.
The method considers a B-spline series (see <cit.> for overview and specific constructions).
With some tuple of numbers of basis (J_1,...,J_d) ∈^d and orders (q_1,...,q_d) ∈^d, we consider parameters {θ_j_1,...,j_d}_j_1,...,j_d = 1^J_1,...,J_d and the B-spline series {B_j_k,q_k(x)}_j_k = 1^J_k for k=1,...,d.
Then, the method constructs a prior distribution on a function f with the form
f(x) = ∑_j_1=1^J_1⋯∑_j_d=1^J_dθ_j_1,...,j_d∏_k=1^d B_j_k,q_k(x_k),
by putting a Gaussian prior on the parameters θ_j_1,...,j_d.
If Assumption <ref> holds with β > 0, Theorem 4.4 in <cit.> shows that ζ_n^2 = O(n^-2β/(2β + d)log^2β/(2β + d) n), which is implied by a contraction of the posterior shown by the theorem.
We can pick other methods for preprocessing.
The required property is that an error in estimating a smooth function converges in the sup-norm sense.
§ MAIN RESULT: L^∞-RISK ANALYSIS
We present our main results on the consistency of the estimator and a non-asymptotic upper bound on the estimation error with its convergence rate in n.
We further discuss the minimax optimality of the obtained convergence rate.
To achieve optimality, we need to discuss the design of the preprocessing Ŷ and the architecture of deep neural networks.
§.§ Consistency
We present an upper bound of an expectation of the L^∞-risk of the estimator.
The first result is consistency in the sense of the L^∞-risk.
In an asymptotic analysis with n →∞, a product of the depth and width of deep neural networks should also increase in n.
Consider the regression model (<ref>) and the adversarial estimator f̂ in (<ref>) with the function class by deep neural networks with a tuple (L,W).
Suppose Assumptions <ref> and <ref> hold and f^* is continuous.
Then, there exists a tuple (L,W) with LW = o(n) such that
𝔼[f̂ - f^*_L^∞^2 ] → 0,
as n →∞.
The results show that under divergent widths and depths and appropriate preprocessing, we obtain consistency in the sense of sup-norm.
Note that f^* needs only be continuous, and conditions on derivatives are not necessary.
Also, it provides the following important implications: (i) we can control the L^∞-risk even though the deep neural network model does not have the linear-in-feature structure, and (ii) the preprocessing solves the problem of inconsistency in adversarial training presented in Section <ref>.
Its proof is based on the procedure in Section <ref>.
We note the importance of sup-norm convergence in the context of estimation.
In the theory of approximation, the sup-norm convergence by neural networks has been an important topic, that is, inf_f∈(L,W)f - f^*_L^∞→ 0 as L →∞ or W →∞, and numerous studies have studied the problem, e.g. <cit.>.
Conversely, in the nonparametric regression problem, the sup-norm convergence has been difficult due to noise in observations.
Theorem <ref> shows that the adversarial training with preprocessing enables convergence in the sup-norm.
§.§ Non-Asymptotic Bound and Convergence Rate
As a more rigorous error evaluation, we derive a non-asymptotic upper bound for the L^∞-risk of the estimator with the adversarial training.
This result is also useful in studying convergence rates of the risk and discussing its optimality.
Consider the regression model (<ref>) and the adversarial estimator f̂ in (<ref>) with the function class (L,W) by deep neural networks.
Suppose Assumption <ref>, <ref>, and <ref> hold for some β > 0.
Then we have
[f̂ - f^*_L^∞^2 ] ≤ C_P_X,p,d,B,β h^-d( (WL)^2 log(WL) log n/n + (WL)^-4β/d + h^-dζ_n^2 ),
for every n ≥n̅ with some n̅∈ℕ.
This result gives some implications: (i) we develop an upper bound on the L^∞-risk of the estimator, and
(ii) the bound is proportional to h^-d, which appears when evaluating the L^∞-risk using the adversarial loss.
Note that we can select h as strictly positive and thus it does not affect an order of the bound in n.
More precisely, this upper bound consists of the three terms.
The first term O((WL)^2 log (WL) log n/n) is the complexity error, the second term O((WL)^-4β/d) is the approximation error by the deep neural network, and the third term O(ζ_n^2) is the error by the preprocessing.
The complexity and approximation errors also appear in several risk bounds on an L^2-risk of deep neural network (e.g., Theorem 4.3 in <cit.>).
In contrast, the preprocessing error term is a new term needed to derive an upper bound on the L^∞-risk.
We derive the convergence rate of the L^∞-risk with respect to n.
Specifically, we select the width and depth of deep neural networks in order to balance the trade-off in the error terms presented in Theorem <ref>.
Consider the setting in Theorem <ref>.
Further, suppose that ζ_n^2 = O(n^-2β/(2β + d)log^β^* n) for some β^* > 0.
We set L and W such that LW ≍ n^d/(2(2β + d)), which balances the complexity term (WL)^2 log(WL) log n/n and the approximation term (WL)^-4β/d in Theorem <ref>.
Then, we obtain the following as n →∞:
[f̂ - f^*_L^∞^2 ] = O( n^-2β / (2β + d)log^2 ∨β^* n ).
The rate obtained in Corollary <ref> is identical to the minimax optimal rate of risk measured in the sup-norm in the problem of estimating a function from ^β([0,1]^d) <cit.>.
Specifically, the derived rate corresponds to the following lower bound:
inf_f̅_nsup_f^* ∈^β([0,1]^d)[f̅_n - f^*_L^∞^2 ] = Ω̃( n^-2β / (2β + d)), (n →∞),
where f̅_n is taken from all estimators depending on the n observations.
Since the derived rate is the same as the lower bound, we show that the adversarial training estimator achieves the minimax optimal rate.
§.§ Proof Overview
We give an overview of proof of the main theorem.
As preparation, we introduce several notations related to adversarial training.
With h, an order p and a base measure P, we define an adversarial (pseudo-)norm of f: [0,1]^d → and its empirical analogue
f_P,Δ^2 := _X ∼ P[ max_x' ∈Δ_h^p(X) |f(x')|^2 ], f_n,Δ^2 := n^-1∑_i=1^n max_x' ∈Δ_h^p(X_i) |f(x')|^2.
These norms correspond to the adversarial risks with a squared loss for the regression problem (<cit.>).
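As a concrete illustration (our own, not part of the formal development), the empirical adversarial norm can be approximated for p = ∞ and d = 1 by a grid search over each neighbourhood Δ_h^p(X_i):

```python
# Approximate ||f||_{n,Delta}^2 by replacing the inner maximization over
# Delta_h(X_i) with a finite grid of candidate perturbations (p = inf, d = 1).
import numpy as np

def empirical_adversarial_norm_sq(f, X, h, n_grid=33):
    # f: vectorized map from points in [0, 1] to reals; X: (n,) sample points
    offsets = np.linspace(-h, h, n_grid)
    total = 0.0
    for x in np.asarray(X, dtype=float):
        cand = np.clip(x + offsets, 0.0, 1.0)   # Delta_h(x) intersected with [0, 1]
        total += np.max(f(cand) ** 2)
    return total / len(X)
```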
We also define an approximation error of deep neural networks in (L,W) as
Φ_L,W := inf_f ∈(L,W)f - f^*_L^∞.
This term represents an expressive power of neural networks in (L,W), which decreases as L or W increase (see <cit.> for an example).
We further use a uniform covering number of (L,W).
Let Q_n be an empirical measure with n samples.
Given δ∈ (0,1],
we define a δ-covering set of (L,W) as {f_1,...,f_N}⊂(L,W) and the uniform covering number from the empirical process theory (e.g., <cit.>):
N_L,W(δ) := sup_Q_n N(δ, (L,W), ·_L^2(Q_n)),
where the supremum is taken over all possible empirical measures Q_n.
This notion is useful to evaluate the complexity of the set of deep neural networks, because it gives an upper bound without boundedness or sparsity of parameters of neural networks (See Lemma <ref>, for example).
Our proof consists of three main elements: (i) the derivation of an upper bound of the adversarial norm of the estimation error, (ii) the development of an upper bound of the L^∞ norm of the estimation error by the adversarial norm, and (iii) a combination of the above results using the localization technique.
Each of these is described below.
In the first step, we derive an upper bound for the adversarial norm of the estimation error.
Rigorously, Lemma <ref> will state the following upper bound
[f̂ - f^*_P_X, Δ^2 ] ≤ C {[f̂ - f^*_n,Δ^2] + B^2 (log N_L,W(δ) +1)/n + δ B + δ^2 },
for any δ∈ (0,1) with some universal constant C> 0.
Furthermore, Proposition <ref> will bound the empirical adversarial norm [f̂ - f^*_n,Δ^2] as
[f̂ - f^*_n, Δ^2 ] ≤ C {([f̂ - f^*_L^∞^2 ]^1/2 +δ) ( log N_L,W(δ)/n + ζ_n )^1/2 + (Φ_L,W + ζ_n )^2 }.
We achieve these bounds by extending the empirical process technique by <cit.> to the adversarial norm.
There are several points worth noting: (i) the term Φ_L,W represents a bias, and the term O(log N_L,W(δ) / n) represents a variance of the estimator, which are similar to those of the least-squares estimator, (ii) the variance term is described by the uniform covering number, which is useful to study neural networks whose parameters are unbounded and non-sparse, and (iii) there is a term ζ_n which represents the error due to the preprocessing, unlike the case of the least-squares estimator.
In the second step, we construct an upper bound for the sup-norm using the adversarial norm.
That is, we develop the following statement:
Consider the estimator as (<ref>) and the adversarial norm as (<ref>).
Suppose P_X satisfies Assumption <ref>.
Then, we have
f̂ - f^*_P_X, Δ^2≥ C_P_X,p,d h^d f̂ - f^*_L^∞^2 .
Intuitively, we utilize the similarity between the adversarial norm and the sup-norm to achieve the result.
That is, the maximization over Δ_h^p in the adversarial norm has a similar property to the sup-norm.
Using this property, we give an upper bound on the sup-norm while taking into account the volume of the hypercube.
We will give a generalized version of this result as Lemma <ref> in the supplementary material.
In the last step, we combine these results and derive the main statement of Theorem <ref>.
Here we apply the peeling argument to obtain convergence rates. Note that a simple combination of the above results would lose optimality.
To obtain the minimax optimal rate, we evaluate the approximation error and the uniform covering number based on the localization techniques.
§ APPLICATIONS
§.§ Extension to General Loss Function
§.§.§ Motivation and Setting
We can extend our adversarial training results to the case of non-squared loss functions.
Specifically, we can handle loss functions such as absolute value loss, quantile loss, and Huber loss, which are used in the presence of heavy-tailed noise.
This setting with deep neural networks is studied in <cit.>.
We introduce a generic loss function, which satisfies the following assumption:
A loss function ℓ:ℝ×ℝ→ℝ is symmetric, and ℓ(x,y) is Lipschitz-continuous in each of x and y with Lipschitz constant C_ℓ > 0.
Further, ℓ (y,x)=0 holds if and only if y=x, and there exists a constant c_ℓ > 0 and q ≥ 1 such that
ℓ(y,x) ≥ c_ℓ |y-x|^q, ∀ x,y ∈.
A class of loss function satisfying Assumption <ref> includes several representative loss functions, e.g., an absolute loss ℓ(y,x) = |y-x|, a quantile loss ℓ(y,x) = ({y ≥ x}τ + {y ≤ x}(τ - 1)) (y-x) for τ∈ (0,1), and the Cauchy loss ℓ(y,x) = log (1 + κ^2 (y-x)^2) for κ > 0.
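For reference, these representative losses can be written down directly; the snippet below is our own transcription with illustrative default values of τ and κ.

```python
# Direct transcription of the example losses satisfying Assumption <ref>.
import numpy as np

def absolute_loss(y, x):
    return np.abs(y - x)

def quantile_loss(y, x, tau=0.5):
    # pinball/check loss; tau in (0, 1)
    return np.where(y >= x, tau, tau - 1.0) * (y - x)

def cauchy_loss(y, x, kappa=1.0):
    return np.log1p(kappa ** 2 * (y - x) ** 2)
```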
We introduce an empirical risk function for adversarial training based on ℓ.
Using the neighbourhood set Δ_h^p(x) and the preprocessing Ŷ, we define an empirical risk function as
R̃_n(f) := 1/n∑_i=1^n sup_x' ∈Δ_h^p(X_i)ℓ(Ŷ(x'), f(x')).
This loss function is a generalized version of the ordinary loss for the adversarial training (<ref>).
Using this notion, we define its minimizer as
f̃∈_f ∈(L,W)R̃_n(f).
§.§.§ Error Analysis
We study an L^∞-risk of this estimator by deriving a non-asymptotic upper bound.
The proof differs from that of Theorem <ref>, requiring a more general treatment of loss combined with adversarial training.
Consider the regression model (<ref>) and the adversarial estimator f̃ in (<ref>) with the function class by deep neural networks with a tuple (L,W) and h ∈ (0,1).
Suppose Assumptions <ref> and <ref> hold for β > 0, Assumption <ref> holds with ζ_n^2 = O(n^-2β/(2β + d)log^β^* n) for some β^* > 0 with Ŷ independent of {(X_i,Y_i)_i=1^n},
and Assumption <ref> holds with q ∈ [1,∞).
Then, we have the following as n →∞:
[f̃ - f^*_L^∞^2] = O(h^-2d/q n^-β/(q(β + d))log^ (2/q) ∨β^* n ).
This result shows that the L^∞-risk is bounded with the setup with general loss functions.
The convergence rate of Proposition <ref> of the L^∞-risk corresponds to a convergence rate of excess risks derived by Theorem 4.2 in <cit.> under general losses.
The key to this result is the bound V on [Ŷ_L^∞^2] given in Assumption <ref>.
The independence of the preprocessing Ŷ is imposed for a technical reason; however, it is easy to satisfy.
For example, we can randomly split the observed data into two and then conduct the preprocessing using one of the two.
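A minimal sketch of such a split (our own illustration with arbitrary toy data and a 50/50 split) is:

```python
# Random 50/50 split: the preprocessing uses one half of the data, the
# adversarial fit the other, so Y_hat is independent of the fitting samples.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
n, d = 1000, 1
X = rng.uniform(0.0, 1.0, size=(n, d))
Y = np.sin(4 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(n)   # toy data

idx = rng.permutation(n)
pre_idx, fit_idx = idx[: n // 2], idx[n // 2:]
Y_hat = KNeighborsRegressor(n_neighbors=3).fit(X[pre_idx], Y[pre_idx])
# ...the adversarial estimator is then trained on (X[fit_idx], Y[fit_idx]) only,
# with Y_hat.predict evaluated at the perturbed inputs x'.
```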
The technical derivation is similar to that of Theorem <ref>.
First, we define an expected value of adversarial risk with the general loss and the preprocessing: for f ∈(L,W), we define
R(f) := _X [ sup_x' ∈Δ_h^p(X)ℓ(f(x'),Ŷ(x')) ].
Then, we derive an upper bound for an excess value of the risk R̃ (f̃) - R̃(f^*) in Proposition <ref>.
Next, we bound the L^∞-risk by properties of the expected adversarial risk as
f̃ - f^*_L^∞^q = O ( h^-d( R̃(f̃) - R̃(f^*) + Ŷ - f^*_L^∞)).
in Lemma <ref>.
This result is an extension of the bound for the L^∞-risk by the L^2-risk as shown in Lemma <ref>.
Combining the results, we obtain the result of Proposition <ref>.
§.§ Adaptation to Heterogeneous Smoothness with Besov Space
§.§.§ Motivation and Setting
In this section, we show that our proposed method can be adapted to estimate functions with heterogeneous smoothness, that is, we study the case that the true function f^* is an element of the Besov space (see <cit.> for an introduction).
The Besov space has an interesting property that linear estimators, a certain type of non-deep estimators, cannot estimate its elements with the optimal convergence rate.
First, we give the definition of the Besov space following <cit.>.
Note that there are several equivalent definitions for Besov spaces, and the following is based on the notion of difference of functions.
Consider parameters p,q ∈ (0,∞] and β > 0.
For r ∈ℕ, h ∈ℝ^d, and f:[0,1]^d →ℝ, we define an r-th difference of f at x ∈ [0,1]^d as
Δ_h^r[f](x) = {x + rh ∈ [0,1]^d}∑_j=0^r \binom{r}{j} (-1)^r-j f(x + jh).
We also define the r-th modulus of smoothness of f with u > 0 as
ω_r,p(f,u) = sup_h_2 ≤ uΔ_h^r[f]_L^p(λ).
Recall that ·_L^p(λ) denotes the L^p-norm with the Lebesgue measure λ.
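As a rough numerical illustration (our own, for d = 1 and finite p; the grids over x and h are arbitrary discretization choices), the r-th difference and the modulus of smoothness can be approximated as follows:

```python
# Numerical approximation of Delta_h^r[f](x) and omega_{r,p}(f, u) in d = 1.
import math
import numpy as np

def rth_difference(f, x, h, r):
    inside = (x + r * h >= 0.0) & (x + r * h <= 1.0)
    vals = sum(math.comb(r, j) * (-1.0) ** (r - j) * f(x + j * h) for j in range(r + 1))
    return np.where(inside, vals, 0.0)

def modulus_of_smoothness(f, u, r=2, p=2.0, n_x=2000, n_h=51):
    x = np.linspace(0.0, 1.0, n_x)
    best = 0.0
    for h in np.linspace(-u, u, n_h):
        lp = np.mean(np.abs(rth_difference(f, x, h, r)) ** p) ** (1.0 / p)
        best = max(best, lp)
    return best

# e.g. modulus_of_smoothness(np.sin, u=0.1) decays roughly like u^2 for smooth f
```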
Using these notions, we define a ball in the Besov space as follows.
With r ∈ℕ such that r > β, we define a semi-norm of f: [0,1]^d →ℝ as
f__p,q^β :=
( ∫_0^∞ (u^-βω_r,p(f,u))^q u^-1 du )^1/q if q < ∞,
sup_u > 0 u^-βω_r,p(f,u) if q = ∞.
Then, we define a ball of the Besov space with its radius B ≥ 1 as
_p,q^β := { f: [0,1]^d →|f_L^p(λ) + f__p,q^β≤ B }.
The Besov space can represent functions with discontinuity and heterogeneous smoothness, which means that the degree of smoothness of functions varies depending on x.
These properties follow from the fact that _1,1^1 coincides with the space of functions of bounded total variation <cit.>.
An important property of heterogeneous smoothness is that deep estimators, such as deep neural networks, tend to have an advantage in estimating such functions.
Specifically, a linear estimator, which is a certain family of non-deep estimators <cit.>, becomes sub-optimal when estimating elements of the Besov space.
A linear estimator has the form f̂^lin(·) = ∑_i=1^n Ψ_i(·;X_1,...,X_n)Y_i with arbitrary measurable maps Ψ_i, and includes major estimators such as the kernel ridge estimator.
Then, Theorem 1 in <cit.> implies the following minimax lower bound in the d=1 case:
min_f̂^linmax_f^* ∈_p,q^β[ f̂^lin - f^*_L^2(λ)^2 ] ≥ C n^-2 β' / (2β' + d ),
with some C > 0 and β' = β + 1/2 - 1/p.
In the case p < 2, the linear estimator is sub-optimal: this rate is slower than the minimax optimal rate Õ(n^-2 β / (2β + d )).
Several studies <cit.> show similar statements.
Therefore, it is important to estimate functions in the Besov space with deep neural networks, since it overcomes the limitations of linear estimators.
§.§.§ Error Analysis
We give a convergence rate of the adversarial estimator with deep neural networks and the preprocessing in (<ref>).
Note that we consider the adversarial risk (<ref>) based on the squared loss function.
We first give the following assumption.
There exists β > 0 such that f^* ∈_p,q^β' holds for every β' ∈ (0,β].
To estimate functions in the Besov space, we have to restrict a set of neural network functions.
Let (L,W,S,B) be a set of neural network functions (<ref>) such that there are S ∈ℕ non-zero parameters and each parameter value is included in [-B̅, B̅] with B̅≥ 1, then consider the empirical preprocessed adversarial risk (<ref>) on (L,W,S,B) as
f̂∈_f ∈(L,W,S,B) R_n(f).
Then, we give the convergence rate of the estimator, which corresponds to the minimax optimal rate Õ(n^-2 β / (2β + d )) <cit.>.
Note that this rate is valid regardless of the values of p and q.
Fix p,q ∈ (0,∞].
Consider the regression model (<ref>) and the adversarial estimator f̂ in (<ref>) with the function class (L,W,S,B) by deep neural networks.
Suppose that Assumption <ref>, and <ref> hold with β > d/p.
Further, suppose that ζ_n^2 = O(n^-2β/(2β + d)log^β^* n) for some β^* > 0.
We set L and W as L ≥ C_d,p,β,Blog n, S ≍ W ≍ n^d/(2β + d), and B = O(n^a) with some a > 0.
Then, we obtain the following as n →∞:
[f̂ - f^*_L^∞^2 ] = O( n^-2β / (2β + d)log^3 ∨β^* n ).
The result shows that our estimator with deep neural networks inherits the advantages of both deep and non-deep estimators.
Rigorously, first, it achieves the minimax optimal rate up to log factors.
This optimality is not achieved by the linear estimator and is one of the advantages of using deep neural networks.
Next, the errors are convergent in the sup-norm sense.
This has not been shown for deep neural network estimators based on least squares, and here it is achieved by the adversarial training with preprocessing.
Note that the requirement on the preprocessing is satisfied by, for example, the wavelet estimator with β^* = 2β / (2β + d) <cit.>.
The proof of this proposition is a slight modification of the proof of Proposition <ref> in Appendix.
The main update is an analysis of the approximation error by deep neural networks to a function in the Besov space.
Here, we apply the seminal result by <cit.> on the approximation error in the sup-norm.
§ SIMULATIONS
In this section, we conduct simulation experiments to justify the theoretical results.
Specifically, we generate data from a function and then numerically compute the L^∞-risk of the proposed estimator and other standard methods.
We generate n samples from the regression model (<ref>) with the sample size n ∈{400,800,1200,1600} and the noise variance σ^2 ∈{0.0001,0.01,1.0}.
We consider the following three cases as values of f^* on [0,1]^d.
In Case 1, we set d=1 and f^*(x) = 0.3 sin(4 π x) - x + 0.5.
In Case 2, we set d=2 and f^*(x_1,x_2) = sin(4 π x_1) + cos(2 π x_2).
In Case 3, we set d=7 and f^*(x_1,x_2,...,x_7) = 2/x_1 + 0.01 + 3 log (x_2^7 x_3 + 0.1) x_4 + 0.1 x_5^4 x_6^2 x_7.
For estimation, we use a three-layer fully-connected neural network with the ReLU activation function.
The width of each layer is 40.
For training, we use three methods: (i) adversarial training without preprocessing, (ii) adversarial training with preprocessing (our proposal), and (iii) ordinary least squares.
In the adversarial training cases (i) and (ii), the value of h is set to 2^-3.
For the adversarial training, we employ the projected gradient descent algorithm <cit.>.
For the preprocessing, we employ the k-nearest neighbor method with k=3.
To measure the L^∞-risk, we generate 10,000 uniform random variables on the support [0,1]^d and use the maximum absolute error over these points to approximate the risk.
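For concreteness, the following PyTorch sketch reproduces the spirit of this setup for Case 1 (d = 1, p = ∞, h = 2^-3, k = 3). It is our own minimal implementation, not the authors' code: the optimizer, learning rate, number of epochs and PGD steps are illustrative choices, and the inner maximization treats the k-NN output Ŷ as locally constant, which is a heuristic approximation.

```python
# Minimal sketch: adversarial training with k-NN preprocessing (Case 1, d = 1).
import numpy as np
import torch
import torch.nn as nn
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
n, d, h, sigma = 800, 1, 2 ** -3, 0.1
X = rng.uniform(0.0, 1.0, size=(n, d))
f_star = lambda x: 0.3 * np.sin(4 * np.pi * x[:, 0]) - x[:, 0] + 0.5   # Case 1
Y = f_star(X) + sigma * rng.standard_normal(n)

# Preprocessing: k-nearest-neighbour regressor, evaluated at the perturbed inputs
knn = KNeighborsRegressor(n_neighbors=3).fit(X, Y)

net = nn.Sequential(nn.Linear(d, 40), nn.ReLU(),
                    nn.Linear(40, 40), nn.ReLU(),
                    nn.Linear(40, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
X_t = torch.tensor(X, dtype=torch.float32)

for epoch in range(2000):
    # inner maximization: projected gradient ascent over the L^inf ball of radius h
    x_adv = X_t.clone().detach().requires_grad_(True)
    for _ in range(5):
        y_hat = torch.tensor(knn.predict(x_adv.detach().numpy()), dtype=torch.float32)
        inner = ((net(x_adv).squeeze(-1) - y_hat) ** 2).sum()
        (grad,) = torch.autograd.grad(inner, x_adv)
        with torch.no_grad():
            x_adv = x_adv + 0.25 * h * grad.sign()                  # ascent step
            x_adv = torch.max(torch.min(x_adv, X_t + h), X_t - h)   # project onto Delta_h(X_i)
            x_adv = x_adv.clamp(0.0, 1.0)                           # stay inside the support
        x_adv.requires_grad_(True)
    # outer minimization of the preprocessed adversarial squared loss
    y_hat = torch.tensor(knn.predict(x_adv.detach().numpy()), dtype=torch.float32)
    loss = ((net(x_adv.detach()).squeeze(-1) - y_hat) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# approximate the L^inf-risk on 10,000 uniform points, as described above
grid = rng.uniform(0.0, 1.0, size=(10_000, d))
with torch.no_grad():
    pred = net(torch.tensor(grid, dtype=torch.float32)).squeeze(-1).numpy()
print("approximate sup-norm error:", np.abs(pred - f_star(grid)).max())
```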
Figure <ref> shows the measured L^∞-risk against the sample size n.
We have mainly three findings:
(i) In almost all cases, our proposed estimator from adversarial training with preprocessing monotonically reduces the L^∞-risk in n.
(ii) The adversarial estimators without preprocessing may or may not be as good as those with preprocessing.
This implies that the magnitude of the bias from adversarial training depends on the shape of the true function f^*.
(iii) The L^∞-risk of the least-squares estimator generally decreases at a slower rate or does not decrease at all.
This supports the view that training a deep neural network with least squares may have difficulty in reducing the L^∞-risk.
§ CONCLUSION AND DISCUSSION
We consider the nonparametric function estimator by deep neural networks that converge in the sense of the sup-norm, i.e., L^∞-norm.
Since deep neural networks do not have a tractable structure such as a linear sum of basis functions as the conventional non-deep estimators, they are not guaranteed to converge in the sup-norm sense.
In this study, we tackle this problem by considering the estimator based on adversarial training.
For the bias due to the adversarial training, we solve this problem by introducing the preprocessing for the data.
As a result, our proposed corrected adversarial training estimator converges to the smooth true function with the minimax optimal rate in the sup-norm sense.
Our approach also remains valid for general loss functions and for functions with heterogeneous smoothness.
The experiments support our theoretical results.
Future research directions include sup-norm convergence for estimating non-smooth functions.
Although we expect that there are significant obstacles to the sup-norm convergence of estimators for the non-smooth functions, it is interesting to argue how far we can relax the conditions to estimate such functions.
Another direction is the construction of uniform confidence bands for functions.
Our sup-norm convergence is useful for studying the uncertainty of neural network estimators and for constructing uniform confidence bands.
These directions may be a step toward statistical inference with deep neural networks.
§ PROOF FOR MAIN RESULT IN SECTION <REF>
§.§ Overview
We first develop a general theorem with arbitrary preprocessing, then apply the result and prove the results in Section <ref>.
For a preprocessed output Ŷ, we define its residual as
Ξ(x) := Ŷ(x) - f^*(x), x ∈ [0,1]^d.
This notion expresses an error in estimating the true function f^* by the preprocessing Ŷ.
Consider the regression model (<ref>) and the corrected adversarial estimator f̂ as (<ref>) with the function class (L,W) by deep neural networks.
Suppose that Assumption <ref> and <ref> hold.
Then, we obtain
[f̂ - f^*_L^∞^2 ]
≤ C_P_X,p,d,B h^-d( W^2 L^2 log(WL) log n/n + Φ_L,W^2+ [Ξ_L^∞] Φ_L,W + h^-d[Ξ_L^∞^2 ] ).
We apply Lemma <ref> to bound the sup-norm as
f̂ - f^*_L^∞^2 ≤ 2(C_P_X,p,d h^d)^-1f̂ - f^*_P_X, Δ^2
Note that any f ∈(L,W) is continuous, since it has a form of deep neural network with the ReLU activation with continuity.
We then take an expectation of the bounds and apply Lemma <ref> and Proposition <ref> to obtain
[f̂ - f^*_P_X, Δ^2 ]
≤ 4 [f̂ - f^*_n,Δ^2] + 800 B^2 log N_L,W(δ) + 4118B^2/n + 32 δ B + 8 δ^2
≤( 16[f̂ - f^*_L^∞^2 ]^1/2 + 40 δ) ( log N_L,W(δ)/n + [ Ξ_L^∞^2 ] )^1/2
+ 800 B^2 log N_L,W(δ) + 4118B^2/n + 32 δ B + 8 δ^2 + 4Φ_L,W^2+ 8 [Ξ_L^∞] Φ_L,W + 2 [Ξ_L^∞^2 ],
for δ∈ (0,1].
Note that both f ∈(L,W) and f^* are bounded, the expectations are guaranteed to exist.
We combine this fact with the above inequality to (<ref>), then obtain
[f̂ - f^*_L^∞^2 ]
≤ C_P_X,p,d h^-d( [f̂ - f^*_L^∞^2 ]^1/2 + δ) ( log N_L,W(δ)/n + [ Ξ_L^∞^2 ] )^1/2
+ C_P_X,p,dh^-d( B^2 log N_L,W(δ) + B^2/n + δ B + Φ_L,W^2+ [Ξ_L^∞] Φ_L,W + [Ξ_L^∞^2 ] ),
by setting δ≤ B ∨Φ_L,W, which will be verified later.
We arrange the terms in the above inequality.
For a,b ≥ 0 and z ∈ℝ, z^2 ≤ az + b implies z^2 ≤ 3a^2 + 2b.
We apply this with z = [f̂ - f^*_L^∞^2 ]^1/2 and obtain
[f̂ - f^*_L^∞^2 ]
≤ C_P_X,p,d,B h^-d{log N_L,W(δ)/n + δ + Φ_L,W^2+ [Ξ_L^∞] Φ_L,W + h^-d[Ξ_L^∞^2 ]
+ ( log N_L,W(δ)/n + [ Ξ_L^∞^2 ] )^1/2δ}.
Further, we set δ = 1/n then Lemma <ref> shows
log N_L,W(1/n) = logsup_Q_n N(1/n, (L,W), ·_L^2(Q_n)) ≤ C W^2 L^2 log(WL) log (B n^2).
We substitute these results and obtain the statement.
Suppose P_X satisfies Assumption <ref> and f^* is continuous.
For any bounded and continuous f:[0,1]^d →, we have
f - f^*_P_X,Δ^2 ≥ C_P_X,p,d h^d f - f^*_L^∞^2 .
We apply Lemma <ref> to achieve the statement.
To apply the lemma, we verify that the map x' ↦ (f(x') - f^*(x'))^2 is bounded and continuous by the compactness of the domain [0,1]^d and the assumptions.
Then, we have
f - f^*_P_X,Δ^2 ≥ C_P_X,p,d h^d sup_x' ∈ [0,1]^d (f(x') - f^*(x'))^2 = C_P_X,p,d h^d f - f^*_L^∞^2 .
The inequality follows Lemma <ref> by setting g(·) = (f(·) - f^*(·))^2.
Suppose that every f ∈(L,W) is continuous, that f^* is continuous, and that f^*_L^∞≤ B holds.
Then, for any δ > 0, we have
[f̂ - f^*_P_X,Δ^2]
≤ 4 [f̂ - f^*_n,Δ^2] + 800 B^2 log N_L,W(δ) + 4118B^2/n + 32 δ B + 8 δ^2.
Without loss of generality, we assume that N_L,W(δ) ≥ 3 and log N_L,W(δ) ≤ n.
Also, we define the nearest element of the covering set to f̂, that is, we define ĵ := argmin_j' = 1,...,Nsup_Q_nf_j' - f̂_L^2(Q_n).
Let X_i' be an i.i.d. samples from P_X for i = 1,...,n.
Note that Ŷ depends on X_1,...,X_n.
We give a bound on the following difference as
|[f̂ - f^*_P_X,Δ^2] - [f̂ - f^*_n,Δ^2] |
= | [ 1/n∑_i=1^n sup_x' ∈Δ_h^p (X_i') (f̂(x') - f^*(x'))^2 - sup_x' ∈Δ_h^p (X_i) (f̂(x') - f^*(x'))^2 ] |
≤| [ 1/n∑_i=1^n sup_x' ∈Δ_h^p (X_i') (f_ĵ(x') - f^*(x'))^2 - sup_x' ∈Δ_h^p (X_i) (f_ĵ(x') - f^*(x'))^2_=: g_ĵ(X_i,X_i')] |
+ 2 | [ 1/n∑_i=1^n sup_x' ∈Δ_h^p (X_i) (f̂(x') - f_ĵ(x') + f_ĵ(x') - f^*(x'))^2 - sup_x' ∈Δ_h^p (X_i) (f_ĵ(x') - f^*(x'))^2 ] |
≤| [ 1/n∑_i=1^n g_ĵ(X_i,X_i') ] | + 4 [sup_Q_nf̂ - f_ĵ_L^2(Q_n)^2 ]^1/2[ sup_Q_nf_ĵ - f^*_L^2(Q_n)^2 ]^1/2
+ 2 [ sup_Q_nf̂ - f_ĵ_L^2(Q_n)^2]
≤| [ 1/n∑_i=1^n g_ĵ(X_i,X_i') ] | + 4 δ[ f_ĵ - f^*_L^∞^2 ]^1/2+ 2 δ^2
≤| [ 1/n∑_i=1^n g_ĵ(X_i,X_i') ] | + 8 δ B + 2δ^2.
Here, the second last inequality follows Lemma <ref> using the continuity of f^* and the f ∈.
The last inequality follows the definition of ĵ and the boundedness of f ∈ and f^* by B.
We further study the first term of the bound (<ref>).
As preparation, we define
r_j = Bmax{[f_j - f^*_P_X,Δ^2 ]^1/2 , (n^-1log N_L,W(δ))^1/2},
for j=1,...,N, and it yields
r_ĵ ≤ B _X| X_1:n, Y_1:n[ sup_x' ∈Δ_h^p(X) (f_ĵ(x') - f^*(x'))^2 ]^1/2 + B (n^-1log N_L,W(δ))^1/2
≤ B _X| X_1:n, Y_1:n[ sup_x' ∈Δ_h^p(X) (f̂(x') - f^*(x'))^2]^1/2 +B (n^-1log N_L,W(δ))^1/2 + Bδ.
Here, _X| X_1:n, Y_1:n[ · ] denotes a conditional expectation with given X_1,...,X_n and Y_1,...,Y_n.
By the law of iterated expectation, the first term of the bound is decomposed as
| [ 1/n∑_i=1^n g_ĵ(X_i,X_i') ] |
= 1/n| [ ∑_i=1^n g_ĵ(X_i,X_i') /r_ĵ_=: g̃_ĵ(X_i,X_i')r_ĵ] |
≤1/n| [ ∑_i=1^n g̃_ĵ(X_i,X_i')( B _X| X_1:n, Y_1:n[ sup_x' ∈Δ_h^p(X) (f̂(x') - f^*(x'))^2]^1/2 +B (n^-1log N_L,W(δ))^1/2 + Bδ)] |
≤1/n| [ max_j =1,...,N_L,W(δ)∑_i=1^n g̃_j(X_i,X_i') ( B _X| X_1:n, Y_1:n[ sup_x' ∈Δ_h^p(X) (f̂(x') - f^*(x'))^2]^1/2)] |
+ B/n| [ max_j =1,...,N_L,W(δ)∑_i=1^n g̃_j(X_i,X_i') ( (n^-1log N_L,W(δ))^1/2 + δ)]^1/2|
≤B/n| [ ( max_j =1,...,N_L,W(δ)∑_i=1^n g̃_j(X_i,X_i') )^2 ]^1/2[f̂ - f^*_P_X,Δ^2 ]^1/2|
+ B/n[ max_j =1,...,N_L,W(δ)∑_i=1^n g̃_j(X_i,X_i')]((n^-1log N_L,W(δ))^1/2 + δ)
≤B/n(36 n log N_L,W(δ) + 256 n)^1/2[ f̂ - f^*_P_X,Δ^2]^1/2+ B/n (6 log N_L,W(δ) + 11).
The first inequality follows (<ref>) and the second last inequality follows the Cauchy-Schwartz inequality.
We also apply Lemma <ref> and 1 ≤log N_L,W(δ) ≤ n to achieve the last inequality.
We substitute the result (<ref>) into the bound (<ref>), then obtain the inequality:
|[f̂ - f^*_P_X,Δ^2] - [f̂ - f^*_n,Δ^2] |
≤B/n(36 n log N_L,W(δ) + 256 n)^1/2[ f̂ - f^*_P_X,Δ^2]^1/2 + B/n (6 log N_L,W(δ) + 11) + 8 δ B + 2δ^2.
We rearrange the term and obtain that
[f̂ - f^*_P_X,Δ^2]
≤ 2 ([f̂ - f^*_n,Δ^2] + B/n (6 log N_L,W(δ) + 11) + 8 δ B + 2δ^2 ) + 8B^2(36 n log N_L,W(δ) + 256 n)/n^2.
Then, we obtain the statement.
Suppose that N_L,W(δ) ≥ 3.
For the function g̃_j(X_i,X_i') defined in the proof of Lemma <ref>, we have
[ max_j =1,...,N_L,W(δ)∑_i=1^n g̃_j(X_i,X_i')] ≤ 6 (n log N_L,W(δ))^1/2 + 32 n^1/2/ 3(log N_L,W(δ))^1/2,
and
[ ( max_j =1,...,N_L,W(δ)∑_i=1^n g̃_j(X_i,X_i') )^2] ≤ 36 n log N_L,W(δ) + 256 n.
We first note that for any j = 1,...,N_L,W(δ), we have [g̃_j(X_i,X_i')] = 0, |g̃_j(X_i,X_i')| ≤ 4B^2 /r_j ≤ 4 n^1/2/ (log N_L,W(δ))^1/2 =: M, and
(g̃_j(X_i,X_i')) = 2 r_j^-2( sup_x' ∈Δ_h^p(X_1) (f_j(x') - f^*(x'))^2 )
≤ 2 r_j^-2[ ( sup_x' ∈Δ_h^p(X_1) (f_j(x') - f^*(x'))^2 )^2]
≤ 8 r_j^-2[f_j - f^*_P_X,Δ^2] B^2
≤ 8.
The second inequality follows Hölder's inequality.
Using the bounds above, we apply the Bernstein inequality as
( ∑_i=1^n g̃_j(X_i,X_i') ≥ t) ≤exp( - t^2/2t M/3 + 2n (g̃_j(X_1,X_1')))
≤exp( - t^2/8t n^1/2(log N_L,W(δ))^-1/2 /3 + 16n)
≤exp( - t^2/16t n^1/2(log N_L,W(δ))^-1/2 /3)
= exp( - 3t (log N_L,W(δ))^1/2/16 n^1/2),
for t ≥ 6 (n log N_L,W(δ))^1/2.
The last inequality follows 8t n^1/2(log N_L,W(δ))^-1/2 /3 ≥ 16n for t larger than the threshold 6 (n log N)^1/2.
Using the result (<ref>) associated with t ≥ 6 (n log N_L,W(δ))^1/2, we bound the following expectation:
[ max_j =1,...,N_L,W(δ)∑_i=1^n g̃_j(X_i,X_i')]
= ∫_0^∞( max_j =1,...,N_L,W(δ)∑_i=1^n g̃_j(X_i,X_i') ≥ t)dt
≤ 6 (n log N_L,W(δ))^1/2 + 2N_L,W(δ) ∫_6 (n log N_L,W(δ))^1/2^∞max_j =1,...,N_L,W(δ)( ∑_i=1^n g̃_j(X_i,X_i') ≥ t)dt
≤ 6 (n log N_L,W(δ))^1/2 + 2N_L,W(δ) ∫_6 (n log N_L,W(δ))^1/2^∞exp( - 3t (log N_L,W(δ))^1/2/16 n^1/2)dt
≤ 6 (n log N_L,W(δ))^1/2 + 32 n^1/2/ 3(log N_L,W(δ))^1/2.
Then, the first statement is proved.
For the second statement, we similarly apply (<ref>).
Using the result (<ref>) associated with t ≥ 6 (n log N_L,W(δ))^1/2, we bound the following expectation:
[ ( max_j =1,...,N_L,W(δ)∑_i=1^n g̃_j(X_i,X_i') )^2]
= ∫_0^∞( max_j =1,...,N_L,W(δ)∑_i=1^n g̃_j(X_i,X_i') ≥ t^1/2)dt
≤ 36 n log N_L,W(δ) + 2N_L,W(δ) ∫_6 n log N_L,W(δ)^∞max_j =1,...,N_L,W(δ)( ∑_i=1^n g̃_j(X_i,X_i') ≥ t^1/2)dt
≤ 36 n log N_L,W(δ) + 2N_L,W(δ) ∫_6 n log N_L,W(δ)^∞exp( - 3t^1/2 (log N_L,W(δ))^1/2/16 n^1/2)dt
≤ 36 n log N_L,W(δ) + 256 n.
Then, the second statement is also proved.
Consider the setting in Theorem <ref>.
Then, for any δ∈ (0,1], we have
[f̂ - f^*_n,Δ^2] ≤( 4[f̂ - f^*_L^∞^2 ]^1/2 + 10δ) ( log N_L,W(δ)/n + [ Ξ_L^∞^2 ] )^1/2
+ Φ_L,W^2+ 2 [Ξ_L^∞] Φ_L,W + 2 [Ξ_L^∞^2].
By the definition of the minimization problem, L_n(f̂) ≤L_n(f) holds for any f ∈(L,W), hence we have the following basic inequality as
1/n∑_i=1^n max_x' ∈Δ_h^p(X_i) (Ŷ(x') - f̂(x'))^2 ≤1/n∑_i=1^n max_x' ∈Δ_h^p(X_i) (Ŷ(x') - f(x'))^2,
which can be rewritten as
1/n∑_i=1^n max_x' ∈Δ_h^p(X_i) (f^*(x') + Ξ(x') - f̂(x'))^2 ≤1/n∑_i=1^n max_x' ∈Δ_h^p(X_i) (f^*(x') + Ξ(x') - f(x'))^2.
We bound the both-hand side of (<ref>).
The left-hand side (LHS) of (<ref>) is lower bounded as
= 1/n∑_i=1^n max_x' ∈Δ_h^p(X_i){ (f^*(x') - f̂(x'))^2 + Ξ(x')^2 + 2 Ξ(x') (f^*(x') - f̂(x'))}
≥f^* - f̂_n,Δ^2 - Ξ_n,Δ^2 - 2/n∑_i=1^nmax_x' ∈Δ_h^p(X_i) | Ξ(x') (f^*(x') - f̂(x'))|,
by applying Lemma <ref>.
Similarly, we bound the right-hand side of (<ref>) as
= 1/n∑_i=1^n max_x' ∈Δ_h^p(X_i){ (f^*(x') - f(x'))^2 + Ξ(x')^2 + 2 Ξ(x') (f^*(x') - f(x'))}
≤f^* - f_n,Δ^2 + Ξ_n,Δ^2 +2/n∑_i=1^n max_x' ∈Δ_h^p(X_i) | Ξ(x') (f^*(x') - f(x'))|.
Combining (<ref>) and (<ref>) with (<ref>), we obtain
f^* - f̂_n,Δ^2 ≤f^* - f_n,Δ^2 + 2 Ξ_n,Δ^2 + 2/n∑_i=1^nmax_x' ∈Δ_h^p(X_i) | Ξ(x') (f^*(x') - f̂(x'))| _=: T_1
+ 2/n∑_i=1^n max_x' ∈Δ_h^p(X_i) | Ξ(x') (f^*(x') - f(x'))|
≤Φ_L,W^2 + 2 Ξ_L^∞^2 + T_1 + 2 Ξ_L^∞Φ_L,W,
by the definition of Φ_L,W in (<ref>).
We will bound the expectations of these terms.
Note that the expectations of the terms are guaranteed to exist, by the boundedness of f^* and f̂,f ∈(L,W), and Ŷ.
We bound [T_1].
We define the nearest element of the covering set to f̂, that is, we define ĵ := argmin_j' = 1,...,Nsup_Q_nf_j' - f̂_L^2(Q_n).
Then, we bound [T_1] as
[T_1] = [ 2/n∑_i=1^n max_x' ∈Δ_h(X_i) | Ξ(x') (f^*(x') - f_ĵ(x') + f_ĵ(x') - f̂(x'))| ]
≤[ 2/n∑_i=1^n max_x' ∈Δ_h(X_i) | Ξ(x') (f^*(x') - f_ĵ(x'))| ] + [ 2/n∑_i=1^n max_x' ∈Δ_h(X_i) | Ξ(x') ( f_ĵ(x') - f̂(x'))| ]
≤[ 2/n∑_i=1^n max_x' ∈Δ_h(X_i) | Ξ(x') (f^*(x') - f_ĵ(x'))| f̂ - f^*_L^∞ + δ/f_ĵ - f^*_L^∞]
+ 2 [ sup_Q_nΞ_L^2(Q_n)^2 ]^1/2[ sup_Q_nf_ĵ - f̂_L^2(Q_n)^2]^1/2
≤[ (f̂ - f^*_L^∞ + δ) 2/n∑_i=1^n max_x' ∈Δ_h(X_i) | Ξ(x') (f^*(x') - f_ĵ(x'))|/f_ĵ - f^*_L^∞_=: Z_ĵ] + 2 [Ξ_L^∞^2 ]^1/2δ.
Since we have
|Z_j| ≤2/n∑_i=1^n | max_x' ∈Δ_h(X_i){| Ξ(x') | | (f^*(x') - f_j(x'))| }/f_j - f^*_L^∞| ≤ 2Ξ_L^∞,
for any j = 1,...,N,
the Cauchy-Schwartz inequality yields
[ (f̂ - f^*_L^∞ + δ) Z_ĵ] ≤[ (f̂ - f^*_L^∞ + δ)^2 ]^1/2[ Z_ĵ^2 ]^1/2
≤ 2( [f̂ - f^*_L^∞^2 ]^1/2 + δ)[ max_j=1,...,N_L,W(δ) Z_j^2 ]^1/2
≤ 4( [f̂ - f^*_L^∞^2 ]^1/2 + δ) ( log N_L,W(δ) + [ Ξ_L^∞^2 ]/n)^1/2.
The last inequality follows the maximal inequality (Theorem 3.1.10 in <cit.>) for the bounded random process.
Using this result, we obtain
[T_1] ≤ 4 ( [f̂ - f^*_L^∞^2 ]^1/2 + δ) ( log N_L,W(δ) + [ Ξ_L^∞^2 ]/n)^1/2 + 2 [Ξ_L^∞^2 ]^1/2δ
≤( 4[f̂ - f^*_L^∞^2 ]^1/2 + 10δ) ( log N_L,W(δ)/n + [ Ξ_L^∞^2 ] )^1/2.
We substitute the bound (<ref>) into the expectation of (<ref>), then obtain the statement.
Fix ε > 0 arbitrary.
Also, we fix C_* = C_P_X,p,d,B as used in the statement of Proposition <ref>.
By the universal approximation theorem (e.g. Theorem 1 in <cit.>) associated with the continuity of f^*, there exists a tuple (L',W') such that
Φ_L',W'≤√(ε h^d/( 4C_*)).
Further, by Assumption <ref>, there exists n̅∈ such that
[Ξ_L^∞^2] ≤√(ε h^2d/(4 C_*)).
Then, for all n ≥n̅, Proposition <ref> yields that
[f̂ - f^*_L^∞^2 ] ≤ C_* h^-d(W'L')^2 log(W'L') log n/n + 3 ε/4.
Then, for any n ≥n̅∨ (4 C_* (W'L')^2 log(W'L') h^-dε^-1), we have [f̂ - f^*_L^∞^2 ] ≤ε/4 + 3ε/4 = ε, which shows the statement.
As preparation, Lemma <ref> gives the following bound
Φ_L,W≤ C_d,β (LW)^-2β/d.
With this bound on Φ_L,W, we apply Proposition <ref> and obtain
[f̂ - f^*_L^∞^2 ]
≤ C_P_X,p,d,B,β h^-d( (WL)^2 log(WL) log n/n + (LW)^-4β/d+ [Ξ_L^∞] (LW)^-2β/d + h^-d[Ξ_L^∞^2] ).
Further, we have
(LW)^-4β/d+ [Ξ_L^∞] (LW)^-2s/d + h^-d[Ξ_L^∞^2] ≤{(LW)^-2β/d + h^-d/2[Ξ_L^∞^2]^1/2}^2,
by applying Jensen's inequality.
Arranging the terms, we obtain the statement.
We start with the inequality (<ref>) in the proof of Theorem <ref> and obtain
[f̂ - f^*_L^∞^2 ]
≤ C_P_X,p,d,B,d,β h^-d( n^-2β/(2β+d) (log^2 n + 1) + [Ξ_L^∞] n^-β/(2β+d) + h^-d[Ξ_L^∞^2] )
by the setting WL ≍ n^d/(4β + 2d).
§ PROOF FOR APPLICATIONS
§.§ Proof for General Loss Setting
We give proofs of the result in Section <ref>.
Consider the setting in Proposition <ref>.
Then, we have for n such that log N(1/n) ≥ 1:
[R̃ (f̃) - R̃(f^*)] ≤C_ℓ, B ( log N_L,W(1/n) + V^2 )/n^1/2 + C_ℓ (Φ_L,W + [Ξ_n_L^∞]).
This proof is similar to Lemma 3.1 in <cit.>.
A difference between <cit.> and our result is that a property of the loss depends on f in our setting, so we have to modify it.
Hence, we write down the proof.
We develop the proof in the following four steps: (i) a basic decomposition, (ii) bounding a variance, (iii) bounding a bias, and (iv) combining every bound.
Step 1: Basic decomposition.
We define i.i.d. copies of the observations D := {(X_i,Y_i)_i=1^n} as D' := {(X_i',Y_i')_i=1^n}, and also define an excess loss as
g(x,Ŷ,f) = sup_x' ∈Δ_h^p(x)ℓ(f(x'), Ŷ(x')) - sup_x' ∈Δ_h^p(x)ℓ(f^*(x'), Ŷ(x'))
We further define empirical means of the excess loss as G_n(f) := n^-1∑_i=1^n g(X_i,Ŷ,f) with the observations D, and G_n'(f) := n^-1∑_i=1^n g(X_i',Ŷ,f) with the copies D'.
Since f̂ is independent to D', we can rewrite the expected risk as
[R̃(f̂) - R̃(f^*)] = [ _D'[G_n'(f̂) ]].
Since f̂ is the minimizer of the empirical risk and the loss is bounded, we obtain the following inequality of expectations:
[G_n(f̂)] ≤[G_n(f) ],
for any f∈(L,W).
We set f such that f - f^* _L^∞ = inf_f ∈(L,W)f - f^*_L^∞.
Using this fact, we decompose the excess risk as
[R̃(f̂) - R̃(f) ] = [ _D'[ G_n'(f̂)]] ≤[ - 2G_n(f̂) + _D'[ G_n'(f̂)]_=:] + 2[ G_n(f)_=: ].
The inequality follows (<ref>).
Step 2: Bound the variance [].
We bound an expectation of the term .
By the boundedness of both Ŷ and f̃ by Assumption <ref> and (<ref>), the expectation [] exists.
We prepare additional notations.
Fix δ∈ (0,1].
We consider a covering set {f_j}_j=1^N_L,W(δ)⊂, then we pick f_j from the set such that sup_Q_nf_j - f̃_L^2(Q_n)≤δ.
We define a term g̃(X_i,Ŷ,f̃) by the following reform of as
= 1/n∑_i=1^n {_D'[ G_n'(f̃)] - 2 g(X_i,Ŷ,f̃) } =: 1/n∑_i=1^ng̃(X_i,Ŷ,f̃),
which yields the following form
[] = [1/n∑_i=1^ng̃(X_i,Ŷ,f̃)]
= [1/n∑_i=1^ng̃(X_i,Ŷ,f_j)_:= _1] + [1/n∑_i=1^ng̃(X_i,Ŷ,f̃)- 1/n∑_i=1^ng̃(X_i,Ŷ,f_j)_=: _2] .
We will bound both [_1] and [_2], separately.
We bound the term [_2].
Since g in (<ref>) is Lipschitz continuous in f with its Lipschitz constant C_ℓ by Lemma <ref>, we easily see that g̃ is Lipschitz continuous in f with its Lipschitz constant 6C_ℓ.
Thus, we obtain that
[_2] ≤| [1/n∑_i=1^ng̃ (X_i,Ŷ,f̃)] - [1/n∑_i=1^ng̃ (X_i, Ŷ,f_j)] | ≤ 6 C_ℓδ.
Next, we bound the term [_1].
Here, we need to consider a uniformly bounded function y:[0,1]^d → [-B,B].
For each f_j in the covering set, t > 0, and the bounded function y, we use the Bernstein inequality to derive its stochastic upper bound.
As preparation, we consider a threshold B_n ≥ 1 depending on n and define a clipped preprocessing Ŷ_B_n(·) := max{min{Ŷ(·), B_n}, -B_n}.
We firstly approximate [_1] by the Lipschitz continuity of ℓ as
[_1] ≤[1/n∑_i=1^ng̃(X_i,Ŷ_B_n,f_j)] + 6 C_ℓ[Ŷ - Ŷ_B_n_L^∞].
Since |Ŷ(x) - Ŷ_B_n(x)| = |Ŷ(x)| {|Ŷ(x)| ≥ B_n} holds, we can bound the expectation in the second term of the right-hand side as
[Ŷ - Ŷ_B_n_L^∞] = [ sup_x ∈ [0,1]^d |Ŷ(x)| {|Ŷ(x)| ≥ B_n}|]
≤[ sup_x ∈ [0,1]^d |Ŷ(x)| sup_x ∈ [0,1]^d{|Ŷ(x)| ≥ B_n}|]
≤[Ŷ_L^∞{Ŷ_L^∞≥ B_n}]
≤[Ŷ_L^∞^2 / B_n].
The last inequality follows {x ≥ 1}≤ x for any x ≥ 0.
The existence of the second moment is guaranteed by Assumption <ref>.
We put this result to (<ref>) and obtain
[_1] ≤[1/n∑_i=1^ng̃(X_i,Ŷ_B_n,f_j)] + 6 C_ℓ[Ŷ_L^∞^2 / B_n].
Then, we bound the first term [n^-1∑_i=1^ng̃(X_i,Ŷ_B_n,f_j)].
Since we have |g(x,Ŷ_B_n,f)| ≤ C_ℓ ( B_n ∨ B) for any x ∈ [0,1]^d and f: f_L^∞≤ B, we obtain the following inequality with fixed Ŷ_B_n:
( 1/n∑_i=1^ng̃ (X_i,Ŷ_B_n,f_j) > t)
=(_D'[ g(X_i',Ŷ_B_n,f_j)] - 2/n∑_ i=1^n g(X_i,Ŷ_B_n,f_j) > t )
=(_D'[ g(X_i',Ŷ_B_n,f_j)] - 1/n∑_ i=1^n g(X_i,Ŷ_B_n,f_j) > t/2 + 1/2_D'[ g(X_i',Ŷ_B_n,f_j)] )
≤(_D'[ g(X_i',Ŷ_B_n,f_j)] - 1/n∑_ i=1^n g(X_i,Ŷ_B_n,f_j) > t/2 + 1/2_D'(g(X_i, Ŷ_B_n, f_j))/4 C_ℓ B_n)
≤exp( - n(t')^2/2 _D'(g(X_i, Ŷ_B_n, f_j)) + 16 C_ℓ ( B_n ∨ B) t'/3 )
≤exp( - n(t')^2/2 t' C_ℓ ( B_n ∨ B) + C_ℓ ( B_n ∨ B) t'/3 )
≤exp( - n(t')^2/16 t' C_ℓ ( B_n ∨ B) + 16 C_ℓ ( B_n ∨ B) t'/3 )
≤exp( - 3 n t'/64 C_ℓ ( B_n ∨ B))
≤exp( - 3 n t/128 C_ℓ ( B_n ∨ B)).
The first and third inequalities follow _D'(g(X_i, Ŷ_B_n, f_j)) ≤ 4 C_ℓ B_n _D'[g(X_i, Ŷ_B_n, f_j)], and the second and last inequalities follows a setting t' = t/2 + _D'(g(X_i, Ŷ_B_n, f_j))/(8 C_ℓ (B ∨ B_n)).
Using this inequality for a uniform bound in terms of the covering set {f_j}_j=1^N_L,W(δ) and the independent random functions Ŷ and Ŷ_B_n, we obtain
( max_j = 1,...,N_L,W(δ)1/n∑_i=1^ng̃ (X_i,Ŷ_B_n,f_j) > t ) ≤ N_L,W(δ) exp( - 3nt/128 C_ℓ ( B_n ∨ B) t ).
Then, by the maximal inequality (Corollary 2.2.8 in <cit.>), for any η > 0, we have
[max_j=1,...,N_L,W(δ)[1/n∑_i=1^ng̃ (X_i,Ŷ_B_n,f_j)]]
≤η + ∫_η^∞( max_j = 1,...,N_L,W(δ)1/n∑_i=1^ng̃ (X_i,Ŷ_B_n,f_j) > t ) dt
≤η + ∫_η^∞ N_L,W(δ) exp( - 3nt/128 C_ℓ ( B_n ∨ B) t ) dt
≤η + N_L,W(δ) (128 C_ℓ ( B_n ∨ B))/3nexp( - 3 n η/ 128 C_ℓ ( B_n ∨ B) ) .
We set B_n = n^1/2, hence we have (B ∨ B_n) ≤ C_B n^1/2.
Also, we set η = (128 C_B,ℓ n^1/2) log N_L,W(δ) / (3n) and put this result into (<ref>), we obtain
[_1] ≤[max_j=1,...,N[1/n∑_i=1^ng̃ (X_i,Ŷ,f_j)]] ≤C_ℓ,B (log N_L,W(δ) + [Ŷ_L^∞^2 ])/n^1/2.
Combining the inequalities (<ref>) and (<ref>) into (<ref>) and set δ = 1/n, we obtain
[] ≤(2 C_ℓ^2 B_2 + C_ℓ B/3) (log N_L,W(1/n) + [Ŷ_L^∞^2 ])/n^1/2.
Step 3: Bound the bias [].
By the Lipschitz continuity of the loss ℓ by Assumption <ref>, we have
[] = [ 1/n∑_i=1^n sup_x' ∈Δ_h^p(X_i)ℓ( f̅(x'), Ŷ(x')) ]
≤[ sup_x ∈[0,1]^dℓ( f̅(x), Ŷ(x)) ]
≤[sup_x' ∈[0,1]^d C_ℓ |f̅(x) - Ŷ(x)| + ℓ(Ŷ(x), Ŷ(x)) ]
≤ C_ℓ[f̅ - Ŷ_L^∞]
≤ C_ℓ (f̅ -f^*_L^∞ + [f^*- Ŷ_L^∞ ])
≤ C_ℓ (Φ_L,W + [Ξ_n_L^∞]).
The last inequality holds by setting f such that f - f^* _L^∞ = inf_f ∈(L,W)f - f^*_L^∞.
Step 4: Combining the bounds.
We combine the result in Step 3 and Step 4 into the decomposition (<ref>), then obtain the statement.
Consider the expected adversarial risk R̃(·) with general losses as (<ref>).
Then, for the estimator f̃ as (<ref>) and q ∈ [1,∞), we have
f^* - f̃_L^∞^q ≤ C_P_X,p,d,ℓ,q h^-d( R̃(f̃) - R̃(f^*) + Ξ_L^∞^q ∨Ξ_L^∞).
We develop a lower bound of R̃(f̃) - R̃(f^*) as
R̃(f̃) - R̃(f^*) = _X[sup_x' ∈Δ_h^p(X)ℓ(Ŷ(x'), f̃(x')) - sup_x' ∈Δ_h^p(X)ℓ(Ŷ(x'), f^*(x')) ]
≥ C_P_X,p,d h^d sup_x ∈ [0,1]^d |ℓ(Ŷ(x'), f̃(x'))| - C_ℓŶ - f^*_L^∞
≥ C_P_X,p,d,ℓ h^d Ŷ - f̃_L^∞^q - C_ℓΞ_L^∞
≥ C_P_X,p,d,ℓ,q h^d ( f^* - f̃_L^∞^q - Ξ_L^∞^q ) - C_ℓΞ_L^∞ .
Here, the first inequality follows Lemma <ref> and the Lipschitz continuity of ℓ by Assumption <ref>, and the last inequality follows (a+b)^q ≤ C_q (a^q + b^q) for q ∈ [1,∞) and a,b ≥ 0.
By Proposition <ref> and Lemma <ref>, we have
[f^* - f̃^2_L^∞] ≤ C_P_X, p,d,ℓ,q h^-2d/q( [(R̃(f̃) - R(f^*))^2/q] + [ Ξ_L^∞^2] )
≤ C_P_X,B, p,d,ℓ,q, h^-2d/q{(log N_L,W(1/n) /n^1/2)^2/q + Φ_L,W^2/q + ζ_n^2 }
≤ C_P_X,B, p,d,ℓ,q,V h^-2d/q{( W^2L^2 log(WL) log n /n^1/2)^2/q + Φ_L,W^2/q + ζ_n^2 }.
The last inequality follows Lemma <ref>.
We set WL ≍ n^d/(4β + 4d) and obtain the statement.
§.§ Proof of Adaptation to Besov Space
We give proof of the result in Section <ref>.
To show the statement, we slightly modify the proof of Proposition <ref>.
We start from the inequality (<ref>) with setting δ = 1/n.
Since we use (L,W,S,B) as a set of candidate functions instead of (L,W), we obtain the following updated inequality of (<ref>) as
[f̂ - f^*_L^∞^2 ] ≤ C_P_X,p,d,B h^-d{logÑ_L,W,S,B(1/n)/n + Φ̃_L,W,S,B^2 + ζ_n^2 },
which replaces N_L,W(1/n) by Ñ_L,W,S,B(1/n) := sup_Q_n N(1/n, (L,W,S,B), ·_L^2(Q_n)) and Φ_L,W by Φ̃_L,W,S,B := inf_f ∈(L,W,S,B)f - f^*_L^∞.
We study the terms Ñ_L,W,S,B(1/n) and Φ̃_L,W,S,B.
For the approximation error term Φ̃_L,W,S,B, we apply Lemma <ref> by setting r = ∞ and obtain
Φ̃_L,W,S,B≤ C_d,β N^-β/d,
for sufficiently large N such that L ≥ C_d,p,β,Blog (N), W = C_d,βN, S=(L-1)C_d,βN + N.
About the entropy term Ñ_L,W,S,B(1/n), we apply Lemma <ref> and obtain
logÑ_L,W,S,B(1/n) ≤log N(1/n, (L,W,S,B), ·_L^∞)
≤ LS log(n LB(1+S))
≤ C_d,β L^2 N log (n L^2 B N)
≤ C_d,p,β,B N log^2(N) log (nN log^2(N)),
by substituting the setup of L,S,W and B.
We substitute (<ref>) and (<ref>) into (<ref>) and obtain
[f̂ - f^*_L^∞^2 ] ≤ C_P_X,p,d,B,β h^-d{ N log^2(N) log (nN log^2(N))/n + N^-2β/d + ζ_n^2 }.
We set N ≍ n^d/(2β + d) and obtain the statement.
§ SUPPORTIVE RESULT
Consider a non-negative bounded continuous function g:[0,1]^d →_+.
Then, we have
_X[sup_x' ∈Δ_h^p(X) g(x') ] ≥g_L^∞ P_X(Δ_h^p(x^*)),
with x^* ∈argmax_x ∈ [0,1]^d g(x).
Further, if Assumption <ref> holds, then we have
_X[sup_x' ∈Δ_h^p(X) g(x') ] ≥g_L^∞ h^d C_P_X,p,d.
Let A := {x ∈ [0,1]^d | g(x) = max_x' ∈ [0,1]^d g(x')} be a set of argmax of g(x), which is non-empty because of the compactness of [0,1]^d and boundedness/continuity of g.
Also, we define a union Δ_A := ∪_x ∈ AΔ_h^p({x}).
By the non-negativity of g, we obtain
_X[sup_x' ∈Δ_h^p(X) g(x') ] ≥_X[sup_x' ∈Δ_h^p(X) g(x') {X ∈Δ_A }]
= _X[sup_x ∈ [0,1]^d g(x) {X ∈Δ_A }]
= g_L^∞ P_X(Δ_A).
Hence, we obtain the first statement.
We consider that Assumption <ref> holds.
We develop a lower bound of P_X(Δ_A) as
P_X(Δ_A) ≥inf_x ∈ A P_X( Δ_h^p({x})) ≥ C_P_Xinf_x ∈ Aλ( Δ_h^p({x})) ≥ C_P_Xinf_x ∈ [0,1]^dλ( Δ_h^p({x})),
where C_P_X is a lower bound of a density function of P_X defined in Assumption <ref>, and λ(·) is the Lebesgue measure.
Since the Lebesgue measure of the L^p-ball is known, we obtain that
inf_x ∈ [0,1]^dλ( Δ_h^p({x})) = Γ(1/p + 1)^d/Γ(d/p + 1)h^d,
where Γ (·) is the Gamma function.
Then, we obtain the second statement.
We develop the following covering number bound.
The following lemma immediately holds by <cit.> and <cit.>.
Consider the set of deep neural networks as (<ref>) with the depth L, the width W, and the upper bound B.
For any δ > 0 and sufficiently large n, we have
log N(δ, (L,W), ·_L^2(P_n)) ≤ C W^2 L^2 log(WL) log (B n /δ).
Let D be the VC-dimension of , and S(≤ W^2 L) be a number of parameters in .
By Theorem 3 in <cit.>, we bound the VC-dimension as D = O(S L log(S)) ≤ O(W^2 L^2 log (WL)).
Using this inequality and Theorem 12.2 in <cit.>, we have
log N(δ, (L,W), ·_L^2(P_n)) ≤ D log( en B/δ D) ≤ C W^2 L^2 log(WL) log (B n /δ).
for n = Ω(W^2 L^2 log (WL)).
Consider a non-empty compact set T ⊂^d with some d and continuous bounded functions f,f':T →.
Then, we have
|sup_t ∈ T(f(t) + f'(t))^2 - sup_t ∈ Tf(t)^2 | ≤ 2f_L^∞f'_L^∞ + f'_L^∞^2.
We define the optimal values t^* ∈ T and t^†∈ T such that sup_t ∈ T(f(t) + f'(t))^2 = (f(t^*) + f'(t^*))^2 and sup_t ∈ Tf(t) ^2 = f(t^†)^2.
Note that such t^* ∈ T and t^†∈ T exist by the compactness of T and the continuity of f and f'.
We first derive the following inequality
sup_t ∈ T(f(t) + f'(t))^2 - sup_t ∈ Tf(t) ^2 ≤ f(t^*)^2 + 2 f(t^*)f'(t^*) + f'(t^*)^2 - f(t^*)^2
≤ 2 f_L^∞f'_L^∞ + f'_L^∞^2.
Second, we develop a bound for the opposite side as
sup_t ∈ Tf(t)^2 - sup_t ∈ T(f(t) + f'(t))^2 ≤ f(t^†)^2 - (f(t^†) + f'(t^†))^2
≤ 2f(t^†) f'(t^†) - f'(t^†)^2
≤ 2 f_L^∞f'_L^∞ + f'_L^∞^2.
Then, we obtain the statement.
For any continuous and bounded functions f,g on a compact set I, we have
max_t ∈ I (f(t) + g(t)) ≥max_t ∈ I f(t) - max_t ∈ I |g(t)|.
Let t' ∈ I be a point such that max_t ∈ I (f(t) + g(t)) = f(t') + g(t'), which is guaranteed to exist by the compactness of I and the boundedness/continuity of f,g.
The statement simply follows
max_t (f(t) + g(t)) = f(t') + g(t') ≥ f(t') - |g(t')| ≥max_t(f(t)) - max_t |g(t')|.
Consider functions f,f', y: [0,1]^d → [-B,B], and a loss function ℓ satisfying Assumption <ref>.
Also, consider a function g as in (<ref>).
For any x ∈ [0,1]^d, we have
g(x,y,f) - g(x,y,f') ≤ C_ℓ |f(x̅) - f'(x̅)|,
for some x̅∈ [0,1]^d.
We define x^* ∈Δ_h^p(x) such that ℓ(y(x^*), f(x^*)) = max_x' ∈Δ_h^p(x)ℓ(y(x'), f(x')).
Its existence follows the continuity of f, f',y, and ℓ.
For f,f' ∈ L^2([0,1]^d), we have
g(x,y,f) - g(x,y,f') = max_x' ∈Δ_h^p(x)ℓ(y(x'),f(x')) -max_x' ∈Δ_h^p(x)ℓ(y(x'),f'(x'))
≤ℓ(y(x^*),f(x^*)) - ℓ(y(x^*),f'(x^*))
≤ C_ℓ |f(x^*) - f'(x^*)|.
The first inequality follows max_x' ∈Δ_h^p(x)ℓ(y(x'), f(x')) = ℓ(y(x^*), f(x^*)), and the second inequality follows the Lipschitz continuity of ℓ in the second argument from Assumption <ref>.
Thus, we obtain the statement.
Fix N,M ∈ arbitrarily.
If (L,W) is a set of functions with W= C_d (N+2) log_2 (8N) and L= C_s (M+2) log_2 (4M) + 2d, we have
inf_f ∈(L,W)sup_f^* ∈ C^s_1([0,1]^d)f - f^*_L^∞≤ C_d,s N^-2s/d M^-2s/d.
Fix p,q,r∈ (0, ∞] and β∈ (0,∞).
Suppose that β > d max{1/p-1/r, 0} holds.
Let (L,W,S,B) be a set of neural network functions (<ref>) such that there are S ∈ℕ non-zero parameters and each value is included in [-B̅, B̅] with B̅≥ 1.
Let N be a sufficiently large number and set L ≥ C_d,p,β,Blog (N), W = C_d,βN, S=(L-1)C_d,βN + N, and B̅ is a polynomially increasing in N.
Then, we have
sup_f^0 ∈_p,q^βinf_f ∈(L,W,S,B)f^0 - f_L^r(λ)≤ C N^-β/d,
with some constant C > 0 independent of N.
For ε∈ (0,1], we obtain
log N(ε, F(L,W,S,B)) ≤ LS log(ε^-1 LB(1+S)).
§ PROOF OF INCONSISTENCY
We first specify the coordinates of the setting.
We consider two points x = (0.3, 0.5, 0.5, ...,0.5), x' = (0.7,0.5, 0.5, ...,0.5)∈ [0,1]^d, and a marginal measure as a mixture of Dirac measures on the points; P_X = 0.5 δ_{x} + 0.5 δ_{x'}.
We also specify the true function with an input x = (x_1,...,x_d) ∈ [0,1]^d as f^*(x) = - {x_1 < 0.4} + 10 (x_1 - 0.5){0.4 ≤ x_1 ≤ 0.6} + {x_1 > 0.6}, and the noise variable ξ_i as a uniform random variable on [-0.1,0.1].
For the adversarial training, we set p=∞ and h = 0.5.
We study an empirical risk minimizer in this setting.
Since the inputs X_1,...,X_n are either of x or x', we set n_1 := |{i: X_i = x}| and n_2 := |{i: X_i = x'}| such that n = n_1 + n_2.
With the specified coordinates above, we rewrite an empirical risk of f:[0,1]^d → with the adversarial training as
1/n∑_i=1^n max_x ∈Δ_h^p(X_i) (Y_i - f(x))^2
=1/n∑_i: X_i = xmax_x ∈Δ_h^p(X_i) (f^*(X_i) + ξ_i - f(x))^2 + 1/n∑_i: X_i = x'max_x ∈Δ_h^p(X_i) (f^*(X_i) + ξ_i - f(x))^2
=1/n∑_i: X_i = xmax_x ∈ [0,1]^d: x_1 ∈ [0,0.8] (-1 + ξ_i - f(x))^2 + 1/n∑_i: X_i = x'max_x ∈ [0,1]^d: x_1 ∈ [0.2,1] (1 + ξ_i - f(x))^2,
which follows from f^*(x) = -1 and f^*(x') = 1.
To minimize this empirical risk in terms of f, we restrict a class of f.
Specifically, we set f with an input x = (x_1,...,x_d) as having a form f(x) = c_1 {x_1 ≤ 0.2} + c_2 {0.2 < x_1 < 0.8} + c_3 {0.8 ≤ x_1} with some constants c_1,c_2,c_3 ∈.
This form of f can minimize the risk, since the risk depends only on the values of f on each of these regions.
Then, we rewrite the risk as
(<ref>) =1/n∑_i: X_i = xmax{ (-1 + ξ_i - c_1)^2 , (-1 + ξ_i - c_2)^2} + 1/n∑_i: X_i = x'max{ (1 + ξ_i - c_2)^2 , (1 + ξ_i - c_3)^2 }.
Here, we consider an event |n_1/2 - n/2| ≤ 1, which appears with probability 1-2 exp(-2/n) ≥ 0.5 with n ≥ 3, by Hoeffding's inequality.
In this case, a simple calculation yields that c_2 ∈ [-0.2, 0.2] minimizes (<ref>) since it prevents quadratic growth of the risk in terms of c_2, which gives (-1 + ξ_i - c_1)^2 < (-1 + ξ_i - c_2)^2 and (1 + ξ_i - c_2)^2 > (1 + ξ_i - c_3)^2.
Then, we rewrite the risk (<ref>) as
(<ref>) = 1/n∑_i: X_i = x (-1 + ξ_i - c_2)^2 + 1/n∑_i: X_i = x'(1 + ξ_i - c_2)^2,
and its minimization over c_2 yields the following optimal choice
c_2^* = n_2 - n_1/n + 1/n∑_i=1^n ξ_i.
Then, we have that the original risk (<ref>) is minimized by the following function
f̌(x) := c_1^* {x_1 ≤ 0.2} + c_2^* {0.2 < x_1 < 0.8} + c_3^* {0.8 ≤ x_1},
where c_1^* = -1 + n_1^-1∑_i: X_i = xξ_i and c_3^* = 1 + n_2^-1∑_i: X_i = x'ξ_i.
Finally, we evaluate the L^∞-risk of f̌.
Simply, we have
f̌ - f^*_L^∞^2 ≥f̌ - f^*_L^2(P_X)^2
= _X ∼ P_X[ (f̌(X) - f^*(X) )^2 ]
= 1/2{ (f̌(x) - f^*(x) )^2 + (f̌(x') - f^*(x') )^2}
= 1/2{ (c_2^* +1 )^2 + (c_2^* - 1)^2}
= 1 + (c_2^*)^2
≥ 1.
Hence, we show the statement of Proposition <ref>.
|
http://arxiv.org/abs/2307.05472v1 | 20230710144636 | An effective density matrix approach for intersubband plasmons coupled to a cavity field: electrical extraction/injection of intersubband polaritons | ["M. Lagrée", "M. Jeannin", "G. Quinchard", "S. Pes", "A. Evirgen", "A. Delga", "V. Trinité", "R. Colombelli"] | physics.optics | ["physics.optics", "cond-mat.mes-hall", "physics.app-ph"] |
[email protected]
III-V Lab, Campus Polytechnique, 1, Avenue Augustin Fresnel, RD 128, 91767 Palaiseau cedex, France
Centre de Nanosciences et de Nanotechnologies (C2N), CNRS UMR 9001, Université Paris-Saclay, 91120 Palaiseau, France
III-V Lab, Campus Polytechnique, 1, Avenue Augustin Fresnel, RD 128, 91767 Palaiseau cedex, France
III-V Lab, Campus Polytechnique, 1, Avenue Augustin Fresnel, RD 128, 91767 Palaiseau cedex, France
III-V Lab, Campus Polytechnique, 1, Avenue Augustin Fresnel, RD 128, 91767 Palaiseau cedex, France
III-V Lab, Campus Polytechnique, 1, Avenue Augustin Fresnel, RD 128, 91767 Palaiseau cedex, France
[email protected]
III-V Lab, Campus Polytechnique, 1, Avenue Augustin Fresnel, RD 128, 91767 Palaiseau cedex, France
[email protected]
Centre de Nanosciences et de Nanotechnologies (C2N), CNRS UMR 9001, Université Paris-Saclay, 91120 Palaiseau, France
The main technological obstacle hampering the dissemination of modern optoelectronic devices operating with a large light-matter coupling strength Ω is the lack of an in-depth comprehension of carrier extraction from, and injection into, strongly coupled light-matter states, the so-called polaritonic states.
The main challenge lies in modeling the interaction between excitations of different nature, namely bosonic excitations (the plasmonic ISB excitations) with fermionic excitations (the electrons within the extraction or injection subband).
In this work, we introduce a comprehensive quantum framework that encompasses both the ISB plasmonic mode and the extractor/injector mode, with a specific emphasis on accurately describing the coherent nature of transport.
This reveals inherent selection rules dictating the interaction between the ISB plasmon and the extraction/injection subband.
To incorporate the dynamics of the system, this framework is combined with a density matrix model and a quantum master equation, which have the key property of distinguishing intra- and intersubband mechanisms.
These theoretical developments are confronted with experimental photocurrent measurements from mid-infrared quantum cascade detectors (λ = 10 µm) embedded in metal-semiconductor-metal microcavities, operating at the onset of the strong light-matter coupling regime (2Ω=9.3 meV).
We are able to reproduce quantitatively the different features of the photocurrent spectra, notably the relative amplitude evolution of the polaritonic peaks with respect to the voltage bias applied to the structure.
These results on extraction allow us to elucidate the possibility of effectively injecting electronic excitations into ISB plasmonic states, and thus into polaritonic states.
An effective density matrix approach for intersubband plasmons coupled to a cavity field: electrical extraction/injection of intersubband polaritons
R. Colombelli
August 12, 2023
====================================================================================================================================================
§ INTRODUCTION
The use of electromagnetic resonators like antennas or cavities is an established tool to tailor and improve the properties of optoelectronic devices, whether by increasing the sensitivity, reducing the electronic noise, improving the wall-plug efficiency.
In general, the strategy is to engineer, and typically increase, the interaction strength between light and an electronic transition in matter.
However, the interaction strength in practical devices always remains a small fraction of the photon or electronic transition decay rates, which places the device in the so-called weak coupling regime.
On the contrary, when the light-matter interaction strength overcomes the losses in the system, the latter enters the strong coupling regime.
The new constituents of this system are mixed light-matter states called polaritons, which can be formed by hybridizing any polarization-carrying matter excitation and a photon field.
Polariton physics thus emerged as a transverse research field studying the fundamental properties of strongly coupled systems. It revealed a plethora of phenomena, the most recognized being the out-of-equilibrium Bose-Einstein condensation of exciton-polaritons <cit.>.
However, most experiments on polaritons are performed by optical means, whereas practical devices require electrical injection or extraction of charge carriers. Recent experiments sparked new interest in electrical transport in systems under strong light-matter coupling conditions, with the report of increased conductivity in organic molecules <cit.>, or the breakdown of topological protection in quantum Hall systems <cit.>.
Intense research effort is thus currently devoted to provide an accurate description of transport in systems strongly coupled to a cavity field.
In this context, intersubband (ISB) polaritons, that originate from the coupling between an intersubband transition in doped semiconductor quantum wells (QW) and a cavity mode, are of particular interest.
They were first reported in 2003 <cit.> with absorption experiments, and that same year electronic detection of the signature of strong coupling was also reported <cit.>.
However, proposals for electrical injection and electroluminescence of ISB polariton devices <cit.>,
that were quickly followed by experimental work <cit.>, faced the problem of inefficient electrical injection in a polaritonic state. That issue proved insurmountable in the following years <cit.>.
To circumvent the problem, the study of the "reverse" process (photo-detection) was proposed to elucidate transport mechanisms in polaritonic ISB electronic devices, with experiments on quantum well infrared photodetectors (QWIP) operating in the strong light-matter coupling regime <cit.>.
In this context, we have recently presented a semi-empirical model to describe the electronic photoresponse of quantum cascade detectors (QCD) operating
in the strong light-matter coupling regime <cit.>.
Based solely on classical oscillators, it allowed us to shine new light on the polariton-to-electron process, and in particular to conjecture that a direct polariton-to-electron tunnel mechanism may play a major role in such devices.
This result was obtained at the expense of great simplifications. In particular, because the model is based on classical theory, it cannot include any consideration on the coherence of the involved processes.
Nevertheless, coherence is of paramount importance when dealing with systems operating in the strong-coupling regime, and even more so for ISB polaritons, that originate from the coupling between a cavity mode and a collective excitation.
ISB transitions, that are more rigorously defined as ISB plasmons<cit.>, are collective matter excitations originating from the electronic plasma inside a semiconductor quantum well, subject to its own Coulomb interaction.
This is in stark contrast to, for instance, exciton-polaritons that result from an ensemble of single-particle transitions.
The main consequence is the presence of dark states, that do not couple to the electromagnetic field, but do participate in electronic transport. This has important consequences on the behavior of ISB polariton systems under electrical injection.
In this paper, we propose a quantum description of QCDs based on a density matrix formalism, that we compare to a complete set of experimental data.
Crucially, this approach allows us to describe (de)coherence and dissipation in the system. Our goal is to develop a theoretical description that permits to explain the electronic extraction process (photo-detection), and that - at the same time - provides a more suitable vantage point to elucidate the more complex electronic injection process leading to light emission.
We note that a very recent work reports experimental results and proposes an alternative transport model for similar QCD structures operating in the strong coupling regime <cit.>. It works explicitly within the fermionic approach, without performing the bosonization steps.
While similar conclusions are drawn in the photo-detection case, the work we present raises fundamental open questions and presents ways forward to the case of electrically pumped polaritonic light emitters.
In the first part, we develop the model and derive the main observable quantities, notably the photocurrent generated by an exciting external photon field.
In the second part, we validate the theoretical results by studying the photoresponse of quantum cascade detectors operating
in the strong coupling regime as a function of the applied bias.
We compare the values obtained in our model
with an in-house code based on Ref. <cit.> that models the electronic transport in a more rigorous way, but does not
incorporate the cavity effects <cit.>.
In the last part, we discuss the implications of the main assumption at the basis of our new model, and extend them to the case of electrical injection.
The system under study is sketched in the central part of Fig. <ref>.
It consists of two electronic subbands confined inside a QW, here represented in momentum space.
The second subband is tunnel-coupled to the fundamental state of an adjacent QW, and the whole system is embedded inside a cavity.
The system can operate as a detector, acting as a QCD (top sketch), when it is excited by a photon that generates a photocurrent. This path is represented by blue arrows.
It is also possible to inject electrons in the system (red arrows and bottom sketch), when an electric bias is applied, that can eventually lead to photon emission. In this case the device behaves as a polaritonic LED.
§ AN EFFECTIVE DENSITY MATRIX APPROACH FOR ELECTRONIC TRANSPORT IN CAVITY-COUPLED QCDS
§.§ Bosonization of the active optical transition
We start by defining the annihilation and creation operators c_λ𝐤 and c_λ𝐤^†, the fermionic operators related to the annihilation and creation of electrons in subbands λ = {0,1,2} (see Fig. <ref>). We impose T=0 K and we assume that all N electrons are contained inside the 0-subband without external excitation. The one-particle quantum state |1,𝐤⟩ of electronic wave vector 𝐤, representing a state where one electron is in subband λ=1, is:
|1,k⟩= c_1k^† c_0k|F⟩
where |F⟩ denotes the fundamental Fermi state (equilibrium state, where all the electrons are contained in subband λ=0).
For now, we restrict the problem to the λ=0,1 subbands, which form the intersubband optical transition. This transition will be denoted α. Following the developments of Ref. <cit.>, to describe the photo-excitation of an electron in the α-transition, it is relevant to switch from the fermionic basis formed by the |1,𝐤⟩ states to a new basis of states {|B_i^α⟩}_i=[1:N].
We have:
|B_i^α⟩ = ∑_ |𝐤|< 𝐤_F w_i 𝐤^α|1,𝐤⟩
Since the system is considered at T=0 K, only the |𝐤|<𝐤_F states are occupied, with 𝐤_F the modulus of the wavevector corresponding to the Fermi level of the 0-subband. The {|B_i^α⟩}_i=[1:N] basis only covers the single-excitation subspace (only one photo-excited electron per subband), which is sufficient in the weak excitation regime. The coefficients w_i 𝐤^α are defined as:
w_1𝐤^α = 1/√(N) ∀𝐤
∑_𝐤 w_i 𝐤^α = 0 ∀ i≠ 1
The |B_1^α⟩ state, of eigenenergy equal to the ISB transition energy ω_α = ω_1-ω_0 (assuming parabolic dispersion), has the remarkable property of holding the entire oscillator strength of the α transition:
⟨ F | d̂ | B_1^α⟩ = z_α√(N)
where d̂ denotes the dipole operator and z_α the dipole strength of one electronic transition. The |B_1^α⟩ state is called the bright state: it is formed by the coherent superposition of the one-particle fermionic states |1,𝐤⟩ of the α-transition and it holds the entire capacity of light-matter interaction.
The {|B_i^α⟩}_i=[2:N] are called the dark states since they cannot interact with light:
⟨ F | d̂ | B_i^α⟩ = 0 i≠ 1
From these developments, one can define bright state destruction and creation operators b_α and b_α^† which describe the collective excitation of the α-transition:
b_α^†= 1/√(N)∑_𝐤 c_1 𝐤^† c_0 𝐤
In a weak excitation regime and for a large number of electrons N, b_α can be approximated as a bosonic operator. b_α and b_α^† respectively destroy and create excitations in the bright state |B_1^α⟩.
The final step in this development is to include the plasmonic effect ω_P of the electronic polarizations.
The diagonalization of the plasmonic Hamiltonian leads to the emergence of new operators of eigen-energy ω̃_α= √(ω_α^2 + ω_P^2 ) and a plasmonic bright state that is still orthogonal to the dark states <cit.>.
Mathematically, this new state is essentially the same as the previous bright state, except that it is no longer degenerate with the dark states: for simplicity, we will keep the notations |B_1^α⟩ and b_α for the bright state and the corresponding bosonic operator, respectively.
Note that at this stage we have not yet introduced the strong light-matter coupling: this derivation is therefore valid in any coupling regime.
§.§ Bosonization of the extractor: the tunnel-coupling Hamiltonian
We now turn to the insertion of the extraction subband in the formalism. As outlined in Refs. <cit.>, the mixing of bosonic (the plasmonic ISB excitations) and fermionic (the electrons in the extraction subband) degrees of freedom is necessary to correctly model the transport mechanisms that take place in an optically excited ISB system.
The focus of our paper is on ISB systems strongly coupled to a photonic mode, but we stress that the above consideration is valid also in the weak-coupling regime.
When a photon is absorbed by an ISB transition, it generates a bosonic excitation: an ISB plasmon. But the measured current, in a detector, is of course of fermionic nature.
When the extraction subband is explicitly included in the system dynamics (and not only in the form of an external bath), keeping track of all these degrees of freedom becomes an extremely tedious task. A correct way to describe the interaction between excitations of such different natures is to use a full fermionic Hamiltonian of extremely large dimension, a significant mathematical challenge from which the nature of the transport cannot be straightforwardly interpreted.
In this work, we overcome this strong limitation with a key modification: we propose to describe the subband λ=2 with a bosonic operator in the context of an extraction process.
This approach has several advantages, and, as we will discuss later on, it may also make it possible to address the scenario involving an injection process.
To explicitly incorporate subband λ=2 into our formalism, we introduce the one-particle fermionic states |2,𝐤⟩ of the β-transition:
|2,𝐤⟩ = c_2𝐤^† c_0𝐤|F⟩
Analogous to the α-transition, we will not use this fermionic state basis and instead employ a new ortho-normal basis {|B_i^β⟩}_i=[1:N] defined as:
|B_i^β⟩ = ∑_ |𝐤|< 𝐤_F w_i 𝐤^β|2,𝐤⟩
where the coefficients w_i 𝐤^β are chosen such that:
w_1𝐤^β = 1/√(N) ∀𝐤
∑_𝐤 w_i 𝐤^β = 0 ∀ i ≠ 1
The construction of this basis follows a similar approach as that of the {|B_i^α⟩}_i=[1:N] basis. Specifically, the first state |B_1^β⟩ is the bright state of the β-transition, while the remaining states {|B_i^β⟩}_i=[2:N] are the dark states of this same transition.
However, this time, since the oscillator strength of a diagonal transition is very small, we have z_β≪ z_α and thus the bright and dark states of the extractor remain degenerate.
Note that the single-excitation subspace describing subbands 1 and 2, of dimension 2N, is spanned by the concatenation of the {|B_i^α⟩}_i=[1:N] and {|B_i^β⟩}_i=[1:N] bases.
The introduction of this new basis is valuable to evaluate the tunnel coupling between subbands 1 and 2 within the regime of strong light-matter coupling. The tunnel coupling operator T̂ can be defined as:
T̂ = Ω_T ∑_𝐤 (c_2𝐤 c_1𝐤^† + c_2𝐤^† c_1𝐤 )
where Ω_T is the tunnel coupling strength. Using equations (<ref>), (<ref>), (<ref>) and (<ref>), we compute the tunnel interaction between subbands 1 and 2:
⟨ B_1^α | T̂ | B_1^β⟩ = Ω_T
⟨ B_1^α | T̂ | B_j^β⟩ = 0 j ≠ 1
⟨ B_i^α | T̂ | B_1^β⟩ = 0 i ≠ 1
⟨ B_i^α | T̂ | B_j^β⟩ = Ω_T ∑_𝐤 w_i𝐤^α∗ w_j𝐤^β i ≠ 1, j ≠ 1
The above relations, which are de facto selection rules, are one of the key results of this work:
through tunnel interaction, it is not possible to transition from a dark state to a bright state (Eq. (<ref>)) or vice versa (Eq. (<ref>)).
Obviously, dark states can interact with each other through tunnel coupling (Eq. (<ref>)), and the same applies to bright states as well (Eq. (<ref>)).
These results have crucial implications on the nature of electronic transport in a QCD.
For a detection process, where light promotes excitations into the |B_1^α⟩ bright state, the previous results suggest that an optical excitation can generate an electronic current in only two ways:
* Direct tunnelling into the extractor bright state |B_1^β⟩, preserving the coherent nature of the excitation, and subsequent decay - with loss of coherence - into an extractor dark state |B_i≠1^β⟩
or
* First decay - with loss of coherence - into an ISB dark state |B_i≠1^α⟩ in the active region, and subsequent tunneling into an extractor dark state |B_i≠1^β⟩
Other channels involving bright-to-dark tunneling should not be considered, as they are prohibited by the selection rules (<ref>) and (<ref>).
Once in the extractor dark states, the electronic excitation will simply decay in the remaining cascade, generating photocurrent.
We stress that the construction of the new β basis merely extends the procedure applied to the α transition (detailed in Ref. <cit.>) to the β transition, without additional hypotheses. This basis transformation streamlines the comprehension of the transport process and leads to the natural emergence of the selection rules presented in Eqs. (<ref>) to (<ref>), as illustrated numerically below.
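To make these selection rules concrete, the short numerical sketch below (which is not part of the fitting code of this work) builds random orthonormal coefficient sets {w_i𝐤^α} and {w_i𝐤^β} satisfying the constraints of Eqs. (<ref>)-(<ref>) and evaluates the tunnel matrix elements of Eqs. (<ref>)-(<ref>). The values of N and Ω_T are arbitrary illustrative choices.

```python
import numpy as np

N = 64                                   # number of occupied k-states (illustrative)
Omega_T = 4.2                            # tunnel coupling strength in meV (illustrative)
rng = np.random.default_rng(0)
u = np.ones(N) / np.sqrt(N)              # uniform coefficients of a bright state, w_1k = 1/sqrt(N)

def bright_dark_basis():
    """Orthonormal basis whose first row is the bright state; orthogonality to the
    uniform vector automatically enforces sum_k w_ik = 0 for the dark rows."""
    M = np.column_stack([u, rng.standard_normal((N, N - 1))])
    Q, _ = np.linalg.qr(M)               # Gram-Schmidt: the first column stays collinear with u
    return Q.T                           # rows are the coefficient vectors w_{i,k}

W_alpha = bright_dark_basis()            # basis of the alpha transition
W_beta = bright_dark_basis()             # independent dark-state choice for the beta transition

# Tunnel matrix elements <B_i^alpha| T |B_j^beta> = Omega_T * sum_k w_ik^alpha* w_jk^beta
T = Omega_T * W_alpha.conj() @ W_beta.T

print(abs(T[0, 0]))                      # bright-bright element: equals Omega_T
print(np.max(np.abs(T[0, 1:])))          # bright-dark elements: vanish (selection rule)
print(np.max(np.abs(T[1:, 0])))          # dark-bright elements: vanish (selection rule)
print(np.max(np.abs(T[1:, 1:])))         # dark-dark elements: generally non-zero
```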
In the following section, we will assess the need to actually incorporate the dark states from both the α and β-transitions to replicate the experimental photocurrent measurements from a QCD operating in the strong light-matter coupling regime.
The implications of this section for an electronic injection process into polaritonic states will be discussed in section <ref>.
§.§ Introducing dissipation and decoherence in the model
In the following, we develop an effective density matrix model of the photocurrent extraction.
We make a drastic choice in the description of the system: we limit the extraction model to the transport induced by the bright states |B_1^α⟩ and | B_1^β⟩.
The dark states from both the α and β-transitions are omitted. Both subbands 1 and 2 will thus be described only using bosonic operators.
This is equivalent to choosing scenario (1) among the two described at the end of the previous section:
direct tunnelling into the extractor bright state |B_1^β⟩ (preserving the coherent nature of the excitation), and subsequent decay, with loss of coherence, into an extractor dark state |B_i≠1^β⟩.
This choice was already implicit in the approach that we have employed in our previous work based on a classical description of the electronic transport, using coupled mode theory <cit.>.
We now go beyond this classical model using a quantum master equation. The key addition is the introduction of decoherence in the system, which is distinct from dissipation.
In terms of spectral effects, decoherence impacts the broadening of the photocurrent peaks, while dissipation primarily affects their amplitude.
In the experimental study we will report in Sec. <ref>, bias will be varied, and - as a result - the amplitude of the peaks will be affected more than their broadening. It will be essential to differentiate between the effects of decoherence and dissipation, a distinction that was previously impossible to achieve with the classical model.
We define the operator b_β using our new basis from equations (<ref>) and (<ref>):
b^†_β = 1/√(N)∑_𝐤 c_2 𝐤^† c_0𝐤
b^†_β |F⟩ = |B_1^β⟩
Using the fermionic commutation rules and a weak excitation regime, we have:
[b_β,b_β^†] = (N̂_0 - N̂_2)/N ≈ ℐ̂_d
where N̂_i is the population operator of subband i and ℐ̂_d the identity operator.
b_β can thus be approximated as a bosonic operator: b_β and b_β^† describe the destruction and creation of electronic excitations inside the extraction mode, of eigen-frequency ω_β = ω_2 - ω_0. The related Hamiltonian is:
ℋ̂_β = ω_β b_β^† b_β
We restrict the tunnel interaction to the interaction between the plasmonic bright mode and this new extraction mode. This drastically simplifies the tunnel interaction Hamiltonian described in Eq. (<ref>).
The restricted Hamiltonian T̂_bright is:
T̂_bright = Ω_T (b_α^† b_β + b_α b_β^†)
The TM_01 electromagnetic mode confined in the patch antennas will be modeled as a standard optical resonator of frequency ω_c, using a_c and a_c^† bosonic destruction and creation operators. Using the rotating wave approximation to describe the light-matter interaction, the time dependent Hamiltonian ℋ(t) of the whole system reads:
ℋ̂(t) = ω_c a_c^† a_c + ω̃_α b_α^† b_α + ω_β b_β^† b_β
+ Ω( a_c^† b_α + a_c b_α^†)
+ Ω_T ( b_α^† b_β + b_α b^†_β)
+ κ_c s_+ ( a_c^† e^-iω t + a_c e^iω t)
where s_+ is the amplitude of the incoming light excitation, ω its frequency, and κ_c is the coupling constant between this external field and the confined optical mode inside the cavity.
We map this system on an equivalent open quantum system described by the reduced density matrix ρ. Under standard Born-Markov approximations, the time evolution of the density matrix ρ obeys the following quantum master equation <cit.> (ħ=1 for clarity):
dρ(t)/dt = - i [ℋ(t),ρ]
+ γ_αℒ[b_α, ρ]
+ γ_βℒ[b_β, ρ]
+ (γ_c+Γ_c) ℒ[a_c, ρ]
+ γ_α^intraℒ[b_α^†b_α, ρ]
+ γ_β^intraℒ[b_β^†b_β, ρ]
where the ℒ are Lindblad super-operators modeling the dissipative and decoherent interactions of the environment with the system. For any operator Â, a super-operator ℒ reads:
ℒ[Â,ρ] = 2 ÂρÂ^† - (Â^†Âρ + ρÂ^†Â)
The plasmonic ISB excitations are mainly dissipated through their interaction with interface roughness, at a non-radiative rate γ_α. Similarly, the extractor dissipates electrons into the next period at a non-radiative rate γ_β, and is responsible for the generation of electrical current inside the structure. γ_β represents an effective dissipation rate that takes into consideration the remaining electronic cascade. The cavity dissipates photons non-radiatively (mainly through undesired free-carrier absorption) at a rate γ_c, and radiatively through a spontaneous emission channel at a rate Γ_c. Note that the radiative coupling κ_c is related to the radiative damping through κ_c = √(2 Γ_c) <cit.>.
The main difference with our previous work <cit.> lies in the ability to explicitly introduce the intra-subband scattering through the pure decoherence terms γ_α^intraℒ[b_α^†b_α, ρ] (resp. γ_β^intraℒ[b_β^†b_β, ρ]) <cit.>. These terms model pure decoherence damping without excitation dissipation (the intra-subband scattering thermalizes excitations inside a subband without dissipating them into another subband). By using the density matrix formalism, it thus becomes possible to differentiate between the effects of inter-subband (dissipation) and intra-subband (pure decoherence) processes on the evolution of the system (and ultimately on the shape of the calculated photoresponse spectra). More details on the necessity to distinguish intra and intersubband scatterings can be found in Appendix <ref>.
§.§ Deriving observable quantities for comparison with experiments
Equation <ref> can be solved numerically in steady state. The solution is a stationary reduced density matrix ρ_S, and any observable Ô can then be computed using:
⟨Ô⟩ = Tr(Ôρ_s)
where Tr represents the trace function. We can then compute the different interesting quantities of the system. The system total absorption is the sum of the power dissipated into the different decay channels, normalized by the incoming power |s_+|^2:
𝒜_tot = 𝒜_c + 𝒜_α + 𝒜_β
= 2 γ_c ⟨a_c^† a_c⟩/|s_+|^2 + 2 γ_α⟨b_α^† b_α⟩/|s_+|^2 + 2 γ_β⟨b_β^† b_β⟩/|s_+|^2
where 𝒜_c, 𝒜_α and 𝒜_β represent respectively the cavity, ISB and extraction absorptions.
The net photocurrent 𝒥_β is defined as the current under illumination. 𝒥_β is proportional to the power dissipated from a period to the next adjacent period. This is exactly the power dissipated by the extraction mode β:
𝒥_β = 2 γ_β⟨b_β^† b_β⟩
Note that this is a phenomenological interpretation of the photocurrent. It is in fact expected that an excitation inside the bright extractor state |B_1^β⟩ should first decay into the dark states |B_i≠ 1^β⟩ before being extracted into the electronic cascade and contributing to the photocurrent.
We choose to neglect these dark extractor states, such that the power is directly dissipated from the bright extractor state. This also applies to the ISB dissipation, where the |B_i≠ 1^α⟩ dark states are neglected when considering the non-radiative dissipation γ_α.
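For concreteness, the following is a minimal sketch of how Eq. (<ref>) can be solved in steady state and the photocurrent of Eq. (<ref>) evaluated with the QuTiP library used in this work. The drive is written in the frame rotating at the excitation frequency ω, so that the RWA Hamiltonian becomes time independent; the √(2γ) prefactors account for the factor of two between the Lindblad convention of Eq. (<ref>) and QuTiP's dissipator. All parameter values below are illustrative placeholders, not the fitted values of Table <ref>.

```python
import numpy as np
import qutip as qt

n_max = 4  # Fock-space truncation per bosonic mode (weak excitation regime)
a_c = qt.tensor(qt.destroy(n_max), qt.qeye(n_max), qt.qeye(n_max))   # cavity mode
b_a = qt.tensor(qt.qeye(n_max), qt.destroy(n_max), qt.qeye(n_max))   # ISB bright mode (alpha)
b_b = qt.tensor(qt.qeye(n_max), qt.qeye(n_max), qt.destroy(n_max))   # extractor bright mode (beta)

# Illustrative parameters in meV (hbar = 1)
w_c, w_a, w_b = 130.0, 128.0, 124.0     # cavity, plasma-shifted ISB, extractor energies
Om, Om_T      = 10.0, 4.2               # light-matter and tunnel couplings
g_c, G_c      = 3.4, 0.6                # cavity non-radiative and radiative rates
g_a, g_b      = 0.66, 0.5               # ISB and extractor dissipation rates
g_a_i, g_b_i  = 3.0, 3.0                # intra-subband (pure decoherence) rates
s_plus        = 1.0                     # incident field amplitude
kappa_c       = np.sqrt(2.0 * G_c)      # radiative coupling, kappa_c = sqrt(2 Gamma_c)

def photocurrent(w):
    """Steady-state photocurrent J_beta = 2 gamma_beta <b_beta^dag b_beta> at drive frequency w."""
    H = ((w_c - w) * a_c.dag() * a_c        # rotating frame at the drive frequency
         + (w_a - w) * b_a.dag() * b_a
         + (w_b - w) * b_b.dag() * b_b
         + Om * (a_c.dag() * b_a + a_c * b_a.dag())
         + Om_T * (b_a.dag() * b_b + b_a * b_b.dag())
         + kappa_c * s_plus * (a_c + a_c.dag()))
    c_ops = [np.sqrt(2 * g_a) * b_a,                 # ISB non-radiative dissipation
             np.sqrt(2 * g_b) * b_b,                 # extraction into the electronic cascade
             np.sqrt(2 * (g_c + G_c)) * a_c,         # cavity losses (non-radiative + radiative)
             np.sqrt(2 * g_a_i) * b_a.dag() * b_a,   # intra-subband pure decoherence (alpha)
             np.sqrt(2 * g_b_i) * b_b.dag() * b_b]   # intra-subband pure decoherence (beta)
    rho_ss = qt.steadystate(H, c_ops)
    return 2 * g_b * qt.expect(b_b.dag() * b_b, rho_ss)

freqs = np.linspace(100.0, 160.0, 121)
spectrum = [photocurrent(w) for w in freqs]   # two polaritonic peaks appear when 2*Om exceeds the linewidths
```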
§ EXPERIMENTAL VALIDATION IN PHOTO-DETECTION: THE POLARITON-TO-CURRENT PROCESS
§.§ Experimental details
The samples investigated in this study are the same as those already studied in Ref. <cit.>.
They are processed into 8 × 8 patch antenna arrays (approximately 50 × 50 µm^2), with the patches connected through 250-nm thin metallic wires (see Fig. <ref> in Appendix <ref>). Details of the processing can be found in <cit.>.
The samples are cooled down to T = 78K in a cryostat, and they are illuminated by light from a globar source at normal incidence.
The photocurrent spectra are acquired in rapid scan mode, after amplification using a low-noise transimpedance amplifier.
We extend the data presented in <cit.>, and now present measurements with voltage bias applied to the samples.
The applied electric field ranges from F=-25kV.cm^-1 to F=8kV.cm^-1.
We have fabricated several array designs (p, s), with p the inter-patch period of the array and s the lateral dimension of the patches.
However, to allow for a quantitative comparison, we present measurements under an applied electric field for two samples only, with the same p=7 µm and with s= 1.5 µm and s=1.55 µm, respectively, as reported in Fig. <ref> (continuous lines).
Additional measurements can be found in Appendix <ref>. While the relative amplitude of the spectra when varying the bias contains meaningful information on the electronic transport, one should exercise caution when comparing the amplitudes of different pairs (p, s), as the experimental protocol does not ensure a consistent illumination between each measurement of the device.
Two photocurrent peaks are clearly visible in Fig. <ref>, a signature of the strong light-matter coupling regime. Note that the peaks under consideration cannot be confused with the two peaks arising from coupled subbands (tunnel coupling), since the peak positions would change with the applied bias in the latter case. Here, the energy splitting (for a given pair p, s) is constant regardless of the applied field.
For all (p, s) couples studied, the global amplitude of the photocurrent spectra evolves with the applied electric field F.
A maximum amplitude is observed around F=-10 kV.cm^-1. The noise level increases strongly when the magnitude of the field |F| increases. The noise is the direct consequence of the increase of the parasitic dark current with the electric field and, as is well known <cit.>, it limits the range of exploitable fields F for device applications.
The relative amplitude of these peaks inverts with respect to the applied field F, with the equal-amplitude condition of the two polaritonic photo-detection peaks found for a negative field F ≈ -5 kV.cm^-1. Below this threshold, the low energy peak dominates. Conversely, for F > -5kV.cm^-1, it is the high energy peak that dominates. This phenomenon can be attributed to the realignment of the subbands under the influence of the applied bias. When a highly negative voltage is applied, the subbands follow a clear staircase structure
(see Fig. <ref> in Appendix <ref> for the QCD bandstructure), which facilitates the extraction process. Conversely, at positive voltages, the subband cascade becomes less organized, hindering the extraction process.
§.§ System parameters and constraints
Before applying the theoretical developments of section <ref> to the experimental data, let us detail the system parameters and the constraints applied to them.
The photonic degrees of freedom are the cavity parameters ω_c, γ_c and Γ_c, which are independent of the applied electric field F.
They only depend on the geometrical parameters (p, s) of the cavities <cit.>:
ω_c(s) = π c_0/n_eff s
Γ_c(p) = α_c/p^2
where c_0 is the light velocity, n_eff is the effective index of the cavity, which represents the effective medium composed of the semiconductor contacts and of the undoped periodic structure embedded between the gold layers forming the cavity, and α_c is the cavity dispersion loss factor.
We choose to constrain n_eff, α_c and γ_c to the values obtained from our prior investigation of the same samples <cit.>, where the photocurrent of several samples with different (s,p) couples was studied for F=0 kV.cm^-1:
n_eff = 3.22
α_c = 29.1 meV.µ m^2
γ_c = 3.4 meV
The cavity parameters are thus excluded from the fitting process.
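As a quick numerical illustration of these constrained cavity relations (a minimal sketch, not part of the fitting procedure), the resonance energy ω_c(s) and the radiative rate Γ_c(p) can be evaluated for the geometries studied here:

```python
import numpy as np

hbar_c = 197.327             # hbar * c0 in meV * micrometer
n_eff, alpha_c = 3.22, 29.1  # constrained values (dimensionless, meV.um^2)

def cavity_energy(s_um):
    """Patch resonance hbar*omega_c = pi * hbar * c0 / (n_eff * s)."""
    return np.pi * hbar_c / (n_eff * s_um)

def radiative_rate(p_um):
    """Radiative damping Gamma_c = alpha_c / p^2."""
    return alpha_c / p_um**2

print(cavity_energy(1.5))    # ~128 meV for s = 1.5 um
print(radiative_rate(7.0))   # ~0.59 meV for p = 7 um
```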
Several electronic degrees of freedom can also be fixed or constrained independently of our density matrix model.
The parameters of the ISB transition in the active QW (α) are assumed independent of the applied electric field F: the transition is vertical in a single quantum well and is therefore very marginally affected by the applied bias. The ISB frequency ω_α and the plasma frequency ω_P could be computed from our sequential transport software <cit.>. However, it is common to observe disparities between expected and measured doping levels (up to 15%). Experimental discrepancies also affect the ISB frequency (up to 5%), usually caused by the quality of the quantum well interfaces during the epitaxial process. To account for these disparities, and since both ω_α and ω_P are crucial parameters to reproduce the strong coupling measurements, we choose to leave these parameters free during the fitting process:
ω̃_α = √(ω_α^2 + ω_P^2)
Note: the light-matter coupling constant Ω is parametrized using ω_P:
Ω= ω_P/2√(f_w)
with f_w (≈ 0.17), the computed overlap factor between the cavity field and the doped active quantum wells.
Two additional α parameters can be computed using our sequential transport software: the non-radiative dissipation rate γ_α of the α plasmon from the excited subband to the fundamental subband, and the tunnel coupling Ω_T.
We compute γ_α = 0.66 meV and Ω_T = 4.2 meV, respectively. The new parameter of our transport model in the strong coupling regime, the intra-subband rate γ_α^intra, will instead be fitted.
The parameters related to the extractor β are instead dependent on the electric field F: the extractor energy shifts with respect to the upper excited state of the ISB transition when a bias is applied to the structure.
The misalignment is approximated as linear:
ω_β (F) = α_F F + ω_β^0
where α_F is the linear coefficient and ω_β^0 is the extractor energy for F=0. This dispersion can be computed using our sequential transport software and is injected into the model:
α_F = 1.12 meV/(kV.cm^-1)
ω_β^0 = 124 meV
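For illustration, at F = -10 kV.cm^-1 this linear approximation gives ω_β ≈ 124 meV - 11.2 meV ≈ 112.8 meV.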
Similarly to γ_α^intra, γ_β^intra will be a fitting parameter common to the whole data set.
Finally, we expect the misalignment of the cascade with the electric field to modify the value of the effective extraction rate γ_β(F).
γ_β is one of the most important parameters of the fitting process, as it controls the relative amplitude of the spectra. Although we suspect that it might closely match the actual extraction rate calculated from our sequential transport model, we decided to keep it as a free parameter: for each measured electric field value F_i, we fit one extraction rate γ_β(F_i). Note that γ_β(F_i) is independent of the geometrical parameters p and s. In summary, ω_α, ω_P, γ_α^intra and γ_β^intra are fitting parameters common to the whole data set, and their initial values for the fit are based on the ones derived by our software.
§.§ Discussion on the validity of the fit
In this section, we perform a global fit on the whole experimental photocurrent dataset (Fig. <ref>), using the parameter constraints described in the previous section. We solve Eq.(<ref>) in the stationary regime (using the QuTiP python library <cit.>) to evaluate the theoretical photocurrent J_β, as per Eq.(<ref>).
The parameters resulting from the fit are presented in Table <ref>.
The returned values are consistent with the previous fits performed with the coupled mode theory in <cit.>.
In particular, the extraction rate γ_β as a function of the applied electric field is plotted in Fig. <ref> and compared with the values computed through our sequential transport model.
The right order of magnitude is obtained (γ_β < 1 meV) and the evolution trends are relatively well reproduced (γ_β decreasing for F>0, slope break around F = -4 kV.cm^-1).
These results on γ_β are also consistent with the evolution of the integrated amplitude of the spectra (Fig. <ref>, right-side scale): when the electric field is below F = -4 kV.cm^-1, the electronic cascade is efficiently aligned, and the effective extraction rate γ_β is high. This leads to a significant photocurrent signal.
The spectrally resolved photocurrent calculated using the parameters returned by the global fit procedure is compared to the experimental data in Fig. <ref>, with a quantitative agreement obtained on the set of triplets (p, s, F).
Two important trends are reproduced as a function of the bias, i.e. as a function of the ω_α- ω_β alignment: (i) the overall amplitude of the spectra, and (ii) the relative amplitude inversion between the peaks of the two polaritonic branches.
This study quantitatively confirms that the extractor (the electronic cascade of the QCD) and its relative alignment with respect to the ISB transition controls the overall amplitude of the spectra, and also the relative amplitude of the peaks of the polaritonic branches.
Applying an electric field to the structure enables the selective extraction of excitations from a polaritonic state towards the electronic cascade, while also providing control over the efficiency of this extraction.
This selective extraction capacity is enabled by the sharp transfer function and the 2Ω spacing (the Rabi splitting) between the polaritonic peaks: a finer transfer function and a stronger coupling would allow for better selectivity of ω_± polaritons. More details on a QCD transfer function in the strong coupling regime can be found in Appendix <ref>.
The good agreement between the experimental data and the theoretical model provides strong evidence that the dark states for both transitions α and β do not need to be included in the model to depict an extraction process.
The bright tunnel interaction T̂_bright and the phenomenological dissipation rate γ_β from the extractor bright state are sufficient to quantitatively reproduce the experimental measurements.
As previously postulated in <cit.>, this result confirms that the polaritonic nature of the excitation is carried over during the extraction process through the coherent tunnel coupling.
The extraction is a coherent process, mainly involving the bright states from both α and β transitions.
This model nevertheless permits a step forward in the comprehension of the polariton-to-electron process. Chronologically, the early attempts were limited to the observation of a polariton splitting in photo-detection <cit.>. A phenomenological transfer function was then introduced in the study of QWIPs operating in strong coupling <cit.>. Recently, the Coupled Mode Theory (CMT) permitted a more rigorous modeling of the transfer function, and gave an initial indication of direct tunneling into the extractor bright state, with no role for the polaritonic dark states <cit.>.
The model presented in this paper gets rid of the transfer function - a phenomenological concept - and replaces it with a rigorous tunnel coupling Hamiltonian between the α and β transitions, with a complete description of bright and dark states.
The latter do not play a major role for the polariton extraction process, but they have a crucial role for polariton injection. Our model integrates them, and might constitute a valid vantage point to study electrically injected polariton emitters. More information on the transfer function and the difference between the CMT and the effective density matrix approach can be found in Appendix <ref>.
§ IMPLICATIONS OF THE MODEL FOR ELECTRICALLY PUMPED POLARITON EMITTERS: THE ELECTRON-TO-POLARITON PROCESS
The validity of the density matrix approach to describe electrical extraction from optically excited polaritons, motivates to study the implications of these findings on the electrical injection and subsequent photon emission, represented by the red arrows in Fig. <ref>.
As discussed in Ref. <cit.>, the main difficulty in describing an intersubband emitter operating in the strong light-matter coupling regime lies in the simultaneous description of both optical (bosonic) and electronic (fermionic) excitations.
The injection process fills subband 2 with fermionic excitations in the form of electrons, while the plasmonic excitations that occupy the α bright state are bosonic.
Working with the full fermionic Hamiltonian is an arduous task <cit.>, which could hinder the development of an intuitive understanding of the transport, although very recently a fermionic approach was successfully used to model QCDs operating in the strong coupling regime <cit.>.
The previous section <ref> suggests that the bosonization procedure of the extractor, which we employed to describe the extraction process, is a novel and readily interpretable approach for examining the injection process.
In particular the selection rules for the tunnel Hamiltonian,
eqs. (<ref>)-(<ref>) might prove a powerful tool.
Due to the impossibility of conducting an experimental study resembling the one carried out for QCDs for a detection process, the following discussion will be supported by the quantitative arguments previously presented in section <ref>.
Note: the β extractor states are now referred to as injector states.
An injection process is inherently incoherent because it introduces electrical excitations into an intersubband system through an incoherent external bath of electrons.
The relevant coherence here is that of the
ISB plasmon<cit.>, which is a collective, and coherent, matter excitation originating from the electronic plasma inside a semiconductor quantum well (QW).
In this respect, an intuitive picture suggests that for an ISB polariton system, the electrical injection process is not the reverse of the electrical extraction. In the latter, coherence (induced by light) is destroyed to generate an electrical current, while in the former it appears that coherence must be created.
More formally, in the framework of a bosonized injector, we expect most of the electronic population to be located in the dark states |B_i^β⟩ (i≠ 1) upon electrical injection.
Furthermore, to emit light, excitations must be transferred to the plasmonic bright state |B_1^α⟩, which holds the entire oscillator strength of the system.
However, the selection rules (<ref>) and (<ref>) are clear: it is impossible for a dark state from the injector to interact with the plasmonic bright state through a tunnel interaction.
In other words, the primary injection pathway, which would involve direct transfer from the injector states to the bright plasmonic state, cannot be taken.
The bosonized injector formalism confirms that polaritonic emitters do not operate as reversed polaritonic detectors.
In QCDs, the coherence is established through the photonic mode and maintained up to the extractor using both light-matter coupling Ω and tunnel coupling Ω_T.
Coherence can also be lost through the irreversible intrasubband scatterings γ_α^intra in the plasmonic mode, although we have demonstrated that it is not the main extraction scheme. However, the extraction process can still take place, since the usual dark-to-dark tunnel interactions are possible (Eq. (<ref>)).
On the contrary, in a LED the injection mechanism is incoherent, and coherence cannot emerge spontaneously during the transport. Additionally, we showed that incoherent (dark) states cannot interact with a coherent (bright) state via the tunneling Hamiltonian (Eqs. (<ref>) and (<ref>)).
As a result, it seems unfeasible to efficiently transfer excitations to the optically active bright state α, and thus to the polaritonic states, in the absence of an additional mechanism to generate coherence.
If the electrical injection were uniform among the N states |B_i^β⟩, light could be emitted, since the system would start with some excitation in |B_1^β⟩, but the expected efficiency would be at most 1/N, without even considering intrasubband decoherence.
There are however two points that need to be discussed further.
First, light emission from another kind of polariton states under electrical injection is well documented, namely in exciton-polariton devices <cit.>, with additional reports of polariton lasing under electrical injection <cit.>.
The key difference is that exciton-polaritons states do not result from a collective matter excitation, but rather from an ensemble of single-particle transitions.
As a consequence, non-resonant pumping schemes can apply to exciton polaritons, as demonstrated in optical experiments.
Second, several reports of electroluminescence from electrically-injected polariton LEDs exist in the literature.
Some of them clearly determine that thermally assisted emission processes have a major role <cit.>, but in many others simple thermal models cannot explain the data <cit.>. We can only conjecture possible ways forward to elucidate the electrical injection of polaritonic LEDs.
On one hand, one might wonder if the application of the generalized, local Kirchhoff <cit.> law to ISB polariton LEDs can shine new light on the electrical injection process, and possibly explain all the existing experimental data in the literature.
On the other, the problem of electrical excitation of coherent electronic motion - which is essentially the mechanism at play in electrically pumped polariton emitters - is well known from the field of surface plasmon polaritons (SPPs) <cit.>.
The extremely low efficiency of the electron-to-plasmon and electron-to-photon processes is well known, although recent theoretical works, supported by one experimental finding, have demonstrated that the efficiency could be drastically increased by tailoring the electronic landscape to favor inelastic over elastic tunneling, as long as the electronic coherence is preserved in the process <cit.>.
We thank S. De Liberato, J-M Manceau, I. Carusotto, A. Bousseksou for helpful discussions.
We acknowledge financial support from the European Union Future and Emerging Technologies (FET) Grant No. 737017 (MIR-BOSE), and by the French National Research Agency: project SOLID (No. ANR-19-CE24-0003), HISPANID (ANR-17-ASTR-0008-01), and EVEREST (ANR-21-CE24-0021).
§ QUANTUM MASTER EQUATION MODEL FOR A QCD OPERATING IN THE STRONG LIGHT-MATTER COUPLING REGIME: PARAMETRIC STUDY OF THE IMPACT OF THE LIGHT-MATTER COUPLING STRENGTH ON THE TRANSFER FUNCTION
The transfer function between the photocurrent and the total power dissipated inside the QCD (𝒜_QCD = 𝒜_α + 𝒜_β) is defined as 𝒯:
𝒯(ω) = 𝒜_β/𝒜_α + 𝒜_β
𝒯 is dependent on the light frequency ω.
§.§ Parametric study
Fig. <ref> plots the different quantities 𝒜_tot (𝒜_tot= 𝒜_QCD + 𝒜_c), 𝒜_QCD, 𝒥_β and 𝒯 computed from the solution of equation (<ref>). We impose a realistic ratio between the inter- and intra-subband dynamics within the QCD, such that 90% of the total broadening is due to intrasubband scattering:
γ_α^intra + γ_β^intra = 0.9 ·γ_αβ
where γ_αβ= γ_α^intra + γ_β^intra + γ_α + γ_β represents the total contribution to the broadening from the α and β transitions, including intersubband and intrasubband scatterings.
This assumption is equivalent to setting T_1 ≈ 10· T_2, where T_2 (T_1) is the dephasing (upper-state) lifetime. For a typical mid-IR ISB transition this is verified, as T_1 is of the order of a picosecond and T_2 of the order of a few hundred femtoseconds.
The cavity resonance ω_c and the extractor resonance are also deliberately mismatched with the ISB transition:
ω_c = 1.05 ω_α, ω_β = 0.95 ω_α
𝒜_tot, 𝒜_QCD, 𝒥_β and 𝒯 are computed for different light-matter coupling amplitudes Ω, up to 10% of the ISB transition ω_α.
When the light-matter coupling ratio Ω/ω_α increases, the system progressively moves from a weak coupling regime to a strong coupling regime: around the spectral resolution criterion 2Ω > γ_αβ, we compute the characteristic splitting of the polaritonic peaks, for each spectrum 𝒜_tot (A), 𝒜_QCD (B) and 𝒥_T (C). The model is able to reproduce the smaller splitting of the QCD absorption (B) compared to the splitting of the total absorption (A) for the same coupling situation Ω/ω_α, something previously observed in <cit.>. The important novelties brought by the model are found in the transfer function 𝒯. In weak coupling (small ratios Ω/ω_α), the transfer function is almost scalar: it coincides with the transfer function computed in the framework of a QCD that is not inside a cavity. As the ratio Ω/ω_α increases, the baseline of the transfer function gradually falls, and the amplitude of its peak increases: increasing Ω enables the transfer function to reach a Lorentzian shape.
Therefore, in a model where the intra-subband dynamics is explicitly described, the progressive increase of the light-matter coupling allows us to move continuously from a sequential transport in QCDs (flat, quasi-scalar transfer function 𝒯(ω)) to a delocalized description of the transport (sharp, Lorentzian transfer function). Again, when the strong light-matter coupling Ω is sufficiently intense, the coherent nature of the transport is maintained during the extraction process.
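The following is a minimal sketch of this parametric study (with illustrative rates, using the same kind of rotating-frame steady-state computation sketched earlier, and not part of the fitting code): it evaluates the transfer function 𝒯(ω) of Eq. (<ref>) for increasing coupling ratios Ω/ω_α and shows the qualitative evolution described above, namely a dropping baseline Min{𝒯} and a growing peak Max{𝒯}.

```python
import numpy as np
import qutip as qt

n_max = 4
a  = qt.tensor(qt.destroy(n_max), qt.qeye(n_max), qt.qeye(n_max))   # cavity
ba = qt.tensor(qt.qeye(n_max), qt.destroy(n_max), qt.qeye(n_max))   # ISB plasmon (alpha)
bb = qt.tensor(qt.qeye(n_max), qt.qeye(n_max), qt.destroy(n_max))   # extractor (beta)

w_a = 128.0                          # ISB transition energy in meV (illustrative)
w_c, w_b = 1.05 * w_a, 0.95 * w_a    # detuned cavity and extractor, as in the text
g_a, g_b = 0.66, 0.5                 # intersubband dissipation rates (illustrative)
g_c, G_c = 3.4, 0.6                  # cavity non-radiative and radiative rates (illustrative)
Om_T, s_plus = 4.2, 1.0
kappa = np.sqrt(2.0 * G_c)

# 90% of the QCD broadening is intrasubband: g_intra = 9 * g_inter, split between alpha and beta
g_a_i = g_b_i = 0.5 * 9.0 * (g_a + g_b)

def absorptions(Om, w):
    """QCD absorptions A_alpha and A_beta at drive frequency w for a coupling Om."""
    H = ((w_c - w) * a.dag()*a + (w_a - w) * ba.dag()*ba + (w_b - w) * bb.dag()*bb
         + Om * (a.dag()*ba + a*ba.dag()) + Om_T * (ba.dag()*bb + ba*bb.dag())
         + kappa * s_plus * (a + a.dag()))
    c_ops = [np.sqrt(2*g_a)*ba, np.sqrt(2*g_b)*bb, np.sqrt(2*(g_c + G_c))*a,
             np.sqrt(2*g_a_i)*ba.dag()*ba, np.sqrt(2*g_b_i)*bb.dag()*bb]
    rho = qt.steadystate(H, c_ops)
    A_a = 2 * g_a * qt.expect(ba.dag()*ba, rho) / s_plus**2
    A_b = 2 * g_b * qt.expect(bb.dag()*bb, rho) / s_plus**2
    return A_a, A_b

ws = np.linspace(0.85 * w_a, 1.15 * w_a, 121)
for Om in (0.01 * w_a, 0.05 * w_a, 0.10 * w_a):       # weak to strong coupling
    A_a, A_b = np.array([absorptions(Om, w) for w in ws]).T
    T = A_b / (A_a + A_b)                             # transfer function T(omega)
    print(f"Omega/omega_alpha = {Om/w_a:.2f}: min T = {T.min():.3f}, max T = {T.max():.3f}")
```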
The previous discussion explains the satisfactory description of the experimental photocurrent data produced by the semi-classical CMT in our previous work <cit.>, despite the impossibility in this previous model of describing the intrasubband dynamics. By default, the CMT predicts a sharp Lorentzian transfer function 𝒯. While this description is not suited for a weak coupling scenario, where the sequential transport should be described with a scalar transfer function, Fig. <ref>-[D] illustrates that it is on the other hand quite adapted to a strong coupling scenario and a delocalized transport scheme. However, being a semi-classical model, the CMT also lacked the ability to distinguish between the inter- and intrasubband dynamics, which prevents disentangling the broadening of the spectra from their amplitude.
§.§ Tunneling current
Another quantity of interest is the tunneling current 𝒥_T between the plasmonic mode α and the electronic extraction mode β. It is defined as:
𝒥_T = i Ω_T (⟨ b_α^† b_β⟩ - ⟨ b_α b_β^†⟩)
Using Eq. (<ref>) in the low excitation regime, and developing the expressions of the coherences, 𝒥_T can be approximated as:
𝒥_T = 2 Ω_T^2 γ_αβ/[( ω̃_α - ω_β )^2 + ( γ_αβ)^2] ( ⟨b_α^†b_α⟩ - ⟨b_β^†b_β⟩ )
+ ℜ[ 2i Ω_T Ω/[(ω̃_α - ω_β )^2 + ( γ_αβ)^2] ⟨a_c b_β^†⟩ ]
where γ_αβ= γ_α + γ_α^intra + γ_β + γ_β^intra is the sum of the different contributions to the damping of the coherences inside the QCD. The expression of 𝒥_T obtained in Eq.(<ref>) is decomposed into two contributions. The first term is the standard sequential tunnel current <cit.> (in its first order expression), which is broadly used for the electronic transport in QCDs operating in the weak coupling regime <cit.>. It is a semi-classical expression of the current, in the sense that it directly involves the population difference between the modes involved in the tunneling process, ⟨b_α^†b_α⟩ - ⟨b_β^†b_β⟩. The second term is a new addition to the tunnel current. It involves the coherences ⟨a_c b_β^†⟩ between the cavity and the extractor modes, which could be qualified as long-range coherences (the two modes are only coupled through their mutual coupling to the ISB mode). It thus expresses the system's capacity to transport current between modes that are not directly coupled. We will refer to this current as the delocalized current.
The amplitude of the delocalized current of Eq.(<ref>) is controlled by a Lorentzian function and involves the cross product of the couplings Ω_T (tunnel coupling) and Ω (light-matter coupling). In the case of a weakly coupled QCD, it is thus expected that the delocalized current is null. Note that it can be numerically checked that the current expression is independent of the interface where it is computed, thus 𝒥_T = 𝒥_β.
§.§ Validity domains of the different models
To explore the validity domains of the different models introduced here and in our previous work <cit.> (sequential model, CMT model, quantum master equation model), we define a criterion based on the spectral shape of the transfer function 𝒯(ω), the sharpness r:
r(Ω, γ_α^intra + γ_β^intra) = Max{𝒯(ω)}-Min{𝒯(ω)}/Max{𝒯(ω)}
r = 1 thus indicates that the transfer function 𝒯(ω) is a sharp Lorentzian function, r = 0 indicates that 𝒯(ω) is a flat scalar function. Fig. <ref> summarizes the results of the parametric exploration on both the total intrasubband scattering and the light-matter coupling strength. We differentiate three domains D1, D2 and D3:
* Domain D1: sequential transport model, flat scalar transfer function (𝒯(ω)≈ p_E). This domain is correctly described by the standard thermalized subband model for QCD. It corresponds for instance to QCDs operating in the weak-coupling regime
* Domain D3: delocalized transport model, sharp Lorentzian transfer function. This domain is correctly described by the CMT.
* Domain D2: intermediate domain, where transport combines contributions from different sources. D1, D2 and D3 are correctly described by the density matrix formalism of equation (<ref>), thanks to its capability to distinguish the intra- and inter-subband dynamics.
§ EXPERIMENTAL SYSTEM AND QCD BANDSTRUCTURE
In this section, we present additional information about the samples used in this work. Fig. <ref> presents a scanning electron microscope (SEM) image of a patch cavity array, and Fig. <ref> presents the bandstructure of the QCD embedded inside the patches. The bandstructure is computed using our sequential transport software <cit.>.
§ ADDITIONAL PHOTOCURRENT MEASUREMENTS AND COMPUTATIONAL RESULTS
In this section, we present additional photocurrent measurements and computational results to supplement the results of Fig. <ref>.
§ REFERENCES

[1] J. Kasprzak, M. Richard, S. Kundermann, A. Baas, P. Jeambrun, J. M. J. Keeling, F. M. Marchetti, M. H. Szymańska, R. André, J. L. Staehli, V. Savona, P. B. Littlewood, B. Deveaud, and L. S. Dang, Bose–Einstein condensation of exciton polaritons, Nature 443, 409–414 (2006).
[2] D. Bajoni, E. Semenova, A. Lemaître, S. Bouchoule, E. Wertz, P. Senellart, and J. Bloch, Polariton light-emitting diode in a GaAs-based microcavity, Phys. Rev. B 77, 113303 (2008).
[3] I. Carusotto and C. Ciuti, Quantum fluids of light, Reviews of Modern Physics 85, 299–366 (2013).
[4] E. Orgiu, J. George, J. A. Hutchison, E. Devaux, J. F. Dayen, B. Doudin, F. Stellacci, C. Genet, J. Schachenmayer, C. Genes, G. Pupillo, P. Samorì, and T. W. Ebbesen, Conductivity in organic semiconductors hybridized with the vacuum field, Nature Materials 14, 1123–1129 (2015).
[5] F. Appugliese, J. Enkner, G. L. Paravicini-Bagliani, M. Beck, C. Reichl, W. Wegscheider, G. Scalari, C. Ciuti, and J. Faist, Breakdown of topological protection by cavity vacuum fields in the integer quantum Hall effect, Science 375, 1030–1034 (2022).
[6] C. Ciuti, Cavity-mediated electron hopping in disordered quantum Hall systems, Physical Review B 104, 155307 (2021).
[7] D. Dini, R. Köhler, A. Tredicucci, G. Biasiol, and L. Sorba, Microcavity polariton splitting of intersubband transitions, Phys. Rev. Lett. 90, 116401 (2003).
[8] E. Dupont, H. C. Liu, A. J. SpringThorpe, W. Lai, and M. Extavour, Vacuum-field Rabi splitting in quantum-well infrared photodetectors, Phys. Rev. B 68, 245320 (2003).
[9] R. Colombelli, C. Ciuti, Y. Chassagneux, and C. Sirtori, Quantum cascade intersubband polariton light emitters, Semiconductor Science and Technology 20, 985–990 (2005).
[10] S. De Liberato and C. Ciuti, Quantum model of microcavity intersubband electroluminescent devices, Physical Review B 77, 155321 (2008).
[11] L. Sapienza, A. Vasanelli, R. Colombelli, C. Ciuti, Y. Chassagneux, C. Manquest, U. Gennser, and C. Sirtori, Electrically injected cavity polaritons, Physical Review Letters 100, 136806 (2008).
[12] P. Jouy, A. Vasanelli, Y. Todorov, L. Sapienza, R. Colombelli, U. Gennser, and C. Sirtori, Intersubband electroluminescent devices operating in the strong-coupling regime, Physical Review B 82, 045322 (2010).
[13] A. Delteil, A. Vasanelli, P. Jouy, D. Barate, J. C. Moreno, R. Teissier, A. N. Baranov, and C. Sirtori, Optical phonon scattering of cavity polaritons in an electroluminescent device, Physical Review B 83, 081404 (2011).
[14] D. Chastanet, J.-M. Manceau, T. Laurent, A. Bousseksou, G. Beaudoin, I. Sagnes, and R. Colombelli, Surface emitting thermally assisted polaritonic light-emitting device, Applied Physics Letters 110, 081108 (2017).
[15] M. Geiser, G. Scalari, F. Castellano, M. Beck, and J. Faist, Room temperature terahertz polariton emitter, Applied Physics Letters 101, 141118 (2012).
[16] P.-B. Vigneron, S. Pirotta, I. Carusotto, N.-L. Tran, G. Biasiol, J.-M. Manceau, A. Bousseksou, and R. Colombelli, Quantum well infrared photo-detectors operating in the strong light-matter coupling regime, Applied Physics Letters 114, 131104 (2019).
[17] M. Lagrée, M. Jeannin, G. Quinchard, O. Ouznali, A. Evirgen, V. Trinité, R. Colombelli, and A. Delga, Direct polariton-to-electron tunneling in quantum cascade detectors operating in the strong light-matter coupling regime, Phys. Rev. Applied 17, 044021 (2022).
[18] T. Ando, A. Fowler, and F. Stern, Electronic properties of two-dimensional systems, Reviews of Modern Physics 54, 437 (1982).
[19] M. Helm, Intersubband transitions in quantum wells: physics and device applications I, Academic Press, p. 1 (1999).
[20] A. Delteil, A. Vasanelli, Y. Todorov, C. Feuillet Palma, M. Renaudat St-Jean, G. Beaudoin, I. Sagnes, and C. Sirtori, Charge-induced coherence between intersubband plasmons in a quantum structure, Physical Review Letters 109, 246808 (2012).
[21] F. Pisani, D. Gacemi, A. Vasanelli, L. Li, A. G. Davies, E. Linfield, C. Sirtori, and Y. Todorov, Electronic transport driven by collective light-matter coupled states in a quantum device, Nature Communications 14, 3914 (2023).
[22] V. Trinité, E. Ouerghemmi, V. Guériaux, M. Carras, A. Nedelcu, E. Costard, and J. Nagle, Modelling of electronic transport in quantum well infrared photodetectors, Infrared Physics & Technology 54, 204 (2011).
[23] C. Koeniguer, G. Dubois, A. Gomez, and V. Berger, Electronic transport in quantum cascade structures at equilibrium, Physical Review B 74, 235325 (2006).
[24] A. Buffaz, A. Gomez, M. Carras, L. Doyennette, and V. Berger, Role of subband occupancy on electronic transport in quantum cascade detectors, Physical Review B 81, 075304 (2010).
[25] Y. Todorov and C. Sirtori, Intersubband polaritons in the electrical dipole gauge, Physical Review B 85, 045304 (2012).
[26] S. De Liberato and C. Ciuti, Quantum theory of electron tunneling into intersubband cavity polariton states, Physical Review B 79, 075317 (2009).
[27] H.-P. Breuer, F. Petruccione, et al., The Theory of Open Quantum Systems, Oxford University Press (2002).
[28] W. Suh, Z. Wang, and S. Fan, Temporal coupled-mode theory and the presence of non-orthogonal modes in lossless multimode cavities, IEEE Journal of Quantum Electronics 40, 1511 (2004).
[29] M. A. Schlosshauer, Decoherence: and the Quantum-to-Classical Transition, Springer Science & Business Media (2007).
[30] G. Quinchard, C. Mismer, M. Hakl, J. Pereira, Q. Lin, S. Lepillet, V. Trinité, A. Evirgen, E. Peytavit, J. Reverchon, et al., High speed, antenna-enhanced 10.3 μm quantum cascade detector, Applied Physics Letters 120, 091108 (2022).
[31] A. Delga, M. Carras, V. Trinité, V. Guériaux, L. Doyennette, A. Nedelcu, H. Schneider, and V. Berger, Master equation approach of classical noise in intersubband detectors, Physical Review B 85, 245414 (2012).
[32] M. Hakl, Q. Lin, S. Lepillet, M. Billet, J.-F. Lampin, S. Pirotta, R. Colombelli, W. Wan, J. C. Cao, H. Li, E. Peytavit, and S. Barbieri, Ultrafast quantum-well photodetectors operating at 10 μm with a flat frequency response up to 70 GHz at room temperature, ACS Photonics 8, 464 (2021).
[33] Y. Todorov, L. Tosetto, J. Teissier, A. M. Andrews, P. Klang, R. Colombelli, I. Sagnes, G. Strasser, and C. Sirtori, Optical properties of metal-dielectric-metal microcavities in the THz frequency range, Optics Express 18, 13886 (2010).
[34] C. A. Balanis, Antenna Theory: Analysis and Design, John Wiley & Sons (2016).
[35] D. Palaferri, Antenna resonators for quantum infrared detectors and fast heterodyne receivers, Sorbonne Paris Cité (2018).
[36] J. R. Johansson, P. D. Nation, and F. Nori, QuTiP: An open-source Python framework for the dynamics of open quantum systems, Computer Physics Communications 183, 1760 (2012).
[37] L. Sapienza, A. Vasanelli, C. Ciuti, C. Manquest, C. Sirtori, R. Colombelli, and U. Gennser, Photovoltaic probe of cavity polaritons in a quantum cascade structure, Applied Physics Letters 90, 201101 (2007).
[38] A. A. Khalifa, A. P. D. Love, D. N. Krizhanovskii, M. S. Skolnick, and J. S. Roberts, Electroluminescence emission from polariton states in GaAs-based semiconductor microcavities, Applied Physics Letters 92, 061107 (2008).
[39] S. I. Tsintzos, N. T. Pelekanos, G. Konstantinidis, Z. Hatzopoulos, and P. G. Savvidis, A GaAs polariton light-emitting diode operating near room temperature, Nature 453, 372–375 (2008).
[40] D. Bajoni, Polariton lasers. Hybrid light–matter lasers without inversion, J. Phys. D: Appl. Phys. 45, 313001 (2012).
[41] C. Schneider, A. Rahimi-Iman, N. Y. Kim, J. Fischer, I. G. Savenko, M. Amthor, M. Lermer, A. Wolf, L. Worschech, V. D. Kulakovskii, I. A. Shelykh, M. Kamp, S. Reitzenstein, A. Forchel, Y. Yamamoto, and S. Höfling, An electrically pumped polariton laser, Nature 497, 348–352 (2013).
[42] B. Askenazi, A. Vasanelli, Y. Todorov, E. Sakat, J.-J. Greffet, G. Beaudoin, I. Sagnes, and C. Sirtori, Midinfrared ultrastrong light–matter coupling for THz thermal emission, ACS Photonics 4, 2550 (2017).
[43] J.-J. Greffet, P. Bouchon, G. Brucoli, and F. Marquier, Light emission by nonequilibrium bodies: local Kirchhoff law, Phys. Rev. X 8, 021008 (2018).
[44] J. Lambe and S. L. McCarthy, Light emission from inelastic electron tunneling, Phys. Rev. Lett. 37, 923 (1976).
[45] L. C. Davis, Theory of surface-plasmon excitation in metal-insulator-metal tunnel junctions, Phys. Rev. B 16, 2482 (1977).
[46] P. Bharadwaj, A. Bouhelier, and L. Novotny, Electrical excitation of surface plasmons, Phys. Rev. Lett. 106, 226802 (2011).
[47] M. Parzefall, P. Bharadwaj, A. Jain, T. Taniguchi, K. Watanabe, and L. Novotny, Antenna-coupled photon emission from hexagonal boron nitride tunnel junctions, Nature Nanotechnology 10, 1058–1063 (2015).
[48] J. Kern, R. Kullock, J. Prangsma, M. Emmerling, M. Kamp, and B. Hecht, Electrically driven optical antennas, Nature Photonics 9, 582–586 (2015).
[49] W. Du, T. Wang, H.-S. Chu, and C. A. Nijhuis, Highly efficient on-chip direct electronic–plasmonic transducers, Nature Photonics 11, 623–627 (2017).
2017)NoStop
[Qian et al.(2018)Qian,
Hsu, Gurunatha, Riley,
Zhao, Lu, Tao, and Liu]qian_efficient_2018
author author H. Qian, author S.-W. Hsu,
author K. Gurunatha, author C. T. Riley, author
J. Zhao, author D. Lu, author A. R. Tao, and author Z. Liu, title title Efficient light generation
from enhanced inelastic electron tunnelling, https://doi.org/10.1038/s41566-018-0216-2 journal journal Nature Photonics volume 12, pages 485–488 (year 2018)NoStop
[Uskov et al.(2016)Uskov,
Khurgin, Protsenko, Smetanin, and Bouhelier]uskov_excitation_2016
author author A. V. Uskov, author J. B. Khurgin,
author I. E. Protsenko, author I. V. Smetanin, and author A. Bouhelier, title
title Excitation of plasmonic nanoantennas by nonresonant and
resonant electron tunnelling, https://doi.org/10.1039/C6NR01931E
journal journal Nanoscale volume 8, pages 14573–14579 (year
2016)NoStop
[Qian et al.(2021)Qian,
Li, Hsu, Chen, Tian, Tao, and Liu]qian_highly_2021
author author H. Qian, author S. Li, author S.-W. Hsu, author
C.-F. Chen, author
F. Tian, author A. R. Tao, and author Z. Liu, title title
Highly-efficient electrically-driven localized surface plasmon source
enabled by resonant inelastic electron tunneling, https://doi.org/10.1038/s41467-021-23512-2 journal journal Nature Communications volume 12, pages 3111 (year 2021)NoStop
[Kazarinov and Suris(1972)]kazarinov1972electric
author author R. Kazarinov and author R. Suris, title title Electric and
electromagnetic properties of semiconductors with a superlattice, @noop journal journal Sov. Phys.
Semicond volume 6, pages 120
(year 1972)NoStop
[Willenberg et al.(2003)Willenberg, Döhler, and Faist]willenberg2003intersubband
author author H. Willenberg, author G. Döhler, and author J. Faist, title title Intersubband gain in a
bloch oscillator and quantum cascade laser, @noop journal journal Physical Review B volume 67, pages 085315 (year
2003)NoStop
[Lagrée(2022)]lagree2022transport
author author M. Lagrée, title title Transport
électronique en régime de couplage fort lumière-matière pour
les dispositifs quantiques moyen-infrarouge, @noop journal journal Université Paris-Saclay (year 2022)NoStop
|
http://arxiv.org/abs/2307.03925v2 | 20230708074740 | Probe of soft-QCD in minimum bias events of pp collisions with the ATLAS at the LHC | [
"Yuri A. Kulchitsky"
] | hep-ex | [
"hep-ex",
"nucl-ex"
] |
Probe of soft-QCD in minimum bias events of pp collisions with the ATLAS at the LHC
Yuri A. Kulchitsky
August 12, 2023
========================================================================
§ INTRODUCTION
The study of soft Quantum Chromodynamics (QCD) charged-particle distributions in
proton–proton (pp) and proton–antiproton (pp̅) collisions
probes the strong interaction in the low transverse momentum (p_T)
regime or non-perturbative QCD (non-pQCD).
Description of low-p_T processes within pQCD is not possible.
Predictions can be made with phenomenological models inspired by QCD
(see reviews in
<cit.>).
In the low-p_T region, charged-particle interactions are typically described by
QCD-inspired models implemented in Monte Carlo (MC) event generators.
Data are used to constrain such MC models and gain further insight into the particle
dynamics of the low-p_T regime.
Measurements are used to constrain the free parameters of these models.
Low-p_T processes arising from pile-up events[Pile-up events are
additional pp interactions, occurring at higher instantaneous luminosities, in the same bunch crossing
as the triggered collision between two protons.]
may also affect the topologies of events involving an interaction with a high-p_T scale.
An understanding of soft-QCD processes is therefore important both on its own and as a means of
reducing systematic uncertainties in measurements of high-p_T phenomena.
An accurate description of low-p_T strong interaction processes is
essential for simulating single p p and p p̅ interactions and the pile-up effects.
Understanding of soft-QCD interactions has a direct impact on precision measurements of
high-p_T phenomena and on searches for new physics,
and it provides insight into strong interactions in the non-pQCD regime:
soft-QCD results are used in MC generator tuning,
and a soft-QCD description is essential for simulating
the underlying event (UE) with
multiple parton interactions (MPI), as well as initial- and final-state gluon radiation (ISR, FSR).
An important example of a process which is entirely governed by soft-QCD physics is hadronization.
Since there is no uniform description of the phenomena that occur at low p_T,
there is a variety of models trying to explain them through comparisons with extracted data.
There is a wealth of CERN's Large Hadron Collider (LHC)
<cit.>
measurements that probe the soft-QCD region and basically all LHC experiments to
measure soft-QCD phenomena.
Minimum bias (MB) events were used for soft-QCD studies.
MB events are inelastic events selected by an MB trigger with as little bias as possible,
and are dominated by low-p_T processes.
MB events include non-diffractive (ND), single- (SD), double- (DD) and central-diffractive (CD) processes.
In order to make a more complete study of particle properties in MB events,
results are given for different multiplicity and kinematic selections termed “phase spaces” (PS).
Measurements of charged-particle distributions by the ATLAS
<cit.>
detector
<cit.>
at the centre-of-mass (CM) energies √(s) = 0.9, 2.36, 7, 8 and 13
were performed for the pseudorapidity (η ) region |η| <2.5
and for the samples of events with the primary charged-particle multiplicity (n_ch)
more than or equal to 2 with the charged-particle transverse momentum p_T>100
and with the primary charged-particle multiplicity n_ch≥ 1, 6, 20, 50
with the charged-particle transverse momentum p_T > 500 .
Charged-particle transverse momentum results for pp and Pb + Pb interactions at 2.76 <cit.>,
for pp and p + Pb interactions at 5.02 <cit.>
in the pseudorapidity range |η| <2 of particles with
p_T > 500
and p_T > 4000 , respectively, and with
p_T⪅ 200
<cit.> were studied by the ATLAS.
Charged-particle distributions were measured
by the ALICE <cit.> Collaboration
<cit.>,
the CMS <cit.> Collaboration
<cit.>,
the CMS and TOTEM <cit.> Collaborations
<cit.>,
the LHCb <cit.> Collaboration
<cit.>,
the LHCf <cit.> Collaboration and
the TOTEM <cit.> Collaboration
<cit.>.
Similar measurements aimed at probing strong interactions at low p_T
have been made at lower energies, from √(s) = 0.03 to 0.9 ,
in e^+ e^-, e p and p p̅ collisions.
The low p_T studies were carried out in pp collisions at the ISR (CERN) by the
ACHM and ABCDHW Collaborations at √(s) = 0.0304, 0.0445, 0.0526 and 0.0622
<cit.>.
Similar studies were also carried out in p p̅ collisions at the SPS (CERN) by the
NA22
<cit.>,
UA1
<cit.>,
UA4 <cit.>
and UA5
<cit.>
Collaborations at √(s) = 0.022, 0.2, 0.54 and 0.9 .
Important results on this subject were obtained also in p p̅ collisions
at Tevatron (Fermilab) by the CDF
<cit.>
Collaboration at √(s) = 0.63, 1.8 and 1.96
<cit.>
and by the E735 Collaboration at √(s) = 0.3, 0.54, 0.9 and 1.8
<cit.>.
The hypothesis that at very high energies the probability distributions P (n, √(s))
of producing n particles in a certain collision process should exhibit a scaling relation
was proposed in
<cit.>.
This scaling behaviour is a property of particle multiplicity distributions known as the KNO scaling hypothesis.
The main assumption of the KNO scaling is the Feynman scaling
<cit.>,
where it was concluded that for asymptotically large energies
the mean total number of any kind of particle rises logarithmically with the CM energy as
⟨ n ⟩∝ln√(s).
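For reference, the KNO hypothesis can be stated compactly as the requirement that, at asymptotically high energies, the multiplicity distribution depends on n_ch and √(s) only through the scaled variable z; this explicit form is added here for clarity and follows the standard formulation:

\langle n_{\rm ch}(\sqrt{s}) \rangle \, P(n_{\rm ch}, \sqrt{s}) \;\longrightarrow\; \Psi(z) , \qquad z = \frac{n_{\rm ch}}{\langle n_{\rm ch}(\sqrt{s}) \rangle} ,

where Ψ is an energy-independent function; deviations of Ψ extracted at different √(s) from a single universal curve quantify the violation of KNO scaling.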
Results of the KNO scaling study using the ATLAS experiment data are presented in
<cit.>.
The KNO scaling was also studied at the LHC energies by the CMS
<cit.>
and ALICE
<cit.>.
Charged-particle multiplicity and transverse momentum distributions in pp collisions at CM energies
√(s) = 0.2 – 14 within the MC Quark-Gluon String Model (QGSM)
<cit.>
based on Gribov’s Reggeon field theory (RFT)
<cit.> were studied in
<cit.>,
where special attention was given to the origin of the violation of KNO scaling.
A detailed theoretical description of the KNO scaling was done in
<cit.>.
The novel physically well-motivated scaling rules for high-energy data were introduced in
<cit.>.
The MB events were also used by the LHC experiments to study UE, Bose-Einstein correlations (BEC),
an inelastic cross section, track jets, particle correlations, hadronization and colour reconnection.
To perform precise Standard Model measurements or to search for new physics phenomena at hadron colliders,
it is important to have a good understanding not only of the primary short-distance hard scattering
process, but also of the accompanying interactions of the rest of the pp collision
— collectively termed the UE.
It is impossible to separate uniquely the UE from the hard scattering process on an event-by-event basis,
but observables can be defined which are particularly sensitive to the properties of the UE.
Such observables have been studied using the MB events measurements
performed by the ATLAS detector in pp collisions at
√(s) = 0.9 and 7
<cit.>
and at √(s) = 13
<cit.>.
Using the MB events the BEC effect with one size parameter, the source radius, has been studied
by the ATLAS detector in pp collisions at
√(s) = 0.9 and 7
<cit.>
and at √(s) = 13
<cit.>.
Fiducial inelastic cross-sections were measured by the ATLAS at √(s) = 7
<cit.>
and at √(s) = 13
<cit.>.
The recent soft-QCD measurement results of the LHC experiments are reported, for example, in
<cit.>.
This paper is organized as follows.
A short description of the soft-QCD physics is presented in Sec. <ref>.
The ATLAS detector for study of MB events is described in Sec. <ref>.
The MC model tunes are presented in Sec. <ref>.
The charged-particle analysis is performed in Sec. <ref>.
A study of the KNO scaling is presented in Sec. <ref>.
The summary and conclusions are given in Sec. <ref>.
§ SOFT QCD
Understanding of soft-QCD interactions has a direct impact on precision measurements in
high-energy physics and on searches for new physics,
and it provides insight into strong interactions in the non-pQCD regime:
the soft-QCD results are used
* in MC generator tuning,
* for description of UE simulation,
* for description of multiple parton interactions (MPI),
* for description of initial and final state gluon radiation (ISR, FSR).
Schematic diagrams of non-diffractive (ND) and diffractive processes with single dissociation (SD), double dissociation (DD),
and central diffraction (CD) are shown in Fig. <ref>.
As discussed in
Ref. <cit.>,
the Ryskin-Martin-Khoze (RMK) model introduced in
<cit.>
based on a modification of the classic Gribov's Reggeon Field Theory (RFT)
<cit.>
allows one to trace the smooth transition from the pure perturbative
region with large parton transverse momentum
(k_T) into the soft domain.
Strong absorption of low-k_T partons
plays a crucial role here since it produces an effective infrared cut-off
and provides a possibility of extending the parton approach
used for hard processes
to also describe high-energy soft and semi-hard interactions.
This approach combines a description of the soft physics
and diffraction with the jet physics in a coherent self-consistent way.
An approach in which the soft and hard components are included independently <cit.> is also possible.
In this approach the soft part is described in terms of RFT with the phenomenological
soft Pomeron pole
while the hard part is calculated in terms of the parton model for mini-jet production with the
energy-dependent cut-off k_T > k_0 (s).
A combined description of soft and hard processes in hadronic collisions is reached within the
QGSJET-II MC model <cit.>
using the semi-hard Pomeron approach <cit.>.
In Ref. <cit.>
a model was constructed, which incorporated attractive features of
two successful theoretical approaches to high-energy QCD:
Balitsky-Fadin-Kuraev-Lipatov (BFKL) Pomeron calculus
<cit.> and
the Colour Glass Condensate approach (leads to a saturation of parton density with s)
<cit.>.
In Refs. <cit.>
an analysis was done for the data set divided into two classes corresponding to soft and hard interactions.
The term “hard” interactions is typically understood to mean
high-p_T parton-parton
interactions associated with such phenomena as jets,
while the soft component consists of everything else.
A comparison of the results shows distinct differences in the behaviour
of the two samples as a function of the CM energy.
Evidence was found that the properties of the soft sample are invariant as a function of the CM energy.
The separation of hard and soft interactions in the LHC experiments can be done using event-shape observables
<cit.>, for example spherocity or transverse thrust, as illustrated by the sketch below.
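As an illustration only, one common event-shape variable, the transverse spherocity S_0, can be estimated from the transverse momenta of the selected charged particles as in the following Python sketch; the function name and the grid-scan approximation of the minimisation are choices made for this example and are not part of the measurements discussed here. S_0 tends to 0 for pencil-like (hard, jetty) events and to 1 for isotropic (soft) events.

import numpy as np

def transverse_spherocity(px, py, n_steps=360):
    # S_0 = (pi^2/4) * min_n ( sum_i |pT_i x n| / sum_i pT_i )^2, with the unit
    # vector n scanned over a grid of azimuthal angles in the transverse plane.
    px, py = np.asarray(px, dtype=float), np.asarray(py, dtype=float)
    pt_sum = np.sum(np.hypot(px, py))
    phi = np.linspace(0.0, np.pi, n_steps, endpoint=False)
    nx, ny = np.cos(phi), np.sin(phi)
    # |pT x n| = |px*ny - py*nx| for every track and every trial axis
    cross_sums = np.abs(np.outer(px, ny) - np.outer(py, nx)).sum(axis=0)
    return (np.pi ** 2 / 4.0) * (cross_sums.min() / pt_sum) ** 2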
§ ATLAS DETECTOR
The ATLAS is a multipurpose particle physics experiment
<cit.>
operating at one of the beam interaction points at the LHC
<cit.>.
The cut-away view of the ATLAS detector[ATLAS uses a right-handed coordinate system
with its origin at the nominal interaction point (IP)
in the centre of the detector and the z-axis along the beam pipe.
The x-axis points from the IP to the centre of the LHC ring, and the y-axis points upward.
Cylindrical coordinates (r, ϕ) are used in the transverse plane, ϕ being
the azimuthal angle around the beam pipe.
The pseudorapidity is defined in terms of the polar angle θ as η=-lntan(θ/2).
The angular distance is measured in units of Δ R = √( (Δη)^2 + (Δϕ)^2).
]
is shown in Fig. <ref>.
The ATLAS detector covers almost the whole solid angle around
the collision point with layers of tracking detectors, calorimeters and muon chambers.
It is designed to study a wide range of physics topics at LHC energies.
The tracking devices and the trigger system
<cit.> are of particular importance for the study of MB events.
The innermost part of the ATLAS detector is the Inner Detector tracker (ID),
which has full coverage in ϕ and covers the pseudorapidity range |η|<2.5.
The cut-away view of the ATLAS ID is shown in Fig. <ref>.
The ID is immersed in the 2 T axial magnetic field of a superconducting solenoid
and measures trajectories of charged particles.
It consists of a silicon pixel detector (Pixel), a silicon microstrip detector (SCT) and
a straw-tube transition radiation tracker (TRT), each of which is split into a barrel and two endcap components.
The Pixel, SCT and TRT are located around the interaction point spanning radial distances of
33–150 mm, 299–560 mm and 563–1066 mm, respectively.
The barrel (each endcap) consists of four (three) pixel layers, four (nine) double layers
of silicon microstrips and 73 (160) layers of TRT straws.
The Pixel, SCT and TRT have (r, ϕ)-position resolutions of
10 μm, 17 μm, and 130 μm, respectively.
During the first long shutdown of the LHC,
the Insertable B-Layer (IBL) <cit.>
was constructed, inserted and commissioned to
become an additional (innermost) layer of the existing Pixel Detector.
The IBL is composed of 14 lightweight staves arranged in a cylindrical geometry,
each made of 12 silicon planar sensors in its central region and 2× 4
three-dimensional sensors at the ends.
The IBL pixel dimensions are 50 μm in the ϕ-direction and 250 μm
in the z-direction (compared with 50 μm by 400 μm for the other pixel layers).
The intrinsic spatial resolution of the IBL readout is 10 μm in the (r, ϕ)-position
and 75 μm in the z-position <cit.>.
The smaller radius and the reduced pixel size result in improvements
in both the transverse and longitudinal impact parameter resolutions
<cit.>.
The services for the existing pixel detector were upgraded,
significantly reducing the amount of material in
the region |η| > 1.5, in particular at the boundaries of the active tracking volume.
A track from a charged particle traversing the barrel detector typically has 12 silicon
measurement points (hits), of which four are in the Pixel detector and eight in the SCT,
and more than 30 TRT straw hits.
Requirements on an IBL hit and on impact parameters
strongly suppress the number of tracks from secondary particles.
The ATLAS detector has a two-level trigger system: the first-level (L1) trigger and the
high-level trigger (HLT) <cit.>.
MB events were required to satisfy L1 triggers using the MB trigger scintillators (MBTS).
These are mounted at each end of the detector in front of the liquid-argon
endcap-calorimeter cryostats at z = ±3.56 m,
and are segmented into two rings in pseudorapidity
(2.07 < |η| < 2.76 and 2.76 < |η| < 3.86).
The inner (outer) ring consists of eight (four) azimuthal sectors, giving a total of 12 sectors on each side.
The MB events were selected on the basis of the MBTS alone.
The trigger used in this measurement requires at least one signal in a scintillator on one side to
be above threshold.
The ATLAS MB trigger selection corresponds to the inelastic (INEL) event definition used by ALICE and CMS.
The methods developed for the measurement of the properties of MB events during
low luminosity runs using the ATLAS detector were described in
Ref. <cit.>.
An extensive software suite <cit.> is used in the reconstruction and analysis of
real and simulated data, in detector operations and in the trigger and data acquisition
systems of the experiment.
§ MONTE CARLO MODELS
Inclusive MB data are modelled in MC event generators as a combination of three different processes:
non-diffractive, single-diffractive and double-diffractive.
Low-p_T scattering processes may be described by the lowest-order (LO) pQCD
two-to-two parton scatters, where the divergence of the cross section at
p_T = 0 is regulated by phenomenological models.
A summary of MC generator tunes used for comparison with the MB results based on the ATLAS measurements
<cit.>
is presented in Table <ref>.
The
Pythia 6 <cit.>,
Pythia 8 <cit.>,
PHOJET <cit.>,
EPOS <cit.>
and
QGSJET-II
<cit.>
MC generators are used to correct the data for detector effects and to compare
with particle-level corrected data.
For the purpose of comparing the present measurements to different phenomenological
models describing MB events, the following particle-level MC samples were generated.
Pythia 8 <cit.> and EPOS <cit.> models include
the effects of colour coherence, which is important in dense parton environments and effectively
reduces the number of particles produced in multiple parton–parton interactions.
In Pythia 8
the simulation is split into non-diffractive and diffractive processes, the former dominated by
t-channel gluon exchange and amounting to approximately 80% of the selected events,
and the latter described by a Pomeron-based approach
<cit.>.
Different parameter settings in the models are used in
simulation to reproduce the existing experimental data and are referred to as tunes.
A tune is a particular configuration or set of values of the parameters of a particular MC model.
The Pythia 8 MC generator <cit.> was used with the
parameter values set to the A2 tune <cit.>
and with the MSTW2008LO PDF set <cit.>.
The contributions from ND, SD and DD processes were included in proportion to the
cross sections predicted by Pythia 8 with the A2 tune.
The ATLAS MB tune Pythia 8 A2 was used for determination of detector corrections.
This was tuned using ATLAS MB data at 7 for the MPI parameters.
The Pythia 8 Monash tune <cit.> is tuned to both MB and UE results.
It was constructed using Drell–Yan and UE data from ATLAS, and also data from the CMS, SPS,
and Tevatron in order to constrain energy scaling.
The Monash UE tune is based on the NNPDF2.3LO PDF <cit.>
and incorporates updated fragmentation parameters, as well as SPS and
Tevatron data to constrain the energy scaling.
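As a schematic illustration of how such an inclusive soft-QCD (MB) sample can be generated at particle level, the following Python sketch uses the Pythia 8 Python bindings with the inelastic soft-QCD processes (ND+SD+DD+CD) and the Monash tune; it is only an example configuration, not the ATLAS production setup, and it assumes the pythia8 module is available.

import pythia8

pythia = pythia8.Pythia()
pythia.readString("Beams:eCM = 13000.")           # sqrt(s) = 13 TeV
pythia.readString("SoftQCD:inelastic = on")       # ND + SD + DD + CD processes
pythia.readString("Tune:pp = 14")                 # Monash 2013 tune
# Roughly approximate the primary-particle definition by leaving long-lived states undecayed.
pythia.readString("ParticleDecays:limitTau0 = on")
pythia.readString("ParticleDecays:tau0Max = 10")  # c*tau0 in mm
pythia.init()

n_ev, n_ch_tot = 0, 0
for _ in range(1000):
    if not pythia.next():
        continue
    n_ch = 0
    for i in range(pythia.event.size()):
        p = pythia.event[i]
        if p.isFinal() and p.isCharged() and p.pT() > 0.5 and abs(p.eta()) < 2.5:
            n_ch += 1                             # pT > 500 MeV, |eta| < 2.5
    if n_ch >= 1:                                 # event-level selection
        n_ev += 1
        n_ch_tot += n_ch

print("selected events:", n_ev, " <dN_ch/deta> ~", n_ch_tot / (5.0 * max(n_ev, 1)))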
The Pythia 8 version 8.130 MC generator <cit.> uses
the diffraction model that produces much harder p_T and n_ch
spectra for the SD and DD contributions than Pythia 6.
The default parton shower model is similar to that used in Pythia 6 MC09.
The new Pythia 8 A3 tune <cit.> is suitable for inclusive QCD
modelling for LHC Run 3.
The Pythia 8 A3 uses the ATLAS Run 2 charged particle
distribution and inelastic cross section results in addition to the Run 1 results
used previously to construct MB tunes.
The A3 uses the same NNPDF 2.3LO PDF and demonstrates that an acceptable description
of data can be achieved by using the Donnachie–Landshoff (DL) model for diffraction.
The ATLAS Pythia 6 <cit.> MC09 tune <cit.>
uses a specific set of optimized parameters; it employs the MRST LO* PDF <cit.>
and the p_T-ordered parton shower <cit.>.
These parameters were derived by tuning to the UE and MB Tevatron results from
energy region √(s) = 0.63 – 1.96 .
The ATLAS Pythia 6 MC09c tune <cit.>
is an extension of the ATLAS MC09 tune where the strength of the colour reconnection (CR)
was tuned to describe the
⟨ p_T⟩ distributions as a function of n_ch
measured by CDF in p p̅ collisions at the Tevatron <cit.>.
The CR phenomenon is a pure soft-QCD effect.
The point is that after a number of coloured secondary partons are produced, there are different possibilities
of forming the colour flow between these partons and grouping
the partons into colourless clusters.
In the process of reconnection, one rearranges the colour flow in such a way as to minimize
the size of the clusters.
This is especially important when dealing with contributions of MPI.
The reconnection between the different cut Pomeron diagrams diminishes
the final multiplicity and can change the form of the n_ch distributions
<cit.>.
The Pythia 6 AMBT1 tune (ATLAS Minimum Bias Tune 1)
<cit.>
was developed in order to adapt the free parameters of the ND models to the experimental data
at √(s) = 0.9 and 7 in a diffraction-reduced PS with
n_ch≥ 6, p_T > 500 , |η| <2.5.
The starting point for this tune is the ATLAS Pythia 6 MC09c
<cit.>.
The Pythia 6 DW tune <cit.>
uses virtuality-ordered showers and was derived to describe the CDF Run II UE and Drell–Yan data.
The Pythia 6 AMBT2B tune <cit.>
with the CTEQ6L1 PDF <cit.> was
evaluated using jet and MB data.
EPOS <cit.> provides an implementation of a parton-based Gribov Reggeon theory
<cit.>
which is an effective QCD-inspired field theory describing hard and soft scattering simultaneously.
The EPOS generator, version LHCv3400, was used with the LHC tune
<cit.>.
The EPOS generator does not rely on PDF.
The QGSJET-II model version 04
<cit.>
provides a phenomenological treatment of hadronic and nuclear interactions in
the framework of the Reggeon field theory.
The soft and semihard parton processes are included within the “semihard Pomeron” approach.
For QGSJET-II the default settings of the generator are applied.
The QGSJET-II generator does not rely on PDF.
The PHOJET MC generator <cit.> version 1.12.1.35
is used as an alternative model to Pythia-based generators.
It describes low-p_T physics using the two-component Dual Parton Model (DPM)
<cit.> which includes soft hadronic processes described by Pomeron
exchange and semi-hard processes described by perturbative parton scattering.
The PHOJET relies on Pythia 6 version 6.1.15 for the fragmentation of partons.
In the Pythia 6 Perugia 0 tune <cit.>,
the soft-QCD part is tuned using only MB data from the Tevatron and CERN p p̅ colliders.
All large MC samples of MB events were generated and passed through the ATLAS simulation program
<cit.>, which is based on Geant4 <cit.>,
and the reconstruction chain, which is exactly the same as that used for the collision data.
ATLAS used 13 MC generators and their tunes
Pythia 6 <cit.>,
Pythia 8 <cit.>,
PHOJET <cit.>,
EPOS <cit.>,
QGSJET-II
<cit.>
to correct the data for detector effects and to compare with particle-level corrected MB results,
which are presented in Table <ref>.
The comparisons of the MC predictions with the ATLAS MB results are presented in Sec. <ref>.
§ ANALYSIS OF MINIMUM-BIAS EVENTS
Measurements of inclusive particle spectra are among the basic items in the physics programmes of the LHC experiments,
and they are performed regularly at each collision energy.
The charged-particle multiplicity is one of the key characteristics of high-energy hadron collisions
and has been the subject of many experimental and theoretical studies because,
although quite simple to measure, it is quite difficult to describe over the full measured range.
Measurements of charged-particle distributions probe the non-pQCD regime where
QCD-inspired models implemented in MC event generators are used to
describe the data and to constrain free parameters of MC models.
Accurate description of low-p_T strong interaction processes is essential for
simulating single pp and pile-up multiple pp interactions.
Such pp measurements are also used as input in many models trying to describe heavy-ion results.
The results used in this review are based on the pp data collected at
√(s) = 0.9 – 13 recorded by the ATLAS experiment
<cit.> at the LHC <cit.> in 2010 – 2015
<cit.>.
The data were taken in a special configuration of the LHC with low beam currents and
a reduced beam focusing, producing a low mean number of interactions per bunch-crossing
in the range 0.003 – 0.007.
The corrected distributions for primary charged particles in five separate PS regions for events with
n_ch≥ 2, p_T >100 ,
n_ch≥ 1, p_T >500 and
n_ch≥ 6, 20, 50, p_T >500 are used.
The results are compared to predictions of models tuned to a wide range of measurements.
The measured distributions are presented as inclusive-inelastic distributions within a given
PS region with minimal model-dependent corrections to facilitate comparisons with models.
§.§ Observables
The following observables were studied by ATLAS:
1/N_ev·d N_ch/dη ,
1/N_ev·1/2 π p_T·d^2 N_ch/dηd p_T ,
1/N_ev·d N_ev/d n_ch ,
d⟨ p_T⟩/d n_ch ,
where η is the particle pseudorapidity, p_T is the
charged-particle transverse momentum,[The factor 2π p_T
in the p_T spectrum comes from the Lorentz-invariant definition
of the cross-section in terms of d^3 p.
The results could thus be interpreted as the massless approximation to d^3 p.]
n_ch is the number of primary charged particles in an event within the kinematic acceptance.
N_ev is the event number yield for a given event selection,
N_ch is the total number of primary charged particles in all selected events in the data sample,
⟨ p_T⟩
is the average transverse momentum of primary charged particles within the kinematic acceptance.
A primary charged particle is defined as a charged particle with a mean lifetime τ > 300 ps,
which is either directly produced in p p interactions or from decays of directly produced particles with
τ < 30 ps.
Charged particles produced from decays of particles with τ > 30 ps
are considered as secondary particles and are thus excluded.
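Expressed as a selection rule, this definition can be sketched as follows; the helper below is written only for this text, and in a real analysis the lifetime bookkeeping is done on the generator record.

def is_primary_charged(charge, tau_ps, ancestor_taus_ps):
    # Schematic rendering of the definition above: a charged particle with mean
    # lifetime tau > 300 ps, produced directly or only through decays of
    # particles with tau < 30 ps.  ancestor_taus_ps lists the mean lifetimes
    # (in ps) of all decay ancestors; it is empty for directly produced particles.
    if charge == 0 or tau_ps <= 300.0:
        return False
    return all(tau < 30.0 for tau in ancestor_taus_ps)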
The most commonly used inclusive charged-particle spectra
correspond to events with a minimum multiplicity
n_ch≥ 2 or n_ch≥ 1 and contain
primary charged particles possessing a minimum transverse momentum
p_T > 100 or p_T > 500 , respectively,
for the pseudorapidity region |η| < 2.5.
Primary charged-particle spectra are also shown for higher-multiplicity events
(n_ch≥ 6, 20 and 50, p_T > 500 ).
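As a concrete illustration of how the first two observables are filled from per-event track information, the following numpy-based sketch is included; the input container (a list of per-event arrays of p_T in MeV and η for selected tracks) and all function and parameter names are assumptions made only for this example.

import numpy as np

def fill_densities(events, pt_min=500.0, n_ch_min=1, eta_max=2.5,
                   n_eta_bins=50, pt_edges=None):
    # events: iterable of (pt, eta) array pairs, one pair per event (pt in MeV).
    if pt_edges is None:
        pt_edges = np.logspace(np.log10(0.5), 2.0, 40)   # 0.5 - 100 GeV
    eta_edges = np.linspace(-eta_max, eta_max, n_eta_bins + 1)
    h_eta = np.zeros(n_eta_bins)
    h_pt = np.zeros(len(pt_edges) - 1)
    n_ev = 0
    for pt, eta in events:
        pt, eta = np.asarray(pt, dtype=float), np.asarray(eta, dtype=float)
        sel = (pt > pt_min) & (np.abs(eta) < eta_max)
        if sel.sum() < n_ch_min:                 # event-level multiplicity cut
            continue
        n_ev += 1
        h_eta += np.histogram(eta[sel], bins=eta_edges)[0]
        h_pt += np.histogram(pt[sel] / 1000.0, bins=pt_edges)[0]
    if n_ev == 0:
        raise ValueError("no events passed the selection")
    # (1/N_ev) dN_ch/deta
    dn_deta = h_eta / (n_ev * np.diff(eta_edges))
    # (1/N_ev) (1/(2 pi pT)) d^2N_ch/(deta dpT), averaged over |eta| < eta_max
    pt_centres = 0.5 * (pt_edges[1:] + pt_edges[:-1])
    d2n = h_pt / (n_ev * 2.0 * np.pi * pt_centres * np.diff(pt_edges) * 2.0 * eta_max)
    return dn_deta, d2n, n_ev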
§.§ Pseudorapidity dependence of charged-particle multiplicity
§.§.§ ATLAS distributions of charged-particle multiplicity over η
The primary charged-particle multiplicity density pseudorapidity distributions
(or “pseudorapidity distribution”)
for events with n_ch≥ 2, p_T >100 and
n_ch≥ 1, p_T >500 for |η| < 2.5
studied by the ATLAS
<cit.>
at the CM energies √(s)= 13, 8, 7, 2.36 and 0.9 are shown in
Figs. <ref>, <ref>(a) and (b), <ref>(a) and (b), <ref>
and <ref>, respectively.
The pseudorapidity distributions for particles with
p_T >500 and higher minimum multiplicities per event
n_ch≥ 6, 20, 50 at √(s)= 8 are shown in
Figs. <ref>(c) – (d), and for n_ch≥ 6 at √(s)= 7 and 0.9
in Figs. <ref>(c) and <ref>(c), respectively.
The accuracy of the measured pseudorapidity distributions
increases with increasing energy, because the amount of dead material
in the ATLAS ID was understood better in the analyses of the higher-energy data.
The ATLAS experimental results are compared to predictions of models tuned to a wide range
of measurements described in Sec. <ref> and presented in Table <ref>.
The measured spectra are presented as inclusive distributions with corrections that minimally rely
on the MC model used, in order to facilitate an accurate comparison with predictions.
In general, the systematic uncertainties are larger than the statistical uncertainties.
In most regions of all distributions the dominant uncertainty comes
from the track reconstruction efficiency.
Figure <ref> shows the pseudorapidity distributions at √(s) = 13 .
The distribution corresponding to the PS with n_ch≥ 2, p_T >100
<cit.>
rises as |η| increases, peaking at |η|≈ 1.7 before falling.
For the PS with n_ch≥ 1, p_T >500 <cit.>,
the mean particle density is roughly constant at 2.9 for |η|≲ 1.5 and falls at higher
η.
For pseudorapidity distributions at 13 for n_ch≥ 2 with
p_T >100 the Pythia 8 Monash tune, EPOS and
QGSJET-II give a good description for |η|≲ 1.5 in
Fig. <ref>(a).
The prediction from the Pythia 8 A2 tune
has the same shape as predictions from the other generators, but lies below the data.
In case of PS with n_ch≥ 1, p_T >500 ,
EPOS describes the data for |η|≲ 1.0,
and predicts a slightly larger multiplicity at larger |η| values.
QGSJET-II and the Pythia 8 Monash tune
predict multiplicities that are too large by approximately 15% and 5%, respectively.
The Pythia 8 A2 tune predicts a primary charged-particle multiplicity
density that is 3% too low in the central region but describes the data well in the forward region.
In Fig. <ref>(a) at 8 <cit.>
the distribution corresponding to the PS with n_ch≥ 2, p_T >100
is well described by EPOS and Pythia 8 Monash tune
but is underestimated by the Pythia 8 A2 tune and QGSJET-II.
In Fig. <ref>(b) for the PS with n_ch≥ 1, p_T >500
EPOS overestimates the distribution at |η| > 1.7
and describes the data well for the rest of the pseudorapidity range.
The data are overestimated by the QGSJET-II and
Pythia 8 Monash tune calculations and underestimated by the
Pythia 8 A2 tune prediction.
A similar shape is seen for the PS corresponding to higher multiplicities with
n_ch≥ 6, 20, 50 shown in Fig. <ref>(c) – (e)
with the extent of the plateau becoming shorter as the multiplicity threshold is raised.
A small apparent structure in the distributions of the central values of the data points
occurs at values of |η|∼ 1.7.
In these figures all models overestimate the overall yield for the PS with n_ch≥ 6, 20
although Pythia 8 A2 describes the plateau in the central region well.
For the largest multiplicity threshold, n_ch≥ 50,
all of the models overestimate the data at |η| > 1.7
but provide a better description in the central region.
Figures <ref>(a) and <ref>(a) show the η distributions for the
most inclusive PS region with n_ch≥ 2, p_T >100 .
In these cases the distributions show weaker dependence on |η|
than in the other plots at √(s)= 7 and √(s)= 0.9 .
Figures <ref>(b), <ref> and <ref>(b) show the
pseudorapidity distributions in the PS region with
n_ch≥ 1, p_T >500 at √(s)= 7 , √(s)= 2.36 and √(s)= 0.9 , respectively.
The mean particle density is roughly constant for |η| < 1.0 and
decreases at higher |η|.
The distribution shapes of the models are similar except for that of the Pythia 6 DW tune,
which has a flatter spectrum and a more pronounced dip at central |η|,
especially at low √(s).
At energies 7 , 2.36 and 0.9 the Pythia 6 AMBT1 tune
gives the best shape and normalisation description of the data, although it was tuned for
n_ch≥ 6 in Figs. <ref>(c) and <ref>(c).
At √(s)= 7 all the shapes seem to model the observed spectrum reasonably well,
but at this energy the difference in normalisation among the models varies more widely
and no model reproduces the data.
At √(s)= 0.9 there is very little difference between the models both in shape
and normalisation with the exception of PHOJET, which shows excellent agreement with the data.
The other models show on average too few particles.
The shape of the distribution is reasonably well described by all models.
In Ref. <cit.> the performance of the ATLAS Pythia 8 A3 tune was
presented for primary charged-particle multiplicity density pseudorapidity distributions,
transverse momentum distributions and multiplicity distributions;
and also average transverse momentum multiplicity distributions,
compared to the predictions of the previous ATLAS Pythia 8 tunes —
A2 and Monash.
Both these tunes use the default Schuler–Sjöstrand (SS) diffraction model
<cit.>, and predict the same value.
The SS model overestimates the inelastic cross-section measured by
ATLAS at 7 and 13 , as can be seen in Table <ref>;
alternative models are therefore considered here.
Changing the diffractive model affects the charged particle distributions not only at the
low multiplicity or in the low p_T region, but also at intermediate values, and in each case,
the MPI and CR parameters need retuning in order to preserve reasonable agreement with data.
The DL model <cit.> is found to give the best description of the MB observables
and the measured fiducial inelastic cross-section <cit.>.
The DL model comes with two tunable parameters which control the Pomeron Regge trajectory.
To understand the energy dependence of the parameters, the parameters were initially tuned at each √(s)
individually using just the MB distributions.
For each parameter at each √(s), a tuned value was determined and then compared
to the values of the same parameter obtained when only a subset of sampling runs is used.
The spread of these points was an indication of the statistical and extrapolation uncertainty
of the tune, as well as of how well the tuned value of the parameter was constrained by
the observables used.
The next step was to determine the sensitivity of each of these parameters to different observables
by successively adding distributions other than those from the MB analysis
and varying the relative weight.
The fiducial inelastic cross section predictions from Pythia 8 A3
are about 5% lower compared to SS, which is somewhat closer to the values from the data.
This does not come at a cost of sacrificing agreement with other distributions.
In Figs. <ref>, <ref>, <ref> and <ref>
the performance of the ATLAS Pythia 8 A3 tune can be seen for
primary charged-particle multiplicity pseudorapidity distributions,
primary charged-particle multiplicity transverse momentum distributions,
primary charged-particle multiplicity distributions;
and average transverse momentum multiplicity distributions,
compared to the previous Pythia 8 A2 and Monash tunes.
The predicted values of the fiducial inelastic cross-section at √(s) = 7 and 13 for the tunes compared with data are shown in Table <ref>.
Figure <ref> shows that the Pythia 8 A3 tune
provides a small improvement in the modelling of the charged-particle pseudorapidity distributions
at √(s)= 8 and, to a lesser extent, at √(s)= 13 , at the expense of
a larger deterioration in the modelling of the √(s)= 0.9 data.
Since the aim is to model soft collisions for pile-up at √(s) = 13 ,
the Pythia 8 A3 tune’s mis-modelling of the √(s)= 0.9
data is acceptable.
The EPOS LHC, PHOJET, QGSJET-II,
Pythia 6 and Pythia 8 models all have difficulty describing the whole spectrum of the data;
the best agreement is achieved with EPOS.
For p_T > 100 at the highest energies
Pythia 8 Monash, EPOS, QGSJET-II
give a good description for |η|< 1.5.
The prediction from Pythia 8 A2 has the same shape but lies below the data.
For p_T > 500 at the highest energies
the MCs have the same shape but different normalisation;
EPOS and Pythia 8 A2 give remarkably good predictions.
As discussed in Ref. <cit.>, in terms of Feynman diagrams
(Fig. <ref>)
the cut Pomeron can be viewed as a set of ladder diagrams corresponding to a sum of completely inelastic
2 → n processes, that is, to the last term
G_inel = 1 - e^-Ω in the unitarity equation (20.9) in
<cit.>.
Here n > 2 means the production of additional (n - 2) gluons which form minijets.
Minijets result from hadronization of partons emitted from the cut QCD Pomeron.
Typically, minijets are groups of hadrons with comparatively low overall transverse momentum,
p_T≲ 10 .
In the final state driven by one Pomeron, one can expect to observe gluon minijets with a flat rapidity
distribution, i.e. a plateau in the central pseudorapidity region of the primary charged-particle multiplicity
distributions which are presented in Figs. <ref> – <ref> and
<ref>.
This plateau is more pronounced for the results with higher p_T threshold,
p_T > 500 in Figs. <ref>(b) – <ref>(b)
and <ref>(a).
This would correspond to a flat pseudorapidity distribution of produced particles
if they were massless.
The dip observed at η = 0 in Figs. <ref>(a) – <ref>(a)
for events with p_T > 100 is explained by the presence of massive particles.
§.§.§ Distributions of charged-particle multiplicity over η of the LHC experiments
The CMS results for pseudorapidity distributions for events for |η| < 2.4 at
the CM energies √(s)= 13 with n_ch≥ 1, p_T >500 <cit.> are shown in Fig. <ref>(a).
The measured distributions are presented for three different event data sets:
* the most inclusive sample (inelastic),
* the sample dominated by non-single diffractive dissociation events (NSD-enhanced sample),
* the sample enriched by single diffractive dissociation events (SD-enhanced sample).
The SD-minus and SD-plus samples are mutually exclusive, depending on the side of the
forward-detector that contains the hadronic activity.
The pseudorapidity distribution of the SD-enhanced event sample is also presented
as a symmetrized distribution constructed from the SD-minus and SD-plus enhanced samples
and is referred to as the SD-One-Side enhanced event sample.
The symmetrization is performed by reflecting the distribution with respect to
|η| = 0.
In general terms, the inelastic and NSD distributions are similar.
The pseudorapidity density of the SD-enhanced event sample is about a factor of 4
lower than that of the most inclusive event samples.
The combined CMS–TOTEM pseudorapidity distributions are presented in
Figs. <ref>(b) – (d) for the inclusive event selection sample,
the NSD-enhanced event selection sample and the SD-enhanced event selection sample
<cit.>.
The measurements are compared to the results from
Pythia 6 (version 6.426) <cit.> tune Z2* <cit.>,
Pythia 8 (version 8.153) <cit.> tune 4C <cit.>,
HERWIG++ (version 2.5.0) <cit.> tune UE-EE-3 with CTEQ6L1
<cit.> PDFs, EPOS LHCv3400 tune LHC <cit.>
and QGSJET-II version 04 <cit.>.
In Ref. <cit.> the similar figures for the pseudorapidity distributions
were presented with additional η regions from TOTEM: 3.7 < η < 4.8
and -7.0 < η < -6.0.
The results are derived in the central region by averaging the data points in the corresponding
±η bins and in the forward region by averaging over the
four half-arms of the TOTEM T2 telescopes.
The primary charged-particle multiplicity density at η = 0 is
5.35 ± 0.36 for the inclusive sample,
6.20 ± 0.46 for the NSD-enhanced sample, and
1.94^+ 0.26 _-0.23 for the SD-enhanced sample, with negligible statistical uncertainties.
The CMS primary charged-particle multiplicity density at η = 0
for the NSD-enhanced sample is in agreement within error bars with the ATLAS one
presented in Table <ref> at
√(s) =13 for PS n_ch≥ 2, p_T >100 .
The predictions from various MC event generators differ from
the data by up to 20% for the inclusive and NSD-enhanced samples,
with even larger discrepancies for the SD-enhanced sample.
The data are well described by Pythia 6 and QGSJET-II for the inclusive selection.
For the NSD-enhanced sample, the predictions obtained from Pythia 6 and
QGSJET-II agree with the data for most η bins.
A good description of the measurement for the SD-enhanced sample is provided by both
EPOS and Pythia 6.
The forward primary charged-particle multiplicity density decreases with |η|.
In the inclusive sample, d N_ch / dη is
3.85 ± 0.49 at η = 5.375 and
2.61 ± 0.28 at η = 6.350 with negligible statistical uncertainty.
The pseudorapidity density of the NSD-enhanced sample varies between
4.80 ± 0.62 and 3.17 ± 0.35, while for the SD-enhanced sample it is in the range of
1.49 ± 0.27 to 1.20 ± 0.20.
The MC predictions for the three samples differ from the data by up to about ± 30%.
For the inclusive and NSD-enhanced samples, the data in the forward region are in agreement with
the prediction from QGSJET-II and are between the EPOS and
Pythia 8 results.
For the SD-enhanced selection, the TOTEM data points are close to the Pythia 8 and
HERWIG++ predictions, while QGSJET-II underestimates the data.
The change in the slope of the MC curves close to η = 5.3,
more visible for the NSD- and SD-enhanced distributions, is due to the event
selection requirement of at least one charged particle in the pseudorapidity region of
the TOTEM T2 telescopes.
§.§ Charged-particle multiplicity density
§.§.§ Energy dependence of the multiplicity density at ATLAS
The energy dependence of primary charged-particle multiplicity density,
1/N_ev·d N_ch/ dη|_η =0,
is of interest because it
* provides information about the basic properties of p p collisions,
* is
related to the average energy density achieved in the interaction of protons,
* constitutes a reference for the comparison with heavy ion collisions.
The average primary charged-particle multiplicity in pp interactions per unit of pseudorapidity,
multiplicity density, for |η| <0.2 as a function of the CM energy √(s)
in three separate PS regions for events with
n_ch≥ 2, p_T >100 ,
n_ch≥ 1, p_T >500
and
n_ch≥ 6, p_T >500
are shown in Fig. <ref>.
The results are compared to predictions of MC models tuned to a wide range of measurements.
The comparison with the MC models
Pythia 8 A2, Pythia 8 Monash, EPOS LHC,
QGSJET-II for √(s) from 0.9 to 13
<cit.>
and
Pythia 6 AMBT1, Pythia 6 MC09, Pythia 6 DW,
Pythia 8, PHOJET for √(s) from 0.9 to 7 <cit.>
is shown in Fig. <ref>(a) and Fig. <ref>(b), respectively.
The primary charged-particle multiplicity density in the central pseudorapidity
region at √(s) = 13 for events with n_ch≥ 2,
p_T >100 is measured for fiducial PS
to be 6.42 ± 0.10, by averaging over |η| < 0.2;
the quoted error is the systematic uncertainty, the statistical uncertainty is negligible.
In order to compare with other measurements, it is corrected for the contribution from
strange baryons (and therefore extrapolated to primary charged particles with τ > 30 ps)
by a correction factor of 1.0121 ± 0.0035.
The central value is taken from EPOS; the systematic uncertainty is taken from the difference between
EPOS and Pythia 8 A2, and the statistical uncertainty is negligible.
The mean number of primary charged particles after the correction is
6.50 ± 0.10 at √(s) = 13 for events with
n_ch≥ 2, p_T >100 .
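The strange-baryon correction amounts to a simple rescaling, with the relative uncertainties combined in quadrature; a minimal sketch of this propagation, which reproduces the quoted 6.50 ± 0.10, is:

import math

def apply_correction(value, err, factor, factor_err):
    # Rescale a measured density by a correction factor and combine the
    # relative uncertainties in quadrature (treated as uncorrelated).
    corrected = value * factor
    rel = math.hypot(err / value, factor_err / factor)
    return corrected, corrected * rel

print(apply_correction(6.42, 0.10, 1.0121, 0.0035))   # ~ (6.50, 0.10)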
The mean number of primary charged particles in the central region is computed
by averaging over |η| < 0.2 and found to be
2.874 ± 0.001 (stat)± 0.033 (syst)
at √(s) = 13 for events with
n_ch≥ 1, p_T >500 .
This measurement is corrected for the contribution from strange baryons.
The prediction from EPOS is used to perform the extrapolation,
and the deviation from the Pythia 8 Monash
prediction is taken as a systematic uncertainty and symmetrised to give 1.024 ± 0.009.
A summary of central primary charged-particle multiplicity densities at η = 0
in all measured PS at √(s) = 8, 13 is given in Table <ref>.
The primary charged-particle multiplicity density increases by a factor of 2.2
when √(s) increases by a factor of about 14 from 0.9 to 13 .
These extrapolated results from Table <ref> are shown
in Fig. <ref>(a) <cit.> and
compared to predictions of the MC models
Pythia 8 A2, Pythia 8 Monash, EPOS LHC and
QGSJET-II for √(s) from 0.9 to 13 <cit.>.
The predictions of EPOS and Pythia 8 MONASH match the data well
at √(s) = 13 for events with n_ch≥ 2, p_T >100 .
For Pythia 8 A2, the match is not so good as was observed when measuring particles with
p_T >500 <cit.>.
For events with n_ch≥ 1, p_T >500 at √(s) = 13 EPOS and Pythia 8 A2
describe the dependence on √(s) very well, while
Pythia 8 Monash and QGSJET-II
predict a steeper rise in multiplicity with √(s).
In order to make consistent comparisons of pseudorapidity density at 8
<cit.>
with other measurements, these results are corrected to the earlier τ > 30 ps definition of
stable particles, using the factor
1.012 ± 0.004 in the p_T > 100 PS
and
1.025 ± 0.008 in the p_T > 500 PS
derived from predictions of the EPOS LHC tune
with uncertainties following comparisons of the predictions of different MC models.
Results at 8 are shown in Fig. <ref>(a) for the PS
(p_T > 500 , n_ch≥ 1; 6)
and
(p_T > 100 , n_ch≥ 2).
It can be seen that the total uncertainty in the measurement at
√(s) = 8 is about 30–40% less than for the study with
the √(s) = 7 data.
This was achieved due to improved knowledge of the ID material distribution <cit.>,
which reduced the dominant source of systematic uncertainty by more than 50% with
respect to the √(s) = 0.9, 2.36, 7 measurements.
The best description of the data is given by EPOS.
The predictions of the Pythia 8 tunes provide a fair description of the shape of the
multiplicity dependence with CM energy.
As in the case of the other presented distributions, QGSJET-II calculations
give the worst description.
The values for three PS regions are shown in Fig. <ref>(b) with comparison of
Pythia 6 AMBT1,
Pythia 6 MC09,
Pythia 6 DW,
Pythia 8
and
PHOJET
predictions for √(s) from 0.9 to 7 and in
Table <ref>
<cit.>.
The PS region with the largest minimum p_T and the highest minimum multiplicity,
(p_T > 500 , n_ch≥ 6),
which is the region with the least amount of diffraction,
is the one where the models vary the least and the energy extrapolations of most
models is in the best agreement with the data.
For the most inclusive measurements, none of the models agree with the data and the spread at
√(s) = 7 in the expected values is almost one third of the mean predicted value.
The observed value is significantly higher at this energy than in any of the models.
The total multiplicity densities of charged particles with p_T > 100
within |η| < 2.5 are computed as the mean of the distributions
shown in Figs. <ref>(a) and <ref>(a).
They are found to be
5.881 ± 0.002 (stat)± 0.276 (syst) at √(s) = 7
and
3.614 ± 0.006 (stat)± 0.170 (syst) at √(s) = 0.9 (see Table <ref>).
These charged-particle total multiplicities in the full pseudorapidity region, -2.5 < η < 2.5,
are
29.04 ± 0.01 (stat)± 1.38 (syst) at √(s) = 7
and
18.07 ± 0.03 (stat)± 0.85 (syst) at √(s) = 0.9
and are in good agreement with the results presented in Table <ref>.
To extrapolate to p_T = 0 ,
these numbers were multiplied by model-dependent scale factors.
The averaged inclusive charged-particle multiplicity for events with two or more particles
for the kinematic region with p_T≥ 0 is found to be
6.252 ± 0.002 (stat)± 0.304 (syst)
at √(s) = 7
and
3.849 ± 0.006 (stat)± 0.185 (syst)
at √(s) = 0.9 (see Table <ref>).
These are ≈ 6% higher than average multiplicities for p_T > 100 .
This result is interpreted as the average total inelastic multiplicity
for events with two or more particles within |η| < 2.5.
For correct comparison of charged-particle multiplicity and average transverse momentum distributions
for different energies or PS regions the scaled multiplicity is introduced as follows:
z = n_ch (√(s), p_T^min) / ⟨ n_ch (√(s), p_T^min) ⟩.
For example, a comparison of results for different PS regions,
with two p_T^min thresholds, was presented in
Ref. <cit.>.
A fit with a fourth-degree polynomial function of the primary charged-particle
multiplicity density distributions in the pseudorapidity region -2.5 < η < 2.5
was used in <cit.> for the calculation of an
average total multiplicity,
⟨ n_ch ( √(s), p_T^min ) ⟩,
for different CM energies and p_T^min
using the ATLAS results <cit.>.
The 1/N_ev· d N_ch/dη distributions
over pseudorapidity are shown in Fig. <ref>.
The average multiplicities,
⟨ n_ch ( √(s), p_T^min ) ⟩,
resulting from fits of these distributions with the fourth-degree polynomial function
are presented in Table <ref>.
The average multiplicities from Table <ref> were used to calculate the horizontal
axes, via Eq. (<ref>),
for the correct comparison of primary charged-particle multiplicity distributions
and of the multiplicity dependence of the average transverse momentum in
Sec. <ref>, and for the KNO scaling study in Sec. <ref>.
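A schematic version of this procedure, assuming numpy is available and taking the published dN_ch/dη points as input arrays, is a fourth-degree polynomial fit integrated over |η| < 2.5, which then defines the scaled multiplicity z of Eq. (<ref>):

import numpy as np

def average_multiplicity(eta, dn_deta, eta_max=2.5, degree=4):
    # Fit (1/N_ev) dN_ch/deta with a polynomial and integrate it over |eta| < eta_max
    # to estimate the average total multiplicity <n_ch>.
    poly = np.poly1d(np.polyfit(eta, dn_deta, degree))
    integral = np.polyint(poly)                  # antiderivative
    return integral(eta_max) - integral(-eta_max)

def scaled_multiplicity(n_ch, mean_n_ch):
    # z = n_ch / <n_ch>, used to compare spectra at different sqrt(s) and pT_min.
    return np.asarray(n_ch, dtype=float) / mean_n_ch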
§.§.§ Energy dependence of the multiplicity density of the LHC experiments
The average total primary charged-particle multiplicity,
⟨ n_ch⟩,
is equal to the integral of the corresponding single-particle inclusive density in the η
interval considered.
The ⟨ n_ch⟩ is observed to rise with increasing CM energy
in hadron-hadron collisions
<cit.>.
The same behaviour is also observed in e^+ e^- collisions,
in deep-inelastic scattering <cit.>,
and in heavy ion collisions <cit.>.
The CMS measured average total primary charged-particle multiplicity for
|η| < 2.4 is presented in Table <ref>
and shown in Fig. <ref>(a), where the CMS data are
compared with experimental data obtained at lower energies
and various theoretical predictions.
Recent Regge-inspired models <cit.>
predict a power-like behaviour, among which only Ref. <cit.>
describes the highest energy data very well.
Parton saturation models (such as <cit.>)
predict a strong rise of the central rapidity plateau as well.
The Pythia 6 <cit.> generator and its fragmentation model tuned to CDF data
<cit.>, called Pythia D6T,
is used as a baseline model to simulate inelastic pp collisions.
At 7 a dedicated Pythia tune <cit.>
better describing the high multiplicities is used for correcting the data.
Alternative tunings that differ mainly in the modelling of MPI have also been considered
<cit.>.
PHOJET <cit.>
is used as an alternative event generator that differs mainly
in the underlying dynamical model for particle production.
Table <ref> gives an overview of the
average total primary charged-particle multiplicity
for the data and for the Pythia D6T tune, Pythia 8
and PHOJET models.
The Pythia D6T tune produces on average too few particles
per event at all energies.
PHOJET is consistent with the data within uncertainties for √(s) = 0.9 ,
but is not able to predict properly the average total multiplicity at higher energies.
Pythia 8 describes best the √(s) = 7 data,
but underestimates ⟨ n_ch⟩ systematically at all energies.
The CMS results at √(s) = 0.9 and 7 presented in
Table <ref> are in agreement within the error bars
with the ATLAS results at the same energies with p_T > 100
in Table <ref>.
The CM energy dependence of the pseudorapidity distribution at η = 0
is shown in Fig. <ref>(b), which includes data from various
experiments for NSD events in pp and p p̅ collisions.
The different experiments do not use identical event selection criteria,
they all include a large fraction of NSD events.
Particle production at η = 0 is expected to follow a power-law dependence,
d N_ch / dη|_η =0 ∝ s^Δ,
where Δ
is the Pomeron intercept <cit.> and the effective Pomeron intercept defined as
α_eff (0) = 1 + Δ
with Δ in the range 0.14 – 0.24 <cit.>.
The result of fitting the high-energy pp and p p̅ central-pseudorapidity
particle densities with this function is shown in Fig. <ref>(b).
The value of Δ = 0.23± 0.01 is obtained.
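Such values of Δ correspond to a power-law fit of the mid-rapidity densities as a function of s; a minimal sketch of such a fit, assuming scipy is available and with the measured densities passed in as arrays, is:

import numpy as np
from scipy.optimize import curve_fit

def fit_delta(sqrt_s_gev, dn_deta, dn_deta_err=None):
    # Fit dN_ch/deta|_{eta=0} = a * s**Delta, with s in GeV^2, and return
    # the fitted Delta and its uncertainty.
    s = np.asarray(sqrt_s_gev, dtype=float) ** 2
    popt, pcov = curve_fit(lambda s, a, delta: a * s ** delta,
                           s, dn_deta, sigma=dn_deta_err, p0=(0.7, 0.1))
    return popt[1], float(np.sqrt(pcov[1, 1]))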
In ALICE the definition for multiplicity density in pp collisions,
1/N_ev·d N_ch/ dη|_η =0,
is an integral of the data over the pseudorapidity range |η|< 0.5.
The results of the measurements of multiplicity density are shown in
Fig. <ref> and given in Table <ref>.
Results are given for three conventional event classes: inelastic (INEL) events,
non-single diffractive (NSD) events and
events with at least one charged particle in |η| < 1 (INEL>0).
The fits based on Eq. (<ref>) to the combination of
the ALICE data with data from the other LHC experiments and
from experiments at lower energies, shown in Fig. <ref>, yield
Δ = 0.102± 0.003 for INEL events,
Δ = 0.114 ± 0.003 for NDS events
and
Δ = 0.114 ± 0.002
for INEL>0 events.
These results are compared to Δ = 0.15 for central Pb–Pb collisions
<cit.>.
This is clear evidence that the charged-particle multiplicity density increases with energy
in Pb–Pb collisions faster than in p p collisions.
The fit results are shown in Fig. <ref>(a).
The results of the extrapolations to CM energies of 13, 13.5 and 14 are presented in Table <ref>.
The multiplicity densities ⟨d N_ch / dη⟩
measured in the INEL and INEL>0 events in the pseudorapidity range
|η|< 0.5 at √(s)= 13 are shown in Fig. <ref>(b)
<cit.> and are 5.31± 0.18
and 6.46± 0.19, respectively.
The multiplicity density for the INEL>0 events is also measured in |η|< 1
for direct comparison with the INEL>0 results of ALICE at lower energies
and is found to be 6.61± 0.20 <cit.>.
Figure <ref>(b) shows a compilation of results on the multiplicity density of
charged particles measured in |η|< 0.5 for the INEL and INEL>0 event classes at
different p p energies by ALICE
<cit.>,
CMS <cit.>, ACHM <cit.>,
UA5 <cit.> and PHOBOS <cit.>.
The energy dependence of
⟨d N_ch / dη⟩
is parametrised by the power law (<ref>) fitted to data.
By combining the data at lower energies with ALICE and CMS results at √(s) = 13 ,
it was obtained that Δ = 0.103 ± 0.002 for INEL events
and Δ = 0.111 ± 0.004 for INEL>0 events.
These fit results are in agreement within error bars with the
results obtained in Fig. <ref>(a).
The value Δ = 0.23 ± 0.01 obtained by CMS in Fig. <ref>(b)
is higher than the ALICE result Δ = 0.114 ± 0.003 in Fig. <ref>(a)
by 0.12± 0.01 for the NSD event class.
Note that a more complete data sample was used for the ALICE fit than for the CMS one.
The measurement of average multiplicity density at 13 by CMS <cit.>
for the pseudorapidity region |η| < 2.4 resulted in
d N_ch/ dη|_|η| <0.5
= 5.49±0.01 (stat)± 0.17 (syst)
for inelastic events, which is consistent with the ALICE extrapolation of
5.30 ± 0.24 in Table <ref>.
Over the LHC energy range from 0.9 to 14 , while the
CM energy increases by a factor of 15.5, extrapolation of the present data for
d N_ch/ dη|_|η| =0
shows an increase by a factor of
1.75 ± 0.03 for the INEL event class,
1.87 ± 0.03 for the NSD event class and
1.87 ± 0.01 for the INEL>0 event class.
The multiplicity increase is similar for the NSD and INEL>0 classes but slightly lower for the INEL class.
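These factors follow directly from the power-law parametrisation quoted above: using the fitted exponents,
d N_ch/ dη|_14 / d N_ch/ dη|_0.9 = (s_14 / s_0.9)^Δ = (14/0.9)^2Δ ≈ 15.5^0.204 ≈ 1.75 for the INEL class (Δ = 0.102),
and 15.5^0.228 ≈ 1.87 for Δ = 0.114, reproducing the NSD and INEL>0 values.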
The ALICE results at √(s) = 0.9, 7 and 8 and
extrapolation at √(s) = 13 for the average multiplicity density for
the NSD events in Table <ref> are in agreement
within uncertainties with the ATLAS results presented in Table <ref>
at √(s) =8 and 13 and in Table <ref>
at √(s) = 0.9 and 7 for inelastic events with
p_T > 100 and n_ch≥ 2.
The multiplicity pseudorapidity distributions and the charged-particle multiplicity
density at mid-rapidity (|η| < 0.2) measured at several √(s) points
were found to be well described by the Pythia 8 Monash and
EPOS models for three event selections.
For p_T > 100 at the highest energies, the predictions from
EPOS and Pythia 8 Monash match the data well.
For the predictions from Pythia 8 A2,
the match is not as good as was observed when measuring particles with p_T > 500 .
For p_T > 500 at the highest energies, the predictions from EPOS
and Pythia 8 A2 match the data well.
The energy dependence of the particle density
1/N_ev·d N_ch / dη|_η =0
is shown in Fig. <ref> for ATLAS, in
Fig. <ref>(b) for CMS–TOTEM and in Fig. <ref> for ALICE.
As discussed in Ref. <cit.>,
neglecting absorptive corrections given by the enhanced diagrams,
which mainly change (“renormalize”) the effective Pomeron intercept in Eq. (<ref>),
one can conclude that according to the Abramovsky-Gribov-Kancheli
<cit.>
(AGK) rules[The relation between the cross sections of subprocesses with
a different number of cut Pomerons within a given diagram with
n Pomerons is given by the AGK cutting rules.],
the plateau height in Eq. (<ref>)
is driven just by the one-Pomeron exchange with effective Δ∼ 0.2.
That is, the density of secondaries observed in the inclusive
process increases with increasing energy faster than the total cross section,
whose growth is tamed by the multi-Pomeron diagrams.
Indeed, as is seen in Fig. <ref>(b), in the interval of collider energies
d N_ch / dη =
1/σ_inel·dσ / dη∝ s^0.115
(i.e. dσ / dη∝ s^0.215),
while σ_inel∝ s^0.1.
§.§ Transverse momentum dependence of charged-particle multiplicity
§.§.§ ATLAS distributions of multiplicity over p_T
The transverse momentum distributions of charged-particle measured by ATLAS
are shown in Figs. <ref> – <ref> at
the CM energies √(s)= 0.9, 2.36, 7, 8, and 13 .
Figure <ref>(a) shows the charged-particle transverse momentum distribution
at √(s)= 13 for p_T >100 <cit.>.
The EPOS describes the data well for p_T >300 .
For lower p_T the data are underestimated by up to 15%.
The other generators show similar mis-modelling at low momenta but with larger discrepancies up to 35%
for QGSJET-II.
MC models mostly overestimate the charged-particle multiplicity for
p_T >400 ; Pythia 8 A2 yields overestimated
results only in the intermediate p_T region and slightly underestimates the data
for p_T >800 .
Figure <ref>(b) shows the charged-particle transverse momentum distribution
at √(s)= 13 for p_T >500 <cit.>.
EPOS describes the data well over the entire p_T spectrum.
The Pythia 8 tunes describe the data reasonably well, but they are slightly above
the data in the high-p_T region.
QGSJET-II gives a poor prediction over the entire spectrum, overshooting the
data in the low-p_T region and undershooting it in the high-p_T region.
Figures <ref> show charged-particle multiplicities as a function of the transverse momentum,
see Eq. (<ref>), for various PS at the CM energy
√(s)= 8 <cit.>.
No model is fully consistent with the distributions.
Above 1 Pythia 8 Monash predictions agree well with the data.
This model is the only one that gives a fair description of the data corresponding to the highest multiplicity
threshold with n_ch≥ 50 and p_T >500 ,
where all other models show large deviations as p_T increases.
The EPOS predictions give the best description of the data corresponding to the PS
n_ch≥ 2 and p_T >100 ,
particularly at transverse momenta below 1 ,
while the other models underestimate the data at the lowest p_T values.
The EPOS provides fair predictions for the PS
n_ch≥ 1; 6 and p_T >500 ,
but for the higher multiplicity thresholds, n_ch≥ 20; 50,
deviations from the data are seen at high transverse momenta.
Pythia 8 A2 gives fair descriptions of the data below 6 ,
yet shows deviations of up to 30% around p_T∼ 10 .
In all measured PS the QGSJET-II approach shows large disagreements with the data as
p_T increases.
Figures <ref>, <ref>(a) and <ref> show
the charged-particle multiplicities as a function of the transverse momentum, Eq. (<ref>).
Figures <ref>(b), <ref>(a) and <ref>(b) show three
CM energies considered in the PS region
n_ch≥ 1, p_T >500 and |η|< 2.5.
The observed p_T spectrum is not described by any of the models over the whole range.
The region that is most difficult for the models to describe is the region above 1 .
Figures <ref>(a) and <ref>(a) show the charged-particle multiplicities in the most inclusive
PS region n_ch≥ 2, p_T >100 and |η|< 2.5.
At √(s) = 0.9 PHOJET
describes the data best over the whole range even though the agreement is still not excellent.
The other models tend to under-predict the number of low-p_T particles,
while at higher p_T the models vary widely.
At √(s) = 7 the effect at low p_T is more pronounced, whereas at
high p_T the agreement of Pythia 8 and PHOJET
with the data is quite good.
The AMBT1 and MC09 tunes of Pythia 6
predict too many particles at higher p_T.
Figures <ref>(c) and <ref>(c)
show the charged-particle multiplicities with the smallest contribution from diffractive events.
This distribution carried the most weight in the Pythia 6 AMBT1 tune.
Considerable improvement in the agreement with the data is seen in going from the older
Pythia 6 MC09 tune to AMBT1,
but the parameters varied in this tune were not sufficient to describe the full spectrum.
The charged-particle multiplicities as a function of the transverse momentum
measured in pp collisions at √(s) = 2.76 and
in Pb+Pb collisions at √(s_NN) = 2.76 are shown in
Fig. <ref>(b) for the pseudorapidity range |η| <2
and for five centrality intervals in Pb+Pb collisions:
0–5%, 10–20%, 30–40%, 50–60% and 60–80% in the 0.5 < p_T < 150 .
This figure shows the Pb + Pb spectra divided by the ⟨ T_AA⟩
(which is estimated as the number of nucleon–nucleon collisions over their cross section)
of the corresponding centrality interval compared with the charged-particle production cross sections measured
in pp collisions at √(s) = 2.76 .
The charged-particle multiplicities as a function of the transverse momentum combine
the measurement of the soft regime at low p_T with the hard regime at
high p_T which can be calculated in pQCD.
While early measurements could focus only on the regime up to a few , distributions
were later measured up to ≈ 200 as presented in
Fig. <ref>(b) <cit.>
and in pp collisions at √(s) = 5.02 <cit.>.
The similar result of the CMS is presented in Ref. <cit.>.
For p_T > 100 at the highest energies EPOS
describes the data well for p_T > 300 , while for
p_T < 300 , the data are underestimated by up to 15%.
MCs show similar mis-modelling at low momentum but with larger discrepancies up to 35% for
QGSJET-II.
MCs mostly overestimate the charged-particle multiplicity for p_T > 400 .
Pythia 8 A2 overestimates the data only in the intermediate p_T
region and slightly underestimates them for p_T > 800 .
For p_T > 500 at the highest energies the
measurement spans 10 orders of magnitude; EPOS and
Pythia 8 Monash give remarkably good predictions.
Contrary to the ‘old’ Regge theory where it was assumed that all transverse momenta are limited,
in QCD the k_T distributions of jets have a long k_T tail
(dσ / d k_T^2 ∝α_s^2 ( k_T^2 ) / k_T^4
at large k_T and very large energy s ≫ k_T^2).
Examples of the p_T distributions of primary charged particles are shown in
Figs. <ref>, <ref>(a) and <ref>(a).
In Fig. <ref> for the charged-particle multiplicity,
Pythia 8 A3 is comparable to the data
at √(s) = 0.9, 2.36, 7, 8 and 13 and to the other tunes, Pythia 8 A2 and Monash, except at √(s) = 0.9 .
At √(s) = 13 ,
Pythia 8 A2
describes the low-multiplicity part better than Pythia 8 A3
in the range of 40 < n_ch < 60 charged particles.
The shape of the distribution predicted by the new tune is consistent across the centre-of-mass energies.
Compared to Pythia 8 A2, Pythia 8 A3
provides a slightly worse description of the charged particle multiplicity distribution,
which coincides with
the
improved charged-particle
p_T distribution that performs similarly to
Pythia 8 Monash, as shown by Fig. <ref>.
In all cases, √(s) = 8 results are very similar to those at √(s) = 7 .
The comparison of the primary charged-particle multiplicities as a function of the transverse momentum
for |η| < 2.5 measured at the CM energies from 0.9 to 13
by the ATLAS <cit.>
are presented for events with PS
n_ch≥ 2, p_T >100 in Fig. <ref>(a)
and with
n_ch≥ 1, p_T >500 in Fig. <ref>(b).
Figures <ref>(a) and (b) show the primary charged-particle
multiplicity distributions as a function of the transverse momentum.
As expected the distributions acquire higher values at higher collision energies
and an increase by ≈ 40% and ≈ 10%
is observed in the region of p_T < 1 as the
energy increases from 0.9 to 13
for p_T >100 and p_T >500 , respectively.
The results at 7 and 8 are in agreement within error bars.
In the transverse momentum region p_T > 5 , the particle multiplicity increases by ≈ 40% for both particle p_T thresholds of 100
and 500 when the energy rises from 7 to 13 .
Charged-particle multiplicities p_T distributions were compared
using “z-scaling”, see details in Refs. <cit.>.
The energy independence of the scaling function for some reactions was observed.
The concept of z-scaling is considered to reflect the general features of
high-p_T particle production in hadron-hadron
and hadron-nucleus collisions.
Violation of z-scaling is suggested to be considered as a signature of new physics.
§.§.§ Distributions of multiplicity over p_T of the LHC experiments
The CMS results for primary charged-particle multiplicities as a function of
the transverse momentum, p_T, and
a leading transverse momentum,
p_T, leading, for events for |η| < 2.4 at
the CM energy √(s)= 13 with n_ch≥ 1 and
p_T >500 <cit.>
are shown in Fig. <ref>.
The measured distributions are presented for three different event data sets: an inelastic (INEL) sample,
an NSD-enhanced sample, and an SD-enhanced sample.
The p_T distributions (i. e. p_T and
p_T, leading)
of the SD-enhanced event sample fall very steeply for large p_T values.
The ALICE measurements of primary charged-particle transverse momentum spectra in
pp collisions at √(s) = 0.9, 2.76 and 7 were presented in
Ref. <cit.>.
The measurement is performed in the pseudorapidity range |η| < 0.8
for particles with p_T > 150 .
The differential cross section for the INEL pp collisions as a function of p_T
measured by ALICE is shown in Fig. <ref>(a)
for three measured collision energies <cit.>.
At high p_T a clear evolution of the slope from
√(s) = 0.9 to 7 can be observed.
The next-to-Leading-Order pQCD (NLO-pQCD) calculation <cit.>
for p_T > 3 is compared to the spectra.
The calculation shows a similar evolution of the high-p_T
dependence with √(s) but over-predicts the data by a factor of two
<cit.>.
The low systematic uncertainties demonstrate the accuracy of the measurements for all energies over the full
p_T range.
Though the p_T dependence of the cross section for a single √(s)
is not well described by NLO-pQCD, the relative dependence on
p_T of cross sections of two collision energies is described better.
Figure <ref>(b) shows the ratio between the differential cross section in
INEL pp collisions at
√(s) = 2.76 to 7 ,
√(s) = 0.9 to 2.76
and
√(s) = 0.9 to 7 as a function of p_T in comparison to the same ratio calculated with NLO-pQCD.
The total p_T-dependent systematic uncertainties on the ratios are
evaluated with allowance for correlated contributions, and amount to
8.1–9.8%
for
0.9 /2.76 ,
7.8–9.9%
for
0.9 /7 ,
and
7.9–9.9%
for
2.76 /7 .
The corresponding normalisation uncertainties amount to
+5.4%/-4.4%,
+6.2%/-5.4%,
and
± 4.1%,
and are calculated assuming that the normalisation uncertainties on the
p_T spectra are uncorrelated.
In all ratios good agreement between the data and the NLO-pQCD calculations is found,
which can be seen in the double ratio of data and NLO-pQCD for the three energy ratios in
the lower panel of Fig. <ref>(b).
§.§ Charged-particle multiplicity dependence
§.§.§ ATLAS multiplicity distributions
The charged-particle multiplicity distributions are shown in Figs. <ref> – <ref> at
the CM energies √(s)= 0.9, 2.36, 7, 8, and 13 .
Figures <ref>(a) and (b) show the charged-particle multiplicity distributions at
the CM energy √(s)= 13 for events with
n_ch≥ 2, p_T >100
<cit.> and
n_ch≥ 1, p_T >500
<cit.>, respectively.
In Fig. <ref>(a) for events with
n_ch≥ 2, p_T >100
at √(s)= 13
the form of the measured distribution is reproduced reasonably by all models.
Pythia 8 A2 describes the data well for 30 < n_ch < 80
but underestimates them for higher n_ch.
For this multiplicity region, Pythia 8 Monash, EPOS and
QGSJET-II underestimate the data by up to 20%.
Pythia 8 Monash and EPOS
overestimate the data for the multiplicity region n_ch > 80 and drop below
the measurement in the high-n_ch region,
starting from n_ch > 130 and n_ch > 200, respectively.
QGSJET-II significantly overestimates the data for the multiplicity region
n_ch > 100.
Figure <ref> (b) shows the charged-particle multiplicity distribution for events with
n_ch≥ 1, p_T >500 at √(s)= 13 .
The high-n_ch region has significant contributions from events
with numerous MPI.
Pythia 8 A2 describes well the data in the multiplicity region
n_ch < 50 but predicts too few events at larger n_ch.
Pythia 8 Monash, EPOS and
QGSJET-II describe the data reasonably well in the multiplicity region
n_ch < 30 but predict too many events in the mid-n_ch region,
with Pythia 8 Monash and EPOS
predicting too few events in the region n_ch > 100 while
QGSJET-II continues to be above the data.
In Figs. <ref>(a) and (b) the distributions of primary charged-particle multiplicity
are shown for the minimum transverse momentum thresholds of
100 and 500 at √(s)= 8 <cit.>, respectively.
For the lower threshold, the distribution rises until n_ch∼ 9 before falling steeply.
For the higher threshold the distribution peaks at n_ch∼ 2.
The models are consistent with the data although the EPOS model provides a fair description.
The two Pythia 8 calculations predict distribution peaks which are at higher n_ch
than those observed and underestimate the event yield at low and high multiplicities.
The
QGSJET-II tune overestimates the data at low and high n_ch
values and underestimates the data for intermediate n_ch values.
In Figs. <ref>(a) and <ref>(a) the distributions of primary charged-particle multiplicity
are shown for the most inclusive PS region
n_ch≥ 2, p_T >100 and |η| < 2.5
at the CM energies √(s) = 7 and √(s) = 0.9 , respectively.
Here the variations between models at both low n_ch and
high n_ch are increased and no model predicts the observed spectra.
Due to the normalisation, 1 / N_ev,
a deviation observed in one region has to be compensated by a deviation in the opposite direction elsewhere.
Figures <ref>(b), <ref> and <ref>(b)
show the primary charged-particle multiplicity distributions for
n_ch≥ 1, p_T >500 and |η| < 2.5
at the CM energies √(s) = 7 , 2.36 and 0.9 , respectively.
At low n_ch, all models predict more events than observed in
the data, which is compensated for by an under-prediction in the tails of the distributions.
The predictions of PHOJET at √(s) = 0.9
model the data reasonably well, but at √(s) = 2.36 and
√(s) = 7 they do not model the observed spectrum so well.
The Pythia 6 AMBT1 tune seems to provide the best agreement with
the data.
Figures <ref>(c) and <ref>(c) show the distribution for the diffraction-reduced
PS region for events with
n_ch > 6, p_T >500 .
The distributions are very similar to those in Figs. <ref>(c)
and <ref>(c) with a cut at n_ch > 6; only the normalisation is different.
In Fig. <ref>, for the charged-particle multiplicity,
ATLAS Pythia 8 A3 is comparable to other tunes.
At √(s) = 13 Pythia 8 A2
describes the low multiplicity part better than Pythia 8 A3
in the range of 40–60 charged particles.
The shape of the distribution predicted by the Pythia 8 A3 tune
is consistent across the center-of-mass energies.
Compared to Pythia 8 A2 tune,
Pythia 8 A3 tune
provides a slightly worse description of the charged particle multiplicity distribution,
which coincides with an improved charged particle
p_T distribution that performs similarly to
Pythia 8 Monash, as shown by Fig. <ref>.
In all cases, √(s) = 8 results are very similar to those at √(s) = 7 .
For a correct comparison of the charged-particle multiplicity and average transverse momentum distributions
at different energies or in different kinematic regions, the scaled multiplicity z,
usually called the KNO variable, see Eq. (<ref>), is introduced.
For example, comparison of the results for different kinematic regions,
with two p_T^min thresholds, was presented in
Ref. <cit.>.
The comparison of the primary charged-particle multiplicities as a function of the scaled multiplicity
z or the KNO scale for events with
n_ch≥ 2 and p_T >100 ;
n_ch≥ 1 and p_T >500
for |η| < 2.5
measured by the ATLAS at
√(s) from 0.9 to 13
<cit.>
are presented in Fig. <ref> and Fig. <ref>
<cit.>, respectively.
For these figures the multiplicity axis was compressed by the factor
⟨ n_ch ( √(s), p_T^min ) ⟩.
The KNO scale is the same and therefore it is the correct scale for comparing
distributions at different √(s) or distributions in different PS regions.
The scaled multiplicity regions are up to 7.5 of the average total multiplicity
for p_T >100 and up to 10.5 of the average total multiplicity
for p_T >500 as shown in
Figs. <ref>(a) and <ref>(a), respectively.
In Table <ref> the relative uncertainty,
δ⟨ n_ch⟩ / ⟨ n_ch⟩,
is presented for average total multiplicities.
Relative uncertainties are small and equal to
0.32–0.66% for p_T >100 and
0.24–0.46% for p_T >500 ,
except for the result at √(s)=2.36 ,
which was measured with lower accuracy.
In the bottom panels in Figs. <ref> and <ref>
ratios of the charged-particle distributions at 0.9 – 8
to the distribution at 13 are shown.
These ratios, and their uncertainties, are obtained by interpolation.
For the interpolation procedure, the Interpolator method
of the ROOT statistical analysis framework <cit.> was used.
In Figs. <ref> – <ref>,
the gray curve and the band of the uncertainties are the result of the interpolation of the
distribution at 13 .
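The ratio construction can be illustrated with a minimal sketch in which a cubic spline (here from SciPy, used only as a stand-in for the ROOT Interpolator mentioned above) provides the 13 TeV reference at the z values of a lower-energy measurement; all numerical shapes below are placeholders.

```python
# Sketch: ratio of a lower-energy KNO distribution to the interpolated 13 TeV
# reference. SciPy's CubicSpline stands in for the ROOT Interpolator used in
# the original analysis; the distribution shapes are placeholders.
import numpy as np
from scipy.interpolate import CubicSpline

z_13 = np.linspace(0.1, 7.0, 40)                      # z bins of the 13 TeV distribution
psi_13 = (1.0 + 2.0 * z_13) * np.exp(-z_13)           # placeholder reference shape

spline_13 = CubicSpline(z_13, psi_13)                 # interpolated reference

z_low = np.linspace(0.2, 6.0, 25)                     # z bins of a lower-energy measurement
psi_low = (1.0 + 2.2 * z_low) * np.exp(-1.1 * z_low)  # placeholder shape

ratio = psi_low / spline_13(z_low)                    # bottom-panel ratio to 13 TeV
```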
Figures <ref> and <ref>
show that the primary charged-particle multiplicity distributions decrease
by a factor of ≈ 3 at the maximum of the functions, at z ≈ 0.7,
as the collision energy increases from 0.9 to 13 .
The results for √(s) = 7, 8 and 13 TeV and z ≤ 3
are presented in Fig. <ref>(b) for p_T >100
and in Fig. <ref>(b) for p_T >500 .
The distributions at √(s) = 7 and 8 are in
agreement within error bars except for the region 0.5 < z < 1.5.
The multiplicity distribution at 8 is ≈ 20% larger
than at 13 for the region z < 3 in both cases.
For p_T > 100 and p_T > 500 at
the highest energies the form of the measured distribution is reproduced reasonably by all models.
Pythia 8 A2 describes the data well for middle n_ch
but underestimates them at higher n_ch.
For middle n_ch Pythia 8 Monash, EPOS,
QGSJET-II underestimate the data by up to 10–20%.
Pythia 8 Monash, EPOS overestimate the data for higher n_ch and
drop below the measurement in the very high-n_ch region.
QGSJET-II overestimates the data significantly.
The high-n_ch region has significant contributions from events with numerous MPI.
As discussed in Ref. <cit.>, negative binomial distributions (NBDs)
were successful in describing P (n_ch) at SPS energies
<cit.> but failed at higher energies.
Two-component approaches using two <cit.> or three
<cit.> NBDs could not survive up to LHC energies (see e.g. <cit.>).
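For reference, the standard NBD parametrisation used in such fits (quoted here for completeness, not taken from the cited analyses) is
P (n_ch) = Γ(n_ch + k) / ( Γ(k) n_ch! ) · ( ⟨ n_ch⟩ / (⟨ n_ch⟩ + k) )^n_ch · ( k / (⟨ n_ch⟩ + k) )^k ,
where ⟨ n_ch⟩ is the mean multiplicity and the parameter k controls the width of the distribution; multi-component approaches sum two or three such terms with relative weights.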
Multiplicity distributions are a very sensitive probe of multiple parton interactions
as collisions with large multiplicities are mostly composed of several parton interactions.
Event generators fail to describe the tail of the multiplicity distribution without considering MPI
and the careful tuning of the related parameters.
§.§.§ Multiplicity distributions of the LHC experiments
The CMS results for primary charged-particle multiplicities as a function of
the multiplicity for events with |η| < 2.4 at the CM energy
√(s)= 13 with n_ch≥ 1 and p_T >500 <cit.> are shown in Fig. <ref>.
The measured distributions are presented for two different event data sets: an INEL sample
and an NSD-enhanced sample.
The charged particle multiplicity distribution of the NSD-enhanced event sample shows
a depletion of low-n_ch events and an increase of high-n_ch
multiplicity events compared to that of the inelastic sample.
The NSD charged hadron multiplicity distributions are measured
in increasing ranges of pseudorapidity from |η| < 0.5 to |η| < 2.4.
The fully corrected results at √(s) = 0.9, 2.36 and 7
are compared in Fig. <ref> with the
measurements in the same pseudorapidity ranges
performed by the UA5 <cit.>
and ALICE <cit.>.
The CMS measurements were also compared with the results obtained from the
CMS cross-check analysis of the data at √(s) = 0.9 and 7 using a tracklet-based tracking algorithm as in Ref. <cit.>.
With a reconstruction efficiency exceeding 90% for p_T > 50 ,
the latter provided a cross-check of the extrapolation for tracks below
p_T < 100 , including the use of the data without the
magnetic field at √(s) = 7 .
All measurements agree well within their total uncertainties.
In the largest pseudorapidity interval |η| < 2.4,
there is a change of slope in P_n for n_ch > 20,
indicating a multicomponent structure, as was discussed in
Refs. <cit.> in terms of multiple-soft-Pomeron exchanges.
This feature becomes more pronounced with increasing CM energies, notably at √(s) = 7 .
The Pythia 6 <cit.> generator and its fragmentation model tuned to CDF data
<cit.> hereafter called
Pythia D6T,
is used as a baseline model to simulate inelastic pp collisions.
However, at 7 a dedicated Pythia tune <cit.>
describing better the high multiplicities is used for correcting the data.
Alternative tunings that differ mainly in the modelling of
multiple parton interactions have also been considered
<cit.>.
PHOJET
<cit.>
is used as an alternative event generator that differs mainly
in the underlying dynamical model for particle production.
An extensive range of tunes <cit.>
based on the Pythia 6 fragmentation model have been developed.
They differ mainly in their parametrisation of the multiple-parton interaction model.
Some reproduce the charged hadron multiplicities better than others,
but none is able to give a good description simultaneously at all √(s) and in all pseudorapidity ranges.
For clarity, only the baseline tune Pythia D6T <cit.>
is shown in comparison with other models having a different physical description of soft-particle
production such as PHOJET <cit.>
and the fragmentation model of Pythia 8 <cit.>.
A comparison of the CMS measurements with three classes of models is shown in
Fig. <ref> for all charged hadrons and for those with p_T > 500 .
Pythia D6T drastically underestimates the multiplicity at all measured energies
but improves when p_T > 500 is required.
Pythia 8 is the only model that gives a reasonable description
of the multiplicity distribution at all energies, but tends to overestimate the multiplicity at
√(s) = 7 when p_T > 500 is required.
PHOJET produces too few charged hadrons overall but gives a good description of the
average transverse momentum ⟨ p_T⟩ at the fixed multiplicity
n_ch, as illustrated in Fig. <ref>.
The ALICE results of the study of the multiplicity (N_ch) distributions, transverse momentum
spectra and KNO scaling of inclusive primary charged particles in the kinematic range of
|η| < 0.8 and 0.15 < p_T < 10 for
pp, p–Pb, Xe–Xe and Pb–Pb collisions
at CM energies per nucleon pair ranging from
√( s_NN) = 2.76 up to 13 were published in
Ref. <cit.>.
The N_ch distributions for pp collisions at the different centre-of-mass energies
√(s) = 2.36, 5.02, 7, 8 and 13 for the kinematic region
|η| < 0.8 and 0.15 < p_T < 10 are shown in Fig. <ref>(a).
These distributions reach a maximum around N_ch≈ 2
and then fall steeply off over several orders of magnitude.
The slope of the decay with N_ch decreases with increasing collision energy.
This can be attributed to the larger p_T
in the initial hard scattering which results in larger multiplicities.
Figure <ref>(b)
compares the measured multiplicity distributions for pp collisions with predictions from
Pythia 8 <cit.> (solid lines) and
EPOS LHC <cit.> (dashed lines).
The Pythia 8.306 event generator is used with the
Monash-2013 tune <cit.> for pp collisions.
The overall shapes of the multiplicity distribution shown in Fig. <ref>(b)
are better described by EPOS LHC, while
Pythia 8 falls sharply off above N_ch/ ⟨ N_ch⟩≈ 4.
Both models agree with the experimental distributions within 25%
with larger deviations at highest multiplicities.
§.§ Average transverse momentum multiplicity dependence
§.§.§ ATLAS average transverse momentum distributions
The charged-particle average transverse momentum distributions
are shown in Figs. <ref> – <ref>
at the CM energies √(s)= 0.9, 2.36, 7, 8, and 13 .
The average transverse momentum versus the primary charged-particle multiplicity is shown in
Fig. <ref> at √(s)= 13 for
n_ch≥ 2, p_T >100
<cit.> and n_ch≥ 1, p_T >500
<cit.>, respectively.
For p_T >100 in Fig. <ref>(a)
it increases towards higher n_ch,
as modelled by a colour reconnection mechanism in Pythia 8
and by the hydrodynamical evolution model in EPOS.
The
QGSJET-II generator, which has no model for colour coherence effects, describes the data poorly.
For low n_ch,
Pythia 8 A2 and EPOS underestimate the data,
where Pythia 8 Monash agrees within the uncertainties.
For higher n_ch all generators overestimate the data, but for n_ch > 40,
there is a constant offset for both Pythia 8 tunes, which describe the data to within 10%.
EPOS describes the data reasonably well and to within 2%.
Figure <ref>(b) for
n_ch≥ 1, p_T >500
shows the mean transverse momentum versus the charged-particle multiplicity.
The ⟨ p_T⟩ rises with n_ch, from 0.8 to 1.2 .
This increase is expected due to colour coherence effects being important in dense parton environments and
is modelled by the colour reconnection mechanism in Pythia 8 or by
the hydrodynamical evolution model used in EPOS.
If the high-n_ch region is assumed to be dominated by events with numerous MPI,
without colour coherence effects the ⟨ p_T⟩
is approximately independent of n_ch.
Inclusion of
colour coherence effects leads to fewer additional charged particles produced with every
additional MPI, with an equally large p_T to be shared among the produced hadrons
<cit.>.
EPOS predicts a slightly lower ⟨ p_T⟩
but describes the dependence on n_ch very well.
The Pythia 8 tunes predict a steeper rise of
⟨ p_T⟩
with n_ch than the data, predicting lower values in the low-n_ch
region and higher values in the high-n_ch region.
QGSJET-II
predicts a ⟨ p_T⟩ of ∼ 1 ,
with very little dependence on n_ch; this is expected
as it contains no model for colour coherence effects.
Similar plots as for 13 are also shown for 8 in Fig. <ref>
for transverse momentum thresholds of 100 and 500 , respectively.
The average p_T rises with multiplicity although the
rise becomes progressively less steep as the multiplicity increases.
This is expected due to colour coherence effects in dense parton environments, which are modelled
by a colour reconnection mechanism in Pythia 8 or by the hydrodynamical evolution model
used in EPOS.
It is assumed that numerous MPI
dominate the high-multiplicity events, and that colour coherence effects thereby lead to fewer additional
charged particles produced with every additional MPI, which share a higher average p_T.
The EPOS and Pythia 8 models provide a fair description of the data.
The QGSJET-II
model fails to predict the mean transverse momentum over the entire multiplicity range,
as it does not simulate colour coherence effects and therefore shows very little dependence on the multiplicity.
Figures <ref> and <ref> show the results for events at the
CM energies √(s)= 7 and √(s)= 0.9 for
n_ch≥ 2, p_T >100
and
n_ch≥ 1, p_T >500 ,
respectively.
Globally one can say that
at √(s)= 0.9 the slope versus n_ch
for high values of n_ch seems to be well described by most
models, but the absolute value is best modelled by
Pythia 6 DW.
At the highest CM energy (8 and 13 ) above multiplicity of 20
the models vary widely both in slope and in absolute value;
at low values of n_ch none of the models describe the data very well.
In the more inclusive PS region,
Figs.<ref>(a) and <ref>(a),
the models vary widely, especially at √(s)= 7 .
The measurement of ⟨ p_T⟩ as a function of
the charged multiplicity at √(s)= 2.36
is not shown because different track reconstruction methods are used for determining
p_T and multiplicity distributions.
In Fig. <ref>, which shows the correlation of the mean transverse momentum,
⟨ p_T⟩, with the charged-particle multiplicity
<cit.>,
the choice of lower colour reconnection strength led to slight improvement over
Pythia 8 A2.
Although √(s) = 2.36 <cit.>
and √(s) = 8 charged particle distributions
were not used in tuning, comparisons are made with those distributions for completeness.
In Figs. <ref>, <ref>, <ref> and <ref>
distributions at √(s) = 7 and √(s) = 13
predicted by Pythia 8 A3, compared to
Pythia 8 A2, show a broadly comparable, or better, level of agreement.
Pythia 8 A2 demonstrates that an acceptable description of data
can be achieved by using the
DL model for diffraction and can be viewed
as a possible starting point for further systematic studies of soft-QCD tunes.
The results of Pythia 8 A3 provide good reasons to believe that an improved and more
reliable simulation of pile-up overlay can be obtained.
The comparison of the primary charged-particle average transverse momentum,
⟨ p_T⟩, as a function of the scaled multiplicity
z for events with
n_ch≥ 2 and p_T >100 ;
n_ch≥ 1 and p_T >500
measured for |η| < 2.5 at the CM energies
from 0.9 to 13 by ATLAS
<cit.>
is presented in Fig. <ref> <cit.>.
The ⟨ p_T⟩ distribution as a function of z acquires
a higher value at higher collision energies.
The values of the ⟨ p_T⟩ distributions increase by 18% and 13%
for z > 1 as the energy increases from 0.9 to 13 for
p_T >100 and p_T >500 , respectively.
The results at 7 and 8 are in agreement within error bars.
The values of the ⟨ p_T⟩ distributions increase by
≈ 3% for p_T >100
and by ≈ 2.5% for p_T >500 with the increase in energy from 8 to 13 for z > 0.5.
The ratio of ⟨ p_T⟩ distributions
for 8 to 13 are ≈ 6 times smaller than the ratio for 0.9 to 13 .
For p_T > 100 and p_T > 500 at
the highest energies distributions increase towards higher n_ch, as modelled by
CR mechanism in Pythia 8 and by the hydrodynamical evolution model in
EPOS.
The QGSJET-II generator describes the data poorly.
For low n_ch, Pythia 8 A2, EPOS
underestimate the data and for higher n_ch all generators overestimate the data.
EPOS describes the data reasonably well and to within 2%.
As discussed in Ref. <cit.>,
the ⟨ p_T (n) ⟩ of primary charged particles
produced via jet fragmentation slowly increases with collision energy, as shown in
Fig. <ref>.
This is caused by the stronger absorption (at larger √(s)) of the gluons with a smaller
k_T (σ^abs∝ 1 / k_T^2).
The growth of ⟨ p_T⟩ with multiplicity can be explained by the
fact that events with larger n_ch correspond to a smaller impact parameter, b,
where the absorption of the low k_T component is stronger, and
larger multiplicity can be originated by the events with jets/minijets with higher p_T.
Since ⟨ p_T⟩ of primary charged particles grows with √(s),
the increase with √(s) of transverse energy flow is a bit faster than that of the particle density.
§.§.§ Average transverse momentum distributions of the LHC experiments
Figure <ref> (top) show a CMS comparison of the
average transverse momentum, ⟨ p_T⟩,
as a function of the charge-particle multiplicity, n_ch,
for the inclusive pseudorapidity region |η| < 2.4
with prediction of the Pythia D6T tune,
the Pythia 8 and PHOJET models at √(s) = 0.9, 2.36 and 7
<cit.>.
In Fig. <ref> (bottom) the ratios of the higher-energy data to the fit at
√(s) = 0.9 indicate the approximate energy independence of
⟨ p_T⟩ at fixed n_ch.
These results are in disagreement with the ATLAS results presented in
Fig. <ref>, where a ratio depends on the multiplicity.
The ATLAS ratio of ⟨ p_T⟩ distributions for
7 to 0.9 is ≈ 1.18
for z ≳ 2 as shown in Fig. <ref>(a).
According to CMS, the same ratio shown in Fig. <ref>
is ≈ 1.05 for n_ch≳ 30 or z ≳ 1,
because ⟨ n_ch⟩ = 30.4 at 7 in Table <ref>.
The deviation of this ratio from unity is thus ≈ 3.5 times smaller than for ATLAS.
Among the three classes of models, Pythia 8
gives the best overall description of the multiplicity distribution and the dependence of the
average transverse momentum on n_ch.
Inspired by <cit.>, the multiplicity dependence of ⟨ p_T (n_ch ) ⟩ was fitted with a first-degree polynomial in √( n_ch)
for n_ch > 1.5 at each energy, yielding a good description which is valid at all three energies.
The ratios of the data obtained at √(s) = 7 and √(s) = 2.36
with respect to the data at √(s) = 0.9 show that the rise of the average
transverse momentum with the multiplicity weakly depends on energy.
The average charged-particle transverse momenta
for pp collisions at the different centre-of-mass energies
√(s) = 2.36, 5.02, 7, 8 and 13
for the kinematic region
|η| < 0.8 and 0.15 < p_T < 10 were obtained by the ALICE experiment
<cit.>
and are presented
in Fig. <ref>.
In
Fig. <ref>(a)
the average charged-particle transverse momentum
⟨ p_T⟩ spectra
and
in Fig. <ref>(b)
the
⟨ p_T⟩ spectra
divided by their respective multiplicity-integrated values,
⟨ p_T⟩_incl,
as a function of relative multiplicity
N_ch /⟨ N_ch⟩,
same as the scale variable z,
are shown.
The value of
⟨ p_T⟩_incl
for pp collisions
increases from
6.05 ± 0.17 at √(s) = 2.76
to
9.48 ± 0.07 at √(s) =13 (see Table 2 in <cit.>).
The values for each collision system align almost perfectly for the
⟨ p_T⟩ / ⟨ p_T⟩_incl.
In pp collisions, the overall shapes of the ⟨ p_T⟩
distributions are shown in Fig. <ref>(c)
in comparison with predictions from Pythia 8 <cit.> (solid lines)
and EPOS LHC <cit.> (dashed lines).
Pythia 8 underpredicts the experimental data on
⟨ p_T⟩
at the lowest values of
N_ch
by up to 4%.
The N_ch dependent
⟨ p_T⟩
values produced by
Pythia 8
increase faster than the measurements with an almost linear dependence up to
N_ch≈ 20,
after which the ratio shows a flat multiplicity dependence with an offset from unity varying from
0.5% at √(s) = 5.02 up to 4% at the highest CM energy.
EPOS LHC is further off at low multiplicities by up to 5%
and increases more slowly than the measurements, underestimating them by up to 6% around
N_ch≈ 9.
At higher multiplicities, the increase is faster with a linearly rising ratio up to
N_ch≈ 20 - 30,
reaching a plateau which describes the measurements within ± 2%.
§ KNO SCALING
§.§ Study of the KNO scaling using the ATLAS results
Deviation from the KNO scaling was already observed long ago at the ISR energies
in pp collisions at √(s) from 0.0304 to 0.0622 ,
in the full PS, for inelastic events
<cit.>.
For hadron-hadron collisions, the approximate KNO scaling holds up to the ISR energies
<cit.>.
On the other hand, for NSD collisions, scaling was still found to be present
<cit.>, suggesting that diffractive processes might also play a role in KNO scaling violations.
In e^+ e^- collisions, at √(s) from 0.005 to 0.034 ,
the KNO scaling was found to hold within ± 20% <cit.>.
Clear scaling violations become manifested above √(s)≈ 0.2
both for the multiplicity distributions in full PS and in central pseudorapidity ranges
<cit.>.
In p p̅ collisions at the CERN collider at √(s) = 0.2, 0.546 and 0.9 ,
the KNO scaling was found to be violated for NSD collisions in full PS
<cit.>.
Nevertheless, for NSD collisions, in limited central pseudorapidity intervals, the
KNO scaling was still found to hold up to 0.9 , and at √(s) = 0.546 the
KNO scaling was found to hold in the pseudorapidity interval |η| < 3.5
<cit.>.
In p p̅ collisions, and for large rapidity ranges,
the UA5 experiment was the first to observe a larger-than-expected high-multiplicity tail and a change of slope
<cit.>,
which was interpreted as evidence for a multi-component structure of the final states
<cit.>.
In NSD pp collisions at the LHC, at √(s) = 2.36 and 7
and in |η| < 0.5, ALICE
<cit.>
and CMS <cit.> observed no significant deviation from the KNO scaling.
On the other hand, the CMS observation of strong KNO scaling violations at √(s) = 7 ,
as well as a change of slope in P_n, confirms the earlier measurements.
The KNO variable z provides another way to study the evolution of the shape of
multiplicity distributions with varying CM energies and pseudorapidity intervals.
For the verification of the KNO scaling hypothesis the following
equation with dependence on the CM energy and a kinematic region,
p_T^min,
was used in Ref. <cit.>:
Ψ ( z , √(s))
= ⟨ n_ch (√(s), p_T^min) ⟩·
P (n_ch, √(s), p_T^min)
= ⟨ n_ch (√(s), p_T^min) ⟩/ N_ev (√(s), p_T^min ) ·d N_ev (√(s), p_T^min )/ d n_ch,
where
n_ch is the number of primary charged particles within the kinematic acceptance in an event,
P (n_ch, √(s)) is the probability distributions of producing
n_ch particles,
N_ev is the number of events with primary charged particles in
the kinematic acceptance, ⟨ n (√(s)) ⟩ is the average multiplicity of primary particles at
the CM energy, and Ψ ( z ) is the particle distribution as a function of the scaled multiplicity.
The KNO scale variable z provides a way to study evolution of shapes of the KNO charged-particle
multiplicity distributions (see Eq. (<ref>)) with varying CM energy and kinematic region,
for example p_T^min threshold.
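As a minimal illustration of the definition above, the following sketch builds the KNO function from a multiplicity distribution; the distribution used here is a placeholder, not a measured spectrum.

```python
# Sketch: KNO function Psi(z) = <n_ch> * P(n_ch) from a multiplicity
# distribution, following the definition above. The event counts are placeholders.
import numpy as np

n_ch = np.arange(2, 201)                     # primary charged-particle multiplicity bins
events = n_ch**1.5 * np.exp(-n_ch / 25.0)    # placeholder dN_ev/dn_ch

P_n = events / events.sum()                  # P(n_ch): normalised probability
mean_n = np.sum(n_ch * P_n)                  # <n_ch>

z = n_ch / mean_n                            # scaled multiplicity (KNO variable)
psi = mean_n * P_n                           # Psi(z) = <n_ch> * P(n_ch)
```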
The KNO distributions and their ratios, studied using ATLAS results,
are presented in Fig. <ref>
for charged particles with p_T >100 and in
Fig. <ref> for those with p_T >500 .
These figures are similar to Fig. <ref> and Fig. <ref>
but the vertical axis is stretched by the factor
⟨ n_ch (√(s), p_T^min)⟩.
The quantities of interest are derived from the original set of
KNO distributions and the ratios of these distributions to the one at 13 .
The high-multiplicity tail of the distributions is pushed up and the maximum of the distribution
is shifted towards small values of z with increasing collision energy.
Ratios of the KNO distributions between the smallest CM energy 0.9 to 13
reach the maximum value at z ≈ 0.8 and the
minimum value for the highest multiplicity at z ≈ 5.5
for p_T >100 , as can be seen in
Fig. <ref>(a), and
z ≈ 6.5 for p_T >500 , in
Fig. <ref>(a).
There is an intersection point for all distributions at z ≈ 2.
A test of the KNO scaling distributions between √(s) = 0.9 and 13
confirms that KNO scaling violation increases with decreasing collision energy.
Ratios of the KNO distributions between the highest energies 8 and 13
exceed the maximum value of +8% at z ≈ 0.5 and
the minimum value of -15% at z ≈ 0.1
for p_T >100 , as can be seen in
Fig. <ref>(b), and
the maximum value of +5% at z ≈ 0.5 and -13% at z ≈ 0.1
for p_T >500 , in Fig. <ref>(b).
For the high multiplicity tail, these ratios are in agreement within error bars with the KNO distribution at
13 .
Single- and double-diffractive processes make an important contribution only for the
low-multiplicity region, z ≲ 0.3.
The topologies of diffractive and non-diffractive events are different and their KNO behaviour
may also be different.
The negative spread, ≲ -8%, for the low multiplicity may be the result
of the contribution from diffractive processes.
The KNO scaling tends to be valid in the energy region from
√(s) = 7 to √(s) =13 within
≈^+8_-15% for z ≲ 2
and within error bars for z ≳ 2 for events with the
charged-particle transverse momentum p_T >100
(Fig. <ref>(b)),
and within ^+5_-13% for z ≲ 3
and within error bars for z ≳ 3 for events with the
charged-particle transverse momentum p_T >500
(Fig. <ref>(b)).
The tendency of the KNO scaling to hold for the highest collision energies is observed.
The MC QGSM predictions are made for the KNO
non-diffractive charged-particle multiplicity distributions
for pp collisions including at the highest LHC CM energy √(s)= 14
for |η| <2.4 in Fig. 12 in Ref. <cit.>.
These distributions have the same qualitative behaviour as those presented in
Fig. <ref>(a).
The MC QGSM described the KNO distributions as the contribution of
the cylinder diagram and diagrams with multi-Pomeron scattering.
The pronounced peak in the low z arises solely due to a single Pomeron exchange,
and the maxima of the distributions for multi-Pomeron processes are moved in the direction of
high z thus pushing up the tail <cit.>.
The energy independence of the moments of the probability distribution P (n_ch, √(s)), defined as
C_q (√(s)) = ∑_n=1^n_max n_ch^q P (n_ch, √(s)) / ( ∑_n=1^n_max n_ch P (n_ch, √(s)) )^q ,
in the asymptotic energy regime was the precise finding of the KNO scaling <cit.>.
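A short sketch of how these moments are computed from a (placeholder) multiplicity distribution is given below; energy independence of the resulting C_q values would correspond to exact KNO scaling.

```python
# Sketch: normalised moments C_q = sum(n^q P) / (sum(n P))^q of a multiplicity
# distribution, following the definition above. The distribution is a placeholder.
import numpy as np

n_ch = np.arange(1, 201)
P_n = n_ch**1.5 * np.exp(-n_ch / 25.0)
P_n /= P_n.sum()                             # normalise P(n_ch)

def C_q(q, n, P):
    return np.sum(n**q * P) / np.sum(n * P)**q

moments = {q: C_q(q, n_ch, P_n) for q in (2, 3, 4, 5)}
print(moments)
```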
The analysis results for the validity of KNO scaling are shown quantitatively in
Fig. <ref> by the C_q (√(s))
of the multiplicity distributions measured by the ATLAS and complemented with
the CMS measurements at √(s) =0.9, 2.36 and 7
<cit.> and results of the lower-energy experiments by
NA22 <cit.>, UA1 <cit.>, and UA5 <cit.>.
The C_q (√(s)) calculations based on the ATLAS results for the kinematic region
|η| < 2.5, n_ch≥ 2 and p_T >100 are shown in Fig. <ref>(a).
The ATLAS and CMS results agree within the errors.
The values of C_q (√(s))
for all experiments linearly increase with log√(s) as illustrated by the fits in
Fig. <ref>(a).
Since, as mentioned above, the KNO scaling requires that C_q (√(s))
be independent of energy, one can state that the KNO scaling is violated at least for
the full region of scaled multiplicity.
Figure <ref>(b) shows for the first time the values of C_q (√(s))
calculated using multiplicity distributions measured by ATLAS for the kinematic region
|η| < 2.5, n_ch≥ 1 and p_T >500 .
Similarly as in Fig. <ref>(a) the values of C_q (√(s))
linearly increase with log√(s).
The C_q values at √(s) = 2.36 TeV in Fig. <ref>(b)
are much smaller than those for other energies.
This is because the region of primary charged-particle multiplicity distributions at 2.36
is smaller (up to z ≈ 3.5) than that for higher CM energies (up to z ≈ 9)
<cit.>.
Therefore, the C_q values at √(s) = 2.36 were not used in the fits.
The C_q (√(s)) for p_T >500
have higher bias (α) and slope (β) of the fits than those for
minimum p_T threshold, the bias increasing
from 1.1 at q=2 up to 2.1 at q=5,
and the slope increasing from 1.4 at q=2 up to 2.6 at q=5.
This is the result of stronger interactions with a higher p_T threshold.
Figure <ref>(c) shows moments C_q for events with
n_ch≥ 2, p_T >100 and for z > 0.5
without the fraction of single and double diffraction events, which was accepted by
the ATLAS minimum-bias trigger
<cit.>.
In this case, the values of C_q (√(s)) are systematically higher than those for
full distributions with z > 0 and show a similar linear increase with log√(s)
as is illustrated in Fig. <ref>(c).
For multiplicity distributions for z > 1.0 the values of C_q (√(s))
at the highest energies √(s) =7, 8 and 13 are in agreement within error uncertainties,
as can be seen in Fig. <ref>(c).
Therefore, the energy independence of the moments of various orders can be considered
as a confirmation of the KNO scaling.
§.§ Study of the KNO scaling at the LHC experiments
The KNO scaling violation was studied for different pseudorapidity ranges in LHC experiments by the CMS
<cit.> and the ALICE <cit.>
at the CM energies from √(s) = 0.9 to 8 .
The multiplicity distributions obtained by the CMS detector are shown in the KNO form
<cit.> for the pseudorapidity interval of |η| < 2.4
in Fig. <ref>(a), which is close to the similar ATLAS results with |η| < 2.5,
and for a more central pseudorapidity interval |η| < 0.5 in Fig. <ref>(b).
The variation of the ratio for the central region of 0.9 to 7 with |η| < 0.5 is
about ± 15% and agrees with unity within error bars; therefore the KNO scaling holds.
The variation of the ratio for the full region with |η| < 2.4 is twice as wide, ≈± 30%,
and does not agree with unity within error bars; therefore the KNO scaling is violated, similarly to the ATLAS data
in Fig. <ref>(a).
Scaling is a characteristic property of the multiplicity distribution in cascade processes of a single jet
with self-similar branching and a fixed coupling constant
<cit.>.
A similar conclusion about the shape evolution of the multiplicity distributions
as from Fig. <ref>(b) can be extracted
from Fig. <ref>(c),
where the ALICE measurements plotted in terms of KNO variables at the two energies
are compared with the UA5 p p̅ data at √(s) = 0.2 and 0.9 ,
for NSD collisions and the pseudorapidity interval |η| < 0.5.
While the KNO scaling gives a reasonable description of the data from √(s) = 0.2 and 2.36 ,
the ratio between the √(s) = 0.9 and 2.36
data shows a slight departure from unity above z = 4,
but it is in agreement with unity within error bars.
The KNO test on the ALICE results in the range of 0.9 to 8
<cit.> is presented in Fig. <ref>.
The KNO-scaled distributions and their ratios were obtained for each of the available combinations of
corrections with the same procedure used for multiplicity distribution measurements.
Bin-to-bin correlations were ignored when comparing KNO distributions and q-moments
at various CM energies.
Consequently, the relative errors obtained on the ratios are somewhat overestimated.
The ratios between the two highest energies and 0.9 exceed the value of 2 at
z > 5.5, 5 and 4.5
for |η| < 0.5, |η| < 1.0 and |η| < 1.5, respectively,
see Fig. <ref>.
This confirms that KNO scaling violation increases with the size of increasing pseudorapidity interval.
The shape of the KNO scaling violation reflects the fact that the high-multiplicity tail of the distribution increases
with energy and with size of pseudorapidity interval faster than that for low-multiplicity tail
(n_ch≤ 20).
A test of the KNO scaling between √(s) = 0.9 to 8 confirms that KNO scaling violation increases with increasing √(s) and, at a given CM energy,
with increasing width of pseudorapidity intervals.
This is similar to the ATLAS result in Fig. <ref>(a).
The KNO test on the ALICE results for pp collisions at the different centre-of-mass energies
√(s) = 2.36, 5.02, 7, 8 and 13 for the kinematic region
|η| < 0.8 and 0.15 < p_T < 10 is presented in Fig. <ref>(a).
Figure <ref>(b) shows the corresponding ratios of the KNO scaled multiplicity distributions
at various CM energies relative to √(s) = 13 .
The KNO scaling apparently holds within ≈ 30% for
CM energies from 2.36 to 8 in relative to √(s) = 13 .
Figure <ref>(c) compares the measured results
for the respective KNO scaled multiplicity distributions with predictions from
Pythia 8 <cit.> (solid lines)
and EPOS LHC <cit.> (dashed lines).
As for the multiplicity distributions in Fig. <ref>(b),
the overall shapes of the KNO-scaled distributions shown in Fig. <ref>(c)
are better described by EPOS LHC, while Pythia 8 falls sharply off above
N_ch/ ⟨ N_ch⟩≈ 4;
both models agree with the experimental distributions within 25%,
with larger deviations at the highest multiplicities.
Figure <ref> shows the ALICE results for the trans-max and trans-min UE regions
for charged-particle multiplicity distributions in KNO variables for pp collisions at
√(s)=2.76, 5.02, 7 and 13 <cit.>.
The trans-max and trans-min regions of UE refer to the sub-transverse regions with
the largest and smallest charged-particle multiplicity which have an enhanced sensitivity to
ISR-FSR and UE, respectively <cit.>.
In the trans-max region, within 20%, the KNO-like scaling is observed in a wider range of multiplicity
(0<z<4) relative to the results reported in <cit.>, while for higher z values (z > 4)
the scaling is broken.
It is worth noticing that for trans-max both contributions are considered: UE and ISR-FSR.
If the effect of ISR-FSR is suppressed, i.e., exploiting the features of the trans-min region,
the KNO-like scaling also holds for
0 < z < 4; for z > 4 the KNO-like scaling is still broken but a higher
z reach is achieved, and especially for z>6 a larger violation is observed.
Events with high-multiplicity jets can contribute to the large violation of the scaling properties.
It was observed that for z > 3, the number of uncorrelated seeds (or MPI) deviate from
the linear trend suggesting the presence of high-multiplicity jets <cit.>.
Multiplicity distributions may be characterized by their normalized C_q-moments
where q is a positive integer studied here for the values 2, 3, 4 and 5, for NSD events.
The results obtained by different experiments for the C_q-moment dependence
on √(s) are shown in Fig. <ref>.
For three pseudorapidity intervals
|η| < 0.5,
|η| < 1.0 and
|η| < 1.5,
C_2 remains constant over the energy range,
C_3 shows a small increase with increasing energy for
two largest η intervals,
C_4 and
C_5 show an increase with increasing energy,
which becomes stronger for larger η intervals.
These ALICE data are in agreement with UA5 <cit.> and
CMS <cit.>.
The results of the KNO scaling studies based on the data of the ALICE, CMS and ATLAS experiments
have been analysed.
The shape evolution of the multiplicity distributions with collision energy at ATLAS
is studied in terms of KNO scaling variables at √(s) from 0.9 to 13
in the inclusive region |η| < 2.5.
The KNO scaling and C_q-moments were studied by the CMS
at √(s) from 0.9 to 7 in central pseudorapidity |η| < 0.5 region
and more inclusive |η| < 2.4 regions, and the ALICE at √(s) from 0.9
to 8 in three pseudorapidity regions: |η| < 0.5, |η| < 1.0
and |η| < 1.5.
The charged-particle multiplicity distributions on the KNO scale for all experiments
have a similar shape and decrease with increasing collision energy.
For all experiments the KNO scaling is violated for energies from 0.9 to 7 when the more inclusive pseudorapidity regions are considered.
The ATLAS data demonstrate the tendency for the KNO scaling to be independent of energy
for the highest energies.
The CMS results show that the KNO scaling holds for the central pseudorapidity region, |η| < 0.5,
and is independent of the energy from √(s) = 0.9 to 7 ,
because the C_q-moments demonstrate independence of energy
and the shape of the KNO function is similar.
The situation is different for the inclusive region |η| < 2.4, where
the C_q-moments demonstrate a linear increase with energy.
The ALICE results show KNO scaling violation for all pseudorapidity regions
over the energy range from √(s) = 0.9 to 8 ,
because the C_q-moments increase linearly with log√(s).
Ratios of the KNO distributions between the smallest energy √(s) = 0.9 and 8
reach a maximum at z ≈ 0.5 and their largest values at high
multiplicities, at z ≈ 4.5, z ≈ 5.5 and z ≈ 6.0,
for the pseudorapidity intervals
|η| < 1.5, |η| < 1.0 and |η| < 0.5, respectively.
There is an intersection point for all distributions at z ≈ 2.
The shapes at √(s) =7 and 8 are similar and agree within error bars.
The ALICE results show the tendency for the KNO scaling to be independent of energy for the highest energies.
Therefore, an investigation of the KNO scaling at energies higher than 13 is important.
The validity of KNO scaling is shown more quantitatively in Fig. <ref>(a) for the wider pseudorapidity region, and for the smaller pseudorapidity region, |η| < 0.5,
in Fig. <ref>(a), by the normalized order-q moments C_q
of the multiplicity distribution, complemented with measurements by the lower-energy experiments
NA22 <cit.> and UA5 <cit.>.
For |η| < 0.5 the values of C_q remain constant
over the full CM energy range, as illustrated by the fits in Fig. <ref>(a).
The KNO-scaling study by ALICE is carried out for the NSD event class only so that
SD events, which may have a different behaviour, are not included in the data samples.
The ALICE data are consistent with the UA5 p p̅ measurements at 0.9 <cit.>.
The energy dependence of the reduced moments C_q shown in
Fig. <ref>(b)
indicates a slight increase, which is not significant given the size of our systematic uncertainties.
Systematic uncertainties are assumed to be uncorrelated between energies.
§ CONCLUSIONS
The ATLAS studied MB events in pp interactions at
the CM energies √(s) = 0.9, 2.36, 7, 8 and 13
for the absolute pseudorapidity region less than 2.5 in five separate
PS regions
n_ch≥ 2, p_T > 100 and
n_ch≥ 1, 6, 20, 50, p_T > 500 recorded in 2010 – 2015.
The data were taken in the special configuration of the
LHC with low beam currents and reduced beam focusing,
producing a low mean number of interactions per
bunch-crossing in the range 0.003 – 0.007.
The charged-particle multiplicity dependences on pseudorapidity, charged-particle multiplicity and
transverse momentum, as well as the dependence of the mean transverse momentum on multiplicity,
were presented for the study of soft-QCD phenomena.
The measured distributions are presented as inclusive-inelastic distributions within a
given
PS
region with minimal model-dependent corrections
to facilitate the comparison with models.
Since the modelling relies on non-perturbative QCD and is therefore subject to considerable variability,
the free parameters of the event generators are tuned using such MB measurements
from the LHC and Tevatron experiments.
The results are compared to the predictions of more than ten
MC models tuned to a wide range of measurements.
This review reported that the multiplicity distribution is not described perfectly
by any of the models; there are large discrepancies, especially at large multiplicities.
Having observed similar discrepancies at all measured energies, we conclude that
for every collision energy the model parameters usually need to be re-tuned in every
MC generator.
Reasonable agreement of the tunes used in the MC models with the data was presented.
The models
EPOS LHC,
PHOJET,
QGSJET-II,
Pythia 6
and
Pythia 8
have considerable difficulty in describing the whole spectrum of the data,
but the best agreement is achieved with
EPOS.
A new ATLAS Pythia 8 A3 tune was presented
for predictions at Run 3 of the LHC.
The comparisons of the charged-particle multiplicity and the average transverse momentum
distributions on the basis of the scaled multiplicity
using the LHC experiments results
were presented.
The charged-particle multiplicity distributions on the KNO scale
have a similar shape and decrease with increasing energy.
The KNO scaling was studied using
the results of the LHC experiments.
A test of the KNO scaling between 0.9 and 13
confirms that the KNO scaling violation increases with decreasing collision energy.
The KNO distributions tend to be independent of energy for the highest energies.
The mean transverse momentum on the KNO scale has the same shape and increases with increasing energy.
§ ACKNOWLEDGEMENTS
We thank the ATLAS collaboration for the excellent experimental results which were used in this review.
Special thanks go to Edward K. Sarkisyan-Grinbaum and Stanislav Tokar for very productive discussions.
We are grateful to Pavel Tsiareshka for the technical support.
|
http://arxiv.org/abs/2307.04286v2 | 20230710002617 | Numerical Investigation of Diffusion Flame in Transonic Flow with Large Pressure Gradient | [
"Yalu Zhu",
"Feng Liu",
"William A. Sirignano"
] | physics.flu-dyn | [
"physics.flu-dyn"
] |
Numerical Investigation of Diffusion Flame in Transonic Flow with Large Pressure Gradient
Yalu Zhu
Feng Liu
William A. Sirignano
August 12, 2023
=================================================================================
A finite-volume method for the steady, compressible, reacting, turbulent Navier-Stokes equations is developed by using a steady-state preserving splitting scheme for the stiff source terms in chemical reaction. Laminar and turbulent reacting flows in a mixing layer with large streamwise pressure gradient are studied and compared to boundary-layer solutions. Influence of chemical reaction on the turbulent transport in the mixing layer is analyzed. Influence of vitiated air on the combustion process and aerodynamic performance is also investigated for the cases of turbulent mixing layer and turbine cascade.
§ NOMENCLATURE
@l @ = l@
C molar concentration
C_p specific heat capacity at constant pressure
E total energy
e internal energy
h enthalpy
h^0 enthalpy of formation
𝐈 Kronecker tensor
𝐣 diffusive flux of species
k turbulent kinetic energy
N total number of species
n index of time step
Pr Prandtl number
p pressure
Q̇ source term in energy equation due to reaction
𝐪 heat flux in energy equation
R gas constant
R_0 universal gas constant, R_0 = 8.3145 J/mol/K
Sc Schmidt number
T temperature
t time
u velocity in x direction
𝐕 velocity vector, 𝐕 = (u, v)
v velocity in y direction
W molecular weight
x,y Cartesian coordinates
Y mass fraction
Δ t time step
μ viscosity coefficient
ρ density
τ viscous stress tensor
ω specific dissipation rate
ω̇ production rate of species by reaction
(·)^T transpose of quantity
(̅·̅)̅ Reynolds-averaged quantity
(̃·̃)̃ Farve-averaged quantity
(·)_i species
(·)_T quantity of turbulence
(·)_∞ quantity at the top far away from the mixing layer
§ INTRODUCTION
To reduce the weight and widen the range of operation, the designer continues to pursue a compact design of the combustor and the turbomachinery of a gas turbine engine. In a compact combustor, the fuel residence time becomes shorter than the time required for complete combustion. As a result, the combustion process would be extended into the downstream turbine passage. At first sight, this increases the challenge of heat transfer in the turbine. However, the thermodynamic analysis by Sirignano and Liu <cit.> showed that intentional augmented burning in the turbine passage, called the turbine-burner, allows for significant benefits: 1) reduction in after-burner length and weight, 2) reduction in specific fuel consumption, and 3) increase in specific thrust.
To take advantage of the turbine-burner design concept, it is necessary to address some fundamental issues of aerodynamics and combustion associated with it. In a turbine passage, the compressible turbulent flow is subjected to strong streamwise and transverse pressure gradients produced by the turbine blade profiles.
The flow accelerates from subsonic to supersonic in a very short distance, creating a challenge for flameholding. Large gradients of temperature, velocity, and species concentration occur on the fuel-oxidizer interface due to mixing and combustion. This can result in hydrodynamic instabilities that might significantly affect the energy conversion, heat transfer, and aerodynamic loading on the turbine blades, and the character of turbulent flow <cit.>.
The high-accelerating transonic flow with mixing and chemical reaction is an important area of applied scientific research.
Sirignano and Kim <cit.> obtained the similarity solution for a diffusion flame in the two-dimensional, laminar, steady, compressible mixing layer with constant pressure gradient along the streamwise direction.
Fang et al. <cit.> extended the study to include cases with arbitrary pressure gradients by using a finite-difference method for the boundary-layer equations. The influence of pressure gradient, initial temperature, initial velocity, and transport properties on the ignition process and flame structure was studied.
Mehring et al. <cit.> further extended the laminar boundary-layer computation to include the effects of turbulence.
Cai et al. <cit.> investigated the turbulent, transonic, reacting flows in a mixing layer and curved duct by solving the two-dimensional Reynolds-Averaged Navier-Stokes (RANS) equations. The flame structures in transonic flows with large axial and transverse pressure gradients were examined.
Cheng et al. <cit.> simulated the development of reacting mixing layers in straight and curved ducts from laminar flow to transition regime by solving the two-dimensional Navier-Stokes equations.
These numerical computations of the reacting flows with pressure gradients are based on the boundary-layer equations or the two-dimensional Navier-Stokes equations in which the pressure gradient is provided by the variation of flow passage width in the third direction. In order to simulate the reacting turbulent flow in a real turbine, numerical methods to solve the full three-dimensional Navier-Stokes equations with chemical reaction must be developed.
Since the turbine vane is downstream of the primary combustion chamber in a gas turbine engine, the gases at the turbine entrance are a mixture of unburned air and reaction products. However, a pure gas model, treating the medium as a single species with altered specific heat capacities, is usually used for flow simulation in a turbine when there is no chemical reaction and heating <cit.>. This simplification has little influence on the aerodynamic performance and heat transfer in a turbine. However, it significantly affects the heat release and flow characteristics if a combustion process is incorporated into a turbine due to the different chemical and thermodynamic properties of each species. To accurately model the reacting flow and thus predict its influence on the aerodynamic and thermodynamic performance of a turbine, the vitiated air composed of unburned air and reaction products should be used at the turbine entrance instead <cit.>. The differences between pure air and vitiated air also need to be evaluated.
In the present paper, a code to solve the three-dimensional compressible RANS equations with chemical reaction and turbulence models by using finite-volume method is developed and implemented. The code is then applied to study the laminar and turbulent reacting flows in an accelerating mixing layer and to compare the vitiated air and pure air in the same mixing layer and a real turbine cascade.
The governing equations and numerical methods are presented in Sec. <ref> and Sec. <ref>, respectively.
The nonreacting laminar flow in a mixing layer is presented in Sec. <ref>. The reacting laminar case is discussed in Sec. <ref>. The reacting turbulent case is given in Sec. <ref>. The differences between vitiated air and pure air for the turbulent mixing layer are discussed in Sec. <ref>. The reacting turbulent flow in a turbine cascade is analyzed in Sec. <ref>.
The concluding remarks are given in Sec. <ref>.
§ GOVERNING EQUATIONS
§.§ Reynolds-Averaged Navier-Stokes Equations
The Reynolds-Averaged Navier-Stokes (RANS) equations for compressible flows are expressed by the following transport equations for mass, momentum and energy
∂ρ̅/∂ t + ∇· (ρ̅𝐕)
= 0
∂(ρ̅𝐕)/∂ t + ∇· (ρ̅𝐕𝐕)
= -∇p̅ + ∇·τ
∂(ρ̅E)/∂ t + ∇· (ρ̅E𝐕) = -∇· (p̅𝐕) + ∇· (𝐕·τ) - ∇·𝐪 + Q̇
The chemical reaction in the flow is taken into consideration by the mass fraction transport equation for each species in a mixture with N species
∂ρ̅Y_i/∂ t + ∇· (ρ̅Y_i 𝐕) = -∇·𝐣_i + ω̇_i, i = 1, 2, ..., N
The energy equation is expressed in terms of the total energy, E, which consists of the internal energy and the kinetic energy, i.e.
E = ẽ + 1/2𝐕·𝐕
where the internal energy ẽ is related to the enthalpy h̃ by
ẽ = h̃ - p̅/ρ̅
The enthalpy is the summation of the sensible enthalpy weighted by the mass fraction
h̃ = ∑_i=1^NY_i h̃_i
with
h̃_i = ∫_T_0^TC_p,idT
where the specific heat capacity at constant pressure C_p,i is a function of temperature given by the empirical polynomial formula of NASA <cit.> for each species. An additional heat source term Q̇ appears on the right-hand side of the energy
equation
Q̇ = -∑_i=1^Nω̇_i h_i^0
where h_i^0 is the enthalpy of formation of species i at the reference temperature T_0.
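For illustration, the mass-weighted evaluation of the mixture sensible enthalpy can be sketched in Python as below. The fifth-order polynomial form of C_p,i, the trapezoidal integration, and the dictionary-based data layout are simplifying assumptions of this sketch; the actual coefficients are those of the cited NASA database.

import numpy as np

R0 = 8.3145  # universal gas constant, J/mol/K

def cp_species(T, a, W):
    # C_p,i(T) = (R0/W_i) * (a0 + a1*T + ... + a4*T^4); the coefficients `a`
    # stand in for the NASA polynomial fits of the cited database.
    return (R0 / W) * np.polyval(np.asarray(a, dtype=float)[::-1], T)

def sensible_enthalpy(T, a, W, T0=298.15, npts=200):
    # h_i(T) = integral of C_p,i from T0 to T (trapezoidal rule).
    Ts = np.linspace(T0, T, npts)
    return np.trapz(cp_species(Ts, a, W), Ts)

def mixture_enthalpy(T, Y, coeffs, molar_mass):
    # Mass-fraction-weighted summation h = sum_i Y_i * h_i(T).
    return sum(Y[s] * sensible_enthalpy(T, coeffs[s], molar_mass[s]) for s in Y)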
The viscous stress tensor τ is the sum of the molecular stress tensor τ_L and turbulent stress tensor τ_T with
τ_L = μ[∇𝐕 + (∇𝐕)^T ] -2/3μ(∇·𝐕) 𝐈
τ_T = μ_T [∇𝐕 + (∇𝐕)^T ] -2/3μ_T (∇·𝐕) 𝐈
where μ is the molecular viscosity computed by the mass-weighted summation of molecular viscosity of each species given by the Sutherland's law <cit.>, and μ_T is the turbulent viscosity determined by the turbulence model in the next subsection.
The diffusive flux of species i is given by
𝐣_i = -( μ/Sc_i + μ_T/Sc_T) ∇Y_i
where Sc_i and Sc_T are the Schmidt number of species i and the turbulent Schmidt number, respectively. In the present study, we set Sc_i = 1.0 and Sc_T = 1.0.
The heat flux in the energy equation is computed by
𝐪 = -( μ/Pr + μ_T/Pr_T) ∇h̃ + ∑_i=1^Nh̃_i𝐣_i
where the last term stands for the energy transport due to mass diffusion of each species with different enthalpy, and Pr and Pr_T are the Prandtl number and the turbulent Prandtl number, respectively. In the present study, we set Pr = 1.0 and Pr_T = 1.0.
A perfect gas is assumed in this study, in which the pressure, density and temperature are related by the equation of state
p̅ = ρ̅RT
where R is the gas constant of the mixture, computed by the mass-weighted summation of the gas constant of each species R_i with R_i = R_0/W_i.
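A minimal sketch of the mixture gas constant and of the equation of state solved for the density is given below; the dictionary-based storage of mass fractions and molar masses is an assumption of the illustration, not of the solver.

R0 = 8.3145  # universal gas constant, J/mol/K

def mixture_gas_constant(Y, W):
    # R = sum_i Y_i * R0 / W_i, with Y and W keyed by species name.
    return sum(Y[s] * R0 / W[s] for s in Y)

def density_from_state(p, T, Y, W):
    # Perfect-gas equation of state p = rho * R * T, solved for rho.
    return p / (mixture_gas_constant(Y, W) * T)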
§.§ Turbulence Model
The improved kω Shear-Stress Transport (SST) model presented by Menter et al. <cit.> in 2003 is used to evaluate the turbulent viscosity.
This model combines the advantages of the kε model and the kω model.
In the inner zone of a boundary layer, the SST model degenerates into the kω model, which avoids the stiff source term in the kε model due to the damping function in the viscous sublayer and the defect in capturing the proper behaviors of turbulent flows with adverse pressure gradients up to separation.
In the outer zone of a boundary layer or in a free-shear flow, the SST model switches to the kε model, avoiding the strong sensitivity of the kω model to freestream turbulence.
In addition, the SST model takes into account the transport of principal turbulent shear stress to enhance its ability to predict turbulent flows with adverse pressure gradient and separation.
The SST model is established by the transport equations for turbulent kinetic energy k and specific dissipation rate ω
∂(ρ̅ k)/∂ t + ∇·(ρ̅ k 𝐕) = P - β^*ρ̅ kω + ∇· [(μ+σ_kμ_T) ∇ k]
∂(ρ̅ω)/∂ t + ∇·(ρ̅ω𝐕) = γρ̅/μ_T P - βρ̅ω^2 + ∇·[(μ+σ_ωμ_T)∇ω] + 2(1-F_1)ρ̅σ_ω2/ω∇ k·∇ω
where the production term in the k equation is
P = min(μ_TS^2, 10β^*ρ̅ kω)
The turbulent viscosity is then computed by
μ_T = a_1ρ̅ k/max(a_1 ω, F_2S)
where S is the second invariant of strain rate tensor
S = √(2S_ijS_ij), S_ij = 1/2(ũ_j,i + ũ_i,j)
The blending functions F_1 and F_2 are defined by, respectively
F_1 = tanh(Γ^4), Γ = min[max(500μ/(ρ̅ω d^2), √(k)/(β^*ω d)), 4σ_ω 2ρ̅ k/(CD_kω d^2)], CD_kω = max(2ρ̅σ_ω 2/ω∇ k·∇ω, 10^-10)
F_2 = tanh(Π^2), Π = max(500μ/(ρ̅ω d^2), 2√(k)/(β^*ω d))
where d is the distance to the nearest wall. F_1 is set to zero away from the wall (kε model), and switched to one inside the boundary layer (kω model). F_2 is one for boundary-layer flows and zero for free-shear layers. Both of them are artificially set to zero in the turbulent mixing-layer case below.
Each of the constants in the SST model is computed by a blend of the corresponding constants of the kω and kε models via ϕ = F_1 ϕ_1 + (1-F_1) ϕ_2, where ϕ = σ_k, σ_ω, β, γ. The other constants are: a_1 = 0.31, β^* = 0.09, σ_k1 = 0.85, σ_k2 = 1.0, σ_ω_1 = 0.5, σ_ω_2 = 0.856, β_1 = 3/40, β_2 = 0.0828, γ_1 = 5/9, γ_2 = 0.44.
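The per-cell evaluation of the blending functions and of the turbulent viscosity can be sketched as follows. The function signature is illustrative: the strain-rate invariant, the wall distance, and the scalar product of the gradients of k and ω are assumed to be supplied by the surrounding flow solver.

import numpy as np

def sst_viscosity(rho, k, omega, mu, S, d, grad_k_dot_grad_w,
                  a1=0.31, beta_star=0.09, sigma_w2=0.856):
    # Blending functions F1, F2 and turbulent viscosity mu_T of the SST model.
    cd_kw = max(2.0 * rho * sigma_w2 / omega * grad_k_dot_grad_w, 1.0e-10)
    gamma = min(max(500.0 * mu / (rho * omega * d**2),
                    np.sqrt(k) / (beta_star * omega * d)),
                4.0 * sigma_w2 * rho * k / (cd_kw * d**2))
    F1 = np.tanh(gamma**4)
    pi_arg = max(500.0 * mu / (rho * omega * d**2),
                 2.0 * np.sqrt(k) / (beta_star * omega * d))
    F2 = np.tanh(pi_arg**2)
    mu_t = a1 * rho * k / max(a1 * omega, F2 * S)
    return mu_t, F1, F2

def blend(phi1, phi2, F1):
    # phi = F1*phi1 + (1 - F1)*phi2 for the blended model constants.
    return F1 * phi1 + (1.0 - F1) * phi2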
§.§ Chemistry Model
The combustion of methane (CH_4) in air is considered in the current computations. The production rate ω̇_i of each species due to chemical reaction is calculated by the Westbrook and Dryer's one-step reaction mechanism <cit.>
CH_4 + 2 O_2 + 7.52 N_2 → CO_2 + 2 H_2 O + 7.52 N_2
where only four species, i.e. methane (CH_4), oxygen (O_2), carbon dioxide (CO_2), and water vapor (H_2O), are tracked besides nitrogen (N_2) in air. Thus, the number of species N = 5 in the present work. Note that although this work only focuses on one-step reaction of one type of fuel, it is straightforward to extend the present method to different fuels and oxidizers with more complex chemical reaction mechanisms if the increased computational costs are acceptable.
The reaction rate by the laminar kinetics is given by the modified Arrhenius expression
ε = A T^β e^-E_a/(R_0 T) C_fuel^a C_ox^b
where "fuel" and "ox" stand for CH_4 and O_2 in this study, respectively, and C_i is the molar concentration of species i defined by C_i = ρ̅Y_i/W_i. According to Westbrook and Dryer <cit.>, A = 1.3 × 10^9 s^-1, β = 0, E_a = 202.506 kJ/mol, a = -0.3, and b = 1.3 for methane.
To be consistent with the setups of Fang et al. <cit.> and Mehring et al. <cit.>, the values of A are adjusted to be 2.8 × 10^9 s^-1 and 1.3 × 10^10 s^-1 in the laminar and turbulent cases below, respectively. Computations show that this change has little influence on flame development except ignition length. It is found that ignition may not happen if the original value of A is used in the turbulent mixing-layer case.
Note that the influence of turbulence on the reaction rate has been neglected in this analysis. Since the turbulent length scale is orders of magnitude smaller than the scale across which the largest temperature gradient occurs in the ignition region of mixing-layer flow as analyzed by Mehring et al. <cit.>, the averaged reaction rate by Eq. (<ref>) is considered to be reasonable.
The net rate of production of species i by the chemical reaction is thus calculated by
ω̇_i = W_i(v_i^'' - v_i^') ε
where v_i^' is the stoichiometric coefficient for reactant i in Eq. (<ref>), and v_i^'' is the stoichiometric coefficient for product i. Obviously, the net production rate of nitrogen is zero.
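A per-cell evaluation of the one-step mechanism might look as follows in Python. The default rate parameters mirror the values quoted above, the dictionary-based layout is an assumption of the sketch, and no unit conversion between the original correlation and SI molar concentrations is attempted here.

import numpy as np

R0 = 8.3145  # universal gas constant, J/mol/K
# Molar masses [kg/mol] and net stoichiometric coefficients (nu'' - nu') of
# CH4 + 2 O2 + 7.52 N2 -> CO2 + 2 H2O + 7.52 N2.
W  = {"CH4": 0.016043, "O2": 0.031999, "CO2": 0.044010,
      "H2O": 0.018015, "N2": 0.028013}
NU = {"CH4": -1.0, "O2": -2.0, "CO2": 1.0, "H2O": 2.0, "N2": 0.0}

def production_rates(rho, T, Y, A=1.3e9, beta=0.0, Ea=202.506e3, a=-0.3, b=1.3):
    # omega_dot_i = W_i*(nu''_i - nu'_i)*eps with the modified Arrhenius rate
    # eps = A * T^beta * exp(-Ea/(R0*T)) * C_fuel^a * C_ox^b.
    C_fuel = rho * Y["CH4"] / W["CH4"]
    C_ox = rho * Y["O2"] / W["O2"]
    if C_fuel <= 0.0 or C_ox <= 0.0:   # avoid C_fuel**(-0.3) at depletion
        return {s: 0.0 for s in W}
    eps = A * T**beta * np.exp(-Ea / (R0 * T)) * C_fuel**a * C_ox**b
    return {s: W[s] * NU[s] * eps for s in W}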
§ NUMERICAL METHODS
§.§ Numerical Solver
An in-house three-dimensional code of simulating the steady and unsteady transonic flows for single species within turbomachinery blade rows has been developed, validated, and applied by Refs. <cit.>. The code solves the Navier-Stokes equations together with various turbulence models by using the second-order cell-centred finite-volume method based on multi-block structured grid. The central schemes with artificial viscosity, flux difference splitting schemes, and advection upstream splitting methods with various options to reconstruct the left and right states have been developed and implemented in the code. In this study, the present code is extended to the case of multiple species with varying specific heat capacities and to include appropriate chemistry models.
The species transport equations (<ref>) have the same form as the basic conservation equations (<ref>) and (<ref>). Consequently, the numerical methods for the Navier-Stokes equations are still applicable, except that the chemical source terms in both species equations and energy equation need to be treated separately to avoid stiffness.
The convective and viscous fluxes are discretized by the JST scheme <cit.> and the second-order central scheme, respectively.
The local time-stepping method is introduced to accelerate the convergence to steady state. Thus, the time t in the governing equations (<ref>), (<ref>), and (<ref>) is interpreted as a pseudo time, and a large enough pseudo-time step determined by the local flow field can be used in each grid cell since time accuracy is not required for steady-state solutions.
The Lower-Upper Symmetric-Gauss-Seidel (LU-SGS) method <cit.> is applied to the pseudo-time stepping to obtain the converged steady-state solutions. The parallel technique based on Message Passing Interface (MPI) is adopted to further accelerate the computation by distributing grid blocks among CPU processors.
Note that the continuity equation (<ref>) and the N species transport equations (<ref>) are not independent of each other. The summation of the N species equations should reduce to the continuity equation, which gives the following restrictions on the terms in the species transport equations
∑_i=1^NY_i = 1, 0 ≤Y_i ≤ 1
∑_i=1^N𝐣_i = 0
∑_i=1^Nω̇_i = 0
These restrictions should be satisfied during the computation to maintain consistency of the final converged solution. Equation (<ref>) is automatically satisfied due to the balanced stoichiometric coefficients in Eq. (<ref>). Additional treatments should be adopted to guarantee the other two conditions. To ensure Eq. (<ref>), after each iteration for Eq. (<ref>), the mass fraction of each species is corrected as
Y_i^ corr = Y_i/∑_k=1^NY_k
To ensure Eq. (<ref>), the diffusive flux of each species is corrected as
𝐣_i^ corr = 𝐣_i - Y_i∑_k=1^N𝐣_k
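Both corrections act cell by cell and are straightforward to implement; a NumPy sketch with the species stacked along the first axis is given below.

import numpy as np

def correct_mass_fractions(Y):
    # Enforce 0 <= Y_i <= 1 and sum_i Y_i = 1 after each iteration.
    Y = np.clip(np.asarray(Y, dtype=float), 0.0, 1.0)
    return Y / Y.sum()

def correct_diffusive_fluxes(j, Y):
    # Enforce sum_i j_i = 0 via j_i_corr = j_i - Y_i * sum_k j_k.
    # j has shape (N_species, n_dim); Y has shape (N_species,).
    j = np.asarray(j, dtype=float)
    return j - np.outer(np.asarray(Y, dtype=float), j.sum(axis=0))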
§.§ Splitting Scheme
In the species and energy equations, the source terms exhibit fundamentally different physical properties from the terms of convection and diffusion due to significantly smaller time scales for chemical reactions than for the flow, resulting in strong stiffness in solving the governing equations. The operator-splitting method is a natural choice to achieve efficient integration in time for unsteady problem.
Equations (<ref>) and (<ref>) can be rewritten as
dW/dt = T(W)+ S(W)
where W = [ρ̅Y_i, ρ̅E]^T, with size of N+1, is the state variables. T and S represent the transport term (convection and diffusion) and the reacting source term, respectively. The operator-splitting method integrates the two terms sequentially. Consider the integration from time step n to n+1 over the time interval of Δ t. A natural splitting scheme first integrates the Ordinary Differential Equation (ODE) of S over the time interval Δ t with the solution at the time step n as initial value, and then solves the Partial Differential Equation (PDE) of T over Δ t with the solution at the end of the first step as initial value. It is easy to prove that this simplest splitting is of first-order accuracy. The accuracy can be improved to second order by using the Strang splitting scheme <cit.>, in which the integration proceeds in a symmetric way: first a half interval Δ t/2 is taken with the S operator, then a full interval Δ t with the T operator, and finally another half interval Δ t/2 with the S operator. Both splitting schemes are strongly stable and applicable to unsteady problems. However, they are not steady-state preserving <cit.>. A numerical integration method is called steady-state preserving, if given an initial solution W_0 satisfying T(W_0)+ S(W_0) = 0, the solution of the next step remains W_0, regardless of the step size.
We propose here a steady-state preserving splitting scheme for solving stiff reacting flow. The integration of Eq. (<ref>) from pseudo-time step n to n+1 over the time interval of Δ t is split into two separate sub-integrations
dW^*/dt = T(W^n)+ S(W^*), (W^*)^n = W^n
dW/dt = R, R = [(W^*)^n+1 - W^n]/Δ t
In the first chemical sub-integration, Eq. (<ref>) is integrated by a stiff ODE solver over the time interval Δ t with the initial value W^n, giving an intermediate value (W^*)^n+1 at the end of the sub-integration. Since the time scale of chemical reaction is several orders of magnitude smaller than that of the flow, the transport term T is evaluated at time step n and is kept unchanged within the chemical sub-integration.
Although advanced full-implicit method or Quasi-Steady-State (QSS) method <cit.> is usually chosen as the stiff ODE solver, the simplest Euler explicit integration method is applied to solve Eq. (<ref>) in the present study due to the extremely stiff source terms introduced by the empirical global reaction mechanism in the absence of reverse reaction. The chemical time step in the explicit integration is determined by the spectral radius of the Jacobian matrix ∂S/∂ W.
Once the intermediate solution (W^*)^n+1 is obtained by the chemical sub-integration, the LU-SGS method is then applied to the flow sub-integration (<ref>) over the time interval Δ t to obtain the solution W^n+1 at time step n+1. In the original LU-SGS method for unsplit problems, the residual on the right-hand side is computed by the solution at the time step n, i.e., R = T(W^n) + S(W^n). However, in Eq. (<ref>), the residual is replaced by the difference of solutions at the intermediate time step and the time step n. In consideration of Eq. (<ref>), this residual can be regarded as the weighted average of T(W) + S(W) over the time interval Δ t
R = [(W^*)^n+1 - W^n]/Δ t = 1/Δ t∫_t_n^t_n+1dW^*/dtdt = 1/Δ t∫_t_n^t_n+1[T(W^n)+ S(W^*)] dt = T(W^n) + 1/Δ t∫_t_n^t_n+1S(W^*) dt
This time-averaged residual is considered more effective in maintaining the stability of the integration than the residual evaluated at time step n.
The contributions of the transport term and the reacting source term are incorporated into both the chemical sub-integration and the flow sub-integration, which guarantees that the right-hand sides are always consistent with the original differential governing equations for a steady problem. In other words, the present splitting scheme is steady-state preserving. However, since the contribution of the chemical source term is not included in the Jacobian matrix of the LU-SGS iteration, this splitting scheme may degrade the convergence as the chemical time scale becomes much smaller than the flow time scale. Even so, this splitting scheme is attractive since it is easy to implement in the existing LU-SGS method. We need only reset the residuals of the species and energy equations before performing the LU-SGS iteration. Thus, in the second flow sub-integration, the LU-SGS iteration for the species and energy equations can be performed together with the other non-stiff equations, which avoids solving the governing equations separately. This is especially important for the energy equation since it is closely coupled with the continuity and momentum equations.
In addition, in contrast to the Strang splitting scheme, the computational cost of the presented scheme is smaller since only one chemical sub-integration and one flow sub-integration are necessary within one time step determined by the flow time scale.
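To make the structure of the scheme concrete, the sketch below evaluates the replaced residual for a single cell. The chemical sub-step size is passed in as an argument instead of being computed from the spectral radius of ∂S/∂W, and the subsequent implicit LU-SGS flow sub-integration, which consumes this residual, is not reproduced.

import numpy as np

def split_residual(W_n, transport, source, dt, dt_chem):
    # Chemical sub-integration of dW*/dt = T(W^n) + S(W*), W*(0) = W^n,
    # with explicit Euler sub-steps, followed by the replaced residual
    # R = [(W*)^{n+1} - W^n] / dt handed to the LU-SGS flow sub-integration.
    T_n = np.asarray(transport(W_n), dtype=float)
    W_star = np.asarray(W_n, dtype=float).copy()
    t = 0.0
    while t < dt:
        h = min(dt_chem, dt - t)
        W_star = W_star + h * (T_n + np.asarray(source(W_star), dtype=float))
        t += h
    # If T(W^n) + S(W^n) = 0, W* stays at W^n and R vanishes for any dt,
    # which is the steady-state preserving property discussed above.
    return (W_star - np.asarray(W_n, dtype=float)) / dt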
§.§ Computational Configuration and Mesh
To model the combustion flow in the turbine burner, the diffusion flame in a two-dimensional steady transonic mixing layer with strong favorable pressure gradient is considered in this study. The flow condition is from the cases of Fang et al. <cit.>, Mehring et al. <cit.>, and Cai et al. <cit.>. To produce the prescribed streamwise pressure gradient in the mixing layer, a configuration of converging-diverging nozzle is created in this paper, which was not necessary in Fang's and Mehring's work since the boundary-layer approximation was made. It was also avoided in Cai's work by introducing a streamtube thickness function into the two-dimensional equations.
At the inlet of the nozzle, the hot air mixed with burned gases flows into the upper side and comes into contact with the fuel vapor from the lower side. To achieve the prescribed pressure levels in the nozzle passage, given the flow conditions and nozzle height at the inlet, the downstream profiles of the upper and lower surfaces can be determined by the isentropic relations of quasi-one-dimensional flow for air and fuel, respectively. Since this is based on the assumption of quasi-one-dimensional flow without mixing and reaction, there exists a difference between the computed pressure in the diffusion flame and the prescribed one. This is eliminated by reshaping the nozzle profiles according to the pressure difference using the isentropic relations again.
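The initial shaping step can be illustrated with the isentropic quasi-one-dimensional relations, assuming for simplicity a calorically perfect gas with constant γ for each stream; the subsequent iterative reshaping that removes the residual pressure difference is omitted from this sketch.

import numpy as np

def mach_from_pressure(p, p0, gamma=1.4):
    # Isentropic relation between static pressure and Mach number.
    return np.sqrt(2.0 / (gamma - 1.0) * ((p0 / p)**((gamma - 1.0) / gamma) - 1.0))

def area_ratio(M, gamma=1.4):
    # Quasi-one-dimensional area-Mach relation A/A*.
    t = 1.0 + 0.5 * (gamma - 1.0) * M**2
    return (2.0 * t / (gamma + 1.0))**((gamma + 1.0) / (2.0 * (gamma - 1.0))) / M

def half_height(p_x, p0, h_in, p_in, gamma=1.4):
    # Half height of one stream for a prescribed pressure distribution p_x,
    # scaled from the inlet half height h_in at inlet static pressure p_in.
    M_in = mach_from_pressure(p_in, p0, gamma)
    M_x = mach_from_pressure(np.asarray(p_x, dtype=float), p0, gamma)
    return h_in * area_ratio(M_x, gamma) / area_ratio(M_in, gamma)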
Figure <ref> shows the converging-diverging nozzle configuration in the turbulent mixing-layer case. To reduce the disturbance of inlet boundary condition on the downstream flow, a uniform inlet section is added ahead of the mixing layer. The upper side and lower side of the nozzle are almost symmetric, both of which rapidly converge at the inlet, gradually slow down in the middle, and keep diverging after the throat at x = 70 mm. To reduce the slope of the side surfaces near the inlet while keeping them away from the mixing layer at the throat, the nozzle height at the inlet should be carefully chosen. The half height at the inlet is 30 mm for the turbulent case, whereas it is reduced to 3.5 mm for the laminar case due to the thinner mixing layer. As a result, the half heights at the throats for the turbulent and laminar cases are about 3 mm and 0.4 mm, respectively.
Figure <ref> presents the pressure variations along the center line of the nozzle for three cases to be analyzed in Secs. <ref>, <ref> and <ref>, along with the linearly varying pressure imposed in the boundary-layer approximation. The pressure variations in the two laminar cases are sufficiently close to the boundary-layer case. However, the pressure levels for the reacting turbulent case deviate from the linear values, especially in the middle of the nozzle. This is because the thick turbulent shear layer approaches the side surfaces and is evidently perturbed by them in the downstream nozzle. This pressure difference in the reacting turbulent case is believed to have some influence on the development of the mixing layer and combustion process in it, which will be discussed later.
Refs. <cit.> and <cit.> studied the diffusion flame in the mixing layer under 13 different flow conditions by solving the boundary-layer equations. One of those cases, identified as the base case, is chosen to be simulated by using the full Navier-Stokes equations in this paper. The base case, corresponding to a constant streamwise pressure gradient of -200 atm/m, has a static pressure of 30 atm at the inlet. The temperature and velocity of the air at the inlet are T_air = 1650 K and u_air = 50 m/s, respectively. Those of the fuel are T_fuel = 400 K and u_fuel = 25 m/s.
The inviscid slip and adiabatic wall boundary conditions with zero normal pressure gradient are specified on the two side surfaces of the nozzle. At the inlet, the total pressure and total temperature of air and methane are fixed on the upper stream and lower stream, respectively. For turbulent cases, the turbulent intensity and ratio of turbulent to molecular viscosity for air are set as 5% and 10.0 at the inlet, respectively. Those for the fuel are 10% and 100.0. All flow quantities are extrapolated (zero streamwise gradient) at the exit flow boundary since the flow is supersonic there. This ensures that no backward waves propagate into the computational domain at the exit. The segment of the center line between the inlet and the starting point of convergence is set as a symmetric plate to avoid pre-mixing of air and fuel.
Multi-block grids with matched interfaces between neighboring blocks are generated in the nozzle. Figure <ref> shows the grid for the turbulent case, which is rescaled along the vertical direction to obtain the grid for the laminar case. The vertical grid lines cluster near the inlet, whereas the transverse grid lines cluster near the center line and scatter towards the two side surfaces. The height of the first cell off the center line is less than 0.006 mm for the laminar case, and less than 0.05 mm for the turbulent case. The total number of grid cells is 38272.
To check the grid independence, the nonreacting case and reacting case in a turbulent mixing layer are performed on the current grid and another fine grid with 83776 cells, respectively. Figure <ref> compares the profiles of temperature at four streamwise positions for both cases.
The solutions on the two grid levels are almost indistinguishable. The profiles of other flow variables (not shown here for brevity) show similar behavior, indicating achievement of grid independence on the coarser 38272 grid already for the turbulent case. For the laminar case, grid independence is also reached on the coarse grid since the grid size (length of cell size) in the vertical direction is smaller than the grid for the turbulent case because of the smaller throat height. Therefore, the results in the following section are based on the 38272 grid.
§ COMPUTATIONAL RESULTS AND DISCUSSIONS
§.§ Nonreacting Laminar Mixing Layer
To demonstrate the abilities of the present code to deal with multi-component flows, we first examine the nonreacting laminar flow in the nozzle. At the trailing edge of the splitter plate (x = 0), the hot air on the upper stream starts to mix with the fuel vapor on the lower stream, producing velocity and thermal mixing layers in the middle of the nozzle.
Figure <ref> shows the profiles of density and streamwise velocity at four different streamwise positions. The velocity is normalized by the corresponding freestream value in the air side at each position.
The mixing layer grows in thickness along the streamwise direction, and the freestream density decreases due to flow acceleration caused by the favorable pressure gradient. The thickness of the thermal mixing layer is almost the same as that of the velocity mixing layer since unity Prandtl number is used in the computation. In addition, the mixing layer on the upper side is thicker than that on the lower side. This is because the smaller density and larger molecular viscosity resulting from the higher temperature produce a smaller Reynolds number on the upper side although the velocity is higher there.
§.§ Reacting Laminar Mixing Layer
The contours of temperature for the laminar mixing layer are shown in Fig. <ref>. At the trailing edge of the splitter plate, the hot air on the upper stream starts to mix with the fuel vapor on the lower stream. Chemical reaction happens shortly downstream of the trailing edge. A diffusion flame is established within the mixing layer as indicated by the high-temperature region slightly biased towards the air side. The reason is that the momentum of the fuel on the lower stream is higher than that of the air on the upper stream, resulting in an upward tilting of the mixing layer. The flame continues to move upward with the mixing layer along the streamwise direction. The freestream temperatures on the two sides decrease downstream because of the flow acceleration under the favorable pressure gradient. The peak temperature within the diffusion flame also decreases along the streamwise direction due not only to the decreasing freestream temperatures but also to the reduced reaction rate resulting from the reduced freestream temperatures and reactant concentrations.
Figure <ref> shows the profiles of density and streamwise velocity at four different streamwise positions. The results computed by Fang et al. <cit.> using the boundary-layer approximation are also shown for comparison. The present density and velocity profiles agree very well with those by Fang et al. Within the diffusion flame, however, the present density is slightly smaller and the velocity is slightly larger. This discrepancy is mainly because of the different governing equations used. In the flame, the density reaches a trough where the temperature peaks since the pressure is the same along the transverse direction. Along the streamwise direction, the density at the freestreams and in the flame decreases, which is consistent with the contours of temperature in Fig. <ref>. Although the normalized freestream velocity is unchanged along the streamwise direction, the peak velocity in the reaction region increases, which is different from the case in the nonreacting mixing layer in Fig. <ref>. This happens because the lighter gas in the reaction region gets accelerated more under the same pressure gradient.
Figure <ref> compares the profiles of mass fraction of products (CO_2 and H_2O) at four different streamwise positions. At the positions near the inlet (x = 3 mm and x = 5 mm), both the thickness of reaction region and the peak value of mass fraction by the full Navier-Stokes equations differ from those by the boundary-layer equations. This is primarily attributed to the two-dimensional effects dominated at the initial stage of mixing-layer flow, which is neglected in the boundary-layer approximation. This is also due to the difficulty to exactly maintain a constant streamwise pressure gradient at the initial stage of the two-dimensional nozzle in the present computation. For both computations, the profiles of mass fractions vary sharply near their peaks, and the peak values obviously increase along the streamwise direction. This demonstrates that the chemical reaction dominates the flow over the molecular diffusion at the initial stage of the mixing layer. At the two downstream positions (x = 30 mm and x = 40 mm), the two computational results are quite close to each other with the reaction region of the present simulation slightly biased to the upper side, consistent with the profiles of density and normalized velocity in Fig. <ref>. For both computations, the peak mass fraction of products, corresponding to the stoichiometric reaction, keeps unchanged at the two positions, while the thickness of reaction region continuously increases. This indicates that the diffusion begins to dominate the flow as the mixing layer further develops.
§.§ Reacting Turbulent Mixing Layer
After validating the numerical method by the laminar case, the present method is extended to the reacting turbulent case. The flow condition is the same as that used for the laminar case, except that the nozzle height is enlarged about eight times to prevent the thicker turbulent mixing layer from touching the side surfaces downstream.
To be consistent with the case in Mehring et al.'s work <cit.>, the standard kω model proposed by Wilcox <cit.> in 1988 rather than the SST model is used to evaluate the turbulent viscosity in this section.
The kω model exhibits a strong sensitivity to the freestream value of ω <cit.>. We set k = 2.5 × 10^-4 m^2/s^2 and ω = 500 s^-1 at the inlet, same as the setup of Mehring et al. <cit.>.
The contours of temperature for the turbulent mixing layer are shown in Fig. <ref>, along with the boundary-layer results by Mehring et al. <cit.>. Similar to the laminar case, a diffusion flame appears on the upper air side downstream of the splitter plate. The growths of both the mixing layer itself and flame are significantly larger than those in the laminar case shown in Fig. <ref> due to stronger diffusion of turbulence. In Mehring et al.'s results, the ignition occurs after a certain distance (about 10 mm) downstream from the trailing edge of the splitter plate, whereas it ignites much earlier (about 5 mm after the splitter plate) in the present computation, as indicated by the high-temperature regions. This discrepancy is attributed to two reasons. On the one hand, the pressure gradient is exactly prescribed as -200 atm/m in the boundary-layer equations. However, it is produced by the converging-diverging side walls in the present full Navier-Stokes equations. The pressure levels near the nozzle inlet cannot stay exactly the same as the prescribed values due to the strong two-dimensional flow caused by the large slopes of the side walls. On the other hand, the boundary-layer approximation is not sufficiently accurate at the initial stage of mixing layer where the flow is fully two-dimensional in nature. The flame by the boundary-layer approximation spreads like a straight line along the streamwise direction. In the present computation, the flame keeps straight on the whole but with slight deflection near the inlet and after the throat. The deflection near the inlet is again due to the two-dimensional effects. The flame distorts after the throat since it is quite close to the top side surface due to the strong turbulent diffusion. We mainly focus on the streamwise ranges between x = 10 mm and x = 70 mm in the following.
The present chemistry model does not explicitly include the influence of turbulent kinetics. However, the chemical reaction affects the turbulent transport of flame in the mixing layer. Figure <ref> shows the contours of the reaction rate computed by Eq. (<ref>). The intense chemical reaction is concentrated around a narrow band on the upper side, where the fuel and oxidizer have been fully mixed. Note that the reaction rate on the air side is remarkably higher than that on the fuel side due to the negative exponent of fuel concentration in Eq. (<ref>). As the mixing layer develops, the reaction region becomes thick due to diffusion, while its strength reduces due to the decreased temperature and concentrations of reactants. Similar to the laminar case, the mixing-layer flow is dominated by both reaction and diffusion in the beginning, while it becomes diffusion-dominated further downstream. High velocity gradient is generated in the reaction region under the combined actions of convection and diffusion. This induces significantly strong turbulent production in the reaction region, as indicated by the contours of production rate of turbulent kinetic energy in Fig. <ref>. The production rate is computed by P = μ_TS^2 in the kω model. Another stronger-production region near the center line originates from the shear in the turbulent mixing layer. The intense production of turbulence leads to high turbulent kinetic energy and thus large turbulent viscosity in the flame as shown in Fig. <ref>. As a result, not only the diffusion in the mixing layer itself is strengthened by the turbulence, but also the turbulent diffusion in the flame region is further enhanced by the chemical reaction.
Figure <ref> compares the profiles of temperature and density at four different streamwise positions. Compared to the laminar solutions in Fig. <ref>, the thickness of the turbulent mixing layer is one order of magnitude larger. However, the basic behavior remains the same. The present temperature and density agree with those by Mehring et al. in general. The boundary-layer solution is more diffusive as indicated by the thicker temperature and density shear layers. This is due to the higher production rate of turbulent kinetic energy, and thus the higher turbulent viscosity in the mixing layer and flame region for the boundary-layer approximation, as shown by the profiles at x = 37.5 mm in Fig. <ref>. The stronger turbulent diffusion in the boundary-layer solution induces a lower peak temperature in the middle of the flame.
Figure <ref> compares the profiles of mass fraction of each species at four different streamwise positions by the full Navier-Stokes and boundary-layer equations. At each streamwise position, the mass fraction of products reaches its peak at the transverse location where both methane and oxygen are simultaneously depleted. Nitrogen is inert in the present reaction model and is thus smoothly diffused from the air side to the fuel side. The peak product mass fraction stays almost constant along the streamwise direction. This is because the balance among convection, diffusion, and production in the species transport equations is independent of the pressure gradient after ignition.
Similar to the profiles of density and temperature in Fig. <ref>, the two solutions for the mass fractions generally agree well with each other. The profiles of species mass fractions by the boundary-layer equations are slightly thicker than the full Navier-Stokes equations. The peak mass fractions of products by the full Navier-Stokes equations have slightly larger values.
At the first two streamwise positions (x = 12.5 mm and x = 25.0 mm), there is a local peak in the mass fraction of oxygen slightly below the center line. In addition, the peak value in the boundary-layer solution is larger than those in the full Navier-Stokes solution. This can be explained by the contours of mass fraction of oxygen shown in Fig. <ref>. The flame is ignited after a certain distance downstream from the splitter plate. Before the ignition, the flow is dominated by convection and diffusion. Due to the higher velocity on the upper air side, a significant amount of oxygen is convected to and then diffused in the lower fuel side in front of the flame. Compared to the boundary-layer solution, a smaller quantity of oxygen is transported into the lower side in the present solution since it ignites much earlier. As a result, the peak value of oxygen mass fraction is smaller.
§.§ Comparisons of Pure Air and Vitiated Air
To approximate the inlet flow conditions in a turbine burner, the case with vitiated air at the upper-stream inlet is considered. The vitiated air is used to simulate the exhaust gas from the upstream primary combustion chamber of a turbine engine.
It is estimated that, to reach a turbine inlet temperature of 1650 K, a fuel-air mass ratio of 0.03 is needed for the stoichiometric combustion in the primary combustion chamber. As a result, the vitiated air at the turbine inlet consists of 73.77% N_2, 11.01% O_2, 8.04% CO_2, and 7.18% H_2O by mass fraction. These compositions of vitiated air are specified as the inlet boundary conditions of species mass fractions at the upper stream of the nozzle. The other flow conditions are kept the same as the pure air case. To avoid the influence of nozzle geometry, the profiles without reshaping the side walls are applied in this section.
Figure <ref> compares the profiles of temperature and density at different streamwise positions by using the inlet conditions of pure air and vitiated air.
Compared to the pure air case, the peak flame temperature in the vitiated air case is reduced since the lower oxygen concentration in the upper stream significantly weakens the chemical reaction. As a result, higher density levels are found in the flame for the vitiated air case under the same pressure levels as the pure air case.
The transverse location of peak temperature and thus minimum density, corresponding to stoichiometric combustion, is slightly shifted upward for the vitiated air case. The oxygen is redistributed along the transverse direction under the actions of molecular and turbulent diffusion. However, the concentration of oxygen is globally lower in the vitiated air case. Thus, the amount of oxygen required for the stoichiometric combustion moves farther towards the upper freestream.
Compared to the pure air case, the thickness of the shear layer is decreased for the vitiated air case, as indicated by the profiles of temperature and density. This is because the reduced velocity gradients in the shear layer resulting from the weak chemical reaction produce a low production of turbulent kinetic energy. Consequently, the turbulent diffusion along the transverse direction is reduced.
It is interesting to note that turbulence modeling has significant influence on the developments of flow and combustion in the mixing layer. Comparison between the pure-air results by the SST model in Fig. <ref> and those by the kω model in Fig. <ref> shows that the mixing layer by the SST model develops slower. For example, the thickness of the shear layer on the air side is about 70% of the kω model at x = 25.0 mm, and it decreases to less than half at x = 50.0 mm. This weaker turbulent diffusion of the SST model reduces the mixing between air and fuel, and thus the chemical reaction, as seen from the lower peak temperature at x = 25.0 mm and x = 50.0 mm. This discrepancy between the two RANS turbulence models indicates that more elaborate turbulence modeling, such as large-eddy simulation or detached-eddy simulation, is necessary to accurately resolve the interaction between chemical reaction and turbulent flow.
The profiles of mass fraction of products at different streamwise positions are shown in Fig. <ref>. Similar to the temperature and density, the thickness of profile of mass fraction is obviously reduced in the vitiated air case. Although there already exists initial CO_2 and H_2O in the freestream vitiated air, the peak mass fraction of products is still lower than that in the pure air case due to the significantly reduced chemical reaction. In fact, the flame almost becomes extinct after x =75.0 mm, as indicated by the extremely low levels of product mass fraction and temperature at x =75.0 mm and x = 100.0 mm. The upward shift of the peaks in the vitiated air case is also clearly observed.
§.§ Reacting Turbulent Flow in Turbine Cascade
The reacting flow in a highly-loaded transonic turbine guide vane, named VKI LS89 <cit.>, is simulated and compared for the cases with pure air and vitiated air inlets. The chord of the vane is 76.674 mm, and the pitch-to-chord ratio is 0.85. The stagger angle of the blade is 55^∘. The multi-block structured grid, as shown in Fig. <ref>, is generated for a single cascade passage with translational periodicity on the pitchwise boundaries (green curves in Fig. <ref>). There are 317 grid points around the blade (blue curve in Fig. <ref>), with the points concentrated near the leading edge and trailing edge. The dimensionless distance y^+ of the first grid point away from the blade is less than one. The total grid has 26416 cells, which are divided into 7 blocks for the parallel computation.
At the inlet of the turbine cascade, methane with total temperature of 400 K is injected over part of the middle section. Air with total temperature of 1650 K is specified at the remaining part of the inlet section. The total pressure is uniform 166834 Pa, and the inlet flow angle is 0. The back pressure at the outlet is set as 101325 Pa. This produces an averaged streamwise pressure gradient of -20 atm/m within the cascade passage, which is an order of magnitude lower compared to the mixing-layer cases.
Figure <ref> shows the contours of temperature for the pure air and vitiated air cases. Two diffusion flames, generated on the interfaces between the fuel and air, are transported downstream in the cascade passage. One of them is near the suction surface, and the other one goes through the middle of the passage. After the trailing edge, the middle branch merges with the suction-surface branch from the adjacent blade, and then they move downstream together.
The variations of temperature for the vitiated air case are similar to the pure air case, but the flame temperature levels are obviously reduced. The maximum temperature within the flames is about 3200 K for the pure air case, while it reduces to about 2400 K for the vitiated air case.
Same as in the mixing-layer cases, the streamwise and transverse pressure gradients produced by the curved suction and pressure surfaces have significant effects on the flow and combustion process in the turbine cascade, which can be seen from the contours of chemical reaction rate for both pure air and vitiated air cases in Fig. <ref>. The region with high reaction rate in the figure indicates the flame.
Ignition immediately happens at the turbine inlet, even if the reaction rate is relatively low due to the insufficient mixing between fuel and air.
Disturbed by the blade, the pressure starts to decrease along the streamwise direction near the leading edge. At first glance, the chemical reaction would be weakened by this favourable pressure gradient as in the mixing-layer cases above. However, the local velocity gradients resulting from the blade surface curvature significantly reinforce the molecular and turbulent diffusion and thus make the mixing between fuel and oxidizer sufficient enough, which consequently enhances the chemical reaction. This is why both the strength and thickness of the flames significantly increase near the leading edge. It turns out that the mixing dominates the chemical reaction over the pressure gradient at the initial stage of the turbine cascade.
However, after the suction peak (x = 10 mm) on the suction surface, the two flames gradually become weak until reaching the trailing edge due to the strong favorable streamwise pressure gradient produced by the converged turbine passage. After the trailing edge, in the absence of constraints from blade surfaces, the gases within the two flames mix with the low-speed wakes from the suction and pressure surfaces. Hence, the two flames are enhanced again in both strength and thickness, and finally they merge together.
The variations of temperature in the turbine passage in Fig. <ref> are clearly consistent with those for reaction rate. Note that the overall pressure gradient in the turbine passage is much lower than that in the mixing-layer case. Hence, extinction does not happen in either of the two flames within the accelerating turbine passage, although the reaction rate in the vitiated air case is evidently reduced. This is especially important for the flameholding in high-speed flow.
Figure <ref> shows the contours of mass fraction of methane for the pure air and vitiated air cases. The methane from the inlet is transported downstream and continuously consumed on the interfaces between fuel and oxidizer in the turbine passage. The mass fraction of methane is significantly reduced due to the increased reaction rate after the two flames merge. However, the methane is not depleted until it is transported out of the computational domain by the main stream. This is mainly due to the excessive fuel provided at the inlet. Fuel will be depleted if we further decrease the area portion it occupies at the inlet.
Chemical reaction in the turbine passage affects the aerodynamic performance of the blade. The distributions of pressure over the turbine blade for the nonreacting case, pure air case, and vitiated air case are compared in Fig. <ref>, in which the pressure is normalized by the total pressure at the inlet. The pressure distributions on the pressure surface are not affected by the combustion since the two flames in the turbine passage are far away from it. However, the pressure distributions on the suction surface are different for the three cases, especially after the suction peak. Compared to the nonreacting case, the pressure in the pure air case is higher on the suction surface since the increased temperature by the intense reaction in the passage and wake reduces the pressure diffusion in the turbine. This results in a lower pressure difference between the pressure and suction surfaces and thus reduces the aerodynamic loading of the blade. Resulting from the weaker chemical reaction, the pressure levels in the vitiated air case are between the levels in the nonreacting case and the pure air case.
§ CONCLUSION
A finite-volume method for the compressible reacting Reynolds-averaged Navier-Stokes equations is developed by using a steady-state preserving splitting scheme to treat the stiff source terms.
Laminar and turbulent reacting flows in an accelerating mixing layer are studied and compared to the boundary-layer solutions. The influence of vitiated air on the combustion process and aerodynamic performance is investigated for the cases of mixing layer and turbine cascade.
For the reacting flow in the accelerating mixing layer, a diffusion flame is established slightly biased towards the air side after the splitter plate and then transported downstream. The chemical reaction strongly enhances turbulent transport due to intensive production of turbulence by the increased velocity gradients and thus produces large turbulent viscosity in the reaction region. Compared to the laminar case, the turbulent shear-layer thickness is one order of magnitude larger. However, the basic behaviors of the two cases remain the same.
Vitiated air has significant influence on the combustion process and aerodynamics. In the mixing layer, the peak temperature in the flame reduces while the minimum density increases compared to the pure air case. The location of peak reaction is slightly shifted upward to the air side. The thickness of shear layer is decreased due to the reduced turbulent diffusion by the weak chemical reaction. In the turbine cascade, the variations of temperature for the vitiated air case are similar to the pure air case, but the flame temperature levels are lower. Although the reaction rate for the vitiated air case is evidently reduced, the extinction does not happen within the accelerating passage.
The turbine cascade analysis indicates viability for the turbine burner concept. Future three-dimensional large-eddy simulation with improved chemistry modeling will be pursued.
§ ACKNOWLEDGMENTS
The research was supported by the Office of Naval Research through Grant N00014-21-1-2467 with Dr. Steven Martens as program manager.
|
http://arxiv.org/abs/2307.04514v1 | 20230710122050 | Improving Heterogeneous Graph Learning with Weighted Mixed-Curvature Product Manifold | [
"Tuc Nguyen-Van",
"Dung D. Le",
"The-Anh Ta"
] | cs.LG | [
"cs.LG",
"cs.AI"
] |
Improving Heterogeneous Graph Learning with Weighted Mixed-Curvature Product Manifold
Tuc Nguyen-Van
Dung D. Le
The-Anh Ta
August 12, 2023
============================================================================================================================================
In graph representation learning, it is important that the complex geometric structure of the input graph, e.g. hidden relations among nodes, is well captured in embedding space.
However, standard Euclidean embedding spaces have a limited capacity in representing graphs of varying structures.
A promising candidate for the faithful embedding of data with varying structure is product manifolds of component spaces of different geometries (spherical, hyperbolic, or Euclidean).
In this paper, we take a closer look at the structure of product manifold embedding spaces and argue that each component space in a product contributes differently to expressing structures in the input graph, hence should be weighted accordingly.
This is different from previous works which consider the roles of different components equally.
We then propose , a data-driven method for learning embedding of heterogeneous graphs in weighted product manifolds.
Our method utilizes the topological information of the input graph to automatically determine the weight of each component in product spaces. Extensive experiments on synthetic and real-world graph datasets demonstrate that is capable of learning better graph representations with lower geometric distortion from input data, and performs better on multiple downstream tasks, such as word similarity learning, top-k recommendation, and knowledge graph embedding.
We provide the source code of our implementation at https://github.com/sharecodesubmission/weighted_product_manifold.
§ INTRODUCTION
Representation learning aims to acquire the ability to effectively embed meaningful data into feature spaces <cit.>. In traditional representation learning models, Euclidean embedded spaces have been predominantly utilized. However, the uniform geometric structure of Euclidean spaces has certain limitations when it comes to providing accurate representations for various types of structured data, particularly graphs such as tree structures <cit.> or circular graphs <cit.>. Consequently, there is a growing interest in developing methods that enable the embedding of graph features in non-Euclidean spaces <cit.>.
Real-world data frequently exhibit diverse patterns and complex geometries that cannot be adequately captured by the uniform structures of Euclidean embedding spaces. It has been observed that Euclidean spaces are often insufficient for embedding various types of real-world graph data, such as hierarchical structures that induce negative curvature geometry <cit.>, or circle structures <cit.> that require positive curvature geometry.
Previous research has demonstrated that using spherical embedding spaces instead of Euclidean ones can result in minimal distortion errors when embedding data with circle and ring structures <cit.>. Moreover, models that solely utilize embedding spaces of a single geometric type often struggle to capture mixed structures effectively. These models tend to produce embedding representations with significant geometric distortion compared to the underlying geometry of the input data <cit.>. In contrast, approaches employing product spaces composed of components with different geometries have shown promising results in graph representation learning.
Problem
Current geometric embedding models, as seen in <cit.>, typically employ product spaces with equally weighted components. In this setup, the learnable parameters are fitted to the training data samples across all component spaces in a uniform manner. However, we contend that this approach hinders the robustness of models when learning data with diverse geometric structures.
Specifically, when the input data predominantly exhibit a particular geometric type compared to others, updating all components equally may not be optimal. Instead, it would be advantageous to assign more emphasis to the dominant geometric type during the parameter update process. This would allow the model to better capture and represent the most prevalent geometric structure in the data.
Our approach
To address this issue, we introduce a novel data-driven approach that incorporates a scoring mechanism for each component in the product spaces. This scoring mechanism enables the automatic learning of weights for each component based on the geometric structures present in the input data.
By considering the specific geometric characteristics of the data, our method allows for the construction of flexible and adaptive product spaces. This includes not only updating the weights of the components but also adjusting the geometric curvatures of the spaces.
As a result, our models are capable of effectively capturing and representing the complex geometric structures inherent in the data, leading to improved embedding performance.
Contributions
We summarize our contribution as follows.
Firstly, to the best of our knowledge, this is the first work that considers the structure at each component of the product manifold and proposes that each component space contributes differently to expressing various geometric structures in the input graph and hence should be weighted accordingly.
Secondly, we propose , a data-driven method for learning embeddings of heterogeneous graphs in weighted product manifolds.
Thirdly, we conduct extensive experiments on both synthetic and real-world datasets to validate our approach to the various downstream tasks.
§ RELATED WORKS & BACKGROUND
The field of machine learning has witnessed a proliferation of works focusing on learning data representations in non-Euclidean spaces, as evidenced by studies such as <cit.>. However, recent research by <cit.> has highlighted the computational challenges and numerical instability faced by hyperbolic graph convolution networks, particularly in high-dimensional settings. To address this issue, <cit.> proposed a random feature mapping technique that utilizes the eigenfunctions of the Laplace operator to approximate an isometry-invariant kernel on hyperbolic space.
Another notable approach in this area is CurvGAN <cit.>, which introduces a GAN-based graph representation method that preserves the topological properties of discrete structures by approximating them as continuous Riemannian geometric manifolds. However, these methods primarily focus on a single embedding space and may struggle to effectively capture the underlying structure of the input data.
In contrast, the product of spaces has been shown to possess the capability to achieve higher generalization and effectively capture the intrinsic structures of graphs with mixed geometries <cit.>. By combining multiple spaces with different geometric characteristics, the product of spaces approach offers improved representation learning and a more comprehensive understanding of complex data structures.
While several approaches have explored the use of product spaces, few have addressed the challenges associated with defining and updating the component spaces. One such work, Switch Spaces <cit.>, introduces a method that selects a combination of K components from a set of N spaces based on input specifications. It employs a gating mechanism to score and choose subspace components using pairwise relationships in the training data. However, since entities in a graph are not independent and identically distributed (iid), the component spaces selected based on individual input instances may not effectively capture the overall relationships between nodes in the graph. Consequently, Switch Spaces requires embedding spaces with high dimensions (e.g., 100, 500) to achieve competitive performance in various downstream tasks like knowledge graph embedding and recommendation.
Unfortunately, this approach unintentionally sacrifices the advantages offered by non-Euclidean models, which can achieve compactness by requiring smaller dimensions to achieve the same capacity as Euclidean space. In our study, we propose a novel approach that leverages a richer and more robust representation space to capture the diverse geometric structures present in graph data. By enhancing the quality of embeddings, our research complements existing graph-based learning methods and enables more effective representation learning.
Non-Euclidean embedding spaces
Non-Euclidean representation learning has emerged as a powerful approach, delivering state-of-the-art performance across diverse tasks. Specifically, hyperbolic space has proven effective in tasks such as network embedding <cit.>, recommendation systems <cit.>, and knowledge graphs <cit.>. On the other hand, spherical space excels in modeling directional similarity and data with cyclical structures <cit.>. Each of these spaces possesses unique geometric features, and the selection of an appropriate embedding space should be guided by the inherent structure of the data. By leveraging the most suitable embedding space, we can effectively capture the intrinsic properties and relationships within the data, leading to superior performance across a wide range of applications.
Product manifold
Product manifolds are constructed by combining embedding spaces with different geometric types, such as Euclidean, hyperbolic, and spherical spaces. In the context of representation learning, the concept of product spaces was introduced in <cit.>, where each component of the product space has a constant curvature. The curvature of the product space is determined by the sum of curvatures of its individual components <cit.>, resulting in a constant curvature overall. This property enables product spaces to capture a wide range of curvatures with lower distortion compared to a single space <cit.>. As a result, product spaces are particularly well-suited for real-world data that exhibit mixtures of geometric structures.
For example, <cit.> developed a Mixed-curvature Variational Autoencoder, which efficiently trains a VAE with a latent space consisting of a product of constant curvature Riemannian manifolds. Additionally, the heterogeneous structure present in user-item interaction graphs can be effectively learned by utilizing product spaces with different curvature components <cit.>.
Distortion error of embedding
Given metric spaces U and V equipped with distances d_U and d_V respectively, an embedding is a continuous and injective mapping f: U → V. To evaluate the quality of an embedding, we use the average distortion metric D_avg(f), which calculates the average distortion over all pairs of points. Distortion between a pair of points a and b is defined as |(d_V(f(a), f(b))/d_U(a, b))^2 - 1|.
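For concreteness, a minimal NumPy sketch of the average distortion metric is given below; it assumes the pairwise distances in the input space and in the embedding are already available as matrices (variable names are illustrative).

import numpy as np

def average_distortion(D_graph, D_emb):
    # D_graph[i, j]: distance d_U(i, j) in the input metric space
    # D_emb[i, j]: distance d_V(f(i), f(j)) between the embedded points
    n = D_graph.shape[0]
    iu = np.triu_indices(n, k=1)                  # all unordered pairs i < j
    ratios = (D_emb[iu] / D_graph[iu]) ** 2
    return float(np.mean(np.abs(ratios - 1.0)))   # D_avg(f)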
§ PROPOSED METHOD:
In this section, we present our approach to learning the weights between sub-geometries with different curvatures in the product of embedding spaces. Our objective is to ensure that the curvatures of the graph embedding spaces closely match the curvatures of the graph itself. To accomplish this, we introduce a novel gating mechanism that assigns a score to each component space.
Motivated by coarsening approaches <cit.>, we design a gating mechanism that leverages the message passing of information across various regions of the input graph, enabling the extraction of topology information. Our gating mechanism divides the graph into multiple parts, where each sub-graph is predominantly characterized by a specific type of geometry, such as a tree or cycle structure.
For example, in a graph consisting of a ring of trees where the tree structure dominates, we assign higher scores to hyperbolic components in the product space compared to spherical components. This choice is made to improve the quality of the embeddings produced.
By applying this gating mechanism and adjusting the weights between the different sub-geometries, we aim to achieve a more accurate representation of the graph's underlying structures, resulting in improved embedding results.
Problem formulation
Given three types of geometry: Euclidean (𝔼), Hyperbolic (ℍ), and Spherical (𝕊).
Let ℳ_1, ℳ_2, …, ℳ_N be N component spaces, where each ℳ_i is of one geometric type among {𝔼, ℍ, 𝕊} and has dimension b_i.
The goal of our approach is to learn the score 𝐰 = (w_1, …, w_N) ∈ℝ^N from the input graph data on each component of product manifold embedding space in such a way that the embedding of input graph into P = w_1 ℳ_1 × w_2 ℳ_2 ×…× w_N ℳ_N will have lowest possible geometric distortion.
§.§ Coarsening input graph data
Hierarchical pooling layers
Given input graph 𝒢, with n > 0 nodes, adjacency matrix 𝐀∈{ 0, 1}^n × n and node features 𝐗∈𝐑^n × d.
The matrix 𝐀 represents graph structure: 𝐀(i, j) = 1 if there is an edge connecting two nodes i, j, otherwise 𝐀(i, j) = 0.
D is the diagonal degree matrix of the graph 𝒢, where D_ii = ∑_j 𝐀_ij.
We use hierarchical pooling-based GCNs to learn cluster assignments.
There are two GCNs with two different sets of parameters in this module.
At each layer l, the soft cluster assignment matrix 𝐒^(l)∈𝐑^n_l-1× n_l is computed as follows:
𝐒^(l) = softmax (GNN_1^(l)(𝐀^(l-1), 𝐗^(l-1))), with (𝐀^(0), 𝐗^(0)) = (𝐀, 𝐗).
Then, we apply the second GNN on 𝐒^(l) to compute the graph representation at layer l:
𝐗^(l) = 𝐒^(l)^T (GNN_2^(l)(𝐀^(l-1), 𝐗^(l-1))) and 𝐀^(l) = 𝐒^(l)^T 𝐀^(l-1)𝐒^(l).
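A simplified NumPy sketch of one pooling layer is shown below. A single row-normalized propagation step stands in for GNN_1 and GNN_2, so it only illustrates how S^(l), X^(l), and A^(l) are formed; W1 and W2 are illustrative weight matrices rather than the authors' exact parameterization.

import numpy as np
from scipy.special import softmax

def pooling_layer(A, X, W1, W2):
    # A: (n, n) adjacency, X: (n, d) node features,
    # W1: (d, n_next) and W2: (d, d) illustrative GNN weights
    A_hat = A + np.eye(A.shape[0])                    # add self-loops
    P = A_hat / A_hat.sum(axis=1, keepdims=True)      # row-normalized propagation
    S = softmax(P @ X @ W1, axis=1)                   # soft cluster assignment S^(l)
    X_next = S.T @ (P @ X @ W2)                       # coarsened features X^(l)
    A_next = S.T @ A @ S                              # coarsened adjacency A^(l)
    return S, X_next, A_next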
Coarsening input graph
The hierarchical pooling layer produces a coarsened graph with m < n nodes, a weighted adjacency matrix A' ∈ℝ^m × m, and node embeddings Z' ∈ℝ^m × d.
This process is then repeated L times, resulting in a GNN model with L layers that operate on the input graph and a series of coarser versions of it.
The soft assignment matrix S^(l) assigns each node at layer l to a cluster at the next layer l+1.
In other words, each row of S^(l) corresponds to one of the n_l nodes or clusters at layer l, while each column of S^(l) corresponds to one of the n_l+1 clusters at layer l+1.
In our approach, we treat the number of clusters as a hyperparameter and set n_l+1 = N, where N is the number of components in the product space P.
Each row of S^(l) shows the degree of membership of a node to each component space in P.
Attention pooling
We use an attention mechanism, with the matrix 𝐒^(l) as input, to obtain the influence vector for each subspace.
Consider the matrix 𝐒 in the form 𝐒 = [𝐡_1, 𝐡_2, …, 𝐡_N], with 𝐡_t ∈ℝ^d, and a trainable vector 𝐔∈ℝ^d.
Self attention:
We define a relevant scalar weight for each element of the sequence through a softmax layer as follows w_t = softmax(𝐡_t^T 𝐔).
Given the set of weights over all the elements of the sequence, we can then obtain the pooled representation as the weighted average of the hidden states
s = ∑_t = 1^N 𝐡_t^T 𝐰_t.
Multi-head self attention:
Considering a number of k heads for the multi-head attention, 𝐡_t = [𝐡_t1, 𝐡_t2, …, 𝐡_tk] where 𝐡_tj∈ℝ^d/k and size of each head is d/k.
In the same sense, we have a trainable parameter 𝐔 =[𝐮_1 𝐮_2 …𝐮_k] where 𝐮_j ∈ℝ^d/k.
A separate attention is then applied over each head of the encoded sequence through a softmax function:
w_tj = softmax(𝐡_tj^T 𝐮_j), where w_tj corresponds to the attention weight of head j on element t.
A soft weight representation for each subspace is computed as follows:
s_j = ∑_t=1^N 𝐡_tj^T 𝐰_tj.
This method allows the multi-head self-attention network to extract different kinds of information over different regions of the encoded sequence.
In the end, 𝐬∈ℝ^N represents the average weight of N component spaces in the product manifold P over the n_l clusters.
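The single-head variant can be sketched as follows, where each row of H summarizes one component space and U is the trainable query vector; this is one simplified reading of the pooling step, not necessarily the exact construction used in the paper.

import numpy as np
from scipy.special import softmax

def component_weights(H, U):
    # H: (N, d) matrix whose t-th row summarizes component space t
    # U: (d,) trainable query vector
    w = softmax(H @ U)        # one non-negative weight per component, summing to 1
    return w                  # used as the scores s of the N component spaces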
§.§ Objective function
Let 𝐬∈ℝ^N be the weight vector of N components based on the data's local geometry information.
The distance between x_i, x_j ∈ P is computed following d_P^2(x_i, x_j) = ∑_k = 1^N 𝐬_k dist^2 (x_i^k, x_j^k).
Then the base objective ℒ_base is defined as:
ℒ_base = ∑_1 ≤ i < j ≤ n|(d_P(x_i, x_j)/d_G(X_i, X_j))^2-1|
Finally, the total average distortion objective function is defined as ℒ = ℒ_base + ℒ_aux,
where ℒ_aux = ℒ_LP + ℒ_e is a combination of the link prediction loss (ℒ_LP) and the entropy regularization loss (ℒ_e).
More precisely, ℒ_LP = ‖𝐀^(l) - 𝐒^(l)𝐒^(l)^T‖_F at each layer l, where ‖·‖_F denotes the Frobenius norm; and
ℒ_e = 1/n∑_i=1^n H(𝐒_i), where H(𝐒_i) is the entropy of the i-th row of matrix 𝐒.
Minimizing ℒ_LP means enforcing close nodes to be pooled together, while
minimizing ℒ_e makes the output cluster assignment for each node close to a one-hot vector so that the membership for each cluster is clearly defined.
Our total average distortion ℒ is optimized with Algorithm <ref>.
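A plain-Python sketch of the weighted product distance and the base distortion objective is given below; dists is a list of per-component distance functions, and the gradient-based optimization itself is omitted (this is only an illustration of the loss, not the full training loop).

def product_sq_dist(x, y, dists, s):
    # x, y: lists of per-component coordinates; s: component weights
    return sum(s_k * d_k(x_k, y_k) ** 2
               for s_k, d_k, x_k, y_k in zip(s, dists, x, y))

def base_loss(emb, d_graph, dists, s):
    # emb[i]: component coordinates of node i; d_graph[i][j]: graph distance
    n, loss = len(emb), 0.0
    for i in range(n):
        for j in range(i + 1, n):
            loss += abs(product_sq_dist(emb[i], emb[j], dists, s)
                        / d_graph[i][j] ** 2 - 1.0)
    return loss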
§.§ Physical meaning of subspace weights
In manifold representation learning, the goal is to embed data into appropriate embedding spaces where the curvature of the embedding matches the curvature of the original data. In the case of a product manifold, each data point is partially embedded in different subspaces with varying curvatures.
Our work explores the relationship among the curvatures of all the subspaces and introduces a partial update mechanism for the embedding space based on their respective influence scores. In the importance score box of Model Architecture (Figure <ref>), if the input data is predominantly characterized by hierarchical structures, the importance score of the hyperbolic embedding component (s_2) will receive a larger value compared to the others (s_1 and s_3).
In Algorithm <ref>, we update the subspaces' curvatures and the embedding itself. The higher the curvature embedding scores, the more effort is required to minimize them. As a result, the negative curvature loss should contribute more to the overall loss, leading to more active updates of the embedding spaces associated with negative curvature compared to the other spaces. This ensures that the embedding adapts to the data's curvature characteristics and effectively captures the underlying structures.
§ EXPERIMENTS
This section presents our experimental evaluation of the proposed model's performance across various learning tasks. We begin by evaluating the model's effectiveness in improving graph reconstruction, as described in section <ref>.
Following this, we apply our framework to downstream tasks including recommendation systems, knowledge graph embedding, node classification, graph classification, and word similarity.
§.§ Graph reconstruction
We perform experiments on both synthetic and real-world datasets to evaluate the performance of our proposed model.
More information on baselines and metrics is shown in Appendix <ref>.
Model performance on synthetic datasets
Table <ref> shows the average distortion (D_avg) of our model on the three synthetic graphs. When d = 3, our model achieves D_avg = 0.104 with the product manifold s_1 ℍ^2×s_2 𝕊^1.
Meanwhile, without any constraints in subspace curvatures (PM <cit.>), the distortion measure of ℍ^2×𝕊^1 on the Cycle graph is 0.11.
Overall, for all three synthetic graphs, our proposed model improves upon the main contender method PM from <cit.> by 5.4 %, 16.3 %, and 18.6 %, respectively (Table <ref>).
A similar trend continues at the higher dimension d=5, where our proposed method improves upon the baseline by 17.3%, 3.3%, and 11.9%, respectively (Table <ref>).
Model performance on benchmark datasets
We first employ a single space to generate embedding representations for each dataset in order to explore its intrinsic geometry.
Based on these observations, we develop heuristics for the dataset characteristics and utilize them to select the component in the model space product.
Then, the learning process optimizes the curvature of each subspace according to the dominant graph structure.
Figure <ref> presents the average distortion D_avg of embeddings into single model spaces for three complex benchmark datasets, as the number of embedding dimensions increases within the range of [5, 100].
We can see that, with the Cs PhDs and Power dataset, D_avg is smaller in hyperbolic space than in spherical space when d<50, indicating that the hyperbolic space should be considered in the general product space.
Similarly, the Cities dataset exhibits a more spherical structure than other geometric properties, and thus components of positive curvature should be used.
Table <ref> reports the performance of our model on the benchmark datasets.
Unlike the results obtained from the synthetic dataset, the best results are predominantly obtained when learning with the product manifolds.
This phenomenon is attributed to the more complex structure of real-world data compared to synthetic ones.
Specifically, the Power graph dataset has a hierarchical and cyclical structure that can be embedded effectively into any space with a hyperbolic and spherical component.
Our proposed model outperforms the main baseline PM <cit.> in all cases.
With embedding dimension d = 10, our model achieves the best distortion on the three datasets.
Specifically, on the Cs PhDs dataset, the improvement in terms of D_avg is 15.6%.
On the Power dataset, with the soft gating mechanism, our model improves the distortion over the product-of-spaces model by 28.4%.
For d = 50, the improvements in average distortion (D_avg) over the uniform product of spaces (PM) of <cit.> are 19.3% and 13.9%, respectively.
Furthermore, Table <ref> shows that for a distortion of 0.0231 in the product space ℍ^5 ×𝕊^5 with the Power dataset, our method determines that the optimally weighted product manifold for embedding the dataset is 0.83 ℍ^5 × 0.16 𝕊^5. The ratio between the hyperbolic and spherical components is approximately 5:1, indicating the greater importance of hyperbolic components compared to spherical ones.
In contrast, the uniform product embedding space PM of <cit.> assumes that each component space contributes equally to learning representations in the product of spaces.
Our method, on the other hand, captures the relations among all sub-geometries of different curvatures in the product manifold, depending on the geometry of the input graph data, leading to better performance than the uniform product of spaces (PM) without a scoring mechanism. Our proposed method has advantages in discovering general models with suitable geometry in the product manifold. Notably, we also observe that the mAP measures are not consistently better than those of the uniform product of model spaces <cit.> when D_avg decreases.
§.§ on Knowledge Graph Embedding
Knowledge graphs (KGs) are a fundamental tool for representing information and have a wide range of applications, including question answering and web search <cit.>.
However, KGs are often highly incomplete, which poses a significant challenge for downstream use.
The goal of our approach is to address this issue by inferring missing facts in the KGs using entity and relation embedding techniques to map them to appropriate spaces.
In this section, we propose using the product of manifolds with a gating mechanism to represent the relations between entities in the KGs.
A detailed experimental scenario is given in Appendix <ref>.
Model performance
Table <ref> reports the performance of various methods on two knowledge graphs.
To enable a fair comparison, we set the total embedding dimension to 64, which is a common practice in non-Euclidean embedding due to its ability to provide more compact spaces than Euclidean embeddings.
Our proposed model achieves superior performance over the baselines on the knowledge embedding graph, highlighting its effectiveness in learning informative representations of the data.
§.§ on node classification and link prediction
In this section, we evaluate the performance of our proposed model on node and graph classification tasks.
Hyperbolic GCN <cit.> uses message-passing on the hyperbolic tangent space for graph convolutional networks (GCNs).
However, our proposed model replaces the hyperbolic space with the weighted product manifold and applies message passing in the tangent space of the product of spaces.
We further use δ <cit.>, which evaluates the degree of tree-likeness of a graph through its graph distance metric.
The value of δ ranges from 0 to half of the graph diameter, with trees having δ = 0, while "circle graphs" and "grid graphs" have a larger δ, approximately half of their diameters.
Further details on the metrics, datasets, and baselines used in our experiments can be found in Appendix <ref>.
Model performance
Table <ref> presents the F1 and AUC scores for the link prediction and node classification tasks.
Notably, the DISEASE and AIRPORT datasets exhibit high hyperbolicity (δ = 0 and 1, respectively), where the performance of using the product of hyperbolic space surpasses that of using the product of mixture curvatures.
This is because the unified product of curvature fails to differentiate the primary intrinsic graph structure and instead adapts equally to spaces that do not align with the graph's topology.
Our proposed extension addresses this issue by incorporating a weighting mechanism that identifies the dominant embedding manifold most influenced by the underlying structure of the graph data, leading to improved results in both link prediction and node classification for these two datasets.
§.§ on Recommendation Systems
In this section, we evaluate the performance of our proposed model on the recommendation task. Specifically, we apply our method to replace the hyperbolic space in metric learning recommendation (HyperML <cit.>). Detailed information on baselines, datasets, and metrics can be found in Appendix <ref>.
Objective function
In HyperML <cit.>, the push-pull loss is proposed to learn the metric between the positive and negative items.
The overall objective is defined as ℒ = ℒ_P + γℒ_D,
where pull-push loss ℒ_P and distortion loss ℒ_D are defined as:
ℒ_P = ∑_(i, j) ∈𝕊∑_(i, k) ∉𝕊 [m + d^2_𝔻(i,j) - d^2_𝔻(i,k)]_+,
ℒ_D = ∑_(i, j) ∈𝕊[ |d_𝔻(f(i), f(j)) - d_𝔼(i, j)|/d_𝔼(i, j)]_+ + ∑_(i, k) ∉𝕊[ |d_𝔻(f(i), f(k)) - d_𝔼(i, k)|/d_𝔼(i, k)]_+,
where |z|_+ = max(0, z), m > 0 is the margin size (m = 0.4 in this paper),
and f(.) is a mapping function f: 𝔼→𝔻 (f is the identity in <cit.>), γ is the multi-task learning weight and 𝕊 is the set of positive user-item pairs.
We use the same loss function as in <cit.>, with a difference in the distance on 𝔻. Specifically, we compute the distance d between two embeddings in the product of model spaces.
Model performance
Table <ref> reports the H@10 and N@10 scores for two different datasets, considering the number of factors d ∈{32, 64}.
Our experiments demonstrate that, overall, CML and HyperML achieve better results with the weighted product manifolds than in the hyperbolic space alone, highlighting the advantages of using scoring sub-manifolds to model the distance between users and items.
§.§ Performance on word similarity task
We evaluated our model's performance on applications that require an understanding of the underlying manifold structure. To conduct our experiment, we trained word embeddings on the Word Similarity (WS-353) benchmark dataset, following the methodology established in previous works such as <cit.>. Our implementation is based on hyperbolic skip-gram embeddings from <cit.>.
Setup
For our setup, we utilized the standard skip-gram model <cit.> and extended the loss function to a generic objective suitable for arbitrary manifolds, using a variant of the objective used in <cit.>.
Specifically, given a word u and a target w with label y=1 if w is a context word for u and y=0 if it is a negative sample, our model is represented by P(y | w, u)=σ((-1)^1-y(-cosh(d(α_u, γ_w))+θ)).
Word similarity
To measure the effectiveness of our model, we evaluated its performance on the WS-353 dataset using the Spearman rank correlation ρ between our scores and annotated ratings.
We obtained the dataset from <cit.>, and the results of our experiment are presented in Table <ref>.
Our model outperformed the hyperbolic word embeddings of <cit.> and the product space (PM) in all dimension settings.
§ CONCLUSIONS
Real-world data often possess intricate geometric structures that are challenging to capture by embedding into spaces with uniform curvature.
To address this issue, we propose a method that partially extracts the topology information from the input data to update the embedding vectors and curvature of each subspace.
Our motivation is that graphs are constructed by combining simple structure topologies, such as trees, cycles, and stars.
Our approach introduces a data-driven method of weighted product spaces for learning better representations.
Our empirical experiments on synthetic and real-world datasets demonstrate that our framework enhances the embedding quality of input graphs with varying structures and improves the performance of the downstream tasks.
§ ADDITIONAL BACKGROUND
Riemannian Geometry
Let ℳ^n be a smooth manifold in n-dimensional space, where ℳ^n is locally approximated by an n-dimensional Euclidean tangent space T_pℳ at p ∈ℳ.
The pair (ℳ, g) is called a Riemannian manifold if ℳ is equipped with a positive-definite metric tensor g that satisfies certain conditions.
Geodesics are the shortest-distance paths on manifolds, and the metric tensor g is integrated along the geodesic to compute distances on a Riemannian manifold.
The exponential map exp_p: T_p ℳ→ℳ and logarithmic maps log_p: ℳ→ T_p ℳ are two common bijections defined on the manifold ℳ.
A formal introduction to Riemannian manifolds can be found in <cit.>.
Product manifolds
Consider a sequence of smooth Riemannian manifolds ℳ_1, ℳ_2, …, ℳ_k.
ℳ_i can be a space of positive (spherical), zero (Euclidean), or negative (hyperbolic) curvature.
The product manifold is defined as the Cartesian product ℳ = ℳ_1 ×ℳ_2 ×…×ℳ_k.
We write a point p ∈ℳ through their coordinates p=(p_1, …, p_k), p_i ∈ℳ_i. Similarly, a tangent vector v ∈ T_p ℳ can be written as (v_1, … , v_k) : v_i ∈ T_p_iℳ_i.
Gradient descent on manifolds requires the notion of taking steps.
This step can be performed in the tangent space and transferred to the manifold via the logarithmic map, and exponential map <cit.>.
The product space is also equipped with a distance function. The squared distance between points x, y ∈ℳ is defined as: d_P^2(x, y)=∑_i=1^k d_i^2(x_i, y_i).
§ CURVATURE ESTIMATION ON GRAPH DATA
Curvature estimation on simple graphs
There are three commonly used definitions for local graph curvature: Ollivier-Ricci <cit.>, Forman-Ricci <cit.>, and sectional curvature <cit.>.
In this paper, we use sectional curvature for estimating the geometric structures of graphs.
Sectional curvature is determined by geometric triangle properties as follows.
Theorem 1: Recall from <cit.> that in a given constant curvature geometric space, if abc is a geodesic triangle and m is the midpoint of bc, then d(a,m)^2 + d(b,c)^2/4 - (d(a,b)^2 + d(a,c)^2)/2 is equal to zero when the underlying space is Euclidean, positive in spherical space, and negative in hyperbolic space.
Proof:
We provide proof of Theorem 1.
A = d(a,m)^2 + d(b,c)^2/4 - (d(a,b)^2 + d(a,c)^2)/2
= x^2 + z^2 - y^2/2 - t^2/2, where x = d(a,m), z = d(b,m) = d(m,c) = d(b,c)/2, y = d(a,b), and t = d(a,c)
= 1/2 (2x^2 + 2z^2 - y^2 - t^2)
= 1/2 [(x^2 +z^2 - y^2) + (x^2 + z^2 - t^2)]
= 1/2 [2xz cosα_1 + 2 xz cosα_2]
= xz (cosα_1 + cosα_2)
From Equation (6), we apply the cosine rule [https://en.wikipedia.org/wiki/Law_of_cosines].
We have three cases:
* cosα_1 + cosα_2 = 0: α_1 and α_2 are two supplementary angles, α_1 + α_2 = 180^∘. Then the triangle is in Euclidean space.
* Similarly, it will be negative in hyperbolic and positive in the spherical curvature space.
Curvature estimation on graph data
Given theorem (1), let v be a node in G; b, c neighbors of v and a any other node.
Then, the sectional curvature of a node v and its neighbors b, c is defined as follows: 1/(|V|-3) ∑_a ∈ G \{v, b, c}ξ_G(v; b, c; a), where
ξ_G(v; b, c; a) = 1/(2 d_G(a, v)) (d_G(a, v)^2 + d_G(b, c)^2/4 - (d_G(a, b)^2 + d_G(a, c)^2)/2),
and the factor 2 d_G(a, v) is included to yield the right scalings for trees and cycles.
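A NetworkX-based sketch of this estimator is shown below; it uses shortest-path lengths as d_G and is intended only as an illustration of the formula above, not as the exact implementation used in the paper.

import networkx as nx

def xi(d, v, b, c, a):
    # d: dict of shortest-path lengths, d[x][y] = d_G(x, y)
    return (1.0 / (2.0 * d[a][v])) * (
        d[a][v] ** 2 + d[b][c] ** 2 / 4.0
        - (d[a][b] ** 2 + d[a][c] ** 2) / 2.0)

def node_sectional_curvature(G, v, b, c):
    # average of xi over all other nodes a, as in the definition above
    d = dict(nx.all_pairs_shortest_path_length(G))
    others = [a for a in G.nodes if a not in (v, b, c)]
    return sum(xi(d, v, b, c, a) for a in others) / (G.number_of_nodes() - 3)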
Next, we estimate the curvature of some typical topology graph structures.
Star 𝐒_n is created from one central node and n leaves. For n ≥ 3, the local curvature at the center node v with two neighbors b, c is -1.
Tree 𝐓_b with branching factor b is the finite-depth tree with b ≥ 2. The sectional curvature on the tree lies in the range ξ(T) ∈ [-1, 0].
Cycle graph 𝐂_n with n ≥ 4. If n is even, then ξ_C_n(v; b, c; a) = 0 for all points except the one diametrically opposite to v, for which ξ_C_n(v; b, c; a) = 1.
If n is odd, then for two points we have ξ_C_n(v; b, c; a) = n/(2(n-1)).
As a result, ξ(C_n) = 1/(n-3) for even n and ξ(C_n) = n/((n-1)(n-3)) for odd n.
Distortion error on simple graphs
We have demonstrated the limitations of using a single curvature approach to embed graphs with varying topologies.
To investigate the impact of curvature spaces on the quality of embedding spaces, we conducted experiments on three synthetic datasets with specific structures, including trees, circles, and rings of trees (Table <ref>).
Figure <ref> shows the distortion error results for Cycle and Tree graphs.
Our findings suggest that different graph structures require corresponding curvature spaces for optimal embedding quality.
For instance, spherical space (positive curvature) provides the least distortion error for cycle-like datasets (from 𝐒_3 to 𝐒_50), while hyperbolic spaces (negative curvature) give a minimal error for tree-like datasets (from 𝐇_3 to 𝐇_50).
All three models show some advancements compared to others in certain cases.
However, the overall distortions achieved are significantly higher than when using hyperbolic space with tree-like or spherical space with circle-like data.
For example, the distortion error on the Cycle tree is 0.09 compared to 0.02 on H_10 with Cycle data and 0.042 on S_5 with simple Tree data.
Therefore, using a product of individual spaces can improve the accuracy of embedding data with a mixture of structures.
§ ADDITIONAL EXPERIMENTAL RESULTS
§.§ Graph reconstruction task
Datasets
The synthetic datasets we use are small graphs with 40 nodes that are designed to have specific geometric structures, including a circle, a tree, and a ring of trees.
To assess the effectiveness of our approach on larger and more complex graphs, we also use three benchmark datasets: CsPhD <cit.>, Power <cit.>, and Cities <cit.>.
The Cities dataset consists of 1025 nodes and 1043 edges, while the Power dataset contains 4941 nodes and 6594 edges. Additionally, the CsPhD dataset has 312 nodes and 48516 edges.
Baselines
We compare the distortion error of node embeddings on both synthetic and benchmark datasets between our proposed model and the product spaces (PM) <cit.> method.
Metrics
We use two standard metrics to measure the quality of embeddings: average distortion D_avg and mean average precision mAP.
D_avg is a global metric that considers all the exact distance values.
Let G = (V, E) be a graph and let node a ∈ V have a neighborhood 𝒩_a = {b_1, ⋯, b_deg(a)}, where deg(a) is the degree of a.
In the embedding f, define R_a, b_i to be the smallest ball around f(a) that contains b_i, i.e., R_a, b_i is the smallest set of nearest points required to retrieve the i-th neighbor of a in f.
Thus, mAP = (1/|V|) ∑_a ∈ V (1/deg(a)) ∑_i = 1^|𝒩_a| |𝒩_a ∩ R_a, b_i| / |R_a, b_i|. mAP is a ranking-based measure for local neighborhoods, and it does not track exact distances like D_avg.
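A NumPy sketch of this mAP computation is given below, where adj is a boolean adjacency matrix and D_emb contains pairwise embedding distances; the variable names are illustrative.

import numpy as np

def mean_average_precision(adj, D_emb):
    n, scores = adj.shape[0], []
    for a in range(n):
        nbrs = np.flatnonzero(adj[a])
        if nbrs.size == 0:
            continue
        order = np.argsort(D_emb[a])
        order = order[order != a]                     # nodes ranked by distance to f(a)
        prec = []
        for b in nbrs:
            k = int(np.where(order == b)[0][0]) + 1   # size of the smallest ball R_a,b
            ball = set(order[:k].tolist())
            prec.append(len(ball & set(nbrs.tolist())) / k)
        scores.append(np.mean(prec))
    return float(np.mean(scores))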
§.§ Additional information for Recommendation task
Metrics
We use two measures Hit Ratio (H) <cit.> and Normalized Discounted Cumulative Gain (N) <cit.> to examine the predictive ability of these models.
The final H@k and N@k are averaged on all users' H@k and N@k scores.
We choose k = 10 to evaluate the model.
Datasets We perform experiments on two popular datasets, MovieLens-1M and LastFM-20K. The LastFm dataset <cit.> is obtained from a music website[http://millionsongdataset.com/lastfm/]. It is preprocessed to have 1892 users and 17632 music records. The MovieLens-1M is created from 6040 users and 3706 movies.
Baselines
We consider the following works as baselines for our model: CML <cit.> and HyperML <cit.>.
Specifically, CML <cit.> investigates the relationship between metric learning and collaborative filtering.
It proposes a method that learns a joint metric space capable of encoding not only users' preferences but also the similarity between users and items.
HyperML <cit.> presents the connection between metric learning in hyperbolic space and collaborative filtering by exploring hyperbolic geometry.
HyperML-PM is our extension of HyperML in the product of model space.
HyperML-WPM (Our) is our extension of HyperML in the product of model spaces with the gating mechanism.
§.§ Additional information for Knowledge graph embedding
Metrics
The performance of various models is evaluated using two standard metrics: mean reciprocal rank (MRR) and hit rate (HR@3).
Datasets
We used two standard datasets, WN18RR <cit.> and FB15K-237 <cit.>, for our analysis. WN18RR is derived from WordNet, a lexical database of semantic relations between words. FB15K-237 is a subset of the Freebase knowledge graph, which is a comprehensive resource containing general information.
Table <ref> shows the statistics of the two datasets.
Objective function
Given a knowledge graph 𝒢 with a set of entities ℰ and a set of relations ℛ, each triplet (h,r,t) ∈𝒢 consists of a head entity h, a tail entity t, and the relation r ∈ℛ between them.
Prior works propose RotE <cit.> in Euclidean space and RotH <cit.> in hyperbolic space. In this work, we extend this idea to the product of different curvature spaces. Formally, entities h, t are represented by vectors 𝐞_h, 𝐞_t ∈ℝ^b, and the relation r is represented by two translation vectors α_r, β_r ∈ℝ^b and a rotation vector γ_r ∈ℝ^b. The head entity is translated twice via the Möbius addition operation and rotated once.
Q(h,r)= Rot(exp_0^c(𝐞_h) ⊕_c exp_0^c(α_r), γ_r) ⊕_c exp_0^c (β_r)
where c > 0 and exp_0^c is the exponential map at the origin. Rot is a rotation function, and γ_r parameterizes the rotation matrix.
According to the above definition, for each triple (h,r,t), we define the distance function as:
d_r(h, t) = √(d_ℳ_c^2 (Q(h,r), exp_0^c(e_t)))
where ℳ_c is the product of curvature manifolds. In <cit.>, the distance function of RotatE for the triple (h,r,t) is defined as: d_r(h, t) = || h⊙r - t||
The final negative sampling loss is defined by the cross-entropy loss:
ℒ =∑_(h,r,t) ∈Ωlog(1+ exp(-Y_(h,r,t) d_r(h,t)))
where Y_(h,r,t)∈{1, -1} is a binary label indicating whether a triplet is real or not.
Baselines
RotatE <cit.> is a knowledge graph embedding method used to learn the representations of entities and relations in knowledge graphs.
RotatH is the extension of RotatE <cit.> to hyperbolic space.
Product-RotatH is the extension of RotatE to the product of hyperbolic spaces <cit.>.
SwisE <cit.> uses a gating mechanism that is learned to choose the component space for knowledge graph embedding.
-Rotat is our extension, using the product of manifolds to represent the relations among entities in the knowledge graph.
§.§ Additional information for Node Classification and Link Prediction
Metrics We utilize ROC AUC as a metric to evaluate the performance of Link Prediction (LP), whereas we rely on the F1 score to assess the Node Classification (NC) performance. In both cases, a higher score indicates better performance.
Datasets In this experiment, we evaluate model performance on the two different benchmark datasets.
DISEASE is the dataset of Infectious diseases from Oxford University <cit.>.
AIRPORT is the dataset of airline routes from OpenFlight.org. Each node represents an airport, and each edge represents an airline route between airports.
Detailed information regarding these datasets is provided in Table <ref>.
Baselines We evaluate the contributions of our proposed model by measuring the F1 and AUC scores on two datasets, compared with five different baseline models:
MLP and Hyperbolic-MLP are two variants of multilayer perceptron (MLP) classifiers operating on the Euclidean (𝐄) and hyperbolic space (𝐇), respectively.
HGCN <cit.> is an extension of graph convolutional networks (GCNs) to hyperbolic geometry.
Product-HGCN <cit.> extends GCNs in the product of hyperbolic geometries.
Mix-GCN <cit.> extends GCNs in the product of hyperbolic, spherical, and Euclidean spaces.
Our proposed model (-GCN) extends GCNs with a gating mechanism in the product of different curvature spaces (H, E, S).
|
http://arxiv.org/abs/2307.04573v1 | 20230710140728 | A Semi-Automated Solution Approach Selection Tool for Any Use Case via Scopus and OpenAI: a Case Study for AI/ML in Oncology | [
"Deniz Kenan Kılıç",
"Alex Elkjær Vasegaard",
"Aurélien Desoeuvres",
"Peter Nielsen"
] | cs.AI | [
"cs.AI",
"cs.IR",
"cs.LG"
] |
inst1]Deniz Kenan Kılıçmycorrespondingauthor
[mycorrespondingauthor]Corresponding author
[email protected]
[inst1]organization=Department of Materials and Production, Aalborg University,
addressline=Fibigerstræde 16,
city=Aalborg,
postcode=9220,
country=Denmark
inst1]Alex Elkjær Vasegaard
[email protected]
inst1]Aurélien Desoeuvres
[email protected]
inst1]Peter Nielsen
[email protected]
In today's vast literature landscape, a manual review is very time-consuming. To address this challenge, this paper proposes a semi-automated tool for solution method review and selection. It caters to researchers, practitioners, and decision-makers while serving as a benchmark for future work. The tool comprises three modules: (1) paper selection and scoring, using a keyword selection scheme to query Scopus API and compute relevancy; (2) solution method extraction in papers utilizing OpenAI API; (3) sensitivity analysis and post-analyses. It reveals trends, relevant papers, and methods. An AI-in-oncology case study and several other use cases are presented with promising results, comparing the tool to a manual ground truth.
* Automated support for literature choice and solution selection for any use case.
* A generalized keyword selection scheme for literature database queries.
* Trends in literature: detecting AI methods for a case study using Scopus and OpenAI.
* A better understanding of the tool by sensitivity analyses for Scopus and OpenAI.
* Robust tool for different domains with promising OpenAI performance results.
Artificial intelligence (AI) Machine learning (ML) OpenAI Generative pre-trained transformers (GPT) Scopus Solution approach selection
§ INTRODUCTION
Over the past decade, artificial intelligence (AI) and machine learning (ML) have gained significant attention in the fields of information technology and computer science, accompanying significant advancements and benefits across diverse industries and sectors <cit.>. There are numerous AI/ML taxonomies presented in the literature that can be used to select a collection of AI strategies to address a specific challenge[<https://scikit-learn.org/stable/tutorial/machine_learning_map/index.html>] <cit.>.
Figure <ref> illustrates an example taxonomy of the extensive AI/ML domain, encompassing multiple problem types and branches. However, to search for AI methods specific to a given use case, it is not only necessary to select a fitting branch in the taxonomy, but one also has to refine the search by comparing it to the standing knowledge base of the literature on the use case.
The increasing amount of literature presents a challenge for decision-makers seeking to employ AI/ML methodology in their specific problem domains. Manual review is time-consuming <cit.>, often resulting in incomplete information without targeted searches. A tool that rapidly generates trend findings and examines solution methods for any use case would be extremely beneficial in various situations.
This research proposes a semi-automatic tool developed to generate results on solution approaches for any use case. The study presents results on multiple problem domains on AI with a focus on the case study for AI/ML in oncology.
The proposed scheme
contains the following steps:
* Determining keywords systematically from the use case by a two-domain, three-level setup.
* Automated literature extraction using selected keywords via Scopus Search API <cit.>.
* Extracting AI methods automatically from Scopus search results by using OpenAI API.
* Sensitivity analyses for both Scopus and OpenAI.
* Post-analyses based on results.
The proposed scheme can be used iteratively for the decision makers to augment their understanding of the problem and similarly align the keywords better with the desired use case and specificity level, consequently obtaining better results.
The remainder of this paper is structured as follows: Sec. <ref> reviews the use of AI methods and the literature on model selection approaches. Sec. <ref> presents the proposed AI method selection tool, and Sec. <ref> showcases the performance, sensitivity, and post-analysis of the method. In Sec. <ref>, discussion, conclusion, and suggestions for future works are given.
§ LITERATURE REVIEW
In the literature, there are reviews and surveys on which AI approaches or applications are used for different problem domains such as building and construction 4.0 <cit.>, architecture, engineering and construction (AEC) <cit.>, agriculture <cit.>, watermarking <cit.>, healthcare <cit.>, oil and gas sector <cit.>, supply chain management <cit.>, pathology <cit.>, banking <cit.>, finance <cit.>, food adulteration detection <cit.>, engineering and manufacturing <cit.>, renewable energy-driven desalination system <cit.>, path planning in UAV swarms <cit.>, military <cit.>, cybersecurity management <cit.>, engineering design <cit.>, vehicular ad-hoc networks <cit.>, dentistry <cit.>, green building <cit.>, e-commerce <cit.>, drug discovery <cit.>, marketing <cit.>, electricity supply chain automation <cit.>, monitoring fetus via ultrasound images <cit.>, IoT security <cit.>.
As can be seen, some of the problem domains in these example reviews and surveys are low-level, while others are high-level. This varying level of abstraction makes it difficult to map the reviews and surveys onto a specific solution domain. Even if the same problem domain is considered, relying on reviews or surveys in the literature is problematic, as there may be an unlimited number of use case scenarios and levels of specificity.
In addition, the AI approaches specified in reviews or surveys can sometimes be very general. In this case, it may be necessary to review articles manually, which costs labor and time. This motivates the search for an automated way to minimize the time spent on manual review in order to obtain an AI method applicable to a given use case.
The last decade saw significant steps toward a fully automatic model selection scheme with tools that select models for specialized use cases, generally referred to as model determination, parameter estimation, or hyper-parameter selection tools. For forecasting time series in R, the popular forecast package by R. Hyndman et al. was presented, showcasing great initial results <cit.>. For regression models, the investigated selection procedures are generally based on the evaluation of smaller pre-defined sets of alternative methods, e.g., by information criteria (AIC, BIC), shrinkage methods (Lasso), stepwise regression, and or cross-validation schemes <cit.>. For ML-based model schemes, the methods proposed by B. Komer et al. <cit.> introduce the hyperopt package for hyper-parameter selection accompanying the Scikit-learn ML library, J. Snoek et al. <cit.> presents a bayesian optimization scheme to identify the hyper-parameter configuration efficiently, and J. Bergstra et al. <cit.> identifies hyper-parameter configurations for training neural networks and deep belief networks by using a random search algorithm and two greedy sequential methods based on the expected improvement criterion. There also exist smaller frameworks, e.g., that of hyper-parameter tuning based on problem features with MATE <cit.>, to model and fit autoregressive-to-anything processes in Java <cit.>, or extensions to general purpose optimization frameworks <cit.>.
On the other hand, Dinter et al. <cit.> presents a systematic literature review on the automation of systematic literature reviews with a concentration on all systematic literature review procedures as well as natural language processing (NLP) and ML approaches. They stated that the main objective of automating a systematic literature review is to reduce time because human execution is costly, time-consuming, and prone to mistakes. Furthermore, the title and abstract are mostly used as features for several steps in the systematic review process proposed by Kitchenham et al. <cit.>. Even though our research does not stick to these procedures since our study was not a pure systematic literature review, the title and abstract are included for the OpenAI part. Additionally, they found the majority of systematic literature reviews to be automated using support vector machine (SVM) and Bayesian networks, such as Naive Bayes classifiers, and there appears to be a distinct lack of evidence regarding the effectiveness of deep learning approaches in this regard.
The work of H. Chen et al. <cit.> produces a written section of background material relevant to a solution approach, written in the form of a research paper, through a bidirectional encoder representations from transformers (BERT)-based semantic classification model.
Similarly, K. Heffernan et al. <cit.> utilize a series of machine learning algorithms as automatic classifiers to distinguish solutions and problems from non-solutions and non-problems in scientific sentences, with good results.
These findings suggest that ML-based language models can be utilized in the automation of literature review with success.
Consequently, we have identified literature that explains the procedure of manually and automatically reviewing the literature. We have also identified automated tuning frameworks for different modeling schemes.
However, there is a gap in the automatic selection of a solution approach. Our paper aims to investigate and address this gap.
§ METHODOLOGY
The proposed methodology has three main modules; see the flowchart in Fig. <ref>. The first module covers selecting keywords and getting results via Scopus Search[<https://dev.elsevier.com/sc_search_tips.html>]. Then the advanced search query returns the results where the fields are explained by Scopus Search Views [<https://dev.elsevier.com/sc_search_views.html>]. In the second module, the solution methods used in each article are extracted using the OpenAI API. In the third module, sensitivity and post-analyses are performed. The flow indicated by the red dashed line is performed automatically.
This scheme is appropriate for any problem and solution domain. It can be used for use cases in many different fields. Although the second block of this study focuses on AI methods, this block can also evolve into other topics, such as which hardware to be used and which scientific applications to be employed. However, as the tool relies on the OpenAI framework, ground truth data is created manually to check the performance.
In Tab. <ref>, the benefits and functions of the methods used in the proposed methodology are shown.
§.§ Module 1: Scopus search
The goal of the first module is to search for a relevant pool of paper w.r.t. the given problem a user is dealing with. To do so, a keyword selection scheme has been made in order to facilitate the user's work. This scheme is then used to make a Scopus query, but also to score each paper.
To determine keywords, three specification levels (a general, an expended, and a detailed one) are applied to the given problem and the searched solutions. This work is done manually as it involves eliciting user information on the use case. That means both classification and order are specified by the user. However, this stage is critical in recommending more appropriate solution approaches because these keywords are the first inputs to the proposed methodology and determine the pool of papers used in module 2.
Fig. <ref> gives an example of the proposed keyword selection scheme.
Notice that it is possible, but not necessary, to add keywords in each field, where a field refers to the specific level in the block. Leaving some fields empty will lead to a less specified pool of solution approaches, which consequently risks not fitting the use case. At the same time, adding too many keywords can lead either to a too restricted pool of papers (e.g., if one uses too many general keywords and fills every field) or, if too many expanding keywords are given, to a less specific pool of papers, as if the field was left empty.
The different levels showcase:
Level 1 The general and necessary keywords. The keyword must be a part of the research paper for the paper to be in the selected pool of papers.
Level 2 The expanding keywords. Here only one of the keywords in the field is necessary for the paper to be selected.
Level 3 A further specification. It is only used in the later stage to rank the identified solution methods with the relevancy metric.
After keyword selection, a query is created for Scopus Search API. Information is searched in titles, abstracts, and keywords of recent articles or conference papers, for the words defined in levels 1 and 2. The query can be, for example:
Note that an expert can directly enter a query instead of using the keyword selection scheme. It is useful in some cases, for example: when it is difficult to find a good pool of papers using the query built by the keyword selection scheme, or when one wants to search in a specific field or a specific range of years, or for a first try if one wants to search only for reviews in order to get more appropriate academic keywords. However, it is still advantageous to follow this scheme as it helps to find, classify, and order the use case keywords, but also to specify what is important for scoring the paper.
The publication year, the number of citations, the title, and the abstract information of all articles returned by the Scopus query are saved. After all the results are obtained, the title and abstract information of all the articles are examined manually, and articles that are irrelevant and have not applied/mentioned any AI method are eliminated.
§.§ Module 2: Scoring and method extraction
In this module, the relevancy and popularity metrics for the Scopus search results are computed, and solution methods are extracted from the title and abstract of each paper.
The relevancy metrics count the number of unique level 2 and 3 keywords appearing at least once in the title, abstract, or keywords. Ultimately, the metric represents how well the methods fit the specificity of the use case. For example, a paper named “Hybrid learning method for melanoma detection" yields in the abstract “image recognition (5 times), deep learning (2 times), real-time"; it will therefore have a relevancy metric of 3, taking into account Fig. <ref>.
The popularity metric is used to capture the research interest in a paper and its methods. It is computed as (number of citations) / (publication age in whole years + 1), where 1 is added to the denominator to avoid division by zero.
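A small sketch of both metrics is shown below; the field names of the paper record are illustrative, and the keyword matching is a simple case-insensitive substring test.

from datetime import date

def relevancy(paper, level2_keywords, level3_keywords):
    text = " ".join([paper["title"], paper["abstract"], paper["keywords"]]).lower()
    return sum(1 for kw in set(level2_keywords + level3_keywords) if kw.lower() in text)

def popularity(paper, current_year=None):
    current_year = current_year or date.today().year
    age = current_year - paper["year"]          # publication age in whole years
    return paper["citations"] / (age + 1)       # +1 avoids division by zero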
After calculating the relevancy and popularity metrics, the tool inputs the title and abstract information to OpenAI and outputs the AI approaches used in each article.
When someone provides a text prompt in OpenAI API, the model will produce a text completion that tries to match the context or pattern you provided. Essential GPT-3 models, which generate natural language, are Davinci, Curie, Babbage, and Ada. In this paper, “text-davinci-003" is used which is the most potent GPT-3 model and one of the models that are referred to as “GPT 3.5"[<https://beta.openai.com/docs/model-index-for-researchers>].
Some issues to consider when preparing prompts are as follows[<https://help.openai.com/en/articles/6654000-best-practices-for-prompt-engineering-with-openai-api>]:
* It is advised to place instructions at the start of the prompt and to use ### or """ to demarcate the context from the instruction.
* Speaking of what to do is preferable to speaking about what not to do.
The prompt can then be the following:
where `document_text' includes the title and abstract information of a paper.
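A minimal sketch of the extraction step is given below, using the legacy completions endpoint that served text-davinci-003 at the time; the prompt wording shown here is an illustrative placeholder rather than the exact prompt used in this study.

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def extract_ai_methods(document_text):
    prompt = ("Extract the artificial intelligence methods used in the text below "
              "as a comma-separated list.\n###\n" + document_text + "\n###")
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        temperature=0,
        max_tokens=256,
    )
    return response["choices"][0]["text"].strip()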
To evaluate OpenAI's performance, the ground truth AI methods are manually produced for non-filtered papers, regarding the title and abstract information of each paper. Some high-level tags, such as “artificial intelligence" and “machine learning" are not included. In other words, the keywords used in Scopus search as a method are not involved. Precision, recall, and F1-measure are calculated for performance analysis.
§.§ Module 3: Analyzes
In this module, sensitivity analyses are performed regarding Scopus and OpenAI. Different combinations of level 1 and 2 keywords in the Scopus query are tried, and the initial prompt is compared with other prompts for OpenAI.
For the selected use case, post-analyses are performed by investigating which AI methods are used more often and which have higher relevancy or popularity metrics, and by comparing the results over different periods.
This can be done manually, or, if there are too many methods listed, first a clustering algorithm can be used to help this investigation. Currently, density-based spatial clustering of applications with noise (DBSCAN) <cit.> used with (1 - the normalized Indel similarity) as distance performs well enough to support post-analysis.
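A sketch of this clustering step is shown below; the eps and min_samples values are illustrative hyperparameters, and the distance follows the description above (1 - normalized Indel similarity).

import numpy as np
from rapidfuzz.distance import Indel
from sklearn.cluster import DBSCAN

def cluster_method_names(names, eps=0.3, min_samples=2):
    n = len(names)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            # distance = 1 - normalized Indel similarity
            D[i, j] = D[j, i] = 1.0 - Indel.normalized_similarity(names[i], names[j])
    return DBSCAN(eps=eps, min_samples=min_samples, metric="precomputed").fit_predict(D)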
§ EXPERIMENTS
§.§ Use case definition
The use case example given in Fig. <ref> is tackled for our initial experiment. Here, AI is employed on the dataset of images to detect cancer.
§.§ Keywords from the use case scenario
Using Fig. <ref>, the following keywords are defined:
“oncology" as problem level 1, “artificial intelligence" and “AI" as solution level 1. Only “image processing" is used as solution level 2.
By using only one level 2 keyword, the experiment stays rather general in the expected results.
For simplicity, level 3 keywords are not used in this example. Level 3 keywords do not affect the pool of papers but enable the user to elicit relevancy to papers that match their use case better. Because the computation of the relevancy metric is trivial, it is omitted in this example.
§.§ Scopus API search and manual article cleaning
According to the selected keywords, our initial query of Scopus API[<https://dev.elsevier.com/sc_search_tips.html>] is given below.
That means the keywords are searched in the title, abstract, and keyword parts. In addition, to limit the size of the results, the publications published after 2013 are selected, and to be more specific, the document type is restricted to “Article" or “Conference Paper".
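The exact query string is not reproduced here; based on this description it would take roughly the following form, where the precise operator grouping is an assumption on our part.

query = ('TITLE-ABS-KEY(("oncology") AND ("artificial intelligence" OR "AI") '
         'AND ("image processing")) AND PUBYEAR > 2013 '
         'AND (DOCTYPE(ar) OR DOCTYPE(cp))')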
Then DOI, eid, year, and citation number results that Scopus API returns are given in Tab. <ref>. The relevancy and popularity values are calculated as stated in Sec. <ref>. Currently, some papers can have a relevancy of 0, but by manually checking them, they stay relevant. It happens when keywords only appear in “INDEXTERMS" provided by Scopus but are absent from the title, abstract, and author keywords. Moreover, this is also due to a total absence of keyword level 3. It can be fixed by taking these automatic keywords for the OpenAI analysis.
The query returns 92 results. Among them, 25 publications (irrelevant, not technical, just survey, etc.) indicated in red in Tab. <ref> are manually filtered.
The remaining 67 articles are the results related to the domains and keywords of the use case.
However, there are among them 12 papers, highlighted in orange, that apply an AI method successfully, but they do not mention particular methods (they do only highly general, level 1 and 2 ones) in the title and abstract; they will therefore be missed by the OpenAI extraction part that is stated in Sec. <ref>. However, it is not critical as trends are explored.
Still, 55 papers remain to be analyzed.
Note that the 37 eliminated articles could have been marked as such if we had implemented the level 3 keywords.
§.§ OpenAI
The initial prompt for the OpenAI API is stated below.
where `document_text' includes the title and abstract information of a paper.
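As an illustration, such a call could be implemented as sketched below, assuming the pre-1.0 openai Python package and a chat-completion model; the prompt wording shown is a placeholder rather than the exact prompt used in this work.

# Sketch of the extraction call, assuming the pre-1.0 `openai` package and a
# chat-completion model. The prompt wording is a placeholder, not the exact
# prompt used in this work.
import openai

openai.api_key = "YOUR_OPENAI_API_KEY"  # placeholder

def extract_methods(document_text: str) -> str:
    # `document_text` is the concatenated title and abstract of one paper.
    prompt = ("List the specific AI methods mentioned in the following text, "
              "separated by commas:\n\n" + document_text)
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response["choices"][0]["message"]["content"]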
After the methods are found by OpenAI and by manual work, the performance metrics are calculated. It is assumed here that the manual findings are the actual methods, while the results coming from OpenAI are the predicted ones.
§.§.§ OpenAI performance
To analyze the results, the methods found by OpenAI are compared to the ones found by manual investigation (considered the ground truths) for each paper. The performance determinants are defined as follows:
* “true found" the number of methods found both by OpenAI that belong to the ground truths,
* “false found" the number of methods found by OpenAI that do not belong to the ground truths,
* “true general found" the number of methods found by OpenAI and the manual search but belonging to level 1 or 2 keywords or high-level keywords like “machine learning",
* “total manual" the number of ground truths,
* “missing" = “total manual" - “true found".
With these data, precision, recall (or sensitivity or true positive rate), and F1-score can be calculated for performance analysis.
To do that, the following metrics are employed:
* True Positive (TP) =“true found",
* False Positive (FP) =“false found" + “true general found",
* False Negative (FN) =“missing".
The “true general found" results are counted as False Positives since they are either terms entered into the Scopus search or high-level keywords for the solution domain of interest, such as “machine learning" or “artificial intelligence-based approach", as mentioned above.
For each paper that is not filtered, the performance metrics are calculated as follows.
* Precision = TP / (TP + FP)
* Recall = TP / (TP + FN)
* F1-score = 2 × Precision × Recall / (Precision + Recall)
The F1-score assesses the trade-off between precision and recall <cit.>. When F1-score is high, it indicates that both precision and recall are high. A lower F1-score indicates a larger imbalance in precision and recall.
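These per-paper metrics follow directly from the determinants above; the short sketch below illustrates the computation, with the two usage lines corresponding to the worked example that follows and to the pooled counts reported for the full set of 55 papers.

# Precision, recall, and F1 from the determinants defined above.
def scores(tp: int, fp: int, fn: int):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

print(scores(6, 2, 1))      # worked example below: (0.75, 0.857..., 0.8)
print(scores(108, 51, 12))  # pooled counts over all 55 papers: (0.679..., 0.9, 0.774...)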
Consider the following example, taken from <cit.>:
“Transfer learning with different modified convolutional neural network models for classifying digital mammograms utilizing Local Dataset"
“ [...] accuracy of different machine learning algorithms in diagnostic mammograms [...] Image processing included filtering, contrast limited adaptive histogram equalization (CLAHE), then [...] Data augmentation was also applied [...] Transfer learning of many models trained on the Imagenet dataset was used with fine-tuning. [...] NASNetLarge model achieved the highest accuracy [...] The least performance was achieved using DenseNet169 and InceptionResNetV2. [...]"
Manually, “transfer learning", “convolutional neural network", “NASNetLarge", “DenseNet169", “InceptionResNetV2", “data augmentation", and “fine-tuning" are found as AI methods. What OpenAI has found is highlighted as well.
Highlighted in green, “transfer learning", “convolutional neural network", “data augmentation", “NASNetLarge", “DenseNet169", and “InceptionResNetV2" are “true found", so TP=6. Highlighted in orange, “machine learning algorithms" is a “true general found", and highlighted in red, “contrast limited adaptive histogram equalization (CLAHE)" is “false found", so FP=2. Finally, highlighted in blue, “fine-tuning" is “missing", so FN=1. With these data, one can compute Precision = 6/(6+2) = 0.75, Recall = 6/(6+1) = 0.86, and F1-score = (2 × 0.75 × 0.86)/(0.75 + 0.86) = 0.8.
In our studied case (see <ref>), the average scores are good, with an average precision of 0.7111, recall of 0.9226, and F1-score of 0.7775. There are 108 TPs, 51 FPs, and 12 FNs if all 55 results are grouped into a single result pool. The values of the precision, recall, and F1-score are then 0.6793, 0.9, and 0.7742, respectively. All ground truths and OpenAI findings are presented in Tab. <ref>.
§.§ Sensitivity analyses
§.§.§ Scopus API sensitivity
For the Scopus sensitivity analysis, different combinations of level 1 keywords are tried in the query. The initial query can be seen in Sec. <ref>.
Tab. <ref> shows the impact of changing level 1 keywords. Replacing a problem domain keyword with a near-synonym can greatly change the set of papers found. Using the more specific keyword “machine learning" in the solution domain instead of “artificial intelligence" affects the publications found, and similarly, using “cancer" instead of “oncology" in the problem domain has a large impact on the number of papers found. On the other hand, changing double quotes to braces has little effect. Moreover, using only an abbreviation instead of the full form changes the number of results found; relying on the abbreviation alone results in a poor paper pool.
However, despite the different pools of papers, the methods found by OpenAI are largely the same for both the second and the third query. This means that using synonyms changes the pool of papers but not the methods used to solve the same kind of problem, i.e., the approach is robust to the keyword selection scheme.
§.§.§ OpenAI sensitivity
To analyze the sensitivity of OpenAI, different prompts are tested and the differences in the proposed AI methods are checked.
Results are summarized in Tab. <ref>, and details are provided in <ref>. The number in the last column is an enrichment ratio: if two prompts yield identical results, the ratio is infinite, while differences between the prompts decrease it, taking into account both whether the two prompts return the same set of words and how many words in the prompt differ.
The following prompts are used for the analysis.
Prompt 1
Prompt 2
Prompt 3
Prompt 4
Prompt 5
Prompt 6
The original prompt has a higher F1-score than the other six prompts. Even with these few prompts, it can be seen that OpenAI is sensitive to the exact sentence used. However, it generally adds words with respect to the manual search, so extracting the most common words across these results should be enough to find what the user is searching for. Moreover, changing a word's position has less impact than changing a word itself, and the more words the user changes, the more differences appear. It also seems that using more common wording gives more generic results, closer to the ones being searched for, whereas very specific instructions, notably in the action verbs, generally produce more irrelevant results.
§.§ Post-analyses
The extracted AI methods for the use case described in Sec. <ref> are presented in <ref>. The total number of appearances of the methods and their relevancy and popularity metrics are shown by year in Tab. <ref>. Only methods taken from articles that are not highlighted in Tab. <ref> and that appear in at least two papers are discussed.
Fig. <ref> illustrates the summary chart of Tab. <ref>.
It is seen from the figure that many different methods have been investigated to solve our example use case, but some are much more used or popular than others. These methods (e.g., class 2 (deep learning methods) and class 1 (artificial neural networks)) are the ones that the user should investigate in the first place to solve the given use case. To be more specific, until 2018 different types of neural networks, logistic regression, SVM, and random forest are popular methods. After 2018, SVM and neural networks are still utilized, and the extra trees classifier seems popular in 2022. However, the trend is being dominated by deep learning methods. Among the deep learning algorithms, CNN, U-Net, and AlexNet can be counted as the three most used and popular methods.
AI methods can be examined without any classification, but in that case there are too many individual methods. To simplify this, the methods are divided into classes. In <ref>, the specifics of the method classification and detailed information on the AI methods in these classes are provided. Moreover, the relevancy and popularity metrics allow a more detailed decision-making process; for example, these metrics support the decision when one is uncertain between two AI methods.
§.§ Experiments for different problem domains
In order to check the robustness of the tool, different problem domains and solution approaches are also considered for the Scopus search. The same initial prompt given in Sec. <ref> is used for all use cases to extract AI methods via the OpenAI API.
First, the same problem domain is kept, and the level 2 solution approach is changed as given in the below query.
The aforementioned search yields 35 documents. Of these, 15 are irrelevant or merely surveys, and 5 apply an AI approach successfully but do not mention any particular methods in the title or abstract. Consequently, 15 documents are selected in the manner described in Sec. <ref>. Fig. <ref> shows the AI methods employed in the selected papers. Until 2019, SVM appears to be a popular method; from 2019 onward, the trend shifts to deep learning algorithms. Recurrent neural networks (RNN), convolutional neural networks (CNN), and BERT are among the deep learning methods used more often after 2019. In addition, some of the most popular methods are BERT, long short-term memory (LSTM), and generative pre-trained transformers (GPT).
Secondly, the solution approach components are kept the same while the problem domain is changed. The query for the “traffic control" problem domain is presented below.
The query returns 52 results, where nine are irrelevant or just surveys, and 20 use an AI method successfully, but they do not mention specific methods in the title and abstract. Therefore, 23 of them are selected. In Fig. <ref>, it is seen that until 2020, classical methods like scale-invariant feature transform (SIFT), speeded up robust features (SURF), k-nearest neighbors (KNN), and decision trees are popular methods. After 2020, deep learning methods class (that contains region-based CNN (R-CNN), Fast R-CNN, Faster R-CNN, you only look once (YOLO), deep simple online real-time tracking (DeepSORT), CNN, U-Net, etc.) is on the rise in terms of the number of uses and popularity.
Another query, given below, uses “satellite imagery" as the problem domain. It returns 66 results, of which 37 are selected for the analyses.
Fig. <ref> illustrates the summary of extracted AI methods. Class 1 includes CNN, deep neural network (DNN), DeepLabv3+, Fully Convolution Networks (FCN), U-Net, U-Net++, encoder-decoder, attention mechanism, Res2Net, ResNet, LSTM, SegNet, V-Net, U2Net, AttuNet, LinkNet, mask R-CNN, and cloud attention intelligent network (CAI-Net). On the other hand, class 2 covers ant colony optimization (ACO), genetic algorithm, particle swarm optimization (PSO), bat algorithm, and artificial bee colony (ABC). Until 2020, SVM, artificial neural network (ANN), and ACO were frequently used and popular methods. After 2020, the use and popularity of class 1 and PSO appear to be increasing. In class 1, the top three most used and most popular methods are CNN, U-Net, and DNN. As can be seen from the trend, the first methods to be considered in this problem domain may be the deep learning methods given above.
In Tab. <ref>, the OpenAI performance results for all experiments are given, where the TP, FP, and FN values are considered as a single pool, i.e., the performance metrics are not averages over the individual article results. It should also be taken into account that if the “true general found" words (i.e., machine learning, artificial intelligence, image processing) were not counted as FP, higher precision and F1-score values would have been obtained. Although the problem domain and solution approach change, similar performance results are attained, which is promising for the robustness of the tool.
§ DISCUSSION AND CONCLUSION
A major issue when using automatic solution method selection schemes is trust in the fit, relevancy, and popularity of the suggested methods. The fit to the actual use case depends on the ability of the human operator to interact with the tool and on whether they understand the intricacies of the approach. With the proposed method, the human operator can validate the suggested methods against the accompanying pool of research papers, and because the tool is simple, responsive, and intuitive, it is relatively straightforward to align its usage with the overall goal of solving a problem. Additionally, to improve the tool with respect to operational requirements (e.g., explainability, trustworthiness) and resources (e.g., hardware), the necessary features or extra resources for AI methods can be added and expanded later, provided the detailed requirements and available resources are stated clearly.
For example, if explainability is required, many different methods exist for obtaining explainable AI (XAI) methods <cit.>. On the other hand, if trustworthiness is required, then according to the system, environment, goals, and other parameters where AI will be used, several alternative criteria for trustworthiness may be specified <cit.>.
Details or requirements such as explainability and trustworthiness can be captured in the keyword selection scheme in Fig. <ref>.
Alternatively, after the AI methods are found by the proposed tool, post hoc analyses can be performed with the requirements that were not used in the keyword selection. In some use cases, such requirements or details may not be specified at the beginning of the AI system life cycle and therefore may not be included in the keyword selection phase.
Due to the specificity of certain use cases, there is a considerable risk that no research has been conducted on the specifics of the use case. Consequently, the proposed methods will likely not showcase a high score in the relevancy metric. Therefore, the literature pool must be investigated after the results are identified.
Ultimately, the tool's applicability comes down to the objective of the application. It will comfortably propose methods already described in the literature, which is why it is very useful for identifying trends in the research community. However, as the method identification is based on historical data that teach the tool which words within a research paper can be classified as a method, the tool will not fare well when dealing with entirely new solution approach schemes.
It is noteworthy that the relevancy explained in Sec. <ref> is computed and saved at the same time as the other data. It could be useful in the future if one wants an automatic filter. On the other hand, if the pool of papers is too big to be manually filtered, it is possible to filter at the end of the process, when one is checking for the methods to be used. The main disadvantage of filtering after the whole process is that it can allow a lot of irrelevant papers to be analyzed by OpenAI, and this will modify the perception of the trends of research for the studied use case. However, note that our tool is used to get trends in research about a given use case to support the selection of solution methods, and does not directly select a method for the user. It means that having some irrelevant papers analyzed in the whole process will not lead to a completely different result. Moreover, no information is lost, so the trends can be recomputed after filtering if necessary.
On the other hand, when the experiments are examined, the tool produces robust results concerning OpenAI performance for different problem and solution domains in its current state. In terms of the trend, up-to-date usage, and popularity of solution methods, our proposed approach quickly produces rich and advantageous information for the user. In addition, the recommended keyword selection scheme offers a very flexible structure in choosing the problem domain and solution approach for any use case.
§.§ Future work
Due to the nature of the underlying problem, certain processes are technically more difficult to automate than others <cit.>. In its current form, the proposed method still needs a human to perform the keyword selection, check the results given by the query, classify the found methods, and validate the robustness of the solution.
For future work, it would be of high value to remove the need for human intervention while presenting results that signify the trade-off for the different automated decisions. Our study towards automating these tasks is currently underway.
Simultaneously, employing newer large language models, such as OpenAI's GPT-4[<https://openai.com/gpt-4>], and exploring other databases (such as Web of Science, PubMed, IEEE Xplore, etc.) are also future work. In addition, open-source alternatives to GPT-3 or GPT-4, such as GPT-NeoX-20B <cit.> and GPT-J <cit.>, will be implemented to help cut costs.
The sensitivity analysis is split into two parts: queries and prompts. Queries depend strongly on the keyword selection scheme, and the two should be studied together. Nevertheless, an automatic sensitivity analysis can reasonably be performed using variants of the initial query, such as using quotation marks instead of brackets or using several forms of the same words.
Later, it could be interesting to study the sensitivity concerning synonyms.
Prompts, on the other hand, can be analyzed more easily. Several sentences could be automatically generated from the initial one and then tested, and taking the common pool of solutions, or a score such as the number of occurrences, could provide a robust solution.
Classifying methods is not easy, as we want to keep a stratification from general methods to specific ones. However, as deep learning is already used to classify images, e.g., gaining attention in cancer research <cit.>, a deep learning model could pool related methods together and thus reduce the number of distinct method names such as YOLO-v2, YOLOv4-tiny, etc.
Without any logical pooling, a simple text-based clustering approach such as DBSCAN can be used to pool a sufficiently large set of extracted methods automatically. However, if a specific taxonomy must be matched automatically, another method will be needed.
Currently, the tool only checks the title, abstract, and keywords to determine the methods. For certain papers, e.g., those applying hybrid methods, the specifics of the method are only introduced later in the paper. Consequently, an important extension will be to determine the applied methods from the entirety of a paper.
Finally, the tool can essentially investigate any arbitrary characteristic of the literature rather than only the solution approaches — E.g., identifying problem formulations and varieties therein. Therefore, exploring how to do this manually will greatly benefit the research community.
§ CREDIT AUTHORSHIP CONTRIBUTION STATEMENT
Deniz Kenan Kılıç: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Data curation, Writing – original draft, Writing – review & editing, Visualization. Alex Elkjær Vasegaard: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Data curation, Writing – original draft, Writing – review & editing, Visualization. Aurélien Desoeuvres: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Data curation, Writing – original draft, Writing – review & editing. Peter Nielsen: Conceptualization, Methodology, Validation, Investigation, Data curation, Writing – review & editing, Supervision.
§ DECLARATION OF COMPETING INTEREST
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
§ ACKNOWLEDGMENT
All authors read and agreed to the published version of the manuscript.
§ SCOPUS AND OPENAI RESULTS
In Tab. <ref>, Scopus results are shown for the initial query stated in Sec. <ref>. As it is mentioned, articles highlighted in red are manually deleted, and the orange ones that use the AI method are related to the use case but do not specify it in the title and abstract.
In Tab. <ref>, OpenAI results for the initial prompt and ground truth methods extracted manually are shown with performance determinants. These performance determinants are utilized to calculate performance metrics stated in <ref>.
§ OPENAI PERFORMANCE RESULTS
Below, OpenAI performance results for 55 articles are listed in the same order as Tab. <ref>.
* TP = [1, 3, 2, 3, 1, 2, 1, 2, 2, 2, 3, 1, 1, 3, 1, 1, 2, 1, 4, 2, 0, 2, 1, 3, 5, 1, 1, 3, 1, 1, 1, 2, 0, 6, 2, 1, 2, 3, 1, 1, 1 ,2, 1, 2, 2, 3, 2, 6, 2, 2, 2, 4, 2, 1, 1]
* FP = [0, 1, 0, 0, 0, 2, 0, 2, 1, 0, 0, 1, 2, 2, 0, 1, 3, 1, 0, 0, 3, 0, 1, 2, 3, 1, 0, 0, 0, 0, 1, 2, 0, 0, 1, 2, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 2, 0, 1, 1, 1, 1, 5, 2]
* FN = [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 2, 0, 0, 0, 0, 0, 0 ,0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 2, 0, 0, 0]
* Precisions = [1, 0.75, 1, 1, 1, 0.5 ,1, 0.5, 0.6667, 1, 1, 0.5, 0.3334, 0.6, 1, 0.5, 0.4, 0.5, 1, 1, 0, 1, 0.5, 0.6, 0.625, 0.5, 1, 1, 1, 1, 0.5, 0.5, 0, 1, 0.6667, 0.3334, 1, 1, 0.5, 0.5, 1, 1, 0.5, 1, 0.6667, 0.75, 0.6667, 0.75, 1, 0.6667, 0.6667, 0.8, 0.6667, 0.1667, 0.3334] and Average(Precisions) = 0.7111
* Recalls = [1, 1, 1, 1, 1, 1, 1, 0.6667, 1, 1, 1, 1, 0.5, 1, 1, 1, 1, 1, 0.8, 1, 0, 1, 1, 1, 0.8334, 1, 1, 1, 1, 1, 1, 1, 0, 0.75, 1, 1, 1, 1, 1, 1, 1, 0.6667, 1, 1, 1, 1, 1, 0.8571, 1, 1, 1, 0.6667, 1, 1, 1] and Average(Recalls) = 0.9226
* F1-score = [1, 0.8571, 1, 1, 1, 0.6667, 1, 0.5714, 0.8, 1, 1, 0.6667, 0.4, 0.75, 1, 0.6667, 0.5714, 0.6667, 0.8889, 1, 0, 1, 0.6667, 0.75, 0.7143, 0.6667, 1, 1, 1, 1, 0.6667, 0.6667, 0, 0.8571, 0.8, 0.5, 1, 1, 0.6667, 0.6667, 1, 0.8, 0.6667, 1, 0.8, 0.8571, 0.8, 0.8, 1, 0.8, 0.8, 0.7273, 0.8, 0.2857, 0.5] and Average(F1-score) = 0.7775
If all 55 results are considered as a single result pool, then there are 108 TPs, 51 FPs, and 12 FNs. Then precision, recall and F1-score values are 0.6793, 0.9, and 0.7742, respectively.
When the performance metrics are examined, OpenAI shows good performance with respect to the manually generated ground truths.
§ OPENAI SENSITIVITY RESULTS
In Tab. <ref>, Tab. <ref> and Tab. <ref>, missing and extra/different methods are given with respect to the initial prompt. If there is no missing or extra/different method name, it is expressed by “X".
§ EXTRACTED AI METHODS AND POST-ANALYSES
In Tab. <ref>, the number of times each method is mentioned in the articles is reported by year, together with the corresponding relevancy and popularity sums. The total number of articles used is 55, i.e., those in Tab. <ref> that are neither filtered nor general. Methods are grouped with similar ones and classified by their number of occurrences, as described below. Of course, the classification of methods can be done in different ways and at different levels; here it is used to obtain a more compact overview of the results. The “true general found" results are not included, and only the “true found" methods mentioned in at least two articles are shown.
In the classes listed below, each method is followed by the total number of papers employing it, the number of times it is used in each year, and the total relevancy and popularity metrics for those years.
Class 1 (Artificial neural networks): Paraconsistent Artificial Neural Network (PANN) (x1; 2014, 0, 0.6), Artificial Neural Network (ANN) (x6; 2014, 1, 2.7; 2015, 1, 1; 2016, 0, 6; 2017, 1, 0.7143; 2021, 0, 4.6667; 2023, 0, 2), Probabilistic Neural Network (PNN) (x2; 2015, 0, 0.4444; 2017, 0, 3.2857), Multi-Layer Feed-forward Neural Network (MFFNN) (x1; 2016, 0, 1.125), Neural Networks (x6; 2017x2, 1, 3.8572; 2018, 0, 4; 2019, 0, 0.2; 2020, 1, 0.25; 2023, 0, 0), Perceptron (x1; 2020, 1, 3.75), Back-Propagation Perceptron (x1; 2020, 1, 3.75), Fully Connected Network (FCN) (x1; 2022, 0, 1.5)
Class 2 (Deep learning methods): Deep learning (x15; 2019, 0, 0.2; 2020x3, 1, 27.75; 2021x3, 1, 4.3334; 2022x3, 1, 3.5; 2023x5, 1, 2), Generative Adversarial Network (GAN) (x2; 2019, 0, 0.2; 2020, 1, 3.75), ResNet (x1; 2020, 1, 3.75), ResNet50 (x1; 2021, 0, 4.6667), AlexNet (x2; 2020, 1, 3.75; 2021, 0, 4.6667), U-Net (x2; 2021, 1, 0; 2022, 0, 1.5), Convolutional Neural Network (CNN) (x4; 2021, 0, 4.6667; 2022, 0, 2; 2023x2, 1, 0), 2D U-Net (x1; 2021, 0, 2.3333), 3D U-Net (x1; 2021, 0, 2.3333), Deep Reinforcement Learning (DRL) (x1; 2022, 0, 1), Convolutional Encoder-Decoder Architecture (x1; 2022, 0, 1), Convolution algorithm (x1; 2022, 0, 0), Deep Convolutional Neural Network (DCNN) (x1; 2023, 0, 0), NASNetLarge (x1; 2023, 1, 0), DenseNet169 (x1; 2023, 1, 0), InceptionResNetV2 (x1; 2023, 1, 0), EfficientNets (x1; 2023, 0, 2), Conditional Generative Adversarial Network (cGAN) (x1; 2023, 0, 0)
Class 3 (Tree-based methods): Random Forest (x2; 2016, 1, 2.125; 2018, 0, 4), Decision Trees (x1; 2016, 0, 4.125), Extra Trees Classifier (x1; 2022, 0, 8.5)
Class 4 (Optimization methods): Genetic Algorithm (x1; 2014, 1, 2.7), Sequential Minimal Optimization (SMO) (x1; 2016, 0, 1.75), Ant Colony Optimization (ACO) (x1; 2023, 1, 1)
The occurrences of each method are counted per year between 2014 and 2023 and over all time. Relevancy and popularity sums are calculated for each method over the related articles. In other words, the first column (“Papers") states how many articles use the method in total, while the second and third columns show the sums of the relevancy and popularity values of these articles, respectively.
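A minimal sketch of this aggregation is given below, assuming the extracted results are collected as one row per (paper, method) pair; the column names and example values are illustrative.

# Sketch of the per-year aggregation, assuming one row per (paper, method)
# pair. Column names and values are illustrative.
import pandas as pd

rows = [
    {"method": "CNN", "year": 2021, "relevancy": 0, "popularity": 4.67},
    {"method": "CNN", "year": 2023, "relevancy": 1, "popularity": 0.0},
    {"method": "SVM", "year": 2016, "relevancy": 1, "popularity": 2.12},
]
df = pd.DataFrame(rows)

summary = (df.groupby(["method", "year"])
             .agg(papers=("method", "size"),
                  relevancy=("relevancy", "sum"),
                  popularity=("popularity", "sum"))
             .reset_index())
print(summary)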
If all the time is considered, class 1, class 2, class 3, class 4, “K-nearest neighbors (KNN)", “support vector machine (SVM)", “K-means", “grey level co-occurrence matrix (GLCM)" and “logistic regression" are the ones that are mentioned in at least 2 articles. Sorting the total number of papers using these methods from largest to smallest is as follows:
Papers: class 2 > class 1 > “SVM" > class 3 > class 4 = “KNN" > “K-means" = “logistic regression" = “GLCM"
The relevancy values for all times are sorted as:
Relevancy: class 2 > class 1 > class 4 = “SVM" = “KNN" > class 3 = “GLCM" > “K-means" = “logistic regression"
On the other hand, the sorting of popularity values for all time is given below and it indicates the highest value belongs to class 2.
Popularity: class 2 > class 1 > class 3 > “logistic regression" > “SVM" > class 4 > “GLCM" > “KNN" > “K-means"
From the above methods, it is seen that both the number of implementations and the popularity of class 1 and class 2 have been increasing over the years. For this reason, tests in a similar problem domain can be started with AI methods from these classes.
|
http://arxiv.org/abs/2307.06078v1 | 20230712110100 | A systematic survey of Moon-forming giant impacts: Non-rotating bodies | [
"Miles Timpe",
"Christian Reinhardt",
"Thomas Meier",
"Joachim Stadel",
"Ben Moore"
] | astro-ph.EP | [
"astro-ph.EP"
] |
Corresponding author: Miles Timpe ([email protected])
Miles Timpe (ORCID: 0000-0003-1938-7877), Christian Reinhardt (ORCID: 0000-0002-4535-3956), Thomas Meier (ORCID: 0000-0001-9682-8563), Joachim Stadel (ORCID: 0000-0001-7565-8622), and Ben Moore (ORCID: 0000-0001-5996-171X)
Institute for Computational Science, University of Zürich, Winterthurerstrasse 190, 8059 Zürich, Switzerland
In the leading theory of lunar formation, known as the giant impact hypothesis, a collision between two planet-size objects resulted in a young Earth surrounded by a circumplanetary debris disk from which the Moon later accreted. The range of giant impacts that could conceivably explain the Earth-Moon system is limited by the set of known physical and geochemical constraints. However, while several distinct Moon-forming impact scenarios have been proposed—from small, high-velocity impactors to low-velocity mergers between equal-mass objects—none of these scenarios have been successful at explaining the full set of known constraints, especially without invoking controversial post-impact processes. In order to bridge the gap between previous studies and provide a consistent survey of the Moon-forming impact parameter space, we present a systematic study of simulations of potential Moon-forming impacts. In the first paper of this series, we focus on pairwise impacts between non-rotating bodies. Notably, we show that such collisions require a minimum initial angular momentum budget of approximately 2 J_EM in order to generate a sufficiently massive protolunar disk. We also show that low-velocity impacts (v_∞≲ 0.5 v_esc) with high impactor-to-target mass ratios (γ→ 1) are preferred to explain the Earth-Moon isotopic similarities. In a follow-up paper, we consider impacts between rotating bodies at various mutual orientations.
§ INTRODUCTION
The prevailing theory of lunar formation is known as the Giant Impact Hypothesis, which posits that Earth's Moon is the result of an early and energetic impact event between two planetary-size bodies <cit.>. In the leading version of this theory, which is generally referred to as the “canonical” Moon-forming impact, the young Earth suffered an oblique and relatively low-velocity impact by a Mars-sized impactor. This class of impacts corresponds to an impactor-to-target mass ratio of γ≃ 0.1 and an impact velocity of v_imp≃ v_esc, where v_esc is the mutual escape velocity of the colliding bodies. Early simulations suggested that the canonical scenario could place approximately one lunar mass of material into orbit in the form of a circumplanetary disk while simultaneously reproducing the angular momentum budget of the Earth-Moon system and the low iron content of the Moon <cit.>.
However, since the canonical scenario was proposed, improved constraints on the Earth-Moon system and advances in simulation techniques have brought the canonical scenario under renewed scrutiny. A well understood short-coming of the canonical scenario is its inability to explain the remarkable isotopic similarity of Earth's mantle and lunar samples returned by the Apollo missions. Indeed, the isotopic composition of the lunar and terrestrial mantles are indistinguishable when measured for several isotope ratios, including ^18O/^17O <cit.>, ^50Ti/^47Ti <cit.>, and ^182W/^184W <cit.>.
In the context of giant impact simulations, these measurements have significantly constrained the post-impact compositional difference between the Earth and the protolunar disk. This is a problem for the canonical scenario because most of the material that ends up in the impact-generated disk is derived from the impactor <cit.>. The extent to which this translates to differences in the isotopic fingerprints of the Earth and protolunar disk depends on the pre-impact isotopic compositions of the colliding bodies. If the difference between the pre-impact isotopic fingerprints of the target and impactor is large, then even a small difference in the post-impact compositions of the Earth and the protolunar disk will result in significant isotopic differences. In contrast, if the impactor has the same pre-impact isotopic composition as the target, then a preponderance of impactor material in the disk is not a problem. However, this would imply that the target and impactor formed at a similar heliocentric distance in the protoplanetary disk. While theoretically possible, simulations of terrestrial planet formation that track the origin of the accreted bodies show that such a scenario is a low-probability event <cit.>.
Another way in which the canonical scenario might be reconciled with distinct isotopic fingerprints of the colliding bodies is through a post-impact mixing process that equilibrates Earth's magma ocean with the inner edge of the protolunar disk <cit.>. However, such processes require long time scales and therefore imply a formation time for the Moon that is orders of magnitude longer than that predicted by N-body simulations of lunar accretion <cit.>. This complication is even more pronounced for heavier (i.e., more refractory) elements such as titanium, which has also been shown to be indistinguishable between the Earth's mantle and the Moon <cit.>.
Nevertheless, recent numerical studies have strengthened the case for post-impact equilibration. For example, post-impact mixing between material derived from the target and impactor may have been substantially underestimated because the different flavors of the Smoothed Particle Hydrodynamics (SPH) method (e.g., ) used in most giant impact simulations suppress mixing <cit.>. Furthermore, <cit.> showed that using a numerical method more suitable for investigating mixing substantially increases the amount of target material placed into orbit under the canonical scenario. <cit.> additionally demonstrated that the canonical scenario is both consistent with the known isotopic constraints and successfully reproduced the heterogeneity of Earth's mantle, showing that it is a natural consequence of such a collision.
Whereas the canonical scenario generally assumes that the Moon accretes out of the protolunar disk, a recent study by <cit.> shows that the Moon could instead be formed by the gravitational collapse of the outermost region of the arm-like structure that is observed in the canonical scenario. During repeated tidal encounters with the post-impact Earth, the Moon can accrete a thick layer of mantle material which could explain the isotopic similarity if such a vertical stratification can remain until today. However, it is unclear to which extent the gravitational collapse of impactor material is enhanced due to numerical issues and if such a vertical stratification can persist over longer time scales or if it will be erased by long-term geological processes.
Finally, recently discovered differences in vanadium isotopes between Earth and lunar samples <cit.> suggest that these differences can only be explained by differences in the pre-impact bodies' differentiation processes. The vanadium isotope measurements are therefore inconsistent with equilibration after an impact, implying that any impactor would have had to have been isotopically very similar to Earth. In such a scenario, the Moon is formed mostly from impactor material as predicted by the canonical scenario.
Alternatively, several novel impact scenarios have been proposed which are notably distinct from the canonical scenario. The most successful of these are high impact energy and high angular momentum scenarios in which near-perfect mixing is achieved, either due to merging of similar mass embryos <cit.> or ejection of mantle material from a rapidly spinning proto-Earth hit by a very small impactor at about three times the mutual escape velocity <cit.>. Furthermore, such impacts can result in the formation of circumplanetary structures known as “synestias” which allow the young protolunar disk to continue exchanging material with the Earth's mantle following the impact <cit.>, further enhancing post-impact equilibration.
However, simulations that investigate the giant-impact stage during terrestrial planet formation show that equal mass collisions are very rare <cit.> and would occur very early. Such equal mass collisions could therefore be difficult to reconcile with the age of the Moon <cit.>. Likewise, the large impact velocities required by <cit.> are not observed in the simulations of <cit.>, and the large pre-impact rotation of the proto-Earth suggests that it experienced a similar-mass merger before the Moon-forming impact, which is a very rare event according to the same study.
The largest challenge for such high angular momentum models is to explain how the excess angular momentum of approximately 1-2 J_EM can be lost in order to be consistent with observations. <cit.> proposed that evection resonances with the Sun could remove the required amount of angular momentum but recent studies suggest that the parameter space for which this mechanism is effective is narrow and its efficiency is strongly dependent on the choice of tidal model <cit.>. Thus, a particular difficulty of lunar formation theory continues to be the identification of a giant impact scenario that can simultaneously reproduce the angular momentum budget of the Earth-Moon system and the isotopic similarity of the Moon and Earth's mantle.
In addition to concerns about the angular momentum and isotopic constraints, the canonical Moon-forming impact also struggles to produce sufficiently massive protolunar disks. Indeed, the simulations presented here show that the class of canonical Moon-forming impacts cannot produce disks with sufficient masses to explain the current mass of the Moon, let alone the more massive disks that appear to be required by accretion studies. This is a significant problem because N-body simulations of accretion in the protolunar disk suggest that accretion rates are in the range of 10-55% and therefore a disk mass of at least two lunar masses is required to form the Moon <cit.>.
Thus, as it stands, several distinct Moon-forming impact scenarios have been proposed that—sometimes necessarily in combination with post-impact processes—are capable of reproducing certain constraints of the Earth-Moon system. However, to date, none of these individual scenarios, either with or without effective post-impact processes, are capable of reproducing all necessary constraints of the Earth-Moon system. We further note that prior work investigating Moon-forming impacts has largely been focused on explaining specific observational constraints and was therefore limited in the range of pre-impact parameters considered. Notably, with the exception of <cit.> and <cit.>, pre-impact rotation of the target and impactor was neglected in such studies.
The purpose of the present study is therefore to provide the community with a comprehensive survey of the parameter space of potential Moon-forming impacts and provide a systematic analysis of the collision outcomes. The simulations in this study assume a single giant impact event and the subsequent post-impact analysis determines whether any such event can simultaneously explain the observed physical, compositional, and geochemical constraints of the Earth-Moon system. We have chosen to split the results into two papers in order to keep the results tractable. The present paper (hereafter Paper I) focuses on the subset of collisions without pre-impact rotation. The follow-up paper (hereafter Paper II) considers collisions with pre-impact rotation of the target and impactor for a wide range of rotational configurations (e.g., co-rotating, counter-rotating). Paper I is intended to serve as a baseline for understanding the results of the rotating impacts discussed in Paper II.
§ CONSTRAINTS ON POST-IMPACT PROPERTIES
There are a number of empirically determined constraints on the Earth-Moon system that must be met in order for a Moon-forming simulation to be considered successful (see <cit.> for a modern and comprehensive review). These constraints are: the total angular momentum budget of the Earth-Moon system (J_EM), the protolunar disk mass (M_d) as a proxy for the mass of the Moon (M_), the iron fraction of the post-impact Earth (F^Fe_⊕), the iron fraction of the protolunar disk (F^Fe_d) as a proxy for the iron fraction of today's Moon (F^Fe_), and the difference in impactor-to-target material between the planet and disk (δ_pd) as a proxy for the isotopic similarity of the Earth and Moon. There are other physical properties of the Earth-Moon system which do not strictly need to be explained by the simulations. These properties can readily be explained by, for example, post-impact dynamical processes, such as the inclination of the lunar orbit (θ_), and are therefore not considered in this work.
§.§ Post-impact angular momentum budget
The constraint on the post-impact angular momentum budget is set by the current angular momentum of the Earth-Moon system and any subsequent processes which could conceivably alter the angular momentum of the system following the impact. Currently, the only known process by which a significant amount of angular momentum could have been removed from the Earth-Moon system is the Solar Evection Resonance (SER) <cit.>, which transfers angular momentum from the Earth-Moon system to the Earth's heliocentric orbit.
The amount of angular momentum that can be drained from the Earth-Moon system in this way is still debated and the results depend strongly on the underlying tidal model <cit.>. Some studies have suggested that no more than a few percent of the initial post-impact angular momentum can be lost through the SER, whereas other studies have suggested that up to 2-3 J_EM can be removed in this way. While the SER could theoretically remove a significant fraction of the post-impact angular momentum, it makes sense to favor impact scenarios with post-impact angular momenta as close to J_EM as possible. This reduces the reliance on the SER, which is not well understood and requires post-impact dynamical configurations that are difficult to achieve. Thus, while we consider impact scenarios that produce post-impact angular momentum budgets of up to several J_EM, given two successful scenarios in which only the post-impact angular momenta differed, it would be reasonable to favor the scenario with a total post-impact angular momentum budget closer to J_EM.
§.§ Protolunar disk mass
The mass and composition of the protolunar disk is used as a proxy for the mass and composition of the Moon, which is assumed to form at a later time via accretion from the disk <cit.>. The SPH simulations used to study giant impacts cannot subsequently follow the lunar accretion process, as the timescale is orders of magnitude longer and the computational cost therefore prohibitive. Nevertheless, studies of the subsequent accretion process (decoupled from the impact simulations) have been carried out using numerical techniques designed specifically for that purpose <cit.>. These studies indicate that less than half of the protolunar disk material ends up in the Moon <cit.>. This suggests that the post-impact disk must contain at least 2 M_ worth of material to subsequently form a body with a mass at least that of the Moon. Similarly, too massive a disk would presumably result in several moons or a Moon that is much too massive. However, no upper limit has been systematically determined. Nevertheless, it is reasonable to assume that the protolunar disk cannot be much more massive than a few lunar masses given typical accretion efficiencies.
§.§ Iron content of the Earth and protolunar disk
Following an impact, iron from the cores of the target and impactor will be distributed between the post-impact Earth, the protolunar disk, and the ejecta. Iron located in the protolunar disk may be incorporated into the Moon as it accretes from the protolunar disk material. The amount of iron that ends up in the protolunar disk and is subsequently accreted into the Moon (assuming some accretion efficiency) must match the constraints derived from measurements and assumed lunar geological processes. Given our current understanding of the iron content of Earth, this means that roughly 0.33 M_⊕ of iron should end up in the post-impact Earth.
In the case of the Moon, studies have constrained its iron fraction to be ≤ 1.5% <cit.>, meaning that any successful impact scenario will have to avoid injecting any significant amount of iron into the protolunar disk. In this respect, it is difficult to set a hard upper limit on the iron fraction of the protolunar disk due to the unknown accretion efficiency of iron into the Moon. Thus, while the Moon is constrained to ≤ 1.5 % iron by mass, the constraint for the protolunar disk is likely higher because some of the iron may not be accreted into the Moon and may instead be ejected or reaccreted by the Earth. The initial radial distribution of the iron in the protolunar disk will likely play an important role as well, given that the iron inside and outside of the Roche limit will be subject to different dynamical processes. The long-term evolution the protolunar disk, however, is beyond the scope of this work. Future studies are needed to constrain the accretion efficiency of iron and set an upper limit and distributional constraints on the iron in the protolunar disk.
§.§ Compositional similarity of Earth and the protolunar disk
Since the first lunar samples were returned by the Apollo missions, it has been clear that the Moon and the Earth—or at the very least their mantles—exhibit a remarkable geochemical similarity. More modern studies have uncovered additional isotopic similarities across several elements. However, simulations of giant impacts are not capable of tracking isotopic ratios directly and, as a result, the relative fraction of impactor material between Earth and the protolunar disk is used as a proxy,
δ_pd = (N_imp/N_tot)_d - (N_imp/N_tot)_p ,
where N_imp is the number of particles originating from the impactor and N_tot is the total number of particles in the post-impact planet or disk, indicated by the subscripts p and d, respectively. A positive value of δ_pd therefore indicates that the protolunar disk is enriched in impactor material relative to the Earth, whereas a negative value of δ_pd indicates that the disk is depleted in impactor material relative to the Earth.
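For reference, this definition translates directly into a few lines of code; the sketch below assumes boolean arrays flagging, for the particles assigned to the planet and to the disk, whether each particle originated from the impactor.

# delta_pd from the definition above, assuming boolean arrays that flag, for
# the particles assigned to the planet and to the disk, whether each particle
# originated from the impactor.
import numpy as np

def delta_pd(planet_from_impactor: np.ndarray,
             disk_from_impactor: np.ndarray) -> float:
    f_disk = disk_from_impactor.mean()      # (N_imp / N_tot)_d
    f_planet = planet_from_impactor.mean()  # (N_imp / N_tot)_p
    return float(f_disk - f_planet)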
The measured isotopic ratios are indistinguishable to within 5σ. Such a sensitivity of δ_pd is difficult to achieve in SPH simulations, as δ_pd depends on the resolution of the simulation and the mass (i.e., number of particles) of the post-impact disk. Nevertheless, values of δ_pd near or equal to zero should be interpreted as favorable because they allow for a larger pre-impact compositional difference between the target and impactor and rely less on post-impact equilibration processes.
§ METHODS
The giant impact simulations presented in this work are performed with the Smoothed Particle Hydrodynamics (SPH) code <cit.>. The version of the code used in this work includes modifications as described in <cit.> and <cit.> and uses a generalized equation of state (EOS) interface <cit.>. This version has been used extensively for giant impact simulations <cit.>.
§.§ Equations of State (EOS)
The simulations in this work follow collisions between bodies with distinct compositional layers, namely iron cores and rocky mantles. We use the ANEOS (ANalytic Equation of State) EOS <cit.> to model the materials, specifically iron <cit.> for the core and dunite <cit.> for the mantle. ANEOS is based on fitting analytic expressions of the Helmholtz free energy in different phases of the material to experimental data. It covers a wide range of densities and temperatures and faithfully models shock compression and release. This makes it a very popular choice for impact simulations.
§.§ Pre-impact models
Each collision begins with two distinct bodies, designated the target and impactor, whereby the target is the more massive of the two. The particle representations of these bodies are created with the code described in <cit.>, including improvements for multi-component models as described in <cit.> and <cit.>. In this work, the models have Earth-like compositions, with an iron core (33% by mass) and a rocky mantle (67%). The thermal profiles of the models are constructed to be adiabatic, with surface temperatures set to T_s = 1000 K.
§.§ Initial conditions
The pre-impact state of each collision is defined by a set of parameters that define the geometry of the collision and the internal compositions of the target and impactor. We have chosen to use the initial total angular momentum budget (J_0) and asymptotic relative velocity (v_∞)—which in turn set the asymptotic impact parameter (b_∞)—to define the initial geometries of the collisions. This is in contrast to many previous studies, which have chosen to parameterize their collisions by the impact parameter (b_imp) and velocity at the moment of impact (v_imp). Our choice is motivated by the fact that the target and impactor can undergo significant deformation prior to the actual impact which renders a determination of the impact parameter and velocity at impact problematic. To avoid any confusion, a detailed definition of the asymptotic parameters (v_∞ and b_∞) and their relation to the parameters at the moment of impact (v_imp and b_imp), as well as to the initial positions of the colliding bodies at the start of each simulation (v_ini and b_ini), is provided in Appendix <ref>.
The total mass of the colliding bodies in this study is always M_tot = M_targ + M_imp = 1.05 M_⊕. Given the typical post-impact disk masses resulting from the simulations considered here, this is approximately the total mass required to produce a 1 M_⊕ Earth and a protolunar disk with a favorable mass. The masses of the target and impactor for given M_tot and γ are then,
M_targ = M_tot / ( γ + 1 ) ,
M_imp = M_tot γ / ( γ + 1 ) .
where γ is the impactor-to-target mass ratio.
The fundamental parameters that we vary between simulations are then the impactor-to-target mass ratio (γ), the initial total angular momentum budget (J_0), and the asymptotic relative velocity (v_∞). Given the three parameters above, the asymptotic impact parameter (b_∞) is calculated as follows:
b_∞ = J_0 / ( M_tot v_∞ ) × ( γ + 1 )^2 / γ .
The factor of (γ + 1)^2 / γ in Equation <ref> is required because J_0 is given in the center of mass frame, while b_∞ is calculated in the target's frame of reference, and angular momentum is not invariant under such frame transformations.
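These relations can be collected into a short helper, sketched below under the assumption of mutually consistent units; the function name and signature are illustrative rather than taken from the actual simulation setup.

# Target and impactor masses and the asymptotic impact parameter from
# (M_tot, gamma, J_0, v_inf), following the relations above. Units must be
# mutually consistent (e.g., SI); the function name is illustrative.
def initial_conditions(m_tot: float, gamma: float, j0: float, v_inf: float):
    m_targ = m_tot / (1.0 + gamma)
    m_imp = m_tot * gamma / (1.0 + gamma)
    # J_0 is defined in the center-of-mass frame, while b_inf refers to the
    # target's frame, hence the (gamma + 1)^2 / gamma factor.
    b_inf = j0 / (m_tot * v_inf) * (gamma + 1.0) ** 2 / gamma
    return m_targ, m_imp, b_inf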
Regarding the initial distance between the colliding bodies (d_ini), it is computationally prohibitive to place the target and impactor at large distances from each other. Therefore, we place the target and impactor close enough together that the pre-impact phase of the simulation is computationally tractable but far enough apart that they are not yet subject to significant mutual gravitational interactions (causing significant deformation and tidal interaction). Therefore, for all simulations in this work, d_ini = 10 R_crit, where R_crit = R_targ + R_imp and R_targ and R_imp are the radii of the (non-rotating) target and impactor, respectively.
We reiterate that in Paper I we only consider impact scenarios without pre-impact rotation of the target or impactor. In Paper II, we explore pre-impact rotation of the target and impactor as variable parameters. The initial conditions of the collisions simulated in this work are shown in Figure <ref>.
§.§ Post-impact analysis
In order to classify collisions into distinct outcomes, we use the group finder SKID <cit.> to identify the number and mass of post-impact fragments. SKID identifies coherent, gravitationally bound clumps of material by finding regions that are bounded by a critical surface in the density gradient and then removing the least bound particles one by one from the resulting structure until all particles are self-bound. The clumps identified by SKID are then combined if they are co-located.
For collisions where at least one surviving post-impact body is identified by SKID, an analysis is carried out using <cit.>, a Python package for analyzing astrophysical SPH simulations. As a first step, we identify the largest remnant (LR), which corresponds to the surviving target, the second largest remnant (SLR), which generally corresponds to the surviving impactor, and the ejecta, which corresponds to particles that are gravitationally unbound from the post-impact remnant(s). Once the LR and—if it exists—SLR are identified, we classify the collisions by outcome.
§.§.§ Collision outcomes
A diverse range of outcomes are possible for pairwise collisions. However, in the range of pre-impact conditions likely to lead to the formation of the Moon, there are only three types of outcomes that are relevant:
Merger: The impactor merges with the target as a result of the initial impact. Some fraction of the colliding material will be lost as ejecta, but this fraction is generally small.
Hit & run: The impactor survives the initial impact and has enough energy to escape the gravitational pull of the target. In this work, we only analyze the post-impact state of collisions that have been classified as a merger. We therefore ignore hit & run cases but note that, in theory, both of the post-impact remnants in such a scenario could host a circumplanetary disk.
Graze & merge: The impactor survives the initial impact but does not have enough energy to escape the gravitational pull of the target. However, note that if the bound impactor's orbit takes it beyond the Hill radius of the Earth (r_apo > R_Hill), then we consider the collision to be a hit & run. The surviving impactor will therefore re-impact the target at a later time. In these cases, we continue to run the simulation forward until the re-impact has occurred and the collision has resolved. Once these collisions have resolved, their outcomes are re-classified.
The collisions in this study are classified according to one of these categories. Note that in Paper I, which explores only non-rotating collisions, no graze & merge outcomes were found. However, graze & merge outcomes do occur in Paper II, and we therefore include their definition above for completeness.
§.§.§ Disk finder
In order to identify the planet (i.e., Earth), protolunar disk, and ejecta following the impact, we employ a novel disk finding algorithm. This disk finding algorithm differs from previous approaches in that it determines the planet's radius (R_p) by finding the radius at which the median rotation rate of local particles deviates significantly from the expected solid-body rotation rate. In a subsequent step, the algorithm largely follows previous approaches by assigning particles exterior to R_p to either the planet, disk, or ejecta based on the periapsis distance of each particle's orbit. This disk finding algorithm is described in detail in Appendix <ref>.
§ RESULTS & DISCUSSION
In this paper (Paper I), we present the results of pairwise collisions between non-rotating targets and impactors. The set of collisions in this work consists of two distinct subsets: the main subset consists of 435 impacts by relatively large (0.1 ≤γ≤ 1.0), low-velocity (v_∞≤ v_esc) impactors. The smaller subset consists of 62 impacts by relatively small (0.025 ≤γ≤ 0.05), high-velocity (1.2 v_esc≤ v_∞≤ 3 v_esc) impactors. Of the 497 collisions simulated in total, the following outcomes are observed: 355 mergers and 142 hit & runs. In the results that follow, we only consider the 355 collisions that resulted in a single large post-impact body (i.e., the mergers). The collisions in this work which have been classified as hit & run are not relevant for lunar formation because the mass of the resulting planet is significantly lower than the mass of Earth. The hit & run cases are therefore not considered in the results that follow.
The distributions of collision outcomes for the large subset are shown in Figure <ref>. As would be expected, hit & run collisions result from grazing impacts with relatively high velocities (the top right region in each panel). Circles indicate collisions that resulted in a merger, with filled circles representing collisions that generated a protolunar disk of at least one lunar mass (M_d ≥ M_) and open circles representing collisions that generated either no disk or a disk with less than one lunar mass (M_d < M_).
The collision outcomes for the small subset of low-mass, high-velocity impactors are not shown, as all of the collisions in this subset failed to produce sufficiently massive protolunar disks. Indeed, the most massive disk produced by these collisions is less than 1% of the lunar mass (M_d < 0.01 M_). Therefore, we rule out this class of collisions and ignore the associated simulations in the results and discussion that follow.
§.§ Protolunar disk mass
We find 179 mergers in our dataset that produce protolunar disks of at least one lunar mass. Figure <ref> illustrates that significantly more angular momentum than the current budget of the Earth-Moon system is required to generate a protolunar disk with at least one lunar mass. The demarcation of mergers with and without a sufficiently massive protolunar disk hints at a strong relationship between the pre-impact angular momentum budget (J_0) and post-impact protolunar disk mass (M_d). Indeed, the Pearson correlation coefficients (r) measured between the pre- and post-impact properties (see Figure <ref>) indicate that J_0 is by far the strongest determinate of M_d, with larger pre-impact angular momentum budgets driving more massive disks (Pearson r = 0.88). The impactor-to-target mass ratio (γ) also plays a significant role in determining M_d, with higher mass ratios resulting in more massive disks (Pearson r = 0.36).
Figure <ref> most clearly illustrates the relationship between J_0 and M_d. From this relationship, it is clear that in order to generate a disk with enough mass to form the Moon (M_d ≥ M_), a pre-impact angular momentum budget of at least J_0 ≃ 2 J_EM is required. Note that a protolunar disk mass of M_d = M_ implies a 100% accretion rate during the subsequent accretion of the Moon from the disk. Such an accretion rate is unrealistic and therefore disk masses will need to be significantly higher in order to provide enough material to form a lunar-mass object under the assumption of imperfect accretion. Indeed, N-body studies of lunar accretion from circumplanetary disks suggest that realistic accretion rates are closer to 25-50% <cit.>. Under these constraints, a disk mass of M_d ≥ 2 M_ is required, suggesting that the minimum viable pre-impact angular momentum budget is closer to J_0 ≃ 2.25 J_EM.
In the context of collisions between non-rotating bodies, this result presents significant difficulties for the giant impact hypothesis, as it implies that a post-impact process capable of removing more than J_EM must exist. Currently, the only known process by which a significant amount of angular momentum can be removed from the Earth-Moon system following an impact is the Solar Evection Resonance (SER). However, it is still unclear how much angular momentum an SER could have removed from the Earth-Moon system under realistic conditions, with estimates varying from a few percent <cit.> to several J_EM <cit.> depending on the underlying tidal model. This result leaves the lunar formation community with two distinct (but certainly not mutually exclusive) potential solutions for rescuing the giant impact hypothesis. One solution would be to demonstrate a sufficiently effective SER. The existence of such an SER is beyond the scope of this work, but we note that further research is needed to understand this process. Another solution may be realized by allowing for rapid pre-impact rotation of the target and impactor. We explore whether such pre-impact rotation can reconcile the angular momentum problem in Paper II and reserve an assessment of the effectiveness of the SER for future work.
§.§ Composition of the protolunar disk
Under the giant impact hypothesis, the Moon is assumed to have accreted from the circumplanetary disk created by the impact. Thus, the composition of the Moon is largely determined by the composition of the post-impact protolunar disk. Two compositional constraints are relevant in this respect: the iron fraction of the disk (F^Fe_d) and the fraction of disk material originating from the impactor body. The latter constraint is important in relation to the composition of the post-impact Earth; in order to explain the isotopic similarities between the Earth and the Moon, the fraction of impactor material in the post-impact Earth and protolunar disk should be similar.
§.§.§ Iron content
A successful simulation should avoid injecting too much iron into the protolunar disk. While the iron fraction of the protolunar disk should preferably be less than 2%, the allowable iron fraction can be increased if we assume that iron is accreted into the Moon less efficiently than mantle material. Figure <ref> illustrates a strong relationship between the asymptotic relative velocity (v_∞) and F^Fe_d for γ≳ 0.5, while for low γ the fraction of iron in the disk is difficult to predict. This latter uncertainty appears to be due to the tendency of low-γ impacts to produce relatively large intact fragments. The iron fraction of these fragments and their subsequent inclusion in the protolunar disk do not depend predictably on the pre-impact parameters. In future work, it will be important to study the behavior of these fragments at much higher numerical resolutions.
These results indicate that high-γ (γ≳ 0.5), low-velocity (v_∞ < 0.7 v_esc) impacts are favored in order to keep the iron fraction of the disk sufficiently low. It is possible that higher disk iron fractions could be successful, but this implies a spatial distribution of the iron in the disk which prevents it from being accreted at the same rate as mantle material. The long-term behavior of accretion is beyond the scope of this work and the maximum disk iron fraction should be constrained by future post-impact accretion studies. However, it is reasonable to expect that realistic disk iron fractions would not be more than a few percent.
§.§.§ Isotopic composition
The compositional difference between the protolunar disk and the proto-Earth is quantified by δ_pd, which is defined in Equation <ref>. The strongest determinant of δ_pd is the impactor-to-target mass ratio (γ), with lower values of γ resulting in increasingly large differences between the fractions of impactor material in the Earth and protolunar disk. The Pearson correlation coefficient for γ and δ_pd quantifies this inverse relationship, with Pearson r = -0.79. Figure <ref> most clearly demonstrates this relationship. Overall, it is very difficult to achieve the level of compositional similarity suggested by isotopic measurements. Only at equal or very nearly equal-mass mergers (γ→ 1) does the difference in the fraction of impactor material between the Earth and protolunar disk approach zero (see Figure <ref>). This result strongly favors near-equal mass mergers if there is any significant compositional difference between the pre-impact target and impactor.
§.§.§ Post-impact mixing
If the atmosphere of the Earth and the inner edge of the protolunar disk remain in contact following the impact, then it is possible that these reservoirs could continue to exchange material. This could have the effect of reducing the iron fraction of the disk (assuming that iron is preferentially lost to the planetary atmosphere) or equilibrating the isotopic composition of the Earth and protolunar disk. In order to equilibrate the compositions of the planet and disk post-impact, processes that rely on a link between the planet's mantle (via its post-impact atmosphere) and inner disk have been suggested.
A post-impact structure known as a synestia is currently thought to offer such a link. However, as Figure <ref> shows, not all collisions will result in such a post-impact structure. Only those collisions in the hot-spin stability limit (HSSL) regime are candidates for post-impact compositional equilibration. To reach the HSSL regime, large initial angular momentum budgets are required (J_0 ≥ 2 J_EM). For lower mass ratios (γ < 0.5), too much angular momentum can prevent the post-impact system from being in the HSSL regime.
§.§ Angular momentum budget
The angular momentum budget of the Earth-Moon system is extremely well constrained and angular momentum is difficult to alter via post-impact processes. Therefore, a critical question for potential Moon-forming impacts is how much angular momentum remains in the bound material (i.e., the Earth and protolunar disk) following the impact. The results in this work demonstrate that very little of the pre-impact angular momentum budget (J_0) is lost via the impact-generated ejecta. Given that at least J_0 ≃ 2.25 J_EM is required to generate a significantly massive protolunar disk (Figure <ref>), this implies that there must be a post-impact process capable of removing at least ∼ 1.25 J_EM if non-rotating cases are to be successful.
For collisions between non-rotating bodies, the initial total angular momentum budget (J_0) strongly determines two important post-impact quantities for lunar formation: the post-impact angular momentum budget of the bound material (J_b) and the mass of the protolunar disk (M_d). The Pearson correlation coefficients for J_0 - J_b and J_0 - M_d quantify these effects and are r=0.88 and r=0.89, respectively. Moreover, the protolunar disk mass is almost perfectly correlated with the angular momentum budget of the protolunar disk, with a coefficient of r=0.99. For collisions that result in full or partial accretion (i.e., those that do not result in a hit & run), almost all of the angular momentum remains with the bound material. Only at lower impactor-to-target mass ratios (γ≲ 0.5) is significant angular momentum carried away by the ejecta for high-velocity impacts.
§.§ Ejecta
For high impactor-to-target mass ratios (γ≳ 0.5), the mass of the ejecta never exceeds 5% of the initial total mass (M_ej < 0.05 M_tot). For collisions with γ > 0.5, the ejecta mass generally decreases with an increasing angular momentum budget. However, for collisions with γ < 0.5, the trend is reversed and the ejecta mass increases with an increasing angular momentum budget.
Similarly, the fraction of angular momentum carried away with the ejecta is generally small. The exception to this is at low-γ, where higher-velocity impacts start to produce ejecta that carries away a significant fraction of the initial total angular momentum. These low-γ, high-velocity cases correspond to the cases with relatively large ejecta masses (∼ 5%).
§.§ Promising classes of impacts
The results presented here show that non-rotating collisions cannot generate sufficiently massive protolunar disks below J_0 ≃ 2 J_EM. If the angular momentum constraint is relaxed (i.e., by assuming there exists a post-impact process that is capable of removing large amounts of angular momentum from the system), then a handful of collisions in our dataset are capable of meeting the remaining constraints. We consider two different sets of constraints: one strict and one more permissive.
In the permissive case, we ask which simulations produce a disk of at least one lunar mass (M_d ≥ M_) with an iron fraction of less than 4% (F^Fe_d≤ 0.04). This assumes an overall accretion efficiency of 50-100% in the protolunar disk and an iron accretion efficiency of <50%. In the strict case, we ask which simulations produced a disk within the mass range suggested by accretion studies (2M_≤ M_d ≤ 4M_) with a disk iron fraction below 2% (F^Fe_d≤ 0.02) and post-impact compositional difference between the proto-Earth and protolunar disk of less than 5% (| δ_pd| ≤ 0.05).
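Both sets of constraints amount to simple row filters over the same kind of outcome table used above. A sketch of how they might be applied follows; column names are again illustrative placeholders, and disk masses are assumed to be expressed in lunar masses:

    import pandas as pd

    runs = pd.read_csv("mergers.csv")   # hypothetical columns: M_d, F_Fe_d, delta_pd

    # Permissive: M_d >= 1 lunar mass and disk iron fraction <= 4%.
    permissive = runs[(runs["M_d"] >= 1.0) & (runs["F_Fe_d"] <= 0.04)]

    # Strict: 2 <= M_d <= 4 lunar masses, F_Fe_d <= 2%, |delta_pd| <= 0.05.
    strict = runs[
        runs["M_d"].between(2.0, 4.0)
        & (runs["F_Fe_d"] <= 0.02)
        & (runs["delta_pd"].abs() <= 0.05)
    ]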
Two facts conspire to rule out low-gamma (γ < 0.2) collisions between non-rotating bodies as viable lunar formation scenarios. First, our results demonstrate that a minimum pre-impact angular momentum budget of 2 J_EM is required to produce a sufficiently massive disk. Second, for γ < 0.2, there are no valid trajectories resulting in collisions for J_0 > 2 J_EM. This raises interesting questions for the non-rotating canonical Moon-forming impact because, given the results presented here, such an impact cannot produce a sufficiently massive protolunar disk and could therefore not have led to the formation of the Moon.
While none of the collisions in our dataset are able to reconcile the angular momentum constraint, some collisions are more favorable in terms of post-impact compositional constraints. Indeed, collisions between near-equal mass bodies (γ≃ 1) produce Earths and protolunar disks with nearly indistinguishable isotopic compositions (F^imp_p≃ F^imp_d). Of course, while this measure is a crude proxy for actual isotopic compositions, such values do indicate a much more favorable initial compositional difference that may more easily reach equilibrium via post-impact mixing processes.
Taken together, the simulations in this work suggest that the most favorable impact conditions are low-velocity (v_∞ < 0.7 v_esc) impacts between near-equal mass bodies (γ≃ 1) with pre-impact angular momentum budgets of J_0 ≥ 2.25 J_EM. These collisions are likely to produce sufficiently massive protolunar disks with favorable compositions and iron fractions. We note that this class of impacts (i.e., low-velocity, equal-mass mergers) most closely corresponds to the Moon-forming impacts proposed by <cit.>. The results here necessitate, however, a post-impact process capable of removing at least the equivalent of the current angular momentum budget of the Earth-Moon system, which lends support to the stronger class of SER proposed by <cit.>.
§ CONCLUSIONS
We simulate 497 pairwise collisions between differentiated non-rotating bodies. Two distinct sets of collisions are considered: a main set of 435 collisions with large impactor-to-target mass ratios (0.1 ≤γ≤ 1) and asymptotic relative velocities equal to or below the mutual escape velocities of the colliding bodies (v_∞≤ v_esc); and a smaller set of 62 collisions with small mass ratios (0.025 ≤γ≤ 0.05) and velocities above the mutual escape velocities of the colliding bodies (1.2 v_esc≤ v_∞≤ 3 v_esc). We reiterate that the conclusions presented here assume the absence of any pre-impact rotation. The effects of such rotation are the focus of Paper II.
We find that the smaller set of low-γ, high-velocity collisions between non-rotating bodies fails to produce protolunar disks with sufficient mass to explain lunar formation. Indeed, this class of collisions is unable to generate disks with more than 1% of a lunar mass. We therefore rule out this class of impacts as candidates for Moon-forming impacts.
In the main set of higher-γ, lower-velocity collisions, only those collisions with pre-impact angular momentum budgets of J_0 ≥ 2 J_EM are able to produce disks with the minimum viable mass budget of one lunar mass (M_d ≥ M_). If disk mass constraints suggested by post-impact N-body accretion studies are used (2M_≤ M_d ≤ 4M_), then only collisions with J_0 ≥ 2.25 J_EM remain as viable candidates. In the absence of pre-impact rotation, this result implies that in order to reproduce the observed angular momentum budget of the Earth-Moon system, post-impact processes that are capable of removing at least 1-1.25 J_EM must exist, which supports a strong SER mechanism.
Favorable protolunar disk compositions are only achieved by low-velocity impacts between near-equal mass bodies. Indeed, in order to avoid injecting too much iron into the protolunar disk, low-velocity (v_∞ < 0.7 v_esc) impacts are favored, while only near-equal mass collisions (γ→ 1) are able to produce an Earth and protolunar disk with similar isotopic compositions (δ_pd→ 0). Differences in post-impact isotopic compositions quickly increase as γ decreases.
Taken together, these results cast doubt on the canonical Moon-forming impact and suggest that low-velocity, high-angular momentum impacts between near-equal mass bodies <cit.> are more favorable candidates for Moon-forming impacts. This class of impacts requires a process capable of removing large amounts of angular momentum from the post-impact system.
The main results of our systematic survey of potential Moon-forming impacts between non-rotating bodies are summarized as follows:
* The canonical Moon-forming impact cannot generate a sufficiently massive protolunar disk to explain the Moon.
* For all collisions, the protolunar disk mass is strongly dependent on the initial angular momentum budget. In order to generate a protolunar disk mass of at least one lunar mass, a pre-impact angular momentum budget of at least 2 J_EM is required. This implies that a post-impact process capable of removing at least J_EM must exist.
* For γ≳ 0.5, the iron fraction of the protolunar disk is strongly dependent on the impact velocity (v_∞). Therefore, low-velocity (v_∞ < 0.7 v_esc) grazing impacts are favored to avoid injecting too much iron into the protolunar disk.
* Only near-equal mass collisions (γ∼ 1) are able to produce an Earth and protolunar disk with similar compositions (| δ_pd| ∼ 0) regardless of the initial compositions of the target and impactor.
* Taken together, the results of our survey favor low-velocity, near-equal mass collisions to explain the origin of the Moon.
In Paper II, we systematically study whether or not pre-impact rotation can lower the amount of angular momentum required to generate sufficiently massive disks while simultaneously reproducing the other observational constraints.
We thank the anonymous reviewer for valuable suggestions and comments that helped to substantially improve the paper. This work has been carried out within the framework of the National Centre of Competence in Research PlanetS supported by the Swiss National Science Foundation under grants 51NF40_182901 and 51NF40_205606. The authors acknowledge the financial support of the SNSF. We acknowledge access to Piz Daint and Eiger@Alps at the Swiss National Supercomputing Centre, Switzerland under the University of Zurich's share with the project ID UZH4.
Swiss National Supercomputing Centre (Piz Daint, Eiger@Alps)
Gasoline <cit.>,
ballic <cit.>,
eoslib <cit.>,
skid[<https://github.com/N-BodyShop/skid>],
numpy <cit.>,
scipy <cit.>,
matplotlib <cit.>,
pynbody <cit.>,
GNU parallel <cit.>
§ ASYMPTOTIC PARAMETERS
In contrast to most previous studies of giant impacts, we define the initial conditions of our simulations by specifying the “asymptotic relative velocity” (v_∞) and the initial total angular momentum budget (J_0), which in turn fix the “asymptotic impact parameter” (b_∞). Asymptotic refers to an “infinite” initial separation which, in practice, means a distance whereby mutual gravitational interactions between the target and impactor have not yet had a chance to significantly modify the pre-impact trajectory of the bodies, nor their internal structure or rotation rates through tidal interactions. In the simulations presented in this work, an initial separation of 10 R_crit is sufficient for this purpose.
Given that many previous studies on pairwise collisions have defined the initial conditions of their simulations using the relative velocity and impact parameter at the moment of impact (v_imp and b_imp, respectively), we provide analytic prescriptions for relating the values at impact to the asymptotic values used in this work. Note, however, that these analytic relations assume no tidal interactions between the target and impactor prior to the moment of impact (i.e., the bodies maintain perfectly spherical shapes). In SPH simulations, as in reality, this is not a valid assumption and the colliding bodies can undergo significant deformation prior to impact. Thus, we urge the reader to interpret the results of these relations with caution.
Given an asymptotic relative velocity v_∞, we can calculate the eventual velocity at impact v_imp (under the assumption that the target and impactor are not subject to deformation and therefore no orbital energy is lost due to tidal interactions) as follows,
v_imp = (v_∞^2 + 2GM_targ/R_crit)^0.5 ,
where G is the gravitational constant and R_crit = R_targ + R_imp is the sum of the non-rotating equatorial radii of the target and impactor. The use of the non-rotating equatorial radii is purely a matter of convention, but we note that it greatly simplifies the problem once arbitrary orientations of rotating bodies are involved. The associated impact parameter at the moment of impact (again assuming no deformation and no tidal effects) is then,
b_imp = b_∞v_∞/v_imp ,
where v_imp is calculated as in Equation <ref>.
Note that when setting up a collision, the asymptotic values are converted to the associated parameters (v_ini and b_ini) at the distances specified by the initial separation parameter d_ini. In order to convert to these values, we follow the same approach as above and calculate v_ini as follows:
v_ini = (v_∞^2 + 2GM_targ/d_ini)^0.5 ,
where d_ini is set to 10 R_crit in this work and the other parameters are the same as in Equation <ref>. Similarly, the impact parameter at the start of the simulation (b_ini) is calculated as follows:
b_ini = b_∞v_∞/v_ini ,
where v_ini is calculated as in Equation <ref>.
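For convenience, the conversions above can be evaluated with a few lines of code. The sketch below assumes SI units, uses only the target mass exactly as the equations are written, and neglects tidal deformation, as stated; the function names are illustrative:

    import numpy as np

    G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

    def impact_values(v_inf, b_inf, M_targ, R_crit):
        """Convert asymptotic (v_inf, b_inf) to values at the moment of impact,
        assuming undeformed spheres and no tidal losses."""
        v_imp = np.sqrt(v_inf**2 + 2.0 * G * M_targ / R_crit)
        b_imp = b_inf * v_inf / v_imp
        return v_imp, b_imp

    def initial_values(v_inf, b_inf, M_targ, d_ini):
        """Convert asymptotic values to those at the initial separation d_ini
        (d_ini = 10 R_crit in this work)."""
        v_ini = np.sqrt(v_inf**2 + 2.0 * G * M_targ / d_ini)
        b_ini = b_inf * v_inf / v_ini
        return v_ini, b_ini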
§ DISK FINDER
In order to assess post-impact properties following an impact, we require an algorithm to first distinguish between the post-impact planet, circumplanetary disk, and ejecta. To this end, we have developed a novel disk-finding algorithm, which we describe in detail here. At a high level, our disk-finding algorithm begins by calculating the solid-body rotation rate for the planet using the densest particles. Using a sliding window, it then moves radially outward until it reaches the radius at which the median angular velocity of particles deviates significantly from this solid-body rotation rate. All particles within this radius are assigned to the planet. Similar to other disk-finding algorithms, it then calculates the orbits of the particles outside this radius to determine which particles will fall back onto the planet (and are therefore assigned to the planet) and, of the remaining particles, which belong to the disk and which to the ejecta. In detail, the following steps are performed by the disk finder:
* Center and align snapshot The simulation output (hereafter referred to as the “snapshot”) is centered on the planet (whereby the planet is identified by assuming some minimum density cutoff) and the ẑ-axis of the analysis frame of reference is aligned with the global angular momentum vector Ĵ of the particles.
* Radially bin particles The particles in the snapshot are binned according to their radius. The bin range is defined by R_min and R_max, where R_min = 0.1 R_⊕ in this work. The purpose of R_min is to exclude the noisy particles near the center of the planet, while R_max is arbitrary so long as it is sufficiently large to capture any reasonable radius (e.g., R_max = 5 R_⊕). By excluding particles well beyond the expected radius, the computational performance of the disk finder is greatly improved. The number of bins within this range depends on the resolution of the simulation (i.e., the number of particles in the snapshot N_p). In this work, N_bins = int(10^-2 N_p).
* Determine planet radius Starting at the innermost bin, the disk finder steps outward along the bins. At each bin, the following steps are carried out:
* A sliding window with length ℓ_win is defined that extends from w_min = r_bin - ℓ_win to w_max = r_bin, where the r_bin is the midpoint of the current bin. In this work, ℓ_win = 0.15 R_⊕.
* The median rotation rate of the particles within the current window (ω_win) is computed. This is the expected “solid-body” rotation rate for the current bin.
* The median rotation rate of the particles within the current bin (ω_bin) is computed.
* The fractional difference between ω_win and ω_bin is computed,
Δ = ω_bin - ω_win/ω_win .
* If the fractional difference is greater than a predefined threshold (Δ > Δ_crit), then the disk finder returns the midpoint of the current bin as the planet's radius (R_p = r_bin). In this work, the threshold is defined to be Δ_crit = 0.05.
If at any point three bins in a row are found to be empty of particles, then the disk finder returns the midpoint of the first empty bin as the planet's radius (R_p).
* Kepler intercept In some simulations, the post-impact Earth is still rotating at or beyond its rotational stability limit (i.e., R_p ≥ R_HSSL). In these cases, the disk finder may overestimate the radius due to the large amounts of noise near the planet-disk transition. An additional step is therefore carried out to determine if the radius estimated by the disk finder exceeds the stability limit. The stability limit is approximated by identifying the radius at which the median transverse velocity in the sliding window (v_t,win) intercepts the Keplerian velocity (v_t,kep). If the radius estimated by the disk finder is larger than the radius at which the intercept occurs, the planet's radius is set to the radius of the intercept.
* Differentiate disk and ejecta Using the positions and velocities of the particles outside R_p, calculate the orbits of each particle. Particles with e ≥ 1 are unbound and are assigned to the ejecta. For particles with e < 1, calculate their distance at periapsis r_peri. Those with r_peri≤ R_p are assigned to the planet. Those with r_peri > R_p are assigned to the disk.
In addition to distinguishing the post-impact structures, the disk finding algorithm can also determine the proximity of the post-impact system to the Hot Spin Stability Limit (HSSL; <cit.>). This is a useful feature of our disk-finding algorithm because it allows us to identify post-impact states where compositional mixing between the Earth's mantle and protolunar rocks can occur.
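To summarize the procedure, the sketch below implements the radius-finding loop (steps 2 and 3) and the periapsis-based classification (step 5) in plain NumPy. The Kepler-intercept correction of step 4 is omitted, variable names are illustrative, and radii are assumed to be given in Earth radii in the centered, aligned frame:

    import numpy as np

    def planet_radius(r, omega, r_min=0.1, r_max=5.0, n_bins=None,
                      ell_win=0.15, delta_crit=0.05):
        """Return R_p as the bin midpoint at which the median rotation rate
        deviates from the local solid-body rate by more than delta_crit."""
        if n_bins is None:
            n_bins = int(1e-2 * len(r))            # N_bins = int(10^-2 N_p)
        edges = np.linspace(r_min, r_max, n_bins + 1)
        empty_run, first_empty = 0, None
        for lo, hi in zip(edges[:-1], edges[1:]):
            mid = 0.5 * (lo + hi)
            in_bin = (r >= lo) & (r < hi)
            if not in_bin.any():
                empty_run += 1
                first_empty = mid if first_empty is None else first_empty
                if empty_run == 3:                 # three empty bins in a row
                    return first_empty
                continue
            empty_run, first_empty = 0, None
            in_win = (r >= mid - ell_win) & (r < mid)   # trailing sliding window
            if not in_win.any():
                continue
            omega_win = np.median(omega[in_win])   # expected solid-body rate
            omega_bin = np.median(omega[in_bin])
            if (omega_bin - omega_win) / omega_win > delta_crit:
                return mid
        return r_max

    def classify(r, r_peri, ecc, R_p):
        """Assign particles exterior to R_p to the planet, disk, or ejecta."""
        outside = r > R_p
        ejecta = outside & (ecc >= 1.0)
        disk = outside & (ecc < 1.0) & (r_peri > R_p)
        planet = ~ejecta & ~disk                   # includes r <= R_p and fall-back
        return planet, disk, ejecta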
|
http://arxiv.org/abs/2307.04316v3 | 20230710030524 | Accelerating Secure and Verifiable Data Deletion in Cloud Storage via SGX and Blockchain | [
"Xiangman Li",
"Jianbing Ni"
] | cs.CR | [
"cs.CR"
] |
Accelerating Secure and Verifiable Data Deletion in Cloud Storage via SGX and Blockchain
Xiangman Li and Jianbing Ni X. Li and J. Ni are with the Department of Electrical and Computer Engineering and Ingenuity Labs Research Institute, Queen's University, Kingston, Ontario, Canada K7L 3N6. Email: [email protected].
==============================================================================================================================================================================================================================================
Secure data deletion enables data owners to fully control the erasure of their data stored on local or cloud data centers and is essential for preventing data leakage, especially for cloud storage. However, traditional data deletion based on unlinking, overwriting, and cryptographic key management is either ineffective in cloud storage or relies on unpractical assumptions. In this paper, we present SevDel, a secure and verifiable data deletion scheme, which leverages zero-knowledge proofs to verify the encryption of the outsourced data without retrieving the ciphertexts, while the deletion of the encryption keys is guaranteed based on Intel SGX. SevDel implements secure interfaces to perform data encryption and decryption for secure cloud storage. It also utilizes a smart contract to enforce that the operations of the cloud service provider follow the service level agreements with data owners and to impose penalties on a service provider that discloses the cloud data on its servers. Evaluation on real-world workloads demonstrates that SevDel achieves efficient data deletion verification and maintains high bandwidth savings.
Cloud storage, secure data deletion, Intel SGX, data outsourcing, verifiability.
§ INTRODUCTION
Outsourcing data to cloud storage is a common practice for data owners to relieve the burden of self-managing massive data <cit.>. Data owners can rent storage space on demand from cloud service providers, and outsourced data management enables them to access their data at any time and from anywhere. Due to these appealing features, cloud storage services such as Amazon S3 <cit.>, Google Drive <cit.>, Dropbox <cit.>, Apple iCloud <cit.>, and Microsoft OneDrive <cit.> have attracted a large number of stable and loyal users. Data security is one of the primary concerns for data owners. After data owners outsource their data to cloud data centers, they lose physical control over their data and have no choice but to rely on the cloud service providers to protect it. Unfortunately, due to frequent data leakage and breach incidents, security is consistently ranked among the top threats in cloud storage <cit.>, even though cloud service providers make great efforts to guarantee the confidentiality, integrity, and availability of the outsourced data on their servers.
Data deletion <cit.>, as one of the important security technologies, has not received sufficient attention from either data owners or cloud service providers in cloud storage. It provides methods to securely erase data from local storage media or remote cloud servers, which can significantly reduce the probability of data leakage. Moreover, privacy regulations, such as GDPR <cit.>, CPPA <cit.>, and PIPL <cit.>, have clearly defined the principles of data deletion, known as the right to be forgotten. Data centers should delete personal data once it is no longer necessary for the purpose for which it was collected, and data owners have the right to request that the data centers delete their personal data. Therefore, data deletion becomes increasingly critical for service providers, which should provide effective ways to guarantee secure data deletion.
§.§ Related Work
It is not a trivial problem to securely delete data. It is well recognized that there is no existing software-based solution that can provide complete data removal from storage media. Existing
deletion methods can be summarized in the following categories:
Deletion by unlinking. This method is widely deployed in the file management systems of operating systems such as Windows, iOS, and Linux. When the user wants to delete a file (i.e., presses the "delete" button), the operating system deletes the link of the file from the file system and returns "success" to the user. The file is no longer accessible because its link has been removed. Nevertheless, this is not real file deletion, as the file content still remains on the disk. An adversary can simply use a file recovery tool to access the deleted file by scanning the disk <cit.>.
Deletion by block erasure. This method is used by storage media such as solid-state drives (SSDs) to securely clean data. It applies a voltage spike to all available flash memory blocks in unison; each block is reset to a vendor-specific value and the SSD becomes "clean" <cit.>. However, this method erases all the data on the drive and causes a small amount of wear.
Deletion by overwriting. Overwriting is an important tool for deleting data by overwriting it with new, insensitive data, e.g., all zeros. There are multiple tools that offer up to 35 overwriting passes. However, one inherent limitation of the overwriting methods is that they cannot guarantee the complete removal of data. It is effectively impossible to sanitize storage locations by simply overwriting them, no matter how many overwrite passes are made or what data patterns are written <cit.>. The conclusion holds not only for magnetic drives, but also for tapes, optical disks, and flash-based solid state drives. In all these cases, an attacker equipped with advanced microscopy tools may recover overwritten data based on the physical remanence of the deleted data left on the storage medium. Therefore, although overwriting data makes recovery harder, it does not change the basic one-bit-return protocol.
Deletion by encryption. Boneh and Lipton <cit.> proposed the first cryptography-based method for secure data deletion: the data are encrypted before being saved to the disk, and the data are deleted by discarding the decryption key after encryption. This method is desirable when duplicate copies of the data are backed up in distributed locations. However, it essentially reduces the problem of deleting a large amount of data to the problem of deleting a short key, and forgetting a decryption key is non-trivial: a key stored on a hard disk is not easy to delete permanently, i.e., to make it unrecoverable by an adversary even if the adversary obtains the storage medium <cit.>.
However, the problem of key deletion becomes dramatically more difficult when the cloud server performs the encryption of the outsourced data, as in traditional secure cloud storage. Although some cloud storage services enable user-side encryption, i.e., the data owners can also encrypt their data before outsourcing, server-side encryption is more general. The cloud server encrypts the data with an encryption key after receiving them from the data owners and decrypts the data that the owners would like to access before returning them. In this model, the encryption is fully controlled by the cloud servers, which raises the data owners' concerns about the encryption of their outsourced data and the secure deletion of the decryption keys.
§.§ Contributions
In this paper, we propose a novel secure and verifiable data deletion scheme, named SevDel, for cloud storage. To address the concern about whether the cloud server honestly encrypts the outsourced data, we utilize a random sampling method and zero-knowledge proofs <cit.> to verify the encryption without retrieving the ciphertexts of the outsourced data. The encryption is performed inside Intel SGX <cit.> to prevent possible data leakage, and an enclave is created for each file to handle the encryption and the management of its key. Thus, deleting the decryption key amounts to destroying the corresponding enclave. In addition, to enforce that the cloud servers protect the outsourced data, a smart contract is designed based on the service-level agreements between the data owners and the cloud service providers. We demonstrate the confidentiality, verifiability, erasability, and auditability of SevDel through security analysis and show that the proposed SevDel achieves outstanding performance for deployment.
§ SYSTEM AND SECURITY MODELS
In this section, we introduce the system model and security model of our SevDel.
§.§ System Model
We present the system model of SevDel, which comprises three kinds of entities: 1) a data owner that outsources data to the cloud and requests its deletion after the data has been processed or used; 2) a cloud service provider that offers secure cloud storage services to data owners (i.e., the outsourced data are encrypted by the service provider with its chosen secret keys, or by the data owners before outsourcing) using the storage servers in its data center, where each server has high-performance hard disks for data storage and an Intel CPU that supports SGX <cit.>; and 3) a blockchain node <cit.> that participates in the blockchain network to maintain the transactions between the two parties. The blockchain can be a public blockchain, e.g., the Bitcoin blockchain, the Ethereum blockchain, or Hyperledger. It maintains an automatically executable smart contract that enforces a penalty on the cloud service provider if it leaks the outsourced data of users.
Intel SGX <cit.>, a suite of security-related instructions built into modern Intel CPUs, can create a hardware-protected environment, called an enclave, for shielding the execution of code and data. An enclave resides in a hardware-guarded memory region called the enclave page cache (EPC) for hosting any protected code and data.
In the enclave, SGX performs the encryption of the outsourced data with a secret key stored in the EPC. Deleting the encrypted data for the data owner thus reduces to deleting the secret key in the enclave. More specifically, the secret in the enclave is erased after the enclave is destroyed.
§.§ Threat Model
The security threats mainly come from outsider attackers or data thieves. An outsider attacker or a data thief may compromise the cloud server to steal the data on the hard disks. The frequent data leakage incidents in the cloud have demonstrated the risks of cloud storage services. This risk is high because of potential code vulnerabilities, and the damage is severe, as data leakage incidents significantly affect reputation. Moreover, employees of the cloud service may steal the data on cloud servers; many data corruption or leakage incidents have occurred due to operational errors or misbehavior of employees. The main security objective is to protect the cloud data of users against data leakage incidents.
A cloud service provider is the legitimate operator of the Intel SGX platform and holds the service level agreements with the data owners for maintaining outsourced data. It is expected that the cloud service provider stores the encrypted outsourced data of data owners on the hard disks of cloud servers and deletes the data at the request of data owners or based on the principles of privacy regulations such as GDPR, CPPA, and PIPEDA. It is assumed that the cloud service provider does not deviate from this expectation due to the agreement with data owners, that is, the cloud service provider is rational and follows the service level agreements to honestly offer data storage services. Undoubtedly, regulating the implementation of the agreement between the users and the cloud service provider becomes necessary.
A data owner is an honest party that rents storage space from the cloud storage service and outsources data to the cloud servers in the data center. The data owner chooses a reliable service provider for data outsourcing. According to the modes for protecting cloud data in cloud storage services, e.g., Amazon S3 of Amazon Web Services, the owners can determine whether to encrypt their data before outsourcing. The data owners can use secret keys to encrypt their data before outsourcing; if they do not, the cloud server chooses the secret keys for data encryption. In this paper, we study secure data erasure for the latter case, because data deletion is trivial if the data owners encrypt their data themselves: they can simply delete their keys, and then no one can read the cloud data.
§ PROPOSED SEVDEL
In this section, we present the overview and the detailed construction of our SevDel.
§.§ Overview
Our SevDel accelerates secure and verifiable data erasure in cloud storage. It can serve as the central element of secure cloud storage and erasure in cloud storage services, such as Amazon S3 Find and Forget, the solution for selectively erasing records from data lakes stored on Amazon S3. To prevent data leakage, the file received from the data owner is encrypted by the cloud server with a randomly selected private key under an additively homomorphic encryption scheme, such as lifted ElGamal encryption <cit.>. The encryption operation is performed in the enclave of Intel SGX. The encryption of the file is audited by the data owner to ensure that the file is correctly encrypted as claimed by the cloud service provider. Random sampling is utilized to enable probabilistic auditing of the encrypted data, and the ciphertexts are aggregated to compress the auditing messages. The cloud server proves to the data owner that, with large probability, the entire file is encrypted with lifted ElGamal encryption under a randomly chosen key, without retrieving the encrypted file. The challenge here is to ensure that the proved ciphertext is really the encryption of the correct outsourced file. To blind the file and its ciphertext during auditing, the cloud server should prove that the plaintext of the ciphertext is the outsourced file bound in the homomorphic authentication tags, which are produced by the data owner and outsourced along with the file. Meanwhile, these tags can also be used to verify the integrity of the outsourced file based on provable data possession <cit.> or proofs of retrievability <cit.>.
The deletion of the outsourced file on the cloud server is enabled by the deletion of the secret key of the file. If the secret key is permanently deleted, no one is able to decrypt the ciphertext. The secret key deletion is realized by Intel SGX. An enclave is created when the cloud server receives the file, and the encryption is performed in the enclave. The encryption key is also stored in the enclave. To permanently forget the key, the simple way is to destroy the corresponding enclave.
To ensure that the cloud service provider honestly maintains and encrypts the outsourced files of data owners, a smart contract is created based on the service level agreement between the service provider and the data owners. The deposits of the service provider are made when the cloud storage service is bootstrapped. A part of the deposits is paid to the data owner if the file of the data owner is found on the Internet, which means that the file has been leaked during storage. The condition that triggers the payment is the key point of the smart contract. We convert this data leakage problem into provable data possession: if a data owner succeeds in proving that she possesses the encrypted version of her outsourced files, a penalty is imposed on the service provider and a certain amount of the deposits is transferred to the data owner. The conversion is valid because only the cloud server has the encrypted version of the outsourced file of the data owner. The cloud server performs encryption after receiving the outsourced file and decryption before returning it to the data owner, so the ciphertext of the outsourced file should be known only by the cloud server. Although the data owner knows the cleartext of the file, she cannot reproduce the same ciphertext, as the encryption on the cloud server side is probabilistic.
Our SecDel consists of the following algorithms.
Setup: This algorithm is run by the cloud service provider to bootstrap the cloud storage systems. With the input of the security parameter, the algorithm outputs the system parameters and the public-private key pairs of the cloud servers.
Contract: This algorithm is run by the cloud service provider to initialize a smart contract that implements the service level agreement with the data owners. The smart contract is maintained by the blockchain nodes.
KeyGen: This algorithm is run by the data owner. It takes the system parameters as input and generates the public-private key pair of the data owner for data outsourcing.
Outsource: This algorithm is run by the data owner to outsource the file to the cloud server. With the input of the security parameters, the private key of the data owner, and the file to be outsourced, the algorithm produces the homomorphic authentication tags of the data blocks of the file and outsources the file along with the generated tags.
Encrypt: This algorithm is run by the cloud server that encrypts the received file with a randomly chosen private key. With the input of the file, the private key of the cloud server, and the chosen private key, the algorithm outputs the encrypted file, the corresponding public key, and the homomorphic authentication tags of the data blocks of the encrypted file.
Verify: This is an interactive protocol between the cloud server and the data owner to audit the encryption of the outsourced file. The data owner randomly samples the data blocks, and the cloud server generates a proof that proves the encryption of the sampled data blocks. The data owner finally verifies the proof to learn whether the file has been encrypted by the cloud server.
Delete: This algorithm is run by the cloud server, which deletes the file at the request of the data owner or when the data is no longer needed for data analysis.
Audit: This is an interactive protocol between the data owner and the blockchain nodes. The blockchain nodes randomly sample the data blocks owned by the data owner, and the data owner responds with a proof of ownership of the encrypted data blocks. The blockchain nodes then verify the proof to learn whether the file has been disclosed. If the proof is valid, the smart contract is executed to impose a penalty on the cloud service provider.
The correctness of SevDel covers the following aspects: 1) the encryption of the outsourced file should be correctly recoverable by the cloud server with the corresponding secret key; 2) the data owner can identify that the cloud server did not encrypt the outsourced file on its hard disks as agreed in the service level agreement; 3) the deleted outsourced file can no longer be recovered; and 4) the blockchain node can execute the penalty if the data owner finds the leaked outsourced data.
§.§ Detailed SevDel
Setup: Let p be a large prime and 𝔾_1, 𝔾_2 and 𝔾_T be three multiplicative cyclic groups of the same prime order p. g_1 and g_2 are the generators of 𝔾_1 and 𝔾_2, respectively. e:𝔾_1 ×𝔾_2 →𝔾_T denotes an admissible bilinear pairing.
The file M to be outsourced is divided into n blocks and each block is further split into s sectors. Thus, the file is denoted as M={m_ij}_i ∈ [1,n],j ∈ [1,s] and the abstract information of M is denoted as 𝕀_M.
H:{0,1 }^* →𝔾_1 is a cryptographic hash function that maps the 𝕀_M to a point in 𝔾_1.
The cloud service provider chooses a random number a ∈ℤ_p and calculates A=g_2^a∈𝔾_2. The private key of the cloud service provider is a, and the corresponding public key is A.
Contract: The service provider creates the smart contract CS-SevDel to provide cloud storage services to data owners. To provide the service, the service provider initiates CS-SevDel.Init to set up the smart contract and deposits an amount of money on the blockchain as insurance in CS-SevDel.Service. A part of the deposit would be sent to the data owner if the outsourced data are leaked, and the remainder would be refunded to the service provider.
KeyGen: An data owner chooses a random number w ∈ℤ_p and calculates W=g_2^w∈𝔾_2. The private key of the data owner is w, and the corresponding public key is W.
Outsource: The data owner chooses s random values x_1,⋯, x_s ∈ℤ_p and
computes u_j=g_1^x_j∈𝔾_1 for j ∈ [1,s].
Then, for each block m_i (i ∈ [1,n]), it computes a tag t_i as
ϕ_i=(H(𝕀_M||i)·∏_j=1^su_j^m_ij)^w.
The data owner outputs the set of homomorphic authentication tags T={ϕ_i}_i ∈ [1,n]. The tag set Φ, the file index 𝕀_M, and the file M are sent to the cloud server.
Encrypt: After receiving (𝕀_M,M,Φ) from a data owner, the cloud server first randomly selects a private key v ∈ℤ_p and computes V=g_1^v ∈𝔾_1. The cloud server uses the random private key v to encrypt each data block of the received file m_ij as E_ij=(E'_ij,E”_ij)=(g_1^m_ijV^r_ij, g_1^r_ij), where r_ij is a random number chosen from ℤ_p. The set of the encrypted blocks is denoted as E={E_i}_i ∈ [1,n]. Then, for each encrypted block E_i (i ∈ [1,n]), the cloud server computes a homomorphic authentication tag σ_i for the encrypted block as
σ_i=(H(𝕀_M||i)·∏_j=1^su_j^E'_ijv_j^E”_ij)^a.
The set of the tags of the encrypted blocks is denoted as Σ={σ_i}_i ∈ [1,n]. Finally, the cloud server stores (𝕀_M,E, Σ) on the hard disks and uploads (𝕀_M, Σ) to the blockchain.
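To illustrate the server-side encryption, the toy sketch below implements lifted ElGamal over a plain multiplicative group modulo a prime and shows the additive homomorphism that the Verify and Audit protocols rely on. The parameters are purely illustrative and insecure; the actual construction operates in the pairing-friendly groups 𝔾_1, 𝔾_2 defined above and additionally produces the tags σ_i.

    import secrets

    # Toy parameters (insecure, for illustration only).
    p = 2**127 - 1          # a Mersenne prime
    g = 3

    def keygen():
        v = secrets.randbelow(p - 2) + 1
        return v, pow(g, v, p)                    # (secret v, public V = g^v)

    def encrypt(m, V):
        """Lifted ElGamal: E = (g^m * V^r, g^r)."""
        r = secrets.randbelow(p - 2) + 1
        return (pow(g, m, p) * pow(V, r, p) % p, pow(g, r, p))

    def add(c1, c2):
        """Component-wise product of ciphertexts encrypts the sum of messages."""
        return (c1[0] * c2[0] % p, c1[1] * c2[1] % p)

    def decrypt(c, v, max_m=2**20):
        """Recover g^m, then solve the small discrete log by exhaustive search
        (feasible here only because block values are assumed small)."""
        gm = c[0] * pow(c[1], p - 1 - v, p) % p   # divide out V^r = (g^r)^v
        acc = 1
        for m in range(max_m):
            if acc == gm:
                return m
            acc = acc * g % p
        raise ValueError("message out of range")

    # The additive homomorphism exploited when aggregating challenged blocks:
    v, V = keygen()
    assert decrypt(add(encrypt(7, V), encrypt(35, V)), v) == 42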
Verify: To verify the encryption of the outsourced file M, the data owner takes the abstract information 𝕀_M as inputs. It selects some data blocks to construct a challenge set Q and picks a random l_i ∈ℤ_p^* for each m_i (i ∈ Q). The challenge (i, l_i)_i ∈ Q is sent to the cloud server.
To respond to the challenge, the cloud server generates P_1 as
P_1=∏_i ∈ Q g_1^l_im_ijV^l_ir_ij.
The cloud server computes Q_j=∑_i ∈ Ql_i·m_ij for each j ∈ [1,s].
Then, it computes P_2 as
P_2=∏_i ∈ Qϕ_i^l_i.
π← NIZK {(Q_j, r_ij): P_1=∏_i ∈ Q g_1^l_im_ijV^l_ir_ij, P_2=∏_i ∈ Qϕ_i^l_i}.
The data owner verifies the validity of the zero-knowledge proof π to determine whether the outsourced file has been encrypted or not.
Delete: The cloud server deletes the random private key v that is used to encrypt the file F by destroying the enclave that used to store v. The cloud server creates an enclave for each file received and use the enclave to maintain the private key.
Audit: If the data owner obtains the leaked encrypted file F, the data owner can prove to the blockchain nodes that the cloud server has data leakage. The blockchain node selects some data blocks to construct a challenge set R and picks a random γ_i ∈ℤ_p^* for each E_i (i ∈ R). The challenge (i, γ_i)_i ∈ R is sent to the data owner.
To respond to the challenge, the data owner generates Q_1 as
Q_1=∏_i ∈ R E_ij^γ_i.
Then, it computes Q_2 as
Q_2=∏_i ∈ Rσ_i^γ_i.
The data owner returns (Q_1, Q_2) to the blockchain node. The blockchain node verifies (Q_1, Q_2) to determine whether the cloud server has disclosed the file F. If yes, the blockchain node performs CS-SevDel.Penalty to give penalty to the cloud service provider.
The correctness of SevDel can be checked from the following: 1) the encryption of the outsourced file can be correctly recovered; 2) the verification equations pass; 3) the security of Intel SGX holds; and 4) the blockchain node can execute the penalty.
§ SECURITY OF SEVDEL
The security of SevDel should capture the properties of confidentiality, verifiability, erasability, and auditability.
The confidentiality of the outsourced file relies on the semantic security of the data encryption scheme used by the cloud server. SevDel utilizes the lifted ElGamal encryption scheme to encrypt each block of the outsourced file M. Here, each block is independently encrypted with the key V. As the lifted ElGamal encryption scheme can be proved semantic security under the Decisional Diffie-Hellman (DDH) assumption, the confidentiality of the outsourced file is achieve as long as the DDH assumption holds.
Alg. 1. Smart Contract CS-SevDel

Init: Set state:=INIT, File:={}, Owner:={}, RU:={}, Tags:={}, Param:=SevDel(1^λ).

Service: Upon receiving ("Create", N, file, A, Deposit, T_1, T_2, T_3, T_4) from a service provider 𝒮:
 Assert state=INIT.
 Assert current time T ≤ T_1.
 Assert ledger|𝒮| ≥ $Deposit.
 Set ledger|𝒮|:=ledger|𝒮|-$Deposit.
 Set state:=CREATED.
 Set Accept:=0.
 File:=File∪{𝒮, N, A, Deposit, Accept, T_j=1-4}.

Agree: Upon receiving ("Accept", 𝒰_i, N, R_i) from a data owner 𝒰_i:
 Assert state=CREATED.
 Assert T_1 ≤ T ≤ T_2.
 Assert $R_i > 0.
 Assert ledger|𝒰_i| ≥ $R_i.
 Set ledger|𝒰_i|:=ledger|𝒰_i|-$R_i.
 Set Accept:=Accept+1.
 Set state_i:=ACCEPTED.
 Owner_N:=Owner_N∪{𝒰_i}.

Claim: At current time T=T_2:
 Assert state_i=ACCEPTED.
 Assert the data outsourcing N.
 Set state:=CLAIMED.

Audit: Upon receiving ("Audit", 𝒰_i, N, c_i, d_i, σ_i, e_i, rk_i, 𝒫𝒦_i) from 𝒰_i:
 Assert state=CLAIMED.
 Assert T_2 ≤ T ≤ T_3.
 Assert 𝒰_i∈AU_N.
 Assert 𝒫𝒦_i=1.
 Set state_i:=UPLOADED.
 Set ledger|𝒰_i|:=ledger|𝒰_i|+$R_i.
 Owner_N:=Owner_N∪{𝒰_i}.
 File_N:=File_N∪{(𝒰_i, N, σ_i, e_i, rk_i)}.

Refund: If T_3 ≤ T ≤ T_4 and Owner_N=File_N:
 Set state:=FULFILLED.
 Set ledger|𝒰_i|:=ledger|𝒰_i|+$Deposit_i.
 Assert $Deposit=∑_i=1^n $Deposit_i.
 Set state:=FINISHED.

Penalty: If T_3 ≤ T ≤ T_4 and AU_N⊃RU_N:
 Set state:=UNFULFILLED.
 Set ledger|𝒰_i|:=ledger|𝒰_i|+$R^*_i for 𝒰_i∈RU_N.
 Assert ∑_i∈{AU_N-RU_N} $R_i=∑_i∈{RU_N} $R^*_i.
 Set state:=ABORTED.

Timer: If state=ABORTED and T>T_4:
 Set ledger|𝒮|:=ledger|𝒮|+$Deposit.
 Set state:=ABORTED.
The verifiability of the data encryption is achieved based on provable data possession and zero-knowledge proofs. The data owners are able to audit the encrypted data by randomly sampling the encrypted blocks. The homomorphic authentication tags guarantee the authentication of data blocks in an aggregated way. First, the homomorphic authentication tags are created in the manner of digital signatures, so they are unforgeable under the computational Diffie-Hellman assumption. Second, it is impossible to generate a valid proof if the cloud server does not encrypt the sampled data blocks, because the proof is a linear aggregation of the tags. Therefore, the verifiability of the data encryption is realized.
The erasability of the data is achieved based on Intel SGX. An enclave is created for the file when the cloud server receives it, and the enclave is used to maintain the decryption key. The deletion of the data is achieved when the enclave is destroyed. Destroying the enclave permanently erases the information it contains, so the decryption key is lost once the enclave is destroyed. Thus, the encrypted file can never be decrypted, and the file is permanently deleted.
The auditability of data leakage is achieved based on the smart contract. The smart contract ensures the automatic execution of the service-level agreement between the data owners and the cloud service provider. The condition that triggers the penalty is a data leakage incident, so the data owner needs to prove to the blockchain node that she holds the leaked data. This proof generation method is the same as the one used for the data encryption proof, so they rely on the same assumption.
§ CONCLUSION
In this paper, we present a secure and verifiable data deletion scheme that leverages zero-knowledge proofs to achieve the verification of the encryption of the outsourced data without retrieving the ciphertexts. The deletion of the encryption keys is guaranteed based on Intel SGX. The proposed scheme implements secure interfaces to perform data encryption and decryption for secure cloud storage and utilizes a smart contract to enforce that the operations of the cloud service provider follow the service level agreements with data owners and to impose penalties on a service provider that discloses the cloud data on its servers.
As the proposed scheme enables the cloud server to handle server-side encryption, it is particularly suitable for popular secure cloud storage services.
|
http://arxiv.org/abs/2307.04189v1 | 20230709144340 | Histopathology Whole Slide Image Analysis with Heterogeneous Graph Representation Learning | [
"Tsai Hor Chan",
"Fernando Julio Cendra",
"Lan Ma",
"Guosheng Yin",
"Lequan Yu"
] | cs.CV | [
"cs.CV"
] |
Histopathology Whole Slide Image Analysis with Heterogeneous Graph Representation Learning
Tsai Hor Chan^1,*, Fernando Julio Cendra^1,2*, Lan Ma^2, Guosheng Yin^1,3, Lequan Yu^1
^1Department of Statistics and Actuarial Science,
The University of Hong Kong
^2TCL Corporate Research Hong Kong
^3Department of Mathematics, Imperial College London
{hchanth, fcendra}@connect.hku.hk, [email protected], [email protected], [email protected]
=======================================================================================================================================================================================================================================================================================================================================================================================
*The first two authors contributed equally to this work.
Graph-based methods have been extensively applied to whole slide histopathology image (WSI) analysis due to the advantage of modeling the spatial relationships among different entities.
However, most of the existing methods focus on modeling WSIs with homogeneous graphs (i.e., with a homogeneous node type).
Despite their successes, these works are incapable of mining the complex structural relations between biological entities (e.g., the diverse interactions among different cell types) in the WSI.
We propose a novel heterogeneous graph-based framework to leverage the inter-relationships among different types of nuclei for WSI analysis.
Specifically, we formulate the WSI as a heterogeneous graph with “nucleus-type” attribute to each node and a semantic similarity attribute to each edge.
We then present a new heterogeneous-graph edge attribute transformer (HEAT) to take advantage of the edge and node heterogeneity during massage aggregating.
Further, we design a new pseudo-label-based semantic-consistent pooling mechanism to obtain graph-level features, which can mitigate the over-parameterization issue of conventional cluster-based pooling.
Additionally, observing the limitations of existing association-based localization methods, we propose a causal-driven approach attributing the contribution of each node to improve the interpretability of our framework.
Extensive experiments on three public TCGA benchmark datasets demonstrate that our framework outperforms the state-of-the-art methods with considerable margins on various tasks.
Our codes are available at https://github.com/HKU-MedAI/WSI-HGNNhttps://github.com/HKU-MedAI/WSI-HGNN.
§ INTRODUCTION
Histopathology slides provide rich information on
diagnosis and treatment planning for many cancer diseases.
The recent technological advancements in tissue digital scanners facilitate the development of whole slide histopathology image (WSI) analysis.
However, traversing through the WSI with diverse magnifications is time-consuming and tedious for pathologists due to the large-scale nature of the WSI (e.g., its typical size is 60,000 × 60,000 pixels).
Hence deep learning techniques play an important role as they introduce accurate and automated analysis of WSIs, which can significantly relieve the workload of pathologists.
Since it is difficult to fit the complete WSI into the memory, most of the works adopt multiple instance learning (MIL) to divide the WSI into instances and then aggregate them for WSI analysis.
However, these methods operate on bags of instances that do not emphasize the inter-relationships between these instances.
Recently, the emergence of graph neural networks (GNNs) has made large progress in representing the spatial relationships between instances.
As a result, there are many attempts to represent the WSIs as graphs of instances.
Figure <ref> presents an example of a graph constructed from WSI.
Unlike convolutional neural networks (CNNs) that aggregate features based on locality in the Euclidean space, GNNs focus on locality on graph topology, which offers more flexibility in analyzing the deep connections between features in the image data beyond the spatial locality <cit.>.
For example, GNNs are able to learn relational information and distinguish cells based on their apposition to tumor cells or normal stroma (i.e., cells which are tumor-infiltrating lymphocytes or part of an adjacent inflammatory response), which is important for prognosis <cit.>.
However, existing paradigms on graph-based WSI analysis focus on representing the WSI with a homogeneous graph structure and then predicting the response via vanilla GNNs with cluster-based pooling (i.e., based on similarities of node embeddings).
Despite their successes, these methods suffer from several drawbacks:
(i) GNNs on homogeneous graphs focus on aggregating direct relational information from neighboring nodes, where the complex relational information of the graphs is often neglected.
(ii) For different graphs, the clusters defined by similarities between node embeddings have inconsistent meanings. This introduces a large degree of freedom in the parameters and leads to an over-parameterization issue <cit.>.
Therefore, GNNs tend to easily overfit due to a lack of identifiability <cit.>.
In view of these limitations, we propose a novel framework for WSI analysis, which leverages a heterogeneous graph to learn the inter-relationships among different types of nodes and edges.
The heterogeneous graph introduces a “nucleus-type" attribute to each node, which can serve as an effective data structure for modeling the structural interactions among the nuclei in the WSI.
To tackle the aggregation process in the heterogeneous graph, we propose a novel heterogeneous-graph edge attribute transformer (HEAT) architecture which can take advantage of the edge and node heterogeneity.
Thus, the diverse structural relations among different biological entities in the WSI can be incorporated to guide the GNN for more accurate prediction.
Further, to obtain the graph-level representations for slide-level prediction, we propose a semantic-consistent pooling mechanism — pseudo-label (PL) pooling, which pools node features to graph level based on clusters with a fixed definition (i.e., nucleus type).
The proposed PL pooling can regularize the graph pooling process by distilling the context knowledge (i.e., pathological knowledge) from a pretrained model to alleviate the over-parameterization issue <cit.>.
Additionally, we propose a Granger causality <cit.> based localization method to identify the potential regions of interest with clinical relevance to provide more insights to pathologists and promote the clinical usability of our approach.
We extensively evaluate our method on three TCGA public benchmark datasets, including colon adenocarcinoma cancer (COAD) and breast invasive carcinoma (BRCA) datasets from the TCGA project <cit.> and the Camelyon 16 dataset <cit.>, and compare to various latest state-of-the-art (SOTA) methods.
Our method outperforms the competitors on cancer staging, cancer classification, cancer typing, and localization tasks.
§ RELATED WORKS
Multiple Instance Learning on WSIs.
Existing WSI analysis approaches generally adopt MIL
<cit.>, which first divide the WSI into fixed-size patches and then compress the information of these patches into low-dimensional vectors.
Conventional methods aggregate bags of instances to learn WSI-level features for final predictions.
Tellez et al. <cit.> compress the WSI-level image into embedding vectors and use a standard CNN to perform patch-level and WSI-level cancer classification.
These CNN-based methods analyze local areas in the Euclidean space on fixed connectivity (i.e., fixed-size kernels), limiting the performance beyond the spatial locality.
Graph-based methods <cit.> have recently been proposed, which model the interactions between instances via graphs.
Their capability of modeling instances based on graph topology provides more flexibility to analyze complex structures of WSIs.
Chen et al. <cit.> propose patch-GCN, a method of modeling the WSI with homogeneous graphs, and regress survival data with a graph convolutional neural network (GCN) <cit.>.
Zheng et al. <cit.> propose a graph-based MIL method using graph transformer networks <cit.>.
In spite of their power, most of these WSI methods use homogeneous graphs, which limits the information mined from WSIs.
A recent method <cit.> is proposed to model WSIs with heterogeneous graphs, where the heterogeneity in each patch is introduced by different resolution levels.
However, it only considers the resolution level heterogeneity of patches, with insufficient ability to model the complex contextual interaction between patches in the same resolution level.
Graph Neural Networks.
Although the SOTA GNNs have shown great successes in many problem domains <cit.>, they are mostly focused on homogeneous graphs <cit.>.
These architectures extract the locality information on the graph topology and learn the graph representations by performing aggregation on neighboring nodes.
However, the potential heterogeneity in nodes and edges is not incorporated by these homogeneous GNN algorithms, and therefore their capability in mining the structural information is limited.
Several works attempt to address the heterogeneity in their architectural designs <cit.> and assume that the relation type is finite and discrete.
However, when modeling images with graphs, the heterogeneity in relations is typically continuous (e.g., the similarity between nodes) or high-dimensional. Although there are several attempts <cit.> to extend SOTA GNNs <cit.> to incorporate edge attributes, their works are limited to homogeneous graphs.
Graph Pooling.
Graph pooling aims to aggregate node-level features to obtain graph-level features. Conventional methods <cit.> directly take the average of node-level features to extract graph-level features, which tends to over-smooth the signals of the nodes and cannot generate representative graph-level features.
Recently, there is extensive development of graph pooling algorithms based on the clusters of the embeddings <cit.>.
However, the clusters constructed based on similarity are inconsistent across graphs.
This leads to a large degree of freedom in parameters which easily causes overfitting.
A semantic-consistent pooling method is therefore needed.
Explaining GNNs.
Despite the success of graph neural networks, the poor interpretability of their parameters means they are notoriously regarded as “blackboxes”.
With the advances in network attribution methods <cit.>, extensive attempts have been made to open such “blackboxes” <cit.>. Generating network explanations is an important qualitative step in WSI analysis since it can highlight the abnormal regions for further investigation.
Conventional explainers try to find the associations between the parameters in deep neural networks (or the nodes in GNNs) and the predictions.
GNNExplainer <cit.> is the SOTA method explaining the contributions of node features to the GNN predictions.
It trains feature masks on each node and edge feature to minimize the prediction loss of a trained GNN.
PGExplainer <cit.> shares the same objective as GNNExplainer and trains a generative model to generate explanations.
Recently, there has been emerging attention in generating causal explanations for GNNs <cit.>, and most of the methods focus on the Granger causality as the explanation objective.
Gem <cit.> trains explanation generators from the causal perspective. Causal explainers attempt to provide explanations of features that are causal rather than associated with the neural network prediction.
§ PRELIMINARIES
Heterogeneous Graph: A heterogeneous graph is defined by a graph 𝒢 = (𝒱, ℰ, 𝒜, ℛ), where 𝒱, ℰ, 𝒜 represent the set of entities (vertices or nodes), relations (edges), and entity types, respectively.
And ℛ represents the space of edge attributes.
For v ∈𝒱, v is mapped to an entity type by a function τ(v) ∈𝒜. An edge e = (s, r, t) ∈ℰ links the source node s and the target node t, and r is mapped to an edge attribute by a function ϕ(e) = r ∈ℛ.
Every node v has a d-dimensional node feature x ∈𝒳, where 𝒳 is the embedding space of node features.
Granger Causality <cit.>: Let ℐ be all the available information and ℐ_-X be the information excluding variable X. If we can make a better prediction of Y using ℐ than using ℐ_-X, we conclude that X Granger-causes Y.
WSI Classification:
Given a WSI X and a heterogeneous graph 𝒢 constructed from X, we wish to predict the label y with a GNN model ℳ. We also aim to assign an importance score f(v) to each node v ∈𝒱 in 𝒢 as the causal contribution of each patch to the prediction for localization.
§ METHODOLOGY
§.§ Heterogeneous Graph Construction
We introduce our methodology of modeling the WSI with a heterogeneous graph.
Figure <ref> presents the overall workflow of our proposed framework.
We adopt the commonly used OTSU thresholding algorithm <cit.> and sliding window strategy to crop each WSI into non-overlapping patches.
Uninformative patches with backgrounds are removed.
These patches define the nodes of the graph constructed.
To define the corresponding node type, we use HoverNet <cit.> pretrained on the PanNuke dataset <cit.> to classify the patches into predefined types.
HoverNet detects nuclei in each patch and assigns types to these nuclei.
By majority votes, we take the most frequently predicted nucleus type to be the type of the patch.
Figure <ref> presents an example of a WSI with patches selected from the OTSU and node types generated by HoverNet <cit.>.
We use a pretrained feature encoder (i.e., KimiaNet <cit.>) to obtain the embeddings of each patch, which serves as the features of each node in the heterogeneous graph.
Based on the nodes and node features, we define the edges and edge attributes between the patches. For each node v ∈𝒱, we use the k-nearest neighbor algorithm to find k nodes that have the most similar features to that node, and connect edges between node v and these neighboring nodes.
For each edge, we compute the Pearson R correlation between the head and tail node features as the edge attributes. The edge attributes introduce heterogeneity in edges and highlight meta-relations in the WSI.
We adopt data augmentations (e.g., randomly removing some edges) during training to alleviate the potential noises introduced by the edge attributes.
As a result, we obtain a heterogeneous graph 𝒢 with heterogeneity introduced by different node types and edge attributes.
As shown in Figure <ref>, a heterogeneous graph outlines the meta-relations between the nuclei in a WSI.
Mining these meta-relations can reveal the structural interactions between the cells, leading to improved performances on different tasks.
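As a rough illustration of this construction (not the authors' released code; the array names, the use of Euclidean feature distances for the k-NN search, and the value of k are our own choices), the following sketch builds the edge set and the Pearson-correlation edge attributes from precomputed patch embeddings and HoverNet-style node types:

import numpy as np

def build_heterogeneous_graph(feats, node_types, k=6):
    # feats: (N, d) patch embeddings (e.g. from a pretrained encoder such as KimiaNet)
    # node_types: (N,) nucleus-type labels obtained by majority vote over detected nuclei
    n = feats.shape[0]
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)  # pairwise feature distances
    np.fill_diagonal(d2, np.inf)                                 # exclude self-loops
    edges, edge_attr = [], []
    for i in range(n):
        for j in np.argsort(d2[i])[:k]:                          # k most similar nodes
            j = int(j)
            edges.append((i, j))
            edge_attr.append(np.corrcoef(feats[i], feats[j])[0, 1])  # Pearson R edge attribute
    return np.array(edges), np.array(edge_attr), node_types

The returned node types, edges and edge attributes can then be packed into any heterogeneous-graph container, with random edge/node dropping applied as augmentation during training.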
§.§ Heterogeneous Edge Attribute Transformer
The conventional graph attention mechanism is incapable of tackling the heterogeneity of the graph.
Inspired by the transformer architecture <cit.> and its extension on graphs <cit.>, we propose a new graph aggregation layer, named the Heterogeneous Edge Attribute Transformer (HEAT) layer, to aggregate the structural relations between biological entities in the built heterogeneous graph.
We explicitly incorporate the node types and continuous edge features into the aggregation process, which guides the learning of edge similarities.
Our proposed architecture also generalizes the existing architecture to incorporate continuous or high-dimensional edge attributes and simplifies the use of linear layers to avoid overfitting led by model over-parameterizations.
For each edge e = (s, r, t) and each attention head i, we project the target node t into a query vector with a linear projection layer W^i_τ(t), and the source node s into a key vector with W^i_τ(s). We also compute the value vector h_value^i of each source node by the same projection layer W^i_τ(s)
h_key^i = W^i_τ(s) H^(l-1)_s, h_query^i = W^i_τ(t) H^(l-1)_t,
h_value^i = W^i_τ(s) H^(l-1)_s,
where H^(l-1)_v is the input node feature for node v ∈𝒱 from the (l-1)-th layer.
These projection layers can project node features of various node types into a node-type-invariant embedding space.
The edge attributes from the (l-1)-th layer h_e^(l-1) are also projected to h'_e = W_edge h_e^(l-1) by a linear projection layer W_edge.
After projecting the node embeddings, we compute the dot-product similarity between the query and key vectors and further multiply the linear transformed edge attribute to the similarity score to incorporate the edge attributes in 𝒢.
We then concatenate the scores from each head and take the softmax of the score (i.e., overweights of incoming edges for all neighboring nodes) to obtain the final attention scores to the value vector h_value^i,
Attention(e) = softmax_∀ s ∈ N(t)( ‖_i ∈ [1,h] ATT(e, i) ),
ATT(e, i) = ( h_key^i h'_e h_query^i ) / √(d),
where N(t) is the set of all the source nodes pointing to target node t, d is the dimension of node embeddings, ATT(e, i) represents the i-head attention score of edge e, ‖_i ∈ [1,h] is the concatenation operator concatenating the attention scores from all heads and Attention(e) represents the final attention score of the edges aggregating all the heads.
We multiply the attention score obtained by the value vector to obtain the output features.
By doing so, the output features contain both the node-type and edge-attribute-specific information. Hence the HEAT layer can capture the structural information in 𝒢 by transforming the node features from different node types. It can also model different semantic relations since edge attributes are included in the aggregation.
Finally, we perform target-specific aggregation to update the feature of each target node by averaging its neighboring node features. We concatenate all h attention heads to obtain the attention vector for each pair of source and target nodes. For each target node t, we conduct a softmax operation on all the attention vectors from its neighboring nodes and then aggregate the information of all neighboring source nodes of
t together.
The updated node features H^(l)_t for 𝒢_l can be represented as
H^(l)_t = ⊕_∀ s ∈ N(t)( ‖_i ∈ [1,h] h^i_value · Attention(e) ),
where ⊕ is an aggregation operator (e.g., mean aggregation).
The updated graph 𝒢_l is returned as the output of the l-th HEAT layer. Algorithm <ref> demonstrates the overall process of our proposed HEAT layer.
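A minimal, single-head sketch of this aggregation is given below; it is an illustrative simplification under our own assumptions (the per-type projection is shared by key, query and value, and the edge attribute is a single scalar), not the released implementation:

import torch
import torch.nn as nn
import torch.nn.functional as F

class HEATLayerSketch(nn.Module):
    # Minimal single-head sketch of the HEAT aggregation described above.
    def __init__(self, dim, num_node_types):
        super().__init__()
        # One projection per node type, used here for key, query and value.
        self.proj = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_node_types)])
        self.w_edge = nn.Linear(1, 1, bias=False)  # linear map of the scalar edge attribute
        self.dim = dim

    def forward(self, x, node_type, edge_index, edge_attr):
        # x: (N, dim), node_type: (N,), edge_index: (2, E), edge_attr: (E,)
        src, dst = edge_index
        # Type-specific projection, applied node by node (slow but explicit).
        h = torch.stack([self.proj[int(t)](x[i]) for i, t in enumerate(node_type)])
        key, query, value = h[src], h[dst], h[src]
        e = self.w_edge(edge_attr.view(-1, 1)).squeeze(-1)       # transformed edge attribute
        score = (key * query).sum(-1) * e / self.dim ** 0.5      # edge-modulated dot product
        out = torch.zeros_like(x)
        for t in dst.unique():                                   # softmax over incoming edges
            m = dst == t
            alpha = F.softmax(score[m], dim=0)
            out[t] = (alpha.unsqueeze(-1) * value[m]).sum(0)
        return out

In practice, the per-edge and per-node loops would be replaced by batched scatter operations.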
§.§ Pseudo-label Graph Pooling
We introduce a novel pooling method — pseudo-label (PL) pooling, to aggregate information with respect to the pseudo-labels (i.e., node types) predicted from a pretrained teacher network (e.g., HoverNet <cit.>).
Unlike conventional methods of pooling features based on clusters, we define clusters using a pretrained node classifier.
Pooling from pseudo-labels ensures the semantic consistency in cluster definitions and distills the context knowledge (e.g., nuclei features) from the teacher network.
Specifically, for each node type a, we pool all node features belonging to type a into a single vector h_a with a readout layer.
The pooled features from each node type are then aggregated into a feature matrix S ∈ℝ^|𝒜| × d.
The graph level feature is then determined by another readout layer (e.g., mean readout).
Algorithm <ref> presents the workflow of the proposed PL Pooling.
By pooling with the pseudo-labels, we are able to cluster patch representation according to nuclei types, such that the graph-level features are enhanced with the prior knowledge on nuclei type distributions.
The detailed mechanism of the PL Pool is presented in the supplementary materials.
We also perform an ablation study in Table <ref> and show that PL Pooling outperforms existing pooling methods in cancer classification tasks.
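A bare-bones version of this pooling step could look as follows (function and variable names are ours; mean readouts are used at both levels for simplicity):

import torch

def pl_pool(node_feats, pseudo_labels, num_types):
    # node_feats: (N, d); pseudo_labels: (N,) nucleus types from the pretrained teacher.
    pooled = []
    for a in range(num_types):
        mask = pseudo_labels == a
        if mask.any():
            pooled.append(node_feats[mask].mean(0))   # readout within node type a
    S = torch.stack(pooled)                           # per-type feature matrix
    return S.mean(0)                                  # graph-level feature (mean readout)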
§.§ Prior Knowledge Regularization
Here we discuss the motivation for introducing prior knowledge in our proposed HEAT and PL pooling algorithms.
In the context of WSI analysis, data are scarce, while the data distributions are sparse and high-dimensional. The curse of dimensionality makes it difficult for the sampling distributions to approximate the properties of the true distributions of the WSIs. This leads to a significant gap between training and testing distributions.
Hence regularization techniques are needed to reduce the model variance and mitigate performance deterioration when transferring the model from training to testing environments.
Since WSI data contain enriched prior knowledge (e.g., the interaction among different cell types), integrating such knowledge into the framework regularizes the model, such that the testing performance improves.
Therefore, we design the above two designs by integrating prior knowledge into the feature aggregation procedure.
Specifically, for the HEAT layer, we integrate the prior knowledge of node type and node attributes when extracting node-level features.
For PL Pooling, we pool node-level features using prior definitions on node clusters.
Moreover, we perform data augmentations (e.g., random pruning on edges and nodes) to regularize the learning from training distributions.
Besides that, other regularization such as imposing a Gaussian prior on the model weights (i.e., using a Bayesian neural network) would also achieve the goal.
§.§ Causal-driven Localization
We make use of the Granger causality to outline causal regions in the WSI with the causal graph explainer <cit.>.
Given a trained GNN model ℳ, the causal contribution of each node v is given by
Δ_δ, v = ℒ(y, y_𝒢) - ℒ(y, y_𝒢\{ v}),
where y is the true label and y_𝒢 = ℳ(𝒢) and y_𝒢\{ v} = ℳ(𝒢\{ v}) are the predicted labels from ℳ with input graphs 𝒢 and 𝒢\{ v}, respectively.
The causality heatmap of the patches can then be visualized with the causal contribution computed for each patch (i.e., node). Addressing causality in instance interpretation can adjust for observational and selection biases, which would improve the explanation accuracy.
Moreover, the causal property of the explainer could facilitate pathologists to find out potential biomarkers for diagnosis and prognosis by highlighting the patches with clinical relevance in the WSI.
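Schematically, the attribution amounts to a leave-one-node-out loop; in the sketch below, model, graph.num_nodes and graph.remove_node are placeholders for whatever trained GNN and graph container are in use:

import torch
import torch.nn.functional as F

def causal_contributions(model, graph, y):
    # Contribution of node v is the loss difference between the full graph
    # and the graph with v removed, following the definition above.
    with torch.no_grad():
        base = F.cross_entropy(model(graph), y)
        scores = []
        for v in range(graph.num_nodes):
            loss_wo_v = F.cross_entropy(model(graph.remove_node(v)), y)
            scores.append((base - loss_wo_v).item())  # Δ_v = L(y, y_G) - L(y, y_{G\{v}})
    return scores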
§ EXPERIMENTS
§.§ Datasets
We use WSIs from the public TCGA–COAD (cancer staging task: 1304 cases, classification task: 1434 cases), TCGA–BRCA (cancer staging task: 1328 cases, classification task: 1712 cases), and TCGA–ESCA (typing task: 213 cases) from the TCGA project <cit.> and Camelyon 16 <cit.> as the benchmark datasets.
On average, around 300 patches are sampled from each WSI in the TCGA datasets (around 5,000 for Camelyon 16), where each patch corresponds to a node in the final heterogeneous graph.
For the TCGA–COAD and the TCGA–BRCA datasets, we conduct two tasks for the benchmark methods — cancer staging and cancer classification.
For the cancer staging task, all the cases are divided into the “Stage I", “Stage II", “Stage III", and “Stage IV" classes.
For the cancer classification task, all the cases are divided into the “Normal" and “Tumor" classes.
For the cancer typing task, we use TCGA–ESCA dataset where all the cases are divided into two classes i.e., “Type I: adenocarcinoma" and “Type II: squamous cell carcinoma".
We also evaluate the localization ability of our framework on the Camelyon 16 dataset, as this dataset provides the tumor mask annotations.
A detailed summary of datasets is provided in supplementary materials.
§.§ Implementation Details
The proposed framework is implemented in Python with the Pytorch library on a server equipped with four NVIDIA TESLA V100 GPUs.
We use openslide <cit.> as the tool to process the WSIs.
The dropout ratio of each dropout layer is selected as 0.2.
All models are trained with 150 epochs with early stopping.
The batch size is selected as 2.
We adopt the cross-entropy loss to train the network for classification tasks.
We use the Adam optimizer to optimize the model with a learning rate of 5 × 10^-5 and a weight decay of 1 × 10^-5.
We perform data augmentations on the training graphs by randomly dropping the edges and nodes, and adding Gaussian noises to the node and edge features.
§.§ Experiment Settings and Evaluation Metrics
We compare our method with an array of SOTA methods, including MIL or graph-based methods. We use five-fold cross-validation to evaluate the overall performance of our framework and other methods. We used the pretrained KimiaNet as the feature extraction for all methods for a fair comparison.
The details of compared methods are listed below.
* ABMIL <cit.>: a MIL framework aggregating bag-level instance information by the attention mechanism.
* DSMIL <cit.>: a dual-stream multiple instance learning method using max pooling and attention to aggregate the signals from the individual patches.
* ReMix <cit.>: a general and efficient MIL-based framework for WSI analysis that takes advantage of data augmentation and a reduce-and-mix strategy to produce rich features.
* PatchGCN <cit.>: a hierarchical graph-based model on survival data with patient-level and WSI-level aggregations. We adapt this method as a GCN model with global attention pooling <cit.>.
* GTNMIL <cit.>: a graph-based MIL method based on the graph transformer network <cit.>.
* H^2-MIL <cit.>: a tree-graph-based multiple instance learning method that utilizes different magnification levels to represent hierarchical features.
For the cancer staging, classification and typing tasks, we use AUC, classification accuracy, and macro F-1 score as the evaluation metrics.
Percentage [%] values are reported for each of the metrics.
Standard errors are reported in brackets.
For all metrics, a higher value indicates a better performance.
Detailed definitions of the evaluation metrics can be found in the supplementary materials.
§.§ Comparison with Other Methods
Quantitative Results.
Table <ref> shows the cancer staging and classification results on the TCGA–COAD and the TCGA–BRCA datasets, and Table <ref> presents cancer typing results on the TCGA–ESCA dataset.
Compared to graph-based WSI analysis methods <cit.>, our method demonstrates improved performance, which indicates our graph modeling method potentially better represents the interaction of patches in a WSI than existing graph-based methods.
We also observe that aggregation on a graph of instances is more effective than aggregation on bags of instances in the staging tasks, which implies graph-based methods are more capable of capturing the global information of WSI for staging tasks than conventional MIL methods <cit.>.
We further compare HEAT on the BRCA subtyping task with a recent SOTA method on WSI — hierarchical image pyramid transformer (HIPT) <cit.>.
Our method achieves an AUC of 89.69 (SD: 3.63), which outperforms the AUC of 87.4 (SD: 6.0) by HIPT.
Additionally, we perform a t-test on the AUCs to demonstrate the statistical significance of our improvements over the SOTA methods, for which the results are presented in Table <ref>.
We observe that the improvements are statistically significant over most of the baseline methods under the 0.05 significance level.
Qualitative Results.
We compute the causal contribution of each patch using Equ. (<ref>).
We visualize the patch image associated with that node to outline the causal regions related to the predictions.
We also compare our causal explanation method to numerous baseline graph interpretation methods based on associations <cit.>.
Figures in the supplementary materials present the explanation results with different graph explainers on the Camelyon 16 dataset.
It is observed that using an association-based explainer provides a smooth heatmap where many regions are highlighted as important.
Such a heatmap is less accurate in localizing the tumor regions, and pathologists still need to traverse a large number of abnormal regions suggested by the explainer to identify the tumor regions.
On the contrary, we observe that using a causal explainer can outline the tumor regions in the WSIs more accurately, with the heatmap more concentrated on the ground-truth tumor regions compared to association-based explainers (e.g., GNNExplainer <cit.>).
§.§ Analysis of Our Framework
Effectiveness of Heterogeneous Graph Construction.
We compare our method with other SOTA GNNs <cit.> to evaluate the effectiveness of our heterogeneous graph construction.
For heterogeneous graph transformer (HGT) <cit.> and HetRGCN <cit.>, we define the discrete edge types — each relation either has the “positive" type representing positive correlations between the nodes of the edge, or the “negative" type representing negative correlations.
Table <ref> presents cancer typing results of our method compared to various SOTA GNN aggregation methods on the TCGA–ESCA dataset.
Not only does our method outperform SOTA homogeneous GNN architectures <cit.>, it is also superior to some recent heterogeneous GNN architectures <cit.>.
This implies the advantage of our proposed architecture for graph-based WSI analysis.
Analysis of Different Pooling Strategies.
We compare our proposed pooling strategy to a variety of comparable pooling methods, including basic pooling methods, such as sum/max/mean poolings and advanced pooling strategies <cit.>.
Table <ref> presents the comparison results of cancer classification on TCGA–COAD dataset.
We fix the model architecture to be GCN <cit.> and the feature encoder as KimiaNet <cit.>.
It is observed that our pooling strategy outperforms the competitors, which validates the advantage of using semantic-consistently defined clusters in pooling.
Performance on Different Class Distributions.
We observe that the WSI datasets for cancer classification are imbalanced (i.e., approximately ten cancer WSIs to one normal WSI). We thus compose a balanced dataset (i.e., normal:cancer = 1:1) with an undersampling strategy to study how the difference in class distributions affects the performance of our model.
Table <ref> presents the comparison. It is observed that our method achieves performance similar to that in the unbalanced setting (see Table <ref>).
Generalizability.
The pretrained features are a key component of our proposed framework.
As the pretrained embedding models are from a diverse WSI context, they can extract good features from most of the WSI datasets.
Because the PanNuke dataset <cit.> (used to pretrain the HoverNet node type classifier) contains WSIs of most of the common cancer types, this leads to a broad generalization of HoverNet.
Furthermore, one may adopt contrastive learning to fine-tune the pretrained models to improve their generalizability to new datasets in potential deployment scenarios.
Accuracy of HoverNet.
The performance of the HoverNet classifier would influence the sensitivity of our framework.
Since the PanNuke dataset contains WSIs of most of the common cancer types and cohorts of the TCGA dataset (e.g., COAD), there are domain overlaps between them.
Hence the HoverNet trained on the PanNuke dataset can be transferred to the TCGA dataset for patch types classification with good performance.
Furthermore, we perform cancer classification on COAD using node types generated by unsupervised K-means clustering.
The performance (AUC: 98.5) is lower than that using HoverNet predicted node types (AUC: 99.9).
This demonstrates that incorporating the pretrained HoverNet outperforms unsupervised methods and improves WSI analysis.
§ CONCLUSION
We present a novel heterogeneous graph-based framework for WSI analysis.
By modeling WSI as a heterogeneous graph with various node types and edge attributes, our method not only leverages the locality information, but also mines the complex relational information of WSI.
We further design a novel heterogeneous edge attribute transformer architecture to aggregate the structural information in the graph and a semantic consistent pooling method to address the potential over-parameterization problems in conventional pooling.
We provide a causal explanation mechanism to highlight the causal contributions of the instances to improve the clinical usability of our work.
Extensive experiments on public datasets validate the effectiveness of our proposed framework and our framework could be adapted to other graph-based computer vision tasks, such as 3D point cloud analysis and anomaly detection.
Acknowledgement. We thank the anonymous reviewers and the area chair for their insightful comments on our manuscript.
This work was partially supported by the Research Grants Council of Hong Kong (17308321), the Theme-based Research Scheme (T45-401/22-N), the National Natural Science Fund (62201483), and the HKU-TCL Joint Research Center for Artificial Intelligence sponsored by TCL Corporate Research (Hong Kong).
|
http://arxiv.org/abs/2307.05676v1 | 20230711180003 | Mott insulators in moiré transition metal dichalcogenides at fractional fillings: Slave-rotor mean-field theory | [
"Zhenhao Song",
"Urban F. P. Seifert",
"Zhu-Xi Luo",
"Leon Balents"
] | cond-mat.str-el | [
"cond-mat.str-el",
"cond-mat.mes-hall"
] |
Department of Physics, University of California, Santa Barbara, CA 93106, USA
Kavli Institute for Theoretical Physics, University of California, Santa Barbara, CA 93106, USA
Department of Physics, Harvard University, Cambridge, MA 02138, USA
Kavli Institute for Theoretical Physics, University of California, Santa Barbara, CA 93106, USA
Canadian Institute for Advanced Research, Toronto, Ontario, Canada
In this work, we study a slave-rotor mean-field theory of an extended Hubbard model, applicable to transition metal dichalcogenide moiré systems, that captures both the formation of Wigner crystals as well as exotic spin states on top of these charge backgrounds.
Phase diagrams are mapped out for different choices of long-range Coulomb repulsion strength, reproducing several experimentally found Wigner crystal states.
Assuming unbroken time reversal symmetry, we find several spin liquid states as well as dimer states at fractional fillings.
While spin dimer states are always found to have the lowest mean field energy, several spin liquid states are energetically competitive and may be stabilized by including gauge fluctuations or further interaction terms.
We further discuss possible experimental signatures of these states pertinent to two-dimensional moiré heterostructures.
Mott insulators in moiré transition metal dichalcogenides at fractional fillings: Slave-rotor mean-field theory
Leon Balents
August 12, 2023
===============================================================================================================
§ INTRODUCTION
Moiré patterns are formed by two or more similar two dimensional lattices overlaid with a slight relative strain or twist angle, giving rise to a large-scale periodic structure.
The most prominent example, twisted bilayer graphene, theoretically proposed by Bistritzer and MacDonald <cit.>, has been found to host a variety of novel phenomena such as superconductivity and correlated insulating states <cit.>.
Moiré systems often feature strongly quenched kinetic energy scales and flat bands such that electronic interactions become dominant, providing a fertile ground for exploring strong correlations in condensed matter systems.
However, the flat band of twisted bilayer graphene is highly degenerate (spin and valley) and nonlocal <cit.>, such that the validity of Hubbard-type models for localized orbitals is under debate.
On the other hand, moiré heterostructures constructed from transition metal dichalcogenides (TMDs) also exhibit flat bands, where only a two-fold pseudospin degeneracy is present due to large spin-orbit coupling and resulting spin-valley locking in TMDs.
Wannier centers constructed from the flat band of some moiré TMDs turn out to be localized at triangular superlattice sites, and thus effective moiré Hubbard model can be formulated <cit.>.
Moiré TMDs are also insensitive to the precise magic angle, i.e. flatness is maintained for a large range of twist angles.
These features, along with tunable filling via gating, make moiré TMDs an ideal and flexible platform to “simulate” <cit.> the Hubbard model and study novel phases that can emerge in systems of strongly correlated electrons <cit.>. Moreover, several moiré TMD systems are found to feature topologically non-trivial bands <cit.>, hence providing a platform to study the interplay of band topology and strong interactions <cit.>.
Recent experiments on WSe_2/WS_2 moiré heterostructures as well as twisted WSe_2/WSe_2 homobilayers have demonstrated Mott insulating states at half filling, as well as correlated insulating states at various fractional fillings <cit.>, corresponding to generalized Wigner crystals, where longer-ranged Coulomb interactions lead to the localization of charges on self-organized lattices.
A highly topical open problem in this context is concerned with possible magnetic states that would arise at lowest temperatures by interactions lifting the residual spin degeneracies of charges localized on lattice sites <cit.>.
Since at some fractional fillings, the effective charge lattices correspond to frustrated lattices, these states may be prime candidates for the realization of highly sought-after quantum spin liquids <cit.>, featuring long-range entanglement and fractionalized excitations.
In this work, we focus on the moiré-Hubbard model with nearest-neighbor hopping t, onsite and longer-range Coulomb repulsions U and V_ij as an effective model for correlated electrons/holes on an emergent moiré triangular lattice, as applicable for K-valley moiré TMD heterostructures, such as WSe_2/WS_2 heterobilayers or twisted WSe_2 homobilayers <cit.>.
In principle, given a particular charge-ordered configuration (stabilized by onsite (U) and longer-ranged (V_ij) repulsive interactions), one can perform a perturbative expansion in the hopping t ≪ U,V_ij to derive an effective strong-coupling Hamiltonian that lifts the spin degeneracy.
However, this procedure is cumbersome in practice, requiring separate perturbative expansions for each filling factor.
Further, at dilute fillings, spin-spin interactions may only be induced by processes at higher order in perturbation theory <cit.>, and finding the ground states/phase diagrams of resulting spin Hamiltonians (often with multiple competing interactions) requires significant numerical efforts.
Such a program was undertaken recently by Motruk et al. in Ref. motruk22, where an effective spin model for pseudospin-1/2 degrees of freedom in the kagome charge crystal (at filling 3/4) was studied using density matrix renormalization group (DMRG) simulations, finding chiral spin liquid and kagome spin liquid phases.
In the paper at hand, we instead pursue a more integrated approach, aiming at a framework to simultaneously describe charge-ordered states at various fractional fillings and the concomitant magnetic states on top of these states.
To this end, we employ a slave-rotor representation, first introduced by Florens and Georges <cit.>.
In this representation, each electron is fractionalized into a fermionic spinon (carrying spin, but no charge) and an O(2) rotor degree of freedom, with its angular momentum corresponding to the electronic charge.
Within the slave-rotor representation, the Hubbard model then becomes a model of interacting spinons and rotor degrees of freedom.
Making a mean-field approximation, this interacting problem can be split into a solvable free spinon Hamiltonian and a quantum XY model, with self-consistency equations coupling the two mean-field Hamiltonians.
Solving these self-consistency equations numerically allows us to map out phase diagrams, and characterize the nature of respective phases.
We summarize our main results below:
* We find various incompressible (Mott-insulating) states with charges forming emergent honeycomb (n̅=4/3,5/3 filling) and kagome (n̅=5/4,7/4) Wigner crystals on the (moiré) triangular lattice, as observed in previous experiments <cit.>. These states are accessible by tuning chemical potential and/or the overall strength of repulsive electronic interactions (compared to kinetic energy scales).
* Distinct insulating states are separated by metallic (compressible) states, that are entered via first-order metal-insulator transitions. At fractional fillings, “partially metallic” states are found, where quasiparticles disperse on emergent lattices that are inherited from adjacent Wigner crystal phases, e.g., honeycomb or kagome sublattices of the triangular lattice.
* Considering incompressible (insulating) states, within our mean-field theory it is energetically preferable for the spinons to form dimerized states, corresponding to (possibly fluctuating) valence-bond solid (VBS) magnetic states that lift the remaining spin degeneracy of localized charges.
We further find that some spin liquid states are energetically competitive with these dimer states, such as the 0-flux spinon Fermi surface spin liquid and the staggered π-flux Dirac spin liquid on the half-filled triangular charge crystal and the 5/4-filling kagome charge crystal. We discuss the stability of such spin liquid states on these different charge crystals.
The outline of this paper is as follows.
In Sec. <ref> we briefly describe the generalized Hubbard model on the effective moiré triangular lattice, and detail the symmetry properties of the physical moiré system and the effective Hubbard model.
In Sec. <ref>, we introduce the slave-rotor representation and mean-field approximation, and describe the solution of self-consistent equations for the decoupled free fermion and rotor Hamiltonian.
In Sec. <ref>, we discuss results of our mean-field calculation and present phase diagrams as a function of chemical potential and interaction strength.
In Sec. <ref>, we explore the physics beyond mean field theory, give arguments on the stability of different spin liquids and discuss possible experimental signatures.
A summary and outlook is given in Sec. <ref>.
§ MOIRÉ-HUBBARD MODEL
§.§ Hamiltonian
In this work, we are concerned with the moiré-Hubbard model on an effective triangular lattice (on moiré lattice scales), with the Hamiltonian
H = H_t + H_U
where
H_t =
ε_0 ∑_i,σ=↑,↓ c_i,σ^† c^_i,σ
+
∑_ij,σ=↑,↓ t_ij,σ c_i,σ^† c^_j,σ
H_U = U/2∑_i (n_i-1)^2 + 1/2∑_ijV_ij (n_i - 1) (n_j-1).
Here, ε_0 is the onsite energy, i,j denote lattice sites of the effective moiré triangular lattice, t_ij,σ corresponds to a (possibly complex) spin-dependent hopping amplitude, U is the onsite Coulomb repulsion, and V_ij is the long range Coulomb interaction.
As discussed further below, we will mostly focus on truncating long-ranged Coulomb interaction to nearest neighbor V and next-nearest neighbor V^' for simplicity, but write V_ij for generality.
The total electron number n_i = n_i,↑ + n_i,↓.
Note that we have defined the interaction term (<ref>) so that ε_0=0 corresponds to half-filling.
H_U is related (up to a constant) to the conventional form U∑_i n_i,↑ n_i,↓ + V ∑_⟨ ij ⟩ n_i n_j + V^'∑_⟨⟨ ij ⟩⟩n_in_j by a redefinition of the onsite energy ϵ_0 →ϵ_0 - U/2 - 6(V+V^').
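The quoted shift can be verified by expanding the squares (a brief check of ours, not part of the original text): using n_i^2 = n_i + 2 n_i,↑ n_i,↓, one finds
U/2 (n_i-1)^2 = U n_i,↑ n_i,↓ - U/2 n_i + const,
1/2 ∑_ij V_ij (n_i-1)(n_j-1) = 1/2 ∑_ij V_ij n_i n_j - ∑_i ( ∑_j V_ij ) n_i + const,
and with V_ij truncated to nearest and next-nearest neighbors on the triangular lattice, ∑_j V_ij = 6(V+V^'), so absorbing the single-particle terms into the onsite energy reproduces ϵ_0 → ϵ_0 - U/2 - 6(V+V^').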
As written, the Hamiltonian is agnostic regarding specific material realizations.
A general principle that gives rise to such effective moiré-Hubbard Hamiltonians consists in determining the band structure that arise when holes near the valence-band maxima (VBM) experience a slowly varying periodic moiré potential (in heterobilayers, induced by a second layer with incommensurate lattice geometry), or a periodically varying interlayer hybridization (in twisted homobilayer systems).
Considering nearly-flat and well-isolated moiré bands, one may then construct appropriate localized Wannier orbitals, with their overlaps giving rise to the tight-binding dispersion H_t, and H_U is obtained from projecting Coulomb interactions onto these localized orbitals.
The locations of the centers of these Wannier orbitals therefore determine the effective lattice geometry in Eq. (<ref>).
In TMDs with K-valley VBM, strong spin-orbit coupling leads to a locking of spin and valley degrees of freedom near the Fermi level, so that quasiparticles in the effective moiré-Hubbard model carry a single (combined) S_eff=1/2 spin-valley degree of freedom (pseudospin).
We briefly discuss possible material realizations:
* Heterobilayers such as WSe_2/MoSe_2 <cit.> or WSe_2/WS_2<cit.>: the topmost moiré band originates from the K/K' valley valence electrons of the WSe_2 layer, experiencing a triangular moiré potential modulated by the WS_2 or MoSe_2 layer. Wannier centers constructed from this moiré band are found to form an effective triangular lattice <cit.>.
* Twisted homobilayers such as twisted bilayer WSe_2 <cit.>: the K/K' valley valence bands from both layers hybridize to generate the (topologically trivial) K/K' valley moiré bands, respectively. Wannier centers are found to form a triangular lattice, coinciding with sites in the moiré structure where the metal atoms in the two layers are aligned <cit.>.
For both realizations, the topmost moiré bands are doubly degenerate and related to each other by time reversal symmetry, corresponding to the pseudospin-1/2 degeneracy.
For some twisted TMDs, most prominently twisted bilayer MoTe_2 <cit.>, Wannier states for the topmost moiré bands are found to form an effective honeycomb superlattice, with pseudospin-1/2–dependent intralayer hopping giving rise to an effective realization of the Kane-Mele model. The interplay of band topology and strong interactions has recently received immense attention, following experimental reports of fractional quantum anomalous Hall states <cit.>.
In the following, we will focus on moiré TMD systems well-described by effective triangular Hubbard models, for which generalized Wigner crystal states have been observed experimentally <cit.>.
We stress that that in principle, our slave-rotor mean-field study as presented in Sec. <ref> can be straightforwardly applied to appropriate Kane-Mele-Hubbard models, which is an interesting avenue left for further study.
However, we note that pseudospin-dependent complex next-nearest neighbor hoppings give rise to Dyzaloshinskii-Moriya interactions in the strong-coupling limit which can be expected to stabilize (non-collinear) magnetic order rather than spin-liquid phases <cit.>.
§.§ Symmetries
Moiré heterostructures have distinct microscopic symmetries.
TMD monolayers possess C_3v symmetry, with a vertical reflection plane parallel to the links of the effective honeycomb lattice.
For (twisted) heterobilayers, this reflection symmetry is broken, and the C_3v symmetry is reduced to a C_3 symmetry.
Twisted homobilayers have D_3 symmetry which is generated by C_3 rotations as well as C_2 rotations around an in-plane axis which swaps the top and bottom layers.
Note that vertical displacement field, e.g., introduced by gate voltages, would break the layer pseudo-spin symmetry and reduce it to a C_3 symmetry.
For both systems, time reversal symmetry is preserved, connecting the K(spin up) and K'(spin down) degrees of freedom.
The effective Hubbard model Eq. (<ref>) is constructed by projecting the repulsive Coulomb interactions to the lowest energy (flat) bands derived from continuum models for K-valley moiré TMD <cit.>.
Crucially, in Refs. <cit.> moiré potentials were truncated beyond the lowest harmonics (i.e. restricting to Fourier components corresponding to the first six moiré reciprocal lattice vectors).
As we detail in App. <ref>, this truncation leads to the emergence of an accidental inversion symmetry of the moiré-Bloch wavefunctions, and thus also of the effective Hubbard model for the respective Wannier states.
Now we comment on the validity of the lowest harmonics approximation, following the original argument in the Bistritzer-MacDonald paper <cit.> for twisted bilayer graphene. We expect the interlayer tunneling amplitude t_q at momentum q to drop rapidly on the reciprocal lattice vector scale. For example, based on Ref. <cit.>, WSe_2 has interlayer separation 6.7Å≤ d_⊥≤ 7.1Å, which exceeds the intralayer lattice constant a=3.28Å by more than a factor of two. Because the real-space hopping t(r) varies with three-dimensional separation √(d_⊥^2+r^2), t_q decreases rapidly for qd_⊥>1.
§ SLAVE-ROTOR MEAN-FIELD THEORY
We seek an integrated description of metal-insulator transitions (at zero temperature) and the concomitant formation of charge crystals at certain fractional fillings, and the magnetic states in the incompressible regimes (with localized charges).
To this end, we employ a slave-rotor representation to explicitly separate the electrons' spin and charge degrees of freedom.
While in an exact rewriting these are strongly coupled, we can make a mean-field approximation to obtain separate spin and charge Hamiltonians, coupled via self-consistency equations.
§.§ Slave-rotor representation
Following Ref. florens04, we split electrons at each site into fermionic chargeless spinons and a single on-site O(2) rotor degree of freedom. The rotor is used to represent the phase degree of freedom θ_i, conjugate to the total charge at site i, identified as the rotor's angular momentum L̂_i =-∂/∂θ_i.
The electron creation operator at site i is rewritten as
c_i,σ^† = f_i,σ^† e^{iθ_i}
where the spinon f_i,σ^† has the same spin/orbit flavor as the electron, and e^{iθ_i} raises the angular momentum of the rotor by one unit.
In other words, creating an electron amounts to creating a spinon and raising the angular momentum (total charge) by one at the same time.
This rewriting enlarges the local Hilbert space and thus introduces redundant degrees of freedom.
Therefore, a constraint is imposed that the number of spinons match the total charge,
L̂_i =∑_σ(f_i,σ^†f^_i,σ-1/2) .
Here, we choose the convention that the rotor quantum number L_i=0 corresponds to half filling, e.g., for electrons with spin-1/2, L_i=0 implies that there is exactly one electron at site i.
We note that the representation Eq. (<ref>) is an exact rewriting if the constraint Eq. (<ref>) is enforced on each site, for example by means of a Gutzwiller projection.
However, since there is only limited analytical understanding of projected wavefunctions, and their evaluation requires significant numerical efforts, we instead henceforth will enforce the constraint Eq. (<ref>) on average.
The merit of the slave-rotor representation lies in the fact that Coulomb repulsion is only dependent on the charge quantum number, and we can thus replace the four-fermion interaction terms H_U [see Eq. (<ref>)] by terms quadratic in the rotor's angular momentum.
Specifically, considering an atomic Hamiltonian with some onsite energy level ε_0 and on-site Coulomb repulsion U, we can write
H_at = ∑_σε_0 c^†_σ c^_σ + U/2 ( n-1)^2
= ∑_σε_0 f^†_σ f^_σ + U/2L̂^2
where we drop an overall numerical constant.
We now generalize to the Hubbard model in Eq. (<ref>).
Again using the slave-rotor representation in Eq. (<ref>), and replacing L̂_i = n_i - 1, the Hubbard Hamiltonian can be expressed in terms of spinons and rotors as
H =
-∑_i,σ μ f^†_i,σ f^_i,σ
+ ∑_i U/2 L̂_i^2
+ 1/2 ∑_ij V_ij L̂_i L̂_j
- ∑_ij,σ t_ij,σ f^†_i,σ f^_j,σ e^{i(θ_i-θ_j)}.
Here, we have replaced the onsite energy ϵ_0 by a chemical potential,
ϵ_0 = - μ,
which is an experimentally accessible tuning parameter (via electrostatic gating) <cit.>.
In the following, we will therefore work in the grandcanonical ensemble rather than at fixed particle number.
The kinetic term of the Hubbard model has become a coupling between spinon and rotor degrees of freedom in Eq. (<ref>).
In principle, the constraint Eq. (<ref>) should be imposed on each site.
§.§ Mean-field decoupling of spinons and charge rotors
The fermionic spinons and rotor degrees of freedom interact via the “correlated hopping” in Eq. (<ref>), preventing an exact solution of the model.
To make progress, here we perform a mean-field decoupling of the interaction term,
f^†_i,σ f^_j,σ e^{i(θ_i-θ_j)} → ⟨f^†_i,σ f^_j,σ⟩ e^{i(θ_i-θ_j)}
+ f^†_i,σ f^_j,σ ⟨e^{i(θ_i-θ_j)}⟩
- ⟨f^†_i,σ f^_j,σ⟩ ⟨e^{i(θ_i-θ_j)}⟩,
where ⟨…⟩ denotes an expectation value with respect to the ground state of the respective mean-field Hamiltonian.
We also add a Lagrange multiplier field h_i to impose the constraint Eq. (<ref>).
The Hamiltonian Eq. (<ref>) then splits into separate Hamiltonians for the fermionic spinons and O(2) quantum rotors,
H_f = ∑_i,σ (-μ - h_i) f^†_i,σ f^_i,σ
- ∑_ij,σ t_ij,σ^eff f^†_i,σ f^_j,σ
H_θ = ∑_i [ U/2 L̂_i^2 + h_i L̂_i ]
+ ∑_ij [ 1/2 V_ij L̂_i L̂_j
- K_ij e^{i(θ_i-θ_j)} ],
where the effective hopping t_ij,σ^eff for the (free) fermionic spinons and the coupling of quantum rotors K_ij are related to the mean-field parameters, and are to be determined self-consistently.
Explicitly, the coupled self-consistency relations read
t_ij,σ^eff = t_ij,σ ⟨e^{i(θ_i-θ_j)}⟩
K_ij = ∑_σ t_ij,σ ⟨f^†_iσ f^_jσ⟩.
Further, the parameters h_i must be (implicitly) determined to satisfy the average constraint for matching the filling of spinons to each site's charge,
⟨L̂_i⟩ = ∑_σ [ ⟨f^†_i,σ f^_i,σ⟩ - 1/2 ].
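The overall mean-field cycle can be organized as in the sketch below (purely schematic; solve_spinons and solve_rotors stand for diagonalizing H_f and H_θ, and the damping factor and the update step for the Lagrange multipliers h_i are heuristic choices of ours):

import numpy as np

def mean_field_loop(t, U, V, mu, solve_spinons, solve_rotors, tol=1e-6, max_iter=500):
    # t, V: (N, N) hopping and interaction matrices; K and h are iterated to self-consistency.
    K = t.copy()                                        # initial guess for the rotor couplings
    h = np.zeros(t.shape[0])                            # Lagrange multipliers
    for _ in range(max_iter):
        rotor_corr, L_avg = solve_rotors(K, U, V, h)    # <e^{i(θ_i-θ_j)}>, <L_i>
        t_eff = t * rotor_corr                          # t^eff_ij = t_ij <e^{i(θ_i-θ_j)}>
        chi, density = solve_spinons(t_eff, mu, h)      # chi_ij = Σ_σ <f†_iσ f_jσ>, n_i
        K_new = t * chi                                 # K_ij = Σ_σ t_ij <f†_iσ f_jσ>
        h += 0.1 * (density - 1.0 - L_avg)              # heuristic step toward <L_i> = n_i - 1
        if np.max(np.abs(K_new - K)) < tol:
            break
        K = 0.5 * K + 0.5 * K_new                       # damped update
    return K, h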
§.§ Solution of rotor Hamiltonians
While the free fermion Hamiltonian is easily diagonalized by means of a unitary transformation in momentum space, the Hamiltonian H_θ of interacting O(2) rotors (i.e. a quantum XY model) evades such an exact solution.
Instead, we will make physically motivated approximations to the rotor correlation ⟨e^{i(θ_i-θ_j)}⟩, which characterizes distinct phases of the quantum XY model and, in the present context, determines the effective spinon dispersion.
We discuss two distinct regimes below.
§.§.§ Metallic (compressible) states
The rotor acquiring a non-zero expectation value ⟨e^{iθ_i}⟩ ≡ √(Z_i) ≠ 0 can be understood to be analogous to the condensation of a bosonic ladder operator ⟨b^†_i⟩ ≠ 0, giving rise to a superfluid phase for the bosonic degrees of freedom.
This phase is characterized by off-diagonal long range order of the rotor correlator at long distances, i.e. lim_|i-j| →∞ ⟨e^{i(θ_i - θ_j)}⟩ = ⟨e^{iθ_i}⟩ ⟨e^{-iθ_j}⟩.
We can access this phase on a mean-field level by factorizing the correlator ⟨e^{i(θ_i - θ_j)}⟩ ≈ ⟨e^{iθ_i}⟩ ⟨e^{-iθ_j}⟩.
This implies that the effective spinon hopping [cf. Eq. (<ref>)] can be written as
t_ij,σ^eff = t_ij,σ ⟨e^{iθ_i}⟩ ⟨e^{-iθ_j}⟩
= t_ij,σ √(Z_i) √(Z_j),
where we assume ⟨e^{iθ_i}⟩ is real, which should be expected if time reversal symmetry is unbroken, and then ⟨e^{iθ_i}⟩ = ⟨e^{-iθ_i}⟩ = √(Z_i) from hermiticity. The notation √(Z_i) is used so that Z_i has the meaning of a spectral weight, see below. The nonzero expectation value of the phase operator indicates that the rotor's angular momentum, and thereby the electronic charge, is no longer a good quantum number, and thus the system is in a metallic (compressible) state.
Especially at commensurate fillings, upon increasing Coulomb repulsion, Z_i decreases continuously to zero, which is the well-known Mott transition as demonstrated in Ref. <cit.>.
Within the slave-rotor formalism, the electronic Green's function G^(c) is given by
G^(c)_ij(τ-τ') = G^(f)_ij(τ-τ') ⟨e^{-i[θ_i(τ) - θ_j(τ')]}⟩.
In the metallic states, the rotor degrees of freedom are long-range ordered, and in the mean-field approximation we can read off the spectral weights of the electronic quasiparticles as Z_ij = √(Z_i)√(Z_j)
where √(Z_i) = ⟨e^{iθ_i}⟩. Note that here the spinon bands contribute unit spectral weight, such that the wavefunction renormalization of the electronic quasiparticles is determined by the rotor degrees of freedom.
To explicitly solve the self-consistency equations, we note that with Eq. (<ref>), the rotor Hamiltonian Eq. (<ref>) can be written as
H_θ ≈ U/2 ∑_i L̂_i^2 + 1/2 ∑_ij V_ij L̂_i L̂_j + ∑_i h_i L̂_i
- ∑_i ( ∑_j K_ij √(Z_j) ) ( e^{iθ_i} + e^{-iθ_i} ) + const.
In line with our site-factorized treatment of the rotor kinetic energy, we also decouple the long range Coulomb interaction as
∑_ij V_ij L̂_i L̂_j ≈ ∑_ij V_ij ( L̂_i ⟨L̂_j⟩ + ⟨L̂_i⟩ L̂_j
- ⟨L̂_i⟩⟨L̂_j⟩ )
Then, the rotor Hamiltonian can be reduced to a sum of decoupled single-site rotor (mean-field) Hamiltonians
H_θ^MF = ∑_i [ U/2 L̂_i^2 + ( ∑_j V_ij ⟨L̂_j⟩ + h_i ) L̂_i ]
- ∑_i ( ∑_j K_ij √(Z_j) ) ( e^{iθ_i} + e^{-iθ_i} ) + const.
Given a set of K_ij, the mean-field Hamiltonian H_θ^MF can now be readily solved, where ⟨L̂_i⟩ and √(Z_i) are to be determined self-consistently – the corresponding ground-state expectation values ⟨e^{iθ_i}⟩ then determine t^eff_ij,σ, which serves as an input for the solution of the fermionic spinon Hamiltonian, used to obtain the value of K_ij for the next iteration.
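Concretely, each single-site problem can be diagonalized in a truncated angular-momentum basis; the following sketch (ours, with L_max and the names Phi and h_eff as illustrative stand-ins for ∑_j K_ij √(Z_j) and ∑_j V_ij ⟨L̂_j⟩ + h_i) returns ⟨e^{iθ}⟩ = √(Z) and ⟨L̂⟩:

import numpy as np

def solve_single_site_rotor(U, h_eff, Phi, L_max=5):
    # Diagonalize H = (U/2) L^2 + h_eff L - Phi (e^{iθ} + e^{-iθ})
    # in the truncated basis |L>, L = -L_max, ..., L_max.
    dim = 2 * L_max + 1
    L_vals = np.arange(-L_max, L_max + 1)
    H = np.diag(0.5 * U * L_vals ** 2 + h_eff * L_vals)
    raise_op = np.diag(np.ones(dim - 1), k=-1)       # e^{iθ}: |L> -> |L+1>
    H = H - Phi * (raise_op + raise_op.T)
    evals, evecs = np.linalg.eigh(H)
    gs = evecs[:, 0]                                 # ground state (real for real symmetric H)
    exp_phase = gs @ raise_op @ gs                   # <e^{iθ}> = sqrt(Z)
    exp_L = gs @ (L_vals * gs)                       # <L>
    return exp_phase, exp_L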
§.§.§ Insulating (incompressible) states
We characterize insulating states by the vanishing of the respective quasiparticle weights, which is attained when the phase-operator expectation values vanish, ⟨e^{iθ_i}⟩ = 0.
In this case, L_i is quantized to integer values, giving rise to zero compressibility ∂n/∂μ = 0.
When ⟨e^{iθ_i}⟩ = √(Z_i) = 0, there is no long-range order for the rotor degrees of freedom, but we stress that this does not necessarily lead to ⟨e^{i(θ_i-θ_j)}⟩ = 0: a simple site-factorized mean-field treatment of the quantum XY model (as used for the metallic states above) is incapable of correctly producing such finite (short-range) rotor correlations.
Instead, we obtain the expectation value of this operator from Eq. (<ref>) by the Hellmann-Feynman theorem
⟨e^{i(θ_i-θ_j)}⟩ = -∂⟨H_θ⟩/∂K_ij,
which makes explicit that in general rotor correlations do not vanish, since the energy expectation value will depend on K_ij.
This implies that in insulating states with vanishing quasiparticle weights Z=0, the spinon in general still has nonzero hopping and disperses.
Such spin-charge separation is typical of spin liquids.
In this work, we employ canonical perturbation theory to calculate the ground-state energy ⟨H_θ⟩ of the rotor Hamiltonian Eq. (<ref>) and the rotor correlation from Eq. (<ref>) when the onsite mean fields vanish, ⟨e^{iθ_i}⟩ = 0.
This perturbative expansion is controlled if the rotor-rotor coupling K_ij is small compared to the repulsive U,V interactions.
To this end, we take the (solvable) Hamiltonian for the angular momenta as the unperturbed Hamiltonian
H_θ^(0)=∑_iU/2L̂_i^2
+1/2∑_ijV_ijL̂_iL̂_j+∑_i h_iL̂_i,
with the perturbation
H_θ^' = ∑_i δ h_i L̂_i
- ∑_ij K_ij e^{i(θ_i-θ_j)},
where we included δ h_i as a possible change of Lagrange multipliers to ensure the constraint Eq. (<ref>) remains satisfied.
Then, the ground-state energy up to second order in K/U reads
E_0 = E_0^(0) + ∑_i δ h_i L_i
+ ∑_ij K_ij K_ji / ( E_0^(0) - E_ij^(0) )
+ 𝒪((K/U)^3).
Here, E_ij^(0) corresponds to the energy of the configuration in which one unit of charge has been moved from site j to site i, measured with respect to the unperturbed ground configuration.
From Eq. (<ref>) we can see that ⟨e^{i(θ_i-θ_j)}⟩, and thus t_ij,σ^eff, is proportional to K_ij to lowest order.
Eqs. (<ref>) and (<ref>) constitute a set of self-consistency equations, now accounting for perturbative corrections, from which we can solve for K_ij and the other parameters.
There is always a trivial solution K_ij = 0, which is the normal insulating state. When K_ij ≠ 0, we obtain nonzero spinon hoppings, while the system is still incompressible because the charge is quantized and conserved (⟨e^{iθ}⟩ = 0).
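The nontrivial input at this order is the energy cost of a single charge transfer, which fixes the denominators above; a minimal sketch (ours; V is the full symmetric interaction matrix with zero diagonal and L the integer charge configuration) is:

import numpy as np

def charge_transfer_energy(L, i, j, U, V, h):
    # Energy cost of moving one unit of charge from site j to site i on top of
    # the unperturbed configuration L, for H^(0) = (U/2)Σ L^2 + (1/2)Σ V_ij L_i L_j + Σ h_i L_i.
    def energy(Lc):
        return 0.5 * U * np.sum(Lc ** 2) + 0.5 * Lc @ V @ Lc + h @ Lc
    L_new = L.copy()
    L_new[i] += 1      # charge added at i ...
    L_new[j] -= 1      # ... and removed from j
    return energy(L_new) - energy(L)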
§.§.§ Classification of spin liquid states
When the K_ij are nonzero in insulating states, we obtain a larger set of solutions than in the metallic case, which fall into different universality classes of spin liquids.
In fact, in the decomposition Eq. (<ref>), we have a U(1) gauge redundancy
f^†_i,σ → e^{-iφ_i} f^†_i,σ
θ_i → θ_i + φ_i
After this transformation, the electron operators and thus the physical Hamiltonian remain invariant. However, the given mean-field Hamiltonians H_f and H_θ will in general not be invariant under these transformations.
Equivalently, one observes that even though a physical wavefunction for a spin liquid may preserve all physical symmetries (space group symmetries, spin rotation symmetry and time reversal symmetry), a corresponding mean-field Hamiltonian is only invariant if those symmetry operations are supplemented by appropriate gauge transformations as in Eq. (<ref>).
In other words, symmetries of mean-field ansaetze are realized projectively.
Mean-field ansaetze corresponding to distinct physical states can be classified by their respective projective symmetry groups (PSG), as introduced in Ref. <cit.>.
To be more explicit, consider a space group transformation U under which the physical state |Ψ_Phys⟩ is invariant.
Before projection, the mean-field state |Ψ^(K_ij)⟩ may not transform trivially under U, since K_ij and K^'_ij related by a gauge transformation [Eq. (<ref>)] actually correspond to the same physical state.
Invariance of the mean-field ansatz K_ij is achieved by combining U with a gauge transformation
G_U U(K_ij)=K_ij
where the physical operation U maps the spatial index i to some other index U(i), and G_U is an appropriately chosen gauge transformation,
G_U: f^†_iσ → e^{-iφ_U(i)} f^†_iσ
e^{iθ_i} → e^{iφ_U(i)} e^{iθ_i}
K_ij → e^{-i(φ_U(i)-φ_U(j))} K_ij
t_ij,σ^eff → e^{i(φ_U(i)-φ_U(j))} t_ij,σ^eff
The set of G_U U that leaves K_ij invariant is referred to as the invariant
PSG.
Different PSG entail distinct K_ij ansaetze, and thus we can classify the mean-field ansaetze and corresponding physical states by their PSG realizations.
The subgroup of the invariant PSG that is a pure gauge group is called the invariant gauge group (IGG), and the physical symmetry group is hence given by SG = PSG/IGG.
Here in the insulating case, a global U(1) transformation leaves K_ij and t_ij,σ^eff invariant,
f^†_i,σ → e^{-iφ} f^†_i,σ
e^{iθ_i} → e^{iφ} e^{iθ_i}
where φ is site independent. Therefore, the slave-rotor representation has a U(1) IGG.
Then, we can ask how many gauge-inequivalent classes of spin liquids are allowed if we demand the full physical symmetry. The structure of the symmetry group imposes algebraic constraints on the PSG. For example, demanding that translations along the two principal axes of a lattice commute implies
T_1^-1T_2T_1T_2^-1=ℐ
Then, the PSG Eq. (<ref>) with a U(1) IGG should satisfy
(G_T_1T_1)^-1 G_T_2T_2 G_T_1T_1 (G_T_2T_2)^-1 = G ∈ U(1).
The identity on the right-hand side of Eq. (<ref>) is now relaxed to an element of the IGG, since it keeps the mean-field ansaetze invariant. Listing out all the relations of the symmetry group, we get a finite number of distinct PSGs allowed by these algebraic constraints, called algebraic PSGs.
Since ansaetze K_ij are distinguished by their PSG realizations, we arrive at a finite number of choices, which are not related to each other by a pure gauge transformation and thus fall into different classes.
We can therefore focus on particular mean-field ansaetze K_ij that correspond to distinct PSG, under a given symmetry requirement.
The above arguments, and PSG classifications in general, employ the full symmetry group of the lattice on which the spin degrees of freedom reside.
However, in this work, we are in particular focused on charge-ordered states that may form in the generalized Hubbard model.
At fractional fillings, charges then reside on effective sublattices of the triangular lattice (e.g. honeycomb lattice at 4/3 filling or kagome lattice at 5/4 filling).
For these states, a similar PSG analysis can be applied based on the symmetry groups of the respective charge-ordered states (that spontaneously break translational/rotational symmetries of the parent triangular lattice).
Moreover, if we allow a breaking of rotational symmetry, nematic states are possible, where the amplitudes of K_ij differ on bonds of different orientation.
These also include dimer states that are formed if all the nonzero K_ij bonds are disconnected, corresponding to VBS states.
In the following, based on the discussion above, we will consider distinct invariant PSG ansaetze that have previously been found to be energetically competitive on various charge crystals, solve the self-consistent equations respectively, and compare their respective energies.
We mention that also in the metallic phase, the slave-rotor Hamiltonian before mean-field decoupling formally enjoys a U(1) gauge redundancy.
But the phase operator acquiring a finite expectation value, ⟨e^{iθ}⟩ ≠ 0, implies that the IGG here is just the trivial group with the identity as its only element, because ⟨e^{iθ}⟩ would change under any transformation θ_i →θ_i + ϕ_i. This trivial IGG results in only one allowed solution, in which the K_ij are the same on all bonds (a uniform solution), if the full physical symmetry is preserved: this metallic state does not possess an intrinsic gauge structure and corresponds to a confining phase.
We further note that dimer states, which have ⟨ f^†_i,σ f_j,σ⟩≠ 0 on disjoint bonds ⟨ ij ⟩, possess a U(1) IGG at the mean-field level. However, due to the disconnected nature of the spinon hopping, all fermionic degrees of freedom are gapped (i.e. spinons are localized on bonds, forming spin singlets after projection).
Pure (compact) U(1) gauge theory in 2+1 dimensions is unstable <cit.>, and thus the dimer phase must be a confining phase of matter.
§ MEAN-FIELD PHASES
§.§ Overview
To map out phase diagrams, we numerically solve the self-consistency equations Eqs. (<ref>) and (<ref>) in Sec. <ref>, as a function of chemical potential μ and interaction strength U.
We find that phase diagrams depend significantly on the presence and nature of longer-ranged repulsive interactions V_ij in Eq. (<ref>) due to Coulomb interactions between charges.
Typically, the Coulomb interaction in bulk solids is efficiently screened (leading to an exponential decay with distance), justifying the approximation of repulsive interactions as an onsite (contact) interaction.
However, in two-dimensional moiré heterostructures, screening is significantly weaker, and the moiré-induced quenching of kinetic energy scales implies that extended repulsive interactions are no longer negligibly small <cit.>. As we shall see below (and discuss in Appendix <ref>), extended repulsive interactions (beyond nearest-neighbor) are necessary for reproducing some of the more complex generalized Wigner crystals at certain filling factors.
In fact, the screening length in moiré TMD heterostructures can be controlled via the choice of and distance to metallic screening layers (which also act as gate electrodes).
Modelling the screening via the method of image charges, the electron-electron interaction potential is U(r)=(e^2/ϵ)[r^-1-(r^2+D^2)^-1/2], where D is the vertical distance between the metallic layer and TMD bilayer <cit.>.
With this in mind, in this study, we for simplicity consider two distinct cases of next-nearest neighbor repulsion V^':
* V^'=0, corresponding to short-ranged (truncated beyond nearest-neighbor repulsion V) interactions due to strong screening.
* V^'=1/√(3)V, roughly motivated by a 1/r decaying Coulomb repulsion, such that V^'/V is inversely proportional to distance ratio.
In both cases, we neglect repulsions beyond next-nearest neighbors for simplicity, and the ratio of nearest neighbor to on-site Coulomb repulsion V/U is fixed to be 1/4 <cit.>.
We expect that in realistic systems, V^'/V should take a value between the two cases discussed above, depending on microscopic details.
Note that here we also drop hopping amplitudes beyond nearest neighbors since the Wannier states on the moiré lattice scale are exponentially localized, and accordingly longer-ranged hopping has been found to be negligible compared to the strong Coulomb interactions in moiré TMDs <cit.>.
Considering only nearest-neighbor repulsion, a classical analysis shows that all possible charge crystals on the triangular lattice have at most three distinct sublattices, so three inequivalent sublattice sites with possibly different L_i, Z_i and h_i are needed, assuming a translational symmetry with respect to √(3)×√(3)-unit cells (Fig. <ref>).
On the other hand, if next-nearest neighbor repulsion is included, charge ordering patterns with 4-site unit cells (Fig. <ref>) become energetically competitive, allowing for striped phases and kagome-type effective sublattice charge order.
In our numerical solutions of the self-consistency equations we consider various ansaetze in 3- and 4-site unit cells, and compare their respective total energies per site to determine the global ground state.
For simplicity, we restrict our analysis to the half plane of μ>0 so that in our convention, the filling factor (mean number of particles per site) n̅≥ 1 (i.e. hole doped scenario).
While the triangular lattice is not particle-hole symmetric, and thus the location of phases and phase boundaries may change, it is expected that for each generalized Wigner crystal at filling n̅ there will exist a “conjugate” phase at filling 2-n̅.
We solve the mean-field self-consistency equations using an iterative procedure.
We work on discretized momentum-space grids with 30×30 unit cells for both the 3-site and 4-site ansaetze.
The free fermion Hamiltonian can be diagonalized easily in momentum space, with negligible finite-size effects.
To solve the rotor Hamiltonian, we truncate the local rotor Hilbert space (which is in principle unbounded) to finite dimension with
L_min=-5 to L_max=5.
This truncation is justified since states with large L would be suppressed by U, and we explicitly confirm its validity by noting that Z is close to 1 when U=0 and ε=0, as expected.
We further note that this consideration also implies that the slave-rotor mean field approach works better for a relatively large U.
We work in units of the kinetic energy t, and employ a small but finite temperature k_B T/t = 0.01 for numerical stability.
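To make the truncation and the role of the quasiparticle weight concrete, the sketch below (a minimal single-site illustration with assumed parameter values, not the production code behind the phase diagrams) builds the rotor Hamiltonian in the truncated angular-momentum basis L = -5,…,5 and determines √(Z) = ⟨e^{iθ}⟩ by a fixed-point iteration. As emphasized above, such a site-factorized treatment only captures the condensed (metallic) regime and not the finite short-range rotor correlations of the insulator.

import numpy as np

# Assumed illustrative parameters (in units of the bare hopping); not values from the phase diagrams
U, h, K = 3.0, 0.0, 1.0        # onsite repulsion, Lagrange multiplier, total rotor coupling of the site
L_min, L_max = -5, 5           # truncated angular-momentum basis, as used in the text

dim = L_max - L_min + 1
L_vals = np.arange(L_min, L_max + 1).astype(float)
L_op = np.diag(L_vals)                         # angular momentum operator L
raise_op = np.diag(np.ones(dim - 1), k=-1)     # e^{i theta}: raises L by one unit, |L> -> |L+1>

def rotor_ground_state(phi):
    """Ground state of the single-site mean-field rotor Hamiltonian for a given <e^{i theta}> = phi."""
    H = 0.5 * U * L_op @ L_op + h * L_op - K * phi * (raise_op + raise_op.T)
    _, v = np.linalg.eigh(H)
    return v[:, 0]

phi = 0.5                                      # initial guess for sqrt(Z)
for _ in range(200):
    gs = rotor_ground_state(phi)
    phi_new = gs @ raise_op @ gs               # <e^{i theta}> in the current ground state
    if abs(phi_new - phi) < 1e-10:
        break
    phi = phi_new

print(f"sqrt(Z) = <e^(i theta)> = {phi:.4f},  Z = {phi**2:.4f}")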
The mean field phase diagrams in the plane of U and μ (in units of t) for short range repulsion V^'=0 is shown in Fig. <ref>(a), and for longer-ranged repulsion V^'=1/√(3)V in Fig. <ref>(b).
Various charge crystal states are illustrated in Figs. <ref>(c)-(h), and metallic states with charge dispersion on distinct sublattices in Figs. <ref>(i)-(l).
Depending on V', we find distinct charge ordering patterns:
For V^'=0, there are emergent honeycomb Wigner crystals at 4/3 and 5/3 filling [Figs. <ref>(e) and <ref>(g)] as well as half-filling [Fig. <ref>(c)].
These Mott insulating states are separated by metallic states conducting on an emergent honeycomb sublattice [Fig. <ref>(j)] or on the parent triangular lattice [Fig. <ref>(i)].
On the other hand, for V^'=1/√(3)V, charge orders of kagome type at 5/4 and 7/4 filling [Figs. <ref>(d) and <ref>(h)] and stripe type at 3/2 filling [Fig. <ref>(f)] are more favored than order with √(3)×√(3)-periodicity.
These states are accompanied by metallic states dispersing on respective sublattices [Fig. <ref>(k) and <ref>(l)].
States of commensurate fractional fillings in the phase diagrams are all insulators with √(Z_i)=0, and the compressibility ∂ n/∂μ=0 because the charge per site n_i is quantized in a range of chemical potential μ.
In these parameter regimes, we use perturbation theory in K/U in order to obtain finite (short-ranged) spin correlations determined by ⟨e^{i(θ_i-θ_j)}⟩, as introduced in Sec. <ref>, with details discussed in the following subsections.
When some of the ⟨e^{iθ_i}⟩≠ 0, there is a finite quasiparticle weight and the system is in a compressible metallic state [corresponding to incommensurate particle numbers in the phase diagrams of Figs. <ref>(a) and <ref>(b)].
In addition to the metallic state corresponding to particles dispersing on the triangular lattice (with uniform ⟨e^{iθ_i}⟩≠ 0), we also find states where particles disperse on a honeycomb sublattice formed by two of the three sites in the √(3)×√(3) unit cell, and kagome or stripe sublattices formed by three or two sites in the 2×2 unit cell, while the remaining site is doubly occupied, leading to ⟨e^{iθ_i}⟩ = 0 for the corresponding i.
The metallic states conducting on different sublattices are sketched in Figs. <ref>(i)-(l).
While the uniformly dispersing metallic state on the parent triangular lattice is present in both phase diagrams, for V^'=0 and V^'=1/√(3)V, the honeycomb metal is only favored for V^'=0, and the kagome and stripe metals are more competitive in the latter case.
At a fixed chemical potential, upon increasing the interaction strength U (simultaneously also increasing V, V^' proportionally), the metallic state enters an adjacent insulating charge-ordered state through a first-order phase transition, in contrast to a continuous Mott transition with a spectral weight going to 0, as also shown in Fig. <ref> in the next subsection.
The critical U for a continuous Mott transition is marked by the vanishing of the quasiparticle weights Z_i and can be obtained by applying a perturbative approximation on Z_i, as shown in Ref. <cit.>.
From Eq. (<ref>), and making use of the Hellmann-Feynman theorem, we have
√(Z_i) ≡ ⟨e^{iθ_i}⟩ = 4U√(Z_i)K_i/[U^2 - 4(UL_i + ∑_j V_ij L_j + h_i)^2],
where K_i = ∑_{j∈ n.n.(i)} K_ij is the sum over the nearest-neighbor sites in the corresponding metallic sublattice (since we keep only nearest-neighbor hoppings). At the boundary of the insulating state, h_i = ϵ_0 = -μ to satisfy the constraint Eq. (<ref>). Eliminating √(Z_i) on both sides, we get an equation for the critical interaction strengths U_C and V_ijC,
U_C^2 - 4(U_C L_i + ∑_j V_ijC L_j - μ)^2 = 4K_i U_C.
The value of K_i can be calculated from the self-consistency equation Eq. (<ref>), which is independent of √(Z_i) if the non-zero √(Z_i) is uniform on the corresponding sublattice with h = -μ near the transition.
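Once the ratios V_ij/U are fixed (V/U = 1/4 in the main text), the equation above becomes a quadratic in U_C that can be solved directly. The following sketch does this for placeholder values of L_i, the neighboring angular momenta, μ and K_i; these numbers are assumptions for illustration only, not values extracted from our self-consistent solutions.

import numpy as np

# Assumed placeholder inputs (not values from the phase diagrams)
L_i = 0.0            # angular momentum on the metallic-sublattice site
L_nn_sum = 1.0       # sum of L_j over the nearest neighbors entering V_ij L_j
mu = 1.5             # chemical potential
K_i = 0.8            # sum of the rotor couplings K_ij over nearest neighbors
v_over_u = 0.25      # fixed ratio V/U used in the main text

# U_C^2 - 4*(U_C*L_i + (v_over_u*U_C)*L_nn_sum - mu)^2 = 4*K_i*U_C
# expands into the quadratic a*U_C^2 + b*U_C + c = 0 with:
alpha = L_i + v_over_u * L_nn_sum
a = 1.0 - 4.0 * alpha**2
b = 8.0 * alpha * mu - 4.0 * K_i
c = -4.0 * mu**2

roots = np.roots([a, b, c])
U_C_candidates = [r.real for r in roots if abs(r.imag) < 1e-12 and r.real > 0]
print("candidate critical interaction strengths U_C:", U_C_candidates)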
§.§ Spin liquids and dimerized states in insulating phases
When all √(Z_i) = 0, all electronic quasiparticle weights vanish and thus the system enters an insulating regime.
Distinct insulating states (at identical filling/charge order) can be characterized by their corresponding spin states.
Focusing on non-magnetic states such as spin liquids and dimer states, as stated in Sec. <ref> and Sec. <ref>, the mean-field parameters K_ij will generically be non-zero and can be used to classify various ansatz states corresponding to their respective invariant PSG.
In the following, we analyze self-consistent solutions to the mean-field equations corresponding to symmetry-allowed spin liquid states, as well as dimerized solutions.
While we find that for all insulating states the dimerized state always has the lowest energy in our mean-field calculation (with perturbative corrections), various spin liquid states are competitive in energy with respect to the mean-field Hamiltonian.
§.§.§ Triangular charge crystal (half-filling) |
http://arxiv.org/abs/2307.04978v1 | 20230711023526 | Diffusion idea exploration for art generation | [
"Nikhil Verma"
] | cs.CV | [
"cs.CV"
] |
|
http://arxiv.org/abs/2307.07349v1 | 20230714140028 | Direct Frequency-Mode-Stable Laser Amplification at Terahertz Burst Rates | [
"Vinzenz Stummer",
"Tobias Flöry",
"Matthias Schneller",
"Markus Zeiler",
"Audrius Pugžlys",
"Andrius Baltuška"
] | physics.optics | [
"physics.optics"
] |
Direct Frequency-Mode-Stable Laser Amplification at Terahertz Burst Rates
Vinzenz Stummer1,*,
Tobias Flöry1,
Matthias Schneller1,
Edgar Kaksis1,
Markus Zeiler1,
Audrius Pugžlys1,2,
Andrius Baltuška1,2
[1] Photonics Institute, TU Wien, Gusshausstrasse 27/387, 1040 Vienna, Austria
[2] Center for Physical Sciences & Technology, Savanoriu Ave. 231 LT-02300 Vilnius, Lithuania
* [email protected]
§ ABSTRACT
Generation of high-fidelity amplified pulse bursts with a regular interpulse interval yields, in the spectral domain, an equidistant pattern of narrowband spectral modes, similar to frequency combs produced by cw mode-locked lasers, but with greatly increased pulse energy. Despite their great potential for nonlinear spectroscopy, material processing, etc., such long frequency-stable bursts are difficult to generate and amplify because of prominent temporal intensity modulation even after strong dispersive pulse stretching. This study presents a burst generation method based on a master-oscillator regenerative-amplifier system that allows for chirped-pulse amplification (CPA) with high scalability in pulse number. A gradual smoothing of temporal intensity profiles at an increasing number of pulses is discovered, demonstrating an unexpected recovery of the CPA performance at terahertz (THz) intraburst repetition rates. In consequence, a self-referenced stable burst spectral peak structure with megahertz (MHz) peak width is generated, without risk of amplifier damage caused by interference of chirped pulses. This result eliminates limitations in burst amplification and paves the way for advancements in ultrashort-pulse burst technology, particularly for its use in nonlinear optical applications.
§ INTRODUCTION
Finite trains of ultrashort pulses, also commonly known as ultrashort-pulse bursts, are becoming increasingly relevant for controlling the orientation and alignment of molecules <cit.>, the generation of plasma waves <cit.>, electron bunch generation <cit.> and amplification <cit.>, and material ablation <cit.>. This development originates from fundamental differences in the response of a system when it is excited with a burst-mode format rather than with a single pulse. A similar difference is well known when comparing the use of a single ultrashort pulse with that of an optical comb in spectroscopic applications <cit.>. There, the spectral response may span the entire bandwidth of the comb with very narrow linewidths, down to sub-millihertz <cit.>, achieved with an infinite train of pulses. An ultrashort-pulse burst may consist of N pulses with a given burst rate 1/Δ t, where Δ t is the interpulse spacing. The linewidth is given by the inverse duration of the burst, 1/(NΔ t), with a linewidth spacing equal to the burst rate 1/Δ t. The most intense spectral peaks can thus be obtained by generating a burst with the highest possible number of ultrashort pulses (high-N) at terahertz (THz) burst rates, with the interpulse spacing comparable to the ultrashort pulse duration. In this way, a spectrum with only a few narrow lines, which contain the whole burst energy, is obtained. While bursts with GHz rates, or lower, are an established technology <cit.>, the concentration of energy on a small number of spectral lines at THz burst rates opens up opportunities for nonlinear optical applications such as Stimulated Raman Scattering (SRS) <cit.> or Resonantly-Enhanced Multi-Photon Ionization (REMPI) <cit.> that demand both high peak power and spectral selectivity. While the first prerequisite cannot be met by frequency combs, due to their low peak power compared to a single pulse, the second cannot be fulfilled when the pulse number available in burst-mode generation is limited. However, existing burst-mode systems face exactly this problem of pulse-number scalability when generating ultrashort-pulse bursts at THz intraburst repetition rates. In this regime, common burst generation techniques rely on single-pulse division using pulse shapers <cit.>, beam splitters <cit.>, birefringent crystals <cit.>, or nested Mach-Zehnder interferometers <cit.>. Independent of the approach, the practical limit in pulse number is ten to at most a few hundred pulses, due to the 1/N energy throughput of pulse-division techniques <cit.>. Since the burst comprises ultrashort pulses, amplification relies on Chirped-Pulse Amplification (CPA) <cit.>, which raises several problems in this regime. The most crucial one is the appearance of interference effects due to pulses that are chirped to durations much longer than their mutual spacing. As a result, the temporal intensity profile of a stretched burst resembles the burst spectrum, with a narrow peak structure that is dictated by the burst rate (see Fig. <ref>b). Another problem is caused by instabilities of the pulse spacing or, equivalently, the pulse-to-pulse phase slip ϕ_s, leading to drifts of the spectral peaks. Given all these complications, amplification of ultrashort-pulse THz-rate bursts has so far relied on individual pulse-phase modulation in order to suppress these interference effects in the time domain, and on phase-slip stabilization techniques to allow for a stable peak structure <cit.>.
In this work, we develop a numerical model, analyze its results, and experimentally validate the time-frequency properties of ultrashort-pulse THz-rate bursts at high (N ≫ 10) pulse numbers. We demonstrate a system that builds on a master-oscillator regenerative-amplifier setup with only minor modifications to its single-pulse operation to allow for burst-mode operation. By using direct time-domain methods for burst generation based on in-loop accumulation of oscillator pulses, we are not limited in seed burst energy. Our proposed method is therefore easily scalable in pulse number, up to ten thousand pulses, in contrast to existing methods based on pulse division. Further, it allows the generation of a stable burst spectral peak structure by direct stabilization, without any need for an external reference. An operating regime corresponding to an intrinsic smoothing of the temporal intensity profile of chirped THz-rate bursts is identified at high pulse numbers (see Fig. <ref>c). This burst amplification regime allows for sustainable energy extraction from an amplifier even at high (N≫10) pulse numbers, without risk of intensity-induced optical damage. Beyond that, for burst durations larger than the chirped pulse duration, the extractable energy of an amplifier can be shown to be increased. For such high pulse numbers, the proposed technique combines CPA and Divided Pulse Amplification (DPA) <cit.> in an unprecedented way.
§ ANALYTICAL DESCRIPTION
We first describe all phenomena in an analytical way to outline the underlying physics. For the interested reader, we provide derivations of the results communicated in this section in the Supplementary Material.
§.§ Single pulse
When chirping a pulse by a common prism or grating stretcher, a frequency-dependent phase is imposed on a pulse. In this work, we assume linearly chirped Gaussian pulses that can be described in the frequency domain as <cit.>
Ẽ_P(ω+ω_0) = Ẽ_0 exp[-(1+iC)(√(2ln(2)) ω/ω_FWHM)^2]
with complex amplitude Ẽ_0, chirp parameter C, central frequency ω_0 and Full-Width-at-Half-Maximum (FWHM) bandwidth ω_FWHM.
The imposition of a chirp leads to a broadening of the pulse in time
τ_FWHM = 4ln(2)√(1+C^2)/ω_FWHM,
with τ_FWHM being the FWHM pulse duration.
In this description, the field is given in the time-domain by
E_P(t) = E_0 exp[-iω_0 t - (1+iC)(√(2ln(2)) t/τ_FWHM)^2],
with a complex amplitude E_0.
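For concreteness, the short NumPy sketch below constructs the intensity envelope of this chirped field on a discrete time grid and checks the broadening predicted by Eq. (<ref>); the pulse parameters are illustrative choices rather than the exact experimental values, and the carrier oscillation exp(-iω_0 t) is omitted since it does not affect the envelope.

import numpy as np

# Illustrative pulse parameters (assumed, not the experimental values)
tau0_fwhm = 250e-15                          # transform-limited FWHM duration [s]
omega_fwhm = 4 * np.log(2) / tau0_fwhm       # corresponding FWHM bandwidth [rad/s]
C = 1000.0                                   # linear chirp parameter

# Chirped FWHM duration according to the broadening formula above
tau_fwhm = 4 * np.log(2) * np.sqrt(1 + C**2) / omega_fwhm

t = np.linspace(-4 * tau_fwhm, 4 * tau_fwhm, 2**14)
E = np.exp(-(1 + 1j * C) * (np.sqrt(2 * np.log(2)) * t / tau_fwhm)**2)   # envelope of E_P(t)

intensity = np.abs(E)**2
above_half = t[intensity >= 0.5 * intensity.max()]
print(f"expected FWHM: {tau_fwhm*1e12:.1f} ps, measured on the grid: {(above_half[-1]-above_half[0])*1e12:.1f} ps")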
§.§ Ultrashort-Pulse Bursts
In the following, we define an ultrashort-pulse burst as a finite train of ultrashort pulses that are closely spaced in comparison to their duration. For describing ultrashort-pulse bursts analytically, we assume that all pulses are, up to a constant pulse-to-pulse phase slip ϕ_s, equal. The analytical formulation of a burst consisting of N such pulses with a Δ t interpulse spacing is then given in the time domain by a summation over the pulse fields E_n(t) as following
E_B(t) = ∑_n=0^N-1 E_n(t)
= ∑_n=0^N-1 E_P(t-nΔ t) exp(inϕ_s),
with E_P(t) as in Eq. <ref>.
The corresponding description in the frequency domain is given by
Ẽ_B( ω+ω_0) =
Ẽ_P(ω+ω_0) ·∑_n=0^N-1exp(-in(Δ t (ω+ω_0) - ϕ_s)).
The Wigner distribution gives useful insights, with a full time-frequency picture of a complex-valued signal. For the burst field, it is <cit.>
𝒲_B(t,ω) ≡∫_-∞^∞ E_B(t+s/2)E_B^*(t-s/2)exp(-iω s)ds.
For the N-pulse field (Eq. <ref>), the Wigner distribution consists of N signal terms 𝒲_B,n^(S)(t,ω) and N(N-1)/2 interpulse interference terms 𝒲_B,nm^(I)(t,ω) <cit.>
𝒲_B(t,ω) = ∑_n𝒲_B,n^(S)(t,ω) + ∑_n∑_m_m>n𝒲_B,nm^(I)(t,ω)
Of further interest are the Wigner marginal integrals <cit.>, i.e. the integration over the time axis which gives the spectrum S(ω) and the integration over the frequency axis, which gives the intensity in time I(t):
S(ω) = 1/2√((μ_0/ϵ))∫_-∞^∞𝒲_B(t,ω)dt
I(t) = 1/2√((μ_0/ϵ))∫_-∞^∞𝒲_B(t,ω)dω
In Eq. <ref>, μ_0 is the vacuum permeability and ϵ is the electric permittivity. We note at this point that the Wigner distribution, as given in Eq. <ref>, is always a real-valued distribution with no imaginary part.
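A compact numerical recipe for evaluating the Wigner distribution and its marginals on a discrete grid is sketched below. This is our own toy implementation with deliberately small parameters chosen for speed; physical prefactors such as the impedance factor are dropped, so the marginals are reproduced only up to the 2π convention of the discrete transform.

import numpy as np

# Toy burst parameters (chosen for speed, not the values behind the paper's figures)
N, dt_pulse, phi_s = 5, 1.0, 0.0        # number of pulses, spacing [ps], phase slip
tau_fwhm, C = 0.25, 0.0                 # compressed 250 fs pulses, no chirp

t = np.linspace(-1.0, N * dt_pulse, 1200)         # time axis [ps]
dt = t[1] - t[0]

def pulse(x):
    return np.exp(-(1 + 1j * C) * (np.sqrt(2 * np.log(2)) * x / tau_fwhm)**2)

E = sum(pulse(t - n * dt_pulse) * np.exp(1j * n * phi_s) for n in range(N))
E_interp = lambda x: np.interp(x, t, E.real) + 1j * np.interp(x, t, E.imag)

# W(t, omega) = int ds E(t+s/2) E*(t-s/2) exp(-i omega s), evaluated by an FFT over s
s = (np.arange(len(t)) - len(t) // 2) * dt
omega = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(len(s), d=dt))
W = np.empty((len(t), len(s)))
for k, tk in enumerate(t):
    corr = E_interp(tk + s / 2) * np.conj(E_interp(tk - s / 2))
    W[k] = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(corr))).real * dt

spectrum = W.sum(axis=0) * dt                      # time marginal: comb of peaks spaced by 2*pi/dt_pulse
intensity = W.sum(axis=1) * (omega[1] - omega[0])  # frequency marginal: equals 2*pi*|E(t)|^2 in this convention
k0 = np.argmax(np.abs(E))                          # grid point at the strongest burst pulse
print("frequency marginal / (2*pi*|E|^2) at a pulse peak:", intensity[k0] / (2 * np.pi * np.abs(E[k0])**2))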
§ NUMERICAL RESULTS
We calculate numerically the Wigner distribution 𝒲_B(t,ω) for an ultrashort-pulse burst, according to the definition given in Eq. <ref>, with an intraburst pulse spacing of 1 ps. Depending on the FWHM pulse duration τ_FWHM (by variation of the chirp parameter C) and on the compressed burst duration (N-1)Δ t (by variation of the pulse number N), we are able to identify three regimes: compressed pulses (250 fs pulse duration), few strongly chirped pulses (200 ps, 10 pulses) and many strongly chirped pulses (200 ps, 80 pulses). Further, we show only calculations with zero phase slip ϕ_s=0 in the main text.
§.§ Compressed Pulses
The chirp parameter C is negligible and the pulse duration τ_FWHM is smaller than the pulse spacing Δ t
C≈0,
τ_FWHM<Δ t.
This is the typically known case prior, or after, Chirped-Pulse Amplification (CPA) where all pulses are compressed and well separated from each other in time (Fig. <ref>, left side). For the first (n=0) and last (n=N-1) pulse, the Wigner distribution consists only of non-oscillatory positive-valued signal terms 𝒲_B,n=0/n=N-1^(S)(t,ω). For each pair (n,m) of pulses, that are located at times t_n^(S) and t_m^(S), respectively, exists an oscillating interference contribution 𝒲_B,nm^(I)(t,ω) at time <cit.>
t_nm^(I) = t_n^(S)+t_m^(S)/2,
which for |t_n^(S)-t_m^(S)| = qΔ t, q being a positive even number, overlaps with the signal term of another pulse in the Wigner space.
The Wigner interference pattern shows a discrete symmetry along the frequency axis with period 2π/Δ t. For each period along the frequency axis, interference in the time-frequency Wigner space leads either to a strong spectral peak or to very weak spectral secondary maxima in between, explaining the burst-typical peak structure in the spectrum. For the time-domain marginal, there are no secondary maxima besides the burst pulses. We note that the Wigner interference contributions 𝒲_B,nm^(I)(t,ω) do have a physical significance and are not a mere mathematical artefact. Their total energy can be shown to be zero, because the individual pulse Wigner distributions are time-frequency disjoint (Moyal's formula) <cit.>. However, they are responsible for the spectral interference structure, which can be measured with any spectrometer with sufficient resolution. Another symmetry property of the Wigner distribution is given by the vertical line at t=0, across which it has an even symmetry 𝒲_B(t,ω)=𝒲_B(-t,ω). This symmetry is given by the fact that we calculated with pulses of equal energy. However, it is also preserved for a nonzero phase slip ϕ_s≠0.
§.§ The low-N regime: Few Strongly Chirped Pulses
The chirp parameter C is very high. The chirped pulse duration τ_FWHM is much larger than the pulse spacing Δ t and also even larger than the whole compressed burst duration (N-1)Δ t
C ≫ 1,
τ_FWHM≫ (N-1)Δ t > Δ t
As it is generally known for a single pulse <cit.>, the Wigner distribution 𝒲_B,chirped(t,ω) of a chirped burst in this regime (Fig. <ref>, middle) can be seen to be a tilted version of the Wigner distribution of the compressed pulses 𝒲_B,compr (Fig. <ref>, left):
𝒲_B,chirped (t,ω) =
𝒲_B,compr(t-4ln(2)C/ω_FWHM^2·ω,ω)
=
𝒲_B,compr(t,ω+4ln(2)C/τ_FWHM^2· t)
Giving this tilt in combination with the even symmetry of the compressed-pulse distribution, we note the presence of a diagonal symmetry line in this case, which is in agreement with the mapping of the spectrum into the time domain. Therefore, the burst intensity I(t) is, up to a chirp-dependent factor a(C) in the argument, well represented by the burst spectrum S(ω)
I(t) ∝ S(a(C)· t).
This is confirmed by our numerical calculations, as can be seen by comparing the Wigner marginal distributions of Fig. <ref>, middle.
§.§ The high-N regime: Many Strongly Chirped Pulses
The chirp parameter C is very high. The chirped pulse duration τ_FWHM and, due to the large number of pulses N, the compressed burst duration (N-1)Δ t are both much larger than the pulse spacing Δ t
C ≫ 1,
τ_FWHM⪆ (N-1) Δ t ≫Δ t
In this regime, the diagonal symmetry of the chirped low-pulse Wigner distribution is broken due to many chirped-pulse replicas along the horizontal/time axis. When presenting the data while covering a large coordinate range (Fig. <ref>, right), the Wigner distribution can be seen to consist of horizontally spreaded contributions. When taking a closer look at the individual contributions, we see that each contribution consists primarily of diagonal, closely spaced lines (Fig. <ref>, right). Wigner signal and interference contributions are hardly distinguishable at this point. Performing the horizontal sum over time to acquire the spectrum, the contributions can be either attributed to signal and constructive interference terms (only positive lines, lower contribution in Fig. <ref>, right), retaining the strong spectral peaks, or, to signal and destructive interference terms (positive and negative lines, upper contribution in Fig. <ref>, right), suppressing the spectral density in between the peaks. The intensity distribution in time is smoothed out by interference in the time-frequency Wigner space and thus the CPA technique is a useful way to amplify ultrashort-pulse bursts safely without amplifier damage.
§.§.§ Energy scalability of the high-N regime
Interpulse interferences between strongly chirped pulses lead in the low-N regime to an overshoot of the chirped burst temporal intensity profile. The high-N regime allows for a self-smoothing effect. In this section, we further investigate this phenomenon numerically, with the focus on how much energy can be extracted from an amplifier per amplification cycle, under equal conditions, compared to single-pulse operation when the only limitation is a peak-intensity-induced optical damage threshold I_THR. We compare the reachable burst energy ϵ_B with that of a single pulse ϵ_P at a given intensity threshold I_THR:
ϵ_B/ϵ_P = ∫ I_B(t) dt/∫ I_P(t) dt,
which gives the burst-extractable amplifier energy normalized to single-pulse operation. I_B(t), I_P(t) are the chirped intensity profiles of a burst and a single pulse, respectively, and max{I_B(t)}=max{I_P(t)}=I_THR.
The results of our numerical investigation can be seen in Fig. <ref>, where we show, depending on the number of pulses N, the normalized extractable energy of bursts with pulses chirped to 200 ps and an interpulse spacing of 1 ps (corresponding to a 1 THz burst rate). For fewer than about 10 pulses we see a 1/N decrease. This corresponds to an N-fold increase of the peak intensity of the chirped temporal intensity profile. We note that this is the same behaviour as that of the spectral peak intensity, as can be calculated from Eq. <ref>. This indicates the discussed frequency-to-time mapping of the spectral peak structure and the absence of the temporal self-smoothing effect. At N ⪆ 10, we observe a continuous increase in the normalized extractable energy, which is given by the onset of the self-smoothing of the temporal intensity profile of the chirped burst.
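The normalized extractable energy of Eq. (<ref>) can be estimated with a few lines of NumPy, as sketched below. The burst parameters mirror the ones quoted above (1 ps spacing, pulses chirped from 250 fs to 200 ps), but the grid and the printed numbers are an illustrative reimplementation and should not be read as the figure data.

import numpy as np

dt_pulse = 1.0                        # interpulse spacing [ps] (1 THz burst rate)
tau0, tau_chirped = 0.25, 200.0       # compressed and chirped FWHM durations [ps]
C = np.sqrt((tau_chirped / tau0)**2 - 1)   # chirp parameter giving the stretched duration

def chirped_field(x):
    return np.exp(-(1 + 1j * C) * (np.sqrt(2 * np.log(2)) * x / tau_chirped)**2)

t = np.linspace(-600.0, 700.0, 100_000)    # [ps], wide enough for the burst plus chirped tails

def energy_at_threshold(n_pulses):
    """Energy that fits under a fixed peak-intensity threshold, up to a common scale factor."""
    I = np.abs(sum(chirped_field(t - n * dt_pulse) for n in range(n_pulses)))**2
    return np.trapz(I, t) / I.max()

single = energy_at_threshold(1)
for N in (5, 10, 20, 40, 80):
    print(f"N = {N:3d}:  extractable energy / single pulse = {energy_at_threshold(N) / single:.2f}")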
§ EXPERIMENTAL SETUP
The motivation is to directly measure the increasing effect of temporal intensity smoothing in THz-rate bursts of chirped pulses, arising when raising the number of pulses to N > 10 pulses. For this we generate and characterize μJ bursts with 1.8 ps spaced pulses (corresponding to a 0.56 THz burst rate) at various pulse numbers. The laser system and an overview of the experimental setup is given in Sec. <ref>. We report on a self-referenced method to stabilize the pulse-to-pulse phase-slip ϕ_s in Sec. <ref>. Spectrogram measurements of the chirped burst by cross-correlation with the synchronous, compressed reference pulse are shown in Sec. <ref>. Finally, we show compressibility of the burst waveform by autocorrelation and techniques for optimization of the burst generation process in Sec. <ref>.
§.§ Ultrashort-Pulse Burst Laser System
A schematic of the experimental system can be seen in Fig. <ref>. A MHz-repetition-rate mode-locked oscillator (OSC, 1030 nm Yb:KGW, 76 MHz repetition rate, 80 fs pulse duration) generates nanojoule pulses. The oscillator pulses are stretched to around 300 ps by a double-pass grating stretcher (STR). An acousto-optical modulator (AOM) works as a pulse picker by diffracting the burst seed pulses. Amplification takes place at a repetition rate of 1 kHz in a CW-pumped twin regenerative amplifier (Twin RA, Yb:CaF_2) with two cavities: in one we accumulate the AOM-diffracted pulses into an ultrashort-pulse burst and amplify it up to μJ burst energies, in the other we amplify a single non-diffracted pulse to a few-μJ level as reference for cross-correlation measurements. The former requires that the round-trip time of the burst cavity is comparable to the oscillator round-trip time, such that their absolute difference gives the intraburst pulse spacing (Vernier effect). By applying an intermediate voltage to the RA Pockels Cell (PC) during burst pulse accumulation, we are able to set round-trip losses and round-trip gain equal, in order to acquire a scalable number of burst pulses with the same energy. In contrast to recent efforts on Vernier burst generation and amplification <cit.>, CPA of ultrashort-pulse bursts does not require any phase-modulation techniques (phase scrambling), due to the smoothing of the temporal intensity peaks in the high-N regime (Fig. <ref> right, Fig. <ref>). Both cavities are seeded by the same oscillator; thus, the burst and the reference pulse are synchronized to each other. The Compressor, containing a single large 130x20 mm^2 transmission diffraction grating, compresses both the burst and the reference pulse to 250 fs, with the beams spatially separated inside the compressor. Spectra of amplified bursts are measured with a high-resolution Near-Infrared (NIR) spectrometer (NIR SPEC, Avantes Avaspec-ULS4096CL-EVO).
§.§ Phase-Slip Stabilization
When forming a burst of ultrashort pulses by using the Vernier effect between an oscillator with round-trip time τ_OSC and an RA with round-trip time τ_RA, a constant pulse-to-pulse phase slip ϕ_s is imposed on the burst pulses, according to
ϕ_s = ω_0 (τ_RA - τ_OSC) = ω_0 Δτ,
where Δτ is the round-trip time detuning between RA and oscillator, whose absolute value is equal to the interpulse spacing Δ t. In a first approximation, we assume the phase slip to be constant over the full pulse bandwidth.
To generate ultrashort-pulse bursts with stable spectra, drifts in the phase slip induced by drifts in the round-trip time detuning need to be considered. In this work, we apply a slow feedback loop for stabilization of the phase slip without the use of any additional reference to the system. For the pulse-to-pulse phase slip stabilization, we measure the intracavity spectrum of the burst channel (Ocean Optics HR4000). The complex Fast Fourier Transform of the intracavity spectrum shows a modulation peak at a position corresponding to the burst rate (red line in the upper left inset of Fig. <ref>). Drifts of the phase at this point are equal to drifts of the phase slip. To compensate for any phase variations, we apply a PI control algorithm and change the cavity length accordingly by moving one of the end mirrors with a piezoelectric transducer.
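The readout behind this feedback loop can be condensed into a short sketch: take the complex FFT of the measured intracavity spectrum, read the phase of the modulation peak at the delay corresponding to the interpulse spacing, and feed the wrapped phase error into a PI correction of the cavity length. The model spectrum, the PI gains and the spectrometer grid below are our own assumptions and only illustrate the principle; the sign of the recovered phase depends on the FFT convention.

import numpy as np

n_pix = 4096
omega = np.linspace(-30.0, 30.0, n_pix)      # spectral detuning axis [rad/ps] (assumed grid)
dt_pulse, N = 1.8, 40                        # interpulse spacing [ps] and pulse number

def burst_spectrum(phi_s):
    """Toy model of the intracavity burst spectrum for a given pulse-to-pulse phase slip."""
    envelope = np.exp(-(omega / 12.0)**2)    # assumed smooth spectral envelope
    comb = np.abs(np.sum([np.exp(-1j * n * (dt_pulse * omega - phi_s)) for n in range(N)], axis=0))**2
    return envelope * comb

def read_phase_slip(spectrum):
    """Phase of the spectral-modulation peak located at a delay of ~dt_pulse."""
    delays = np.fft.fftfreq(n_pix, d=(omega[1] - omega[0]) / (2 * np.pi))
    F = np.fft.fft(spectrum)
    return np.angle(F[np.argmin(np.abs(delays - dt_pulse))])

# One iteration of a (software) PI correction acting on the cavity length via the piezo
reference = read_phase_slip(burst_spectrum(phi_s=0.3))        # phase at the lock point
measured = read_phase_slip(burst_spectrum(phi_s=0.3 + 0.05))  # later measurement with a drift
error = np.angle(np.exp(1j * (measured - reference)))         # wrapped phase error [rad]
kp, ki, integral = 0.5, 0.1, 0.0                              # assumed PI gains and integrator state
integral += error
correction = kp * error + ki * integral                       # control signal sent to the piezo driver
print(f"phase-slip error: {error:+.4f} rad, piezo correction signal: {correction:+.4f}")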
§.§ Spectrogram Measurements of the Chirped Burst
We cross-correlate (XC) the chirped amplified burst with the compressed reference pulse quasi-collinearly (2° crossing angle) in a type I Beta-Barium Borate (BBO) crystal and measure the sum-frequency generated (SFG) spectrogram (VIS SPEC, Ocean Optics HR4000CG-UV-NIR), i.e. the spectrum of the SFG signal for each burst-reference time delay point (see Fig. <ref>, upper right inset). The SFG spectrogram is given as <cit.>
𝒮_E^(SFG)(t,ω) = | ∫ E_B(t')h(t-t')exp(iω t') dt' |^2
with h(t) being a window function, which is given by the compressed reference pulse with 250 fs duration τ_FWHM,ref. The spread of the window function h(t) is much smaller (τ_FWHM,ref = 250 fs) than the temporal spacing in all pairs of interfering pulses (≥Δ t = 1.8 ps), therefore, interference terms that arise in the burst Wigner distribution are strongly attenuated in the spectrogram. Summation over the wavelength axis gives the time-dependent intensity of the chirped burst waveform. Due to the large duration of the 300 ps chirped pulses, we use a long-range, high-precision translation stage in a double-pass configuration by a combination of thin-film polarizer, quarter waveplate and back-reflecting mirror, to allow for an 800 ps temporal range with 1 ps step size.
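Numerically, this spectrogram is a windowed Fourier transform of the chirped burst field, gated by the compressed reference pulse. The sketch below implements it for a downscaled toy burst (the chirp, pulse number and durations are reduced placeholders, not the measured 300 ps waveform) with the same 1 ps delay stepping as in the measurement.

import numpy as np

tau_ref = 0.25                                   # reference (window) duration [ps]
dt_pulse, N, C, tau_chirped = 1.8, 5, 40.0, 10.0 # assumed toy burst: spacing, pulses, chirp, duration

t = np.linspace(-15.0, 25.0, 8000)               # [ps]
E_burst = sum(np.exp(-(1 + 1j * C) * (np.sqrt(2 * np.log(2)) * (t - n * dt_pulse) / tau_chirped)**2)
              for n in range(N))
window = lambda x: np.exp(-(np.sqrt(2 * np.log(2)) * x / tau_ref)**2)   # compressed reference pulse

delays = np.arange(-12.0, 22.0, 1.0)             # 1 ps delay steps, as in the measurement
omega = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(len(t), d=t[1] - t[0]))  # frequency axis for plotting
spectrogram = np.empty((len(delays), len(t)))
for i, tau in enumerate(delays):
    gated = E_burst * window(t - tau)            # gating by the reference at delay tau (SFG)
    spectrogram[i] = np.abs(np.fft.fftshift(np.fft.fft(gated)))**2

time_intensity = spectrogram.sum(axis=1)         # summation over frequency -> chirped-burst intensity
print("delay of maximum summed intensity [ps]:", delays[np.argmax(time_intensity)])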
§.§ Compressed Burst SHG Auto-Correlations
We perform Second-Harmonic Generation (SHG) autocorrelation measurements (AC) of the compressed ultrashort-pulse burst in a nonlinear type I BBO crystal. This way, we show compressibility of the burst pulses, without interpulse crosstalk that distorts the phases of individual amplified pulses. We also use this measurement to set the proper settings for the burst generation process: We find the optimal PC intermediate voltage primarily by two measures
* When looking at the burst spectrum, we maximize the ratio of spectral peak height and the spectral background in between the maxima, while keeping the energy constant.
* The envelope of the AC should, at its best, be triangular.
This way, it is ensured that for a given gain, a burst of N amplified pulses with equal energies is generated.
§ EXPERIMENTAL RESULTS
We first validate the cross-correlation method by comparing the result for a single chirped pulse with its spectrum (Sec. <ref>). Then, we perform measurements for ultrashort-pulse bursts with 20 pulses up to 40 pulses (Sec. <ref>). Finally, we show performance data of the phase slip stabilization (Sec. <ref>) and the autocorrelation of the compressed burst (Sec. <ref>).
§.§ Cross-Correlation Validation with a Chirped Single Pulse
To validate the cross correlation measurement, we compare the measured temporal intensities of a 300 ps chirped single pulse with its spectrum, since it is expected that they are equal to each other for such large chirp parameters (C > 10^7). This confirms a good spatial overlap of both channels within the BBO crystal over the whole multiple-100 ps travel range. The result can be seen in Fig. <ref>. The spectrum of the original near-infrared (NIR) chirped pulse, is well reproduced by the temporal intensity distribution. Both, the temporal intensity profile and the directly measured spectrum show a periodic modulation structure introduced by the etalon effect in the intracavity air-spaced waveplate. In the spectrogram, the primarily linear chirp is clearly visible, also the chirped pulse duration of 300 ps is confirmed by the time-dependent intensity.
§.§ Cross-Correlation of Chirped Bursts with a Compressed Reference Pulse
We set the gain to a given value in order to get sufficient signal on the SFG spectrometer and in the autocorrelation. For this, we generate bursts with an intraburst pulse spacing of 1.8 ps, corresponding to a 0.56 THz burst rate. We amplified the burst and the reference to about 10 μJ each and optimized the PC intermediate voltage as described in Sec. <ref>. The phase slip stabilization was then turned on. This procedure was repeated every time the burst seed pulse number was set by the AOM electronics and the burst accumulation time window in the burst RA channel was adjusted accordingly. We performed measurements from 20 pulses up to 40 pulses in 5-pulse steps.
The cross-correlation spectrograms, together with the temporal intensities acquired by summation of the spectrogram data over the wavelength axis, are shown in Fig. <ref> for 20, 30 and 40 pulses, including a comparison of the acquired temporal intensities with the NIR burst spectra. For 20 pulses (Fig. <ref>) we see a good agreement of the intensities with the spectra, as is the case for a single pulse (Sec. <ref>). This is an indication that for 0.56 THz rate bursts with 20 pulses, which are chirped to 300 ps, we are still in the low-N regime, as discussed in Sec. <ref>. The spectrogram shows, in contrast to the numerically calculated Wigner distributions (Fig. <ref>, middle), only a signal at delay times where peaks in the temporal intensity are visible. In between, no interference structure was recorded. This is in accordance with the formulation of the spectrogram as a smoothed version of the Wigner distribution, where the interference terms are suppressed by a short temporal window. When further increasing the pulse number (Figs. <ref>, <ref>), we see a gradual intrinsic smoothing of the signal in time over the whole bandwidth in the spectrogram, which also leads to a smoothing of the time-dependent chirped-burst intensity. We see in Fig. <ref> that 20 pulses indeed seems to be the threshold between the low-N and the high-N regime in our particular case, until at 40 pulses the temporal intensity profile is completely smoothed out. We underline that the high-N regime thus provides the optimal conditions for CPA.
§.§ Phase-Slip Stabilization
In the following, we present the results for the self-referenced phase-slip stabilization, as it is described in Sec. <ref>. We recorded the stabilization data in parallel to the acquisition of the cross-correlation data (see Sec. <ref>).
In Fig. <ref>, we show the measured deviation of the phase slip ϕ_s over time. The stochastic distributions of the time-dependent deviations do not depend strongly on the pulse number, which is why we summed up all data points over the time axis. We see that the total phase deviation distribution is Gaussian with an FWHM width of 0.028π, which corresponds at 1030 nm to an FWHM group delay deviation of only 48 as in the pulse spacing. The phase-slip stabilization thus performs well even at high pulse numbers. For 40 pulses, we show the intracavity spectrum over time and its derived phase deviations in Fig. <ref>. We use a guided PZT with a 40 μm travel range (Piezosystem Jena PU 40), which is much larger than the 14.4 nm translation that would correspond to the measured FWHM width of the phase deviation distribution. A main factor for the stochastic width of the phase deviation is the limited control loop bandwidth. Our slow control loop was running in software with a mean sample rate of 49 Hz, which could easily be improved by applying a fast hardware-based control loop in the future. Because of the good stability of the phase slip, the peak structure in the burst spectrum can be seen to be very stable.
§.§ Autocorrelation of the Compressed Burst
We show compressibility of the burst pulses by presenting SHG autocorrelation results for compressed bursts; in the case of N pulses the autocorrelation is expected to show 2N-1 signal peaks. The result for 40 pulses can be seen in Fig. <ref>, including the SHG FROG trace from which we derived the autocorrelation. The SHG FROG trace further confirms the stability of our phase slip control through a stable peak structure over time without any noticeable wavelength-detuning drift. The autocorrelation can be seen to be typical for a complex waveform, with a broad background component and a coherent artifact <cit.>, visible as the overshoot at zero time delay. The peak maxima can be approximately fitted with a triangle-shaped line, indicating equalization of the burst pulse energies. Deviations from the ideal triangular course arise not only because of the pulses themselves, but also because of the 200 fs time delay step of the delay stage, which is comparable to the compressed pulse duration of 250 fs, leading to discretization errors. Nonetheless, this delay step value is found to be a good trade-off between step size and a large, almost 160 ps, scan range for our proof-of-concept experiment. We also note that bursts with any higher pulse number can easily be generated with the described method; however, due to the limited scan range and signal-to-noise ratio (SNR) in the AC, pulse numbers higher than 40 were not reasonable for the given demonstration. The theoretical limit of the burst duration is given by the RA burst channel cavity round-trip time, which is approximately 13 ns in our case (corresponding to the 76 MHz oscillator repetition rate). This would correspond to a burst of 13,000 pulses at a 1 THz burst rate.
§ OUTLOOK ON ENERGY SCALABILITY FOR EXTRAORDINARILY HIGH PULSE NUMBERS
The primary focus of this work is to investigate, both numerically and experimentally, the onset of the high-N regime given by the intrinsic self-smoothing of the chirped temporal intensity profile. In order to give a further outlook on the capabilities offered by direct time-domain generation of bursts and the self-smoothing phenomenon at THz burst rates, we further investigate the normalized extractable energy at pulse numbers N→ 1000 at a 1 THz burst rate.
We see a partial periodic revival of the temporal peak structure every 90 pulses, i.e. at N=100, N=190, N=280, ..., which is indicated by a decrease in extractable burst energy. The energy decrease due to the partial peak revivals, however, becomes smaller as the number of pulses increases. In consequence, the normalized extractable energy keeps increasing for sufficiently many pulses, until it becomes linearly dependent on the pulse number N.
For sufficiently large N, the largest interpulse spacing (N-1)Δ t (which is the spacing between the first and the last burst pulse) becomes larger than the duration of the individual chirped pulses τ_FWHM (200 ps in our simulation). In this case, the burst-extractable energy is higher than the extractable energy with a single pulse under the same conditions. This can be well understood when considering the temporal intensity profile of a chirped burst with N → 1000, or more, as shown in Fig. <ref> with N=2000 pulses. In this case, the maximum interpulse spacing is 10 times larger than the chirped pulse duration. The intensity profile of a strongly chirped THz-rate burst almost completely fills out the intensity-time area, resembling a waveform with a rectangular shape. It also exceeds the temporal range of the single stretched pulse and thus allows for higher energies at a given chirp parameter C. When zooming into the waveform (right subplot of Fig. <ref>), a periodic temporal peak structure can be observed, with a period equal to the interpulse spacing Δ t. The reason for the increase in normalized extractable energy is similar to that underlying Divided Pulse Amplification (DPA) <cit.>. In DPA, the total amplified energy is distributed in time over multiple compressed pulses, while the peak intensity is kept below a given damage threshold. We note that the regime N→ 1000, or higher, thus combines advantages of DPA and CPA.
§ CONCLUSION
We have investigated numerically and experimentally the regime of ultrashort-pulse THz-rate bursts at high (N≫10) pulse numbers, with a focus on the transition from few to many pulses, where we observed a gradual intrinsic smoothing of the temporal intensity profile of a chirped burst. Direct self-stabilized burst generation allows for THz burst rates with stable, MHz-wide spectral peaks given by bursts with high pulse numbers. This allows for the generation of a controlled, stable peak structure that is useful for many nonlinear spectroscopic applications. The pulse-to-pulse phase slip Δϕ_s can be stabilized to a high degree without any external reference, with an FWHM phase deviation of down to about 0.02π, as was shown in this work. Phase stability can be characterized by the spectral peak linewidth, for which high-resolution spectrometers are available. In the high-N regime, the presence of temporal intensity spikes in the chirped burst waveform is avoided due to the self-smoothing effect. For sufficiently high pulse numbers N→1000, the largest interpulse burst spacings exceed the chirped pulse duration at THz burst rates. This leads to a combination of CPA and DPA methodologies and to burst-extractable energies from the amplifier that are higher than the extractable energy with a single pulse under the same conditions. We underline that conventional master-oscillator regenerative-amplifier systems may easily be able to apply this technique with only minor modifications, namely the installation of a 3-level PC voltage driver and the adaptation of the oscillator or amplifier round-trip time to acquire the desired intraburst pulse spacing Δ t. Future efforts will include suitable burst characterization techniques that allow for precise determination of the number of pulses and their energies over nanosecond burst durations when scaling the pulse number beyond what was experimentally demonstrated. Further, generation of bursts with thousands of pulses may require realization of a dispersion-free RA cavity. Overall, these findings strongly underline the significance of the high-N regime for future developments in high-energy ultrashort-pulse burst technology.
§ SUPPLEMENTARY MATERIAL
See the supplementary material for detailed derivations of the analytical formulations.
§ CONFLICT OF INTEREST
The authors have no conflicts to disclose.
§ FUNDING
Österreichische Forschungsförderungsgesellschaft (I 5590).
|
http://arxiv.org/abs/2307.05532v1 | 20230708070820 | Opening up ChatGPT: Tracking openness, transparency, and accountability in instruction-tuned text generators | [
"Andreas Liesenfeld",
"Alianda Lopez",
"Mark Dingemanse"
] | cs.CL | [
"cs.CL"
] |
[email protected]
0000-0001-6076-4406
Centre for Language Studies
Radboud University
The Netherlands
[email protected]
0009-0004-5873-5496
Centre for Language Studies
Radboud University
The Netherlands
[email protected]
0000-0002-3290-5723
Centre for Language Studies
Radboud University
The Netherlands
Large language models that exhibit instruction-following behaviour represent one of the biggest recent upheavals in conversational interfaces, a trend in large part fuelled by the release of OpenAI's ChatGPT, a proprietary large language model for text generation fine-tuned through reinforcement learning from human feedback (LLM+RLHF). We review the risks of relying on proprietary software and survey the first crop of open-source projects of comparable architecture and functionality. The main contribution of this paper is to show that openness is differentiated, and to offer scientific documentation of degrees of openness in this fast-moving field. We evaluate projects in terms of openness of code, training data, model weights, RLHF data, licensing, scientific documentation, and access methods. We find that while there is a fast-growing list of projects billing themselves as `open source', many inherit undocumented data of dubious legality, few share the all-important instruction-tuning (a key site where human annotation labour is involved), and careful scientific documentation is exceedingly rare. Degrees of openness are relevant to fairness and accountability at all points, from data collection and curation to model architecture, and from training and fine-tuning to release and deployment.
[500]Natural language generation
[300]Emerging technologies
[300]Surveys and overview
[100]Open-source software
[100]Evaluation
[Teaser figure] A table with 16 rows and 13 columns. The first row is headed “Project" and lists the project names and the organizations behind them; some projects also feature more information regarding the base large language and reinforcement learning models that are used. The remaining 12 rows are each named after one of the evaluation features in Table 1. Each cell of the table then evaluates the project for the respective feature, giving it a pass, a partial pass, or a fail. More detailed information, as well as the content of each cell, can be found in the data repository that accompanies the paper.
Received 20 April 2023; accepted 26 May 2023
Opening up ChatGPT: Tracking openness, transparency, and accountability in instruction-tuned text generators
Mark Dingemanse
August 12, 2023
============================================================================================================
§ INTRODUCTION
Open research is the lifeblood of cumulative progress in science and engineering. In today's technological landscape, it is hard to find any research finding or technology that does not rely to a significant extent on the fruits of open research, often publicly funded. For instance, AlexNet <cit.>, the deep neural net kickstarting the deep learning revolution a decade ago, derived its strength from a human-annotated dataset of 3.2 million images created by Princeton computer scientists <cit.>. And the striking progress in protein folding in recent years (with the AlphaFold deep learning system predicting the structure of nearly all known proteins <cit.>, where decades of prior work had reached a comparatively meagre 17%) has only been possible thanks to openly deposited structural data in the Protein Data Bank that goes back half a century <cit.>.
The talk of the town in conversational interfaces today is undoubtedly ChatGPT, an instruction-tuned text generator that impresses many because of its fluid prose. Yet striking new capabilities should not distract us from the risks of proprietary systems. Only three months after OpenAI rolled out ChatGPT, it abruptly discontinued API support for its widely used Codex model that had been available as a “free limited beta” since 2021 <cit.>, surprising users with only three days' notice and undercutting at one blow the reproducibility of at least 100 research papers.[See https://aclanthology.org/search/?q=openai-davinci-002aclanthology.org/search/?q=openai-davinci-002 (the same search term yields >150 arXiv preprints and >800 entries on Google Scholar) ] This is a stark reminder that proprietary systems are designed to offer smooth onboarding and convenience but come at the price of user lock-in and a lack of reliability.
Proprietary systems come with considerable further risks and harms <cit.>. They tend to be developed without transparent ethical oversight, and are typically rolled out with profit motives that incentivise generating hype over enabling careful scientific work. They allow companies to mask exploitative labour practices, privacy implications <cit.> and murky copyright situations <cit.>. Today there is a growing division between global academia and the handful of firms who wield the computational resources required for training large language models. This “Compute Divide” <cit.> contributes to the growing de-democratisation of AI. Against this, working scientists call for avoiding the lure of proprietary models <cit.>, for decolonizing the computational sciences <cit.>, and for regulatory efforts to counteract harmful impacts <cit.>.
§.§ Why openness matters
Open data is only one aspect of open research; open code, open models, open documentation, and open licenses are other crucial elements <cit.>. Openness promotes transparency, reproducibility, and quality control; all features that are prequisites for supporting robust scientific inference <cit.> and building trustworthy AI <cit.>. Openness also allows critical use in research and teaching. For instance, it enables the painstaking labour of documenting ethical problems in existing datasets <cit.>, important work that can sometimes result in the retraction of such datasets <cit.>. In teaching, it can help foster critical computational literacy <cit.>.
Despite strong evidence of the scientific and engineering benefits of open research practices, openness is not a given in machine learning and AI research <cit.>. Gundersen and Kjensmo, in one of the most detailed examinations of reproducibility in AI to date <cit.>, systematically surveyed 400 papers for a range of open science practices. They found that only about a third of papers share test datasets, only 8% share source code, and only a single paper shared training, validation and test sets along with results. We are not aware of more recent systematic surveys of this kind (nor do we attempt this here), but the increasing trend of corporate releases with glossy blog posts replacing peer-reviewed scientific documentation provides little reason for optimism.
Openness is perhaps especially important for today's breed of instruction-following text generators, of which ChatGPT is the best known example. The persuasiveness of these language models is due in large part to an additional reinforcement learning component in which text generator output is pruned according to a reward function that is based on human feedback <cit.>, using insights from early work on evaluative reinforcement <cit.>. Human users appear to be highly susceptible to the combination of interactivity and fluid text generation offered by this technology. The ubiquity of ChatGPT interfaces makes it easy for anyone today to try out some prompt engineering (while freely providing further training data to OpenAI) — but it does not allow one to gain a critical and holistic understanding of the constraints and capabilities of such systems, nor of their risks and harms. For true progress in this domain, we will need open alternatives.
In this paper, we survey alternatives to ChatGPT and assess them in terms of openness of data, models, documentation and access methods. The aim of our survey is threefold: to sketch some of the major dimensions along which it is useful to assess openness and transparency of large language models; to provide a view of the state of the art in open source instruction-tuned text generation; and to contribute towards a platform for tracking openness, transparency and accountability in this domain.
§.§ Previous work
Existing work reviewing and comparing large language models falls into two categories: informal lists and structured surveys. Informal lists are crowd-sourced pointers to available resources, from open RLHF datasets[https://github.com/yaodongC/awesome-instruction-datasetgithub.com/yaodongC/awesome-instruction-dataset ] to open examples of instruction-tuned text generators.[https://github.com/nichtdax/awesome-totally-open-chatgpt/blob/main/README.mdgithub.com/nichtdax/awesome-totally-open-chatgpt ] Systematic surveys of instruction-tuned language models are still rare and mostly focus on comparing model capabilities and performance, e.g., of “augmented language models” <cit.> and language models for writing code <cit.> (not our focus here). Complementary to our focus on degrees of openness in instruction-tuned models, a recent survey of generative AI systems more broadly focuses on gradience in release methods, from closed to staged to fully open <cit.>.
An important development in this domain is the introduction of data statements <cit.> and model cards <cit.>. These are structured documents that help creators document the process of curating, distributing and maintaining a dataset or model, and that help users to critically judge underlying assumptions, potential risks and harms, and potential for broader use. These resources have seen considerable uptake in the scientific community, though their adoption by for-profit entities lags behind.
The risks of relying on proprietary solutions have spurred the development of several more open alternatives. For instance, the Bloom collaboration <cit.> is a team science project of unprecedented magnitude. It has trained and open-sourced a large language model based on a collection of almost 500 HuggingFace datasets amounting to 1.6TB of text and code in 46 spoken languages and 13 programming languages <cit.>. A related initiative is The Pile <cit.>, an 800GB dataset of English text that serves as pre-training data for language models by EleutherAI <cit.>. Meta AI's LLaMA <cit.> provides researchers with access to a series of base models trained on data claimed to be `publicly available'. It should be noted that none of these initiatives have undergone rigorous peer-review or data auditing at this point, and that claims of openness do not cancel out problems, legal or otherwise.
In recent years, the private company HuggingFace has emerged as an important hub in the open source community, bringing together developers and users of projects in machine learning and natural language processing. It offers infrastructure for hosting code, data, model cards, and demos <cit.>. It also provides a widely used setup for automated evaluation, generating leaderboards and allowing quick comparison on a number of automated metrics, making it somewhat of a balancing act between offering incentives for documentation and for SOTA-chasing <cit.>. Our focus here is not performance evaluation of the kind offered by leaderboards; instead it is to survey degrees of openness in the fast-evolving landscape of text generators.
§ METHOD
We survey open-source instruction-tuned text generators and evaluate them with regard to openness, scientific documentation, and access methods. Since any survey in this fast-growing field deals with moving targets, we focus here mainly on dimensions of enduring relevance for transparency and accountability. An up-to-date list of all models surveyed can be found at https://osf.io/d6fsr.
§.§ Requirements
The target breed of models in focus here is characterized by the following two features: its architecture is at base a large language model with reinforcement learning from human feedback (LLM + RLHF) and it aims for openness and transparency (along degrees we quantify). Projects are not included if they are as proprietary and undocumented as ChatGPT (like Google's Bard), or if they merely provide a front-end that calls some version of ChatGPT through an OpenAI API (like Microsoft's Bing). We explicitly include small-scale projects and projects that are in early stage development if they are open, sufficiently documented, and released under an open source license. Querying academic search engines and open code repositories, we find at least 15 projects that have sprung up in the last six months alone.
§.§ Survey elements
We assess projects on 13 features divided over three areas (Table 1): availability, documentation, and access methods. For each feature, we document openness along a scale from maximum to partial to no openness and transparency. For licenses, only systems that are fully covered by a true open-source licence count as maximally open, less permissive or partial licensing counts as partially open, and non-open or unclear licensing situations count as closed. Figure 1 shows a snapshot of 15 projects assessed for all features, with degrees of openness colour-coded (✓, ∼, ×). Please refer to the data repository for more information about how each feature is evaluated, and for a more up-to-date listing.
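For concreteness, such an assessment can be represented as a small structured record per project. The Python sketch below is only illustrative: the feature names are our own shorthand and do not reproduce the exact feature list of Table 1.

    from dataclasses import dataclass
    from enum import Enum

    class Openness(Enum):
        OPEN = "open"          # maximally open
        PARTIAL = "partial"    # partially open
        CLOSED = "closed"      # closed or unclear

    @dataclass
    class ProjectAssessment:
        name: str
        availability: dict     # e.g. source code, base-model data, RLHF data, weights
        documentation: dict    # e.g. code docs, architecture, preprint, data sheet
        access: dict           # e.g. package, API, licence

    example = ProjectAssessment(
        name="hypothetical-project",
        availability={"source code": Openness.OPEN, "RLHF data": Openness.PARTIAL},
        documentation={"preprint": Openness.CLOSED},
        access={"licence": Openness.OPEN},
    )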
§ RESULTS
Projects roughly fall into two categories. First, small, relatively bare bones projects that only provide source code and build on existing large language models. These projects often cannot share information on architecture, training data, and documentation because they inherit closed-source data from the LLMs they build on. They usually also do not provide APIs or other user interfaces. However, some of such small projects do come with high-quality documentation and some build only on explicitly open LLMs. What such small projects lack in performance, they make up in utility for the open source community as they can provide useful entry points to learning about LLM+RLHF tools.
We also identify a handful of projects backed by larger organisations, which aim to offer similar features to proprietary tools such as ChatGPT but are open-sourced and well documented. Two such initiatives top our list of open-source alternatives to ChatGPT: bigscience-workshop's xmtf tool building on the BLOOMZ and mT0 models (sponsored by HuggingFace) and LAION-AI's OpenAssistant based on an open, crowd-sourced RLHF training dataset (oasst1). Open Assistant also features a text-based and graphical user interface as well as a web resources for crowd-sourcing training data. We also found that several projects are not as open as they initially seemed to be, with many of them merely wrappers of closed models.
We observe three recurring issues in the area of availability and documentation. Inheritance of undocumented data. Many tools build on existing large language models (which we here call base models) and inherit the undocumented datasets (often web-scraped and often of dubious legality) these base models are trained on.
Training data of RLHF component is not shared. Building RLHF training datasets requires labour-intensive work by human annotators. The lack of RLHF training data is a major performance bottleneck for smaller research teams and organisations, and hampers reproducible research into the use of instruction-tuned text generators for conversational user interfaces.
Papers are rare, peer-review even rarer. Most projects reviewed here follow the corporate `release by blog post' model. While there are some preprints, none of the systems we review is currently documented in a peer-reviewed paper. Habitually bypassing this important (albeit sometimes flawed) quality assurance mechanism allows systems to escape critical scrutiny and risks undermining scientific and ethical standards.
Some other patterns are worth noting. One is the rise of synthetic data especially for the instruction component. Prominent examples are Self-Instruct (derived from GPT3) <cit.>, and Baize, a corpus generated by having ChatGPT engage in interaction with itself, seeded by human-generated questions scraped from online knowledge bases <cit.>. This stretches the definition of LLM + RLHF architectures because the reinforcement learning is no longer directly from human feedback but has a synthetic component, in effect parasitizing on the human labour encoded in source models. The consequences of using synthetic reinforcement learning data at scale are unknown and in need of close scrutiny.
The derivative nature of synthetic datasets is probably one reason they are released specifically “for research purposes only” <cit.>, with commercial use strictly prohibited. This leads to an important wrinkle. Baize models and data are incorporated in several popular instruction-tuned text generators, including the Falcon family of models which bills itself as ready for “research and commercial utilization”[Technology Innovation Institute, https://falconllm.tii.ae/, June 7, 2023] in direct violation of Baize's prohibition against commercial use. This is merely one example of the complex dependencies embedded in these tools, and the legal quagmires obscured by simple claims of `openness'.
§ DISCUSSION
The goal of this short paper has been to provide a critical review of degrees of openness in the fast-moving field of instruction-tuned large language models. We have found projects at varying stages of implementation, documentation, and useability. Most of them offer access to source code and some aspects of pre-training data, sometimes in legally ambiguous ways. Data from the reinforcement learning step, crucial to the simulation of instruction-following in these interfaces, is more elusive, provided by at best half of the initiatives. Strikingly, only a handful of projects are underpinned by a scientific write-up and none of them have as yet undergone scientific peer review.
There are many shades of openness <cit.>, yet all of the projects surveyed here are significantly more open than ChatGPT. ChatGPT was announced in a company blog post and rolled out to the public with an interface designed to capture as much free human labour as possible, but without any technical documentation. (The RLHF component, arguably the biggest differentiator for the instruction-following behavior, was sketched in <cit.>, though without data.) Its follow-up GPT-4 continues OpenAI's tradition of openness in name only: it comes with an evaluation framework that primarily benefits the company yet contains the absolute minimum of technical documentation. In particular, an unreviewed preprint distributed by OpenAI and billed as a “technical report" <cit.> mostly provides cherry-picked examples and spends more space on crediting company workers for blog post content, communications, revenue, and legal advice than on actual technical details. (Companies like OpenAI sometimes give “AI safety" as a pretext for closedness; this is hard to take seriously when their own public-facing proprietary models provide clear and present harms <cit.>.)
How can we foster more openness and accountability? First, incentives need changing. In high-stakes AI research, data work is often seen as low-level grunt work <cit.> and incentive structures generally encourage a `move fast and break things' mentality over careful scientific work <cit.>. But work that documents data provenance and traces harmful impacts <cit.> deserves major scholarly and societal credit. Here, AI and NLP might benefit from work in software engineering and infrastructure, where strong frameworks already exist to foster accountability for datasets <cit.>. Interactive model cards <cit.> offer a promising step towards a human-centered approach to documentation.
Second, corporate capture and user lock-in are well-known strategies by which companies exercise control over scientific results and research infrastructure. In the age of large language models, this is amplified by the possibility to extract human labour and repackage it in amiable conversational formats. Openness not only aligns with principles of sound and ethical scholarship <cit.>; it also safeguards transparent and reproducible research <cit.>. Recent work on legal datasets offers an example in responsible data curation with insights that may be more broadly applicable <cit.>.
Third, technology is never a fait accompli unless we make it so. It is one of the achievements of publicly funded science that it can afford to not jump on the bandwagon and instead make room for reflection <cit.>. Today's language technology landscape offers ample opportunities for what philosopher Ivan Illich has called counterfoil research: “Counterfoil research must clarify and dramatize the relationship of people to their tools. It ought to hold constantly before the public the resources that are available and the consequences of their use in various ways. It should impress on people the existence of any trend that threatens one of the major balances of which life depends” <cit.>. Among the consequences of unleashing proprietary LLM + RLHF models are untold harms to workers exploited in labeling data; energy demands of computational resources <cit.>; and tidal waves of plausible-looking text generated without regard for truth value (technically, bullshit <cit.>).
One possible outcome of the kind of deeper understanding fostered by openness is a call for responsibly limited technology <cit.>. The spectre of regulation (a key way to keep corporate powers in check) is a powerful incentive for companies to keep things proprietary and so shield them from scrutiny. The systems we have surveyed here provide elements of a solution. Open to various degrees, they provide ways to build reproducible workflows, chart resource costs, and lessen reliance on corporate whims.
§ CONCLUSION
Openness is not the full solution to the scientific and ethical challenges of conversational text generators. Open data will not mitigate the harmful consequences of thoughtless deployment of large language models, nor the questionable copyright implications of scraping all publicly available data from the internet. However, openness does make original research possible, including efforts to build reproducible workflows and understand the fundamentals of LLM + RLHF architectures. Openness also enables checks and balances, fostering a culture of accountability for data and its curation, and for models and their deployment. We hope that our work provides a small step in this direction.
This research is funded by Dutch Research Council (NWO) grant 016.vidi.185.205 to MD. For the purpose of Open Access the authors have applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission.
|
http://arxiv.org/abs/2307.07216v1 | 20230714082015 | Reduction-Based Creative Telescoping for Definite Summation of D-finite Functions | [
"Hadrien Brochet",
"Bruno Salvy"
] | cs.SC | [
"cs.SC"
] |
Creative telescoping is an algorithmic method initiated by
Zeilberger to compute definite sums
by synthesizing summands that telescope, called certificates.
We describe a creative telescoping algorithm that computes telescopers
for definite sums of D-finite functions as well as
the associated certificates in a compact form. The algorithm relies on
a discrete analogue of the generalized Hermite reduction, or
equivalently, a generalization of the Abramov-Petkovšek
reduction.
We provide a Maple implementation with good timings on a variety of
examples.
§ INTRODUCTION
The algorithmic computation of definite sums originates in
Zeilberger's
algorithm in the 1990's <cit.>.
Initially designed to deal with
hypergeometric sums, his method of creative telescoping has
been extended to differential settings <cit.> and next generalized to
the large
class of D-finite functions by
Chyzak <cit.>.
In order to compute a definite sum of
F
(t,x_1,…,x_m) with respect to t, where
each x_i is a variable with respect to which one can
apply a linear operator ∂_i (generally, differentiation or
shift or
q-shift operator), the
creative telescoping algorithm
constructs identities of the form
∑_α c_α(x_1,…,x_m) ∂^α(F) = G(t+1,x_1,…,x_m)-G(t,x_1,…,x_m).
Here, the sum is over a finite number of
multi-indices α and we use the multi-exponent
notation ∂^α=∂_1^α_1…∂_m^α_m. In the
original version for hypergeometric summation, the
monomials ∂^α(F) are simply
successive shifts F(t,n),F(t,n+1),F(t,n+2),…
of a hypergeometric sequence
F(t,n). Identities obtained that way can
often be summed over t. The right-hand
side telescopes by design. Since the coefficients c_α do not depend on the variable t, the left-hand
side results in an operator
applied to
the definite sum of F. From there, other algorithms can be
applied to compute information on the sum.
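As a toy illustration (in Python/SymPy, outside the scope of the paper itself), one can verify the classical telescoping identity behind ∑_k binomial(n,k)=2^n: the telescoper is S_n-2 and a certificate is the hypergeometric multiple G(n,k)=-binomial(n,k-1) of the summand.

    from sympy import symbols, binomial, combsimp

    n, k = symbols('n k', positive=True, integer=True)

    F = binomial(n, k)                 # summand F(n, k)
    G = -binomial(n, k - 1)            # certificate G(n, k)

    # telescoper (S_n - 2) applied to F equals the difference G(n, k+1) - G(n, k)
    lhs = F.subs(n, n + 1) - 2*F
    rhs = G.subs(k, k + 1) - G
    print(combsimp(lhs - rhs))         # should simplify to 0 (Pascal's rule)

    # numerical spot check on a small grid
    print(all((lhs - rhs).subs({n: a, k: b}) == 0 for a in range(2, 7) for b in range(1, 5)))

Summing the identity over k then makes the right-hand side telescope, which yields the recurrence S(n+1)=2S(n) for the definite sum.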
The operator in the left-hand
side of <ref> is called a
telescoper of F and the function G in the right-hand side
is the corresponding
certificate. Chyzak's algorithm also deals
with the differential analogue of
<ref> where the right-hand side is a derivative; it is
used
to compute information on definite integrals. Chyzak's algorithm, like
Zeilberger's, looks for telescopers with an increasing number of
monomials ∂^α with indeterminate
coefficients c_α and determines c_α such that a certificate G exists in the vector
space generated by the ∂^β(F)
for β∈ℕ^m+1 over the field of rational
functions. The condition of being
D-finite is that this vector space has finite dimension, which
allows for the existence of algorithms based on linear algebra.
If no certificate exists, then the support is
increased
and the process is
iterated. This stops either when sufficiently many operators have been
found or when a prescribed bound on the orders is reached. (In the
original hypergeometric case, no bound on the order is
fixed a priori and termination is
guaranteed for the family of proper hypergeometric terms <cit.>.)
Efficiency issues with this approach have led to the development of
heuristics and a very useful Mathematica implementation
by Koutschan <cit.>.
The most recent approach to deal with the
efficiency issues with creative telescoping was initiated by
Bostan, Chen, Chyzak and many co-authors who developed
a class of reduction-based algorithms
<cit.>. These algorithms
avoid the computation
of potentially large certificates. In the differential case, where
the right-hand side of <ref> is replaced by a
derivative
∂_t(G), the
principle is to use a variant of Hermite reduction to compute an
additive decomposition of each
monomial in the form
∂^α(F)=R_α(t,x_1,…,x_m)F+∂_t(G_α),
where R_α is a rational function with a
certain minimality property. A telescoper is found by looking for
a linear dependency between these rational functions for a family of
monomials ∂^α.
The computation of the rational function R_
α by Hermite reduction works by getting
rid of multiple poles and isolating a polynomial part. This was
first done for the integration of bivariate rational functions
<cit.>, of hyperexponential functions
<cit.> and of mixed
hypergeometric-hyperexponential functions
<cit.>. In these three cases, the vector space
generated by the functions ∂^β( F)
for β∈ℕ^2 has dimension only 1 over the
rational functions. For summation, the analogous problem for bivariate
hypergeometric sequences was solved by replacing the Hermite reduction
by a modified Abramov-Petkovšek reduction, thereby providing a
faster variant of Zeilberger's algorithm <cit.>.
For dimension larger than 1, the method was first extended to the
integration
of bivariate algebraic functions <cit.> and
to Fuchsian functions <cit.> by means of
suitable integral bases. An extension
to the integration of purely differential bivariate
D-finite functions in
arbitrary dimension was first
achieved by turning the
differential equations satisfied by the function to be integrated
into first-order differential systems; then, a variant of Hermite
reduction can be designed at the level of vectors of rational
functions <cit.>. This approach generalizes to
purely
differential D-finite functions in more variables <cit.>.
Another method relies on cyclic vectors and
allows the integration of arbitrary D-finite
functions <cit.>. Without loss of generality, we assume that F is
a cyclic vector for ∂_t, which means that all monomials
∂^
α(F) rewrite as M_α(F) with
M_α a
linear operator
in ∂_t only. (If F is not a cyclic vector, one finds a
cyclic vector G, F=M_F(G) for some linear operator M_F in
∂_t only and the
rest of the reasoning is unchanged.) Next, for any rational
function u and any
linear operator M in ∂_t,
repeated integration by parts
implies Lagrange's identity
uM(F)-M^*(u)F=∂_t(P_M(F,u)),
where M^* is the adjoint of M and P_M is linear in
F,∂_t(F),…
and u,∂_t(u),…. Thus, a reduction-based algorithm is
obtained by the additive decomposition of
<ref> with R_α a
solution of the generalized Hermite reduction
R_α≡ M_α^*(1) (mod Im(L^*)).
The tools used in the reduction
modulo the image of the linear differential operator L^* are
classical techniques used when
looking for rational solutions of linear differential equations.
This method of integration using generalized Hermite reduction
generalizes to other contexts.
This has been done in high generality in a preprint by
van der Hoeven <cit.>. Our approach here is
different: we focus on the case of summation only and give a
simple self-contained presentation of the corresponding algorithm;
our algorithm returns operators of minimal order[Remark 5.6
in <cit.> seems to allude to a way of
doing this, but the relevant space E may contain rational functions that are not in the image of L. For example take L=1/z + 1/(z-1)σ^-1, α = 0, and A ={α}, then one can check that 1/z ∉ (L) but 1/z ∈ E.]; we
make the
choice to avoid algebraic extensions when possible; we
present a Maple implementation that performs well in
practice. Note that while in terms of complexity, minimal operators
cannot be computed in polynomial time in general, in practice this
does not seem to be an obstacle.
The context of our algorithm is the following. The summand F
(x_1,…,x_t,n) is a function of t+1 variables. The first ones,
x_1,…,x_t are each associated to an Ore operator D_i∈{S_
x_i,∂_x_i} (i∈{1,…,t}). We are given a system
of linear equations satisfied by F, such that the corresponding
operators generate a D-finite ideal (definitions in <ref>).
We want to find
equations of the form
∑_i_1,…,i_tλ_i_1,…,i_tD_1^i_1… D_t^i_tF(x_1,…,x_t,k)=Δ_n(G)
with nonzero λ_i_1,…,i_t, Δ_n=S_n-1 is the
difference operator, and the sum on the left is over a finite
subset of ℕ^t.
§.§.§ Notation
Multiple indices will often be underlined, e.g.,
λ_i is λ_i_1,…,i_t and D^i=D_1^i_1… D_t^i_t.
§.§.§ General idea of the algorithm
Let 𝕂=ℚ(x_1,…,x_t) be the base field.
In the same fashion as <cit.> our algorithm aims at decomposing each term D^iF for increasing i in the form
D^iF= R_iF + Δ_n(G_i )
where R_i is a “reduced" rational function in 𝕂(n), and looking for 𝕂-linear combinations of such equations that annihilate the rational terms.
The main tools used for this decomposition are Lagrange's identity <cit.> and a canonical form to reduce rational functions modulo the image of a difference operator.
In the shift case, this identity takes the following form
§.§ Previous work
To be completed.
Chyzak published in 2000 <cit.> the first creative telescoping algorithm for sums and integrals of D-finite functions. Koutschan <cit.> developed a heuristic for sums and integrals that may not return a minimal-order telescoper.
A fourth generation of creative telescoping algorithms based on reduction appeared first for bivariate functions in 2010 and was later extended to multivariate functions. There have been other extensions to larger classes of sums/integrals, such as algebraic, hyperexponential, hypergeometric, mixed, Fuchsian and D-finite functions (references to be cited).
§.§ Contribution
Our algorithm, like van der Hoeven's <cit.>, is an adaptation to the summation case of the BCLS algorithm <cit.> for integrals of D-finite functions. We use a stronger reduction than van der Hoeven, which allows us to compute the minimal-order telescoper at the cost of an extra step that corresponds in our article to the strong reduction.
We provide an implementation of our algorithm that beats the current best existing implementation of Chyzak's algorithm <cit.> and that is slightly better than Koutschan's heuristic <cit.> in the cases where the latter computes the minimal-order telescoper.
§ EXAMPLE
The multiplication theorem for Bessel functions of the first kind
J_ν states that <cit.>
J_ν(λ z)=λ^ν∑_n=0^∞(-1)^n(λ^2-1)^n(z/2)^n/n!J_ν+n(z).
This can be proved automatically by showing
that the left-hand side and the right-hand side satisfy the same set
of mixed differential-difference equations with sufficiently many
identical initial conditions.
We write F for the summand in <ref>. It is a function
of the four variables ν,n,z,λ. Basic properties of the
Bessel function give the following four equations:
(λ^2-1)zS_ν(F) + 2(n+1)S_n(F) = 0,
(λ^2-1)∂_λ(F) -2nλ F = 0,
(1-λ^2)z∂_z(F) + 2(n+1)S_n(F)
+ (λ^2-1)(2n+ν)F =
0,
4(n+1)(n+2)S_n^2(F) + 4(λ^2-1)(n+1)(n+ν+1)S_n(F) + z^2
(λ^2-1)^2F = 0,
where S_ν denotes the shift with respect to ν: S_ν:G
(ν)↦ G(ν+1) and similarly for S_n, while ∂_z
and ∂_λ denote partial derivatives.
These equations show that any shift or derivative
S_ν^a∂_λ^b∂_z^cS_n^dF of F with nonnegative
integers a,b,c,d
rewrites
as a ℚ(ν,λ,z,n)-linear combination of F and S_n
(F). In particular, this implies that F
is D-finite with respect to these variables.
The aim of creative telescoping is to find a similar set of equations,
in the variables ν,λ,z only, for the sum in <ref>.
Let Δ_n be the difference operator Δ_n=S_n-1. Any
product ϕ(n)S_nF with ϕ∈ℚ(ν,λ,z,n) can
be
rewritten ϕ(n-1)F+Δ_n(ϕ(n-1)F), i.e., as the sum of a
rational function times F plus a difference, that would telescope
under summation. Consequently, any S_ν^a∂_λ^b∂_z^cS_n^dF
with nonnegative
integers a,b,c,d
rewrites in the form
∂^α(F)=R_αF+Δ_n(G_α),
with R_α a rational function in ℚ
(ν,λ,z,n). This reduction as a sum of a rational function
plus a difference is a general
phenomenon (see <ref>).
Reduction-based creative telescoping works by reducing this
rational function further by pulling out parts that can be
incorporated into the difference Δ_n(G_α). Denote by
L the recurrence operator such that <ref> is LF=0.
The adjoint of L (see <ref>) is
L^* = 4(n-1)nS_n^-2 + 4(λ^2-1)n(n+ν)S_n^-1 + z^2(λ^2-1)^2
where S_n^-1:g(n)↦ g(n-1). <Ref> shows that a
rational
function R is of the form Δ_n(M(F)) for a
recurrence operator M(S_n) if and only if R is in the image L^*(ℚ
(ν,λ,z,n)). This is the basis for the computation of
relations of the form (<ref>)
where R_α is now a reduced rational
function (in a sense made precise in <ref>).
The next step is to look for linear combinations of these rational
functions that yield telescopers.
The starting point is the monomial 1, which decomposes as
1· F = 1· F + Δ_n(0).
Using <ref>, the monomial ∂_λ
rewrites
∂_λ(F) = 2nλ/λ^2-1F + Δ_n(0)
and the rational function is reduced. Taking
the derivative of this equation and using
<ref> again gives a similar equation for
∂_λ^2(F):
∂_λ^2(F) = -2n(λ^2+1)/(λ^2-1)^2F + 2nλ/λ^2-1∂_λ(F) + Δ_n(0),
= -2n(λ^2+1)/(λ^2-1)^2F + (2nλ)^2/(λ^2-1)^2F + Δ_n(0).
This time, a reduction is possible. Indeed, <ref>
implies that L^*(1)F is a difference Δ(A_n) (where A_n can
be computed explicitly). Since
L^*(1) = 4λ^2n^2 + 4((λ^2-1)ν - 1)n + z^2(λ^2-1)^2,
we can eliminate the term in n^2 in the expression of
∂_λ^2(F) to get
∂_λ^2(F) = -2n(2ν+1)/(λ^2 - 1)F -z^2F +
Δ_n(A_n/(λ^2-1)^2).
A simple linear combination of
<ref> then eliminates the term
in n, showing
that F satisfies the equation
λ∂_λ^2F + (2ν + 1)∂_λ F + λ
z^2F = Δ_n(-λ A_n/(λ^2-1)^2).
The left-hand side is a telescoper. The right-hand side is a
certificate. It can be written more explicitly as
-λ A_n/(λ^2-1)^2= - 4(nλ^2 +
λ^2ν - ν - 1)nλ/(λ^2 - 1)^2F - 4λ(n + 1)n/(λ^2 - 1)^2 S_n(F).
In general, summation and telescoping of the certificate requires a
few
verifications. Here, we first observe that the certificate does not
have
integer poles and thus is well defined at all points over which it is
summed. Next, the certificate evaluates to zero at n=0. Finally,
it tends to zero when n tends to infinity, as J_ν+n(z)
decreases fast as n→∞
<cit.>.
In summary, we have obtained that the sum S in the right-hand side
of <ref> satisfies
λ∂_λ^2(S) + (2ν + 1)∂_λ(S) + λ z^2S = 0.
Proceeding similarly with <ref>, one
gets the equations
zλ S_ν(S) + ∂_λ(S) = 0,
z∂_z(S) - λ∂_λ(S) -ν S =0.
Injecting T=J_ν(λ z)/λ^ν in these equations and
using
basic equations for J_ν shows that it is a solution of this
system too. The proof of the multiplication theorem is concluded by
checking the equality of the initial conditions for T and
for the sum on the
right-hand side of
<ref>. As ν is associated to the shift, we need to
check initial conditions for any ν satisfying 0≤Re(ν) < 1.
Indeed, both terms of the identity equal J_ν(1) at z=1,λ
=1, and ν∈
[0,1) and both their derivatives
with
respect to λ equal -J_ν+1(1), which proves the identity.
§ BACKGROUND
In this section, we recall the basic framework for reducing
creative telescoping to the generalized Abramov-Petkovšek
decomposition. Most of this section is identical to the differential
case <cit.>, except for the existence
and computation of the
cyclic vector and the use of the recurrence
variant of Lagrange's identity <cit.>.
More gentle
introductions to Ore algebras,
creative
telescoping and their applications can be found in the references <cit.>.
§.§ Telescoping ideal
§.§.§ Ore algebras
Let 𝐤 be a field of characteristic 0, x_0,…,x_m be
variables used to form the fields of rational functions 𝕂=𝐤(x_1,…,x_m) and 𝕂̂=𝕂(x_0). The Ore algebra
𝔸=𝕂̂⟨∂_0,…,∂_m⟩
is a polynomial ring
over 𝕂̂, with
∂_i∂_j=∂_j∂_i,
and a commutation between the ∂_i's and the elements
of 𝕂̂ ruled by relations
∂_i R=σ_i(R)∂_i+δ_i(R), R∈𝕂̂,
with σ_i a ring morphism of 𝕂̂ and δ_i a
σ_i-derivation, which means that δ_i(ab)=σ_i(a)δ_i(b)+δ_i(a)b for all a,b in 𝕂̂
<cit.>. The typical cases are when
∂_i is the differentiation d/dx_i (then σ_i is
the identity and δ_i=d/dx_i) and the shift
operator x_i↦ x_i+1 (then σ_i(a)=a|_x_i←
x_i+1 and δ_i=0).
§.§.§ Annihilating and D-finite ideals
For a given function f in a left 𝔸-module, the
annihilating ideal of f is the left ideal
annf⊆𝔸 of elements of 𝔸 that annihilate f.
A left ideal ℐ of 𝔸 is D-finite when the
quotient 𝔸/ℐ is a finite dimensional 𝕂-vector space. A function is called D-finite when its
annihilating ideal is D-finite.
§.§.§ Telescoping ideal
As we focus here on summation, from now on, when we use n and S_n,
they stand for x_0 and the corresponding shift
operator ∂_0:x_0↦ x_0+1.
The telescoping ideal 𝒯_ℐ
of the left ideal ℐ⊂𝔸 with respect to n is
𝒯_ℐ =(ℐ + Δ_n(
𝔸)) ∩𝕂⟨∂_1,…,∂_m⟩, where Δ_n=S_n-1.
In other words, if ℐ=annF,
the telescoping ideal 𝒯_ℐ is the set of
operators T∈𝕂⟨∂_1,…,∂_m⟩ such that there exists G∈𝔸 such
that T + Δ_nG ∈ℐ, or equivalently, such that
<ref> holds (with t=n).
§.§ Cyclic vector and Lagrange identity
§.§.§ Cyclic vector
Let ℐ be a D-finite ideal of 𝔸 and let r be the
dimension of the 𝕂̂-vector space 𝔹:=
𝔸/ℐ. An
element γ∈𝔹 is called cyclic
with respect to ∂_0 if {γ,…,∂_0^r-1γ} is a basis of
𝔹. In the differential case (∂_0=d/dx_0), such a
vector always exists and can be computed efficiently when ℐ
is D-finite <cit.>. In the shift case
(∂_0:x_0↦ x_0+1), even for a D-finite ideal ℐ, it is not
the case that there always exists a cyclic vector: in general,
𝔹 decomposes as the sum of a vector space where ∂_0
is nilpotent and a part where it is cyclic <cit.>.
However, we have the following.
<cit.> With the
notation above, in the case when ∂_0 is the shift operator
x_0↦ x_0+1, let E=(e_1,…,e_r) be a basis of the vector
space 𝔹=𝔸/ℐ and A_0∈𝕂^r× r be defined by
∂_0E=A_0E. If A_0 is invertible, then there
exists a
cyclic vector with respect to ∂_0 of the
form v=a_1e_1+…+a_re_r with polynomial
coefficients a_i∈ℤ[x_0] of degree at most r-1, and
coefficients all in {0,…,r}.
Sufficient conditions for the matrix A_0 to be invertible are that
ℐ=annf with f in a [∂_0,∂_0^-1]-module <cit.> or
that ℐ be a reflexive
ideal <cit.>. In practice, this condition
on A_0 can be checked from the input and appears to be always
satisfied in the examples we have tried. From this proposition, the
computation of a cyclic vector
follows the same lines as that of the differential
case <cit.>. Most often, e_1=1 is a cyclic
vector, which simplifies the rest of the computation.
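The cyclicity test itself is plain linear algebra. The following Python/SymPy sketch is ours, not the paper's Maple code; the matrix A_0 and the row-vector convention S_n(c·E)=σ(c)·A_0·E are illustrative assumptions. It stacks the coordinate vectors of γ, S_nγ, …, S_n^r-1γ and checks that their determinant does not vanish.

    from sympy import Matrix, symbols, simplify

    x0 = symbols('x0')

    # hypothetical matrix of the shift acting on a basis E = (e_1, e_2): S(E) = A_0 E
    A0 = Matrix([[0, 1],
                 [x0 + 1, 0]])

    def shift_coords(c):
        # coordinates of S_n(w) for w = c*E, under the row-vector convention assumed here
        return c.subs(x0, x0 + 1) * A0

    def is_cyclic(c, r):
        rows = [c]
        for _ in range(r - 1):
            rows.append(shift_coords(rows[-1]))
        return simplify(Matrix.vstack(*rows).det()) != 0

    print(is_cyclic(Matrix([[1, 0]]), 2))   # e_1 is cyclic for this A_0: True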
§.§.§ Lagrange's identity
The shift version of Lagrange's identity is the following.
<cit.>.
Let u and v be two sequences in n and L=∑_i=0^ra_iS_n^i
be an operator of order r with a_i in 𝕂. The
adjoint operator L^* of L is defined by the formula
L^*=∑_i=0^r a_i(n-i)S_n^-i and it satisfies
u(n)L(v(n))-L^*(u(n))v(n)=Δ_n(P_L(u(n),v(n)))
where
P_L(u(n),v(n))=∑_i=0^r-1(∑_j=i+1^ra_j
(n+i-j)u(n+i-j))v(n+i).
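The identity and the formula for P_L can be checked mechanically. The SymPy snippet below (ours, not part of the paper) verifies them for a generic operator of order r=2 with symbolic coefficients and symbolic sequences u and v; after expansion the two sides cancel term by term.

    from sympy import symbols, Function, expand

    n = symbols('n')
    u, v = Function('u'), Function('v')
    r = 2
    a = [Function('a%d' % i) for i in range(r + 1)]

    L_v     = sum(a[i](n) * v(n + i) for i in range(r + 1))           # L(v)(n)
    Lstar_u = sum(a[i](n - i) * u(n - i) for i in range(r + 1))       # L^*(u)(n)
    P = sum(sum(a[j](n + i - j) * u(n + i - j) for j in range(i + 1, r + 1)) * v(n + i)
            for i in range(r))                                        # P_L(u, v)(n)

    lhs = u(n) * L_v - Lstar_u * v(n)
    rhs = P.subs(n, n + 1) - P
    print(expand(lhs - rhs))   # 0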
Let γ be a cyclic vector. Then any element of 𝔹
is
of the form Aγ with A∈𝕂(n)⟨ S_n⟩.
Applying Lagrange's identity with u(n)=1 shows
that this is a
rational multiple of γ up to a difference:
Aγ=A^*(1)γ+Δ_n(P_A(1,γ)).
As in the differential
case,
all computations in 𝔹 then reduce to 𝕂-linear
operations on
single rational
functions, rather than vectors of them, by the following analogue
of <cit.>.
With the notation above, let γ be a cyclic
vector of 𝔹=𝔸/ℐ and for
all i=0,…,m, let B_i∈𝕂(n)⟨ S_n⟩ be such that ∂_iγ=B_iγ. Then
for all R∈𝕂(n),
∂_iRγ=φ_i(R)γ + Δ_n(Q_i(R)),
with φ_i(R)=B_i^*(R(x_i+1)), Q_i(R)=P_B_i(R
(x_i+1),γ) if ∂_i:x_i↦ x_i+1;
φ_i(R)=B_i^*(R) + d/dx_i(R),
Q_i(R)=P_B_i(R
(x_i),γ) if ∂_i=d/dx_i.
Multiplying <ref> by γ on the
right and using the definition of B_i gives
∂_iRγ=σ_i(R)B_iγ+δ_i(R)γ.
The conclusion follows from Lagrange's identity (<ref>)
applied with L=B_i, u=σ_i(R) and v=γ.
§.§ Canonical Form
<Ref> shows how, given a cyclic vector γ, all elements
of 𝔹 can be reduced to the product of γ by a rational
function, up to a difference in Δ_n𝔹. The starting
point of the reduction-based creative telescoping is that one can
actually identify those multiples that belong to Δ_n𝔹.
With the same hypotheses as in <ref>, let L be the minimal-order
operator in 𝕂(n)⟨ S_n ⟩ annihilating γ,
i.e., L(γ)=0 and L has order r.
Then for all R∈𝕂(n),
Rγ∈Δ_n(𝔹) ⟺ R∈ L^*(𝕂(n)).
First, Lagrange's
identity (<ref>) implies that if R=L^*(R') with R'∈𝕂(n) then Rγ=L^*(R')γ=Δ_n(G) for some G.
Conversely, if Rγ∈Δ_n(𝔹), there exists
M∈𝕂(n)⟨ S_n ⟩ such that Rγ = Δ_n M
γ.
The operator Δ_n M - R annihilates γ. By
minimality of L,
there exists N∈𝕂(n)⟨ S_n ⟩ such that
Δ_n M - R=NL. Taking the adjoint and evaluating at 1 gives
R=M^*Δ^* - L^*N^* and finally R=L^*(-N^*(1)).
This proposition motivates the following.
<cit.>
A canonical form associated to L^* is a 𝕂-linear
map [·] : 𝕂(n) →𝕂(n) such that for
all R∈𝕂(n),
[L^*(R)]=0 and R-[R]∈ L^*(𝕂(n)).
A rational function R∈𝕂(n) is called reduced when [R]=R.
The computation of canonical forms is the object of <ref>.
§.§ Creative Telescoping Algorithm via Canonical Forms
With the notation above, the creative telescoping algorithm
from <cit.> applies verbatim. It is given in <Ref>. Its principle is to
iterate on every monomial of the form
∂_1^α_1…∂_m^α_m by increasing order
for some monomial order, e.g., the grevlex order, and to compute the
reduced rational functions R_α such that
∂^α= R_αγ + Δ_n(G_α ).
The rational function R_α is obtained by
R_0,…,0=[A^*_1(1)],
R_α=[φ_k(R_β)]
if ∂^α=∂_k
∂^β,
where A_1^* is the adjoint of the operator A_1 verifying 1=A_1(γ).
When a monomial α is dealt with, two situations are
possible.
The corresponding R_α can be a linear
combination of the previous R_β.
In that case, that linear combination makes the corresponding
linear combination of ∂^α and the
∂^β
a new element of the telescoping ideal 𝒯_ℐ and then it is not necessary to
visit the multiples of this monomial. Otherwise, ∂^α
is free from the previous ones and thus a new generator of 𝔹
has been found. The algorithm terminates when there are no more
monomials to visit.
The only difference with the differential case lies in the definition
of the canonical form [·] associated to the adjoint L^* of the
minimal-order operator L∈𝕂(n)⟨ S_n⟩
annihilating γ. By <ref> and the
definition of canonical form, <ref> is satisfied and
the following equivalence holds:
a_1R_1 + … + a_sR_s = 0 iff a_1D^1 + … + a_sD^s∈𝒯_ℐ.
The following result follows, with the same proof as in <cit.>.
Given as input the generators of a D-finite ideal ℐ and
a cyclic vector γ for S_n,
Algorithm <ref> terminates if and only if 𝒯_
ℐ is D-finite. Then, it outputs a Gröbner basis of 𝒯_ℐ for the grevlex order.
As in the differential situation <cit.>, one can modify
the algorithm to compute
all elements of 𝒯_ℐ up to a given bound
on their degree, or to return as soon as one telescoper is found, thus
allowing to recover a generating family of a
sub-ideal of 𝒯_F.
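Once the reduced rational functions R_α are available, extracting a telescoper is linear algebra over 𝕂: put a candidate combination over a common denominator and equate the coefficients of the powers of n to zero. The SymPy sketch below illustrates only this last step on made-up reduced fractions; it is not the paper's Maple implementation.

    from sympy import symbols, together, fraction, Poly, linsolve

    n, x = symbols('n x')

    R = [1/(n + x), n/(n + x), (2*n + 3)/(n + x)]      # hypothetical reduced fractions
    c = symbols('c0:%d' % len(R))

    numer, _ = fraction(together(sum(ci*Ri for ci, Ri in zip(c, R))))
    eqs = Poly(numer, n).all_coeffs()                  # every coefficient in n must vanish
    print(linsolve(eqs, *c))   # {(-3*c2, -2*c2, c2)}, i.e. R[2] = 3*R[0] + 2*R[1]

Any nonzero solution of this homogeneous linear system yields a telescoper supported on the corresponding monomials.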
§ GENERALIZED ABRAMOV-PETKOVŠEK DECOMPOSITION
The main contribution of
this article lies in an algorithm for the computation of canonical
forms as in
<ref> for the operator
L^*= ∑_i=0^rp_i(n)S_n^-i
with polynomial coefficients p_i∈𝕂[n]. The
modified Abramov-Petkovšek decomposition <cit.> is a
special case of this reduction when L has order 1 and once the shell <cit.> has been removed <cit.>.
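Throughout this section one repeatedly applies such an operator to a rational function. The small helper below (Python/SymPy, with placeholder coefficients; the convention S_n^{-i}f(n)=f(n-i) is the one used here) makes this operation explicit.

    from sympy import symbols, together

    n, x = symbols('n x')
    p = [x*(n + 1), -2*n, 1]          # hypothetical coefficients p_0, p_1, p_2 of L^*

    def apply_Lstar(p, f):
        # L^*(f)(n) = sum_i p_i(n) * f(n - i)
        return together(sum(pi * f.subs(n, n - i) for i, pi in enumerate(p)))

    print(apply_Lstar(p, 1/(n + 3)))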
The starting point is a decomposition of any rational
function R∈𝕂(n) in the form
R(n)= P_∞(n) + ∑_i,h c_i,h(n)/Q_i(n+h)^ℓ_i,h,
with ℓ_i,h∈ℕ^*, polynomials P_∞,Q_i and c_i,h in 𝕂[n] such that deg c_i,h < ℓ_i,h deg Q_i and
(Q_i(n+k),Q_j(n))=1 for all k∈ℤ when i≠ j. This
is discussed in <ref>.
The vector spaces
𝕂_Q_i(n)def=span_𝕂( n^ℓ/Q_i(n+h)^j| h∈ℤ,
j∈ℕ^*, ℓ< j deg(Q_i) )
are in direct sum for distinct Q_i and are left invariant by L^* modulo 𝕂[n].
This allows the reduction
algorithm to operate in each of the 𝕂_Q_i(n)
independently. This is described in <ref>, before the reduction of
the remaining polynomial part in <ref>.
For two integers a,b with a≤ b, we write
a;b for the set {a,a+1,…,b}.
§.§ Decomposition of rational functions
Recall that a polynomial Q is square-free when it does not
have
multiple nontrivial factors. It is shift-free when (Q
(n),Q(n+k))=1 for all k∈ℤ^*.
A
shiftless decomposition
of a polynomial Q is a factorization of the form
Q=∏_i=1^v∏_j=1^n_iQ_i(n+h_i,j)^e_
i,j,
with e_i,j∈ℕ^*, h_i,j∈ℤ,
and Q_i∈𝕂[n] such that each Q_i is square-free and
(Q_i(n+k),Q_j(n))=1 for all i,j and all k∈ℤ
unless i=j and k=0.
Such a factorization can be computed using only gcds, resultants
and
integer root finding <cit.>.
Note that shiftless decompositions are not unique in general. One can
be
refined when a Q_i is not irreducible, by splitting this
factor further. In particular, the linear factors of the Q_i can be
isolated and dealt with more easily.
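The only nontrivial ingredient is detecting the integer shifts between factors, which reduces to one resultant computation followed by integer root finding. The SymPy sketch below (ours, with made-up polynomials) illustrates this detection step.

    from sympy import symbols, Poly, resultant, roots

    n, k = symbols('n k')

    def integer_shifts(Q1, Q2):
        """Integers k such that gcd(Q1(n+k), Q2(n)) is nontrivial."""
        res = resultant(Q1.subs(n, n + k), Q2, n)       # a polynomial in k
        return sorted(r for r in roots(Poly(res, k)) if r.is_integer)

    print(integer_shifts(n**2 + 1, (n + 3)**2 + 1))     # [3]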
A polynomial Q is refined with respect to a polynomial P
when it is such that for each
h∈ℤ, there exists ℓ∈ℕ
such that (P,Q(n+h)^ℓ+1)=Q(n+h)^ℓ.
A shiftless decomposition is called refined
with respect to P when each Q_i is.
This refinement can be computed using gcds only and will be used with
P=p_0 and P=p_r, the extreme coefficients of L^*.
From a shiftless decomposition, the partial fraction decomposition of
<ref> is then obtained by standard algorithms <cit.>.
§.§ Weak reduction of the polar part
Let Q∈𝕂[n] be square-free, shift-free and refined with
respect to the
coefficients p_0 and p_r of L^*. Given
a rational
function R∈𝕂_Q
(n),
<ref> computes a rational
function [R]_Q∈𝕂_Q(n) with all its poles at zeros of Q
(n-j) such
that j∈0;r-1
and R - [R]_Q = P+L^*
(T) for some P∈𝕂[n] and T∈𝕂_Q(n).
The algorithm is
𝕂-linear.
Assume that R decomposes as
R=∑_j∈ Jλ_j(n)/Q(n-j)^s_j with
deg(λ_j(n)) < s_j deg(Q).
Let j_m=min(J) and _j(p_0) be the largest integer ℓ
such that Q(n-j)^ℓ| p_0. Then,
Q(n-j_m)^s_j_m
L^*(1/Q(n-j_m)^s_j_m+_j_m(p_0))=
p̃_0(n) Q(n-j_m),
where p̃_0(n) is the remainder in the Euclidean division of
p_0/Q(n-j_m)^_j_m(p_0) by Q(n-j_m)^s_j_m. The poles of
this rational function are at zeros of Q(n-j) with j∈Ĵ:=
J∖{j_m}∪(j_m+1,r-1).
Since Q is reduced with respect to p_0, the polynomial p̃_0(n) is relatively prime with Q(n-j_m). Thus, there exist
polynomials A and B such that
λ_j_m(n)=A(n)p̃_0(n) + B(n)Q(n-j_m)^s_j_m.
Then
A(n)p̃_0(n)/Q(n-j_m)^s_j_m=
λ_j_m(n)/Q(n-j_m)^s_j_m-B(n),
so that
R- L^*(A(n)/Q(n-j_m)^s_j_m+_j_m
(p_0))
is equivalent to R modulo L^*(𝕂_Q(n)) and with all its
poles at zeros of Q(n-j) with j∈Ĵ. This operation can be repeated a finite number of times
until all poles are at zeros of Q(n-j) with j≥0.
Similarly, let j_M=max(J) and _j_M+r(p_r) be the largest
integer ℓ such that Q(n-j_M)^ℓ divides p_r(n-r). Then
Q(n-j_M)^s_j_M
L^*(1/Q(n-j_M+r)^s_j_M+_j_M+r(p_r))=
p̃_r(n) Q(n-j_M),
where p̃_r(n) is the remainder in the Euclidean division of
p_r/Q(n-j_M)^_j_M+r(p_r) by Q(n-j_M)^s_j_M. The poles
of
this rational function are at zeros of Q(n-j) with j∈Ĵ':=
J∖{j_M}∪(j_M-1,r-1).
Again, since Q is reduced with respect to p_r, the
polynomial p̃_r is relatively prime with Q(n-j_M). Thus
there exist two polynomials A' and B' such that
λ_j_M(n)=A'(n)p̃_r(n) + B'(n)Q(n-j_M)^s_j_M
so that
R - L^*(A'(n+r)/Q(n-j_M+r)^s_j_M+_j_M+r
(p_r))
is equivalent to R modulo L^*(𝕂_Q(n)) and with all its
poles at zeros of Q(n-j) with j∈Ĵ'. This operation can be repeated a finite number of times
until all poles are at zeros of Q(n-j) with
j∈0,r-1.
Each step being 𝕂-linear, so is the algorithm.
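The core of each elementary step is the Bézout relation used above: since p̃_0 and the power of Q are coprime, the extended Euclidean algorithm rewrites the numerator λ. The SymPy sketch below (with made-up data) shows only this rewriting step, not the full reduction loop.

    from sympy import symbols, gcdex, rem, quo, expand

    n = symbols('n')

    ptilde0 = n**2 + 1              # plays the role of p~_0(n)
    Qs      = (n - 2)**3            # plays the role of Q(n - j_m)^{s_{j_m}}
    lam     = 3*n**2 - n + 5        # numerator lambda_{j_m}(n)

    u, w, g = gcdex(ptilde0, Qs, n)          # u*ptilde0 + w*Qs = 1 (coprimality)
    A = rem(u*lam, Qs, n)                    # lam = A*ptilde0 + B*Qs
    B = quo(lam - A*ptilde0, Qs, n)
    print(expand(lam - (A*ptilde0 + B*Qs)))  # 0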
Let
R= 8nx+ n(8x^2 +x) - 16x - 1/2(n - 1)^2 -4(x - 1)x^2/n
- (4x^3 + 8x^2 - 31x - 32)n + 4x^3 - 31x - 32/2(n+1)^2 +4x(x^2 - x - 8)/n + 2 + 2x^3/n + 3
and
L^*=x^2(n-2)S_n^-3 -n(4n^2 - x^2 - 4n)S_n^-2 + n(4n^2 - x^2 - 4n)S_n^-1 - x^2(n+2).
The poles of R are at {1,0,-1,-2,-3}. We take Q=n+1
and follow the steps of the algorithm.
The pole at -3 is easy: from
L^*(1/n+3)=-x^2 + 4n + -x^2 + 8/n + 1 +
2x^2 - 16/n + 2 + x^2/n + 3
and the coefficient 2x^3 of (n+3)^-1 in R the algorithm
performs the subtraction
R R - 2xL^*(1/n+3)
= n(8x^2 +x) - 16x - 1/2(n - 1)^2 + 4x^2/n - (8x^2+ x - 32)n + x - 32/2(n + 1)^2 -4x^2/n + 2.
Next, the pole -2 is a simple root of the constant coefficient of
L^*, leading to the computation of
L^*(1/(n+2)^2)= x^2(n - 2)/(n - 1)^2 +
x^2/n - n(x^2 - 4) - 4/(n + 1)^2 - x^2/n + 2
so that the pole is removed by
R R - 4L^*(1/(n+2)^2)
= x/2(n - 1) - x/2(n + 1).
R now has all its poles in {-1,0,1} and the weak reduction
is finished.
§.§ Strong reduction of the polar part
By <ref>,
the weak reduction produces rational functions all whose poles
differ from those of Q by an integer in 0,r-1.
The next step of the reduction is to subtract rational functions in
L^*(𝕂_Q(n)) that have this property.
It turns out that is possible to focus on a finite-dimensional
subspace of
L^*(𝕂_Q(n)) thanks to the following.
If j< 0, s > _j(p_0) and ℓ<s deg(Q) or
if j ≥ 0, s > _j+r(p_r) and ℓ<s deg(Q)
then
[L^*(n^ℓ/Q(n-j)^s)]_Q=0.
Let j,s,ℓ be three integers that satisfy the first
assumption. Then L^*(n^ℓ/Q(n-j)^s) has poles that differ
by j<0 from a pole of Q(n). The first pass through the first loop
of the weak reduction subtracts L^*(n^ℓ/Q(n-j)^s) to itself and thus reduces it to zero.
The second assumption is dealt with similarly by the second
loop.
Let I_0:={j∈ℤ_<0|(p_0(n),Q(n-j))≠1}
and I_r:={j∈ℤ_≥0|(p_r(n),Q(n-j))≠1}.
The 𝕂-vector space [L^*(𝕂_Q(n))]_Q is
generated by the fractions
[L^*(
n^ℓ/Q(n-j)^s_j)]_Q, with
j∈ I_0 and
1≤ s_j≤_j(p_0) and 0≤ℓ<s_j Q
or
j∈ I_r and
1≤ s_j≤_j+r(p_r) and 0≤ℓ<s_j Q.
From this finite set of generators, one can compute a basis by
linear
algebra. The strong reduction of a rational
function R∈𝕂_Q consists in reducing [R]_Q using this basis. By this process, we obtain the
following.
Strong reduction reduces every rational function R∈ L^*(𝕂_Q(n)) to a polynomial in 𝕂[n].
With the same notation as in example <ref>, <ref>
shows that [L^*(𝕂_n+1(n))]_n+1 is generated by
[L^*(1/n-2)]_n+1=4n + x^2/n +
1
- x^2/n - 1, [L^*(1/n-1)]_n+1=4n + x^2/n -
1 - x^2/n + 1.
Thus the strong reduction of the rational function R from
<ref> is the polynomial
R+ L^*((n-2)^-1)/(2x)=-2n/x,
concluding the reduction.
§.§ Weak reduction of polynomials
The weak reduction of polynomials is a direct adaptation of the differential case
<cit.>.
The indicial polynomial of L^* at infinity is the polynomial p∈𝕂[s] defined by
L^*(n^s)=n^{s+σ}(p(s) + O(1/n)),
with σ∈ℕ.
The ensuing weak reduction is presented in <ref>.
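One possible way to compute p(s) in practice is sketched below in SymPy (with a made-up operator; this is not the paper's code): write L^*(n^s)=n^{s+d}g(1/n), where d is the maximal coefficient degree, and read off the first nonzero coefficient of the series of g.

    from sympy import symbols, degree, series, expand, simplify

    n, t, s = symbols('n t s')
    p = [n**2 + 1, -2*n**2 + n, n**2 - n]     # hypothetical coefficients of L^*

    d = max(degree(pi, n) for pi in p)
    # L^*(n^s) = n^(s+d) * g(1/n)   with   g(t) = sum_i t^d p_i(1/t) (1 - i t)^s
    g = sum(expand(t**d * pi.subs(n, 1/t)) * (1 - i*t)**s for i, pi in enumerate(p))
    ser = expand(series(g, t, 0, 4).removeO())

    for j in range(4):                         # first nonzero coefficient: sigma = d - j
        c = simplify(ser.coeff(t, j))
        if c != 0:
            print(d - j, expand(c))            # here: sigma = 0 and p(s) = s**2 + 1
            break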
In <ref>,
the indicial equation at infinity is
L^*(n^s)=n^{s+2}(8+4s +O(1/n)).
The polynomial -2n/x found in <ref> cannot be reduced
further by weak reduction since its degree is smaller than 2.
The following properties are proved exactly as those for weak
reduction at a pole.
Algorithm [·]_∞ terminates and is 𝕂-linear.
For all P∈𝕂[n], there exists Q∈𝕂[n] such that P-[P]=L^*(Q).
If s∈ℕ is not a root of p, then [L^*(n^s)]_∞=0.
§.§ Strong reduction of polynomials
The final step is to subtract polynomials in L^*(𝕂(n)). Here
again, a finite number of generators can be obtained thanks to the
following.
Let Q_1,…,Q_v be the polynomials that occur in a
shiftless decomposition of p_0p_r and let P be a polynomial in
L^*(𝕂(n)). Then
P ∈ E_pol def= L^*(𝕂[n])+∑_i=1^v [L^*(𝕂_Q_i(n))]_Q_i∩𝕂[n].
If R∈𝕂(n) is such that L^*(R) is a polynomial, then the
poles of R must be cancelled by the zeros of p_0p_r or their
shifts.
It follows that R decomposes as
R= R_∞ + ∑_i=1^v R_i
with R_i∈𝕂_Q_i(n) and R_∞∈𝕂[n]. Each
L^*(R_i) has to be a polynomial and thus invariant by [·]_
Q_i. This concludes the proof.
By <ref>, the vector space [L^*(𝕂[n])]_∞ is
generated by {[L^*(n^s)]_∞| s∈ℕ and p(s)=0} with
p the indicial polynomial of L^* at infinity.
Generators of each [L^*(𝕂_Q_i)]_Q_i∩𝕂
[n] are obtained from the echelon basis used in the strong reduction
with respect to Q_i. This gives a finite set of generators for [E_
pol]_∞, which is easily transformed into a basis by a row
echelon computation. Strong reduction consists in reducing modulo this
basis. The following consequence is as in the polar case.
The strong reduction of polynomials reduces every polynomial P∈ L^*(𝕂(n)) to zero.
Continuing <ref>,
the polynomial p(s)=8+4s has no positive integer root, therefore [L^*(𝕂[n])]_∞={0}. A basis of [L^*(𝕂_n+1)]_n+1∩𝕂[n] is {n} according to example
<ref>.
Therefore -2n/x reduces to 0.
§.§ Canonical Form
<ref> and <Ref> combine the previous algorithms
to produce a canonical form.
<Ref> is a canonical form.
<Ref> is linear as every step is linear. By <ref> and <ref>, [L^*(𝕂(n))] reduces to 0, and [R]-R ∈ L^*(𝕂(n)) as we subtracted from R only functions in this image.
§ CERTIFICATES
The fourth generation of creative telescoping algorithms introduced
reductions that allowed to find a telescoper without having to compute
an associated certificate. This led to faster algorithms as
certificates are known to be larger than telescopers
<cit.>. This approach makes sense in the diffential
case as it is known in advance that the integral of a certificate
over a cycle that avoids singularities is equal to zero. The framework
is not as favorable for sums. Indeed, it is necessary to detect
whether
the certificate has integer poles in the range of summation and
it is often unclear whether the certificate becomes 0 at the boundaries of the summation interval.
It is however possible to compute the certificates in a compact way
during the execution of our algorithm. Recall that certificates belong
to
the finite dimensional 𝕂(n)-vector space 𝔸/
F that admits for basis (γ,…,S_n^r-1γ). The idea
is to make the computation and storage of certificates efficient by
storing their rational function coefficients not as normalized
rational functions but as directed acyclic graphs (dags). These dags
have number of internal nodes of the same order as the number of
operations performed when computing the telescoper, so that their
computation does not burden the complexity.
§.§.§ Computation of certificates
The computation is incremental. The starting point is Lagrange's
identity (<ref>) which gives an explicit certificate for a
given rational multiple of the cyclic vector γ. For instance,
from the operator A_1∈𝕂(n)⟨ S_n⟩ such that
1=A_1γ, Lagrange's formula as applied in
<ref> gives
1=A_1^*(1)γ + Δ_n(G_1).
Assume that the certificate G_i associated
to D^i has been computed, so that
D^i= R_iγ + Δ_n(G_i ).
Then by <ref>, for any j∈{1,…,t},
∂_j D^i=∂_jR_iγ+Δ_n
(∂_jG_i)
=φ_j(R_i)γ +Δ_n(∂_jG_i+Q_j(R_i)).
Finally, the certificate associated to a telescoper ∑_iα_iD^i is ∑_iα_iG_i.
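The paper leaves the data structure implicit; one simple possibility (a toy Python sketch, not the authors' Maple representation) is to keep every certificate as an unevaluated node that only records how it combines previously computed certificates, and to expand the coefficients on the basis (γ,…,S_n^r-1γ) once, at the very end. Shared sub-certificates are then stored and evaluated only once.

    from dataclasses import dataclass
    from functools import cache
    from sympy import symbols

    n = symbols('n')

    @dataclass(frozen=True)
    class Leaf:                 # explicit coefficient vector on (gamma, ..., S_n^{r-1} gamma)
        coeffs: tuple

    @dataclass(frozen=True)
    class Combine:              # unevaluated linear combination of earlier certificates
        terms: tuple            # pairs (scalar, node)

    @cache
    def evaluate(node):
        """Expand a certificate dag into explicit coefficients (done only at the end)."""
        if isinstance(node, Leaf):
            return node.coeffs
        acc = None
        for scalar, child in node.terms:
            vec = [scalar*c for c in evaluate(child)]
            acc = vec if acc is None else [a + b for a, b in zip(acc, vec)]
        return tuple(acc)

    G1 = Leaf((1/(n + 1), 0))
    G2 = Combine(((n, G1), (1, Leaf((0, 1/n)))))     # e.g. G2 = n*G1 + (0, 1/n)
    print(evaluate(G2))                              # (n/(n + 1), 1/n)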
§.§.§ Evaluation of the certificate
It is sometimes possible to prove that the certificate equals zero
without evaluating it. We present two such cases. If the summand F
has finite support (e.g. binomial sums), then the sum of any certificate over ℤ will be zero provided it has no integer pole.
If we can prove that R(n)γ F,…,R(n)S_n^r-1γ F tend to zero as n tends to ±∞ for any rational function R∈𝕂[n] (as in the introductory example) then again the sum of any certificate over ℤ will be zero provided it has no integer pole.
§.§.§ Integer pole detection
By its very nature, the method of creative telescoping requires the
certificate not to have poles in the range of summation, so that
telescoping can occur.
The compact representation form of our certificate does not allow to
recover its denominator efficiently. However it is possible to
compute a multiple of this denominator by taking the least common
multiple of the denominators of every rational function in the
representation. Then one can compute the set of integer
roots of this polynomial. If that set is not empty, then one can
compute a Laurent series expansion of the certificate at any point
to check if it is a pole or not.
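A small SymPy illustration of this check follows; the coefficient below is invented (in the algorithm it would come from the dag). The lcm of the visible denominators gives the candidate integer poles, and a Laurent expansion at each candidate tells whether it is an actual pole.

    from functools import reduce
    from sympy import symbols, denom, lcm, roots, series

    n, x = symbols('n x')

    # hypothetical certificate coefficient kept in unnormalized (compact) form
    A0 = (n*x + 1)/(n*(n + 1)) - 1/n          # normalizes to (x - 1)/(n + 1)

    D = reduce(lcm, [denom(term) for term in A0.args])   # a multiple of the denominator
    candidates = sorted(r for r in roots(D, n) if r.is_integer)
    print(candidates)                                    # [-1, 0]

    for r in candidates:          # the expansion at n = 0 has no negative power: not a pole
        print(r, series(A0, n, r, 1))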
On input ∑_i=0^∞ J_i(x)^2 where J_i(x) is the Bessel J function, <ref> outputs a certificate G of the form
G= A_0 + A_1S_n + A_2S_n^2
with A_i∈𝕂(n) in compact form. Moreover we know that the poles of the A_is are included in {-1,0,1}. From this compact form a series expansion can be computed at the point n=0
A_0 ∼_n→ 0 x/4, A_1 ∼_n→ 0 x, A_2 ∼_n→ 0 -x/4,
and at the point n=1
A_0 ∼_n→ 1 (x - 4)(x + 4)/(8x), A_1 ∼_n→ 1 2/x, A_2 ∼_n→ 1 -x/8,
which proves that none of them are actually poles of G.
§ IMPLEMENTATION
This algorithm is implemented in Maple[It is available at
<https://github.com/HBrochet/CreativeTelescoping.git>]. We compare our code with Koutschan's heuristic (HF-FCT) and Chyzak's algorithm (HF-CT). They
are both implemented in the HolonomicFunctions package in
Mathematica <cit.>. Redctsum corresponds to our
algorithm.
We tested these algorithms on a list of 21 easy examples that were
compiled by Koutschan and more difficult ones listed below.
<ref> was found in a paper about identities involving determinants <cit.>,
<ref> were created by us,
<ref> was a harder example found in Koutschan's list,
<ref> was found in the book Integrals and
Series <cit.>, and finally
<ref>
is an example where Koutschan's heuristic does not stop as it does not
guess correctly the form of the ansatz to use <cit.>.[The code was run
on an Intel Core i7-1265U with 32 GB of RAM.]
HF-CT HF-FCT redctsum
easy examples 10.0s 9.2s 2.4s
<ref> 99s 50s 1.2s
<ref> 2138s 44s 13.8s
<ref> 63s 1.6s 39s
<ref> 4.5s 1.4s 61s
<ref> >1h 3.2s(^*) >1h
<ref> >1h 108s(^*) >1h
<ref> >1h >1h 1.2s
The notation (^*) in the tables means that we could not check
whether the telescopers
returned by HF-FCT were minimal.
∑_j=1^n m+xm-i+jc_n,j where c_n,j satisfies recurrences of order 2 <cit.>
∑_n C_n^(k)(x)C_n^(k)(y)u^n/n!
∑_n J_n(x)C_n^k(y)u^n/n!
∑_n (4n+1)(2n)!√(2π)/n!^2 2^2n√(x)J_2n+1/2(x)P_2n(u)
∑_n P_n(x)P_n(y)P_n(z)
∑_k (a+b+1)_k/(a+1)_k(b+1)_kJ_k^(a,b)(x)J_k^(a,b)(y)
∑_y 4x + 2/(45x + 5y + 10z + 47)(45x + 5y + 10z + 2)(63x - 5y + 2z + 58)(63x - 5y + 2z - 5)
When our algorithm does not perform well, the time is spent on
operations on rational functions. These are cases where the
intermediate rational functions R_α in
<ref> become much larger than the telescopers found
after linear algebra on them.
Our algorithm performs better on examples that have large telescopers. This is the case for <ref> and the family S_r below. Indeed, HF-CT and HF-FCT are not incremental: they search for a telescoper with bounded order (and degree for HF-FCT) and, if they do not find any, they augment the order and the degree and must start again from scratch.
The family (S_r)_r defined by <cit.>
S_r = ∑_k=0^n (-1)^k(rn - (r-1)k)!(r!)^k/(n - k)!^r k!
illustrates this phenomenon. For any r, we find a minimal telescoper of order r and degree r(r-1)/2.
It is unclear why HF-FCT does not perform well on this family.
HF-CT HF-FCT redctsum
S_6 11s 64s 0.4s
S_7 32s 331s 0.9s
S_8 106s 1044s 2s
S_9 325s 3341s 5s
S_10 1035s >1h 8s
|
http://arxiv.org/abs/2307.05276v1 | 20230711141124 | Unbiased Scene Graph Generation via Two-stage Causal Modeling | [
"Shuzhou Sun",
"Shuaifeng Zhi",
"Qing Liao",
"Janne Heikkilä",
"Li Liu"
] | cs.CV | [
"cs.CV"
] |
Despite the impressive performance of recent unbiased Scene Graph Generation (SGG) methods, the current debiasing literature mainly focuses on the long-tailed distribution problem, whereas it overlooks another source of bias, , semantic confusion, which makes the SGG model prone to yield false predictions for similar relationships. In this paper, we explore a debiasing procedure for the SGG task leveraging causal inference. Our central insight is that the Sparse Mechanism Shift (SMS) in causality allows independent intervention on multiple biases, thereby potentially preserving head category performance while pursuing the prediction of high-informative tail relationships. However, the noisy datasets lead to unobserved confounders for the SGG task, and thus the constructed causal models are always causal-insufficient to benefit from SMS. To remedy this, we propose Two-stage Causal Modeling (TsCM) for the SGG task, which takes the long-tailed distribution and semantic confusion as confounders to the Structural Causal Model (SCM) and then decouples the causal intervention into two stages. The first stage is causal representation learning, where we use a novel Population Loss (P-Loss) to intervene in the semantic confusion confounder. The second stage introduces the Adaptive Logit Adjustment (AL-Adjustment) to eliminate the long-tailed distribution confounder to complete causal calibration learning. These two stages are model agnostic and thus can be used in any SGG model that seeks unbiased predictions. Comprehensive experiments conducted on the popular SGG backbones and benchmarks show that our TsCM can achieve state-of-the-art performance in terms of mean recall rate. Furthermore, TsCM can maintain a higher recall rate than other debiasing methods, which indicates that our method can achieve a better tradeoff between head and tail relationships.
Scene graph generation, causal inference, counterfactuals, representation learning, long-tailed distribution
Unbiased Scene Graph Generation via
Two-stage Causal Modeling
Shuzhou Sun, Shuaifeng Zhi, Qing Liao, Janne Heikkilä, Li Liu
This work was partially supported by the National Key Research and Development Program of China No. 2021YFB3100800, the Academy of Finland under grant 331883, Infotech Project FRAGES, and the National Natural Science Foundation of China under Grant 61872379, 62022091, 62201603 and 62201588.
Li Liu ([email protected]) and shuaifeng Zhi are with the College of Electronic Science, National University of Defense Technology (NUDT), Changsha, Hunan, China. Li Liu is also with the CMVS at the University of Oulu, Finland.
Li Liu is the corresponding author.
Qing Liao is with the Department of Computer Science and Technology, Harbin Institute of Technology, Shenzhen, China, and also with the Peng Cheng Laboratory, Shenzhen, China, 518055. Shuzhou Sun and Janne Heikkilä are with the Center for Machine Vision and Signal Analysis, University of Oulu, Finland.
August 12, 2023
=============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ INTRODUCTION
Scene Graph Generation (SGG), first proposed by Scherrer et al. <cit.>, is an emerging, critical, and challenging intermediate scene-understanding task and has received increasing attention, especially during the past few years <cit.>, due to its potential to be a bridge between computer vision and natural language processing. SGG aims to generate a structured representation of a scene that jointly describes objects and their attributes, as well as their pairwise relationships, and is typically formulated as a set of <subject, relationship, object> triplets. Such representations can provide a deep understanding of a scene, and thus SGG has been employed for many downstream tasks, such as image-text retrieval <cit.>, visual question answering <cit.>, visual captioning <cit.>, etc.
Early SGG work has made significant progress <cit.>; however, as discussed in <cit.>, it tends to generate biased predictions, i.e., informative fine-grained relationships (e.g., standing on) are predicted as less informative coarse-grained relationships (e.g., on) due to the long-tailed distribution problem. As an example, we consider the distribution of the relationships in VG150 <cit.>, a popular benchmark in the SGG task, which, as shown in Fig. <ref> (a), clearly suffers from severe long-tailed distribution problems. The SGG model, naturally, cannot learn to represent the features of the head and tail relationships simultaneously from the skewed distribution and, hence, easily yields False Predictions (FP) on head relationships (see Fig. <ref> (c)).
For the above biased predictions, many debiasing methods <cit.> have been proposed to overcome this problem. Unlike earlier work, the primary goal of debiasing methods, however, is to pursue unbiased scene graphs. Existing debiasing methods can be roughly categorized into four groups: 1) Resampling methods <cit.> upsample the tail relationships and/or downsample the head relationships to rebalance the training data distribution. 2) Reweighting methods <cit.> revise the contribution of different relationships during training, for instance, weighting the prediction loss to strengthen the model's representation ability for the tail categories. 3) Adjustment methods <cit.> modify the learned biased model to obtain unbiased predictions, for example, by adjusting the output logits to increase the likelihood of more informative fine-grained relationships. 4) Hybrid methods <cit.> combine some/all of the above methods. Although debiasing research is rather active in the SGG community, the above methods often fall short in preserving head category performance while pursuing the prediction of informative tail relationships <cit.>. More importantly, the current debiasing methods mainly focus on a single bias, i.e., the long-tailed distribution problem, and overlook other biases.
Unlike existing work that focuses on a single bias, we reveal the fact that there are multiple biases for the SGG task in this paper. This stems from our observation that some of the False Predictions (FP) are clearly not caused by the long-tailed distribution bias, e.g., FP on tail relationships (see Fig. <ref> (c)). We therefore argue that there are other biases that have not yet been observed and explored with current debiasing methods. Cognitive psychology <cit.> and studies on the human visual system <cit.> suggest that humans struggle to distinguish similar objects. Inspired by this fact, we hypothesize that the source of the bias of FP on tail relationships is semantic confusion, which refers to two relationships sharing similar semantic information. For instance, as shown in Fig. <ref> (b), both carrying and holding are semantic concepts composed of a person and objects in his/her hands. To demonstrate our premise, as shown in Fig. <ref> (c), we additionally split FP on tail relationships into FP on tail-similar relationships and FP on tail-dissimilar relationships. As expected, most of the FP on tail relationships occur in tail-similar relationships. This suggests that SGG models, like humans, have difficulties in distinguishing similar relationships. As a result, we take semantic confusion as the second bias.
For the multiple biases in the SGG task, we seek causal inference <cit.>, an inference procedure that achieves impressive performance in statistics, econometrics, and epidemiology, which has also attracted significant attention in the deep learning community in recent years. Our central insight is that the Sparse Mechanism Shift (SMS) <cit.> in causal inference allows independent intervention on multiple biases, thereby potentially preserving head category performance while pursuing higher performance in fine-grained tail relationships. Inspired by Pearl Causal Hierarchy (PCH) <cit.>, in particular its highest layer, counterfactual, we pose two questions: 1) What happens if there is no semantic confusion between any two relationships in the observed data? 2) What happens if the distribution of relationships in the observed data is balanced? To answer these two counterfactual questions, we first build Structural Causal Models (SCM) <cit.>, a causal modeling method that can support counterfactual inference, based on two observed biases as confounders. In practice, unfortunately, not all confounders for the SGG task can be observed, which means that the built SCM is causal-insufficient (see Section <ref> for a detailed analysis). Causal-insufficient assumption will invalidate the SMS hypothesis because the variables of the SCM are entangled in this case. Put another way, when we use existing causal intervention methods to overcome the observed biases, unobserved biases could be disturbed and bring about unwanted consequences. To allow SCM with causal-insufficient assumption to also benefit from the SMS hypothesis, we decouple the causal interventions into two stages and, on this basis, propose a novel causal modeling method, Two-stage Causal Modeling (TsCM), tailored for the SGG task.
Our TsCM consists of two stages: 1) Stage 1, causal representation learning, where despite the causal-insufficient assumption of the built SCM, we find that similar relationships have inherently sparse properties (see Section <ref>), and, hence, sparse perturbations and independent interventions on semantic confusion bias are attainable. To achieve this, we propose the Population Loss (P-Loss), which intervenes in the model training process to increase the prediction gap between similar relationships, allowing the trained model to obtain the causal representation that can eliminate the semantic confusion bias. As a result, this stage disentangles the confusion bias from the variables of the built SCM, thereby obtaining a disentangled factorization. 2) Stage 2, causal calibration learning, where thanks to the disentangled factorization obtained in stage 1, we calibrate the model's causal representation to remove the long-tailed distribution bias. Specifically, this is achieved by our proposed Adaptive Logit Adjustment (AL-Adjustment), which can adaptively learn a set of adjustment factors from the observed data for sparse perturbations and independent interventions.
In summary, the contributions of our work are three-fold:
* We thoroughly analyze the sources of bias in the biased SGG model and experimentally verify the bias, i.e., semantic confusion bias, ignored by current debiasing methods.
* We propose a new causal modeling framework, Two-stage Causal Modeling (TsCM), to disentangle the multiple biases from the biased SGG model. Our TsCM decouples the causal intervention into two stages. Stage 1 leverages the proposed P-Loss to remove the semantic confusion bias and obtain a disentangled factorization even in the case of insufficient causality, thereby providing the causal representation that can distinguish similar relationships. Stage 2 further calibrates the causal representation to eliminate the long-tailed distribution bias by using the proposed AL-Adjustment.
* Comprehensive experiments on various SGG backbones and the popular benchmark demonstrate the state-of-the-art mean recall rate of the proposed TsCM. Furthermore, our TsCM can maintain a higher recall rate than other debiasing methods, achieving a better tradeoff between head and tail relationships.
§ RELATED WORKS
§.§ Scene Graph Generation
SGG produces a structured representation of the scene by assigning appropriate relationships to object pairs and enables a more comprehensive understanding of the scene for intelligent agents <cit.>. Most early works focused on employing advanced network structures, e.g., Convolutional Neural Networks, Recurrent Neural Networks, and Graph Neural Networks, for better feature extraction and representation <cit.>. Despite continuous improvements in the recall rate, these methods fall into the trap of biased prediction, i.e., informative fine-grained relationships are predicted as less informative coarse-grained relationships. As a result, debiasing methods have attracted unprecedented attention in the SGG community in recent years. To keep focus, here we mainly review the debiasing methods for the SGG task. Existing debiasing methods can be roughly categorized into four groups as follows.
Resampling methods downsample the head category relationships and/or upsample the tail ones to balance the training data distribution, and prior knowledge, e.g., language priors, is often taken into account as well. For instance, instead of relying on box-level annotations, SegG <cit.> argues that pixel-level grounding would naturally be more valuable and, hence, creates segmentation annotations for the SGG dataset with the help of auxiliary datasets. Recently, TransRwt <cit.> rectified the skewed distribution by creating an enhanced dataset using Internal Transfer and External Transfer, the former for transferring the coarse-grained relationships to the fine-grained ones and the latter for re-labeling the relationships that are missing annotations. However, resampling methods may lead to overfitting (oversampling) or information loss (undersampling) by altering relationship category sample distributions.
Reweighting methods design debiasing loss functions to make the model pay more attention to the tail category relationships or create advanced networks to improve the representation ability of these relationships. Some works in this group begin by extracting prior knowledge from biased distributions, e.g., the cognitive structure in CogTree <cit.>, the predicate lattice in FGPL <cit.>, the relationship probability distribution in PPDL <cit.>, etc., and then combine the proposed debiasing loss functions to supervise the model training. Besides, GCL <cit.> presents a Stacked Hybrid-Attention network to achieve intra-modal refinement and inter-modal interaction and then enhances the representation ability of tail relationships. Nonetheless, reweighting methods may result in an imbalanced focus on relationship categories and suboptimal, unstable performance due to manual or heuristic weight adjustments.
Adjustment methods adjust the output of the biased trained model to obtain unbiased predictions. The adjustment procedure can be based on prior knowledge. For example, Logit-reweight <cit.> uses label frequencies to adjust the logits output by the biased model. DLFE <cit.> considers the SGG task as a Learning from Positive and Unlabeled data (PU learning) problem, where a target PU dataset contains only positive examples and unlabeled data. However, its prior knowledge, i.e., label frequencies, is obtained iteratively during the training process by the proposed Dynamic Label Frequency Estimation method. Furthermore, adjustment procedures can also be modeled by causal inference. For instance, TDE <cit.> first builds a causal graph for the SGG task and then draws counterfactual causality from the trained model to infer the effect from the negative bias. Note that adjustment methods will increase computational complexity with post-training output adjustments and may cause a decline in other relationship category performances.
Hybrid methods combine some/all of the above techniques. HML <cit.> and CAME <cit.> first divide the long-tailed distribution into several balanced subsets. HML <cit.> then trains the model on coarse-grained relationships and finally learns the fine-grained categories, while CAME <cit.> proposes to use a mixture of experts to handle the different subsets. RTPB <cit.> enhances the impact of tail relationships on the training process based on prior bias and designs a contextual encoding backbone network to improve feature extraction capabilities. However, hybrid methods increase implementation complexity, make parameter tuning more challenging, incur higher computational costs, and can suffer from performance instability due to the interplay of the combined methods.
Despite achieving impressive results, the above debiasing methods focus almost exclusively on a single bias, i.e., the long-tailed distribution bias, which clearly makes complete debiasing impossible. Moreover, these methods sacrifice head relationships in pursuit of tail category performance. Unlike them, our method considers multiple biases and removes them using causal inference techniques. Our causal model TsCM consists of two stages covering both the reweighting and adjustment approaches. Thanks to the SMS mechanism, the two stages in our method independently intervene in different biases. In contrast, the different stages in existing hybrid methods only intervene in the same bias.
§.§ Causal Inference
Causal analysis has achieved encouraging performance in the health, social, and behavioral sciences, etc., and it has also attracted increasing attention in the deep learning community in recent years, such as scene graph generation <cit.>, out-of-distribution generalization <cit.>, and salient object detection <cit.>. Compared with deep learning models, causal inference approaches can eliminate the influence of biases/confounders when making predictions <cit.>. A typical causal inference paradigm usually starts with establishing a graphical model, e.g., the Structural Causal Model (SCM) <cit.>, which models the dependencies of causal variables. It then intervenes in these variables (e.g., via do-interventions <cit.>) to pursue the causal inference of interest. The models can therefore be generalized to different distributions.
It should be emphasized that the above interventions can be achieved because the causal variables satisfy the principle of sparse perturbation and independent intervention, which is the cornerstone of causal inference. The independent intervention principle in causality emphasizes that the conditional distribution of each causal variable, given its causes (i.e., its mechanism), does not inform or influence the other mechanisms. The sparse perturbation principle refers to small distribution changes that tend to manifest themselves in a sparse or local way <cit.>. The sparse principle is extended by independence, which can be seen as a consequence of independence, too <cit.>. Benefiting from the independent intervention principle, Scherrer et al. <cit.> decompose the causal mechanism into modules with knowledge, which, different from monolithic models where full knowledge will be learned directly, enables adaptation to distribution shifts by only updating a subset of parameters. Thanks to the sparse perturbation principle, Ahuja et al. <cit.> achieve weakly supervised representation learning by perturbing the causal mechanism sparsely. Inspired by the above work and the multiple confounders in the SGG task, the model proposed in this paper removes these confounders independently and sparsely, which allows our model to preserve the performance of the head categories while pursuing debiasing.
§ METHODS
§.§ Overview
The primary goal of SGG is to model the objects existing in the scene and their pairwise relationships. Most existing works first detect the objects (e.g., “man”, “horse”) in the scene with an object detector and then recognize their pairwise relationships (e.g., “riding”, “standing on”) with a relationship classifier. The object detector extracts information about objects, like their bounding boxes, categories, and features. Then the relationship classifier predicts relationships for each pair of objects. Simply put, a scene graph is a set of visual triplets in the formulation of <subject, relationship, object>. Formally, let 𝒟={(𝐱_i, 𝐲_i)}_i=1^N_𝒟 denote the observed data with N_𝒟 samples, where 𝐱_i is the i-th image, 𝐲_i ∈ℝ^N_i ×K is the N_i relationships in this sample, and 𝐲_ij is a K-dimensional one-hot vector denoting the label of the j-th relationship in 𝐱_i. We therefore need to label the dataset 𝒟 with visual triplets {<(𝐨_i^sub,𝐛_i^sub), 𝐲_i,(𝐨_i^obj,𝐛_i^obj)>}_i=1^N_𝒟 to support model training, where 𝐨_i^sub, 𝐨_i^obj ∈ℝ^N_i ×C and 𝐛_i^sub, 𝐛_i^obj ∈ℝ^N_i × 4, with 𝐨_ij and 𝐛_ij denoting the category and bounding box information of the subject or object of the j-th relationship in 𝐱_i, respectively. C and K are the numbers of categories of objects and relationships in the observed data, respectively. Although labeling the visual triplets is very costly, early efforts have contributed a few benchmarks to the SGG community, such as Visual Genome <cit.>, Scene Graph <cit.>, and Open Images V4 <cit.>. However, SGG models trained on these datasets typically suffer from two challenges: (1) Semantic confusion, and (2) Long-tailed distribution.
In this work, we address the above two challenges from the perspective of causal inference. Specifically, in Section <ref>, we first consider the aforementioned two challenges, i.e., semantic confusion and long-tailed distribution, as confounders for the standard SGG framework (see Fig. <ref> (a)). Therefore, our method leverages the data-level confounders to model the causality for the SGG task. Compared to model-level confounders <cit.>, our approach is model-agnostic, i.e., transferable to arbitrary SGG models. We then propose the Population Loss in Section <ref> to remove the semantic confusion confounder and get a disentangled factorization for the causal model (see Stage 1 in Fig. <ref> (b)). Next, in Section <ref>, we propose AL-Adjustment to remove the long-tailed distribution confounder to obtain unbiased predictions (see Stage 2 in Fig. <ref>). Finally, in Section <ref>, we show that our method is Fisher-consistent and highlight the differences from existing statistical-based approaches.
§.§ Modeling structural causal model
One can use a variety of frameworks to model the causality of their system of interest, such as Causal Graphical Models (CGM), Structural Causal Models (SCM), and Potential Outcomes (PO) <cit.>. The causality modeling ability of CGM is limited since it cannot support counterfactual inference. PO is active in the system with binary treatment variables, but it is awkward when dealing with special treatment and outcome variables. Considering the limitations of CGM and PO, in this work, we model the causality using SCM, a structural method that contains variables, structural functions, and distributions over the variables (see Definition 1).
Definition 1 (Structural Causal Model (SCM) <cit.>). A structural causal model (SCM) ℳ is a 4-tuple ⟨𝒱, 𝒰, ℱ, P(𝒰)⟩, where 𝒰 = { U_1, U_2, ··· , U_n } is a set of exogenous variables; 𝒱 = { V_1, V_2, ··· , V_n } is a set of endogenous (observed) variables; ℱ={ F_1, F_2, ··· , F_n } is the set of structural functions determining 𝒱; P(𝒰) is a distribution over the exogenous variables.
Definition 2 (Submodel <cit.>). For the SCM ℳ, let 𝒱' be a subset of the variables in 𝒱, and v a particular value of 𝒱'. A submodel ℳ_v (of ℳ) is a 4-tuple: ℳ_v=⟨𝒱, 𝒰, ℱ_v, P(𝒰)⟩, where ℱ_v={F_i: V_i ∉𝒱'}∪{𝒱'←v }, and all other components are preserved from ℳ.
Endogenous variables are the fundamental elements of an SCM. However, determining variable 𝒱 in the SGG task is very challenging because its inputs, , images, differ greatly from the structured units in traditional causal discovery and reasoning tasks <cit.>. Inspired by FP on head/tail relationships in Fig. <ref>, in this paper, we propose a model-agnostic data-level variable that takes semantic confusion and long-tailed distribution as the confounders. As a result, the induced submodel ℳ_v in our work is ⟨𝒱, 𝒰, ℱ_v, P(𝒰)⟩ (see Definition 2). Where 𝒱={X, Y, S, L}, X is input (images in SGG task), Y is output (relationships), S is the semantic confusion confounder, L is the long-tailed distribution confounder; 𝒰={U_X, U_Y, U_S, U_L}; ℱ_v={F_1,F_2,F_3,F_4,F_5 }; P(U_X, U_Y, U_S, U_L) is the distribution over the exogenous variables. The SCM is shown in Fig. <ref> (Biased SGG), and thus the structural equations are:
S = P(S) ,
L = P(L) ,
X = F_1(L, P(L))+F_2(S, P(S)) ,
Y = F_3(X, P(X))+F_4(L, P(L))+F_5(S, P(S)),
Intuitively, we can directly use interventions to remove the confounders in SCM (see Definition 3). These interventions, however, do not update P(𝒰), and thus the intervened results are noisy causal effects in most cases <cit.>.
Definition 3 (Interventions in SCM <cit.>). An intervention do(V_i:=v') in an SCM ℳ is modeled by replacing the i-th structural equation by V_i:=v', where v' is a V_i-independent value.
Definition 4 (Counterfactual in SCM <cit.>). A counterfactual in an SCM ℳ is modeled by replacing the i-th structural equation by V_i:=v' and updating P(𝒰), where v' shares the same meaning as it does in Definition 3. The above counterfactual intervention induces the submodel ℳ^V_i.
Assumption 1 (Causal-insufficient). The exogenous variable 𝒰 in ℳ satisfies that: P(U_1, …, U_n) ≠ P(U_1) × P(U_2) ×⋯× P(U_n).
Fortunately, counterfactual, the highest-level reasoning procedure of cognitive ability <cit.>, overcomes this limitation by imagining pre/post-intervention results (see Definition 4). Note that counterfactual is unfalsifiable since its imaginary results cannot be observed. However, significant designs (e.g., the average treatment effect) in statistics, econometrics, and epidemiology can estimate the counterfactuals and are proven effective. The principal difference between intervention and counterfactual is that the latter updates P(𝒰) when manipulating the structural equations <cit.>. Thus, one can partially seek the intervention technique to calculate the counterfactuals. Inspired by the above facts, we therefore leverage counterfactual inference to eliminate the semantic confusion confounder S and the long-tailed distribution confounder L to obtain an unbiased SCM ℳ_v^S,L for the SGG task. The counterfactual results can be calculated as:
𝔼[Y | X, do(S:= s), do(L:=l)]
=𝔼_S 𝔼_L 𝔼[Y | X, s, l]
=∑_s ∑_l 𝔼[Y | X, s, l] P(s) P(l).
where s/l is an S/L-independent value. do interventions involve manipulating one or more variables to investigate causal relationships <cit.>, where do(L:=l) signifies setting the value of variable L to l and observing the outcome. Note that causal sufficiency is an essential assumption for Equation (<ref>), i.e., the exogenous variables U_i are jointly independent: P(U_1, …, U_n) = P(U_1) × P(U_2) ×⋯× P(U_n). Thanks to the causal sufficiency assumption, the endogenous variables 𝒱 in ℳ can be formulated as a causal/disentangled factorization:
P(V_1, V_2, …, V_n)=∏_i=1^n P(V_i |pa(V_i)),
where pa(V_i) are the parents of V_i. In the SGG task, confounders can be, for instance, the observed confounders such as the semantic confusion confounder and the long-tailed distribution confounder, as well as unobserved ones caused by missing labeled relationships and mislabeled relationships. The latter have been discussed extensively in the literature <cit.>. We therefore do not expect and cannot model a causally sufficient SCM for the SGG task due to the unobserved confounders. In accordance with this, we assume that ℳ_v is causal-insufficient (see Assumption 1), and thus its endogenous variables can only be formulated as an entangled factorization:
P(V_1,V_2, …, V_n)=∏_i=1^n P(V_i | V_i+1, …, V_n).
Assumption 2 (Sparse Mechanism Shift (SMS) <cit.>). Small distribution changes tend to manifest themselves sparsely or locally in the causal/disentangled factorization (see Equation (<ref>)), that is, they should usually not affect all factors simultaneously.
Assumption 2 tells us that for a disentangled factorization, a sparse operation allows the learner to remove the confounders and even generalize to unseen distributions. However, unfortunately, ℳ_v is causal-insufficient since the SGG task inevitably contains the unobserved confounders, and it, therefore, cannot benefit from the SMS hypothesis. In response to this challenge, we decouple causal modeling into two stages to achieve the goal of intervening in the endogenous variables sparsely:
𝔼 [Y | X, do(S:= s), do(L:=l)]
=𝔼_X[𝔼[Y^'| X, do(S:=s)]_stage 1 + 𝔼[Y | X, Y^', do(L:=l)]_stage 2].
where stage 1 exploits the inherent sparse property of similar relationships; even under the causal-insufficient assumption, it can achieve sparse perturbations on variable S to remove the semantic confusion confounder while learning a disentangled representation of ℳ_v at the same time, see Section <ref>. Based on the obtained disentangled factorization, we then, in stage 2, manipulate variable L in a sparse way to remove the long-tailed distribution confounder, see Section <ref>. Both stages are sparse interventions, thereby satisfying the SMS assumption, which allows our method to achieve unbiased prediction while protecting the performance of head relationships. Specifically, stage 1, involving interventions on similar relationships, naturally does not harm head relationships. Moreover, the adjustment mechanism in stage 2, adaptively learned from stage 1, further ensures the protection of head relationships.
§.§ Causal representation learning
§.§.§ Population Loss
In the SGG task, similar relationships are those with only slightly different visual and semantic features. However, existing SGG models perform poorly in discriminating these similar relationships, for instance, easily mispredicting standing on as walking on or vice versa. This is not surprising, as distinguishing these similar relationships is even challenging for humans. Naturally, one may be curious and then imagine: Would the above error still occur if standing on and walking on are no longer similar? While this only happens in our imagined spaces, it can be formally calculated by counterfactual (see Definition 4) in the causal inference paradigm:
P(y| x, do(S:= s))
=P(y|x, do(S:= s_1))-P(y|x, do(S:= s_0)),
where (x,y) is a particular value of (X,Y), (X,Y) ∼𝒟; s_1 and s_0 indicate that the relationship y is similar or dissimilar to other relationships, respectively. In fact, the above counterfactual formulated in Equation (<ref>) simulates the potential outcomes of different interventions, i.e., do(S:= s_1) and do(S:= s_0). It is critical because one can often benefit from imagining; for instance, Einstein's thought experiment brought the Special Theory of Relativity to the world. Despite being promising, however, calculating Equation (<ref>) is far from straightforward. TDE <cit.>, which simulates the two interventions by running inference on pre/post-processed inputs to obtain counterfactual results, is highly relevant to our work. However, it requires two model inferences for each input, thereby introducing unbearable costs. In contrast, Average Treatment Effect (ATE) <cit.> estimates the counterfactuals in one shot by leveraging statistical knowledge. Thanks to its high estimation efficiency, ATE is a commonly used technique in causal inference, such as exploring the ATE estimation with binary treatments and continuous outcomes in <cit.> and discussing the propensity score if the average treatment effect is identifiable from observational data in <cit.>. Inspired by ATE, in this paper, we use statistical knowledge from the observed data 𝒟 to estimate counterfactuals:
𝔼 [y |x, do(S:= s)]
=𝔼_X[𝔼(Y|X, do(S:= s_1))-𝔼(Y|X, do(S:= s_0))].
Definition 5 (Population in SGG). Let y = {y_1,y_2,···,y_K} be relationship categories in observed data 𝒟, and let P_α^y_i be population of y_i. Then, P_α^y_i is a relationship set containing the α most similar relationships to y_i.
Formally, we first extract knowledge 𝒫_α, 𝒫_α = {P_α^y_i}_i=1^K, from the observed data (the calculation of 𝒫_α is placed in Section <ref>). P_α^y_i is the population of relationship y_i, a relationship set containing the statistical knowledge of similar relationships; see Definition 5. Inspired by the penalization of head categories in <cit.>, we penalize similar relationships based on the knowledge 𝒫_α. Specifically, we discard the widely used cross-entropy loss ℓ and supervise the SGG model f through the proposed Population Loss (P-Loss) ℓ̂:
ℓ̂(𝒫_α, y, f(x))=log [1 + ∑_y^'∈P_α^yπ_y^'/π_y× e^(f_y^'(x)-f_y(x))
+∑_y^'∉P_α^y , y^'≠ y e^(f_y^'(x)-f_y(x))],
θ^*=argmin_θ 𝔼_(x, y) ∼𝒟[ℓ̂(𝒫_α, y, f(x))],
where π denotes the category frequencies on the observed data 𝒟 and θ^* denotes the parameters of the SGG model f_θ^*. x and y (y = {y_i}_i=1^K) are the input (i.e., the image) and the output relationship categories, respectively. As an example, for relationship y_i, the P-Loss ℓ̂ penalizes its confusing relationships with the help of the statistical knowledge π and P_α^y_i extracted from the observed data 𝒟. The penalty term in Equation (<ref>) can be seen as do(S:= s) in Equation (<ref>) since it intervenes in the sparse 𝒫_α and makes the model more capable of distinguishing between similar relationships. In other words, P-Loss can remove the confounder S in ℳ_v. Thus, the counterfactual can be estimated by the statistical knowledge 𝒫_α as:
P(y| x, do(S:= s)) = P(y| x,𝒫_α,π)
= f_θ^*(x).
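To make the loss concrete, the following PyTorch-style sketch shows one way Equation (<ref>) could be implemented. It is our own minimal illustration, not the authors' released code; the tensor names and shapes (logits of size B×K, labels of size B, a boolean population mask of size K×K encoding 𝒫_α, and category frequencies π of size K) are assumptions made for the example.

import torch

def population_loss(logits, labels, pop_mask, freq):
    # logits: (B, K) relationship logits f(x); labels: (B,) ground-truth indices y
    # pop_mask: (K, K) bool, pop_mask[i, j] = True iff y_j is in the population P_alpha^{y_i}
    # freq: (K,) relationship frequencies pi on the observed data
    f_y = logits.gather(1, labels.unsqueeze(1))              # (B, 1) logit of the true class
    diff = (logits - f_y).exp()                              # (B, K) e^{f_{y'}(x) - f_y(x)}
    # weight pi_{y'}/pi_y for population members, 1 for the remaining classes
    w = torch.ones_like(logits)
    in_pop = pop_mask[labels]                                # (B, K)
    ratio = freq.unsqueeze(0) / freq[labels].unsqueeze(1)    # (B, K)
    w = torch.where(in_pop, ratio, w)
    # exclude the true class itself from the sum
    keep = torch.ones_like(logits)
    keep.scatter_(1, labels.unsqueeze(1), 0.0)
    return torch.log1p((w * diff * keep).sum(dim=1)).mean()

In training, this term would simply replace the cross-entropy loss; the rest of the pipeline is unchanged.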
Assumption 3 (Similar relationships are sparse). Let y = {y_i}_i=1^K be relationships in observed data 𝒟. For any relationship y_i (i ∈{ 1,2, ⋯, K}), there exist k relationships similar to it. Then, it holds that k ≪K.
Despite achieving the goal of calculating the counterfactuals, however, it is critical to note that our causal-insufficient assumption determines that our manipulation (do(S:= s)) in Equation (<ref>) may perturb other variables simultaneously since the entangled factorization of ℳ_v does not satisfy the SMS hypothesis. Fortunately, in this paper, we empirically find that similar relationships hold the sparse property (see Assumption 3). Based on our observations, relationships within the SGG dataset are often similar to a few specific relationships but not to most others. For example, standing on is only similar to on, walking on, etc., but differs from most other relationships. Therefore, Assumption 3, which shows that similar relationships have the sparse property that SMS highlights, guarantees that even an entangled factorization can be intervened on sparsely with respect to the confounder S. In other words, even if ℳ_v is causal-insufficient, our proposed P-Loss can still intervene in S without worrying about perturbing other exogenous variables, such as confounder L.
Furthermore, we argue that do(S:= s) partially disentangles ℳ_v as it removes the confounder S and allows us to obtain a better causal representation. Thus, the endogenous variables in the induced submodel ℳ_v^S can be roughly formulated as a disentangled factorization:
P(X,Y,L)≐ P(X) × P(Y) × P(L).
Disentangled factorization is considered to be the key to representation learning due to its potential in abstract reasoning, interpretability, generalization to unseen scenarios, etc. Although it has attracted significant attention, evaluating the disentangled representation is still challenging <cit.>. We will design experiments in the ablation study (Section <ref>) to demonstrate our disentangled claim in Equation (<ref>).
§.§.§ Calculate the relationship-populations
As a supplement to Section <ref>, this section shows how to calculate the relationship-populations 𝒫_α. For 𝒫_α, we give three assumptions (Assumptions 4-6) based on the inspirations of causality as well as the hallmarks of the relationships in the SGG task.
Assumption 4 (Relationship-population is learner independent). Let f_θ_1 and f_θ_2 be SGG models parameterized by θ_1 and θ_2, respectively. Then, 𝔼[𝒫_α^'| f_θ_1]=𝔼[𝒫_α^”| f_θ_2].
Assumption 5 (Relationship-population is distribution insensitive). Let D_obs^1 and D_obs^2 be two observed datasets. Then, 𝔼[𝒫_α^'| D_obs^1]=𝔼[𝒫_α^”| D_obs^2].
Assumption 6 (Relationship-population is asymmetric). Let y_i and y_j be two relationships. Then, y_i ∈P_α^y_j⇎ y_j ∈P_α^y_i.
Assumption 4 states that whether two relationships are similar is irrelevant to the SGG model. Standing on and walking on, for instance, should share the same features no matter what model we use. In light of this, we should not use any SGG model when calculating 𝒫_α. Assumption 5 illustrates no correlation between the distribution of two relationships and their similarity. We highlight this because we observe that the SGG dataset often suffers from the long-tailed distribution issue at both the relationship and object levels, which may perturb the calculation procedure of 𝒫_α. Assumption 6 is inspired by the fact that cause and effect are directed, , the cause can determine the effect, but not vice versa.
To satisfy Assumption 4, we design a model-agnostic relationship feature extraction method. Consider two objects, o_i and o_j, whose bounding boxes are [b_x̅^i, b_y̅^i, b_h^i, b_w^i] and [b_x̅^j, b_y̅^j, b_h^j, b_w^j], respectively. Where, as an example, for the bounding box of o_i, (b_x̅^i, b_y̅^i) is the center point, and b_w^i and b_h^i are the width and height. We denote the model-agnostic feature of the relationship between these two objects as ψ_<o_i,o_j>:
[2(b_x̅^i+b_x̅^j)-(b_w^i+b_w^j)/4 b_h^i, 2(b_y̅^i+b_y̅^j)-(b_h^i+b_h^j)/4 b_h^i ,
2(b_x̅^i+b_x̅^j)+(b_w^i+b_w^j)/4 b_h^i, 2(b_y̅^i+b_y̅^j)+(b_h^i+b_h^j)/4 b_h^i , b_h^j/b_h^i, b_w^j/b_w^i].
Our proposed model-agnostic feature emphasizes the relative position between object pairs, which is inspired by the fact that relative position is intrinsically linked to the relationships in SGG. For example, the object pairs of standing on are arranged up-down, while those of behind are arranged front-back. Thanks to the numerators of Equation (<ref>), ψ_<o_i,o_j> is position-insensitive, as the upper left corners of all object pairs are moved to the same coordinate. Besides, the denominators of Equation (<ref>) normalize the object pairs, ensuring that ψ_<o_i,o_j> is scale-insensitive. The position/scale-insensitive design in our model-agnostic feature extraction method can overcome the problem that camera distance can make the same relationship vary greatly in appearance, thereby generalizing to unseen object pairs.
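As a concrete reading of Equation (<ref>), the sketch below computes ψ_<o_i,o_j> from two boxes given as (center_x, center_y, height, width); the box convention and the function name are our own assumptions for illustration, not part of the paper.

import numpy as np

def rel_feature(box_i, box_j):
    # box = (cx, cy, h, w): center coordinates, height, and width of a bounding box
    cx_i, cy_i, h_i, w_i = box_i
    cx_j, cy_j, h_j, w_j = box_j
    d = 4.0 * h_i  # shared denominator: scale by the subject height
    return np.array([
        (2 * (cx_i + cx_j) - (w_i + w_j)) / d,  # mean left edge of the two boxes, scaled
        (2 * (cy_i + cy_j) - (h_i + h_j)) / d,  # mean top edge, scaled
        (2 * (cx_i + cx_j) + (w_i + w_j)) / d,  # mean right edge, scaled
        (2 * (cy_i + cy_j) + (h_i + h_j)) / d,  # mean bottom edge, scaled
        h_j / h_i,                              # relative height
        w_j / w_i,                              # relative width
    ])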
Before extracting the relationship features from the observed data using the above method, however, there is a problem that needs to be addressed: the object-level long-tailed distribution problem may perturb the model-agnostic feature extraction. Consider an example with 90% <people, standing on, road> and 10% <people, standing on, beach> in the observed data. Directly fusing the features of standing on will undoubtedly be biased towards <people, standing on, road> due to its dominance in the observed data, which is detrimental to extracting the feature of standing on. We address this problem by extracting the object-to-object level features ξ_y and then normalizing them. Our method is inspired by Inverse Probability Weighting (IPW) <cit.>, a bias correction method commonly used in statistics, econometrics, epidemiology, etc. As a result of this improvement, the proposed method satisfies Assumption 5 since it eliminates the disturbance of the distribution from the feature extraction. Specifically, ξ_y ∈ℝ^4 (C×C×K× 6) is a four-dimensional statistic:
ξ_y=[[ ξ_y^(1,1) ξ_y^(1,2) ⋯ ξ_y^(1, C); ⋯ ⋯ ⋯ ⋯; ξ_y^(C, 1) ξ_y^(C, 2) ⋯ ξ_y^(C, C) ]],
where ξ_y^(i, j) = {ξ_y_t^(i, j)}_t=1^K is the normalized features of relationship <o_i, y_t, o_j>, and it can be calculated as:
ξ_y_t^(i, j)= ξ_y_t^<o_i, o_j> / |ξ_y_t^<o_i, o_j>|,
ξ_y_t^<o_i, o_j> and |ξ_y_t^<o_i, o_j>| are the fusion features and numbers of all relationships <o_i, y_t, o_j> in the observed data 𝒟, respectively. We then calculate the feature of each relationship, for instance, for the t-th relationship ξ_y_t:
ξ_y_t=∑_i=1^C∑_j=1^Cξ_y_t^(i, j) / C^2.
(Footnote: Object pairs <o_i,o_j> theoretically yield C^2 triplet categories. However, the annotations in the SGG dataset are very sparse, i.e., <o_i,o_j> usually only covers a few triplet categories, resulting in a much smaller number of triplet categories than C^2. As a result, the numerator term in Equation (<ref>) should be determined by the observed data 𝒟; for instance, for y_t, it should be the number of triplet categories composed of y_t in 𝒟.)
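A minimal sketch of the aggregation in the equations above: the ψ features of all triplets sharing the same <o_i, y_t, o_j> pattern are averaged first (the IPW-style normalization by the triplet count), and the per-pair results are then averaged over the object pairs actually observed for each relationship. The dictionary layout is an assumption made for illustration.

import numpy as np

def category_features(pair_feats):
    # pair_feats[(i, j, t)] = list of 6-dim psi features of all triplets <o_i, y_t, o_j>
    per_cat = {}
    for (i, j, t), feats in pair_feats.items():
        xi_ij_t = np.mean(feats, axis=0)          # fused feature divided by the triplet count
        per_cat.setdefault(t, []).append(xi_ij_t)
    # xi_{y_t}: average over the object pairs observed for relationship t
    return {t: np.mean(v, axis=0) for t, v in per_cat.items()}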
For relationship-populations 𝒫_α, 𝒫_α = {P_α^y_i}_i=1^K, the population of y_t can be calculated as:
P_α^y_t= smal^α_t^'∈{ 1,2, ⋯, K}, t ≠ t^'‖ξ_y_t-ξ_y_t^'‖,
where smal^α is a computation kernel that selects similar relationships based on feature distances. As an example, Equation (<ref>) takes the α relationships with the smallest feature distance to ξ_y_t. Our method guarantees that the head and tail relationship categories have the same population scale and thus satisfies Assumption 6. As such, different relationships yield different feature distance thresholds in Equation (<ref>), resulting in asymmetric relationship-populations.
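Given the per-category features ξ, the populations 𝒫_α reduce to an α-nearest-neighbour query in feature space. A minimal sketch follows (our own illustration, with the Euclidean norm assumed as the feature distance):

import numpy as np

def relationship_populations(xi, alpha=5):
    # xi: (K, 6) model-agnostic feature per relationship category
    K = xi.shape[0]
    dist = np.linalg.norm(xi[:, None, :] - xi[None, :, :], axis=-1)  # (K, K) pairwise distances
    np.fill_diagonal(dist, np.inf)                                   # a category is not in its own population
    return {t: np.argsort(dist[t])[:alpha].tolist() for t in range(K)}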
§.§ Causal calibration learning
§.§.§ Adaptive Logit Adjustment
Fig. <ref> illustrates the severe long-tailed distribution problem in the SGG task. Current SGG models, therefore, easily predict informative fine-grained relationships as less informative coarse-grained relationships. For instance, looking at is predicted as near. To this end, let us resort to imagination again: if one collected balanced data, or, in particular, if looking at and near shared the same distribution in the observed data 𝒟, would the above error still occur? Similar to Equation (<ref>), this question can also be answered with the counterfactual:
P(y| x, do(L:= l))
=P(y|x, do(L:= l_1))-P(y|x, do(L:= l_0)),
where l_1 and l_0 represent the head and tail categories, respectively; as such, Equation (<ref>) simulates the potential outcomes of different interventions, i.e., do(L:= l_1) and do(L:= l_0). Inspired by logit adjustment <cit.>, in which class prior knowledge (also known as adjustment factors) extracted from the training data is used to adjust the model outputs, we extract the statistical knowledge from the observed data 𝒟 via the model f_θ^* to estimate the counterfactuals:
𝔼 [y |x, do(L:= l)]
=𝔼_X[𝔼(Y|X, do(L:= l_1))-𝔼(Y|X, do(L:= l_0))].
Specifically, we leverage the extracted statistical knowledge, the adjustment factors T_β, to maximize the recall rate of f_θ^* on the observed data (the computation of T_β is placed in Section <ref>). Compared to existing logit adjustment methods, our adjustment factors T_β not only extract knowledge from 𝒟 but, more importantly, fit adaptively to the SGG model f_θ^*. Holding this advantage, our adjustment factors T_β outperform the traditional adjustment method by a clear margin (see the experiments in Section <ref>). Despite this, the knowledge extracted directly from f_θ^* and 𝒟 is still suboptimal. We think this is because background relationships perturb the model training, resulting in 1) the logits of the foreground relationships being less discriminative; and 2) logits that alternate between positive and negative values, making it impossible for the learned factors to correctly adjust some predictions (see the qualitative results in Fig. <ref>). Here, foreground relationships in the SGG task are those within annotated triplets in the observed data, and background relationships are the ones that are absent between object pairs. To overcome these problems, we augment the logits of f_θ^*:
f̃_θ^*, y(x)=e^f_θ^*, y(x)× f_θ^*, y^bg(x),
where f_θ^*, y^bg(x) is the logit of the corresponding background relationship output by f_θ^*. f_θ^*, y^bg(x) acts as a guidance term that can make the augmented logits f̃_θ^*, y(x) more discriminative, which is inspired by the impressive performance of the traditional adjustment methods in the simple classification task. Augmented logits allow us to get better adjustment factors, and then the final prediction y_x of input x can be calculated as:
y_x=argmax_y ∈{ y_1, ⋯, y_K}{(f̃_θ^*, y(x) ×T_β)_y ∈T_β∩(f̃_θ^*, y(x))_y ∉T_β}.
Consider a typical false prediction: For an input x belonging to the tail category y_i, the largest and next largest output logits correspond to y_j and y_i, where y_j is a head category. However, our proposed adjustment factors can correct this false prediction by penalizing the logits corresponding to the head categories and encouraging the tail categories, thus eliminating the negative effect caused by the long-tailed distribution problem. Our proposed AL-Adjustment acts as do(L:= l) in Equation (<ref>) to remove the confounder L in the induced submodel ℳ_v^S. Therefore, the counterfactual estimated by the statistical knowledge T_β is:
P(y| x, do(L:= l)) = P(y| x,T_β,f̃_θ^*)
=f̃_θ^*, y(x) ×T_β.
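At inference time, the adjustment amounts to augmenting the logits as in Equation (<ref>) and rescaling only the top-β of them with the factor indexed by (category, rank). The sketch below is our reading of this mechanism, not the authors' implementation; the names and shapes are assumptions.

import numpy as np

def al_adjust(fg_logits, bg_logits, T, beta=3):
    # fg_logits: (K,) foreground relationship logits of the trained model
    # bg_logits: (K,) logits of the corresponding background relationships
    # T: (K, beta) adaptive adjustment factors T_beta
    aug = np.exp(fg_logits) * bg_logits            # augmented logits: f~ = e^f x f^bg
    adjusted = aug.copy()
    top = np.argsort(-aug)[:beta]                  # only the top-beta logits are touched (sparse)
    for rank, cat in enumerate(top):
        adjusted[cat] = aug[cat] * T[cat, rank]    # factor for the rank-th largest logit of category cat
    return int(np.argmax(adjusted))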
In Section <ref>, we show that ℳ_v^S can be decomposed into a disentangled factorization. As a result, manipulating a factor in ℳ_v^S, in most cases, should not affect all factors simultaneously (SMS hypothesis, see Assumption 2). We therefore argue that do(L:= l) in Equation (<ref>) does not affect the variables X and Y, and then the induced submodel ℳ_v^S,L obtained in this stage can be further roughly formulated as a disentangled factorization:
P(X,Y)≐ P(X) × P(Y).
We will design experiments in the ablation study (Section <ref>) to demonstrate our disentangled claim in Equation (<ref>).
§.§.§ Calculate the adjustment factors
This subsection shows how to extract statistical knowledge T_β from the observed data 𝒟 and the SGG model f̃_θ^*, which can be used to adjust the logits to remove the confounder L in submodel ℳ_v^S. For adjustment factors T_β, we have two assumptions (Assumptions 7-8).
Assumption 7 (Adjustment effect should be sparse). Let T_β^y_i and T_β^y_j are adjustment factors of i-th and j-th prediction logits, respectively. Then, P(y_i | x)=P(y_i | x, T_β^y_j), P(y_j | x)=P(y_j | x, T_β^y_i).
Assumption 8 (Adjustment factors should be independent of each other). Let T_β^y_i and T_β^y_j are adjustment factors of i-th and j-th prediction logits, respectively. Then, P(y | x,Max(T_β^y_i,T_β^y_j)) = Max(P(y | x, T_β^y_i),P(y | x, T_β^y_j)), where Max(· | ·) is a computation kernel to take the maximum value of the corresponding positions of the two sets.
Assumption 7 is inspired by the SMS hypothesis, and it holds due to the disentangled factorization (Equation (<ref>)) obtained in stage 1. This assumption also stems from our insight that the false predictions in most cases belong to the largest few logits (see Table <ref>). As such, Assumption 7 allows us to correct the false predictions with sparse adjustment factors. However, the existing methods adjust all logits. Assumption 8 views the SMS hypothesis through the relationship level to highlight the causality between the relationships. Here is an intuition for this assumption: To correct any false prediction logits of a binary classification task, we only have to adjust one of these two logits. Assumption 8 allows us to learn the adjustment factors of each relationship independently.
Our proposed adaptive adjustment factors T_β form a two-dimensional (K×β) matrix:
T_β=[[ T_β^y_1,l_1 T_β^y_1,l_2 ⋯ T_β^y_1,l_β; ⋯ ⋯ ⋯ ⋯; T_β^y_K,l_1 T_β^y_K,l_2 ⋯ T_β^y_K,l_β ]],
where T_β^y_i,l_j adjusts the j-th largest prediction logit to correspond to the i-th relationship, and it can be calculated as:
T_β^y_i,l_j=argmax_T_β^y_i,l_j∈T ( TP_(x, y) ∼𝒟^y_i,l_j(f̃_θ^*, y(x) ×T_β^y_i,l_j)_true prediction with adjustment
- TP_(x, y) ∼𝒟^y_i,l_j(f̃_θ^*, y(x))_true prediction without adjustment ),
where T∈ℝ and TP_(X, Y)(f) is a computation kernel that calculates the number of true predictions (i.e., the recall rate (R@K) in the SGG task) of model f on dataset (X,Y). 𝒟^y_i,l_j is the set of all relationships whose j-th largest prediction logit corresponds to the i-th category as inferred by f_θ^*, and it can be further divided into true predictions 𝒯𝒟_obs^y_i,l_j and false predictions ℱ𝒟_obs^y_i,l_j. Thus, our method maximizes the recall rate of model f_θ^* on the observed data 𝒟 by the adjustment factors learned in Equation (<ref>).
We then propose an upper-lower bound-based method to compute Equation (<ref>) quickly. As shown in Fig. <ref>, for each relationship in 𝒯𝒟_obs^y_i,l_j, we can compute a lower bound that keeps the prediction correct. Similarly, for each relationship in ℱ𝒟_obs^y_i,l_j, we can obtain an upper bound that allows it to be adjusted to a correct prediction. We denote the lower and upper bounds of 𝒟^y_i,l_j as ⊻_y_i,l_j and ⌅_y_i,l_j, respectively. It is clear that we only need to let T_β^y_i,l_j satisfy the most bounds to maximize the number of correct predictions. Therefore, maximizing the recall rate of model f_θ^* on the observed data 𝒟 by the adjustment factor is equivalent to finding the factor that satisfies the most bounds in ⊻_y_i,l_j and ⌅_y_i,l_j. As a result, T_β^y_i,l_j in Equation (<ref>) can also be calculated by:
T_β^y_i,l_j=argmax_t ∈T ( ∑_m=1^|⊻_y_i,l_j|1(t ≥⊻_y_i,l_j^m)+
∑_n=1^|⌅_y_i,l_j|1(t < ⌅_y_i,l_j^n)),
where 1( · ) is an indicator function (equal to 1 if the expression is true and 0 otherwise) and | · | is the length/size of the given set/list. However, we further find that the long-tailed distribution problem may perturb the adjustment effect of T_β^y_i,l_j. Specifically, if y_i is a head category, |⊻_y_i,l_j| ≪ |⌅_y_i,l_j|, and if it is a tail category, then |⌅_y_i,l_j| ≫ |⊻_y_i,l_j|. This is due to the biased training caused by the skewed distribution. To address this issue, for relationship y_i, we randomly sample the same number (i.e., min(|⊻_y_i,l_j|, |⌅_y_i,l_j|)) of bounds ⊻_y_i,l_j^' and ⌅_y_i,l_j^' separately from the original lower/upper bounds to ensure unbiased adjustment factors. Therefore, Equation (<ref>) will be adjusted to:
T_β^y_i,l_j=argmax_t ∈T ( ∑_m=1^min(|⊻_y_i,l_j| , |⌅_y_i,l_j|)1(t ≥⊻_y_i,l_j^' m)+
∑_n=1^min(|⊻_y_i,l_j| , |⌅_y_i,l_j|)1(t < ⌅_y_i,l_j^' n)).
Note that in Equation (<ref>), T_β^y_i,l_j is an interval with extremely close upper and lower bounds, so selecting any value within this interval as the adjustment factor has a negligible impact on the results. Consequently, in this paper, we randomly sample a value from T_β^y_i,l_j as the learned adjustment factor. Finally, for each relationship, we learn only β adjustment factors corresponding to positions 1 to β in the prediction logits. Thus, this sparse adjustment mechanism enables our method to satisfy Assumption 7. Meanwhile, the adjustment factors for each relationship are independently learned by Equation (<ref>), so our method satisfies Assumption 8.
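The search in Equation (<ref>) is one-dimensional and piecewise constant, so it can be solved by scanning candidate values derived from the bounds themselves. The sketch below learns a single factor T_β^y_i,l_j from the balanced lower/upper bound samples; the candidate construction, the fallback value, and the sampling seed are illustrative assumptions of ours.

import numpy as np

def learn_factor(lower_bounds, upper_bounds, seed=0):
    # lower_bounds: values of t keeping already-correct samples correct (t >= lb)
    # upper_bounds: values of t turning currently-wrong samples correct (t < ub)
    rng = np.random.default_rng(seed)
    m = min(len(lower_bounds), len(upper_bounds))
    if m == 0:
        return 1.0                                   # nothing to balance for this (category, rank) slot
    lb = rng.choice(lower_bounds, m, replace=False)
    ub = rng.choice(upper_bounds, m, replace=False)
    # the objective only changes at the bounds, so testing those points (and just below them) suffices
    pts = np.concatenate([lb, ub])
    candidates = np.concatenate([pts, pts - 1e-9, [pts.min() - 1.0]])
    scores = [np.sum(t >= lb) + np.sum(t < ub) for t in candidates]
    return float(candidates[int(np.argmax(scores))])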
§.§ Discussion
This subsection first shows that our method is Fisher consistent, i.e., models based on popular learning strategies (e.g., empirical risk minimization (ERM)) lead to the Bayes optimal classification rule that minimizes the balanced error <cit.>. This is very important for the SGG task, as it prevents the model from heading down a confusing path, e.g., being biased towards predicting head categories for a high recall rate. We then highlight the differences and advantages of the proposed causal framework compared with existing methods.
§.§.§ Fisher consistency
To demonstrate that our method is Fisher consistent, we start with the Bayes perspective. <cit.> thoroughly explored the relationship between the posterior probability of the balanced class-probability function P^bal(y | x) and the unbalanced one P(y | x), and it defined P^bal(x | y) ∝ P(x | y) / P(x). In the SGG task, however, we find that the models suffer from confounders other than the long-tailed distribution problem, such as semantic confusion confounder, as well as unobserved ones like missing labeled relationship confounder and mislabeled relationship confounder. As such, here we define:
P^bal(x | y) ∝ P(x | y) / P(x)P(S)P(U_o),
where U_o is unobserved confounders. Also, consider:
P^bal(y | x)=(P^bal(x | y) P^bal(y)) / P^bal(x) ,
we therefore have:
P^bal(y | x) ∝ (P(x | y) P(X) P^bal(y)) / (P(X)
P(Y) P(S) P(U_o) P^bal(x)).
For fixed class-conditionals P(x | y), the optimal predictions will not be affected by P(Y) <cit.>, hence:
P^bal(y | x) ∝ P(y | x) / ( P(S) P(U_o) P^bal(x)).
Then, according to the SMS hypothesis (Assumption 2) and small distribution changes hypothesis in <cit.>, there exists an intervention ℐ such that:
argmax_y ∈{ y_1, ⋯, y_K} f_θ^ℐ(x) =argmax_y ∈{ y_1, ⋯, y_K}(f̃_θ^*, y(x) ×T_β).
Note that we cannot model the intervention ℐ directly since the induced submodel ℳ_v can only be formulated as an entangled factorization. We define the adjustment factors corresponding to intervention ℐ as T_β^ℐ, that is:
argmax_y ∈{ y_1, ⋯, y_K} f_θ^ℐ(x) =argmax_y ∈{ y_1, ⋯, y_K}(f̃_θ^*, y(x) ×T_β^ℐ).
Based on Theorem 1 in <cit.>, we have
argmax_y ∈{ y_1, ⋯, y_K}f̃_θ^*, y(x) = argmax_y ∈{ y_1, ⋯, y_K} P (x | y),
thus:
argmax_y ∈{ y_1, ⋯, y_K} f_θ^ℐ(x) =argmax_y ∈{ y_1, ⋯, y_K}((P(y | x)P(y)/P(x)) ×T_β^ℐ).
Considering both Equation (<ref>) and Equation (<ref>), when
T_β^ℐ∝ P(Y) / (P(S)P(X)P(U_o) P^bal(x)) ,
then
argmax_y ∈{ y_1, ⋯, y_K} f_θ^ℐ(x) = argmax_y ∈{ y_1, ⋯, y_K} P^bal(y | x).
This means that our manipulations on confounder S and confounder L can lead to a minimal balanced error (mR@K in SGG task), and thus our method is Fisher consistent.
§.§.§ Sparsity and independency
Causal representation learning (stage 1) in our proposed causal framework is inspired by the loss-weighting methods <cit.>, and causal calibration learning (stage 2) is inspired by post-hoc adjustment approaches <cit.>. In all of these heuristic works, statistical priors (e.g., category frequencies) are extracted from the observed data to calibrate the decision boundaries. However, we leverage the extracted statistical knowledge to estimate the counterfactual to eliminate confounders S and L. The statistical knowledge in stage 1 is extracted via the proposed model-agnostic method, and that in stage 2 is adaptively extracted from the learned model f_θ^* and the observed data 𝒟. Besides, our method differs fundamentally from these works in that the interventions using knowledge are sparse and independent, which is the key to preserving head category performance while pursuing the prediction of high-informative tail relationships.
Causal inference models the observed data with modular knowledge, and interventions on partial knowledge can achieve rapid distribution changes <cit.>. These sparse perturbations simulate human learning, i.e., the reuse of most existing knowledge, and thus have great potential for practical applications, especially for open-world learning. Both stage 1 and stage 2 of our causal framework are sparse, specifically, α in 𝒫_α and β in T_β. The former means that each relationship only takes the α most similar ones as its population. Therefore, Equation (<ref>) only sparsely adjusts the loss for very few relationships. The latter means that only the top-β prediction logits will be adjusted. Thus, Equation (<ref>) is a sparse adjustment technique.
Independent Causal Mechanisms (ICM) <cit.> tells us that changing one causal mechanism does not change others. Note that ICM requires causal sufficiency, but the SGG task does not satisfy this. However, as analyzed in Section <ref>, our proposed P-Loss can intervene in confounder S without losing the independent property due to the sparse nature of similar relationships. The result of stage 1 can be roughly formulated as a disentangled factorization. Furthermore, the different logit positions of T_β are independently learned. These enable independent intervention in stage 2.
In addition, <cit.> shows that loss-reweight and logit-reweight are identical, and merging them brings no further gain. The latter even cancels out the improvement from the former in some cases. However, the post-hoc adjustment factors in our causal framework are adaptively learned from the model obtained in the previous stage and thus always yield positive adjustment effects. More importantly, our method can make the decision boundaries between similar relationships clearer, which traditional methods cannot achieve. We show the two above merge routes in Fig. <ref> to compare the boundary adjustment processes.
§ EXPERIMENTS
§.§ Implementation
Datasets.
We evaluate our method on VG150 <cit.>, a subset of the VG dataset <cit.> that includes the most frequent 150 object categories and 50 relationship classes. VG150 has about 94k images, and we follow the split in <cit.>, i.e., 62k training images, 5k validation images, and 26k test images.
Evaluation modes.
Following MotifsNet <cit.>, we use three evaluation modes: 1) Predicate classification (PredCls). This mode requires the SGG model to predict relationships given the ground truth boxes and object classes. 2) Scene Graph Classification (SGCls). This mode requires the SGG model to predict object classes and relationships given the ground truth boxes. 3) Scene Graph Detection (SGDet). This mode requires the SGG model to predict object classes, boxes, and relationships.
Evaluation metrics. Following <cit.>, we adopt three evaluation metrics: 1) Recall rate (R@K). R@K is one of the most commonly used evaluation metrics, which calculates the fraction of times the correct relationship is predicted in the top K confident relationship predictions. Typically, K is set to 20, 50, and 100, i.e., R@20, R@50, and R@100. 2) Mean recall rate (mR@K). mR@K calculates the mean of the R@K for each relationship. Compared with R@K, mR@K can comprehensively evaluate the model performance on all relationship categories, especially the tail relationships. 3) Mean of R@K and mR@K (MR@K). Due to the severely long-tailed distribution, the SGG model only needs to perform well on a few head categories to achieve a high R@K. Although some current works can achieve a high mR@K, they greatly sacrifice the R@K of the head categories, which is certainly not what we expect since the head categories account for significant proportions in realistic scenarios. We therefore aim to achieve a favorable tradeoff between R@K and mR@K, allowing the model to accommodate both head and tail relationships, which in turn enhances the practical value of the generated scene graph. For this purpose, we calculate the mean of R@K and mR@K, denoted as MR@K, to evaluate the model comprehensively.
Training and testing. We evaluate our model-agnostic method on the popular SGG backbones, including MotifsNet <cit.>, VCTree <cit.>, and Transformer <cit.>, in the repository provided by <cit.>. We follow most of the settings of this repository: 1) The object detector in the pipeline is the Faster R-CNN <cit.> with the backbone of ResNeXt-101-FPN <cit.>. The detector was trained with the VG training set and achieved 28.14 mAP on the VG test set. 2) The detector is then frozen and outputs the bounding boxes, categories, and features of the detected objects for the relationship classifier in the pipeline. The classifier is supervised by our proposed P-Loss and optimized by SGD. For MotifsNet <cit.> and VCTree <cit.>, the batch size and initial learning rate are set to 12 and 0.01, while these parameters in Transformer <cit.> are 16 and 0.001. We set α in Equation (<ref>) to 5 unless otherwise mentioned. 3) In the testing phase, the logits will first be augmented by Equation (<ref>) and then adjusted by the adjustment factors learned by Equation (<ref>) to obtain the final predictions. The β in Equation (<ref>) is set to 3.
§.§ Comparison with state-of-the-art
§.§.§ Backbones and baselines
Backbones.
We evaluate our proposed method with three popular SGG backbones, i.e., MotifsNet <cit.>, VCTree <cit.>, and Transformer <cit.>. Specifically, we first replace the loss function of the above backbones with the P-Loss to supervise the model training. We then leverage AL-Adjustment to optimize the logits output by the trained model during inference.
Baselines.
We classify existing baselines from two perspectives to comprehensively evaluate our proposed framework. 1) Debiasing perspective. We divide the baselines into four groups: resampling methods, reweighting methods, adjustment methods, and hybrid methods. Resampling methods include SegG <cit.> and TransRwt <cit.>. Reweighting methods include CogTree <cit.>, EBM-loss <cit.>, Loss-reweight <cit.>, FGPL <cit.>, GCL <cit.>, PPDL <cit.>, and LS-KD(Iter) <cit.>. Adjustment methods include TDE <cit.>, DLFE <cit.>, Logit-reweight <cit.>, and PKO <cit.>. Hybrid methods include BPL+SA <cit.>, HML <cit.>, RTPB <cit.>, NICE <cit.>, and CAME <cit.>. We group from this perspective because Stage 1 in our framework is a reweighting method and Stage 2 is an adjustment method, so our TsCM is a hybrid method. 2) Model perspective. We divide the baselines into two groups: model-agnostic and model-dependent methods. The former group includes TDE <cit.>, Loss-reweight <cit.>, Logit-reweight <cit.>, BPL+SA <cit.>, CogTree <cit.>, DLFE <cit.>, HML <cit.>, FGPL <cit.>, TransRwt <cit.>, SegG <cit.>, EBM-loss <cit.>, PPDL <cit.>, NICE <cit.>, PKO <cit.>, LS-KD (Iter) <cit.>, and the latter group includes GCL <cit.>, RTPB <cit.>, CAME <cit.>. Model-agnostic methods can generally be transferred to different SGG backbones with little effort and therefore generalize well in real-world applications.
§.§.§ Performance analysis
Quantitative results analysis. We report the quantitative results in Table <ref>, Table <ref>, Table <ref>, Table <ref>, and Table <ref>. Our proposed method achieves state-of-the-art performance on mR@K, the most popular metric for evaluating unbiased SGG. Moreover, the proposed method also shows gains on the R@K and MR@K metrics, which indicates that TsCM obtains a better tradeoff between head and tail categories.
From the quantitative results, we have the following observations: 1) Adjustment methods <cit.> are the most relevant to our proposed AL-Adjustment approach, since they share the same insight of encouraging the prediction of more informative tail relationships by adjusting the output logits. However, the adjustment factors in our method are adaptively learned from the observed data and can therefore support causal calibration, since they are sparse and independent. Benefiting from this, for instance, in PredCls mode, TsCM achieves 6.7%/6.5% performance gains on MotifsNet (Table <ref>)/VCTree (Table <ref>) compared with adjustment methods. 2) Reweighting methods <cit.> suppress partial relationships by modifying the loss function and are thus also highly related to our proposed P-Loss. The difference is that our method focuses on relationships with semantic confusion, which this group of baseline methods has not yet explored. Thanks to P-Loss providing a causal representation that can distinguish similar relationships, for example, in SGCls mode, TsCM surpasses the reweighting methods on MotifsNet (Table <ref>)/Transformer (Table <ref>) by 1.3%/2.5%. 3) Compared with hybrid methods <cit.>, for example, in SGDet mode, our method observes 2.8%/3.6% improvements on VCTree (Table <ref>)/Transformer (Table <ref>). We believe this is mainly because the two stages in our causal framework target different biases, whereas baseline methods mix different techniques that target the same bias. 4) We model the SCM with data-level confounders, so our method is model-agnostic. TsCM can therefore be used for any SGG backbone that pursues unbiased predictions. Compared with model-agnostic methods <cit.>, for instance, in SGDet mode, we observe 2.7%/2.8% improvements on MotifsNet (Table <ref>)/Transformer (Table <ref>). 5) Table <ref> shows that our method is slightly weaker than Logit-reweight <cit.> in terms of R@K. However, <cit.> provides biased predictions, resulting in poor performance in mR@K, i.e., 14.8% mR@K in the PredCls mode of the MotifsNet backbone <cit.>, while our method achieves 36.8% in the same setup. This indicates that our approach significantly outperforms traditional logit-adjusted methods <cit.> in terms of unbiased prediction. We attribute this to the inability of conventional logit-adjusted methods, particularly those employing non-heuristic prior knowledge, to effectively adjust severely biased models in the presence of the extremely long-tailed data of the SGG task. Compared with other methods, for instance, in PredCls mode, TsCM achieves 11.7%/13.8% gains on MotifsNet/Transformer. We believe that these improvements come from the sparse perturbations in our method, which do not perturb the SGG model strongly and thus preserve the performance of head categories while pursuing unbiased predictions. 6) Table <ref> shows that our method achieves a better tradeoff between R@K and mR@K. Apart from methods that benefit from the recall rate (e.g., Logit-reweight <cit.>), our method achieves 6.4%/7.4%/5.4% improvements on the backbones of MotifsNet/VCTree/Transformer. This illustrates that our method preserves head category performance while pursuing informative tail category predictions.
Qualitative results analysis. Fig. <ref> shows the qualitative results generated by the original MotifsNet <cit.> and by MotifsNet equipped with our TsCM. From these qualitative results, we have the following observations: 1) Our proposed method tends to predict more informative relationships, for instance, { <girl, standing on, ski> vs <girl, on, ski> } and { <dog, laying on, bed> vs <dog, on, bed> }. We believe these improvements are due, in part, to the fact that our proposed AL-Adjustment can refine less informative predictions into highly informative outputs, and we will discuss this in the ablation study. 2) Our method performs well in distinguishing similar relationships, for example, { <wire, attached to, surfboard 1> vs <surfboard 1, has, wire> }. Besides, for objects whose two bounding boxes do not intersect, our method can still generate meaningful relationships, e.g., { <person, behind, girl> vs <person, near, girl> }. It is evident from these improvements that our method can optimize the features of the model and classify relationships based on more than just simple cues (e.g., the distance between objects). We think the optimized features are achieved by our proposed P-Loss, which can learn representations that build causality between relationships.
§.§ Ablation Study
Exploring the contributions of the two stages. TsCM consists of P-Loss and AL-Adjustment, which eliminate the confounders S and L, respectively. We first ablate our proposed causal framework using different combinations of P-Loss and AL-Adjustment; the results are shown in Table <ref>. The results show that both components of TsCM contribute substantially to the performance. Specifically, AL-Adjustment significantly improves the mean recall rate of the model. For example, VCTree <cit.> equipped with AL-Adjustment has 11.2%/14.9%/16.5% gains on the metrics of mR@20/50/100. Although we only observe trivial boosts for P-Loss alone, its purpose is to obtain causal representations that distinguish similar relationships well. These trivial boosts can therefore be seen as by-products of the pursuit of causal representations. Table <ref> shows that the causal representation greatly enhances AL-Adjustment. For instance, AL-Adjustment equipped with P-Loss achieves 8.7%/8.4%/8.4% improvements on the backbone of VCTree <cit.>.
We also present the output logits, the augmented logits, and the adjusted logits in Fig. <ref> to show how P-Loss and AL-Adjustment adjust the SGG model. These results show that AL-Adjustment can adjust less informative predictions into highly informative ones. For example, <bear, on, chair> is adjusted to <bear, standing on, chair> (see Fig. <ref> (a)). Then, thanks to the proposed P-Loss, TsCM performs better than MotifsNet <cit.> at distinguishing similar relationships, i.e., similar relationships have larger logit gaps. As an example, shown in Fig. <ref> (b): in the output logits of MotifsNet <cit.>, carrying has a 1.31× logit gap over holding, but in TsCM it is 2.18×. Large logit gaps clarify the decision boundary between similar relationships, thereby overcoming semantic confusion. Finally, we can observe the issues discussed in Section <ref>, i.e., the logits of the foreground relationships are not very discriminative and the logits alternate between positive and negative values. Our logit augmentation method (Equation (<ref>)), however, unifies the logits to positive values and improves their discrimination, especially for the top few large logits. We argue that the logit augmentation procedure is critical for learning the adjustment factors. To prove this, we ablate our logit augmentation method and show the results in Table <ref>. These results demonstrate that our logit augmentation method provides significant improvements. In addition, the guidance term f_θ^*, y^bg(x) in Equation (<ref>) also contributes substantially to the results. We think this is because the guidance term makes the augmented logits f̃_θ^*, y(x) more discriminative.
Hyperparameters in TsCM. This subsection ablates two critical hyperparameters, α in Equation (<ref>) and β in Equation (<ref>), which control the sparsity of the causal intervention. Fig. <ref> varies α and β, and the results show that a suitable combination of hyperparameters is essential for our TsCM. We observe that the performance of the model increases when β∈ [1, 3] but decreases when β > 3. This can be explained by the facts shown in Table <ref>: for a false prediction, the logit corresponding to the ground truth category is often ranked near the top of all logits and, in most cases, is the second largest. Therefore, most false predictions can be corrected by adjusting the first few logits. Table <ref> shows the distribution of the ranks of the logits corresponding to the correct categories in the false predictions. Furthermore, adjusting logits that are ranked low often requires sharp adjustment factors (refer to the logits shown in Fig. <ref>), which, in effect, learns a set of factors that overfit the observed data. Fig. <ref> also shows that the model performs best when α=5. We think that when α is small, 𝒫_α will miss some similar relationships, and conversely, when α is large, many dissimilar relationships will be included in 𝒫_α. It is worth noting that α should be set according to the observed data. In other words, for the 50 relationship categories in VG150, α=5 is the optimal choice, but α may have other optimal values for other datasets.
Disentangled claim in Equation (<ref>). We set up four baseline loss functions to support our disentangled claims: 1) Cross-entropy loss ℓ. 2) Modified P-Loss ℓ̂^▹. This baseline loss function replaces 𝒫_α in Equation (<ref>) with 𝒫_α^▹, where 𝒫_α^▹ takes the α relationships with the largest feature distance (dissimilar relationships) as the relationship-population. 3) Modified P-Loss ℓ̂^0.7. This baseline loss function replaces 𝒫_α in Equation (<ref>) with 𝒫_α^0.7, where 𝒫_α^0.7 takes the α^' relationships with the largest feature distance as the relationship-population (α^'>α, α^' = 8). 4) Modified P-Loss ℓ̂^◃. This baseline loss function replaces 𝒫_α in Equation (<ref>) with 𝒫_α^◃, where 𝒫_α^◃ takes the α relationships with the largest feature distance belonging to the tail categories as the relationship-population. Table <ref> reports the model results supervised by the different loss functions. The results of ℓ, ℓ̂^◃, and ℓ̂^0.7 are very close, which shows that intervening in dissimilar tail relationships has very limited impact. Even with the possible inclusion of head categories, the supervised performance of ℓ̂^▹ is still close to that of ℓ and ℓ̂^◃. This further shows that intervening only in dissimilar relationships perturbs the model very little. However, P-Loss produces drastic changes because it intervenes in similar relationships. Hence, we argue that P-Loss can intervene in similar relationships sparsely. In other words, it eliminates the confounder S without affecting other confounders. As a result, the model trained with P-Loss can be roughly formulated as a disentangled factorization.
Disentangled claim in Equation (<ref>). This subsection designs a new metric, i.e., the mean Correction Rate (mC@K), to support our disentangled claims. mC@K calculates the class-balanced fraction of false relationship predictions that are corrected, where K has the same meaning as in the mainstream metrics (e.g., R@K and mR@K). In keeping with the claim to be demonstrated, Fig. <ref> shows mC@K on the tail categories, which allows us to evaluate the performance of AL-Adjustment in alleviating the long-tailed distribution problem. These results clearly show that AL-Adjustment adjusts a considerable number of false predictions of tail categories into correct ones, and, hence, the long-tailed distribution confounder can be removed by our proposed adjustment procedure. Taken together with the disentangled claim in Equation (<ref>), this naturally yields the disentangled claim in Equation (<ref>).
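For reference, a minimal sketch of how mC@K could be computed is given below. The per-instance record format (ground-truth predicate, prediction before adjustment, prediction after adjustment) and the exact class-balancing are illustrative assumptions rather than the precise definition used for Fig. <ref>.

from collections import defaultdict

def mean_correction_rate(records):
    """records: iterable of (gt_predicate, pred_before, pred_after) for the
       relationship instances ranked in the top K.  Returns the per-class
       average of the fraction of initially false predictions that the
       adjustment step turns into correct ones (illustrative definition)."""
    corrected, false_before = defaultdict(int), defaultdict(int)
    for gt, before, after in records:
        if before != gt:                      # false before adjustment
            false_before[gt] += 1
            if after == gt:                   # corrected by the adjustment
                corrected[gt] += 1
    rates = [corrected[c] / false_before[c] for c in false_before]
    return sum(rates) / len(rates) if rates else 0.0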
§ CONCLUSION
In this paper, we have proposed a novel causal modeling framework, TsCM, for unbiased scene graph generation, which decouples the causal intervention into two stages to eliminate the semantic confusion bias and the long-tailed distribution bias, where the former bias is rarely explored in existing debiasing methods. In stage 1, we analyzed the causal insufficiency of the SCM modeled for SGG and the sparsity of the relationship categories. On this basis, a causal representation learning method was proposed to achieve sparse interventions on the semantic confusion bias in the case of insufficient causality. As a result, this stage also provides a disentangled factorization. Benefiting from this factorization, stage 2 then proposes causal calibration learning to intervene sparsely and independently in the long-tailed distribution bias and thereby achieve unbiased predictions. Experiments were conducted on popular SGG backbones and the VG150 dataset, and our method achieved state-of-the-art debiasing performance. Furthermore, thanks to the sparse causal interventions, our method achieves a better tradeoff between the recall rate and the mean recall rate.
Although our method can remove multiple biases in the SGG task, it remains challenging to overcome unobservable biases. In future work, we will focus on exploring unobservable biases and on developing an automatic debiasing causal framework to pursue unbiased SGG predictions.
|
http://arxiv.org/abs/2307.03956v1 | 20230708112126 | The annealed parabolic Anderson model on a regular tree | [ "Frank den Hollander", "Daoyi Wang" ] | math.PR | [ "math.PR" ] |
[1]
Mathematical Institute, Leiden University, P.O. Box 9512, 2300 RA Leiden, The Netherlands
[email protected]
[2]
Mathematical Institute, Leiden University, P.O. Box 9512, 2300 RA Leiden, The Netherlands
[email protected]
The annealed parabolic Anderson model
on a regular tree
F. den Hollander
[1]
D. Wang
[2]
August 12, 2023
=========================================================
We study the total mass of the solution to the parabolic Anderson model on a regular tree with an i.i.d. random potential whose marginal distribution is double-exponential. In earlier work we identified two terms in the asymptotic expansion for large time of the total mass under the quenched law, i.e., conditional on the realisation of the random potential. In the present paper we do the same for the annealed law, i.e., averaged over the random potential. It turns out that the annealed expansion differs from the quenched expansion. The derivation of the annealed expansion is based on a new approach to control the local times of the random walk appearing in the Feynman-Kac formula for the total mass. In particular, we condition on the backbone to infinity of the random walk, truncate and periodise the infinite tree relative to the backbone to obtain a random walk on a finite subtree with a specific boundary condition, employ the large deviation principle for the empirical distribution of Markov renewal processes on finite graphs, and afterwards let the truncation level tend to infinity to obtain an asymptotically sharp asymptotic expansion.
MSC2010: 60H25, 82B44, 05C80.
Keywords: Parabolic Anderson model, Feynman-Kac formula, regular tree, double-exponential random potential, backbone of random walk, annealed Lyapunov exponent, variational formula.
Acknowledgment:
The research in this paper was supported by the Netherlands Organisation for Scientific Research through NWO Gravitation Grant NETWORKS-024.002.003.
§ INTRODUCTION AND MAIN RESULTS
Section <ref> provides background and motivation, Section <ref> lists notations, definitions and assumptions, Section <ref> states the main theorems, while Section <ref> places these theorems in their proper context.
§.§ Background and motivation
The parabolic Anderson model (PAM) is the Cauchy problem
∂_t u(x,t) = Δ_𝒳 u(x,t) + ξ(x) u(x,t), t>0, x ∈𝒳,
where t is time, 𝒳 is an ambient space, Δ_𝒳 is a Laplace operator acting on functions on 𝒳, and ξ is a random potential on 𝒳. Most of the literature considers the setting where 𝒳 is either ℤ^d or ℝ^d with d ≥ 1, starting with the foundational papers <cit.>, <cit.>, <cit.> and further developed through a long series of follow-up papers (see the monograph <cit.> and the survey paper <cit.> for an overview). More recently, other choices for 𝒳 have been considered as well:
(I)
Deterministic graphs (the complete graph <cit.>, the hypercube <cit.>).
(II)
Random graphs (the Galton-Watson tree <cit.>, <cit.>, the configuration model <cit.>).
Much remains open for the latter class.
The main target for the PAM is a description of intermittency: for large t the solution u(·,t) of (<ref>) concentrates on well-separated regions in 𝒳, called intermittent islands. Much of the literature focusses on a detailed description of the size, shape and location of these islands, and on the profiles of the potential ξ(·) and the solution u(·,t) on them. A special role is played by the case where ξ is an i.i.d. random potential with a double-exponential marginal distribution
ℙ(ξ(0) > u) = exp(-exp(u/ϱ)), u ∈ℝ,
where ϱ∈ (0,∞) is a parameter. This distribution turns out to be critical, in the sense that the intermittent islands neither grow nor shrink with time, and represents a class of its own.
In the present paper we consider the case where 𝒳 is an unrooted regular tree 𝒯. Our focus will be on the asymptotics as t→∞ of the total mass
U(t) = ∑_x ∈𝒯 u(x,t).
In earlier work <cit.>, <cit.> we were concerned with the case where 𝒳 is a rooted Galton-Watson tree in the quenched setting, i.e., almost surely with respect to the random tree and the random potential. This work was restricted to the case where the random potential is given by (<ref>) and the offspring distribution of the Galton-Watson tree has support in ℕ∖{1} with a sufficiently thin tail. In the present paper our focus will be on the annealed setting, i.e., averaged over the random potential. We derive two terms in the asymptotic expansion as t→∞ of the average total mass
⟨ U(t) ⟩ = ∑_x ∈𝒯⟨ u(x,t) ⟩,
where ⟨·⟩ denotes expectation with respect to the law of the random potential. It turns out that the annealed expansion differs from the quenched expansion, even though the same variational formula plays a central role in the two second terms.
The derivation in the annealed setting forces us to follow a different route than in the quenched setting, based on various approximations of 𝒯 that are more delicate than the standard approximation of ℤ^d (see <cit.>). This is the reason why we consider regular trees rather than Galton-Watson trees, to which we hope to return later. A key tool in the analysis is the large deviation principle for the empirical distribution of Markov renewal processes on finite graphs derived in <cit.>, which is recalled in Appendix <ref>.
§.§ The PAM on a graph
§.§.§ Notations and definitions
Let G = (V,E) be a simple connected undirected graph, either finite or countably infinite, with a designated vertex 𝒪 called the root. Let Δ_G be the Laplacian on G, i.e.,
(Δ_G f)(x) = ∑_y∈ V:{x,y}∈ E [f(y) - f(x)], x ∈ V, f: V→ℝ,
which acts along the edges of G. Let ξ := (ξ(x))_x ∈ V be a random potential attached to the vertices of G, taking values in ℝ. Our object of interest is the non-negative solution of the Cauchy problem with localised initial condition,
∂_t u(x,t) = (Δ_G u)(x,t) + ξ(x) u(x,t), x ∈ V, t>0,
u(x,0) = δ_𝒪(x), x ∈ V.
The quantity u(x,t) can be interpreted as the amount of mass at time t at site x when initially there is unit mass at 𝒪. The total mass at time t is U(t) = ∑_x ∈ V u(x,t). The total mass is given by the Feynman-Kac formula
U(t) = 𝔼_𝒪(exp[∫_0^t ξ(X_s) ds]),
where X=(X_t)_t ≥ 0 is the continuous-time random walk on the vertices V with jump rate 1 along the edges E, and ℙ_𝒪 denotes the law of X given X_0=𝒪. Let ⟨·⟩ denote expectation with respect to ξ. The quantity of interest in this paper is the average total mass at time t:
⟨ U(t) ⟩ = ⟨𝔼_𝒪(exp[∫_0^t ξ(X_s) ds])⟩.
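To make the Feynman-Kac representation concrete, the following minimal Monte Carlo sketch estimates U(t) for a fixed realisation of ξ on a small finite graph; the example graph, the potential values, the time horizon and the sample size are illustrative assumptions (on the infinite tree itself one would first have to truncate).

import math, random

def feynman_kac_mass(adjacency, xi, start, t, n_samples=10_000, seed=0):
    """Estimate U(t) = E_start[ exp( int_0^t xi(X_s) ds ) ] for the
       continuous-time random walk with jump rate 1 per edge on the finite
       graph `adjacency` (dict: vertex -> list of neighbours) with fixed
       potential `xi` (dict: vertex -> float)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        x, s, integral = start, 0.0, 0.0
        while True:
            rate = len(adjacency[x])                 # total jump rate = degree
            hold = rng.expovariate(rate)             # exponential sojourn time
            if s + hold >= t:
                integral += (t - s) * xi[x]
                break
            integral += hold * xi[x]
            s += hold
            x = rng.choice(adjacency[x])             # jump to a uniform neighbour
        total += math.exp(integral)
    return total / n_samples

# Illustrative example: a star with 4 leaves and an arbitrary fixed potential.
adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
xi = {0: 1.0, 1: -0.5, 2: 0.2, 3: -1.0, 4: 0.3}
print(feynman_kac_mass(adj, xi, start=0, t=1.0))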
§.§.§ Assumption on the potential
Throughout the paper we assume that the random potential ξ = (ξ(x))_x ∈ V consists of i.i.d. random variables with a marginal distribution whose cumulant generating function
H(u) = log⟨exp[u ξ(𝒪)]⟩
satisfies the following:
[Asymptotic double-exponential potential]
There exists a ϱ∈ (0,∞) such that
lim_u→∞ u H''(u) = ϱ.
[Double-exponential potential]
A special case of (<ref>) is when ξ(𝒪) has the double-exponential distribution in (<ref>), in which case
H(u) = logΓ(ϱ u + 1)
with Γ the gamma function.
By Stirling's approximation, (<ref>) implies
H(u) = ϱ u log(ϱ u) - ϱ u + o(u), u →∞.
Assumption <ref> is more than enough to guarantee existence and uniqueness of the non-negative solution to (<ref>) on any discrete graph with at most exponential growth (as can be inferred from the proof in <cit.>, <cit.> for the case G=ℤ^d). Since ξ is assumed to be i.i.d., we have from (<ref>) that
⟨ U(t) ⟩ = 𝔼_𝒪(exp[∑_x∈ V H(ℓ_t(x))]),
where
ℓ_t(x) = ∫_0^t 1{X_s = x} ds, x ∈ V, t ≥ 0,
is the local time of X at vertex x up to time t.
§.§.§ Variational formula
The following characteristic variational formula is important for the description of the asymptotics of ⟨ U(t)⟩. Denote by 𝒫(V) the set of probability measures on V. For p ∈𝒫(V), define
I_E(p) = ∑_{x,y}∈ E( √(p(x)) - √(p(y)) )^2,
J_V(p) = - ∑_x ∈ V p(x) log p(x),
and set
χ_G(ϱ) = inf_p ∈𝒫(V) [I_E(p) + ϱ J_V(p)], ϱ∈ (0,∞).
The first term in (<ref>) is the quadratic form associated with the Laplacian, which is the large deviation rate function for
the empirical distribution
L_t = (1/t) ∫_0^t δ_{X_s} ds = (1/t) ∑_x ∈ V ℓ_t(x) δ_x ∈𝒫(V)
(see e.g. <cit.>). The second term in (<ref>) captures the second order asymptotics of ∑_x ∈ V H(tp(x)) as t →∞ via (<ref>) (see e.g. <cit.>).
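To see the variational formula at work numerically, the sketch below evaluates I_E(p) + ϱ J_V(p) on a truncated regular tree and crudely minimises it by projected gradient descent. The truncation radius R, the degree d, the value of ϱ, the step size and the projection scheme are all illustrative choices, so the printed number is only a rough approximation (from above) of χ_G(ϱ) for G the truncated tree.

import numpy as np

def truncated_tree_edges(d, R):
    """Vertices and edges of the ball of radius R in the (d+1)-regular tree."""
    vertices, edges, frontier, next_id = [0], [], [(0, 0)], 1
    for _ in range(R):
        new_frontier = []
        for v, depth in frontier:
            children = d + 1 if v == 0 else d      # root has d+1 neighbours
            for _ in range(children):
                vertices.append(next_id)
                edges.append((v, next_id))
                new_frontier.append((next_id, depth + 1))
                next_id += 1
        frontier = new_frontier
    return vertices, edges

def functional(p, edges, rho):
    sq = np.sqrt(p)
    I = sum((sq[x] - sq[y]) ** 2 for x, y in edges)       # I_E(p)
    J = -np.sum(p[p > 0] * np.log(p[p > 0]))              # J_V(p)
    return I + rho * J

def approx_chi(d=4, R=3, rho=1.0, steps=5000, lr=0.01, eps=1e-12):
    vertices, edges = truncated_tree_edges(d, R)
    p = np.ones(len(vertices)) / len(vertices)            # start from uniform
    for _ in range(steps):
        grad = rho * (-np.log(p + eps) - 1.0)             # gradient of rho*J_V
        sq = np.sqrt(p + eps)
        for x, y in edges:                                # gradient of I_E
            grad[x] += 1.0 - sq[y] / sq[x]
            grad[y] += 1.0 - sq[x] / sq[y]
        p = np.maximum(p - lr * grad, eps)
        p /= p.sum()                                      # crude projection onto P(V)
    return functional(p, edges, rho)

print(approx_chi())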
§.§.§ Reformulation
The following lemma pulls the leading order term out of the expansion and shows that the second order term is controlled by the large deviation principle for the empirical distribution.
[Key object for the expansion]
If G=(V,E) is finite, then
⟨ U(t) ⟩ = exp[H(t) + o(t)] 𝔼_𝒪(exp[-ϱ t J_V(L_t)]), t →∞,
where J_V is the functional in (<ref>) and L_t is the empirical distribution in (<ref>).
Because ∑_x ∈ Vℓ_t(x) = t, we can rewrite (<ref>) as
⟨ U(t) ⟩ = 𝔼_𝒪(exp[∑_x∈ V H(ℓ_t(x))])
= exp[H(t)] 𝔼_𝒪(exp{t ∑_x∈ V (1/t) [H((ℓ_t(x)/t) t) - (ℓ_t(x)/t) H(t)]}).
Assumption <ref> implies that H has the following scaling property (see <cit.>):
lim_t→∞1/t [H(ct) - cH(t)] = ϱ c log c uniformly in c ∈ [0,1].
Hence the claim follows.
§.§ The PAM on an unrooted regular tree: annealed total mass for large times and key variational formula
In this section we specialise to the case where G = 𝒯 = (V,E), an unrooted regular tree of degree d+1 with d ≥ 2 (see Fig. <ref>). The main theorem of our paper is the following expansion.
[Growth rate of the total mass]
For any d ≥ 4, subject to Assumption <ref>,
(1/t) log⟨ U(t) ⟩ = ϱlog(ϱ t) - ϱ - χ_𝒯(ϱ) + o(1), t →∞,
where χ_𝒯(ϱ) is the variational formula in (<ref>) with G=𝒯.
The proof of Theorem <ref> is given in Sections <ref>–<ref> and makes use of technical computations collected in Appendices <ref>–<ref>.
The main properties of the key quantity
χ_𝒯(ϱ) = inf_p ∈𝒫(V) [I_E(p) + ϱ J_V(p)], ϱ∈ (0,∞),
are collected in the following theorem (see Fig. <ref>).
[Properties of the variational formula]
For any d ≥ 2 the following hold:
(a) The infimum in (<ref>) may be restricted to the set
𝒫_𝒪^↓(V) = {p ∈𝒫(V): argmax p = 𝒪,
p is non-increasing in the distance to 𝒪}.
(b) For every ϱ∈ (0,∞), the infimum in (<ref>) restricted to 𝒫_𝒪^↓(V) is attained, every minimiser p is such that p>0 on V, and ∂ S_R = ∑_x ∈∂ B_R(𝒪) p(x), R∈ℕ_0, satisfies
∑_R ∈ℕ_0 ∂ S_R log(R+1) ≤ (d+1)/ϱ,
where B_R(𝒪) is the ball of radius R centred at 𝒪.
(c) The function ϱ↦χ_𝒯(ϱ) is strictly increasing and globally Lipschitz continuous on (0,∞), with
lim_ϱ↓ 0 χ_𝒯(ϱ) = d-1, lim_ϱ→∞ χ_𝒯(ϱ) = d+1.
The proof of Theorem <ref> is given in Appendix <ref> (see Fig. <ref>).
§.§ Discussion
1.
Theorem <ref> identifies the scaling of the total mass up to and including terms that are exponential in t. The first two terms in the right-hand side of (<ref>) are the same as those of 1/t H(t) (recall (<ref>)). The third term is a correction that comes from the cost for X in the Feynman-Kac formula in (<ref>) to create an optimal local time profile somewhere in , which is captured by the minimiser(s) of the variational formula in (<ref>).
2.
For the quenched model on a rooted Galton-Watson tree we found in <cit.>, <cit.> that
(1/t) log U(t) = ϱlog(ϱ t ϑ/loglog t)
- ϱ - χ_𝒢𝒲(ϱ) + o(1), t →∞,
almost surely with respect to both the law of the potential and the law of the Galton-Watson tree 𝒢𝒲, where ϑ is the logarithm of the mean of the offspring distribution, and
χ_𝒢𝒲(ϱ) = inf_𝒯⊂𝒢𝒲 χ_𝒯(ϱ)
with χ_𝒯(ϱ) given by (<ref>) and the infimum running over all subtrees 𝒯 of 𝒢𝒲. This result was shown to be valid as soon as the offspring distribution has support in ℕ∖{1} (i.e., all degrees are at least 3) and has a sufficiently thin tail. The extra terms in (<ref>) come from the cost for X in the Feynman-Kac formula in (<ref>) to travel in a time of order o(t) to an optimal finite subtree with an optimal profile of the potential, referred to as intermittent islands, located at a distance of order ϱ t/loglog t from 𝒪, and to subsequently spend most of its time on that subtree. In this cost the parameter ϑ appears, which is absent in (<ref>). It was shown in <cit.> that if ϱ≥ 1/log (d_min+1), with d_min the minimum of the support of the offspring distribution, then the infimum in (<ref>) is attained at the unrooted regular tree with degree d_min+1, i.e., the minimal unrooted regular tree contained in 𝒢𝒲, for which ϑ = log d_min. Possibly the bound on ϱ is redundant.
3. In view of Lemma <ref> and the fact that Assumption <ref> implies (<ref>), we see that the proof of Theorem <ref> amounts to showing that, on 𝒯 = (V,E),
lim_t→∞ (1/t) log𝔼_𝒪(exp[-ϱ t J_V(L_t)]) = - χ_𝒯(ϱ).
We achieve this by deriving asymptotically matching upper and lower bounds. These bounds are obtained by truncating 𝒯 outside a ball of radius R, to obtain a finite tree 𝒯_R, deriving the t→∞ asymptotics for finite R, and letting R→∞ afterwards. For the lower bound we can use the standard truncation technique based on killing X when it exits 𝒯_R and applying the large deviation principle for the empirical distribution of Markov processes on finite graphs derived in <cit.>. For the upper bound, however, we cannot use the standard truncation technique based on periodisation of X beyond radius R, because 𝒯 is an expander graph (see <cit.> for a list of known techniques on ℤ^d and ℝ^d). Instead, we follow a route in which 𝒯 is approximated in successive stages by a version of 𝒯_R with a specific boundary condition, based on monitoring X relative to its backbone to infinity. This route allows us to use the large deviation principle for the empirical distribution of Markov renewal processes on finite graphs derived in <cit.>, but we need the condition d ≥ 4 to control the specific boundary condition in the limit as R →∞ (see Remark <ref> for more details). The reason why the approximation of 𝒯 by finite subtrees is successful is precisely because in the parabolic Anderson model the total mass tends to concentrate on intermittent islands.
4. Theorem <ref> shows that, modulo translations, the optimal strategy for L_t as t→∞ is to be close to a minimiser of the variational formula in (<ref>) restricted to 𝒫_𝒪^↓(V). Any minimiser p̅ is centred at 𝒪, strictly positive everywhere, non-increasing in the distance to 𝒪, and rapidly tending to zero. The following questions remain open:
(1)
Is the minimiser p̅ unique modulo translation?
(2)
Does p̅ satisfy lim_|x| →∞ |x|^-1 logp̅(x) = -∞, with |x| the distance between x and 𝒪?
(3)
Is p̅ radially symmetric?
(4)
Is ϱ↦χ_𝒯(ϱ) analytic on (0,∞)?
We expect the answer to be yes for (1) and (2), and to be no for (3) and (4).
§ PROOF OF THE MAIN THEOREM: LOWER BOUND
In this section we prove the lower bound in Theorem <ref>, which is standard and straightforward. In Section <ref> we obtain a lower bound in terms of a variational formula by killing the random walk when it exits _R. In Section <ref> we derive the lower bound of the expansion by letting R→∞ in the variational formula.
§.§ Killing and lower variational formula
Fix R∈ℕ. Let 𝒯_R be the subtree of 𝒯=(V,E) consisting of all the vertices that are within distance R of the root and all the edges connecting them. Put V_R=V(𝒯_R) and E_R = E(𝒯_R). Let τ_R = inf{t ≥ 0: X_t ∉ V_R} denote the first time that X exits 𝒯_R. It follows from (<ref>) that
⟨ U(t) ⟩≥𝔼_𝒪(exp[∑_x∈ V_R H(ℓ_t(x))] 1{τ_R>t}).
Since 𝒯_R is finite, Lemma <ref> gives
⟨ U(t) ⟩≥exp[H(t) + o(t)] 𝔼_𝒪(exp[-ϱ t J_V(L_t)] 1{τ_R>t})
with J_V the functional defined in (<ref>). As shown in <cit.> (see also <cit.>), the family of sub-probability distributions ℙ_𝒪(L_t ∈·, τ_R>t), t ≥ 0, satisfies the LDP on 𝒫^R(V) = {p ∈𝒫(V): supp(p) ⊂ V_R} with rate function I_E, with I_E the functional defined in (<ref>). This is the standard LDP for the empirical distribution of Markov processes. Therefore, by Varadhan's Lemma,
lim_t→∞ (1/t) log𝔼_𝒪(exp[-ϱ t J_V(L_t)] 1{τ_R>t}) = - χ^-_R(ϱ)
with
χ^-_R(ϱ) = inf_p ∈𝒫^R(V) [I_E(p) + ϱ J_V(p)],
where we use that p ↦ J_V(p) is bounded and continuous (in the discrete topology) on 𝒫^R(V). Note that
lim_t →∞ (1/t) logℙ_𝒪(τ_R>t) = - inf_p∈𝒫^R(V) I_E(p) < 0,
which is non-zero because any p ∈𝒫^R(V) is non-constant on V. The expression in (<ref>) is the same as (<ref>) with G=𝒯, except that p is restricted to V_R.
§.§ Limit of the lower variational formula
Clearly, R ↦χ^-_R(ϱ) is non-increasing. To complete the proof of the lower bound in Theorem <ref>, it remains to show the following.
lim sup_R→∞ χ^-_R(ϱ) ≤χ_𝒯(ϱ).
Pick any p ∈𝒫(V) such that I_E(p)<∞ and J_V(p)<∞. Let p^R be the projection of p onto V_R, i.e.,
p^R(x) = p(x), x ∈ int(V_R),
p^R(x) = ∑_y ≥ x p(y), x ∈∂ V_R,
where y ≥ x means that y is an element of the progeny of x in 𝒯. Since p^R∈𝒫^R(V), we have from (<ref>) that χ^-_R(ϱ) ≤ I_E(p^R) + ϱ J_V(p^R). Trivially, lim_R→∞ I_E(p^R) = I_E(p) and lim_R→∞ J_V(p^R) = J_V(p), and so we have lim sup_R→∞ χ^-_R(ϱ) ≤ I_E(p) + ϱ J_V(p). Since this bound holds for arbitrary p ∈𝒫(V), the claim follows from (<ref>).
§ PROOF OF THE MAIN THEOREM: UPPER BOUND
In this section we prove the upper bound in Theorem <ref>, which is more laborious and requires a more delicate approach than the standard periodisation argument used on ^d . In Section <ref> we obtain an upper bound in terms of a variational formula on a version of _R with a specific boundary condition. The argument comes in four steps, encapsulated in Lemmas <ref>–<ref> below:
(I)
Condition on the backbone of X (Section <ref>).
(II)
Project X onto a concatenation of finite subtrees attached to this backbone that are rooted versions of _R (Section <ref>).
(III)
Periodise the projected X to obtain a Markov renewal process on a single finite subtree and show that the periodisation can be chosen such that the local times at the vertices on the boundary of the finite subtree are negligible (Section <ref>).
(IV)
Use the large deviation principle for the empirical distribution of Markov renewal processes derived in <cit.> to obtain a variational formula on a single subtree (Section <ref>).
In Section <ref> we derive the upper bound of the expansion by letting R→∞ in the variational formula.
§.§ Backbone, projection, periodisation and upper variational formula
§.§.§ Backbone
For r ∈_0, let τ_r be the last time when X visits ∂ B_r(), the boundary of the ball of radius r around . Then the sequence = (X_τ_r)_r ∈_0 forms the backbone of X, running from to infinity.
[Condition on a backbone]
For every backbone and every t ≥ 0,
𝔼_𝒪(exp[∑_x∈ V() H(ℓ_t(x))])
= 𝔼_𝒪(exp[∑_x∈ V() H(ℓ_t(x))] | = ).
By symmetry, the conditional expectation in the right-hand side does not depend on the choice of . Indeed, permutations of the edges away from the root do not affect the law of ∑_x∈ V() H(ℓ_t(x)).
Turn the one-sided backbone into a two-sided backbone by adding a second backbone from to infinity. By symmetry, the choice of this second backbone is arbitrary, say '. Redraw by representing ' ∪ as and representing the rest of as a sequence of rooted trees ^∗ = (^∗_x)_x ∈ hanging off (see Fig. <ref>). In ^∗_x, the root sits at x and has d-1 downward edges, while all lower vertices have d downward edges.
Let X^=(X^_t)_t ≥ 0 be the random walk on ^ and (ℓ^_t(x))_x ∈^ the local times of X^ at time t.
[Representation of as a backbone with rooted trees]
For every and t ≥ 0,
𝔼_𝒪(exp[∑_x∈ V() H(ℓ_t(x))] | = )
= 𝔼_𝒪(exp[∑_x∈ V(^ ) H(ℓ^_t(x))]
| X^_∞ = + ∞).
Simply redraw as ^.
Note that X^ is a Markov process whose sojourn times have distribution EXP(d+1) and whose steps are drawn uniformly at random from the d+1 edges that are incident to each vertex.
§.§.§ Projection
For R ∈\{1}, cut into slices of length R, i.e.,
= ∪_k∈ (z + (kR+I)), I={0,1,…,R-1},
where z is to be chosen later. Apply the following two maps to ^ (in the order presented):
(i)
For each k ∈, fold ^∗_z+(kR+(R-1)) onto ^∗_z+(k+1)R by folding the d-1 edges downwards from the root on top of the edge in connecting z+(kR+(R-1)) and z+(k+1)R, and putting the d infinite rooted trees hanging off each of these d-1 edges on top of the rooted tree ^*_z+(k+1)R hanging off z+(k+1)R. Note that each of the d infinite rooted trees is a copy of ^*_z+(k+1)R.
(ii)
For each k ∈ℤ and m ∈{0,1,…,R-2}, cut off all the infinite subtrees in 𝒯^∗_z+(kR+m) whose roots are at depth (R-1)-m. Note that the total number of leaves after the cutting equals
(d-1) ∑_m=0^R-2 d^{(R-2)-m} = (d-1) d^{R-2} (1-d^{-(R-1)})/(1-d^{-1}) = d^{R-1} - 1,
which is the same as the total number of leaves of the rooted tree 𝒯^*_R of depth R-1 (i.e., with R generations) minus 1 (a fact we will need below).
By doing so we obtain a concatenation of finite units
_R=(_R[k])_k ∈
that are rooted trees of depth R-1 (see Fig. <ref>). Together with the two maps that turn ^ into _R, we apply two maps to X^:
(i)
All excursions of X^ in the infinite subtrees that are folded to the right and on top are projected accordingly.
(ii)
All excursions of X^ in the infinite subtrees that are cut off are replaced by a sojourn of X^_R in the tadpoles that replace these subtrees (see Fig. <ref>)
The resulting path, which we call X^_R = (X^_R_t)_t ≥ 0, is a Markov renewal process with the following properties:
* The sojourn times in all the vertices that are not tadpoles have distribution EXP(d+1).
* The sojourn times in all the tadpoles have distribution ψ, defined as the conditional distribution of the return time τ of the random walk on the infinite rooted tree ^* given that τ<∞ (see <cit.> for a proper definition).
* The transitions into the tadpoles have probability d/d+1, the transitions out of the tadpoles have probability 1 (because of the condition X^_∞ = + ∞).
* The transitions from z + (kR+(R-1)) to z+(k+1)R have probability d/d+1, while the reverse transitions have probability 1/d+1.
Write (ℓ^ _R_t(x))_x ∈ V__R to denote the local times of X^_R at time t.
[Projection onto a concatenation of finite subtrees]
For every R ∈\{1} and t ≥ 0,
𝔼_𝒪(exp[∑_x∈ V(^ ) H(ℓ^_t(x))]
| X^_∞ = + ∞)
≤𝔼_𝒪(exp[∑_x∈ V(_R) H(ℓ^ _R_t(x))]
| X^_R_∞ = + ∞).
The maps that are applied to turn X^ into X^_R are such that local times are stacked on top of each other. Since H defined in (<ref>) is convex and H(0)=0, we have H(ℓ) + H(ℓ') ≤ H(ℓ+ℓ') for all ℓ,ℓ' ∈_0, which implies the inequality.
§.§.§ Periodisation
Our next observation is that the condition {X^_R_∞ = + ∞} is redundant.
[Condition redundant]
For every R ∈\{1} and t ≥ 0,
𝔼_𝒪(exp[∑_x∈ V(_R) H(ℓ^ _R_t(x))] | X^_R_∞ = + ∞)
= 𝔼_𝒪(exp[∑_x∈ V(_R) H(ℓ^ _R_t(x))] ).
The event {X^_R_∞ = + ∞} has probability 1 because on the edges connecting the units of _R (see Fig. <ref>) there is a drift downwards. To see why, note that 1/(d+1) < 1/2 < d/(d+1) because d ≥ 2, and use that a one-dimensional random walk with drift is transient to the right <cit.>.
Since _R is periodic, we can fold X^_R onto a single unit _R, to obtain a Markov renewal process X^_R on _R (see Fig. <ref>) in which the transition from the top vertex to the right-most bottom vertex has probability 1/d+1, while the reverse transition has probability d/d+1. Clearly, the sojourn time distributions are not affected by the folding and therefore remain as above. Write (ℓ^ _R_t(x))_x ∈ V(_R) to denote the local times of X^_R at time t.
[Periodisation to a single finite subtree]
For every R ∈\{1} and t ≥ 0,
𝔼_𝒪(exp[∑_x∈ V(_R) H(ℓ^ _R_t(x))])
≤𝔼_𝒪(exp[∑_x∈ V(_R) H(ℓ^ _R_t(x))]).
The periodisation again stacks local time on top of each other.
Before we proceed we make a crucial observation, namely, we may still choose the shift z ∈{0,1,…,R-1} of the cuts of the two-sided backbone (recall Fig. <ref>). We will do so in such a way that the local time up to time t spent in the set ∂_ _R defined by
∂_ _R = all vertices at the top or at the bottom of a unit in _R
= all vertices marked by ∙ in Fig. <ref>
is at most t/R. After the periodisation these vertices are mapped to the set ∂_ _R defined by
∂_ _R = all vertices at the top or at the bottom of _R
= all vertices marked by ∙ in Fig. <ref>.
[Control on the time spent at the boundary]
For every R ∈\{1} and t ≥ 0,
𝔼_𝒪(exp[∑_x∈ V(_R) H(ℓ^ _R_t(x))])
≤𝔼_𝒪(exp[∑_x∈ V(_R) H(ℓ^ _R_t(x))]
1_{1/t∑_x ∈∂_ _Rℓ^_R_t(x) ≤ 1/R}).
For different z the sets of vertices making up ∂_R correspond to disjoint sets of vertices in ^ (see Fig. <ref>). Since ∑_x ∈^ℓ^_t(x) = t for all t ≥ 0, it follows that there exists a z for which ∑_x ∈∂_Rℓ^_t(x) ≤ t/R. Therefore the upper bound in Lemma <ref> can be strengthened to the one that is claimed.
§.§.§ Upper variational formula
Lemmas <ref>–<ref> provide us with an upper bound for the average total mass (recall ((<ref>)) on the infinite tree in terms of the same quantity on the finite tree-like unit _R with a specific boundary condition. Along the way we have paid a price: the sojourn times in the tadpoles are no longer exponentially distributed, and the transition probabilities into and out of the tadpoles and between the top vertex and the right-most bottom vertex are biased. We therefore need the large deviation principle for the empirical distribution of Markov renewal processes derived in <cit.>, which we can now apply to the upper bound.
Since _R is finite, Lemma <ref> gives
⟨ U(t) ⟩≤^H(t) + o(t) 𝔼_𝒪(^-ϱ J_V(_R)(L^ _R_t)
1_{L^_R_t(∂_ _R) ≤ 1/R})
with J_V the functional defined in (<ref>). The following lemma controls the expectation in the right-hand side.
[Scaling of the key expectation]
For every R ∈\{1},
lim_t→∞1/tlog_(^-ϱ t J_V(_R)(L^_R_t) 1_{L^_R_t(∂_ _R) ≤ 1/R}) = - χ^+_R(ϱ),
where
χ^+_R(ϱ) = inf_p ∈(V(_R))p(∂__R) ≤ 1/R{I^†_E(_R)(p) + ϱ J_V(_R)(p)},
with
I^†_E(_R)(p) = inf_β∈ (0,∞)inf_q ∈(V(_R))[K(β q) + K(p |β q)],
where
K(β q) = sup_q∈(V(_R))∑_x ∈ V(_R)β q(x) log(q(x)∑_y ∈ V(_R)π_x,yq(y)),
K(p |β q) = ∑_x ∈ V(_R)β q(x) (λ_x)(p(x)β q(x)),
with
(λ_x)(α) = sup_θ∈ℝ [αθ - λ_x(θ)], α∈ [0,∞),
λ_x(θ) = log∫_0^∞^θτψ_x(τ), θ∈ℝ,
where ψ_x=ψ when x is a tadpole, ψ_x = EXP(d+1) when x is not a tadpole, and π_x,y is the transition kernel of the discrete-time Markov chain on V(_R) embedded in X^_R.
Apply the large deviation principle derived in <cit.>, which we recall in Proposition <ref> in Appendix <ref>.
The expression in (<ref>) is similar to (<ref>) with G=_R, expect that the rate function I_E(_R) in (<ref>) is more involved than the rate function I_E in (<ref>).
§.§ Limit of the upper variational formula
The prefactor ^H(t)+o(1) in Lemma <ref> accounts for the terms ϱlog(ϱ t)-ϱ in the right-hand side of (<ref>) (recall <ref>). In view of Lemma <ref>, in order to complete the proof of the upper bound in Theorem <ref> it suffices to prove the following lemma.
For any d ≥ 4, lim inf_R→∞χ^+_R(ϱ) ≥χ_(ϱ).
The proof is given in Appendix <ref> and relies on two steps:
* Show that, for d ≥ 4,
I^†_E(_R)(p) ≥ I^+_E(_R)(p) + O(1/R)
with I^+_E(_R) a rate function similar to the standard rate function I_E(_R) given by (<ref>).
* Show that, d ≥ 2,
χ^ +_R(ϱ) = inf_p ∈(V(_R))p(∂_ _R) ≤ 1/R{I^+_E(_R)(p) + ϱ J_V(_R)(p)}
satisfies
lim inf_R→∞χ^ +_R(ϱ) ≥χ_(ϱ).
§ LARGE DEVIATION PRINCIPLE FOR THE LOCAL TIMES OF MARKOV RENEWAL PROCESSES
The following LDP, which was used in the proof of Lemma <ref>, was derived in <cit.>, and generalises the LDP for the empirical distribution of a Markov proceses on a finite state space derived in <cit.>. See <cit.> for the definition of the LDP.
Let Y=(Y_t)_t ≥ 0 be the Markov renewal process on the finite graph G=(V,E) with transition kernel (π_x,y)_{x,y}∈ E and with sojourn times whose distributions (ψ_x)_x ∈ V have support (0,∞). For t > 0, let L_t^Y denote the empirical distribution of Y at time t (see (<ref>)). Then the family (ℙ(L^Y_t ∈·))_t>0 satisfies the LDP on 𝒫(V) with rate t and with rate function I^†_E given by
I^†_E(p) = inf_β∈ (0,∞)inf_q ∈(V)[K(β q) + K(p |β q)]
with
K(β q) = sup_q∈(V)∑_x ∈ Vβ q(x) log(q(x)∑_y∈ Vπ_x,yq(y)),
K(p |β q ) = ∑_x ∈ Vβ q(x) (λ_x)(p(x)β q(x)),
where
[ (λ_x)(α) = sup_θ∈ℝ [αθ - λ_x(θ)], α∈ [0,∞),; λ_x(θ) = log∫_0^∞^θτψ_x(τ), θ∈ℝ. ]
The rate function I_E consist of two parts: K in (<ref>) is the rate function of the LDP on (V) for the empirical distribution of the discrete-time Markov chain on V with transition kernel (π_x,y)_{x,y}∈ E (see <cit.>), while K in (<ref>) is the rate function of the LDP on (0,∞) for the empirical mean of the sojourn times, given the empirical distribution of the discrete-time Markov chain. Moreover, λ_x is the cumulant generating function associated with ψ_x, and λ_x is the Legendre transform of λ_x, playing the role of the Cramèr rate function for the empirical mean of the i.i.d. sojourn times at x. The parameter β plays the role of the ratio between the continuous time scale and the discrete time scale.
§ SOJOURN TIMES: CUMULANT GENERATING FUNCTIONS AND LEGENDRE TRANFORMS
In Appendix <ref> we recall general properties of cumulant generating functions and Legendre transforms, in Appendices <ref> and <ref> we identify both for the two sojourn time distributions arising in Lemma <ref>, respectively.
§.§ General observations
Let λ be the cumulant generating function of a non-degenerate sojourn time distribution ϕ, and λ be the Legendre transform of λ (recall (<ref>)). Both λ and λ are strictly convex, are analytic in the interior of their domain, and achieve a unique zero at θ = 0, respectively, α=α_c with α_c= ∫_0^∞τϕ(τ). Furthermore, λ diverges at some θ_c ∈ (0,∞] and has slope α_c at θ=0. Moreover, if the slope of λ diverges at θ_c, then λ is finite on (0,∞).
The supremum in the Legendre transform defining (λ)(α) is uniquely taken at θ=θ(α) solving the equation
λ'(θ(α)) = α.
The tangent of λ with slope α at θ(α) intersects the vertical axis at (-λ)(α), i.e., putting
μ(α) = λ(θ(α))
we have
μ(α) = α (λ)'(α)-(λ)(α).
(See Fig. <ref>.) Note that by differentiating (<ref>) we get
μ'(α) = α(λ)”(α),
which shows that α↦μ(α) is strictly increasing and hence invertible, with inverse function μ^-1.
Note that by differentiating the relation (λ)(α) = αθ(α)-λ(θ(α)) we get
(λ)'(α) = θ(α).
A further relation that is useful reads
(λ)' ∘μ^-1 = λ^-1,
which follows because μ = λ∘θ by (<ref>) and (λ)' = θ by (<ref>).
§.§ Exponential sojourn time
If ϕ=EXP(d+1), then the cumulant generating function λ(θ) = log∫_0^∞ e^{θτ} ϕ(dτ) is given by
λ(θ) = log((d+1)/(d+1-θ)), θ < d+1,
λ(θ) = ∞, θ≥ d+1.
To find (λ)(α), we compute
∂/∂θ [αθ - log((d+1)/(d+1-θ))] = α - 1/(d+1-θ),
∂^2/∂θ^2 [αθ - log((d+1)/(d+1-θ))] = - 1/(d+1-θ)^2 < 0.
Hence the supremum in (<ref>) is uniquely taken at
θ(α) = d+1 - 1/α, α > 0,
so that
(λ)(α) = α(d+1) - 1 - log[α(d+1)], α>0.
Thus, λ and λ have the shape in Fig. <ref>, with θ_c = d+1 and α_c = 1/(d+1), and with lim_θ↑θ_c λ(θ) = ∞ and lim_θ↑θ_c λ'(θ) = ∞.
Note that μ has domain (0,∞) and range ℝ.
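As a sanity check on these closed forms, the short script below compares the explicit Legendre transform (λ)(α) with a direct numerical maximisation of θ ↦ αθ - λ(θ) over a fine grid; the value of d and the chosen values of α are illustrative.

import math
import numpy as np

d = 4

def lam(theta):                        # lambda(theta) for EXP(d+1), theta < d+1
    return math.log((d + 1) / (d + 1 - theta))

def legendre_closed(alpha):            # closed form of the Legendre transform
    return alpha * (d + 1) - 1 - math.log(alpha * (d + 1))

thetas = np.linspace(-50, d + 1 - 1e-6, 200_000)
lam_vals = np.array([lam(t) for t in thetas])
for alpha in (0.05, 1 / (d + 1), 0.5, 2.0):
    numeric = np.max(alpha * thetas - lam_vals)       # direct maximisation
    print(alpha, legendre_closed(alpha), numeric)     # the two columns should agree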
§.§ Non-exponential sojourn time
For ϕ=ψ the computations are more involved. Let ^*=(E,V) be the infinite rooted regular tree of degree d+1. Write for the root. Let X = (X_n)_n ∈_0 be the discrete-time simple random walk on ^*=(E,V) starting from . Write τ_ to denote the time of the first return of X to . Define r = ℙ_(τ_<∞). It is easy to compute r by projecting X on _0: r is the return probability to the origin of the random walk on _0 that jumps to the right with probability p = dd+1 and to the left with probability q = 1d+1, which equals p/q (see <cit.>). Thus, r= 1/d.
For y ∈^*, define h_y = ℙ_y(τ_ <∞). Then h_y can be explicitly calculated, namely,
h_y =
d^-|y|, y∈^*∖{},
1, y= .
Note that h is a harmonic function on ^* ∖, i.e., h_y = ∑_z∈^*π_y,z h_z, y∈^*∖. We can therefore consider the Doob-transform of X, which is the random walk with transition probabilities away from the root given by
σ̌_y,z =
d/d+1, z=y^↑,
1/d1/d+1, z≠ y^↑, {y,z}∈ E,
0, else,
y ∈^*∖{},
and transition probabilities from the root are given by
σ̌_,z =
1/d, {,z}∈ E,
0, else.
Thus, the Doob-transform reverses the upward and the downward drift of X.
Recall from Lemma <ref> that ψ is the distribution of τ_ conditional on {τ_<∞} and on X leaving at time 0.
Let λ(θ) = log∫_0^∞ e^{θτ} ψ(dτ). Then
e^{λ(θ)} = ((d+1-θ)/2) [1 - √(1- 4d/(d+1-θ)^2)], θ∈ (-∞,θ_c],
and e^{λ(θ)} = ∞ otherwise,
with θ_c = (√(d)-1)^2. The range of exp∘λ is (0,√(d)], with the maximal value uniquely taken at θ=θ_c.
To compute the moment-generating function of τ_𝒪, we consider the Doob-transform of X and its projection onto ℕ_0. Let p_2k = ℙ(τ_𝒪 = 2k). It is well-known that (see <cit.>)
G^p,q(s) = 𝔼(s^τ_𝒪 | τ_𝒪 <∞) = ∑_k ∈ℕ s^2k p_2k = (1/(2p)) [1- √(1-4pqs^2)], |s| ≤ 1.
Therefore we have
^λ(θ) = (^θτ_)
= ∑_k ∈ p_2k [(^θ EXP(d+1))]^2k-1
= ∑_k ∈ p_2k(d+1/d+1 - θ)^2k-1
= (d+1 -θ/d+1) G^p,q(s)
with
p = 1d+1, q = dd+1, s = d+1/d+1-θ.
Inserting (<ref>) into (<ref>), we get the formula for λ(θ). From the term in the square root we see that λ(θ) is finite if and only if θ≤θ_c = d+1-2√(d) = (√(d)-1)^2.
There is no easy closed form expression for (λ)(α), but it is easily checked that λ and λ have the shape in Fig. <ref>, with θ_c = (√(d)-1)^2 and α_c = ∫_0^∞τψ(τ)<∞, and with λ(θ_c) = log√(d)<∞ and λ'(θ_c)=∞, i.e., there is a cusp at the threshold θ_c, implying that λ is finite on (0,∞). It follows from (<ref>) that
lim_α→∞1/α (λ)(α) = lim_α→∞θ(α) = θ_c.
The function λ^-1∘log = (exp∘λ)^-1 is given by
(exp∘λ)^-1(β) = d+1 - β -d/β, β∈ (0,√(d) ].
The range of (exp∘λ)^-1 is (-∞,θ_c], with the maximal value θ_c uniquely taken at β = √(d).
We need to invert exp∘λ in (<ref>). Abbreviate χ = (d+1-θ)/2. Then
β = χ[1-√(1-d/χ^2)] ⟹ χ = (β^2+d)/(2β) ⟹ θ = d+1 - (β^2 + d)/β.
Note that (√(d),∞) is not part of the domain of (exp∘λ)^-1, even though the right-hand side of (<ref>) still makes sense (as a second branch). Note that μ has domain (0,∞) and range (-∞,√(d) ] (see Fig. <ref>).
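The closed form for exp∘λ and its inverse can be checked numerically; the sketch below verifies that the two formulas compose to the identity on (-∞,θ_c] and that the maximal value √(d) is attained at θ_c. The value of d is an illustrative choice.

import math

d = 4
theta_c = (math.sqrt(d) - 1) ** 2

def exp_lambda(theta):
    """e^{lambda(theta)} for the sojourn time in a tadpole, theta <= theta_c."""
    chi = (d + 1 - theta) / 2
    return chi * (1 - math.sqrt(1 - d / chi ** 2))

def exp_lambda_inverse(beta):
    """(exp o lambda)^{-1}(beta) for beta in (0, sqrt(d)]."""
    return d + 1 - (beta ** 2 + d) / beta

# The maximal value of exp(lambda) should be sqrt(d), attained at theta_c:
print(exp_lambda(theta_c), math.sqrt(d))
# Composition should return the identity on (-infinity, theta_c]:
for theta in (-5.0, -1.0, 0.0, 0.5, theta_c):
    print(theta, exp_lambda_inverse(exp_lambda(theta)))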
§ ANALYSIS OF THE VARIATIONAL PROBLEM ON THE INFINITE REGULAR TREE
In this appendix we prove Theorem <ref>. Appendix <ref> formulates two theorems that imply Theorem <ref>, Appendix <ref> provides the proof of these theorems. Recall the definition of (V), I_E(p) and J_V(p) from (<ref>). Set
χ_(ϱ) = inf_p ∈_(V) [I_E(p) + ϱ J_V(p)], ϱ∈ (0,∞),
where _(V) = {p ∈(V) argmax p = }. Since (V), I_E and J_V are invariant under translations, the centering at is harmless.
§.§ Two properties
For every ϱ∈ (0,∞) the infimum in (<ref>) is attained, and every minimiser p is strictly positive, non-increasing in the distance to the root, and such that
∑_N∈_0∂ S_R log (R+1) ≤d+1/ϱ,
∂ S_R = ∑_∂ B_R()p(x),
where B_R() is the ball of radius R around .
The function ϱ↦χ_(ϱ) is strictly increasing and globally Lipschitz continuous on (0,∞), with lim_ϱ↓ 0χ_(ϱ) = d-1 and lim_ϱ→∞χ_(ϱ) = d+1.
Theorems <ref>–<ref> settle Theorem <ref>. Their proof uses the following two lemmas.
For every ϱ∈ (0,∞), the infimum in (<ref>) may be restricted to p ∈_(V) such that J_V(p) ≤d+1ϱ.
Let δ_∈_(V) denote the point measure at . Then, for all ϱ∈ (0,∞),
χ_(ϱ) ≤ I_E(δ_) + ϱ J_V(δ_) = (d+1) + ϱ× 0 = d+1.
Since I_V ≥ 0, we may restrict the infimum in (<ref>) to p with J_V(p) ≤d+1/ϱ.
For every ϱ∈ (0,∞), there exists a c(ϱ) >0 such that the infimum in (<ref>) may be restricted to p∈𝒫_(V) such that J_V(p) ≥ c(ϱ).
Since J_V(p) = 0 if and only if p = δ_ is a point measure, it suffices to show that δ_ is not a minimiser of χ_(ϱ). To that end, for y ∈ V compute
∂/∂ p(y)[I_E(p) + ϱ J_V(p)] = 1 - ∑_z∼ y√(p(z)/p(y)) - ϱlog p(y) -ϱ.
Because p()>0, it follows that the right-hand side tends to -∞ as p(y) ↓ 0 for every y ∼. Hence, no p ∈_(V) with p(y) = 0 for some y ∼ can be a minimiser of (<ref>), or be the weak limit point of a minimising sequence. In particular, δ_ cannot.
§.§ Proof of the two properties
First observe that (V) and J_V are invariant under permutations, i.e., for any p ∈(V) and any relabelling π of the vertices in V, we have π p ∈(V) and J_V(π p)=J_V(p). The same does not hold for I_E, but we can apply permutations such that I_E(π p) ≤ I_E(p).
1.
Pick any p ∈(V). Pick any backbone = {x_0, x_1,⋯} that runs from x_0 = to infinity. Consider a permutation π that reorders the vertices in such that {(π p)(x)}_x ∈ becomes non-increasing. Together with the reordering, transport all the trees that hang off as well. Since π p is non-increasing along , while all the edges that do not lie on have the same neighbouring values in p and in π p, we have
I_E(π p) ≤ I_E(p).
Indeed,
12 [I_E(p) - I_E(π p)] = ∑_k ∈_0√((π p)(x_k) (π p)(x_k+1))
- ∑_k ∈_0√(p(x_k)p(x_k+1)),
where we use that p(x_0) = (π p)(x_0) (because p(x_0) ≥ p(x_k) for all k∈) and ∑_k∈ p(x_k) = ∑_k∈ (π p)(x_k). The right-hand side of (<ref>) is ≥ 0 by the rearrangement inequality for sums of products of two sequences <cit.>. In fact, strict inequality in (<ref>) holds unless p is constant along . But this is impossible possible because it would imply that p() = 0 and hence p(x) = 0 for all x ∈ V. Thus, p and being arbitrary, it follows from (<ref>) that any minimiser or minimising sequence must be non-increasing in the distance to . Indeed, if it were not, then there would be a along which the reordering would lead to a lower value of I_E+ϱ J_V. Hence we may replace (<ref>) by
χ_(ϱ) = inf_p ∈_^↓(V) [I_E(p) + ϱ J_V(p)], ϱ∈ (0,∞),
with _^↓(V) defined in (<ref>).
2.
Let p ∈_^↓(V). Estimate
J_V(p) = ∑_R ∈_0∑_x ∈∂ B_R() [-p(x)log p(x)]
≥∑_R ∈_0∑_x ∈∂ B_R()[-p(x)log(1R+1)],
where we use that p(x) ≤1R+1 for all x ∈∂ B_R(). Hence
J_V(p) ≥∑_R ∈_0∂ S_R log(R+1)
with ∂ S_R = ∑_x ∈∂ B_R() p(x). By Lemma <ref>, J_V(p) ≤d+1/ϱ, and so
∑_R ∈_0∂ S_R log(R+1) ≤d+1/ϱ.
The computation in (<ref>) shows that any p for which there exist z ∼ y with p(z)>0 and p(y)=0 cannot be minimiser nor a weak limit point of a minimising sequence. Hence all minimisers or weak limit points of minimising sequences are strictly positive everywhere.
3.
Take any minimising sequence (p_n)_n∈ of (<ref>). By (<ref>), lim_R→∞∑_x ∉ B_R() p_n(x) = 0 uniformly in n∈, and so (p_n)_n∈ is tight. By Prokhorov's theorem, tightness is equivalent to (p_n)_n∈ being relatively compact, i.e., there is a subsequence (p_n_k)_k∈ that converges weakly to a limit p∈_^↓(V). By Fatou's lemma, we have lim inf_k→∞ I_E(p_n_k) ≥ I_E(p) and lim inf_k→∞ J_V(p_n_k) ≥ J_V(p). Hence
χ_(ϱ) = lim_k →∞ [I_E(p_n_k) + ϱ J_V(p_n_k)] ≥ I_E(p) + ϱ J_V(p).
Hence p is a minimiser of (<ref>).
The proof uses approximation arguments.
1.
We first show that ϱ↦χ_(ϱ) is strictly increasing and globally Lipschitz. Pick ϱ_1 < ϱ_2. Let p̅_ϱ_1 be any minimiser of (<ref>) at ϱ_1, i.e.,
χ_(ϱ_1) = I_E(p̅_ϱ_1) + ϱ_1 J_V(p̅_ϱ_1).
Estimate
[I_E(p̅_ϱ_1) + ϱ_1 J_V(p̅_ϱ_1)]
= [I_E(p̅_ϱ_1) + ϱ_2 J_V(p̅_ϱ_1)] - (ϱ_2 - ϱ_1)J_V(p̅_ϱ_1)
≥χ_(ϱ_2) - (ϱ_2 - ϱ_1) J_V(p̅_ϱ_1)
≥χ(ϱ_2) - (ϱ_2 - ϱ_1) d+1ϱ_1,
where we use Lemma <ref>. Therefore
χ_(ϱ_2) - χ_(ϱ_1) ≤ (ϱ_2-ϱ_1) d+1ϱ_1.
Similarly, let p̅_ϱ_2 be any minimiser of (<ref>) at ϱ_2, i.e.,
χ_(ϱ_2) = I_E(p̅_ϱ_2) + ϱ_2 J_V(p̅_ϱ_2).
Estimate
[I_E(p̅_ϱ_2) + ϱ_2 J_V(p̅_ϱ_2)]
= [I_E(p̅_ϱ_2) + ϱ_1 J_V(p̅_ϱ_2)] + (ϱ_2 - ϱ_1) J_V(p̅_ϱ_2)
≥χ_(ϱ_1) + (ϱ_2 - ϱ_1) J_V(p̅_ϱ_2)
≥χ_(ϱ_1) + (ϱ_2 - ϱ_1) c(ϱ_2),
where we use Lemma <ref>. Therefore
χ_(ϱ_2) - χ_(ϱ_1) ≥ c(ϱ_2)(ϱ_2 - ϱ_1).
2.
Because χ_(ϱ) ≤ d+1 for all ϱ∈ (0,∞), it follows that lim_ϱ→∞χ_(ϱ) ≤ d+1. To obtain the reverse inequality, let p_ϱ be any minimiser of (<ref>) at ϱ. By Lemma <ref>, we may assume that J_V(p_ϱ) ≤d+1/ϱ. Hence lim_ϱ→∞ J_V(p_ϱ) = 0, and consequently lim_ϱ→∞p_ϱ= δ_ weakly. Therefore, by Fatou's lemma, lim_ϱ→∞χ_(ϱ) = lim_ϱ→∞ [I_E(p) + ϱ J_V(p)] ≥lim inf_ϱ→∞ I_E(p_ϱ) ≥ I_E(δ_) = d+1.
3.
To prove that lim_ϱ↓ 0χ_(ϱ) ≤ d-1, estimate
χ_(ϱ) ≤inf_p ∈_^↓(V)(p) ⊆ B_R() [I_E(p)+ϱ J_V(p)],
R ∈_0.
Because
sup_p ∈_^↓(V)(p) ⊆ B_R() J_V(p) = J_V(p_R) = log |B_R()|,
R ∈_0,
with
p_R(x) =
|B_R()|^-1, x ∈ B_R(),
0, else,
it follows that
lim_ϱ↓ 0χ_(ϱ)
≤inf_p ∈_^↓(V)(p) ⊆ B_R() I_E(p)
≤ I_E(p_R), R ∈_0.
Compute (recall (<ref>))
I_E(p_R) = |∂ B_{R+1}(𝒪)|/|B_R(𝒪)|, R ∈ℕ_0.
Inserting the relations
|∂ B_R(𝒪)| = 1 for R=0 and |∂ B_R(𝒪)| = (d+1)d^{R-1} for R ∈ℕ,
|B_R(𝒪)| = ∑_{R'=0}^R |∂ B_{R'}(𝒪)| = 1 + ((d+1)/(d-1))(d^R-1),
R ∈ℕ_0,
we get
I_E(p_R) = (d-1)(d+1)d^R/((d+1)d^R-2).
Hence lim_R→∞ I_E(p_R) = d-1, and so lim_ϱ↓ 0 χ_𝒯(ϱ) ≤ d-1.
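A quick numerical check of this computation (with an illustrative choice of d) confirms that both the ratio |∂ B_{R+1}(𝒪)|/|B_R(𝒪)| and the closed form above tend to d-1:

d = 4
for R in range(0, 12):
    boundary_next = (d + 1) * d ** R                  # |∂B_{R+1}(O)|
    ball = 1 + (d + 1) * (d ** R - 1) // (d - 1)      # |B_R(O)|
    ratio = boundary_next / ball                      # I_E(p_R)
    closed_form = (d - 1) * (d + 1) * d ** R / ((d + 1) * d ** R - 2)
    print(R, ratio, closed_form)                      # both tend to d - 1 = 3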
4.
To prove that lim_ϱ↓ 0χ_(ϱ) ≥ d-1, note that because J_V ≥ 0 we can estimate
lim_ϱ↓ 0χ_(ϱ) ≥inf_p ∈_^↓(V) I_E(p).
It therefore suffices to show that
inf_p ∈_^↓(V) I_E(p) ≥ d-1,
i.e., (p_R)_R ∈_0 is a minimising sequence of the infimum in the left-hand side. The proof goes as follows. Write (recall (<ref>))
I_E(p) = 12 ∑_x,y ∈ Vx ∼ y(√(p(x)) - √(p(y)) )^2
= 12 ∑_x,y ∈ Vx ∼ y[p(x) + p(y) - 2 √(p(x)p(y)) ]
= (d+1) - ∑_x,y ∈ Vx ∼ y√(p(x)p(y)).
Since is a tree, each edge can be labelled by the end-vertex that is farthest from . Hence the sum in the right-hand side can be written as
∑_x ∈ V ∖ 2√(p(x)p(x^↓)),
where x^↓ is the unique neighbour of x that is closer to than x. Since 2√(p(x)p(x^↓))≤ p(x) + p(x^↓), it follows that
∑_x ∈ V ∖ 2√(p(x)p(x^↓))≤∑_x ∈ V ∖ p(x) + ∑_x ∈ V ∖ p(x^↓)
= [1-p()] + 1.
Therefore
I_E(p) ≥ d - 1 + p(),
which settles the claim.
§ LARGE DEVIATION ESTIMATE FOR THE LOCAL TIME AWAY FROM THE BACKBONE
In this appendix we derive a large deviation principle for the total local times at successive depths of the random walk on ^ (see Fig. <ref>). This large deviation principle is not actually needed, but serves as a warm up for the more elaborate computations in Appendix <ref>.
For k∈_0, let V_k be the set of vertices in ^ that are at distance k from the backbone (see Fig. <ref>). For R ∈, define
[ ℓ^R_t(k) = ∑_x ∈ V_kℓ^_t(x), k = 0,1,…,R,; ℓ_t^R = ∑_k > R∑_x∈ V_kℓ^_t(x), k= R+1, ]
and
L_t^R = 1/t ((ℓ_t(k))_k=0^R, ℓ^R_t).
Abbreviate V^*_R = {0,1,…,R,R+1},
For every R ∈, (L_t^R)_t ≥ 0 satisfies the large deviation principle on (V^*_R) with rate t and with rate function I^†_R given by
I^†_R(p) = [√((d-1)p(0))-√(dp(1)) ]^2 + ∑_k=1^R-1[√(p(k))-√(dp(k+1)) ]^2
+ [√(p(R)+p(R+1)) - √(dp(R+1)) ]^2.
By monitoring the random walk on the tree in Fig. <ref> and projecting its depth on the vertices 0,1,…,R, respectively, R+1, we can apply the LDP in Proposition <ref> (see Fig. <ref>).
1.
The sojourn times have distribution EXP(d+1) at vertices k=0,1,…,R and distribution ψ at vertex k=R+1. The transition probabilities are
[ π_0,0 = 2d+1, π_0,1 = d-1d+1,; π_k,k+1 = 1d+1, π_k,k-1 = dd+1, k = 1,…,R,; π_R+1,R = 1. ]
Proposition <ref> therefore yields that (L_t^R)_t ≥ 0 satisfies the LDP on on (V^*_R) with rate t and with rate function I^†_R given by
I^†_R(p) = (d+1) ∑_k=0^R p(k) + inf_v V^*_R → (0,∞)sup_u V^*_R → (0,∞) L(u,v)
with
L(u,v) = - A - B - C,
where
A = ∑_k=1^R v(k) {1+log(du(k-1)+u(k+1)/u(k) p(k)/v(k))},
B = v(0) {1+log(2u(0)+(d-1)u(1)/u(0) p(0)/v(0))},
C = v(R+1) {log(u(R)/u(R+1))-(λ)(p(R+1)/v(R+1))}.
Here we use (<ref>) to compute A and B, and for C we recall that (λ) is the Legendre transform of the cumulant generating function λ of ψ computed in Lemma <ref>.
2.
We compute the infimum of L(u,v) over v for fixed u.
∙ For k=1,…,R,
∂ A/∂ v(k) = log(du(k-1)+u(k+1)/u(k) p(k)/v(k)),
⟹v̅_u(k) = p(k) du(k-1)+u(k+1)/u(k).
The second derivative is 1/v(k)>0.
∙ For k=0,
∂ B/∂ v(0) = log(2u(0)+(d-1)u(1)/u(0) p(0)/v(0)),
⟹v̅_u(0) = p(0) 2u(0)+(d-1)u(1)/u(0).
The second derivative is 1/v(0)>0.
∙ For k=R+1, the computation is more delicate. Define (recall (<ref>) in Appendix <ref>)
μ(α) = α (λ)^'(α) - (λ)(α).
The function μ has range (-∞,log√(d) ], with the maximal value uniquely taken at α=∞. Therefore there are two cases.
▸ u(R+1)/u(R) ≤√(d). Compute
∂ C/∂ v(R+1) = μ(p(R+1)/v(R+1)) - log(u(R+1)/u(R)),
⟹v̅(R+1) = p(R+1)/α_u(R+1)
with α_u(R+1) solving the equation
log(u(R+1)/u(R)) = μ(α_u(R+1)).
Since μ'(α) = α(λ)”(α) and λ is strictly convex (see Fig. <ref> in Appendix <ref>), μ is strictly increasing and therefore invertible. Consequently,
α_u(R+1) = μ^-1(log(u(R+1)/u(R))).
Putting (<ref>)–(<ref>) together, we get
L(u) = inf_v V^*_R → (0,∞) L(u,v)
= - ∑_k=1^R A_u(k) - B_u + C_u
with
A_u(k) = du(k-1)+u(k+1)/u(k) p(k), k = 1,…,R,
B_u = 2u(0)+(d-1)u(1)/u(0) p(0),
and
C_u = p(R+1)/α_u(R+1)[(λ)(α_u(R+1)) - log(u(R+1)/u(R))]
= p(R+1)/α_u(R+1)[(λ)(α_u(R+1)) - μ(α_u(R+1))]
= p(R+1) (λ)^'(α_u(R+1))
= p(R+1) ((λ)^'∘μ^-1)(log(u(R+1)/u(R))).
In (<ref>) in Appendix <ref> we showed that (λ)' ∘μ^-1 = λ^-1. Moreover, in (<ref>) in Appendix <ref> we showed that (λ^-1∘log) = S with
S(β) = d+1 - β - d/β, β∈ (0,√(d) ].
Since S has domain (0,√(d) ], C_u(R+1) is only defined when u(R+1)/u(R) ≤√(d), in which case
C_u = p(R+1) S(u(R+1)/u(R)).
▸ u(R+1)/u(R) > √(d). In this case ∂ C/∂ v(R+1)>0, the infimum is taken at v̅(R+1)=0, and hence (recall (<ref>))
C_u = p(R+1) (√(d)-1)^2 = p(R+1) S(√(d)).
Note that the right-hand side does not depend on u. The expressions in (<ref>)–(<ref>) can be summarised as
C_u = p(R+1) S(√(d)∧u(R+1)/u(R)).
3.
Next we compute the supremum over u of
L(u) = L(u,v̅_u) = - A_u - B_u + C_u.
with A_u = ∑_k=1^R A_u(k). We only write down the derivatives that are non-zero.
∙ For k=2,…,R-1,
- ∂ A_u/∂ u(k) = - p(k+1) d/u(k+1) - p(k-1) 1/u(k-1) + p(k) du(k-1)+u(k+1)/u(k)^2.
∙ For k=1,
- ∂ A_u/∂ u(1) = - p(2) d/u(2) + p(1) du(0)+u(2)/u(1)^2,
- ∂ B_u/∂ u(1) = - p(0) d-1/u(0).
∙ For k=R,
- ∂ A_u/∂ u(R) = - p(R-1) 1/u(R-1) + p(R) du(R-1)+u(R+1)/u(R)^2,
∂ C_u/∂ u(R) = p(R+1) [u(R+1)/u(R)^2 - d/u(R+1)]
1_{u(R+1)/u(R)≤√(d)}.
∙ For k=0,
-∂ A_u/∂ u(0) = - p(1) d/u(1),
-∂ B_u/∂ u(0) = p(0) (d-1)u(1)/u(0)^2.
∙ For k=R+1,
-∂ A_u/∂ u(R+1) = - p(R) 1/u(R),
∂ C_u/∂ u(R+1) = p(R+1) [-1/u(R) + du(R)/u(R+1)^2]
1_{u(R+1)/u(R)≤√(d)}.
All the first derivatives of A_u+B_u+C_u are zero when we choose
u̅(0) = √((d-1)p(0)), u̅(k) = √(d^kp(k)), k = 1,…,R,
u̅(R+1) = √(d^{R+1} p(R)p(R+1)/(p(R)+p(R+1))).
All the second derivatives are strictly negative, and so u̅ is the unique maximiser.
4.
Inserting (<ref>) into (<ref>), we get
L(u̅) = L(u̅,v̅_u̅) = - ∑_k=2^R-1 A_u̅(k)
- [A_u̅(1) + B_u̅] - A_u̅(R) + C_u̅
= -∑_k=2^R-1√(dp(k)) [√(p(k-1)) + √(p(k+1)) ]
- [2√(d(d-1)p(0)p(1)) + 2p(0) + √(dp(1)p(2)) ]
- [√(dp(R-1)p(R)) + √(p(R)/(p(R)+p(R+1))) √(dp(R)p(R+1)) ]
+ p(R+1) S(√(dp(R+1)/(p(R)+p(R+1))) ).
Recalling (<ref>), (<ref>) and (<ref>), and rearranging terms, we find the expression in (<ref>).
Note that I^†_R has a unique zero at p given by
p(0) = 1/2, p(k) = (1/2)(d-1)d^{-k}, k = 1,…,R, p(R+1) = (1/2) d^{-R}.
This shows that the fraction of the local time typically spent a distance k away from the backbone decays exponentially fast in k.
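A quick numerical verification (ours, not part of the original text): the displayed profile indeed sums to one and annihilates every square in I^†_R.

import math
def rate_IR(p, d, R):             # the rate function displayed in the proposition above
    val = (math.sqrt((d - 1) * p[0]) - math.sqrt(d * p[1])) ** 2
    val += sum((math.sqrt(p[k]) - math.sqrt(d * p[k + 1])) ** 2 for k in range(1, R))
    val += (math.sqrt(p[R] + p[R + 1]) - math.sqrt(d * p[R + 1])) ** 2
    return val
d, R = 3, 6
p = [0.5] + [0.5 * (d - 1) * d ** (-k) for k in range(1, R + 1)] + [0.5 * d ** (-R)]
print(sum(p), rate_IR(p, d, R))   # prints 1.0 (up to rounding) and 0.0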
§ ANALYSIS OF THE UPPER VARIATIONAL FORMULA
In this appendix we carry out the proof of the claims in Section <ref>, namely, we settle (<ref>) in Appendix <ref> and (<ref>) in Appendix <ref>. The computations carried out in Appendix <ref> guide us along the way.
§.§ Identification of the rate function for the local times on the truncated tree
To identify the rate function I^†_E(_R) in Lemma <ref>, we need to work out the two infima between braces in (<ref>). The computation follows the same line of argument as in Appendix <ref>, but is more delicate. We will only end up with a lower bound. However, this is sufficient for the upper variational formula.
To simplify the notation we write (recall Fig. <ref>):
(V_R,E_R) = vertex and edge set of _R without the tadpoles,
= top vertex of V_R,
⋆ = right-most bottom vertex of V_R,
∂ V_R = set of vertices at the bottom of V_R,
= set of tadpoles,
_x = tadpole attached to x ∈∂ V_R\⋆.
Note that ∂ V_R consists of ⋆ and the vertices to which the tadpoles are attached. Note that int(V_R) = V_R ∖∂ V_R includes .
1.
Inserting (<ref>) in Appendix <ref> into (<ref>)–(<ref>), we get
I^†_E(_R)(p) = (d+1) ∑_x∈ V_R p(x)
+ inf_β∈ (0,∞)inf_q ∈(V_R)sup_q∈(V_R) L(β,q,q| p)
with
L(β,q,q| p) = - A - B - C - D,
where
A = ∑_x ∈int(V_R)β q(x){1+log(∑_y ∼ xq(y)/q(x)p(x)/β q(x))},
B = ∑_x ∈∂ V_R\⋆β q(x){1+log(q(x^↑)
+ d q(_x)/q(x)p(x)/β q(x))},
C = β q(⋆) {1+log(q(⋆^↑) + d q()/q(⋆)p(⋆)/β q(⋆))},
D = ∑_x ∈β q(x){log(q(x^↑)/q(x))
- (λ)(p(x)/β q(x)) },
with λ the Legendre transform of the cumulant generating function of ψ (recall (<ref>)) and x^↑ the unique vertex to which x is attached upwards. (Recall that y ∼ x means that x and y are connected by an edge in E_R.) Note that A,B,C each combine two terms, and that A,B,C,D depend on p. We suppress this dependence because p is fixed.
2.
Inserting the parametrisation q = u/u_1 and q = v/v_1 with u,v V_R → (0,∞) and putting β q = v, we may write
I^†_E(^R)(p) = (d+1) ∑_x∈ V_R p(x) + inf_v V_R → (0,∞)sup_u V_R → (0,∞) L(u,v)
with
L(u,v) = - A - B - C - D,
where
A = ∑_x ∈int(V_R) v(x){1+log(∑_y ∼ xu(y)/u(x)p(x)/v(x))},
B = ∑_x ∈∂ V_R \⋆ v(x){1+log(u(x^↑)
+ d u(_x)/u(x)p(x)/v(x))},
C = v(⋆) {1+log(u(⋆^↑) + d u()/u(⋆)p(⋆)/v(⋆))},
D = ∑_x ∈v(x){log(u(x^↑)/u(x)) - (λ)(p(x)/v(x)) }.
Our task is to carry out the supremum over u and the infimum over v in (<ref>).
3.
First, we compute the infimum over v for fixed u. (Later we will make a judicious choice for u to obtain a lower bound.) Abbreviate
A_u(x) = ∑_y ∼ xu(y)/u(x) p(x), x ∈int(V_R),
B_u(x) = u(x^↑) + d u(_x)/u(x) p(x), x∈∂ V_R\⋆,
C_u(⋆) = u(⋆^↑) + d u()/u(⋆) p(⋆).
∙
For z ∈ V_R, the first derivatives of L are
z ∈int(V_R) ∂ L(u,v)/∂ v(z) = -log(A_u(z)/v(z)),
z ∈∂ V_R\⋆ ∂ L(u,v)/∂ v(z) = -log(B_u(z)/v(z)),
z = ⋆ ∂ L(u,v)/∂ v(z) = -log(C_u(z)/v(z)),
while the second derivatives of L equal 1/v(z)>0. Hence the infimum is uniquely taken at
x ∈int(V_R) v̅(x) = A_u(x),
x ∈∂ V_R \⋆ v̅(x) = B_u(x),
x = ⋆ v̅(x) = C_u(x).
∙ For z ∈, the computation is more delicate. Define (see (<ref>) in Appendix <ref>)
μ(α) = α (λ)^'(α) - (λ)(α).
The function μ has range (-∞,log√(d) ], with the maximal value uniquely taken at α=∞. Therefore there are two cases.
▸ u(x)/u(x^↑) ≤√(d):
Abbreviate α_u(z) = p(z)/v(z). For z ∈,
∂ L(u,v)/∂ v(z) = log(u(z)/u(z^↑))
+ (λ)(p(z)/v(z)) - p(z)/v(z) (λ)^'(p(z)/v(z))
= log(u(z)/u(z^↑)) - μ(α_u(z)),
∂^2 L(u,v)/v(z)^2 =p^2(z)/v^3(z) (λ)^”(p(z)/v(z)) >0,
where we use that λ, being a Legendre transform, is strictly convex. Hence the infimum is uniquely taken at
v̅(x) = p(x)/α_u(x), x ∈,
with α_u(x) solving the equation
log(u(x)/u(x^↑))
= μ(α_u(x)), x ∈.
Since μ'(α) = α(λ)”(α) and λ is strictly convex (see Fig. <ref> in Appendix <ref>), μ is strictly increasing and therefore invertible. Consequently,
α_u(x) = μ^-1(log(u(x)/u(x^↑))), x ∈.
Putting the above formulas together, we arrive at (recall (<ref>))
L(u) = inf_v V_R → (0,∞) L(u,v)
= - ∑_x ∈int(V_R) A_u(x) - ∑_x∈∂ V_R\⋆ B_u(x) - C_u(⋆)
+ ∑_x ∈ D_u(x)
with (recall (<ref>))
D_u(x) = - p(x)/α_u(x)[log(u(x^↑)/u(x)) - (λ)(α_u(x))]
= p(x)/α_u(x)[(λ)(α_u(x)) - μ(α_u(x))]
= p(x) (λ)^'(α_u(x))
= p(x) ((λ)^'∘μ^-1)(log(u(x)/u(x^↑))).
In (<ref>) in Appendix <ref> we show that (λ)' ∘μ^-1 = λ^-1. Moreover, in (<ref>) in Appendix <ref> we show that (λ^-1∘log) = S with
S(β) = d+1 - β - d/β, β∈ (0,√(d) ].
Since S has domain (0,√(d) ], D_u(x) is only defined when u(x)/u(x^↑) ≤√(d), in which case
D_u(x) = p(x) S(u(x)/u(x^↑)), x ∈.
▸ u(x)/u(x^↑) > √(d): In this case ∂ L(u,v)/∂ v(z) > 0, the infimum is uniquely taken at v̅(x)=0, and
D_u(x) = p(x) (√(d)-1)^2 = p(x) S(√(d)), x ∈,
where we use (<ref>). Note that the right-hand side does not depend on u.
4.
Next, we compute the supremum over u. The first derivatives of L are
z ∈int(V_R) \ ∂ L(u)/∂ u(z)
= ∑_y ∼ z u(y)/u^2(z) p(z) - ∑_y ∼ z1/u(y) p(y),
z = ∂ L(u)/∂ u()
= ∑_y ∼ u(y)/u()^2 p() -∑_y: y^↑ = 1/u(y)p(y)
- d/u(⋆) p(⋆),
z = ⋆ ∂ L(u)/∂ u(⋆)
= -1/u() p() + u(⋆^↑) + du()/u(⋆)^2 p(⋆),
z ∈∂ V_R \⋆ ∂ L(u)/∂ u(z)
= -1/u(z^↑) p(z^↑) + u(z^↑)+du(_z)/u(z)^2 p(z)
+ [u(_z)/u(z)^2 - d/u(_z)]p(_z)
1_{u(z)/u(z^↑)≤√(d)},
z ∈ ∂ L(u)/∂ u(z)
= -d/u(z^↑) p(z^↑)
+ [-1/u(z^↑) +du(z^↑)/u(z)^2] p(z)
1_{u(z)/u(z^↑)≤√(d)}.
The second derivatives of L are all <0. The first line in (<ref>) can be rewritten as
∑_y ∼ z u(y) [p(z)/u^2(z) - p(y)/u^2(y)],
which is zero when
u̅(x) = √(p(x)), x ∈ V_R.
Given the choice in (<ref>), the fifth line in (<ref>) is zero when
u̅(x) = √(dp(x^↑)p(x)/(dp(x^↑)+p(x))), x ∈.
Indeed, the derivative is strictly negative when the indicator is 0 and therefore the indicator must be 1. But the latter is guaranteed by (<ref>)–(<ref>), which imply that
u̅(x)/u̅(x^↑) = √(dp(x)/(dp(x^↑)+p(x)))≤√(d), x ∈.
Given the choice in (<ref>)–(<ref>), the fourth line in (<ref>) is also zero. Thus, only the second and third lines in (<ref>) are non-zero, but this is harmless because ,⋆ carry a negligible weight in the limit as R →∞, due to the constraint p(∂ V_R ∪) ≤ 1/R in Lemma <ref> (recall (<ref>)).
Inserting (<ref>)–(<ref>) into (<ref>) and using (<ref>), (<ref>), we get the following lower bound:
sup_u V_R → (0,∞) L(u)
≥ - ∑_x ∈int(V_R) A_u̅(x)
- ∑_x∈∂ V_R\⋆ B_u̅(x)
- C_u̅(⋆) + ∑_x ∈ D_u̅(x)
= - ∑_x ∈int(V_R)∑_y ∼ x√(p(y)p(x))
- ∑_x∈∂ V_R \⋆√(p(x))(√(p(x^↑))
+ d√(dp(x)p(_x)/(dp(x)+p(_x))))
-√(p(⋆))(√(p(⋆^↑))+ d√(p()))
+ ∑_x ∈ p(x) (d+1-√(d)[√(p(x)/(d p(x^↑) + p(x)))
+ √((d p(x^↑) + p(x))/p(x)) ]).
5.
Using the relation (d+1) p(x) = ∑_y∼ x p(x), x∈int(V_R), we get from (<ref>) that
I^†_E(^R)(p) ≥ K^1_R(p) + K^2_R(p)
with
K^1_R(p)
= ∑_x ∈int(V_R)∑_y ∼ x[p(x) - √(p(x)p(y)) ]
= ∑_{x,y}∈E_R(√(p(x)) - √(p(y)) )^2
+ [p()-√(p()p(⋆)) ] - ∑_x∈∂ V_R[ p(x) - √(p(x)p(x^↑)) ]
and
K^2_R(p)
= ∑_x∈∂ V_R \⋆[(d+1) p(x) - √(p(x))(√(p(x^↑))
+ d√(dp(x)p(_x)/(dp(x)+p(_x))))]
+ (d+1) p(⋆)-√(p(⋆))(√(p(⋆^↑)) + d√(p()))
+ ∑_x ∈ p(x) [d+1-√(d) (√(p(x)/(d p(x^↑) + p(x)))
+ √((d p(x^↑) + p(x))/p(x)) )].
The first sum in the right-hand side of K^1_R(p) equals the standard rate function I_E_R(p) given by (<ref>), with
E_R = E_R ∖{,⋆}
the set of edges in the unit _R without the tadpoles and without the edge {,⋆} (i.e., E_R = E(^*_R); recall Fig. <ref>). Rearranging and simplifying terms, we arrive at
I^†_E(^R)(p) ≥ I_E_R(p)+ K^3_R(p)
with
K^3_R(p) = S_∂ V_R \⋆(p) + S_,⋆(p) + S_(∂ V_R \⋆) ∪(p),
where
S_∂ V_R \⋆(p)
= d ∑_x∈∂ V_R \⋆ p(x),
S_,⋆(p)
= (√(p()) - √(p(⋆)))^2 + (d-1)[p(⋆) - √(p()p(⋆)) ],
S_(∂ V_R \⋆) ∪(p)
= - ∑_x∈∂ V_R \⋆ p(x) d√(dp(_x)/(dp(x)+p(_x)))
+ ∑_x∈∂ V_R \⋆ p(_x) (d+1-√(d) [√(p(_x)/(d p(x) + p(_x)))
+ √((d p(x) + p(_x))/p(_x)) ]).
6.
Since √(p()p(⋆))≤ (1/2)[p()+p(⋆)], the boundary constraint ∑_x∈∂ V_R ∪ p(x) ≤ 1/R implies that S_∂ V_R \⋆(p) + S_,⋆(p) = O(1/R). The same constraint implies that the first sum in S_(∂ V_R \⋆) ∪(p) is O(1/R). Hence
K^3_R(p) = O(1/R) + ∑_x∈∂ V_R \⋆ p(x) F(p(_x)/p(x))
with
F(w) = w (d+1-√(d) [√(w/(d+w)) + √((d+w)/w) ]).
The map w ↦ F(w) is continuous on (0,∞) with
F(w) = {[ -d√(w) + (d+1)w + O(w^{3/2}), w ↓ 0,; [(d+1)-2√(d) ] w + O(w^{-1}), w →∞. ].
From this we see that if d ≥ 4, then there exists a C ∈ (1,∞) such that
F(w)+C ≥(1-√(w) )^2, w ∈ [0,∞).
Hence we have the lower bound
K^3_R(p)
≥ O(1/R) + ∑_x∈∂ V_R \⋆
p(x) [-C + (1-√(p(_x)/p(x)) )^2]
= O(1/R) + ∑_x∈∂ V_R \⋆(√(p(x))-√(p(_x)) )^2.
Via (<ref>)–(<ref>), it follows that
I^†_E(^R)(p) ≥ O(1/R) + I_E_R(p), R ∈,
with I_E_R(p) the standard rate function given by (<ref>), with
E_R = E_R ∪[∪_x ∈∂ V_R ∖⋆{x,_x}]
the set of edges in the unit _R that is obtained from the unit _R by removing the edge {,⋆} (i.e., E_R = E(_R); recall Fig. <ref>). This completes the proof of (<ref>).
The condition d ≥ 4 is needed only in (<ref>). For d=2,3 we have F(w)+C ≥θ_c(1-√(w) )^2 with θ_c = d+1-2√(d)∈ (0,1). Consequently, the edges {x,_x}, x ∈∂ V_R∖⋆, carry a weight that is smaller than that of the edges in , which may cause the optimal p to stick to the boundary as R→∞, in which case we do not have (<ref>).
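As a numerical illustration of this dichotomy (ours, not part of the proof), the gap sup_w [(1-√w)^2 - F(w)] is finite for d = 4, so a constant C as in (<ref>) exists, whereas for d = 2, 3 it grows with the range of w because the prefactor d+1-2√d of the linear term is smaller than 1:

import numpy as np
def F(w, d):
    return w * (d + 1 - np.sqrt(d) * (np.sqrt(w / (d + w)) + np.sqrt((d + w) / w)))
w = np.linspace(1e-4, 200.0, 200000)
for d in (2, 3, 4):
    gap = ((1 - np.sqrt(w)) ** 2 - F(w, d)).max()
    print(d, gap)                 # d = 4: about 1.3; d = 2, 3: grows with the upper end of the grid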
§.§ Limit of the upper variational formula
Note that
_R ⊆,
with the infinite tree. Consequently,
I_E_R(p) = I_E()(p) - (d-1) ∑_x ∈∂ V_R ∖⋆ p(x),
∀ p ∈(V()) (p) = V(_R),
where the sum compensates for the contribution coming from the edges in that link the vertices in ∂ V_R ∖⋆ to the vertices one layer deeper in that are not tadpoles. Since this sum is O(1/R), we obtain (recall (<ref>))
χ^+_R(ϱ) = inf_p ∈(V(_R))p(∂__R) ≤ 1/R{I^†_E(_R)(p) + ϱ J_V(_R)(p)}
≥ O(1/R) + inf_p ∈(V())(p) = V(_R),
p(∂__R) ≤ 1/R{I_E()(p) + ϱ J_V()(p)}
≥ O(1/R) + χ_(ρ),
where the last inequality follows after dropping the constraint under the infimum and recalling (<ref>). This completes the proof of (<ref>).
|
http://arxiv.org/abs/2307.04168v1 | 20230709133056 | Possible open charm molecular pentaquarks from $Λ_cK^{(*)}/Σ_cK^{(*)}$ interactions | [
"Rui Chen",
"Qi Huang"
] | hep-ph | [
"hep-ph"
] |
[email protected]
[email protected]
^1Key Laboratory of Low-Dimensional Quantum Structures and Quantum Control of Ministry of Education, Department of Physics and Synergetic Innovation Center for Quantum Effects and Applications, Hunan Normal University, Changsha 410081, China
^2Department of Physics, Nanjing Normal University, Nanjing 210023, China
In this work, we adopt the one-boson-exchange model to study the Y_cK^(*) (Y_c=Λ_c, Σ_c) interactions. After considering both the S-D wave mixing effects and the coupled channel effects, we predict several possible open-charm molecular pentaquarks, i.e., the single Σ_cK^* molecular states with I(J^P)=1/2(1/2^-), 1/2(3/2^-) and 3/2(1/2^-), the coupled Λ_cK^*/Σ_cK^* molecular states with 1/2(1/2^-) and 1/2(3/2^-), and the coupled Σ_cK/Λ_cK^*/Σ_cK^* molecular state with 1/2(1/2^-). Meanwhile, we extend our study to the Y_cK̅^(*) interactions; our results suggest that the Σ_cK̅ system with I(J^P)=1/2(1/2^-), the Σ_cK̅^* systems with 1/2(1/2^-), 1/2(3/2^-), and 3/2(3/2^-), the coupled Λ_cK̅^*/Σ_cK̅^* system with 1/2(1/2^-), and the Σ_cK̅/Λ_cK̅^*/Σ_cK̅^* system with 1/2(1/2^-) can be prime molecular candidates.
12.39.Pn, 14.20.Pt, 13.75.Jz
Possible open charm molecular pentaquarks from Λ_cK^(*)/Σ_cK^(*) interactions
Qi Huang^2[Corresponding author]
August 12, 2023
=============================================================================
§ INTRODUCTION
In the past decades, the observations of X/Y/Z/P_c/T_cc structures have stimulated theorists' extensive interest in exploring the properties of exotic states. Among the possible configurations, the hadronic molecular state, which is composed of color-singlet hadrons, plays an important role in explaining the observed exotic structures. The main reason for introducing such a configuration is that many observed X/Y/Z/P_c/T_cc structures lie near specific mass thresholds of hadron pairs, which raises the question of whether these observations can be explained within the framework of the molecular state (one can see Refs. <cit.> for a detailed review). Thus, the study of hadronic molecular states has become an active and important research field in hadron physics. It is not only helpful for revealing the underlying structures of these near-threshold X/Y/Z/P_c/T_cc structures, but can also improve our knowledge of the non-perturbative behavior of quantum chromodynamics (QCD).
Very recently, the LHCb collaboration continued to report their observations of two open heavy flavor multiquark candidates, T_cs̅^a0(2900) and T_cs̅^a++(2900), where the superscript a means that their quantum numbers are both I(J^P)=1(0^+) <cit.>. For the T_cs̅^a0(2900), the discovered channel is D_s^+ π^-, the mass and width are 2892 ± 14 ± 15 MeV and 119 ± 26 ± 12 MeV, respectively, while for the T_cs̅^a++(2900), the discovered channel, the mass, and the width are D_s^+ π^+, 2921 ± 17 ± 19 MeV and 137 ± 32 ± 14 MeV, respectively. According to their channels, mass positions and quantum numbers, it is easy to guess that the T_cs̅^a0(2900) and T_cs̅^a++(2900) belong to the same isovector triplet. Furthermore, the LHCb collaboration also determined their averaged masses and decay widths, which are 2908 ± 11 ± 20 MeV and 136 ± 23 ± 11 MeV, respectively.
Due to the charged nature of the T_cs̅^a0(++)(2900), their minimal valence quark components are naturally inferred to be cs̅qq̅ (q=u, d). Since they are very close to the D^*K^* mass threshold, it is natural to conjecture whether the T_cs̅^a0(++)(2900) states can be isovector D^*K^* molecules with J^P=0^+. In fact, in our former work <cit.>, we could not only reproduce the D_s0^∗(2317) and D_s1(2460) in the S-wave DK and D^*K molecular scenario, but also find that the one-boson-exchange (OBE) effective potentials are strong enough to form loosely bound molecular states for the D^*K^* systems with I(J^P)=0(0^+, 1^+, 2^+) and 1(0^+). Therefore, the D^*K^* hadronic molecular explanation for the T_cs̅^a0(++)(2900) states cannot be excluded. In addition, there are other theoretical explanations of the T_cs̅^a0(++)(2900) states, such as the compact open-charm pentaquark <cit.> and the D^*ρ molecule <cit.>.
Besides the T_cs̅^a0(++)(2900), another two open-charm states, X_0(2900) and X_1(2900), which were observed by the LHCb collaboration in the D^-K^+ final states of the B^+→ D^+D^-K^+ decay process <cit.>, are also interesting. Their spin-parities J^P are 0^+ and 1^+, respectively. Because their mass positions are very close to the D̅^*K^* and D̅_1K mass thresholds, respectively, many theorists have proposed the X_0(2900) and X_1(2900) states as hadronic molecular states <cit.>. At present, the inner structures of the T_cs̅^a0(++)(2900) and X_0,1(2900) are still under discussion (one can see Refs. <cit.>).
As is well known, the light diquark in the heavy baryons Y_c=(Λ_c, Σ_c) has the same color structure 3̅_c as the light anti-quark in the heavy meson Qq̅ <cit.>. If the T_cs̅^a0(++)(2900) can be assigned as loosely bound hadronic molecular states composed of a charmed meson and a kaon, it is natural to conjecture whether there exist open charm molecular pentaquark counterparts of the T_cs̅^a0(++)(2900) near the thresholds of the Λ_cK^(*) and Σ_cK^(*), respectively. In this work, we search for such open charm molecular partners composed of Λ_cK^(*) and Σ_cK^(*), which can not only enrich the family of exotic states, but also help us to understand the nature of the newly observed T_cs̅^a0(++)(2900).
Apart from searching for possible Λ_cK^(*) and Σ_cK^(*) molecular states, in this work we also study the interactions between the S-wave charmed baryon Y_c=(Λ_c, Σ_c) and the anti-strange meson K̅^(*) by adopting the OBE model and considering both the S-D mixing effects and the coupled channel effects. After solving the coupled channel Schrödinger equations, we can search for possible charmed-strange molecular pentaquark counterparts of the X_0,1(2900). Our study will not only provide valuable information for the experimental search for exotic open charm hadronic molecular pentaquarks, but also give an indirect test of the molecular state picture for the T_cs̅^a0(++)(2900) and X_0,1(2900).
This paper is organized as follows. After this introduction, we introduce the relevant effective Lagrangians and the OBE model in Sec. <ref>. In Sec. <ref>, we present the OBE effective potentials and the corresponding numerical results. The paper ends with a summary in Sec. <ref>.
§ LAGRANGIANS AND OBE MODEL
In this work, we deduce the OBE effective potentials for the Y_cK^(*) systems by employing the effective Lagrangian approach at the hadronic level. The relevant Lagrangians describing the interactions between the heavy baryons and light mesons are constructed in terms of the heavy quark limit and chiral symmetry <cit.>, i.e.,
ℒ_ℬ_3̅ = l_B⟨ℬ̅_3̅σℬ_3̅⟩
+iβ_B⟨ℬ̅_3̅v^μ(𝒱_μ-ρ_μ)ℬ_3̅⟩,
ℒ_ℬ_6 = l_S⟨𝒮̅_μσ𝒮^μ⟩
-3/2g_1ε^μνλκv_κ⟨𝒮̅_μA_ν𝒮_λ⟩
+iβ_S⟨𝒮̅_μv_α(𝒱_ab^α-ρ_ab^α) 𝒮^μ⟩
+λ_S⟨𝒮̅_μF^μν(ρ)𝒮_ν⟩,
ℒ_ℬ_3̅ℬ_6 = ig_4⟨𝒮̅^̅μ̅A_μℬ_3̅⟩
+iλ_Iε^μνλκv_μ⟨𝒮̅_νF_λκℬ_3̅⟩+h.c..
Here, v=(1,0) is the four velocity, ρ_ba^μ=ig_VV_ba^μ/√(2), and F^μν(ρ)=∂^μρ^ν-∂^νρ^μ
+[ρ^μ,ρ^ν]. A_μ and 𝒱_μ stand for the axial current and vector current, respectively. They can be written as
A_μ = 1/2(ξ^†∂_μξ-ξ∂_μξ^†)=i/f_π∂_μP+…,
𝒱_μ = 1/2(ξ^†∂_μξ+ξ∂_μξ^†)
=i/2f_π^2[P,∂_μP]+…,
respectively. Here, ξ=exp(iP/f_π) and f_π=132 MeV. ℬ_3̅ and 𝒮_μ =-√(1/3)(γ_μ+v_μ)γ^5ℬ_6+ℬ_6μ^* denote the ground heavy baryons multiplets with their light quarks in the 3̅ and 6 flavor representation, respectively. The matrices ℬ_3̅, ℬ_6, P, and V read as
.[ ℬ_3̅ = ([ 0 Λ_c^+; -Λ_c^+ 0 ]), ℬ_6 = ([ Σ_c^++ Σ_c^+/√(2); Σ_c^+/√(2) Σ_c^0 ]),; P = ([ π^0/√(2)+η/√(6) π^+; π^- -π^0/√(2)+η/√(6) ]), V = ([ ρ^0/√(2)+ω/√(2) ρ^+; ρ^- -ρ^0/√(2)+ω/√(2) ]). ].
The effective Lagrangians describing the interactions between the strange mesons and light mesons are constructed in the SU(3) symmetry <cit.>, i.e.,
ℒ_PPV = ig/2√(2)⟨∂^μP(PV_μ-V_μP⟩,
ℒ_VVP = g_VVP/√(2)ϵ^μναβ⟨∂_μV_ν∂_αV_βP⟩,
ℒ_VVV = ig/2√(2)⟨∂^μV^ν(V_μV_ν-V_νV_μ)⟩.
After expanding Eqs. (<ref>)-(<ref>), we can further obtain
ℒ_σ = l_B⟨ℬ̅_3̅σℬ_3̅⟩
-l_S⟨ℬ̅_6σℬ_6⟩,
ℒ_P =
ig_1/2f_πε^μνλκv_κ⟨ℬ̅_6
γ_μγ_λ∂_νPℬ_6⟩
-√(1/3)g_4/f_π⟨ℬ̅_6γ^5
(γ^μ+v^μ)∂_μPℬ_3̅⟩+h.c.,
ℒ_V = 1/√(2)β_Bg_V⟨ℬ̅_3̅v·Vℬ_3̅⟩
-β_Sg_V/√(2)⟨ℬ̅_6v·Vℬ_6⟩
-λ_Ig_V/√(6)ε^μνλκv_μ⟨ℬ̅_6γ^5γ_ν(∂_λV_κ-∂_κV_λ)ℬ_3̅⟩+h.c.
-iλ g_V/3√(2)⟨ℬ̅_6γ_μγ_ν(∂^μV^ν-∂^νV^μ)
ℬ_6⟩,
ℒ_K^(*)K^(*)σ = g_σm_KK̅ Kσ-g_σm_K^*K̅^*· K^*σ,
ℒ_P KK^* = ig/4[(K̅^*μ K-K̅ K^*μ)(τ·∂_μπ+∂_μη/√(3)).
.+(∂_μK̅ K^*μ-K̅^*μ∂_μK)(τ·π+η/√(3))],
ℒ_V KK = ig/4[K̅∂_μK
-∂_μK̅K](τ·ρ^μ+ω^μ),
ℒ_V K^*K^* = ig/4[(K̅_μ^*∂^μK^*ν-∂^μK̅^*ν K_μ^*)(τ·ρ_ν+ω_ν).
.+(∂^μK̅^*νK_ν^*-K̅_ν^*∂^μK^*ν)
(τ·ρ_μ+ω_μ).
.+(K̅_ν^* K^*_μ-K̅_μ^*K^*_ν)
(τ·∂^μρ^ν+∂^μω^ν)],
ℒ_P K^*K^* = g_VVPε_μναβ∂^μK̅^*ν∂^αK^*β(τ·π+η/√(3)),
ℒ_V KK^* = g_VVPε_μναβ(∂^μK̅^*νK+K̅∂^μK^*ν)
(τ·∂^αρ^β+∂^αω^β).
Coupling constants in the above Lagrangians are estimated with the quark model <cit.>, l_S=-2l_B=7.3, g_1=(√(8)/3)g_4=1.0, β_Sg_V=-2β_Bg_V=12.0, λ_Sg_V=-2√(2)λ_Ig_V=19.2 GeV^-1, g_σ=-3.65, and g=12.00. g_VVP=3g^2/(32√(2)π^2f_π) <cit.>.
With these prepared effective Lagrangians, we can easily write down the scattering amplitudes for the B_1M_2→ B_3M_4 processes in the t-channel, where B_1 and B_3 stand for the initial and final baryons, respectively, and M_2 and M_4 stand for the initial and final mesons, respectively. The corresponding effective potentials can be related to the scattering amplitudes by the Breit approximation,
𝒱_E^B_1M_2→ B_3M_4(q) =
-ℳ(B_1M_2→ B_3M_4)/4√(m_B_1m_M_2m_B_3m_M_4).
Here, m_i is the mass of the interaction hadron. ℳ(B_1M_2→ B_3M_4) denotes the scattering amplitude for the B_1M_2→ B_3M_4 process by exchanging the light mesons (σ, π, η, ρ, and ω). Next, we perform the Fourier transformation to obtain the effective potentials in the coordinate space 𝒱(r),
𝒱_E(r) =
∫d^3q/(2π)^3e^iq·r𝒱_E(q)ℱ^2(q^2,m_E^2).
In order to compensate the off-shell effect of the exchanged meson, we introduce a monopole form factor ℱ(q^2,m_E^2)= (Λ^2-m_E^2)/(Λ^2-q^2) at every interactive vertex, where Λ, m_E, and q are the cutoff parameter, the mass and four-momentum of the exchanged meson, respectively. In our numerical calculations, we vary the cutoff value in the range of 0.8≤Λ≤5.0 GeV. According to the deuteron experience <cit.>, the reasonable cutoff value is taken around 1.00 GeV. In the following discussion, the loosely bound state with the cutoff value around 1.00 GeV can be recommended as the prime hadronic molecular candidate.
For the Λ_cK^(*) systems, the flavor wave function |I,I_3⟩ can be expressed as |1/2,1/2⟩=|Λ_c^+K^(*)+⟩ and |1/2,-1/2⟩=|Λ_c^+K^(*)0⟩. For the Σ_cK^(*) systems, their isospin I can be taken as 1/2 or 3/2. The corresponding flavor wave functions |I,I_3⟩ are
[ |1/2,1/2⟩ =
√(2/3)|Σ_c^++K^(*)0⟩
-1/√(3)|Σ_c^+K^(*)+⟩,; |1/2,-1/2⟩ =
1/√(3)|Σ_c^+K^(*)0⟩
-√(2/3)|Σ_c^0K^(*)+⟩, ]
[ |3/2,3/2⟩ = |Σ_c^++K^(*)+⟩,; |3/2,1/2⟩ =
1/√(3)|Σ_c^++K^(*)0⟩+
√(2/3)|Σ_c^+K^(*)+⟩,; |3/2,-1/2⟩ =√(2/3)|Σ_c^+K^(*)0⟩
+1/√(3)|Σ_c^0K^(*)+⟩,; |3/2,-3/2⟩ =
|Σ_c^0K^(*)0⟩, ]
respectively. When we consider the S-D wave mixing effects, the spin-orbit wave functions |^2S+1L_J⟩ are
.[ Y_cK[J^P=1/2^-]: |^2S_1/2⟩,; Y_cK^*[J^P=1/2^-]: |^2S_1/2⟩, |^4D_1/2⟩,; Y_cK^*[J^P=3/2^-]: |^4S_3/2⟩, |^2D_3/2⟩, |^4D_3/2⟩. ].
The general expressions of the spin-orbit wave functions |^2S+1L_J⟩ for the Y_cK^(*) systems read as
Y_cK: |^2S+1L_J⟩ = ∑_m_S,m_LC^J,M_1/2m_S,Lm_Lχ_1/2m|Y_L,m_L⟩,
Y_cK^*: |^2S+1L_J⟩ = ∑_m,m'^m_S,m_LC^S,m_S_1/2m,1m'C^J,M_Sm_S,Lm_Lχ_1/2mϵ^m'|Y_L,m_L⟩.
Here, C^J,M_1/2m_S,Lm_L, C^S,m_S_1/2m,1m', and C^J,M_Sm_S,Lm_L are the Clebsch-Gordan coefficients. χ_1/2m and Y_L,m_L stand for the spin wave function and the spherical harmonics function, respectively. ϵ is the polarization vector for the vector meson with ϵ_±^m=∓1/√(2)(ϵ_x^m±iϵ_y^m) and ϵ_0^m=ϵ_z^m, which satisfies ϵ_±1= 1/√(2)(0,±1,i,0) and ϵ_0 =(0,0,0,-1).
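For concreteness, the following short SymPy snippet (ours, not from the original text) expands the S-wave, spin-3/2 combination of a spin-1/2 baryon and a spin-1 meson, i.e. the |^4S_3/2⟩ state with M=1/2; for L=0 the orbital factor is simply Y_0,0.

from sympy import Rational
from sympy.physics.quantum.cg import CG
half = Rational(1, 2)
M = half                                   # total magnetic quantum number
for m in (half, -half):                    # baryon spin projection
    mp = M - m                             # meson spin projection
    if abs(mp) <= 1:
        coeff = CG(half, m, 1, mp, Rational(3, 2), M).doit()
        print(f"chi_({m}) eps^({mp}):", coeff)
# expected coefficients: sqrt(2/3) for chi_(1/2) eps^(0) and sqrt(1/3) for chi_(-1/2) eps^(1)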
§ THE OBE EFFECTIVE POTENTIALS AND THE NUMERICAL RESULTS
Following the above procedures, we can deduce the concrete OBE effective potentials for the Y_cK^(*) systems with different quantum configurations. After that, we adopt the obtained OBE effective potentials to solve the coupled channel Schrödinger equations and search for bound state solutions. A system with reasonable bound state solutions can be recommended as a good hadronic molecular candidate, where the binding energy ranges from several MeV to several tens of MeV and the root-mean-square (RMS) radius is a few fm or larger.
§.§ The Λ_cK^(*) systems
The total OBE effective potentials for the single Λ_cK system can be written as
V_Λ_cK→Λ_cK = l_Bg_σχ_3^†χ_1Y(Λ,m_σ,r)
+β_Bg_Vg/4χ_3^†χ_1Y(Λ,m_ω,r).
Here, we define
Y(Λ,m,r) = 1/(4π r)(e^{-mr}-e^{-Λ r}) - (Λ^2-m^2)/(8πΛ) e^{-Λ r}.
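As an illustration of how such potentials are used numerically (our own sketch with a hypothetical coupling strength and cutoff, not the code used for the tables below), one can implement Y(Λ,m,r), feed an attractive single-channel S-wave potential into a finite-difference radial Schrödinger equation, and look for a negative lowest eigenvalue; scanning the coupling and the cutoff then mimics the cutoff scans reported later. Natural units ħ=c=1 are used, with energies in GeV and distances in GeV^{-1}; the Λ_c, K and σ masses below are indicative.

import numpy as np
def Y(Lam, m, r):                          # the regularized exchange function defined above
    return (np.exp(-m * r) - np.exp(-Lam * r)) / (4 * np.pi * r) \
           - (Lam ** 2 - m ** 2) / (8 * np.pi * Lam) * np.exp(-Lam * r)
def lowest_level(V, mu, r_max=60.0, n=2000):
    r = np.linspace(0.0, r_max, n + 2)[1:-1]          # radial grid, u(0)=u(r_max)=0
    h = r[1] - r[0]
    kin = -1.0 / (2.0 * mu * h ** 2)
    H = np.diag(-2.0 * kin + V(r)) + kin * (np.eye(n, k=1) + np.eye(n, k=-1))
    return np.linalg.eigvalsh(H)[0]                   # lowest eigenvalue, in GeV
m_sigma, m_Lc, m_K = 0.60, 2.286, 0.494               # sigma, Lambda_c, K masses (GeV)
mu = m_Lc * m_K / (m_Lc + m_K)                        # reduced mass
g_eff, Lam = 15.0, 1.0                                # hypothetical strength and cutoff (GeV)
V = lambda r: -g_eff * Y(Lam, m_sigma, r)             # attractive sigma-exchange-like term
print(lowest_level(V, mu))                            # a negative value signals a bound state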
As shown in Eq. (<ref>), there exist the σ-exchange and ω-exchange interactions, which contribute in the intermediate range and the short range, respectively. The σ exchange provides an attractive interaction, whereas the ω exchange is repulsive. Here, the ρ-exchange interaction is strongly suppressed, since the Λ_c-Λ_c-ρ coupling is isospin forbidden. Since the KKπ(η) coupling is forbidden by spin-parity conservation, the pseudoscalar meson (π/η) exchange interactions are strongly suppressed as well.
After solving the Schrödinger equation, we do not find bound state solutions in the cutoff region 0.8≤Λ≤5.0 GeV. Thus, the OBE effective potential for the Λ_cK system is not strong enough to form a bound state.
For the single S-wave Λ_cK^* systems with J^P=1/2^- and 3/2^-, their OBE effective potentials are the same, i.e.,
V_Λ_cK^*→Λ_cK^* = l_Bg_σ(ϵ_2·ϵ_4^†)χ_3^†χ_1Y(Λ,m_σ,r)
+β_Bg_Vg/4(ϵ_2·ϵ_4^†)χ_3^†χ_1Y(Λ,m_ω,r).
When we consider the S-D wave mixing effects, the operator ϵ_2·ϵ_4^† will be replaced by the unit matrix ℐ=⟨^2S'+1L'_J'|ϵ_2·ϵ_4^†|^2S+1L_J⟩ in the numerical calculations, which indicates that the OBE effective potentials are exactly the same as those for the Λ_cK system with 1/2^-. In the cutoff region 0.8≤Λ≤5.0 GeV, we cannot find bound state solutions either.
In this work, we further perform the coupled channel analysis on the Λ_cK^*/Σ_cK^* interactions, the corresponding OBE effective potentials are
V_Λ_cK^*^C = ([ V_Λ_cK^*→Λ_cK^* V_Σ_cK^*→Λ_cK^*; V_Λ_cK^*→Σ_cK^* V_Σ_cK^*→Σ_cK^* ]),
with
V_Λ_cK^*→Σ_cK^* = 1/6g_4g_VVP/f_πℱ_1(r,σ,iϵ_2×ϵ_4^†)
Y(Λ_0,m_π0,r)
-1/6√(2)λ_Ig_Vg/m_K^*ℱ_2(r,σ,iϵ_2×ϵ_4^†)
Y(Λ_0,m_ρ0,r),
V_Σ_cK^*→Σ_cK^* = 1/2l_Sg_σχ_3^†χ_1
ϵ_2·ϵ_3^†Y(Λ,m_σ,r)
-g_1g_VVP/6√(2)f_πℱ_1(r,σ,iϵ_2×ϵ_4^†)
𝒢(I)Y(Λ,m_π,r)
-g_1g_VVP/18√(2)f_πℱ_1(r,σ,iϵ_2×ϵ_4^†)
Y(Λ,m_η,r)
+1/8β_Sg_Vgχ_3^†χ_1
ϵ_2·ϵ_3^†𝒢(I)Y(Λ,m_ρ,r)
+λ_Sg_Vg/8√(3)m_Σ_cχ_3^†χ_1
ϵ_2·ϵ_3^†𝒢(I)∇^2Y(Λ,m_ρ,r)
-λ_Sg_Vg/24√(3)m_K^*ℱ_2(r,σ,iϵ_2×ϵ_4^†)
𝒢(I)Y(Λ,m_ρ,r)
+1/8β_Sg_Vgχ_3^†χ_1
ϵ_2·ϵ_3^†Y(Λ,m_ω,r)
+λ_Sg_Vg/8√(3)m_Σ_cχ_3^†χ_1
ϵ_2·ϵ_3^†∇^2Y(Λ,m_ω,r)
-λ_Sg_Vg/24√(3)m_K^*ℱ_2(r,σ,iϵ_2×ϵ_4^†)
Y(Λ,m_ω,r).
Here, the value of the isospin factor 𝒢(I) is taken as 𝒢(I=1/2)=-2, 𝒢(I=3/2)=1. The variables in Eq. (<ref>) are Λ_0^2 =Λ^2-q_0^2, m_π0^2=m_π^2-q_0^2, m_ρ0^2=m_ρ^2-q_0^2 with q_0 =
M_Σ_c^2-M_Λ_c^2/2(M_Σ_c+M_K^*). And we define useful operators, i.e.,
ℱ_1(r,a,b) = χ_3^†(a·b∇^2
+S(r̂,a,b)
r∂/∂ r1/r∂/∂ r)χ_1,
ℱ_2(r,a,b) = χ_3^†(2a·b∇^2
-S(r̂,a,b)
r∂/∂ r1/r∂/∂ r)χ_1.
Here, the a·b and S(r̂,a,b) stand for the spin-spin interactions and the tensor force operators, respectively. The corresponding matrices elements can be obtained by sandwiched between the spin-orbit wave functions as presented in the Eq. (<ref>), i.e.,
iσ·(ϵ_2×ϵ_4^†) ↦ {[ ([ -2 0; 0 1 ]), J^P=1/2^-; ([ 1 0 0; 0 -2 0; 0 0 1 ]), J^P=3/2^- ].
S(r̂,σ,iϵ_2×ϵ_4^†) ↦ {[ ([ 0 -√(2); -√(2) -2 ]), J^P=1/2^-; ([ 0 1 2; 1 0 -1; 2 -1 0 ]), J^P=3/2^- ].
With these deduced effective potentials, we search for the bound state solutions for the coupled Λ_cK^*/Σ_cK^* systems in the cutoff range 0.8≤Λ≤5.0 GeV. In Table <ref>, we collect the corresponding numerical results, which include the cutoff dependence of the binding energy E, the root-mean-square radius r_RMS, and the probabilities P_i(%) for all the discussed channels.
For the coupled Λ_cK^*/Σ_cK^* system with I(J^P)=1/2(1/2^-), there exist four channels, the Λ_cK^*(^2S_1/2, ^4D_1/2) channels and the Σ_cK^*(^2S_1/2, ^4D_1/2) channels after considering both the S-D wave mixing effects and the coupled channel effects. As presented in Table <ref>, when the cutoff is taken as 1.56 GeV, the binding energy is -0.14 MeV, the RMS radius is 6.11 fm, and the probability for the Λ_cK^*(^2S_1/2) channel is 98.82%. As the cutoff Λ increases to 1.62 GeV, the binding energy becomes -11.57 MeV, the RMS radius becomes 1.12 fm, and the Λ_cK^*(^2S_1/2) is still the dominant channel with the probability around 93.18%. From the current numerical results, in the cutoff range around 1.60 GeV, we can obtain the weakly bound state with the reasonable loosely bound state solutions, and the dominant channel is the Λ_cK^*(^2S_1/2) with its probability over 90%. Since the cutoff value is close to the empirical value Λ∼ 1.00 GeV for the deuteron <cit.>, we conclude that the coupled Λ_cK^*/Σ_cK^* systems with I(J^P)=1/2(1/2^-) can be recommended as a good hadronic molecular candidate.
For the coupled Λ_cK^*/Σ_cK^* system with I(J^P)=1/2(3/2^-), there include the Λ_cK^*(^4S_3/2, ^2D_3/2, ^4D_3/2) channels and the Σ_cK^*(^4S_3/2, ^2D_3/2, ^4D_3/2) channels when we consider both the coupled channel effects and the S-D wave mixing effects. As shown in Table <ref>, we can obtain loosely bound state solutions at the cutoff larger than 1.34 GeV, where the binding energy is from several to ten MeV, and the RMS radius is larger than 1.00 fm, the dominant channel is the Λ_cK^*(^4S_3/2) channel. As the increasing of the cutoff value, the Σ_cK^*(^4S_3/2) channel becomes more and more important, when the cutoff is 1.40 GeV, the probability of the S-wave Σ_cK^* component turns into 27.27%. If we still adopt the experience of the deuteron <cit.>, the coupled Λ_cK^*/Σ_cK^* system with 1/2(3/2^-) can be a good hadronic molecular candidate, it is mainly composed by the Λ_cK^*(^4S_3/2) channel, followed by the Σ_cK^*(^4S_3/2) channel.
In addition, we find that the coupled channel effects play an important role in forming these Λ_cK^*/Σ_cK^* bound states with 1/2(1/2^-, 3/2^-), since there don't exist bound state solutions in the single Λ_cK^* systems.
§.§ The Σ_cK^(*) systems
The OBE effective potentials for the single Σ_cK is
V_Σ_cK→Σ_cK = 1/2l_Sg_σχ_3^†χ_1Y(Λ,m_σ,r)
+𝒢(I)/8β_Sg_Vgχ_3^†χ_1Y(Λ,m_ρ,r)
-𝒢(I)/24m_Σ_cλ_Sg_Vgχ_3^†χ_1
∇^2Y(Λ,m_ρ,r)
+1/8β_Sg_Vgχ_3^†χ_1Y(Λ,m_ω,r)
-1/24m_Σ_cλ_Sg_Vgχ_3^†χ_1∇^2Y(Λ,m_ω,r).
Here, there exists the extra ρ exchange interaction in comparison to the Λ_cK system, and it provides the attractive and repulsive forces for the Σ_cK system with I=1/2 and 3/2, respectively. Therefore, it is possible to find the bound state solutions for the Σ_cK system with I=1/2 as the stronger attractive OBE effective potentials. After solving the coupled channel Schrödinger equation, our results show that there exist no bound state solutions for the iso-quartet Σ_cK system. For the iso-doublet Σ_cK system, as presented in Table <ref>, we can obtain the reasonable loosely bound state solutions when the cutoff Λ is larger than 2.00 GeV.
We further perform the coupled Σ_cK/Λ_cK^*/Σ_cK^* analysis, where the π-exchange interaction is allowed for both the Λ_cK^*→Σ_cK and Σ_cK^*→Σ_cK processes; this exchange plays a very important role in binding the deuteron. The corresponding OBE effective potentials can be expressed as
V_Σ_cK^C = ([ V_Σ_cK→Σ_cK V_Λ_cK^*→Σ_cK V_Σ_cK^*→Σ_cK; V_Σ_cK→Λ_cK^* V_Λ_cK^*→Λ_cK^* V_Σ_cK^*→Λ_cK^*; V_Σ_cK→Σ_cK^* V_Λ_cK^*→Σ_cK^* V_Σ_cK^*→Σ_cK^* ]),
with
V_Λ_cK^*→Σ_cK = -1/6g_4g/f_π√(m_Km_K^*)ℱ_1(r,σ,ϵ_2)
U(Λ_1,m_π1,r)
-λ_Ig_Vg_VVP/3√(2)√(m_K^*/m_K)ℱ_2(r,σ,ϵ_2)
Y(Λ_1,m_ρ1,r),
V_Σ_cK^*→Σ_cK = g_1gℱ_1(r,σ,ϵ_2)/24√(2)f_π√(m_Km_K^*)𝒢(I)Y(Λ_2,m_π2,r)
+g_1g/72√(2)f_π√(m_Km_K^*)ℱ_1(r,σ,ϵ_2)
Y(Λ_2,m_η2,r)
+λ_Sg_Vg_VVP/6√(3)√(m_K^*/m_K)ℱ_2(r,σ,ϵ_2)
𝒢(I)Y(Λ_2,m_ρ2,r)
+λ_Sg_Vg_VVP/6√(3)√(m_K^*/m_K)ℱ_2(r,σ,ϵ_2)
Y(Λ_2,m_ω2,r).
Here, we define an useful function in Eq. (<ref>), i.e.,
U(Λ,m,r) = 1/(4π r)(cos(mr)-e^{-Λ r}) - (Λ^2+m^2)/(8πΛ) e^{-Λ r}.
The variables in the above effective potentials (<ref>)-(<ref>) are defined as
q_1=M_Λ_c^2+M_K^2-M_Σ_c^2-M_K^*^2/2(M_Σ_c+M_K), Λ_1^2=Λ^2-q_1^2, m_π1^2=q_1^2-m_π^2, m_ρ1^2=m_ρ^2-q_1^2, q_2=M_K^*^2-M_K^2/2(M_Σ_c+M_K), Λ_2^2=Λ^2-q_2^2, m_π2^2=m_π^2-q_2^2, m_η2^2=m_η^2-q_2^2, m_ρ2^2=m_ρ^2-q_2^2, m_ω2^2=m_ω^2-q_2^2. After considering the S-D wave mixing effects, the matrix elements for the spin-spin interaction and tensor force operators read as σ·ϵ_2↦ ([ √(3) 0 ] ) and S(r̂,σ,ϵ_2)↦ ([ 0 -√(6) ] ), respectively.
In Table <ref>, we collect the bound state solutions (the binding energy E, the root-mean-square radius r_RMS, and the probabilities P_i(%) for all the discussed channels) for the coupled Σ_cK/Λ_cK^*/Σ_cK^* systems with I(J^P)=0,1(1/2^-).
For the Σ_cK/Λ_cK^*/Σ_cK^* system with I(J^P)=1/2(1/2^-), there exist the Σ_cK(^2S_1/2) channel, the Λ_cK^*(^2S_1/2,^4D_1/2) channels, and the Σ_cK^*(^2S_1/2,^4D_1/2) channels. The reasonable loosely bound state solutions emerge at the cutoff Λ=0.90 GeV, where the binding energy is -0.36 MeV, the RMS radius is 4.78 fm, and the dominant channel is the Σ_cK(^2S_1/2) with the probability P=98.85%. When the cutoff increases to 1.05 GeV, this bound state binds deeper, the binding energy is -18.44 MeV, the RMS radius decreases to 1.42 fm, and the Σ_cK(^2S_1/2) channel is still the dominant channel with its probability around 95%. For the remaining channels, their probabilities are very tiny. Compared to the bound state properties in the single channel case, the cutoff is very close to the reasonable value Λ∼1.00 GeV. Therefore, the coupled Σ_cK/Λ_cK^*/Σ_cK^* system with I(J^P)=1/2(1/2^-) can be the prime molecular candidate, and the coupled channel effects play an important role for the formation of this bound state.
For the Σ_cK/Σ_cK^* system with I(J^P)=3/2(1/2^-), the channels include the Σ_cK(^2S_1/2) channel and the Σ_cK^*(^2S_1/2,^4D_1/2) channels. We find a weakly bound state at the cutoff Λ=1.28 GeV, where the binding energy is E=-2.58 MeV, the RMS radius is r_RMS=2.85 fm, and the dominant channel is the Σ_cK(^2S_1/2) with the probability P=92.77%. With increasing cutoff value, the Σ_cK^* channel becomes more and more important. As the cutoff increases to 1.31 GeV, the binding energy becomes -48.10 MeV, the RMS radius decreases to 0.61 fm, and the probability of the Σ_cK^*(^2S_1/2) channel is 24.29%. However, the binding energy depends very sensitively on the cutoff. Thus, we cannot draw a definite conclusion that the Σ_cK/Σ_cK^* system with I(J^P)=3/2(1/2^-) is a good hadronic molecular candidate.
For the Σ_cK^* systems, the isospin and spin-parity configurations I(J^P) include 1/2(1/2^-), 1/2(3/2^-), 3/2(1/2^-), and 3/2(3/2^-) after considering the S-D wave mixing effects. The relevant OBE effective potentials are presented in Eq. (<ref>). Our results indicate that there exist reasonable loosely bound state solutions for the Σ_cK^* states with I(J^P)=1/2(1/2^-), 1/2(3/2^-), and 3/2(1/2^-) in the cutoff range 0.80≤Λ≤5.00 GeV. As shown in Table <ref>, for the Σ_cK^* systems with 1/2(3/2^-) and 3/2(1/2^-), a binding energy of several to several tens of MeV and an RMS radius of several fm appear at a cutoff around 1.00 GeV, which is comparable to the value for the deuteron <cit.>. Therefore, these two states can be suggested as good hadronic molecular candidates. For the Σ_cK^* system with 1/2(1/2^-), loosely bound state solutions appear when the cutoff is larger than 1.70 GeV, which is somewhat away from the empirical value for the deuteron <cit.>; nevertheless, we cannot exclude the Σ_cK^* system with 1/2(1/2^-) as a suitable molecular candidate.
In summary, our results can predict several possible open charm molecular pentaquarks, the coupled Λ_cK^*/Σ_cK^* molecular states with I(J^P)=1/2(1/2^-,3/2^-), the coupled Σ_cK/Λ_cK^*/Σ_cK^* molecular states with I(J^P)=1/2(1/2^-), and the single Σ_cK^* states with I(J^P)=1/2(1/2^-,3/2^-) and 3/2(1/2^-). And the coupled channel effects do play the very important role in generating these coupled channel molecular candidates.
The study of the strong decay behaviors is very helpful to the search of these predicted open flavor molecular pentaquarks. According to the conservation of the quantum numbers and the limit of the phase space, we collect the important strong decay decay channels as follows, i.e.,
Σ_cK/Λ_cK^*/Σ_cK^*[1/2(1/2^-)] → {D_sN, Λ_cK},
Λ_cK^*/Σ_cK^*[1/2(1/2^-)] → {D_s^(*)N, Λ_cK, Σ_cK},
Λ_cK^*/Σ_cK^*[1/2(3/2^-)] → {D_s^*N},
Σ_cK^*[1/2(1/2^-)] → {D_s^(*)N, Λ_cK^(*), Σ_cK},
Σ_cK^*[1/2(3/2^-)] → {D_s^*N, Λ_cK^*, Σ_c^*K},
Σ_cK^*[3/2(1/2^-)] → {D_s^*Δ, Σ_cK}.
§.§ The predictions of the possible Y_cK̅^(*) molecular states
In this work, we further extend our study to the Λ_cK̅^(*) and Σ_cK̅^(*) systems; the corresponding OBE effective potentials can be related to those for the Λ_cK^(*) and Σ_cK^(*) systems by the G-parity rule <cit.>, i.e.,
V_B_1M̅_2→ B_3M̅_4 = (-1)^G_EV_B_1M_2→ B_3M_4,
where G_E stands for the G-parity for the exchanged meson in the B_1M_2→ B_3M_4 process, notations M̅_i and M_i correspond to the anti-mesons and mesons, respectively. Therefore, the effective potentials from the ω and π exchanges are in completely contrast between the Y_cK^(*) and the Y_cK̅^(*) systems.
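In practice this amounts to simple sign bookkeeping. The toy snippet below (ours; the numerical entries are placeholders) multiplies each exchange contribution by the G-parity of the exchanged meson, so that the π- and ω-exchange terms flip sign while the σ-, η- and ρ-exchange terms are unchanged.

G_PARITY = {"sigma": +1, "pi": -1, "eta": +1, "rho": +1, "omega": -1}
def to_antikaon(V_terms):
    # V_terms: {exchange: contribution to V(Y_c K -> Y_c K)} at some fixed separation
    return {ex: G_PARITY[ex] * v for ex, v in V_terms.items()}
V_YcK = {"sigma": -2.1, "pi": 0.4, "eta": -0.1, "rho": -0.8, "omega": 0.6}
print(to_antikaon(V_YcK))          # only the pi and omega entries change sign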
In the following, we also perform the single channel analysis and the coupled channel analysis on the Y_cK̅^(*) systems. We summarize the corresponding numerical results in Table <ref> and Table <ref>, respectively.
As shown in Table <ref>, we collect the bound state properties for the single Y_cK̅^(*) systems. In the cutoff region 0.80≤Λ≤5.00 GeV, we can obtain five loosely bound states, the Σ_cK̅ bound state with I(J^P)=1/2(1/2^-), the Σ_cK̅^* states with I(J^P)=1/2(1/2^-), 1/2(3/2^-), 3/2(1/2^-) and 3/2(3/2^-). Among these five bound states, we cannot recommend the Σ_cK̅^* state with 3/2(1/2^-) as a good hadronic molecular candidate, as the cutoff value is too far away from the empirical value Λ∼1.00 GeV. For the remaining four bound states, we conclude that they can be prime hadronic molecular candidates when we take the same cutoff criterion in the deuteron.
When we consider the coupled channel analysis, we find four weakly bound states by varying the cutoff from 0.80 GeV to 5.00 GeV as shown in Table <ref>. For the Λ_cK̅^*/Σ_cK̅^* coupled bound state with I(J^P)=1/2(1/2^-) and the Σ_cK̅/Λ_cK̅^*/Σ_cK̅^* coupled bound state with I(J^P)=1/2(1/2^-), we can obtain the reasonable loosely bound state properties at the cutoff taken around 1.00 GeV, the dominant channels are the Λ_cK̅^*(^2S_1/2) and Σ_cK̅(^2S_1/2) channels, respectively. Thus, these two coupled bound states can be prime hadronic molecular candidates, which are mainly composed by the Λ_cK̅^* and Σ_cK̅ states, respectively.
Compared to the bound states solutions for the single Λ_cK̅^* and Σ_cK̅ systems with 1/2(1/2^-), we also find that the coupled channel effects play an important role in generating the Λ_cK̅^* state with 1/2(1/2^-). However, it contributes very little for the Σ_cK̅ state with 1/2(1/2^-). Thus, the Σ_cK̅/Λ_cK̅^*/Σ_cK̅^* coupled bound state with I(J^P)=1/2(1/2^-) predicted here is not a new bound state but has a close relation with the single Σ_cK̅ molecule with 1/2(1/2^-).
For the Λ_cK̅^*/Σ_cK̅^* coupled system with I(J^P)=1/2(3/2^-), its dominant channel is the Σ_cK̅^*(^4S_3/2). As shown in Table <ref>, its size is much smaller than those coupled channel bound states mainly made up by the lowest system. As the dominant channel is the Σ_cK̅^*(^4S_3/2), this bound state has a close relation to the Σ_cK̅^* molecule with 1/2(3/2^-).
For the Σ_cK̅/Σ_cK̅^* coupled system with I(J^P)=3/2(1/2^-), we can obtain the bound state solution as the cutoff reaches up to 3.80 GeV. Obviously, the cutoff applied here is deviated from the reasonable value 1.00 GeV. It cannot be a good molecular candidate.
All in all, our results can predict five Y_cK̅^(*) type hadronic molecular candidates, the coupled Λ_cK̅^*/Σ_cK̅^* molecule with 1/2(1/2^-), the Σ_cK̅/Λ_cK̅^*/Σ_cK̅^* molecule with 1/2(1/2^-), the Σ_cK̅^* molecules with 1/2(1/2^-,3/2^-), and 3/2(3/2^-), where the coupled channel effects play a vital role in binding the coupled Λ_cK̅^*/Σ_cK̅^* state with 1/2(1/2^-). Their important two-body strong decay channels are summarized as follows, i.e.,
Σ_cK̅[1/2(1/2^-)] → {Λ_cK̅, Ξ_c^(')π},
Λ_cK̅^*/Σ_cK̅^*[1/2(1/2^-)] → {Λ_cK̅, Σ_cK̅, DΛ, DΣ, Ξ_c^(')π, Ξ_c^(')η},
Σ_cK̅^*[1/2(1/2^-)] → {Λ_cK̅^(*), Σ_cK̅, D^(*)Λ, D^(*)Σ, .
. Ξ_c^(')π, Ξ_c^(')η, Ξ_cρ, Ξ_cω},
Σ_cK̅^*[1/2(3/2^-)] → {Λ_cK̅^*, Σ_c^*K̅, D^*Λ, D^*Σ,.
. Ξ_cρ, Ξ_cω, Ξ_c^*π, Ξ_c^*η},
Σ_cK̅^*[3/2(3/2^-)] → {Σ_c^*K̅, D^*Σ, Ξ_cρ, Ξ_c^*π}.
§ SUMMARY
The study of exotic states is an important and interesting issue in hadron physics. Searching for hadronic molecular states can not only enrich the family of exotic states, but also help us to understand the essential hadron-hadron interactions. Very recently, the LHCb collaboration observed two open heavy flavor multiquarks, the T_cs̅^a0(++). Their near-threshold behavior inspires isovector D^*K^* molecular interpretations. In our former paper, we found that the D^*K^* state with I(J^P)=1(0^+) can be a possible molecular candidate by adopting the OBE effective potentials <cit.>.
In this work, we extend our study on the interactions between the S-wave charmed baryon Y_c=(Λ_c,Σ_c) and the strange meson K^(*) by using the OBE model, and we consider both of the S-D wave mixing effects and the coupled channel effects. As shown in Figure <ref>, our results indicate the single Σ_cK^* states with I(J^P)=1/2(1/2^-), 1/2(3/2^-) and 3/2(1/2^-) can be good open charm molecular candidates. When we further consider the coupled channel effects, we can predict another three prime open charm molecular candidates, i.e., the coupled Λ_cK^*/Σ_cK^* molecular states with 1/2(1/2^-) and 1/2(3/2^-), and the coupled Σ_cK/Λ_cK^*/Σ_cK^* molecular state with 1/2(1/2^-), where the dominant channels correspond to the Λ_cK^*(^2S_1/2), Λ_cK^*(^4S_3/2), and Σ_cK(^2S_1/2), respectively. And the coupled channel effects play the essential role in binding these three coupled channel molecular candidates.
As a byproduct, we further study the Y_cK̅^(*) interactions in the same model. As shown in Figure <ref>, we can predict the existences of the Y_cK̅^(*) type hadronic molecular states, i.e., the Σ_cK̅ molecule with I(J^P)=1/2(1/2^-), the Σ_cK̅^* molecules with 1/2(1/2^-), 1/2(3/2^-), and 3/2(3/2^-), the coupled Λ_cK̅^*/Σ_cK̅^* molecule with 1/2(1/2^-), and the Σ_cK̅/Λ_cK̅^*/Σ_cK̅^* molecule with 1/2(1/2^-). We expect the experimentalists to search for these predicted open charm molecular pentaquarks.
§ ACKNOWLEDGMENTS
R. C. is supported by the Xiaoxiang Scholars Programme of Hunan Normal University.
99
Chen:2016qju
H. X. Chen, W. Chen, X. Liu and S. L. Zhu,
https://linkinghub.elsevier.com/retrieve/pii/S037015731630103XPhys. Rept. 639, 1-121 (2016).
Liu:2019zoy
Y. R. Liu, H. X. Chen, W. Chen, X. Liu and S. L. Zhu,
https://linkinghub.elsevier.com/retrieve/pii/S0146641019300304Prog. Part. Nucl. Phys. 107, 237-320 (2019).
Chen:2016spr
H. X. Chen, W. Chen, X. Liu, Y. R. Liu and S. L. Zhu,
https://iopscience.iop.org/article/10.1088/1361-6633/aa6420Rept. Prog. Phys. 80, no.7, 076201 (2017).
Guo:2017jvc
F. K. Guo, C. Hanhart, U. G. Meißner, Q. Wang, Q. Zhao and B. S. Zou,
https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.90.015004Rev. Mod. Phys. 90, no.1, 015004 (2018),
https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.94.029901[erratum: Rev. Mod. Phys. 94, no.2, 029901 (2022)].
Chen:2022asf
H. X. Chen, W. Chen, X. Liu, Y. R. Liu and S. L. Zhu,
https://iopscience.iop.org/article/10.1088/1361-6633/aca3b6Rept. Prog. Phys. 86, no.2, 026201 (2023).
LHCb:Tcc
C. Chen and E. S. Norella,
https://indico.cern.ch/event/1176505/https://indico.cern.ch/event/1176505/.
LHCb:Qian
W. B. Qian,
https://indico.ihep.ac.cn/event/17185/https://indico.ihep.ac.cn/event/17185/.
Chen:2016ypj
R. Chen and X. Liu,
https://journals.aps.org/prd/abstract/10.1103/PhysRevD.94.034006Phys. Rev. D 94, no.3, 034006 (2016).
Chen:2017rhl
W. Chen, H. X. Chen, X. Liu, T. G. Steele and S. L. Zhu,
https://journals.aps.org/prd/abstract/10.1103/PhysRevD.95.114005Phys. Rev. D 95, no.11, 114005 (2017).
Guo:2021mja
T. Guo, J. Li, J. Zhao and L. He,
https://journals.aps.org/prd/abstract/10.1103/PhysRevD.105.054018Phys. Rev. D 105, no.5, 054018 (2022).
Cheng:2020nho
J. B. Cheng, S. Y. Li, Y. R. Liu, Y. N. Liu, Z. G. Si and T. Yao,
https://journals.aps.org/prd/abstract/10.1103/PhysRevD.101.114017Phys. Rev. D 101, no.11, 114017 (2020).
Agaev:2022duz
S. S. Agaev, K. Azizi and H. Sundu,
https://iopscience.iop.org/article/10.1088/1361-6471/acc41aJ. Phys. G 50, no.5, 055002 (2023).
LHCb:2020bls
R. Aaij et al. [LHCb],
https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.125.242001Phys. Rev. Lett. 125, 242001 (2020).
LHCb:2020pxc
R. Aaij et al. [LHCb],
https://journals.aps.org/prd/abstract/10.1103/PhysRevD.102.112003Phys. Rev. D 102, 112003 (2020).
Molina:2020hde
R. Molina and E. Oset,
https://www.sciencedirect.com/science/article/pii/S0370269320306730Phys. Lett. B 811, 135870 (2020).
Hu:2020mxp
M. W. Hu, X. Y. Lao, P. Ling and Q. Wang,
https://iopscience.iop.org/article/10.1088/1674-1137/abcfaaChin. Phys. C 45, no.2, 021003 (2021).
Liu:2020nil
M. Z. Liu, J. J. Xie and L. S. Geng,
https://journals.aps.org/prd/abstract/10.1103/PhysRevD.102.091502Phys. Rev. D 102, no.9, 091502 (2020).
Kong:2021ohg
S. Y. Kong, J. T. Zhu, D. Song and J. He,
https://journals.aps.org/prd/abstract/10.1103/PhysRevD.104.094012Phys. Rev. D 104, no.9, 094012 (2021).
Wang:2021lwy
B. Wang and S. L. Zhu,
https://link.springer.com/article/10.1140/epjc/s10052-022-10396-9Eur. Phys. J. C 82, no.5, 419 (2022).
Xiao:2020ltm
C. J. Xiao, D. Y. Chen, Y. B. Dong and G. W. Meng,
https://journals.aps.org/prd/abstract/10.1103/PhysRevD.103.034004Phys. Rev. D 103, no.3, 034004 (2021).
Huang:2020ptc
Y. Huang, J. X. Lu, J. J. Xie and L. S. Geng,
https://link.springer.com/article/10.1140/epjc/s10052-020-08516-4Eur. Phys. J. C 80, no.10, 973 (2020).
Chen:2020aos
H. X. Chen, W. Chen, R. R. Dong and N. Su,
https://iopscience.iop.org/article/10.1088/0256-307X/37/10/101201Chin. Phys. Lett. 37, no.10, 101201 (2020).
Agaev:2020nrc
S. S. Agaev, K. Azizi and H. Sundu,
https://iopscience.iop.org/article/10.1088/1361-6471/ac0b31J. Phys. G 48, no.8, 085012 (2021).
Qi:2021iyv
J. J. Qi, Z. Y. Wang, Z. F. Zhang and X. H. Guo,
https://link.springer.com/article/10.1140/epjc/s10052-021-09422-zEur. Phys. J. C 81, no.7, 639 (2021).
Chen:2021tad
H. Chen, H. R. Qi and H. Q. Zheng,
https://link.springer.com/article/10.1140/epjc/s10052-021-09603-wEur. Phys. J. C 81, no.9, 812 (2021).
An:2022vtg
H. T. An, Z. W. Liu, F. S. Yu and X. Liu,
https://journals.aps.org/prd/abstract/10.1103/PhysRevD.106.L111501Phys. Rev. D 106, no.11, L111501 (2022).
Liu:2011xc
Y. R. Liu and M. Oka,
https://journals.aps.org/prd/abstract/10.1103/PhysRevD.85.014015Phys. Rev. D 85, 014015 (2012).
Lin:1999ad
Z. W. Lin and C. M. Ko,
https://journals.aps.org/prc/abstract/10.1103/PhysRevC.62.034903Phys. Rev. C 62, 034903 (2000).
Nagahiro:2008mn
H. Nagahiro, L. Roca and E. Oset,
https://link.springer.com/article/10.1140/epja/i2008-10567-8Eur. Phys. J. A 36, 73-84 (2008).
Chen:2017xat
R. Chen, A. Hosaka and X. Liu,
https://journals.aps.org/prd/abstract/10.1103/PhysRevD.97.036016Phys. Rev. D 97, no.3, 036016 (2018).
Kaymakcalan:1983qq
O. Kaymakcalan, S. Rajeev and J. Schechter,
https://journals.aps.org/prd/abstract/10.1103/PhysRevD.30.594Phys. Rev. D 30, 594 (1984)..
Tornqvist:1993ng
N. A. Tornqvist,
https://link.springer.com/article/10.1007/BF01413192Z. Phys. C 61, 525-537 (1994).
Tornqvist:1993vu
N. A. Tornqvist,
https://link.springer.com/article/10.1007/BF02734018Nuovo Cim. A 107, 2471-2476 (1994).
Klempt:2002ap
E. Klempt, F. Bradamante, A. Martin and J. M. Richard,
https://www.sciencedirect.com/science/article/pii/S0370157302001448?via
|
http://arxiv.org/abs/2307.04002v1 | 20230708160353 | Energy-Efficient Beamforming Design for Integrated Sensing and Communications Systems | [
"Jiaqi Zou",
"Songlin Sun",
"Christos Masouros",
"Yuanhao Cui",
"Yafeng Liu",
"Derrick Wing Kwan Ng"
] | eess.SP | [
"eess.SP"
] |
Energy-Efficient Beamforming Design for Integrated Sensing and Communications Systems
Jiaqi Zou, Graduate Student Member, IEEE, Songlin Sun, Senior Member, IEEE, Christos Masouros, Senior Member, IEEE, Yuanhao Cui, Member, IEEE,
Ya-Feng Liu, Senior Member, IEEE, and Derrick Wing Kwan Ng, Fellow, IEEE
Part of this work has been submitted to the IEEE Global Communications Conference (GLOBECOM 2023) for possible presentation <cit.>.
Jiaqi Zou is with the School of Information and Communication Engineering, Beijing University of Posts and Telecommunications (BUPT), Beijing 100876, China, and also with the Department of Electrical and Electronic Engineering, University College London, London WC1E 7JE, UK (e-mail: [email protected]).
Songlin Sun and Yuanhao Cui are with Beijing University of Posts and Telecommunications (BUPT), Beijing, China (e-mail: [email protected], [email protected]).
Christos Masouros is with the Department of Electrical and Electronic Engineering, University College London, WC1E 7JE, UK (e-mail: [email protected]).
Ya-Feng Liu is with the State Key Laboratory of Scientific and Engineering Computing, Institute of Computational Mathematics and Scientific/Engineering Computing, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China (e-mail: [email protected])
Derrick Wing Kwan Ng is with the School of Electrical Engineering and Telecommunications, University of New South Wales, Sydney, NSW 2052, Australia (e-mail: [email protected]).
August 12, 2023
======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
In this paper, we investigate the design of energy-efficient beamforming for an ISAC system, where the transmitted waveform is optimized for joint multi-user communication and target estimation simultaneously.
We aim to maximize the system energy efficiency (EE), taking into account the constraints of a maximum transmit power budget, a minimum required signal-to-interference-plus-noise ratio (SINR) for communication, and a maximum tolerable Cramér-Rao bound (CRB) for target estimation.
We first consider communication-centric EE maximization.
To handle the non-convex fractional objective function, we propose an iterative quadratic-transform-Dinkelbach method, where Schur complement and semi-definite relaxation (SDR) techniques are leveraged to solve the subproblem in each iteration.
For the scenarios where sensing is critical, we propose a novel performance metric for characterizing the sensing-centric EE and optimize the metric adopted in the scenario of sensing a point-like target and an extended target.
To handle the nonconvexity, we employ the successive convex approximation (SCA) technique to develop an efficient algorithm for approximating the nonconvex problem as a sequence of convex ones.
Furthermore, we adopt a Pareto optimization mechanism to articulate the tradeoff between the communication-centric EE and sensing-centric EE. We formulate the search of the Pareto boundary as a constrained optimization problem and propose a computationally efficient algorithm to handle it.
Numerical results validate the effectiveness of our proposed algorithms compared with the baseline schemes. The obtained approximate Pareto boundary shows that there is a non-trivial tradeoff between communication-centric EE and sensing-centric EE, where the number of communication users and the EE requirements significantly affect the achievable tradeoff.
Integrated sensing and communication (ISAC), energy efficiency, fractional programming.
§ INTRODUCTION
Integrated sensing and communication (ISAC) is anticipated to be a viable enabling technology for unlocking the potential of next-generation wireless networks, as the two kinds of systems tend to share common devices, signal processing techniques, and even hardware circuitry. Rather than the conventional parallel development of the two systems, joint designs advocating their coexistence and cooperation have attracted extensive research interest in recent years. For instance, the coexistence of communication and radar systems focuses on spectrum sharing or physical integration design, which mainly aims to mitigate the mutual interference and efficiently manage the limited wireless resources <cit.>. Indeed, since communication and radar systems may transmit independent signals superimposed in the time/frequency domains, the interference between them should be minimized to facilitate their individual functionalities. To this end, numerous approaches have been proposed, such as cooperative spectrum sharing <cit.> and beamforming design <cit.>. Nevertheless, the inevitable mutual interference still limits the achievable spectral efficiency.
Meanwhile, compared with the coexistence design approaches that generate communication and sensing signals separately, ISAC employs a common transmitted signal for realizing communication and sensing simultaneously. In such a case, the crux of ISAC is how to design a specialized waveform for effectively transmitting data and sensing potential targets.
In particular, the waveform design can be categorized into the communication-centric, radar-centric, and joint design according to the design goals <cit.>. Specifically, the radar-centric design aims to modulate the communication data onto the radar pulses, where the radar probing signals can be regarded as an information carrier <cit.>. On the other hand, communication-centric approaches utilize existing communication signals to sense the environment, such as cellular signals <cit.> and Wi-Fi signals <cit.>. In particular, various environmental conditions can be extracted from the received echoes of the communication signals, as the target's existence or movement inevitably affects the signal's propagation. Nevertheless, the integration performance is limited in the above two approaches, as the communication/sensing functionality is often carried out as ancillary tasks. In contrast, the joint ISAC design studies the co-design of signaling methodologies enabling both communications and sensing, which is the research content of this work.
§.§ Related Works
Related works of joint waveform design focus on striking a balance between the tradeoff of communication and sensing. For example, <cit.> investigated the tradeoff between the multi-user interference minimization and the appropriate radar beampattern formulation. Besides, a recent work in <cit.> considered the Cramér-Rao bound (CRB) minimization with guaranteed signal-to-interference-plus-noise ratio (SINR) for each communication user. Furthermore, as widely-used performance metrics, the fundamental tradeoff between the CRB for target parameter estimation and the data rate for communication was also investigated in <cit.> under various system settings, to unveil the potential of ISAC.
Although the above approaches can achieve favorable performance tradeoffs between the estimation performance and spectral efficiency <cit.>, the energy efficiency (EE) optimization of the joint waveform has not been fully investigated. Currently, the energy consumption of the state-of-the-art fifth-generation (5G) wireless networks is extremely high, resulting in expensive operational costs <cit.>.
It is anticipated that the upcoming ISAC will pave the way for developing a perceptive wireless network requiring a much higher energy consumption than the current one, since the wireless signals are expected to achieve the dual purposes of environment sensing and information transmission simultaneously.
This could hinder the long-term development of sustainable and environmentally friendly wireless communication technologies.
Hence, there is a pressing need to investigate the energy efficiency design of ISAC for establishing
a perceptive-efficient and spectrally-efficient cellular network.
Actually, energy-aware optimization has been a hot topic in the past decade for conventional cellular networks,
e.g., <cit.>.
Specifically, EE is defined as the ratio of the achieved data rate and the required power consumption, capturing the energy consumption per bit in communication, which has been widely studied for various communication networks <cit.>.
However, these approaches for maximizing the communication EE cannot be directly applied to ISAC, as they do not take into consideration of sensing functionalities.
Recently, the EE optimization for radar-communication spectrum sharing has been studied in <cit.>, and the results cannot be applied to ISAC systems either due to the separated signal waveform design.
On the other hand, a few works have studied ISAC beamforming for maximizing communication-centric EE. For instance, the work of <cit.> investigated the communication EE maximization under the required radar beampattern constraint. Yet, it does not consider the sensing EE and the performance of target parameter estimation. Besides, the work of <cit.> focused on energy minimization under the sensing and communication constraints. In particular, the algorithm designed in <cit.> cannot handle the EE optimization due to the intrinsic challenges brought by fractional programming in the resource allocation design.
More importantly, to the best of our knowledge, the sensing-centric EE that characterizes the EE of target sensing has been rarely studied in the literature.
In particular, to fulfill the increasing demand for sensing services, it is natural for the base station (BS) to transmit the waveforms with high power for improving the detection and estimation performance. However, this operation will inevitably bring unaffordable energy costs, which contradicts to the emerging requirements of carbon neutrality and environmental sustainability for future wireless networks <cit.>.
Therefore, there is an urgent need to design an energy-efficient sensing performance metric for ISAC.
§.§ Contributions
Against this background, this work considers the EE optimization for the waveform design of ISAC, where the communication-centric EE, sensing-centric EE, and their tradeoffs are investigated.
Specifically, for the ISAC systems wherein communication serves as the primary objective, we study the ISAC waveform design for maximizing the communication-centric EE, i.e., the ratio of the achievable rate and the corresponding power consumption, while guaranteeing both the target estimation and communication performance in terms of the CRB and SINR, respectively.
As for the sensing-centric ISAC systems, for the first time, we propose the performance metric to measure the sensing-centric EE for target parameter estimation.
Then, we optimize the ISAC waveform to maximize the sensing-centric EE, considering the constraints of SINR, CRB, and the maximum transmission power budget. Then, we study the Pareto boundary of communication-centric EE and sensing-centric EE for characterizing their tradeoffs. The main contributions of this paper are summarized as follows.
* We optimize the communication-centric EE considering the two scenarios having a point-like target estimation and an extended target estimation, respectively, under the constraints of CRB, SINR, and transmission power limitations. For the case of point-like target, the nonconvexity of the objective function and CRB constraint hinder the communication-centric EE optimization. For handling these challenges, we first adopt the quadratic-transform-Dinkelbach method to reformulate the nonconvex fractional objective function as a tractable formulation. Then, we adopt the semi-definite relaxation and linear matrix inequality to convert the nonconvex optimization problem into a sequence of convex optimization problems. Finally, we generalize the proposed algorithm to an extended target case.
* We propose a performance metric for capturing the notion of sensing-centric EE for the first time, which adopts the ratio of the reciprocal of the CRB to the transmit energy for measuring “information-per-Joule’’. Then, based on the proposed metric, we consider the sensing-centric EE maximization for point-like/extended targets by optimizing the transmit beamforming. Although the considered problem is nonconvex, we adopt the Schur complement to reformulate the problem into a tractable formulation, facilitating the development of a successive convex approximation (SCA)-based algorithm to effectively acquire the solution to the design problem.
* We adopt the Pareto optimization technique to characterize the tradeoff between the communication-centric EE and the sensing-centric EE. In particular, we formulate a constrained optimization problem that maximizes the communication-centric EE under the constraint of sensing-centric EE. To handle the nonconvexity of the considered optimization problem, we propose an SCA-based iterative algorithm for addressing the nonconvexity. Then, by varying the threshold of the sensing-centric EE, the approximate Pareto boundary can be obtained by solving a sequence of constrained problems. Simulation results present the Pareto boundary to demonstrate the tradeoff between the two EE metrics.
The remainder of this paper is organized as follows. Section II introduces the system model, including the communication model and the sensing model. In Section III, we study the optimization of the communication-centric EE under the sensing and communication constraints. The sensing-centric EE is studied in Section IV. Section V investigates the tradeoff between the communication-centric and the sensing-centric EE. Simulation results are provided in Section VI. Finally, we conclude the paper in Section VII.
Notations: The normal plain text (i.e., t), bold lowercase letters (i.e., 𝐰) and uppercase letters (i.e., 𝐖) represent scalars, vectors, and matrices, respectively. tr(·), rank(·), (·)^H, and (·)^T denote the trace operator, the rank operator, the Hermitian transpose, and the transpose operator, respectively. ℂ^n × n stands for an n × n complex-valued matrix. · represents the L_2 norm of a matrix. The inequality 𝐀≽0 means that 𝐀 is Hermitian positive semi-definite. Re(·) denotes the real part of the argument. We adopt 𝔼(·) for the stochastic expectation. ḟ(x) denotes the first derivative of function f(x). The notation ≜ is used for definitions.
§ SYSTEM MODEL
As depicted in Fig. <ref>, we consider an ISAC multiple-input multiple-output (MIMO) system, where the BS equipped with M transmit antennas serves K single-antenna UEs for communication with K ≤ M. Let k ∈𝒦≜{1,2, ⋯,K} denote the communication user set. As for radar estimation, the environmental information is simultaneously extracted from the reflected echoes with N receiving antennas implemented at the BS.
Without loss of generality, the number of transmit antennas is less than that of receive antennas, i.e., M ≤ N. As for target sensing, both the point-like target and the extended target cases are considered separately covering various practical scenarios. In particular, the former case denotes the unstructured point that is far away from the BS, such as unmanned aerial vehicles (UAVs). On the other hand, for the extended target, it acts as a reflecting surface with a large number of distributed scatterers, such as a vehicle or a pedestrian <cit.>. The detailed model is given as follows.
§.§ Communication Model
We denote the beamforming vector and the channel from the BS to the k-th user as 𝐰_k∈ℂ^M× 1 and 𝐡_k∈ℂ^M× 1, respectively. Then, the data symbol intended for the k-th user at time slot l is denoted as s_k[l], with unit power 𝔼( |s_k[l]|^2) =1. Left multiplying 𝐬[l] = [s_1[l], s_2[l], ⋯, s_k[l]]^T ∈ℂ^K × 1 with the beamforming matrix 𝐖 = [𝐰_1, 𝐰_2, ⋯, 𝐰_k] ∈ℂ^M × K, the transmitted signal vector of the BS is given by 𝐱[l]= 𝐖𝐬[l].
Then, the transmitted ISAC waveform over L time slots can be denoted as 𝐗 = [ x[1], x[2], ⋯, x[L] ] ∈ℂ^M × L. Then, the received signal at the k-th user during the l-th time slot, l ∈{1, 2, ⋯, L}, is given as follows
y_k[l] = h_k^H 𝐰_k s_k[l] + ∑_j ∈𝒦, j ≠ k h_k^H 𝐰_j s_j[l] + z_c[l],
where z_c[l] is the additive white Gaussian noise (AWGN) with zero mean and variance σ_c^2. The received SINR at the k-th user can be calculated as
SINR_k( W) = | h_k^H w_k |^2/( σ_c^2 + ∑_j ∈𝒦, j ≠ k| h_k^H w_j |^2 ),
and the corresponding achievable rate is R_k( W) = log_2(1+SINR_k ( W)).
It is well known that communication-centric EE is defined as a ratio of the transmission sum rate ∑_k R_k( W) to the total power consumption P. Following <cit.>, the power consumption can be calculated as
P = 1/ϵP_d + P_0,
where the power amplifier efficiency ϵ∈ [0,1] and P_0 denotes the constant circuit power consumed by circuitries in RF chains, power supply, cooling system, etc. Besides, the total transmit power is given by P_d = ∑_k w_k_2^2. Hence, the communication-centric EE, measuring the required “bits-per-Joule" <cit.>, can be calculated as
EE_C = ∑_k R_k(𝐖)/ P = ∑_k log_2( 1+| h_k^H w_k |^2 / ( σ_c^2 + ∑_j ∈𝒦, j ≠ k| h_k^H w_j |^2) ) /( 1/ϵ∑_k w_k_2^2 + P_0 ).
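To make the quantities above concrete, the following minimal NumPy sketch evaluates the per-user SINR, the sum rate, and the resulting EE_C for randomly drawn channels and beamformers. The dimensions and the values of σ_c^2, ϵ, and P_0 are illustrative placeholders and are not the simulation settings used later in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
M, K = 8, 4                       # transmit antennas, users (illustrative)
sigma_c2, eps, P0 = 1.0, 0.35, 2.0

# Row k of H is h_k^H; column k of W is the beamformer w_k.
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)
W = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)

def ee_c(H, W, sigma_c2, eps, P0):
    """Communication-centric EE: sum rate divided by total power consumption."""
    G = np.abs(H @ W) ** 2                     # G[k, j] = |h_k^H w_j|^2
    signal = np.diag(G)
    interference = G.sum(axis=1) - signal
    sinr = signal / (sigma_c2 + interference)
    sum_rate = np.log2(1.0 + sinr).sum()
    power = np.linalg.norm(W) ** 2 / eps + P0  # (1/eps) * sum_k ||w_k||^2 + P_0
    return sum_rate / power

print("EE_C [bits/Joule]:", ee_c(H, W, sigma_c2, eps, P0))
```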
§.§ Sensing Model
For radar sensing, the BS exploits the echo signals collected in L time slots to estimate the target parameter.
This work considers the two cases with either a point-like target or an extended target, respectively.
For notational simplicity, we consider the same angle of departure (AOD) and angle of arrival (AOA) of the target, i.e., θ_t=θ_r=θ <cit.>. Then,
for the point-like target that locates in the far field, the target response matrix can be denoted as
𝐀 = α𝐚_r(θ)𝐚^H_t(θ),
where 𝐚_x(θ), x∈{t,r}, is the steering vector for the transmit signal at angle θ. Following the existing works on ISAC, e.g., <cit.>, we assume that the BS employs a uniform linear antenna with a half-wavelength spacing between the adjacent antennas. Then, the transmit and receive steering vectors are given by
𝐚_t(θ) = [ 1, ⋯, e^-j π cosθ, e^-j π (M -1) cosθ]^T,
𝐚_r(θ) = [ 1, ⋯, e^-j π cosθ, e^-j π (N -1) cosθ]^T.
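For reference, a small NumPy sketch of the half-wavelength ULA steering vectors and of the point-like target response matrix A = α a_r(θ) a_t^H(θ) is given below; the antenna numbers and the reflection coefficient α are placeholder values chosen only for illustration.

```python
import numpy as np

def steering(theta, n_antennas):
    """Half-wavelength ULA steering vector [1, e^{-j pi cos(theta)}, ..., e^{-j pi (n-1) cos(theta)}]^T."""
    n = np.arange(n_antennas)
    return np.exp(-1j * np.pi * n * np.cos(theta))

M, N = 8, 20                            # transmit / receive antennas (illustrative)
theta = np.deg2rad(90.0)
a_t, a_r = steering(theta, M), steering(theta, N)

alpha = 0.5 + 0.5j                      # placeholder reflection coefficient
A = alpha * np.outer(a_r, a_t.conj())   # target response matrix of a point-like target
print(A.shape)                          # (N, M)
```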
For the extended target that locates in the near field, we follow <cit.> to model it as a reflecting surface with N_s point-like scatters. Then, the target response matrix can be represented as
𝐀 = ∑_i=1^N_sα_i 𝐚_r(θ_i)𝐚_t^H(θ_i),
where α_i is the reflection coefficient of the i-th scatterer.
Therefore, the received target echoes 𝐘_R from the point-like or the extended targets can both be denoted as
𝐘_R = 𝐀𝐗 + 𝐙_s,
where 𝐙_s is the zero-mean AWGN with variance σ_s^2 in each element.
Since CRB is a lower bound on the variance of an unbiased estimator of an unknown parameter that can guarantee the performance of sensing <cit.>, we adopt the CRB as the sensing metric to design the energy-efficient ISAC in the following.
§ COMMUNICATION-CENTRIC ENERGY-EFFICIENT DESIGN
§.§ Point-Like Target Case
Since the CRB of α has a similar form as the one of θ, for conciseness,
this work only considers the CRB of θ for the design of the ISAC beamforming. For the point-like target, the CRB of θ is given as follows <cit.>
CRB(θ)=σ_s^2/|α|^2 (M𝐚̇^H(θ)𝐑_𝐱^T𝐚̇(θ)+ 𝐚^H(θ)𝐑_𝐱^T𝐚(θ)‖𝐚̇(θ)‖^2-M|𝐚^H(θ)𝐑_𝐱^T𝐚̇(θ)|^2/𝐚^H(θ)𝐑_𝐱^T𝐚 (θ)),
where 𝐑_𝐱 is the sample covariance matrix of 𝐗. Since 𝔼( |s_k[l]|^2) =1, for a large L, we have the asymptotic result
R_𝐱 = 1/L X X^H ≈ W W^H = ∑_k=1^K w_k w_k^H <cit.>.
The communication-centric energy efficient design is to maximize the EE_C defined in (<ref>), under the constraints of multiple users’ required SINR and maximal CRB(θ), whose optimization problem can be formulated as follows
max_{𝐰_k}_k=1^K ∑_k=1^K log_2 ( 1+| h_k^H w_k |^2 / ( σ_c^2 + ∑_k ∈𝒦 j ≠ k| h_k^H w_j |^2) ) /1/ϵ∑_k w_k_2^2 + P_0
s.t. ∑_k=1^K w_k _2^2 ≤ P_max,
CRB(θ) ≤ρ ,
| h_k^H w_k |^2/σ_c^2 + ∑_k ∈𝒦 j ≠ k| h_k^H w_j |^2≥γ_k, ∀ k,
where P_max denotes the power budget of the BS and (<ref>) is the transmit power constraint.
Besides, ρ and γ_k are the required CRB threshold for sensing and the required SINR for the k-th communication user, respectively.
In general, it is challenging to solve problem (<ref>) directly, due to the nonconvexity of the fractional objective function (<ref>) and nonconvex constraints (<ref>) and (<ref>).
For addressing the nonconvex optimization problem, we first adopt the Dinkelbach's method <cit.> to reformulate the problem (<ref>) as
max_{𝐰_k}_k=1^K f_1(𝐰_k) - λ f_2(𝐰_k)
s.t. (<ref>), (<ref>), (<ref>),
where f_1(𝐰_k) ≜∑_k=1^K log_2 ( 1+| h_k^H w_k |^2/σ_c^2 + ∑_j=1,j ≠ k^K | h_k^H w_j |^2),
f_2(𝐰_k) ≜1/ϵ∑_k=1^K w_k_2^2 + P_0, and λ≥ 0 is the auxiliary variable to be iteratively updated by
λ = f_1(𝐰_k)/f_2(𝐰_k).
With (<ref>) and (<ref>), an efficient solution to problem (<ref>) can be obtained by updating 𝐰_k and λ alternately.
Nevertheless, problem (<ref>) is still difficult to handle due to the following issues: 1) the objective function (<ref>) is still non concave over {𝐰_k } due to the fractional function f_1(𝐰_k); 2) nonconvex constraints (<ref>) and (<ref>).
Since the function log_2(·) is concave and non-decreasing, the nonconvexity of (<ref>) can be addressed if the term inside log_2(·) can be reformulated as an equivalent concave formulation.
Bearing this in mind, since f_1(𝐰_k) belongs to the general multiple-ratio concave-convex fractional programming problem, we adopt the quadratic transform method <cit.> to reformulate f_1(𝐰_k) as
f_1(𝐰_k) = max_t_k∑_k=1^K log_2 ( 1+ 2 t_k Re(𝐰_k^H 𝐡_k) - t_k^2 B_k(𝐰_k) ) ,
where B_k(𝐰_k) = σ_c^2 + ∑_j=1,j ≠ k^K | h_k^H w_j |^2 and t_k is an introduced auxiliary variable that is iteratively updated by
t_k = | h_k^H w_k |( σ_c^2 +∑_j=1,j ≠ k^K| h_k^H w_j |^2)^-1.
Based on the above reformulations, problem (<ref>) can be recast as
max_{𝐰_k, t_k}_k=1^K, λ ∑_k=1^K log_2( 1+ 2 t_k Re(𝐰_k^H 𝐡_k) - t_k^2 B_k(𝐰_k) ) - λ( 1/ϵ∑_k=1^K w_k_2^2 + P_0) s.t. (<ref>),
where {𝐰_k, t_k}_k=1^K and λ can be updated alternatively.
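The overall structure of this alternating update — the closed-form quadratic-transform update of t_k, a convex subproblem in the beamformers, and the outer Dinkelbach update of λ — can be sketched as follows. The inner SDP of the actual algorithm is replaced here by a trivial stand-in (fixed matched-filter beams scaled to a power budget), so the snippet only illustrates the control flow, not the solution of the convex subproblem itself; all numerical values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
M, K = 8, 4
sigma_c2, eps, P0, P_max = 1.0, 0.35, 2.0, 1.0
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

def rate_and_power(W):
    G = np.abs(H @ W) ** 2
    sinr = np.diag(G) / (sigma_c2 + G.sum(axis=1) - np.diag(G))
    return np.log2(1 + sinr).sum(), np.linalg.norm(W) ** 2 / eps + P0

def t_update(W):
    # Closed-form quadratic-transform variable: t_k = |h_k^H w_k| / (sigma_c^2 + interference).
    G = np.abs(H @ W) ** 2
    return np.sqrt(np.diag(G)) / (sigma_c2 + G.sum(axis=1) - np.diag(G))

def inner_step(t, lam):
    # Stand-in for the convex (SDP) subproblem of the paper; here simply
    # matched-filter beams rescaled to the power budget, for illustration only.
    W_mf = H.conj().T
    return W_mf * np.sqrt(P_max) / np.linalg.norm(W_mf)

W, lam = inner_step(None, 0.0), 0.0
for _ in range(50):                      # outer Dinkelbach loop
    t = t_update(W)                      # update the auxiliary variables t_k
    W = inner_step(t, lam)               # (stand-in for) maximizing f1 - lam * f2 over the beamformers
    rate, power = rate_and_power(W)
    lam_new = rate / power               # Dinkelbach update: lam = f1 / f2
    if abs(lam_new - lam) < 1e-8:
        break
    lam = lam_new
print("EE_C estimate at convergence:", lam)
```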
In the following, we focus on handling the nonconvex constraints (<ref>) and (<ref>). Specifically, constraint (<ref>) can be reformulated as
Mȧ^H(θ) R_ x^Tȧ(θ)+ a^H(θ) R_ x^T a(θ)‖ȧ(θ)‖^2 - M| a^H(θ) R_ x^Tȧ(θ)|^2/ a^H(θ) R_ x^T a(θ) - σ_s^2/2Lρ|α|^2 ≥ 0.
Then, for notational conciseness, denoting ℱ( R_X) ≜ Mȧ^H(θ) R_ x^Tȧ(θ)+ a^H(θ) R_ x^T a(θ)‖ȧ(θ)‖^2, (<ref>) can be reformulated as the following linear matrix inequality by leveraging the Schur complement <cit.>.
[ ℱ( R_x) - σ_s^2/2Lρ|α|^2 √(M) a^H(θ) R_ x^Tȧ(θ); √(M)ȧ^H(θ) R_ x^T a(θ) a^H(θ) R_ x^T a(θ) ]≽0 .
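The Schur-complement step above turns the fractional CRB constraint into a linear matrix inequality. The small NumPy check below illustrates the underlying equivalence on random data: for a Hermitian block matrix with C ≻ 0, the block matrix is positive semi-definite exactly when the Schur complement A - B C^{-1} B^H is. The matrices are arbitrary test data, not quantities from the system model.

```python
import numpy as np

rng = np.random.default_rng(2)

def is_psd(X, tol=1e-9):
    return np.linalg.eigvalsh((X + X.conj().T) / 2).min() >= -tol

n = 4
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
C = 2.0 * np.eye(n)                                        # C > 0
A = B @ np.linalg.inv(C) @ B.conj().T + 0.1 * np.eye(n)    # chosen so the Schur complement is PSD

block = np.block([[A, B], [B.conj().T, C]])
schur = A - B @ np.linalg.inv(C) @ B.conj().T

print(is_psd(block), is_psd(schur))   # both True; they always agree when C > 0
```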
Next, for handling the nonconvex constraint (<ref>), we introduce an auxiliary optimization variable matrix 𝐖_k and reformulate constraint (<ref>) into
tr(𝐐_k 𝐖_k) - γ_k ∑_j ∈𝒦, j ≠ ktr(𝐐_k 𝐖_j) ≥γ_k σ_c^2,
W_k =w_k w_k^H,
where 𝐐_k = h_k h_k^H. Then, problem (<ref>) can be equivalently reformulated as
max_{𝐰_k,𝐖_k, t_k}_k=1^K ∑_k=1^K log_2 ( 1+ 2 t_k ·Re(𝐰_k^H 𝐡_k) - t_k^2 B_k(𝐖_k) ) - λ( 1/ϵ∑_k=1^K tr( W_k)+ P_0)
s.t. [ [ ℱ(∑_k=1^K𝐖_k) - σ_s^2/2Lρ|α|^2 √(M) a^H(θ)∑_k=1^K W_k^Tȧ(θ); √(M)ȧ^H(θ)∑_k=1^K W_k^T a(θ) a^H(θ)∑_k=1^K W_k^T a(θ) ] ]≽0 ,
(<ref>), (<ref>), (<ref>),
where B_k(𝐖_k) ≜∑_j ∈𝒦, j ≠ ktr(𝐐_k 𝐖_j) + σ_c^2. However, constraint (<ref>) is a nonconvex equality constraint which is difficult to handle. Therefore, we introduce the following lemma to transform constraint (<ref>) into equivalent inequality constraints.
W_k =w_k w_k^H can be equivalently reformulated as
[ 𝐖_k 𝐰_k; 𝐰_k^H 1 ]≽0 , 𝐖_k ≽0, ∀ k,
tr(𝐖_k) - 𝐰^H_k 𝐰_k ≤ 0, ∀ k.
The proof is given in Appendix A.
Although the equality constraint in (<ref>) has been reformulated as the equivalent inequality constraints, constraint (<ref>) is still nonconvex.
For handling this, we adopt the SCA technique that establishes an inner convex approximation of constraint (<ref>) given as
tr(𝐖_k) + (𝐰_k^(i-1))^H 𝐰_k^(i-1) - 2Re((𝐰_k^(i-1))^H 𝐰_k ) ≤ 0, ∀ k,
where 𝐰^(i-1)_k is the solution obtained at the (i-1)-th iteration of the SCA.
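Since w^H w ≥ 2Re((w^(i-1))^H w) - (w^(i-1))^H w^(i-1) for any w, the surrogate constraint above is a restriction of the original one: any point satisfying it also satisfies tr(𝐖_k) - 𝐰_k^H 𝐰_k ≤ 0. A quick numerical check of this inner-approximation property, using arbitrary test points, is sketched below.

```python
import numpy as np

rng = np.random.default_rng(3)
M = 6
w  = rng.standard_normal(M) + 1j * rng.standard_normal(M)   # arbitrary current point
w0 = rng.standard_normal(M) + 1j * rng.standard_normal(M)   # previous SCA iterate
W  = np.outer(w, w.conj()) + 0.05 * np.eye(M)               # arbitrary Hermitian W

lhs_original  = np.trace(W).real - (w.conj() @ w).real
lhs_surrogate = np.trace(W).real + (w0.conj() @ w0).real - 2 * np.real(w0.conj() @ w)

# surrogate - original = ||w - w0||^2 >= 0, so enforcing the surrogate is conservative.
print(lhs_surrogate - lhs_original >= -1e-12)   # True
```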
Therefore, at the i-th iteration, the convex approximation of problem (<ref>) can be reformulated as
max_𝒲, t_k, λ ∑_k=1^K log_2 ( 1+ 2 t_k Re(𝐰_k^H 𝐡_k) - t_k^2 B_k(𝐖_k) ) - λ( 1/ϵ∑_k=1^K tr( W_k)+ P_0)
s.t. (<ref>), (<ref>),(<ref>),(<ref>),(<ref>).
Algorithm <ref> summarizes the iterative algorithm for handling problem (<ref>), where f̂_1(𝐰_k, 𝐖_k) = ∑_k=1^K log_2( 1+ 2 t_k Re(𝐰_k^H 𝐡_k) - t_k^2 B_k(𝐖_k) ) and f̂_2(𝐖_k) =1/ϵ∑_k=1^K tr( W_k)+ P_0. Although we cannot guarantee that the optimal solution of problem (<ref>) can be obtained, the proposed Algorithm <ref> follows the inexact Dinkelbach-type algorithm adopted in <cit.>, whose convergence can be guaranteed by the following lemma.
Let {𝐰_k^i,𝐖_k^i} be the solution sequence generated by solving problem (<ref>). The sequence {λ^(i)} generated by Algorithm 1 is non-decreasing and convergent.
Since
f̂_1(𝐰^(i),𝐖^(i))-λ^(i)f̂_2(𝐖^(i))
=(λ^(i+1)-λ^(i))f̂_2(𝐖^(i)),
we have λ^(i+1)≥λ^(i) if f̂_1(𝐰^(i),𝐖^(i))-λ^(i)f̂_2(𝐖^(i))≥ 0.
Obviously, f̂_1(𝐰^(i-1),𝐖^(i-1))-λ^(i)f̂_2(𝐖^(i-1))=0. At the i-th iteration, we approximate problem (<ref>) as
problem (<ref>) around 𝐰_k^(i-1). Since 𝐰_k^(i-1) is definitely a feasible solution of problem (<ref>), we have
f̂_1(𝐰^(i),𝐖^(i))-λ^(i)f̂_2(𝐖^(i))≥f̂_1(𝐰^(i-1),𝐖^(i-1))-λ^(i)f̂_2(𝐖^(i-1))= 0.
Therefore, we can conclude that the sequence {λ^(i)} is non-decreasing and Algorithm 1 converges due to the finite power budget.
Complexity Analysis:
The computational complexity of Algorithm <ref> is dominated by solving problem (<ref>). Problem (<ref>) involves linear matrix inequality (LMI) constraints that dominate the computation complexity. We notice that the problem contains one LMI constraint of size 2M, K LMI constraints of size M+1, and K LMI constraints of size M.
Given the required accuracy ϵ_0 > 0, the ϵ_0-optimal solution can be achieved after a sequence of iterations. Then, the computational complexity can be given as 𝒪( √((2M +1)(K+1)) M^6 K^3 I_iterln(1/ϵ_0) ) by reserving the highest order term, where I_iter denotes the number of iterations <cit.>.
Due to the stringent requirement introduced by (<ref>), it is generally non-trivial to directly obtain a feasible solution as an initial point. Alternatively, we can adopt the penalty SCA <cit.> and introduce auxiliary variables ρ̅_k to transform problem (<ref>) into
max_𝒲, t_k, λ ∑_k=1^K log_2 ( 1+ 2 t_k Re(𝐰_k^H 𝐡_k) - t_k^2 B_k(𝐖_k) ) - λ( 1/ϵ∑_k=1^K tr( W_k)+ P_0) - p̅∑_k=1^K ρ̅_k
s.t. tr(𝐖_k) + (𝐰_k^(i-1))^H 𝐰_k^(i-1) - 2Re((𝐰_k^(i-1))^H 𝐰_k ) ≤ρ̅_k, ∀ k,
(<ref>), (<ref>), (<ref>), (<ref>),
where p̅ and ∑_k=1^K ρ̅_k denote the weight coefficient and the penalty term, respectively. To obtain the initial point of (<ref>), we can solve problem (<ref>) as an initial warm-up phase by gradually raising p̅ to induce a reduction in the penalty term to a smaller value. When the penalty term decreases to zero, problem (<ref>) reduces to problem (<ref>), whose solution serves as the feasible initial point of (<ref>).
§.§ Extended Target Case
For estimating the extended target, we follow <cit.> to consider the CRB of the target response matrix 𝐀 instead of the angle. Since K ≤ M, transmitting K signal streams is not always sufficient for recovering the rank-M matrix. To address this issue, the BS generates additional signals that are dedicated for target probing. As such, the augmented data matrix at the l-th time slot is 𝐱̃[l]≜[𝐖, 𝐖̃][𝐬[l];𝐬̃[l]], where 𝐬̃[l] ∈ℂ^(N_t-K) × 1 is the dedicated probing signal and 𝔼( 𝐬[l] 𝐬̃^H[l] ) = 0.
Note that in the augmented signal, the beamforming 𝐖 = [𝐰_1, 𝐰_2, ⋯, 𝐰_K] ∈ℂ^M × K broadcasts the information data to the K users and the beamforming 𝐖̃ = [𝐰_K+1, ⋯, 𝐰_K+M] ∈ℂ^M × M is employed to generate probing signals for enabling the estimation of the target response matrix. However, the introduced probing signals 𝐬̃[l] inevitably generate undesired interference to the served multiple users that introduces non-trivial tradeoff between sensing and communication. In particular, the SINR received at the k-th user is given by
S̃ĨÑR̃_k = | 𝐡_k^H 𝐰_k|^2/( ∑ _i = 1,i ≠ k^K| 𝐡_k^H𝐰_i|^2 + ‖𝐡_k^H𝐖̃‖_2^2 + σ _C^2 ),
where ‖𝐡_k^H𝐖̃‖^2_2 is the additional interference due to the probing signals.
In such a case, the CRB for the extended target estimation can be derived as
CRB_extended= σ_s^2 M/Ntr(𝐑_𝐱^ - 1),
where 𝐑_𝐗 = 𝐖𝐖^H + 𝐖̃𝐖̃^H .
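A direct numerical evaluation of this CRB is straightforward once the augmented covariance R_X is available. The sketch below follows the displayed expression with prefactor σ_s^2 M/N and uses arbitrary, randomly drawn beamformers purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
M, N, K = 8, 20, 4
sigma_s2 = 1.0

W  = rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))   # user beams
Wt = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))   # dedicated probing beams
Rx = W @ W.conj().T + Wt @ Wt.conj().T                                 # full-rank sample covariance

crb_extended = sigma_s2 * M / N * np.trace(np.linalg.inv(Rx)).real
print("CRB for the extended target response matrix:", crb_extended)
```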
Based on the discussions above, the problem of communication-centric EE optimization for estimating an extended target can be formulated as
max_{𝐰_k}_k=1^K+M ∑_k=1^K log_2(1+S̃ĨÑR̃ _k)/1/ϵ∑_k=1^K+M w_k _2^2 + P_0
s.t. ∑_k=1^K+M w_k _2^2 ≤ P_max,
CRB_extended= σ_s^2 M/Ltr(𝐑_𝐱^ - 1) ≤τ ,
S̃ĨÑR̃_̃k̃≥γ_k.
Obviously, although constraints (<ref>) and (<ref>) are both convex, the fractional objective function (<ref>)
is still nonconvex.
Following Section <ref>, we first adopt Dinkelbach’s transformation to handle the nonconvex fractional programming and reformulate the problem as follows
max_{𝐰_k}_k=1^K+M ∑_k=1^K log_2 (1+S̃ĨÑR̃ _k) - λ( 1/ϵ∑_k=1^K+M w_k _2^2 + P_0)
s.t. (<ref>), (<ref>), (<ref>).
Then, by exploiting the equality -log a = max_b (log b - ab) <cit.>, problem (<ref>) can be reformulated as
max_{𝐰_k}_k=1^K+M, {b_k}_k=1^K, λ ∑_k=1^K log_2 ( | 𝐡_k^H 𝐰_k|^2 + ∑_i = 1,i k^K| 𝐡_k^H𝐰_i|^2 + ‖𝐡_k^H𝐖̃‖_2^2 + σ _C^2)
+ ∑_k=1^K( log_2 b_k - b_k ( ∑_i = 1,i k^K| 𝐡_k^H𝐰_i|^2 + ‖𝐡_k^H 𝐖̃‖_2^2 + σ _C^2 ) )
- λ( 1/ϵ∑_k=1^K+M w_k _2^2 + P_0)
s.t. (<ref>), (<ref>), (<ref>).
For obtaining a tractable formulation, by introducing auxiliary variables 𝐖_k ≜𝐰_k 𝐰_k^H, k ∈ [1, 2, ⋯, K] and 𝐑_𝐖̃ = 𝐖̃𝐖̃^H, problem (<ref>) can be reformulated as
max_{𝐖_k, b_k}_k=1^K, 𝐑_𝐖_2, λ ∑_k=1^K log_2 ( 𝐡_k^H ( 𝐖_k +∑_i = 1,i k^K𝐖_i + 𝐑_𝐖̃ + σ _C^2 ) 𝐡_k )
+ ∑_k=1^K( log_2 b_k - b_k ( ∑_i = 1,i k^K𝐡_k^H𝐖_i𝐡_k + 𝐡_k^H 𝐑_𝐖̃𝐡_k + σ _C^2 ) )
- λ( 1/ϵtr( ∑_k=1^K𝐖_k + 𝐑_𝐖̃) + P_0)
,
s.t. tr( ∑_k=1^K𝐖_k + 𝐑_𝐖̃) ≤ P_max,
σ_s^2 M/Ntr( ( ∑_k=1^K𝐖_k + 𝐑_𝐖̃) ^-1) ≤τ ,
𝐡_k 𝐖_k 𝐡^H_k - γ_k ( ∑_i = 1,i k^K𝐡_k^H 𝐖_i 𝐡_k + 𝐡_k^H 𝐑_𝐖̃𝐡_k ) ≥γ_k σ_c^2,
𝐖_k ≽0, ∀ k, 𝐑_𝐖̃≽0,
rank(𝐖_k) = 1, ∀ k.
After inspecting problem (<ref>), we can find that all constraints are convex, except for constraint (<ref>). Besides, the objective function in (<ref>) includes three sets of optimization variables: {λ}, {b_k}, and {{𝐖_k}_k=1^K, 𝐑_𝐖̃}. Moreover, when fixing the other two sets, the objective function is concave with respect to the remaining one. Therefore, we first adopt the rank relaxation to remove constraint (<ref>) and then employ an alternating optimization (AO) algorithm to optimize the three sets of variables alternately.
The detailed algorithm is summarized in Algorithm 2, where we denote
f̃_1(𝐖_k, 𝐑_𝐖̃ ) = ∑_k=1^K log_2 ( 𝐡_k^H ( 𝐖_k +∑_i = 1,i k^K𝐖_i + 𝐑_𝐖̃ + σ _C^2 ) 𝐡_k )
+ ∑_k=1^K( log_2 b_k - b_k ( ∑_i = 1,i k^K𝐡_k^H𝐖_i𝐡_k + 𝐡_k^H 𝐑_𝐖̃𝐡_k + σ _C^2 ) )
f̃_2(𝐖_k, 𝐑_𝐖̃ ) = 1/ϵtr( ∑_k=1^K𝐖_k + 𝐑_𝐖̃) + P_0.
In the following theorem, we will show that the rank-1 solution of problem (<ref>) can be recovered from the solution generated by Algorithm 2.
Given the optimal solution obtained by Algorithm <ref> as {𝐖_k^∗, 𝐑^∗_𝐖̃}. When K = 1,
𝐖̂^∗ = 𝐖^∗𝐡_k 𝐡_k^H 𝐖^∗/𝐡_k^H 𝐖^∗𝐡_k, 𝐑̂^∗_𝐖̃= 𝐑^∗_𝐖̃
is the optimal rank-1 solution that achieves identical performance as {𝐖_k^∗, 𝐑^∗_𝐖̃}.
When K > 1, one can always construct the optimal solution that satisfies the rank-1 constraint acquiring the same performance.
The proof is given in Appendix B.
Complexity Analysis:
We provide the computational complexity of Algorithm <ref> as follows. Similarly, the problem (<ref>) is a semidefinite program that can be solved by the standard interior-point algorithm. We note that the problem involves K+1 LMI constraints of size M. We consider the highest order term and express the computational complexity as 𝒪( √(MK+M+K+1) M^6 K^3 I_iterlog(1/ϵ_0) ) for an ϵ_0-optimal solution, where I_iter represents the number of iterations <cit.>.
§ SENSING-CENTRIC ENERGY-EFFICIENT DESIGN
§.§ Performance Metric for Sensing-Centric EE
It is well known that CRB is the inverse of Fisher information for the unbiased estimator <cit.>. In fact, Fisher information is the statistical expected value of the observed information about an observable random variable. Considering these, we adopt the ratio of the reciprocal of the CRB to the transmit power, further normalized by the total time slot length. In this context, we arrive at a novel sensing-centric EE metric that measures the average sensing information per Joule, defined as
EE_s≜CRB^-1/L ( 1/ϵ∑_k=1^K w_k_2^2 + P_0 ) .
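In code, the metric amounts to a single expression; a minimal sketch with placeholder values for the CRB, the beamforming matrix, and the power-model constants is given below.

```python
import numpy as np

def ee_sensing(crb, W, L, eps=0.35, P0=2.0):
    """Sensing-centric EE: CRB^{-1} / ( L * ( (1/eps) * sum_k ||w_k||^2 + P0 ) )."""
    power = np.linalg.norm(W) ** 2 / eps + P0
    return 1.0 / (crb * L * power)

W = 0.1 * np.ones((8, 4))          # placeholder beamforming matrix
print(ee_sensing(crb=1e-3, W=W, L=30))
```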
In this manner, both the sensing-centric EE and communication-centric EE measure the “information” per Joule, but the “information” has different meanings.
Based on the above metric, we study the waveform design to maximize the sensing-centric EE considering the point-like target and the extended target in Sections <ref> and <ref>, respectively.
§.§ Point-Like Target Case
Considering the point-like target, with the CRB of estimating θ given in (<ref>), the sensing-centric EE optimization problem can be formulated as
max_{𝐰_k}_k=1^K CRB^-1(θ)/ L ( 1/ϵ∑_k=1^K w_k_2^2 + P_0 )
s.t. ∑_k=1^K w_k _2^2 ≤ P_max,
CRB(θ) ≤ρ ,
| h_k^H w_k |^2/σ_c^2 + ∑^K_j = 1, j ≠ k| h_k^H w_j |^2≥γ_k, ∀ k.
Obviously, problem (<ref>) is also intractable due to the fractional objective function (<ref>) and nonconvex constraints (<ref>) and (<ref>).
For handling the fractional objective function (<ref>), with the introduced auxiliary optimization variables ω, t,ϕ, and ζ, problem (<ref>) can be reformulated as
max_{𝐰_k}_k=1^K, ω, ϕ, ζ ω
s.t. CRB(θ) ≤1/t,
1/ϵ∑_k=1^K w_k_2^2 + P_0 ≤ϕ, t ≥ζ^2,
ω≤ζ^2/ϕ,
(<ref>), (<ref>), (<ref>).
The equivalence between (<ref>) and (<ref>) is obvious, since constraints
(<ref>), (<ref>), and (<ref>) should be active at the optimal solution. We note that (<ref>) shares the same form as (<ref>). Therefore, with the Schur complement, constraint (<ref>) can be reformulated as
[ ℱ(∑_k=1^K𝐖_k) - t σ_s^2/2L |α|^2 √(M) a^H(θ)∑_k=1^K𝐖_k^Tȧ(θ); √(M)ȧ^H(θ) R_ x^T a(θ) a^H(θ) R_ x^T a(θ) ]≽0,
where ℱ(∑_k=1^K𝐖_k) ≜ Mȧ^H(θ)∑_k=1^K𝐖_k^Tȧ(θ)+ a^H(θ)∑_k=1^K𝐖_k^T a(θ)‖ȧ(θ)‖^2 and 𝐖_k = 𝐰_k 𝐰_k^H. Furthermore, Lemma <ref> presents an equivalent formulation of the equality 𝐖_k = 𝐰_k 𝐰_k^H whose convex approximation has been given in (<ref>) and (<ref>).
Then, for handling the fractional constraint (<ref>), we introduce auxiliary variables {τ_k, ψ_k, ∀ k} to reformulate (<ref>) as
τ^2_k / ψ_k ≥γ_k,
τ_k = 𝐡_k^H 𝐰_k,
ψ_k ≥σ_c^2 + ∑^K_j = 1, j ≠ k| h_k^H w_j |^2,
where (<ref>) and (<ref>) are convex constraints. Then, problem (<ref>) can be reformulated as
max_Θ ω
s.t. ω≤ζ^2/ϕ , γ_k ≤τ^2_k/ψ_k , ∀ k
(<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>),(<ref>), (<ref>),
where Θ≜{{𝐖_k, 𝐰_k}_k=1^K, ω, t,ϕ, ζ, τ_k, ψ_k } denotes the set of optimization variables. Obviously constraint (<ref>) is convex. Therefore, the challenge for handling problem (<ref>) lies in the nonconvexity of constraint (<ref>). To deal with this, we adopt the SCA techniques to establish a convex approximation of constraint (<ref>). Since function ζ^2/ϕ is jointly convex with respect to ζ and ϕ, its convex lower approximation can be established as
ζ^2/ϕ ≥(ζ^(n))^2/ϕ^(n) + 2 ζ^(n)/ϕ^(n) (ζ - ζ^(n) ) - ( ζ^(n)/ϕ^(n)) ^2 (ϕ - ϕ^(n) ) = 2 ζ^(n)/ϕ^(n)ζ - ( ζ^(n)/ϕ^(n)) ^2 ϕ ,
where ζ^(n) and ϕ^(n) are the feasible points obtained at the n-th iteration of the SCA. Consequently, the inner convex approximation of ω≤ζ^2/ϕ is
ω≤2 ζ^(n)/ϕ^(n)ζ - ( ζ^(n)/ϕ^(n)) ^2 ϕ.
Similarly, the inner convex approximation of γ_k ≤τ^2_k/ψ_k, ∀ k is
γ_k ≤2 τ_k^(n)/ψ_k^(n)τ_k - ( τ_k^(n)/ψ_k^(n)) ^2 ψ_k , ∀ k ,
where τ_k^(n) and ψ_k^(n) are the feasible points obtained at the n-th iteration.
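Because ζ^2/ϕ (and likewise τ_k^2/ψ_k) is jointly convex for positive denominators, the linearization above is a global under-estimator, which is what makes the approximation an inner one. The brief check below samples random points and verifies the bound numerically; the expansion point and sampling range are arbitrary.

```python
import numpy as np

def surrogate(zeta, phi, zeta0, phi0):
    # First-order expansion of zeta^2/phi around (zeta0, phi0), simplified as in the text.
    return 2 * zeta0 / phi0 * zeta - (zeta0 / phi0) ** 2 * phi

rng = np.random.default_rng(5)
zeta0, phi0 = 1.3, 2.0
ok = all(z ** 2 / p >= surrogate(z, p, zeta0, phi0) - 1e-12
         for z, p in rng.uniform(0.1, 5.0, size=(1000, 2)))
print(ok)   # True: the linearization never exceeds the convex function
```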
Finally, a convex approximation of problem (<ref>) is formulated as
max_Θ ω
s.t. (<ref>), (<ref>), (<ref>).
In this way, problem (<ref>) can be solved with off-the-shelf numerical convex program solvers such as CVX Toolbox <cit.>. We summarize the proposed iterative method in Algorithm <ref>, where its initial feasible solution can be obtained by following the penalty SCA method given in Remark 1.
In the following, we analyze the convergence of Algorithm <ref>. We can note that in the iterative procedure of Algorithm <ref>, Θ^(n-1) is always feasible in problem (<ref>) at n-th iteration owing to the adopted first-order Taylor approximation. We note that (<ref>) can be optimally solved and the optimal value of its objective function serves as a lower bound on that of (<ref>).
Therefore, it can be guaranteed that the optimal value of (<ref>) at the n-th iteration, denoted as p_∗^(n), always satisfies p_∗^(n)≥ p_∗^(n-1). Consequently, Algorithm <ref> produces a non-decreasing objective function of problem (<ref>).
Similar to Algorithm <ref>, the computational complexity of Algorithm <ref> is 𝒪( √((2M +1)(K+1)) M^6 K^3 I_iterln(1/ϵ_0) ).
§.§ Extended Target Case
For the case of the extended target, following the discussion in Section <ref>, we choose 𝐀 as the parameter to be estimated and adopt the formulation of CRB in (<ref>).
Then, we have the sensing-centric EE for sensing an extended target as
EE_S = ( σ_s^2 M/Ltr(𝐑_𝐱^-1) )^-1/ L ( 1/ϵtr(𝐑_𝐱) + P_0 ) = ( tr(𝐑_𝐱^ - 1) )^-1/σ_s^2 M ( 1/ϵtr(𝐑_𝐱) + P_0 ) ,
where 𝐑_𝐗 = 𝐖𝐖^H + 𝐖̃𝐖̃^H = ∑_k=1^K 𝐰_k 𝐰_k^H + 𝐑_𝐖̃. Then, we formulate the problem as
max_{𝐰_k}_k=1^K,𝐑_𝐖̃ ( tr(𝐑_𝐱^ - 1) )^-1/σ_s^2 M ( 1/ϵtr(𝐑_𝐱) + P_0 )
s.t. tr(𝐑_𝐱) ≤ P_max,
σ_s^2 M/Ntr(𝐑_𝐱^ - 1) ≤ϕ ,
S̃ĨÑR̃_̃k̃≥γ_k,
where S̃ĨÑR̃_̃k̃ is given in (<ref>) and can be recast as a convex form in (<ref>) by letting 𝐖_k = 𝐰_k 𝐰_k^H.
We notice that in (<ref>), the numerator is the reciprocal of a convex function and the denominator is strictly positive and convex. To handle its nonconvexity, we introduce auxiliary optimization variables p_e,q_e and equivalently transform the problem into
max_{𝐰_k}_k=1^K,𝐑_𝐖̃, q_e, p_e 1/p_e q_e
s.t. p_e ≥σ_s^2 M ( 1/ϵtr(𝐑_𝐱) + P_0 ), q_e ≥tr(𝐑_𝐱^ - 1),
(<ref>), (<ref>),(<ref>).
Then, the problem can be further transformed into its equivalent form as
min_{𝐖_k}_k=1^K,𝐑_𝐖̃, q_e, p_e ln(p_e) + ln(q_e) s.t. (<ref>), (<ref>),
where the objective function is still not convex, but can be approximated based on the first order Taylor series expansion given by
ln(p_e) + ln(q_e) ≤ln( p^(n)_e ) + ln( q_e^(n)) + 1/p_e^(n)( p_e-p_e^(n)) + 1/q^(n)_e( q_e-q^(n)_e) ,
where p_e^(n) and q_e^(n) are the feasible solutions obtained at the n-th iteration. Following the techniques detailed in Section <ref>, a convex approximation of problem (<ref>) at the n-th iteration can be established as
min_{𝐖_k}_k=1^K, 𝐑_𝐖̃, q_e, p_e ln(p^(n)_e) + ln(q_e^(n)) + 1/p_e^(n) (p_e-p_e^(n)) + 1/q^(n)_e (q_e-q^(n)_e)
s.t. (<ref>), (<ref>),(<ref>),(<ref>), (<ref>).
The computational complexity is 𝒪( √(MK+M+K+1) M^6 K^3 I_iterln(1/ϵ_0) ) for an ϵ_0-optimal solution.
Based on the optimal solution of (<ref>), denoted as {𝐖_k^∗, 𝐑^∗_𝐖̃}, the optimal rank-1 solutions can always be reconstructed.
The proof can be achieved by following the proof of Theorem 2 and the details are omitted for brevity.
§ APPROXIMATE PARETO BOUNDARY OF ENERGY-EFFICIENT ISAC SYSTEMS
In this section, we aim to investigate the Pareto boundary of the achievable EE performance region built on the communication-centric EE and the sensing-centric EE.
Considering the point-like target case, we follow <cit.> to formulate the search of the Pareto boundary as a constrained optimization problem that maximizes the communication-centric EE under the sensing-centric EE constraint. It is worth noting that the proposed algorithm can be adapted to the extended target case directly. Now, we aim to solve
max_{𝐰_k}_k=1^K ∑_k=1^K log_2 ( 1+| h_k^H w_k |^2 / ( σ_c^2 + ∑_k ∈𝒦 j ≠ k| h_k^H w_j |^2) ) /1/ϵ∑_k w_k_2^2 + P_0
s.t. CRB^-1(θ)/ L ( 1/ϵ∑_k=1^K w_k_2^2 + P_0 ) ≥ℰ,
∑_k w_k _2^2 ≤ P_max,
where ℰ denotes the required minimum sensing-centric EE threshold.
Obviously, problem (<ref>) is a nonconvex fractional program, which is challenging to solve directly.
To handle fractional objective function (<ref>) and nonconvex constraint (<ref>), we follow <cit.> to find the approximate optimal Pareto boundary for characterizing the tradeoff between the communication-centric EE and sensing-centric EE.
In particular, we first apply the Dinkelbach algorithm to reformulate fractional function (<ref>) as
max_λ ∑_k=1^Klog_2 ( 1+ | h_k^H w_k |^2 /B_k(𝐖_k)) - λ( 1/ϵ∑_k=1^Ktr( W_k)+ P_0 )
s.t. (<ref>), (<ref>),
where B_k(𝐖_k) = ∑^K_j=1, j ≠ ktr(𝐐_k 𝐖_j) + σ_c^2.
Furthermore, by introducing auxiliary variables b_k, k=1,…,K, the intractable fractional terms in (<ref>) can be equivalently formulated as
∑_k=1^Klog_2 ( 1+ | h_k^H w_k |^2 /B_k(𝐖_k)) = max_b_k ( ∑_k=1^Klog_2 (1+ b_k) - ∑_k=1^K b_k + ∑_k=1^K(1+b_k)| h_k^H w_k |^2 /B_k(𝐖_k)),
which has an analytical solution b_k = | h_k^H w_k |^2/B_k(𝐖_k).
Finally, by applying the quadratic transform <cit.>, problem (<ref>) can be reformulated as
max_{𝐰_k 𝐖_k, b_k, t_k}_k=1^K, λ ∑_k ( log_2 (1+ b_k) - b_k + 2t_k √((1+b_k))Re(𝐰_k^H 𝐡_k) - t_k^2 B_k(𝐖_k) )
- λ( 1/ϵ∑_k=1^Ktr( W_k)+ P_0)
s.t. (<ref>), (<ref>),(<ref>),(<ref>).
The convex approximation of nonconvex constraint (<ref>) is constraint (<ref>), as mentioned in Section <ref>. For handling nonconvex constraint (<ref>),
we introduce an auxiliary variable ℰ̃ and employ the Schur complement to obtain the convex approximation of problem (<ref>) given by
max_{𝐰_k 𝐖_k, b_k, t_k}_k=1^K, λ ∑_k ( log_2 (1+ b_k) - b_k + 2t_k √((1+b_k))Re(𝐰_k^H 𝐡_k) - t_k^2 B_k(𝐖_k) )
- λ( 1/ϵ∑_k=1^Ktr( W_k)+ P_0)
s.t. [ ℱ(∑_k=1^K𝐖_k) - ℰ̃σ_s^2/2L |α|^2 √(M) a^H(θ)∑_k=1^K𝐖_k^Tȧ(θ); √(M)ȧ^H(θ) R_ x^T a(θ) a^H(θ) R_ x^T a(θ) ]≽0 ,
ℰ̃≥ℰ N (1/ϵ∑_k=1^Ktr( W_k)+ P_0),
(<ref>), (<ref>), (<ref>).
Problem (<ref>) is convex and its optimum can be obtained by the interior point method. Therefore, an efficient solution of problem (<ref>) can be obtained by solving a sequence of problems of the form (<ref>). Algorithm <ref> summarizes the iterative algorithm, where f̆_1(𝐰_k, 𝐖_k) = ∑_k=1^K( log_2 (1+ b_k) - b_k + 2t_k √((1+b_k))Re(𝐰_k^H 𝐡_k) - t_k^2 B_k(𝐖_k) ) and f̆_2(𝐖_k) = λ( 1/ϵ∑_k=1^Ktr( W_k)+ P_0).
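Tracing the approximate Pareto boundary then reduces to sweeping the threshold ℰ and recording the best attainable EE_C for each value. The skeleton below shows this sweep; the solver call is a hypothetical stand-in returning a synthetic curve, since the actual step requires running the Dinkelbach/SCA iterations and solving an SDP per iteration with a solver such as CVX.

```python
import numpy as np

def max_ee_c_given_ee_s(ee_s_min):
    # Hypothetical stand-in for Algorithm 4: in practice this runs the
    # Dinkelbach/SCA iterations and solves a convex SDP in each of them.
    # A synthetic, monotonically non-increasing curve is returned for illustration.
    return max(0.0, 4.0 - 0.5 * ee_s_min ** 2)

thresholds = np.linspace(0.0, 2.5, 11)          # sweep of the EE_S requirement
boundary = [(e, max_ee_c_given_ee_s(e)) for e in thresholds]
for ee_s, ee_c in boundary:
    print(f"EE_S >= {ee_s:4.2f}  ->  best EE_C = {ee_c:4.2f}")
```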
§ NUMERICAL RESULTS
In this section, we provide simulation results of the proposed energy-efficient waveform design. Numerical analysis is presented to evaluate the performance of communication-centric EE (EE_C), sensing-centric EE (EE_S), and their approximate Pareto boundary.
Unless stated otherwise, we consider a dual-functional BS equipped with N = 20 receiving antennas, and the frame length L is set to 30. The maximum transmission power P_max is set to 30 dBm with the power amplifier efficiency ϵ = 0.35. The circuit power consumption is set to P_0 = 33 dBm. For the target estimation of radar, the target angle is θ = 90 ^∘.
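For reference, the quoted power levels convert to linear scale as follows (30 dBm = 1 W and 33 dBm ≈ 2 W); the helper function is a trivial illustration.

```python
def dbm_to_watt(p_dbm):
    return 10 ** (p_dbm / 10) / 1e3

P_max = dbm_to_watt(30)    # 1.0 W transmit power budget
P_0   = dbm_to_watt(33)    # ~2.0 W circuit power consumption
print(P_max, round(P_0, 3))
```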
§.§ EE_C Optimization
We first examine the performance of Algorithm <ref> for maximizing EE_C considering the existence of a point-like target. The convergence rate of Algorithm 1 is given in Fig. <ref>. Obviously, it enjoys a fast convergence rate, whose objective function value converges within 12 iterations on average.
Furthermore, the convergence rate of Algorithm 1 is almost the same for
different system parameters, e.g., different M and CRB constraints, which confirms the scalability of Algorithm 1.
Fig. <ref> investigates the EE_C performance versus the root-CRB threshold for different M. The EE_C increases with the increasing Root-CRB threshold, indicating that EE_C can achieve a higher level when the sensing performance requirement is less stringent. Indeed, increasing the number of antennas can improve EE_C, since more spatial degrees-of-freedom can be utilized for designing an efficient ISAC waveform. On the other hand, the baseline scheme only maximizes the communication sum rate under the same constraints of problem (<ref>).
Obviously, the EE_C of the baseline scheme is unsatisfying, since it only considers the spectral efficiency maximization instead of the EE_C maximization. In such a case, the baseline scheme encourages the ISAC BS to adopt as much power as possible for increasing the communication sum rate.
Fig. <ref> and Fig. <ref> plot the EE_C of the point-like target and extended target with the increasing SINR constraint of multiple users, γ_k, respectively. With the increasing γ_k, EE_C first remains unchanged and then decreases due to the shrunken feasible region. Therefore, increasing the downlink communication rate does not necessarily improve EE_C. Furthermore, with the increasing root-CRB, the EE_C decreases, since more power is allocated to radar sensing due to the increasing sensing requirements. A similar trend can also be found in Fig. <ref> for the increasing CRB in the extended target case.
§.§ EE_S Optimization
In this subsection, we investigate the performance of EE_S optimization for both the point-like target sensing and extended target cases. In Fig. <ref>, we first consider the point-like target to show the EE_S versus the increasing power budget, for different SINR levels. As expected, EE_S increases with the increasing P_T, since the increasing power improves the estimation accuracy and increases EE_S. Besides, lowering the SINR requirement also improves
EE_S, since relaxing the SINR constraint enlarges the feasible region and improves EE_S.
For demonstrating the performance gain obtained by our proposed Algorithm 3,
we perform the performance comparison with two other baselines, namely BA_1 and BA_2. In particular, BA_1 aims to minimize the transmission power while BA_2 aims to maximize the communication sum rate under the same constraints as our proposed method (γ_k = 5 dB, the root-CRB threshold is set to 0.15 deg, P_max = 30 dBm). The results indicate that EE_S of BA_1 is significantly low due to the insufficient power for improving the CRB performance. Additionally, EE_S of BA_2 is also inferior to the proposed method and exhibits a further decline as the transmission power increases, since most of the power is utilized for maximizing the sum rate instead of sensing target.
Fig. <ref> further demonstrates the EE_S versus the SINR requirement, where the root-CRB threshold is set to 0.15 deg. It can be observed that EE_S decreases as the increasing SINR and the number of communication users since the increasing communication requirements deteriorates the sensing performance.
As for the scenario of sensing an extended target, Fig. <ref> shows the EE_S versus communication SINR under different numbers of users and different CRB.
It is worth noting that the performance metric for the extended target sensing EE_S is different from the point-like target case.
Similar to the scenario of sensing a point-like target, EE_S decreases with the increasing requirements of communication SINR, especially when the number of users is larger. Besides, increasing CRB requirements improves EE_S, due to the improved estimation performance.
§.§ Approximate Pareto Boundary of Energy-Efficient ISAC.
Fig. <ref> plots the approximate Pareto boundary of energy-efficient ISAC, which demonstrates the tradeoff between EE_C and EE_S. With the more stringent EE_S constraint, the EE_C decreases.
In particular, when the required minimum sensing-centric EE threshold ℰ is small, strengthening the requirement of EE_S only affects EE_C mildly.
However, when the required EE_S exceeds a certain threshold, tightening the EE_S constraint brings a sharp decline in EE_C.
This phenomenon shows that there is a non-trivial tradeoff between EE_S and EE_C, which should be given serious consideration.
Besides, we can find that the area spanned by the Pareto boundary is sensitive to the number of communication users, K, since the increasing number of served communication users consumes the available spatial degrees of freedom which cannot compensate for the performance loss due to the increasingly stringent EE_S constraint.
Therefore, it is more challenging to balance EE_S and EE_C for a large K.
On the other hand, after the required EE_S surpasses some threshold, EE_C decreases sharply. This is because most of the available resources are allocated for satisfying the stringent EE_s constraint, such that the remaining resources are insufficient for guaranteeing the EE_C performance.
§ CONCLUSION
In this paper, we addressed the problem of maximizing energy efficiency for MIMO ISAC systems. We first studied the communication-centric EE adopting the conventional definition of EE in both the point-like target and extended target cases. We reformulated the objective function using the quadratic-transform-Dinkelbach method and solved the sub-problem by leveraging the Schur complement and semi-relaxation techniques. In the second part, we introduced a novel performance metric for measuring sensing-centric EE. We iteratively approximated the objective function as a convex program exploiting SCA to address this problem. Finally, we investigated the tradeoff between the two EE metrics and provided an effective solution. Numerical results showed an improvement compared to the benchmark on both communication-centric EE and sensing-centric EE performance, and we also demonstrated the tradeoff between communication-centric and sensing-centric EE.
§ APPENDIX A
First, we provide the matrix inequality
𝐖_k ≽𝐰_k 𝐰_k^H,
which satisfies either of the following cases:
Case I: 𝐖_k ≻𝐰_k 𝐰_k^H. Then, we have tr(𝐖_k) > tr(𝐰_k 𝐰^H_k).
Case II: 𝐖_k = 𝐰_k 𝐰_k^H. In this case, we have tr(𝐖_k) = tr(𝐰_k 𝐰^H_k).
By combining 𝐖_k ≽𝐰_k 𝐰_k^H, with an additional LMI constraint, given as tr(𝐖_k) ≤tr(𝐰_k 𝐰^H_k), we can guarantee that Case II always holds.
We remark that tr(𝐰_k 𝐰_k^H) = tr(𝐰^H_k 𝐰_k) =𝐰^H_k 𝐰_k. Further applying the Schur complement, W_k =w_k w_k^H can be equivalently transformed into the following LMI, given as
[ 𝐖_k 𝐰_k; 𝐰_k^H 1 ]≽0 , ∀ k, tr(𝐖_k) - 𝐰^H_k 𝐰_k ≤0, ∀ k,
which completes the proof.
§ APPENDIX B
For K = 1, we can derive that 𝐡_k^H 𝐖̂^∗𝐡_k = 𝐡_k^H 𝐖^∗𝐡_k. Hence, the received SNR and the transmission rate at the user does not decrease. Besides, we have
𝐖^∗ - 𝐖̂^∗ = ( 𝐖^∗)^1/2( 𝐈 - (𝐖^∗)^1/2𝐡_k 𝐡_k^H (𝐖^∗)^1/2/𝐡_k^H 𝐖^∗𝐡_k) ( 𝐖^∗)^1/2≽0,
indicating that the power constraint is satisfied due to 𝐖^∗≽𝐖̂^∗. Additionally, replacing 𝐖^∗ by 𝐖̂^∗ would not decrease the transmission rate or increase the total power, showing that 𝐖̂^∗ is the optimum to the objective function.
Then, we discuss the case of K > 1 . We introduce r = 𝐡_k^H ( 𝐖_k +∑_i = 1,i k^K𝐖_i + 𝐑_𝐖̃ + σ _C^2 ) 𝐡_k -1 and equivalently reformulate (<ref>) as
max_{𝐖_k, b_k}_k=1^K, 𝐑_𝐖̃, λ ∑_k=1^K log( 1+r ) - λ( 1/ϵtr( ∑_k=1^K𝐖_k + 𝐑_𝐖̃) + P_0)
+ ∑_k=1^K( log b_k - b_k ( ∑_i = 1,i k^K𝐡_k^H𝐖_i𝐡_k + 𝐡_k^H 𝐑_𝐖̃𝐡_k + σ _C^2 ) )
s.t. r = 𝐡_k^H ( 𝐖_k +∑_i = 1,i k^K𝐖_i + 𝐑_𝐖̃ + σ _C^2 ) 𝐡_k -1 ,
(<ref>),(<ref>), (<ref>), (<ref>), (<ref>) .
We note that with the fixed λ, problem (<ref>) is jointly convex of variables {𝐖_k, b_k}_k=1^K, 𝐑_𝐖̃. Thus, it can be proved that Slater's condition holds such that strong duality holds. By introducing the Lagrange multipliers ϖ_k,1≤ 0, ϖ_k,2≤ 0, μ≤ 0 and Ψ_k ≽0, we provide the Lagrangian function of 𝐖_k as
ℒ(𝐖_k) = - ϖ_k,1𝐡_k^H 𝐖_k 𝐡_k + ∑_i = 1,i k^Kϖ_i,1𝐡_i^H 𝐖_k 𝐡_i + ϖ_k,2𝐡_k^H 𝐖_k 𝐡_k - ∑_i = 1,i k^Kϖ_i,2γ_k 𝐡_i^H 𝐖_k 𝐡_i
- tr(𝐖_k Ψ_k)+ μtr(𝐖_k) + ξ ,
where ξ represent the terms that do not involve 𝐖_k. Then, the KKT conditions of (<ref>) is given as
ℒ̇(𝐖^∗_k) = 0 , 𝐖^∗_k Ψ_k = 0.
Then, we have Ψ^∗_k = 𝐀_k^∗ - ϖ_k,1𝐡_k^H 𝐡_k and
𝐀_k^∗ = ∑_i = 1,i k^Kϖ_i,1𝐡_i^H 𝐡_i + ϖ_k,2𝐡_k^H 𝐡_k - ∑_i = 1,i k^Kϖ_i,2γ_k 𝐡_i^H 𝐡_i + μ𝐈_M.
Next, we discuss the rank of 𝐀_k^∗ under the following cases.
1) Case I: rank( 𝐀_k^∗) = M.
In this case, we have rank( Ψ^∗_k) ≥ M-1 with the inequality rank( 𝐗 + 𝐘 ) ≥rank( 𝐗 ) - rank( 𝐘 ) <cit.>. For rank(Ψ^∗_k ) = M, the first condition in (<ref>) implies 𝐖^∗_k = 0.
For rank(Ψ^∗_k ) = M - 1, we have rank( 𝐖^∗_k )= 1.
2) Case II: rank( 𝐀_k^∗) = r_a < M.
In this case, we exploit <cit.> to construct a rank-1 solution 𝐖^∗_k. We use {𝐪_k,i^∗}_i=1^M-r_a to denote the columns of an orthonormal basis of Ω_k^∗, which represents the null space of 𝐀_k^∗. As Ψ^∗_k ≽0, we have (𝐪_k,i^∗)^H Ψ^∗_k 𝐪_k,i^∗ = - ϖ_k,1 |𝐡_k^H 𝐪_k,i^∗ |^2 ≥ 0. Since (<ref>) should be active at the optimum, indicating ϖ_k,1≥ 0, we have 𝐡_k^H 𝐪_k,i^∗ = 0 and Ψ^∗_k Ω_k^∗ = 0. Thus, M - r_a dimensions of Ψ^∗_k's null space can be represented by Ω_k^∗. Further denoting the null space of Ψ^∗_k by Ω̄_k^∗, we have rank(Ω̄_k^∗) ≥ M - r_a. Additionally, since rank( 𝐀_k^∗) = r_a, we have rank( Ψ^∗_k) ≥ r_a - 1, which shows that rank(Ω̄_k^∗) ≤ M - r_a + 1. Then, it can be readily noted that rank(Ω̄_k^∗) = M - r_a or rank(Ω̄_k^∗) = M - r_a + 1. When rank(Ω̄_k^∗) = M - r_a , we have 𝐖^∗_k = ∑_i=1^M-r_aλ_k,i^∗𝐪_k,i^∗ (𝐪_k,i^∗)^H with λ_k,i^∗≥ 0. In such a case, 𝐡_k^H 𝐖_k^∗𝐡_k = 0, which contradicts the optimality. Hence, we conclude that rank(Ω̄_k^∗) = M - r_a + 1. Denoting Ω̄_k^∗ = [Ω_k^∗, 𝐩_k^∗], the optimal solution 𝐖^∗_k can be given as 𝐖^∗_k = ∑_i=1^M-r_aλ_k,i^∗𝐪_k,i^∗ (𝐪_k,i^∗)^H + λ̃^∗_k 𝐩_k^∗ (𝐩_k^∗)^H with λ̃^∗_k ≥ 0. Therefore, a rank-1 solution can be constructed as
𝐖̂_k^∗ = 𝐖^∗_k - ∑_i=1^M-r_aλ_k,i^∗𝐪_k,i^∗ (𝐪_k,i^∗)^H = λ̃^∗_k 𝐩_k^∗ (𝐩_k^∗)^H , 𝐑̂^∗_𝐖̃ = 𝐑^∗_𝐖̃ + ∑_i=1^M-r_aλ_k,i^∗𝐪_k,i^∗ (𝐪_k,i^∗)^H.
In the following, we show that the reconstructed solution, 𝐖̂_k^∗ and 𝐑̂^∗_𝐖̃ satisfy the constraints. Firstly, we have
𝐡_k^H 𝐖_k^∗𝐡_k = 𝐡_k^H 𝐖̂_k^∗𝐡_k, 𝐡_k^H (∑_i = 1,i k^K𝐖^∗_i + 𝐑^∗_𝐖̃) 𝐡_k = 𝐡_k^H (∑_i = 1,i k^K𝐖̂^∗_i + 𝐑̂^∗_𝐖̃) 𝐡_k.
Therefore, the right-hand side term in (<ref>) and the left-hand side term in (<ref>) remain unchanged.
Besides, it can be readily verified that constraints (<ref>) and (<ref>) hold, since 𝐖_k^∗ + 𝐑^∗_𝐖̃ = 𝐖̂^∗_k + 𝐑̂^∗_𝐖̃, which completes the proof.
|
http://arxiv.org/abs/2307.04490v1 | 20230710112803 | A symmetry and Noether charge preserving discretization of initial value problems | [
"Alexander Rothkopf",
"Jan Nordström"
] | math.NA | [
"math.NA",
"cs.NA",
"hep-lat",
"physics.comp-ph"
] |
Alexander Rothkopf (corresponding author), Faculty of Science and Technology, University of Stavanger, 4021 Stavanger, Norway (e-mail: [email protected]).
Jan Nordström, Department of Mathematics, Linköping University, SE-581 83 Linköping, Sweden, and Department of Mathematics and Applied Mathematics, University of Johannesburg, P.O. Box 524, Auckland Park 2006, Johannesburg, South Africa (e-mail: [email protected]).
Taking insight from the theory of general relativity, where space and time are treated on the same footing, we develop a novel geometric variational discretization for second order initial value problems (IVPs). By discretizing the dynamics along a world-line parameter, instead of physical time directly, we retain manifest translation symmetry and conservation of the associated continuum Noether charge. A non-equidistant time discretization emerges dynamically, realizing a form of automatic adaptive mesh refinement (AMR), guided by the system symmetries. Using appropriately regularized summation by parts finite difference operators, the continuum Noether charge, defined via the Killing vector associated with translation symmetry, is shown to be exactly preserved in the interior of the simulated time interval.
The convergence properties of the approach are demonstrated with two explicit examples.
Initial Value Problem, Summation By Parts, Time-Translation Invariance, Conserved Noether Charge, Adaptive Mesh Refinement
§ INTRODUCTION
Symmetries play a central role in our understanding of dynamical processes in both classical <cit.> and quantum <cit.> physics. Emmy Noether achieved groundbreaking insight, when she proved that the presence of a global continuous symmetry in the action S of a system implies the existence of a conserved current, whenever the equations of motions are fulfilled <cit.>. Via such a Noether current, one can define a quantity, which remains unchanged during the evolution of the system and which is referred to as Noether charge. Noether's theorem thus offers a fundamental understanding of central tenets of classical physics, such as energy and momentum conservation, which it relates to the invariance of physics under translations in time and space respectively.
In quantum theory, the presence of symmetries limits the type of quantum fluctuations which may occur <cit.>, with measurable consequences for the spectrum of elementary particles and their bound states. The four Noether currents associated with space and time translations are conventionally summarized in a quantity called the energy-momentum tensor T^μν(x), where μ and ν refer to spatial and temporal components. It offers access to vital properties of a system, one pertinent example being the energy density profile <cit.> of a static charge distribution via the ε(x)=T^00(x) component or the corresponding electric field-line configuration via the spatial components T_ij(x)=F_iμF^μ_j-1/4δ_ijF_μν^2 of the electromagnetic field F_μν, referred to as the Maxwell stress tensor (see e.g. <cit.>).
The simulation of dynamical phenomena in classical and quantum systems is often performed after discretizing space and time on a finite mesh (for a discussion of discretization in functional spaces see e.g <cit.>). Finite difference schemes, formulated in their modern summation-by-parts (SBP) form (for reviews see e.g. <cit.>) offer both conceptual and practical benefits. The SBP approach in both space and time <cit.> offers proofs of stability based on the so-called energy method, which can be extended to high-order schemes in a straight forward fashion. Not only do SBP operators mimic integration by parts (IBP) exactly in the discretized setting, but in addition they constitute a cost effective approximation to differential operators on many mesh types.
The discretization of space and time in its conventional form, i.e. considering x and t as independent variables, necessarily affects the symmetry properties of the system at hand (see e.g. the discussion in <cit.>). Where the continuum theory e.g. admits translations of any magnitude, i.e. in particular also infinitesimal ones, the discretized theory on a space-time mesh with grid spacing Δ_μ only allows one to shift space and time by that finite amount. In general this entails that a central condition of Noether's theorem, the presence of a continuous symmetry, does not hold and the corresponding continuum Noether charge fails to remain constant over time. This is particularly concerning with regards to time translation symmetry and energy conservation, which are closely related to the stability of the simulation.
Artificial loss of energy is often considered benign, as it is simply a matter of losing accuracy. An artificial increase of energy will, as energy is not bounded from above, eventually lead to a divergence of the simulated dynamics, characteristic of an unstable scheme. On the other hand, if energy is conserved, it puts stringent bounds on the growth of the solution. In the context of symplectic schemes, which conserve energy on average, one can relate energy conservation directly to the stability of the numerical scheme (see e.g. <cit.> and also <cit.>).
One strategy to retain energy conservation for systems with second order governing equations is to go over to a Hamiltonian approach, where only space is discretized, while time remains continuous. One converts the equation of motion of the Lagrange formalism, which is second order in the time derivative into a set of two equations of motion of first order, after replacing velocities with the so-called canonical momentum. After this step, a discrete phase-space volume preserving time stepping may be implemented (c.f. Verlet-Størmer <cit.>). This approach crucially hinges on the availability of a Hamiltonian picture, i.e. whether the canonical momenta can be defined, which may face difficulties in systems with inherent constraints or requires the choice of a particular gauge, as in Maxwell's electrodynamics <cit.>. Another strategy is to determine whether Noether's theorem may be salvaged in the presence of a finite grid spacing <cit.>. One may e.g. consider modifications to the continuum energy expression, which remain conserved, given a particular choice of difference approximation. However, as the necessary schemes are not of SBP type, they do not mimic other relevant properties of the continuum theory.
In this study we develop a generic approach to discretize second order IVPs on the level of the system Lagrangian, while retaining the manifest translation invariance of the continuum theory. In order to do so we will take inspiration from the general theory of relativity (for a textbook see e.g. <cit.>), where space and time are treated on the same footing. In this formalism the presence of translation symmetry is evident from the form of the Lagrangian itself. We build upon our prior work on formulating IVPs directly via the action of the system, which allows us to avoid the need to derive their equations of motion. The action of the system is discretized using SBP finite difference operators with a physical null-space, developed in our previous paper <cit.>. These operators are crucial in mimicking the continuum derivation of Noether's theorem (and if one wishes to do so, the equations of motion).
The central outcome of this proof-of-principle study is a prescription of how to discretize second order IVPs directly on the level of the Lagrangian, while retaining the continuum time translation symmetry and thus exact conservation of the corresponding Noether charge. No reference to a Hamiltonian is required. We observe that a non-equidistant discretization emerges in the time coordinate, which represents a form of automatic adaptive mesh refinement (AMR) <cit.>, guided by the inherent symmetries of the system. Our results open up a novel route for obtaining optimal AMR procedures, where clustering and coarsening emerge as part of the solution process, thus avoiding the conventional use of sensors (see e.g. <cit.>), adjoint techniques (see e.g. <cit.>) or error estimates (see e.g. <cit.>).
In <ref> we discuss the continuum formulation of our geometrized variational approach with time considered as dependent variable. In <ref> the discretized formalism is introduced and we present its efficacy in <ref> using different example systems. We close with a summary and outlook in <ref>.
§ CONTINUUM FORMALISM WITH MANIFEST TRANSLATION SYMMETRY
The common starting point for the formulation of the variational principle in classical point mechanics is to consider the dynamics of a system as boundary value problem (BVP). The system, which takes on position x_i at t_i evolves to position x_f at t_f and we wish to determine the trajectory it follows. Obviously this formulation is not causal, as we already need to know the end-point of the dynamics to determine the trajectory. As discussed in <cit.> and in our previous study <cit.> it is possible to formulate the variational problem as a genuine initial value problem through a doubling of the degrees of freedom of the system.
In order to focus on the qualitatively novel ingredients of our variational approach, we first introduce it in the standard context of point mechanics as a BVP. The implementation for a genuine IVP is given in the subsequent subsection.
§.§ Boundary value problem formulation
Symmetry is a central mathematical pillar of the theory of relativity. In the special theory of relativity one formulates the laws of physics in a way that remains invariant under so-called Lorentz transformations of the coordinates, while in general relativity one constructs a description, which is invariant under an even larger class of transformations. Such a theory, invariant under arbitrary differentiable coordinate transformations, is called reparametrization invariant.
Reparametrization invariance is achieved by considering both space and time as dynamical degrees of freedom. In this study we are not interested in determining the dynamical evolution of space-time itself but will simply borrow this reparametrization invariant formalism of general relativity for our purposes of obtaining a symmetry preserving discretization. As our prime example, we set out to describe the dynamics of a point mass in the presence of a potential. The first step is to convert this physics question into a purely geometric problem.
In general relativity, the trajectory of a particle, traveling freely in (a not necessarily flat) space-time described by the metric tensor g_μν, is given by a path that generalizes the notion of the shortest path on the corresponding space-time manifold. This path is called a geodesic. While the particle may move in a (1+d) dimensional space-time with d space and one time direction, its path traces out a one-dimensional submanifold, which we can parameterize with a single, so called world-line parameter, denoted in the following by γ. We will restrict ourselves here to two dimensions, i.e. d=1, a system with one spatial and one temporal direction expressed in coordinates as x(γ)=(t(γ),x(γ)).
A geodesic may be obtained from a variational principle <cit.>, which asks for the critical point of the following action functional that measures the length of the path between two space-time points x(γ_i) and x(γ_f)
S= ∫_γ_i^γ_f dγ (-mc)√( g_μνdx^μ/dγdx^ν/dγ), x(γ_i)= x_i, x(γ_f)= x_f.
Here Einstein's summation convention has been adopted and we have included the dimensionful prefactor mc, which, as we will show explicitly below, allows us to recover the usual action in the non-relativistic limit from <ref>.
We refer to time t(γ) as the zeroth component x^0 of the vector x and to the spatial coordinate x(γ) as the first component x^1. Note that this functional is reparametrization invariant under any differentiable redefinition of the parameter γ. I.e. when converting from γ→γ^' the conversion of differentials under the square root produces terms dγ^'/dγ that cancel with the conversion factor of the measure.
The geodesics of flat space-time, described by the diagonal metric tensor g= diag[c^2,-1], which arise from the critical point of the action functional
S_ flat=∫_γ_i^γ_f dγ (-mc)√(c^2(dt/dγ)^2-(dx/dγ)^2),
are straight lines, which are traversed with constant speed (see chapter 3.4 of <cit.>), in agreement with Newtonian mechanics.
It is important to note that while our intuition of the concept of shortest path relies on geometries with positive definite metrics (Riemannian geometry), physical spacetime, as confirmed by experiment, has a metric with both positive and negative eigenvalues (pseudo-Riemannian geometry). In such a geometry the shortest path between two points can denote a saddle point of the action functional instead of a genuine minimum, as the temporal and spatial components enter relation (<ref>) with opposite sign.
To describe the presence of an external force acting on a point particle in flat spacetime, one conventionally amends the action S_ flat simply by adding the potential term V(x) responsible for generating that force (see chapter 7.9 in <cit.>).
Let us now discuss how we can exploit the formalism of general relativity to re-express the evolution of a particle in flat spacetime in the presence of an external force, instead as an evolution of a free particle in a non-flat spacetime. In the presence of an external force, encoded in a potential term V(x), the particle trajectory in flat space-time will deviate from the straight line. A standard procedure in the study of weak-field gravity is to reinterpret the change in the particle trajectory due to a potential, instead, as the effect of a non-flat space-time without a potential present (see e.g. chapter 8 of <cit.>). This reinterpretation is possible, as long as the values of the potential are smaller than the rest energy (mc^2) of the point mass, a condition which is very well fulfilled for the non-relativistic systems we are interested in solving.
As we will see in the following, one can introduce the effects of a potential V(x) on a point particle with mass m in the weak-field limit of general relativity by modifying the temporal component g_00 of the diagonal metric tensor
g_00=c^2+2V(x)/m,
while keeping g_11=-1. I.e. one endows the metric with a non-trivial dependence on the spatial coordinate, trading the absence of an explicit external force for a non-flat spacetime.
Let us now show that such a modification of the metric indeed recovers the non-relativistic action of a particle in the presence of the potential V(x). To this end we insert the modified metric <ref> into the geodesic action <ref>:
S= ∫_γ_i^γ_f dγ (-mc)√( g_00(dt/dγ)^2 - (dx/dγ)^2 )
[g_00>0] = ∫_γ_i^γ_f dγ (-mc) √( g_00(dt/dγ)^2 ) √( 1 - (1/g_00)(dx/dγ)^2 (dt/dγ)^-2 )
[(dx/dt)^2 ≪ g_00∼ c^2] = ∫_γ_i^γ_f dγ |dt/dγ| (-mc) √( g_00)( 1 - (1/2)(1/g_00)(dx/dγ)^2 (dt/dγ)^-2 + O( (1/g_00^2)(dx/dt)^4 ) )
[V/m ≪ c^2] = ∫_γ_i^γ_f dγ (dt/dγ)( -mc^2 + (1/2) m (dx/dγ)^2 (dt/dγ)^-2 - V(x) + O((V/mc^2)^2) + O( (1/c^2)(dx/dt)^4 ) )
= ∫_t_i^t_f dt ( -mc^2 + (1/2) m (dx/dt)^2 - V(x) ).
In the third line we have expanded the rightmost square root in <ref>, assuming that the square of the physical velocity (dx/dt)^2 is much smaller than g_00, which is to say that the particle velocity dx/dt itself is much smaller than the speed of light c. To go from the third to the fourth line, we have in addition assumed that the potential is much smaller than the rest energy of the point particle, which allows us to expand the term √(g_00)=√(c^2+2V(x)/m) in terms of V(x)/mc^2. We will look for solutions where time flows forward and thus have dropped the absolute value around dt/dγ at the beginning of the second to last line. Note that <ref> is nothing but the standard non-relativistic action <cit.> for a point particle in the presence of an arbitrary potential term with the rest energy mc^2 included.
We have thus successfully related the (artificially constructed) fully geometric description of the particle in a non-flat spacetime in <ref> with the standard description of a particle propagating in flat spacetime in the presence of an external potential in <ref> in the non-relativistic limit.
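The non-relativistic limit can also be reproduced symbolically. The following minimal sketch (an illustration of ours in Python/sympy, independent of the Mathematica implementation used later; all symbol names are our own) expands the integrand of the geodesic action with the modified g_00 in powers of 1/c^2 and recovers the non-relativistic integrand derived above.

```python
import sympy as sp

# symbols: dt/dgamma, dx/dgamma, mass, potential value V(x), speed of light,
# and a bookkeeping parameter epsilon standing for 1/c^2
tdot, xdot, m, V, c, eps = sp.symbols('tdot xdot m V c epsilon', positive=True)

# integrand of the geodesic action with g00 = c^2 + 2V/m, factored as
#   L = -m c^2 (dt/dgamma) * sqrt(1 + u/c^2),   u = 2V/m - (dx/dt)^2
u = 2*V/m - (xdot/tdot)**2
sqrt_expanded = sp.series(sp.sqrt(1 + u*eps), eps, 0, 2).removeO()   # 1 + u*eps/2
L_nonrel = sp.expand((-m*tdot/eps*sqrt_expanded).subs(eps, 1/c**2))

print(L_nonrel)
# expected (up to term ordering): -c**2*m*tdot - V*tdot + m*xdot**2/(2*tdot),
# i.e. (dt/dgamma)*( -m*c^2 + (m/2)*(dx/dt)^2 - V(x) )
```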
We see in <ref> that time emerges naturally as the independent variable in which the action integral is formulated. Of course, choosing time as independent variable hides the inherent reparametrization invariance, which persists even in the non-relativistic limit in <ref>. Interestingly it turns out that <ref> is a generalization of the ad-hoc construction of a reparametrization invariant non-relativistic action, discussed in standard textbooks on the calculus of variations (see e.g. <cit.>). <Ref> includes the rest mass term -mc^2 (dt/dγ), which is missing in the standard derivation and which in the absence of a potential contributes a dependence on (dt/dγ) that plays a role in obtaining a well-defined critical point for the time degree of freedom.
The reward for our efforts lies in the fact that <ref> is manifestly invariant under the space-time symmetries of our (1+1) dimensional system. If V(x)=0, only the derivatives dt/dγ and dx/dγ but not t and x themselves appear in the action functional <ref>. In turn, adding a constant shift to either t or x, as in x→ x+ s, leaves the action invariant. In the presence of a spatially dependent potential V(x), g_00(x) too becomes dependent on space x and only time translation invariance remains (as the force induced by V(x) changes the momentum of the point particle).
Proving time translation invariance in the conventional action <ref> is much more involved, as one needs to consider how x as a function of t changes under such translations and in addition the boundaries of the action integral themselves are affected by the shift. None of these complications arise in <ref>[That the derivatives of space and time occur in eq. (<ref>) as squares under the square root with a relative minus sign (hiding in g_11) also entails that the action is manifestly invariant under so called Lorentz boosts. These transformations mix space and time components and are related to changes between inertial coordinate systems.].
In the calculus of variations it is known that the critical point of the action S can be obtained by solving certain differential equations, the so called geodesic equations <cit.>. It follows from considering the variation of the action in all of its dependent variables t, ṫ=dt/dγ, x and ẋ=dx/dγ
δ S[t,ṫ, x,ẋ]= ∫_γ_i^γ_f dγ{∂ L/∂ tδ t + ∂ L/∂ṫδṫ + ∂ L/∂ xδ x + ∂ L/∂ẋδẋ}
= ∫_γ_i^γ_f dγ{( ∂ L/∂ t - d/dγ∂ L/∂ṫ)δ t + ( ∂ L/∂ x - d/dγ∂ L/∂ẋ)δ x}
+ .[ ∂ L/∂ṫδ t ]|_γ_i^γ_f+ .[ ∂ L/∂ẋδ x ]|_γ_i^γ_f.
where in the second line we have integrated by parts. As we are considering the variational problem as boundary value problem with the coordinates t and x fixed at the start and end points of the trajectory x(γ_i)= x_i, x(γ_f)= x_f, also the variations δ t and δ x on the boundary vanish and so do the two boundary terms above. Note that we consider t and x as distinct degrees of freedom, so that the terms in the parentheses, multiplying the arbitrary variations δ x and δ t, must vanish each independently at the stationary point δ S=0.
By deriving the Euler-Lagrange equations of the system in the spirit of the standard BVP treatment of classical mechanics, the above derivation tells us that we may locate the classical trajectory of a non-relativistic particle under the influence of a potential, by finding the critical point of the action <ref> with modified g_00 component of the metric, while keeping the start and end coordinates x(γ_i) and x(γ_f) fixed.
Note that there exist infinitely many different parameterizations of the trajectory described by δ S=0, which all differ by the velocity in γ, in which this trajectory is traversed. In practice these different stationary points of S lead to difficulties in numerical optimization
and we therefore follow the standard practice (see e.g. discussion in <cit.> or <cit.>) of selecting a particular parameterization by choosing instead of S the variations of the functional
E_ BVP=∫_γ_i^γ_fdγ E_ BVP[t,ṫ,x,ẋ]=∫_γ_i^γ_fdγ1/2( g_00(dt/dγ)^2 + g_11(dx/dγ)^2 ).
It differs from S via squaring the integrand and replacing the pre-factor -mc by 1/2. These are both irrelevant changes with respect to the classical equation of motion. Since E_ BVP and S differ by a monotone function applied to their integrands, formally the same critical point ensues. I.e. the integrand of S varies as δ L∝δ√(E_ BVP)=δ E_ BVP/(2√(E_ BVP)), which vanishes whenever δ E_ BVP=0, so that the trajectory that extremizes E_ BVP agrees with that for S at the critical point. Note that the functional E_ BVP is not reparametrization invariant anymore. The derivative terms enter quadratically, and produce a conversion factor (dγ^'/dγ)^2, which cannot be absorbed by the measure dγ alone.
Let us compute the Euler-Lagrange equations (the geodesic equations) for time t and space x following from the variation of <ref>
δ E_ BVP[t,ṫ, x,ẋ]
= ∫_γ_i^γ_f dγ{∂ E_ BVP/∂ tδ t + ∂ E_ BVP/∂ṫδṫ + ∂ E_ BVP/∂ xδ x + ∂ E_ BVP/∂ẋδẋ}
= ∫_γ_i^γ_f dγ{( ∂ E_ BVP/∂ t - d/dγ∂ E_ BVP/∂ṫ)δ t + ( ∂ E_ BVP/∂ x - d/dγ∂ E_ BVP/∂ẋ)δ x}
+ .[ ∂ E_ BVP/∂ṫδ t ]|_γ_i^γ_f+ .[ ∂ E_ BVP/∂ẋδ x ]|_γ_i^γ_f.
As the above boundary terms vanish, we are left with evaluating the individual expressions appearing in the parentheses of <ref>. Below we evaluate each of these terms individually
∂ E_ BVP/∂ t=0, ∂ E_ BVP/∂ṫ=g_00(x)dt/dγ,
∂ E_ BVP/∂ x=1/2∂ g_00(x)/∂ x(dt/dγ)^2, ∂ E_ BVP/∂ẋ=g_11dx/dγ=-dx/dγ,
making explicit the ingredients to the geodesic equations for the temporal and spatial degrees of freedom
d/dγ(g_00dt/dγ)=0,
d/dγ(dx/dγ) + 1/2∂ g_00/∂ x( dt/dγ)^2=0.
The attentive reader will have recognized that <ref> constitutes a conservation equation for the expression inside the parentheses. In the next section we will show that this quantity indeed is the conserved charge associated with the time translation symmetry of our system. In general the geodesic equations do not single out the conserved quantities in such a simple fashion. There however exists a systematic procedure to identify the space-time symmetries of the system in the form of different so-called Killing vectors, each of which leads to one conserved quantity (see <ref>).
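To make the conservation statement concrete, the continuum geodesic equations can be integrated numerically and Q_t=g_00 dt/dγ monitored along the world line. The sketch below (our illustration in Python/SciPy, separate from the Mathematica setup used for the results in the later sections) does this for the linear potential V(x)=α x with m=c=1, α=1/4 and the initial data employed in the numerical examples below.

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha = 0.25                                  # V(x) = alpha*x with m = c = 1

def g00(x):
    return 1.0 + 2.0*alpha*x                  # modified metric component

def geodesic_rhs(gamma, y):
    t, x, tdot, xdot = y
    tddot = -2.0*alpha*xdot*tdot/g00(x)       # from d/dgamma( g00 * dt/dgamma ) = 0
    xddot = -alpha*tdot**2                    # from d^2x/dgamma^2 + (1/2) dg00/dx (dt/dgamma)^2 = 0
    return [tdot, xdot, tddot, xddot]

# initial data t_i = 0, x_i = 1, dt/dgamma = 1, dx/dgamma = v_i = 0.1
sol = solve_ivp(geodesic_rhs, (0.0, 1.0), [0.0, 1.0, 1.0, 0.1], rtol=1e-10, atol=1e-12)

Qt = g00(sol.y[1])*sol.y[2]                   # Q_t = g00 * dt/dgamma along the world line
print("max |Q_t - Q_t(gamma_i)| =", np.abs(Qt - Qt[0]).max())   # conserved to solver tolerance
```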
Note that the geodesic equations <ref> are often written in a more concise fashion in the general relativity literature (see e.g. <cit.>). They are expressed for a general metric using the so-called Christoffel symbols Γ^α_μν=1/2g^αβ( ∂ g_βμ/∂ x_ν + ∂ g_βν/∂ x_μ - ∂ g_μν/∂ x_β), where g^αβ refers to the components of the inverse of the metric g_αβ. One obtains in short hand notation with Einstein summation implied
d^2 x^α/d γ^2+Γ^α_μνd x^μ/d γd x^ν/d γ = 0 .
It is important to note that the derivation of the above expression involves application of the product rule, which in the discrete setting is not valid. Therefore even though in the continuum <ref> and <ref> are equivalent, we will work solely with the former, as only integration by parts (which is exactly mimicked by summation by parts) has been used in their derivation.
§.§ Conserved quantities, Noether's theorem and stability
Conservation of momentum and energy in general relativity is conceptually more involved compared to flat space-time, since the comparison of two quantities at different space-time points becomes a non-trivial operation due to the effects of a non-flat metric. However there may exist a vector field K^μ(x) along which transported quantities remain constant. These vector fields are known as Killing[For completeness we note that a Killing vector field K_μ is defined as solution to the Killing equation
(∂ K_μ/∂ x^ν-Γ^α_μνK_α) + (∂ K_ν/∂ x^μ-Γ^α_νμK_α) =0.] vector fields K^μ(x).
The Killing vector fields are generators of infinitesimal isometries of the space-time manifold. Moving all points of the manifold in the direction of the Killing field leaves the manifold unchanged.
As discussed in standard literature on general relativity (see e.g. chapter 3.8 of <cit.>), each Killing vector field K^μ can be used to define a conserved quantity Q_K via the expression
Q_K=g_αβK^αẋ^β.
Computing the change of Q_K along a geodesic, parameterized by γ, one finds from combining <ref> and the equation that defines the Killing vector that dQ_K/dγ=0, i.e. it vanishes. We will give an explicit example of such a conserved quantity below.
More intuitively, one can think of the role of K^μ as pointing out directions along which the metric g of spacetime in our system remains constant. In the spirit of Noether's theorem, assume that the integrand E_ BVP of our action functional E_ BVP in <ref> remains unchanged under infinitesimal translations with magnitude ϵ in the direction of K^μ. The change in coordinates under such a shift is δ x^μ=ϵ K^μ. Noether's theorem tells us that the conserved quantity corresponding to δ x^μ is given by J=δ x^μ∂ E/∂ẋ^μ, which, when written explicitly as ϵ K^α g_αβdx^β/dγ, turns out to just be ϵ Q_K.
In case of our geometrized problem of determining the dynamics of a point particle under the influence of a potential V(x), the metric remains independent of time t. Thus the vector K_t=(1,0) constitutes a Killing vector associated with time translation symmetry. The conservation of the associated conserved quantity Q_t=K_t^μ g_μνẋ^ν= g_00ṫ follows straightforwardly from the geodesic equation for t
d/dγ Q_t [K_t=(1,0), <ref>]= d/dγ( g_00ṫ) [<ref>]= 0 ,
i.e. the quantity Q_t remains constant along the geodesic. Note that this quantity is different from the usual energy considered in the non-relativistic formalism.
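For a diagonal two-dimensional metric, the contraction defining Q_K is a one-liner. The following snippet (our illustration, with m=c=1 and the parameters of the linear-potential example discussed later) evaluates Q_t for the time-translation Killing vector K_t=(1,0).

```python
import numpy as np

def conserved_charge(g_diag, K, xdot):
    """Q_K = g_{alpha beta} K^alpha xdot^beta for a diagonal 2d metric diag(g00, g11)."""
    return np.einsum('a,ab,b->', K, np.diag(g_diag), xdot)

# linear-potential example with m = c = 1: g = diag(1 + 2*alpha*x, -1)
alpha, x, tdot, xdot = 0.25, 1.0, 1.0, 0.1
g = np.array([1.0 + 2.0*alpha*x, -1.0])
Qt = conserved_charge(g, np.array([1.0, 0.0]), np.array([tdot, xdot]))
print(Qt)    # equals g00*tdot = 1.5 for these values
```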
Turning to the question of stability, let us show next that the presence of a conserved quantity, together with the form of the geodesic equations and the reasonable assumption that the potential of the system is bounded from below, allows us to place an upper bound on the derivatives of the trajectories obtained as the critical point of the functional <ref>.
In an analogy to the construction of a Hamiltonian from a Lagrangian, we define the following
H_ BVP =∫_γ_i^γ_fdγ H_ BVP, with integrand
H_ BVP=1/2( g_00(x)(dt/dγ)^2 - g_11(dx/dγ)^2 ) =1/2( (c^2+2V(x)/m)(dt/dγ)^2 + (dx/dγ)^2 ).
Due to the flipped sign in front of g_11, compared to the action <ref>, this quantity is actually positive definite, as long as V(x) is bounded from below[Since physical forces arise from the derivative of the potential, we may always add a constant to a bounded potential that will make g_00 positive.]. H_ BVP thus provides a norm on the function space in which t(γ) and x(γ) reside. Now let us inspect the evolution of the integrand H_ BVP
d H_ BVP/dγ =1/2dg_00/dγ(dt/dγ)^2+ g_00dt/dγd^2t/dγ^2+dx/dγd^2x/dγ^2,
= dx/dγ[ 1/2∂ g_00/∂ x(dt/dγ)^2 +d^2x/dγ^2] + g_00dt/dγd^2t/dγ^2,
<ref>= g_00dt/dγ d^2t/dγ^2, with g_00dt/dγ=Q_t constant.
To arrive at the final expression in <ref>, we use the fact that one can rewrite dg_00/dγ=(∂ g_00(x)/∂ x) ẋ and combine the first and third term to apply <ref>. This simplification tells us that the change in H_ BVP is given solely by the second derivative of time with respect to the world-line parameter. Now we can integrate up twice H_ BVP= ∫_γ_i^γ_fdγ∫_γ_i^γdγ^' (dH_ BVP(γ^')/dγ^') to get
H_ BVP = g_00(x_i)ṫ(γ_i) ∫_γ_i^γ_f dγ( ṫ(γ)-ṫ(γ_i)),
= g_00(x_i)ṫ(γ_i) ( -ṫ(γ_i)(γ_f-γ_i) + ( t(γ_f)-t(γ_i) ) ),
≤ g_00(x_i)ṫ(γ_i)( t(γ_f)-t(γ_i) ) .
For the last inequality we use the fact that the world-line is parameterized by an increasing γ and correspondingly time moves forward along the world-line.
In the BVP setting, where both t(γ_i) and t(γ_f) are given a priori, <ref> constitutes a proof that the norm H_ BVP defined on the derivatives of the solution t and x grows at most linearly with time, precluding the occurrence of exponentially increasing behavior that would signal an instability, in turn establishing the stability of the geometric approach.
§.§ Initial value formulation
So far we have shown how the geodesic equations <ref> can be obtained from a variational principle formulated as a boundary value problem in time. However for a causal description as an initial value problem, we must be able to determine the dynamics of the particle without knowledge of the final point of the trajectory. If one wishes to prescribe only initial values, i.e. positions and derivatives at γ_i, then the variations δ x^μ in <ref> do not vanish at the end of the particle world line, i.e. at γ_f. In turn the equivalence between the critical point of S and the Euler-Lagrange equations in <ref> does not hold. As discussed by <cit.> and put into practice in our previous publication <cit.> one can overcome this issue by constructing an action with doubled degrees of freedom, living on a closed contour with a forward and backward branch in γ.
Since both time and space constitute dependent degrees of freedom in our approach, we need to introduce both forward and backward variants of each of them x_1(γ), x_2(γ) and t_1(γ), t_2(γ). The degrees of freedom on the forward contour enter the action functional with the usual Lagrangian, while those on the backward contour are assigned the negative Lagrangian. Choosing to build the doubled formalism based on the action E_ BVP we obtain
E_ IVP =∫_γ_i^γ_f dγ E_ IVP[t_1,ṫ_1,x_1,ẋ_1,t_2,ṫ_2,x_2,ẋ_2],
=∫_γ_i^γ_f dγ{ E_ BVP[t_1,ṫ_1,x_1,ẋ_1] - E_ BVP[t_2,ṫ_2,x_2,ẋ_2] }.
As discussed in detail in <cit.>, the inner workings of the doubled formalism become more transparent, once we go over to expressing the action E_ IVP in terms of the central and difference coordinates x_+=1/2(x_1+x_2) and x_-=x_1-x_2 and t_+=1/2(t_1+t_2) and t_-=t_1-t_2 respectively. The variation now proceeds in the independent degrees of freedom x_± and t_± and yields
δ E_ IVP[t_±,ṫ_±, x_±,ẋ_±]=
∫_γ_i^γ_f dγ{∂ E_ IVP/∂ t_+δ t_+ + ∂ E_ IVP/∂ṫ_+δṫ_+ + ∂ E_ IVP/∂ t_-δ t_- + ∂ E_ IVP/∂ṫ_-δṫ_-
+ ∂ E_ IVP/∂ x_+δ x_+ + ∂ E_ IVP/∂ẋ_+δẋ_+ + ∂ E_ IVP/∂ x_-δ x_- + ∂ E_ IVP/∂ẋ_-δẋ_- }
= ∫_γ_i^γ_f dγ{( ∂ E_ IVP/∂ t_+ - d/dγ∂ E_ IVP/∂ṫ_+)δ t_+ +( ∂ E_ IVP/∂ t_- - d/dγ∂ E_ IVP/∂ṫ_-)δ t_-
+ ( ∂ E_ IVP/∂ x_+ - d/dγ∂ E_ IVP/∂ẋ_+)δ x_+ + ( ∂ E_ IVP/∂ x_- - d/dγ∂ E_ IVP/∂ẋ_-)δ x_-}
+ .[ ∂ E_ IVP/∂ṫ_+δ t_+ ]|_γ_i^γ_f + .[ ∂ E_ IVP/∂ṫ_-δ t_- ]|_γ_i^γ_f + .[ ∂ E_ IVP/∂ẋ_+δ x_+ ]|_γ_i^γ_f + .[ ∂ E_ IVP/∂ẋ_-δ x_- ]|_γ_i^γ_f.
To arrive at <ref> we have carried out four integrations by parts. As the next step, we consider under which conditions the boundary terms in the above expression vanish. Since we prescribe fixed initial values for both time and space, the variations δ t_±(γ_i)=0 and δ x_±(γ_i)=0 vanish. What about the variations at the end of the forward and backward world-line? As long as we require that
x_2(γ_f)=x_1(γ_f), t_2(γ_f)=t_1(γ_f),
it follows that δ x_-(γ_f) and δ t_-(γ_f) vanish and with it the corresponding boundary terms. The only remaining terms are those at γ_f which feature δ x_+ and δ t_+. As these variations do not vanish, we instead inspect the terms multiplying them, i.e. ∂ E_ IVP/∂ṫ_+ and ∂ E_ IVP/∂ẋ_+. Using the definition x_1=x_+ + 1/2 x_- and x_2=x_+ - 1/2 x_- and correspondingly for t_1,2, we find from the defining equation for E_ IVP <ref>
d E_ IVP/d ẋ_+ = ∂ E_ IVP[t_1,2,ṫ_1,2,x_1,2,ẋ_1,2]/∂ẋ_1d ẋ_1/d ẋ_+ + ∂ E_ IVP[t_1,2,ṫ_1,2,x_1,2,ẋ_1,2]/∂ẋ_2d ẋ_2/dẋ_+,
= ∂ E_ BVP[t_1,ṫ_1,x_1,ẋ_1]/∂ẋ_1d ẋ_1/d ẋ_+ - ∂ E_ BVP[t_2,ṫ_2,x_2,ẋ_2]/∂ẋ_2d ẋ_2/dẋ_+,
= g_11(x_1)ẋ_1-g_11(x_2)ẋ_2=-ẋ_1+ẋ_2.
Similarly one obtains
d E_ IVP/d ṫ_+ = g_00(x_1)ṫ_1-g_00(x_2)ṫ_2.
Together with condition <ref> that the values of x_1,2 and t_1,2 must agree at γ_f, this result tells us that in order for the two remaining boundary terms to vanish, we need to also identify the derivatives of x_1,2 and t_1,2 at the point γ_f
ẋ_2(γ_f)=ẋ_1(γ_f), ṫ_2(γ_f)=ṫ_1(γ_f).
Note that we have now managed to remove the boundary terms without the need for specifying the concrete value of t's and x's at the final point γ_f. This is the central contribution of the forward-backward construction.
The last remaining step is to undo the proliferation of degrees of freedom that occurred when introducing the forward-backward construction. It has been shown <cit.> that taking the so-called physical limit achieves this goal, where the constraints x_1(γ)-x_2(γ)=x_-(γ)=0 and t_1(γ)-t_2(γ)=t_-(γ)=0 are enforced. The remaining x_+ and t_+ are identified with the true classical geodesics.
In terms of the Euler-Lagrange equations in parentheses in <ref>
∂ E_ IVP/∂ x_± - d/dγ∂ E_ IVP/∂ẋ_±=0, ∂ E_ IVP/∂ t_± - d/dγ∂ E_ IVP/∂ṫ_±=0,
the physical limit entails that only those equations independent of x_- and t_- survive. With the construction of the action E_ IVP= E_ BVP[x_1,ẋ_1,t_1,ṫ_1] - E_ BVP[x_2,ẋ_2,t_2,ṫ_2] from a difference of the E_ BVP functionals, there will appear at least a linear dependence on the minus degrees of freedom. Hence in the physical limit only those Euler-Lagrange equations linear in x_- and t_- will survive, where the minus degrees of freedom have been removed by taking the derivative with respect to x_- or t_-.
Note that we have decided to not only specify the value and derivative of x at initial γ_i but also those of t. As we wish to determine the dynamics of a point particle in the presence of a potential with given x(t_i) and dx/dt(t_i), there remains a freedom in choosing ẋ(γ_i) and ṫ(γ_i), since only their ratio needs to be fixed, dx/dt(t=t_i)=ẋ(γ_i)/ṫ(γ_i). The end of the time interval traversed by the world line parameter γ will consequently depend on the value prescribed to ṫ(γ_i) and emerges dynamically from the combined evolution of x and t.
At this point we have formulated a manifest time translation symmetric variational principle that encodes the dynamics of a point particle evolving in the presence of a non-relativistic potential as initial value problem. Our next goal is to discretize the action functional E_ IVP in <ref> using SBP finite difference operators. Since all derivations of the Euler-Lagrange equations, as well as that of the conserved quantity Q_t have made ample reference to integration by parts, it is paramount to use such a discretization technique, which faithfully mimics this continuum property on a finite mesh.
§ DISCRETIZED FORMALISM FOR IVPS
The central novelty we introduce in this section is related to the fact that the discretization of the action functional takes place in the world-line parameter γ and not in the time variable t, as in conventional discretization prescriptions. I.e. the values of both time t(γ) and position x(γ) remain continuous and in turn we achieve preservation of the continuum space-time symmetries even after discretization.
In the presence of a potential that depends on x but not on t, the invariance under infinitesimal constant shifts in time is hence retained. This comes about, since the metric remains invariant under changes in t, which in turn leads to a simple form of the corresponding Killing equation, which shows that K_t=(1,0) indeed is a Killing vector. The symmetry of the metric under time translation is intimately related to energy conservation via Q_t and thus the stability of the simulation. In the absence of a potential, when the metric does not depend on neither t nor x, our discretized approach, in addition to K_t=(1,0), retains the continuum invariance under shifts in x via the Killing vector K_x=(0,1), as well as the invariance under boosts via the Killing vector K_η=(x,t).
We will give numerical evidence that we achieve exact conservation of Q_t in the interior of the simulated domain, even in the case of highly non-harmonic motion. In contrast to other formally energy preserving schemes, such as the leap-frog, our approach, using SBP operators, is consistent with the continuum formulation, in that it only requires the actual initial conditions of the system at hand, avoiding the need to stagger the degrees of freedom (also known as insertion of dummy points).
After introducing the discretization on the level of the underlying action functional, we will obtain the classical trajectory by numerically finding the critical point of that functional without the need to derive the corresponding equations of motion. To make sure that the solution of the discretized variational principle mimics as accurately as possible the continuum theory, we deploy summation-by-parts finite difference operators <cit.>.
Note that we are discretizing the world-line parameter γ with equidistant steps, whereas both the values of t and x arise dynamically from the evolution of the simulation along γ. I.e. a not necessarily equidistant discretization of the time coordinate emerges dynamically in our approach. As we will see in <ref> this dynamical time discretization realizes a one-dimensional form of automatic adaptive mesh refinement, guided by the symmetries of the system. I.e. the non-equidistant discretization in t plays a crucial role in guaranteeing that the Noether charge Q_t remains conserved.
Another non-standard feature of our technique is the departure from the conventional notion of carrying out a simulation on a predefined time interval. We instead provide the initial time and its velocity with respect to γ, so that the end-point of the simulation too emerges dynamically.
In the following we will consider the trajectory of a point particle propagating under the influence of an arbitrary x-dependent, but t-independent, potential V(x). We begin by discretizing the action functional E_ IVP of <ref> along the world-line parameter γ between γ_i and γ_f with N_γ steps, leading to a step-size of Δγ=(γ_f-γ_i)/(N_γ-1). We will add to E_ IVP Lagrange multipliers to explicitly account for both the initial conditions and the connecting conditions required by the doubling of the degrees of freedom. The forward and backward paths x_1,2 and times t_1,2 are described by x_1,2=(x_1,2(0),x_1,2(Δγ),x_1,2(2Δγ),…,x_1,2((N_γ-1)Δγ))^ T and t_1,2=(t_1,2(0),t_1,2(Δγ),t_1,2(2Δγ),…,t_1,2((N_γ-1)Δγ))^ T respectively.
The integral in E_ IVP is approximated with a quadrature rule, consistent with our choice of finite difference operator, in the form of a diagonal positive definite matrix H. The inner product on discretized paths and times thus reads ( x, x^')= x^ TH x^'.
With integration by parts being a central element in establishing both equations of motion and the existence of conserved quantities, we must use a discretization that mimics IBP exactly, which is achieved by deploying summation-by-parts (SBP) operators D with the defining properties
D=H^-1Q, Q^ T+Q=E_N-E_0= diag[-1,0,…,0,1].
In this study we consider both the lowest order SBP discretization scheme, referred to as [2,1], and the next higher order scheme [4,2]. The former is second order in the interior and exhibits one order less on the boundary. Using the trapezoidal rule for integration one has
H^[2,1]=Δγ[ [ 1/2 ; 1 ; ⋱ ; 1; 1/2 ]],
D^[2,1]=
1/(2Δγ)[ [ -2 2 ; -1 0 1 ; ⋱ ; -1 0 1; -2 2 ]].
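For concreteness, the following sketch (our illustration in Python/NumPy, not the code used for the results) assembles this operator pair and verifies the defining SBP property numerically.

```python
import numpy as np

def sbp21(N, dgamma):
    """Trapezoidal norm H and the [2,1] SBP derivative operator D."""
    h = np.ones(N); h[0] = h[-1] = 0.5
    H = dgamma*np.diag(h)
    D = np.zeros((N, N))
    D[0, :2] = [-1.0, 1.0]
    D[-1, -2:] = [-1.0, 1.0]
    for i in range(1, N-1):
        D[i, i-1], D[i, i+1] = -0.5, 0.5
    return H, D/dgamma

N, dgamma = 32, 1.0/31
H, D = sbp21(N, dgamma)
Q = H @ D
B = np.zeros((N, N)); B[0, 0], B[-1, -1] = -1.0, 1.0        # E_N - E_0
print("max |Q + Q^T - (E_N - E_0)| =", np.abs(Q + Q.T - B).max())   # vanishes to machine precision
```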
The [4,2] scheme achieves fourth order accuracy in the interior, which reduces to second order on the boundary
H^[4,2]=Δγ[ [ 17/48 ; 59/48 ; 43/48 ; 49/48 ; 1 ; ⋱; ]],
D^[4,2]=
1/Δγ[ [ -24/17 59/34 -4/17 -3/34 ; -1/2 0 1/2 0 ; 4/43 -59/86 0 59/86 -4/43 ; 3/98 0 -59/98 0 32/49 -4/49 ; 1/12 -2/3 0 2/3 -1/12 ; ⋱ ]].
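The same check can be performed for the higher-order pair. The sketch below (again our own illustration) builds H^[4,2] and D^[4,2] from the standard diagonal-norm coefficients listed above and confirms Q^ T+Q=E_N-E_0 in exact rational arithmetic; the Δγ factors cancel in Q=HD and are therefore omitted.

```python
from fractions import Fraction as F

N = 12                                             # any N >= 9 suffices for the check
hb = [F(17, 48), F(59, 48), F(43, 48), F(49, 48)]
h = hb + [F(1)]*(N - 8) + hb[::-1]                 # diagonal of H^[4,2] (in units of dgamma)

# boundary block of D^[4,2] (in units of 1/dgamma); interior rows use the central 5-point stencil
db = [[F(-24, 17), F(59, 34),  F(-4, 17),  F(-3, 34), F(0),      F(0)],
      [F(-1, 2),   F(0),       F(1, 2),    F(0),      F(0),      F(0)],
      [F(4, 43),   F(-59, 86), F(0),       F(59, 86), F(-4, 43), F(0)],
      [F(3, 98),   F(0),       F(-59, 98), F(0),      F(32, 49), F(-4, 49)]]
D = [[F(0)]*N for _ in range(N)]
for i, row in enumerate(db):
    for j, v in enumerate(row):
        D[i][j] = v                                # upper-left boundary block
        D[N-1-i][N-1-j] = -v                       # mirrored lower-right block
for i in range(4, N - 4):
    for j, v in zip(range(i-2, i+3), [F(1, 12), F(-2, 3), F(0), F(2, 3), F(-1, 12)]):
        D[i][j] = v

Q = [[h[i]*D[i][j] for j in range(N)] for i in range(N)]   # Q = H D (dgamma factors cancel)
ok = all(Q[i][j] + Q[j][i] == (F(-1) if i == j == 0 else F(1) if i == j == N-1 else F(0))
         for i in range(N) for j in range(N))
print("Q^T + Q == E_N - E_0 exactly:", ok)         # True
```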
The SBP operators defined above are not yet ready for duty in our variational approach, as they allow for non-physical zero modes. As discussed in detail in <cit.>, we can construct null-space consistent[Note that in the context of PDE's, SBP operators are considered null-space consistent by construction, as only their right eigenvectors play a role in the equation of motion. Here due to the presence of D^ T in the action functional, also the left eigenvectors contribute, among which a highly oscillating null-mode (the so-called π-mode) can be identified (see ref. <cit.>)] SBP operators D̅ from the conventional D by deploying affine coordinates and by absorbing penalty terms, inspired by the simultaneous-approximation terms (SAT) technique <cit.>, used to regularize SBP operators. A brief overview of this regularization is given in <ref>.
The idea behind the penalty term construction is that we are assigning a penalty to all functions that do not fulfill the initial conditions in t and x, which includes the non-physical zero mode of D. In turn, when we will be searching for the critical point of the discretized action functional E_ IVP the minimizer will approach the correct solution globally and the presence of the penalty term effectively prevents contamination of the correct solution by the non-constant zero mode.
Explicitly our regularized and null-space consistent operators read
D̅^ R,[2,1]_t=
[ [ -1/Δγ + 2/Δγ 1/Δγ -2/Δγ t_i; -1/(2Δγ) 0 1/(2Δγ) 0; ⋱ ⋮; -1/(2Δγ) 0 1/(2Δγ) 0; -1/Δγ 1/Δγ 0; 0 … 0 1; ]]
D̅^ R,[2,1]_x=
[ [ -1/Δγ + 2/Δγ 1/Δγ - 2/Δγ x_i; -1/(2Δγ) 0 1/(2Δγ) 0; ⋱ ⋮; -1/(2Δγ) 0 1/(2Δγ) 0; -1/Δγ 1/Δγ 0; 0 … 0 1; ]].
Using the operators defined above, we can now write the discretized action functional in the following fashion
E_ IVP= 1/2{ (D̅^ R_t t_1)^ T𝕕[c^2+2 V( x_1)/m] H̅ (D̅^ R_t t_1) - (D̅^ R_x x_1)^ TH̅ (D̅^ R_x x_1)}
- 1/2{(D̅^ R_t t_2)^ T𝕕[c^2+2 V( x_2)/m] H̅ (D̅^ R_t t_2) - (D̅^ R_x x_2)^ TH̅ (D̅^ R_x x_2)}
+ λ_1( t_1[1]-t_i)+λ_2((D t_1)[1]-ṫ_i)+λ_3( x_1[1]-x_i)
+ λ_4((D x_1)[1]-ẋ_i)
+ λ_5( t_1[N_γ]- t_2[N_γ]) + λ_6( x_1[N_γ]- x_2[N_γ])
+ λ_7( (D t_1)[N_γ]- (D t_2)[N_γ])+λ_8( (D x_1)[N_γ]- (D x_2)[N_γ]).
Conventional matrix vector multiplication is implied in the above expression, whenever a matrix quantity such as H̅ or D̅ acts on a vector x_1,2 or t_1,2. The matrix denoted by 𝕕[f( x)] contains on its diagonal the values 𝕕_kk=f( x(γ_k)) and zero otherwise. We deploy an appropriately modified matrix H̅ for the inner product in the presence of the affine-coordinate regularized SBP operators (see <ref>).
The initial conditions we supply are the values of the spatial and temporal coordinate x_i, t_i, as well as the initial velocities with respect to the world line parameter γ, i.e. ẋ_i and ṫ_i. Since our physical problem is formulated as an initial value problem, given t_i, x_i and the physical velocity v_i=dx/dt, there exists a freedom to choose ṫ_i and ẋ_i, as only their ratio is fixed v_i=ẋ_i/ṫ_i. We have added eight Lagrange multipliers, whose role is to explicitly implement the initial conditions (λ_1-4) and the connecting conditions at the end of the forward and backward branches of our doubled degree of freedom construction (λ_5-8).
Once the action functional has been formulated in its discrete form, changing from to only requires replacement of the corresponding difference operator D and quadrature matrix H but no further changes to the functional itself.
This concludes the description of our novel variational approach and we proceed to evaluate its properties and performance based on two concrete numerical examples.
§ NUMERICAL RESULTS
In this section we will present explicit results for the numerically obtained classical trajectory of a point particle in the presence of two different potentials, V_1(x)=α x
and V_4(x)=κ x^4. These two choices correspond to a model of a point mass falling in a constant gravitational field
and carrying out highly-nonlinear anharmonic motion. We set the mass of the particle to unity, as well as adopt without loss of generality the convention that the speed of light c=1, which simply amounts to a particular choice of units for length and time.
Let us stress again that while standard numerical methods exist to solve the equations of motion for each of these systems, the novelty of the approach presented here lies in the fact that we retain the continuum time shift invariance of the system and thus achieve exact conservation of Q_t in the interior of the simulated time domain. In addition we determine the classical trajectory directly from the action functional of the geometrized problem, without the need to derive the equation of motion.
We implement the action functional <ref> in the Mathematica language[The code, using both the [2,1] and the [4,2] operator, is available under open access on the Zenodo repository <cit.>.]. As the critical point of the action may be a saddle point, instead of an actual minimum, we must be careful in deploying established numerical optimization algorithms in the dynamical degrees of freedom d={ t_1,2, x_1,2,λ_1-8}. Instead of minimizing E_ IVP directly, we will minimize the Euclidean norm of the gradient |∇_ dE_ IVP|^2. Via this detour, a saddle point is converted into a minimum. In practice we deploy a chain of minimization algorithms. We start with a preconditioning based on the LBFGS quasi-Newton algorithm, which features cost-efficient iteration steps when far away from the true critical point. It is followed by further iterations based on the full Newton method, which exhibits a faster convergence rate than the LBFGS algorithm when close to the critical point. Once the critical point has been approached to at least floating point precision we switch to the interior point optimization, which showed reliable performance in identifying the critical point to any desired tolerance. For our numerical tests in Mathematica, we used a working precision of 40 digits and an accuracy goal of 40 digits.
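The saddle-to-minimum conversion can be illustrated on a toy function (an example of ours, unrelated to the actual E_ IVP): f(x,y)=x^2-y^2 has a saddle at the origin, which direct minimization cannot locate, whereas |∇ f|^2 possesses a genuine minimum there.

```python
import numpy as np
from scipy.optimize import minimize

f_grad = lambda z: np.array([2.0*z[0], -2.0*z[1]])        # gradient of f(x, y) = x^2 - y^2
obj = lambda z: float(f_grad(z) @ f_grad(z))              # |grad f|^2, minimal at the saddle

res = minimize(obj, np.array([1.3, -0.7]), method='BFGS')
print(res.x)                                              # approaches the saddle point (0, 0)
```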
The figures shown in the following are based on results from the [2,1] operator and include the outcomes from the [4,2] operator when indicated in the text.
§.§ Linear potential case
We discretize the continuous action functional
E^ lin_ IVP= ∫_γ_i^γ_fdγ1/2{( 1+2α x_1(γ) ) (d t_1/dγ)^2-(d x_1/dγ)^2}
- ∫_γ_i^γ_fdγ1/2{( 1+2α x_2(γ) ) (d t_2/dγ)^2-(d x_2/dγ)^2}
+ λ_1(t_1(γ_i)-t_i)+λ_2(ṫ_1(γ_i)-ṫ_i)+λ_3(x_1(γ_i)-x_i)+λ_4(ẋ_1(γ_i)-ẋ_i)
+ λ_5(t_1(γ_f)-t_2(γ_f)) +λ_6(ṫ_1(γ_f)-ṫ_2(γ_f))
+ λ_7(x_1(γ_f)-x_2(γ_f)) +λ_8(ẋ_1(γ_f)-ẋ_2(γ_f))
along the world-line of the particle motion between γ_i=0 and γ_f=1 with N_γ=32 points. Without loss of generality, we arbitrarily set the starting time to t_i=0 and the starting position to x_i=1. To obtain an initial velocity v_i=1/10 we choose ṫ=1 and ẋ=v_i. Note that we do not fix the value of t_f but only the initial velocity of time with respect to γ. The choice of ṫ=1 will lead to dynamics, such that t_f will be of the order of one. (In the next subsection we will also provide results for different choices of ṫ_i.) As strength for the linear potential we choose α=1/4. The corresponding discrete action functional reads explicitly
E^ lin_ IVP= 1/2{ (D̅^ R_t t_1)^ T𝕕[1+2 α x_1]H̅ (D̅^ R_t t_1) - (D̅^ R_x x_1)^ TH̅ (D̅^ R_x x_1)}
- 1/2{ (D̅^ R_t t_2)^ T𝕕[1+2 α x_2] H̅ (D̅^ R_t t_2) - (D̅^ R_x x_2)^ TH̅ (D̅^ R_x x_2)}
+ λ_1( t_1[1]-t_i)+λ_2((D t_1)[1]-ṫ_i)
+ λ_3( x_1[1]-x_i)+λ_4((D x_1)[1]-ẋ_i)
+ λ_5( t_1[N_γ]- t_2[N_γ]) + λ_6( x_1[N_γ]- x_2[N_γ])
+ λ_7( (D t_1)[N_γ]- (D t_2)[N_γ])+λ_8( (D x_1)[N_γ]- (D x_2)[N_γ]).
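To illustrate the bookkeeping, the following sketch (ours, in Python/NumPy rather than Mathematica) assembles a functional of this form using the plain SBP operators D and H instead of the regularized, null-space consistent D̅^ R described above. It therefore only shows how the quadrature matrix, the diagonal potential factor and the Lagrange-multiplier terms combine, and is not a substitute for the full solution procedure with the regularized operators and the optimizer chain described in the text.

```python
import numpy as np
from scipy.optimize import approx_fprime

alpha, N = 0.25, 32                          # potential strength and number of gamma points
dg = 1.0/(N - 1)
h = np.ones(N); h[0] = h[-1] = 0.5
H = dg*np.diag(h)                            # trapezoidal quadrature matrix
D = np.zeros((N, N))                         # plain [2,1] SBP derivative (not the regularized one)
D[0, :2], D[-1, -2:] = [-1.0, 1.0], [-1.0, 1.0]
for i in range(1, N - 1):
    D[i, i-1], D[i, i+1] = -0.5, 0.5
D /= dg
ti, xi, tdi, xdi = 0.0, 1.0, 1.0, 0.1        # initial data of the linear-potential example

def E_ivp(z):
    """Discrete doubled-d.o.f. action: forward branch minus backward branch plus constraints."""
    t1, x1, t2, x2, lam = z[:N], z[N:2*N], z[2*N:3*N], z[3*N:4*N], z[4*N:]
    def branch(t, x):                        # 1/2 [ (Dt)^T H d[1+2*alpha*x] (Dt) - (Dx)^T H (Dx) ]
        dt, dx = D @ t, D @ x
        return 0.5*(dt @ (H @ ((1.0 + 2.0*alpha*x)*dt)) - dx @ (H @ dx))
    cons = np.array([t1[0]-ti, (D @ t1)[0]-tdi, x1[0]-xi, (D @ x1)[0]-xdi,
                     t1[-1]-t2[-1], x1[-1]-x2[-1],
                     (D @ t1)[-1]-(D @ t2)[-1], (D @ x1)[-1]-(D @ x2)[-1]])
    return branch(t1, x1) - branch(t2, x2) + lam @ cons

# straight-line initial guess on both branches; the critical point would be found by
# driving the gradient to zero, e.g. by minimizing |grad E_ivp|^2 as described in the text
g = np.linspace(0.0, 1.0, N)
z0 = np.concatenate([ti + tdi*g, xi + xdi*g, ti + tdi*g, xi + xdi*g, np.zeros(8)])
print("E_IVP(z0) =", E_ivp(z0), " |grad E_IVP(z0)| =",
      np.linalg.norm(approx_fprime(z0, E_ivp, 1e-7)))
```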
Let us take a look in <ref> at the raw results for the forward and backward time and spatial coordinates, as obtained from the critical point of E^ lin_ IVP with V(x)=α x. In the top panel, we show t_1(γ_i) as red circles and t_2(γ_i) as blue crosses, while in the bottom panel these symbols denote the spatial coordinate of the point particle trajectory x_1(γ_i) and x_2(γ_i) respectively. As required by the physical limit (discussed in <ref>), we find that the values of the doubled degrees of freedom coincide at the critical point. The solution of the corresponding continuum geodesic equations, obtained with Mathematica's NDSolve command, is shown as a gray solid line and excellent agreement is observed.
Note that due to our choice of ṫ_i=1 the maximum time traversed by the simulation is close to one.
At first sight it appears that an equidistant discretization of time in γ emerges, but an inspection of the velocity of time with respect to γ in <ref> reveals that the time spacing dynamically adapts to the behavior observed in the spatial coordinate x. Close to the maximum of x(γ) at around γ=0.4 the temporal spacing e.g. has a minimum. This dynamically emerging time discretization constitutes an automatically generated non-trivial mesh for the time coordinate and arises naturally in our formalism. In fact an automatic AMR procedure results.
Let us next plot in <ref> the results from our geometrized formalism as the physical trajectory, i.e. as x_1,2(t_1,2) (red circles and blue crosses). This allows us to compare the outcome to the solution one would obtain by following the conventional approach in the literature (see e.g. chapter 7.9 in <cit.>). There one considers time as independent variable and simply adds a potential term to the free relativistic action <ref> before deriving the corresponding Euler-Lagrange equation, which for the linear potential reads d^2x/dt^2 = -α(1-(dx/dt)^2)^(3/2). Using Mathematica's NDSolve command, we compute the solution of this equation of motion and plot it as a gray solid line. Excellent agreement with the solution from our variational approach is observed, indicating that the geometrization strategy indeed reproduces the solution of the physical problem at hand.
Note that the change in the velocity of the time coordinate manifests itself here as a slightly denser time grid around the maximum of the trajectory.
After this qualitative visual inspection, let us take a closer look at the properties of the obtained solution. The first question we may ask is how well quantitatively the solution follows the naively discretized geodesic equations for time <ref> and space <ref> respectively. The continuum geodesic equations for the system at hand read
d/dγ(g_00dt/dγ)=d/dγ( (1+2 α x) dt/dγ)
=0,
d/dγ(dx/dγ) + 1/2∂ g_00/∂ x( dt/dγ)^2=d^2x/dγ^2 + α( dt/dγ)^2=0.
When deriving these equations of motion from the continuum action functional <ref> we have only used integration by parts. This motivates us to proceed, considering them naively discretized by replacing the derivatives with SBP finite difference operators
D( (1+2α x )∘(D t) )=Δ G^t,
DD x + α (D t)∘ (D t)=Δ G^x.
Here element-wise multiplication of entries of vector quantities is explicitly denoted by the symbol ∘, which implements e.g. x_1∘ x_2= ( x_1(0)x_2(0), x_1(Δγ)x_2(Δγ),…, x_1(Δγ(N_γ-1))x_2(Δγ(N_γ-1)))^ T. Note that we have introduced on the right of the above equations two quantities Δ G^x and Δ G^t, which denote the deviation from the value zero, to which the equations of motion evaluate in the continuum. By inspecting Δ G^x and Δ G^t for the trajectories x_1,2 and t_1,2 obtained from the critical point of the discretized action functional E^ lin_ IVP, we can obtain first quantitative insight into the performance of our variational approach.
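In code, the residuals Δ G^t and Δ G^x amount to a few lines, with ∘ becoming element-wise multiplication (a minimal sketch of ours for the linear potential; D is any SBP derivative matrix and t, x the discrete trajectories).

```python
import numpy as np

def geodesic_residuals(D, t, x, alpha):
    """Delta G^t and Delta G^x of the naively SBP-discretized geodesic equations for V(x)=alpha*x."""
    dt = D @ t
    dG_t = D @ ((1.0 + 2.0*alpha*x)*dt)       # discretization of d/dgamma( g00 * dt/dgamma )
    dG_x = D @ (D @ x) + alpha*dt*dt          # discretization of x'' + alpha*(dt/dgamma)^2
    return dG_t, dG_x
```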
We plot the values of both quantities Δ G^x and Δ G^t in the top panel of <ref>. At first sight we find that deviations from the naively discretized geodesic equations are minute, except for the last two points. Note that the plot is given on a logarithmic scale.
Since we use a minimizer in Mathematica with the working precision set to 40 digits, values of <10^-30 reflect a true zero. It is apparent that both the naively discretized geodesic equations for x and t are fulfilled down to machine precision.
Let us proceed to the central quantity of interest in this study Q_t, defined in <ref>, which in the continuum represents the conserved quantity associated with the time-translation symmetry of the system. We again consider its naively discretized form in the following
Q_t=(D t)∘( 1 + 2α x).
With the discrete action functional E^ lin_ IVP retaining manifest invariance under shifts in the time coordinates t_1,2 we wish to investigate whether also the discretized Q_t retains its role as conserved Noether charge. To this end let us focus here on the deviation Δ E of Q_t from its continuum value
Δ E = Q_t -Q_t = (D t)∘( 1 + 2α x) - ṫ_i (1+2α x_i).
Note that Q_t takes on the continuum value by construction at the first point in γ, as there it is defined by the initial conditions. The values obtained for Δ E from the critical point of E^ lin_ IVP using either the [2,1] (red circles) or the [4,2] operator (blue crosses) are shown in the bottom panel of <ref>. There are two important observations to be made.
First, the discretized quantity Q_t is exactly conserved in the discrete setting in the interior of the simulated time domain and only at the final point γ_f does it deviate from that constant. While the deviation Δ E(γ_f) in case of the [2,1] operator is already smaller than two permille, it reduces even further to a value of 10^-6 when deploying the [4,2] operator.
We have investigated various potential reasons for the slight difference at the final point, such as a potential over-constraint from the connecting conditions in <ref>, but we have not identified the source as of yet. One avenue to explore in the future is whether the exact enforcement of the connecting conditions plays a role, which however requires the development of a genuinely weak formulation of our approach without the use of Lagrange multipliers. It is important to point out that, as we will show explicitly below, the presence of this final differing point does not spoil the convergence to the correct continuum limit.
Secondly, the value of Q_t that remains conserved in the interior agrees with the true continuum value, prescribed by the initial conditions, within machine precision. This is a highly non-trivial result, as even in energy preserving schemes, such as the leap-frog, the conserved quantities do not necessarily agree with the continuum ones.
We surmise that it is the interplay of a manifest time-translation invariant formulation of the action functional, together with the resulting dynamically emerging time discretization, which achieves the conservation of the discrete Q_t at its continuum value in the interior of the simulation domain.
The presence of two points that deviate from the naively discretized continuum geodesic equations may appear troublesome. However as we show in <ref> these points do not spoil the convergence to the correct continuum limit under grid refinement.
In the top panel of <ref>, we select the apparently most disadvantageous points for our convergence study, i.e. we compare the deviations from the continuum solution ϵ(γ_f)_x=| x[N_γ]-x_ true(1)| and ϵ(γ_f)_t=| t[N_γ]-t_ true(1)|
at γ_f, exactly where the deviation from the continuum result was maximal in the top panel of <ref>. Grid refinement is carried out and we provide the results for both the lowest order [2,1] operator and the next higher [4,2] operator.
Even in this disadvantaged scenario, we find that under grid refinement, the discrete solution approaches the true continuum values as expected from a scheme that is second order in the interior. Taking the [2,1] results, the best fit to ϵ_x reveals a scaling with Δγ^2.08, while for ϵ_t a virtually identical Δγ^2.07 ensues. Going over to the [4,2] results we find that the convergence is in line with expectations for an SBP operator of 4th order in the interior, with ϵ_x exhibiting a scaling of Δγ^3.07 and ϵ_t a somewhat better value of Δγ^3.48.
In the bottom panel of <ref> we instead investigate the global convergence of our approach using the L_2 norm ϵ(L_2)_x=√(( x- x_ true)^ T.H.( x- x_ true)) and ϵ(L_2)_t=√(( t- t_ true)^ T.H.( t- t_ true)), where x_ true and t_ true are taken from the numerical solution of the geodesic equations, used for comparison in <ref>. We find that similar convergence rates ensue, where [2,1] shows scaling Δγ^β with exponent β≥2 and [4,2] shows scaling with exponent β≥ 3.
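The quoted exponents follow from fits of logϵ against logΔγ under grid refinement; a minimal sketch of such a fit (with synthetic second-order data standing in for the measured errors) reads:

```python
import numpy as np

# synthetic errors that scale exactly as dgamma^2, standing in for the measured ones
dgamma = 1.0/(np.array([16, 32, 64, 128]) - 1)
err = 0.5*dgamma**2

# least-squares slope of log(err) against log(dgamma) gives the observed order
order = np.polyfit(np.log(dgamma), np.log(err), 1)[0]
print(f"observed convergence order: {order:.2f}")   # 2.00 for this synthetic data
```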
These convergence results agree with the findings of our previous study <cit.>, where the standard action functional was discretized with time as independent parameter.
§.§ Quartic potential
After considering the simplest possible non-trivial scenario with a linear potential, we now turn to a system with a quartic potential and the following continuum action functional
E^ qrt_ IVP= ∫_γ_i^γ_fdγ1/2{( 1+2κ x_1^4(γ) ) (d t_1/dγ)^2-(d x_1/dγ)^2}
- ∫_γ_i^γ_fdγ1/2{( 1+2κ x_2^4(γ) ) (d t_2/dγ)^2-(d x_2/dγ)^2}
+ λ_1(t_1(γ_i)-t_i)+λ_2(ṫ_1(γ_i)-ṫ_i)+λ_3(x_1(γ_i)-x_i)+λ_4(ẋ_1(γ_i)-ẋ_i)
+ λ_5(t_1(γ_f)-t_2(γ_f)) +λ_6(ṫ_1(γ_f)-ṫ_2(γ_f))
+ λ_7(x_1(γ_f)-x_2(γ_f)) +λ_8(ẋ_1(γ_f)-ẋ_2(γ_f)).
Again we discretize along N_γ=32 in the world-line parameter γ. Using κ=1/2 in the potential V(x)=κ x^4 leads to dynamics that already in the small time regime considered here are distinctly anharmonic.
As in the previous subsection we discretize the world-line of the particle motion between γ_i=0 and γ_f=1, set the starting time to t_i=0 and the starting position to x_i=1. For our choice of v_i=1/10 we again decide on ṫ=1 and ẋ=v_i. The discretized action functional thus reads
E^ qrt_ IVP= 1/2{ (D̅^ R_t t_1)^ T𝕕[1+2 κ x^4_1] H̅ (D̅^ R_t t_1) - (D̅^ R_x x_1)^ TH̅ (D̅^ R_x x_1)}
- 1/2{ (D̅^ R_t t_2)^ T𝕕[1+2 κ x^4_2] H̅ (D̅^ R_t t_2) - (D̅^ R_x x_2)^ TH̅ (D̅^ R_x x_2)}
+ λ_1( t_1[1]-t_i)+λ_2((D t_1)[1]-ṫ_i)
+ λ_3( x_1[1]-x_i)+λ_4((D x_1)[1]-ẋ_i)
+ λ_5( t_1[N_γ]- t_2[N_γ]) + λ_6( x_1[N_γ]- x_2[N_γ])
+ λ_7( (D t_1)[N_γ]- (D t_2)[N_γ])+λ_8( (D x_1)[N_γ]- (D x_2)[N_γ])
and taking the fourth power of the x_1,2 vector is to be understood in an element wise fashion.
While for the linear potential the time geodesic appeared to depend almost linearly on γ, we find that here a distinct curvature along γ emerges, as shown in the top panel of <ref>. We plot the values of t_1(γ_i) as red circles and t_2(γ_i) as blue crosses and show as gray solid line the solution of the corresponding geodesic equation, obtained with Mathematica's NDSolve command. Again the physical limit of equal values t_1(γ)= t_2(γ) is realized.
The values of the spatial coordinate x_1(γ_i) and x_2(γ_i) as obtained from the critical point of E^ qrt_ IVP with V(x)=κ x^4 are plotted in the bottom panel of <ref> with the direct numerical solution of the geodesic equation added as gray solid line.
Note that even though we have provided an initial velocity of the time along γ again with value ṫ_i=1, the final time reached by the simulation now lies at t[N_γ]=1.47. Similarly one finds that a dynamical discretization in t emerges, which, as shown in <ref>, varies from the initial value ṫ_i=1 to (D t)[N_γ]=2.06. This behavior can be understood when realizing that the trajectory x(t) in the non-linear case shows a stronger curvature close to t=0 than at later times. I.e. we find again that the automatically generated non-trivial mesh (through automatic AMR) for the time coordinate adapts to the dynamics, by exhibiting a finer spacing at initial times.
Let us take a look at the results from our geometrized formalism as physical trajectory in <ref>, i.e. plotted as x_1,2(t_1,2) (red circles and blue crosses). They are compared to the solution of the conventional equation of motion, obtained from treating time as independent variable, d^2x/dt^2 = -4κ x^3(1-(dx/dt)^2)^(3/2), computed with Mathematica's NDSolve command (gray solid line) in the range t∈[0,1]. We find that within this range the solution from our geometrized discrete approach shows excellent agreement. Note that due to the non-equidistant emergent time discretization, the physical trajectory x(t), shown in <ref>, extends beyond the point t=1.
As for the linear potential, let us investigate quantitatively the properties of the trajectories t(γ_i) and x(γ_i) by inserting them into the naively discretized geodesic equations. For the quartic potential, the continuum geodesic equations for the temporal and spatial coordinate read
d/dγ(g_00dt/dγ)=d/dγ( (1+2 κ x^4) dt/dγ)
=0,
d/dγ(dx/dγ) + 1/2∂ g_00/∂ x( dt/dγ)^2=d^2x/dγ^2 + 4κ x^3( dt/dγ)^2=0.
Naively discretizing these equations by replacing derivatives with SBP operators leads to the following discrete geodesic equations
D( (1+2κ x^4 )∘D t)=Δ G^t,
DD x + (4κ x^3) ∘ (D t)∘ (D t)=Δ G^x.
where again taking a power of the x_1,2 vector is to be understood in an element wise fashion. To evaluate how well the solution obtained from the critical point of E^ qrt_ IVP fulfills the naive discretized geodesic equations we have again introduced the quantities Δ G^t and Δ G^x above.
As shown in <ref> also here in the highly non-linear scenario, we find that the values of both x (red circles) and t (blue crosses) follow the discretized geodesic equations excellently, except for the last two points.
The most important question however remains whether in the non-linear discretized system, the continuum quantity Q_t from <ref> also remains conserved. Its naively discretized counterpart here reads
Q_t=(D t)∘( 1 + 2κ x^4),
and we define its deviation from the continuum result via the difference
Δ E = Q_t -Q_t = (D t)∘( 1 + 2 κ x^4) - ṫ_i (1+2κ x_i^4),
which we plot in the bottom panel of <ref> using the [2,1] operator (red circles) and the [4,2] operator (blue crosses).
We find also in the case of a non-linear potential that Q_t is preserved exactly in the interior of the simulated time domain. Up to machine precision its values in the interior also take on the correct continuum value. Similar to what we saw in the linear case, the last point deviates from the continuum value. It is reassuring to see that the absolute deviation at γ_f reduces already by an order of magnitude when going from a [2,1] to a [4,2] operator.
One may now ask whether the deviation Δ E at γ_f is in some way related to the fact that we use N_γ=32 points to discretize the world-line parameter. The answer is negative, as demonstrated in <ref>.
Three different datasets are shown in <ref>, where for fixed ṫ_i the grid spacing in γ is changed. The green triangles denote the results for Δ E when using N_γ=16, the red circles N_γ=32 and the blue crosses N_γ=64. We have confirmed explicitly that in all cases the values of Q_t are preserved up to machine precision in the interior of the simulated time domain. It is indeed only the last point that shows a deviation and we see that the absolute magnitude of the deviation reduces as the grid is refined.
For the next test, we instead increase N_γ together with ṫ_i to let the simulation proceed to larger values of time t. In the top panel of <ref> we plot the deviation of Q_t from its continuum value for three choices ṫ_i=1, N_γ=16 (green triangles), ṫ_i=4, N_γ=32 (red circles) and ṫ_i=8,N_γ=64 (blue crosses). As seen before in the interior of the simulated time domain, the values of Q_t remain exactly preserved and only the last point deviates. We find that the magnitude of the deviation in the last point changes only marginally with the length of the simulated trajectory. For completeness the corresponding trajectories x(t) are plotted in the bottom panel of <ref>. Again let us emphasize that, as we will show below, the presence of this single deviating point does not spoil the convergence to the correct solution under grid refinement.
The exact conservation of the quantity Q_t in the interior is remarkable, as e.g. the trajectory in the bottom panel of <ref> for ṫ_i=8,N_γ=64 shows sizable discretization artifacts (which disappear under grid refinement). We believe that it is due to the manifest time-translation invariance of the underlying action functional that the combined dynamics of x(γ) and t(γ), including the automatically generated non-equidistant time mesh, achieve conservation of the continuum quantity.
The fact that the solutions we obtain fulfill the naively discretized geodesic equations and provide exact conservation of the continuum conserved charge in the interior of the simulated domain (see <ref>) bodes well for establishing the stability of the approach. Since in the IVP setting t(γ_f) is not given but emerges dynamically, we cannot directly apply <ref> as proof of stability. However, as long as we can assume that the simulated time range (given a certain ṫ(γ_i)) is finite, the linear bound of <ref> on the norm H_ BVP holds in the discrete setting. In turn we deduce that the solution cannot exhibit a stronger than linear rise of the derivatives of either t(γ) or x(γ), implying stability of the approach.
Let us now quantify the convergence properties of our variational approach using the results from the lowest-order operator and those coming from the higher-order operator in <ref>.
As in the linear potential case, in the top panel of <ref>, we select the most disadvantageous points for our convergence study, i.e. we compare the deviation from the continuum geodesic equations ϵ(γ_f)_x=| x[N_γ]-x_ true(1)| and ϵ(γ_f)_t=| t[N_γ]-t_ true(1)|
at γ_f, exactly where the deviation from the continuum result was maximal in the top panel of <ref>. Also in the non-linear scenario we find that under refinement of the γ grid, the discrete solution monotonously approaches the true continuum values.
For the lowest-order operator, the best fit to ϵ(γ_f)_x reveals a scaling with Δγ^2.08, while for ϵ(γ_f)_t a virtually identical Δγ^2.06 is obtained.
For the higher-order operator, we find that the convergence is slightly worse than in the linear potential case. As seen in the green circles plotted in <ref>, the asymptotic convergence regime is reached for 32 <N_γ <64. Once we are in that regime, we find that ϵ(γ_f)_x exhibits a scaling of Δγ^2.84, close to the expected value of three. On the other hand, ϵ(γ_f)_t shows a consistent performance with a scaling of Δγ^3.13 already at N_γ=32.
Let us now investigate the global convergence in the bottom panel of <ref> using the L_2 norm ϵ(L_2)_x=√(( x- x_ true)^ T.H.( x- x_ true)) and correspondingly ϵ(L_2)_t=√(( t- t_ true)^ T.H.( t- t_ true)), where x_ true and t_ true are taken from the numerical solution of the geodesic equations, used for comparison in <ref>.
Reassuringly we find that the global convergence properties of our approach are better than indicated by those of the most disadvantaged point in the top panel of <ref>. Indeed we find that for the higher-order operators, the global scaling regime is reached already at N_γ=32, similarly to the lower-order case. In addition, the global convergence rate Δγ^β for the higher-order operators lies consistently at β≥3 for both the x and t degrees of freedom.
Again, these convergence results are in good agreement with those of our previous study <cit.>, where the standard action functional was discretized with time as the independent parameter.
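The point-wise and global error measures used above are straightforward to evaluate numerically. The following sketch (with x, x_true and the SBP quadrature matrix H assumed to be available from the simulation and the reference solution) computes the H-weighted L_2 error and extracts the scaling exponent β from a series of grid refinements.

import numpy as np

def h_norm_error(x, x_true, H):
    # ϵ(L_2) = sqrt((x - x_true)^T H (x - x_true))
    err = x - x_true
    return float(np.sqrt(err @ (H @ err)))

def scaling_exponent(dgammas, errors):
    # least-squares fit of log(error) = β log(Δγ) + const
    beta, _ = np.polyfit(np.log(dgammas), np.log(errors), 1)
    return beta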
§ SUMMARY AND OUTLOOK
In this study we have put forward a novel geometric variational approach for solving a large class of initial value problems, associated with the dynamics of point particles evolving under a generic x dependent potential V(x). Taking inspiration from the general theory of relativity, we consider both time and spatial coordinates of the point particle as dependent variables of a world-line parameter γ. We select a continuum action functional, which in the non-relativistic limit reduces to the standard action of point mechanics and whose critical point encodes a set of geodesic equations for x(γ) and t(γ). After doubling the degrees of freedom t_1,2 and x_1,2 we can relate the critical point of the corresponding doubled d.o.f. action with the classical trajectory. Using the concept of Killing vectors we identify conserved quantities, e.g. related to the continuum time translation invariance of the action.
Deploying the regularized SBP operators originally introduced in <cit.>, we discretize the continuum action and add Lagrange multipliers to enforce the initial and connecting conditions between the doubled t_1,2 and x_1,2. The main novelty of our approach is that the discretized action retains the continuum symmetries, in particular the invariance under time translations. Exactly mimicking integration by parts through the use of SBP finite difference operators entails that the derivation of the conserved charges associated with the Killing vectors of the system is also exactly mimicked in the discrete setting. That is, the continuum conserved quantities Q_K retain their role even after discretization.
The numerical results we obtain for both a linear and highly non-linear potential show that a discretization of time t now indeed emerges dynamically, adapting to the behavior of the spatial coordinate x. This is a concrete realization of an automatically generated non-equidistant mesh for the time coordinate, guided by our action functional with manifest continuum translation symmetry, i.e. an automatic AMR procedure. We have shown that except for the last two points along the discrete γ, the solution we obtain follows the naively discretized geodesic equations excellently.
Even more importantly, the naively discretized counterpart Q̅_t of the continuum conserved quantity Q_t remains exactly preserved in the interior of the simulated time domain, where it even retains its continuum value exactly within machine precision. A small deviation from the values in the interior of Q̅_t is observed at the last step γ_f. This deviation however decreases both under grid refinement and when increasing the order of the SBP operator.
Point-wise, as well as global, scaling analyses under grid refinement show that even in the presence of two points deviating from the naively discretized geodesic equations at the last two γ steps, the solution monotonously improves and approaches the true solution. When deploying the lower-order operator, we achieve consistent scaling in Δγ^β with β≳ 2 for both the linear and non-linear potential. For the higher-order operator, in the case of a linear potential the dependence on the grid spacing follows the expected power law Δγ^β with β≳3 for all values of N_γ we inspected. For the non-linear potential, the scaling regime for point-wise convergence at the last point γ_f is reached for 32<N_γ<64, with a slightly worse scaling of 2.84 ≤β≤ 3.13. Global convergence on the other hand shows consistent scaling at all N_γ we considered, with exponents β≥3, in agreement with the findings in our previous paper <cit.>, where the standard action functional was discretized with time as independent variable.
This study presents a proof of principle that initial value problems can be discretized while retaining continuum symmetries. Three future directions will be explored. First, how can we capture systems of ordinary differential equations that, e.g., contain a term proportional to the first derivative of x with respect to time? To this end we must exploit the versatility of the doubled d.o.f. approach more thoroughly. Furthermore, we will explore how the reparametrization invariant formulation can be applied to partial differential equations in higher dimensions, taking insight from how the non-relativistic action emerges from our relativistic starting point in <ref>. In addition, to better understand the origin of the single deviating value in the otherwise exactly preserved Q̅_t, we will develop a genuinely weak formulation of our approach, devoid of Lagrange multipliers for enforcing initial and connecting conditions.
We believe that the quest for retention of defining continuum properties in discretized systems is both conceptually and practically valuable. Not only does the preservation of symmetries place powerful physical constraints on the solution but in addition offers a mechanism for the automatic generation of optimal discrete spacetime grids to ensure conservation of the Noether charges associated with these symmetries. We hope that this study provides the community with a novel impulse in this direction.
§ ACKNOWLEDGEMENTS
A. R. thanks Will Horowitz for inspiring and insightful discussions and Alex Nielsen for valuable insight on the general theory of relativity. A. R. gladly acknowledges support by the Research Council of Norway under the FRIPRO Young Research Talent grant 286883. J. N. was supported by the Swedish Research Council grant nr. 2021-05484. The study has benefited from computing resources provided by
UNINETT Sigma2 - the National Infrastructure for High Performance Computing and Data Storage in Norway under project NN9578K-QCDrtX "Real-time dynamics of nuclear matter under extreme conditions"
§ REGULARIZED SBP OPERATORS IN AFFINE COORDINATES
We here briefly review the idea and some technical aspects of constructing null-space consistent regularized SBP operators using affine coordinates, developed in our study <cit.>.
The goal of regularizing conventional SBP operators D, such as those defined e.g. in <ref> and <ref>, lies in removing their unphysical zero modes. These may appear as highly oscillatory eigenfunctions to D^ T with zero eigenvalue. To this end we take inspiration from regularization techniques developed for partial differential equations. There the concept of null-space consistent SBP operators has been discussed in detail (see e.g. <cit.>).
For a differential equation, the boundary conditions may be enforced in the weak sense by adding a simultaneous approximation penalty term (SAT) <cit.>, which can be partially absorbed into the finite difference operator, lifting its zero modes. Take for example a simple discretized first order differential equation
D u = λ u + σ_0 H^-1E_0( u - g),
where the SAT penalty term has been added to the right-hand side. It features the matrix E_0= diag[1,0,…,0] that makes reference only to the first entry in the discretized functions u and g, the latter of which contains the initial value in its first entry g=(u_0,0,…,0). The SAT term also contains H^-1, i.e. Δ t^-1, which increases the strength of the penalty as Δ t→0. The parameter σ_0 in the SBP-SAT approach is tuned to satisfy stability properties and its optimal value is found to be σ_0=-1 (see e.g. ref. <cit.>), a choice we adopt in the following. In the differential equation context one conventionally absorbs the term proportional to u into a new D̃=D - σ_0 H^-1 E_0. This new operator is devoid of zero modes <cit.> and may be inverted to obtain the solution u.
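For concreteness, the construction above can be reproduced for the model problem du/dt = λ u with a second-order SBP operator in a short NumPy sketch; the grid size, λ and the initial value are illustrative choices, and the sketch is meant only to show how the penalty term is absorbed into D̃ and how the resulting linear system is solved.

import numpy as np

def sbp21(n, dt):
    # second-order diagonal-norm SBP first-derivative operator D = H^{-1} Q
    h = np.ones(n); h[0] = h[-1] = 0.5
    H = dt * np.diag(h)
    Q = 0.5 * (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))
    Q[0, 0], Q[-1, -1] = -0.5, 0.5        # so that Q + Q^T = diag(-1, 0, ..., 0, 1)
    return np.linalg.inv(H) @ Q, H

n, T, lam, u0, sigma0 = 64, 1.0, -2.0, 1.0, -1.0
dt = T / (n - 1)
D, H = sbp21(n, dt)
Hinv = np.linalg.inv(H)
E0 = np.zeros((n, n)); E0[0, 0] = 1.0
g = np.zeros(n); g[0] = u0                # initial value enters only through the SAT term

D_tilde = D - sigma0 * Hinv @ E0          # penalty part proportional to u absorbed into D
u = np.linalg.solve(D_tilde - lam * np.eye(n), -sigma0 * Hinv @ E0 @ g)
print(np.max(np.abs(u - u0 * np.exp(lam * np.linspace(0.0, T, n)))))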
In the context of an action functional, such as <ref>, we do not have an equal sign around which we can move the SAT term. Instead we must incorporate the whole of the penalty term directly in a modified SBP operator. Since the penalty term in our example <ref> contains both a contribution that is proportional to the function u and a constant shift g it amounts to an affine transformation on u, which can be captured efficiently using affine coordinates. To this end let us write A̅[ b] x̅ = A x+ b, where A̅[ b] refers to a matrix A extended by an additional row and column with the value 1 placed in the lower right corner. The new column available in A̅[ b] is populated with the entries of b. The vector x̅ is nothing but x extended by one more entry with value unity. We will use this construction principle to define a regularized D̅ from our conventional SBP operator D.
Since we have both x and t as independent degrees of freedom each with independent initial conditions x_i and t_i, we must define different shifts b^x and b^t respectively and thus end up with two different regularized SBP operators D̅_t and D̅_x. The shift terms are nothing but the constant part of the corresponding SAT term, absorbed into the SBP operator
b^x= σ_0 H^-1 E_0 g^x, b^t= σ_0 H^-1 E_0 g^t.
Here g^x= diag[x_i,0,⋯,0] and g^t= diag[t_i,0,⋯,0] encode the initial values for x and t respectively. As mentioned before, we choose the parameter σ_0=-1, whenever a penalty term is incorporated in D̅, motivated by the fact that in the conventional treatment of IVPs using the SBP-SAT approach, this value leads to a minimal discretization error (see e.g. ref. <cit.>). The resulting regularized SBP operators to be deployed on t_1,2 or x_1,2, are given explicitly in <ref> and <ref> respectively.
Consistent with the affine coordinates used in the newly defined D̅_t and D̅_x, we also amend the discretized trajectories t_1,2 and x_1,2 by one more entry that is given the value one.
In order to compute inner products in the space of discretized functions, we also have to modify the quadrature matrix H→H̅ by amending it by one row and column filled with zeros. We do not include the value one in the lower right corner in order to correctly account for the fact that the vectors appearing as arguments to the inner product contain an auxiliary final entry, which does not contribute to the value of the inner product and only facilitates the efficient implementation of shift operations. For more details on the affine coordinate regularization technique see <cit.>.
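In code, the affine extension amounts to padding the regularized operator and the quadrature matrix by one row and column. The sketch below reuses a conventional SBP pair D, H and the boundary selector E_0 as in the example above; sign conventions follow Eq. <ref> and should be checked against the implementation at hand, so this is a sketch rather than a definitive implementation.

import numpy as np

def affine_extend(A, b):
    # \bar{A}[b]: append the shift b as an extra column and a (0, ..., 0, 1) row
    n = A.shape[0]
    top = np.hstack([A, b.reshape(-1, 1)])
    bottom = np.zeros((1, n + 1)); bottom[0, -1] = 1.0
    return np.vstack([top, bottom])

def regularized_sbp(D, H, E0, x_init, sigma0=-1.0):
    g = np.zeros(D.shape[0]); g[0] = x_init
    b = sigma0 * np.linalg.inv(H) @ E0 @ g            # shift term b^x of Eq. <ref>
    D_bar = affine_extend(D - sigma0 * np.linalg.inv(H) @ E0, b)
    H_bar = np.pad(H, ((0, 1), (0, 1)))               # extra row/column of zeros
    return D_bar, H_bar

# discretized trajectories are likewise padded with a final entry equal to one:
# x_bar = np.append(x, 1.0), so that D_bar @ x_bar = (D - sigma0 H^{-1} E0) x + b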
§ COMPETING INTERESTS
The authors declare that they have no competing interests.
§ AUTHOR'S CONTRIBUTIONS
* A. Rothkopf: formulation of the geometric variational approach, literature review, numerical experiments, writing, editing
* J. Nordström: guidance on the formulation and implementation of SBP based discretization schemes, literature review, editing
stavanger-mathphys
|
http://arxiv.org/abs/2307.05568v1 | 20230710025610 | Subtraction of the foreground confusion and parameter uncertainty of resolvable galactic binaries on the networks of space-based gravitational-wave detectors | [
"Jie Wu",
"Jin Li"
] | gr-qc | [
"gr-qc",
"astro-ph.IM"
] |
[email protected]
^1 College of Physics, Chongqing University, Chongqing 401331, China
^2 Department of Physics and Chongqing Key Laboratory for Strongly Coupled Physics, Chongqing University, Chongqing 401331, China
There are tens of millions of compact binary systems in the Milky Way, called galactic binaries (GBs), most of which are unresolved, and the gravitational waves (GWs) they emit overlap to form a foreground confusion.
By simulating this foreground confusion, we study how LISA, Taiji and TianQin, including their alternative orbital configurations, subtract resolvable GBs when they are combined into networks.
Our results indicate that, for a single detector, the number of detected resolvable GBs in descending order is: Taiji-m, Taiji-p (c), LISA, TianQin I, TianQin II.
For detector combinations on the network, the foreground confusion is effectively reduced as the number of detectors grows, and the optimal combinations for each number of detectors are: Taiji-m, LISA+Taiji-m, LISA+Taiji-m+TianQin I, and LISA+Taiji-m+TianQin I+II.
The sensitivity curve improves as the number of detectors increases, which makes it possible to detect other GW sources more precisely and decreases the parameter uncertainty of resolvable GBs.
Based on this, we discuss the parameter uncertainty of resolvable GBs detected by the combinations above and find that GW detection can promote electromagnetic (EM) detection.
Conversely, we find that using EM observations to determine the inclination angle can reduce the uncertainty of the GW strain amplitude by ∼93%, and determining the sky position can reduce the uncertainty of the phase by ∼30%, further strengthening the connection between GW detection and EM detection and contributing to the study of multi-messenger astronomy.
Subtraction of the foreground confusion and parameter uncertainty of resolvable galactic binaries on the networks of space-based gravitational-wave detectors
Jin Li^1,2
=============================================================================================================================================================
§ INTRODUCTION
Since LIGO detected the first GW event from a binary black hole merger (GW150914) in 2015 <cit.>, a series of ground-based GW detectors, such as Advanced LIGO <cit.>,
Advanced Virgo <cit.> and KAGRA <cit.>, have been built around the world, opening the window for detecting GW.
However, due to the limitation of the interferometer arm length, the observation window of the ground-based GW detector is in the high-frequency band from 1 Hz to kHz, and the low-frequency GW signal below 1 Hz cannot be effectively detected.
Therefore, constructing an interferometer with an arm length in order of one million kilometers in space is an ideal solution for detecting low-frequency GW.
The mission proposed by European Space Agency to detect GW in the low-frequency band named Laser Interferometer Space Antenna (LISA) is scheduled to be launched around the 2030s <cit.>.
At the same time, the Taiji mission proposed by the Chinese Academy of Sciences to construct a space-based GW observatory similar to LISA, which consists of a triangle of three spacecraft (S/C) orbiting the sun linked by laser interferometers, will be in operation <cit.>.
Another Chinese mission, TianQin, being different from LISA and Taiji, consists of three identical drag-free controlled S/C in high Earth orbits <cit.>.
LISA, Taiji, and TianQin are all sensitive to the milli-Hertz frequency band.
Compared with the Hertz band, the milli-Hertz frequency band to which space-based GW detectors are sensitive contains a large variety of GW sources. These sources are expected to carry a large amount of information about galaxy formation, galactic nuclei, the Milky Way, and the early universe <cit.>, including massive black hole binaries (MBHB) <cit.>, extreme/intermediate mass ratio inspirals (EMRIs/IMRIs) <cit.>, compact binaries
in the Milky Way <cit.> and stochastic gravitational-wave backgrounds (SGWBs) <cit.>.
According to current astrophysical models and observations, there are a large number of GBs in our Milky Way, whose orbital period is less than a few hours, and the frequency band of emitted GW is from 0.1 mHz to 10 mHz <cit.>.
Considering the sensitivity of the space-based GW detectors, the GWs emitted by tens of millions of GBs will enter the observation frequency band at the same time, overlapping to form the galactic foreground <cit.>.
Except for a small percentage of high signal-to-noise ratio (SNR) GBs known as resolvable GBs, the majority are unresolved, resulting in an effective noise called foreground confusion or confusion noise <cit.>.
In the frequency range of 0.5∼3 mHz, the foreground confusion will be greater than the instrument noise, affecting the observation of other GW sources and creating a bump on the sensitivity curve.
While the unresolved GBs constitute the foreground confusion and have a negative impact on the observation of other GW sources, the resolvable GBs are conducive to researching the evolution and distribution of GBs in our Milky Way, which is also one of the main science objectives of the space-based GW detectors <cit.>.
Since the proposal of LISA, extensive research has been conducted on the foreground confusion from GBs <cit.>.
Subtracting the foreground confusion as much as possible is beneficial for better observation of other GW sources.
The research on LISA, Taiji, and TianQin in subtracting of the foreground confusion is introduced respectively in Ref. <cit.>.
In addition to increasing observation time and improving the sensitivity of the GW detector, the networks of the GW detector can also effectively identify more resolvable GBs and subtract the foreground confusion <cit.>.
In this paper, we simulate the subtraction of foreground confusion for different network combinations of LISA, Taiji, and TianQin, including their alternative orbital configurations, to determine the best combination on the network. We then construct the corresponding sensitivity curves, calculate the SNR and parameter uncertainties of the detected resolvable GBs, and discuss the implications for multi-messenger astronomy in combination with EM detection.
In Sec. <ref>, we introduce the GW signal model used to simulate GBs, the response of different space-based GW detectors to GW, as well as their instrument noise, sensitivity, and the alternative orbit configurations.
In Sec. <ref>, we use the population model to construct the GBs signal, subtracting the resolvable GBs by the iterative procedure to estimate the foreground confusion, and calculating the parameters of the resolvable GBs.
In Sec. <ref>, we present the subtraction of the foreground confusion by different combinations on the network, analyze the factors responsible for them, and plot the full sensitivity curves containing the foreground confusion.
Finally, we summarize our results in Sec. <ref>.
§ GW SIGNALS AND DETECTORS
§.§ GW signals from GBs
Considering that GBs have a few hours of orbital period and emit GW frequencies in milli-Hertz, they are in the very beginning phase of inspiral, millions of years before the merger <cit.>.
Therefore, the orbital period evolves slowly and the GWs emitted by GBs can be fully regarded as quasi-sinusoidal signals (quasi-monochromatic sources).
For the GW signal, we can use a very simple model in which the phase is decomposed in a Taylor series, and consequently, the time domain waveform of a GB can be written as <cit.>:
h_+(t)=𝒜(1+cos^2ι)cosΦ(t)
h_×(t)=2𝒜cosιsinΦ(t)
with
Φ(t)=ϕ_0+2π f_0t+πḟ_0t^2+ Φ_D(t)
where 𝒜 is the GW strain amplitude, ι is the inclination angle, Φ(t) is the orbital phase, Φ_D(t) is the Doppler phase, ϕ_0 is the initial phase, and f_0 and ḟ_0 are the frequency and the frequency derivative of the GW.
The frequency variation, also known as the frequency derivative, can be expressed with the equation described in Ref. <cit.>:
ḟ_0=48/5(Gℳ/2c^3)^5/3(2π)^8/3f^11/3
where ℳ=(m_1m_2)^3/5/(m_1+m_2)^1/5 is the chirp mass, G and c are the gravitational constant and the speed of light.
By substituting a frequency f_0∼10^-3 Hz into equation 3, we can roughly estimate the frequency derivative as ḟ_0∼10^-19, indicating that the frequency derivative is many orders of magnitude smaller than the frequency itself, which is also why we consider these GWs as quasi-sinusoidal signals.
Therefore, we neglect higher-order phase terms as they contribute minimally to the waveform and have little impact on foreground confusion.
Additionally, we assume that the GBs are in circular orbits and ignore the influence of a third perturbing body <cit.>.
For the space-based GW detector, the periodic motion around the Sun will produce the Doppler phase, which is given by <cit.>:
Φ_D(t) = 2π f_0t(R/c)cosβcos(2π f_mt-λ )
where R = 1 A.U. is the distance between the Sun and the Earth, f_m = 1/year is the Geocentric orbit modulation frequency and (λ,β) are the Ecliptic coordinates of the GW source.
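A compact implementation of this quasi-monochromatic waveform, including the Doppler modulation, could look as follows; the parameter values in the example call are arbitrary placeholders rather than values used in this work.

import numpy as np

AU, C, YEAR = 1.495978707e11, 299792458.0, 3.15581498e7   # m, m/s, s

def gb_waveform(t, A, iota, f0, fdot, phi0, lam, beta):
    # Doppler phase from the detector's annual motion around the Sun
    phi_D = 2.0 * np.pi * f0 * t * (AU / C) * np.cos(beta) * np.cos(2.0 * np.pi * t / YEAR - lam)
    Phi = phi0 + 2.0 * np.pi * f0 * t + np.pi * fdot * t**2 + phi_D
    h_plus = A * (1.0 + np.cos(iota)**2) * np.cos(Phi)
    h_cross = 2.0 * A * np.cos(iota) * np.sin(Phi)
    return h_plus, h_cross

t = np.arange(0.0, 7 * 86400.0, 15.0)        # one week of data sampled every 15 s
hp, hc = gb_waveform(t, A=1e-22, iota=0.5, f0=2e-3, fdot=1e-17, phi0=0.0, lam=4.66, beta=-0.1)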
§.§ Detector’s response and noise
For the space-based GW detector, the GW strain recorded by the detector can be described as the linear combination of two GW polarizations <cit.>:
h(t)=F^+(t)h_+(t)+F^×(t)h_×(t)
where F^+ and F^× are the antenna pattern functions.
In the low-frequency limit, the antenna pattern functions in the detector’s coordinate frame can be expressed as<cit.>:
F^+ = -sinγ/2[(1+cos^2θ_d)sin2ϕ_dcos2ψ_s+2cosθ_dcos2ϕ_dsin2ψ_s]
F^× = -sinγ/2[-(1+cos^2θ_d)sin2ϕ_dsin2ψ_s+2cosθ_dcos2ϕ_dcos2ψ_s]
where γ=π/3 is the angle between the two arms of the detector, (ϕ_d,θ_d) are the coordinates of the location of the GW source in the
detector coordinate frame and ψ_s is the polarization angle.
The transformation between detector coordinates (ϕ_d,θ_d) and Ecliptic coordinates (λ,β) can be found in Appendix <ref>.
To explore the response of the detector to GWs in different positions, we introduce the combined tensor mode response function:
F=√(|F^+|^2+|F^×|^2)
The results in the detector coordinate frame are shown in FIG. <ref>.
It can be seen that directions perpendicular to the constellation plane have the highest response, implying that, for the same configuration, the orientation of the detector affects its detection capability.
Besides, the noise of the detector is another element that influences detection ability.
In this paper, we focus solely on the impact of instrument noise composed of acceleration noise and displacement noise when subtracting foreground confusion.
Therefore, an analytical model of the detector's sensitivity curve S_n(f) can be constructed from the sky average response function and instrument noise.
For LISA <cit.> and Taiji <cit.>, the sensitivity curve can be expressed as follows:
S_n(f) =10/(3L^2)[P_dp+2(1+cos^2(f/f_*))P_acc/(2π f)^4]
×[1+0.6(f/f_*)^2]
with
P_dp =S_x[1+(2mHz/f)^4]
P_acc =S_a[1+(0.4mHz/f)^2][1+(f/8mHz)^4]
For TianQin <cit.>, the sensitivity curve can be written in the form of:
S_n(f) =1/L^2[4S_a/(2π f)^4(1+0.4mHz/f)+S_x]
×[1+0.6(f/f_*)^2]
where f_*=c/(2π L) is the transfer frequency, c is the speed of light, L is the arm length, S_a is acceleration noise and S_x is displacement measurement noise, all of which are given in TABLE <ref>.
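For reference, the two analytic sensitivity models can be implemented directly; the noise levels S_a and S_x used in the example calls below are nominal values commonly quoted for these missions and stand in for the entries of TABLE <ref>, which is not reproduced here, so treat them as assumptions.

import numpy as np

C = 299792458.0

def sn_lisa_taiji(f, L, S_a, S_x):
    # LISA/Taiji sky-averaged instrument-noise sensitivity, Eq. above
    fstar = C / (2.0 * np.pi * L)
    P_dp = S_x * (1.0 + (2e-3 / f)**4)
    P_acc = S_a * (1.0 + (4e-4 / f)**2) * (1.0 + (f / 8e-3)**4)
    return (10.0 / (3.0 * L**2)
            * (P_dp + 2.0 * (1.0 + np.cos(f / fstar)**2) * P_acc / (2.0 * np.pi * f)**4)
            * (1.0 + 0.6 * (f / fstar)**2))

def sn_tianqin(f, L, S_a, S_x):
    # TianQin instrument-noise sensitivity, Eq. above
    fstar = C / (2.0 * np.pi * L)
    return (1.0 / L**2
            * (4.0 * S_a / (2.0 * np.pi * f)**4 * (1.0 + 4e-4 / f) + S_x)
            * (1.0 + 0.6 * (f / fstar)**2))

f = np.logspace(-4, -1, 500)                                    # Hz
Sn_lisa = sn_lisa_taiji(f, 2.5e9, (3e-15)**2, (1.5e-11)**2)     # assumed nominal noise levels
Sn_tq = sn_tianqin(f, np.sqrt(3.0) * 1e8, (1e-15)**2, (1e-12)**2)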
§.§ Alternative orbital configurations
LISA, Taiji, and TianQin are all scheduled to launch a triangular constellation composed of three S/C.
The difference is that LISA and Taiji apply heliocentric orbits, whereas TianQin applies geocentric orbits.
There are multiple orbital configurations to be chosen, as detailed in FIG. <ref>, FIG. <ref> and TABLE <ref>.
LISA includes three S/C forming a 2.5×10^6 km triangle trailing the Earth by 20^∘ on the Heliocentric orbit and the constellation plane has a 60^∘ inclination to the Ecliptic plane as shown in FIG. <ref> and FIG. <ref>.
Meanwhile, Taiji expects to use a LISA-like orbital configuration with a 3×10^6 km arm length and three different orbital configuration options available <cit.>.
The first configuration is called Taiji-p, which has the same inclination angle as LISA but is 20^∘ ahead of Earth. The second configuration is exactly the same as LISA, called Taiji-c. These two configurations are shown on the right side of FIG. <ref>. The third configuration named Taiji-m has an inclination of -60^∘ to the Ecliptic plane and a leading angle of 20^∘ to the Earth, as shown on the left side of FIG. <ref>.
Unlike LISA and Taiji, TianQin uses a Geocentric orbit with a √(3)×10^5 km arm length, hence the normal direction of the constellation plane will remain unchanged, pointing in the same direction. <cit.>.
The two orbital configurations of TianQin are the different orientations of the normal directions of the constellation plane.
The normal direction of TianQin I points towards the tentative reference source RX J0806.3+1527 (i.e., towards λ = 120.4^∘, β = -4.7^∘), while the normal direction of TianQin II points perpendicular to it (towards λ = 30.4^∘, β = 0^∘), as shown in FIG. <ref> and FIG. <ref>.
The observation time varies with the orbital configuration.
LISA and Taiji both follow year-round observation schemes, and only one of Taiji's three alternative orbital configurations would be operated at a time.
Different from the former, TianQin follows the “three months on + three months off” observation scheme, and TianQin I and TianQin II can operate simultaneously to fill the data gaps of each other <cit.>, which will be considered in the subtraction methodology in Sec. <ref>.
§ METHODOLOGY
§.§ Data analysis
The SNR ρ of a GB source, which plays an important role in identifying resolvable sources, can be defined as:
ρ^2=(h|h)
where the inner product (·|·) is a generalisation of the time-domain correlation product and is conventionally defined as <cit.>:
(a|b) =4∫_0^∞df ã^*(f)b̃(f)/S_n(f)
≃2/S_n(f_0)∫_0^T_obsdt a(t)b(t)
where ã(f) and b̃(f) are the Fourier transformations of a(t) and b(t), S_n(f) is the sensitivity curve defined by Eq. <ref> and Eq. <ref>, T_obs is the observation duration.
Note that the second line of Eq. <ref> only holds when calculating a quasi-sinusoidal signal (quasi-monochromatic source) that has an almost constant noise PSD, and it can be seen that the SNR increases as the observation duration increases.
A quasi-sinusoidal signal like a GB can be represented in the spectrum by a Dirac delta function, i.e., the signal appears as a single point with a given amplitude. Therefore, the SNR of a GB in Eq. <ref> can be roughly estimated as follows, obtained by evaluating the SNR integral <cit.>:
ρ^2 =16/5𝒜^2T_obs/S_n(f_0)
where 𝒜 is the GW strain amplitude.
Eq. <ref> allows the SNR to be calculated much more quickly than Eq. <ref>, and in the processing steps of Sec. <ref> we use Eq. <ref> to quickly calculate and filter the optimal resolvable GBs.
Usually, a GB with an SNR greater than 7 (ρ>7) is defined as a resolvable GB <cit.>, and we can analyze the uncertainties of the resolvable GBs using the Fisher information matrix (FIM), which is defined as:
Γ_ij=(∂ h/∂ξ_i|∂ h/∂ξ_j)
where ξ_i represents a parameter of the GB. For high SNR signals (ρ≫ 1), the variance-covariance matrix is obtained from the inverse of the FIM, Σ=Γ^-1, where the diagonal elements represent the variance (or mean squared error) of each parameter, and the off-diagonal elements represent the covariance (or correlation) between the parameters <cit.>.
Therefore, the uncertainty of each parameter can be written as:
Δξ_i=√(Σ_ii)
Compared to the uncertainty of coordinates, the uncertainty of sky position is more commonly used, which can be obtained by combining the uncertainty of both coordinates <cit.>:
ΔΩ=2π|sinβ|√(Σ_ββΣ_λλ-Σ_βλ^2)
When calculating the FIM in Eq. <ref>, we use the following numerical differentiation approximation <cit.>:
∂ h/∂ξ_i≈h(t,ξ_i+δξ_i)-h(t,ξ_i-δξ_i)/2δξ_i
When considering network detection by multiple independent detectors, the total SNR and FIM can be obtained by the sum of the inner products calculated by each detector, which can be written as <cit.>:
ρ_net^2=∑_kρ_k^2=∑_k(h_k|h_k)
Γ_net=∑_kΓ_k=∑_k(∂ h_k/∂ξ_i|∂ h_k/∂ξ_j)
where k represents different independent detectors.
From Eq. <ref>, the sensitivity in the network can be obtained, whose reciprocal is the sum of the reciprocal sensitivities of each detector, which can be expressed as follows:
S_net^-1=∑_kS_k^-1
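The quantities above translate directly into code. The sketch below is a simplified illustration: `waveform` is assumed to be a user-supplied function returning the detector response h(t) for a given parameter list, and the inner product uses the quasi-monochromatic approximation of Eq. <ref>.

import numpy as np

def snr_monochromatic(A, T_obs, Sn_f0):
    # rho^2 = (16/5) A^2 T_obs / S_n(f0)
    return np.sqrt(16.0 / 5.0 * A**2 * T_obs / Sn_f0)

def fisher_matrix(waveform, t, params, deltas, Sn_f0):
    # (a|b) ~ (2 / S_n(f0)) * int a(t) b(t) dt for a quasi-monochromatic source
    dt = t[1] - t[0]
    derivs = []
    for i, d in enumerate(deltas):
        p_plus, p_minus = list(params), list(params)
        p_plus[i] += d
        p_minus[i] -= d
        derivs.append((waveform(t, p_plus) - waveform(t, p_minus)) / (2.0 * d))
    n = len(params)
    return np.array([[2.0 / Sn_f0 * np.sum(derivs[i] * derivs[j]) * dt
                      for j in range(n)] for i in range(n)])

def network_uncertainties(gammas):
    # Gamma_net = sum_k Gamma_k; uncertainties from the diagonal of its inverse
    cov = np.linalg.inv(np.sum(gammas, axis=0))
    return np.sqrt(np.diag(cov))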
§.§ Subtraction of the foreground confusion
For population simulation of GBs, we used the population datasets from the first “new” LISA Data Challenge (LDC), codenamed Sangria, which contains approximately 30 million GB sources in the milli-Hertz band <cit.>.
For the convenience of data processing, we select 1% of the GBs in the dataset (3×10^5 GBs) and multiply them by a factor so that the generated galactic foreground reaches the same amplitude level as the actual population.
The 3×10^5 GBs are sufficient to reproduce the parameter distribution of the 3×10^7 GBs in the full dataset, and the number of resolvable GBs should correspondingly be about 1% of that in the full dataset.
Notice that although this multiplication was performed when generating the galactic foreground, which increases the amplitude of each individual signal, the smoothed spectrum used in subsequent processing attains the same amplitude level as the full population without affecting the calculated SNR.
The basic steps for subtracting foreground confusion are shown in FIG. <ref>, which can be summarized as follows<cit.>:
* Simulate the superposition h(t) of 3×10^5 GBs in the time domain and then calculate the power spectrum density (PSD) of the galactic foreground. Run the median on the PSD to estimate the foreground confusion S_c(f).
* Roughly calculate the optimal SNR ρ under the sensitivity curve of instrument noise S_n(f) using Eq. <ref>, and consider GBs with an optimal SNR greater than 3 (ρ>3) as optimal resolvable GBs, which can quickly filter out 99.6% of unresolved GBs.
* For the ith optimal resolvable GB, the sensitivity curve is formed by adding instrument noise and foreground confusion (S_n(f)+S_c(f)), and the SNR ρ_i is calculated using Eq. <ref> and Eq. <ref>. If the SNR is less than 7 (ρ_i<7), skip and repeat the method to calculate the SNR of the (i+1)th optimal resolvable GB. If the SNR is greater than 7 (ρ_i≥7), the GB is resolvable, and then continue with the next subtraction step.
* Subtract the ith GB signal in the time domain (h(t)-h_i(t)) and use the method in Step <ref> to re-estimate the subtracted galactic confusion. Repeat Steps <ref> and <ref>, continuously subtracting resolvable GBs and re-estimating galactic confusion until all optimal resolvable GBs are calculated.
* Repeat Steps <ref>, <ref> and <ref> in the remaining optimal resolvable GBs until the subtracted GB is 0, indicating the galactic confusion composed of unresolved GBs.
* Recalculate the SNR and FIM of the resolvable GBs using the final subtracted galactic confusion.
In the above steps, it is assumed that the resolvable GB can be subtracted perfectly without residual error, which will not be achievable in practice, and the subtraction error should be considered <cit.>.
When generating the time-domain galactic foreground signal, we set t=0 to the moment when the Earth is at the Vernal Equinox, and carry out simulated observations of different durations (T_obs={0.5,1,2,4} years) to subtract the galactic confusion using the above basic steps. Considering observation on the networks, we use the method of Eq. <ref> to calculate the SNR and FIM, and obtain the results for different networks.
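A stripped-down version of this iterative procedure is sketched below. The helpers Sn_inst(f) and confusion_psd(residual, f) are assumed to return the instrument-noise sensitivity and the running-median confusion estimate at frequency f, and `signals` holds the time-domain contribution of each candidate GB; the actual pipeline additionally handles the detector response and observation gaps.

import numpy as np

def subtract_resolvable(signals, amps, f0s, T_obs, Sn_inst, confusion_psd,
                        snr_prefilter=3.0, snr_resolve=7.0):
    snr2 = lambda A, S: 16.0 / 5.0 * A**2 * T_obs / S
    residual = signals.sum(axis=0)
    # Step 2: quick pre-filter with the instrument-noise-only SNR
    candidates = [k for k in range(len(signals))
                  if snr2(amps[k], Sn_inst(f0s[k])) > snr_prefilter**2]
    resolvable, changed = [], True
    while changed:                                        # Step 5: iterate to convergence
        changed = False
        for k in candidates:
            if k in resolvable:
                continue
            S_tot = Sn_inst(f0s[k]) + confusion_psd(residual, f0s[k])
            if snr2(amps[k], S_tot) >= snr_resolve**2:    # Steps 3-4: subtract, re-estimate
                residual = residual - signals[k]
                resolvable.append(k)
                changed = True
    return resolvable, residual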
§ RESULTS
§.§ Resolvable GBs
Using the method in Sec. <ref>, we simulated and calculated the number of resolvable GBs detected on different detectors and their networks at different observation times, as shown in FIG. <ref>.
As expected from Eq. <ref> and Eq. <ref>, FIG. <ref> illustrates that the number of resolvable GBs increases as the observation time increases.
Given the observation time, for a single detector, the number of resolvable GBs detected in descending order is: Taiji-m, Taiji-p (c), LISA, TianQin I, and TianQin II, mainly due to the arm length and orientation of the detector.
In terms of arm length, it can be seen from Eq. <ref> and Eq. <ref> that a longer detector arm length results in better sensitivity.
Moreover, from TABLE <ref>, it can be seen that Taiji's arm length (3×10^9 m) is the longest, followed by LISA's arm length (2.5×10^9 m), and TianQin's arm length (√(3)×10^8 m) is the shortest, making Taiji detect more resolvable GBs than LISA and TianQin.
In terms of orientation, FIG. <ref> shows that the detector is most sensitive to signals perpendicular to the constellation plane position (θ_d=0^∘ or 180^∘).
The density of GBs in the bulge region of the Galaxy is significantly higher than that in the disk region <cit.>, therefore the closer the normal direction of the detector constellation plane is to the Galactic Center (λ = 266.8^∘,β = -5.6^∘), the greater the detector response and the more resolvable GBs can be detected.
From FIG. <ref>, it can be seen that the normal direction of Taiji-m (β = -30^∘ and β = 60^∘) is closer to the Galactic Center compared to Taiji-p (c) (β = 30^∘ and β = -60^∘) over a year, and the normal direction of TianQin I (λ = 120.4^∘,β = -4.7^∘ and λ = 300.4^∘,β = 4.7^∘) is also closer to the Galactic Center than TianQin II (λ = 30.4^∘,β = 0^∘ and λ = 210.4^∘,β = 0^∘). Therefore, Taiji-m detects more resolvable GBs than Taiji-p (c), and TianQin I detects more than TianQin II.
For detection on networks, just like the result in a single detector, the arm length and orientation of the detector are the major factors in resolvable GBs detection.
Because Taiji and LISA have longer arm lengths than TianQin, networks that combine Taiji and LISA detect more resolvable GBs than Taiji or LISA individually, whereas adding TianQin to a network yields a comparatively small improvement.
Eq. <ref> indicates that the reciprocal sensitivity on the network is the sum of the reciprocal sensitivities of each detector. Therefore, as the number of detectors in the network increases, the sensitivity of the network increases, but the increase rate decreases.
In summary, it can be concluded that as the number of detectors on the network increases, the number of resolvable GBs detected will also increase.
The optimal result will be achieved when LISA, Taiji-m, TianQin I and TianQin II are combined as a network.
§.§ Improvement of sensitivity
In order to better show the impact of foreground confusion on the sensitivity curve, and its subtraction by different numbers of detectors on the network, we fit the foreground confusion on a logarithmic scale with a polynomial function, which can be written as follows <cit.>:
S_c(f) = 10^x
with
x = ∑_n = 0^5a_n[ log_10( f/1 mHz) ] ^n
This fitting is only applicable to the frequency range of 0.1∼6 mHz, and the fitting parameters a_n are listed in TABLE <ref>.
Different choices of fitting function can affect the final curve; therefore, the fitting parameters given in TABLE <ref> and the curves drawn in FIG. <ref> are provided only as a reference. In the calculations above, we estimate the foreground confusion using a running median on the PSD.
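The fit itself is a standard polynomial regression in log-log space; a minimal version (taking the already-estimated confusion S_c at a set of frequencies as input) is:

import numpy as np

def fit_confusion(freqs, Sc, order=5):
    # fit log10(S_c) as a polynomial in log10(f / 1 mHz); valid for 0.1-6 mHz
    u = np.log10(freqs / 1e-3)
    return np.polyfit(u, np.log10(Sc), order)[::-1]       # returns a_0, ..., a_5

def eval_confusion(freqs, a):
    u = np.log10(freqs / 1e-3)
    return 10.0**sum(a_n * u**n for n, a_n in enumerate(a))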
In FIG. <ref>(a), we plotted the sensitivity curve of a single detector, and it can be seen that in the part where the foreground confusion affects, the sensitivity curve generated by instrument noise is better in Taiji than in LISA than in TianQin.
In the range of 0.8∼1.5 mHz, the full sensitivity curves of LISA and Taiji are almost identical, because the larger response of Taiji to resolvable GBs results in greater foreground confusion.
In the range of 1.5∼3.5 mHz, the full sensitivity of Taiji-m is superior to that of Taiji-p (c), as Taiji-m can detect more resolvable GBs than Taiji-p (c), resulting in lower subtracted foreground confusion.
In the 2∼6 mHz range, the full sensitivity of TianQin I is slightly lower than that of TianQin II, which is also because TianQin I has a greater response to resolvable GBs.
In FIG. <ref>(b), we show the sensitivity curves of different numbers of detectors on the network.
It can be seen that as the number of detectors on the network increases, the sensitivity curve of instrument noise decreases.
Moreover, because in this range, the sensitivity of TianQin is much lower than that of LISA and Taiji, the sensitivity curve of instrument noise only slightly changes after adding TianQin to the network.
As the number of detectors on the network increases, more resolvable GBs are subtracted and the remaining foreground confusion becomes smaller, which is sufficient to demonstrate the advantage of network detection for subtracting foreground confusion.
§.§ SNR and uncertainty
In addition to the number of resolvable GBs detected and the sensitivity curve containing foreground confusion, the uncertainty of the parameters of resolvable GBs is also crucial. Therefore, we calculated the FIM on different networks (choosing TJm, LISA+TJm, LISA+TJm+TQI and LISA+TJm+TQI+II, as these detect the most resolvable GBs with 1, 2, 3, and 4 detectors, respectively) using Eq. <ref> ∼ Eq. <ref> to obtain the uncertainty of the different parameters, as shown in FIG. <ref>.
The right side of FIG. <ref> shows the results for the resolvable GBs detected by Taiji-m alone: it can be clearly seen that as the number of detectors on the network increases, the SNR increases while the parameter uncertainties decrease.
This is due to the sensitivity improvement for the increased number of detectors on the network.
Similar to the increase rate in the number of resolvable GBs described in Sec. <ref>, the magnitude of changes in SNR and uncertainty will decrease as the number of detectors on the network increases.
Increasing from one detector to two has a significant effect, but increasing from two to three is relatively less significant.
Unlike the above situation, in actual detection, the resolvable GB detected by different detector combinations is different.
From the result on the left side of FIG. <ref>, it can be seen that the changes in SNR and uncertainty of resolvable GBs detected on different networks are not as significant as those of the same resolvable GBs.
Apart from decreases in the uncertainties of the GW strain amplitude, frequency, and sky position, the remaining parameters show almost no significant change, and some uncertainties even increase rather than decrease.
For example, the initial phase and polarization angle show a slight increase when the number of detectors on the network increases from three to four.
This is because as the number of detectors on the network increases, the sensitivity improves, making many unresolved GBs become resolvable GBs, adding more low-SNR resolvable GBs.
Therefore, it is possible that as the number of detectors on the network increases, uncertainty increases instead of decreasing, and SNR decreases instead of increasing.
Nonetheless, as the number of detectors on the network increases, the SNR of the same resolvable GBs increases, and uncertainty decreases. Moreover, after adding more low-SNR resolvable GBs, the overall SNR remains almost unchanged, with some uncertainties significantly decreasing and others slightly increasing, which is sufficient to demonstrate the positive impact of increasing the number of detectors on the network.
Beyond this, the GW detection of resolvable GBs is also helpful for detection in the EM bands, forming the basis of multi-messenger astronomy <cit.>.
The more accurately the parameters of resolvable GBs are determined by GW detection, i.e., the lower their uncertainties, the more useful they are for EM detection.
If the sky position of the source is sufficiently accurate, it is possible to search for EM counterparts through EM follow-up observations.
Among all resolvable GBs, the uncertainty of the sky position is less than 1 deg^2 (ΔΩ<1 deg^2) for 30.2∼31.6% of resolvable GBs, and less than 0.1 deg^2 (ΔΩ<0.1 deg^2) for 9.6∼10.3%.
It can be seen from the data in FIG. <ref> that, among all parameters, the frequency of resolvable GBs is measured most accurately: for 29.2∼32.3% of resolvable GBs the relative uncertainty satisfies Δ f_0/f_0<1×10^-6. Since the GW frequency f_0 is directly related to the period T_p of a resolvable GB (f_0=2/T_p), the period can likewise be measured accurately.
Note that as the number of detectors on the network increases, the proportion of the above items will also increase.
Conversely, the results of EM detection can also serve as a prior to reduce the uncertainty of GW detection.
We adopt the method in Ref. <cit.>, which can be used to reduce the uncertainty of parameters from GW data by removing the respective rows and columns in the FIM.
By observing GBs, the inclination angle ι can be independently determined by EM detection, and we assume that the inclination angle of resolvable GBs can be completely determined.
By calculating the uncertainty of the other parameters with the reduced FIM, we found that only the uncertainty on Δ𝒜 /𝒜 changes significantly, with the mean uncertainty decreasing by 91.9∼93.5% and the median uncertainty decreasing by 60.8∼61.9%.
From Eq. <ref>, there is degeneracy between GW strain amplitude 𝒜 and inclination angle ι, which is why determining the inclination angle can significantly improve the measurement of amplitude.
Using the same method, we assume that the EM counterparts can be found through EM detection, that is, the sky position (λ,β) is completely determined. Therefore, the mean uncertainty on ϕ_0 is reduced by 25.8∼33.6%, the median uncertainty is reduced by 25.1∼26.9%, and other parameters will have a decrease of 2∼9%.
Notice that the above situations are highly idealized, as they assume that a given parameter of all resolvable GBs is completely determined, which cannot be achieved in practice. Even so, they indicate that reducing the parameter uncertainty of GW detection through EM detection is feasible.
In summary, GW detection and EM detection can complement each other, and as the number of detectors on the network increases, the improvement of both will be greater.
§ SUMMARY AND DISCUSSION
In this paper, we used 1% of the data in LDC, which is 3×10^5 GBs, to simulate the galactic foreground by overlapping GBs as quasi-sinusoidal signals. We treated GB with the SNR greater than 7 as resolvable GBs, studied the number of detected resolvable GBs under different detector combinations and their alternative orbital configurations on the network, calculated the parameter uncertainties of resolvable GBs, and plotted the fitted full sensitivity curve.
Through the iterative method, we predict the number of resolvable GBs detected by different detector combinations on the network.
Among single detectors, the number of detected resolvable GBs in descending order is: Taiji-m, Taiji-p (c), LISA, TianQin I, and TianQin II.
The trend for different detector combinations on the network is similar to that for a single detector.
The optimal combination for each number on the network is TJm, LISA+TJm, LISA+TJm+TQI, and LISA+TJm+TQI+II.
Based on the above optimal combinations, we calculate the uncertainty of the parameters of resolvable GBs using FIM.
As the number of detectors on the network increased, the uncertainty of the same resolvable GBs decreased, and the magnitude of the decrease also decreased.
The uncertainty remained reduced or almost unchanged even when more low-SNR resolvable GBs were detected.
Resolvable GBs with low uncertainty can help EM detection find electromagnetic counterparts and determine the period of GBs, while EM detection can also serve as a prior to reducing the uncertainty of GW detection.
We find that determining the inclination angle through EM detection can reduce GW strain amplitude uncertainty by ∼93%, and determining the sky position can reduce the phase uncertainty by ∼30%.
Therefore, GW joint detection on the network can complement EM detection, which is conducive to the development of Multi-messenger astronomy.
By fitting the full sensitivity curve containing foreground confusion, it is possible to intuitively see the effect of a single detector and different combinations of detectors on the network on subtracting foreground confusion.
The effect of subtracting foreground confusion is basically proportional to the number of resolvable GBs detected. The more detectors in the network, the better the subtracting effect.
In addition, it should be noted that so far, no space-based GW detector has been launched, so the data related to the space GW detector are simulated and predicted.
In fact, during the observation, the noise is assumed to be Gaussian and stationary, and the data quality is assumed to be optimal and uninterrupted <cit.>.
We use an SNR threshold to distinguish resolvable GBs, which is a useful and efficient way to estimate the foreground confusion.
Moreover, we assume that the subtraction of GBs is perfect without residual, which leads to our results being optimal and ideal.
Some new and more practical methods have been proposed, such as iterative subtraction based on Particle swarm optimization algorithm <cit.>, search and subtraction using Bayesian evidence ratio <cit.>.
In future research, we can delve into multiple aspects to improve our understanding and accuracy of foreground confusion.
Firstly, we can further investigate the relationship between GW detection and EM detection, exploring how to better combine GW detectors and EM detectors to enhance observation and understanding of GBs <cit.>.
Secondly, we can delve deeper into the impact of time-delay interferometry (TDI) technology on foreground confusion, as well as the subtraction of foreground confusion by different generations of TDI and different channels <cit.>.
In addition, we can also consider the impact of different population models on foreground confusion to better understand the population distribution and evolution theory of GBs.
Finally, we can also consider the impact of foreground confusion on other GW sources to better evaluate the sensitivity and accuracy of GW detection, and use foreground noise to improve the data processing and analysis methods.
In conclusion, through in-depth research on the above aspects, we can further improve our understanding and accuracy of GW detection, so as to better explore the essence and evolution history of astrophysical events, and provide more valuable data and information for research in Cosmology, Astrophysics and other fields.
§ COORDINATE TRANSFORMATION
The transformation between detector coordinates (ϕ_d,θ_d) and Ecliptic coordinates (λ,β) is based on the method described in Ref. <cit.>, and the situation in both coordinate frames is shown in FIG. <ref>.
We can use a rotation matrix R to connect detector coordinates X^d={sinθ_d cosϕ_d,sinθ_d sinϕ_d,cosθ_d} and Ecliptic coordinates X^e={cosβcosλ,cosβsinλ,sinβ}, which can be expressed as:
X^e =RX^d
X^d =R^-1X^e
For LISA and Taiji:
R=
([ cosθ_l cos^2α_d+sin^2α_d,   (cosθ_l-1) sinα_d cosα_d,   -sinθ_l cosα_d;
   (cosθ_l-1) sinα_d cosα_d,   cosθ_l sin^2α_d+cos^2α_d,   -sinθ_l sinα_d;
   sinθ_l cosα_d,              sinθ_l sinα_d,               cosθ_l ])
For TianQin:
R=
([ cosθ_tq cosϕ_tq sinα_d+sinϕ_tq cosα_d,   cosθ_tq cosϕ_tq cosα_d-sinϕ_tq sinα_d,   sinθ_tq cosϕ_tq;
   cosθ_tq sinϕ_tq sinα_d-cosϕ_tq cosα_d,   cosθ_tq sinϕ_tq cosα_d+cosϕ_tq sinα_d,   sinθ_tq sinϕ_tq;
   -sinθ_tq sinα_d,                          -sinθ_tq cosα_d,                          cosθ_tq ])
where α_d=2π f_sct+2π/3(n-1)+α_0, n is the nth S/C, α_0 is the initial phase, f_sc=1/T_sc and T_sc is the rotation period.
For TianQin, T_sc=3.65 days and f_sc≃3×10^-3 mHz, while for LISA and Taiji, T_sc=1 year and f_sc≃3×10^-5 mHz.
The angles in the rotation matrix R can be determined from FIG. <ref>.
For LISA, Taiji-p and Taiji-c, θ_l=60^∘ and for Taiji-m, θ_l=120^∘.
For TianQin I, θ_tq=94.7^∘,ϕ_tq=120.4^∘ and for TianQin II, θ_tq=90^∘,ϕ_tq=30.4^∘.
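These transformations are easy to script; the sketch below builds the LISA/Taiji-type rotation matrix of Eq. <ref> and maps a detector-frame direction to Ecliptic coordinates (the TianQin matrix can be coded analogously), with the example angles chosen purely for illustration.

import numpy as np

def rotation_lisa_taiji(theta_l, alpha_d):
    c, s = np.cos(theta_l), np.sin(theta_l)
    ca, sa = np.cos(alpha_d), np.sin(alpha_d)
    return np.array([[c * ca**2 + sa**2, (c - 1.0) * sa * ca, -s * ca],
                     [(c - 1.0) * sa * ca, c * sa**2 + ca**2, -s * sa],
                     [s * ca, s * sa, c]])

def detector_to_ecliptic(theta_d, phi_d, R):
    # X^e = R X^d for unit direction vectors; returns (lambda, beta) in radians
    xd = np.array([np.sin(theta_d) * np.cos(phi_d),
                   np.sin(theta_d) * np.sin(phi_d),
                   np.cos(theta_d)])
    xe = R @ xd
    beta = np.arcsin(np.clip(xe[2], -1.0, 1.0))
    lam = np.arctan2(xe[1], xe[0]) % (2.0 * np.pi)
    return lam, beta

R = rotation_lisa_taiji(np.deg2rad(60.0), 0.0)     # e.g. LISA/Taiji-p with alpha_d = 0
lam, beta = detector_to_ecliptic(0.3, 1.2, R)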
|
http://arxiv.org/abs/2307.06244v1 | 20230712153439 | Diffusion Based Multi-Agent Adversarial Tracking | [
"Sean Ye",
"Manisha Natarajan",
"Zixuan Wu",
"Matthew Gombolay"
] | cs.RO | [
"cs.RO",
"cs.LG",
"cs.MA"
] |
Diffusion Based Multi-Agent Adversarial Tracking
Sean Ye^1, Manisha Natarajan^1, Zixuan Wu^1, and Matthew C. Gombolay^1
*This work was supported in part by the Office of Naval Research (ONR) under grant numbers N00014-19-1-2076, N00014-22-1-2834, and N00173-21-1-G009, the National Science Foundation under grant CNS-2219755, and MIT Lincoln Laboratory grant number 7000437192.
^1All authors are associated with the Institute of Robotics and Intelligent Machines (IRIM), Georgia Institute of Technology, Atlanta, GA 30308, USA.
August 12, 2023
=====================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Target tracking plays a crucial role in real-world scenarios, particularly in drug-trafficking interdiction, where the knowledge of an adversarial target's location is often limited. Improving autonomous tracking systems will enable unmanned aerial, surface, and underwater vehicles to better assist in interdicting smugglers that use manned surface, semi-submersible, and aerial vessels. As unmanned drones proliferate, accurate autonomous target estimation is even more crucial for security and safety. This paper presents Constrained Agent-based Diffusion for ENhanCEd Multi-Agent Tracking (CADENCE), an approach aimed at generating comprehensive predictions of adversary locations by leveraging past sparse state information. To assess the effectiveness of this approach, we evaluate predictions on single-target and multi-target pursuit environments, employing Monte-Carlo sampling of the diffusion model to estimate the probability associated with each generated trajectory. We propose a novel cross-attention based diffusion model that utilizes constraint-based sampling to generate multimodal track hypotheses. Our single-target model surpasses the performance of all baseline methods on Average Displacement Error (ADE) for predictions across all time horizons.
§ INTRODUCTION
Unmanned aerial vehicles (UAVs) are extensively used in military and civilian applications, such as surveillance, search and rescue, anti-smuggling operations, wildlife tracking, and urban traffic monitoring <cit.>. These missions often involve tracking dynamic targets in large-scale environments, where predicting a target's current and future states is essential and internal states are not fully observable. However, tracking targets in complex environments, especially adversarial ones, presents significant challenges, including sparse observations and multiple possible future states for adversaries. As UAVs continue to advance and showcase their capabilities in diverse fields, refining methods for tracking dynamic targets is increasingly important.
While most works focus on single-target tracking in partially observable environments <cit.>, the challenges becomes significantly more complex in multi-target tracking <cit.>. One significant challenge is the need to maintain distinct hypotheses for each target while correctly associating detections or observations with the different targets. Furthermore, maintaining multiple hypotheses of various tracks in multi-target settings involves handling track fragmentation (track splitting and merging) when targets interact with one another.
Earlier methods for target tracking encompass model-based approaches like Particle Filters <cit.> and Kalman Filters <cit.>. However, these methods fail in sparse environments, where the vast majority of the time we do not receive any information on the target location. Model-free data-driven approaches <cit.>, can often outperform model-based approaches by estimating the behavior of the target with prior behavioral data rather than relying on expert-defined models. One model-free approach has shown promising results on this challenging task <cit.> by maximizing mutual information to regulate the components of a Gaussian Mixture Model. However, this model was limited to predicting a single time horizon and tracking a single target. Additionally the prior model is limited to a parametric formulation for multimodal hypotheses by using a mixture of Gaussians.
In this work, we address single and multi-target tracking in large-scale pursuit environments using diffusion probabilistic models. Inspired by the recent success of diffusion models for trajectory generation in robotics <cit.>, we propose a novel approach for target track reconstruction under partial observability.
We design a novel approach named Constrained Agent-based Diffusion for ENhanCEd Multi-Agent Tracking (CADENCE) that employs cross-attention to enable information exchange across different agents. A key benefit of diffusion models is their non-parametric formulation for generating multimodal hypotheses, as compared to prior work. Additionally, we take inspiration from the computer vision community and adapt the classifier-guided sampling formulation to steer the trajectory generation process to adhere to motion-model and environmental constraints.
Contributions: Our key contributions are:
* First, we propose to track multiple adversaries, utilizing a cross-attention based diffusion architecture that implicitly conducts target track assignment between the agents.
* Second, we propose a constraint-guided sampling process for our diffusion models to ensure that state transition functions and obstacle constraints are satisfied in the track generation process, reducing collisions with obstacles by 90% compared to models without.
* Finally, we apply our diffusion models to generate track predictions for a single target, surpassing the performance of previous state-of-the-art (SOTA) models by an average of 9.2% in terms of Average Displacement Error. We additionally set a new baseline on the challenging task of multi-target tracking in a large, partially observable domain.
§ RELATED WORKS
§.§ Diffusion Models
Deep diffusion models are a new class of generative models which model complex data distributions and have exploded in popularity within the computer vision community <cit.>. As these models have shown promise on learning within high dimensional data manifolds, other research areas have begun to apply diffusion models as powerful generative and conditional generative models. In robotics, a key work by Janner et al. <cit.> shows that diffusion models can be used to generate plausible paths for planning. Diffusion policy <cit.> extends this to work to imitation learning by diffusing the action distributions to accomplish various pushing tasks. Recent contemporary work by Zhu et al. has also used cross-attention within diffusion models <cit.> to generate multi-agent tracks. However, their work assumes full observability which is not available in the adversarial tracking domain. We also take inspiration from image inpainting (reconstructing missing parts of an image) <cit.> to condition the diffusion sampling process on detections for producing better target track predictions. To the best of our knowledge, we are the first to utilize diffusion models for multi-target tracking under partial observability.
§.§ Target Tracking
Target tracking involves estimating the positions of one or more targets using sensor data <cit.> and has various real-world applications such as surveillance <cit.>, sports analysis <cit.>, and traffic management <cit.>. Traditional approaches, like Particle Filters <cit.> and Kalman Filters <cit.>, dominate target tracking but require accurate knowledge or estimation of the target's dynamics model. However, recent advancements in model-free object tracking with images have emerged <cit.>. Our work differs from prior work in computer vision as we rely on sparse observations or detected locations instead of images to predict future target trajectories.
Adversarial tracking involves targeting an intelligent opponent trying to evade trackers <cit.>. Previous works assume access to target states/observations for training predictive models <cit.>. However, this assumption is unrealistic in large environments due to non-cooperative adversaries and a limited field of view. Prior work <cit.> introduced GrAMMI, which predicts dynamic target locations using partial observations from a team of trackers. However, it only focused on single target tracking and was unable to generate predictions for multiple time horizons. Our current work addresses these limitations by utilizing diffusion models to generate trajectories up to any time horizon and extending them to multi-target tracking.
§ BACKGROUND
§.§ Partially Observable Markov Game
We define adversarial tracking as a Partially Observable Markov Game (POMG), which consists of a set of states 𝒮, a set of private agent observations 𝒪_1, 𝒪_2, …, 𝒪_M, a set of actions 𝒜_1, 𝒜_2, …, 𝒜_M, and a transition function 𝒯: 𝒮×𝒜_1 ×…×𝒜_M ↦𝒮 for M-agents. At each time step t, agents receive an observation O_i^t∈𝒪_i, choose an action a_i^t ∈𝒜_i, and receive a reward r_i^t based on the reward function R: 𝒮×𝒜_i ↦ℝ. The initial state is drawn from an initial state distribution ρ.
We simplify the observation formulation of all agents to produce a single array of detections for all targets, denoted as {d}_1...t. We denote the trajectory (τ) of adversary states as τ_n ∀ n ∈ N, where N is the number of targets being tracked. Thus, the goal of our approach is to estimate the joint trajectory of all targets p_θ(τ_1...n | {d}_1...t).
In this work, we address multi-target and single-target tracking. For the multi-target tracking case, we ablate one assumption about the detections: 1) we assume we know the origin of a given detection, i.e., we do not have to solve the data association problem since we can identify which detection corresponds to which target; or 2) we do not assume we know the origin of a given detection, in which case the model must perform target assignment to distinguish the paths.
§.§ Diffusion Probabilistic Models
Diffusion models are a class of generative models that learn a target distribution through an iterative denoising process p_θ(x^i-1 | x^i). The model learns how to reverse the forward noising process q(x^i | x^i-1), whose increments are commonly parameterized as Gaussian noise 𝒩(0, I). Traditionally, x is used to represent images, but in our work we replace this notation with τ as we are generating trajectories.
The training process consists of a noising and a denoising process. We utilize the Denoising Diffusion Probabilistic Model (DDPM) formulation <cit.> and create noisy trajectories with Equation <ref>, where α̅^i is a noise-schedule coefficient that depends on the diffusion process timestep i.
q(τ^i | τ^0) := 𝒩 (τ^i ; √(α̅^i)τ^0, (1 - α̅^i) I)
We learn a denoising network ϵ_θ to predict the random noise at all denoising iterations i (Equation <ref>).
ℒ = MSE ( ϵ^i, ϵ_θ(√(α̅^i)τ^0 + √(1 - α̅^i)ϵ, i) )
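As a concrete illustration, the following PyTorch-style sketch implements the forward noising and the noise-prediction loss from the equations above; the noise schedule, tensor shapes, and the stand-in denoiser are assumptions made for the example, not our exact training configuration.

import torch

# Illustrative DDPM training step for trajectory denoising.
# `denoiser` stands in for the temporal U-net eps_theta described later;
# shapes and the schedule are assumptions for this sketch.

T_DIFF = 100                                   # number of diffusion steps
betas = torch.linspace(1e-4, 2e-2, T_DIFF)     # linear noise schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)  # \bar{alpha}^i

def training_step(denoiser, tau0):
    """tau0: clean trajectories, shape (batch, horizon, state_dim)."""
    b = tau0.shape[0]
    i = torch.randint(0, T_DIFF, (b,))                   # diffusion step per sample
    ab = alpha_bar[i].view(b, 1, 1)
    eps = torch.randn_like(tau0)                          # noise added by the forward process
    tau_i = ab.sqrt() * tau0 + (1.0 - ab).sqrt() * eps    # q(tau^i | tau^0)
    eps_hat = denoiser(tau_i, i)                          # predict the injected noise
    return torch.nn.functional.mse_loss(eps_hat, eps)     # MSE loss on the noise

# toy usage with a stand-in denoiser that ignores the timestep
toy = lambda x, i: torch.zeros_like(x)
loss = training_step(toy, torch.randn(4, 60, 2))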
Finally, with a trained denoising network ϵ_θ, we can iteratively denoise a trajectory τ from pure Gaussian noise using Equation <ref>, which is equivalent to minimizing the negative log-likelihood of the samples generated by the model distribution under the expectation of the data distribution <cit.>.
τ^i-1 = 1/√(α^i)( τ^i - 1 - α^i/√(1 - α̅^i)ϵ_θ(τ^i, i) ) + 𝒩 (0, σ^2 I)
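The corresponding unguided sampling loop can be sketched as follows, again with an assumed noise schedule and a stand-in denoiser; the constraint-guided variant introduced in the Methodology perturbs the predicted mean before each sample is drawn.

import torch

# Unguided DDPM sampling of a trajectory from pure noise.
# Uses sigma^2 = beta_i, one common choice; all values are illustrative.

T_DIFF = 100
betas = torch.linspace(1e-4, 2e-2, T_DIFF)
alphas = 1.0 - betas
alpha_bar = torch.cumprod(alphas, dim=0)

@torch.no_grad()
def sample(denoiser, shape):
    tau = torch.randn(shape)                                   # tau^T ~ N(0, I)
    for i in reversed(range(T_DIFF)):
        eps_hat = denoiser(tau, torch.full((shape[0],), i))
        mean = (tau - (1 - alphas[i]) / (1 - alpha_bar[i]).sqrt() * eps_hat) / alphas[i].sqrt()
        noise = torch.randn_like(tau) if i > 0 else torch.zeros_like(tau)
        tau = mean + betas[i].sqrt() * noise                   # tau^{i-1}
    return tau

toy = lambda x, i: torch.zeros_like(x)
paths = sample(toy, (4, 60, 2))   # 4 trajectories, 60 steps, (x, y) states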
§.§ Domains and Target Behavior
We test our models in the Prison Escape and Narco Traffic Interdiction (Smuggler) domains first described in <cit.>.
§.§.§ Narco Traffic Interdiction
This simulation involves illegal maritime drug trafficking along the Central American Pacific Coastline. A team of tracker agents, including airplanes and marine vessels, pursue a drug smuggler. Airplanes have a greater search radius and speed, while vessels can capture the smuggler. The smugglers must reach rendezvous points before heading to hideouts. The tracking team is unaware of hideout and rendezvous point locations. Episodes end when the smuggler reaches a hideout or is captured.
§.§.§ Prisoner Escape
In this scenario, a team consisting of cameras, search parties, and helicopters works together to track down an escaped prisoner, who is heading to one of several goal locations (hideouts). The game takes place on a map measuring 2428 × 2428 units, with various mountains representing obstacles. The motivation behind this domain stems from situations encountered in military surveillance and border patrol, where the objective is to track and intercept adversaries.
Goal locations for the fugitive are randomly selected without replacement from a predetermined set for each episode. The Prison Escape scenario incorporates evasive behaviors for the prisoner and introduces a fog-of-war element that limits the detection range of all agents. Notably, the tracking agents are only capable of tracking the prisoner and not capturing them, allowing for analysis of long-term predictions. The episode concludes either when the prisoner reaches a goal location or after a maximum number of timesteps.
§ METHODOLOGY
Given sparse observations of the target {d}_1...t, our diffusion model generates possible paths of the various target(s) τ_n. We describe the design of our architecture for the multi-target and single-target domains.
§.§ Model Architecture
The multi-target diffusion model consists of two key components, 1) the temporal U-net and 2) the cross-attention mechanism between parallel tracks.
§.§.§ Temporal U-net
We utilize a 1D temporal CNN-based architecture from <cit.> as our noise prediction network ϵ_θ for each target agent in the environment. This architecture predicts an entire trajectory non-autoregressively and uses temporal convolutional blocks to encode the trajectory. Within the diffusion framework, the temporal U-net takes as input a noisy trajectory and outputs a refined (less noisy) trajectory. We repeat this i times to produce a noiseless trajectory from the noisy one. For the target agents in the environment, we use one denoising pathway per agent in parallel, where each pathway denoises the trajectory for a single agent.
We utilize Feature-wise Linear Modulation (FiLM) at each convolutional layer <cit.> to condition the generative process on past detections {d}_1...t. The history of past detections {d}_1...t is encoded with an LSTM, where each detection consists of (δt, x, y), denoting the time since the detection and its location. Figure <ref> shows the full multi-agent denoising architecture.
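The sketch below illustrates one way such conditioning can be wired up: an LSTM summarizes the (δt, x, y) detection history, and a FiLM layer predicts per-channel scales and shifts for a temporal convolution block. Layer sizes and names are assumptions for illustration, not the exact architecture.

import torch
import torch.nn as nn

class DetectionEncoder(nn.Module):
    """Summarizes the history of past detections into a single embedding."""
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=3, hidden_size=hidden, batch_first=True)

    def forward(self, detections):            # (batch, n_det, 3) = (dt, x, y)
        _, (h, _) = self.lstm(detections)
        return h[-1]                          # (batch, hidden) summary embedding

class FiLMConvBlock(nn.Module):
    """1D temporal conv whose features are scaled and shifted by the condition."""
    def __init__(self, channels, cond_dim):
        super().__init__()
        self.conv = nn.Conv1d(channels, channels, kernel_size=5, padding=2)
        self.film = nn.Linear(cond_dim, 2 * channels)   # predicts (gamma, beta)

    def forward(self, x, cond):               # x: (batch, channels, horizon)
        gamma, beta = self.film(cond).chunk(2, dim=-1)
        h = self.conv(x)
        return gamma.unsqueeze(-1) * h + beta.unsqueeze(-1)

enc = DetectionEncoder()
block = FiLMConvBlock(channels=32, cond_dim=64)
cond = enc(torch.randn(4, 10, 3))             # 10 past detections per sample
out = block(torch.randn(4, 32, 60), cond)     # conditioned trajectory features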
§.§.§ Cross Attention
A key assumption in our model is that the track generation is permutation equivariant: the ordering of the track inputs does not impact the results. This is achieved by sharing parameters between the track generators and using cross-attention to communicate information from one track to the other. The cross-attention formulation is a variant of the scaled dot product attention <cit.>:
x^m' = ∑_n softmax( Q^m (K^n)^T/√(dim)) V^n
where Q ∈ℝ^N × dim, K ∈ℝ^N × dim, V ∈ℝ^N × dim are the query, key, and value matrices. N is the number of query, key, and value vectors, dim is the dimension of each vector, and m, n index the target agents. The attention module can be interpreted as a combination of both self-attention (m=n) and cross-attention across other agents (m ≠ n). Crucially, the computation can be batched, as the keys, queries, and values for each agent track only need to be computed once per agent. Finally, we adopt the multi-headed attention formalism and concatenate multiple heads to produce x^m'. We use the cross-attention embeddings by interspersing them between the convolutional blocks in the U-net.
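A compact way to realize this batched, permutation-equivariant exchange is to apply standard multi-head attention over the agent dimension with shared weights, as in the sketch below (dimensions are illustrative assumptions).

import torch
import torch.nn as nn

class TrackCrossAttention(nn.Module):
    """Information exchange across parallel agent tracks."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                     # x: (batch, agents, dim) feature tokens
        # each agent's query attends over all agents' keys/values:
        # self-attention when m == n, cross-attention when m != n
        out, _ = self.attn(x, x, x)
        return out

attn = TrackCrossAttention(dim=32)
tokens = torch.randn(8, 3, 32)                # 3 target agents
mixed = attn(tokens)                          # information exchanged across tracks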
§.§.§ Single-Target Architecture
In the special case where we are only tracking a single agent, the cross-attention module is not used. In this formulation, a single history of detections is passed through the LSTM, and the model generates a single trajectory.
§.§ Constraint-Guided Sampling
Within our domain, two dominant constraints exist: 1) the motion model of target agents and 2) obstacle (mountain) constraints. In this work, we adapt classifier-based guidance <cit.>, which was first used in image diffusion models to steer models towards certain classes. Classifier-based guidance uses a trained discriminative model to estimate p(y|x), the probability that an input x belongs to class y. The guidance augments the diffusion sampling procedure by changing the predicted means using the gradients of the classifier ∇_x log p(y|x).
We adapt classifier-based sampling to constraint-based sampling by substituting the classifier with an objective function J(μ^i). We denote the mean of the trajectory we learn as μ and the sampled path as τ. Then, the guidance process can be written as τ^i-1∼𝒩(τ^i; μ_θ(τ^i) + s Σ g, Σ), where the mean of the new distribution is perturbed by g = ∇_τ J(μ) and s is a gradient scale. In our implementation, we use an additional Adam optimizer to perform this gradient update (Algorithm <ref>). Using the Adam optimizer <cit.> relieves the need to hand-tune the gradient scale s and allows us to combine multiple constraints.
In our modified sampling algorithm (Algorithm <ref>), we begin with a completely noisy trajectory (line <ref>). Then, the denoising process occurs for T timesteps (line <ref>), where the denoised trajectory means are sampled from the model (line <ref>). We then use our constraint functions to move the means (line <ref>) and sample from the new distribution (line <ref>). Finally, we condition the model on detected locations at each diffusion timestep (line <ref>).
We use two constraint functions, one for the motion model and the second for the obstacles.
* Motion Model Constraint: ∑_t‖τ_t - τ_t+1‖
For each consecutive point in our trajectory, we create a simple smoothness loss such that consecutive states in the trajectory are close by.
* Obstacle Constraint: penalize ‖τ_t - c ‖ < ϵ∀ t ∈ T, c ∈ C
For each state (τ_t) in the trajectory and each obstacle c ∈ C on the map, we add a loss term whenever the state comes within ϵ of the obstacle, which pushes states away from obstacles in the environment.
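To make the guided update concrete, the sketch below implements both penalty terms and the Adam-based adjustment of the predicted means described above; the obstacle positions, margin ϵ, number of gradient steps, and learning rate are illustrative assumptions rather than the values used in our experiments.

import torch

def constraint_objective(mu, obstacles, eps=0.05):
    """mu: (batch, horizon, 2) trajectory means; obstacles: (n_obs, 2)."""
    # motion-model (smoothness) term: consecutive states should stay close
    smooth = (mu[:, 1:] - mu[:, :-1]).norm(dim=-1).sum()
    # obstacle term: penalize states that come within eps of any obstacle
    dists = (mu.unsqueeze(2) - obstacles.view(1, 1, -1, 2)).norm(dim=-1)
    collide = torch.clamp(eps - dists, min=0.0).sum()
    return smooth + collide

def guide_means(mu, obstacles, steps=5, lr=1e-2):
    """Move the predicted means to better satisfy the constraints."""
    mu = mu.clone().requires_grad_(True)
    opt = torch.optim.Adam([mu], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        constraint_objective(mu, obstacles).backward()
        opt.step()
    return mu.detach()   # sample tau^{i-1} around these adjusted means

mu = torch.randn(4, 60, 2)
obs = torch.tensor([[0.0, 0.0], [1.0, 1.0]])
mu_guided = guide_means(mu, obs)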
§.§ Conditioning Detected Observations
While almost all of the detected state information about the adversary lies in the past, we provide a way to incorporate detected locations at the current timestep t=0 directly into the diffusion model sampling process. We alter the sampling process of the diffusion model: if the detected location at the current timestep is known, we replace the sampled value with the known location after each diffusion timestep i (Algo <ref>, line <ref>).
Planning with diffusion models has used a similar process to goal-condition trajectories on the starting and ending location <cit.>. This solution is inspired by inpainting <cit.> in computer vision, where parts of an image are known and the diffusion model must generate the rest of the image.
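A minimal sketch of this replacement step is shown below; the batch layout and the convention that the current timestep sits at index 0 of the trajectory are assumptions made for illustration.

import torch

def apply_detection(tau, detection, t_index=0):
    """tau: (batch, horizon, 2); detection: (batch, 2) or None.
    Overwrites the known entry of the trajectory after a denoising step."""
    if detection is not None:
        tau = tau.clone()
        tau[:, t_index, :] = detection        # pin the known (detected) state
    return tau

tau_i = torch.randn(4, 60, 2)                  # partially denoised trajectories
det = torch.zeros(4, 2)                        # detected location at t = 0
tau_i = apply_detection(tau_i, det)            # repeated at every diffusion step i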
§ EVALUATION
We evaluate our single-target models for target tracking in the Prison Escape and Smuggler scenario introduced in <cit.> and use Monte-Carlo Sampling from the trained diffusion model to estimate the distribution of trajectories by generating 30 paths per sample.
We use three datasets in the Prison Escape scenario (Prisoner-Low, Prisoner-Medium, Prisoner-High) that contain opponent detection rates of 12.9%, 44.0%, and 63.1%, respectively and two Smuggler datasets with opponent detection rates, 13.8% and 31.5%.
We create new multi-agent datasets within the same domain. However, for simplicity we do not include tracking agents in this domain. Instead, we randomly sample 10-12% of the timesteps and assume that these are the detected locations. These detected location samples are not resampled during training, so there is only one set of detections per trajectory rollout. We create two types of behavior: 1) where all the agents meet together before traveling to the goal location and 2) where the agents go directly to the goal location. This produces a multimodal distribution where the behaviors of the agents in the domain are dependent on each other. All agents use A^* to traverse the landscape, where terrain with lower coverage visibility is preferred over higher coverage visibility. In the maps shown in Figure <ref>, the low-visibility areas correspond to the darker regions of the map. In our environment, we choose three target agents to track, but our model can be used to track any number n of targets.
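For illustration, detections for such a dataset could be sampled as in the sketch below, which marks roughly 10-12% of the timesteps of a rollout as detected exactly once; the toy rollout and the detection rate are placeholders, not our data-generation code.

import numpy as np

rng = np.random.default_rng(1)

def sample_detections(trajectory, rate=0.11):
    """trajectory: (horizon, 2) ground-truth path of one target.
    Returns rows of (t, x, y) for the sampled detections."""
    horizon = trajectory.shape[0]
    detected = rng.random(horizon) < rate
    times = np.flatnonzero(detected)
    return np.column_stack([times, trajectory[times]])

path = np.cumsum(rng.normal(size=(400, 2)), axis=0)      # toy rollout
detections = sample_detections(path)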
In our analysis, we examine two scenarios regarding detections. The first scenario assumes that we have knowledge about the origin of each detection. The second scenario assumes that we do not have any information about the origin of the detections. In real-world target tracking situations, it is common for us to lack knowledge about the origin of the detections and, therefore, implicitly conduct target assignments.
§.§ Metrics
We evaluate on two measures used in prior work <cit.>, Average Displacement Error (ADE) and minimum ADE (minADE), to compare against previous models. In the case of multi-agent tracking, we average the metrics across all agents. Previous work that fit a probability distribution also reported the log-likelihood log(p(s_t|θ)) as a measure, where θ denotes the model parameters. Computing the exact log-likelihood through the diffusion process is still an open research topic <cit.>, where deterministic samplers based on probabilistic flow ODEs have been used to compute exact likelihoods.
* Average Displacement Error (ADE): Given a ground truth trajectory τ, we compute the average l_2 distance between each sampled trajectory and the ground truth trajectory over all timesteps.
* Minimum Average Displacement Error (minADE): minADE measures the distance of the closest sampled path to the ground truth trajectory.
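Both measures can be computed directly from the Monte-Carlo samples, as in the following sketch (array shapes are assumptions for illustration).

import numpy as np

def ade(samples, gt):
    """samples: (n_samples, horizon, 2); gt: (horizon, 2) -> mean l2 error."""
    errs = np.linalg.norm(samples - gt[None], axis=-1)   # (n_samples, horizon)
    return errs.mean()

def min_ade(samples, gt):
    """Displacement error of the closest sampled path to the ground truth."""
    errs = np.linalg.norm(samples - gt[None], axis=-1).mean(axis=1)
    return errs.min()

samples = np.random.rand(30, 60, 2)    # 30 sampled paths, as in our evaluation
gt = np.random.rand(60, 2)
print(ade(samples, gt), min_ade(samples, gt))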
§ RESULTS & DISCUSSION
We evaluate our multi-target models within the multi-target Prison Escape domain, showing ablations for knowing the origin of the detections and without knowing the origins of the detections.
The performance of our single-target tracking model was evaluated on the three Prison Escape datasets introduced in <cit.>. Our models show better ADE than the previous best Gaussian Mixture-based model (GMM) on every prediction horizon.
§.§ Multi-Target Tracking
We present findings regarding the performance of models with known detection origins compared to those without, as well as the qualitative analysis of the generated tracks in the multi-target tracking domain.
First, we are able to qualitatively show the multimodal behavior and dependencies between tracks with the cross-attention mechanism (Figure <ref>). Given the same inputs, the model produces tracks that do not intersect (left image) along with tracks that do intersect (right image), showing the models have learned the relationship between the tracks that is inherent within the dataset.
Detection Origin Ablation:
We compare the ADE and minADE for models with detection origin information and those without (Figure <ref>). For models with origin information, we pass each target's detections through its own LSTM encoder to retrieve an embedding per target. This embedding is fed uniquely into each of the track generators. For models without detection origin information, we feed all detections through a single LSTM encoder whose embedding is shared amongst the diffusion tracks.
As expected, the models that possess knowledge of detection origins exhibit superior performance over all time horizons in terms of ADE and minADE compared to the models lacking such knowledge. Notably, there is a significant difference between ADE and minADE, indicating that while the distribution of paths is large, we can also generate hypothetical tracks that align well with the actual trajectories.
Furthermore, we observed that models without detection origins exhibit relatively higher error rates for shorter prediction horizons than for longer ones. As the prediction horizon increases, trajectories are drawn to the discrete set of goal locations, and the diffusion model directs trajectories towards these points. Therefore, in our domain, the challenge of assigning targets to the detections becomes more evident at shorter prediction horizons.
Our approach is the first method capable of performing multi-target tracking under partial observability, and it implicitly conducts target assignment to generate consistent interactions between agents.
§.§ Constraint-Guided Samples
We compare how effectively the constraint-guided sampling process prevents collisions with obstacles in the environment. A visualization of the difference is shown in Figure <ref>. Samples generated without the constraint-guided sampling method resulted in an average of 5% of the produced states colliding with obstacles. In contrast, states generated using the constraint-guided sampling method collided with mountains only approximately 0.641% of the time. Our findings demonstrate that the new sampling procedure led to a significant 90% decrease in the number of collisions in the generated trajectories and produces hypothetical trajectories that are more consistent with the actual trajectories.
§.§ Single-Target Tracking
We present our results on single-target tracking using a variant of the model without cross-attention, as there is only one agent. Our findings for single-target tracking are based on the analysis of the three Prisoner Escape and two Smuggler datasets (Table <ref>). The results demonstrate that diffusion models outperform the previous best Gaussian Mixture models by 9.2% on average. At higher prediction horizons of 60, 90 and 120 timesteps into the future, the diffusion models yield improvements of 12.2%. Additionally, our models are able to generate complete trajectories, whereas the previous models could only predict states at a fixed horizon length.
The non-parametric formulation of the diffusion model lends itself better to trajectory generation in our sparse-detection environments than fitting a mixture of Gaussians. Fitting a mixture of Gaussians requires choosing the appropriate number of components, a hyperparameter, for the target tracks. The diffusion models overcome this requirement and are able to represent a more diverse set of tracks.
By incorporating the complete trajectories generated by our diffusion models, we can enhance the capabilities of searching agents and improve target containment strategies. The availability of full track predictions allows us to anticipate the target's movements, identify potential escape routes, and strategically position agents to cut off those routes effectively. This enables us to employ more advanced policies for cornering the target and maximizing the chances of capture.
§ LIMITATIONS
While our diffusion models show great improvements over previous models, a main limitation is the sampling time for generating tracks. Improving sampling speeds for diffusion models is currently an active area of research <cit.>.
Additionally, our model can perform track prediction for future timesteps but only implicitly learns the target track assignment from past detections. We therefore did not consider the task of track reconstruction in this work, where all trajectories generated are of future timesteps and not of the past. Finally, we currently assume all agents are homogeneous, but the approach could be extended to heterogeneous agents by removing the shared weights between track generators.
§ CONCLUSION
We proposed a novel approach using diffusion probabilistic models for single and multi-target tracking in large-scale environments. Our model incorporates cross-attention, constraint-guided sampling, and conditioning techniques to improve track prediction accuracy and adhere to motion model and environmental constraints. The experimental results demonstrate the effectiveness of the approach, surpassing the performance of previous state-of-the-art models in single-target tracking and achieving successful multi-target tracking with improved target track assignment.
|
http://arxiv.org/abs/2307.07381v1 | 20230714144536 | Investigating ChatGPT's Potential to Assist in Requirements Elicitation Processes | [
"Krishna Ronanki",
"Christian Berger",
"Jennifer Horkoff"
] | cs.SE | [
"cs.SE"
] |
Investigating ChatGPT's Potential to Assist in Requirements Elicitation Processes
Krishna Ronanki1,
Christian Berger2 and Jennifer Horkoff3
Dept. of Computer Science and Engineering,
University of Gothenburg
Gothenburg, Sweden
[email protected],
[email protected],
[email protected]
July 14, 2023
===============================================================================================================================================================================================================================
Natural Language Processing (NLP) for Requirements Engineering (RE) (NLP4RE) seeks to apply NLP tools, techniques, and resources to the RE process to increase the quality of the requirements. There is little research involving the utilization of Generative AI-based NLP tools and techniques for requirements elicitation. In recent times, Large Language Models (LLM) like ChatGPT have gained significant recognition due to their notably improved performance in NLP tasks. To explore the potential of ChatGPT to assist in requirements elicitation processes, we formulated six questions to elicit requirements using ChatGPT. Using the same six questions, we conducted interview-based surveys with five RE experts from academia and industry and collected 30 responses containing requirements. The quality of these 36 responses (human-formulated + ChatGPT-generated) was evaluated over seven different requirements quality attributes by another five RE experts through a second round of interview-based surveys. In comparing the quality of requirements generated by ChatGPT with those formulated by human experts, we found that ChatGPT-generated requirements are highly Abstract, Atomic, Consistent, Correct, and Understandable. Based on these results, we present the most pressing issues related to LLMs and what future research should focus on to leverage the emergent behaviour of LLMs more effectively in natural language-based RE activities.
ChatGPT, Large Language Models, NLP4RE, Requirements Elicitation
§ INTRODUCTION
Due to the increasing access to the huge volumes of data generated all over the world <cit.>, there is a growing involvement of AI in our daily lives. Because of this trend, AI systems need to be not only safe and reliable but also trustworthy since they have the potential to directly or indirectly cause harm to users and society <cit.>. Trustworthy AI can be defined as a conceptual framework that ensures that the development and implementation of technically and socially robust AI systems adhere to all the applicable laws and regulations, and conform to general ethical principles <cit.>. The importance of Trustworthy AI in the contemporary world cannot be understated, and the forthcoming mandatory compliance with the European Union's AI Act (AIA) guidelines while developing and implementing AI systems underscores its significance.
Requirements Engineering (RE) is considered a critical juncture of the interplay between ethics and technology <cit.>. The RE process at the beginning of a product development life cycle fosters increased communication and collaboration between various stakeholders, offering opportunities to discuss ethical concerns <cit.>, like aspects associated with the trustworthiness of AI, and incorporate them into the development process in a concrete manner. In the field of RE, the quality of the requirements gathered is a fundamental concern. Several researchers and standards organizations have identified a set of quality attributes that are crucial for RE based on the IEEE standards for requirements specification <cit.>.
Over the years, empirical evidence has suggested that using natural language is the most prevalent approach for writing requirements in industrial practice. This strong interrelation between natural language and requirements led to the outset of Natural Language Processing (NLP) for RE <cit.>. NLP4RE seeks to apply NLP tools, techniques, and resources to the RE processes to support human analysts in carrying out various tasks on textual requirements such as detecting and improving language issues, among other things <cit.>, which increase the quality of the requirements. Natural Language Generation (NLG) is a process that we use to generate meaningful phrases, sentences, and paragraphs in natural human language <cit.>, and is considered one of the most critical yet complex sub-fields of NLP <cit.>. But the utilization of Generative AI-based NLP tools and techniques for eliciting requirements in support of RE activities is lacking <cit.>, indicating a gap in the current state-of-the-art in NLP4RE. On one hand, if this gap is addressed, it could potentially lead to improvements in the overall quality of artifacts and processes involved in RE.
On the other hand, Large Language Models (LLM) have gained significant recognition due to their notably improved performance in NLP tasks <cit.> in recent times. ChatGPT, based on the GPT-3.5 language model, is optimized for dialogue <cit.> and is capable of answering questions in a human-like text while keeping track of the entire conversation <cit.>. Despite being trained on a large general domain data and specifically fine-tuned for conversational tasks <cit.>, it has been observed to perform surprisingly well on specific technical tasks <cit.>. The use of AI-based conversational chatbots for critical software development activities is not unprecedented either. Machine Learning (ML) models have been proven to provide multiple advantages while implemented in RE for developing privacy-aware systems <cit.>. Furthermore, sentiment analysis of Twitter data for early adopters of ChatGPT showed a positive sentiment of 83% towards enhancing NLP-based tasks <cit.>.
Considering the small number of studies involving requirements elicitation using AI-based models and the emergence of LLMs like ChatGPT that are proficient in interacting using natural language, we seek to investigate the potential of ChatGPT (Feb 9 Version) in the requirements elicitation processes by assessing the quality of requirements obtained using ChatGPT in a controlled context and compare them against requirements formulated by human RE experts. We chose to assess ChatGPT's capacity to generate requirements for a fairly novel and uncharted field, in comparison to human capabilities. The topic of Trustworthy AI has been chosen for investigation as it is an area of significant importance, yet there exists a lack of comprehensive understanding of this subject. To that end, the study seeks to answer the following research questions:
RQ1- On which requirements quality attributes did ChatGPT-generated requirements score the highest?
RQ2- What are the identified shortcomings of ChatGPT in generating requirements?
RQ3- How do quality attribute scores of requirements generated by ChatGPT compare with requirements formulated by human RE experts?
Section <ref> provides relevant background and motivates the research goals. Section <ref> describes the design of our study. Section <ref> presents the results of the methods employed and an analysis of how requirements generated by ChatGPT compare to those formulated by human experts, based on the evaluation scores. We discuss the results and present any identified validity threats in Section <ref> and conclude in Section <ref>.
§ RELATED WORK
§.§ Trustworthy AI qualities
Kaur et al. <cit.> identified several key AI qualities for Trustworthy AI based on the guidelines proposed by the European Union (EU) <cit.>, including Accuracy & Robustness, Safety, Non-discrimination, Transparency & Explainability, Accountability, Privacy & Security, Regulations, and Human Agency & Oversight. The requirements elicitation questionnaire presented in <ref> was crafted based on the Trustworthy AI qualities presented in this study.
§.§ Requirements Quality Attributes
We selected 7 requirement quality attributes presented by Denger et al. <cit.> and Genova et al. <cit.> to evaluate the quality of the requirements gathered through the interview-based surveys and ChatGPT. These attributes include Abstraction, Atomicity, Consistency, Correctness, Unambiguity, Understandability, and Feasibility.
§.§ State-of-the-art in LLMs' Application in Software Engineering Activities
In recent years, transformer-based models such as BERT achieved state-of-the-art results in natural language processing (NLP) tasks. These models are typically pre-trained on large amounts of textual data and then fine-tuned on task-specific data to perform particular NLP tasks. In their study, Mosel et al. <cit.> compare BERT transformer models trained on software engineering (SE) data (context-specific specialized vocabulary) with those trained on general domain data in multiple dimensions: their vocabulary, their ability to understand which words are missing, and their performance in classification tasks. Their results demonstrate that for tasks that require an “understanding” of the SE context, pre-training with SE data is valuable yet for general language understanding tasks within the SE domain, models trained on general domain data are sufficient.
Alhoshan et al. <cit.> report an extensive study using the contextual word embedding-based zero-shot learning approach for requirements classification. The study tested this approach by conducting more than 360 experiments using 4 language models with a total of 1020 requirements and found generic language models trained on general-purpose data perform better than domain-specific language models under the zero-shot learning approach.
Ahmed et al. <cit.> investigate the potential and limitations of using ChatGPT to assist an architect in Architecture-centric Software Engineering (ACSE) using a Human-DevBot collaboration approach. They presented a case study in which a novice software architect collaborates with ChatGPT to analyze, synthesize, and evaluate a services-driven software application. They investigated the role that ChatGPT can play in supporting and leading the architectural activities of ACSE. Preliminary results of this study demonstrate that ChatGPT is capable of mimicking the role of an architect in ACSE.
Ozkaya et al. <cit.> present a wide range of potential applications of LLMs in various SE activities including requirements documentation and specification generation, arguing that LLMs can assist in generating more complete specifications significantly quicker.
Zhang et al. <cit.> empirically evaluate how ChatGPT performs on requirements analysis tasks to derive insights into how generative large language models, represented by ChatGPT, influence the research and practice of NLP4RE. The evaluation results demonstrate ChatGPT's impressive ability to retrieve requirements information from different types of artifacts involving multiple languages under a zero-shot setting. It is worthwhile for the research and industry communities to study generative large language models in NL RE tasks.
We can observe the growing interest among the research community to investigate the potential applications of large language models like ChatGPT within various fields of SE including RE, as evidenced by the increasing number of related studies, all within a short period of time.
§ METHODOLOGY
§.§ Research Design
We employed a two-stage process to address the research questions as shown in Figure <ref>. The first one is using ChatGPT to collect synthetic data, which, in this case, are requirements for developing Trustworthy AI systems. The second method is conducting interview-based surveys. We conducted two rounds of interview-based surveys with different research objectives. The first round was conducted to gather requirements for developing Trustworthy AI systems. These requirements were intended to be very general and not for a particular system. The collected responses included requirements relevant to a wide range of AI and AI-based systems including generative AI models, autonomous vehicles, self-driving cars, AI chatbots, etc. – all aiming to ensure the system being developed and deployed is Trustworthy. Neither ChatGPT nor the interview participants were provided with any definitions for the Trustworthy AI qualities for the round-1 interview-based survey. The second round was conducted to evaluate the quality of the requirements collected.
§.§ Generating Requirements for Trustworthy AI Using ChatGPT
This part of the study includes multiple steps. The first step was to formulate 6 questions to create a requirements elicitation questionnaire for developing Trustworthy AI systems. Before asking ChatGPT the 6 questions, it was given a context prompt as a conversation kick-off providing some context of what we are trying to achieve as shown in Figure <ref>. The same text in the prompt was mentioned to the participants of the interview-based surveys to ensure all sources from which we were eliciting requirements had the same context.
Then, the 6 questions we crafted were fed to the ChatGPT as inputs. It should be noted that the responses generated by ChatGPT were recorded without any modifications, and screenshots of the responses were taken to ensure the veracity of the data. The requirements elicitation questionnaire is as follows:
* Q1: What are the necessary requirements for developing an AI system that ensures its Accuracy and Robustness?
* Q2: How to ensure that the data used in training, testing, and validating an AI model is unbiased and fair?
* Q3: What requirements are important to ensure the AI model is transparent and explainable?
* Q4: What are the most important privacy and security requirements that AI developers need to consider?
* Q5: What kind of human oversight requirements need to be in place while developing AI systems to ensure that the AI is Trustworthy?
* Q6: Are there any other requirements you can think of that need to be considered while developing AI systems to ensure that the AI is Trustworthy?
Q1 is crafted to elicit requirements relevant to the Accuracy & Robustness quality of Trustworthy AI. Similarly, Q2 is for Non-discrimination, Q3 is for Transparency & Explainability, Q4 is for Privacy & Security, Q5 is for Human Agency & Oversight, and Q6 is for any other requirements that are relevant to developing a Trustworthy AI system. However, not all of the Trustworthy AI qualities were explicitly covered by the list of questions presented to the interviewees and ChatGPT. No questions were formulated or asked of either ChatGPT or the interviewees for the qualities of Safety, Accountability, and Regulations. This was a deliberate decision to assess the level of awareness of the interviewees and the ChatGPT model regarding these crucial factors related to the requirements of Trustworthy AI systems. By omitting these AI qualities, the study aimed to determine whether the interviewees and ChatGPT would mention them in response to Q6 of the requirements elicitation questionnaire, unsolicited. The intention was to gain a deeper understanding of the degree to which both humans and ChatGPT are cognizant of the key AI qualities that are necessary for developing such a niche system. The responses generated by ChatGPT for these 6 questions are provided as part of supplementary material in Section <ref>.
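Note that we interacted with ChatGPT through its web interface (Feb 9 version) and recorded the answers verbatim. Purely as an illustration, a comparable elicitation session could be scripted against the OpenAI Python client as sketched below, where the context prompt and question texts are abbreviated placeholders rather than the exact wording used in the study.

from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

context_prompt = ("We are gathering requirements for developing "
                  "Trustworthy AI systems.")  # placeholder, not the exact prompt
questions = [
    "What are the necessary requirements for developing an AI system that "
    "ensures its Accuracy and Robustness?",
    # ... remaining five questions from the questionnaire ...
]

messages = [{"role": "user", "content": context_prompt}]
for q in questions:
    messages.append({"role": "user", "content": q})
    reply = client.chat.completions.create(model="gpt-3.5-turbo",
                                           messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})  # keep dialogue context
    print(answer)  # recorded without modification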
§.§ Round-1 Interview-based Surveys: Eliciting Requirements for Trustworthy AI
To effectively gather the necessary requirements, interviewing individuals with expert knowledge in the field of RE for AI/ML systems (whom we will refer to as expert respondents for the rest of the paper) was believed to be the most suitable approach in our study design. Interview-based surveys with 5 expert respondents were conducted to elicit requirements for developing Trustworthy AI. These expert respondents were selected based on their experience in the field of RE4AI and their familiarity with the concept of Trustworthy AI. The pre-interview formalities included explicitly informing the participants about the purpose of collecting requirements from them, i.e., evaluating the quality of the requirements and publishing the results as a part of a research study. We also informed and took consent from each of the participants to record the interviews, remove any personal identifiers from the interview transcripts to comply with GDPR regulations, and store the recordings and the anonymized transcripts securely along with a master file containing their contact information. It was made clear that their participation is completely voluntary and they can refuse to answer a particular question or end the interview at their discretion. The recording and transcription of the interviews began only after making sure all the participants understood the formalities and gave their consent. The same 6 questions that were given as inputs to the ChatGPT were asked of the expert respondents. Once the planned interview-based surveys were finished and the 30 responses from the expert respondents were obtained, a total of 36 responses, each consisting of a varying number of requirements related to the development of Trustworthy AI, were ready to be evaluated using selected requirement quality attributes. The supplementary material provided in Section <ref> does not contain round-1 interview participants' responses because the pre-interview informed consent did not involve making their responses publicly accessible.
§.§ Round-2 Interview-based Surveys: Evaluating the Quality of the Elicited Requirements
These 36 responses consisting of requirements for Trustworthy AI were presented to a different set of 5 RE experts (whom we will refer to as expert evaluators for the rest of the paper) for evaluation. These 5 people were chosen based on their experience in working with requirements for AI systems. To avoid any potential bias, the fact that 6 of the responses were generated by ChatGPT was not disclosed to the expert evaluators. The demographic data of the expert respondents was also not revealed. The 36 responses were presented to these expert evaluators in a randomised order. Expert evaluators were also provided with supplementary material with the definitions of Trustworthy AI qualities and the requirements quality attributes before the interview began. This was done in order to ensure a fair and consistent evaluation practice.
The expert evaluators were then requested to provide a score for each response across seven chosen quality attributes, i.e., Abstraction, Atomicity, Consistency, Correctness, Unambiguity, Understandability, and Feasibility. Each response was rated on a scale of 0-10 for each attribute, with 0 being the lowest score assigned for the poorest quality requirements and 10 being the highest score assigned for the best quality requirements, resulting in quantified measures of the selected requirements quality attributes.
§ RESULTS & ANALYSIS
In this section, we present the outcomes of our research study. We report the quality scores for requirements elicited using ChatGPT and compare them with those formulated by expert respondents. The average scores and standard deviations for each requirements quality attribute were computed separately for both the expert respondents' formulated and ChatGPT-generated requirements using the data from round-2 interview-based surveys. We also discuss the shortcomings of using ChatGPT to elicit requirements based on the analysis of our findings in subsection <ref>.
§.§ Quality Scores for ChatGPT-generated Requirements
Figure <ref> presents the scores of all 36 responses for each of the 6 questions from <ref> over the 7 requirements quality attributes. I1-I5 refer to the 5 interviews we conducted in the round-2 interview-based surveys. Q1-Q6 are the questions given to the expert respondents in the round-1 interviews. The columns labelled HR consist of the scores assigned to the requirements provided by the expert respondents, while the columns labelled CR consist of the scores assigned to the ChatGPT responses (highlighted in green). For example, the first column and row value (9) is the expert evaluator's numerical score for the Abstraction quality of the response from the first expert respondent. Highlighted in green, (7) in that same row is the evaluation of the level of Abstraction of ChatGPT's response to the same question. HR-avg is the average of the expert evaluators' scores for the human-formulated responses across all 5 round-2 interview-based surveys and all five respondents, while CR-avg is the corresponding average for the ChatGPT-generated responses. HR-STDEV is the standard deviation of the expert evaluators' scores for the human-formulated responses across all 5 round-2 interview-based surveys. Based on the HR-avg, CR-avg and HR-STDEV presented in Figure <ref>, we generated Figure <ref>, which is used as a basis to answer RQ1 and RQ2.
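For clarity, the aggregate quantities can be computed as in the sketch below; the array layout and the random placeholder scores are assumptions used only to illustrate how HR-avg, CR-avg and HR-STDEV are derived.

import numpy as np

rng = np.random.default_rng(0)
human_scores = rng.integers(0, 11, size=(5, 5, 7))   # 5 evaluators x 5 respondents x 7 attributes
chatgpt_scores = rng.integers(0, 11, size=(5, 7))    # 5 evaluators x 7 attributes

hr_avg = human_scores.mean(axis=(0, 1))     # HR-avg per quality attribute
hr_std = human_scores.std(axis=(0, 1))      # HR-STDEV per quality attribute
cr_avg = chatgpt_scores.mean(axis=0)        # CR-avg per quality attribute

attributes = ["Abstraction", "Atomicity", "Consistency", "Correctness",
              "Unambiguity", "Understandability", "Feasibility"]
for name, h, s, c in zip(attributes, hr_avg, hr_std, cr_avg):
    print(f"{name}: HR-avg={h:.1f} (std {s:.1f}), CR-avg={c:.1f}")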
Figure <ref> provides a comparison of the quality of the requirements generated by ChatGPT for different Trustworthy AI qualities over the 7 selected requirements quality attributes. The first column represents Abstraction quality scores for all 6 questions from <ref> with blue layer answering Q1, orange layer answering Q2, grey layer answering Q3, yellow layer answering Q4, light blue layer answering Q5, and green layer answering Q6. Similarly, the second column represents Atomicity quality scores, the third column represents Consistency quality scores, the fourth column represents Correctness quality scores, the fifth column represents Unambiguity quality scores, the sixth column represents Understandability quality scores and the seventh column represents Feasibility quality scores.
The overall height of the stacked bar represents the total score achieved by ChatGPT-generated requirements for each requirement quality attribute across all Trustworthy AI qualities. As observed in Figure <ref>, the tallest stacked bar is the Understandability column, i.e., the most attested attribute of ChatGPT-generated requirements is Understandability. This is closely followed by Correctness, Consistency and Abstraction. Unambiguity and Feasibility are the second-shortest and the shortest stacked bars, respectively, i.e., they are the least attested quality attributes of ChatGPT-generated requirements.
The results indicated that Correctness with an average score of 8.4 out of 10, and Abstraction with an average score of 7.6, were the most prominent qualities of the requirements for ensuring Accuracy & Robustness, while Unambiguity was the least impressive quality, with an average score of only 4.6.
Coming to the requirements for ensuring the usage of Unbiased & Fair data in training, testing & validating an AI model, Consistency and Understandability were jointly ranked the highest, with an average score of 7.6, followed closely by Correctness with an average score of 7.4. However, Unambiguity scored the lowest in this aspect, with an average score of 5.8.
For Transparency & Explainability requirements, Understandability was the highest-ranked quality with an average score of 8.2, followed by Consistency and Correctness, with scores of 8 and 7.8 respectively. Feasibility scored the lowest, with an average score of 5.4.
Regarding Privacy & Security requirements, Consistency was the most highly rated quality, with an average score of 8.2 while Abstraction scored the lowest, with an average score of 6.4.
For Human Oversight requirements, Understandability scored the highest with an average score of 8, while Unambiguity was the least impressive, with an average score of 5.4. Finally, the requirements generated for Q6 of the interview questionnaire had Abstraction and Understandability as their most attested quality, with an average score of 7.4, while Feasibility scored the least, an unimpressive 4.2.
§.§ Shortcomings of ChatGPT in Generating Requirements
From Figure <ref> and the results presented in Section <ref>, it is observed that the requirements generated by ChatGPT exhibited high levels of Understandability, Consistency, and Correctness, which are essential qualities for good requirements. ChatGPT-generated requirements achieved good scores for Abstraction for most Trustworthy AI principles except for Privacy & Security. On the other hand, Unambiguity and Feasibility achieved the lowest scores, making them the least prominent quality attributes in ChatGPT's requirements.
A low Unambiguity score could mean that the presentation of the requirements is unclear and imprecise and could lead to multiple interpretations. A low Feasibility score indicates that the requirements cannot be realistically implemented. This could be due to technological constraints (unavailability of the recommended infrastructure or access to the recommended tech stack), resource limitations (time, budget, manpower), or dependency on external factors beyond the project stakeholders' control. Although no effort was made to investigate the precise cause of the low Unambiguity and Feasibility scores of ChatGPT-generated requirements, it can be concluded that the requirements are not yet clear and feasible enough to implement in real-world projects.
Additionally, ChatGPT's response to Q6 included requirements related to Human Rights, Ethical Considerations, Interoperability, Sustainability, Responsible Usage and Stakeholder Diversity during the development of the AI systems. But we did not find any requirements related to Accountability, Regulations, or Safety. In comparison, the requirements provided by human RE4AI experts for Q6 were centred around Sustainability, Sovereignty, User Experience, Safety, Human Factors and System Predictability.
§.§ Comparing ChatGPT-generated Requirements with Human Expert Requirements
From Figure <ref>, we can see that the orange bar, representing the average scores of the ChatGPT-generated requirements, is taller than the blue bar, which represents the average score of the human-formulated requirements, in most instances. This means that the quality evaluation scores of ChatGPT-generated requirements outperform the scores achieved by the expert respondents' formulated requirements in most cases. The orange bar is shorter than the blue bar in only 4 instances: 1) Unambiguity of Accuracy & Robustness requirements, 2) Feasibility of Transparency & Explainability requirements, 3) Feasibility of Human Oversight requirements, and 4) Feasibility of other Trustworthy AI requirements. Once again, Feasibility and Unambiguity are the only attributes where the average scores of the expert respondents' formulated requirements outperformed the ChatGPT-generated requirements, in 4 out of 42 instances.
§ DISCUSSIONS
Based on our findings presented in Section <ref>, we observe that ChatGPT-generated requirements are considered to be acceptable by RE experts (expert evaluators) in direct competition with expert respondents' formulated requirements on multiple quality attributes. But this does not mean they are flawless. Generating ambiguous and unfeasible requirements is a clear shortcoming identified in using ChatGPT to elicit requirements, even if the average scores of ChatGPT-generated requirements for Unambiguity and Feasibility are still higher than human-formulated requirements in the majority of instances. Feasibility is especially important to consider as a requirement is only of value if it can be transformed into a design and an implementation with reasonable effort and cost <cit.>. Ambiguous presentation of requirements may lead to uncertainty in the decision-making process during the design stage, which is considered highly undesirable by developers <cit.>.
Requirements for a particular system to be developed should come from or be approved by the customer. This is a key element of RE that cannot (and perhaps should not) be replaced by AI. However, AI can assist to some extent in various RE tasks that are tedious yet require complex reasoning abilities. Based on our results, we find that LLMs can assist requirements analysts in at least making requirements more Abstract, Atomic, Complete, and Understandable.
An example use case may look as follows: a product owner receives the requirements from the customer(s) through interviews or focus groups while ensuring there is a textual transcript of the entire conversation. This transcript can be refined using an LLM to convert it into a system requirements specification (SRS) with a highly Abstract (if required), Atomic, Consistent, Correct, and easily Understandable set of requirements. This process can reduce time and effort compared to preparing requirements manually. In this way, the adoption of LLMs in the RE process can lead to increased process efficiency, since analysts can redirect their efforts to activities that require advanced critical thinking. In addition, outputs from LLMs may have improved quality compared to manually crafted outputs, as observed here.
The level of awareness regarding high-level requirements that an AI system must adhere to be deemed Trustworthy is critical for the successful development of Trustworthy AI systems. However, of the five interviewees, only one identified the significance of incorporating Safety and Human Agency requirements into Trustworthy AI development as a response to Q6 from the requirements elicitation questionnaire. None of the other interviewees provided any requirements related to Accountability, Regulations, or Safety and neither did ChatGPT. This highlights the significance of possessing domain knowledge when formulating Trustworthy AI requirements, irrespective of the method that is being employed to elicit and formulate requirements. It holds true even while critically reflecting on ChatGPT-supplied content for the RE process to ensure the Accuracy of the output. But this should not necessarily overshadow the promise ChatGPT showed in providing content that is considered to be Abstract, Consistent, Correct and Understandable requirements by RE4AI experts. Instead, we believe this promise should be fostered by trying to address the shortcomings.
But there is a bigger challenge to overcome for the usage of LLMs like ChatGPT to support the requirements elicitation process, namely, hallucinations. The model is prone to generating factual-sounding statements that cannot be validated from the source, a phenomenon referred to as extrinsic hallucination <cit.>. Despite scoring higher on the Correctness attribute, and having no evidence of the ChatGPT's output in our study being affected by the hallucination effect, implementing ChatGPT-generated requirements into the AI systems development process without any Human Oversight mechanisms in place might lead to an edge-case scenario where a factually incorrect requirement might be perceived as a correct one.
Recent advancements in LLMs have shown indications of efforts to address these limitations. The development of GPT-4 has shown significant improvement over existing models in various NLP tasks. In particular, GPT-4 improves over the latest GPT-3.5 model, on which ChatGPT is based, by 19 percentage points, with significant gains across all topics, and has surpassed the majority of state-of-the-art systems, which typically require task-specific fine-tuning. This advancement is attributable to predictable scaling, which has enhanced measures of factuality and adherence to desired behaviour, demonstrating the potential of GPT-4 to offer improved language understanding capabilities and to facilitate the development of more advanced NLP systems <cit.>.
Apart from that, recent studies have also demonstrated that LLMs exhibit remarkable performance gains when trained using a small amount of in-context data using one-shot and few-shot prompting techniques <cit.>, with the performance being heavily influenced by the domain of the corpus source <cit.>. It has also been observed that transformer models, which have been pre-trained with software engineering (SE) domain data, exhibit superior performance on SE-related applications as compared to general domain models and can be regarded as the current state-of-the-art for SE use cases <cit.>. Further research has shown that fine-tuning such models on smaller datasets in combination with transfer learning can significantly enhance their performance on specific tasks with limited data <cit.>.
This multitude of available approaches, if leveraged, could help the research community to address the identified challenges and come up with robust mitigation strategies, fostering further research into investigating ways to utilize LLMs in RE activities. The findings presented in this study can be viewed as preliminary proof of concept, which could provide motivation for further research to explore and evaluate the boundary of robust state-of-the-art LLMs application in RE activities with appropriate Human Oversight mechanisms in place to ensure the ethical and responsible application of these technologies.
§.§ Future Work
One potential avenue for future research is to conduct more rigorous and comprehensive evaluation studies that involve a wider and more diverse range of factors like utilizing the latest version of the GPT architecture, such as GPT-4, over multiple system domains along with a variety of prompt engineering techniques to overcome the identified limitations.
To enhance the LLM's interpretation of the question and generate more accurate responses, users can manage the context of the dialogue <cit.>. One potential direction is to observe the effect of providing varying degrees of context on the output while interacting with the LLM. For example, giving information about the system's goal, the intended users and the development environment might result in more concrete, unambiguous and feasible requirements.
Another promising approach could be to use Knowledge Graphs (KG) to enhance the factuality of LLM's output. KG-enhanced LLM inference utilizes KGs during the inference stage of LLMs, which enables LLMs to access the latest knowledge without retraining <cit.>.
§.§ Threats to Validity
We followed Runeson & Höst's guidelines for conducting qualitative research analysis <cit.> in software engineering to discuss any possible threats to the validity of this research study. One of the possible threats to the validity of this study is the generalizability of the results. Since the requirements were gathered for Trustworthy AI systems development, which in itself is a rapidly evolving domain, these results might or might not hold for systems from other domains as well.
Another threat concerns construct validity. Neither ChatGPT nor the requirements elicitation interview participants were provided with definitions for Accuracy & Robustness, Safety, Non-discrimination, Transparency & Explainability, Accountability, Privacy & Security, Regulations, and Human Agency & Oversight. This was done intentionally to encourage the responses and requirements to be influenced by the domain knowledge of the participants and ChatGPT.
For computational simplicity, the scores provided by expert evaluators were averaged out. This could potentially be a threat to validity as the variation in the human expert responses is minimized. Nevertheless, the standard deviation of human expert responses (HR-STDEV) was provided in Figure <ref> to represent the variety in the experts' opinions.
Since the expectation was to receive one-shot answers to the questions, the limitations observed may be a result of the constrained interaction employed for querying ChatGPT, as the chat is designed to facilitate continued interaction.
§ CONCLUSION
In conclusion, this study aimed to explore the potential of ChatGPT in eliciting requirements and comparing its output with requirements formulated by five RE4AI experts from academia and industry. The quality of requirements was evaluated by interviewing an additional five RE4AI experts in our study. Results from our experiment show that ChatGPT-generated requirements are considered highly Abstract, Atomic, Consistent, Correct, and Understandable in comparison to human RE experts' formulated requirements. Unambiguity and Feasibility of the requirements received lower scores in comparison to scores of other requirements' quality attributes. Our findings suggest that ChatGPT has promising potential to support requirements elicitation processes, like converting raw requirements documents into high-quality specification documents, ensuring consistency, and improving Understandability among other things. ChatGPT's use cases should be further investigated in various RE activities to leverage the emergent behaviour of LLMs more effectively and foster wider adoption of LLMs in natural language-based RE activities.
§ ACKNOWLEDGEMENT
We want to thank Dr. Beatriz Cabrero-Daniel for her encouragement & valuable support. This work was supported by the Vinnova project ASPECT [2021-04347].
§ SUPPLEMENTARY MATERIAL
<https://doi.org/10.5281/zenodo.8124936>.
plain
|
http://arxiv.org/abs/2307.04117v1 | 20230709081351 | SPar: estimating stellar parameters from multi-band photometries with empirical stellar libraries | [
"Mingxu Sun",
"Bingqiu Chen",
"Helong Guo",
"He Zhao",
"Ming Yang",
"Wenyuan Cui"
] | astro-ph.SR | [
"astro-ph.SR",
"astro-ph.GA"
] |
UTF8gbsn
Stellar parameters from multi-band photometries]
SPar: estimating stellar parameters from multi-band photometries with empirical stellar libraries
0000-0002-2473-9948]Mingxu Sun (孙明旭)
Department of Physics,
Hebei Key Laboratory of Photophysics Research and Application,
Hebei Normal University,
Shijiazhuang 050024, P. R. China
0000-0003-2472-4903]Bingqiu Chen(陈丙秋)
South-Western Institute for Astronomy Research,
Yunnan University,
Kunming 650500, P. R. China
0000-0001-5737-6445]Helong Guo(郭贺龙)
South-Western Institute for Astronomy Research,
Yunnan University,
Kunming 650500, P. R. China
/0000-0003-2645-6869]He Zhao(赵赫)
Purple Mountain Observatory,
Chinese Academy of Sciences,
Nanjing 210023, P. R. China
0000-0001-8247-4936]Ming Yang(杨明)
Key Laboratory of Space Astronomy and Technology,
National Astronomical Observatories,
Chinese Academy of Sciences,
Beijing 100101, P. R. China
0000-0003-1359-9908]Wenyuan Cui(崔文元)
Department of Physics,
Hebei Key Laboratory of Photophysics Research and Application,
Hebei Normal University,
Shijiazhuang 050024, P. R. China
Bingqiu Chen
[email protected]
Modern large-scale photometric surveys have provided us with multi-band photometries of billions of stars. Determining the stellar atmospheric parameters, such as the effective temperature (T_ eff) and metallicity ([Fe/H]), together with the absolute magnitude (M_G), distance (d) and reddening value (E(G_ BP-G_ RP)) of stars, is fundamental to studying the stellar populations, structure, kinematics and chemistry of the Galaxy. This work constructs an empirical stellar library which maps the stellar parameters to multi-band photometries from a dataset with Gaia parallaxes, LAMOST atmospheric parameters, and optical to near-infrared photometry from several photometric surveys. Based on the stellar library, we developed a new algorithm, SPar (Stellar Parameters from multiband photometry), which fits the multi-band stellar photometries to derive the stellar parameters (T_ eff, [Fe/H], M_G, d and E(G_ BP-G_ RP)) of the individual stars. The algorithm is applied to the multi-band photometric measurements of a sample of stars selected from the SMSS survey, which have stellar parameters derived from the spectroscopic surveys. The stellar parameters derived from multi-band photometries by our algorithm are in good agreement with those from the spectroscopic surveys. The typical differences between our results and the literature values are 170 K for T_ eff, 0.23 dex for [Fe/H], 0.13 mag for M_G and 0.05 mag for E(G_ BP-G_ RP). The algorithm proved to be robust and effective and will be applied to the data of future large-scale photometric surveys such as the Mephisto and CSST surveys.
§ INTRODUCTION
Modern large-scale photometric surveys, such as the Sloan Digital Sky Survey (SDSS; ), the Pan-STARRS 1 Survey (PS1; ), the Two Micron All Sky Survey (2MASS; ) and the Wide-Field Infrared Survey Explorer (WISE; ), have provided us board band photometry covering the wavelength from the optical to the infrared (IR) bands of billions of stars. The multi-band photometric measurements of stars contain the spectral energy distribution (SED) information of these stars, from which we can obtain the stellar atmosphere parameters (i.e. the effective temperature , the metallicity and the surface gravity log g), as well as the distance and extinction values of stars <cit.>.
Stellar atmospheric parameters are typically determined from stellar spectra. While large-scale multi-fibre spectroscopic surveys like the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST; ), the Apache Point Observatory Galactic Evolution Experiment (APOGEE; ), the Galactic Archaeology with HERMES (GALAH, ), and the Dark Energy Spectroscopic Instrument (DESI; ) have provided spectra for tens of millions of stars, we can now determine the parameters of billions of stars with comparable precision using photometric data of exceptionally high accuracy or data obtained through specially designed filters. This approach allows us to obtain accurate stellar parameters without relying solely on spectroscopic data. For example, <cit.> have measured metallicities of ∼ 27 million stars from the Gaia Early Data Release 3 (Gaia EDR3; ) photometric data of unprecedented millimagnitude precision. The typical metallicity precision is about δ[Fe/H] = 0.2 dex. Based on the narrowband photometry from the SkyMapper Southern Survey (SMSS; ), <cit.> have determined stellar atmospheric parameters for ∼ 24 million stars. The precision of their metallicity estimates has typical values around 0.05 to 0.15 dex. <cit.> obtained stellar parameters (effective temperature, T_ eff, surface gravity, log g, and metallicity, [Fe/H]) from the narrow-band photometries of the J-PLUS survey <cit.>. They have achieved precisions of δT_ eff∼ 55 K, δlog g∼ 0.15 dex, and δ[Fe/H]∼ 0.07 dex, respectively. We note that these works are only for stars located at high Galactic latitudes, where the interstellar extinction is small. For stars at low Galactic latitudes, where the extinction effects are large, the stellar atmospheric parameters need to be estimated along with the stellar extinction values.
The future Multi-channel Photometric Survey Telescope (Mephisto; ) photometric survey and the Chinese Space Station Telescope (CSST; ; ) optical survey will have both high precision and specially designed filters, which will provide great opportunities for us to obtain accurate stellar parameters of billions of stars. Mephisto is a wide-field survey telescope with a 1.6 m primary mirror. It is equipped with three CCD cameras and is capable to image the same patch of sky in three bands simultaneously, which will provide us with real-time colours of stars with unprecedented accuracy. The filters of Mephisto (uvgriz) are similar to those of SkyMapper <cit.>. The Mephisto-W Survey (; Chen et al. in prep.) will target the northern sky of declination (Dec) between -21 and 75 with a coverage of over 27,000 deg^2. CSST is a 2 m space survey telescope, which shares the same orbit as the Chinese Space Station. The CSST optical survey (CSST-OS) will observe a large sky area of ∼ 18,000 deg^2 in seven photometric filters (NUV, u, g, r, i, z and y) covering the wavelengths from the near-ultraviolet (NUV) to near-infrared (NIR).
Deriving the stellar atmospheric parameters as well as the distance and extinction values is a fundamental task for the Mephisto and CSST surveys. In this work, we present a new algorithm,SPar (Stellar Parameters from multiband photometry), to estimate stellar atmospheric parameters, such as the effective temperature () and metallicities (), absolute magnitudes (M_G), distances (d) and reddening values () of a large sample of stars from their multi-band photometries. Previous algorithms such as the Star-Horse <cit.> and the General stellar Parameterized from photometry (GSP-PHot; ) rely on the theoretical stellar models, which may suffer systematic effects <cit.>. One method of dealing with these inaccuracies in theoretical models is to apply empirical corrections based on the observed photometry of stars of known type. <cit.>, <cit.> and <cit.> measure stellar parameters and extinction based on the empirical stellar locus in the colour-colour space. <cit.>, <cit.> and <cit.> calculated the intrinsic colours and reddening values of the individual stars based on a spectroscopic sample selected from the spectroscopic surveys. In this work, we will construct an empirical stellar library which maps the stellar parameters to multi-band photometries from a dataset with Gaia parallaxes, LAMOST atmospheric parameters, and optical to near-infrared photometry from several photometric surveys.
The paper is structured as follows: Section <ref> presents the relevant dataset. Section <ref> describes in detail the empirical stellar library we constructed. Section <ref> describes our algorithm and Section <ref> tests it. Finally, Section <ref> summarizes the algorithm.
§ DATA
As the Mephisto and CSST surveys have not yet started, in the current work we have used photometric data from the SMSS survey for the experiment. This work is based on the Gaia Data Release 3, the broad-band photometry from SMSS, Two Micron All Sky Survey (2MASS; ) and the Wide-field Infrared Survey Explorer survey (WISE; ), and the spectroscopic data from LAMOST, APOGEE and GALAH.
The SMSS is an ongoing photometric survey of the Southern sky <cit.>. The survey depth is between 19.7 and 21.7 mag in six optical bands: u, v, g, r, i and z. In this work, we adopt the data from its second data release (SMSS DR2; ). The photometry has a internal precision of 1 per cent in the u and v bands, and 0.7 per cent in the other four bands (g, r, i and z).
To break the degeneracy of effective temperature (or intrinsic colours) and extinction for the individual stars, we combine the SMSS photometry with the IR photometry of 2MASS and
WISE. The 2MASS survey is a full-sky survey undertaken in three filters: J, H and K_ s. The systematic errors of the 2MASS photometry are estimated to be less than 0.03 mag <cit.>. The WISE survey is a full-sky survey undertaken in four bands: W1, W2, W3 and W4. In the current work, we adopt the AllWISE Source Catalog <cit.>. We use only the data in the W1 and W2 bands, as the W3 and W4 measurements have lower sensitivities and poorer angular resolutions.
The Gaia DR3 <cit.> photometric data and parallax measurements are also adopted to derive the stellar parameters when available. Gaia DR3 was released by the Gaia mission <cit.>. It contains more than a billion sources with five astrometric parameters (position, parallaxes, and proper motions) and three-band photometry (G, and ). The median uncertainty of the parallax is ∼ 0.02 - 0.03 mas for G < 15 mag sources, 0.07 mas at G = 17 mag, and 0.5 mas at G = 20 mag <cit.>. At G = 20 mag, the typical uncertainties for the Gaia DR3 photometries are 6, 108, and 52 mmag for the Gaia G, and bands, respectively <cit.>.
The LAMOST, APOGEE and GALAH spectroscopic data are adopted in the current work for two purposes: to construct the empirical stellar library and to validate the resulting stellar properties. In this work, we use the `LAMOST LRS Stellar Parameter Catalog of A, F, G and K Stars' from the LAMOST data release 8 (LAMOST DR8; ). The catalogue contains stellar atmospheric parameters (T_ eff, log g and [Fe/H]) derived from over 6 million low-resolution spectra. For the APOGEE data, we use the APOGEE stellar parameters catalogue from the SDSS data release 17 (SDSS DR17; ). The catalogue contains stellar atmospheric parameters (T_ eff, log g and [M/H]) derived from over 0.6 million near-IR spectra. For the GALAH data, we adopt its DR3 catalogue <cit.>, which contains stellar parameters (T_ eff, log g and [Fe/H]) of over 0.5 million nearby stars.
§ EMPIRICAL STELLAR LIBRARY
An empirical stellar library is first created based on a sample of stars selected from the LAMOST spectroscopic data and Gaia DR3, for which atmospheric parameters, distances, and extinction values can be well measured. We cross-match the LAMOST and Gaia stars with the optical and near-IR photometric data, i.e. the SMSS, 2MASS and WISE, using a radius of 1 arcsec. To exclude the stars with bad observations, a sample of stars are selected by the criteria as below:
* LAMOST spectral signal-to-noise ratio (SNR) larger than 20 and effective temperature between 4000 and 8000 K,
* All Gaia G photometric errors smaller than 0.1 mag and
phot_bp_rp_excess_factor > 1.3 + 0.06(G_ BP - G_ RP)^2,
* Photometric errors in any SMSS uvgriz, 2MASS JH and WISE W1W2 bands less than 0.1 mag.
To obtain the absolute magnitude of our selected stars in each filter, the extinction values in the individual filters and the distance of each star need to be calculated.
§.§ Correct the extinction effect of the individual stars
In this work, the reddening values of our sample stars are calculated with a star-pair method <cit.>. In selecting the control sample for extinction correction, we impose more rigorous constraints on the signal-to-noise ratio (SNR) of the LAMOST spectra and on the photometric accuracies than for the empirical stellar library, and only low-extinction stars are chosen. The control stars are selected via the following criteria:
* LAMOST spectral signal-to-noise ratio (SNR) larger than 50,
* Gaia G photometric errors smaller than 0.01 mag, all SMSS uvgriz, 2MASS JH and WISE W1W2 bands photometric errors less than 0.08 mag,
* Reddening values from the extinction map of <cit.> smaller than 0.025 mag.
For a given target star in our sample, its intrinsic colours, (G_ BP - x)_0 (where x denotes the magnitude in another band, i.e., G, u, v, g, r, i, z, J, H, K_ s, W1 or W2), are estimated simultaneously from the corresponding values of the pair stars in the control sample. The reddening values E(G_ BP -x) of the target star are then obtained from the differences between its observed and intrinsic colours, i.e. E(G_ BP -x) = (G_ BP - x) - (G_ BP - x)_0.
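To make the procedure concrete, the following minimal Python sketch illustrates the star-pair estimate of an intrinsic colour; the function name, the parameter windows and the plain mean are illustrative assumptions, not the exact implementation adopted in this work.

import numpy as np

def intrinsic_colour_starpair(teff, logg, feh, ctrl_teff, ctrl_logg, ctrl_feh,
                              ctrl_colour, dteff=100.0, dlogg=0.5, dfeh=0.1):
    # Select control-sample (low-extinction) stars with atmospheric parameters
    # similar to those of the target star; the window widths are placeholders.
    mask = ((np.abs(ctrl_teff - teff) < dteff) &
            (np.abs(ctrl_logg - logg) < dlogg) &
            (np.abs(ctrl_feh - feh) < dfeh))
    if mask.sum() < 5:
        return np.nan          # too few pairs; such a target would be dropped
    # The intrinsic colour of the target is estimated from the (nearly
    # extinction-free) observed colours of its pairs; a plain mean is used here.
    return ctrl_colour[mask].mean()

# The reddening in this colour then follows as
# E(G_BP - x) = observed_colour - intrinsic_colour_starpair(...).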
Stars with resulting E(G_ BP -x) errors larger than 0.1 mag are excluded from our catalogue. Based on the resultant reddening values E(G_ BP -x), a T_ eff- and reddening-dependent extinction law similar to that of <cit.> has been built. <cit.> obtained the empirical reddening coefficients for the individual colours as a function of T_ eff and reddening. In the current work, we derive the empirical reddening coefficients, defined as R(G_ BP - x) = E(G_ BP - x)/E(G_ BP - G_ RP), as a function of T_ eff and E(G_ BP-G_ RP), adopting the bivariate form:
R(G_ BP - x) = C_0+C_1x+C_2x^2+C_3y+C_4xy+C_5y^2,
where x = T_ eff - 6000 and y = E(G_ BP-G_ RP) - 0.5. The resulting reddening values we derived from the star-pair method are used to fit Eq. <ref> to obtain the individual coefficients C_0 to C_5. The fitting results are listed in Table <ref>.
Finally, we assume that A_W2/E(G_ BP-G_ RP) = 0.063 <cit.>, since the W2 band has the longest wavelength and experiences the least amount of extinction of all the bands.
Combining this anchor with the extinction coefficient relations we derived, the extinction value in each filter can be calculated from the reddening value of each sample star. The extinction values are then subtracted to obtain the intrinsic magnitudes of the stars. In the left and middle panels of Fig. <ref>, we show the observed and intrinsic colour-magnitude diagrams (CMDs) of our sample stars, respectively.
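The following Python sketch illustrates how the reddening-coefficient relation and the A_W2 anchor can be combined to obtain the extinction in an arbitrary band; the coefficient values are placeholders for the entries of Table <ref> and the helper names are ours.

import numpy as np

def reddening_coefficient(coeffs, teff, ebr):
    # R(G_BP - x) = C0 + C1*x + C2*x^2 + C3*y + C4*x*y + C5*y^2,
    # with x = Teff - 6000 and y = E(G_BP - G_RP) - 0.5.
    c0, c1, c2, c3, c4, c5 = coeffs
    x = teff - 6000.0
    y = ebr - 0.5
    return c0 + c1 * x + c2 * x**2 + c3 * y + c4 * x * y + c5 * y**2

def band_extinction(coeffs_x, coeffs_w2, teff, ebr):
    # Since E(G_BP - x) = A_BP - A_x and A_W2 = 0.063 * E(G_BP - G_RP), we have
    # A_x = A_W2 + E(G_BP - W2) - E(G_BP - x)
    #     = (0.063 + R(G_BP - W2) - R(G_BP - x)) * E(G_BP - G_RP).
    r_x = reddening_coefficient(coeffs_x, teff, ebr)
    r_w2 = reddening_coefficient(coeffs_w2, teff, ebr)
    return (0.063 + r_w2 - r_x) * ebr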
§.§ The empirical HR diagrams
We then obtain the absolute magnitudes of the sample stars. The distances of the sample stars are calculated from the Gaia DR3 parallaxes via a simple Bayesian approach <cit.>. A simple posterior probability is adopted:
p(d|ϖ) = d^2 exp(-(ϖ-ϖ_ zp - 1/d)^2/(2σ^2_ϖ)) p(d),
where σ_ϖ and ϖ _ zp are respectively the error and the global zero point of the Gaia parallaxes, and p(d) is the space density distribution prior for the sample stars. In the current work, a zero point of ϖ _ zp = -0.026 mas from <cit.> is adopted. The Galactic structure model of <cit.> is adopted as the spatial density distribution prior. With the resultant distances, the absolute magnitude in each band is then calculated by M_x = x - 5log d + 5 - A_x. Stars with parallax errors larger than 20 per cent are excluded. This leads to a final sample of 3,842,671 stars. In the sample, more than 3.5 million stars have all Gaia absolute magnitude estimates, over 3.2 million stars have all IR-band (2MASS JHK_ s and WISE W1W2) absolute magnitudes, and over 0.3 million stars have all SMSS absolute magnitude estimates. In the right panel of Fig. <ref>, we show the resulting Hertzsprung–Russell diagram (HRD) of our final sample stars.
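A minimal Python sketch of this distance estimate is given below; the grid resolution and the flat prior are placeholders (the actual prior is the Galactic structure model mentioned above).

import numpy as np

def distance_posterior(parallax, parallax_err, prior, d_grid, zp=-0.026):
    # Posterior over distance evaluated on a grid (d in kpc, parallax in mas),
    # following the expression above; `prior` is p(d) evaluated on the same grid.
    loglike = -0.5 * ((parallax - zp - 1.0 / d_grid) / parallax_err) ** 2
    post = d_grid ** 2 * np.exp(loglike) * prior
    return post / np.trapz(post, d_grid)

# Example: a point estimate from the posterior mode
d_grid = np.linspace(0.01, 20.0, 4000)      # kpc
prior = np.ones_like(d_grid)                # flat prior as a placeholder
post = distance_posterior(1.2, 0.05, prior, d_grid)
d_best = d_grid[np.argmax(post)]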
The final sample of stars we obtained above is too large and not evenly distributed across the parameter space. Therefore, we use this sample to create a gridded stellar library. To map the stellar parameters, namely the effective temperature (T_ eff), metallicity ([Fe/H]), and Gaia G-band absolute magnitude (M_G), to the absolute magnitude in each filter, we divide the parameter space into 100 bins for T_ eff (ranging from 4000 to 8000 K), 50 bins for [Fe/H] (ranging from -2.5 to 0.5 dex), and 100 bins for M_G (ranging from 8 to -4 mag). We use a machine learning algorithm called Random Forest regression to obtain the absolute magnitudes in each passband for each bin, using the final sample stars as the training dataset. We exclude any grids with fewer than 5 stars, resulting in an empirical stellar library with 39,905 grids in the parameter space. In Fig. <ref>, we present the Hertzsprung-Russell diagrams (HRDs) of both the final sample stars and the resultant gridded stellar library.
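As an illustration, the gridded library can be built with a standard Random Forest regressor as sketched below (Python, assuming scikit-learn); the hyperparameters shown are not specified in the text and are placeholders.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

def build_grid_library(params, abs_mags, grid_points):
    # params:      (N, 3) array of [Teff, [Fe/H], M_G] for the sample stars
    # abs_mags:    (N, n_bands) array of absolute magnitudes (training targets)
    # grid_points: (M, 3) array of grid centres in (Teff, [Fe/H], M_G)
    rf = RandomForestRegressor(n_estimators=100, n_jobs=-1)
    rf.fit(params, abs_mags)                 # multi-output regression
    library = rf.predict(grid_points)        # (M, n_bands) template magnitudes
    # Grid cells containing fewer than 5 sample stars are removed afterwards
    # (not shown here).
    return library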
We performed a comparative analysis between our empirical stellar library and the PARSEC theoretical stellar isochrones <cit.> as a means of verifying our results. To achieve this, we linearly interpolated the PARSEC absolute magnitudes into our , and M_G grids, and subsequently compared them with our corresponding results. As illustrated in Fig. <ref>, the results of this comparison indicate that our stellar library is in good agreement with the PARSEC theoretical models. Specifically, the mean values of the differences between our empirical library and the PARSEC models ranged between -0.04 and 0.04 mag, with dispersions of 0.01 to 0.03 mag observed for most of the filters. We note a slight increase in the dispersions for the WISE W1W2 bands, which ranged between 0.04 - 0.05 mag, and the SMSS uv bands, which show dispersions of 0.08 - 0.11 mag. We attribute the increase in dispersion to the relatively large calibration errors and uncertainties associated with the filter response curves of these particular filters. For the u band, we find a slightly higher mean difference of -0.065 mag than other filters. This difference could be due to a variety of factors, including notable photometric error, extinction, and model uncertainty in the passband.
§ ESTIMATING STELLAR PARAMETERS FROM MULTI-BAND PHOTOMETRIES
This section introduces how we derive the stellar parameters from the multi-band photometries. To allow our algorithm SPar to be applied to large samples of billions of stars, we need to minimise the computational cost. Our algorithm fits only four stellar parameters: T_ eff, [Fe/H], M_G and E(G_ BP-G_ RP). The distances d of the stars can be further derived from the fitting results. SPar uses an ensemble Markov chain Monte Carlo (MCMC) method to obtain the best parameters of the individual stars, adopting a set of initial values derived from a minimum χ^2 method.
§.§ Initial parameters from the minimal χ^2 method
Based on the reference stellar library and extinction law derived in Sect. <ref>, assuming a reddening value, we can predict the `distance-modulus corrected' magnitudes in the individual filters M^'_x of stars, i.e., M^'_x = M_x + A_x. The distance modulus μ can then be derived by subtracting M^'_x from the observed magnitude m_x of the stars: μ = m_x - M^'_x. By substituting the resulting distance modulus into the standard magnitude equation, m_x = M_x + A_x + μ, we can simulate the magnitudes of the stars in the individual passbands. With T_ eff, [Fe/H], M_G and E(G_ BP-G_ RP) as free parameters, we can model the stellar observed magnitude in each filter. We define
χ^2 = 1/(N-K) ∑^N_x=1((m^ obs_x-m^ mod_x)/σ_x)^2,
where m^ obs_x and m^ mod_x are respectively the observed and simulated magnitudes of the filter x, σ_x are the photometric errors, N and K is the number of adopted filters and free parameters, respectively.
If Gaia parallax exists,
χ^2 = 1/(N+1-K)(∑^N_x=1((m^ obs_x-m^ mod_x)/σ_x)^2 + ((ϖ_ obs +ϖ_ zp -ϖ_ mod)/σ_ϖ)^2),
where ϖ_ obs and ϖ_ mod are respectively the observed and simulated parallax, σ_ϖ is the parallax error, ϖ_ zp (ϖ_ zp≡ -0.026 mas) is the zero point of the observed parallax.
We search for the minimal χ^2 parameters by running over a series of E(G_ BP-G_ RP) values ranging from -0.1 to 6.0 in steps of 0.02 mag and over all grids in the reference stellar library. We use only the optical filters, i.e. Gaia G and SMSS gri, to derive the distances of the stars. This procedure yields best-fit values of T_ eff, [Fe/H], M_G and E(G_ BP-G_ RP) for the individual stars, which are adopted as the initial parameters of the following MCMC analysis. If the resulting χ^2 is too large (χ^2 > 10), the stellar parameters in our template library do not fit the observed values well. It is possible that the star concerned is not a normal AFGK star, and such a star is not used further in the subsequent MCMC analysis.
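The χ^2 grid search can be sketched as follows (Python); the per-band extinction helper, the median-based distance-modulus estimate and the array layout are simplifying assumptions of this sketch rather than the exact implementation.

import numpy as np

def chi2_grid_search(m_obs, m_err, library_mags, extinction_fn,
                     fit_bands, dist_bands,
                     parallax=None, parallax_err=None, zp=-0.026):
    # library_mags: (M, n_bands) absolute magnitudes of the template grid.
    # extinction_fn(ebr) returns the per-band extinction A_x for a given
    # E(G_BP - G_RP) (assumed helper; in practice it also depends on the
    # template Teff through the reddening coefficients).
    ebr_grid = np.arange(-0.1, 6.0 + 1e-9, 0.02)
    best = (np.inf, -1, 0.0, 0.0)
    for ebr in ebr_grid:
        Mp = library_mags + extinction_fn(ebr)   # "distance-modulus corrected"
        # distance modulus from the optical bands (Gaia G and SMSS gri);
        # a median over those bands is used here as a simple stand-in
        mu = np.median(m_obs[dist_bands] - Mp[:, dist_bands], axis=1)
        m_mod = Mp + mu[:, None]
        resid = (m_obs[fit_bands] - m_mod[:, fit_bands]) / m_err[fit_bands]
        chi2 = np.sum(resid ** 2, axis=1)
        dof = len(fit_bands) - 4
        if parallax is not None:
            plx_mod = 10.0 ** (3.0 - 0.2 * (mu + 5.0))   # model parallax in mas
            chi2 = chi2 + ((parallax + zp - plx_mod) / parallax_err) ** 2
            dof += 1
        chi2 = chi2 / dof
        i = int(np.argmin(chi2))
        if chi2[i] < best[0]:
            best = (chi2[i], i, ebr, mu[i])
    return best   # (reduced chi^2, template index, E(G_BP-G_RP), distance modulus)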
§.§ Final parameters from the MCMC analysis
In order to determine the final parameters and their uncertainties for the individual stars, we adopt the MCMC procedure described in <cit.>. The initial values for the effective temperature T_ eff, metallicity [Fe/H], absolute magnitude M_G, and reddening E(G_ BP-G_ RP) are set according to the values derived in Sect. <ref>. The likelihood is defined as follows:
L = ∏^N_x=1 1/(√(2π)σ_x) exp(-(X^ obs_x-X^ mod_x)^2/(2σ_x^2))
where X^ obs_x and X^ mod_x are respectively the observed and simulated magnitudes of the filter x, or parallax if available.
To run the MCMC analysis, we use 10 walkers and chains of 20 steps, discarding the first 5 steps as burn-in. The final parameters are taken as the 50th percentile values of the posterior distributions, and their uncertainties are obtained from the 16th and 84th percentile values.
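A minimal sketch of this step is given below (Python, assuming the emcee ensemble sampler, which the text does not name explicitly); the Gaussian log-likelihood corresponds to the likelihood defined above and the model function is left abstract.

import numpy as np
import emcee

def log_likelihood(theta, obs, err, model_fn):
    # theta = (Teff, [Fe/H], M_G, E(G_BP-G_RP)); model_fn(theta) predicts the
    # observed magnitudes (and the parallax, if available) for these parameters.
    model = model_fn(theta)
    return -0.5 * np.sum(((obs - model) / err) ** 2 + np.log(2 * np.pi * err ** 2))

def run_spar_mcmc(theta_init, obs, err, model_fn, nwalkers=10, nsteps=20, nburn=5):
    ndim = len(theta_init)
    # walkers start in a small ball around the minimum-chi^2 solution
    p0 = theta_init + 1e-3 * np.random.randn(nwalkers, ndim)
    sampler = emcee.EnsembleSampler(nwalkers, ndim, log_likelihood,
                                    args=(obs, err, model_fn))
    sampler.run_mcmc(p0, nsteps)
    chain = sampler.get_chain(discard=nburn, flat=True)
    lo, med, hi = np.percentile(chain, [16, 50, 84], axis=0)
    return med, med - lo, hi - med    # parameters and asymmetric uncertainties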
§ TESTS OF OUR ALGORITHM
In this section, we will test the SPar algorithm. We cross-match the SMSS data with the LAMOST, APOGEE and GALAH spectroscopic data, and have obtained a test sample of stars with parameter measurements from the spectroscopic data. The SPar algorithm is then applied to the sample stars to obtain their parameters, which are then compared with those obtained from the spectra. The test sample contains 1,046,722 stars, of which 408,671, 200,902 and 437,149 stars are from the LAMOST, APOGEE and GALAH surveys, respectively.
Fig. <ref> displays the distribution of χ^2 values for all sources included in the test sample. A typical χ^2 distribution is observed, with a prominent peak at 2 and a long tail. Some stars in the sample exhibit a large χ^2 value, indicating that our empirical models are unable to effectively match their observed data. This may be due to the atmospheric parameters of these stars lying outside the range of our models. To optimize computational efficiency, we have excluded these stars from subsequent MCMC analysis. Consequently, by adopting a conservative χ^2 threshold of less than 10 in our study, we have retained 96% of LAMOST sources, 85% of APOGEE sources, and 97% of GALAH sources.
After conducting the MCMC analysis for the remaining 982,927 stars, we obtained their final parameters. To evaluate the accuracy of our algorithm, we first compared the differences between our predictions and the observations. We simulated the observed magnitudes of the individual stars at various wavelengths and their parallaxes based on the MCMC results, which were compared with the observed data presented in Fig. <ref>. Our simulations show good agreement with the observational results. The mean values of the differences are negligible. The dispersions of the differences are about 0.020-0.024 mag for the optical magnitudes (G g r i z), 0.034 - 0.039 mag for the UV and near-IR magnitudes (u v J HW1 W2), and 0.042 mas for parallax, respectively.
§.§ Comparison of the resulting parameters
In this section, we compare the stellar parameters obtained by SPar to those obtained from the spectroscopic surveys in order to test the accuracy of SPar.
Fig. <ref> shows the comparisons of effective temperatures and metallicities between the current work and the spectroscopic surveys, including LAMOST, APOGEE, and GALAH. The effective temperatures from SPar and LAMOST are in good agreement, without any obvious trend with temperature. The dispersion of the difference between the effective temperatures is only 170 K. Compared to LAMOST, APOGEE has more low-temperature stars and fewer high-temperature stars. Some of the stars in APOGEE have T_ eff below 4000 K, which is outside the temperature range of our empirical stellar templates. For those low-temperature stars, we would overestimate their temperatures. Regarding the GALAH data, our effective temperatures are systematically higher than the GALAH values for high-temperature stars. The possible reason for this is that the effective temperatures measured by GALAH are systematically higher than those measured by LAMOST for stars of high temperatures, as discussed in <cit.>.
The sensitivity of the SkyMapper uv filters to stellar metallicities enables accurate measurements of [Fe/H] based on the SMSS multiband photometric data, as illustrated in Fig. <ref>. Our resulting metallicities show good agreement with those obtained from the LAMOST, APOGEE, and GALAH surveys, even for extremely metal-poor stars with [Fe/H] ∼ -2.5 dex. However, the dispersion of the differences is relatively high, with values of 0.23, 0.28, and 0.24 dex, respectively, for the LAMOST, APOGEE, and GALAH measurements. These values are larger than the dispersion values reported by <cit.> (0.05 to 0.15 dex). This difference may be attributed to the fact that <cit.> restricted their analysis to stars at high Galactic latitudes, where dust extinction values are small and readily derived from two-dimensional extinction maps. In contrast, many of the stars in our sample are located in the Galactic disk, which is subject to high extinction effects. Therefore, errors arising from dust extinction hinder accurate determination of the intrinsic colours of stars, leading to relatively large uncertainties in the derived metallicities.
We plotted the differences in effective temperature and metallicities between our work and the spectroscopic surveys against the reddening values of our sample stars in Fig. <ref>. As the reddening values increase, the dispersions of the differences in the two parameters also increase. On average, the mean values of the differences do not vary with the reddening. However, for highly reddened stars, the mean value of the differences in the effective temperature deviates from zero. This may be due to the small number of stars in such regions and the relatively larger errors associated with them.
In addition to the stellar parameters, reddening and distance are also results from SPar. We compare our resultant values with those derived from the star-pair method, our derived distances and those from Gaia DR3 parallaxes for all stars in the test sample in Fig. <ref>. For , the consistencies are good. There are no offsets between our results and those from the spectroscopic results. The dispersion values of the differences for the LAMOST and GALAH stars are about 0.05 mag. While for the APOGEE stars, the dispersion is larger, of about 0.08 mag. This is partly due to the high proportion of stars with large reddening values in APOGEE, and partly due to the fact that there are some low-temperature stars in APOGEE with temperatures below 4000 K. As mentioned above, we may overestimate their effective temperatures, which would lead us to overestimate their reddening values at the same time.
Regarding the distance measurements, our results are consistent with the Gaia measurements, with no significant offsets observed. The dispersions of the relative differences for both LAMOST and GALAH stars are only around 7%, while for APOGEE stars, the value is about 19%. This is because LAMOST and GALAH stars, being mainly dwarfs that are relatively close to us, have accurate parallax measurements from Gaia, resulting in smaller relative distance dispersions. Conversely, the APOGEE catalogue contains many giant stars that are further away, leading to larger parallax measurement errors and, therefore, a larger dispersion in relative distance measurements.
Finally, we compared the absolute magnitudes in the Gaia G band, M_G values obtained from SPar, with those from the three surveys of all the test stars and found that, in general, the agreement is good (Fig. <ref>). The dispersion value of the differences is about 0.35 mag, but the parallax error has a significant impact on the accuracy of the distances, and therefore, it has a great impact on the accuracy of the absolute magnitude we obtain. For stars with small relative parallax errors (less than 5%), the dispersion value of the M_G difference is only 0.13 mag. However, for stars with relative parallax errors greater than 20%, these sources exhibit significant dispersion, and the dispersion value of the M_G difference is 1.62 mag.
§.§ Comparison of results with longer MCMC chains
To enable SPar to run on large samples of stars, we employed a smaller number of chains and steps in the final MCMC analysis phase. To evaluate the effect of chain and step numbers on the results, we randomly selected 30 sources (10 sources each from LAMOST, APOGEE, and GALAH) and performed 100 walkers and 1000 step chains for each of these sources. The results obtained from SPar and longer chains and steps are presented in Fig. <ref>. Overall, there are good agreements between the obtained parameters. However, four sources showed relatively large deviations in , mainly due to the large uncertainties. Despite this, we believe it is reasonable for SPar to use relatively short chains and steps. Using longer chains and steps can significantly increase the computational time without bringing any significant improvement in the results.
§ SUMMARY
In this work, we have developed a new algorithm called SPar to derive stellar parameters from multi-band photometries, which can be applied to large samples of stars. The algorithm takes advantage of empirical stellar libraries constructed from Gaia, LAMOST, and other photometric surveys. It leverages the minimum χ^2 fit of the stellar SEDs to obtain the initial values for the MCMC analysis, which results in the stellar parameters, including T_ eff, [Fe/H], and M_G, of the individual stars. Our algorithm is tested on the LAMOST, APOGEE, and GALAH stars. The typical dispersion values of the differences between our results and literature values were 170 K for T_ eff, 0.23 dex for [Fe/H], 0.13 mag for M_G and 0.05 mag for E(G_ BP-G_ RP).
In the future, our new SPar algorithm will be implemented on large samples of stars obtained from the Mephisto and CSST surveys to derive atmospheric parameters, distances, and extinction values for billions of stars. This will give us crucial insights into the structure, chemistry, and other properties of the Milky Way.
§ ACKNOWLEDGEMENTS
We would like to thank the referee for providing us with detailed and constructive feedback that has significantly enhanced the quality of the manuscript. We are grateful to Professor Biwei Jiang for help and discussion. This work is partially supported by the National Key R&D Program of China No. 2019YFA0405500, National Natural Science Foundation of China 12173034, 11833006, 12203016 and 12173013, Natural Science Foundation of Hebei Province No. A2022205018, A2021205006, 226Z7604G, and Yunnan University grant No. C619300A034, and Science Foundation of Hebei Normal University No. L2022B33. We acknowledge the science research grants from the China Manned Space Project with NO. CMS-CSST-2021-A09, CMS-CSST-2021-A08 and CMS-CSST-2021-B03. We are grateful for the support of the Postdoctoral Research Station in Physics at Hebei Normal University.
This research made use of the cross-match service provided by CDS, Strasbourg.
Guoshoujing Telescope (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope LAMOST) is a National Major Scientific Project built by the Chinese Academy of Sciences. Funding for the project has been provided by the National Development and Reform Commission. LAMOST is operated and managed by the National Astronomical Observatories, Chinese Academy of Sciences.
This work presents results from the European Space Agency (ESA) space mission Gaia. Gaia data are being processed by the Gaia Data Processing and Analysis Consortium (DPAC). Funding for the DPAC is provided by national institutions, in particular the institutions participating in the Gaia MultiLateral Agreement (MLA). The Gaia mission website is https://www.cosmos.esa.int/gaia. The Gaia archive website is https://archives.esac.esa.int/gaia.
The national facility capability for SkyMapper has been funded through ARC LIEF grant LE130100104 from the Australian Research Council, awarded to the University of Sydney, the Australian National University, Swinburne University of Technology, the University of Queensland, the University of Western Australia, the University of Melbourne, Curtin University of Technology, Monash University and the Australian Astronomical Observatory. SkyMapper is owned and operated by The Australian National University’s Research School of Astronomy and Astrophysics. The survey data were processed and provided by the SkyMapper Team at ANU. The SkyMapper node of the All-Sky Virtual Observatory (ASVO) is hosted at the National Computational Infrastructure (NCI). Development and support the SkyMapper node of the ASVO has been funded in part by Astronomy Australia Limited (AAL) and the Australian Government through the Commonwealth’s Education Investment Fund (EIF) and National Collaborative Research Infrastructure Strategy (NCRIS), particularly the National eResearch Collaboration Tools and Resources (NeCTAR) and the Australian National Data Service Projects (ANDS).
This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation.
This publication makes use of data products from the Widefield Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration.
aasjournal
|
http://arxiv.org/abs/2307.04385v1 | 20230710074314 | Growing Fast without Colliding: Polylogarithmic Time Step Construction of Geometric Shapes | [
"Nada Almalki",
"Siddharth Gupta",
"Othon Michail"
] | cs.DS | [
"cs.DS",
"cs.CG",
"cs.RO"
] |
=1
obsObservation
mylistenvenumerate3
mylist[1]
[mylistenv] leftmargin = 2, label=#1mylistenvi,ref=#1mylistenvi
[mylistenv,2]label=#1mylistenvi.mylistenvii,ref=#1mylistenvi.mylistenvii
[mylistenv,3]label=#1mylistenvi.mylistenvii.mylistenviii.,ref=#1mylistenvi.mylistenvii.mylistenviii
mylist
plainurl
Growing Fast without Colliding
Department of Computer Science, University of Liverpool, [email protected]
Department of Computer Science, University of Warwick, [email protected]
Department of Computer Science, University of Liverpool, [email protected]://orcid.org/0000-0002-6234-3960
N. Almalki, S. Gupta, and O. Michail
Almalki, Gupta, Michail
[100]Theory of Computation → Computational Geometry; Theory of Computation → Design and analysis of algorithms
2
Growing Fast without Colliding: Polylogarithmic Time Step Construction of Geometric Shapes
Othon Michail
August 12, 2023
==========================================================================================
Building on two recent models of Almalki and Michail <cit.> and Gupta et al. <cit.>, we explore the constructive power of a set of geometric growth processes. The studied processes, by applying a sequence of centralized, parallel, and linear-strength growth operations, can construct shapes from smaller shapes or from a singleton exponentially fast. A technical challenge in growing shapes that fast is the need to avoid collisions caused, for example, when the shape breaks, stretches, or self-intersects. We distinguish two types of growth operations —one that avoids collisions by preserving cycles and one that achieves the same by breaking them— and two types of graph models. We study the following types of shape reachability questions in these models. Given a class of initial shapes ℐ and a class of final shapes ℱ, our objective is to determine whether any (some) shape S ∈ℱ can be reached from any shape S_0 ∈ℐ in a number of time steps which is (poly)logarithmic in the size of S. For the reachable classes, we additionally present the respective growth processes. In cycle-preserving growth, we study these problems in basic classes of shapes such as paths, spirals, and trees and reveal the importance of the number of turning points as a parameter. We give both positive and negative results. For cycle-breaking growth, we obtain a strong positive result —a general growth process that can grow any connected shape from a singleton fast.
§ INTRODUCTION
In recent years, the connection between algorithmic frameworks and the natural world has become increasingly evident and is opening up new research avenues. The principles and mechanisms underlying biological systems can often be modeled using computational approaches. This has led to the development of new computational frameworks and models inspired by biological systems. Examples are brain computation <cit.>, passively-dynamic systems <cit.>, and mobile robotics <cit.>. Recent research on programmable matter <cit.> is concerned with the algorithmic control of physical properties of programmable materials, such as their shape.
A set of recent models in the theory of DNA self-assembly and reconfigurable robotics have attempted to incorporate the concept of growth, which is a fundamental process in organisms. The processes that can be described in those models mimic the process of growth and development in biology. This, on one hand, enables the efficient algorithmic construction of complex shapes and structures and on the other might give insight into some of the algorithmic properties underlying biological systems.
Advances in geometric algorithms have led to significant progress in the theory of modular robotics and self-reconfigurable systems. The underlying systems consist of small, simple, and interchangeable components that can reconfigure themselves into various shapes and structures <cit.>. The efficient construction of geometric shapes is an important algorithmic objective in this context. This work, building on the models of Almalki and Michail <cit.> and Gupta et al. <cit.> further explores the algorithmic and structural properties of geometric growth processes.
§.§ Our Approach and Contribution
We explore the properties of a growth process that was proposed and largely left open in <cit.>.
It is the most general of the growth processes studied in <cit.> and the one in which there is no a priori restriction on the set of nodes that can grow in a given time step. Two different types of this process and its underlying growth operations can be identified: cycle-preserving growth and cycle-breaking growth. Intuitively, the former avoids collisions by preserving cycles, and the latter achieves the same by breaking them. For these two types of growth processes, the present study revolves around the following types of shape-reachability problems:
Given a class of initial shapes ℐ and a class of final shapes ℱ, determine whether any (some) shape S ∈ℱ can be reached from any shape S_0 ∈ℐ in a number of time steps which is (poly)logarithmic in the size of S. In case of a positive answer, we additionally want to provide the respective growth process.
All studied processes and constructions in this paper are centralized. We typically solve a given instance of the problem by designing a parameterized growth process —i.e., a centralized schedule of parallel growth operations— that works for all pairs of input-output shapes in the respective classes. Lower bounds for specific classes of shapes are established by proving that any growth process would fail to be efficient for all pairs of input-output shapes drawn from these classes. Distributed solutions fall beyond the scope of the present paper and form an interesting direction for future research. The main reason for adopting a centralized perspective is that both the centralized and distributed properties of such processes remain largely unexplored and the centralized setting is a more natural starting point. Centralized lower bounds immediately hold in the distributed case and centralized upper bounds can hint at first —possibly inefficient— distributed solutions.
Collision avoidance is a core technical challenge in coming up with exponentially fast growth schedules. Note that if the requirement to avoid collisions —and a few other modeling assumptions related to collisions— was dropped, it would become straightforward to grow some classes of shapes that are otherwise hard to grow fast. For example, any spanning tree —and consequently any connected shape with such a spanning tree— having a bounded number of turning points on every root-to-leaf path could be grown as follows. We would first grow the tree of turning points by a parallel BFS, each time step t generating the turning points at turning-point-distance t from the root. This is linear in the maximum number of turning points on a path and possibly violates the requirement of nodes being collocated. We would then grow in parallel all segments between consecutive turning points to grow the tree to its final size. The latter can be done in time logarithmic in the length of the longest segment. Again, parallel growth could cause intersections between branches of the tree that we have now ignored. Overall, we would pay a logarithmic number of time steps. It will become evident that in the presence of collisions —and it is necessary to take collisions into account for practical implementations— more elaborate approaches are needed to get fast growth of shape classes as basic as paths and trees.
For cycle-preserving growth, in both the adjacency and connectivity graph models, we show that different graph classes can be constructed within (poly)log n time steps, n being the size of the final shape throughout.[It is important to note that we employ two distinct notions of time. The first refers to the time steps involved in the growth process, while the second refers to the running time of a centralized algorithm responsible for determining reachability between shapes and providing corresponding schedules. To maintain clarity, we will consistently differentiate between these two concepts, referring to the former as time steps and the latter as time.]
For path shapes characterized by a parameter k, which represents the number of turning points on the path, we prove that Ω (k log k) time steps are required to grow them from a singleton.
For cycle-breaking growth, our main contribution is a general algorithm that gives a growth schedule for any connected shape from a singleton. All schedules generated by the algorithm reach their final shape exponentially fast. We also study the weaker version of the shape-reachability problem and prove that any connected shape can be transformed into a tree within two time steps only.
In Section <ref>, we formally define the considered growth models and problems. In Section <ref>, we present our results for cycle-preserving growth in the adjacency graph model (Section <ref>) and the connectivity graph model (Section <ref>). In Section <ref>, we study the cycle-breaking type of growth processes. A weaker type of reachability is discussed in Section <ref>. In Section <ref>, we conclude and give further research directions opened by our work.
§ MODELS AND PRELIMINARIES
§.§ The Growth Models
The models studied in this paper build on the models of <cit.> and <cit.>. We consider a 2-dimensional square grid. Each grid point
is identified by its x and y coordinates, where x ≥ 0 indicates the column and y ≥ 0 indicates the row. A shape S is defined by a set of nodes and a set of connections between the nodes. Each node u occupies
a grid point (u_x,u_y) and is represented by a circle drawn on that point.
For a set of nodes V, two nodes u=(u_x, u_y) and v=(v_x, v_y) in the set are adjacent if u_x∈{v_x-1,v_x+1} and u_y=v_y or u_y∈{v_y-1,v_y+1} and u_x=v_x, that is if they are one orthogonal distance apart.
Nodes can only be connected —in which case we also call them neighbors— if they are adjacent. We consider two models of shape connectivity. One is based on the adjacency graph and the other on the connectivity graph, which can be any subgraph of the adjacency graph. For a shape S defined by the adjacency graph on a set of nodes V, we have S=(V,A) where A={uv | u,v∈ V and u,v are adjacent}.
For a shape S defined by a connectivity graph on a set of nodes V, we have S=(V, E) where E⊆ A. A shape S is connected if its graph is a connected graph. We restrict attention to connected shapes. We use n or |S| to denote |V|, i.e., the total number of nodes in a given shape S=(V,E).
Any connected shape S on the grid defines an (orthogonal) polygon that forms the external boundary of S. By the Jordan curve theorem <cit.>, the external boundary of S partitions the grid into an interior and an exterior of S. If a set of points H is a subset of the interior of S and shares no point with the external boundary of S, then we call H fully/strictly enclosed in the external boundary of S. Given a connected shape S, a hole of S is a maximal connected shape of unoccupied points H, strictly enclosed in the external boundary of S. A connected shape S with no holes is called compact. A row (column) of a shape S is the set of all nodes of S with the same y coordinate (x coordinate, resp.).
A growth operation (also called doubling in <cit.> and expansion in <cit.>) applied on a node u of a shape S, generates a new node in one of the points adjacent to u and possibly translates some part of the shape.
In general, applying one or more growth operations to a shape S either causes a collision or yields a new shape S'. There are two types of collisions: node collisions and cycle collisions.
Unless otherwise stated, we shall assume without loss of generality (abbreviated “w.l.o.g.” throughout) that there is an anchor node u_0∈ V that is stationary and other nodes move relative to it. This is sufficient because the constructed shapes are considered to be equivalent up to translations and their final absolute coordinates are not important for our purposes. To simplify the exposition, we first define growth operations for tree shapes and then generalize to any connected shape.
Let the shape be a tree T=(V,E). A single growth operation is applied on a node u∈ V toward a point (x,y) adjacent to u. If point (x,y) is occupied by a node v and uv∉ E then a collision occurs. The remaining cases are (i) (x,y) is empty, (ii) (x,y) is occupied by a node v and uv∈ E. We first define the effect in each of these cases when neighbor handover is not allowed. In case (i), the growth operation generates a node u' at the empty point (x,y) and connects it to u. In case (ii), assume w.l.o.g. that u is closer to u_0 in T than v.
Let T(v) denote the subtree of T rooted at node v. Then, the operation generates a node u' between u and v, connected to both, which translates T(v) by one unit away from u along the axis parallel to uv. After this, u' occupies (x,y) and uv has been replaced by {uu',u'v}. If neighbor handover is allowed, then any neighbor w of u perpendicular to uu' can be handed over to u'. This happens by a unit translation of T(w) or T(u) along the axis parallel to uu', depending on which of u,w, respectively, is closer to u_0 in T.
Let R be a set of operations to be applied in parallel to a connected shape S, each operation on a distinct pair of nodes or a node and an unoccupied point.
We assume that all operations in such a set of parallel operations R are applied concurrently, have the same constant execution speed, and their duration is equal to one time step.
Let T=(V,E) be a tree and u_0∈ V its anchor. We set u_0 to be the root of T. We want to determine the displacement of every v∈ V∖{u_0} due to the parallel application of the operations in R. As u_0 is stationary and each operation translates a subtree, only the operations on the unique u_0v path contribute to v's displacement.
In particular, any such operation contributes one of the unit vectors ⟨ -1,0⟩, ⟨ 0,-1⟩, ⟨ +1,0⟩, ⟨ 0,+1⟩ to the motion vector v⃗ of v.
Moreover, for any node v∈ V that doubles toward an empty point, we add a new node v' with a corresponding unit motion vector v⃗.
We can use the set of motion vectors to determine whether the trajectories of any two nodes will collide at any point. This type of collision is called a node collision (see Figure <ref>).
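For illustration, the following Python sketch computes the motion vectors of all nodes of a tree under a parallel set of operations; the encoding of the tree as a parent map and of each subtree-translating operation by the child endpoint of the affected edge is ours and is not fixed by the model.

def motion_vectors(parent, anchor, operations):
    # parent: dict mapping each node to its parent in T (the anchor maps to None)
    # operations: dict mapping a node v to the unit vector of an operation that
    #             translates the subtree T(v) (an operation on the edge between
    #             v and its parent)
    # Returns the motion vector of every node: the sum of the unit vectors of
    # all such operations on the path from the anchor to that node.
    vec = {}
    def resolve(v):
        if v in vec:
            return vec[v]
        if v == anchor:
            vec[v] = (0, 0)
        else:
            px, py = resolve(parent[v])
            dx, dy = operations.get(v, (0, 0))
            vec[v] = (px + dx, py + dy)
        return vec[v]
    for v in parent:
        resolve(v)
    # A node collision can then be detected by comparing, for every pair of
    # nodes, their translated positions (and, more carefully, the unit cells
    # swept by their trajectories).
    return vec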
Let now S be any connected shape with at least one cycle and any node u_0 be its anchor. Then,
a set of parallel operations R on S either causes a cycle collision or its effect is essentially equivalent to the application of R on any spanning tree of S rooted at u_0.
Let u, v be any two nodes on a cycle. If p_1 and p_2 are the two paths between u and v of the cycle, then v⃗_p_1=v⃗_p_2 must hold:
the displacement vectors along the paths p_1 and p_2 are equal.
Otherwise, we cannot maintain all nodes or edges of the cycle. Such a violation is called a cycle collision as shown in Figure <ref>. We call a set of operations that does not cause any node or cycle collisions collision free.
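A cycle-collision check then amounts to comparing the two path sums, as in the following sketch (Python; the encoding of the per-path operation contributions is illustrative).

def cycle_consistent(path1_ops, path2_ops):
    # path1_ops / path2_ops: lists of unit vectors contributed by the operations
    # applied along the two u-v paths of a cycle. The parallel set of operations
    # avoids a cycle collision on this cycle iff both sums are equal.
    sum1 = (sum(dx for dx, _ in path1_ops), sum(dy for _, dy in path1_ops))
    sum2 = (sum(dx for dx, _ in path2_ops), sum(dy for _, dy in path2_ops))
    return sum1 == sum2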
A growth process starts from an initial shape S_0 —often a singleton— and by applying a sequence of parallel growth operations of a given type, goes through a sequence of shapes until it reaches a target shape. The considered growth processes operate in discrete time steps. In each time step t≥ 1, a set of parallel growth operations —possibly a single operation— are applied on the current shape S_t-1 to give the next shape S_t. To simplify our algorithms and w.l.o.g. we require parallel operations to have the same cardinal direction.
This divides time steps into those with horizontal only and those with vertical only motion and implies that a node gets at most one growth operation per time step.
We consider two general types of growth processes, cycle-preserving growth and cycle-breaking growth. Intuitively, the former type avoids cycle collisions by maintaining all cycles affected by growth operations and the latter by breaking them.
A cycle-preserving growth process applies a collision free set of parallel growth operations R_t to shape-instance S_t-1, for all time steps t≥ 1.
A cycle-breaking growth process additionally removes a —possibly empty— subset of the edges of S_t-1 that does not disconnect the shape, before applying R_t to it. If neighbor handover is allowed, growth of a node u generating a new node u' in direction d can hand any neighbor w of u perpendicular to d over to u'. In the adjacency graph model, at the end of each time step t, edge uv is added for all adjacent nodes u,v that are not connected. In the connectivity graph model, no such edges are added.
For the models of Definition <ref>, the following properties hold:
* Under the connectivity graph model, the growth processes never increase the number of cycles.
* Under the connectivity graph model, if S_0 is a singleton, the processes can only construct tree shapes.
* Under both graph models, the cycle-preserving process never decreases the number of cycles.
* Under the connectivity graph model, the cycle-preserving process preserves the number of cycles.
Property (2) is a special case of (1). Property (4) follows by taking (1) and (3) together. So, it is sufficient to prove properties (1) and (3). We first prove these without neighbor handover. In that case, the cycle-preserving process cannot remove any edges and neither do the graph models, thus property (3) holds. Property (1) follows by observing that, without neighbor handover, the growth processes can only add leaves or increase the length of existing line segments and that the connectivity graph model does not modify any edges. We now show that these remain true when neighbor handover is allowed. Let u be a node on which a growth operation is applied, and u_N,u_E,u_S,u_W its up to 4 neighbors in the respective cardinal directions. Let w.l.o.g. u^'_E be the node generated by the operation in the east direction. The nodes that can be handed over from u to u^'_E are u_N and u_S. If we show that the number of cycles is invariant of handover for both types of processes, then propositions (1) and (3) will follow. It is sufficient to consider those cycles that before applying the operation were using edge u_Nu, uu_S or both. If only u_N is handed over to u^'_E then any cycle using u_Nuu_W is replaced by a cycle using u_Nu^'_Euu_W, any using u_Nuu_E by one using u_Nu^'_Eu_E, and any using u_Nuu_S by one using u_Nu^'_Euu_S. The case is which only u_S is handed over is symmetric. If both u_N and u_S are handed over to u^'_E then the only difference is that any cycle using u_Nuu_S is now replaced by one using u_Nu^'_Eu_S. It follows that there is a one-to-one correspondence between previous and new cycles due to neighbor handover, which gives the required invariant.
It is worth noting that the cycle-breaking growth process is independent of whether the shape is represented using the adjacency or connectivity graph model. In both models, cycle-breaking growth follows the same principles and achieves the same results. However, this is not the case for cycle-preserving growth, as it behaves differently depending on the chosen graph model.
The property of neighbor handover is specific to cycle-breaking growth process, where neighboring nodes are transferred during the growth process.
Furthermore, it is important to highlight that any positive results obtained for the cycle-preserving growth process also apply to the cycle-breaking growth process. However, the reverse is not necessarily true, as the behavior and characteristics of the two operations differ.
§.§ Problem Definitions
The following two reachability problems between classes of shapes are defined for all types of growth processes described in Definition <ref>.
Given a growth model, a class of initial shapes ℐ (possibly consisting only of a singleton), and a class of final shapes ℱ we want to determine if there exists a time bound t=O(log n) or t=(poly)log n for which the following holds.
* Strong Reachability: Any shape in ℱ can be grown in the given model within t time steps from any shape in ℐ.
* Reachability: Starting from any shape in ℐ some shape in ℱ can be grown in the given model within t time steps.
For the reachable or strongly reachable classes, we additionally want to give the respective growth processes.
Some of our results concern shapes drawn from special graph classes, such as paths, spirals, and staircases, which we now define.
A node u_i of a path P=⟨ u_1, u_2, …, u_n ⟩ is called a turning point or turn if either i ∈{1,n} or u_i-1u_i is perpendicular to u_iu_i+1. For uniformity of our arguments, we add the endpoints of P to the set of turning points.
The direction of an internal turning point d(u_i), where 1≤ i < n, is left if the orientation changes from d(u_i) to d(u_i+1) in a counterclockwise manner, and right if the orientation changes from d(u_i) to d(u_i+1) in a clockwise manner.
A staircase is a path whose turning points, when ordered from one endpoint to the other, alternate between two clockwise- or counterclockwise-consecutive cardinal directions.
A spiral S is a path whose turning points, when ordered from one endpoint to the other, follow a continuous and unidirectional sequence of consecutive cardinal directions in either a clockwise or counterclockwise manner.
A fast line growth process begins with a singleton initial shape and by successively doubling all nodes grows a straight path of length n in O(log n) time steps. Fast line growth is used as a sub-process in most of our constructions in order to efficiently grow line segments of a shape. We use the term segment to refer to a line segment of a shape. A fast rectangle growth process —defined similarly— grows any compact rectangular shape of n nodes in O(log n) time steps (see <cit.> for these basic processes).
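A minimal simulation of fast line growth is sketched below (Python); it only records the length and direction of each parallel step, abstracting away the geometry, and the eastward direction is an arbitrary choice.

import math

def fast_line_growth(n):
    # Grows a straight path of n nodes from a singleton by doubling the current
    # segment in every time step; the schedule has ceil(log2 n) time steps.
    length, schedule = 1, []
    while length < n:
        grow = min(length, n - length)     # number of nodes generated in parallel
        schedule.append(('E', grow))       # one parallel eastward growth step
        length += grow
    assert len(schedule) == math.ceil(math.log2(n))
    return schedule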
§ CYCLE-PRESERVING GROWTH PROCESSES
This section presents our results for cycle-preserving growth in the adjacency graph model (Section <ref>) and the connectivity graph model (Section <ref>).
§.§ Cycle-Preserving in the Adjacency Graph Model
We begin with positive results for cycle-preserving growth in the adjacency graph model. Due to cycle-preserving growth being a special case of cycle-breaking growth, positive results for the former immediately hold for the latter.
Assume that a spanning tree T of a shape S has at most k turning points in every root-to-leaf path. Then, if we had access to cycle-breaking growth instead, we could use breadth-first search to grow S in O(klog n) time steps. Starting from the root, all root-to-leaf paths can be grown in parallel. Every such path consists of at most k-1 line segments, each of length at most n, which —by using fast line growth— can be sequentially grown within O(klog n) time steps.
BFS cannot be directly applied under cycle-preserving growth in the adjacency graph model. This is due to the additional cycles that the graph model creates between adjacent segments, making the growth of a segment depend on the growth of the segments adjacent to it.
We now describe a variant of BFS that avoids this by treating adjacent segments differently.
* Consider any tree shape T rooted at u_0.
* The process proceeds in phases. In each phase i≥ 1, we will grow all segments at segment-distance i from the root.
We do this by first growing in parallel the horizontal subset of those segments in a horizontal sub-phase i_h, followed by the vertical ones in a vertical sub-phase i_v.
* Each segment L —either horizontal or vertical— of phase i is grown as follows:
* For any sub-segment s of L which is adjacent to a segment s_past grown in a previous phase, grow s by duplicating s_past. Do this in parallel for all these sub-segments. The remaining sub-segments are then grown in parallel using fast line growth.
* For any sub-segment s of L which is adjacent to a segment s_present that will be grown in the same phase i in parallel to s, we use two stages i_h_even and i_h_odd (i_v_even and i_v_odd for the vertical sub-phase). In i_h_even, we grow the even-row segments followed by the odd-row segments in i_h_odd. We then repeat for the vertical sub-phase.
See Figure <ref> for an illustration of this process.
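A rough time accounting of this BFS variant can be sketched as follows (a simplified, illustrative helper that ignores the geometric bookkeeping of sub-segments and handovers): each phase consists of a constant number of sub-phases (horizontal/vertical, even/odd rows), and within a sub-phase a segment is either duplicated in one time step or grown by fast line growth in O(log n) time steps.

import math

def bfs_variant_time_bound(phase_segment_lengths):
    # phase_segment_lengths[i] = lengths of the segments grown in phase i.
    # Returns an upper bound on the number of time steps, assuming at most
    # 4 sub-phases per phase and fast line growth within each sub-phase.
    total = 0
    for lengths in phase_segment_lengths:
        longest = max(lengths)
        total += 4 * (1 + math.ceil(math.log2(max(longest, 2))))
    return total

# A tree whose root-to-leaf paths have k = 3 turns and segments of length <= n = 1024:
print(bfs_variant_time_bound([[1024], [512, 300], [128, 64, 900]]))   # O(k log n)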
If every root-to-leaf path of a tree T has at most k turns, then the BFS variant grows T in O(k log n) time steps.
To prove the statement we use induction on the number of phases.
For the base case (i=1), the root segment L_1 is the only segment grown at this point, and it covers all paths within segment-distance 1 from the root in T, so the statement holds.
For the inductive step, assume that after i phases the BFS variant has grown every segment at segment-distance at most i from the root of T.
Then, in i+1 phase, we grow the line segment L_i+1 of each path within distance i+1. For each sub-segment s of L_i+1 that is adjacent to a sub-segment s_past grown in a previous phase, we can directly grow s by duplicating s_past in one time step.
For any sub-segment s of L_i+1 that is adjacent to a sub-segment s_present that will be grown in the same phase, we use two sub-phases, let us assume w.l.o.g. that it is a horizontal line segment, then we have i_h_even and i_h_odd. In i_h_even, we grow the even-row sub-segments, and in i_h_odd, we grow the odd-row sub-segments. This ensures that adjacent sub-segments in the same phase are grown sequentially without collisions.
By the induction hypothesis, after i phases we have grown every segment at segment-distance at most i from the root. Each phase consists of a constant number of sub-phases, in which segments are grown either by duplication (one time step) or by fast line growth (O(log n) time steps), so each phase costs O(log n) time steps. Since every root-to-leaf path of T has at most k turns, there are O(k) phases, and therefore T is grown in O(k log n) time steps.
Since all line segments L_1, L_2, …, L_k+1 are constructed using the BFS variant, the structure of the tree is maintained, and no line segments collide with other line segments during the parallel growth.
It follows that:
If a shape S has a spanning tree with at most k turns in every root-to-leaf path, then S can be grown in O(klog n) time steps.
Let S be any shape with a computed spanning tree T(S) having at most k turns on every root-to-leaf path. By Lemma <ref>, T(S) can be grown in O(k log |T(S)|) time steps. By growing T(S), we obtain all the vertices of S, and to create the final shape S, we can add the edges between neighboring vertices in the constructed T(S) in a single time step.
The overall number of time steps is therefore O(k log |T(S)| + 1). Since |T(S)| ≤ |S| = n, this is O(k log n) time steps.
The next proposition makes use of a fast procedure that fills all the holes of a given shape S in order to obtain a compact extension of S.
Given any shape S with at least one hole, there is a sequence of growth operations
of length O(log n) that yields a compact shape S'.
Assume that S has d holes H_1, H_2, …, H_d. We show how a single hole H∈{H_1, H_2, …, H_d} can be filled up in logarithmic time. The statement will then follow by applying this in parallel to all holes.
Hole H is defined by its boundary B(H), which is a closed polygon of nodes and forms an internal boundary of S. The interior H of B(H) is by definition, empty. We show how the nodes of B(H) can be used to efficiently fill H up with nodes.
W.l.o.g., we show how to do this vertically. Let C_1,…, C_k be the consecutive columns containing the empty points of H, say from left to right. Every C∈{C_1,…,C_k} consists of one or more empty vertical segments s_1,…,s_l. Note that all segments defined by the holes of S are pairwise disjoint. Each of the two endpoints of s∈{s_1,…,s_l} is adjacent to a node of B(H). Let (x,y) and (x,y+|s|-1) be the bottom-most and uppermost endpoints of s, and let u and v be their adjacent nodes from B(H) lying below and above, respectively. Starting from u, we apply the BFS variant introduced at the beginning of this section.
We perform a fast line growth process along [(x,y),(x,y+|s|-1)], which generates a path of length |s| connecting u to v within O(log |s|) time steps. By doing this in parallel for all segments of all holes of S, we can make S compact within O(log (max_s{|s|}))=O(log n) time steps.
By combining Corollary <ref> with Proposition <ref> we get:
Any compact shape S whose perimeter has a bounded number of turns can be constructed in O(log n) time steps.
Consider any shape S with a constant number c of turns on its perimeter, and let T(S) be a computed spanning tree of S's perimeter. Since the perimeter is a closed curve, T(S) consists of a single root-to-leaf path with at most c turns. Following Corollary <ref>, we can therefore construct S's perimeter within a logarithmic number of time steps using T(S), without collisions.
Since we consider the adjacency model of S, the missing edge connecting the start and the end of the constructed path is then added, closing S's perimeter. This ensures that the shape S is fully connected and maintains its original adjacency connections.
After constructing the perimeter of S and ensuring its full connectivity, we are left with a shape S that has one hole. To fill this hole and achieve a compact shape, we can apply Proposition <ref>. According to the proposition, the hole-filling process can be completed within at most log n time steps. We conclude that the total time complexity of constructing such a shape S with a constant number of turns in its perimeter is O(log |S|) + O(log n), which equals O(log n) time steps.
A family of shapes denoted by NICE was introduced by Almethen et al. <cit.>. A NICE shape consists of a horizontal line and various vertical lines that are perpendicular to the original horizontal line. This family of shapes can be constructed in logarithmic time steps using a growth operation from a single node.
All NICE shapes can be constructed in O(log n) time steps.
Assume a shape S_NICE∈ NICE of size n, that contains w.l.o.g a central horizontal line L_h with length 1≤ |L_h| ≤ n and a number of vertical lines L_1, L_2, …, L_v of total length 1 ≤ |L_v| < n that are orthogonal to L_h.
To construct S_NICE, we begin growing the horizontal line L_h using fast line growth, which starts from a single node and expands it in log|L_h| time steps. Next, we simultaneously grow all the vertical lines L_1, L_2, …, L_v in parallel. To achieve simultaneous growth of multiple vertical lines without collision, we can employ the BFS variant as described earlier in this section. This ensures that the vertical lines do not collide with each other during the parallel growth.
Alternatively, if S_NICE has a vertical line with a length 1≤ |L_v| ≤ n, we can construct the vertical line first and then grow all the horizontal lines in parallel. It is important to note that since we are performing the cycle-preserving growth under the adjacency model, all the edges between the vertical segments will be added automatically during the growth operation without any form of collision.
The overall time complexity of constructing S_NICE using this method is determined by the time required to grow the longest line segment, which is at most log n time steps.
Any staircase shape S with a bounded number of steps can be constructed in O(log n) time steps.
A staircase is an alternating sequence of turning points and line segments connecting consecutive turning points. It can be uniquely defined by the coordinates of its turning points u_1, u_2, …, u_k. A bounded number of steps implies a bounded number of turning points. Thus, k is a constant. To construct such a staircase fast, we shall first construct the turning points sequentially and then grow in parallel the segments between them.
In the first phase, the k turning points are generated by a sequential —linear time step— process as follows. Starting from node u_1 which is the original singleton, u_i generates u_i+1 in time step i in the direction that respects their relative positions in S. This takes k-1 time steps to generate all turning points. The resulting staircase of turning points is equivalent to the one obtained by compressing all segments of S to unit length, which proves that this phase is collision free.
In the second phase, we grow —through a fast path growth process— all unit segments in parallel to their final length. Due to the geometry of the staircase, this phase is also collision free. It grows all segments within O(log (max_s{|s|})) time steps, where the maximum is over all segments s of the original staircase S. Thus, the whole process runs for (k-1)+O(log (max_s{|s|}))=O(log n) time steps.
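The two-phase schedule of this proof can be summarised in a small sketch (illustrative only, with a hypothetical helper name): given the lengths of the segments between consecutive turning points, the first phase generates the turning points sequentially, and the second expands all segments in parallel by fast path growth.

import math

def staircase_growth_steps(segment_lengths):
    # Time steps to grow a staircase with the given segment lengths:
    # (k-1) sequential steps for the k turning points (endpoints included),
    # then parallel fast growth of all segments to their final length.
    k = len(segment_lengths) + 1
    phase1 = k - 1
    phase2 = math.ceil(math.log2(max(segment_lengths)))
    return phase1 + phase2

print(staircase_growth_steps([100, 3, 250, 8]))   # bounded k, so O(log n) overall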
§.§ Cycle-Preserving in the Connectivity Graph Model
This section defines the class of shapes that can be constructed by cycle-preserving growth in the connectivity graph model.
If a shape S can be grown in k time steps from a singleton u_0 in cycle-preserving growth, then S has a spanning tree T rooted at u_0, such that any root-to-leaf path of T has at most k turns.
Consider any shape S that can be grown in k time steps from a single node u_0. We argue by induction on the time steps: assume that the shape grown up to time step i has a spanning tree T_i, rooted at u_0, whose root-to-leaf paths have at most k turns, where k equals i.
Let us assume w.l.o.g that at the next time step i+1, there is a horizontal cycle-preserving growth operation o_i. Then, the number of turns k in T_i can only be increased after operation o_i by at most one if one of the following cases occurs:
* If a line segment L_i in T_i is split into two line segments.
* If an additional turning point k+1 appears by extending the leaf of the line segment L_i.
For the first case, since o_i is a cycle-preserving growth operation, by definition it cannot split any horizontal or vertical segment L_i. As a result, we keep growing or translating the whole line segment L_i (i.e., the cycle-preserving growth will never increase the number of turns k in the tree T_i, because it preserves all edges when growing any line segment), as shown in Figure <ref>.
For the second case, since o_i is a horizontal cycle-preserving growth, we can add one new turning point k+1 only
if the segment L_i, leading to the leaf, is vertical. However, if the segment is horizontal, applying o_i will only expand its length, not the number of turning points k.
Also, a new turning point can be produced if a new horizontal root-to-leaf path is created by generating a node from a node of a vertical segment. Such new leaves can increase the maximum number of turns in T_i by at most 1. Therefore, T_i+1 is an extension of T_i with at most k+1 turns on every root-to-leaf path; that is, each time step increases the maximum number of turns by at most one. Hence, a shape S grown in k time steps from u_0 has a spanning tree T rooted at u_0 with at most k turns on every root-to-leaf path, and the statement holds.
If a shape S has a spanning tree T with O(log n) turns on every root-to-leaf path, then S can be constructed within O( log^2 n) time steps.
Consider a connected shape S; by hypothesis, there is a spanning tree T of S with at most k = O(log n) turns on every root-to-leaf path. To construct S, we can use breadth-first search on every line segment in parallel.
We start from the root u_0 of T and construct the line segments phase by phase, in parallel. Since each root-to-leaf path of T has at most k turns, it consists of at most k+1 line segments, so all these segments can be built within at most k+1 phases. In the worst case, each phase costs O(log n) time steps, giving at most O(k log n) in total; with k = O(log n) turns, this amounts to O(log^2 n) time steps to build the shape S.
Any shape S with at most k turns can be compressed into a new shape S' with at most k turns and O(k^2) nodes on an O(k× k) grid.
Let S=(V, E) be a shape with O(k) turns, where V is the set of nodes and E is the set of edges. We will construct a compressed shape S'=(V', E') with at most k turns and k^2 nodes on an O(k × k) grid.
Since S has at most k turning points, these points occupy at most k distinct horizontal rows and at most k distinct vertical columns, so we divide S into at most k horizontal rows and k vertical columns, forming a grid of size O(k × k).
Then, we identify the turning points in S and mark them as special nodes. Let V_k be the set of special nodes representing the turning points k in S.
For each row in the grid, we keep only nodes v∈ V_k and remove any duplicated nodes that are not connected to a node in V_k. The remaining nodes (i.e., that are connected to a node in V_k) in each row are stored in the set V_r. Similarly, we do the same for each column and keep the remaining nodes in the set V_c.
Let V' be the union of V_k, V_r, and V_c. The set of nodes in the compressed shape S' is V'. Then, we construct the set of edges E' in S' by including all edges in E that connect nodes in V'.
By construction, the compressed shape S' has at most k turns because we only consider the special nodes representing the turning points in S. The size of S' is at most k^2 since the grid has size O(k × k). Therefore, we have proved that any shape S with at most k turns can be compressed into a new shape S' with at most k turns and O(k^2) nodes on an O(k × k) grid.
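The core of this construction is a coordinate compression of the turning points onto an O(k × k) grid. The sketch below (a hypothetical helper, stated only for intuition) replaces each coordinate of a turning point by its rank among the distinct coordinates; the incompressible connector nodes mentioned in the proof are omitted.

def compress_turning_points(turning_points):
    # Map turning points (x, y) of a shape with at most k turns onto a k x k grid
    # by replacing each coordinate with its rank among the distinct coordinates.
    xs = sorted({x for x, _ in turning_points})
    ys = sorted({y for _, y in turning_points})
    rank_x = {x: i for i, x in enumerate(xs)}
    rank_y = {y: i for i, y in enumerate(ys)}
    return [(rank_x[x], rank_y[y]) for x, y in turning_points]

# A spiral-like sequence of turning points with large gaps between them:
print(compress_turning_points([(0, 0), (100, 0), (100, 80), (5, 80), (5, 10)]))
# -> [(0, 0), (2, 0), (2, 2), (1, 2), (1, 1)]  (at most k distinct rows and columns)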
To build a spiral shape S with a total number of k turns where k= log n by using BFS, we need O(log n loglog n) time steps.
By Lemma <ref> above, any spiral with k turns can be compressed into a sequence of k = O(log n) segments, each of length m at most log n. Then, using breadth-first search to construct the compressed spiral, we first build the k turning points, point by point, sequentially, and then expand them into their final length in parallel. Therefore, O(k log m) time steps are needed, giving a total of O(log n loglog n) time steps.
A pipelined breadth-first search is a modified version of BFS that can be used to construct spiral shapes in the cycle-preserving growth process efficiently (i.e., within logarithmic time steps). It consists of two main phases:
* Constructing and waiting phase: During this phase, we build at most four turning points in an order that follows the geometry of a shape S.
* Growing phase: In this phase, the partially constructed structure from the constructing and waiting phase grows in parallel, apart from those segments that have already reached their final length.
If a spiral shape S has a total number of k = log n turns, then S can be constructed within a logarithmic number of time steps by using pipelined BFS.
Consider a spiral shape S=(V, E) consisting of a set of layers, where each layer consists of two rows r∈ R and two columns c∈ C of the grid. Each layer S_i contains at most four turning points, namely the points at which a row of the layer turns into a column. We will use the pipelined BFS approach to construct such a shape within a logarithmic number of time steps.
After compressing S using Lemma <ref>, we obtain S', which contains all k turning points and possibly some incompressible nodes that connect these turning points.
We then build S' as follows:
In phase i=1, we start from a root u_0 and generate the compressed version of the first (external) layer (i.e., its turning points) S_1={v_1,v_2,v_3, v_4} of S node by node.
Then, in phase i=2, we grow every node in parallel in its position and expand every segment of this layer using the cycle-preserving growth.
Following that, we start again generating the next (inner) layer, S_i+1 (i.e., the compressed version of the next spiral layer). We continue growing these two layers in parallel until the final layer S_log n fits in them.
Finally, because S has a total of log n layers and the compressed version of each layer is generated within a constant number of time steps, the waiting phases consume O(log n) time steps overall. Growing all segments in parallel until they reach their final length costs another O(log n) time steps, so the whole shape is constructed within O(log n) time steps.
Let P be a path with k turning points. Let A be an algorithm that generates P from a singleton. Without loss of generality, we can assume that A starts from a turning point of the path P. We now give a few observations and lemmas concerning some properties of A. Recall that an edge, once generated, cannot be deleted in the cycle-preserving model. This immediately implies the following observation.
A node can grow in at most its degree many different directions. Moreover, once a node has degree many neighbors in the path constructed by A, it can only grow along one of its incident edges in the path.
As there exists a unique subpath between any two vertices in a path, this fact, together with the above observation gives the following observation.
Let x and z be any two vertices of P such that there exists a straight subpath between them in the path constructed so far by A. Then, all the vertices on the subpath between x and z in P will lie on a straight subpath in the final path constructed by A.
We now give the following lemma concerning the order in which the turning points of P are generated by A.
Let P be a path between u and v with k turning points. Let ⟨ tp_1, tp_2, …, tp_k ⟩ be the order of turning points of P from u to v. Let A be any algorithm that generates P from a singleton starting from the turning point tp_i. Then, the sets {tp_i+1, tp_i+2, …, tp_k} and {tp_1, tp_2, …, tp_i-1} of turning points are generated in the order ⟨ tp_i+1, tp_i+2, …, tp_k ⟩ and ⟨ tp_i-1, tp_i-2, …, tp_1 ⟩, respectively by A. Moreover, A respects the direction of P at every node while generating the next node from it.
Recall that, an edge, once generated, can not be deleted in the cycle-preserving model. This in turn means that a node can grow in at most its degree many different directions. Moreover, once a node has degree many neighbors, it can only grow along an incident edge.
We first prove that A respects the direction of P at every node while generating the next node from it. As the direction makes sense only when the node already has a neighbor, we prove the statement for the nodes which grow at time step 2 or later. Let A grows the node v at time step t ≥ 2. Assume for contradiction, A does not respect the direction of P at v while generating the next node u. Then, once u is generated, the degree of v is 2 in the path constructed by A so far. By Observation <ref>, we get that A can never create a neighbor of v in the desired direction, a contradiction. Thus, A always respects the direction of P at every node while generating the next node from it.
We now prove the property regarding the order of generation of turning points. We prove that the set {tp_i+1, tp_i+2, …, tp_k} is generated in the order ⟨ tp_i+1, tp_i+2, …, tp_k ⟩. The proof for the set {tp_1, tp_2, …, tp_i-1} is similar. Let tp_j+1 be the first turning point that was not generated in the desired order, for j ≥ i. Moreover, let tp_m be the turning point that was generated after tp_j, for m > j. This implies that there exists a subpath P' from tp_j to tp_m of the path constructed so far by A which does not contain any other turning points, i.e., P' is drawn as a straight line. As tp_j+1 lies between tp_j and tp_m in P, by Observation <ref>, we get that A can never create the two neighbors of tp_j+1 in different directions. This contradicts the fact that tp_j+1 is a turning point of P. Thus, we conclude that the set {tp_i+1, tp_i+2, …, tp_k} is generated in the order ⟨ tp_i+1, tp_i+2, …, tp_k ⟩.
Let P be an incompressible spiral path between u and v with k turning points (see Figure <ref>). Moreover, let u be the internal endpoint of P. We now give the following lemma about the lower bound on the number of steps taken by any algorithm that generates P from a single node starting from u.
Let P be an incompressible spiral path between u and v with k turning points. Moreover, let u be the internal endpoint of P. Let A be any algorithm that generates P from a singleton starting from u. Then, A requires Ω(klog k) time steps.
Let ⟨ tp_1 = u, tp_2, …, tp_k = v ⟩ be the order of turning points of P from u to v. By Lemma <ref>, we know that A generates the turning points in the order ⟨ tp_1 = u, tp_2, …, tp_k = v ⟩. Let GT_j be the time step when the turning point tp_j was generated by A, for any j ≥ 2. Let P(t) be the path constructed by A after time step t. Further, let a and b be two vertices of P. We denote by P[a,b] the path between a and b (including both a and b) of P. Moreover, we denote by |a-b|_P the number of edges in P[a,b]. Also, we denote by X(a,P) the x-coordinate of the vertex a in P. To prove the above lemma, we first prove the following lemma about the path constructed by A.
For any j ≥ 5, the path P(GT_j-1) generated by A till time step GT_j - 1 should be the same as the subpath P[tp_1, tp_j-1] of P between tp_1=u and tp_j-1.
We prove the statement by induction on j.
Base case (j=5). Recall that, as P is incompressible, |tp_2 - tp_1|_P = |tp_3 - tp_2|_P = 1. Thus GT_3 = 2 and P(GT_3) = P[tp_1, tp_3]. Assume for contradiction that the lemma is not true for j=5. This means that P(GT_5 - 1) is a subpath of P[tp_1,tp_4]. By Lemma <ref>, we know that GT_5 > GT_4 > GT_3. As P(GT_3) = P[tp_1, tp_3], we get that P[tp_1, tp_3] is a subpath of P(GT_5 - 1). Combining this fact with the fact that P(GT_5 - 1) is a subpath of P[tp_1,tp_4], we get that 1 ≤ |tp_4 - tp_3|_P(GT_5 - 1) < |tp_4 - tp_3|_P = 2. This implies that |tp_4 - tp_3|_P(GT_5-1) = 1. This further mean that X(tp_4, P(GT_5 - 1)) = X(tp_1, P(GT_5 - 1)). By Lemma <ref>, we get that A respects the direction of P at every node. Therefore, when tp_5 is generated it will collide with tp_1, a contradiction (e.g., see Figure <ref>). So, the lemma is true for j = 5.
Inductive hypothesis. Suppose that the lemma is true for j = t - 1 ≥ 5.
Inductive step. We need to prove that the lemma is true for j = t ≥ 6. Assume for contradiction that the lemma is not true for j. This means that P(GT_t - 1) is a subpath of P[tp_1,tp_t-1]. By Lemma <ref>, we know that GT_t > GT_t-1 > GT_t-2. By the inductive hypothesis, we know that P(GT_t-1 - 1) = P[tp_1, tp_t-2]. This implies that P[tp_1, tp_t-2] is a subpath of P(GT_t - 1). Combining this fact and the fact that P(GT_t - 1) is a subpath of P[tp_1,tp_t-1], we get that 1 ≤ |tp_t-1 - tp_t-2|_P(GT_t - 1) < |tp_t-1 - tp_t-2|_P = ⌊t-1/2⌋. This further implies that either t=6 and X(tp_5, P(GT_6 - 1)) = X(tp_1, P(GT_6 - 1)), or X(tp_t-5, P(GT_t - 1)) ≤ X(tp_t-1, P(GT_t - 1)) < X(tp_t-6, P(GT_t - 1)). By Lemma <ref>, we get that A respects the direction of P at every node. Therefore, when tp_t is generated, it will collide with a node on the subpath of P(GT_t-1 - 1) between tp_t-5 and tp_t-6, a contradiction (e.g., see Figure <ref>). So, the lemma is true for j=t.
We now give the proof of Lemma <ref> using Lemma <ref>.
Let ST_j be the time taken by A to create the path P[tp_1, tp_j] starting from tp_1, for any j ≥ 2. Then, by Lemma <ref>, we get that GT_j ≥ ST_j-1 + 1, for any j ≥ 5. Moreover, by Lemma <ref>, we know when tp_j is generated, the subpath from tp_1 to tp_j-1 is already generated by A. So, the difference between P(GT_j) and P[tp_1, tp_j] is the length of the subpath between tp_j-1 and tp_j in both the paths. As we know the subpath between tp_j-1 and tp_j is a straight line path in P, we can generate it in log(|tp_j - tp_j-1|_P). This implies that, ST_j = GT_j + log(|tp_j - tp_j-1|_P). Combining the two equations, we get that ST_j ≥ ST_j-1 + 1 + log(|tp_j - tp_j-1|_P). It is easy to observe that ST_4 = 4. Thus, by solving the recursive relation, we get that ST_j = Ω(klog k). This proves the lemma.
We now give the main theorem of this section.
Let A be an algorithm that generates a path from a singleton. Then, there exists a path for which A takes Ω(klog k) time steps.
We prove the theorem by giving a path on which any algorithm that generates a path from a singleton takes Ω(klog k) time steps. We construct an incompressible path P consisting of two spirals as shown in Figure <ref>. It is easy to observe that, due to Lemma <ref>, irrespective of the starting node A will generate one of the red or blue spirals from its internal endpoint u. Then, by a similar proof to that of Lemma <ref>, we can prove that A takes Ω(klog k) time steps.
§ CYCLE-BREAKING GROWTH PROCESSES
This growth process is characterized by its ability to break edges within a shape while maintaining its global connectivity. This enriches the class of shapes that can be constructed, since connections can be broken and neighboring nodes can be transferred via neighbor handover. The following proposition shows how to grow any spanning tree of a rectangular shape in logarithmic time steps, starting from a singleton (|S_I|=1).
For any rectangular shape S with all adjacencies, we can construct any spanning tree of S within O(log n) time steps.
In the first phase, we use the fast rectangle growth process, defined in Section <ref> to construct a rectangle shape S of size n. This operation starts from a singleton and doubles the shape until it reaches the desired size n. This consumes at most log n time steps.
In the second phase, once the rectangular shape S is constructed, we break the edges in parallel to form the final spanning tree in a constant time step c.
Therefore, the total time to construct such a shape is at most log n + c, thus, O(log n) time steps.
Any staircase shape S can be grown within logarithmic time.
Consider a staircase shape S with dimensions l× k.
First, we choose one dimension of the staircase (either length l or height k). For simplicity, let us assume we start from a single node u and grow the length of the shape S (i.e., dimension l) until it reaches the desired length of S. This can be done using the full doubling operation as described in <cit.>.
After that, we identify the starting node of each step in S and perform the breaking operation (see Definition <ref>). Then, we simultaneously grow all of these nodes vertically in parallel until we achieve the actual height k of each step in the staircase S. This involves splitting the whole segment from the first phase into multiple smaller segments (i.e., each representing a step of the staircase S). The specific splitting pattern can be determined based on the desired configuration of the staircase S.
By following this approach, we first construct one dimension of the shape S, that is, a line segment of length l, in logarithmic time, and then efficiently add the remaining steps and grow them vertically in parallel until they reach the desired height k. As a result, the overall time complexity is logarithmic in the dimensions of the staircase.
A family of shapes known as orthogonally convex shapes, as defined in Proposition 1 by Connor and Michail <cit.>, is a set of shapes where the perimeter consists of four staircases, and the interior is completely filled with nodes. It is possible to generate any shape in this family in logarithmic time steps by following these steps:
* Consider any orthogonally convex shape S, where the exterior of S consists of four staircases WN,NE,ES and SW, and the interior of S is fully filled with nodes.
* Start from any two consecutive quadrants of the shape's perimeter, such as (WN, NE),
(NE,ES),(ES,SW) or (SW,WN).
* Using Lemma <ref>, grow two consecutive quadrants (i.e., two consecutive staircases) of S in their final geometry. It is important to note that each quadrant of S's perimeter is constructed in accordance with its final position, ensuring that there will be no collisions between adjacent quadrants.
* Since the orthogonally convex shape S is fully filled with nodes, we can proceed to double the nodes in the generated subpart from Step-3. This doubling process is performed in lines until the entire shape S is obtained.
Given any orthogonally convex shape S, the above algorithm can grow S from a singleton within O(log n) time steps.
In order to construct any shape S that belongs to the orthogonally convex family, we perform the proposed procedure above.
By starting from a single node u, we can use Lemma <ref> and grow any two consecutive quadrants of S's perimeter according to their final positions in S. Assume w.l.o.g. that the two consecutive staircases are WN and NE; this consumes a number of time steps logarithmic in the length of WN plus a number of time steps logarithmic in the length of NE.
Since the orthogonally convex shape S is fully filled with nodes, we perform the final step, and every node of the constructed parts WN and NE of S doubles in lines to form the final shape S. This step also takes a number of time steps logarithmic in the longest line of the other parts, ES and SW. Therefore, the construction of any shape S in the orthogonally convex family can be completed within O(log n) time steps, where n is the total number of nodes in S.
Below is an informal description of an algorithm that provides an O(log n) time steps growth schedule for any connected shape S. The algorithm achieves this by determining an elimination order of the nodes and generating a growth schedule by reversing this order.
The algorithm consists of two sets of phases: vertical phases and horizontal phases. Given a shape S with i rows and j columns do the following:
* Let L=l_1,l_2,…,l_i-1 be the set of vertical phases.
* For each phase l_i ∈ L, where i ranges from 1 to i-1, do the following:
* Count rows from the bottom-most row, starting with i=1, and denote the odd row as 2i-1 and the even row as 2i.
* For every node u in an odd row 2i-1 that has a neighbor v in an even row 2i, eliminate v by contracting the edge uv towards u. Then, register the eliminated or translated nodes (i.e., if there is no neighbour below, a node in an even row simply moves down one row) in a sequence σ to maintain their order.
* At the end of phase l_i, add all edges between nodes and move on to the next vertical phase l_i+1, counting rows from the bottom-most row, and repeating the same process.
* After completing the set of vertical phases, a horizontal line is obtained with a length equal to the horizontal dimension of the original shape (i.e., the number of columns in S).
* Apply the horizontal set of phases and repeat steps (1-3), which results in eliminating the horizontal line by successive halving.
* After completing both the vertical and horizontal sets of phases, reverse the constructed schedule σ into σ^', and return the growth schedule σ^'.
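A simplified sketch of these elimination phases follows (it tracks only the cell positions of the shape and the successive halving of rows and columns, omitting the edge bookkeeping and handover details of the full algorithm; the helper name is hypothetical). Each vertical phase maps row r, counted from the bottom starting at 1, to row ⌈r/2⌉, merging even rows into the odd rows below them; the horizontal phases do the same on columns, and reversing the recorded sequence of shapes gives the growth schedule σ'.

def halving_schedule(cells):
    # cells: set of (col, row) positions of a connected shape, with rows/cols >= 1.
    # Returns the list of intermediate shapes produced by the vertical and then
    # horizontal elimination phases; reversed, it serves as a growth schedule sigma'.
    def halve(shape, axis):   # axis=1 halves rows, axis=0 halves columns
        return {(c, (r + 1) // 2) if axis == 1 else ((c + 1) // 2, r) for c, r in shape}

    history = [set(cells)]
    for axis in (1, 0):                       # vertical phases, then horizontal phases
        while len({p[axis] for p in history[-1]}) > 1:
            history.append(halve(history[-1], axis))
    return list(reversed(history))            # growth schedule: singleton -> S

schedule = halving_schedule({(c, r) for c in range(1, 6) for r in range(1, 4)})
print([len(s) for s in schedule])   # sizes grow back from 1 to 15 in O(log l + log k) phases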
Given any connected shape S with dimensions l × k, the above algorithm can construct S from a singleton within O(log l + log k) time steps.
After executing the algorithm on the connected shape S and obtaining the growth schedule σ', the growth process involves adding nodes and edges based on the reversed order of elimination or translation represented by σ'.
By applying the schedule σ', starting from a single node, we expand the shape horizontally into j-1 columns using the doubling operation according to the schedule. After completing the horizontal growth, we proceed to the vertical growth, doubling the constructed row vertically into i-1 rows.
To analyze the time complexity of growing the shape S using this approach, let n be the total number of nodes in S, which is at most the product of the dimensions l × k. In each time step, we perform the growth operation, first horizontally and then vertically. The horizontal growth is bounded by the number of columns l, which takes O(log l) time steps. Similarly, the vertical growth is bounded by the number of rows k, which takes O(log k) time steps. Therefore, the overall time complexity for this process is O(log k + log l).
§.§ Growth-Distance to Trees
The primary feature of the cycle-breaking growth is that it increases the distance in S by introducing new nodes and breaking certain edges. As a result, any connected shape S can be stretched and converted into a spanning tree T. Converting a shape S into a spanning tree T consists of the following steps:
* Consider a given spanning tree T of shape S=(V, E).
* At the first time step t_1, apply a cycle-breaking growth on every horizontal edge e∈ E of S that is parallel to non-tree edges of T (i.e., the decision to break such edges depends on the computed spanning tree T in Step-1).
* At the second time step t_2, apply a cycle-breaking growth on every vertical edge e∈ E of S that is parallel to non-tree edges of T. In other words, repeat Step-2 but vertically.
Algorithm <ref> transforms any shape S into a tree T within two-time steps.
To formally prove that we can convert any shape S into a tree T within two time steps, we need to demonstrate two main properties of the output tree T: connectivity and acyclicity.
For connectivity, the given shape S is initially assumed to be connected. The computation of the spanning tree T ensures that T spans all the nodes in S, meaning there is a single path between any pair of nodes in T.
Without loss of generality, let us assume that the horizontal edge uv is not part of the spanning tree T; we break it by applying the growth on the parallel edge u'v', which introduces a new node x between u' and v'. This ensures that any path between u and v in S is now connected through node x and the newly introduced edges, that is, the new path between u and v is uu', u'x, xv', v'v. Hence, after applying the cycle-breaking growth in parallel, the resulting tree T remains connected, fulfilling the connectivity property.
To prove the acyclicity of T, we consider that the computation of the spanning tree T ensures that T is a tree structure, which by definition, does not contain cycles. After that, during the cycle-breaking growth, no new cycles are introduced. Breaking an edge and growing a parallel edge does not create a cycle, as the newly introduced edges only connect existing nodes in T.
The computational complexity of Algorithm <ref> can be analyzed as follows. In the first step, a cycle-breaking growth is concurrently applied to every horizontal edge in S that is parallel to a non-tree edge of T. Subsequently, in the second step, a cycle-breaking growth is performed on every vertical edge in S that is parallel to a non-tree edge of T. Thus, Algorithm <ref> transforms any given shape S into a tree T within two time steps.
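The scheduling of these two time steps can be sketched as follows (only the partition of the non-tree edges into a horizontal and a vertical round is shown; the actual breaking is performed by the cycle-breaking growth operations described above, and the helper name is hypothetical).

def two_step_breaking_schedule(edges, tree_edges):
    # edges, tree_edges: sets of frozensets {(x1, y1), (x2, y2)} of grid positions.
    # Returns the non-tree edges to be broken at time step 1 (horizontal round)
    # and time step 2 (vertical round), each round being applied in parallel.
    non_tree = edges - tree_edges
    horizontal = {e for e in non_tree if len({y for _, y in e}) == 1}
    vertical = non_tree - horizontal
    return horizontal, vertical

# A 2x2 square: 4 nodes, 4 edges, spanning tree of 3 edges, one non-tree edge to break.
edges = {frozenset(e) for e in [((0,0),(1,0)), ((0,0),(0,1)), ((1,0),(1,1)), ((0,1),(1,1))]}
tree = {frozenset(e) for e in [((0,0),(1,0)), ((0,0),(0,1)), ((1,0),(1,1))]}
step1, step2 = two_step_breaking_schedule(edges, tree)
print(step1, step2)   # the horizontal edge {(0,1),(1,1)} is handled in the first round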
§ CONCLUSION
In conclusion, this paper has investigated the geometric properties of cycle-preserving and cycle-breaking growth processes within a centralized geometric framework. We have explored several key questions, including the class of shapes that can be constructed through these growth operations, their differences, and the possibility of transforming shapes from one family to another. As a result, we characterized some classes of shapes that can be constructed within logarithmic time steps using these growth operations. Also, we presented efficient algorithms and approaches for achieving the desired shape construction or transformation.
The results of this study open up new avenues for research and applications in the field of shape manipulation and provide valuable insights into the possibilities and limitations of growth operations. Despite the significant progress made, several open problems are worth further investigation. One open problem is the decision problem of determining whether a growth process exists that can transform an initial shape S_I to a final shape S_F within a given time-bound t. This problem has implications for reachability and can be further studied in special cases such as single-step reachability and the singleton special case of S_I. Additionally, the decision problem can be extended into the function problem, where the objective is to return a growth schedule that transforms S_I into S_F within t time steps. Furthermore, an optimization problem arises in the context of shape growth. In this problem, given an initial shape S_I, a target shape S_F, and a time-bound t, the goal is to find the fastest growth process that transforms S_I into S_F within t time steps. The objective is to minimize the time steps required for the transformation, providing an optimal solution that achieves the desired shape in the shortest time possible.
Addressing these open problems will contribute to the development of efficient algorithms and techniques for geometric shape growth.
|
http://arxiv.org/abs/2307.05415v2 | 20230711163156 | Reliable optimal controls for SEIR models in epidemiology | [
"Simone Cacace",
"Alessio Oliviero"
] | math.OC | [
"math.OC",
"cs.NA",
"math.NA"
] |
We present and compare two different optimal control approaches applied to SEIR models in epidemiology, which allow us to obtain some policies for controlling the spread of an epidemic. The first approach uses Dynamic Programming to characterise the value function of the problem as the solution of a partial differential equation, the Hamilton-Jacobi-Bellman equation, and derive the optimal policy in feedback form. The second is based on Pontryagin's maximum principle and directly gives open-loop controls, via the solution of an optimality system of ordinary differential equations. This method, however, may not converge to the optimal solution. We propose a combination of the two methods in order to obtain high-quality and reliable solutions. Several simulations are presented and discussed.
Keywords: Optimal control, SEIR model, Dynamic Programming, Hamilton - Jacobi, Pontryagin maximum principle, Direct-Adjoint Looping
§ INTRODUCTION
The development of the Covid-19 pandemic has increased the interest towards the mathematical modelling of epidemics and, in the last three years, a number of papers dealing with several aspects of epidemiology have appeared. The number of topics is very large and ranges from the analytical description of infections to the socio-economic impact of the pandemic. However, epidemiological models have been introduced even before the Covid-19 pandemic (see e.g. <cit.> and the monography <cit.>) as a useful tool to analyse and predict the development of an infective disease. In particular, compartmental models describing the interactions between susceptible, infected and recovered consist of a system of ordinary differential equations, where the variables represent the categories which the population is divided into. One of the simplest model, the popular SIR model, dates back to a paper by Kermack and McKendrick (1927) <cit.>. Nowadays the modelling can include several aspects of an epidemic, such as space diffusion, age distribution, vaccination and local effects, and models are becoming more and more complex.
This work is focused on the control of epidemiological models. More precisely, given the description of the infection, we introduce some control parameters in the model, accounting for the possibility to get a vaccine, restrict social interactions or regulate the inflow of people from abroad. Clearly, by varying these parameters we will modify the evolution of the infection and its distribution in the population. In order to compare these effects, we introduce a cost functional associated with the development of the epidemic. This functional can contain different components, measuring costs in terms of victims, hospitalization procedures (such as intensive care) and economic impact. The final goal is to determine how restrictive measures and vaccines can affect the evolution of the disease and possibly find the optimal controls that minimise the cost at each time. To this end, we adopt two classical approaches based on general results in optimal control theory.
For the solution of an optimal control problem, one can look for controls that are expressed either as a function of time (open-loop controls) or as a function of the state of the system (feedback controls), depending on the method that is used to solve the problem.
The first approach is based on Pontryagin's maximum principle, a very well-known variational framework which gives necessary conditions for a control to be optimal. However, the corresponding open-loop controls cannot react to perturbations of the system or to uncertainties in the model, since they are just a function of time, and this might be an issue in some cases. For a general introduction to this theory we refer to <cit.>.
The second approach follows the celebrated Dynamic Programming Principle due to R. Bellman <cit.>, introducing the value function, a function of the state variables of the system, corresponding to the best cost among all the possible control strategies. This function can be characterised as the unique viscosity solution <cit.> of a non-linear partial differential equation, the Hamilton-Jacobi-Bellman equation <cit.>. Using a suitable numerical scheme of semi-Lagrangian type <cit.>, one can calculate the value function and obtain the optimal controls. General results on this approach can be found in <cit.>. The main advantage of this strategy, despite the high computational cost, is that it directly gives feedback controls.
There is a huge amount of literature on optimal control applied to epidemiological models, aiming at describing different infectious diseases in different settings. These works, some of which date back to several years before the Covid-19 pandemic, are mainly based on Pontryagin's principle and on the solution of the associated optimality system <cit.>. Moreover, the numerical schemes for the solution of the optimality system are typically based on local descent methods, such as Newton iterations <cit.> or the so-called Direct-Adjoint Looping algorithm, also known as Forward-Backward Sweep <cit.>. To our knowledge, one of the less explored aspects of these optimisation techniques is that – if they converge – they may not converge to the global minimum of the cost functional <cit.>, unless additional hypotheses are verified or the system is rather simple.
On the other hand, methods based on the Dynamic Programming approach are still scarcely employed for real-world problems, which involve complex or high-dimensional systems, due to their high computational cost. At present, the application of this approach to the control of epidemiological models is more theoretical or limited to low-dimensional settings <cit.>. However, the most relevant feature of this approach is the theoretical guarantee of convergence to the optimal solution, i.e. to the global minimum of the cost. In addition, the latest advancements in both CPU and GPU architectures make it possible to approximate the solution of large scale problems in a reasonable time, especially resorting to parallel computing techniques.
The present work aims at exploiting Dynamic Programming to improve and validate the results obtained with the variational approach. In particular, we combine the two approaches, using the optimal controls given by a semi-Lagrangian scheme as a warm guess to initialise the Direct-Adjoint Looping (DAL) method. As it will be confirmed by our numerical experiments, especially when the considered epidemiological model gets more complicated, the DAL algorithm alone could provide various sub-optimal solutions, depending on the initial guess for the controls, but this does not happen if we initialise it with the output of the semi-Lagrangian scheme.
This paper is structured as follows. In Section <ref>, we present the general setting of the SEIR compartmental model, then we introduce the control parameters representing restrictive measures and vaccination, thus obtaining a controlled SEIR model. In Section <ref>, we briefly outline the general theory of finite horizon optimal control problems and we formulate the cost functional. In Section <ref>, we introduce the Dynamic Programming approach, the related Hamilton-Jacobi-Bellman equation and its numerical approximation. Section <ref> is dedicated to the variational approach to finite horizon problems, via Pontryagin's maximum principle. In particular, we describe the Direct-Adjoint Looping method to approximate the solution of the optimality system associated with the control problem. Section <ref> contains a more in depth comparison between the two approaches, highlighting their differences and their strengths. In Section <ref>, we report the results of several numerical experiments performed with both methods on some variations of the controlled SEIR model. In particular, we consider the case of a control that can open or close the borders in a setting where there is a constant source term, representing incoming people from abroad.
Finally, we also consider some state constraints that represent the limited availability of intensive care units, together with temporary immunity.
§ THE CONTROLLED SEIR MODEL
In this section, we present one of the most simple and frequently used compartmental models, the epidemic SEIR model, and an optimal control problem associated with it. Since our aim is to formulate an optimal control problem, we will not go through all the analytical properties of the model, and refer to <cit.> for further details.
The population - which is assumed to be homogeneous and well mixed - is divided into four categories:
* S: the susceptible portion, individuals who can contract the disease,
* E: the exposed portion, individuals who have been infected but are not yet infectious,
* I: the infective portion, infected individuals who can transmit the disease,
* R: the recovered portion, those who have acquired immunity.
An individual can belong to any of the categories, but only to one of them at each time. If we denote the total population with N, the model can be sketched as in Figure <ref>.
The infection process is modeled with the usual law of mass action: the force of infection is β I/N, where β is the transmission rate, namely the product of the average number of contacts per unit time for each individual and the probability of transmission per contact; 1/ε is the mean duration of the latency period; 1/γ is the mean infectious period. The corresponding differential model is therefore:
S' = - β S I/N
E' = β S I/N - ε E
I' = ε E - γ I
R' = γ I
In order to avoid working with large numbers, we can normalise the equations by dividing everything by N, thus obtaining
s' = - β s i
e' = β s i - ε e
i' = ε e - γ i
r' = γ i
with initial data (s(0),e(0),i(0),r(0))=(s_0,e_0,i_0,r_0) such that s_0+e_0+i_0+r_0=1. Usually, as it is reasonable to assume, r_0=0 and s_0 ≈ 1. In the normalised model, each variable represents the fraction of population within a certain compartment, instead of the absolute number of individuals. We point out that the right hand side of both (<ref>) and (<ref>) sums to zero, implying the conservation of the total population. Indeed, setting x:=(s,e,i,r), it readily follows that the set
𝒮={ x ∈ [0,1]^4 | ||x||_1=1 }
is positively invariant for system (<ref>).
Finally, we observe that the fourth equation is independent from the previous three, hence we can reduce the system to
s' = - β s i
e' = β s i - ε e
i' = ε e - γ i
and then exploit the conservation property to obtain r(t)=1-s(t)-e(t)-i(t).
As in the simpler SIR model, the basic reproduction number for this model is
R_0= β/γ .
This may seem surprising, but in fact, as long as we do not consider natural deaths, the duration of the latency period has no influence on R_0 <cit.>.
What is peculiar of this model (see Figure <ref>) is the peak of infective, which is what the authorities usually want to minimise – or at least “spread” over a larger period of time – in order to avoid overloading health facilities.
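For illustration, the reduced system can be integrated with a simple forward Euler scheme; the following Python sketch (with arbitrarily chosen parameter values, not fitted to any real disease) reproduces the qualitative behaviour just described, including the peak of infective.

def seir_euler(beta, eps, gamma, s0, e0, i0, T=300.0, dt=0.1):
    # Forward Euler integration of the reduced (normalised) SEIR model.
    s, e, i = s0, e0, i0
    traj = [(0.0, s, e, i, 1.0 - s - e - i)]
    t = 0.0
    while t < T:
        ds = -beta * s * i
        de = beta * s * i - eps * e
        di = eps * e - gamma * i
        s, e, i = s + dt * ds, e + dt * de, i + dt * di
        t += dt
        traj.append((t, s, e, i, 1.0 - s - e - i))   # r recovered from conservation
    return traj

traj = seir_euler(beta=0.3, eps=0.2, gamma=0.1, s0=0.99, e0=0.0, i0=0.01)
print(max(traj, key=lambda row: row[3]))   # time and state at the peak of infective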
Taking inspiration from the measures adopted to fight the SARS-CoV-2 pandemic, we introduce two types of controls that can act on the system at each time: restrictive measures (λ) and vaccinations (ν). The two controls have quite different behaviours: while restrictive measures have the effect of reducing the contacts among the individuals, vaccines cut down the number of susceptible, making them immune without undergoing the infectious period. Considering directly the normalised model and assuming that only the susceptible get vaccinated, the resulting model can be sketched as shown in Figure <ref>.
We multiply by (1-λ) wherever the transmission rate β appears in (<ref>) and we add a new term containing ν which takes “mass” from the susceptible and moves it into the recovered, without passing through the other compartments. The model we obtain is thus the following:
s' = - β (1-λ) s i - ν s
e' = β (1-λ) s i - ε e
i' = ε e - γ i
r' = γ i + ν s
with the same initial data as before. In this way, the choice λ=0 corresponds to not modifying the contact rate at all, while λ=1 deletes the transmission term completely, meaning complete lock-down. Similarly, ν can be seen as a vaccination rate, where ν=0 corresponds to not vaccinating anyone. We remark that there is no a priori upper bound ν_max for the vaccination rate, as it depends on the resources available at each time in the society one is considering. Clearly, (<ref>) can be reduced as well and from now on we will always consider the following differential system:
s' = - β (1-λ) s i - ν s
e' = β (1-λ) s i - ε e
i' = ε e - γ i
with initial data (s(0),e(0),i(0))=(s_0,e_0,i_0) such that s_0+e_0+i_0=1.
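In the notation used below, the controlled dynamics of system (<ref>) is a function of the state x=(s,e,i) and the control a=(λ,ν). A possible implementation reads as follows (the parameter values are again placeholders, chosen only for illustration).

import numpy as np

BETA, EPS, GAMMA = 0.3, 0.2, 0.1   # illustrative epidemiological parameters

def f(x, a):
    # Right-hand side of the reduced controlled SEIR system.
    # x = (s, e, i), a = (lam, nu) with lam in [0, 1] and nu in [0, nu_max].
    s, e, i = x
    lam, nu = a
    ds = -BETA * (1.0 - lam) * s * i - nu * s
    de = BETA * (1.0 - lam) * s * i - EPS * e
    di = EPS * e - GAMMA * i
    return np.array([ds, de, di])

print(f((0.99, 0.0, 0.01), (0.5, 0.02)))   # lam = 1 would correspond to complete lock-down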
In Section <ref> we will also consider some extensions of the basic SEIR model, taking into account state constraints, temporary immunity and incoming individuals from abroad.
§ FINITE HORIZON PROBLEMS AND COST DEFINITION
In this section, we will briefly recall the theoretical setting for a finite horizon optimal control problem and then define it in our particular case. To keep the notation lighter, we set x:=(s_0,e_0,i_0) ∈ [0,1]^3, α:=(λ,ν) ∈ A, where A is a compact subset of ℝ^2, and consider a fixed time interval [t,T]. We will also refer to system (<ref>) more concisely as
ẏ(τ) = f(y(τ),α(τ),τ),   τ ∈ (t,T],
y(t) = x,
where y:[t,T] → [0,1]^3 represents the state variables and f: [0,1]^3 × A × (t,T] → [0,1]^3 is the controlled dynamics. The expression "finite horizon" comes from the fact that T ∈ ℝ is fixed, whereas for other kinds of problems one can be interested in controlling the system as T→+∞. Since we are dealing with an epidemic model, the time scale is relatively short, months or at most a few years, therefore it is natural to fix a finite time horizon. The optimal control problem itself consists in finding a control function α^* in the space of admissible controls
𝒜 := {α: [t,T] → A | α measurable},
minimising a given functional that represents the cost associated with the evolution of the epidemic. In general, the cost functional is of the form
J_x,t(α) = ∫_t^T ℓ(y_x,t (τ),α(τ),τ) dτ + g(y_x,t(T))
where y_x,t ( · ) indicates the solution of (<ref>) starting at time t with initial data x and implementing the control α( · ). The functions ℓ: ℝ^3 × A × [0,+∞) → ℝ and g: ℝ^3 → ℝ, called, respectively, the running cost and the final cost, are given bounded and continuous functions.
Given the dynamical system, it is crucial to define a suitable cost functional, in order to obtain the optimal strategy to control the epidemic. One can include several terms related to the sanitary impact and to other social effects. In particular, we take into account the following components:
* social and economic costs due to the disease itself; this includes lost working hours, psychological consequences of isolation, etc.,
* hospitalisation costs, including those related to Intensive Care Units (ICU),
* costs to implement the restrictive measures,
* vaccination costs.
We assume that every cost component is an increasing convex function, for both regularity and better representation of real costs (see <cit.>). For the socio-economic costs, we suppose that the infective are affected more than the others, therefore we take
c_1 i^2 + c_1/2 (1-i)^2,
where c_1>0 is an appropriate constant. For the second component, hospitalisations, we simply take
c_2 i^2, c_2 > 0.
Since everyone is affected by social restrictions, we assume the third component to be proportional to the entire population, which is equal to N=1, so we simply get
c_λ λ^2, c_λ >0.
Finally, for the vaccination costs, we suppose that there is a fixed cost, not proportional to the amount of people that are vaccinated, plus a second term that is indeed proportional to the susceptible, thus getting
(c^0_ν + c_ν s^2) ν^2, c^0_ν, c_ν >0.
Summing all the components, we obtain the following running cost:
ℓ(s,e,i,λ,ν) = (c_1+c_2) i^2 + c_1/2 (1-i)^2 + c_λ λ^2 + (c^0_ν + c_ν s^2) ν^2.
We point out that the choice of the constants c_1, c_2, c_λ, c^0_ν, c_ν is a rather delicate task, as they should be fitted according to experimental data for the particular disease and population one is considering, but this goes beyond the scope of the present work. We also remark that what is most important in this control approach is not the precise value of each constant, but rather their relative magnitudes. For all the simulations in Section <ref> we have used the values reported in following table:
Constant Value
c_1 3.5
c_2 14
c_λ 0.35
c^0_ν 0.025
c_ν 0.05
These values were chosen in order to prioritise the different cost components we considered. For the case of Covid-19, it has been estimated <cit.> that hospitalisations are more expensive than lock-down policies, which are in turn way more expensive than vaccinations. In particular, the cost of a hospital bed in an intensive care unit is several orders of magnitude higher than a vaccine.
For the final cost, we can take into account the distance from a given desired state for each of the compartments. In our case, we want to penalise the scenarios in which the epidemic is not over by time T, therefore we set
g(s(T),e(T),i(T)) = c_i (i(T) - ī)^2 + c_e (e(T) - ē)^2,
with c_i = c_e = 10 · c_1 and ī = ē = 0.
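A direct transcription of the running cost ℓ and of the final cost g in Python, using the constants of the table (which are themselves only indicative), could be the following.

C1, C2, C_LAM, C_NU0, C_NU = 3.5, 14.0, 0.35, 0.025, 0.05
CI = CE = 10.0 * C1
I_BAR = E_BAR = 0.0

def running_cost(x, a):
    # Running cost: socio-economic, hospitalisation, restriction and vaccination terms.
    s, e, i = x
    lam, nu = a
    return ((C1 + C2) * i**2 + 0.5 * C1 * (1.0 - i)**2
            + C_LAM * lam**2 + (C_NU0 + C_NU * s**2) * nu**2)

def final_cost(x):
    # Final cost: penalise epidemics that are not over at the horizon T.
    s, e, i = x
    return CI * (i - I_BAR)**2 + CE * (e - E_BAR)**2

print(running_cost((0.9, 0.05, 0.05), (0.3, 0.02)), final_cost((0.6, 0.01, 0.02)))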
We can also take into account a global bound for the Intensive Care Units (ICU), since their number is limited and it is one of the strong constraints for healthcare systems. In practice, this implies that the global number of people in ICU must stay always below that bound. One way to deal numerically with state constraints is described in Section <ref>.
§ THE DYNAMIC PROGRAMMING APPROACH AND SEMI-LAGRANGIAN SCHEMES
In this section, we present the Dynamic Programming (DP) approach to optimal control problems and the related semi-Lagrangian scheme to obtain the numerical solution. We refer to <cit.> for a complete description of the method.
We start by defining an auxiliary function, called the value function,
v(x,t) := inf_α∈𝒜 J_x,t(α),
which represents the best price we can pay starting from x at time t. By means of the Dynamic Programming Principle <cit.>, the value function, which is typically non-differentiable in optimal control problems, can be characterised as the unique viscosity solution <cit.> of the following Hamilton-Jacobi-Bellman equation for x ∈^3 and t ∈ (0,T):
- v_t (x,t) +max_a ∈ A{ -f(x,a,t)·∇ v (x,t) -ℓ(x,a,t) }=0,
v(x,T)=g(x).
Viscosity solutions of partial differential equations, introduced by Crandall and Lions in the 1980s <cit.>, allow us to consider non-differentiable functions that satisfy (<ref>) in a pointwise weak sense. Nonetheless, unlike other notions of weak solutions, we can still get uniqueness results <cit.> for viscosity solutions of Hamilton-Jacobi equations, which are essential for the well-posedness of the problem.
Once the value function is known, an optimal control in feedback form can be computed by solving the following optimisation problem:
a^* (x,t) = _a∈ A{ -f(x,a,t) ·∇ v(x,t) -ℓ(x,a,t) }.
Due to its non-linearity, equation (<ref>) typically does not admit solutions in closed form, hence numerical schemes are needed to obtain an approximate solution. Constructing an approximation of a nonlinear partial differential equation has two main difficulties: the first is to deal with non-regular solutions (typically just Lipschitz continuous) and the second is the high dimensionality of the problem, since the number of equations can be rather high for compartmental models. We briefly describe the semi-Lagrangian method, which naturally follows the continuous control problem, via a discrete Dynamic Programming principle (see <cit.> for details). First, we introduce a semi-discrete scheme with time step Δ t := (T-t)/n_max, where n_max is the number of time steps, for x ∈ℝ^3:
V^n_max(x)=g(x),
V^n(x)=min_a∈ A[V^n+1(x̃) +Δ t ℓ(x, a, t_n)], n= n_max-1,…, 0,
where V^n(x):=V(x, t_n), t_n=t+n Δ t, t_n_max = T and x̃:=x+Δ t f(x, a, t_n). This is a backward problem, where we start from the final condition and we get back to the initial time. In order to obtain a fully discrete, explicit scheme, the term V^n+1(x̃) is treated by interpolation on a grid, since x̃=x+Δ t f(x, a, t_n) in general is not a grid point. This approximation leads to an a priori error estimate in terms of Δ t and Δ x that typically shows convergence of order 1/2 to the value function, due to the fact that v is just Lipschitz continuous (see <cit.> for details).
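The following Python sketch illustrates the structure of the fully discrete scheme on a coarse grid, using the reduced controlled SEIR dynamics and the costs of Section <ref>. It is only a schematic illustration, not the CUDA implementation used for the simulations below: the transmission rate is kept constant, the vaccine efficacy and availability schedule are ignored, the grids are far coarser than the 150^3 × 150^2 ones used later, and the function names (seir_rhs, running_cost, ...) are our own.

import numpy as np
from itertools import product
from scipy.interpolate import RegularGridInterpolator

ns, na = 15, 8                                   # coarse state and control grids
s_grid = e_grid = i_grid = np.linspace(0.0, 1.0, ns)
lam_grid = np.linspace(0.0, 0.9, na)
nu_grid = np.linspace(0.0, 1.0, na)
beta, eps, gam = 16.0, 9.0, 4.0                  # illustrative constant parameters
c1, c2, c_lam, c_nu0, c_nu = 3.5, 14.0, 0.35, 0.025, 0.05
T, dt = 12.0, 0.05
n_max = int(round(T / dt))

def seir_rhs(s, e, i, lam, nu):                  # f(x, a): reduced controlled SEIR
    ds = -beta * (1 - lam) * s * i - nu * s
    de = beta * (1 - lam) * s * i - eps * e
    di = eps * e - gam * i
    return ds, de, di

def running_cost(s, e, i, lam, nu):
    return ((c1 + c2) * i**2 + 0.5 * c1 * (1 - i)**2
            + c_lam * lam**2 + (c_nu0 + c_nu * s**2) * nu**2)

def final_cost(s, e, i):
    return 10 * c1 * (i**2 + e**2)

S, E, I = np.meshgrid(s_grid, e_grid, i_grid, indexing="ij")
V = final_cost(S, E, I)                          # V^{n_max}(x) = g(x)
for n in range(n_max - 1, -1, -1):               # backward in time
    interp = RegularGridInterpolator((s_grid, e_grid, i_grid), V)
    best = np.full(V.shape, np.inf)
    for lam, nu in product(lam_grid, nu_grid):
        ds, de, di = seir_rhs(S, E, I, lam, nu)
        # foot of the characteristic x + dt*f(x, a, t_n), clipped to the grid box
        feet = np.stack([np.clip(S + dt * ds, 0.0, 1.0),
                         np.clip(E + dt * de, 0.0, 1.0),
                         np.clip(I + dt * di, 0.0, 1.0)], axis=-1)
        best = np.minimum(best, interp(feet) + dt * running_cost(S, E, I, lam, nu))
    V = best                                     # V^n on the grid

The feedback control a^*(x,t_n) can then be recovered at each node by keeping track of the minimising pair (λ,ν) in the inner loop.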
With the discrete value function at hand, still from (<ref>), we can also synthesise the discrete feedback controls a^*(x,t_n) and then reconstruct the optimal trajectories:
y^*(t_n+1)=y^*(t_n) + Δ t f(y^*(t_n),a^*(y^*(t_n),t_n),t_n),
y^*(t_0)=x.
Note in particular that α^*(t_n) := a^*(y^*(t_n),t_n) provides the corresponding discrete open-loop control.
It is worth noting that the semi-Lagrangian scheme is intrinsically parallel, since the computation of the value function on each grid node can be assigned to a single processor. Moreover, a single synchronisation among the processors is needed at each time step, but the computation of the new values only depends on the previous iteration and it requires the same amount of operations per node, including the location of the foot of the characteristic, the interpolation and the update. Finally, we point out that for our problem the structured grid and the positively invariant region make it possible to reduce the actual computation to a subset of the discrete state-space containing approximately 18% of the nodes.
§ THE VARIATIONAL APPROACH AND THE DIRECT-ADJOINT LOOPING METHOD
We now describe an alternative approach to Dynamic Programming, based on Pontryagin's maximum (or minimum) principle, for a finite horizon optimal control problem <cit.>. We keep the same notation adopted in the previous section. Instead of defining the value function, we want to find some conditions that make the cost functional J_x,t(α) stationary.
Assuming that the final cost g is regular, the cost functional can be expressed as
J_x,t(α) = [ ∫_t^T ℓ(y_x,t (τ),α(τ),τ) + d/dτ g(y_x,t(τ)) dτ] + g(x).
Since t and x=y_x,t(s) |_s=t are fixed, minimising J_x,t is equivalent to minimising
J̃_x,t(α) = ∫_t^T ℓ(y_x,t (τ),α(τ),τ) + d/dτ g(y_x,t(τ)) dτ =
= ∫_t^T ℓ(y_x,t (τ),α(τ),τ) + [ ∇ g(y_x,t(τ)) ]·ẏ_x,t(τ) dτ.
Keeping in mind that along the optimal trajectories
f(y(τ),α(τ),τ) - ẏ(τ) =0,
we form the augmented cost functional
J^a_x,t(α) = ∫_t^T ℓ(y_x,t (τ),α(τ),τ) + [ ∇ g(y_x,t(τ)) ]·ẏ_x,t(τ) +
+ p(τ) ·[ f(y_x,t(τ),α(τ),τ) - ẏ_x,t(τ) ] dτ
by introducing the Lagrange multipliers p(τ)=( p_1(τ),p_2(τ),p_3(τ) ). From now on, to keep the notation lighter, we will omit the dependence of the solution on the initial data {x,t} and all the explicit dependencies on time. Moreover, we assume for a moment that the controls are unconstrained. We then define the Hamiltonian
ℋ(y,α,p,τ) := ℓ(y,α,τ) + p· f(y,α,τ)
and, by imposing stationarity on J^a_x,t at α^*, we get the following optimality system
{ ẏ^*(τ) = ∂ℋ/∂ p (y^*,α^*,p^*,τ)
ṗ^*(τ) = - ∂ℋ/∂ y (y^*,α^*,p^*,τ)
∂ℋ/∂α (y^*,α^*,p^*,τ) = 0,
for all τ∈ (t,T) and with mixed initial and final conditions
{ y^*(t)=x
p^*(T)=∇_y g (y(T)).
More explicitly, omitting all the dependencies for simplicity, we can write the following two-point boundary-value problem:
{ ẏ^* = f
ṗ^* = - ∇_y f · p^* - ∇_y ℓ
∇_α f · p^* + ∇_α ℓ = 0
y^*(t)=x
p^*(T)=∇_y g (y(T))
which in our case is composed of eight equations: three for the state y^*=(s^*,e^*,i^*), three for the co-state (or adjoint variables) p^* and two for the optimality condition, plus the initial and final conditions.
We now briefly summarise the Direct-Adjoint Looping (DAL) method to solve system (<ref>) and we refer to <cit.> for further details:
* discretise the time interval [t,T] into N subintervals, thus generating the discrete times t_0=t, …, t_N=T, and set a descent step σ < 1 and a tolerance ϵ≪ 1;
* select an initial discrete approximation of the control
α^(0)=(α^(0)_0,…,α^(0)_N) and set k=0;
* once the control is fixed, integrate the forward equations for y with initial data y(t)=x, obtaining a discrete approximation of the state y^(k)=(y^(k)_0,…,y^(k)_N);
* integrate backwards the equations for the co-state p with final data
p(T)=∇_y g (y^(k)_N),
thus obtaining p^(k)=(p^(k)_0,…,p^(k)_N);
* if
|| ∂ℋ^(k)/∂α || := || ∇_α f · p^(k) + ∇_α ℓ || < ϵ,
terminate the procedure and the approximation of the optimal couple (α^*,y^*) is (α^(k),y^(k)); otherwise, set
α^(k+1) = α^(k) - σ∂ℋ^(k)/∂α
and repeat the procedure from step 3.
Multiple techniques for nonlinear optimisation problems can be found in the literature, in order to refine the algorithm and make it faster or more reliable, e.g. choosing the best descent step σ^(k) at each iteration or using inexact line search algorithms, see for instance <cit.>. Moreover, to deal with constraints on the controls, the third equation in (<ref>) has to be replaced by the minimum principle:
ℋ(y^*(τ),α^*(τ),p^*(τ),τ) ≤ℋ(y^*(τ),α(τ),p^*(τ),τ)
for all τ∈ (t,T) and all admissible controls α, and the optimality system can be solved using classical methods in nonlinear constrained optimisation <cit.>. In our experiments, we simply combine the update of the controls in step 5 of the algorithm with a projection step on the control set A, under the additional assumption of convexity (in all our simulations A is precisely the rectangle [0,0.9] × [0,ν_max]). Finally, we employ the following criterion for the convergence of the optimisation procedure:
| J_x,t(α^(k+1)) - J_x,t(α^(k)) | < ϵ.
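A minimal Python sketch of the loop summarised above, for the basic controlled SEIR model with the running and final costs of Section <ref>, may help fix ideas. It is not the authors' C++ implementation: the transmission rate is kept constant, vaccines are assumed available from the start with bound ν_max=1, and the step size, tolerance and function names are illustrative choices.

import numpy as np

beta, eps, gam = 16.0, 9.0, 4.0                   # illustrative constant parameters
c1, c2, c_lam, c_nu0, c_nu = 3.5, 14.0, 0.35, 0.025, 0.05
T, dt = 12.0, 0.05
N = int(round(T / dt))
x0 = np.array([1.0 - 6.79e-5, 5.09e-5, 1.70e-5])  # (s_0, e_0, i_0) from the text

def f(y, lam, nu):                                # controlled SEIR dynamics
    s, e, i = y
    return np.array([-beta*(1 - lam)*s*i - nu*s,
                      beta*(1 - lam)*s*i - eps*e,
                      eps*e - gam*i])

def dH_dy(y, p, lam, nu):                         # gradient of H w.r.t. the state
    s, e, i = y
    dHs = 2*c_nu*s*nu**2 + p[0]*(-beta*(1 - lam)*i - nu) + p[1]*beta*(1 - lam)*i
    dHe = eps*(p[2] - p[1])
    dHi = 2*(c1 + c2)*i - c1*(1 - i) + (p[1] - p[0])*beta*(1 - lam)*s - gam*p[2]
    return np.array([dHs, dHe, dHi])

def dH_da(y, p, lam, nu):                         # gradient of H w.r.t. the controls
    s, e, i = y
    return np.array([2*c_lam*lam + (p[0] - p[1])*beta*s*i,
                     2*(c_nu0 + c_nu*s**2)*nu - p[0]*s])

def grad_g(yT):                                   # p(T) = grad g(y(T))
    s, e, i = yT
    return np.array([0.0, 20*c1*e, 20*c1*i])

lam = np.zeros(N + 1); nu = np.zeros(N + 1)       # initial guess alpha^(0)
sigma, tol = 0.5, 1e-6
for k in range(5000):
    y = np.zeros((N + 1, 3)); y[0] = x0           # forward sweep (explicit Euler)
    for n in range(N):
        y[n+1] = y[n] + dt * f(y[n], lam[n], nu[n])
    p = np.zeros((N + 1, 3)); p[N] = grad_g(y[N]) # backward sweep for the co-state
    for n in range(N, 0, -1):
        p[n-1] = p[n] + dt * dH_dy(y[n], p[n], lam[n], nu[n])
    grad = np.array([dH_da(y[n], p[n], lam[n], nu[n]) for n in range(N + 1)])
    if np.max(np.abs(grad)) < tol:                # stopping test of step 5
        break
    lam = np.clip(lam - sigma * grad[:, 0], 0.0, 0.9)   # descent + projection on A
    nu = np.clip(nu - sigma * grad[:, 1], 0.0, 1.0)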
§ A COMPARISON BETWEEN SEMI-LAGRANGIAN SCHEMES AND THE DIRECT-ADJOINT LOOPING METHOD
In this section, we briefly discuss the features of the two approaches we presented above, in particular their pros and cons, which depend on the problem one wants to solve.
§.§ Feedback vs. open loop
The major advantage of the Dynamic Programming approach is that it provides feedback controls, i.e. controls that are a function of the state of the system. This means that the control policy can instantly react to small perturbations or uncertainty in the data. The variational approach, instead, leads to open loop controls, i.e. controls that are only a function of time. As a consequence, they may no longer be optimal if there are, for instance, some model errors or external disturbances.
§.§ The role of initial data
In the Dynamic Programming approach, the variables of the value function are the initial data (x,t) of the dynamical system. This implies that if the initial data change, we do not have to compute the value function again, but only the optimal trajectories. This allows us to build a static controller that takes the initial data as an input and outputs the control policy in (<ref>). This property can be useful in case different strategies have to be computed, starting from various initial data. The DAL algorithm, instead, must be run again completely if we change the initial data of the dynamical system.
§.§ Convergence and error estimates
For semi-Lagrangian schemes, theoretical results show convergence to the optimal solution, i.e. to the global minimum of the cost functional, with an a priori error estimate of order 1/2 in the case of a Lipschitz continuous value function, depending on the time and space discretisations <cit.>. On the contrary, the variational approach only uses necessary conditions for a control to be optimal, therefore we have no guarantee that the iterative procedure will converge to the global minimum. Unless further hypotheses on J_x,t are verified, such as convexity or coercivity, it may converge to different stationary points depending on the initial choice for α^(0), σ and ϵ. It may even not converge at all for some choices of the initial parameters. Nevertheless, when it converges, the accuracy is much higher than that of a semi-Lagrangian scheme, since only the temporal interval is discretised; both the state-space and the control set are treated as continuous. Moreover, there is no interpolation error in the DAL algorithm: the only source of error is the integration of the ordinary differential equations, which can be treated with a high-order numerical scheme. In the simulations presented in Section <ref>, in fact, we will observe some differences in the control policies given by the two methods. In particular, those obtained with DAL appear smoother and their cost, computed a posteriori, is slightly smaller. Nevertheless, the difference in the trajectories is always compatible with the magnitude of the discretisation steps.
§.§ State constraints
The need to impose some state constraints in an optimal control problem is not uncommon: one may need the optimal trajectories to stay inside a certain subset Ω of the state-space. This implies substituting the space of admissible controls with its subset
𝒜_Ω={α∈𝒜 | y_x,t ( · ; α): [t,T] →Ω}, where 𝒜 denotes the space of admissible controls.
Numerically, we can force the trajectories to stay inside Ω (or, conversely, drop all the control strategies that take them outside Ω) by simply changing the running cost ℓ(y,α,τ) so that it rapidly increases as soon as the trajectories y leave Ω.
§.§ Computational cost
The main difference between the two methods lies in their computational cost. Although the total number of iterations of the DAL algorithm cannot be known a priori, each iteration is going to be rather fast, since it mainly consists in integrating a system of ordinary differential equations. On the other hand, we can set the total number of temporal steps for the semi-Lagrangian scheme, but each iteration is going to be very expensive, since we need to build a local interpolation operator for every single node in the discrete state-space, for every single discrete control. As a result, the computational cost of semi-Lagrangian schemes drastically grows with the dimension of the state-space, i.e. with the number of equations in the dynamical system. This phenomenon is known as the curse of dimensionality, an expression coined by Bellman himself <cit.>, and makes serial implementations of SL schemes unfeasible for systems with more than 4 or 5 equations. However, in order to mitigate the computational efforts, several acceleration methods for particular Hamilton-Jacobi equations have been developed, such as Fast Marching <cit.> and Fast Sweeping methods <cit.>, domain decomposition methods (for example <cit.>) or schemes based on unstructured grids <cit.>.
§ NUMERICAL SIMULATIONS
In this section, we report the numerical results we obtained applying the two proposed approaches to some variations of model (<ref>). The Direct-Adjoint Looping algorithm was implemented in C++ and run on a 1.4 GHz Intel Core i5 quad-core CPU, whereas a parallel version of the semi-Lagrangian scheme was implemented in CUDA and run on a single NVIDIA GeForce RTX 2070 GPU, mounted on the “Gauss” cluster at the Department of Mathematics of Sapienza University of Rome.
The temporal interval we consider is 3 years and the time unit is one trimester, therefore, in the notation adopted above, t=0 and T=12. The time step is Δ t = 0.05, meaning 600 temporal iterations for both methods. For the semi-Lagrangian scheme, we take a uniform 3D mesh in the state-space [0,1]^3 made of 150^3 nodes.
To set some reasonable parameters, we consider an ideal infectious disease for which the latency period lasts on average 1/ε=10 days and the mean infectious period is 1/γ=3 weeks, hence ε=9, γ=4. In addition, we set
β(τ)=
4 if 2 ≤t≤ 3,
16 otherwise,
where t≡τ mod 4, so that R_t=R_0=4 all year long except for one trimester, where it drops to 1. This choice of β is to represent the natural decrease in the transmission rate that we observe during summer time for many infectious diseases, like chickenpox, influenza and even Covid-19 <cit.>. Moreover, we consider a vaccine with p=90% efficacy, meaning we substitute ν with p·ν in (<ref>) wherever it appears. Finally, we assume that there are 1000 infective, 3000 exposed and no recovered at time t=0. We normalise these quantities with the total Italian population N=58,983,122 <cit.>, obtaining the initial data
e_0=3000/N = 5.09 · 10^-5,
i_0= 1000/N = 1.70 · 10^-5,
r_0 = 0,
s_0=1-e_0-i_0.
The value for s_0 is calculated keeping in mind that s_0+e_0+i_0+r_0=1, since the four variables represent the fractions of population in each compartment.
Regarding the controls, we set
A = A(τ) :=[0,0.9]× [0,ν_max(τ)],
where
ν_max(τ)=
0 if τ < 4,
(τ-4) if 4 ≤τ <5,
1 if τ≥ 5,
in order to mimic the initial unavailability of vaccines when a new virus emerges. Although it is theoretically possible, we do not allow λ to reach 1, as it would mean preventing any contact between susceptible and infectious individuals and that is impossible to achieve, even with the strongest lock-down policies adopted for the Covid-19 pandemic. For the SL scheme, the control set A is discretised with a uniform 2D mesh composed of 150^2 discrete controls.
From now on, the superscript SL will indicate the optimal solutions obtained with the semi-Lagrangian scheme and the reconstruction (<ref>), whereas the superscript DAL will be assigned to the results of the Direct-Adjoint Looping algorithm. For comparison purposes, in each test we evaluate the corresponding cost functional J_x,t in (<ref>) along the optimal trajectories computed by the two algorithms, discretised with a simple rectangular quadrature rule using the same step Δ t for the time interval [0,T].
§.§ Test 1: basic model
We begin with the simplest case, that is the basic model (<ref>) with running cost (<ref>) and final cost (<ref>), meaning that our final goal is to end the infection by time τ=T. We repeat the DAL algorithm with various initial guesses for λ^(0) and ν^(0), including λ^SL and ν^SL, always obtaining the same results, which are reported in Figure <ref>.
As already anticipated, the controls we get from the two methods are slightly different, but qualitatively the same. In particular, the optimal trajectories (dashed lines in Figure <ref>) are nearly indistinguishable. We observe that the typical peak of infective (see Figure <ref> for reference) is cut using mobility restrictions, then, when a second peak is about to form, some milder restrictions are applied in order to bring the infective down to zero. Vaccination is never applied with this model.
§.§ Test 2: temporary immunity
For the second simulation we consider a scenario in which immunity – whether acquired by infection or vaccination – is not permanent. Suppose its mean duration is 1/μ=9 months. This means we have to add a path from R back to S in the diagram in Figure <ref>, leading to the following dynamical system:
s' = - β (1-λ) s i - ν s + μ r
e' = β (1-λ) s i - ε e
i' = ε e - γ i
r' = γ i + ν s - μ r
with the same parameters and initial data as before, plus μ=1/3. Now the fourth equation is not independent anymore, since r appears in the first equation, but we can still use the conservation property to express r(τ)=1-[s(τ)+e(τ)+i(τ)] and thus reduce the system to
s' = - β (1-λ) s i - ν s + μ [1- s - e - i]
e' = β (1-λ) s i - ε e
i' = ε e - γ i
As before, we run the DAL algorithm with various initial guesses for λ^(0) and ν^(0), including λ^SL and ν^SL. In this case as well, the results are always the same and are shown in Figure <ref>.
The two numerical schemes give slightly different results, but qualitatively the same. What is peculiar to models with temporary immunity is the oscillations in the trajectories due to the different rates with which the susceptible and recovered compartments exchange individuals, which are also reflected in the control policy. As a matter of fact, we can observe four different periods of mobility restrictions, the first between the first and second trimester and the last at the very end of the 3-year window, together with two waves of vaccination, a first small one around the eighth trimester and a bigger one at the end. The restrictions become progressively milder over time, while the vaccination rate increases.
§.§ Test 3: border control
For this simulation, we go back to system (<ref>), but we introduce a constant source term δ in the model, representing incoming people from the outside. Let δ_j, j=1,…,4, be the fractions of δ going respectively into the s, e, i, r compartments. In order to show that restrictive measures and vaccines are not the only possible controls that one can apply to epidemiological models, for this simulation we will not consider vaccinations; we keep the restrictive measures λ as before and introduce a new control ϕ representing the opening of borders, so that we can control the influx of individuals. We want ϕ=0 to correspond to closing all the borders and ϕ=1 to be equivalent to the uncontrolled scenario. A diagram of this variation of the SEIR model is represented in Figure <ref>.
In this case the controlled differential system becomes
s' = - β (1-λ) s i + ϕδ_1
e' = β (1-λ) s i - ε e + ϕδ_2
i' = ε e - γ i + ϕδ_3
r' = γ i + ϕδ_4
Note that the conservation of population does not hold anymore, since
d/dτ[s(τ)+e(τ)+i(τ)+r(τ)] = δϕ(τ) ≥ 0.
However, the variable r still does not appear in the first three equations, so it is always possible to solve the reduced system first and then recover r(τ), if needed, by integration of the fourth equation.
For the optimal control problem, we have to modify the running cost in order to:
* eliminate the vaccination cost, since vaccines are not considered,
* take into account that the population is not constant,
* penalise closing the borders.
Therefore, we take out the term (c^0_ν + c_ν s^2) ν^2 from the running cost, modify the restrictions cost to c_λλ^2 N(τ), and add a new term c_ϕ (1-ϕ)^2 N(τ), where N(τ) is the total population at time τ. The reason for this is that we want the cost components relative to λ and ϕ to be proportional to the entire population, since everyone is affected by the closure of the borders. The issue is that now, as already mentioned, the sum of the four variables is not equal to 1 for all τ≥ 0, but only at τ=t=0. From (<ref>) and the initial data we get
N(τ)=1+ δ∫_0^τϕ(ξ) dξ,
which we approximate as N(τ) ≈ 1 + δ τ ϕ(τ), obtaining the following running cost:
ℓ(s,e,i,λ,ϕ,τ) = (c_1 + c_2) i^2 + c_1/2 (1-i)^2 + c_λ λ^2 (1 + δτϕ) + c_ϕ (1-ϕ)^2 (1 + δτϕ).
Note that now ℓ explicitly depends on time. Moreover, we set the final cost
g(s(T),e(T),i(T))=0.
The constants relative to the incoming individuals that we used in this simulation are reported in the table below, while all the others are the same as the previous tests.
Constant Value
δ 0.75
δ_1 0.5 ·δ
δ_2 0.01 ·δ
δ_3 0.005 ·δ
δ_4 0.485 ·δ
c_ϕ 0.15
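A short sketch of the border-control variant follows: the modified dynamics and the time-dependent running cost with the approximation N(τ) ≈ 1 + δτϕ(τ). The constants are those of the tables above; the constant transmission rate and the function names are illustrative assumptions.

import numpy as np

beta, eps, gam = 16.0, 9.0, 4.0
c1, c2, c_lam, c_phi = 3.5, 14.0, 0.35, 0.15
delta = 0.75
d1, d2, d3, d4 = 0.5 * delta, 0.01 * delta, 0.005 * delta, 0.485 * delta

def rhs_borders(y, lam, phi):                     # dynamics with incoming individuals
    s, e, i, r = y
    return np.array([-beta * (1 - lam) * s * i + phi * d1,
                      beta * (1 - lam) * s * i - eps * e + phi * d2,
                      eps * e - gam * i + phi * d3,
                      gam * i + phi * d4])

def running_cost_borders(y, lam, phi, tau):
    s, e, i, r = y
    n_tau = 1.0 + delta * tau * phi               # N(tau) ~ 1 + delta*tau*phi(tau)
    return ((c1 + c2) * i**2 + 0.5 * c1 * (1 - i)**2
            + c_lam * lam**2 * n_tau + c_phi * (1 - phi)**2 * n_tau)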
We perform a first test with the DAL algorithm, selecting as initial guess for the controls λ^(0)=[0,…,0] and ϕ^(0)=[1,…,1]. This choice corresponds to the uncontrolled scenario, where no restrictions are applied and the borders are open. The results are shown on the left part of Figure <ref> and we will indicate them with the superscript left. We observe an initial lockdown, followed by three other periods of mild mobility restrictions; the borders are shut only around the first trimester and after that they are partially closed twice. We then repeat the test changing the initial guess on the borders to ϕ^(0)=[0,…,0], obtaining the results reported on the right side of Figure <ref>, which we will indicate with the superscript right. They are drastically different from those obtained in the first run and resemble the solution relative to the basic model in Figure <ref>. In fact, the algorithm is keeping the borders closed all the time, and setting ϕ(τ)=0 for all τ is precisely equivalent to the basic model (<ref>) with no vaccination. This means that at least one of the two solutions we obtained is only a local minimum (or just a stationary point) of the cost functional. A quick comparison between the value of the cost functional computed along the two solutions confirms that the first is cheaper than the second: J_x,t^left=20.092728, whereas J_x,t^right=22.321380. As a consequence, we can be sure that the solution obtained with ϕ^(0)=[0,…,0] is not optimal. At this point, we run the semi-Lagrangian scheme, obtaining some discrete controls λ^SL and ϕ^SL, and then initialise the DAL algorithm with λ^(0)=λ^SL and ϕ^(0)=ϕ^SL. In this way, we obtain the solutions reported in Figure <ref>. We observe that not only do the two solutions almost coincide, but they are rather different from the ones we had obtained before: there are three distinct periods in which the borders are shut, while the restrictions are still applied four times, more strictly in the beginning and then lowered progressively. The cost functional computed along the solutions given by the SL scheme and the latest run of the DAL algorithm is equal to, respectively, J_x,t^SL=19.988677 and J_x,t^DAL=19.988674. This final check confirms that the actual minimum of the cost functional was not computed in any of the first two simulations, due to the different initialisation of the discrete controls, and that the combination with the semi-Lagrangian scheme can really make a difference.
§.§ Test 4: state constraints
For this simulation we go back to the basic model, but we adopt a different approach to the problem. Instead of penalising the infective through their cost, we impose a state constraint on the system. This is not an abstract hypothesis, since every country has a limited number of hospital beds and, most importantly, of Intensive Care Units (ICU). We assume, for simplicity, that a fixed percentage p_ICU of those who contract the disease end up in intensive care. This hypothesis can be misleading in models with heterogeneity, but since we are working with the epidemic SEIR model, the fundamental assumption is that the population is homogeneous and well mixed, as already mentioned in Section <ref>. For this reason, our assumption is reasonable and we set p_ICU=0.05%. In this way, the constraint on intensive care units translates into a constraint on the state variable i. Taking the total number of intensive care units available in Italy before the Covid-19 pandemic <cit.> and normalising it as we previously did for the initial data, we obtain an upper bound on i(τ):
i_max = 0.13.
We keep only the components of the running cost (<ref>) related to restrictions and vaccines and add the penalisation term discussed in Section <ref>. Once again, we run the DAL algorithm with various initial guesses, always obtaining the trajectories and controls reported in Figure <ref>.
Comparing this scenario with that of Test 1, without the state constraint, we notice that the strategy is not exactly the same. Here we have a six-month period of mobility restrictions as soon as the disease starts spreading, but still no vaccination, as in the unconstrained case. Since in this case the goal is not to minimise the number of infective at each time, but only to keep them strictly below the threshold i_max, there is no need for further restrictions after the second trimester.
§.§ Test 5: temporary immunity and state constraints
Similarly to what we did for the previous simulation, we impose the same state constraint on i, representing the limited availability of Intensive Care Units, on the model with temporary immunity (<ref>). We use the same running cost of Test 4, run the DAL algorithm with λ^(0)=ν^(0)=[0,…,0] and obtain the solution reported in Figure <ref>.
We observe two restriction periods, one in the first six months and another one during the fourth trimester, while vaccines are practically never used. The cost functional evaluated along this solution is J_x,t^ν≈ 0=0.362150. We then run the SL scheme and the DAL algorithm initialised with λ^(0)=λ^SL and ν^(0)=ν^SL, obtaining the results shown in Figure <ref>.
Similarly to what we found in Test 3, the solution we obtain is different from the one we get for λ^(0)=ν^(0)=[0,…,0]. Although the profile of λ(τ) is similar to what we obtained before, there is a remarkable difference in ν(τ), which is positive from around τ=4 (when vaccines become available, see (<ref>)) onwards. The cost of this solution is J_x,t^DAL=0.341224 < J_x,t^ν≈ 0, confirming that this is indeed the minimum of the cost functional.
§.§ Summary
For the sake of completeness, we summarise in Table <ref> the results of all the tests we performed. In particular, we report the values of the cost functional J_x,t evaluated along, respectively, the trajectories of the uncontrolled system (J_x,t^U), those computed by the semi-Lagrangian scheme (J_x,t^SL), and those obtained with the Direct-Adjoint Looping method initialised with the output of the SL scheme (J_x,t^DAL). In addition, we report the discrete L^∞ norm of the difference between the optimal trajectories obtained with the two algorithms:
|| Δ y ||_∞ :=
[ || s^SL - s^DAL ||_∞; || e^SL - e^DAL ||_∞; || i^SL - i^DAL ||_∞ ].
It is worth noting that || Δ y ||_∞ is always at most of order O(10^-2), which is compatible with the magnitude of the discretisation steps in the SL scheme. These results confirm that the Direct-Adjoint Looping algorithm, if initialised with the output of a semi-Lagrangian scheme, produces solutions that are in agreement with those given by the SL scheme. Moreover, as expected, J_x,t^DAL < J_x,t^SL for all simulations, since the state-space and the control set are not discretised in the variational approach, leading to more precise solutions. In particular, Test 3 and Test 5 are a concrete example in which the Dynamic Programming approach can significantly help to validate the optimality of the policy outputted by the variational one.
§ CONCLUSIONS
We presented some variations of the epidemic SEIR model, including a variable transmission rate, the initial unavailability of vaccines, temporary immunity, state constraints on Intensive Care Units and interactions with external populations. We formulated some finite horizon optimal control problems associated with them, considering vaccines, restrictive measures and the possibility to close the borders as controls. We presented two different theoretical approaches, Dynamic Programming and Pontryagin's Maximum Principle, in order to devise suitable approximation procedures for the computation of optimal strategies. We performed several numerical simulations and showed that descent methods based on the variational approach can be highly sensitive to the initial guess on the controls, and this can lead to sub-optimal solutions. We showed that a combination of the two methods, where we initialise the descent algorithm with the solution given by the semi-Lagrangian scheme, can help to obtain high-quality, reliable approximations of the optimal controls.
The aim of this work is to give an idea of the potential and the advantages of the combination of the two approaches for optimal control problems in epidemiology. This idea can also be applied to more complex models, aiming at capturing more realistic scenarios. In this case, a collaboration with epidemiologists would be needed in order to estimate the parameters from real data, not only for the dynamical system, but for the cost functional as well. The same framework can be applied to other compartmental models and different controls can be considered, based on the particular characteristics of the infectious disease or on the containment instruments that are available in a certain area. Even more in general, the techniques we presented can be adapted to other differential systems in biomathematics for which it may be interesting to study some optimal control problems.
§ DECLARATIONS OF INTEREST
None.
§ ACKNOWLEDGEMENTS
This research was supported by the Italian PNRR fund within the doctoral project Modelli matematici per la simulazione e controllo delle pandemie at Sapienza University of Rome, Italy.
The authors would like to dedicate this work to the memory of professor Maurizio Falcone, who passed away in November 2022. His long experience in numerical analysis and optimal control, together with his ideas and enthusiasm for research, were of great inspiration to start this research project.
|
http://arxiv.org/abs/2307.04709v1 | 20230710171722 | Fatal mathematical errors in Hong-Page Theorem and Landemore's epistemic argument | [
"Álvaro Romaniega"
] | econ.TH | [
"econ.TH"
] |
Fatal errors and misuse of mathematics in the Hong-Page Theorem and Landemore's epistemic argument
[email protected]
In the pursuit of understanding collective intelligence, the Hong-Page Theorems have been presented as cornerstones of the interplay between diversity and ability. However, upon rigorous examination, there seem to be inherent flaws and misinterpretations within these theorems. Hélène Landemore's application of these theorems in her epistemic argument and her political proposal showcases a rather unsettling misuse of mathematical principles. This paper critically dissects the Hong-Page Theorems, revealing significant inconsistencies and oversights, and underscores the indispensable role of 'ability' in group problem-solving contexts.
This paper aims not to undermine the importance of diversity, but rather to highlight the dangers of misusing mathematical principles and the necessity for a more nuanced comprehension of mathematical results when applying them to social sciences.
Álvaro Romaniega
August 12, 2023
====================
In the burgeoning field of collective intelligence theory, one theorem has emerged as particularly influential: the Hong-Page Theorem. This theorem posits that in certain conditions, a diverse group of problem solvers has the potential to outperform a homogeneous group of high-ability problem solvers. The theorem has been instrumental in shaping discourses around the importance of diversity in decision-making and problem-solving contexts. One notable application of the Hong-Page Theorem is found in the work of Yale professor Hélène Landemore, who uses it as the cornerstone of her political proposal for an "Open Democracy." Landemore's thesis argues that greater cognitive diversity in a collective decision-making process not only enhances its epistemic properties, achieving the epistemic superiority of democracy, but also serves as a normative benchmark.
However, the theorem has not gone unchallenged. Mathematician Abigail Thompson has been among the most vocal critics, casting doubt on the use of the Hong-Page Theorem in this way and raising concerns about the validity of the conclusions drawn from it. In this paper we strive to unmask the Hong-Page Theorems as what we believe they truly are: substantially trivial findings, devoid of meaningful social content. By doing this, we aim to reveal the misconceptions propagated through their unchecked acceptance, especially in their application to real-world socio-political phenomena, such as in Hélène Landemore's "Open Democracy" proposition and her epistemic argument for democracy.
We hope that our critique stimulates introspection within the collective decision theory community and encourages a more discerning and judicious use of mathematical theorems. It is not appropriate to use a mathematical theorem, without further investigation, merely because its conclusions align with our views.
The paper is organized as follows. We begin with a thorough dissection of the Hong-Page "Diversity Trumps Ability" Theorem in Section <ref>. We delve into the definitions that underpin these theorems and carefully examine the assumptions relating to the problems and problem solvers. We then derive and discuss a series of trivial corollaries from these assumptions, concluding with a concise analysis of other related results and a simplified proof of the theorem.
In Section <ref>, the paper takes a critical turn, presenting a series of counterexamples that challenge the robustness of the Hong-Page Theorem. We start by challenging the necessity of the injective function and the 'unique best agent' assumption, eventually moving towards a discussion on the performance and selection of clones.
The critique deepens in Section <ref> with the proposition of a new Hong-Page style theorem: 'Ability trumps diversity'. This section revises the original assumptions to allow a fair comparison between ability and diversity, ultimately culminating in the presentation of this theorem.
Section <ref> then shifts focus onto the 'Diversity Prediction Theorem' and the 'Crowds Beat Averages Law', discussing their results and pointing out the asymmetric role of 'ability' and 'diversity'.
Moving into Section <ref>, we delve into the misuse of mathematics in the original Hong-Page theorems. We analyze how the mathematics was exploited to obscure a trivial fact and answer questions it was not designed to address. We further discuss the issue of using the prestige of mathematics to lend credence to flawed interpretations, and highlight a basic mathematical error in advocating for diversity.
In the penultimate section, Section <ref>, we turn our critique towards Hélène Landemore's political proposal. We present her misuse of mathematics and demonstrate a basic misunderstanding of the mathematical theorems. We proceed to analyze the misuse of the hypotheses of the 'Diversity Trumps Ability Theorem' and argue for the vacuousness of the 'Numbers Trump Ability Theorem'.
Finally, Section <ref> wraps up the paper, offering a robust summary of the findings and their implications on the ongoing discourse surrounding collective intelligence and the role of diversity and ability within it.
Certain sections of this text, particularly the initial ones, presume a degree of mathematical proficiency (although the mathematics used is not particularly involved). However, Sections <ref>, <ref>, and <ref> are essentially devoid of mathematics, instead referencing earlier sections and the results derived there. For the mathematically-intensive portions, I have strived to provide a non-technical explanation, typically prefaced with the phrase, "In other words".
A final caveat: this critique should not be taken as a dismissal of the importance of diversity (which I consider one important epistemic factor among others) in decision-making, but rather as a call to address the misuse of mathematics in these contexts. It urges us to consider the rigorous and nuanced approach required when applying mathematical theories to sociopolitical constructs. As such, this paper makes a contribution to the ongoing discourse on collective intelligence, fostering a deeper understanding of the mathematical theorems used to achieve a desired conclusion.
§ THE "DIVERSITY TRUMPS ABILITY" THEOREM
§.§ Definitions
Let V_ϕ: X → [0,1] be a function and let X be, for simplicity, finite. The Problem is finding the maximum of V_ϕ.
A problem solver is a function ϕ:X→ X. The set of problem solvers is denoted by Φ. For a probability measure on X (full support), ν, the expected value of the performance of each agent is given by
𝔼_ν(V_ϕ∘ϕ)=∑_x∈ XV_ϕ(ϕ(x))ν(x) .
Let's discuss the intuition behind the problem-solving process. Given an initial state x from the set X, a problem solver aims to find a solution to the problem by mapping x to ϕ(x). In other words, the problem solver transforms the input x into a potential solution, hoping that this transformation will maximize the value of the function V_ϕ.
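A tiny Python sketch may help fix the definition: a finite state space, a value function, a problem solver encoded as a dictionary, and its expected performance under a full-support measure ν. All the concrete names and values below are illustrative choices, not data from the original papers.

X = ["a", "b", "c", "d"]
V = {"a": 0.25, "b": 0.5, "c": 0.75, "d": 1.0}        # a value function on X
phi = {"a": "b", "b": "b", "c": "d", "d": "d"}        # a problem solver phi: X -> X
nu = {x: 1.0 / len(X) for x in X}                     # uniform full-support measure

def expected_performance(phi, V, nu):
    """E_nu[V o phi] = sum_x V(phi(x)) nu(x)."""
    return sum(V[phi[x]] * nu[x] for x in X)

print(expected_performance(phi, V, nu))               # 0.75 for this toy agent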
§.§ Problem assumptions
[Unique problem]
∀ϕ,ϕ'∈Φ, V_ϕ=V_ϕ'=:V.
In other words, any two problem solvers evaluate the problem space in the same way. This assumption simplifies the analysis by ensuring that all problem solvers have a consistent evaluation criterion for the problem[In real settings, like the ones to which people attempt to apply the theorem, this assumption is far from accurate. People have different values, so right from the beginning, the hypotheses are not satisfied. Nevertheless, in this paper, I want to focus on more profound critiques, even though issues like diversity in values are quite significant.].
[Unique solution]
∃_=1 x^* / V(x^*)=1.
In other words, there is exactly one optimal solution to the problem that maximizes the value of the function V. This assumption allows us to focus on finding the unique solution.
[Strictly increasing problem]
V is injective, i.e., if V(x)=V(x'), then x=x'.
That is, we can order X as {x_1,…,x_|X|} such that
V(x_1)<…<V(x_|X|) .
In other words, this assumption implies that the problem has a well-defined ordering of potential solutions. The original article did not state explicitly that the value function V is one-to-one. This assumption is necessary for the theorem to hold, as Thompson pointed out, <cit.>.
§.§ Problem solver assumptions
[Everywhere ability in problem solvers]
∀ ϕ∈Φ: V(ϕ(x))≥ V(x) ∀ x∈ X . In particular, ϕ(x^*)=x^* .
This assumption, in combination with Assumption <ref> and <ref>, states that all problem solvers are able to improve the value of any state. In other words, if a problem solver is applied to a state, the value of the state will never decrease.
[No improvement, idempotence]
∀ϕ∈Φ, ϕ∘ϕ=ϕ .
This assumption states that problem solvers are idempotent. In other words, applying a problem solver to a state twice will have the same effect as applying it once.
["Difficulty", imperfect problem solvers]
∀ϕ∈Φ ∃ x / ϕ(x)≠ x^* .
In other words, by hypothesis, for every agent there are instances where they fail to find the optimal solution.
[“Diversity”, sufficient unstuck problem solvers]
∀ x∈ X\{x^*} ∃ ϕ∈Φ / ϕ(x)≠ x .
In other words, this assumption ensures a “diversity” of problem solvers in Φ, with at least one problem solver capable of making progress from any non-optimal state.
[Unique best problem solver]
|arg max_ϕ∈Φ{𝔼_ν(V∘ϕ)}|=1, i.e., there is only one best-performing agent
In other words, this assumption states that there is only one problem solver that performs best on average. There is only one problem solver that is the most likely to lead to the optimal state.
§.§ Group problem solver assumptions
[In series deliberation]
The agents Φ' := {ϕ_1,…, ϕ_N}, when working together to solve the problem starting at x, are equivalent to:
* First, i_1 such that x_1 := ϕ_i_1^x(x)≠ x_0 := x.
* Second, i_2 such that x_2 := ϕ_i_2^x(x_1)≠ x_1.
* Inductively, i_j such that x_j := ϕ_i_j^x(x_j-1)≠ x_j-1.
This stops at x_n such that it is a fixed point for all elements of {ϕ_1,…, ϕ_N} (all the agents are stuck at the same point, unanimity).
There can be multiple sequences arriving at the same point. The fixed point exists as x^* is a fixed point for all elements of Φ by assumption. The group performance amounts to composing the functions in the proper order:
ϕ^Φ'(x) := ϕ_i_n^x∘…∘ϕ_i_1^x(x) .
In other words, this assumption states that a group of problem solvers can be thought of as a sequence that takes turns applying the problem solvers in the group to the current state, such that we approach the optimal value. The group will stop when it reaches a state that is a fixed point for all of the problem solvers in the group, i.e., unanimity.
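A minimal sketch of this deliberation procedure, with agents encoded as dictionaries as in the sketch above, reads as follows; the two toy agents and the rule of always applying the first non-stuck agent are illustrative choices of the sequence i_1, i_2, ….

def group_solve(agents, x):
    """In-series deliberation: apply a non-stuck agent until unanimity is reached."""
    while True:
        movers = [phi for phi in agents if phi[x] != x]
        if not movers:                 # x is a fixed point of every agent: unanimity
            return x
        x = movers[0][x]               # one deliberation step

phi_A = {"a": "b", "b": "b", "c": "d", "d": "d"}      # alone, gets stuck at b
phi_B = {"a": "a", "b": "c", "c": "c", "d": "d"}      # alone, gets stuck at a and c
for start in "abcd":
    print(start, "->", group_solve([phi_A, phi_B], start))   # the pair always reaches d

Termination of the loop relies on every move weakly improving an injective V (Assumptions <ref> and <ref>); without injectivity the deliberation can cycle, which is precisely the issue exploited by the counterexample in Section <ref>.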
[Clones]
There exists an infinite number of identical copies of each agent ϕ∈Φ.
This assumption states that there are an infinite number of problem solvers available. This is necessary for the theorem to hold.
§.§ Trivial corollaries from the assumptions
By construction (i.e., from the assumptions and not the theorem), we have the following corollaries. Note that no profound or even standard mathematical results are needed; we just need the assumptions mentioned above combined with trivial arithmetic and trivial properties of sets.
All members of Φ working together can solve the Problem ∀ x∈ X.
This follows straightforwardly from Assumptions <ref>, <ref>, <ref>, and <ref>. Indeed,
V(x)<V(ϕ_i_1^x(x))<… <V(ϕ_i_n^x(x_n-1))=1 ,
for some n≤|X|.
∃ x∈ X such that ∀ N_1∈ℕ, N_1 “clones” of the best performing agent cannot solve the Problem.
By Assumptions <ref> and <ref>, N_1 “clones” of the best performing agent work in the same way as a single clone alone. By Assumption <ref>, no agent can solve the problem alone for some x, so the corollary follows straightforwardly.
So, just “rearranging" the assumptions we can trivially (again, no profound mathematics, just basic arithmetic and trivial properties of sets) prove the following version of the “theorem” containing the main conclusion. The advantage of this formulation is that no clones are needed.
Given the Assumptions <ref>-<ref>, all problem solvers working together perform better than the best problem solver (in the sense that there is a state x such that the best problem solver cannot solve but the whole group can). Note that the best problem solver is included in the first group.
This follows straightforwardly from the corollaries (i.e., the assumptions) given above.
Again, this is by construction, based on the way we've formulated our hypotheses. Let us explain the proof in words. By assumption, we have arranged for the best agent not to always solve the problem. On the other hand, by mere assumption, for every state, there is an agent that can reach a better state. As the number of states is finite, they will reach the optimum in a finite number of steps. Note that the “diverse group” includes the best agent. The fact that the “diverse group” outperforms the best agent is trivial, as they include additional agents and, by hypothesis, they do not worsen the solution and improve it for some states.
To connect with the original statement, let us formulate the following version.
Given the hypotheses:
(𝒜_1) "Ability-diversity" assumptions. The assumptions given above, Assumption <ref>-<ref>.
(𝒜_2): "Counting" assumptions. We choose two groups. In the first, N_1 (large) clones such that[Actually, even less is required: it is sufficient to have a subset Φ_0 such that ∀ x∈ X, there are agents that reach x^* in accordance with Assumption <ref>. For the sake of simplicity, we do not make this irrelevant distinction.] Φ⊂{ϕ^R_1,…,ϕ^R_N_1}=:Φ_R, and, in the second, there are N_1 clones of the best performing agent selected from a group of N, Φ_B.
Then, the performance of Φ_R is better than Φ_B in the sense of (<ref>).
It is trivial by the corollaries given above. Indeed, by the first corollary 𝔼_ν(V∘ϕ^Φ_R)=1 . By the second corollary, 𝔼_ν(V∘ϕ^Φ_B)<1 as ∃_≥ 1 x such that V(ϕ^Φ_B(x))<1.
So, what did Hong and Page prove in their article? Essentially, they demonstrated that assumption 𝒜_2 holds almost surely (after defining some probability measures). This is a fairly straightforward probabilistic claim that has nothing to do with either ability or diversity, which are contained in the assumptions 𝒜_1. It is a probabilistic fact that can be shown, regardless of whether the objects considered are diverse agents, incapable problem solvers, balls in a box, or mathematical functions in a Hilbert space. Nevertheless, this is the heart of the proof of their article published in the Proceedings of the National Academy of Sciences. But one might question, given that we have shown that Theorem <ref> (a trivial restatement of the assumptions) encapsulates all the information regarding diversity and ability, what is the necessity of introducing clones? This is why Thompson says that the theorem "is trivial. It is stated in a way which obscures its meaning. It has no mathematical interest and little content." We can compare the previous version with the original statement:
Given the assumption above, let μ be a probability distribution over Φ with full support. Then, with probability one, a sample path will have the following property: there exist positive integers N and N_1, N > N_1, such that the joint performance of the N_1 independently drawn problem solvers exceeds the joint performance of the N_1 individually best problem solvers among the group of N agents independently drawn from Φ according to μ.
As noted by Thompson, <cit.>, the theorem as originally stated was false because Assumption <ref> was not included. See Section <ref> for more details. Note that “N_1 individually best problem solvers” are just clones of the best problem solver (unique by assumption), not, for instance, the first and second according to their expected value (which will perform better). This restriction is imposed by assumption.
More precisely, we have a new assumption:
By hypothesis we have the following.
* The first group is selected randomly from a pool of clones of the elements in Φ. The group size N_1 can be adjusted as required.
* Similarly, the second group, also of size N_1, consists of the individually best problem solvers among an independent, identically distributed pool of N clones of the elements in Φ. This selection process follows the stipulations that:
* the pool size N can be adjusted as required,
* the selection allows for the repetition of the best problem solvers.
§.§ Other results and simpler proof
Assuming the conditions of Theorem <ref> with N_1 large enough, with probability one,
* the randomly selected group of N_1 problem solvers will invariably converge on the correct solution with unanimity and no disagreement,
* the “random group” always contains the best-performing agent.
These facts explain why this group can always outperform the best problem solvers.
This is straightforward from Corollary <ref> and the fact that, following the first part of Assumption <ref>, the first group includes a copy of Φ μ-almost surely. It is also Lemma 1 in <cit.>. There is unanimity since, for every state x∈ X, the group solution is x^*, which every agent accepts as a solution because ϕ(x^*)=x^*. The second statement follows from the Strong Law of Large Numbers; see below for more details, Remark <ref>.
For instance, the following Φ := {ϕ_1,ϕ_2,ϕ_3} such that
x V (x) ϕ_1 (x) ϕ_2 (x) ϕ_3 (x)
a 1/4 b a b
b 1/2 b c b
c 3/4 d c c
d 1 d d d
satisfies the hypotheses of the theorem, but, if the “random” group does not include the best performing agent[Assumptions can be made to exclude the best performing agent, while ensuring that there is another agent that performs as the best one does when needed. Consequently, it is no surprise that the theorem still holds. However, this approach is purely ad hoc.], ϕ_1, then it cannot outperform ϕ_1.
In fact, a simpler proof of the theorem can be constructed based on this simple fact. This approach also exposes the theorem's triviality given its underlying assumptions.
First, by hypothesis (Assumptions <ref> and <ref>), ∃ x_*∈ X and ϕ^*,ϕ_*∈Φ such that the best agent ϕ^* satisfies ϕ^*(x_*)≠ x^* and V(ϕ_*(ϕ^*(x_*)))>V(ϕ^*(x_*)). By hypothesis (Assumptions <ref> and <ref>), V∘ϕ^{ϕ^*, ϕ_*}≥ V∘ϕ^*, where the inequality is strict for at least one point. Given that ν has full-support, 𝔼_ν( V∘ϕ^{ϕ^*, ϕ_*})> 𝔼_ν(V∘ϕ^*).
Second, we introduce the probabilistic selection of clones. By the Strong Law of Large Numbers (SLLN),
μ(ω∈Ω : ⋂_ϕ(lim_N→∞f^N(ϕ)=μ({ϕ})))=1 ,
where f^N(ϕ) represents the frequency of appearance of ϕ when the size of the group of clones is N. The intersection is finite. For this full-measure set, we define N_ϕ=N_ϕ(ω) as the integer such that, if N≥ N_ϕ, then f^N(ϕ)>μ({ϕ})/2. Following Assumption <ref>, we take N_1 := max{N_ϕ^*,N_ϕ_*, 2/μ({ϕ^*}), 2/μ({ϕ_*})} for the first event. By these definitions, at least one copy each of ϕ_* and ϕ^* is included. For the second event, take N≥2/μ({ϕ^*})N_1, so there are more than N_1 copies of ϕ^* in the second group. The proof then follows from the first part of this argument.
In other words, the first paragraph corresponds to the part of the theorem where diversity and ability are put into play, which essentially reduces to the following triviality: by assumption, there are two distinct agents - the best agent, and another agent - and a state x_* such that the best agent does not provide the optimal solution for this state. However, the other agent can improve upon the solution of the best agent for this state. This implies that the performance of a group consisting of the best agent and this additional agent surpasses the performance of the best agent alone, at least for some states. For other states, again by assumption, adding an agent does not worsen the situation, thus completing the deterministic clone-free part of the proof. Subsequently, we apply the strong law of large numbers to ensure that, under the setting of Assumption <ref>, the random group will always contain copies of these two agents, and the best performing agents are all copies of the unique best performing agent.
Noting that if X is finite, then Φ must also be finite, we can choose N_1 such that almost surely every member appears in the random group. We just need to set:
N_1 := max_ϕ∈Φ{ N_ϕ, 2/μ({ϕ})} .
In the original proof by Hong and Page, N_1 is set so that every member needed to reach the optimum state x^* appears with probability one. Thus, N_1 must be large, which virtually[In the case where ϕ_0 is not needed at all and μ({ϕ_0}) is close to zero, we cannot ensure that, even for a large N_1, one copy is included almost surely.] guarantees that at least one copy of each member of Φ is included in the “random group”. In any case, this N_1 as defined is sufficient to ensure the theorem holds true.
§ REMOVING TECHNICAL HYPOTHESES: COUNTEREXAMPLES
The theorem depends critically on certain assumptions that we are going to analyze now. In this section, I will refrain from critiquing certain empirical hypotheses, such as the assumption that agents share the same concept of problem-solving (Assumption <ref>), or that they can recognize the solution (ϕ(x^*)=x^*). Such critiques largely pertain to the plausibility inherent in every model, and one could always defend these by invoking ideal conditions, much as one might assume frictionless systems in physics. Although these critiques can be adequate, a different critique, following a “Moorean style”, will be presented in the following section, where we will revisit some empirical hypotheses (not the ones mentioned above), slightly modifying them to enhance their plausibility, which may lead to contrary conclusions. However, in this section, I wish to focus on certain technical assumptions, often overlooked, that are essential for the theorem to hold. Without these assumptions, the theorem fails. These technical assumptions, by their nature, involve facets of the model (not the underlying reality) that are difficult to verify, hence making it challenging to argue for their plausibility. This raises the question of why we should adopt these hypotheses, rather than others, unless we are trying to reach a particular conclusion.
The distinction between empirical and technical assumptions might seem somewhat arbitrary, but it nonetheless serves a useful purpose in our analysis. For instance, assume that we apply the theorem to a jury in a criminal trial. As we will see, the values of V (apart from V(x^*)=1, the right option) are important for the theorem to hold; if certain conditions are not met, then the thesis of the theorem fails. However, how could one verify that the hypotheses on V hold when V is not empirically observable? Similar remarks apply, even more strongly, on how to model clones and select them for (almost infinite) groups.
§.§ V is an injection, Assumption <ref>
This was pointed out by Thompson and we reproduce it here with minor modifications. This assumption was not originally in <cit.>, making the theorem false.
Let X = {a, b, c, d}. Define V (x) and three agents ϕ_1, ϕ_2 and ϕ_3 according to the table below:
x V (x) ϕ_1 (x) ϕ_2 (x) ϕ_3 (x)
a 1/3 d c b
b 2/3 b c b
c 2/3 c c b
d 1 d d d
The set of agents Φ = {ϕ_1, ϕ_2, ϕ_3} satisfies all the hypotheses of the theorem. The agents ϕ_1, ϕ_2, ϕ_3 have average values 5/6, 9/12, 9/12 respectively, so ϕ_1 is the “best” agent. Notice that all three agents acting together do not always return the point d, where the maximum of V occurs. Indeed, all three agents acting together work only as well as ϕ_1 acting alone. Hence, in this case, no group of agents can outperform ϕ_1, or, equivalently, multiple copies of ϕ_1, hence no N and N_1 exist which satisfy the theorem.
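The failure can be checked mechanically. The short Python sketch below encodes the table as dictionaries (our own encoding) and computes the set of states reachable from a given starting point through single-agent moves; from b or c the optimum d is never reached, so the whole group performs no better than ϕ_1 alone.

phi1 = {"a": "d", "b": "b", "c": "c", "d": "d"}
phi2 = {"a": "c", "b": "c", "c": "c", "d": "d"}
phi3 = {"a": "b", "b": "b", "c": "b", "d": "d"}
agents = [phi1, phi2, phi3]

def reachable(agents, start):
    """All states the deliberation can visit from `start` via single-agent moves."""
    seen, frontier = {start}, [start]
    while frontier:
        x = frontier.pop()
        for phi in agents:
            if phi[x] not in seen:
                seen.add(phi[x])
                frontier.append(phi[x])
    return seen

print(reachable(agents, "b"))          # {'b', 'c'}: the optimum d is never reached
print(reachable(agents, "a"))          # contains 'd': from a, phi_1 solves the problem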
In real-life applications, the value of V can be highly uncertain. Therefore, it is sensible to assume that, in the case of two states, x,x'∈ X, where it is estimated that V(x)≈ V(x'), we set V(x)=V(x') for practical purposes. This situation should not be disregarded as uncommon. Nevertheless, as argued in <cit.>, cited by Landemore:
You don’t fail to make it to the cashier in a grocery store when you are completely indifferent between buying one more apple or one more orange, nor do deliberators in a meeting fail to decide on some course of action if two options have precisely equivalent value. Adding a simple tie-breaking rule to the theorem is entirely sufficient to deal with the mathematical hiccup and move forward with the fundamental scientific question at hand.
This argument completely misses the point. The problem is not that we are indifferent between the solutions b or c, but rather that no one knows the solution if we start at b or c (no one moves from these states to d; they get stuck at c). The fact that the value function is “indifferent” implies that the hypotheses (in particular, the “diversity” assumption) are not sufficient to guarantee that d is reached.
The thesis of the theorem still holds if we replace Assumption <ref> with ∀ x∈ X\{x^*} ∃ ϕ∈Φ / V(ϕ(x))> V(x). However, this adjustment only serves to render the theorem more trivial and misapplies the term diversity. This condition simply implies that for every state, there exists an agent that can strictly improve that state. It is unsurprising that in a finite number of steps, these agents reach the maximum, which, by hypothesis, the best problem solver cannot always attain. Consequently, this adjustment does fix the theorem, but at the cost of making it more trivial and highlighting that what the theorem requires is not "diversity", but the existence of a more "able" problem solver who can improve upon areas where others fall short.
§.§ Unique best agent, Assumption <ref>
To justify this assumption, Hong and Page write:
Let ν be the uniform distribution. If the value function V is one to one, then the uniqueness assumption is satisfied.
This is a mathematical mistake. Let us consider X = {a, b, c, d}. Define V (x) such that 0<V(a)<V(b)<V(c)<1, V(b)<1/2(V(a)+1), and N agents ϕ_1, ϕ_2 and ϕ_i, i=3,…,N, according to the table below:
x V (x) ϕ_1 (x) ϕ_2 (x) ϕ_i (x)
a V(a) a c ϕ_i(a)
b V(b) b c ϕ_i(b)
c V(c) d c ϕ_i(c)
d 1 d d d
The set of agents Φ = {ϕ_1, ϕ_2, ϕ_i}_i=3^N satisfies all the hypotheses of the theorem, and the agents are ordered according to their expected value. If we set
V(c) := 1/3(V(a)+V(b)+1) ,
then ϕ_1, ϕ_2 have the same “expected ability” under the uniform measure. Furthermore, the theorem is now false. Indeed,
ϕ_1∘ϕ_2(a)=d, ϕ_1∘ϕ_2(b)=d, ϕ_1(c)=d, ϕ_1(d)=ϕ_2(d)=d .
In this case, no group of agents can outperform {ϕ_1, ϕ_2}; no N and N_1 exist which satisfy the theorem.
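A quick numerical check of this example, with the illustrative choice V(a)=0.2 and V(b)=0.5 (which satisfies the stated inequalities), confirms both the tie in expected ability and the fact that the pair {ϕ_1, ϕ_2} already reaches d from every state:

Va, Vb = 0.2, 0.5                                  # any values satisfying the inequalities
V = {"a": Va, "b": Vb, "c": (Va + Vb + 1) / 3, "d": 1.0}
phi1 = {"a": "a", "b": "b", "c": "d", "d": "d"}
phi2 = {"a": "c", "b": "c", "c": "c", "d": "d"}

def expected(phi):                                 # expected ability under the uniform measure
    return sum(V[phi[x]] for x in "abcd") / 4

print(expected(phi1), expected(phi2))              # 0.675 0.675: two best agents, a tie
print(all(phi1[phi2[x]] == "d" for x in "abcd"))   # True: phi_1 after phi_2 always finds d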
Here, we have demonstrated an example involving two agents possessing identical "expected abilities". Of course, in real-world applications, there would likely be uncertainty or variability in the value of 𝔼_ν(V∘ϕ); thus, it would be prudent to consider an interval rather than a single point. In such circumstances, the top-performing agents might comprise multiple individuals with high probability. However, as demonstrated, the theorem may not necessarily hold in these scenarios.
§.§ Clones performance
As we saw, simply by the assumptions, one million Einsteins, Gausses or von Neumanns are the same as just one of them. Indeed, mathematically, by Assumption <ref>, {ϕ,…,ϕ} is a well-defined set of problem solvers such that
ϕ^{ϕ,…,ϕ} = ϕ∘…∘ϕ = ϕ ,
where the first equality holds by Assumption <ref> and the second by Assumption <ref>.
Again, this is just a consequence of the assumptions they arbitrarily made. But it may not make much sense if we want to apply the result to real-life scenarios. More realistic versions could be:
* Improvement: V∘ϕ∘ϕ≥ V∘ϕ (with strict inequality at some points). In other words, if a competent agent produced a solution after a certain amount of time, say one hour, it would provide a better answer if it had one million hours, or if a “clone” could pick up where it left off.
* Work in parallel[As a technical note, now {ϕ, ϕ} should be considered a multiset (the multiplicity distinguishes multisets).]: V∘ϕ^{ϕ, ϕ}≥ V∘ϕ (strict inequality for some point). In other words, one can imagine that a group of Einsteins would not work sequentially, always producing the same result, but would divide the work, resources, focus, etc. to produce a better answer once they have put all of their findings together.
I am not certain about the most appropriate way to model clones, but the authors' approach does not seem plausible. However, it is necessary for the theorem to stand. Otherwise, as N_1→∞, no group of agents could generally outperform the N_1 copies ϕ,…,ϕ; we cannot guarantee the existence of an N that would satisfy the theorem.
Following Jason Brennan's 'magic wand' thought experiment, let's imagine we are confronted with an exceedingly difficult problem to solve, for instance, the Navier-Stokes Millennium Problem. Suppose we have a magic wand at our disposal that can create agents to solve the problem for us. Should we choose Terence Tao, or should we use the magic wand to create 100 Terence Taos working together to solve our problem? According to the assumption of the Hong-Page Theorem, this magic wand would be useless.
Regarding the issue of clones, the following is stated in <cit.>:
[...] we present a simpler version of our result where X is
assumed to be finite. This finite version makes the insight more straightforward, although it comes at the cost of trivializing some intricate assumptions and arguments. For example, the group of the best-performing agents is proven below to be comprised of identical agents. This is an artifact of the finite version. In the general version under reasonable conditions, the group of the best-performing agents can be shown to be similar, not necessarily the same.
However, this explanation is far from accurate. Clones also appear in the less realistic case where X is not finite. This occurs because we have to take copies from Φ and, if ϕ has already appeared, it can appear again. Moreover, the finite version of the model is neither sufficient nor necessary for proving that the group of the best-performing agents is comprised of identical agents.
In a scenario where X is finite, the best agents could be several different ones. This could be easily demonstrated by following my previous example from Section <ref> or <cit.>. In the version where X is not finite, according to Assumption 5 of their appendix, B(ϕ^*,δ)∩Φ={ϕ∈Φ | d(ϕ,ϕ^*)<δ} could contain only one agent, namely ϕ^*. It should also be noted that a finite X represents a more realistic setup. Typically, rendering things continuous simplifies the analysis, as it allows us to use standard calculus, for example, but this is not the case here. It is less realistic to assume that agents have answers to an infinite set of elements than to a finite set.
§.§ Selection of clones
Similarly, the selection of clones appears to be arbitrary and seems tailored to reach the intended conclusion.
* The choice of two independent groups seems arbitrary. Why not fix N and, from the same group, select a random subgroup of size N_1 as well as the best N_1 problem solvers, and then compare? In such a scenario, the theorem might not hold. Indeed, we need N≫ N_1 for the Strong Law of Large Numbers (SLLN) to apply, since μ({ϕ^*}) can be very small. However, a random group of N_1 agents, Φ_N_1, might not include all the problem solvers of Φ, so we cannot guarantee a probability of one, as the theorem does. That is, for N>N_1, there are settings such that
ℙ(Φ⊂Φ_N_1)<1 .
* Permitting repetition is also arbitrary. We could, for instance, select the best problem solvers without allowing repetitions. Recall from Section <ref> that adding a repeated clone is equivalent to adding nothing. This would prevent the paradoxical result that, by mere hypothesis, choosing the best problem solvers from a group of size N is more beneficial when the group size is relatively small, i.e., that for choosing the best it is preferable to have fewer options available. However, if we prohibit repetitions, then the theorem does not hold, as the best problem solvers will include those (not counting repeated clones) of the “random” group, so no N and N_1 exist which satisfy the theorem.
We should note the general approach adopted by Hong and Page. They introduce randomness into their model by employing clones. Subsequently, they invoke the Strong Law of Large Numbers to ensure that the frequency of appearance converges to the original probability μ, effectively eliminating the randomness that was introduced and obscuring the results. In the next section, we will remove clones.
§ NEW HONG-PAGE STYLE THEOREM: ABILITY TRUMPS DIVERSITY
We are going to state and prove a new version of the Hong-Page theorems whose hypotheses are of the same kind and at least as plausible as those of the Hong-Page theorem (arguably more so: as we will see, there is, for instance, no need for clones, and disagreement is possible). Nevertheless, we will reach the opposite conclusion, “ability trumps diversity”. I am not claiming that this theorem has any social content; it simply reflects that it is the assumptions that are doing all the work. The moral would be: if we create two groups from the group in the original theorem – one in which we make the minimal reduction in ability while ensuring full diversity, and another in which we considerably reduce diversity while ensuring ability – the less diverse group would systematically outperform the fully diverse group. In other words, ability trumps diversity.
§.§ The new assumptions
Among a set of agents Φ, we select two finite groups with different properties. We are going to modify some assumptions, but the others remain the same. First, let us introduce the possibility of disagreement, following Assumption <ref>, as:
ϕ_i_j+1^x(ϕ_i_j^x(x_j-1))=x_j-1 , with ϕ_i_j^x(x_j-1)≠ x_j-1 and i_j^x≠ i_j+1^x .
A disagreement is a stopping point. In other words, if there is a cycle in which one agent proposes a new solution and another agent reverses it, there is a disagreement, and the initial solution is returned as the group solution. This is a simple model in which disagreement is possible.
Note that in the original formulation of the Hong-Page Theorem, for any group of any size, even if they might not be able to reach the correct solution, there will be no disagreement in any case, as per Assumption <ref>. This always leads to unanimity, which is highly unrealistic.
Let also μ_x be a probability measure such that, if x is the previous solution, μ_x({i}) represents the probability that ϕ_i(x) is the next solution in the deliberation chain, see Assumption <ref>. This measure has full support; no one is silenced. The indices are chosen independently. Once x_0 is fixed, this defines a probability measure ℙ on the possible paths. That is,
ℙ(x_k=x'| x_k-1=x)=μ_x({i | ϕ_i(x)=x'})>0 .
§.§.§ The ability group.
Let us denote a group by Φ^A={ϕ_α}_α∈ A which is selected such that:
* Ability: V∘ϕ_α≥ V. In other words, this group is chosen to ensure ability in the sense that each agent does not decrease the value of the initial state given. This is Assumption <ref>, but now is imposed by the selection of the group.
* Common knowledge: ∃ X_CK⊂ X such that:
Non-diversity set: ∀α,α'∈ A we have ϕ_α|_X_CK≡ϕ_α'|_X_CK.
Knowledge: ϕ_α (x_c)=x^* ∀ x_c∈ X_CK ∀ α∈ A.
In other words, this group selection comes with a selection bias. The agents have a common knowledge that makes them similar; for the set X_CK, they all give the same solution. This is an extension of the second part of Assumption <ref>; agents are not only able to recognize that x^* is the solution, but they can do the same for other states x∈ X_CK. Note that x^*∈ X_CK.
* Smaller diversity set: ∀ x∈ X\ X_CK, Assumption <ref> holds. In other words, for this group, the original assumption of Hong and Page is only required to hold on the “smaller” set X\ X_CK.
* More importantly, we diminish diversity in a second way. For all x in X\ X_CK, there exists exactly one agent, ϕ^x, who provides a distinct answer, and the set of unique answers could be equidistributed, meaning it's not just one agent always giving the different answer. Formally, |{x | ϕ^x=ϕ}|≤|X|/|Φ^A|+1 for all ϕ∈Φ^A. Therefore, if the ratio is small enough, two agents are quite similar, signifying a lack of diversity, i.e., following <cit.>, their distance is relatively small.
For the theorem to function, we don't require a large X_CK, but having it large makes the agents less diverse. It could be just {x^*}, as in the original theorem.
§.§.§ The diversity group.
Let us denote a group by Φ^D={ϕ_j}_j∈𝒥 which is selected such that the maximum diversity is guaranteed. More precisely, there is a unique x^0∈ X such that:
* Full-diversity with ability: ∀ x∈ X\{x^0, x^*} there is a set of agents {ϕ_j^x_k}_k=1^n_x⊂Φ^D such that ϕ_j^x_k(x)≠ x, and such that every state closer to the solution x^* (that is, every state that improves on x), and only those states, is the local optimum of some agent.
* Minimal ability loss: there is only one agent ϕ_j_0∈Φ^D and only one state x^0 such that V(ϕ_j_0(x^0))<V(x^0). Note that this is the minimal ability that can be lost.
§.§ The theorem
Let Φ^A, Φ^D be as above, with the given assumptions. Then, the ability group outperforms the diversity group.
To prove this theorem, we need to compare the performances of the two groups. First, we consider the ability group Φ^A. Any agent from Φ^A does not decrease the value of the given state. Moreover, for any state in the non-diversity set X_CK, all agents in Φ^A will return the optimal solution x^*. Thus, following the measure μ_x', for any x∈ X,
V(x)≤ V(ϕ_i_1^x(x))≤…≤ V(ϕ_i_n^x(x_n-1))=1 .
This is because p_x' := μ_x'^A({α∈ A | ϕ_α(x')≠ x'})>0. This holds true even in the worst-case scenario where all agents, except one, are stuck at that point. Thus, by the subadditive property, being stuck has probability
∏_i=1^∞(1-p_x')=0 →ℙ(∃ n_0, x' : ϕ_i^x_n(x')=x' ∀ n≥ n_0)≤∑_n_0∑_x''∏_i=n_0^∞(1-p_x'') =0 .
where the sum over x'' is finite. Thus, with probability one, in a finite number of steps we obtain strict inequalities and reach x^*, which is returned as the solution. Hence, for all x∈ X, every path starting at x leads to x^*. Thus,
𝔼_μ^A,ν(V∘Φ^A) := ∑_x∈ Xν(x)𝔼_μ^A(V∘Φ^A(x))=1 .
where Φ^A(x) := ϕ^Φ^A(x), the solution returned by the group Φ^A when starting from x.
Now, consider the diversity group Φ^D. This group is selected to maximize diversity and only allows a minimal ability loss. However, there exists exactly one agent ϕ_j_0 and one state x^0 such that V(ϕ_j_0(x^0))<V(x^0). Let x^(-1) := ϕ_j_0(x^0) and let x be such that V(x)≤ V(x'). Similarly to the above, by finiteness, the probability that
ℙ(∃ i^x_k | ϕ_i^x_k(x_k-1)=x')>0 .
We have two possibilities:
* If x_k-1=x^(-1) and V(x)≤ V(x^(-1)), then a “disagreement cycle” can be completed, x_k-1=x^(-1)→ x^0→ x^(-1), returning x^(-1). This happens with probability
∑_k=1^∞ℙ(x_k-1=x^(-1))ℙ(x_k=x^0| x_k-1=x^(-1))μ^D_x^0(j_0)>0,
where ℙ(x_k=x^0| x_k-1=x^(-1))=μ^D_x^(-1)({j∈ J | ϕ_j(x^(-1))=x^0})>0, where we have used the full-diversity assumption.
* Also, if x_k-1=x^0 again, completes a disagreement cycle, x^0→ x^(-1)→ x^0, returning x_0. Similarly, this happens with probability
∑_k=1^∞ℙ(x_k-1=x^0)μ^D_x^0(j_0)ℙ(x_k+1=x^(-1)| x_k=x^(-1))>0 .
Since V(x^(-1))<V(x^0)<1, we thus have
𝔼_μ^D,ν(V∘Φ^D) := ∑_x∈ Xν(x)𝔼_μ^D(V∘Φ^D(x))<1 .
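To make the mechanism concrete, the following Monte-Carlo sketch in Python implements the deliberation-with-disagreement dynamics described above on a toy state space. The two groups below are hypothetical, minimal instances chosen by us for illustration (a few weak improvers for the ability group; an improver plus the single value-decreasing agent ϕ_j_0 for the diversity group); they are not the general construction used in the proof.

import random

# Toy state space: states 0..5 with V(x) = x/5 and global optimum x* = 5.
X = list(range(6))
V = lambda x: x / 5.0
X_STAR = 5

# Hypothetical ability group: every agent weakly improves every state.
ability_group = [lambda x: min(x + 1, X_STAR)] * 3

# Hypothetical diversity group: an improver plus the single agent phi_j0
# that maps x^0 = 3 to the strictly worse state x^(-1) = 2.
improver = lambda x: min(x + 1, X_STAR)
bad_agent = lambda x: 2 if x == 3 else x
diversity_group = [improver, bad_agent]

def deliberate(group, x0, rng, max_steps=10_000):
    # Random-order deliberation with the disagreement rule: if an agent
    # reverses the previous proposal, the earlier state is returned.
    prev, cur = None, x0
    for _ in range(max_steps):
        if all(phi(cur) == cur for phi in group):   # common fixed point
            return cur
        nxt = rng.choice(group)(cur)                # full-support selection
        if nxt == cur:
            continue
        if prev is not None and nxt == prev:        # disagreement cycle
            return prev
        prev, cur = cur, nxt
    return cur

rng = random.Random(0)
trials = 20_000
for name, group in (("ability", ability_group), ("diversity", diversity_group)):
    avg = sum(V(deliberate(group, rng.choice(X), rng)) for _ in range(trials)) / trials
    print(name, round(avg, 3))

In this toy run the ability group always ends at x^* (average value 1.0), while the diversity group is sometimes trapped in a disagreement cycle around x^0 and returns a suboptimal state, in line with the two expectations computed above.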
§ THE DIVERSITY PREDICTION THEOREM AND THE CROWDS BEAT AVERAGES LAW
§.§ The results.
They also present another theorem that will be useful later. First, some definitions. Given a set of individuals labeled i=1,…, n, we associate to each of them a signal or prediction of some magnitude whose true value is θ. The squared error of an individual's signal equals the square of the difference between the signal and the true outcome:
SE(s_i) = (s_i - θ)^2 .
The average squared error is given by
MSE(s) = 1/n∑_i=1^n(s_i - θ)^2 ,
with s (s_1, s_2, …, s_n). The collective prediction is
c =c(s) = 1/n∑_i=1^n s_i .
Predictive diversity of the collective is defined as:
σ̂(s) = 1/n∑_i=1^n(s_i - c)^2 .
This is simply a (biased) estimation of the variance. Two trivial theorems can be deduced. The first, a particular version of the Pythagoras Theorem:
The squared error of the collective prediction equals the average squared error minus the predictive diversity:
SE(c(s)) = MSE(s) - σ̂(s) .
This is quite standard, but let us give a proof using the (generalized) Pythagoras Theorem. In ℝ^n we can define the standard Euclidean or l^2-norm. If c=(c,…, c) and analogously for θ, then ⟨s-c ,θ-c⟩_l^2=0 so the Pythagoras Theorem gives
‖s-θ‖_l^2^2=‖θ-c‖_l^2^2+‖s-c‖_l^2^2 .
The squared error of the collective's prediction is less than or equal to the averaged squared error of the individuals that make up the crowd.
SE(c(s)) ≤MSE(s) .
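Both statements are straightforward to verify numerically. The following short Python sketch (the true value θ and the prediction profile are arbitrary illustrative numbers, not data from any study) checks the Diversity Prediction Theorem and the Crowd Beats Averages Law:

import numpy as np

rng = np.random.default_rng(1)
theta = 4.2                                   # true value (illustrative)
s = rng.normal(loc=5.0, scale=2.0, size=10)   # ten individual predictions

c = s.mean()                          # collective prediction
SE_c = (c - theta) ** 2               # squared error of the collective
MSE = np.mean((s - theta) ** 2)       # average squared error
sigma_hat = np.mean((s - c) ** 2)     # predictive diversity

assert np.isclose(SE_c, MSE - sigma_hat)   # Diversity Prediction Theorem
assert SE_c <= MSE + 1e-12                 # Crowd Beats Averages Law
print(SE_c, MSE, sigma_hat)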
§.§ The asymmetric role of “ability” and “diversity”.
Before we proceed, let's note two simple mathematical observations:
MSE and σ̂ cannot be treated as independent as both depend on s. That is, altering one will generally change the other (it is not fixed), with the effect on the prediction error being, in principle, undetermined.
Therefore, it would be a significant mathematical error to consider that, for the prediction error SE to be small, it is enough to make “diversity” σ̂ large.
These observations are mathematically trivial. Also, they can be graphically demonstrated when we consider the case of n=2, which brings us back to the standard Pythagoras theorem, see Figure <ref>.
Knowing either MSE(s) or σ̂ alone is not sufficient to determine the value of the prediction error. In fact, according to the Crowd Beats Averages Law, we can see:
SE(MSE, σ̂) ∈ [0,SE^max(s)] .
This bound is sharp, with SE^max := MSE. Since SE is not solely determined by either "ability" or "diversity", the effect of these variables can be studied through the maximum prediction error, i.e., SE^max. More precisely:
Let SE^max represent the maximum prediction error. Then,
* If ΔMSE<0, then ΔSE^max<0. In other words, if "ability" increases, the maximum prediction error decreases. Particularly, if the increase in ability is large enough, the prediction error will decrease.
* If Δσ̂>0, then ΔSE^max≥0. This implies that if "diversity" increases, the maximum prediction error also increases. In particular, an increase in diversity alone does not guarantee a reduction in the prediction error. Furthermore, if the increase in diversity is substantial enough, the maximum prediction error will also increase.
This is a trivial consequence of MSE=SE^max and the twin inequality of the Crowd Beats Averages Law: σ̂≤SE^max.
Using the Crowd Beats Averages Law (and other trivial results), we arrive at a seemingly contradictory result: increasing "ability" eventually reduces the prediction error, but increasing diversity ultimately increases the maximum prediction error. Consequently, the Diversity Prediction Theorem and the Crowd Beats Averages Law provide limited insight into how diversity impacts the prediction error in a general setting without controlling for ability.
§ HONG AND PAGE'S MISUSE OF MATHEMATICS: AN OBSCURED TRIVIAL THEOREM
§.§ Misusing the mathematics to obscure a trivial fact
The Hong-Page theorem is, in essence, a misuse of mathematics. It employs standard probability techniques, such as the Borel-Cantelli lemma (unnecessarily, as my simpler proof demonstrates), to obfuscate its hypotheses, making it inaccessible to individuals outside the field. That is, mathematics is used to complicate a simple fact, not to simplify complex relations.
Indeed, as we saw in Theorem <ref> the theorem's conclusion—that a group performs better than the best single individual—is inevitable by construction, by the way the theorem's premises are structured. It posits two fundamental hypotheses: first, that the "best" individual agent, ϕ^*, cannot always solve the problem optimally, and second, that a diverse group Φ of agents can always find an optimal solution. When these assumptions are in play, the conclusion of the theorem is logically guaranteed.
But, to hide this simple fact, the existence and selection of clones is introduced in order to invoke the probabilistic apparatus. This is done in the second set of assumptions of Theorem <ref>. They define a probability space and prove that, if we can select clones of Φ indefinitely, with probability one the first group will contain at least one copy of each element of Φ, and the second group will be chosen so that it is made only of copies of ϕ^*. Using the previous paragraph, the conclusion follows directly. This constitutes the heart of their article's proof published in the Proceedings of the National Academy of Sciences. But, since we've shown that Theorem <ref>—a simple restatement of the assumptions—encapsulates all information regarding diversity and ability (the probabilistic part could be applied to anything, like colored stones in a box), one may question the necessity of introducing clones in the first place. Thus, it appears that the theorem's complexity may stem more from an obfuscation of its simple underpinnings than from a deep, mathematical truth about diversity and ability.
Thus, while the Hong-Page theorem uses mathematical techniques, its conclusion is more a trivial product of its constructed premises than a deep, unexpected and universal truth revealed through rigorous mathematical exploration.
§.§ Misusing the theorem to answer a question it does not address
In <cit.>, they say:
These results still
leave open an important question: Can a functionally diverse
group whose members have less ability outperform a group of
people with high ability who may themselves be diverse? The
main result of our paper addresses exactly this question.
This is false. They insist:
To make a more informed decision, the organization administers a test to 1,000 applicants that is designed to reflect their individual abilities in solving such a problem. Suppose the applicants receive scores ranging from 60% to 90%, so that they are all individually capable. Should the organization hire (i) the person with the highest score, (ii) 20 people with the next 20 highest scores, or (iii) 20 people randomly selected from the applicant pool? Ignoring possible problems of communication within a group, the existing literature would suggest that ii is better than i, because more people will search a larger space, but says little about ii vs. iii. The intuition that agents with the highest scores are smarter suggests that the organization should hire ii, the individually best- performing agents. The intuition that the randomly selected agents will be functionally diverse suggests that the organization should hire iii, the randomly selected ones. In this paper, we provide conditions under which iii is better than ii.
This is false. By Proposition <ref>, the groups being compared consist of clones that include, at least, all agents necessary to always reach the correct solutions versus clones of the best agent, which, by assumption, is the same as the best agent alone. As N_1 is large enough, the best agent will be included in the first group.
Expressed differently, if we consider only one copy for each agent (as more are, by assumption, see Section <ref>, redundant), the groups being compared are Φ versus ϕ^*. Note that ϕ^* ⊂Φ. No random selection is involved, as discussed in Section <ref>. Therefore, a more appropriate comparison would be:
* the person with the highest score,
* 20 people with the next 20 highest scores,
* 20 people randomly selected from the applicant pool,
* the 1000 applicants (or however many are needed to always reach the solution) working together perfectly.
The Hong-Page paper deals with i) versus iv), a triviality, not, as they explicitly claim, ii) versus iii).
During a conference at the European Central Bank (ECB), Page stated:
I create a group of the 20 best agents - the best individuals - and I compare them to a random group of 20 agents [...] it turns out though if you do the math on this, the diverse group almost always outperforms the other group if you use reasonable-sized groups, like groups of size 10 or 20 [...] the paper and model I just showed you where diverse groups do better than random groups was written by myself and Lu Hong [...]
He used Figure <ref> to illustrate this point. However, as we have mentioned before, this representation is not directly related to the theorem. In the "Alpha Group", the best agent, 138, should be the only member, and this agent should also be included in the Diverse Group, along with all the other agents, see Figure <ref>. Furthermore, groups of size 10 or 20 may not be large enough for the SLLN to hold, especially if μ({ϕ}) is small enough for some agents.
In the same conference at the ECB, he further states:
As the problem becomes complex, the best team doesn't consist of the best individuals. Why? Because the best individuals tend to be similar and what you really want on hard problems is diversity.
However, this statement seems to confuse, either deliberately or unintentionally, an assumption with a factual result. The claim that "the best individuals are similar" (actually, clones of the same agent) is not a derived conclusion, but a trivial consequence of the presuppositions, see Section <ref>. The proof of this claim cannot be found in <cit.>; it's established by assumption, Section <ref>. Furthermore, as explored in Section <ref>, even when conducting a fair comparison between ability and diversity - and even when the ability group is characterized by relatively homogeneous problem solvers - ability can still outperform diversity. Therefore, this statement is completely misguided.
§.§ Misusing the prestige of mathematics
When Page claims:
This theorem is no mere metaphor or cute empirical anecdote that may or may not be true ten years from now. It is a mathematical truth.
It is as accurate as asserting:
If p ∧ q, then p ∨ q .
is no mere metaphor or cute empirical anecdote that may or may not be true ten years from now. It is a mathematical truth. To be more precise, the logical structure of the theorem is as follows:
* Hypothesis 1, H_1: The best agent cannot always solve the problem.
* Hypothesis 2, H_2: The “diverse group” can always solve the problem.
* Conclusion, C: The “diverse group” outperforms the unique best agent at problem-solving, signifying that "diversity trumps ability".
This argument is logically valid—a tautology, so the proposition H_1∧ H_2→ C is certainly true. However, the argument's soundness might be questionable as the hypotheses might not be factual. Thus, it doesn't provide any certainty regarding the conclusion, C, i.e., whether diversity indeed outperforms ability. Here, mathematics seems to be used as a tool of persuasion, asserting that it's not ideological, but pure math. However, as we have shown, they are not proving what they claim to be proving.
§.§ A basic mathematical error in advocating for diversity
Scott Page has argued that large diversity implies small prediction error. However, this conclusion, while favorable to the hypothesis that diversity reduces prediction error, constitutes a significant mathematical mistake. Indeed, in a https://www.youtube.com/watch?v=EXn4vOuU3BE list=WL ab_channel=OsirisSalazarlecture (University of Michigan), Page states:
And you might also ask, where does the madness of crowds originate? How could it be that a crowd could get something completely wrong? Well, that's not difficult to understand either, because crowd error equals average error multiplied by diversity. If I want this to be large, if I want large collective error, then I need large average error, meaning that I need people to be getting things wrong, on average. Additionally, I need diversity to be small. So, the madness of crowds comes from like-minded individuals who are all incorrect, and once again, the equation provides us with this result.
This mathematical misunderstanding involves a basic arithmetic error that we mentioned in Error <ref>. From the "Diversity Prediction Theorem" (with s term omitted for simplicity),
SE= MSE - σ̂ ,
we cannot deduce that a large SE implies a small σ̂. Rather, it implies that MSE must be much larger than σ̂, where σ̂ could be as large as desired. See, for instance, Figure <ref> for an illustration, where the prediction error is large, but diversity is larger (so it cannot be "small").
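A concrete profile (the numbers below are purely illustrative) makes the point explicit: the collective error SE can be large while σ̂ is also large, since the identity only forces MSE to exceed σ̂ by the amount SE.

import numpy as np

theta = 0.0
s = np.array([100.0, -60.0, 140.0, 220.0, 100.0])   # illustrative predictions

c = s.mean()                        # c = 100
SE = (c - theta) ** 2               # 10000: a large collective error ...
MSE = np.mean((s - theta) ** 2)     # 18320
sigma_hat = np.mean((s - c) ** 2)   # 8320: ... together with a large "diversity"

assert np.isclose(SE, MSE - sigma_hat)
print(SE, MSE, sigma_hat)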
§ LANDEMORE'S MISUSE OF MATHEMATICS: AN INVALID AND UNSOUND ARGUMENT FOR HER POLITICAL PROPOSAL
§.§ The argument
The argument, in a nutshell, is, <cit.>:
Democracy is here modeled as a collective decision-procedure involving the combination of two mechanisms: inclusive and egalitarian deliberation and simple majority rule. The claim is that democracy thus defined is more likely to yield better solutions and
predictions on political questions than less inclusive and less egalitarian decision-rules because it structurally maximizes the cognitive diversity brought to bear on collective problems. Cognitive diversity—here defined as the fact that people see problems in the world and make predictions based on different models of the way the world works or should be interpreted
—is a group property that has been shown to be a crucial factor of group performance in various contexts and indeed more important to the problem-solving abilities of a group than individual competence of the members itself (Page 2007). I argue that under the conditions of uncertainty that characterize politics (the fact that the bundle of issues to be faced by any polity over the medium to long term cannot be predicted ahead of time), political decision-making characterized by maximal inclusiveness and equality can
be expected to be correlated with greater cognitive diversity, which, in turn, is correlated with better problem-solving and prediction. A central assumption of the argument is that politics is characterized by uncertainty. This uncertainty (which is an assumption about the world, not necessarily the subjective epistemic stage of the deliberators) is what renders all-inclusiveness on an equal basis epistemically attractive as a model for collective decision-making. Given this uncertainty egalitarian inclusiveness is adaptive or “ecologically
rational” (Landemore 2014).
And the conclusion is:
The argument presented here is based on a simple model of democracy and is entirely deductive. It essentially credits the epistemic superiority of democracy to inclusive deliberation, that is, deliberation involving all the members of the community (whether directly or, where unfeasible, through their democratic representatives) [...] The advantage of my deductive epistemic argument, ultimately, is that even if it fails to explain the way actual democracies work, it can serve as a useful normative benchmark to diagnose the way in which existing democracies epistemically dysfunction and imagine alternative institutional arrangements. One implication of the epistemic argument is indeed that in order to obtain the theoretically promised epistemic benefits of democracy, we would need to make the decision-procedures used in actual democracies a lot more inclusive and a lot more egalitarian than they are at present. Institutional reforms that the argument points toward include the replacement of elected representatives with randomly selected ones and a greater use of simple majoritarian decision-making.
While the argument is not explicitly stated[Despite claiming that 'The argument presented here is based on a simple model of democracy and is entirely deductive,' the precise premises, intermediary steps, and conclusions are never explicitly stated. This should be the first step in constructing the argument and possible replies.], a crucial hypothesis needed for the theorem assumes the following forms:
* Hypothesis, H: Cognitive diversity, defined as individuals seeing problems and making predictions based on different models of the world, is a group property that improves group performance in various contexts.
* Hypothesis', H': Greater cognitive diversity within a group correlates with better problem-solving and prediction abilities.
To justify this, Landemore relies on the results of Hong and Page as described above <cit.>:
To make that claim, I essentially rely on Hong and Page’s formal results about the centrality of cognitive diversity to the emergent property of
collective intelligence.
We aim to demonstrate that this hypothesis is unjustified, which subsequently renders the argument both logically unsound and inapplicable to real-world scenarios. Additionally, we will highlight instances where she incorrectly deduces propositions from these mathematical theorems, leading to a logically invalid argument.
Some of the critiques presented in the previous section also apply to Landemore. For instance, when she informally discusses the theorem, she falls into the same misrepresentation as Hong and Page, as discussed in Section <ref>. For instance, she stated in a public https://youtu.be/HERmRx9wDXc?t=1654debate:
There are multiple Hong-Page theorems. The one that I use mostly is the 'Diversity Trumps Ability' theorem. It's basically a formalization of the idea that under certain conditions, you're better off solving problems with a group of average people who think differently than with a group of experts or very smart people.
As we have previously illustrated, this assertion is entirely false, see Section <ref> and below for more details.
§.§ Basic misunderstanding of the mathematical theorems
Landemore says (about the Theorem <ref>):
Let me pause here to emphasize what a remarkably counterintuitive, indeed amazing, result this is. Where the conditions apply, you are better off with a random group of people who think differently than with a bunch of Einsteins! Who would have thought? In my view, this result should truly change our perspective on what makes groups smart to begin with; I believe it has huge implications for the way we should think about political bodies making
decisions on our behalf.
Also <cit.>,
That theorem was sufficiently counterintuitive that they provided a computational example to provide intuition.
This misunderstanding is significant. She is confusing the conclusions of the theorem with its hypotheses. The fact that a 'bunch of Einsteins' is equivalent to only one Einstein (who, by hypothesis, cannot always solve the problem) is not a conclusion; it's an assumption that she fails to mention. More precisely, the hypotheses stipulate that the "random" or "diverse" group always reaches the global solution, see Corollary <ref>. Moreover, by assumption, a group of Einsteins is considered equivalent to one Einstein (Section <ref>). Yet again by assumption, it's not always guaranteed that this group or an individual Einstein reaches the global solution (Assumption <ref>). How is this counterintuitive or surprising? It appears to be merely a reiteration of the assumptions, which Landemore never fully discloses. She fails to mention that clones working together are presumed to perform just like a single person working alone (refer to Section <ref>), or that the best agent ("Einstein") is postulated to be unique (see Section <ref>), as detailed in <cit.>, <cit.>. Further details will be elaborated below. Furthermore, by Proposition <ref>, the random group includes a collection of Einsteins. Thus, the basic structure of the argument is:
* Hypothesis 1, H_1: Group G_R always reach the optimal solution. G_R includes a collection of Einsteins.
* Hypothesis 2, H_2: A collection of Einsteins is not perfect.
* Conclusion, C: Group G_R is "better" than a collection of Einsteins.
Thus, the "under the right conditions" of Landemore is, basically, presupposing the conclusion. How can someone truly understand the theorem and consider this counterintuitive? Once the probabilistic component, which might be obscuring for non-mathematicians but standard for most mathematicians, is removed, the theorem is a triviality (see Section <ref>).
This misunderstanding appears to have significant implications on Landemore's thought (from the same debate as before):
The theorem's conclusions are not intuitive at all. I think they run against an entrenched belief that experts know best [...]. What this theorem unveiled for me is the possibility that when it comes to collective intelligence, we should stop thinking of it in terms of an addition of individual intelligences. It's really more about the group property. Does it contain enough diversity that we're going to push each other closer again to this global optimum? And that, I think, is not trivial at all. For me, it was a paradigm shift.
Moreover, it leads her to compare the Hong-Page theorem, which is trivial (Section <ref>), with a genuinely profound and counterintuitive theorem, such as Arrow's impossibility theorem. However, she believes the difference in treatment between the theorems is based on the difference in their conclusions:
For me, these results are remarkable. In fact, it's interesting to see that other theorems, like the Arrow's Impossibility Theorem, which leads to very negative conclusions about democracy, are considered brilliant and worth a Nobel Prize. It always seems that things are not considered surprising and trivial if they go in one particular direction.
Despite this, Landemore, after stating Theorem <ref>, says <cit.>
To the extent that cognitive diversity is a key ingredient of collective intelligence, and specifically one that matters more than average individual ability, the more inclusive the deliberation process is, the smarter the solutions resulting from it should be, overall.
As we saw in Section <ref>, this is false. The theorem presupposes that every problem solver in every state improves the state to a new state closer to the global optimum. Furthermore, as shown by the more realistic Theorem <ref>, if we create two groups from the group in the original theorem – one in which we make the minimal reduction in ability while ensuring full diversity, and another in which we significantly reduce diversity while ensuring ability – the less diverse group would systematically outperform the fully diverse group. In other words, ability trumps diversity.
There are also other severe mathematical errors with the "Diversity Prediction Theorem". Landemore says <cit.>:
In other words, when it comes to predicting outcomes, cognitive differences among voters matter just as much as individual ability. Increasing prediction diversity by one unit results in
the same reduction in collective error as does increasing average ability by one unit.
This is mathematically incorrect and entirely wrong: the effect is undetermined, it's not of the same magnitude, and it's not necessarily a reduction, as explained in Section <ref>, see Error <ref>. It is a mathematical error to assume that one term in Theorem <ref> can be changed while the others remain fixed. Furthermore, as we observed earlier in Proposition <ref>, the diversity and ability terms do not play the same role. While increasing ability eventually reduces the prediction error, increasing diversity does not have the same effect and, furthermore, eventually increases the maximum prediction error. Therefore, Landemore's argument has a significant gap; without controlling for ability, increasing diversity does not guarantee a reduction in the prediction error.
§.§ The misuse of hypotheses of the "Diversity Trumps Ability Theorem"
To justify the use of the theorem, she says <cit.>,
Importantly, the four conditions for this theorem to apply are not utterly demanding. The first one simply requires that the problem be difficult enough, since we do not need a group to solve easy problems. The second condition requires that all problem solvers are relatively smart (or not too dumb). In other words, the members of the group must have local optima that are not too low; otherwise the group would get stuck far from the global optimum. The third condition simply assumes a diversity of local optima such that the intersection of the problem solvers’ local optima contains only the global optimum. In other words, the participants think very differently, even though the best solution must be obvious to all of them when they are made to think of it. Finally, the fourth condition requires that the initial population from which the problem solvers are picked must be large and the collection of problem solvers working together must contain more than a handful of problem solvers. This assumption ensures that the randomly picked collection of problem solvers in the larger pool is diverse—and in particular, more cognitively diverse than a collection of the best of the larger pool, which would not necessarily be the case for too small a pool relative to the size of the subset of randomly chosen problem solvers or for too small a subset of problem solvers in absolute terms.
This is, once again, incorrect. Those are not the only conditions required for the theorem to apply. Among others, she doesn't mention the hypotheses from Sections <ref>, <ref>, <ref>, and <ref>. If these conditions do not hold, the theorem doesn't hold (see the counterexamples). And, as we've seen, these conditions can be rather restrictive (such as assuming that a billion Einsteins will not outperform a single Einstein). Therefore, her statement of the theorem is incorrect.
Landemore is following Page's book, which also neglects to mention these conditions. Moreover, his Condition 2 (Landemore's second condition; see also <cit.>) is ill-stated. The 'Calculus Condition' requires that ϕ(X) is countable (which is trivial if X is finite), but he interprets it as 'all problem solvers are smart.' This condition doesn't relate to being smart, contrary to Page's and, consequently, Landemore's interpretation. For instance, consider the function ϕ:X→ X defined as ϕ(x)=x_m, where V(x_m) is a global minimum of V (e.g., 0). Then, ϕ(X)={x_m}, finite, which hardly represents being 'smart.' In fact, it's the worst agent conceivable since it assigns the solution furthest from the global optimum to every state. Nevertheless, Page (and subsequently Landemore) refers to this as being 'smart.' It's also noteworthy that in his book, Page's conditions are subject to Thompson's critique (Section <ref>), although Page denied it. As Landemore, citing Page, puts it in <cit.>,
Condition 3: The Diversity Condition. Any solution other than the global optimum is not a local optimum for some nonzero percentage of problem solvers.
However, in his response to Thompson, Page does not refer to his stated Condition 3, but to what he incorrectly thinks Condition 3 requires (which is the same mistake present in <cit.> and pointed out by Thompson).
Landemore defends the hypotheses of Theorem <ref>, see the previous quote or the section "The meaning and empirical plausibility of the assumptions behind the Diversity Trumps Ability Theorem" in <cit.>. However, other sets of hypotheses are plausible, or even more so, which is problematic. More specifically, in a Moorean style, we are going to construct a set of incompatible propositions, requiring us to reject (at least) the least plausible one. We are going to use Theorem <ref> for this task.
The propositions are as follows:
* Hong-Page's framework can be used for a deductive argument for epistemic collective-decision systems in the sense that it can serve as a benchmark or be useful in deriving implications to obtain epistemic benefits (as in Section <ref>).
* The assumptions of Theorem <ref>.
* The assumptions of Theorem <ref>.
Note that (1) and (2) imply that "diversity trumps ability," but (1) and (3) imply "ability trumps diversity", so at least one of these propositions must be rejected, but:
* Rejecting the first would invalidate Landemore's argument, as the theorem would then have no relevance to collective-decision systems.
* Rejecting the second would undermine Landemore's proposition that "cognitive diversity is a key ingredient of collective intelligence, and specifically one that matters more than average individual ability".
* Rejecting the third, without rejecting (2), amounts to "biting the bullet". The assumptions of Theorem <ref> are relatively more plausible than those in (2). For instance, there is no need to assume the existence of clones, that 100 Einsteins working together to solve a problem are the same as one, or that there will be no disagreement. Furthermore, unlike Hong and Page's Theorem, it provides a fair comparison between ability and diversity. See Section <ref> and references therein for more details.
Note that I am not claiming that Theorem <ref> has any social content (I personally reject several of the propositions above), but it suffices to form the Moorean set of incompatible propositions. It also serves to show how Landemore commits an equivocation fallacy[This can also be seen as a motte-and-bailey fallacy, which is also present in Page's presentation of the theorems.] or misuses natural language to represent mathematical statements. For instance, Assumption <ref> is read as saying that agents are "relatively smart (or not too dumb)", and she then argues that voters satisfy this, <cit.>. But note that the hypotheses of Section <ref> only reduce the ability by an almost insignificant amount, so those agents can also be considered "relatively smart (or not too dumb)". Yet the thesis changes radically. Thus, Landemore's justification for the use of the theorem is severely flawed.
Finally, even assuming that the hypotheses are plausible, there exists a significant contradiction in Landemore's work. Recall that the hypotheses of Theorem <ref> not only guarantee that the "random" group is better than the best agent, but also ensure that they always reach the correct conclusion without disagreement or dissent, as shown in Proposition <ref>, see also Remark <ref>. These are the same hypotheses that, according to Landemore, make cognitive diversity more crucial than individual ability. This perfect deliberation is what enables the "diverse" group to surpass the best agent; the former is perfect by assumption, while the latter is not. Nonetheless, Landemore states in <cit.>:
Deliberation is far from being a perfect or complete decision-mechanism, in part because it is time-consuming and rarely produces unanimity.
And in <cit.>, she further notes:
I thus do not need to assume away, as Quirk seems to accuse me of doing, the possibility of disagreement.
Therefore, if she rejects an implication of the theorem, she must also reject at least one of its hypotheses. However, as we have seen, she defends the hypotheses of the theorem she applies. This creates a contradiction.
§.§ The vacuousness of the Numbers Trump Ability "Theorem"
Landemore's key innovation is the following, as stated in <cit.>:
The second step of my argument—my addendum to Page and Hong—
proposes that the “cheapest” (i.e., easiest and most economical) way to
achieve cognitive diversity in the absence of knowledge about the nature of
complex and ever-changing political problems is to include everyone in
the group. [...] This “Numbers
Trump Ability Theorem” thus supports a strong epistemic case for
democracy, in which my key innovation is to support inclusiveness for its
instrumental, specifically epistemic properties: Under the right conditions,
including everyone in the decision-making process simply makes the group
more likely to get the right (or, at least better) answers.
The argument is straightforward: if, for epistemic reasons, diversity is what matters, then including everyone is the simplest way to increase diversity. Aside from practical issues, which Landemore somewhat considers, the problem with this reasoning (which is not actually a theorem) is that the premise is false. We have argued that in both Hong-Page theorems, ability plays a crucial role, as seen in Section <ref> and <ref>. Thus, increasing the number of people can have detrimental effects. Therefore, the "theorem" is false. Nevertheless, it can be "corrected" as:
Under the right conditions and given the uncertainty in the ability of the agents, including everyone with ability above a certain threshold in the decision-making process makes the group more likely to arrive at the correct (or, at least, better) answers than merely including people without controlling for ability.
If we acknowledge the 'absence of knowledge about the nature of complex and ever-changing political problems', it would be prudent to select problem solvers who are competent enough to handle these uncertain problems. Hence, we must take ability into account. In other words, once corrected, this theorem lends support to a version of epistocracy. As I've previously stated, I don't find the Hong-Page theorems particularly enlightening, so I do not advocate for this theorem. Nonetheless, if we follow Landemore's line of reasoning, this interpretation would be more accurate.
In general, all the theorems that Landemore uses for her political defense of democracy (<cit.>) presuppose certain levels of ability and knowledge. This is the case for the Hong-Page theorems, as seen in Sections <ref> and <ref>, as well as for the Condorcet Jury Theorem (CJT) and the Miracle of Aggregation, as shown in <cit.>. These latter two theorems, which are instances of the same general theorem (the non-homogeneous CJT), are far more likely to hold if we include epistemic weights that are stochastically correlated (with a measurement error) with epistemic rationality. Also, if ability is not controlled, these theorems can operate in the opposite direction, ensuring that we almost surely choose the wrong option. Thus, from an epistemic and instrumental perspective, these theorems strongly suggest including ability thresholds or, in the more feasible and semiotically problem-free case, epistemic weights with a minimum of 1 (no one excluded) that are stochastically correlated with epistemic rationality (the inevitable measurement error is taken into account; perfect correlation is not assumed); for a starting practical proposal, see <cit.> or, for a lengthier discussion[In Spanish, but see references therein (in English).], <cit.>. This could serve as a preliminary proposal that needs to be tested and experimented with. While it might still be far from perfect, it should be evaluated in comparison to the existing alternatives.
Nevertheless, Landemore staunchly opposes epistocracies. In her chapter "Against Epistocracies", <cit.>, she says:
My first question to Brennan is this: What would such
exclusion achieve? Recall that in my model deliberation
does most of the epistemic work. Most filtering of bad
input or bad reasoning occurs at that deliberative stage.
So there is no reason not to include everyone as one more,
howsoever uninformed, voice will not pollute the outcome but will at most delay the conclusion of the deliberation.
This is incorrect. First, if no selection is done, one cannot ensure the conditions of the Hong-Page theorem, so one cannot expect the result (that deliberation works) to hold. Second, as we have seen in Theorem <ref>, introducing these kinds of agents can pollute the outcome, getting stuck at a solution far from the global maximum. This is the same error that pollutes all of Landemore's analysis, so we insist here: all these theorems assume a certain amount of ability, but Landemore just presupposes[For instance,
Assuming that, on average, the citizens from among
whom we select representatives meet a minimal thresh-
old of individual competence, random selection is a more promising, authentically democratic way of selecting representatives that maximizes cognitive diversity in the
face of political uncertainty.
See Section <ref> for more.] this without questioning it seriously enough, focusing mainly on diversity, which has a "secondary" effect (Theorem <ref> and Section <ref>).
For instance, she continually emphasizes the uncertainty:
As time goes by and circumstances change, however, it becomes very likely that his epistocracy will run into issues where it will miss the very voices and votes it purposely excluded. Even if the probability is low, the expected cost might still be huge. Why
take the risk?
There may be a short window of time in which a
Brennanist epistocracy would work, perhaps even better
than a democracy. But probabilistically, this superiority
is bound to vanish over time. The question is when.
[...]
Most importantly, there is no reason
to exclude any voice in a model that assumes democratic
deliberation itself can weed out the bad input.
However, this same uncertainty, when translated into uncertain abilities of the problem solvers, could lead to the inclusion of some problem solvers who, rather than aiding, actually obstruct us from reaching the optimal solution. Thus, her "probabilistic" claims like "But probabilistically, this superiority is bound to vanish over time" and that the expected cost is substantial are unfounded, and she provides no valid proof for such strong propositions. It's important to note that there may be merit in including all voices in some capacity. The purpose of this part is not to criticize that, but to critically analyze her use of mathematical results to draw certain conclusions. Therefore, it is not reasonable—and borders on begging the question—to assume that democratic deliberation itself can weed out bad input.
§ CONCLUSION
Our rigorous dissection of the Hong-Page Theorems has uncovered significant issues. The misrepresentation of ability, the negligence of certain assumptions, and the fundamental misuse of mathematical principles have led to their flawed application in sociopolitical constructs.
Hong and Page's application of mathematics in their theorem obscures its inherent triviality. By employing mathematical complexity, they have managed to present trivial facts as profound insights, thus misrepresenting the actual implications of their theorem. It is vital that we apply mathematics with extreme caution and rigor, especially when it serves as the foundation for decisions that can have substantial impacts on our social structures and institutions. As such, despite its thousands of citations, <cit.> should not be regarded as a serious contribution to the field of collective decision problem solving. Similarly, with our additional analysis of the "Diversity Prediction Theorem" and Page's misinterpretation, Section <ref>, basic claims of Page's book The Difference are affected, from the preface:
Perhaps because The Difference takes time to digest, eventually, accurate readings won out. Reviewers recognized that The Difference explores the pragmatic, bottom-line contributions of diversity. It does so using models and logic, not metaphor. The book’s claims that “collective ability equals individual ability plus
diversity” and that “diversity trumps ability” are mathematical
truths, not feel-good mantras.
Hélène Landemore's application of mathematical reasoning in her political proposition is wanting in both validity and soundness. Specifically, her 'Numbers Trump Ability' theorem, derived from her interpretation of the Hong-Page Theorems, demonstrates significant flaws, as do many other conclusions based on these results, including her use of the 'Diversity Prediction Theorem'. For instance, as we have shown, her epistemic argument is both unsound and invalid. Consequently, the central thesis of her book <cit.> is seriously compromised. As Landemore herself states in <cit.>:
Let me briefly rehearse what I see as the
main argument of the book. At its heart is a simple model of what, under
certain conditions that I deem plausible enough, can be expected of an
inclusive political decision process in a comparison with less inclusive
ones. [...] In my eyes, the main value of my book is to create a simplified,
relatively rigorous framework for the meaningful comparison of the
properties of basic political “regimes.”
A similar problem arises in other foundational claims related to her political proposal of 'Open Democracy', found in works such as <cit.>, <cit.>, <cit.>, and <cit.>.
This critique should not be taken as a dismissal of the importance of diversity in decision-making, but rather as a call to address the misuse of mathematics in these contexts. It urges us to consider the rigorous and nuanced approach required when applying mathematical theories to sociopolitical constructs. As such, this paper makes a contribution to the ongoing discourse on collective intelligence, fostering a deeper understanding of the mathematical theorems used to achieve a desired conclusion.
|
http://arxiv.org/abs/2307.03985v1 | 20230708142437 | Spectroscopic Devices for Asteroseismology With Small Telescopes in NARIT | [
"Somsawat Rattanasoon",
"Eugene Semenko",
"David Mkrtichian",
"Saran Poshyachinda"
] | astro-ph.SR | [
"astro-ph.SR",
"astro-ph.IM"
] |
National Astronomical Research Institute of Thailand (Public Organization)
260 Moo 4, T. Donkaew, A. Maerim, Chiangmai, 50180 Thailand
Spectroscopic Devices for Asteroseismology With Small Telescopes in NARIT
Saran Poshyachinda
=========================================================================
National Astronomical Research Institute of Thailand (NARIT) operates a network of small telescopes installed worldwide. These telescopes serve educational and research purposes and are equipped mainly with CCD detectors for direct imaging and photometry. To extend the possible field of applications, several telescopes were fitted with the commercially available medium-resolution spectrograph eShel from Shelyak. With these devices, researchers at NARIT obtained a versatile tool for stellar spectroscopy. Here we describe the current status of the available equipment and possible ways of upgrading it, and briefly introduce the results achieved in the asteroseismic study of fast-rotating stars.
§ MOTIVATION
A fibre-fed medium-resolution echelle spectrograph, eShel, has been designed and distributed for small telescopes by Shelyak Instruments (France) since 2008 <cit.>. A typical device consists of a stationary spectrograph block linked by a fibre with a 50-μm core to the Fibre Injection and Guiding Unit (FIGU) installed at the telescope side. The FIGU is also connected through a 200-μm fibre channel to the Calibration Unit comprising halogen, LED, and ThAr lamps. The spectrograph and its components are commercially available on the company's website <https://www.shelyak.com/>.
Earlier models of eShel registered spectra within the wavelength range 430–700 nm with the resolution R > 10,000. In 2018, after the upgrade, which affected many components of eShel, the working range was significantly extended.
NARIT has a distributed network of small telescopes with apertures up to 1 m. For the spectroscopy of relatively bright stars, these telescopes can optionally be equipped with eShel. At the moment, NARIT has three devices, with serial numbers 6H-115 (2010), 6H-128 (2016), and 6H-171 (2018). All spectrographs were acquired in their original complete set and thus have limited capabilities. To enable observations of fainter objects and to increase sensitivity in the blue part of the spectrum, we initiated a substantial upgrade of the device with SN 6H-171.
§ MODIFICATION AND TESTS
The improved device received a new high-OH fibre with enhanced throughput in the blue part of the spectrum, a new doublet collimator (Shelyak provided both components), a new imaging lens, and a professional-grade CCD. All components, except the fibre, are shown in Fig. <ref>.
As a detector, we use a water-cooled Andor iKon-L system based on a 2048×2048-pixel CCD array with a 13.5 μm pixel pitch. To match the plate scale to the increased pixel size, among several lenses with comparable focal lengths available on the market, we chose the commercial lens Sony FE 135 mm F1.8 GM, primarily due to its outstanding optical quality. Subsequent testing of the whole assembly also showed excellent transmission of the selected lens within the required range of wavelengths. The imaging lens is attached to the CCD camera through a specially designed adapter with an enclosed shutter.
Technical parameters of the original and upgraded versions of eShel are summarized in Table <ref>.
An upgraded variant of the spectrograph was installed for tests in a spectrograph room of the Thai National Observatory (TNO) at Doi Inthanon (Chiang Mai, Thailand) in a temperature-controlled environment. The FIGU was mounted on the left Nasmyth port of the 1-m telescope of TNO. Tests were performed in December 2022 and January 2023 under favourable weather conditions and were aimed at verifying the optical performance of the assembly. Observational data include a standard set of calibrations (bias, flat, ThAr) and spectra of selected stars and the daytime sky. Two-dimensional raw FITS images were reduced using the pipeline PyYAP (<http://github.com/ich-heisse-eugene/PyYAP>), specially adapted to the new device.
§ RESULTS
Test images taken with the upgraded device showed noticeable aberrations arising from misaligned optical elements of the spectrograph. As this problem appeared in the direction perpendicular to the dispersion, it influenced the overall throughput and the level of scattered light, but it did not affect the spectral resolution or the transmission of the device. We therefore leave the evaluation of the total throughput and stability for future work and concentrate here primarily on these unaffected characteristics.
§.§ Transmission
Analysis of the observational data revealed significantly improved spectrum quality due to better control of aberrations and the enhanced transmission of the Sony lens. In the images, the point spread function remains nearly stable across the field of view in the 380–850 nm wavelength range. As a result, the shortwave limit of the working spectral range has been extended by 70 nm, from 450 nm to 380 nm. In the infrared, the working range of the current setup is limited to 900 nm. In Fig. <ref>, we show four samples of the observed spectrum of the daytime sky.
§.§ Resolving power
The resolving power of the modified eShel was evaluated by fitting the Gaussian function to the emission lines of the ThAr spectrum. This procedure is implemented as a standard step of processing in PyYAP.
Inspection of the ThAr spectra showed that the focus of the imaging camera remained stable during all observational nights. Within the spectrograph's working wavelength range, the resolving power R = λ/Δλ varied from 10,000 to 12,500, with a median R = 11,700 evaluated from 355 lines in a single image. The resolving power does not vary significantly between nights: the full width at half maximum (FWHM) of the mean ThAr line equals 3.7 pixels, close to optimal sampling.
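As an illustration of this estimate, R = λ/Δλ can be recovered from a single extracted ThAr emission line by a Gaussian fit. The short Python sketch below is not part of PyYAP; the line centre, width, and noise level are invented for demonstration only:

    # Sketch: estimate resolving power from one ThAr emission line (hypothetical values).
    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(x, amp, mu, sigma, offset):
        return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + offset

    # Hypothetical extracted spectrum around a ThAr line near 585.2 nm
    wave = np.linspace(585.15, 585.25, 40)                            # nm
    flux = gaussian(wave, 1200.0, 585.200, 0.021, 50.0)
    flux += np.random.default_rng(0).normal(0.0, 20.0, wave.size)     # photon-noise stand-in

    p0 = [flux.max() - flux.min(), wave[np.argmax(flux)], 0.02, flux.min()]
    (amp, mu, sigma, offset), _ = curve_fit(gaussian, wave, flux, p0=p0)

    fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(sigma)              # nm
    R = mu / fwhm                                                     # resolving power lambda / delta lambda
    print(f"line centre {mu:.4f} nm, FWHM {fwhm:.4f} nm, R = {R:.0f}")

With the assumed line width of about 0.021 nm, this toy example returns R close to the median value of 11,700 quoted above.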
§ SCIENTIFIC APPLICATION
A medium-resolution fibre-fed spectrograph, in combination with a 1-metre-class telescope, can be a powerful instrument for the spectroscopy of relatively bright sources. The literature contains many examples of the use of eShel in stellar physics and in the physics of Solar system objects. Due to its compact design and high positional stability, this spectrograph appears even in observations of extrasolar planets. In particular, the accuracy of the radial velocity measurements reported in <cit.>, <cit.>, and <cit.> was better than 100 m s^-1 for stars brighter than 11 magnitudes with exposure times under one hour. Such characteristics enable the detection and observation of hot Jupiters around the brightest stars. <cit.> also gave an example of how to use eShel for observations of pulsating stars (Cepheids).
The proposed upgrade opens new perspectives for the family of small telescopes in NARIT, as we have several spectrographs which, after the improvement, can be installed at any of our telescopes. In this way, it becomes possible to move part of the scientific programme aimed at studying exoplanets, active solar-like stars, and binary and multiple stars from the main 2.4-m Thai National Telescope to smaller instruments without losing efficiency or observing time. However, the main stimulus for this technical work was the possibility of using this device for asteroseismology of the brightest fast-rotating pulsating stars.
To demonstrate the efficiency of eShel in asteroseismological observations, in Fig. <ref> we show an example of non-radial pulsations discovered in a 4-magnitude fast-rotating star. A typical pattern of waves propagating across the averaged spectral profile is shown in the left panel of <ref>. The right panel shows the 2D periodogram used for the identification of the pulsation frequencies. In this example, the star was observed continuously with short exposures for more than five hours with the original version of eShel and the 1-m telescope of NARIT. The upgraded version of the spectrograph will allow us to increase the signal-to-noise ratio (SNR) of the observational data and thus expand the number of potential targets, or increase the temporal resolution of the data with shorter exposure times while preserving the same SNR.
§.§.§ ORCID identifiers of the authors
0000-0002-1912-1342Eugene Semenko
0000-0001-5094-3910David Mkrtichian
§.§.§ Author contributions
SR, ES, and DM are responsible for formulating the project, its technical implementation, and carrying out the observations. ES and DM are responsible for data reduction and analysis. SP contributed to the project administration. All authors equally contributed to the text of the article.
§.§.§ Conflicts of interest
The authors declare no conflict of interest.
|
http://arxiv.org/abs/2307.06046v1 | 20230712094915 | An OOD Multi-Task Perspective for Link Prediction with New Relation Types and Nodes | [
"Jincheng Zhou",
"Beatrice Bevilacqua",
"Bruno Ribeiro"
] | cs.LG | [
"cs.LG",
"cs.AI"
] |
An OOD Multi-Task Perspective for Link Prediction with New Relation Types and Nodes
Jincheng Zhou, Beatrice Bevilacqua, Bruno Ribeiro
===================================================================================
The task of inductive link prediction in (discrete) attributed multigraphs infers missing attributed links (relations) between nodes in new test multigraphs. Traditional relational learning methods face the challenge of limited generalization to OOD test multigraphs containing both novel nodes and novel relation types not seen in training. Recently, under the only assumption that all relation types share the same structural predictive patterns (single task), <cit.> proposed an OOD link prediction method using the theoretical concept of double exchangeability (for nodes & relation types), in contrast to the (single) exchangeability (only for nodes) used to design Graph Neural Networks (GNNs). In this work we further extend the double exchangeability concept to multi-task double exchangeability, where we define link prediction in attributed multigraphs that can have distinct and potentially conflicting predictive patterns for different sets of relation types (multiple tasks). Our empirical results on real-world datasets demonstrate that our approach can effectively generalize to entirely new relation types in test, without access to additional information, yielding significant performance improvements over existing methods.
§ INTRODUCTION
Discrete attributed multigraphs, which we refer to as knowledge graphs (KGs) for simplicity, have been widely used for modeling relational data, which can also be expressed as a collection of triplets.
Storing factual knowledge in KGs enables their application across a wide variety of tasks, encompassing complex question answering <cit.> and logical reasoning <cit.>.
Since relational data is often incomplete, predicting missing triplets, or, equivalently, predicting the existence of a relation of a certain type between a pair of nodes, is a key task.
However, conventional methods are generally limited to predicting missing links for relation types observed during training.
As a consequence, standard attributed link prediction methods are incapable of making predictions that involve completely new relation types over completely new nodes (or new graphs), which is arguably the most difficult and perhaps the most interesting link prediction task in KGs.
In this work we focus on the out-of-distribution (OOD) task of predicting missing triplets in test KGs that contain new nodes and new relation types.
Our definition of OOD task follows prior work <cit.>, which defines an OOD task as having train and test distributions with distinct support (distinct nodes and new relation types in the attributed graphs). We also assume that no extra information is available either at training or test time, apart from the input (observable) graph with its nodes and relation types. As a result, existing zero-shot methods <cit.>, which rely on textual descriptions of the relation types, and few-shot learning methods <cit.>, which require examples of triplets to be predicted involving the new relation types, are unable to perform our OOD task.
To the best of our knowledge, the only approach that can generalize to completely new nodes and relation types without requiring any extra information is the recent work of <cit.>. To solve the OOD task, the authors introduce the concept of double exchangeability, which is intuitively the property of all relation types, as well as all nodes, of being interchangeable with one another. Since the new test relation types are likewise assumed to be exchangeable with the training ones, the missing test triplets can be predicted directly using knowledge acquired during training. However, in some real-world KGs, the double-exchangeability assumption might not hold.
For instance, consider the scenario in <Ref> where the training graph comprises two weakly-connected knowledge bases representing different sports communities (racing and gymnastics) with some different relation types. The predictive patterns for relation types in those communities might differ and potentially be contradictory, implying that not all relation types are exchangeable. In our example, two racing teammates necessarily have the same team principal, and therefore (Horner, TeamPrincipalOf, Pérez) can be predicted from ((Horner, TeamPrincipalOf, Verstappen), (Verstappen, TeammateOf, Pérez)). On the contrary, two gymnastics teammates might have different coaches, and (Landi, CoachOf, Lee) should not be predicted when seeing ((Landi, CoachOf, Biles), (Biles, TeammateOf, Lee)). Due to these contradictory patterns, TeamPrincipalOf and CoachOf are not exchangeable. Similarly, the new test relation types might be exchangeable only with a subset of the training ones. In our example, a new test relation type that follows the racing pattern would be exchangeable with TeamPrincipalOf, but not with CoachOf.
Our approach. To predict missing triplets involving new relation types in KGs that have different predictive patterns, we suggest relaxing the double exchangeability assumption <cit.> and partitioning the set of relations into distinct groups, where each group exclusively contains relation types that are exchangeable among themselves. We demonstrate that these groups of relation types can be understood as distinct tasks in a multi-task setting.
Within each task, we assume double exchangeability, so that within a task our method is able to extrapolate OOD.
After learning to perform correctly in each task, we can apply the predictive patterns from the relevant tasks directly to predict missing triplets involving the new relation types.
Importantly, we do not require prior knowledge of task membership; instead, we can learn the relation type-task assignments during training and adapt to the new test relation types through a test-time adaptation procedure.
Main contributions.
Our main contributions are as follows:
* We develop a method capable of modeling the existence of distinct and contradictory predictive patterns among various sets of relation types by treating them as separate tasks in a multi-task setting;
* We propose a test-time adaptation procedure that learns task assignments for new relation types, enabling the application of our proposed method to entirely new test relation types;
* We create new benchmark datasets that fit the multi-task scenario we focus on;
* We develop a novel evaluation metric to more effectively measure the performance of existing methods in predicting missing triplets.
§ RELATED WORK
Link prediction in KGs. Existing link prediction methods in KGs can be categorized into factorization-based approaches <cit.> and GNN-based models <cit.>. Although the former exhibit remarkable performance, especially when combined with appropriate training strategies <cit.>, they are typically restricted to transductive settings. Conversely, GNN-based models can also be applied to inductive scenarios involving new nodes in test <cit.>. All these methods, however, cannot work when presented with new relation types in test, which instead represents the main interest of our paper.
Zero-shot and few-shot learning on new relation types.
Recent works, aiming to predict missing triplets involving new relation types, consider the zero-shot or few-shot learning paradigm. To generalize to new relation types, zero-shot methods <cit.> use additional contextual information, such as the semantic descriptions of the relation types, making them unfit for our scenario.
In contrast, few-shot methods primarily adopt a meta-learning paradigm <cit.> that requires manually curated task labels during training and fine-tune on a small number of test triplet examples during test.
To remove the need of task labels, <cit.> proposed a pre-training and fine-tuning few-shot procedure that uses subgraphs around the nodes, as they should contain the predictive pattern indicating the presence of the missing triplets. Similarly, our method removes the need of task labels, but unlike <cit.>, our test-time adaptation procedure involves only the observable parts of the test graphs and does not require examples of the triplets to be predicted in training nor test.
The most closely related approach is the one presented in <cit.>, which aims at OOD zero-shot generalizing to new test relation types by introducing the concept of exchangeability between relation types, which is intuitively their property of being interchangeable with one another.
However, as we will see next, the approach is unable to properly model the difference in predictions between non-exchangeable relation types that have different (and potentially contradictory) predictive patterns. Due to space constraints, we refer the reader to <Ref> for detailed comparisons with prior work.
§ PROBLEM DEFINITION
In this section we introduce the notation used throughout the remainder of this work. We consider a KG (multigraph) as a finite collection of typed relations between nodes. Formally, let 𝒱 be a finite discrete set of nodes and ℛ a finite discrete set of relation types. A triplet (u, r, v) in the KG indicates that a node u ∈ 𝒱 is linked to another node v ∈ 𝒱 by means of a relation of type r ∈ ℛ. Without loss of generality, we assume the node and relation sets are numbered, that is 𝒱 = { 1, 2, …, N } and ℛ = { 1, 2, …, R }, where N ≥ 2 and R ≥ 2. Consequently,
we can represent a KG in tensor form as A ∈ 𝒜, with 𝒜 = { 0, 1 }^N × R × N, where A_u, r, v = 1 if and only if (u, r, v) is a triplet in the KG.
Link prediction in KGs can be cast as a self-supervised learning problem <cit.>, where an input graph A is assumed to be the result of the application of an unknown mask M ∈ { 0, 1 }^N × R × N on an unknown complete graph A^(full), i.e., A = M ⊙ A^(full) with ⊙ the element-wise product, where the masking process hides the existence of certain triplets. The goal of a model is to predict the existence of the masked (or missing) triplets from A. That is, if M̄ is the complement mask defined as M̄ = 1 - M, the model is asked to predict P(M̄ ⊙ A^(full) | A).
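To make the tensor view concrete, the following sketch (our own illustration, not code from <cit.>) stores a toy KG as a boolean tensor and applies a random mask M to split it into an observable graph and the hidden triplets to be predicted; sizes and triplets are arbitrary:

    # Sketch: a KG as a boolean tensor A in {0,1}^(N x R x N) and a self-supervised mask.
    import torch

    N, R = 5, 3                                   # toy sizes
    triplets = [(0, 1, 2), (2, 0, 4), (3, 2, 1)]  # (head u, relation r, tail v)

    A_full = torch.zeros(N, R, N, dtype=torch.bool)
    for u, r, v in triplets:
        A_full[u, r, v] = True

    # Randomly keep ~80% of the entries as the observable input graph A = M * A_full.
    M = torch.rand(N, R, N) < 0.8
    A = A_full & M

    # The learning target is the complement: triplets hidden by the mask.
    hidden = A_full & ~M
    print("observable triplets:", A.nonzero().tolist())
    print("triplets to predict:", hidden.nonzero().tolist())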
In this work we focus on predicting missing triplets in new KGs with new relation types. Given a training graph A^(tr), with node set 𝒱^(tr) and relation set ℛ^(tr), we aim to learn a model capable of accurately predicting missing triplets in a test graph A^(te), with node set 𝒱^(te) and relation set ℛ^(te), involving both new nodes and new relation types: 𝒱^(tr) ⊉ 𝒱^(te) and ℛ^(tr) ⊉ ℛ^(te). To accurately predict missing test triplets without any extra information, such as contextual information (as in zero-shot methods <cit.>) or task labels (few-shot methods <cit.>), A^(te) must exhibit predictive patterns found in A^(tr), which implies that missing test triplets can be predicted using the knowledge acquired from training, even if the relations convey entirely different meanings.
To the best of our knowledge, the only approach capable of handling new relation types at test time without the need of extra information is the recent work of <cit.>. However, as we discuss next, <cit.> relies on the assumption that all relation types share a single predictive pattern.
Existing gap: Learning to predict new relation types on graphs with conflicting predictive patterns. To the best of our knowledge, no existing method can predict missing triplets involving new relation types without the need of extra information when relations exhibit different and potentially conflicting predictive patterns.
<Ref> illustrates an example of this setting, with solid arrows representing the input graph and dashed arrows denoting the missing triplets to be predicted. In this example, the training graph _1 has family-tree relationships, while the test graph _2 has relations in academia. Our goal is to learn predictive patterns from _1 and generalize to predict the missing triplets in _2.
However, as demonstrated in the figure, the training graph _1 contains conflicting predictive patterns.
The type of relation between Alice and Cole is the first relation type in the 2-hop chain ((Alice, , Bob), (Bob, , Cole)), which is . Conversely, the type of relation between Dina and Faye is the second relation type in the 2-hop chain ((Dina, , Elmo), (Elmo, , Faye)), which is .
Having conflicting predictive patterns in training does not prevent accurate prediction of test triplets, as long as we can correctly identify which pattern, among the learned ones, a test relation follows. In our example,
the true test relation type between Xena and Zane in _2 shares the same predictive pattern as the relation in _1, as can be inferred from the observable triplet (Ula, , Wynn).
Since the pioneering work by <cit.> assumes a single predictive pattern, their proposed approach would be unable to model the difference between the predictions of and , and, consequently, between those of and .
§ A MULTI-TASK PERSPECTIVE ON INDUCTIVE LEARNING OF NEW RELATION TYPES
In the previous section we noted that accurate prediction of missing triplets in a test graph with completely new relation types requires the test graph to exhibit the same predictive patterns as the training graph. Expanding upon this concept, we next define a predictive pattern through the notion of exchangeability and describe a task as a set of relation types sharing the same predictive patterns. This allows us to frame the problem of predicting with new relation types as a multi-task problem, where learning the different predictive patterns reduces to learning all tasks.
Correctly performing on all tasks will then result in accurate predictions of test triplets having new relation types, as long as we can identify to which task they belong to.
§.§ Re-imagining Inductive Learning as Relational Tasks
We begin by defining the concept introduced in <cit.> of exchangeability between relation types, which can informally be understood as the property of (certain) relation types to be interchangeable with each other. This property encapsulates our notion of shared predictive patterns, as exchangeable relation types necessarily follow the same patterns.
Let A ∈ 𝒜 be a random variable representing a KG
with node set 𝒱 = { 1, 2, …, N } and relation set ℛ = { 1, 2, …, R }. Given two relation types r, r' ∈ ℛ, we say that r and r' are exchangeable if there exist some node permutation π ∈ 𝕊_N and relation type permutation σ ∈ 𝕊_R such that the following two conditions are satisfied:
σ ∘ r = r' and
P(A) = P(σ ∘ π ∘ A),
where the permutation actions of π and σ are defined on the nodes and relation types respectively. That is, for all u, v ∈ 𝒱 and r ∈ ℛ, the symmetric groups 𝕊_N and 𝕊_R act on a graph A via (π ∘ A)_π∘u, r, π∘v = A_u, r, v and (σ ∘ A)_u, σ∘r, v = A_u, r, v.
We denote the exchangeability between relation types by ∼_e.
A similar formalization can be made for nodes. Due to space limits, we omit this definition, but we will assume nodes are exchangeable throughout the paper.
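The joint permutation action used in this definition can be spelled out directly on the tensor. The sketch below (our illustration, with an arbitrary toy tensor and permutations) applies a node permutation π and a relation permutation σ so that (σ∘π∘A)_π(u), σ(r), π(v) = A_u, r, v:

    # Sketch: the double permutation action on a toy KG tensor.
    import torch

    N, R = 4, 2
    A = torch.zeros(N, R, N, dtype=torch.bool)
    A[0, 1, 2] = True                              # one toy triplet (0, 1, 2)

    pi = torch.tensor([2, 0, 3, 1])                # node permutation: u -> pi[u]
    sigma = torch.tensor([1, 0])                   # relation permutation: r -> sigma[r]

    A_perm = torch.zeros_like(A)
    for u, r, v in A.nonzero().tolist():
        A_perm[pi[u], sigma[r], pi[v]] = True

    print(A_perm.nonzero().tolist())               # [[2, 0, 3]]: the triplet moved with the permutations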
In the following, we introduce our theoretical contributions, which open up an entirely novel perspective to the problem. We start by proving that the exchangeability property between relation types can be regarded as a higher-order relation between the elements in , or, more precisely, as an equivalence relation on .
The exchangeability between relation types ∼_e defines an equivalence relation on ℛ, since it satisfies the reflexivity, symmetry, and transitivity properties.
The importance of <Ref> is in that it allows us to partition the set of relations into disjoint equivalence classes. Each equivalence class contains relation types that are exchangeable with each other, whereas relation types that are not exchangeable belong to different equivalence classes. Consequently, these partitions of can naturally be considered as different tasks, dubbed relational tasks, each containing all and only relation types that share the same predictive patterns.
The relational tasks of a KG with node set 𝒱 and relation set ℛ are the equivalence classes [r] of the relation types r ∈ ℛ under ∼_e, i.e., each
𝒯_r ≜ [r] = { r' ∈ ℛ : r' ∼_e r }
is a relational task.
Viewing ℛ as partitioned into disjoint relational tasks allows us to consider the problem from a multi-task perspective. Our goal then becomes finding a model able to accurately learn all tasks, specializing on the patterns that are unique to each task, which are potentially conflicting among each other. Such a model will then be asked to recognize which task a test relation type belongs to, in order to correctly predict missing test triplets by applying what was learned in training for the same task.
§.§ Handling Conflicting Patterns for Different Relation Types as a Multi-task Scenario
<Ref> allows us to formalize in the following definition the KGs of our interest, such as the one presented in <Ref>, which we dub multi-task double-exchangeable KGs.
We adopt the term double-exchangeable from <cit.>, since it inherently captures the idea of exchangeability both between node ids and between relation types – a shared concept between our work and theirs – but we extend it to our multi-task scenario.
Given a KG A with node set 𝒱 and relation set ℛ, A is said to be a multi-task double-exchangeable KG if it has more than one relational task, i.e.,
|{ 𝒯_r : r ∈ ℛ }| > 1.
Equivalently, A is a multi-task double-exchangeable KG if there exist two relation types r, r' ∈ ℛ such that r ≁_e r'.
In this work, we assume our data satisfies <Ref>. More precisely, the training graph is considered a multi-task double-exchangeable KG, in which each task comprises exchangeable relation types that follow the same predictive patterns, while distinct tasks may have conflicting patterns. We further assume our test graph is a KG whose tasks constitute a subset of the training ones.
§.§ The Special Case of Single-Task Double-Exchangeable
<Ref> can be specialized into what we refer to as single-task double-exchangeable KGs if all relations belong to the same equivalence class (<Ref>).
An example of this scenario happens when considering only the boxed subgraph _1' of the training graph in <Ref>. If we restrict our training graph to _1' and maintain _2 as the test graph, then the true test relation type between Xena and Zane in _2 can accurately be predicted by the model from <cit.>. This is because it follows the only predictive pattern present in the data, which is the one of the relation . Nonetheless, as emphasized throughout our work, the single-task configuration represents a particular case of the more general multi-task setting, which accommodates a greater variety of , such as the complete training graph _1 in <Ref>.
§ PROPOSED METHOD
In this section we introduce our framework to learn the different predictive patterns, which are specific to each task, and a procedure to generalize to new test relation types. Our proposed architecture models exchangeability between relation types belonging to the same task while differentiating them from the learned patterns of other tasks. To adapt to the unseen relations in the test KG, we propose a test-time adaptation procedure to identify the tasks to which the test relations belong.
§.§ Multi-Task Double-Equivariant Linear Layer
Suppose we knew the ground-truth relational tasks { 𝒯_r : r ∈ ℛ } of <Ref> for a given KG with node set 𝒱 and relation set ℛ. [In <Ref> we will remove this assumption and show how to learn task memberships.]
Without loss of generality, consider an arbitrary ordering of the relational tasks and denote the ordered relational tasks as 𝒯^(1), 𝒯^(2), …, 𝒯^(K), where K is the total number of tasks. We denote by i: ℛ → { 1, 2, …, K } a task index mapping, such that 𝒯^(i(r)) = 𝒯_r.
Inspired by the equivariant framework proposed by <cit.>, we present the following multi-task double-equivariant layer, which updates representations at every layer t as
H^(t+1)_·, r, ·, · = L_1^(t)( H^(t)_·, r, ·, · )
+ L_2^(t)( 1 ⊗ p_i(r) + ∑_r' ∈ 𝒯^(i(r)) ∖ {r} H^(t)_·, r', ·, · )
+ ∑_k = 1, …, K; k ≠ i(r) L_3^(t)( 1 ⊗ p_k + ∑_r'' ∈ 𝒯^(k) H^(t)_·, r'', ·, · ),
where H^(t) ∈ ℝ^N × R × N × d is the layer input with H^(0) = A, and L_1^(t), L_2^(t), L_3^(t): ℝ^N × N × d → ℝ^N × N × d' are GNN layers that output pairwise representations, with N the number of nodes, R the number of relations, and d, d' appropriate dimensions. The vectors p_k ∈ ℝ^d, k = 1, 2, …, K, are learnable positional embeddings, each specific to the task 𝒯^(k), which are repeated on the last dimension through the Kronecker product with the matrix of all ones, 1 ∈ {1}^N × N × 1. The sums ∑_r' ∈ 𝒯^(i(r)) ∖ {r} H^(t)_·, r', ·, · and ∑_r'' ∈ 𝒯^(k) H^(t)_·, r'', ·, · can be replaced by any other set aggregation.
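One possible PyTorch reading of <Ref> is sketched below, assuming the ground-truth task ids are given. For brevity the GNN components L_1, L_2, L_3 are replaced by per-pair linear maps, and all names are ours, so this is a structural sketch rather than the released implementation:

    # Sketch of the multi-task double-equivariant layer with known task ids.
    # L1, L2, L3 are stand-in per-pair linear maps (the paper uses GNN layers here).
    import torch
    import torch.nn as nn

    class MTDELayer(nn.Module):
        def __init__(self, d_in, d_out, num_tasks):
            super().__init__()
            self.L1, self.L2, self.L3 = (nn.Linear(d_in, d_out) for _ in range(3))
            self.pos = nn.Parameter(torch.randn(num_tasks, d_in))     # p_k, one per task
            self.num_tasks = num_tasks

        def forward(self, H, task_of_rel):
            # H: (N, R, N, d_in); task_of_rel: (R,) with task ids in {0, ..., K-1}
            N, R, _, _ = H.shape
            out = []
            for r in range(R):
                k_r = int(task_of_rel[r])
                same = [r2 for r2 in range(R) if task_of_rel[r2] == k_r and r2 != r]
                h = self.L1(H[:, r])                                   # own relation channel
                agg = H[:, same].sum(dim=1) if same else torch.zeros_like(H[:, r])
                h = h + self.L2(agg + self.pos[k_r])                   # same-task aggregation + p_{i(r)}
                for k in range(self.num_tasks):                        # other tasks, kept separate
                    if k == k_r:
                        continue
                    other = [r2 for r2 in range(R) if task_of_rel[r2] == k]
                    agg_k = H[:, other].sum(dim=1) if other else torch.zeros_like(H[:, r])
                    h = h + self.L3(agg_k + self.pos[k])
                out.append(h)
            return torch.stack(out, dim=1)                             # (N, R, N, d_out)

    H = torch.randn(4, 3, 4, 8)                                        # toy: N=4, R=3, d=8
    layer = MTDELayer(8, 16, num_tasks=2)
    print(layer(H, task_of_rel=torch.tensor([0, 0, 1])).shape)         # torch.Size([4, 3, 4, 16])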
Note that if the total number of relational tasks K is 1, then <Ref> recovers the single-task double-equivariant layer proposed in <cit.>.
Indeed, if K = 1, then i(r) = 1 for any r ∈ ℛ with 𝒯^(1) = ℛ, and <Ref> can be rewritten as
H^(t+1)_·, r, ·, · = L^(t)_1( H^(t)_·, r, ·, · ) + L^(t)_2( ∑_r' ∈ ℛ ∖ {r} H^(t)_·, r', ·, · ),
where the term 1 ⊗ p_i(r) was absorbed into L^(t)_2.
The role of positional embeddings. <Ref> uses the positional embedding vectors p_j ∈ ℝ^d, j ∈ {1, …, K}, to allow representations of relation types belonging to different tasks to be different, even when they have isomorphic observable graphs. In order to understand the role of p_j in our architecture, we refer once again to <Ref>. Without the inclusion of the positional embeddings, <Ref> would give the same representation to the missing triplets involving the two conflicting relation types of that example, even though those relations belong to two different relational tasks, because the inputs to L^(t)_1, L^(t)_2, L^(t)_3 are the same, starting from t = 0.
§.§ Learning Soft Task Membership via Attention Weights
The previous section assumes we know the ground-truth assignment of relation types to tasks. In what follows, we learn such assignments from data only.
Intuitively, we need to partition all relations into disjoint equivalence classes, where each partition corresponds to a unique relational task.
This process is a discrete optimization problem, which we relax into a continuous one by means of an attention matrix α ∈ [0, 1]^R × K̂, where K̂ is the maximum number of partitions we allow our architecture to model (i.e., the maximum number of relational tasks we expect). The individual attention value α_r, k denotes the degree (or probability) to which the relation r ∈ ℛ belongs to the k-th equivalence class, with the constraint that ∑_k=1^K̂ α_r, k = 1 for all r ∈ ℛ. Hence, the multi-task double-equivariant layer of <Ref> can be relaxed into what we call the soft multi-task double-equivariant layer:
H^(t+1)_·, r, ·, · = L_1^(t)( H^(t)_·, r, ·, · )
+ L_2^(t)( 1 ⊗ p_î(r) + ∑_r' ∈ ℛ ∖ {r} α_r', î(r) H^(t)_·, r', ·, · )
+ ∑_k = 1, …, K̂; k ≠ î(r) L_3^(t)( 1 ⊗ p_k + ∑_r'' ∈ ℛ ∖ {r} α_r'', k H^(t)_·, r'', ·, · ),
where î(r) = argmax_k = 1, …, K̂ α_r, k, which ideally should give the correct id i(r) of the ground-truth relational task 𝒯_r that the relation r belongs to.
The final architecture, which we name the Multi-Task Double-Equivariant Architecture (MTDEA), is obtained by stacking T soft multi-task double-equivariant layers to produce a graph representation Γ(A) ∈ ℝ^N × R × N × d for a given KG A:
Γ(A) ≜ L^(T)( f ( ⋯ f( L^(1)(A) ) ⋯ ) ),
where f is a non-polynomial activation such as ReLU. The predictions of individual triplets can then be obtained through a triplet score function Γ_tri: 𝒱 × ℛ × 𝒱 × 𝒜 → [0, 1] defined through a sigmoid activation function, i.e.,
Γ_tri((u, r, v), A) ≜ σ( Γ(A)_u, r, v, · ).
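Putting the pieces together, the soft task membership can be implemented as a row-wise softmax over a learnable logit matrix, and the triplet score as a sigmoid over a projection of the final pairwise representation. The sketch below uses hypothetical shapes and a stand-in scoring head, not the official code:

    # Sketch: soft task membership via alpha = softmax(w, dim=-1) and a toy triplet score.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    R, K_hat, d = 5, 3, 16
    w = nn.Parameter(torch.randn(R, K_hat))          # learnable logits, one row per relation type
    alpha = F.softmax(w, dim=-1)                     # alpha[r, k]: degree to which relation r belongs to task k

    # Soft same-task aggregation for one relation r: weight every other relation r' by alpha[r', k_hat(r)].
    H = torch.randn(4, R, 4, d)                      # toy pairwise representations (N, R, N, d)
    r = 2
    k_hat_r = int(alpha[r].argmax())                 # \hat{i}(r)
    weights = alpha[:, k_hat_r].clone()
    weights[r] = 0.0                                 # exclude r itself from the sum
    soft_agg = (weights.view(1, R, 1, 1) * H).sum(dim=1)
    print(soft_agg.shape)                            # torch.Size([4, 4, 16])

    # Triplet score: sigmoid of a scalar projection of the final pair representation (one possible choice).
    score_head = nn.Linear(d, 1)
    u, v = 0, 3
    print(torch.sigmoid(score_head(H[u, r, v])).item())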
§.§ Dual-Sampling Loss with Task Membership Regularization
Existing literature that tackles link prediction in KGs relies on losses based on entity-centric negative sampling <cit.>, where for each ground-truth (existing) triplet (u, r, v), the tail node v is randomly corrupted to obtain a fixed number of negative samples (u, r, v'). Such entity-based negative sampling is insufficient for our loss because correctly predicting the relation type between two nodes is equally important as correctly predicting the tail node given the head node and relation type. To this end, we propose the dual-sampling task loss ℒ_dual, which, given the training KG A^(tr) with node set 𝒱^(tr) and relation set ℛ^(tr), makes use of n negative samples obtained by corrupting tail nodes and m negative samples obtained by corrupting the relation types of positive samples, that is
ℒ_dual ≜ - ∑_(u, r, v) ∈ 𝒫 (
log Γ_tri((u, r, v), A^(tr))
+ 1/n ∑_i=1^n log( 1 - Γ_tri((u, r, v'_i), A^(tr)) )
+ 1/m ∑_j=1^m log( 1 - Γ_tri((u, r'_j, v), A^(tr)) )
),
where 𝒫 ≜ { (u, r, v) ∈ 𝒱^(tr) × ℛ^(tr) × 𝒱^(tr) | A^(tr)_u, r, v = 1 } is the set of positive triplets, (u, r, v'_i) is the i-th entity-based negative sample, and (u, r'_j, v) the j-th relation-based negative sample corresponding to the positive triplet (u, r, v).
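A sketch of the dual negative sampling behind ℒ_dual is given below; the scoring function is a placeholder standing in for Γ_tri, the batch and sample sizes are arbitrary, and no filtering of accidental true triplets is performed:

    # Sketch: dual-sampling loss with tail-corrupted and relation-corrupted negatives.
    import torch

    def dual_loss(score_fn, positives, num_nodes, num_rels, n_tail=2, m_rel=2):
        # positives: LongTensor (B, 3) with rows (u, r, v); score_fn returns probabilities in (0, 1).
        u, r, v = positives.T
        loss = -torch.log(score_fn(u, r, v) + 1e-9).sum()

        for _ in range(n_tail):                                  # corrupt tails: (u, r, v')
            v_neg = torch.randint(num_nodes, v.shape)
            loss = loss - torch.log(1 - score_fn(u, r, v_neg) + 1e-9).sum() / n_tail

        for _ in range(m_rel):                                   # corrupt relation types: (u, r', v)
            r_neg = torch.randint(num_rels, r.shape)
            loss = loss - torch.log(1 - score_fn(u, r_neg, v) + 1e-9).sum() / m_rel
        return loss

    toy_scores = lambda u, r, v: torch.sigmoid(0.1 * (u + r - v).float())   # placeholder scorer
    pos = torch.tensor([[0, 1, 2], [3, 0, 4]])
    print(dual_loss(toy_scores, pos, num_nodes=5, num_rels=3))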
<Ref> constitutes only one term of the loss function we optimize, which further contains regularization terms on the attention matrix α.
Intuitively, we want the individual attention values to be either 0 or 1, because each value should represent whether a relation type belongs to certain task (value 1) or not (value 0). Moreover, we aim to have a large concentration of the attention values, in order to have as few partitions as possible. Hence, we propose the following model loss, where λ_1, λ_2 ∈ℝ are hyper-parameters weighting the terms:
ℒ = ℒ_dual + λ_1 ℒ_1-hot + λ_2 ℒ_conc, with ℒ_1-hot ≜ ∑_r ∈ ℛ^(tr) ( - ∑_j=1^K̂ α_r, j log α_r, j ) and ℒ_conc ≜ - ∑_j=1^K̂ LGamma( 1 + ∑_r ∈ ℛ^(tr) α_r, j ).
The first term ℒ_dual is the dual-sampling loss in <Ref>. The second term ℒ_1-hot minimizes the entropy of the distribution of partition-membership probabilities of each relation type r, so that individual attention values are pushed to be either 0 or 1 and the attention weights for every relation type become one-hot encoded. Finally, the third term ℒ_conc takes advantage of the log-gamma function to encourage the relation set to be split into as few partitions as possible.
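The two regularizers can be written in a few lines; the sketch below is our own rendering of ℒ_1-hot and ℒ_conc for an arbitrary attention matrix:

    # Sketch: one-hot (entropy) and concentration (log-gamma) regularizers on the attention matrix.
    import torch

    def attention_regularizers(alpha, eps=1e-9):
        # alpha: (R, K_hat), rows sum to 1.
        l_one_hot = -(alpha * torch.log(alpha + eps)).sum()          # push each row towards a one-hot vector
        l_conc = -torch.lgamma(1.0 + alpha.sum(dim=0)).sum()         # favour concentrating mass on few tasks
        return l_one_hot, l_conc

    alpha = torch.softmax(torch.randn(6, 3), dim=-1)
    l1, l2 = attention_regularizers(alpha)
    print(float(l1), float(l2))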
§.§ A Test-Time Adaptation Procedure
The attention matrix learned during training encodes the task membership of the training relation types, and therefore it cannot be directly ported to the new test relation types in our test graph A^(te) with N^(te) nodes and R^(te) relation types. We address this issue by adopting a test-time adaptation procedure where we optimize a test-time attention matrix α^(te) ∈ [0, 1]^R^(te) × K̂, while freezing all other parameters of the architecture. During the adaptation, only the observable triplets of the test graph A^(te) are used for training α^(te). That is, we follow the standard self-supervised procedure for link prediction (as in <Ref> and <cit.>), and create a self-supervised mask M ∈ { 0, 1 }^N^(te) × R^(te) × N^(te) that tunes α^(te) to maximize P(M̄ ⊙ A^(te) | M ⊙ A^(te)), with M̄ = 1 - M.
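The adaptation loop can be sketched as follows: every trained parameter is frozen and only a fresh logit matrix for the test relation types is optimized on the observable test triplets. The score_with_alpha interface and the toy scorer are assumptions made for illustration, not the released API:

    # Sketch: test-time adaptation of the attention matrix for the new test relation types.
    import torch
    import torch.nn.functional as F

    def adapt_attention(score_with_alpha, observable_test_triplets, num_test_rels, K_hat,
                        num_epochs=10, lr=1e-3):
        # score_with_alpha(u, r, v, shown_triplets, alpha) -> probability, computed by the trained
        # MTDEA with all of its parameters frozen (assumed interface).
        w_te = torch.zeros(num_test_rels, K_hat, requires_grad=True)
        opt = torch.optim.Adam([w_te], lr=lr)
        for _ in range(num_epochs):
            alpha_te = F.softmax(w_te, dim=-1)
            keep = torch.rand(len(observable_test_triplets)) < 0.8     # self-supervised mask M
            shown = observable_test_triplets[keep]
            hidden = observable_test_triplets[~keep]
            if hidden.numel() == 0:
                continue
            u, r, v = hidden.T
            p = score_with_alpha(u, r, v, shown, alpha_te)
            loss = -(torch.log(p + 1e-9)).mean()                       # maximize P(hidden | shown) w.r.t. alpha only
            opt.zero_grad(); loss.backward(); opt.step()
        return F.softmax(w_te, dim=-1).detach()

    # Toy usage with a stand-in scorer (real use would wrap the frozen MTDEA forward pass):
    toy_scorer = lambda u, r, v, shown, a: torch.sigmoid(a[r].max(dim=-1).values)
    obs = torch.tensor([[0, 0, 1], [1, 1, 2], [2, 0, 3], [3, 1, 0]])
    print(adapt_attention(toy_scorer, obs, num_test_rels=2, K_hat=3))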
§ EXPERIMENTS
In this section, we empirically evaluate our model in predicting missing triplets involving new relation types under different settings.
We aim to address the following main questions:
* Does our model outperform the baselines when predicting new relation types in datasets containing multiple tasks?
* How effectively does our model handle a general dataset when the presence of a multi-task structure is unknown?
We present our primary experimental results below, and we defer readers to <Ref> for additional experiments and details.
Baselines.
We evaluate our model against three baselines: IS-DEA <cit.>, the homogeneous version of NBFNet <cit.> (NBFNet-homo), and the homogeneous version of IS-DEA (IS-DEA-homo).
These homogeneous models are obtained by modifying the corresponding base models to treat all relation types equally. As a result, when predicting a tail node v given a head node u and a relation type r, a homogeneous model returns the node v for which the edge (u, v) is most likely to exist, regardless of the relation type. When predicting the relation type r between given nodes u and v, a homogeneous model returns a uniform prediction over all possible relation types. This modification allows NBFNet to generalize to new test relation types, a task it cannot perform otherwise. To the best of our knowledge, these models are the only ones that are applicable to our scenario.
Dual-sampling metrics.
In line with the dual-sampling loss we proposed in <Ref>, we present the dual-sampling metrics, which include Hits@k, Mean Rank (MR), and Mean Reciprocal Rank (MRR).
For each positive triplet we generate 24 negative samples by corrupting the tail entity and 26 negative samples by corrupting the relation type. These metrics are better suited for measuring the capabilities of the models in our tasks, and we refer to <Ref> for a thorough discussion.
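The ranking computation behind these metrics can be sketched as follows, with made-up scores; each positive triplet is ranked against its pooled tail- and relation-corrupted negatives:

    # Sketch: MR, MRR and Hits@k from positive scores ranked against pooled negatives.
    import torch

    def ranking_metrics(pos_scores, neg_scores, ks=(1, 10)):
        # pos_scores: (B,); neg_scores: (B, num_negatives) pooled tail- and relation-corrupted negatives.
        ranks = 1 + (neg_scores >= pos_scores.unsqueeze(1)).sum(dim=1)   # rank of each positive (ties count against it)
        out = {"MR": ranks.float().mean().item(),
               "MRR": (1.0 / ranks.float()).mean().item()}
        for k in ks:
            out[f"Hits@{k}"] = (ranks <= k).float().mean().item()
        return out

    pos = torch.tensor([0.9, 0.4, 0.7])
    neg = torch.rand(3, 50)            # e.g. 24 tail-corrupted + 26 relation-corrupted negatives
    print(ranking_metrics(pos, neg))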
A1: Multi-task datasets.
To address Q1, we create a novel multi-task scenario, named WikiTopics-MT, in the WikiTopics datasets introduced by <cit.> and obtained from the WikiData5M datasets <cit.> by grouping the relation types into different topics, such as Art, Education, and Sports.
The WikiTopics dataset was employed by <cit.> to assess the extrapolation performance of the double-equivariant model, IS-DEA, when trained on one topic and tested on a different one. For our multi-task dataset, we select for training pairs of topics where, as shown in <cit.>, IS-DEA exhibits the lowest transfer-topic performance (Art and People) and test on a third topic (Health or Taxonomy) on which the IS-DEA trained on one training topic (e.g., Art) performs well but the IS-DEA trained on the other training topic (e.g., People) performs poorly. Since the transfer-topic performance of IS-DEA indicates the degree of double-exchangeability between graphs in the two topics, which is related to the definition of tasks (<Ref>), this selection strategy likely produces train and test graphs with multiple tasks.
<Ref> presents the model performance on WikiTopics-MT when trained on relations from both art and people topics and tested on either health or taxonomy topic. Our model MTDEA, and in particular the one with the largest number of task partitions (K̂ = 6), outperforms all baselines, while having a significantly smaller standard deviation on MRR, Hits@1, and Hits@10 metrics, suggesting that the WikiTopics-MT dataset indeed possesses a complicated multi-task structure, and our model, which has the best multi-task modeling capability, yields most consistent performance.
Model performance on FBNELL. We report mean and std across 3 random seeds. For our MTDEA, K̂ denotes the maximum number of tasks the architecture can model (<Ref>). Our model with K̂ = 2 is comparable to, if not better than, the baselines even when the dataset does not have a multi-task structure.

Models           MR ↓            MRR ↑          Hits@1 ↑       Hits@10 ↑
NBFNet-homo      14.116 (0.029)  0.129 (0.001)  0.042 (0.001)  0.379 (0.002)
IS-DEA-homo      31.625 (0.940)  0.033 (0.001)  0.000 (0.000)  0.000 (0.000)
IS-DEA <cit.>    10.925 (0.383)  0.624 (0.010)  0.562 (0.014)  0.697 (0.010)
MTDEA (K̂=2)      9.106 (0.162)   0.622 (0.012)  0.553 (0.010)  0.704 (0.024)
MTDEA (K̂=4)      10.730 (0.666)  0.606 (0.006)  0.543 (0.007)  0.680 (0.023)
MTDEA (K̂=6)      10.386 (0.683)  0.609 (0.015)  0.547 (0.017)  0.678 (0.008)
A2: General dataset.
To address Q2, we create the FBNELL dataset by combining FB15K-237 <cit.> and NELL-995 <cit.>. The training graph consists of the 50 most frequent relation types and the test graph of the 100 most frequent ones for each dataset. We note that it is not clear from this construction whether FBNELL exhibits multi-task structures because the relation types might still be exchangeable despite belonging to different domains.
<Ref> shows the performance on FBNELL. Our model is on par with (and sometimes outperforms) the baselines, even in this scenario where a multi-task structure may not be present. We attribute the smaller performance gaps to the simplicity of the constructed dataset, which does not seem to exhibit complex multi-task structures (the smallest K̂ has the highest performance).
Overall, our result suggests that in real-world scenarios it is always advantageous to employ the MTDEA model because, even in the single-task setting (due to its regularization towards fewer relation equivalence classes and patterns), it obtains a similar, if not better, performance than the baselines.
§ CONCLUSIONS
In this work we studied the problem of extrapolating to new relation types in link prediction tasks in discrete attributed multigraphs. To overcome the challenge faced by existing work when the graphs contain relation types exhibiting contradictory predictive patterns, we proposed a relaxation of the double exchangeability assumption of <cit.> and demonstrated that this relaxation can be interpreted within a multi-task framework. We designed an architecture capable of modeling this multi-task double exchangeability, along with a test-time adaptation procedure to learn task assignments for new relation types. To empirically evaluate our method, we introduced new benchmark datasets featuring multi-task structures and presented novel evaluation metrics to measure its benefits.
The authors would like to thank Jianfei Gao and Yangze Zhou for insightful discussions. This work was supported in part by the National Science Foundation (NSF) awards CAREER IIS-1943364, CCF-1918483, and CNS-2212160 and an Amazon Research Award.
Any opinions and findings expressed in this manuscript are those of the authors and do not necessarily reflect the views of the sponsors.
Supplementary Material for
An OOD Multi-Task Perspective for Link Prediction
with New Relation Types and Nodes
§ EXPANDED RELATED WORK
GNNs for KG completion. Due to their recent success in diverse graph-learning tasks, Graph Neural Networks (GNNs) have been widely used to predict missing attributed links between nodes in KGs. One of the first adaptations of standard GNNs to multi-relational data was proposed in <cit.>, while an alternative formulation has been considered in <cit.>. These two models have then inspired several improved versions for both transductive <cit.> and inductive <cit.> link prediction tasks on KGs, even in the context of large graphs <cit.>. Recently, their limitations and their relationships have been studied from a theoretical viewpoint, by relating their capabilities in distinguishing different KGs to the Weisfeiler-Leman algorithm <cit.>. These methods, however, cannot work when presented with new relation types in test, which instead represents the main interest of our work.
Tensor Factorization. Factorization-based methods <cit.> are classical graph representation learning methods for attributed graphs. Despite their superior empirical performance on transductive tasks, especially when coupled with specific training strategies <cit.>, these models cannot be applied to inductive tasks featuring new nodes in test. To overcome this limitation, <cit.> propose a new architecture that borrows principles from GNNs and bridges the gap between these two approaches. All these methods, however, are not applicable to the tasks of our interest, where test graphs contain both new nodes and relation types.
Logical reasoning. Predicting missing attributed links in KGs can also be performed by learning logical rules that are then used to infer the missing links. <cit.> focus on learning Horn clauses from the graph. To understand the expressive power of standard GNNs in learning logical rules, <cit.> characterize the fragment of FOC_2 formulas, a well-studied
fragment of first order logic, that can be expressed as GNNs. Recently, <cit.> extended the analysis to heterogeneous graphs.
Zero-shot learning for link prediction in KGs. To predict links involving completely new relation types at test time, zero-shot methods require additional information encoding the semantics of the relation types. <cit.> rely on semantic features obtained from the text descriptions of the relation types. <cit.> enrich the relation features using information from the ontological schema. Finally,
<cit.> use the character n-gram information from
the relation name to generate more expressive representations of the relations. As we do not assume access to any extra information apart from the input graphs, not even the relation textual names[We always consider relation types as numbers, {1, …, R}, R ∈ℕ.], these methods are inapplicable to our scenario.
Few-shot learning for link prediction in KGs. To the best of our knowledge, most few-shot methods predict novel relation types in test following a meta-learning paradigm <cit.>. This meta-learning paradigm requires learning with a set of meta-training tasks and then adapting to a new task during meta-testing. Consequently, these methods require access to many few-shot tasks for training, which can be challenging to obtain in real-world datasets. To remove the dependency on manually created training tasks, <cit.> develop a pretraining procedure followed by finetuning, which predicts triplets for new relation types through the existence of a hypothesis in the form of a subgraph, which should represent the predictive pattern indicating the existence of the triplet. Differently from our approach, however, <cit.> require access to support triplets (examples) of the new relation types, which are used to learn the hypothesis proposal module. Our approach instead only relies on seeing the observable test graph to learn the task membership of the new relation types, whose observable triplets are not necessarily similar to the triplets we aim to predict, and therefore it further removes the need for examples of the test triplets to be predicted.
§ THEORETICAL ANALYSIS
Lemma (restated). The exchangeability between relation types ∼_e defines an equivalence relation on ℛ, since it satisfies the reflexivity, symmetry, and transitivity properties.
We prove each property separately.
The exchangeability between relation types ∼_e is reflexive. This can be trivially shown by considering the identity node permutation Id_N ∈ 𝕊_N and the identity relation type permutation Id_R ∈ 𝕊_R. Namely, let A ∈ 𝒜 be a random variable representing an attributed graph sampled from some data distribution. For any relation r ∈ ℛ, naturally Id_R ∘ r = r and Id_R ∘ Id_N ∘ A = A, and consequently P(Id_R ∘ Id_N ∘ A) = P(A). Hence, r is exchangeable with r.
The exchangeability between relation types ∼_e is symmetric. We first note that the node permutation and relation type permutation actions commute <cit.>. That is, given any attributed graph A ∈ 𝒜, π ∈ 𝕊_N, and σ ∈ 𝕊_R, we have σ ∘ π ∘ A = π ∘ σ ∘ A. In other words, it makes no difference whether we permute the nodes first or the relation types first. Now, for any two relations r, r' ∈ ℛ, if r is exchangeable with r', then we know there exist some π ∈ 𝕊_N and σ ∈ 𝕊_R such that σ ∘ r = r' and P(A) = P(σ ∘ π ∘ A) for any A sampled from the data distribution. Since 𝕊_N and 𝕊_R are groups, π and σ have unique inverses π' ∈ 𝕊_N and σ' ∈ 𝕊_R satisfying π' ∘ π = Id_N and σ' ∘ σ = Id_R respectively. Hence,
σ' ∘ r' = σ' ∘ (σ ∘ r) = (σ' ∘ σ) ∘ r = Id_R ∘ r = r
σ' ∘ π' ∘ (σ ∘ π ∘ A) = σ' ∘ σ ∘ (π' ∘ π) ∘ A = σ' ∘ σ ∘ A = A.
Consequently, we have P(σ ∘ π ∘ A) = P(A) = P(σ' ∘ π' ∘ (σ ∘ π ∘ A)). Moreover, since permutations are bijective mappings, we know that for any A' ∈ 𝒜 there exists some A such that A' = σ ∘ π ∘ A. Hence, P(A') = P(σ ∘ π ∘ A) = P(σ' ∘ π' ∘ (σ ∘ π ∘ A)) = P(σ' ∘ π' ∘ A') for any A' sampled from the data distribution. Therefore, r' is also exchangeable with r.
The exchangeability between relation types ∼_e is transitive. Let r_1, r_2, r_3 ∈ ℛ be three relation types such that r_1 is exchangeable with r_2, and r_2 is exchangeable with r_3. Then, there exist some π_1, π_2 ∈ 𝕊_N and σ_1, σ_2 ∈ 𝕊_R such that
σ_1 ∘ r_1 = r_2 and P(A) = P(σ_1 ∘ π_1 ∘ A)
σ_2 ∘ r_2 = r_3 and P(A') = P(σ_2 ∘ π_2 ∘ A'),
for any A and A' sampled from the data distribution. Hence, for any A ∈ 𝒜,
P(A)
= P(σ_1 ∘ π_1 ∘ A) = P(σ_2 ∘ π_2 ∘ (σ_1 ∘ π_1 ∘ A))
= P((σ_2 ∘ σ_1) ∘ (π_2 ∘ π_1) ∘ A),
where we also have (σ_2 ∘ σ_1) ∘ r_1 = r_3, showing that r_1 is exchangeable with r_3.
Since the exchangeability between relation types is reflexive, symmetric, and transitive, it is an equivalence relation on ℛ.
§ DATASETS CONSTRUCTION
§.§ WikiTopics-MT
The WikiTopics-MT scenarios are derived from the WikiTopics dataset previously introduced by <cit.>, which comprises 11 KGs, with the relation types in each corresponding to a specific topic. Our goal is to construct training graphs exhibiting multi-task structures while ensuring the test graph possesses a task also present in training. In <cit.>, this dataset was leveraged to assess the ISDEA model's zero-shot generalization capabilities on pairs of KGs that may have relation types not exchangeable with each other. We observe that the ISDEA model's performance can be viewed from an alternative perspective: poor test performance on a certain topic when trained on a different topic indicates that the topics contain relation types that are likely not exchangeable. Consequently, the corresponding KGs likely contain distinct tasks (<Ref>), implying that combining the KGs of these two topics would yield an aggregated KG containing multiple tasks.
To identify which pairs of topics are likely to contain distinct tasks, we rerun the ISDEA model from <cit.> on both versions of the WikiTopics dataset. To reduce the memory footprint and the computational time, we run the experiments without the shortest-distance heuristic embeddings that are used to augment the triplet representations in <cit.> (as explained in <Ref>).
<Ref> show the heatmaps representing the transfer-topic performance of the ISDEA model on the 121 pairs of topics for each version of the dataset. Each row in <Ref> corresponds to one training topic, each column corresponds to a test topic, and the color represents the performance evaluated using the dual-sampling Mean Reciprocal Ranks (MRR).
Based on the heatmaps (<Ref>), we devise multi-task scenarios using the outlined strategy. For the training graph, we pick two topics such that when trained on one and evaluated on the other the performance is low (e.g., Health and Sport in <Ref>). This indicates that the ISDEA model, trained on one topic (Health), demonstrates relatively poor zero-shot generalization performance on the other topic (Sport), suggesting that the KGs associated with these topics likely contain distinct tasks.
We then combine the KGs of the two topics to create an aggregated graph (Health + Sport), which is expected to exhibit multi-task structures.
Next, we select one test topic for each training topic, and evaluate separately on the two test topics. A test topic (e.g., Location) is determined such that the ISDEA model trained on one of the training topics (in this case, Science) performs well on the selected test topic (Location), but the good performance on this test topic may not necessarily be observed when the ISDEA model is trained on the other training topic (Sport).
Using this data creation strategy, we construct the following four multi-task scenarios, two for each version of WikiTopics:
* WikiTopics-MT: created from WikiTopics-V1. The training topic is a combination of Art and People, and the 2 test topics are Health and Taxonomy.
* WikiTopics-MT2: created from WikiTopics-V1. The training topic is a combination of Sport and Health, and the 2 test topics are Location and Science.
* WikiTopics-MT3: created from WikiTopics-V2. The training topic is a combination of People and Taxonomy, and the 2 test topics are Art and Infrastructure.
* WikiTopics-MT4: created from WikiTopics-V2. The training topic is a combination of Location and Organization, and the 2 test topics are Health and Science.
<Ref> shows the datasets statistics of the 4 multi-task scenarios. The experiment results of the WikiTopics-MT scenario are shown in <Ref> in the main paper, and the additional experiment results of the WikiTopics-MT2, WikiTopics-MT3, and WikiTopics-MT4 scenarios are shown in <Ref>.
§.§ FBNELL
We create the FBNELL dataset by combining the FB15K-237 <cit.> and the NELL-995 <cit.> datasets.
The training graph is obtained by first choosing the top 50 most frequent relation types in each FB15K-237 and NELL-995, yielding a total of 100 relation types, and then extracting the triplets corresponding to these 100 relation types.
For the test graph, we pick the top 100 most frequent relation types from each dataset, extract the triplets corresponding to the resulting 200 relation types, and predict only those triplets that involve new relation types, while using the remaining triplets as the observable (test) graph. Consequently, the test graph's set of relation types forms a strict superset of those in the training graph, but the evaluation is performed only on the unseen ones. <Ref> shows the statistics of the dataset.
We emphasize that, unlike in the data construction strategy employed for the WikiTopics-MT scenarios, we do not actively identify the presence of multiple tasks within either FB15K-237 or NELL-995, nor verify whether their combination exhibit a multi-task structure.
Therefore, even though the two datasets come from distinct domains, they might still share the same relational task (single-task).
§.§ MetaFam
We construct a synthetic dataset, dubbed MetaFam, that explicitly exhibits conflicting predictive patterns, or equivalently, a multi-task structure. In particular, we recreate the conflicting predictive patterns shown in <Ref>.
We generate the dataset by first creating the family trees using the ontology and the code provided in <cit.>. Each family tree is generated by starting from a single person and incrementally adding a new child to an existing node until the tree reaches the maximum size of 26 nodes or a maximum depth of 5. The parent of the node to be added is chosen uniformly at random, with the only constraint that the maximum branching factor of each tree is 5. Each family tree contains triplets involving 29 different relation types representing different kinds of relationships, such as , , .
We generate the training split by randomly selecting 50 non-isomorphic family trees. In each training family tree we mask out either some of the triplets with relation types and , or some of the triplets involving relation types and , and we use those as the triplets we aim to predict during training time. Doing so ensures that the model is challenged with the two conflicting patterns illustrated in <Ref> when learning to predict these triplets. Specifically, the former two relation types obey the first kind of predictive pattern (e.g., the subgraph G'_1 as illustrated in <Ref>), and the latter two relation types follow the second kind of predictive pattern (the rest of the training graph G_1 as illustrated in <Ref>). As the test split, we create 25 additional non-isomorphic family trees having the same relation types as in training but permuted. In test, we only mask out triplets corresponding to the (permuted) first pair of relation types, so that only one predictive pattern is required to accurately predict these missing triplets at test time.
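A simplified stand-in for this generator is sketched below; it only grows the tree topology under the stated size, depth, and branching limits, and the childOf relation name is a placeholder (the actual ontology of <cit.> derives 29 kinship relation types from the tree):

    # Sketch: grow one random family tree (max 26 people, limited depth and branching).
    import random

    def grow_family_tree(max_nodes=26, max_depth=5, max_children=5, seed=0):
        rng = random.Random(seed)
        parent = {0: None}                 # node 0 is the founding person
        depth = {0: 0}
        children = {0: 0}
        while len(parent) < max_nodes:
            candidates = [p for p in parent
                          if depth[p] < max_depth - 1 and children[p] < max_children]
            if not candidates:
                break
            p = rng.choice(candidates)     # pick a random eligible parent
            c = len(parent)
            parent[c], depth[c], children[c] = p, depth[p] + 1, 0
            children[p] += 1
        return [(c, "childOf", p) for c, p in parent.items() if p is not None]

    print(len(grow_family_tree()), "parent-child edges")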
<Ref> shows the statistics of the MetaFam dataset. The experiment results are described in <Ref>.
§ IMPLEMENTATION AND EXPERIMENT DETAILS
§.§ Licenses, Computational Resources and Experimental Setup
We implemented our MTDEA model using PyTorch <cit.> and PyTorch Geometric <cit.>, which are available under the BSD and MIT license respectively. The Wikidata knowledge base <cit.>, which the WikiTopics dataset is based on, is available under the CC0 1.0 license. We ran our experiments on NVIDIA V100, A100, GeForce RTX 2080Ti, GeForce RTX 4090, and Titan V GPUs. We use Weights & Biases <cit.> to perform hyperparameter tuning. We train all models (baselines and MTDEA) in all experiments for a maximum of 10 epochs, with an early stop patience of 5 epochs based on the dual-sampling MRR value on the validation set. At test time we adapt our MTDEA models to learn the task assignments for the test relation types (as described in <Ref>) for a maximum of 10 epochs. For our MTDEA models, we train with a number of maximum task partitions K̂=2, 4, 6 in all experiments. The time spent on each experiment depends mainly on the size of the dataset and on K̂. For example, training MTDEA with K̂=4 on WikiTopics-MT3 takes around 10 hours to complete, while training MTDEA with K̂=2 on WikiTopics-MT takes around 2 hours and 30 minutes.
Our code and datasets are available. [<https://anonymous.4open.science/r/MTDEA>.]
§.§ Details of the Neural Architecture
Attention matrix.
In our implementation, the attention matrix α (<Ref>) is obtained from a real-valued learnable weight matrix w ∈ ℝ^R × K̂, where R is the number of relations and K̂ the number of maximum partitions we allow.
We apply a Softmax activation over the task-partition dimension for every relation type r ∈ ℛ, i.e., α_r, k = exp(w_r, k) / ∑_k'=1^K̂ exp(w_r, k'), for k ∈ {1, …, K̂}.
At training time we refer to w as w^(tr), since R is R^(tr), the number of training relation types.
During the test-time adaptation, we freeze all parameters of the model, we discard w^(tr) and initialize a new matrix w^(adapt)∈^R^(te)×K̂, where R^(te) is the number of test relation types. Then, w^(adapt) is optimized via gradient descent with the same training loss used in training (<Ref>), with the only difference that the positive and negative triplets are now sampled from the observable test graph ^(te).
Structural node representation for link prediction tasks. Structural node representations, which are obtained from GNNs, are known to have limited capabilities for link prediction tasks in homogeneous graphs <cit.>, an issue that also arises in KGs. Theoretically, <Ref> overcomes this limitation by employing GNNs that output pairwise representations as L_1^(t), L_2^(t), L_3^(t). However, most-expressive pairwise representations <cit.> are computationally expensive. In <cit.>, the authors sought a middle ground for their ISDEA model by employing structural node representations enhanced with heuristic embeddings, such as the shortest distances between the two nodes in the pair to be predicted. Specifically, the representation for a triplet (u, r, v), with u, v ∈ 𝒱 and r ∈ ℛ, before the final MLP layers is obtained as h_u, r^(T) ∥ h_v, r^(T) ∥ d(u, v) ∥ d(v, u), where h_u, r^(T) and h_v, r^(T) are the structural node representations for, respectively, nodes u and v specific to the relation type r obtained after T ISDEA layers; d(u, v) and d(v, u) are the shortest distances from node u to v and from node v to u in the directed KG, and ∥ denotes the vector concatenation operation.
Nevertheless, computing the shortest distance d(u, v) for all pairs (u, v), u, v ∈ 𝒱, in the KG is time- and space-demanding. Due to this limitation, we opt not to compute the shortest distances, and instead use only the structural node representations as the representation of a triplet (u, r, v), that is h_u, r^(T) ∥ h_v, r^(T), in all our experiments except for the synthetic MetaFam dataset. In practice, this means that we implement L_1^(t), L_2^(t), L_3^(t) in <Ref> as GNNs outputting node representations. We empirically observe no performance degradation when removing the shortest-distance heuristics on real-world KGs.
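For concreteness, the heuristic-augmented triplet representation can be assembled as in the sketch below (our illustration; the node embeddings are random placeholders and networkx is used only for the directed shortest distances):

    # Sketch: triplet representation h_u || h_v || d(u, v) || d(v, u) with directed shortest distances.
    import networkx as nx
    import torch

    edges = [(0, 1), (1, 2), (2, 0), (2, 3)]             # toy directed KG, relation types ignored here
    G = nx.DiGraph(edges)

    def pair_distance(G, a, b, cap=10.0):
        try:
            return float(nx.shortest_path_length(G, a, b))
        except nx.NetworkXNoPath:
            return cap                                    # finite stand-in for "unreachable"

    h = torch.randn(G.number_of_nodes(), 8)               # placeholder structural node representations
    u, v = 0, 3
    rep = torch.cat([h[u], h[v],
                     torch.tensor([pair_distance(G, u, v), pair_distance(G, v, u)])])
    print(rep.shape)                                      # torch.Size([18])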
Layers.
In all our experiments, excluding those on the synthetic MetaFam dataset, our MTDEA model employs two GNN-based soft multi-task double-equivariant layers (<Ref>). Conversely, for the synthetic MetaFam dataset, our model consists of only one GNN-based soft layer, in order to learn exactly the conflicting predictive patterns depicted in <Ref>. We select GIN <cit.> with ϵ = 0 as our GNN layer, which implements the L_1, L_2, and L_3 components of a layer.
After these GNN-based layers, we employ two soft layers whose L_1, L_2, and L_3 components are MLPs. The representation of each triplet (u, r, v), with u, v ∈ 𝒱 and r ∈ ℛ, is then obtained from the node representations after these layers as h_u, r^(T) ∥ h_v, r^(T), and it is then passed to
a two-layer MLP to obtain the final prediction.
§.§ Hyper-parameters
In all experiments, involving either our MTDEA models or the baseline ISDEA-homo, we train with mini-batches comprising 256 positive triplets. We use a training negative sample rate of 2 for both the tail-based negative samples and the relation-based negative samples. Hence, for each positive triplet in a minibatch, we construct four negative samples, thus resulting in mini-batches containing 1280 triplets in total. Additional hyper-parameters values for MTDEA and ISDEA-homo include hidden layer dimension of 32, ReLU activations, mean set aggregation (i.e. the set aggregation operation within the parentheses of L_2 and L_3 in <Ref>), a gradient clipping norm of 1.0, a learning rate of 0.001, and a weight decay rate of 5 × 10^-4 in our Adam optimizer.
Apart from the aforementioned hyper-parameters, our MTDEA features additional hyper-parameters associated with its regularization losses. In all our experiments, we set the initial regularization coefficient values to λ_1 = 0.1 and λ_2 = 0.1 (<Ref>), with a per-epoch multiplicative annealing factor of 1.1. This approach ensures that the regularization gains more significance in later epochs (closer to the end). That is, after each epoch, the values of λ_1 and λ_2 are updated to 1.1 ×λ_1 and 1.1 ×λ_2 respectively.
For the NBFNet-homo baseline, we use the default hyper-parameters provided by <cit.>. Specifically, we choose a hidden dimension of 32 for all layers and the distance multiplier message function with the pna aggregate function, and we employ layer normalization. We further use the Adam optimizer with a learning rate of 0.005 and a batch size of 64.
§ ADDITIONAL EXPERIMENTS
§.§ Dual-sampling versus Entity-centric Metrics
In both our training loss (<Ref>) and evaluation metrics (<Ref>), we adopt a dual-sampling scheme wherein for each positive triplet we draw two different types of negative samples: those with the tail node randomly corrupted (entity-centric) and those with the relation type randomly corrupted (relation-type centric). In contrast, existing literature focuses solely on entity-centric negative samples <cit.>. We argue that correctly predicting the tail node, given a head node and a relation type, is as important as determining the type of relation connecting a head and a tail node of interest, and therefore the two negative sampling schemes should be used in conjunction. This combination results in the dual-sampling metrics we propose.
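A minimal sketch of how the dual-sampling ranking metrics can be computed is given below (illustrative Python; score is a placeholder for the trained model's triplet-scoring function, tail_negs[i] and rel_negs[i] hold the negatives drawn for the i-th positive, and ties are ignored for simplicity):

import numpy as np

def dual_sampling_metrics(positives, tail_negs, rel_negs, score, k=10):
    ranks = []
    for i, pos in enumerate(positives):
        scores = [score(pos)] + [score(n) for n in tail_negs[i] + rel_negs[i]]
        ranks.append(1 + sum(s > scores[0] for s in scores[1:]))   # rank of the positive triplet
    ranks = np.array(ranks)
    return {"MRR": float(np.mean(1.0 / ranks)), f"Hits@{k}": float(np.mean(ranks <= k))}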
<Ref> compare the performances of various models under the dual-sampling metrics with their performances under the traditional entity-centric metrics, commonly used in the literature. As can be seen from the tables, the baseline NBFNet-homo demonstrates strong performances under the entity-centric metrics, but not under the dual-sampling metrics. In particular, it achieves 95% entity-centric Hits@10 on the FBNELL dataset. Therefore a model that disregards the information contained in the relation types can achieve near-perfect accuracy under the entity-centric metrics (simply by predicting whether u is connected to v, irrespective of the relation type).
These results suggest that the entity-centric metrics are insufficient for assessing model performance, as homogeneous link prediction methods can easily solve a task when evaluated based on these metrics.
In contrast, the homogeneous models achieve at most 38% Hits@10 accuracy under the dual-sampling metrics, illustrating that the dual-sampling scheme is a more suitable, comprehensive, and challenging evaluation scheme, where homogeneous link prediction models cannot unreasonably obtain near-perfect performances.
§.§ Synthetic Experiments
We conduct additional experiments using the dataset MetaFam, which was explicitly constructed to exhibit conflicting predictive patterns, or multiple tasks, in the (<Ref>). <Ref> shows the results under the dual-sampling metrics. As we can see from the table, our MTDEA model with two task partitions K̂=2 consistently obtains the best performance under all the metrics. This observation conforms to our expectation, since the MetaFam was constructed to include exactly two conflicting predictive patterns and therefore a model capable of modeling two distinct tasks is expected to obtain the best predictions in this dataset.
We further investigate the performances of the models under different metrics. We consider the entity-centric metrics, where we generate 50 negative samples for each positive triplet by corrupting its tail node, and we additionally compare to what we refer to as the relation-type-centric metrics, obtained by constructing 50 negative samples for each positive triplet by corrupting its relation type. These results are summarized in <Ref>. We observe that, although our best-performing model (MTDEA with K̂=2) is slightly worse than the baseline ISDEA under the entity-centric metrics, it outperforms all the models under the relation-based metrics.
§.§ More WikiTopics-MT Scenarios
<Ref> show the additional experiment results on multi-task scenarios WikiTopics-MT2, WikiTopics-MT3, and WikiTopics-MT4. In most cases, our MTDEA model outperforms the baseline models on the MRR and Hits@1, while being comparable in other metrics.
§ TIME AND SPACE COMPLEXITY
In this section we analyze the complexity of our model, focusing on <Ref>. We assume L_1^(t), L_2^(t), L_3^(t): ^N × N × d→^N × N × d' to be GNNs that output node representations instead of pairwise representations, which is the setup we adopt in our experimental evaluation, as described in <Ref>. We consider the feature dimension to be a constant.
For an input graph with N nodes and R relation types, denote by Δ_max the maximum node degree, and let K̂ be the maximum number of tasks our architecture can model.
The time complexity of the L_1^(t) and L_2^(t) components in <Ref> is 𝒪(R N Δ_max), as each of the R relation types is processed using a standard GNN, which has time complexity 𝒪(N Δ_max).
The time complexity of the L_3^(t) component in <Ref>
is 𝒪(R K̂ N Δ_max), since each of the R relation types iterates over the K̂ tasks and for each of them aggregates all other relation types and processes the aggregation using a standard GNN, which has time complexity 𝒪(N Δ_max). Therefore, our method, as described in <Ref>, has an overall time complexity 𝒪(R K̂ N Δ_max). In practice, K̂ is small compared to R and N (the maximum value we consider in our experiments is K̂=6).
We note that the complexity of our method can be reduced if we replace the set aggregation inside L_3^(t) to avoid excluding the current relation type. That is, if we substitute ∑_r”∈∖{ r }α_r”, k^(t)_·, r”, ·, · with ∑_r”∈α_r”, k^(t)_·, r”, ·, ·, then for each of the K̂ tasks, the output of L_3^(t) can be computed only once, instead of computing it for each relation type.
Therefore, the overall time complexity of our method can be improved to 𝒪(R N Δ_max) by a simple change in the set aggregation function.
The space complexity of our method is 𝒪(R (N+ N Δ_max) + RK̂), as for each relation type we need to store N node features and the corresponding connectivity, as well as the attention weights α∈ [0, 1]^R ×K̂.
§ LIMITATIONS
Despite the contributions and advancements made in this work, there are aspects that can be further refined and explored in future works:
* Scalability: The proposed model may face challenges when scaling up to extremely large graphs, as memory demands might become prohibitively high. Further research is necessary to develop approximation techniques to handle such large-scale applications.
* Model complexity: The proposed model introduces additional complexity compared to some baseline methods. Efforts to simplify the models while preserving their performance benefits are worth exploring in future research.
* Non-exchangeable relations: There may exist cases where no relations are exchangeable, rendering it necessary to have a number of task partitions equal to the number of relations. In such situations, the benefits of our proposed method may be reduced. Investigating these cases remains an important avenue for future research.
|
http://arxiv.org/abs/2307.05564v1 | 20230709223937 | Augmenters at SemEval-2023 Task 1: Enhancing CLIP in Handling Compositionality and Ambiguity for Zero-Shot Visual WSD through Prompt Augmentation and Text-To-Image Diffusion | [
"Jie S. Li",
"Yow-Ting Shiue",
"Yong-Siang Shih",
"Jonas Geiping"
] | cs.CL | [
"cs.CL"
] |
1]Jie S. Li
1]Yow-Ting Shiue
2]Yong-Siang Shih
1]Jonas Geiping
[1]University of Maryland, College Park
[2]Duolingo, Inc.
[ ]
[ ]
Automated Essay Scoring in Argumentative Writing: DeBERTeachingAssistant
[
August 12, 2023
========================================================================
This paper describes our zero-shot approaches for the Visual Word Sense Disambiguation (VWSD) Task in English. Our preliminary study shows that the simple approach of matching candidate images with the phrase using CLIP suffers from the many-to-many nature of image-text pairs.
We find that the CLIP text encoder may have limited abilities in capturing the compositionality in natural language. Conversely, the descriptive focus of the phrase varies from instance to instance. We address these issues in our two systems, Augment-CLIP and Stable Diffusion Sampling (SD Sampling). Augment-CLIP augments the text prompt by generating sentences that contain the context phrase with the help of large language models (LLMs).
We further explore CLIP models in other languages, as an ambiguous word may be translated into an unambiguous one in the other language. SD Sampling uses text-to-image Stable Diffusion to generate multiple images from the given phrase, increasing the likelihood that a subset of the generated images matches the image paired with the text.
§ INTRODUCTION
The Task of Visual Word Sense Disambiguation, as set out in SemEval-2023 Task 1 Overview Paper <cit.>, can be described as follows: Given a target word (the target word) in the context of a two or more word phrase (the full phrase) and ten candidate images, pick the image (the gold image) among the ten candidate images that correctly corresponds to the target word. This competition was run in three languages, English, Farsi and Italian. We participated in the English version of the task. This task is in line with previous tasks connecting images to text, such as <cit.>. We explore two distinct systems to tackle this task. Both systems use Contrastive Language-Image Pre-training (CLIP) <cit.> as a foundation. CLIP was trained to associate text and related images, through increasing the cosine similarity (CLIP-similarity) between the normalized text-embedding and image-embedding of related text and image pairs and decreasing that for unrelated pairs. Our first system (Augment-CLIP) augments the CLIP text embedding by introducing additional context (through key-to-text) and accessing CLIP text and image embedding in other languages, through third-party implementation of CLIP for various languages. The second system (SD Sampling) samples Stable Diffusion <cit.> to generate multiple images illustrating the semantics of the full phrase and then applies a distance metric to select the candidate image that is closest to the generated images.
As standalone systems, Augment-CLIP and SD Sampling do not outperform Base-CLIP as the additional context may not correctly extend the target word meaning, but they offer complementary benefits and improve Base-CLIP through ensembling. We ensemble models by first calculating a new probability (or score) for each candidate image by taking the equally weighted average of the probability calculated from the underlying models. Each individual model can output a probability for a candidate image. For CLIP-based models, the probability is the softmax of the candidate image logits. We then rank the candidate images based on the new probability in descending order, with the highest probability candidate image being the predicted image from the ensembled model. See Table <ref>.
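The ensembling step amounts to averaging the per-model probability vectors over the ten candidate images and re-ranking; a minimal NumPy sketch (illustrative only) is:

import numpy as np

def ensemble_rank(probs_per_model):
    # probs_per_model: list of length-10 probability vectors, one per underlying model
    # (e.g., softmax over CLIP logits for the candidate images).
    avg = np.mean(np.stack(probs_per_model), axis=0)   # equally weighted average per candidate
    order = np.argsort(-avg)                           # candidates in descending probability
    return order, avg                                  # order[0] is the predicted image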
§ SYSTEMS OVERVIEW
§.§ Augment-CLIP
We look at two methods to create the Augment-CLIP system. Both methods attempt to disambiguate the full phrase containing the target word. The first does this through introducing additional text and the second does this through accessing additional languages.
§.§.§ Augment-CLIP with key-to-text
Our baseline approach, referred to as the Base-CLIP approach (or Base-CLIP model), encodes the full phrase using the CLIP text encoder and the candidate images using the CLIP image encoder, and then chooses the candidate image whose encoding has the largest similarity to the full-phrase encoding. Base-CLIP models, regardless of their specific underlying architecture, suffer from a weakness in compositionality. Compositionality is the change of word meaning in the presence of other words. For example, the meaning of "baby powder" is not the average of "baby" and "powder", and "powder" means different things in "baby powder" versus "milk powder". This is a general problem with embeddings beyond CLIP, such as text embeddings <cit.>. CLIP is trained with image and caption pairs only, with captions consisting of shorter texts and of less diversity than texts used to train language models, so the complex syntactic and semantic relationships among words, including compositionality, are not well captured by CLIP. In comparison, standard language models are trained on larger text corpora composed of longer texts from a larger variety of sources. We utilize this idea to augment Base-CLIP with key-to-text completion [<https://github.com/gagan3012/keytotext>] to leverage additional language knowledge through the key-to-text system. We use the key-to-text systems "k2t" (k2t 1), "k2t-base" (k2t 2), and "mrm8488/t5-base-finetuned-common_gen" (k2t 3).
For example, for the target word "administration" and the full phrase "administration prime minister" from the trial data, we created three additional sets of context texts:
* "The administration prime minister is the official title of the leader."
* "The Administration Prime Minister is the leader of the country."
* "prime minister speaks to the media during his visit."
These texts further reinforce the semantic meaning of "administration". The CLIP text-embedding of the augmented context text is used to measure the CLIP-similarity to the candidate images.
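For illustration, scoring the ten candidate images against such an augmented sentence can be done roughly as follows (a sketch assuming the OpenAI CLIP package with ViT-B/32; the augmented sentence is taken as given from the key-to-text system):

import torch, clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def rank_candidates(augmented_text, image_paths):
    text = clip.tokenize([augmented_text]).to(device)
    images = torch.stack([preprocess(Image.open(p)) for p in image_paths]).to(device)
    with torch.no_grad():
        t = model.encode_text(text)
        v = model.encode_image(images)
    t = t / t.norm(dim=-1, keepdim=True)
    v = v / v.norm(dim=-1, keepdim=True)
    sims = (v @ t.T).squeeze(1)            # CLIP-similarity of each candidate to the augmented text
    return sims.argsort(descending=True)   # highest-similarity candidate first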
To keep the focus on the benefit of additional text context rather than optimizing the context itself, we use a greedy method to sample key-to-text and do not evaluate alternative sampling methods.
§.§.§ Augment-CLIP through additional languages
The second way to augment Base-CLIP is to resolve the ambiguity of the full phrase in the source language by translating the full phrase into a different language via a translation model (we leverage Google Translate[<https://translate.google.com/>]) and then using the other language's CLIP text-embedding of the translation to measure the distance to the candidate images. We evaluate this idea with Chinese translations. Chinese Augment-CLIP does not outperform Base-CLIP, often due to poor translation, but, interestingly, it is sufficiently complementary to Base-CLIP and the other Augment-CLIP variants that it improves performance through ensembling. See results in Table <ref>.
§.§.§ Base-CLIP model differences
For Base-CLIP, the performance difference between the two versions of CLIP that we used, ViT-B/32 and ViT-L/14, is notable. ViT-B/32 in fact gave better performance on trial and test data. This is unexpected, as ViT-L/14 is a larger model with more training and more data <cit.>. Furthermore, the organizers' baseline uses CLIP-ViT-large-patch14-336, an even larger model, which improved performance on the test data. See Table <ref>. This leads to the question of how different Base-CLIP embeddings affect performance on this task, which is outside the scope of this paper as we take the Base-CLIP embedding as a given in our systems.
§.§ Stable Diffusion Sampling
The second system samples text-to-image Stable Diffusion-v1.4 (SD) to generate multiple images after inputting the full phrase as the text prompt. Then the system outputs the candidate image with the closest distance to any of the generated SD images. There are two advantages of this system: first is the access to the larger training data of Stable Diffusion, which includes LAION2B-en <cit.>, a dataset of 2.32 billion Common Crawl image-text pairs. Second, evaluating multiple images for a given text input resolves the ambiguity of the input text and also the pictorial ambiguity in its image representation. As an example of text ambiguity, "angora" can mean a type of fiber or, less frequently, a specific city, as in "Angora City". Sampling several images allows the possibility that a subset of the images correctly expresses the meaning of the target word. Even for an unambiguous word, there may be pictorial diversity in its representation, and sampling multiple images allows for broader coverage of this diversity than a single image.
We evaluate two sampling methods of Stable Diffusion, text-to-image and text-and-image-to-image. For each, two similarity metrics were used: CLIP-similarity and l_2 distance between InceptionV3 <cit.> features of candidate image and InceptionV3 features of SD sampled image. Of these four, text-to-image sampling of Stable Diffusion with CLIP-similarity performs the best on trial data and a subset of train data - this is designated SD Sampling and is our submission 2 for the task.
For text-to-image sampling, we input the full phrase to SD and generate 50 output images (independent of any candidate images). We then calculate the maximum CLIP-similarity (CLIP ViT-L/14) between a candidate image and the 50 output images and associate that largest CLIP-similarity to that candidate image (candidate image distance). We then output the candidate image with the largest CLIP-similarity.
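A rough sketch of this procedure is given below, assuming the diffusers StableDiffusionPipeline for SD v1.4 and the OpenAI CLIP package (generation settings such as guidance scale and random seeds are omitted and may differ from the actual runs):

import torch, clip
from PIL import Image
from diffusers import StableDiffusionPipeline

device = "cuda"
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4").to(device)
model, preprocess = clip.load("ViT-L/14", device=device)

def embed(images):
    batch = torch.stack([preprocess(im) for im in images]).to(device)
    with torch.no_grad():
        f = model.encode_image(batch)
    return f / f.norm(dim=-1, keepdim=True)

def sd_sampling_predict(full_phrase, candidate_paths, n_samples=50):
    generated = [pipe(full_phrase).images[0] for _ in range(n_samples)]
    g = embed(generated)
    c = embed([Image.open(p) for p in candidate_paths])
    scores = (c @ g.T).max(dim=1).values   # max CLIP-similarity to any generated image
    return int(scores.argmax())            # index of the predicted candidate image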
§ EXPERIMENTAL SETUP
The trial, train, and test datasets consist of multiple instances. An instance is a target word and a full phrase (containing the target word) and ten candidate images, with one image (the gold image) capturing the semantic meaning of the target word as exemplified in the full phrase. Train, trial, and test have 12869, 16, and 463 instances, respectively. For the test data, there are two versions of the dataset provided by the task organizers, differing in the image size <cit.>. We perform our predictions on the dataset with the smaller image size.
We do not train or fine-tune our models on training data to demonstrate the zero-shot property of our approach, although we do use the training data in part to inform us of which Augment-CLIP system and which SD Sampling system to select for task submissions. Based on trial data performance, among the three k2t systems, we choose k2t 2, and among the SD Sampling systems, we choose text-to-image with CLIP-similarity.
As measurements of the performance of the models, hit rate and mean reciprocal rank (mrr) are applied to the model predictions on the trial dataset and test dataset. Based on the inputs, the model assigns a score to each candidate image. The model can output one predicted image, with the highest score, or it can output a list of images ordered in decreasing order of score. Hit rate is the percentage of instances where the predicted image is the gold image. Mean reciprocal rank is the average of the reciprocal of the rank of the gold image in the list of images, ordered based on score.
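Both measures can be computed from the ranked candidate lists as in the following sketch (illustrative Python; ranked_lists[i] is the model's ordering of the ten candidates for instance i, best first, and gold[i] is the gold image):

import numpy as np

def evaluate(ranked_lists, gold):
    ranks = np.array([lst.index(g) + 1 for lst, g in zip(ranked_lists, gold)])
    return {"hit_rate": 100 * float(np.mean(ranks == 1)),
            "mrr": 100 * float(np.mean(1.0 / ranks))}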
§ RESULTS
§.§ Augment-CLIP through key-to-text
While standalone Augment-CLIP through key-to-text does not outperform Base-CLIP, it does reveal that adding context can improve performance. The additional context, when correctly augmenting the meaning of the target word, can indeed improve performance on the test set. In the best-case scenario, the additional context is an extension and explanation of the meaning of the target word. In the worst case, an incorrect extension of context dilutes the meaning of the target word. In the former case, Augment-CLIP is likely to correctly predict the gold image. In the group of instances in which both Augment-CLIP and Base-CLIP correctly predict the gold image, the CLIP-similarity score is higher in Augment-CLIP than in Base-CLIP, showing the effectiveness of added context. This depends on the quality of the context extension process: if the augmented context does not aid in conveying the correct semantic meaning of the target word, then the incorrect additional context may degrade performance in a standalone system. This is analogous to the performance of a language model with in-context learning, where the performance depends on the quality of in-context examples <cit.>.
Adding a k2t system can improve performance of the Base-CLIP. This can be seen in Table <ref>. For each instance in the dataset, consider the Base-CLIP similarity score for the full phrase and the gold image and consider the Augment-CLIP through k2t similarity score for the full phrase and the gold image. The difference between the Augment-CLIP similarity score and the Base-CLIP similarity score is calculated and shown in Table <ref>. This difference shows whether Augment-CLIP would have done better or worse than Base-CLIP. It also shows the potential of Augment-CLIP to improve Base-CLIP's performance.
Extra steps can be taken to improve the quality of the k2t text completion, but our focus is not to improve the performance of the k2t system but to show that reasonable additional context offers complementary benefits to Base-CLIP.
§.§ Augment-CLIP through other languages
We evaluate another method to disambiguate the full phrase by translating the full phrase from English to another language and exploring the CLIP text embedding and image embedding in that foreign language. Direct translation to a foreign language (taking the first result of Google Translate), with that language chosen to be Chinese (Aug-CLIP: zh) in our evaluation, does not increase performance, partly due to incorrect translations. Here, identical round-trip translations can serve as a proxy for correct translation from English to Chinese. The test instances can be divided into two groups, one containing the instances with identical round-trip translations and the other containing all remaining instances. That is, starting with the English full phrase, translating it to Chinese, and then translating that result back to English (English_1 → Chinese → English_2), an instance belongs to the first group if English_1 and English_2 are the same, up to capitalization. The first group has a higher foreign-language CLIP-similarity score than the second.
As a standalone system, direct translation does not improve performance, but ensembling with a direct translation system does improve performance. By adding Chinese translation to the ensemble (ensemble(B-CLIP, zh, k2t 2)), test data hit rate increases from 59.18 to 63.71 and test data mrr increases from 73.21 to 76.11. See Table <ref>.
§.§ SD Sampling
SD Sampling does not outperform Base-CLIP. It is worth noting that the instances where SD Sampling correctly selects the gold image are different from those of Base-CLIP, showing a potential gain from accessing the SD Sampling system. See Table <ref>.
There is pictorial diversity in the SD samples, and often that diversity includes the correct image expression of the target word in the full phrase, as intended. There is diversity in viewpoint, proximity, and style of the object presented. See images of various cityscapes outputted by Stable Diffusion for the full phrase "angora city" in Figure <ref>, and see images of various views of different models of "internet routers" in Figure <ref>. There is also diversity in the semantic interpretation of the full phrase: see for example Figure <ref> for the interpretation of the word "breaking wheel" as both a torture device and a music group. This shows that the first goal of the SD system, producing a diversity of pictorial representations of the desired object, is met. But the subsequent application of the distance metric fails to match the sampled SD image to the gold image. At times, incorrect candidate images have larger CLIP-similarity to the correctly sampled SD image than the gold image does, due to a coincidence of similar style, lighting, or material. This is not a shortcoming of CLIP-similarity as it is intended to be applied to (text, image) pairs and not (image, image) pairs <cit.>. As an alternative, we evaluate metrics such as the l_2 distance between InceptionV3 features of the sampled image and InceptionV3 features of the candidate image. Using the l_2 metric underperforms the CLIP-similarity metric, as shown in Table <ref>. We do not evaluate other image-to-image similarity metrics and leave for future work the search for an effective metric.
An issue with SD sampling on this dataset is the domain shift between the dataset on which SD was trained, a common crawl of text and image pairs in English, and the more scientific and technical focus of the full phrases in the test data. For example, the full phrase "breaking wheel" is a historical term and meant to be unambiguous and to resolve to mean a medieval torture device and the gold image is of such a device. On the other hand, to the layperson, "breaking wheel" sounds like the name of a band, akin to Stone Sour, Breaking Benjamin, Nickelback, and this popular understanding of "breaking wheel" is evidenced in Stable Diffusion sampled images, which include images of band groups. Similarly, other instances with the full phrase being technical and scientific terms that are not well-known to the general public are expressed in Stable Diffusion output images in terms of how a layperson would interpret such a term, instead of the correct technical meaning.
§ CONCLUSION
The Base-CLIP system is a strong solution to the task challenge.
Our system Augment-CLIP complements Base-CLIP to resolve text ambiguity and improves text compositionality. Our system SD Sampling provides pictorial diversity in ambiguous and unambiguous text interpretation. These two methods offer additional ways to connect text and images.
|
http://arxiv.org/abs/2307.04418v1 | 20230710084854 | Quantum error correction beyond the toric code: dynamical systems meet encoding | [
"Garima Rajpoot",
"Komal Kumari",
"Sudhir Ranjan Jain"
] | quant-ph | [
"quant-ph"
] |
[
*
Received / Accepted
========================
We construct surface codes corresponding to genus greater than one in the context of quantum error correction. The architecture is inspired by the topology of invariant integral surfaces of certain non-integrable classical billiards. Corresponding to the fundamental domains of rhombus and square torus billiard, surface codes of genus two and five are presented here. There is significant improvement in encoding rates and code distance, in addition to immunity against noise.
§ FROM GEOMETRY TO ENCODING
Geometrical representations of algebraic and arithmetic relations <cit.> and algebraic representations of geometrical patterns <cit.> are both fascinating themes. In turn, they have led to a deep understanding in physics and mathematics <cit.>. A one-to-one correspondence between Lie groups and reflection groups whose fundamental regions are simplexes in Euclidean space has been beautifully illustrated in <cit.>. These fundamental regions generate tori for "unit shapes" like a square, equilateral triangle, right isosceles triangle, or a hemi-equilateral triangle <cit.>. Here we bring out an application of the geometry of regular polytopes <cit.> to encoding theory in the context of quantum information.
The dynamical systems most relevant to the present theme are planar billiards, wherein a particle moves freely inside a two-dimensional enclosure, reflecting from the boundary in accordance with Snell's law of reflection. According to the Liouville-Arnol'd theorem <cit.>, for a system with f degrees of freedom, if there are f functionally independent invariants which are in involution, the (invariant) surface on which the trajectory of the system resides is topologically equivalent to an f-torus. Another condition stipulated for the applicability of the Liouville-Arnol'd theorem is that the vector fields in phase space must be smooth everywhere. The integrability of such systems is a fragile property, so much so that even if the vector fields become singular at points of measure zero, the system loses integrability <cit.>. Perhaps the simplest example is when the shape of the enclosure is a square or a rectangle, explained later in some detail, where the invariant surface is a torus. However, an interesting situation arises by deforming the square to a rhombus with an acute angle π /n. The vector fields in phase space become singular at a set of points of measure zero. The corresponding invariant surface is topologically equivalent to a sphere with a few handles, the number of handles being related to n. In this work, instead of a lattice of spins, we employ the lattice constructed by stacking fundamental domains in a plane. On this lattice, we show how to place qubits and set up a stabilizer code.
Somewhat unrelated but of great significance, a connection between billiard and computation was first realized by Fredkin and Toffoli <cit.>. Although it gave us the Toffoli gate, the connection between topology of invariant surfaces in billiards and surface codes was not relevant for them and has been brought out recently <cit.>.
§ GENUS-2 CODE
Computation requires scalability of logical qubits on planar chips. One way to achieve this is to use "unit shapes" which can fill the plane on successive reflections to encode the information on a surface. Our aim is to make use of the fundamental domains of certain geometrical structures such as squares and rhombi, which, upon successive reflections, fill the whole plane while maintaining the planarity of the surface. This arrangement allows one to make changes anywhere else in the circuit by only locally changing parameters, thereby leading to scalability. For example, if we consider a square tile, upon successive reflections about its sides, four copies form a unit of tessellation (the fundamental domain); identifying the pairs of parallel edges gives a torus, which is characterized by a topological invariant, the genus, being equal to one. Thus, the surface code corresponds to tori, and hence makes the well-known "toric code" <cit.>. The fundamental domain of a π/3-rhombus is another such structure, with genus equal to two, that can be tessellated over the whole plane. Here, we use this to design a new code on a surface of genus two.
§.§ “Tessellation" with Lg-rhombus
We introduce a new surface code using the fundamental domain equivalent to a genus-two surface (Fig. <ref>), constructed by stitching six copies of the π/3-rhombus. Upon identification of edges as shown in Fig. <ref>, it creates a "double-torus" <cit.>, which is equivalent to a sphere with two handles. This can be tessellated over the whole plane as shown in Fig. <ref>. Hence, encryption on this surface is termed the "Genus two code" or "Double-toric code". As per Kitaev's idea, whereby increasing the genus gives a higher encryption, the double-toric code helped achieve a significantly higher encryption rate as compared to the surface code.
§.§ Encoding on a plane
Let us start with a unit structure of the genus two code - constructed by using n=6 data qubits (represented by circles) and m=4 ancilla qubits (represented by squares), shown in Fig. <ref>.
The bold and dashed lines represent the control-X and control-Z operations, respectively from the ancilla qubit to the data qubits. Stabilizers are the operators which belong to the Pauli group and preserve the logical state, i.e. if the logical state is |Ψ⟩_L, then P_i|Ψ⟩_L=(+1)|Ψ⟩_L. The set of stabilisers for this code structure is P={X_1X_2X_3X_4, X_3X_4X_5X_6, Z_1Z_3Z_5, Z_2Z_4Z_6}. These four elements of the stabilizer set are the generators of the stabilizer group 𝒮. For this encoded logical qubit, the logical state |0⟩_L is <cit.>:
|0⟩_L =1/𝒩∏_P_i∈⟨P⟩(I^⊗n+P_i)|0^⊗n⟩
=1/𝒩(I^⊗6+X_1X_2X_3X_4)(I^⊗6+X_3X_4X_5X_6)(I^⊗6+Z_1Z_3Z_5)(I^⊗6+Z_2Z_4Z_6)|0^⊗6⟩
=1/𝒩(|000000⟩+|001111⟩+|111100⟩+|110011⟩),
where 𝒩 is the normalization factor. The circuit for this encryption is shown in Fig. <ref> (b). All the stabilizers commute with each other ([P_i,P_j]=0 ∀ i,j). To construct logical state |1⟩_L, we have to look for analogous Pauli sigma pairs of logical operators {X_i,Z_i}, that (i) commute with each of the stabilizers P_j ([X_i,P_j]=0=[Z_i,P_j] ∀ i,j) and (ii) pairwise anti-commute with each other ({X_i,Z_i}=0 and [X_i,Z_j]=0 ∀ i≠ j).
To find the logical operators, first we have to identify the edges to specify the boundaries. The filling of the plane using the π/3-rhombus forms periodically arranged branch-cuts, which help identify the boundaries. On these boundaries, the control-X (bold lines) and control-Z (dashed lines) are arranged alternately. We define a path, between the boundaries, by connecting a data qubit vertex of a rhombus to another data qubit vertex of a corresponding copy with respect to the fundamental domain of the rhombus. Two sets of six paths are found which form the logical X operator (X̅) and the logical Z operator (Z̅). Thus we found two pairs of logical operators, which satisfy the above conditions: {X̅_1=X_1X_3, Z̅_1=Z_1Z_4Z_6} and {X̅_2=X_4X_6, Z̅_2=Z_2Z_4Z_5}. The minimum weight of an error E=E_a^† E_b violating the Knill-Laflamme conditions <cit.> was found to be 2. Thus it is a [[6,2,2]] code. The encoding rate, or the ratio of the number of logical qubits to the number of data qubits, for this code structure is 1/3.
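These algebraic requirements can be verified numerically; the following NumPy sketch (an independent check, not part of the construction itself) builds the stabilizers and logical operators of the [[6,2,2]] code as 64×64 matrices, using 0-indexed qubits, and asserts the stated commutation and anti-commutation relations:

import numpy as np
from functools import reduce

I2 = np.eye(2); X = np.array([[0., 1.], [1., 0.]]); Z = np.array([[1., 0.], [0., -1.]])

def op(pauli, sites, n=6):
    return reduce(np.kron, [pauli if i in sites else I2 for i in range(n)])

stabs = [op(X, {0, 1, 2, 3}), op(X, {2, 3, 4, 5}), op(Z, {0, 2, 4}), op(Z, {1, 3, 5})]
logicals = [(op(X, {0, 2}), op(Z, {0, 3, 5})),   # X-bar_1 = X_1X_3,  Z-bar_1 = Z_1Z_4Z_6
            (op(X, {3, 5}), op(Z, {1, 3, 4}))]   # X-bar_2 = X_4X_6,  Z-bar_2 = Z_2Z_4Z_5

comm = lambda A, B: np.allclose(A @ B, B @ A)
assert all(comm(P, Q) for P in stabs for Q in stabs)                      # stabilizers commute
assert all(comm(P, L) for P in stabs for pair in logicals for L in pair)  # logicals commute with stabilizers
for Xl, Zl in logicals:
    assert np.allclose(Xl @ Zl, -(Zl @ Xl))                               # anti-commutation within a pair
assert comm(logicals[0][0], logicals[1][1]) and comm(logicals[1][0], logicals[0][1])
print("All [[6,2,2]] commutation relations verified.")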
To increase the code distance and the encoding rate of the double-toric code, we can stack units of this code (Fig. <ref>) vertically as well as horizontally. Reflecting the unit an equal number of times in the vertical and horizontal directions arranges the unit structures in an equal number of rows and columns. To construct the code with p^2 unit structures, the number of rows and columns will be p, the number of required data qubits is n=2p(2p+1), the number of required ancilla qubits is m=2p(p+1), the number of logical qubits is k=2p^2, and the code distance is d=⌊p+2/2⌋+1, where ⌊·⌋ is the floor function. So the general form of the code is [[2p(2p+1), 2p^2,⌊p+2/2⌋+1]]. The encoding rate of this code is k/n=p/(2p+1). For p→∞, the encoding rate is 1/2.
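For concreteness, the parameters and encoding rates of the first few members of this code family follow directly from these formulas (illustrative Python):

for p in range(1, 6):
    n, k, d = 2 * p * (2 * p + 1), 2 * p ** 2, (p + 2) // 2 + 1
    print(f"p={p}: [[{n},{k},{d}]], encoding rate k/n = {k / n:.3f}")
# p=1 reproduces the [[6,2,2]] unit; the rate p/(2p+1) approaches 1/2 for large p.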
§.§ Comparison of code distance in toric and genus-2 codes
In the [[5,1,2]] code shown in Fig. <ref>, the code distance is 2. Let us try to make a logical operator of weight 3. The paths D1-A1-D3-A4-D5 and D2-A3-D3-A2-D4 provide such a pair of logical operators ⟨X̅=X_2X_3X_4, Z̅=Z_1Z_3Z_5⟩. Both the operators commute with all the stabilizers of the [[5,1,2]] code and anticommute with each other. In this way we achieved a pair of logical operators of weight 3, and so the code distance could be 3, making it a [[5,1,3]] code instead. But for the states corresponding to these operators, the minimum weight of error for which the Knill-Laflamme conditions do not hold is d=2, indicating that this has to be a distance-2 code; hence the code is [[5,1,2]]. This is to be expected.
It is important to note that we could have found all logical operators of weight 2, while maintaining the code distance two - {X_1X_3, Z_1Z_2} and {X_4X_6, Z_5Z_6}. In this case also, the minimum weight of errors for which the Knill-Laflamme conditions do not hold is two. So we could have chosen either set of logical operators. But it is our aim to maximize the code distance using the reflection property of the structure. This makes the [[2p(2p+1),2p^2,⌊p+2/2⌋+1]] code more suitable for achieving higher encryption rates and distances than a [[2p(2p+1),2p^2,2]] code.
Consider now another unit stacked vertically on the single unit as shown in Fig. <ref>. Here, the number of physical qubits is n=10, while the number of ancilla qubits is m=7. The stabilizers for this code are, P={X_1X_2X_3X_4, X_3X_4X_5X_6X_7X_8, X_7X_8X_9X_10, Z_1Z_3Z_5, Z_2Z_4Z_6, Z_5Z_7Z_9, Z_6Z_8Z_10}. Following the arguments presented above for identifying paths between boundaries, we obtain X̅ and Z̅; the complete set of logical operators commuting with the stabilizers and anti-commuting pairwise is thus
(i) {X̅_1=X_2X_6X_8, Z̅_1=Z_1Z_4Z_8Z_9},
(ii) {X̅_2=X_2X_6X_10, Z̅_2=Z_5Z_7Z_10},
(iii) {X̅_3=X_4X_6X_8, Z̅_3=Z_2Z_3Z_6}.
The Knill-Laflamme conditions are violated for a weight of error three, giving the code distance three. However, we can again find logical operators of weight two - {X_1X_3,Z_1Z_2}, {X_3X_5X_7,Z_5Z_6} and {X_7X_9,Z_9Z_10}. This should give a distance of two which is also verified using the Knill-Laflamme conditions. Since both the cases are valid, we choose to use the one in which the distance is maximum without violating the stabilizer algebra.
§ GENUS-5 CODE
The motivation for this code stems from another dynamical system, the square torus billiard, where the integrable dynamics of a square billiard is interrupted by a square-shaped scatterer <cit.>. Following the association discussed above for genus 2, we construct a code with this dynamical system in mind.
§.§ Square torus billiard
The free motion of a point particle in a square torus billiard (STB) is shown in Figure <ref>. According to the theorem by Zemlyakov and Katok <cit.>, this system is non-integrable albeit non-chaotic, with zero Lyapunov exponent. The invariant integral surface is topologically equivalent to a sphere with five handles, as shown in <cit.>. The entire trajectory of the free particle in the STB can be folded into four copies, using which we can construct the invariant (constant-energy) surface. This is explained in Figure <ref>. In statistical mechanics, this model is related to the Ehrenfest gas, where a beam of particles moving freely in a plane gets scattered by square-shaped scatterers (also called the wind-tree model <cit.>). A new finite-time exponent was introduced to describe these systems <cit.>, as the long-time average vanishes due to the rather pathological behaviour of these systems.
We shall now employ these features to our advantage in quantum encoding.
§.§ Encoding
We start with the fundamental domain of an equivalent genus-five surface, Fig. <ref>, obtained by tessellating a square with a square-shaped scatterer inside it four times and placing the data and the ancilla qubits alternately on the vertices of the external squares as well as on the vertices of the scatterers. The data qubits are represented as D (in the circles) and the ancilla qubits are represented as A (in the squares). As in earlier sections, the bold (dashed) lines represent the control-X(Z) operations from the ancilla qubits to the data qubits. The set of stabilizers is P={X_1X_2X_3X_6X_7, X_3X_4X_5X_12X_13, X_1X_6X_8, X_2X_7X_9, X_3X_10X_12, X_3X_11X_13, Z_1Z_3Z_4Z_8Z_10, Z_2Z_3Z_5Z_9Z_11, Z_3Z_6Z_8, Z_3Z_7Z_9, Z_4Z_10Z_12, Z_5Z_11Z_13}. The logical state |0⟩_L is:
|0⟩_L= 1/𝒩∏_P_i∈⟨P⟩(I^⊗n+P_i)|0^⊗n⟩
= 1/𝒩(I^⊗13+X_1X_2X_3X_6X_7)(I^⊗13+X_3X_4X_5X_12X_13)(I^⊗13+X_1X_6X_8)(I^⊗13+X_2X_7X_9)
(I^⊗13+X_3X_10X_12)(I^⊗13+X_3X_11X_13)(I^⊗13+Z_1Z_3Z_4Z_8Z_10)(I^⊗13+Z_2Z_3Z_5Z_9Z_11)
(I^⊗13+Z_3Z_6Z_8)(I^⊗13+Z_3Z_7Z_9)(I^⊗13+Z_4Z_10Z_12)(I^⊗13+Z_5Z_11Z_13)|0^⊗13⟩.
We next look for pairs of logical operators that commute with stabilizers and anti-commute pairwise. For this, we have to specify the boundaries. The filling of the plane using the fundamental domain of the equivalent genus-five surface forms periodically arranged branch cuts (edges EF and GH in Fig.<ref>), which are considered as the boundaries. Thus we define a path by connecting the data qubit vertex of one scatterer to the data qubit vertex of the corresponding copy with respect to the fundamental domain. The directed paths for the logical X operator are: X_6X_8X_10X_12, X_6X_8X_4X_12, X_7X_9X_11X_13, and X_7X_9X_5X_13. The directed paths for the logical Z operator are: Z_8Z_6Z_7Z_9, Z_8Z_6Z_2Z_9, Z_8Z_1Z_7Z_9, Z_8Z_1Z_2Z_9, and Z_10Z_12Z_13Z_11. From these paths, we found a pair of logical operators {X=X_6X_8X_4X_12, Z=Z_8Z_1Z_7Z_9}. The minimum weight of the error E=E_a^† E_b that violates the Knill-Laflamme conditions is 3, thereby yielding a [[13,1,3]] code.
To increase the distance of the code, we can stack the unit structure of the code (Fig. <ref>) vertically as shown in Fig.<ref>. The number of required data qubits is n=24 and the number of required ancillary qubits is m=23. The set of
stabilizers is P={X_1X_2X_3X_4X_7, X_1X_3X_5, X_2X_4X_6, X_7X_8X_10, X_7X_9X_11, X_7X_10X_11X_12X_13X_14X_15X_18, X_12X_14X_16, X_13X_15X_17, X_18X_19X_21, X_18X_20X_22, X_18X_21X_22X_23X_24, Z_3Z_5Z_7, Z_4Z_6Z_7, Z_1Z_5Z_7Z_8Z_12, Z_2Z_6Z_7Z_9Z_13, Z_8Z_10Z_12, Z_9Z_11Z_13, Z_14Z_16Z_18, Z_15Z_17Z_18, Z_12Z_16Z_18Z_19Z_23, Z_13Z_17Z_18Z_20Z_24, Z_19Z_21Z_23, Z_20Z_22Z_24}. The pair of logical operators is {X=X_8X_12X_16X_14, Z=Z_8Z_10Z_15Z_17}. The minimum weight that violates the Knill-Laflamme conditions for this code is 4. Hence it is a [[24,1,4]] code. Thus, the distance of the code can be increased by stacking fundamental domains on the plane.
§.§ Effect of noise
Any logical qubit should be robust against dephasing due to an external noise. Recently, it has been shown <cit.> that certain observables formed by code space population and logical operators in the code space help determine the dynamical behaviour of logical qubits. We incorporate a time-dependent external fluctuating magnetic field in z-direction, which acts on the qubits globally, thus leading to global dephasing. To estimate the effect, consider the logical |1⟩_L:
|1⟩_L= X|0⟩_L
Let an initial logical quantum state be written as
|ψ⟩_L=cosθ/2|0⟩_L+e^ιϕsinθ/2|1⟩_L
where θ and ϕ are real parameters (0≤θ≤π and 0≤ϕ≤ 2π). The evolution of |ψ⟩_L gives the logical Bloch sphere coordinates, X_L, Y_L and Z_L. Assuming the global dephasing process by a single fluctuating variable B(t) along the z-direction acting on all data qubits, the Hamiltonian representing the effect of noise may be written as H_G(t) =1/2B(t)∑_i=1^13σ_z_i. In case of local dephasing, the Hamiltonian reads as: H_L(t)=1/2∑_i=1^13B_i(t) σ_z_i. The randomly fluctuating variable B(t) obeys the Gaussian distribution P(B), which implies that <cit.>:
⟨exp (±ι∫_0^tB(t^')dt^')⟩ = exp[-1/2⟨(∫_0^tB(t^') dt^')^2⟩] = e^-γ t/2,
assuming the stationarity of the auto-correlation function of delta-correlated noise, with γ=⟨[B(0)]^2⟩.
Following <cit.>, we analyze the effect of noise on the N-qubit system by grouping the physical states by their magnetization, defined as the difference between the number of spins in the state |0⟩, denoted by n^', and the number remaining in state |1⟩, N-n^'. The magnetization is m^'=2n^'-N. The logical state |0⟩_L is written as |0⟩_L=∑_m^'∑_l=1^N_m^' b_l^m^' |b⟩_l^m^'. Dephasing noise changes the state |ψ⟩_L to another state |ψ^'⟩, where |ψ^'⟩=exp[-ι∫_0^t H_L, G(t^')dt^'] |ψ⟩_L. The density matrix corresponding to the logical qubit is ρ^'=∫|ψ^'⟩⟨ψ^'| P(B)dB. The Bloch coordinates ℛ≡{R_X, R_Y, R_Z} in the new state are obtained by evaluating the expectation values of the logical operators in the evolved state, given by ⟨ℛ⟩ = Tr[ρ^'ℒ̅], where ℒ̅≡{X̅, Y̅, Z̅} represents the logical Bloch vectors in the initial state |ψ⟩. For the single unit structure (Fig. <ref>), in the presence of global dephasing noise, the logical Bloch coordinates turn out to be
⟨R_X⟩ = 1/32 e^-(2γ t+ιϕ)(1+e^-γ t)^4(1+e^2ιϕ)sinθ,
⟨R_Y⟩ = ι/32 e^-(2γ t+ιϕ)(1+e^-γ t)^4(-1+e^2ιϕ)sinθ,
⟨R_Z⟩ = cosθ.
In the absence of noise, i.e., γ=0, the Bloch sphere coordinates in the new state |ψ'⟩ are ⟨R_X⟩ = sinθcosϕ, ⟨R_Y⟩ = sinθsinϕ, and ⟨R_Z⟩ = cosθ, the same as in the old state |ψ⟩_L. Even in the presence of noise, ⟨R_Z⟩ remains unaffected. Thus the code is significantly robust against dephasing noise.
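As a quick numerical sanity check (independent of any implementation used here), the expression for ⟨R_X⟩ above indeed reduces to the noise-free value sinθ cosϕ when γ t → 0 and is damped otherwise:

import numpy as np

def r_x(theta, phi, gamma, t):
    decay = np.exp(-2 * gamma * t) * (1 + np.exp(-gamma * t)) ** 4 / 32
    return (decay * np.exp(-1j * phi) * (1 + np.exp(2j * phi)) * np.sin(theta)).real

theta, phi = 0.7, 1.2
print(r_x(theta, phi, gamma=0.0, t=1.0), np.sin(theta) * np.cos(phi))  # identical (no dephasing)
print(r_x(theta, phi, gamma=0.5, t=2.0))                               # damped by global dephasing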
§ CONCLUDING REMARKS
The basic idea underlying surface codes for error detection and correction is to be able to arrange the data and ancillary qubits in a way that X and Z errors can be corrected by making stabilizer measurements through ancillae. For a scalable architecture, planar structures are desirable. This brings us to the question of tessellation of the plane. In Kitaev's construction, a two-dimensional Ising model is considered where the lattice shape can be anything; however, it should be noted that "anything" holds only under periodic boundary conditions, where the unit shapes could be a square, an equilateral triangle, etc. Here we take the essence of Kitaev's construction and use the correspondence between Lie and reflection groups, together with ideas from well-known billiards, to present a novel way to realize architectures of higher genus. The encoding rate (the number of logical qubits per physical qubit) surpasses the value for all surface codes hitherto known. We believe that these results pave the way to a new direction of research in the field of quantum error correction.
The codes presented here are not related to tessellations of hyperbolic surfaces. We have constructed fundamental domain using replicas of the billiard considered. We then stack the domains, thus taking care of all the symmetries of the system. It is at this point that we endow each vertex with a qubit or ancilla. This enables us to write the stabilizers and construct logical operators. This construction respects the commutation and anticommutation relations expected of a consistent and complete definition of a code.
The spectrum of the Hamiltonian formed by the generators is studied. The degeneracy of the ground state increases with the number of qubits. For instance, for the genus-two codes [[n, k, d]], the degeneracy of the ground state is 2^k. The code is not topological. However, the ground state of the codes has a high degeneracy, which is useful for encoding. The code distance increases with the size of the code. The main advantage, however, is that the codes have much higher encoding rates. For genus-two codes of large size, the encoding rate tends to one-half. For the genus-five codes, the code distance increases with size whereas the encoding rate does not. Future investigations along these lines would be useful.
In classical dynamical systems, tori as invariant surfaces are synonymous with integrability. Surfaces of higher genus correspond to non-integrability, but not chaos, even when the dynamics is nonlinear. Nonlinearity of the dynamics leads to the appearance of special points in the phase space, which have been shown to play an important role in the control of quantum jumps for error correction <cit.>. In quantum computing technology, almost all paradigms are related in an important way to aspects of nonlinearity, be it the nonlinearity of the Josephson junction, the creation of EPR photon pairs from a nonlinear crystal, and so on. Nonlinear resonances in coupled nonlinear quantum circuits with Josephson junctions have been shown to provide criteria for the protection of qubits <cit.>. Ideas from nonlinear science would expectedly contribute to the development of quantum information theory and technology.
Acknowledgements
Authors thank the Referee for her(his) critique drawn on our work. They also thank Rhine Samajdar, Princeton University, for several helpful and stimulating discussions.
Data Availability Statement: No data is associated with the manuscript.
99kvant Ed. S. Tabachnikov, Kvant Selecta: Algebra and Analysis, I and II (Universities Press (India) Limited, 2002).
weissman M. H. Weissman, An illustrated theory of numbers (American Mathematical Society, 2017).
aop2014 R. Samajdar and S. R. Jain, Ann. Phys. 351, 1 (2014).
aop2016 N. Manjunath, R. Samajdar, S. R. Jain, Ann. Phys. 372, 68 (2016).
rmp2017 S. R. Jain and R. Samajdar, Rev. Mod. Phys. 89, 045005 (2017).
nakahara M. Nakahara, Geometry, Topology, and Physics (Taylor and Francis, London, 2003).
coxeter H. S. M. Coxeter, Regular Polytopes (Dover, New York, 1973).
toffoli E. Fredkin and T. Toffoli (1982), International Journal of Theoretical Physics 21, 219 (1982).
krj K. Kumari, G. Rajpoot, and S. R. Jain, A genus-two surface code (arXiv:2211.12695 [quant-ph]).
weyl1926nachtrag Hermann Weyl, Mathematische Zeitschrift 24, 789 (1926).
cartan1927geometrie Élie Cartan, Annali di Matematica pura ed applicata 4, 209 (1927).
arnold V. I. Arnol'd, Mathematical methods of classical mechanics (Springer, Heidelberg, 1978).
jain1992 S. R. Jain and H. D. Parab, J. Phys. A 25, 6669 (1992).
Kitaev Alexei Kitaev, Ann. Phys. 303, 2 (2003).
eckhardt1984analytically Bruno Eckhardt, Joseph Ford and Franco Vivaldi, Physica D: Nonlinear Phenomena 13, 339–356 (1984).
zemlyakov A. Zemlyakov and A. B. Katok, Math. Notes 18, 760 (1976).
richens1981pseudointegrable P. J. Richens and M. V. Berry, Physica D: Nonlinear Phenomena 2, 495–512 (1981).
Gottesman Daniel Gottesman, Stabilizer codes and quantum error correction, Ph. D. thesis (California Institute of Technology, 1997).
aa V. I. Arnold and A. Avez, Ergodic problems of classical mechanics (W. A. Benjamin, Inc., Amsterdam, 1970).
bob J. R. Dorfman, An introduction to chaos in nonequilibrium statistical mechanics (Cambridge Univ. Press, Cambridge, 1999).
manan M. Jain, Student J. Phys. 5, 55 (2013).
mcj S. Moudgalya, S. Chandra, and S. R. Jain, Ann. Phys. 361, 82 (2015).
pal Amit Kumar Pal, Philipp Schindler, Alexander Erhard, Ángel Rivas, Miguel A. Martin-Delgado, Rainer Blatt, Thomas Monz and Markus P. Müller, Quantum 6, 632 (2022).
krjj K. Kumari, G. Rajpoot, S. Joshi, and S. R. Jain, Ann. Phys. 450, 169222 (2023).
ssj R. K. Saini, R. Sehgal, and S. R. Jain, Eur. Phys. J. Plus 137, 356 (2022).
We start with the fundamental domain of the genus-five surface, obtained by reflecting a square with a square-shaped scatterer inside it four times, and place the data and the ancilla qubits alternately on the vertices of the external squares as well as on the vertices of the square-shaped scatterers. The data qubits are represented as D (in circles) and the ancilla qubits are represented as A (in squares). The bold (dashed) lines represent the control-X (Z) operations from the ancilla qubits to the data qubits. The set of stabilizers is P={X_1X_2X_3X_6X_7, X_3X_4X_5X_12X_13, X_1X_6X_8, X_2X_7X_9, X_4X_10X_12, X_5X_11X_13, Z_1Z_3Z_4Z_8Z_10, Z_2Z_3Z_5Z_9Z_11, Z_3Z_6Z_8, Z_3Z_7Z_9, Z_3Z_10Z_12, Z_3Z_11Z_13}. The logical state |0⟩_L is:
|0⟩_L= 1/𝒩∏_P_i∈⟨ P⟩(I^⊗ n+P_i)|0^⊗ n⟩
= 1/𝒩(I^⊗ 13+X_1X_2X_3X_6X_7)(I^⊗ 13+X_3X_4X_5X_12X_13)(I^⊗ 13+X_1X_6X_8)(I^⊗ 13+X_2X_7X_9)
(I^⊗ 13+X_4X_10X_12)(I^⊗ 13+X_5X_11X_13)(I^⊗ 13+Z_1Z_3Z_4Z_8Z_10)(I^⊗ 13+Z_2Z_3Z_5Z_9Z_11)
(I^⊗ 13+Z_3Z_6Z_8)(I^⊗ 13+Z_3Z_7Z_9)(I^⊗ 13+Z_3Z_10Z_12)(I^⊗ 13+Z_3Z_11Z_13)|0^⊗ 13⟩ .
We next look for pairs of logical operators that commute with stabilizers and anti-commute pairwise. For this, we have to specify the boundaries. The filling of the plane using the fundamental domain of the genus-five surface forms periodically arranged branch cuts (edges EF and GH in Fig. <ref>), which are considered as the boundaries. Thus we define a path by connecting the data qubit vertex of one square scatterer to the data qubit vertex of the corresponding copy with respect to the fundamental domain. The directed paths for the logical Z operator are: Z_8Z_6Z_7Z_9, Z_8Z_6Z_2Z_9, Z_8Z_1Z_7Z_9, Z_8Z_1Z_2Z_9, Z_10Z_12Z_13Z_11, Z_10Z_12Z_5Z_11, Z_10Z_4Z_13Z_11, and Z_10Z_4Z_5Z_11; all of these operators commute with all the stabilizers. The directed paths for the logical X operator are: X_6X_8X_10X_12, X_6X_3X_12, X_6X_3X_11, X_6X_3X_13, X_6X_3X_7, X_6X_3X_9X_11X_13, X_6X_8X_3X_11X_13, X_6X_8X_3X_9X_7, X_7X_9X_11X_13, X_7X_3X_13, X_7X_3X_11, X_7X_3X_10, X_7X_3X_8X_10X_12, and X_7X_9X_3X_10X_12. Among these many operators, only two, X_6X_8X_10X_12 and X_7X_9X_11X_13, commute with all the stabilizers. Thus we found a pair of logical operators {X=X_6X_8X_10X_12, Z=Z_8Z_1Z_7Z_9}. The minimum weight of the error E=E_a^†E_b that violates the Knill-Laflamme conditions came out to be 3. So it is a [[13,1,3]] code.
Let S be the set of generators of the stabilizer group. Then, for an n-qubit code encoding k logical qubits, we can define an (n-k)-bit binary number, or error syndrome function, f_M, for the code. Let f_M:𝒢→ℤ_2, such that
f_M(E) = 0 if [M,E] = 0, and f_M(E) = 1 if {M,E} = 0,
where f_M(E)=f_M_1(E)f_M_2(E)…f_M_n-k(E). If all the values of f_M are different, the code is nondegenerate.
For the single unit of the double-toric [[6,2,2]] code, the stabilizer generators are M={X_1 X_2 X_3 X_4, X_3 X_4 X_5 X_6, Z_1 Z_3 Z_5, Z_2 Z_4 Z_6}. The function f_M(E) for the error set E={X_1, X_2, …, X_6, Z_1, Z_2, …, Z_6} is shown in Table <ref>. Here, f_M is a four-bit binary function, which is not different for every error in E, thus making it a degenerate code.
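The syndrome table referred to above can be reproduced with a short Python sketch (illustrative only): each single-qubit X or Z error is mapped to the 4-bit pattern of stabilizers with which it anti-commutes, and the repeated patterns confirm the degeneracy.

stabilizers = [("X", {1, 2, 3, 4}), ("X", {3, 4, 5, 6}), ("Z", {1, 3, 5}), ("Z", {2, 4, 6})]
errors = [("X", q) for q in range(1, 7)] + [("Z", q) for q in range(1, 7)]

def syndrome(error):
    etype, q = error
    # A single-qubit error anti-commutes with a stabilizer of the other Pauli type acting on that qubit.
    return tuple(int(etype != stype and q in sites) for stype, sites in stabilizers)

table = {e: syndrome(e) for e in errors}
print(table)
print("degenerate:", len(set(table.values())) < len(table))   # True: e.g., X_1 and X_3 share a syndrome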
By contrast, the [[13,1,3]] surface code is a nondegenerate code, where f_M is a twelve-bit binary number, which is different for each error in the error set E. |
http://arxiv.org/abs/2307.05923v1 | 20230712054139 | Pairs-trading System using Quantum-inspired Combinatorial Optimization Accelerator for Optimal Path Search in Market Graphs | [
"Kosuke Tatsumura",
"Ryo Hidaka",
"Jun Nakayama",
"Tomoya Kashimata",
"Masaya Yamasaki"
] | cs.ET | [
"cs.ET"
] |
Pairs-trading System using Quantum-inspired Combinatorial Optimization Accelerator for Optimal Path Search in Market Graphs
Kosuke Tatsumura^∗, Ryo Hidaka, Jun Nakayama,
Tomoya Kashimata, and Masaya Yamasaki
Corporate Research and Development Center, Toshiba Corporation, Japan
^∗Corresponding author: Kosuke Tatsumura (e-mail: [email protected])
======================================================================================================================================================================================================================================================
Pairs-trading is a trading strategy that involves matching a long position with a short position in two stocks, aiming at market-neutral profits. While a typical pairs-trading system monitors the prices of two statistically correlated stocks for detecting a temporary divergence, monitoring and analyzing the prices of more stocks would potentially lead to finding more trading opportunities. Here we report a stock pairs-trading system that finds trading opportunities for any two stocks in an N-stock universe using a combinatorial optimization accelerator based on a quantum-inspired algorithm called simulated bifurcation. The trading opportunities are detected through solving an optimal path search problem in an N-node directed graph with edge weights corresponding to the products of instantaneous price differences and statistical correlation factors between two stocks. The accelerator is a kind of Ising machine and operates consecutively to find multiple opportunities in a market situation while avoiding duplicate detections by a tabu search technique. It has been demonstrated in the Tokyo Stock Exchange that the FPGA (field-programmable gate array)-based trading system has a sufficiently low latency (33 μs for N=15 or 210 pairs) to execute the pairs-trading strategy based on optimal path search in market graphs.
§ INTRODUCTION
A financial market with high efficiency and high liquidity is where investors can execute high-volume trading at fair values, at any time without significantly impacting the market prices. The concept of arbitrage is defined in Ref. <cit.> as the simultaneous purchase and sale of the same, or essentially similar, security in two different markets for advantageously different prices. Arbitrage opportunities can arise as a result of demand shocks and arbitragers bring temporarily deviated prices (hereafter, mispricing) to fundamental (fair) values. Arbitrage enforces the law of one price and thereby improves the efficiency of financial markets <cit.>. Recent studies <cit.> have also shown that arbitrage provides liquidity.
The pairs-trading strategy is categorized as statistical arbitrage and profits from temporary mispricing of statistically correlated stocks <cit.>. The strategy monitors the performance of two historically correlated stocks to detect the moment when one stock moves relatively up while the other moves relatively down (possibly temporarily), and at that moment simultaneously takes a short (selling) position in the outperforming stock and a long (buying) position in the underperforming one, with each position having almost the same transaction amount, betting that the spread between the two would eventually converge. The strategy is market-neutral, i.e., adaptable to various market conditions (uptrend, downtrend, or sideways) by keeping the net exposure low.
Various variants of pairs-trading that differ in how to identify comoving stocks and how to decide the timing of position opening have been proposed and summarized in Ref. <cit.>, involving distance approach, cointegration approach, time-series approach, stochastic control approach and other approaches (including machine learning approaches like recent one using long short-term memory networks <cit.>). Those, not necessarily mutually exclusive, can contribute to improving the market efficiency and liquidity by detecting the different trading opportunities (occurrences of mispricing).
To analyze the collective structure of a stock market, market graphs have been proposed and utilized <cit.>, where the nodes correspond to the stocks and each edge (or edge weight) between two nodes represents the relationship of the two stocks defined based on correlation factors <cit.> or more generalized risk-measures <cit.>. Graph analysis methods such as partitioning, clustering, coloring, and path search may give insights into the collective structures/behaviors of the stocks. Many of those methods are formulated as combinatorial (or discrete) optimization problems and belong to the nondeterministic polynomial time (NP)-hard class in computational complexity theory <cit.>.
Ising machines are hardware devices that solve the ground (energy minimum)-state search problems of Ising spin models and can be of use for quickly obtaining the optimal (exact) or near-optimal solutions of NP-hard combinatorial optimization problems <cit.>. The Ising problem belongs to the NP-hard class <cit.>; a variety of notoriously hard problems including many graph analysis methods can be represented in the form of the Ising problem <cit.>.
The Ising machine can be applied to automated trading systems <cit.>, including those executing pairs-trading, and may enable the detection of trading opportunities based on computationally hard analyses of market graphs within the lifetime of the opportunities, which is determined by the activities of other trading entities. Automated trading systems are becoming increasingly important in financial markets <cit.>, and trading strategies enabled by emerging computing methodologies could complement the functionality of the market or contribute to mitigating herding behaviors in financial markets <cit.>. Trading systems utilizing Ising machines, as in <cit.>, have, however, not been studied extensively. Furthermore, the execution capability of such a trading system in terms of response latency needs to be validated in an actual market.
Here we propose a pairs-trading strategy based on an optimal path analysis in market graphs and show through real-time trading that the strategy is executable with an automated pairs-trading system using an embedded Ising machine for the optimal path search.
The market graph for N tradable stocks (an N-stock universe) is an N-node fully-connected directed graph with edge weights corresponding to the products of instantaneous price differences and statistical correlation factors between two stocks. The trading opportunities (temporary mispricing of statistically correlated pairs) are detected by an optimal path analysis (a sort of collective evaluation) of the N-node market graph. As the embeddable Ising machine, we use a combinatorial optimization accelerator based on a quantum-inspired algorithm called simulated bifurcation (SB) <cit.>. The SB algorithm, derived by classicizing a quantum-mechanical Hamiltonian describing a quantum adiabatic optimization method <cit.>, is highly parallelizable and thus can be accelerated with parallel processors such as FPGAs (field-programmable gate arrays) <cit.>. An SB machine (SBM) customized for the strategy operates consecutively to find multiple trading opportunities in an instantaneous market situation while avoiding duplicate detections by a tabu search technique. To examine the execution capability of the system, we compare the real-time transaction records of the system in the Tokyo Stock Exchange (TSE) with a backcast simulation of the strategy that assumes the orders issued are always filled.
The rest of the paper is organized as follows. In Sec. <ref> (trading strategy), we describe the proposed strategy and formulate the optimal path search in the form of quadratic unconstrained binary optimization (QUBO) mathematically equivalent to the Ising problem. Sec. <ref> (system) describes the architecture of the system and its implementation details. Sec. <ref> (experiment) describes the transaction records in the TSE and the execution capability of the system.
§ TRADING STRATEGY
§.§ Path search-based pairs-trading
The proposed strategy determines open pairs (a pair of long and short positions in two stocks to be taken) by an optimal path analysis of an N-node market graph representing a relative relationship in the prices of N stocks. The evaluation of a pair is based on not only the direct path but also any bypass paths. Multiple pairs can be chosen in an instantaneous market situation.
The market graph for an N-stock universe (Fig. <ref>a) is a directed graph in which an edge (i, j) corresponds to a trading pair that takes a short position of ith stock and a long position of jth stock and is distinguished from the edge (j, i). The weight w_i,j of an edge (i, j) is defined by
w_i,j=s_i,j×(ask_j-bid_i)
where s_i,j, ask_j, and bid_i are, respectively, the similarity factor between the ith and jth stocks, the best ask for the jth stock, and the best bid for the ith stock. ask and bid are normalized by the base price on the day. s_i,j is based on the dynamic time warping (DTW) distance <cit.> between the intraday price sequences of the ith and jth stocks, averaged over the last five business days, and is normalized to lie in [0,1]. When the buying price of a long position (ask_j) is relatively lower than the selling price of a short position (bid_i) in two stocks with a large similarity (s_i,j), w_i,j is negative and its absolute value is large.
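As an illustration, this edge-weight construction can be sketched in a few lines of Python; the array names (dtw_avg, ask, bid) and the min-max normalization of the DTW distances are assumptions of this example, not details specified by the system.

```python
import numpy as np

def edge_weights(dtw_avg, ask, bid):
    """Build the N x N matrix of edge weights w[i, j] = s[i, j] * (ask[j] - bid[i]).

    dtw_avg : (N, N) DTW distances averaged over the last five business days
    ask, bid: length-N best ask / best bid, already normalized by the base prices
    """
    # Map DTW distances to similarities in [0, 1]; a smaller distance gives a larger
    # similarity. (The exact normalization is not specified; min-max is assumed here.)
    d = dtw_avg
    s = 1.0 - (d - d.min()) / (d.max() - d.min() + 1e-12)
    # w[i, j]: sell stock i at bid[i], buy stock j at ask[j]
    w = s * (ask[np.newaxis, :] - bid[:, np.newaxis])
    np.fill_diagonal(w, 0.0)
    return w
```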
In the market graph, two nodes connected by the minimum-weight one-way directed path are considered to correspond to the best trading opportunity. A pair of nodes can be selected based on a bypass path rather than the direct path. In the case of Fig. <ref>, the pair (a, b) is evaluated for both the direct path (a→ b) and the bypass path (a → c → b). The bypass path corresponds to concurrently taking the pair (a, c) and pair (c, b) positions, leaving the pair (a, b) position as a result of the cancellation of buying and selling the stock c (the direct and bypass paths correspond to the same open pair). If not considering the similarity factors, the sum of w_a,c and w_c,b (bypass) is always higher than w_a,b (direct) by the bid-ask spread of the stock c (transit nodes on the bypass) (see Fig. <ref>b). However, considering the similarity factors, the sum of w_a,c and w_c,b can be lower than w_a,b. In this case, the evaluation of pair (a, b) is represented by the sum of the weights on the bypass path. This bypass evaluation (or collective evaluation) partially complements the incompleteness of the representation of similarity coming from characterizing time series data as a scalar value and prevents us from missing the trading opportunity. The evaluation value (weight sum) of a pair selected by the optimal path analysis is compared with a threshold for determining the opening of the pair.
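To make the collective (bypass) evaluation concrete, the following brute-force sketch compares, for a candidate pair (a, b), the direct weight with the best single-transit bypass, using the weight matrix w from the previous sketch. This is only an illustrative reference check for small N; the system itself performs this search via the QUBO formulation and the Ising machine described below.

```python
def evaluate_pair(w, a, b):
    """Return the best (lowest) evaluation of pair (a, b): direct or via one transit node."""
    n = w.shape[0]
    best = w[a, b]                       # direct path a -> b
    for c in range(n):
        if c in (a, b):
            continue
        bypass = w[a, c] + w[c, b]       # bypass path a -> c -> b
        best = min(best, bypass)
    return best

# A pair is opened when its evaluation falls below a (negative) threshold:
# open_pair = evaluate_pair(w, a, b) < threshold
```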
The number of lots per order for a stock (L_i) is chosen so that the transaction amount (A_trans) is common to all tradable stocks, by rounding based on the minimum tradable shares per order (one lot) of the stock (S_i^min) and the base price on the day (p_i^b); L_i=round(A_trans/S_i^min p_i^b). The number of intraday positions is controlled to be within a maximum number (P_max), and all positions are closed (unwound) before the close of the day. Duplicate pair positions are not allowed: when the pair (a, b) has been ordered (opened), another order of the same pair (a, b) is forbidden, but other pairs including (a, c) and (c, b) remain orderable, and the edge (a, b) remains passable for bypass evaluation.
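A minimal sketch of this lot-sizing rule, with variable names chosen for the example:

```python
def lots_per_order(a_trans, min_shares, base_price):
    """Number of lots so that every order has roughly the same transaction amount."""
    return max(1, round(a_trans / (min_shares * base_price)))

# Example: A_trans = 1.5 million JPY, 100-share lots, base price 2,500 JPY -> 6 lots
print(lots_per_order(1.5e6, 100, 2500))
```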
Consider a subgroup of stocks (for example, a, b, and c) that are correlated with one another. If the price of one stock in the subgroup (say, a) deviates largely (drops in this example) while the prices of the remaining ones do not, multiple pairs related to the deviating stock [pairs (b, a) and (c, a) in the example] are highly evaluated at that moment, and not only the best pair [pair (b, a)] but also the second-best pair [pair (c, a)] can be worth betting on (can have an evaluation value beyond the threshold). According to our backcast simulation (see Sec. <ref>), a temporary price deviation of one stock in a cross-correlated subgroup provides good trading opportunities. To find multiple opportunities in a given market situation, the optimal path analysis is repeated; a tabu search technique is then needed to avoid repeatedly finding solutions that have already been found.
§.§ Formulation
The problem of finding the pair of nodes connected by the minimum-weight directed path (direct or bypass) among all node pairs in the N-node market graph is formulated in the form of the QUBO. A tabu search technique using a tabu list (T_i,j) is implemented in the formulation.
After adding a dummy node (i=0) with edge weights of zero (w_k,0=w_0,k=0, ^∀ k>0) in the market graph (Fig. <ref>), we seek a cyclic (directed) path giving the minimum weight. Let the node next (/previous) to the dummy node in the cyclic path correspond to the short (/long) positions of a pair trade. As shown in Fig. <ref>, a pair (a, b) is represented by both the cyclic path (0→ a→ b→ 0) and the cyclic path (0→ a→ c→ b→ 0) with the different weight sums. The former (/latter) representation corresponds to the direct (/bypass) evaluation of the pair (a, b). Clockwise and anticlockwise cycles (ex. 0→ a→ b→ 0 and 0→ b→ a→ 0) are distinguished.
Define a decision (binary) variable b_i,j as taking value 1 if the corresponding edge (i,j) is in the chosen cycle and 0 otherwise. The cost function to be minimized is defined by
H_cost=∑_i,jw_i,jb_i,j.
The constraints for cyclic directed paths and the tabu search are represented as a penalty function expressed by
H_penalty= ∑_i∑_j≠ j^'b_i,jb_i,j^' + ∑_j∑_i≠ i^'b_i,jb_i^',j + ∑_i(∑_jb_i,j-∑_jb_j,i)^2 + ∑_i,jb_i,jb_j,i + ∑_i,jT_i,jb_0,jb_i,0.
The first (/second) term forces the outflow (/inflow) of each node to be 1 or less. The third term forces the inflows and outflows of each node to be equal. The fourth term forbids traversing the same edge twice in different directions. The fifth term forbids choosing the pairs in the tabu list T_i,j. Constraint violations increase the penalty, with H_penalty=0 if there are no violations. Note that an entry T_i,j in the tabu list induces a penalty for the state (b_0,j=b_i,0=1) but not for the states (b_0,j=1 and b_i,0=0), (b_0,j=0 and b_i,0=1), and (b_i,j=1).
The total cost function (H_QUBO) is a linear combination of H_cost and H_penalty,
H_QUBO=∑_i,j,k,lQ_i,j,k,lb_i,jb_k,l=m_cH_cost+m_pH_penalty,
where m_c and m_p are positive coefficients. The Ising machine searches for the bit configuration {b_i,j} that minimizes the quadratic cost function H_QUBO.
The tabu search technique was introduced to enhance the search efficiency across the multiple executions of the Ising machine for finding multiple opportunities in a market situation under the constraint of forbidding duplicate positions. The procedure and timing of registering and deregistering entries in the tabu list are described in Section <ref>. In the QUBO formulation, the number of decision variables for an N-stock universe is N(N+1), and the size of the solution space (all possible assignments of the decision variables) is 2^N(N+1), including constraint-violating solutions. We use a heuristic method (an Ising machine) to solve the QUBO problems. Hence, verification of the solutions is necessary and is implemented in the system as a function separate from the Ising machine. In addition, the penalty function, Eq. (<ref>), gives no penalty to the two cases (a cycle without the dummy node and split cycles) shown in Fig. <ref>. Those solutions are excluded by the verification. Note that those solutions are not advantageous in the evaluation of the cost function, Eq. (<ref>).
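For reference, the cost and penalty functions above can be assembled into a QUBO matrix in software as sketched below. This is a plain dense-matrix construction for clarity; the penalty weight m_p, the flattening of the edge variables b_i,j into a vector, and the symmetrization of Q are choices of this example rather than of the FPGA design.

```python
import itertools
import numpy as np

def build_qubo(w_stocks, tabu, m_c=1.0, m_p=2.0):
    """Assemble H_QUBO = m_c*H_cost + m_p*H_penalty as a dense matrix Q over the binary
    edge variables b[i, j] (i != j) of the (N+1)-node graph; node 0 is the dummy node."""
    N = w_stocks.shape[0]
    n = N + 1
    w = np.zeros((n, n)); w[1:, 1:] = w_stocks          # dummy node has zero-weight edges
    T = np.zeros((n, n), dtype=bool); T[1:, 1:] = tabu  # tabu list, padded likewise

    edges = [(i, j) for i in range(n) for j in range(n) if i != j]
    idx = {e: k for k, e in enumerate(edges)}
    Q = np.zeros((len(edges), len(edges)))

    def add(e1, e2, val):                # symmetrized quadratic coefficient
        k1, k2 = idx[e1], idx[e2]
        if k1 == k2:
            Q[k1, k1] += val
        else:
            Q[k1, k2] += val / 2
            Q[k2, k1] += val / 2

    for e in edges:                      # H_cost: linear terms on the diagonal (b^2 = b)
        add(e, e, m_c * w[e])

    for i in range(n):                   # outflow / inflow of each node at most 1
        outs = [(i, j) for j in range(n) if j != i]
        ins = [(j, i) for j in range(n) if j != i]
        for e1, e2 in itertools.combinations(outs, 2):
            add(e1, e2, 2 * m_p)         # the double sum counts each unordered pair twice
        for e1, e2 in itertools.combinations(ins, 2):
            add(e1, e2, 2 * m_p)

    for i in range(n):                   # flow conservation: (sum_j b[i,j] - sum_j b[j,i])^2
        signed = [((i, j), +1) for j in range(n) if j != i] + \
                 [((j, i), -1) for j in range(n) if j != i]
        for (e1, s1), (e2, s2) in itertools.product(signed, signed):
            if e1 == e2:
                add(e1, e1, m_p)
            elif idx[e1] < idx[e2]:
                add(e1, e2, 2 * m_p * s1 * s2)

    for i in range(n):                   # forbid using an edge in both directions
        for j in range(i + 1, n):
            add((i, j), (j, i), m_p)

    for i in range(1, n):                # tabu term: penalize re-opening pairs already taken
        for j in range(1, n):
            if i != j and T[i, j]:
                add((0, j), (i, 0), m_p)
    return Q, edges
```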
§ SYSTEM
To accelerate the decision to open positions and the issuance of orders after receiving a market feed (informing of a change in the ask or bid of a stock), the submodules related to position opening are hardwired in an FPGA (instantiated as custom circuits) and inlined as a task pipeline from a receiver (RX) to a transmitter (TX), which operates without the intervention of a software processor (CPU). The SBM module involved in the pipeline is an inline-type accelerator (not a look-aside type), featuring a consecutive execution operation and a tabu search function. The management of the positions, including the decision to close positions, is carried out by the CPU (software processing). Overall, the system is a hybrid FPGA/CPU system.
§.§ Architecture
Figure <ref> (a) shows the block diagram of the hybrid FPGA/CPU system. The system components in the FPGA part are, in the order of data flow, a receiver (RX), a price buffer (P) that accommodates the price list of ask and bid for the N tradable stocks, the SBM module, a judgment module with a memory unit for the open list (O), a message generator, and a transmitter (TX). The SBM module includes two memory units for a market graph (M) and a tabu list (T), a preprocessing unit (pre) for preparing the market graph, and a core processing unit (core) for the discrete optimization. Those components are implemented as independent (not synchronized) circuit modules, which are connected with directed streaming data channels with FIFO (first-in-first-out) buffers. The CPU part controls the whole system and manages the positions using state machines for opened positions (see APPENDIX A). The market information (including the changes in ask or bid) is received by both the FPGA and CPU parts. The order (buying/selling) packets are issued only from the FPGA part. The execution-result packets informing the results (fill/lapse) of the orders are received by the CPU part. The FPGA and CPU parts are connected with the peripheral component interconnect-express (PCIe) bus.
Figure <ref> (b) shows the timing chart for the operation of the SBM module when representative events (Events 1 to 8) happen. When no event happens for a certain time, the SBM module is idling (polling the FIFO buffers from the price buffer and judgment modules). When a market feed arrives (Event 1), the SBM module immediately starts the preprocessing. The preprocessing unit receives the 2N data of ask and bid and then generates the N(N-1) data of weight w_i,j (market graph, M), referring to a memory unit for the similarity s_i,j, which is updated once a day before the trading hours. Afterward, the SBM module starts the main (core) processing (the optimal path analysis). The SBM module then verifies the solution (the path found) in terms of the constraint violations (including the cases of Fig. <ref>) and compares the evaluation of the path found with the threshold. If the verification and evaluation pass, the SBM module registers the pair in the tabu list T and concurrently reports it as an open candidate to the judgment module (Event 2). The judgment module determines the open positions by finally checking them in terms of P_max (the maximum number of intraday positions) and other control signals, then registers them in the open list O and issues order packets via the message generator (Event 2).
Here, the judgment module registers an open pair position in the open list O when the opening is decided (before the issuance of orders) and deregisters it when the closing of the pair position is confirmed by the message from the CPU part. When the number of pair positions decreases, the judgment module sends the updated open list O to the SBM module, which forces the SBM module to refresh the tabu list T by copying the open list O to avoid duplicate positions.
At the timing of Event 2, the SBM module starts the main processing again (the consecutive execution operation) without refreshing the tabu list (already up-to-date) or preprocessing (no new market feed has arrived), resulting in another order at the timing of Event 3 (the SBM module could find another tradable path efficiently owing to the tabu list). When the SBM does not output an effective solution (Event 4), the tabu list T and the open list O are not updated. Note that a pair whose direct (or bypass) path yields an ineffective solution might still satisfy the threshold when evaluated on a bypass (or direct) path; for this reason, in this case (Event 4) the pair is not registered in the tabu list. When the SBM outputs an effective solution but it is rejected by the judgment module [for example, due to excess positions (>P_max)] (Event 5), the tabu list T is updated but the open list O is not. When a new market feed (Event 6) (or close-confirmation information, Event 7) arrives, the market graph M (or the tabu list T) is updated by the preprocessor (or by copying the open list) at the beginning of the next execution of the SBM module (Event 8).
As seen in Event 5, the SBM module decides on registration in the tabu list without considering the decision of the judgment module. This design reduces the latency because the feedback latency from the judgment module is not incorporated. Note that registration in the tabu list in the case of Event 5 may seem undesirable (a trading opportunity might be missed), but such over-registration does not matter in practice because the tabu list is refreshed whenever the number of positions decreases (Event 8).
§.§ Customized SBM core circuit
The core processing unit (core) is architecturally similar to the basic SBM circuit design <cit.> but partially modified for the specific QUBO problem described in Sec. <ref>. The weight w_i,j in Eq. (<ref>) and tabu list T_i,j in Eq. (<ref>) are stored in separate memory units (the M memory and T memory in Fig. <ref>), which are directly accessed by the SBM computation units. Based on the specific pattern of the coupling matrix Q, inefficient parts (the products with zero) in the pairwise interaction computation in the SB algorithm are omitted.
In the consecutive execution operation, the SBM module repeats the main processing (simulating the time-evolution of a coupled oscillator network) with different initial states generated by an internal random number generator (RNG), Xorshift RNG <cit.>. This contributes to efficiently finding another good solution even when the market graph M and the tabu list T are not updated (Event 4). The latency of the RNG is hidden by overlapping the operations of the SBM core and the RNG; the RNG generates an initial state for the next execution of the SBM core while the SBM core is processing.
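For readers who want to experiment with the algorithm, a simplified software sketch of ballistic SB applied to a QUBO is given below (the QUBO is mapped to an Ising problem via b = (s+1)/2). It is only an algorithmic illustration with heuristic parameter choices; it is not the pipelined, fixed-latency FPGA circuit described above.

```python
import numpy as np

def ballistic_sb(Q, n_steps=50, dt=0.65, a0=1.0, seed=0):
    """Minimize b^T Q b over b in {0,1}^m with a simplified ballistic SB sweep."""
    rng = np.random.default_rng(seed)          # plays the role of the Xorshift RNG
    m = Q.shape[0]
    Qs = (Q + Q.T) / 2                         # symmetrize
    J = -Qs / 2.0                              # Ising couplings from b = (s + 1) / 2
    h = -Qs.sum(axis=1) / 2.0                  # Ising local fields
    c0 = 0.5 / (np.sqrt(m) * (np.abs(J).sum(axis=1).mean() + 1e-12))  # heuristic scale

    x = 0.01 * (rng.random(m) - 0.5)           # oscillator positions
    y = 0.01 * (rng.random(m) - 0.5)           # oscillator momenta
    for k in range(n_steps):
        a = a0 * k / n_steps                   # pumping amplitude ramp
        y += (-(a0 - a) * x + c0 * (J @ x + h)) * dt
        x += a0 * y * dt
        hit = np.abs(x) > 1.0                  # inelastic walls at |x| = 1
        x[hit] = np.sign(x[hit])
        y[hit] = 0.0
    b = (np.sign(x) + 1) / 2                   # back to binary variables
    return b.astype(int), float(b @ Q @ b)
```

In practice, such a routine would be rerun from many random initial states (the consecutive execution operation), and each returned bit string would be verified against the cyclic-path constraints before being acted upon.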
§.§ Implementation
We implemented the system described in Sec. <ref> with a CPU server with a network interface card (NIC) and an FPGA board having another network interface (see APPENDIX B for details).
Figure <ref> (a) shows the architecture and implementation results of the SBM module for 15-stock universes [N=15 stocks, N(N-1)=210 pairs]. The numbers of nodes and edges (directed) in the market graphs supported are, respectively, 16 and 240, including the dummy node explained in Sec. <ref>. Among the three variants of simulated bifurcation (adiabatic, ballistic, and discrete SB) <cit.>, ballistic SB is adopted in this work, with the SB parameters of N_step=50 and dt=0.65. The machine size (the number of spins) is 256 spins with a specific spin-spin connectivity for the QUBO problem described in Sec. <ref>, and the computation precision is 32-bit floating point. Figure <ref> (b) shows the result of the placement of system modules in the FPGA. The SBM module (core and pre) is dominant, and the circuit resources used are listed in Fig. <ref> (a). The system clock frequency determined as a result of circuit synthesis, placement, and routing is 233 MHz. The clock cycles of the SB main processing (core) and preprocessing (pre) are 6,900 cycles per run (138 per SB step) and 216 cycles, respectively. The computation time (the module latency) per run (t_pre+t_core) is 30.6 μs, of which the SBM core processing is dominant (t_core=29.6 μs). The system-wide latency from the market feed arrival to the order packet issuance, depicted in Fig. <ref>(b) as a red arrow, is 33 μs (including the latencies of the RX, price buffer, judgment, SBM, message generator, and TX modules).
§ EXPERIMENT
The trading system described in Sec. <ref> was installed at the JPX Co-location area of the TSE and operated through real-time trading to examine whether the strategy based on consecutive optimal path searches in the N-node market graph described in Sec. <ref> is executable. The trading results are compared with a backcast simulation of the strategy that assumes the orders issued are always filled.
The proposed strategy determines the opening of positions based on an instantaneous market situation (a price list of ask and bid for the N-stock universe). Because of the latency of a system that executes the strategy and the activities of other trading entities, the orders issued are not necessarily filled at the ask/bid prices used for the decision-making. We developed a simulator that processes the historical market feeds provided by the TSE and emulates the internal state of the trading system. The simulator assumes that the orders issued are always filled at the intended prices.
Figures <ref> (a) and (b) show the cumulative values of the amount of transactions per day and the profit and loss (including ask-bid spread costs and commission) per day for real-time trading (red line) and the backcast simulation (black line) with fixed strategic parameters of N=15 (210 pairs), P_max=16, and A_trans=1.5 million Japanese yen (JPY). The 15 stocks were selected from the bank/insurance sectors for their high liquidity. The simulation data cover Aug. 1, 2017, to Aug. 31, 2022. The real trade data cover Mar. 1, 2022, to Aug. 31, 2022, and are aligned with the simulation on the first day.
The annualized return and risk over the simulation period (approximately 5 years) are, respectively, 7.5 % and 9.5 % for an investment of 24 million JPY (A_trans× P_max). The Sharpe ratio of the strategy is 0.79, where the Sharpe ratio <cit.> is, in this work, the ratio of the mean to the standard deviation of the return (the profit and loss per period for an investment) from a strategy, as in <cit.>. The proposed strategy can be profitable (a positive annualized return) over the long term (approximately 5 years); in particular, it showed a high annualized return of 18.5 % for the period from Aug. 1, 2017, to Feb. 28, 2020, before the COVID-19 pandemic.
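The annualized figures quoted above can be reproduced from a daily profit-and-loss series as in the following sketch; the 245-trading-days-per-year convention and the variable names are assumptions of this example.

```python
import numpy as np

def annualized_stats(daily_pnl, investment, days_per_year=245):
    """Annualized return, risk, and Sharpe ratio from a daily P&L series (JPY)."""
    daily_ret = np.asarray(daily_pnl) / investment
    ann_return = daily_ret.mean() * days_per_year
    ann_risk = daily_ret.std(ddof=1) * np.sqrt(days_per_year)
    return ann_return, ann_risk, ann_return / ann_risk

# Example: ann_return = 0.075 and ann_risk = 0.095 give a Sharpe ratio of about 0.79.
```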
The cumulative value of the amount of transactions by the system (3,817,201,458 JPY) over the experiment (750 hours of real-time trading) agrees well (+2.6 %) with the simulation value (3,719,389,258 JPY). The fill rate at the intended prices was 93.4 %, and the remainder included fills at less-favorable prices and lapses. Most of the lapses occurred just after the opening of the morning sessions. In this experiment, when the order for one of the paired stocks lapses, the position for the other (if that order is filled) is also closed immediately for experimental simplicity (see APPENDIX A), allowing the system to execute more transactions under the constraint of the maximum number of positions (P_max). This is the reason for the increased transaction amount observed in the experiment. Based on the good agreement in the cumulative transaction amounts and a detailed comparison of transactions between the experiment and the simulation, we conclude that the proposed strategy is executable with the trading system with a latency of 33 μs.
Figures <ref> (a) and (b) show typical transaction behaviors of the trading system observed on Mar. 10, 2022, and Apr. 1, 2022, respectively. The number of market feeds informing of changes in the ask/bid of stocks in the N(=15)-stock universe on Mar. 10 (/Apr. 1) was 1,101,741 (/1,007,773), arriving at intervals of 18.0 ms (/19.6 ms) on average.
On Mar. 10, 2022, the system decided to open the pair position (8750, 8355) [selling code 8750, buying code 8355] at 9:12:14 AM JST (734 seconds after 9:00:00 AM) based on the evaluation of the bypass path (8750 → 8303 → 8355) found by the SBM module. It was confirmed by the backcast simulation that the evaluation value of the direct path (8750→ 8355) did not satisfy the threshold, meaning that this trading opportunity would have been missed if the bypass path had not been evaluated for decision-making. On that day, the prices of both codes 8750 and 8355 were moving up (uptrend), but the relative difference of the prices (the spread) of the pair position decreased after the position opening, resulting in the profitable closing of the pair position before the end of the trading hours [Fig. <ref> (a)].
On Apr. 1, 2022, the system decided to open the pair positions (8304, 8355) [selling code 8304, buying code 8355] and (8308, 8355) [selling code 8308, buying code 8355] at 9:12:11 AM JST (731 seconds after 9:00:00 AM) based on the evaluation of the direct paths (8304 → 8355) and (8308 → 8355). The two pair positions were found by the consecutive execution operation of the SBM module in the instantaneous market situation (before the market situation changed). On that day, the prices of codes 8308, 8304, and 8355 were, overall, moving up (uptrend), but the spreads of the pair positions decreased after the position opening, resulting in the profitable closing of the pair positions before the end of the trading hours [Fig. <ref> (b)].
§ CONCLUSION
We proposed a pairs-trading strategy that finds trading opportunities for any two stocks in an N-stock universe by solving an optimal path search problem in market graphs, and we have demonstrated with real-time transaction records in the TSE that the strategy is executable, in terms of response latency, with an automated trading system using the SB-based embeddable Ising machine for the market graph analysis.
The market graph for the N-stock universe is an N-node fully-connected directed graph with each edge weight corresponding to the product of the instantaneous price difference and a dynamic time warping (DTW) distance-based similarity between a pair of stocks. In the graph, two nodes connected by the minimum-weight one-way directed path, selected from among all possible direct and bypass paths (a collective evaluation of the graph), are considered to correspond to the best trading opportunity. The optimal path search is executed consecutively to find multiple trading opportunities in an instantaneous market situation while avoiding duplicate detections by a tabu search technique.
The automated trading system is a hybrid FPGA/CPU system. The FPGA part (hardware processing) decides the opening of a pair of long/short positions using the Ising machine and then issues the corresponding orders, while the CPU part (software processing) manages the opened positions (including the decision of closing positions). The system-wide latency from the market feed arrival to the order packet issuance is 33 μs for N=15 or 210 pairs.
The trading system was installed at the JPX Co-location area of the TSE and operated for a real-time trading period of 125 business days, or 750 hours. The real-time transaction records were compared with a backcast simulation of the strategy that assumes the orders issued are always filled at the intended prices. Based on the good agreement in the cumulative transaction amounts and a detailed comparison of transactions between the experiment and the simulation, we have concluded that the response latency of the system with the SB-based Ising machine is sufficiently low to execute the pairs-trading strategy based on optimal path search in market graphs.
Automated trading systems with embedded Ising machines would be applicable to the strategies based on various graph analyses of market graphs defined by various return/risk measures and other trading strategies that rely on high-speed discrete optimization.
§ APPENDICES
§.§ A. Position management
The position management module manages N(N-1) state machines corresponding to all the pairs. Fig. <ref> shows the states and transitions of the state machine. Initially, a pair position (i, j) is in the closed state. When an execution packet (informing that the order for one of the paired stocks is filled) is received, the state shifts to the opening state (T1) and then waits for the remaining results to be received (T2). If the fill of the orders for the pair is confirmed as intended, the state shifts to opened (T3). Otherwise (unintended), the state shifts to closing (T4). The management module continuously monitors the prices (bid and ask) of all the tradable stocks and, for an opened pair, detects the convergence of the spread (the confirmation of a profit exceeding a threshold). If the closing condition is satisfied, the state shifts to closing (T5). In the closing state, the state machine waits for the related positions to all be closed (T6); the management module issues the orders for closing via the message generator in the FPGA and then (if necessary) repeats ordering until all the positions are closed. When the closing of the positions is confirmed, the state shifts back to closed (T7).
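A compact sketch of this per-pair state machine is shown below. The transition labels T1-T7 follow the figure, while the event names are assumptions of this example.

```python
from enum import Enum, auto

class PairState(Enum):
    CLOSED = auto()
    OPENING = auto()
    OPENED = auto()
    CLOSING = auto()

def next_state(state, event):
    """Advance one pair's state machine on an event string."""
    if state is PairState.CLOSED and event == "first_fill_received":
        return PairState.OPENING                     # T1
    if state is PairState.OPENING:
        if event == "pair_filled_as_intended":
            return PairState.OPENED                  # T3
        if event == "pair_fill_unintended":
            return PairState.CLOSING                 # T4
        return PairState.OPENING                     # T2: wait for the remaining result
    if state is PairState.OPENED and event == "closing_condition_met":
        return PairState.CLOSING                     # T5
    if state is PairState.CLOSING:
        if event == "all_positions_closed":
            return PairState.CLOSED                  # T7
        return PairState.CLOSING                     # T6: keep reordering until closed
    return state
```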
§.§ B. Implementation details
An FPGA board and a high-speed network interface card (NIC) are mounted on a host server with dual CPUs (Intel Xeon Silver 4215R) and DDR-DRAM modules (384 GB). The FPGA (Intel Arria 10 GX 1150 FPGA) on the board has 427,200 adaptive logic modules (ALMs) including 854,400 adaptive look-up-tables (ALUTs, 5-input LUT equivalent) and 1,708,800 flip-flop registers, 2,713 20Kbit-size RAM blocks (BRAMs), and 1,518 digital signal processor blocks (DSPs). The system components in the FPGA described in Section <ref> were coded in a high-level synthesis (HLS) language (Intel FPGA SDK for OpenCL, ver. 18.1). The FPGA interfaces including a PCIe IP (PCIe Gen3×8), a 10 Gbps Ethernet PHY IP and communication IPs (RX, TX) were written in Verilog HDL and incorporated in the board support package (BSP).
§.§ Acknowledgment
The experiment in the Tokyo Stock Exchange was conducted under a joint project between Toshiba Corporation and Dharma Capital. K.K. The authors thank Ryosuke Iio and Kohei Shimane for fruitful discussions and technical support.
§.§ Conflicts of Interest
K.T., R.H., and M.Y. are included in inventors on two U.S. patent applications related to this work filed by the Toshiba Corporation (no. 17/249353, filed 20 February 2020; no. 17/565206, filed 29 December 2021). The authors declare that they have no other competing interests.
00
Sharpe90 W. F. Sharpe, G. J. Alexander, J. V. Bailey, “Investments (4th edition),” Prentice Hall, Englewood Cliffs, N.J. 1990.
shleifer97 A. Shleifer, R. W. Vishny, “The limits of arbitrage,” The Journal of finance 52, pp. 35–55, 1997. [Online]. Available: https://doi.org/10.1111/j.1540-6261.1997.tb03807.x
gromb10 D. Gromb, D. Vayanos, “Limits of arbitrage,” Annual Review of Financial Economics 2, pp. 251–275, 2010. [Online]. Available: https://doi.org/10.1146/annurev-financial-073009-104107
rosch21 D. Rösch, “The impact of arbitrage on market liquidity,” Journal of Financial Economics 142, pp. 195–213, 2021. [Online]. Available: https://doi.org/10.1016/j.jfineco.2021.04.034
gatev06 E. Gatev, W. N. Goetzmann, K. G. Rouwenhorst, “Pairs trading: Performance of a relative-value arbitrage rule,” The Review of Financial Studies 19, pp. 797–827, 2006. [Online]. Available: https://doi.org/10.1093/rfs/hhj020
krauss17 C. Krauss, “Statistical arbitrage pairs trading strategies: Review and outlook,” Journal of Economic Surveys 31, pp. 513–545, 2017. [Online]. Available: https://doi.org/10.1111/joes.12153
flori21 A. Flori, D. Regoli, “Revealing pairs-trading opportunities with long short-term memory network,” European Journal of Operational Research 295, pp. 772–791, 2021. [Online]. Available: https://doi.org/10.1016/j.ejor.2021.03.009
butenko03 S. Butenko, “Maximum independent set and related problems, with applications,” Ph.D. dissertation, the Industrial and Systems Engineering Department, University of Florida, 2003. [Online]. Available: https://ufdcimages.uflib.ufl.edu/UF/E0/00/10/11/00001/butenko_s.pdf
boginski04 V. Boginski, S. Butenko, P. M. Pardalos, “Network-based Techniques in the Analysis of the Stock Market,” in Supply Chain and Finance, eds. P. M. Pardalos, A. Migdalas, G. Baourakis, World Scientific, pp. 1–14, 2004. [Online]. Available: https://doi.org/10.1142/9789812562586_0001
marzec16 M. Marzec, “Portfolio optimization: Applications in quantum computing,” in Handbook of High-Frequency Trading and Modeling in Finance eds. I. Florescu, M. C. Mariani, H. E. Stanley, F. G. Viens, Wiley Online Library, pp. 73–106, 2016. [Online]. Available: https://doi.org/10.1002/9781118593486.ch4
lucas14 A. Lucas, “Ising formulations of many NP problems,” Frontiers in physics 2, 5, 2014. [Online]. Available: https://doi.org/10.3389/fphy.2014.00005
sbm1 H. Goto, K. Tatsumura, A. R. Dixon, “Combinatorial optimization by simulating adiabatic bifurcations in nonlinear Hamiltonian systems,” Science Advances 5, eaav2372, 2019. [Online]. Available: https://doi.org/10.1126/sciadv.aav2372
FPL19 K. Tatsumura, A. R. Dixon, H. Goto, “FPGA-Based Simulated Bifurcation Machine,” Proc. of IEEE International Conference on Field Programmable Logic and Applications (FPL), pp. 59–66, 2019. [Online]. Available: https://doi.org/10.1109/FPL.2019.00019
sbm2 H. Goto, K. Endo, M. Suzuki, Y. Sakai, T. Kanao, Y. Hamakawa, R. Hidaka, M. Yamasaki, K. Tatsumura, “High-performance combinatorial optimization based on classical mechanics,” Science Advances 7, eabe7953, 2021. [Online]. Available: https://doi.org//10.1126/sciadv.abe7953
NatEle K. Tatsumura, M. Yamasaki, H. Goto, “Scaling out Ising machines using a multi-chip architecture for simulated bifurcation,” Nature Electronics 4, pp. 208–217, 2021. [Online]. Available: https://doi.org/10.1038/s41928-021-00546-4
kanao23 T. Kanao, H. Goto, “Simulated bifurcation for higher-order cost functions,” Applied Physics Express 16, 014501, 2023. [Online]. Available: https://doi.org/10.35848/1882-0786/acaba9
johnson11 M. W. Johnson, M. H. S. Amin, S. Gildert, T. Lanting, F. Hamze, N. Dickson, R. Harris, A. J. Berkley, J. Johansson, P. Bunyk, E. M. Chapple, C. Enderud, J. P. Hilton, K. Karimi, E. Ladizinsky, N. Ladizinsky, T. Oh, I. Perminov, C. Rich, M. C. Thom, E. Tolkacheva, C. J. S. Truncik, S. Uchaikin, J. Wang, B. Wilson, G. Rose, “Quantum annealing with manufactured spins,” Nature 473, pp. 194–198, 2011. [Online]. Available: https://doi.org/10.1038/nature10012
king23 A. D. King, J. Raymond, T. Lanting, R. Harris, A. Zucca, F. Altomare, A. J. Berkley, K. Boothby, S. Ejtemaee, C. Enderud, E. Hoskinson, S. Huang, E. Ladizinsky, A. J. R. MacDonald, G. Marsden, R. Molavi, T. Oh, G. Poulin-Lamarre, M. Reis, C. Rich, Y. Sato, N. Tsai, M. Volkmann, J. D. Whittaker, J. Yao, A. W. Sandvik, M. H. Amin, “Quantum critical dynamics in a 5,000-qubit programmable spin glass,” Nature 617, pp. 61–66, 2023. [Online]. Available: https://doi.org/10.1038/s41586-023-05867-2
honjo21 T. Honjo, T. Sonobe, K. Inaba, T. Inagaki, T. Ikuta, Y. Yamada, T. Kazama, K. Enbutsu, T. Umeki, R. Kasahara, K. Kawarabayashi, H. Takesue, “100,000-spin coherent Ising machine,” Science Advances 7, eabh0952, 2021. [Online]. Available: https://doi.org/10.1126/sciadv.abh0952
pierangeli19 D. Pierangeli, G. Marcucci, C. Conti, “Large-Scale Photonic Ising Machine by Spatial Light Modulation,” Physical Review Letters 122, 213902, 2019. [Online]. Available: https://doi.org/10.1103/PhysRevLett.122.213902
cai20 F. Cai, S. Kumar, T. V. Vaerenbergh, X. Sheng, R. Liu, C. Li, Z. Liu, M. Foltin, S. Yu, Q. Xia, J. J. Yang, R. Beausoleil, W. D. Lu, J. P. Strachan, “Power-efficient combinatorial optimization using intrinsic noise in memristor Hopfield neural networks,” Nature Electronics 3, pp. 409–418, 2020. [Online]. Available: https://doi.org/10.1038/s41928-020-0436-6
aadit22 N. A. Aadit, A. Grimaldi, M. Carpentieri, L. Theogarajan, J. M. Martinis, G. Finocchio, K. Camsari, “Massively parallel probabilistic computing with sparse Ising machines,” Nature Electronics 5, pp. 460–468, 2022. [Online]. Available: https://doi.org/10.1038/s41928-022-00774-2
moy22 W. Moy, I. Ahmed, P. Chiu, J. Moy, S. S. Sapatnekar, C. H. Kim, “A 1,968-node coupled ring oscillator circuit for combinatorial optimization problem solving,” Nature Electronics 5, pp. 310–317, 2022. [Online]. Available: https://doi.org/10.1038/s41928-022-00749-3
sharma22 A. Sharma, R. Afoakwa, Z. Ignjatovic, M. Huang, “Increasing Ising machine capacity with multi-chip architectures,” Proc. of Annual International Symposium on Computer Architecture (ISCA), pp. 508–521, 2022. [Online]. Available: https://doi.org/10.1145/3470496.3527414
takemoto19 T. Takemoto, M. Hayashi, C. Yoshimura, M. Yamaoka, “A 2×30k-Spin Multi-Chip Scalable Annealing Processor Based on a Processing-In-Memory Approach for Solving Large-Scale Combinatorial Optimization Problems,” IEEE Journal of Solid-State Circuits 55, pp. 145–156, 2019. [Online]. Available: https://doi.org/10.1109/JSSC.2019.2949230
kawamura23 K. Kawamura, J. Yu, D. Okonogi, S. Jimbo, G. Inoue, A. Hyodo, Á. L. García-Anas, K. Ando, B. H. Fukushima-Kimura, R. Yasudo, T. Van Chu, M. Motomura, “Amorphica: 4-replica 512 fully connected spin 336MHz metamorphic annealer with programmable optimization strategy and compressed-spin-transfer multi-chip extension,” Proc. of IEEE International Solid-State Circuits Conference (ISSCC), pp. 42–43, 2023. [Online]. Available: https://doi.org/10.1109/ISSCC42615.2023.10067504
matsubara20 S. Matsubara, M. Takatsu, T. Miyazawa, T. Shibasaki, Y. Watanabe, K. Takemoto, H. Tamura, “Digital annealer for high-speed solving of combinatorial optimization problems and its applications,” Proc. of Asia and South Pacific Design Automation Conference (ASP-DAC), pp. 667–672, 2020. [Online]. Available: https://doi.org/10.1109/ASP-DAC47756.2020.9045100
waidyasooriya21 H. M. Waidyasooriya, M. Hariyama, “Highly-parallel FPGA accelerator for simulated quantum annealing,” IEEE Transactions on Emerging Topics in Computing 9, pp. 2019–2029, 2021. [Online]. Available: https://doi.org/10.1109/TETC.2019.2957177
okuyama19 T. Okuyama, T. Sonobe, K. Kawarabayashi, M. Yamaoka, “Binary optimization by momentum annealing,” Physical Review E 100, 012111, 2019. [Online]. Available: https://doi.org/10.1103/PhysRevE.100.012111
barahona82 F. Barahona, “On the computational complexity of Ising spin glass models,” Journal of Physics A: Mathematical and General 15, pp. 3241–-3253, 1982. [Online]. Available: https://doi.org/10.1088/0305-4470/15/10/028
yoo23 S. Yoo, H. Kim, J. Kim, S. Park, J.-Y. Kim, J. Oh, “LightTrader: A Standalone High-Frequency Trading System with Deep Learning Inference Accelerators and Proactive Scheduler,” IEEE International Symposium on High-Performance Computer Architecture (HPCA), pp. 1017–1030, 2023. [Online]. Available: https://doi.org/10.1109/HPCA56546.2023.10070930
fil20 M. Fil, L. Kristoufek, “Pairs trading in cryptocurrency markets,” IEEE Access 8, pp. 172644–172651, 2020. [Online]. Available: https://doi.org/10.1109/ACCESS.2020.3024619
huang19 B. Huang, Y. Huan, L. D. Xu, L. Zheng, Z. Zou, “Automated trading systems statistical and machine learning methods and hardware implementation: a survey,” Enterprise Information Systems 13, pp. 132–144, 2019. [Online]. Available: https://doi.org/10.1080/17517575.2018.1493145
denholm15 S. Denholm, H. Inoue, T. Takenaka, T. Becker, W. Luk, “Network-level FPGA acceleration of low latency market data feed arbitration,” IEICE Transactions on Information and Systemss E98-D, pp. 288–297, 2015. [Online]. Available: https://doi.org/10.1587/transinf.2014RCP0011
leber11 C. Leber, B. Geib, H. Litz, “High frequency trading acceleration using FPGAs,” Proc. of IEEE International Conference on Field Programmable Logic and Applications (FPL), pp. 317–322, 2011. [Online]. Available: https://doi.org/10.1109/FPL.2011.64
malceniece23 L. Malceniece, K. Malcenieks, T. J. Putniņš, “High frequency trading and comovement in financial markets,” Journal of Financial Economics 134, pp. 381–399, 2019. [Online]. Available: https://doi.org/10.1016/j.jfineco.2018.02.015
brogaard14 J. Brogaard, T. Hendershott, R. Riordan, “High-Frequency Trading and Price Discovery,” The Review of Financial Studies 27, pp. 2267–2306, 2014. [Online]. Available: https://doi.org/10.1093/rfs/hhu032
spyrou13 S. Spyrou, “Herding in financial markets: a review of the literature,” Review of Behavioral Finance 5, pp. 175–194, 2013. [Online]. Available: https://doi.org/10.1108/RBF-02-2013-0009
ISCAS20 K. Tatsumura, R. Hidaka, M. Yamasaki, Y. Sakai, H. Goto, “A Currency Arbitrage Machine based on the Simulated Bifurcation Algorithm for Ultrafast Detection of Optimal Opportunity,” Proc. of IEEE International Symposium on Circuits and Systems (ISCAS), pp. 1–5, 2020. [Online]. Available: https://doi.org/10.1109/ISCAS45731.2020.9181114
qbm H. Goto, “Bifurcation-based adiabatic quantum computation with a nonlinear oscillator network,” Scientific Reports 6, 21686, 2016. [Online]. Available: https://doi.org/10.1038/srep21686
sakoe78 H. Sakoe, S. Chiba, “Dynamic programming algorithm optimization for spoken word recognition,” IEEE Transactions on Acoustics, Speech, and Signal Processing 26, pp. 43–49, 1978. [Online]. Available: https://doi.org/10.1109/TASSP.1978.1163055
marsaglia03 G. Marsaglia, “Xorshift RNGs,” Journal of Statistical software 8, pp. 1–6, 2003. [Online]. Available: https://doi.org/10.18637/jss.v008.i14
sharpe66 W. F. Sharpe, “Mutual fund performance,” The Journal of Business 39, pp. 119–138, 1966. [Online]. Available: https://www.jstor.org/stable/2351741
backus93 D. K. Backus, A. W. Gregory, C. I. Telmer, “Accounting for forward rates in markets for foreign currency,” The Journal of Finance 48, pp. 1887–1908, 1993. [Online]. Available: https://doi.org/10.1111/j.1540-6261.1993.tb05132.x
Implementing an electronic sideband offset lock for precision spectroscopy in radium

Tenzin Rabga, Kevin G. Bailey, Michael Bishof, Donald W. Booth, Matthew R. Dietrich, John P. Greene, Peter Mueller, Thomas P. O'Connor, and Jaideep T. Singh

arXiv:2307.07646v1 [physics.atom-ph] (also nucl-ex, physics.optics), 14 July 2023
1Physics Division, Argonne National Laboratory, Argonne, Illinois 60439, USA
2National Superconducting Cyclotron Laboratory and Department of Physics and Astronomy, Michigan State University, East Lansing, Michigan 48824, USA
3Currently with the Center for Correlated Electron Systems, Institute for Basic Science (IBS) and Department of Physics and Astronomy, Seoul National University (SNU),
Seoul 151-742, Republic of Korea
*[email protected]
†[email protected]
We demonstrate laser frequency stabilization with at least 6 GHz of offset tunability using an in-phase/quadrature (IQ) modulator to generate electronic sidebands (ESB) on a titanium sapphire laser at 714 nm, and we apply this technique to the precision spectroscopy of ^225Ra and ^226Ra. By locking the laser to a single resonance of a high finesse optical cavity and adjusting the lock offset, we determine the frequency difference between the magneto-optical trap (MOT) transitions in the two isotopes to be 2630.0±0.3 MHz, a factor of 29 more precise than the previously available data. Using the known value of the hyperfine splitting of the ^3P_1 level in ^225Ra, we calculate the isotope shift for the ^1S_0 → ^3P_1 transition to be 2267.0±2.2 MHz, which is a factor of 8 more precise than the best available value. Our technique could be applied to countless other atomic systems to provide unprecedented precision in isotope shift spectroscopy and other relative frequency comparisons.
§ MOTIVATION AND BACKGROUND
Laser frequency stabilization techniques are ubiquitous in applications such as precision spectroscopy <cit.>, laser cooling and trapping of atoms <cit.> and molecules <cit.>, and quantum information science <cit.>. Laser frequency stabilization is often achieved via comparison to a stable frequency reference. The most common frequency references include optical cavities and atomic or molecular transitions, but any system with a stable, measurable, and selective response to laser frequency can be used. This response signal, can then be used as an "error signal," which tracks the laser's frequency deviations from the stable reference and can be "fed back" to parameters that control the laser frequency to cancel these deviations. In this work, we use the reflection signal from a high-finesse optical cavity near a cavity resonance as our frequency reference.
In a side-of-peak locking scheme, one locks the laser frequency to the side of an optical cavity resonance, using the slope of the cavity resonance as the error signal. Although this allows some tunability in the frequency of the laser, by selecting different positions on the resonance, it couples any laser intensity fluctuations to its frequency instability. A preferred, improved method uses the Pound-Drever-Hall (PDH) locking scheme <cit.>. This involves phase modulation of the laser beam incident on an optical cavity reference. The reflected signal off of the cavity is collected on a photo detector and demodulated to generate the error signal. This method overcomes the sensitivity to intensity fluctuations of the side-of-peak scheme at the price of tunability, since insensitivity to laser intensity noise is optimal only on the resonance peak.
However, it is still often desirable to have a tunable laser frequency lock while maintaining frequency stability. There are a variety of ways to achieve this goal. Using an acousto-optic modulator (AOM), one can achieve several hundreds of MHz of frequency tunability<cit.>. Alternatively an offset phase lock to another frequency stabilized laser can achieve tunable frequency locks over a larger tuning range<cit.>, but requires an additional laser. The electronic side-band (ESB) offset lock, a simple extension to the PDH lock, allows laser frequency stabilization to a fixed frequency reference with a broadly tunable offset frequency <cit.>. Offset frequencies up to 4 GHz have been achieved by combining the ESB offset locking technique with a high-bandwidth, fiber-coupled electro-optical modulator (EOM)<cit.>. Here, we describe the methods used to implement an ESB offset lock for laser frequency stabilization with an offset frequency that is tunable between 200 MHz and 6 GHz. In contrast to previous work, we use an in-phase/quadrature (IQ) modulator (Analog Devices, LTC5588-1) to generate the laser modulation signal from inexpensive digital signal generators. This eliminates the need for expensive microwave signal generators capable of carrier phase modulation at several MHz. Moreover, our approach can be adapted to a greater number of applications owing to the broad availability of low cost synthesizers and IQ modulators across the entire RF and Microwave spectrum. We leverage our ESB offset lock to perform laser spectroscopy of radium isotopes and compare the frequencies of different atomic transitions to the same optical cavity resonance. We determine the frequency difference between isotopes with precision and accuracy limited only by the cavity resonance, lock implementation, and fundamental atomic properties.
Spectroscopy of radium isotopes is of extreme importance due to their unique atomic and nuclear properties that make them suitable for electric dipole moment (EDM) searches. A non-zero EDM in a non-degenerate system violates time-reversal (T) symmetry and consequently charge-parity (CP) symmetry <cit.>. The octupole deformation and nearly degenerate nuclear parity doublet in radium make it an ideal candidate for probing CP violations due to the atomic nucleus <cit.>. The radium EDM experiment at the Argonne National Laboratory uses ^225Ra (I=1/2, τ_1/2 = 14.9 day) (where I is the nuclear spin) to set the best limit on the size of the EDM in ^225Ra <cit.>. Its 14.9-day half-life and low vapor pressure make ^225Ra a challenging system for an EDM experiment. Due to its greater abundance and longer half-life, ^226Ra (I=0, τ_1/2 = 1600 yr) is used to optimize certain parts of the EDM experimental apparatus, such as the atom cooling and trapping setup. Therefore it is crucial to be able to quickly and reliably tune laser frequencies over several GHz in order to laser cool and trap both isotopes. The laser frequency stabilization technique described here provides a convenient and robust method for tuning the laser frequency to the relevant atomic transitions used for cooling and trapping these two isotopes during a single experiment.
The relevant energy levels for slowing and trapping of a radium magneto-optical trap (MOT) are shown in Figure <ref> <cit.>. We perform an improved measurement of the frequency difference Δν between the MOT transitions for ^226Ra (^1S_0 → ^3P_1) and ^225Ra (^1S_0[F=1/2] → ^3P_1[F=3/2]). From the measured Δν and the known value for the hyperfine splitting Δν_hfs between the F=3/2 and F=1/2 levels of the ^3P_1 state in ^225Ra, we extract the isotope shift Δν_iso between ^226Ra and ^225Ra for the ^1S_0 → ^3P_1 transition.
§ IMPLEMENTING AN ESB OFFSET LOCK USING AN IQ MODULATOR
An IQ modulator splits the input carrier radiofrequency signal into its In-phase (I) and Quadrature (Q) components. With appropriate choices of inputs into the I and Q ports it can approximate frequency modulation. In our case, to generate the necessary frequency sidebands, a carrier with frequency Ω_1 is fed into the local oscillator (LO) port of the IQ modulator. A constant DC voltage is fed into the I port while a modulation signal with frequency Ω_2 is fed into the Q port. The resultant combination of the in-phase and quadrature signals is then applied to the EOM. The principle of operation of an IQ modulator is shown in Figure <ref>a.
The electric field of a laser beam with frequency ω_0 and amplitude E_0 can be written as
E(t) = E_0e^iω_0 t
For the ESB offset lock scheme, the IQ modulated signal Acos(Ω_1t)+Bcos(Ω_2t)sin(Ω_1t) is amplified and applied to the EOM. The modulated electric field of the laser beam is then given by
E(t) = E_0e^i{ω_0 t + Acos(Ω_1t)+Bcos(Ω_2t)sin(Ω_1t)}
where, Ω_1 and Ω_2 are the respective modulations into the LO and the Q ports of the IQ modulator with associated modulation depths A and B respectively. Expanding the above expression in terms of Bessel functions, with terms up to 𝒪(J^2_1), enables us to see the associated sidebands generated.
E(t) ≈ E_0 {J_0(A)J^2_0(B/2)e^iω_0 t
+ iJ^2_0(B/2)J_1(A)[e^i(ω_0+Ω_1)t + e^i(ω_0-Ω_1)t]
+ J_0(A)J^2_1(B/2)[e^i(ω_0+2Ω_1)t+e^i(ω_0-2Ω_1)t-e^i(ω_0+2Ω_2)t-e^i(ω_0-2Ω_2)t]
+ J_0(B/2)J_1(B/2)(J_0(A)-2J_1(A)sin(Ω_1 t))×[e^i(ω_0+Ω_1-Ω_2)t+e^i(ω_0+Ω_1+Ω_2)t-e^i(ω_0-Ω_1+Ω_2)t-e^i(ω_0-Ω_1-Ω_2)t]}
where J_n is the n-th order Bessel function of the first kind. Up to terms linear in J_1, we see the six sidebands generated, as shown in Figure <ref>b. We also note in Eq. <ref> that, in terms up to 𝒪(J^2_1), we observe amplitude modulation at Ω_1. This is a consequence of using an IQ modulator for phase modulation: there is always some amplitude modulation. For the present purposes, we focus our attention on the modulation at ω_0-Ω_1 and the two sidebands generated at ω_0-Ω_1-Ω_2 and ω_0-Ω_1+Ω_2. This is similar to the sidebands generated in the PDH scheme. The modulation at Ω_2 generates the error signal, while the modulation at Ω_1 provides the tunability of the lock. Compared to a conventional phase-modulated PDH frequency stabilization scheme with a modulation at frequency Ω_1 and a modulation depth A, the relative size of the error signal (up to 𝒪(J^2_1)) is given by
D_ESB/D_PDH = J^3_0(B/2)J_1(B/2)[1 - 2J_1(A)/J_0(A)sin(Ω_1t)]
By tuning the EOM offset frequency Ω_1 we change the offset of the laser frequency from the resonant optical cavity mode and therefore tune the laser frequency. The collision of the different sidebands as one tunes Ω_1 poses a potential issue for broadband applications. In the above case, when Ω_1 is a multiple of Δν_FSR/2, where Δν_FSR is the free spectral range of the optical cavity, we notice the collision of the sidebands with opposite phases at ω_0+Ω_1 and ω_0-Ω_1. This can significantly affect the error signal and therefore the lock performance. However, we find that adding an AOM in the pathway helps optimize the tuning frequency Ω_1 and, as a result, prevents any lock degradation (See Fig <ref>). Incidentally, this effect can also be used to conveniently and accurately measure Δν_FSR of the cavity. One can tune the laser so that it scans a range near the center of two TEM_00 modes, then adjust Ω_1 until the error signal of the lower frequency mode precisely cancels that of the higher order mode. The free spectral range is then twice Ω_1. Greater relative accuracy can be obtained by comparing non-adjacent modes, and measuring 4 or 5 times Δν_FSR.
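The sideband structure of the modulated field, and the collisions discussed above as Ω_1 is tuned, can be checked numerically by Fourier transforming the field, as in the sketch below; the sampling rate, modulation depths, and tone frequencies are arbitrary example values.

```python
import numpy as np

# Example parameters (arbitrary): frequencies in MHz, carrier taken at 0 in the rotating frame
omega1 = 1000.0          # Omega_1 / 2*pi (offset sideband)
omega2 = 10.0            # Omega_2 / 2*pi (PDH-style modulation)
A, B = 1.0, 0.7          # modulation depths of the LO and Q paths
fs = 16384.0             # sampling rate, MHz
t = np.arange(0, 4.0, 1 / fs)   # 4 microseconds of the field envelope

# Phase-modulated field in the rotating frame of the optical carrier
phase = A * np.cos(2 * np.pi * omega1 * t) + \
        B * np.cos(2 * np.pi * omega2 * t) * np.sin(2 * np.pi * omega1 * t)
field = np.exp(1j * phase)

spec = np.fft.fftshift(np.fft.fft(field)) / len(field)
freq = np.fft.fftshift(np.fft.fftfreq(len(field), d=1 / fs))

# Print the strongest spectral lines; sidebands appear at +/-Omega_1 and Omega_1 +/- Omega_2
for k in np.argsort(np.abs(spec))[::-1][:7]:
    print(f"{freq[k]:9.1f} MHz  relative amplitude {np.abs(spec[k]):.3f}")
```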
§ CHARACTERIZING THE ESB OFFSET LOCK PERFORMANCE
To characterize lock performance, we implement an ESB offset lock to an optical cavity using a 483 nm external cavity diode laser (ECDL1, Toptica DL Pro) and analyze the optical beat note between ECDL1 and a similar, unstabilized 483 nm laser (ECDL2, Moglabs CEL). The experimental setup is shown in Fig <ref>.
§.§ ESB Offset Lock Test Setup
A portion of the 483 nm ECDL1 laser power is modulated by a fiber coupled EOM (AdvR, KTP phase modulator), and sent to an optical reference cavity made from a zerodur spacer and fused silica high-reflectivity mirrors. The IQ modulation scheme for generating sidebands and locking the laser frequency is identical to what is described later in section 4.1. For ECDL1 we feedback on the DC component of the laser current to stabilize its frequency. The optical beat pattern between ECDL1 and ECDL2 is collected on a photo detector (PD2). The beat pattern is sampled and monitored on a signal analyzer and an oscilloscope. Typically, we use the optical beat signal from PD2 to stabilize the frequency of ECDL2 relative to ECDL1 using a phase discriminator and reference frequency (See Fig 3.). To evaluate the ESB lock performance we disengage the feedback on ECDL2 and tune its free-running frequency to achieve an optical beat signal at 100 MHz. This signal is down converted to 20 kHz by mixing the beat pattern with a function generator output at 99.98 MHz.
To test the performance of the ESB offset locking scheme, we compare it to a traditional PDH lock. For the PDH lock, we simply bypass the EOM and send the laser beam straight to the cavity. The AC component of the laser current for ECDL1 is dithered to generate the frequency sidebands. The cavity reflection signal is demodulated to generate the PDH error signal, which is then low-pass filtered and fed to the same PID controller (SRS, SIM960) as for the ESB offset lock.
§.§ Lock Evaluation and Optimization
Figure <ref> shows the power spectral density (PSD) of the beat pattern normalized to the 20 kHz carrier signal, estimated using Welch's method <cit.>. We calculate the PSD from a 100 ms observation of the beat signal with a 1.25 MHz sampling rate, both when ECDL1 is locked using a traditional PDH lock (black) and when it is locked using the ESB offset lock (blue). We note that the full width at half maximum of the 20 kHz carrier signal is not significantly different between the PDH- and ESB-locked signals, and the small difference in carrier-normalized noise above 1 kHz is likely due to additional electronic noise in the lock feedback owing to a reduction in optical power coupled to the cavity resonance when the EOM is in use. An additional source of noise in the lock is residual amplitude modulation (RAM), which depends on the temperature of the EOM and the polarization of light sent through the EOM. For this work the EOMs are thermally stabilized and the light polarization is adjusted to minimize RAM on the EOM output. We expect that by sending additional optical power through the EOM and actively stabilizing the RAM <cit.>, the relative noise on the ESB lock beat signal could be reduced to that of the PDH signal. Still, it is important to note that certain EOMs, such as the one used in our 714 nm ESB lock, are susceptible to significant photorefractive damage when exposed to optical input power above certain thresholds, so the total optical power sent through an EOM may be limited. For this work, the input power is set below the threshold (6 mW) at which the photorefractive effect occurs.
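The carrier-normalized PSD estimate described here can be reproduced with SciPy's Welch implementation roughly as follows; the segment length, window choice, and the synthetic test signal are assumptions of this example.

```python
import numpy as np
from scipy.signal import welch

def carrier_normalized_psd(beat, fs=1.25e6, f_carrier=20e3, nperseg=2**14):
    """Welch PSD of the down-converted beat note, normalized to the carrier peak."""
    f, pxx = welch(beat, fs=fs, window="hann", nperseg=nperseg)
    carrier = pxx[np.argmin(np.abs(f - f_carrier))]
    return f, pxx / carrier

# Example with a synthetic 20 kHz beat note plus white noise, 100 ms record at 1.25 MS/s
t = np.arange(0, 0.1, 1 / 1.25e6)
beat = np.cos(2 * np.pi * 20e3 * t) + 1e-3 * np.random.randn(t.size)
f, rel_psd = carrier_normalized_psd(beat)
```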
We do not expect that additional noise from the ESB-locked ECDL1 will affect atomic spectroscopy on the ^1S_0 → ^1P_1 transition, which has a natural linewidth of 30 MHz, although this has not yet been tested. We have, however, implemented a very similar ESB cavity lock in our 714 nm laser system (described below), which addresses the ^1S_0 → ^3P_1 transition. Despite this transition having a natural linewidth of only 380 kHz, we do not observe any significant difference in the efficiency with which we cool, slow, and trap atoms in a MOT compared to a previous locking scheme that utilized a high-frequency AOM and a traditional PDH lock. Moreover, the estimated 714 nm laser linewidth is consistently observed to be 70 kHz using either locking scheme, as determined by comparing the locked error signal fluctuations to the cavity linewidth.
§ MOT CUTOFF AND ISOTOPE SHIFT MEASUREMENTS IN ^225RA AND ^226RA
The following section describes the implementation of an ESB offset lock in our 714 nm laser system and its application to precision spectroscopy in and .
§.§ ESB Offset Lock Setup
The experimental setup used for the ESB offset lock of our 714 nm laser and the MOT cutoff frequency measurements is shown in Fig <ref>. We generate 1.3 W of 714 nm light from a Ti:Sapphire ring cavity laser (Sirah, Matisse) pumped by a diode-pumped solid state (DPSS) laser (Lighthouse Photonics, Sprout). Most of the light is sent to our laser slowing and trapping setup for radium. A small sample of 4 mW is sent to a fiber-coupled EOM (EOSpace). The EOM provides phase modulation to the laser beam, which is then sent to an ultra-low-expansion (ULE) optical cavity for frequency stabilization.
The modulation RF signal is generated by an IQ modulator (Analog Devices, LTC5588-1). A schematic of the IQ modulator is shown in Figure <ref> along with the relevant ports and inputs.
The blue trace in Figure <ref> shows a typical output from the IQ modulator. Here we see the offset sideband at 1 GHz (Ω_1) and the corresponding modulation sidebands that are offset ±10 MHz (Ω_2) from Ω_1. The reflected light from the cavity is collected on a fast photodiode (PD). The signal is sent to a mixer and demodulated at 10 MHz to create an error signal as shown by the black trace in Figure <ref>.
The error signal is low-pass filtered and sent to a PID controller inside the Ti:Sapphire laser control box, which feeds back on the fast etalon in the ring cavity to lock the laser frequency to a TEM_00 mode of the ULE cavity. The laser linewidth, when locked, is typically measured to be 70 kHz, limited by the bandwidth of the laser controller.
§.§ MOT Cutoff Measurement Setup
Most of the 714 nm laser power is sent to the transverse cooling (TC) chamber to collimate the atomic beam coming out of the oven, followed by the Zeeman slower (ZS) to slow the longitudinal velocity of the atomic beam below the MOT capture velocity, and finally to the MOT to trap the radium atoms. More details about the experimental design and the slowing and trapping of radium can be found in Ref. <cit.>. The frequency applied to the acousto-optic modulator (AOM) for the MOT beams is switched between a loading and a probe phase using an RF switch. An experimental cycle is set to a total of 25 seconds, during which the probe phase lasts for 300 ms, which is the camera exposure time. During the loading phase, the MOT laser frequency is red-detuned from resonance by 8Γ (Γ = 2π× 380 kHz). During the probe phase, we measure the MOT fluorescence on an EMCCD camera (Andor, Luca) for different RF frequencies applied to the MOT AOM.
For ^225Ra, we further offset the laser frequency from the cavity reference mode by 2629.95 MHz. This is achieved by increasing the offset frequency Ω_1 by the same amount. This highlights the broadband tunability of this implementation of the ESB lock. Locking to the same cavity resonance, we simply increase the laser frequency offset to probe transitions that are several GHz apart. Using the same cavity resonance eliminates any potential systematic or statistical frequency uncertainties introduced by referencing the laser to different cavity resonances.
§.§ Data Analysis
We measure the MOT fluorescence signal for ^225Ra and ^226Ra during the probe phase. For each isotope, we collect 5 background images before and after the frequency scan. We scan the MOT probe frequency in steps of 20 kHz to 50 kHz by scanning the RF drive frequency of the MOT AOM during the probe phase. At each frequency step during the scan, we acquire an atom fluorescence image. The atom images are background subtracted to extract the atom fluorescence count.
The MOT cutoff frequency is defined as the laser frequency at which the MOT fluorescence vanishes as the frequency is increased. As depicted in Figure <ref>, above a certain frequency the MOT fluorescence counts drop below the background level. More commonly, isotopic frequency differences in atomic vapors are measured without externally applied fields, either in an effusive beam or, for MOT-trapped atoms, with the anti-Helmholtz B-field turned off before probing the freely expanding cloud. Due to the small number of ^225Ra atoms and the weakness of the transition, we require that the atoms remain trapped in order to scatter enough photons for detection. Not only have we observed in several elements and isotopes that the MOT cutoff frequency is an accurate, repeatable, and environmentally insensitive absolute frequency reference, but we also demonstrate the insensitivity of this value to key systematic effects below.
To determine the MOT cutoff frequency, we assume the fluorescence signal is linear in the region of the cutoff and fit the segment of the MOT fluorescence curve containing several data points just before the fluorescence curve plateaus to the following function:
y(ν) = Max[a_1(ν - a_2), a_1(a_3 - a_2) ]    (5)
This fit function is the maximum of a line with slope a_1 and x-intercept a_2 and a constant line with value a_1(a_3 - a_2). The MOT cutoff frequency, represented by a_3, is defined as the frequency at which these two lines intersect.
The linear approximation is valid only for data points near the cutoff frequency. We therefore studied the sensitivity of the fitted MOT cutoff frequency to the number of data points used in the fit, as discussed in the next section.
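As a rough illustration of this fitting procedure, the sketch below implements the fit function of Eq. (5) and fits it with scipy.optimize.curve_fit; the synthetic data and starting values are placeholders and are not taken from the experiment.

```python
import numpy as np
from scipy.optimize import curve_fit

def cutoff_model(nu, a1, a2, a3):
    # Maximum of a line with slope a1 and x-intercept a2 and the constant
    # a1*(a3 - a2); the two branches intersect at nu = a3, the MOT cutoff frequency.
    return np.maximum(a1 * (nu - a2), a1 * (a3 - a2))

# Synthetic stand-in for background-subtracted fluorescence counts vs. probe frequency (MHz)
rng = np.random.default_rng(1)
nu = np.arange(162.0, 163.15, 0.05)
counts = cutoff_model(nu, -4000.0, 162.95, 162.86) + rng.normal(scale=50.0, size=nu.size)

# Fit only the last 15 points of the scan (the region just before and at the plateau)
popt, pcov = curve_fit(cutoff_model, nu[-15:], counts[-15:], p0=(-3000.0, 163.0, 162.9))
nu_cutoff, nu_cutoff_err = popt[2], np.sqrt(np.diag(pcov))[2]
```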
§.§ Results and Discussion
The MOT fluorescence data for the two isotopes are shown in Fig <ref>. From the fits we determine the MOT cutoff frequency a_3 for ^226Ra (with statistical uncertainties) to be ν_226 = 162.862(20) MHz and for ^225Ra to be ν_225+2629.95 MHz = 162.785(102) MHz. The frequency difference for the MOT transition between the two isotopes is therefore determined to be 2630.037 MHz with a statistical uncertainty of σ_Δν,stat = 0.104 MHz.
To select the number of data points to include in our fit, we study the effect of the number of included data points on the fitted value of the MOT cutoff frequency a_3. For example, for our ^226Ra dataset at 25.8 mW MOT probe power, we fit the dataset containing the data points between the peak of the dataset and the eighth-from-last data point and observe how the fitted value of a_3 changes. We expect the fitted values of a_3 to converge to the true value near the actual MOT cutoff frequency, where the linear approximation holds, and to diverge as we include more points and this approximation breaks down. As shown in Figure <ref>, the MOT cutoff frequency fit values diverge by more than one standard deviation after more than 15 data points are included in the fit. We therefore fit for the MOT cutoff frequency a_3 using the last 15 data points for the above dataset. The data and the resultant fit are shown in Figure <ref>. This method for determining the cutoff frequency is repeated for all the MOT cutoff datasets taken for ^226Ra and ^225Ra.
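The convergence check described above can be scripted by repeating the fit while varying how many trailing points are included and flagging when the fitted a_3 drifts by more than one standard deviation; this continues the synthetic sketch from the previous block (cutoff_model, nu, counts, and curve_fit).

```python
fits = {}
for n_pts in range(10, len(nu) + 1):
    popt, pcov = curve_fit(cutoff_model, nu[-n_pts:], counts[-n_pts:],
                           p0=(-3000.0, 163.0, 162.9))
    fits[n_pts] = (popt[2], np.sqrt(np.diag(pcov))[2])   # (a3, sigma_a3)

a3_ref, sigma_ref = fits[15]
diverged = sorted(n for n, (a3, _) in fits.items() if abs(a3 - a3_ref) > sigma_ref)
```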
We study the potential systematic effects of the MOT probe laser power and the MOT gradient B-fields on our MOT cutoff frequency determination in ^226Ra. For the MOT probe power dependence, we keep the probe MOT gradient B-field constant and measure a MOT fluorescence spectrum at various probe laser powers. The dependence of the cutoff frequency on the MOT probe power for ^226Ra is shown in Figure <ref>. We estimate the size of this effect by using the largest possible difference in the MOT cutoff fit values from the four different MOT probe powers studied. This conservatively puts the effect at the level of 180 kHz.
Additionally, we investigate the effect of the MOT B-field gradient on the observed cutoff frequency by measuring MOT fluorescence spectra using different currents applied to the anti-Helmholtz MOT B-field coils during the probe phase. The plot of the MOT cutoff frequency fit values for the different MOT B-field coil currents is shown in Figure <ref>. We similarly estimate the size of this effect at 180 kHz. Cumulatively, adding in quadrature, the total systematic effect is σ_Δν,sys = 0.254 MHz. Adding this in quadrature to the statistical uncertainty of σ_Δν,stat = 0.104 MHz gives a total uncertainty on our MOT cutoff frequency difference between the two isotopes of σ_Δν = 0.274 MHz. The frequency difference for the MOT transitions between ^226Ra (^1S_0 to ^3P_1) and ^225Ra (^1S_0[F=1/2] to ^3P_1[F=3/2]) is therefore determined to be
Δν = ν_226 - ν_225
= 2630.0 ± 0.3 MHz    (6)
This is consistent with the value calculated from previously available spectroscopic data for radium <cit.> of 2629.0±8.6 MHz and is a factor of 29 more precise. Additionally, along with the known hyperfine splitting between the F=1/2 and the F=3/2 levels of ^3P_1 in ^225Ra, we use our measurement of the difference in the MOT transitions between ^226Ra and ^225Ra to calculate the isotope shift Δν_iso between these two isotopes for the ^1S_0 to ^3P_1 transition:
Δν_iso = 1/3Δν_hfs - Δν
= 14691/3 - 2630    (7)
= 2267.0±2.2 MHz
This is a factor of 8 more precise than the isotope shift of 2268±17 MHz that can be determined from Ref. <cit.>. It should be noted that, in Ref. <cit.>, the isotope shifts for the ^1S_0 to ^3P_1 transition were measured for a range of radium isotopes with respect to ^214Ra. Therefore it is possible that the uncertainty of 17 MHz propagated onto this isotope shift between ^225Ra and ^226Ra is overestimated. Since, to the best of our knowledge, no direct measurements of this isotope shift are reported, we take the above calculated value and its propagated uncertainty to be the best currently available measurement.
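As a quick arithmetic check of the numbers quoted above (and only of those numbers), the uncertainty budget and the isotope-shift value can be reproduced as follows.

```python
import math

# Systematic contributions and statistical uncertainty quoted above (MHz)
sigma_probe_power = 0.180
sigma_b_field     = 0.180
sigma_stat        = 0.104

sigma_sys   = math.hypot(sigma_probe_power, sigma_b_field)   # ~0.255, quoted as 0.254
sigma_total = math.hypot(sigma_sys, sigma_stat)              # ~0.275, quoted as 0.274

# Isotope shift from Eq. (7): one third of the hyperfine splitting minus Delta nu
delta_nu_hfs = 14691.0   # MHz, hyperfine splitting value appearing in Eq. (7)
delta_nu     = 2630.0    # MHz, measured MOT-transition difference, Eq. (6)
delta_nu_iso = delta_nu_hfs / 3.0 - delta_nu                 # = 2267 MHz
```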
§ CONCLUSION
We have demonstrated the implementation of a broadband tunable ESB offset lock using an IQ modulation scheme. The ESB offset lock on our 483 nm ECDL1 laser is compared to a PDH lock on the same laser to characterize the ESB lock performance. We find that the noise level for the ESB lock is about 3 dB higher in the sub-MHz region compared to the PDH lock. In our 714 nm laser, we do not notice any significant changes in the laser frequency linewidth.
Suggestions for optimizing the ESB lock are provided. RAM in the system can introduce significant drifts in the error signal DC offset. Adjusting the EOM temperature and tuning the input polarization has been shown to be most effective in reducing RAM in our system. Photorefractive damage is a serious concern for short-wavelength fiber EOMs such as our 714 nm phase modulator, and an appropriate limit must be placed on the input optical power.
Using this highly tunable ESB offset laser frequency stabilization scheme, with a 714 nm laser locked to the same resonance of a ULE optical cavity, we determine the frequency shift required for the MOT transition frequencies between ^226Ra (^1S_0 to ^3P_1) and ^225Ra (^1S_0[F=1/2] to ^3P_1[F=3/2]) to be 2630.0±0.3 MHz. Our measurement is a factor of 29 more precise than what can be calculated from the previously available spectroscopic data on radium. Using this measurement and the available hyperfine splitting of the ^3P_1 level in ^225Ra, we determine the isotope shift for the ^1S_0 to ^3P_1 transition between ^226Ra and ^225Ra to be Δν_iso = 2267.0 ± 2.2 MHz. To the best of our knowledge, this is a factor of 8 better than the currently best available data and, as a consequence, the most precise measurement of this quantity.
The ESB locking technique used to obtain these results is broadly applicable to myriad physical systems to improve the precision of differential frequency measurements and could be expanded to frequency differences beyond 6 GHz through the use of a higher-bandwidth IQ modulator. We expect this technique will significantly improve precision in isotope shift spectroscopy in particular, but it could have broader impacts in laser spectroscopy more generally.
Funding
This work is supported by the U.S. DOE, Office of Science, Office of Nuclear Physics under contracts DE-AC02-06CH11357 and DE-SC0019455, and by Michigan State University.
Acknowledgments
Disclosures
The authors declare no conflicts of interest.
Data Availability Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
|
http://arxiv.org/abs/2307.05631v1 | 20230711070814 | Causal Kripke Models | [
"Yiwen Ding",
"Krishna Manoorkar",
"Apostolos Tzimoulis",
"Ruoding Wang",
"Xiaolong Wang"
] | cs.AI | [
"cs.AI",
"cs.LO"
] |
Causal Kripke Models
Yiwen Ding, Krishna Manoorkar, Apostolos Tzimoulis, Ruoding Wang, Xiaolong Wang
August 12, 2023
=======================================================================================================================================================================================================
This work extends Halpern and Pearl's causal models for actual causality to a possible world semantics environment. Using this framework we introduce a logic of actual causality with modal operators, which allows for reasoning about causality in scenarios involving multiple possibilities, temporality, knowledge and uncertainty.
We illustrate this with a number of examples, and conclude by discussing some future directions for research.
§ INTRODUCTION
Causality is crucial in human reasoning and knowledge. Defining and formalizing causality has been a significant area of research in philosophy and formal methods <cit.>. In recent years, with the rise of machine learning and AI, there has been growing interest in formalizing causal reasoning. One of the key areas of AI research is designing algorithms capable of comprehending causal information and performing causal reasoning <cit.>. Causal reasoning can be instrumental in formally modeling notions such as responsibility, blame, harm, and explanation, which are important aspects in designing ethical and responsible AI systems <cit.>.
In this article we focus on the kind of causality known as "actual causality" (a.k.a. token causality) <cit.>. Actual causality refers to the causality of a specific event which has actually happened (e.g. "John died because Alice shot him") rather than general causes (e.g. "smoking causes cancer"). Several formal approaches have been used for modelling actual causality <cit.>. One of the most prominent formalizations of actual causation was developed by Halpern and Pearl <cit.>. This model describes dependencies between endogenous variables and exogenous variables using structural equations. Based on causal models Halpern and Pearl have given three different definitions of actual causality known as original, updated and modified definitions <cit.> of actual causality using counterfactual reasoning. The formal language developed to describe actual causality in this model is used to define several notions like normality, blame, accountability and responsibility. This model has been used in several applications in law <cit.>, database theory <cit.>, model checking <cit.>, and AI <cit.>.
Notions like knowledge, temporality, possibility, normality (or typicality) and uncertainty play an important role in causal reasoning and related applications. In the past, attempts have been made to incorporate some of these notions into the causal models of Halpern and Pearl. In <cit.>, Beer et al. define causality in linear temporal logic to explain counterexamples. This line of research has been carried forward in model checking and program verification <cit.>.
The Halpern and Pearl formalism has also been extended to define causality in frameworks such as transition systems and Hennessy-Milner logic <cit.>. In <cit.>, Barbero et al. define causality with epistemic operators. However, to the best of our knowledge, a general Kripke model for actual causality based on the Halpern and Pearl framework has not been studied yet.
In this work, we develop the notion of causal Kripke models and introduce a modal language for causal reasoning with uncertainty, temporality, possibility, and epistemic knowledge. We show that our model can formalize notions like sufficient causality, blame, responsibility, normality, and explanations. Our framework provides a more natural definition of sufficient causality <cit.> by considering nearby contexts, which Halpern's causal model does not support
(cf. <ref>). The developed causal Kripke models offer a straightforward way to describe nearby contexts and define sufficient causality as intended by Halpern. In order to stay as close as possible to Halpern's original framework, where formally only atomic events can be causes, we utilize a hybrid language which not only contains modalities but also names for the possible worlds.
The structure of the paper is as follows. In Section <ref>, we provide preliminaries on causal models and logic of causality. In Section <ref>, we give several examples to motivate the development of causal Kripke semantics. In Section <ref>, we define causal Kripke models, and develop a modal logic of actual causality to reason about them. We generalize the Halpern-Pearl definitions of actual causality to this framework and provide a sound and complete axiomatization of the modal logic of actual causality. In Section <ref>, we model the examples discussed in Section <ref> using our framework and also show how this model can be used to provide an intuitive definition of sufficient causality. Finally, in Section <ref> we conclude and provide some directions for future research.
§ PRELIMINARIES
§.§ Causal models
In this section we briefly recall key concepts and ideas of the standard logic of causal reasoning as presented in <cit.>. A causal model describes the world in terms of variables which take values over certain sets. The variables and their ranges are given by a signature 𝒮=(𝒰, 𝒱, ℛ) where 𝒰 is a finite set of exogenous variables (i.e., variables whose value is independent of other variables in the model), 𝒱, which is disjoint with 𝒰, is a finite set of endogenous variables (i.e., variables whose value is determined by other variables in the model), and ℛ(X) for any X ∈𝒰∪𝒱, is the (finite) range of X. These variables may have dependencies between them described by structural equations defined as follows.
A causal model is a pair (𝒮, ℱ), where 𝒮=(𝒰, 𝒱, ℛ) is the model's signature and ℱ = (f_V_i| V_i ∈𝒱) assigns to each endogenous variable V_i a map such that
f_V_i: ℛ (𝒰∪𝒱 -{V_i}) →ℛ(V_i).
For any variables V ∈𝒱 and X ∈𝒰∪𝒱, we say "X is a direct cause, or a parent, of V" if there exist x, x' ∈ℛ(X) and z∈ℛ( 𝒰∪𝒱 -{X,V}) such that
f_V(z,x) ≠ f_V(z,x'). A causal model is said to be recursive if it contains no cyclic dependencies.
For a causal model M= (𝒮, ℱ), a context t assigns every variable U ∈𝒰 a value in ℛ(U). A causal setting is a pair (M, t), where M is a causal model and t is a context for it.
In recursive models, as there are no cyclic dependencies the values of all endogenous variables are determined by the context. Throughout this paper we only consider recursive causal models.
Let M = (𝒮, ℱ) be some causal model and 𝒴⊆𝒱 be a set of endogenous variables.
Let Y be the injective listing of the variables of 𝒴. Let Y = y be an assignment such that y_i ∈ℛ(Y_i) for every Y_i ∈𝒴. The causal model obtained from intervention setting values of variables of Y to y is given by M_Y←y = (𝒮, ℱ_Y←y) , where ℱ_Y←y is obtained by replacing for every variable Y_i ∈𝒴, the structural equation f_Y_i with Y_i=y_i.
Here, we consider the exogenous variables as given. Thus, we do not allow interventions on them.
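To make the definitions above concrete, the following is a minimal Python sketch of a recursive causal model with interventions; the class name, the variable names, and the fixed-point evaluation strategy are our own illustration and are not part of the formalism.

```python
from typing import Callable, Dict

Env = Dict[str, int]

class CausalModel:
    """A recursive causal model: each endogenous variable V has a structural
    equation f_V mapping an assignment of the remaining variables to a value of V."""

    def __init__(self, context: Env, equations: Dict[str, Callable[[Env], int]]):
        self.context = dict(context)      # values of the exogenous variables
        self.equations = dict(equations)  # f_V for each endogenous variable V

    def intervene(self, setting: Env) -> "CausalModel":
        # [Y <- y]: replace f_Y by the constant equation Y = y for each intervened Y
        new_eqs = dict(self.equations)
        for var, val in setting.items():
            new_eqs[var] = (lambda env, v=val: v)
        return CausalModel(self.context, new_eqs)

    def evaluate(self) -> Env:
        # Repeated passes reach a fixed point because the model is recursive (acyclic)
        env: Env = {v: 0 for v in self.equations}  # placeholder initial values
        env.update(self.context)
        for _ in range(len(self.equations)):
            for var, f in self.equations.items():
                env[var] = f(env)
        return env
```

Because the model is recursive, applying the structural equations repeatedly settles every endogenous variable after at most as many passes as there are variables.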
§.§ Basic language for describing causality
The basic language, L_C, for describing causality is an extension of propositional logic where primitive events are of the form X=x, where X ∈𝒱 is an endogenous variable and x ∈ℛ(X). Given the signature 𝒮=(𝒰, 𝒱, ℛ), the formulas ϕ∈L_C are defined by the following recursion:
α ::= X=x |¬α|α∧α where X ∈𝒱, x ∈ℛ(X)
ϕ::= X=x |¬ϕ|ϕ∧ϕ| [Y←y]α where Y←y is an intervention
For any causal setting (M,t) and formula ϕ∈L_C, the satisfaction relation (M,t) ⊩ϕ is defined as follows. For any formula X=x, (M,t) ⊩ X=x if the value of endogenous variable X is set to x in context t. Satisfaction for the Boolean connectives is defined in a standard manner. Satisfaction of intervention formulas is defined as follows: for any event α, (M,t) ⊩ [Y←y]α iff (M_Y←y,t) ⊩α. This language is used by Halpern and Pearl to provide three different definitions of causality referred as original , updated and modified definitions of causality <cit.> (for details see Appendix <ref>).
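As a usage sketch (reusing the CausalModel class above, with illustrative variable names), checking (M,t) ⊩ [Y←y]α amounts to evaluating the intervened model:

```python
# Toy model with illustrative variable names: A copies the exogenous U, B depends on A
M = CausalModel(
    context={"U": 1},
    equations={
        "A": lambda env: env["U"],
        "B": lambda env: 1 if env["A"] == 1 else 0,
    },
)

assert M.evaluate()["B"] == 1                        # (M, t) |= B = 1
assert M.intervene({"A": 0}).evaluate()["B"] == 0    # (M, t) |= [A <- 0] B = 0
```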
We extend the language L_C using causal formulas.
For any set of variables X⊆𝒱, cause (X=x, α) (resp. cause^u (X=x, α), cause^m (X=x, α)) is a causal formula stating that X=x is a cause of α by the original (resp. updated, modified) definition. Satisfaction for these formulas is defined as follows: we write (M, t) ⊩ cause (X=x, α) (resp. (M, t) ⊩ cause^u(X=x, α), (M, t) ⊩ cause^m (X=x, α)) to denote that X=x is a cause of α in the causal setting (M, t) by the original (resp. updated, modified) definition.
§ MOTIVATION FOR POSSIBLE WORLD SEMANTICS OF CAUSAL MODELS
The basic language for causal reasoning described above uses propositional logic as the language of events and for reasoning with causal formulas. However, we are interested in describing causal reasoning in scenarios that involve notions like possibility, knowledge or belief, temporality, uncertainty and accessibility. Here we provide several such examples.
[Umbrella]
Alice is going on a trip to London. She thinks that it may rain when she is there. Thus, she decides to take her umbrella with her for the trip. In this example, the possibility of raining in London in the future seems to be the cause for Alice taking her umbrella with her.
[Chess]
In a chess game, if the knight and the king are the only pieces that can move but every king move leads to the king getting in check, then the player is forced to move the knight. Suppose that the king cannot move to a certain square because it is covered by a bishop. Then it seems reasonable for the fact that the bishop covers this square to be a cause of the player being forced to move the knight. This example shows that reasoning about causality naturally involves considering possibilities.
[Police]
Suppose John is a criminal who is currently absconding. Inspectors Alice and Bob are trying to catch John. John is currently in Amsterdam. He has a train ticket to Brussels. Thus, his (only) options are to stay in Amsterdam or to take the train to Brussels. Bob decides to go to Brussels to catch John in case he takes the train. John learns this information and decides to stay in Amsterdam, where Alice catches him. In this case, John's belief in Bob's presence in Brussels leads to him staying in Amsterdam. It seems reasonable that this should be part of the cause of John getting caught, even though he was caught by Alice in Amsterdam. This shows that the knowledge of the agents is crucially involved in causal reasoning.
[Robot]
Consider a scenario in which a robot is being commanded by a scientific team. Upon receiving command c, the robot completes task t or malfunctions. In this case the possibility of causing a malfunction may become the cause of not sending command c; i.e., causal reasoning involves scenarios in which the dependencies between different events may be "indeterministic" or "underdetermined". Halpern considers such scenarios in <cit.>, using the notion of probabilities over causal models. However, in certain cases qualitative reasoning in terms of possibilities may be more appropriate.
[Navigation]
Suppose Alice is trying to reach village A. She reaches a marker which indicates that she is at location B, C or D. She does not know in which of these locations she is at. However, she knows that A is to the east of all of these locations. Thus, she decides to go east. Suppose Alice was actually at point B. The fact that A is to the east of locations C and D is still part of Alice's considerations and seems to be part of the cause for her going east.
These examples highlight that notions such as possibility, knowledge and uncertainty play an important role in causal reasoning.
Possible world semantics, formally described by Kripke models, are the natural logical framework for modeling such notions.
In the next section we develop a framework for causal reasoning, based on Kripke frames, which allows for modeling such scenarios in a clear, intuitive and efficient way.
§ POSSIBLE WORLD SEMANTICS FOR CAUSAL REASONING
In this section, we define the causal Kripke model, introduce the modal language for causality and give the corresponding three HP definitions of causality in causal Kripke models. In our framework, we allow the same variable to possibly take different values in different worlds. Moreover, the structural equations treat the same endogenous variable separately for each different possible world.
A causal Kripke model is a tuple 𝒦 = (𝒮,W, R, ℱ), where W is a finite set of possible worlds, R ⊆ W × W is an accessibility relation, and
𝒮=(𝒰, 𝒱, ℛ) is the signature such that 𝒰 and 𝒱 are the disjoint sets of exogenous and endogenous variables, and ℛ is a function assigning each Γ∈𝒰∪𝒱 and a world w∈ W a set of possible values that Γ can take at w, and ℱ = (f_(X_i,w_j)| X_i ∈𝒱, w_j ∈ W) assigns to each endogenous variable X_i and each world w_j a map such that
f_(X_i,w_j): ℛ((𝒰∪𝒱)× W -{(X_i,w_j)}) →ℛ(X_i,w_j).
For any causal Kripke model 𝒦 = (𝒮,W, R, ℱ) we refer to 𝒮 as its signature. For any variable Γ and world w we use (Γ, w) to denote the restriction of variable Γ to the world w. That is, (Γ, w) is a variable which takes a value c iff the propositional variable Γ takes the value c at the world w.
For any Γ∈𝒰 (resp. Γ∈𝒱) and any world w∈ W, we say (Γ,w) is an exogenous (resp. endogenous) variable. Note that we allow the same endogenous variable to have different structural equations associated with it in different worlds.
A context over a causal Kripke model 𝒦 = (𝒮,W,R, ℱ) is a function t that, for any w ∈ W and U ∈𝒰, assigns to (U,w) a value in ℛ(U,w). A causal Kripke setting is a pair (𝒦,t), where 𝒦 is a causal Kripke model and t is a context for it.
For any variables X ∈𝒱 and Γ∈𝒰∪𝒱, and any w,w' ∈ W, we say "(Γ,w') is a direct cause, or a parent, of (X,w)" if there exist γ,γ' ∈ℛ(Γ,w') and z∈ℛ( (𝒰∪𝒱)× W -{(X,w),(Γ,w')}) such that
f_(X,w)(z,γ) ≠ f_(X,w)(z,γ'). A causal Kripke model is said to be recursive if it contains no cyclic dependencies.
In recursive models, as there are no cyclic dependencies, the values of all endogenous variables at all the worlds are completely determined by the context. If 𝒱 only contains binary variables (i.e., variables which take values either 0 or 1), then for any context t and any world w, we use t(w) to denote the set of endogenous variables set to value 1 at w by t. In this paper, we only consider recursive causal Kripke models.
Given a causal Kripke model 𝒦 = (𝒮,W,R, ℱ), an assignment over 𝒦 is a function on some subset 𝒴⊆𝒱× W such that, for every Y =(X,w) ∈𝒴, it assigns some value in ℛ(X,w).
Let 𝒦 = (𝒮,W,R, ℱ) be some causal Kripke model and 𝒴 be a finite subset of 𝒱× W. Let Y be an injective (possibly empty) listing of all the variables in 𝒴. Let Y = y be an assignment such that for any Y_i ∈𝒴, y_i ∈ℛ(Y_i). The causal Kripke model obtained from intervention setting values of variables of Y to y is given by 𝒦_Y←y = (𝒮,W,R, ℱ_Y←y) , where ℱ_Y←y is obtained by replacing for every variable Y_i ∈𝒴, the structural equation f_Y_i with Y_i=y_i.
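The earlier sketch extends to causal Kripke models by keying variables on (variable, world) pairs and carrying the accessibility relation R along; as before, this is only an illustrative rendering of the definitions above, with names of our own choosing.

```python
from typing import Callable, Dict, Set, Tuple

Var = Tuple[str, str]        # (variable name, world name)
KEnv = Dict[Var, int]

class CausalKripkeModel:
    def __init__(self, worlds: Set[str], R: Dict[str, Set[str]],
                 context: KEnv, equations: Dict[Var, Callable[[KEnv], int]]):
        self.worlds = set(worlds)
        self.R = {w: set(R.get(w, set())) for w in worlds}  # accessibility relation
        self.context = dict(context)                        # exogenous (U, w) values
        self.equations = dict(equations)                    # f_(X,w) for endogenous (X, w)

    def intervene(self, setting: KEnv) -> "CausalKripkeModel":
        # [(X,w) <- x]: replace the structural equation of (X,w) by a constant
        new_eqs = dict(self.equations)
        for var, val in setting.items():
            new_eqs[var] = (lambda env, v=val: v)
        return CausalKripkeModel(self.worlds, self.R, self.context, new_eqs)

    def evaluate(self) -> KEnv:
        env: KEnv = {v: 0 for v in self.equations}
        env.update(self.context)
        for _ in range(len(self.equations)):  # converges since the model is recursive
            for var, f in self.equations.items():
                env[var] = f(env)
        return env
```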
§.§ Modal logic language for describing causality
In this section we define the formal logical framework we introduce for describing causality. Since we want to talk about variables whose values depend on the possible world of a Kripke model, our language will be hybrid in character, augmenting the standard language (as presented e.g. in Section <ref>) not only with modal operators but also with a countable set of names, denoted with W. In principle each model M comes with an assignment from W to points of M, however in practice we will often conflate names with elements of Kripke models. The reason we require a countable number of names, even though the models are always finite, is because there is no bound on the size of the models. We will denote the language with L_M(W). We often omit W and write L_M when W is clear from the context. 𝒮 is a given signature, and all X, Y, x, y come from 𝒮. In what follows we will consistently use Y to denote a variable parametrized with a name for a world (i.e. Y=(X,w)). It is important to notice that in our language interventions involve only such variables. Any event α and formula ϕ of the language L_M is defined by the following recursion.
α ::= X=x| (X,w)=x |¬α|α∧α|□α where X ∈𝒱, w ∈ W
ϕ::= X=x | (X,w)=x |¬ϕ|ϕ∧ϕ|□ϕ| [Y←y]α where Y←y is an intervention
In particular, the language L_M has two types of atomic propositions, using variables of the form X and of the form (X,w). The second, the hybrid aspect of our language, provides global information regarding the Kripke model. For any causal Kripke setting (𝒦,t) with
𝒦 = (𝒮,W,R, ℱ), any causal formula ϕ, and any world w ∈ W, we define satisfaction relation ⊩ in the following way. For any primitive event X=x (resp. (X,w')=x), (𝒦,t,w) ⊩ X=x
(resp. (𝒦,t,w) ⊩ (X,w')=x) iff the value of X is set to be x at w (resp. at w') by the context t. Note that the satisfaction of (X,w')=x is independent of the world it is evaluated at. The satisfaction relation for Boolean connectives is defined by standard recursion. For the □ operator,
(𝒦,t, w) ⊩□α iff for all w', w R w' implies (𝒦,t, w') ⊩α.
Let 𝒴⊆𝒱× W be a set of endogenous variables. Satisfaction of intervention formulas is defined as follows: for any event α, (𝒦 ,t, w) ⊩ [Y←y]α iff (𝒦 _Y←y,t, w) ⊩α. Satisfaction for Boolean combinations of causal formulas is defined in a standard manner. For the □ operator,
(𝒦,t, w) ⊩□ϕ iff for all w', w R w' implies (𝒦,t, w') ⊩ϕ.
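Read as code (and assuming the world-indexed valuation and relation R produced by the sketch above), the clauses for □ and its dual ◊ are direct quantifications over R-successors:

```python
def sat_box(R, w, holds_at):
    """(K, t, w) |= box(alpha)  iff  alpha holds at every R-successor of w."""
    return all(holds_at(w2) for w2 in R.get(w, set()))

def sat_diamond(R, w, holds_at):
    """(K, t, w) |= diamond(alpha)  iff  alpha holds at some R-successor of w."""
    return any(holds_at(w2) for w2 in R.get(w, set()))
```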
We now extend the HP definition(s) of causality to the setting of causal Kripke models.
Let α be any event. For Y⊆𝒱× W, Y = y is an actual cause of α in a causal Kripke setting (𝒦,t) at a world w if the following conditions hold.
AC1.
(𝒦,t,w) ⊩α
and for every w_j ∈ W, (𝒦,t,w_j)⊩ (X_i,w_j)=y_ij, for every (X_i, w_j)=y_ij∈Y=y.
AC2a. There exists a partition of 𝒱× W into two disjoint subsets Z and N with Y⊆Z and settings y' and n of variables in Y and N, such that
(𝒦,t,w) ⊩ [Y←y', N←n ] ¬α.
AC2b^o. Let z^∗ be the unique setting of the variables in Z such that (𝒦,t,w)⊩Z = z^∗. If (𝒦,t,w')⊩ X=z^∗, for every (X,w') =z^∗∈Z=z^∗, then for all subsets Z' of Z∖Y we have
(𝒦,t,w) ⊩ [Y←y, N←n, Z' ←z'^∗ ] α. [Here we use the abuse of notation that if Z'⊆Z and Z=z^∗, then z'^∗ in Z'←z'^∗ refers to the restriction of z^∗ to Z'.]
AC3. Y is a minimal set of variables that satisfy AC1 and AC2.
We say that Y = y is an actual cause of α in a causal Kripke setting (𝒦,t) at a world w by updated definition iff AC1, AC2a, AC3 hold and AC2b^o is replaced by the following condition.
AC2b^u. Let z^∗ be the unique setting of the variables in Z such that (𝒦,t,w)⊩Z = z^∗. If
(𝒦,t,w')⊩ X=z^∗, for every (X,w')=z^∗∈Z = z^∗, then for all subsets Z' of Z∖Y and N' of N we have
(𝒦,t,w) ⊩ [Y←y, N' ←n, Z' ←z'^∗ ] α.
We say that Y = y is an actual cause of α in a causal Kripke setting (𝒦,t) at a world w by modified definition iff AC1, AC3 hold and AC2 is replaced by the following condition.
AC2a^m. There exists a set of variables N⊆𝒱× W and a setting y' of the variables in Y such that if n^∗ is such that
(𝒦,t,w')⊩ X=n^∗, for every (X,w')=n^∗∈N=n^∗, then
(𝒦,t,w) ⊩ [Y←y', N←n^∗ ] ¬α.
We will refer to these definitions as original, updated, and modified definitions henceforth. Example <ref> shows that these definitions do not in general coincide. Theorem <ref>, which relates these definitions in causal models, can be generalized to causal Kripke models in a straightforward manner (see Theorem <ref>).
For any set of variables 𝒴⊆𝒱× W, we use cause^o (Y=y, α) (resp. cause^u (Y=y,α), cause^m (Y=y,α)) as an abbreviation for stating Y=y is a cause of α by the original (resp. updated, modified) definition.
We write (𝒦,t,w) ⊩ cause^o(Y=y, α) (resp. (𝒦,t,w) ⊩ cause^u(Y=y, α), (𝒦,t,w) ⊩ cause^m (Y=y,α)) as an abbreviation for stating that Y=y is a cause of α in the causal Kripke setting (𝒦,t) at a world w by the original (resp. updated, modified) definition. Moreover, for x=o,u,m, we write (𝒦,t,w) ⊩□ cause^x (Y=y, α) if for all w' such that w R w', (𝒦,t,w') ⊩ cause^x(Y=y, α), and (𝒦,t,w) ⊩◊ cause^x(Y=y, α) if there exists w' such that w R w' and (𝒦,t,w') ⊩ cause^x(Y=y, α).
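For small finite models, the counterfactual clause AC2a^m can be checked by brute force: enumerate candidate witness sets N and alternative settings of Y. The sketch below assumes the CausalKripkeModel class sketched earlier, binary variables, and an event α given as a predicate on valuations; it checks only AC2a^m, not the minimality condition AC3, and the exhaustive search is exponential, so it is intended only for toy models like the examples in this paper.

```python
from itertools import combinations, product

def ac2am_holds(K, Y, alpha, w):
    """AC2a^m (binary variables): is there a witness set N, frozen at its actual
    values, and an alternative setting of Y under which alpha fails at w?"""
    actual = K.evaluate()
    others = [v for v in K.equations if v not in Y]
    for y_alt in product([0, 1], repeat=len(Y)):
        if list(y_alt) == [actual[v] for v in Y]:
            continue  # the actual setting cannot falsify alpha (it holds by AC1)
        for k in range(len(others) + 1):
            for N in combinations(others, k):
                setting = {v: y_alt[i] for i, v in enumerate(Y)}
                setting.update({v: actual[v] for v in N})  # hold N at actual values
                if not alpha(K.intervene(setting).evaluate(), w):
                    return True
    return False
```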
§.§ Axiomatization
In <cit.>, Halpern provides a sound and complete axiomatization for the logic of causality. This axiomatization can be extended to the modal logic of causality by adding the following axioms to the axiomatization in <cit.>:
* All substitution instances of axioms of basic modal logic K.
* Necessitation rule: from ϕ infer □ϕ
* □-axiom : [Y←y] □ϕ⇔□ [Y←y]ϕ and ◊-axiom : [Y←y] ◊ϕ⇔◊ [Y←y]ϕ
* G-axiom: ◊([Y←y](X,w)=x) ⇒□([Y←y](X,w)=x )
Notice that, similar to the axiomatization in <cit.>, the schemes □-axiom, ◊-axiom, and G-axiom include empty interventions. When importing the axioms from <cit.>, axiom scheme C5 involves only variables of the form (X,w). For the axiom schemes C1-4 and C6, the axioms involve atoms both of the form X=x and of the form (X,w)=x. Notice also that the G-axiom is similar to axioms in hybrid modal logic.
Since the language in <cit.> is finite (modulo classical tautologies), weak and strong completeness coincide. However our language is countable (given that W is countable). Since there is no upper bound on the size of the models, we cannot hope to have strong completeness w.r.t. finite models. However the axioms presented in this section are sound and weakly complete. In Appendix <ref> we provide the proofs of soundness and weak completeness w.r.t. the modal logic of causality.
§ EXAMPLES AND APPLICATIONS
In this section, we analyze the examples discussed in Section <ref> using causal Kripke models. For any endogenous variable X, and any world w, we use Eq(X,w) to denote the structural equation for X at w. Throughout this section, we use U to denote exogenous variables only.
[Umbrella] Let 𝒮 be the signature with endogenous variables p, q, and r standing for `it rains in London', `Alice adds her umbrella to the luggage', and `Alice is in London', respectively. Let w_0 be the current world and w_1, w_2, w_3 be the future possible worlds considered by Alice. Let W ={w_0,w_1,w_2, w_3} and R= {(w_0,w_1), (w_0,w_2),(w_0,w_3)}. Let U =(U_1,U_2)∈{0,1}^2 be such that (p,w)=(U_1,w) and (r,w)=(U_2,w) for any w ∈ W. Let Eq(q,w) = ◊(p ∧ r) for all w, i.e., Alice puts her umbrella in her luggage if she thinks it is possible that in the future she will be in London and it rains there.
Let t be a context such that U is set to be (0,0),(0,1), (1,0) and (1,1) at the worlds w_0, w_1, w_2 and w_3 respectively. We have t (w_0)= {q}, t (w_1)= {r}, t (w_2)= { p}, t (w_3)= { p,r}. Here, (p,w_3)=1 and (r,w_3)=1 are both causes of q=1 at w_0 by all three definitions.
We show that (p,w_3)=1 is a cause by the original and updated definitions. The proof for (r,w_3)=1 is analogous.
Indeed, as (𝒦,t,w_3)⊩ (p,w_3)=1, (𝒦,t,w_3)⊩ (r,w_3)=1
and (𝒦,t,w_0) ⊩ q, AC1 is satisfied. Let Z = {(r,w_3),(p,w_3)} and N = ∅, Y ={(p,w_3)}, and y' =0. Then from the structural equation as no world related to w_0 satisfies p ∧ r under this intervention, we have
(𝒦,t,w_0) ⊩ [Y← 0] ¬(q=1) and (𝒦,t,w_0) ⊩ [Y← 1, Z'←z^∗ ] q=1.
where Z'=(r, w_3), z^∗=(1,1) as described by the context.
Thus, AC2 is satisfied and AC3 is trivial as we are considering a single variable. The updated definition in this case is equivalent to the original definition as N =∅. The modified definition is satisfied for the same setting y' =0 and N = {(r,w_3)}.
Thus, the fact that it rains in the world w_3 is a cause of Alice carrying her umbrella by all three definitions.
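This reasoning can be replayed mechanically with the CausalKripkeModel sketch given earlier (an assumption of ours, not code from the paper); for brevity, p and r are supplied directly by the context rather than through separate exogenous variables, and ◊ is unfolded into an explicit quantification over R-successors.

```python
worlds = {"w0", "w1", "w2", "w3"}
R = {"w0": {"w1", "w2", "w3"}, "w1": set(), "w2": set(), "w3": set()}

# Context t: values of (p, r) at each world, as in the example
context = {("p", "w0"): 0, ("r", "w0"): 0,
           ("p", "w1"): 0, ("r", "w1"): 1,
           ("p", "w2"): 1, ("r", "w2"): 0,
           ("p", "w3"): 1, ("r", "w3"): 1}

# Eq(q, w) = diamond(p and r): q holds at w iff some R-successor of w satisfies p and r
def eq_q(w):
    return lambda env: int(any(env[("p", v)] and env[("r", v)] for v in R[w]))

K = CausalKripkeModel(worlds, R, context, {("q", w): eq_q(w) for w in worlds})

assert K.evaluate()[("q", "w0")] == 1                                              # AC1
assert K.intervene({("p", "w3"): 0}).evaluate()[("q", "w0")] == 0                  # AC2a
assert K.intervene({("p", "w3"): 1, ("r", "w3"): 1}).evaluate()[("q", "w0")] == 1  # AC2b^o
```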
Now consider a slight variation of the above example in which we are sure Alice will be in London and do not include r as a variable in our analysis. In this case, the structural equation for q is given by q= ◊p. Note that, in this new model, we can argue in a way similar to the above example that for any world w' accessible from w_0, (p,w')=1 would be a part of the cause of Alice carrying her umbrella at w_0 by all three definitions. This can be interpreted as the fact that 'Alice considers the possibility of a future world in which it rains in London and she will be in London' is a part of the cause of her adding the umbrella to her luggage.
In general, for any event α and endogenous variable X we say that "the possibility of X=x" is a cause of α at w iff ⋀{(X,w')=x | w R w' & (𝒦,t, w') ⊩X=x} is a cause of α at w by the modified definition. Here, we use the modified definition because, as mentioned by Halpern <cit.>, a conjunction being a cause by the modified definition can in fact be interpreted as the cause being disjunctive, i.e., the disjunction of the conjuncts can be interpreted as the cause of the event. Hence, under this interpretation, the existence of some w' which is accessible from w_0 and where X=x holds is a cause of α. This can be interpreted as "the possibility of X=x" being a cause of α. Thus, in the variation of the example discussed in the above paragraph, we can say that the possibility of a world where it rains in London is a cause of Alice adding her umbrella to her luggage.
[stalemate]
Let 𝒮 be the signature with endogenous variables p, q and r standing for `The king is in check', `The king and the knight are the only pieces that can move in the current position' and `The player is forced to move the knight'. Let w_0 be the current position. Let w_1 and w_2 be the positions obtained from the (only) available moves by the king. Let W={w_0, w_1, w_2} and R= {(w_0,w_1), (w_0,w_2)}.
Let U =(U_1,U_2)∈{0,1}^2 be such that (p,w)=(U_1,w) and (q,w)=(U_2,w) for any w ∈ W. Let Eq(r,w)= ¬p ∧ q ∧□ p at any w, i.e., the player is forced to move the knight if the king and the knight are the only pieces that can move, the king is not in check, and the king's every available move leads to the king being in check.
Let t be a context such that U is set to be (0,1),(1,1), and (1,0) at the worlds w_0, w_1, and w_2 respectively. We have t (w_0)= {q,r}, t (w_1)= {p,q}, t (w_2)= {p}. In the same way as in the last example, we can show that (p,w_0)=0, (q,w_0)=1, (p,w_1)=1 and (p,w_2)=1 are all causes of r=1 at w_0 by all three definitions. This can intuitively be read as saying that the certainty of the king ending up in check regardless of how it moves (while not currently being in check) is a part of the cause of being forced to move the knight.
In general, for any variable X, and any event α we say that the certainty of X=x is a cause of α at w
iff (X,w')=x is a cause of α at w for all w R w' by the modified definition.
[Police]
Let 𝒮 be the signature with endogenous variables p, q, r, and s standing for `Inspector Bob is in Brussels', `Inspector Alice is in Amsterdam', `John takes the train', and `John is caught in Amsterdam'. Let w_0 be the current world and w_1, w_2 be the possible future worlds considered by John. Let W ={w_0, w_1,w_2} and R= {(w_0,w_1), (w_0,w_2)}.
Let U ∈{0,1}^2 be such that (p,w)=(U_1,w) and (q,w)=(U_2,w) for any w ∈ W. Let Eq (r,w_0)= ◊¬ p and Eq (r,w_1)=Eq (r,w_2)= 1, i.e., John takes the train if there is a possible future scenario in which inspector Bob is not in Brussels (John considers future scenarios when he takes the train). Let Eq (s,w_0)= q ∧¬ r (John gets caught in Amsterdam if Alice is there and he does not take the train) and Eq(s,w_1)=Eq (s, w_2)=0 (John considers future scenarios in which he is not caught).
Let t be a context such that U is set to be (1,1) at all the worlds. We have t (w_0)= {p,q,s} and t (w_1)= t (w_2) = {p,q,r}. It is easy to check that (p,w_1)=1 and (p,w_2)=1 are both causes of s=1 at w_0 by all three definitions. Thus, the fact that Bob is present in Brussels in all possible worlds considered by John is a cause of John getting caught in Amsterdam. If we assume that John knows Bob is in Brussels iff he is actually in Brussels, then we can say that Bob's presence in Brussels is a cause of John getting caught in Amsterdam.
Here we have assumed that John's knowledge of Bob's presence in Brussels is the same as Bob being actually present in Brussels. However, this need not be the case always.
Now consider a slightly more complicated version of the same story in which we have another endogenous variable o standing for `John lost his ticket'. Suppose John does not take the train if he loses the ticket or he knows inspector Bob is in Brussels, i.e., Eq(r,w_0)= ¬(□ p ∨ o). The other structural equations remain the same. Let t' be a context that sets variables p, q to the same values as t at all the worlds and sets o to be 1 at w_0. In this case (p,w_1)=1, (p,w_2)=1 and (o,w_0)=1 are all the causes of s=1 at w_0 by all three definitions. Now consider a slightly different context t” such that p is set to be true at worlds w_0 and w_1, q at worlds w_0, w_1, and w_2 and o at world w_0. In this case, we again have r=0 and s=1 at w_0. However, only (o,w_0)=1 is a cause for s=1 at w_0 by all three definitions. Now suppose that Bob actually did go to Brussels, however John does not know this information and thinks that there is a possibility that Bob may not be in Brussels. The only reason he does not take the train is that he lost the ticket. Thus, Bob being present in Brussels is not a cause of John getting caught in Amsterdam in this case. This shows that the knowledge John has about the presence of Bob in Brussels (and not just the presence itself) is an important part of causal reasoning.
[Robot]
Let 𝒮 be the signature with endogenous variables p, q and r standing for `The command c is sent by the scientific team', `The task t is completed by the robot' and `The robot malfunctions'. Let w_0 be the current world, the world in which the scientific team is reasoning. Let w_1 and w_2 be the possible worlds considered by the team. Let W= {w_0,w_1, w_2} and R= {(w_0,w_1), (w_0,w_2)}.
Suppose Eq(q,w_1)= ⧫ p, Eq(r,w_1)=0, Eq(r,w_2)= ⧫ p, Eq(q,w_2)=0, and Eq (q,w_0)=Eq(r,w_0)=0, where ⧫ is the diamond operator corresponding to the relation R^-1, i.e., there are two possible scenarios. In one scenario sending command c leads to the completion of task t and no malfunctioning, while in the other it leads to the malfunctioning of the robot and task t is not completed.
Let U ∈{0,1} be such that (p,w_i)=(U,w_i). Let t be a context such that U is set to be 1 in all the worlds. We have t (w_0)= {p}, t (w_1)= {p,q}, t (w_2)= {p,r}. It is easy to see that (p,w_0)=1 is the cause of r=1 at w_2 but not at w_1 (it is not even true at w_1) by all three definitions. Suppose the scientific team believes that if sending the command can possibly cause a malfunction, then the command should not be sent. Then, as (𝒦,t,w_0) ⊩◊ cause ((p,w_0)=1, r=1) (for all three definitions), the team will decide not to send the command. On the other hand, if the team believes that the command should be sent if it can possibly cause the completion of the task, then it must be sent, as (𝒦,t,w_0) ⊩◊ cause ((p,w_0)=1, q=1), i.e., sending the command can possibly cause the completion of task t.
[Navigation]
Let 𝒮 be the signature with endogenous variables p_x, q and r standing for `The current location of Alice is x' for x=B,C,D, `Point A is to the east of Alice's current location' and `Alice moves to the east'. Alice does not know if she is at point B, C or D. Let w_1, w_2 and w_3 be possible worlds and U ∈{0,1}^4 be such that (p_B,w)=(U_1,w), (p_C,w)=(U_2,w), (p_D,w)=(U_3,w) and (q,w)=(U_4,w) for any w ∈ W.
Let t be a context such that U is set to be (1,0,0,1), (0,1,0,1), and (0,0,1,1) at worlds w_1, w_2 and w_3. We have t (w_1)= {p_B,q,r}, t (w_2)= {p_C,q,r}, and t (w_3)= {p_D,q,r}. Worlds w_1, w_2 and w_3 here represent the possible scenarios considered by Alice. With the currently available knowledge these worlds are indistinguishable from each other for Alice. This can be represented by R= W × W. At any world w, Eq(r,w)= □ q, i.e., Alice moves to the east iff she knows point A is to the east of her current location. Here, in the world w_1 in which Alice is at B (the real situation), (q,w_1)=1, (q,w_2)=1 and (q,w_3)=1 are all causes of (r,w_1)=1 by all three definitions. Thus, the fact that A is to the east of point C or point D is also a cause of Alice moving east even though she is not present there.
These examples show that causal Kripke models can be used to model several different scenarios involving causality interacting with notions like possibility, knowledge and uncertainty.
§.§ Sufficient causality
Halpern discusses the notion of sufficient causality in <cit.> to model the fact that people's reasoning about causality depends on how sensitive the causality ascription is to changes in various other factors. “The key intuition behind the definition of sufficient causality is that not only does X=x
suffice to bring about ϕ in the actual context,
but it also brings it about in other “nearby” contexts. Since the framework does not provide a metric on contexts, there is no obvious way to define nearby context. Thus, in the formal definition below, I start by considering all contexts.”<cit.> Sufficient causality is thus defined in <cit.> using Definition <ref>.
We can use the framework of causal Kripke models to define sufficient causality for a causal setting (M, u) in terms of nearby contexts instead of all the contexts (as suggested by Halpern) in the following way. We consider the causal Kripke model 𝒦 = (𝒮,W,R, ℱ), where 𝒮 is the signature of M, W is the set of all the possible contexts on M, and ℱ is the set of structural equations such that for any structural equation for the endogenous variable X, Eq(X,w) is the same as the structural equation for X in M and the relation R ⊆ W × W is such that u R u' iff context u' is nearby u. Let t be the setting of exogenous variables so that for any possible world the endogenous variables are set by the context identifying that world.
Let X =x be as in Definition <ref> and let Y = X× W. For any Y=(X,u), Y=y iff X is set to be x by the context u. Let Y←y be the intervention setting X to x in all the possible contexts. In this structure we can describe sufficient causality in terms of nearby contexts by replacing the clause SC3 in Definition <ref> with the condition
for all u' such that u R u', (𝒦, t, u') ⊩ [Y←y] α, or equivalently, (𝒦, t, u) ⊩□ [Y←y] α.
We call this property SC3-local as we only require that the intervention [Y←y] makes α true in nearby (not all) contexts.
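In code, SC3-local is simply a □-check of the intervention formula over the R-neighbours of the actual context; a sketch, reusing the CausalKripkeModel class from earlier and treating α as a predicate on valuations:

```python
def sc3_local(K, X_setting, alpha, u):
    """SC3-local: [X <- x] alpha must hold at every nearby context u' with u R u'."""
    vals = K.intervene(X_setting).evaluate()
    return all(alpha(vals, u2) for u2 in K.R.get(u, set()))
```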
In this section, we have only considered causal Kripke models with a single relation, namely the "nearby" relation. However, we can also consider causal Kripke models with multiple relations, in which we have a nearby relation N on the worlds in addition to the other accessibility relations denoting relationships between worlds such as time, indistinguishability, etc. The definition of sufficient causality discussed above can naturally be extended to this setting, allowing us to describe sufficient causality in the setting of causal Kripke models. Here, we do not go into the details of this generalization, but we believe it would be an interesting direction for future research.
§ CONCLUSIONS AND FUTURE DIRECTIONS
In this paper, we have developed a possible world semantics for reasoning about actual causality. We develop a modal language and logic to formally reason in this framework. This language is used to generalize the HP definitions of actual causality for this framework. We provide a sound and complete axiomatization of the modal logic of causality developed, and give a number of examples to illustrate how our model can be used to reason about causality. Finally, we show that our framework allows us to define the intended notion of sufficient causality in a straightforward and more intuitive manner.
This work can be extended in several directions. First, results regarding the relationship of the HP definitions with but-for causality <cit.>, and transitivity of cause <cit.>, can be generalized to our modal setting. Secondly, we can allow for interventions on the relation R in causal Kripke models. Indeed, in many scenarios intuitively the cause for some event is accessibility to some possible world. Allowing interventions on R would allow us to model such scenarios. Finally, similar to sufficient causality, other notions related to actual causality like normality (or typicality) and graded causation can be described in more nuanced and possibly multiple ways using the causal modal language.
§ HP DEFINITIONS OF CAUSALITY AND SUFFICIENT CAUSALITY
Let α be an event obtained by Boolean combination of primitive events. Let 𝒳⊆𝒱 be a set of endogenous variables. X = x is an actual cause of α in a causal setting (M, t) if the following conditions hold.
AC1. (M, t) ⊩X = x and (M, t) ⊩α.
AC2a. There is a partition of 𝒱 into two disjoint subsets Z and 𝒲 with X⊆Z and a setting x' and w of variables in 𝒳 and 𝒲, respectively, such that
(M, t) ⊩_p [X←x', W←w ] ¬α.
AC2b^o. If z^∗ is such that (M, t) ⊩Z=z^∗, then for all subsets Z' of Z∖𝒳 we have
(M, t) ⊩_p [X←x, W←w, Z' ←z^∗ ] α.
AC3. 𝒳 is a minimal set of variables that satisfies AC1 and AC2.
We say that X = x is an actual cause of α in a causal setting (M, t) by updated definition iff AC1, AC2a, AC3 hold and AC2b^o is replaced by the following condition.
AC2b^u. If z^∗ is such that (M, t) ⊩Z=z^∗, then for all subsets Z' of Z∖𝒳 and 𝒲' of 𝒲 we have
(M, t) ⊩_p [X←x, W' ←w, Z' ←z^∗ ] α.
We say that X = x is an actual cause of α in a causal setting (M, t) by modified definition iff AC1, AC3 hold and AC2 is replaced by the following condition.
AC2a^m. There exists a set of variables 𝒲⊆𝒱 and a setting x' of the variables in X such that if (M, t) ⊩W = w^∗, then
(M, t) ⊩_p [X←x', W←w^∗ ] ¬α.
Sufficient causality is defined as follows.
X=x is a sufficient cause of α in the causal setting (M, u) if the following
conditions hold:
SC1. (M, u) ⊩X=x and (M, u) ⊩α.
SC2. Some conjunct of X=x is part of a cause of α in (M, u). More precisely, there exists a conjunct X=x of X=x and another (possibly empty) conjunction Y=y such that X =x ∧Y=y is a cause of α in (M, u); i.e., AC1, AC2, and AC3 hold for X =x ∧Y=y.
SC3. (M, u') ⊩ [X←x] α for all contexts u'.
SC4. X is a minimal set of variables satisfying the above properties.
§ RELATIONSHIP BETWEEN THREE HP DEFINITIONS OF CAUSALITY
In <cit.>, Halpern gives examples to show that the three HP definitions do not coincide with each other. Here we give one example to show that the modified definition may not coincide with the original and updated definitions in causal Kripke models. A similar example can be given to show that the original and updated definitions do not coincide.
[stalemate detailed]
We consider the following variation of Example <ref>. Let 𝒮 be the signature with endogenous variables p_1 and p_2 instead of p (keeping the other endogenous variables unchanged), standing for 'The king is in check by the opponent's queen' and 'The king is in check by the opponent's king'. Let U =(U_1,U_2,U_3) ∈{0,1}^3 be such that (p_1,w)=(U_1,w), (p_2,w)=(U_2,w), and (q,w)=(U_3,w) for any w ∈ W. Let the structural equation for r at w_0 be given by r = ¬(p_1 ∨ p_2) ∧ q ∧□(p_1 ∨ p_2), i.e., the player is forced to move the knight if the only pieces that can move are the king and the knight, the king is not in check, and every possible king move leads to the king being in check by the opponent's king or queen (we assume there are no other pieces on the board). Let t be a context such that U is set to be (0,0,1),(1,1,1), and (0,1,0) at the worlds w_0, w_1, and w_2 respectively. We have t (w_0)= {r}, t (w_1)= {p_1,p_2,q}, t (w_2)= {p_2}. In the same way as in the last example we can show that (p_1,w_0)=0, (p_2,w_0)=0, (q,w_0)=1, (p_1,w_1)=1, (p_2,w_1)=1 and (p_2,w_2)=1 are all causes of r=1 at w_0 by the original and updated definitions. However, in the case of the modified definition, neither (p_1,w_1)=1 nor (p_2,w_1)=1 is a cause, but (p_1,w_1)=1 ∧ (p_2,w_1)=1 is a cause of r=1 at w_0. To see this, notice that for any choice of N we will always have
(𝒦,t,w_0) ⊩ [(p_1,w_1) ←x', N←n^∗ ] r=1.
for any choice of x'. Thus, (p_1,w_1)=1 is not a cause of r=1 at w_0 by the modified definition. A similar argument holds for (p_2,w_1)=1. That (p_1,w_1)=1 ∧ (p_2,w_1)=1 is a cause is shown by setting N=∅ and x' =(0,0). Thus, the three definitions of causality need not always match in causal Kripke models.
The following theorem describes relationship between the three HP definitions in the causal models.
For any event α and any variable X, X=x is a part of a cause of α by the original (resp. updated, modified) definition of causality if it is a conjunct of a cause of α by the original (resp. updated, modified) definition.
If X=x is a part of cause of α in (M,u) according to
* the modified HP definition then X=x is a part of cause of α in (M,u) according to the original HP definition .
* the modified HP definition then X=x is a part of cause of α in (M,u) according to the updated HP definition.
* the updated HP definition then X=x is a part of cause of α in (M,u) according to the original HP definition.
Now, we generalize this result to our framework of causal Kripke models.
For any event α, any variable X, and any world w', (X,w)=x is a part of a cause of α at w' by the original (resp. updated, modified) definition of causality if it is a conjunct of a cause of α at w' by the original (resp. updated, modified) definition.
If (X,w)=x is a part of a cause of α in (𝒦,t) at w' according to
* the modified HP definition then (X,w)=x is a part of cause of α in (𝒦,t) at w' according to the original HP definition.
* the modified HP definition then (X,w)=x is a part of cause of α in (𝒦,t) at w' according to the updated HP definition.
* the updated HP definition then (X,w)=x is a part of cause of α in (𝒦,t) at w' according to the original HP definition.
For item 1, let (X,w)=x be a part of a cause of α in (𝒦,t) at a world w' according to the modified HP definition, so that there is a cause Y=y such that (X,w)=x is one of its conjuncts. Then there must exist a setting y' of the variables in Y and a set N⊆𝒱× W ∖Y, such that if (𝒦,t,w) ⊢ X= n^∗ for every (X,w)=n^∗∈N=n^∗, then (𝒦,t,w') ⊢ [Y←y', N←n^∗]¬α. Moreover, Y is minimal.
We will show that (X,w)=x is a cause of α. If Y ={(X,w)}, then the original HP definition is satisfied by (N, n^∗, x') given by the condition AC2a^m. If |Y|>1, then without loss of generality let Y = ((X_1,w_1), (X_2,w_2), ⋯, (X_n,w_n)) and (X,w)=(X_1,w_1). For any vector Y, we use Y_-1 to denote all components of Y except the first.
We will show that (X_1,w_1)=x_1 is a cause of α in (𝒦,t) at w' according to the original definition. Since Y=y is a cause of α in (𝒦,t) at w' according to the modified definition, by AC1 (𝒦,t, w_1) ⊢ X_1=x_1 and (𝒦,t, w') ⊢α. Let N' = (Y_-1,N), n^∗' = (y'_-1,n^∗), and x_1'=y_1', where y' is as given by the modified definition. It is easy to see that (𝒦,t,w')⊢ [(X_1,w_1) ← x_1', Y_-1←y'_-1, N←n^∗] ¬α, satisfying condition AC2a. Since (X_1,w_1) is a single variable, AC3 holds trivially. Thus, to complete the proof of item 1 we need to show that AC2b^o holds.
by setting N = ((X_1,w_1), N, Z') and n^∗ = (x_1, n^∗, z^∗) and t' = y'_-1 violating AC3 for Y=y. i.e. , Y=y is not a minimal cause by the modified definition as the conjunct obtained by removing (X_1,w_1)=x_1 from it is still a cause by the modified definition.
This is a contradiction. Therefore, AC2b^o is valid.
For item 2, the proof is similar in spirit. In addition to item 1, we need to show that if Y'⊆Y_-1, N'⊆N, and Z'⊆Z, then
(𝒦,t,w') ⊢ [(X_1,w_1)← x_1, Y'←y'_-1, N'←n^∗, Z'←z^∗ ] α.
If Y'=∅, then the condition holds since (X,w)=x is a cause of α according to the original definition by item 1. In case this condition does not hold for some non-empty Y'⊆Y_-1, then Y = y does not satisfy the minimality condition AC3 of the modified HP definition (in causal Kripke models).
For item 3, the proof is the same as for item 1, up to the point where we have to prove AC2b^o. Suppose there exists Z'⊆Z such that
(𝒦,t,w') ⊢ [(X_1,w_1)← x_1, Y_-1←y'_-1, N←n^∗, Z'←z^∗ ] ¬α;
then Y_-1=y_-1 satisfies AC2a and AC2b^u. Thus, Y = y does not satisfy the minimality condition AC3 for the updated definition. Hence proved.
§ SOUNDNESS AND COMPLETENESS
In this section we provide the proof of soundness and (weak) completeness of the axiomatization given in Section <ref>. The proof is a modification of the proof provided in <cit.> to include the modal operators.
Showing that the axiomatization is sound is routine. It is straightforward to verify that all the axioms except the G-axiom are valid and that modus ponens and necessitation preserve validity. For the G-axiom, note that the truth of the formula (X,w)=x is independent of the world w' at which it is evaluated. Thus, if it is true at some world, then it is true at all worlds, and in particular at all related worlds.
To prove that the axiomatization is weakly complete, we show contrapositively that if ⊬ψ, then there exists a model satisfying ¬ψ. As usual, starting with a consistent formula φ, we obtain a maximal consistent set Σ containing all axioms such that φ∈Σ, which is closed under conjunction and logical consequence and enjoys the disjunction property (see also the proof of Theorem 5.4.1 in <cit.>).
Before moving to the details of the proof, we provide a high-level presentation of the argument, to help the reader follow: Given a consistent set of formulas, we can extract the formulas that do not contain modalities. Treating the variables (X,w) and (X,w) (where w≠ w') as simply distinct variables, this set can be seen as a consistent set of formulas for the standard logic of causality presented in <cit.>, because the axioms in Section <ref> strictly extend the axioms of the logic in <cit.>. Then, by the completeness presented in <cit.>, we get a set of structural equations, which readily provides a set of structural equations over a Kripke model with the empty relation (where (X,w) and (X,w') are now interpreted as the same variable at different points in the Kripke model). By the soundness of the axioms in Section <ref>, the set of non-modal formulas at each such state is consistent. These consistent sets guarantee that the “canonical” model we construct has enough points to interpret all the names that appear in our finite set of formulas. The proof then follows a standard filtration argument to show in the usual way the truth lemma for modal formulas.
Let φ be such that ⊬¬φ. Let us define W_φ:={w∈ W| w appears in φ} and S_φ:={ψ, ◊ψ|ψ is a subformula of φ}. Now let Σ be a maximal consistent set of formulas of 𝐋(W_φ) that contains φ. Given axiom C6 in <cit.> and the □-axiom and ◊-axiom, we can assume without loss of generality that all formulas are generated from [Y←y]X=x and [Y←y](X,w)=x using the connectives ∧, ∨, ◊, and ¬. Notice that Σ "decides" the value of the variables (X,w) for every w∈ W_φ (that is to say, (X,w)=x∈Σ for some x∈ℛ(X)). Consider the set B={[Y←y](X,w)=x∈𝐋(W_φ)| [Y←y](X,w)=x∈Σ}. By the completeness result in <cit.>, it follows that there exists a system of structural equations satisfying the non-modal formulas of Σ. Using this system we can define a causal Kripke model with domain W_φ and empty Kripke relation. By the soundness of this system it follows that for every w∈ W_φ the set Σ_w:={[Y←y]X=x| [Y←y](X,w)=x∈Σ}∪ B (note that Σ_w includes B) is consistent and hence can be extended to a maximal consistent set Σ'_w.
Let S=S_φ∪{[Y←y](X,w)=x,[Y←y]X=x∈𝐋(W_φ)}. Notice that S is finite. Define an equivalence relation on maximal consistent sets extending B of 𝐋(W_φ) by T_1∼ T_2 if and only if T_1∩ S=T_2∩ S. Given that S is finite, there exist finitely many equivalence classes. Let 𝕎 be the set of equivalence classes, and let ℜ⊆𝕎×𝕎 be defined as C_1ℜC_2 if and only if there exist T_1∈ C_1 and T_2∈ C_2 such that for all ψ∈ T_2, ◊ψ∈ T_1. Define a name assignment i such that i(w)=[Σ'_w] for w∈ W_φ, and arbitrarily otherwise. Finally, define the structural equations, depending only on variables of W_φ, exactly as in <cit.>. In particular, the equations are independent of variables in W∖ W_φ, and f_(X,w)(y)=x if and only if [Y←y]X=x∈ T for any T∈ i(w) (given that [Y←y]X=x∈ S, this is well defined).
We claim that t,[T]⊩ψ if and only if ψ∈ T, for every ψ∈ S, and maximal consistent set T.
The proof proceeds via induction on the complexity of the formulas. For formulas of the form [Y←y]X=x, [Y←y](X,w)=x, and for logical connectives the proof is verbatim the same as that of <cit.>.
Finally, let us show this for the case when ψ is of the form ◊σ.
First, assume that t,[T]⊩◊σ. Then there exists C∈𝕎 such that [T]ℜC and t,C⊩σ. By the induction hypothesis, σ∈ T' for every T'∈ C. Since ◊σ∈ S, by the definition of ℜ, it follows that ◊σ∈ T.
For the converse direction, assume that ◊σ∈ T. Notice preliminarily that, since (◊⊤∧□ p)⇒◊ p is a theorem of classical normal modal logic, (◊⊤∧□([Y←y](X,w)=x))⇒◊[Y←y](X,w)=x is provable in our system. From the G-axiom, this implies that also
(◊⊤∧ ([Y←y](X,w)=x))⇒◊[Y←y](X,w)=x
is provable. Consider the set Z_T={τ∈𝐋(W_φ)|◊τ∉ T}. Clearly Z_T is an ideal of the free Boolean algebra of the logic. Given that B⊆ T and ◊σ∈ T, it follows that ◊⊤∈ T, and by (<ref>) it follows that ◊τ∈ T for every τ∈ B, and so B∩ Z_T=∅. Hence there exists a maximal consistent set T', extending B∪{σ}, such that T'∩ Z_T=∅. By definition [T]ℜ[T'], and hence t,[T]⊩◊σ, as required.
The proof that the model is recursive again follows the proof of <cit.>, using the fact that our Kripke frame is finite.
§.§ Causal diagrams
[Figure: causal diagram with a central node (q,w_0) and surrounding nodes (p,w_1), (r,w_1), (p,w_2), and (r,w_2) (the latter pair drawn twice); all edges are directed from the surrounding nodes into (q,w_0).]
|
http://arxiv.org/abs/2307.04942v1 | 20230711001445 | Benchmarking Algorithms for Federated Domain Generalization | [
"Ruqi Bai",
"Saurabh Bagchi",
"David I. Inouye"
] | cs.LG | [
"cs.LG"
] |
Benchmarking Algorithms for Federated Domain Generalization
Ruqi Bai, Saurabh Bagchi, David I. Inouye
July 2023
=================================================================================
While prior domain generalization (DG) benchmarks consider train-test dataset heterogeneity, we evaluate Federated DG which introduces federated learning (FL) specific challenges.
Additionally, we explore domain-based heterogeneity in clients' local datasets—a realistic Federated DG scenario.
Prior Federated DG evaluations are limited in terms of the number or heterogeneity of clients and dataset diversity.
To address this gap, we propose a Federated DG benchmark methodology that enables control of the number and heterogeneity of clients and provides metrics for dataset difficulty.
We then apply our methodology to evaluate 13 Federated DG methods, which include centralized DG methods adapted to the FL context, FL methods that handle client heterogeneity, and methods designed specifically for Federated DG.
Our results suggest that despite some progress, there remain significant performance gaps in Federated DG particularly when evaluating with a large number of clients, high client heterogeneity, or more realistic datasets.
Please check our extendable benchmark code here: https://github.com/inouye-lab/FedDG_Benchmark.
§ INTRODUCTION
Domain generalization (DG) <cit.> formalizes a special case of train-test heterogeneity in which the training algorithm has access to data from multiple source domains but the ultimate goal is to perform well on data from an unseen test domain—i.e., a type of out-of-distribution generalization instead of the standard in-distribution generalization.
While most prior DG work focuses on centralized algorithms, another natural context is federated learning (FL) <cit.>, which is a distributed machine learning context that assumes each client or device owns a local dataset.
These local datasets could exhibit heterogeneity, which we call client heterogeneity (e.g., class imbalance between clients).
Although train-test heterogeneity (in DG) and client heterogeneity (in FL) are independent concepts, both could be naturally defined in terms of domain datasets.
For example, suppose a network of hospitals aimed to use FL to train a model to predict a disease from medical images.
Because the equipment is different across hospitals, it is natural to assume that each hospital contains data from different domains or environments (or possibly a mixture of domains if it is a large hospital)—this is a case of domain-based client heterogeneity.
Yet, the trained model should be robust to changes in equipment within a hospital or to deployment in a new hospital that joins the network—both are cases of domain-based train-test heterogeneity.
The interaction between these types of heterogeneity produces new algorithmic and theoretic challenges yet it may also produce new insights and capabilities.
Solutions to Federated DG with both types of heterogeneity could increase the robustness and usefulness of FL approaches because the assumptions more naturally align with real-world scenarios rather than assuming the datasets are i.i.d.
This could enable training on partial datasets, increase robustness of models to benign spatial and temporal shifts, and reduce the need for retraining.
In the centralized regime, various approaches have been proposed for DG, including feature selection, feature augmentation, etc. Most of these methods are not applicable in the FL regime, which poses unique challenges. In the FL regime, client heterogeneity has long been considered a statistical challenge since FedAvg <cit.>, where it was experimentally shown that FedAvg effectively mitigates some client heterogeneity. Many other extensions based on the FedAvg framework tackle heterogeneity among clients in FL, for example using variance reduction methods <cit.>. An alternative setup in FL, known as the personalized setting, aims to learn personalized models for different clients to tackle heterogeneity, for example <cit.>. However, none of these works consider model robustness under domain shift between training and testing data. Recently, a few works in the FL regime tackling DG <cit.> have been proposed; however, their evaluations are limited in the following senses: 1) The evaluation datasets are limited in the number and diversity of domains. 2) The evaluations are restricted to the case where the number of clients is equal to the number of domains, which may be an unrealistic assumption (e.g., a hospital that has multiple imaging centers or a device that is used in multiple locations). The case where the number of clients is massive is of both theoretical and practical interest.
3) None of the works consider the influence of the number of communication rounds. We provide an overview of the tasks in <ref>, considering both the heterogeneity between training and testing datasets (standard vs. domain generalization) and among clients (domain-based client heterogeneity). While some studies have addressed the standard supervised learning task, a fair evaluation is needed to understand the behavior of domain generalization algorithms in the FL context under these new challenges.
There are several benchmark datasets available for evaluating domain generalization (DG) methods in the centralized setting. These benchmarks, such as DomainBed <cit.> and WILDS <cit.>, provide multiple datasets that are suitable for assessing the performance of DG algorithms. However, they did not explicitly consider the unique challenges that arise in the federated learning (FL) setting. On the other hand, there are also benchmarks specifically designed for FL. For instance, the LEAF benchmark <cit.> provides a standardized framework for evaluating FL algorithms. It includes several datasets from various domains and allows researchers to assess the performance of their algorithms in a realistic FL scenario. Another benchmark for FL is PFLBench <cit.>, which focuses on evaluating personalized FL methods. PFLBench provides 12 datasets containing various applications. Though these FL-based benchmarks consider statistical heterogeneity, they fail to consider the DG task adequately. Moreover, the level of statistical heterogeneity present in these datasets is insufficient for proper DG evaluation.
In summary, DG benchmarks do not consider FL challenges, and FL benchmarks do not consider DG challenges.
Major contributions:
We develop a benchmark methodology for evaluating Federated DG with various client heterogeneity contexts and diverse datasets, and we evaluate representative Federated DG approaches with this methodology.
1) We propose the first Federated DG benchmark methodology including four important dimensions of the experimental setting (see <ref>).
2) We propose a standardized definition of domain-based client heterogeneity that is unique to the FL context and interpolates between domain homogeneity and domain separation (see <ref>) while controlling class imbalance. In particular, we develop a novel method to split any dataset with domain labels across any number of clients (see <ref>).
3) We compare three broad approaches to Federated DG: centralized DG methods naïvely adapted to FL setting, FL methods developed for client heterogeneity (e.g., class imbalance), and recent methods specifically designed for Federated DG.
Our results indicate that there still exist significant gaps and open research directions in Federated DG.
4) We will release an extendable open-source library for evaluating Federated DG methods.
Notation
Let [A] := {1,2,⋯, A} denote the set of integers from 1 to A.
Let d ∈ [D] denote the d-th domain out of D total training domains and similarly let c ∈ [C] denote the c-th client out of C total clients.
Let 𝒟⊆ [D] denote a subset of domain indices.
Let ℒ(θ; p) denote a generic objective with model parameters θ given a distribution p, which is approximated via samples from p.
Let p_d and p_c denote the distribution of the d-th domain and the c-th client, respectively.
Let 𝒮 denote a set of samples.
§ APPROACHES TO FEDERATED DOMAIN GENERALIZATION
We first briefly review the DG problem, extend to the Federated DG problem, and explain domain-based client heterogeneity.
Then, we discuss three categories of Federated DG approaches.
§.§ Problem Background and Setup
Domain generalization (train-test heterogeneity)
Unlike standard ML which assumes the train and test are independent and identically distributed (i.i.d.), the ultimate goal of DG is to minimize the average or worst-case loss of the test domain distributions when only samples from the train domain distributions are given.
Formally, given a set of train domain distributions { p_d : d ∈𝒟_train}, the goal is to minimize the average or worst-case loss over the test domain distributions 𝒟_test, i.e.,
min_θ 1/|𝒟_test| ∑_d ∈𝒟_test ℒ(θ; p_d)
or min_θ max_d ∈𝒟_test ℒ(θ; p_d) ,
where ℒ(θ; p_d) = 𝔼_(x,y) ∼ p_d[ℓ(x,y; θ)] and ℓ is a per-sample loss function such as squared or cross-entropy loss.
The key challenge in DG is that the train and test domain sets are disjoint, i.e., 𝒟_train∩𝒟_test = ∅, and thus the method must be able to generalize beyond the train domain distributions to perform well on the test domain distributions.
The naïve approach is to simply ignore the domains and perform empirical risk minimization (ERM) on all the training data—which is actually a challenging baseline to outperform in practice <cit.>.
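To make the two objectives concrete, the following minimal sketch (ours, not from the paper's codebase; names such as `domain_batches` are illustrative) estimates both the average-case and worst-case DG losses from one batch per domain; the ERM baseline simply ignores the domain index and averages over all samples.

```python
import torch

def dg_objectives(model, loss_fn, domain_batches):
    """Estimate the average-case and worst-case DG objectives.

    domain_batches: dict mapping a domain id d to one batch (x, y) drawn from p_d,
    so each per-domain term approximates L(theta; p_d).
    """
    per_domain = torch.stack([loss_fn(model(x), y)
                              for x, y in domain_batches.values()])
    avg_case = per_domain.mean()   # (1/|D|) * sum_d L(theta; p_d)
    worst_case = per_domain.max()  # max_d L(theta; p_d)
    return avg_case, worst_case
```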
Federated DG Federated DG adds a layer of complexity because now the domain distribution samples are not centrally located.
Instead, each FL client can update their local model based only on their local client distribution p_c and then pass their local parameters θ_c to a central server, which will aggregate the local models and broadcast the model to all clients.
The FL problem can be abstracted as follows:
∀ c ∈ [C], θ_c = Opt(ℒ(θ; p_c), θ_init = θ_global) [locally optimize given the local distribution p_c] and θ_global = Agg(θ_1, θ_2, ⋯, θ_C) [aggregate the client model parameters on the server],
where the client distributions may be homogeneous (i.e., ∀ (c,c'), p_c = p_c') or heterogeneous (i.e., ∃ c≠ c', p_c≠ p_c'), Opt minimizes an objective ℒ(θ; p_c) initialized at θ_init, and Agg aggregates the client model parameters; the most common aggregator is simply a (weighted) average of the client parameters, which corresponds to FedAvg <cit.>.
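A minimal FedAvg-style rendering of this abstraction is sketched below (our own illustration, not the benchmark implementation; helper names like `local_update` and the SGD settings are assumptions): the local step plays the role of the optimizer and the weighted parameter average plays the role of the aggregator.

```python
import copy
import torch

def local_update(global_model, loader, loss_fn, epochs=1, lr=0.01):
    # Plays the role of Opt: start from the global parameters and run local SGD on p_c.
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    n_samples = sum(len(y) for _, y in loader)
    return model.state_dict(), n_samples

def fedavg_round(global_model, client_loaders, loss_fn):
    # Plays the role of Agg: weighted average of client parameters (weights = sample counts).
    results = [local_update(global_model, dl, loss_fn) for dl in client_loaders]
    states, weights = zip(*results)
    total = float(sum(weights))
    new_state = {k: sum((w / total) * s[k] for s, w in zip(states, weights))
                 for k in states[0]}
    global_model.load_state_dict(new_state)
    return global_model
```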
Domain-based client heterogeneity
While client heterogeneity (i.e., ∃ c≠ c', p_c≠ p_c') is often expressed as label imbalance, i.e., p_c(y) ≠ p_c'(y), we make a domain-based client heterogeneity assumption that each client distribution is a (different) mixture of train domain distributions, i.e., p_c(x,y) = ∑_d ∈ [D] w_c,d p_d(x,y), where w_c,d is the weight of the d-th domain for the c-th client.
At one extreme, FL with i.i.d. data would be equivalent to the mixture proportions being the same across all clients, i.e., ∀ c, c', d, w_c,d = w_c',d, which we call the homogeneous setting.
On the other extreme, if the number of clients and number of domains are equal (D=C), the domain weights could be disjoint between clients such that each client “owns” one or more domains, i.e., for any domain d∈[D], w_c,d > 0 ⇔ w_c',d = 0, ∀ c' ≠ c, which we call domain separation (in <ref>, we extend the domain separation to the case D≠ C).
Finally, we use λ to denote an interpolation parameter between these two extremes: homogeneous case λ = 1 and domain separation case λ=0.
We give details of an explicit algorithm to use in practice for splitting datasets for evaluation in <ref> which covers two extremes and interpolation cases λ∈(0,1).
<ref> summarizes the train-test heterogeneity in DG and the domain-based client heterogeneity from the FL context, where we focus on Federated DG.
Overview of Federated DG methods
In this benchmark study, we explore three categories of Federated DG methods: DG methods originally designed for the centralized setting, FL methods specifically tailored to handle client heterogeneity, and methods specifically designed for Federated DG.
To provide a comprehensive evaluation, we assess the performance of several representative methods from each of these categories and compare to vanilla FedAvg <cit.> with ERM loss, where any heterogeneity is simply ignored. These methods are selected based on their prominence and potential effectiveness for Federated DG. To ensure a diverse range of evaluation scenarios, we conduct experiments on various datasets and under various heterogeneity settings. Our results provide insights into their relative performance and suitability for different Federated DG contexts. This benchmark study aims to offer a broad overview of the current popular methods, enabling researchers and practitioners to make informed decisions when tackling Federated DG.
§.§ Centralized DG methods adapted to the FL setting
The first natural choice is directly migrating the DG methods from the centralized regime.
To adapt those methods, we simply run the centralized DG method at each client locally with their own local dataset (see <ref> for how the local datasets are created), and then compute an average of model parameters at each communication round (see next paragraph). This approach is straightforward for the homogeneous (λ=1) and heterogeneous (λ=0.1) settings where each client has data from all training domains—albeit quite imbalanced for λ=0.1. This can be seen as biased updates at each client based on biased local data.
In the domain separation case λ=0 (i.e., if every client has only one primary domain, i.e., ∀ c, |𝒟_c| = 1), this simple approach cannot be applied because centralized DG methods require data from at least two domains.
In fact, these centralized DG methods degenerate to vanilla FedAvg if there is only one domain per client.
Extending these methods to the case where all clients only have one domain without violating the FL constraints is an interesting direction for future work.
A predominant and effective centralized DG approach is representation learning, in particular domain-invariant representation learning, which learns domain-invariant features via kernel methods <cit.>, invariant risk minimization <cit.>, domain adversarial neural networks <cit.>, or explicit alignment of the feature representation distributions <cit.>. Besides invariant representations, other methods adopt general learning strategies to promote generalization ability, such as gradient operation <cit.>, which tries to learn generalized representations by directly operating on gradients. Other approaches include distributionally robust optimization <cit.>, which learns the worst-case distribution scenario over the training domains, and meta-learning <cit.>, which is based on the learning-to-learn mechanism and acquires general knowledge by constructing meta-learning tasks that simulate domain shift.
We selected IRM <cit.>, Fish <cit.>, MMD <cit.>, Mixup <cit.>, DeepCoral <cit.>, and GroupDRO <cit.> from this category for their representative mechanisms.
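As one concrete example from this family, a client whose local split contains two or more domains could add a Deep CORAL-style penalty that aligns second-order feature statistics between its local domains. The sketch below is a generic rendering of that penalty, not the exact implementation evaluated in the benchmark.

```python
import torch

def coral_penalty(feat_a, feat_b):
    """Deep CORAL-style penalty: squared Frobenius distance between the feature
    covariance matrices of two domains (feat_*: [n, d] feature batches)."""
    d = feat_a.size(1)

    def covariance(f):
        f = f - f.mean(dim=0, keepdim=True)
        return (f.t() @ f) / (f.size(0) - 1)

    diff = covariance(feat_a) - covariance(feat_b)
    return (diff * diff).sum() / (4.0 * d * d)

# A client's local objective would then be: ERM loss + penalty_weight * coral_penalty(...)
# computed over the domains present in its local split (hence it degenerates to plain
# ERM when the client holds a single domain, as noted above).
```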
§.§ FL methods tackling client heterogeneity
Another line of research in FL aims to guarantee convergence even under client heterogeneity, but these FL-based methods still assume the train and test datasets do not shift (i.e., they do not explicitly tackle train-test heterogeneity of the domain generalization task).
The empirical observation of the statistical challenge in federated learning when local data is non-IID was first made by <cit.>. Several subsequent works have analyzed client heterogeneity by assuming bounded gradients <cit.> or bounded gradient dissimilarity <cit.>, and additionally assuming bounded Hessian dissimilarity <cit.>.
From this category, we selected FedProx <cit.>, which addresses statistical heterogeneity by adding a proximal term to the local subproblem, constraining local updates to be closer to the initial global model. Scaffold <cit.> utilizes variance reduction to account for client heterogeneity.
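For intuition, FedProx's local subproblem simply augments the client's empirical loss with a proximal term toward the current global model; a schematic version (our own minimal sketch, with the proximal weight `mu` as a hypothetical hyperparameter name) is shown below.

```python
import torch

def fedprox_local_loss(model, global_params, x, y, loss_fn, mu=0.01):
    """Client-side loss: ERM loss + (mu/2) * ||theta - theta_global||^2.

    global_params: tensors holding the server model's parameters, kept fixed
    during the local update; mu controls how far local updates may drift.
    """
    erm = loss_fn(model(x), y)
    prox = sum(((p - g.detach()) ** 2).sum()
               for p, g in zip(model.parameters(), global_params))
    return erm + 0.5 * mu * prox
```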
§.§ FL methods designed for Federated DG
Limited research has focused explicitly on solving the Federated DG by design. FedDG <cit.> introduced a specific FL paradigm for medical image classification, which involves sharing the amplitude spectrum of images among local clients, violating the privacy protocol. Another approach, FedADG <cit.>, utilizes a generative adversarial network (GAN) framework in FL, where each client contains four models: a featurizer, a classifier, a generator, and a discriminator. FedADG first trains the featurizer and classifier using empirical loss and then trains the generator and discriminator using the GAN approach. However, this method requires training four models simultaneously and tuning numerous hyperparameters, making convergence challenging. A novel aggregation method called Federated Gradient Masking Averaging (FedGMA) <cit.> aims to enhance generalization across clients and the global model. FedGMA prioritizes gradient components aligned with the dominant direction across clients while assigning less importance to inconsistent components. FedSR <cit.> proposes a simple algorithm that uses two locally-computable regularizers for domain generalization.
Given the limited literature on solving domain generalization (DG) in the federated learning (FL) setting, we selected all the aforementioned algorithms.
§ BENCHMARK METHODOLOGY
In this study, we aimed to conduct a comprehensive evaluation of the Federated DG task by considering four distinct dimensions of the problem setup. We evaluated a total of 13 methods, encompassing three different types of approaches.
§.§ Four evaluation dimensions
(1) Domain-based client heterogeneity and (2) number of clients
Previous studies on Federated DG have often focused on domain separation client heterogeneity where the number of clients equals the number of training domains, i.e., C=D.
However, this excludes evaluation of the homogeneous and partially heterogeneous settings and restricts the number of clients to the number of training domains D.
In particular, many pseudo-realistic datasets, such as those in DomainBed <cit.>, consist of only a few training domains (D ≤ 5), which limits the evaluation of methods under scenarios with a large number of clients.
Conversely, when using realistic domain datasets that have many domains, such as those in WILDS <cit.>, most current methods perform poorly under this extremely challenging setting.
By introducing the domain split method discussed in <ref>, we can explore various levels of client heterogeneity and relax the assumption that C=D so that we can leverage both pseudo-realistic and realistic datasets and evaluate methods at an appropriate difficulty level.
(3) Dataset difficulty and (4) dataset type
While most Federated DG work focuses on standard image-based datasets, we evaluate methods across a broad range of dataset difficulty (ranging from easy pseudo-realistic datasets to very challenging realistic datasets) and two types of datasets (3 image datasets and 2 text datasets).
This ensures that we can fully understand the performance of each method across a wide range of scenarios.
Further details can be found in Section <ref>.
§.§ Domain-based client heterogeneity by splitting DG datasets
For evaluation dimensions (1) and (2) above, we need control over the amount of client heterogeneity, ranging from homogeneous to domain separation, and control over the number of clients, which may be smaller or larger than the number of domains (i.e., D ≠ C).
We propose a way to split any DG dataset into any number of clients that allows explicit control of the amount of heterogeneity through the λ hyperparameter which attempts to balance the number of samples per client. An illustration of our domain splitting procedure can be found in the following <ref>.
In Algorithm <ref>, we provide a concrete algorithm for splitting samples from multiple domains across an arbitrary number of clients C while controlling the amount of domain-based client heterogeneity via λ, ranging from homogeneous clients to domain separation.
Our algorithm has two main steps.
In Step 1, we assign “primary” domain indices 𝒟_c ⊆ [D] to each client c∈[C] depending on C and D.
If C ≤ D, the domains are sorted in descending order of sample size and are iteratively assigned to the client c^* which currently has the smallest number of training samples (denoted by ∑_d'∈𝒟_c^* n_d').
In this case, the algorithm ensures that no client shares domains with the others but otherwise attempts to balance the total number of training samples between clients.
If C>D, we first assign the domains one by one to the first D clients.
Then, starting from client c=D+1, we iteratively assign the currently (on average) largest domain d^* to c, accounting for the fact that a domain may already be shared by several clients; notationally, "on average" is captured by dividing by ∑_c'1[d ∈𝒟_c'].
In this case, some clients may share one domain, but no client holds two domains simultaneously while otherwise attempting to balance the number of samples across clients as much as possible.
In Step 2, we define the sample counts for each domain and client, denoted n_d,c(λ), based on the balancing parameter λ∈ [0,1]:
n_d,c(λ) = λ (n_d / C) + (1-λ) (1[d ∈𝒟_c] / ∑_c'=1^C 1[d ∈𝒟_c']) n_d ,
where rounding to integers is carefully handled when not perfectly divisible and where 1[·] is the indicator function.
This is simply a convex combination between homogeneous clients (λ=1) and domain separation (λ=0).
Given the number of samples per client per domain, we simply sample without replacement from the corresponding domain datasets and build up the client datasets.
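A compact sketch of Step 2 is given below (our own illustrative code, not the released benchmark implementation): it computes the per-client, per-domain counts n_d,c(λ) given a primary-domain assignment from Step 1 and then subsamples each domain without replacement; integer rounding is handled only crudely here.

```python
import numpy as np

def split_counts(domain_sizes, primary, C, lam):
    """Compute n_{d,c}(lambda): a convex combination of a homogeneous split
    (lambda = 1) and domain separation along the Step-1 primary-domain
    assignment (lambda = 0).

    domain_sizes: dict {d: n_d} with domains indexed 0..D-1.
    primary: dict {c: set of primary domain ids D_c} from Step 1.
    """
    counts = np.zeros((len(domain_sizes), C), dtype=float)
    for d, n_d in domain_sizes.items():
        owners = [c for c in range(C) if d in primary[c]]  # non-empty after Step 1
        for c in range(C):
            homogeneous = n_d / C
            separated = n_d / len(owners) if c in owners else 0.0
            counts[d, c] = lam * homogeneous + (1.0 - lam) * separated
    return np.floor(counts).astype(int)  # crude rounding; leftover samples are dropped

def build_client_indices(domain_indices, counts, seed=0):
    """Sample without replacement: domain_indices maps d to an array of sample ids."""
    rng = np.random.default_rng(seed)
    clients = {c: [] for c in range(counts.shape[1])}
    for d, idx in domain_indices.items():
        pool = rng.permutation(idx)
        start = 0
        for c in range(counts.shape[1]):
            k = int(counts[d, c])
            clients[c].extend(pool[start:start + k].tolist())
            start += k
    return clients
```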
§.§ Dataset Selection and Dataset Difficulty Metrics
To ensure a comprehensive evaluation of current methods across different difficulty levels, we have curated a range of datasets with varying complexities.
We define two dataset metrics to measure the dataset difficulty with respect to the DG task and with respect to the FL context.
For DG difficulty, we compute R_DG, the ratio of the ERM performance with and without samples from the test domain (i.e., the former is able to “cheat” by seeing part of test domain samples during training).
For FL difficulty, we attempt to isolate the FL effect by computing R_FL(λ), the ratio of FedAvg-based ERM under client heterogeneity λ to centralized ERM, both evaluated on in-domain test samples.
These dataset difficulty metrics can be formalized as follows:
R_DG ≜ Acc(𝒮_DG-train, 𝒮'_DG-test) / Acc(𝒮_DG-train∪𝒮”_DG-test, 𝒮'_DG-test)
R_FL(λ) ≜ Acc_FL(𝒮_DG-train, 𝒮_IN-test; λ) / Acc(𝒮_DG-train, 𝒮_IN-test),
where Acc(·, ·) denotes the performance of centralized ERM using the first argument for training and the second for testing, Acc_FL(·, ·; λ) is defined analogously for FedAvg-based ERM trained under client heterogeneity parameter λ, 𝒮_DG-train denotes samples from the train domains, 𝒮_DG-test denotes samples from the test domains, and 𝒮'_DG-test and 𝒮”_DG-test are a 20%/80% split of 𝒮_DG-test.
For R_FL(λ), we use 𝒮_IN-test (test samples from the train domains) instead of 𝒮_DG-test to isolate the FL effect from DG effect.
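Once the four accuracy (or macro F1) numbers are available, the two ratios are mechanical to compute; the small helper below (ours, with placeholder argument names) makes the bookkeeping explicit.

```python
def dg_difficulty(acc_train_only, acc_with_test_leak):
    """R_DG: ERM trained on the DG-train domains vs. ERM that additionally sees
    80% of the test-domain data; both evaluated on the held-back 20% test split."""
    return acc_train_only / acc_with_test_leak

def fl_difficulty(acc_fedavg_lambda, acc_centralized):
    """R_FL(lambda): FedAvg-ERM under client heterogeneity lambda vs. centralized
    ERM, both evaluated on in-domain test samples."""
    return acc_fedavg_lambda / acc_centralized

# For reference, the paper reports R_DG ~ 0.449 for IWildCam (hard DG shift) and
# R_DG ~ 0.96 for PACS (mild shift); values near 1 indicate little train-test shift.
```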
We choose five datasets in our benchmark: FEMNIST from <cit.>, PACS from <cit.>, and IWildCam, CivilComments and Py150 from <cit.>. We summarize the statistics and difficulty metrics in <ref> and provide the rationale of selecting these datasets in the Appendix.
§ BENCHMARK EXPERIMENTAL RESULTS
In this section, we report the performance of 13 representative methods from three lines of research on 5 different datasets; the FEMNIST results are provided in the appendix given the simplicity of its DG task (i.e., R_DG≈ 1).
For each dataset, we fix the total computation and the number of communication rounds across methods for a fair comparison. We perform model selection on the held-out-domain validation set: after training, we apply early stopping by choosing the model at the communication round which achieves the best held-out-domain performance, and finally we evaluate the performance on the test domain in <ref> and <ref>. See the Appendix for detailed hyperparameter choices.
We also include the results according to in-domain validation early-stopping in the Appendix but the trends are similar.
We make the following remarks on the main results from <ref> and <ref>.
FedAvg with an ERM objective is a strong baseline.
Simple FedAvg with an ERM objective is a strong baseline that is challenging to beat across datasets (except for CivilComments), similar to the centralized case stated in DomainBed <cit.> and WILDS <cit.>. We recommend always including FedAvg as a baseline in all future evaluations.
Most centralized DG methods degrade in the FL setting.
For image datasets, the DG methods adapted to the FL setting (except for GroupDRO) show significant degradation in performance compared to their centralized counterparts as can be seen when comparing the C=1 column to the C > 1 columns in <ref>.
Further, degradation can be seen in PACS when moving from the homogeneous client setting (λ=1) to the heterogeneous client setting (λ = 0.1).
FL methods tackling client heterogeneity perform surprisingly well on the Federated DG task.
FedProx and Scaffold, which were designed for client heterogeneity but not specifically for the DG task, perform quite well in the Federated DG setting and even perform the best for IWildCam and Py150.
Thus, we suggest including these methods in future Federated DG evaluations and suggest that future works could focus on combining the strengths of these methods and DG methods.
The performance on real-world data significantly degrades as λ decreases.
This can be seen from IWildCam and Py150. While it is challenging and expensive to run models for IWildCam and Py150, they show the largest differences between methods and demonstrate the real-world challenge of Federated DG. We suggest including IWildCam and Py150 in most future DG evaluations given their unique nature across datasets.
A diversity of evaluation datasets is important for holistically evaluating new methods for Federated DG.
This conclusion is based on the following two observations.
1)
When comparing FL methods, we notice opposite trends between PACS and all other datasets (IWildCam, CivilComments, Py150) when increasing client heterogeneity based on λ.
For PACS, the FL methods (except for FedADG and FedSR) surprisingly seem to improve when increasing client heterogeneity (i.e., λ =0).
However, for other datasets, the expected trend exists such that increasing client heterogeneity produces worse performance.
From this, we recommend that using PACS alone might be a misleading or at least incomplete evaluation of Federated DG methods.
2) Centralized DG methods adapted to FL perform the best on CivilComments. This unexpected result suggests that DG methods in the centralized regime may be able to better accommodate subpopulation shift, which is the special kind of shift that is exhibited in CivilComments. Both of these observations emphasize that a diversity of evaluation datasets is important for holistically evaluating new methods for Federated DG tasks.
Additional DG challenges from FL.
For further understanding, we explore some additional questions on the PACS dataset because it is the most common and computationally feasible to train many different models.
Specifically, we explore how the number of clients, the amount of communication (i.e., the number of server aggregations) in the federated setup, and the client heterogeneity affect the performance of various methods.
The figures and detailed analysis are provided in the Appendix but we highlight two remarks here.
The number of clients C strongly affects the overall performance. The DG performance drops from 90% to 50% or even 10% when varying C from 1 to 200.
We strongly recommend future Federated DG evaluations to consider larger number of clients such as 50 to 100 rather than only a very small number of clients.
This indicates that significant, largely unexplored challenges remain in Federated DG when dealing with a large number of clients. In particular, FedADG and FedSR seem to be sensitive to the number of clients: they perform poorly when C ≥ 10, while in the original papers they were only evaluated on a few clients (C=3), a result we reproduce in the Appendix.
The number of communications does not monotonically affect DG performance.
We observe an interesting implicit regularization phenomenon in the FL context: different methods achieve their best performance when the number of communication rounds is relatively low. In particular, for PACS, the best performance is reached at 10 communication rounds (while fixing the total amount of computation).
This is surprising since, in contrast, for in-domain tasks the FL literature suggests that more communication leads to better performance <cit.>.
Further theoretical investigation of the dependence of DG accuracy on the communications rounds, and its possible relation to implicit regularization via early stopping is an interesting area for future work.
§ CONCLUSION AND DISCUSSION
We first build a systematic benchmark methodology for evaluating Federated DG that includes novel methodologies for splitting the data across clients and evaluation of dataset difficulty.
We then evaluate 13 representative methods from three relevant lines of research.
Our evaluation shows that Federated DG is still unsolved and significant gaps remain between centralized DG performance and Federated DG performance.
Therefore, Federated DG is ripe for additional research.
Based on our evaluation and observation, here are some recommendations and suggestions for future work in Federated DG.
Recommendations for future evaluations of Federated DG.
* Stronger Baselines - FedAvg should always be included because it is a strong baseline (<Ref>). FL methods designed for client heterogeneity (though not necessarily DG) should also be included given their strong performance (<Ref>).
* Realistic Datasets - Federated DG methods should be evaluated on more realistic datasets. In particular, FEMNIST and PACS behave quite differently with respect to client heterogeneity than the more realistic WILDS datasets (<Ref>).
Thus, we recommend including both IWildCam and Py150 in future evaluations of Federated DG. CivilComments may be useful for evaluating realistic subpopulation shift (<Ref>).
* Large Number of Heterogeneous Clients - Evaluations should include scenarios with a large number of clients with varying degrees of heterogeneity.
A large number of clients poses unique challenges for most methods (<Ref>).
Additionally, the heterogeneity of clients is more realistic and plays a significant role in evaluation (<Ref>).
Suggestions for future work in Federated DG.
* Handle Domain Separation Case - The domain separation scenario (λ=0) limits the exchange of information between domains, i.e., a client may only have data from a single domain.
Centralized approaches cannot be adapted to this setting and current methods still struggle under this realistic client heterogeneity setting.
* Increase Convergence Rate - In most benchmark experiments (except for PACS, which is an easy dataset), we have observed slow convergence, particularly on challenging real-world datasets such as IWildCam and Py150.
Moreover, we have noticed even slower convergence when client heterogeneity is high (i.e., when λ is small).
In addition, we have found that federated methods designed to improve convergence actually performed quite well even though they were not designed for the DG task (<Ref>).
Therefore, it is crucial for future methods to improve the convergence rate of Federated DG approaches.
* Investigate the Effect of Communication Frequency - Because increased communication frequency may actually hurt DG performance (<Ref>), further investigation into the regularization effect of early stopping and infrequent communications in FedAvg may yield important insights.
* Understand the Effect of the Number of Clients - Given that the performance of some methods degrade very quickly when increasing the number of clients beyond a few (<Ref>), the community could benefit from an improved theoretic and empirical understanding of how the number of clients affects Federated DG in terms of convergence, computational requirements, sample complexity, and DG performance.
We hope this work provides a better foundation for future work in Federated DG and accelerates research progress.
§ ACKNOWLEDGEMENTS
This work was supported by Army Research Lab under Contract No. W911NF-2020-221.
R.B. and D.I. also acknowledge support from NSF (IIS-2212097) and ONR (N00014-23-C-1016).
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsor(s).
Appendix
§ REPRODUCIBILITY STATEMENT
Code for reproducing the results is available at the following link:
https://github.com/inouye-lab/FedDG_Benchmark.
We include detailed documentation on using our code to reproduce the results throughout the paper. We also provide documentation on adding a new algorithm's DG evaluation in the FL context.
§ DATASETS AND DIFFICULTY METRIC
§.§ Dataset Introduction
In this section, we introduce the datasets we used in our experiments, and the split method we used to build heterogeneous datasets in the training and testing phase as well as the heterogeneous local training datasets among clients in the FL.
FEMNIST It is an FL prototyping image dataset of handwritten digits and characters, where each contributing user forms a natural domain; it is widely used for evaluating client heterogeneity in FL.
Though it contains many training domains, it lacks significant distribution shifts across domains (R_DG = 1) and is considered easy compared to the other datasets.
PACS It is an image dataset for domain generalization. It consists of four domains, namely Photo (1,670 images), Art Painting (2,048 images), Cartoon (2,344 images), and Sketch (3,929 images). The task requires learning a classification model over a set of objects from totally different renditions. R_DG=0.960 makes it an easy dataset for domain generalization in our setting. Notice that choosing a different domain as the test domain might yield a different R_DG.
IWildCam It is a real-world image classification dataset based on wild animal camera traps around the world, where each camera represents a domain. It contains 243 training domains, 32 validation domains, and 48 test domains. Since rare species are usually of most interest, we utilize the macro F1 score as the evaluation metric instead of standard accuracy, as recommended in the original dataset's reference <cit.>. R_DG=0.449 makes it a very challenging dataset for domain generalization.
CivilComments It is a real-world binary text classification dataset formed from comments written by different demographic groups of people, covering 8 demographic groups. The goal is to judge whether a comment is toxic or not.
Py150 It is a real-world code-completion dataset which is challenging given its massive number of domains (8,421), where each domain is formed according to the repository author. The goal is to predict the next token given the context of previous tokens. We evaluate models by the accuracy on class and method tokens.
§.§ Dataset Split Setup
For each dataset, we first split the data into 5 categories, namely the training set, in-domain validation set, in-domain test set, held-out validation domain set, and test domain set. For FEMNIST and PACS, we construct the splits ourselves; for PACS, Cartoon and Sketch serve as training domains, Art Painting as the held-out domain, and Photo as the test domain. From the training domains, we split off 10% and 10% of the samples as the in-domain validation set and in-domain test set, respectively. For IWildCam, CivilComments, and Py150, we directly apply the official WILDS splits.
§ BENCHMARK EXPERIMENTAL SETTING
§.§ Model Structure
In this benchmark, for the image-based datasets, we use ResNet-50 <cit.>. For the CivilComments and Py150 datasets, we choose DistilBERT <cit.> and CodeGPT <cit.>, respectively, as recommended by WILDS.
§.§ Model Selection
We conduct held-out-domain model selection with 4 runs for each method. The oracle model selection evaluates the model based on its performance on the held-out validation domain. The results are reported based on the best run.
§.§ Early Stopping
We conduct early stopping using the held-out validation dataset in our evaluation. For each dataset and method, we first run a certain number of communication rounds and then select the model parameters which achieve the best performance on the validation set. We report the results using the held-out validation dataset in the main paper, and we report the results using the in-domain validation set in <ref>.
§.§ Hyperparameters
In this section, we present the hyperparameters selected for the evaluation. We opted to communicate once per epoch in all experiments. For PACS and IWildCam, we run a total of 50 communication rounds each; for CivilComments and Py150, 10 communication rounds each; and for FEMNIST, 40 communication rounds. Please refer to <ref> for the remaining hyperparameters.
§ ADDITIONAL FL-SPECIFIC CHALLENGES FOR DOMAIN GENERALIZATION
As mentioned in <ref>, we also include a deeper exploration of the effects of the number of clients and of the communication frequency, which are unique to the FL regime.
i) Massive number of clients:
In this experiment, we explore the performance of different algorithms on PACS as the number of clients C increases. We fix the number of communication rounds to 50 and the number of local epochs to 1 (synchronizing the models every epoch). <ref> plots the held-out DG test accuracy versus the number of clients for different levels of data heterogeneity. The following comments are in order, given the fixed communication budget: 1) Current domain generalization methods all degrade substantially once C≥ 10, while the performance of ERM and FedDG remains relatively unchanged as the number of clients increases. FedADG and FedSR are sensitive to the number of clients, and both fail once C≥ 20. 2) Even in the simplest homogeneous setting λ=1, where each local client has i.i.d. training data, the current domain generalization methods IRM, FISH, Mixup, MMD, Coral, and GroupDRO work poorly in the presence of a large number of clients; this means new methods are needed for DG in the FL context when data are stored among a massive number of clients.
ii) Communication constraint:
To show the effect of communication rounds on convergence, we plot the test accuracy versus communication rounds in Appendix <ref>. We fix the number of clients to C=100 on PACS and decrease the number of communication rounds (together with increasing the number of local epochs).
That is, if the regime restricts the communication budget, we increase the local computation E so as to keep the total computation the same. Therefore, the influence of communication on performance is compared fairly between algorithms because the total number of data passes is fixed. We observe that the total number of communications does not monotonically affect the DG performance. With a decreasing number of total communications, most methods' performance first increases and then decreases. This might be an interesting implicit regularization phenomenon in the FL context. For in-domain tasks (i.e., without considering DG), more frequent communication usually leads to faster convergence. The relationship between DG performance and the number of communications requires further exploration.
§ SUPPLEMENTARY RESULTS
In the main paper, we provide experimental results using held-out-domain validation for early stopping. Here we report the results using the in-domain validation set. We also report the convergence curve for each method on each dataset for reference. We observe that, in most cases, held-out validation gives us a better model. Thus, we recommend using the held-out validation set to perform early stopping.
More results on PACS and IWildCam dataset.
Here we report both the convergence curves, using the DG accuracy of each method, on PACS <ref> and IWildCam <ref>, and the DG accuracy on PACS and IWildCam <ref> using in-domain validation. From the PACS convergence curves <ref>, we observe that with lower λ the models are harder to converge. FedAvg and FedDG converge faster than all other methods, while FedADG and FedSR do not converge. From the IWildCam convergence curves <ref>, we observe that all methods struggle to converge. This is mainly due to the FL challenge captured by R_FL here. Accordingly, the federated methods dealing with client heterogeneity achieve the best performance, especially when λ is low. It is worth noticing that this does not mean DG in FL is solved: none of the methods even reach the centralized ERM performance.
More results on CivilComments and Py150 dataset.
Here we report the DG accuracy on CivilComments and Py150 <ref> using the in-domain validation. We could observe that most of the methods perform well on CivilComments and Py150. This could be attributed to the use of pretrained models. Additionally, the utilization of pretrained models may explain the relatively high value of R_DG, as NLP pretrained models have already been exposed to various domains.
Results on FEMNIST dataset.
As mentioned in the main paper, we include the FEMNIST dataset here as a reference. We observe from <ref> that λ does not influence the final DG accuracy, and that the choice between in-domain validation and held-out-domain validation does not affect the final DG accuracy either. This indicates the lack of statistical heterogeneity across the different domains. We also observe that changing λ does not significantly affect convergence. Most methods do not converge to their centralized counterpart's performance; this is due to the challenge coming from the large number of clients, where R_FL=0.980<1.
§ GAP TABLE
We list the gap table in <ref>, summarizing the performance gap of current DG algorithms with respect to FedAvg-ERM in the FL context; positive means a method outperforms FedAvg-ERM, and negative means it is worse than FedAvg-ERM. It can be seen that, on the simple datasets, the best DG methods migrated from the centralized setting are better than FedAvg-ERM. In the domain separation case, no centralized DG algorithm can be adapted, and Federated DG methods perform comparably well in this setting. However, they fail on harder datasets. In the hardest setting, the federated methods dealing with data heterogeneity currently perform the best. It is worth noting that while federated learning methods that address client heterogeneity perform better than other methods, they still fall short of centralized empirical risk minimization (ERM). This highlights the need for future research and development of DG methods in the FL regime.
§ TRAINING TIME, COMMUNICATION ROUNDS AND LOCAL COMPUTATION
In this section, we provide the training time per communication round in terms of wall-clock training time. Notice that, for a fixed dataset, most algorithms have a training time similar to FedAvg-ERM, whereas FedDG and FedADG are significantly more expensive.
|
http://arxiv.org/abs/2307.04428v1 | 20230710090716 | Analysis of CN emission as a marker of organic compounds in meteoroids using laboratory simulated meteors | [
"Adriana Pisarčíková",
"Pavol Matlovič",
"Juraj Tóth",
"Stefan Loehle",
"Ludovic Ferrière",
"David Leiser",
"Felix Grigat",
"Jérémie Vaubaillon"
] | astro-ph.EP | [
"astro-ph.EP",
"physics.geo-ph"
] |
Adriana Pisarčíková^1 ([email protected]), Pavol Matlovič^1, Juraj Tóth^1, Stefan Loehle^2, Ludovic Ferrière^3, David Leiser^2, Felix Grigat^2, Jérémie Vaubaillon^4

^1 Faculty of Mathematics, Physics and Informatics, Comenius University in Bratislava, Mlynská dolina, 84248 Bratislava, Slovakia
^2 High Enthalpy Flow Diagnostics Group, Institute of Space Systems, University of Stuttgart, Pfaffenwaldring 29, 70569 Stuttgart, Germany
^3 Natural History Museum Vienna, Burgring 7, 1010 Vienna, Austria
^4 IMCCE, Observatoire de Paris, PSL, 77 Av Denfert Rochereau, 75014 Paris, France
Fragments of small solar system bodies entering Earth's atmosphere have possibly been important contributors of organic compounds to the early Earth. The cyano radical (CN) emission from meteors is considered as potentially one of the most suitable markers of organic compounds in meteoroids, however, its detection in meteor spectra has been thus far unsuccessful. With the aim to improve our abilities to identify CN emission in meteor observations and use its spectral features to characterize the composition of incoming asteroidal meteoroids, we present a detailed analysis of CN emission from high-resolution spectra of 22 laboratory simulated meteors including ordinary, carbonaceous, and enstatite chondrites, as well as a large diversity of achondrites (i.e., ureilite, aubrite, lunar, martian, howardite, eucrite, and diogenite), mesosiderite, and iron meteorites. We describe the variations of CN emission from different classes of asteroidal meteor analogues, its correlation and time evolution relative to other major meteoroid components. We demonstrate that CN can be used as a diagnostic spectral feature of carbonaceous and carbon-rich meteoroids, while most ordinary chondrites show no signs of CN. Our results point out strong correlation between CN and H emission and suggest both volatile features are suitable to trace contents of organic matter and water molecules present within meteoroids. For the application in lower resolution meteor observations, we demonstrate that CN can be best recognized in the early stages of ablation and for carbon-rich materials by measuring relative intensity ratio of CN band peak to the nearby Fe I-4 lines.
* First analysis of CN emission from various ablated meteorites
* CN emission identified as a diagnostic spectral feature of carbon-rich meteorites
* CN and H emission linked to organic matter and water content
* Method for identification of CN in lower resolution meteor spectra proposed
Keywords: astrobiology, spectroscopy, meteorite
§ INTRODUCTION
The various abundances of organic matter found in asteroids and comets originate from the formation processes of the interstellar medium <cit.>. Small solar system bodies are assumed to have been responsible for the emergence of prebiotic molecules necessary for the origin of life on Earth <cit.>. The impacting interplanetary material is considered to be one of the main contributors of organic molecules to early Earth <cit.>.
While organic matter is abundantly present in all comets and spectral studies can focus on revealing the variations in the contents of different compounds, the abundance of organic matter in different types of asteroids remains an open question <cit.>. Studies of the fragments of asteroids and comets – meteoroids – which continuously enter the Earth's atmosphere from various sources in the solar system can provide important spectral data to help tackle this issue. Meteoroids ablate in the Earth's atmosphere as meteors emitting strong radiation. Analyzing the emission spectra allows to gain detailed information about the atoms and molecules present in the meteoroid and interacting with the surrounding air <cit.>.
A suitable trace feature for the detection of organic compounds in small solar system bodies captured through meteor observations appears to be the cyano radical (CN). CN has been detected by remote optical observations in several comets since the 19th century <cit.>. In recent years, even the first in-situ detection of CN in the coma of comet 67P/Churyumov–Gerasimenko has taken place <cit.>.
The origin of the CN observed in the cometary coma has been long associated with hydrogen cyanide (HCN) as the sole source. However, in the last few decades, two main CN sources have been considered: CN-bearing refractories (HCN polymers, hexamethylenetetramine (HMT), tholin, CHON dust grains) and CN-bearing volatiles <cit.> as the dominant source of CN production (mainly HCN, cyanogen (C2N2), cyanoacetylene (HC3N), acetonitrile (CH3CN)). Refractories carrying CN are assumed to be released from the interior of the comet along with dust particles and could generate HCN or CN radicals. CN products derived from volatile CN-bearing species are formed during the photodissociation of these species.
There have been several efforts to detect CN in meteor spectra in the past decades (see e.g., <cit.>). These detections were unsuccessful, likely due to the typically insufficient resolution of meteor spectrographs, which are unable to resolve the CN band from the strong emission of surrounding Fe I lines. Because of its strong B → X transition of low excitation energy <cit.>, CN emission is the most suitable tracer of organic compounds in visible and near-UV meteor spectra. At typical meteoric temperatures and instrumental resolutions, this vibrational band structure peaks at around 388.3 nm <cit.>. CN emission in meteor spectra is expected to originate either directly from the meteoroid composition, where it may be generated from reactions with N bound in organic matter, or from the interaction of meteoric C atoms with molecular N2 originating in the atmosphere <cit.>.
In this work we present an analysis of the CN emission in spectra of different types of meteorites tested in a plasma wind tunnel simulating meteoric conditions. The fitting of the CN band was previously done in the terrestrial rock argillite tested under the same laboratory conditions <cit.>. We provide the first overview of the presence, relative intensity, and time evolution of the CN emission in different meteorites representing a wide range of asteroidal materials. This way, we aim to indicate CN as a suitable tracer of organic matter in meteoroids, demonstrate the detectable variations of organic matter in different asteroidal materials, and help constrain the instrumental limits for an efficient detection of CN in meteors.
First, in Section <ref>, we describe the laboratory conditions and instrumentation used for the meteorite ablation tests in the plasma wind tunnel and the data processing methodology. The following Section <ref> contains our results of a detailed study of the CN emission in spectra of different meteorites. In this section we focus on an analysis of the presence of CN and Hα in meteorite spectra and their mutual correlation, and an analysis of the relative intensity and time evolution of the CN band emission based on monochromatic light curves. The conclusions derived from the obtained results are summarized in Section <ref>.
§ LABORATORY EXPERIMENTS AND METHODS
<cit.> established an experimental setup in an arc-jet wind tunnel facility suitable for the analysis of meteoroid entry physics. Overall, three experimental campaigns (2020-2022) were performed within a cooperation between the Comenius University in Bratislava, Slovakia (CUB) and the High Enthalpy Flow Diagnostics Group at the Institute of Space Systems, University of Stuttgart, Germany (HEFDiG). The following analysis is based on the measurement of spectra of 22 meteorite samples simultaneously captured by the high-resolution HEFDiG Echelle spectrograph and the spectrograph AMOS-Spec-HR from the CUB, which is used within the global AMOS (All-sky Meteor Orbit System) network <cit.> for observing spectra of meteors in the Earth's atmosphere <cit.>. Given the relatively low resolution of AMOS-Spec-HR, we only use these data to evaluate the possibility of recognizing CN in the corresponding low-resolution spectra. The analysis of relative intensities and time evolution of CN emission from different meteorite types is based on the detailed Echelle spectra. The uniquely large dataset of tested meteorites allows us to examine the presence of CN in almost all major meteorite classes including different ordinary chondrites, enstatite chondrites, carbonaceous chondrites, achondrites and the mesosiderite group of stony irons. These types represent the most abundant meteorite falls.
§.§ Experiment conditions and instrumentation
The Institute of Space Systems of the University of Stuttgart operates several plasma wind tunnels <cit.>, which were developed in the early 1980s for basic testing of thermal protection materials required for spacecraft to safely enter the atmosphere of planets. The meteorite experiments were carried out in the Plasma Wind Tunnel 1 (PWK1) with a plasma flow condition with local mass-specific enthalpy of 70 MJ kg^-1 at a stagnation pressure of ∼24 hPa. This corresponds to the entry of a meteoroid with a diameter of ∼4 cm at an altitude of ∼80 km in the Earth's atmosphere, with an assumed meteoroid entry velocity of ∼12 km s^-1. This plasma flow condition was used for the first tests with meteorite samples <cit.>.
The Echelle spectrograph of HEFDiG is a fiber-fed system providing a wavelength range of 250–880 nm <cit.>. From the lower to the upper end of this spectral interval, the spectral dispersion varies from 43 pm px^-1 to 143 pm px^-1 (resolving power R ≈ 10 000). An Echelle high-order diffraction grating of 300 grooves per millimeter (gpmm) is utilized at orders 40–60. Another mounted diffraction grating with a higher groove density of about 1000 gpmm disperses and aligns the obtained spectra. As a result, a high resolution over a long wavelength interval is obtained.
The AMOS-Spec-HR system provides an image resolution of 2048 x 1536 px (1.76 arcmin px^-1), a resulting field of view (FOV) of 60° x 45° and a frame rate of 15 fps. The essential components of this spectrograph are a 6 mm f/1.4 lens and a digital camera. The holographic diffraction grating with 1000 gpmm provides a dispersion of 0.5 nm px^-1 (resolving power R ≈ 550). The spectral system allows the analysis of spectral events in the visual spectrum range of approximately 370–900 nm.
The large selection of tested meteorite samples, obtained from and in collaboration with the Natural History Museum Vienna, mainly consists of meteorite falls rather than finds to limit terrestrial contamination. Meteorite samples were cut into 1 cm diameter cylinders with lengths varying from ∼1 to 2 cm or into ∼1 cm diameter cubes, depending on the availability and fragility of the samples. These dimensions are required for accurate experiment conditions with respect to the mentioned entry conditions. During the experiment, the meteorite samples were attached to a copper stick mounted on a standard ESA (European Space Agency) probe holder on a four-axis moving platform inside the 6 m long and 2 m wide vacuum chamber of the PWK1 plasma wind tunnel. The PWK1 plasma wind tunnel was evacuated, and subsequently, the magnetoplasmadynamic generator was turned on. The moving platform was used to move the probe, initially held outside the plasma flow, into the flow once the air flow had stabilized. The duration of exposure of the sample to the air plasma flow ranged from about 3 to 12 s, depending on the meteorite composition and the durability of the meteorite holder. For some smaller samples, or samples with a higher risk of fragmentation upon drilling for the copper stick holder, a high-temperature ceramic glue (Resbond 940HT) was used to attach the sample to the copper holder. A spectrum of a pure glue sample was obtained to check that the contamination of the measured spectra by the glue was negligible, which was confirmed. An example of a melting Chelyabinsk meteorite sample in the PWK1 plasma wind tunnel is shown in Fig. <ref>.
§.§ Data processing
The calibration of the Echelle spectra of the ablated meteorites was performed after each laboratory experiment using a calibration lamp located at the position where the meteorite sample was previously placed. The radiation of the calibration lamp measured in the laboratory environment is used to convert the ADU camera units (Analog-to-Digital Units) to spectral radiance. Considering the typical case of a meteorite ablation of ∼4 s duration and the effort to obtain the highest possible camera gain, 15-70 frames of the meteorite emission spectrum are recorded. The resulting emission spectrum was obtained by summing the intensity profiles of the individual calibrated frames. The last step before calculating line intensities is subtracting spectral baselines using the Fityk program <cit.>. Within this program, a synthetic spectrum consisting of the main emission multiplets of the meteor was modeled and then fitted to the calibrated spectrum using the damped least-squares method (the Levenberg–Marquardt algorithm) to measure the relative intensities of spectral emission lines. For the shape of all modeled lines in the synthetic spectrum, Gaussian line profiles were used with appropriate full width at half maximum (FWHM) adjusted by automatic fit, typically ∼0.1 nm. The error bars of the measured line intensities were estimated based on the signal-to-noise ratio (SNR) in each meteor spectrum. The multiplet numbers used in this work are taken from <cit.>.
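To make the line-measurement procedure concrete, the following minimal sketch illustrates the same steps in Python: summing the calibrated frames and fitting a Gaussian profile (FWHM ∼0.1 nm) with the Levenberg–Marquardt algorithm. It is only an illustration of the procedure described above, not the Fityk-based pipeline actually used; all function and variable names are placeholders.

```python
# Minimal sketch of the line-intensity measurement described above. The actual analysis
# was done in Fityk; this only illustrates summing calibrated frames and fitting a
# Gaussian profile with the Levenberg-Marquardt algorithm. All arrays are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(wl, amplitude, center, fwhm, baseline):
    """Gaussian line profile parameterised by its FWHM (typically ~0.1 nm)."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return amplitude * np.exp(-0.5 * ((wl - center) / sigma) ** 2) + baseline

def fit_line(wavelength, spectrum, line_center, window=0.5):
    """Fit a single emission line in a +/- `window` nm interval around `line_center`."""
    sel = np.abs(wavelength - line_center) < window
    wl, flux = wavelength[sel], spectrum[sel]
    p0 = [flux.max() - flux.min(), line_center, 0.1, flux.min()]
    popt, pcov = curve_fit(gaussian, wl, flux, p0=p0, method="lm")
    amplitude, center, fwhm, baseline = popt
    # integrated line intensity of the fitted Gaussian (area above the baseline)
    intensity = amplitude * fwhm * np.sqrt(np.pi / (4.0 * np.log(2.0)))
    return intensity, popt, np.sqrt(np.diag(pcov))

def summed_intensity(wavelength, frames, line_center):
    """frames: (n_frames, n_pixels) calibrated spectral radiance; sum, then fit one line."""
    return fit_line(wavelength, frames.sum(axis=0), line_center)
```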
The spectral data analysis of AMOS data was carried out according to the procedure described in <cit.>. Each meteorite spectrum was corrected for noise, other sources of illumination and spectral sensitivity of the system and later manually scanned in individual frames of the video recording. The resulting meteorite spectrum was obtained by summing all intensity profiles and scaled using well-known lines and a polynomial fit of the third order.
In this work, we analyzed the presence and relative intensity of the CN band in the emission spectra of the tested meteorites. The strongest peak of this band is located near 388.3 nm (studied in our analysis), followed by weaker CN peaks near 387.1 nm and 385.0 nm. Due to the high resolution (R ≈ 10 000) of the Echelle data, the measurement of the CN intensity in the spectra of ablated meteorites is straightforward, while in the case of the lower resolution (R ≈ 550) of the AMOS data, the intensity of the CN band is affected by contributions of the surrounding Fe lines. A comparison of the emission spectrum of the Murchison meteorite in the B → X CN band region in the lower resolution AMOS data and the higher resolution Echelle data is displayed in Fig. <ref>.
An illustration of the model of the CN (B → X) Δν = 0 band fitted to an observed spectrum of the Murchison meteorite is displayed in Fig. <ref>. The CN model was obtained using the line-by-line emission code PARADE <cit.>. Equilibrium was assumed between the translational, rotational, vibrational and electronic temperatures, which were manually varied to fit the simulated spectrum to the data recorded with the Echelle spectrometer. In order to reduce the effect of noise on the fit, the spectra of five successive frames were averaged for each temperature estimate. The fit of the CN band displayed in Fig. <ref> was obtained at the resulting rotational Trot = 6500 K and vibrational temperatures Tvib = 6500 K.
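The temperature-matching step itself can be illustrated with a simple grid scan. The sketch below assumes a hypothetical `cn_band_model(wavelength, T)` function standing in for the PARADE-generated synthetic CN spectra (with Trot = Tvib = T, as in the fit above); it is an illustration of the approach, not part of the actual analysis chain.

```python
# Sketch of the temperature-matching step only. The synthetic CN (B->X) spectra used in
# this work come from the PARADE line-by-line code; `cn_band_model` below is a
# hypothetical placeholder for such a model evaluated at a single temperature T.
import numpy as np

def average_frames(frames, start, n=5):
    """Average n successive calibrated frames to reduce noise before fitting."""
    return frames[start:start + n].mean(axis=0)

def best_fit_temperature(wavelength, observed, cn_band_model,
                         temperatures=np.arange(4000.0, 9001.0, 250.0)):
    """Grid scan over temperature; returns the T minimising the residual sum of squares.
    A free scaling factor is fitted analytically for each T."""
    best_T, best_rss = None, np.inf
    for T in temperatures:
        model = cn_band_model(wavelength, T)
        scale = np.dot(model, observed) / np.dot(model, model)  # least-squares amplitude
        rss = np.sum((observed - scale * model) ** 2)
        if rss < best_rss:
            best_T, best_rss = T, rss
    return best_T, best_rss
```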
§ RESULTS
§.§ The detection of CN in meteorite spectra and correlation with H
We have studied high-resolution Echelle spectra of 22 different meteorites obtained during their simulated ablation in plasma wind tunnel facility. The primary focus of this section is on the study of the occurrence and relative intensity of the CN band measured relative to the main meteor emission multiplets of Fe I-15 and Mg I-2 representing silicate and metallic components in meteoroids. These multiplets were selected as they are among the most universally observed features in visible-range meteor spectra. Additionally, we have studied the correlation between CN and Hα near 656.3 nm since both features originate from volatiles embedded in meteorites and are potentially the best candidates for tracing water molecules and organic compounds in small solar system bodies <cit.>.
The free plasma flow generated at the beginning of each experiment enabled us to observe the plasma spectrum before moving the meteorite sample into the plasma flow, i.e. before the meteorite ablation started, allowing us to identify plasma lines and a possible contribution to the H emission from an outside source. All spectra were thoroughly examined for possible H contamination, and four meteorite spectra with CN emission (Bilanga, Eagle, Lancé, and NWA 11303 meteorites) were confirmed to have an additional source of H emission. In the case of the Bilanga, Eagle and Lancé meteorite spectra, H emission was already observed before the meteorite insertion and increased after ablation started, indicating that some fraction of water molecules and organic compounds originates in these meteorites. The additional source of H was also detected in the spectrum of the NWA 11303 meteorite, but without a significant intensity change after the meteorite ablation started, pointing to the absence of an original H source in the meteorite. The external H source probably originated in the evaporated water of the internal cooling system of the plasma wind tunnel facility. No contamination of the detected H emission was found in the majority of the presented meteorite spectra. In addition, out of the meteorites used in the analysis, four finds (the Dhofar 1575, Mincy, NWA 13303, and Ragland meteorites) may be to some degree affected by terrestrial weathering, whereas all the other tested meteorites are falls and, thus, much more pristine. The effects of terrestrial weathering will be further examined based on the time evolution of meteorite spectra from individual frames.
In a previous work by <cit.>, based on a limited number of samples, a correlation was found between the intensity of the Hα line and the CN band, which we here confirm based on an extended set of samples (namely Eagle [EH5], Bilanga [DIO], Lancé [CO3.5], Mincy [Mesosiderite], and Northwest Africa (NWA) 11303 [LUN]). Fig. <ref> shows that meteorites with increased CN content also exhibit higher volatile H content. The strongest H and CN emissions were detected in the CM2 carbonaceous chondrite Murchison, which is, in fact, rich in hydrocarbons and amino acids, with a water content of ∼10 wt.% <cit.>. We have also found stronger CN and H emissions in other carbonaceous chondrite meteorites, namely the CO3.5 Lancé and the CV3 Allende meteorites. The Allende meteorite contains on average <1 wt.% water <cit.>, which was manifested by a significantly lower Hα line intensity compared to Murchison. Moreover, among all three tested carbonaceous chondrites, the Murchison meteorite (CM2) has the highest carbon content of 2.7 wt.% (mean elemental abundance), followed by the Lancé (CO3.5) and Allende (CV3) meteorites with 0.65 wt.% and 0.27 wt.% carbon content, respectively <cit.>. Our results reflect well the real bulk elemental composition of these meteorites, as the Murchison meteorite exhibits the strongest CN/Fe I-15 ratio, followed by the Lancé and Allende meteorites, as shown in Fig. <ref> (upper panel). To account for the differences in the bulk composition of the individual meteorites, we display the intensities of the Hα line and CN relative to both the Fe I and Mg I emission (Table <ref>).
Within the group of achondrites, the strongest CN and H emissions were detected for the meteorite Dhofar 1575, belonging to the achondrite carbon-rich ureilite group. In ureilites, carbon is bound in the form of tiny grains of graphite and (nano)diamonds. Since this meteorite is a find, it is necessary to consider the potential influence of terrestrial weathering, although according to <cit.>, the weathering grade for this meteorite is low. The time evolution of the CN emission observed in the monochromatic light curves (<ref>) revealed a continuous release of CN during the ablation, also supporting an embedded source of CN within the meteorite sample.
Among other tested achondrites, relatively strong CN emission was detected for the lunar meteorite NWA 11303, mostly originating from the early stages of the meteorite ablation (see further discussion in Section <ref>). On the contrary, the martian meteorite Tissint (Shergottite) did not exhibit any CN or H emission. Moderate CN and H emission was detected in the aubrite meteorite Norton County. Here we have found the most significant difference between the intensities of CN and Hα relative to Fe I-15 and Mg I-2. The reason is its composition, consisting of Mg-rich silicates and depleted in iron <cit.>, which is reflected in low CN/Mg and H/Mg ratios and relatively high CN/Fe and H/Fe ratios, respectively (Fig. <ref>). The Norton County aubrite contains ∼0.3 wt.% water <cit.>.
Moderate CN emission was also detected in the diogenite meteorite Bilanga. Interestingly, out of all the tested HED (howardite-eucrite-diogenite) meteorites, Bilanga is the only one with detected CN, as no CN was found in the eucrite Stannern or the howardite Sariçiçek. However, measurements of carbon isotopes, which differentiate C caused by terrestrial contamination from indigenous content, confirmed the presence of indigenous carbon in HED meteorites, including the howardite Sariçiçek <cit.>. This level of carbon content, however, did not produce detectable CN emission during the simulated ablation of the eucrite Stannern or the howardite Sariçiçek. To our knowledge, the carbon content of the Bilanga meteorite was not measured by previous authors; thus, we cannot compare it with the other tested HED meteorites.
The H and CN line intensities are below the detection limit (log10(H/Fe I) < -1.6 and log10(CN/Fe I) < -1.3, respectively) in most of the tested ordinary chondrites, including Košice (H5), Pultusk (H5), Buzzard Coulee (H4), Mocs (L5-6), NWA 869 (L3-6), Knyahinya (L/LL5), Chelyabinsk (LL5), and Kheneg Ljouâd (LL5/6). Therefore, CN content was considered absent or unreliable in these tested meteorites.
While CN and H emissions were absent in most of the tested ordinary chondrites, they were surprisingly clearly detected in the LL3.4 ordinary chondrite Ragland. It has been reported that terrestrial weathering altered metallic Fe, Ni and troilite to iron oxides and hydroxides <cit.> in the Ragland meteorite, and therefore we can assume a slight modification of its composition. However, Ragland has unusual mineralogical and chemical composition features for an LL ordinary chondrite. It has a relatively high water content of 2.45 wt.% and is the least metamorphosed ordinary chondrite investigated in this study <cit.>. Therefore, the observed spectral features may also represent its original, atypical composition <cit.>.
We have found a very faint CN band peak in the EH5 enstatite chondrite Eagle, which also corresponds with the detected faint Hα emission. Most of the detected CN emission originated from the early stages of the meteorite ablation, implying a source in the outer layers of the meteorite. The influence of terrestrial weathering of the sample therefore cannot be excluded, although the sample originates from an observed fall. The water and carbon contents of the Eagle meteorite are ∼0.5 wt.% and ∼0.3 wt.%, respectively <cit.>. It is believed that besides carbonaceous chondrites formed in the outer solar system as the main source of hydrated minerals delivered to Earth, enstatite chondrites from the inner solar system also contributed to the origin of Earth's water <cit.>. Moreover, enstatite chondrites are considered to be the material from which the proto-Earth was formed, as they have identical isotopic abundances to terrestrial rocks <cit.>.
No CN emission was detected from the mesosiderite Mincy or the iron meteorite Mount Joy. Interestingly, we observed an onset of H emission in the early stages of the Mincy meteorite ablation with a gradual decrease and disappearance of the Hα line. Since significant hydration is not assumed to be present in mesosiderites, the detection of H at the beginning of the ablation may reflect an effect of terrestrial weathering on the outer layers of the sample. We note that Mincy is a meteorite find.
§.§ CN/Fe I-4 intensity ratio measurements and detection in lower resolution spectra
We have found that one of the most straightforward methods to recognize the presence of the CN band, which is also applicable to the lower resolution data, is measuring the relative intensity ratio of the CN peak at 388.3 nm to one line from the Fe I-4 multiplet positioned near 386.0 nm, as shown in Fig. <ref> and Table <ref>. Meteorites without CN exhibit only a very faint Fe I peak at 388.3 nm. At 386.0 nm, all tested meteorites show a strong Fe I-4 line peak. Without a considerable contribution of CN emission, the 388.3 nm/386.0 nm ratio between the Fe I lines should remain relatively constant for different meteorite types, given that the ablation behavior of the sample is steady. Fig. <ref> shows the distinction of meteorites in which this intensity ratio was increased compared to meteorites with no detected CN emission. The boundary for the recognition of CN emission in our data appears to be a value of 388.3 nm/386.0 nm ≈ 0.1. In general, we did not find CN in meteorites with an intensity ratio below this value. However, we note that the distinction of the CN contribution based on the 388.3 nm/386.0 nm intensity ratio must be considered carefully. In this work, the presence of CN was confirmed by also taking into account the overall intensity of the CN band relative to other element lines and studying the time evolution of its emission.
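For illustration, the intensity-ratio criterion can be written down in a few lines; the 0.1 boundary below is the empirical value quoted above, and a positive flag is only a candidate detection that still requires the additional checks mentioned in the text.

```python
# Illustration of the 388.3 nm / 386.0 nm intensity-ratio criterion discussed above.
# The peak intensities are assumed to have been measured beforehand (e.g. with the
# Gaussian fits sketched earlier); the 0.1 boundary is only indicative.
def cn_ratio(i_3883, i_3860):
    """Peak intensity ratio of the CN band head (388.3 nm) to the Fe I-4 line (386.0 nm)."""
    return i_3883 / i_3860

def cn_candidate(i_3883, i_3860, boundary=0.1):
    """Flag a spectrum as a CN-emission candidate; confirmation still requires the overall
    CN band intensity and its time evolution, as stressed in the text."""
    return cn_ratio(i_3883, i_3860) > boundary
```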
The majority of all meteorites tested in the wind tunnel with apparent detection of CN emission belong to the group of carbonaceous chondrites (Murchison, Allende, Lancé) and distinct achondrites (Dhofar 1575, Bilanga, Norton County, and NWA 11303 meteorites). The surprising detection of CN in the spectrum of the ordinary chondrite Pultusk is assumed to be due to a contaminating source, as we only observed CN emission at the beginning of the ablation (see Fig. <ref> and the discussion in Section <ref>).
The measurement of the 388.3 nm/386.0 nm intensity ratio presents a suitable method of identifying the presence of CN emission in lower resolution data. To validate this method, we measured the intensity ratio in the meteorite ablation spectra captured by the AMOS-Spec-HR spectrograph routinely used for meteor observations. Our results show that the presence of the CN band is accompanied by an increase of the 388.3/386.0 nm peak intensity ratio and a decrease of the line width ratio of these two peaks due to the contribution of the surrounding CN peaks near 387.1 nm and 385.0 nm (Table <ref>, Fig. <ref>). However, in the case of meteors, it is necessary to study these distinguishing features carefully, as the line emission depends on the flow enthalpy, i.e., on the entry speed (line intensity ratios may differ at different temperatures).
§.§ Monochromatic light curves
Next, we studied the time evolution of the CN emission in the spectra of the tested meteorites by analyzing their monochromatic light curves. We have found that an early sharp increase of the CN intensity can be observed in the earliest stages of the meteorite ablation, along with the onset of the emission of Na, due to the volatility of its atoms and the low excitation potential of the Na lines. An example of the monochromatic light curve of the CM2 carbonaceous chondrite Murchison, with the most notable CN emission, is displayed in Fig. <ref>. Five frames before and after the meteorite ablation, showing the spectrum noise level, are displayed for a better distinction of the onset of the CN emission. The value of 0.0 s on the x-axis corresponds to the start of the meteorite ablation, i.e. to the onset of the Na lines, which are the first to begin to radiate. The intensity of the Fe I-4 multiplet is represented only by the intensity of one line at 388.6 nm, close to the strongest CN peak. One can note the different behavior of the time-dependent CN emission, which peaks in the early stages of ablation, compared to the other elements detected in the meteorite spectrum, which show a slower onset of emission. In the later stages of the meteorite ablation, the CN generally radiates in a similar trend as the other elements, including very short and subtle flares. Early strong radiation similar to that of the CN emission was observed for the low excitation Na lines. Due to the saturation of the Na lines in most frames of the Echelle spectra, the relative intensity of the Na I lines was not investigated in this work. An interesting behavior was observed in the monochromatic light curve of the Murchison meteorite as a bright flare after the final stages of the meteorite ablation with increased Cr I-7 and Mn I-2 intensity (near 107 W/m^2/sr/nm and 170 W/m^2/sr/nm, respectively). This flare may be associated with a sudden release of a droplet of molten material with a specific composition consisting of chromium- and manganese-bearing minerals.
Monochromatic light curves of other ablated meteorites plotted from the highest to the lowest CN intensity can be found in the <ref>, demonstrating different content of organic compounds and the time evolution of the CN emission. Slight variations are observed in the onset of CN emission among spectra of the different ablated meteorites. Meteorite spectra of Murchison, Dhofar 1575 and Eagle exhibit CN emission simultaneously with Na emission. In the spectra of Allende, Bilanga, Norton County, NWA 11303 and Ragland meteorites, the onset of CN emission was observed from the second frame (around 0.08 s of ablation) and from the third frame (about 0.17 s of ablation) in the case of Lancé meteorite. In addition, the light curve shape of CN emission for the Dhofar 1575 meteorite (<ref>) is not characterized by a typically very steep initial increase of brightness but rather by a very slow, gradual increase and decrease in brightness, which does not indicate an effect of terrestrial weathering, but may rather reflect a heterogeneous distribution of carbon within the ureilite meteorite. However, a steep increase in CN intensity in the early stages of ablation and a subsequent steep decrease towards the noise level of the recording in the case of the lunar NWA 11303 meteorite (<ref>) points to a source from the surface layer of the meteorite, which may also result from terrestrial weathering. We note that the weathering grade of this meteorite is low <cit.>. In this case, the origin of the detected CN emission was not clearly resolved. Further laboratory analysis of the bulk composition of this meteorite could help resolve this issue.
For further insight into the differential ablation of the studied meteorites, we measured the CN band peak intensity relative to the emission of other major atoms in individual frames. Fig. <ref> displays the time-dependent CN/Mg I-2 intensity ratio for all meteorites with detected CN content. Similarly to Fig. <ref>, five frames are plotted before the meteorite ablation starts. Fig. <ref> presents the evolution of CN emission relative to Mg I in the first 3.2 s of ablation for better visualization. However, as can be seen, the ablation durations of the Dhofar 1575, Eagle, Lancé, and NWA 11303 meteorites were shorter than for the other investigated meteorite samples.
The analyzed relative CN intensity ratios can be affected by the specific composition of the meteorite fragments, as observed by the high Mg content in the Norton County meteorite or the high Fe content in the Ragland meteorite (Fig. <ref>). The CN line intensity measured relative to Fe I-15 in individual frames of meteorite ablation can be found in the <ref> for comparison. Regardless of the specific meteorite composition, a clear feature of the CN band is its strong peak in the early stages of the ablation and the subsequent sharp decrease in the CN/Mg and CN/Fe intensity ratio due to the gradual release of Mg and Fe dominating in the later stages. This result implies that, in lower-resolution spectra of real meteor observations, CN can be best detected in the early stages of the ablation in the upper atmosphere, before the strong emission of the surrounding Fe I begins. Sufficient instrument sensitivity and high frame rate may therefore play a key role for the detection of the early CN emission from meteors.
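A possible way to assemble such monochromatic light curves and frame-wise ratios is sketched below; the per-frame line measurement and the exact Mg I-2 wavelength are left as placeholders, since they depend on the instrument and the fitting procedure used.

```python
# Sketch of how the frame-wise (monochromatic) light curves and CN/Mg ratios could be
# assembled. `measure_line` stands for any per-frame line-intensity measurement (e.g.
# the Gaussian fit sketched earlier); the Mg I-2 wavelength is left as a parameter.
import numpy as np

CN_PEAK_NM = 388.3  # strongest CN (B->X) band peak used in this analysis

def light_curve(wavelength, frames, line_nm, measure_line):
    """Line intensity measured frame by frame (one value per Echelle/video frame)."""
    return np.array([measure_line(wavelength, frame, line_nm) for frame in frames],
                    dtype=float)

def cn_over_mg(wavelength, frames, mg_line_nm, measure_line):
    """Time-dependent CN/Mg I-2 intensity ratio; frames with no Mg signal give nan."""
    cn = light_curve(wavelength, frames, CN_PEAK_NM, measure_line)
    mg = light_curve(wavelength, frames, mg_line_nm, measure_line)
    return np.divide(cn, mg, out=np.full_like(cn, np.nan), where=mg > 0)
```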
As already mentioned, the most notable CN emission was detected from carbonaceous chondrites. The light curve shapes of the three ablated carbonaceous meteorites do not show significant variations, apart from the differing CN content (<ref>). We observed the strongest CN peak in the CM2 Murchison meteorite, followed by the CV3 Allende and CO3.5 Lancé meteorites. In the case of the Lancé meteorite, it should be mentioned that the CN emission was observed starting from 0.17 s after the first detection of the meteorite spectrum (from the third frame), and the slight decrease of CN/Mg I-2 up to 0.5 s of ablation is caused by the significantly stronger Mg line (see also the monochromatic light curve of Lancé in <ref>).
The time evolution of the CN emission varies slightly between the achondrite meteorites (<ref>). The strongest peak of the CN band in the diogenite Bilanga was observed at around 0.08 s of the ablation process, while in the lunar NWA 11303 and ureilite Dhofar 1575 meteorites it was observed slightly later, at 0.16 s and 0.3 s, respectively. The measured CN/Mg I-2 intensity ratio in the aubrite Norton County is continuously low (from the onset of CN emission at 0.08 s), but this fact is slightly affected by the relatively Mg-rich composition of this meteorite. When analyzed relative to the Fe I emission, the aubrite Norton County exhibits relative CN intensities closer to some other achondrites with notable CN presence (Fig. <ref>). The apparent increasing trend of the CN/Mg I-2 intensity ratio in the Dhofar 1575 meteorite in the last stages of the meteorite ablation is the result of a significant decrease of the Mg lines and a relatively steady, slow CN release at the same time (see also the monochromatic light curve of Dhofar 1575 in <ref>).
Our results suggest that ordinary and enstatite chondrites exhibit no CN emission or only a faint, unrecognizable contribution near the background noise (<ref>). As discussed earlier, the ablation of the LL3.4-type Ragland meteorite was accompanied by strong CN emission with the most atypical time-dependent behavior. The monochromatic light curve of Ragland is characterized by a gradually increasing CN/Mg I-2 intensity ratio in the first few frames and a subsequent very short and strong flare, occurring over the longest time span compared to other meteorites with CN emission (see also the monochromatic light curve of Ragland in <ref>). This behavior can potentially be related to terrestrial weathering of this meteorite. Although the CN emission was not reliably resolved from the summed spectral profile of the H5 ordinary chondrite Pultusk (Fig. <ref>), we detected CN emission from the first second of the ablation (Fig. <ref>). The observed distinct light curve of Pultusk likely reflects the presence of a surface layer rich in CN content. In this specific case, it is difficult to invoke terrestrial weathering, as this meteorite is a fall that was recovered quickly after its landing on Earth.
§ CONCLUSIONS
We present here the first in-depth analysis of CN emission from a wide range of laboratory tested meteorites serving as asteroidal meteor analogues. The simulated ablation conditions correspond to an atmospheric flight of a slow meteoroid (∼12 km s-1) at an altitude of approximately 80 km. The observed variations of CN emission in various meteorite types demonstrate that CN can be used as a diagnostic spectral feature of carbonaceous and relatively carbon-rich meteoroids. The strongest CN emission was found in carbonaceous chondrites (CM2, CV3 and CO3.5) and a C-rich ureilite. Moderate CN emission was found for diogenite, aubrite and lunar meteorite samples. Low CN contribution in the early stages of the ablation was found in an enstatite chondrite.
In general, the CN band was either absent or not clearly detected in most of the ablated ordinary chondrites with the exception of the Ragland meteorite (LL3.4), consisting of moderately weathered material with originally atypical composition for an ordinary chondrite. CN was not detected in the tested eucrite, howardite, martian shergottite, mesosiderite and iron meteorites.
Our results point out a strong correlation between the CN and H emission and suggest that both volatile features are suitable for tracing the contents of organic matter and water molecules present in meteoroids. While this study only focuses on analogues of asteroidal meteors, our previous survey <cit.> pointed out strong H emission as a marker of the high volatile contents in cometary meteoroids.
The analysis of monochromatic light curves of ablated meteorites has shown that CN emission can be best recognized in the early stages of the meteorite ablation, before the onset of surrounding Fe I lines. For application in lower resolution meteor observations, we therefore suggest that efficient detection of CN can be achieved during the early stages of meteor ablation in the upper atmosphere. Additionally, using lower resolution data from the meteor spectrograph AMOS we found that the measurement of the intensity ratio and line width ratio of the CN band peak near 388.3 nm to the Fe I-4 line peak near 386.0 nm can indicate the contribution of CN in lower resolution spectra dominated by surrounding iron lines.
Our results suggest that terrestrial weathering of meteorites can affect their spectral signature, including potentially the tracers of water molecules and organic content. Such effects, typically detected in the more weathered meteorite finds, were resolved in specific monochromatic light curves showing CN or H emission only in the early stages of the ablation. This may be explained by a contaminant source of carbon in the surface layers of the meteorite. Nevertheless, the strong spectral distinction between the carbon-rich materials, specific achondrites and ordinary chondrites confirms that the studied diagnostic spectral features correlate with their bulk composition and thus can be used to trace the original contents of organic compounds in meteoroids.
§ ACKNOWLEDGEMENTS
We are thankful to the High Enthalpy Flow Diagnostics Group (HEFDiG) team of the Institute of Space Systems, University of Stuttgart for carrying out the meteorite ablation experiments. This work was supported by ESA grants under contracts No. 4000128930/19/NL/SC and No. 4000140012/22/NL/SC/rp, the Slovak Research and Development Agency grant APVV-16-0148, the Slovak Grant Agency for Science grant VEGA 1/0218/22, and the Comenius University Grant G-21-193-00 and G-22-145-00. J. Vaubaillon was supported by CNES, the French space agency, in the framework of the MALBEC project. We particularly thank all colleagues from HEFDiG in Stuttgart who supported and inspired the MetSpec campaigns. G. Batic (NHMW) is thanked for the preparation of the meteorite samples.
§ DATA AVAILABILITY
The spectral data of presented ablated meteorites will be made available upon a reasonable request.
§ ADDITIONAL MONOCHROMATIC LIGHT CURVES
|
http://arxiv.org/abs/2307.05174v1 | 20230711111206 | Mao-Zedong At SemEval-2023 Task 4: Label Represention Multi-Head Attention Model With Contrastive Learning-Enhanced Nearest Neighbor Mechanism For Multi-Label Text Classification | [
"Che Zhang",
"Ping'an Liu",
"Zhenyang Xiao",
"Haojun Fei"
] | cs.CL | [
"cs.CL"
] |
Shot noise classification of different conductance plateaus in a quantum point contact at the nu2by3 edge
Moshe Goldstein
August 12, 2023
=========================================================================================================
The study of human values is essential in both practical and theoretical domains. With the development of computational linguistics, the creation of large-scale datasets has made it possible to automatically recognize human values accurately. SemEval 2023 Task 4<cit.> provides a set of arguments and 20 types of human values that are implicitly expressed in each argument. In this paper, we present our team's solution. We use the Roberta<cit.> model to obtain the word vector encoding of the document and propose a multi-head attention mechanism to establish connections between specific labels and semantic components. Furthermore, we use a contrastive learning-enhanced K-nearest neighbor mechanism<cit.> to leverage existing instance information for prediction. Our approach achieved an F1 score of 0.533 on the test set and ranked fourth on the leaderboard. We make our code publicly available at https://github.com/peterlau0626/semeval2023-task4-HumanValue.
§ INTRODUCTION
The identification and analysis of human values in texts has been an important area of research. With the development of computational linguistics, this research has gained widespread attention because of its potential impact on areas such as sentiment analysis, social science.
One of the challenges in this area is to accurately categorize all human values. Several notable research achievements have been made in the categorization of human values. One such approach classifies human values into 54 categories across four different levels<cit.>. SemEval 2023 Task 4 uses the classification method of this paper: an argument is given and the task is to identify whether a value is expressed in the argument, and the F1 scores of the results at level 2 are used for the overall ranking. There are 20 categories of human values at level 2, and an argument can belong to multiple value categories or to none of them. This is a typical multi-label text classification (MLTC) problem, which has been applied in many scenarios such as news emotion analysis<cit.> and web page tagging<cit.>.
In this paper, we propose a model that combines a label-specific attention network with the contrastive learning-enhanced nearest neighbor mechanism<cit.>. The multi-head attention mechanism allows our model to overcome the shortcomings of traditional attention-based models and to focus on different parts of a document, resulting in more accurate label-specific attention results. The nearest neighbor mechanism enables our model to exploit the rich knowledge that can be directly obtained from existing training instances, and helps enhance the interpretability and robustness of the model.
§ BACKGROUND
§.§ Datasets
The dataset comprises arguments from six different domains, such as news releases, online platforms, etc., originating from four different countries/regions; it is composed of 80% data from the IBM argument quality dataset (95% from the original dataset), 15% from the European Future Conference (new), and 5% from group discussion ideas (2% from the original dataset). The training dataset comprises more than 6500 arguments, whereas the validation and test datasets consist of around 1500 arguments each. In addition, the organizers of the competition provided three additional datasets to evaluate the robustness of the methods: a validation set Zhihu (labels available), a test set Nahj al-Balagha (labels confidential), and a test set The New York Times (labels confidential). All datasets have been manually annotated.
Each sample in the dataset contains an argument ID, conclusion, stance towards the premise, and the premise itself. The labels consist of the argument ID and a column for each of the 20 value categories, indicating whether the sample belongs to each category (0 or 1).
§.§ Related Work
Before the widespread adoption of deep learning, models such as SVM were widely used to minimize an upper bound of the generalization error<cit.>. Simple neural network (NN) models were later used for MLTC and achieved good performance<cit.>. Additionally, convolutional neural networks (CNNs) and recurrent networks with gated recurrent units (GRUs) have been successfully used with pre-trained word2vec embeddings<cit.>. Feature selection has been shown to be effective in speeding up learning and improving performance by identifying representative words and removing unimportant ones<cit.>.
In recent years, with the development of pre-trained models, the ability to extract semantic information has become increasingly powerful, and there have been several representative works focused on improving MLTC models. For example, <cit.> utilized graph neural networks based on label graphs to explicitly extract label-specific semantic components from documents. Seq2seq models can capture the correlations between labels<cit.>. LSAN<cit.> can focus on different tokens when predicting each label.
§ SYSTEM OVERVIEW
In this section, we will present our model, which consists of two main parts. The first part is a multi-headed attention mechanism based on a specific label representation, while the second part is a nearest neighbor mechanism enhanced using contrast learning.
The MLTC problem can be described as follows: assume a set of data D={(x_i, y_i)}_i=1^N of N labeled documents, where x_i represents the text, y_i ∈{0,1}^l represents the label vector of x_i, and l represents the total number of labels. Each document x_i consists of a series of words. Our goal is to learn a classifier that establishes a mapping from x_i to y_i, so that when a new document x is presented, its label vector y can be correctly predicted. As pretrained language models (PLMs) show remarkable performance in extracting natural language representations, we use a PLM as the base encoder to obtain document and label features. An input sample can be expressed as x_i={w_1, w_2, w_3 … w_n-1, w_n}, where w_p∈ R^d denotes the pth word vector of a document. After encoding by the PLM, the input matrix of the whole sentence is obtained as H ∈ R^n× d, where d is the hidden dimension of the PLM.
§.§ Label-specific multi-head attention network
In order to explicitly capture the corresponding label-related semantic components from each document, the approach of using label-guided attention mechanisms to learn label-specific text representations has been widely used in previous studies, and such a method is used in LSAN<cit.>. In addition, the success of the Transformer model<cit.> illustrates the ability of multi-head attention mechanisms to extend the model's capacity to focus on different locations more effectively than single-head attention mechanisms. The usefulness of this method for text classification is very intuitive. Consider, for example, the following sentence: "Social media is good for us. Although it may make some people rude, social media makes our lives easier." Focusing on the words "although" and "makes our lives easier" at the same time is a more accurate way of capturing the value of comfort in life, while ignoring the disadvantages of social media. With this motivation, we next present our model.
Firstly, to make use of the semantic information of the labels, we initialize the trainable label representation matrix C ∈ R^l× d with the mean-pooled label feature vectors obtained from the pretrained encoder. Then, the multi-head attention mechanism is used to compute the label-aware attention scores. With the input document representation matrix H ∈ R^n× d and the label representation matrix C, the query Q, key K, and value V of the attention mechanism can be expressed as follows:
Q=W_q C
K=W_k H
V=W_v H
where W_q, W_k, W_v ∈ R^d × d are the weight matrices to be learned. We use an h-head attention mechanism, so the three matrices Q, K, V can be split in the following form.
(Q_1, Q_2, … Q_h)=Q
(K_1, K_2, … K_h)=K
(V_1, V_2, … V_h)=V
where Q_i∈ R^l × d_a and K_i, V_i∈ R^n× d_a correspond to the query, key and value of each attention head, and d_a=d/h denotes the dimensionality of a single attention head's representation space. Attention scores are then computed for each attention head similarly to the method used in the Transformer model. Since document lengths differ within a data batch, we apply a mask to the result of the QK multiplication, setting the values corresponding to the padding positions to 1e-12, and then apply the softmax activation function.
score_i=softmax(mask(Q_i K_i^T/√(d_a)))
score_i∈ R^l × n denotes the attention scores of each label over the word vectors in the document for the ith head. Then we obtain the attention results for each label with respect to the document content.
Attention = Concat(attention_1, …, attention_h) W^O, where attention_i = score_i V_i
where Attention ∈ R^l × d can be considered as the representation of the document under the views of the l labels. To obtain the document representation vector Z, the row vectors of the Attention matrix over the label views are averaged:
Z=mean(row(Attention))
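A minimal PyTorch sketch of this label-specific multi-head attention is given below. It is meant only to illustrate the mechanism of the equations above (here the padding positions are masked with -inf before the softmax rather than set to 1e-12) and is a simplified illustration rather than the exact implementation used in our experiments.

```python
# Minimal PyTorch sketch of the label-specific multi-head attention described above.
import torch
import torch.nn as nn

class LabelMultiHeadAttention(nn.Module):
    def __init__(self, num_labels, hidden_dim, num_heads):
        super().__init__()
        assert hidden_dim % num_heads == 0
        self.l, self.d, self.h = num_labels, hidden_dim, num_heads
        self.d_a = hidden_dim // num_heads
        # trainable label representations C (initialised e.g. from mean-pooled label features)
        self.label_repr = nn.Parameter(torch.randn(num_labels, hidden_dim))
        self.w_q = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.w_k = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.w_v = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.w_o = nn.Linear(hidden_dim, hidden_dim, bias=False)

    def forward(self, token_states, attention_mask):
        # token_states: (b, n, d) from the PLM; attention_mask: (b, n), 1 = real token
        b, n, _ = token_states.shape
        q = self.w_q(self.label_repr)                                     # (l, d)
        k = self.w_k(token_states)                                        # (b, n, d)
        v = self.w_v(token_states)                                        # (b, n, d)
        q = q.view(self.l, self.h, self.d_a).permute(1, 0, 2)             # (h, l, d_a)
        k = k.view(b, n, self.h, self.d_a).permute(0, 2, 1, 3)            # (b, h, n, d_a)
        v = v.view(b, n, self.h, self.d_a).permute(0, 2, 1, 3)            # (b, h, n, d_a)
        scores = torch.einsum("hld,bhnd->bhln", q, k) / self.d_a ** 0.5   # (b, h, l, n)
        pad = attention_mask[:, None, None, :] == 0
        scores = scores.masked_fill(pad, float("-inf"))
        attn = torch.softmax(scores, dim=-1)
        heads = torch.matmul(attn, v)                                     # (b, h, l, d_a)
        heads = heads.permute(0, 2, 1, 3).reshape(b, self.l, self.d)
        label_views = self.w_o(heads)                                     # Attention, (b, l, d)
        z = label_views.mean(dim=1)                                       # Z, (b, d)
        return z, label_views
```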
After obtaining a comprehensive document representation with label-specific correlation, we can construct a multi-label text classifier by means of a perceptron consisting of fully connected layers. Mathematically, the predicted probability of each label of the next document can be determined by:
ŷ=sigmoid(W^1 Z^T).
where W^1 ∈ R^l × d contains the trainable parameters of the fully connected output layer, which transfers the output values into probabilities. Since multi-label classification suffers from an imbalance between positive and negative samples, in order to balance their contributions to the loss function and obtain a better trained model, we use a weighted cross-entropy loss as the loss function of the model:
L_BCE=∑_i=1^b ∑_j=1^l -( w· y_ijlog(p_ij)+(1-y_ij) log(1-p_ij))
where b is the size of a data batch, w is the weighting factor, y_ij is the true value of the jth label of the ith sample, p_ij is the predicted probability for that label, and l is the total number of labels. The ratio of positive to negative sample counts in the training set is taken as the value of w.
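A possible implementation of the classification head and this weighted loss is sketched below; it relies on the standard weighted binary cross-entropy of PyTorch and adds a bias term to W^1, which is a minor deviation from the equation above.

```python
# Sketch of the classification head and the weighted binary cross-entropy above.
# `pos_weight` plays the role of w (a scalar or per-label tensor).
import torch
import torch.nn as nn

class LabelClassifier(nn.Module):
    def __init__(self, hidden_dim, num_labels, pos_weight):
        super().__init__()
        self.fc = nn.Linear(hidden_dim, num_labels)  # W^1 (with a bias term)
        # BCEWithLogitsLoss(pos_weight=w) implements -(w*y*log p + (1-y)*log(1-p))
        self.loss_fn = nn.BCEWithLogitsLoss(pos_weight=pos_weight, reduction="sum")

    def forward(self, z, targets=None):
        logits = self.fc(z)                           # (b, l)
        probs = torch.sigmoid(logits)                 # \hat{y}
        loss = self.loss_fn(logits, targets.float()) if targets is not None else None
        return probs, loss
```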
§.§ Contrastive Learning-Enhanced Nearest Neighbor Mechanism
We use the k-nearest neighbor (KNN) mechanism enhanced by contrastive learning<cit.>. This approach proposes a KNN mechanism for multi-label text classification that can make good use of the information of existing instances, and a contrastive learning approach is designed to enhance this KNN mechanism effectively. Specifically, it designs a contrastive loss function based on dynamic label-similarity coefficients, which compares document representation vectors at training time so that vectors of documents sharing more labels become as similar as possible, while vectors of documents with fewer identical labels are pushed as far apart as possible. Assuming a data batch of size b, we define a function that returns all other instances of a particular instance in this batch, g(i)={k|k∈{1,2, …, b}, k≠i}. The contrastive loss (CL loss) of each instance pair (i, j) can be calculated as:
L_con^ij=-β_ijlog(e^-d(z_i, z_j) / τ^'/∑_k ∈ g(i) e^-d(z_i, z_k) / τ^')
C_ij=y_i^T · y_j, β_ij=C_ij/∑_k ∈ g(i) C_ik
where d(·, ·) is the Euclidean distance, τ^' is the contrastive learning temperature and z denotes the document representation. C_ij denotes the label similarity between instances i and j, and is normalized to obtain β_ij. The CL loss of the whole batch can be expressed as L_con =∑_i ∑_j ∈ g(i) L_con^ij. With the cross-entropy loss denoted L_BCE, the whole loss function is L=L_BCE+γ L_con, where γ controls the ratio between the contrastive learning loss and the cross-entropy loss. Then, we construct a datastore of training instances so that we can later use the existing instance information for comparison. Based on the training set (x_i, y_i) ∈ D, a store of document representation vectors D^'={(h_i, y_i)}_i=1^N is obtained with the trained model, where h_i denotes the document representation vector of the ith training instance.
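For concreteness, the label-similarity-weighted contrastive loss defined above can be sketched as follows; this is an illustrative reimplementation of the formulas, not the exact code used in our experiments.

```python
# Sketch of the label-similarity-weighted contrastive loss L_con described above.
import torch

def contrastive_loss(z, y, tau_prime=1.0, eps=1e-12):
    """z: (b, d) document representations, y: (b, l) multi-hot label vectors."""
    b = z.size(0)
    diff = z.unsqueeze(1) - z.unsqueeze(0)                    # (b, b, d)
    dist = torch.sqrt(diff.pow(2).sum(-1).clamp_min(eps))     # Euclidean d(z_i, z_j)
    sim = torch.exp(-dist / tau_prime)
    off_diag = ~torch.eye(b, dtype=torch.bool, device=z.device)
    c = (y.float() @ y.float().t()) * off_diag                # C_ij = y_i . y_j, j in g(i)
    beta = c / (c.sum(dim=1, keepdim=True) + eps)             # beta_ij
    denom = (sim * off_diag).sum(dim=1, keepdim=True) + eps   # sum over k in g(i)
    log_prob = torch.log(sim / denom + eps)
    return -(beta * log_prob * off_diag).sum()                # L_con summed over all pairs
```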
In the inference stage, given an input x, after the model calculation we obtain its document representation vector Z and the prediction of the model ŷ_M ∈[0,1]^l. Next, we query the datastore to find the k nearest neighbors N={(h_i, y_i)}_i=1^k; then the KNN prediction can be calculated as:
ŷ_KNN=∑_i=1^k α_i y_i
α_i=e^-d(h_i, Z) / τ/∑_j e^-d(h_j, Z) / τ
where d(·, ·) is the Euclidean distance, τ is the KNN temperature, and α_i is the weight coefficient of the ith neighbor: the closer the test instance representation is to this neighbor, the larger the weight. The final prediction is expressed as follows:
ŷ=λŷ_KNN+(1-λ) ŷ_M
where λ is the weight coefficient that regulates the KNN prediction and the model prediction.
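The inference-time combination can be sketched as follows, assuming the datastore of representations h_i and label vectors y_i has already been built with the trained model.

```python
# Sketch of the KNN-augmented inference step described above.
import torch

def knn_predict(z, store_h, store_y, k=10, tau=1.0):
    """z: (d,) test representation; store_h: (N, d); store_y: (N, l) stored label vectors."""
    dist = torch.cdist(z.unsqueeze(0), store_h).squeeze(0)   # distances d(h_i, Z)
    d_k, idx = torch.topk(dist, k, largest=False)            # k nearest neighbours
    alpha = torch.softmax(-d_k / tau, dim=0)                 # alpha_i = e^{-d/tau} / sum e^{-d/tau}
    return (alpha.unsqueeze(1) * store_y[idx].float()).sum(dim=0)   # \hat{y}_KNN

def combined_prediction(y_model, z, store_h, store_y, lam=0.5, k=10, tau=1.0):
    """Final prediction: lambda * \hat{y}_KNN + (1 - lambda) * \hat{y}_M."""
    return lam * knn_predict(z, store_h, store_y, k, tau) + (1.0 - lam) * y_model
```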
§ EXPERIMENTAL SETUP
In the dataset, many pronouns such as "this research", "it", "this way", etc., are used in the premise, and these pronouns refer to objects contained in the conclusion. Since our model tries to establish the attention scores of different semantic components of a document for a specific label, it is clear that the presence of these words with unclear referents affects the attention results. In addition, the stance toward the conclusion also influences the value judgment. Therefore, we use a simple strategy: combining the three parts (conclusion, premise, and stance) into a sentence that conforms to natural language conventions. Specifically, the data were preprocessed uniformly and simply, and the input structure was: "I agree (disagree) that" + conclusion content + ", because" + premise content.
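A minimal version of this preprocessing step might look as follows; the exact stance strings of the dataset are assumed here and may need to be adapted.

```python
# Sketch of the simple preprocessing described above, merging stance, conclusion and
# premise into one natural-language input string.
def build_input(conclusion, stance, premise):
    # The stance field values ("in favor of" / "against") are assumed here and may need
    # to be adapted to the actual dataset encoding.
    verb = "agree" if "favor" in stance.lower() else "disagree"
    return f"I {verb} that {conclusion.rstrip('.')}, because {premise}"
```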
We use the Roberta model<cit.> as the base pre-trained model to obtain word representations and build our architecture on top of it. We also use K-fold cross-validation: we merge the training and validation sets and then randomly divide them into six parts. We perform the training process six times, each time using 5/6 of the data as the training set and 1/6 of the data as the validation set. During the training process, the best model for each fold is saved, and the average output probability of all models is taken as the final prediction score.
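The cross-validation ensembling can be sketched as below, where `train_model` and `predict_proba` are placeholders for the actual training and inference routines.

```python
# Sketch of the 6-fold cross-validation ensembling: one model per fold, saved at its best
# checkpoint, with per-label probabilities averaged over the folds at prediction time.
import numpy as np
from sklearn.model_selection import KFold

def kfold_ensemble(texts, labels, test_texts, train_model, predict_proba,
                   n_splits=6, seed=42):
    """texts: list of input strings; labels: (N, l) numpy array of multi-hot targets."""
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    fold_preds = []
    for train_idx, val_idx in kf.split(texts):
        model = train_model([texts[i] for i in train_idx], labels[train_idx],
                            [texts[i] for i in val_idx], labels[val_idx])
        fold_preds.append(predict_proba(model, test_texts))   # (n_test, l) probabilities
    return np.mean(fold_preds, axis=0)                        # averaged prediction scores
```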
§ RESULTS
On the TIRA leaderboard, our method achieved a macro-F1 score of 0.53 and ranked fourth, while the baseline, which uses a BERT model<cit.>, achieved 0.42, with the best result of the whole competition being 0.56. In addition, our model also achieved an F1-score of 0.32 on each of the two test sets, Nahj al-Balagha and The New York Times. This performance is relatively high among all participating teams, which demonstrates the robustness and stability of our approach (see the Appendix).
To illustrate the effectiveness of our architecture, we conducted ablation experiments. The ablation experiments evaluate the performance effect of the model directly on the validation set merged with the dataset from Zhihu. In the ablation experiments, we did not use the strategy of k-fold cross-validation. The results of the ablation experiment are shown in Table <ref>,
which shows the results of all strategies with Precision, Recall, and macro-F1. We report the results for each method as the average over three runs. First, as the baseline, we use the word vector corresponding to the [CLS] output of the Roberta model as the document representation vector fed to the classifier. We then compared the baseline with LSAN<cit.>, with using only the multi-head attention mechanism, and with removing the multi-head attention part. As can be seen in the table, after using the multi-head attention mechanism, the macro-F1 value improves by about 0.3% compared to the baseline model, while the LSAN mechanism obtains a 0.2% improvement in F1-score. After adding the KNN mechanism augmented with contrastive learning alone, the macro-F1-score improves by about 0.4%. In the case of the full model, the macro-F1-score improves by about 0.7% compared to the baseline. This result is within our expectations and illustrates the effectiveness of our method. Then, in order to increase stability and robustness and to avoid overfitting, we use the K-fold cross-validation method, which makes our experimental results relatively stable and leads to an improvement of about 0.4 percentage points in the F1-scores.
§ CONCLUSION
We propose a multi-label text classification model using a label-specific multi-head attention mechanism. Compared to previous attention-based models, the use of multi-head attention enables specific labels to focus on different semantic components of the document more effectively. In addition, we use the KNN mechanism to exploit the instance information in the training set. We then perform ablation experiments on our architecture to analyze the role of each part and demonstrate the benefit of using a multi-head attention mechanism.
§ APPENDIX
|
http://arxiv.org/abs/2307.05550v1 | 20230709072109 | Exploring high scale seesaw models through a supersymmetric portal | [
"Yi Liu",
"Stefano Moretti",
"Harri Waltari"
] | hep-ph | [
"hep-ph"
] |
Mitosis Detection from Partial Annotation
by Dataset Generation via Frame-Order Flipping
Kazuya Nishimura1 Ami Katanaya2 Shinichiro Chuma2 Ryoma Bise1
Accepted 08-Jul-2023. Received 18-Jun-2023; in original form 23-May-2023
==========================================================================================
§ INTRODUCTION
Neutrino masses have been known to be non-zero for 25 years <cit.>. As they are so much smaller than all other Standard Model (SM) fermion masses, one usually assumes that they are generated by some kind of a seesaw mechanism <cit.>. The masses are still generated through the Higgs mechanism, but suppressed by a heavy seesaw particle, which can be a singlet neutrino (Type-I), a triplet of Higgs bosons (Type-II) or a triplet of exotic leptons (Type-III) (see Refs. <cit.> for reviews).
The seesaw scale is a priori unknown. If the seesaw scale is around the Electro-Weak (EW) scale, one may be able to produce the seesaw particles directly at the Large Hadron Collider (LHC) <cit.>. One of the original ideas <cit.> was that the smallness of the neutrino masses could be related to the breaking of a Grand Unification Theory (GUT), i.e., the relevant Yukawa couplings would be of order unity and the seesaw scale somewhere around α M_GUT∼ 10^14 GeV. Such energy scales are obviously out of the reach of present and future colliders.
Supersymmetry, the symmetry between fermions and bosons, is often a necessary ingredient in formulating models with large separations of scales. Due to the cancellation between the bosonic and fermionic loops, the separation of scales is radiatively stable <cit.>, once it has been generated by some dynamics. Thus in the supersymmetric framework, scalar masses would not get quadratic corrections proportional to the seesaw scale and an EW scale Higgs boson would not be unnatural even if the seesaw scale was close to the GUT scale.
In the context of high scale seesaw models, supersymmetry has one remarkable property. The scalar potential, and especially its F-terms being of the form
V=∑_i| ∂ W/∂φ_i|^2,
leads to four-scalar interactions without the seesaw particle but with the seesaw couplings involved. If the couplings are of the order unity, they are among the largest ones in the model and could lead to observable consequences.
For definiteness, let us consider the Type-I seesaw model, where the extra superpotential terms in addition to those of the Minimal Supersymmetric Standard Model (MSSM) are
W=W_MSSM+ y^ν L· H_u N^c+M_NN^cN^c,
where we assume y^ν∼ 1 and M_N∼ 10^14 GeV. When differentiating with respect to N^c, one gets the term
∑_k y^ν *_iky^ν_jkL̃^†_i· H_u^†L̃_j· H_u,
involving only Higgs bosons and left-handed sleptons, which we assume to be at the TeV scale. If there are significant mass splittings between the sfermion generations, which could well be generated through Renormalisation Group Evolution (RGE) due to the large couplings, one might get processes like ν̃_i→ν̃_jh with a large Branching Ratio (BR). If the sneutrinos decay visibly, the decays can be distinguished from mono-Higgs signatures that could arise from dark matter <cit.>. Slepton decays with Higgs bosons in the final state could offer an indication of a high scale seesaw model and thus provide us a window to scales otherwise beyond our experimental reach.
Our aim is to investigate how one could observe such slepton decay patterns involving Higgs bosons in seesaw models of Type-I and Type-III, which have a similar structure in terms of the TeV scale Lagrangian.
Our paper is organised as follows. Higgs-slepton interactions are described in the next section, which is followed by a discussion of the production and decay modes relevant to our research. Our numerical analysis is introduced in the following section, after which we conclude.
§ HIGGS-SLEPTON INTERACTIONS IN SEESAW MODELS
We shall now look at how the Higgs-slepton interactions arise from our seesaw models in some detail. In particular, we look at Type-I and Type-III seesaw models. Both have Yukawa couplings that connect the lepton and Higgs doublets to the seesaw particles, which form a singlet and triplet under SU(2). The superpotential of Type-I seesaw is given in Eq. (<ref>) and for Type-III seesaw it is
W = W_MSSM + y^ν L Σ H_u + M_ΣTr(Σ^2),
where L is the left-chiral lepton doublet and H_u = (H^+ , H^0)^T is the up-type Higgs doublet. The Σ is an antilepton (L=-1) chiral superfield which transforms as (1,3,0) under the SM gauge group SU(3)_c× SU(2)_L × U(1)_Y. The mass term for Σ violates lepton number by two units.
The superfield Σ can be represented
Σ = σ^iΣ^i= ( [ Σ^0/√(2) Σ^+; Σ^- -Σ^0/√(2) ]), Σ^± = (Σ^1 ∓ iΣ^2)/√(2), Σ^0 = Σ^3.
The models look very similar in what concerns neutrino mass generation, both having a lepton and a Higgs doublet coupling to the companion neutrinos. The only difference is that the L and H_u superfields combine into a singlet in the case of Type-I and into a triplet in the case of Type-III seesaw. This difference between the two seesaw models leads to a difference in the scalar potential, which contributes to the processes that lead to slepton decays containing a Higgs boson.
When we expand the neutrino Yukawa terms in the superpotential, we get
W = y^ν_ij( e^-_iH_u^+-1/√(2)ν_i H_u^0)N^c_j +…,
W = y^ν_ij( 1/√(2)e^-_iH_u^+Σ^0_j -ν_iΣ^-_jH_u^++1/√(2)e^-_iΣ^+_jH_u^0+1/2ν_iΣ^0_jH_u^0)+…,
for Type-I and Type-III, respectively. Here we have included a factor of 1/√(2) into the definition of the neutral Higgs field.
Differentiating with respect to the heavy seesaw fields leads to the scalar potentials
V = ∑_k1/2y^ν_iky^ν *_jkν̃_iν̃^*_jH_u^0H_u^0 *+…,
V = ∑_k1/4 y^ν_iky^ν *_jk(ν̃_iν̃^*_jH_u^0H_u^0 *+2ẽ^-_iẽ^+_jH_u^0H_u^0 *)+… ,
for Type-I and Type-III, respectively. Hence one in general gets Higgs interactions with sleptons that are non-diagonal in flavour space and, in the case of a high scale seesaw, have large couplings. After EW Symmetry Breaking (EWSB) we have ⟨ H_u^0⟩ = vsinβ (v=246 GeV), which generates a three-point coupling between sleptons and the SM-like Higgs.
One may also note that in Type-III seesaw there is a non-flavour-diagonal coupling between charged sleptons and Higgs bosons, while there is no such coupling in the case of Type-I seesaw. As we discuss below, this leads to a stronger signal arising from Type-III than Type-I seesaw. We further notice that, while the usual D-terms of the scalar potential also contain large couplings between sneutrinos, charged sleptons and Higgs bosons, such couplings are always flavour-diagonal and cannot result in decays of the type ν̃_2→ν̃_1h, which is our smoking gun signature for high scale seesaw models.
Besides the decay modes containing Higgs bosons, there are other decay channels and the visibility of the signal depends on the branching ratios. If the Lightest Supersymmetric Particle (LSP) is a higgsino-like neutralino and the gauginos are heavier than the sleptons, the decays of the left-handed sleptons arise from the superpotential term y^ℓ LH_dE^c, so one gets the decays ν̃→χ̃^±ℓ^∓ and ℓ̃^±→χ̃^0ℓ^±. These lead to partial widths
Γ(ν̃_j→ℓ^±_jχ̃^∓_i) = |y^ℓ_jj|^2|U_i2|^2(m_ν̃^2-m_χ̃^2)^2/32π m_ν̃^3,
Γ(ℓ̃^±_j→ℓ^±_jχ̃^0_i) = |y^ℓ_jj|^2|N_i3|^2(m_ℓ̃^2-m_χ̃^2)^2/16π m_ℓ̃^3,
where U_i2 gives the higgsino component of the chargino (for our benchmarks |U_i2|≃ 1), N_i3 gives the down-type higgsino component of the neutralino (for our benchmarks |N_13|≃ 1/√(2)). If the soft slepton masses are not flavour diagonal, an appropriate linear combination of the leptonic Yukawas corresponding to the flavour composition of the sleptons must be used.
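These partial widths are straightforward to evaluate numerically. The short Python sketch below implements the two expressions above; the Yukawa coupling, mixing-matrix elements and masses passed in the example call are illustrative placeholders rather than our benchmark values.

    import math

    def gamma_snu_to_lep_chargino(y_l, U_i2, m_snu, m_cha):
        # Gamma(snu -> l chargino) = |y|^2 |U_i2|^2 (m_snu^2 - m_cha^2)^2 / (32 pi m_snu^3)
        return abs(y_l)**2 * abs(U_i2)**2 * (m_snu**2 - m_cha**2)**2 / (32.0 * math.pi * m_snu**3)

    def gamma_slep_to_lep_neutralino(y_l, N_i3, m_sl, m_chi):
        # Gamma(slepton -> l neutralino) = |y|^2 |N_i3|^2 (m_sl^2 - m_chi^2)^2 / (16 pi m_sl^3)
        return abs(y_l)**2 * abs(N_i3)**2 * (m_sl**2 - m_chi**2)**2 / (16.0 * math.pi * m_sl**3)

    # Example: a 1 TeV smuon-like slepton, a 500 GeV higgsino-like LSP, y_mu ~ 6e-4 (placeholders).
    print(gamma_snu_to_lep_chargino(6e-4, 1.0, 1000.0, 500.0))                       # width in GeV
    print(gamma_slep_to_lep_neutralino(6e-4, 1.0 / math.sqrt(2.0), 1000.0, 500.0))   # width in GeV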
If the LSP is a gaugino there are additional decay channels ν̃→νχ̃^0 and ℓ̃^±→χ̃^±ν (if winos are light) and the decay widths are proportional to g^2 instead of |y^ℓ|^2 and to gaugino components instead of higgsino components. Since we have the hierarchy y^ℓ_11≪ y^ℓ_22≪ y^ℓ_33≪ g, the strength of our signal will depend on the nature of the light neutralinos and charginos and, in the case of higgsinos, on the flavour of the heavier sleptons. As the electron and muon Yukawas are so tiny, in practice the mixing between the gaugino and higgsino components will be significant for the overall decay widths of the sneutrinos and charged sleptons unless the gauginos are extremely heavy.
We shall concentrate on the higgsino case since, as we shall see, already the tau Yukawa is so large that the signal containing Higgs bosons would have too small a branching ratio if a stau were the heavy slepton that decays. Hence in all our benchmarks we make the gauginos heavier than the sleptons.
§ THE PRODUCTION AND DECAY MECHANISMS
To study the high-scale seesaw signatures with Higgs bosons, we build some Benchmark Points (BPs) with m(ẽ^±)<m(μ̃^±)<m(τ̃^±) and mass splittings between generations larger than m_h≈ 125 GeV (the mass of the SM-like state h). As we shall see, this is the limiting case in which we can still see a signal. If the second slepton (assuming the third one to be too heavy to be produced efficiently) were a selectron, the signal would be similar (as the mixing with gauginos dominates the other decay modes already for smuons), while in the case of a stau the signal would almost vanish due to the larger partial widths from equations (<ref>) and (<ref>). We consider the charged current process pp→ℓ̃_2^±ν̃_2, where the subscript indicates mass ordering. The charged current portal is more promising as the final state contains charged leptons even when the sneutrino decays invisibly.
As discussed above, in Type-III seesaw both sneutrinos and charged sleptons can decay to final states with Higgs bosons. The dominant process is ℓ̃_2→ℓ̃_1 h while ν̃_2 →ℓ^±χ̃_1^∓, νχ̃^0. The Feynman diagram for such a process is shown in Fig. <ref>. There is also a process, where the Higgs originates from a sneutrino decay, but that has a smaller BR as can be seen from equation (<ref>). In Type-I seesaw, only the sneutrino can decay into a Higgs boson via ν̃_2 → h ν̃_1. The corresponding Feynman diagram is shown in Fig. <ref>.
These processes can lead to a variety of final state topologies. Currently the limit for charged slepton masses is m(ẽ^±),m(μ̃^±)> 700 GeV for neutralino masses below 350 GeV <cit.>, which we take as our lower limit of charged slepton masses[With more compressed spectra m(ℓ̃)-m(χ̃^0)≲ 100 GeV, one obviously can have significantly lighter sleptons. Such cases need a different analysis strategy than the one adopted here as we rely on large E_T to suppress SM backgrounds.]. This means that the overall production rate of slepton-sneutrino pairs will be low, especially as we have to produce second generation sleptons with a large mass splitting compared to the first generation ones.
In fact, the production rate at the LHC even with nominal collision energy (√(s)=14 TeV) is so low (∼ 30 ab for 1 TeV sleptons), that there will not be sufficient statistics even at the High-Luminosity LHC (HL-LHC) <cit.>. Hence we turn to the proposed High-Energy LHC (HE-LHC) <cit.> with a nominal collision energy of √(s)=27 TeV. This increases the production cross section by an order of magnitude compared to the standard LHC.
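As a rough orientation, folding these cross sections with the integrated luminosity gives the expected number of slepton-sneutrino pairs before any branching ratios or selection cuts are applied; the HL-LHC luminosity of 3 ab^-1 and the factor-of-ten gain at 27 TeV used below are assumptions for illustration only.

    # N = sigma * L; cross sections in ab, luminosities in ab^-1, so N is a pure number.
    sigma_14TeV_ab = 30.0                   # ~30 ab for ~1 TeV sleptons at sqrt(s) = 14 TeV
    sigma_27TeV_ab = 10.0 * sigma_14TeV_ab  # order-of-magnitude gain at the HE-LHC (assumed scaling)

    lumi_hl_lhc = 3.0                       # assumed HL-LHC integrated luminosity, ab^-1
    lumi_he_lhc = 10.0                      # HE-LHC integrated luminosity used in this work, ab^-1

    print("HL-LHC pairs before BRs/cuts:", sigma_14TeV_ab * lumi_hl_lhc)   # ~90
    print("HE-LHC pairs before BRs/cuts:", sigma_27TeV_ab * lumi_he_lhc)   # ~3000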
In Tab. <ref> we show the lepton multiplicities for some typical benchmark points (BP1 and BP3, defined in Table <ref>). We see that the single lepton final state has the highest multiplicity for both seesaw models. As we will lose part of the signal due to the different BRs involved in the model, it is reasonable to look at the state with the highest multiplicity first. We also pick the Higgs decay mode to b-quarks, as that has the highest BR and allows us to reconstruct the Higgs boson, although not with very high mass precision. Unfortunately the channels with good mass resolution (i.e., γγ and ZZ^*→ 4 leptons) are too rare to be useful with such a small event rate.
Our signal events will then consist of events with a single lepton, two b-tagged jets and missing momentum carried by the LSP. The largest SM backgrounds to this final state arise from the following processes:
* tt̅ production where one of the top (anti)quarks decays semileptonically and the other one hadronically;
* W^±h production in the case where the W^± boson decays into a lepton and a neutrino.
These have been considered to be the dominant backgrounds in similar types of experimental analyses (e.g., <cit.>).
§ SIMULATION AND RESULTS
In this section we will describe our numerical toolbox and the Monte Carlo (MC) simulations that we have pursued with it.
§.§ Analysis strategy
The model files are produced by the Mathematica package Sarah v4.14 <cit.>. This code also generates source code for Spheno v4.0.4 <cit.>, which we use to obtain the mass spectrum and couplings, as well as for Madgraph5 v2.8.2 <cit.>, which we use to simulate collider events. We use Pythia v8.2 <cit.> for parton showering and hadronisation, while we simulate the detector response using Delphes3 <cit.>. We implement the analysis and present our numerical results with Madanalysis5 v1.8 <cit.>.
We prepare two BPs for Type-III seesaw and two for Type-I seesaw, which can be detected at the HE-LHC with a 27 TeV collision energy and an integrated luminosity of 10 ab^-1. We simulate proton-proton collisions to produce the second-generation sneutrino (ν̃_2) and slepton (ℓ_2), which in our cases are smuon-like, and select decays to the SM-like Higgs boson plus the corresponding first-generation particles. The masses of ν̃_2 and ℓ_2 must be large enough to allow for the decay kinematics. At the same time, the mass of the lightest slepton is required to be larger than 700 GeV <cit.>. The particle mass spectra and relevant BRs are shown in Tab. <ref>.
All of the BPs have the same Lightest Supersymmetric Particle (LSP) and Next-to-LSP (NLSP), which are higgsino-like neutralinos and charginos. BP1 has a mass spectrum similar to BP3 and the same situation arises between BP2 and BP4. However, there is a significant difference in the Higgs production cross section times BRs between Type-III seesaw and Type-I seesaw. For the sneutrino decay process, Type-I seesaw has BRs larger than the Type-III ones, which can be traced back to the factors in equations (<ref>) and (<ref>). However, the charged slepton decay channel does not exist in Type-I seesaw whereas it dominates the Higgs signal in Type-III seesaw, consistent with equations (<ref>) and (<ref>). As the slepton masses increase, the BR shows a decreasing trend.
The BR for μ̃^±→ẽ^±h is high in Type-III seesaw, since the competing decay mode of eq. (<ref>) is proportional to the small muon Yukawa coupling squared or the small gaugino-higgsino mixing factor squared. Had the second slepton been a selectron, the BR would have been similar as the gaugino-higgsino mixing would dominate the decays to neutralinos/charginos, while for staus the corresponding branching ratio is only a few percent as the tau Yukawa is large enough to dominate the branching ratio.
As a pre-selection, we require a single lepton and at least two b-jets, as shown in Tab. <ref>. We use a working point where the b-jet tagger achieves 70% efficiency and only a 1.5% probability of misidentifying a light-parton jet as a b-one <cit.>. Then several cuts are imposed to select the Higgs signal as per the process in Fig. <ref>. The leading lepton is dominantly produced from the process ν̃_1→ e + χ̃_1^±. As the mass difference between the sneutrino and the lightest chargino is larger than 500 GeV for BP1 and 400 GeV for BP2, we choose the transverse momentum of the leading lepton to be larger than 400 GeV to preserve the single lepton signal and reduce the background, as shown in Fig. <ref>. The E_T (MET) cut is chosen to be 500 GeV as the NLSP mass is around that value. In order to handle properly the MC generation of the tt̅ background, we add a cut at the generation level (MET above 300 GeV) so as to generate this SM process automatically in the signal region of interest. The Higgs selection is done by requiring the invariant mass of the leading and next-to-leading b-jets to lie between 100 GeV and 150 GeV. Fig. <ref> shows a peak around the SM-like Higgs mass for the signal and W^± h background, while the tt̅ noise is rather flat therein. Hence, this requirement proves effective against the latter. Finally, the 100 GeV cut on the transverse mass defined using the highest p_T lepton
plus missing transverse momentum, M_T(l_1,E_T), can also significantly reduce background, especially tt̅, as evident from Fig. <ref>.
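Schematically, the selection amounts to a handful of rectangular cuts. The Python sketch below applies them to a toy reconstructed event; the event dictionary, the variable names and the transverse-mass definition are our own illustrative choices and do not reproduce the actual Madanalysis5 implementation.

    import math

    def transverse_mass(pt_lep, phi_lep, met, phi_met):
        # M_T of the leading-lepton + missing-transverse-momentum system
        return math.sqrt(max(0.0, 2.0 * pt_lep * met * (1.0 - math.cos(phi_lep - phi_met))))

    def passes_selection(ev):
        # Cut-and-count selection described in the text (all momenta and masses in GeV)
        return (ev["n_leptons"] == 1 and
                ev["n_bjets"] >= 2 and
                ev["pt_lep1"] > 400.0 and
                ev["met"] > 500.0 and
                100.0 < ev["m_bb"] < 150.0 and
                transverse_mass(ev["pt_lep1"], ev["phi_lep1"], ev["met"], ev["phi_met"]) > 100.0)

    toy_event = {"n_leptons": 1, "n_bjets": 2, "pt_lep1": 520.0, "phi_lep1": 0.3,
                 "met": 640.0, "phi_met": 2.9, "m_bb": 122.0}
    print(passes_selection(toy_event))   # True for this illustrative event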
§.§ Numerical analysis
We have applied the cuts of Tab. <ref> to all BPs as well as backgrounds and the results are presented in Tab. <ref>, for the discussed HE-LHC energy and luminosity. As expected, Type-III seesaw preserves more signal events (25.8 for BP1 and 27.7 for BP2) than Type-I seesaw (15.5 for BP3 and 9.2 for BP4). Furthermore, BP2 and BP4 show the interesting feature of having fewer initial events (compared to BP1 and BP3, respectively) but displaying a similar final result. This is because the sneutrino and smuon in BP2(BP4) are heavier than those in BP1(BP3), leading to a larger MET and higher transverse momentum of the leading lepton (p_T(ℓ_1)), thereby increasing the efficiency of the corresponding selections.
The significances are shown in Tab. <ref>, for the usual HE-LHC parameters, wherein one can appreciate rather significant signal excesses above the SM backgrounds for Type-III seesaw while for Type-I seesaw the sensitivity is somewhat limited (but larger values of Yukawa couplings could be probed and there could be room to improve the analysis or increase the amount of data). We also tested a benchmark similar to BP1, but with the mass ordering m(ẽ)<m(τ̃)<m(μ̃) with the smuon too heavy to be produced. This gave just 0.6 events after the cuts, so we can get a significant signal only arising from selectrons or smuons and their sneutrinos.
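For completeness, one simple way of turning the surviving signal and background yields into a significance is the Asimov estimate Z = sqrt(2[(S+B)ln(1+S/B)-S]); whether this coincides with the exact definition used for Tab. <ref> is an assumption on our part, and the background yield below is a placeholder rather than the tabulated value.

    import math

    def asimov_significance(s, b):
        # Median discovery significance for s signal events on top of b expected background events
        return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

    signal_yields = {"BP1": 25.8, "BP2": 27.7, "BP3": 15.5, "BP4": 9.2}  # events after all cuts
    background = 10.0                                                    # placeholder total SM yield

    for bp, s in signal_yields.items():
        print(bp, round(asimov_significance(s, background), 2))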
In addition it is essential for our analysis that there is a significant mass splitting between the sleptons and the LSP. With a softer MET cut the tt background would be problematic, while the cut on the transverse mass of the lepton and MET would keep W^±h under control.
In summary, though, it is clear that the HE-LHC has the potential to access high scale seesaw models (like Type-III and Type-I embedded within the MSSM) by exploiting the SM-like Higgs (eventually decaying to bb̅) plus a hard lepton and MET signature.
§ CONCLUSIONS
How neutrino mass generation occurs in Nature is one of the outstanding questions in particle physics. Current probes of neutrinos hardly include colliders, as herein such particles appear as E_T, thereby offering no scope to identify their properties. However, in a supersymmetric world, there exist sneutrinos, which share with neutrinos their interactions. Therefore, given that sneutrinos can decay visibly at the LHC (i.e., inside the detectors), it makes sense, in order to study neutrino properties in supersymmetry, to study sneutrinos. One, however, needs a paradigm for supersymmetry to do so, i.e., a model realisation of it, which we assumed here to be the MSSM, supplemented with two kinds of seesaw mechanism for (s)neutrino mass generation, the so-called Type-I and Type-III. These mechanisms have a similar structure to generate neutrino masses and hence both lead to Higgs-sneutrino interactions, which are non-diagonal in flavour space.
These two are examples of high scale seesaw mechanisms, wherein the companion neutrinos (to the SM ones) can have masses of order 10^12-10^14 GeV. However, left-handed sneutrino and slepton masses are necessarily linked to the typical supersymmetry breaking scale, which ought to be 10 TeV or so at the most (in order to preserve gauge coupling unification, successful dynamical EWSB, etc.). In the case of a high seesaw scale the neutrino Yukawa couplings are among the largest ones in the model and, due to the structure of the supersymmetric scalar potential, they can lead to observable consequences at the supersymmetry breaking scale. We found that the current LHC, for which √(s)=14 TeV (in turn recalling that √(ŝ) is only a fraction of that), cannot test such seesaw scenarios. However, a possible energy upgrade has been proposed for it: the so-called HE-LHC. This offers √(s)=27 TeV (and ∫ L dt=10 ab^-1), therefore, it is in a position to test the aforementioned seesaw scenarios of neutrino mass generation.
In this paper, we have, in particular, tested the scope of a particular signal stemming from these two seesaw mechanisms. In fact, the signature is common to both, i.e., charged current induced slepton-sneutrino production and subsequent decay into the SM-like Higgs boson (in turn decaying to bb̅ pairs), a single lepton (l=e,μ) and MET (or E_T). Upon assessing that the single lepton channel (as opposed to multi-lepton ones also stemming in these two scenarios) is the most sensitive one, for any number of b-jets beyond 1,
we have devised a simple cut-and-count analysis, deployed identically for both Type-I and -III, that has enabled us to reach evidence to discovery significances at the HE-LHC for the Type-III case while for the Type-I case a more refined selection and/or additional data would be required. This was shown, in both cases, for BPs currently compliant with standard theoretical requirements as well as current experimental searches.
Parameterwise, the signature requires the gauginos to be heavier than the sleptons, a sufficient mass splitting (≳ 300 GeV) between the sleptons and the higgsino-like LSP and a sufficient mass splitting between the slepton generations so that the decay with a Higgs boson is kinematically allowed.
Even though this signal is common to the two seesaw models, the fact that in Type-I seesaw only sneutrinos have decay modes containing Higgs bosons, while for Type-III also charged sleptons have such decay channels allows us to distinguish the models. This distinction might be more difficult at a hadron collider but, if there was an electron-positron collider with sufficient collision energy, the pair production of charged sleptons above √(s)=2m_ℓ̃ would lead to an enhanced signal with Higgs bosons in case of Type-III, while no such an enhancement would be present in Type-I.
As an outlook of our work, we would like to highlight that a Future Circular Collider in hadron-hadron mode (FCC-hh) <cit.>, running at √(s) values up to 100 TeV, will not improve the scope of the HE-LHC since, herein, background rates increase more than the signal ones that we pursued (although this may not be true for other channels not considered here).
Altogether, we have shown that there exist cases where, in supersymmetric theories, it is possible to probe the neutrino mass generation mechanism through sneutrino physics while the (seesaw) scale related to this mechanism is extremely high, roughly, up to 10^14 GeV.
§ ACKNOWLEDGEMENTS
SM is supported in part through the
NExT Institute and STFC Consolidated Grant No. ST/L000296/1.
HW is supported by the Carl Trygger Foundation under grant No. CTS18:164.
We finally acknowledge the use of the IRIDIS5 High-Performance Computing Facility and associated
support services at the University of Southampton in the completion of this work.
99
Super-Kamiokande:1998kpq
Y. Fukuda et al. [Super-Kamiokande],
Phys. Rev. Lett. 81 (1998), 1562-1567
[arXiv:hep-ex/9807003 [hep-ex]].
Minkowski:1977sc
P. Minkowski,
Phys. Lett. B 67 (1977), 421.
Konetschny:1977bn
W. Konetschny and W. Kummer,
Phys. Lett. B 70 (1977), 433.
Gell-Mann:1979vob
M. Gell-Mann, P. Ramond and R. Slansky,
Conf. Proc. C 790927 (1979), 315
[arXiv:1306.4669 [hep-th]].
Mohapatra:1980yp
R. N. Mohapatra and G. Senjanovic,
Phys. Rev. D 23 (1981), 165-180.
Foot:1988aq
R. Foot, H. Lew, X. G. He and G. C. Joshi,
Z. Phys. C 44 (1989), 441.
Khalil:2022toi
S. Khalil and S. Moretti,
CRC Press, 2022,
ISBN 978-1-138-33643-8.
Moretti:2019ulc
S. Moretti and S. Khalil,
CRC Press, 2019,
ISBN 978-0-367-87662-3.
CMS:2017ybg
A. M. Sirunyan et al. [CMS],
Phys. Rev. Lett. 119 (2017) no.22, 221802
[arXiv:1708.07962 [hep-ex]].
CMS:2018jxx
A. M. Sirunyan et al. [CMS],
JHEP 01 (2019), 122
[arXiv:1806.10905 [hep-ex]].
ATLAS:2019kpx
G. Aad et al. [ATLAS],
JHEP 10 (2019), 265
[arXiv:1905.09787 [hep-ex]].
ATLAS:2020wop
G. Aad et al. [ATLAS],
Eur. Phys. J. C 81 (2021) no.3, 218
[arXiv:2008.07949 [hep-ex]].
Dimopoulos:1981zb
S. Dimopoulos and H. Georgi,
Nucl. Phys. B 193 (1981), 150.
Petrov:2013nia
A. A. Petrov and W. Shepherd,
Phys. Lett. B 730 (2014), 178
[arXiv:1311.1511 [hep-ph]].
Berlin:2014cfa
A. Berlin, T. Lin and L. T. Wang,
JHEP 06 (2014), 078
[arXiv:1402.7074 [hep-ph]].
ATLAS:2019lff
G. Aad et al. [ATLAS],
Eur. Phys. J. C 80 (2020) no.2, 123
[arXiv:1908.08215 [hep-ex]].
Gianotti:2002xx
F. Gianotti, M. L. Mangano, T. Virdee, S. Abdullin, G. Azuelos, A. Ball, D. Barberis, A. Belyaev, P. Bloch and M. Bosman, et al.
Eur. Phys. J. C 39 (2005), 293
[arXiv:hep-ph/0204087 [hep-ph]].
FCC:2018bvk
A. Abada et al. [FCC],
Eur. Phys. J. ST 228 (2019) no.5, 1109.
ATLAS:2022enb
G. Aad et al. [ATLAS],
JHEP 06 (2023), 016
[arXiv:2207.00230 [hep-ex]].
Staub:2015kfa
F. Staub,
Adv. High Energy Phys. 2015 (2015), 840780
[arXiv:1503.04200 [hep-ph]].
Porod:2003um
W. Porod,
Comput. Phys. Commun. 153 (2003), 275
[arXiv:hep-ph/0301101 [hep-ph]].
Porod:2011nf
W. Porod and F. Staub,
Comput. Phys. Commun. 183 (2012), 2458
[arXiv:1104.1573 [hep-ph]].
Alwall:2011uj
J. Alwall, M. Herquet, F. Maltoni, O. Mattelaer and T. Stelzer,
JHEP 06 (2011), 128
[arXiv:1106.0522 [hep-ph]].
Sjostrand:2014zea
T. Sjöstrand, S. Ask, J. R. Christiansen, R. Corke, N. Desai, P. Ilten, S. Mrenna, S. Prestel, C. O. Rasmussen and P. Z. Skands,
Comput. Phys. Commun. 191 (2015), 159
[arXiv:1410.3012 [hep-ph]].
deFavereau:2013fsa
J. de Favereau et al. [DELPHES 3],
JHEP 02 (2014), 057
[arXiv:1307.6346 [hep-ex]].
Conte:2012fm
E. Conte, B. Fuks and G. Serret,
Comput. Phys. Commun. 184 (2013), 222
[arXiv:1206.1599 [hep-ph]].
ParticleDataGroup:2022pth
R. L. Workman et al. [Particle Data Group],
PTEP 2022 (2022), 083C01
CMS:2012feb
S. Chatrchyan et al. [CMS],
JINST 8 (2013), P04013
[arXiv:1211.4462 [hep-ex]].
FCC:2018byv
A. Abada et al. [FCC],
Eur. Phys. J. C 79 (2019) no.6, 474.
|
http://arxiv.org/abs/2307.05582v1 | 20230710143957 | DBFed: Debiasing Federated Learning Framework based on Domain-Independent | [
"Jiale Li",
"Zhixin Li",
"Yibo Wang",
"Yao Li",
"Lei Wang"
] | cs.LG | [
"cs.LG",
"cs.AI",
"cs.CY",
"cs.DC"
] |
DBFed: Debiasing Federated Learning Framework based on Domain-Independent
Jiale Li
School of Software
Dalian University of Technology
Dalian, China
[email protected]
Zhixin Li
School of Computer Science
Fudan University
Shanghai, China
[email protected]
Yibo Wang
School of Software
Dalian University of Technology
Dalian, China
[email protected]
Yao Li
School of Software
Dalian University of Technology
Dalian, China
[email protected]
Lei Wang^∗ (corresponding author)
School of Software
Dalian University of Technology
Dalian, China
[email protected]
August 12, 2023
==========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
As digital transformation continues, enterprises are generating, managing, and storing vast amounts of data, while artificial intelligence technology is rapidly advancing. This, however, brings challenges for information security and data security. Data security refers to the protection of digital information from unauthorized access, damage, theft, etc. throughout its entire life cycle. With the promulgation and implementation of data security laws, and with the emphasis that organizations and users place on data security and data privacy, privacy-preserving technologies represented by federated learning have found a wide range of application scenarios. Federated learning is a distributed machine learning computing framework that allows multiple subjects to train joint models without sharing data, thereby protecting data privacy and addressing the problem of data islands. However, the data of the participating subjects are independent of each other, and differences in data quality may cause fairness issues in federated learning modeling, such as data bias among the subjects, resulting in biased and discriminatory models. Therefore, we propose DBFed, a debiasing federated learning framework based on domain-independent training, which mitigates model bias by explicitly encoding sensitive attributes during client-side training. This paper conducts experiments on three real datasets and uses five evaluation metrics of accuracy and fairness to quantify the effect of the model. Most metrics of DBFed exceed those of the other three comparative methods, fully demonstrating the debiasing effect of DBFed.
data security; federated learning; model fairness; information security
§ INTRODUCTION
Artificial intelligence technology relies heavily on large amounts of data as input, which are used for learning and training to recognize patterns, discover rules, make decisions, and predict outcomes. These data can be structured, such as tables in a database, or unstructured, such as images, text, and speech. Due to the diversity and scale of the data, AI technology requires powerful computing capabilities and algorithmic support to realize its application value. Additionally, to protect personal privacy and data security, reasonable restrictions and protections need to be placed on the use of sensitive data. For enterprises, expanding data sources, establishing a complete data lifecycle management system, and adopting privacy-preserving computing technology make it possible to better utilize data resources, improve the efficiency of mining data value, and better meet the requirements of data security and privacy protection. Therefore, as artificial intelligence technology continues to evolve, how to ensure the quality and security of data during acquisition, storage, sharing, and utilization through privacy-preserving computation has become an important frontier research topic. Many fields, such as finance, healthcare, and communication <cit.>, have extremely high requirements for data security. As a result, the data held by different institutions often become data islands that are difficult to share and use safely, so the value of the data is not effectively exploited, which hinders the development of artificial intelligence technology. In order to protect data security, privacy-preserving computation technology represented by federated learning has been widely applied in the field of artificial intelligence.
Privacy-preserving computation refers to a computing paradigm in which data are processed and analysed while data privacy is protected. It allows data to be encrypted, shared, computed on, and analysed without exposing the original data, thereby protecting personal privacy and business secrets. Federated learning is a distributed machine learning technology that allows multiple participants to jointly train a model while each participant only accesses its local data, thereby protecting data privacy. Federated learning avoids collecting and storing user data on a central server, which better protects user privacy. However, the issue of fairness becomes more important when using federated learning. Fairness issues usually involve inequalities between different participants, including data imbalance, computing resource imbalance, capability differences, etc. Fairness issues in federated learning can be grouped into the following categories:
§.§.§ Data Bias
The data distribution and characteristics of different participants may differ, leading to the imbalanced performance of the model among different participants. Factors such as gender, race, geographical location, etc. can lead to the neglect or unfair treatment of certain clients' datasets.
§.§.§ Model Bias
Since different participants have different data distributions and characteristics, the model may be biased towards certain participants, resulting in the unbalanced performance of the model, which will also affect model fairness.
§.§.§ Imbalance of Computing Resources
Different participants may have different computing resources. Some participants may have more computing power and storage resources to conduct more iterative training locally, while others may only have limited iterative training. This may result in some participants having better model performance than others, leading to fairness issues.
§.§.§ Capability Differences
Different participants may have different abilities and professional knowledge. Some participants may have more domain expertise and experience to better understand and interpret the model results, while others may lack these abilities, which may lead to issues with the interpretability and fairness of the model.
Addressing bias issues in federated learning can improve the robustness and generalization ability of federated learning models, as well as improve the model's coverage and service quality for different groups. Therefore, in order to address these fairness issues, it is necessary to conduct relevant research and develop appropriate algorithms and tools to promote fair federated learning. This includes methods based on multi-party data joint learning, the use of distributed privacy protection technology in federated learning, and the development of appropriate metrics to evaluate the fairness of the model.
The main contributions of this paper are as follows:
* This paper proposes a debiasing federated learning framework based on domain-independent training, which alleviates model bias by explicitly encoding sensitive attributes during client training, effectively improving the fairness of deep learning classification models in federated learning.
* This paper conducts experiments on 3 real datasets, and uses 4 fairness indicators to quantify the debiasing effect of the model for multi-classification and multi-sensitive attribute tasks. The effectiveness of DBFed was verified through experiments.
§ RELATED WORK
§.§ Federated Learning
Federated learning is a distributed machine learning framework proposed by McMahan et al. <cit.>, which allows clients to collaborate to train machine learning models without exposing their own data. Nowadays, federated learning is applied in fields such as healthcare and finance. Yang et al. <cit.> divided federated learning into horizontal federated learning, vertical federated learning, and federated transfer learning according to the distribution and alignment characteristics of the data used in model training. In the horizontal federated learning setting, the datasets of different clients share the same feature space, but their samples are different; in the vertical federated learning setting, the datasets of different clients have the same or partially overlapping sample IDs, but different feature spaces; in the federated transfer learning setting, both the sample IDs and the feature spaces of the clients' datasets differ, but the clients have similar business scenarios.
In federated learning, clients protect local data privacy by exchanging model gradients or weights instead of sharing data. However, Zhu et al. <cit.> showed that gradient leakage attacks can recover client datasets, thereby compromising data privacy. After obtaining the gradients, this attack randomly generates a pair of pseudo features and pseudo labels, performs forward and backward propagation to derive a pseudo gradient, and then optimizes the pseudo features and labels by minimizing the distance between the pseudo gradient and the real gradient, so that the pseudo data gradually approaches the original data. Hanchi et al. <cit.> proposed a generative adversarial network that generates random data to minimize the distance between pseudo gradients and real gradients, thereby inferring the raw data of federated learning clients. Melis et al. <cit.> demonstrated that malicious clients can deceive the global model by using multi-task learning, allowing the model to learn more of the features they are interested in and to extract more information about other clients' data. To protect the privacy of clients in the federated learning process, researchers have applied homomorphic encryption <cit.> and differential privacy <cit.> to federated learning. Homomorphic encryption is applied to the gradient exchange: additions and multiplications can be carried out directly on the encrypted values, and decrypting the result gives the same answer as performing the operations on the plaintext, so data privacy is effectively protected during gradient exchange between the client and the server. However, homomorphic encryption reduces the computational efficiency of the federated learning framework. Differential privacy protects data privacy by adding noise to the dataset or blurring certain features through generalization, making it difficult for attackers to distinguish individual samples or recover the data. However, because the dataset is modified, differential privacy usually needs to trade off accuracy against security.
Many researchers have made trade-offs between security, computational efficiency, and accuracy in federated learning and have improved and optimized the federated learning framework. FedCG, proposed by Wu et al. <cit.>, utilizes conditional generative adversarial networks to achieve a high level of privacy protection while maintaining the performance of the model. FedCG has a private extractor and a public classifier on each client, and during weight aggregation a client generator is used instead of the public extractor. The clients' knowledge is aggregated through knowledge distillation, and keeping the extractor weights private prevents user information leakage while the clients' knowledge is aggregated. In the local training process, the extractor and classifier are first trained to learn the features of the local dataset while the output distribution of the extractor is pushed closer to the distribution produced by the generator. Then the local generative adversarial network, i.e. the generator and the discriminator, is trained on the local data to improve the accuracy of the model. Zhu et al. <cit.> proposed a method to aggregate client knowledge through knowledge distillation without using additional data. They set up a lightweight generator on the server and used the learned knowledge as an inductive bias to adjust local training. This method requires fewer communication rounds and has good generalization ability.
§.§ Model bias and debiasing
Although machine learning models have been applied to a wide range of real-life scenarios such as face recognition and medical image analysis, some models make decisions based on information such as race, gender, and nationality, resulting in algorithmic bias. As Larson et al. <cit.> pointed out, the COMPAS system exhibits a certain degree of racial bias. The research of Kohavi et al. <cit.> shows that, under otherwise identical circumstances, deep neural networks usually predict higher salaries for males than for females. Ashraf et al. <cit.> pointed out that commercial gender classification systems developed by Microsoft, Face++, and IBM have a high recognition error rate for dark-skinned women. Several works have proposed solutions to algorithmic bias. Mehrabi et al. <cit.> divided the solutions to algorithmic bias into three types: preprocessing, in-processing, and post-processing. Preprocessing techniques eliminate or alleviate algorithmic bias by changing the training dataset; in-processing techniques modify the machine learning algorithm itself to eliminate or reduce bias during training; post-processing techniques treat the trained model as a black box and recalculate the labels it outputs according to a new function in order to eliminate or reduce bias. In the field of community detection, Mehrabi et al. <cit.> proposed a community detection method for nodes with low connection attributes, which can alleviate the bias against low-degree nodes. In the field of classifiers, Bilal et al. <cit.> proposed a fairness constraint to prevent classifiers from making predictions correlated with sensitive attributes in the data, and Kamishima et al. <cit.> controlled the trade-off between classification accuracy and fairness by adjusting a regularization parameter; this regularization method is applicable to any prediction algorithm based on a probabilistic discriminative model. In the field of language models, Bordia et al. <cit.> proposed a regularization loss term for language models that minimizes the projection of the encoder-trained embeddings onto the gender-encoding embedding subspace, effectively alleviating gender bias in language models. In the field of causal inference, Lu et al. <cit.> proposed a framework for discovering and eliminating bias in causal networks, capturing direct and indirect discrimination through the causal effects of protected attributes on decisions passed along different causal paths.
The problems of algorithmic bias and model fairness also exist in the field of federated learning, and in recent years several debiasing methods have been proposed within the federated learning framework. Zhang et al. <cit.> designed a reward mechanism to trade off the accuracy and fairness of the trained model, which drives fairness across all demographic groups and addresses the challenges of limited information and limited coordination. The FairBatch framework proposed by Roh et al. <cit.> retains the standard training algorithm as an internal optimizer while incorporating an external optimizer that adaptively chooses the mini-batch sizes so as to improve the fairness of the model; this framework can significantly improve the fairness of any pre-trained model through fine-tuning. Papadaki et al. <cit.> proposed an algorithm for max-min fairness in federated learning, in which the server requires each client to explicitly share the performance of the model on each race separately.
§ METHOD
This chapter details the federated training process of DBFed. As shown in Figure <ref>, in the learning process of DBFed it is assumed that a global server and K clients jointly train a deep neural network image classification model. The server first initializes the global model and then sends the weights to each client to initialize the client's local model. After receiving the model weights, each client updates the model weights on its local dataset by gradient descent so as to minimize the local loss function. After a certain number of rounds, the clients send their local model weights to the server, the server performs federated averaging over the clients' weights to obtain the global model weights, and then distributes the global weights back to the clients. The specific processes of client training and server aggregation are introduced in detail below.
§.§ Client Domain-Independent Training
Inspired by the research of Wang [35] etc., this paper improves the fairness of the model through domain-independent training, which encoding sensitive attributes explicitly. For the problem of bias in deep learning classification tasks, the predicted features are called target attributes, and the potentially biased populations are called sensitive attributes. The fully connected layer of the deep learning classification model sets N D-way discriminant classifiers, where N is the categories of target attribute, and D is the categories of the sensitive attribute. DBFed mitigates model bias by explicitly encoding sensitive attribute information during training and reducing the correlation between sensitive attributes and predicted attributes during prediction. Assuming f_(·) is a deep learning classification model, there is N × D neuron in the last layer of the model, that is, the number output value of the last layer of the model is N× D, recording f_z(x,θ) as the output result of the z-th node of the classification layer, where x is the data sample, and θ is the model weight. The output of the N D-way discriminant classifier can be passed through the activation function:
Softmax(f_z(x;θ ))=e^f_z(x;θ )/∑_i=0^Ne^f_i(x;θ )
The activated data results can be considered as probabilities, then the probability that the predicted result of sample x with sensitive feature d is y is:
P (y|d,x)=Softmax (f_ (y+dN) (x;θ))
For a data sample, according to the full probability formula, the prediction result of the deep learning classifier can be calculated according to the following formula
ŷ=argmax_yP(y|x)=argmax_y∑_d∈ GP(y|d,x)P(d|x)
where P(d|x) is the probability that data sample x has sensitive attribute d, and G is the set of sensitive-attribute categories, so that |G|=D. For a data sample with a known sensitive attribute, the predicted value would be ŷ=argmax_yP(y|d,x). However, we want the prediction to be blind to the sensitive attribute, that is, to ignore the correlation between the predicted attribute and the sensitive attribute during prediction and to achieve Demographic Parity over the sensitive attributes, i.e. P(a|x)=P(b|x) for any a,b∈ G, which guarantees fairness between predictions for data samples with different sensitive attributes and reduces the bias of the algorithm. Therefore, in the prediction process we take P(d|x)=1/|G|.
For the training data sample x whose real value of the target attribute is y and the sensitive attribute is d, its cross entropy can be calculated:
L(x,y,d;θ)=-logP(y|d,x)
Therefore, using the gradient descent algorithm, the model weight update formula of the client k can be expressed as:
θ^k=θ^k-η∇ L( b;θ^k)
where η is the learning rate and b is a batch of training samples. In local training, the client's training dataset is divided into multiple batches, and each batch is used for training in every epoch. After multiple local iterations, the local model weights are sent to the server.
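To make the client-side procedure concrete, the PyTorch-style sketch below implements the per-group cross-entropy of the equations above (output index y + dN) and the prediction rule with P(d|x)=1/|G|. The tiny linear network, tensor shapes and function names are our own illustrative stand-ins for the ResNet34 backbone used in the experiments.

    import torch
    import torch.nn.functional as F

    N_CLASSES, N_GROUPS = 9, 4   # e.g. 9 age bins (target) and 4 race groups (sensitive), UTKFace-like

    def domain_independent_loss(logits, y, d):
        # logits: [B, N_GROUPS * N_CLASSES]; flat index y + d * N_CLASSES matches f_{y+dN}
        blocks = logits.view(-1, N_GROUPS, N_CLASSES)        # [B, D, N]
        per_group_logits = blocks[torch.arange(len(y)), d]   # pick each sample's own group block
        return F.cross_entropy(per_group_logits, y)          # softmax within that group block

    def predict(logits):
        # Inference with P(d|x) = 1/|G|: average class probabilities over the D group blocks
        blocks = logits.view(-1, N_GROUPS, N_CLASSES)
        return F.softmax(blocks, dim=2).mean(dim=1).argmax(dim=1)

    def local_update(model, loader, epochs=3, lr=1e-4, weight_decay=3e-4):
        # One round of local training on a client; returns the updated weights
        opt = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay)
        for _ in range(epochs):
            for x, y, d in loader:                           # image, target label, sensitive label
                opt.zero_grad()
                domain_independent_loss(model(x), y, d).backward()
                opt.step()
        return model.state_dict()

    # Smoke test with random data standing in for real images and a real backbone.
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 8 * 8, N_GROUPS * N_CLASSES))
    x = torch.randn(16, 3, 8, 8)
    y = torch.randint(0, N_CLASSES, (16,))
    d = torch.randint(0, N_GROUPS, (16,))
    print(domain_independent_loss(model(x), y, d).item(), predict(model(x))[:4])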
§.§ Server Aggregation
After the server receives the clients' weights, it aggregates them using the FedAvg algorithm. The global weights of round t+1 can be expressed as:
θ_t+1^g=∑_k=1^Kn_k/nθ_t^k
where n_k is the number of samples in the local training data set of the client k, and n=∑^K_k=1n_k is the sum of the number of samples in the training data set of all clients.
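A minimal sketch of the server-side aggregation, assuming each client returns a PyTorch state dict together with its local sample count n_k:

    import torch

    def fedavg(client_states, client_sizes):
        # Weighted average of client weights: theta_{t+1} = sum_k (n_k / n) * theta_t^k
        n_total = float(sum(client_sizes))
        aggregated = {}
        for key in client_states[0]:
            aggregated[key] = torch.stack(
                [state[key].float() * (n_k / n_total)
                 for state, n_k in zip(client_states, client_sizes)]).sum(dim=0)
        return aggregated

    # Usage: model.load_state_dict(fedavg([w_1, ..., w_K], [n_1, ..., n_K]))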
§.§ Joint Training
In the training process of the federated learning debiasing framework, as shown in Algorithm <ref>, the global model weights θ_0^g are first initialized by the server, and then many rounds of communication are performed. In each round, the server first sends the current global model weights to each client, and the clients then perform local training in parallel. Each client executes several local iterations, in which batches are drawn from the local dataset and the local model weights are adjusted through gradient descent to minimize the local loss function. After local training, each client sends its local weights to the server. The server collects the weights of all clients in this round and aggregates them to obtain the global model weights of the next round, which ends the communication round.
§ EXPERIMENT
§.§ Dataset
§.§.§ CelebA
The CelebA dataset <cit.> is a face attribute dataset provided by Liu et al. It contains 202,599 face images of 10,177 celebrity identities. The training set contains 162,770 images, the test set contains 19,962 images, and each image is annotated with 40 attributes such as gender, hair color, and lips. This dataset is widely used in computer vision deep learning tasks. TABLE <ref> shows the data distribution and settings of the dataset in this paper. In the experiment, this paper chooses the "Smiling" label as the target attribute and the "Male" label as the sensitive attribute to study the gender bias of the model when predicting smiles.
§.§.§ FairFace
The FairFace dataset <cit.> is a face image dataset proposed by Karkkainen et al. It consists of 108,501 pictures, of which the training data set contains 86,744 pictures, and the verification data set contains 10,954 pictures. It includes seven ethnic groups: White, Black, Indian, East Asian, Southeast Asian, Middle Eastern, and Latin. Each facial image is labeled with race, gender, and age, with age attributes classified into nine categories based on age group. In the experiment, this paper chooses the "Age" label as the target attribute and uses the "Race" label as the sensitive attribute to study the bias of the deep learning model on the racial group when predicting age.
§.§.§ UTKFace
The UTKFace dataset <cit.> is a facial dataset with a long age span proposed by Zhang et al., which contains over 20,000 images. In this paper, the dataset is divided into a training dataset with a size of 18,964 and a testing dataset with a size of 4,741 in a ratio of 80% and 20%. Each image in the dataset is labeled with gender, age, and race. There are five types of race labels, including white, black, Asian, Indian, and Others. This article uses images from the first four races for experiments[The number of the last race "Others" is too small and not properly labeled, which has a significant random impact on the experiment.], using the "Race" label as a sensitive attribute. The age tags in the image are divided into nine categories based on the age groups of "less than 2", "3-9", "10-19", "20-29", "30-39", "40-49", "50-59", "60-69", and "more than 70", with the "Age" label as the target attribute.
§.§ Evaluating Metrics
In order to quantify the actual debiasing effectiveness of deep learning classification models, this paper selects one metric to measure classification accuracy and four metrics to measure model fairness as evaluating metrics.
§.§.§ Accuracy
Accuracy refers to the probability that the model correctly classifies predictive attributes, which can be calculated by the following formula:
ACC=P(Ŷ=c | Y=c)
where c∈ C and C are the set of the target attribute.
§.§.§ Skewed Error Ratio(SER)
The skewed error ratio is a metric that evaluates the maximum disparity in error rate between different sensitive attributes. It is the ratio between the error of the sensitive attribute with the lowest accuracy and that of the sensitive attribute with the highest accuracy. The larger the value, the greater the difference in the algorithm's accuracy for different groups. The formula can be expressed as:
SER=max_g∈ GError_g/min_g∈ GError_g
where g is a sensitive attribute, G is the set of sensitive attributes, and Error_g represents the classification error rate of images with sensitive attribute g.
§.§.§ Equality of Opportunity(EO)
Equality of Opportunity is a metric that evaluates whether different groups are treated equally, indicating how equally samples of different groups are classified correctly. The larger the value, the greater the difference between groups in the probability of being classified correctly. Achieving equality of opportunity requires the model to have equal true positive rates (equivalently, equal false negative rates) across groups, and the condition can be expressed as: for all a,b ∈ G,
P( Ŷ=1 | S=a,Y=1 )=P( Ŷ=1| S=b,Y=1 ).
Since this paper considers multiple categories of the target attribute and of the sensitive attribute, the Equality of Opportunity metric is defined as the mean, over target-attribute classes, of the variance across sensitive-attribute groups of the per-class accuracy, as follows:
EO=1/|C|∑_c∈ Cvar_g∈ G(P(Ŷ=c| S=g,Y=c))
where var(·) denotes the variance.
§.§.§ Bias Amplification(BA)
Bias amplification is a metric that evaluates the degree of inclination of the algorithm's decisions towards specific types of target attributes, mainly indicating the unfairness of the algorithm among the types of target attributes. The larger the value, the more inclined the algorithm is towards certain specific target attributes. Bias Amplification can be calculated using the following formula:
BA=1/|C|∑_c∈ C( max_g∈ Gg_c/∑_g'∈ Gg'_c-1/|G|)
§.§.§ Demographic Parity(DP)
Demographic Parity is a metric that evaluates how similar the algorithm's decisions are for different groups. The larger the value, the greater the difference in the algorithm's decisions across groups. The model satisfies Demographic Parity if, for all a,b ∈ G, P(Ŷ=1| S=a)=P(Ŷ=1| S=b). The calculation formula for Demographic Parity is as follows:
DP=1/|C|∑_c∈ Cvar_g∈ G( P( Ŷ=c |S=g ) )
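The following NumPy sketch evaluates the accuracy and the four fairness metrics from per-sample predictions, true labels and sensitive attributes. It reflects our reading of the formulas above (e.g. population variance across groups, every class-group combination assumed non-empty), so small normalisation choices may differ from the authors' implementation.

    import numpy as np

    def fairness_metrics(y_true, y_pred, s):
        # y_true, y_pred: target labels; s: sensitive-attribute labels (1-D integer arrays)
        classes, groups = np.unique(y_true), np.unique(s)

        acc = np.mean(y_pred == y_true)

        # Skewed Error Ratio: worst-group error over best-group error
        err = np.array([np.mean(y_pred[s == g] != y_true[s == g]) for g in groups])
        ser = err.max() / max(err.min(), 1e-12)

        # Equality of Opportunity: mean over classes of the variance across groups of the recall
        recall = np.array([[np.mean(y_pred[(s == g) & (y_true == c)] == c) for g in groups]
                           for c in classes])
        eo = np.mean(np.var(recall, axis=1))

        # Bias Amplification: dominant group share per predicted class, in excess of 1/|G|
        counts = np.array([[np.sum((y_pred == c) & (s == g)) for g in groups] for c in classes])
        shares = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)
        ba = np.mean(shares.max(axis=1) - 1.0 / len(groups))

        # Demographic Parity: mean over classes of the variance across groups of the prediction rate
        rate = np.array([[np.mean(y_pred[s == g] == c) for g in groups] for c in classes])
        dp = np.mean(np.var(rate, axis=1))

        return {"ACC": acc, "SER": ser, "EO": eo, "BA": ba, "DP": dp}

    # Example with random labels: 4 target classes, 2 sensitive groups.
    rng = np.random.default_rng(0)
    print(fairness_metrics(rng.integers(0, 4, 1000), rng.integers(0, 4, 1000), rng.integers(0, 2, 1000)))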
§.§ Comparative Experiment
§.§.§ Environment Settings
The experiments were run on Linux 3.10.0, with Anaconda3, Python 3.10.9, and PyCharm as the development environment. The deep learning models are implemented with the deep learning framework PyTorch 1.13.1 and trained on an NVIDIA A100.
§.§.§ Comparison Methods
This paper chooses Federated Averaging (FedAvg) and local training as baselines, and chooses the fair federated learning model FairFed <cit.> proposed by Ezzeldin et al. as the state-of-the-art (SOTA) comparison. In the local training setting, each client trains only on its local dataset and does not aggregate weights through communication.
§.§.§ Parameter settings
This paper uses ResNet34 <cit.> as the basic deep learning model for the experiments. ResNet34 is a deep residual network with 34 convolutional layers, which is easy to optimize and widely used in computer vision tasks. In the experiments, the adaptive moment estimation (Adam) <cit.> optimizer is used to perform gradient descent, achieving high computational efficiency with a small memory footprint. The learning rate of the optimizer is 0.0001 and the weight decay is 0.0003. During model training, five clients are set up, and the training data are randomly divided into equally sized subsets that serve as their local training datasets. The batch size is 128, and the clients perform communication and aggregation after every three local training iterations. This paper mainly uses three image datasets, CelebA, FairFace, and UTKFace, for the experiments. Each image has three channels, R, G, and B, with values ranging from 0 to 255, and all images are uniformly resized to a fixed pixel size.
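For reference, the training configuration described above can be set up roughly as follows; the use of torchvision's ResNet34 with its final layer widened to N·D outputs follows our reading of the setup, and any preprocessing details beyond the three 0-255 colour channels are assumptions.

    import torch
    import torchvision

    N_CLASSES, N_GROUPS = 2, 2          # e.g. CelebA: "Smiling" target x "Male" sensitive attribute

    model = torchvision.models.resnet34(weights=None)                        # ResNet34 backbone
    model.fc = torch.nn.Linear(model.fc.in_features, N_CLASSES * N_GROUPS)   # N*D-way output layer

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=3e-4)
    batch_size = 128                    # 5 clients, aggregation after every 3 local iterations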
§.§.§ Results and Analysis
TABLE <ref> shows the experimental results; the best-performing value for each metric on each dataset is highlighted in bold, while the second-best is underlined. DBFed performs best on the CelebA dataset: it outperforms the other methods in terms of Equality of Opportunity and Demographic Parity and ranks second in terms of Accuracy and Skewed Error Ratio. This shows that DBFed achieves good model fairness while maintaining high prediction accuracy, and effectively reduces the gender bias in smile classification. In the experiments on the FairFace dataset, DBFed achieved the highest Accuracy, but its four fairness metrics did not significantly surpass those of the other methods, possibly because the large number of sensitive-attribute categories in this dataset weakens the debiasing effect. In the experiments on UTKFace, the framework outperformed the other methods in terms of Skewed Error Ratio, Bias Amplification, and Demographic Parity while achieving high accuracy, indicating excellent results in both model accuracy and fairness.
Overall, DBFed performs well in both model accuracy and fairness, effectively reducing discrimination between demographic groups in classification tasks.
§ CONCLUSION
This paper proposes a debiasing federated learning framework
based on domain-independent training that can perform classification without using sensitive attribute labels at inference time, mitigate model bias during federated learning, and improve model fairness. This paper verifies the debiasing effect of DBFed through experiments on three datasets and five evaluation metrics. In addition, because sensitive attribute labels are needed during the training process, there are certain requirements on dataset annotation. The research in this paper is a new attempt at model fairness under the federated learning computing architecture, and it can be applied to many scenarios that require high data security, such as model fine-tuning and model debiasing in human recognition, or intelligent dialogue models for fund trading in the financial field. The next research direction will focus on exploring how to remove model bias in the federated learning process without using sensitive attribute labels, and on achieving better debiasing performance for more categories of the sensitive attribute.
§ ACKNOWLEDGEMENTS
Joint Research and Development Project of Yangtze River Delta Region Technology and Innovation Community(2022CSJGG0800).
|
http://arxiv.org/abs/2307.04793v1 | 20230710180004 | Stellar triples with chemically homogeneously evolving inner binaries | [
"Andris Dorozsmai",
"Silvia Toonen",
"Alejandro Vigna-Gómez",
"Selma E. de Mink",
"Floris Kummer"
] | astro-ph.SR | [
"astro-ph.SR",
"astro-ph.HE"
] |
Observations suggest that massive stellar triples are common. However, their evolution is not yet fully understood.
We investigate the evolution of hierarchical triples in which the stars of the inner binary experience chemically homogeneous evolution (CHE), particularly to understand the role of the tertiary star in the formation of gravitational-wave (GW) sources. We use the triple-star rapid population synthesis code to determine the evolution of these systems at two representative metallicities: Z = 0.005 and Z = 0.0005.
About half of all triples harbouring a CHE inner binary (CHE triples) experience tertiary mass transfer (TMT) episodes, an event which is rare for classically evolving stars.
In the majority of TMT episodes, the inner binary consists of two main-sequence stars (58-60 per cent) or two black holes (BHs, 24-31 per cent). Additionally, we explore the role of von Zeipel-Lidov-Kozai (ZLK) oscillations for CHE triples.
ZLK oscillations can result in eccentric stellar mergers or lead to the formation of eccentric compact binaries in systems with initial outer pericenters smaller than ∼ 1200 R_⊙.
Approximately 24-30 per cent of CHE triples form GW sources,
and in 31 per cent of these, the tertiary star plays a significant role and leads to configurations that are not predicted for isolated binaries.
We conclude that the evolution of CHE binaries can be affected by a close tertiary companion, resulting in astronomical transients such as BH-BH binaries that merge via GW emission orders of magnitude faster than their isolated binary counterparts and tertiary-driven
massive stellar mergers.
gravitational waves,
stars: evolution, stars: massive, stars: black holes, binaries: close
§ INTRODUCTION
An accurate and detailed understanding of the evolution of massive stars is essential for various important open questions in astrophysics, such as nucleosynthesis of heavy elements, the origin of supernova events, gamma-ray bursts, and GW sources (e.g. ).
Observational evidence shows that the fraction of stars in hierarchical triples
or in higher-order multiple-stellar systems increases with the mass of the primary star ().
In particular, <cit.> showed that the majority
of O-type stars reside either in triple or quadruple stellar systems.
This implies that in order to understand the evolution of massive stars, and to correctly interpret
the various astrophysical phenomena related to them, we need to consider stellar interactions in hierarchical triples.
The evolution of hierarchical triples involves a complex interplay between three-body dynamics, stellar evolution, and stellar interactions <cit.>.
Three-body interactions can result in e.g. ZLK oscillations <cit.>, a secular effect where the eccentricity of the inner binary can be significantly enhanced as a result of dynamics.
ZLK oscillations coupled with various dissipative processes (e.g., tides, GWs) can shrink the orbit <cit.> and prompt the merger of the inner binary <cit.>.
These types of mergers can result in astronomical transient events such as Type Ia supernovae <cit.>
or double compact object mergers
<cit.>.
Furthermore, stellar evolution can affect the orbital dynamics of the triple.
For example, radial expansion and mass loss can prompt ZLK oscillations or dynamical instabilities <cit.>.
Population synthesis studies of stellar triples show that the inner binaries in hierarchical triples have increased stellar interactions compared to isolated binaries <cit.>.
Similarly, tertiary-driven dynamics could play an essential role in double compact object mergers.
While GW sources detected by the LIGO/Virgo collaboration <cit.>
have been studied in the context of stellar triples, this has been done so far only in a limited parameter space.
For example, for systems in which the inner binary is wide enough such that interaction between the two stars in the form of mass exchange can be neglected
<cit.>,
or in which the stars of the inner binary merge during the main sequence <cit.>.
There are still major uncertainties and a need to explore and to understand the population of merging binary BHs from hierarchical triples.
In this paper, we focus on the evolution of hierarchical triples in which the stars of the inner binaries are chemically homogeneously evolving.
CHE stars have been discussed in the context of rapidly-rotating stars <cit.>, which can experience enhanced mixing during the MS stage. This mixing allows hydrogen-rich matter in the radiative envelope to be deposited into the convective core, where it is fused to helium. At the same time, helium is mixed throughout the star. This prevents the build-up of a chemical gradient inside the star and the classical core-envelope structure. As a result, the stars remain very compact over their lifetime. CHE has been proposed to occur in very close binaries where the tidal deformation of both stars is strong and they are forced to rotate rapidly <cit.>.
More recently, CHE binaries received renewed interest as they have been proposed as a new pathway to form BH binaries that can merge within the age of the universe <cit.>. Recently, <cit.> studied triples with CHE inner binaries in the context of sequential merging BH-BHs with masses that fall in the pair-instability mass gap. Specifically, they considered sequential mergers of hierarchical co-planar triples, a simplified approach which neglected three-body dynamics. In this paper, we remove the constraints of co-planarity and explore, for the first time, the evolution of massive stellar triples with CHE inner binaries in the entire parameter space. As isolated CHE binaries are known to be promising GW progenitors, we will mostly focus on the role of the tertiary star in the evolution of the inner binary in the context of GW astronomy.
This paper is structured as follows.
In section <ref>, we introduce , the triple evolutionary code we use in this study,
and the adaptations we have made to model CHE and contact binaries.
In section <ref>, we discuss the results of our population synthesis in and identify the most important evolutionary channels.
In section <ref>, we show that the initial parameters of the tertiary star are sufficient to predict
the evolutionary channel of each system.
In section <ref>, we use analytical and numerical methods to explore our synthetic population of stellar triples in the context of GW sources.
Finally, we discuss the main difference between the evolution of triples with and without CHE stars in their inner binary.
§ METHODOLOGY
We use to simulate the evolution of our hierarchical triples <cit.>. couples secular dynamics of stellar triples with stellar evolution, and takes into account additional physical processes such as stellar interactions and dissipative processes.
determines the evolution of each star by using the fitting formulae of <cit.> to the stellar tracks of <cit.> from the rapid binary synthesis code <cit.>, while interactions between the stars are determined by .
treats three-body dynamics in the following way. For secular evolution, we include secular three body dynamics (subscript `3b') including quadrupole () and octupole terms ( with corrections of ). Regarding the additional physical processes, we take into account: i) general relativistic effects (GR) and GW emission <cit.>, ii) tidal friction <cit.>, iii)
the effects of stellar winds under the assumptions of fast, adiabatic wind at the mass loss rate provided by (subscript `wind'), iv) precession due to ZLK, GR, tides <cit.>
and intrinsic stellar rotation <cit.>, and v) the change in the stellar rotation due to stellar evolution based on spin angular momentum conservation (subscript `I'). This gives rise to a set of first-order ordinary differential equations, that are solved numerically.
These equations are:
ȧ_ in = ȧ_ in, GR + ȧ_ in, TF + ȧ_ in, wind
ȧ_ out = ȧ_ out, GR + ȧ_ out, TF + ȧ_ out, wind
ė_ in = ė_ in,3b + ė_ in,GR + ė_ in,TF
ė_ out = ė_ out,3b + ė_ out,GR + ė_ out,TF
ġ_ in = ġ_ in,3b + ġ_ in,GR + ġ_ in,tides + ġ_ in,rotate
ġ_ out = ġ_ out,3b + ġ_ out,GR + ġ_ out,tides + ġ_ out,rotate
ḣ_ in = ḣ_ in,3b
θ̇ = -1/(J_ b,in J_ b,out) [J̇_ b,in(J_ b,in + J_ b,out θ) + J̇_ b,out(J_ b,out + J_ b,in θ)]
Ω̇_1 = Ω̇_ 1,TF + Ω̇_ 1,I + Ω̇_ 1,wind
Ω̇_2 = Ω̇_ 2,TF + Ω̇_ 2,I + Ω̇_ 2,wind
Ω̇_3 = Ω̇_ 3,TF + Ω̇_ 3,I + Ω̇_ 3,wind
where a, e, g, h and J_b represent the semimajor axis, eccentricity, argument of pericenter, line of ascending nodes, and the orbital angular momentum for the inner (subscript `in') and outer (subscript `out') orbit. Dots denote time derivatives. Lastly, θ≡cos(i), where i is the mutual inclination between the inner and outer orbit, and Ω_1, Ω_2, Ω_3 are the spin frequencies of the primary, secondary and tertiary star, respectively. By definition, the primary and secondary stars are the stars in the inner binary, with the primary star initially more massive than the secondary star, and the tertiary star orbits the inner binary.
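For illustration only, the minimal sketch below shows how such a coupled set of secular equations can be integrated with a standard ODE solver. The individual rate terms are placeholders (assumptions of this sketch), not the actual implementation of the triple evolution code used in this work.

```python
# Schematic sketch: the secular evolution is a set of coupled first-order ODEs,
# dy/dt = f(t, y), integrated with a stiff solver. The rate terms below are
# placeholders standing in for the physical contributions listed in the text.
from scipy.integrate import solve_ivp

def secular_rhs(t, y):
    a_in, a_out, e_in, e_out = y
    da_in = -1e-12 * a_in          # placeholder: e.g. GW + tidal shrinkage
    da_out = 1e-13 * a_out         # placeholder: e.g. wind widening
    de_in = 1e-14 * (1.0 - e_in)   # placeholder: e.g. ZLK driving
    de_out = 0.0
    return [da_in, da_out, de_in, de_out]

y0 = [22.4, 421.0, 0.0, 0.0]       # a_in [Rsun], a_out [Rsun], e_in, e_out
sol = solve_ivp(secular_rhs, (0.0, 1e6), y0, method="LSODA", rtol=1e-8)
print(sol.y[:, -1])
```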
We highlight three aspects of the orbital evolution of hierarchical triples that are particularly relevant for the systems we study in this paper. Firstly, if the apsidal precession of the inner binary due to short-range forces, such as tides (ġ_ in, tides) and GR effects (ġ_ in, GR), occurs on a much shorter timescale than the precession due to three-body dynamics (ġ_ in, 3b), ZLK oscillations will be quenched <cit.>.
The timescale of ZLK oscillations can be approximated as <cit.>:
t_ ZLK = (M_1 + M_2/G M_ out^2)^1/2(a_ out/a_ in^1/2)^3(1-e_ out^2)^3/2.
The timescale related to the apsidal precession due to tides are <cit.>:
t_ tides = (M_1/(15 k_ amμ_ in^1/2M_2)) (a_ in^13/2/R_1^5) ((1 - e_ in^2)^5/(1 + 3/2 e_ in^2 + 1/8 e_ in^4)),
where k_ am is the apsidal motion constant, which we assume to be 0.0144 for MS and helium stars, μ_ in = G(M_1+M_2) is the standard gravitational parameter of the inner binary, and R_1 is the radius of the inner primary star.
The timescale related to precession due to general relativistic effects is <cit.>:
t_ GR = c^2/3μ_ in^3/2a_ in^5/2 (1 - e_ in^2).
If t_ ZLK≫min(t_ GR,t_ tides), then three-body dynamics are suppressed. If the timescales are comparable, then the maximum eccentricity induced by the ZLK oscillations is diminished. In principle, rotation-induced oblateness in the inner binary also induces apsidal precession <cit.>. However, as long as the rotational period of the inner stars is not shorter than the orbital period (which is true for all systems considered here), ġ_ tides≫ġ_ rot and therefore precession due to stellar rotation does not play a role in suppressing three-body dynamics <cit.>.
Secondly, octupole terms in the three-body dynamics are typically negligible for CHE triples, as the mass ratio of the inner binary is always very close to one. Finally, we estimate the time it takes for the inner binary to merge due to GWs following <cit.>, if the tertiary is dynamically decoupled from the inner binary. If ZLK oscillations are still relevant during the inspiral phase, we follow the approximation of <cit.>:
t_ GW≈ t_ GW, Peters(a_ in,e_ in, max)(1 - e_ in, max)^-1/2,
where t_ GW is the time required for the merger, t_ GW, Peters is the time to merger based on the relation of <cit.>, e_ in, max is the maximum eccentricity reached during ZLK oscillations and a_ in is the initial inner semimajor axis. The approximation in equation <ref> is based on <cit.> and it neglects the effects of precession due to GR. When the latter is taken into account, <cit.> finds that equation <ref> underestimates the actual merger timescale typically by a factor of 2-3.
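As a concrete illustration of how these timescales can be compared in practice, the sketch below evaluates t_ZLK, t_tides and t_GR for an example triple and checks the quenching condition; all input values are illustrative assumptions, not results of this work.

```python
# Sketch: compare the ZLK, tidal and GR apsidal-precession timescales
# (equations above) to check whether three-body dynamics are quenched.
import numpy as np
from astropy import units as u
from astropy.constants import G, c

def t_zlk(m1, m2, m_out, a_in, a_out, e_out):
    return (np.sqrt((m1 + m2) / (G * m_out**2))
            * (a_out / a_in**0.5)**3 * (1 - e_out**2)**1.5)

def t_tides(m1, m2, a_in, e_in, r1, k_am=0.0144):
    mu_in = G * (m1 + m2)
    return (m1 / (15 * k_am * np.sqrt(mu_in) * m2)
            * a_in**6.5 / r1**5
            * (1 - e_in**2)**5 / (1 + 1.5 * e_in**2 + 0.125 * e_in**4))

def t_gr(m1, m2, a_in, e_in):
    mu_in = G * (m1 + m2)
    return c**2 * a_in**2.5 * (1 - e_in**2) / (3 * mu_in**1.5)

# Example: double helium-star inner binary with a lower-mass tertiary
m1 = m2 = 60 * u.Msun
m_out, a_in, a_out = 30 * u.Msun, 30 * u.Rsun, 400 * u.Rsun
tz = t_zlk(m1, m2, m_out, a_in, a_out, 0.0).to(u.yr)
tt = t_tides(m1, m2, a_in, 0.0, 1.5 * u.Rsun).to(u.yr)
tg = t_gr(m1, m2, a_in, 0.0).to(u.yr)
print(tz, tt, tg, "ZLK quenched:", tz > min(tt, tg))
```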
§.§ Modelling of chemically homogeneous evolution
We follow <cit.> in order to incorporate CHE stars in .
That means that we assume a star evolves chemically homogeneously, if the angular frequency of the spin of the star is above a certain critical value, i.e. ω_ star > ω_ CHE, crit.
<cit.> provides a fit to this critical value based on <cit.> models at different masses and metallicities.
In order to determine whether a star evolves chemically homogeneously, we check whether our simulated star is spinning above ω_ CHE, crit at every timestep.
If a star meets this criterion, we do not evolve its radius during that timestep. We assume that, by the end of core hydrogen burning, the star forms a helium star with a mass M_ He, ZAMS = M_ TAMS, where M_ He, ZAMS is
the initial mass of the helium star and M_ TAMS is the terminal age main sequence mass of the star. With these assumptions, CHE stars experience an instantaneous drop in radius at the end of their MS phase <cit.>. This is a simplification of the results of detailed simulations of CHE stars, which suggest a gradual contraction of the radius during the MS <cit.>.
If a CHE star loses angular momentum (e.g. due to stellar winds), its rotational frequency decreases. If the frequency reduces to below the critical value, we assume the evolution of the star transitions back to the classical non-CHE case.
For simplicity, we only consider systems in which the stars of the inner binary are CHE from zero-age main sequence (ZAMS).
Stars that do not evolve chemically homogeneously from ZAMS could, in theory, become CHE if they attained a sufficiently high-spin frequency before a significant chemical gradient is built up in their interior.
This can be achieved for example, if a star is spun up by accretion during a mass transfer event <cit.>. We neglect such systems in this study.
§.§ Contact binaries
We follow the implementation of <cit.>
for modelling contact binaries <cit.>.
We assume that contact binaries, i.e. binaries in which both stars fill their Roche lobes, can maintain co-rotation and consequently survive the contact phase without merging, as long as neither of the stars fills the outer Lagrangian points (L2 and L3).
For contact binaries, <cit.> finds that mass is transferred back and forth between the two stars until they reach equal masses.
We follow <cit.> and approximate the L2 point as
(R_L2,2 - R_RL,2)/R_RL,2 = 0.299 tan^-1(1.84 q^0.397),
where R_RL,2 is the Roche-lobe radius of the secondary star, which we approximate following <cit.>.
If the stars in the inner binary are in contact but without filling their L2 points,
we assume that the masses of the binary equalise via a fully conservative mass transfer phase.
We follow <cit.> and assume this mass equalisation occurs instantaneously and readjust the orbit of the inner binary as <cit.>:
a_ fin/a_ init = ( M_ 1,init M_ 2,init/M_ 1,fin M_ 2,fin)^2,
where a_ init, a_ fin are the initial and the final orbital separation and M_ 1,init, M_ 2,init are the initial masses of the primary and the secondary, respectively. The final masses are M_ 1,fin = M_ 2,fin = 1/2·(M_ 1,init + M_ 2,init) by definition.
The assumption of mass equalisation for contact binaries results in the prediction of the CHE channel leading to mostly equal-mass binary BH mergers <cit.>.
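The following sketch illustrates the two ingredients of this treatment: the outer-Lagrangian-point fit quoted above and the instantaneous, conservative mass-equalisation step. The Roche-lobe fit is the standard Eggleton (1983) approximation; the function names, the definition of q, and the example values are assumptions of this sketch.

```python
# Sketch: L2-overflow check and mass equalisation for a contact inner binary.
import numpy as np

def roche_lobe_radius(a, m_donor, m_accretor):
    """Eggleton (1983) approximation for the Roche-lobe radius."""
    q = m_donor / m_accretor
    return a * 0.49 * q**(2.0/3.0) / (0.6 * q**(2.0/3.0) + np.log(1.0 + q**(1.0/3.0)))

def l2_radius_secondary(a, m1, m2):
    """Outer-Lagrangian-point radius around the secondary (fit quoted in the text)."""
    q = m2 / m1                      # assumption: q = secondary / primary
    r_rl2 = roche_lobe_radius(a, m2, m1)
    return r_rl2 * (1.0 + 0.299 * np.arctan(1.84 * q**0.397))

def equalise_masses(a_init, m1_init, m2_init):
    """Conservative, instantaneous mass equalisation; returns the new orbit."""
    m1_fin = m2_fin = 0.5 * (m1_init + m2_init)
    a_fin = a_init * (m1_init * m2_init / (m1_fin * m2_fin))**2
    return a_fin, m1_fin, m2_fin

# Example: 70 + 60 Msun contact binary with a = 22 Rsun
a_new, m1_new, m2_new = equalise_masses(22.0, 70.0, 60.0)
print(a_new, m1_new, m2_new)         # the orbit shrinks slightly, masses equalise
print(l2_radius_secondary(22.0, 70.0, 60.0))
```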
§.§ Stellar winds
The mass loss rates of stellar winds and their effects on the evolution of the star are determined by <cit.>, while the effects on the orbit of the triple are determined by (equation <ref>).
In this study, we use the same implementation of stellar winds for massive stars as in <cit.> with one difference; the mass loss rates of helium stars and giants are calculated according to the empirical formula of <cit.> instead of <cit.>.
For reference, we summarise the mass loss rates prescriptions used in this study.
For MS stars, we follow <cit.>, if T_eff≤ 50 kK and <cit.>, if T_eff > 50 kK.
For evolved stars crossing the Hertzsprung gap or core helium burning (CHeB) stars, we follow <cit.>, if T_eff≥ 8 kK or the maximum between <cit.> and <cit.>, if T_eff < 8 kK.
For evolved stars beyond the Humphreys-Davidson limit, we assume Ṁ_LBV = 1.5·10^-4 M_⊙yr^-1 <cit.>.
For Asymptotic Giant Brach stars and double shell burning supergiants, we calculate the maximum between <cit.>, <cit.> and <cit.>.
Finally, for helium stars we follow the empirical form from <cit.> in the form
Ṁ_WR = 0.5·10^-13·(L/L_⊙)^1.5(Z/Z_⊙)^0.86 with a clumping factor of η = 0.5 from <cit.> and a metallicity scaling of Ṁ_WR∼ Z^0.86 <cit.>.
In order to compute the change in the orbit due to stellar winds, we assume stellar winds are spherically symmetric and fast compared to the orbital velocity; additionally, we neglect wind accretion by the companions. In that case, the inner and the outer orbits of the triple widen as
( a_ final/a_ init)_ in, wind = (M_ 1,init + M_ 2,init)/(M_ 1,final + M_ 2,final),
and
( a_ final/a_ init)_ out, wind = (M_ 1,init + M_ 2,init + M_ 3, init)/(M_ 1,final + M_ 2,final + M_ 3,final),
where subscripts `init' and `final' refer to properties before and after the stellar winds carried mass away from the stars in a given timestep.
We assume that the eccentricity remains unchanged by stellar winds <cit.>.
We neglect stellar wind accretion by the other stars in the triple system <cit.>.
Neglecting accretion is justified for line-driven winds due to their large terminal velocities <cit.>.
The assumptions of a fast and spherically symmetric wind might not always be valid <cit.>, and rapidly rotating stars might not have fully symmetric outflows <cit.>. Particularly, stellar winds in certain binary-configurations might even lead to orbital shrinking <cit.>.
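For reference, a minimal sketch of the corresponding orbital widening (equations above), assuming fast, spherically symmetric winds and no accretion; the masses used are illustrative.

```python
# Sketch: wind-driven widening of the inner and outer orbits (no accretion).
def widen_inner(a_in, m1_i, m2_i, m1_f, m2_f):
    return a_in * (m1_i + m2_i) / (m1_f + m2_f)

def widen_outer(a_out, m1_i, m2_i, m3_i, m1_f, m2_f, m3_f):
    return a_out * (m1_i + m2_i + m3_i) / (m1_f + m2_f + m3_f)

# Example: Wolf-Rayet winds remove 10 Msun from each inner star
a_in_new = widen_inner(40.0, 60.0, 60.0, 50.0, 50.0)                 # Rsun
a_out_new = widen_outer(400.0, 60.0, 60.0, 30.0, 50.0, 50.0, 30.0)   # Rsun
print(a_in_new, a_out_new)   # the inner orbit widens faster than the outer one
```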
§.§ Remnant formation
The mass of the compact object remnant is computed based on the delayed supernova model from <cit.>.
This prescription gives the mass of the stellar remnant as a function of CO core mass, where the latter is determined in based on the fits of <cit.>.
The natal kick velocity for BHs is calculated as
v_BH = (1 - f_ b)(M_ NS/M_BH)v_ kick,
where f_ b is the fallback fraction <cit.>, M_ NS is the canonical neutron star mass (M_ NS = 1.4 M_⊙) and v_ kick is a random kick velocity drawn from the distribution inferred by <cit.> from proper motion measurements of pulsars.
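A minimal sketch of this fallback-scaled kick prescription is given below; drawing the kick magnitude from a Maxwellian with a dispersion of 265 km/s is an assumption of this sketch, standing in for the proper-motion-based distribution cited above.

```python
# Sketch: fallback-scaled BH natal kick, v_BH = (1 - f_b) (M_NS / M_BH) v_kick.
import numpy as np
rng = np.random.default_rng(42)

def bh_natal_kick(m_bh, f_fallback, m_ns=1.4, sigma=265.0):
    v_kick = rng.normal(0.0, sigma, size=3)   # 3D Gaussian -> Maxwellian magnitude
    v_mag = np.linalg.norm(v_kick)            # km/s
    return (1.0 - f_fallback) * (m_ns / m_bh) * v_mag

print(bh_natal_kick(m_bh=30.0, f_fallback=0.9))   # heavily suppressed kick
```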
We determine the change in the inner and outer orbit due to the core collapse of any of the stars in the triple system based on the formalism developed in <cit.>.
Models of <cit.> predict that the most massive stars collapse directly (typically M_ ZAMS≳ 40 M_⊙),
without any ejecta, and the only mass loss during the remnant formation is due to neutrino losses, which is assumed to be 10 per cent of the pre-core-collapse mass of the star. Additionally, we assume that the neutrino emission is spherically symmetrical and do not impart natal kick onto the BH. In this case, the orbit is only changed due to the instantaneous mass loss <cit.>.
We note that, if the pre-core-collapse orbit is circular, a Blaauw kick due to neutrino losses does not lead to a significant change in the inner orbital elements. However, this is no longer the case for eccentric pre-core-collapse orbits. In particular, if the core collapse occurs near the pericenter, the orbit can become significantly wider <cit.>.
By the onset of core-oxygen burning, the core temperatures of the most massive stars can reach above T_ core∼ 3× 10^9 K.
Under these conditions, the emitted gamma-ray photons in the core are energetic enough to form electron-positron pairs.
This leads to pair instability (see e.g. <cit.>).
Depending on the mass of the star, this instability can result in a pulsational pair-instability supernova (PPISN), in which the star experiences a series of pulsations leading to severe mass loss <cit.>,
or a pair-instability supernova (PISN), in which the star is completely disrupted and no remnant is formed <cit.>.
For the treatment of pair-instability in massive stars, we follow <cit.>. If the mass of the helium star pre-core-collapse
is M_ HE, pre-SN≥ 35 M_⊙, the star is assumed to undergo PPISN, and its remnant mass is determined by the fitting formula of <cit.>, based on the detailed stellar simulations of <cit.>.
If 60≤ M_ HE, pre-SN≤130 M_⊙, we assume the star undergoes a PISN and leaves no remnant behind. In principle, if M_ HE, pre-SN≥ 130 M_⊙, photo-disintegration prevents
the pair-instability supernova from occurring and the star collapses directly into a BH
<cit.>; however, this does not occur for any of our simulated systems.
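Schematically, the branching described above can be summarised as follows; the PPISN remnant mass returned here is a simple placeholder cap (an assumption of this sketch) standing in for the fitting formula of the cited work.

```python
# Sketch of the pair-instability branching by pre-core-collapse helium-star mass.
def remnant_after_pair_instability(m_he_pre_sn):
    if m_he_pre_sn < 35.0:
        return None            # no pair-instability pulsations; handled by the
                               # delayed SN prescription instead
    if m_he_pre_sn < 60.0:
        # PPISN: severe pulsational mass loss; placeholder cap for the quoted fit
        return min(m_he_pre_sn, 45.0)
    if m_he_pre_sn <= 130.0:
        return 0.0             # PISN: star completely disrupted, no remnant
    return 0.9 * m_he_pre_sn   # photo-disintegration: direct collapse (minus neutrinos)

for m in (30.0, 50.0, 90.0, 150.0):
    print(m, remnant_after_pair_instability(m))
```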
§.§ Tertiary mass transfer (TMT) episodes
If the tertiary star fills its Roche-lobe, it will transfer mass to the inner binary.
There have been some efforts to study and model this process <cit.>, but this complex scenario remains to be fully understood.
In order to calculate the Roche-lobe of the tertiary star, we assume the inner binary can be approximated as a point mass and estimate the Roche radius with the fitting formula of <cit.>.
This assumption is valid in the regime where the orbital separation of the outer star is much larger than that of the inner binary (e.g. a_out≫ a_in).
determines the stability of TMT based on extrapolating typical methods from binary star evolution, i.e. by using critical mass ratios <cit.>. This parameter is defined as q_ crit = M_ donor/M_ accretor, i.e. the ratio of the mass of the donor and the mass of the accretor star at the onset of the mass transfer episode. The mass transfer phase is assumed to be dynamically unstable if the mass ratio of the system is above the critical mass ratio, i.e. q>q_ crit. We obtain q_ crit for each stellar evolutionary stage from <cit.> and <cit.>. We quote these values for the two most common donor types in our simulations <cit.>. These are q_ crit = 3 and q_ crit = (1.37+2[M_ donor, core/M_ donor]^5)/2.13 for Hertzsprung gap stars
(i.e. hydrogen shell burning stars which have not regained thermal equilibrium yet) and core helium burning (CHeB) stars,
respectively. The term in the square brackets is the core mass to total mass ratio of the donor. If this equals ∼ 0.45 - 0.65, which is fairly typical for massive CHeB stars <cit.>, then q_ crit≈ 0.7-0.75. This reflects the assumption made by <cit.> that CHeB stars tend to have deep convective envelopes <cit.> and are therefore more likely to experience unstable mass transfer episodes <cit.>.
Stable TMT could be accompanied with the formation of a circumbinary disc or it could occur in a ballistic accretion fashion. These two types could lead to significantly different evolution of the inner orbit <cit.>.
We assume that TMT occurs via ballistic accretion if a_ in(1 + e_ in) ≥ R_ cd at the onset of the TMT phase, where R_ cd is given by (adapting the fitting formulas for mass-transferring binaries of <cit.> to triples):
R_ cd = 0.0425 a_ out(1-e_ out)[1/q_ out(1 + 1/q_ out) ]^1/4.
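A short sketch of this criterion, with illustrative input values:

```python
# Sketch: ballistic accretion vs circumbinary-disc formation at the onset of TMT.
def r_circumbinary(a_out, e_out, q_out):
    """Circularisation radius of the transferred material; q_out = M_out / (M_1 + M_2)."""
    return 0.0425 * a_out * (1.0 - e_out) * ((1.0 + 1.0 / q_out) / q_out)**0.25

def tmt_mode(a_in, e_in, a_out, e_out, q_out):
    r_cd = r_circumbinary(a_out, e_out, q_out)
    return "ballistic" if a_in * (1.0 + e_in) >= r_cd else "circumbinary disc"

# Example: eccentric BH-BH inner binary (illustrative values)
print(tmt_mode(a_in=171.0, e_in=0.94, a_out=900.0, e_out=0.1, q_out=0.4))
```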
§.§.§ TMT: Evolution of the inner orbit
If the tertiary star fills its Roche-lobe, stops the simulation of the system. However, when discussing potential GW progenitors (Section <ref>), we determine the orbital evolution due to TMT by applying simplified assumptions, if the mass transfer episode is dynamically stable. In this subsection we describe our assumptions about the evolution of the inner orbit during a stable phase of TMT, while in subsection <ref> we discuss the evolution of the outer orbit.
We distinguish three particular TMT configurations cases, based on the evolutionary stage of the inner binary and on whether or not the transferred mass forms a circumbinary disc around the inner binary:
* an inner binary with compact objects and with ballistic accretion,
* an inner binary with compact objects and with a circumbinary disc,
* a non-compact inner binary.
(i) An inner binary with compact objects and with ballistic accretion. Hydrodynamical simulations of <cit.> showed that in case of a TMT episode with ballistic accretion, the transferred mass
eventually engulfs the inner binary and exerts friction on it.
This leads to a scenario that could be considered similar to the common-envelope evolution of binaries <cit.>, since in both cases drag forces exerted by a gaseous medium supplied from the donor star lead to the orbital shrinking of the binary.
Inspired by this similarity, <cit.> applied a modified version of α-formalism <cit.> to model the inner binary evolution of triples experiencing TMT <cit.>. For the configuration case (i), we take the same approach.
Below we explain how the post-mass-transfer inner orbit is determined based on this formalism in detail. Δ M_ trnsf is the mass that is transferred from the tertiary in a timestep Δ t. When Δ M_ trnsf ends up encompassing the inner binary, it has binding energy of E_ bind.
As the inner orbit is shrinking due to the friction during the TMT episode, the orbital energy of the inner binary changes by Δ E_ orb. We assume that a fraction (α_ TMT) of Δ E_ orb is used to unbind Δ M_ trnsf. We can write an equation expressing the energy balance as:
α_ TMTΔ E_ orb = E_ bind,
with
Δ E_ orb = GM_1M_2/2a_ in, fin - G(M_1 + Δ M_ trnsf/2) (M_2 + Δ M_ trnsf/2)/2a_ in, init,
and
E_ bind = -G(M_1 + M_2) Δ M_ trnsf/λ_ TMT a_ init,
where λ_ TMT is a parameter related to the structure of Δ M_ trnsf, parameterising its binding energy,
a_ in, init is the initial orbital separation before Δ M_ trnsf is transferred to the inner binary and a_ in, fin is the final orbital separation after Δ M_ trnsf is expelled from the inner binary. We assume that the total mass transferred to the inner binary throughout the entire TMT episode equals to the mass of the hydrogen envelope of the tertiary M_ out,env (but see ). Then assuming a constant α_ TMT and λ_ TMT, the orbit changes due to the entire TMT episode as:
a_ in, fin/a_ in, init = M_1M_2/2(M_1 + M_2) M_ out,env/α_ TMTλ_ TMT + (M_1 + M_ out,env/2) (M_2 + M_ out,env/2).
As both α_ TMT and λ_ TMT are unknown, we combine them and try three different values: α_ TMTλ_ TMT = 0.05, 0.5, 5. Here α_ TMTλ_ TMT = 5 is the fiducial value used in <cit.>, which is in a good agreement with the hydrodynamical simulations of <cit.>, in which the inner stars are on the MS during the TMT episode.
We note that we neglect the possibility of TMT episode with ballistic accretion transitioning to a TMT episode with a circumbinary disc.
Additionally, for configuration type (i), we assume that the inner binaries circularise as a result of the mass transfer phase (as a_ in, new = a_ in(1 - e_ in)). We note that this assumption might not be correct for highly eccentric inner binaries. For example, <cit.> showed that binaries at the onset of common-envelope events with e≳ 0.95 might retain eccentricities as high as e∼0.2.
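The following sketch evaluates the orbital shrinkage of equation <ref> above for an illustrative BH-BH inner binary and the three adopted values of α_TMTλ_TMT; all numbers are assumptions for illustration.

```python
# Sketch of the modified alpha-formalism for ballistic TMT onto a compact
# inner binary: the tertiary's hydrogen envelope is transferred and expelled
# at the expense of the inner orbital energy.
def inner_orbit_after_tmt(a_in_init, m1, m2, m_env_out, alpha_lambda=5.0):
    numerator = m1 * m2
    denominator = (2.0 * (m1 + m2) * m_env_out / alpha_lambda
                   + (m1 + 0.5 * m_env_out) * (m2 + 0.5 * m_env_out))
    return a_in_init * numerator / denominator

# Example: 30 + 30 Msun BH-BH inner binary accreting a 20 Msun envelope
for al in (0.05, 0.5, 5.0):
    print(al, inner_orbit_after_tmt(100.0, 30.0, 30.0, 20.0, alpha_lambda=al))
```

As the example shows, smaller values of α_TMTλ_TMT correspond to stronger shrinkage of the inner orbit for the same transferred envelope mass.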
(ii) An inner binary with compact objects with circumbinary disc. If a circumbinary disc is formed during a mass transfer phase towards an inner BH-BH binary we assume that the orbit of the inner binary remains unchanged.
The actual physics underlying such a process are very complex <cit.>. The circumbinary disc may exert a torque on the inner binary and extract angular momentum from it, while the accreted matter can transfer angular momentum onto the inner binary. Furthermore, the circumbinary disc and the inner binary could be tidally distorted by the tertiary star.
It is commonly assumed that circumbinary accretion of a BH-BH binary from a gaseous medium leads to the shrinking of its orbit due to the torques exerted by the circumbinary disc and due to dynamical friction of the gas (e.g. <cit.>). However, a consensus regarding this physical process is still missing, with some hydrodynamical simulations suggesting that accretion from a circumbinary disc could even lead to orbital widening instead of orbital decay <cit.>.
(iii) A non-compact inner binary. If the mass transfer occurs towards a MS-MS inner binary, we assume that this results in the merger of the inner binary. We make this assumption because such binaries have very short periods, a sizeable fraction of them are in contact, and they would most likely expand due to TMT and overfill their L2 point, which would lead to a merger (see later subsection <ref>). As we discuss in subsection <ref>, we do not consider GW sources from triple systems in which the TMT occurs towards a binary with evolved (i.e. non-MS), non-compact stars.
We do not model unstable phases of TMT (as we will show later, they are very rare among the systems we discuss in this paper). We note, however, that during this type of mass transfer episode, the outer orbital separation is predicted to decrease rapidly due to the common-envelope-like evolution of the triple system; this could result in a regime where the secular approximation for the triple is no longer valid <cit.>.
§.§.§ TMT: Evolution of the outer orbit
When determining the evolution of the outer orbit due to a stable phase of TMT, we apply the same method for all accretor types, irrespective of whether a circumbinary disc is formed.
We calculate the evolution of the outer orbit during the TMT phase, based on the following relation:
ȧ_ out/a_ out = -2 Ṁ_3/M_3[1 - βM_3/M_1 + M_2 - (1 - β)(γ + 1/2)M_3/M_ tot],
where β is the fraction of mass accreted by the inner binary, γ is the specific angular momentum lost from the system as a fraction of the specific angular momentum of the triple and Ṁ_3 is the mass transfer rate from the tertiary star.
Equation <ref>
can be derived from angular momentum arguments.
It is an adaptation of the relation describing the orbital evolution of a circular, mass transferring binary comprised of point particles <cit.>, applied to a triple experiencing a TMT episode. This adaptation is valid, if the tertiary star is sufficiently far away from the inner binary, such that the inner binary can be treated as a point particle with a mass of M_1 + M_2.
We assume that eventually all the transferred mass is isotropically expelled from the triple (β = 0), from near the inner binary. This expelled matter thus carries away a specific angular momentum that is equal to that of the inner binary (γ = M_3/(M_1 + M_2); see also <cit.> for a similar approach). In this case, equation <ref> can be expressed as
a_ out, fin/a_ out, init = M_ tot,init/M_ tot, fin(M_ 3,init/M_ 3,fin)^2exp(2M_ 3,fin- M_ 3,init/M_1 + M_2).
In the case of BH-BH inner accretors, these assumptions might be valid, as the accretion rate of BHs might be capped by the Eddington limit, and most of the mass could indeed be expelled from the system, for example in the form of a jet (e.g. <cit.>). On the other hand, MS stars are likely to accrete more efficiently, and therefore β = 0 might no longer be a good approximation.
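For illustration, the sketch below evaluates the outer-orbit relation above for an assumed set of masses, i.e. fully non-conservative TMT with isotropic re-emission from near the inner binary.

```python
# Sketch: outer-orbit change during stable TMT with beta = 0 and isotropic
# re-emission of the transferred mass from near the inner binary.
import numpy as np

def outer_orbit_after_tmt(a_out_init, m1, m2, m3_init, m3_fin):
    m_in = m1 + m2
    m_tot_init = m_in + m3_init
    m_tot_fin = m_in + m3_fin
    return (a_out_init
            * (m_tot_init / m_tot_fin)
            * (m3_init / m3_fin)**2
            * np.exp(2.0 * (m3_fin - m3_init) / m_in))

# Example: a 30 Msun tertiary loses its 20 Msun envelope onto a 60 Msun inner binary
print(outer_orbit_after_tmt(1800.0, 30.0, 30.0, 30.0, 10.0))   # the outer orbit widens
```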
§.§ Initial conditions
We sample 10^5 triples at two representative (moderate and low) metallicities: Z = 0.005 and Z = 0.0005. We simulate each hierarchical triple from ZAMS. After drawing the parameters of a given triple system, we further check whether it is dynamically stable <cit.> and whether the stars in the inner binary are CHE at ZAMS. If either criterion is not met, we do not evolve the triple system and only take it into account for the normalisation of event rate calculations. We terminate the simulation of a triple system when a Hubble time (assumed to be 13.5 Gyr) has passed, when the tertiary star fills its Roche lobe, when a merger or a dynamical instability occurs, or when any of the stars becomes unbound from the triple.
We also stop the simulation, if any of the stars in the inner binary transitions back from CHE to classical evolution. That is, we only consider triples in which the stars of the inner binary chemically homogeneously evolve throughout their entire MS lifetimes. We refer to this population as CHE triple population.
In this study, we motivate the choice of the initial distributions of the parameters of the inner binaries based on recent surveys of massive binaries <cit.>. In such surveys, a possible tertiary companion is not always unequivocally identified and therefore it is not clear whether the inferred distributions also hold for triples or only for isolated binaries.
We assume the ZAMS mass of the primary star (M_ 1,ZAMS) follows the power-law mass distribution of <cit.>, i.e. N∼M_ZAMS^-2.3 for M_ ZAMS≥ 0.5 M_⊙ and N∼M_ZAMS^-1.3 for M_ ZAMS < 0.5 M_⊙. We sample M_ 1,ZAMS from a mass range of 20-100M_⊙. The lower limit approximately coincides with the lowest initial mass at which CHE is still possible in a tidally locked binary <cit.>, while the upper limit is roughly the maximum mass at which the stellar tracks used in are still reasonably accurate. We assume a flat inner mass-ratio (i.e. q_ in, ZAMS = M_ 2,ZAMS/M_ 1,ZAMS) distribution, which is in broad agreement with <cit.>. We restrict the range of q_ in, ZAMS to 0.7-1 given that inner binaries in which both of the stars are chemically homogeneously evolving and have q_ in≤ 0.7 would
merge early during the MS (where we found the lower limit of 0.7 from our simulations).
We sample the inner semimajor axis from a log-uniform distribution (in broad agreement with <cit.>) in the range of 16 to 40 R_⊙.
We assume that the inner binaries are tidally locked at ZAMS. This has three implications: i) the inner binaries have circular orbits, ii) their rotational angular frequency is synchronised with the orbital angular frequency, and iii) the spins of the stars are aligned with the orbital angular momentum vector.
We draw the properties of the outer binary from the same distributions that we assume for the inner binaries, with the exception of outer eccentricities.
Observations of hierarchical multiple systems of galactic solar-type stars support the assumption that the distributions of the initial parameters of the inner and the outer binaries are the same <cit.>.
We sample the outer semimajor axis from a log-uniform distribution in the range of 100 to 10^5 R_⊙.
We assume that the distribution of the outer mass ratio (i.e. q_ out, ZAMS = M_ out,ZAMS/(M_ 1,ZAMS + M_ 2,ZAMS)) is flat on a range of 0.1 to 1, furthermore the mass of the tertiary is restricted to a range of 5-100M_⊙.
We assume non-spinning tertiary stars.
The eccentricities of the outer orbit are drawn from a thermal distribution <cit.>.
The mutual inclination between the inner and outer orbit is assumed to be uniform in cos(i_ZAMS), where i_ ZAMS is the initial inclination. The initial argument of the pericenter is assumed to be uniformly distributed between -π and π.
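For illustration, the sketch below draws a Monte Carlo sample from the initial distributions described above; the clipping of the tertiary mass is a simplified stand-in for the quoted mass restriction, and all details are assumptions of this sketch rather than the actual sampling code used in this work.

```python
# Sketch: Monte Carlo sampling of initial triple parameters (illustrative only).
import numpy as np
rng = np.random.default_rng(0)

def sample_triples(n):
    # primary mass: power law N ~ M^-2.3 on [20, 100] Msun (inverse-CDF sampling)
    lo, hi, alpha = 20.0, 100.0, -2.3
    u = rng.uniform(size=n)
    m1 = (lo**(alpha + 1) + u * (hi**(alpha + 1) - lo**(alpha + 1)))**(1.0 / (alpha + 1))
    q_in = rng.uniform(0.7, 1.0, n)                         # flat inner mass ratio
    a_in = 10**rng.uniform(np.log10(16), np.log10(40), n)   # Rsun, log-uniform
    q_out = rng.uniform(0.1, 1.0, n)                        # flat outer mass ratio
    m_out = np.clip(q_out * (m1 + q_in * m1), 5.0, 100.0)   # simplified mass restriction
    a_out = 10**rng.uniform(2.0, 5.0, n)                    # 100 - 1e5 Rsun, log-uniform
    e_out = np.sqrt(rng.uniform(size=n))                    # thermal distribution f(e) = 2e
    cos_i = rng.uniform(-1.0, 1.0, n)                       # isotropic mutual inclination
    return dict(m1=m1, m2=q_in * m1, a_in=a_in, m_out=m_out,
                a_out=a_out, e_out=e_out, cos_i=cos_i)

triples = sample_triples(100_000)
print({k: v[:3] for k, v in triples.items()})
```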
In Section <ref>, we compare our CHE triple population to a CHE isolated binary population. To this end, we also perform population synthesis of isolated binaries with CHE stars. We sample 10^5 isolated binaries at Z = 0.005 and Z = 0.0005 and evolve them with . We sample from the same initial distributions that we assumed for the inner binaries of our triple population. Similarly to the triple population, we discard systems that are not CHE at ZAMS and stop the simulation, if a Hubble time has passed, or if any of the stars in the binary transitions from CHE to classical evolution. We only analyse binaries, in which the stars remain CHE throughout their entire MS lifetime (hereafter CHE binaries).
Throughout the paper, we estimate birth rate and merger rate densities of different evolutionary channels (discussed in detail in appendix, section <ref>). In order to determine each of these quantities, one must know how common single and multiple stellar systems are.
We assume two different stellar populations, with different binary and triple fractions. In the first, we assume that about 73 per cent of massive stars are found in triples <cit.>[<cit.> finds that 73 per cent of O stars are either in triples or quadrupoles. Therefore f_ triple = 0.73 should be considered as a rough upper limit. We also note that <cit.> finds that there is a strong correlation between the inner period and the triple multiplicity; among solar type stellar systems, 96 per cent of the spectroscopic binaries with periods less than
3 days has a tertiary companion. Therefore CHE triples, which have also inner binaries with periods of few days, could have exceptionally high triple fractions too.], whereas in the second test population, we assume there are no triples and about 70 per cent of massive stars are in binaries <cit.> [Strictly speaking, <cit.> did not make any statements about triple fractions, but they found that 70 per cent of massive stars have companions that are sufficiently close such that mass exchange will occur some time in their evolution.].
§ RESULTS OF POPULATION SYNTHESIS SIMULATIONS
In Table <ref>, we provide an overview of our sampled systems based on the evolutionary type of the inner binary. Out of our sampled population of triples, only about 10 per cent of the triples have an inner binary where both stars evolve chemically homogeneously from ZAMS (CHE at ZAMS triples, see Table <ref>), and we follow the further evolution only for these triples.
About 75 per cent of CHE at ZAMS triples qualify as CHE triples and we focus on these systems for the majority of the paper. For the remaining 25 per cent, we distinguish three scenarios:
* The inner stars transition to classical evolution. As the orbit of the inner binary
widens due to stellar winds, the rotational frequencies of the inner stars decrease, because the stellar tides enforce synchronisation between the stellar spins and the (new, longer) orbital period. If the inner orbit widens sufficiently, the angular rotational frequencies of the inner stars drop below ω_ CHE, crit and therefore these stars transition to classical evolution. This occurs only in our moderate metallicity model (15.5 per cent of all CHE at ZAMS triples at Z = 0.005 and 0 per cent at Z = 0.0005).
* The inner binary does not survive the contact phase during the MS phase of the inner stars. We assume a merger takes place when both stars overflow their outer Lagrangian point during the contact phase. This occurs during mass equalisation in the contact phase or due to GW emission, both of which lead to shrinkage of the inner orbit. As orbital widening due to stellar winds prevents such mergers, and winds are weaker at low metallicity, these mergers occur more efficiently at low metallicities (about 9 per cent of all CHE at ZAMS triples at Z = 0.005 and 17.5 per cent at Z=0.0005).
* Computational issue. Finally, we note that the simulation of about 2 (6.7) per cent of CHE at ZAMS triples fails at Z = 0.005 (Z = 0.0005). This can occur because either no solution is found for the secular orbital evolution of the system, or the computation time exceeds the allowed CPU time (which is 5000 seconds per system).
§.§ Main evolutionary outcomes
In Table <ref>, we show the most common evolutionary outcomes for CHE triples.
We distinguish five different evolutionary channels:
* No post-MS mass transfer phase:
During the MS, the inner binary may be in contact, but the system does not experience any other form of mass transfer. In all these triples, the inner binary eventually forms a BH-BH binary.
* Stellar merger of the inner binary due to ZLK: Stellar merger occurs in the inner binary due to ZLK oscillations.
* Tertiary mass transfer (TMT): The tertiary star fills its Roche lobe.
* Unbound systems:
This evolutionary outcome takes place, if any of the stars becomes unbound from the system.
This occurs when a stellar remnant is formed in the system, with three major subtypes: (i) natal kick imparted onto the remnant object during the SN explosion, (ii) instantaneous mass loss during pulsational PISN, or (iii) complete disruption of the star due to PISN.
* Dynamical instability: These systems eventually become dynamically unstable, where the secular approximation is no longer valid.
We discuss these channels in detail in sections <ref> - <ref>.
§.§ Examples for the evolution of a few selected systems
In the following, we present the evolution of a few selected systems from some of the channels introduced in section <ref>.
In all of these example systems, the initial parameters of the inner binary are the same: M_ 1,ZAMS = M_ 2,ZAMS = 70 M_⊙, a_ in, ZAMS = 22.4 R_⊙. These have been specifically chosen such that this system would form a GW source via the binary CHE channel within the Hubble time, if it was an isolated binary
(i.e. in about 8.9 Gyrs). The inner binary is tidally locked and therefore e_ in, ZAMS = 0. The stars of the inner binary are in contact from ZAMS and equalise in mass soon after ZAMS. The initial mutual inclination is i_ ZAMS = 90^∘ in all systems discussed below, which allows for ZLK oscillations to develop, unless they are suppressed by short range forces <cit.>.
In order to understand the evolutionary paths of CHE triples introduced below, we first show which configurations of CHE triples lead to efficient ZLK oscillations (see Fig. <ref>). We evolve the previously introduced CHE inner binary as an isolated system, and take four snapshots during different evolutionary stages (ZAMS, end of MS, at the onset of core collapse, and at the formation of an inner BH-BH binary). For each snapshot, we show a range of possible tertiary companions to this inner binary with different tertiary masses (M_ out) and outer semi major axes (a_ out) and identify those regions, where three-body dynamics are relevant.
As shown in the leftmost panel, while the inner stars are still on the MS, precession due to tides completely
suppresses three-body dynamics for almost the entire parameter space of CHE triples.
The limited number of triples for which this is not true typically become dynamically unstable later in the evolution (e.g. compare panel 1 with panel 4).
By the time of hydrogen depletion in the inner stars, the stellar radii of CHE stars have typically shrunk by a factor of 3-5 with respect to their ZAMS values. Therefore, at this stage tides become less efficient (since t_ tides∼ R^-5, see equation <ref>) and precession due to GR becomes the major limitation to three-body dynamics. For the systems shown in Fig. <ref>, ZLK oscillations occur only if a_ out≲ 500 R_⊙. During the CHeB phase of the inner stars, the typical timescale of precession due to GR further increases as a result of the strong
Wolf-Rayet winds that significantly widen the inner orbit.
As long as the inner orbit widens faster than the outer orbit (which is always true if the tertiary star is the initially least massive star in the system), the timescale related to ZLK oscillations will not significantly increase. Therefore during this stage, the parameter space where three-body dynamics are relevant increases.
This is also shown in the rightmost panel of Fig. <ref>; by the time the inner binary forms BHs, triples with a_ out≲ 2000 R_⊙ will develop ZLK oscillations.
§.§.§ Example for stellar merger of the inner binary due to ZLK oscillations
First, we discuss the evolution of a CHE triple, in which the inner binary merges as a double helium star due to strong ZLK oscillations (shown in Fig. <ref>).
This triple has a tertiary with an initial mass of M_ out, ZAMS = 32.1 M_⊙ and a circular outer orbit with a_ out, ZAMS=200 R_⊙.
As indicated by Fig. <ref>, when the stars of the close inner binary are still on the MS, precession associated with strong tides suppresses the effects of the three-body dynamics <cit.>.
At 3.9 Myrs, the stars of the inner binary evolve off the MS. By this time, these stars had lost a small amount of mass due to stellar winds and the inner orbit had widened by only 2 per cent as a result. Similarly, the outer orbit also widens only by a negligible amount. Consequently, the timescale of the ZLK oscillations does not change significantly. On the other hand, the tidal effects become much weaker, as the radii of the stars had decreased by a factor of 5 with respect to their ZAMS value.
As a result, the ZLK oscillations are no longer suppressed (see also second panel of Fig <ref>).
At this stage, there are two competing mechanisms that drive the evolution of the pericenter: ZLK oscillations and the strong Wolf-Rayet-like winds, which decrease and increase the pericenter, respectively.
For this triple, the ZLK timescale is extremely short (few years) and a large inner eccentricity of e_ in≈ 0.65 is reached shortly after the onset of CHeB, during which the orbital widening due to stellar winds is negligible. At this stage, the pericenter becomes sufficiently small such that the helium stars fill their Roche-lobes at the point of closest approach. We assume this results in the merger of the inner binary.
§.§.§ Example for TMT towards an eccentric BH-BH binary
The next triple we discuss experiences a TMT episode towards an eccentric BH-BH inner binary (shown in Fig. <ref>). This system has the same parameters as the previously discussed triple, but with a slightly larger initial outer semimajor axis: a_ out,ZAMS = 421 R_⊙.
When the inner stars evolve off the MS, ZLK oscillations are quenched by precession due to GR (compare the second panels of Fig. <ref> and Fig. <ref>). Three-body dynamics become effective later, as the orbit of the inner binary widens significantly, and faster than the outer orbit, due to strong WR winds (compare the third panels of Fig. <ref> and Fig. <ref>, although by this stage the parameters of the inner binary differ slightly).
As a result, t_ GR increases by a factor of 5, while t_ ZLK barely changes. As ZLK cycles only become effective once the inner orbit has sufficiently widened, the inner binary does not come into contact, despite reaching inner eccentricities similarly high to those in the previous system.
As the stars of the inner binary have the same mass, they co-evolve, and they become stellar remnants at the same time.
This occurs around 4.2 Myr, when the inner eccentricity is e_ in, max = 0.75.
Since core collapse occurs in an eccentric orbit, a large range of post-supernova orbits is possible (a_ in = 42-186 R_⊙), depending on where exactly the stars are in their orbit. In the particular example shown in Fig. <ref>, the core collapse occurs while both stars are near the pericenter (which is less likely, as they spend more time near the apocenter). This leads to an inner semimajor axis of a_ in = 171 R_⊙ after BH-BH formation. As the outer orbit is circular at the onset of the core collapse, it only widens by a moderate amount. As the inner period to outer period ratio has increased by a factor of 7, the timescale of the ZLK oscillations further decreases, making three-body dynamics even more relevant for the further evolution of the system. The evolution of this triple therefore demonstrates that, if the ZLK oscillations are strong enough to induce eccentricities before the formation of an inner BH-BH binary, the importance of three-body dynamics can be significantly increased during the last stages of the evolution of the triple, depending on (i) where the inner stars are in their orbit when the compact objects form and (ii) the eccentricity of the outer orbit.
After the formation of the inner BH-BH binary, the tertiary star evolves off MS, and at 6.1 Myr fills its Roche-lobe and transfers mass to the highly eccentric (e_ in = 0.94) BH-BH binary at a highly inclined orbit (i = 71.5^∘). At this stage, we stop the simulation (but see later section <ref>, where we predict the further evolution of some of these systems). We note, however, that even if the TMT episode does not affect the inner binary, it still merges due to GWs about a factor of 8 faster than its isolated binary counterpart, just alone due to the high eccentricities induced by the ZLK oscillations.
§.§.§ Example for TMT towards a circular BH-BH binary
Next, we show the evolution of a CHE triple which also experiences a TMT episode towards a BH-BH binary, but in which three-body dynamics remain suppressed throughout the entire evolution. The initial outer semimajor axis is a_ out,ZAMS = 1069 R_⊙. For this system, the timescale of the ZLK oscillations remains too long with respect to the timescale associated with precession due to GR effects throughout the entire post-MS phase. At the onset of the core collapse, at which the parameter space for ZLK oscillations is typically the largest for CHE triples with inner binaries composed of non-compact objects, the outer semimajor axis is a_ out = 1720 R_⊙ and the tertiary mass is M_ out = 31.9 M_⊙. The third panel in Fig. <ref> implies that three-body dynamics are just quenched by the relativistic precession at this stage. Therefore, the inner orbit remains circular when the BHs are formed, and it only widens moderately due to BH formation. The inner and the outer orbits after the formation of the BH-BH binary are a_ in = 46.6 R_⊙ and a_ out = 1860 R_⊙, and therefore the ZLK oscillations remain quenched.
At 6 Myr, the tertiary reaches a radius of 547 R_⊙ and fills its Roche-lobe while crossing the Hertzsprung gap. The last two examples suggest (and we will show in section <ref> that this is generally true for the vast majority of CHE triples) that three-body dynamics are only relevant for the evolution of CHE triples if the tertiary star is on a sufficiently short orbit,
such that it will eventually fill its Roche-lobe and initiate a TMT episode. Conversely, if the tertiary star remains detached throughout the evolution of the triple, the inner binary evolves effectively as an isolated binary for the vast majority of CHE triples.
§.§ No post-MS mass transfer
In these triples, the tertiary star remains bound and detached, while the stars of the inner binary form a BH-BH binary.
The inner stars are in contact in the majority of the cases (e.g. around 90 per cent at Z = 0.005).
There are no other mass transfer phases during the evolution of these systems (by definition). About 27 per cent of CHE triples evolve this way in our moderate metallicity model (see Table <ref>). This decreases to 11 per cent at Z = 0.0005. The main reason for this difference
is the larger number of PISNe that occur at lower metallicities, which prevent the formation of BHs.
After the formation of the BH-BH binary, the system may merge due to GW emission within a Hubble time.
This occurs for all systems of this type at Z = 0.0005. However at Z = 0.005,
the stellar winds are strong enough such that 32 per cent of the inner binaries of these triples end up with orbits that are too wide to merge within a Hubble time
due to GW emission. We note that these are not necessarily all of the GW sources from our simulations, as triples in other channels discussed here can also potentially
form merging binary BHs (see discussion in section <ref>).
For the majority of these triples (>97 per cent), the inner binary evolves essentially unaffected by the tertiary star (see also section <ref>). Therefore, the properties of the inner binaries of this channel are nearly indistinguishable from those of isolated CHE binaries. The initial outer pericenters of the triples of this channel are large enough such that the outer star remains detached (i.e. a_ p, out, ZAMS≳ 2000-3000 R_⊙ at Z = 0.005
, see also section <ref>). At such large tertiary separations, the three-body dynamics remain suppressed during the entire evolution of the triple.
The properties of the subgroup in which three-body dynamics drive the evolution of the inner binary are very different. Firstly, they have very short initial outer pericenters (i.e., a_ p, out, ZAMS≈ 100-700 R_⊙), and secondly, the tertiary has a relatively low mass (typically M_ out,ZAMS = 10-30 M_⊙). In these systems, the ZLK oscillations drive the eccentricity of the inner BH-BH binary up to large values (e.g. e_ in≳ 0.7-0.9).
Above a given eccentricity, the GW emission becomes so efficient that the inner binary decouples from the tertiary and it plunges due to GWs <cit.>. These systems typically have a relatively low-mass tertiary star compared to the stars in the inner binary, such that the inner binary merges as a BH-BH binary due to GW emission before the tertiary star would evolve off the MS and fill its Roche-lobe.
Overall, the parameter space for this subgroup is very small, and therefore we predict a negligible GW merger rate (see later discussion in section <ref>).
§.§ Stellar merger of the inner binary due to ZLK
In this scenario, the inner binary merges due to three-body dynamics, before it would form a BH-BH binary. At Z = 0.005, about 3.3 per cent of the CHE triple population evolves this way.
In our low metallicity model, this fraction decreases slightly, to 2.2 per cent. This is because at lower metallicities, the inner period to outer period ratio increases less due to the weaker stellar winds, and therefore ZLK oscillations remain less efficient (see equation <ref>).
Mergers in this channel occur in inner binaries, in which one or both of the stars have already evolved off MS, otherwise the strong tidal effects typically quench the ZLK oscillations (see section <ref>).
As shown in Table <ref>, most of the mergers occur between two helium stars (59-75 per cent).
The rest occur in helium star - MS star or helium star - BH binaries.
The majority of the double helium star mergers (>90 per cent) originate from triples, in which the stars in the inner binary were in contact during MS and co-evolved. This also implies that the majority of them have equal masses at the time of the merger. The masses of these helium inner stars typically range from 29 to 94 M_⊙ at Z = 0.005.
The outer orbital period of the triples in this channel has to be sufficiently short for the ZLK oscillations to be strong enough to prompt the inner binary to merge. The outer pericenter at the moment of the merger typically ranges from 100 to 200 R_⊙ and does not exceed 700 R_⊙.
The eccentricities of the inner binary at the moment of the merger typically have values of e_ in≈ 0.5-0.9.
For all of these triples, the tertiary is a MS star at the time of the merger and less massive than the stars of the inner binary; otherwise, it would evolve faster than the stars in the inner binary and would fill its Roche lobe while the inner stars are still on the MS. If the outer orbit does not change significantly after the merger, the tertiary star is expected to transfer mass to the merger product once it evolves off the MS.
§.§ Systems with tertiary mass transfer (TMT)
Among CHE triples, this is the most common evolutionary pathway. In these systems, the outer star eventually initiates a mass transfer phase while the inner binary is detached or in contact. Approximately 55 (52) per cent at Z = 0.005 (Z = 0.0005) of all CHE triples experience this type of evolution (see Table <ref>).
This means that a TMT episode would eventually occur in about 40 per cent of all stellar systems containing a binary with CHE stars (with f_ binary = 0.21, f_ triple = 0.73).
While systems containing binaries with CHE stars are rare (see e.g. typical birth rates in Table <ref>), they form GW sources very efficiently <cit.>. Therefore, our predictions suggest that the evolution of a non-negligible fraction of potential GW progenitors could experience a TMT episode.
This is an interesting result, as TMT is thought to be very uncommon for classically evolving hierarchical triples, which would have implied that they play a limited role in important astrophysical phenomena <cit.>. In particular <cit.> found that about 1 per cent of triples with primaries in the intermediate mass range belong to this evolutionary channel. Similarly, <cit.> predicts that about only 1 per cent of the observed 725 triples in the catalogue of <cit.> would eventually initiate TMT.
In the following sections (<ref>-<ref>), we discuss the properties of the triples of this channel at the onset of TMT. While predicting the outcome of a TMT episode is currently extremely challenging, highlighting several important aspects of these systems (e.g. dynamical stability of TMT, timescales of TMT epsiodes, the amount of transferred mass, the type of accretors, etc.) helps to better understand the nature of these systems and the role they potentially play in the evolution of GW progenitors.
§.§.§ Donors of TMT episodes
Here, we discuss the stellar evolutionary stage of the donor star at the onset of the mass transfer episode, as it is highly relevant for determining whether the mass transfer episode occurs in a dynamically stable or unstable way <cit.>.
In particular, convective envelopes can be developed by core-helium-burning or asymptotic giant branch stars. Mass transfer episodes initiated by such cool-giant donors with deep convective envelopes are more likely to occur in a dynamically unstable way than mass transfer phases initiated by giant donors with mostly radiative envelopes <cit.>.
At Z = 0.005, around 80 per cent of the donors of TMT systems are stars crossing the Hertzsprung gap.
At this metallicity, the largest expansion in the radius of the star occurs during this evolutionary phase, which makes binary interaction during this stage the most probable. The second most common donor type is a CHeB star
(11.3 per cent), while the rest are either stars on the first giant branch (when the tertiary has M_ out,ZAMS≲ 8 M_⊙) or stars on the asymptotic giant branch.
At lower metallicities, CHeB donors are more prevalent. At Z = 0.0005, only 58 per cent of the tertiary donors are HG stars while 40 per cent are CHeB stars; this is because the onset of CHeB occurs at a higher effective temperature with respect to systems at Z = 0.005. Consequently, at lower metallicities, the onset of CHeB is followed by a larger increase in radius with respect to their higher metallicity counterparts. This in turn implies that stars are more likely to fill their Roche-lobes at this evolutionary stage.
§.§.§ Stability of TMT episodes
The vast majority of mass transfer episodes in this channel occur in a dynamically stable way (99.9 per cent at Z = 0.005 and 98.8 per cent at Z = 0.0005). This is due to the relatively low mass ratios at the onset of the mass transfer phase (i.e. typically q_ out < q_ crit, see right panel of Fig. <ref> for our moderate metallicity model, and Fig. <ref> for our low metallicity model). Typical mass ratios for systems with HG donors are q_ out = 0.4-0.8, while for CHeB donors they are q_ out = 0.3-0.5. The values for CHeB donors are smaller because the strong LBV winds that CHeB stars experience decrease the mass ratio over time.
Unstable mass transfer phases exclusively occur with CHeB donors in our simulations.
These low mass ratios also imply that expansion due to stellar evolution drives the TMT episodes <cit.>. Consequently, we expect TMT episodes with HG donors to last ∼10^4 yr, while TMT episodes with CHeB donors could last much longer, up to ∼10^5 yr.
§.§.§ Accretors of TMT episodes
In this subsection, we discuss the type of accretors of TMT episodes. The evolutionary stage of the inner binary plays a crucial role in the outcome of a TMT episode. If the inner binary comprises CHE MS stars, a TMT episode probably leads to a merger, as CHE binaries have very short periods and the majority of them are in contact at the onset of the TMT <cit.>. On the other hand, if the inner binary consists of BHs, a TMT episode is unlikely to lead to a merger by itself; however, in principle, it could be a source of (observable) X-ray emission <cit.>.
As shown in Table <ref>, the two most common types of accretors are MS-MS and BH-BH binaries. Only 11-15 per cent of CHE triples experience TMT with other types of accretors, such as an inner binary consisting of two helium stars, or a helium star with a MS or BH companion.
We highlight the relatively large fraction of BH-BH accretors (24-31 per cent of CHE triples experiencing TMT).
For classically evolving triples, mass transfer towards a BH-BH binary is highly unlikely. Firstly, in systems in which a TMT episode were to occur towards a BH-BH inner binary, the stars of the inner binary need to be more massive than the tertiary, such that they form BHs before the outer star fills its Roche-lobe. Secondly, the outer star has to be sufficiently close, otherwise it would remain detached throughout its evolution. This, in turn, puts a limit on the largest possible inner orbit, if the system is to remain dynamically stable. The maximum inner orbit for such systems is so small that classically evolving inner stars (which eventually expand) would initiate mass transfer and would most likely merge, which would reduce the triple to a binary and a tertiary mass transfer would never occur <cit.>. On the other hand, if the triple has CHE inner stars, the stars will not expand and not merge with one another, instead the system will evolve to contain a BH-BH binary by the time the tertiary fills its Roche-lobe.
§.§.§ Mass transferred towards the inner binary
We discuss the amount of mass that is transferred during the TMT episode. This is an important aspect, as the relative transferred mass determines angular momentum reservoir available to change the orbit of the inner binary.
Assuming that the entire envelope of the donor star is transferred towards the inner binary, the amount of transferred mass ranges between 1-40M_⊙ for BH-BH accretors and between 10-50M_⊙ for MS-MS accretors (see left panel of Fig. <ref> for Z = 0.005 and Fig. <ref> in section <ref> of the Appendix for Z = 0.0005).
Systems with MS-MS accretors typically receive a larger amount of mass than BH-BH accretors, because the tertiary star is typically more massive in the former case.
This is because, for the tertiary to fill its Roche lobe while the inner stars are still on the MS, the tertiary star needs to evolve faster and hence be more massive than the inner MS stars.
The relative transferred mass expressed as a fraction of the total mass of the inner binary (i.e. M_ transferred/M_ tot,inner) has the same maximum value (∼ 0.5) for both BH-BH and MS-MS accretors (see grey histogram in left panel of Fig. <ref>).
§.§.§ Formation of circumbinary disc
We discuss how common it is for TMT systems to develop a circumbinary disc at the onset of the mass transfer episode.
As explained in section <ref>, whether a TMT episode is accompanied by a formation of a circumbinary disc can have important consequences for the evolution of the inner orbit.
We find that about 63 per cent of all TMT systems develop circumbinary discs in our moderate metallicity model, while in the rest TMT proceeds in a ballistic fashion.
Systems in which a circumbinary disc is formed during the TMT phase typically have larger outer pericenters at the onset of the mass transfer (a_ p, out≈ 300-6000 R_⊙) than in those where TMT proceeds in a ballistic manner (a_ p, out≈ 100-600 R_⊙).
TMT with a circumbinary disc is more prevalent at lower metallicities. About 74 per cent of all TMT systems develop circumbinary discs at Z = 0.0005. This occurs because the ratio of the inner and the outer orbital separation decreases less by the onset of the mass transfer phase due to weaker stellar winds (see equation <ref>).
TMT episodes with inner BH-BH binaries are somewhat more likely to occur in a ballistic fashion than with MS-MS inner binaries. About 45 (23) per cent of TMT systems with BH-BH inner binaries do not develop circumbinary discs at Z = 0.005 (Z = 0.0005), while 32 (27) per cent of TMT episodes with MS-MS inner binaries occur in a ballistic fashion. This is mainly because the inner apocenter to outer pericenter ratios at the onset of TMT are typically higher for inner BH-BHs than for inner MS-MS binaries (see equation <ref>). This difference is due to Wolf-Rayet winds, supernova kicks and possible ZLK oscillations that BH-BH inner binaries experienced prior to the TMT episode.
§.§.§ Three-body dynamics prior to TMT
Three-body dynamics can increase the eccentricities of the inner binary. This can, for example, significantly decrease the coalescence time due to GWs <cit.>.
Three-body dynamics are almost always suppressed during the MS phase of the inner binaries due to the strong tides (see also section <ref>). Consequently, the inner orbits of TMT systems with MS-MS inner binaries are always circular at the onset of the mass transfer episode. On the other hand, this is no longer the case when the inner stars are in their post-MS.
In Fig. <ref>, we show the cumulative distribution of the inner binary eccentricities at the onset of the mass transfer phase of TMT systems with BH-BH accretors at Z = 0.005.
We see that systems without circumbinary discs tend to have eccentric inner orbits at the onset of mass transfer. The high eccentricities are caused by ZLK cycles during the post-MS evolution of the inner binary.
About 40 per cent of such triples have e_ in≳ 0.4 at this stage.
This is in contrast with the systems with circumbinary discs; about 90 per cent of the systems have eccentricities e_ in≲0.1. The difference is due to the smaller inner period to outer period ratios that systems without circumbinary discs have (see equation <ref>).
In our low metallicity model, high eccentricities at the onset of TMT are much less common (see Fig. <ref> in section <ref> of the Appendix).
For these systems the inner period to outer period ratio does not increase significantly because of the weak stellar winds.
§.§ Unbound systems
In this channel, one of the stars in the triple becomes unbound as a result of core-collapse.
We distinguish systems based on whether this occurs via PISN or via classical core collapse <cit.>. As shown in Table <ref>, PISN does not occur in our moderate metallicity model, whereas at Z = 0.0005, it becomes quite prevalent; about 84 per cent of the unbound systems occur due to PISN.
If the triple becomes unbound as a result of a classical core-collapse, we further distinguish whether it is due to the core-collapse occurring in the inner binary (97 per cent of all classical core-collapse systems at Z = 0.005 and 99 per cent at Z = 0.0005) or of the tertiary star (3 per cent at Z = 0.005 and 1 per cent at Z = 0.0005). As the inner binary consists of CHE stars, the inner stars have
large initial masses (i.e. M_ ZAMS≳ 30 M_⊙) and furthermore they develop more massive CO cores than their classically evolving counterparts. Therefore, they receive weak (if any) natal kicks when they form BHs according to our implemented natal kick prescription.
Yet weak natal kicks, or even completely symmetrical instantaneous mass losses due to neutrino losses <cit.>, can unbind the tertiary star, if the outer orbit is highly eccentric. We find that in systems in which one of the stars becomes unbound due to the core-collapse in the inner binary, the outer eccentricities are large; about 70 per cent of them have e_ out≥ 0.8. In the vast majority of the cases (about 99 per cent of such unbound systems), only the tertiary is ejected, while the inner binary remains bound.
If the triple becomes unbound due to the core-collapse of the tertiary star while the outer eccentricity is low, it is almost always the result of a strong natal kick. Consequently, most of such unbound systems have initial tertiary masses of M_ out,ZAMS≈ 8-25 M_⊙ (see also the discussion in section <ref>), as these systems are expected to receive the largest kicks according to the supernova prescription of <cit.>.
§.§ Systems which become dynamically unstable
These triples typically have very short initial outer pericenters (a_ p, out, ZAMS≈ 70-400 R_⊙) and therefore are very close to the stability limit at ZAMS. Such systems can transition to non-secular or non-hierarchical evolution, if a_ in/a_ out, e_ out or q_ out significantly increases during evolution <cit.>. Among CHE triples, there are primarily two processes that can trigger this change: stellar winds and core collapse.
If the relative wind mass loss rate (e.g. Ṁ/M) in the inner binary is higher than that of the tertiary star, a_ in/a_ out and q_ out will increase, which can prompt the triple to experience a dynamical instability <cit.>. 30 per cent of the systems of this channel destabilise due to stellar winds and the destabilisation occurs when the stars of the inner binary are in their post-MS phase. At this stage, the inner stars experience strong Wolf-Rayet winds, while the tertiary star is still on the MS with significantly lower mass loss rates.
In the remaining 70 per cent,
the instability sets in due to the collapse of the core of one of the stars. We find that this only occurs when BH formation takes place in the inner binary.
As noted in section <ref>, CHE stars typically form BHs via direct collapse, such that q_ out only increases slightly. Furthermore, the direct collapse
is expected to be accompanied by a weak Blaauw kick due to neutrino losses, such that a_ in/a_ out and e_ out only increase significantly if the inner or the outer pre-core-collapse orbits are eccentric, respectively.
The pre-core-collapse inner orbit is eccentric in 72 per cent of the systems of this channel, with the eccentricity caused by ZLK oscillations. In the remaining 28 per cent, three-body dynamics are not efficient in driving up the eccentricity because the mutual inclination is outside of the critical Kozai range <cit.>. Therefore, the core collapse occurs when the inner orbit is circular. These systems still become unstable during the BH formation, because either 1) a_ in/a_ out has already increased strongly due to stellar wind mass losses before the BH formation, or 2) the outer orbit is eccentric and the core collapse occurs while the tertiary star is near the outer pericenter (leading to a significant increase in e_ out).
The occurrence rate of this channel is strongly dependent on metallicity (3.5 per cent of all CHE triples at Z = 0.005 and 0.7 per cent at Z = 0.0005, see Table <ref>). This dependence is due to the reduced strength of stellar winds and ZLK oscillations (which are responsible for any eccentricity in CHE inner binaries) at lower metallicities.
§ THE ORIGIN OF EACH EVOLUTIONARY CHANNEL
In this section, we discuss the initial parameters of the triples from each evolutionary channel introduced in section <ref>. We find that initial parameters can be used as a proxy to determine the final evolutionary outcome of CHE triples.
In particular, the evolutionary outcome can be parameterised by the initial mass and orbital separation of the tertiary star. The parameters of the inner binary play a less important role in this regard, as the parameter space for CHE inner binaries is already quite reduced.
We illustrate this in the left panel of Fig. <ref> by showing an ensemble of CHE triples at Z = 0.005, in which the parameters of the inner binary are the same, but the mass and the orbital separation of the tertiary star are varied (therefore this grid represents only a small subset of the entire CHE population discussed in section <ref>).
The inner binary consists of two 70 M_⊙ stars on a circular initial orbit with a_ in, ZAMS = 22.4 R_⊙ (similarly to the example systems discussed in section <ref>). The initial tertiary mass ranges from 5 to 100M_⊙, while a_ out,ZAMS
ranges from 200 to 10^4 R_⊙.
§.§ Initial parameters of systems of different evolutionary channels
The majority of the triples
shown in the left panel of Fig. <ref> experience TMT episodes. Their initial outer orbital separations are relatively short and range roughly from 100 to 3300 R_⊙. The evolutionary phase of the inner stars at the onset of the TMT episode depends on the initial mass of the tertiary star. For the systems shown in the left panel of Fig. <ref>, the inner binary at the onset of TMT comprises BHs if M_ out, ZAMS≲59 M_⊙, helium stars if 59 M_⊙≲ M_ out, ZAMS≤ 70 M_⊙, and MS stars if M_ out, ZAMS≥ 70 M_⊙. The majority (53 per cent) of the TMT systems in the left panel of Fig. <ref> have BH-BH inner binaries. For the entire population of CHE triples presented in section <ref>, the same percentage is smaller (i.e. 31 per cent) at the same metallicity (see Table <ref>). As shown in Fig. <ref>, this quantity (i.e. the ratio of the number of TMT systems with BH-BH inner binaries and the number of all TMT systems)
scales proportionally to the initial mass of the secondary star in the inner binary. This means that TMT episodes occur more frequently with BH-BH accretors among CHE triples with more massive inner stars. This is due to our assumptions about the initial distribution of the triples (section <ref>). If the TMT occurs towards a BH-BH inner binary, the tertiary has to be initially the least massive in the triple. With increasing M_ 2,ZAMS, the fraction of triples for which M_ 2,ZAMS > M_ out,ZAMS increases because of our assumptions of a maximum initial stellar mass of M_ ZAMS, max = 100 M_⊙ and a flat outer mass ratio distribution.
In 15 per cent of the
triples shown in the left panel of Fig. <ref>, the inner binary merges before BH formation or before a TMT episode occurs. All such mergers in the grid occur between two helium stars, and are due to ZLK oscillations that arise when the stars of the inner binary evolve off the MS. The initial outer orbital separations in this channel are very short, i.e. 200 to 241R_⊙, while the tertiary masses range between 32 ≤ M_ out, ZAMS/M_⊙≤ 68. For lower tertiary masses (M_ out, ZAMS<32 M_⊙), the ZLK oscillations are not strong enough to boost the inner eccentricity and cause a mass transfer episode in the inner binary.
For larger tertiary masses (M_ out, ZAMS>70 M_⊙), the tertiary typically fills its Roche-lobe before the stars of the inner binary evolve off the MS. However, during the main sequence phase of the inner stars, the effects of ZLK cycles are quenched and consequently no mergers are prompted by three-body dynamics before the tertiary initiates a TMT episode.
Triples of the no post-MS MT channel in the left panel of Fig. <ref> have initial outer orbits a_out≳ 2000-3000 R_⊙.
Their initial tertiary mass is also typically outside of the range of ∼8-25 M_⊙, such that the system does not dissociate due to SN kicks.
As we show in the next subsection, three-body dynamics are not important for the evolution of these systems.
In the left panel of Fig. <ref>, we show the initial pericenter (a_ outer,ZAMS) distribution of the entire CHE triple population for each evolutionary channel at Z = 0.005. As can be seen, the range of initial pericenters is in agreement with those shown in Fig. <ref> for all channels except for the unbound systems (since for the unbound systems, the outer eccentricity plays a crucial role, as explained in section <ref>, and in the grid we assume circular outer orbits). This again confirms that the parameters of the tertiary star play the most important role in determining the evolutionary path of a CHE triple. As shown in the left panel of
Fig. <ref>, the range of a_ outer,ZAMS of systems with TMT episodes increases with decreasing metallicity. At lower metallicity, the stellar winds are weaker and consequently, the outer orbit widens less. Therefore, the maximum a_ outer,ZAMS at which the tertiary stars can still fill their Roche-lobes also increases with decreasing metallicity.
§.§ Initial parameters of triples with three-body dynamics
In the right panel of Fig. <ref>,
we show the maximum eccentricities that the inner binaries reach during their evolution (e_ in, max).
About 29 per cent of the triples shown in the right panel of Fig. <ref> reach e_ in, max≥0.4 due to ZLK cycles.
In all of these triples, the tertiary star eventually fills its Roche-lobe (although in some cases, the inner binary merges first).
For the systems shown in Fig. <ref>, ZLK cycles are efficient when a_ out, ZAMS≲1200 R_⊙ and M_ out, ZAMS≲70 M_⊙. When the outer orbit is a_ out, ZAMS≳1200 R_⊙, the ZLK cycles are quenched by various short range forces (e.g. precession caused by tides or general relativistic effects). If a_ out, ZAMS≲1200 R_⊙ but M_ out, ZAMS≳70 M_⊙, the tertiary star fills its Roche-lobe while the stars in the inner binary are still on the MS. The inner binaries of these triples do not develop high eccentricities, as ZLK cycles are quenched during the MS due to strong tides (see section <ref>), and TMT episodes with MS-MS accretors are expected to result in the merger of the inner binary (see section <ref>).
The right panel of Fig. <ref> also shows that e_ in, max does not decrease smoothly with increasing outer orbital separation; instead, it drops rather abruptly across a_ out, ZAMS≈1200 R_⊙.
Triples with a_ out, ZAMS≈1200 R_⊙ reach very large inner eccentricities (e_ in, max≈ 0.9), while at slightly larger orbital separations (i.e. a_ out, ZAMS≈1500 R_⊙) the ZLK cycles are completely quenched.
These above mentioned effects are qualitatively also true for the entire CHE triple population presented in section <ref> (see right panel of Fig. <ref>).
At Z = 0.005, the ZLK oscillations are only efficient, if a_ p, out, ZAMS≲ 1200 R_⊙.
This implies that three-body dynamics are only relevant for those triples in which the tertiary star would eventually fill its Roche-lobe (compare the right and left panels of Fig. <ref>). Consequently, if the tertiary in a CHE triple remains detached throughout its evolution, the evolution of the inner binary will almost always be dynamically decoupled from the tertiary star. If a_ p, out, ZAMS≲ 1200 R_⊙, a wide range of inner eccentricities is possible (e_ in, max
= 0-0.9) for all such a_ p, out, ZAMS. In this case, the value of e_ in, max is primarily determined by the mutual inclination of the triple <cit.>.
In our low metallicity model (Z = 0.0005), the maximum initial outer pericenter at which three-body dynamics are still relevant is lower than in our moderate metallicity model (right panel of Fig. <ref> in section <ref> of the Appendix). At such low metallicities, stellar winds do not widen the orbit of the inner binary significantly and thus the timescales of the ZLK cycles do not decrease as much as at Z = 0.005.
§ GRAVITATIONAL WAVES SOURCES
We now discuss the possible formation channels of GW sources that originate from CHE triples and their properties.
In section <ref> we predict the merger rate densities and compare them to those of GW sources from isolated CHE binaries. For this, we assume two test populations with different stellar multiplicity fractions.
One population is composed of only single and binary stellar systems (i.e. with stellar multiplicity fractions at ZAMS of f_ single = 0.3, f_ binary = 0.7, f_ triple = 0), while in the other triples dominate (f_ single = 0.06, f_ binary = 0.21, f_ triple = 0.73).
In sections <ref> - <ref> we discuss the properties of each GW formation channel from CHE triples and binaries. These predictions are based on the synthetic populations discussed previously, and in cases where the simulations are stopped
before the formation of a BH-BH binary, we predict the further evolution of CHE triples beyond the stopping conditions (Section <ref>) by applying simple assumptions (as detailed below).
The four main identified formation channels of GW sources within our CHE triple population are (see also Fig. <ref>):
* Effectively isolated inner binary:
For such triples, three-body dynamics is suppressed by various short-range forces
and the tertiary star remains detached throughout the entire evolution. The inner binary therefore evolves effectively as an isolated binary and the properties of these GW sources are indistinguishable from those of the CHE binary channel. There are two ways these systems can form: i) systems in which the tertiary star remains bound to the triple (the no post-MS MT channel, see section <ref>) and ii) systems in which the tertiary star becomes unbound from the triple (the unbound channel discussed in section <ref>). For the latter, we assume that the orbit of the inner binary is not affected by the tertiary becoming unbound from the triple system.
* TMT with a BH-BH accretor:
This channel comprises systems in which the tertiary star fills its Roche-lobe when the inner binary is a BH-BH binary.
The inner binary components do not coalesce during the TMT phase, but will merge afterwards due to GW emission.
In these systems, the tertiary star can affect the evolution of the inner binary in two major ways: via a TMT episode and via three-body dynamics (see section <ref>). In section <ref>, we introduced our assumptions regarding the evolution of the inner orbit during a TMT episode.
* TMT with a MS-MS accretor:
In this scenario, there are two sequential mergers taking place in the system
<cit.>.
First, the inner binary merges when the stars are still on the MS as a result of mass transfer from the tertiary to the inner binary. This reduces the triple to a binary.
We assume that the merger product of the inner binary evolves further in a classical way (as opposed to CHE). Consequently, the merger product expands and eventually fills its Roche-lobe and transfers mass to the initial tertiary star.
The orbit shrinks due to this second phase of mass transfer and as a result, a merging double compact object is formed.
The second phase of mass transfer is essential. Systems in which no mass transfer takes place after the inner binary merger might form detached BH-BH binaries but are too wide to merge due to GWs within the Hubble time.
We note that double MS mergers among CHE triples typically occur due to TMT episodes as three-body dynamics are suppressed during the MS phase.
* Dynamical mergers:
In the triples of this channel, ZLK oscillations are very efficient and drive up the inner eccentricities to e_ in≈ 0.6-0.9 after the stars of the inner binary have become BHs. Such systems merge due to GW emission within a few Myrs. The tertiary remains detached until the inner binary merges and therefore these triples belong to the no post-MS MT channel. As discussed in section <ref>, these systems are rare.
We ignore the possibility of a GW source forming in a CHE triple through a stellar merger that does not occur between two MS stars. Such mergers can occur due to TMT or three-body dynamics with (i) a helium star-MS binary or (ii) a double helium star binary. We justify the omission of the first type, as such mergers are relatively rare: they occur in 0.2-2 per cent of all CHE triples, depending on metallicity. For the second type, the merger product is a helium star; it is not expected to expand significantly and is unlikely to ever fill its Roche-lobe. Without a phase of mass transfer that leads to orbital shrinkage, the binary remains too wide to merge within a Hubble time. However, if the merger remnant can accrete matter during the TMT phase, it could regain a hydrogen-rich envelope and expand later in its evolution. For simplicity, we neglect this scenario.
§.§ Rates of GW mergers
In the population without triples, the predicted merger rate density is R_ merger = 44.2 Gpc^-3yr^-1 (see Table <ref>). This is about a factor of two higher than predicted by <cit.>, which we consider rough agreement given the simplicity of our rate calculation (see the discussion in Appendix <ref>). The total merger rate density of the population containing triples is R_ merger = 23 Gpc^-3yr^-1. This is about a factor of two lower than that of the population without triples. There are two reasons for this difference. Firstly, stellar mergers frequently occur in CHE triples, preventing the formation of compact BH-BH binaries. While all CHE binaries form BH-BH binaries, only about 60 (45) per cent of CHE triples form (inner) BH-BH binaries at Z = 0.005 (Z = 0.0005).
Secondly, the number of systems formed in the population with triples
is always lower per unit stellar mass formed than in the population without triples, as triple systems, on average, have larger total masses than binaries and/or single stars.
In the population with triples, about half of the GW mergers from formation channels involving CHE originate from triples.
The role of the tertiary is negligible for 69 per cent of GW progenitors from CHE triples. In the remaining 31 per cent, the evolution of the inner binary is affected by the tertiary star via TMT and/or three-body dynamics.
§.§ Isolated binaries
At Z = 0.005, about 68 per cent of the CHE binary population forms a
BH-BH binary that merges within the Hubble time, while at Z = 0.0005, all CHE binaries merge due to GWs within the age of the universe. In our moderate metallicity model, the delay times of these BH-BH binaries range from 3 to 50 Gyr (and therefore the delay times of GW sources range from 3 to 13.5 Gyr).
In our low metallicity model, the delay times are considerably shorter, ranging roughly from 100 to 600 Myr. At Z = 0.005, only those binaries merge which were in contact during their MS phase. At Z = 0.0005, about 97 per cent of all GW progenitors were in contact during their MS phase. Since we assume such binaries equalise in mass, we predict that the vast majority of GW sources from this population consist of equal mass black hole binaries (in broad agreement with <cit.>).
The masses of the merging binary black
holes from this channel range from 20 to 42 M_⊙ at Z = 0.005 and 33 to
54 M_⊙ at Z = 0.0005.
§.§ Effectively isolated inner binaries
This is the dominant channel among CHE triples, with a predicted merger rate density of 8.8 Gpc^-3yr^-1. At Z = 0.005 (Z = 0.0005), about 19 (12) per cent of all CHE systems (i.e. CHE binaries and CHE triples, see section <ref>) are expected to form GW sources via this channel. In 53 per cent of the GW progenitors of this channel, the tertiary star becomes unbound by the time both stars in the inner binary form BHs. This percentage drops to 38 per cent at Z = 0.0005.
The demographics of this channel are nearly indistinguishable from the isolated binary population.
The merger efficiency of this channel, which we define as the number of GW sources as a fraction of
the BH-BH (inner) binaries formed via a given channel, is 68 per cent.
Unsurprisingly,
this is the same as the merger efficiency of the isolated CHE binary channel.
Similarly to the CHE binary case, the majority of the inner binaries of these triples were also in contact during their MS phase and therefore this channel also produces overwhelmingly equal mass mergers.
§.§ TMT with a BH-BH accretor
This is the dominant formation channel in which the evolution of the inner binary is affected by the tertiary star.
The predicted merger rate density is R_ merger = 3.8 Gpc^-3yr^-1, which accounts for about 16 per cent of all GW mergers from CHE systems. About 10 per cent of all CHE systems form merging binary BHs via this channel.
With our simplistic models of TMT (see subsection <ref>), we predict that the outer orbit widens as a result of the TMT episode for all triples considered in this study.
In the lower panel of Fig. <ref>, we show how the outer pericenter changes after the mass transfer phase for
triples experiencing TMT with a BH-BH inner binary accretor for our moderate metallicity model (and in the lower panel of Fig. <ref> for our low metallicity model).
The orbital separations widen typically by a factor 1.5-2.
Even if the inner orbit remains unchanged due to TMT, the outer orbit widens so much that three-body dynamics typically become negligible after the TMT episode for the majority of these triples.
For example, at Z = 0.005, in those TMT systems in which ZLK oscillations are effective prior to the mass transfer event, 70 per cent of the inner binaries become decoupled from the tertiary star after the TMT episode. If the evolution of the BH-BH inner binary is decoupled from the tertiary, its orbital evolution is solely determined by the emission of GWs (and therefore the coalescence time can be determined according to <cit.>; otherwise, we use equation <ref>).
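For reference, a minimal sketch of the GW-driven coalescence time of a circular, detached BH-BH binary (the standard quadrupole-order expression of Peters 1964) is shown below. It is illustrative only, not necessarily the exact prescription used in our population synthesis calculations, and the example masses and separation are hypothetical values comparable to the typical inner binaries of this channel.

    import numpy as np

    G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
    C = 2.998e8        # speed of light [m s^-1]
    MSUN = 1.989e30    # solar mass [kg]
    RSUN = 6.957e8     # solar radius [m]
    YR = 3.156e7       # year [s]

    def t_coalescence_circular(m1_msun, m2_msun, a_rsun):
        """GW coalescence time of a circular binary (Peters 1964, quadrupole order)."""
        m1, m2 = m1_msun * MSUN, m2_msun * MSUN
        a = a_rsun * RSUN
        beta = (64.0 / 5.0) * G**3 * m1 * m2 * (m1 + m2) / C**5
        return a**4 / (4.0 * beta)   # [s]

    # Hypothetical example: two 30 Msun BHs at a separation of 32 Rsun
    # (comparable to the peak of the separation distribution quoted in the text)
    # coalesce within a few Gyr.
    print(t_coalescence_circular(30.0, 30.0, 32.0) / YR / 1e9, "Gyr")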
As noted in section <ref>, we make different assumptions about the evolution of the inner orbit based on whether a circumbinary disc is formed during TMT. We therefore discuss the properties of GW sources from these two subtypes separately.
§.§.§ Accretion through a circumbinary disc
The predicted merger rate of this channel is 2.4 Gpc^-3yr^-1.
The merger efficiency of this channel is just 6 per cent higher than the merger efficiency of isolated binaries. The slight increase is due to the small number of eccentric inner binaries at the onset of the mass transfer (∼10 per cent of systems undergoing TMT with BH-BH accretors and circumbinary discs have e_ in>0.4, see Fig. <ref>).
The small difference is not surprising as we have assumed here that the orbit of the inner binary does not change due to circumbinary disc accretion. However, if circumbinary disc accretion leads to a significant increase (decrease) in the inner period, the compact object merger fraction decreases (increases) significantly as well. Clearly, better models are required to understand circumbinary accretion of a BH binary from a mass transferring tertiary star.
§.§.§ Ballistic accretion
The properties of these GW sources depend on how the inner binary evolves due to TMT. If we simplistically assume that the inner orbit does not change (i.e. Scenario 1, see section <ref>), then
the merger rate density of this channel in the local universe is R_ merger = 1.4 Gpc^-3yr^-1. In this case, about 3.8 (2.3) per cent of all stellar systems containing a CHE binary form GW sources via this channel at Z = 0.005 (Z = 0.0005). The merger efficiency of this channel is 75 per cent at Z = 0.005, which is slightly higher than that of the CHE binary population (68 per cent).
As discussed in section <ref>, a considerable fraction of these sources have high eccentricities, namely, 48 per cent with e_ in≳ 0.4 at Z = 0.005 and 10 per cent at Z = 0.0005.
This results in shorter delay times and more mergers with respect to the isolated CHE binary channel (top left panel of Fig. <ref>).
If the orbital evolution can be described by equation <ref> (i.e. Scenario 2, see section <ref>), then the inner pericenters of BH-BH binaries decrease by 1-3 orders of magnitude due to the TMT episode, depending on the efficiency parameter α_ TMT. In this case, all inner binaries become dynamically decoupled from the tertiary star after the TMT episode. As shown in the left panel of Fig. <ref>, the peak of the orbital separation distribution shifts from 32 R_⊙ to 25, 5 and 1 R_⊙ with α_ TMTλ_ TMT = 5, 0.5 and 0.05, respectively.
With such short periods, nearly all (typically ≳ 99 per cent) of the inner binaries eventually merge. However, none of the inner binaries merge during the mass transfer itself; instead, they merge due to GW emission afterwards. In Fig. <ref>, we show that the typical delay times in Scenario 2 are also orders of magnitude shorter than those of isolated CHE binaries.
With α_ TMT = 0.05, the delay times of these GW sources are dominated by the stellar evolution. Such timescales could make TMT episodes relevant in young clusters in which star formation is still active. Even when assuming weaker friction exerted by the transferred mass (i.e. α_ TMTλ_ TMT = 5), resulting in the smallest orbital shrinkage in our models, most of the BHs merge within a few hundred Myr at Z = 0.005.
Despite the higher merger efficiency,
the predicted merger rate density for Scenario 2 is considerably lower (i.e. R_ merger = 0.5 Gpc^-3yr^-1) than in Scenario 1.
This is due to the extremely short delay times, implying the progenitor stars must have formed recently, when the cosmic star formation rate is low <cit.>.
As the cosmic star formation rate is expected to increase strongly from z=0 to z=2 <cit.>, we expect the merger rate density of this channel to be significantly higher at z≈2 than at z = 0. This would make these sources more relevant for third-generation GW detectors.
We mention two interesting aspects of this channel. Firstly, depending on the efficiency parameter of the TMT episode, these systems could be in the LISA frequency band <cit.> during the mass transfer phase. In the right panel of Fig. <ref>, we show the frequency at which the BH-BH binaries emit GWs after the mass transfer episode. With α_TMT = 0.5, about half, and with α_TMT = 0.05, all of our systems enter the mHz regime during the mass transfer phase. The evolution through the LISA frequency range would be primarily driven by gas dynamics instead of GW emission <cit.>. Such sources would be detectable by LISA, if the corresponding luminosity distances are not larger than ∼10 kpc and ∼10^4 kpc in case of α_TMT = 0.5 and α_TMT = 0.05, respectively <cit.>.
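To illustrate the quoted frequencies, the sketch below estimates the dominant GW frequency of a circular inner binary from Kepler's third law (f_GW = 2 f_orb); eccentric binaries would instead emit most of their power at a higher harmonic. The masses and post-TMT separations used here are hypothetical examples comparable to the values quoted above, not the calculation used to produce Fig. <ref>.

    import numpy as np

    G = 6.674e-11      # [m^3 kg^-1 s^-2]
    MSUN = 1.989e30    # [kg]
    RSUN = 6.957e8     # [m]

    def f_gw_circular(m1_msun, m2_msun, a_rsun):
        """Dominant (n = 2) GW frequency of a circular binary: twice the orbital frequency."""
        m_tot = (m1_msun + m2_msun) * MSUN
        a = a_rsun * RSUN
        f_orb = np.sqrt(G * m_tot / a**3) / (2.0 * np.pi)
        return 2.0 * f_orb   # [Hz]

    # Hypothetical post-TMT separations of 1 and 5 Rsun for a 30+30 Msun BH binary
    for a in (1.0, 5.0):
        print(a, "Rsun ->", f_gw_circular(30.0, 30.0, a) * 1e3, "mHz")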
Secondly, a TMT episode could be accompanied by a detectable electromagnetic signal, as the transferred mass is expected to heat up when it reaches the inner BH binary. If the delay time between this signal and the GW merger is within the lifetimes of typical observing missions, then the GW merger could be associated with this electromagnetic counterpart <cit.>.
We find that the time between the end of the TMT episode and the GW merger in the case of α_ TMTλ_ TMT = 0.05 is shorter than a year for 6 per cent of these sources at Z = 0.0005. This implies that in this case an electromagnetic counterpart could be detected shortly before the GW merger. This is in contrast with the possible electromagnetic signatures associated with BH mergers in AGN discs, where the electromagnetic counterpart would occur after the GW merger <cit.>.
§.§ TMT with a MS-MS accretor
This channel has a low merger rate density of R_ merger = 0.2 Gpc^-3yr^-1. Even though 25 per cent of all systems containing a CHE binary experience a double MS merger in the inner binary at Z = 0.005, only 1.1 per cent of them form merging binary BHs. This low merger efficiency is due to two reasons. Firstly, if the mass transfer episode between the merger product and the tertiary star proceeds in a dynamically unstable way, the process mostly ends in a stellar merger and no double compact binary is formed. Secondly, if the same mass transfer instead proceeds in a stable way, the binary BH typically has too wide an orbit to merge within the Hubble time. We note, however, that these predictions are sensitively dependent on uncertain stellar physics (such as the efficiency of the CEE phase, the mass-loss radius exponent and the binding energy of stars with M_ ZAMS≳ 100 M_⊙). We also note that the merger efficiency is significantly higher in our low metallicity model: 12.3 per cent of triples with a double MS merger form merging binary BHs. As the merger efficiency seems to increase with decreasing metallicity, and we only calculate the merger rate density based on two metallicities, it is likely that we underestimate the merger rate density for this channel (see the more detailed explanation in section <ref> of the Appendix).
In the case of a TMT episode with a MS-MS accretor, we always assume that the inner binary merges due to the mass transfer phase. We justify this assumption by the fact that CHE MS-MS binaries tend to be on very close orbits (∼20-30 R_⊙) compared to their stellar radii (∼5-10R_⊙). A significant fraction of them are already in contact.
Furthermore, these stars may swell up as a result of accretion, and this type of mass transfer episode is likely to end in a merger <cit.>.
The merger product is a rejuvenated MS star with a mass of M_1+2 = M_1 + M_2. This means that we neglect any accretion during TMT and we assume a fully conservative merger without mass outflows. At Z = 0.005, the mass of the inner binary merger remnant M_ 1+2 ranges from 65 to 188 M_⊙. The distribution has a peak around ∼ 100 M_⊙. At Z = 0.0005, the mass of the merger product ranges from 70 M_⊙ to 190 M_⊙.
The orbital separations after the TMT episode are shown in the upper panel of Fig. <ref> (and Fig. <ref> for our low metallicity model). We can see that the outer orbit widens typically by a factor of 1.7-2.5 and the orbital separations range from 150 to 6800 R_⊙. While the ranges are similar at both metallicities, at Z = 0.0005, the typical
orbital separations are significantly shorter.
Most of the systems experience a second phase of mass transfer after the TMT episode (62 per cent at Z = 0.005 and 96 per cent at Z = 0.0005) and typically the donor star is on the Hertzsprung gap during this second phase of mass transfer (about 99 per cent at Z = 0.005, and about 86 per cent at Z = 0.0005). More evolved donor stars are not expected to occur frequently, as the onset of CHeB occurs at a cooler effective temperature with increasing mass and is followed by a less significant subsequent radial expansion <cit.>.
In particular for M_ ZAMS≳ 100 M_⊙, stars are predicted to expand negligibly after the CHeB, even at low metallicities.
Regarding the stability of the mass transfer between the merger remnant and the initial tertiary, we find that it occurs in a dynamically unstable manner in 66 (30) per cent of the cases at Z = 0.005 (Z = 0.0005).
We assume that CE phases with a donor star on the Hertzsprung gap result in a merger, following <cit.> <cit.>.
At both metallicities, binary BHs are only produced when the second phase of mass transfer proceeds in a stable manner. Furthermore, in order to form a GW source, the orbit needs to be compact enough (a_ out≲ 1000 R_⊙) at the onset of the second mass transfer event. This only occurs in about 5 per cent (30 per cent) of systems with stable mass transfer at Z = 0.005 (Z = 0.0005).
This is the only GW formation channel of CHE triples that yields significantly different mass and mass ratio distributions than the CHE binary channel. The masses of the merging binary BHs range from 16 to 27 M_⊙ at Z = 0.005 and 17 to 54 M_⊙ at Z = 0.0005. The mass ratios range from 0.7 to 0.8 at Z = 0.005 and 0.5 to 1.0 at Z = 0.0005. All other channels produce merging binary BHs with masses that range from 20 to 42 M_⊙ at Z = 0.005 and 33 to 54 M_⊙ at Z = 0.0005. The vast majority (≳ 90 per cent) of these systems have equal masses, as the inner binaries had been in contact during their MS phase.
§.§ Dynamical mergers
The merger rate density of this channel is very low, R_ merger = 0.05 Gpc^-3yr^-1.
The delay times of these systems are very short and range from 4 to 20 Myrs. Similarly to the GW progenitors that have experienced TMT episodes with ballistic accretion, the short delay times imply that the merger rate density could be about an order of magnitude larger at z ≈ 2. About 25 per cent of these systems have eccentricities e_ in≳ 10^-4 when the characteristic GW frequency reaches 10 Hz, making eccentricities detectable by third-generation detectors <cit.>.
For all systems, the tertiary star is still on the MS when the inner binary merges due to GWs, with outer pericenters of a_p,out≈ 120-790R_⊙. It is therefore expected that the initial tertiary star will eventually fill its Roche-lobe, once it evolves off the MS.
§ CONCLUSION
We studied the evolution of hierarchical triples with CHE stars in the inner binary with a rapid population synthesis approach. We performed simulations with the triple population synthesis code at two representative metallicities: Z = 0.005 and Z = 0.0005.
We showed that the evolution of CHE stars can be altered by the presence of a tertiary star in several ways.
This can potentially lead to the formation of a number of diverse and unique astrophysical phenomena, e.g. TMT phases with BH-BH accretors, highly eccentric mergers of helium stars, and mergers of binary BHs with very short (a few Myr) delay times.
To summarise our main findings:
* Tertiary mass transfer (TMT) episodes are common among CHE triples:
Unlike in classically evolving hierarchical triples, we predict that a TMT phase is very common among CHE triples. The tertiary star fills its Roche-lobe in about 50 per cent of all triples with CHE inner binaries.
The same fraction for classically evolving systems is predicted to be a few percent at best <cit.>.
We find that the mass transfer episodes initiated by the tertiary star typically occur in a dynamically stable way.
* BH-BH inner binaries that accrete from the tertiary star are also common: About 31 (24) per cent of the tertiary-driven mass-transfer episodes occur with BH-BH accretors at Z = 0.005 (Z = 0.0005).
Previous population synthesis studies suggest that such a scenario is probably not possible in triples with classically evolving stars <cit.>.
Therefore, mass transfer towards a BH-BH inner binary represents a unique scenario for triples (or higher-order multiples) with CHE stars in the inner binaries. An exciting prospect would be a possible EM counterpart from such an event <cit.>.
* Importance of three-body dynamics: ZLK oscillations can be effective for CHE triples, if the stars in the inner binary have evolved off the MS (otherwise precession due to strong tides quenches ZLK cycles) and if the initial outer pericenter is a_ p, outer,ZAMS≲ 2000 R_⊙ (otherwise ZLK cycles are quenched by various short range forces throughout the entire evolution of the inner binary). ZLK oscillations are only present in those CHE triples in which the outer pericenter is so short that the tertiary star would eventually fill its Roche-lobe. The inner eccentricities of these systems can reach values up to e_ in, max∼0.9
(left panel of Fig. <ref>).
The effects of three-body dynamics are negligible for those CHE triples in which the triple remains detached. In this case, the inner binary evolves effectively as an isolated binary.
* Three-body dynamics can drive the inner binary to a stellar merger:
In about 3 per cent of CHE triples, the inner binary merges before BH-BH formation. The most common type is a merger of a double helium star binary
that comes into contact in a highly eccentric orbit (Table <ref>).
* CHE triples form GW sources efficiently: About 30 (24) per cent of the CHE triple population forms BH binaries that merge due to GWs within Hubble time at Z = 0.005 (Z = 0.0005). We predict a merger rate density of GW sources from CHE triples of R_ merger≈ 12 Gpc^-3yr^-1 (Table <ref>). We also predict that about half of the GW sources from CHE systems originate from triples. In 69 per cent of all GW sources from CHE triples, the inner binary evolves effectively as an isolated binary and therefore its properties are indistinguishable from those of CHE binaries. In the remaining 31 per cent, the evolution of the GW progenitor is affected by three-body dynamics and/or TMT episodes.
* Tertiary mass transfer and three-body dynamics could lead to the formation of BH-BH binaries that merge within Myrs:
The vast majority of those GW progenitors of CHE triples, in which the evolution of the inner binary is not decoupled from the tertiary object, experience a TMT episode with a BH-BH inner binary. In this case, we model the evolution of the inner binary during the TMT phase with energy arguments <cit.> and with different assumptions on how efficiently the transferred mass shrinks the orbit of the inner binary.
We find typical delay times for these GW sources of a few hundred Myr and a few Myr in our model variations with the least and the most orbital shrinkage, respectively.
§ ACKNOWLEDGEMENTS
SdM acknowledges Fabio Antonini, Adrian Hamers and Lieke van Son for insightful discussions.
AD acknowledges a travel grant from the HPC3 Europa programme for providing computational resources at the Snelius supercomputer in the Netherlands and acknowledges support from API for allowing an extended visit.
Computational work was performed on the Snelius supercomputer in the Netherlands and on the University of Birmingham's BlueBEAR HPC service.
ST acknowledges support from the Netherlands Research Council NWO (VENI 639.041.645 and VIDI 203.061 grants).
SdM acknowledges funding by the Netherlands Organization for Scientific Research (NWO) as part of
the Vidi research program BinWaves with project number 639.042.728.
§ DATA AVAILABILITY
The data underlying this article will be shared on reasonable request to the corresponding author.
§ ADDITIONAL FIGURES
In this section, we present the low metallicity model (i.e. Z = 0.0005) counterparts of some of the figures presented in the main text. In Fig. <ref>, we show the mass ratios at the onset of TMT and the amount of (relative) mass transferred towards the inner binary. In Fig. <ref>, we show the cumulative distribution of eccentricities at the onset of the mass transfer phase for CHE triples that experience TMT. In Fig. <ref>, we show the M_ 2,ZAMS distribution of TMT sources, distinguishing them based on the evolutionary phase of the inner binary. In Fig. <ref>, we show the distribution of initial outer pericenters of CHE triples, distinguishing systems based on the maximum inner eccentricity reached during evolution (left panel) and based on the evolutionary channel (right panel). In Fig. <ref>, we show the outer pericenter before and after the TMT episode for systems with MS-MS inner binaries (upper panel) and BH-BH inner binaries (lower panel) at the onset of the mass transfer phase.
§ CALCULATION OF BIRTH AND EVENT RATES
Throughout the paper, we estimate the:
* Formation efficiency (equation <ref>)
* Birth rate density (equation <ref>)
* Merger rate density (equation <ref>)
for each identified evolutionary channel. In this section, we discuss in detail how we determine these quantities.
(i) Formation efficiency: The formation efficiency expresses the number of ZAMS stellar systems formed that will evolve according to a specific evolutionary channel as a fraction of all ZAMS stellar systems formed. We calculate this quantity as:
ϵ_ formation = f_ pm·N_ channel/N_ simulated,
where N_ channel is the number of simulated systems that evolve according to the channel of interest, N_ simulated is the total number of sampled systems, and f_ pm is the portion of the simulated parameter space with respect to the complete parameter space, that is:
f_ pm = f_ triple· f_ M_ 1, ZAMS· f_ q, in· f_ q, out· f_ a, in· f_ a, out,
where f_ triple is the assumed triple fraction, f_ M_ 1, ZAMS is the fraction of the simulated parameter space of primary masses:
f_ M_ 1, ZAMS = ∫_20 M_⊙^100 M_⊙ M_ 1, ZAMS^-2.3 dm/∫_0.08 M_⊙^0.5 M_⊙ M_ 1, ZAMS^-1.3 dm + ∫_0.5 M_⊙^100 M_⊙ M_ 1, ZAMS^-2.3 dm,
where we assumed that the absolute minimum stellar mass is M_ ZAMS,min = 0.08 M_⊙ and the absolute maximum stellar mass is M_ ZAMS,max = 100 M_⊙, and as explained in section <ref>, we sample primary masses in the range of 20-100M_⊙.
The fraction of the simulated parameter space of inner mass ratios is:
f_ q, in = 1.0-0.7/1.0-0.0,
since the distribution of (inner and outer) mass ratios is assumed to be uniform. In equation <ref>, we assume that inner mass ratios of hierarchical triples have an interval of (0,1] and we sample from the interval of [0.7,1].
The fraction of the simulated parameter space of outer mass ratios is
f_ q, out = 1.0-0.1/1.0-0.0,
where we assume that outer mass ratios triples have an interval of (0,1] and we sample from the interval of [0.1,1].
The fraction of the simulated parameter space of inner semimajor axis is:
f_ a, in = log_10(40 R_⊙)-log_10(14 R_⊙)/log_10(10^5 R_⊙)-log_10(14 R_⊙),
since the distribution of (inner and outer) semimajor axes is assumed to be uniform in logarithmic space. We assume that the inner semimajor axes of all triples range from 14 R_⊙ to 10^5 R_⊙ and we sample from the interval of [14,40] R_⊙.
Finally, the fraction of the simulated parameter space of outer semimajor axis is:
f_ a, out = log_10(10^5 R_⊙)-log_10(10^2 R_⊙)/log_10(10^5 R_⊙)-log_10(10^2 R_⊙),
where we assume that the outer semimajor axes of all triples range from 10^2 R_⊙ to 10^5 R_⊙ and we sample from the entire interval. Equation <ref> for channels involving isolated binaries reduces to f_ pm = f_ binary· f_ M_ 1, ZAMS· f_ q, in· f_ a, in.
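As a concrete illustration, the sketch below evaluates f_ pm for the triple-dominated test population and the corresponding formation efficiency of a hypothetical channel, following the expressions above. The power-law integrals are evaluated exactly as written (i.e. without IMF continuity coefficients), and the channel counts are made-up placeholders rather than simulation results.

    import numpy as np

    def powerlaw_integral(exponent, lo, hi):
        """Integral of M^exponent dM between lo and hi (exponent != -1)."""
        p = exponent + 1.0
        return (hi**p - lo**p) / p

    # Fraction of the primary-mass parameter space (equation above)
    f_m1 = powerlaw_integral(-2.3, 20.0, 100.0) / (
        powerlaw_integral(-1.3, 0.08, 0.5) + powerlaw_integral(-2.3, 0.5, 100.0))

    f_triple = 0.73                                    # triple-dominated test population
    f_q_in = (1.0 - 0.7) / (1.0 - 0.0)                 # inner mass ratio range sampled
    f_q_out = (1.0 - 0.1) / (1.0 - 0.0)                # outer mass ratio range sampled
    f_a_in = (np.log10(40.0) - np.log10(14.0)) / (np.log10(1e5) - np.log10(14.0))
    f_a_out = 1.0                                      # full outer separation range sampled

    f_pm = f_triple * f_m1 * f_q_in * f_q_out * f_a_in * f_a_out

    # Hypothetical channel counts (placeholders)
    n_channel, n_simulated = 3000, 100000
    eps_formation = f_pm * n_channel / n_simulated
    print(f_pm, eps_formation)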
(ii) Birth rate density: The birth rate density gives the number of ZAMS stellar systems formed per unit time and volume in the local universe (that is, at redshift z≈0) that will evolve according to a specific channel.
We calculate the birth rate of systems in a certain channel as:
R_ birth = ∑_Z_iSFRd^*(Z_i,z_ ZAMS = 0)/M̃·ϵ_ formation,
where we sum over the two metallicity values at which we performed our simulations: Z = 0.005 and Z = 0.0005. SFRd^*(Z, z) is defined as the metallicity-specific star formation rate density, and it gives the stellar mass formed within a metallicity range Z_ low≤ Z ≤ Z_ high at redshift z:
SFRd^*(Z, z) = ∫_Z_ low^Z_ high f_ met(Z, z) SFRd(z) dZ,
where Z_ low and Z_ high are 0.0015 (10^-10) and 0.01 (0.0015), respectively, for our model with Z = 0.005 (Z = 0.0005). Here, Z = 0.0015 is the midpoint between Z = 0.005 and Z = 0.0005 in logarithmic space, Z = 0.01 is the highest metallicity at which CHE binaries can still form GW sources at appreciable numbers and Z = 10^-10 is an arbitrarily chosen, extremely low metallicity value. In equation <ref>, SFRd(z) is the star formation rate density, and we use the model from <cit.>:
SFRd(z) = 0.01·(1 + z)^2.6/1 + ((1+z)/3.2)^6.2 M_⊙yr^-1Mpc^-3,
and f_ met(Z,z) is the metallicity distribution of the stellar mass formed. This quantity is also redshift dependent and assumed to follow a log-normal distribution <cit.>:
f_ met(Z,z) = 1/σ√(2π)exp(-(log_10(Z) - μ(z))^2/2σ^2),
with a standard deviation of σ = 0.5 and with a redshift-dependent mean metallicity μ(z) = log_10(Z_⊙· 10^(0.153 - 0.074z^1.34)) - 0.5ln(10)σ^2.
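A minimal numerical sketch of these ingredients is given below. It evaluates SFRd(z) and approximates the metallicity-specific star formation rate density by treating f_met as a log-normal distribution per unit log_10(Z), so that the mass fraction formed in a metallicity bin follows from the normal CDF; this normalization convention and the value Z_⊙ = 0.014 are assumptions made for the illustration.

    import numpy as np
    from scipy.stats import norm

    ZSUN = 0.014     # assumed solar metallicity
    SIGMA = 0.5      # dispersion of the metallicity distribution [dex]

    def sfrd(z):
        """Star formation rate density [Msun yr^-1 Mpc^-3] (equation above)."""
        return 0.01 * (1.0 + z)**2.6 / (1.0 + ((1.0 + z) / 3.2)**6.2)

    def mu_log10z_met(redshift):
        """Redshift-dependent mean of log10(Z) (equation above)."""
        return (np.log10(ZSUN * 10.0**(0.153 - 0.074 * redshift**1.34))
                - 0.5 * np.log(10.0) * SIGMA**2)

    def sfrd_metallicity_specific(redshift, Z_low, Z_high):
        """Stellar mass formed per unit time and volume with Z_low <= Z <= Z_high."""
        mu = mu_log10z_met(redshift)
        frac = norm.cdf(np.log10(Z_high), mu, SIGMA) - norm.cdf(np.log10(Z_low), mu, SIGMA)
        return frac * sfrd(redshift)

    # Example: the Z = 0.005 bin (0.0015 < Z < 0.01) at redshift zero
    print(sfrd_metallicity_specific(0.0, 0.0015, 0.01))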
Finally, the term M̃ in equation <ref> is the average mass of all stellar systems and we calculate this as:
M̃=f_ single·M̃_ 1,ZAMS +
f_ binary·∫_0^1 (1 + q_ in) M̃_ 1,ZAMS dq_ in +
f_ triple·∫_0^1∫_0^1 (1 + q_ in) (1 + q_ out) M̃_ 1,ZAMS dq_ in dq_ out,
where we have defined M̃_ 1,ZAMS, as the average mass of the primary, i.e.:
M̃_ 1,ZAMS = ∫_0.08 M_⊙^100 M_⊙ M_ 1,ZAMS f_ IMF dM_ 1,ZAMS
where f_ IMF is the normalised, piecewise continuous initial mass function of <cit.>, f_ single and f_ binary are the single and binary fractions, respectively. We neglect higher order systems, such that f_ triple = 1 -f_ single - f_ binary.
We note that we also assume that the binary and triple fractions are independent of the primary mass of the system <cit.>. Assuming flat mass ratio distributions for both the inner and outer binary, equation <ref> becomes:
M̃ = (f_ single + 3/2 f_ binary + 9/4· f_ triple) ·M̃_ 1,ZAMS,
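The sketch below evaluates M̃_ 1,ZAMS and M̃ numerically, assuming a two-segment power-law IMF with slopes -1.3 and -2.3, a break at 0.5 M_⊙ and mass limits of 0.08-100 M_⊙ (matching the integrals used for f_ M_ 1, ZAMS above); the continuity constant at the break is my assumption about the "normalised, piecewise continuous" IMF. The multiplicity fractions used in the example are those of the two test populations introduced above.

    from scipy.integrate import quad

    M_MIN, M_BREAK, M_MAX = 0.08, 0.5, 100.0

    def imf_unnormalised(m):
        """Two-segment power-law IMF, continuous at the break (assumed form)."""
        if m < M_BREAK:
            return m**-1.3
        return M_BREAK * m**-2.3   # continuity: M_BREAK**-1.3 = c * M_BREAK**-2.3 -> c = M_BREAK

    norm_const = quad(imf_unnormalised, M_MIN, M_MAX, points=[M_BREAK])[0]
    mean_m1 = quad(lambda m: m * imf_unnormalised(m), M_MIN, M_MAX, points=[M_BREAK])[0] / norm_const

    def mean_system_mass(f_single, f_binary, f_triple):
        """Average mass of all stellar systems (equation above), assuming flat mass ratio distributions."""
        return (f_single + 1.5 * f_binary + 2.25 * f_triple) * mean_m1

    print(mean_m1)                             # ~0.57 Msun under these assumptions
    print(mean_system_mass(0.3, 0.7, 0.0))     # population without triples
    print(mean_system_mass(0.06, 0.21, 0.73))  # population dominated by triples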
The term SFRd^*(Z_i,z_ ZAMS = 0)/M̃ in equation <ref> then gives the average number of stellar systems formed per unit time and volume at redshift z = 0 in a metallicity range of Z_ i,low≤ Z ≤ Z_ i, high. Multiplying this term by ϵ_ formation gives the birth rate density of systems of a given formation channel in the above mentioned metallicity range for a given star formation history model. Summing these values over all of our metallicity bins therefore yields the total birth rate of systems in a specific channel.
(iii) Merger rate density: The merger rate density gives the rate density of a given astrophysical event (such as GW transients from coalescing double compact objects) in the local universe. The main difference between the birth rate and the merger rate is due to the considerable delay time between the formation of the stellar system and the occurrence of the GW merger. For example, if the delay time for a GW source at z = 0 is t_ delay = 10.5 Gyrs, then the redshift at ZAMS of its progenitor system is z_ ZAMS≈ 2, at which the star formation rate density is an order of magnitude higher with respect to its value at z = 0 <cit.>.
We determine the merger rate density at z = 0 as:
R_ event = ∑_Z_i∫_0 Gyr^13.5 GyrSFRd^*(Z_i,z_ ZAMS(t_ delay))/M̃·ϵ̃(t_ delay) dt_ delay,
where z_ ZAMS is the redshift at which the progenitor of a given astrophysical event is formed (and therefore it is a function of delay time), ϵ̃ is the number of astrophysical events occurring at z = 0 with a delay time of t_ delay as a fraction of all ZAMS stellar
systems formed at z = z_ ZAMS. We determine z_ ZAMS for a given delay time via the standard relation for lookback time:
t_ delay = 1/H_0∫_z = 0^z_ ZAMSdz'/[(1 + z') E(z')],
where E(z) = √(Ω_M(1+z)^3 + Ω_λ), with Ω_M = 0.3, Ω_λ = 0.7 and H_0 = 70 km s^-1 Mpc^-1.
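The sketch below inverts this relation numerically to obtain z_ZAMS for a given delay time, using the cosmological parameters quoted above. It is an illustrative implementation, not necessarily the one used for the results presented here.

    import numpy as np
    from scipy.integrate import quad
    from scipy.optimize import brentq

    H0 = 70.0 * 1.0e3 / 3.0857e22   # 70 km s^-1 Mpc^-1 in s^-1
    OMEGA_M, OMEGA_L = 0.3, 0.7
    GYR = 3.156e16                  # seconds per Gyr

    def E(z):
        return np.sqrt(OMEGA_M * (1.0 + z)**3 + OMEGA_L)

    def lookback_time_gyr(z):
        """Lookback time to redshift z (equation above)."""
        integral = quad(lambda zp: 1.0 / ((1.0 + zp) * E(zp)), 0.0, z)[0]
        return integral / H0 / GYR

    def z_zams(t_delay_gyr):
        """Formation redshift of a progenitor that merges at z = 0 after the given delay time."""
        return brentq(lambda z: lookback_time_gyr(z) - t_delay_gyr, 1e-6, 20.0)

    # Example from the text: a delay time of 10.5 Gyr corresponds to z_ZAMS ~ 2
    print(z_zams(10.5))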
We note that our merger rate density should only be considered an order-of-magnitude estimate at best. This imprecision is due to several uncertainties in stellar physics and, notably, the limited density of our metallicity grid. We performed simulations at only two metallicities to determine the merger rate density. However, the formation efficiency and delay times of GW sources originating from CHE systems are expected to be sensitively dependent on metallicity.
In particular, we overestimate the delay times for GW sources formed at 0.001<Z≤0.005, which in turn leads to an overestimation of the merger rate density at z = 0. This is because we represent all systems formed in this metallicity range with our models at Z = 0.005, at which the stellar winds are stronger and therefore lead to wider BH-BH binaries. The longer time delays imply that GW sources merging at z = 0 are predicted to have formed at a larger redshift, at which the star formation rate is higher. In particular, <cit.> predicts that the cosmic star formation rate monotonically increases up to z∼2. This could also explain why our merger rate is a factor of two higher than predicted by <cit.>. Similarly, we underestimate the delay times for GW sources formed at 0.0005<Z≤0.001, and therefore we might underestimate the merger rate densities for such systems. In particular, this could mean that the merger rate density of the TMT with a MS-MS accretor channel (discussed in section <ref>) could be significantly higher than predicted (shown in Table <ref>).